Dataset columns (original type annotations preserved):

| column | type |
|---|---|
| instance_id | stringclasses 7 |
| text | stringlengths 11.4k–828k |
| repo | stringclasses 3 |
| base_commit | stringclasses 7 |
| problem_statement | stringclasses 6 |
| hints_text | stringclasses 5 |
| created_at | stringclasses 7 |
| patch | stringclasses 7 |
| test_patch | stringclasses 7 |
| version | stringclasses 1 |
| FAIL_TO_PASS | stringclasses 1 |
| PASS_TO_PASS | stringclasses 1 |
| environment_setup_commit | stringclasses 1 |
celery__celery-2840
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Message being acknowledged on WorkerLostError when CELERY_ACKS_LATE=True
When using celery v3.0.24, with `CELERY_ACKS_LATE = True` , if the OOM killer kills the celery worker, then the worker acknowledges the message.
As per [this](https://github.com/celery/celery/commit/e810420c) commit, the `exc_info.internal` comes in as `false`, which means it is not an internal error, due to which the message is acknowledged.
The desirable behaviour in such a case would be to not acknowledge the message (and to be able to know whether it's an OOM error), so that some other worker can pick it up.
As a workaround, I've commented out the [code](https://github.com/siddharth96/celery/commit/427695d1b23034dadda85fd7a48f7367831be4fa) where celery acknowledges the message, because in such a case the message will be lost.
</issue>
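To make the reported ack behaviour concrete, here is a stdlib-only sketch contrasting what the reporter observes with what they ask for (this is illustrative code, not Celery source; the function and return values are invented for clarity):

```python
def current_fate(acks_late, worker_lost):
    """Message fate under the behaviour reported above (Celery 3.0.24)."""
    if not acks_late:
        return 'acked-before-run'   # default: ack just before execution
    # With acks_late the failure handler still acknowledges -- even when
    # the child process was lost (e.g. OOM-killed) -- so no redelivery.
    return 'acked'


def desired_fate(acks_late, worker_lost):
    """Fate the reporter asks for: leave the message for another worker."""
    if acks_late and worker_lost:
        return 'requeued'
    return current_fate(acks_late, worker_lost)
```

The gap between the two functions is exactly what the issue is about: on `WorkerLostError` with `CELERY_ACKS_LATE=True`, the message is acked instead of requeued.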
<code>
[start of README.rst]
1 =================================
2 celery - Distributed Task Queue
3 =================================
4
5 .. image:: http://cloud.github.com/downloads/celery/celery/celery_128.png
6
7 |build-status| |coverage-status|
8
9 :Version: 3.2.0a1 (Cipater)
10 :Web: http://celeryproject.org/
11 :Download: http://pypi.python.org/pypi/celery/
12 :Source: http://github.com/celery/celery/
13 :Keywords: task queue, job queue, asynchronous, async, rabbitmq, amqp, redis,
14 python, webhooks, queue, distributed
15
16 --
17
18 What is a Task Queue?
19 =====================
20
21 Task queues are used as a mechanism to distribute work across threads or
22 machines.
23
24 A task queue's input is a unit of work called a task. Dedicated worker
25 processes then constantly monitor the queue for new work to perform.
26
27 Celery communicates via messages, usually using a broker
28 to mediate between clients and workers. To initiate a task a client puts a
29 message on the queue, the broker then delivers the message to a worker.
30
31 A Celery system can consist of multiple workers and brokers, giving way
32 to high availability and horizontal scaling.
33
34 Celery is a library written in Python, but the protocol can be implemented in
35 any language. So far there's RCelery_ for the Ruby programming language, and a
36 `PHP client`_, but language interoperability can also be achieved
37 by `using webhooks`_.
38
39 .. _RCelery: https://github.com/leapfrogonline/rcelery
40 .. _`PHP client`: https://github.com/gjedeer/celery-php
41 .. _`using webhooks`:
42 http://docs.celeryproject.org/en/latest/userguide/remote-tasks.html
43
44 What do I need?
45 ===============
46
47 Celery version 3.0 runs on,
48
49 - Python (2.6, 2.7, 3.3, 3.4)
50 - PyPy (1.8, 1.9)
51 - Jython (2.5, 2.7).
52
53 This is the last version to support Python 2.5,
54 and from Celery 3.1, Python 2.6 or later is required.
55 The last version to support Python 2.4 was Celery series 2.2.
56
57 *Celery* is usually used with a message broker to send and receive messages.
58 The RabbitMQ and Redis transports are feature complete,
59 but there's also experimental support for a myriad of other solutions, including
60 using SQLite for local development.
61
62 *Celery* can run on a single machine, on multiple machines, or even
63 across datacenters.
64
65 Get Started
66 ===========
67
68 If this is the first time you're trying to use Celery, or you are
69 new to Celery 3.0 coming from previous versions then you should read our
70 getting started tutorials:
71
72 - `First steps with Celery`_
73
74 Tutorial teaching you the bare minimum needed to get started with Celery.
75
76 - `Next steps`_
77
78 A more complete overview, showing more features.
79
80 .. _`First steps with Celery`:
81 http://docs.celeryproject.org/en/latest/getting-started/first-steps-with-celery.html
82
83 .. _`Next steps`:
84 http://docs.celeryproject.org/en/latest/getting-started/next-steps.html
85
86 Celery is...
87 ==========
88
89 - **Simple**
90
91 Celery is easy to use and maintain, and does *not need configuration files*.
92
93 It has an active, friendly community you can talk to for support,
94 including a `mailing-list`_ and an IRC channel.
95
96 Here's one of the simplest applications you can make::
97
98 from celery import Celery
99
100 app = Celery('hello', broker='amqp://guest@localhost//')
101
102 @app.task
103 def hello():
104 return 'hello world'
105
106 - **Highly Available**
107
108 Workers and clients will automatically retry in the event
109 of connection loss or failure, and some brokers support
110 HA by way of *Master/Master* or *Master/Slave* replication.
111
112 - **Fast**
113
114 A single Celery process can process millions of tasks a minute,
115 with sub-millisecond round-trip latency (using RabbitMQ,
116 py-librabbitmq, and optimized settings).
117
118 - **Flexible**
119
120 Almost every part of *Celery* can be extended or used on its own:
121 custom pool implementations, serializers, compression schemes, logging,
122 schedulers, consumers, producers, autoscalers, broker transports and much more.
123
124 It supports...
125 ==============
126
127 - **Message Transports**
128
129 - RabbitMQ_, Redis_,
130 - MongoDB_ (experimental), Amazon SQS (experimental),
131 - CouchDB_ (experimental), SQLAlchemy_ (experimental),
132 - Django ORM (experimental), `IronMQ`_
133 - and more...
134
135 - **Concurrency**
136
137 - Prefork, Eventlet_, gevent_, threads/single threaded
138
139 - **Result Stores**
140
141 - AMQP, Redis
142 - memcached, MongoDB
143 - SQLAlchemy, Django ORM
144 - Apache Cassandra, IronCache
145
146 - **Serialization**
147
148 - *pickle*, *json*, *yaml*, *msgpack*.
149 - *zlib*, *bzip2* compression.
150 - Cryptographic message signing.
151
152 .. _`Eventlet`: http://eventlet.net/
153 .. _`gevent`: http://gevent.org/
154
155 .. _RabbitMQ: http://rabbitmq.com
156 .. _Redis: http://redis.io
157 .. _MongoDB: http://mongodb.org
158 .. _Beanstalk: http://kr.github.com/beanstalkd
159 .. _CouchDB: http://couchdb.apache.org
160 .. _SQLAlchemy: http://sqlalchemy.org
161 .. _`IronMQ`: http://iron.io
162
163 Framework Integration
164 =====================
165
166 Celery is easy to integrate with web frameworks, some of which even have
167 integration packages:
168
169 +--------------------+----------------------------------------------------+
170 | `Django`_ | not needed |
171 +--------------------+----------------------------------------------------+
172 | `Pyramid`_ | `pyramid_celery`_ |
173 +--------------------+----------------------------------------------------+
174 | `Pylons`_ | `celery-pylons`_ |
175 +--------------------+----------------------------------------------------+
176 | `Flask`_ | not needed |
177 +--------------------+----------------------------------------------------+
178 | `web2py`_ | `web2py-celery`_ |
179 +--------------------+----------------------------------------------------+
180 | `Tornado`_ | `tornado-celery`_ | `another tornado-celery`_ |
181 +--------------------+----------------------------------------------------+
182
183 The integration packages are not strictly necessary, but they can make
184 development easier, and sometimes they add important hooks like closing
185 database connections at ``fork``.
186
187 .. _`Django`: http://djangoproject.com/
188 .. _`Pylons`: http://www.pylonsproject.org/
189 .. _`Flask`: http://flask.pocoo.org/
190 .. _`web2py`: http://web2py.com/
191 .. _`Bottle`: http://bottlepy.org/
192 .. _`Pyramid`: http://docs.pylonsproject.org/en/latest/docs/pyramid.html
193 .. _`pyramid_celery`: http://pypi.python.org/pypi/pyramid_celery/
194 .. _`django-celery`: http://pypi.python.org/pypi/django-celery
195 .. _`celery-pylons`: http://pypi.python.org/pypi/celery-pylons
196 .. _`web2py-celery`: http://code.google.com/p/web2py-celery/
197 .. _`Tornado`: http://www.tornadoweb.org/
198 .. _`tornado-celery`: http://github.com/mher/tornado-celery/
199 .. _`another tornado-celery`: https://github.com/mayflaver/tornado-celery
200
201 .. _celery-documentation:
202
203 Documentation
204 =============
205
206 The `latest documentation`_ with user guides, tutorials and API reference
207 is hosted at Read The Docs.
208
209 .. _`latest documentation`: http://docs.celeryproject.org/en/latest/
210
211 .. _celery-installation:
212
213 Installation
214 ============
215
216 You can install Celery either via the Python Package Index (PyPI)
217 or from source.
218
219 To install using `pip`,::
220
221 $ pip install -U Celery
222
223 To install using `easy_install`,::
224
225 $ easy_install -U Celery
226
227 .. _bundles:
228
229 Bundles
230 -------
231
232 Celery also defines a group of bundles that can be used
233 to install Celery and the dependencies for a given feature.
234
235 You can specify these in your requirements or on the ``pip`` command-line
236 by using brackets. Multiple bundles can be specified by separating them by
237 commas.
238 ::
239
240 $ pip install "celery[librabbitmq]"
241
242 $ pip install "celery[librabbitmq,redis,auth,msgpack]"
243
244 The following bundles are available:
245
246 Serializers
247 ~~~~~~~~~~~
248
249 :celery[auth]:
250 for using the auth serializer.
251
252 :celery[msgpack]:
253 for using the msgpack serializer.
254
255 :celery[yaml]:
256 for using the yaml serializer.
257
258 Concurrency
259 ~~~~~~~~~~~
260
261 :celery[eventlet]:
262 for using the eventlet pool.
263
264 :celery[gevent]:
265 for using the gevent pool.
266
267 :celery[threads]:
268 for using the thread pool.
269
270 Transports and Backends
271 ~~~~~~~~~~~~~~~~~~~~~~~
272
273 :celery[librabbitmq]:
274 for using the librabbitmq C library.
275
276 :celery[redis]:
277 for using Redis as a message transport or as a result backend.
278
279 :celery[mongodb]:
280 for using MongoDB as a message transport (*experimental*),
281 or as a result backend (*supported*).
282
283 :celery[sqs]:
284 for using Amazon SQS as a message transport (*experimental*).
285
286 :celery[memcache]:
287 for using memcached as a result backend.
288
289 :celery[cassandra]:
290 for using Apache Cassandra as a result backend.
291
292 :celery[couchdb]:
293 for using CouchDB as a message transport (*experimental*).
294
295 :celery[couchbase]:
296 for using CouchBase as a result backend.
297
298 :celery[beanstalk]:
299 for using Beanstalk as a message transport (*experimental*).
300
301 :celery[zookeeper]:
302 for using Zookeeper as a message transport.
303
304 :celery[zeromq]:
305 for using ZeroMQ as a message transport (*experimental*).
306
307 :celery[sqlalchemy]:
308 for using SQLAlchemy as a message transport (*experimental*),
309 or as a result backend (*supported*).
310
311 :celery[pyro]:
312 for using the Pyro4 message transport (*experimental*).
313
314 :celery[slmq]:
315 for using the SoftLayer Message Queue transport (*experimental*).
316
317 .. _celery-installing-from-source:
318
319 Downloading and installing from source
320 --------------------------------------
321
322 Download the latest version of Celery from
323 http://pypi.python.org/pypi/celery/
324
325 You can install it by doing the following,::
326
327 $ tar xvfz celery-0.0.0.tar.gz
328 $ cd celery-0.0.0
329 $ python setup.py build
330 # python setup.py install
331
332 The last command must be executed as a privileged user if
333 you are not currently using a virtualenv.
334
335 .. _celery-installing-from-git:
336
337 Using the development version
338 -----------------------------
339
340 With pip
341 ~~~~~~~~
342
343 The Celery development version also requires the development
344 versions of ``kombu``, ``amqp`` and ``billiard``.
345
346 You can install the latest snapshot of these using the following
347 pip commands::
348
349 $ pip install https://github.com/celery/celery/zipball/master#egg=celery
350 $ pip install https://github.com/celery/billiard/zipball/master#egg=billiard
351 $ pip install https://github.com/celery/py-amqp/zipball/master#egg=amqp
352 $ pip install https://github.com/celery/kombu/zipball/master#egg=kombu
353
354 With git
355 ~~~~~~~~
356
357 Please see the Contributing section.
358
359 .. _getting-help:
360
361 Getting Help
362 ============
363
364 .. _mailing-list:
365
366 Mailing list
367 ------------
368
369 For discussions about the usage, development, and future of celery,
370 please join the `celery-users`_ mailing list.
371
372 .. _`celery-users`: http://groups.google.com/group/celery-users/
373
374 .. _irc-channel:
375
376 IRC
377 ---
378
379 Come chat with us on IRC. The **#celery** channel is located at the `Freenode`_
380 network.
381
382 .. _`Freenode`: http://freenode.net
383
384 .. _bug-tracker:
385
386 Bug tracker
387 ===========
388
389 If you have any suggestions, bug reports or annoyances please report them
390 to our issue tracker at http://github.com/celery/celery/issues/
391
392 .. _wiki:
393
394 Wiki
395 ====
396
397 http://wiki.github.com/celery/celery/
398
399
400 .. _maintainers:
401
402 Maintainers
403 ===========
404
405 - `@ask`_ (primary maintainer)
406 - `@thedrow`_
407 - `@chrisgogreen`_
408 - `@PMickael`_
409 - `@malinoff`_
410 - And you? We really need more: https://github.com/celery/celery/issues/2534
411
412 .. _`@ask`: http://github.com/ask
413 .. _`@thedrow`: http://github.com/thedrow
414 .. _`@chrisgogreen`: http://github.com/chrisgogreen
415 .. _`@PMickael`: http://github.com/PMickael
416 .. _`@malinoff`: http://github.com/malinoff
417
418
419 .. _contributing-short:
420
421 Contributing
422 ============
423
424 Development of `celery` happens at Github: http://github.com/celery/celery
425
426 You are highly encouraged to participate in the development
427 of `celery`. If you don't like Github (for some reason) you're welcome
428 to send regular patches.
429
430 Be sure to also read the `Contributing to Celery`_ section in the
431 documentation.
432
433 .. _`Contributing to Celery`:
434 http://docs.celeryproject.org/en/master/contributing.html
435
436 .. _license:
437
438 License
439 =======
440
441 This software is licensed under the `New BSD License`. See the ``LICENSE``
442 file in the top distribution directory for the full license text.
443
444 .. # vim: syntax=rst expandtab tabstop=4 shiftwidth=4 shiftround
445
446
447 .. image:: https://d2weczhvl823v0.cloudfront.net/celery/celery/trend.png
448 :alt: Bitdeli badge
449 :target: https://bitdeli.com/free
450
451 .. |build-status| image:: https://travis-ci.org/celery/celery.svg?branch=master
452 :target: https://travis-ci.org/celery/celery
453 .. |coverage-status| image:: https://coveralls.io/repos/celery/celery/badge.svg
454 :target: https://coveralls.io/r/celery/celery
455
[end of README.rst]
[start of celery/app/defaults.py]
...
118 'EAGER_PROPAGATES_EXCEPTIONS': Option(False, type='bool'),
119 'ENABLE_UTC': Option(True, type='bool'),
120 'ENABLE_REMOTE_CONTROL': Option(True, type='bool'),
121 'EVENT_SERIALIZER': Option('json'),
122 'EVENT_QUEUE_EXPIRES': Option(60.0, type='float'),
123 'EVENT_QUEUE_TTL': Option(5.0, type='float'),
124 'IMPORTS': Option((), type='tuple'),
125 'INCLUDE': Option((), type='tuple'),
126 'IGNORE_RESULT': Option(False, type='bool'),
127 'MAX_CACHED_RESULTS': Option(100, type='int'),
128 'MESSAGE_COMPRESSION': Option(type='string'),
129 'MONGODB_BACKEND_SETTINGS': Option(type='dict'),
130 'REDIS_HOST': Option(type='string', **_REDIS_OLD),
131 'REDIS_PORT': Option(type='int', **_REDIS_OLD),
132 'REDIS_DB': Option(type='int', **_REDIS_OLD),
133 'REDIS_PASSWORD': Option(type='string', **_REDIS_OLD),
134 'REDIS_MAX_CONNECTIONS': Option(type='int'),
135 'RESULT_BACKEND': Option(type='string'),
136 'RESULT_DB_SHORT_LIVED_SESSIONS': Option(False, type='bool'),
137 'RESULT_DB_TABLENAMES': Option(type='dict'),
138 'RESULT_DBURI': Option(),
...
[end of celery/app/defaults.py]
[start of celery/app/task.py]
...
206 #:
207 #: The application default can be overridden using the
208 #: :setting:`CELERY_TRACK_STARTED` setting.
209 track_started = None
210
211 #: When enabled messages for this task will be acknowledged **after**
212 #: the task has been executed, and not *just before* which is the
213 #: default behavior.
214 #:
215 #: Please note that this means the task may be executed twice if the
216 #: worker crashes mid execution (which may be acceptable for some
217 #: applications).
218 #:
219 #: The application default can be overridden with the
220 #: :setting:`CELERY_ACKS_LATE` setting.
221 acks_late = None
222
223 #: Tuple of expected exceptions.
224 #:
225 #: These are errors that are expected in normal operation
226 #: and that should not be regarded as a real error by the worker.
...
...
234 #: Task request stack, the current request will be the topmost.
235 request_stack = None
236
237 #: Some may expect a request to exist even if the task has not been
238 #: called. This should probably be deprecated.
239 _default_request = None
240
241 _exec_options = None
242
243 __bound__ = False
244
245 from_config = (
246 ('send_error_emails', 'CELERY_SEND_TASK_ERROR_EMAILS'),
247 ('serializer', 'CELERY_TASK_SERIALIZER'),
248 ('rate_limit', 'CELERY_DEFAULT_RATE_LIMIT'),
249 ('track_started', 'CELERY_TRACK_STARTED'),
250 ('acks_late', 'CELERY_ACKS_LATE'),
251 ('ignore_result', 'CELERY_IGNORE_RESULT'),
252 ('store_errors_even_if_ignored',
253 'CELERY_STORE_ERRORS_EVEN_IF_IGNORED'),
254 )
...
[end of celery/app/task.py]
[start of celery/worker/request.py]
...
312 if self.task.acks_late:
313 self.acknowledge()
314
315 self.send_event('task-succeeded', result=retval, runtime=runtime)
316
317 def on_retry(self, exc_info):
318 """Handler called if the task should be retried."""
319 if self.task.acks_late:
320 self.acknowledge()
321
322 self.send_event('task-retried',
323 exception=safe_repr(exc_info.exception.exc),
324 traceback=safe_str(exc_info.traceback))
325
326 def on_failure(self, exc_info, send_failed_event=True, return_ok=False):
327 """Handler called if the task raised an exception."""
328 task_ready(self)
329
330 if isinstance(exc_info.exception, MemoryError):
331 raise MemoryError('Process got: %s' % (exc_info.exception,))
332 elif isinstance(exc_info.exception, Reject):
333 return self.reject(requeue=exc_info.exception.requeue)
...
...
338
339 if isinstance(exc, Retry):
340 return self.on_retry(exc_info)
341
342 # These are special cases where the process would not have had
343 # time to write the result.
344 if self.store_errors:
345 if isinstance(exc, Terminated):
346 self._announce_revoked(
347 'terminated', True, string(exc), False)
348 send_failed_event = False # already sent revoked event
349 elif isinstance(exc, WorkerLostError) or not return_ok:
350 self.task.backend.mark_as_failure(
351 self.id, exc, request=self,
352 )
353 # (acks_late) acknowledge after result stored.
354 if self.task.acks_late:
355 self.acknowledge()
356
357 if send_failed_event:
358 self.send_event(
359 'task-failed',
...
[end of celery/worker/request.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
celery/celery
|
045b52f1450d6d5cc500e0057a4b498250dc5692
|
Message being acknowledged on WorkerLostError when CELERY_ACKS_LATE=True
When using celery v3.0.24, with `CELERY_ACKS_LATE = True` , if the OOM killer kills the celery worker, then the worker acknowledges the message.
As per [this](https://github.com/celery/celery/commit/e810420c) commit, the `exc_info.internal` comes in as `false`, which means it is not an internal error, due to which the message is acknowledged.
The desirable behaviour in such a case would be to not acknowledge the message (and to be able to know whether it's an OOM error), so that some other worker can pick it up.
As a workaround, I've commented out the [code](https://github.com/siddharth96/celery/commit/427695d1b23034dadda85fd7a48f7367831be4fa) where celery acknowledges the message, because in such a case the message will be lost.
|
This is deliberate: if a task is killed, the next invocation may well be killed the same way, and if the task is redelivered it may cause a loop where the same conditions occur again and again. Also, sadly, you cannot distinguish processes killed by OOM from processes killed by other means, and if an administrator kills -9 a task going amok, you usually don't want that task to be called again.
There could be a configuration option for not acking terminated tasks, but I'm not sure how useful that would be.
A better solution could be to use `basic_reject(requeue=False)` instead of `basic_ack`; that way you can configure
a dead letter queue so that the killed tasks will be sent to a queue for manual inspection.
I must say, regardless of the status of this feature request, the documentation is misleading. Specifically, [this FAQ makes it seem that process failures would NOT acknowledge messages](http://celery.readthedocs.org/en/latest/faq.html#faq-acks-late-vs-retry). And [this FAQ boldface states](http://celery.readthedocs.org/en/latest/faq.html#id54) that in the event of a kill signal (9), that acks_late will allow the task to re-run (which again, is patently wrong based on this poorly documented behavior). Nowhere in the docs have I found that if the process _dies_, the message will be acknowledged, regardless of acks_late or not. (for instance, I have a set of 10k+ tasks, and some 1% of tasks wind up acknowledged but incomplete when a WorkerLostError is thrown in connection with the worker, although there are no other errors of any kind in any of my logs related to that task).
TL;DR at the least, appropriately document the current state when describing the functionality and limitations of acks_late. A work-around would be helpful -- I'm not sure I understand the solution of using `basic_reject`, although I'll keep looking into it.
The docs are referring to killing the worker process with KILL, not the child processes. The term worker will always refer to the worker instance, not the pool processes. The section within about acks_late is probably not very helpful and should be removed
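The `basic_reject(requeue=False)` plus dead-letter-queue idea suggested above can be made concrete at the broker level: declaring the task queue with RabbitMQ's dead-letter arguments routes rejected messages to a separate queue for manual inspection. A minimal sketch (the exchange and routing-key names are invented for illustration):

```python
# RabbitMQ queue arguments enabling dead-lettering (broker-level
# configuration, passed when the task queue is declared). The argument
# keys are standard RabbitMQ extensions; the values are placeholders.
DEAD_LETTER_ARGS = {
    'x-dead-letter-exchange': 'dlx',            # rejected messages go here
    'x-dead-letter-routing-key': 'tasks.dead',  # under this routing key
}
```

A queue declared with these arguments will receive any message rejected with `requeue=False`, so killed tasks end up in an inspection queue instead of being silently dropped.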
|
2015-10-06T05:34:34Z
|
<patch>
diff --git a/celery/app/defaults.py b/celery/app/defaults.py
--- a/celery/app/defaults.py
+++ b/celery/app/defaults.py
@@ -132,6 +132,7 @@ def __repr__(self):
'REDIS_DB': Option(type='int', **_REDIS_OLD),
'REDIS_PASSWORD': Option(type='string', **_REDIS_OLD),
'REDIS_MAX_CONNECTIONS': Option(type='int'),
+ 'REJECT_ON_WORKER_LOST': Option(type='bool'),
'RESULT_BACKEND': Option(type='string'),
'RESULT_DB_SHORT_LIVED_SESSIONS': Option(False, type='bool'),
'RESULT_DB_TABLENAMES': Option(type='dict'),
diff --git a/celery/app/task.py b/celery/app/task.py
--- a/celery/app/task.py
+++ b/celery/app/task.py
@@ -220,6 +220,12 @@ class Task(object):
#: :setting:`CELERY_ACKS_LATE` setting.
acks_late = None
+ #: When CELERY_ACKS_LATE is set to True, the default behavior to
+ #: handle worker crash is to acknowledge the message. Setting
+ #: this to true allows the message to be rejected and requeued so
+ #: it will be executed again by another worker.
+ reject_on_worker_lost = None
+
#: Tuple of expected exceptions.
#:
#: These are errors that are expected in normal operation
@@ -248,6 +254,7 @@ class Task(object):
('rate_limit', 'CELERY_DEFAULT_RATE_LIMIT'),
('track_started', 'CELERY_TRACK_STARTED'),
('acks_late', 'CELERY_ACKS_LATE'),
+ ('reject_on_worker_lost', 'CELERY_REJECT_ON_WORKER_LOST'),
('ignore_result', 'CELERY_IGNORE_RESULT'),
('store_errors_even_if_ignored',
'CELERY_STORE_ERRORS_EVEN_IF_IGNORED'),
diff --git a/celery/worker/request.py b/celery/worker/request.py
--- a/celery/worker/request.py
+++ b/celery/worker/request.py
@@ -326,7 +326,6 @@ def on_retry(self, exc_info):
def on_failure(self, exc_info, send_failed_event=True, return_ok=False):
"""Handler called if the task raised an exception."""
task_ready(self)
-
if isinstance(exc_info.exception, MemoryError):
raise MemoryError('Process got: %s' % (exc_info.exception,))
elif isinstance(exc_info.exception, Reject):
@@ -352,7 +351,13 @@ def on_failure(self, exc_info, send_failed_event=True, return_ok=False):
)
# (acks_late) acknowledge after result stored.
if self.task.acks_late:
- self.acknowledge()
+ reject_and_requeue = (self.task.reject_on_worker_lost and
+ isinstance(exc, WorkerLostError) and
+ self.delivery_info.get('redelivered', False) is False)
+ if reject_and_requeue:
+ self.reject(requeue=True)
+ else:
+ self.acknowledge()
if send_failed_event:
self.send_event(
</patch>
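The behavioural change the patch introduces can be restated as a plain-Python decision function (illustrative only, not the repository's code; note the `redelivered` guard, which prevents an endless requeue loop):

```python
def acks_late_action(reject_on_worker_lost, is_worker_lost, redelivered):
    """What the patched acks_late failure path does with the message.

    Requeue only on a genuine worker loss, only when the feature is
    enabled, and only if the message has not already been redelivered
    once -- otherwise fall back to acknowledging as before.
    """
    if reject_on_worker_lost and is_worker_lost and not redelivered:
        return 'reject+requeue'   # another worker can pick the task up
    return 'ack'
```

So with `CELERY_REJECT_ON_WORKER_LOST` enabled, an OOM-killed task gets exactly one second chance; a redelivered message is acked to avoid looping.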
|
diff --git a/celery/tests/worker/test_request.py b/celery/tests/worker/test_request.py
--- a/celery/tests/worker/test_request.py
+++ b/celery/tests/worker/test_request.py
@@ -325,6 +325,20 @@ def test_on_failure_Reject_rejects_with_requeue(self):
req_logger, req.connection_errors, True,
)
+ def test_on_failure_WrokerLostError_rejects_with_requeue(self):
+ einfo = None
+ try:
+ raise WorkerLostError()
+ except:
+ einfo = ExceptionInfo(internal=True)
+ req = self.get_request(self.add.s(2, 2))
+ req.task.acks_late = True
+ req.task.reject_on_worker_lost = True
+ req.delivery_info['redelivered'] = False
+ req.on_failure(einfo)
+ req.on_reject.assert_called_with(req_logger,
+ req.connection_errors, True)
+
def test_tzlocal_is_cached(self):
req = self.get_request(self.add.s(2, 2))
req._tzlocal = 'foo'
|
1.0
| |||
NVIDIA__NeMo-473
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Jasper Encoder Export failed
The export of Jasper Encoder is failing. I am using the core API [deployment_export](https://nvidia.github.io/NeMo/api-docs/nemo.html#nemo.core.neural_factory.NeuralModuleFactory.deployment_export) like in the script: https://github.com/NVIDIA/NeMo/blob/403238f82d26879ba5fca53fbf75b3cdc70fb49b/scripts/export_jasper_to_onnx.py#L92
I believe the issue (as shown below) is that the `input_example` provided does not match the `output_example`.
```
/opt/conda/lib/python3.6/site-packages/torch/jit/__init__.py:1023: TracerWarning: Output nr 1. of the traced function does not match the corresponding output of the Python function. Detailed error:
Not within tolerance rtol=1e-05 atol=1e-05 at input[0, 870, 67] (0.6547648906707764 vs. 0.6546438932418823) and 812 other locations (0.00%)
check_tolerance, _force_outplace, True, _module_class)
[NeMo E 2020-02-23 19:10:07 actions:1023] module export failed for JasperEncoder with exception number of output names provided (2) exceeded number of outputs (1)
```
**What is the correct `input_example` and `output_example` to export JasperEncoder?**
The full output can be seen here:
```
adrianaf@2a520c7abb1e:/tmp/NeMo$ ! python /tmp/NeMo/scripts/export_jasper_to_onnx.py --config /raid/datasets/asr/data/config_files/WSJ-test_acoustic_quartznet15x5.yaml --nn_encoder /home/adrianaf/projects/nemo_asr_app/models/quartznet15x5/JasperEncoder-STEP-247400.pt --nn_decoder /home/adrianaf/projects/nemo_asr_app/models/quartznet15x5/JasperDecoderForCTC-STEP-247400.pt --onnx_encoder /raid/datasets/asr/data/models/ONNX/pre-trained_encoder.onnx --onnx_decoder /raid/datasets/asr/data/models/ONNX/pre-trained_decoder.onnx
/opt/conda/lib/python3.6/site-packages/torchvision/io/_video_opt.py:17: UserWarning: video reader based on ffmpeg c++ ops not available
warnings.warn("video reader based on ffmpeg c++ ops not available")
/tmp/NeMo/nemo/collections/asr/audio_preprocessing.py:48: UserWarning: Could not import torchaudio. Some features might not work.
warnings.warn('Could not import torchaudio. Some features might not work.')
[NeMo I 2020-02-23 19:09:42 export_jasper_to_onnx:48] Loading config file...
[NeMo I 2020-02-23 19:09:42 export_jasper_to_onnx:52] Determining model shape...
[NeMo I 2020-02-23 19:09:42 export_jasper_to_onnx:60] Num encoder input features: 64
[NeMo I 2020-02-23 19:09:42 export_jasper_to_onnx:61] Num decoder input features: 1024
[NeMo W 2020-02-23 19:09:42 deprecated:68] Function ``_get_trainer`` is deprecated. It is going to be removed in the future version.
[NeMo I 2020-02-23 19:09:42 export_jasper_to_onnx:65] Initializing models...
[NeMo I 2020-02-23 19:09:45 export_jasper_to_onnx:76] Loading checkpoints...
[NeMo I 2020-02-23 19:09:45 export_jasper_to_onnx:91] Exporting encoder...
[NeMo W 2020-02-23 19:09:45 neural_factory:627] Turned off 170 masked convolutions
[NeMo I 2020-02-23 19:09:45 actions:937] Module is JasperEncoder. We are removing input and output length ports since they are not needed for deployment
[NeMo W 2020-02-23 19:09:46 deprecated:68] Function ``local_parameters`` is deprecated. It is going to be removed in the 0.11 version.
/opt/conda/lib/python3.6/site-packages/torch/jit/__init__.py:1023: TracerWarning: Output nr 1. of the traced function does not match the corresponding output of the Python function. Detailed error:
Not within tolerance rtol=1e-05 atol=1e-05 at input[0, 870, 67] (0.6547648906707764 vs. 0.6546438932418823) and 812 other locations (0.00%)
check_tolerance, _force_outplace, True, _module_class)
[NeMo E 2020-02-23 19:10:07 actions:1023] module export failed for JasperEncoder with exception number of output names provided (2) exceeded number of outputs (1)
[NeMo I 2020-02-23 19:10:07 export_jasper_to_onnx:98] Exporting decoder...
graph(%encoder_output : Float(1, 1024, 128),
%1 : Float(29, 1024, 1),
%2 : Float(29)):
%3 : Float(1, 29, 128) = onnx::Conv[dilations=[1], group=1, kernel_shape=[1], pads=[0, 0], strides=[1]](%encoder_output, %1, %2), scope: JasperDecoderForCTC/Sequential[decoder_layers]/Conv1d[0] # /opt/conda/lib/python3.6/site-packages/torch/nn/modules/conv.py:202:0
%4 : Float(1, 128, 29) = onnx::Transpose[perm=[0, 2, 1]](%3), scope: JasperDecoderForCTC # /tmp/NeMo/nemo/collections/asr/jasper.py:235:0
%output : Float(1, 128, 29) = onnx::LogSoftmax[axis=2](%4), scope: JasperDecoderForCTC # /opt/conda/lib/python3.6/site-packages/torch/nn/functional.py:1317:0
return (%output)
/opt/conda/lib/python3.6/site-packages/torch/onnx/utils.py:774: UserWarning: No names were found for specified dynamic axes of provided input.Automatically generated names will be applied to each dynamic axes of input encoder_output
'Automatically generated names will be applied to each dynamic axes of input {}'.format(key))
/opt/conda/lib/python3.6/site-packages/torch/onnx/utils.py:774: UserWarning: No names were found for specified dynamic axes of provided input.Automatically generated names will be applied to each dynamic axes of input output
'Automatically generated names will be applied to each dynamic axes of input {}'.format(key))
[NeMo I 2020-02-23 19:10:07 export_jasper_to_onnx:105] Export completed successfully.
```
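The `No names were found for specified dynamic axes` warnings in the log above come from the exporter recording which axis *indices* are dynamic without attaching names to them. A standalone sketch of that bookkeeping, mirroring the `__extract_dynamic_axes` helper in `nemo/backends/pytorch/actions.py` (the `AxisKind` enum below is a simplified stand-in, not NeMo's real class):

```python
from collections import defaultdict
from enum import Enum, auto


class AxisKind(Enum):  # stand-in for nemo.core.neural_types AxisKind
    Batch = auto()
    Dimension = auto()
    Time = auto()


def extract_dynamic_axes(port_name, axis_kinds, dynamic_axes):
    # Batch and Time axes vary between inference calls, so record their
    # indices; this dict becomes the dynamic_axes argument to the exporter.
    for ind, kind in enumerate(axis_kinds):
        if kind in (AxisKind.Batch, AxisKind.Time):
            dynamic_axes[port_name].append(ind)


dyn = defaultdict(list)
extract_dynamic_axes(
    "audio_signal", [AxisKind.Batch, AxisKind.Dimension, AxisKind.Time], dyn
)
print(dict(dyn))  # {'audio_signal': [0, 2]}
```

The resulting mapping is what gets passed as `dynamic_axes` to `torch.onnx.export`, which is why batch and time dimensions stay variable in the exported model while feature dimensions stay fixed.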
</issue>
<code>
[start of README.rst]
1 .. image:: http://www.repostatus.org/badges/latest/active.svg
2 :target: http://www.repostatus.org/#active
3 :alt: Project Status: Active – The project has reached a stable, usable state and is being actively developed.
4
5 .. image:: https://img.shields.io/badge/documentation-github.io-blue.svg
6 :target: https://nvidia.github.io/NeMo/
7 :alt: NeMo documentation on GitHub pages
8
9 .. image:: https://img.shields.io/badge/License-Apache%202.0-brightgreen.svg
10 :target: https://github.com/NVIDIA/NeMo/blob/master/LICENSE
11 :alt: NeMo core license and license for collections in this repo
12
13 .. image:: https://img.shields.io/lgtm/grade/python/g/NVIDIA/NeMo.svg?logo=lgtm&logoWidth=18
14 :target: https://lgtm.com/projects/g/NVIDIA/NeMo/context:python
15 :alt: Language grade: Python
16
17 .. image:: https://img.shields.io/lgtm/alerts/g/NVIDIA/NeMo.svg?logo=lgtm&logoWidth=18
18 :target: https://lgtm.com/projects/g/NVIDIA/NeMo/alerts/
19 :alt: Total alerts
20
21 .. image:: https://img.shields.io/badge/code%20style-black-000000.svg
22 :target: https://github.com/psf/black
23 :alt: Code style: black
24
25
26
27 NVIDIA Neural Modules: NeMo
28 ===========================
29
30 NeMo is a toolkit for defining and building `Conversational AI <https://developer.nvidia.com/conversational-ai#started>`_ applications.
31
32 Goal of the NeMo toolkit is to make it possible for researchers to easily compose complex neural network architectures for conversational AI using reusable components. Built for speed, NeMo can utilize NVIDIA's Tensor Cores and scale out training to multiple GPUs and multiple nodes.
33
34 **Neural Modules** are conceptual blocks of neural networks that take *typed* inputs and produce *typed* outputs. Such modules typically represent data layers, encoders, decoders, language models, loss functions, or methods of combining activations.
35
36 The toolkit comes with extendable collections of pre-built modules for automatic speech recognition (ASR), natural language processing (NLP) and text synthesis (TTS).
37
38 **Introduction**
39
40 * Watch `this video <https://nvidia.github.io/NeMo/>`_ for a quick walk-through.
41
42 * Documentation (latest released version): https://nvidia.github.io/NeMo/
43
44 * Read NVIDIA `Developer Blog for example applications <https://devblogs.nvidia.com/how-to-build-domain-specific-automatic-speech-recognition-models-on-gpus/>`_
45
46 * Read NVIDIA `Developer Blog for Quartznet ASR model <https://devblogs.nvidia.com/develop-smaller-speech-recognition-models-with-nvidias-nemo-framework/>`_
47
48 * Recommended version to install is **0.9.0** via ``pip install nemo-toolkit``
49
50 * Recommended NVIDIA `NGC NeMo Toolkit container <https://ngc.nvidia.com/catalog/containers/nvidia:nemo>`_
51
52 * Pretrained models are available on NVIDIA `NGC Model repository <https://ngc.nvidia.com/catalog/models?orderBy=modifiedDESC&query=nemo&quickFilter=models&filters=>`_
53
54
55 Getting started
56 ~~~~~~~~~~~~~~~
57
58 THE LATEST STABLE VERSION OF NeMo is **0.9.0** (Available via PIP).
59
60 **Requirements**
61
62 1) Python 3.6 or 3.7
63 2) PyTorch 1.4.* with GPU support
64 3) (optional, for best performance) NVIDIA APEX. Install from here: https://github.com/NVIDIA/apex
65
66 **NeMo Docker Container**
67 NVIDIA `NGC NeMo Toolkit container <https://ngc.nvidia.com/catalog/containers/nvidia:nemo>`_ is now available.
68
69 * Pull the docker: ``docker pull nvcr.io/nvidia/nemo:v0.9``
70 * Run: ``docker run --runtime=nvidia -it --rm -v <nemo_github_folder>:/NeMo --shm-size=8g -p 8888:8888 -p 6006:6006 --ulimit memlock=-1 --ulimit stack=67108864 nvcr.io/nvidia/nemo:v0.9``
71
72 If you are using the NVIDIA `NGC PyTorch container <https://ngc.nvidia.com/catalog/containers/nvidia:pytorch>`_ follow these instructions
73
74 * Pull the docker: ``docker pull nvcr.io/nvidia/pytorch:20.01-py3``
75 * Run: ``docker run --runtime=nvidia -it --rm -v <nemo_github_folder>:/NeMo --shm-size=8g -p 8888:8888 -p 6006:6006 --ulimit memlock=-1 --ulimit stack=67108864 nvcr.io/nvidia/pytorch:20.01-py3``
76 * ``apt-get update && apt-get install -y libsndfile1``
77 * ``pip install nemo_toolkit`` NeMo core
78 * ``pip install nemo_asr`` NeMo ASR (Speech Recognition) collection
79 * ``pip install nemo_nlp`` NeMo NLP (Natural Language Processing) collection
80 * ``pip install nemo_tts`` NeMo TTS (Speech Synthesis) collection
81
82 See `examples/start_here` to get started with the simplest example. The folder `examples` contains several examples to get you started with various tasks in NLP and ASR.
83
84 **Tutorials**
85
86 * `Speech recognition <https://nvidia.github.io/NeMo/asr/intro.html>`_
87 * `Natural language processing <https://nvidia.github.io/NeMo/nlp/intro.html>`_
88 * `Speech Synthesis <https://nvidia.github.io/NeMo/tts/intro.html>`_
89
90
91 DEVELOPMENT
92 ~~~~~~~~~~~
93 If you'd like to use master branch and/or develop NeMo you can run "reinstall.sh" script.
94
95 `Documentation (master branch) <http://nemo-master-docs.s3-website.us-east-2.amazonaws.com/>`_.
96
97 **Installing From Github**
98
99 If you prefer to use NeMo's latest development version (from GitHub) follow the steps below:
100
101 1) Clone the repository ``git clone https://github.com/NVIDIA/NeMo.git``
102 2) Go to NeMo folder and re-install the toolkit with collections:
103
104 .. code-block:: bash
105
106 ./reinstall.sh
107
108 **Style tests**
109
110 .. code-block:: bash
111
112 python setup.py style # Checks overall project code style and output issues with diff.
113 python setup.py style --fix # Tries to fix error in-place.
114 python setup.py style --scope=tests # Operates within certain scope (dir of file).
115
116 **Unittests**
117
118 This command runs unittests:
119
120 .. code-block:: bash
121
122 ./reinstall.sh
123 python pytest tests
124
125
126 Citation
127 ~~~~~~~~
128
129 If you are using NeMo please cite the following publication
130
131 .. code-block:: tex
132
133 @misc{nemo2019,
134 title={NeMo: a toolkit for building AI applications using Neural Modules},
135 author={Oleksii Kuchaiev and Jason Li and Huyen Nguyen and Oleksii Hrinchuk and Ryan Leary and Boris Ginsburg and Samuel Kriman and Stanislav Beliaev and Vitaly Lavrukhin and Jack Cook and Patrice Castonguay and Mariya Popova and Jocelyn Huang and Jonathan M. Cohen},
136 year={2019},
137 eprint={1909.09577},
138 archivePrefix={arXiv},
139 primaryClass={cs.LG}
140 }
141
142
[end of README.rst]
[start of nemo/core/neural_modules.py]
...
379 def input_ports(self) -> Optional[Dict[str, NeuralType]]:
380 """Returns definitions of module input ports
381
382 Returns:
383 A (dict) of module's input ports names to NeuralTypes mapping
384 """
385
386 @property
387 @abstractmethod
388 def output_ports(self) -> Optional[Dict[str, NeuralType]]:
389 """Returns definitions of module output ports
390
391 Returns:
392 A (dict) of module's output ports names to NeuralTypes mapping
393 """
394
395 @property
396 def disabled_deployment_input_ports(self) -> Optional[Set[str]]:
397 """Returns names of input ports that will not be included in an export
398
399 Returns:
400 A (set) of module's input port names that are not exportable
...
...
388 def output_ports(self) -> Optional[Dict[str, NeuralType]]:
389 """Returns definitions of module output ports
390
391 Returns:
392 A (dict) of module's output ports names to NeuralTypes mapping
393 """
394
395 @property
396 def disabled_deployment_input_ports(self) -> Optional[Set[str]]:
397 """Returns names of input ports that will not be included in an export
398
399 Returns:
400 A (set) of module's input port names that are not exportable
401 """
402 return set([])
403
404 @property
405 def disabled_deployment_output_ports(self) -> Optional[Set[str]]:
406 """Returns names of output ports that will not be included in an export
407
408 Returns:
409 A (set) of module's output port names that are not exportable
...
...
396 def disabled_deployment_input_ports(self) -> Optional[Set[str]]:
397 """Returns names of input ports that will not be included in an export
398
399 Returns:
400 A (set) of module's input port names that are not exportable
401 """
402 return set([])
403
404 @property
405 def disabled_deployment_output_ports(self) -> Optional[Set[str]]:
406 """Returns names of output ports that will not be included in an export
407
408 Returns:
409 A (set) of module's output port names that are not exportable
410 """
411 return set([])
412
413 def prepare_for_deployment(self) -> None:
414 """Patch the module if required to prepare for deployment
415
416 """
417 return
...
[end of nemo/core/neural_modules.py]
[start of nemo/backends/pytorch/actions.py]
...
923 @staticmethod
924 def __module_export(module, output, d_format: DeploymentFormat, input_example=None, output_example=None):
925 # Check if output already exists
926 destination = Path(output)
927 if destination.exists():
928 raise FileExistsError(f"Destination {output} already exists. " f"Aborting export.")
929
930 input_names = list(module.input_ports.keys())
931 output_names = list(module.output_ports.keys())
932 dynamic_axes = defaultdict(list)
933
934 def __extract_dynamic_axes(port_name: str, ntype: NeuralType, dynamic_axes: defaultdict):
935 if ntype.axes:
936 for ind, axis in enumerate(ntype.axes):
937 if axis.kind == AxisKind.Batch or axis.kind == AxisKind.Time:
938 dynamic_axes[port_name].append(ind)
939
940 # This is a hack for Jasper to Jarvis export -- need re-design for this
941 inputs_to_drop = set()
942 outputs_to_drop = set()
943 if type(module).__name__ == "JasperEncoder":
944 logging.info(
945 "Module is JasperEncoder. We are removing input and output length ports since they are not needed for "
946 "deployment"
947 )
948 inputs_to_drop.add("length")
949 outputs_to_drop.add("encoded_lengths")
950
951 # for input_ports
952 for port_name, ntype in module.input_ports.items():
953 if port_name in inputs_to_drop:
954 input_names.remove(port_name)
955 continue
956 __extract_dynamic_axes(port_name, ntype, dynamic_axes)
957 # for output_ports
958 for port_name, ntype in module.output_ports.items():
959 if port_name in outputs_to_drop:
960 output_names.remove(port_name)
961 continue
962 __extract_dynamic_axes(port_name, ntype, dynamic_axes)
963
...
[end of nemo/backends/pytorch/actions.py]
[start of nemo/collections/asr/jasper.py]
...
104 }
105
106 @property
107 @add_port_docs()
108 def output_ports(self):
109 """Returns definitions of module output ports.
110 """
111 return {
112 # "outputs": NeuralType(
113 # {0: AxisType(BatchTag), 1: AxisType(EncodedRepresentationTag), 2: AxisType(ProcessedTimeTag),}
114 # ),
115 # "encoded_lengths": NeuralType({0: AxisType(BatchTag)}),
116 "outputs": NeuralType(('B', 'D', 'T'), AcousticEncodedRepresentation()),
117 "encoded_lengths": NeuralType(tuple('B'), LengthsType()),
118 }
119
120 @property
121 def disabled_deployment_input_ports(self):
122 return set(["length"])
123
124 @property
125 def disabled_deployment_output_ports(self):
126 return set(["encoded_lengths"])
127
128 def prepare_for_deployment(self):
129 m_count = 0
130 for m in self.modules():
131 if type(m).__name__ == "MaskedConv1d":
132 m.use_mask = False
...
[end of nemo/collections/asr/jasper.py]
[start of nemo/core/neural_factory.py]
...
596 raise TypeError(f"All callbacks passed to the eval action must" f"be inherited from EvaluatorCallback")
597 self.train(
598 tensors_to_optimize=None, optimizer='sgd', callbacks=callbacks, optimization_params={'num_epochs': 1},
599 )
600
601 def deployment_export(
602 self, module, output: str, d_format: DeploymentFormat, input_example=None, output_example=None
603 ):
604 """Exports Neural Module instance for deployment.
605
606 Args:
607 module: neural module to export
608 output (str): where export results should be saved
609 d_format (DeploymentFormat): which deployment format to use
610 input_example: sometimes tracing will require input examples
611 output_example: Should match inference on input_example
612 """
613 module.prepare_for_deployment()
614
615 return self._trainer.deployment_export(
616 module=module,
617 output=output,
...
[end of nemo/core/neural_factory.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
NVIDIA/NeMo
|
ba4616f1f011d599de87f0cb3315605e715d402a
|
Jasper Encoder Export failed
The export of Jasper Encoder is failing. I am using the core API [deployment_export](https://nvidia.github.io/NeMo/api-docs/nemo.html#nemo.core.neural_factory.NeuralModuleFactory.deployment_export) like in the script: https://github.com/NVIDIA/NeMo/blob/403238f82d26879ba5fca53fbf75b3cdc70fb49b/scripts/export_jasper_to_onnx.py#L92
I believe the issue (as shown below) is that the `input_example` provided does not match the `output_example`.
```
/opt/conda/lib/python3.6/site-packages/torch/jit/__init__.py:1023: TracerWarning: Output nr 1. of the traced function does not match the corresponding output of the Python function. Detailed error:
Not within tolerance rtol=1e-05 atol=1e-05 at input[0, 870, 67] (0.6547648906707764 vs. 0.6546438932418823) and 812 other locations (0.00%)
check_tolerance, _force_outplace, True, _module_class)
[NeMo E 2020-02-23 19:10:07 actions:1023] module export failed for JasperEncoder with exception number of output names provided (2) exceeded number of outputs (1)
```
**What is the correct `input_example` and `output_example` to export JasperEncoder?**
The full output can be seen here:
```
adrianaf@2a520c7abb1e:/tmp/NeMo$ ! python /tmp/NeMo/scripts/export_jasper_to_onnx.py --config /raid/datasets/asr/data/config_files/WSJ-test_acoustic_quartznet15x5.yaml --nn_encoder /home/adrianaf/projects/nemo_asr_app/models/quartznet15x5/JasperEncoder-STEP-247400.pt --nn_decoder /home/adrianaf/projects/nemo_asr_app/models/quartznet15x5/JasperDecoderForCTC-STEP-247400.pt --onnx_encoder /raid/datasets/asr/data/models/ONNX/pre-trained_encoder.onnx --onnx_decoder /raid/datasets/asr/data/models/ONNX/pre-trained_decoder.onnx
/opt/conda/lib/python3.6/site-packages/torchvision/io/_video_opt.py:17: UserWarning: video reader based on ffmpeg c++ ops not available
warnings.warn("video reader based on ffmpeg c++ ops not available")
/tmp/NeMo/nemo/collections/asr/audio_preprocessing.py:48: UserWarning: Could not import torchaudio. Some features might not work.
warnings.warn('Could not import torchaudio. Some features might not work.')
[NeMo I 2020-02-23 19:09:42 export_jasper_to_onnx:48] Loading config file...
[NeMo I 2020-02-23 19:09:42 export_jasper_to_onnx:52] Determining model shape...
[NeMo I 2020-02-23 19:09:42 export_jasper_to_onnx:60] Num encoder input features: 64
[NeMo I 2020-02-23 19:09:42 export_jasper_to_onnx:61] Num decoder input features: 1024
[NeMo W 2020-02-23 19:09:42 deprecated:68] Function ``_get_trainer`` is deprecated. It is going to be removed in the future version.
[NeMo I 2020-02-23 19:09:42 export_jasper_to_onnx:65] Initializing models...
[NeMo I 2020-02-23 19:09:45 export_jasper_to_onnx:76] Loading checkpoints...
[NeMo I 2020-02-23 19:09:45 export_jasper_to_onnx:91] Exporting encoder...
[NeMo W 2020-02-23 19:09:45 neural_factory:627] Turned off 170 masked convolutions
[NeMo I 2020-02-23 19:09:45 actions:937] Module is JasperEncoder. We are removing input and output length ports since they are not needed for deployment
[NeMo W 2020-02-23 19:09:46 deprecated:68] Function ``local_parameters`` is deprecated. It is going to be removed in the 0.11 version.
/opt/conda/lib/python3.6/site-packages/torch/jit/__init__.py:1023: TracerWarning: Output nr 1. of the traced function does not match the corresponding output of the Python function. Detailed error:
Not within tolerance rtol=1e-05 atol=1e-05 at input[0, 870, 67] (0.6547648906707764 vs. 0.6546438932418823) and 812 other locations (0.00%)
check_tolerance, _force_outplace, True, _module_class)
[NeMo E 2020-02-23 19:10:07 actions:1023] module export failed for JasperEncoder with exception number of output names provided (2) exceeded number of outputs (1)
[NeMo I 2020-02-23 19:10:07 export_jasper_to_onnx:98] Exporting decoder...
graph(%encoder_output : Float(1, 1024, 128),
%1 : Float(29, 1024, 1),
%2 : Float(29)):
%3 : Float(1, 29, 128) = onnx::Conv[dilations=[1], group=1, kernel_shape=[1], pads=[0, 0], strides=[1]](%encoder_output, %1, %2), scope: JasperDecoderForCTC/Sequential[decoder_layers]/Conv1d[0] # /opt/conda/lib/python3.6/site-packages/torch/nn/modules/conv.py:202:0
%4 : Float(1, 128, 29) = onnx::Transpose[perm=[0, 2, 1]](%3), scope: JasperDecoderForCTC # /tmp/NeMo/nemo/collections/asr/jasper.py:235:0
%output : Float(1, 128, 29) = onnx::LogSoftmax[axis=2](%4), scope: JasperDecoderForCTC # /opt/conda/lib/python3.6/site-packages/torch/nn/functional.py:1317:0
return (%output)
/opt/conda/lib/python3.6/site-packages/torch/onnx/utils.py:774: UserWarning: No names were found for specified dynamic axes of provided input.Automatically generated names will be applied to each dynamic axes of input encoder_output
'Automatically generated names will be applied to each dynamic axes of input {}'.format(key))
/opt/conda/lib/python3.6/site-packages/torch/onnx/utils.py:774: UserWarning: No names were found for specified dynamic axes of provided input.Automatically generated names will be applied to each dynamic axes of input output
'Automatically generated names will be applied to each dynamic axes of input {}'.format(key))
[NeMo I 2020-02-23 19:10:07 export_jasper_to_onnx:105] Export completed successfully.
```
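The error reported above — two output names handed to a trace with one actual output — is a bookkeeping mismatch: ports disabled for deployment must be dropped from the name lists before export. A minimal, NeMo-free sketch of that filtering (the helper name is illustrative; only `length`, `outputs`, and `encoded_lengths` come from the log):

```python
def build_export_names(input_ports, output_ports, disabled_inputs, disabled_outputs):
    """Drop ports excluded from deployment so the name lists match the
    traced graph's actual inputs and outputs."""
    input_names = [p for p in input_ports if p not in disabled_inputs]
    output_names = [p for p in output_ports if p not in disabled_outputs]
    return input_names, output_names


# JasperEncoder-like ports: the length ports are not needed for deployment.
ins, outs = build_export_names(
    ["audio_signal", "length"],
    ["outputs", "encoded_lengths"],
    disabled_inputs={"length"},
    disabled_outputs={"encoded_lengths"},
)
print(ins, outs)  # ['audio_signal'] ['outputs']
```

With `length` and `encoded_lengths` filtered out, the exporter receives exactly one output name for the one traced output, avoiding the `number of output names provided (2) exceeded number of outputs (1)` failure.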
|
2020-03-10T03:03:23Z
|
<patch>
diff --git a/nemo/backends/pytorch/actions.py b/nemo/backends/pytorch/actions.py
--- a/nemo/backends/pytorch/actions.py
+++ b/nemo/backends/pytorch/actions.py
@@ -937,26 +937,16 @@ def __extract_dynamic_axes(port_name: str, ntype: NeuralType, dynamic_axes: defa
if axis.kind == AxisKind.Batch or axis.kind == AxisKind.Time:
dynamic_axes[port_name].append(ind)
- # This is a hack for Jasper to Jarvis export -- need re-design for this
- inputs_to_drop = set()
- outputs_to_drop = set()
- if type(module).__name__ == "JasperEncoder":
- logging.info(
- "Module is JasperEncoder. We are removing input and output length ports since they are not needed for "
- "deployment"
- )
- inputs_to_drop.add("length")
- outputs_to_drop.add("encoded_lengths")
-
+ # extract dynamic axes and remove unnecessary inputs/outputs
# for input_ports
for port_name, ntype in module.input_ports.items():
- if port_name in inputs_to_drop:
+ if port_name in module._disabled_deployment_input_ports:
input_names.remove(port_name)
continue
__extract_dynamic_axes(port_name, ntype, dynamic_axes)
# for output_ports
for port_name, ntype in module.output_ports.items():
- if port_name in outputs_to_drop:
+ if port_name in module._disabled_deployment_output_ports:
output_names.remove(port_name)
continue
__extract_dynamic_axes(port_name, ntype, dynamic_axes)
diff --git a/nemo/collections/asr/jasper.py b/nemo/collections/asr/jasper.py
--- a/nemo/collections/asr/jasper.py
+++ b/nemo/collections/asr/jasper.py
@@ -118,14 +118,14 @@ def output_ports(self):
}
@property
- def disabled_deployment_input_ports(self):
+ def _disabled_deployment_input_ports(self):
return set(["length"])
@property
- def disabled_deployment_output_ports(self):
+ def _disabled_deployment_output_ports(self):
return set(["encoded_lengths"])
- def prepare_for_deployment(self):
+ def _prepare_for_deployment(self):
m_count = 0
for m in self.modules():
if type(m).__name__ == "MaskedConv1d":
diff --git a/nemo/core/neural_factory.py b/nemo/core/neural_factory.py
--- a/nemo/core/neural_factory.py
+++ b/nemo/core/neural_factory.py
@@ -610,7 +610,7 @@ def deployment_export(
input_example: sometimes tracing will require input examples
output_example: Should match inference on input_example
"""
- module.prepare_for_deployment()
+ module._prepare_for_deployment()
return self._trainer.deployment_export(
module=module,
diff --git a/nemo/core/neural_modules.py b/nemo/core/neural_modules.py
--- a/nemo/core/neural_modules.py
+++ b/nemo/core/neural_modules.py
@@ -393,7 +393,7 @@ def output_ports(self) -> Optional[Dict[str, NeuralType]]:
"""
@property
- def disabled_deployment_input_ports(self) -> Optional[Set[str]]:
+ def _disabled_deployment_input_ports(self) -> Optional[Set[str]]:
"""Returns names of input ports that will not be included in an export
Returns:
@@ -402,7 +402,7 @@ def disabled_deployment_input_ports(self) -> Optional[Set[str]]:
return set([])
@property
- def disabled_deployment_output_ports(self) -> Optional[Set[str]]:
+ def _disabled_deployment_output_ports(self) -> Optional[Set[str]]:
"""Returns names of output ports that will not be included in an export
Returns:
@@ -410,7 +410,7 @@ def disabled_deployment_output_ports(self) -> Optional[Set[str]]:
"""
return set([])
- def prepare_for_deployment(self) -> None:
+ def _prepare_for_deployment(self) -> None:
"""Patch the module if required to prepare for deployment
"""
</patch>
|
diff --git a/tests/unit/core/test_deploy_export.py b/tests/unit/core/test_deploy_export.py
--- a/tests/unit/core/test_deploy_export.py
+++ b/tests/unit/core/test_deploy_export.py
@@ -46,9 +46,11 @@
import nemo.collections.nlp.nm.trainables.common.token_classification_nm
from nemo import logging
+TRT_ONNX_DISABLED = False
+
# Check if the required libraries and runtimes are installed.
+# Only initialize GPU after this runner is activated.
try:
- # Only initialize GPU after this runner is activated.
import pycuda.autoinit
# This import causes pycuda to automatically manage CUDA context creation and cleanup.
@@ -63,16 +65,17 @@
)
from .tensorrt_runner import TensorRTRunnerV2
except:
- # Skip tests.
- pytestmark = pytest.mark.skip
+ TRT_ONNX_DISABLED = True
@pytest.mark.usefixtures("neural_factory")
class TestDeployExport(TestCase):
- def setUp(self):
- logging.setLevel(logging.WARNING)
- device = nemo.core.DeviceType.GPU
- self.nf = nemo.core.NeuralModuleFactory(backend=nemo.core.Backend.PyTorch, placement=device)
+ # def setUp(self):
+ # super().setUp()
+
+ # logging.setLevel(logging.WARNING)
+ # device = nemo.core.DeviceType.GPU
+ # self.nf = nemo.core.NeuralModuleFactory(backend=nemo.core.Backend.PyTorch, placement=device)
def __test_export_route(self, module, out_name, mode, input_example=None):
out = Path(out_name)
@@ -112,7 +115,13 @@ def __test_export_route(self, module, out_name, mode, input_example=None):
loader_cache = DataLoaderCache(data_loader)
profile_shapes = OrderedDict()
names = list(module.input_ports) + list(module.output_ports)
-
+ names = list(
+ filter(
+ lambda x: x
+ not in (module._disabled_deployment_input_ports | module._disabled_deployment_output_ports),
+ names,
+ )
+ )
if isinstance(input_example, tuple):
si = [tuple(input_example[i].shape) for i in range(len(input_example))]
elif isinstance(input_example, OrderedDict):
@@ -152,7 +161,7 @@ def __test_export_route(self, module, out_name, mode, input_example=None):
input_names = list(input_metadata.keys())
for i in range(len(input_names)):
input_name = input_names[i]
- if input_name in module.disabled_deployment_input_ports:
+ if input_name in module._disabled_deployment_input_ports:
continue
inputs[input_name] = (
input_example[input_name].cpu().numpy()
@@ -209,8 +218,8 @@ def __test_export_route(self, module, out_name, mode, input_example=None):
ort_inputs = ort_session.get_inputs()
for i in range(len(input_names)):
input_name = input_names[i]
- if input_name in module.disabled_deployment_input_ports:
- input_name = ort_inputs[i].name
+ if input_name in module._disabled_deployment_input_ports:
+ continue
inputs[input_name] = (
input_example[input_name].cpu().numpy()
if isinstance(input_example, OrderedDict)
@@ -263,9 +272,10 @@ def __test_export_route(self, module, out_name, mode, input_example=None):
def __test_export_route_all(self, module, out_name, input_example=None):
if input_example is not None:
- self.__test_export_route(
- module, out_name + '.trt.onnx', nemo.core.DeploymentFormat.TRTONNX, input_example=input_example
- )
+ if not TRT_ONNX_DISABLED:
+ self.__test_export_route(
+ module, out_name + '.trt.onnx', nemo.core.DeploymentFormat.TRTONNX, input_example=input_example
+ )
self.__test_export_route(module, out_name + '.onnx', nemo.core.DeploymentFormat.ONNX, input_example)
self.__test_export_route(module, out_name + '.pt', nemo.core.DeploymentFormat.PYTORCH, input_example)
self.__test_export_route(module, out_name + '.ts', nemo.core.DeploymentFormat.TORCHSCRIPT, input_example)
@@ -323,9 +333,7 @@ def test_jasper_encoder(self):
)
self.__test_export_route_all(
- module=jasper_encoder,
- out_name="jasper_encoder",
- input_example=(torch.randn(16, 64, 256).cuda(), torch.randn(256).cuda()),
+ module=jasper_encoder, out_name="jasper_encoder", input_example=torch.randn(16, 64, 256).cuda(),
)
@pytest.mark.unit
@@ -343,7 +351,5 @@ def test_quartz_encoder(self):
)
self.__test_export_route_all(
- module=jasper_encoder,
- out_name="quartz_encoder",
- input_example=(torch.randn(16, 64, 256).cuda(), torch.randint(20, (16,)).cuda()),
+ module=jasper_encoder, out_name="quartz_encoder", input_example=torch.randn(16, 64, 256).cuda(),
)
|
1.0
| ||||
NVIDIA__NeMo-3632
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
./reinstall.sh crashes due to not being able to uninstall llvmlite
Starting from the `nemo:1.5.1` container, cloning the NeMo repo into a folder inside it and calling `./reinstall.sh` fails with
```
ERROR: Cannot uninstall 'llvmlite'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall.
```
`pip install -e`, on the other hand, succeeds, installing `nemo:1.7.0rc0` and `numpy:1.22.2`; the rest of the packages remain untouched.
It seems that `./reinstall.sh`, which still worked fine a week or so ago when following the same procedure to upgrade to `nemo:1.6.0rc`, has redeveloped issue #841. The solution remains the same: first call
```
pip install --ignore-installed llvmlite
```
followed by `./reinstall.sh`. In this case, apart from `llvmlite`, the following packages are updated
```
ftfy-6.0.3 nemo-toolkit-1.7.0rc0 numba-0.55.1 pytorch-lightning-1.5.9 sacrebleu-2.0.0 setuptools-59.5.0
```
Interestingly `numpy` in this case is left at `1.21.5`.
</issue>
<code>
[start of README.rst]
1
2 |status| |documentation| |license| |lgtm_grade| |lgtm_alerts| |black|
3
4 .. |status| image:: http://www.repostatus.org/badges/latest/active.svg
5 :target: http://www.repostatus.org/#active
6 :alt: Project Status: Active – The project has reached a stable, usable state and is being actively developed.
7
8 .. |documentation| image:: https://readthedocs.com/projects/nvidia-nemo/badge/?version=main
9 :alt: Documentation
10 :target: https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/
11
12 .. |license| image:: https://img.shields.io/badge/License-Apache%202.0-brightgreen.svg
13 :target: https://github.com/NVIDIA/NeMo/blob/master/LICENSE
14 :alt: NeMo core license and license for collections in this repo
15
16 .. |lgtm_grade| image:: https://img.shields.io/lgtm/grade/python/g/NVIDIA/NeMo.svg?logo=lgtm&logoWidth=18
17 :target: https://lgtm.com/projects/g/NVIDIA/NeMo/context:python
18 :alt: Language grade: Python
19
20 .. |lgtm_alerts| image:: https://img.shields.io/lgtm/alerts/g/NVIDIA/NeMo.svg?logo=lgtm&logoWidth=18
21 :target: https://lgtm.com/projects/g/NVIDIA/NeMo/alerts/
22 :alt: Total alerts
23
24 .. |black| image:: https://img.shields.io/badge/code%20style-black-000000.svg
25 :target: https://github.com/psf/black
26 :alt: Code style: black
27
28 .. _main-readme:
29
30 **NVIDIA NeMo**
31 ===============
32
33 Introduction
34 ------------
35
36 NVIDIA NeMo is a conversational AI toolkit built for researchers working on automatic speech recognition (ASR), natural language processing (NLP), and text-to-speech synthesis (TTS).
37 The primary objective of NeMo is to help researchers from industry and academia to reuse prior work (code and pretrained models) and make it easier to create new `conversational AI models <https://developer.nvidia.com/conversational-ai#started>`_.
38
39 `Pre-trained NeMo models. <https://catalog.ngc.nvidia.com/models?query=nemo&orderBy=weightPopularDESC>`_
40
41 `Introductory video. <https://www.youtube.com/embed/wBgpMf_KQVw>`_
42
43 Key Features
44 ------------
45
46 * Speech processing
47 * `Automatic Speech Recognition (ASR) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/intro.html>`_
48 * Supported models: Jasper, QuartzNet, CitriNet, Conformer-CTC, Conformer-Transducer, ContextNet, ...
49 * Supports CTC and Transducer/RNNT losses/decoders
50 * Beam Search decoding
51 * `Language Modelling for ASR <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/asr_language_modeling.html>`_: N-gram LM in fusion with Beam Search decoding, Neural Rescoring with Transformer
52 * Streaming and Buffered ASR (CTC/Transducer) - `Chunked Inference Examples <https://github.com/NVIDIA/NeMo/tree/main/examples/asr/asr_chunked_inference>`_
53 * `Speech Classification and Speech Command Recognition <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/speech_classification/intro.html>`_: MatchboxNet (Command Recognition)
54 * `Voice activity Detection (VAD) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/asr/speech_classification/models.html#marblenet-vad>`_: MarbleNet
55 * `Speaker Recognition <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/speaker_recognition/intro.html>`_: TitaNet, ECAPA_TDNN, SpeakerNet
56 * `Speaker Diarization <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/speaker_diarization/intro.html>`_: TitaNet, ECAPA_TDNN, SpeakerNet
57 * `Pretrained models on different languages. <https://ngc.nvidia.com/catalog/collections/nvidia:nemo_asr>`_: English, Spanish, German, Russian, Chinese, French, Italian, Polish, ...
58 * `NGC collection of pre-trained speech processing models. <https://ngc.nvidia.com/catalog/collections/nvidia:nemo_asr>`_
59 * Natural Language Processing
60 * `Compatible with Hugging Face Transformers and NVIDIA Megatron <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/megatron_finetuning.html>`_
61 * `Neural Machine Translation (NMT) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/machine_translation.html>`_
62 * `Punctuation and Capitalization <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/punctuation_and_capitalization.html>`_
63 * `Token classification (named entity recognition) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/token_classification.html>`_
64 * `Text classification <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/text_classification.html>`_
65 * `Joint Intent and Slot Classification <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/joint_intent_slot.html>`_
66 * `BERT pre-training <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/bert_pretraining.html>`_
67 * `Question answering <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/question_answering.html>`_
68 * `GLUE benchmark <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/glue_benchmark.html>`_
69 * `Information retrieval <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/information_retrieval.html>`_
70 * `Entity Linking <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/entity_linking.html>`_
71 * `Dialogue State Tracking <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/sgd_qa.html>`_
72 * `Neural Duplex Text Normalization <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/text_normalization.html>`_
73 * `Prompt Tuning <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/megatron_finetuning.html#prompt-tuning>`_
74 * `NGC collection of pre-trained NLP models. <https://ngc.nvidia.com/catalog/collections/nvidia:nemo_nlp>`_
75 * `Speech synthesis (TTS) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/tts/intro.html#>`_
76 * Spectrogram generation: Tacotron2, GlowTTS, TalkNet, FastPitch, FastSpeech2, Mixer-TTS, Mixer-TTS-X
77 * Vocoders: WaveGlow, SqueezeWave, UniGlow, MelGAN, HiFiGAN, UnivNet
78 * End-to-end speech generation: FastPitch_HifiGan_E2E, FastSpeech2_HifiGan_E2E
79 * `NGC collection of pre-trained TTS models. <https://ngc.nvidia.com/catalog/collections/nvidia:nemo_tts>`_
80 * `Tools <https://github.com/NVIDIA/NeMo/tree/main/tools>`_
81 * `Text Processing (text normalization and inverse text normalization) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/tools/text_processing_deployment.html>`_
82 * `CTC-Segmentation tool <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/tools/ctc_segmentation.html>`_
83 * `Speech Data Explorer <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/tools/speech_data_explorer.html>`_: a dash-based tool for interactive exploration of ASR/TTS datasets
84
85
86 Built for speed, NeMo can utilize NVIDIA's Tensor Cores and scale out training to multiple GPUs and multiple nodes.
87
88 Requirements
89 ------------
90
91 1) Python 3.6, 3.7 or 3.8
92 2) Pytorch 1.10.0 or above
93 3) NVIDIA GPU for training
94
95 Documentation
96 -------------
97
98 .. |main| image:: https://readthedocs.com/projects/nvidia-nemo/badge/?version=main
99 :alt: Documentation Status
100 :scale: 100%
101 :target: https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/
102
103 .. |stable| image:: https://readthedocs.com/projects/nvidia-nemo/badge/?version=stable
104 :alt: Documentation Status
105 :scale: 100%
106 :target: https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/
107
108 +---------+-------------+------------------------------------------------------------------------------------------------------------------------------------------+
109 | Version | Status | Description |
110 +=========+=============+==========================================================================================================================================+
111 | Latest | |main| | `Documentation of the latest (i.e. main) branch. <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/>`_ |
112 +---------+-------------+------------------------------------------------------------------------------------------------------------------------------------------+
113 | Stable | |stable| | `Documentation of the stable (i.e. most recent release) branch. <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/>`_ |
114 +---------+-------------+------------------------------------------------------------------------------------------------------------------------------------------+
115
116 Tutorials
117 ---------
118 A great way to start with NeMo is by checking `one of our tutorials <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/starthere/tutorials.html>`_.
119
120 Getting help with NeMo
121 ----------------------
122 FAQ can be found on NeMo's `Discussions board <https://github.com/NVIDIA/NeMo/discussions>`_. You are welcome to ask questions or start discussions there.
123
124
125 Installation
126 ------------
127
128 Pip
129 ~~~
130 Use this installation mode if you want the latest released version.
131
132 .. code-block:: bash
133
134 apt-get update && apt-get install -y libsndfile1 ffmpeg
135 pip install Cython
136 pip install nemo_toolkit['all']
137
138 .. note::
139
140 Depending on the shell used, you may need to use ``"nemo_toolkit[all]"`` instead in the above command.
141
142 Pip from source
143 ~~~~~~~~~~~~~~~
144 Use this installation mode if you want a version from a particular GitHub branch (e.g. main).
145
146 .. code-block:: bash
147
148 apt-get update && apt-get install -y libsndfile1 ffmpeg
149 pip install Cython
150 python -m pip install git+https://github.com/NVIDIA/NeMo.git@{BRANCH}#egg=nemo_toolkit[all]
151
152
153 From source
154 ~~~~~~~~~~~
155 Use this installation mode if you are contributing to NeMo.
156
157 .. code-block:: bash
158
159 apt-get update && apt-get install -y libsndfile1 ffmpeg
160 git clone https://github.com/NVIDIA/NeMo
161 cd NeMo
162 ./reinstall.sh
163
164 .. note::
165
166 If you only want the toolkit without additional conda-based dependencies, you may replace ``reinstall.sh``
167 with ``pip install -e .`` when your PWD is the root of the NeMo repository.
168
169 RNNT
170 ~~~~
171 Note that RNNT requires numba to be installed from conda.
172
173 .. code-block:: bash
174
175 conda remove numba
176 pip uninstall numba
177 conda install -c conda-forge numba
178
179 Megatron GPT
180 ~~~~~~~~~~~~
181 Megatron GPT training requires NVIDIA Apex to be installed.
182
183 .. code-block:: bash
184
185 git clone https://github.com/NVIDIA/apex
186 cd apex
187 git checkout c8bcc98176ad8c3a0717082600c70c907891f9cb
188 pip install -v --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" --global-option="--fast_layer_norm" ./
189
190 Docker containers:
191 ~~~~~~~~~~~~~~~~~~
192 To build a nemo container with Dockerfile from a branch, please run
193
194 .. code-block:: bash
195
196 DOCKER_BUILDKIT=1 docker build -f Dockerfile -t nemo:latest .
197
198
199 If you choose to work with the main branch, we recommend using NVIDIA's PyTorch container version 22.01-py3 and then installing from GitHub.
200
201 .. code-block:: bash
202
203 docker run --gpus all -it --rm -v <nemo_github_folder>:/NeMo --shm-size=8g \
204 -p 8888:8888 -p 6006:6006 --ulimit memlock=-1 --ulimit \
205 stack=67108864 --device=/dev/snd nvcr.io/nvidia/pytorch:22.01-py3
206
207 Examples
208 --------
209
210 Many examples can be found under the `"Examples" <https://github.com/NVIDIA/NeMo/tree/stable/examples>`_ folder.
211
212
213 Contributing
214 ------------
215
216 We welcome community contributions! Please refer to `CONTRIBUTING.md <https://github.com/NVIDIA/NeMo/blob/stable/CONTRIBUTING.md>`_ for the process.
217
218 Publications
219 ------------
220
221 We provide an ever-growing list of publications that utilize the NeMo framework. Please refer to `PUBLICATIONS.md <https://github.com/NVIDIA/NeMo/blob/main/PUBLICATIONS.md>`_. We welcome the addition of your own articles to this list!
222
223 Citation
224 --------
225
226 .. code-block:: bibtex
227
228 @article{kuchaiev2019nemo,
229 title={Nemo: a toolkit for building ai applications using neural modules},
230 author={Kuchaiev, Oleksii and Li, Jason and Nguyen, Huyen and Hrinchuk, Oleksii and Leary, Ryan and Ginsburg, Boris and Kriman, Samuel and Beliaev, Stanislav and Lavrukhin, Vitaly and Cook, Jack and others},
231 journal={arXiv preprint arXiv:1909.09577},
232 year={2019}
233 }
234
235 License
236 -------
237 NeMo is under `Apache 2.0 license <https://github.com/NVIDIA/NeMo/blob/stable/LICENSE>`_.
238
[end of README.rst]
[start of nemo_text_processing/text_normalization/normalize.py]
...
32 except (ModuleNotFoundError, ImportError):
33 PYNINI_AVAILABLE = False
34
35 try:
36 from nemo.collections.common.tokenizers.moses_tokenizers import MosesProcessor
37 from nemo.collections.nlp.data.text_normalization.utils import post_process_punct
38
39 NLP_AVAILABLE = True
40 except (ModuleNotFoundError, ImportError):
41 NLP_AVAILABLE = False
42
43
44 SPACE_DUP = re.compile(' {2,}')
45
46
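The ``SPACE_DUP`` pattern above matches runs of two or more consecutive spaces; a minimal stand-alone illustration of how such a pattern is typically applied (the helper name is illustrative, not part of the module):

```python
import re

SPACE_DUP = re.compile(' {2,}')  # two or more consecutive spaces


def collapse_spaces(text: str) -> str:
    # Replace each run of duplicate spaces with a single space.
    return SPACE_DUP.sub(' ', text)
```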
47 class Normalizer:
48 """
49 Normalizer class that converts text from written to spoken form.
50 Useful for TTS preprocessing.
51
52 Args:
53 input_case: expected input capitalization
54 lang: language specifying the TN rules, by default: English
...
...
69 assert input_case in ["lower_cased", "cased"]
70
71 if not PYNINI_AVAILABLE:
72 raise ImportError(get_installation_msg())
73
74 if lang == 'en' and deterministic:
75 from nemo_text_processing.text_normalization.en.taggers.tokenize_and_classify import ClassifyFst
76 from nemo_text_processing.text_normalization.en.verbalizers.verbalize_final import VerbalizeFinalFst
77 elif lang == 'en' and not deterministic:
78 from nemo_text_processing.text_normalization.en.taggers.tokenize_and_classify_with_audio import ClassifyFst
79 from nemo_text_processing.text_normalization.en.verbalizers.verbalize_final import VerbalizeFinalFst
80 elif lang == 'ru':
81 # Ru TN only supports non-deterministic cases and produces multiple normalization options
82 # use normalize_with_audio.py
83 from nemo_text_processing.text_normalization.ru.taggers.tokenize_and_classify import ClassifyFst
84 from nemo_text_processing.text_normalization.ru.verbalizers.verbalize_final import VerbalizeFinalFst
85 elif lang == 'de':
86 # De TN only supports non-deterministic cases and produces multiple normalization options
87 # use normalize_with_audio.py
88 from nemo_text_processing.text_normalization.de.taggers.tokenize_and_classify import ClassifyFst
89 from nemo_text_processing.text_normalization.de.verbalizers.verbalize_final import VerbalizeFinalFst
90 self.tagger = ClassifyFst(
91 input_case=input_case,
92 deterministic=deterministic,
93 cache_dir=cache_dir,
94 overwrite_cache=overwrite_cache,
95 whitelist=whitelist,
96 )
97 self.verbalizer = VerbalizeFinalFst(deterministic=deterministic)
98 self.parser = TokenParser()
99 self.lang = lang
100
101 if NLP_AVAILABLE:
102 self.processor = MosesProcessor(lang_id=lang)
103 else:
104 self.processor = None
105 print("NeMo NLP is not available. Moses de-tokenization will be skipped.")
106
107 def normalize_list(self, texts: List[str], verbose=False, punct_post_process: bool = False) -> List[str]:
108 """
109 NeMo text normalizer
110
111 Args:
112 texts: list of input strings
113 verbose: whether to print intermediate meta information
...
...
343
344 def select_verbalizer(self, lattice: 'pynini.FstLike') -> str:
345 """
346 Given a verbalized lattice, returns the shortest path
347
348 Args:
349 lattice: verbalization lattice
350
351 Returns: shortest path
352 """
353 output = pynini.shortestpath(lattice, nshortest=1, unique=True).string()
354 return output
355
356
357 def parse_args():
358 parser = ArgumentParser()
359 parser.add_argument("input_string", help="input string", type=str)
360 parser.add_argument("--language", help="language", choices=["en", "de"], default="en", type=str)
361 parser.add_argument(
362 "--input_case", help="input capitalization", choices=["lower_cased", "cased"], default="cased", type=str
363 )
364 parser.add_argument("--verbose", help="print info for debugging", action='store_true')
...
[end of nemo_text_processing/text_normalization/normalize.py]
[start of nemo_text_processing/text_normalization/__init__.py]
...
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from nemo.utils import logging
16
17 try:
18 import pynini
19
20 PYNINI_AVAILABLE = True
21 except (ModuleNotFoundError, ImportError):
22 logging.warning(
23 "`pynini` is not installed ! \n"
24 "Please run the `nemo_text_processing/setup.sh` script"
25 "prior to usage of this toolkit."
26 )
27
28 PYNINI_AVAILABLE = False
...
[end of nemo_text_processing/text_normalization/__init__.py]
[start of nemo_text_processing/text_normalization/en/graph_utils.py]
...
145
146 def get_singulars(fst):
147 """
148 Given plural returns singulars
149
150 Args:
151 fst: Fst
152
153 Returns singular forms for the given plural forms
154 """
155 return PLURAL_TO_SINGULAR @ fst
156
157
158 def convert_space(fst) -> 'pynini.FstLike':
159 """
160 Converts space to nonbreaking space.
161 Used only in tagger grammars for transducing token values within quotes, e.g. name: "hello kitty"
162 This is making transducer significantly slower, so only use when there could be potential spaces within quotes, otherwise leave it.
163
164 Args:
165 fst: input fst
166
...
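A plain-string analogue of the space-to-nonbreaking-space mapping that ``convert_space`` implements over FSTs may help clarify the intent (this sketch operates on Python strings, not transducers, and the helper name is hypothetical):

```python
NBSP = "\u00a0"  # nonbreaking space (U+00A0)


def convert_space_str(text: str) -> str:
    # String-level analogue of the FST mapping: ordinary spaces inside a
    # quoted token value become nonbreaking spaces, so later stages can
    # split on regular spaces without breaking the token apart.
    return text.replace(" ", NBSP)
```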
...
194 """
195 Returns true if FAR can be loaded
196 """
197 return self.far_path.exists()
198
199 @property
200 def fst(self) -> 'pynini.FstLike':
201 return self._fst
202
203 @fst.setter
204 def fst(self, fst):
205 self._fst = fst
206
207 def add_tokens(self, fst) -> 'pynini.FstLike':
208 """
209 Wraps class name around to given fst
210
211 Args:
212 fst: input fst
213
214 Returns:
215 Fst: fst
216 """
217 return pynutil.insert(f"{self.name} {{ ") + fst + pynutil.insert(" }")
...
[end of nemo_text_processing/text_normalization/en/graph_utils.py]
[start of nemo_text_processing/text_normalization/normalize_with_audio.py]
...
41
42 try:
43 from nemo.collections.nlp.data.text_normalization.utils import post_process_punct
44 from nemo_text_processing.text_normalization.data_loader_utils import pre_process
45
46 NLP_AVAILABLE = True
47 except (ModuleNotFoundError, ImportError):
48 NLP_AVAILABLE = False
49
50 """
51 The script provides multiple normalization options and chooses the best one that minimizes CER of the ASR output
52 (most of the semiotic classes use deterministic=False flag).
53
54 To run this script with a .json manifest file, the manifest file should contain the following fields:
55 "audio_data" - path to the audio file
56 "text" - raw text
57 "pred_text" - ASR model prediction
58
59 See https://github.com/NVIDIA/NeMo/blob/main/examples/asr/transcribe_speech.py on how to add ASR predictions
60
61 When the manifest is ready, run:
62 python normalize_with_audio.py \
63 --audio_data PATH/TO/MANIFEST.JSON \
64 --language en
65
66
67 To run with a single audio file, specify path to audio and text with:
68 python normalize_with_audio.py \
69 --audio_data PATH/TO/AUDIO.WAV \
70 --language en \
71 --text raw text OR PATH/TO/.TXT/FILE
72 --model QuartzNet15x5Base-En \
73 --verbose
74
75 To see possible normalization options for a text input without an audio file (could be used for debugging), run:
76 python normalize_with_audio.py --text "RAW TEXT"
77
78 Specify `--cache_dir` to generate .far grammars once and re-use them for faster inference
79 """
80
81
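The CER-based selection described in the docstring above can be sketched with a minimal edit-distance helper. This is a hypothetical illustration, not the module's actual implementation; `cer` and `pick_best` are invented names:

```python
def cer(ref: str, hyp: str) -> float:
    """Character error rate: Levenshtein distance divided by len(ref)."""
    m, n = len(ref), len(hyp)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cur[j] = min(
                prev[j] + 1,                               # deletion
                cur[j - 1] + 1,                            # insertion
                prev[j - 1] + (ref[i - 1] != hyp[j - 1]),  # substitution
            )
        prev = cur
    return prev[n] / max(m, 1)


def pick_best(candidates, pred_text):
    # Choose the normalization option closest to the ASR prediction.
    return min(candidates, key=lambda c: cer(c.lower(), pred_text.lower()))
```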
82 class NormalizerWithAudio(Normalizer):
83 """
84 Normalizer class that converts text from written to spoken form.
85 Useful for TTS preprocessing.
86
87 Args:
88 input_case: expected input capitalization
89 lang: language
...
...
268 asr_model = ASRModel.restore_from(asr_model)
269 elif args.model in ASRModel.get_available_model_names():
270 asr_model = ASRModel.from_pretrained(asr_model)
271 else:
272 raise ValueError(
273 f'Provide path to the pretrained checkpoint or choose from {ASRModel.get_available_model_names()}'
274 )
275 return asr_model
276
277
278 def parse_args():
279 parser = ArgumentParser()
280 parser.add_argument("--text", help="input string or path to a .txt file", default=None, type=str)
281 parser.add_argument(
282 "--input_case", help="input capitalization", choices=["lower_cased", "cased"], default="cased", type=str
283 )
284 parser.add_argument(
285 "--language", help="Select target language", choices=["en", "ru", "de"], default="en", type=str
286 )
287 parser.add_argument("--audio_data", default=None, help="path to an audio file or .json manifest")
288 parser.add_argument(
289 '--model', type=str, default='QuartzNet15x5Base-En', help='Pre-trained model name or path to model checkpoint'
...
[end of nemo_text_processing/text_normalization/normalize_with_audio.py]
[start of /dev/null]
...
[end of /dev/null]
[start of nemo_text_processing/text_normalization/en/verbalizers/whitelist.py]
...
...
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15
16
17 from nemo_text_processing.text_normalization.en.graph_utils import NEMO_CHAR, NEMO_SIGMA, GraphFst, delete_space
18
19 try:
20 import pynini
21 from pynini.lib import pynutil
22
23 PYNINI_AVAILABLE = True
24 except (ModuleNotFoundError, ImportError):
25 PYNINI_AVAILABLE = False
26
27
...
[end of nemo_text_processing/text_normalization/en/verbalizers/whitelist.py]
[start of tools/text_processing_deployment/pynini_export.py]
...
53
54 def tn_grammars(**kwargs):
55 d = {}
56 d['classify'] = {
57 'TOKENIZE_AND_CLASSIFY': TNClassifyFst(
58 input_case=kwargs["input_case"],
59 deterministic=True,
60 cache_dir=kwargs["cache_dir"],
61 overwrite_cache=kwargs["overwrite_cache"],
62 ).fst
63 }
64 d['verbalize'] = {'ALL': TNVerbalizeFst(deterministic=True).fst, 'REDUP': pynini.accep("REDUP")}
65 return d
66
67
68 def export_grammars(output_dir, grammars):
69 """
70 Exports tokenize_and_classify and verbalize Fsts as OpenFst finite state archive (FAR) files.
71
72 Args:
73 output_dir: directory to export FAR files to. Subdirectories will be created for tagger and verbalizer respectively.
74 grammars: grammars to be exported
...
...
95 )
96 parser.add_argument(
97 "--input_case", help="input capitalization", choices=["lower_cased", "cased"], default="cased", type=str
98 )
99 parser.add_argument("--overwrite_cache", help="set to True to re-create .far grammar files", action="store_true")
100 parser.add_argument(
101 "--cache_dir",
102 help="path to a dir with .far grammar file. Set to None to avoid using cache",
103 default=None,
104 type=str,
105 )
106 return parser.parse_args()
107
108
109 if __name__ == '__main__':
110 args = parse_args()
111
112 if args.language in ['ru', 'fr', 'es', 'vi'] and args.grammars == 'tn_grammars':
113 raise ValueError('Only ITN grammars can be deployed in Sparrowhawk for the selected languages.')
114
115 if args.language == 'en':
116 from nemo_text_processing.inverse_text_normalization.en.taggers.tokenize_and_classify import (
...
...
134 ClassifyFst as TNClassifyFst,
135 )
136 from nemo_text_processing.text_normalization.de.verbalizers.verbalize import VerbalizeFst as TNVerbalizeFst
137 elif args.language == 'ru':
138 from nemo_text_processing.inverse_text_normalization.ru.taggers.tokenize_and_classify import (
139 ClassifyFst as ITNClassifyFst,
140 )
141 from nemo_text_processing.inverse_text_normalization.ru.verbalizers.verbalize import (
142 VerbalizeFst as ITNVerbalizeFst,
143 )
144 elif args.language == 'es':
145 from nemo_text_processing.inverse_text_normalization.es.taggers.tokenize_and_classify import (
146 ClassifyFst as ITNClassifyFst,
147 )
148 from nemo_text_processing.inverse_text_normalization.es.verbalizers.verbalize import (
149 VerbalizeFst as ITNVerbalizeFst,
150 )
151 elif args.language == 'fr':
152 from nemo_text_processing.inverse_text_normalization.fr.taggers.tokenize_and_classify import (
153 ClassifyFst as ITNClassifyFst,
154 )
...
[end of tools/text_processing_deployment/pynini_export.py]
[start of nemo_text_processing/text_normalization/en/verbalizers/word.py]
...
...
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15
16 from nemo_text_processing.text_normalization.en.graph_utils import NEMO_CHAR, NEMO_SIGMA, GraphFst, delete_space
17
18 try:
19 import pynini
20 from pynini.lib import pynutil
21
22 PYNINI_AVAILABLE = True
23 except (ModuleNotFoundError, ImportError):
24 PYNINI_AVAILABLE = False
25
26
...
[end of nemo_text_processing/text_normalization/en/verbalizers/word.py]
[start of nemo_text_processing/text_normalization/en/taggers/punctuation.py]
...
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15
16 import sys
17 from unicodedata import category
18
19 from nemo_text_processing.text_normalization.en.graph_utils import GraphFst
20
21 try:
22 import pynini
23 from pynini.lib import pynutil
24
25 PYNINI_AVAILABLE = False
26 except (ModuleNotFoundError, ImportError):
27 PYNINI_AVAILABLE = False
28
29
...
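The `sys`/`unicodedata.category` imports above suggest the tagger enumerates Unicode punctuation codepoints. A hypothetical sketch of that enumeration (not this file's actual logic; the function name and range limit are illustrative):

```python
from unicodedata import category


def unicode_punctuation(limit: int = 0x250):
    # Collect characters whose Unicode general category starts with 'P'
    # (Pc, Pd, Ps, Pe, Pi, Pf, Po), scanning codepoints up to `limit`.
    return [chr(cp) for cp in range(limit) if category(chr(cp)).startswith("P")]
```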
[end of nemo_text_processing/text_normalization/en/taggers/punctuation.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
NVIDIA/NeMo
|
022f0292aecbc98d591d49423d5045235394f793
|
./reinstall.sh crashes due to not being able to uninstall llvmlite
Starting from the `nemo:1.5.1` container, cloning the NeMo repo to a folder inside it and calling `./reinstall.sh` fails with
```
ERROR: Cannot uninstall 'llvmlite'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall.
```
`pip install -e`, on the other hand, succeeds, installing `nemo:1.7.0rc0` and `numpy:1.22.2`; the rest of the packages remain untouched.
It seems that `./reinstall.sh`, which used to work fine a week or so ago when following the same procedure to upgrade to `nemo:1.6.0rc`, has redeveloped issue #841. The solution remains the same: first call
```
pip install --ignore-installed llvmlite
```
followed by `./reinstall.sh`. In this case, apart from `llvmlite`, the following packages are updated
```
ftfy-6.0.3 nemo-toolkit-1.7.0rc0 numba-0.55.1 pytorch-lightning-1.5.9 sacrebleu-2.0.0 setuptools-59.5.0
```
Interestingly `numpy` in this case is left at `1.21.5`.
|
2022-02-09T05:12:31Z
|
<patch>
diff --git a/nemo_text_processing/text_normalization/__init__.py b/nemo_text_processing/text_normalization/__init__.py
--- a/nemo_text_processing/text_normalization/__init__.py
+++ b/nemo_text_processing/text_normalization/__init__.py
@@ -21,7 +21,7 @@
except (ModuleNotFoundError, ImportError):
logging.warning(
"`pynini` is not installed ! \n"
- "Please run the `nemo_text_processing/setup.sh` script"
+ "Please run the `nemo_text_processing/setup.sh` script "
"prior to usage of this toolkit."
)
diff --git a/nemo_text_processing/text_normalization/en/graph_utils.py b/nemo_text_processing/text_normalization/en/graph_utils.py
--- a/nemo_text_processing/text_normalization/en/graph_utils.py
+++ b/nemo_text_processing/text_normalization/en/graph_utils.py
@@ -159,7 +159,7 @@ def convert_space(fst) -> 'pynini.FstLike':
"""
Converts space to nonbreaking space.
Used only in tagger grammars for transducing token values within quotes, e.g. name: "hello kitty"
- This is making transducer significantly slower, so only use when there could be potential spaces within quotes, otherwise leave it.
+ This is making transducer significantly slower, so only use when there could be potential spaces within quotes, otherwise leave it.
Args:
fst: input fst
@@ -208,9 +208,9 @@ def add_tokens(self, fst) -> 'pynini.FstLike':
"""
Wraps class name around to given fst
- Args:
+ Args:
fst: input fst
-
+
Returns:
Fst: fst
"""
diff --git a/nemo_text_processing/text_normalization/en/taggers/punctuation.py b/nemo_text_processing/text_normalization/en/taggers/punctuation.py
--- a/nemo_text_processing/text_normalization/en/taggers/punctuation.py
+++ b/nemo_text_processing/text_normalization/en/taggers/punctuation.py
@@ -22,7 +22,7 @@
import pynini
from pynini.lib import pynutil
- PYNINI_AVAILABLE = False
+ PYNINI_AVAILABLE = True
except (ModuleNotFoundError, ImportError):
PYNINI_AVAILABLE = False
diff --git a/nemo_text_processing/text_normalization/en/verbalizers/whitelist.py b/nemo_text_processing/text_normalization/en/verbalizers/whitelist.py
--- a/nemo_text_processing/text_normalization/en/verbalizers/whitelist.py
+++ b/nemo_text_processing/text_normalization/en/verbalizers/whitelist.py
@@ -12,8 +12,6 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
-
-
from nemo_text_processing.text_normalization.en.graph_utils import NEMO_CHAR, NEMO_SIGMA, GraphFst, delete_space
try:
@@ -21,6 +19,7 @@
from pynini.lib import pynutil
PYNINI_AVAILABLE = True
+
except (ModuleNotFoundError, ImportError):
PYNINI_AVAILABLE = False
diff --git a/nemo_text_processing/text_normalization/en/verbalizers/word.py b/nemo_text_processing/text_normalization/en/verbalizers/word.py
--- a/nemo_text_processing/text_normalization/en/verbalizers/word.py
+++ b/nemo_text_processing/text_normalization/en/verbalizers/word.py
@@ -12,7 +12,6 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
-
from nemo_text_processing.text_normalization.en.graph_utils import NEMO_CHAR, NEMO_SIGMA, GraphFst, delete_space
try:
@@ -20,6 +19,7 @@
from pynini.lib import pynutil
PYNINI_AVAILABLE = True
+
except (ModuleNotFoundError, ImportError):
PYNINI_AVAILABLE = False
diff --git a/nemo_text_processing/text_normalization/es/__init__.py b/nemo_text_processing/text_normalization/es/__init__.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/__init__.py
@@ -0,0 +1,15 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+LOCALIZATION = "eu" # Set to am for alternate formatting
diff --git a/nemo_text_processing/text_normalization/es/data/__init__.py b/nemo_text_processing/text_normalization/es/data/__init__.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/data/__init__.py
@@ -0,0 +1,13 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/nemo_text_processing/text_normalization/es/data/dates/__init__.py b/nemo_text_processing/text_normalization/es/data/dates/__init__.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/data/dates/__init__.py
@@ -0,0 +1,13 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/nemo_text_processing/text_normalization/es/data/electronic/__init__.py b/nemo_text_processing/text_normalization/es/data/electronic/__init__.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/data/electronic/__init__.py
@@ -0,0 +1,13 @@
+# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/nemo_text_processing/text_normalization/es/data/fractions/__init__.py b/nemo_text_processing/text_normalization/es/data/fractions/__init__.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/data/fractions/__init__.py
@@ -0,0 +1,13 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/nemo_text_processing/text_normalization/es/data/measures/__init__.py b/nemo_text_processing/text_normalization/es/data/measures/__init__.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/data/measures/__init__.py
@@ -0,0 +1,13 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/nemo_text_processing/text_normalization/es/data/money/__init__.py b/nemo_text_processing/text_normalization/es/data/money/__init__.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/data/money/__init__.py
@@ -0,0 +1,13 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/nemo_text_processing/text_normalization/es/data/numbers/__init__.py b/nemo_text_processing/text_normalization/es/data/numbers/__init__.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/data/numbers/__init__.py
@@ -0,0 +1,13 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/nemo_text_processing/text_normalization/es/data/ordinals/__init__.py b/nemo_text_processing/text_normalization/es/data/ordinals/__init__.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/data/ordinals/__init__.py
@@ -0,0 +1,13 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/nemo_text_processing/text_normalization/es/data/roman/__init__.py b/nemo_text_processing/text_normalization/es/data/roman/__init__.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/data/roman/__init__.py
@@ -0,0 +1,13 @@
+# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/nemo_text_processing/text_normalization/es/data/time/__init__.py b/nemo_text_processing/text_normalization/es/data/time/__init__.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/data/time/__init__.py
@@ -0,0 +1,13 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/nemo_text_processing/text_normalization/es/graph_utils.py b/nemo_text_processing/text_normalization/es/graph_utils.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/graph_utils.py
@@ -0,0 +1,179 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import NEMO_SIGMA, NEMO_SPACE
+from nemo_text_processing.text_normalization.es import LOCALIZATION
+from nemo_text_processing.text_normalization.es.utils import get_abs_path, load_labels
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ digits = pynini.project(pynini.string_file(get_abs_path("data/numbers/digit.tsv")), "input")
+ tens = pynini.project(pynini.string_file(get_abs_path("data/numbers/ties.tsv")), "input")
+ teens = pynini.project(pynini.string_file(get_abs_path("data/numbers/teen.tsv")), "input")
+ twenties = pynini.project(pynini.string_file(get_abs_path("data/numbers/twenties.tsv")), "input")
+ hundreds = pynini.project(pynini.string_file(get_abs_path("data/numbers/hundreds.tsv")), "input")
+
+ accents = pynini.string_map([("รก", "a"), ("รฉ", "e"), ("รญ", "i"), ("รณ", "o"), ("รบ", "u")])
+
+ if LOCALIZATION == "am": # Setting localization for central and northern america formatting
+ cardinal_separator = pynini.string_map([",", NEMO_SPACE])
+ decimal_separator = pynini.accep(".")
+ else:
+ cardinal_separator = pynini.string_map([".", NEMO_SPACE])
+ decimal_separator = pynini.accep(",")
+
+ ones = pynini.union("un", "รบn")
+ fem_ones = pynini.union(pynini.cross("un", "una"), pynini.cross("รบn", "una"), pynini.cross("uno", "una"))
+ one_to_one_hundred = pynini.union(digits, tens, teens, twenties, tens + pynini.accep(" y ") + digits)
+ fem_hundreds = hundreds @ pynini.cdrewrite(pynini.cross("ientos", "ientas"), "", "", NEMO_SIGMA)
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ digits = None
+ tens = None
+ teens = None
+ twenties = None
+ hundreds = None
+
+ accents = None
+
+ cardinal_separator = None
+ decimal_separator = None
+
+ ones = None
+ fem_ones = None
+ one_to_one_hundred = None
+ fem_hundreds = None
+
+ PYNINI_AVAILABLE = False
+
+
+def strip_accent(fst: 'pynini.FstLike') -> 'pynini.FstLike':
+ """
+ Converts all accented vowels to non-accented equivalents
+
+ Args:
+ fst: Any fst. Composes vowel conversion onto fst's output strings
+ """
+ return fst @ pynini.cdrewrite(accents, "", "", NEMO_SIGMA)
+
+
+def shift_cardinal_gender(fst: 'pynini.FstLike') -> 'pynini.FstLike':
+ """
+    Applies gender conversion rules to a cardinal string. These include rendering all masculine forms of "uno" (including apocopated forms) as "una" and
+    converting all gendered numbers in the hundreds series (200, 300, 400, ...) to their feminine equivalents (e.g. "doscientos" -> "doscientas"). Conversion only
+    applies to place values below one thousand and to the thousands place (e.g. "doscientos mil doscientos" -> "doscientas mil doscientas"). For place values greater
+    than the thousands, there is no gender shift, as the higher powers of ten ("millones", "billones") are masculine nouns and any conversion would be formally
+    ungrammatical.
+ e.g.
+ "doscientos" -> "doscientas"
+ "doscientos mil" -> "doscientas mil"
+ "doscientos millones" -> "doscientos millones"
+ "doscientos mil millones" -> "doscientos mil millones"
+ "doscientos millones doscientos mil doscientos" -> "doscientos millones doscientas mil doscientas"
+
+ Args:
+ fst: Any fst. Composes conversion onto fst's output strings
+ """
+ before_mil = (
+ NEMO_SPACE
+ + (pynini.accep("mil") | pynini.accep("milรฉsimo"))
+ + pynini.closure(NEMO_SPACE + hundreds, 0, 1)
+ + pynini.closure(NEMO_SPACE + one_to_one_hundred, 0, 1)
+ + pynini.union(pynini.accep("[EOS]"), pynini.accep("\""), decimal_separator)
+ )
+ before_double_digits = pynini.closure(NEMO_SPACE + one_to_one_hundred, 0, 1) + pynini.union(
+ pynini.accep("[EOS]"), pynini.accep("\"")
+ )
+
+    fem_align = pynini.cdrewrite(fem_hundreds, "", before_mil, NEMO_SIGMA) # doscientas mil dosciento
+    fem_align @= pynini.cdrewrite(fem_hundreds, "", before_double_digits, NEMO_SIGMA) # doscientas mil doscienta
+    fem_align @= pynini.cdrewrite(
+        fem_ones, "", pynini.union("[EOS]", "\"", decimal_separator), NEMO_SIGMA
+    ) # If before a quote or EOS, we know it's the end of a string
+
+    return fst @ fem_align
+
+
+def shift_number_gender(fst: 'pynini.FstLike') -> 'pynini.FstLike':
+ """
+ Performs gender conversion on all verbalized numbers in output. All values in the hundreds series (200,300,400) are changed to
+ feminine gender (e.g. "doscientos" -> "doscientas") and all forms of "uno" (including apocopated forms) are converted to "una".
+    This has no boundary restriction and will perform the shift across all values in the output string.
+ e.g.
+ "doscientos" -> "doscientas"
+ "doscientos millones" -> "doscientas millones"
+ "doscientos millones doscientos" -> "doscientas millones doscientas"
+
+ Args:
+ fst: Any fst. Composes conversion onto fst's output strings
+ """
+    fem_align = pynini.cdrewrite(fem_hundreds, "", "", NEMO_SIGMA)
+    fem_align @= pynini.cdrewrite(
+        fem_ones, "", pynini.union(NEMO_SPACE, pynini.accep("[EOS]"), pynini.accep("\"")), NEMO_SIGMA
+    ) # If before a quote or EOS, we know it's the end of a string
+
+    return fst @ fem_align
+
+
+def strip_cardinal_apocope(fst: 'pynini.FstLike') -> 'pynini.FstLike':
+ """
+ Reverts apocope on cardinal strings in line with formation rules. e.g. "un" -> "uno". Due to cardinal formation rules, this in effect only
+ affects strings where the final value is a variation of "un".
+ e.g.
+ "un" -> "uno"
+ "veintiรบn" -> "veintiuno"
+
+ Args:
+ fst: Any fst. Composes conversion onto fst's output strings
+ """
+    # Since cardinals use apocope by default for large values (e.g. "millรณn"), this only needs to act on a string-final "un"/"รบn"
+ strip = pynini.cross("un", "uno") | pynini.cross("รบn", "uno")
+ strip = pynini.cdrewrite(strip, "", pynini.union("[EOS]", "\""), NEMO_SIGMA)
+ return fst @ strip
+
+
+def roman_to_int(fst: 'pynini.FstLike') -> 'pynini.FstLike':
+ """
+ Alters given fst to convert Roman integers (lower and upper cased) into Arabic numerals. Valid for values up to 1000.
+ e.g.
+ "V" -> "5"
+ "i" -> "1"
+
+ Args:
+ fst: Any fst. Composes fst onto Roman conversion outputs.
+ """
+
+ def _load_roman(file: str):
+ roman = load_labels(get_abs_path(file))
+ roman_numerals = [(x, y) for x, y in roman] + [(x.upper(), y) for x, y in roman]
+ return pynini.string_map(roman_numerals)
+
+ digit = _load_roman("data/roman/digit.tsv")
+ ties = _load_roman("data/roman/ties.tsv")
+ hundreds = _load_roman("data/roman/hundreds.tsv")
+
+ graph = (
+ digit
+ | ties + (digit | pynutil.add_weight(pynutil.insert("0"), 0.01))
+ | (
+ hundreds
+ + (ties | pynutil.add_weight(pynutil.insert("0"), 0.01))
+ + (digit | pynutil.add_weight(pynutil.insert("0"), 0.01))
+ )
+ ).optimize()
+
+ return graph @ fst
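The Roman-numeral mapping built above is easier to audit outside the FST toolkit. The following is a plain-Python sketch of the same conversion (a hypothetical standalone helper, not part of this patch; the FST version instead composes string maps loaded from the data/roman/*.tsv tables):

```python
def roman_to_int(s: str) -> int:
    """Convert a Roman numeral (upper or lower case) to an integer.

    Pynini-free sketch of the behavior of roman_to_int above; the real
    implementation composes digit/ties/hundreds FSTs from TSV data.
    """
    values = {"i": 1, "v": 5, "x": 10, "l": 50, "c": 100, "d": 500, "m": 1000}
    total = 0
    s = s.lower()
    # Subtractive notation: a smaller value before a larger one is negated.
    for ch, nxt in zip(s, s[1:] + " "):
        v = values[ch]
        total += -v if nxt != " " and values[nxt] > v else v
    return total
```

e.g. `roman_to_int("V")` gives `5` and `roman_to_int("xiv")` gives `14`, matching the "V" -> "5" example in the docstring above.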
diff --git a/nemo_text_processing/text_normalization/es/taggers/__init__.py b/nemo_text_processing/text_normalization/es/taggers/__init__.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/taggers/__init__.py
@@ -0,0 +1,13 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/nemo_text_processing/text_normalization/es/taggers/cardinal.py b/nemo_text_processing/text_normalization/es/taggers/cardinal.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/taggers/cardinal.py
@@ -0,0 +1,190 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import (
+ NEMO_ALPHA,
+ NEMO_DIGIT,
+ NEMO_SIGMA,
+ NEMO_SPACE,
+ NEMO_WHITE_SPACE,
+ GraphFst,
+ delete_space,
+ insert_space,
+)
+from nemo_text_processing.text_normalization.es.graph_utils import cardinal_separator
+from nemo_text_processing.text_normalization.es.utils import get_abs_path
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ zero = pynini.invert(pynini.string_file(get_abs_path("data/numbers/zero.tsv")))
+ digit = pynini.invert(pynini.string_file(get_abs_path("data/numbers/digit.tsv")))
+ teen = pynini.invert(pynini.string_file(get_abs_path("data/numbers/teen.tsv")))
+ ties = pynini.invert(pynini.string_file(get_abs_path("data/numbers/ties.tsv")))
+ twenties = pynini.invert(pynini.string_file(get_abs_path("data/numbers/twenties.tsv")))
+ hundreds = pynini.invert(pynini.string_file(get_abs_path("data/numbers/hundreds.tsv")))
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ zero = None
+ digit = None
+ teen = None
+ ties = None
+ twenties = None
+ hundreds = None
+
+ PYNINI_AVAILABLE = False
+
+
+def filter_punctuation(fst: 'pynini.FstLike') -> 'pynini.FstLike':
+ """
+ Helper function for parsing number strings. Converts common cardinal strings (groups of three digits delineated by 'cardinal_separator' - see graph_utils)
+    into a plain string of digits:
+ "1 000" -> "1000"
+ "1.000.000" -> "1000000"
+ Args:
+ fst: Any pynini.FstLike object. Function composes fst onto string parser fst
+
+ Returns:
+ fst: A pynini.FstLike object
+ """
+ exactly_three_digits = NEMO_DIGIT ** 3 # for blocks of three
+ up_to_three_digits = pynini.closure(NEMO_DIGIT, 1, 3) # for start of string
+
+ cardinal_string = pynini.closure(
+ NEMO_DIGIT, 1
+ ) # For string w/o punctuation (used for page numbers, thousand series)
+
+ cardinal_string |= (
+ up_to_three_digits
+ + pynutil.delete(cardinal_separator)
+ + pynini.closure(exactly_three_digits + pynutil.delete(cardinal_separator))
+ + exactly_three_digits
+ )
+
+ return cardinal_string @ fst
+
+
+class CardinalFst(GraphFst):
+ """
+ Finite state transducer for classifying cardinals, e.g.
+ "1000" -> cardinal { integer: "mil" }
+ "2.000.000" -> cardinal { integer: "dos millones" }
+
+ Args:
+ deterministic: if True will provide a single transduction option,
+ for False multiple transduction are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, deterministic: bool = True):
+ super().__init__(name="cardinal", kind="classify", deterministic=deterministic)
+
+ # Any single digit
+ graph_digit = digit
+ digits_no_one = (NEMO_DIGIT - "1") @ graph_digit
+
+ # Any double digit
+ graph_tens = teen
+ graph_tens |= ties + (pynutil.delete('0') | (pynutil.insert(" y ") + graph_digit))
+ graph_tens |= twenties
+
+ self.tens = graph_tens.optimize()
+
+ self.two_digit_non_zero = pynini.union(
+ graph_digit, graph_tens, (pynini.cross("0", NEMO_SPACE) + graph_digit)
+ ).optimize()
+
+ # Three digit strings
+ graph_hundreds = hundreds + pynini.union(
+ pynutil.delete("00"), (insert_space + graph_tens), (pynini.cross("0", NEMO_SPACE) + graph_digit)
+ )
+ graph_hundreds |= pynini.cross("100", "cien")
+ graph_hundreds |= (
+ pynini.cross("1", "ciento") + insert_space + pynini.union(graph_tens, pynutil.delete("0") + graph_digit)
+ )
+
+ self.hundreds = graph_hundreds.optimize()
+
+ # For all three digit strings with leading zeroes (graph appends '0's to manage place in string)
+ graph_hundreds_component = pynini.union(graph_hundreds, pynutil.delete("0") + graph_tens)
+
+ graph_hundreds_component_at_least_one_none_zero_digit = graph_hundreds_component | (
+ pynutil.delete("00") + graph_digit
+ )
+ graph_hundreds_component_at_least_one_none_zero_digit_no_one = graph_hundreds_component | (
+ pynutil.delete("00") + digits_no_one
+ )
+
+ graph_thousands_component_at_least_one_none_zero_digit = pynini.union(
+ pynutil.delete("000") + graph_hundreds_component_at_least_one_none_zero_digit,
+ graph_hundreds_component_at_least_one_none_zero_digit_no_one
+ + pynutil.insert(" mil")
+ + ((insert_space + graph_hundreds_component_at_least_one_none_zero_digit) | pynutil.delete("000")),
+ pynini.cross("001", "mil")
+ + ((insert_space + graph_hundreds_component_at_least_one_none_zero_digit) | pynutil.delete("000")),
+ )
+
+ graph_thousands_component_at_least_one_none_zero_digit_no_one = pynini.union(
+ pynutil.delete("000") + graph_hundreds_component_at_least_one_none_zero_digit_no_one,
+ graph_hundreds_component_at_least_one_none_zero_digit_no_one
+ + pynutil.insert(" mil")
+ + ((insert_space + graph_hundreds_component_at_least_one_none_zero_digit) | pynutil.delete("000")),
+ pynini.cross("001", "mil")
+ + ((insert_space + graph_hundreds_component_at_least_one_none_zero_digit) | pynutil.delete("000")),
+ )
+
+ graph_million = pynutil.add_weight(pynini.cross("000001", "un millรณn"), -0.001)
+ graph_million |= graph_thousands_component_at_least_one_none_zero_digit_no_one + pynutil.insert(" millones")
+ graph_million |= pynutil.delete("000000")
+ graph_million += insert_space
+
+ graph_billion = pynutil.add_weight(pynini.cross("000001", "un billรณn"), -0.001)
+ graph_billion |= graph_thousands_component_at_least_one_none_zero_digit_no_one + pynutil.insert(" billones")
+ graph_billion |= pynutil.delete("000000")
+ graph_billion += insert_space
+
+ graph_trillion = pynutil.add_weight(pynini.cross("000001", "un trillรณn"), -0.001)
+ graph_trillion |= graph_thousands_component_at_least_one_none_zero_digit_no_one + pynutil.insert(" trillones")
+ graph_trillion |= pynutil.delete("000000")
+ graph_trillion += insert_space
+
+ graph = (
+ graph_trillion
+ + graph_billion
+ + graph_million
+ + (graph_thousands_component_at_least_one_none_zero_digit | pynutil.delete("000000"))
+ )
+
+ self.graph = (
+ ((NEMO_DIGIT - "0") + pynini.closure(NEMO_DIGIT, 0))
+ @ pynini.cdrewrite(pynini.closure(pynutil.insert("0")), "[BOS]", "", NEMO_SIGMA)
+ @ NEMO_DIGIT ** 24
+ @ graph
+ @ pynini.cdrewrite(delete_space, "[BOS]", "", NEMO_SIGMA)
+ @ pynini.cdrewrite(delete_space, "", "[EOS]", NEMO_SIGMA)
+ @ pynini.cdrewrite(
+ pynini.cross(pynini.closure(NEMO_WHITE_SPACE, 2), NEMO_SPACE), NEMO_ALPHA, NEMO_ALPHA, NEMO_SIGMA
+ )
+ )
+ self.graph |= zero
+
+ self.graph = filter_punctuation(self.graph).optimize()
+
+ optional_minus_graph = pynini.closure(pynutil.insert("negative: ") + pynini.cross("-", "\"true\" "), 0, 1)
+
+ final_graph = optional_minus_graph + pynutil.insert("integer: \"") + self.graph + pynutil.insert("\"")
+
+ final_graph = self.add_tokens(final_graph)
+ self.fst = final_graph.optimize()
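The separator-stripping behavior of `filter_punctuation` above can be illustrated without pynini. This regex sketch (a hypothetical helper, not part of this patch) accepts the same shapes — a 1-3 digit lead group followed by separator-delimited groups of exactly three digits — assuming the non-"am" localization where '.' and space are the cardinal separators:

```python
import re

# Groups of three digits delimited by "." or space, after a 1-3 digit lead
# (mirrors the up_to_three_digits / exactly_three_digits pattern above).
GROUPED_CARDINAL = re.compile(r"^\d{1,3}(?:[. ]\d{3})+$")

def strip_separators(s: str) -> str:
    """Return the bare digit string for a well-formed grouped cardinal,
    or the input unchanged (as the FST acceptor simply rejects it)."""
    if GROUPED_CARDINAL.match(s):
        return s.replace(".", "").replace(" ", "")
    return s
```

`strip_separators("1.000.000")` yields `"1000000"`, while a malformed grouping such as `"12.34"` is left untouched, matching the acceptor's behavior of rejecting it.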
diff --git a/nemo_text_processing/text_normalization/es/taggers/date.py b/nemo_text_processing/text_normalization/es/taggers/date.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/taggers/date.py
@@ -0,0 +1,107 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import NEMO_DIGIT, NEMO_SPACE, GraphFst, delete_extra_space
+from nemo_text_processing.text_normalization.es.utils import get_abs_path
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ articles = pynini.union("de", "del", "el", "del aรฑo")
+ delete_leading_zero = (pynutil.delete("0") | (NEMO_DIGIT - "0")) + NEMO_DIGIT
+ month_numbers = pynini.string_file(get_abs_path("data/dates/months.tsv"))
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ articles = None
+ delete_leading_zero = None
+ month_numbers = None
+
+ PYNINI_AVAILABLE = False
+
+
+class DateFst(GraphFst):
+ """
+ Finite state transducer for classifying date, e.g.
+        "01.04.2010" -> date { day: "un" month: "abril" year: "dos mil diez" preserve_order: true }
+ "marzo 4 2000" -> date { month: "marzo" day: "cuatro" year: "dos mil" }
+        "1990-01-20" -> date { year: "mil novecientos noventa" month: "enero" day: "veinte" }
+
+ Args:
+ cardinal: cardinal GraphFst
+ deterministic: if True will provide a single transduction option,
+ for False multiple transduction are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, cardinal: GraphFst, deterministic: bool):
+ super().__init__(name="date", kind="classify", deterministic=deterministic)
+
+ number_to_month = month_numbers.optimize()
+ month_graph = pynini.project(number_to_month, "output")
+
+ numbers = cardinal.graph
+ optional_leading_zero = delete_leading_zero | NEMO_DIGIT
+
+ # 01, 31, 1
+ digit_day = optional_leading_zero @ pynini.union(*[str(x) for x in range(1, 32)]) @ numbers
+ day = (pynutil.insert("day: \"") + digit_day + pynutil.insert("\"")).optimize()
+
+ digit_month = optional_leading_zero @ pynini.union(*[str(x) for x in range(1, 13)])
+ number_to_month = digit_month @ number_to_month
+
+ month_name = (pynutil.insert("month: \"") + month_graph + pynutil.insert("\"")).optimize()
+ month_number = (pynutil.insert("month: \"") + number_to_month + pynutil.insert("\"")).optimize()
+
+ # prefer cardinal over year
+ year = (NEMO_DIGIT - "0") + pynini.closure(NEMO_DIGIT, 1, 3) # 90, 990, 1990
+ year @= numbers
+ self.year = year
+
+ year_only = pynutil.insert("year: \"") + year + pynutil.insert("\"")
+ year_with_articles = (
+ pynutil.insert("year: \"") + pynini.closure(articles + NEMO_SPACE, 0, 1) + year + pynutil.insert("\"")
+ )
+
+ graph_dmy = (
+ day
+ + pynini.closure(pynutil.delete(" de"))
+ + NEMO_SPACE
+ + month_name
+ + pynini.closure(NEMO_SPACE + year_with_articles, 0, 1)
+ )
+
+ graph_mdy = ( # English influences on language
+ month_name + delete_extra_space + day + pynini.closure(NEMO_SPACE + year_with_articles, 0, 1)
+ )
+
+ separators = [".", "-", "/"]
+ for sep in separators:
+ year_optional = pynini.closure(pynini.cross(sep, NEMO_SPACE) + year_only, 0, 1)
+ new_graph = day + pynini.cross(sep, NEMO_SPACE) + month_number + year_optional
+ graph_dmy |= new_graph
+ if not deterministic:
+ new_graph = month_number + pynini.cross(sep, NEMO_SPACE) + day + year_optional
+ graph_mdy |= new_graph
+
+ dash = "-"
+ day_optional = pynini.closure(pynini.cross(dash, NEMO_SPACE) + day, 0, 1)
+ graph_ymd = NEMO_DIGIT ** 4 @ year_only + pynini.cross(dash, NEMO_SPACE) + month_number + day_optional
+
+ final_graph = graph_dmy + pynutil.insert(" preserve_order: true")
+ final_graph |= graph_ymd
+ final_graph |= graph_mdy
+
+ self.final_graph = final_graph.optimize()
+ self.fst = self.add_tokens(self.final_graph).optimize()
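The day and month branches above compose `delete_leading_zero` with a union over the valid ranges. A plain-Python sketch of the same validation (hypothetical helper names, not part of this patch):

```python
from typing import Optional

def parse_field(token: str, low: int, high: int) -> Optional[int]:
    """Drop one optional leading zero from a 1-2 digit token and accept
    values in [low, high], mirroring optional_leading_zero @ pynini.union(...)."""
    if not 1 <= len(token) <= 2:
        return None  # the FST only accepts one or two digits
    if len(token) == 2 and token.startswith("0"):
        token = token[1:]
    if token.isdigit() and low <= int(token) <= high:
        return int(token)
    return None

def parse_day(token: str) -> Optional[int]:
    return parse_field(token, 1, 31)  # 01, 31, 1

def parse_month(token: str) -> Optional[int]:
    return parse_field(token, 1, 12)
```

`parse_day("01")` gives `1`, while `parse_day("00")` and `parse_day("32")` are rejected, as the FST path would fail to accept them.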
diff --git a/nemo_text_processing/text_normalization/es/taggers/decimals.py b/nemo_text_processing/text_normalization/es/taggers/decimals.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/taggers/decimals.py
@@ -0,0 +1,138 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import (
+ NEMO_DIGIT,
+ NEMO_SIGMA,
+ NEMO_SPACE,
+ GraphFst,
+ delete_space,
+ insert_space,
+)
+from nemo_text_processing.text_normalization.es.graph_utils import (
+ cardinal_separator,
+ decimal_separator,
+ strip_cardinal_apocope,
+)
+from nemo_text_processing.text_normalization.es.utils import get_abs_path
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ quantities = pynini.string_file(get_abs_path("data/numbers/quantities.tsv"))
+ digit = pynini.invert(pynini.string_file(get_abs_path("data/numbers/digit.tsv")))
+ zero = pynini.invert(pynini.string_file(get_abs_path("data/numbers/zero.tsv")))
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ quantities = None
+ digit = None
+ zero = None
+
+ PYNINI_AVAILABLE = False
+
+
+def get_quantity(decimal_graph: 'pynini.FstLike', cardinal_graph: 'pynini.FstLike') -> 'pynini.FstLike':
+ """
+ Returns FST that transforms either a cardinal or decimal followed by a quantity into a numeral,
+ e.g. 2 millones -> integer_part: "dos" quantity: "millones"
+    e.g. 2,4 millones -> integer_part: "dos" fractional_part: "cuatro" quantity: "millones"
+    e.g. 2,400 millones -> integer_part: "dos mil cuatrocientos" quantity: "millones"
+
+ Args:
+ decimal_graph: DecimalFST
+ cardinal_graph: CardinalFST
+ """
+ numbers = pynini.closure(NEMO_DIGIT, 1, 6) @ cardinal_graph
+ numbers = pynini.cdrewrite(pynutil.delete(cardinal_separator), "", "", NEMO_SIGMA) @ numbers
+
+ res = (
+ pynutil.insert("integer_part: \"")
+ + numbers # The cardinal we're passing only produces 'un' for one, so gender agreement is safe (all quantities are masculine). Limit to 10^6 power.
+ + pynutil.insert("\"")
+ + NEMO_SPACE
+ + pynutil.insert("quantity: \"")
+ + quantities
+ + pynutil.insert("\"")
+ )
+ res |= decimal_graph + NEMO_SPACE + pynutil.insert("quantity: \"") + quantities + pynutil.insert("\"")
+ return res
+
+
+class DecimalFst(GraphFst):
+ """
+ Finite state transducer for classifying decimal, e.g.
+ -11,4006 billones -> decimal { negative: "true" integer_part: "once" fractional_part: "cuatro cero cero seis" quantity: "billones" preserve_order: true }
+ 1 billรณn -> decimal { integer_part: "un" quantity: "billรณn" preserve_order: true }
+ Args:
+ cardinal: CardinalFst
+ deterministic: if True will provide a single transduction option,
+ for False multiple transduction are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, cardinal: GraphFst, deterministic: bool = True):
+ super().__init__(name="decimal", kind="classify", deterministic=deterministic)
+ graph_digit = digit | zero
+
+ if not deterministic:
+ graph = pynini.union(graph_digit, cardinal.hundreds, cardinal.tens)
+ graph += pynini.closure(insert_space + graph)
+
+ else:
+            # General pattern is 1-3 digits: map as cardinal; longer strings default to digit-by-digit reading
+ graph = pynini.union(
+ graph_digit,
+ cardinal.tens,
+ cardinal.hundreds,
+ graph_digit + pynini.closure(insert_space + graph_digit, 3),
+ zero
+ + pynini.closure(insert_space + zero)
+ + pynini.closure(insert_space + graph_digit), # For cases such as "1,010"
+ )
+
+ # Need to strip apocope everywhere BUT end of string
+ reverse_apocope = pynini.string_map([("un", "uno"), ("รบn", "uno")])
+ apply_reverse_apocope = pynini.cdrewrite(reverse_apocope, "", NEMO_SPACE, NEMO_SIGMA)
+ graph @= apply_reverse_apocope
+
+ # Technically decimals should be space delineated groups of three, e.g. (1,333 333). This removes any possible spaces
+ strip_formatting = pynini.cdrewrite(delete_space, "", "", NEMO_SIGMA)
+ graph = strip_formatting @ graph
+
+ self.graph = graph.optimize()
+
+ graph_separator = pynutil.delete(decimal_separator)
+ optional_graph_negative = pynini.closure(pynutil.insert("negative: ") + pynini.cross("-", "\"true\" "), 0, 1)
+
+ self.graph_fractional = pynutil.insert("fractional_part: \"") + self.graph + pynutil.insert("\"")
+
+ # Integer graph maintains apocope except for ones place
+ graph_integer = (
+ strip_cardinal_apocope(cardinal.graph)
+ if deterministic
+ else pynini.union(cardinal.graph, strip_cardinal_apocope(cardinal.graph))
+ ) # Gives us forms w/ and w/o apocope
+ self.graph_integer = pynutil.insert("integer_part: \"") + graph_integer + pynutil.insert("\"")
+ final_graph_wo_sign = self.graph_integer + graph_separator + insert_space + self.graph_fractional
+
+ self.final_graph_wo_negative = (
+ final_graph_wo_sign | get_quantity(final_graph_wo_sign, cardinal.graph).optimize()
+ )
+ final_graph = optional_graph_negative + self.final_graph_wo_negative
+
+ final_graph += pynutil.insert(" preserve_order: true")
+ final_graph = self.add_tokens(final_graph)
+
+ self.fst = final_graph.optimize()
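The `apply_reverse_apocope` rewrite above fires on "un"/"รบn" only when followed by a space, leaving a string-final "un" (the true apocope position) intact. A regex sketch of the same rule (hypothetical helper, not part of this patch):

```python
import re

def reverse_apocope(verbalized: str) -> str:
    """Rewrite "un"/"รบn" to "uno" everywhere except string-final position,
    mirroring the cdrewrite context (only before NEMO_SPACE) above."""
    return re.sub(r"(un|รบn)(?= )", "uno", verbalized)
```

`reverse_apocope("veintiรบn cero seis")` gives `"veintiuno cero seis"`, while a trailing "un" is preserved unchanged.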
diff --git a/nemo_text_processing/text_normalization/es/taggers/electronic.py b/nemo_text_processing/text_normalization/es/taggers/electronic.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/taggers/electronic.py
@@ -0,0 +1,84 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import NEMO_ALPHA, NEMO_DIGIT, GraphFst, insert_space
+from nemo_text_processing.text_normalization.es.utils import get_abs_path, load_labels
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ common_domains = [x[0] for x in load_labels(get_abs_path("data/electronic/domain.tsv"))]
+ symbols = [x[0] for x in load_labels(get_abs_path("data/electronic/symbols.tsv"))]
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ common_domains = None
+ symbols = None
+
+ PYNINI_AVAILABLE = False
+
+
+class ElectronicFst(GraphFst):
+ """
+ Finite state transducer for classifying electronic: email addresses
+ e.g. "abc@hotmail.com" -> electronic { username: "abc" domain: "hotmail.com" preserve_order: true }
+ e.g. "www.abc.com/123" -> electronic { protocol: "www." domain: "abc.com/123" preserve_order: true }
+
+ Args:
+ deterministic: if True will provide a single transduction option,
+            for False, multiple transductions are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, deterministic: bool = True):
+ super().__init__(name="electronic", kind="classify", deterministic=deterministic)
+
+ dot = pynini.accep(".")
+ accepted_common_domains = pynini.union(*common_domains)
+ accepted_symbols = pynini.union(*symbols) - dot
+ accepted_characters = pynini.closure(NEMO_ALPHA | NEMO_DIGIT | accepted_symbols)
+        accepted_characters_with_dot = pynini.closure(NEMO_ALPHA | NEMO_DIGIT | accepted_symbols | dot)
+
+        # email
+        username = (
+            pynutil.insert("username: \"")
+            + accepted_characters_with_dot
+            + pynutil.insert("\"")
+            + pynini.cross('@', ' ')
+        )
+ domain_graph = accepted_characters + dot + accepted_characters
+ domain_graph = pynutil.insert("domain: \"") + domain_graph + pynutil.insert("\"")
+ domain_common_graph = (
+ pynutil.insert("domain: \"")
+ + accepted_characters
+ + accepted_common_domains
+ + pynini.closure((accepted_symbols | dot) + pynini.closure(accepted_characters, 1), 0, 1)
+ + pynutil.insert("\"")
+ )
+ graph = (username + domain_graph) | domain_common_graph
+
+ # url
+ protocol_start = pynini.accep("https://") | pynini.accep("http://")
+ protocol_end = (
+ pynini.accep("www.")
+ if deterministic
+ else pynini.accep("www.") | pynini.cross("www.", "doble ve doble ve doble ve.")
+ )
+ protocol = protocol_start | protocol_end | (protocol_start + protocol_end)
+ protocol = pynutil.insert("protocol: \"") + protocol + pynutil.insert("\"")
+ graph |= protocol + insert_space + (domain_graph | domain_common_graph)
+ self.graph = graph
+
+ final_graph = self.add_tokens(self.graph + pynutil.insert(" preserve_order: true"))
+ self.fst = final_graph.optimize()
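
The serialized token shape the email branch produces can be mimicked with plain string handling. This hypothetical helper only illustrates the output format from the class docstring; the actual splitting is done by the FST:

```python
def tag_email(text: str) -> str:
    """Split an email address into the tagger's serialized fields.

    Plain-string stand-in for the FST path: username before '@',
    domain after, with preserve_order set as in the real grammar.
    """
    username, _, domain = text.partition("@")
    return (
        f'electronic {{ username: "{username}" '
        f'domain: "{domain}" preserve_order: true }}'
    )

print(tag_email("abc@hotmail.com"))
# -> electronic { username: "abc" domain: "hotmail.com" preserve_order: true }
```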
diff --git a/nemo_text_processing/text_normalization/es/taggers/fraction.py b/nemo_text_processing/text_normalization/es/taggers/fraction.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/taggers/fraction.py
@@ -0,0 +1,124 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import (
+ NEMO_CHAR,
+ NEMO_DIGIT,
+ NEMO_SIGMA,
+ NEMO_SPACE,
+ GraphFst,
+)
+from nemo_text_processing.text_normalization.es.utils import get_abs_path
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ ordinal_exceptions = pynini.string_file(get_abs_path("data/fractions/ordinal_exceptions.tsv"))
+ higher_powers_of_ten = pynini.string_file(get_abs_path("data/fractions/powers_of_ten.tsv"))
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ ordinal_exceptions = None
+ higher_powers_of_ten = None
+
+ PYNINI_AVAILABLE = False
+
+
+class FractionFst(GraphFst):
+ """
+ Finite state transducer for classifying fraction
+ "23 4/5" ->
+    tokens { fraction { integer_part: "veintitrés" numerator: "cuatro" denominator: "quinto" morphosyntactic_features: "ordinal" } }
+
+ Args:
+ cardinal: CardinalFst
+ ordinal: OrdinalFst
+ deterministic: if True will provide a single transduction option,
+            for False, multiple transductions are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, cardinal: GraphFst, ordinal: GraphFst, deterministic: bool = True):
+ super().__init__(name="fraction", kind="classify", deterministic=deterministic)
+ cardinal_graph = cardinal.graph
+ ordinal_graph = ordinal.graph
+
+ # 2-10 are all ordinals
+ three_to_ten = pynini.string_map(["2", "3", "4", "5", "6", "7", "8", "9", "10",])
+ block_three_to_ten = pynutil.delete(three_to_ten) # To block cardinal productions
+ if not deterministic: # Multiples of tens are sometimes rendered as ordinals
+ three_to_ten |= pynini.string_map(["20", "30", "40", "50", "60", "70", "80", "90",])
+ graph_three_to_ten = three_to_ten @ ordinal_graph
+ graph_three_to_ten @= pynini.cdrewrite(ordinal_exceptions, "", "", NEMO_SIGMA)
+
+        # Higher powers of ten (and their multiples) are converted to ordinals.
+ hundreds = pynini.string_map(["100", "200", "300", "400", "500", "600", "700", "800", "900",])
+ graph_hundreds = hundreds @ ordinal_graph
+
+ multiples_of_thousand = ordinal.multiples_of_thousand # So we can have X milรฉsimos
+
+ graph_higher_powers_of_ten = (
+ pynini.closure(ordinal.one_to_one_thousand + NEMO_SPACE, 0, 1)
+ + pynini.closure("mil ", 0, 1)
+ + pynini.closure(ordinal.one_to_one_thousand + NEMO_SPACE, 0, 1)
+ ) # x millones / x mil millones / x mil z millones
+ graph_higher_powers_of_ten += higher_powers_of_ten
+ graph_higher_powers_of_ten = cardinal_graph @ graph_higher_powers_of_ten
+ graph_higher_powers_of_ten @= pynini.cdrewrite(
+ pynutil.delete("un "), pynini.accep("[BOS]"), pynini.project(higher_powers_of_ten, "output"), NEMO_SIGMA
+ ) # we drop 'un' from these ordinals (millionths, not one-millionths)
+
+ graph_higher_powers_of_ten = multiples_of_thousand | graph_hundreds | graph_higher_powers_of_ten
+ block_higher_powers_of_ten = pynutil.delete(
+ pynini.project(graph_higher_powers_of_ten, "input")
+ ) # For cardinal graph
+
+ graph_fractions_ordinals = graph_higher_powers_of_ten | graph_three_to_ten
+ graph_fractions_ordinals += pynutil.insert(
+ "\" morphosyntactic_features: \"ordinal\""
+ ) # We note the root for processing later
+
+ # Blocking the digits and hundreds from Cardinal graph
+ graph_fractions_cardinals = pynini.cdrewrite(
+ block_three_to_ten | block_higher_powers_of_ten, pynini.accep("[BOS]"), pynini.accep("[EOS]"), NEMO_SIGMA
+ )
+ graph_fractions_cardinals @= NEMO_CHAR.plus @ pynini.cdrewrite(
+ pynutil.delete("0"), pynini.accep("[BOS]"), pynini.accep("[EOS]"), NEMO_SIGMA
+        ) # Empty characters become '0' for the NEMO_CHAR fst, so we need to block them
+ graph_fractions_cardinals @= cardinal_graph
+ graph_fractions_cardinals += pynutil.insert(
+ "\" morphosyntactic_features: \"add_root\""
+ ) # blocking these entries to reduce erroneous possibilities in debugging
+
+ if deterministic:
+ graph_fractions_cardinals = (
+ pynini.closure(NEMO_DIGIT, 1, 2) @ graph_fractions_cardinals
+ ) # Past hundreds the conventional scheme can be hard to read. For determinism we stop here
+
+ graph_denominator = pynini.union(
+ graph_fractions_ordinals,
+ graph_fractions_cardinals,
+ pynutil.add_weight(cardinal_graph + pynutil.insert("\""), 0.001),
+ ) # Last form is simply recording the cardinal. Weighting so last resort
+
+ integer = pynutil.insert("integer_part: \"") + cardinal_graph + pynutil.insert("\"") + NEMO_SPACE
+ numerator = (
+ pynutil.insert("numerator: \"") + cardinal_graph + (pynini.cross("/", "\" ") | pynini.cross(" / ", "\" "))
+ )
+ denominator = pynutil.insert("denominator: \"") + graph_denominator
+
+ self.graph = pynini.closure(integer, 0, 1) + numerator + denominator
+
+ final_graph = self.add_tokens(self.graph)
+ self.fst = final_graph.optimize()
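
The denominator graph branches three ways: an ordinal root for 2-10, hundreds, and higher powers of ten; a cardinal reading flagged with "add_root" for most other values; and a weighted plain-cardinal fallback. A rough plain-Python sketch of the first two branches follows, where the lookup tables are tiny hypothetical stand-ins for the TSV-driven FSTs:

```python
# Hypothetical stand-ins for the TSV-driven grammars
ORDINAL_DENOMINATORS = {"5": "quinto", "8": "octavo", "100": "centésimo"}
CARDINALS = {"17": "diecisiete"}

def tag_denominator(d: str) -> str:
    """Mirror FractionFst's denominator branches (simplified sketch)."""
    if d in ORDINAL_DENOMINATORS:
        # 2-10, hundreds, and higher powers of ten take the ordinal form
        return f'denominator: "{ORDINAL_DENOMINATORS[d]}" morphosyntactic_features: "ordinal"'
    # Everything else: cardinal reading, flagged for root insertion later
    return f'denominator: "{CARDINALS[d]}" morphosyntactic_features: "add_root"'

print(tag_denominator("5"))
print(tag_denominator("17"))
```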
diff --git a/nemo_text_processing/text_normalization/es/taggers/measure.py b/nemo_text_processing/text_normalization/es/taggers/measure.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/taggers/measure.py
@@ -0,0 +1,184 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import (
+ NEMO_ALPHA,
+ NEMO_DIGIT,
+ NEMO_NON_BREAKING_SPACE,
+ NEMO_SPACE,
+ GraphFst,
+ convert_space,
+ delete_space,
+ insert_space,
+)
+from nemo_text_processing.text_normalization.es.graph_utils import strip_cardinal_apocope
+from nemo_text_processing.text_normalization.es.utils import get_abs_path
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ unit = pynini.string_file(get_abs_path("data/measures/measurements.tsv"))
+ unit_plural_fem = pynini.string_file(get_abs_path("data/measures/measurements_plural_fem.tsv"))
+ unit_plural_masc = pynini.string_file(get_abs_path("data/measures/measurements_plural_masc.tsv"))
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ unit = None
+ unit_plural_fem = None
+ unit_plural_masc = None
+
+ PYNINI_AVAILABLE = False
+
+
+class MeasureFst(GraphFst):
+ """
+ Finite state transducer for classifying measure, e.g.
+ "2,4 g" -> measure { cardinal { integer_part: "dos" fractional_part: "cuatro" units: "gramos" preserve_order: true } }
+ "1 g" -> measure { cardinal { integer: "un" units: "gramo" preserve_order: true } }
+        "1 millón g" -> measure { cardinal { integer: "un" quantity: "millón" units: "gramos" preserve_order: true } }
+    This class also converts words containing numbers and letters
+    e.g. "a-8" -> "a ocho"
+    e.g. "1,2-a" -> "uno coma dos a"
+
+ Args:
+ cardinal: CardinalFst
+ decimal: DecimalFst
+ fraction: FractionFst
+ deterministic: if True will provide a single transduction option,
+            for False, multiple transductions are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, cardinal: GraphFst, decimal: GraphFst, fraction: GraphFst, deterministic: bool = True):
+ super().__init__(name="measure", kind="classify", deterministic=deterministic)
+ cardinal_graph = cardinal.graph
+
+ unit_singular = unit
+ unit_plural = unit_singular @ (unit_plural_fem | unit_plural_masc)
+
+ graph_unit_singular = convert_space(unit_singular)
+ graph_unit_plural = convert_space(unit_plural)
+
+ optional_graph_negative = pynini.closure("-", 0, 1)
+
+ graph_unit_denominator = (
+ pynini.cross("/", "por") + pynutil.insert(NEMO_NON_BREAKING_SPACE) + graph_unit_singular
+ )
+
+ optional_unit_denominator = pynini.closure(
+ pynutil.insert(NEMO_NON_BREAKING_SPACE) + graph_unit_denominator, 0, 1,
+ )
+
+ unit_plural = (
+ pynutil.insert("units: \"")
+ + ((graph_unit_plural + optional_unit_denominator) | graph_unit_denominator)
+ + pynutil.insert("\"")
+ )
+
+ unit_singular_graph = (
+ pynutil.insert("units: \"")
+ + ((graph_unit_singular + optional_unit_denominator) | graph_unit_denominator)
+ + pynutil.insert("\"")
+ )
+
+ subgraph_decimal = decimal.fst + insert_space + pynini.closure(NEMO_SPACE, 0, 1) + unit_plural
+
+ subgraph_cardinal = (
+ (optional_graph_negative + (pynini.closure(NEMO_DIGIT) - "1")) @ cardinal.fst
+ + insert_space
+ + pynini.closure(delete_space, 0, 1)
+ + unit_plural
+ )
+
+ subgraph_cardinal |= (
+ (optional_graph_negative + pynini.accep("1")) @ cardinal.fst
+ + insert_space
+ + pynini.closure(delete_space, 0, 1)
+ + unit_singular_graph
+ )
+
+ subgraph_fraction = fraction.fst + insert_space + pynini.closure(delete_space, 0, 1) + unit_plural
+
+ decimal_times = (
+ pynutil.insert("decimal { ")
+ + decimal.final_graph_wo_negative
+ + pynutil.insert(" } units: \"")
+ + pynini.union('x', 'X')
+ + pynutil.insert("\"")
+ )
+
+ cardinal_times = (
+ pynutil.insert("cardinal { integer: \"")
+ + strip_cardinal_apocope(cardinal_graph)
+ + pynutil.insert("\" } units: \"")
+ + pynini.union('x', 'X')
+ + pynutil.insert("\"")
+ )
+
+ cardinal_dash_alpha = (
+ pynutil.insert("cardinal { integer: \"")
+ + strip_cardinal_apocope(cardinal_graph)
+ + pynutil.delete('-')
+ + pynutil.insert("\" } units: \"")
+ + pynini.closure(NEMO_ALPHA, 1)
+ + pynutil.insert("\"")
+ )
+
+ decimal_dash_alpha = (
+ pynutil.insert("decimal { ")
+ + decimal.final_graph_wo_negative
+ + pynutil.delete('-')
+ + pynutil.insert(" } units: \"")
+ + pynini.closure(NEMO_ALPHA, 1)
+ + pynutil.insert("\"")
+ )
+
+ alpha_dash_cardinal = (
+ pynutil.insert("units: \"")
+ + pynini.closure(NEMO_ALPHA, 1)
+ + pynutil.delete('-')
+ + pynutil.insert("\"")
+ + pynutil.insert(" cardinal { integer: \"")
+ + cardinal_graph
+ + pynutil.insert("\" } preserve_order: true")
+ )
+
+ alpha_dash_decimal = (
+ pynutil.insert("units: \"")
+ + pynini.closure(NEMO_ALPHA, 1)
+ + pynutil.delete('-')
+ + pynutil.insert("\"")
+ + pynutil.insert(" decimal { ")
+ + decimal.final_graph_wo_negative
+ + pynutil.insert(" } preserve_order: true")
+ )
+
+ final_graph = (
+ subgraph_decimal
+ | subgraph_cardinal
+ | subgraph_fraction
+ | cardinal_dash_alpha
+ | alpha_dash_cardinal
+ | decimal_dash_alpha
+ | decimal_times
+ | cardinal_times
+ | alpha_dash_decimal
+ )
+ final_graph += pynutil.insert(" preserve_order: true")
+ final_graph = self.add_tokens(final_graph)
+
+ self.fst = final_graph.optimize()
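
The cardinal/unit agreement above splits on whether the quantity is exactly "1" (gated by `(pynini.closure(NEMO_DIGIT) - "1")`). A minimal plain-Python sketch of that branch, with illustrative unit names:

```python
def pick_unit_form(quantity: str, singular: str, plural: str) -> str:
    """Exactly "1" selects the singular unit; anything else the plural.

    Mirrors the split between unit_singular_graph and unit_plural in
    subgraph_cardinal above.
    """
    return singular if quantity == "1" else plural

print(pick_unit_form("1", "gramo", "gramos"))   # -> gramo
print(pick_unit_form("24", "gramo", "gramos"))  # -> gramos
```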
diff --git a/nemo_text_processing/text_normalization/es/taggers/money.py b/nemo_text_processing/text_normalization/es/taggers/money.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/taggers/money.py
@@ -0,0 +1,194 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import (
+ NEMO_ALPHA,
+ NEMO_DIGIT,
+ NEMO_SIGMA,
+ NEMO_SPACE,
+ GraphFst,
+ delete_space,
+ insert_space,
+)
+from nemo_text_processing.text_normalization.es.graph_utils import decimal_separator
+from nemo_text_processing.text_normalization.es.utils import get_abs_path, load_labels
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ maj_singular_labels = load_labels(get_abs_path("data/money/currency_major.tsv"))
+ maj_singular = pynini.string_file((get_abs_path("data/money/currency_major.tsv")))
+ min_singular = pynini.string_file(get_abs_path("data/money/currency_minor.tsv"))
+ fem_plural = pynini.string_file((get_abs_path("data/money/currency_plural_fem.tsv")))
+ masc_plural = pynini.string_file((get_abs_path("data/money/currency_plural_masc.tsv")))
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ maj_singular_labels = None
+ min_singular = None
+ maj_singular = None
+ fem_plural = None
+ masc_plural = None
+
+ PYNINI_AVAILABLE = False
+
+
+class MoneyFst(GraphFst):
+ """
+ Finite state transducer for classifying money, e.g.
+ "โฌ1" -> money { currency_maj: "euro" integer_part: "un"}
+ "โฌ1,000" -> money { currency_maj: "euro" integer_part: "un" }
+ "โฌ1,001" -> money { currency_maj: "euro" integer_part: "un" fractional_part: "cero cero un" }
+ "ยฃ1,4" -> money { integer_part: "una" currency_maj: "libra" fractional_part: "cuarenta" preserve_order: true }
+ -> money { integer_part: "una" currency_maj: "libra" fractional_part: "cuarenta" currency_min: "penique" preserve_order: true }
+ "0,01 ยฃ" -> money { fractional_part: "un" currency_min: "penique" preserve_order: true }
+ "0,02 ยฃ" -> money { fractional_part: "dos" currency_min: "peniques" preserve_order: true }
+ "ยฃ0,01 million" -> money { currency_maj: "libra" integer_part: "cero" fractional_part: "cero un" quantity: "million" }
+
+ Args:
+ cardinal: CardinalFst
+ decimal: DecimalFst
+ deterministic: if True will provide a single transduction option,
+            for False, multiple transductions are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, cardinal: GraphFst, decimal: GraphFst, deterministic: bool = True):
+ super().__init__(name="money", kind="classify", deterministic=deterministic)
+ cardinal_graph = cardinal.graph
+ graph_decimal_final = decimal.final_graph_wo_negative
+
+ maj_singular_graph = maj_singular
+ min_singular_graph = min_singular
+ maj_plural_graph = maj_singular @ (fem_plural | masc_plural)
+ min_plural_graph = min_singular @ (fem_plural | masc_plural)
+
+ graph_maj_singular = pynutil.insert("currency_maj: \"") + maj_singular_graph + pynutil.insert("\"")
+ graph_maj_plural = pynutil.insert("currency_maj: \"") + maj_plural_graph + pynutil.insert("\"")
+
+ graph_integer_one = pynutil.insert("integer_part: \"") + pynini.cross("1", "un") + pynutil.insert("\"")
+
+ decimal_with_quantity = (NEMO_SIGMA + NEMO_ALPHA) @ graph_decimal_final
+
+ graph_decimal_plural = pynini.union(
+ graph_maj_plural + pynini.closure(delete_space, 0, 1) + insert_space + graph_decimal_final, # $1,05
+ graph_decimal_final + pynini.closure(delete_space, 0, 1) + insert_space + graph_maj_plural, # 1,05 $
+ )
+ graph_decimal_plural = (
+ (NEMO_SIGMA - "1") + decimal_separator + NEMO_SIGMA
+ ) @ graph_decimal_plural # Can't have "un euros"
+
+ graph_decimal_singular = pynini.union(
+ graph_maj_singular + pynini.closure(delete_space, 0, 1) + insert_space + graph_decimal_final, # $1,05
+ graph_decimal_final + pynini.closure(delete_space, 0, 1) + insert_space + graph_maj_singular, # 1,05 $
+ )
+ graph_decimal_singular = (pynini.accep("1") + decimal_separator + NEMO_SIGMA) @ graph_decimal_singular
+
+ graph_decimal = pynini.union(
+ graph_decimal_singular,
+ graph_decimal_plural,
+ graph_maj_plural + pynini.closure(delete_space, 0, 1) + insert_space + decimal_with_quantity,
+ )
+
+ graph_integer = (
+ pynutil.insert("integer_part: \"") + ((NEMO_SIGMA - "1") @ cardinal_graph) + pynutil.insert("\"")
+ )
+
+ graph_integer_only = pynini.union(
+ graph_maj_singular + pynini.closure(delete_space, 0, 1) + insert_space + graph_integer_one,
+ graph_integer_one + pynini.closure(delete_space, 0, 1) + insert_space + graph_maj_singular,
+ )
+ graph_integer_only |= pynini.union(
+ graph_maj_plural + pynini.closure(delete_space, 0, 1) + insert_space + graph_integer,
+ graph_integer + pynini.closure(delete_space, 0, 1) + insert_space + graph_maj_plural,
+ )
+
+ graph = graph_integer_only | graph_decimal
+
+ # remove trailing zeros of non zero number in the first 2 digits and fill up to 2 digits
+ # e.g. 2000 -> 20, 0200->02, 01 -> 01, 10 -> 10
+ # not accepted: 002, 00, 0,
+ two_digits_fractional_part = (
+ pynini.closure(NEMO_DIGIT) + (NEMO_DIGIT - "0") + pynini.closure(pynutil.delete("0"))
+ ) @ (
+ (pynutil.delete("0") + (NEMO_DIGIT - "0"))
+ | ((NEMO_DIGIT - "0") + pynutil.insert("0"))
+ | ((NEMO_DIGIT - "0") + NEMO_DIGIT)
+ )
+
+ graph_min_singular = pynutil.insert("currency_min: \"") + min_singular_graph + pynutil.insert("\"")
+ graph_min_plural = pynutil.insert("currency_min: \"") + min_plural_graph + pynutil.insert("\"")
+
+ # format ** euro ** cent
+ decimal_graph_with_minor = None
+ for curr_symbol, _ in maj_singular_labels:
+ preserve_order = pynutil.insert(" preserve_order: true")
+
+ integer_plus_maj = pynini.union(
+ graph_integer + insert_space + pynutil.insert(curr_symbol) @ graph_maj_plural,
+ graph_integer_one + insert_space + pynutil.insert(curr_symbol) @ graph_maj_singular,
+ )
+ # non zero integer part
+ integer_plus_maj = (pynini.closure(NEMO_DIGIT) - "0") @ integer_plus_maj
+
+ graph_fractional_one = (
+ pynutil.insert("fractional_part: \"")
+ + two_digits_fractional_part @ pynini.cross("1", "un")
+ + pynutil.insert("\"")
+ )
+
+ graph_fractional = (
+ two_digits_fractional_part @ (pynini.closure(NEMO_DIGIT, 1, 2) - "1") @ cardinal.two_digit_non_zero
+ )
+ graph_fractional = pynutil.insert("fractional_part: \"") + graph_fractional + pynutil.insert("\"")
+
+ fractional_plus_min = pynini.union(
+ graph_fractional + insert_space + pynutil.insert(curr_symbol) @ graph_min_plural,
+ graph_fractional_one + insert_space + pynutil.insert(curr_symbol) @ graph_min_singular,
+ )
+
+ decimal_graph_with_minor_curr = (
+ integer_plus_maj + pynini.cross(decimal_separator, NEMO_SPACE) + fractional_plus_min
+ )
+ decimal_graph_with_minor_curr |= pynutil.add_weight(
+ integer_plus_maj
+ + pynini.cross(decimal_separator, NEMO_SPACE)
+ + pynutil.insert("fractional_part: \"")
+ + two_digits_fractional_part @ cardinal.two_digit_non_zero
+ + pynutil.insert("\""),
+ weight=0.0001,
+ )
+
+ decimal_graph_with_minor_curr |= pynutil.delete("0,") + fractional_plus_min
+ decimal_graph_with_minor_curr = pynini.union(
+ pynutil.delete(curr_symbol)
+ + pynini.closure(delete_space, 0, 1)
+ + decimal_graph_with_minor_curr
+ + preserve_order,
+ decimal_graph_with_minor_curr
+ + preserve_order
+ + pynini.closure(delete_space, 0, 1)
+ + pynutil.delete(curr_symbol),
+ )
+
+ decimal_graph_with_minor = (
+ decimal_graph_with_minor_curr
+ if decimal_graph_with_minor is None
+ else pynini.union(decimal_graph_with_minor, decimal_graph_with_minor_curr)
+ )
+
+ final_graph = graph | pynutil.add_weight(decimal_graph_with_minor, -0.001)
+
+ final_graph = self.add_tokens(final_graph)
+ self.fst = final_graph.optimize()
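
The `two_digits_fractional_part` transducer strips trailing zeros after the last significant digit and reshapes the result into the one- or two-digit value handed to the cardinal verbalizer. The following is one plain-Python reading of that composed pipeline (a sketch, with `None` standing in for inputs the grammar rejects):

```python
from typing import Optional

def normalize_cents(frac: str) -> Optional[str]:
    """Reduce a decimal-fraction string to its cents value.

    "2000" -> "20" (trailing zeros stripped, padded to the tens place)
    "01"   -> "1"  (leading zero dropped: verbalized as "un")
    "45"   -> "45"
    Rejects all-zero input and 3+ significant digits, as the FST does.
    """
    stripped = frac.rstrip("0")
    if stripped == "" or len(stripped) > 2:
        return None                        # no nonzero digit / too precise
    if len(stripped) == 1:                 # "2", "2000": tens place -> "20"
        return stripped + "0"
    return stripped[1] if stripped[0] == "0" else stripped

print(normalize_cents("2000"))  # -> 20
print(normalize_cents("01"))    # -> 1
```

This matches the docstring examples, e.g. "0,01 £" being read with the single-digit "un penique".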
diff --git a/nemo_text_processing/text_normalization/es/taggers/ordinal.py b/nemo_text_processing/text_normalization/es/taggers/ordinal.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/taggers/ordinal.py
@@ -0,0 +1,186 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import (
+ NEMO_CHAR,
+ NEMO_SIGMA,
+ NEMO_SPACE,
+ GraphFst,
+ delete_space,
+)
+from nemo_text_processing.text_normalization.es.graph_utils import roman_to_int, strip_accent
+from nemo_text_processing.text_normalization.es.utils import get_abs_path
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ digit = pynini.invert(pynini.string_file(get_abs_path("data/ordinals/digit.tsv")))
+ teens = pynini.invert(pynini.string_file(get_abs_path("data/ordinals/teen.tsv")))
+ twenties = pynini.invert(pynini.string_file(get_abs_path("data/ordinals/twenties.tsv")))
+ ties = pynini.invert(pynini.string_file(get_abs_path("data/ordinals/ties.tsv")))
+ hundreds = pynini.invert(pynini.string_file(get_abs_path("data/ordinals/hundreds.tsv")))
+
+ PYNINI_AVAILABLE = True
+
+except (ImportError, ModuleNotFoundError):
+ digit = None
+ teens = None
+ twenties = None
+ ties = None
+ hundreds = None
+
+ PYNINI_AVAILABLE = False
+
+
+def get_one_to_one_thousand(cardinal: 'pynini.FstLike') -> 'pynini.FstLike':
+ """
+ Produces an acceptor for verbalizations of all numbers from 1 to 1000. Needed for ordinals and fractions.
+
+ Args:
+ cardinal: CardinalFst
+
+ Returns:
+ fst: A pynini.FstLike object
+ """
+ numbers = pynini.string_map([str(_) for _ in range(1, 1000)]) @ cardinal
+ return pynini.project(numbers, "output").optimize()
+
+
+class OrdinalFst(GraphFst):
+ """
+ Finite state transducer for classifying ordinal
+ "21.ยบ" -> ordinal { integer: "vigรฉsimo primero" morphosyntactic_features: "gender_masc" }
+    This class converts ordinals up to the millionth (millonésimo) order (exclusive).
+
+ This FST also records the ending of the ordinal (called "morphosyntactic_features"):
+ either as gender_masc, gender_fem, or apocope. Also introduces plural feature for non-deterministic graphs.
+
+ Args:
+ cardinal: CardinalFst
+ deterministic: if True will provide a single transduction option,
+            for False, multiple transductions are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, cardinal: GraphFst, deterministic: bool = True):
+ super().__init__(name="ordinal", kind="classify")
+ cardinal_graph = cardinal.graph
+
+ graph_digit = digit.optimize()
+ graph_teens = teens.optimize()
+ graph_ties = ties.optimize()
+ graph_twenties = twenties.optimize()
+ graph_hundreds = hundreds.optimize()
+
+ if not deterministic:
+ # Some alternative derivations
+ graph_ties = graph_ties | pynini.cross("sesenta", "setuagรฉsimo")
+
+ graph_teens = graph_teens | pynini.cross("once", "decimoprimero")
+ graph_teens |= pynini.cross("doce", "decimosegundo")
+
+ graph_digit = graph_digit | pynini.cross("nueve", "nono")
+ graph_digit |= pynini.cross("siete", "sรฉtimo")
+
+ graph_tens_component = (
+ graph_teens
+ | (graph_ties + pynini.closure(pynini.cross(" y ", NEMO_SPACE) + graph_digit, 0, 1))
+ | graph_twenties
+ )
+
+ graph_hundred_component = pynini.union(
+ graph_hundreds + pynini.closure(NEMO_SPACE + pynini.union(graph_tens_component, graph_digit), 0, 1),
+ graph_tens_component,
+ graph_digit,
+ )
+
+ # Need to go up to thousands for fractions
+ self.one_to_one_thousand = get_one_to_one_thousand(cardinal_graph)
+
+ thousands = pynini.cross("mil", "milรฉsimo")
+
+ graph_thousands = (
+ strip_accent(self.one_to_one_thousand) + NEMO_SPACE + thousands
+        ) # Cardinals become a prefix for the thousands series. Since the accent falls on the power of ten, we strip accents from the leading words
+ graph_thousands @= pynini.cdrewrite(delete_space, "", "milรฉsimo", NEMO_SIGMA) # merge as a prefix
+ graph_thousands |= thousands
+
+ self.multiples_of_thousand = (cardinal_graph @ graph_thousands).optimize()
+
+ if (
+ not deterministic
+ ): # Formally the words preceding the power of ten should be a prefix, but some maintain word boundaries.
+ graph_thousands |= (self.one_to_one_thousand @ graph_hundred_component) + NEMO_SPACE + thousands
+
+ graph_thousands += pynini.closure(NEMO_SPACE + graph_hundred_component, 0, 1)
+
+ ordinal_graph = graph_thousands | graph_hundred_component
+ ordinal_graph = cardinal_graph @ ordinal_graph
+
+ if not deterministic:
+ # The 10's and 20's series can also be two words
+ split_words = pynini.cross("decimo", "dรฉcimo ") | pynini.cross("vigesimo", "vigรฉsimo ")
+ split_words = pynini.cdrewrite(split_words, "", NEMO_CHAR, NEMO_SIGMA)
+ ordinal_graph |= ordinal_graph @ split_words
+
+        # If "octavo" is preceded by an "o" within the string, that "o" needs deletion
+ ordinal_graph @= pynini.cdrewrite(pynutil.delete("o"), "", "octavo", NEMO_SIGMA)
+
+ self.graph = ordinal_graph.optimize()
+
+ masc = pynini.accep("gender_masc")
+ fem = pynini.accep("gender_fem")
+ apocope = pynini.accep("apocope")
+
+        delete_period = pynini.closure(pynutil.delete("."), 0, 1)  # Sometimes the period is omitted
+
+ accept_masc = delete_period + pynini.cross("ยบ", masc)
+ accep_fem = delete_period + pynini.cross("ยช", fem)
+ accep_apocope = delete_period + pynini.cross("แตสณ", apocope)
+
+ # Managing Romanization
+ graph_roman = pynutil.insert("integer: \"") + roman_to_int(ordinal_graph) + pynutil.insert("\"")
+ if not deterministic:
+ # Introduce plural
+ plural = pynini.closure(pynutil.insert("/plural"), 0, 1)
+ accept_masc += plural
+ accep_fem += plural
+
+ # Romanizations have no morphology marker, so in non-deterministic case we provide option for all
+ insert_morphology = pynutil.insert(pynini.union(masc, fem)) + plural
+ insert_morphology |= pynutil.insert(apocope)
+ insert_morphology = (
+ pynutil.insert(" morphosyntactic_features: \"") + insert_morphology + pynutil.insert("\"")
+ )
+
+ graph_roman += insert_morphology
+
+ else:
+ # We assume masculine gender as default
+ graph_roman += pynutil.insert(" morphosyntactic_features: \"gender_masc\"")
+
+ # Rest of graph
+ convert_abbreviation = accept_masc | accep_fem | accep_apocope
+
+ graph = (
+ pynutil.insert("integer: \"")
+ + ordinal_graph
+ + pynutil.insert("\"")
+ + pynutil.insert(" morphosyntactic_features: \"")
+ + convert_abbreviation
+ + pynutil.insert("\"")
+ )
+ graph = pynini.union(graph, graph_roman)
+
+ final_graph = self.add_tokens(graph)
+ self.fst = final_graph.optimize()
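
`roman_to_int` is imported from the language's graph_utils; its effect can be illustrated with an ordinary subtractive-notation parser in plain Python (a sketch of the numeral mapping, not the FST helper itself):

```python
def roman_to_int(s: str) -> int:
    """Parse a Roman numeral in subtractive notation into an integer."""
    values = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}
    total = 0
    for ch, nxt in zip(s, s[1:] + " "):
        v = values[ch]
        # a smaller value before a larger one subtracts: IV = 4, IX = 9
        total += -v if values.get(nxt, 0) > v else v
    return total

print(roman_to_int("XXI"))  # -> 21, matching the "21.º" docstring example
print(roman_to_int("XIV"))  # -> 14
```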
diff --git a/nemo_text_processing/text_normalization/es/taggers/telephone.py b/nemo_text_processing/text_normalization/es/taggers/telephone.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/taggers/telephone.py
@@ -0,0 +1,156 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import NEMO_SIGMA, GraphFst, insert_space
+from nemo_text_processing.text_normalization.es.graph_utils import ones
+from nemo_text_processing.text_normalization.es.utils import get_abs_path
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ graph_digit = pynini.string_file(get_abs_path("data/numbers/digit.tsv"))
+ graph_ties = pynini.string_file(get_abs_path("data/numbers/ties.tsv"))
+ graph_teen = pynini.string_file(get_abs_path("data/numbers/teen.tsv"))
+ graph_twenties = pynini.string_file(get_abs_path("data/numbers/twenties.tsv"))
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ graph_digit = None
+ graph_ties = None
+ graph_teen = None
+ graph_twenties = None
+
+ PYNINI_AVAILABLE = False
+
+
+class TelephoneFst(GraphFst):
+ """
+ Finite state transducer for classifying telephone numbers, e.g.
+ 123-123-5678 -> { number_part: "uno dos tres uno dos tres cinco seis siete ocho" }.
+ In Spanish, digits are generally read individually, or as 2-digit numbers,
+ e.g. "123" = "uno dos tres",
+ "1234" = "doce treinta y cuatro".
+ This will verbalize sequences of 10 (3+3+4, e.g. 123-456-7890),
+ 9 (3+3+3, e.g. 123-456-789) and 8 (4+4, e.g. 1234-5678) digits.
+
+ (we ignore more complicated cases such as "doscientos y dos" or "tres nueves").
+
+ Args:
+ deterministic: if True will provide a single transduction option,
+ for False multiple transductions are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, deterministic: bool = True):
+ super().__init__(name="telephone", kind="classify")
+
+ # create `single_digits` and `double_digits` graphs as these will be
+ # the building blocks of possible telephone numbers
+ single_digits = pynini.invert(graph_digit).optimize() | pynini.cross("0", "cero")
+
+ double_digits = pynini.union(
+ graph_twenties,
+ graph_teen,
+ (graph_ties + pynutil.delete("0")),
+ (graph_ties + insert_space + pynutil.insert("y") + insert_space + graph_digit),
+ )
+ double_digits = pynini.invert(double_digits)
+
+ # define `ten_digit_graph`, `nine_digit_graph`, `eight_digit_graph`
+ # which produces telephone numbers spoken (1) only with single digits,
+ # or (2) spoken with double digits (and sometimes single digits)
+
+ # 10-digit option (1): all single digits
+ ten_digit_graph = (
+ pynini.closure(single_digits + insert_space, 3, 3)
+ + pynutil.delete("-")
+ + pynini.closure(single_digits + insert_space, 3, 3)
+ + pynutil.delete("-")
+ + pynini.closure(single_digits + insert_space, 3, 3)
+ + single_digits
+ )
+
+ # 9-digit option (1): all single digits
+ nine_digit_graph = (
+ pynini.closure(single_digits + insert_space, 3, 3)
+ + pynutil.delete("-")
+ + pynini.closure(single_digits + insert_space, 3, 3)
+ + pynutil.delete("-")
+ + pynini.closure(single_digits + insert_space, 2, 2)
+ + single_digits
+ )
+
+ # 8-digit option (1): all single digits
+ eight_digit_graph = (
+ pynini.closure(single_digits + insert_space, 4, 4)
+ + pynutil.delete("-")
+ + pynini.closure(single_digits + insert_space, 3, 3)
+ + single_digits
+ )
+
+ if not deterministic:
+ # 10-digit option (2): (1+2) + (1+2) + (2+2) digits
+ ten_digit_graph |= (
+ single_digits
+ + insert_space
+ + double_digits
+ + insert_space
+ + pynutil.delete("-")
+ + single_digits
+ + insert_space
+ + double_digits
+ + insert_space
+ + pynutil.delete("-")
+ + double_digits
+ + insert_space
+ + double_digits
+ )
+
+ # 9-digit option (2): (1+2) + (1+2) + (1+2) digits
+ nine_digit_graph |= (
+ single_digits
+ + insert_space
+ + double_digits
+ + insert_space
+ + pynutil.delete("-")
+ + single_digits
+ + insert_space
+ + double_digits
+ + insert_space
+ + pynutil.delete("-")
+ + single_digits
+ + insert_space
+ + double_digits
+ )
+
+ # 8-digit option (2): (2+2) + (2+2) digits
+ eight_digit_graph |= (
+ double_digits
+ + insert_space
+ + double_digits
+ + insert_space
+ + pynutil.delete("-")
+ + double_digits
+ + insert_space
+ + double_digits
+ )
+
+ number_part = pynini.union(ten_digit_graph, nine_digit_graph, eight_digit_graph)
+ number_part @= pynini.cdrewrite(pynini.cross(ones, "uno"), "", "", NEMO_SIGMA)
+
+ number_part = pynutil.insert("number_part: \"") + number_part + pynutil.insert("\"")
+
+ graph = number_part
+ final_graph = self.add_tokens(graph)
+ self.fst = final_graph.optimize()
diff --git a/nemo_text_processing/text_normalization/es/taggers/time.py b/nemo_text_processing/text_normalization/es/taggers/time.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/taggers/time.py
@@ -0,0 +1,218 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import (
+ NEMO_DIGIT,
+ NEMO_SIGMA,
+ GraphFst,
+ delete_space,
+ insert_space,
+)
+from nemo_text_processing.text_normalization.es.utils import get_abs_path
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ time_zone_graph = pynini.string_file(get_abs_path("data/time/time_zone.tsv"))
+ suffix = pynini.string_file(get_abs_path("data/time/time_suffix.tsv"))
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ time_zone_graph = None
+ suffix = None
+
+ PYNINI_AVAILABLE = False
+
+
+class TimeFst(GraphFst):
+ """
+ Finite state transducer for classifying time, e.g.
+ "02:15 est" -> time { hours: "dos" minutes: "quince" zone: "e s t"}
+ "2 h" -> time { hours: "dos" }
+ "9 h" -> time { hours: "nueve" }
+ "02:15:10 h" -> time { hours: "dos" minutes: "quince" seconds: "diez"}
+
+ Args:
+ cardinal: CardinalFst
+ deterministic: if True will provide a single transduction option,
+ for False multiple transductions are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, cardinal: GraphFst, deterministic: bool = True):
+ super().__init__(name="time", kind="classify", deterministic=deterministic)
+
+ delete_time_delimiter = pynutil.delete(pynini.union(".", ":"))
+
+ one = pynini.string_map([("un", "una"), ("ún", "una")])
+ change_one = pynini.cdrewrite(one, "", "", NEMO_SIGMA)
+ cardinal_graph = cardinal.graph @ change_one
+
+ day_suffix = pynutil.insert("suffix: \"") + suffix + pynutil.insert("\"")
+ day_suffix = delete_space + insert_space + day_suffix
+
+ delete_hora_suffix = delete_space + insert_space + pynutil.delete("h")
+ delete_minute_suffix = delete_space + insert_space + pynutil.delete("min")
+ delete_second_suffix = delete_space + insert_space + pynutil.delete("s")
+
+ labels_hour_24 = [
+ str(x) for x in range(0, 25)
+ ] # Both 12- and 24-hour systems occur; twelve-hour times require am/pm for ambiguity resolution
+ labels_hour_12 = [str(x) for x in range(1, 13)]
+ labels_minute_single = [str(x) for x in range(1, 10)]
+ labels_minute_double = [str(x) for x in range(10, 60)]
+
+ delete_leading_zero_to_double_digit = (
+ pynini.closure(pynutil.delete("0") | (NEMO_DIGIT - "0"), 0, 1) + NEMO_DIGIT
+ )
+
+ graph_24 = (
+ pynini.closure(NEMO_DIGIT, 1, 2) @ delete_leading_zero_to_double_digit @ pynini.union(*labels_hour_24)
+ )
+ graph_12 = (
+ pynini.closure(NEMO_DIGIT, 1, 2) @ delete_leading_zero_to_double_digit @ pynini.union(*labels_hour_12)
+ )
+
+ graph_hour_24 = graph_24 @ cardinal_graph
+ graph_hour_12 = graph_12 @ cardinal_graph
+
+ graph_minute_single = delete_leading_zero_to_double_digit @ pynini.union(*labels_minute_single)
+ graph_minute_double = pynini.union(*labels_minute_double)
+
+ graph_minute = pynini.union(graph_minute_single, graph_minute_double) @ cardinal_graph
+
+ final_graph_hour_only_24 = (
+ pynutil.insert("hours: \"") + graph_hour_24 + pynutil.insert("\"") + delete_hora_suffix
+ )
+ final_graph_hour_only_12 = pynutil.insert("hours: \"") + graph_hour_12 + pynutil.insert("\"") + day_suffix
+
+ final_graph_hour_24 = pynutil.insert("hours: \"") + graph_hour_24 + pynutil.insert("\"")
+ final_graph_hour_12 = pynutil.insert("hours: \"") + graph_hour_12 + pynutil.insert("\"")
+
+ final_graph_minute = pynutil.insert("minutes: \"") + graph_minute + pynutil.insert("\"")
+ final_graph_second = pynutil.insert("seconds: \"") + graph_minute + pynutil.insert("\"")
+ final_time_zone_optional = pynini.closure(
+ delete_space + insert_space + pynutil.insert("zone: \"") + time_zone_graph + pynutil.insert("\""), 0, 1,
+ )
+
+ # 02.30 h
+ graph_hm = (
+ final_graph_hour_24
+ + delete_time_delimiter
+ + (pynutil.delete("00") | (insert_space + final_graph_minute))
+ + pynini.closure(
+ delete_time_delimiter + (pynini.cross("00", " seconds: \"0\"") | (insert_space + final_graph_second)),
+ 0,
+ 1,
+ ) # For seconds 2.30.35 h
+ + pynini.closure(delete_hora_suffix, 0, 1) # 2.30 is valid if unambiguous
+ + final_time_zone_optional
+ )
+
+ # 2 h 30 min
+ graph_hm |= (
+ final_graph_hour_24
+ + delete_hora_suffix
+ + delete_space
+ + (pynutil.delete("00") | (insert_space + final_graph_minute))
+ + delete_minute_suffix
+ + pynini.closure(
+ delete_space
+ + (pynini.cross("00", " seconds: \"0\"") | (insert_space + final_graph_second))
+ + delete_second_suffix,
+ 0,
+ 1,
+ ) # For seconds
+ + final_time_zone_optional
+ )
+
+ # 2.30 a. m. (Only for 12 hour clock)
+ graph_hm |= (
+ final_graph_hour_12
+ + delete_time_delimiter
+ + (pynutil.delete("00") | (insert_space + final_graph_minute))
+ + pynini.closure(
+ delete_time_delimiter + (pynini.cross("00", " seconds: \"0\"") | (insert_space + final_graph_second)),
+ 0,
+ 1,
+ ) # For seconds 2.30.35 a. m.
+ + day_suffix
+ + final_time_zone_optional
+ )
+
+ graph_h = (
+ pynini.union(final_graph_hour_only_24, final_graph_hour_only_12) + final_time_zone_optional
+ ) # Should always have a time indicator, else we'll pass to cardinals
+
+ if not deterministic:
+ # This includes alternate vocalizations (hour menos min, min para hour); here we shift the times and indicate a `style` tag
+ hour_shift_24 = pynini.invert(pynini.string_file(get_abs_path("data/time/hour_to_24.tsv")))
+ hour_shift_12 = pynini.invert(pynini.string_file(get_abs_path("data/time/hour_to_12.tsv")))
+ minute_shift = pynini.string_file(get_abs_path("data/time/minute_to.tsv"))
+
+ graph_hour_to_24 = graph_24 @ hour_shift_24 @ cardinal_graph
+ graph_hour_to_12 = graph_12 @ hour_shift_12 @ cardinal_graph
+
+ graph_minute_to = pynini.union(graph_minute_single, graph_minute_double) @ minute_shift @ cardinal_graph
+
+ final_graph_hour_to_24 = pynutil.insert("hours: \"") + graph_hour_to_24 + pynutil.insert("\"")
+ final_graph_hour_to_12 = pynutil.insert("hours: \"") + graph_hour_to_12 + pynutil.insert("\"")
+
+ final_graph_minute_to = pynutil.insert("minutes: \"") + graph_minute_to + pynutil.insert("\"")
+
+ graph_menos = pynutil.insert(" style: \"1\"")
+ graph_para = pynutil.insert(" style: \"2\"")
+
+ final_graph_style = graph_menos | graph_para
+
+ # 02.30 h (omitting seconds since a bit awkward)
+ graph_hm |= (
+ final_graph_hour_to_24
+ + delete_time_delimiter
+ + insert_space
+ + final_graph_minute_to
+ + pynini.closure(delete_hora_suffix, 0, 1) # 2.30 is valid if unambiguous
+ + final_time_zone_optional
+ + final_graph_style
+ )
+
+ # 2 h 30 min
+ graph_hm |= (
+ final_graph_hour_to_24
+ + delete_hora_suffix
+ + delete_space
+ + insert_space
+ + final_graph_minute_to
+ + delete_minute_suffix
+ + final_time_zone_optional
+ + final_graph_style
+ )
+
+ # 2.30 a. m. (Only for 12 hour clock)
+ graph_hm |= (
+ final_graph_hour_to_12
+ + delete_time_delimiter
+ + insert_space
+ + final_graph_minute_to
+ + day_suffix
+ + final_time_zone_optional
+ + final_graph_style
+ )
+
+ final_graph = graph_hm | graph_h
+ if deterministic:
+ final_graph = final_graph + pynutil.insert(" preserve_order: true")
+ final_graph = final_graph.optimize()
+ final_graph = self.add_tokens(final_graph)
+ self.fst = final_graph.optimize()
diff --git a/nemo_text_processing/text_normalization/es/taggers/tokenize_and_classify.py b/nemo_text_processing/text_normalization/es/taggers/tokenize_and_classify.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/taggers/tokenize_and_classify.py
@@ -0,0 +1,157 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+
+from nemo_text_processing.text_normalization.en.graph_utils import (
+ NEMO_WHITE_SPACE,
+ GraphFst,
+ delete_extra_space,
+ delete_space,
+ generator_main,
+)
+from nemo_text_processing.text_normalization.en.taggers.punctuation import PunctuationFst
+from nemo_text_processing.text_normalization.es.taggers.cardinal import CardinalFst
+from nemo_text_processing.text_normalization.es.taggers.date import DateFst
+from nemo_text_processing.text_normalization.es.taggers.decimals import DecimalFst
+from nemo_text_processing.text_normalization.es.taggers.electronic import ElectronicFst
+from nemo_text_processing.text_normalization.es.taggers.fraction import FractionFst
+from nemo_text_processing.text_normalization.es.taggers.measure import MeasureFst
+from nemo_text_processing.text_normalization.es.taggers.money import MoneyFst
+from nemo_text_processing.text_normalization.es.taggers.ordinal import OrdinalFst
+from nemo_text_processing.text_normalization.es.taggers.telephone import TelephoneFst
+from nemo_text_processing.text_normalization.es.taggers.time import TimeFst
+from nemo_text_processing.text_normalization.es.taggers.whitelist import WhiteListFst
+from nemo_text_processing.text_normalization.es.taggers.word import WordFst
+
+from nemo.utils import logging
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ PYNINI_AVAILABLE = False
+
+
+class ClassifyFst(GraphFst):
+ """
+ Final class that composes all other classification grammars. This class can process an entire sentence that is lower cased.
+ For deployment, this grammar will be compiled and exported to an OpenFst Finite State aRchive (FAR) file.
+ More details on deployment at NeMo/tools/text_processing_deployment.
+
+ Args:
+ input_case: accepting either "lower_cased" or "cased" input.
+ deterministic: if True will provide a single transduction option,
+ for False multiple options (used for audio-based normalization)
+ cache_dir: path to a dir with .far grammar file. Set to None to avoid using cache.
+ overwrite_cache: set to True to overwrite .far files
+ whitelist: path to a file with whitelist replacements
+ """
+
+ def __init__(
+ self,
+ input_case: str,
+ deterministic: bool = False,
+ cache_dir: str = None,
+ overwrite_cache: bool = False,
+ whitelist: str = None,
+ ):
+ super().__init__(name="tokenize_and_classify", kind="classify", deterministic=deterministic)
+ far_file = None
+ if cache_dir is not None and cache_dir != "None":
+ os.makedirs(cache_dir, exist_ok=True)
+ whitelist_file = os.path.basename(whitelist) if whitelist else ""
+ far_file = os.path.join(
+ cache_dir, f"_{input_case}_es_tn_{deterministic}_deterministic{whitelist_file}.far"
+ )
+ if not overwrite_cache and far_file and os.path.exists(far_file):
+ self.fst = pynini.Far(far_file, mode="r")["tokenize_and_classify"]
+ logging.info(f"ClassifyFst.fst was restored from {far_file}.")
+ else:
+ logging.info(f"Creating ClassifyFst grammars. This might take some time...")
+
+ self.cardinal = CardinalFst(deterministic=deterministic)
+ cardinal_graph = self.cardinal.fst
+
+ self.ordinal = OrdinalFst(cardinal=self.cardinal, deterministic=deterministic)
+ ordinal_graph = self.ordinal.fst
+
+ self.decimal = DecimalFst(cardinal=self.cardinal, deterministic=deterministic)
+ decimal_graph = self.decimal.fst
+
+ self.fraction = FractionFst(cardinal=self.cardinal, ordinal=self.ordinal, deterministic=deterministic)
+ fraction_graph = self.fraction.fst
+ self.measure = MeasureFst(
+ cardinal=self.cardinal, decimal=self.decimal, fraction=self.fraction, deterministic=deterministic
+ )
+ measure_graph = self.measure.fst
+ self.date = DateFst(cardinal=self.cardinal, deterministic=deterministic)
+ date_graph = self.date.fst
+ word_graph = WordFst(deterministic=deterministic).fst
+ self.time = TimeFst(self.cardinal, deterministic=deterministic)
+ time_graph = self.time.fst
+ self.telephone = TelephoneFst(deterministic=deterministic)
+ telephone_graph = self.telephone.fst
+ self.electronic = ElectronicFst(deterministic=deterministic)
+ electronic_graph = self.electronic.fst
+ self.money = MoneyFst(cardinal=self.cardinal, decimal=self.decimal, deterministic=deterministic)
+ money_graph = self.money.fst
+ self.whitelist = WhiteListFst(input_case=input_case, deterministic=deterministic, input_file=whitelist)
+ whitelist_graph = self.whitelist.fst
+ punct_graph = PunctuationFst(deterministic=deterministic).fst
+
+ classify = (
+ pynutil.add_weight(whitelist_graph, 1.01)
+ | pynutil.add_weight(time_graph, 1.09)
+ | pynutil.add_weight(measure_graph, 1.08)
+ | pynutil.add_weight(cardinal_graph, 1.1)
+ | pynutil.add_weight(fraction_graph, 1.09)
+ | pynutil.add_weight(date_graph, 1.1)
+ | pynutil.add_weight(ordinal_graph, 1.1)
+ | pynutil.add_weight(decimal_graph, 1.1)
+ | pynutil.add_weight(money_graph, 1.1)
+ | pynutil.add_weight(telephone_graph, 1.1)
+ | pynutil.add_weight(electronic_graph, 1.1)
+ | pynutil.add_weight(word_graph, 200)
+ )
+ punct = pynutil.insert("tokens { ") + pynutil.add_weight(punct_graph, weight=2.1) + pynutil.insert(" }")
+ punct = pynini.closure(
+ pynini.compose(pynini.closure(NEMO_WHITE_SPACE, 1), delete_extra_space)
+ | (pynutil.insert(" ") + punct),
+ 1,
+ )
+ token = pynutil.insert("tokens { ") + classify + pynutil.insert(" }")
+ token_plus_punct = (
+ pynini.closure(punct + pynutil.insert(" ")) + token + pynini.closure(pynutil.insert(" ") + punct)
+ )
+
+ graph = token_plus_punct + pynini.closure(
+ (
+ pynini.compose(pynini.closure(NEMO_WHITE_SPACE, 1), delete_extra_space)
+ | (pynutil.insert(" ") + punct + pynutil.insert(" "))
+ )
+ + token_plus_punct
+ )
+
+ graph = delete_space + graph + delete_space
+ graph |= punct
+
+ self.fst = graph.optimize()
+
+ if far_file:
+ generator_main(far_file, {"tokenize_and_classify": self.fst})
+ logging.info(f"ClassifyFst grammars are saved to {far_file}.")
diff --git a/nemo_text_processing/text_normalization/es/taggers/whitelist.py b/nemo_text_processing/text_normalization/es/taggers/whitelist.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/taggers/whitelist.py
@@ -0,0 +1,69 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import GraphFst, convert_space
+from nemo_text_processing.text_normalization.es.utils import get_abs_path, load_labels
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ PYNINI_AVAILABLE = False
+
+
+class WhiteListFst(GraphFst):
+ """
+ Finite state transducer for classifying whitelist, e.g.
+ "sr." -> tokens { name: "seรฑor" }
+ This class has the highest priority among all classifier grammars. Whitelisted tokens are defined and loaded from "data/whitelist.tsv".
+
+ Args:
+ input_case: accepting either "lower_cased" or "cased" input.
+ deterministic: if True will provide a single transduction option,
+ for False multiple options (used for audio-based normalization)
+ input_file: path to a file with whitelist replacements
+ """
+
+ def __init__(self, input_case: str, deterministic: bool = True, input_file: str = None):
+ super().__init__(name="whitelist", kind="classify", deterministic=deterministic)
+
+ def _get_whitelist_graph(input_case, file):
+ whitelist = load_labels(file)
+ if input_case == "lower_cased":
+ whitelist = [[x[0].lower()] + x[1:] for x in whitelist]
+ graph = pynini.string_map(whitelist)
+ return graph
+
+ graph = _get_whitelist_graph(input_case, get_abs_path("data/whitelist.tsv"))
+ if not deterministic and input_case != "lower_cased":
+ graph |= pynutil.add_weight(
+ _get_whitelist_graph("lower_cased", get_abs_path("data/whitelist.tsv")), weight=0.0001
+ )
+
+ if input_file:
+ whitelist_provided = _get_whitelist_graph(input_case, input_file)
+ if not deterministic:
+ graph |= whitelist_provided
+ else:
+ graph = whitelist_provided
+
+ if not deterministic:
+ units_graph = _get_whitelist_graph(input_case, file=get_abs_path("data/measures/measurements.tsv"))
+ graph |= units_graph
+
+ self.graph = graph
+ self.final_graph = convert_space(self.graph).optimize()
+ self.fst = (pynutil.insert("name: \"") + self.final_graph + pynutil.insert("\"")).optimize()
diff --git a/nemo_text_processing/text_normalization/es/taggers/word.py b/nemo_text_processing/text_normalization/es/taggers/word.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/taggers/word.py
@@ -0,0 +1,39 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import NEMO_NOT_SPACE, GraphFst
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ PYNINI_AVAILABLE = False
+
+
+class WordFst(GraphFst):
+ """
+ Finite state transducer for classifying word.
+ e.g. dormir -> tokens { name: "dormir" }
+
+ Args:
+ deterministic: if True will provide a single transduction option,
+ for False multiple transductions are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, deterministic: bool = True):
+ super().__init__(name="word", kind="classify")
+ word = pynutil.insert("name: \"") + pynini.closure(NEMO_NOT_SPACE, 1) + pynutil.insert("\"")
+ self.fst = word.optimize()
diff --git a/nemo_text_processing/text_normalization/es/utils.py b/nemo_text_processing/text_normalization/es/utils.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/utils.py
@@ -0,0 +1,42 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import csv
+import os
+
+
+def get_abs_path(rel_path):
+ """
+ Get absolute path
+
+ Args:
+ rel_path: relative path to this file
+
+ Returns absolute path
+ """
+ return os.path.dirname(os.path.abspath(__file__)) + '/' + rel_path
+
+
+def load_labels(abs_path):
+ """
+ loads a tab-separated label file as a list of mappings
+
+ Args:
+ abs_path: absolute path
+
+ Returns a list of label mappings
+ """
+ with open(abs_path) as label_tsv:
+     labels = list(csv.reader(label_tsv, delimiter="\t"))
+ return labels
diff --git a/nemo_text_processing/text_normalization/es/verbalizers/__init__.py b/nemo_text_processing/text_normalization/es/verbalizers/__init__.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/verbalizers/__init__.py
@@ -0,0 +1,13 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/nemo_text_processing/text_normalization/es/verbalizers/cardinal.py b/nemo_text_processing/text_normalization/es/verbalizers/cardinal.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/verbalizers/cardinal.py
@@ -0,0 +1,57 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import NEMO_NOT_QUOTE, GraphFst
+from nemo_text_processing.text_normalization.es.graph_utils import shift_cardinal_gender, strip_cardinal_apocope
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ PYNINI_AVAILABLE = False
+
+
+class CardinalFst(GraphFst):
+ """
+ Finite state transducer for verbalizing cardinals
+ e.g. cardinal { integer: "dos" } -> "dos"
+
+ Args:
+ deterministic: if True will provide a single transduction option,
+ for False multiple transductions are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, deterministic: bool = True):
+ super().__init__(name="cardinal", kind="verbalize", deterministic=deterministic)
+ optional_sign = pynini.closure(pynini.cross("negative: \"true\" ", "menos "), 0, 1)
+ self.optional_sign = optional_sign
+
+ integer = pynini.closure(NEMO_NOT_QUOTE, 1)
+ self.integer = pynutil.delete(" \"") + integer + pynutil.delete("\"")
+
+ integer = pynutil.delete("integer:") + self.integer
+ self.numbers = integer
+ graph = optional_sign + self.numbers
+
+ if not deterministic:
+ # For alternate renderings
+ no_adjust = graph
+ fem_adjust = shift_cardinal_gender(graph)
+ apocope_adjust = strip_cardinal_apocope(graph)
+ graph = no_adjust | fem_adjust | apocope_adjust
+
+ delete_tokens = self.delete_tokens(graph)
+ self.fst = delete_tokens.optimize()
diff --git a/nemo_text_processing/text_normalization/es/verbalizers/date.py b/nemo_text_processing/text_normalization/es/verbalizers/date.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/verbalizers/date.py
@@ -0,0 +1,86 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import (
+ NEMO_NOT_QUOTE,
+ NEMO_SIGMA,
+ NEMO_SPACE,
+ GraphFst,
+ delete_preserve_order,
+)
+from nemo_text_processing.text_normalization.es.graph_utils import strip_cardinal_apocope
+from nemo_text_processing.text_normalization.es.taggers.date import articles
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ PYNINI_AVAILABLE = False
+
+
+class DateFst(GraphFst):
+ """
+ Finite state transducer for verbalizing date, e.g.
+ date { day: "treinta y uno" month: "marzo" year: "dos mil" } -> "treinta y uno de marzo de dos mil"
+ date { day: "uno" month: "mayo" year: "del mil novecientos noventa" } -> "primero de mayo del mil novecientos noventa"
+
+ Args:
+ deterministic: if True will provide a single transduction option,
+ for False multiple transductions are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, deterministic: bool = True):
+ super().__init__(name="date", kind="verbalize", deterministic=deterministic)
+
+ day_cardinal = pynutil.delete("day: \"") + pynini.closure(NEMO_NOT_QUOTE, 1) + pynutil.delete("\"")
+ day = strip_cardinal_apocope(day_cardinal)
+
+ primero = pynini.cdrewrite(pynini.cross("uno", "primero"), "[BOS]", "[EOS]", NEMO_SIGMA)
+ day = (
+ (day @ primero) if deterministic else pynini.union(day, day @ primero)
+ ) # Primero for first day is traditional, but will vary depending on region
+
+ month = pynutil.delete("month: \"") + pynini.closure(NEMO_NOT_QUOTE, 1) + pynutil.delete("\"")
+
+ year = (
+ pynutil.delete("year: \"")
+ + articles
+ + NEMO_SPACE
+ + pynini.closure(NEMO_NOT_QUOTE, 1)
+ + pynutil.delete("\"")
+ )
+
+ # Insert the preposition if it wasn't originally with the year; an article in the year string implies a space was present
+ year = pynutil.add_weight(year, -0.001)
+ year |= (
+ pynutil.delete("year: \"")
+ + pynutil.insert("de ")
+ + pynini.closure(NEMO_NOT_QUOTE, 1)
+ + pynutil.delete("\"")
+ )
+
+ # day month year
+ graph_dmy = day + pynini.cross(NEMO_SPACE, " de ") + month + pynini.closure(pynini.accep(" ") + year, 0, 1)
+
+ graph_mdy = month + NEMO_SPACE + day + pynini.closure(NEMO_SPACE + year, 0, 1)
+ if deterministic:
+ graph_mdy += pynutil.delete(" preserve_order: true") # Only accepts this if was explicitly passed
+
+ self.graph = graph_dmy | graph_mdy
+ final_graph = self.graph + delete_preserve_order
+
+ delete_tokens = self.delete_tokens(final_graph)
+ self.fst = delete_tokens.optimize()
diff --git a/nemo_text_processing/text_normalization/es/verbalizers/decimals.py b/nemo_text_processing/text_normalization/es/verbalizers/decimals.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/verbalizers/decimals.py
@@ -0,0 +1,87 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import (
+ NEMO_NOT_QUOTE,
+ GraphFst,
+ delete_preserve_order,
+ delete_space,
+ insert_space,
+)
+from nemo_text_processing.text_normalization.es import LOCALIZATION
+from nemo_text_processing.text_normalization.es.graph_utils import (
+ shift_cardinal_gender,
+ shift_number_gender,
+ strip_cardinal_apocope,
+)
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ PYNINI_AVAILABLE = False
+
+
+class DecimalFst(GraphFst):
+ """
+    Finite state transducer for verbalizing decimal, e.g.
+    decimal { negative: "true" integer_part: "dos" fractional_part: "cuatro cero" quantity: "billones" } -> menos dos coma cuatro cero billones
+    decimal { integer_part: "un" quantity: "billón" } -> un billón
+
+ Args:
+ deterministic: if True will provide a single transduction option,
+            if False, multiple transductions are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, deterministic: bool = True):
+ super().__init__(name="decimal", kind="classify", deterministic=deterministic)
+
+ self.optional_sign = pynini.closure(pynini.cross("negative: \"true\"", "menos ") + delete_space, 0, 1)
+ self.integer = pynutil.delete("integer_part: \"") + pynini.closure(NEMO_NOT_QUOTE, 1) + pynutil.delete("\"")
+ self.fractional_default = (
+ pynutil.delete("fractional_part: \"") + pynini.closure(NEMO_NOT_QUOTE, 1) + pynutil.delete("\"")
+ )
+
+ conjunction = pynutil.insert(" punto ") if LOCALIZATION == "am" else pynutil.insert(" coma ")
+ if not deterministic:
+ conjunction |= pynutil.insert(pynini.union(" con ", " y "))
+ self.fractional_default |= strip_cardinal_apocope(self.fractional_default)
+ self.fractional = conjunction + self.fractional_default
+
+ self.quantity = (
+ delete_space
+ + insert_space
+ + pynutil.delete("quantity: \"")
+ + pynini.closure(NEMO_NOT_QUOTE, 1)
+ + pynutil.delete("\"")
+ )
+ self.optional_quantity = pynini.closure(self.quantity, 0, 1)
+
+ graph = self.optional_sign + pynini.union(
+ (self.integer + self.quantity), (self.integer + delete_space + self.fractional + self.optional_quantity)
+ )
+
+ self.numbers = graph.optimize()
+ self.numbers_no_quantity = self.integer + delete_space + self.fractional + self.optional_quantity
+
+ if not deterministic:
+ graph |= self.optional_sign + (
+ shift_cardinal_gender(self.integer + delete_space) + shift_number_gender(self.fractional)
+ )
+
+ graph += delete_preserve_order
+ delete_tokens = self.delete_tokens(graph)
+ self.fst = delete_tokens.optimize()
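The separator word depends on `LOCALIZATION`: "punto" for the "am" locale, otherwise "coma" (with "con"/"y" as non-deterministic extras). A rough stdlib-only analogue of the deterministic path (illustrative, not the FST):

```python
def verbalize_decimal(integer_part, fractional_part, localization="es",
                      negative=False, quantity=None):
    # "am" localization reads the decimal separator as "punto",
    # Iberian Spanish as "coma" (mirrors the LOCALIZATION switch above).
    sep = "punto" if localization == "am" else "coma"
    words = (["menos"] if negative else []) + [integer_part, sep, fractional_part]
    if quantity:
        words.append(quantity)
    return " ".join(words)
```
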
diff --git a/nemo_text_processing/text_normalization/es/verbalizers/electronic.py b/nemo_text_processing/text_normalization/es/verbalizers/electronic.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/verbalizers/electronic.py
@@ -0,0 +1,91 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import (
+ NEMO_NOT_QUOTE,
+ NEMO_SIGMA,
+ GraphFst,
+ delete_preserve_order,
+ insert_space,
+)
+from nemo_text_processing.text_normalization.es.utils import get_abs_path
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ digit_no_zero = pynini.invert(pynini.string_file(get_abs_path("data/numbers/digit.tsv")))
+ zero = pynini.invert(pynini.string_file(get_abs_path("data/numbers/zero.tsv")))
+
+ graph_symbols = pynini.string_file(get_abs_path("data/electronic/symbols.tsv"))
+ server_common = pynini.string_file(get_abs_path("data/electronic/server_name.tsv"))
+ domain_common = pynini.string_file(get_abs_path("data/electronic/domain.tsv"))
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ digit_no_zero = None
+ zero = None
+
+ graph_symbols = None
+ server_common = None
+ domain_common = None
+
+ PYNINI_AVAILABLE = False
+
+
+class ElectronicFst(GraphFst):
+ """
+ Finite state transducer for verbalizing electronic
+ e.g. electronic { username: "abc" domain: "hotmail.com" } -> "a b c arroba hotmail punto com"
+ -> "a b c arroba h o t m a i l punto c o m"
+ -> "a b c arroba hotmail punto c o m"
+        -> "a b c arroba h o t m a i l punto com"
+ Args:
+ deterministic: if True will provide a single transduction option,
+            if False, multiple transductions are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, deterministic: bool = True):
+ super().__init__(name="electronic", kind="verbalize", deterministic=deterministic)
+
+ graph_digit_no_zero = (
+ digit_no_zero @ pynini.cdrewrite(pynini.cross("un", "uno"), "", "", NEMO_SIGMA).optimize()
+ )
+ graph_digit = graph_digit_no_zero | zero
+
+ def add_space_after_char():
+ return pynini.closure(NEMO_NOT_QUOTE - pynini.accep(" ") + insert_space) + (
+ NEMO_NOT_QUOTE - pynini.accep(" ")
+ )
+
+ verbalize_characters = pynini.cdrewrite(graph_symbols | graph_digit, "", "", NEMO_SIGMA)
+
+ user_name = pynutil.delete("username: \"") + add_space_after_char() + pynutil.delete("\"")
+ user_name @= verbalize_characters
+
+ convert_defaults = pynutil.add_weight(NEMO_NOT_QUOTE, weight=0.0001) | domain_common | server_common
+ domain = convert_defaults + pynini.closure(insert_space + convert_defaults)
+ domain @= verbalize_characters
+
+ domain = pynutil.delete("domain: \"") + domain + pynutil.delete("\"")
+ protocol = (
+ pynutil.delete("protocol: \"")
+ + add_space_after_char() @ pynini.cdrewrite(graph_symbols, "", "", NEMO_SIGMA)
+ + pynutil.delete("\"")
+ )
+ self.graph = (pynini.closure(protocol + pynini.accep(" "), 0, 1) + domain) | (
+ user_name + pynini.accep(" ") + pynutil.insert("arroba ") + domain
+ )
+ delete_tokens = self.delete_tokens(self.graph + delete_preserve_order)
+ self.fst = delete_tokens.optimize()
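`add_space_after_char` spaces out every character before the symbol/digit rewrite applies, while default-path domain words stay whole. An illustrative pure-Python equivalent (the `SYMBOLS` entries are assumed samples, not the real `symbols.tsv`):

```python
SYMBOLS = {"@": "arroba", ".": "punto", "-": "guion"}  # assumed sample entries

def spell_out(token: str) -> str:
    # Space-separate the characters, verbalizing known symbols along the way.
    return " ".join(SYMBOLS.get(ch, ch) for ch in token)

def verbalize_email(username: str, domain: str) -> str:
    # Character-by-character username, "arroba", then the domain with
    # dots read as "punto" (domain words kept whole, as in the default path).
    return spell_out(username) + " arroba " + domain.replace(".", " punto ")
```
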
diff --git a/nemo_text_processing/text_normalization/es/verbalizers/fraction.py b/nemo_text_processing/text_normalization/es/verbalizers/fraction.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/verbalizers/fraction.py
@@ -0,0 +1,184 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import (
+ NEMO_CHAR,
+ NEMO_NOT_QUOTE,
+ NEMO_NOT_SPACE,
+ NEMO_SIGMA,
+ NEMO_SPACE,
+ GraphFst,
+ delete_space,
+ insert_space,
+)
+from nemo_text_processing.text_normalization.es.graph_utils import (
+ accents,
+ shift_cardinal_gender,
+ strip_cardinal_apocope,
+)
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ PYNINI_AVAILABLE = False
+
+
+class FractionFst(GraphFst):
+ """
+ Finite state transducer for verbalizing fraction
+ e.g. tokens { fraction { integer: "treinta y tres" numerator: "cuatro" denominator: "quinto" } } ->
+ treinta y tres y cuatro quintos
+
+
+ Args:
+ deterministic: if True will provide a single transduction option,
+            if False, multiple transductions are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, deterministic: bool = True):
+ super().__init__(name="fraction", kind="verbalize", deterministic=deterministic)
+
+        # Derivational strings append 'avo' as a suffix. A space is added as a processing aid
+ fraction_stem = pynutil.insert(" avo")
+ plural = pynutil.insert("s")
+
+ integer = (
+ pynutil.delete("integer_part: \"")
+ + strip_cardinal_apocope(pynini.closure(NEMO_NOT_QUOTE))
+ + pynutil.delete("\"")
+ )
+
+ numerator_one = pynutil.delete("numerator: \"") + pynini.accep("un") + pynutil.delete("\" ")
+ numerator = (
+ pynutil.delete("numerator: \"")
+ + pynini.difference(pynini.closure(NEMO_NOT_QUOTE), "un")
+ + pynutil.delete("\" ")
+ )
+
+ denominator_add_stem = pynutil.delete("denominator: \"") + (
+ pynini.closure(NEMO_NOT_QUOTE)
+ + fraction_stem
+ + pynutil.delete("\" morphosyntactic_features: \"add_root\"")
+ )
+ denominator_ordinal = pynutil.delete("denominator: \"") + (
+ pynini.closure(NEMO_NOT_QUOTE) + pynutil.delete("\" morphosyntactic_features: \"ordinal\"")
+ )
+ denominator_cardinal = pynutil.delete("denominator: \"") + (
+ pynini.closure(NEMO_NOT_QUOTE) + pynutil.delete("\"")
+ )
+
+ denominator_singular = pynini.union(denominator_add_stem, denominator_ordinal)
+ denominator_plural = denominator_singular + plural
+
+ if not deterministic:
+ # Occasional exceptions
+ denominator_singular |= denominator_add_stem @ pynini.string_map(
+                [("once avo", "undécimo"), ("doce avo", "duodécimo")]
+ )
+
+ # Merging operations
+ merge = pynini.cdrewrite(
+ pynini.cross(" y ", "i"), "", "", NEMO_SIGMA
+ ) # The denominator must be a single word, with the conjunction "y" replaced by i
+ merge @= pynini.cdrewrite(delete_space, "", pynini.difference(NEMO_CHAR, "parte"), NEMO_SIGMA)
+
+ # The merger can produce duplicate vowels. This is not allowed in orthography
+        delete_duplicates = pynini.string_map([("aa", "a"), ("oo", "o")])  # Removes duplicate vowels
+ delete_duplicates = pynini.cdrewrite(delete_duplicates, "", "", NEMO_SIGMA)
+
+ remove_accents = pynini.cdrewrite(
+ accents,
+ pynini.union(NEMO_SPACE, pynini.accep("[BOS]")) + pynini.closure(NEMO_NOT_SPACE),
+            pynini.closure(NEMO_NOT_SPACE) + pynini.union("avo", "ava", "ésimo", "ésima"),
+ NEMO_SIGMA,
+ )
+ merge_into_single_word = merge @ remove_accents @ delete_duplicates
+
+ fraction_default = numerator + delete_space + insert_space + (denominator_plural @ merge_into_single_word)
+ fraction_with_one = (
+ numerator_one + delete_space + insert_space + (denominator_singular @ merge_into_single_word)
+ )
+
+ fraction_with_cardinal = strip_cardinal_apocope(numerator | numerator_one)
+ fraction_with_cardinal += (
+ delete_space + pynutil.insert(" sobre ") + strip_cardinal_apocope(denominator_cardinal)
+ )
+
+ conjunction = pynutil.insert(" y ")
+
+ if not deterministic:
+ # There is an alternative rendering where ordinals act as adjectives for 'parte'. This requires use of the feminine
+ # Other rules will manage use of "un" at end, so just worry about endings
+ exceptions = pynini.string_map([("tercia", "tercera")])
+ apply_exceptions = pynini.cdrewrite(exceptions, "", "", NEMO_SIGMA)
+ vowel_change = pynini.cdrewrite(pynini.cross("o", "a"), "", pynini.accep("[EOS]"), NEMO_SIGMA)
+
+ denominator_singular_fem = shift_cardinal_gender(denominator_singular) @ vowel_change @ apply_exceptions
+ denominator_plural_fem = denominator_singular_fem + plural
+
+ numerator_one_fem = shift_cardinal_gender(numerator_one)
+ numerator_fem = shift_cardinal_gender(numerator)
+
+ fraction_with_cardinal |= (
+ (numerator_one_fem | numerator_fem)
+ + delete_space
+ + pynutil.insert(" sobre ")
+ + shift_cardinal_gender(denominator_cardinal)
+ )
+
+ # Still need to manage stems
+ merge_stem = pynini.cdrewrite(
+ delete_space, "", pynini.union("avo", "ava", "avos", "avas"), NEMO_SIGMA
+ ) # For managing alternative spacing
+ merge_stem @= remove_accents @ delete_duplicates
+
+ fraction_with_one_fem = numerator_one_fem + delete_space + insert_space
+ fraction_with_one_fem += pynini.union(
+ denominator_singular_fem @ merge_stem, denominator_singular_fem @ merge_into_single_word
+            )  # Both forms exist
+ fraction_with_one_fem @= pynini.cdrewrite(pynini.cross("una media", "media"), "", "", NEMO_SIGMA)
+ fraction_with_one_fem += pynutil.insert(" parte")
+
+ fraction_default_fem = numerator_fem + delete_space + insert_space
+ fraction_default_fem += pynini.union(
+ denominator_plural_fem @ merge_stem, denominator_plural_fem @ merge_into_single_word
+ )
+ fraction_default_fem += pynutil.insert(" partes")
+
+ fraction_default |= (
+ numerator + delete_space + insert_space + denominator_plural @ merge_stem
+ ) # Case of no merger
+ fraction_default |= fraction_default_fem
+
+ fraction_with_one |= numerator_one + delete_space + insert_space + denominator_singular @ merge_stem
+ fraction_with_one |= fraction_with_one_fem
+
+ # Integers are influenced by dominant noun, need to allow feminine forms as well
+ integer |= shift_cardinal_gender(integer)
+
+ # Remove 'un medio'
+ fraction_with_one @= pynini.cdrewrite(pynini.cross("un medio", "medio"), "", "", NEMO_SIGMA)
+
+ integer = pynini.closure(integer + delete_space + conjunction, 0, 1)
+
+ fraction = fraction_with_one | fraction_default | fraction_with_cardinal
+
+ graph = integer + fraction
+
+ self.graph = graph
+ delete_tokens = self.delete_tokens(self.graph)
+ self.fst = delete_tokens.optimize()
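The merger above rewrites the conjunction "y" as "i", deletes the internal spaces, and then collapses the duplicate vowels the merge can create. The same three steps as a stdlib sketch (illustrative, not the FST composition):

```python
import re

def merge_denominator(words: str) -> str:
    # "y" -> "i", drop spaces, then collapse "aa"/"oo" produced by the merge.
    merged = words.replace(" y ", "i").replace(" ", "")
    return re.sub("oo", "o", re.sub("aa", "a", merged))
```
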
diff --git a/nemo_text_processing/text_normalization/es/verbalizers/measure.py b/nemo_text_processing/text_normalization/es/verbalizers/measure.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/verbalizers/measure.py
@@ -0,0 +1,110 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import (
+ NEMO_NOT_QUOTE,
+ NEMO_SIGMA,
+ NEMO_SPACE,
+ NEMO_WHITE_SPACE,
+ GraphFst,
+ delete_extra_space,
+ delete_preserve_order,
+)
+from nemo_text_processing.text_normalization.es.graph_utils import ones, shift_cardinal_gender
+from nemo_text_processing.text_normalization.es.utils import get_abs_path
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ unit_plural_fem = pynini.string_file(get_abs_path("data/measures/measurements_plural_fem.tsv"))
+ unit_plural_masc = pynini.string_file(get_abs_path("data/measures/measurements_plural_masc.tsv"))
+
+ unit_singular_fem = pynini.project(unit_plural_fem, "input")
+ unit_singular_masc = pynini.project(unit_plural_masc, "input")
+
+ unit_plural_fem = pynini.project(unit_plural_fem, "output")
+ unit_plural_masc = pynini.project(unit_plural_masc, "output")
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ unit_plural_fem = None
+ unit_plural_masc = None
+
+ unit_singular_fem = None
+ unit_singular_masc = None
+
+ PYNINI_AVAILABLE = False
+
+
+class MeasureFst(GraphFst):
+ """
+ Finite state transducer for verbalizing measure, e.g.
+ measure { cardinal { integer: "dos" units: "gramos" } } -> "dos gramos"
+ measure { cardinal { integer_part: "dos" quantity: "millones" units: "gramos" } } -> "dos millones de gramos"
+
+ Args:
+ decimal: DecimalFst
+ cardinal: CardinalFst
+ fraction: FractionFst
+ deterministic: if True will provide a single transduction option,
+            if False, multiple transductions are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, decimal: GraphFst, cardinal: GraphFst, fraction: GraphFst, deterministic: bool):
+ super().__init__(name="measure", kind="verbalize", deterministic=deterministic)
+
+ graph_decimal = decimal.fst
+ graph_cardinal = cardinal.fst
+ graph_fraction = fraction.fst
+
+ unit_masc = (unit_plural_masc | unit_singular_masc) + pynini.closure(
+ NEMO_WHITE_SPACE + "por" + pynini.closure(NEMO_NOT_QUOTE, 1), 0, 1
+ )
+ unit_masc |= "por" + pynini.closure(NEMO_NOT_QUOTE, 1)
+ unit_masc = pynutil.delete("units: \"") + (pynini.closure(NEMO_NOT_QUOTE) @ unit_masc) + pynutil.delete("\"")
+
+ unit_fem = (unit_plural_fem | unit_singular_fem) + pynini.closure(
+ NEMO_WHITE_SPACE + "por" + pynini.closure(NEMO_NOT_QUOTE, 1), 0, 1
+ )
+ unit_fem = pynutil.delete("units: \"") + (pynini.closure(NEMO_NOT_QUOTE) @ unit_fem) + pynutil.delete("\"")
+
+ graph_masc = (graph_cardinal | graph_decimal | graph_fraction) + NEMO_WHITE_SPACE + unit_masc
+ graph_fem = (
+ shift_cardinal_gender(graph_cardinal | graph_decimal | graph_fraction) + NEMO_WHITE_SPACE + unit_fem
+ )
+ graph = graph_masc | graph_fem
+
+ graph = (
+ pynini.cdrewrite(
+ pynutil.insert(" de"), "quantity: \"" + pynini.closure(NEMO_NOT_QUOTE, 1), "\"", NEMO_SIGMA
+ )
+ @ graph
+ ) # billones de xyz
+
+ graph @= pynini.cdrewrite(pynini.cross(ones, "uno"), "", NEMO_WHITE_SPACE + "por", NEMO_SIGMA)
+
+        # To manage alphanumeric combinations ("a-8", "5x"), we let them use a weighted default path.
+ alpha_num_unit = pynutil.delete("units: \"") + pynini.closure(NEMO_NOT_QUOTE) + pynutil.delete("\"")
+ graph_alpha_num = pynini.union(
+ (graph_cardinal | graph_decimal) + NEMO_SPACE + alpha_num_unit,
+ alpha_num_unit + delete_extra_space + (graph_cardinal | graph_decimal),
+ )
+
+ graph |= pynutil.add_weight(graph_alpha_num, 0.01)
+
+ graph += delete_preserve_order
+
+ delete_tokens = self.delete_tokens(graph)
+ self.fst = delete_tokens.optimize()
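The cdrewrite applied before composition appends " de" inside the quantity field, so "dos millones" verbalizes as "dos millones de gramos". The same string edit can be expressed with a regex over the serialized token (illustrative analogue, not pynini):

```python
import re

def insert_de_after_quantity(serialized: str) -> str:
    # Append " de" inside the quantity field of the serialized token,
    # e.g. quantity: "millones" -> quantity: "millones de".
    return re.sub(r'(quantity: "[^"]+)"', r'\1 de"', serialized)
```
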
diff --git a/nemo_text_processing/text_normalization/es/verbalizers/money.py b/nemo_text_processing/text_normalization/es/verbalizers/money.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/verbalizers/money.py
@@ -0,0 +1,195 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import (
+ NEMO_NOT_QUOTE,
+ NEMO_SIGMA,
+ NEMO_SPACE,
+ GraphFst,
+ delete_preserve_order,
+)
+from nemo_text_processing.text_normalization.es.graph_utils import (
+ shift_cardinal_gender,
+ shift_number_gender,
+ strip_cardinal_apocope,
+)
+from nemo_text_processing.text_normalization.es.utils import get_abs_path
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ fem = pynini.string_file((get_abs_path("data/money/currency_plural_fem.tsv")))
+ masc = pynini.string_file((get_abs_path("data/money/currency_plural_masc.tsv")))
+
+ fem_singular = pynini.project(fem, "input")
+ masc_singular = pynini.project(masc, "input")
+
+ fem_plural = pynini.project(fem, "output")
+ masc_plural = pynini.project(masc, "output")
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ fem_plural = None
+ masc_plural = None
+
+ fem_singular = None
+ masc_singular = None
+
+ PYNINI_AVAILABLE = False
+
+
+class MoneyFst(GraphFst):
+ """
+ Finite state transducer for verbalizing money, e.g.
+ money { currency_maj: "euro" integer_part: "un"} -> "un euro"
+ money { currency_maj: "euro" integer_part: "un" fractional_part: "cero cero un"} -> "uno coma cero cero uno euros"
+ money { integer_part: "un" currency_maj: "libra" fractional_part: "cuarenta" preserve_order: true} -> "una libra cuarenta"
+ money { integer_part: "un" currency_maj: "libra" fractional_part: "cuarenta" currency_min: "peniques" preserve_order: true} -> "una libra con cuarenta peniques"
+ money { fractional_part: "un" currency_min: "penique" preserve_order: true} -> "un penique"
+
+ Args:
+ decimal: GraphFst
+ deterministic: if True will provide a single transduction option,
+            if False, multiple transductions are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, decimal: GraphFst, deterministic: bool = True):
+ super().__init__(name="money", kind="verbalize", deterministic=deterministic)
+
+ maj_singular_masc = (
+ pynutil.delete("currency_maj: \"")
+ + (pynini.closure(NEMO_NOT_QUOTE, 1) @ masc_singular)
+ + pynutil.delete("\"")
+ )
+ maj_singular_fem = (
+ pynutil.delete("currency_maj: \"")
+ + (pynini.closure(NEMO_NOT_QUOTE, 1) @ fem_singular)
+ + pynutil.delete("\"")
+ )
+
+ maj_plural_masc = (
+ pynutil.delete("currency_maj: \"")
+ + (pynini.closure(NEMO_NOT_QUOTE, 1) @ masc_plural)
+ + pynutil.delete("\"")
+ )
+ maj_plural_fem = (
+ pynutil.delete("currency_maj: \"")
+ + (pynini.closure(NEMO_NOT_QUOTE, 1) @ fem_plural)
+ + pynutil.delete("\"")
+ )
+
+ maj_masc = maj_plural_masc | maj_singular_masc # Tagger kept quantity resolution stable
+ maj_fem = maj_plural_fem | maj_singular_fem
+
+ min_singular_masc = (
+ pynutil.delete("currency_min: \"")
+ + (pynini.closure(NEMO_NOT_QUOTE, 1) @ masc_singular)
+ + pynutil.delete("\"")
+ )
+ min_singular_fem = (
+ pynutil.delete("currency_min: \"")
+ + (pynini.closure(NEMO_NOT_QUOTE, 1) @ fem_singular)
+ + pynutil.delete("\"")
+ )
+
+ min_plural_masc = (
+ pynutil.delete("currency_min: \"")
+ + (pynini.closure(NEMO_NOT_QUOTE, 1) @ masc_plural)
+ + pynutil.delete("\"")
+ )
+ min_plural_fem = (
+ pynutil.delete("currency_min: \"")
+ + (pynini.closure(NEMO_NOT_QUOTE, 1) @ fem_plural)
+ + pynutil.delete("\"")
+ )
+
+ min_masc = min_plural_masc | min_singular_masc
+ min_fem = min_plural_fem | min_singular_fem
+
+ fractional_part = (
+ pynutil.delete("fractional_part: \"") + pynini.closure(NEMO_NOT_QUOTE, 1) + pynutil.delete("\"")
+ )
+
+ integer_part = pynutil.delete("integer_part: \"") + pynini.closure(NEMO_NOT_QUOTE, 1) + pynutil.delete("\"")
+ optional_add_and = pynini.closure(pynutil.insert(pynini.union("con ", "y ")), 0, 1)
+
+ # *** currency_maj
+ graph_integer_masc = integer_part + NEMO_SPACE + maj_masc
+ graph_integer_fem = shift_cardinal_gender(integer_part) + NEMO_SPACE + maj_fem
+ graph_integer = graph_integer_fem | graph_integer_masc
+
+        # *** currency_maj + (***) | ((con) *** currency_min)
+ graph_integer_with_minor_masc = (
+ integer_part
+ + NEMO_SPACE
+ + maj_masc
+ + NEMO_SPACE
+ + pynini.union(
+ optional_add_and + strip_cardinal_apocope(fractional_part),
+ (optional_add_and + fractional_part + NEMO_SPACE + min_masc),
+ (optional_add_and + shift_cardinal_gender(fractional_part) + NEMO_SPACE + min_fem),
+ ) # Could be minor currency that is different gender
+ + delete_preserve_order
+ )
+
+ graph_integer_with_minor_fem = (
+ shift_cardinal_gender(integer_part)
+ + NEMO_SPACE
+ + maj_fem
+ + NEMO_SPACE
+ + pynini.union(
+ optional_add_and + shift_cardinal_gender(fractional_part),
+ (optional_add_and + fractional_part + NEMO_SPACE + min_masc),
+ (optional_add_and + shift_cardinal_gender(fractional_part) + NEMO_SPACE + min_fem),
+ ) # Could be minor currency that is different gender
+ + delete_preserve_order
+ )
+
+ graph_integer_with_minor = graph_integer_with_minor_fem | graph_integer_with_minor_masc
+
+ # *** coma *** currency_maj
+ graph_decimal_masc = decimal.numbers + NEMO_SPACE + maj_masc
+
+ # Need to fix some of the inner parts, so don't use decimal here (note: quantities covered by masc)
+ graph_decimal_fem = (
+ pynini.accep("integer_part: \"")
+ + shift_cardinal_gender(pynini.closure(NEMO_NOT_QUOTE, 1))
+ + pynini.accep("\"")
+ + NEMO_SPACE
+ + pynini.accep("fractional_part: \"")
+ + shift_number_gender(pynini.closure(NEMO_NOT_QUOTE, 1))
+ + pynini.accep("\"")
+ + NEMO_SIGMA
+ )
+ graph_decimal_fem @= decimal.numbers_no_quantity
+ graph_decimal_fem += NEMO_SPACE + maj_fem
+
+ graph_decimal = graph_decimal_fem | graph_decimal_masc
+ graph_decimal = (
+ pynini.cdrewrite(
+ pynutil.insert(" de"), "quantity: \"" + pynini.closure(NEMO_NOT_QUOTE, 1), "\"", NEMO_SIGMA
+ )
+ @ graph_decimal
+ ) # formally it's millones/billones de ***
+
+        # *** currency_min
+ graph_minor_masc = fractional_part + NEMO_SPACE + min_masc + delete_preserve_order
+ graph_minor_fem = shift_cardinal_gender(fractional_part) + NEMO_SPACE + min_fem + delete_preserve_order
+ graph_minor = graph_minor_fem | graph_minor_masc
+
+ graph = graph_integer | graph_integer_with_minor | graph_decimal | graph_minor
+
+ delete_tokens = self.delete_tokens(graph)
+ self.fst = delete_tokens.optimize()
diff --git a/nemo_text_processing/text_normalization/es/verbalizers/ordinal.py b/nemo_text_processing/text_normalization/es/verbalizers/ordinal.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/verbalizers/ordinal.py
@@ -0,0 +1,76 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import NEMO_NOT_QUOTE, NEMO_SIGMA, NEMO_SPACE, GraphFst
+from nemo_text_processing.text_normalization.es.graph_utils import shift_number_gender
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ PYNINI_AVAILABLE = False
+
+
+class OrdinalFst(GraphFst):
+ """
+ Finite state transducer for verbalizing ordinals
+ e.g. ordinal { integer: "tercer" } } -> "tercero"
+ -> "tercera"
+ -> "tercer"
+
+ Args:
+ deterministic: if True will provide a single transduction option,
+            if False, multiple transductions are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, deterministic: bool = True):
+ super().__init__(name="ordinal", kind="verbalize", deterministic=deterministic)
+
+ graph = pynutil.delete("integer: \"") + pynini.closure(NEMO_NOT_QUOTE, 1) + pynutil.delete("\"")
+
+        # masculine gender is left as is
+ graph_masc = graph + pynutil.delete(" morphosyntactic_features: \"gender_masc")
+
+ # shift gender
+ graph_fem_ending = graph @ pynini.cdrewrite(
+ pynini.cross("o", "a"), "", NEMO_SPACE | pynini.accep("[EOS]"), NEMO_SIGMA
+ )
+ graph_fem = shift_number_gender(graph_fem_ending) + pynutil.delete(" morphosyntactic_features: \"gender_fem")
+
+ # Apocope just changes tercero and primero. May occur if someone wrote 11.er (uncommon)
+ graph_apocope = (
+ pynini.cross("tercero", "tercer")
+ | pynini.cross("primero", "primer")
+            | pynini.cross("undécimo", "decimoprimer")
+        )
+ graph_apocope = (graph @ pynini.cdrewrite(graph_apocope, "", "", NEMO_SIGMA)) + pynutil.delete(
+ " morphosyntactic_features: \"apocope"
+ )
+
+ graph = graph_apocope | graph_masc | graph_fem
+
+ if not deterministic:
+ # Plural graph
+ graph_plural = pynini.cdrewrite(
+ pynutil.insert("s"), pynini.union("o", "a"), NEMO_SPACE | pynini.accep("[EOS]"), NEMO_SIGMA
+ )
+
+ graph |= (graph @ graph_plural) + pynutil.delete("/plural")
+
+ self.graph = graph + pynutil.delete("\"")
+
+ delete_tokens = self.delete_tokens(self.graph)
+ self.fst = delete_tokens.optimize()
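The gender shift above turns a word-final "o" into "a" (tercero -> tercera), while the apocope rewrite only touches a couple of forms. A regex analogue of both rules (illustrative only):

```python
import re

def feminine(ordinal: str) -> str:
    # Word-final "o" -> "a", applied to every word in the ordinal phrase.
    return re.sub(r"o(?=\s|$)", "a", ordinal)

def apocope(ordinal: str) -> str:
    # Only a handful of forms apocopate: drop the final vowel.
    return re.sub(r"\b(tercero|primero)\b", lambda m: m.group(1)[:-1], ordinal)
```
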
diff --git a/nemo_text_processing/text_normalization/es/verbalizers/telephone.py b/nemo_text_processing/text_normalization/es/verbalizers/telephone.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/verbalizers/telephone.py
@@ -0,0 +1,42 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import NEMO_NOT_QUOTE, GraphFst
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ PYNINI_AVAILABLE = False
+
+
+class TelephoneFst(GraphFst):
+ """
+ Finite state transducer for verbalizing telephone, e.g.
+ telephone { number_part: "uno dos tres uno dos tres cinco seis siete ocho" }
+ -> uno dos tres uno dos tres cinco seis siete ocho
+
+ Args:
+ deterministic: if True will provide a single transduction option,
+            if False, multiple transductions are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, deterministic: bool = True):
+        super().__init__(name="telephone", kind="verbalize", deterministic=deterministic)
+
+ number_part = pynutil.delete("number_part: \"") + pynini.closure(NEMO_NOT_QUOTE, 1) + pynutil.delete("\"")
+ delete_tokens = self.delete_tokens(number_part)
+ self.fst = delete_tokens.optimize()
diff --git a/nemo_text_processing/text_normalization/es/verbalizers/time.py b/nemo_text_processing/text_normalization/es/verbalizers/time.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/verbalizers/time.py
@@ -0,0 +1,269 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import (
+ NEMO_NOT_QUOTE,
+ NEMO_SIGMA,
+ GraphFst,
+ delete_preserve_order,
+ delete_space,
+ insert_space,
+)
+from nemo_text_processing.text_normalization.es.utils import get_abs_path
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ alt_minutes = pynini.string_file(get_abs_path("data/time/alt_minutes.tsv"))
+
+ morning_times = pynini.string_file(get_abs_path("data/time/morning_times.tsv"))
+ afternoon_times = pynini.string_file(get_abs_path("data/time/afternoon_times.tsv"))
+ evening_times = pynini.string_file(get_abs_path("data/time/evening_times.tsv"))
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ alt_minutes = None
+
+ morning_times = None
+ afternoon_times = None
+ evening_times = None
+
+ PYNINI_AVAILABLE = False
+
+
+class TimeFst(GraphFst):
+ """
+ Finite state transducer for verbalizing time, e.g.
+ time { hours: "doce" minutes: "media" suffix: "a m" } -> doce y media de la noche
+        time { hours: "doce" } -> doce
+
+ Args:
+ deterministic: if True will provide a single transduction option,
+            if False, multiple transductions are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, deterministic: bool = True):
+ super().__init__(name="time", kind="verbalize", deterministic=deterministic)
+
+ change_minutes = pynini.cdrewrite(alt_minutes, pynini.accep("[BOS]"), pynini.accep("[EOS]"), NEMO_SIGMA)
+
+        morning_phrases = pynini.cross("am", "de la mañana")
+ afternoon_phrases = pynini.cross("pm", "de la tarde")
+ evening_phrases = pynini.cross("pm", "de la noche")
+
+ # For the 12's
+ mid_times = pynini.accep("doce")
+ mid_phrases = (
+            pynini.string_map([("pm", "del mediodía"), ("am", "de la noche")])
+ if deterministic
+ else pynini.string_map(
+ [
+                    ("pm", "de la mañana"),
+                    ("pm", "del día"),
+                    ("pm", "del mediodía"),
+ ("am", "de la noche"),
+ ("am", "de la medianoche"),
+ ]
+ )
+ )
+
+ hour = (
+ pynutil.delete("hours:")
+ + delete_space
+ + pynutil.delete("\"")
+ + pynini.closure(NEMO_NOT_QUOTE, 1)
+ + pynutil.delete("\"")
+ )
+ minute = (
+ pynutil.delete("minutes:")
+ + delete_space
+ + pynutil.delete("\"")
+ + pynini.closure(NEMO_NOT_QUOTE, 1)
+ + pynutil.delete("\"")
+ )
+ minute = (minute @ change_minutes) if deterministic else pynini.union(minute, minute @ change_minutes)
+
+ suffix = (
+ pynutil.delete("suffix:")
+ + delete_space
+ + pynutil.delete("\"")
+ + pynini.closure(NEMO_NOT_QUOTE, 1)
+ + pynutil.delete("\"")
+ )
+ zone = (
+ pynutil.delete("zone:")
+ + delete_space
+ + pynutil.delete("\"")
+ + pynini.closure(NEMO_NOT_QUOTE, 1)
+ + pynutil.delete("\"")
+ )
+ optional_zone = pynini.closure(delete_space + insert_space + zone, 0, 1)
+ second = (
+ pynutil.delete("seconds:")
+ + delete_space
+ + pynutil.delete("\"")
+ + pynini.closure(NEMO_NOT_QUOTE, 1)
+ + pynutil.delete("\"")
+ )
+
+ graph_hms = (
+ hour
+ + pynutil.insert(" horas ")
+ + delete_space
+ + minute
+ + pynutil.insert(" minutos y ")
+ + delete_space
+ + second
+ + pynutil.insert(" segundos")
+ )
+
+ graph_hm = hour + delete_space + pynutil.insert(" y ") + minute
+ graph_hm |= pynini.union(
+ (hour @ morning_times)
+ + delete_space
+ + pynutil.insert(" y ")
+ + minute
+ + delete_space
+ + insert_space
+ + (suffix @ morning_phrases),
+ (hour @ afternoon_times)
+ + delete_space
+ + pynutil.insert(" y ")
+ + minute
+ + delete_space
+ + insert_space
+ + (suffix @ afternoon_phrases),
+ (hour @ evening_times)
+ + delete_space
+ + pynutil.insert(" y ")
+ + minute
+ + delete_space
+ + insert_space
+ + (suffix @ evening_phrases),
+ (hour @ mid_times)
+ + delete_space
+ + pynutil.insert(" y ")
+ + minute
+ + delete_space
+ + insert_space
+ + (suffix @ mid_phrases),
+ )
+
+ graph_h = pynini.union(
+ hour,
+ (hour @ morning_times) + delete_space + insert_space + (suffix @ morning_phrases),
+ (hour @ afternoon_times) + delete_space + insert_space + (suffix @ afternoon_phrases),
+ (hour @ evening_times) + delete_space + insert_space + (suffix @ evening_phrases),
+ (hour @ mid_times) + delete_space + insert_space + (suffix @ mid_phrases),
+ )
+
+ graph = (graph_hms | graph_hm | graph_h) + optional_zone
+
+ if not deterministic:
+ graph_style_1 = pynutil.delete(" style: \"1\"")
+ graph_style_2 = pynutil.delete(" style: \"2\"")
+
+ graph_menos = hour + delete_space + pynutil.insert(" menos ") + minute + graph_style_1
+ graph_menos |= (
+ (hour @ morning_times)
+ + delete_space
+ + pynutil.insert(" menos ")
+ + minute
+ + delete_space
+ + insert_space
+ + (suffix @ morning_phrases)
+ + graph_style_1
+ )
+ graph_menos |= (
+ (hour @ afternoon_times)
+ + delete_space
+ + pynutil.insert(" menos ")
+ + minute
+ + delete_space
+ + insert_space
+ + (suffix @ afternoon_phrases)
+ + graph_style_1
+ )
+ graph_menos |= (
+ (hour @ evening_times)
+ + delete_space
+ + pynutil.insert(" menos ")
+ + minute
+ + delete_space
+ + insert_space
+ + (suffix @ evening_phrases)
+ + graph_style_1
+ )
+ graph_menos |= (
+ (hour @ mid_times)
+ + delete_space
+ + pynutil.insert(" menos ")
+ + minute
+ + delete_space
+ + insert_space
+ + (suffix @ mid_phrases)
+ + graph_style_1
+ )
+ graph_menos += optional_zone
+
+ graph_para = minute + pynutil.insert(" para las ") + delete_space + hour + graph_style_2
+ graph_para |= (
+ minute
+ + pynutil.insert(" para las ")
+ + delete_space
+ + (hour @ morning_times)
+ + delete_space
+ + insert_space
+ + (suffix @ morning_phrases)
+ + graph_style_2
+ )
+ graph_para |= (
+ minute
+ + pynutil.insert(" para las ")
+ + delete_space
+ + (hour @ afternoon_times)
+ + delete_space
+ + insert_space
+ + (suffix @ afternoon_phrases)
+ + graph_style_2
+ )
+ graph_para |= (
+ minute
+ + pynutil.insert(" para las ")
+ + delete_space
+ + (hour @ evening_times)
+ + delete_space
+ + insert_space
+ + (suffix @ evening_phrases)
+ + graph_style_2
+ )
+ graph_para |= (
+ minute
+ + pynutil.insert(" para las ")
+ + delete_space
+ + (hour @ mid_times)
+ + delete_space
+ + insert_space
+ + (suffix @ mid_phrases)
+ + graph_style_2
+ )
+ graph_para += optional_zone
+ graph_para @= pynini.cdrewrite(
+ pynini.cross(" las ", " la "), "para", "una", NEMO_SIGMA
+ ) # Need agreement with one
+
+ graph |= graph_menos | graph_para
+ delete_tokens = self.delete_tokens(graph + delete_preserve_order)
+ self.fst = delete_tokens.optimize()
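The verbalizer above deletes the `hours:`/`minutes:`/`suffix:` field markup and rejoins the values with `y` plus a day-period phrase. A rough, pure-Python sketch of that rewrite (no pynini; `verbalize_time` and its two-entry suffix map are simplified stand-ins for the grammar, not the project's API):

```python
import re

def verbalize_time(token: str) -> str:
    # Pull `field: "value"` pairs out of the serialized token.
    fields = dict(re.findall(r'(\w+):\s*"([^"]*)"', token))
    # Subset of the grammar's period phrases, for illustration only.
    suffix_map = {"am": "de la mañana", "pm": "de la tarde"}
    parts = [fields["hours"]]
    if "minutes" in fields:
        parts += ["y", fields["minutes"]]
    if fields.get("suffix") in suffix_map:
        parts.append(suffix_map[fields["suffix"]])
    return " ".join(parts)

print(verbalize_time('time { hours: "doce" minutes: "media" suffix: "am" }'))
# doce y media de la mañana
```

The real FST additionally handles seconds, time zones, and the non-deterministic "menos"/"para las" styles shown in the diff.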
diff --git a/nemo_text_processing/text_normalization/es/verbalizers/verbalize.py b/nemo_text_processing/text_normalization/es/verbalizers/verbalize.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/verbalizers/verbalize.py
@@ -0,0 +1,73 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import GraphFst
+from nemo_text_processing.text_normalization.en.verbalizers.whitelist import WhiteListFst
+from nemo_text_processing.text_normalization.es.verbalizers.cardinal import CardinalFst
+from nemo_text_processing.text_normalization.es.verbalizers.date import DateFst
+from nemo_text_processing.text_normalization.es.verbalizers.decimals import DecimalFst
+from nemo_text_processing.text_normalization.es.verbalizers.electronic import ElectronicFst
+from nemo_text_processing.text_normalization.es.verbalizers.fraction import FractionFst
+from nemo_text_processing.text_normalization.es.verbalizers.measure import MeasureFst
+from nemo_text_processing.text_normalization.es.verbalizers.money import MoneyFst
+from nemo_text_processing.text_normalization.es.verbalizers.ordinal import OrdinalFst
+from nemo_text_processing.text_normalization.es.verbalizers.telephone import TelephoneFst
+from nemo_text_processing.text_normalization.es.verbalizers.time import TimeFst
+
+
+class VerbalizeFst(GraphFst):
+ """
+ Composes other verbalizer grammars.
+    For deployment, this grammar will be compiled and exported to an OpenFst finite state archive (FAR) file.
+ More details to deployment at NeMo/tools/text_processing_deployment.
+
+ Args:
+ deterministic: if True will provide a single transduction option,
+ for False multiple options (used for audio-based normalization)
+ """
+
+ def __init__(self, deterministic: bool = True):
+ super().__init__(name="verbalize", kind="verbalize", deterministic=deterministic)
+ cardinal = CardinalFst(deterministic=deterministic)
+ cardinal_graph = cardinal.fst
+ ordinal = OrdinalFst(deterministic=deterministic)
+ ordinal_graph = ordinal.fst
+ decimal = DecimalFst(deterministic=deterministic)
+ decimal_graph = decimal.fst
+ fraction = FractionFst(deterministic=deterministic)
+ fraction_graph = fraction.fst
+ date = DateFst(deterministic=deterministic)
+ date_graph = date.fst
+ measure = MeasureFst(cardinal=cardinal, decimal=decimal, fraction=fraction, deterministic=deterministic)
+ measure_graph = measure.fst
+ electronic = ElectronicFst(deterministic=deterministic)
+ electronic_graph = electronic.fst
+ whitelist_graph = WhiteListFst(deterministic=deterministic).fst
+ money_graph = MoneyFst(decimal=decimal, deterministic=deterministic).fst
+ telephone_graph = TelephoneFst(deterministic=deterministic).fst
+ time_graph = TimeFst(deterministic=deterministic).fst
+
+ graph = (
+ cardinal_graph
+ | measure_graph
+ | decimal_graph
+ | ordinal_graph
+ | date_graph
+ | electronic_graph
+ | money_graph
+ | fraction_graph
+ | whitelist_graph
+ | telephone_graph
+ | time_graph
+ )
+ self.fst = graph
diff --git a/nemo_text_processing/text_normalization/es/verbalizers/verbalize_final.py b/nemo_text_processing/text_normalization/es/verbalizers/verbalize_final.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/verbalizers/verbalize_final.py
@@ -0,0 +1,52 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import GraphFst, delete_extra_space, delete_space
+from nemo_text_processing.text_normalization.en.verbalizers.word import WordFst
+from nemo_text_processing.text_normalization.es.verbalizers.verbalize import VerbalizeFst
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ PYNINI_AVAILABLE = False
+
+
+class VerbalizeFinalFst(GraphFst):
+ """
+ Finite state transducer that verbalizes an entire sentence
+
+ Args:
+ deterministic: if True will provide a single transduction option,
+ for False multiple options (used for audio-based normalization)
+ """
+
+ def __init__(self, deterministic: bool = True):
+ super().__init__(name="verbalize_final", kind="verbalize", deterministic=deterministic)
+ verbalize = VerbalizeFst(deterministic=deterministic).fst
+ word = WordFst(deterministic=deterministic).fst
+ types = verbalize | word
+ graph = (
+ pynutil.delete("tokens")
+ + delete_space
+ + pynutil.delete("{")
+ + delete_space
+ + types
+ + delete_space
+ + pynutil.delete("}")
+ )
+ graph = delete_space + pynini.closure(graph + delete_extra_space) + graph + delete_space
+ self.fst = graph
diff --git a/nemo_text_processing/text_normalization/normalize.py b/nemo_text_processing/text_normalization/normalize.py
--- a/nemo_text_processing/text_normalization/normalize.py
+++ b/nemo_text_processing/text_normalization/normalize.py
@@ -46,8 +46,8 @@
class Normalizer:
"""
- Normalizer class that converts text from written to spoken form.
- Useful for TTS preprocessing.
+ Normalizer class that converts text from written to spoken form.
+ Useful for TTS preprocessing.
Args:
input_case: expected input capitalization
@@ -83,10 +83,11 @@ def __init__(
from nemo_text_processing.text_normalization.ru.taggers.tokenize_and_classify import ClassifyFst
from nemo_text_processing.text_normalization.ru.verbalizers.verbalize_final import VerbalizeFinalFst
elif lang == 'de':
- # Ru TN only support non-deterministic cases and produces multiple normalization options
- # use normalize_with_audio.py
from nemo_text_processing.text_normalization.de.taggers.tokenize_and_classify import ClassifyFst
from nemo_text_processing.text_normalization.de.verbalizers.verbalize_final import VerbalizeFinalFst
+ elif lang == 'es':
+ from nemo_text_processing.text_normalization.es.taggers.tokenize_and_classify import ClassifyFst
+ from nemo_text_processing.text_normalization.es.verbalizers.verbalize_final import VerbalizeFinalFst
self.tagger = ClassifyFst(
input_case=input_case,
deterministic=deterministic,
@@ -106,7 +107,7 @@ def __init__(
def normalize_list(self, texts: List[str], verbose=False, punct_post_process: bool = False) -> List[str]:
"""
- NeMo text normalizer
+ NeMo text normalizer
Args:
texts: list of input strings
@@ -357,7 +358,7 @@ def select_verbalizer(self, lattice: 'pynini.FstLike') -> str:
def parse_args():
parser = ArgumentParser()
parser.add_argument("input_string", help="input string", type=str)
- parser.add_argument("--language", help="language", choices=["en", "de"], default="en", type=str)
+ parser.add_argument("--language", help="language", choices=["en", "de", "es"], default="en", type=str)
parser.add_argument(
"--input_case", help="input capitalization", choices=["lower_cased", "cased"], default="cased", type=str
)
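The `parse_args` hunk above simply widens the `--language` choices to accept `es`. A minimal self-contained sketch of the same argparse pattern (parser name and sample arguments are illustrative, not the module's actual objects):

```python
import argparse

# Mirror of the CLI change: "es" joins the accepted --language values.
parser = argparse.ArgumentParser()
parser.add_argument("input_string", help="input string", type=str)
parser.add_argument("--language", help="language", choices=["en", "de", "es"], default="en", type=str)

args = parser.parse_args(["doscientos 1", "--language", "es"])
print(args.language)  # es
```

Passing any value outside `choices` makes argparse exit with an error, which is how the CLI rejects unsupported languages.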
diff --git a/nemo_text_processing/text_normalization/normalize_with_audio.py b/nemo_text_processing/text_normalization/normalize_with_audio.py
--- a/nemo_text_processing/text_normalization/normalize_with_audio.py
+++ b/nemo_text_processing/text_normalization/normalize_with_audio.py
@@ -55,15 +55,15 @@
"audio_data" - path to the audio file
"text" - raw text
"pred_text" - ASR model prediction
-
+
See https://github.com/NVIDIA/NeMo/blob/main/examples/asr/transcribe_speech.py on how to add ASR predictions
-
+
When the manifest is ready, run:
python normalize_with_audio.py \
--audio_data PATH/TO/MANIFEST.JSON \
- --language en
-
-
+ --language en
+
+
To run with a single audio file, specify path to audio and text with:
python normalize_with_audio.py \
--audio_data PATH/TO/AUDIO.WAV \
@@ -71,18 +71,18 @@
--text raw text OR PATH/TO/.TXT/FILE
--model QuartzNet15x5Base-En \
--verbose
-
+
To see possible normalization options for a text input without an audio file (could be used for debugging), run:
python python normalize_with_audio.py --text "RAW TEXT"
-
+
Specify `--cache_dir` to generate .far grammars once and re-used them for faster inference
"""
class NormalizerWithAudio(Normalizer):
"""
- Normalizer class that converts text from written to spoken form.
- Useful for TTS preprocessing.
+ Normalizer class that converts text from written to spoken form.
+ Useful for TTS preprocessing.
Args:
input_case: expected input capitalization
@@ -282,7 +282,7 @@ def parse_args():
"--input_case", help="input capitalization", choices=["lower_cased", "cased"], default="cased", type=str
)
parser.add_argument(
- "--language", help="Select target language", choices=["en", "ru", "de"], default="en", type=str
+ "--language", help="Select target language", choices=["en", "ru", "de", "es"], default="en", type=str
)
parser.add_argument("--audio_data", default=None, help="path to an audio file or .json manifest")
parser.add_argument(
diff --git a/tools/text_processing_deployment/pynini_export.py b/tools/text_processing_deployment/pynini_export.py
--- a/tools/text_processing_deployment/pynini_export.py
+++ b/tools/text_processing_deployment/pynini_export.py
@@ -67,7 +67,7 @@ def tn_grammars(**kwargs):
def export_grammars(output_dir, grammars):
"""
- Exports tokenizer_and_classify and verbalize Fsts as OpenFst finite state archive (FAR) files.
+ Exports tokenizer_and_classify and verbalize Fsts as OpenFst finite state archive (FAR) files.
Args:
output_dir: directory to export FAR files to. Subdirectories will be created for tagger and verbalizer respectively.
@@ -109,7 +109,7 @@ def parse_args():
if __name__ == '__main__':
args = parse_args()
- if args.language in ['ru', 'fr', 'es', 'vi'] and args.grammars == 'tn_grammars':
+ if args.language in ['ru', 'fr', 'vi'] and args.grammars == 'tn_grammars':
raise ValueError('Only ITN grammars could be deployed in Sparrowhawk for the selected languages.')
if args.language == 'en':
@@ -148,6 +148,10 @@ def parse_args():
from nemo_text_processing.inverse_text_normalization.es.verbalizers.verbalize import (
VerbalizeFst as ITNVerbalizeFst,
)
+ from nemo_text_processing.text_normalization.es.taggers.tokenize_and_classify import (
+ ClassifyFst as TNClassifyFst,
+ )
+ from nemo_text_processing.text_normalization.es.verbalizers.verbalize import VerbalizeFst as TNVerbalizeFst
elif args.language == 'fr':
from nemo_text_processing.inverse_text_normalization.fr.taggers.tokenize_and_classify import (
ClassifyFst as ITNClassifyFst,
diff --git a/tests/nemo_text_processing/es/data_text_normalization/test_cases_cardinal.txt b/tests/nemo_text_processing/es/data_text_normalization/test_cases_cardinal.txt
new file mode 100644
--- /dev/null
+++ b/tests/nemo_text_processing/es/data_text_normalization/test_cases_cardinal.txt
@@ -0,0 +1,86 @@
+1~un
+2~dos
+3~tres
+4~cuatro
+5~cinco
+6~seis
+7~siete
+8~ocho
+9~nueve
+10~diez
+11~once
+12~doce
+13~trece
+14~catorce
+15~quince
+16~dieciséis
+17~diecisiete
+18~dieciocho
+19~diecinueve
+20~veinte
+21~veintiún
+22~veintidós
+23~veintitrés
+24~veinticuatro
+25~veinticinco
+26~veintiséis
+27~veintisiete
+28~veintiocho
+29~veintinueve
+30~treinta
+31~treinta y un
+40~cuarenta
+41~cuarenta y un
+50~cincuenta
+51~cincuenta y un
+60~sesenta
+70~setenta
+80~ochenta
+90~noventa
+100~cien
+101~ciento un
+120~ciento veinte
+121~ciento veintiún
+130~ciento treinta
+131~ciento treinta y un
+200~doscientos
+201~doscientos un
+300~trescientos
+301~trescientos un
+1000~mil
+1 000~mil
+1.000~mil
+1001~mil un
+1010~mil diez
+1020~mil veinte
+1021~mil veintiún
+1100~mil cien
+1101~mil ciento un
+1110~mil ciento diez
+1111~mil ciento once
+1234~mil doscientos treinta y cuatro
+2000~dos mil
+2001~dos mil un
+2010~dos mil diez
+2020~dos mil veinte
+2100~dos mil cien
+2101~dos mil ciento un
+2110~dos mil ciento diez
+2111~dos mil ciento once
+2222~dos mil doscientos veintidós
+10000~diez mil
+10 000~diez mil
+10.000~diez mil
+100000~cien mil
+100 000~cien mil
+100.000~cien mil
+1 000 000~un millón
+1.000.000~un millón
+1 234 568~un millón doscientos treinta y cuatro mil quinientos sesenta y ocho
+2.000.000~dos millones
+1.000.000.000~mil millones
+2.000.000.000~dos mil millones
+3 000 000 000 000~tres billones
+3.000.000.000.000~tres billones
+100 000 000 000 000 000 000 000~cien mil trillones
+100 000 000 000 000 000 000 001~cien mil trillones un
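The test files in this patch pair a written form and its expected spoken form, separated by `~`, one case per line. A minimal reader for that simple one-line format might look like this (hypothetical helper, not part of the test suite; the multi-option `normalize_with_audio` files use a different layout):

```python
# Parse "written~spoken" lines into (input, expected) pairs.
def load_cases(text: str):
    cases = []
    for line in text.splitlines():
        if "~" in line:
            written, spoken = line.split("~", 1)
            cases.append((written, spoken))
    return cases

sample = "100~cien\n1.000~mil"
print(load_cases(sample))  # [('100', 'cien'), ('1.000', 'mil')]
```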
diff --git a/tests/nemo_text_processing/es/data_text_normalization/test_cases_date.txt b/tests/nemo_text_processing/es/data_text_normalization/test_cases_date.txt
new file mode 100644
--- /dev/null
+++ b/tests/nemo_text_processing/es/data_text_normalization/test_cases_date.txt
@@ -0,0 +1,13 @@
+1 enero~primero de enero
+5 febrero~cinco de febrero
+20 de marzo~veinte de marzo
+abril 30~treinta de abril
+31 marzo~treinta y uno de marzo
+10 mayo 1990~diez de mayo de mil novecientos noventa
+junio 11 2000~once de junio de dos mil
+30 julio del 2020~treinta de julio del dos mil veinte
+30-2-1990~treinta de febrero de mil novecientos noventa
+30/2/1990~treinta de febrero de mil novecientos noventa
+30.2.1990~treinta de febrero de mil novecientos noventa
+1990-2-30~treinta de febrero de mil novecientos noventa
+1990-02-30~treinta de febrero de mil novecientos noventa
\ No newline at end of file
diff --git a/tests/nemo_text_processing/es/data_text_normalization/test_cases_decimal.txt b/tests/nemo_text_processing/es/data_text_normalization/test_cases_decimal.txt
new file mode 100644
--- /dev/null
+++ b/tests/nemo_text_processing/es/data_text_normalization/test_cases_decimal.txt
@@ -0,0 +1,27 @@
+0,1~cero coma un
+0,01~cero coma cero un
+0,010~cero coma cero uno cero
+1,0101~uno coma cero uno cero un
+0,0~cero coma cero
+1,0~uno coma cero
+1,00~uno coma cero cero
+1,1~uno coma un
+233,32~doscientos treinta y tres coma treinta y dos
+32,22 millones~treinta y dos coma veintidós millones
+320 320,22 millones~trescientos veinte mil trescientos veinte coma veintidós millones
+5.002,232~cinco mil dos coma doscientos treinta y dos
+3,2 trillones~tres coma dos trillones
+3 millones~tres millones
+3 000 millones~tres mil millones
+3000 millones~tres mil millones
+3.000 millones~tres mil millones
+3.001 millones~tres mil un millones
+1 millón~un millón
+1 000 millones~mil millones
+1000 millones~mil millones
+1.000 millones~mil millones
+2,33302 millones~dos coma tres tres tres cero dos millones
+1,5332 millón~uno coma cinco tres tres dos millón
+1,53322 millón~uno coma cinco tres tres dos dos millón
+1,53321 millón~uno coma cinco tres tres dos un millón
+101,010101 millones~ciento uno coma cero uno cero uno cero un millones
\ No newline at end of file
diff --git a/tests/nemo_text_processing/es/data_text_normalization/test_cases_electronic.txt b/tests/nemo_text_processing/es/data_text_normalization/test_cases_electronic.txt
new file mode 100644
--- /dev/null
+++ b/tests/nemo_text_processing/es/data_text_normalization/test_cases_electronic.txt
@@ -0,0 +1,12 @@
+a.bc@gmail.com~a punto b c arroba gmail punto com
+cdf@abc.edu~c d f arroba a b c punto e d u
+abc@gmail.abc~a b c arroba gmail punto a b c
+abc@abc.com~a b c arroba a b c punto com
+asdf123@abc.com~a s d f uno dos tres arroba a b c punto com
+a1b2@abc.com~a uno b dos arroba a b c punto com
+ab3.sdd.3@gmail.com~a b tres punto s d d punto tres arroba gmail punto com
+https://www.nvidia.com~h t t p s dos puntos barra barra w w w punto nvidia punto com
+www.nvidia.com~w w w punto nvidia punto com
+www.abc.es/efg~w w w punto a b c punto es barra e f g
+www.abc.es~w w w punto a b c punto es
+http://www.ourdailynews.com.sm~h t t p dos puntos barra barra w w w punto o u r d a i l y n e w s punto com punto s m
\ No newline at end of file
diff --git a/tests/nemo_text_processing/es/data_text_normalization/test_cases_fraction.txt b/tests/nemo_text_processing/es/data_text_normalization/test_cases_fraction.txt
new file mode 100644
--- /dev/null
+++ b/tests/nemo_text_processing/es/data_text_normalization/test_cases_fraction.txt
@@ -0,0 +1,76 @@
+1/2~medio
+1 1/2~uno y medio
+3/2~tres medios
+1 3/2~uno y tres medios
+1/3~un tercio
+2/3~dos tercios
+1/4~un cuarto
+2/4~dos cuartos
+1/5~un quinto
+2/5~dos quintos
+1/6~un sexto
+2/6~dos sextos
+1/7~un séptimo
+2/7~dos séptimos
+1/8~un octavo
+2/8~dos octavos
+1/9~un noveno
+2/9~dos novenos
+1/10~un décimo
+2/10~dos décimos
+1/11~un onceavo
+1/12~un doceavo
+1/13~un treceavo
+1/14~un catorceavo
+1/15~un quinceavo
+1/16~un dieciseisavo
+1/17~un diecisieteavo
+1/18~un dieciochoavo
+1/19~un diecinueveavo
+1/20~un veinteavo
+1/21~un veintiunavo
+1/22~un veintidosavo
+1/30~un treintavo
+1/31~un treintaiunavo
+1/40~un cuarentavo
+1/41~un cuarentaiunavo
+1/50~un cincuentavo
+1/60~un sesentavo
+1/70~un setentavo
+1/80~un ochentavo
+1/90~un noventavo
+1/100~un centésimo
+2/100~dos centésimos
+1 2/100~uno y dos centésimos
+1/101~uno sobre ciento uno
+1/110~uno sobre ciento diez
+1/111~uno sobre ciento once
+1/112~uno sobre ciento doce
+1/123~uno sobre ciento veintitrés
+1/134~uno sobre ciento treinta y cuatro
+1/200~un ducentésimo
+1/201~uno sobre doscientos uno
+1/234~uno sobre doscientos treinta y cuatro
+1/300~un tricentésimo
+1/345~uno sobre trescientos cuarenta y cinco
+1/400~un cuadringentésimo
+1/456~uno sobre cuatrocientos cincuenta y seis
+1/500~un quingentésimo
+1/600~un sexcentésimo
+1/700~un septingentésimo
+1/800~un octingentésimo
+1/900~un noningentésimo
+1/1000~un milésimo
+2/1000~dos milésimos
+1 2/1000~uno y dos milésimos
+1/1001~uno sobre mil uno
+1/1100~uno sobre mil cien
+1/1200~uno sobre mil doscientos
+1/1234~uno sobre mil doscientos treinta y cuatro
+1/2000~un dosmilésimo
+1/5000~un cincomilésimo
+1/10000~un diezmilésimo
+1/100.000~un cienmilésimo
+1/1.000.000~un millonésimo
+1/100.000.000~un cienmillonésimo
+1/1.200.000.000~un mildoscientosmillonésimo
\ No newline at end of file
diff --git a/tests/nemo_text_processing/es/data_text_normalization/test_cases_measure.txt b/tests/nemo_text_processing/es/data_text_normalization/test_cases_measure.txt
new file mode 100644
--- /dev/null
+++ b/tests/nemo_text_processing/es/data_text_normalization/test_cases_measure.txt
@@ -0,0 +1,17 @@
+1,2-a~uno coma dos a
+a-5~a cinco
+200 m~doscientos metros
+3 h~tres horas
+1 h~una hora
+245 mph~doscientas cuarenta y cinco millas por hora
+2 kg~dos kilogramos
+60,2400 kg~sesenta coma dos cuatro cero cero kilogramos
+-60,2400 kg~menos sesenta coma dos cuatro cero cero kilogramos
+8,52 %~ocho coma cincuenta y dos por ciento
+-8,52 %~menos ocho coma cincuenta y dos por ciento
+1 %~uno por ciento
+3 cm~tres centímetros
+4 s~cuatro segundos
+5 l~cinco litros
+4,51/s~cuatro coma cincuenta y uno por segundo
+0,0101 s~cero coma cero uno cero un segundos
\ No newline at end of file
diff --git a/tests/nemo_text_processing/es/data_text_normalization/test_cases_money.txt b/tests/nemo_text_processing/es/data_text_normalization/test_cases_money.txt
new file mode 100644
--- /dev/null
+++ b/tests/nemo_text_processing/es/data_text_normalization/test_cases_money.txt
@@ -0,0 +1,24 @@
+$1~un dólar
+1 $~un dólar
+$1,50~un dólar cincuenta centavos
+1,50 $~un dólar cincuenta centavos
+£200.000.001~doscientos millones una libras
+200.000.001 £~doscientos millones una libras
+2 billones de euros~dos billones de euros
+€2 billones~dos billones de euros
+€ 2 billones~dos billones de euros
+€ 2,3 billones~dos coma tres billones de euros
+2,3 billones de euros~dos coma tres billones de euros
+€5,50~cinco euros cincuenta céntimos
+5,50 €~cinco euros cincuenta céntimos
+5,01 €~cinco euros un céntimo
+5,01 £~cinco libras un penique
+21 czk~veintiuna coronas checas
+czk21~veintiuna coronas checas
+czk21,1 millones~veintiuna coma una millones de coronas checas
+czk 5,50 billones~cinco coma cincuenta billones de coronas checas
+rs 5,50 billones~cinco coma cincuenta billones de rupias
+czk5,50 billones~cinco coma cincuenta billones de coronas checas
+0,55 $~cincuenta y cinco centavos
+1,01 $~un dólar un centavo
+¥12,05~doce yenes cinco centavos
\ No newline at end of file
diff --git a/tests/nemo_text_processing/es/data_text_normalization/test_cases_normalize_with_audio.txt b/tests/nemo_text_processing/es/data_text_normalization/test_cases_normalize_with_audio.txt
new file mode 100644
--- /dev/null
+++ b/tests/nemo_text_processing/es/data_text_normalization/test_cases_normalize_with_audio.txt
@@ -0,0 +1,120 @@
+~121
+ciento veintiún
+ciento veintiuno
+ciento veintiuna
+121
+~200
+doscientos
+doscientas
+200
+~201
+doscientos un
+doscientos uno
+doscientas una
+201
+~1
+un
+uno
+una
+1
+~550.000.001
+quinientos cincuenta millones un
+quinientos cincuenta millones una
+quinientos cincuenta millones uno
+550.000.001
+~500.501
+quinientos mil quinientos un
+quinientos mil quinientos uno
+quinientas mil quinientas una
+500.501
+~500.001.º
+quinientosmilésimo primero
+quingentésimo milésimo primero
+quinientosmilésimos primeros
+quingentésimos milésimos primeros
+500.001.º
+~500.001.ª
+quinientasmilésima primera
+quingentésima milésima primera
+quinientasmilésimas primeras
+quingentésimas milésimas primeras
+500.001.ª
+~11.ª
+décima primera
+decimoprimera
+décimas primeras
+decimoprimeras
+undécima
+undécimas
+11.ª
+~11.º
+décimo primero
+decimoprimero
+décimos primeros
+decimoprimeros
+undécimo
+undécimos
+11.º
+~12.º
+décimo segundo
+decimosegundo
+décimos segundos
+decimosegundos
+duodécimo
+duodécimos
+12.º
+~200,0101
+doscientos coma cero uno cero un
+doscientos coma cero uno cero uno
+doscientas coma cero una cero una
+200,0101
+~1.000.200,21
+un millón doscientos coma veintiún
+un millón doscientos coma veintiuno
+un millón doscientas coma veintiuna
+un millón doscientos coma dos un
+un millón doscientos coma dos uno
+un millón doscientas coma dos una
+1.000.200,21
+~1/12
+un doceavo
+una doceava parte
+un duodécimo
+una duodécima parte
+uno sobre doce
+1/12
+~5/200
+cinco ducentésimos
+cinco ducentésimas partes
+cinco sobre doscientos
+5/200
+~1 5/3
+uno y cinco tercios
+una y cinco terceras partes
+uno y cinco sobre tres
+una y cinco sobre tres
+~1/5/2020
+primero de mayo de dos mil veinte
+uno de mayo de dos mil veinte
+cinco de enero de dos mil veinte
+~$5,50
+cinco dólares con cincuenta
+cinco dólares y cincuenta
+cinco dólares cincuenta
+cinco dólares con cincuenta centavos
+cinco dólares y cincuenta centavos
+cinco dólares cincuenta centavos
+~2.30 h
+dos y treinta
+dos y media
+tres menos treinta
+tres menos media
+treinta para las tres
+~12.30 a.m.
+doce y treinta de la medianoche
+doce y treinta de la noche
+doce y media de la medianoche
+doce y media de la noche
+una menos treinta de la mañana
+una menos media de la mañana
+treinta para la una de la mañana
\ No newline at end of file
diff --git a/tests/nemo_text_processing/es/data_text_normalization/test_cases_ordinal.txt b/tests/nemo_text_processing/es/data_text_normalization/test_cases_ordinal.txt
new file mode 100644
--- /dev/null
+++ b/tests/nemo_text_processing/es/data_text_normalization/test_cases_ordinal.txt
@@ -0,0 +1,137 @@
+1.ᵉʳ~primer
+1.º~primero
+1.ª~primera
+2.º~segundo
+2.ª~segunda
+ii~segundo
+II~segundo
+3.ᵉʳ~tercer
+3.º~tercero
+3.ª~tercera
+4.º~cuarto
+4.ª~cuarta
+5.º~quinto
+5.ª~quinta
+6.º~sexto
+6.ª~sexta
+7.º~séptimo
+7.ª~séptima
+8.º~octavo
+8.ª~octava
+9.º~noveno
+9.ª~novena
+10.º~décimo
+10.ª~décima
+11.ᵉʳ~decimoprimer
+11.º~undécimo
+11.ª~undécima
+12.º~duodécimo
+12.ª~duodécima
+13.ᵉʳ~decimotercer
+13.º~decimotercero
+13.ª~decimotercera
+14.º~decimocuarto
+14.ª~decimocuarta
+15.º~decimoquinto
+15.ª~decimoquinta
+16.º~decimosexto
+16.ª~decimosexta
+17.º~decimoséptimo
+17.ª~decimoséptima
+18.º~decimoctavo
+18.ª~decimoctava
+19.º~decimonoveno
+19.ª~decimonovena
+20.º~vigésimo
+20.ª~vigésima
+21.ᵉʳ~vigesimoprimer
+21.º~vigesimoprimero
+21.ª~vigesimoprimera
+30.º~trigésimo
+30.ª~trigésima
+31.ᵉʳ~trigésimo primer
+31.º~trigésimo primero
+31.ª~trigésima primera
+40.º~cuadragésimo
+40.ª~cuadragésima
+41.ᵉʳ~cuadragésimo primer
+41.º~cuadragésimo primero
+41.ª~cuadragésima primera
+50.º~quincuagésimo
+50.ª~quincuagésima
+51.ᵉʳ~quincuagésimo primer
+51.º~quincuagésimo primero
+51.ª~quincuagésima primera
+60.º~sexagésimo
+60.ª~sexagésima
+70.º~septuagésimo
+70.ª~septuagésima
+80.º~octogésimo
+80.ª~octogésima
+90.º~nonagésimo
+90.ª~nonagésima
+100.º~centésimo
+100.ª~centésima
+101.ᵉʳ~centésimo primer
+101.º~centésimo primero
+101.ª~centésima primera
+134.º~centésimo trigésimo cuarto
+134.ª~centésima trigésima cuarta
+200.º~ducentésimo
+200.ª~ducentésima
+300.º~tricentésimo
+300.ª~tricentésima
+400.º~cuadringentésimo
+400.ª~cuadringentésima
+500.º~quingentésimo
+500.ª~quingentésima
+600.º~sexcentésimo
+600.ª~sexcentésima
+700.º~septingentésimo
+700.ª~septingentésima
+800.º~octingentésimo
+800.ª~octingentésima
+900.º~noningentésimo
+900.ª~noningentésima
+1000.º~milésimo
+1000.ª~milésima
+1001.ᵉʳ~milésimo primer
+1 000.º~milésimo
+1 000.ª~milésima
+1 001.ᵉʳ~milésimo primer
+1.000.º~milésimo
+1.000.ª~milésima
+1.001.ᵉʳ~milésimo primer
+1248.º~milésimo ducentésimo cuadragésimo octavo
+1248.ª~milésima ducentésima cuadragésima octava
+2000.º~dosmilésimo
+100 000.º~cienmilésimo
+i~primero
+I~primero
+ii~segundo
+II~segundo
+iii~tercero
+III~tercero
+iv~cuarto
+IV~cuarto
+V~quinto
+VI~sexto
+VII~sรฉptimo
+VIII~octavo
+IX~noveno
+X~dรฉcimo
+XI~undécimo
+XII~duodécimo
+XIII~decimotercero
+XX~vigésimo
+XXI~vigesimoprimero
+XXX~trigésimo
+XL~cuadragésimo
+L~quincuagésimo
+XC~nonagésimo
+C~centésimo
+CD~cuadringentésimo
+D~quingentésimo
+CM~noningentésimo
+999.º~noningentésimo nonagésimo noveno
+cmxcix~noningentésimo nonagésimo noveno
\ No newline at end of file
diff --git a/tests/nemo_text_processing/es/data_text_normalization/test_cases_telephone.txt b/tests/nemo_text_processing/es/data_text_normalization/test_cases_telephone.txt
new file mode 100644
--- /dev/null
+++ b/tests/nemo_text_processing/es/data_text_normalization/test_cases_telephone.txt
@@ -0,0 +1,3 @@
+123-123-5678~uno dos tres uno dos tres cinco seis siete ocho
+123-456-789~uno dos tres cuatro cinco seis siete ocho nueve
+1234-5678~uno dos tres cuatro cinco seis siete ocho
\ No newline at end of file
diff --git a/tests/nemo_text_processing/es/data_text_normalization/test_cases_time.txt b/tests/nemo_text_processing/es/data_text_normalization/test_cases_time.txt
new file mode 100644
--- /dev/null
+++ b/tests/nemo_text_processing/es/data_text_normalization/test_cases_time.txt
@@ -0,0 +1,26 @@
+1.00~una
+1:00~una
+01:00~una
+01 h~una
+3 h~tres horas
+1 h~una hora
+1.05 h~una y cinco
+01.05 h~una y cinco
+1.00 h~una
+1.00 a.m.~una de la mañana
+1.00 a.m~una de la mañana
+1.00 p.m.~una de la tarde
+1.00 p.m est~una de la tarde e s t
+1.00 est~una e s t
+5:02 est~cinco y dos e s t
+5:02 p.m pst~cinco y dos de la noche p s t
+5:02 p.m.~cinco y dos de la noche
+12.15~doce y cuarto
+12.15 a.m.~doce y cuarto de la noche
+12.15 p.m.~doce y cuarto del mediodía
+13.30~trece y media
+14.05~catorce y cinco
+24:50~veinticuatro y cincuenta
+3:02:32 pst~tres horas dos minutos y treinta y dos segundos p s t
+00:52~cero y cincuenta y dos
+0:52~cero y cincuenta y dos
diff --git a/tests/nemo_text_processing/es/data_text_normalization/test_cases_whitelist.txt b/tests/nemo_text_processing/es/data_text_normalization/test_cases_whitelist.txt
new file mode 100644
--- /dev/null
+++ b/tests/nemo_text_processing/es/data_text_normalization/test_cases_whitelist.txt
@@ -0,0 +1,3 @@
+el dr.~el doctor
+sr. rodriguez~señor rodriguez
+182 esq. toledo~ciento ochenta y dos esquina toledo
\ No newline at end of file
diff --git a/tests/nemo_text_processing/es/data_text_normalization/test_cases_word.txt b/tests/nemo_text_processing/es/data_text_normalization/test_cases_word.txt
new file mode 100644
--- /dev/null
+++ b/tests/nemo_text_processing/es/data_text_normalization/test_cases_word.txt
@@ -0,0 +1,48 @@
+~
+yahoo!~yahoo!
+veinte!~veinte!
+โ~โ
+aaa~aaa
+aabach~aabach
+aabenraa~aabenraa
+aabye~aabye
+aaccessed~aaccessed
+aach~aach
+aachen's~aachen's
+aadri~aadri
+aafia~aafia
+aagaard~aagaard
+aagadu~aagadu
+aagard~aagard
+aagathadi~aagathadi
+aaghart's~aaghart's
+aagnes~aagnes
+aagomoni~aagomoni
+aagon~aagon
+aagoo~aagoo
+aagot~aagot
+aahar~aahar
+aahh~aahh
+aahperd~aahperd
+aaibinterstate~aaibinterstate
+aajab~aajab
+aakasa~aakasa
+aakervik~aakervik
+aakirkeby~aakirkeby
+aalam~aalam
+aalbaek~aalbaek
+aaldiu~aaldiu
+aalem~aalem
+a'ali~a'ali
+aalilaassamthey~aalilaassamthey
+aalin~aalin
+aaliyan~aaliyan
+aaliyan's~aaliyan's
+aamadu~aamadu
+aamara~aamara
+aambala~aambala
+aamera~aamera
+aamer's~aamer's
+aamina~aamina
+aaminah~aaminah
+aamjiwnaang~aamjiwnaang
diff --git a/tests/nemo_text_processing/es/test_cardinal.py b/tests/nemo_text_processing/es/test_cardinal.py
--- a/tests/nemo_text_processing/es/test_cardinal.py
+++ b/tests/nemo_text_processing/es/test_cardinal.py
@@ -22,7 +22,8 @@
class TestCardinal:
- inverse_normalizer_es = (
+
+ inverse_normalizer = (
InverseNormalizer(lang='es', cache_dir=CACHE_DIR, overwrite_cache=False) if PYNINI_AVAILABLE else None
)
@@ -32,6 +33,34 @@ class TestCardinal:
)
@pytest.mark.run_only_on('CPU')
@pytest.mark.unit
- def test_denorm_es(self, test_input, expected):
- pred = self.inverse_normalizer_es.inverse_normalize(test_input, verbose=False)
+ def test_denorm(self, test_input, expected):
+ pred = self.inverse_normalizer.inverse_normalize(test_input, verbose=False)
assert pred == expected
+
+ normalizer = (
+ Normalizer(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE
+ else None
+ )
+
+ normalizer_with_audio = (
+ NormalizerWithAudio(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE and CACHE_DIR
+ else None
+ )
+
+ @parameterized.expand(parse_test_case_file('es/data_text_normalization/test_cases_cardinal.txt'))
+ @pytest.mark.skipif(
+ not PYNINI_AVAILABLE, reason="`pynini` not installed, please install via nemo_text_processing/setup.sh"
+ )
+ @pytest.mark.run_only_on('CPU')
+ @pytest.mark.unit
+ def test_norm(self, test_input, expected):
+ pred = self.normalizer.normalize(test_input, verbose=False)
+ assert pred == expected
+
+ if self.normalizer_with_audio:
+ pred_non_deterministic = self.normalizer_with_audio.normalize(
+ test_input, n_tagged=1000, punct_post_process=False
+ )
+ assert expected in pred_non_deterministic
diff --git a/tests/nemo_text_processing/es/test_date.py b/tests/nemo_text_processing/es/test_date.py
--- a/tests/nemo_text_processing/es/test_date.py
+++ b/tests/nemo_text_processing/es/test_date.py
@@ -22,7 +22,7 @@
class TestDate:
- inverse_normalizer_es = (
+ inverse_normalizer = (
InverseNormalizer(lang='es', cache_dir=CACHE_DIR, overwrite_cache=False) if PYNINI_AVAILABLE else None
)
@@ -32,6 +32,34 @@ class TestDate:
)
@pytest.mark.run_only_on('CPU')
@pytest.mark.unit
- def test_denorm_es(self, test_input, expected):
- pred = self.inverse_normalizer_es.inverse_normalize(test_input, verbose=False)
+ def test_denorm(self, test_input, expected):
+ pred = self.inverse_normalizer.inverse_normalize(test_input, verbose=False)
assert pred == expected
+
+ normalizer = (
+ Normalizer(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE
+ else None
+ )
+
+ normalizer_with_audio = (
+ NormalizerWithAudio(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE and CACHE_DIR
+ else None
+ )
+
+ @parameterized.expand(parse_test_case_file('es/data_text_normalization/test_cases_date.txt'))
+ @pytest.mark.skipif(
+ not PYNINI_AVAILABLE, reason="`pynini` not installed, please install via nemo_text_processing/setup.sh"
+ )
+ @pytest.mark.run_only_on('CPU')
+ @pytest.mark.unit
+ def test_norm(self, test_input, expected):
+ pred = self.normalizer.normalize(test_input, verbose=False)
+ assert pred == expected
+
+ if self.normalizer_with_audio:
+ pred_non_deterministic = self.normalizer_with_audio.normalize(
+ test_input, n_tagged=1000, punct_post_process=False
+ )
+ assert expected in pred_non_deterministic
diff --git a/tests/nemo_text_processing/es/test_decimal.py b/tests/nemo_text_processing/es/test_decimal.py
--- a/tests/nemo_text_processing/es/test_decimal.py
+++ b/tests/nemo_text_processing/es/test_decimal.py
@@ -22,7 +22,7 @@
class TestDecimal:
- inverse_normalizer_es = (
+ inverse_normalizer = (
InverseNormalizer(lang='es', cache_dir=CACHE_DIR, overwrite_cache=False) if PYNINI_AVAILABLE else None
)
@@ -32,6 +32,34 @@ class TestDecimal:
)
@pytest.mark.run_only_on('CPU')
@pytest.mark.unit
- def test_denorm_es(self, test_input, expected):
- pred = self.inverse_normalizer_es.inverse_normalize(test_input, verbose=False)
+ def test_denorm(self, test_input, expected):
+ pred = self.inverse_normalizer.inverse_normalize(test_input, verbose=False)
assert pred == expected
+
+ normalizer = (
+ Normalizer(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE
+ else None
+ )
+
+ normalizer_with_audio = (
+ NormalizerWithAudio(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE and CACHE_DIR
+ else None
+ )
+
+ @parameterized.expand(parse_test_case_file('es/data_text_normalization/test_cases_decimal.txt'))
+ @pytest.mark.skipif(
+ not PYNINI_AVAILABLE, reason="`pynini` not installed, please install via nemo_text_processing/setup.sh"
+ )
+ @pytest.mark.run_only_on('CPU')
+ @pytest.mark.unit
+ def test_norm(self, test_input, expected):
+ pred = self.normalizer.normalize(test_input, verbose=False)
+ assert pred == expected
+
+ if self.normalizer_with_audio:
+ pred_non_deterministic = self.normalizer_with_audio.normalize(
+ test_input, n_tagged=1000, punct_post_process=False
+ )
+ assert expected in pred_non_deterministic
diff --git a/tests/nemo_text_processing/es/test_electronic.py b/tests/nemo_text_processing/es/test_electronic.py
--- a/tests/nemo_text_processing/es/test_electronic.py
+++ b/tests/nemo_text_processing/es/test_electronic.py
@@ -35,3 +35,31 @@ class TestElectronic:
def test_denorm_es(self, test_input, expected):
pred = self.inverse_normalizer_es.inverse_normalize(test_input, verbose=False)
assert pred == expected
+
+ normalizer = (
+ Normalizer(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE
+ else None
+ )
+
+ normalizer_with_audio = (
+ NormalizerWithAudio(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE and CACHE_DIR
+ else None
+ )
+
+ @parameterized.expand(parse_test_case_file('es/data_text_normalization/test_cases_electronic.txt'))
+ @pytest.mark.skipif(
+ not PYNINI_AVAILABLE, reason="`pynini` not installed, please install via nemo_text_processing/setup.sh"
+ )
+ @pytest.mark.run_only_on('CPU')
+ @pytest.mark.unit
+ def test_norm(self, test_input, expected):
+ pred = self.normalizer.normalize(test_input, verbose=False)
+ assert pred == expected
+
+ if self.normalizer_with_audio:
+ pred_non_deterministic = self.normalizer_with_audio.normalize(
+ test_input, n_tagged=1000, punct_post_process=False
+ )
+ assert expected in pred_non_deterministic
diff --git a/tests/nemo_text_processing/es/test_fraction.py b/tests/nemo_text_processing/es/test_fraction.py
new file mode 100644
--- /dev/null
+++ b/tests/nemo_text_processing/es/test_fraction.py
@@ -0,0 +1,51 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+import pytest
+from nemo_text_processing.text_normalization.normalize import Normalizer
+from nemo_text_processing.text_normalization.normalize_with_audio import NormalizerWithAudio
+from parameterized import parameterized
+
+from ..utils import CACHE_DIR, PYNINI_AVAILABLE, parse_test_case_file
+
+
+class TestFraction:
+ normalizer = (
+ Normalizer(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE
+ else None
+ )
+
+ normalizer_with_audio = (
+ NormalizerWithAudio(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE and CACHE_DIR
+ else None
+ )
+
+ @parameterized.expand(parse_test_case_file('es/data_text_normalization/test_cases_fraction.txt'))
+ @pytest.mark.skipif(
+ not PYNINI_AVAILABLE, reason="`pynini` not installed, please install via nemo_text_processing/setup.sh"
+ )
+ @pytest.mark.run_only_on('CPU')
+ @pytest.mark.unit
+ def test_norm(self, test_input, expected):
+ pred = self.normalizer.normalize(test_input, verbose=False)
+ assert pred == expected
+
+ if self.normalizer_with_audio:
+ pred_non_deterministic = self.normalizer_with_audio.normalize(
+ test_input, n_tagged=1000, punct_post_process=False
+ )
+ assert expected in pred_non_deterministic
diff --git a/tests/nemo_text_processing/es/test_measure.py b/tests/nemo_text_processing/es/test_measure.py
--- a/tests/nemo_text_processing/es/test_measure.py
+++ b/tests/nemo_text_processing/es/test_measure.py
@@ -36,3 +36,31 @@ class TestMeasure:
def test_denorm_es(self, test_input, expected):
pred = self.inverse_normalizer_es.inverse_normalize(test_input, verbose=False)
assert pred == expected
+
+ normalizer = (
+ Normalizer(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE
+ else None
+ )
+
+ normalizer_with_audio = (
+ NormalizerWithAudio(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE and CACHE_DIR
+ else None
+ )
+
+ @parameterized.expand(parse_test_case_file('es/data_text_normalization/test_cases_measure.txt'))
+ @pytest.mark.skipif(
+ not PYNINI_AVAILABLE, reason="`pynini` not installed, please install via nemo_text_processing/setup.sh"
+ )
+ @pytest.mark.run_only_on('CPU')
+ @pytest.mark.unit
+ def test_norm(self, test_input, expected):
+ pred = self.normalizer.normalize(test_input, verbose=False)
+ assert pred == expected
+
+ if self.normalizer_with_audio:
+ pred_non_deterministic = self.normalizer_with_audio.normalize(
+ test_input, n_tagged=1000, punct_post_process=False
+ )
+ assert expected in pred_non_deterministic
diff --git a/tests/nemo_text_processing/es/test_money.py b/tests/nemo_text_processing/es/test_money.py
--- a/tests/nemo_text_processing/es/test_money.py
+++ b/tests/nemo_text_processing/es/test_money.py
@@ -23,7 +23,7 @@
class TestMoney:
- inverse_normalizer_es = (
+ inverse_normalizer = (
InverseNormalizer(lang='es', cache_dir=CACHE_DIR, overwrite_cache=False) if PYNINI_AVAILABLE else None
)
@@ -33,6 +33,34 @@ class TestMoney:
)
@pytest.mark.run_only_on('CPU')
@pytest.mark.unit
- def test_denorm_es(self, test_input, expected):
- pred = self.inverse_normalizer_es.inverse_normalize(test_input, verbose=False)
+ def test_denorm(self, test_input, expected):
+ pred = self.inverse_normalizer.inverse_normalize(test_input, verbose=False)
assert pred == expected
+
+ normalizer = (
+ Normalizer(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE
+ else None
+ )
+
+ normalizer_with_audio = (
+ NormalizerWithAudio(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE and CACHE_DIR
+ else None
+ )
+
+ @parameterized.expand(parse_test_case_file('es/data_text_normalization/test_cases_money.txt'))
+ @pytest.mark.skipif(
+ not PYNINI_AVAILABLE, reason="`pynini` not installed, please install via nemo_text_processing/setup.sh"
+ )
+ @pytest.mark.run_only_on('CPU')
+ @pytest.mark.unit
+ def test_norm(self, test_input, expected):
+ pred = self.normalizer.normalize(test_input, verbose=False)
+ assert pred == expected
+
+ if self.normalizer_with_audio:
+ pred_non_deterministic = self.normalizer_with_audio.normalize(
+ test_input, n_tagged=1000, punct_post_process=False
+ )
+ assert expected in pred_non_deterministic
diff --git a/tests/nemo_text_processing/es/test_normalization_with_audio.py b/tests/nemo_text_processing/es/test_normalization_with_audio.py
new file mode 100644
--- /dev/null
+++ b/tests/nemo_text_processing/es/test_normalization_with_audio.py
@@ -0,0 +1,43 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import pytest
+from nemo_text_processing.text_normalization.normalize_with_audio import NormalizerWithAudio
+from parameterized import parameterized
+
+from ..utils import CACHE_DIR, PYNINI_AVAILABLE, get_test_cases_multiple
+
+
+class TestNormalizeWithAudio:
+
+ normalizer_es = (
+ NormalizerWithAudio(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE
+ else None
+ )
+
+ @parameterized.expand(get_test_cases_multiple('es/data_text_normalization/test_cases_normalize_with_audio.txt'))
+ @pytest.mark.skipif(
+ not PYNINI_AVAILABLE, reason="`pynini` not installed, please install via nemo_text_processing/setup.sh"
+ )
+ @pytest.mark.run_only_on('CPU')
+ @pytest.mark.unit
+ def test_norm(self, test_input, expected):
+ pred = self.normalizer_es.normalize(test_input, n_tagged=1000, punct_post_process=False)
+ print(expected)
+ print("pred")
+ print(pred)
+ assert len(set(pred).intersection(set(expected))) == len(
+ expected
+ ), f'missing: {set(expected).difference(set(pred))}'
diff --git a/tests/nemo_text_processing/es/test_ordinal.py b/tests/nemo_text_processing/es/test_ordinal.py
--- a/tests/nemo_text_processing/es/test_ordinal.py
+++ b/tests/nemo_text_processing/es/test_ordinal.py
@@ -23,7 +23,7 @@
class TestOrdinal:
- inverse_normalizer_es = (
+ inverse_normalizer = (
InverseNormalizer(lang='es', cache_dir=CACHE_DIR, overwrite_cache=False) if PYNINI_AVAILABLE else None
)
@@ -33,6 +33,33 @@ class TestOrdinal:
)
@pytest.mark.run_only_on('CPU')
@pytest.mark.unit
- def test_denorm_es(self, test_input, expected):
- pred = self.inverse_normalizer_es.inverse_normalize(test_input, verbose=False)
+ def test_denorm(self, test_input, expected):
+ pred = self.inverse_normalizer.inverse_normalize(test_input, verbose=False)
assert pred == expected
+
+ normalizer = (
+ Normalizer(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE
+ else None
+ )
+ normalizer_with_audio = (
+ NormalizerWithAudio(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE and CACHE_DIR
+ else None
+ )
+
+ @parameterized.expand(parse_test_case_file('es/data_text_normalization/test_cases_ordinal.txt'))
+ @pytest.mark.skipif(
+ not PYNINI_AVAILABLE, reason="`pynini` not installed, please install via nemo_text_processing/setup.sh"
+ )
+ @pytest.mark.run_only_on('CPU')
+ @pytest.mark.unit
+ def test_norm(self, test_input, expected):
+ pred = self.normalizer.normalize(test_input, verbose=False)
+ assert pred == expected
+
+ if self.normalizer_with_audio:
+ pred_non_deterministic = self.normalizer_with_audio.normalize(
+ test_input, n_tagged=30, punct_post_process=False,
+ )
+ assert expected in pred_non_deterministic
diff --git a/tests/nemo_text_processing/es/test_sparrowhawk_normalization.sh b/tests/nemo_text_processing/es/test_sparrowhawk_normalization.sh
new file mode 100644
--- /dev/null
+++ b/tests/nemo_text_processing/es/test_sparrowhawk_normalization.sh
@@ -0,0 +1,84 @@
+#! /bin/sh
+
+PROJECT_DIR=/workspace/tests
+
+runtest () {
+ input=$1
+ cd /workspace/sparrowhawk/documentation/grammars
+
+ # read test file
+ while read testcase; do
+ IFS='~' read written spoken <<< $testcase
+ denorm_pred=$(echo $written | normalizer_main --config=sparrowhawk_configuration.ascii_proto 2>&1 | tail -n 1)
+
+ # trim white space
+ spoken="$(echo -e "${spoken}" | sed -e 's/^[[:space:]]*//' -e 's/[[:space:]]*$//')"
+ denorm_pred="$(echo -e "${denorm_pred}" | sed -e 's/^[[:space:]]*//' -e 's/[[:space:]]*$//')"
+
+ # input expected actual
+ assertEquals "$written" "$spoken" "$denorm_pred"
+ done < "$input"
+}
+
+testTNCardinal() {
+ input=$PROJECT_DIR/es/data_text_normalization/test_cases_cardinal.txt
+ runtest $input
+}
+
+testTNDate() {
+ input=$PROJECT_DIR/es/data_text_normalization/test_cases_date.txt
+ runtest $input
+}
+
+testTNDecimal() {
+ input=$PROJECT_DIR/es/data_text_normalization/test_cases_decimal.txt
+ runtest $input
+}
+
+testTNElectronic() {
+ input=$PROJECT_DIR/es/data_text_normalization/test_cases_electronic.txt
+ runtest $input
+}
+
+testTNFraction() {
+ input=$PROJECT_DIR/es/data_text_normalization/test_cases_fraction.txt
+ runtest $input
+}
+
+testTNMoney() {
+ input=$PROJECT_DIR/es/data_text_normalization/test_cases_money.txt
+ runtest $input
+}
+
+testTNOrdinal() {
+ input=$PROJECT_DIR/es/data_text_normalization/test_cases_ordinal.txt
+ runtest $input
+}
+
+testTNTelephone() {
+    input=$PROJECT_DIR/es/data_text_normalization/test_cases_telephone.txt
+ runtest $input
+}
+
+testTNTime() {
+ input=$PROJECT_DIR/es/data_text_normalization/test_cases_time.txt
+ runtest $input
+}
+
+testTNMeasure() {
+ input=$PROJECT_DIR/es/data_text_normalization/test_cases_measure.txt
+ runtest $input
+}
+
+testTNWhitelist() {
+ input=$PROJECT_DIR/es/data_text_normalization/test_cases_whitelist.txt
+ runtest $input
+}
+
+testTNWord() {
+ input=$PROJECT_DIR/es/data_text_normalization/test_cases_word.txt
+ runtest $input
+}
+
+# Load shUnit2
+. $PROJECT_DIR/../shunit2/shunit2
diff --git a/tests/nemo_text_processing/es/test_telephone.py b/tests/nemo_text_processing/es/test_telephone.py
--- a/tests/nemo_text_processing/es/test_telephone.py
+++ b/tests/nemo_text_processing/es/test_telephone.py
@@ -36,3 +36,31 @@ class TestTelephone:
def test_denorm_es(self, test_input, expected):
pred = self.inverse_normalizer_es.inverse_normalize(test_input, verbose=False)
assert pred == expected
+
+ normalizer = (
+ Normalizer(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE
+ else None
+ )
+
+ normalizer_with_audio = (
+ NormalizerWithAudio(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE and CACHE_DIR
+ else None
+ )
+
+ @parameterized.expand(parse_test_case_file('es/data_text_normalization/test_cases_telephone.txt'))
+ @pytest.mark.skipif(
+ not PYNINI_AVAILABLE, reason="`pynini` not installed, please install via nemo_text_processing/setup.sh"
+ )
+ @pytest.mark.run_only_on('CPU')
+ @pytest.mark.unit
+ def test_norm(self, test_input, expected):
+ pred = self.normalizer.normalize(test_input, verbose=False)
+ assert pred == expected
+
+ if self.normalizer_with_audio:
+ pred_non_deterministic = self.normalizer_with_audio.normalize(
+ test_input, n_tagged=1000, punct_post_process=False
+ )
+ assert expected in pred_non_deterministic
diff --git a/tests/nemo_text_processing/es/test_time.py b/tests/nemo_text_processing/es/test_time.py
--- a/tests/nemo_text_processing/es/test_time.py
+++ b/tests/nemo_text_processing/es/test_time.py
@@ -35,3 +35,31 @@ class TestTime:
def test_denorm_es(self, test_input, expected):
pred = self.inverse_normalizer_es.inverse_normalize(test_input, verbose=False)
assert pred == expected
+
+ normalizer = (
+ Normalizer(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE
+ else None
+ )
+
+ normalizer_with_audio = (
+ NormalizerWithAudio(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE and CACHE_DIR
+ else None
+ )
+
+ @parameterized.expand(parse_test_case_file('es/data_text_normalization/test_cases_time.txt'))
+ @pytest.mark.skipif(
+ not PYNINI_AVAILABLE, reason="`pynini` not installed, please install via nemo_text_processing/setup.sh"
+ )
+ @pytest.mark.run_only_on('CPU')
+ @pytest.mark.unit
+ def test_norm(self, test_input, expected):
+ pred = self.normalizer.normalize(test_input, verbose=False)
+ assert pred == expected
+
+ if self.normalizer_with_audio:
+ pred_non_deterministic = self.normalizer_with_audio.normalize(
+ test_input, n_tagged=1000, punct_post_process=False
+ )
+ assert expected in pred_non_deterministic
diff --git a/tests/nemo_text_processing/es/test_whitelist.py b/tests/nemo_text_processing/es/test_whitelist.py
--- a/tests/nemo_text_processing/es/test_whitelist.py
+++ b/tests/nemo_text_processing/es/test_whitelist.py
@@ -35,3 +35,30 @@ class TestWhitelist:
def test_denorm_es(self, test_input, expected):
pred = self.inverse_normalizer_es.inverse_normalize(test_input, verbose=False)
assert pred == expected
+
+ normalizer = (
+ Normalizer(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE
+ else None
+ )
+ normalizer_with_audio = (
+ NormalizerWithAudio(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE and CACHE_DIR
+ else None
+ )
+
+ @parameterized.expand(parse_test_case_file('es/data_text_normalization/test_cases_whitelist.txt'))
+ @pytest.mark.skipif(
+ not PYNINI_AVAILABLE, reason="`pynini` not installed, please install via nemo_text_processing/setup.sh"
+ )
+ @pytest.mark.run_only_on('CPU')
+ @pytest.mark.unit
+ def test_norm(self, test_input, expected):
+ pred = self.normalizer.normalize(test_input, verbose=False)
+ assert pred == expected
+
+ if self.normalizer_with_audio:
+ pred_non_deterministic = self.normalizer_with_audio.normalize(
+ test_input, n_tagged=10, punct_post_process=False
+ )
+ assert expected in pred_non_deterministic
diff --git a/tests/nemo_text_processing/es/test_word.py b/tests/nemo_text_processing/es/test_word.py
--- a/tests/nemo_text_processing/es/test_word.py
+++ b/tests/nemo_text_processing/es/test_word.py
@@ -35,3 +35,30 @@ class TestWord:
def test_denorm_es(self, test_input, expected):
pred = self.inverse_normalizer_es.inverse_normalize(test_input, verbose=False)
assert pred == expected
+
+ normalizer_es = (
+ Normalizer(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE
+ else None
+ )
+ normalizer_with_audio_es = (
+ NormalizerWithAudio(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE and CACHE_DIR
+ else None
+ )
+
+ @parameterized.expand(parse_test_case_file('es/data_text_normalization/test_cases_word.txt'))
+ @pytest.mark.skipif(
+ not PYNINI_AVAILABLE, reason="`pynini` not installed, please install via nemo_text_processing/setup.sh"
+ )
+ @pytest.mark.run_only_on('CPU')
+ @pytest.mark.unit
+ def test_norm(self, test_input, expected):
+ pred = self.normalizer_es.normalize(test_input, verbose=False)
+ assert pred == expected, f"input: {test_input}"
+
+ if self.normalizer_with_audio_es:
+ pred_non_deterministic = self.normalizer_with_audio_es.normalize(
+ test_input, n_tagged=150, punct_post_process=False
+ )
+ assert expected in pred_non_deterministic, f"input: {test_input}"
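The test-case files added by this patch all use a single `written~spoken` line format (the shell runner above splits on `IFS='~'`). A minimal Python sketch of that parsing, for illustration only — the real helpers are `parse_test_case_file` and `get_test_cases_multiple` in the tests' `utils.py`, whose exact behavior is not reproduced here:

```python
def parse_test_cases(lines):
    """Split 'written~spoken' test lines into (input, expected) pairs."""
    cases = []
    for line in lines:
        line = line.rstrip("\n")
        if not line:
            continue  # skip blank lines
        # partition on the first '~': left side is the written form,
        # right side is the expected spoken form
        written, _, spoken = line.partition("~")
        cases.append((written, spoken))
    return cases


cases = parse_test_cases(["el dr.~el doctor", "1.00~una", ""])
print(cases)  # [('el dr.', 'el doctor'), ('1.00', 'una')]
```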
|
1.0
| ||||
NVIDIA__NeMo-7582
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Ubuntu 22.04 Python 3.11 [asr]: multiple errors `dataclasses ValueError: mutable default * for field * is not allowed: use default_factory`
**Describe the bug**
After installing the latest stable release (`1.19.1`) from PyPI, or the latest commit, with `[asr]` extras, I'm getting this error from `hydra-core==1.2.0` when trying to import `nemo.collections.asr`:
```
Traceback (most recent call last):
File "/home/alex/T7/src/speaker_verification/nemo_speaker/test.py", line 5, in <module>
import nemo.collections.asr as nemo_asr
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/__init__.py", line 15, in <module>
from nemo.collections.asr import data, losses, models, modules
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/losses/__init__.py", line 15, in <module>
from nemo.collections.asr.losses.angularloss import AngularSoftmaxLoss
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/losses/angularloss.py", line 18, in <module>
from nemo.core.classes import Loss, Typing, typecheck
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/core/__init__.py", line 16, in <module>
from nemo.core.classes import *
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/core/classes/__init__.py", line 16, in <module>
import hydra
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/__init__.py", line 5, in <module>
from hydra import utils
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/utils.py", line 8, in <module>
import hydra._internal.instantiate._instantiate2
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 12, in <module>
from hydra._internal.utils import _locate
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/_internal/utils.py", line 18, in <module>
from hydra.core.utils import get_valid_filename, validate_config_path
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/core/utils.py", line 20, in <module>
from hydra.core.hydra_config import HydraConfig
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/core/hydra_config.py", line 6, in <module>
from hydra.conf import HydraConf
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/conf/__init__.py", line 46, in <module>
class JobConf:
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/conf/__init__.py", line 75, in JobConf
@dataclass
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1230, in dataclass
return wrap(cls)
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1220, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 958, in _process_class
cls_fields.append(_get_field(cls, name, type, kw_only))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 815, in _get_field
raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'hydra.conf.JobConf.JobConfig.OverrideDirname'> for field override_dirname is not allowed: use default_factory
```
If I then manually upgrade `hydra-core` to the current latest (`1.3.2`), I get similar errors from `nemo-toolkit`:
```
Traceback (most recent call last):
File "/home/alex/T7/src/speaker_verification/nemo_speaker/test.py", line 5, in <module>
import nemo.collections.asr as nemo_asr
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/__init__.py", line 15, in <module>
from nemo.collections.asr import data, losses, models, modules
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/losses/__init__.py", line 16, in <module>
from nemo.collections.asr.losses.audio_losses import SDRLoss
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/losses/audio_losses.py", line 21, in <module>
from nemo.collections.asr.parts.preprocessing.features import make_seq_mask_like
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/__init__.py", line 16, in <module>
from nemo.collections.asr.parts.preprocessing.features import FeaturizerFactory, FilterbankFeatures, WaveformFeaturizer
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/features.py", line 44, in <module>
from nemo.collections.asr.parts.preprocessing.perturb import AudioAugmentor
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/perturb.py", line 50, in <module>
from nemo.collections.common.parts.preprocessing import collections, parsers
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/common/__init__.py", line 16, in <module>
from nemo.collections.common import data, losses, parts, tokenizers
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/common/parts/__init__.py", line 15, in <module>
from nemo.collections.common.parts.adapter_modules import LinearAdapter, LinearAdapterConfig
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/common/parts/adapter_modules.py", line 147, in <module>
@dataclass
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1230, in dataclass
return wrap(cls)
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1220, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 958, in _process_class
cls_fields.append(_get_field(cls, name, type, kw_only))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 815, in _get_field
raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'nemo.core.classes.mixins.adapter_mixin_strategies.ResidualAddAdapterStrategyConfig'> for field adapter_strategy is not allowed: use default_factory
```
It's easy to fix with a patch like this:
```diff
--- nemo/collections/common/parts/adapter_modules.py.orig 2023-08-04 13:55:53.464534800 +0200
+++ nemo/collections/common/parts/adapter_modules.py 2023-08-04 14:05:45.579537700 +0200
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass, is_dataclass
+from dataclasses import dataclass, is_dataclass, field
from typing import Any, Optional
from hydra.utils import instantiate
@@ -151,5 +151,5 @@ class LinearAdapterConfig:
activation: str = 'swish'
norm_position: str = 'pre'
dropout: float = 0.0
- adapter_strategy: Optional[Any] = adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
+ adapter_strategy: Optional[Any] = field(default_factory=adapter_mixin_strategies.ResidualAddAdapterStrategyConfig)
_target_: str = "{0}.{1}".format(LinearAdapter.__module__, LinearAdapter.__name__)
```
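For reference, the underlying Python 3.11 behavior can be reproduced in isolation. In the sketch below, `StrategyConfig` is a hypothetical stand-in for `ResidualAddAdapterStrategyConfig`: on Python 3.11+ a plain dataclass instance is rejected as a field default, while the `default_factory` form used in the patch is accepted on all versions.

```python
import sys
from dataclasses import dataclass, field


@dataclass
class StrategyConfig:  # hypothetical stand-in for ResidualAddAdapterStrategyConfig
    scale: float = 1.0


def make_broken():
    # Python >= 3.11 rejects this at class-creation time: a non-frozen
    # (hence unhashable) dataclass instance now counts as a mutable default.
    @dataclass
    class BrokenConfig:
        strategy: StrategyConfig = StrategyConfig()

    return BrokenConfig


if sys.version_info >= (3, 11):
    try:
        make_broken()
    except ValueError as exc:
        print(exc)  # "mutable default ... is not allowed: use default_factory"


# The default_factory form works on every supported version and gives
# each instance its own StrategyConfig:
@dataclass
class FixedConfig:
    strategy: StrategyConfig = field(default_factory=StrategyConfig)


a, b = FixedConfig(), FixedConfig()
print(a.strategy is b.strategy)  # False: defaults are no longer shared
```

Besides satisfying the 3.11 check, `default_factory` also avoids the classic shared-mutable-default pitfall, since each `FixedConfig` gets a fresh `StrategyConfig`.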
However, another error of the same kind then comes up:
```
Traceback (most recent call last):
File "/home/alex/T7/src/speaker_verification/nemo_speaker/test.py", line 5, in <module>
import nemo.collections.asr as nemo_asr
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/__init__.py", line 15, in <module>
from nemo.collections.asr import data, losses, models, modules
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/models/__init__.py", line 18, in <module>
from nemo.collections.asr.models.clustering_diarizer import ClusteringDiarizer
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/models/clustering_diarizer.py", line 29, in <module>
from nemo.collections.asr.metrics.der import score_labels
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/metrics/der.py", line 24, in <module>
from nemo.collections.asr.metrics.wer import word_error_rate
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/metrics/wer.py", line 27, in <module>
from nemo.collections.asr.parts.submodules import ctc_beam_decoding, ctc_greedy_decoding
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/parts/submodules/ctc_beam_decoding.py", line 593, in <module>
@dataclass
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1230, in dataclass
return wrap(cls)
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1220, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 958, in _process_class
cls_fields.append(_get_field(cls, name, type, kw_only))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 815, in _get_field
raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'nemo.collections.asr.parts.submodules.ctc_beam_decoding.FlashlightConfig'> for field flashlight_cfg is not allowed: use default_factory
```
This one can be fixed the same way, but all in all, such mutable-default dataclass declarations appear to be common throughout the code base.
Looks like NeMo isn't Python 3.11 ready, at least the `asr` collection.
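To gauge how widespread the pattern is, a rough scan of a source tree can flag annotated class attributes whose default is a bare constructor call rather than `field(default_factory=...)`. This is only a hedged sketch (the `find_call_defaults` helper below is mine, not part of NeMo), and it over-approximates, since not every flagged class is a dataclass:

```python
import ast
from pathlib import Path


def find_call_defaults(root: str) -> list[tuple[str, int, str]]:
    """Flag annotated class attributes defaulting to a call, e.g. `x: Cfg = Cfg()`."""
    hits = []
    for path in Path(root).rglob("*.py"):
        try:
            tree = ast.parse(path.read_text(encoding="utf-8"))
        except SyntaxError:
            continue  # skip files the local Python version cannot parse
        for node in ast.walk(tree):
            if not isinstance(node, ast.ClassDef):
                continue
            for stmt in node.body:
                # An annotated assignment whose default is a call, and which is
                # not already wrapped in field(...), is a candidate offender.
                if (
                    isinstance(stmt, ast.AnnAssign)
                    and isinstance(stmt.value, ast.Call)
                    and ast.unparse(stmt.value.func) != "field"
                ):
                    hits.append((str(path), stmt.lineno, ast.unparse(stmt)))
    return hits
```

On a checkout of the commit above, a scan like `find_call_defaults("nemo/collections")` should surface the declarations reported in the tracebacks, along with similar ones elsewhere.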
**Steps/Code to reproduce bug**
With Python 3.11
1. Install `nemo-toolkit[asr]`, either version 1.19.1 or the current HEAD from git (b498d438fc4c35ebf364a9a1c5cd3e29a2c0fe50)
2. Run `import nemo.collections.asr`
**Expected behavior**
Import `nemo.collections.asr` without errors
**Environment overview (please complete the following information)**
- Environment location: Bare-metal
- Method of NeMo install: `pip install nemo-toolkit==1.19.1` or from source b498d438fc4c35ebf364a9a1c5cd3e29a2c0fe50
**Environment details**
- Ubuntu 22.04
- PyTorch version 2.0.1
- Python version 3.11.4
</issue>
<code>
[start of README.rst]
1
2 |status| |documentation| |codeql| |license| |pypi| |pyversion| |downloads| |black|
3
4 .. |status| image:: http://www.repostatus.org/badges/latest/active.svg
5 :target: http://www.repostatus.org/#active
6     :alt: Project Status: Active – The project has reached a stable, usable state and is being actively developed.
7
8 .. |documentation| image:: https://readthedocs.com/projects/nvidia-nemo/badge/?version=main
9 :alt: Documentation
10 :target: https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/
11
12 .. |license| image:: https://img.shields.io/badge/License-Apache%202.0-brightgreen.svg
13 :target: https://github.com/NVIDIA/NeMo/blob/master/LICENSE
14 :alt: NeMo core license and license for collections in this repo
15
16 .. |pypi| image:: https://badge.fury.io/py/nemo-toolkit.svg
17 :target: https://badge.fury.io/py/nemo-toolkit
18 :alt: Release version
19
20 .. |pyversion| image:: https://img.shields.io/pypi/pyversions/nemo-toolkit.svg
21 :target: https://badge.fury.io/py/nemo-toolkit
22 :alt: Python version
23
24 .. |downloads| image:: https://static.pepy.tech/personalized-badge/nemo-toolkit?period=total&units=international_system&left_color=grey&right_color=brightgreen&left_text=downloads
25 :target: https://pepy.tech/project/nemo-toolkit
26 :alt: PyPi total downloads
27
28 .. |codeql| image:: https://github.com/nvidia/nemo/actions/workflows/codeql.yml/badge.svg?branch=main&event=push
29 :target: https://github.com/nvidia/nemo/actions/workflows/codeql.yml
30 :alt: CodeQL
31
32 .. |black| image:: https://img.shields.io/badge/code%20style-black-000000.svg
33 :target: https://github.com/psf/black
34 :alt: Code style: black
35
36 .. _main-readme:
37
38 **NVIDIA NeMo**
39 ===============
40
41 Introduction
42 ------------
43
44 NVIDIA NeMo is a conversational AI toolkit built for researchers working on automatic speech recognition (ASR),
45 text-to-speech synthesis (TTS), large language models (LLMs), and
46 natural language processing (NLP).
47 The primary objective of NeMo is to help researchers from industry and academia to reuse prior work (code and pretrained models)
48 and make it easier to create new `conversational AI models <https://developer.nvidia.com/conversational-ai#started>`_.
49
50 All NeMo models are trained with `Lightning <https://github.com/Lightning-AI/lightning>`_ and
51 training is automatically scalable to 1000s of GPUs.
52 Additionally, NeMo Megatron LLM models can be trained up to 1 trillion parameters using tensor and pipeline model parallelism.
53 NeMo models can be optimized for inference and deployed for production use-cases with `NVIDIA Riva <https://developer.nvidia.com/riva>`_.
54
55 Getting started with NeMo is simple.
56 State of the Art pretrained NeMo models are freely available on `HuggingFace Hub <https://huggingface.co/models?library=nemo&sort=downloads&search=nvidia>`_ and
57 `NVIDIA NGC <https://catalog.ngc.nvidia.com/models?query=nemo&orderBy=weightPopularDESC>`_.
58 These models can be used to transcribe audio, synthesize speech, or translate text in just a few lines of code.
59
60 We have extensive `tutorials <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/starthere/tutorials.html>`_ that
61 can all be run on `Google Colab <https://colab.research.google.com>`_.
62
63 For advanced users that want to train NeMo models from scratch or finetune existing NeMo models
64 we have a full suite of `example scripts <https://github.com/NVIDIA/NeMo/tree/main/examples>`_ that support multi-GPU/multi-node training.
65
66 For scaling NeMo LLM training on Slurm clusters or public clouds, please see the `NVIDIA NeMo Megatron Launcher <https://github.com/NVIDIA/NeMo-Megatron-Launcher>`_.
67 The NM launcher has extensive recipes, scripts, utilities, and documentation for training NeMo LLMs and also has an `Autoconfigurator <https://github.com/NVIDIA/NeMo-Megatron-Launcher#53-using-autoconfigurator-to-find-the-optimal-configuration>`_
68 which can be used to find the optimal model parallel configuration for training on a specific cluster.
69
70 Also see our `introductory video <https://www.youtube.com/embed/wBgpMf_KQVw>`_ for a high level overview of NeMo.
71
72 Key Features
73 ------------
74
75 * Speech processing
76 * `HuggingFace Space for Audio Transcription (File, Microphone and YouTube) <https://huggingface.co/spaces/smajumdar/nemo_multilingual_language_id>`_
77 * `Automatic Speech Recognition (ASR) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/intro.html>`_
78 * Supported ASR models: `<https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/asr/models.html>`_
79 * Jasper, QuartzNet, CitriNet, ContextNet
80 * Conformer-CTC, Conformer-Transducer, FastConformer-CTC, FastConformer-Transducer
81 * Squeezeformer-CTC and Squeezeformer-Transducer
82 * LSTM-Transducer (RNNT) and LSTM-CTC
83 * Supports the following decoders/losses:
84 * CTC
85 * Transducer/RNNT
86 * Hybrid Transducer/CTC
87 * NeMo Original `Multi-blank Transducers <https://arxiv.org/abs/2211.03541>`_ and `Token-and-Duration Transducers (TDT) <https://arxiv.org/abs/2304.06795>`_
88 * Streaming/Buffered ASR (CTC/Transducer) - `Chunked Inference Examples <https://github.com/NVIDIA/NeMo/tree/stable/examples/asr/asr_chunked_inference>`_
89 * Cache-aware Streaming Conformer with multiple lookaheads - `<https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/asr/models.html#cache-aware-streaming-conformer>`_
90 * Beam Search decoding
91 * `Language Modelling for ASR (CTC and RNNT) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/asr_language_modeling.html>`_: N-gram LM in fusion with Beam Search decoding, Neural Rescoring with Transformer
92 * `Support of long audios for Conformer with memory efficient local attention <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/results.html#inference-on-long-audio>`_
93 * `Speech Classification, Speech Command Recognition and Language Identification <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/speech_classification/intro.html>`_: MatchboxNet (Command Recognition), AmberNet (LangID)
94 * `Voice activity Detection (VAD) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/asr/speech_classification/models.html#marblenet-vad>`_: MarbleNet
95 * ASR with VAD Inference - `Example <https://github.com/NVIDIA/NeMo/tree/stable/examples/asr/asr_vad>`_
96 * `Speaker Recognition <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/speaker_recognition/intro.html>`_: TitaNet, ECAPA_TDNN, SpeakerNet
97 * `Speaker Diarization <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/speaker_diarization/intro.html>`_
98 * Clustering Diarizer: TitaNet, ECAPA_TDNN, SpeakerNet
99 * Neural Diarizer: MSDD (Multi-scale Diarization Decoder)
100 * `Speech Intent Detection and Slot Filling <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/speech_intent_slot/intro.html>`_: Conformer-Transformer
101 * `Pretrained models on different languages. <https://ngc.nvidia.com/catalog/collections/nvidia:nemo_asr>`_: English, Spanish, German, Russian, Chinese, French, Italian, Polish, ...
102 * `NGC collection of pre-trained speech processing models. <https://ngc.nvidia.com/catalog/collections/nvidia:nemo_asr>`_
103 * Natural Language Processing
104 * `NeMo Megatron pre-training of Large Language Models <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/nlp/nemo_megatron/intro.html>`_
105 * `Neural Machine Translation (NMT) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/machine_translation/machine_translation.html>`_
106 * `Punctuation and Capitalization <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/punctuation_and_capitalization.html>`_
107 * `Token classification (named entity recognition) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/token_classification.html>`_
108 * `Text classification <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/text_classification.html>`_
109 * `Joint Intent and Slot Classification <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/joint_intent_slot.html>`_
110 * `Question answering <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/question_answering.html>`_
111 * `GLUE benchmark <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/glue_benchmark.html>`_
112 * `Information retrieval <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/information_retrieval.html>`_
113 * `Entity Linking <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/entity_linking.html>`_
114 * `Dialogue State Tracking <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/sgd_qa.html>`_
115 * `Prompt Learning <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/nemo_megatron/prompt_learning.html>`_
116 * `NGC collection of pre-trained NLP models. <https://ngc.nvidia.com/catalog/collections/nvidia:nemo_nlp>`_
117 * `Synthetic Tabular Data Generation <https://developer.nvidia.com/blog/generating-synthetic-data-with-transformers-a-solution-for-enterprise-data-challenges/>`_
118 * Text-to-Speech Synthesis (TTS):
119 * `Documentation <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/tts/intro.html#>`_
120 * Mel-Spectrogram generators: FastPitch, SSL FastPitch, Mixer-TTS/Mixer-TTS-X, RAD-TTS, Tacotron2
121 * Vocoders: HiFiGAN, UnivNet, WaveGlow
122 * End-to-End Models: VITS
123 * `Pre-trained Model Checkpoints in NVIDIA GPU Cloud (NGC) <https://ngc.nvidia.com/catalog/collections/nvidia:nemo_tts>`_
124 * `Tools <https://github.com/NVIDIA/NeMo/tree/stable/tools>`_
125 * `Text Processing (text normalization and inverse text normalization) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/text_normalization/intro.html>`_
126 * `NeMo Forced Aligner <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/tools/nemo_forced_aligner.html>`_
127 * `CTC-Segmentation tool <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/tools/ctc_segmentation.html>`_
128 * `Speech Data Explorer <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/tools/speech_data_explorer.html>`_: a dash-based tool for interactive exploration of ASR/TTS datasets
129 * `Speech Data Processor <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/tools/speech_data_processor.html>`_
130
131
132 Built for speed, NeMo can utilize NVIDIA's Tensor Cores and scale out training to multiple GPUs and multiple nodes.
133
134 Requirements
135 ------------
136
137 1) Python 3.10 or above
138 2) PyTorch 1.13.1 or above
139 3) NVIDIA GPU for training
140
141 Documentation
142 -------------
143
144 .. |main| image:: https://readthedocs.com/projects/nvidia-nemo/badge/?version=main
145 :alt: Documentation Status
146 :scale: 100%
147 :target: https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/
148
149 .. |stable| image:: https://readthedocs.com/projects/nvidia-nemo/badge/?version=stable
150 :alt: Documentation Status
151 :scale: 100%
152 :target: https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/
153
154 +---------+-------------+------------------------------------------------------------------------------------------------------------------------------------------+
155 | Version | Status | Description |
156 +=========+=============+==========================================================================================================================================+
157 | Latest | |main| | `Documentation of the latest (i.e. main) branch. <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/>`_ |
158 +---------+-------------+------------------------------------------------------------------------------------------------------------------------------------------+
159 | Stable | |stable| | `Documentation of the stable (i.e. most recent release) branch. <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/>`_ |
160 +---------+-------------+------------------------------------------------------------------------------------------------------------------------------------------+
161
162 Tutorials
163 ---------
164 A great way to start with NeMo is by checking `one of our tutorials <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/starthere/tutorials.html>`_.
165
166 Getting help with NeMo
167 ----------------------
168 FAQ can be found on NeMo's `Discussions board <https://github.com/NVIDIA/NeMo/discussions>`_. You are welcome to ask questions or start discussions there.
169
170
171 Installation
172 ------------
173
174 Conda
175 ~~~~~
176
177 We recommend installing NeMo in a fresh Conda environment.
178
179 .. code-block:: bash
180
181 conda create --name nemo python==3.10.12
182 conda activate nemo
183
184 Install PyTorch using their `configurator <https://pytorch.org/get-started/locally/>`_.
185
186 .. code-block:: bash
187
188 conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia
189
190 The command used to install PyTorch may depend on your system. Please use the configurator linked above to find the right command for your system.
191
192 Pip
193 ~~~
194 Use this installation mode if you want the latest released version.
195
196 .. code-block:: bash
197
198 apt-get update && apt-get install -y libsndfile1 ffmpeg
199 pip install Cython
200 pip install nemo_toolkit['all']
201
202 Depending on the shell used, you may need to use ``"nemo_toolkit[all]"`` instead in the above command.
203
204 Pip from source
205 ~~~~~~~~~~~~~~~
206 Use this installation mode if you want the version from a particular GitHub branch (e.g main).
207
208 .. code-block:: bash
209
210 apt-get update && apt-get install -y libsndfile1 ffmpeg
211 pip install Cython
212 python -m pip install git+https://github.com/NVIDIA/NeMo.git@{BRANCH}#egg=nemo_toolkit[all]
213
214
215 From source
216 ~~~~~~~~~~~
217 Use this installation mode if you are contributing to NeMo.
218
219 .. code-block:: bash
220
221 apt-get update && apt-get install -y libsndfile1 ffmpeg
222 git clone https://github.com/NVIDIA/NeMo
223 cd NeMo
224 ./reinstall.sh
225
226 If you only want the toolkit without additional conda-based dependencies, you may replace ``reinstall.sh``
227 with ``pip install -e .`` when your PWD is the root of the NeMo repository.
228
229 RNNT
230 ~~~~
231 Note that RNNT requires numba to be installed from conda.
232
233 .. code-block:: bash
234
235 conda remove numba
236 pip uninstall numba
237 conda install -c conda-forge numba
238
239 NeMo Megatron
240 ~~~~~~~~~~~~~
241 NeMo Megatron training requires NVIDIA Apex to be installed.
242 Install it manually if not using the NVIDIA PyTorch container.
243
244 To install Apex, run
245
246 .. code-block:: bash
247
248 git clone https://github.com/NVIDIA/apex.git
249 cd apex
250 git checkout 52e18c894223800cb611682dce27d88050edf1de
251 pip install -v --no-build-isolation --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" --global-option="--fast_layer_norm" --global-option="--distributed_adam" --global-option="--deprecated_fused_adam" ./
252
253 It is highly recommended to use the NVIDIA PyTorch or NeMo container if having issues installing Apex or any other dependencies.
254
255 While installing Apex, it may raise an error if the CUDA version on your system does not match the CUDA version torch was compiled with.
256 This error can be avoided by commenting out the version check here: https://github.com/NVIDIA/apex/blob/master/setup.py#L32
257
258 cuda-nvprof is needed to install Apex. The version should match the CUDA version that you are using:
259
260 .. code-block:: bash
261
262 conda install -c nvidia cuda-nvprof=11.8
263
264 packaging is also needed:
265
266 .. code-block:: bash
267
268 pip install packaging
269
270 With the latest versions of Apex, its `pyproject.toml` file may need to be deleted in order to install locally.
271
272
273 Transformer Engine
274 ~~~~~~~~~~~~~~~~~~
275 NeMo Megatron GPT has been integrated with `NVIDIA Transformer Engine <https://github.com/NVIDIA/TransformerEngine>`_
276 Transformer Engine enables FP8 training on NVIDIA Hopper GPUs.
277 `Install <https://docs.nvidia.com/deeplearning/transformer-engine/user-guide/installation.html>`_ it manually if not using the NVIDIA PyTorch container.
278
279 .. code-block:: bash
280
281 pip install --upgrade git+https://github.com/NVIDIA/TransformerEngine.git@stable
282
283 It is highly recommended to use the NVIDIA PyTorch or NeMo container if having issues installing Transformer Engine or any other dependencies.
284
285 Transformer Engine requires PyTorch to be built with CUDA 11.8.
286
287
288 Flash Attention
289 ~~~~~~~~~~~~~~~~~~~~
290 Transformer Engine already supports Flash Attention for GPT models. If you want to use Flash Attention for non-causal models or use with attention bias (introduced from position encoding, e.g. Alibi), please install `flash-attn <https://github.com/HazyResearch/flash-attention>`_.
291
292 .. code-block:: bash
293
294 pip install flash-attn
295 pip install triton==2.0.0.dev20221202
296
297 NLP inference UI
298 ~~~~~~~~~~~~~~~~~~~~
299 To launch the inference web UI server, please install `gradio <https://gradio.app/>`_.
300
301 .. code-block:: bash
302
303 pip install gradio==3.34.0
304
305 NeMo Text Processing
306 ~~~~~~~~~~~~~~~~~~~~
307 NeMo Text Processing, specifically (Inverse) Text Normalization, is now a separate repository `https://github.com/NVIDIA/NeMo-text-processing <https://github.com/NVIDIA/NeMo-text-processing>`_.
308
309 Docker containers:
310 ~~~~~~~~~~~~~~~~~~
311 We release NeMo containers alongside NeMo releases. For example, NeMo ``r1.20.0`` comes with container ``nemo:23.06``; more details about released containers can be found on the `releases page <https://github.com/NVIDIA/NeMo/releases>`_.
312
313 To use a pre-built container, please run
314
315 .. code-block:: bash
316
317 docker pull nvcr.io/nvidia/nemo:23.06
318
319 To build a NeMo container with the Dockerfile from a branch, please run
320
321 .. code-block:: bash
322
323 DOCKER_BUILDKIT=1 docker build -f Dockerfile -t nemo:latest .
324
325
326 If you choose to work with the main branch, we recommend using NVIDIA's PyTorch container version 23.06-py3 and then installing from GitHub.
327
328 .. code-block:: bash
329
330 docker run --gpus all -it --rm -v <nemo_github_folder>:/NeMo --shm-size=8g \
331 -p 8888:8888 -p 6006:6006 --ulimit memlock=-1 --ulimit \
332 stack=67108864 --device=/dev/snd nvcr.io/nvidia/pytorch:23.06-py3
333
334 Examples
335 --------
336
337 Many examples can be found under the `"Examples" <https://github.com/NVIDIA/NeMo/tree/stable/examples>`_ folder.
338
339
340 Contributing
341 ------------
342
343 We welcome community contributions! Please refer to `CONTRIBUTING.md <https://github.com/NVIDIA/NeMo/blob/stable/CONTRIBUTING.md>`_ for the process.
344
345 Publications
346 ------------
347
348 We provide an ever-growing list of publications that utilize the NeMo framework. Please refer to `PUBLICATIONS.md <https://github.com/NVIDIA/NeMo/tree/stable/PUBLICATIONS.md>`_. We welcome the addition of your own articles to this list!
349
350 License
351 -------
352 NeMo is under `Apache 2.0 license <https://github.com/NVIDIA/NeMo/blob/stable/LICENSE>`_.
353
[end of README.rst]
[start of nemo/collections/asr/metrics/wer.py]
...
...
1283
1284 # preserve decoding alignments
1285 preserve_alignments: Optional[bool] = None
1286
1287 # compute ctc time stamps
1288 compute_timestamps: Optional[bool] = None
1289
1290     # token representing word separator
1291 word_seperator: str = " "
1292
1293 # type of timestamps to calculate
1294 ctc_timestamp_type: str = "all" # can be char, word or all for both
1295
1296 # batch dimension
1297 batch_dim_index: int = 0
1298
1299 # greedy decoding config
1300 greedy: ctc_greedy_decoding.GreedyCTCInferConfig = ctc_greedy_decoding.GreedyCTCInferConfig()
1301
1302 # beam decoding config
1303 beam: ctc_beam_decoding.BeamCTCInferConfig = ctc_beam_decoding.BeamCTCInferConfig(beam_size=4)
1304
1305 # confidence config
1306 confidence_cfg: ConfidenceConfig = ConfidenceConfig()
1307
1308 # can be used to change temperature for decoding
1309 temperature: float = 1.0
1310
[end of nemo/collections/asr/metrics/wer.py]
[start of nemo/collections/tts/models/fastpitch.py]
...
...
56 @dataclass
57 class G2PConfig:
58 _target_: str = "nemo.collections.tts.g2p.models.en_us_arpabet.EnglishG2p"
59 phoneme_dict: str = "scripts/tts_dataset_files/cmudict-0.7b_nv22.10"
60 heteronyms: str = "scripts/tts_dataset_files/heteronyms-052722"
61 phoneme_probability: float = 0.5
62
63
64 @dataclass
65 class TextTokenizer:
66 _target_: str = "nemo.collections.common.tokenizers.text_to_speech.tts_tokenizers.EnglishPhonemesTokenizer"
67 punct: bool = True
68 stresses: bool = True
69 chars: bool = True
70 apostrophe: bool = True
71 pad_with_space: bool = True
72 add_blank_at: bool = True
73 g2p: G2PConfig = G2PConfig()
74
75
76 @dataclass
77 class TextTokenizerConfig:
78 text_tokenizer: TextTokenizer = TextTokenizer()
79
80
81 class FastPitchModel(SpectrogramGenerator, Exportable, FastPitchAdapterModelMixin):
82 """FastPitch model (https://arxiv.org/abs/2006.06873) that is used to generate mel spectrogram from text."""
...
[end of nemo/collections/tts/models/fastpitch.py]
[start of nemo/collections/asr/models/configs/asr_models_config.py]
...
...
60
61 # bucketing params
62 bucketing_strategy: str = "synced_randomized"
63 bucketing_batch_size: Optional[Any] = None
64 bucketing_weights: Optional[List[int]] = None
65
66
67 @dataclass
68 class EncDecCTCConfig(model_cfg.ModelConfig):
69 # Model global arguments
70 sample_rate: int = 16000
71 repeat: int = 1
72 dropout: float = 0.0
73 separable: bool = False
74 labels: List[str] = MISSING
75
76 # Dataset configs
77 train_ds: ASRDatasetConfig = ASRDatasetConfig(manifest_filepath=None, shuffle=True)
78 validation_ds: ASRDatasetConfig = ASRDatasetConfig(manifest_filepath=None, shuffle=False)
79 test_ds: ASRDatasetConfig = ASRDatasetConfig(manifest_filepath=None, shuffle=False)
80
81 # Optimizer / Scheduler config
82 optim: Optional[model_cfg.OptimConfig] = model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
83
84 # Model component configs
85 preprocessor: AudioToMelSpectrogramPreprocessorConfig = AudioToMelSpectrogramPreprocessorConfig()
86 spec_augment: Optional[SpectrogramAugmentationConfig] = SpectrogramAugmentationConfig()
87 encoder: ConvASREncoderConfig = ConvASREncoderConfig()
88 decoder: ConvASRDecoderConfig = ConvASRDecoderConfig()
89 decoding: CTCDecodingConfig = CTCDecodingConfig()
90
91
92 @dataclass
93 class EncDecCTCModelConfig(model_cfg.NemoConfig):
94 model: EncDecCTCConfig = EncDecCTCConfig()
95
96
97 @dataclass
98 class CacheAwareStreamingConfig:
...
[end of nemo/collections/asr/models/configs/asr_models_config.py]
[start of nemo/collections/asr/models/configs/diarizer_config.py]
...
...
64 # Offset to set a reference point from the start of the word. Recommended range of values is [-0.05 0.2].
65 word_ts_anchor_offset: Optional[float] = None
66 # Select which part of the word timestamp we want to use. The options are: 'start', 'end', 'mid'.
67 word_ts_anchor_pos: str = "start"
68 # Fix the word timestamp using VAD output. You must provide a VAD model to use this feature.
69 fix_word_ts_with_VAD: bool = False
70 # If True, use colored text to distinguish speakers in the output transcript.
71 colored_text: bool = False
72 # If True, the start and end time of each speaker turn is printed in the output transcript.
73 print_time: bool = True
74 # If True, the output transcript breaks the line to fix the line width (default is 90 chars)
75 break_lines: bool = False
76
77
78 @dataclass
79 class ASRDiarizerConfig(DiarizerComponentConfig):
80 model_path: Optional[str] = "stt_en_conformer_ctc_large"
81 parameters: ASRDiarizerParams = ASRDiarizerParams()
82 ctc_decoder_parameters: ASRDiarizerCTCDecoderParams = ASRDiarizerCTCDecoderParams()
83 realigning_lm_parameters: ASRRealigningLMParams = ASRRealigningLMParams()
84
85
86 @dataclass
87 class VADParams(DiarizerComponentConfig):
...
...
88 window_length_in_sec: float = 0.15 # Window length in sec for VAD context input
89 shift_length_in_sec: float = 0.01 # Shift length in sec for generate frame level VAD prediction
90 smoothing: Union[str, bool] = "median" # False or type of smoothing method (eg: median)
91 overlap: float = 0.5 # Overlap ratio for overlapped mean/median smoothing filter
92 onset: float = 0.1 # Onset threshold for detecting the beginning and end of a speech
93 offset: float = 0.1 # Offset threshold for detecting the end of a speech
94 pad_onset: float = 0.1 # Adding durations before each speech segment
95 pad_offset: float = 0 # Adding durations after each speech segment
96 min_duration_on: float = 0 # Threshold for small non_speech deletion
97 min_duration_off: float = 0.2 # Threshold for short speech segment deletion
98 filter_speech_first: bool = True
99
100
101 @dataclass
102 class VADConfig(DiarizerComponentConfig):
103 model_path: str = "vad_multilingual_marblenet" # .nemo local model path or pretrained VAD model name
104 external_vad_manifest: Optional[str] = None
105 parameters: VADParams = VADParams()
106
107
108 @dataclass
109 class SpeakerEmbeddingsParams(DiarizerComponentConfig):
...
...
107
108 @dataclass
109 class SpeakerEmbeddingsParams(DiarizerComponentConfig):
110 # Window length(s) in sec (floating-point number). either a number or a list. ex) 1.5 or [1.5,1.0,0.5]
111 window_length_in_sec: Tuple[float] = (1.5, 1.25, 1.0, 0.75, 0.5)
112 # Shift length(s) in sec (floating-point number). either a number or a list. ex) 0.75 or [0.75,0.5,0.25]
113 shift_length_in_sec: Tuple[float] = (0.75, 0.625, 0.5, 0.375, 0.25)
114 # Weight for each scale. None (for single scale) or list with window/shift scale count. ex) [0.33,0.33,0.33]
115 multiscale_weights: Tuple[float] = (1, 1, 1, 1, 1)
116 # save speaker embeddings in pickle format. True if clustering result is used for other models, such as MSDD.
117 save_embeddings: bool = True
118
119
120 @dataclass
121 class SpeakerEmbeddingsConfig(DiarizerComponentConfig):
122 # .nemo local model path or pretrained model name (titanet_large, ecapa_tdnn or speakerverification_speakernet)
123 model_path: Optional[str] = None
124 parameters: SpeakerEmbeddingsParams = SpeakerEmbeddingsParams()
125
126
127 @dataclass
128 class ClusteringParams(DiarizerComponentConfig):
...
...
128 class ClusteringParams(DiarizerComponentConfig):
129 # If True, use num of speakers value provided in manifest file.
130 oracle_num_speakers: bool = False
131 # Max number of speakers for each recording. If an oracle number of speakers is passed, this value is ignored.
132 max_num_speakers: int = 8
133 # If the number of segments is lower than this number, enhanced speaker counting is activated.
134 enhanced_count_thres: int = 80
135 # Determines the range of p-value search: 0 < p <= max_rp_threshold.
136 max_rp_threshold: float = 0.25
137 # The higher the number, the more candidate p-values are examined, at the cost of more time.
138 sparse_search_volume: int = 30
139 # If True, take a majority vote on multiple p-values to estimate the number of speakers.
140 maj_vote_spk_count: bool = False
141
142
143 @dataclass
144 class ClusteringConfig(DiarizerComponentConfig):
145 parameters: ClusteringParams = ClusteringParams()
146
147
148 @dataclass
149 class MSDDParams(DiarizerComponentConfig):
...
...
152 # Batch size for MSDD inference.
153 infer_batch_size: int = 25
154 # Sigmoid threshold for generating binarized speaker labels. The smaller the threshold, the more generously overlaps are detected.
155 sigmoid_threshold: Tuple[float] = (0.7,)
156 # If True, use the oracle number of speakers and evaluate F1 score for the given speaker sequences. Default is False.
157 seq_eval_mode: bool = False
158 # If True, break the input audio clip into short sequences and calculate cluster-average embeddings for inference.
159 split_infer: bool = True
160 # The length of split short sequence when split_infer is True.
161 diar_window_length: int = 50
162 # If the estimated number of speakers is larger than this number, overlap speech is not estimated.
163 overlap_infer_spk_limit: int = 5
164
165
166 @dataclass
167 class MSDDConfig(DiarizerComponentConfig):
168 model_path: Optional[str] = "diar_msdd_telephonic"
169 parameters: MSDDParams = MSDDParams()
170
171
172 @dataclass
173 class DiarizerConfig(DiarizerComponentConfig):
174 manifest_filepath: Optional[str] = None
175 out_dir: Optional[str] = None
176 oracle_vad: bool = False # If True, uses RTTM files provided in the manifest file to get VAD timestamps
177 collar: float = 0.25 # Collar value for scoring
178 ignore_overlap: bool = True # Consider or ignore overlap segments while scoring
179 vad: VADConfig = VADConfig()
180 speaker_embeddings: SpeakerEmbeddingsConfig = SpeakerEmbeddingsConfig()
181 clustering: ClusteringConfig = ClusteringConfig()
182 msdd_model: MSDDConfig = MSDDConfig()
183 asr: ASRDiarizerConfig = ASRDiarizerConfig()
184
185
186 @dataclass
187 class NeuralDiarizerInferenceConfig(DiarizerComponentConfig):
188 diarizer: DiarizerConfig = DiarizerConfig()
189 device: str = "cpu"
190 verbose: bool = False
191 batch_size: int = 64
192 num_workers: int = 1
...
[end of nemo/collections/asr/models/configs/diarizer_config.py]
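The nested-default pattern above (`vad: VADConfig = VADConfig()`) can be sketched with plain dataclasses. The `*Sketch` names and field values below are hypothetical stand-ins, not the NeMo classes; `field(default_factory=...)` is used because recent Python versions reject unhashable dataclass instances as plain defaults:

```python
from dataclasses import dataclass, field, asdict

# Hypothetical, simplified stand-ins for the diarizer config dataclasses above.
@dataclass
class VADParamsSketch:
    window_length_in_sec: float = 0.15
    shift_length_in_sec: float = 0.01

@dataclass
class VADConfigSketch:
    model_path: str = "vad_multilingual_marblenet"
    parameters: VADParamsSketch = field(default_factory=VADParamsSketch)

@dataclass
class DiarizerConfigSketch:
    oracle_vad: bool = False
    collar: float = 0.25
    vad: VADConfigSketch = field(default_factory=VADConfigSketch)

cfg = DiarizerConfigSketch()
cfg_dict = asdict(cfg)  # nested dataclasses become nested dicts
```

`asdict` is handy for serializing such a config tree to YAML/JSON-friendly dicts.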
[start of nemo/collections/asr/parts/k2/classes.py]
...
...
29
30 search_beam: float = 20.0
31 output_beam: float = 10.0
32 min_active_states: int = 30
33 max_active_states: int = 10000
34
35
36 @dataclass
37 class GraphModuleConfig:
38 """Config for graph modules.
39 Typically used with graph losses and decoders.
40 """
41
42 topo_type: str = "default"
43 topo_with_self_loops: bool = True
44 token_lm: Optional[Any] = None
45 intersect_pruned: bool = False
46 intersect_conf: GraphIntersectDenseConfig = GraphIntersectDenseConfig()
47 boost_coeff: float = 0.0
48 predictor_window_size: int = 0
49 predictor_step_size: int = 1
50
...
[end of nemo/collections/asr/parts/k2/classes.py]
[start of scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram_transducer.py]
...
101 use_amp: bool = False # Whether to use AMP if available to calculate log probabilities
102 num_workers: int = 1 # Number of workers for DataLoader
103
104 # The decoding scheme to be used for evaluation
105 decoding_strategy: str = "greedy_batch" # ["greedy_batch", "beam", "tsd", "alsd", "maes"]
106
107 # Beam Search hyperparameters
108 beam_width: List[int] = field(default_factory=lambda: [8]) # The width or list of the widths for the beam search decoding
109 beam_alpha: List[float] = field(default_factory=lambda: [0.2]) # The alpha parameter or list of the alphas for the beam search decoding
110
111 maes_prefix_alpha: List[int] = field(default_factory=lambda: [2]) # The maes_prefix_alpha or list of the maes_prefix_alpha for the maes decoding
112 maes_expansion_gamma: List[float] = field(default_factory=lambda: [2.3]) # The maes_expansion_gamma or list of the maes_expansion_gamma for the maes decoding
113
114 # HAT related parameters (only for internal lm subtraction)
115 hat_subtract_ilm: bool = False
116 hat_ilm_weight: List[float] = field(default_factory=lambda: [0.0])
117
118 decoding: rnnt_beam_decoding.BeamRNNTInferConfig = rnnt_beam_decoding.BeamRNNTInferConfig(beam_size=128)
119
120
121 # fmt: on
122
...
[end of scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram_transducer.py]
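The list-valued fields above (`beam_width`, `beam_alpha`, `maes_prefix_alpha`, ...) are swept as a grid during evaluation. A sketch of that expansion, with illustrative candidate values rather than NeMo defaults:

```python
import itertools

# Illustrative hyperparameter candidates (not the script's defaults).
beam_width = [4, 8]
beam_alpha = [0.0, 0.2]
maes_prefix_alpha = [1, 2]

# Each tuple corresponds to one decoding run over the evaluation set.
grid = list(itertools.product(beam_width, beam_alpha, maes_prefix_alpha))
```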
[start of nemo/core/config/modelPT.py]
...
44 # ...
45 train_ds: Optional[DatasetConfig] = None
46 validation_ds: Optional[DatasetConfig] = None
47 test_ds: Optional[DatasetConfig] = None
48 optim: Optional[OptimConfig] = None
49
50
51 @dataclass
52 class HydraConfig:
53 run: Dict[str, Any] = field(default_factory=lambda: {"dir": "."})
54 job_logging: Dict[str, Any] = field(default_factory=lambda: {"root": {"handlers": None}})
55
56
57 @dataclass
58 class NemoConfig:
59 name: str = MISSING
60 model: ModelConfig = MISSING
61 trainer: config.TrainerConfig = config.TrainerConfig(
62 strategy="ddp", enable_checkpointing=False, logger=False, log_every_n_steps=1, accelerator='gpu'
63 )
64 exp_manager: Optional[Any] = exp_manager.ExpManagerConfig()
65 hydra: HydraConfig = HydraConfig()
66
67
68 class ModelConfigBuilder:
69 def __init__(self, model_cfg: ModelConfig):
...
[end of nemo/core/config/modelPT.py]

[start of nemo/collections/asr/models/configs/aligner_config.py]
...
...
21 class AlignerCTCConfig:
22 prob_suppress_index: int = -1
23 prob_suppress_value: float = 1.0
24
25
26 @dataclass
27 class AlignerRNNTConfig:
28 predictor_window_size: int = 0
29 predictor_step_size: int = 1
30
31
32 @dataclass
33 class AlignerWrapperModelConfig:
34 alignment_type: str = "forced"
35 word_output: bool = True
36 cpu_decoding: bool = False
37 decode_batch_size: int = 0
38 ctc_cfg: AlignerCTCConfig = AlignerCTCConfig()
39 rnnt_cfg: AlignerRNNTConfig = AlignerRNNTConfig()
40
41
42 @dataclass
43 class K2AlignerWrapperModelConfig(AlignerWrapperModelConfig):
44 decoder_module_cfg: GraphModuleConfig = GraphModuleConfig()
45
[end of nemo/collections/asr/models/configs/aligner_config.py]
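`K2AlignerWrapperModelConfig` above only adds one field on top of the base aligner config. A sketch of that inheritance pattern with stand-in names (`field(default_factory=...)` replaces the `Config = Config()` defaults, which recent Python versions reject):

```python
from dataclasses import dataclass, field, fields

@dataclass
class AlignerWrapperModelConfigSketch:
    alignment_type: str = "forced"
    word_output: bool = True
    decode_batch_size: int = 0

@dataclass
class GraphModuleConfigSketch:
    topo_type: str = "default"

# Subclassing appends the new field after all inherited ones.
@dataclass
class K2AlignerWrapperModelConfigSketch(AlignerWrapperModelConfigSketch):
    decoder_module_cfg: GraphModuleConfigSketch = field(
        default_factory=GraphModuleConfigSketch
    )

names = [f.name for f in fields(K2AlignerWrapperModelConfigSketch)]
```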
[start of nemo/collections/asr/parts/submodules/ctc_greedy_decoding.py]
...
...
239
240 if self.preserve_frame_confidence:
241 raise ValueError(
242 "Requested for per-frame confidence, but predictions provided were labels, not log probabilities."
243 )
244
245 return hypothesis
246
247 def __call__(self, *args, **kwargs):
248 return self.forward(*args, **kwargs)
249
250
251 @dataclass
252 class GreedyCTCInferConfig:
253 preserve_alignments: bool = False
254 compute_timestamps: bool = False
255 preserve_frame_confidence: bool = False
256 confidence_measure_cfg: Optional[ConfidenceMeasureConfig] = ConfidenceMeasureConfig()
257 confidence_method_cfg: str = "DEPRECATED"
258
259 def __post_init__(self):
260 # OmegaConf.structured ensures that post_init check is always executed
...
[end of nemo/collections/asr/parts/submodules/ctc_greedy_decoding.py]
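The `confidence_method_cfg: str = "DEPRECATED"` field above is a migration shim handled in `__post_init__`. A minimal sketch of the pattern, with simplified string-valued fields instead of the real OmegaConf-wrapped configs:

```python
from dataclasses import dataclass

@dataclass
class InferConfigSketch:
    confidence_measure: str = "max_prob"      # current field name
    confidence_method: str = "DEPRECATED"     # legacy alias

    def __post_init__(self):
        if self.confidence_method != "DEPRECATED":
            # migrate the legacy value, then mark the old field as consumed
            self.confidence_measure = self.confidence_method
            self.confidence_method = "DEPRECATED"

# A caller still using the old field name gets transparently migrated.
old_style = InferConfigSketch(confidence_method="entropy")
new_style = InferConfigSketch(confidence_measure="entropy")
```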
[start of nemo/collections/common/parts/adapter_modules.py]
...
...
146 def forward(self, x):
147 x = self.module(x)
148
149 # Add dropout if available
150 if self.dropout is not None:
151 x = self.dropout(x)
152
153 return x
154
155
156 @dataclass
157 class LinearAdapterConfig:
158 in_features: int
159 dim: int
160 activation: str = 'swish'
161 norm_position: str = 'pre'
162 dropout: float = 0.0
163 adapter_strategy: Optional[Any] = adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
164 _target_: str = "{0}.{1}".format(LinearAdapter.__module__, LinearAdapter.__name__)
165
[end of nemo/collections/common/parts/adapter_modules.py]
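The `_target_` string built in `LinearAdapterConfig` above (`module.ClassName`) lets an instantiation framework resolve the class back from the config. A toy resolver, using a stdlib class as a stand-in for the adapter class:

```python
import importlib
from collections import OrderedDict

# Build a dotted target string the same way LinearAdapterConfig does above.
target = "{0}.{1}".format(OrderedDict.__module__, OrderedDict.__name__)

def resolve_target(dotted: str):
    # Split "pkg.mod.Class" into module path and attribute name.
    module_name, _, attr = dotted.rpartition(".")
    return getattr(importlib.import_module(module_name), attr)

resolved = resolve_target(target)
```

Hydra's `instantiate` does this for real (plus recursive config handling); this is only the core idea.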
[start of nemo/collections/asr/parts/submodules/adapters/multi_head_attention_adapter_module.py]
...
...
169 return super().forward(query, key, value, mask, pos_emb, cache=cache)
170
171 def reset_parameters(self):
172 with torch.no_grad():
173 nn.init.zeros_(self.linear_out.weight)
174 nn.init.zeros_(self.linear_out.bias)
175
176 def get_default_strategy_config(self) -> 'dataclass':
177 return MHAResidualAddAdapterStrategyConfig()
178
179
180 @dataclass
181 class MultiHeadAttentionAdapterConfig:
182 n_head: int
183 n_feat: int
184 dropout_rate: float = 0.0
185 proj_dim: Optional[int] = None
186 adapter_strategy: Optional[Any] = MHAResidualAddAdapterStrategyConfig()
187 _target_: str = "{0}.{1}".format(MultiHeadAttentionAdapter.__module__, MultiHeadAttentionAdapter.__name__)
188
189
190 class RelPositionMultiHeadAttentionAdapter(mha.RelPositionMultiHeadAttention, adapter_modules.AdapterModuleUtil):
...
...
273 # However:
274 # DO NOT REPLACE BELOW WITH self.pos_bias_u *= 0.0 OR self.pos_bias_v *= 0.0
275 # For some reason at init sometimes it will cause the value of the tensor to become NaN
276 # All operations to compute matrix_ac and matrix_bd will then fail.
277 nn.init.zeros_(self.pos_bias_u)
278 nn.init.zeros_(self.pos_bias_v)
279
280 def get_default_strategy_config(self) -> 'dataclass':
281 return MHAResidualAddAdapterStrategyConfig()
282
283
284 @dataclass
285 class RelPositionMultiHeadAttentionAdapterConfig:
286 n_head: int
287 n_feat: int
288 dropout_rate: float = 0.0
289 proj_dim: Optional[int] = None
290 adapter_strategy: Optional[Any] = MHAResidualAddAdapterStrategyConfig()
291 _target_: str = "{0}.{1}".format(
292 RelPositionMultiHeadAttentionAdapter.__module__, RelPositionMultiHeadAttentionAdapter.__name__
293 )
294
...
...
322
323 super().__init__(
324 d_model=d_model, dropout_rate=0.0, max_len=max_len, xscale=xscale, dropout_rate_emb=0.0,
325 )
326
327 # Setup adapter strategy
328 self.setup_adapter_strategy(adapter_strategy)
329
330 def get_default_strategy_config(self) -> 'dataclass':
331 return adapter_mixin_strategies.ReturnResultAdapterStrategyConfig()
332
333
334 @dataclass
335 class PositionalEncodingAdapterConfig:
336 d_model: int
337 max_len: int = 5000
338 xscale: float = 1.0
339 adapter_strategy: Optional[Any] = adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
340 _target_: str = "{0}.{1}".format(PositionalEncodingAdapter.__module__, PositionalEncodingAdapter.__name__)
341
342
343 class RelPositionalEncodingAdapter(mha.RelPositionalEncoding, adapter_modules.AdapterModuleUtil):
...
...
364 xscale=1.0,
365 adapter_strategy: adapter_mixin_strategies.ReturnResultAdapterStrategyConfig = None,
366 ):
367 super().__init__(d_model=d_model, dropout_rate=0.0, max_len=max_len, xscale=xscale, dropout_rate_emb=0.0)
368
369 # Setup adapter strategy
370 self.setup_adapter_strategy(adapter_strategy)
371
372 def get_default_strategy_config(self) -> 'dataclass':
373 return adapter_mixin_strategies.ReturnResultAdapterStrategyConfig()
374
375
376 @dataclass
377 class RelPositionalEncodingAdapterConfig:
378 d_model: int
379 max_len: int = 5000
380 xscale: float = 1.0
381 adapter_strategy: Optional[Any] = adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
382 _target_: str = "{0}.{1}".format(RelPositionalEncodingAdapter.__module__, RelPositionalEncodingAdapter.__name__)
383
[end of nemo/collections/asr/parts/submodules/adapters/multi_head_attention_adapter_module.py]
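The adapters above zero-initialize their output projections (`nn.init.zeros_`) so that, with residual addition, a freshly inserted adapter is an identity map and cannot disturb the pretrained model at the start of training. A numeric sketch of that property, with plain lists standing in for tensors:

```python
def adapter_forward(x, weight, bias):
    # Toy 1-d "projection" y_i = w * x_i + b, followed by the residual add.
    return [xi + (weight * xi + bias) for xi in x]

x = [0.5, -1.0, 2.0]
zero_init_out = adapter_forward(x, weight=0.0, bias=0.0)  # identity at init
trained_out = adapter_forward(x, weight=0.1, bias=0.0)    # deviates once trained
```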
[start of scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram.py]
...
98 # Parameters for inference
99 acoustic_batch_size: int = 16 # The batch size to calculate log probabilities
100 beam_batch_size: int = 128 # The batch size to be used for beam search decoding
101 device: str = "cuda" # The device to load the model onto to calculate log probabilities
102 use_amp: bool = False # Whether to use AMP if available to calculate log probabilities
103
104 # Beam Search hyperparameters
105
106 # The decoding scheme to be used for evaluation.
107 # Can be one of ["greedy", "beamsearch", "beamsearch_ngram"]
108 decoding_mode: str = "beamsearch_ngram"
109
110 beam_width: List[int] = field(default_factory=lambda: [128]) # The width or list of the widths for the beam search decoding
111 beam_alpha: List[float] = field(default_factory=lambda: [1.0]) # The alpha parameter or list of the alphas for the beam search decoding
112 beam_beta: List[float] = field(default_factory=lambda: [0.0]) # The beta parameter or list of the betas for the beam search decoding
113
114 decoding_strategy: str = "beam"
115 decoding: ctc_beam_decoding.BeamCTCInferConfig = ctc_beam_decoding.BeamCTCInferConfig(beam_size=128)
116
117 text_processing: Optional[TextProcessingConfig] = TextProcessingConfig(
118 punctuation_marks = ".,?",
119 separate_punctuation = False,
120 do_lowercase = False,
121 rm_punctuation = False,
122 )
123 # fmt: on
124
125
126 def beam_search_eval(
...
[end of scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram.py]
[start of nemo/collections/nlp/modules/common/megatron/megatron_encoders.py]
...
...
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Transformer based language model."""
17 from nemo.collections.nlp.modules.common.megatron.megatron_perceiver_encoders import MegatronPerceiverEncoderModule
18 from nemo.collections.nlp.modules.common.megatron.megatron_transformer_encoder import MegatronTransformerEncoderModule
19 from nemo.collections.nlp.modules.common.megatron.retrieval_transformer import (
20 MegatronRetrievalTransformerEncoderModule,
21 )
22 from nemo.collections.nlp.modules.common.megatron.utils import (
23 ApexGuardDefaults,
24 init_method_normal,
25 scaled_init_method_normal,
26 )
27
28 try:
29 from apex.transformer.enums import AttnMaskType, ModelType
30
31 HAVE_APEX = True
...
[end of nemo/collections/nlp/modules/common/megatron/megatron_encoders.py]
[start of nemo/collections/asr/parts/submodules/rnnt_greedy_decoding.py]
...
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 # Copyright 2017 Johns Hopkins University (Shinji Watanabe)
16 #
17 # Licensed under the Apache License, Version 2.0 (the "License");
18 # you may not use this file except in compliance with the License.
19 # You may obtain a copy of the License at
20 #
21 # http://www.apache.org/licenses/LICENSE-2.0
22 #
23 # Unless required by applicable law or agreed to in writing, software
24 # distributed under the License is distributed on an "AS IS" BASIS,
25 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
26 # See the License for the specific language governing permissions and
27 # limitations under the License.
28
29 from dataclasses import dataclass
30 from typing import List, Optional, Tuple, Union
31
32 import numpy as np
33 import torch
...
...
2171 if self.preserve_frame_confidence:
2172 for batch_idx in range(batchsize):
2173 if len(hypotheses[batch_idx].frame_confidence[-1]) == 0:
2174 del hypotheses[batch_idx].frame_confidence[-1]
2175
2176 # Preserve states
2177 for batch_idx in range(batchsize):
2178 hypotheses[batch_idx].dec_state = self.decoder.batch_select_state(hidden, batch_idx)
2179
2180 return hypotheses
2181
2182
2183 @dataclass
2184 class GreedyRNNTInferConfig:
2185 max_symbols_per_step: Optional[int] = 10
2186 preserve_alignments: bool = False
2187 preserve_frame_confidence: bool = False
2188 confidence_measure_cfg: Optional[ConfidenceMeasureConfig] = ConfidenceMeasureConfig()
2189 confidence_method_cfg: str = "DEPRECATED"
2190
2191 def __post_init__(self):
2192 # OmegaConf.structured ensures that post_init check is always executed
...
...
2203
2204 # TODO (alaptev): delete the following two lines sometime in the future
2205 logging.warning("Re-writing `confidence_measure_cfg` with the value of `confidence_method_cfg`.")
2206 # OmegaConf.structured ensures that post_init check is always executed
2207 self.confidence_measure_cfg = OmegaConf.structured(
2208 self.confidence_method_cfg
2209 if isinstance(self.confidence_method_cfg, ConfidenceMeasureConfig)
2210 else ConfidenceMeasureConfig(**self.confidence_method_cfg)
2211 )
2212 self.confidence_method_cfg = "DEPRECATED"
2213
2214
2215 @dataclass
2216 class GreedyBatchedRNNTInferConfig:
2217 max_symbols_per_step: Optional[int] = 10
2218 preserve_alignments: bool = False
2219 preserve_frame_confidence: bool = False
2220 confidence_measure_cfg: Optional[ConfidenceMeasureConfig] = ConfidenceMeasureConfig()
2221 confidence_method_cfg: str = "DEPRECATED"
2222
2223 def __post_init__(self):
2224 # OmegaConf.structured ensures that post_init check is always executed
...
[end of nemo/collections/asr/parts/submodules/rnnt_greedy_decoding.py]
[start of nemo/collections/tts/models/tacotron2.py]
...
...
39 LengthsType,
40 LogitsType,
41 MelSpectrogramType,
42 SequenceToSequenceAlignmentType,
43 )
44 from nemo.core.neural_types.neural_type import NeuralType
45 from nemo.utils import logging, model_utils
46
47
48 @dataclass
49 class Preprocessor:
50 _target_: str = MISSING
51 pad_value: float = MISSING
52
53
54 @dataclass
55 class Tacotron2Config:
56 preprocessor: Preprocessor = Preprocessor()
57 encoder: Dict[Any, Any] = MISSING
58 decoder: Dict[Any, Any] = MISSING
59 postnet: Dict[Any, Any] = MISSING
60 labels: List = MISSING
...
[end of nemo/collections/tts/models/tacotron2.py]
[start of scripts/speech_recognition/convert_to_tarred_audio_dataset.py]
...
188 num_shards: int = -1
189 shuffle: bool = False
190 max_duration: Optional[float] = None
191 min_duration: Optional[float] = None
192 shuffle_seed: Optional[int] = None
193 sort_in_shards: bool = True
194 shard_manifests: bool = True
195 keep_files_together: bool = False
196
197
198 @dataclass
199 class ASRTarredDatasetMetadata:
200 created_datetime: Optional[str] = None
201 version: int = 0
202 num_samples_per_shard: Optional[int] = None
203 is_concatenated_manifest: bool = False
204
205 dataset_config: Optional[ASRTarredDatasetConfig] = ASRTarredDatasetConfig()
206 history: Optional[List[Any]] = field(default_factory=lambda: [])
207
208 def __post_init__(self):
209 self.created_datetime = self.get_current_datetime()
...
[end of scripts/speech_recognition/convert_to_tarred_audio_dataset.py]
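`ASRTarredDatasetMetadata.__post_init__` above stamps the creation time so every metadata object records when it was built. A self-contained sketch of the same pattern (field names simplified):

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class TarredMetadataSketch:
    created_datetime: Optional[str] = None
    version: int = 0

    def __post_init__(self):
        # Populate the timestamp even though the declared default is None.
        self.created_datetime = datetime.now().strftime("%Y-%m-%d %H:%M:%S")

meta = TarredMetadataSketch()
```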
[start of nemo/collections/asr/models/configs/classification_models_config.py]
...
...
58 bucketing_strategy: str = "synced_randomized"
59 bucketing_batch_size: Optional[Any] = None
60 bucketing_weights: Optional[List[int]] = None
61
62
63 @dataclass
64 class EncDecClassificationConfig(model_cfg.ModelConfig):
65 # Model global arguments
66 sample_rate: int = 16000
67 repeat: int = 1
68 dropout: float = 0.0
69 separable: bool = True
70 kernel_size_factor: float = 1.0
71 labels: List[str] = MISSING
72 timesteps: int = MISSING
73
74 # Dataset configs
75 train_ds: EncDecClassificationDatasetConfig = EncDecClassificationDatasetConfig(
76 manifest_filepath=None, shuffle=True, trim_silence=False
77 )
78 validation_ds: EncDecClassificationDatasetConfig = EncDecClassificationDatasetConfig(
79 manifest_filepath=None, shuffle=False
80 )
81 test_ds: EncDecClassificationDatasetConfig = EncDecClassificationDatasetConfig(
82 manifest_filepath=None, shuffle=False
83 )
84
85 # Optimizer / Scheduler config
86 optim: Optional[model_cfg.OptimConfig] = model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
87
88 # Model component configs
89 preprocessor: AudioToMFCCPreprocessorConfig = AudioToMFCCPreprocessorConfig()
90 spec_augment: Optional[SpectrogramAugmentationConfig] = SpectrogramAugmentationConfig()
91 crop_or_pad_augment: Optional[CropOrPadSpectrogramAugmentationConfig] = CropOrPadSpectrogramAugmentationConfig(
92 audio_length=timesteps
93 )
94
95 encoder: ConvASREncoderConfig = ConvASREncoderConfig()
96 decoder: ConvASRDecoderClassificationConfig = ConvASRDecoderClassificationConfig()
97
98
99 @dataclass
100 class EncDecClassificationModelConfig(model_cfg.NemoConfig):
101 model: EncDecClassificationConfig = EncDecClassificationConfig()
102
[end of nemo/collections/asr/models/configs/classification_models_config.py]
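One thing to note about nested dataset defaults like `train_ds` above: with `field(default_factory=...)`, every model config instance gets its own dataset config object, so editing one instance cannot leak into another. A sketch with stand-in names:

```python
from dataclasses import dataclass, field

@dataclass
class DatasetConfigSketch:
    manifest_filepath: str = ""
    shuffle: bool = True

@dataclass
class ModelConfigSketch:
    # default_factory builds a fresh DatasetConfigSketch per instance
    train_ds: DatasetConfigSketch = field(default_factory=DatasetConfigSketch)

a = ModelConfigSketch()
b = ModelConfigSketch()
a.train_ds.manifest_filepath = "/data/train.json"  # does not affect b
```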
[start of nemo/collections/asr/parts/utils/asr_confidence_utils.py]
...
...
167 More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
168
169 alpha: Power scale for logsoftmax (α for entropies). Here we restrict it to be > 0.
170 When the alpha equals one, scaling is not applied to 'max_prob',
171 and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
172
173 entropy_norm: A mapping of the entropy value to the interval [0,1].
174 Supported values:
175 - 'lin' for using the linear mapping.
176 - 'exp' for using exponential mapping with linear shift.
177 """
178
179 preserve_frame_confidence: bool = False
180 preserve_token_confidence: bool = False
181 preserve_word_confidence: bool = False
182 exclude_blank: bool = True
183 aggregation: str = "min"
184 measure_cfg: ConfidenceMeasureConfig = ConfidenceMeasureConfig()
185 method_cfg: str = "DEPRECATED"
186
187 def __post_init__(self):
188 # OmegaConf.structured ensures that post_init check is always executed
...
[end of nemo/collections/asr/parts/utils/asr_confidence_utils.py]
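The docstring above describes Shannon entropy H = -sum_i(p_i*log(p_i)) with a 'lin' normalization onto [0, 1]. A worked sketch of that confidence, using conf = 1 - H / log(V) for a V-way distribution; this illustrates the formula only, not the NeMo implementation:

```python
import math

def entropy_confidence(probs):
    """Shannon-entropy confidence with linear normalization to [0, 1]."""
    v = len(probs)
    h = -sum(p * math.log(p) for p in probs if p > 0.0)
    return 1.0 - h / math.log(v)

uniform = entropy_confidence([0.25, 0.25, 0.25, 0.25])  # maximally uncertain -> 0
peaked = entropy_confidence([1.0, 0.0, 0.0, 0.0])       # fully confident -> 1
```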
[start of scripts/confidence_ensembles/build_ensemble.py]
...
61 <all arguments as in the previous examples>
62 tune_confidence_config.confidence_type='[entropy_renyi_exp,entropy_tsallis_exp]' # only tune over this set
63 tune_confidence_config.alpha='[0.1,0.5,1.0]' # only tune over this set
64
65 You can check the dataclasses in this file for the full list of supported
66 arguments and their default values.
67 """
68
69 import atexit
70
71 # using default logging to be able to silence unnecessary messages from nemo
72 import logging
73 import os
74 import random
75 import sys
76 import tempfile
77 from copy import deepcopy
78 from dataclasses import dataclass
79 from pathlib import Path
80 from typing import Dict, List, Optional, Tuple
81
82 import joblib
...
...
195 class_weight: Tuple = (None, "balanced")
196
197 # increase if getting many warnings that algorithm didn't converge
198 max_iter: int = 1000
199
200
201 @dataclass
202 class BuildEnsembleConfig:
203 # where to save the resulting ensemble model
204 output_path: str = MISSING
205
206 # each model specification
207 ensemble: List[EnsembleConfig] = MISSING
208
209 random_seed: int = 0 # for reproducibility
210
211 # default confidence, can override
212 confidence: ConfidenceConfig = ConfidenceConfig(
213 # we keep frame confidences and apply aggregation manually to get full-utterance confidence
214 preserve_frame_confidence=True,
215 exclude_blank=True,
216 aggregation="mean",
217 measure_cfg=ConfidenceMeasureConfig(name="entropy", entropy_type="renyi", alpha=0.25, entropy_norm="lin",),
218 )
219 temperature: float = 1.0
220
221 # this is optional, but can be used to change any aspect of the transcription
222 # config, such as batch size or amp usage. Note that model, data and confidence
223 # will be overridden by this script
224 transcription: transcribe_speech.TranscriptionConfig = transcribe_speech.TranscriptionConfig()
225
226 # set to True to tune the confidence.
227 # requires dev manifests to be specified for each model
228 tune_confidence: bool = False
229 # used to specify what to tune over. By default runs tuning over some
230 # reasonable grid, so that it does not take forever.
231 # Can be changed as needed
232 tune_confidence_config: TuneConfidenceConfig = TuneConfidenceConfig()
233
234 # very fast to tune and can be important in case of imbalanced datasets
235 # will automatically set to False if dev data is not available
236 tune_logistic_regression: bool = True
237 tune_logistic_regression_config: TuneLogisticRegressionConfig = TuneLogisticRegressionConfig()
238
239 def __post_init__(self):
240 """Checking that if any dev data is provided, all are provided.
241
...
[end of scripts/confidence_ensembles/build_ensemble.py]
[start of examples/asr/experimental/k2/align_speech_parallel.py]
...
60
61 You may control the aligner's config by setting the aligner_args:
62 aligner_args.alignment_type=argmax \
63 aligner_args.word_output=False \
64 aligner_args.cpu_decoding=True \
65 aligner_args.decode_batch_size=8 \
66 aligner_args.ctc_cfg.prob_suppress_index=-1 \
67 aligner_args.ctc_cfg.prob_suppress_value=0.5 \
68 aligner_args.rnnt_cfg.predictor_window_size=10 \
69 aligner_args.decoder_module_cfg.intersect_pruned=true \
70 aligner_args.decoder_module_cfg.intersect_conf.search_beam=40 \
71 ...
72
73 """
74
75
76 import os
77 from dataclasses import dataclass, is_dataclass
78 from typing import Optional
79
80 import pytorch_lightning as ptl
81 import torch
...
...
80 import pytorch_lightning as ptl
81 import torch
82 from omegaconf import MISSING, OmegaConf
83
84 from nemo.collections.asr.data.audio_to_ctm_dataset import ASRCTMPredictionWriter
85 from nemo.collections.asr.models import ASRModel
86 from nemo.collections.asr.models.configs.aligner_config import K2AlignerWrapperModelConfig
87 from nemo.collections.asr.models.configs.asr_models_config import ASRDatasetConfig
88 from nemo.collections.asr.models.k2_aligner_model import AlignerWrapperModel
89 from nemo.core.config import TrainerConfig, hydra_runner
90 from nemo.utils import logging
91 from nemo.utils.get_rank import is_global_rank_zero
92
93
94 @dataclass
95 class ParallelAlignmentConfig:
96 model: Optional[str] = None # name
97 predict_ds: ASRDatasetConfig = ASRDatasetConfig(return_sample_id=True, num_workers=4)
98 aligner_args: K2AlignerWrapperModelConfig = K2AlignerWrapperModelConfig()
99 output_path: str = MISSING
100 model_stride: int = 8
101
102 trainer: TrainerConfig = TrainerConfig(gpus=-1, accelerator="ddp")
103
104 # these arguments will be ignored
105 return_predictions: bool = False
106 use_cer: bool = False
...
[end of examples/asr/experimental/k2/align_speech_parallel.py]
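The dotted CLI overrides in the usage block above (e.g. `aligner_args.ctc_cfg.prob_suppress_value=0.5`) map onto a nested config tree. Hydra performs this for real; the following is only a toy parser showing the mechanics, with string values left unconverted:

```python
def apply_override(cfg: dict, override: str) -> None:
    """Apply one 'a.b.c=value' override to a nested dict in place."""
    path, _, raw = override.partition("=")
    keys = path.split(".")
    node = cfg
    for key in keys[:-1]:
        node = node.setdefault(key, {})  # create intermediate levels as needed
    node[keys[-1]] = raw

cfg = {}
apply_override(cfg, "aligner_args.cpu_decoding=True")
apply_override(cfg, "aligner_args.ctc_cfg.prob_suppress_value=0.5")
```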
[start of nemo/collections/asr/models/configs/matchboxnet_config.py]
...
93
94
95 # fmt: on
96
97
98 @dataclass
99 class MatchboxNetModelConfig(clf_cfg.EncDecClassificationConfig):
100 # Model global arguments
101 sample_rate: int = 16000
102 repeat: int = 1
103 dropout: float = 0.0
104 separable: bool = True
105 kernel_size_factor: float = 1.0
106 timesteps: int = 128
107 labels: List[str] = MISSING
108
109 # Dataset configs
110 train_ds: clf_cfg.EncDecClassificationDatasetConfig = clf_cfg.EncDecClassificationDatasetConfig(
111 manifest_filepath=None, shuffle=True, trim_silence=False
112 )
113 validation_ds: clf_cfg.EncDecClassificationDatasetConfig = clf_cfg.EncDecClassificationDatasetConfig(
114 manifest_filepath=None, shuffle=False
115 )
116 test_ds: clf_cfg.EncDecClassificationDatasetConfig = clf_cfg.EncDecClassificationDatasetConfig(
117 manifest_filepath=None, shuffle=False
118 )
119
120 # Optimizer / Scheduler config
121 optim: Optional[model_cfg.OptimConfig] = model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
122
123 # Model general component configs
124 preprocessor: AudioToMFCCPreprocessorConfig = AudioToMFCCPreprocessorConfig(window_size=0.025)
125 spec_augment: Optional[SpectrogramAugmentationConfig] = SpectrogramAugmentationConfig(
126 freq_masks=2, time_masks=2, freq_width=15, time_width=25, rect_masks=5, rect_time=25, rect_freq=15
127 )
128 crop_or_pad_augment: Optional[CropOrPadSpectrogramAugmentationConfig] = CropOrPadSpectrogramAugmentationConfig(
129 audio_length=128
130 )
131
132 encoder: ConvASREncoderConfig = ConvASREncoderConfig(activation="relu")
133 decoder: ConvASRDecoderClassificationConfig = ConvASRDecoderClassificationConfig()
134
135
136 @dataclass
137 class MatchboxNetVADModelConfig(MatchboxNetModelConfig):
...
[end of nemo/collections/asr/models/configs/matchboxnet_config.py]
[start of nemo/collections/nlp/models/token_classification/punctuation_capitalization_config.py]
...
...
201 of ``model.class_labels`` files."""
202
203
204 @dataclass
205 class PunctuationCapitalizationModelConfig:
206 """
207 A configuration of
208 :class:`~nemo.collections.nlp.models.token_classification.punctuation_capitalization_model.PunctuationCapitalizationModel`
209 model.
210
211 See an example of model config in
212 `nemo/examples/nlp/token_classification/conf/punctuation_capitalization_config.yaml
213 <https://github.com/NVIDIA/NeMo/blob/main/examples/nlp/token_classification/conf/punctuation_capitalization_config.yaml>`_
214
215 This config is a part of :class:`~PunctuationCapitalizationConfig`.
216 """
217
218 class_labels: ClassLabelsConfig = ClassLabelsConfig()
219 """A mandatory parameter containing a dictionary with names of label id files used in .nemo checkpoints.
220 These file names can also be used for passing label vocabularies to the model. If you wish to use ``class_labels``
221 for passing vocabularies, please provide path to vocabulary files in
222 ``model.common_dataset_parameters.label_vocab_dir`` parameter."""
223
224 common_dataset_parameters: Optional[CommonDatasetParametersConfig] = CommonDatasetParametersConfig()
225 """Label ids and loss mask information."""
226
227 train_ds: Optional[PunctuationCapitalizationTrainDataConfig] = None
228 """A configuration for creating training dataset and data loader."""
229
230 validation_ds: Optional[PunctuationCapitalizationEvalDataConfig] = None
231 """A configuration for creating validation datasets and data loaders."""
232
233 test_ds: Optional[PunctuationCapitalizationEvalDataConfig] = None
234 """A configuration for creating test datasets and data loaders."""
235
236 punct_head: HeadConfig = HeadConfig()
237 """A configuration for creating punctuation MLP head that is applied to a language model outputs."""
238
239 capit_head: HeadConfig = HeadConfig()
240 """A configuration for creating capitalization MLP head that is applied to a language model outputs."""
241
242 tokenizer: Any = TokenizerConfig()
243 """A configuration for source text tokenizer."""
244
245 language_model: LanguageModelConfig = LanguageModelConfig()
246 """A configuration of a BERT-like language model which serves as a model body."""
247
248 optim: Optional[Any] = None
249 """A configuration of optimizer and learning rate scheduler. There is much variability in such config. For
...
...
297 """
298
299 pretrained_model: Optional[str] = None
300 """Can be an NVIDIA NGC cloud model or a path to a .nemo checkpoint. You can get a list of possible cloud options
301 by calling method
302 :func:`~nemo.collections.nlp.models.token_classification.punctuation_capitalization_model.PunctuationCapitalizationModel.list_available_models`.
303 """
304
305 name: Optional[str] = 'Punctuation_and_Capitalization'
306 """A name of the model. Used for naming output directories and ``.nemo`` checkpoints."""
307
308 do_training: bool = True
309 """Whether to perform training of the model."""
310
311 do_testing: bool = False
312 """Whether ot perform testing of the model."""
313
314 model: PunctuationCapitalizationModelConfig = PunctuationCapitalizationModelConfig()
315 """A configuration for the
316 :class:`~nemo.collections.nlp.models.token_classification.punctuation_capitalization_model.PunctuationCapitalizationModel`
317 model."""
318
319 trainer: Optional[TrainerConfig] = TrainerConfig()
320 """Contains ``Trainer`` Lightning class constructor parameters."""
321
322 exp_manager: Optional[ExpManagerConfig] = ExpManagerConfig(name=name, files_to_copy=[])
323 """A configuration with various NeMo training options such as output directories, resuming from checkpoint,
324 tensorboard and W&B logging, and so on. For possible options see :ref:`exp-manager-label`."""
325
326
327 @dataclass
328 class PunctuationCapitalizationLexicalAudioConfig(PunctuationCapitalizationConfig):
329 model: PunctuationCapitalizationLexicalAudioModelConfig = PunctuationCapitalizationLexicalAudioModelConfig()
330
331
332 def is_legacy_model_config(model_cfg: DictConfig) -> bool:
333 """
...
[end of nemo/collections/nlp/models/token_classification/punctuation_capitalization_config.py]
[start of tools/nemo_forced_aligner/align.py]
...
135 viterbi_device: Optional[str] = None
136 batch_size: int = 1
137 use_local_attention: bool = True
138 additional_segment_grouping_separator: Optional[str] = None
139 audio_filepath_parts_in_utt_id: int = 1
140
141 # Buffered chunked streaming configs
142 use_buffered_chunked_streaming: bool = False
143 chunk_len_in_secs: float = 1.6
144 total_buffer_in_secs: float = 4.0
145 chunk_batch_size: int = 32
146
147 # Cache aware streaming configs
148 simulate_cache_aware_streaming: Optional[bool] = False
149
150 # Output file configs
151 save_output_file_formats: List[str] = field(default_factory=lambda: ["ctm", "ass"])
152 ctm_file_config: CTMFileConfig = CTMFileConfig()
153 ass_file_config: ASSFileConfig = ASSFileConfig()
154
155
156 @hydra_runner(config_name="AlignmentConfig", schema=AlignmentConfig)
157 def main(cfg: AlignmentConfig):
...
[end of tools/nemo_forced_aligner/align.py]
[start of nemo/collections/asr/models/configs/quartznet_config.py]
...
...
160 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
161 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False)
162 ]
163 return config
164 # fmt: on
165
166
167 @dataclass
168 class JasperModelConfig(ctc_cfg.EncDecCTCConfig):
169 # Model global arguments
170 sample_rate: int = 16000
171 repeat: int = 1
172 dropout: float = 0.0
173 separable: bool = False
174 labels: List[str] = MISSING
175
176 # Dataset configs
177 train_ds: ctc_cfg.ASRDatasetConfig = ctc_cfg.ASRDatasetConfig(
178 manifest_filepath=None, shuffle=True, trim_silence=True
179 )
180 validation_ds: ctc_cfg.ASRDatasetConfig = ctc_cfg.ASRDatasetConfig(manifest_filepath=None, shuffle=False)
181 test_ds: ctc_cfg.ASRDatasetConfig = ctc_cfg.ASRDatasetConfig(manifest_filepath=None, shuffle=False)
182
183 # Optimizer / Scheduler config
184 optim: Optional[model_cfg.OptimConfig] = model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
185
186 # Model general component configs
187 preprocessor: AudioToMelSpectrogramPreprocessorConfig = AudioToMelSpectrogramPreprocessorConfig()
188 spec_augment: Optional[SpectrogramAugmentationConfig] = SpectrogramAugmentationConfig()
189 encoder: ConvASREncoderConfig = ConvASREncoderConfig(activation="relu")
190 decoder: ConvASRDecoderConfig = ConvASRDecoderConfig()
191
192
193 @dataclass
194 class QuartzNetModelConfig(JasperModelConfig):
...
[end of nemo/collections/asr/models/configs/quartznet_config.py]
[start of nemo/collections/common/tokenizers/en_ja_tokenizers.py]
...
...
53 return ' '.join(tokens)
54
55 def normalize(self, text) -> str:
56 # Normalization doesn't handle Japanese periods correctly;
57 # '。' becomes '.'.
58 if self.lang_id == 'en':
59 return self.normalizer.normalize(text)
60 else:
61 return text
62
63
64 class JaMecabProcessor:
65 """
66 Tokenizer, Detokenizer and Normalizer utilities for Japanese MeCab & English
67 """
68
69 def __init__(self):
70 self.mecab_tokenizer = MeCab.Tagger(ipadic.MECAB_ARGS + " -Owakati")
71
72 def detokenize(self, text: List[str]) -> str:
73 RE_WS_IN_FW = re.compile(
...
[end of nemo/collections/common/tokenizers/en_ja_tokenizers.py]
[start of nemo/collections/asr/metrics/rnnt_wer.py]
1 # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import copy
16 import re
17 from abc import abstractmethod
18 from dataclasses import dataclass, is_dataclass
19 from typing import Callable, Dict, List, Optional, Tuple, Union
20
21 import editdistance
22 import numpy as np
...
...
1285
1286 def compute(self):
1287 wer = self.scores.float() / self.words
1288 return wer, self.scores.detach(), self.words.detach()
1289
1290
1291 @dataclass
1292 class RNNTDecodingConfig:
1293 model_type: str = "rnnt" # one of "rnnt", "multiblank" or "tdt"
1294 strategy: str = "greedy_batch"
1295
1296 compute_hypothesis_token_set: bool = False
1297
1298 # preserve decoding alignments
1299 preserve_alignments: Optional[bool] = None
1300
1301 # confidence config
1302 confidence_cfg: ConfidenceConfig = ConfidenceConfig()
1303
1304 # RNNT Joint fused batch size
1305 fused_batch_size: Optional[int] = None
1306
...
...
1303
1304 # RNNT Joint fused batch size
1305 fused_batch_size: Optional[int] = None
1306
1307 # compute RNNT time stamps
1308 compute_timestamps: Optional[bool] = None
1309
1310 # compute language IDs
1311 compute_langs: bool = False
1312
1313 # token representing word separator
1314 word_seperator: str = " "
1315
1316 # type of timestamps to calculate
1317 rnnt_timestamp_type: str = "all" # can be char, word or all for both
1318
1319 # greedy decoding config
1320 greedy: greedy_decode.GreedyRNNTInferConfig = greedy_decode.GreedyRNNTInferConfig()
1321
1322 # beam decoding config
1323 beam: beam_decode.BeamRNNTInferConfig = beam_decode.BeamRNNTInferConfig(beam_size=4)
1324
1325 # can be used to change temperature for decoding
1326 temperature: float = 1.0
1327
[end of nemo/collections/asr/metrics/rnnt_wer.py]
[start of nemo/collections/asr/modules/audio_preprocessing.py]
...
620 return augmented_spec
621
622
623 class CropOrPadSpectrogramAugmentation(NeuralModule):
624 """
625 Pad or Crop the incoming Spectrogram to a certain shape.
626
627 Args:
628 audio_length (int): the final number of timesteps that is required.
629 The signal will be either padded or cropped temporally to this
630 size.
631 """
632
633 def __init__(self, audio_length):
634 super(CropOrPadSpectrogramAugmentation, self).__init__()
635 self.audio_length = audio_length
636
637 @typecheck()
638 @torch.no_grad()
639 def forward(self, input_signal, length):
640 image = input_signal
...
[end of nemo/collections/asr/modules/audio_preprocessing.py]
[start of nemo/collections/asr/models/configs/k2_sequence_models_config.py]
...
...
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from dataclasses import dataclass
16
17 from nemo.collections.asr.models.configs.asr_models_config import EncDecCTCConfig
18 from nemo.collections.asr.parts.k2.classes import GraphModuleConfig as BackendConfig
19 from nemo.core.config.modelPT import NemoConfig
20
21
22 @dataclass
23 class GraphModuleConfig:
24 criterion_type: str = "ml"
25 loss_type: str = "ctc"
26 split_batch_size: int = 0
27 dec_type: str = "topo"
28 transcribe_training: bool = True
29 backend_cfg: BackendConfig = BackendConfig()
30
31
32 @dataclass
33 class EncDecK2SeqConfig(EncDecCTCConfig):
34 graph_module_cfg: GraphModuleConfig = GraphModuleConfig()
35
36
37 @dataclass
38 class EncDecK2SeqModelConfig(NemoConfig):
39 model: EncDecK2SeqConfig = EncDecK2SeqConfig()
40
[end of nemo/collections/asr/models/configs/k2_sequence_models_config.py]
[start of scripts/speech_recognition/confidence/benchmark_asr_confidence.py]
...
...
110 pretrained_name: Optional[str] = None # Name of a pretrained model
111 dataset_manifest: str = MISSING
112 output_dir: str = MISSING
113
114 # General configs
115 batch_size: int = 32
116 num_workers: int = 4
117
118 # Set `cuda` to int to define CUDA device. If 'None', will look for CUDA
119 # device anyway, and do inference on CPU only if CUDA device is not found.
120 # If `cuda` is a negative number, inference will be on CPU only.
121 cuda: Optional[int] = None
122 amp: bool = False
123 audio_type: str = "wav"
124
125 # Confidence configs
126 target_level: str = "auto" # Choices: "word", "token", "auto" (for both word- and token-level confidence)
127 confidence_cfg: ConfidenceConfig = ConfidenceConfig(preserve_word_confidence=True, preserve_token_confidence=True)
128 grid_params: Optional[str] = None # a dictionary with lists of parameters to iteratively benchmark on
129
130
131 @hydra_runner(config_name="ConfidenceBenchmarkingConfig", schema=ConfidenceBenchmarkingConfig)
...
[end of scripts/speech_recognition/confidence/benchmark_asr_confidence.py]
[start of nemo/collections/nlp/models/machine_translation/mt_enc_dec_config.py]
...
...
32 from nemo.core.config.modelPT import OptimConfig, SchedConfig
33
34
35 @dataclass
36 class MTSchedConfig(SchedConfig):
37 name: str = 'InverseSquareRootAnnealing'
38 warmup_ratio: Optional[float] = None
39 last_epoch: int = -1
40
41
42 # TODO: Refactor this dataclass to support more optimizers (it pins the optimizer to Adam-like optimizers).
43 @dataclass
44 class MTOptimConfig(OptimConfig):
45 name: str = 'adam'
46 lr: float = 1e-3
47 betas: Tuple[float, float] = (0.9, 0.98)
48 weight_decay: float = 0.0
49 sched: Optional[MTSchedConfig] = MTSchedConfig()
50
51
52 @dataclass
53 class MTEncDecModelConfig(EncDecNLPModelConfig):
...
...
60 len_pen: float = 0.0
61 src_language: Any = 'en' # Any = str or List[str]
62 tgt_language: Any = 'en' # Any = str or List[str]
63 find_unused_parameters: Optional[bool] = True
64 shared_tokenizer: Optional[bool] = True
65 multilingual: Optional[bool] = False
66 preproc_out_dir: Optional[str] = None
67 validate_input_ids: Optional[bool] = True
68 shared_embeddings: bool = False
69
70 # network architecture configuration
71 encoder_tokenizer: Any = MISSING
72 encoder: Any = MISSING
73
74 decoder_tokenizer: Any = MISSING
75 decoder: Any = MISSING
76
77 head: TokenClassifierConfig = TokenClassifierConfig(log_softmax=True)
78
79 # dataset configurations
80 train_ds: Optional[TranslationDataConfig] = TranslationDataConfig(
81 src_file_name=MISSING,
82 tgt_file_name=MISSING,
83 tokens_in_batch=512,
84 clean=True,
85 shuffle=True,
86 cache_ids=False,
87 use_cache=False,
88 )
89 validation_ds: Optional[TranslationDataConfig] = TranslationDataConfig(
90 src_file_name=MISSING,
91 tgt_file_name=MISSING,
92 tokens_in_batch=512,
93 clean=False,
94 shuffle=False,
95 cache_ids=False,
96 use_cache=False,
97 )
98 test_ds: Optional[TranslationDataConfig] = TranslationDataConfig(
99 src_file_name=MISSING,
100 tgt_file_name=MISSING,
101 tokens_in_batch=512,
102 clean=False,
103 shuffle=False,
104 cache_ids=False,
105 use_cache=False,
106 )
107 optim: Optional[OptimConfig] = MTOptimConfig()
108
109
110 @dataclass
111 class AAYNBaseConfig(MTEncDecModelConfig):
112
113 # Attention is All You Need Base Configuration
114 encoder_tokenizer: TokenizerConfig = TokenizerConfig(library='yttm')
115 decoder_tokenizer: TokenizerConfig = TokenizerConfig(library='yttm')
116
117 encoder: NeMoTransformerEncoderConfig = NeMoTransformerEncoderConfig(
118 library='nemo',
119 model_name=None,
120 pretrained=False,
121 hidden_size=512,
122 inner_size=2048,
123 num_layers=6,
124 num_attention_heads=8,
125 ffn_dropout=0.1,
126 attn_score_dropout=0.1,
127 attn_layer_dropout=0.1,
128 )
129
130 decoder: NeMoTransformerConfig = NeMoTransformerConfig(
131 library='nemo',
132 model_name=None,
133 pretrained=False,
134 hidden_size=512,
135 inner_size=2048,
136 num_layers=6,
137 num_attention_heads=8,
138 ffn_dropout=0.1,
139 attn_score_dropout=0.1,
140 attn_layer_dropout=0.1,
141 )
142
143
144 @dataclass
...
...
136 num_layers=6,
137 num_attention_heads=8,
138 ffn_dropout=0.1,
139 attn_score_dropout=0.1,
140 attn_layer_dropout=0.1,
141 )
142
143
144 @dataclass
145 class MTBottleneckModelConfig(AAYNBaseConfig):
146 model_type: str = 'nll'
147 min_logv: float = -6
148 latent_size: int = -1 # -1 will take value of encoder hidden
149 non_recon_warmup_batches: int = 200000
150 recon_per_token: bool = True
151 log_timing: bool = True
152
153 encoder: NeMoTransformerBottleneckEncoderConfig = NeMoTransformerBottleneckEncoderConfig(
154 library='nemo',
155 model_name=None,
156 pretrained=False,
157 hidden_size=512,
158 inner_size=2048,
159 num_layers=6,
160 num_attention_heads=8,
161 ffn_dropout=0.1,
162 attn_score_dropout=0.1,
163 attn_layer_dropout=0.1,
164 arch='seq2seq',
165 hidden_steps=32,
166 hidden_blocks=1,
167 hidden_init_method='params',
168 )
169
170 decoder: NeMoTransformerBottleneckDecoderConfig = NeMoTransformerBottleneckDecoderConfig(
171 library='nemo',
172 model_name=None,
173 pretrained=False,
174 inner_size=2048,
175 num_layers=6,
176 num_attention_heads=8,
177 ffn_dropout=0.1,
178 attn_score_dropout=0.1,
179 attn_layer_dropout=0.1,
180 arch='seq2seq',
181 )
182
[end of nemo/collections/nlp/models/machine_translation/mt_enc_dec_config.py]
[start of nemo/utils/exp_manager.py]
...
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import glob
16 import os
17 import subprocess
18 import sys
19 import time
20 import warnings
21 from dataclasses import dataclass
22 from datetime import timedelta
23 from pathlib import Path
24 from shutil import copy, move
25 from typing import Any, Dict, List, Optional, Tuple, Union
...
...
132
133 # Log dir creation parameters
134 explicit_log_dir: Optional[str] = None
135 exp_dir: Optional[str] = None
136 name: Optional[str] = None
137 version: Optional[str] = None
138 use_datetime_version: Optional[bool] = True
139 resume_if_exists: Optional[bool] = False
140 resume_past_end: Optional[bool] = False
141 resume_ignore_no_checkpoint: Optional[bool] = False
142 resume_from_checkpoint: Optional[str] = None
143 # Logging parameters
144 create_tensorboard_logger: Optional[bool] = True
145 summary_writer_kwargs: Optional[Dict[Any, Any]] = None
146 create_wandb_logger: Optional[bool] = False
147 wandb_logger_kwargs: Optional[Dict[Any, Any]] = None
148 create_mlflow_logger: Optional[bool] = False
149 mlflow_logger_kwargs: Optional[MLFlowParams] = MLFlowParams()
150 create_dllogger_logger: Optional[bool] = False
151 dllogger_logger_kwargs: Optional[DLLoggerParams] = DLLoggerParams()
152 create_clearml_logger: Optional[bool] = False
153 clearml_logger_kwargs: Optional[ClearMLParams] = ClearMLParams()
154 # Checkpointing parameters
155 create_checkpoint_callback: Optional[bool] = True
156 checkpoint_callback_params: Optional[CallbackParams] = CallbackParams()
157 create_early_stopping_callback: Optional[bool] = False
158 early_stopping_callback_params: Optional[EarlyStoppingParams] = EarlyStoppingParams()
159 create_preemption_callback: Optional[bool] = True
160 # Additional exp_manager arguments
161 files_to_copy: Optional[List[str]] = None
162 # logs timing of train/val/test steps
163 log_step_timing: Optional[bool] = True
164 step_timing_kwargs: Optional[StepTimingParams] = StepTimingParams()
165 # Configures creation of log files for different ranks
166 log_local_rank_0_only: Optional[bool] = False
167 log_global_rank_0_only: Optional[bool] = False
168 # disable initial validation when resuming from a checkpoint saved during validation
169 disable_validation_on_resume: Optional[bool] = True
170 ema: Optional[EMAParams] = EMAParams()
171 # Wall clock time limit
172 max_time_per_run: Optional[str] = None
173 # time to sleep non 0 ranks during initialization
174 seconds_to_sleep: float = 5
...
[end of nemo/utils/exp_manager.py]
[start of nemo/collections/asr/parts/submodules/ctc_beam_decoding.py]
...
...
588 beam_threshold: float = 20.0
589 unk_weight: float = -math.inf
590 sil_weight: float = 0.0
591
592
593 @dataclass
594 class BeamCTCInferConfig:
595 beam_size: int
596 search_type: str = 'default'
597 preserve_alignments: bool = False
598 compute_timestamps: bool = False
599 return_best_hypothesis: bool = True
600
601 beam_alpha: float = 1.0
602 beam_beta: float = 0.0
603 kenlm_path: Optional[str] = None
604
605 flashlight_cfg: Optional[FlashlightConfig] = FlashlightConfig()
606 pyctcdecode_cfg: Optional[PyCTCDecodeConfig] = PyCTCDecodeConfig()
607
[end of nemo/collections/asr/parts/submodules/ctc_beam_decoding.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
NVIDIA/NeMo
|
8a892b86186dbdf61803d75570cb5c58471e9dda
|
Ubuntu 22.04 Python 3.11 [asr]: multiple errors `dataclasses ValueError: mutable default * for field * is not allowed: use default_factory`
**Describe the bug**
After installing the latest stable `1.19.1` from PyPI, or the latest commit with `[asr]` extras, I'm getting this error from `hydra-core==1.2.0` when trying to import `nemo.collections.asr`:
```
Traceback (most recent call last):
File "/home/alex/T7/src/speaker_verification/nemo_speaker/test.py", line 5, in <module>
import nemo.collections.asr as nemo_asr
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/__init__.py", line 15, in <module>
from nemo.collections.asr import data, losses, models, modules
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/losses/__init__.py", line 15, in <module>
from nemo.collections.asr.losses.angularloss import AngularSoftmaxLoss
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/losses/angularloss.py", line 18, in <module>
from nemo.core.classes import Loss, Typing, typecheck
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/core/__init__.py", line 16, in <module>
from nemo.core.classes import *
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/core/classes/__init__.py", line 16, in <module>
import hydra
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/__init__.py", line 5, in <module>
from hydra import utils
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/utils.py", line 8, in <module>
import hydra._internal.instantiate._instantiate2
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 12, in <module>
from hydra._internal.utils import _locate
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/_internal/utils.py", line 18, in <module>
from hydra.core.utils import get_valid_filename, validate_config_path
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/core/utils.py", line 20, in <module>
from hydra.core.hydra_config import HydraConfig
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/core/hydra_config.py", line 6, in <module>
from hydra.conf import HydraConf
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/conf/__init__.py", line 46, in <module>
class JobConf:
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/conf/__init__.py", line 75, in JobConf
@dataclass
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1230, in dataclass
return wrap(cls)
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1220, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 958, in _process_class
cls_fields.append(_get_field(cls, name, type, kw_only))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 815, in _get_field
raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'hydra.conf.JobConf.JobConfig.OverrideDirname'> for field override_dirname is not allowed: use default_factory
```
If I then manually upgrade `hydra-core` to the latest release (`1.3.2`), I get similar errors from `nemo-toolkit`:
```
Traceback (most recent call last):
File "/home/alex/T7/src/speaker_verification/nemo_speaker/test.py", line 5, in <module>
import nemo.collections.asr as nemo_asr
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/__init__.py", line 15, in <module>
from nemo.collections.asr import data, losses, models, modules
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/losses/__init__.py", line 16, in <module>
from nemo.collections.asr.losses.audio_losses import SDRLoss
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/losses/audio_losses.py", line 21, in <module>
from nemo.collections.asr.parts.preprocessing.features import make_seq_mask_like
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/__init__.py", line 16, in <module>
from nemo.collections.asr.parts.preprocessing.features import FeaturizerFactory, FilterbankFeatures, WaveformFeaturizer
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/features.py", line 44, in <module>
from nemo.collections.asr.parts.preprocessing.perturb import AudioAugmentor
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/perturb.py", line 50, in <module>
from nemo.collections.common.parts.preprocessing import collections, parsers
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/common/__init__.py", line 16, in <module>
from nemo.collections.common import data, losses, parts, tokenizers
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/common/parts/__init__.py", line 15, in <module>
from nemo.collections.common.parts.adapter_modules import LinearAdapter, LinearAdapterConfig
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/common/parts/adapter_modules.py", line 147, in <module>
@dataclass
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1230, in dataclass
return wrap(cls)
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1220, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 958, in _process_class
cls_fields.append(_get_field(cls, name, type, kw_only))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 815, in _get_field
raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'nemo.core.classes.mixins.adapter_mixin_strategies.ResidualAddAdapterStrategyConfig'> for field adapter_strategy is not allowed: use default_factory
```
It's easy to fix with a patch like this:
```
--- nemo/collections/common/parts/adapter_modules.py.orig 2023-08-04 13:55:53.464534800 +0200
+++ nemo/collections/common/parts/adapter_modules.py 2023-08-04 14:05:45.579537700 +0200
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass, is_dataclass
+from dataclasses import dataclass, is_dataclass, field
from typing import Any, Optional
from hydra.utils import instantiate
@@ -151,5 +151,5 @@ class LinearAdapterConfig:
activation: str = 'swish'
norm_position: str = 'pre'
dropout: float = 0.0
- adapter_strategy: Optional[Any] = adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
+ adapter_strategy: Optional[Any] = field(default_factory=adapter_mixin_strategies.ResidualAddAdapterStrategyConfig)
_target_: str = "{0}.{1}".format(LinearAdapter.__module__, LinearAdapter.__name__)
```
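For context, the reason the `default_factory` rewrite is required: Python 3.11 widened the dataclass mutable-default check from the old list/dict/set special case to any default whose type is unhashable, and dataclasses with the default `eq=True` set `__hash__ = None` on their instances. A minimal NeMo-free sketch of the behavior (the class names here are made up for illustration):

```python
import sys
from dataclasses import dataclass


@dataclass
class InnerConfig:
    # eq=True (the dataclass default) sets __hash__ = None,
    # so InnerConfig instances are unhashable.
    x: int = 0


def define_with_instance_default():
    @dataclass
    class Outer:
        inner: InnerConfig = InnerConfig()  # rejected at class-creation time on 3.11+
    return Outer


raised = False
try:
    define_with_instance_default()
except ValueError:
    # 3.11+: "mutable default <class '...InnerConfig'> for field inner
    #         is not allowed: use default_factory"
    raised = True

print(f"Python {sys.version_info[:2]} rejects the instance default: {raised}")
```

On 3.10 and earlier the definition succeeds, but the single `InnerConfig()` instance is then shared as a class attribute by every `Outer`, which is exactly the foot-gun the new check guards against.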
However, another error of the same kind then comes up:
```
Traceback (most recent call last):
File "/home/alex/T7/src/speaker_verification/nemo_speaker/test.py", line 5, in <module>
import nemo.collections.asr as nemo_asr
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/__init__.py", line 15, in <module>
from nemo.collections.asr import data, losses, models, modules
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/models/__init__.py", line 18, in <module>
from nemo.collections.asr.models.clustering_diarizer import ClusteringDiarizer
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/models/clustering_diarizer.py", line 29, in <module>
from nemo.collections.asr.metrics.der import score_labels
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/metrics/der.py", line 24, in <module>
from nemo.collections.asr.metrics.wer import word_error_rate
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/metrics/wer.py", line 27, in <module>
from nemo.collections.asr.parts.submodules import ctc_beam_decoding, ctc_greedy_decoding
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/parts/submodules/ctc_beam_decoding.py", line 593, in <module>
@dataclass
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1230, in dataclass
return wrap(cls)
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1220, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 958, in _process_class
cls_fields.append(_get_field(cls, name, type, kw_only))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 815, in _get_field
raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'nemo.collections.asr.parts.submodules.ctc_beam_decoding.FlashlightConfig'> for field flashlight_cfg is not allowed: use default_factory
```
This can also be fixed accordingly, but all in all, it appears these issues are pretty common in the code base.
Looks like NeMo isn't Python 3.11 ready, at least the `asr` collection.
**Steps/Code to reproduce bug**
With Python 3.11
1. Install `nemo-toolkit[asr]` either 1.19.1, or current HEAD from git (b498d438fc4c35ebf364a9a1c5cd3e29a2c0fe50)
2. Run `import nemo.collections.asr`
**Expected behavior**
Import `nemo.collections.asr` without errors
**Environment overview (please complete the following information)**
- Environment location: Bare-metal
- Method of NeMo install: `pip install nemo-toolkit==1.19.1` or from source b498d438fc4c35ebf364a9a1c5cd3e29a2c0fe50
**Environment details**
- Ubuntu 22.04
- PyTorch version 2.0.1
- Python version 3.11.4
|
Seems to be similar to #7002
Interesting. The fix is easy but needs to be applied to basically every single place that has this constructor for our adapter configs. Let me see if I can update it. But no guarantees on how soon fixes will come in main.
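One way to enumerate "every single place" mechanically is to scan for annotated class attributes whose default instantiates a `*Config` class inline. This is a heuristic triage sketch, not an official NeMo tool; the regex and the `*Config` naming convention are assumptions:

```python
import pathlib
import re

# Heuristic: flag lines like `flashlight_cfg: FlashlightConfig = FlashlightConfig()`,
# i.e. an annotated attribute whose default calls a class whose name ends in
# "Config". Lines already using `field(default_factory=...)` do not match,
# so fixed sites are skipped.
SUSPECT = re.compile(r"^\s*\w+\s*:\s*[^=\n]+=\s*[\w.]*Config\s*\(", re.MULTILINE)


def find_suspect_defaults(root):
    """Return (path, line_number, source_line) for each suspicious default under root."""
    hits = []
    for path in pathlib.Path(root).rglob("*.py"):
        text = path.read_text(encoding="utf-8", errors="ignore")
        for match in SUSPECT.finditer(text):
            line_no = text.count("\n", 0, match.start()) + 1
            hits.append((str(path), line_no, match.group(0).strip()))
    return hits
```

It deliberately over- and under-matches (e.g. it misses mutable non-`Config` defaults), so results are a starting point for manual review rather than an exhaustive list.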
Looking forward to it @titu1994 ! Thanks ๐
@titu1994 I was looking to use NeMo speaker diarization with Python 3.11 and hit this dataclass issue. I patched everything involved in the specific code paths I needed: https://github.com/lmnt-com/NeMo/commit/d89acf9f0152e43dee29d7d1c4667ee34c26ffd7
I was using the neural diarizer as described in https://github.com/NVIDIA/NeMo/tree/main/examples/speaker_tasks/diarization
I'd be happy to upstream this if it's helpful.
I haven't checked whether this is backwards compatible with earlier Python/dataclass versions; do you know?
For reference, what led me to this issue, though it's duplicative to the above discussion:
[Similar error](https://github.com/huggingface/datasets/issues/5230)
[StackOverflow solution](https://stackoverflow.com/questions/53632152/why-cant-dataclasses-have-mutable-defaults-in-their-class-attributes-declaratio)
@shaper Thanks for sharing. For brevity, you don't really need a `lambda` when you don't pass any init parameters, like this:
```
field(default_factory=lambda: ConfidenceConfig())
```
You can just do
```
field(default_factory=ConfidenceConfig)
```
It's only needed when you do pass parameter(s), like
```
field(default_factory=lambda: beam_decode.BeamRNNTInferConfig(beam_size=4))
```
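Putting the two spellings together in one self-contained sketch (the field names are invented for illustration, not NeMo's actual configs): `default_factory` gives every instance its own nested config and is accepted on every Python version, including 3.11+.

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class FlashlightDemoConfig:  # stand-in for a nested sub-config
    lexicon_path: Optional[str] = None
    beam_size_token: int = 16


@dataclass
class BeamDemoConfig:
    beam_size: int = 4
    # Accepted on all Python versions; on 3.11+ this is the only legal
    # spelling for an unhashable (eq=True) dataclass default.
    flashlight_cfg: FlashlightDemoConfig = field(default_factory=FlashlightDemoConfig)
    # When the default needs arguments, wrap it in a lambda, e.g.:
    #   beam: BeamCfg = field(default_factory=lambda: BeamCfg(beam_size=4))


a = BeamDemoConfig()
b = BeamDemoConfig()
a.flashlight_cfg.beam_size_token = 32   # mutate one instance only
print(b.flashlight_cfg.beam_size_token)  # prints 16: the other is untouched
```

Unlike a shared class-level instance default on 3.10 and earlier, mutating `a.flashlight_cfg` here cannot leak into `b`.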
I have the same issue. @tango4j suggested using one of the models from https://huggingface.co/spaces/hf-audio/open_asr_leaderboard, but I cannot import nemo.collections.asr:
```
Traceback (most recent call last):
File "/opt/pycharm-2022.3.3/plugins/python/helpers/pycharm/docrunner.py", line 138, in __run
exec(compile(example.source, filename, "single",
File "<doctest NeMoASR[2]>", line 1, in <module>
NeMoASR().apply_asr(file)
^^^^^^^^^
File "/home/cbj/python/cbj/cbj/transcribe/pretrained.py", line 504, in __init__
import nemo.collections.asr as nemo_asr
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/__init__.py", line 15, in <module>
from nemo.collections.asr import data, losses, models, modules
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/losses/__init__.py", line 16, in <module>
from nemo.collections.asr.losses.audio_losses import SDRLoss
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/losses/audio_losses.py", line 21, in <module>
from nemo.collections.asr.parts.preprocessing.features import make_seq_mask_like
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/__init__.py", line 16, in <module>
from nemo.collections.asr.parts.preprocessing.features import FeaturizerFactory, FilterbankFeatures, WaveformFeaturizer
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/features.py", line 44, in <module>
from nemo.collections.asr.parts.preprocessing.perturb import AudioAugmentor
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/perturb.py", line 50, in <module>
from nemo.collections.common.parts.preprocessing import collections, parsers
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/common/__init__.py", line 16, in <module>
from nemo.collections.common import data, losses, parts, tokenizers
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/common/parts/__init__.py", line 15, in <module>
from nemo.collections.common.parts.adapter_modules import LinearAdapter, LinearAdapterConfig
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/common/parts/adapter_modules.py", line 147, in <module>
@dataclass
^^^^^^^^^
File "/opt/py/2023/lib/python3.11/dataclasses.py", line 1230, in dataclass
return wrap(cls)
^^^^^^^^^
File "/opt/py/2023/lib/python3.11/dataclasses.py", line 1220, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/py/2023/lib/python3.11/dataclasses.py", line 958, in _process_class
cls_fields.append(_get_field(cls, name, type, kw_only))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/py/2023/lib/python3.11/dataclasses.py", line 815, in _get_field
raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'nemo.core.classes.mixins.adapter_mixin_strategies.ResidualAddAdapterStrategyConfig'> for field adapter_strategy is not allowed: use default_factory
```
For documentation (I had to search in the provided links):
Mutable defaults were never allowed in dataclasses, but Python 3.11 tightened the check: instead of rejecting only a few known mutable types (`list`, `dict`, `set`), it now treats any unhashable default as mutable.
An alternative to `default_factory` would be frozen dataclasses (frozen instances are hashable, so they pass the check), but I don't know whether the configs in this code base are mutated after creation.
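To make the 3.11 behavior concrete, a small sketch (class names are invented; on Python < 3.11 the `ValueError` branch is simply skipped):

```python
import sys
from dataclasses import dataclass

@dataclass
class PlainCfg:      # eq=True without frozen sets __hash__ = None
    x: int = 0

@dataclass(frozen=True)
class FrozenCfg:     # frozen + eq generates __hash__, so instances are hashable
    x: int = 0

print(PlainCfg.__hash__)   # None -> 3.11 treats instances as mutable defaults
print(hash(FrozenCfg()))   # works: frozen instances are hashable

if sys.version_info >= (3, 11):
    try:
        @dataclass
        class Outer:
            cfg: PlainCfg = PlainCfg()  # unhashable default -> rejected
    except ValueError as exc:
        print(exc)  # message suggests using default_factory

# A frozen default passes the check, though note it is one shared
# (immutable) object across all instances rather than a fresh copy:
@dataclass
class OuterFrozen:
    cfg: FrozenCfg = FrozenCfg()
```

So frozen configs would sidestep the error, but only if nothing downstream assigns to their fields.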
You need to update to NeMo 1.20; omegaconf shipped a fix that should resolve this.
I have NeMo 1.20.0.
I installed NeMo yesterday with `pip install nemo_toolkit` and `pip install pytorch_lightning`, so it should be the newest PyPI version.
```
$ pip show nemo_toolkit
Name: nemo-toolkit
Version: 1.20.0
Summary: NeMo - a toolkit for Conversational AI
Home-page: https://github.com/nvidia/nemo
Author: NVIDIA
Author-email: nemo-toolkit@nvidia.com
License: Apache2
Location: /opt/py/2023/lib/python3.11/site-packages
Requires: huggingface-hub, numba, numpy, onnx, python-dateutil, ruamel.yaml, scikit-learn, setuptools, tensorboard, text-unidecode, torch, tqdm, wget, wrapt
Required-by:
$ pip show omegaconf
Name: omegaconf
Version: 2.3.0
Summary: A flexible configuration library
Home-page: https://github.com/omry/omegaconf
Author: Omry Yadan
Author-email: omry@yadan.net
License:
Location: /home/cbj/.local/lib/python3.11/site-packages
Requires: antlr4-python3-runtime, PyYAML
Required-by: hydra-core
$ python -c "import nemo.collections.asr as nemo_asr"
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/__init__.py", line 15, in <module>
from nemo.collections.asr import data, losses, models, modules
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/losses/__init__.py", line 16, in <module>
from nemo.collections.asr.losses.audio_losses import SDRLoss
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/losses/audio_losses.py", line 21, in <module>
from nemo.collections.asr.parts.preprocessing.features import make_seq_mask_like
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/__init__.py", line 16, in <module>
from nemo.collections.asr.parts.preprocessing.features import FeaturizerFactory, FilterbankFeatures, WaveformFeaturizer
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/features.py", line 44, in <module>
from nemo.collections.asr.parts.preprocessing.perturb import AudioAugmentor
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/perturb.py", line 50, in <module>
from nemo.collections.common.parts.preprocessing import collections, parsers
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/common/__init__.py", line 16, in <module>
from nemo.collections.common import data, losses, parts, tokenizers
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/common/parts/__init__.py", line 15, in <module>
from nemo.collections.common.parts.adapter_modules import LinearAdapter, LinearAdapterConfig
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/common/parts/adapter_modules.py", line 147, in <module>
@dataclass
^^^^^^^^^
File "/opt/py/2023/lib/python3.11/dataclasses.py", line 1230, in dataclass
return wrap(cls)
^^^^^^^^^
File "/opt/py/2023/lib/python3.11/dataclasses.py", line 1220, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/py/2023/lib/python3.11/dataclasses.py", line 958, in _process_class
cls_fields.append(_get_field(cls, name, type, kw_only))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/py/2023/lib/python3.11/dataclasses.py", line 815, in _get_field
raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'nemo.core.classes.mixins.adapter_mixin_strategies.ResidualAddAdapterStrategyConfig'> for field adapter_strategy is not allowed: use default_factory
```
Hmm ok I'll take a look
2023-09-30T01:26:50Z
<patch>
diff --git a/examples/asr/experimental/k2/align_speech_parallel.py b/examples/asr/experimental/k2/align_speech_parallel.py
--- a/examples/asr/experimental/k2/align_speech_parallel.py
+++ b/examples/asr/experimental/k2/align_speech_parallel.py
@@ -74,7 +74,7 @@
import os
-from dataclasses import dataclass, is_dataclass
+from dataclasses import dataclass, field, is_dataclass
from typing import Optional
import pytorch_lightning as ptl
@@ -94,12 +94,14 @@
@dataclass
class ParallelAlignmentConfig:
model: Optional[str] = None # name
- predict_ds: ASRDatasetConfig = ASRDatasetConfig(return_sample_id=True, num_workers=4)
- aligner_args: K2AlignerWrapperModelConfig = K2AlignerWrapperModelConfig()
+ predict_ds: ASRDatasetConfig = field(
+ default_factory=lambda: ASRDatasetConfig(return_sample_id=True, num_workers=4)
+ )
+ aligner_args: K2AlignerWrapperModelConfig = field(default_factory=lambda: K2AlignerWrapperModelConfig())
output_path: str = MISSING
model_stride: int = 8
- trainer: TrainerConfig = TrainerConfig(gpus=-1, accelerator="ddp")
+ trainer: TrainerConfig = field(default_factory=lambda: TrainerConfig(gpus=-1, accelerator="ddp"))
# there arguments will be ignored
return_predictions: bool = False
diff --git a/nemo/collections/asr/metrics/rnnt_wer.py b/nemo/collections/asr/metrics/rnnt_wer.py
--- a/nemo/collections/asr/metrics/rnnt_wer.py
+++ b/nemo/collections/asr/metrics/rnnt_wer.py
@@ -15,7 +15,7 @@
import copy
import re
from abc import abstractmethod
-from dataclasses import dataclass, is_dataclass
+from dataclasses import dataclass, field, is_dataclass
from typing import Callable, Dict, List, Optional, Tuple, Union
import editdistance
@@ -1299,7 +1299,7 @@ class RNNTDecodingConfig:
preserve_alignments: Optional[bool] = None
# confidence config
- confidence_cfg: ConfidenceConfig = ConfidenceConfig()
+ confidence_cfg: ConfidenceConfig = field(default_factory=lambda: ConfidenceConfig())
# RNNT Joint fused batch size
fused_batch_size: Optional[int] = None
@@ -1317,10 +1317,10 @@ class RNNTDecodingConfig:
rnnt_timestamp_type: str = "all" # can be char, word or all for both
# greedy decoding config
- greedy: greedy_decode.GreedyRNNTInferConfig = greedy_decode.GreedyRNNTInferConfig()
+ greedy: greedy_decode.GreedyRNNTInferConfig = field(default_factory=lambda: greedy_decode.GreedyRNNTInferConfig())
# beam decoding config
- beam: beam_decode.BeamRNNTInferConfig = beam_decode.BeamRNNTInferConfig(beam_size=4)
+ beam: beam_decode.BeamRNNTInferConfig = field(default_factory=lambda: beam_decode.BeamRNNTInferConfig(beam_size=4))
# can be used to change temperature for decoding
temperature: float = 1.0
diff --git a/nemo/collections/asr/metrics/wer.py b/nemo/collections/asr/metrics/wer.py
--- a/nemo/collections/asr/metrics/wer.py
+++ b/nemo/collections/asr/metrics/wer.py
@@ -14,7 +14,7 @@
import re
from abc import abstractmethod
-from dataclasses import dataclass, is_dataclass
+from dataclasses import dataclass, field, is_dataclass
from typing import Callable, Dict, List, Optional, Tuple, Union
import editdistance
@@ -1297,13 +1297,17 @@ class CTCDecodingConfig:
batch_dim_index: int = 0
# greedy decoding config
- greedy: ctc_greedy_decoding.GreedyCTCInferConfig = ctc_greedy_decoding.GreedyCTCInferConfig()
+ greedy: ctc_greedy_decoding.GreedyCTCInferConfig = field(
+ default_factory=lambda: ctc_greedy_decoding.GreedyCTCInferConfig()
+ )
# beam decoding config
- beam: ctc_beam_decoding.BeamCTCInferConfig = ctc_beam_decoding.BeamCTCInferConfig(beam_size=4)
+ beam: ctc_beam_decoding.BeamCTCInferConfig = field(
+ default_factory=lambda: ctc_beam_decoding.BeamCTCInferConfig(beam_size=4)
+ )
# confidence config
- confidence_cfg: ConfidenceConfig = ConfidenceConfig()
+ confidence_cfg: ConfidenceConfig = field(default_factory=lambda: ConfidenceConfig())
# can be used to change temperature for decoding
temperature: float = 1.0
diff --git a/nemo/collections/asr/models/configs/aligner_config.py b/nemo/collections/asr/models/configs/aligner_config.py
--- a/nemo/collections/asr/models/configs/aligner_config.py
+++ b/nemo/collections/asr/models/configs/aligner_config.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from nemo.collections.asr.parts.k2.classes import GraphModuleConfig
@@ -35,10 +35,10 @@ class AlignerWrapperModelConfig:
word_output: bool = True
cpu_decoding: bool = False
decode_batch_size: int = 0
- ctc_cfg: AlignerCTCConfig = AlignerCTCConfig()
- rnnt_cfg: AlignerRNNTConfig = AlignerRNNTConfig()
+ ctc_cfg: AlignerCTCConfig = field(default_factory=lambda: AlignerCTCConfig())
+ rnnt_cfg: AlignerRNNTConfig = field(default_factory=lambda: AlignerRNNTConfig())
@dataclass
class K2AlignerWrapperModelConfig(AlignerWrapperModelConfig):
- decoder_module_cfg: GraphModuleConfig = GraphModuleConfig()
+ decoder_module_cfg: GraphModuleConfig = field(default_factory=lambda: GraphModuleConfig())
diff --git a/nemo/collections/asr/models/configs/asr_models_config.py b/nemo/collections/asr/models/configs/asr_models_config.py
--- a/nemo/collections/asr/models/configs/asr_models_config.py
+++ b/nemo/collections/asr/models/configs/asr_models_config.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional
from omegaconf import MISSING
@@ -74,24 +74,32 @@ class EncDecCTCConfig(model_cfg.ModelConfig):
labels: List[str] = MISSING
# Dataset configs
- train_ds: ASRDatasetConfig = ASRDatasetConfig(manifest_filepath=None, shuffle=True)
- validation_ds: ASRDatasetConfig = ASRDatasetConfig(manifest_filepath=None, shuffle=False)
- test_ds: ASRDatasetConfig = ASRDatasetConfig(manifest_filepath=None, shuffle=False)
+ train_ds: ASRDatasetConfig = field(default_factory=lambda: ASRDatasetConfig(manifest_filepath=None, shuffle=True))
+ validation_ds: ASRDatasetConfig = field(
+ default_factory=lambda: ASRDatasetConfig(manifest_filepath=None, shuffle=False)
+ )
+ test_ds: ASRDatasetConfig = field(default_factory=lambda: ASRDatasetConfig(manifest_filepath=None, shuffle=False))
# Optimizer / Scheduler config
- optim: Optional[model_cfg.OptimConfig] = model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
+ optim: Optional[model_cfg.OptimConfig] = field(
+ default_factory=lambda: model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
+ )
# Model component configs
- preprocessor: AudioToMelSpectrogramPreprocessorConfig = AudioToMelSpectrogramPreprocessorConfig()
- spec_augment: Optional[SpectrogramAugmentationConfig] = SpectrogramAugmentationConfig()
- encoder: ConvASREncoderConfig = ConvASREncoderConfig()
- decoder: ConvASRDecoderConfig = ConvASRDecoderConfig()
- decoding: CTCDecodingConfig = CTCDecodingConfig()
+ preprocessor: AudioToMelSpectrogramPreprocessorConfig = field(
+ default_factory=lambda: AudioToMelSpectrogramPreprocessorConfig()
+ )
+ spec_augment: Optional[SpectrogramAugmentationConfig] = field(
+ default_factory=lambda: SpectrogramAugmentationConfig()
+ )
+ encoder: ConvASREncoderConfig = field(default_factory=lambda: ConvASREncoderConfig())
+ decoder: ConvASRDecoderConfig = field(default_factory=lambda: ConvASRDecoderConfig())
+ decoding: CTCDecodingConfig = field(default_factory=lambda: CTCDecodingConfig())
@dataclass
class EncDecCTCModelConfig(model_cfg.NemoConfig):
- model: EncDecCTCConfig = EncDecCTCConfig()
+ model: EncDecCTCConfig = field(default_factory=lambda: EncDecCTCConfig())
@dataclass
diff --git a/nemo/collections/asr/models/configs/classification_models_config.py b/nemo/collections/asr/models/configs/classification_models_config.py
--- a/nemo/collections/asr/models/configs/classification_models_config.py
+++ b/nemo/collections/asr/models/configs/classification_models_config.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional
from omegaconf import MISSING
@@ -72,30 +72,40 @@ class EncDecClassificationConfig(model_cfg.ModelConfig):
timesteps: int = MISSING
# Dataset configs
- train_ds: EncDecClassificationDatasetConfig = EncDecClassificationDatasetConfig(
- manifest_filepath=None, shuffle=True, trim_silence=False
+ train_ds: EncDecClassificationDatasetConfig = field(
+ default_factory=lambda: EncDecClassificationDatasetConfig(
+ manifest_filepath=None, shuffle=True, trim_silence=False
+ )
)
- validation_ds: EncDecClassificationDatasetConfig = EncDecClassificationDatasetConfig(
- manifest_filepath=None, shuffle=False
+ validation_ds: EncDecClassificationDatasetConfig = field(
+ default_factory=lambda: EncDecClassificationDatasetConfig(manifest_filepath=None, shuffle=False)
)
- test_ds: EncDecClassificationDatasetConfig = EncDecClassificationDatasetConfig(
- manifest_filepath=None, shuffle=False
+ test_ds: EncDecClassificationDatasetConfig = field(
+ default_factory=lambda: EncDecClassificationDatasetConfig(manifest_filepath=None, shuffle=False)
)
# Optimizer / Scheduler config
- optim: Optional[model_cfg.OptimConfig] = model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
+ optim: Optional[model_cfg.OptimConfig] = field(
+ default_factory=lambda: model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
+ )
# Model component configs
- preprocessor: AudioToMFCCPreprocessorConfig = AudioToMFCCPreprocessorConfig()
- spec_augment: Optional[SpectrogramAugmentationConfig] = SpectrogramAugmentationConfig()
- crop_or_pad_augment: Optional[CropOrPadSpectrogramAugmentationConfig] = CropOrPadSpectrogramAugmentationConfig(
- audio_length=timesteps
+ preprocessor: AudioToMFCCPreprocessorConfig = field(default_factory=lambda: AudioToMFCCPreprocessorConfig())
+ spec_augment: Optional[SpectrogramAugmentationConfig] = field(
+ default_factory=lambda: SpectrogramAugmentationConfig()
+ )
+ crop_or_pad_augment: Optional[CropOrPadSpectrogramAugmentationConfig] = field(
+ default_factory=lambda: CropOrPadSpectrogramAugmentationConfig(audio_length=-1)
)
- encoder: ConvASREncoderConfig = ConvASREncoderConfig()
- decoder: ConvASRDecoderClassificationConfig = ConvASRDecoderClassificationConfig()
+ encoder: ConvASREncoderConfig = field(default_factory=lambda: ConvASREncoderConfig())
+ decoder: ConvASRDecoderClassificationConfig = field(default_factory=lambda: ConvASRDecoderClassificationConfig())
+
+ def __post_init__(self):
+ if self.crop_or_pad_augment is not None:
+ self.crop_or_pad_augment.audio_length = self.timesteps
@dataclass
class EncDecClassificationModelConfig(model_cfg.NemoConfig):
- model: EncDecClassificationConfig = EncDecClassificationConfig()
+ model: EncDecClassificationConfig = field(default_factory=lambda: EncDecClassificationConfig())
diff --git a/nemo/collections/asr/models/configs/diarizer_config.py b/nemo/collections/asr/models/configs/diarizer_config.py
--- a/nemo/collections/asr/models/configs/diarizer_config.py
+++ b/nemo/collections/asr/models/configs/diarizer_config.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import asdict, dataclass
+from dataclasses import asdict, dataclass, field
from typing import Any, Dict, Optional, Tuple, Union
@@ -78,9 +78,9 @@ class ASRDiarizerParams(DiarizerComponentConfig):
@dataclass
class ASRDiarizerConfig(DiarizerComponentConfig):
model_path: Optional[str] = "stt_en_conformer_ctc_large"
- parameters: ASRDiarizerParams = ASRDiarizerParams()
- ctc_decoder_parameters: ASRDiarizerCTCDecoderParams = ASRDiarizerCTCDecoderParams()
- realigning_lm_parameters: ASRRealigningLMParams = ASRRealigningLMParams()
+ parameters: ASRDiarizerParams = field(default_factory=lambda: ASRDiarizerParams())
+ ctc_decoder_parameters: ASRDiarizerCTCDecoderParams = field(default_factory=lambda: ASRDiarizerCTCDecoderParams())
+ realigning_lm_parameters: ASRRealigningLMParams = field(default_factory=lambda: ASRRealigningLMParams())
@dataclass
@@ -102,7 +102,7 @@ class VADParams(DiarizerComponentConfig):
class VADConfig(DiarizerComponentConfig):
model_path: str = "vad_multilingual_marblenet" # .nemo local model path or pretrained VAD model name
external_vad_manifest: Optional[str] = None
- parameters: VADParams = VADParams()
+ parameters: VADParams = field(default_factory=lambda: VADParams())
@dataclass
@@ -121,7 +121,7 @@ class SpeakerEmbeddingsParams(DiarizerComponentConfig):
class SpeakerEmbeddingsConfig(DiarizerComponentConfig):
# .nemo local model path or pretrained model name (titanet_large, ecapa_tdnn or speakerverification_speakernet)
model_path: Optional[str] = None
- parameters: SpeakerEmbeddingsParams = SpeakerEmbeddingsParams()
+ parameters: SpeakerEmbeddingsParams = field(default_factory=lambda: SpeakerEmbeddingsParams())
@dataclass
@@ -142,7 +142,7 @@ class ClusteringParams(DiarizerComponentConfig):
@dataclass
class ClusteringConfig(DiarizerComponentConfig):
- parameters: ClusteringParams = ClusteringParams()
+ parameters: ClusteringParams = field(default_factory=lambda: ClusteringParams())
@dataclass
@@ -166,7 +166,7 @@ class MSDDParams(DiarizerComponentConfig):
@dataclass
class MSDDConfig(DiarizerComponentConfig):
model_path: Optional[str] = "diar_msdd_telephonic"
- parameters: MSDDParams = MSDDParams()
+ parameters: MSDDParams = field(default_factory=lambda: MSDDParams())
@dataclass
@@ -176,16 +176,16 @@ class DiarizerConfig(DiarizerComponentConfig):
oracle_vad: bool = False # If True, uses RTTM files provided in the manifest file to get VAD timestamps
collar: float = 0.25 # Collar value for scoring
ignore_overlap: bool = True # Consider or ignore overlap segments while scoring
- vad: VADConfig = VADConfig()
- speaker_embeddings: SpeakerEmbeddingsConfig = SpeakerEmbeddingsConfig()
- clustering: ClusteringConfig = ClusteringConfig()
- msdd_model: MSDDConfig = MSDDConfig()
- asr: ASRDiarizerConfig = ASRDiarizerConfig()
+ vad: VADConfig = field(default_factory=lambda: VADConfig())
+ speaker_embeddings: SpeakerEmbeddingsConfig = field(default_factory=lambda: SpeakerEmbeddingsConfig())
+ clustering: ClusteringConfig = field(default_factory=lambda: ClusteringConfig())
+ msdd_model: MSDDConfig = field(default_factory=lambda: MSDDConfig())
+ asr: ASRDiarizerConfig = field(default_factory=lambda: ASRDiarizerConfig())
@dataclass
class NeuralDiarizerInferenceConfig(DiarizerComponentConfig):
- diarizer: DiarizerConfig = DiarizerConfig()
+ diarizer: DiarizerConfig = field(default_factory=lambda: DiarizerConfig())
device: str = "cpu"
verbose: bool = False
batch_size: int = 64
diff --git a/nemo/collections/asr/models/configs/k2_sequence_models_config.py b/nemo/collections/asr/models/configs/k2_sequence_models_config.py
--- a/nemo/collections/asr/models/configs/k2_sequence_models_config.py
+++ b/nemo/collections/asr/models/configs/k2_sequence_models_config.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from nemo.collections.asr.models.configs.asr_models_config import EncDecCTCConfig
from nemo.collections.asr.parts.k2.classes import GraphModuleConfig as BackendConfig
@@ -26,14 +26,14 @@ class GraphModuleConfig:
split_batch_size: int = 0
dec_type: str = "topo"
transcribe_training: bool = True
- backend_cfg: BackendConfig = BackendConfig()
+ backend_cfg: BackendConfig = field(default_factory=lambda: BackendConfig())
@dataclass
class EncDecK2SeqConfig(EncDecCTCConfig):
- graph_module_cfg: GraphModuleConfig = GraphModuleConfig()
+ graph_module_cfg: GraphModuleConfig = field(default_factory=lambda: GraphModuleConfig())
@dataclass
class EncDecK2SeqModelConfig(NemoConfig):
- model: EncDecK2SeqConfig = EncDecK2SeqConfig()
+ model: EncDecK2SeqConfig = field(default_factory=lambda: EncDecK2SeqConfig())
diff --git a/nemo/collections/asr/models/configs/matchboxnet_config.py b/nemo/collections/asr/models/configs/matchboxnet_config.py
--- a/nemo/collections/asr/models/configs/matchboxnet_config.py
+++ b/nemo/collections/asr/models/configs/matchboxnet_config.py
@@ -107,30 +107,38 @@ class MatchboxNetModelConfig(clf_cfg.EncDecClassificationConfig):
labels: List[str] = MISSING
# Dataset configs
- train_ds: clf_cfg.EncDecClassificationDatasetConfig = clf_cfg.EncDecClassificationDatasetConfig(
- manifest_filepath=None, shuffle=True, trim_silence=False
+ train_ds: clf_cfg.EncDecClassificationDatasetConfig = field(
+ default_factory=lambda: clf_cfg.EncDecClassificationDatasetConfig(
+ manifest_filepath=None, shuffle=True, trim_silence=False
+ )
)
- validation_ds: clf_cfg.EncDecClassificationDatasetConfig = clf_cfg.EncDecClassificationDatasetConfig(
- manifest_filepath=None, shuffle=False
+ validation_ds: clf_cfg.EncDecClassificationDatasetConfig = field(
+ default_factory=lambda: clf_cfg.EncDecClassificationDatasetConfig(manifest_filepath=None, shuffle=False)
)
- test_ds: clf_cfg.EncDecClassificationDatasetConfig = clf_cfg.EncDecClassificationDatasetConfig(
- manifest_filepath=None, shuffle=False
+ test_ds: clf_cfg.EncDecClassificationDatasetConfig = field(
+ default_factory=lambda: clf_cfg.EncDecClassificationDatasetConfig(manifest_filepath=None, shuffle=False)
)
# Optimizer / Scheduler config
- optim: Optional[model_cfg.OptimConfig] = model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
+ optim: Optional[model_cfg.OptimConfig] = field(
+ default_factory=lambda: model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
+ )
# Model general component configs
- preprocessor: AudioToMFCCPreprocessorConfig = AudioToMFCCPreprocessorConfig(window_size=0.025)
- spec_augment: Optional[SpectrogramAugmentationConfig] = SpectrogramAugmentationConfig(
- freq_masks=2, time_masks=2, freq_width=15, time_width=25, rect_masks=5, rect_time=25, rect_freq=15
+ preprocessor: AudioToMFCCPreprocessorConfig = field(
+ default_factory=lambda: AudioToMFCCPreprocessorConfig(window_size=0.025)
+ )
+ spec_augment: Optional[SpectrogramAugmentationConfig] = field(
+ default_factory=lambda: SpectrogramAugmentationConfig(
+ freq_masks=2, time_masks=2, freq_width=15, time_width=25, rect_masks=5, rect_time=25, rect_freq=15
+ )
)
- crop_or_pad_augment: Optional[CropOrPadSpectrogramAugmentationConfig] = CropOrPadSpectrogramAugmentationConfig(
- audio_length=128
+ crop_or_pad_augment: Optional[CropOrPadSpectrogramAugmentationConfig] = field(
+ default_factory=lambda: CropOrPadSpectrogramAugmentationConfig(audio_length=128)
)
- encoder: ConvASREncoderConfig = ConvASREncoderConfig(activation="relu")
- decoder: ConvASRDecoderClassificationConfig = ConvASRDecoderClassificationConfig()
+ encoder: ConvASREncoderConfig = field(default_factory=lambda: ConvASREncoderConfig(activation="relu"))
+ decoder: ConvASRDecoderClassificationConfig = field(default_factory=lambda: ConvASRDecoderClassificationConfig())
@dataclass
diff --git a/nemo/collections/asr/models/configs/quartznet_config.py b/nemo/collections/asr/models/configs/quartznet_config.py
--- a/nemo/collections/asr/models/configs/quartznet_config.py
+++ b/nemo/collections/asr/models/configs/quartznet_config.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import Any, Callable, List, Optional
from omegaconf import MISSING
@@ -174,20 +174,30 @@ class JasperModelConfig(ctc_cfg.EncDecCTCConfig):
labels: List[str] = MISSING
# Dataset configs
- train_ds: ctc_cfg.ASRDatasetConfig = ctc_cfg.ASRDatasetConfig(
- manifest_filepath=None, shuffle=True, trim_silence=True
+ train_ds: ctc_cfg.ASRDatasetConfig = field(
+ default_factory=lambda: ctc_cfg.ASRDatasetConfig(manifest_filepath=None, shuffle=True, trim_silence=True)
+ )
+ validation_ds: ctc_cfg.ASRDatasetConfig = field(
+ default_factory=lambda: ctc_cfg.ASRDatasetConfig(manifest_filepath=None, shuffle=False)
+ )
+ test_ds: ctc_cfg.ASRDatasetConfig = field(
+ default_factory=lambda: ctc_cfg.ASRDatasetConfig(manifest_filepath=None, shuffle=False)
)
- validation_ds: ctc_cfg.ASRDatasetConfig = ctc_cfg.ASRDatasetConfig(manifest_filepath=None, shuffle=False)
- test_ds: ctc_cfg.ASRDatasetConfig = ctc_cfg.ASRDatasetConfig(manifest_filepath=None, shuffle=False)
# Optimizer / Scheduler config
- optim: Optional[model_cfg.OptimConfig] = model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
+ optim: Optional[model_cfg.OptimConfig] = field(
+ default_factory=lambda: model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
+ )
# Model general component configs
- preprocessor: AudioToMelSpectrogramPreprocessorConfig = AudioToMelSpectrogramPreprocessorConfig()
- spec_augment: Optional[SpectrogramAugmentationConfig] = SpectrogramAugmentationConfig()
- encoder: ConvASREncoderConfig = ConvASREncoderConfig(activation="relu")
- decoder: ConvASRDecoderConfig = ConvASRDecoderConfig()
+ preprocessor: AudioToMelSpectrogramPreprocessorConfig = field(
+ default_factory=lambda: AudioToMelSpectrogramPreprocessorConfig()
+ )
+ spec_augment: Optional[SpectrogramAugmentationConfig] = field(
+ default_factory=lambda: SpectrogramAugmentationConfig()
+ )
+ encoder: ConvASREncoderConfig = field(default_factory=lambda: ConvASREncoderConfig(activation="relu"))
+ decoder: ConvASRDecoderConfig = field(default_factory=lambda: ConvASRDecoderConfig())
@dataclass
diff --git a/nemo/collections/asr/modules/audio_preprocessing.py b/nemo/collections/asr/modules/audio_preprocessing.py
--- a/nemo/collections/asr/modules/audio_preprocessing.py
+++ b/nemo/collections/asr/modules/audio_preprocessing.py
@@ -634,6 +634,12 @@ def __init__(self, audio_length):
super(CropOrPadSpectrogramAugmentation, self).__init__()
self.audio_length = audio_length
+ if self.audio_length < 0:
+ raise ValueError(
+ 'audio_length must be non-negative. If using a dataclass with OmegaConf, '
+ 'please call OmegaConf.to_object(cfg) to call appropriate __post_init__ methods.'
+ )
+
@typecheck()
@torch.no_grad()
def forward(self, input_signal, length):
diff --git a/nemo/collections/asr/parts/k2/classes.py b/nemo/collections/asr/parts/k2/classes.py
--- a/nemo/collections/asr/parts/k2/classes.py
+++ b/nemo/collections/asr/parts/k2/classes.py
@@ -13,7 +13,7 @@
# limitations under the License.
from abc import ABC
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import Any, Optional, Tuple
import torch
@@ -43,7 +43,7 @@ class GraphModuleConfig:
topo_with_self_loops: bool = True
token_lm: Optional[Any] = None
intersect_pruned: bool = False
- intersect_conf: GraphIntersectDenseConfig = GraphIntersectDenseConfig()
+ intersect_conf: GraphIntersectDenseConfig = field(default_factory=lambda: GraphIntersectDenseConfig())
boost_coeff: float = 0.0
predictor_window_size: int = 0
predictor_step_size: int = 1
diff --git a/nemo/collections/asr/parts/submodules/adapters/multi_head_attention_adapter_module.py b/nemo/collections/asr/parts/submodules/adapters/multi_head_attention_adapter_module.py
--- a/nemo/collections/asr/parts/submodules/adapters/multi_head_attention_adapter_module.py
+++ b/nemo/collections/asr/parts/submodules/adapters/multi_head_attention_adapter_module.py
@@ -13,7 +13,7 @@
# limitations under the License.
import math
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import Any, Optional
import torch
@@ -183,7 +183,7 @@ class MultiHeadAttentionAdapterConfig:
n_feat: int
dropout_rate: float = 0.0
proj_dim: Optional[int] = None
- adapter_strategy: Optional[Any] = MHAResidualAddAdapterStrategyConfig()
+ adapter_strategy: Optional[Any] = field(default_factory=lambda: MHAResidualAddAdapterStrategyConfig())
_target_: str = "{0}.{1}".format(MultiHeadAttentionAdapter.__module__, MultiHeadAttentionAdapter.__name__)
@@ -287,7 +287,7 @@ class RelPositionMultiHeadAttentionAdapterConfig:
n_feat: int
dropout_rate: float = 0.0
proj_dim: Optional[int] = None
- adapter_strategy: Optional[Any] = MHAResidualAddAdapterStrategyConfig()
+ adapter_strategy: Optional[Any] = field(default_factory=lambda: MHAResidualAddAdapterStrategyConfig())
_target_: str = "{0}.{1}".format(
RelPositionMultiHeadAttentionAdapter.__module__, RelPositionMultiHeadAttentionAdapter.__name__
)
@@ -336,7 +336,9 @@ class PositionalEncodingAdapterConfig:
d_model: int
max_len: int = 5000
xscale: float = 1.0
- adapter_strategy: Optional[Any] = adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
+ adapter_strategy: Optional[Any] = field(
+ default_factory=lambda: adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
+ )
_target_: str = "{0}.{1}".format(PositionalEncodingAdapter.__module__, PositionalEncodingAdapter.__name__)
@@ -378,5 +380,7 @@ class RelPositionalEncodingAdapterConfig:
d_model: int
max_len: int = 5000
xscale: float = 1.0
- adapter_strategy: Optional[Any] = adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
+ adapter_strategy: Optional[Any] = field(
+ default_factory=lambda: adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
+ )
_target_: str = "{0}.{1}".format(RelPositionalEncodingAdapter.__module__, RelPositionalEncodingAdapter.__name__)
diff --git a/nemo/collections/asr/parts/submodules/ctc_beam_decoding.py b/nemo/collections/asr/parts/submodules/ctc_beam_decoding.py
--- a/nemo/collections/asr/parts/submodules/ctc_beam_decoding.py
+++ b/nemo/collections/asr/parts/submodules/ctc_beam_decoding.py
@@ -14,7 +14,7 @@
import math
import os
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import List, Optional, Tuple, Union
import torch
@@ -602,5 +602,5 @@ class BeamCTCInferConfig:
beam_beta: float = 0.0
kenlm_path: Optional[str] = None
- flashlight_cfg: Optional[FlashlightConfig] = FlashlightConfig()
- pyctcdecode_cfg: Optional[PyCTCDecodeConfig] = PyCTCDecodeConfig()
+ flashlight_cfg: Optional[FlashlightConfig] = field(default_factory=lambda: FlashlightConfig())
+ pyctcdecode_cfg: Optional[PyCTCDecodeConfig] = field(default_factory=lambda: PyCTCDecodeConfig())
diff --git a/nemo/collections/asr/parts/submodules/ctc_greedy_decoding.py b/nemo/collections/asr/parts/submodules/ctc_greedy_decoding.py
--- a/nemo/collections/asr/parts/submodules/ctc_greedy_decoding.py
+++ b/nemo/collections/asr/parts/submodules/ctc_greedy_decoding.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import List, Optional
import torch
@@ -253,7 +253,9 @@ class GreedyCTCInferConfig:
preserve_alignments: bool = False
compute_timestamps: bool = False
preserve_frame_confidence: bool = False
- confidence_measure_cfg: Optional[ConfidenceMeasureConfig] = ConfidenceMeasureConfig()
+ confidence_measure_cfg: Optional[ConfidenceMeasureConfig] = field(
+ default_factory=lambda: ConfidenceMeasureConfig()
+ )
confidence_method_cfg: str = "DEPRECATED"
def __post_init__(self):
diff --git a/nemo/collections/asr/parts/submodules/rnnt_greedy_decoding.py b/nemo/collections/asr/parts/submodules/rnnt_greedy_decoding.py
--- a/nemo/collections/asr/parts/submodules/rnnt_greedy_decoding.py
+++ b/nemo/collections/asr/parts/submodules/rnnt_greedy_decoding.py
@@ -26,7 +26,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import List, Optional, Tuple, Union
import numpy as np
@@ -2185,7 +2185,9 @@ class GreedyRNNTInferConfig:
max_symbols_per_step: Optional[int] = 10
preserve_alignments: bool = False
preserve_frame_confidence: bool = False
- confidence_measure_cfg: Optional[ConfidenceMeasureConfig] = ConfidenceMeasureConfig()
+ confidence_measure_cfg: Optional[ConfidenceMeasureConfig] = field(
+ default_factory=lambda: ConfidenceMeasureConfig()
+ )
confidence_method_cfg: str = "DEPRECATED"
def __post_init__(self):
@@ -2217,7 +2219,9 @@ class GreedyBatchedRNNTInferConfig:
max_symbols_per_step: Optional[int] = 10
preserve_alignments: bool = False
preserve_frame_confidence: bool = False
- confidence_measure_cfg: Optional[ConfidenceMeasureConfig] = ConfidenceMeasureConfig()
+ confidence_measure_cfg: Optional[ConfidenceMeasureConfig] = field(
+ default_factory=lambda: ConfidenceMeasureConfig()
+ )
confidence_method_cfg: str = "DEPRECATED"
def __post_init__(self):
diff --git a/nemo/collections/asr/parts/utils/asr_confidence_utils.py b/nemo/collections/asr/parts/utils/asr_confidence_utils.py
--- a/nemo/collections/asr/parts/utils/asr_confidence_utils.py
+++ b/nemo/collections/asr/parts/utils/asr_confidence_utils.py
@@ -14,7 +14,7 @@
import math
from abc import ABC, abstractmethod
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from functools import partial
from typing import List, Optional
@@ -181,7 +181,7 @@ class ConfidenceConfig:
preserve_word_confidence: bool = False
exclude_blank: bool = True
aggregation: str = "min"
- measure_cfg: ConfidenceMeasureConfig = ConfidenceMeasureConfig()
+ measure_cfg: ConfidenceMeasureConfig = field(default_factory=lambda: ConfidenceMeasureConfig())
method_cfg: str = "DEPRECATED"
def __post_init__(self):
diff --git a/nemo/collections/common/parts/adapter_modules.py b/nemo/collections/common/parts/adapter_modules.py
--- a/nemo/collections/common/parts/adapter_modules.py
+++ b/nemo/collections/common/parts/adapter_modules.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass, is_dataclass
+from dataclasses import dataclass, field, is_dataclass
from typing import Any, Optional
from hydra.utils import instantiate
@@ -160,5 +160,7 @@ class LinearAdapterConfig:
activation: str = 'swish'
norm_position: str = 'pre'
dropout: float = 0.0
- adapter_strategy: Optional[Any] = adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
+ adapter_strategy: Optional[Any] = field(
+ default_factory=lambda: adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
+ )
_target_: str = "{0}.{1}".format(LinearAdapter.__module__, LinearAdapter.__name__)
diff --git a/nemo/collections/common/tokenizers/en_ja_tokenizers.py b/nemo/collections/common/tokenizers/en_ja_tokenizers.py
--- a/nemo/collections/common/tokenizers/en_ja_tokenizers.py
+++ b/nemo/collections/common/tokenizers/en_ja_tokenizers.py
@@ -14,11 +14,19 @@
import re
from typing import List
-import ipadic
-import MeCab
from pangu import spacing
from sacremoses import MosesDetokenizer, MosesPunctNormalizer, MosesTokenizer
+try:
+ import ipadic
+ import MeCab
+
+ HAVE_MECAB = True
+ HAVE_IPADIC = True
+except (ImportError, ModuleNotFoundError):
+ HAVE_MECAB = False
+ HAVE_IPADIC = False
+
class EnJaProcessor:
"""
@@ -67,6 +75,9 @@ class JaMecabProcessor:
"""
def __init__(self):
+ if not HAVE_MECAB or not HAVE_IPADIC:
+ raise ImportError("Please ensure that you have installed `MeCab` and `ipadic` to use JaMecabProcessor")
+
self.mecab_tokenizer = MeCab.Tagger(ipadic.MECAB_ARGS + " -Owakati")
def detokenize(self, text: List[str]) -> str:
diff --git a/nemo/collections/nlp/models/machine_translation/mt_enc_dec_config.py b/nemo/collections/nlp/models/machine_translation/mt_enc_dec_config.py
--- a/nemo/collections/nlp/models/machine_translation/mt_enc_dec_config.py
+++ b/nemo/collections/nlp/models/machine_translation/mt_enc_dec_config.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import Any, Optional, Tuple
from omegaconf.omegaconf import MISSING
@@ -46,7 +46,7 @@ class MTOptimConfig(OptimConfig):
lr: float = 1e-3
betas: Tuple[float, float] = (0.9, 0.98)
weight_decay: float = 0.0
- sched: Optional[MTSchedConfig] = MTSchedConfig()
+ sched: Optional[MTSchedConfig] = field(default_factory=lambda: MTSchedConfig())
@dataclass
@@ -74,70 +74,80 @@ class MTEncDecModelConfig(EncDecNLPModelConfig):
decoder_tokenizer: Any = MISSING
decoder: Any = MISSING
- head: TokenClassifierConfig = TokenClassifierConfig(log_softmax=True)
+ head: TokenClassifierConfig = field(default_factory=lambda: TokenClassifierConfig(log_softmax=True))
# dataset configurations
- train_ds: Optional[TranslationDataConfig] = TranslationDataConfig(
- src_file_name=MISSING,
- tgt_file_name=MISSING,
- tokens_in_batch=512,
- clean=True,
- shuffle=True,
- cache_ids=False,
- use_cache=False,
+ train_ds: Optional[TranslationDataConfig] = field(
+ default_factory=lambda: TranslationDataConfig(
+ src_file_name=MISSING,
+ tgt_file_name=MISSING,
+ tokens_in_batch=512,
+ clean=True,
+ shuffle=True,
+ cache_ids=False,
+ use_cache=False,
+ )
)
- validation_ds: Optional[TranslationDataConfig] = TranslationDataConfig(
- src_file_name=MISSING,
- tgt_file_name=MISSING,
- tokens_in_batch=512,
- clean=False,
- shuffle=False,
- cache_ids=False,
- use_cache=False,
+ validation_ds: Optional[TranslationDataConfig] = field(
+ default_factory=lambda: TranslationDataConfig(
+ src_file_name=MISSING,
+ tgt_file_name=MISSING,
+ tokens_in_batch=512,
+ clean=False,
+ shuffle=False,
+ cache_ids=False,
+ use_cache=False,
+ )
)
- test_ds: Optional[TranslationDataConfig] = TranslationDataConfig(
- src_file_name=MISSING,
- tgt_file_name=MISSING,
- tokens_in_batch=512,
- clean=False,
- shuffle=False,
- cache_ids=False,
- use_cache=False,
+ test_ds: Optional[TranslationDataConfig] = field(
+ default_factory=lambda: TranslationDataConfig(
+ src_file_name=MISSING,
+ tgt_file_name=MISSING,
+ tokens_in_batch=512,
+ clean=False,
+ shuffle=False,
+ cache_ids=False,
+ use_cache=False,
+ )
)
- optim: Optional[OptimConfig] = MTOptimConfig()
+ optim: Optional[OptimConfig] = field(default_factory=lambda: MTOptimConfig())
@dataclass
class AAYNBaseConfig(MTEncDecModelConfig):
# Attention is All You Need Base Configuration
- encoder_tokenizer: TokenizerConfig = TokenizerConfig(library='yttm')
- decoder_tokenizer: TokenizerConfig = TokenizerConfig(library='yttm')
-
- encoder: NeMoTransformerEncoderConfig = NeMoTransformerEncoderConfig(
- library='nemo',
- model_name=None,
- pretrained=False,
- hidden_size=512,
- inner_size=2048,
- num_layers=6,
- num_attention_heads=8,
- ffn_dropout=0.1,
- attn_score_dropout=0.1,
- attn_layer_dropout=0.1,
+ encoder_tokenizer: TokenizerConfig = field(default_factory=lambda: TokenizerConfig(library='yttm'))
+ decoder_tokenizer: TokenizerConfig = field(default_factory=lambda: TokenizerConfig(library='yttm'))
+
+ encoder: NeMoTransformerEncoderConfig = field(
+ default_factory=lambda: NeMoTransformerEncoderConfig(
+ library='nemo',
+ model_name=None,
+ pretrained=False,
+ hidden_size=512,
+ inner_size=2048,
+ num_layers=6,
+ num_attention_heads=8,
+ ffn_dropout=0.1,
+ attn_score_dropout=0.1,
+ attn_layer_dropout=0.1,
+ )
)
- decoder: NeMoTransformerConfig = NeMoTransformerConfig(
- library='nemo',
- model_name=None,
- pretrained=False,
- hidden_size=512,
- inner_size=2048,
- num_layers=6,
- num_attention_heads=8,
- ffn_dropout=0.1,
- attn_score_dropout=0.1,
- attn_layer_dropout=0.1,
+ decoder: NeMoTransformerConfig = field(
+ default_factory=lambda: NeMoTransformerConfig(
+ library='nemo',
+ model_name=None,
+ pretrained=False,
+ hidden_size=512,
+ inner_size=2048,
+ num_layers=6,
+ num_attention_heads=8,
+ ffn_dropout=0.1,
+ attn_score_dropout=0.1,
+ attn_layer_dropout=0.1,
+ )
)
@@ -150,32 +160,36 @@ class MTBottleneckModelConfig(AAYNBaseConfig):
recon_per_token: bool = True
log_timing: bool = True
- encoder: NeMoTransformerBottleneckEncoderConfig = NeMoTransformerBottleneckEncoderConfig(
- library='nemo',
- model_name=None,
- pretrained=False,
- hidden_size=512,
- inner_size=2048,
- num_layers=6,
- num_attention_heads=8,
- ffn_dropout=0.1,
- attn_score_dropout=0.1,
- attn_layer_dropout=0.1,
- arch='seq2seq',
- hidden_steps=32,
- hidden_blocks=1,
- hidden_init_method='params',
+ encoder: NeMoTransformerBottleneckEncoderConfig = field(
+ default_factory=lambda: NeMoTransformerBottleneckEncoderConfig(
+ library='nemo',
+ model_name=None,
+ pretrained=False,
+ hidden_size=512,
+ inner_size=2048,
+ num_layers=6,
+ num_attention_heads=8,
+ ffn_dropout=0.1,
+ attn_score_dropout=0.1,
+ attn_layer_dropout=0.1,
+ arch='seq2seq',
+ hidden_steps=32,
+ hidden_blocks=1,
+ hidden_init_method='params',
+ )
)
- decoder: NeMoTransformerBottleneckDecoderConfig = NeMoTransformerBottleneckDecoderConfig(
- library='nemo',
- model_name=None,
- pretrained=False,
- inner_size=2048,
- num_layers=6,
- num_attention_heads=8,
- ffn_dropout=0.1,
- attn_score_dropout=0.1,
- attn_layer_dropout=0.1,
- arch='seq2seq',
+ decoder: NeMoTransformerBottleneckDecoderConfig = field(
+ default_factory=lambda: NeMoTransformerBottleneckDecoderConfig(
+ library='nemo',
+ model_name=None,
+ pretrained=False,
+ inner_size=2048,
+ num_layers=6,
+ num_attention_heads=8,
+ ffn_dropout=0.1,
+ attn_score_dropout=0.1,
+ attn_layer_dropout=0.1,
+ arch='seq2seq',
+ )
)
diff --git a/nemo/collections/nlp/models/token_classification/punctuation_capitalization_config.py b/nemo/collections/nlp/models/token_classification/punctuation_capitalization_config.py
--- a/nemo/collections/nlp/models/token_classification/punctuation_capitalization_config.py
+++ b/nemo/collections/nlp/models/token_classification/punctuation_capitalization_config.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import Any, Dict, Optional
from omegaconf.omegaconf import MISSING, DictConfig, OmegaConf, open_dict
@@ -215,13 +215,15 @@ class PunctuationCapitalizationModelConfig:
This config is a part of :class:`~PunctuationCapitalizationConfig`.
"""
- class_labels: ClassLabelsConfig = ClassLabelsConfig()
+ class_labels: ClassLabelsConfig = field(default_factory=lambda: ClassLabelsConfig())
"""A mandatory parameter containing a dictionary with names of label id files used in .nemo checkpoints.
These file names can also be used for passing label vocabularies to the model. If you wish to use ``class_labels``
for passing vocabularies, please provide path to vocabulary files in
``model.common_dataset_parameters.label_vocab_dir`` parameter."""
- common_dataset_parameters: Optional[CommonDatasetParametersConfig] = CommonDatasetParametersConfig()
+ common_dataset_parameters: Optional[CommonDatasetParametersConfig] = field(
+ default_factory=lambda: CommonDatasetParametersConfig()
+ )
"""Label ids and loss mask information information."""
train_ds: Optional[PunctuationCapitalizationTrainDataConfig] = None
@@ -233,16 +235,16 @@ class PunctuationCapitalizationModelConfig:
test_ds: Optional[PunctuationCapitalizationEvalDataConfig] = None
"""A configuration for creating test datasets and data loaders."""
- punct_head: HeadConfig = HeadConfig()
+ punct_head: HeadConfig = field(default_factory=lambda: HeadConfig())
"""A configuration for creating punctuation MLP head that is applied to a language model outputs."""
- capit_head: HeadConfig = HeadConfig()
+ capit_head: HeadConfig = field(default_factory=lambda: HeadConfig())
"""A configuration for creating capitalization MLP head that is applied to a language model outputs."""
- tokenizer: Any = TokenizerConfig()
+ tokenizer: Any = field(default_factory=lambda: TokenizerConfig())
"""A configuration for source text tokenizer."""
- language_model: LanguageModelConfig = LanguageModelConfig()
+ language_model: LanguageModelConfig = field(default_factory=lambda: LanguageModelConfig())
"""A configuration of a BERT-like language model which serves as a model body."""
optim: Optional[Any] = None
@@ -311,22 +313,30 @@ class PunctuationCapitalizationConfig(NemoConfig):
do_testing: bool = False
"""Whether ot perform testing of the model."""
- model: PunctuationCapitalizationModelConfig = PunctuationCapitalizationModelConfig()
+ model: PunctuationCapitalizationModelConfig = field(default_factory=lambda: PunctuationCapitalizationModelConfig())
"""A configuration for the
:class:`~nemo.collections.nlp.models.token_classification.punctuation_capitalization_model.PunctuationCapitalizationModel`
model."""
- trainer: Optional[TrainerConfig] = TrainerConfig()
+ trainer: Optional[TrainerConfig] = field(default_factory=lambda: TrainerConfig())
"""Contains ``Trainer`` Lightning class constructor parameters."""
- exp_manager: Optional[ExpManagerConfig] = ExpManagerConfig(name=name, files_to_copy=[])
+ exp_manager: Optional[ExpManagerConfig] = field(
+ default_factory=lambda: ExpManagerConfig(name=None, files_to_copy=[])
+ )
"""A configuration with various NeMo training options such as output directories, resuming from checkpoint,
tensorboard and W&B logging, and so on. For possible options see :ref:`exp-manager-label`."""
+ def __post_init__(self):
+ if self.exp_manager is not None:
+ self.exp_manager.name = self.name
+
@dataclass
class PunctuationCapitalizationLexicalAudioConfig(PunctuationCapitalizationConfig):
- model: PunctuationCapitalizationLexicalAudioModelConfig = PunctuationCapitalizationLexicalAudioModelConfig()
+ model: PunctuationCapitalizationLexicalAudioModelConfig = field(
+ default_factory=lambda: PunctuationCapitalizationLexicalAudioModelConfig()
+ )
def is_legacy_model_config(model_cfg: DictConfig) -> bool:
diff --git a/nemo/collections/nlp/modules/common/megatron/megatron_encoders.py b/nemo/collections/nlp/modules/common/megatron/megatron_encoders.py
--- a/nemo/collections/nlp/modules/common/megatron/megatron_encoders.py
+++ b/nemo/collections/nlp/modules/common/megatron/megatron_encoders.py
@@ -13,7 +13,6 @@
# limitations under the License.
"""Transformer based language model."""
-from MeCab import Model
from nemo.collections.nlp.modules.common.megatron.megatron_perceiver_encoders import MegatronPerceiverEncoderModule
from nemo.collections.nlp.modules.common.megatron.megatron_transformer_encoder import MegatronTransformerEncoderModule
from nemo.collections.nlp.modules.common.megatron.retrieval_transformer import (
@@ -25,6 +24,13 @@
scaled_init_method_normal,
)
+try:
+ from MeCab import Model
+
+ HAVE_MECAB = True
+except (ImportError, ModuleNotFoundError):
+ HAVE_MECAB = False
+
try:
from apex.transformer.enums import AttnMaskType, ModelType
diff --git a/nemo/collections/tts/models/fastpitch.py b/nemo/collections/tts/models/fastpitch.py
--- a/nemo/collections/tts/models/fastpitch.py
+++ b/nemo/collections/tts/models/fastpitch.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import contextlib
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from pathlib import Path
from typing import List, Optional
@@ -70,12 +70,12 @@ class TextTokenizer:
apostrophe: bool = True
pad_with_space: bool = True
add_blank_at: bool = True
- g2p: G2PConfig = G2PConfig()
+ g2p: G2PConfig = field(default_factory=lambda: G2PConfig())
@dataclass
class TextTokenizerConfig:
- text_tokenizer: TextTokenizer = TextTokenizer()
+ text_tokenizer: TextTokenizer = field(default_factory=lambda: TextTokenizer())
class FastPitchModel(SpectrogramGenerator, Exportable, FastPitchAdapterModelMixin):
diff --git a/nemo/collections/tts/models/tacotron2.py b/nemo/collections/tts/models/tacotron2.py
--- a/nemo/collections/tts/models/tacotron2.py
+++ b/nemo/collections/tts/models/tacotron2.py
@@ -13,7 +13,7 @@
# limitations under the License.
import contextlib
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional
import torch
@@ -53,7 +53,7 @@ class Preprocessor:
@dataclass
class Tacotron2Config:
- preprocessor: Preprocessor = Preprocessor()
+ preprocessor: Preprocessor = field(default_factory=lambda: Preprocessor())
encoder: Dict[Any, Any] = MISSING
decoder: Dict[Any, Any] = MISSING
postnet: Dict[Any, Any] = MISSING
diff --git a/nemo/core/config/modelPT.py b/nemo/core/config/modelPT.py
--- a/nemo/core/config/modelPT.py
+++ b/nemo/core/config/modelPT.py
@@ -58,11 +58,13 @@ class HydraConfig:
class NemoConfig:
name: str = MISSING
model: ModelConfig = MISSING
- trainer: config.TrainerConfig = config.TrainerConfig(
- strategy="ddp", enable_checkpointing=False, logger=False, log_every_n_steps=1, accelerator='gpu'
+ trainer: config.TrainerConfig = field(
+ default_factory=lambda: config.TrainerConfig(
+ strategy="ddp", enable_checkpointing=False, logger=False, log_every_n_steps=1, accelerator='gpu'
+ )
)
- exp_manager: Optional[Any] = exp_manager.ExpManagerConfig()
- hydra: HydraConfig = HydraConfig()
+ exp_manager: Optional[Any] = field(default_factory=lambda: exp_manager.ExpManagerConfig())
+ hydra: HydraConfig = field(default_factory=lambda: HydraConfig())
class ModelConfigBuilder:
diff --git a/nemo/utils/exp_manager.py b/nemo/utils/exp_manager.py
--- a/nemo/utils/exp_manager.py
+++ b/nemo/utils/exp_manager.py
@@ -18,7 +18,7 @@
import sys
import time
import warnings
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from datetime import timedelta
from pathlib import Path
from shutil import copy, move
@@ -146,28 +146,30 @@ class ExpManagerConfig:
create_wandb_logger: Optional[bool] = False
wandb_logger_kwargs: Optional[Dict[Any, Any]] = None
create_mlflow_logger: Optional[bool] = False
- mlflow_logger_kwargs: Optional[MLFlowParams] = MLFlowParams()
+ mlflow_logger_kwargs: Optional[MLFlowParams] = field(default_factory=lambda: MLFlowParams())
create_dllogger_logger: Optional[bool] = False
- dllogger_logger_kwargs: Optional[DLLoggerParams] = DLLoggerParams()
+ dllogger_logger_kwargs: Optional[DLLoggerParams] = field(default_factory=lambda: DLLoggerParams())
create_clearml_logger: Optional[bool] = False
- clearml_logger_kwargs: Optional[ClearMLParams] = ClearMLParams()
+ clearml_logger_kwargs: Optional[ClearMLParams] = field(default_factory=lambda: ClearMLParams())
# Checkpointing parameters
create_checkpoint_callback: Optional[bool] = True
- checkpoint_callback_params: Optional[CallbackParams] = CallbackParams()
+ checkpoint_callback_params: Optional[CallbackParams] = field(default_factory=lambda: CallbackParams())
create_early_stopping_callback: Optional[bool] = False
- early_stopping_callback_params: Optional[EarlyStoppingParams] = EarlyStoppingParams()
+ early_stopping_callback_params: Optional[EarlyStoppingParams] = field(
+ default_factory=lambda: EarlyStoppingParams()
+ )
create_preemption_callback: Optional[bool] = True
# Additional exp_manager arguments
files_to_copy: Optional[List[str]] = None
# logs timing of train/val/test steps
log_step_timing: Optional[bool] = True
- step_timing_kwargs: Optional[StepTimingParams] = StepTimingParams()
+ step_timing_kwargs: Optional[StepTimingParams] = field(default_factory=lambda: StepTimingParams())
# Configures creation of log files for different ranks
log_local_rank_0_only: Optional[bool] = False
log_global_rank_0_only: Optional[bool] = False
# disable initial validation when resuming from a checkpoint saved during validation
disable_validation_on_resume: Optional[bool] = True
- ema: Optional[EMAParams] = EMAParams()
+ ema: Optional[EMAParams] = field(default_factory=lambda: EMAParams())
# Wall clock time limit
max_time_per_run: Optional[str] = None
# time to sleep non 0 ranks during initialization
diff --git a/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram.py b/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram.py
--- a/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram.py
+++ b/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram.py
@@ -112,14 +112,14 @@ class EvalBeamSearchNGramConfig:
beam_beta: List[float] = field(default_factory=lambda: [0.0]) # The beta parameter or list of the betas for the beam search decoding
decoding_strategy: str = "beam"
- decoding: ctc_beam_decoding.BeamCTCInferConfig = ctc_beam_decoding.BeamCTCInferConfig(beam_size=128)
+ decoding: ctc_beam_decoding.BeamCTCInferConfig = field(default_factory=lambda: ctc_beam_decoding.BeamCTCInferConfig(beam_size=128))
- text_processing: Optional[TextProcessingConfig] = TextProcessingConfig(
+ text_processing: Optional[TextProcessingConfig] = field(default_factory=lambda: TextProcessingConfig(
punctuation_marks = ".,?",
separate_punctuation = False,
do_lowercase = False,
rm_punctuation = False,
- )
+ ))
# fmt: on
diff --git a/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram_transducer.py b/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram_transducer.py
--- a/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram_transducer.py
+++ b/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram_transducer.py
@@ -115,7 +115,7 @@ class EvalBeamSearchNGramConfig:
hat_subtract_ilm: bool = False
hat_ilm_weight: List[float] = field(default_factory=lambda: [0.0])
- decoding: rnnt_beam_decoding.BeamRNNTInferConfig = rnnt_beam_decoding.BeamRNNTInferConfig(beam_size=128)
+ decoding: rnnt_beam_decoding.BeamRNNTInferConfig = field(default_factory=lambda: rnnt_beam_decoding.BeamRNNTInferConfig(beam_size=128))
# fmt: on
diff --git a/scripts/confidence_ensembles/build_ensemble.py b/scripts/confidence_ensembles/build_ensemble.py
--- a/scripts/confidence_ensembles/build_ensemble.py
+++ b/scripts/confidence_ensembles/build_ensemble.py
@@ -75,7 +75,7 @@
import sys
import tempfile
from copy import deepcopy
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from pathlib import Path
from typing import Dict, List, Optional, Tuple
@@ -209,19 +209,23 @@ class BuildEnsembleConfig:
random_seed: int = 0 # for reproducibility
# default confidence, can override
- confidence: ConfidenceConfig = ConfidenceConfig(
- # we keep frame confidences and apply aggregation manually to get full-utterance confidence
- preserve_frame_confidence=True,
- exclude_blank=True,
- aggregation="mean",
- measure_cfg=ConfidenceMeasureConfig(name="entropy", entropy_type="renyi", alpha=0.25, entropy_norm="lin",),
+ confidence: ConfidenceConfig = field(
+ default_factory=lambda: ConfidenceConfig(
+ # we keep frame confidences and apply aggregation manually to get full-utterance confidence
+ preserve_frame_confidence=True,
+ exclude_blank=True,
+ aggregation="mean",
+ measure_cfg=ConfidenceMeasureConfig(name="entropy", entropy_type="renyi", alpha=0.25, entropy_norm="lin",),
+ )
)
temperature: float = 1.0
# this is optional, but can be used to change any aspect of the transcription
# config, such as batch size or amp usage. Note that model, data and confidence
# will be overriden by this script
- transcription: transcribe_speech.TranscriptionConfig = transcribe_speech.TranscriptionConfig()
+ transcription: transcribe_speech.TranscriptionConfig = field(
+ default_factory=lambda: transcribe_speech.TranscriptionConfig()
+ )
# set to True to tune the confidence.
# requires dev manifests to be specified for each model
@@ -229,12 +233,14 @@ class BuildEnsembleConfig:
# used to specify what to tune over. By default runs tuning over some
# reasonalbe grid, so that it does not take forever.
# Can be changed as needed
- tune_confidence_config: TuneConfidenceConfig = TuneConfidenceConfig()
+ tune_confidence_config: TuneConfidenceConfig = field(default_factory=lambda: TuneConfidenceConfig())
# very fast to tune and can be important in case of imbalanced datasets
# will automatically set to False if dev data is not available
tune_logistic_regression: bool = True
- tune_logistic_regression_config: TuneLogisticRegressionConfig = TuneLogisticRegressionConfig()
+ tune_logistic_regression_config: TuneLogisticRegressionConfig = field(
+ default_factory=lambda: TuneLogisticRegressionConfig()
+ )
def __post_init__(self):
"""Checking that if any dev data is provided, all are provided.
diff --git a/scripts/speech_recognition/confidence/benchmark_asr_confidence.py b/scripts/speech_recognition/confidence/benchmark_asr_confidence.py
--- a/scripts/speech_recognition/confidence/benchmark_asr_confidence.py
+++ b/scripts/speech_recognition/confidence/benchmark_asr_confidence.py
@@ -14,7 +14,7 @@
import json
import os
-from dataclasses import dataclass, is_dataclass
+from dataclasses import dataclass, field, is_dataclass
from pathlib import Path
from typing import Optional
@@ -124,7 +124,9 @@ class ConfidenceBenchmarkingConfig:
# Confidence configs
target_level: str = "auto" # Choices: "word", "token", "auto" (for both word- and token-level confidence)
- confidence_cfg: ConfidenceConfig = ConfidenceConfig(preserve_word_confidence=True, preserve_token_confidence=True)
+ confidence_cfg: ConfidenceConfig = field(
+ default_factory=lambda: ConfidenceConfig(preserve_word_confidence=True, preserve_token_confidence=True)
+ )
grid_params: Optional[str] = None # a dictionary with lists of parameters to iteratively benchmark on
diff --git a/scripts/speech_recognition/convert_to_tarred_audio_dataset.py b/scripts/speech_recognition/convert_to_tarred_audio_dataset.py
--- a/scripts/speech_recognition/convert_to_tarred_audio_dataset.py
+++ b/scripts/speech_recognition/convert_to_tarred_audio_dataset.py
@@ -202,7 +202,7 @@ class ASRTarredDatasetMetadata:
num_samples_per_shard: Optional[int] = None
is_concatenated_manifest: bool = False
- dataset_config: Optional[ASRTarredDatasetConfig] = ASRTarredDatasetConfig()
+ dataset_config: Optional[ASRTarredDatasetConfig] = field(default_factory=lambda: ASRTarredDatasetConfig())
history: Optional[List[Any]] = field(default_factory=lambda: [])
def __post_init__(self):
diff --git a/tools/nemo_forced_aligner/align.py b/tools/nemo_forced_aligner/align.py
--- a/tools/nemo_forced_aligner/align.py
+++ b/tools/nemo_forced_aligner/align.py
@@ -149,8 +149,8 @@ class AlignmentConfig:
# Output file configs
save_output_file_formats: List[str] = field(default_factory=lambda: ["ctm", "ass"])
- ctm_file_config: CTMFileConfig = CTMFileConfig()
- ass_file_config: ASSFileConfig = ASSFileConfig()
+ ctm_file_config: CTMFileConfig = field(default_factory=lambda: CTMFileConfig())
+ ass_file_config: ASSFileConfig = field(default_factory=lambda: ASSFileConfig())
@hydra_runner(config_name="AlignmentConfig", schema=AlignmentConfig)
</patch>
|
diff --git a/tests/collections/asr/test_text_to_text_dataset.py b/tests/collections/asr/test_text_to_text_dataset.py
--- a/tests/collections/asr/test_text_to_text_dataset.py
+++ b/tests/collections/asr/test_text_to_text_dataset.py
@@ -15,7 +15,7 @@
import json
import multiprocessing
import os
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from pathlib import Path
import pytest
@@ -110,7 +110,7 @@ class TextTokenizerCfg:
apostrophe: bool = True
pad_with_space: bool = True
add_blank_at: bool = True
- g2p: G2PConfig = G2PConfig()
+ g2p: G2PConfig = field(default_factory=lambda: G2PConfig())
config = OmegaConf.create(OmegaConf.to_yaml(TextTokenizerCfg()))
return instantiate(config)
|
1.0
| |||
NVIDIA__NeMo-7616
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Ubuntu 22.04 Python 3.11 [asr]: multiple errors `dataclasses ValueError: mutable default * for field * is not allowed: use default_factory`
**Describe the bug**
After installing latest stable `1.19.1` from pipI, or the latest current commit with `[asr]` extras, I'm getting this error from `hydra-core==1.2.0` when trying to import `nemo.collections.asr`:
```
Traceback (most recent call last):
File "/home/alex/T7/src/speaker_verification/nemo_speaker/test.py", line 5, in <module>
import nemo.collections.asr as nemo_asr
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/__init__.py", line 15, in <module>
from nemo.collections.asr import data, losses, models, modules
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/losses/__init__.py", line 15, in <module>
from nemo.collections.asr.losses.angularloss import AngularSoftmaxLoss
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/losses/angularloss.py", line 18, in <module>
from nemo.core.classes import Loss, Typing, typecheck
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/core/__init__.py", line 16, in <module>
from nemo.core.classes import *
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/core/classes/__init__.py", line 16, in <module>
import hydra
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/__init__.py", line 5, in <module>
from hydra import utils
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/utils.py", line 8, in <module>
import hydra._internal.instantiate._instantiate2
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 12, in <module>
from hydra._internal.utils import _locate
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/_internal/utils.py", line 18, in <module>
from hydra.core.utils import get_valid_filename, validate_config_path
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/core/utils.py", line 20, in <module>
from hydra.core.hydra_config import HydraConfig
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/core/hydra_config.py", line 6, in <module>
from hydra.conf import HydraConf
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/conf/__init__.py", line 46, in <module>
class JobConf:
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/conf/__init__.py", line 75, in JobConf
@dataclass
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1230, in dataclass
return wrap(cls)
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1220, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 958, in _process_class
cls_fields.append(_get_field(cls, name, type, kw_only))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 815, in _get_field
raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'hydra.conf.JobConf.JobConfig.OverrideDirname'> for field override_dirname is not allowed: use default_factory
```
If I then manually upgrade `hydra-core` to the current latest (`1.3.2`), I get similar errors from `nemo-toolkit`:
```
Traceback (most recent call last):
File "/home/alex/T7/src/speaker_verification/nemo_speaker/test.py", line 5, in <module>
import nemo.collections.asr as nemo_asr
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/__init__.py", line 15, in <module>
from nemo.collections.asr import data, losses, models, modules
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/losses/__init__.py", line 16, in <module>
from nemo.collections.asr.losses.audio_losses import SDRLoss
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/losses/audio_losses.py", line 21, in <module>
from nemo.collections.asr.parts.preprocessing.features import make_seq_mask_like
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/__init__.py", line 16, in <module>
from nemo.collections.asr.parts.preprocessing.features import FeaturizerFactory, FilterbankFeatures, WaveformFeaturizer
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/features.py", line 44, in <module>
from nemo.collections.asr.parts.preprocessing.perturb import AudioAugmentor
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/perturb.py", line 50, in <module>
from nemo.collections.common.parts.preprocessing import collections, parsers
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/common/__init__.py", line 16, in <module>
from nemo.collections.common import data, losses, parts, tokenizers
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/common/parts/__init__.py", line 15, in <module>
from nemo.collections.common.parts.adapter_modules import LinearAdapter, LinearAdapterConfig
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/common/parts/adapter_modules.py", line 147, in <module>
@dataclass
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1230, in dataclass
return wrap(cls)
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1220, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 958, in _process_class
cls_fields.append(_get_field(cls, name, type, kw_only))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 815, in _get_field
raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'nemo.core.classes.mixins.adapter_mixin_strategies.ResidualAddAdapterStrategyConfig'> for field adapter_strategy is not allowed: use default_factory
```
It's easy to fix with a patch like this:
```
--- nemo/collections/common/parts/adapter_modules.py.orig 2023-08-04 13:55:53.464534800 +0200
+++ nemo/collections/common/parts/adapter_modules.py 2023-08-04 14:05:45.579537700 +0200
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass, is_dataclass
+from dataclasses import dataclass, is_dataclass, field
from typing import Any, Optional
from hydra.utils import instantiate
@@ -151,5 +151,5 @@ class LinearAdapterConfig:
activation: str = 'swish'
norm_position: str = 'pre'
dropout: float = 0.0
- adapter_strategy: Optional[Any] = adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
+ adapter_strategy: Optional[Any] = field(default_factory=adapter_mixin_strategies.ResidualAddAdapterStrategyConfig)
_target_: str = "{0}.{1}".format(LinearAdapter.__module__, LinearAdapter.__name__)
```
However then another error of the kind comes up:
```
Traceback (most recent call last):
File "/home/alex/T7/src/speaker_verification/nemo_speaker/test.py", line 5, in <module>
import nemo.collections.asr as nemo_asr
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/__init__.py", line 15, in <module>
from nemo.collections.asr import data, losses, models, modules
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/models/__init__.py", line 18, in <module>
from nemo.collections.asr.models.clustering_diarizer import ClusteringDiarizer
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/models/clustering_diarizer.py", line 29, in <module>
from nemo.collections.asr.metrics.der import score_labels
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/metrics/der.py", line 24, in <module>
from nemo.collections.asr.metrics.wer import word_error_rate
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/metrics/wer.py", line 27, in <module>
from nemo.collections.asr.parts.submodules import ctc_beam_decoding, ctc_greedy_decoding
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/parts/submodules/ctc_beam_decoding.py", line 593, in <module>
@dataclass
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1230, in dataclass
return wrap(cls)
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1220, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 958, in _process_class
cls_fields.append(_get_field(cls, name, type, kw_only))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 815, in _get_field
raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'nemo.collections.asr.parts.submodules.ctc_beam_decoding.FlashlightConfig'> for field flashlight_cfg is not allowed: use default_factory
```
This one can be fixed the same way, but all in all, such mutable-default dataclass fields appear to be common throughout the code base.
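To gauge how widespread the pattern is before patching file by file, a rough AST-based scan can enumerate candidate sites up front. This is a heuristic sketch (the function names and the `field` skip are my own, not NeMo tooling): it flags annotated fields of `@dataclass` classes whose default is a bare constructor call, while ignoring `field(...)` defaults, which are already safe:

```python
import ast
from pathlib import Path

def _is_dataclass_decorator(dec: ast.expr) -> bool:
    # Matches @dataclass, @dataclasses.dataclass and @dataclass(frozen=True)
    if isinstance(dec, ast.Call):
        dec = dec.func
    if isinstance(dec, ast.Attribute):
        return dec.attr == "dataclass"
    return isinstance(dec, ast.Name) and dec.id == "dataclass"

def find_suspect_defaults(path: Path):
    """Yield (lineno, source) for annotated fields of @dataclass classes whose
    default is a bare constructor call like ``Foo()`` -- the pattern that
    Python 3.11 rejects when ``Foo`` instances are unhashable."""
    src = path.read_text()
    lines = src.splitlines()
    for node in ast.walk(ast.parse(src, filename=str(path))):
        if not (isinstance(node, ast.ClassDef)
                and any(map(_is_dataclass_decorator, node.decorator_list))):
            continue
        for stmt in node.body:
            if isinstance(stmt, ast.AnnAssign) and isinstance(stmt.value, ast.Call):
                func = stmt.value.func
                name = func.id if isinstance(func, ast.Name) else getattr(func, "attr", None)
                if name != "field":  # field(default_factory=...) is already safe
                    yield stmt.lineno, lines[stmt.lineno - 1].strip()
```

Pointed at `nemo/collections/common/parts/adapter_modules.py`, it should flag the `adapter_strategy` line patched above, without waiting for each import-time `ValueError`.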
Looks like NeMo isn't Python 3.11 ready, at least the `asr` collection.
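For reference, the root cause is a behavior change in Python 3.11: `dataclasses` now rejects *any* unhashable class-level default (3.10 and earlier only checked for `list`, `dict`, and `set`), and a plain `@dataclass` instance is unhashable because `eq=True` sets `__hash__` to `None`. A minimal sketch with hypothetical `Inner`/`Outer` classes, showing the failing pattern and the portable `default_factory` fix:

```python
from dataclasses import dataclass, field

@dataclass
class Inner:
    x: int = 0

# eq=True (the dataclass default) sets __hash__ to None, so Inner
# instances are unhashable:
assert Inner.__hash__ is None

# Python 3.11 rejects any unhashable class-level default, so a field like
#     inner: Inner = Inner()
# now raises ValueError at class-definition time (earlier versions only
# rejected list/dict/set defaults). The portable fix is default_factory:
@dataclass
class Outer:
    inner: Inner = field(default_factory=Inner)

assert Outer().inner == Inner(x=0)          # fresh Inner per instance
assert Outer().inner is not Outer().inner   # no shared mutable default
```

`default_factory` works on every supported Python version, so fixing these fields does not break 3.10 users.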
**Steps/Code to reproduce bug**
With Python 3.11
1. Install `nemo-toolkit[asr]`, either 1.19.1 or the current HEAD from git (b498d438fc4c35ebf364a9a1c5cd3e29a2c0fe50)
2. Run `import nemo.collections.asr`
**Expected behavior**
Import `nemo.collections.asr` without errors
**Environment overview (please complete the following information)**
- Environment location: Bare-metal
- Method of NeMo install: `pip install nemo-toolkit==1.19.1` or from source b498d438fc4c35ebf364a9a1c5cd3e29a2c0fe50
**Environment details**
- Ubuntu 22.04
- PyTorch version 2.0.1
- Python version 3.11.4
</issue>
<code>
[start of README.rst]
1
2 |status| |documentation| |codeql| |license| |pypi| |pyversion| |downloads| |black|
3
4 .. |status| image:: http://www.repostatus.org/badges/latest/active.svg
5 :target: http://www.repostatus.org/#active
6 :alt: Project Status: Active – The project has reached a stable, usable state and is being actively developed.
7
8 .. |documentation| image:: https://readthedocs.com/projects/nvidia-nemo/badge/?version=main
9 :alt: Documentation
10 :target: https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/
11
12 .. |license| image:: https://img.shields.io/badge/License-Apache%202.0-brightgreen.svg
13 :target: https://github.com/NVIDIA/NeMo/blob/master/LICENSE
14 :alt: NeMo core license and license for collections in this repo
15
16 .. |pypi| image:: https://badge.fury.io/py/nemo-toolkit.svg
17 :target: https://badge.fury.io/py/nemo-toolkit
18 :alt: Release version
19
20 .. |pyversion| image:: https://img.shields.io/pypi/pyversions/nemo-toolkit.svg
21 :target: https://badge.fury.io/py/nemo-toolkit
22 :alt: Python version
23
24 .. |downloads| image:: https://static.pepy.tech/personalized-badge/nemo-toolkit?period=total&units=international_system&left_color=grey&right_color=brightgreen&left_text=downloads
25 :target: https://pepy.tech/project/nemo-toolkit
26 :alt: PyPi total downloads
27
28 .. |codeql| image:: https://github.com/nvidia/nemo/actions/workflows/codeql.yml/badge.svg?branch=main&event=push
29 :target: https://github.com/nvidia/nemo/actions/workflows/codeql.yml
30 :alt: CodeQL
31
32 .. |black| image:: https://img.shields.io/badge/code%20style-black-000000.svg
33 :target: https://github.com/psf/black
34 :alt: Code style: black
35
36 .. _main-readme:
37
38 **NVIDIA NeMo**
39 ===============
40
41 Introduction
42 ------------
43
44 NVIDIA NeMo is a conversational AI toolkit built for researchers working on automatic speech recognition (ASR),
45 text-to-speech synthesis (TTS), large language models (LLMs), and
46 natural language processing (NLP).
47 The primary objective of NeMo is to help researchers from industry and academia to reuse prior work (code and pretrained models)
48 and make it easier to create new `conversational AI models <https://developer.nvidia.com/conversational-ai#started>`_.
49
50 All NeMo models are trained with `Lightning <https://github.com/Lightning-AI/lightning>`_ and
51 training is automatically scalable to 1000s of GPUs.
52 Additionally, NeMo Megatron LLM models can be trained up to 1 trillion parameters using tensor and pipeline model parallelism.
53 NeMo models can be optimized for inference and deployed for production use-cases with `NVIDIA Riva <https://developer.nvidia.com/riva>`_.
54
55 Getting started with NeMo is simple.
56 State of the Art pretrained NeMo models are freely available on `HuggingFace Hub <https://huggingface.co/models?library=nemo&sort=downloads&search=nvidia>`_ and
57 `NVIDIA NGC <https://catalog.ngc.nvidia.com/models?query=nemo&orderBy=weightPopularDESC>`_.
58 These models can be used to transcribe audio, synthesize speech, or translate text in just a few lines of code.
59
60 We have extensive `tutorials <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/starthere/tutorials.html>`_ that
61 can all be run on `Google Colab <https://colab.research.google.com>`_.
62
63 For advanced users that want to train NeMo models from scratch or finetune existing NeMo models
64 we have a full suite of `example scripts <https://github.com/NVIDIA/NeMo/tree/main/examples>`_ that support multi-GPU/multi-node training.
65
66 For scaling NeMo LLM training on Slurm clusters or public clouds, please see the `NVIDIA NeMo Megatron Launcher <https://github.com/NVIDIA/NeMo-Megatron-Launcher>`_.
67 The NM launcher has extensive recipes, scripts, utilities, and documentation for training NeMo LLMs and also has an `Autoconfigurator <https://github.com/NVIDIA/NeMo-Megatron-Launcher#53-using-autoconfigurator-to-find-the-optimal-configuration>`_
68 which can be used to find the optimal model parallel configuration for training on a specific cluster.
69
70 Also see the two introductory videos below for a high level overview of NeMo.
71
72 * Developing State-Of-The-Art Conversational AI Models in Three Lines of Code.
73 * NVIDIA NeMo: Toolkit for Conversational AI at PyData Yerevan 2022.
74
75 |three_lines| |pydata|
76
77 .. |pydata| image:: https://img.youtube.com/vi/J-P6Sczmas8/maxres3.jpg
78 :target: https://www.youtube.com/embed/J-P6Sczmas8?mute=0&start=14&autoplay=0
79 :width: 600
80 :alt: Develop Conversational AI Models in 3 Lines
81
82 .. |three_lines| image:: https://img.youtube.com/vi/wBgpMf_KQVw/maxresdefault.jpg
83 :target: https://www.youtube.com/embed/wBgpMf_KQVw?mute=0&start=0&autoplay=0
84 :width: 600
85 :alt: Introduction at PyData@Yerevan 2022
86
87 Key Features
88 ------------
89
90 * Speech processing
91 * `HuggingFace Space for Audio Transcription (File, Microphone and YouTube) <https://huggingface.co/spaces/smajumdar/nemo_multilingual_language_id>`_
92 * `Automatic Speech Recognition (ASR) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/intro.html>`_
93 * Supported ASR models: `<https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/asr/models.html>`_
94 * Jasper, QuartzNet, CitriNet, ContextNet
95 * Conformer-CTC, Conformer-Transducer, FastConformer-CTC, FastConformer-Transducer
96 * Squeezeformer-CTC and Squeezeformer-Transducer
97 * LSTM-Transducer (RNNT) and LSTM-CTC
98 * Supports the following decoders/losses:
99 * CTC
100 * Transducer/RNNT
101 * Hybrid Transducer/CTC
102 * NeMo Original `Multi-blank Transducers <https://arxiv.org/abs/2211.03541>`_ and `Token-and-Duration Transducers (TDT) <https://arxiv.org/abs/2304.06795>`_
103 * Streaming/Buffered ASR (CTC/Transducer) - `Chunked Inference Examples <https://github.com/NVIDIA/NeMo/tree/stable/examples/asr/asr_chunked_inference>`_
104 * Cache-aware Streaming Conformer with multiple lookaheads - `<https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/asr/models.html#cache-aware-streaming-conformer>`_
105 * Beam Search decoding
106 * `Language Modelling for ASR (CTC and RNNT) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/asr_language_modeling.html>`_: N-gram LM in fusion with Beam Search decoding, Neural Rescoring with Transformer
107 * `Support of long audios for Conformer with memory efficient local attention <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/results.html#inference-on-long-audio>`_
108 * `Speech Classification, Speech Command Recognition and Language Identification <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/speech_classification/intro.html>`_: MatchboxNet (Command Recognition), AmberNet (LangID)
109 * `Voice activity Detection (VAD) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/asr/speech_classification/models.html#marblenet-vad>`_: MarbleNet
110 * ASR with VAD Inference - `Example <https://github.com/NVIDIA/NeMo/tree/stable/examples/asr/asr_vad>`_
111 * `Speaker Recognition <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/speaker_recognition/intro.html>`_: TitaNet, ECAPA_TDNN, SpeakerNet
112 * `Speaker Diarization <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/speaker_diarization/intro.html>`_
113 * Clustering Diarizer: TitaNet, ECAPA_TDNN, SpeakerNet
114 * Neural Diarizer: MSDD (Multi-scale Diarization Decoder)
115 * `Speech Intent Detection and Slot Filling <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/speech_intent_slot/intro.html>`_: Conformer-Transformer
116 * `Pretrained models on different languages. <https://ngc.nvidia.com/catalog/collections/nvidia:nemo_asr>`_: English, Spanish, German, Russian, Chinese, French, Italian, Polish, ...
117 * `NGC collection of pre-trained speech processing models. <https://ngc.nvidia.com/catalog/collections/nvidia:nemo_asr>`_
118 * Natural Language Processing
119 * `NeMo Megatron pre-training of Large Language Models <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/nlp/nemo_megatron/intro.html>`_
120 * `Neural Machine Translation (NMT) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/machine_translation/machine_translation.html>`_
121 * `Punctuation and Capitalization <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/punctuation_and_capitalization.html>`_
122 * `Token classification (named entity recognition) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/token_classification.html>`_
123 * `Text classification <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/text_classification.html>`_
124 * `Joint Intent and Slot Classification <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/joint_intent_slot.html>`_
125 * `Question answering <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/question_answering.html>`_
126 * `GLUE benchmark <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/glue_benchmark.html>`_
127 * `Information retrieval <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/information_retrieval.html>`_
128 * `Entity Linking <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/entity_linking.html>`_
129 * `Dialogue State Tracking <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/sgd_qa.html>`_
130 * `Prompt Learning <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/nemo_megatron/prompt_learning.html>`_
131 * `NGC collection of pre-trained NLP models. <https://ngc.nvidia.com/catalog/collections/nvidia:nemo_nlp>`_
132 * `Synthetic Tabular Data Generation <https://developer.nvidia.com/blog/generating-synthetic-data-with-transformers-a-solution-for-enterprise-data-challenges/>`_
133 * Text-to-Speech Synthesis (TTS):
134 * `Documentation <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/tts/intro.html#>`_
135 * Mel-Spectrogram generators: FastPitch, SSL FastPitch, Mixer-TTS/Mixer-TTS-X, RAD-TTS, Tacotron2
136 * Vocoders: HiFiGAN, UnivNet, WaveGlow
137 * End-to-End Models: VITS
138 * `Pre-trained Model Checkpoints in NVIDIA GPU Cloud (NGC) <https://ngc.nvidia.com/catalog/collections/nvidia:nemo_tts>`_
139 * `Tools <https://github.com/NVIDIA/NeMo/tree/stable/tools>`_
140 * `Text Processing (text normalization and inverse text normalization) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/text_normalization/intro.html>`_
141 * `NeMo Forced Aligner <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/tools/nemo_forced_aligner.html>`_
142 * `CTC-Segmentation tool <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/tools/ctc_segmentation.html>`_
143 * `Speech Data Explorer <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/tools/speech_data_explorer.html>`_: a dash-based tool for interactive exploration of ASR/TTS datasets
144 * `Speech Data Processor <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/tools/speech_data_processor.html>`_
145
146
147 Built for speed, NeMo can utilize NVIDIA's Tensor Cores and scale out training to multiple GPUs and multiple nodes.
148
149 Requirements
150 ------------
151
152 1) Python 3.10 or above
153 2) Pytorch 1.13.1 or above
154 3) NVIDIA GPU for training
155
156 Documentation
157 -------------
158
159 .. |main| image:: https://readthedocs.com/projects/nvidia-nemo/badge/?version=main
160 :alt: Documentation Status
161 :scale: 100%
162 :target: https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/
163
164 .. |stable| image:: https://readthedocs.com/projects/nvidia-nemo/badge/?version=stable
165 :alt: Documentation Status
166 :scale: 100%
167 :target: https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/
168
169 +---------+-------------+------------------------------------------------------------------------------------------------------------------------------------------+
170 | Version | Status | Description |
171 +=========+=============+==========================================================================================================================================+
172 | Latest | |main| | `Documentation of the latest (i.e. main) branch. <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/>`_ |
173 +---------+-------------+------------------------------------------------------------------------------------------------------------------------------------------+
174 | Stable | |stable| | `Documentation of the stable (i.e. most recent release) branch. <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/>`_ |
175 +---------+-------------+------------------------------------------------------------------------------------------------------------------------------------------+
176
177 Tutorials
178 ---------
179 A great way to start with NeMo is by checking `one of our tutorials <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/starthere/tutorials.html>`_.
180
181 Getting help with NeMo
182 ----------------------
183 FAQ can be found on NeMo's `Discussions board <https://github.com/NVIDIA/NeMo/discussions>`_. You are welcome to ask questions or start discussions there.
184
185
186 Installation
187 ------------
188
189 Conda
190 ~~~~~
191
192 We recommend installing NeMo in a fresh Conda environment.
193
194 .. code-block:: bash
195
196 conda create --name nemo python==3.10.12
197 conda activate nemo
198
199 Install PyTorch using their `configurator <https://pytorch.org/get-started/locally/>`_.
200
201 .. code-block:: bash
202
203 conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia
204
205 The command used to install PyTorch may depend on your system. Please use the configurator linked above to find the right command for your system.
206
207 Pip
208 ~~~
209 Use this installation mode if you want the latest released version.
210
211 .. code-block:: bash
212
213 apt-get update && apt-get install -y libsndfile1 ffmpeg
214 pip install Cython
215 pip install nemo_toolkit['all']
216
217 Depending on the shell used, you may need to use ``"nemo_toolkit[all]"`` instead in the above command.
218
219 Pip from source
220 ~~~~~~~~~~~~~~~
221 Use this installation mode if you want the version from a particular GitHub branch (e.g main).
222
223 .. code-block:: bash
224
225 apt-get update && apt-get install -y libsndfile1 ffmpeg
226 pip install Cython
227 python -m pip install git+https://github.com/NVIDIA/NeMo.git@{BRANCH}#egg=nemo_toolkit[all]
228
229
230 From source
231 ~~~~~~~~~~~
232 Use this installation mode if you are contributing to NeMo.
233
234 .. code-block:: bash
235
236 apt-get update && apt-get install -y libsndfile1 ffmpeg
237 git clone https://github.com/NVIDIA/NeMo
238 cd NeMo
239 ./reinstall.sh
240
241 If you only want the toolkit without additional conda-based dependencies, you may replace ``reinstall.sh``
242 with ``pip install -e .`` when your PWD is the root of the NeMo repository.
243
244 RNNT
245 ~~~~
246 Note that RNNT requires numba to be installed from conda.
247
248 .. code-block:: bash
249
250 conda remove numba
251 pip uninstall numba
252 conda install -c conda-forge numba
253
254 NeMo Megatron
255 ~~~~~~~~~~~~~
256 NeMo Megatron training requires NVIDIA Apex to be installed.
257 Install it manually if not using the NVIDIA PyTorch container.
258
259 To install Apex, run
260
261 .. code-block:: bash
262
263 git clone https://github.com/NVIDIA/apex.git
264 cd apex
265 git checkout 52e18c894223800cb611682dce27d88050edf1de
266 pip install -v --no-build-isolation --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" --global-option="--fast_layer_norm" --global-option="--distributed_adam" --global-option="--deprecated_fused_adam" ./
267
268 It is highly recommended to use the NVIDIA PyTorch or NeMo container if having issues installing Apex or any other dependencies.
269
270 While installing Apex, it may raise an error if the CUDA version on your system does not match the CUDA version torch was compiled with.
271 This raise can be avoided by commenting it here: https://github.com/NVIDIA/apex/blob/master/setup.py#L32
272
273 cuda-nvprof is needed to install Apex. The version should match the CUDA version that you are using:
274
275 .. code-block:: bash
276
277 conda install -c nvidia cuda-nvprof=11.8
278
279 packaging is also needed:
280
281 .. code-block:: bash
282
283 pip install packaging
284
285 With the latest versions of Apex, the `pyproject.toml` file in Apex may need to be deleted in order to install locally.
286
287
288 Transformer Engine
289 ~~~~~~~~~~~~~~~~~~
290 NeMo Megatron GPT has been integrated with `NVIDIA Transformer Engine <https://github.com/NVIDIA/TransformerEngine>`_
291 Transformer Engine enables FP8 training on NVIDIA Hopper GPUs.
292 `Install <https://docs.nvidia.com/deeplearning/transformer-engine/user-guide/installation.html>`_ it manually if not using the NVIDIA PyTorch container.
293
294 .. code-block:: bash
295
296 pip install --upgrade git+https://github.com/NVIDIA/TransformerEngine.git@stable
297
298 It is highly recommended to use the NVIDIA PyTorch or NeMo container if having issues installing Transformer Engine or any other dependencies.
299
300 Transformer Engine requires PyTorch to be built with CUDA 11.8.
301
302
303 Flash Attention
304 ~~~~~~~~~~~~~~~~~~~~
305 Transformer Engine already supports Flash Attention for GPT models. If you want to use Flash Attention for non-causal models or use with attention bias (introduced from position encoding, e.g. Alibi), please install `flash-attn <https://github.com/HazyResearch/flash-attention>`_.
306
307 .. code-block:: bash
308
309 pip install flash-attn
310 pip install triton==2.0.0.dev20221202
311
312 NLP inference UI
313 ~~~~~~~~~~~~~~~~~~~~
314 To launch the inference web UI server, please install `gradio <https://gradio.app/>`_.
315
316 .. code-block:: bash
317
318 pip install gradio==3.34.0
319
320 NeMo Text Processing
321 ~~~~~~~~~~~~~~~~~~~~
322 NeMo Text Processing, specifically (Inverse) Text Normalization, is now a separate repository `https://github.com/NVIDIA/NeMo-text-processing <https://github.com/NVIDIA/NeMo-text-processing>`_.
323
324 Docker containers:
325 ~~~~~~~~~~~~~~~~~~
326 We release NeMo containers alongside NeMo releases. For example, NeMo ``r1.20.0`` comes with container ``nemo:23.06``; you may find more details about released containers on the `releases page <https://github.com/NVIDIA/NeMo/releases>`_.
327
328 To use a prebuilt container, please run
329
330 .. code-block:: bash
331
332 docker pull nvcr.io/nvidia/nemo:23.06
333
334 To build a nemo container with Dockerfile from a branch, please run
335
336 .. code-block:: bash
337
338 DOCKER_BUILDKIT=1 docker build -f Dockerfile -t nemo:latest .
339
340
341 If you choose to work with the main branch, we recommend using NVIDIA's PyTorch container version 23.06-py3 and then installing from GitHub.
342
343 .. code-block:: bash
344
345 docker run --gpus all -it --rm -v <nemo_github_folder>:/NeMo --shm-size=8g \
346 -p 8888:8888 -p 6006:6006 --ulimit memlock=-1 --ulimit \
347 stack=67108864 --device=/dev/snd nvcr.io/nvidia/pytorch:23.06-py3
348
349 Examples
350 --------
351
352 Many examples can be found under the `"Examples" <https://github.com/NVIDIA/NeMo/tree/stable/examples>`_ folder.
353
354
355 Contributing
356 ------------
357
358 We welcome community contributions! Please refer to `CONTRIBUTING.md <https://github.com/NVIDIA/NeMo/blob/stable/CONTRIBUTING.md>`_ for the process.
359
360 Publications
361 ------------
362
363 We provide an ever-growing list of publications that utilize the NeMo framework. Please refer to `PUBLICATIONS.md <https://github.com/NVIDIA/NeMo/tree/stable/PUBLICATIONS.md>`_. We welcome the addition of your own articles to this list!
364
365 License
366 -------
367 NeMo is under `Apache 2.0 license <https://github.com/NVIDIA/NeMo/blob/stable/LICENSE>`_.
368
[end of README.rst]
[start of nemo/collections/asr/metrics/wer.py]
...
...
1283
1284 # preserve decoding alignments
1285 preserve_alignments: Optional[bool] = None
1286
1287 # compute ctc time stamps
1288 compute_timestamps: Optional[bool] = None
1289
1290     # token representing word separator
1291 word_seperator: str = " "
1292
1293 # type of timestamps to calculate
1294 ctc_timestamp_type: str = "all" # can be char, word or all for both
1295
1296 # batch dimension
1297 batch_dim_index: int = 0
1298
1299 # greedy decoding config
1300 greedy: ctc_greedy_decoding.GreedyCTCInferConfig = ctc_greedy_decoding.GreedyCTCInferConfig()
1301
1302 # beam decoding config
1303 beam: ctc_beam_decoding.BeamCTCInferConfig = ctc_beam_decoding.BeamCTCInferConfig(beam_size=4)
1304
1305 # confidence config
1306 confidence_cfg: ConfidenceConfig = ConfidenceConfig()
1307
1308 # can be used to change temperature for decoding
1309 temperature: float = 1.0
1310
[end of nemo/collections/asr/metrics/wer.py]
[start of nemo/collections/tts/models/fastpitch.py]
...
...
56 @dataclass
57 class G2PConfig:
58 _target_: str = "nemo.collections.tts.g2p.models.en_us_arpabet.EnglishG2p"
59 phoneme_dict: str = "scripts/tts_dataset_files/cmudict-0.7b_nv22.10"
60 heteronyms: str = "scripts/tts_dataset_files/heteronyms-052722"
61 phoneme_probability: float = 0.5
62
63
64 @dataclass
65 class TextTokenizer:
66 _target_: str = "nemo.collections.common.tokenizers.text_to_speech.tts_tokenizers.EnglishPhonemesTokenizer"
67 punct: bool = True
68 stresses: bool = True
69 chars: bool = True
70 apostrophe: bool = True
71 pad_with_space: bool = True
72 add_blank_at: bool = True
73 g2p: G2PConfig = G2PConfig()
74
75
76 @dataclass
77 class TextTokenizerConfig:
78 text_tokenizer: TextTokenizer = TextTokenizer()
79
80
81 class FastPitchModel(SpectrogramGenerator, Exportable, FastPitchAdapterModelMixin):
82 """FastPitch model (https://arxiv.org/abs/2006.06873) that is used to generate mel spectrogram from text."""
...
[end of nemo/collections/tts/models/fastpitch.py]
[start of nemo/collections/asr/models/configs/asr_models_config.py]
...
...
60
61 # bucketing params
62 bucketing_strategy: str = "synced_randomized"
63 bucketing_batch_size: Optional[Any] = None
64 bucketing_weights: Optional[List[int]] = None
65
66
67 @dataclass
68 class EncDecCTCConfig(model_cfg.ModelConfig):
69 # Model global arguments
70 sample_rate: int = 16000
71 repeat: int = 1
72 dropout: float = 0.0
73 separable: bool = False
74 labels: List[str] = MISSING
75
76 # Dataset configs
77 train_ds: ASRDatasetConfig = ASRDatasetConfig(manifest_filepath=None, shuffle=True)
78 validation_ds: ASRDatasetConfig = ASRDatasetConfig(manifest_filepath=None, shuffle=False)
79 test_ds: ASRDatasetConfig = ASRDatasetConfig(manifest_filepath=None, shuffle=False)
80
81 # Optimizer / Scheduler config
82 optim: Optional[model_cfg.OptimConfig] = model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
83
84 # Model component configs
85 preprocessor: AudioToMelSpectrogramPreprocessorConfig = AudioToMelSpectrogramPreprocessorConfig()
86 spec_augment: Optional[SpectrogramAugmentationConfig] = SpectrogramAugmentationConfig()
87 encoder: ConvASREncoderConfig = ConvASREncoderConfig()
88 decoder: ConvASRDecoderConfig = ConvASRDecoderConfig()
89 decoding: CTCDecodingConfig = CTCDecodingConfig()
90
91
92 @dataclass
93 class EncDecCTCModelConfig(model_cfg.NemoConfig):
94 model: EncDecCTCConfig = EncDecCTCConfig()
95
96
97 @dataclass
98 class CacheAwareStreamingConfig:
...
[end of nemo/collections/asr/models/configs/asr_models_config.py]
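`EncDecCTCConfig` above composes dataset, optimizer, and module configs as nested dataclass fields. A minimal stdlib-only sketch of that composition pattern (the `*Stub` names are hypothetical stand-ins, not NeMo classes), using `default_factory` so every instance gets its own nested config:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DatasetConfigStub:  # hypothetical stand-in for ASRDatasetConfig
    manifest_filepath: Optional[str] = None
    shuffle: bool = False

@dataclass
class ModelConfigStub:  # hypothetical stand-in for EncDecCTCConfig
    sample_rate: int = 16000
    # default_factory gives every instance its own nested dataset config
    train_ds: DatasetConfigStub = field(default_factory=lambda: DatasetConfigStub(shuffle=True))
    validation_ds: DatasetConfigStub = field(default_factory=DatasetConfigStub)

cfg = ModelConfigStub(sample_rate=8000)
cfg.train_ds.manifest_filepath = "train_manifest.json"
print(cfg.train_ds.shuffle, cfg.validation_ds.shuffle)  # True False
```

Overrides on one instance's nested config do not touch other instances, because each call to the factory builds a fresh object.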
[start of nemo/collections/asr/models/configs/diarizer_config.py]
...
...
64 # Offset to set a reference point from the start of the word. Recommended range of values is [-0.05 0.2].
65 word_ts_anchor_offset: Optional[float] = None
66 # Select which part of the word timestamp we want to use. The options are: 'start', 'end', 'mid'.
67 word_ts_anchor_pos: str = "start"
68 # Fix the word timestamp using VAD output. You must provide a VAD model to use this feature.
69 fix_word_ts_with_VAD: bool = False
70 # If True, use colored text to distinguish speakers in the output transcript.
71 colored_text: bool = False
72 # If True, the start and end time of each speaker turn is printed in the output transcript.
73 print_time: bool = True
74 # If True, the output transcript breaks the line to fix the line width (default is 90 chars)
75 break_lines: bool = False
76
77
78 @dataclass
79 class ASRDiarizerConfig(DiarizerComponentConfig):
80 model_path: Optional[str] = "stt_en_conformer_ctc_large"
81 parameters: ASRDiarizerParams = ASRDiarizerParams()
82 ctc_decoder_parameters: ASRDiarizerCTCDecoderParams = ASRDiarizerCTCDecoderParams()
83 realigning_lm_parameters: ASRRealigningLMParams = ASRRealigningLMParams()
84
85
86 @dataclass
87 class VADParams(DiarizerComponentConfig):
...
...
88 window_length_in_sec: float = 0.15 # Window length in sec for VAD context input
89 shift_length_in_sec: float = 0.01 # Shift length in sec for generate frame level VAD prediction
90 smoothing: Union[str, bool] = "median" # False or type of smoothing method (eg: median)
91 overlap: float = 0.5 # Overlap ratio for overlapped mean/median smoothing filter
92 onset: float = 0.1 # Onset threshold for detecting the beginning and end of a speech
93 offset: float = 0.1 # Offset threshold for detecting the end of a speech
94 pad_onset: float = 0.1 # Adding durations before each speech segment
95 pad_offset: float = 0 # Adding durations after each speech segment
96 min_duration_on: float = 0 # Threshold for small non_speech deletion
97 min_duration_off: float = 0.2 # Threshold for short speech segment deletion
98 filter_speech_first: bool = True
99
100
101 @dataclass
102 class VADConfig(DiarizerComponentConfig):
103 model_path: str = "vad_multilingual_marblenet" # .nemo local model path or pretrained VAD model name
104 external_vad_manifest: Optional[str] = None
105 parameters: VADParams = VADParams()
106
107
108 @dataclass
109 class SpeakerEmbeddingsParams(DiarizerComponentConfig):
...
...
107
108 @dataclass
109 class SpeakerEmbeddingsParams(DiarizerComponentConfig):
110 # Window length(s) in sec (floating-point number). either a number or a list. ex) 1.5 or [1.5,1.0,0.5]
111 window_length_in_sec: Tuple[float] = (1.5, 1.25, 1.0, 0.75, 0.5)
112 # Shift length(s) in sec (floating-point number). either a number or a list. ex) 0.75 or [0.75,0.5,0.25]
113 shift_length_in_sec: Tuple[float] = (0.75, 0.625, 0.5, 0.375, 0.25)
114 # Weight for each scale. None (for single scale) or list with window/shift scale count. ex) [0.33,0.33,0.33]
115 multiscale_weights: Tuple[float] = (1, 1, 1, 1, 1)
116 # save speaker embeddings in pickle format. True if clustering result is used for other models, such as MSDD.
117 save_embeddings: bool = True
118
119
120 @dataclass
121 class SpeakerEmbeddingsConfig(DiarizerComponentConfig):
122 # .nemo local model path or pretrained model name (titanet_large, ecapa_tdnn or speakerverification_speakernet)
123 model_path: Optional[str] = None
124 parameters: SpeakerEmbeddingsParams = SpeakerEmbeddingsParams()
125
126
127 @dataclass
128 class ClusteringParams(DiarizerComponentConfig):
...
...
128 class ClusteringParams(DiarizerComponentConfig):
129 # If True, use num of speakers value provided in manifest file.
130 oracle_num_speakers: bool = False
131 # Max number of speakers for each recording. If an oracle number of speakers is passed, this value is ignored.
132 max_num_speakers: int = 8
133 # If the number of segments is lower than this number, enhanced speaker counting is activated.
134 enhanced_count_thres: int = 80
135 # Determines the range of p-value search: 0 < p <= max_rp_threshold.
136 max_rp_threshold: float = 0.25
137 # The higher the number, the more values will be examined with more time.
138 sparse_search_volume: int = 30
139 # If True, take a majority vote on multiple p-values to estimate the number of speakers.
140 maj_vote_spk_count: bool = False
141
142
143 @dataclass
144 class ClusteringConfig(DiarizerComponentConfig):
145 parameters: ClusteringParams = ClusteringParams()
146
147
148 @dataclass
149 class MSDDParams(DiarizerComponentConfig):
...
...
152 # Batch size for MSDD inference.
153 infer_batch_size: int = 25
154 # Sigmoid threshold for generating binarized speaker labels. The smaller the more generous on detecting overlaps.
155 sigmoid_threshold: Tuple[float] = (0.7,)
156 # If True, use oracle number of speaker and evaluate F1 score for the given speaker sequences. Default is False.
157 seq_eval_mode: bool = False
158 # If True, break the input audio clip to short sequences and calculate cluster average embeddings for inference.
159 split_infer: bool = True
160 # The length of split short sequence when split_infer is True.
161 diar_window_length: int = 50
162 # If the estimated number of speakers are larger than this number, overlap speech is not estimated.
163 overlap_infer_spk_limit: int = 5
164
165
166 @dataclass
167 class MSDDConfig(DiarizerComponentConfig):
168 model_path: Optional[str] = "diar_msdd_telephonic"
169 parameters: MSDDParams = MSDDParams()
170
171
172 @dataclass
173 class DiarizerConfig(DiarizerComponentConfig):
...
...
162 # If the estimated number of speakers are larger than this number, overlap speech is not estimated.
163 overlap_infer_spk_limit: int = 5
164
165
166 @dataclass
167 class MSDDConfig(DiarizerComponentConfig):
168 model_path: Optional[str] = "diar_msdd_telephonic"
169 parameters: MSDDParams = MSDDParams()
170
171
172 @dataclass
173 class DiarizerConfig(DiarizerComponentConfig):
174 manifest_filepath: Optional[str] = None
175 out_dir: Optional[str] = None
176 oracle_vad: bool = False # If True, uses RTTM files provided in the manifest file to get VAD timestamps
177 collar: float = 0.25 # Collar value for scoring
178 ignore_overlap: bool = True # Consider or ignore overlap segments while scoring
179 vad: VADConfig = VADConfig()
180 speaker_embeddings: SpeakerEmbeddingsConfig = SpeakerEmbeddingsConfig()
181 clustering: ClusteringConfig = ClusteringConfig()
182 msdd_model: MSDDConfig = MSDDConfig()
183 asr: ASRDiarizerConfig = ASRDiarizerConfig()
184
185
186 @dataclass
187 class NeuralDiarizerInferenceConfig(DiarizerComponentConfig):
188 diarizer: DiarizerConfig = DiarizerConfig()
189 device: str = "cpu"
190 verbose: bool = False
191 batch_size: int = 64
192 num_workers: int = 1
...
[end of nemo/collections/asr/models/configs/diarizer_config.py]
[start of nemo/collections/asr/parts/k2/classes.py]
...
...
29
30 search_beam: float = 20.0
31 output_beam: float = 10.0
32 min_active_states: int = 30
33 max_active_states: int = 10000
34
35
36 @dataclass
37 class GraphModuleConfig:
38 """Config for graph modules.
39 Typically used with graph losses and decoders.
40 """
41
42 topo_type: str = "default"
43 topo_with_self_loops: bool = True
44 token_lm: Optional[Any] = None
45 intersect_pruned: bool = False
46 intersect_conf: GraphIntersectDenseConfig = GraphIntersectDenseConfig()
47 boost_coeff: float = 0.0
48 predictor_window_size: int = 0
49 predictor_step_size: int = 1
50
...
[end of nemo/collections/asr/parts/k2/classes.py]
[start of scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram_transducer.py]
...
101 use_amp: bool = False # Whether to use AMP if available to calculate log probabilities
102 num_workers: int = 1 # Number of workers for DataLoader
103
104 # The decoding scheme to be used for evaluation
105 decoding_strategy: str = "greedy_batch" # ["greedy_batch", "beam", "tsd", "alsd", "maes"]
106
107 # Beam Search hyperparameters
108 beam_width: List[int] = field(default_factory=lambda: [8]) # The width or list of the widths for the beam search decoding
109 beam_alpha: List[float] = field(default_factory=lambda: [0.2]) # The alpha parameter or list of the alphas for the beam search decoding
110
111 maes_prefix_alpha: List[int] = field(default_factory=lambda: [2]) # The maes_prefix_alpha or list of the maes_prefix_alpha for the maes decoding
112 maes_expansion_gamma: List[float] = field(default_factory=lambda: [2.3]) # The maes_expansion_gamma or list of the maes_expansion_gamma for the maes decoding
113
114 # HAT related parameters (only for internal lm subtraction)
115 hat_subtract_ilm: bool = False
116 hat_ilm_weight: List[float] = field(default_factory=lambda: [0.0])
117
118 decoding: rnnt_beam_decoding.BeamRNNTInferConfig = rnnt_beam_decoding.BeamRNNTInferConfig(beam_size=128)
119
120
121 # fmt: on
122
...
[end of scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram_transducer.py]
[start of nemo/core/config/modelPT.py]
...
44 # ...
45 train_ds: Optional[DatasetConfig] = None
46 validation_ds: Optional[DatasetConfig] = None
47 test_ds: Optional[DatasetConfig] = None
48 optim: Optional[OptimConfig] = None
49
50
51 @dataclass
52 class HydraConfig:
53 run: Dict[str, Any] = field(default_factory=lambda: {"dir": "."})
54 job_logging: Dict[str, Any] = field(default_factory=lambda: {"root": {"handlers": None}})
55
56
57 @dataclass
58 class NemoConfig:
59 name: str = MISSING
60 model: ModelConfig = MISSING
61 trainer: config.TrainerConfig = config.TrainerConfig(
62 strategy="ddp", enable_checkpointing=False, logger=False, log_every_n_steps=1, accelerator='gpu'
63 )
64 exp_manager: Optional[Any] = exp_manager.ExpManagerConfig()
65 hydra: HydraConfig = HydraConfig()
66
67
68 class ModelConfigBuilder:
69 def __init__(self, model_cfg: ModelConfig):
...
[end of nemo/core/config/modelPT.py]
[start of nemo/collections/asr/models/configs/aligner_config.py]
...
...
21 class AlignerCTCConfig:
22 prob_suppress_index: int = -1
23 prob_suppress_value: float = 1.0
24
25
26 @dataclass
27 class AlignerRNNTConfig:
28 predictor_window_size: int = 0
29 predictor_step_size: int = 1
30
31
32 @dataclass
33 class AlignerWrapperModelConfig:
34 alignment_type: str = "forced"
35 word_output: bool = True
36 cpu_decoding: bool = False
37 decode_batch_size: int = 0
38 ctc_cfg: AlignerCTCConfig = AlignerCTCConfig()
39 rnnt_cfg: AlignerRNNTConfig = AlignerRNNTConfig()
40
41
42 @dataclass
43 class K2AlignerWrapperModelConfig(AlignerWrapperModelConfig):
44 decoder_module_cfg: GraphModuleConfig = GraphModuleConfig()
45
[end of nemo/collections/asr/models/configs/aligner_config.py]
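`K2AlignerWrapperModelConfig` above extends `AlignerWrapperModelConfig` by subclassing. A stdlib-only sketch of how dataclass inheritance carries this kind of config extension (stub names and fields hypothetical):

```python
from dataclasses import dataclass

@dataclass
class AlignerConfigStub:  # hypothetical stand-in for AlignerWrapperModelConfig
    alignment_type: str = "forced"
    decode_batch_size: int = 0

@dataclass
class K2AlignerConfigStub(AlignerConfigStub):
    # Subclassing inherits all base fields and appends new ones, which is
    # how K2AlignerWrapperModelConfig adds its decoder module config above.
    topo_type: str = "default"

cfg = K2AlignerConfigStub(alignment_type="argmax")
print(cfg.alignment_type, cfg.topo_type)  # argmax default
```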
[start of nemo/collections/asr/parts/submodules/ctc_greedy_decoding.py]
...
...
239
240 if self.preserve_frame_confidence:
241 raise ValueError(
242 "Requested for per-frame confidence, but predictions provided were labels, not log probabilities."
243 )
244
245 return hypothesis
246
247 def __call__(self, *args, **kwargs):
248 return self.forward(*args, **kwargs)
249
250
251 @dataclass
252 class GreedyCTCInferConfig:
253 preserve_alignments: bool = False
254 compute_timestamps: bool = False
255 preserve_frame_confidence: bool = False
256 confidence_method_cfg: Optional[ConfidenceMethodConfig] = ConfidenceMethodConfig()
257
258 def __post_init__(self):
259 # OmegaConf.structured ensures that post_init check is always executed
260 self.confidence_method_cfg = OmegaConf.structured(
...
[end of nemo/collections/asr/parts/submodules/ctc_greedy_decoding.py]
[start of nemo/collections/common/parts/adapter_modules.py]
...
...
146 def forward(self, x):
147 x = self.module(x)
148
149 # Add dropout if available
150 if self.dropout is not None:
151 x = self.dropout(x)
152
153 return x
154
155
156 @dataclass
157 class LinearAdapterConfig:
158 in_features: int
159 dim: int
160 activation: str = 'swish'
161 norm_position: str = 'pre'
162 dropout: float = 0.0
163 adapter_strategy: Optional[Any] = adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
164 _target_: str = "{0}.{1}".format(LinearAdapter.__module__, LinearAdapter.__name__)
165
[end of nemo/collections/common/parts/adapter_modules.py]
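The `_target_` field above stores a dotted import path built from the class's `__module__` and `__name__`. A small sketch of the string manipulation involved (the stub class is hypothetical; resolution details of Hydra's `instantiate` are omitted):

```python
class LinearAdapterStub:
    """Hypothetical stand-in for a class addressed by a _target_ string."""

# Build the dotted import path exactly as the config above does:
target = "{0}.{1}".format(LinearAdapterStub.__module__, LinearAdapterStub.__name__)

# A Hydra-style instantiate() later splits it back into module and class name:
module_name, _, class_name = target.rpartition(".")
print(class_name)  # LinearAdapterStub
```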
[start of nemo/collections/asr/parts/submodules/adapters/multi_head_attention_adapter_module.py]
...
...
169 return super().forward(query, key, value, mask, pos_emb, cache=cache)
170
171 def reset_parameters(self):
172 with torch.no_grad():
173 nn.init.zeros_(self.linear_out.weight)
174 nn.init.zeros_(self.linear_out.bias)
175
176 def get_default_strategy_config(self) -> 'dataclass':
177 return MHAResidualAddAdapterStrategyConfig()
178
179
180 @dataclass
181 class MultiHeadAttentionAdapterConfig:
182 n_head: int
183 n_feat: int
184 dropout_rate: float = 0.0
185 proj_dim: Optional[int] = None
186 adapter_strategy: Optional[Any] = MHAResidualAddAdapterStrategyConfig()
187 _target_: str = "{0}.{1}".format(MultiHeadAttentionAdapter.__module__, MultiHeadAttentionAdapter.__name__)
188
189
190 class RelPositionMultiHeadAttentionAdapter(mha.RelPositionMultiHeadAttention, adapter_modules.AdapterModuleUtil):
...
...
273 # However:
274 # DO NOT REPLACE BELOW WITH self.pos_bias_u *= 0.0 OR self.pos_bias_v *= 0.0
275 # For some reason at init sometimes it will cause the value of the tensor to become NaN
276 # All operations to compute matrix_ac and matrix_bd will then fail.
277 nn.init.zeros_(self.pos_bias_u)
278 nn.init.zeros_(self.pos_bias_v)
279
280 def get_default_strategy_config(self) -> 'dataclass':
281 return MHAResidualAddAdapterStrategyConfig()
282
283
284 @dataclass
285 class RelPositionMultiHeadAttentionAdapterConfig:
286 n_head: int
287 n_feat: int
288 dropout_rate: float = 0.0
289 proj_dim: Optional[int] = None
290 adapter_strategy: Optional[Any] = MHAResidualAddAdapterStrategyConfig()
291 _target_: str = "{0}.{1}".format(
292 RelPositionMultiHeadAttentionAdapter.__module__, RelPositionMultiHeadAttentionAdapter.__name__
293 )
294
...
...
322
323 super().__init__(
324 d_model=d_model, dropout_rate=0.0, max_len=max_len, xscale=xscale, dropout_rate_emb=0.0,
325 )
326
327 # Setup adapter strategy
328 self.setup_adapter_strategy(adapter_strategy)
329
330 def get_default_strategy_config(self) -> 'dataclass':
331 return adapter_mixin_strategies.ReturnResultAdapterStrategyConfig()
332
333
334 @dataclass
335 class PositionalEncodingAdapterConfig:
336 d_model: int
337 max_len: int = 5000
338 xscale: float = 1.0
339 adapter_strategy: Optional[Any] = adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
340 _target_: str = "{0}.{1}".format(PositionalEncodingAdapter.__module__, PositionalEncodingAdapter.__name__)
341
342
343 class RelPositionalEncodingAdapter(mha.RelPositionalEncoding, adapter_modules.AdapterModuleUtil):
...
...
364 xscale=1.0,
365 adapter_strategy: adapter_mixin_strategies.ReturnResultAdapterStrategyConfig = None,
366 ):
367 super().__init__(d_model=d_model, dropout_rate=0.0, max_len=max_len, xscale=xscale, dropout_rate_emb=0.0)
368
369 # Setup adapter strategy
370 self.setup_adapter_strategy(adapter_strategy)
371
372 def get_default_strategy_config(self) -> 'dataclass':
373 return adapter_mixin_strategies.ReturnResultAdapterStrategyConfig()
374
375
376 @dataclass
377 class RelPositionalEncodingAdapterConfig:
378 d_model: int
379 max_len: int = 5000
380 xscale: float = 1.0
381 adapter_strategy: Optional[Any] = adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
382 _target_: str = "{0}.{1}".format(RelPositionalEncodingAdapter.__module__, RelPositionalEncodingAdapter.__name__)
383
[end of nemo/collections/asr/parts/submodules/adapters/multi_head_attention_adapter_module.py]
[start of scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram.py]
...
98 # Parameters for inference
99 acoustic_batch_size: int = 16 # The batch size to calculate log probabilities
100 beam_batch_size: int = 128 # The batch size to be used for beam search decoding
101 device: str = "cuda" # The device to load the model onto to calculate log probabilities
102 use_amp: bool = False # Whether to use AMP if available to calculate log probabilities
103
104 # Beam Search hyperparameters
105
106 # The decoding scheme to be used for evaluation.
107 # Can be one of ["greedy", "beamsearch", "beamsearch_ngram"]
108 decoding_mode: str = "beamsearch_ngram"
109
110 beam_width: List[int] = field(default_factory=lambda: [128]) # The width or list of the widths for the beam search decoding
111 beam_alpha: List[float] = field(default_factory=lambda: [1.0]) # The alpha parameter or list of the alphas for the beam search decoding
112 beam_beta: List[float] = field(default_factory=lambda: [0.0]) # The beta parameter or list of the betas for the beam search decoding
113
114 decoding_strategy: str = "beam"
115 decoding: ctc_beam_decoding.BeamCTCInferConfig = ctc_beam_decoding.BeamCTCInferConfig(beam_size=128)
116
117 text_processing: Optional[TextProcessingConfig] = TextProcessingConfig(
118 punctuation_marks = ".,?",
119 separate_punctuation = False,
120 do_lowercase = False,
121 rm_punctuation = False,
122 )
123 # fmt: on
124
125
126 def beam_search_eval(
...
[end of scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram.py]
[start of nemo/collections/nlp/modules/common/megatron/megatron_encoders.py]
...
...
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Transformer based language model."""
16
17 from nemo.collections.nlp.modules.common.megatron.megatron_perceiver_encoders import MegatronPerceiverEncoderModule
18 from nemo.collections.nlp.modules.common.megatron.megatron_transformer_encoder import MegatronTransformerEncoderModule
19 from nemo.collections.nlp.modules.common.megatron.retrieval_transformer import (
20 MegatronRetrievalTransformerEncoderModule,
21 )
22 from nemo.collections.nlp.modules.common.megatron.utils import (
23 ApexGuardDefaults,
24 init_method_normal,
25 scaled_init_method_normal,
26 )
27
28 try:
29 from apex.transformer.enums import AttnMaskType, ModelType
30
31 HAVE_APEX = True
...
[end of nemo/collections/nlp/modules/common/megatron/megatron_encoders.py]
[start of nemo/collections/asr/parts/submodules/rnnt_greedy_decoding.py]
...
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 # Copyright 2017 Johns Hopkins University (Shinji Watanabe)
16 #
17 # Licensed under the Apache License, Version 2.0 (the "License");
18 # you may not use this file except in compliance with the License.
19 # You may obtain a copy of the License at
20 #
21 # http://www.apache.org/licenses/LICENSE-2.0
22 #
23 # Unless required by applicable law or agreed to in writing, software
24 # distributed under the License is distributed on an "AS IS" BASIS,
25 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
26 # See the License for the specific language governing permissions and
27 # limitations under the License.
28
29 from dataclasses import dataclass
30 from typing import List, Optional, Tuple, Union
31
32 import numpy as np
33 import torch
...
...
2171 if self.preserve_frame_confidence:
2172 for batch_idx in range(batchsize):
2173 if len(hypotheses[batch_idx].frame_confidence[-1]) == 0:
2174 del hypotheses[batch_idx].frame_confidence[-1]
2175
2176 # Preserve states
2177 for batch_idx in range(batchsize):
2178 hypotheses[batch_idx].dec_state = self.decoder.batch_select_state(hidden, batch_idx)
2179
2180 return hypotheses
2181
2182
2183 @dataclass
2184 class GreedyRNNTInferConfig:
2185 max_symbols_per_step: Optional[int] = 10
2186 preserve_alignments: bool = False
2187 preserve_frame_confidence: bool = False
2188 confidence_method_cfg: Optional[ConfidenceMethodConfig] = ConfidenceMethodConfig()
2189
2190 def __post_init__(self):
2191 # OmegaConf.structured ensures that post_init check is always executed
2192 self.confidence_method_cfg = OmegaConf.structured(
2193 self.confidence_method_cfg
2194 if isinstance(self.confidence_method_cfg, ConfidenceMethodConfig)
2195 else ConfidenceMethodConfig(**self.confidence_method_cfg)
2196 )
2197
2198
2199 @dataclass
2200 class GreedyBatchedRNNTInferConfig:
2201 max_symbols_per_step: Optional[int] = 10
2202 preserve_alignments: bool = False
2203 preserve_frame_confidence: bool = False
2204 confidence_method_cfg: Optional[ConfidenceMethodConfig] = ConfidenceMethodConfig()
2205
2206 def __post_init__(self):
2207 # OmegaConf.structured ensures that post_init check is always executed
2208 self.confidence_method_cfg = OmegaConf.structured(
...
[end of nemo/collections/asr/parts/submodules/rnnt_greedy_decoding.py]
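The `__post_init__` hooks above coerce a possibly-dict `confidence_method_cfg` into a structured config via `OmegaConf.structured`. A stdlib-only sketch of the same normalize-in-`__post_init__` pattern (stub names hypothetical, no OmegaConf dependency):

```python
from dataclasses import dataclass
from typing import Optional, Union

@dataclass
class MethodConfig:  # hypothetical stand-in for ConfidenceMethodConfig
    name: str = "max_prob"
    alpha: float = 1.0

@dataclass
class InferConfig:
    max_symbols_per_step: Optional[int] = 10
    # may arrive as a plain dict when the config is loaded from YAML
    method_cfg: Union[MethodConfig, dict, None] = None

    def __post_init__(self):
        # Normalize dict input into the structured type, analogous to the
        # OmegaConf.structured(...) coercion in the configs above.
        if isinstance(self.method_cfg, dict):
            self.method_cfg = MethodConfig(**self.method_cfg)
        elif self.method_cfg is None:
            self.method_cfg = MethodConfig()

cfg = InferConfig(method_cfg={"name": "entropy", "alpha": 0.5})
print(type(cfg.method_cfg).__name__, cfg.method_cfg.alpha)  # MethodConfig 0.5
```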
[start of nemo/collections/tts/models/tacotron2.py]
...
...
39 LengthsType,
40 LogitsType,
41 MelSpectrogramType,
42 SequenceToSequenceAlignmentType,
43 )
44 from nemo.core.neural_types.neural_type import NeuralType
45 from nemo.utils import logging, model_utils
46
47
48 @dataclass
49 class Preprocessor:
50 _target_: str = MISSING
51 pad_value: float = MISSING
52
53
54 @dataclass
55 class Tacotron2Config:
56 preprocessor: Preprocessor = Preprocessor()
57 encoder: Dict[Any, Any] = MISSING
58 decoder: Dict[Any, Any] = MISSING
59 postnet: Dict[Any, Any] = MISSING
60 labels: List = MISSING
...
[end of nemo/collections/tts/models/tacotron2.py]
[start of scripts/speech_recognition/convert_to_tarred_audio_dataset.py]
...
188 num_shards: int = -1
189 shuffle: bool = False
190 max_duration: Optional[float] = None
191 min_duration: Optional[float] = None
192 shuffle_seed: Optional[int] = None
193 sort_in_shards: bool = True
194 shard_manifests: bool = True
195 keep_files_together: bool = False
196
197
198 @dataclass
199 class ASRTarredDatasetMetadata:
200 created_datetime: Optional[str] = None
201 version: int = 0
202 num_samples_per_shard: Optional[int] = None
203 is_concatenated_manifest: bool = False
204
205 dataset_config: Optional[ASRTarredDatasetConfig] = ASRTarredDatasetConfig()
206 history: Optional[List[Any]] = field(default_factory=lambda: [])
207
208 def __post_init__(self):
209 self.created_datetime = self.get_current_datetime()
...
[end of scripts/speech_recognition/convert_to_tarred_audio_dataset.py]
[start of nemo/collections/asr/models/configs/classification_models_config.py]
...
...
58 bucketing_strategy: str = "synced_randomized"
59 bucketing_batch_size: Optional[Any] = None
60 bucketing_weights: Optional[List[int]] = None
61
62
63 @dataclass
64 class EncDecClassificationConfig(model_cfg.ModelConfig):
65 # Model global arguments
66 sample_rate: int = 16000
67 repeat: int = 1
68 dropout: float = 0.0
69 separable: bool = True
70 kernel_size_factor: float = 1.0
71 labels: List[str] = MISSING
72 timesteps: int = MISSING
73
74 # Dataset configs
75 train_ds: EncDecClassificationDatasetConfig = EncDecClassificationDatasetConfig(
76 manifest_filepath=None, shuffle=True, trim_silence=False
77 )
78 validation_ds: EncDecClassificationDatasetConfig = EncDecClassificationDatasetConfig(
79 manifest_filepath=None, shuffle=False
80 )
81 test_ds: EncDecClassificationDatasetConfig = EncDecClassificationDatasetConfig(
82 manifest_filepath=None, shuffle=False
83 )
84
85 # Optimizer / Scheduler config
86 optim: Optional[model_cfg.OptimConfig] = model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
87
88 # Model component configs
89 preprocessor: AudioToMFCCPreprocessorConfig = AudioToMFCCPreprocessorConfig()
90 spec_augment: Optional[SpectrogramAugmentationConfig] = SpectrogramAugmentationConfig()
91 crop_or_pad_augment: Optional[CropOrPadSpectrogramAugmentationConfig] = CropOrPadSpectrogramAugmentationConfig(
92 audio_length=timesteps
93 )
94
95 encoder: ConvASREncoderConfig = ConvASREncoderConfig()
96 decoder: ConvASRDecoderClassificationConfig = ConvASRDecoderClassificationConfig()
97
98
99 @dataclass
100 class EncDecClassificationModelConfig(model_cfg.NemoConfig):
101 model: EncDecClassificationConfig = EncDecClassificationConfig()
102
[end of nemo/collections/asr/models/configs/classification_models_config.py]
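Note the `crop_or_pad_augment` default above passes `audio_length=timesteps` inside the class body, where `timesteps` is still `MISSING`. A stdlib sketch (stub names and sentinel hypothetical) of why such a default captures the class-body value, not the value a caller later supplies:

```python
from dataclasses import dataclass

MISSING = "???"  # stand-in sentinel, like omegaconf's MISSING

@dataclass(frozen=True)  # frozen keeps the instance hashable, so it is
class CropConfigStub:    # accepted as a class-level default on Python 3.11+
    audio_length: object = MISSING

@dataclass
class ClassificationConfigStub:
    timesteps: object = MISSING
    # `timesteps` below is the name bound in the class body (i.e. MISSING),
    # evaluated once at class definition time -- not the value a caller
    # later passes to the constructor.
    crop: CropConfigStub = CropConfigStub(audio_length=timesteps)

cfg = ClassificationConfigStub(timesteps=128)
print(cfg.timesteps)          # 128
print(cfg.crop.audio_length)  # '???' -- the nested default did not update
```

So the nested `audio_length` must be overridden explicitly if the outer `timesteps` value is changed at construction time.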
[start of nemo/collections/asr/parts/utils/asr_confidence_utils.py]
...
...
161 More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
162
163                 alpha: Power scale for logsoftmax (α for entropies). Here we restrict it to be > 0.
164 When the alpha equals one, scaling is not applied to 'max_prob',
165 and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
166
167 entropy_norm: A mapping of the entropy value to the interval [0,1].
168 Supported values:
169 - 'lin' for using the linear mapping.
170 - 'exp' for using exponential mapping with linear shift.
171 """
172
173 preserve_frame_confidence: bool = False
174 preserve_token_confidence: bool = False
175 preserve_word_confidence: bool = False
176 exclude_blank: bool = True
177 aggregation: str = "min"
178 method_cfg: ConfidenceMethodConfig = ConfidenceMethodConfig()
179
180 def __post_init__(self):
181 # OmegaConf.structured ensures that post_init check is always executed
182 self.method_cfg = OmegaConf.structured(
...
[end of nemo/collections/asr/parts/utils/asr_confidence_utils.py]
[start of scripts/confidence_ensembles/build_ensemble.py]
...
61 <all arguments as in the previous examples>
62 tune_confidence_config.confidence_type='[entropy_renyi_exp,entropy_tsallis_exp]' # only tune over this set
63 tune_confidence_config.alpha='[0.1,0.5,1.0]' # only tune over this set
64
65 You can check the dataclasses in this file for the full list of supported
66 arguments and their default values.
67 """
68
69 import atexit
70
71 # using default logging to be able to silence unnecessary messages from nemo
72 import logging
73 import os
74 import random
75 import sys
76 import tempfile
77 from copy import deepcopy
78 from dataclasses import dataclass
79 from pathlib import Path
80 from typing import Dict, List, Optional, Tuple
81
82 import joblib
...
...
195 class_weight: Tuple = (None, "balanced")
196
197     # increase if getting many warnings that the algorithm didn't converge
198 max_iter: int = 1000
199
200
201 @dataclass
202 class BuildEnsembleConfig:
203 # where to save the resulting ensemble model
204 output_path: str = MISSING
205
206 # each model specification
207 ensemble: List[EnsembleConfig] = MISSING
208
209 random_seed: int = 0 # for reproducibility
210
211 # default confidence, can override
212 confidence: ConfidenceConfig = ConfidenceConfig(
213 # we keep frame confidences and apply aggregation manually to get full-utterance confidence
214 preserve_frame_confidence=True,
215 exclude_blank=True,
216 aggregation="mean",
217 method_cfg=ConfidenceMethodConfig(name="entropy", entropy_type="renyi", alpha=0.25, entropy_norm="lin",),
218 )
219 temperature: float = 1.0
220
221 # this is optional, but can be used to change any aspect of the transcription
222 # config, such as batch size or amp usage. Note that model, data and confidence
223     # will be overridden by this script
224 transcription: transcribe_speech.TranscriptionConfig = transcribe_speech.TranscriptionConfig()
225
226 # set to True to tune the confidence.
227 # requires dev manifests to be specified for each model
228 tune_confidence: bool = False
229     # used to specify what to tune over. By default, tuning runs over some
230     # reasonable grid, so that it does not take forever.
231 # Can be changed as needed
232 tune_confidence_config: TuneConfidenceConfig = TuneConfidenceConfig()
233
234 # very fast to tune and can be important in case of imbalanced datasets
235 # will automatically set to False if dev data is not available
236 tune_logistic_regression: bool = True
237 tune_logistic_regression_config: TuneLogisticRegressionConfig = TuneLogisticRegressionConfig()
238
239 def __post_init__(self):
240 """Checking that if any dev data is provided, all are provided.
241
...
[end of scripts/confidence_ensembles/build_ensemble.py]
[start of examples/asr/experimental/k2/align_speech_parallel.py]
...
60
61 You may control the aligner's config by setting the aligner_args:
62 aligner_args.alignment_type=argmax \
63 aligner_args.word_output=False \
64 aligner_args.cpu_decoding=True \
65 aligner_args.decode_batch_size=8 \
66 aligner_args.ctc_cfg.prob_suppress_index=-1 \
67 aligner_args.ctc_cfg.prob_suppress_value=0.5 \
68 aligner_args.rnnt_cfg.predictor_window_size=10 \
69 aligner_args.decoder_module_cfg.intersect_pruned=true \
70 aligner_args.decoder_module_cfg.intersect_conf.search_beam=40 \
71 ...
72
73 """
74
75
76 import os
77 from dataclasses import dataclass, is_dataclass
78 from typing import Optional
79
80 import pytorch_lightning as ptl
81 import torch
...
...
80 import pytorch_lightning as ptl
81 import torch
82 from omegaconf import MISSING, OmegaConf
83
84 from nemo.collections.asr.data.audio_to_ctm_dataset import ASRCTMPredictionWriter
85 from nemo.collections.asr.models import ASRModel
86 from nemo.collections.asr.models.configs.aligner_config import K2AlignerWrapperModelConfig
87 from nemo.collections.asr.models.configs.asr_models_config import ASRDatasetConfig
88 from nemo.collections.asr.models.k2_aligner_model import AlignerWrapperModel
89 from nemo.core.config import TrainerConfig, hydra_runner
90 from nemo.utils import logging
91 from nemo.utils.get_rank import is_global_rank_zero
92
93
94 @dataclass
95 class ParallelAlignmentConfig:
96 model: Optional[str] = None # name
97 predict_ds: ASRDatasetConfig = ASRDatasetConfig(return_sample_id=True, num_workers=4)
98 aligner_args: K2AlignerWrapperModelConfig = K2AlignerWrapperModelConfig()
99 output_path: str = MISSING
100 model_stride: int = 8
101
102 trainer: TrainerConfig = TrainerConfig(gpus=-1, accelerator="ddp")
103
104     # these arguments will be ignored
105 return_predictions: bool = False
106 use_cer: bool = False
...
[end of examples/asr/experimental/k2/align_speech_parallel.py]
[start of nemo/collections/asr/models/configs/matchboxnet_config.py]
...
93
94
95 # fmt: on
96
97
98 @dataclass
99 class MatchboxNetModelConfig(clf_cfg.EncDecClassificationConfig):
100 # Model global arguments
101 sample_rate: int = 16000
102 repeat: int = 1
103 dropout: float = 0.0
104 separable: bool = True
105 kernel_size_factor: float = 1.0
106 timesteps: int = 128
107 labels: List[str] = MISSING
108
109 # Dataset configs
110 train_ds: clf_cfg.EncDecClassificationDatasetConfig = clf_cfg.EncDecClassificationDatasetConfig(
111 manifest_filepath=None, shuffle=True, trim_silence=False
112 )
113 validation_ds: clf_cfg.EncDecClassificationDatasetConfig = clf_cfg.EncDecClassificationDatasetConfig(
114 manifest_filepath=None, shuffle=False
115 )
116 test_ds: clf_cfg.EncDecClassificationDatasetConfig = clf_cfg.EncDecClassificationDatasetConfig(
117 manifest_filepath=None, shuffle=False
118 )
119
120 # Optimizer / Scheduler config
121 optim: Optional[model_cfg.OptimConfig] = model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
122
123 # Model general component configs
124 preprocessor: AudioToMFCCPreprocessorConfig = AudioToMFCCPreprocessorConfig(window_size=0.025)
125 spec_augment: Optional[SpectrogramAugmentationConfig] = SpectrogramAugmentationConfig(
126 freq_masks=2, time_masks=2, freq_width=15, time_width=25, rect_masks=5, rect_time=25, rect_freq=15
127 )
128 crop_or_pad_augment: Optional[CropOrPadSpectrogramAugmentationConfig] = CropOrPadSpectrogramAugmentationConfig(
129 audio_length=128
130 )
131
132 encoder: ConvASREncoderConfig = ConvASREncoderConfig(activation="relu")
133 decoder: ConvASRDecoderClassificationConfig = ConvASRDecoderClassificationConfig()
134
135
136 @dataclass
137 class MatchboxNetVADModelConfig(MatchboxNetModelConfig):
...
[end of nemo/collections/asr/models/configs/matchboxnet_config.py]
[start of nemo/collections/nlp/models/token_classification/punctuation_capitalization_config.py]
...
...
201 of ``model.class_labels`` files."""
202
203
204 @dataclass
205 class PunctuationCapitalizationModelConfig:
206 """
207 A configuration of
208 :class:`~nemo.collections.nlp.models.token_classification.punctuation_capitalization_model.PunctuationCapitalizationModel`
209 model.
210
211 See an example of model config in
212 `nemo/examples/nlp/token_classification/conf/punctuation_capitalization_config.yaml
213 <https://github.com/NVIDIA/NeMo/blob/main/examples/nlp/token_classification/conf/punctuation_capitalization_config.yaml>`_
214
215 This config is a part of :class:`~PunctuationCapitalizationConfig`.
216 """
217
218 class_labels: ClassLabelsConfig = ClassLabelsConfig()
219 """A mandatory parameter containing a dictionary with names of label id files used in .nemo checkpoints.
220 These file names can also be used for passing label vocabularies to the model. If you wish to use ``class_labels``
221 for passing vocabularies, please provide path to vocabulary files in
222 ``model.common_dataset_parameters.label_vocab_dir`` parameter."""
223
224 common_dataset_parameters: Optional[CommonDatasetParametersConfig] = CommonDatasetParametersConfig()
225     """Label ids and loss mask information."""
226
227 train_ds: Optional[PunctuationCapitalizationTrainDataConfig] = None
228 """A configuration for creating training dataset and data loader."""
229
230 validation_ds: Optional[PunctuationCapitalizationEvalDataConfig] = None
231 """A configuration for creating validation datasets and data loaders."""
232
233 test_ds: Optional[PunctuationCapitalizationEvalDataConfig] = None
234 """A configuration for creating test datasets and data loaders."""
235
236 punct_head: HeadConfig = HeadConfig()
237 """A configuration for creating punctuation MLP head that is applied to a language model outputs."""
238
239 capit_head: HeadConfig = HeadConfig()
240 """A configuration for creating capitalization MLP head that is applied to a language model outputs."""
241
242 tokenizer: Any = TokenizerConfig()
243 """A configuration for source text tokenizer."""
244
245 language_model: LanguageModelConfig = LanguageModelConfig()
246 """A configuration of a BERT-like language model which serves as a model body."""
247
248 optim: Optional[Any] = None
249 """A configuration of optimizer and learning rate scheduler. There is much variability in such config. For
...
...
297 """
298
299 pretrained_model: Optional[str] = None
300 """Can be an NVIDIA's NGC cloud model or a path to a .nemo checkpoint. You can get list of possible cloud options
301 by calling method
302 :func:`~nemo.collections.nlp.models.token_classification.punctuation_capitalization_model.PunctuationCapitalizationModel.list_available_models`.
303 """
304
305 name: Optional[str] = 'Punctuation_and_Capitalization'
306 """A name of the model. Used for naming output directories and ``.nemo`` checkpoints."""
307
308 do_training: bool = True
309 """Whether to perform training of the model."""
310
311 do_testing: bool = False
312     """Whether to perform testing of the model."""
313
314 model: PunctuationCapitalizationModelConfig = PunctuationCapitalizationModelConfig()
315 """A configuration for the
316 :class:`~nemo.collections.nlp.models.token_classification.punctuation_capitalization_model.PunctuationCapitalizationModel`
317 model."""
318
319 trainer: Optional[TrainerConfig] = TrainerConfig()
320 """Contains ``Trainer`` Lightning class constructor parameters."""
321
322 exp_manager: Optional[ExpManagerConfig] = ExpManagerConfig(name=name, files_to_copy=[])
323 """A configuration with various NeMo training options such as output directories, resuming from checkpoint,
324 tensorboard and W&B logging, and so on. For possible options see :ref:`exp-manager-label`."""
325
326
327 @dataclass
328 class PunctuationCapitalizationLexicalAudioConfig(PunctuationCapitalizationConfig):
329 model: PunctuationCapitalizationLexicalAudioModelConfig = PunctuationCapitalizationLexicalAudioModelConfig()
330
331
332 def is_legacy_model_config(model_cfg: DictConfig) -> bool:
333 """
...
[end of nemo/collections/nlp/models/token_classification/punctuation_capitalization_config.py]
[start of tools/nemo_forced_aligner/align.py]
...
135 viterbi_device: Optional[str] = None
136 batch_size: int = 1
137 use_local_attention: bool = True
138 additional_segment_grouping_separator: Optional[str] = None
139 audio_filepath_parts_in_utt_id: int = 1
140
141 # Buffered chunked streaming configs
142 use_buffered_chunked_streaming: bool = False
143 chunk_len_in_secs: float = 1.6
144 total_buffer_in_secs: float = 4.0
145 chunk_batch_size: int = 32
146
147 # Cache aware streaming configs
148 simulate_cache_aware_streaming: Optional[bool] = False
149
150 # Output file configs
151 save_output_file_formats: List[str] = field(default_factory=lambda: ["ctm", "ass"])
152 ctm_file_config: CTMFileConfig = CTMFileConfig()
153 ass_file_config: ASSFileConfig = ASSFileConfig()
154
155
156 @hydra_runner(config_name="AlignmentConfig", schema=AlignmentConfig)
157 def main(cfg: AlignmentConfig):
...
[end of tools/nemo_forced_aligner/align.py]
[start of nemo/collections/asr/models/configs/quartznet_config.py]
...
...
160 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
161 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False)
162 ]
163 return config
164 # fmt: on
165
166
167 @dataclass
168 class JasperModelConfig(ctc_cfg.EncDecCTCConfig):
169 # Model global arguments
170 sample_rate: int = 16000
171 repeat: int = 1
172 dropout: float = 0.0
173 separable: bool = False
174 labels: List[str] = MISSING
175
176 # Dataset configs
177 train_ds: ctc_cfg.ASRDatasetConfig = ctc_cfg.ASRDatasetConfig(
178 manifest_filepath=None, shuffle=True, trim_silence=True
179 )
180 validation_ds: ctc_cfg.ASRDatasetConfig = ctc_cfg.ASRDatasetConfig(manifest_filepath=None, shuffle=False)
181 test_ds: ctc_cfg.ASRDatasetConfig = ctc_cfg.ASRDatasetConfig(manifest_filepath=None, shuffle=False)
182
183 # Optimizer / Scheduler config
184 optim: Optional[model_cfg.OptimConfig] = model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
185
186 # Model general component configs
187 preprocessor: AudioToMelSpectrogramPreprocessorConfig = AudioToMelSpectrogramPreprocessorConfig()
188 spec_augment: Optional[SpectrogramAugmentationConfig] = SpectrogramAugmentationConfig()
189 encoder: ConvASREncoderConfig = ConvASREncoderConfig(activation="relu")
190 decoder: ConvASRDecoderConfig = ConvASRDecoderConfig()
191
192
193 @dataclass
194 class QuartzNetModelConfig(JasperModelConfig):
...
[end of nemo/collections/asr/models/configs/quartznet_config.py]
[start of nemo/collections/common/tokenizers/en_ja_tokenizers.py]
...
...
53 return ' '.join(tokens)
54
55 def normalize(self, text) -> str:
56 # Normalization doesn't handle Japanese periods correctly;
57         # '。' becomes '.'.
58 if self.lang_id == 'en':
59 return self.normalizer.normalize(text)
60 else:
61 return text
62
63
64 class JaMecabProcessor:
65 """
66 Tokenizer, Detokenizer and Normalizer utilities for Japanese MeCab & English
67 """
68
69 def __init__(self):
70 self.mecab_tokenizer = MeCab.Tagger(ipadic.MECAB_ARGS + " -Owakati")
71
72 def detokenize(self, text: List[str]) -> str:
73 RE_WS_IN_FW = re.compile(
...
[end of nemo/collections/common/tokenizers/en_ja_tokenizers.py]
[start of nemo/collections/asr/metrics/rnnt_wer.py]
1 # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import copy
16 import re
17 from abc import abstractmethod
18 from dataclasses import dataclass, is_dataclass
19 from typing import Callable, Dict, List, Optional, Tuple, Union
20
21 import editdistance
22 import numpy as np
...
...
1285
1286 def compute(self):
1287 wer = self.scores.float() / self.words
1288 return wer, self.scores.detach(), self.words.detach()
1289
1290
1291 @dataclass
1292 class RNNTDecodingConfig:
1293 model_type: str = "rnnt" # one of "rnnt", "multiblank" or "tdt"
1294 strategy: str = "greedy_batch"
1295
1296 compute_hypothesis_token_set: bool = False
1297
1298 # preserve decoding alignments
1299 preserve_alignments: Optional[bool] = None
1300
1301 # confidence config
1302 confidence_cfg: ConfidenceConfig = ConfidenceConfig()
1303
1304 # RNNT Joint fused batch size
1305 fused_batch_size: Optional[int] = None
1306
1307 # compute RNNT time stamps
1308 compute_timestamps: Optional[bool] = None
1309
1310 # compute language IDs
1311 compute_langs: bool = False
1312
1313     # token representing word separator
1314 word_seperator: str = " "
1315
1316 # type of timestamps to calculate
1317 rnnt_timestamp_type: str = "all" # can be char, word or all for both
1318
1319 # greedy decoding config
1320 greedy: greedy_decode.GreedyRNNTInferConfig = greedy_decode.GreedyRNNTInferConfig()
1321
1322 # beam decoding config
1323 beam: beam_decode.BeamRNNTInferConfig = beam_decode.BeamRNNTInferConfig(beam_size=4)
1324
1325 # can be used to change temperature for decoding
1326 temperature: float = 1.0
1327
[end of nemo/collections/asr/metrics/rnnt_wer.py]
[start of nemo/collections/asr/modules/audio_preprocessing.py]
...
620 return augmented_spec
621
622
623 class CropOrPadSpectrogramAugmentation(NeuralModule):
624 """
625 Pad or Crop the incoming Spectrogram to a certain shape.
626
627 Args:
628 audio_length (int): the final number of timesteps that is required.
629 The signal will be either padded or cropped temporally to this
630 size.
631 """
632
633 def __init__(self, audio_length):
634 super(CropOrPadSpectrogramAugmentation, self).__init__()
635 self.audio_length = audio_length
636
637 @typecheck()
638 @torch.no_grad()
639 def forward(self, input_signal, length):
640 image = input_signal
...
[end of nemo/collections/asr/modules/audio_preprocessing.py]
[start of nemo/collections/asr/models/configs/k2_sequence_models_config.py]
...
...
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from dataclasses import dataclass
16
17 from nemo.collections.asr.models.configs.asr_models_config import EncDecCTCConfig
18 from nemo.collections.asr.parts.k2.classes import GraphModuleConfig as BackendConfig
19 from nemo.core.config.modelPT import NemoConfig
20
21
22 @dataclass
23 class GraphModuleConfig:
24 criterion_type: str = "ml"
25 loss_type: str = "ctc"
26 split_batch_size: int = 0
27 dec_type: str = "topo"
28 transcribe_training: bool = True
29 backend_cfg: BackendConfig = BackendConfig()
30
31
32 @dataclass
33 class EncDecK2SeqConfig(EncDecCTCConfig):
34 graph_module_cfg: GraphModuleConfig = GraphModuleConfig()
35
36
37 @dataclass
38 class EncDecK2SeqModelConfig(NemoConfig):
39 model: EncDecK2SeqConfig = EncDecK2SeqConfig()
40
[end of nemo/collections/asr/models/configs/k2_sequence_models_config.py]
[start of scripts/speech_recognition/confidence/benchmark_asr_confidence.py]
...
...
110 pretrained_name: Optional[str] = None # Name of a pretrained model
111 dataset_manifest: str = MISSING
112 output_dir: str = MISSING
113
114 # General configs
115 batch_size: int = 32
116 num_workers: int = 4
117
118 # Set `cuda` to int to define CUDA device. If 'None', will look for CUDA
119 # device anyway, and do inference on CPU only if CUDA device is not found.
120 # If `cuda` is a negative number, inference will be on CPU only.
121 cuda: Optional[int] = None
122 amp: bool = False
123 audio_type: str = "wav"
124
125 # Confidence configs
126 target_level: str = "auto" # Choices: "word", "token", "auto" (for both word- and token-level confidence)
127 confidence_cfg: ConfidenceConfig = ConfidenceConfig(preserve_word_confidence=True, preserve_token_confidence=True)
128 grid_params: Optional[str] = None # a dictionary with lists of parameters to iteratively benchmark on
129
130
131 @hydra_runner(config_name="ConfidenceBenchmarkingConfig", schema=ConfidenceBenchmarkingConfig)
...
[end of scripts/speech_recognition/confidence/benchmark_asr_confidence.py]
[start of nemo/collections/nlp/models/machine_translation/mt_enc_dec_config.py]
...
...
32 from nemo.core.config.modelPT import OptimConfig, SchedConfig
33
34
35 @dataclass
36 class MTSchedConfig(SchedConfig):
37 name: str = 'InverseSquareRootAnnealing'
38 warmup_ratio: Optional[float] = None
39 last_epoch: int = -1
40
41
42 # TODO: Refactor this dataclass to support more optimizers (it pins the optimizer to Adam-like optimizers).
43 @dataclass
44 class MTOptimConfig(OptimConfig):
45 name: str = 'adam'
46 lr: float = 1e-3
47 betas: Tuple[float, float] = (0.9, 0.98)
48 weight_decay: float = 0.0
49 sched: Optional[MTSchedConfig] = MTSchedConfig()
50
51
52 @dataclass
53 class MTEncDecModelConfig(EncDecNLPModelConfig):
...
...
60 len_pen: float = 0.0
61 src_language: Any = 'en' # Any = str or List[str]
62 tgt_language: Any = 'en' # Any = str or List[str]
63 find_unused_parameters: Optional[bool] = True
64 shared_tokenizer: Optional[bool] = True
65 multilingual: Optional[bool] = False
66 preproc_out_dir: Optional[str] = None
67 validate_input_ids: Optional[bool] = True
68 shared_embeddings: bool = False
69
70 # network architecture configuration
71 encoder_tokenizer: Any = MISSING
72 encoder: Any = MISSING
73
74 decoder_tokenizer: Any = MISSING
75 decoder: Any = MISSING
76
77 head: TokenClassifierConfig = TokenClassifierConfig(log_softmax=True)
78
79 # dataset configurations
80 train_ds: Optional[TranslationDataConfig] = TranslationDataConfig(
81 src_file_name=MISSING,
82 tgt_file_name=MISSING,
83 tokens_in_batch=512,
84 clean=True,
85 shuffle=True,
86 cache_ids=False,
87 use_cache=False,
88 )
89 validation_ds: Optional[TranslationDataConfig] = TranslationDataConfig(
90 src_file_name=MISSING,
91 tgt_file_name=MISSING,
92 tokens_in_batch=512,
93 clean=False,
94 shuffle=False,
95 cache_ids=False,
96 use_cache=False,
97 )
98 test_ds: Optional[TranslationDataConfig] = TranslationDataConfig(
99 src_file_name=MISSING,
100 tgt_file_name=MISSING,
101 tokens_in_batch=512,
102 clean=False,
103 shuffle=False,
104 cache_ids=False,
105 use_cache=False,
106 )
107 optim: Optional[OptimConfig] = MTOptimConfig()
108
109
110 @dataclass
111 class AAYNBaseConfig(MTEncDecModelConfig):
112
113 # Attention is All You Need Base Configuration
114 encoder_tokenizer: TokenizerConfig = TokenizerConfig(library='yttm')
115 decoder_tokenizer: TokenizerConfig = TokenizerConfig(library='yttm')
116
117 encoder: NeMoTransformerEncoderConfig = NeMoTransformerEncoderConfig(
118 library='nemo',
119 model_name=None,
120 pretrained=False,
121 hidden_size=512,
122 inner_size=2048,
123 num_layers=6,
124 num_attention_heads=8,
125 ffn_dropout=0.1,
126 attn_score_dropout=0.1,
127 attn_layer_dropout=0.1,
128 )
129
130 decoder: NeMoTransformerConfig = NeMoTransformerConfig(
131 library='nemo',
132 model_name=None,
133 pretrained=False,
134 hidden_size=512,
135 inner_size=2048,
136 num_layers=6,
137 num_attention_heads=8,
138 ffn_dropout=0.1,
139 attn_score_dropout=0.1,
140 attn_layer_dropout=0.1,
141 )
142
143
144 @dataclass
145 class MTBottleneckModelConfig(AAYNBaseConfig):
146 model_type: str = 'nll'
147 min_logv: float = -6
148 latent_size: int = -1 # -1 will take value of encoder hidden
149 non_recon_warmup_batches: int = 200000
150 recon_per_token: bool = True
151 log_timing: bool = True
152
153 encoder: NeMoTransformerBottleneckEncoderConfig = NeMoTransformerBottleneckEncoderConfig(
154 library='nemo',
155 model_name=None,
156 pretrained=False,
157 hidden_size=512,
158 inner_size=2048,
159 num_layers=6,
160 num_attention_heads=8,
161 ffn_dropout=0.1,
162 attn_score_dropout=0.1,
163 attn_layer_dropout=0.1,
164 arch='seq2seq',
165 hidden_steps=32,
166 hidden_blocks=1,
167 hidden_init_method='params',
168 )
169
170 decoder: NeMoTransformerBottleneckDecoderConfig = NeMoTransformerBottleneckDecoderConfig(
171 library='nemo',
172 model_name=None,
173 pretrained=False,
174 inner_size=2048,
175 num_layers=6,
176 num_attention_heads=8,
177 ffn_dropout=0.1,
178 attn_score_dropout=0.1,
179 attn_layer_dropout=0.1,
180 arch='seq2seq',
181 )
182
[end of nemo/collections/nlp/models/machine_translation/mt_enc_dec_config.py]
[start of nemo/utils/exp_manager.py]
...
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import glob
16 import os
17 import subprocess
18 import sys
19 import time
20 import warnings
21 from dataclasses import dataclass
22 from datetime import timedelta
23 from pathlib import Path
24 from shutil import copy, move
25 from typing import Any, Dict, List, Optional, Tuple, Union
...
...
132
133 # Log dir creation parameters
134 explicit_log_dir: Optional[str] = None
135 exp_dir: Optional[str] = None
136 name: Optional[str] = None
137 version: Optional[str] = None
138 use_datetime_version: Optional[bool] = True
139 resume_if_exists: Optional[bool] = False
140 resume_past_end: Optional[bool] = False
141 resume_ignore_no_checkpoint: Optional[bool] = False
142 resume_from_checkpoint: Optional[str] = None
143 # Logging parameters
144 create_tensorboard_logger: Optional[bool] = True
145 summary_writer_kwargs: Optional[Dict[Any, Any]] = None
146 create_wandb_logger: Optional[bool] = False
147 wandb_logger_kwargs: Optional[Dict[Any, Any]] = None
148 create_mlflow_logger: Optional[bool] = False
149 mlflow_logger_kwargs: Optional[MLFlowParams] = MLFlowParams()
150 create_dllogger_logger: Optional[bool] = False
151 dllogger_logger_kwargs: Optional[DLLoggerParams] = DLLoggerParams()
152 create_clearml_logger: Optional[bool] = False
153 clearml_logger_kwargs: Optional[ClearMLParams] = ClearMLParams()
154 # Checkpointing parameters
155 create_checkpoint_callback: Optional[bool] = True
156 checkpoint_callback_params: Optional[CallbackParams] = CallbackParams()
157 create_early_stopping_callback: Optional[bool] = False
158 early_stopping_callback_params: Optional[EarlyStoppingParams] = EarlyStoppingParams()
159 create_preemption_callback: Optional[bool] = True
160 # Additional exp_manager arguments
161 files_to_copy: Optional[List[str]] = None
162 # logs timing of train/val/test steps
163 log_step_timing: Optional[bool] = True
164 step_timing_kwargs: Optional[StepTimingParams] = StepTimingParams()
165 # Configures creation of log files for different ranks
166 log_local_rank_0_only: Optional[bool] = False
167 log_global_rank_0_only: Optional[bool] = False
168 # disable initial validation when resuming from a checkpoint saved during validation
169 disable_validation_on_resume: Optional[bool] = True
170 ema: Optional[EMAParams] = EMAParams()
171 # Wall clock time limit
172 max_time_per_run: Optional[str] = None
173     # time for non-zero ranks to sleep during initialization
174 seconds_to_sleep: float = 5
...
[end of nemo/utils/exp_manager.py]
[start of nemo/collections/asr/parts/submodules/ctc_beam_decoding.py]
...
...
588 beam_threshold: float = 20.0
589 unk_weight: float = -math.inf
590 sil_weight: float = 0.0
591
592
593 @dataclass
594 class BeamCTCInferConfig:
595 beam_size: int
596 search_type: str = 'default'
597 preserve_alignments: bool = False
598 compute_timestamps: bool = False
599 return_best_hypothesis: bool = True
600
601 beam_alpha: float = 1.0
602 beam_beta: float = 0.0
603 kenlm_path: Optional[str] = None
604
605 flashlight_cfg: Optional[FlashlightConfig] = FlashlightConfig()
606 pyctcdecode_cfg: Optional[PyCTCDecodeConfig] = PyCTCDecodeConfig()
607
[end of nemo/collections/asr/parts/submodules/ctc_beam_decoding.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
repo: NVIDIA/NeMo
base_commit: 15db83ec4a65e649d83b61d7a4a58d911586e853
problem_statement:
Ubuntu 22.04 Python 3.11 [asr]: multiple errors `dataclasses ValueError: mutable default * for field * is not allowed: use default_factory`
**Describe the bug**
After installing the latest stable release `1.19.1` from PyPI, or the current HEAD commit, with the `[asr]` extras, I get this error from `hydra-core==1.2.0` when trying to import `nemo.collections.asr`:
```
Traceback (most recent call last):
File "/home/alex/T7/src/speaker_verification/nemo_speaker/test.py", line 5, in <module>
import nemo.collections.asr as nemo_asr
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/__init__.py", line 15, in <module>
from nemo.collections.asr import data, losses, models, modules
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/losses/__init__.py", line 15, in <module>
from nemo.collections.asr.losses.angularloss import AngularSoftmaxLoss
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/losses/angularloss.py", line 18, in <module>
from nemo.core.classes import Loss, Typing, typecheck
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/core/__init__.py", line 16, in <module>
from nemo.core.classes import *
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/core/classes/__init__.py", line 16, in <module>
import hydra
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/__init__.py", line 5, in <module>
from hydra import utils
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/utils.py", line 8, in <module>
import hydra._internal.instantiate._instantiate2
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 12, in <module>
from hydra._internal.utils import _locate
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/_internal/utils.py", line 18, in <module>
from hydra.core.utils import get_valid_filename, validate_config_path
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/core/utils.py", line 20, in <module>
from hydra.core.hydra_config import HydraConfig
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/core/hydra_config.py", line 6, in <module>
from hydra.conf import HydraConf
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/conf/__init__.py", line 46, in <module>
class JobConf:
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/conf/__init__.py", line 75, in JobConf
@dataclass
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1230, in dataclass
return wrap(cls)
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1220, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 958, in _process_class
cls_fields.append(_get_field(cls, name, type, kw_only))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 815, in _get_field
raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'hydra.conf.JobConf.JobConfig.OverrideDirname'> for field override_dirname is not allowed: use default_factory
```
If I then manually upgrade `hydra-core` to the current latest (`1.3.2`), I get similar errors from `nemo-toolkit`:
```
Traceback (most recent call last):
File "/home/alex/T7/src/speaker_verification/nemo_speaker/test.py", line 5, in <module>
import nemo.collections.asr as nemo_asr
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/__init__.py", line 15, in <module>
from nemo.collections.asr import data, losses, models, modules
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/losses/__init__.py", line 16, in <module>
from nemo.collections.asr.losses.audio_losses import SDRLoss
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/losses/audio_losses.py", line 21, in <module>
from nemo.collections.asr.parts.preprocessing.features import make_seq_mask_like
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/__init__.py", line 16, in <module>
from nemo.collections.asr.parts.preprocessing.features import FeaturizerFactory, FilterbankFeatures, WaveformFeaturizer
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/features.py", line 44, in <module>
from nemo.collections.asr.parts.preprocessing.perturb import AudioAugmentor
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/perturb.py", line 50, in <module>
from nemo.collections.common.parts.preprocessing import collections, parsers
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/common/__init__.py", line 16, in <module>
from nemo.collections.common import data, losses, parts, tokenizers
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/common/parts/__init__.py", line 15, in <module>
from nemo.collections.common.parts.adapter_modules import LinearAdapter, LinearAdapterConfig
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/common/parts/adapter_modules.py", line 147, in <module>
@dataclass
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1230, in dataclass
return wrap(cls)
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1220, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 958, in _process_class
cls_fields.append(_get_field(cls, name, type, kw_only))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 815, in _get_field
raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'nemo.core.classes.mixins.adapter_mixin_strategies.ResidualAddAdapterStrategyConfig'> for field adapter_strategy is not allowed: use default_factory
```
It's easy to fix with a patch like this:
```
--- nemo/collections/common/parts/adapter_modules.py.orig 2023-08-04 13:55:53.464534800 +0200
+++ nemo/collections/common/parts/adapter_modules.py 2023-08-04 14:05:45.579537700 +0200
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass, is_dataclass
+from dataclasses import dataclass, is_dataclass, field
from typing import Any, Optional
from hydra.utils import instantiate
@@ -151,5 +151,5 @@ class LinearAdapterConfig:
activation: str = 'swish'
norm_position: str = 'pre'
dropout: float = 0.0
- adapter_strategy: Optional[Any] = adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
+ adapter_strategy: Optional[Any] = field(default_factory=adapter_mixin_strategies.ResidualAddAdapterStrategyConfig)
_target_: str = "{0}.{1}".format(LinearAdapter.__module__, LinearAdapter.__name__)
```
However then another error of the kind comes up:
```
Traceback (most recent call last):
File "/home/alex/T7/src/speaker_verification/nemo_speaker/test.py", line 5, in <module>
import nemo.collections.asr as nemo_asr
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/__init__.py", line 15, in <module>
from nemo.collections.asr import data, losses, models, modules
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/models/__init__.py", line 18, in <module>
from nemo.collections.asr.models.clustering_diarizer import ClusteringDiarizer
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/models/clustering_diarizer.py", line 29, in <module>
from nemo.collections.asr.metrics.der import score_labels
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/metrics/der.py", line 24, in <module>
from nemo.collections.asr.metrics.wer import word_error_rate
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/metrics/wer.py", line 27, in <module>
from nemo.collections.asr.parts.submodules import ctc_beam_decoding, ctc_greedy_decoding
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/parts/submodules/ctc_beam_decoding.py", line 593, in <module>
@dataclass
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1230, in dataclass
return wrap(cls)
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1220, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 958, in _process_class
cls_fields.append(_get_field(cls, name, type, kw_only))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 815, in _get_field
raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'nemo.collections.asr.parts.submodules.ctc_beam_decoding.FlashlightConfig'> for field flashlight_cfg is not allowed: use default_factory
```
This can also be fixed the same way, but all in all these issues appear to be pretty common across the code base.
It looks like NeMo isn't Python 3.11 ready, at least not the `asr` collection.
**Steps/Code to reproduce bug**
With Python 3.11
1. Install `nemo-toolkit[asr]` either 1.19.1, or current HEAD from git (b498d438fc4c35ebf364a9a1c5cd3e29a2c0fe50)
2. Run `import nemo.collections.asr`
**Expected behavior**
Import `nemo.collections.asr` without errors
**Environment overview (please complete the following information)**
- Environment location: Bare-metal
- Method of NeMo install: `pip install nemo-toolkit==1.19.1` or from source b498d438fc4c35ebf364a9a1c5cd3e29a2c0fe50
**Environment details**
- Ubuntu 22.04
- PyTorch version 2.0.1
- Python version 3.11.4
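For reference, the failure mode doesn't need NeMo at all. A minimal sketch of what the Python 3.11 `@dataclass` check rejects (the class names here are stand-ins for the NeMo config classes, not the real ones):

```python
import sys
from dataclasses import dataclass


@dataclass
class StrategyConfig:  # stand-in for ResidualAddAdapterStrategyConfig
    stochastic_depth: float = 0.0


# A plain dataclass sets __hash__ = None, so its instances are unhashable;
# Python 3.11+ rejects any unhashable value used as a field default.
try:
    @dataclass
    class AdapterConfig:  # stand-in for LinearAdapterConfig
        adapter_strategy: StrategyConfig = StrategyConfig()
    rejected = False
except ValueError:
    rejected = True

# rejected is True on Python >= 3.11, False on 3.10 and earlier
print(sys.version_info[:2], "rejected:", rejected)
```

On 3.10 the class definition goes through silently, which is why the problem only surfaced once people started running NeMo on 3.11.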
Seems to be similar to #7002
Interesting. The fix is easy but needs to be applied to basically every single place that has this constructor for our adapter configs. Let me see if I can update it. But no guarantees on how soon fixes will come in main.
Looking forward to it @titu1994! Thanks 🙂
@titu1994 I was looking to use NeMo speaker diarization with Python 3.11 and hit this dataclass issue. I patched everything involved in the specific code paths I needed: https://github.com/lmnt-com/NeMo/commit/d89acf9f0152e43dee29d7d1c4667ee34c26ffd7
I was using the neural diarizer as described in https://github.com/NVIDIA/NeMo/tree/main/examples/speaker_tasks/diarization
I'd be happy to upstream this if it's helpful.
I haven't checked whether this is backwards compatible with earlier Python/dataclass versions; do you know?
For reference, what led me to this issue, though it's duplicative to the above discussion:
[Similar error](https://github.com/huggingface/datasets/issues/5230)
[StackOverflow solution](https://stackoverflow.com/questions/53632152/why-cant-dataclasses-have-mutable-defaults-in-their-class-attributes-declaratio)
@shaper Thanks for sharing. For brevity, you don't really need a `lambda` when you don't pass any init parameters, like this:
```
field(default_factory=lambda: ConfidenceConfig())
```
You can just do
```
field(default_factory=ConfidenceConfig)
```
It's only needed when you do pass parameter(s), like
```
field(default_factory=lambda: beam_decode.BeamRNNTInferConfig(beam_size=4))
```
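Put together as a runnable sketch (with a stand-in config class, not the real NeMo ones):

```python
from dataclasses import dataclass, field


@dataclass
class BeamConfig:  # stand-in for e.g. BeamRNNTInferConfig
    beam_size: int = 1


@dataclass
class DecodingConfig:
    # no constructor arguments: the class itself is a zero-argument factory
    default_beam: BeamConfig = field(default_factory=BeamConfig)
    # constructor arguments needed: wrap the call in a lambda
    wide_beam: BeamConfig = field(default_factory=lambda: BeamConfig(beam_size=4))


cfg = DecodingConfig()
print(cfg.default_beam.beam_size, cfg.wide_beam.beam_size)  # 1 4
```

Either form gives each `DecodingConfig` instance its own fresh `BeamConfig`, which is the whole point of `default_factory`.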
I have the same issue. @tango4j suggested using one of the models from https://huggingface.co/spaces/hf-audio/open_asr_leaderboard, but I cannot import nemo.collections.asr:
```
Traceback (most recent call last):
File "/opt/pycharm-2022.3.3/plugins/python/helpers/pycharm/docrunner.py", line 138, in __run
exec(compile(example.source, filename, "single",
File "<doctest NeMoASR[2]>", line 1, in <module>
NeMoASR().apply_asr(file)
^^^^^^^^^
File "/home/cbj/python/cbj/cbj/transcribe/pretrained.py", line 504, in __init__
import nemo.collections.asr as nemo_asr
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/__init__.py", line 15, in <module>
from nemo.collections.asr import data, losses, models, modules
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/losses/__init__.py", line 16, in <module>
from nemo.collections.asr.losses.audio_losses import SDRLoss
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/losses/audio_losses.py", line 21, in <module>
from nemo.collections.asr.parts.preprocessing.features import make_seq_mask_like
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/__init__.py", line 16, in <module>
from nemo.collections.asr.parts.preprocessing.features import FeaturizerFactory, FilterbankFeatures, WaveformFeaturizer
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/features.py", line 44, in <module>
from nemo.collections.asr.parts.preprocessing.perturb import AudioAugmentor
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/perturb.py", line 50, in <module>
from nemo.collections.common.parts.preprocessing import collections, parsers
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/common/__init__.py", line 16, in <module>
from nemo.collections.common import data, losses, parts, tokenizers
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/common/parts/__init__.py", line 15, in <module>
from nemo.collections.common.parts.adapter_modules import LinearAdapter, LinearAdapterConfig
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/common/parts/adapter_modules.py", line 147, in <module>
@dataclass
^^^^^^^^^
File "/opt/py/2023/lib/python3.11/dataclasses.py", line 1230, in dataclass
return wrap(cls)
^^^^^^^^^
File "/opt/py/2023/lib/python3.11/dataclasses.py", line 1220, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/py/2023/lib/python3.11/dataclasses.py", line 958, in _process_class
cls_fields.append(_get_field(cls, name, type, kw_only))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/py/2023/lib/python3.11/dataclasses.py", line 815, in _get_field
raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'nemo.core.classes.mixins.adapter_mixin_strategies.ResidualAddAdapterStrategyConfig'> for field adapter_strategy is not allowed: use default_factory
```
For documentation (I had to search in the provided links):
Mutable defaults were never allowed in dataclasses (by convention), but Python 3.11 tightened the check: instead of rejecting only a hard-coded set of types (`dict`, `list`, `set`), it now rejects any default value that is unhashable, using hashability as the indicator for mutability.
An alternative to `default_factory` would be frozen dataclasses (whose instances are hashable), but I don't know whether the configs in this code base are mutated after construction.
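To make the hashability point concrete, a small sketch: a frozen dataclass instance is hashable and therefore still accepted as a class-level default on 3.11, while a regular one is not (class names here are illustrative, not from NeMo):

```python
import sys
from dataclasses import dataclass


@dataclass(frozen=True)  # frozen + eq=True -> __hash__ is generated
class FrozenCfg:
    x: int = 0


@dataclass  # eq=True, frozen=False -> __hash__ = None (unhashable)
class MutableCfg:
    x: int = 0


@dataclass
class UsesFrozen:
    cfg: FrozenCfg = FrozenCfg()  # hashable default: allowed on every version


try:
    @dataclass
    class UsesMutable:
        cfg: MutableCfg = MutableCfg()  # unhashable: ValueError on 3.11+
    mutable_rejected = False
except ValueError:
    mutable_rejected = True

print(UsesFrozen().cfg, "mutable rejected:", mutable_rejected)
```

Note that with a hashable default all `UsesFrozen` instances share the same `FrozenCfg` object; that is safe only because it is immutable, which is exactly why mutability matters for this check.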
You need to update to NeMo 1.20, omegaconf did a fix that should resolve this
I have NeMo 1.20.0.
I installed NeMo yesterday with `pip install nemo_toolkit` and `pip install pytorch_lightning`.
So it should be the newest PyPI version.
```
$ pip show nemo_toolkit
Name: nemo-toolkit
Version: 1.20.0
Summary: NeMo - a toolkit for Conversational AI
Home-page: https://github.com/nvidia/nemo
Author: NVIDIA
Author-email: nemo-toolkit@nvidia.com
License: Apache2
Location: /opt/py/2023/lib/python3.11/site-packages
Requires: huggingface-hub, numba, numpy, onnx, python-dateutil, ruamel.yaml, scikit-learn, setuptools, tensorboard, text-unidecode, torch, tqdm, wget, wrapt
Required-by:
$ pip show omegaconf
Name: omegaconf
Version: 2.3.0
Summary: A flexible configuration library
Home-page: https://github.com/omry/omegaconf
Author: Omry Yadan
Author-email: omry@yadan.net
License:
Location: /home/cbj/.local/lib/python3.11/site-packages
Requires: antlr4-python3-runtime, PyYAML
Required-by: hydra-core
$ python -c "import nemo.collections.asr as nemo_asr"
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/__init__.py", line 15, in <module>
from nemo.collections.asr import data, losses, models, modules
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/losses/__init__.py", line 16, in <module>
from nemo.collections.asr.losses.audio_losses import SDRLoss
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/losses/audio_losses.py", line 21, in <module>
from nemo.collections.asr.parts.preprocessing.features import make_seq_mask_like
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/__init__.py", line 16, in <module>
from nemo.collections.asr.parts.preprocessing.features import FeaturizerFactory, FilterbankFeatures, WaveformFeaturizer
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/features.py", line 44, in <module>
from nemo.collections.asr.parts.preprocessing.perturb import AudioAugmentor
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/perturb.py", line 50, in <module>
from nemo.collections.common.parts.preprocessing import collections, parsers
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/common/__init__.py", line 16, in <module>
from nemo.collections.common import data, losses, parts, tokenizers
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/common/parts/__init__.py", line 15, in <module>
from nemo.collections.common.parts.adapter_modules import LinearAdapter, LinearAdapterConfig
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/common/parts/adapter_modules.py", line 147, in <module>
@dataclass
^^^^^^^^^
File "/opt/py/2023/lib/python3.11/dataclasses.py", line 1230, in dataclass
return wrap(cls)
^^^^^^^^^
File "/opt/py/2023/lib/python3.11/dataclasses.py", line 1220, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/py/2023/lib/python3.11/dataclasses.py", line 958, in _process_class
cls_fields.append(_get_field(cls, name, type, kw_only))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/py/2023/lib/python3.11/dataclasses.py", line 815, in _get_field
raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'nemo.core.classes.mixins.adapter_mixin_strategies.ResidualAddAdapterStrategyConfig'> for field adapter_strategy is not allowed: use default_factory
```
Hmm ok I'll take a look
2023-10-03T19:14:38Z
<patch>
diff --git a/examples/asr/experimental/k2/align_speech_parallel.py b/examples/asr/experimental/k2/align_speech_parallel.py
--- a/examples/asr/experimental/k2/align_speech_parallel.py
+++ b/examples/asr/experimental/k2/align_speech_parallel.py
@@ -74,7 +74,7 @@
import os
-from dataclasses import dataclass, is_dataclass
+from dataclasses import dataclass, field, is_dataclass
from typing import Optional
import pytorch_lightning as ptl
@@ -94,12 +94,14 @@
@dataclass
class ParallelAlignmentConfig:
model: Optional[str] = None # name
- predict_ds: ASRDatasetConfig = ASRDatasetConfig(return_sample_id=True, num_workers=4)
- aligner_args: K2AlignerWrapperModelConfig = K2AlignerWrapperModelConfig()
+ predict_ds: ASRDatasetConfig = field(
+ default_factory=lambda: ASRDatasetConfig(return_sample_id=True, num_workers=4)
+ )
+ aligner_args: K2AlignerWrapperModelConfig = field(default_factory=lambda: K2AlignerWrapperModelConfig())
output_path: str = MISSING
model_stride: int = 8
- trainer: TrainerConfig = TrainerConfig(gpus=-1, accelerator="ddp")
+ trainer: TrainerConfig = field(default_factory=lambda: TrainerConfig(gpus=-1, accelerator="ddp"))
# there arguments will be ignored
return_predictions: bool = False
diff --git a/nemo/collections/asr/metrics/rnnt_wer.py b/nemo/collections/asr/metrics/rnnt_wer.py
--- a/nemo/collections/asr/metrics/rnnt_wer.py
+++ b/nemo/collections/asr/metrics/rnnt_wer.py
@@ -15,7 +15,7 @@
import copy
import re
from abc import abstractmethod
-from dataclasses import dataclass, is_dataclass
+from dataclasses import dataclass, field, is_dataclass
from typing import Callable, Dict, List, Optional, Tuple, Union
import editdistance
@@ -1299,7 +1299,7 @@ class RNNTDecodingConfig:
preserve_alignments: Optional[bool] = None
# confidence config
- confidence_cfg: ConfidenceConfig = ConfidenceConfig()
+ confidence_cfg: ConfidenceConfig = field(default_factory=lambda: ConfidenceConfig())
# RNNT Joint fused batch size
fused_batch_size: Optional[int] = None
@@ -1317,10 +1317,10 @@ class RNNTDecodingConfig:
rnnt_timestamp_type: str = "all" # can be char, word or all for both
# greedy decoding config
- greedy: greedy_decode.GreedyRNNTInferConfig = greedy_decode.GreedyRNNTInferConfig()
+ greedy: greedy_decode.GreedyRNNTInferConfig = field(default_factory=lambda: greedy_decode.GreedyRNNTInferConfig())
# beam decoding config
- beam: beam_decode.BeamRNNTInferConfig = beam_decode.BeamRNNTInferConfig(beam_size=4)
+ beam: beam_decode.BeamRNNTInferConfig = field(default_factory=lambda: beam_decode.BeamRNNTInferConfig(beam_size=4))
# can be used to change temperature for decoding
temperature: float = 1.0
diff --git a/nemo/collections/asr/metrics/wer.py b/nemo/collections/asr/metrics/wer.py
--- a/nemo/collections/asr/metrics/wer.py
+++ b/nemo/collections/asr/metrics/wer.py
@@ -14,7 +14,7 @@
import re
from abc import abstractmethod
-from dataclasses import dataclass, is_dataclass
+from dataclasses import dataclass, field, is_dataclass
from typing import Callable, Dict, List, Optional, Tuple, Union
import editdistance
@@ -1297,13 +1297,17 @@ class CTCDecodingConfig:
batch_dim_index: int = 0
# greedy decoding config
- greedy: ctc_greedy_decoding.GreedyCTCInferConfig = ctc_greedy_decoding.GreedyCTCInferConfig()
+ greedy: ctc_greedy_decoding.GreedyCTCInferConfig = field(
+ default_factory=lambda: ctc_greedy_decoding.GreedyCTCInferConfig()
+ )
# beam decoding config
- beam: ctc_beam_decoding.BeamCTCInferConfig = ctc_beam_decoding.BeamCTCInferConfig(beam_size=4)
+ beam: ctc_beam_decoding.BeamCTCInferConfig = field(
+ default_factory=lambda: ctc_beam_decoding.BeamCTCInferConfig(beam_size=4)
+ )
# confidence config
- confidence_cfg: ConfidenceConfig = ConfidenceConfig()
+ confidence_cfg: ConfidenceConfig = field(default_factory=lambda: ConfidenceConfig())
# can be used to change temperature for decoding
temperature: float = 1.0
diff --git a/nemo/collections/asr/models/configs/aligner_config.py b/nemo/collections/asr/models/configs/aligner_config.py
--- a/nemo/collections/asr/models/configs/aligner_config.py
+++ b/nemo/collections/asr/models/configs/aligner_config.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from nemo.collections.asr.parts.k2.classes import GraphModuleConfig
@@ -35,10 +35,10 @@ class AlignerWrapperModelConfig:
word_output: bool = True
cpu_decoding: bool = False
decode_batch_size: int = 0
- ctc_cfg: AlignerCTCConfig = AlignerCTCConfig()
- rnnt_cfg: AlignerRNNTConfig = AlignerRNNTConfig()
+ ctc_cfg: AlignerCTCConfig = field(default_factory=lambda: AlignerCTCConfig())
+ rnnt_cfg: AlignerRNNTConfig = field(default_factory=lambda: AlignerRNNTConfig())
@dataclass
class K2AlignerWrapperModelConfig(AlignerWrapperModelConfig):
- decoder_module_cfg: GraphModuleConfig = GraphModuleConfig()
+ decoder_module_cfg: GraphModuleConfig = field(default_factory=lambda: GraphModuleConfig())
diff --git a/nemo/collections/asr/models/configs/asr_models_config.py b/nemo/collections/asr/models/configs/asr_models_config.py
--- a/nemo/collections/asr/models/configs/asr_models_config.py
+++ b/nemo/collections/asr/models/configs/asr_models_config.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional
from omegaconf import MISSING
@@ -74,24 +74,32 @@ class EncDecCTCConfig(model_cfg.ModelConfig):
labels: List[str] = MISSING
# Dataset configs
- train_ds: ASRDatasetConfig = ASRDatasetConfig(manifest_filepath=None, shuffle=True)
- validation_ds: ASRDatasetConfig = ASRDatasetConfig(manifest_filepath=None, shuffle=False)
- test_ds: ASRDatasetConfig = ASRDatasetConfig(manifest_filepath=None, shuffle=False)
+ train_ds: ASRDatasetConfig = field(default_factory=lambda: ASRDatasetConfig(manifest_filepath=None, shuffle=True))
+ validation_ds: ASRDatasetConfig = field(
+ default_factory=lambda: ASRDatasetConfig(manifest_filepath=None, shuffle=False)
+ )
+ test_ds: ASRDatasetConfig = field(default_factory=lambda: ASRDatasetConfig(manifest_filepath=None, shuffle=False))
# Optimizer / Scheduler config
- optim: Optional[model_cfg.OptimConfig] = model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
+ optim: Optional[model_cfg.OptimConfig] = field(
+ default_factory=lambda: model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
+ )
# Model component configs
- preprocessor: AudioToMelSpectrogramPreprocessorConfig = AudioToMelSpectrogramPreprocessorConfig()
- spec_augment: Optional[SpectrogramAugmentationConfig] = SpectrogramAugmentationConfig()
- encoder: ConvASREncoderConfig = ConvASREncoderConfig()
- decoder: ConvASRDecoderConfig = ConvASRDecoderConfig()
- decoding: CTCDecodingConfig = CTCDecodingConfig()
+ preprocessor: AudioToMelSpectrogramPreprocessorConfig = field(
+ default_factory=lambda: AudioToMelSpectrogramPreprocessorConfig()
+ )
+ spec_augment: Optional[SpectrogramAugmentationConfig] = field(
+ default_factory=lambda: SpectrogramAugmentationConfig()
+ )
+ encoder: ConvASREncoderConfig = field(default_factory=lambda: ConvASREncoderConfig())
+ decoder: ConvASRDecoderConfig = field(default_factory=lambda: ConvASRDecoderConfig())
+ decoding: CTCDecodingConfig = field(default_factory=lambda: CTCDecodingConfig())
@dataclass
class EncDecCTCModelConfig(model_cfg.NemoConfig):
- model: EncDecCTCConfig = EncDecCTCConfig()
+ model: EncDecCTCConfig = field(default_factory=lambda: EncDecCTCConfig())
@dataclass
diff --git a/nemo/collections/asr/models/configs/classification_models_config.py b/nemo/collections/asr/models/configs/classification_models_config.py
--- a/nemo/collections/asr/models/configs/classification_models_config.py
+++ b/nemo/collections/asr/models/configs/classification_models_config.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional
from omegaconf import MISSING
@@ -72,30 +72,40 @@ class EncDecClassificationConfig(model_cfg.ModelConfig):
timesteps: int = MISSING
# Dataset configs
- train_ds: EncDecClassificationDatasetConfig = EncDecClassificationDatasetConfig(
- manifest_filepath=None, shuffle=True, trim_silence=False
+ train_ds: EncDecClassificationDatasetConfig = field(
+ default_factory=lambda: EncDecClassificationDatasetConfig(
+ manifest_filepath=None, shuffle=True, trim_silence=False
+ )
)
- validation_ds: EncDecClassificationDatasetConfig = EncDecClassificationDatasetConfig(
- manifest_filepath=None, shuffle=False
+ validation_ds: EncDecClassificationDatasetConfig = field(
+ default_factory=lambda: EncDecClassificationDatasetConfig(manifest_filepath=None, shuffle=False)
)
- test_ds: EncDecClassificationDatasetConfig = EncDecClassificationDatasetConfig(
- manifest_filepath=None, shuffle=False
+ test_ds: EncDecClassificationDatasetConfig = field(
+ default_factory=lambda: EncDecClassificationDatasetConfig(manifest_filepath=None, shuffle=False)
)
# Optimizer / Scheduler config
- optim: Optional[model_cfg.OptimConfig] = model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
+ optim: Optional[model_cfg.OptimConfig] = field(
+ default_factory=lambda: model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
+ )
# Model component configs
- preprocessor: AudioToMFCCPreprocessorConfig = AudioToMFCCPreprocessorConfig()
- spec_augment: Optional[SpectrogramAugmentationConfig] = SpectrogramAugmentationConfig()
- crop_or_pad_augment: Optional[CropOrPadSpectrogramAugmentationConfig] = CropOrPadSpectrogramAugmentationConfig(
- audio_length=timesteps
+ preprocessor: AudioToMFCCPreprocessorConfig = field(default_factory=lambda: AudioToMFCCPreprocessorConfig())
+ spec_augment: Optional[SpectrogramAugmentationConfig] = field(
+ default_factory=lambda: SpectrogramAugmentationConfig()
+ )
+ crop_or_pad_augment: Optional[CropOrPadSpectrogramAugmentationConfig] = field(
+ default_factory=lambda: CropOrPadSpectrogramAugmentationConfig(audio_length=-1)
)
- encoder: ConvASREncoderConfig = ConvASREncoderConfig()
- decoder: ConvASRDecoderClassificationConfig = ConvASRDecoderClassificationConfig()
+ encoder: ConvASREncoderConfig = field(default_factory=lambda: ConvASREncoderConfig())
+ decoder: ConvASRDecoderClassificationConfig = field(default_factory=lambda: ConvASRDecoderClassificationConfig())
+
+ def __post_init__(self):
+ if self.crop_or_pad_augment is not None:
+ self.crop_or_pad_augment.audio_length = self.timesteps
@dataclass
class EncDecClassificationModelConfig(model_cfg.NemoConfig):
- model: EncDecClassificationConfig = EncDecClassificationConfig()
+ model: EncDecClassificationConfig = field(default_factory=lambda: EncDecClassificationConfig())
diff --git a/nemo/collections/asr/models/configs/diarizer_config.py b/nemo/collections/asr/models/configs/diarizer_config.py
--- a/nemo/collections/asr/models/configs/diarizer_config.py
+++ b/nemo/collections/asr/models/configs/diarizer_config.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import asdict, dataclass
+from dataclasses import asdict, dataclass, field
from typing import Any, Dict, Optional, Tuple, Union
@@ -78,9 +78,9 @@ class ASRDiarizerParams(DiarizerComponentConfig):
@dataclass
class ASRDiarizerConfig(DiarizerComponentConfig):
model_path: Optional[str] = "stt_en_conformer_ctc_large"
- parameters: ASRDiarizerParams = ASRDiarizerParams()
- ctc_decoder_parameters: ASRDiarizerCTCDecoderParams = ASRDiarizerCTCDecoderParams()
- realigning_lm_parameters: ASRRealigningLMParams = ASRRealigningLMParams()
+ parameters: ASRDiarizerParams = field(default_factory=lambda: ASRDiarizerParams())
+ ctc_decoder_parameters: ASRDiarizerCTCDecoderParams = field(default_factory=lambda: ASRDiarizerCTCDecoderParams())
+ realigning_lm_parameters: ASRRealigningLMParams = field(default_factory=lambda: ASRRealigningLMParams())
@dataclass
@@ -102,7 +102,7 @@ class VADParams(DiarizerComponentConfig):
class VADConfig(DiarizerComponentConfig):
model_path: str = "vad_multilingual_marblenet" # .nemo local model path or pretrained VAD model name
external_vad_manifest: Optional[str] = None
- parameters: VADParams = VADParams()
+ parameters: VADParams = field(default_factory=lambda: VADParams())
@dataclass
@@ -121,7 +121,7 @@ class SpeakerEmbeddingsParams(DiarizerComponentConfig):
class SpeakerEmbeddingsConfig(DiarizerComponentConfig):
# .nemo local model path or pretrained model name (titanet_large, ecapa_tdnn or speakerverification_speakernet)
model_path: Optional[str] = None
- parameters: SpeakerEmbeddingsParams = SpeakerEmbeddingsParams()
+ parameters: SpeakerEmbeddingsParams = field(default_factory=lambda: SpeakerEmbeddingsParams())
@dataclass
@@ -142,7 +142,7 @@ class ClusteringParams(DiarizerComponentConfig):
@dataclass
class ClusteringConfig(DiarizerComponentConfig):
- parameters: ClusteringParams = ClusteringParams()
+ parameters: ClusteringParams = field(default_factory=lambda: ClusteringParams())
@dataclass
@@ -166,7 +166,7 @@ class MSDDParams(DiarizerComponentConfig):
@dataclass
class MSDDConfig(DiarizerComponentConfig):
model_path: Optional[str] = "diar_msdd_telephonic"
- parameters: MSDDParams = MSDDParams()
+ parameters: MSDDParams = field(default_factory=lambda: MSDDParams())
@dataclass
@@ -176,16 +176,16 @@ class DiarizerConfig(DiarizerComponentConfig):
oracle_vad: bool = False # If True, uses RTTM files provided in the manifest file to get VAD timestamps
collar: float = 0.25 # Collar value for scoring
ignore_overlap: bool = True # Consider or ignore overlap segments while scoring
- vad: VADConfig = VADConfig()
- speaker_embeddings: SpeakerEmbeddingsConfig = SpeakerEmbeddingsConfig()
- clustering: ClusteringConfig = ClusteringConfig()
- msdd_model: MSDDConfig = MSDDConfig()
- asr: ASRDiarizerConfig = ASRDiarizerConfig()
+ vad: VADConfig = field(default_factory=lambda: VADConfig())
+ speaker_embeddings: SpeakerEmbeddingsConfig = field(default_factory=lambda: SpeakerEmbeddingsConfig())
+ clustering: ClusteringConfig = field(default_factory=lambda: ClusteringConfig())
+ msdd_model: MSDDConfig = field(default_factory=lambda: MSDDConfig())
+ asr: ASRDiarizerConfig = field(default_factory=lambda: ASRDiarizerConfig())
@dataclass
class NeuralDiarizerInferenceConfig(DiarizerComponentConfig):
- diarizer: DiarizerConfig = DiarizerConfig()
+ diarizer: DiarizerConfig = field(default_factory=lambda: DiarizerConfig())
device: str = "cpu"
verbose: bool = False
batch_size: int = 64
diff --git a/nemo/collections/asr/models/configs/k2_sequence_models_config.py b/nemo/collections/asr/models/configs/k2_sequence_models_config.py
--- a/nemo/collections/asr/models/configs/k2_sequence_models_config.py
+++ b/nemo/collections/asr/models/configs/k2_sequence_models_config.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from nemo.collections.asr.models.configs.asr_models_config import EncDecCTCConfig
from nemo.collections.asr.parts.k2.classes import GraphModuleConfig as BackendConfig
@@ -26,14 +26,14 @@ class GraphModuleConfig:
split_batch_size: int = 0
dec_type: str = "topo"
transcribe_training: bool = True
- backend_cfg: BackendConfig = BackendConfig()
+ backend_cfg: BackendConfig = field(default_factory=lambda: BackendConfig())
@dataclass
class EncDecK2SeqConfig(EncDecCTCConfig):
- graph_module_cfg: GraphModuleConfig = GraphModuleConfig()
+ graph_module_cfg: GraphModuleConfig = field(default_factory=lambda: GraphModuleConfig())
@dataclass
class EncDecK2SeqModelConfig(NemoConfig):
- model: EncDecK2SeqConfig = EncDecK2SeqConfig()
+ model: EncDecK2SeqConfig = field(default_factory=lambda: EncDecK2SeqConfig())
diff --git a/nemo/collections/asr/models/configs/matchboxnet_config.py b/nemo/collections/asr/models/configs/matchboxnet_config.py
--- a/nemo/collections/asr/models/configs/matchboxnet_config.py
+++ b/nemo/collections/asr/models/configs/matchboxnet_config.py
@@ -107,30 +107,38 @@ class MatchboxNetModelConfig(clf_cfg.EncDecClassificationConfig):
labels: List[str] = MISSING
# Dataset configs
- train_ds: clf_cfg.EncDecClassificationDatasetConfig = clf_cfg.EncDecClassificationDatasetConfig(
- manifest_filepath=None, shuffle=True, trim_silence=False
+ train_ds: clf_cfg.EncDecClassificationDatasetConfig = field(
+ default_factory=lambda: clf_cfg.EncDecClassificationDatasetConfig(
+ manifest_filepath=None, shuffle=True, trim_silence=False
+ )
)
- validation_ds: clf_cfg.EncDecClassificationDatasetConfig = clf_cfg.EncDecClassificationDatasetConfig(
- manifest_filepath=None, shuffle=False
+ validation_ds: clf_cfg.EncDecClassificationDatasetConfig = field(
+ default_factory=lambda: clf_cfg.EncDecClassificationDatasetConfig(manifest_filepath=None, shuffle=False)
)
- test_ds: clf_cfg.EncDecClassificationDatasetConfig = clf_cfg.EncDecClassificationDatasetConfig(
- manifest_filepath=None, shuffle=False
+ test_ds: clf_cfg.EncDecClassificationDatasetConfig = field(
+ default_factory=lambda: clf_cfg.EncDecClassificationDatasetConfig(manifest_filepath=None, shuffle=False)
)
# Optimizer / Scheduler config
- optim: Optional[model_cfg.OptimConfig] = model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
+ optim: Optional[model_cfg.OptimConfig] = field(
+ default_factory=lambda: model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
+ )
# Model general component configs
- preprocessor: AudioToMFCCPreprocessorConfig = AudioToMFCCPreprocessorConfig(window_size=0.025)
- spec_augment: Optional[SpectrogramAugmentationConfig] = SpectrogramAugmentationConfig(
- freq_masks=2, time_masks=2, freq_width=15, time_width=25, rect_masks=5, rect_time=25, rect_freq=15
+ preprocessor: AudioToMFCCPreprocessorConfig = field(
+ default_factory=lambda: AudioToMFCCPreprocessorConfig(window_size=0.025)
+ )
+ spec_augment: Optional[SpectrogramAugmentationConfig] = field(
+ default_factory=lambda: SpectrogramAugmentationConfig(
+ freq_masks=2, time_masks=2, freq_width=15, time_width=25, rect_masks=5, rect_time=25, rect_freq=15
+ )
)
- crop_or_pad_augment: Optional[CropOrPadSpectrogramAugmentationConfig] = CropOrPadSpectrogramAugmentationConfig(
- audio_length=128
+ crop_or_pad_augment: Optional[CropOrPadSpectrogramAugmentationConfig] = field(
+ default_factory=lambda: CropOrPadSpectrogramAugmentationConfig(audio_length=128)
)
- encoder: ConvASREncoderConfig = ConvASREncoderConfig(activation="relu")
- decoder: ConvASRDecoderClassificationConfig = ConvASRDecoderClassificationConfig()
+ encoder: ConvASREncoderConfig = field(default_factory=lambda: ConvASREncoderConfig(activation="relu"))
+ decoder: ConvASRDecoderClassificationConfig = field(default_factory=lambda: ConvASRDecoderClassificationConfig())
@dataclass
diff --git a/nemo/collections/asr/models/configs/quartznet_config.py b/nemo/collections/asr/models/configs/quartznet_config.py
--- a/nemo/collections/asr/models/configs/quartznet_config.py
+++ b/nemo/collections/asr/models/configs/quartznet_config.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import Any, Callable, List, Optional
from omegaconf import MISSING
@@ -174,20 +174,30 @@ class JasperModelConfig(ctc_cfg.EncDecCTCConfig):
labels: List[str] = MISSING
# Dataset configs
- train_ds: ctc_cfg.ASRDatasetConfig = ctc_cfg.ASRDatasetConfig(
- manifest_filepath=None, shuffle=True, trim_silence=True
+ train_ds: ctc_cfg.ASRDatasetConfig = field(
+ default_factory=lambda: ctc_cfg.ASRDatasetConfig(manifest_filepath=None, shuffle=True, trim_silence=True)
+ )
+ validation_ds: ctc_cfg.ASRDatasetConfig = field(
+ default_factory=lambda: ctc_cfg.ASRDatasetConfig(manifest_filepath=None, shuffle=False)
+ )
+ test_ds: ctc_cfg.ASRDatasetConfig = field(
+ default_factory=lambda: ctc_cfg.ASRDatasetConfig(manifest_filepath=None, shuffle=False)
)
- validation_ds: ctc_cfg.ASRDatasetConfig = ctc_cfg.ASRDatasetConfig(manifest_filepath=None, shuffle=False)
- test_ds: ctc_cfg.ASRDatasetConfig = ctc_cfg.ASRDatasetConfig(manifest_filepath=None, shuffle=False)
# Optimizer / Scheduler config
- optim: Optional[model_cfg.OptimConfig] = model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
+ optim: Optional[model_cfg.OptimConfig] = field(
+ default_factory=lambda: model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
+ )
# Model general component configs
- preprocessor: AudioToMelSpectrogramPreprocessorConfig = AudioToMelSpectrogramPreprocessorConfig()
- spec_augment: Optional[SpectrogramAugmentationConfig] = SpectrogramAugmentationConfig()
- encoder: ConvASREncoderConfig = ConvASREncoderConfig(activation="relu")
- decoder: ConvASRDecoderConfig = ConvASRDecoderConfig()
+ preprocessor: AudioToMelSpectrogramPreprocessorConfig = field(
+ default_factory=lambda: AudioToMelSpectrogramPreprocessorConfig()
+ )
+ spec_augment: Optional[SpectrogramAugmentationConfig] = field(
+ default_factory=lambda: SpectrogramAugmentationConfig()
+ )
+ encoder: ConvASREncoderConfig = field(default_factory=lambda: ConvASREncoderConfig(activation="relu"))
+ decoder: ConvASRDecoderConfig = field(default_factory=lambda: ConvASRDecoderConfig())
@dataclass
diff --git a/nemo/collections/asr/modules/audio_preprocessing.py b/nemo/collections/asr/modules/audio_preprocessing.py
--- a/nemo/collections/asr/modules/audio_preprocessing.py
+++ b/nemo/collections/asr/modules/audio_preprocessing.py
@@ -634,6 +634,12 @@ def __init__(self, audio_length):
super(CropOrPadSpectrogramAugmentation, self).__init__()
self.audio_length = audio_length
+ if self.audio_length < 0:
+ raise ValueError(
+ 'audio_length must be non-negative. If using a dataclass with OmegaConf, '
+ 'please call OmegaConf.to_object(cfg) to call appropriate __post_init__ methods.'
+ )
+
@typecheck()
@torch.no_grad()
def forward(self, input_signal, length):
diff --git a/nemo/collections/asr/parts/k2/classes.py b/nemo/collections/asr/parts/k2/classes.py
--- a/nemo/collections/asr/parts/k2/classes.py
+++ b/nemo/collections/asr/parts/k2/classes.py
@@ -13,7 +13,7 @@
# limitations under the License.
from abc import ABC
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import Any, Optional, Tuple
import torch
@@ -43,7 +43,7 @@ class GraphModuleConfig:
topo_with_self_loops: bool = True
token_lm: Optional[Any] = None
intersect_pruned: bool = False
- intersect_conf: GraphIntersectDenseConfig = GraphIntersectDenseConfig()
+ intersect_conf: GraphIntersectDenseConfig = field(default_factory=lambda: GraphIntersectDenseConfig())
boost_coeff: float = 0.0
predictor_window_size: int = 0
predictor_step_size: int = 1
diff --git a/nemo/collections/asr/parts/submodules/adapters/multi_head_attention_adapter_module.py b/nemo/collections/asr/parts/submodules/adapters/multi_head_attention_adapter_module.py
--- a/nemo/collections/asr/parts/submodules/adapters/multi_head_attention_adapter_module.py
+++ b/nemo/collections/asr/parts/submodules/adapters/multi_head_attention_adapter_module.py
@@ -13,7 +13,7 @@
# limitations under the License.
import math
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import Any, Optional
import torch
@@ -183,7 +183,7 @@ class MultiHeadAttentionAdapterConfig:
n_feat: int
dropout_rate: float = 0.0
proj_dim: Optional[int] = None
- adapter_strategy: Optional[Any] = MHAResidualAddAdapterStrategyConfig()
+ adapter_strategy: Optional[Any] = field(default_factory=lambda: MHAResidualAddAdapterStrategyConfig())
_target_: str = "{0}.{1}".format(MultiHeadAttentionAdapter.__module__, MultiHeadAttentionAdapter.__name__)
@@ -287,7 +287,7 @@ class RelPositionMultiHeadAttentionAdapterConfig:
n_feat: int
dropout_rate: float = 0.0
proj_dim: Optional[int] = None
- adapter_strategy: Optional[Any] = MHAResidualAddAdapterStrategyConfig()
+ adapter_strategy: Optional[Any] = field(default_factory=lambda: MHAResidualAddAdapterStrategyConfig())
_target_: str = "{0}.{1}".format(
RelPositionMultiHeadAttentionAdapter.__module__, RelPositionMultiHeadAttentionAdapter.__name__
)
@@ -336,7 +336,9 @@ class PositionalEncodingAdapterConfig:
d_model: int
max_len: int = 5000
xscale: float = 1.0
- adapter_strategy: Optional[Any] = adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
+ adapter_strategy: Optional[Any] = field(
+ default_factory=lambda: adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
+ )
_target_: str = "{0}.{1}".format(PositionalEncodingAdapter.__module__, PositionalEncodingAdapter.__name__)
@@ -378,5 +380,7 @@ class RelPositionalEncodingAdapterConfig:
d_model: int
max_len: int = 5000
xscale: float = 1.0
- adapter_strategy: Optional[Any] = adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
+ adapter_strategy: Optional[Any] = field(
+ default_factory=lambda: adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
+ )
_target_: str = "{0}.{1}".format(RelPositionalEncodingAdapter.__module__, RelPositionalEncodingAdapter.__name__)
diff --git a/nemo/collections/asr/parts/submodules/ctc_beam_decoding.py b/nemo/collections/asr/parts/submodules/ctc_beam_decoding.py
--- a/nemo/collections/asr/parts/submodules/ctc_beam_decoding.py
+++ b/nemo/collections/asr/parts/submodules/ctc_beam_decoding.py
@@ -14,7 +14,7 @@
import math
import os
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import List, Optional, Tuple, Union
import torch
@@ -602,5 +602,5 @@ class BeamCTCInferConfig:
beam_beta: float = 0.0
kenlm_path: Optional[str] = None
- flashlight_cfg: Optional[FlashlightConfig] = FlashlightConfig()
- pyctcdecode_cfg: Optional[PyCTCDecodeConfig] = PyCTCDecodeConfig()
+ flashlight_cfg: Optional[FlashlightConfig] = field(default_factory=lambda: FlashlightConfig())
+ pyctcdecode_cfg: Optional[PyCTCDecodeConfig] = field(default_factory=lambda: PyCTCDecodeConfig())
diff --git a/nemo/collections/asr/parts/submodules/ctc_greedy_decoding.py b/nemo/collections/asr/parts/submodules/ctc_greedy_decoding.py
--- a/nemo/collections/asr/parts/submodules/ctc_greedy_decoding.py
+++ b/nemo/collections/asr/parts/submodules/ctc_greedy_decoding.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import List, Optional
import torch
@@ -253,7 +253,7 @@ class GreedyCTCInferConfig:
preserve_alignments: bool = False
compute_timestamps: bool = False
preserve_frame_confidence: bool = False
- confidence_method_cfg: Optional[ConfidenceMethodConfig] = ConfidenceMethodConfig()
+ confidence_method_cfg: Optional[ConfidenceMethodConfig] = field(default_factory=lambda: ConfidenceMethodConfig())
def __post_init__(self):
# OmegaConf.structured ensures that post_init check is always executed
diff --git a/nemo/collections/asr/parts/submodules/rnnt_greedy_decoding.py b/nemo/collections/asr/parts/submodules/rnnt_greedy_decoding.py
--- a/nemo/collections/asr/parts/submodules/rnnt_greedy_decoding.py
+++ b/nemo/collections/asr/parts/submodules/rnnt_greedy_decoding.py
@@ -26,7 +26,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import List, Optional, Tuple, Union
import numpy as np
@@ -2185,7 +2185,7 @@ class GreedyRNNTInferConfig:
max_symbols_per_step: Optional[int] = 10
preserve_alignments: bool = False
preserve_frame_confidence: bool = False
- confidence_method_cfg: Optional[ConfidenceMethodConfig] = ConfidenceMethodConfig()
+ confidence_method_cfg: Optional[ConfidenceMethodConfig] = field(default_factory=lambda: ConfidenceMethodConfig())
def __post_init__(self):
# OmegaConf.structured ensures that post_init check is always executed
@@ -2201,7 +2201,7 @@ class GreedyBatchedRNNTInferConfig:
max_symbols_per_step: Optional[int] = 10
preserve_alignments: bool = False
preserve_frame_confidence: bool = False
- confidence_method_cfg: Optional[ConfidenceMethodConfig] = ConfidenceMethodConfig()
+ confidence_method_cfg: Optional[ConfidenceMethodConfig] = field(default_factory=lambda: ConfidenceMethodConfig())
def __post_init__(self):
# OmegaConf.structured ensures that post_init check is always executed
diff --git a/nemo/collections/asr/parts/utils/asr_confidence_utils.py b/nemo/collections/asr/parts/utils/asr_confidence_utils.py
--- a/nemo/collections/asr/parts/utils/asr_confidence_utils.py
+++ b/nemo/collections/asr/parts/utils/asr_confidence_utils.py
@@ -14,7 +14,7 @@
import math
from abc import ABC, abstractmethod
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from functools import partial
from typing import List, Optional
@@ -175,7 +175,7 @@ class ConfidenceConfig:
preserve_word_confidence: bool = False
exclude_blank: bool = True
aggregation: str = "min"
- method_cfg: ConfidenceMethodConfig = ConfidenceMethodConfig()
+ method_cfg: ConfidenceMethodConfig = field(default_factory=lambda: ConfidenceMethodConfig())
def __post_init__(self):
# OmegaConf.structured ensures that post_init check is always executed
diff --git a/nemo/collections/common/parts/adapter_modules.py b/nemo/collections/common/parts/adapter_modules.py
--- a/nemo/collections/common/parts/adapter_modules.py
+++ b/nemo/collections/common/parts/adapter_modules.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass, is_dataclass
+from dataclasses import dataclass, field, is_dataclass
from typing import Any, Optional
from hydra.utils import instantiate
@@ -160,5 +160,7 @@ class LinearAdapterConfig:
activation: str = 'swish'
norm_position: str = 'pre'
dropout: float = 0.0
- adapter_strategy: Optional[Any] = adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
+ adapter_strategy: Optional[Any] = field(
+ default_factory=lambda: adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
+ )
_target_: str = "{0}.{1}".format(LinearAdapter.__module__, LinearAdapter.__name__)
diff --git a/nemo/collections/common/tokenizers/en_ja_tokenizers.py b/nemo/collections/common/tokenizers/en_ja_tokenizers.py
--- a/nemo/collections/common/tokenizers/en_ja_tokenizers.py
+++ b/nemo/collections/common/tokenizers/en_ja_tokenizers.py
@@ -14,11 +14,19 @@
import re
from typing import List
-import ipadic
-import MeCab
from pangu import spacing
from sacremoses import MosesDetokenizer, MosesPunctNormalizer, MosesTokenizer
+try:
+ import ipadic
+ import MeCab
+
+ HAVE_MECAB = True
+ HAVE_IPADIC = True
+except (ImportError, ModuleNotFoundError):
+ HAVE_MECAB = False
+ HAVE_IPADIC = False
+
class EnJaProcessor:
"""
@@ -67,6 +75,9 @@ class JaMecabProcessor:
"""
def __init__(self):
+ if not HAVE_MECAB or not HAVE_IPADIC:
+ raise ImportError("Please ensure that you have installed `MeCab` and `ipadic` to use JaMecabProcessor")
+
self.mecab_tokenizer = MeCab.Tagger(ipadic.MECAB_ARGS + " -Owakati")
def detokenize(self, text: List[str]) -> str:
diff --git a/nemo/collections/nlp/models/machine_translation/mt_enc_dec_config.py b/nemo/collections/nlp/models/machine_translation/mt_enc_dec_config.py
--- a/nemo/collections/nlp/models/machine_translation/mt_enc_dec_config.py
+++ b/nemo/collections/nlp/models/machine_translation/mt_enc_dec_config.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import Any, Optional, Tuple
from omegaconf.omegaconf import MISSING
@@ -46,7 +46,7 @@ class MTOptimConfig(OptimConfig):
lr: float = 1e-3
betas: Tuple[float, float] = (0.9, 0.98)
weight_decay: float = 0.0
- sched: Optional[MTSchedConfig] = MTSchedConfig()
+ sched: Optional[MTSchedConfig] = field(default_factory=lambda: MTSchedConfig())
@dataclass
@@ -74,70 +74,80 @@ class MTEncDecModelConfig(EncDecNLPModelConfig):
decoder_tokenizer: Any = MISSING
decoder: Any = MISSING
- head: TokenClassifierConfig = TokenClassifierConfig(log_softmax=True)
+ head: TokenClassifierConfig = field(default_factory=lambda: TokenClassifierConfig(log_softmax=True))
# dataset configurations
- train_ds: Optional[TranslationDataConfig] = TranslationDataConfig(
- src_file_name=MISSING,
- tgt_file_name=MISSING,
- tokens_in_batch=512,
- clean=True,
- shuffle=True,
- cache_ids=False,
- use_cache=False,
+ train_ds: Optional[TranslationDataConfig] = field(
+ default_factory=lambda: TranslationDataConfig(
+ src_file_name=MISSING,
+ tgt_file_name=MISSING,
+ tokens_in_batch=512,
+ clean=True,
+ shuffle=True,
+ cache_ids=False,
+ use_cache=False,
+ )
)
- validation_ds: Optional[TranslationDataConfig] = TranslationDataConfig(
- src_file_name=MISSING,
- tgt_file_name=MISSING,
- tokens_in_batch=512,
- clean=False,
- shuffle=False,
- cache_ids=False,
- use_cache=False,
+ validation_ds: Optional[TranslationDataConfig] = field(
+ default_factory=lambda: TranslationDataConfig(
+ src_file_name=MISSING,
+ tgt_file_name=MISSING,
+ tokens_in_batch=512,
+ clean=False,
+ shuffle=False,
+ cache_ids=False,
+ use_cache=False,
+ )
)
- test_ds: Optional[TranslationDataConfig] = TranslationDataConfig(
- src_file_name=MISSING,
- tgt_file_name=MISSING,
- tokens_in_batch=512,
- clean=False,
- shuffle=False,
- cache_ids=False,
- use_cache=False,
+ test_ds: Optional[TranslationDataConfig] = field(
+ default_factory=lambda: TranslationDataConfig(
+ src_file_name=MISSING,
+ tgt_file_name=MISSING,
+ tokens_in_batch=512,
+ clean=False,
+ shuffle=False,
+ cache_ids=False,
+ use_cache=False,
+ )
)
- optim: Optional[OptimConfig] = MTOptimConfig()
+ optim: Optional[OptimConfig] = field(default_factory=lambda: MTOptimConfig())
@dataclass
class AAYNBaseConfig(MTEncDecModelConfig):
# Attention is All You Need Base Configuration
- encoder_tokenizer: TokenizerConfig = TokenizerConfig(library='yttm')
- decoder_tokenizer: TokenizerConfig = TokenizerConfig(library='yttm')
-
- encoder: NeMoTransformerEncoderConfig = NeMoTransformerEncoderConfig(
- library='nemo',
- model_name=None,
- pretrained=False,
- hidden_size=512,
- inner_size=2048,
- num_layers=6,
- num_attention_heads=8,
- ffn_dropout=0.1,
- attn_score_dropout=0.1,
- attn_layer_dropout=0.1,
+ encoder_tokenizer: TokenizerConfig = field(default_factory=lambda: TokenizerConfig(library='yttm'))
+ decoder_tokenizer: TokenizerConfig = field(default_factory=lambda: TokenizerConfig(library='yttm'))
+
+ encoder: NeMoTransformerEncoderConfig = field(
+ default_factory=lambda: NeMoTransformerEncoderConfig(
+ library='nemo',
+ model_name=None,
+ pretrained=False,
+ hidden_size=512,
+ inner_size=2048,
+ num_layers=6,
+ num_attention_heads=8,
+ ffn_dropout=0.1,
+ attn_score_dropout=0.1,
+ attn_layer_dropout=0.1,
+ )
)
- decoder: NeMoTransformerConfig = NeMoTransformerConfig(
- library='nemo',
- model_name=None,
- pretrained=False,
- hidden_size=512,
- inner_size=2048,
- num_layers=6,
- num_attention_heads=8,
- ffn_dropout=0.1,
- attn_score_dropout=0.1,
- attn_layer_dropout=0.1,
+ decoder: NeMoTransformerConfig = field(
+ default_factory=lambda: NeMoTransformerConfig(
+ library='nemo',
+ model_name=None,
+ pretrained=False,
+ hidden_size=512,
+ inner_size=2048,
+ num_layers=6,
+ num_attention_heads=8,
+ ffn_dropout=0.1,
+ attn_score_dropout=0.1,
+ attn_layer_dropout=0.1,
+ )
)
@@ -150,32 +160,36 @@ class MTBottleneckModelConfig(AAYNBaseConfig):
recon_per_token: bool = True
log_timing: bool = True
- encoder: NeMoTransformerBottleneckEncoderConfig = NeMoTransformerBottleneckEncoderConfig(
- library='nemo',
- model_name=None,
- pretrained=False,
- hidden_size=512,
- inner_size=2048,
- num_layers=6,
- num_attention_heads=8,
- ffn_dropout=0.1,
- attn_score_dropout=0.1,
- attn_layer_dropout=0.1,
- arch='seq2seq',
- hidden_steps=32,
- hidden_blocks=1,
- hidden_init_method='params',
+ encoder: NeMoTransformerBottleneckEncoderConfig = field(
+ default_factory=lambda: NeMoTransformerBottleneckEncoderConfig(
+ library='nemo',
+ model_name=None,
+ pretrained=False,
+ hidden_size=512,
+ inner_size=2048,
+ num_layers=6,
+ num_attention_heads=8,
+ ffn_dropout=0.1,
+ attn_score_dropout=0.1,
+ attn_layer_dropout=0.1,
+ arch='seq2seq',
+ hidden_steps=32,
+ hidden_blocks=1,
+ hidden_init_method='params',
+ )
)
- decoder: NeMoTransformerBottleneckDecoderConfig = NeMoTransformerBottleneckDecoderConfig(
- library='nemo',
- model_name=None,
- pretrained=False,
- inner_size=2048,
- num_layers=6,
- num_attention_heads=8,
- ffn_dropout=0.1,
- attn_score_dropout=0.1,
- attn_layer_dropout=0.1,
- arch='seq2seq',
+ decoder: NeMoTransformerBottleneckDecoderConfig = field(
+ default_factory=lambda: NeMoTransformerBottleneckDecoderConfig(
+ library='nemo',
+ model_name=None,
+ pretrained=False,
+ inner_size=2048,
+ num_layers=6,
+ num_attention_heads=8,
+ ffn_dropout=0.1,
+ attn_score_dropout=0.1,
+ attn_layer_dropout=0.1,
+ arch='seq2seq',
+ )
)
diff --git a/nemo/collections/nlp/models/token_classification/punctuation_capitalization_config.py b/nemo/collections/nlp/models/token_classification/punctuation_capitalization_config.py
--- a/nemo/collections/nlp/models/token_classification/punctuation_capitalization_config.py
+++ b/nemo/collections/nlp/models/token_classification/punctuation_capitalization_config.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import Any, Dict, Optional
from omegaconf.omegaconf import MISSING, DictConfig, OmegaConf, open_dict
@@ -215,13 +215,15 @@ class PunctuationCapitalizationModelConfig:
This config is a part of :class:`~PunctuationCapitalizationConfig`.
"""
- class_labels: ClassLabelsConfig = ClassLabelsConfig()
+ class_labels: ClassLabelsConfig = field(default_factory=lambda: ClassLabelsConfig())
"""A mandatory parameter containing a dictionary with names of label id files used in .nemo checkpoints.
These file names can also be used for passing label vocabularies to the model. If you wish to use ``class_labels``
for passing vocabularies, please provide path to vocabulary files in
``model.common_dataset_parameters.label_vocab_dir`` parameter."""
- common_dataset_parameters: Optional[CommonDatasetParametersConfig] = CommonDatasetParametersConfig()
+ common_dataset_parameters: Optional[CommonDatasetParametersConfig] = field(
+ default_factory=lambda: CommonDatasetParametersConfig()
+ )
"""Label ids and loss mask information information."""
train_ds: Optional[PunctuationCapitalizationTrainDataConfig] = None
@@ -233,16 +235,16 @@ class PunctuationCapitalizationModelConfig:
test_ds: Optional[PunctuationCapitalizationEvalDataConfig] = None
"""A configuration for creating test datasets and data loaders."""
- punct_head: HeadConfig = HeadConfig()
+ punct_head: HeadConfig = field(default_factory=lambda: HeadConfig())
"""A configuration for creating punctuation MLP head that is applied to a language model outputs."""
- capit_head: HeadConfig = HeadConfig()
+ capit_head: HeadConfig = field(default_factory=lambda: HeadConfig())
"""A configuration for creating capitalization MLP head that is applied to a language model outputs."""
- tokenizer: Any = TokenizerConfig()
+ tokenizer: Any = field(default_factory=lambda: TokenizerConfig())
"""A configuration for source text tokenizer."""
- language_model: LanguageModelConfig = LanguageModelConfig()
+ language_model: LanguageModelConfig = field(default_factory=lambda: LanguageModelConfig())
"""A configuration of a BERT-like language model which serves as a model body."""
optim: Optional[Any] = None
@@ -311,22 +313,30 @@ class PunctuationCapitalizationConfig(NemoConfig):
do_testing: bool = False
"""Whether ot perform testing of the model."""
- model: PunctuationCapitalizationModelConfig = PunctuationCapitalizationModelConfig()
+ model: PunctuationCapitalizationModelConfig = field(default_factory=lambda: PunctuationCapitalizationModelConfig())
"""A configuration for the
:class:`~nemo.collections.nlp.models.token_classification.punctuation_capitalization_model.PunctuationCapitalizationModel`
model."""
- trainer: Optional[TrainerConfig] = TrainerConfig()
+ trainer: Optional[TrainerConfig] = field(default_factory=lambda: TrainerConfig())
"""Contains ``Trainer`` Lightning class constructor parameters."""
- exp_manager: Optional[ExpManagerConfig] = ExpManagerConfig(name=name, files_to_copy=[])
+ exp_manager: Optional[ExpManagerConfig] = field(
+ default_factory=lambda: ExpManagerConfig(name=None, files_to_copy=[])
+ )
"""A configuration with various NeMo training options such as output directories, resuming from checkpoint,
tensorboard and W&B logging, and so on. For possible options see :ref:`exp-manager-label`."""
+ def __post_init__(self):
+ if self.exp_manager is not None:
+ self.exp_manager.name = self.name
+
@dataclass
class PunctuationCapitalizationLexicalAudioConfig(PunctuationCapitalizationConfig):
- model: PunctuationCapitalizationLexicalAudioModelConfig = PunctuationCapitalizationLexicalAudioModelConfig()
+ model: PunctuationCapitalizationLexicalAudioModelConfig = field(
+ default_factory=lambda: PunctuationCapitalizationLexicalAudioModelConfig()
+ )
def is_legacy_model_config(model_cfg: DictConfig) -> bool:
diff --git a/nemo/collections/nlp/modules/common/megatron/megatron_encoders.py b/nemo/collections/nlp/modules/common/megatron/megatron_encoders.py
--- a/nemo/collections/nlp/modules/common/megatron/megatron_encoders.py
+++ b/nemo/collections/nlp/modules/common/megatron/megatron_encoders.py
@@ -13,7 +13,6 @@
# limitations under the License.
"""Transformer based language model."""
-from MeCab import Model
from nemo.collections.nlp.modules.common.megatron.megatron_perceiver_encoders import MegatronPerceiverEncoderModule
from nemo.collections.nlp.modules.common.megatron.megatron_transformer_encoder import MegatronTransformerEncoderModule
from nemo.collections.nlp.modules.common.megatron.retrieval_transformer import (
@@ -25,6 +24,13 @@
scaled_init_method_normal,
)
+try:
+ from MeCab import Model
+
+ HAVE_MECAB = True
+except (ImportError, ModuleNotFoundError):
+ HAVE_MECAB = False
+
try:
from apex.transformer.enums import AttnMaskType, ModelType
diff --git a/nemo/collections/tts/models/fastpitch.py b/nemo/collections/tts/models/fastpitch.py
--- a/nemo/collections/tts/models/fastpitch.py
+++ b/nemo/collections/tts/models/fastpitch.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import contextlib
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from pathlib import Path
from typing import List, Optional
@@ -70,12 +70,12 @@ class TextTokenizer:
apostrophe: bool = True
pad_with_space: bool = True
add_blank_at: bool = True
- g2p: G2PConfig = G2PConfig()
+ g2p: G2PConfig = field(default_factory=lambda: G2PConfig())
@dataclass
class TextTokenizerConfig:
- text_tokenizer: TextTokenizer = TextTokenizer()
+ text_tokenizer: TextTokenizer = field(default_factory=lambda: TextTokenizer())
class FastPitchModel(SpectrogramGenerator, Exportable, FastPitchAdapterModelMixin):
diff --git a/nemo/collections/tts/models/tacotron2.py b/nemo/collections/tts/models/tacotron2.py
--- a/nemo/collections/tts/models/tacotron2.py
+++ b/nemo/collections/tts/models/tacotron2.py
@@ -13,7 +13,7 @@
# limitations under the License.
import contextlib
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional
import torch
@@ -53,7 +53,7 @@ class Preprocessor:
@dataclass
class Tacotron2Config:
- preprocessor: Preprocessor = Preprocessor()
+ preprocessor: Preprocessor = field(default_factory=lambda: Preprocessor())
encoder: Dict[Any, Any] = MISSING
decoder: Dict[Any, Any] = MISSING
postnet: Dict[Any, Any] = MISSING
diff --git a/nemo/core/config/modelPT.py b/nemo/core/config/modelPT.py
--- a/nemo/core/config/modelPT.py
+++ b/nemo/core/config/modelPT.py
@@ -58,11 +58,13 @@ class HydraConfig:
class NemoConfig:
name: str = MISSING
model: ModelConfig = MISSING
- trainer: config.TrainerConfig = config.TrainerConfig(
- strategy="ddp", enable_checkpointing=False, logger=False, log_every_n_steps=1, accelerator='gpu'
+ trainer: config.TrainerConfig = field(
+ default_factory=lambda: config.TrainerConfig(
+ strategy="ddp", enable_checkpointing=False, logger=False, log_every_n_steps=1, accelerator='gpu'
+ )
)
- exp_manager: Optional[Any] = exp_manager.ExpManagerConfig()
- hydra: HydraConfig = HydraConfig()
+ exp_manager: Optional[Any] = field(default_factory=lambda: exp_manager.ExpManagerConfig())
+ hydra: HydraConfig = field(default_factory=lambda: HydraConfig())
class ModelConfigBuilder:
diff --git a/nemo/utils/exp_manager.py b/nemo/utils/exp_manager.py
--- a/nemo/utils/exp_manager.py
+++ b/nemo/utils/exp_manager.py
@@ -18,7 +18,7 @@
import sys
import time
import warnings
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from datetime import timedelta
from pathlib import Path
from shutil import copy, move
@@ -146,28 +146,30 @@ class ExpManagerConfig:
create_wandb_logger: Optional[bool] = False
wandb_logger_kwargs: Optional[Dict[Any, Any]] = None
create_mlflow_logger: Optional[bool] = False
- mlflow_logger_kwargs: Optional[MLFlowParams] = MLFlowParams()
+ mlflow_logger_kwargs: Optional[MLFlowParams] = field(default_factory=lambda: MLFlowParams())
create_dllogger_logger: Optional[bool] = False
- dllogger_logger_kwargs: Optional[DLLoggerParams] = DLLoggerParams()
+ dllogger_logger_kwargs: Optional[DLLoggerParams] = field(default_factory=lambda: DLLoggerParams())
create_clearml_logger: Optional[bool] = False
- clearml_logger_kwargs: Optional[ClearMLParams] = ClearMLParams()
+ clearml_logger_kwargs: Optional[ClearMLParams] = field(default_factory=lambda: ClearMLParams())
# Checkpointing parameters
create_checkpoint_callback: Optional[bool] = True
- checkpoint_callback_params: Optional[CallbackParams] = CallbackParams()
+ checkpoint_callback_params: Optional[CallbackParams] = field(default_factory=lambda: CallbackParams())
create_early_stopping_callback: Optional[bool] = False
- early_stopping_callback_params: Optional[EarlyStoppingParams] = EarlyStoppingParams()
+ early_stopping_callback_params: Optional[EarlyStoppingParams] = field(
+ default_factory=lambda: EarlyStoppingParams()
+ )
create_preemption_callback: Optional[bool] = True
# Additional exp_manager arguments
files_to_copy: Optional[List[str]] = None
# logs timing of train/val/test steps
log_step_timing: Optional[bool] = True
- step_timing_kwargs: Optional[StepTimingParams] = StepTimingParams()
+ step_timing_kwargs: Optional[StepTimingParams] = field(default_factory=lambda: StepTimingParams())
# Configures creation of log files for different ranks
log_local_rank_0_only: Optional[bool] = False
log_global_rank_0_only: Optional[bool] = False
# disable initial validation when resuming from a checkpoint saved during validation
disable_validation_on_resume: Optional[bool] = True
- ema: Optional[EMAParams] = EMAParams()
+ ema: Optional[EMAParams] = field(default_factory=lambda: EMAParams())
# Wall clock time limit
max_time_per_run: Optional[str] = None
# time to sleep non 0 ranks during initialization
diff --git a/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram.py b/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram.py
--- a/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram.py
+++ b/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram.py
@@ -112,14 +112,14 @@ class EvalBeamSearchNGramConfig:
beam_beta: List[float] = field(default_factory=lambda: [0.0]) # The beta parameter or list of the betas for the beam search decoding
decoding_strategy: str = "beam"
- decoding: ctc_beam_decoding.BeamCTCInferConfig = ctc_beam_decoding.BeamCTCInferConfig(beam_size=128)
+ decoding: ctc_beam_decoding.BeamCTCInferConfig = field(default_factory=lambda: ctc_beam_decoding.BeamCTCInferConfig(beam_size=128))
- text_processing: Optional[TextProcessingConfig] = TextProcessingConfig(
+ text_processing: Optional[TextProcessingConfig] = field(default_factory=lambda: TextProcessingConfig(
punctuation_marks = ".,?",
separate_punctuation = False,
do_lowercase = False,
rm_punctuation = False,
- )
+ ))
# fmt: on
diff --git a/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram_transducer.py b/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram_transducer.py
--- a/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram_transducer.py
+++ b/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram_transducer.py
@@ -115,7 +115,7 @@ class EvalBeamSearchNGramConfig:
hat_subtract_ilm: bool = False
hat_ilm_weight: List[float] = field(default_factory=lambda: [0.0])
- decoding: rnnt_beam_decoding.BeamRNNTInferConfig = rnnt_beam_decoding.BeamRNNTInferConfig(beam_size=128)
+ decoding: rnnt_beam_decoding.BeamRNNTInferConfig = field(default_factory=lambda: rnnt_beam_decoding.BeamRNNTInferConfig(beam_size=128))
# fmt: on
diff --git a/scripts/confidence_ensembles/build_ensemble.py b/scripts/confidence_ensembles/build_ensemble.py
--- a/scripts/confidence_ensembles/build_ensemble.py
+++ b/scripts/confidence_ensembles/build_ensemble.py
@@ -75,7 +75,7 @@
import sys
import tempfile
from copy import deepcopy
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from pathlib import Path
from typing import Dict, List, Optional, Tuple
@@ -209,19 +209,23 @@ class BuildEnsembleConfig:
random_seed: int = 0 # for reproducibility
# default confidence, can override
- confidence: ConfidenceConfig = ConfidenceConfig(
- # we keep frame confidences and apply aggregation manually to get full-utterance confidence
- preserve_frame_confidence=True,
- exclude_blank=True,
- aggregation="mean",
- method_cfg=ConfidenceMethodConfig(name="entropy", entropy_type="renyi", alpha=0.25, entropy_norm="lin",),
+ confidence: ConfidenceConfig = field(
+ default_factory=lambda: ConfidenceConfig(
+ # we keep frame confidences and apply aggregation manually to get full-utterance confidence
+ preserve_frame_confidence=True,
+ exclude_blank=True,
+ aggregation="mean",
+ measure_cfg=ConfidenceMethodConfig(name="entropy", entropy_type="renyi", alpha=0.25, entropy_norm="lin",),
+ )
)
temperature: float = 1.0
# this is optional, but can be used to change any aspect of the transcription
# config, such as batch size or amp usage. Note that model, data and confidence
# will be overriden by this script
- transcription: transcribe_speech.TranscriptionConfig = transcribe_speech.TranscriptionConfig()
+ transcription: transcribe_speech.TranscriptionConfig = field(
+ default_factory=lambda: transcribe_speech.TranscriptionConfig()
+ )
# set to True to tune the confidence.
# requires dev manifests to be specified for each model
@@ -229,12 +233,14 @@ class BuildEnsembleConfig:
# used to specify what to tune over. By default runs tuning over some
# reasonalbe grid, so that it does not take forever.
# Can be changed as needed
- tune_confidence_config: TuneConfidenceConfig = TuneConfidenceConfig()
+ tune_confidence_config: TuneConfidenceConfig = field(default_factory=lambda: TuneConfidenceConfig())
# very fast to tune and can be important in case of imbalanced datasets
# will automatically set to False if dev data is not available
tune_logistic_regression: bool = True
- tune_logistic_regression_config: TuneLogisticRegressionConfig = TuneLogisticRegressionConfig()
+ tune_logistic_regression_config: TuneLogisticRegressionConfig = field(
+ default_factory=lambda: TuneLogisticRegressionConfig()
+ )
def __post_init__(self):
"""Checking that if any dev data is provided, all are provided.
diff --git a/scripts/speech_recognition/confidence/benchmark_asr_confidence.py b/scripts/speech_recognition/confidence/benchmark_asr_confidence.py
--- a/scripts/speech_recognition/confidence/benchmark_asr_confidence.py
+++ b/scripts/speech_recognition/confidence/benchmark_asr_confidence.py
@@ -14,7 +14,7 @@
import json
import os
-from dataclasses import dataclass, is_dataclass
+from dataclasses import dataclass, field, is_dataclass
from pathlib import Path
from typing import Optional
@@ -124,7 +124,9 @@ class ConfidenceBenchmarkingConfig:
# Confidence configs
target_level: str = "auto" # Choices: "word", "token", "auto" (for both word- and token-level confidence)
- confidence_cfg: ConfidenceConfig = ConfidenceConfig(preserve_word_confidence=True, preserve_token_confidence=True)
+ confidence_cfg: ConfidenceConfig = field(
+ default_factory=lambda: ConfidenceConfig(preserve_word_confidence=True, preserve_token_confidence=True)
+ )
grid_params: Optional[str] = None # a dictionary with lists of parameters to iteratively benchmark on
diff --git a/scripts/speech_recognition/convert_to_tarred_audio_dataset.py b/scripts/speech_recognition/convert_to_tarred_audio_dataset.py
--- a/scripts/speech_recognition/convert_to_tarred_audio_dataset.py
+++ b/scripts/speech_recognition/convert_to_tarred_audio_dataset.py
@@ -202,7 +202,7 @@ class ASRTarredDatasetMetadata:
num_samples_per_shard: Optional[int] = None
is_concatenated_manifest: bool = False
- dataset_config: Optional[ASRTarredDatasetConfig] = ASRTarredDatasetConfig()
+ dataset_config: Optional[ASRTarredDatasetConfig] = field(default_factory=lambda: ASRTarredDatasetConfig())
history: Optional[List[Any]] = field(default_factory=lambda: [])
def __post_init__(self):
diff --git a/tools/nemo_forced_aligner/align.py b/tools/nemo_forced_aligner/align.py
--- a/tools/nemo_forced_aligner/align.py
+++ b/tools/nemo_forced_aligner/align.py
@@ -149,8 +149,8 @@ class AlignmentConfig:
# Output file configs
save_output_file_formats: List[str] = field(default_factory=lambda: ["ctm", "ass"])
- ctm_file_config: CTMFileConfig = CTMFileConfig()
- ass_file_config: ASSFileConfig = ASSFileConfig()
+ ctm_file_config: CTMFileConfig = field(default_factory=lambda: CTMFileConfig())
+ ass_file_config: ASSFileConfig = field(default_factory=lambda: ASSFileConfig())
@hydra_runner(config_name="AlignmentConfig", schema=AlignmentConfig)
</patch>
</patch>
|
diff --git a/tests/collections/asr/test_text_to_text_dataset.py b/tests/collections/asr/test_text_to_text_dataset.py
--- a/tests/collections/asr/test_text_to_text_dataset.py
+++ b/tests/collections/asr/test_text_to_text_dataset.py
@@ -15,7 +15,7 @@
import json
import multiprocessing
import os
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from pathlib import Path
import pytest
@@ -118,7 +118,7 @@ class TextTokenizerCfg:
apostrophe: bool = True
pad_with_space: bool = True
add_blank_at: bool = True
- g2p: G2PConfig = G2PConfig()
+ g2p: G2PConfig = field(default_factory=lambda: G2PConfig())
config = OmegaConf.create(OmegaConf.to_yaml(TextTokenizerCfg()))
return instantiate(config)
|
1.0
| |||
slackapi__python-slack-events-api-71
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Passing Flask app proxy as server
Hi Guys,
I have an app factory in my setup, and the app object is usually invoked as:
`from flask import current_app as app`
However, the slackeventsapi complains about the app object :
`TypeError("Server must be an instance of Flask")`
I have fixed adding the following to server.py :
`from werkzeug.local import LocalProxy # Importing the localproxy class`
Line 25
Changed from :
` if isinstance(server, Flask):`
to :
`if isinstance(server, Flask) or isinstance(server, LocalProxy):`
Basically, if a Flask app proxy is passed the api will carry on without complaining since it has the same methods as the Flask app object.
I hope this helps other people and that it is considered as a solution; if more information is needed, I am happy to provide it.
Thanks for the good work with the API.
### What type of issue is this? (place an `x` in one of the `[ ]`)
- [X] bug ?
- [X] enhancement (feature request)
- [ ] question
- [ ] documentation related
- [ ] testing related
- [ ] discussion
### Requirements
* [X] I've read and understood the [Contributing guidelines](https://github.com/slackapi/python-slack-events-api/blob/master/.github/contributing.md) and have done my best effort to follow them.
* [X] I've read and agree to the [Code of Conduct](https://slackhq.github.io/code-of-conduct).
* [X] I've searched for any related issues and avoided creating a duplicate issue.
#### Reproducible in:
slackeventsapi version: slackeventsapi==2.1.0
python version: Python 3.7.3
OS version(s):
</issue>
<code>
[start of README.rst]
1 Slack Events API adapter for Python
2 ===================================
3
4 .. image:: https://badge.fury.io/py/slackeventsapi.svg
5 :target: https://pypi.org/project/slackeventsapi/
6 .. image:: https://travis-ci.org/slackapi/python-slack-events-api.svg?branch=master
7 :target: https://travis-ci.org/slackapi/python-slack-events-api
8 .. image:: https://codecov.io/gh/slackapi/python-slack-events-api/branch/master/graph/badge.svg
9 :target: https://codecov.io/gh/slackapi/python-slack-events-api
10
11
12 The Slack Events Adapter is a Python-based solution to receive and parse events
13 from Slackโs Events API. This library uses an event emitter framework to allow
14 you to easily process Slack events by simply attaching functions
15 to event listeners.
16
17 This adapter enhances and simplifies Slack's Events API by incorporating useful best practices, patterns, and opportunities to abstract out common tasks.
18
19 💡 We wrote a `blog post which explains how`_ the Events API can help you, why we built these tools, and how you can use them to build production-ready Slack apps.
20
21 .. _blog post which explains how: https://medium.com/@SlackAPI/enhancing-slacks-events-api-7535827829ab
22
23
24 🤖 Installation
25 ----------------
26
27 .. code:: shell
28
29 pip install slackeventsapi
30
31 🤖 App Setup
32 --------------------
33
34 Before you can use the `Events API`_ you must
35 `create a Slack App`_, and turn on
36 `Event Subscriptions`_.
37
38 💡 When you add the Request URL to your app's Event Subscription settings,
39 Slack will send a request containing a `challenge` code to verify that your
40 server is alive. This package handles that URL Verification event for you, so
41 all you need to do is start the example app, start ngrok and configure your
42 URL accordingly.
43
44 ✅ Once you have your `Request URL` verified, your app is ready to start
45 receiving Team Events.
46
47 🎉 Your server will begin receiving Events from Slack's Events API as soon as a
48 user has authorized your app.
49
50 🤖 Development workflow:
51 ===========================
52
53 (1) Create a Slack app on https://api.slack.com/apps
54 (2) Add a `bot user` for your app
55 (3) Start the example app on your **Request URL** endpoint
56 (4) Start ngrok and copy the **HTTPS** URL
57 (5) Add your **Request URL** and subscribe your app to events
58 (6) Go to your ngrok URL (e.g. https://myapp12.ngrok.com/) and auth your app
59
60 **🎉 Once your app has been authorized, you will begin receiving Slack Events**
61
62 ⚠️ Ngrok is a great tool for developing Slack apps, but we don't recommend using ngrok
63 for production apps.
64
65 🤖 Usage
66 ----------
67 **⚠️ Keep your app's credentials safe!**
68
69 - For development, keep them in virtualenv variables.
70
71 - For production, use a secure data store.
72
73 - Never post your app's credentials to github.
74
75 .. code:: python
76
77 SLACK_SIGNING_SECRET = os.environ["SLACK_SIGNING_SECRET"]
78
79 Create a Slack Event Adapter for receiving actions via the Events API
80 -----------------------------------------------------------------------
81 **Using the built-in Flask server:**
82
83 .. code:: python
84
85 from slackeventsapi import SlackEventAdapter
86
87
88 slack_events_adapter = SlackEventAdapter(SLACK_SIGNING_SECRET, endpoint="/slack/events")
89
90
91 # Create an event listener for "reaction_added" events and print the emoji name
92 @slack_events_adapter.on("reaction_added")
93 def reaction_added(event_data):
94 emoji = event_data["event"]["reaction"]
95 print(emoji)
96
97
98 # Start the server on port 3000
99 slack_events_adapter.start(port=3000)
100
101
102 **Using your existing Flask instance:**
103
104
105 .. code:: python
106
107 from flask import Flask
108 from slackeventsapi import SlackEventAdapter
109
110
111 # This `app` represents your existing Flask app
112 app = Flask(__name__)
113
114
115 # An example of one of your Flask app's routes
116 @app.route("/")
117 def hello():
118 return "Hello there!"
119
120
121 # Bind the Events API route to your existing Flask app by passing the server
122 # instance as the last param, or with `server=app`.
123 slack_events_adapter = SlackEventAdapter(SLACK_SIGNING_SECRET, "/slack/events", app)
124
125
126 # Create an event listener for "reaction_added" events and print the emoji name
127 @slack_events_adapter.on("reaction_added")
128 def reaction_added(event_data):
129 emoji = event_data["event"]["reaction"]
130 print(emoji)
131
132
133 # Start the server on port 3000
134 if __name__ == "__main__":
135 app.run(port=3000)
136
137 For a comprehensive list of available Slack `Events` and more information on
138 `Scopes`, see https://api.slack.com/events-api
139
140 🤖 Example event listeners
141 -----------------------------
142
143 See `example.py`_ for usage examples. This example also utilizes the
144 SlackClient Web API client.
145
146 .. _example.py: /example/
147
148 🤖 Support
149 -----------
150
151 Need help? Join `Slack Community`_ and talk to us in `#slack-api`_.
152
153 You can also `create an Issue`_ right here on GitHub.
154
155 .. _Events API: https://api.slack.com/events-api
156 .. _create a Slack App: https://api.slack.com/apps/new
157 .. _Event Subscriptions: https://api.slack.com/events-api#subscriptions
158 .. _Slack Community: http://slackcommunity.com/
159 .. _#slack-api: https://dev4slack.slack.com/messages/slack-api/
160 .. _create an Issue: https://github.com/slackapi/python-slack-events-api/issues/new
161
[end of README.rst]
[start of slackeventsapi/server.py]
...
...
4 import sys
5 import hmac
6 import hashlib
7 from time import time
8 from .version import __version__
9
10
11 class SlackServer(Flask):
12 def __init__(self, signing_secret, endpoint, emitter, server):
13 self.signing_secret = signing_secret
14 self.emitter = emitter
15 self.endpoint = endpoint
16 self.package_info = self.get_package_info()
17
18 # If a server is passed in, bind the event handler routes to it,
19 # otherwise create a new Flask instance.
20 if server:
21 if isinstance(server, Flask) or isinstance(server, Blueprint):
22 self.bind_route(server)
23 else:
24 raise TypeError("Server must be an instance of Flask or Blueprint")
25 else:
26 Flask.__init__(self, __name__)
27 self.bind_route(self)
28
...
[end of slackeventsapi/server.py]
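The root cause in the report above is that `werkzeug.local.LocalProxy` forwards attribute access to the wrapped app but is not a `Flask` subclass, so a strict `isinstance(server, Flask)` check rejects `current_app`. A minimal stdlib sketch of that behavior (the `App`/`Proxy` classes are hypothetical stand-ins for `Flask` and `LocalProxy`, used so the example needs no third-party packages):

```python
class App:
    """Stand-in for flask.Flask."""
    name = "demo"

class Proxy:
    """Stand-in for werkzeug.local.LocalProxy: forwards attribute access."""
    def __init__(self, get_obj):
        object.__setattr__(self, "_get_obj", get_obj)

    def __getattr__(self, name):
        # Only called when normal lookup fails, so this forwards
        # everything the wrapper itself doesn't define.
        return getattr(self._get_obj(), name)

app = App()
proxy = Proxy(lambda: app)

print(proxy.name)                       # demo - attributes are forwarded
print(isinstance(proxy, App))           # False - the wrapper's own type is checked
print(isinstance(proxy, (App, Proxy)))  # True - the widened check accepts it
```

This is why widening the check to `isinstance(server, (Flask, Blueprint, LocalProxy))`, as the merged fix does, is the pragmatic solution: the proxy behaves like the app for every call the adapter makes.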
[start of /dev/null]
...
[end of /dev/null]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
slackapi/python-slack-events-api
|
0c0ce604b502508622fb14c278a0d64841fa32e3
|
Passing Flask app proxy as server
Hi Guys,
I have an app factory in my setup, and the app object is usually invoked as:
`from flask import current_app as app`
However, the slackeventsapi complains about the app object :
`TypeError("Server must be an instance of Flask")`
I have fixed adding the following to server.py :
`from werkzeug.local import LocalProxy # Importing the localproxy class`
Line 25
Changed from :
` if isinstance(server, Flask):`
to :
`if isinstance(server, Flask) or isinstance(server, LocalProxy):`
Basically, if a Flask app proxy is passed the api will carry on without complaining since it has the same methods as the Flask app object.
I hope this help other people and it is considered as a solution if more information is needed I am help to provide.
Thanks for the good work with the API.
### What type of issue is this? (place an `x` in one of the `[ ]`)
- [X] bug ?
- [X] enhancement (feature request)
- [ ] question
- [ ] documentation related
- [ ] testing related
- [ ] discussion
### Requirements
* [X] I've read and understood the [Contributing guidelines](https://github.com/slackapi/python-slack-events-api/blob/master/.github/contributing.md) and have done my best effort to follow them.
* [X] I've read and agree to the [Code of Conduct](https://slackhq.github.io/code-of-conduct).
* [X] I've searched for any related issues and avoided creating a duplicate issue.
#### Reproducible in:
slackeventsapi version: slackeventsapi==2.1.0
python version: Python 3.7.3
OS version(s):
|
2020-06-12T06:58:10Z
|
<patch>
<patch>
diff --git a/example/current_app/main.py b/example/current_app/main.py
new file mode 100644
--- /dev/null
+++ b/example/current_app/main.py
@@ -0,0 +1,49 @@
+# ------------------
+# Only for running this script here
+import sys
+from os.path import dirname
+sys.path.insert(1, f"{dirname(__file__)}/../..")
+# ------------------
+
+import os
+from slack import WebClient
+import logging
+logging.basicConfig(level=logging.DEBUG)
+
+from flask import Flask
+
+app = Flask(__name__)
+
+with app.app_context():
+ from test_module.slack_app import slack_events_adapter
+
+ slack_bot_token = os.environ["SLACK_BOT_TOKEN"]
+ slack_client = WebClient(slack_bot_token)
+
+
+ @slack_events_adapter.on("message")
+ def handle_message(event_data):
+ message = event_data["event"]
+ if message.get("subtype") is None and "hi" in message.get('text'):
+ channel = message["channel"]
+ message = "Hi <@%s>! :tada:" % message["user"]
+ slack_client.chat_postMessage(channel=channel, text=message)
+
+
+ @slack_events_adapter.on("error")
+ def error_handler(err):
+ print("ERROR: " + str(err))
+
+# (Terminal A)
+# source env/bin/activate
+# (env) $ export SLACK_BOT_TOKEN=xoxb-***
+# (env) $ export SLACK_SIGNING_SECRET=**
+# (env) $ cd example/current_app
+# (env) $ FLASK_APP=main.py FLASK_ENV=development flask run --port 3000
+
+# (Terminal B)
+# ngrok http 3000
+
+# in Slack
+# /invite @{your app's bot user}
+# post a message "hi" in the channel
diff --git a/slackeventsapi/server.py b/slackeventsapi/server.py
--- a/slackeventsapi/server.py
+++ b/slackeventsapi/server.py
@@ -1,10 +1,13 @@
-from flask import Flask, request, make_response, Blueprint
+import hashlib
+import hmac
import json
import platform
import sys
-import hmac
-import hashlib
from time import time
+
+from flask import Flask, request, make_response, Blueprint
+from werkzeug.local import LocalProxy
+
from .version import __version__
@@ -18,10 +21,10 @@ def __init__(self, signing_secret, endpoint, emitter, server):
# If a server is passed in, bind the event handler routes to it,
# otherwise create a new Flask instance.
if server:
- if isinstance(server, Flask) or isinstance(server, Blueprint):
+ if isinstance(server, (Flask, Blueprint, LocalProxy)):
self.bind_route(server)
else:
- raise TypeError("Server must be an instance of Flask or Blueprint")
+ raise TypeError("Server must be an instance of Flask, Blueprint, or LocalProxy")
else:
Flask.__init__(self, __name__)
self.bind_route(self)
</patch>
</patch>
|
diff --git a/example/current_app/test_module/__init__.py b/example/current_app/test_module/__init__.py
new file mode 100644
diff --git a/example/current_app/test_module/slack_app.py b/example/current_app/test_module/slack_app.py
new file mode 100644
--- /dev/null
+++ b/example/current_app/test_module/slack_app.py
@@ -0,0 +1,16 @@
+# ------------------
+# Only for running this script here
+import logging
+import sys
+from os.path import dirname
+
+sys.path.insert(1, f"{dirname(__file__)}/../../..")
+logging.basicConfig(level=logging.DEBUG)
+# ------------------
+
+from flask import current_app as app
+from slackeventsapi import SlackEventAdapter
+import os
+
+slack_signing_secret = os.environ["SLACK_SIGNING_SECRET"]
+slack_events_adapter = SlackEventAdapter(slack_signing_secret, "/slack/events", app)
diff --git a/tests/test_server.py b/tests/test_server.py
--- a/tests/test_server.py
+++ b/tests/test_server.py
@@ -18,7 +18,7 @@ def test_server_not_flask():
with pytest.raises(TypeError) as e:
invalid_flask = "I am not a Flask"
SlackEventAdapter("SIGNING_SECRET", "/slack/events", invalid_flask)
- assert e.value.args[0] == 'Server must be an instance of Flask or Blueprint'
+ assert e.value.args[0] == 'Server must be an instance of Flask, Blueprint, or LocalProxy'
def test_blueprint_server():
|
1.0
| ||||
celery__celery-2598
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
CELERY_RESULT_SERIALIZER = 'json' breaks Exception marshaling
Setting `CELERY_RESULT_SERIALIZER = json` and raising an exception in the worker leads to this:
```
/path/to/lib/python2.7/site-packages/celery/result.py in get(self, timeout, propagate, interval, no_ack, follow_parents, EXCEPTION_STATES, PROPAGATE_STATES)
173 status = meta['status']
174 if status in PROPAGATE_STATES and propagate:
--> 175 raise meta['result']
176 return meta['result']
177 wait = get # deprecated alias to :meth:`get`.
TypeError: exceptions must be old-style classes or derived from BaseException, not dict
```
where the contents of `meta['result']` are (in my case):
```
{u'exc_message': u'unknown keys: nam', u'exc_type': u'ValueError'}
```
so it _looks_ like celery could convert the dict to a real exception before raising, but it does not currently. Changing back to `pickle` works as expected.
bug can be reproduced with the following:
``` python
# jsonresults.py
from celery.app.base import Celery
CELERY_RESULT_SERIALIZER = 'json'
CELERY_RESULT_BACKEND = 'amqp'
app = Celery(config_source=__name__)
@app.task
def hello():
raise ValueError('go away')
```
worker:
```
# python -m celery --app=jsonresults:app worker
```
caller:
``` python
import jsonresults
jsonresults.hello.delay().get()
```
</issue>
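The dict in the traceback above (`{'exc_type': ..., 'exc_message': ...}`) is all a JSON result backend can carry across the wire, so something must rebuild a raisable exception before `raise meta['result']`. The sketch below is not Celery's actual code, only an illustration of the reconstruction the report asks for; restricting the lookup to `builtins` is an assumption made here, since resolving class names from untrusted data in arbitrary modules would be unsafe:

```python
import builtins

def rebuild_exception(meta):
    """Turn a JSON-serialized error dict back into a raisable exception."""
    exc_cls = getattr(builtins, meta["exc_type"], None)
    if isinstance(exc_cls, type) and issubclass(exc_cls, BaseException):
        # Known builtin exception type: reconstruct it directly.
        return exc_cls(meta["exc_message"])
    # Unknown type: fall back to a generic Exception carrying both fields.
    return Exception("%s: %s" % (meta["exc_type"], meta["exc_message"]))

err = rebuild_exception({"exc_type": "ValueError", "exc_message": "go away"})
print(type(err).__name__, err)  # ValueError go away
```

A real backend would additionally consult a registry of application-defined exception classes rather than only builtins.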
<code>
[start of README.rst]
1 =================================
2 celery - Distributed Task Queue
3 =================================
4
5 .. image:: http://cloud.github.com/downloads/celery/celery/celery_128.png
6
7 |build-status| |coverage-status|
8
9 :Version: 3.2.0a1 (Cipater)
10 :Web: http://celeryproject.org/
11 :Download: http://pypi.python.org/pypi/celery/
12 :Source: http://github.com/celery/celery/
13 :Keywords: task queue, job queue, asynchronous, async, rabbitmq, amqp, redis,
14 python, webhooks, queue, distributed
15
16 --
17
18 What is a Task Queue?
19 =====================
20
21 Task queues are used as a mechanism to distribute work across threads or
22 machines.
23
24 A task queue's input is a unit of work, called a task, dedicated worker
25 processes then constantly monitor the queue for new work to perform.
26
27 Celery communicates via messages, usually using a broker
28 to mediate between clients and workers. To initiate a task a client puts a
29 message on the queue, the broker then delivers the message to a worker.
30
31 A Celery system can consist of multiple workers and brokers, giving way
32 to high availability and horizontal scaling.
33
34 Celery is a library written in Python, but the protocol can be implemented in
35 any language. So far there's RCelery_ for the Ruby programming language, and a
36 `PHP client`_, but language interoperability can also be achieved
 37 by `using webhooks`_.
38
39 .. _RCelery: https://github.com/leapfrogonline/rcelery
40 .. _`PHP client`: https://github.com/gjedeer/celery-php
41 .. _`using webhooks`:
42 http://docs.celeryproject.org/en/latest/userguide/remote-tasks.html
43
44 What do I need?
45 ===============
46
47 Celery version 3.0 runs on,
48
49 - Python (2.6, 2.7, 3.3, 3.4)
50 - PyPy (1.8, 1.9)
51 - Jython (2.5, 2.7).
52
53 This is the last version to support Python 2.5,
54 and from Celery 3.1, Python 2.6 or later is required.
55 The last version to support Python 2.4 was Celery series 2.2.
56
57 *Celery* is usually used with a message broker to send and receive messages.
58 The RabbitMQ, Redis transports are feature complete,
59 but there's also experimental support for a myriad of other solutions, including
60 using SQLite for local development.
61
62 *Celery* can run on a single machine, on multiple machines, or even
63 across datacenters.
64
65 Get Started
66 ===========
67
68 If this is the first time you're trying to use Celery, or you are
69 new to Celery 3.0 coming from previous versions then you should read our
70 getting started tutorials:
71
72 - `First steps with Celery`_
73
74 Tutorial teaching you the bare minimum needed to get started with Celery.
75
76 - `Next steps`_
77
78 A more complete overview, showing more features.
79
80 .. _`First steps with Celery`:
81 http://docs.celeryproject.org/en/latest/getting-started/first-steps-with-celery.html
82
83 .. _`Next steps`:
84 http://docs.celeryproject.org/en/latest/getting-started/next-steps.html
85
86 Celery is...
87 ==========
88
89 - **Simple**
90
91 Celery is easy to use and maintain, and does *not need configuration files*.
92
93 It has an active, friendly community you can talk to for support,
94 including a `mailing-list`_ and an IRC channel.
95
96 Here's one of the simplest applications you can make::
97
98 from celery import Celery
99
100 app = Celery('hello', broker='amqp://guest@localhost//')
101
102 @app.task
103 def hello():
104 return 'hello world'
105
106 - **Highly Available**
107
108 Workers and clients will automatically retry in the event
109 of connection loss or failure, and some brokers support
110 HA in way of *Master/Master* or *Master/Slave* replication.
111
112 - **Fast**
113
114 A single Celery process can process millions of tasks a minute,
115 with sub-millisecond round-trip latency (using RabbitMQ,
116 py-librabbitmq, and optimized settings).
117
118 - **Flexible**
119
120 Almost every part of *Celery* can be extended or used on its own,
121 Custom pool implementations, serializers, compression schemes, logging,
122 schedulers, consumers, producers, autoscalers, broker transports and much more.
123
124 It supports...
125 ==============
126
127 - **Message Transports**
128
129 - RabbitMQ_, Redis_,
130 - MongoDB_ (experimental), Amazon SQS (experimental),
131 - CouchDB_ (experimental), SQLAlchemy_ (experimental),
132 - Django ORM (experimental), `IronMQ`_
133 - and more...
134
135 - **Concurrency**
136
137 - Prefork, Eventlet_, gevent_, threads/single threaded
138
139 - **Result Stores**
140
141 - AMQP, Redis
142 - memcached, MongoDB
143 - SQLAlchemy, Django ORM
144 - Apache Cassandra, IronCache
145
146 - **Serialization**
147
148 - *pickle*, *json*, *yaml*, *msgpack*.
149 - *zlib*, *bzip2* compression.
150 - Cryptographic message signing.
151
152 .. _`Eventlet`: http://eventlet.net/
153 .. _`gevent`: http://gevent.org/
154
155 .. _RabbitMQ: http://rabbitmq.com
156 .. _Redis: http://redis.io
157 .. _MongoDB: http://mongodb.org
158 .. _Beanstalk: http://kr.github.com/beanstalkd
159 .. _CouchDB: http://couchdb.apache.org
160 .. _SQLAlchemy: http://sqlalchemy.org
161 .. _`IronMQ`: http://iron.io
162
163 Framework Integration
164 =====================
165
166 Celery is easy to integrate with web frameworks, some of which even have
167 integration packages:
168
169 +--------------------+------------------------+
170 | `Django`_ | not needed |
171 +--------------------+------------------------+
172 | `Pyramid`_ | `pyramid_celery`_ |
173 +--------------------+------------------------+
174 | `Pylons`_ | `celery-pylons`_ |
175 +--------------------+------------------------+
176 | `Flask`_ | not needed |
177 +--------------------+------------------------+
178 | `web2py`_ | `web2py-celery`_ |
179 +--------------------+------------------------+
180 | `Tornado`_ | `tornado-celery`_ |
181 +--------------------+------------------------+
182
183 The integration packages are not strictly necessary, but they can make
184 development easier, and sometimes they add important hooks like closing
185 database connections at ``fork``.
186
187 .. _`Django`: http://djangoproject.com/
188 .. _`Pylons`: http://www.pylonsproject.org/
189 .. _`Flask`: http://flask.pocoo.org/
190 .. _`web2py`: http://web2py.com/
191 .. _`Bottle`: http://bottlepy.org/
192 .. _`Pyramid`: http://docs.pylonsproject.org/en/latest/docs/pyramid.html
193 .. _`pyramid_celery`: http://pypi.python.org/pypi/pyramid_celery/
194 .. _`django-celery`: http://pypi.python.org/pypi/django-celery
195 .. _`celery-pylons`: http://pypi.python.org/pypi/celery-pylons
196 .. _`web2py-celery`: http://code.google.com/p/web2py-celery/
197 .. _`Tornado`: http://www.tornadoweb.org/
198 .. _`tornado-celery`: http://github.com/mher/tornado-celery/
199
200 .. _celery-documentation:
201
202 Documentation
203 =============
204
205 The `latest documentation`_ with user guides, tutorials and API reference
206 is hosted at Read The Docs.
207
208 .. _`latest documentation`: http://docs.celeryproject.org/en/latest/
209
210 .. _celery-installation:
211
212 Installation
213 ============
214
215 You can install Celery either via the Python Package Index (PyPI)
216 or from source.
217
218 To install using `pip`,::
219
220 $ pip install -U Celery
221
222 To install using `easy_install`,::
223
224 $ easy_install -U Celery
225
226 .. _bundles:
227
228 Bundles
229 -------
230
231 Celery also defines a group of bundles that can be used
232 to install Celery and the dependencies for a given feature.
233
234 You can specify these in your requirements or on the ``pip`` command-line
235 by using brackets. Multiple bundles can be specified by separating them by
236 commas.
237 ::
238
239 $ pip install "celery[librabbitmq]"
240
241 $ pip install "celery[librabbitmq,redis,auth,msgpack]"
242
243 The following bundles are available:
244
245 Serializers
246 ~~~~~~~~~~~
247
248 :celery[auth]:
249 for using the auth serializer.
250
251 :celery[msgpack]:
252 for using the msgpack serializer.
253
254 :celery[yaml]:
255 for using the yaml serializer.
256
257 Concurrency
258 ~~~~~~~~~~~
259
260 :celery[eventlet]:
261 for using the eventlet pool.
262
263 :celery[gevent]:
264 for using the gevent pool.
265
266 :celery[threads]:
267 for using the thread pool.
268
269 Transports and Backends
270 ~~~~~~~~~~~~~~~~~~~~~~~
271
272 :celery[librabbitmq]:
273 for using the librabbitmq C library.
274
275 :celery[redis]:
276 for using Redis as a message transport or as a result backend.
277
278 :celery[mongodb]:
279 for using MongoDB as a message transport (*experimental*),
280 or as a result backend (*supported*).
281
282 :celery[sqs]:
283 for using Amazon SQS as a message transport (*experimental*).
284
285 :celery[memcache]:
286 for using memcached as a result backend.
287
288 :celery[cassandra]:
289 for using Apache Cassandra as a result backend.
290
291 :celery[couchdb]:
292 for using CouchDB as a message transport (*experimental*).
293
294 :celery[couchbase]:
295 for using CouchBase as a result backend.
296
297 :celery[beanstalk]:
298 for using Beanstalk as a message transport (*experimental*).
299
300 :celery[zookeeper]:
301 for using Zookeeper as a message transport.
302
303 :celery[zeromq]:
304 for using ZeroMQ as a message transport (*experimental*).
305
306 :celery[sqlalchemy]:
307 for using SQLAlchemy as a message transport (*experimental*),
308 or as a result backend (*supported*).
309
310 :celery[pyro]:
311 for using the Pyro4 message transport (*experimental*).
312
313 :celery[slmq]:
314 for using the SoftLayer Message Queue transport (*experimental*).
315
316 .. _celery-installing-from-source:
317
318 Downloading and installing from source
319 --------------------------------------
320
321 Download the latest version of Celery from
322 http://pypi.python.org/pypi/celery/
323
324 You can install it by doing the following,::
325
326 $ tar xvfz celery-0.0.0.tar.gz
327 $ cd celery-0.0.0
328 $ python setup.py build
329 # python setup.py install
330
331 The last command must be executed as a privileged user if
332 you are not currently using a virtualenv.
333
334 .. _celery-installing-from-git:
335
336 Using the development version
337 -----------------------------
338
339 With pip
340 ~~~~~~~~
341
342 The Celery development version also requires the development
343 versions of ``kombu``, ``amqp`` and ``billiard``.
344
345 You can install the latest snapshot of these using the following
346 pip commands::
347
348 $ pip install https://github.com/celery/celery/zipball/master#egg=celery
349 $ pip install https://github.com/celery/billiard/zipball/master#egg=billiard
350 $ pip install https://github.com/celery/py-amqp/zipball/master#egg=amqp
351 $ pip install https://github.com/celery/kombu/zipball/master#egg=kombu
352
353 With git
354 ~~~~~~~~
355
356 Please see the Contributing section.
357
358 .. _getting-help:
359
360 Getting Help
361 ============
362
363 .. _mailing-list:
364
365 Mailing list
366 ------------
367
368 For discussions about the usage, development, and future of celery,
369 please join the `celery-users`_ mailing list.
370
371 .. _`celery-users`: http://groups.google.com/group/celery-users/
372
373 .. _irc-channel:
374
375 IRC
376 ---
377
378 Come chat with us on IRC. The **#celery** channel is located at the `Freenode`_
379 network.
380
381 .. _`Freenode`: http://freenode.net
382
383 .. _bug-tracker:
384
385 Bug tracker
386 ===========
387
388 If you have any suggestions, bug reports or annoyances please report them
389 to our issue tracker at http://github.com/celery/celery/issues/
390
391 .. _wiki:
392
393 Wiki
394 ====
395
396 http://wiki.github.com/celery/celery/
397
398 .. _contributing-short:
399
400 Contributing
401 ============
402
403 Development of `celery` happens at Github: http://github.com/celery/celery
404
405 You are highly encouraged to participate in the development
406 of `celery`. If you don't like Github (for some reason) you're welcome
407 to send regular patches.
408
409 Be sure to also read the `Contributing to Celery`_ section in the
410 documentation.
411
412 .. _`Contributing to Celery`:
413 http://docs.celeryproject.org/en/master/contributing.html
414
415 .. _license:
416
417 License
418 =======
419
420 This software is licensed under the `New BSD License`. See the ``LICENSE``
421 file in the top distribution directory for the full license text.
422
423 .. # vim: syntax=rst expandtab tabstop=4 shiftwidth=4 shiftround
424
425
426 .. image:: https://d2weczhvl823v0.cloudfront.net/celery/celery/trend.png
427 :alt: Bitdeli badge
428 :target: https://bitdeli.com/free
429
430 .. |build-status| image:: https://travis-ci.org/celery/celery.svg?branch=master
431 :target: https://travis-ci.org/celery/celery
432 .. |coverage-status| image:: https://coveralls.io/repos/celery/celery/badge.svg
433 :target: https://coveralls.io/r/celery/celery
434
[end of README.rst]
[start of celery/backends/amqp.py]
...
181 return payload
182 else:
183 # no new state, use previous
184 try:
185 return self._cache[task_id]
186 except KeyError:
187 # result probably pending.
188 return {'status': states.PENDING, 'result': None}
189 poll = get_task_meta # XXX compat
190
191 def drain_events(self, connection, consumer,
192 timeout=None, on_interval=None, now=monotonic, wait=None):
193 wait = wait or connection.drain_events
194 results = {}
195
196 def callback(meta, message):
197 if meta['status'] in states.READY_STATES:
198 results[meta['task_id']] = meta
199
200 consumer.callbacks[:] = [callback]
201 time_start = now()
202
...
[end of celery/backends/amqp.py]
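The callback-collection pattern in the `drain_events` excerpt above can be sketched in isolation (a toy model with no real AMQP connection; the states and callback shape are taken from the excerpt, while the message list stands in for `connection.drain_events`):

```python
# Toy model of the drain_events() loop above: a callback stores metadata
# only for tasks that reached a ready state; draining stops once every
# wanted task id has a result.
READY_STATES = frozenset({'SUCCESS', 'FAILURE', 'REVOKED'})

def drain(messages, wanted):
    results = {}

    def callback(meta):
        if meta['status'] in READY_STATES:
            results[meta['task_id']] = meta

    for meta in messages:  # stand-in for connection.drain_events(...)
        callback(meta)
        if wanted <= set(results):
            break
    return results

out = drain([{'task_id': 'a', 'status': 'STARTED'},
             {'task_id': 'a', 'status': 'SUCCESS'},
             {'task_id': 'b', 'status': 'SUCCESS'}], wanted={'a'})
```

Note how a non-ready `STARTED` message is ignored, and draining stops before task `b` is ever consumed because only `a` was wanted.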
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
celery/celery
|
6592ff64b6b024a4b68abcc53b151888fdf0dee3
|
CELERY_RESULT_SERIALIZER = 'json' breaks Exception marshaling
Setting `CELERY_RESULT_SERIALIZER = json` and raising an exception in the worker leads to this:
```
/path/to/lib/python2.7/site-packages/celery/result.py in get(self, timeout, propagate, interval, no_ack, follow_parents, EXCEPTION_STATES, PROPAGATE_STATES)
173 status = meta['status']
174 if status in PROPAGATE_STATES and propagate:
--> 175 raise meta['result']
176 return meta['result']
177 wait = get # deprecated alias to :meth:`get`.
TypeError: exceptions must be old-style classes or derived from BaseException, not dict
```
where the contents of `meta['result']` are (in my case):
```
{u'exc_message': u'unknown keys: nam', u'exc_type': u'ValueError'}
```
so it _looks_ like celery could convert the dict to a real exception before raising, but it does not currently. Changing back to `pickle` works as expected.
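A hedged sketch of the conversion the reporter suggests — rebuilding a live exception from the JSON-safe dict before raising (the helper name below is illustrative; celery's real decoder on the backend handles more cases, such as non-builtin exception types):

```python
# Illustrative sketch: turn {'exc_type': ..., 'exc_message': ...} back
# into a real exception instance. Only builtin exception types are
# resolved here; anything else falls back to a generic Exception.
import builtins  # on Python 2 this module is named __builtin__

def exception_from_meta(meta):
    cls = getattr(builtins, meta.get('exc_type', ''), None)
    if isinstance(cls, type) and issubclass(cls, BaseException):
        return cls(meta.get('exc_message', ''))
    # unknown type: keep the original type name in the message
    return Exception('{0}: {1}'.format(meta.get('exc_type'),
                                       meta.get('exc_message')))

exc = exception_from_meta({'exc_type': 'ValueError',
                           'exc_message': 'unknown keys: nam'})
```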
bug can be reproduced with the following:
``` python
# jsonresults.py
from celery.app.base import Celery
CELERY_RESULT_SERIALIZER = 'json'
CELERY_RESULT_BACKEND = 'amqp'
app = Celery(config_source=__name__)
@app.task
def hello():
raise ValueError('go away')
```
worker:
```
# python -m celery --app=jsonresults:app worker
```
caller:
``` python
import jsonresults
jsonresults.hello.delay().get()
```
|
This is biting me as well. Any news?
|
2015-04-29T14:52:17Z
|
<patch>
diff --git a/celery/backends/amqp.py b/celery/backends/amqp.py
--- a/celery/backends/amqp.py
+++ b/celery/backends/amqp.py
@@ -195,7 +195,7 @@ def drain_events(self, connection, consumer,
 def callback(meta, message):
 if meta['status'] in states.READY_STATES:
- results[meta['task_id']] = meta
+ results[meta['task_id']] = self.meta_from_decoded(meta)
 consumer.callbacks[:] = [callback]
 time_start = now()
</patch>
|
diff --git a/celery/tests/backends/test_amqp.py b/celery/tests/backends/test_amqp.py
--- a/celery/tests/backends/test_amqp.py
+++ b/celery/tests/backends/test_amqp.py
@@ -13,6 +13,7 @@
from celery.backends.amqp import AMQPBackend
from celery.exceptions import TimeoutError
from celery.five import Empty, Queue, range
+from celery.result import AsyncResult
from celery.utils import uuid
from celery.tests.case import (
@@ -246,10 +247,20 @@ def test_wait_for(self):
with self.assertRaises(TimeoutError):
b.wait_for(tid, timeout=0.01, cache=False)
- def test_drain_events_remaining_timeouts(self):
+ def test_drain_events_decodes_exceptions_in_meta(self):
+ tid = uuid()
+ b = self.create_backend(serializer="json")
+ b.store_result(tid, RuntimeError("aap"), states.FAILURE)
+ result = AsyncResult(tid, backend=b)
- class Connection(object):
+ with self.assertRaises(Exception) as cm:
+ result.get()
+ self.assertEqual(cm.exception.__class__.__name__, "RuntimeError")
+ self.assertEqual(str(cm.exception), "aap")
+
+ def test_drain_events_remaining_timeouts(self):
+ class Connection(object):
def drain_events(self, timeout=None):
pass
|
1.0
| |||
celery__celery-2840
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Message being acknowledged on WorkerLostError when CELERY_ACKS_LATE=True
When using celery v3.0.24, with `CELERY_ACKS_LATE = True` , if the OOM killer kills the celery worker, then the worker acknowledges the message.
As per [this](https://github.com/celery/celery/commit/e810420c) commit. The `exc_info.internal` comes in as `false`, which means it is not a internal error, due to which the message is acknowledged.
The desirable behaviour, in such a case would be to not acknowledge the message (and be able to know, whether its a OOM error), so that some other worker can pick it up.
As a workaround, I've commented out the [code](https://github.com/siddharth96/celery/commit/427695d1b23034dadda85fd7a48f7367831be4fa), where celery acknowledges the message, because in such a case, message will be lost.
</issue>
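The acknowledgement behaviour the issue asks for can be modeled with a small decision function. This is an illustrative sketch only — not celery's actual handler, which lives in the worker request/billiard layers — and the `reject_on_worker_lost` switch mirrors the option later celery releases expose for exactly this case:

```python
# Illustrative model of the ack decision: with late acks, a
# WorkerLostError (e.g. a child killed by the OOM killer) should leave
# the message un-acked so another worker can pick it up.
# WorkerLostError really lives in billiard.exceptions; it is stubbed
# here so the sketch is self-contained.
class WorkerLostError(Exception):
    """Stand-in for billiard.exceptions.WorkerLostError."""

def should_ack(acks_late, exc=None, reject_on_worker_lost=True):
    if not acks_late:
        return True   # early acks: message was acked before execution
    if isinstance(exc, WorkerLostError) and reject_on_worker_lost:
        return False  # leave un-acked for redelivery to another worker
    return True       # normal completion, or an ordinary task error
```

With the switch enabled, an OOM-killed child no longer results in a lost message, while ordinary task failures still acknowledge as before.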
<code>
[start of README.rst]
=================================
celery - Distributed Task Queue
=================================
.. image:: http://cloud.github.com/downloads/celery/celery/celery_128.png
|build-status| |coverage-status|
:Version: 3.2.0a1 (Cipater)
:Web: http://celeryproject.org/
:Download: http://pypi.python.org/pypi/celery/
:Source: http://github.com/celery/celery/
:Keywords: task queue, job queue, asynchronous, async, rabbitmq, amqp, redis,
python, webhooks, queue, distributed
--
What is a Task Queue?
=====================
Task queues are used as a mechanism to distribute work across threads or
machines.
A task queue's input is a unit of work, called a task, dedicated worker
processes then constantly monitor the queue for new work to perform.
Celery communicates via messages, usually using a broker
to mediate between clients and workers. To initiate a task a client puts a
message on the queue, the broker then delivers the message to a worker.
A Celery system can consist of multiple workers and brokers, giving way
to high availability and horizontal scaling.
Celery is a library written in Python, but the protocol can be implemented in
any language. So far there's RCelery_ for the Ruby programming language, and a
`PHP client`_, but language interoperability can also be achieved
by `using webhooks`_.
.. _RCelery: https://github.com/leapfrogonline/rcelery
.. _`PHP client`: https://github.com/gjedeer/celery-php
.. _`using webhooks`:
http://docs.celeryproject.org/en/latest/userguide/remote-tasks.html
What do I need?
===============
Celery version 3.0 runs on,
- Python (2.6, 2.7, 3.3, 3.4)
- PyPy (1.8, 1.9)
- Jython (2.5, 2.7).
This is the last version to support Python 2.5,
and from Celery 3.1, Python 2.6 or later is required.
The last version to support Python 2.4 was Celery series 2.2.
*Celery* is usually used with a message broker to send and receive messages.
The RabbitMQ, Redis transports are feature complete,
but there's also experimental support for a myriad of other solutions, including
using SQLite for local development.
*Celery* can run on a single machine, on multiple machines, or even
across datacenters.
Get Started
===========
If this is the first time you're trying to use Celery, or you are
new to Celery 3.0 coming from previous versions then you should read our
getting started tutorials:
- `First steps with Celery`_
Tutorial teaching you the bare minimum needed to get started with Celery.
- `Next steps`_
A more complete overview, showing more features.
.. _`First steps with Celery`:
http://docs.celeryproject.org/en/latest/getting-started/first-steps-with-celery.html
.. _`Next steps`:
http://docs.celeryproject.org/en/latest/getting-started/next-steps.html
Celery is...
============
- **Simple**
Celery is easy to use and maintain, and does *not need configuration files*.
It has an active, friendly community you can talk to for support,
including a `mailing-list`_ and an IRC channel.
Here's one of the simplest applications you can make::
from celery import Celery
app = Celery('hello', broker='amqp://guest@localhost//')
@app.task
def hello():
return 'hello world'
- **Highly Available**
Workers and clients will automatically retry in the event
of connection loss or failure, and some brokers support
HA in way of *Master/Master* or *Master/Slave* replication.
- **Fast**
A single Celery process can process millions of tasks a minute,
with sub-millisecond round-trip latency (using RabbitMQ,
py-librabbitmq, and optimized settings).
- **Flexible**
Almost every part of *Celery* can be extended or used on its own,
Custom pool implementations, serializers, compression schemes, logging,
schedulers, consumers, producers, autoscalers, broker transports and much more.
It supports...
==============
- **Message Transports**
- RabbitMQ_, Redis_,
- MongoDB_ (experimental), Amazon SQS (experimental),
- CouchDB_ (experimental), SQLAlchemy_ (experimental),
- Django ORM (experimental), `IronMQ`_
- and more...
- **Concurrency**
- Prefork, Eventlet_, gevent_, threads/single threaded
- **Result Stores**
- AMQP, Redis
- memcached, MongoDB
- SQLAlchemy, Django ORM
- Apache Cassandra, IronCache
- **Serialization**
- *pickle*, *json*, *yaml*, *msgpack*.
- *zlib*, *bzip2* compression.
- Cryptographic message signing.
.. _`Eventlet`: http://eventlet.net/
.. _`gevent`: http://gevent.org/
.. _RabbitMQ: http://rabbitmq.com
.. _Redis: http://redis.io
.. _MongoDB: http://mongodb.org
.. _Beanstalk: http://kr.github.com/beanstalkd
.. _CouchDB: http://couchdb.apache.org
.. _SQLAlchemy: http://sqlalchemy.org
.. _`IronMQ`: http://iron.io
Framework Integration
=====================
Celery is easy to integrate with web frameworks, some of which even have
integration packages:
+--------------------+----------------------------------------------------+
| `Django`_ | not needed |
+--------------------+----------------------------------------------------+
| `Pyramid`_ | `pyramid_celery`_ |
+--------------------+----------------------------------------------------+
| `Pylons`_ | `celery-pylons`_ |
+--------------------+----------------------------------------------------+
| `Flask`_ | not needed |
+--------------------+----------------------------------------------------+
| `web2py`_ | `web2py-celery`_ |
+--------------------+----------------------------------------------------+
| `Tornado`_ | `tornado-celery`_ | `another tornado-celery`_ |
+--------------------+----------------------------------------------------+
The integration packages are not strictly necessary, but they can make
development easier, and sometimes they add important hooks like closing
database connections at ``fork``.
.. _`Django`: http://djangoproject.com/
.. _`Pylons`: http://www.pylonsproject.org/
.. _`Flask`: http://flask.pocoo.org/
.. _`web2py`: http://web2py.com/
.. _`Bottle`: http://bottlepy.org/
.. _`Pyramid`: http://docs.pylonsproject.org/en/latest/docs/pyramid.html
.. _`pyramid_celery`: http://pypi.python.org/pypi/pyramid_celery/
.. _`django-celery`: http://pypi.python.org/pypi/django-celery
.. _`celery-pylons`: http://pypi.python.org/pypi/celery-pylons
.. _`web2py-celery`: http://code.google.com/p/web2py-celery/
.. _`Tornado`: http://www.tornadoweb.org/
.. _`tornado-celery`: http://github.com/mher/tornado-celery/
.. _`another tornado-celery`: https://github.com/mayflaver/tornado-celery
.. _celery-documentation:
Documentation
=============
The `latest documentation`_ with user guides, tutorials and API reference
is hosted at Read The Docs.
.. _`latest documentation`: http://docs.celeryproject.org/en/latest/
.. _celery-installation:
Installation
============
You can install Celery either via the Python Package Index (PyPI)
or from source.
To install using `pip`,::
$ pip install -U Celery
To install using `easy_install`,::
$ easy_install -U Celery
.. _bundles:
Bundles
-------
Celery also defines a group of bundles that can be used
to install Celery and the dependencies for a given feature.
You can specify these in your requirements or on the ``pip`` command-line
by using brackets. Multiple bundles can be specified by separating them by
commas.
::
$ pip install "celery[librabbitmq]"
$ pip install "celery[librabbitmq,redis,auth,msgpack]"
The following bundles are available:
Serializers
~~~~~~~~~~~
:celery[auth]:
for using the auth serializer.
:celery[msgpack]:
for using the msgpack serializer.
:celery[yaml]:
for using the yaml serializer.
Concurrency
~~~~~~~~~~~
:celery[eventlet]:
for using the eventlet pool.
:celery[gevent]:
for using the gevent pool.
:celery[threads]:
for using the thread pool.
Transports and Backends
~~~~~~~~~~~~~~~~~~~~~~~
:celery[librabbitmq]:
for using the librabbitmq C library.
:celery[redis]:
for using Redis as a message transport or as a result backend.
:celery[mongodb]:
for using MongoDB as a message transport (*experimental*),
or as a result backend (*supported*).
:celery[sqs]:
for using Amazon SQS as a message transport (*experimental*).
:celery[memcache]:
for using memcached as a result backend.
:celery[cassandra]:
for using Apache Cassandra as a result backend.
:celery[couchdb]:
for using CouchDB as a message transport (*experimental*).
:celery[couchbase]:
for using CouchBase as a result backend.
:celery[beanstalk]:
for using Beanstalk as a message transport (*experimental*).
:celery[zookeeper]:
for using Zookeeper as a message transport.
:celery[zeromq]:
for using ZeroMQ as a message transport (*experimental*).
:celery[sqlalchemy]:
for using SQLAlchemy as a message transport (*experimental*),
or as a result backend (*supported*).
:celery[pyro]:
for using the Pyro4 message transport (*experimental*).
:celery[slmq]:
for using the SoftLayer Message Queue transport (*experimental*).
.. _celery-installing-from-source:
Downloading and installing from source
--------------------------------------
Download the latest version of Celery from
http://pypi.python.org/pypi/celery/
You can install it by doing the following::
$ tar xvfz celery-0.0.0.tar.gz
$ cd celery-0.0.0
$ python setup.py build
# python setup.py install
The last command must be executed as a privileged user if
you are not currently using a virtualenv.
.. _celery-installing-from-git:
Using the development version
-----------------------------
With pip
~~~~~~~~
The Celery development version also requires the development
versions of ``kombu``, ``amqp`` and ``billiard``.
You can install the latest snapshot of these using the following
pip commands::
$ pip install https://github.com/celery/celery/zipball/master#egg=celery
$ pip install https://github.com/celery/billiard/zipball/master#egg=billiard
$ pip install https://github.com/celery/py-amqp/zipball/master#egg=amqp
$ pip install https://github.com/celery/kombu/zipball/master#egg=kombu
With git
~~~~~~~~
Please see the Contributing section.
.. _getting-help:
Getting Help
============
.. _mailing-list:
Mailing list
------------
For discussions about the usage, development, and future of celery,
please join the `celery-users`_ mailing list.
.. _`celery-users`: http://groups.google.com/group/celery-users/
.. _irc-channel:
IRC
---
Come chat with us on IRC. The **#celery** channel is located at the `Freenode`_
network.
.. _`Freenode`: http://freenode.net
.. _bug-tracker:
Bug tracker
===========
If you have any suggestions, bug reports or annoyances please report them
to our issue tracker at http://github.com/celery/celery/issues/
.. _wiki:
Wiki
====
http://wiki.github.com/celery/celery/
.. _maintainers:
Maintainers
===========
- `@ask`_ (primary maintainer)
- `@thedrow`_
- `@chrisgogreen`_
- `@PMickael`_
- `@malinoff`_
- And you? We really need more: https://github.com/celery/celery/issues/2534
.. _`@ask`: http://github.com/ask
.. _`@thedrow`: http://github.com/thedrow
.. _`@chrisgogreen`: http://github.com/chrisgogreen
.. _`@PMickael`: http://github.com/PMickael
.. _`@malinoff`: http://github.com/malinoff
.. _contributing-short:
Contributing
============
Development of `celery` happens at Github: http://github.com/celery/celery
You are highly encouraged to participate in the development
of `celery`. If you don't like Github (for some reason) you're welcome
to send regular patches.
Be sure to also read the `Contributing to Celery`_ section in the
documentation.
.. _`Contributing to Celery`:
http://docs.celeryproject.org/en/master/contributing.html
.. _license:
License
=======
This software is licensed under the `New BSD License`. See the ``LICENSE``
file in the top distribution directory for the full license text.
.. # vim: syntax=rst expandtab tabstop=4 shiftwidth=4 shiftround
.. image:: https://d2weczhvl823v0.cloudfront.net/celery/celery/trend.png
:alt: Bitdeli badge
:target: https://bitdeli.com/free
.. |build-status| image:: https://travis-ci.org/celery/celery.svg?branch=master
:target: https://travis-ci.org/celery/celery
.. |coverage-status| image:: https://coveralls.io/repos/celery/celery/badge.svg
:target: https://coveralls.io/r/celery/celery
[end of README.rst]
[start of celery/app/defaults.py]
# -*- coding: utf-8 -*-
"""
celery.app.defaults
~~~~~~~~~~~~~~~~~~~
Configuration introspection and defaults.
"""
from __future__ import absolute_import
import sys
from collections import deque, namedtuple
from datetime import timedelta
from celery.five import items
from celery.utils import strtobool
from celery.utils.functional import memoize
__all__ = ['Option', 'NAMESPACES', 'flatten', 'find']
is_jython = sys.platform.startswith('java')
is_pypy = hasattr(sys, 'pypy_version_info')
DEFAULT_POOL = 'prefork'
if is_jython:
DEFAULT_POOL = 'threads'
elif is_pypy:
if sys.pypy_version_info[0:3] < (1, 5, 0):
DEFAULT_POOL = 'solo'
else:
DEFAULT_POOL = 'prefork'
DEFAULT_ACCEPT_CONTENT = ['json', 'pickle', 'msgpack', 'yaml']
DEFAULT_PROCESS_LOG_FMT = """
[%(asctime)s: %(levelname)s/%(processName)s] %(message)s
""".strip()
DEFAULT_LOG_FMT = '[%(asctime)s: %(levelname)s] %(message)s'
DEFAULT_TASK_LOG_FMT = """[%(asctime)s: %(levelname)s/%(processName)s] \
%(task_name)s[%(task_id)s]: %(message)s"""
_BROKER_OLD = {'deprecate_by': '2.5', 'remove_by': '4.0',
'alt': 'BROKER_URL setting'}
_REDIS_OLD = {'deprecate_by': '2.5', 'remove_by': '4.0',
'alt': 'URL form of CELERY_RESULT_BACKEND'}
searchresult = namedtuple('searchresult', ('namespace', 'key', 'type'))
class Option(object):
alt = None
deprecate_by = None
remove_by = None
typemap = dict(string=str, int=int, float=float, any=lambda v: v,
bool=strtobool, dict=dict, tuple=tuple)
def __init__(self, default=None, *args, **kwargs):
self.default = default
self.type = kwargs.get('type') or 'string'
for attr, value in items(kwargs):
setattr(self, attr, value)
def to_python(self, value):
return self.typemap[self.type](value)
def __repr__(self):
return '<Option: type->{0} default->{1!r}>'.format(self.type,
self.default)
NAMESPACES = {
'BROKER': {
'URL': Option(None, type='string'),
'CONNECTION_TIMEOUT': Option(4, type='float'),
'CONNECTION_RETRY': Option(True, type='bool'),
'CONNECTION_MAX_RETRIES': Option(100, type='int'),
'FAILOVER_STRATEGY': Option(None, type='string'),
'HEARTBEAT': Option(None, type='int'),
'HEARTBEAT_CHECKRATE': Option(3.0, type='int'),
'LOGIN_METHOD': Option(None, type='string'),
'POOL_LIMIT': Option(10, type='int'),
'USE_SSL': Option(False, type='bool'),
'TRANSPORT': Option(type='string'),
'TRANSPORT_OPTIONS': Option({}, type='dict'),
'HOST': Option(type='string', **_BROKER_OLD),
'PORT': Option(type='int', **_BROKER_OLD),
'USER': Option(type='string', **_BROKER_OLD),
'PASSWORD': Option(type='string', **_BROKER_OLD),
'VHOST': Option(type='string', **_BROKER_OLD),
},
'CASSANDRA': {
'COLUMN_FAMILY': Option(type='string'),
'DETAILED_MODE': Option(False, type='bool'),
'KEYSPACE': Option(type='string'),
'READ_CONSISTENCY': Option(type='string'),
'SERVERS': Option(type='list'),
'WRITE_CONSISTENCY': Option(type='string'),
},
'CELERY': {
'ACCEPT_CONTENT': Option(DEFAULT_ACCEPT_CONTENT, type='list'),
'ACKS_LATE': Option(False, type='bool'),
'ALWAYS_EAGER': Option(False, type='bool'),
'ANNOTATIONS': Option(type='any'),
'BROADCAST_QUEUE': Option('celeryctl'),
'BROADCAST_EXCHANGE': Option('celeryctl'),
'BROADCAST_EXCHANGE_TYPE': Option('fanout'),
'CACHE_BACKEND': Option(),
'CACHE_BACKEND_OPTIONS': Option({}, type='dict'),
'CHORD_PROPAGATES': Option(True, type='bool'),
'COUCHBASE_BACKEND_SETTINGS': Option(None, type='dict'),
'CREATE_MISSING_QUEUES': Option(True, type='bool'),
'DEFAULT_RATE_LIMIT': Option(type='string'),
'DISABLE_RATE_LIMITS': Option(False, type='bool'),
'DEFAULT_ROUTING_KEY': Option('celery'),
'DEFAULT_QUEUE': Option('celery'),
'DEFAULT_EXCHANGE': Option('celery'),
'DEFAULT_EXCHANGE_TYPE': Option('direct'),
'DEFAULT_DELIVERY_MODE': Option(2, type='string'),
'EAGER_PROPAGATES_EXCEPTIONS': Option(False, type='bool'),
'ENABLE_UTC': Option(True, type='bool'),
'ENABLE_REMOTE_CONTROL': Option(True, type='bool'),
'EVENT_SERIALIZER': Option('json'),
'EVENT_QUEUE_EXPIRES': Option(60.0, type='float'),
'EVENT_QUEUE_TTL': Option(5.0, type='float'),
'IMPORTS': Option((), type='tuple'),
'INCLUDE': Option((), type='tuple'),
'IGNORE_RESULT': Option(False, type='bool'),
'MAX_CACHED_RESULTS': Option(100, type='int'),
'MESSAGE_COMPRESSION': Option(type='string'),
'MONGODB_BACKEND_SETTINGS': Option(type='dict'),
'REDIS_HOST': Option(type='string', **_REDIS_OLD),
'REDIS_PORT': Option(type='int', **_REDIS_OLD),
'REDIS_DB': Option(type='int', **_REDIS_OLD),
'REDIS_PASSWORD': Option(type='string', **_REDIS_OLD),
'REDIS_MAX_CONNECTIONS': Option(type='int'),
'RESULT_BACKEND': Option(type='string'),
'RESULT_DB_SHORT_LIVED_SESSIONS': Option(False, type='bool'),
'RESULT_DB_TABLENAMES': Option(type='dict'),
'RESULT_DBURI': Option(),
'RESULT_ENGINE_OPTIONS': Option(type='dict'),
'RESULT_EXCHANGE': Option('celeryresults'),
'RESULT_EXCHANGE_TYPE': Option('direct'),
'RESULT_SERIALIZER': Option('json'),
'RESULT_PERSISTENT': Option(None, type='bool'),
'RIAK_BACKEND_SETTINGS': Option(type='dict'),
'ROUTES': Option(type='any'),
'SEND_EVENTS': Option(False, type='bool'),
'SEND_TASK_ERROR_EMAILS': Option(False, type='bool'),
'SEND_TASK_SENT_EVENT': Option(False, type='bool'),
'STORE_ERRORS_EVEN_IF_IGNORED': Option(False, type='bool'),
'TASK_PROTOCOL': Option(1, type='int'),
'TASK_PUBLISH_RETRY': Option(True, type='bool'),
'TASK_PUBLISH_RETRY_POLICY': Option({
'max_retries': 3,
'interval_start': 0,
'interval_max': 1,
'interval_step': 0.2}, type='dict'),
'TASK_RESULT_EXPIRES': Option(timedelta(days=1), type='float'),
'TASK_SERIALIZER': Option('json'),
'TIMEZONE': Option(type='string'),
'TRACK_STARTED': Option(False, type='bool'),
'REDIRECT_STDOUTS': Option(True, type='bool'),
'REDIRECT_STDOUTS_LEVEL': Option('WARNING'),
'QUEUES': Option(type='dict'),
'QUEUE_HA_POLICY': Option(None, type='string'),
'SECURITY_KEY': Option(type='string'),
'SECURITY_CERTIFICATE': Option(type='string'),
'SECURITY_CERT_STORE': Option(type='string'),
'WORKER_DIRECT': Option(False, type='bool'),
},
'CELERYD': {
'AGENT': Option(None, type='string'),
'AUTOSCALER': Option('celery.worker.autoscale:Autoscaler'),
'AUTORELOADER': Option('celery.worker.autoreload:Autoreloader'),
'CONCURRENCY': Option(0, type='int'),
'TIMER': Option(type='string'),
'TIMER_PRECISION': Option(1.0, type='float'),
'FORCE_EXECV': Option(False, type='bool'),
'HIJACK_ROOT_LOGGER': Option(True, type='bool'),
'CONSUMER': Option('celery.worker.consumer:Consumer', type='string'),
'LOG_FORMAT': Option(DEFAULT_PROCESS_LOG_FMT),
'LOG_COLOR': Option(type='bool'),
'LOG_LEVEL': Option('WARN', deprecate_by='2.4', remove_by='4.0',
alt='--loglevel argument'),
'LOG_FILE': Option(deprecate_by='2.4', remove_by='4.0',
alt='--logfile argument'),
'MAX_TASKS_PER_CHILD': Option(type='int'),
'POOL': Option(DEFAULT_POOL),
'POOL_PUTLOCKS': Option(True, type='bool'),
'POOL_RESTARTS': Option(False, type='bool'),
'PREFETCH_MULTIPLIER': Option(4, type='int'),
'STATE_DB': Option(),
'TASK_LOG_FORMAT': Option(DEFAULT_TASK_LOG_FMT),
'TASK_SOFT_TIME_LIMIT': Option(type='float'),
'TASK_TIME_LIMIT': Option(type='float'),
'WORKER_LOST_WAIT': Option(10.0, type='float')
},
'CELERYBEAT': {
'SCHEDULE': Option({}, type='dict'),
'SCHEDULER': Option('celery.beat:PersistentScheduler'),
'SCHEDULE_FILENAME': Option('celerybeat-schedule'),
'SYNC_EVERY': Option(0, type='int'),
'MAX_LOOP_INTERVAL': Option(0, type='float'),
'LOG_LEVEL': Option('INFO', deprecate_by='2.4', remove_by='4.0',
alt='--loglevel argument'),
'LOG_FILE': Option(deprecate_by='2.4', remove_by='4.0',
alt='--logfile argument'),
},
'CELERYMON': {
'LOG_LEVEL': Option('INFO', deprecate_by='2.4', remove_by='4.0',
alt='--loglevel argument'),
'LOG_FILE': Option(deprecate_by='2.4', remove_by='4.0',
alt='--logfile argument'),
'LOG_FORMAT': Option(DEFAULT_LOG_FMT),
},
'EMAIL': {
'HOST': Option('localhost'),
'PORT': Option(25, type='int'),
'HOST_USER': Option(),
'HOST_PASSWORD': Option(),
'TIMEOUT': Option(2, type='float'),
'USE_SSL': Option(False, type='bool'),
'USE_TLS': Option(False, type='bool'),
'CHARSET': Option('us-ascii'),
},
'SERVER_EMAIL': Option('celery@localhost'),
'ADMINS': Option((), type='tuple'),
}
def flatten(d, ns=''):
stack = deque([(ns, d)])
while stack:
name, space = stack.popleft()
for key, value in items(space):
if isinstance(value, dict):
stack.append((name + key + '_', value))
else:
yield name + key, value
DEFAULTS = {key: value.default for key, value in flatten(NAMESPACES)}
def find_deprecated_settings(source):
from celery.utils import warn_deprecated
for name, opt in flatten(NAMESPACES):
if (opt.deprecate_by or opt.remove_by) and getattr(source, name, None):
warn_deprecated(description='The {0!r} setting'.format(name),
deprecation=opt.deprecate_by,
removal=opt.remove_by,
alternative='Use the {0.alt} instead'.format(opt))
return source
@memoize(maxsize=None)
def find(name, namespace='celery'):
# - Try specified namespace first.
namespace = namespace.upper()
try:
return searchresult(
namespace, name.upper(), NAMESPACES[namespace][name.upper()],
)
except KeyError:
# - Try all the other namespaces.
for ns, keys in items(NAMESPACES):
if ns.upper() == name.upper():
return searchresult(None, ns, keys)
elif isinstance(keys, dict):
try:
return searchresult(ns, name.upper(), keys[name.upper()])
except KeyError:
pass
# - See if name is a qualname last.
return searchresult(None, name.upper(), DEFAULTS[name.upper()])
[end of celery/app/defaults.py]
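The `flatten()` helper in the module above is what turns the nested `NAMESPACES` tree into the flat `DEFAULTS` mapping. A minimal sketch with toy data (the real tree holds `Option` objects carrying defaults and types, not bare values):

```python
# Minimal sketch of defaults.py's flatten(): a breadth-first walk in
# which nested dict keys are joined with underscores, so
# NAMESPACES['CELERY']['ACKS_LATE'] becomes the flat key CELERY_ACKS_LATE.
from collections import deque

def flatten(d, ns=''):
    stack = deque([(ns, d)])
    while stack:
        name, space = stack.popleft()
        for key, value in space.items():
            if isinstance(value, dict):
                stack.append((name + key + '_', value))
            else:
                yield name + key, value

# Toy stand-in for the real NAMESPACES tree of Option objects
NAMESPACES = {'CELERY': {'ACKS_LATE': False,
                         'RESULT': {'SERIALIZER': 'json'}}}
DEFAULTS = dict(flatten(NAMESPACES))
```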
[start of celery/app/task.py]
# -*- coding: utf-8 -*-
"""
celery.app.task
~~~~~~~~~~~~~~~
Task Implementation: Task request context, and the base task class.
"""
from __future__ import absolute_import
import sys
from billiard.einfo import ExceptionInfo
from celery import current_app, group
from celery import states
from celery._state import _task_stack
from celery.canvas import signature
from celery.exceptions import Ignore, MaxRetriesExceededError, Reject, Retry
from celery.five import class_property, items
from celery.result import EagerResult
from celery.utils import abstract
from celery.utils import uuid, maybe_reraise
from celery.utils.functional import mattrgetter, maybe_list
from celery.utils.imports import instantiate
from celery.utils.mail import ErrorMail
from .annotations import resolve_all as resolve_all_annotations
from .registry import _unpickle_task_v2
from .utils import appstr
__all__ = ['Context', 'Task']
#: extracts attributes related to publishing a message from an object.
extract_exec_options = mattrgetter(
'queue', 'routing_key', 'exchange', 'priority', 'expires',
'serializer', 'delivery_mode', 'compression', 'time_limit',
'soft_time_limit', 'immediate', 'mandatory', # imm+man is deprecated
)
# We take __repr__ very seriously around here ;)
R_BOUND_TASK = '<class {0.__name__} of {app}{flags}>'
R_UNBOUND_TASK = '<unbound {0.__name__}{flags}>'
R_SELF_TASK = '<@task {0.name} bound to other {0.__self__}>'
R_INSTANCE = '<@task: {0.name} of {app}{flags}>'
#: Here for backwards compatibility as tasks no longer use a custom metaclass.
TaskType = type
def _strflags(flags, default=''):
if flags:
return ' ({0})'.format(', '.join(flags))
return default
def _reprtask(task, fmt=None, flags=None):
flags = list(flags) if flags is not None else []
flags.append('v2 compatible') if task.__v2_compat__ else None
if not fmt:
fmt = R_BOUND_TASK if task._app else R_UNBOUND_TASK
return fmt.format(
task, flags=_strflags(flags),
app=appstr(task._app) if task._app else None,
)
class Context(object):
# Default context
logfile = None
loglevel = None
hostname = None
id = None
args = None
kwargs = None
retries = 0
eta = None
expires = None
is_eager = False
headers = None
delivery_info = None
reply_to = None
root_id = None
parent_id = None
correlation_id = None
taskset = None # compat alias to group
group = None
chord = None
utc = None
called_directly = True
callbacks = None
errbacks = None
timelimit = None
_children = None # see property
_protected = 0
def __init__(self, *args, **kwargs):
self.update(*args, **kwargs)
def update(self, *args, **kwargs):
return self.__dict__.update(*args, **kwargs)
def clear(self):
return self.__dict__.clear()
def get(self, key, default=None):
return getattr(self, key, default)
def __repr__(self):
return '<Context: {0!r}>'.format(vars(self))
@property
def children(self):
# children must be an empty list for every thread
if self._children is None:
self._children = []
return self._children
class Task(object):
"""Task base class.
When called, tasks apply the :meth:`run` method. This method must
be defined by all tasks (that is unless the :meth:`__call__` method
is overridden).
"""
__trace__ = None
__v2_compat__ = False # set by old base in celery.task.base
ErrorMail = ErrorMail
MaxRetriesExceededError = MaxRetriesExceededError
#: Execution strategy used, or the qualified name of one.
Strategy = 'celery.worker.strategy:default'
#: This is the instance bound to if the task is a method of a class.
__self__ = None
#: The application instance associated with this task class.
_app = None
#: Name of the task.
name = None
#: If :const:`True` the task is an abstract base class.
abstract = True
#: Maximum number of retries before giving up. If set to :const:`None`,
#: it will **never** stop retrying.
max_retries = 3
#: Default time in seconds before a retry of the task should be
#: executed. 3 minutes by default.
default_retry_delay = 3 * 60
#: Rate limit for this task type. Examples: :const:`None` (no rate
#: limit), `'100/s'` (hundred tasks a second), `'100/m'` (hundred tasks
#: a minute), `'100/h'` (hundred tasks an hour)
rate_limit = None
#: If enabled the worker will not store task state and return values
#: for this task. Defaults to the :setting:`CELERY_IGNORE_RESULT`
#: setting.
ignore_result = None
#: If enabled the request will keep track of subtasks started by
#: this task, and this information will be sent with the result
#: (``result.children``).
trail = True
#: When enabled errors will be stored even if the task is otherwise
#: configured to ignore results.
store_errors_even_if_ignored = None
#: If enabled an email will be sent to :setting:`ADMINS` whenever a task
#: of this type fails.
send_error_emails = None
#: The name of a serializer that is registered with
#: :mod:`kombu.serialization.registry`. Default is `'pickle'`.
serializer = None
#: Hard time limit.
#: Defaults to the :setting:`CELERYD_TASK_TIME_LIMIT` setting.
time_limit = None
#: Soft time limit.
#: Defaults to the :setting:`CELERYD_TASK_SOFT_TIME_LIMIT` setting.
soft_time_limit = None
#: The result store backend used for this task.
backend = None
#: If disabled this task won't be registered automatically.
autoregister = True
#: If enabled the task will report its status as 'started' when the task
#: is executed by a worker. Disabled by default as the normal behaviour
#: is to not report that level of granularity. Tasks are either pending,
#: finished, or waiting to be retried.
#:
#: Having a 'started' status can be useful for when there are long
#: running tasks and there is a need to report which task is currently
#: running.
#:
#: The application default can be overridden using the
#: :setting:`CELERY_TRACK_STARTED` setting.
track_started = None
#: When enabled messages for this task will be acknowledged **after**
#: the task has been executed, and not *just before* which is the
#: default behavior.
#:
#: Please note that this means the task may be executed twice if the
#: worker crashes mid execution (which may be acceptable for some
#: applications).
#:
#: The application default can be overridden with the
#: :setting:`CELERY_ACKS_LATE` setting.
acks_late = None
#: Tuple of expected exceptions.
#:
#: These are errors that are expected in normal operation
#: and that should not be regarded as a real error by the worker.
#: Currently this means that the state will be updated to an error
#: state, but the worker will not log the event as an error.
throws = ()
#: Default task expiry time.
expires = None
#: Task request stack, the current request will be the topmost.
request_stack = None
#: Some may expect a request to exist even if the task has not been
#: called. This should probably be deprecated.
_default_request = None
_exec_options = None
__bound__ = False
from_config = (
('send_error_emails', 'CELERY_SEND_TASK_ERROR_EMAILS'),
('serializer', 'CELERY_TASK_SERIALIZER'),
('rate_limit', 'CELERY_DEFAULT_RATE_LIMIT'),
('track_started', 'CELERY_TRACK_STARTED'),
('acks_late', 'CELERY_ACKS_LATE'),
('ignore_result', 'CELERY_IGNORE_RESULT'),
('store_errors_even_if_ignored',
'CELERY_STORE_ERRORS_EVEN_IF_IGNORED'),
)
#: ignored
accept_magic_kwargs = False
_backend = None # set by backend property.
__bound__ = False
# - Tasks are lazily bound, so that configuration is not set
# - until the task is actually used
@classmethod
def bind(self, app):
was_bound, self.__bound__ = self.__bound__, True
self._app = app
conf = app.conf
self._exec_options = None # clear option cache
for attr_name, config_name in self.from_config:
if getattr(self, attr_name, None) is None:
setattr(self, attr_name, conf[config_name])
# decorate with annotations from config.
if not was_bound:
self.annotate()
from celery.utils.threads import LocalStack
self.request_stack = LocalStack()
# PeriodicTask uses this to add itself to the PeriodicTask schedule.
self.on_bound(app)
return app
@classmethod
def on_bound(self, app):
"""This method can be defined to do additional actions when the
task class is bound to an app."""
pass
@classmethod
def _get_app(self):
if self._app is None:
self._app = current_app
if not self.__bound__:
# The app property's __set__ method is not called
# if Task.app is set (on the class), so must bind on use.
self.bind(self._app)
return self._app
app = class_property(_get_app, bind)
@classmethod
def annotate(self):
for d in resolve_all_annotations(self.app.annotations, self):
for key, value in items(d):
if key.startswith('@'):
self.add_around(key[1:], value)
else:
setattr(self, key, value)
@classmethod
def add_around(self, attr, around):
orig = getattr(self, attr)
if getattr(orig, '__wrapped__', None):
orig = orig.__wrapped__
meth = around(orig)
meth.__wrapped__ = orig
setattr(self, attr, meth)
def __call__(self, *args, **kwargs):
_task_stack.push(self)
self.push_request(args=args, kwargs=kwargs)
try:
# add self if this is a bound task
if self.__self__ is not None:
return self.run(self.__self__, *args, **kwargs)
return self.run(*args, **kwargs)
finally:
self.pop_request()
_task_stack.pop()
def __reduce__(self):
# - tasks are pickled into the name of the task only, and the receiver
# - simply grabs it from the local registry.
# - in later versions the module of the task is also included,
# - and the receiving side tries to import that module so that
# - it will work even if the task has not been registered.
mod = type(self).__module__
mod = mod if mod and mod in sys.modules else None
return (_unpickle_task_v2, (self.name, mod), None)
def run(self, *args, **kwargs):
"""The body of the task executed by workers."""
raise NotImplementedError('Tasks must define the run method.')
def start_strategy(self, app, consumer, **kwargs):
return instantiate(self.Strategy, self, app, consumer, **kwargs)
def delay(self, *args, **kwargs):
"""Star argument version of :meth:`apply_async`.
Does not support the extra options enabled by :meth:`apply_async`.
:param \*args: positional arguments passed on to the task.
:param \*\*kwargs: keyword arguments passed on to the task.
:returns :class:`celery.result.AsyncResult`:
"""
return self.apply_async(args, kwargs)
def apply_async(self, args=None, kwargs=None, task_id=None, producer=None,
link=None, link_error=None, shadow=None, **options):
"""Apply tasks asynchronously by sending a message.
:keyword args: The positional arguments to pass on to the
task (a :class:`list` or :class:`tuple`).
:keyword kwargs: The keyword arguments to pass on to the
task (a :class:`dict`)
:keyword countdown: Number of seconds into the future that the
task should execute. Defaults to immediate
execution.
:keyword eta: A :class:`~datetime.datetime` object describing
the absolute time and date of when the task should
be executed. May not be specified if `countdown`
is also supplied.
:keyword expires: Either a :class:`int`, describing the number of
seconds, or a :class:`~datetime.datetime` object
that describes the absolute time and date of when
the task should expire. The task will not be
executed after the expiration time.
:keyword shadow: Override task name used in logs/monitoring
(default from :meth:`shadow_name`).
:keyword connection: Re-use existing broker connection instead
of establishing a new one.
:keyword retry: If enabled sending of the task message will be retried
in the event of connection loss or failure. Default
is taken from the :setting:`CELERY_TASK_PUBLISH_RETRY`
setting. Note that you need to handle the
producer/connection manually for this to work.
:keyword retry_policy: Override the retry policy used. See the
:setting:`CELERY_TASK_PUBLISH_RETRY_POLICY`
setting.
:keyword routing_key: Custom routing key used to route the task to a
worker server. If in combination with a
``queue`` argument only used to specify custom
routing keys to topic exchanges.
:keyword queue: The queue to route the task to. This must be a key
present in :setting:`CELERY_QUEUES`, or
:setting:`CELERY_CREATE_MISSING_QUEUES` must be
enabled. See :ref:`guide-routing` for more
information.
:keyword exchange: Named custom exchange to send the task to.
Usually not used in combination with the ``queue``
argument.
:keyword priority: The task priority, a number between 0 and 9.
Defaults to the :attr:`priority` attribute.
:keyword serializer: A string identifying the default
serialization method to use. Can be `pickle`,
`json`, `yaml`, `msgpack` or any custom
serialization method that has been registered
with :mod:`kombu.serialization.registry`.
Defaults to the :attr:`serializer` attribute.
:keyword compression: A string identifying the compression method
to use. Can be one of ``zlib``, ``bzip2``,
or any custom compression methods registered with
:func:`kombu.compression.register`. Defaults to
the :setting:`CELERY_MESSAGE_COMPRESSION`
setting.
:keyword link: A single, or a list of tasks to apply if the
task exits successfully.
:keyword link_error: A single, or a list of tasks to apply
if an error occurs while executing the task.
:keyword producer: :class:`kombu.Producer` instance to use.
:keyword add_to_parent: If set to True (default) and the task
is applied while executing another task, then the result
will be appended to the parent tasks ``request.children``
attribute. Trailing can also be disabled by default using the
:attr:`trail` attribute
:keyword publisher: Deprecated alias to ``producer``.
:keyword headers: Message headers to be sent in the
task (a :class:`dict`)
:rtype :class:`celery.result.AsyncResult`: if
:setting:`CELERY_ALWAYS_EAGER` is not set, otherwise
:class:`celery.result.EagerResult`:
Also supports all keyword arguments supported by
:meth:`kombu.Producer.publish`.
.. note::
If the :setting:`CELERY_ALWAYS_EAGER` setting is set, it will
be replaced by a local :func:`apply` call instead.
"""
try:
check_arguments = self.__header__
except AttributeError:
pass
else:
check_arguments(*(args or ()), **(kwargs or {}))
app = self._get_app()
if app.conf.CELERY_ALWAYS_EAGER:
return self.apply(args, kwargs, task_id=task_id or uuid(),
link=link, link_error=link_error, **options)
# add 'self' if this is a "task_method".
if self.__self__ is not None:
args = args if isinstance(args, tuple) else tuple(args or ())
args = (self.__self__,) + args
shadow = shadow or self.shadow_name(args, kwargs, options)
preopts = self._get_exec_options()
options = dict(preopts, **options) if options else preopts
return app.send_task(
self.name, args, kwargs, task_id=task_id, producer=producer,
link=link, link_error=link_error, result_cls=self.AsyncResult,
shadow=shadow,
**options
)
def shadow_name(self, args, kwargs, options):
"""Override for custom task name in worker logs/monitoring.
:param args: Task positional arguments.
:param kwargs: Task keyword arguments.
:param options: Task execution options.
**Example**:
.. code-block:: python
from celery.utils.imports import qualname
def shadow_name(task, args, kwargs, options):
return qualname(args[0])
@app.task(shadow_name=shadow_name, serializer='pickle')
def apply_function_async(fun, *args, **kwargs):
return fun(*args, **kwargs)
"""
pass
def signature_from_request(self, request=None, args=None, kwargs=None,
queue=None, **extra_options):
request = self.request if request is None else request
args = request.args if args is None else args
kwargs = request.kwargs if kwargs is None else kwargs
limit_hard, limit_soft = request.timelimit or (None, None)
options = {
'task_id': request.id,
'link': request.callbacks,
'link_error': request.errbacks,
'group_id': request.group,
'chord': request.chord,
'soft_time_limit': limit_soft,
'time_limit': limit_hard,
'reply_to': request.reply_to,
'headers': request.headers,
}
options.update(
{'queue': queue} if queue else (request.delivery_info or {}),
)
return self.signature(
args, kwargs, options, type=self, **extra_options
)
subtask_from_request = signature_from_request
def retry(self, args=None, kwargs=None, exc=None, throw=True,
eta=None, countdown=None, max_retries=None, **options):
"""Retry the task.
:param args: Positional arguments to retry with.
:param kwargs: Keyword arguments to retry with.
:keyword exc: Custom exception to report when the max restart
limit has been exceeded (default:
:exc:`~@MaxRetriesExceededError`).
If this argument is set and retry is called while
an exception was raised (``sys.exc_info()`` is set)
it will attempt to reraise the current exception.
If no exception was raised it will raise the ``exc``
argument provided.
:keyword countdown: Time in seconds to delay the retry for.
:keyword eta: Explicit time and date to run the retry at
(must be a :class:`~datetime.datetime` instance).
:keyword max_retries: If set, overrides the default retry limit.
A value of :const:`None`, means "use the default", so if you want
infinite retries you would have to set the :attr:`max_retries`
attribute of the task to :const:`None` first.
:keyword time_limit: If set, overrides the default time limit.
:keyword soft_time_limit: If set, overrides the default soft
time limit.
:keyword \*\*options: Any extra options to pass on to
:meth:`apply_async`.
:keyword throw: If this is :const:`False`, do not raise the
:exc:`~@Retry` exception,
that tells the worker to mark the task as being
retried. Note that this means the task will be
marked as failed if the task raises an exception,
or successful if it returns.
:raises celery.exceptions.Retry: To tell the worker that
the task has been re-sent for retry. This always happens,
unless the `throw` keyword argument has been explicitly set
to :const:`False`, and is considered normal operation.
**Example**
.. code-block:: pycon
>>> from imaginary_twitter_lib import Twitter
>>> from proj.celery import app
>>> @app.task(bind=True)
... def tweet(self, auth, message):
... twitter = Twitter(oauth=auth)
... try:
... twitter.post_status_update(message)
... except twitter.FailWhale as exc:
... # Retry in 5 minutes.
... raise self.retry(countdown=60 * 5, exc=exc)
Although the task will never return above as `retry` raises an
exception to notify the worker, we use `raise` in front of the retry
to convey that the rest of the block will not be executed.
"""
request = self.request
retries = request.retries + 1
max_retries = self.max_retries if max_retries is None else max_retries
# Not in worker or emulated by (apply/always_eager),
# so just raise the original exception.
if request.called_directly:
maybe_reraise() # raise orig stack if PyErr_Occurred
raise exc or Retry('Task can be retried', None)
if not eta and countdown is None:
countdown = self.default_retry_delay
is_eager = request.is_eager
S = self.signature_from_request(
request, args, kwargs,
countdown=countdown, eta=eta, retries=retries,
**options
)
if max_retries is not None and retries > max_retries:
if exc:
# first try to reraise the original exception
maybe_reraise()
# or if not in an except block then raise the custom exc.
raise exc
raise self.MaxRetriesExceededError(
"Can't retry {0}[{1}] args:{2} kwargs:{3}".format(
self.name, request.id, S.args, S.kwargs))
ret = Retry(exc=exc, when=eta or countdown)
if is_eager:
# if task was executed eagerly using apply(),
# then the retry must also be executed eagerly.
S.apply().get()
if throw:
raise ret
return ret
try:
S.apply_async()
except Exception as exc:
raise Reject(exc, requeue=False)
if throw:
raise ret
return ret
def apply(self, args=None, kwargs=None,
link=None, link_error=None, **options):
"""Execute this task locally, by blocking until the task returns.
:param args: positional arguments passed on to the task.
:param kwargs: keyword arguments passed on to the task.
:keyword throw: Re-raise task exceptions. Defaults to
the :setting:`CELERY_EAGER_PROPAGATES_EXCEPTIONS`
setting.
:rtype :class:`celery.result.EagerResult`:
"""
# trace imports Task, so need to import inline.
from celery.app.trace import build_tracer
app = self._get_app()
args = args or ()
# add 'self' if this is a bound method.
if self.__self__ is not None:
args = (self.__self__,) + tuple(args)
kwargs = kwargs or {}
task_id = options.get('task_id') or uuid()
retries = options.get('retries', 0)
throw = app.either('CELERY_EAGER_PROPAGATES_EXCEPTIONS',
options.pop('throw', None))
# Make sure we get the task instance, not class.
task = app._tasks[self.name]
request = {'id': task_id,
'retries': retries,
'is_eager': True,
'logfile': options.get('logfile'),
'loglevel': options.get('loglevel', 0),
'callbacks': maybe_list(link),
'errbacks': maybe_list(link_error),
'headers': options.get('headers'),
'delivery_info': {'is_eager': True}}
tb = None
tracer = build_tracer(
task.name, task, eager=True,
propagate=throw, app=self._get_app(),
)
ret = tracer(task_id, args, kwargs, request)
retval = ret.retval
if isinstance(retval, ExceptionInfo):
retval, tb = retval.exception, retval.traceback
state = states.SUCCESS if ret.info is None else ret.info.state
return EagerResult(task_id, retval, state, traceback=tb)
def AsyncResult(self, task_id, **kwargs):
"""Get AsyncResult instance for this kind of task.
:param task_id: Task id to get result for.
"""
return self._get_app().AsyncResult(task_id, backend=self.backend,
task_name=self.name, **kwargs)
def signature(self, args=None, *starargs, **starkwargs):
"""Return :class:`~celery.signature` object for
this task, wrapping arguments and execution options
for a single task invocation."""
starkwargs.setdefault('app', self.app)
return signature(self, args, *starargs, **starkwargs)
subtask = signature
def s(self, *args, **kwargs):
"""``.s(*a, **k) -> .signature(a, k)``"""
return self.signature(args, kwargs)
def si(self, *args, **kwargs):
"""``.si(*a, **k) -> .signature(a, k, immutable=True)``"""
return self.signature(args, kwargs, immutable=True)
def chunks(self, it, n):
"""Creates a :class:`~celery.canvas.chunks` task for this task."""
from celery import chunks
return chunks(self.s(), it, n, app=self.app)
def map(self, it):
"""Creates a :class:`~celery.canvas.xmap` task from ``it``."""
from celery import xmap
return xmap(self.s(), it, app=self.app)
def starmap(self, it):
"""Creates a :class:`~celery.canvas.xstarmap` task from ``it``."""
from celery import xstarmap
return xstarmap(self.s(), it, app=self.app)
def send_event(self, type_, **fields):
req = self.request
with self.app.events.default_dispatcher(hostname=req.hostname) as d:
return d.send(type_, uuid=req.id, **fields)
def replace(self, sig):
"""Replace the current task, with a new task inheriting the
same task id.
:param sig: :class:`@signature`
Note: This will raise :exc:`~@Ignore`, so the best practice
is to always use ``raise self.replace(...)`` to convey
to the reader that the task will not continue after being replaced.
"""
chord = self.request.chord
if isinstance(sig, group):
sig |= self.app.tasks['celery.accumulate'].s(index=0).set(
chord=chord,
)
chord = None
sig.freeze(self.request.id,
group_id=self.request.group,
chord=chord,
root_id=self.request.root_id)
sig.delay()
raise Ignore('Chord member replaced by new task')
def add_to_chord(self, sig, lazy=False):
"""Add signature to the chord the current task is a member of.
:param sig: Signature to extend chord with.
:param lazy: If enabled the new task will not actually be called,
and ``sig.delay()`` must be called manually.
Currently only supported by the Redis result backend when
``?new_join=1`` is enabled.
"""
if not self.request.chord:
raise ValueError('Current task is not member of any chord')
result = sig.freeze(group_id=self.request.group,
chord=self.request.chord,
root_id=self.request.root_id)
self.backend.add_to_chord(self.request.group, result)
return sig.delay() if not lazy else sig
def update_state(self, task_id=None, state=None, meta=None):
"""Update task state.
:keyword task_id: Id of the task to update, defaults to the
id of the current task
:keyword state: New state (:class:`str`).
:keyword meta: State metadata (:class:`dict`).
"""
if task_id is None:
task_id = self.request.id
self.backend.store_result(task_id, meta, state)
def on_success(self, retval, task_id, args, kwargs):
"""Success handler.
Run by the worker if the task executes successfully.
:param retval: The return value of the task.
:param task_id: Unique id of the executed task.
:param args: Original arguments for the executed task.
:param kwargs: Original keyword arguments for the executed task.
The return value of this handler is ignored.
"""
pass
def on_retry(self, exc, task_id, args, kwargs, einfo):
"""Retry handler.
This is run by the worker when the task is to be retried.
:param exc: The exception sent to :meth:`retry`.
:param task_id: Unique id of the retried task.
:param args: Original arguments for the retried task.
:param kwargs: Original keyword arguments for the retried task.
:keyword einfo: :class:`~billiard.einfo.ExceptionInfo`
instance, containing the traceback.
The return value of this handler is ignored.
"""
pass
def on_failure(self, exc, task_id, args, kwargs, einfo):
"""Error handler.
This is run by the worker when the task fails.
:param exc: The exception raised by the task.
:param task_id: Unique id of the failed task.
:param args: Original arguments for the task that failed.
:param kwargs: Original keyword arguments for the task
that failed.
:keyword einfo: :class:`~billiard.einfo.ExceptionInfo`
instance, containing the traceback.
The return value of this handler is ignored.
"""
pass
def after_return(self, status, retval, task_id, args, kwargs, einfo):
"""Handler called after the task returns.
:param status: Current task state.
:param retval: Task return value/exception.
:param task_id: Unique id of the task.
:param args: Original arguments for the task.
:param kwargs: Original keyword arguments for the task.
:keyword einfo: :class:`~billiard.einfo.ExceptionInfo`
instance, containing the traceback (if any).
The return value of this handler is ignored.
"""
pass
def send_error_email(self, context, exc, **kwargs):
if self.send_error_emails and \
not getattr(self, 'disable_error_emails', None):
self.ErrorMail(self, **kwargs).send(context, exc)
def add_trail(self, result):
if self.trail:
self.request.children.append(result)
return result
def push_request(self, *args, **kwargs):
self.request_stack.push(Context(*args, **kwargs))
def pop_request(self):
self.request_stack.pop()
def __repr__(self):
"""`repr(task)`"""
return _reprtask(self, R_SELF_TASK if self.__self__ else R_INSTANCE)
def _get_request(self):
"""Get current request object."""
req = self.request_stack.top
if req is None:
# task was not called, but some may still expect a request
# to be there, perhaps that should be deprecated.
if self._default_request is None:
self._default_request = Context()
return self._default_request
return req
request = property(_get_request)
def _get_exec_options(self):
if self._exec_options is None:
self._exec_options = extract_exec_options(self)
return self._exec_options
@property
def backend(self):
backend = self._backend
if backend is None:
return self.app.backend
return backend
@backend.setter
def backend(self, value): # noqa
self._backend = value
@property
def __name__(self):
return self.__class__.__name__
abstract.CallableTask.register(Task)
BaseTask = Task # compat alias
[end of celery/app/task.py]
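For reference, the ``Context`` request object defined above can be exercised on its own. The following is a trimmed, standalone sketch (not the shipped class; only the attributes needed for the demonstration are kept) showing the ``update``/``get`` semantics and the lazily created ``children`` list:

```python
class Context(object):
    # Class-level defaults, shadowed per-instance by update().
    retries = 0
    called_directly = True
    _children = None

    def __init__(self, *args, **kwargs):
        self.update(*args, **kwargs)

    def update(self, *args, **kwargs):
        # Context is a thin wrapper around the instance __dict__.
        return self.__dict__.update(*args, **kwargs)

    def get(self, key, default=None):
        return getattr(self, key, default)

    @property
    def children(self):
        # Created lazily so every request gets its own list.
        if self._children is None:
            self._children = []
        return self._children


ctx = Context(id='42-abc', retries=2)
assert ctx.get('id') == '42-abc'        # set via update() in __init__
assert ctx.retries == 2                 # instance dict shadows the class default
assert ctx.get('missing', 'x') == 'x'   # falls back to the supplied default
assert ctx.children == []               # list created on first access
```

This mirrors why ``_children`` is a property in the real class: a plain class attribute would be shared across all requests, while the lazy property guarantees each request (and thread) its own list.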
[start of celery/worker/request.py]
# -*- coding: utf-8 -*-
"""
celery.worker.request
~~~~~~~~~~~~~~~~~~~~~
This module defines the :class:`Request` class,
which specifies how tasks are executed.
"""
from __future__ import absolute_import, unicode_literals
import logging
import socket
import sys
from datetime import datetime
from weakref import ref
from kombu.utils.encoding import safe_repr, safe_str
from celery import signals
from celery.app.trace import trace_task, trace_task_ret
from celery.exceptions import (
Ignore, TaskRevokedError, InvalidTaskError,
SoftTimeLimitExceeded, TimeLimitExceeded,
WorkerLostError, Terminated, Retry, Reject,
)
from celery.five import string
from celery.platforms import signals as _signals
from celery.utils.functional import noop
from celery.utils.log import get_logger
from celery.utils.timeutils import maybe_iso8601, timezone, maybe_make_aware
from celery.utils.serialization import get_pickled_exception
from . import state
__all__ = ['Request']
IS_PYPY = hasattr(sys, 'pypy_version_info')
logger = get_logger(__name__)
debug, info, warn, error = (logger.debug, logger.info,
logger.warning, logger.error)
_does_info = False
_does_debug = False
def __optimize__():
# this is also called by celery.app.trace.setup_worker_optimizations
global _does_debug
global _does_info
_does_debug = logger.isEnabledFor(logging.DEBUG)
_does_info = logger.isEnabledFor(logging.INFO)
__optimize__()
# Localize
tz_utc = timezone.utc
tz_or_local = timezone.tz_or_local
send_revoked = signals.task_revoked.send
task_accepted = state.task_accepted
task_ready = state.task_ready
revoked_tasks = state.revoked
class Request(object):
"""A request for task execution."""
acknowledged = False
time_start = None
worker_pid = None
time_limits = (None, None)
_already_revoked = False
_terminate_on_ack = None
_apply_result = None
_tzlocal = None
if not IS_PYPY: # pragma: no cover
__slots__ = (
'app', 'type', 'name', 'id', 'on_ack', 'body',
'hostname', 'eventer', 'connection_errors', 'task', 'eta',
'expires', 'request_dict', 'on_reject', 'utc',
'content_type', 'content_encoding',
'__weakref__', '__dict__',
)
def __init__(self, message, on_ack=noop,
hostname=None, eventer=None, app=None,
connection_errors=None, request_dict=None,
task=None, on_reject=noop, body=None,
headers=None, decoded=False, utc=True,
maybe_make_aware=maybe_make_aware,
maybe_iso8601=maybe_iso8601, **opts):
if headers is None:
headers = message.headers
if body is None:
body = message.body
self.app = app
self.message = message
self.body = body
self.utc = utc
if decoded:
self.content_type = self.content_encoding = None
else:
self.content_type, self.content_encoding = (
message.content_type, message.content_encoding,
)
self.id = headers['id']
type = self.type = self.name = headers['task']
if 'shadow' in headers:
self.name = headers['shadow']
if 'timelimit' in headers:
self.time_limits = headers['timelimit']
self.on_ack = on_ack
self.on_reject = on_reject
self.hostname = hostname or socket.gethostname()
self.eventer = eventer
self.connection_errors = connection_errors or ()
self.task = task or self.app.tasks[type]
# timezone means the message is timezone-aware, and the only timezone
# supported at this point is UTC.
eta = headers.get('eta')
if eta is not None:
try:
eta = maybe_iso8601(eta)
except (AttributeError, ValueError, TypeError) as exc:
raise InvalidTaskError(
'invalid eta value {0!r}: {1}'.format(eta, exc))
self.eta = maybe_make_aware(eta, self.tzlocal)
else:
self.eta = None
expires = headers.get('expires')
if expires is not None:
try:
expires = maybe_iso8601(expires)
except (AttributeError, ValueError, TypeError) as exc:
raise InvalidTaskError(
'invalid expires value {0!r}: {1}'.format(expires, exc))
self.expires = maybe_make_aware(expires, self.tzlocal)
else:
self.expires = None
delivery_info = message.delivery_info or {}
properties = message.properties or {}
headers.update({
'reply_to': properties.get('reply_to'),
'correlation_id': properties.get('correlation_id'),
'delivery_info': {
'exchange': delivery_info.get('exchange'),
'routing_key': delivery_info.get('routing_key'),
'priority': delivery_info.get('priority'),
'redelivered': delivery_info.get('redelivered'),
}
})
self.request_dict = headers
@property
def delivery_info(self):
return self.request_dict['delivery_info']
def execute_using_pool(self, pool, **kwargs):
"""Used by the worker to send this task to the pool.
:param pool: A :class:`celery.concurrency.base.TaskPool` instance.
:raises celery.exceptions.TaskRevokedError: if the task was revoked
and ignored.
"""
task_id = self.id
task = self.task
if self.revoked():
raise TaskRevokedError(task_id)
time_limit, soft_time_limit = self.time_limits
time_limit = time_limit or task.time_limit
soft_time_limit = soft_time_limit or task.soft_time_limit
result = pool.apply_async(
trace_task_ret,
args=(self.type, task_id, self.request_dict, self.body,
self.content_type, self.content_encoding),
accept_callback=self.on_accepted,
timeout_callback=self.on_timeout,
callback=self.on_success,
error_callback=self.on_failure,
soft_timeout=soft_time_limit,
timeout=time_limit,
correlation_id=task_id,
)
# cannot create weakref to None
self._apply_result = ref(result) if result is not None else result
return result
def execute(self, loglevel=None, logfile=None):
"""Execute the task in a :func:`~celery.app.trace.trace_task`.
:keyword loglevel: The loglevel used by the task.
:keyword logfile: The logfile used by the task.
"""
if self.revoked():
return
# acknowledge task as being processed.
if not self.task.acks_late:
self.acknowledge()
request = self.request_dict
args, kwargs, embed = self.message.payload
request.update({'loglevel': loglevel, 'logfile': logfile,
'hostname': self.hostname, 'is_eager': False,
'args': args, 'kwargs': kwargs}, **embed or {})
retval = trace_task(self.task, self.id, args, kwargs, request,
hostname=self.hostname, loader=self.app.loader,
app=self.app)[0]
self.acknowledge()
return retval
def maybe_expire(self):
"""If expired, mark the task as revoked."""
if self.expires:
now = datetime.now(self.expires.tzinfo)
if now > self.expires:
revoked_tasks.add(self.id)
return True
def terminate(self, pool, signal=None):
signal = _signals.signum(signal or 'TERM')
if self.time_start:
pool.terminate_job(self.worker_pid, signal)
self._announce_revoked('terminated', True, signal, False)
else:
self._terminate_on_ack = pool, signal
if self._apply_result is not None:
obj = self._apply_result() # is a weakref
if obj is not None:
obj.terminate(signal)
def _announce_revoked(self, reason, terminated, signum, expired):
task_ready(self)
self.send_event('task-revoked',
terminated=terminated, signum=signum, expired=expired)
if self.store_errors:
self.task.backend.mark_as_revoked(self.id, reason, request=self)
self.acknowledge()
self._already_revoked = True
send_revoked(self.task, request=self,
terminated=terminated, signum=signum, expired=expired)
def revoked(self):
"""If revoked, skip task and mark state."""
expired = False
if self._already_revoked:
return True
if self.expires:
expired = self.maybe_expire()
if self.id in revoked_tasks:
info('Discarding revoked task: %s[%s]', self.name, self.id)
self._announce_revoked(
'expired' if expired else 'revoked', False, None, expired,
)
return True
return False
def send_event(self, type, **fields):
if self.eventer and self.eventer.enabled:
self.eventer.send(type, uuid=self.id, **fields)
def on_accepted(self, pid, time_accepted):
"""Handler called when task is accepted by worker pool."""
self.worker_pid = pid
self.time_start = time_accepted
task_accepted(self)
if not self.task.acks_late:
self.acknowledge()
self.send_event('task-started')
if _does_debug:
debug('Task accepted: %s[%s] pid:%r', self.name, self.id, pid)
if self._terminate_on_ack is not None:
self.terminate(*self._terminate_on_ack)
def on_timeout(self, soft, timeout):
"""Handler called if the task times out."""
task_ready(self)
if soft:
warn('Soft time limit (%ss) exceeded for %s[%s]',
soft, self.name, self.id)
exc = SoftTimeLimitExceeded(soft)
else:
error('Hard time limit (%ss) exceeded for %s[%s]',
timeout, self.name, self.id)
exc = TimeLimitExceeded(timeout)
if self.store_errors:
self.task.backend.mark_as_failure(self.id, exc, request=self)
if self.task.acks_late:
self.acknowledge()
def on_success(self, failed__retval__runtime, **kwargs):
"""Handler called if the task was successfully processed."""
failed, retval, runtime = failed__retval__runtime
if failed:
if isinstance(retval.exception, (SystemExit, KeyboardInterrupt)):
raise retval.exception
return self.on_failure(retval, return_ok=True)
task_ready(self)
if self.task.acks_late:
self.acknowledge()
self.send_event('task-succeeded', result=retval, runtime=runtime)
def on_retry(self, exc_info):
"""Handler called if the task should be retried."""
if self.task.acks_late:
self.acknowledge()
self.send_event('task-retried',
exception=safe_repr(exc_info.exception.exc),
traceback=safe_str(exc_info.traceback))
def on_failure(self, exc_info, send_failed_event=True, return_ok=False):
"""Handler called if the task raised an exception."""
task_ready(self)
if isinstance(exc_info.exception, MemoryError):
raise MemoryError('Process got: %s' % (exc_info.exception,))
elif isinstance(exc_info.exception, Reject):
return self.reject(requeue=exc_info.exception.requeue)
elif isinstance(exc_info.exception, Ignore):
return self.acknowledge()
exc = exc_info.exception
if isinstance(exc, Retry):
return self.on_retry(exc_info)
# These are special cases where the process would not have had
# time to write the result.
if self.store_errors:
if isinstance(exc, Terminated):
self._announce_revoked(
'terminated', True, string(exc), False)
send_failed_event = False # already sent revoked event
elif isinstance(exc, WorkerLostError) or not return_ok:
self.task.backend.mark_as_failure(
self.id, exc, request=self,
)
# (acks_late) acknowledge after result stored.
if self.task.acks_late:
self.acknowledge()
if send_failed_event:
self.send_event(
'task-failed',
exception=safe_repr(get_pickled_exception(exc_info.exception)),
traceback=exc_info.traceback,
)
if not return_ok:
error('Task handler raised error: %r', exc,
exc_info=exc_info.exc_info)
def acknowledge(self):
"""Acknowledge task."""
if not self.acknowledged:
self.on_ack(logger, self.connection_errors)
self.acknowledged = True
def reject(self, requeue=False):
if not self.acknowledged:
self.on_reject(logger, self.connection_errors, requeue)
self.acknowledged = True
def info(self, safe=False):
return {'id': self.id,
'name': self.name,
'type': self.type,
'body': self.body,
'hostname': self.hostname,
'time_start': self.time_start,
'acknowledged': self.acknowledged,
'delivery_info': self.delivery_info,
'worker_pid': self.worker_pid}
def __str__(self):
return ' '.join([
self.humaninfo(),
' eta:[{0}]'.format(self.eta) if self.eta else '',
' expires:[{0}]'.format(self.expires) if self.expires else '',
])
shortinfo = __str__
def humaninfo(self):
return '{0.name}[{0.id}]'.format(self)
def __repr__(self):
return '<{0}: {1}>'.format(type(self).__name__, self.humaninfo())
@property
def tzlocal(self):
if self._tzlocal is None:
self._tzlocal = self.app.conf.CELERY_TIMEZONE
return self._tzlocal
@property
def store_errors(self):
return (not self.task.ignore_result or
self.task.store_errors_even_if_ignored)
@property
def task_id(self):
# XXX compat
return self.id
@task_id.setter # noqa
def task_id(self, value):
self.id = value
@property
def task_name(self):
# XXX compat
return self.name
@task_name.setter # noqa
def task_name(self, value):
self.name = value
@property
def reply_to(self):
# used by rpc backend when failures reported by parent process
return self.request_dict['reply_to']
@property
def correlation_id(self):
# used similarly to reply_to
return self.request_dict['correlation_id']
def create_request_cls(base, task, pool, hostname, eventer,
ref=ref, revoked_tasks=revoked_tasks,
task_ready=task_ready):
from celery.app.trace import trace_task_ret as trace
default_time_limit = task.time_limit
default_soft_time_limit = task.soft_time_limit
apply_async = pool.apply_async
acks_late = task.acks_late
events = eventer and eventer.enabled
class Request(base):
def execute_using_pool(self, pool, **kwargs):
task_id = self.id
if (self.expires or task_id in revoked_tasks) and self.revoked():
raise TaskRevokedError(task_id)
time_limit, soft_time_limit = self.time_limits
time_limit = time_limit or default_time_limit
soft_time_limit = soft_time_limit or default_soft_time_limit
result = apply_async(
trace,
args=(self.type, task_id, self.request_dict, self.body,
self.content_type, self.content_encoding),
accept_callback=self.on_accepted,
timeout_callback=self.on_timeout,
callback=self.on_success,
error_callback=self.on_failure,
soft_timeout=soft_time_limit,
timeout=time_limit,
correlation_id=task_id,
)
# cannot create weakref to None
self._apply_result = ref(result) if result is not None else result
return result
def on_success(self, failed__retval__runtime, **kwargs):
failed, retval, runtime = failed__retval__runtime
if failed:
if isinstance(retval.exception, (
SystemExit, KeyboardInterrupt)):
raise retval.exception
return self.on_failure(retval, return_ok=True)
task_ready(self)
if acks_late:
self.acknowledge()
if events:
self.send_event(
'task-succeeded', result=retval, runtime=runtime,
)
return Request
[end of celery/worker/request.py]
</code>
I need you to solve this issue by regenerating the full files in the code base that you would like to change. You can change as many files as you like. Please respond with a list of files and their revised contents in the following format.
<example>
[start of /src/this_file.py]
import os
def euclidean(a, b):
if b == 0:
return a
return euclidean(b, a % b)
[end of /src/this_file.py]
[start of /src/another_file.py]
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
x, y = x0, y0
sx = -1 if x0 > x1 else 1
sy = -1 if y0 > y1 else 1
if dx > dy:
err = dx / 2.0
while x != x1:
points.append((x, y))
err -= dy
if err < 0:
y += sy
err += dx
x += sx
else:
err = dy / 2.0
while y != y1:
points.append((x
err -= dx
if err < 0:
x += sx
err += dy
y += sy
points.append((x, y))
return points
[end of /src/another_file.py]
</example>
|
celery/celery
|
045b52f1450d6d5cc500e0057a4b498250dc5692
|
Message being acknowledged on WorkerLostError when CELERY_ACKS_LATE=True
When using celery v3.0.24 with `CELERY_ACKS_LATE = True`, if the OOM killer kills the celery worker, then the worker acknowledges the message.
As per [this](https://github.com/celery/celery/commit/e810420c) commit, `exc_info.internal` comes in as `false`, meaning it is not an internal error, due to which the message is acknowledged.
The desirable behaviour in such a case would be to not acknowledge the message (and to be able to know whether it is an OOM error), so that some other worker can pick it up.
As a workaround, I've commented out the [code](https://github.com/siddharth96/celery/commit/427695d1b23034dadda85fd7a48f7367831be4fa) where celery acknowledges the message, because in such a case the message will be lost.
|
This is deliberate: if a task is killed, the next invocation may well cause the same thing to happen, and if the task is redelivered it may cause a loop where the same conditions occur again and again. Also, sadly, you cannot distinguish processes killed by OOM from processes killed by other means, and if an administrator kills a task gone amok with `kill -9`, you usually don't want that task to be called again.
There could be a configuration option for not acking terminated tasks, but I'm not sure how useful that would be.
A better solution could be to use `basic_reject(requeue=False)` instead of `basic_ack`; that way you can configure
a dead-letter queue so that the killed tasks will be sent to a queue for manual inspection.
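The dead-letter route suggested above relies on two standard RabbitMQ queue arguments. A minimal sketch of what the work queue would declare (the exchange name `dlx` and routing key `dead` are illustrative; any AMQP client accepts these in the queue's arguments):

```python
# Standard RabbitMQ queue arguments: a message that is basic_reject'ed with
# requeue=False on this queue is re-published to the named exchange instead
# of being dropped, so killed tasks end up parked for manual inspection.
work_queue_arguments = {
    'x-dead-letter-exchange': 'dlx',      # exchange that receives dead letters
    'x-dead-letter-routing-key': 'dead',  # routing key used on re-publish
}

# The inspection queue is then an ordinary queue bound to 'dlx' with key 'dead'.
```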
I must say, regardless of the status of this feature request, the documentation is misleading. Specifically, [this FAQ makes it seem that process failures would NOT acknowledge messages](http://celery.readthedocs.org/en/latest/faq.html#faq-acks-late-vs-retry). And [this FAQ boldface states](http://celery.readthedocs.org/en/latest/faq.html#id54) that in the event of a kill signal (9), that acks_late will allow the task to re-run (which again, is patently wrong based on this poorly documented behavior). Nowhere in the docs have I found that if the process _dies_, the message will be acknowledged, regardless of acks_late or not. (for instance, I have a set of 10k+ tasks, and some 1% of tasks wind up acknowledged but incomplete when a WorkerLostError is thrown in connection with the worker, although there are no other errors of any kind in any of my logs related to that task).
TL;DR at the least, appropriately document the current state when describing the functionality and limitations of acks_late. A work-around would be helpful -- I'm not sure I understand the solution of using `basic_reject`, although I'll keep looking into it.
The docs are referring to killing the worker process with KILL, not the child processes. The term worker will always refer to the worker instance, not the pool processes. The section within about acks_late is probably not very helpful and should be removed.
|
2015-10-06T05:34:34Z
|
<patch>
<patch>
diff --git a/celery/app/defaults.py b/celery/app/defaults.py
--- a/celery/app/defaults.py
+++ b/celery/app/defaults.py
@@ -132,6 +132,7 @@ def __repr__(self):
'REDIS_DB': Option(type='int', **_REDIS_OLD),
'REDIS_PASSWORD': Option(type='string', **_REDIS_OLD),
'REDIS_MAX_CONNECTIONS': Option(type='int'),
+ 'REJECT_ON_WORKER_LOST': Option(type='bool'),
'RESULT_BACKEND': Option(type='string'),
'RESULT_DB_SHORT_LIVED_SESSIONS': Option(False, type='bool'),
'RESULT_DB_TABLENAMES': Option(type='dict'),
diff --git a/celery/app/task.py b/celery/app/task.py
--- a/celery/app/task.py
+++ b/celery/app/task.py
@@ -220,6 +220,12 @@ class Task(object):
#: :setting:`CELERY_ACKS_LATE` setting.
acks_late = None
+ #: When CELERY_ACKS_LATE is set to True, the default behavior to
+ #: handle worker crash is to acknowledge the message. Setting
+ #: this to true allows the message to be rejected and requeued so
+ #: it will be executed again by another worker.
+ reject_on_worker_lost = None
+
#: Tuple of expected exceptions.
#:
#: These are errors that are expected in normal operation
@@ -248,6 +254,7 @@ class Task(object):
('rate_limit', 'CELERY_DEFAULT_RATE_LIMIT'),
('track_started', 'CELERY_TRACK_STARTED'),
('acks_late', 'CELERY_ACKS_LATE'),
+ ('reject_on_worker_lost', 'CELERY_REJECT_ON_WORKER_LOST'),
('ignore_result', 'CELERY_IGNORE_RESULT'),
('store_errors_even_if_ignored',
'CELERY_STORE_ERRORS_EVEN_IF_IGNORED'),
diff --git a/celery/worker/request.py b/celery/worker/request.py
--- a/celery/worker/request.py
+++ b/celery/worker/request.py
@@ -326,7 +326,6 @@ def on_retry(self, exc_info):
def on_failure(self, exc_info, send_failed_event=True, return_ok=False):
"""Handler called if the task raised an exception."""
task_ready(self)
-
if isinstance(exc_info.exception, MemoryError):
raise MemoryError('Process got: %s' % (exc_info.exception,))
elif isinstance(exc_info.exception, Reject):
@@ -352,7 +351,13 @@ def on_failure(self, exc_info, send_failed_event=True, return_ok=False):
)
# (acks_late) acknowledge after result stored.
if self.task.acks_late:
- self.acknowledge()
+ reject_and_requeue = (self.task.reject_on_worker_lost and
+ isinstance(exc, WorkerLostError) and
+ self.delivery_info.get('redelivered', False) is False)
+ if reject_and_requeue:
+ self.reject(requeue=True)
+ else:
+ self.acknowledge()
if send_failed_event:
self.send_event(
</patch>
</patch>
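The `acks_late` branch the patch introduces can be sketched as a standalone decision function (a simplified mirror of the diff above, not celery's actual API; the function and return values are illustrative):

```python
def decide_message_fate(acks_late, reject_on_worker_lost,
                        is_worker_lost, redelivered):
    """Return what on_failure should do with the message.

    Simplified mirror of the patched acks_late branch: a task lost to a
    worker crash is rejected and requeued once (only if it has not already
    been redelivered, to avoid a crash/requeue loop); everything else is
    acknowledged as before.
    """
    if not acks_late:
        return 'noop'  # the early ack already happened in on_accepted
    if reject_on_worker_lost and is_worker_lost and not redelivered:
        return 'reject_requeue'
    return 'ack'


# A WorkerLostError on first delivery is requeued for another worker:
assert decide_message_fate(True, True, True, False) == 'reject_requeue'
# ...but a redelivered one is acked, breaking a potential OOM loop:
assert decide_message_fate(True, True, True, True) == 'ack'
```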
|
diff --git a/celery/tests/worker/test_request.py b/celery/tests/worker/test_request.py
--- a/celery/tests/worker/test_request.py
+++ b/celery/tests/worker/test_request.py
@@ -325,6 +325,20 @@ def test_on_failure_Reject_rejects_with_requeue(self):
req_logger, req.connection_errors, True,
)
+ def test_on_failure_WorkerLostError_rejects_with_requeue(self):
+ einfo = None
+ try:
+ raise WorkerLostError()
+ except:
+ einfo = ExceptionInfo(internal=True)
+ req = self.get_request(self.add.s(2, 2))
+ req.task.acks_late = True
+ req.task.reject_on_worker_lost = True
+ req.delivery_info['redelivered'] = False
+ req.on_failure(einfo)
+ req.on_reject.assert_called_with(req_logger,
+ req.connection_errors, True)
+
def test_tzlocal_is_cached(self):
req = self.get_request(self.add.s(2, 2))
req._tzlocal = 'foo'
|
1.0
| |||
NVIDIA__NeMo-473
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Jasper Encoder Export failed
The export of Jasper Encoder is failing. I am using the core API [deployment_export](https://nvidia.github.io/NeMo/api-docs/nemo.html#nemo.core.neural_factory.NeuralModuleFactory.deployment_export) like in the script: https://github.com/NVIDIA/NeMo/blob/403238f82d26879ba5fca53fbf75b3cdc70fb49b/scripts/export_jasper_to_onnx.py#L92
I believe the issue (as shown below) is that the `input_example` provided does not match the `output_example`.
```
/opt/conda/lib/python3.6/site-packages/torch/jit/__init__.py:1023: TracerWarning: Output nr 1. of the traced function does not match the corresponding output of the Python function. Detailed error:
Not within tolerance rtol=1e-05 atol=1e-05 at input[0, 870, 67] (0.6547648906707764 vs. 0.6546438932418823) and 812 other locations (0.00%)
check_tolerance, _force_outplace, True, _module_class)
[NeMo E 2020-02-23 19:10:07 actions:1023] module export failed for JasperEncoder with exception number of output names provided (2) exceeded number of outputs (1)
```
**What is the correct `input_example` and `output_example` to export JasperEncoder?**
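The error ("number of output names provided (2) exceeded number of outputs (1)") is a count mismatch: the length ports are stripped for deployment, so the traced module returns one output while two output names are still handed to the exporter. A minimal illustration of the needed alignment (a hypothetical helper, not NeMo's API):

```python
def align_output_names(output_names, example_outputs):
    """Trim trailing output names (e.g. removed length ports) so the
    name list matches the number of outputs the traced module returns."""
    if not isinstance(example_outputs, (tuple, list)):
        example_outputs = (example_outputs,)
    return list(output_names)[:len(example_outputs)]


# JasperEncoder has ports ['outputs', 'encoded_lengths'], but only a single
# tensor comes back once the length port is dropped for deployment:
assert align_output_names(['outputs', 'encoded_lengths'], 'tensor') == ['outputs']
```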
The full output can be seen here:
```
adrianaf@2a520c7abb1e:/tmp/NeMo$ ! python /tmp/NeMo/scripts/export_jasper_to_onnx.py --config /raid/datasets/asr/data/config_files/WSJ-test_acoustic_quartznet15x5.yaml --nn_encoder /home/adrianaf/projects/nemo_asr_app/models/quartznet15x5/JasperEncoder-STEP-247400.pt --nn_decoder /home/adrianaf/projects/nemo_asr_app/models/quartznet15x5/JasperDecoderForCTC-STEP-247400.pt --onnx_encoder /raid/datasets/asr/data/models/ONNX/pre-trained_encoder.onnx --onnx_decoder /raid/datasets/asr/data/models/ONNX/pre-trained_decoder.onnx
/opt/conda/lib/python3.6/site-packages/torchvision/io/_video_opt.py:17: UserWarning: video reader based on ffmpeg c++ ops not available
warnings.warn("video reader based on ffmpeg c++ ops not available")
/tmp/NeMo/nemo/collections/asr/audio_preprocessing.py:48: UserWarning: Could not import torchaudio. Some features might not work.
warnings.warn('Could not import torchaudio. Some features might not work.')
[NeMo I 2020-02-23 19:09:42 export_jasper_to_onnx:48] Loading config file...
[NeMo I 2020-02-23 19:09:42 export_jasper_to_onnx:52] Determining model shape...
[NeMo I 2020-02-23 19:09:42 export_jasper_to_onnx:60] Num encoder input features: 64
[NeMo I 2020-02-23 19:09:42 export_jasper_to_onnx:61] Num decoder input features: 1024
[NeMo W 2020-02-23 19:09:42 deprecated:68] Function ``_get_trainer`` is deprecated. It is going to be removed in the future version.
[NeMo I 2020-02-23 19:09:42 export_jasper_to_onnx:65] Initializing models...
[NeMo I 2020-02-23 19:09:45 export_jasper_to_onnx:76] Loading checkpoints...
[NeMo I 2020-02-23 19:09:45 export_jasper_to_onnx:91] Exporting encoder...
[NeMo W 2020-02-23 19:09:45 neural_factory:627] Turned off 170 masked convolutions
[NeMo I 2020-02-23 19:09:45 actions:937] Module is JasperEncoder. We are removing input and output length ports since they are not needed for deployment
[NeMo W 2020-02-23 19:09:46 deprecated:68] Function ``local_parameters`` is deprecated. It is going to be removed in the 0.11 version.
/opt/conda/lib/python3.6/site-packages/torch/jit/__init__.py:1023: TracerWarning: Output nr 1. of the traced function does not match the corresponding output of the Python function. Detailed error:
Not within tolerance rtol=1e-05 atol=1e-05 at input[0, 870, 67] (0.6547648906707764 vs. 0.6546438932418823) and 812 other locations (0.00%)
check_tolerance, _force_outplace, True, _module_class)
[NeMo E 2020-02-23 19:10:07 actions:1023] module export failed for JasperEncoder with exception number of output names provided (2) exceeded number of outputs (1)
[NeMo I 2020-02-23 19:10:07 export_jasper_to_onnx:98] Exporting decoder...
graph(%encoder_output : Float(1, 1024, 128),
%1 : Float(29, 1024, 1),
%2 : Float(29)):
%3 : Float(1, 29, 128) = onnx::Conv[dilations=[1], group=1, kernel_shape=[1], pads=[0, 0], strides=[1]](%encoder_output, %1, %2), scope: JasperDecoderForCTC/Sequential[decoder_layers]/Conv1d[0] # /opt/conda/lib/python3.6/site-packages/torch/nn/modules/conv.py:202:0
%4 : Float(1, 128, 29) = onnx::Transpose[perm=[0, 2, 1]](%3), scope: JasperDecoderForCTC # /tmp/NeMo/nemo/collections/asr/jasper.py:235:0
%output : Float(1, 128, 29) = onnx::LogSoftmax[axis=2](%4), scope: JasperDecoderForCTC # /opt/conda/lib/python3.6/site-packages/torch/nn/functional.py:1317:0
return (%output)
/opt/conda/lib/python3.6/site-packages/torch/onnx/utils.py:774: UserWarning: No names were found for specified dynamic axes of provided input.Automatically generated names will be applied to each dynamic axes of input encoder_output
'Automatically generated names will be applied to each dynamic axes of input {}'.format(key))
/opt/conda/lib/python3.6/site-packages/torch/onnx/utils.py:774: UserWarning: No names were found for specified dynamic axes of provided input.Automatically generated names will be applied to each dynamic axes of input output
'Automatically generated names will be applied to each dynamic axes of input {}'.format(key))
[NeMo I 2020-02-23 19:10:07 export_jasper_to_onnx:105] Export completed successfully.
```
</issue>
<code>
[start of README.rst]
.. image:: http://www.repostatus.org/badges/latest/active.svg
:target: http://www.repostatus.org/#active
:alt: Project Status: Active โ The project has reached a stable, usable state and is being actively developed.
.. image:: https://img.shields.io/badge/documentation-github.io-blue.svg
:target: https://nvidia.github.io/NeMo/
:alt: NeMo documentation on GitHub pages
.. image:: https://img.shields.io/badge/License-Apache%202.0-brightgreen.svg
:target: https://github.com/NVIDIA/NeMo/blob/master/LICENSE
:alt: NeMo core license and license for collections in this repo
.. image:: https://img.shields.io/lgtm/grade/python/g/NVIDIA/NeMo.svg?logo=lgtm&logoWidth=18
:target: https://lgtm.com/projects/g/NVIDIA/NeMo/context:python
:alt: Language grade: Python
.. image:: https://img.shields.io/lgtm/alerts/g/NVIDIA/NeMo.svg?logo=lgtm&logoWidth=18
:target: https://lgtm.com/projects/g/NVIDIA/NeMo/alerts/
:alt: Total alerts
.. image:: https://img.shields.io/badge/code%20style-black-000000.svg
:target: https://github.com/psf/black
:alt: Code style: black
NVIDIA Neural Modules: NeMo
===========================
NeMo is a toolkit for defining and building `Conversational AI <https://developer.nvidia.com/conversational-ai#started>`_ applications.
Goal of the NeMo toolkit is to make it possible for researchers to easily compose complex neural network architectures for conversational AI using reusable components. Built for speed, NeMo can utilize NVIDIA's Tensor Cores and scale out training to multiple GPUs and multiple nodes.
**Neural Modules** are conceptual blocks of neural networks that take *typed* inputs and produce *typed* outputs. Such modules typically represent data layers, encoders, decoders, language models, loss functions, or methods of combining activations.
The toolkit comes with extendable collections of pre-built modules for automatic speech recognition (ASR), natural language processing (NLP) and text synthesis (TTS).
**Introduction**
* Watch `this video <https://nvidia.github.io/NeMo/>`_ for a quick walk-through.
* Documentation (latest released version): https://nvidia.github.io/NeMo/
* Read NVIDIA `Developer Blog for example applications <https://devblogs.nvidia.com/how-to-build-domain-specific-automatic-speech-recognition-models-on-gpus/>`_
* Read NVIDIA `Developer Blog for Quartznet ASR model <https://devblogs.nvidia.com/develop-smaller-speech-recognition-models-with-nvidias-nemo-framework/>`_
* Recommended version to install is **0.9.0** via ``pip install nemo-toolkit``
* Recommended NVIDIA `NGC NeMo Toolkit container <https://ngc.nvidia.com/catalog/containers/nvidia:nemo>`_
* Pretrained models are available on NVIDIA `NGC Model repository <https://ngc.nvidia.com/catalog/models?orderBy=modifiedDESC&query=nemo&quickFilter=models&filters=>`_
Getting started
~~~~~~~~~~~~~~~
THE LATEST STABLE VERSION OF NeMo is **0.9.0** (Available via PIP).
**Requirements**
1) Python 3.6 or 3.7
2) PyTorch 1.4.* with GPU support
3) (optional, for best performance) NVIDIA APEX. Install from here: https://github.com/NVIDIA/apex
**NeMo Docker Container**
NVIDIA `NGC NeMo Toolkit container <https://ngc.nvidia.com/catalog/containers/nvidia:nemo>`_ is now available.
* Pull the docker: ``docker pull nvcr.io/nvidia/nemo:v0.9``
* Run: ``docker run --runtime=nvidia -it --rm -v <nemo_github_folder>:/NeMo --shm-size=8g -p 8888:8888 -p 6006:6006 --ulimit memlock=-1 --ulimit stack=67108864 nvcr.io/nvidia/nemo:v0.9``
If you are using the NVIDIA `NGC PyTorch container <https://ngc.nvidia.com/catalog/containers/nvidia:pytorch>`_ follow these instructions
* Pull the docker: ``docker pull nvcr.io/nvidia/pytorch:20.01-py3``
* Run: ``docker run --runtime=nvidia -it --rm -v <nemo_github_folder>:/NeMo --shm-size=8g -p 8888:8888 -p 6006:6006 --ulimit memlock=-1 --ulimit stack=67108864 nvcr.io/nvidia/pytorch:20.01-py3``
* ``apt-get update && apt-get install -y libsndfile1``
* ``pip install nemo_toolkit`` NeMo core
* ``pip install nemo_asr`` NeMo ASR (Speech Recognition) collection
* ``pip install nemo_nlp`` NeMo NLP (Natural Language Processing) collection
* ``pip install nemo_tts`` NeMo TTS (Speech Synthesis) collection
See `examples/start_here` to get started with the simplest example. The folder `examples` contains several examples to get you started with various tasks in NLP and ASR.
**Tutorials**
* `Speech recognition <https://nvidia.github.io/NeMo/asr/intro.html>`_
* `Natural language processing <https://nvidia.github.io/NeMo/nlp/intro.html>`_
* `Speech Synthesis <https://nvidia.github.io/NeMo/tts/intro.html>`_
DEVELOPMENT
~~~~~~~~~~~
If you'd like to use master branch and/or develop NeMo you can run "reinstall.sh" script.
`Documentation (master branch) <http://nemo-master-docs.s3-website.us-east-2.amazonaws.com/>`_.
**Installing From Github**
If you prefer to use NeMo's latest development version (from GitHub) follow the steps below:
1) Clone the repository ``git clone https://github.com/NVIDIA/NeMo.git``
2) Go to NeMo folder and re-install the toolkit with collections:
.. code-block:: bash
./reinstall.sh
**Style tests**
.. code-block:: bash
python setup.py style # Checks overall project code style and output issues with diff.
python setup.py style --fix # Tries to fix error in-place.
python setup.py style --scope=tests # Operates within certain scope (dir of file).
**Unittests**
This command runs unittests:
.. code-block:: bash
./reinstall.sh
python -m pytest tests
Citation
~~~~~~~~
If you are using NeMo please cite the following publication
.. code-block:: tex
@misc{nemo2019,
title={NeMo: a toolkit for building AI applications using Neural Modules},
author={Oleksii Kuchaiev and Jason Li and Huyen Nguyen and Oleksii Hrinchuk and Ryan Leary and Boris Ginsburg and Samuel Kriman and Stanislav Beliaev and Vitaly Lavrukhin and Jack Cook and Patrice Castonguay and Mariya Popova and Jocelyn Huang and Jonathan M. Cohen},
year={2019},
eprint={1909.09577},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
[end of README.rst]
[start of nemo/backends/pytorch/actions.py]
# Copyright (c) 2019 NVIDIA Corporation
import copy
import importlib
import itertools
import json
import os
from collections import defaultdict
from contextlib import ExitStack
from pathlib import Path
from typing import List, Optional
import torch
import torch.distributed as dist
import torch.nn as nn
import torch.optim as optim
from torch.nn.parallel import DistributedDataParallel as DDP
from nemo import logging
from nemo.backends.pytorch.module_wrapper import TrainableNeuralModuleWrapper
from nemo.backends.pytorch.nm import DataLayerNM, TrainableNM
from nemo.backends.pytorch.optimizers import AdamW, Novograd, master_params
from nemo.core import DeploymentFormat, DeviceType, NeuralModule, NmTensor
from nemo.core.callbacks import ActionCallback, EvaluatorCallback, SimpleLossLoggerCallback
from nemo.core.neural_factory import Actions, ModelMode, Optimization
from nemo.core.neural_types import *
from nemo.utils.helpers import get_checkpoint_from_dir
# these imports will happen on as-needed basis
amp = None
# convert_syncbn = None
# create_syncbn_process_group = None
LARC = None
FusedLAMB = None
FusedAdam = None
FusedNovoGrad = None
AmpOptimizations = {
Optimization.mxprO0: "O0",
Optimization.mxprO1: "O1",
Optimization.mxprO2: "O2",
Optimization.mxprO3: "O3",
}
_float_2_half_req = {
Optimization.mxprO1,
Optimization.mxprO2,
Optimization.mxprO3,
}
class PtActions(Actions):
def __init__(
self, local_rank=None, global_rank=None, tb_writer=None, optimization_level=Optimization.mxprO0,
):
need_apex = local_rank is not None or optimization_level != Optimization.mxprO0
if need_apex:
try:
apex = importlib.import_module('apex')
if optimization_level != Optimization.mxprO0:
global amp
amp = importlib.import_module('apex.amp')
if local_rank is not None:
# global convert_syncbn
# global create_syncbn_process_group
global LARC
global FusedLAMB
global FusedAdam
global FusedNovoGrad
parallel = importlib.import_module('apex.parallel')
apex_optimizer = importlib.import_module('apex.optimizers')
# convert_syncbn = parallel.convert_syncbn_model
# create_syncbn_process_group = parallel.create_syncbn_process_group
LARC = parallel.LARC
FusedLAMB = apex_optimizer.FusedLAMB
FusedAdam = apex_optimizer.FusedAdam
FusedNovoGrad = apex_optimizer.FusedNovoGrad
except ImportError:
raise ImportError(
"NVIDIA Apex is necessary for distributed training and"
"mixed precision training. It only works on GPUs."
"Please install Apex from "
"https://www.github.com/nvidia/apex"
)
super(PtActions, self).__init__(
local_rank=local_rank, global_rank=global_rank, optimization_level=optimization_level,
)
# will be [unique_instance_id -> (NMModule, PTModule)]
self.module_reference_table = {}
self.step = 0
self.epoch_num = 0
self.optimizers = []
self.tb_writer = tb_writer
self._modules = set()
self.cache = None
self.amp_initialized = False
@property
def modules(self):
return self._modules
def __get_top_sorted_modules_and_dataloader(self, hook):
"""
Constructs DAG leading to hook and creates its topological order.
It also populates self.module_reference_table.
Args:
hook: an NmTensor or a list of NmTensors representing leaf nodes
in DAG
Returns:
list of modules with their call arguments and outputs, and dataset
"""
def create_node(producer, producer_args):
if producer_args is None:
return tuple((producer, ()))
else:
return tuple((producer, tuple([(k, v) for k, v in producer_args.items()]),))
def is_in_degree_zero(node, processed_nodes):
"""A node has in degree of zero"""
if node[1] == ():
return True
for portname, nmtensor in node[1]:
nd = create_node(nmtensor.producer, nmtensor.producer_args)
if nd not in processed_nodes:
return False
return True
hooks = hook if isinstance(hook, list) else [hook]
# ensures that no tensors are processed twice
processed_nmtensors = set()
indices_to_remove = []
# Check for duplicates in hook
for i, nmtensor in enumerate(hook):
if nmtensor in processed_nmtensors:
indices_to_remove.append(i)
else:
processed_nmtensors.add(nmtensor)
for i in reversed(indices_to_remove):
hook.pop(i)
_top_sorted_modules = []
all_nodes = {}
# extract all nodes to all_nodes set
hooks_lst = list(hooks)
while len(hooks_lst) > 0:
# take nmtensor from the end of the list
nmtensor = hooks_lst.pop()
node = create_node(nmtensor.producer, nmtensor.producer_args)
# Store nmtensor as an output of its producer
# first make sure all keys are present per output port
# and nm is inside all_nodes
if node not in all_nodes:
all_nodes[node] = {k: None for k in nmtensor.producer.output_ports}
# second, populate output port with current nmtensor
# where applicable
all_nodes[node][nmtensor.name] = nmtensor
processed_nmtensors.add(nmtensor)
if nmtensor.producer_args is not None and nmtensor.producer_args != {}:
for _, new_nmtensor in nmtensor.producer_args.items():
if new_nmtensor not in processed_nmtensors:
# put in the start of list
hooks_lst.insert(0, new_nmtensor)
all_node_with_output = []
# Iterate over all_nodes to create new nodes that include its output
# now all nodes have (module, input tensors, output tensors)
for node in all_nodes:
all_node_with_output.append(tuple((node[0], node[1], all_nodes[node])))
processed_nodes = []
while len(all_node_with_output) > 0:
for node in all_node_with_output.copy():
# if node's in_degree is zero it can be added to
# _top_sorted_modules
# this will also reduce in_degree of its children
if is_in_degree_zero(node, processed_nodes):
_top_sorted_modules.append(node)
processed_nodes.append((node[0], node[1]))
all_node_with_output.remove(node)
# Create top_sorted_modules aka callchain
top_sorted_modules = []
for i, m in enumerate(_top_sorted_modules):
top_sorted_modules.append((m[0], dict(m[1]), m[2]))
# Ensure that there is only one dataset in callchain
if i > 0 and isinstance(m[0], DataLayerNM):
raise ValueError("There were more than one DataLayer NeuralModule inside your DAG.")
if not isinstance(top_sorted_modules[0][0], DataLayerNM):
raise ValueError("The first module in your DAG was not a DataLayer NeuralModule.")
tdataset = top_sorted_modules[0][0].dataset
# populate self.module_reference_table
for m in top_sorted_modules:
if m[0].factory is None and self._local_rank is not None:
raise ValueError(
"Neural module {0} was created without "
"NeuralModuleFactory, but you are trying to"
"run in distributed mode. Please instantiate"
"NeuralModuleFactory first and pass its "
"instance as `factory` parameter to all your"
"Neural Module objects."
"".format(str(m[0]))
)
key = m[0].unique_instance_id
if key not in self.module_reference_table:
if isinstance(m[0], TrainableNeuralModuleWrapper):
self.module_reference_table[key] = (m[0], m[0]._pt_module)
else:
self.module_reference_table[key] = (m[0], m[0])
return top_sorted_modules, tdataset
def create_optimizer(self, optimizer, things_to_optimize, optimizer_params=None):
"""
Wrapper function around __setup_optimizer()
Args:
optimizer : A instantiated PyTorch optimizer or string. For
currently supported strings, see __setup_optimizer().
things_to_optimize (list): Must be a list of Neural Modules and/or
parameters. If a Neural Module is passed, all trainable
parameters are extracted and passed to the optimizer.
optimizer_params (dict): Optional parameters dictionary.
Returns:
Optimizer
"""
optimizer_instance = None
optimizer_class = None
if isinstance(optimizer, str):
optimizer_class = optimizer
elif isinstance(optimizer, torch.optim.Optimizer):
optimizer_instance = optimizer
else:
raise ValueError("`optimizer` must be a string or an instance of torch.optim.Optimizer")
modules_to_optimize = []
tensors_to_optimize = []
if not isinstance(things_to_optimize, list):
things_to_optimize = [things_to_optimize]
for thing in things_to_optimize:
if isinstance(thing, NeuralModule):
modules_to_optimize.append(thing)
elif isinstance(thing, NmTensor):
tensors_to_optimize.append(thing)
else:
raise ValueError(
"{} passed to create_optimizer() was neither a neural module nor a neural module tensor"
)
if tensors_to_optimize:
call_chain, _ = self.__get_top_sorted_modules_and_dataloader(tensors_to_optimize)
for module in call_chain:
if module[0] not in modules_to_optimize:
modules_to_optimize.append(module[0])
# Extract trainable weights which will be optimized
params_list = [p.parameters() for p in modules_to_optimize if isinstance(p, TrainableNM) or p.is_trainable()]
params_to_optimize = itertools.chain(*params_list)
if optimizer_params is None:
optimizer_params = {}
# Init amp
optimizer = self.__setup_optimizer(
optimizer_instance=optimizer_instance,
optimizer_class=optimizer_class,
optimization_params=optimizer_params,
params_to_optimize=params_to_optimize,
)
self.optimizers.append(optimizer)
return optimizer
@staticmethod
def __setup_optimizer(
optimizer_instance, optimizer_class, optimization_params, params_to_optimize,
):
if optimizer_instance is None:
# Setup optimizer instance, by default it is SGD
lr = optimization_params["lr"]
if optimizer_class.lower() == "sgd":
optimizer = optim.SGD(
params_to_optimize,
lr=lr,
momentum=optimization_params.get("momentum", 0.9),
weight_decay=optimization_params.get("weight_decay", 0.0),
)
elif optimizer_class.lower() == "adam":
optimizer = optim.Adam(
params=params_to_optimize, lr=lr, betas=optimization_params.get("betas", (0.9, 0.999)),
)
elif optimizer_class.lower() == "fused_adam":
optimizer = FusedAdam(params=params_to_optimize, lr=lr)
elif optimizer_class.lower() == "adam_w":
optimizer = AdamW(
params=params_to_optimize,
lr=lr,
weight_decay=optimization_params.get("weight_decay", 0.0),
betas=optimization_params.get("betas", (0.9, 0.999)),
)
elif optimizer_class.lower() == "novograd":
optimizer = Novograd(
params_to_optimize,
lr=lr,
weight_decay=optimization_params.get("weight_decay", 0.0),
luc=optimization_params.get("luc", False),
luc_trust=optimization_params.get("luc_eta", 1e-3),
betas=optimization_params.get("betas", (0.95, 0.25)),
)
elif optimizer_class.lower() == "fused_novograd":
optimizer = FusedNovoGrad(
params_to_optimize,
lr=lr,
weight_decay=optimization_params.get("weight_decay", 0.0),
reg_inside_moment=True,
grad_averaging=False,
betas=optimization_params.get("betas", (0.95, 0.25)),
)
elif optimizer_class.lower() == "fused_lamb":
optimizer = FusedLAMB(params_to_optimize, lr=lr,)
else:
raise ValueError("Unknown optimizer class: {0}".format(optimizer_class))
if optimization_params.get("larc", False):
logging.info("Enabling larc")
optimizer = LARC(optimizer, trust_coefficient=optimization_params.get("larc_eta", 2e-2),)
else:
logging.info("Optimizer instance: {0} is provided.")
if optimizer_class is not None and optimizer_class != "":
logging.warning("Ignoring `optimizer_class` parameter because `optimizer_instance` is provided")
if optimization_params is not None and optimization_params != {}:
logging.warning(
"Ignoring `optimization_params` parameter for "
"optimizer because `optimizer_instance` is provided"
)
optimizer = optimizer_instance
return optimizer
def __initialize_amp(
self, optimizer, optim_level, amp_max_loss_scale=2.0 ** 24, amp_min_loss_scale=1.0,
):
if optim_level not in AmpOptimizations:
raise ValueError(f"__initialize_amp() was called with unknown optim_level={optim_level}")
# in this case, nothing to do here
if optim_level == Optimization.mxprO0:
return optimizer
if len(self.modules) < 1:
raise ValueError("There were no modules to initialize")
pt_modules = []
for module in self.modules:
if isinstance(module, nn.Module):
pt_modules.append(module)
elif isinstance(module, TrainableNeuralModuleWrapper):
pt_modules.append(module._pt_module)
_, optimizer = amp.initialize(
max_loss_scale=amp_max_loss_scale,
min_loss_scale=amp_min_loss_scale,
models=pt_modules,
optimizers=optimizer,
opt_level=AmpOptimizations[optim_level],
)
self.amp_initialized = True
return optimizer
def __nm_graph_forward_pass(
self, call_chain, registered_tensors, mode=ModelMode.train, use_cache=False,
):
for ind in range(1, len(call_chain)):
if use_cache:
in_cache = True
for tensor in call_chain[ind][2].values():
if tensor is None:
# NM has an output tensor that is not used in the
# current call chain, so we don't care if it's not in
# cache
continue
if tensor.unique_name not in registered_tensors:
in_cache = False
if in_cache:
continue
call_args = call_chain[ind][1]
# module = call_chain[ind][0]
m_id = call_chain[ind][0].unique_instance_id
pmodule = self.module_reference_table[m_id][1]
# if self._local_rank is not None:
# if isinstance(pmodule, DDP):
# if disable_allreduce:
# pmodule.disable_allreduce()
# else:
# pmodule.enable_allreduce()
if mode == ModelMode.train:
# if module.is_trainable():
if isinstance(pmodule, nn.Module):
pmodule.train()
elif mode == ModelMode.eval:
# if module.is_trainable():
if isinstance(pmodule, nn.Module):
pmodule.eval()
else:
raise ValueError("Unknown ModelMode")
# prepare call signature for `module`
call_set = {}
for tensor_name, nmtensor in call_args.items():
# _add_uuid_2_name(nmtensor.name, nmtensor.producer._uuid)
key = nmtensor.unique_name
call_set[tensor_name] = registered_tensors[key]
# actual PyTorch module call with signature
if isinstance(self.module_reference_table[m_id][0], TrainableNeuralModuleWrapper,):
new_tensors = pmodule(**call_set)
else:
new_tensors = pmodule(force_pt=True, **call_set)
            if not isinstance(new_tensors, (list, tuple)):
                new_tensors = [new_tensors]
            else:
                new_tensors = list(new_tensors)
for t_tensor, nm_tensor in zip(new_tensors, call_chain[ind][2].values()):
if nm_tensor is None:
continue
t_name = nm_tensor.unique_name
if t_name not in registered_tensors:
registered_tensors[t_name] = t_tensor
else:
raise ValueError("A NMTensor was produced twice in " f"the same DAG. {t_name}")
@staticmethod
    def pad_tensor(t: torch.Tensor, target_size: torch.Tensor):
padded_shape = target_size.cpu().data.numpy().tolist()
padded_t = torch.zeros(padded_shape).cuda().type_as(t)
t_size = t.size()
if len(t_size) == 0:
padded_t = t
elif len(t_size) == 1:
padded_t[: t_size[0]] = t
elif len(t_size) == 2:
padded_t[: t_size[0], : t_size[1]] = t
elif len(t_size) == 3:
padded_t[: t_size[0], : t_size[1], : t_size[2]] = t
elif len(t_size) == 4:
padded_t[: t_size[0], : t_size[1], : t_size[2], : t_size[3]] = t
else:
raise NotImplementedError
return padded_t
@staticmethod
def depad_tensor(t: torch.Tensor, original_size: torch.Size):
t_size = original_size
if len(t_size) == 0:
depadded_t = t
elif len(t_size) == 1:
depadded_t = t[: t_size[0]]
elif len(t_size) == 2:
depadded_t = t[: t_size[0], : t_size[1]]
elif len(t_size) == 3:
depadded_t = t[: t_size[0], : t_size[1], : t_size[2]]
elif len(t_size) == 4:
depadded_t = t[: t_size[0], : t_size[1], : t_size[2], : t_size[3]]
else:
raise NotImplementedError
return depadded_t
def _eval(self, tensors_2_evaluate, callback, step, verbose=False):
"""
Evaluation process.
WARNING THIS function assumes that all tensors_2_evaluate are based
on a single datalayer
Args:
tensors_2_evaluate: list of NmTensors to evaluate
callback: instance of EvaluatorCallback
step: current training step, used for logging
Returns:
None
"""
with torch.no_grad():
# each call chain corresponds to a tensor in tensors_2_evaluate
call_chain, _ = self.__get_top_sorted_modules_and_dataloader(hook=tensors_2_evaluate)
# "Retrieve" data layer from call chain.
dl_nm = call_chain[0][0]
# Prepare eval_dataloader
# For distributed training it should have disjoint subsets of
# all data on every worker
is_distributed = False
world_size = None
if dl_nm.placement == DeviceType.AllGpu:
assert dist.is_initialized()
is_distributed = True
world_size = torch.distributed.get_world_size()
# logging.info(
# "Doing distributed evaluation. Rank {0} of {1}".format(
# self.local_rank, world_size
# )
# )
if dl_nm.dataset is not None:
sampler = torch.utils.data.distributed.DistributedSampler(
dataset=dl_nm.dataset, shuffle=dl_nm.shuffle
)
eval_dataloader = torch.utils.data.DataLoader(
dataset=dl_nm.dataset,
sampler=sampler,
num_workers=dl_nm.num_workers,
batch_size=dl_nm.batch_size,
shuffle=False,
)
else:
eval_dataloader = dl_nm.data_iterator
if hasattr(eval_dataloader, 'sampler'):
eval_dataloader.sampler.set_epoch(0)
else: # Not distributed
if dl_nm.dataset is not None:
# Todo: remove local_parameters
eval_dataloader = torch.utils.data.DataLoader(
dataset=dl_nm.dataset,
sampler=None, # not distributed sampler
num_workers=dl_nm.num_workers,
batch_size=dl_nm.batch_size,
shuffle=dl_nm.shuffle,
)
else:
eval_dataloader = dl_nm.data_iterator
# after this eval_dataloader is ready to be used
# reset global_var_dict - results of evaluation will be stored
# there
callback.clear_global_var_dict()
dl_device = dl_nm._device
# Evaluation mini-batch for loop
num_batches = None
if hasattr(eval_dataloader, "__len__"):
num_batches = len(eval_dataloader)
for epoch_i, data in enumerate(eval_dataloader, 0):
if (
verbose
and num_batches is not None
and (num_batches < 10 or (epoch_i % int(num_batches / 10) == 0))
):
logging.info(f"Evaluating batch {epoch_i} out of {num_batches}")
tensors = []
if isinstance(data, torch.Tensor):
data = (data,)
for d in data:
if isinstance(d, torch.Tensor):
tensors.append(d.to(dl_device))
else:
tensors.append(d)
registered_e_tensors = {
t.unique_name: d for t, d in zip(call_chain[0][2].values(), tensors) if t is not None
}
self.__nm_graph_forward_pass(
call_chain=call_chain, registered_tensors=registered_e_tensors, mode=ModelMode.eval,
)
if not is_distributed or self.global_rank == 0:
values_dict = {}
# If distributed. For the outer loop, we need to ensure that
# all processes loop through the elements in the same order
for t2e in tensors_2_evaluate:
key = t2e.unique_name
if key not in registered_e_tensors.keys():
logging.info("WARNING: Tensor {} was not found during eval".format(key))
continue
if is_distributed:
# where we will all_gather results from all workers
tensors_list = []
# where we will all_gather tensor sizes
tensor_on_worker = registered_e_tensors[key]
if tensor_on_worker.shape != torch.Size([]):
tensor_on_worker_size_as_tensor = torch.tensor(tensor_on_worker.shape).cuda()
sizes = []
for ind in range(world_size):
sizes.append(torch.empty_like(tensor_on_worker_size_as_tensor))
dist.all_gather(sizes, tensor_on_worker_size_as_tensor)
mx_dim, _ = torch.max(torch.stack(sizes), dim=0)
else: # this is a singleton. For example, loss value
sizes = [torch.Size([])] * world_size
mx_dim = None
for ind in range(world_size):
# we have to use max shape for all_gather
if mx_dim is None: # singletons
tensors_list.append(torch.tensor(2).cuda().type_as(tensor_on_worker))
else: # non-singletons
tensors_list.append(
torch.empty(mx_dim.cpu().data.numpy().tolist()).cuda().type_as(tensor_on_worker)
)
if mx_dim is not None:
t_to_send = self.pad_tensor(tensor_on_worker, mx_dim)
else:
t_to_send = tensor_on_worker
dist.all_gather(tensors_list, t_to_send)
tensors_list = [self.depad_tensor(t, size) for t, size in zip(tensors_list, sizes)]
if self.global_rank == 0:
values_dict["IS_FROM_DIST_EVAL"] = True
values_dict[key] = tensors_list
else: # NON-DISTRIBUTED TRAINING
values_dict["IS_FROM_DIST_EVAL"] = False
values_dict[key] = [registered_e_tensors[key]]
if callback.user_iter_callback and (self.global_rank is None or self.global_rank == 0):
# values_dict will contain results from all workers
callback.user_iter_callback(values_dict, callback._global_var_dict)
# final aggregation (over minibatches) and logging of results
            # should happen only on one worker
if callback.user_done_callback and (self.global_rank is None or self.global_rank == 0):
vals_to_log = callback.user_done_callback(callback._global_var_dict)
# log results to Tensorboard or Weights & Biases
if vals_to_log is not None:
if hasattr(callback, 'swriter') and callback.swriter is not None:
if hasattr(callback, 'tb_writer_func') and callback.tb_writer_func is not None:
callback.tb_writer_func(callback.swriter, vals_to_log, step)
else:
for key, val in vals_to_log.items():
callback.swriter.add_scalar(key, val, step)
if hasattr(callback, 'wandb_log'):
callback.wandb_log(vals_to_log)
def _infer(
self, tensors_to_return, verbose=False, cache=False, use_cache=False, offload_to_cpu=True,
):
"""
Does the same as _eval() just with tensors instead of eval callback.
"""
# Checking that cache is used properly
if cache and use_cache:
raise ValueError(
"cache and use_cache were both set. However cache must first be created prior to using it."
)
if cache:
if self.cache is not None:
raise ValueError("cache was set but was not empty")
self.cache = []
if use_cache:
if not self.cache:
raise ValueError("use_cache was set, but cache was empty")
with torch.no_grad():
# each call chain corresponds to a tensor in tensors_2_evaluate
dl_nm = None
call_chain, _ = self.__get_top_sorted_modules_and_dataloader(hook=tensors_to_return)
dl_nm = call_chain[0][0]
# Prepare eval_dataloader
# For distributed training it should have disjoint subsets of
# all data on every worker
is_distributed = False
world_size = None
if dl_nm.placement == DeviceType.AllGpu:
if self.cache or use_cache:
raise NotImplementedError("Caching is not available for distributed training.")
assert dist.is_initialized()
is_distributed = True
world_size = torch.distributed.get_world_size()
# logging.info(
# "Doing distributed evaluation. Rank {0} of {1}".format(
# self.local_rank, world_size
# )
# )
if dl_nm.dataset is not None:
sampler = torch.utils.data.distributed.DistributedSampler(
dataset=dl_nm.dataset, shuffle=dl_nm.shuffle
)
eval_dataloader = torch.utils.data.DataLoader(
dataset=dl_nm.dataset,
sampler=sampler,
num_workers=dl_nm.num_workers,
batch_size=dl_nm.batch_size,
shuffle=False,
)
else:
eval_dataloader = dl_nm.data_iterator
                if hasattr(eval_dataloader, 'sampler'):
                    eval_dataloader.sampler.set_epoch(0)
elif not use_cache: # Not distributed and not using cache
# Dataloaders are only used if use_cache is False
# When caching, the DAG must cache all outputs from dataloader
if dl_nm.dataset is not None:
# Todo: remove local_parameters
eval_dataloader = torch.utils.data.DataLoader(
dataset=dl_nm.dataset,
sampler=None, # not distributed sampler
num_workers=dl_nm.num_workers,
batch_size=dl_nm.batch_size,
shuffle=dl_nm.shuffle,
)
else:
eval_dataloader = dl_nm.data_iterator
# after this eval_dataloader is ready to be used
# reset global_var_dict - results of evaluation will be stored
# there
if not is_distributed or self.global_rank == 0:
values_dict = {}
for t in tensors_to_return:
values_dict[t.unique_name] = []
dl_device = dl_nm._device
# Evaluation mini-batch for loop
if use_cache:
num_batches = len(self.cache)
loop_iterator = self.cache
else:
num_batches = len(eval_dataloader)
loop_iterator = eval_dataloader
for epoch_i, data in enumerate(loop_iterator, 0):
logging.debug(torch.cuda.memory_allocated())
if verbose and (num_batches < 10 or (epoch_i % int(num_batches / 10) == 0)):
logging.info(f"Evaluating batch {epoch_i} out of {num_batches}")
tensors = []
if use_cache:
registered_e_tensors = data
# delete tensors_to_return
for t in tensors_to_return:
if t.unique_name in registered_e_tensors:
del registered_e_tensors[t.unique_name]
# Need to check for device type mismatch
for t in registered_e_tensors:
registered_e_tensors[t].to(dl_device)
else:
if isinstance(data, torch.Tensor):
data = (data,)
for d in data:
if isinstance(d, torch.Tensor):
tensors.append(d.to(dl_device))
else:
tensors.append(d)
registered_e_tensors = {
t.unique_name: d for t, d in zip(call_chain[0][2].values(), tensors) if t is not None
}
self.__nm_graph_forward_pass(
call_chain=call_chain,
registered_tensors=registered_e_tensors,
mode=ModelMode.eval,
use_cache=use_cache,
)
# if offload_to_cpu:
# # Take all cuda tensors and save them to value_dict as
# # cpu tensors to save GPU memory
# for name, tensor in registered_e_tensors.items():
# if isinstance(tensor, torch.Tensor):
# registered_e_tensors[name] = tensor.cpu()
if cache:
self.append_to_cache(registered_e_tensors, offload_to_cpu)
# If distributed. For the outer loop, we need to ensure that
# all processes loop through the elements in the same order
for t2e in tensors_to_return:
key = t2e.unique_name
if key not in registered_e_tensors.keys():
logging.info("WARNING: Tensor {} was not found during eval".format(key))
continue
if is_distributed:
# where we will all_gather results from all workers
tensors_list = []
# where we will all_gather tensor sizes
tensor_on_worker = registered_e_tensors[key]
if tensor_on_worker.shape != torch.Size([]):
tensor_on_worker_size_as_tensor = torch.tensor(tensor_on_worker.shape).cuda()
sizes = []
for ind in range(world_size):
sizes.append(torch.empty_like(tensor_on_worker_size_as_tensor))
dist.all_gather(sizes, tensor_on_worker_size_as_tensor)
mx_dim, _ = torch.max(torch.stack(sizes), dim=0)
else: # this is a singleton. For example, loss value
sizes = [torch.Size([])] * world_size
mx_dim = None
for ind in range(world_size):
# we have to use max shape for all_gather
if mx_dim is None: # singletons
tensors_list.append(torch.tensor(2).cuda().type_as(tensor_on_worker))
else: # non-singletons
tensors_list.append(
torch.empty(mx_dim.cpu().data.numpy().tolist()).cuda().type_as(tensor_on_worker)
)
if mx_dim is not None:
t_to_send = self.pad_tensor(tensor_on_worker, mx_dim)
else:
t_to_send = tensor_on_worker
dist.all_gather(tensors_list, t_to_send)
tensors_list = [self.depad_tensor(t, size) for t, size in zip(tensors_list, sizes)]
if offload_to_cpu:
tensors_list = [t.cpu() for t in tensors_list]
if self.global_rank == 0:
values_dict[key] += tensors_list
else: # NON-DISTRIBUTED TRAINING
tensor = registered_e_tensors[key]
if offload_to_cpu and isinstance(tensor, torch.Tensor):
tensor = tensor.cpu()
values_dict[key] += [tensor]
if not is_distributed or self.global_rank == 0:
inferred_tensors = []
for t in tensors_to_return:
inferred_tensors.append(values_dict[t.unique_name])
return inferred_tensors
# For all other ranks
return None
def append_to_cache(self, registered_tensors: dict, offload_to_cpu):
"""Simpler helper function to add results of __nm_graph_forward_pass to
current cache.
"""
if offload_to_cpu:
for t in registered_tensors:
registered_tensors[t] = registered_tensors[t].cpu()
self.cache.append(registered_tensors)
def clear_cache(self):
""" Simple helpful function to clear cache by setting self.cache to
None
"""
self.cache = None
def save_state_to(self, path: str):
"""
Saves current state such as step, epoch and optimizer parameters
Args:
path:
Returns:
"""
state = {
"step": self.step,
"epoch_num": self.epoch_num,
"optimizer_state": [opt.state_dict() for opt in self.optimizers],
}
torch.save(state, path)
def restore_state_from(self, path: str):
"""
Restores state such as step, epoch and optimizer parameters
Args:
path:
Returns:
"""
if os.path.isfile(path):
# map_location could be cuda:<device_id> but cpu seems to be more
# general since we are also saving step and epoch_num
# load_state_dict should move the variables to the relevant device
checkpoint = torch.load(path, map_location="cpu")
self.step = checkpoint["step"]
self.epoch_num = checkpoint["epoch_num"]
if checkpoint["optimizer_state"]:
for opt, opt_chkpt in zip(self.optimizers, checkpoint["optimizer_state"]):
opt.load_state_dict(opt_chkpt)
else:
raise FileNotFoundError("Could not find checkpoint file: {0}".format(path))
@staticmethod
def _check_all_tensors(list_of_tensors):
"""Method that checks if the passed list contains all NmTensors
"""
if not isinstance(list_of_tensors, list):
return False
for tensor in list_of_tensors:
if not isinstance(tensor, NmTensor):
return False
return True
@staticmethod
def _check_tuples(list_of_tuples):
"""Method that checks if the passed tuple contains an optimizer in the
first element, and a list of NmTensors in the second.
"""
for tup in list_of_tuples:
if not (isinstance(tup[0], torch.optim.Optimizer) and PtActions._check_all_tensors(tup[1])):
return False
return True
def _get_all_modules(self, training_loop, callbacks, logging_callchain=None):
"""Gets all neural modules that will be used by train() and eval() via
EvaluatorCallbacks. Saves all modules to self.modules
"""
# If there is a SimpleLossLoggerCallback, create an logger_callchain
# with all callchains from training_loop and
# SimpleLossLoggerCallback.tensors
if logging_callchain:
for module in logging_callchain:
self.modules.add(module[0])
# Else grab all callchains from training_loop
else:
for step in training_loop:
for module in step[2]:
self.modules.add(module[0])
# Lastly, grab all eval modules
if callbacks is not None:
for callback in callbacks:
if isinstance(callback, EvaluatorCallback):
(callchain, _,) = self.__get_top_sorted_modules_and_dataloader(hook=callback.eval_tensors)
for module in callchain:
self.modules.add(module[0])
@staticmethod
def __module_export(module, output, d_format: DeploymentFormat, input_example=None, output_example=None):
# Check if output already exists
destination = Path(output)
if destination.exists():
raise FileExistsError(f"Destination {output} already exists. " f"Aborting export.")
input_names = list(module.input_ports.keys())
output_names = list(module.output_ports.keys())
dynamic_axes = defaultdict(list)
def __extract_dynamic_axes(port_name: str, ntype: NeuralType, dynamic_axes: defaultdict):
if ntype.axes:
for ind, axis in enumerate(ntype.axes):
if axis.kind == AxisKind.Batch or axis.kind == AxisKind.Time:
dynamic_axes[port_name].append(ind)
# This is a hack for Jasper to Jarvis export -- need re-design for this
inputs_to_drop = set()
outputs_to_drop = set()
if type(module).__name__ == "JasperEncoder":
logging.info(
"Module is JasperEncoder. We are removing input and output length ports since they are not needed for "
"deployment"
)
inputs_to_drop.add("length")
outputs_to_drop.add("encoded_lengths")
# for input_ports
for port_name, ntype in module.input_ports.items():
if port_name in inputs_to_drop:
input_names.remove(port_name)
continue
__extract_dynamic_axes(port_name, ntype, dynamic_axes)
# for output_ports
for port_name, ntype in module.output_ports.items():
if port_name in outputs_to_drop:
output_names.remove(port_name)
continue
__extract_dynamic_axes(port_name, ntype, dynamic_axes)
if len(dynamic_axes) == 0:
dynamic_axes = None
# Make a deep copy of init parameters.
init_params_copy = copy.deepcopy(module._init_params)
# Remove NeMo-related things from the module
# We need to change __call__ method. Note that this will change the
# whole class, not just this object! Which is why we need to repair it
# in the finally block
type(module).__call__ = torch.nn.Module.__call__
# Reset standard instance field - making the file (probably) lighter.
module._init_params = None
module._placement = None
module._factory = None
module._device = None
module.eval()
try:
if d_format == DeploymentFormat.TORCHSCRIPT:
if input_example is None:
# Route 1 - via torch.jit.script
traced_m = torch.jit.script(module)
traced_m.save(output)
else:
# Route 2 - via tracing
traced_m = torch.jit.trace(module, input_example)
traced_m.save(output)
elif d_format == DeploymentFormat.ONNX or d_format == DeploymentFormat.TRTONNX:
if input_example is None:
                    raise ValueError("Example input is None, but ONNX tracing was attempted")
if output_example is None:
if isinstance(input_example, tuple):
output_example = module.forward(*input_example)
else:
output_example = module.forward(input_example)
with torch.jit.optimized_execution(True):
jitted_model = torch.jit.trace(module, input_example)
torch.onnx.export(
jitted_model,
input_example,
output,
input_names=input_names,
output_names=output_names,
verbose=False,
export_params=True,
do_constant_folding=True,
dynamic_axes=dynamic_axes,
opset_version=11,
example_outputs=output_example,
)
# fn = output + ".readable"
# with open(fn, 'w') as f:
# tempModel = onnx.load(output)
# onnx.save(tempModel, output + ".copy")
# onnx.checker.check_model(tempModel)
# pgraph = onnx.helper.printable_graph(tempModel.graph)
# f.write(pgraph)
elif d_format == DeploymentFormat.PYTORCH:
torch.save(module.state_dict(), output)
with open(output + ".json", 'w') as outfile:
json.dump(init_params_copy, outfile)
else:
raise NotImplementedError(f"Not supported deployment format: {d_format}")
except Exception as e: # nopep8
logging.error(f'module export failed for {module} ' f'with exception {e}')
finally:
def __old_call__(self, force_pt=False, *input, **kwargs):
pt_call = len(input) > 0 or force_pt
if pt_call:
return nn.Module.__call__(self, *input, **kwargs)
else:
return NeuralModule.__call__(self, **kwargs)
type(module).__call__ = __old_call__
@staticmethod
def deployment_export(module, output: str, d_format: DeploymentFormat, input_example=None, output_example=None):
"""Exports Neural Module instance for deployment.
Args:
module: neural module to export
output (str): where export results should be saved
d_format (DeploymentFormat): which deployment format to use
            input_example: sometimes tracing will require input examples
            output_example: should match inference on input_example
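        Example (illustrative sketch; ``encoder`` and ``example_input`` are
        placeholder names for an actual NeuralModule and a sample input
        tensor used for tracing):

        ```python
        PtActions.deployment_export(
            module=encoder,
            output="encoder.onnx",
            d_format=DeploymentFormat.ONNX,
            input_example=example_input,
        )
        ```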
"""
with torch.no_grad():
PtActions.__module_export(
module=module,
output=output,
d_format=d_format,
input_example=input_example,
output_example=output_example,
)
def train(
self,
tensors_to_optimize,
optimizer=None,
optimization_params=None,
callbacks: Optional[List[ActionCallback]] = None,
lr_policy=None,
batches_per_step=None,
stop_on_nan_loss=False,
synced_batchnorm=False,
synced_batchnorm_groupsize=0,
gradient_predivide=False,
amp_max_loss_scale=2.0 ** 24,
):
if gradient_predivide:
logging.error(
"gradient_predivide is currently disabled, and is under consideration for removal in future versions. "
"If this functionality is needed, please raise a github issue."
)
if not optimization_params:
optimization_params = {}
num_epochs = optimization_params.get("num_epochs", None)
max_steps = optimization_params.get("max_steps", None)
if num_epochs is None and max_steps is None:
raise ValueError("You must specify either max_steps or num_epochs")
grad_norm_clip = optimization_params.get('grad_norm_clip', None)
if batches_per_step is None:
batches_per_step = 1
# this is necessary because we average gradients over batch
bps_scale = torch.FloatTensor([1.0 / batches_per_step]).squeeze()
if tensors_to_optimize is None:
# This is Evaluation Mode
self._init_callbacks(callbacks)
            # Do action end callbacks
self._perform_on_action_end(callbacks=callbacks)
return
# Check if tensors_to_optimize is just a list of NmTensors
elif tensors_to_optimize is not None and (
isinstance(tensors_to_optimize[0], NmTensor) and PtActions._check_all_tensors(tensors_to_optimize)
):
# Parse graph into a topologically sorted sequence of neural
# modules' calls
(opt_call_chain, t_dataset,) = self.__get_top_sorted_modules_and_dataloader(hook=tensors_to_optimize)
# Extract trainable weights which will be optimized
params_list = [
p[0].parameters() for p in opt_call_chain if isinstance(p[0], TrainableNM) or p[0].is_trainable()
]
params_to_optimize = itertools.chain(*params_list)
# Setup optimizer instance. By default it is SGD
optimizer_instance = None
optimizer_class = None
if isinstance(optimizer, str):
optimizer_class = optimizer
elif isinstance(optimizer, torch.optim.Optimizer):
optimizer_instance = optimizer
else:
raise ValueError("optimizer was not understood")
optimizer = self.__setup_optimizer(
optimizer_instance=optimizer_instance,
optimizer_class=optimizer_class,
optimization_params=optimization_params,
params_to_optimize=params_to_optimize,
)
training_loop = [(optimizer, tensors_to_optimize, opt_call_chain)]
self.optimizers.append(optimizer)
assert (
len(self.optimizers) == 1
), "There was more than one optimizer, was create_optimizer() called before train()?"
elif PtActions._check_tuples(tensors_to_optimize):
if batches_per_step != 1:
raise ValueError("Gradient accumlation with multiple optimizers is not supported")
datasets = []
training_loop = []
for step in tensors_to_optimize:
(step_call_chain, dataset,) = self.__get_top_sorted_modules_and_dataloader(hook=step[1])
datasets.append(dataset)
training_loop.append((step[0], step[1], step_call_chain))
t_dataset = datasets[0]
for dataset in datasets:
if type(dataset) is not type(t_dataset):
raise ValueError("There were two training datasets, we only support 1.")
else:
raise ValueError("tensors_to_optimize was not understood")
logging_callchain = None
# callbacks setup
if callbacks is not None:
for callback in callbacks:
if not isinstance(callback, ActionCallback):
raise ValueError("A callback was received that was not a child of ActionCallback")
elif isinstance(callback, SimpleLossLoggerCallback):
if logging_callchain:
raise ValueError("We only support one logger callback but more than one were found")
logger_step_freq = callback._step_freq
logging_tensors = callback.tensors
all_tensors = logging_tensors
for step in training_loop:
all_tensors = all_tensors + step[1]
(logging_callchain, _,) = self.__get_top_sorted_modules_and_dataloader(hook=all_tensors)
self._get_all_modules(training_loop, callbacks, logging_callchain)
        # Initialize AMP if needed
if self._optim_level in AmpOptimizations:
# Store mapping of self.optimizers to optimizer in callchain
training_loop_opts = []
for opt in training_loop:
training_loop_opts.append(self.optimizers.index(opt[0]))
self.optimizers = self.__initialize_amp(
optimizer=self.optimizers,
optim_level=self._optim_level,
amp_max_loss_scale=amp_max_loss_scale,
amp_min_loss_scale=optimization_params.get('amp_min_loss_scale', 1.0),
)
# Use stored mapping to map amp_init opts to training loop
for i, step in enumerate(training_loop):
training_loop[i] = (
self.optimizers[training_loop_opts[i]],
step[1],
step[2],
)
dataNM = training_loop[0][2][0][0]
if dataNM.placement == DeviceType.AllGpu:
# if len(training_loop) > 1:
# raise NotImplementedError(
            #         "Distributed training does not work with multiple "
# "optimizers")
logging.info("Doing distributed training")
if t_dataset is not None:
train_sampler = torch.utils.data.distributed.DistributedSampler(
dataset=t_dataset, shuffle=dataNM.shuffle
)
train_dataloader = torch.utils.data.DataLoader(
dataset=t_dataset,
sampler=train_sampler,
num_workers=dataNM.num_workers,
batch_size=dataNM.batch_size,
shuffle=False,
)
else:
train_dataloader = dataNM.data_iterator
if hasattr(train_dataloader, 'sampler'):
train_sampler = train_dataloader.sampler
else:
train_sampler = None
for train_iter in training_loop:
call_chain = train_iter[2]
for i in range(1, len(call_chain) - 1):
key = call_chain[i][0].unique_instance_id
pmodule = self.module_reference_table[key][1]
if not isinstance(pmodule, DDP) and isinstance(pmodule, torch.nn.Module):
# gpf = 1
# if gradient_predivide:
# gpf = dist.get_world_size()
# pmodule = DDP(pmodule, gradient_predivide_factor=gpf) # Old Apex Method
# Per pytorch docs, convert sync bn prior to DDP
if synced_batchnorm:
world_size = dist.get_world_size()
sync_batchnorm_group = None
if synced_batchnorm_groupsize > 0:
if world_size % synced_batchnorm_groupsize != 0:
raise ValueError(
f"Synchronized batch norm group size ({synced_batchnorm_groupsize}) must be 0"
f" or divide total number of GPUs ({world_size})."
)
# Find ranks of other nodes in the same batchnorm group
rank = torch.distributed.get_rank()
group = rank // synced_batchnorm_groupsize
group_rank_ids = range(
group * synced_batchnorm_groupsize, (group + 1) * synced_batchnorm_groupsize
)
sync_batchnorm_group = torch.distributed.new_group(group_rank_ids)
pmodule = nn.SyncBatchNorm.convert_sync_batchnorm(
pmodule, process_group=sync_batchnorm_group
)
# By default, disable broadcast_buffers. This disables batch norm synchronization on forward
# pass
pmodule = DDP(
pmodule, device_ids=[self.local_rank], broadcast_buffers=False, find_unused_parameters=True
)
# # Convert batchnorm modules to synced if applicable
# if synced_batchnorm and isinstance(pmodule, torch.nn.Module):
# world_size = dist.get_world_size()
# if synced_batchnorm_groupsize > 0 and world_size % synced_batchnorm_groupsize != 0:
# raise ValueError(
# f"Synchronized batch norm group size"
# f" ({synced_batchnorm_groupsize}) must be 0"
# f" or divide total number of GPUs"
# f" ({world_size})."
# )
# process_group = create_syncbn_process_group(synced_batchnorm_groupsize)
# pmodule = convert_syncbn(pmodule, process_group=process_group)
self.module_reference_table[key] = (
self.module_reference_table[key][0],
pmodule,
)
# single GPU/CPU training
else:
if t_dataset is not None:
train_sampler = None
train_dataloader = torch.utils.data.DataLoader(
dataset=t_dataset,
sampler=None,
num_workers=dataNM.num_workers,
batch_size=dataNM.batch_size,
shuffle=dataNM.shuffle,
)
else:
train_dataloader = dataNM.data_iterator
train_sampler = None
self._init_callbacks(callbacks)
# Do action start callbacks
self._perform_on_action_start(callbacks=callbacks)
# MAIN TRAINING LOOP
# iteration over epochs
while num_epochs is None or self.epoch_num < num_epochs:
if train_sampler is not None:
train_sampler.set_epoch(self.epoch_num)
if max_steps is not None and self.step >= max_steps:
break
# Register epochs start with callbacks
self._perform_on_epoch_start(callbacks=callbacks)
# iteration over batches in epoch
batch_counter = 0
for _, data in enumerate(train_dataloader, 0):
if max_steps is not None and self.step >= max_steps:
break
if batch_counter == 0:
# Started step, zero gradients
curr_optimizer = training_loop[self.step % len(training_loop)][0]
curr_optimizer.zero_grad()
# Register iteration start with callbacks
self._perform_on_iteration_start(callbacks=callbacks)
# set learning rate policy
if lr_policy is not None:
adjusted_lr = lr_policy(optimization_params["lr"], self.step, self.epoch_num)
for param_group in curr_optimizer.param_groups:
param_group["lr"] = adjusted_lr
if self.tb_writer is not None:
value = curr_optimizer.param_groups[0]['lr']
self.tb_writer.add_scalar('param/lr', value, self.step)
if callbacks is not None:
for callback in callbacks:
callback.learning_rate = curr_optimizer.param_groups[0]['lr']
# registered_tensors will contain created tensors
# named by output port and uuid of module which created them
# Get and properly name tensors returned by data layer
curr_call_chain = training_loop[self.step % len(training_loop)][2]
dl_device = curr_call_chain[0][0]._device
if logging_callchain and self.step % logger_step_freq == 0:
curr_call_chain = logging_callchain
tensors = []
if isinstance(data, torch.Tensor):
data = (data,)
for d in data:
if isinstance(d, torch.Tensor):
tensors.append(d.to(dl_device))
else:
tensors.append(d)
registered_tensors = {
t.unique_name: d for t, d in zip(curr_call_chain[0][2].values(), tensors) if t is not None
}
disable_allreduce = batch_counter < (batches_per_step - 1)
self.__nm_graph_forward_pass(
call_chain=curr_call_chain, registered_tensors=registered_tensors,
)
curr_tensors_to_optimize = training_loop[self.step % len(training_loop)][1]
final_loss = 0
nan = False
for tensor in curr_tensors_to_optimize:
if (
torch.isnan(registered_tensors[tensor.unique_name]).any()
or torch.isinf(registered_tensors[tensor.unique_name]).any()
):
if stop_on_nan_loss:
raise ValueError('Loss is NaN or inf - exiting')
logging.warning('Loss is NaN or inf')
curr_optimizer.zero_grad()
nan = True
break
final_loss += registered_tensors[tensor.unique_name]
if nan:
continue
if self._optim_level in AmpOptimizations and self._optim_level != Optimization.mxprO0:
with amp.scale_loss(final_loss, curr_optimizer, delay_unscale=disable_allreduce) as scaled_loss:
if torch.isnan(scaled_loss).any() or torch.isinf(scaled_loss).any():
if stop_on_nan_loss:
raise ValueError('Loss is NaN or inf - exiting')
logging.warning('WARNING: Loss is NaN or inf')
curr_optimizer.zero_grad()
continue
if disable_allreduce:
with ExitStack() as stack:
for mod in self.get_DDP_modules(curr_call_chain):
stack.enter_context(mod.no_sync())
scaled_loss.backward(bps_scale.to(scaled_loss.get_device()))
else:
scaled_loss.backward(bps_scale.to(scaled_loss.get_device()))
# no AMP optimizations needed
else:
# multi-GPU, float32
if self._local_rank is not None:
if disable_allreduce:
with ExitStack() as stack:
for mod in self.get_DDP_modules(curr_call_chain):
stack.enter_context(mod.no_sync())
final_loss.backward(bps_scale.to(final_loss.get_device()))
else:
final_loss.backward(bps_scale.to(final_loss.get_device()))
# single device (CPU or GPU)
else:
# Fix (workaround?) to enable backpropagation of gradients on CPUs.
if final_loss.get_device() < 0:
final_loss.backward(bps_scale)
else:
final_loss.backward(bps_scale.to(final_loss.get_device()))
batch_counter += 1
if batch_counter == batches_per_step:
# Ended step. Do optimizer update
if grad_norm_clip is not None:
torch.nn.utils.clip_grad_norm_(master_params(curr_optimizer), grad_norm_clip)
curr_optimizer.step()
batch_counter = 0
# Register iteration end with callbacks
self._update_callbacks(
callbacks=callbacks, registered_tensors=registered_tensors,
)
self._perform_on_iteration_end(callbacks=callbacks)
self.step += 1
# End of epoch for loop
# Register epochs end with callbacks
self._perform_on_epoch_end(callbacks=callbacks)
self.epoch_num += 1
self._perform_on_action_end(callbacks=callbacks)
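The synchronized-batchnorm setup in `train()` above partitions ranks into contiguous groups with `rank // synced_batchnorm_groupsize` and a `range` over the group's rank ids. The arithmetic can be exercised in isolation; `partition_ranks` below is a hypothetical helper (the real code passes each id range to `torch.distributed.new_group`), written as a minimal sketch:

```python
def partition_ranks(world_size, group_size):
    """Split ranks 0..world_size-1 into contiguous batchnorm groups.

    Mirrors the validation and arithmetic in train(): group_size must be
    0 (meaning "one global group") or divide world_size evenly.
    """
    if group_size <= 0:
        return [list(range(world_size))]
    if world_size % group_size != 0:
        raise ValueError(
            f"Synchronized batch norm group size ({group_size}) must be 0"
            f" or divide total number of GPUs ({world_size})."
        )
    groups = []
    for start in range(0, world_size, group_size):
        # Each group is a contiguous block of ranks, as in train()
        groups.append(list(range(start, start + group_size)))
    return groups
```

With 8 GPUs and a group size of 4, this yields two groups, `[0, 1, 2, 3]` and `[4, 5, 6, 7]`, matching the `group_rank_ids` computed per rank above.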
def infer(
self,
tensors,
checkpoint_dir=None,
ckpt_pattern='',
verbose=True,
cache=False,
use_cache=False,
offload_to_cpu=True,
modules_to_restore=None,
):
"""See NeuralModuleFactory.infer()
"""
call_chain, _ = self.__get_top_sorted_modules_and_dataloader(hook=tensors)
if checkpoint_dir:
# Find all modules that need to be restored
if modules_to_restore is None:
modules_to_restore = []
modules_to_restore_name = []
for op in call_chain:
if op[0].num_weights > 0:
modules_to_restore.append(op[0])
if not isinstance(modules_to_restore, list):
modules_to_restore = [modules_to_restore]
modules_to_restore_name = []
for mod in modules_to_restore:
if not isinstance(mod, NeuralModule):
raise ValueError("Found something that was not a Neural Module inside modules_to_restore")
elif mod.num_weights == 0:
raise ValueError("Found a Neural Module with 0 weights inside modules_to_restore")
modules_to_restore_name.append(str(mod))
module_checkpoints = get_checkpoint_from_dir(modules_to_restore_name, checkpoint_dir, ckpt_pattern)
for mod, checkpoint in zip(modules_to_restore, module_checkpoints):
logging.info(f"Restoring {mod} from {checkpoint}")
mod.restore_from(checkpoint, self._local_rank)
# Init Amp
if (
self._optim_level in AmpOptimizations
and self._optim_level != Optimization.mxprO0
and not self.amp_initialized
):
pt_modules = []
for i in range(len(call_chain)):
if isinstance(call_chain[i][0], nn.Module):
pt_modules.append(call_chain[i][0])
elif isinstance(call_chain[i][0], TrainableNeuralModuleWrapper):
pt_modules.append(call_chain[i][0]._pt_module)
amp.initialize(
min_loss_scale=1.0, models=pt_modules, optimizers=None, opt_level=AmpOptimizations[self._optim_level],
)
self.amp_initialized = True
# Run infer
return self._infer(
tensors_to_return=tensors,
verbose=verbose,
cache=cache,
use_cache=use_cache,
offload_to_cpu=offload_to_cpu,
)
def get_DDP_modules(self, call_chain):
modules = []
for ind in range(1, len(call_chain)):
m_id = call_chain[ind][0].unique_instance_id
module = self.module_reference_table[m_id][1]
if isinstance(module, DDP):
modules.append(module)
return modules
[end of nemo/backends/pytorch/actions.py]
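The `batch_counter` bookkeeping in the training loop above (zero gradients when a step begins, accumulate backward passes, step the optimizer only after `batches_per_step` mini-batches) can be replayed without any framework code. This is a sketch of the control flow only; `simulate_accumulation` is a hypothetical name, not part of the module:

```python
def simulate_accumulation(num_batches, batches_per_step):
    """Replay the batch_counter logic from the training loop.

    Gradients are zeroed at the start of each optimizer step, every
    mini-batch contributes one backward pass, and the optimizer steps
    once batches_per_step mini-batches have accumulated.
    """
    events = []
    batch_counter = 0
    for _ in range(num_batches):
        if batch_counter == 0:
            events.append("zero_grad")   # curr_optimizer.zero_grad()
        events.append("backward")        # loss.backward(...)
        batch_counter += 1
        if batch_counter == batches_per_step:
            events.append("step")        # curr_optimizer.step()
            batch_counter = 0
    return events
```

For 4 mini-batches with `batches_per_step=2` the sequence is `zero_grad, backward, backward, step` twice, i.e. the "algorithmic" batch size is twice the per-batch size, as the `train()` docstring describes.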
[start of nemo/collections/asr/jasper.py]
# Copyright (c) 2019 NVIDIA Corporation
from typing import Optional
import torch
import torch.nn as nn
import torch.nn.functional as F
import nemo
from .parts.jasper import JasperBlock, init_weights, jasper_activations
from nemo.backends.pytorch.nm import TrainableNM
from nemo.core.neural_types import *
from nemo.utils.decorators import add_port_docs
logging = nemo.logging
class JasperEncoder(TrainableNM):
"""
Jasper Encoder creates the pre-processing (prologue), Jasper convolution
block, and the first 3 post-processing (epilogue) layers as described in
Jasper (https://arxiv.org/abs/1904.03288)
Args:
jasper (list): A list of dictionaries. Each element in the list
represents the configuration of one Jasper Block. Each element
should contain::
{
# Required parameters
'filters' (int) # Number of output channels,
'repeat' (int) # Number of sub-blocks,
'kernel' (int) # Size of conv kernel,
'stride' (int) # Conv stride
'dilation' (int) # Conv dilation
'dropout' (float) # Dropout probability
'residual' (bool) # Whether to use residual or not.
# Optional parameters
'residual_dense' (bool) # Whether to use Dense Residuals
# or not. 'residual' must be True for 'residual_dense'
# to be enabled.
# Defaults to False.
'separable' (bool) # Whether to use separable convolutions.
# Defaults to False
'groups' (int) # Number of groups in each conv layer.
# Defaults to 1
'heads' (int) # Sharing of separable filters
# Defaults to -1
'tied' (bool) # Whether to use the same weights for all
# sub-blocks.
# Defaults to False
'se' (bool) # Whether to add Squeeze and Excitation
# sub-blocks.
# Defaults to False
'se_reduction_ratio' (int) # The reduction ratio of the Squeeze
# sub-module.
# Must be an integer > 1.
# Defaults to 16
'kernel_size_factor' (float) # Conv kernel size multiplier
# Can be either an int or float
# Kernel size is recomputed as below:
# new_kernel_size = int(max(1, (kernel_size * kernel_size_factor)))
# to prevent kernel sizes smaller than 1.
# Note: If the rescaled kernel size is an even integer,
# 1 is added to the rescaled kernel size to allow "same"
# padding.
}
activation (str): Activation function used for each sub-blocks. Can be
one of ["hardtanh", "relu", "selu"].
feat_in (int): Number of channels being input to this module
normalization_mode (str): Normalization to be used in each sub-block.
Can be one of ["batch", "layer", "instance", "group"]
Defaults to "batch".
residual_mode (str): Type of residual connection.
Can be "add" or "max".
Defaults to "add".
norm_groups (int): Number of groups for "group" normalization type.
If set to -1, number of channels is used.
Defaults to -1.
conv_mask (bool): Controls the use of sequence length masking prior
to convolutions.
Defaults to True.
frame_splicing (int): Defaults to 1.
init_mode (str): Describes how neural network parameters are
initialized. Options are ['xavier_uniform', 'xavier_normal',
'kaiming_uniform','kaiming_normal'].
Defaults to "xavier_uniform".
"""
length: Optional[torch.Tensor]
@property
@add_port_docs()
def input_ports(self):
"""Returns definitions of module input ports.
"""
return {
# "audio_signal": NeuralType(
# {0: AxisType(BatchTag), 1: AxisType(SpectrogramSignalTag), 2: AxisType(ProcessedTimeTag),}
# ),
# "length": NeuralType({0: AxisType(BatchTag)}),
"audio_signal": NeuralType(('B', 'D', 'T'), SpectrogramType()),
"length": NeuralType(tuple('B'), LengthsType()),
}
@property
@add_port_docs()
def output_ports(self):
"""Returns definitions of module output ports.
"""
return {
# "outputs": NeuralType(
# {0: AxisType(BatchTag), 1: AxisType(EncodedRepresentationTag), 2: AxisType(ProcessedTimeTag),}
# ),
# "encoded_lengths": NeuralType({0: AxisType(BatchTag)}),
"outputs": NeuralType(('B', 'D', 'T'), AcousticEncodedRepresentation()),
"encoded_lengths": NeuralType(tuple('B'), LengthsType()),
}
@property
def disabled_deployment_input_ports(self):
return set(["length"])
@property
def disabled_deployment_output_ports(self):
return set(["encoded_lengths"])
def prepare_for_deployment(self):
m_count = 0
for m in self.modules():
if type(m).__name__ == "MaskedConv1d":
m.use_mask = False
m_count += 1
logging.warning(f"Turned off {m_count} masked convolutions")
def __init__(
self,
jasper,
activation,
feat_in,
normalization_mode="batch",
residual_mode="add",
norm_groups=-1,
conv_mask=True,
frame_splicing=1,
init_mode='xavier_uniform',
):
super().__init__()
activation = jasper_activations[activation]()
feat_in = feat_in * frame_splicing
residual_panes = []
encoder_layers = []
self.dense_residual = False
for lcfg in jasper:
dense_res = []
if lcfg.get('residual_dense', False):
residual_panes.append(feat_in)
dense_res = residual_panes
self.dense_residual = True
groups = lcfg.get('groups', 1)
separable = lcfg.get('separable', False)
heads = lcfg.get('heads', -1)
se = lcfg.get('se', False)
se_reduction_ratio = lcfg.get('se_reduction_ratio', 16)
kernel_size_factor = lcfg.get('kernel_size_factor', 1.0)
encoder_layers.append(
JasperBlock(
feat_in,
lcfg['filters'],
repeat=lcfg['repeat'],
kernel_size=lcfg['kernel'],
stride=lcfg['stride'],
dilation=lcfg['dilation'],
dropout=lcfg['dropout'],
residual=lcfg['residual'],
groups=groups,
separable=separable,
heads=heads,
residual_mode=residual_mode,
normalization=normalization_mode,
norm_groups=norm_groups,
activation=activation,
residual_panes=dense_res,
conv_mask=conv_mask,
se=se,
se_reduction_ratio=se_reduction_ratio,
kernel_size_factor=kernel_size_factor,
)
)
feat_in = lcfg['filters']
self.encoder = nn.Sequential(*encoder_layers)
self.apply(lambda x: init_weights(x, mode=init_mode))
self.to(self._device)
def forward(self, audio_signal, length=None):
# type: (Tensor, Optional[Tensor]) -> Tensor, Optional[Tensor]
s_input, length = self.encoder(([audio_signal], length))
if length is None:
return s_input[-1]
return s_input[-1], length
class JasperDecoderForCTC(TrainableNM):
"""
Jasper Decoder creates the final layer in Jasper that maps from the outputs
of Jasper Encoder to the vocabulary of interest.
Args:
feat_in (int): Number of channels being input to this module
num_classes (int): Number of characters in ASR model's vocab/labels.
This count should not include the CTC blank symbol.
init_mode (str): Describes how neural network parameters are
initialized. Options are ['xavier_uniform', 'xavier_normal',
'kaiming_uniform','kaiming_normal'].
Defaults to "xavier_uniform".
"""
@property
@add_port_docs()
def input_ports(self):
"""Returns definitions of module input ports.
"""
return {
# "encoder_output": NeuralType(
# {0: AxisType(BatchTag), 1: AxisType(EncodedRepresentationTag), 2: AxisType(ProcessedTimeTag),}
# )
"encoder_output": NeuralType(('B', 'D', 'T'), AcousticEncodedRepresentation())
}
@property
@add_port_docs()
def output_ports(self):
"""Returns definitions of module output ports.
"""
# return {"output": NeuralType({0: AxisType(BatchTag), 1: AxisType(TimeTag), 2: AxisType(ChannelTag),})}
return {"output": NeuralType(('B', 'T', 'D'), LogprobsType())}
def __init__(self, feat_in, num_classes, init_mode="xavier_uniform"):
super().__init__()
self._feat_in = feat_in
# Add 1 for blank char
self._num_classes = num_classes + 1
self.decoder_layers = nn.Sequential(nn.Conv1d(self._feat_in, self._num_classes, kernel_size=1, bias=True))
self.apply(lambda x: init_weights(x, mode=init_mode))
self.to(self._device)
def forward(self, encoder_output):
return F.log_softmax(self.decoder_layers(encoder_output).transpose(1, 2), dim=-1)
class JasperDecoderForClassification(TrainableNM):
"""
Jasper Decoder creates the final layer in Jasper that maps from the outputs
of Jasper Encoder to one class label.
Args:
feat_in (int): Number of channels being input to this module
num_classes (int): Number of output classes for the classification task.
init_mode (str): Describes how neural network parameters are
initialized. Options are ['xavier_uniform', 'xavier_normal',
'kaiming_uniform','kaiming_normal'].
Defaults to "xavier_uniform".
"""
@property
def input_ports(self):
"""Returns definitions of module input ports.
"""
return {
# "encoder_output": NeuralType(
# {0: AxisType(BatchTag), 1: AxisType(EncodedRepresentationTag), 2: AxisType(ProcessedTimeTag)}
# )
"encoder_output": NeuralType(('B', 'D', 'T'), AcousticEncodedRepresentation())
}
@property
def output_ports(self):
"""Returns definitions of module output ports.
"""
# return {"logits": NeuralType({0: AxisType(BatchTag), 1: AxisType(ChannelTag)})}
return {"logits": NeuralType(('B', 'D'), LogitsType())}
def __init__(
self, *, feat_in, num_classes, init_mode="xavier_uniform", return_logits=True, pooling_type='avg', **kwargs
):
TrainableNM.__init__(self, **kwargs)
self._feat_in = feat_in
self._return_logits = return_logits
self._num_classes = num_classes
if pooling_type == 'avg':
self.pooling = nn.AdaptiveAvgPool1d(1)
elif pooling_type == 'max':
self.pooling = nn.AdaptiveMaxPool1d(1)
else:
raise ValueError('Pooling type chosen is not valid. Must be either `avg` or `max`')
self.decoder_layers = nn.Sequential(nn.Linear(self._feat_in, self._num_classes, bias=True))
self.apply(lambda x: init_weights(x, mode=init_mode))
self.to(self._device)
def forward(self, encoder_output):
batch, in_channels, timesteps = encoder_output.size()
encoder_output = self.pooling(encoder_output).view(batch, in_channels) # [B, C]
logits = self.decoder_layers(encoder_output) # [B, num_classes]
if self._return_logits:
return logits
return F.softmax(logits, dim=-1)
[end of nemo/collections/asr/jasper.py]
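The `kernel_size_factor` rule described in the `JasperEncoder` docstring (clamp the rescaled kernel size to at least 1, and bump even results to the next odd integer so "same" padding stays symmetric) is small enough to sketch on its own. `scale_kernel_size` is a hypothetical helper name for illustration, not a function exported by the module:

```python
def scale_kernel_size(kernel_size, kernel_size_factor):
    """Rescale a conv kernel size per the JasperEncoder docstring."""
    # Clamp so the rescaled kernel is never smaller than 1
    new_kernel_size = int(max(1, kernel_size * kernel_size_factor))
    # Even kernels cannot produce symmetric "same" padding; make them odd
    if new_kernel_size % 2 == 0:
        new_kernel_size += 1
    return new_kernel_size
```

For example, a kernel of 11 with factor 0.5 becomes 5, while a factor that would yield an even size (e.g. 8 with factor 1.0) is bumped to 9.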
[start of nemo/core/neural_factory.py]
# ! /usr/bin/python
# -*- coding: utf-8 -*-
# Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
__all__ = [
'Backend',
'ModelMode',
'Optimization',
'DeviceType',
'Actions',
'NeuralModuleFactory',
'DeploymentFormat',
]
import random
from abc import ABC, abstractmethod
from enum import Enum
from typing import List, Optional
import numpy as np
import nemo
from ..utils import ExpManager
from .callbacks import ActionCallback, EvaluatorCallback
from .neural_types import *
from nemo.utils.decorators import deprecated
logging = nemo.logging
class DeploymentFormat(Enum):
"""Which format to use when exporting a Neural Module for deployment"""
AUTO = 0
PYTORCH = 1
TORCHSCRIPT = 2
ONNX = 3
TRTONNX = 4
class Backend(Enum):
"""Supported backends. For now, it is only PyTorch."""
PyTorch = 1
NotSupported = 2
class ModelMode(Enum):
"""Training Mode or Evaluation/Inference"""
train = 0
eval = 1
class Optimization(Enum):
"""Various levels of Apex/amp Optimization.
WARNING: This might have an effect on model accuracy."""
mxprO0 = 0
mxprO1 = 1
mxprO2 = 2
mxprO3 = 3
class DeviceType(Enum):
"""Device types where Neural Modules can be placed."""
GPU = 1
CPU = 2
AllGpu = 3
class Actions(ABC):
"""Basic actions allowed on graphs of Neural Modules"""
def __init__(self, local_rank, global_rank, optimization_level=Optimization.mxprO0):
self._local_rank = local_rank
self._global_rank = global_rank
self._optim_level = optimization_level
self.step = None
self.epoch_num = None
@property
def local_rank(self):
"""Local rank during distributed execution. None if single GPU/CPU
Returns:
(int) rank of the worker, or None if not in distributed mode
"""
return self._local_rank
@property
def global_rank(self):
"""Global rank during distributed execution. None if single GPU/CPU
Returns:
(int) rank of the worker, or None if not in distributed mode
"""
return self._global_rank
@abstractmethod
def train(
self,
tensors_to_optimize: List[NmTensor],
callbacks: Optional[List[ActionCallback]],
lr_policy=None,
batches_per_step=None,
stop_on_nan_loss=False,
):
"""This action executes training and (optionally) evaluation.
Args:
tensors_to_optimize: which tensors to optimize. Typically this is
a single loss tensor.
callbacks: list of callback objects
lr_policy: function which should take (initial_lr, step, epoch) and
return learning rate
batches_per_step: number of mini-batches to process before one
optimizer step. (default: None, same as 1). Use this
to simulate larger batch sizes on hardware which could not fit
larger batch in memory otherwise. Effectively, this will make the
"algorithmic" batch size per GPU/worker equal to
batches_per_step * batch_size
stop_on_nan_loss: (default: False) If set to True, the training
will stop if loss=nan. If set to False, the training will
continue, but the gradients will be zeroed before next
mini-batch.
Returns:
None
"""
pass
@abstractmethod
def infer(self, tensors: List[NmTensor]):
"""This action executes inference. Nothing is optimized.
Args:
tensors: which tensors to evaluate.
Returns:
None
"""
pass
@abstractmethod
def save_state_to(self, path: str):
"""
Saves current state such as step, epoch and optimizer parameters
Args:
path:
Returns:
"""
pass
@abstractmethod
def restore_state_from(self, path: str):
"""
Restores state such as step, epoch and optimizer parameters
Args:
path:
Returns:
"""
pass
@abstractmethod
def create_optimizer(self, optimizer, things_to_optimize, optimizer_params):
"""
Creates an optimizer object to be use in the train() method.
Args:
optimizer: Specifies which optimizer to use.
things_to_optimize: A list of neural modules or tensors to be
optimized.
optimizer_params: Specifies the parameters of the optimizer
Returns:
Optimizer
"""
pass
def _perform_on_iteration_start(self, callbacks):
# TODO: Most of these checks can be relaxed since we enforce callbacks
# to be a list of ActionCallback objects
if callbacks is not None and isinstance(callbacks, List) and len(callbacks) > 0:
for callback in callbacks:
callback.on_iteration_start()
def _perform_on_iteration_end(self, callbacks):
if callbacks is not None and isinstance(callbacks, List) and len(callbacks) > 0:
for callback in callbacks:
callback.on_iteration_end()
def _perform_on_action_start(self, callbacks):
if callbacks is not None and isinstance(callbacks, List) and len(callbacks) > 0:
for callback in callbacks:
callback.on_action_start()
def _perform_on_action_end(self, callbacks):
if callbacks is not None and isinstance(callbacks, List) and len(callbacks) > 0:
for callback in callbacks:
callback.on_action_end()
def _perform_on_epoch_start(self, callbacks):
if callbacks is not None and isinstance(callbacks, List) and len(callbacks) > 0:
for callback in callbacks:
callback.on_epoch_start()
def _perform_on_epoch_end(self, callbacks):
if callbacks is not None and isinstance(callbacks, List) and len(callbacks) > 0:
for callback in callbacks:
callback.on_epoch_end()
def _init_callbacks(self, callbacks):
if callbacks is not None and isinstance(callbacks, List) and len(callbacks) > 0:
for callback in callbacks:
callback.action = self
def _update_callbacks(
self, callbacks=None, registered_tensors=None,
):
# if self.local_rank is None or self.local_rank == 0:
if callbacks is not None and isinstance(callbacks, List) and len(callbacks) > 0:
for callback in callbacks:
callback._registered_tensors = registered_tensors
def _str_to_opt_level(opt_str: str) -> Optimization:
number = int(opt_str[1:])
if number not in Optimization._value2member_map_:
raise ValueError(f"Unknown optimization value {opt_str}")
return Optimization(number)
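`_str_to_opt_level` above accepts the string forms "O0" through "O3" (as passed to `NeuralModuleFactory(optimization_level=...)`) and maps them onto the `Optimization` enum. The same parsing can be exercised self-contained with a stand-in enum; this sketch mirrors the enum values rather than importing the module itself:

```python
from enum import Enum


class Opt(Enum):
    """Stand-in mirroring nemo.core.neural_factory.Optimization."""

    mxprO0 = 0
    mxprO1 = 1
    mxprO2 = 2
    mxprO3 = 3


def str_to_opt_level(opt_str):
    """Parse "O<n>" into the enum, rejecting unknown levels."""
    number = int(opt_str[1:])
    if number not in Opt._value2member_map_:
        raise ValueError(f"Unknown optimization value {opt_str}")
    return Opt(number)
```

So `str_to_opt_level("O2")` yields `Opt.mxprO2`, while an out-of-range level such as "O7" raises `ValueError` instead of silently falling back.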
class NeuralModuleFactory(object):
_DEFAULT = None
"""
Neural Module Factory instance is used to create neural modules and
trainers
Args:
backend (Backend): Currently only Backend.PyTorch is supported
local_rank (int): Process rank. Should be set by distributed runner
optimization_level (Optimization): Level of optimization to use. Will
be passed to neural modules and actions created by this factory.
placement (DeviceType): where to place NeuralModule instances by default
cudnn_benchmark (bool): (default False) If set to True it will use
cudnnFind method to find the best kernels instead of using
heuristics. If the shapes of your inputs are constant this
should help; for varying shapes it can slow things down. Give it a
few iterations to warm up if set to True. Currently only supported
by PyTorch backend.
random_seed (int): (default None) Sets random seed to control for
randomness. This should be used for debugging purposes as it might
have negative impact on performance. Can't be used when
`cudnn_benchmark=True`.
master_process (bool): (default True) Flag for master process
indication
set_default (bool): (default True) True if should set this instance as
default factory for modules instantiating.
"""
def __init__(
self,
backend=Backend.PyTorch,
local_rank=None,
optimization_level=Optimization.mxprO0,
placement=None,
cudnn_benchmark=False,
random_seed=None,
set_default=True,
log_dir=None,
checkpoint_dir=None,
tensorboard_dir=None,
create_tb_writer=False,
files_to_copy=None,
add_time_to_log_dir=False,
):
self._local_rank = local_rank
self._global_rank = None
if isinstance(optimization_level, str):
optimization_level = _str_to_opt_level(optimization_level)
self._optim_level = optimization_level
if placement is None:
if local_rank is not None:
device = DeviceType.AllGpu
else:
device = DeviceType.GPU
self._placement = device
else:
self._placement = placement
self._backend = backend
self._world_size = 1
broadcast_func = None
if backend == Backend.PyTorch:
# TODO: Move all framework specific code from this file
import torch
if self._placement != DeviceType.CPU:
if not torch.cuda.is_available():
raise ValueError(
"You requested to use GPUs but CUDA is "
"not installed. You can try running using"
" CPU-only. To do this, instantiate your"
" factory with placement=DeviceType.CPU"
"\n"
"Note that this is slow and is not "
"well supported."
)
torch.backends.cudnn.benchmark = cudnn_benchmark
if random_seed is not None and cudnn_benchmark:
raise ValueError("cudnn_benchmark can not be set to True when random_seed is not None.")
if random_seed is not None:
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
torch.manual_seed(random_seed)
np.random.seed(random_seed)
random.seed(random_seed)
if self._local_rank is not None:
torch.distributed.init_process_group(backend="nccl", init_method="env://")
cuda_set = True
# Try to set cuda device. This should fail if self._local_rank
# is greater than the number of available GPUs
try:
torch.cuda.set_device(self._local_rank)
except RuntimeError:
# Note in this case, all tensors are now sent to GPU 0
# who could crash because of OOM. Thus init_process_group()
# must be done before any cuda tensors are allocated
cuda_set = False
cuda_set_t = torch.cuda.IntTensor([cuda_set])
# Do an all_reduce to ensure all workers obtained a GPU
# For the strangest reason, BAND doesn't work so I am resorting
# to MIN.
torch.distributed.all_reduce(cuda_set_t, op=torch.distributed.ReduceOp.MIN)
if cuda_set_t.item() == 0:
raise RuntimeError(
"There was an error initializing distributed training."
" Perhaps you specified more gpus than you have "
"available"
)
del cuda_set_t
torch.cuda.empty_cache()
# Remove test tensor from memory
self._world_size = torch.distributed.get_world_size()
self._global_rank = torch.distributed.get_rank()
def torch_broadcast_wrapper(str_len=None, string=None, src=0):
"""Wrapper function to broadcast string values across all
workers
"""
# Create byte cuda torch tensor
if string is not None:
string_tensor = torch.tensor(list(string.encode()), dtype=torch.uint8).cuda()
else:
string_tensor = torch.tensor([0] * str_len, dtype=torch.uint8).cuda()
# Run broadcast
torch.distributed.broadcast(string_tensor, src)
# turn byte tensor back to string
return_string = string_tensor.cpu().numpy().tobytes().decode()
return return_string
broadcast_func = torch_broadcast_wrapper
else:
raise NotImplementedError("Only the PyTorch backend is currently supported.")
# Create ExpManager
# if log_dir is None, only create logger
self._exp_manager = ExpManager(
work_dir=log_dir,
ckpt_dir=checkpoint_dir,
use_tb=create_tb_writer,
tb_dir=tensorboard_dir,
local_rank=local_rank,
global_rank=self._global_rank,
files_to_copy=files_to_copy,
add_time=add_time_to_log_dir,
exist_ok=True,
broadcast_func=broadcast_func,
)
self._tb_writer = self._exp_manager.tb_writer
# Create trainer
self._trainer = self._get_trainer(tb_writer=self._tb_writer)
if set_default:
NeuralModuleFactory.set_default_factory(self)
@classmethod
def get_default_factory(cls):
return cls._DEFAULT
@classmethod
def set_default_factory(cls, factory):
cls._DEFAULT = factory
@classmethod
def reset_default_factory(cls):
cls._DEFAULT = None
@staticmethod
def __name_import(name):
components = name.split(".")
mod = __import__(components[0])
for comp in components[1:]:
mod = getattr(mod, comp)
return mod
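`__name_import` above resolves a dotted path by importing the top-level package with `__import__` and then walking the remaining components with `getattr`. The pattern can be exercised against the standard library; `name_import` below is a standalone copy for illustration:

```python
def name_import(name):
    """Resolve a dotted path like "os.path.join" to the final attribute,
    the same way NeuralModuleFactory.__name_import does."""
    components = name.split(".")
    # __import__("a.b.c") returns the top-level package "a",
    # so only the first component is imported directly
    mod = __import__(components[0])
    for comp in components[1:]:
        mod = getattr(mod, comp)
    return mod
```

For example, `name_import("os.path.join")` returns the `os.path.join` function. For resolving module paths specifically, `importlib.import_module` is the more idiomatic modern choice; the `getattr` walk here additionally reaches attributes (classes, functions) inside the final module.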
@deprecated(version=0.11)
def __get_pytorch_module(self, name, collection, params, pretrained):
# TK: "factory" is not passed as parameter anymore.
# params["factory"] = self
if collection == "toys" or collection == "tutorials" or collection == "other":
constructor = NeuralModuleFactory.__name_import("nemo.backends.pytorch.tutorials." + name)
elif collection == "nemo_nlp":
constructor = NeuralModuleFactory.__name_import("nemo_nlp." + name)
if name == "BERT" and pretrained is True:
params["pretrained"] = True
elif collection == "nemo_asr":
constructor = NeuralModuleFactory.__name_import("nemo_asr." + name)
elif collection == "nemo_lpr":
constructor = NeuralModuleFactory.__name_import("nemo_lpr." + name)
elif collection == 'common':
constructor = NeuralModuleFactory.__name_import('nemo.backends.pytorch.common.' + name)
elif collection == "torchvision":
import torchvision.models as tv_models
import nemo.backends.pytorch.module_wrapper as mw
import torch.nn as nn
if name == "ImageFolderDataLayer":
constructor = NeuralModuleFactory.__name_import("nemo.backends.pytorch.torchvision.data." + name)
instance = constructor(**params)
return instance
else:
_nm_name = name.lower()
if _nm_name == "resnet18":
input_ports = {
"x": NeuralType(
{
0: AxisType(BatchTag),
1: AxisType(ChannelTag),
2: AxisType(HeightTag, 224),
3: AxisType(WidthTag, 224),
}
)
}
output_ports = {"output": NeuralType({0: AxisType(BatchTag), 1: AxisType(ChannelTag)})}
pt_model = tv_models.resnet18(pretrained=pretrained)
num_classes = params.get("num_classes", None)
if num_classes is not None:
pt_model.fc = nn.Linear(512, params["num_classes"])
return mw.TrainableNeuralModuleWrapper(
pt_nn_module=pt_model, input_ports_dict=input_ports, output_ports_dict=output_ports,
)
elif _nm_name == "resnet50":
input_ports = {
"x": NeuralType(
{
0: AxisType(BatchTag),
1: AxisType(ChannelTag),
2: AxisType(HeightTag, 224),
3: AxisType(WidthTag, 224),
}
)
}
output_ports = {"output": NeuralType({0: AxisType(BatchTag), 1: AxisType(ChannelTag)})}
pt_model = tv_models.resnet50(pretrained=pretrained)
num_classes = params.get("num_classes", None)
if num_classes is not None:
pt_model.fc = nn.Linear(2048, params["num_classes"])
return mw.TrainableNeuralModuleWrapper(
pt_nn_module=pt_model, input_ports_dict=input_ports, output_ports_dict=output_ports,
)
else:
collection_path = "nemo.collections." + collection + "." + name
constructor = NeuralModuleFactory.__name_import(collection_path)
if name == "BERT" and pretrained is True:
params["pretrained"] = True
# TK: "placement" is not passed as parameter anymore.
# if "placement" not in params:
# params["placement"] = self._placement
instance = constructor(**params)
return instance
@deprecated(version=0.11)
def get_module(self, name, collection, params, pretrained=False):
"""
Creates NeuralModule instance
Args:
name (str): name of NeuralModule which instance should be returned.
params (dict): local parameters which should be passed to
NeuralModule's constructor.
collection (str): in which collection to look for
`neural_module_name`
pretrained (bool): return pre-trained instance or randomly
initialized (default)
Returns:
NeuralModule instance
"""
# TK: "optimization_level" is not passed as parameter anymore.
# if params is not None and "optimization_level" in params:
# if params["optimization_level"] != self._optim_level:
# logging.warning(
# "Module's {0} requested optimization level {1} is"
# "different from the one specified by factory - {2}."
# "Using: {3} for this module".format(
# name, params["optimization_level"], self._optim_level, params["optimization_level"],
# )
# )
# else:
# if params is None:
# params = {}
# params["optimization_level"] = self._optim_level
if self._backend == Backend.PyTorch:
return self.__get_pytorch_module(name=name, collection=collection, params=params, pretrained=pretrained,)
else:
return None
def create_optimizer(self, optimizer, things_to_optimize, optimizer_params):
return self._trainer.create_optimizer(
optimizer=optimizer, things_to_optimize=things_to_optimize, optimizer_params=optimizer_params,
)
def train(
self,
tensors_to_optimize,
optimizer=None,
optimization_params=None,
callbacks: Optional[List[ActionCallback]] = None,
lr_policy=None,
batches_per_step=None,
stop_on_nan_loss=False,
synced_batchnorm=False,
synced_batchnorm_groupsize=0,
gradient_predivide=False,
amp_max_loss_scale=2.0 ** 24,
reset=False,
):
if reset:
self.reset_trainer()
return self._trainer.train(
tensors_to_optimize=tensors_to_optimize,
optimizer=optimizer,
optimization_params=optimization_params,
callbacks=callbacks,
lr_policy=lr_policy,
batches_per_step=batches_per_step,
stop_on_nan_loss=stop_on_nan_loss,
synced_batchnorm=synced_batchnorm,
synced_batchnorm_groupsize=synced_batchnorm_groupsize,
gradient_predivide=gradient_predivide,
amp_max_loss_scale=amp_max_loss_scale,
)
def eval(self, callbacks: List[EvaluatorCallback]):
if callbacks is None or len(callbacks) == 0:
raise ValueError(f"You need to provide at lease one evaluation" f"callback to eval")
for callback in callbacks:
if not isinstance(callback, EvaluatorCallback):
raise TypeError(f"All callbacks passed to the eval action must" f"be inherited from EvaluatorCallback")
self.train(
tensors_to_optimize=None, optimizer='sgd', callbacks=callbacks, optimization_params={'num_epochs': 1},
)
def deployment_export(
self, module, output: str, d_format: DeploymentFormat, input_example=None, output_example=None
):
"""Exports Neural Module instance for deployment.
Args:
module: neural module to export
output (str): where export results should be saved
d_format (DeploymentFormat): which deployment format to use
input_example: sometimes tracing will require input examples
output_example: Should match inference on input_example
"""
module.prepare_for_deployment()
return self._trainer.deployment_export(
module=module,
output=output,
d_format=d_format,
input_example=input_example,
output_example=output_example,
)
def infer(
self,
tensors: List[NmTensor],
checkpoint_dir=None,
ckpt_pattern='',
verbose=True,
cache=False,
use_cache=False,
offload_to_cpu=True,
modules_to_restore=None,
):
"""Runs inference to obtain values for tensors
Args:
tensors (list[NmTensor]): List of NeMo tensors that we want to get
values of.
checkpoint_dir (str): Path to checkpoint directory. Default is None
which does not load checkpoints.
ckpt_pattern (str): Pattern used to check for checkpoints inside
checkpoint_dir. Default is '' which matches any checkpoints
inside checkpoint_dir.
verbose (bool): Controls printing. Defaults to True.
cache (bool): If True, cache all `tensors` and intermediate tensors
so that future calls that have use_cache set will avoid
computation. Defaults to False.
use_cache (bool): If True, values from `tensors` will still be
re-computed, but intermediate tensors from the DAG leading to
`tensors` will be re-used from the cache. If you want something to
be re-computed, put it into the `tensors` list. Defaults to False.
offload_to_cpu (bool): If True, all evaluated tensors are moved to
cpu memory after each inference batch. Defaults to True.
modules_to_restore (list): Defaults to None, in which case all
NMs inside callchain with weights will be restored. If
specified only the modules inside this list will be restored.
Returns:
List of evaluated tensors. Each element in the list is also a list
where each element is now a batch of tensor values.
"""
return self._trainer.infer(
tensors=tensors,
checkpoint_dir=checkpoint_dir,
ckpt_pattern=ckpt_pattern,
verbose=verbose,
cache=cache,
use_cache=use_cache,
offload_to_cpu=offload_to_cpu,
modules_to_restore=modules_to_restore,
)
def clear_cache(self):
"""Helper function to clean inference cache."""
self._trainer.clear_cache()
@deprecated(version="future")
def _get_trainer(self, tb_writer=None):
if self._backend == Backend.PyTorch:
constructor = NeuralModuleFactory.__name_import("nemo.backends.pytorch.PtActions")
instance = constructor(
local_rank=self._local_rank,
global_rank=self._global_rank,
tb_writer=tb_writer,
optimization_level=self._optim_level,
)
return instance
else:
raise ValueError("Only PyTorch backend is currently supported.")
@deprecated(
version="future",
explanation="Please use .train(...), .eval(...), .infer(...) and "
f".create_optimizer(...) of the NeuralModuleFactory instance directly.",
)
def get_trainer(self, tb_writer=None):
if self._trainer:
logging.warning(
"The trainer instance was created during initialization of "
"Neural factory, using the already created instance."
)
return self._trainer
return self._get_trainer(tb_writer)
def reset_trainer(self):
del self._trainer
self._trainer = self._get_trainer(tb_writer=self._tb_writer)
def sync_all_processes(self, status=True):
""" Helper function for testing that allows proccess 0 to inform all
other processes of failures. Does nothing if not using distributed
training. Usage example can be seen in examples/asr/jasper_an4.py
Args:
status (bool): Defaults to True. If any process passes False, it
will trigger a graceful exit on all other processes. It is
assumed that the process that passed False will print an error
message on its own and exit.
"""
if self._world_size == 1:
logging.info("sync_all_processes does nothing if there is one process")
return
if self._backend == Backend.PyTorch:
import torch
status_tensor = torch.cuda.IntTensor([status])
torch.distributed.all_reduce(status_tensor, op=torch.distributed.ReduceOp.MIN)
if status_tensor.item() == 0:
logging.error("At least one process had a failure")
if status:
raise ValueError(
f"Process with global rank {self._global_rank} entered"
" sync_all_processes with a passing status, but "
"another process indicated a failure"
)
@property
def world_size(self):
return self._world_size
@property
def tb_writer(self):
return self._tb_writer
@property
def placement(self):
return self._placement
@property
def optim_level(self):
return self._optim_level
@property
@deprecated(version=0.11, explanation="Please use ``nemo.logging instead``")
def logger(self):
return nemo.logging
@property
def checkpoint_dir(self):
return self._exp_manager.ckpt_dir
@property
def work_dir(self):
return self._exp_manager.work_dir
@property
def global_rank(self):
return self._global_rank
[end of nemo/core/neural_factory.py]
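`get_module` above resolves modules by building a dotted path such as `nemo.collections.<collection>.<name>` and importing the constructor by name (via the private `__name_import` helper). A generic sketch of that dynamic-import step, demonstrated on a stdlib path rather than a real NeMo collection:

```python
import importlib

def name_import(dotted_path):
    """Resolve a 'package.module.attr' string to the object it names."""
    module_path, _, attr = dotted_path.rpartition(".")
    return getattr(importlib.import_module(module_path), attr)

# Illustration with a stdlib path instead of a NeMo collection path:
OrderedDict = name_import("collections.OrderedDict")
print(OrderedDict.__name__)  # OrderedDict
```

This mirrors the factory's lookup mechanism only in spirit; the real helper also handles pretrained flags and collection-specific parameters.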
[start of nemo/core/neural_modules.py]
# ! /usr/bin/python
# -*- coding: utf-8 -*-
# Copyright (c) 2019-, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This file contains NeuralModule and NmTensor classes."""
__all__ = ['WeightShareTransform', 'NeuralModule']
import collections
import uuid
from abc import ABC, abstractmethod
from collections import namedtuple
from enum import Enum
from inspect import getargvalues, getfullargspec, stack
from os import path
from typing import Dict, List, Optional, Set, Tuple
from ruamel.yaml import YAML
from .neural_types import (
CanNotInferResultNeuralType,
NeuralPortNameMismatchError,
NeuralPortNmTensorMismatchError,
NeuralType,
NeuralTypeComparisonResult,
NmTensor,
)
from nemo import logging
from nemo.core import NeuralModuleFactory
from nemo.package_info import __version__ as nemo_version
from nemo.utils.decorators.deprecated import deprecated
YAML = YAML(typ='safe')
class WeightShareTransform(Enum):
"""When sharing parameters, what kind of transform to apply."""
SAME = 0
TRANSPOSE = 1
PretrainedModelInfo = namedtuple(
"PretrainedModleInfo", ("pretrained_model_name", "description", "parameters", "location"),
)
class NeuralModule(ABC):
"""Abstract class that every Neural Module must inherit from.
"""
def __init__(self):
# Get default factory.
self._factory = NeuralModuleFactory.get_default_factory()
# Set module properties from factory else use defaults
self._placement = self._factory.placement
# If one needs to change that should override it manually.
# Optimization level.
self._opt_level = self._factory.optim_level
# Get object UUID.
self._uuid = str(uuid.uuid4())
# Retrieve dictionary of parameters (keys, values) passed to init.
self._init_params = self.__extract_init_params()
# Print the types of the values.
# for key, value in self._init_params.items():
# print("{}: {} ({})".format(key, value, type(value)))
# Validate the parameters.
# self._validate_params(self._init_params)
@property
def init_params(self) -> Optional[Dict]:
"""
Property returning parameters used to instantiate the module.
Returns:
Dictionary containing parameters used to instantiate the module.
"""
return self._init_params
def __extract_init_params(self):
"""
Retrieves the dictionary of parameters (keys, values) passed to the constructor of a class derived
(also indirectly) from the Neural Module class.
Returns:
Dictionary containing parameters passed to init().
"""
# Get names of arguments of the original module init method.
init_keys = getfullargspec(type(self).__init__).args
# Remove self.
if "self" in init_keys:
init_keys.remove("self")
# Create list of params.
init_params = {}.fromkeys(init_keys)
# Retrieve values of those params from the call list.
for frame in stack()[1:]:
localvars = getargvalues(frame[0]).locals
# print("localvars: ", localvars)
for key in init_keys:
# Found the variable!
if key in localvars.keys():
# Save the value.
init_params[key] = localvars[key]
# Return parameters.
return init_params
def __validate_params(self, params):
"""
Checks whether dictionary contains parameters being primitive types (string, int, float etc.)
or (lists of)+ primitive types.
Args:
params: dictionary of parameters.
Returns:
True if all parameters were ok, False otherwise.
"""
ok = True
# Iterate over parameters and check them one by one.
for key, variable in params.items():
if not self.__is_of_allowed_type(variable):
logging.warning(
"Parameter '{}' contains a variable '{}' of type '{}' which is not allowed.".format(
key, variable, type(variable)
)
)
ok = False
# Return the result.
return ok
def __is_of_allowed_type(self, var):
"""
A recursive function that checks if a given variable is of allowed type.
Args:
var: variable to be checked.
Returns:
True if the variable is of an allowed type, False otherwise.
"""
# Special case: None is also allowed.
if var is None:
return True
var_type = type(var)
# If this is list - check its elements.
if var_type == list:
for list_var in var:
if not self.__is_of_allowed_type(list_var):
return False
# If this is dict - check its elements.
elif var_type == dict:
for _, dict_var in var.items():
if not self.__is_of_allowed_type(dict_var):
return False
elif var_type not in (str, int, float, bool):
return False
# Well, seems that everything is ok.
return True
def _create_config_header(self):
""" A protected method that create a header stored later in the configuration file. """
# Get module "full specification".
module_full_spec = str(self.__module__) + "." + str(self.__class__.__qualname__)
module_class_name = type(self).__name__
# print(module_full_spec)
# Check whether module belongs to a collection.
spec_list = module_full_spec.split(".")
# Do not check Neural Modules from unit tests.
if spec_list[0] == "tests":
# Set collection variables.
collection_type = "tests"
collection_version = None
else:
# Check if component belongs to any collection
if len(spec_list) < 3 or (spec_list[0] != "nemo" and spec_list[1] != "collection"):
logging.warning(
"Module `{}` does not belong to any collection. This won't be allowed in the next release.".format(
module_class_name
)
)
collection_type = "unknown"
collection_version = None
else:
# Ok, set collection.
collection_type = spec_list[2]
collection_version = None
# TODO: to be SET!
# print(getattr("nemo.collections.nlp", __version__))
# Create a "header" with module "specification".
header = {
"nemo_core_version": nemo_version,
"collection_type": collection_type,
"collection_version": collection_version,
# "class": module_class_name, # Operating only on full_spec now.
"full_spec": module_full_spec,
}
return header
def export_to_config(self, config_file):
"""
A function that exports module "configuration" (i.e. init parameters) to a YAML file.
Raises a ValueError exception in case the parameters couldn't be exported.
Args:
config_file: path (absolute or relative) and name of the config file (YML)
"""
# Check if generic export will work.
if not self.__validate_params(self._init_params):
raise ValueError(
"Generic configuration export enables to use of parameters of primitive types (string, int, float) "
F"or (lists of/dicts of) primitive types. Please implement your own custom `export_to_config()` and "
F"`import_from_config()` methods for your custom Module class."
)
# Create an absolute path.
abs_path_file = path.expanduser(config_file)
# Create the dictionary to be exported.
to_export = {}
# Add "header" with module "specification".
to_export["header"] = self._create_config_header()
# Add init parameters.
to_export["init_params"] = self._init_params
# print(to_export)
# All parameters are ok, let's export.
with open(abs_path_file, 'w') as outfile:
YAML.dump(to_export, outfile)
logging.info(
"Configuration of module {} ({}) exported to {}".format(self._uuid, type(self).__name__, abs_path_file)
)
@classmethod
def _validate_config_file(cls, config_file, section_name=None):
"""
Class method validating whether the config file has a proper content (sections, specification etc.).
Raises an ImportError exception when config file is invalid or
incompatible (when called from a particular class).
Args:
config_file: path (absolute or relative) and name of the config file (YML)
section_name: section in the configuration file storing module configuration (optional, DEFAULT: None)
Returns:
A loaded configuration file (dictionary).
"""
# Create an absolute path.
abs_path_file = path.expanduser(config_file)
# Open the config file.
with open(abs_path_file, 'r') as stream:
loaded_config = YAML.load(stream)
# Check section.
if section_name is not None:
if section_name not in loaded_config:
raise ImportError(
"The loaded config `{}` doesn't contain the indicated `{}` section".format(
config_file, section_name
)
)
# Section exists - use only it for configuration.
loaded_config = loaded_config[section_name]
# Make sure that the config is valid.
if "header" not in loaded_config:
raise ImportError("The loaded config `{}` doesn't contain the `header` section".format(config_file))
if "init_params" not in loaded_config:
raise ImportError("The loaded config `{}` doesn't contain the `init_params` section".format(config_file))
# Parse the "full specification".
spec_list = loaded_config["header"]["full_spec"].split(".")
# Check if config contains data of a compatible class.
if cls.__name__ != "NeuralModule" and spec_list[-1] != cls.__name__:
txt = "The loaded file `{}` contains configuration of ".format(config_file)
txt = txt + "`{}` thus cannot be used for instantiation of an object of type `{}`".format(
spec_list[-1], cls.__name__
)
raise ImportError(txt)
# Success - return configuration.
return loaded_config
@classmethod
def import_from_config(cls, config_file, section_name=None, overwrite_params={}):
"""
Class method importing the configuration file.
Raises an ImportError exception when config file is invalid or
incompatible (when called from a particular class).
Args:
config_file: path (absolute or relative) and name of the config file (YML)
section_name: section in the configuration file storing module configuration (optional, DEFAULT: None)
overwrite_params: Dictionary containing parameters that will be added to or overwrite (!) the default
parameters loaded from the configuration file
Returns:
Instance of the created NeuralModule object.
"""
# Validate the content of the configuration file (its header).
loaded_config = cls._validate_config_file(config_file, section_name)
# Parse the "full specification".
spec_list = loaded_config["header"]["full_spec"].split(".")
# Get object class from "full specification".
mod_obj = __import__(spec_list[0])
for spec in spec_list[1:]:
mod_obj = getattr(mod_obj, spec)
# print(mod_obj)
# Get init parameters.
init_params = loaded_config["init_params"]
# Update parameters with additional ones.
init_params.update(overwrite_params)
# Create and return the object.
obj = mod_obj(**init_params)
logging.info(
"Instantiated a new Neural Module of type `{}` using configuration loaded from the `{}` file".format(
spec_list[-1], config_file
)
)
return obj
@deprecated(version=0.11)
@staticmethod
def create_ports(**kwargs):
""" Deprecated method, to be remoted in the next release."""
raise Exception(
'Deprecated method. Please implement ``inputs`` and ``outputs`` \
properties to define module ports instead'
)
@property
@abstractmethod
def input_ports(self) -> Optional[Dict[str, NeuralType]]:
"""Returns definitions of module input ports
Returns:
A (dict) of module's input ports names to NeuralTypes mapping
"""
@property
@abstractmethod
def output_ports(self) -> Optional[Dict[str, NeuralType]]:
"""Returns definitions of module output ports
Returns:
A (dict) of module's output ports names to NeuralTypes mapping
"""
@property
def disabled_deployment_input_ports(self) -> Optional[Set[str]]:
"""Returns names of input ports that will not be included in an export
Returns:
A (set) of module's input port names that are not exportable
"""
return set([])
@property
def disabled_deployment_output_ports(self) -> Optional[Set[str]]:
"""Returns names of output ports that will not be included in an export
Returns:
A (set) of module's output port names that are not exportable
"""
return set([])
def prepare_for_deployment(self) -> None:
"""Patch the module if required to prepare for deployment
"""
return
@staticmethod
def pretrained_storage():
return ''
def __call__(self, **kwargs):
"""This method allows objects to be called with their port names
Args:
kwargs: Input ports and their values. For example:
...
mymodule1 = Subclass1_of_NeuralModule(...)
mymodule2 = Subclass2_of_NeuralModule(...)
...
out_port1, out_port2 = mymodule1(input_port1=value1,
input_port2=value2,
input_port3=value3)
out_port11 = mymodule2(input_port1=out_port2)
...
Returns:
NmTensor object or tuple of NmTensor objects
"""
# Get input and output ports definitions.
input_port_defs = self.input_ports
output_port_defs = self.output_ports
first_input_nmtensor_type = None
input_nmtensors_are_of_same_type = True
for port_name, tgv in kwargs.items():
# make sure that passed arguments correspond to input port names
if port_name not in input_port_defs.keys():
raise NeuralPortNameMismatchError("Wrong input port name: {0}".format(port_name))
input_port = input_port_defs[port_name]
type_comatibility = input_port.compare(tgv)
if (
type_comatibility != NeuralTypeComparisonResult.SAME
and type_comatibility != NeuralTypeComparisonResult.GREATER
):
raise NeuralPortNmTensorMismatchError(
"\n\nIn {0}. \n"
"Port: {1} and a NmTensor it was fed are \n"
"of incompatible neural types:\n\n{2} \n\n and \n\n{3}"
"\n\nType comparison result: {4}".format(
self.__class__.__name__, port_name, input_port_defs[port_name], tgv, type_comatibility,
)
)
# if first_input_nmtensor_type is None:
# first_input_nmtensor_type = NeuralType(tgv._axis2type)
# else:
# if first_input_nmtensor_type._axis2type is None:
# input_nmtensors_are_of_same_type = True
# else:
# input_nmtensors_are_of_same_type = first_input_nmtensor_type.compare(
# tgv
# ) == NeuralTypeComparisonResult.SAME and len(first_input_nmtensor_type._axis2type)
# if not (
# type_comatibility == NeuralTypeComparisonResult.SAME
# or type_comatibility == NeuralTypeComparisonResult.GREATER
# ):
# raise NeuralPortNmTensorMismatchError(
# "\n\nIn {0}. \n"
# "Port: {1} and a NmTensor it was fed are \n"
# "of incompatible neural types:\n\n{2} \n\n and \n\n{3}"
# "\n\nType comparison result: {4}".format(
# self.__class__.__name__, port_name, input_port_defs[port_name], tgv, type_comatibility,
# )
# )
# if type_comatibility == NeuralTypeComparisonResult.LESS:
# print('Types were raised')
if len(output_port_defs) == 1:
out_name = list(output_port_defs)[0]
out_type = output_port_defs[out_name]
if out_type is None:
if input_nmtensors_are_of_same_type:
out_type = first_input_nmtensor_type
else:
raise CanNotInferResultNeuralType(
"Can't infer output neural type. Likely your inputs are of different type."
)
return NmTensor(producer=self, producer_args=kwargs, name=out_name, ntype=out_type,)
else:
result = []
for out_port, n_type in output_port_defs.items():
out_type = n_type
if out_type is None:
if input_nmtensors_are_of_same_type:
out_type = first_input_nmtensor_type
else:
raise CanNotInferResultNeuralType(
"Can't infer output neural type. Likely your inputs are of different type."
)
result.append(NmTensor(producer=self, producer_args=kwargs, name=out_port, ntype=out_type,))
# Creating ad-hoc class for returning from module's forward pass.
output_class_name = f'{self.__class__.__name__}Output'
field_names = list(output_port_defs)
result_type = collections.namedtuple(typename=output_class_name, field_names=field_names,)
# Tie tuple of output tensors with corresponding names.
result = result_type(*result)
return result
def __str__(self):
return self.__class__.__name__
@abstractmethod
def get_weights(self) -> Optional[Dict[(str, bool)]]:
"""Returns NeuralModule's weights copy.
Returns:
Dictionary of name -> (weights, trainable)"""
pass
@abstractmethod
def set_weights(
self,
name2weight: Dict[(str, Tuple[str, bool])],
name2name_and_transform: Dict[(str, Tuple[str, WeightShareTransform])] = None,
):
"""Sets weight from given values. For every named weight in
name2weight,
if weight with the same name is found in the model, it will be set to
found value.
WARNING: This will NOT tie weights. It will copy values.
If ``name2name_and_transform`` is provided then it will set weights
using name mapping and transform. For example, suppose ``object1.X = 3x5
weight``. Then, if ``name2name_and_transform['X']=('Y',
WeightShareTransform.TRANSPOSE)`` and ``Y`` is a 5x3 weight and
``name2weight['Y']=Y``, then
``object1.set_weights(name2weight, name2name_and_transform)`` will
set ``object1.X = transpose(Y)``.
Args:
name2weight (dict): dictionary of name to (weight, trainable).
Typically this is output of get_weights method.
name2name_and_transform: mapping from name -> (name, transform)
"""
pass
@staticmethod
def list_pretrained_models() -> Optional[List[PretrainedModelInfo]]:
"""List all available pre-trained models (e.g. weights) for this NM.
Returns:
A list of PretrainedModelInfo tuples.
The pretrained_model_name field of the tuple can be used to
retrieve pre-trained model's weights (pass it as
pretrained_model_name argument to the module's constructor)
"""
return None
def get_config_dict_and_checkpoint(self, pretrained_model_name):
"""WARNING: This part is work in progress"""
return None
@abstractmethod
def tie_weights_with(
self,
module,
weight_names=List[str],
name2name_and_transform: Dict[(str, Tuple[str, WeightShareTransform])] = None,
):
"""Ties weights between self and module. For every weight name in
weight_names, if weight with the same name is found in self, it will
be tied
with a same weight from ``module``.
WARNING: Once weights are tied, updates to one module's weights
will affect the other module's weights.
If ``name2name_and_transform`` is provided then it will set weights
using name mapping and transform. For example, suppose ``object1.X = 3x5
weights`` and ``object2.Y = 5x3 weights``. Then these weights can be tied like
this:
.. code-block:: python
object1.tie_weights_with(object2, weight_names=['X'],
name2name_and_transform =
{ 'X': ('Y', WeightShareTransform.TRANSPOSE)})
Args:
module: with which module to tie weights
weight_names (List[str]): list of self weights' names
name2name_and_transform: mapping from name -> (name, transform)
"""
pass
def is_trainable(self) -> bool:
"""
Checks if NeuralModule is trainable.
A NeuralModule is trainable IFF it contains at least one trainable
weight
Returns:
True if module has trainable weights, False otherwise
"""
weights = self.get_weights()
if weights is None:
return False
for name, w in weights.items():
if w[1]:
return True
return False
@abstractmethod
def save_to(self, path: str):
"""Save module state to file.
Args:
path (string): path to the file where to save.
"""
pass
@abstractmethod
def restore_from(self, path: str):
"""Restore module's state from file.
Args:
path (string): path to where to restore from.
"""
pass
@abstractmethod
def freeze(self, weights: Set[str] = None):
"""Freeze weights
Args:
weights (set): set of weight names to freeze
If None, all weights are frozen.
"""
pass
@abstractmethod
def unfreeze(self, weights: Set[str] = None):
"""Unfreeze weights
Args:
weights (set): set of weight names to unfreeze
If None, all weights are unfrozen.
"""
pass
@property
def placement(self):
"""Module's placement. Currently CPU or GPU.
DataParallel and ModelParallel will come later.
Returns:
(DeviceType) Device where NM's weights are located
"""
return self._placement
@property
@deprecated(version=0.11)
def local_parameters(self) -> Optional[Dict]:
"""Get module's parameters
Returns:
module's parameters
"""
return self._init_params
# return self._local_parameters
@property
def unique_instance_id(self):
"""A unique instance id for this object
Returns:
A unique uuid which can be used to identify this object
"""
return self._uuid
@property
def factory(self):
""" Neural module factory which created this module
Returns: NeuralModuleFactory instance or None
"""
return self._factory
@property
@abstractmethod
def num_weights(self):
"""Number of module's weights
"""
pass
[end of nemo/core/neural_modules.py]
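`NeuralModule.__call__` above first validates that every keyword argument matches a declared input port name before comparing neural types. A stripped-down sketch of that name check, with port types reduced to plain strings instead of `NeuralType` instances (hypothetical data, not NeMo API calls):

```python
def check_port_names(kwargs, input_ports):
    """Raise on any keyword argument that is not a declared input port."""
    for port_name in kwargs:
        if port_name not in input_ports:
            raise KeyError("Wrong input port name: {0}".format(port_name))
    return True

# Placeholder port table; real modules map names to NeuralType objects.
ports = {"audio_signal": "AudioSignal", "length": "Length"}
print(check_port_names({"audio_signal": object()}, ports))  # True
```

In the real module the mismatch raises `NeuralPortNameMismatchError`; `KeyError` stands in for it here to keep the sketch self-contained.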
</code>
I need you to solve this issue by regenerating the full files in the code base that you would like to change. You can change as many files as you like. Please respond with a list of files and their revised contents in the following format.
<example>
[start of /src/this_file.py]
import os
def euclidean(a, b):
if b == 0:
return a
return euclidean(b, a % b)
[end of /src/this_file.py]
[start of /src/another_file.py]
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
x, y = x0, y0
sx = -1 if x0 > x1 else 1
sy = -1 if y0 > y1 else 1
if dx > dy:
err = dx / 2.0
while x != x1:
points.append((x, y))
err -= dy
if err < 0:
y += sy
err += dx
x += sx
else:
err = dy / 2.0
while y != y1:
points.append((x, y))
err -= dx
if err < 0:
x += sx
err += dy
y += sy
points.append((x, y))
return points
[end of /src/another_file.py]
</example>
|
NVIDIA/NeMo
|
ba4616f1f011d599de87f0cb3315605e715d402a
|
Jasper Encoder Export failed
The export of Jasper Encoder is failing. I am using the core API [deployment_export](https://nvidia.github.io/NeMo/api-docs/nemo.html#nemo.core.neural_factory.NeuralModuleFactory.deployment_export) like in the script: https://github.com/NVIDIA/NeMo/blob/403238f82d26879ba5fca53fbf75b3cdc70fb49b/scripts/export_jasper_to_onnx.py#L92
I believe the issue (as shown below) is that the `input_example` provided does not match the `output_example`.
```
/opt/conda/lib/python3.6/site-packages/torch/jit/__init__.py:1023: TracerWarning: Output nr 1. of the traced function does not match the corresponding output of the Python function. Detailed error:
Not within tolerance rtol=1e-05 atol=1e-05 at input[0, 870, 67] (0.6547648906707764 vs. 0.6546438932418823) and 812 other locations (0.00%)
check_tolerance, _force_outplace, True, _module_class)
[NeMo E 2020-02-23 19:10:07 actions:1023] module export failed for JasperEncoder with exception number of output names provided (2) exceeded number of outputs (1)
```
**What is the correct `input_example` and `output_example` to export JasperEncoder?**
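The error ("number of output names provided (2) exceeded number of outputs (1)") suggests the exporter is passing both output port names while the traced graph yields only one output. The fix in the patch below filters port names against each module's disabled deployment ports; a minimal pure-Python sketch of that filtering (names and data are illustrative, not NeMo internals):

```python
def filter_ports(port_names, disabled):
    """Keep only port names that belong in the exported signature."""
    return [name for name in port_names if name not in disabled]

# JasperEncoder disables its length-related ports for deployment:
output_names = ["outputs", "encoded_lengths"]
disabled_outputs = {"encoded_lengths"}
print(filter_ports(output_names, disabled_outputs))  # ['outputs']
```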
The full output can be seen here:
```
adrianaf@2a520c7abb1e:/tmp/NeMo$ ! python /tmp/NeMo/scripts/export_jasper_to_onnx.py --config /raid/datasets/asr/data/config_files/WSJ-test_acoustic_quartznet15x5.yaml --nn_encoder /home/adrianaf/projects/nemo_asr_app/models/quartznet15x5/JasperEncoder-STEP-247400.pt --nn_decoder /home/adrianaf/projects/nemo_asr_app/models/quartznet15x5/JasperDecoderForCTC-STEP-247400.pt --onnx_encoder /raid/datasets/asr/data/models/ONNX/pre-trained_encoder.onnx --onnx_decoder /raid/datasets/asr/data/models/ONNX/pre-trained_decoder.onnx
/opt/conda/lib/python3.6/site-packages/torchvision/io/_video_opt.py:17: UserWarning: video reader based on ffmpeg c++ ops not available
warnings.warn("video reader based on ffmpeg c++ ops not available")
/tmp/NeMo/nemo/collections/asr/audio_preprocessing.py:48: UserWarning: Could not import torchaudio. Some features might not work.
warnings.warn('Could not import torchaudio. Some features might not work.')
[NeMo I 2020-02-23 19:09:42 export_jasper_to_onnx:48] Loading config file...
[NeMo I 2020-02-23 19:09:42 export_jasper_to_onnx:52] Determining model shape...
[NeMo I 2020-02-23 19:09:42 export_jasper_to_onnx:60] Num encoder input features: 64
[NeMo I 2020-02-23 19:09:42 export_jasper_to_onnx:61] Num decoder input features: 1024
[NeMo W 2020-02-23 19:09:42 deprecated:68] Function ``_get_trainer`` is deprecated. It is going to be removed in the future version.
[NeMo I 2020-02-23 19:09:42 export_jasper_to_onnx:65] Initializing models...
[NeMo I 2020-02-23 19:09:45 export_jasper_to_onnx:76] Loading checkpoints...
[NeMo I 2020-02-23 19:09:45 export_jasper_to_onnx:91] Exporting encoder...
[NeMo W 2020-02-23 19:09:45 neural_factory:627] Turned off 170 masked convolutions
[NeMo I 2020-02-23 19:09:45 actions:937] Module is JasperEncoder. We are removing input and output length ports since they are not needed for deployment
[NeMo W 2020-02-23 19:09:46 deprecated:68] Function ``local_parameters`` is deprecated. It is going to be removed in the 0.11 version.
/opt/conda/lib/python3.6/site-packages/torch/jit/__init__.py:1023: TracerWarning: Output nr 1. of the traced function does not match the corresponding output of the Python function. Detailed error:
Not within tolerance rtol=1e-05 atol=1e-05 at input[0, 870, 67] (0.6547648906707764 vs. 0.6546438932418823) and 812 other locations (0.00%)
check_tolerance, _force_outplace, True, _module_class)
[NeMo E 2020-02-23 19:10:07 actions:1023] module export failed for JasperEncoder with exception number of output names provided (2) exceeded number of outputs (1)
[NeMo I 2020-02-23 19:10:07 export_jasper_to_onnx:98] Exporting decoder...
graph(%encoder_output : Float(1, 1024, 128),
%1 : Float(29, 1024, 1),
%2 : Float(29)):
%3 : Float(1, 29, 128) = onnx::Conv[dilations=[1], group=1, kernel_shape=[1], pads=[0, 0], strides=[1]](%encoder_output, %1, %2), scope: JasperDecoderForCTC/Sequential[decoder_layers]/Conv1d[0] # /opt/conda/lib/python3.6/site-packages/torch/nn/modules/conv.py:202:0
%4 : Float(1, 128, 29) = onnx::Transpose[perm=[0, 2, 1]](%3), scope: JasperDecoderForCTC # /tmp/NeMo/nemo/collections/asr/jasper.py:235:0
%output : Float(1, 128, 29) = onnx::LogSoftmax[axis=2](%4), scope: JasperDecoderForCTC # /opt/conda/lib/python3.6/site-packages/torch/nn/functional.py:1317:0
return (%output)
/opt/conda/lib/python3.6/site-packages/torch/onnx/utils.py:774: UserWarning: No names were found for specified dynamic axes of provided input.Automatically generated names will be applied to each dynamic axes of input encoder_output
'Automatically generated names will be applied to each dynamic axes of input {}'.format(key))
/opt/conda/lib/python3.6/site-packages/torch/onnx/utils.py:774: UserWarning: No names were found for specified dynamic axes of provided input.Automatically generated names will be applied to each dynamic axes of input output
'Automatically generated names will be applied to each dynamic axes of input {}'.format(key))
[NeMo I 2020-02-23 19:10:07 export_jasper_to_onnx:105] Export completed successfully.
```
|
2020-03-10T03:03:23Z
|
<patch>
<patch>
diff --git a/nemo/backends/pytorch/actions.py b/nemo/backends/pytorch/actions.py
--- a/nemo/backends/pytorch/actions.py
+++ b/nemo/backends/pytorch/actions.py
@@ -937,26 +937,16 @@ def __extract_dynamic_axes(port_name: str, ntype: NeuralType, dynamic_axes: defa
if axis.kind == AxisKind.Batch or axis.kind == AxisKind.Time:
dynamic_axes[port_name].append(ind)
- # This is a hack for Jasper to Jarvis export -- need re-design for this
- inputs_to_drop = set()
- outputs_to_drop = set()
- if type(module).__name__ == "JasperEncoder":
- logging.info(
- "Module is JasperEncoder. We are removing input and output length ports since they are not needed for "
- "deployment"
- )
- inputs_to_drop.add("length")
- outputs_to_drop.add("encoded_lengths")
-
+ # extract dynamic axes and remove unnecessary inputs/outputs
# for input_ports
for port_name, ntype in module.input_ports.items():
- if port_name in inputs_to_drop:
+ if port_name in module._disabled_deployment_input_ports:
input_names.remove(port_name)
continue
__extract_dynamic_axes(port_name, ntype, dynamic_axes)
# for output_ports
for port_name, ntype in module.output_ports.items():
- if port_name in outputs_to_drop:
+ if port_name in module._disabled_deployment_output_ports:
output_names.remove(port_name)
continue
__extract_dynamic_axes(port_name, ntype, dynamic_axes)
diff --git a/nemo/collections/asr/jasper.py b/nemo/collections/asr/jasper.py
--- a/nemo/collections/asr/jasper.py
+++ b/nemo/collections/asr/jasper.py
@@ -118,14 +118,14 @@ def output_ports(self):
}
@property
- def disabled_deployment_input_ports(self):
+ def _disabled_deployment_input_ports(self):
return set(["length"])
@property
- def disabled_deployment_output_ports(self):
+ def _disabled_deployment_output_ports(self):
return set(["encoded_lengths"])
- def prepare_for_deployment(self):
+ def _prepare_for_deployment(self):
m_count = 0
for m in self.modules():
if type(m).__name__ == "MaskedConv1d":
diff --git a/nemo/core/neural_factory.py b/nemo/core/neural_factory.py
--- a/nemo/core/neural_factory.py
+++ b/nemo/core/neural_factory.py
@@ -610,7 +610,7 @@ def deployment_export(
input_example: sometimes tracing will require input examples
output_example: Should match inference on input_example
"""
- module.prepare_for_deployment()
+ module._prepare_for_deployment()
return self._trainer.deployment_export(
module=module,
diff --git a/nemo/core/neural_modules.py b/nemo/core/neural_modules.py
--- a/nemo/core/neural_modules.py
+++ b/nemo/core/neural_modules.py
@@ -393,7 +393,7 @@ def output_ports(self) -> Optional[Dict[str, NeuralType]]:
"""
@property
- def disabled_deployment_input_ports(self) -> Optional[Set[str]]:
+ def _disabled_deployment_input_ports(self) -> Optional[Set[str]]:
"""Returns names of input ports that will not be included in an export
Returns:
@@ -402,7 +402,7 @@ def disabled_deployment_input_ports(self) -> Optional[Set[str]]:
return set([])
@property
- def disabled_deployment_output_ports(self) -> Optional[Set[str]]:
+ def _disabled_deployment_output_ports(self) -> Optional[Set[str]]:
"""Returns names of output ports that will not be included in an export
Returns:
@@ -410,7 +410,7 @@ def disabled_deployment_output_ports(self) -> Optional[Set[str]]:
"""
return set([])
- def prepare_for_deployment(self) -> None:
+ def _prepare_for_deployment(self) -> None:
"""Patch the module if required to prepare for deployment
"""
</patch>
</patch>
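The core idea of the patch above, where modules declare which ports to drop via overridable properties instead of a hard-coded `JasperEncoder` check, can be sketched outside NeMo as follows (class and port names come from the diff; everything else is a simplified stand-in, not the actual NeMo machinery):

```python
# Minimal sketch of the disabled-deployment-ports pattern from the patch.
# A base class exposes empty sets; subclasses override them, and the export
# helper filters port names generically instead of special-casing one module.

class NeuralModule:
    @property
    def _disabled_deployment_input_ports(self):
        return set()

    @property
    def _disabled_deployment_output_ports(self):
        return set()


class JasperEncoder(NeuralModule):
    # Length ports are not needed for deployment, per the original hack.
    @property
    def _disabled_deployment_input_ports(self):
        return {"length"}

    @property
    def _disabled_deployment_output_ports(self):
        return {"encoded_lengths"}


def export_port_names(module, input_ports, output_ports):
    """Drop the ports a module excludes from deployment export."""
    inputs = [p for p in input_ports if p not in module._disabled_deployment_input_ports]
    outputs = [p for p in output_ports if p not in module._disabled_deployment_output_ports]
    return inputs, outputs


inputs, outputs = export_port_names(
    JasperEncoder(), ["audio_signal", "length"], ["outputs", "encoded_lengths"]
)
print(inputs, outputs)  # ['audio_signal'] ['outputs']
```

This keeps the export path module-agnostic: any new module only has to override the two properties rather than extend an if-chain in `actions.py`.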
|
diff --git a/tests/unit/core/test_deploy_export.py b/tests/unit/core/test_deploy_export.py
--- a/tests/unit/core/test_deploy_export.py
+++ b/tests/unit/core/test_deploy_export.py
@@ -46,9 +46,11 @@
import nemo.collections.nlp.nm.trainables.common.token_classification_nm
from nemo import logging
+TRT_ONNX_DISABLED = False
+
# Check if the required libraries and runtimes are installed.
+# Only initialize GPU after this runner is activated.
try:
- # Only initialize GPU after this runner is activated.
import pycuda.autoinit
# This import causes pycuda to automatically manage CUDA context creation and cleanup.
@@ -63,16 +65,17 @@
)
from .tensorrt_runner import TensorRTRunnerV2
except:
- # Skip tests.
- pytestmark = pytest.mark.skip
+ TRT_ONNX_DISABLED = True
@pytest.mark.usefixtures("neural_factory")
class TestDeployExport(TestCase):
- def setUp(self):
- logging.setLevel(logging.WARNING)
- device = nemo.core.DeviceType.GPU
- self.nf = nemo.core.NeuralModuleFactory(backend=nemo.core.Backend.PyTorch, placement=device)
+ # def setUp(self):
+ # super().setUp()
+
+ # logging.setLevel(logging.WARNING)
+ # device = nemo.core.DeviceType.GPU
+ # self.nf = nemo.core.NeuralModuleFactory(backend=nemo.core.Backend.PyTorch, placement=device)
def __test_export_route(self, module, out_name, mode, input_example=None):
out = Path(out_name)
@@ -112,7 +115,13 @@ def __test_export_route(self, module, out_name, mode, input_example=None):
loader_cache = DataLoaderCache(data_loader)
profile_shapes = OrderedDict()
names = list(module.input_ports) + list(module.output_ports)
-
+ names = list(
+ filter(
+ lambda x: x
+ not in (module._disabled_deployment_input_ports | module._disabled_deployment_output_ports),
+ names,
+ )
+ )
if isinstance(input_example, tuple):
si = [tuple(input_example[i].shape) for i in range(len(input_example))]
elif isinstance(input_example, OrderedDict):
@@ -152,7 +161,7 @@ def __test_export_route(self, module, out_name, mode, input_example=None):
input_names = list(input_metadata.keys())
for i in range(len(input_names)):
input_name = input_names[i]
- if input_name in module.disabled_deployment_input_ports:
+ if input_name in module._disabled_deployment_input_ports:
continue
inputs[input_name] = (
input_example[input_name].cpu().numpy()
@@ -209,8 +218,8 @@ def __test_export_route(self, module, out_name, mode, input_example=None):
ort_inputs = ort_session.get_inputs()
for i in range(len(input_names)):
input_name = input_names[i]
- if input_name in module.disabled_deployment_input_ports:
- input_name = ort_inputs[i].name
+ if input_name in module._disabled_deployment_input_ports:
+ continue
inputs[input_name] = (
input_example[input_name].cpu().numpy()
if isinstance(input_example, OrderedDict)
@@ -263,9 +272,10 @@ def __test_export_route(self, module, out_name, mode, input_example=None):
def __test_export_route_all(self, module, out_name, input_example=None):
if input_example is not None:
- self.__test_export_route(
- module, out_name + '.trt.onnx', nemo.core.DeploymentFormat.TRTONNX, input_example=input_example
- )
+ if not TRT_ONNX_DISABLED:
+ self.__test_export_route(
+ module, out_name + '.trt.onnx', nemo.core.DeploymentFormat.TRTONNX, input_example=input_example
+ )
self.__test_export_route(module, out_name + '.onnx', nemo.core.DeploymentFormat.ONNX, input_example)
self.__test_export_route(module, out_name + '.pt', nemo.core.DeploymentFormat.PYTORCH, input_example)
self.__test_export_route(module, out_name + '.ts', nemo.core.DeploymentFormat.TORCHSCRIPT, input_example)
@@ -323,9 +333,7 @@ def test_jasper_encoder(self):
)
self.__test_export_route_all(
- module=jasper_encoder,
- out_name="jasper_encoder",
- input_example=(torch.randn(16, 64, 256).cuda(), torch.randn(256).cuda()),
+ module=jasper_encoder, out_name="jasper_encoder", input_example=torch.randn(16, 64, 256).cuda(),
)
@pytest.mark.unit
@@ -343,7 +351,5 @@ def test_quartz_encoder(self):
)
self.__test_export_route_all(
- module=jasper_encoder,
- out_name="quartz_encoder",
- input_example=(torch.randn(16, 64, 256).cuda(), torch.randint(20, (16,)).cuda()),
+ module=jasper_encoder, out_name="quartz_encoder", input_example=torch.randn(16, 64, 256).cuda(),
)
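The guard introduced by the test patch (record a flag when the TensorRT/pycuda imports fail, then skip only the TRT-ONNX route instead of the whole module) can be sketched like this; `fancy_runtime` is a hypothetical stand-in for the real imports:

```python
# Sketch of the TRT_ONNX_DISABLED pattern from the test patch: a failed
# optional import disables one export route rather than skipping every test.
TRT_ONNX_DISABLED = False
try:
    import fancy_runtime  # hypothetical stand-in for pycuda.autoinit / tensorrt
except Exception:
    TRT_ONNX_DISABLED = True


def export_routes(routes):
    """Return the export routes to run, dropping TRT-ONNX when unavailable."""
    if TRT_ONNX_DISABLED:
        routes = [r for r in routes if r != "trt.onnx"]
    return routes


print(export_routes(["trt.onnx", "onnx", "pt", "ts"]))
```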
|
1.0
| ||||
NVIDIA__NeMo-3632
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
./reinstall.sh crashes due to not being able to uninstall llvmlite
Starting off of `nemo:1.5.1` container, cloning the NeMo repo to a folder inside of it and calling `./reinstall.sh` fails with
```
ERROR: Cannot uninstall 'llvmlite'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall.
```
`pip install -e` on the other hand succeeds installing `nemo:1.7.0rc0` and `numpy:1.22.2`, the rest of the packages remain untouched.
It seems that `./reinstall.sh`, which worked fine a week or so ago when following the same procedure to upgrade to `nemo:1.6.0rc`, has redeveloped issue #841. The solution remains the same: first call
```
pip install --ignore-installed llvmlite
```
followed by `./reinstall.sh`. In this case, apart from `llvmlite`, the following packages are updated
```
ftfy-6.0.3 nemo-toolkit-1.7.0rc0 numba-0.55.1 pytorch-lightning-1.5.9 sacrebleu-2.0.0 setuptools-59.5.0
```
Interestingly `numpy` in this case is left at `1.21.5`.
</issue>
<code>
[start of README.rst]
|status| |documentation| |license| |lgtm_grade| |lgtm_alerts| |black|
.. |status| image:: http://www.repostatus.org/badges/latest/active.svg
:target: http://www.repostatus.org/#active
:alt: Project Status: Active – The project has reached a stable, usable state and is being actively developed.
.. |documentation| image:: https://readthedocs.com/projects/nvidia-nemo/badge/?version=main
:alt: Documentation
:target: https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/
.. |license| image:: https://img.shields.io/badge/License-Apache%202.0-brightgreen.svg
:target: https://github.com/NVIDIA/NeMo/blob/master/LICENSE
:alt: NeMo core license and license for collections in this repo
.. |lgtm_grade| image:: https://img.shields.io/lgtm/grade/python/g/NVIDIA/NeMo.svg?logo=lgtm&logoWidth=18
:target: https://lgtm.com/projects/g/NVIDIA/NeMo/context:python
:alt: Language grade: Python
.. |lgtm_alerts| image:: https://img.shields.io/lgtm/alerts/g/NVIDIA/NeMo.svg?logo=lgtm&logoWidth=18
:target: https://lgtm.com/projects/g/NVIDIA/NeMo/alerts/
:alt: Total alerts
.. |black| image:: https://img.shields.io/badge/code%20style-black-000000.svg
:target: https://github.com/psf/black
:alt: Code style: black
.. _main-readme:
**NVIDIA NeMo**
===============
Introduction
------------
NVIDIA NeMo is a conversational AI toolkit built for researchers working on automatic speech recognition (ASR), natural language processing (NLP), and text-to-speech synthesis (TTS).
The primary objective of NeMo is to help researchers from industry and academia to reuse prior work (code and pretrained models) and make it easier to create new `conversational AI models <https://developer.nvidia.com/conversational-ai#started>`_.
`Pre-trained NeMo models. <https://catalog.ngc.nvidia.com/models?query=nemo&orderBy=weightPopularDESC>`_
`Introductory video. <https://www.youtube.com/embed/wBgpMf_KQVw>`_
Key Features
------------
* Speech processing
* `Automatic Speech Recognition (ASR) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/intro.html>`_
* Supported models: Jasper, QuartzNet, CitriNet, Conformer-CTC, Conformer-Transducer, ContextNet, ...
* Supports CTC and Transducer/RNNT losses/decoders
* Beam Search decoding
* `Language Modelling for ASR <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/asr_language_modeling.html>`_: N-gram LM in fusion with Beam Search decoding, Neural Rescoring with Transformer
* Streaming and Buffered ASR (CTC/Transducer) - `Chunked Inference Examples <https://github.com/NVIDIA/NeMo/tree/main/examples/asr/asr_chunked_inference>`_
* `Speech Classification and Speech Command Recognition <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/speech_classification/intro.html>`_: MatchboxNet (Command Recognition)
* `Voice activity Detection (VAD) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/asr/speech_classification/models.html#marblenet-vad>`_: MarbleNet
* `Speaker Recognition <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/speaker_recognition/intro.html>`_: TitaNet, ECAPA_TDNN, SpeakerNet
* `Speaker Diarization <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/speaker_diarization/intro.html>`_: TitaNet, ECAPA_TDNN, SpeakerNet
* `Pretrained models on different languages. <https://ngc.nvidia.com/catalog/collections/nvidia:nemo_asr>`_: English, Spanish, German, Russian, Chinese, French, Italian, Polish, ...
* `NGC collection of pre-trained speech processing models. <https://ngc.nvidia.com/catalog/collections/nvidia:nemo_asr>`_
* Natural Language Processing
* `Compatible with Hugging Face Transformers and NVIDIA Megatron <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/megatron_finetuning.html>`_
* `Neural Machine Translation (NMT) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/machine_translation.html>`_
* `Punctuation and Capitalization <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/punctuation_and_capitalization.html>`_
* `Token classification (named entity recognition) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/token_classification.html>`_
* `Text classification <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/text_classification.html>`_
* `Joint Intent and Slot Classification <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/joint_intent_slot.html>`_
* `BERT pre-training <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/bert_pretraining.html>`_
* `Question answering <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/question_answering.html>`_
* `GLUE benchmark <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/glue_benchmark.html>`_
* `Information retrieval <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/information_retrieval.html>`_
* `Entity Linking <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/entity_linking.html>`_
* `Dialogue State Tracking <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/sgd_qa.html>`_
* `Neural Duplex Text Normalization <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/text_normalization.html>`_
* `Prompt Tuning <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/megatron_finetuning.html#prompt-tuning>`_
* `NGC collection of pre-trained NLP models. <https://ngc.nvidia.com/catalog/collections/nvidia:nemo_nlp>`_
* `Speech synthesis (TTS) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/tts/intro.html#>`_
* Spectrogram generation: Tacotron2, GlowTTS, TalkNet, FastPitch, FastSpeech2, Mixer-TTS, Mixer-TTS-X
* Vocoders: WaveGlow, SqueezeWave, UniGlow, MelGAN, HiFiGAN, UnivNet
* End-to-end speech generation: FastPitch_HifiGan_E2E, FastSpeech2_HifiGan_E2E
* `NGC collection of pre-trained TTS models. <https://ngc.nvidia.com/catalog/collections/nvidia:nemo_tts>`_
* `Tools <https://github.com/NVIDIA/NeMo/tree/main/tools>`_
* `Text Processing (text normalization and inverse text normalization) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/tools/text_processing_deployment.html>`_
* `CTC-Segmentation tool <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/tools/ctc_segmentation.html>`_
* `Speech Data Explorer <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/tools/speech_data_explorer.html>`_: a dash-based tool for interactive exploration of ASR/TTS datasets
Built for speed, NeMo can utilize NVIDIA's Tensor Cores and scale out training to multiple GPUs and multiple nodes.
Requirements
------------
1) Python 3.6, 3.7 or 3.8
2) Pytorch 1.10.0 or above
3) NVIDIA GPU for training
Documentation
-------------
.. |main| image:: https://readthedocs.com/projects/nvidia-nemo/badge/?version=main
:alt: Documentation Status
:scale: 100%
:target: https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/
.. |stable| image:: https://readthedocs.com/projects/nvidia-nemo/badge/?version=stable
:alt: Documentation Status
:scale: 100%
:target: https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/
+---------+-------------+------------------------------------------------------------------------------------------------------------------------------------------+
| Version | Status | Description |
+=========+=============+==========================================================================================================================================+
| Latest | |main| | `Documentation of the latest (i.e. main) branch. <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/>`_ |
+---------+-------------+------------------------------------------------------------------------------------------------------------------------------------------+
| Stable | |stable| | `Documentation of the stable (i.e. most recent release) branch. <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/>`_ |
+---------+-------------+------------------------------------------------------------------------------------------------------------------------------------------+
Tutorials
---------
A great way to start with NeMo is by checking `one of our tutorials <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/starthere/tutorials.html>`_.
Getting help with NeMo
----------------------
FAQ can be found on NeMo's `Discussions board <https://github.com/NVIDIA/NeMo/discussions>`_. You are welcome to ask questions or start discussions there.
Installation
------------
Pip
~~~
Use this installation mode if you want the latest released version.
.. code-block:: bash
apt-get update && apt-get install -y libsndfile1 ffmpeg
pip install Cython
pip install nemo_toolkit['all']
.. note::
Depending on the shell used, you may need to use ``"nemo_toolkit[all]"`` instead in the above command.
Pip from source
~~~~~~~~~~~~~~~
Use this installation mode if you want a version from a particular GitHub branch (e.g. main).
.. code-block:: bash
apt-get update && apt-get install -y libsndfile1 ffmpeg
pip install Cython
python -m pip install git+https://github.com/NVIDIA/NeMo.git@{BRANCH}#egg=nemo_toolkit[all]
From source
~~~~~~~~~~~
Use this installation mode if you are contributing to NeMo.
.. code-block:: bash
apt-get update && apt-get install -y libsndfile1 ffmpeg
git clone https://github.com/NVIDIA/NeMo
cd NeMo
./reinstall.sh
.. note::
If you only want the toolkit without additional conda-based dependencies, you may replace ``reinstall.sh``
with ``pip install -e .`` when your PWD is the root of the NeMo repository.
RNNT
~~~~
Note that RNNT requires numba to be installed from conda.
.. code-block:: bash
conda remove numba
pip uninstall numba
conda install -c conda-forge numba
Megatron GPT
~~~~~~~~~~~~
Megatron GPT training requires NVIDIA Apex to be installed.
.. code-block:: bash
git clone https://github.com/NVIDIA/apex
cd apex
git checkout c8bcc98176ad8c3a0717082600c70c907891f9cb
pip install -v --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" --global-option="--fast_layer_norm" ./
Docker containers:
~~~~~~~~~~~~~~~~~~
To build a nemo container with Dockerfile from a branch, please run
.. code-block:: bash
DOCKER_BUILDKIT=1 docker build -f Dockerfile -t nemo:latest .
If you chose to work with main branch, we recommend using NVIDIA's PyTorch container version 22.01-py3 and then installing from GitHub.
.. code-block:: bash
docker run --gpus all -it --rm -v <nemo_github_folder>:/NeMo --shm-size=8g \
-p 8888:8888 -p 6006:6006 --ulimit memlock=-1 --ulimit \
stack=67108864 --device=/dev/snd nvcr.io/nvidia/pytorch:22.01-py3
Examples
--------
Many examples can be found under `"Examples" <https://github.com/NVIDIA/NeMo/tree/stable/examples>`_ folder.
Contributing
------------
We welcome community contributions! Please refer to `CONTRIBUTING.md <https://github.com/NVIDIA/NeMo/blob/stable/CONTRIBUTING.md>`_ for the process.
Publications
------------
We provide an ever-growing list of publications that utilize the NeMo framework. Please refer to `PUBLICATIONS.md <https://github.com/NVIDIA/NeMo/blob/main/PUBLICATIONS.md>`_. We welcome the addition of your own articles to this list!
Citation
--------
.. code-block:: bash
@article{kuchaiev2019nemo,
title={Nemo: a toolkit for building ai applications using neural modules},
author={Kuchaiev, Oleksii and Li, Jason and Nguyen, Huyen and Hrinchuk, Oleksii and Leary, Ryan and Ginsburg, Boris and Kriman, Samuel and Beliaev, Stanislav and Lavrukhin, Vitaly and Cook, Jack and others},
journal={arXiv preprint arXiv:1909.09577},
year={2019}
}
License
-------
NeMo is under `Apache 2.0 license <https://github.com/NVIDIA/NeMo/blob/stable/LICENSE>`_.
[end of README.rst]
[start of /dev/null]
[end of /dev/null]
[start of nemo_text_processing/text_normalization/__init__.py]
# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from nemo.utils import logging
try:
import pynini
PYNINI_AVAILABLE = True
except (ModuleNotFoundError, ImportError):
logging.warning(
"`pynini` is not installed ! \n"
"Please run the `nemo_text_processing/setup.sh` script"
"prior to usage of this toolkit."
)
PYNINI_AVAILABLE = False
[end of nemo_text_processing/text_normalization/__init__.py]
[start of nemo_text_processing/text_normalization/en/graph_utils.py]
# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
# Copyright 2015 and onwards Google, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import string
from pathlib import Path
from typing import Dict
from nemo_text_processing.text_normalization.en.utils import get_abs_path
try:
import pynini
from pynini import Far
from pynini.export import export
from pynini.examples import plurals
from pynini.lib import byte, pynutil, utf8
NEMO_CHAR = utf8.VALID_UTF8_CHAR
NEMO_DIGIT = byte.DIGIT
NEMO_LOWER = pynini.union(*string.ascii_lowercase).optimize()
NEMO_UPPER = pynini.union(*string.ascii_uppercase).optimize()
NEMO_ALPHA = pynini.union(NEMO_LOWER, NEMO_UPPER).optimize()
NEMO_ALNUM = pynini.union(NEMO_DIGIT, NEMO_ALPHA).optimize()
NEMO_HEX = pynini.union(*string.hexdigits).optimize()
NEMO_NON_BREAKING_SPACE = u"\u00A0"
NEMO_SPACE = " "
NEMO_WHITE_SPACE = pynini.union(" ", "\t", "\n", "\r", u"\u00A0").optimize()
NEMO_NOT_SPACE = pynini.difference(NEMO_CHAR, NEMO_WHITE_SPACE).optimize()
NEMO_NOT_QUOTE = pynini.difference(NEMO_CHAR, r'"').optimize()
NEMO_PUNCT = pynini.union(*map(pynini.escape, string.punctuation)).optimize()
NEMO_GRAPH = pynini.union(NEMO_ALNUM, NEMO_PUNCT).optimize()
NEMO_SIGMA = pynini.closure(NEMO_CHAR)
delete_space = pynutil.delete(pynini.closure(NEMO_WHITE_SPACE))
insert_space = pynutil.insert(" ")
delete_extra_space = pynini.cross(pynini.closure(NEMO_WHITE_SPACE, 1), " ")
delete_preserve_order = pynini.closure(
pynutil.delete(" preserve_order: true")
| (pynutil.delete(" field_order: \"") + NEMO_NOT_QUOTE + pynutil.delete("\""))
)
suppletive = pynini.string_file(get_abs_path("data/suppletive.tsv"))
# _v = pynini.union("a", "e", "i", "o", "u")
_c = pynini.union(
"b", "c", "d", "f", "g", "h", "j", "k", "l", "m", "n", "p", "q", "r", "s", "t", "v", "w", "x", "y", "z"
)
_ies = NEMO_SIGMA + _c + pynini.cross("y", "ies")
_es = NEMO_SIGMA + pynini.union("s", "sh", "ch", "x", "z") + pynutil.insert("es")
_s = NEMO_SIGMA + pynutil.insert("s")
graph_plural = plurals._priority_union(
suppletive, plurals._priority_union(_ies, plurals._priority_union(_es, _s, NEMO_SIGMA), NEMO_SIGMA), NEMO_SIGMA
).optimize()
SINGULAR_TO_PLURAL = graph_plural
PLURAL_TO_SINGULAR = pynini.invert(graph_plural)
TO_LOWER = pynini.union(*[pynini.cross(x, y) for x, y in zip(string.ascii_uppercase, string.ascii_lowercase)])
TO_UPPER = pynini.invert(TO_LOWER)
PYNINI_AVAILABLE = True
except (ModuleNotFoundError, ImportError):
# Create placeholders
NEMO_CHAR = None
NEMO_DIGIT = None
NEMO_LOWER = None
NEMO_UPPER = None
NEMO_ALPHA = None
NEMO_ALNUM = None
NEMO_HEX = None
NEMO_NON_BREAKING_SPACE = u"\u00A0"
NEMO_SPACE = " "
NEMO_WHITE_SPACE = None
NEMO_NOT_SPACE = None
NEMO_NOT_QUOTE = None
NEMO_PUNCT = None
NEMO_GRAPH = None
NEMO_SIGMA = None
delete_space = None
insert_space = None
delete_extra_space = None
delete_preserve_order = None
suppletive = None
# _v = pynini.union("a", "e", "i", "o", "u")
_c = None
_ies = None
_es = None
_s = None
graph_plural = None
SINGULAR_TO_PLURAL = None
PLURAL_TO_SINGULAR = None
TO_LOWER = None
TO_UPPER = None
PYNINI_AVAILABLE = False
def generator_main(file_name: str, graphs: Dict[str, 'pynini.FstLike']):
"""
Exports graph as OpenFst finite state archive (FAR) file with given file name and rule name.
Args:
file_name: exported file name
graphs: Mapping of a rule name and Pynini WFST graph to be exported
"""
exporter = export.Exporter(file_name)
for rule, graph in graphs.items():
exporter[rule] = graph.optimize()
exporter.close()
print(f'Created {file_name}')
def get_plurals(fst):
"""
Given singular returns plurals
Args:
fst: Fst
Returns plurals to given singular forms
"""
return SINGULAR_TO_PLURAL @ fst
def get_singulars(fst):
"""
Given plural returns singulars
Args:
fst: Fst
Returns singulars to given plural forms
"""
return PLURAL_TO_SINGULAR @ fst
def convert_space(fst) -> 'pynini.FstLike':
"""
Converts space to nonbreaking space.
Used only in tagger grammars for transducing token values within quotes, e.g. name: "hello kitty"
This is making transducer significantly slower, so only use when there could be potential spaces within quotes, otherwise leave it.
Args:
fst: input fst
Returns output fst where breaking spaces are converted to non breaking spaces
"""
return fst @ pynini.cdrewrite(pynini.cross(NEMO_SPACE, NEMO_NON_BREAKING_SPACE), "", "", NEMO_SIGMA)
class GraphFst:
"""
Base class for all grammar fsts.
Args:
name: name of grammar class
kind: either 'classify' or 'verbalize'
deterministic: if True will provide a single transduction option,
for False multiple transduction are generated (used for audio-based normalization)
"""
def __init__(self, name: str, kind: str, deterministic: bool = True):
self.name = name
self.kind = kind
self._fst = None
self.deterministic = deterministic
self.far_path = Path(os.path.dirname(__file__) + '/grammars/' + kind + '/' + name + '.far')
if self.far_exist():
self._fst = Far(self.far_path, mode="r", arc_type="standard", far_type="default").get_fst()
def far_exist(self) -> bool:
"""
Returns true if FAR can be loaded
"""
return self.far_path.exists()
@property
def fst(self) -> 'pynini.FstLike':
return self._fst
@fst.setter
def fst(self, fst):
self._fst = fst
def add_tokens(self, fst) -> 'pynini.FstLike':
"""
Wraps class name around to given fst
Args:
fst: input fst
Returns:
Fst: fst
"""
return pynutil.insert(f"{self.name} {{ ") + fst + pynutil.insert(" }")
def delete_tokens(self, fst) -> 'pynini.FstLike':
"""
Deletes class name wrap around output of given fst
Args:
fst: input fst
Returns:
Fst: fst
"""
res = (
pynutil.delete(f"{self.name}")
+ delete_space
+ pynutil.delete("{")
+ delete_space
+ fst
+ delete_space
+ pynutil.delete("}")
)
return res @ pynini.cdrewrite(pynini.cross(u"\u00A0", " "), "", "", NEMO_SIGMA)
[end of nemo_text_processing/text_normalization/en/graph_utils.py]
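The pluralization priority that `graph_plural` composes above (suppletive forms first, then consonant+`y` to `ies`, then the `-es` endings, then a plain `-s`) can be rendered in plain Python; the suppletive table here is a tiny hypothetical stand-in for `data/suppletive.tsv`:

```python
# Plain-Python sketch of the priority union encoded by graph_plural.
SUPPLETIVE = {"child": "children", "foot": "feet", "mouse": "mice"}
CONSONANTS = set("bcdfghjklmnpqrstvwxyz")  # mirrors _c above, y included


def pluralize(word: str) -> str:
    if word in SUPPLETIVE:                      # highest priority: irregulars
        return SUPPLETIVE[word]
    if word.endswith("y") and len(word) > 1 and word[-2] in CONSONANTS:
        return word[:-1] + "ies"                # city -> cities
    if word.endswith(("s", "sh", "ch", "x", "z")):
        return word + "es"                      # box -> boxes
    return word + "s"                           # cat -> cats


print(pluralize("city"), pluralize("box"), pluralize("foot"), pluralize("cat"))
# cities boxes feet cats
```

Unlike the FST, this sketch is one-directional; the module gets the singularizing direction for free via `pynini.invert`.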
[start of nemo_text_processing/text_normalization/en/taggers/punctuation.py]
# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
# Copyright 2015 and onwards Google, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
from unicodedata import category
from nemo_text_processing.text_normalization.en.graph_utils import GraphFst
try:
import pynini
from pynini.lib import pynutil
PYNINI_AVAILABLE = True
except (ModuleNotFoundError, ImportError):
PYNINI_AVAILABLE = False
class PunctuationFst(GraphFst):
"""
Finite state transducer for classifying punctuation
e.g. a, -> tokens { name: "a" } tokens { name: "," }
Args:
deterministic: if True will provide a single transduction option,
for False multiple transduction are generated (used for audio-based normalization)
"""
def __init__(self, deterministic: bool = True):
super().__init__(name="punctuation", kind="classify", deterministic=deterministic)
s = "!#%&\'()*+,-./:;<=>?@^_`{|}~\""
punct_unicode = [chr(i) for i in range(sys.maxunicode) if category(chr(i)).startswith("P")]
punct_unicode.remove('[')
punct_unicode.remove(']')
punct = pynini.union(*s) | pynini.union(*punct_unicode)
self.graph = punct
self.fst = (pynutil.insert("name: \"") + self.graph + pynutil.insert("\"")).optimize()
[end of nemo_text_processing/text_normalization/en/taggers/punctuation.py]
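The way `PunctuationFst` gathers its symbol set, every Unicode code point whose general category starts with "P" (punctuation), minus the square brackets the grammar reserves, can be tried standalone without pynini:

```python
# Standalone look at the Unicode punctuation enumeration used by
# PunctuationFst above (no pynini required).
import sys
from unicodedata import category

punct_unicode = [chr(i) for i in range(sys.maxunicode) if category(chr(i)).startswith("P")]
punct_unicode.remove("[")
punct_unicode.remove("]")

# Commas are category Po, so they are included; letters are not.
print("," in punct_unicode, "a" in punct_unicode)  # True False
```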
[start of nemo_text_processing/text_normalization/en/verbalizers/whitelist.py]
# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
# Copyright 2015 and onwards Google, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from nemo_text_processing.text_normalization.en.graph_utils import NEMO_CHAR, NEMO_SIGMA, GraphFst, delete_space
try:
import pynini
from pynini.lib import pynutil
PYNINI_AVAILABLE = True
except (ModuleNotFoundError, ImportError):
PYNINI_AVAILABLE = False
class WhiteListFst(GraphFst):
"""
Finite state transducer for verbalizing whitelist
e.g. tokens { name: "misses" } } -> misses
Args:
deterministic: if True will provide a single transduction option,
for False multiple transduction are generated (used for audio-based normalization)
"""
def __init__(self, deterministic: bool = True):
super().__init__(name="whitelist", kind="verbalize", deterministic=deterministic)
graph = (
pynutil.delete("name:")
+ delete_space
+ pynutil.delete("\"")
+ pynini.closure(NEMO_CHAR - " ", 1)
+ pynutil.delete("\"")
)
graph = graph @ pynini.cdrewrite(pynini.cross(u"\u00A0", " "), "", "", NEMO_SIGMA)
self.fst = graph.optimize()
[end of nemo_text_processing/text_normalization/en/verbalizers/whitelist.py]
[start of nemo_text_processing/text_normalization/en/verbalizers/word.py]
# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
# Copyright 2015 and onwards Google, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from nemo_text_processing.text_normalization.en.graph_utils import NEMO_CHAR, NEMO_SIGMA, GraphFst, delete_space
try:
import pynini
from pynini.lib import pynutil
PYNINI_AVAILABLE = True
except (ModuleNotFoundError, ImportError):
PYNINI_AVAILABLE = False
class WordFst(GraphFst):
"""
Finite state transducer for verbalizing word
e.g. tokens { name: "sleep" } -> sleep
Args:
deterministic: if True will provide a single transduction option,
            if False, multiple transduction options are generated (used for audio-based normalization)
"""
def __init__(self, deterministic: bool = True):
super().__init__(name="word", kind="verbalize", deterministic=deterministic)
chars = pynini.closure(NEMO_CHAR - " ", 1)
char = pynutil.delete("name:") + delete_space + pynutil.delete("\"") + chars + pynutil.delete("\"")
graph = char @ pynini.cdrewrite(pynini.cross(u"\u00A0", " "), "", "", NEMO_SIGMA)
self.fst = graph.optimize()
[end of nemo_text_processing/text_normalization/en/verbalizers/word.py]
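The `WordFst` (and the nearly identical `WhiteListFst`) verbalizer simply strips the `name: "..."` wrapper from a serialized token and maps the non-breaking space back to a regular space. A minimal stand-alone sketch of that transformation, using a plain regex instead of pynini (the helper name `verbalize_word` is hypothetical, chosen for illustration):

```python
import re

# Illustrative only: mimics the effect of the pynini-based WordFst
# verbalizer above. Strips the `name: "..."` wrapper from a serialized
# token and maps the non-breaking space (U+00A0) back to a plain space.
TOKEN_RE = re.compile(r'name:\s*"([^"]+)"')

def verbalize_word(serialized_token: str) -> str:
    match = TOKEN_RE.search(serialized_token)
    if match is None:
        raise ValueError(f"Not a word token: {serialized_token!r}")
    return match.group(1).replace("\u00a0", " ")

print(verbalize_word('name: "sleep"'))  # -> sleep
```

The real implementation composes the extraction with `pynini.cdrewrite` so it stays an FST and can be combined with the other verbalizers; the regex version only shows the input/output behavior.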
[start of nemo_text_processing/text_normalization/normalize.py]
# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import itertools
import os
import re
from argparse import ArgumentParser
from collections import OrderedDict
from math import factorial
from typing import Dict, List, Union
from nemo_text_processing.text_normalization.data_loader_utils import get_installation_msg, pre_process
from nemo_text_processing.text_normalization.token_parser import PRESERVE_ORDER_KEY, TokenParser
from tqdm import tqdm
try:
import pynini
PYNINI_AVAILABLE = True
except (ModuleNotFoundError, ImportError):
PYNINI_AVAILABLE = False
try:
from nemo.collections.common.tokenizers.moses_tokenizers import MosesProcessor
from nemo.collections.nlp.data.text_normalization.utils import post_process_punct
NLP_AVAILABLE = True
except (ModuleNotFoundError, ImportError):
NLP_AVAILABLE = False
SPACE_DUP = re.compile(' {2,}')
class Normalizer:
"""
Normalizer class that converts text from written to spoken form.
Useful for TTS preprocessing.
Args:
input_case: expected input capitalization
        lang: language specifying the TN rules, by default: English
        deterministic: if True will provide a single transduction option,
            if False, multiple transduction options are generated (used for audio-based normalization)
cache_dir: path to a dir with .far grammar file. Set to None to avoid using cache.
overwrite_cache: set to True to overwrite .far files
whitelist: path to a file with whitelist replacements
"""
def __init__(
self,
input_case: str,
lang: str = 'en',
deterministic: bool = True,
cache_dir: str = None,
overwrite_cache: bool = False,
whitelist: str = None,
):
assert input_case in ["lower_cased", "cased"]
if not PYNINI_AVAILABLE:
raise ImportError(get_installation_msg())
if lang == 'en' and deterministic:
from nemo_text_processing.text_normalization.en.taggers.tokenize_and_classify import ClassifyFst
from nemo_text_processing.text_normalization.en.verbalizers.verbalize_final import VerbalizeFinalFst
elif lang == 'en' and not deterministic:
from nemo_text_processing.text_normalization.en.taggers.tokenize_and_classify_with_audio import ClassifyFst
from nemo_text_processing.text_normalization.en.verbalizers.verbalize_final import VerbalizeFinalFst
elif lang == 'ru':
            # Ru TN only supports non-deterministic cases and produces multiple normalization options;
            # use normalize_with_audio.py
from nemo_text_processing.text_normalization.ru.taggers.tokenize_and_classify import ClassifyFst
from nemo_text_processing.text_normalization.ru.verbalizers.verbalize_final import VerbalizeFinalFst
elif lang == 'de':
            # De TN only supports non-deterministic cases and produces multiple normalization options;
            # use normalize_with_audio.py
            from nemo_text_processing.text_normalization.de.taggers.tokenize_and_classify import ClassifyFst
            from nemo_text_processing.text_normalization.de.verbalizers.verbalize_final import VerbalizeFinalFst
        else:
            raise NotImplementedError(f"Language '{lang}' is not supported")
self.tagger = ClassifyFst(
input_case=input_case,
deterministic=deterministic,
cache_dir=cache_dir,
overwrite_cache=overwrite_cache,
whitelist=whitelist,
)
self.verbalizer = VerbalizeFinalFst(deterministic=deterministic)
self.parser = TokenParser()
self.lang = lang
if NLP_AVAILABLE:
self.processor = MosesProcessor(lang_id=lang)
else:
self.processor = None
print("NeMo NLP is not available. Moses de-tokenization will be skipped.")
def normalize_list(self, texts: List[str], verbose=False, punct_post_process: bool = False) -> List[str]:
"""
NeMo text normalizer
Args:
texts: list of input strings
verbose: whether to print intermediate meta information
Returns converted list input strings
"""
res = []
        for input_text in tqdm(texts):
            try:
                text = self.normalize(input_text, verbose=verbose, punct_post_process=punct_post_process)
            except Exception:
                print(f"Error normalizing: {input_text}")
                raise
res.append(text)
return res
def _estimate_number_of_permutations_in_nested_dict(
self, token_group: Dict[str, Union[OrderedDict, str, bool]]
) -> int:
num_perms = 1
for k, inner in token_group.items():
if isinstance(inner, dict):
num_perms *= self._estimate_number_of_permutations_in_nested_dict(inner)
num_perms *= factorial(len(token_group))
return num_perms
def _split_tokens_to_reduce_number_of_permutations(
self, tokens: List[dict], max_number_of_permutations_per_split: int = 729
) -> List[List[dict]]:
"""
Splits a sequence of tokens in a smaller sequences of tokens in a way that maximum number of composite
tokens permutations does not exceed ``max_number_of_permutations_per_split``.
For example,
.. code-block:: python
tokens = [
{"tokens": {"date": {"year": "twenty eighteen", "month": "december", "day": "thirty one"}}},
{"tokens": {"date": {"year": "twenty eighteen", "month": "january", "day": "eight"}}},
]
split = normalizer._split_tokens_to_reduce_number_of_permutations(
tokens, max_number_of_permutations_per_split=6
)
assert split == [
[{"tokens": {"date": {"year": "twenty eighteen", "month": "december", "day": "thirty one"}}}],
[{"tokens": {"date": {"year": "twenty eighteen", "month": "january", "day": "eight"}}}],
]
Date tokens contain 3 items each which gives 6 permutations for every date. Since there are 2 dates, total
number of permutations would be ``6 * 6 == 36``. Parameter ``max_number_of_permutations_per_split`` equals 6,
so input sequence of tokens is split into 2 smaller sequences.
Args:
tokens (:obj:`List[dict]`): a list of dictionaries, possibly nested.
            max_number_of_permutations_per_split (:obj:`int`, `optional`, defaults to :obj:`729`): a maximum number
of permutations which can be generated from input sequence of tokens.
Returns:
:obj:`List[List[dict]]`: a list of smaller sequences of tokens resulting from ``tokens`` split.
"""
splits = []
prev_end_of_split = 0
current_number_of_permutations = 1
for i, token_group in enumerate(tokens):
n = self._estimate_number_of_permutations_in_nested_dict(token_group)
if n * current_number_of_permutations > max_number_of_permutations_per_split:
splits.append(tokens[prev_end_of_split:i])
prev_end_of_split = i
current_number_of_permutations = 1
if n > max_number_of_permutations_per_split:
raise ValueError(
f"Could not split token list with respect to condition that every split can generate number of "
f"permutations less or equal to "
f"`max_number_of_permutations_per_split={max_number_of_permutations_per_split}`. "
f"There is an unsplittable token group that generates more than "
f"{max_number_of_permutations_per_split} permutations. Try to increase "
f"`max_number_of_permutations_per_split` parameter."
)
current_number_of_permutations *= n
splits.append(tokens[prev_end_of_split:])
assert sum([len(s) for s in splits]) == len(tokens)
return splits
def normalize(
self, text: str, verbose: bool = False, punct_pre_process: bool = False, punct_post_process: bool = False
) -> str:
"""
Main function. Normalizes tokens from written to spoken form
e.g. 12 kg -> twelve kilograms
Args:
text: string that may include semiotic classes
verbose: whether to print intermediate meta information
punct_pre_process: whether to perform punctuation pre-processing, for example, [25] -> [ 25 ]
punct_post_process: whether to normalize punctuation
Returns: spoken form
"""
original_text = text
if punct_pre_process:
text = pre_process(text)
text = text.strip()
if not text:
if verbose:
print(text)
return text
text = pynini.escape(text)
tagged_lattice = self.find_tags(text)
tagged_text = self.select_tag(tagged_lattice)
if verbose:
print(tagged_text)
self.parser(tagged_text)
tokens = self.parser.parse()
split_tokens = self._split_tokens_to_reduce_number_of_permutations(tokens)
output = ""
for s in split_tokens:
tags_reordered = self.generate_permutations(s)
verbalizer_lattice = None
for tagged_text in tags_reordered:
tagged_text = pynini.escape(tagged_text)
verbalizer_lattice = self.find_verbalizer(tagged_text)
if verbalizer_lattice.num_states() != 0:
break
if verbalizer_lattice is None:
raise ValueError(f"No permutations were generated from tokens {s}")
output += ' ' + self.select_verbalizer(verbalizer_lattice)
output = SPACE_DUP.sub(' ', output[1:])
if punct_post_process:
# do post-processing based on Moses detokenizer
if self.processor:
output = self.processor.moses_detokenizer.detokenize([output], unescape=False)
output = post_process_punct(input=original_text, normalized_text=output)
else:
print("NEMO_NLP collection is not available: skipping punctuation post_processing")
return output
def _permute(self, d: OrderedDict) -> List[str]:
"""
Creates reorderings of dictionary elements and serializes as strings
Args:
d: (nested) dictionary of key value pairs
Return permutations of different string serializations of key value pairs
"""
l = []
if PRESERVE_ORDER_KEY in d.keys():
d_permutations = [d.items()]
else:
d_permutations = itertools.permutations(d.items())
for perm in d_permutations:
subl = [""]
for k, v in perm:
if isinstance(v, str):
subl = ["".join(x) for x in itertools.product(subl, [f"{k}: \"{v}\" "])]
elif isinstance(v, OrderedDict):
rec = self._permute(v)
subl = ["".join(x) for x in itertools.product(subl, [f" {k} {{ "], rec, [f" }} "])]
elif isinstance(v, bool):
subl = ["".join(x) for x in itertools.product(subl, [f"{k}: true "])]
else:
                    raise ValueError(f"Unsupported value type {type(v)} for key '{k}'")
l.extend(subl)
return l
def generate_permutations(self, tokens: List[dict]):
"""
Generates permutations of string serializations of list of dictionaries
Args:
tokens: list of dictionaries
Returns string serialization of list of dictionaries
"""
def _helper(prefix: str, tokens: List[dict], idx: int):
"""
Generates permutations of string serializations of given dictionary
Args:
tokens: list of dictionaries
prefix: prefix string
idx: index of next dictionary
Returns string serialization of dictionary
"""
if idx == len(tokens):
yield prefix
return
token_options = self._permute(tokens[idx])
for token_option in token_options:
yield from _helper(prefix + token_option, tokens, idx + 1)
return _helper("", tokens, 0)
def find_tags(self, text: str) -> 'pynini.FstLike':
"""
Given text use tagger Fst to tag text
Args:
text: sentence
Returns: tagged lattice
"""
lattice = text @ self.tagger.fst
return lattice
def select_tag(self, lattice: 'pynini.FstLike') -> str:
"""
Given tagged lattice return shortest path
Args:
tagged_text: tagged text
Returns: shortest path
"""
tagged_text = pynini.shortestpath(lattice, nshortest=1, unique=True).string()
return tagged_text
def find_verbalizer(self, tagged_text: str) -> 'pynini.FstLike':
"""
Given tagged text creates verbalization lattice
This is context-independent.
Args:
tagged_text: input text
Returns: verbalized lattice
"""
lattice = tagged_text @ self.verbalizer.fst
return lattice
def select_verbalizer(self, lattice: 'pynini.FstLike') -> str:
"""
Given verbalized lattice return shortest path
Args:
lattice: verbalization lattice
Returns: shortest path
"""
output = pynini.shortestpath(lattice, nshortest=1, unique=True).string()
return output
def parse_args():
parser = ArgumentParser()
parser.add_argument("input_string", help="input string", type=str)
parser.add_argument("--language", help="language", choices=["en", "de"], default="en", type=str)
parser.add_argument(
"--input_case", help="input capitalization", choices=["lower_cased", "cased"], default="cased", type=str
)
parser.add_argument("--verbose", help="print info for debugging", action='store_true')
parser.add_argument(
"--punct_post_process", help="set to True to enable punctuation post processing", action="store_true"
)
parser.add_argument(
"--punct_pre_process", help="set to True to enable punctuation pre processing", action="store_true"
)
parser.add_argument("--overwrite_cache", help="set to True to re-create .far grammar files", action="store_true")
    parser.add_argument("--whitelist", help="path to a file with whitelist replacements", default=None, type=str)
parser.add_argument(
"--cache_dir",
help="path to a dir with .far grammar file. Set to None to avoid using cache",
default=None,
type=str,
)
return parser.parse_args()
if __name__ == "__main__":
args = parse_args()
whitelist = os.path.abspath(args.whitelist) if args.whitelist else None
normalizer = Normalizer(
input_case=args.input_case,
cache_dir=args.cache_dir,
overwrite_cache=args.overwrite_cache,
whitelist=whitelist,
lang=args.language,
)
print(
normalizer.normalize(
args.input_string,
verbose=args.verbose,
punct_pre_process=args.punct_pre_process,
punct_post_process=args.punct_post_process,
)
)
[end of nemo_text_processing/text_normalization/normalize.py]
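The permutation-budget logic in `_estimate_number_of_permutations_in_nested_dict` and `_split_tokens_to_reduce_number_of_permutations` can be exercised stand-alone. The sketch below re-implements it outside the class (function names `estimate_permutations` and `split_tokens` are mine, and the pynini-dependent parts are omitted); it reproduces the docstring example where each date token yields `3! = 6` orderings:

```python
from math import factorial

# Stand-alone re-implementation of the permutation-budget logic used by
# Normalizer._split_tokens_to_reduce_number_of_permutations (sketch for
# illustration only; the class methods above are the actual implementation).
def estimate_permutations(token_group: dict) -> int:
    # Product of permutation counts of nested dicts, times the number of
    # orderings of the keys at this level.
    num_perms = 1
    for inner in token_group.values():
        if isinstance(inner, dict):
            num_perms *= estimate_permutations(inner)
    return num_perms * factorial(len(token_group))

def split_tokens(tokens, max_perms=729):
    # Greedily cut the token sequence whenever adding the next group would
    # push the running permutation product over the budget.
    splits, start, current = [], 0, 1
    for i, group in enumerate(tokens):
        n = estimate_permutations(group)
        if n * current > max_perms:
            splits.append(tokens[start:i])
            start, current = i, 1
        current *= n
    splits.append(tokens[start:])
    return splits

dates = [
    {"tokens": {"date": {"year": "twenty eighteen", "month": "december", "day": "thirty one"}}},
    {"tokens": {"date": {"year": "twenty eighteen", "month": "january", "day": "eight"}}},
]
# Each date yields 3! = 6 orderings, so with a budget of 6 the two dates
# cannot share one split; with the default budget of 729 they can.
print(split_tokens(dates, max_perms=6))
```

Note the sketch drops the unsplittable-token error handling that the real method raises when a single group already exceeds the budget.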
[start of nemo_text_processing/text_normalization/normalize_with_audio.py]
# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
import os
import time
from argparse import ArgumentParser
from glob import glob
from typing import List, Tuple
from joblib import Parallel, delayed
from nemo_text_processing.text_normalization.normalize import Normalizer
from tqdm import tqdm
try:
from nemo.collections.asr.metrics.wer import word_error_rate
from nemo.collections.asr.models import ASRModel
ASR_AVAILABLE = True
except (ModuleNotFoundError, ImportError):
ASR_AVAILABLE = False
try:
import pynini
from pynini.lib import rewrite
PYNINI_AVAILABLE = True
except (ModuleNotFoundError, ImportError):
PYNINI_AVAILABLE = False
try:
from nemo.collections.nlp.data.text_normalization.utils import post_process_punct
from nemo_text_processing.text_normalization.data_loader_utils import pre_process
NLP_AVAILABLE = True
except (ModuleNotFoundError, ImportError):
NLP_AVAILABLE = False
"""
The script provides multiple normalization options and chooses the best one that minimizes CER of the ASR output
(most of the semiotic classes use deterministic=False flag).
To run this script with a .json manifest file, the manifest file should contain the following fields:
"audio_data" - path to the audio file
"text" - raw text
"pred_text" - ASR model prediction
See https://github.com/NVIDIA/NeMo/blob/main/examples/asr/transcribe_speech.py on how to add ASR predictions
When the manifest is ready, run:
python normalize_with_audio.py \
--audio_data PATH/TO/MANIFEST.JSON \
--language en
To run with a single audio file, specify path to audio and text with:
python normalize_with_audio.py \
--audio_data PATH/TO/AUDIO.WAV \
--language en \
--text raw text OR PATH/TO/.TXT/FILE
--model QuartzNet15x5Base-En \
--verbose
To see possible normalization options for a text input without an audio file (could be used for debugging), run:
python normalize_with_audio.py --text "RAW TEXT"
Specify `--cache_dir` to generate .far grammars once and re-use them for faster inference
"""
class NormalizerWithAudio(Normalizer):
"""
Normalizer class that converts text from written to spoken form.
Useful for TTS preprocessing.
Args:
input_case: expected input capitalization
lang: language
cache_dir: path to a dir with .far grammar file. Set to None to avoid using cache.
overwrite_cache: set to True to overwrite .far files
whitelist: path to a file with whitelist replacements
"""
def __init__(
self,
input_case: str,
lang: str = 'en',
cache_dir: str = None,
overwrite_cache: bool = False,
whitelist: str = None,
):
super().__init__(
input_case=input_case,
lang=lang,
deterministic=False,
cache_dir=cache_dir,
overwrite_cache=overwrite_cache,
whitelist=whitelist,
)
    def normalize(self, text: str, n_tagged: int, punct_post_process: bool = True, verbose: bool = False,) -> 'Set[str]':
"""
Main function. Normalizes tokens from written to spoken form
e.g. 12 kg -> twelve kilograms
Args:
text: string that may include semiotic classes
n_tagged: number of tagged options to consider, -1 - to get all possible tagged options
punct_post_process: whether to normalize punctuation
verbose: whether to print intermediate meta information
Returns:
normalized text options (usually there are multiple ways of normalizing a given semiotic class)
"""
original_text = text
if self.lang == "en":
text = pre_process(text)
text = text.strip()
if not text:
if verbose:
print(text)
return text
text = pynini.escape(text)
if n_tagged == -1:
if self.lang == "en":
try:
tagged_texts = rewrite.rewrites(text, self.tagger.fst_no_digits)
except pynini.lib.rewrite.Error:
tagged_texts = rewrite.rewrites(text, self.tagger.fst)
else:
tagged_texts = rewrite.rewrites(text, self.tagger.fst)
else:
if self.lang == "en":
try:
tagged_texts = rewrite.top_rewrites(text, self.tagger.fst_no_digits, nshortest=n_tagged)
except pynini.lib.rewrite.Error:
tagged_texts = rewrite.top_rewrites(text, self.tagger.fst, nshortest=n_tagged)
else:
tagged_texts = rewrite.top_rewrites(text, self.tagger.fst, nshortest=n_tagged)
# non-deterministic Eng normalization uses tagger composed with verbalizer, no permutation in between
if self.lang == "en":
normalized_texts = tagged_texts
else:
normalized_texts = []
for tagged_text in tagged_texts:
self._verbalize(tagged_text, normalized_texts, verbose=verbose)
if len(normalized_texts) == 0:
            raise ValueError(f"No normalization options were generated for: {original_text}")
if punct_post_process:
# do post-processing based on Moses detokenizer
if self.processor:
normalized_texts = [self.processor.detokenize([t]) for t in normalized_texts]
normalized_texts = [
post_process_punct(input=original_text, normalized_text=t) for t in normalized_texts
]
normalized_texts = set(normalized_texts)
return normalized_texts
def _verbalize(self, tagged_text: str, normalized_texts: List[str], verbose: bool = False):
"""
Verbalizes tagged text
Args:
tagged_text: text with tags
normalized_texts: list of possible normalization options
verbose: if true prints intermediate classification results
"""
def get_verbalized_text(tagged_text):
return rewrite.rewrites(tagged_text, self.verbalizer.fst)
self.parser(tagged_text)
tokens = self.parser.parse()
tags_reordered = self.generate_permutations(tokens)
for tagged_text_reordered in tags_reordered:
try:
tagged_text_reordered = pynini.escape(tagged_text_reordered)
normalized_texts.extend(get_verbalized_text(tagged_text_reordered))
if verbose:
print(tagged_text_reordered)
except pynini.lib.rewrite.Error:
continue
def select_best_match(
self,
normalized_texts: List[str],
input_text: str,
pred_text: str,
verbose: bool = False,
remove_punct: bool = False,
):
"""
Selects the best normalization option based on the lowest CER
Args:
normalized_texts: normalized text options
input_text: input text
pred_text: ASR model transcript of the audio file corresponding to the normalized text
verbose: whether to print intermediate meta information
remove_punct: whether to remove punctuation before calculating CER
Returns:
normalized text with the lowest CER and CER value
"""
if pred_text == "":
return input_text, 1000
normalized_texts_cer = calculate_cer(normalized_texts, pred_text, remove_punct)
normalized_texts_cer = sorted(normalized_texts_cer, key=lambda x: x[1])
normalized_text, cer = normalized_texts_cer[0]
if verbose:
print('-' * 30)
for option in normalized_texts:
print(option)
print('-' * 30)
return normalized_text, cer
def calculate_cer(normalized_texts: List[str], pred_text: str, remove_punct=False) -> List[Tuple[str, float]]:
"""
Calculates character error rate (CER)
Args:
normalized_texts: normalized text options
pred_text: ASR model output
Returns: normalized options with corresponding CER
"""
normalized_options = []
for text in normalized_texts:
text_clean = text.replace('-', ' ').lower()
if remove_punct:
for punct in "!?:;,.-()*+-/<=>@^_":
text_clean = text_clean.replace(punct, "")
cer = round(word_error_rate([pred_text], [text_clean], use_cer=True) * 100, 2)
normalized_options.append((text, cer))
return normalized_options
def get_asr_model(asr_model):
"""
Returns ASR Model
Args:
asr_model: NeMo ASR model
"""
    if os.path.exists(asr_model):
        asr_model = ASRModel.restore_from(asr_model)
    elif asr_model in ASRModel.get_available_model_names():
        asr_model = ASRModel.from_pretrained(asr_model)
else:
raise ValueError(
f'Provide path to the pretrained checkpoint or choose from {ASRModel.get_available_model_names()}'
)
return asr_model
def parse_args():
parser = ArgumentParser()
parser.add_argument("--text", help="input string or path to a .txt file", default=None, type=str)
parser.add_argument(
"--input_case", help="input capitalization", choices=["lower_cased", "cased"], default="cased", type=str
)
parser.add_argument(
"--language", help="Select target language", choices=["en", "ru", "de"], default="en", type=str
)
parser.add_argument("--audio_data", default=None, help="path to an audio file or .json manifest")
parser.add_argument(
'--model', type=str, default='QuartzNet15x5Base-En', help='Pre-trained model name or path to model checkpoint'
)
parser.add_argument(
"--n_tagged",
type=int,
default=30,
help="number of tagged options to consider, -1 - return all possible tagged options",
)
parser.add_argument("--verbose", help="print info for debugging", action="store_true")
parser.add_argument(
"--no_remove_punct_for_cer",
help="Set to True to NOT remove punctuation before calculating CER",
action="store_true",
)
parser.add_argument(
"--no_punct_post_process", help="set to True to disable punctuation post processing", action="store_true"
)
parser.add_argument("--overwrite_cache", help="set to True to re-create .far grammar files", action="store_true")
    parser.add_argument("--whitelist", help="path to a file with whitelist replacements", default=None, type=str)
parser.add_argument(
"--cache_dir",
help="path to a dir with .far grammar file. Set to None to avoid using cache",
default=None,
type=str,
)
parser.add_argument("--n_jobs", default=-2, type=int, help="The maximum number of concurrently running jobs")
parser.add_argument("--batch_size", default=200, type=int, help="Number of examples for each process")
return parser.parse_args()
def _normalize_line(normalizer: NormalizerWithAudio, n_tagged, verbose, line: str, remove_punct, punct_post_process):
line = json.loads(line)
pred_text = line["pred_text"]
normalized_texts = normalizer.normalize(
text=line["text"], verbose=verbose, n_tagged=n_tagged, punct_post_process=punct_post_process,
)
normalized_text, cer = normalizer.select_best_match(
normalized_texts=normalized_texts,
input_text=line["text"],
pred_text=pred_text,
verbose=verbose,
remove_punct=remove_punct,
)
line["nemo_normalized"] = normalized_text
line["CER_nemo_normalized"] = cer
return line
def normalize_manifest(
normalizer,
audio_data: str,
n_jobs: int,
n_tagged: int,
remove_punct: bool,
punct_post_process: bool,
batch_size: int,
):
    """
    Normalizes a .json manifest in parallel batches.

    Args:
        audio_data: path to .json manifest file.
    """
def __process_batch(batch_idx, batch, dir_name):
normalized_lines = [
_normalize_line(
normalizer,
n_tagged,
verbose=False,
line=line,
remove_punct=remove_punct,
punct_post_process=punct_post_process,
)
for line in tqdm(batch)
]
with open(f"{dir_name}/{batch_idx}.json", "w") as f_out:
for line in normalized_lines:
f_out.write(json.dumps(line, ensure_ascii=False) + '\n')
print(f"Batch -- {batch_idx} -- is complete")
return normalized_lines
manifest_out = audio_data.replace('.json', '_normalized.json')
with open(audio_data, 'r') as f:
lines = f.readlines()
print(f'Normalizing {len(lines)} lines of {audio_data}...')
# to save intermediate results to a file
batch = min(len(lines), batch_size)
tmp_dir = manifest_out.replace(".json", "_parts")
os.makedirs(tmp_dir, exist_ok=True)
Parallel(n_jobs=n_jobs)(
delayed(__process_batch)(idx, lines[i : i + batch], tmp_dir)
for idx, i in enumerate(range(0, len(lines), batch))
)
# aggregate all intermediate files
with open(manifest_out, "w") as f_out:
for batch_f in sorted(glob(f"{tmp_dir}/*.json")):
with open(batch_f, "r") as f_in:
lines = f_in.read()
f_out.write(lines)
print(f'Normalized version saved at {manifest_out}')
if __name__ == "__main__":
args = parse_args()
if not ASR_AVAILABLE and args.audio_data:
raise ValueError("NeMo ASR collection is not installed.")
start = time.time()
args.whitelist = os.path.abspath(args.whitelist) if args.whitelist else None
if args.text is not None:
normalizer = NormalizerWithAudio(
input_case=args.input_case,
lang=args.language,
cache_dir=args.cache_dir,
overwrite_cache=args.overwrite_cache,
whitelist=args.whitelist,
)
if os.path.exists(args.text):
with open(args.text, 'r') as f:
args.text = f.read().strip()
normalized_texts = normalizer.normalize(
text=args.text,
verbose=args.verbose,
n_tagged=args.n_tagged,
punct_post_process=not args.no_punct_post_process,
)
if args.audio_data:
asr_model = get_asr_model(args.model)
pred_text = asr_model.transcribe([args.audio_data])[0]
normalized_text, cer = normalizer.select_best_match(
normalized_texts=normalized_texts,
pred_text=pred_text,
input_text=args.text,
verbose=args.verbose,
remove_punct=not args.no_remove_punct_for_cer,
)
print(f"Transcript: {pred_text}")
print(f"Normalized: {normalized_text}")
else:
print("Normalization options:")
for norm_text in normalized_texts:
print(norm_text)
elif not os.path.exists(args.audio_data):
raise ValueError(f"{args.audio_data} not found.")
elif args.audio_data.endswith('.json'):
normalizer = NormalizerWithAudio(
input_case=args.input_case,
lang=args.language,
cache_dir=args.cache_dir,
overwrite_cache=args.overwrite_cache,
whitelist=args.whitelist,
)
normalize_manifest(
normalizer=normalizer,
audio_data=args.audio_data,
n_jobs=args.n_jobs,
n_tagged=args.n_tagged,
remove_punct=not args.no_remove_punct_for_cer,
punct_post_process=not args.no_punct_post_process,
batch_size=args.batch_size,
)
else:
        raise ValueError(
            "Provide either path to .json manifest in '--audio_data' OR "
            "'--audio_data' path to an audio file and '--text' path to a text file OR "
            "'--text' string text (for debugging without audio)"
        )
print(f'Execution time: {round((time.time() - start)/60, 2)} min.')
[end of nemo_text_processing/text_normalization/normalize_with_audio.py]
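The core of `select_best_match`/`calculate_cer` is: score every normalization option by character error rate against the ASR transcript and keep the lowest. A minimal stand-alone sketch, with NeMo's `word_error_rate(..., use_cer=True)` replaced by a small Levenshtein-based CER so it runs without NeMo (the helper names `cer` and `select_best` are mine):

```python
# Sketch of the CER-based selection used by select_best_match above.
def cer(ref: str, hyp: str) -> float:
    # Levenshtein edit distance between ref and hyp, normalized by
    # reference length (character error rate).
    prev = list(range(len(ref) + 1))
    for i, h in enumerate(hyp, 1):
        curr = [i]
        for j, r in enumerate(ref, 1):
            curr.append(min(prev[j] + 1,            # deletion
                            curr[j - 1] + 1,        # insertion
                            prev[j - 1] + (h != r)))  # substitution
        prev = curr
    return prev[-1] / max(len(ref), 1)

def select_best(options, pred_text):
    # Score each normalization option against the ASR transcript and
    # return the one with the lowest CER.
    scored = sorted(((cer(opt.lower(), pred_text), opt) for opt in options))
    return scored[0][1]

options = {"twenty twenty one", "two thousand twenty one", "two o two one"}
print(select_best(options, "twenty twenty one"))  # -> twenty twenty one
```

The real code additionally strips hyphens and (optionally) punctuation before scoring, and falls back to the raw input when the ASR transcript is empty.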
[start of tools/text_processing_deployment/pynini_export.py]
# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
# Copyright 2015 and onwards Google, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import time
from argparse import ArgumentParser
from nemo.utils import logging
try:
import pynini
from nemo_text_processing.text_normalization.en.graph_utils import generator_main
PYNINI_AVAILABLE = True
except (ModuleNotFoundError, ImportError):
logging.warning(
"`pynini` is not installed ! \n"
"Please run the `nemo_text_processing/setup.sh` script"
"prior to usage of this toolkit."
)
PYNINI_AVAILABLE = False
# This script exports compiled grammars inside nemo_text_processing into OpenFst finite state archive files
# tokenize_and_classify.far and verbalize.far for production purposes
def itn_grammars(**kwargs):
d = {}
d['classify'] = {
'TOKENIZE_AND_CLASSIFY': ITNClassifyFst(
cache_dir=kwargs["cache_dir"], overwrite_cache=kwargs["overwrite_cache"]
).fst
}
d['verbalize'] = {'ALL': ITNVerbalizeFst().fst, 'REDUP': pynini.accep("REDUP")}
return d
def tn_grammars(**kwargs):
d = {}
d['classify'] = {
'TOKENIZE_AND_CLASSIFY': TNClassifyFst(
input_case=kwargs["input_case"],
deterministic=True,
cache_dir=kwargs["cache_dir"],
overwrite_cache=kwargs["overwrite_cache"],
).fst
}
d['verbalize'] = {'ALL': TNVerbalizeFst(deterministic=True).fst, 'REDUP': pynini.accep("REDUP")}
return d
def export_grammars(output_dir, grammars):
"""
Exports tokenizer_and_classify and verbalize Fsts as OpenFst finite state archive (FAR) files.
Args:
output_dir: directory to export FAR files to. Subdirectories will be created for tagger and verbalizer respectively.
grammars: grammars to be exported
"""
for category, graphs in grammars.items():
out_dir = os.path.join(output_dir, category)
if not os.path.exists(out_dir):
os.makedirs(out_dir)
time.sleep(1)
if category == "classify":
category = "tokenize_and_classify"
generator_main(f"{out_dir}/{category}.far", graphs)
def parse_args():
parser = ArgumentParser()
parser.add_argument("--output_dir", help="output directory for grammars", required=True, type=str)
parser.add_argument(
"--language", help="language", choices=["en", "de", "es", "ru", 'fr', 'vi'], type=str, default='en'
)
parser.add_argument(
"--grammars", help="grammars to be exported", choices=["tn_grammars", "itn_grammars"], type=str, required=True
)
parser.add_argument(
"--input_case", help="input capitalization", choices=["lower_cased", "cased"], default="cased", type=str
)
parser.add_argument("--overwrite_cache", help="set to True to re-create .far grammar files", action="store_true")
parser.add_argument(
"--cache_dir",
help="path to a dir with .far grammar file. Set to None to avoid using cache",
default=None,
type=str,
)
return parser.parse_args()
if __name__ == '__main__':
args = parse_args()
if args.language in ['ru', 'fr', 'es', 'vi'] and args.grammars == 'tn_grammars':
raise ValueError('Only ITN grammars could be deployed in Sparrowhawk for the selected languages.')
if args.language == 'en':
from nemo_text_processing.inverse_text_normalization.en.taggers.tokenize_and_classify import (
ClassifyFst as ITNClassifyFst,
)
from nemo_text_processing.inverse_text_normalization.en.verbalizers.verbalize import (
VerbalizeFst as ITNVerbalizeFst,
)
from nemo_text_processing.text_normalization.en.taggers.tokenize_and_classify import (
ClassifyFst as TNClassifyFst,
)
from nemo_text_processing.text_normalization.en.verbalizers.verbalize import VerbalizeFst as TNVerbalizeFst
elif args.language == 'de':
from nemo_text_processing.inverse_text_normalization.de.taggers.tokenize_and_classify import (
ClassifyFst as ITNClassifyFst,
)
from nemo_text_processing.inverse_text_normalization.de.verbalizers.verbalize import (
VerbalizeFst as ITNVerbalizeFst,
)
from nemo_text_processing.text_normalization.de.taggers.tokenize_and_classify import (
ClassifyFst as TNClassifyFst,
)
from nemo_text_processing.text_normalization.de.verbalizers.verbalize import VerbalizeFst as TNVerbalizeFst
elif args.language == 'ru':
from nemo_text_processing.inverse_text_normalization.ru.taggers.tokenize_and_classify import (
ClassifyFst as ITNClassifyFst,
)
from nemo_text_processing.inverse_text_normalization.ru.verbalizers.verbalize import (
VerbalizeFst as ITNVerbalizeFst,
)
elif args.language == 'es':
from nemo_text_processing.inverse_text_normalization.es.taggers.tokenize_and_classify import (
ClassifyFst as ITNClassifyFst,
)
from nemo_text_processing.inverse_text_normalization.es.verbalizers.verbalize import (
VerbalizeFst as ITNVerbalizeFst,
)
elif args.language == 'fr':
from nemo_text_processing.inverse_text_normalization.fr.taggers.tokenize_and_classify import (
ClassifyFst as ITNClassifyFst,
)
from nemo_text_processing.inverse_text_normalization.fr.verbalizers.verbalize import (
VerbalizeFst as ITNVerbalizeFst,
)
elif args.language == 'vi':
from nemo_text_processing.inverse_text_normalization.vi.taggers.tokenize_and_classify import (
ClassifyFst as ITNClassifyFst,
)
from nemo_text_processing.inverse_text_normalization.vi.verbalizers.verbalize import (
VerbalizeFst as ITNVerbalizeFst,
)
output_dir = os.path.join(args.output_dir, args.language)
export_grammars(
output_dir=output_dir,
grammars=locals()[args.grammars](
input_case=args.input_case, cache_dir=args.cache_dir, overwrite_cache=args.overwrite_cache
),
)
[end of tools/text_processing_deployment/pynini_export.py]
</code>
I need you to solve this issue by regenerating the full files in the code base that you would like to change. You can change as many files as you like. Please respond with a list of files and their revised contents in the following format.
<example>
[start of /src/this_file.py]
import os
def euclidean(a, b):
if b == 0:
return a
return euclidean(b, a % b)
[end of /src/this_file.py]
[start of /src/another_file.py]
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
x, y = x0, y0
sx = -1 if x0 > x1 else 1
sy = -1 if y0 > y1 else 1
if dx > dy:
err = dx / 2.0
while x != x1:
points.append((x, y))
err -= dy
if err < 0:
y += sy
err += dx
x += sx
else:
err = dy / 2.0
while y != y1:
points.append((x, y))
err -= dx
if err < 0:
x += sx
err += dy
y += sy
points.append((x, y))
return points
[end of /src/another_file.py]
</example>
|
NVIDIA/NeMo
|
022f0292aecbc98d591d49423d5045235394f793
|
./reinstall.sh crashes due to not being able to uninstall llvmlite
Starting from the `nemo:1.5.1` container, cloning the NeMo repo into a folder inside of it and calling `./reinstall.sh` fails with
```
ERROR: Cannot uninstall 'llvmlite'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall.
```
`pip install -e` on the other hand succeeds, installing `nemo:1.7.0rc0` and `numpy:1.22.2` while leaving the rest of the packages untouched.
It seems that `./reinstall.sh`, which worked fine a week or so ago when following the same procedure to upgrade to `nemo:1.6.0rc`, has redeveloped issue #841. The solution remains the same: first call
```
pip install --ignore-installed llvmlite
```
followed by `./reinstall.sh`. In this case, apart from `llvmlite`, the following packages are updated
```
ftfy-6.0.3 nemo-toolkit-1.7.0rc0 numba-0.55.1 pytorch-lightning-1.5.9 sacrebleu-2.0.0 setuptools-59.5.0
```
Interestingly `numpy` in this case is left at `1.21.5`.
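For context, pip refuses to uninstall a distutils-installed project because such installs ship only an `.egg-info` entry with no `RECORD` metadata, so pip cannot tell which files belong to the package. The following is a minimal sketch (not part of the NeMo repo) of how one could check whether an installed package such as `llvmlite` falls into that category:

```python
# Sketch: detect a distutils-style install, i.e. a package whose metadata
# lacks the RECORD file pip needs in order to uninstall it cleanly.
from importlib.metadata import PackageNotFoundError, distribution


def is_distutils_installed(package: str) -> bool:
    """True if `package` is installed but has no RECORD metadata."""
    try:
        dist = distribution(package)
    except PackageNotFoundError:
        return False  # not installed at all
    # Distribution.read_text returns None when the metadata file is absent
    return dist.read_text("RECORD") is None


if __name__ == "__main__":
    for name in ("llvmlite", "numpy"):
        print(name, "distutils-installed:", is_distutils_installed(name))
```

If this reports `True` for `llvmlite`, the `pip install --ignore-installed llvmlite` workaround above simply installs a fresh wheel on top without attempting the impossible uninstall.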
|
2022-02-09T05:12:31Z
|
<patch>
diff --git a/nemo_text_processing/text_normalization/__init__.py b/nemo_text_processing/text_normalization/__init__.py
--- a/nemo_text_processing/text_normalization/__init__.py
+++ b/nemo_text_processing/text_normalization/__init__.py
@@ -21,7 +21,7 @@
except (ModuleNotFoundError, ImportError):
logging.warning(
"`pynini` is not installed ! \n"
- "Please run the `nemo_text_processing/setup.sh` script"
+ "Please run the `nemo_text_processing/setup.sh` script "
"prior to usage of this toolkit."
)
diff --git a/nemo_text_processing/text_normalization/en/graph_utils.py b/nemo_text_processing/text_normalization/en/graph_utils.py
--- a/nemo_text_processing/text_normalization/en/graph_utils.py
+++ b/nemo_text_processing/text_normalization/en/graph_utils.py
@@ -159,7 +159,7 @@ def convert_space(fst) -> 'pynini.FstLike':
"""
Converts space to nonbreaking space.
Used only in tagger grammars for transducing token values within quotes, e.g. name: "hello kitty"
- This is making transducer significantly slower, so only use when there could be potential spaces within quotes, otherwise leave it.
+ This is making transducer significantly slower, so only use when there could be potential spaces within quotes, otherwise leave it.
Args:
fst: input fst
@@ -208,9 +208,9 @@ def add_tokens(self, fst) -> 'pynini.FstLike':
"""
Wraps class name around to given fst
- Args:
+ Args:
fst: input fst
-
+
Returns:
Fst: fst
"""
diff --git a/nemo_text_processing/text_normalization/en/taggers/punctuation.py b/nemo_text_processing/text_normalization/en/taggers/punctuation.py
--- a/nemo_text_processing/text_normalization/en/taggers/punctuation.py
+++ b/nemo_text_processing/text_normalization/en/taggers/punctuation.py
@@ -22,7 +22,7 @@
import pynini
from pynini.lib import pynutil
- PYNINI_AVAILABLE = False
+ PYNINI_AVAILABLE = True
except (ModuleNotFoundError, ImportError):
PYNINI_AVAILABLE = False
diff --git a/nemo_text_processing/text_normalization/en/verbalizers/whitelist.py b/nemo_text_processing/text_normalization/en/verbalizers/whitelist.py
--- a/nemo_text_processing/text_normalization/en/verbalizers/whitelist.py
+++ b/nemo_text_processing/text_normalization/en/verbalizers/whitelist.py
@@ -12,8 +12,6 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
-
-
from nemo_text_processing.text_normalization.en.graph_utils import NEMO_CHAR, NEMO_SIGMA, GraphFst, delete_space
try:
@@ -21,6 +19,7 @@
from pynini.lib import pynutil
PYNINI_AVAILABLE = True
+
except (ModuleNotFoundError, ImportError):
PYNINI_AVAILABLE = False
diff --git a/nemo_text_processing/text_normalization/en/verbalizers/word.py b/nemo_text_processing/text_normalization/en/verbalizers/word.py
--- a/nemo_text_processing/text_normalization/en/verbalizers/word.py
+++ b/nemo_text_processing/text_normalization/en/verbalizers/word.py
@@ -12,7 +12,6 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
-
from nemo_text_processing.text_normalization.en.graph_utils import NEMO_CHAR, NEMO_SIGMA, GraphFst, delete_space
try:
@@ -20,6 +19,7 @@
from pynini.lib import pynutil
PYNINI_AVAILABLE = True
+
except (ModuleNotFoundError, ImportError):
PYNINI_AVAILABLE = False
diff --git a/nemo_text_processing/text_normalization/es/__init__.py b/nemo_text_processing/text_normalization/es/__init__.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/__init__.py
@@ -0,0 +1,15 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+LOCALIZATION = "eu" # Set to am for alternate formatting
diff --git a/nemo_text_processing/text_normalization/es/data/__init__.py b/nemo_text_processing/text_normalization/es/data/__init__.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/data/__init__.py
@@ -0,0 +1,13 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/nemo_text_processing/text_normalization/es/data/dates/__init__.py b/nemo_text_processing/text_normalization/es/data/dates/__init__.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/data/dates/__init__.py
@@ -0,0 +1,13 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/nemo_text_processing/text_normalization/es/data/electronic/__init__.py b/nemo_text_processing/text_normalization/es/data/electronic/__init__.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/data/electronic/__init__.py
@@ -0,0 +1,13 @@
+# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/nemo_text_processing/text_normalization/es/data/fractions/__init__.py b/nemo_text_processing/text_normalization/es/data/fractions/__init__.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/data/fractions/__init__.py
@@ -0,0 +1,13 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/nemo_text_processing/text_normalization/es/data/measures/__init__.py b/nemo_text_processing/text_normalization/es/data/measures/__init__.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/data/measures/__init__.py
@@ -0,0 +1,13 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/nemo_text_processing/text_normalization/es/data/money/__init__.py b/nemo_text_processing/text_normalization/es/data/money/__init__.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/data/money/__init__.py
@@ -0,0 +1,13 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/nemo_text_processing/text_normalization/es/data/numbers/__init__.py b/nemo_text_processing/text_normalization/es/data/numbers/__init__.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/data/numbers/__init__.py
@@ -0,0 +1,13 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/nemo_text_processing/text_normalization/es/data/ordinals/__init__.py b/nemo_text_processing/text_normalization/es/data/ordinals/__init__.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/data/ordinals/__init__.py
@@ -0,0 +1,13 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/nemo_text_processing/text_normalization/es/data/roman/__init__.py b/nemo_text_processing/text_normalization/es/data/roman/__init__.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/data/roman/__init__.py
@@ -0,0 +1,13 @@
+# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/nemo_text_processing/text_normalization/es/data/time/__init__.py b/nemo_text_processing/text_normalization/es/data/time/__init__.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/data/time/__init__.py
@@ -0,0 +1,13 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/nemo_text_processing/text_normalization/es/graph_utils.py b/nemo_text_processing/text_normalization/es/graph_utils.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/graph_utils.py
@@ -0,0 +1,179 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import NEMO_SIGMA, NEMO_SPACE
+from nemo_text_processing.text_normalization.es import LOCALIZATION
+from nemo_text_processing.text_normalization.es.utils import get_abs_path, load_labels
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ digits = pynini.project(pynini.string_file(get_abs_path("data/numbers/digit.tsv")), "input")
+ tens = pynini.project(pynini.string_file(get_abs_path("data/numbers/ties.tsv")), "input")
+ teens = pynini.project(pynini.string_file(get_abs_path("data/numbers/teen.tsv")), "input")
+ twenties = pynini.project(pynini.string_file(get_abs_path("data/numbers/twenties.tsv")), "input")
+ hundreds = pynini.project(pynini.string_file(get_abs_path("data/numbers/hundreds.tsv")), "input")
+
+ accents = pynini.string_map([("á", "a"), ("é", "e"), ("í", "i"), ("ó", "o"), ("ú", "u")])
+
+ if LOCALIZATION == "am": # Setting localization for central and northern america formatting
+ cardinal_separator = pynini.string_map([",", NEMO_SPACE])
+ decimal_separator = pynini.accep(".")
+ else:
+ cardinal_separator = pynini.string_map([".", NEMO_SPACE])
+ decimal_separator = pynini.accep(",")
+
+ ones = pynini.union("un", "ún")
+ fem_ones = pynini.union(pynini.cross("un", "una"), pynini.cross("ún", "una"), pynini.cross("uno", "una"))
+ one_to_one_hundred = pynini.union(digits, tens, teens, twenties, tens + pynini.accep(" y ") + digits)
+ fem_hundreds = hundreds @ pynini.cdrewrite(pynini.cross("ientos", "ientas"), "", "", NEMO_SIGMA)
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ digits = None
+ tens = None
+ teens = None
+ twenties = None
+ hundreds = None
+
+ accents = None
+
+ cardinal_separator = None
+ decimal_separator = None
+
+ ones = None
+ fem_ones = None
+ one_to_one_hundred = None
+ fem_hundreds = None
+
+ PYNINI_AVAILABLE = False
+
+
+def strip_accent(fst: 'pynini.FstLike') -> 'pynini.FstLike':
+ """
+ Converts all accented vowels to non-accented equivalents
+
+ Args:
+ fst: Any fst. Composes vowel conversion onto fst's output strings
+ """
+ return fst @ pynini.cdrewrite(accents, "", "", NEMO_SIGMA)
+
+
+def shift_cardinal_gender(fst: 'pynini.FstLike') -> 'pynini.FstLike':
+ """
+ Applies gender conversion rules to a cardinal string. These include: rendering all masculine forms of "uno" (including apocopated forms) as "una" and
+ converting all gendered numbers in the hundreds series (200, 300, 400, ...) to their feminine equivalents (e.g. "doscientos" -> "doscientas"). Conversion only applies
+ to place values below 1000 and multiples of 1000 (e.g. "doscientos mil doscientos" -> "doscientas mil doscientas"). For place values greater than the thousands, there
+ is no gender shift as the higher powers of ten ("millones", "billones") are masculine nouns and any conversion would be formally
+ ungrammatical.
+ e.g.
+ "doscientos" -> "doscientas"
+ "doscientos mil" -> "doscientas mil"
+ "doscientos millones" -> "doscientos millones"
+ "doscientos mil millones" -> "doscientos mil millones"
+ "doscientos millones doscientos mil doscientos" -> "doscientos millones doscientas mil doscientas"
+
+ Args:
+ fst: Any fst. Composes conversion onto fst's output strings
+ """
+ before_mil = (
+ NEMO_SPACE
+ (pynini.accep("mil") | pynini.accep("milésimo"))
+ + pynini.closure(NEMO_SPACE + hundreds, 0, 1)
+ + pynini.closure(NEMO_SPACE + one_to_one_hundred, 0, 1)
+ + pynini.union(pynini.accep("[EOS]"), pynini.accep("\""), decimal_separator)
+ )
+ before_double_digits = pynini.closure(NEMO_SPACE + one_to_one_hundred, 0, 1) + pynini.union(
+ pynini.accep("[EOS]"), pynini.accep("\"")
+ )
+
+ fem_allign = pynini.cdrewrite(fem_hundreds, "", before_mil, NEMO_SIGMA) # doscientas mil dosciento
+ fem_allign @= pynini.cdrewrite(fem_hundreds, "", before_double_digits, NEMO_SIGMA) # doscientas mil doscienta
+ fem_allign @= pynini.cdrewrite(
+ fem_ones, "", pynini.union("[EOS]", "\"", decimal_separator), NEMO_SIGMA
+ ) # If before a quote or EOS, we know it's the end of a string
+
+ return fst @ fem_allign
+
+
+def shift_number_gender(fst: 'pynini.FstLike') -> 'pynini.FstLike':
+ """
+ Performs gender conversion on all verbalized numbers in output. All values in the hundreds series (200,300,400) are changed to
+ feminine gender (e.g. "doscientos" -> "doscientas") and all forms of "uno" (including apocopated forms) are converted to "una".
+ This has no boundary restriction and will perform shift across all values in output string.
+ e.g.
+ "doscientos" -> "doscientas"
+ "doscientos millones" -> "doscientas millones"
+ "doscientos millones doscientos" -> "doscientas millones doscientas"
+
+ Args:
+ fst: Any fst. Composes conversion onto fst's output strings
+ """
+ fem_allign = pynini.cdrewrite(fem_hundreds, "", "", NEMO_SIGMA)
+ fem_allign @= pynini.cdrewrite(
+ fem_ones, "", pynini.union(NEMO_SPACE, pynini.accep("[EOS]"), pynini.accep("\"")), NEMO_SIGMA
+ ) # If before a quote or EOS, we know it's the end of a string
+
+ return fst @ fem_allign
+
+
+def strip_cardinal_apocope(fst: 'pynini.FstLike') -> 'pynini.FstLike':
+ """
+ Reverts apocope on cardinal strings in line with formation rules. e.g. "un" -> "uno". Due to cardinal formation rules, this in effect only
+ affects strings where the final value is a variation of "un".
+ e.g.
+ "un" -> "uno"
+ "veintiรบn" -> "veintiuno"
+
+ Args:
+ fst: Any fst. Composes conversion onto fst's output strings
+ """
+ # Since cardinals use apocope by default for large values (e.g. "millรณn"), this only needs to act on the last instance of one
+ strip = pynini.cross("un", "uno") | pynini.cross("ún", "uno")
+ strip = pynini.cdrewrite(strip, "", pynini.union("[EOS]", "\""), NEMO_SIGMA)
+ return fst @ strip
+
+
+def roman_to_int(fst: 'pynini.FstLike') -> 'pynini.FstLike':
+ """
+ Alters given fst to convert Roman integers (lower and upper cased) into Arabic numerals. Valid for values up to 1000.
+ e.g.
+ "V" -> "5"
+ "i" -> "1"
+
+ Args:
+ fst: Any fst. Composes fst onto Roman conversion outputs.
+ """
+
+ def _load_roman(file: str):
+ roman = load_labels(get_abs_path(file))
+ roman_numerals = [(x, y) for x, y in roman] + [(x.upper(), y) for x, y in roman]
+ return pynini.string_map(roman_numerals)
+
+ digit = _load_roman("data/roman/digit.tsv")
+ ties = _load_roman("data/roman/ties.tsv")
+ hundreds = _load_roman("data/roman/hundreds.tsv")
+
+ graph = (
+ digit
+ | ties + (digit | pynutil.add_weight(pynutil.insert("0"), 0.01))
+ | (
+ hundreds
+ + (ties | pynutil.add_weight(pynutil.insert("0"), 0.01))
+ + (digit | pynutil.add_weight(pynutil.insert("0"), 0.01))
+ )
+ ).optimize()
+
+ return graph @ fst
diff --git a/nemo_text_processing/text_normalization/es/taggers/__init__.py b/nemo_text_processing/text_normalization/es/taggers/__init__.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/taggers/__init__.py
@@ -0,0 +1,13 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/nemo_text_processing/text_normalization/es/taggers/cardinal.py b/nemo_text_processing/text_normalization/es/taggers/cardinal.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/taggers/cardinal.py
@@ -0,0 +1,190 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import (
+ NEMO_ALPHA,
+ NEMO_DIGIT,
+ NEMO_SIGMA,
+ NEMO_SPACE,
+ NEMO_WHITE_SPACE,
+ GraphFst,
+ delete_space,
+ insert_space,
+)
+from nemo_text_processing.text_normalization.es.graph_utils import cardinal_separator
+from nemo_text_processing.text_normalization.es.utils import get_abs_path
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ zero = pynini.invert(pynini.string_file(get_abs_path("data/numbers/zero.tsv")))
+ digit = pynini.invert(pynini.string_file(get_abs_path("data/numbers/digit.tsv")))
+ teen = pynini.invert(pynini.string_file(get_abs_path("data/numbers/teen.tsv")))
+ ties = pynini.invert(pynini.string_file(get_abs_path("data/numbers/ties.tsv")))
+ twenties = pynini.invert(pynini.string_file(get_abs_path("data/numbers/twenties.tsv")))
+ hundreds = pynini.invert(pynini.string_file(get_abs_path("data/numbers/hundreds.tsv")))
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ zero = None
+ digit = None
+ teen = None
+ ties = None
+ twenties = None
+ hundreds = None
+
+ PYNINI_AVAILABLE = False
+
+
+def filter_punctuation(fst: 'pynini.FstLike') -> 'pynini.FstLike':
+ """
+ Helper function for parsing number strings. Converts common cardinal strings (groups of three digits delineated by 'cardinal_separator' - see graph_utils)
+ and converts to a string of digits:
+ "1 000" -> "1000"
+ "1.000.000" -> "1000000"
+ Args:
+ fst: Any pynini.FstLike object. Function composes fst onto string parser fst
+
+ Returns:
+ fst: A pynini.FstLike object
+ """
+ exactly_three_digits = NEMO_DIGIT ** 3 # for blocks of three
+ up_to_three_digits = pynini.closure(NEMO_DIGIT, 1, 3) # for start of string
+
+ cardinal_string = pynini.closure(
+ NEMO_DIGIT, 1
+ ) # For string w/o punctuation (used for page numbers, thousand series)
+
+ cardinal_string |= (
+ up_to_three_digits
+ + pynutil.delete(cardinal_separator)
+ + pynini.closure(exactly_three_digits + pynutil.delete(cardinal_separator))
+ + exactly_three_digits
+ )
+
+ return cardinal_string @ fst
+
+
+class CardinalFst(GraphFst):
+ """
+ Finite state transducer for classifying cardinals, e.g.
+ "1000" -> cardinal { integer: "mil" }
+ "2.000.000" -> cardinal { integer: "dos millones" }
+
+ Args:
+ deterministic: if True will provide a single transduction option,
+ for False multiple transduction are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, deterministic: bool = True):
+ super().__init__(name="cardinal", kind="classify", deterministic=deterministic)
+
+ # Any single digit
+ graph_digit = digit
+ digits_no_one = (NEMO_DIGIT - "1") @ graph_digit
+
+ # Any double digit
+ graph_tens = teen
+ graph_tens |= ties + (pynutil.delete('0') | (pynutil.insert(" y ") + graph_digit))
+ graph_tens |= twenties
+
+ self.tens = graph_tens.optimize()
+
+ self.two_digit_non_zero = pynini.union(
+ graph_digit, graph_tens, (pynini.cross("0", NEMO_SPACE) + graph_digit)
+ ).optimize()
+
+ # Three digit strings
+ graph_hundreds = hundreds + pynini.union(
+ pynutil.delete("00"), (insert_space + graph_tens), (pynini.cross("0", NEMO_SPACE) + graph_digit)
+ )
+ graph_hundreds |= pynini.cross("100", "cien")
+ graph_hundreds |= (
+ pynini.cross("1", "ciento") + insert_space + pynini.union(graph_tens, pynutil.delete("0") + graph_digit)
+ )
+
+ self.hundreds = graph_hundreds.optimize()
+
+ # For all three digit strings with leading zeroes (graph appends '0's to manage place in string)
+ graph_hundreds_component = pynini.union(graph_hundreds, pynutil.delete("0") + graph_tens)
+
+ graph_hundreds_component_at_least_one_none_zero_digit = graph_hundreds_component | (
+ pynutil.delete("00") + graph_digit
+ )
+ graph_hundreds_component_at_least_one_none_zero_digit_no_one = graph_hundreds_component | (
+ pynutil.delete("00") + digits_no_one
+ )
+
+ graph_thousands_component_at_least_one_none_zero_digit = pynini.union(
+ pynutil.delete("000") + graph_hundreds_component_at_least_one_none_zero_digit,
+ graph_hundreds_component_at_least_one_none_zero_digit_no_one
+ + pynutil.insert(" mil")
+ + ((insert_space + graph_hundreds_component_at_least_one_none_zero_digit) | pynutil.delete("000")),
+ pynini.cross("001", "mil")
+ + ((insert_space + graph_hundreds_component_at_least_one_none_zero_digit) | pynutil.delete("000")),
+ )
+
+ graph_thousands_component_at_least_one_none_zero_digit_no_one = pynini.union(
+ pynutil.delete("000") + graph_hundreds_component_at_least_one_none_zero_digit_no_one,
+ graph_hundreds_component_at_least_one_none_zero_digit_no_one
+ + pynutil.insert(" mil")
+ + ((insert_space + graph_hundreds_component_at_least_one_none_zero_digit) | pynutil.delete("000")),
+ pynini.cross("001", "mil")
+ + ((insert_space + graph_hundreds_component_at_least_one_none_zero_digit) | pynutil.delete("000")),
+ )
+
+ graph_million = pynutil.add_weight(pynini.cross("000001", "un millón"), -0.001)
+ graph_million |= graph_thousands_component_at_least_one_none_zero_digit_no_one + pynutil.insert(" millones")
+ graph_million |= pynutil.delete("000000")
+ graph_million += insert_space
+
+ graph_billion = pynutil.add_weight(pynini.cross("000001", "un billón"), -0.001)
+ graph_billion |= graph_thousands_component_at_least_one_none_zero_digit_no_one + pynutil.insert(" billones")
+ graph_billion |= pynutil.delete("000000")
+ graph_billion += insert_space
+
+ graph_trillion = pynutil.add_weight(pynini.cross("000001", "un trillón"), -0.001)
+ graph_trillion |= graph_thousands_component_at_least_one_none_zero_digit_no_one + pynutil.insert(" trillones")
+ graph_trillion |= pynutil.delete("000000")
+ graph_trillion += insert_space
+
+ graph = (
+ graph_trillion
+ + graph_billion
+ + graph_million
+ + (graph_thousands_component_at_least_one_none_zero_digit | pynutil.delete("000000"))
+ )
+
+ self.graph = (
+ ((NEMO_DIGIT - "0") + pynini.closure(NEMO_DIGIT, 0))
+ @ pynini.cdrewrite(pynini.closure(pynutil.insert("0")), "[BOS]", "", NEMO_SIGMA)
+ @ NEMO_DIGIT ** 24
+ @ graph
+ @ pynini.cdrewrite(delete_space, "[BOS]", "", NEMO_SIGMA)
+ @ pynini.cdrewrite(delete_space, "", "[EOS]", NEMO_SIGMA)
+ @ pynini.cdrewrite(
+ pynini.cross(pynini.closure(NEMO_WHITE_SPACE, 2), NEMO_SPACE), NEMO_ALPHA, NEMO_ALPHA, NEMO_SIGMA
+ )
+ )
+ self.graph |= zero
+
+ self.graph = filter_punctuation(self.graph).optimize()
+
+ optional_minus_graph = pynini.closure(pynutil.insert("negative: ") + pynini.cross("-", "\"true\" "), 0, 1)
+
+ final_graph = optional_minus_graph + pynutil.insert("integer: \"") + self.graph + pynutil.insert("\"")
+
+ final_graph = self.add_tokens(final_graph)
+ self.fst = final_graph.optimize()
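Reviewer note: the `NEMO_DIGIT ** 24` composition above works by left-padding the input with zeros so that each power-of-ten sub-graph consumes a fixed 6-digit block. A minimal pure-Python sketch of that padding step (no pynini; the function name is illustrative only):

```python
def pad_and_group(digits: str) -> list:
    """Left-pad a digit string to 24 places and split it into the four
    6-digit blocks consumed by the trillion/billion/million/thousand
    sub-graphs (illustrative stand-in for the pynini composition)."""
    padded = digits.zfill(24)  # mirrors the cdrewrite that inserts leading "0"s
    return [padded[i:i + 6] for i in range(0, 24, 6)]
```

For example, `pad_and_group("1234")` yields three all-zero blocks followed by `"001234"`; the all-zero blocks are deleted by the graph, so only the final thousands component is verbalized.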
diff --git a/nemo_text_processing/text_normalization/es/taggers/date.py b/nemo_text_processing/text_normalization/es/taggers/date.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/taggers/date.py
@@ -0,0 +1,107 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import NEMO_DIGIT, NEMO_SPACE, GraphFst, delete_extra_space
+from nemo_text_processing.text_normalization.es.utils import get_abs_path
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ articles = pynini.union("de", "del", "el", "del aรฑo")
+ delete_leading_zero = (pynutil.delete("0") | (NEMO_DIGIT - "0")) + NEMO_DIGIT
+ month_numbers = pynini.string_file(get_abs_path("data/dates/months.tsv"))
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ articles = None
+ delete_leading_zero = None
+ month_numbers = None
+
+ PYNINI_AVAILABLE = False
+
+
+class DateFst(GraphFst):
+ """
+ Finite state transducer for classifying date, e.g.
+ "01.04.2010" -> date { day: "un" month: "enero" year: "dos mil diez" preserve_order: true }
+ "marzo 4 2000" -> date { month: "marzo" day: "cuatro" year: "dos mil" }
+ "1990-20-01" -> date { year: "mil novecientos noventa" day: "veinte" month: "enero" }
+
+ Args:
+ cardinal: cardinal GraphFst
+ deterministic: if True will provide a single transduction option,
+ for False multiple transduction are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, cardinal: GraphFst, deterministic: bool):
+ super().__init__(name="date", kind="classify", deterministic=deterministic)
+
+ number_to_month = month_numbers.optimize()
+ month_graph = pynini.project(number_to_month, "output")
+
+ numbers = cardinal.graph
+ optional_leading_zero = delete_leading_zero | NEMO_DIGIT
+
+ # 01, 31, 1
+ digit_day = optional_leading_zero @ pynini.union(*[str(x) for x in range(1, 32)]) @ numbers
+ day = (pynutil.insert("day: \"") + digit_day + pynutil.insert("\"")).optimize()
+
+ digit_month = optional_leading_zero @ pynini.union(*[str(x) for x in range(1, 13)])
+ number_to_month = digit_month @ number_to_month
+
+ month_name = (pynutil.insert("month: \"") + month_graph + pynutil.insert("\"")).optimize()
+ month_number = (pynutil.insert("month: \"") + number_to_month + pynutil.insert("\"")).optimize()
+
+ # prefer cardinal over year
+ year = (NEMO_DIGIT - "0") + pynini.closure(NEMO_DIGIT, 1, 3) # 90, 990, 1990
+ year @= numbers
+ self.year = year
+
+ year_only = pynutil.insert("year: \"") + year + pynutil.insert("\"")
+ year_with_articles = (
+ pynutil.insert("year: \"") + pynini.closure(articles + NEMO_SPACE, 0, 1) + year + pynutil.insert("\"")
+ )
+
+ graph_dmy = (
+ day
+ + pynini.closure(pynutil.delete(" de"))
+ + NEMO_SPACE
+ + month_name
+ + pynini.closure(NEMO_SPACE + year_with_articles, 0, 1)
+ )
+
+ graph_mdy = ( # English influences on language
+ month_name + delete_extra_space + day + pynini.closure(NEMO_SPACE + year_with_articles, 0, 1)
+ )
+
+ separators = [".", "-", "/"]
+ for sep in separators:
+ year_optional = pynini.closure(pynini.cross(sep, NEMO_SPACE) + year_only, 0, 1)
+ new_graph = day + pynini.cross(sep, NEMO_SPACE) + month_number + year_optional
+ graph_dmy |= new_graph
+ if not deterministic:
+ new_graph = month_number + pynini.cross(sep, NEMO_SPACE) + day + year_optional
+ graph_mdy |= new_graph
+
+ dash = "-"
+ day_optional = pynini.closure(pynini.cross(dash, NEMO_SPACE) + day, 0, 1)
+ graph_ymd = NEMO_DIGIT ** 4 @ year_only + pynini.cross(dash, NEMO_SPACE) + month_number + day_optional
+
+ final_graph = graph_dmy + pynutil.insert(" preserve_order: true")
+ final_graph |= graph_ymd
+ final_graph |= graph_mdy
+
+ self.final_graph = final_graph.optimize()
+ self.fst = self.add_tokens(self.final_graph).optimize()
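Reviewer note: as a rough reference for the separator branches above, here is a plain-Python approximation of the day&lt;sep&gt;month[&lt;sep&gt;year] pattern with the separators '.', '-' and '/'. The `MONTHS` dict is a hypothetical stand-in for `data/dates/months.tsv`, not its actual contents:

```python
import re

# Hypothetical stand-in for data/dates/months.tsv (number -> Spanish month)
MONTHS = {1: "enero", 2: "febrero", 3: "marzo", 4: "abril", 5: "mayo",
          6: "junio", 7: "julio", 8: "agosto", 9: "septiembre",
          10: "octubre", 11: "noviembre", 12: "diciembre"}

def tag_dmy(text):
    """Approximate the day<sep>month[<sep>year] branch: the same separator
    must appear twice (hence the backreference), and leading zeros are
    stripped as delete_leading_zero does."""
    m = re.fullmatch(r"(\d{1,2})([./-])(\d{1,2})(?:\2(\d{4}))?", text)
    if m is None:
        return None
    day, _, month, year = m.groups()
    if not (1 <= int(day) <= 31 and 1 <= int(month) <= 12):
        return None
    tag = {"day": int(day), "month": MONTHS[int(month)]}
    if year is not None:
        tag["year"] = int(year)
    return tag
```

Mixed separators (e.g. "01.04-2010") are rejected, matching the FST, which builds one branch per separator.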
diff --git a/nemo_text_processing/text_normalization/es/taggers/decimals.py b/nemo_text_processing/text_normalization/es/taggers/decimals.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/taggers/decimals.py
@@ -0,0 +1,138 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import (
+ NEMO_DIGIT,
+ NEMO_SIGMA,
+ NEMO_SPACE,
+ GraphFst,
+ delete_space,
+ insert_space,
+)
+from nemo_text_processing.text_normalization.es.graph_utils import (
+ cardinal_separator,
+ decimal_separator,
+ strip_cardinal_apocope,
+)
+from nemo_text_processing.text_normalization.es.utils import get_abs_path
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ quantities = pynini.string_file(get_abs_path("data/numbers/quantities.tsv"))
+ digit = pynini.invert(pynini.string_file(get_abs_path("data/numbers/digit.tsv")))
+ zero = pynini.invert(pynini.string_file(get_abs_path("data/numbers/zero.tsv")))
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ quantities = None
+ digit = None
+ zero = None
+
+ PYNINI_AVAILABLE = False
+
+
+def get_quantity(decimal_graph: 'pynini.FstLike', cardinal_graph: 'pynini.FstLike') -> 'pynini.FstLike':
+ """
+ Returns FST that transforms either a cardinal or decimal followed by a quantity into a numeral,
+ e.g. 2 millones -> integer_part: "dos" quantity: "millones"
+ e.g. 2,4 millones -> integer_part: "dos" fractional_part: "cuatro" quantity: "millones"
+ e.g. 2.400 millones -> integer_part: "dos mil cuatrocientos" quantity: "millones"
+
+ Args:
+ decimal_graph: DecimalFST
+ cardinal_graph: CardinalFST
+ """
+ numbers = pynini.closure(NEMO_DIGIT, 1, 6) @ cardinal_graph
+ numbers = pynini.cdrewrite(pynutil.delete(cardinal_separator), "", "", NEMO_SIGMA) @ numbers
+
+ res = (
+ pynutil.insert("integer_part: \"")
+ + numbers # The cardinal we're passing only produces 'un' for one, so gender agreement is safe (all quantities are masculine). Limit to 10^6 power.
+ + pynutil.insert("\"")
+ + NEMO_SPACE
+ + pynutil.insert("quantity: \"")
+ + quantities
+ + pynutil.insert("\"")
+ )
+ res |= decimal_graph + NEMO_SPACE + pynutil.insert("quantity: \"") + quantities + pynutil.insert("\"")
+ return res
+
+
+class DecimalFst(GraphFst):
+ """
+ Finite state transducer for classifying decimal, e.g.
+ -11,4006 billones -> decimal { negative: "true" integer_part: "once" fractional_part: "cuatro cero cero seis" quantity: "billones" preserve_order: true }
+ 1 billón -> decimal { integer_part: "un" quantity: "billón" preserve_order: true }
+ Args:
+ cardinal: CardinalFst
+ deterministic: if True will provide a single transduction option,
+ for False multiple transduction are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, cardinal: GraphFst, deterministic: bool = True):
+ super().__init__(name="decimal", kind="classify", deterministic=deterministic)
+ graph_digit = digit | zero
+
+ if not deterministic:
+ graph = pynini.union(graph_digit, cardinal.hundreds, cardinal.tens)
+ graph += pynini.closure(insert_space + graph)
+
+ else:
+ # General pattern seems to be 1-3 digits: map as cardinal, default to digits otherwise
+ graph = pynini.union(
+ graph_digit,
+ cardinal.tens,
+ cardinal.hundreds,
+ graph_digit + pynini.closure(insert_space + graph_digit, 3),
+ zero
+ + pynini.closure(insert_space + zero)
+ + pynini.closure(insert_space + graph_digit), # For cases such as "1,010"
+ )
+
+ # Need to strip apocope everywhere BUT end of string
+ reverse_apocope = pynini.string_map([("un", "uno"), ("รบn", "uno")])
+ apply_reverse_apocope = pynini.cdrewrite(reverse_apocope, "", NEMO_SPACE, NEMO_SIGMA)
+ graph @= apply_reverse_apocope
+
+ # Technically decimals should be space delineated groups of three, e.g. (1,333 333). This removes any possible spaces
+ strip_formatting = pynini.cdrewrite(delete_space, "", "", NEMO_SIGMA)
+ graph = strip_formatting @ graph
+
+ self.graph = graph.optimize()
+
+ graph_separator = pynutil.delete(decimal_separator)
+ optional_graph_negative = pynini.closure(pynutil.insert("negative: ") + pynini.cross("-", "\"true\" "), 0, 1)
+
+ self.graph_fractional = pynutil.insert("fractional_part: \"") + self.graph + pynutil.insert("\"")
+
+ # Integer graph maintains apocope except for ones place
+ graph_integer = (
+ strip_cardinal_apocope(cardinal.graph)
+ if deterministic
+ else pynini.union(cardinal.graph, strip_cardinal_apocope(cardinal.graph))
+ ) # Gives us forms w/ and w/o apocope
+ self.graph_integer = pynutil.insert("integer_part: \"") + graph_integer + pynutil.insert("\"")
+ final_graph_wo_sign = self.graph_integer + graph_separator + insert_space + self.graph_fractional
+
+ self.final_graph_wo_negative = (
+ final_graph_wo_sign | get_quantity(final_graph_wo_sign, cardinal.graph).optimize()
+ )
+ final_graph = optional_graph_negative + self.final_graph_wo_negative
+
+ final_graph += pynutil.insert(" preserve_order: true")
+ final_graph = self.add_tokens(final_graph)
+
+ self.fst = final_graph.optimize()
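Reviewer note: the `apply_reverse_apocope` rewrite above has a simple regex analogue: expand 'un'/'ún' to 'uno' wherever a space follows, leaving a string-final 'un' untouched. A sketch (regex only, no pynini; intended for number-word strings, not arbitrary Spanish text):

```python
import re

def reverse_apocope(words):
    """Expand the apocopated 'un'/'ún' to 'uno' everywhere except at the
    end of the string, mirroring the cdrewrite with a right context of
    NEMO_SPACE. Word-internal endings like 'veintiún' are covered too."""
    return re.sub(r"(ún|un)(?= )", "uno", words)
```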
diff --git a/nemo_text_processing/text_normalization/es/taggers/electronic.py b/nemo_text_processing/text_normalization/es/taggers/electronic.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/taggers/electronic.py
@@ -0,0 +1,84 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import NEMO_ALPHA, NEMO_DIGIT, GraphFst, insert_space
+from nemo_text_processing.text_normalization.es.utils import get_abs_path, load_labels
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ common_domains = [x[0] for x in load_labels(get_abs_path("data/electronic/domain.tsv"))]
+ symbols = [x[0] for x in load_labels(get_abs_path("data/electronic/symbols.tsv"))]
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ common_domains = None
+ symbols = None
+
+ PYNINI_AVAILABLE = False
+
+
+class ElectronicFst(GraphFst):
+ """
+ Finite state transducer for classifying electronic: email addresses
+ e.g. "abc@hotmail.com" -> electronic { username: "abc" domain: "hotmail.com" preserve_order: true }
+ e.g. "www.abc.com/123" -> electronic { protocol: "www." domain: "abc.com/123" preserve_order: true }
+
+ Args:
+ deterministic: if True will provide a single transduction option,
+ for False multiple transduction are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, deterministic: bool = True):
+ super().__init__(name="electronic", kind="classify", deterministic=deterministic)
+
+ dot = pynini.accep(".")
+ accepted_common_domains = pynini.union(*common_domains)
+ accepted_symbols = pynini.union(*symbols) - dot
+ accepted_characters = pynini.closure(NEMO_ALPHA | NEMO_DIGIT | accepted_symbols)
+ accepted_characters_with_dot = pynini.closure(NEMO_ALPHA | NEMO_DIGIT | accepted_symbols | dot)
+
+ # email
+ username = (
+ pynutil.insert("username: \"")
+ accepted_characters_with_dot
+ + pynutil.insert("\"")
+ + pynini.cross('@', ' ')
+ )
+ domain_graph = accepted_characters + dot + accepted_characters
+ domain_graph = pynutil.insert("domain: \"") + domain_graph + pynutil.insert("\"")
+ domain_common_graph = (
+ pynutil.insert("domain: \"")
+ + accepted_characters
+ + accepted_common_domains
+ + pynini.closure((accepted_symbols | dot) + pynini.closure(accepted_characters, 1), 0, 1)
+ + pynutil.insert("\"")
+ )
+ graph = (username + domain_graph) | domain_common_graph
+
+ # url
+ protocol_start = pynini.accep("https://") | pynini.accep("http://")
+ protocol_end = (
+ pynini.accep("www.")
+ if deterministic
+ else pynini.accep("www.") | pynini.cross("www.", "doble ve doble ve doble ve.")
+ )
+ protocol = protocol_start | protocol_end | (protocol_start + protocol_end)
+ protocol = pynutil.insert("protocol: \"") + protocol + pynutil.insert("\"")
+ graph |= protocol + insert_space + (domain_graph | domain_common_graph)
+ self.graph = graph
+
+ final_graph = self.add_tokens(self.graph + pynutil.insert(" preserve_order: true"))
+ self.fst = final_graph.optimize()
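Reviewer note: for comparison, the username/domain split performed by the email branch can be sketched with plain string handling (illustrative only; the FST additionally restricts the accepted character set via `data/electronic/symbols.tsv`):

```python
def tag_email(text):
    """Split an address on a single '@' into username and domain,
    requiring at least one internal dot in the domain, roughly like the
    username + domain_graph branch above."""
    if text.count("@") != 1:
        return None
    username, domain = text.split("@")
    if not username or "." not in domain[1:-1]:
        return None
    return {"username": username, "domain": domain}
```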
diff --git a/nemo_text_processing/text_normalization/es/taggers/fraction.py b/nemo_text_processing/text_normalization/es/taggers/fraction.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/taggers/fraction.py
@@ -0,0 +1,124 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import (
+ NEMO_CHAR,
+ NEMO_DIGIT,
+ NEMO_SIGMA,
+ NEMO_SPACE,
+ GraphFst,
+)
+from nemo_text_processing.text_normalization.es.utils import get_abs_path
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ ordinal_exceptions = pynini.string_file(get_abs_path("data/fractions/ordinal_exceptions.tsv"))
+ higher_powers_of_ten = pynini.string_file(get_abs_path("data/fractions/powers_of_ten.tsv"))
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ ordinal_exceptions = None
+ higher_powers_of_ten = None
+
+ PYNINI_AVAILABLE = False
+
+
+class FractionFst(GraphFst):
+ """
+ Finite state transducer for classifying fraction
+ "23 4/5" ->
+ tokens { fraction { integer_part: "veintitrés" numerator: "cuatro" denominator: "quinto" morphosyntactic_features: "ordinal" } }
+
+ Args:
+ cardinal: CardinalFst
+ ordinal: OrdinalFst
+ deterministic: if True will provide a single transduction option,
+ for False multiple transduction are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, cardinal: GraphFst, ordinal: GraphFst, deterministic: bool = True):
+ super().__init__(name="fraction", kind="classify", deterministic=deterministic)
+ cardinal_graph = cardinal.graph
+ ordinal_graph = ordinal.graph
+
+ # 2-10 are all ordinals
+ three_to_ten = pynini.string_map(["2", "3", "4", "5", "6", "7", "8", "9", "10",])
+ block_three_to_ten = pynutil.delete(three_to_ten) # To block cardinal productions
+ if not deterministic: # Multiples of tens are sometimes rendered as ordinals
+ three_to_ten |= pynini.string_map(["20", "30", "40", "50", "60", "70", "80", "90",])
+ graph_three_to_ten = three_to_ten @ ordinal_graph
+ graph_three_to_ten @= pynini.cdrewrite(ordinal_exceptions, "", "", NEMO_SIGMA)
+
+ # Higher powers of tens (and multiples) are converted to ordinals.
+ hundreds = pynini.string_map(["100", "200", "300", "400", "500", "600", "700", "800", "900",])
+ graph_hundreds = hundreds @ ordinal_graph
+
+ multiples_of_thousand = ordinal.multiples_of_thousand # So we can have X milรฉsimos
+
+ graph_higher_powers_of_ten = (
+ pynini.closure(ordinal.one_to_one_thousand + NEMO_SPACE, 0, 1)
+ + pynini.closure("mil ", 0, 1)
+ + pynini.closure(ordinal.one_to_one_thousand + NEMO_SPACE, 0, 1)
+ ) # x millones / x mil millones / x mil z millones
+ graph_higher_powers_of_ten += higher_powers_of_ten
+ graph_higher_powers_of_ten = cardinal_graph @ graph_higher_powers_of_ten
+ graph_higher_powers_of_ten @= pynini.cdrewrite(
+ pynutil.delete("un "), pynini.accep("[BOS]"), pynini.project(higher_powers_of_ten, "output"), NEMO_SIGMA
+ ) # we drop 'un' from these ordinals (millionths, not one-millionths)
+
+ graph_higher_powers_of_ten = multiples_of_thousand | graph_hundreds | graph_higher_powers_of_ten
+ block_higher_powers_of_ten = pynutil.delete(
+ pynini.project(graph_higher_powers_of_ten, "input")
+ ) # For cardinal graph
+
+ graph_fractions_ordinals = graph_higher_powers_of_ten | graph_three_to_ten
+ graph_fractions_ordinals += pynutil.insert(
+ "\" morphosyntactic_features: \"ordinal\""
+ ) # We note the root for processing later
+
+ # Blocking the digits and hundreds from Cardinal graph
+ graph_fractions_cardinals = pynini.cdrewrite(
+ block_three_to_ten | block_higher_powers_of_ten, pynini.accep("[BOS]"), pynini.accep("[EOS]"), NEMO_SIGMA
+ )
+ graph_fractions_cardinals @= NEMO_CHAR.plus @ pynini.cdrewrite(
+ pynutil.delete("0"), pynini.accep("[BOS]"), pynini.accep("[EOS]"), NEMO_SIGMA
+ ) # Empty characters become '0' for NEMO_CHAR fst, so need to block
+ graph_fractions_cardinals @= cardinal_graph
+ graph_fractions_cardinals += pynutil.insert(
+ "\" morphosyntactic_features: \"add_root\""
+ ) # blocking these entries to reduce erroneous possibilities in debugging
+
+ if deterministic:
+ graph_fractions_cardinals = (
+ pynini.closure(NEMO_DIGIT, 1, 2) @ graph_fractions_cardinals
+ ) # Past hundreds the conventional scheme can be hard to read. For determinism we stop here
+
+ graph_denominator = pynini.union(
+ graph_fractions_ordinals,
+ graph_fractions_cardinals,
+ pynutil.add_weight(cardinal_graph + pynutil.insert("\""), 0.001),
+ ) # Last form is simply recording the cardinal. Weighting so last resort
+
+ integer = pynutil.insert("integer_part: \"") + cardinal_graph + pynutil.insert("\"") + NEMO_SPACE
+ numerator = (
+ pynutil.insert("numerator: \"") + cardinal_graph + (pynini.cross("/", "\" ") | pynini.cross(" / ", "\" "))
+ )
+ denominator = pynutil.insert("denominator: \"") + graph_denominator
+
+ self.graph = pynini.closure(integer, 0, 1) + numerator + denominator
+
+ final_graph = self.add_tokens(self.graph)
+ self.fst = final_graph.optimize()
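Reviewer note: to make the denominator classes above concrete, here is a small lookup-table sketch of the 2-10 ordinal denominators after the medio/tercio exceptions are folded in. The table values are assumptions for illustration, not read from the .tsv files:

```python
# Assumed ordinal denominators for 2-10, ordinal_exceptions already applied
DENOMINATORS = {2: "medio", 3: "tercio", 4: "cuarto", 5: "quinto",
                6: "sexto", 7: "séptimo", 8: "octavo", 9: "noveno",
                10: "décimo"}

def tag_simple_fraction(text):
    """Tag a bare 'N/D' fraction with a cardinal numerator and an ordinal
    denominator, the deterministic path for small denominators."""
    numerator, _, denominator = text.partition("/")
    d = int(denominator)
    if d not in DENOMINATORS:
        return None
    return {"numerator": int(numerator), "denominator": DENOMINATORS[d]}
```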
diff --git a/nemo_text_processing/text_normalization/es/taggers/measure.py b/nemo_text_processing/text_normalization/es/taggers/measure.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/taggers/measure.py
@@ -0,0 +1,184 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import (
+ NEMO_ALPHA,
+ NEMO_DIGIT,
+ NEMO_NON_BREAKING_SPACE,
+ NEMO_SPACE,
+ GraphFst,
+ convert_space,
+ delete_space,
+ insert_space,
+)
+from nemo_text_processing.text_normalization.es.graph_utils import strip_cardinal_apocope
+from nemo_text_processing.text_normalization.es.utils import get_abs_path
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ unit = pynini.string_file(get_abs_path("data/measures/measurements.tsv"))
+ unit_plural_fem = pynini.string_file(get_abs_path("data/measures/measurements_plural_fem.tsv"))
+ unit_plural_masc = pynini.string_file(get_abs_path("data/measures/measurements_plural_masc.tsv"))
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ unit = None
+ unit_plural_fem = None
+ unit_plural_masc = None
+
+ PYNINI_AVAILABLE = False
+
+
+class MeasureFst(GraphFst):
+ """
+ Finite state transducer for classifying measure, e.g.
+ "2,4 g" -> measure { cardinal { integer_part: "dos" fractional_part: "cuatro" units: "gramos" preserve_order: true } }
+ "1 g" -> measure { cardinal { integer: "un" units: "gramo" preserve_order: true } }
+ "1 millรณn g" -> measure { cardinal { integer: "un quantity: "millรณn" units: "gramos" preserve_order: true } }
+ e.g. "a-8" โ> "a ocho"
+ e.g. "1,2-a" โ> "uno coma dos a"
+ This class also converts words containing numbers and letters
+ e.g. "a-8" โ> "a ocho"
+ e.g. "1,2-a" โ> "uno coma dos a"
+
+
+ Args:
+ cardinal: CardinalFst
+ decimal: DecimalFst
+ fraction: FractionFst
+ deterministic: if True will provide a single transduction option,
+ for False multiple transduction are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, cardinal: GraphFst, decimal: GraphFst, fraction: GraphFst, deterministic: bool = True):
+ super().__init__(name="measure", kind="classify", deterministic=deterministic)
+ cardinal_graph = cardinal.graph
+
+ unit_singular = unit
+ unit_plural = unit_singular @ (unit_plural_fem | unit_plural_masc)
+
+ graph_unit_singular = convert_space(unit_singular)
+ graph_unit_plural = convert_space(unit_plural)
+
+ optional_graph_negative = pynini.closure("-", 0, 1)
+
+ graph_unit_denominator = (
+ pynini.cross("/", "por") + pynutil.insert(NEMO_NON_BREAKING_SPACE) + graph_unit_singular
+ )
+
+ optional_unit_denominator = pynini.closure(
+ pynutil.insert(NEMO_NON_BREAKING_SPACE) + graph_unit_denominator, 0, 1,
+ )
+
+ unit_plural = (
+ pynutil.insert("units: \"")
+ + ((graph_unit_plural + optional_unit_denominator) | graph_unit_denominator)
+ + pynutil.insert("\"")
+ )
+
+ unit_singular_graph = (
+ pynutil.insert("units: \"")
+ + ((graph_unit_singular + optional_unit_denominator) | graph_unit_denominator)
+ + pynutil.insert("\"")
+ )
+
+ subgraph_decimal = decimal.fst + insert_space + pynini.closure(NEMO_SPACE, 0, 1) + unit_plural
+
+ subgraph_cardinal = (
+ (optional_graph_negative + (pynini.closure(NEMO_DIGIT) - "1")) @ cardinal.fst
+ + insert_space
+ + pynini.closure(delete_space, 0, 1)
+ + unit_plural
+ )
+
+ subgraph_cardinal |= (
+ (optional_graph_negative + pynini.accep("1")) @ cardinal.fst
+ + insert_space
+ + pynini.closure(delete_space, 0, 1)
+ + unit_singular_graph
+ )
+
+ subgraph_fraction = fraction.fst + insert_space + pynini.closure(delete_space, 0, 1) + unit_plural
+
+ decimal_times = (
+ pynutil.insert("decimal { ")
+ + decimal.final_graph_wo_negative
+ + pynutil.insert(" } units: \"")
+ + pynini.union('x', 'X')
+ + pynutil.insert("\"")
+ )
+
+ cardinal_times = (
+ pynutil.insert("cardinal { integer: \"")
+ + strip_cardinal_apocope(cardinal_graph)
+ + pynutil.insert("\" } units: \"")
+ + pynini.union('x', 'X')
+ + pynutil.insert("\"")
+ )
+
+ cardinal_dash_alpha = (
+ pynutil.insert("cardinal { integer: \"")
+ + strip_cardinal_apocope(cardinal_graph)
+ + pynutil.delete('-')
+ + pynutil.insert("\" } units: \"")
+ + pynini.closure(NEMO_ALPHA, 1)
+ + pynutil.insert("\"")
+ )
+
+ decimal_dash_alpha = (
+ pynutil.insert("decimal { ")
+ + decimal.final_graph_wo_negative
+ + pynutil.delete('-')
+ + pynutil.insert(" } units: \"")
+ + pynini.closure(NEMO_ALPHA, 1)
+ + pynutil.insert("\"")
+ )
+
+ alpha_dash_cardinal = (
+ pynutil.insert("units: \"")
+ + pynini.closure(NEMO_ALPHA, 1)
+ + pynutil.delete('-')
+ + pynutil.insert("\"")
+ + pynutil.insert(" cardinal { integer: \"")
+ + cardinal_graph
+ + pynutil.insert("\" } preserve_order: true")
+ )
+
+ alpha_dash_decimal = (
+ pynutil.insert("units: \"")
+ + pynini.closure(NEMO_ALPHA, 1)
+ + pynutil.delete('-')
+ + pynutil.insert("\"")
+ + pynutil.insert(" decimal { ")
+ + decimal.final_graph_wo_negative
+ + pynutil.insert(" } preserve_order: true")
+ )
+
+ final_graph = (
+ subgraph_decimal
+ | subgraph_cardinal
+ | subgraph_fraction
+ | cardinal_dash_alpha
+ | alpha_dash_cardinal
+ | decimal_dash_alpha
+ | decimal_times
+ | cardinal_times
+ | alpha_dash_decimal
+ )
+ final_graph += pynutil.insert(" preserve_order: true")
+ final_graph = self.add_tokens(final_graph)
+
+ self.fst = final_graph.optimize()
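Reviewer note: the singular/plural branching in `subgraph_cardinal` above boils down to one agreement rule, sketched here with a hypothetical two-entry unit table (the real mapping lives in `data/measures/*.tsv`):

```python
# Hypothetical excerpt of the unit tables (abbreviation -> singular, plural)
UNITS = {"g": ("gramo", "gramos"), "km": ("kilómetro", "kilómetros")}

def unit_for(value, unit):
    """Pick the singular unit only for an integer value of exactly 1
    (optionally negative); decimals and every other cardinal take the
    plural, as in subgraph_cardinal."""
    singular, plural = UNITS[unit]
    return singular if value.lstrip("-") == "1" else plural
```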
diff --git a/nemo_text_processing/text_normalization/es/taggers/money.py b/nemo_text_processing/text_normalization/es/taggers/money.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/taggers/money.py
@@ -0,0 +1,194 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import (
+ NEMO_ALPHA,
+ NEMO_DIGIT,
+ NEMO_SIGMA,
+ NEMO_SPACE,
+ GraphFst,
+ delete_space,
+ insert_space,
+)
+from nemo_text_processing.text_normalization.es.graph_utils import decimal_separator
+from nemo_text_processing.text_normalization.es.utils import get_abs_path, load_labels
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ maj_singular_labels = load_labels(get_abs_path("data/money/currency_major.tsv"))
+ maj_singular = pynini.string_file((get_abs_path("data/money/currency_major.tsv")))
+ min_singular = pynini.string_file(get_abs_path("data/money/currency_minor.tsv"))
+ fem_plural = pynini.string_file((get_abs_path("data/money/currency_plural_fem.tsv")))
+ masc_plural = pynini.string_file((get_abs_path("data/money/currency_plural_masc.tsv")))
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ maj_singular_labels = None
+ min_singular = None
+ maj_singular = None
+ fem_plural = None
+ masc_plural = None
+
+ PYNINI_AVAILABLE = False
+
+
+class MoneyFst(GraphFst):
+ """
+ Finite state transducer for classifying money, e.g.
+ "โฌ1" -> money { currency_maj: "euro" integer_part: "un"}
+ "โฌ1,000" -> money { currency_maj: "euro" integer_part: "un" }
+ "โฌ1,001" -> money { currency_maj: "euro" integer_part: "un" fractional_part: "cero cero un" }
+ "ยฃ1,4" -> money { integer_part: "una" currency_maj: "libra" fractional_part: "cuarenta" preserve_order: true }
+ -> money { integer_part: "una" currency_maj: "libra" fractional_part: "cuarenta" currency_min: "penique" preserve_order: true }
+ "0,01 ยฃ" -> money { fractional_part: "un" currency_min: "penique" preserve_order: true }
+ "0,02 ยฃ" -> money { fractional_part: "dos" currency_min: "peniques" preserve_order: true }
+ "ยฃ0,01 million" -> money { currency_maj: "libra" integer_part: "cero" fractional_part: "cero un" quantity: "million" }
+
+ Args:
+ cardinal: CardinalFst
+ decimal: DecimalFst
+ deterministic: if True will provide a single transduction option,
+ for False multiple transduction are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, cardinal: GraphFst, decimal: GraphFst, deterministic: bool = True):
+ super().__init__(name="money", kind="classify", deterministic=deterministic)
+ cardinal_graph = cardinal.graph
+ graph_decimal_final = decimal.final_graph_wo_negative
+
+ maj_singular_graph = maj_singular
+ min_singular_graph = min_singular
+ maj_plural_graph = maj_singular @ (fem_plural | masc_plural)
+ min_plural_graph = min_singular @ (fem_plural | masc_plural)
+
+ graph_maj_singular = pynutil.insert("currency_maj: \"") + maj_singular_graph + pynutil.insert("\"")
+ graph_maj_plural = pynutil.insert("currency_maj: \"") + maj_plural_graph + pynutil.insert("\"")
+
+ graph_integer_one = pynutil.insert("integer_part: \"") + pynini.cross("1", "un") + pynutil.insert("\"")
+
+ decimal_with_quantity = (NEMO_SIGMA + NEMO_ALPHA) @ graph_decimal_final
+
+ graph_decimal_plural = pynini.union(
+ graph_maj_plural + pynini.closure(delete_space, 0, 1) + insert_space + graph_decimal_final, # $1,05
+ graph_decimal_final + pynini.closure(delete_space, 0, 1) + insert_space + graph_maj_plural, # 1,05 $
+ )
+ graph_decimal_plural = (
+ (NEMO_SIGMA - "1") + decimal_separator + NEMO_SIGMA
+ ) @ graph_decimal_plural # Can't have "un euros"
+
+ graph_decimal_singular = pynini.union(
+ graph_maj_singular + pynini.closure(delete_space, 0, 1) + insert_space + graph_decimal_final, # $1,05
+ graph_decimal_final + pynini.closure(delete_space, 0, 1) + insert_space + graph_maj_singular, # 1,05 $
+ )
+ graph_decimal_singular = (pynini.accep("1") + decimal_separator + NEMO_SIGMA) @ graph_decimal_singular
+
+ graph_decimal = pynini.union(
+ graph_decimal_singular,
+ graph_decimal_plural,
+ graph_maj_plural + pynini.closure(delete_space, 0, 1) + insert_space + decimal_with_quantity,
+ )
+
+ graph_integer = (
+ pynutil.insert("integer_part: \"") + ((NEMO_SIGMA - "1") @ cardinal_graph) + pynutil.insert("\"")
+ )
+
+ graph_integer_only = pynini.union(
+ graph_maj_singular + pynini.closure(delete_space, 0, 1) + insert_space + graph_integer_one,
+ graph_integer_one + pynini.closure(delete_space, 0, 1) + insert_space + graph_maj_singular,
+ )
+ graph_integer_only |= pynini.union(
+ graph_maj_plural + pynini.closure(delete_space, 0, 1) + insert_space + graph_integer,
+ graph_integer + pynini.closure(delete_space, 0, 1) + insert_space + graph_maj_plural,
+ )
+
+ graph = graph_integer_only | graph_decimal
+
+ # remove trailing zeros of non zero number in the first 2 digits and fill up to 2 digits
+ # e.g. 2000 -> 20, 0200->02, 01 -> 01, 10 -> 10
+ # not accepted: 002, 00, 0,
+ two_digits_fractional_part = (
+ pynini.closure(NEMO_DIGIT) + (NEMO_DIGIT - "0") + pynini.closure(pynutil.delete("0"))
+ ) @ (
+ (pynutil.delete("0") + (NEMO_DIGIT - "0"))
+ | ((NEMO_DIGIT - "0") + pynutil.insert("0"))
+ | ((NEMO_DIGIT - "0") + NEMO_DIGIT)
+ )
+
+ graph_min_singular = pynutil.insert("currency_min: \"") + min_singular_graph + pynutil.insert("\"")
+ graph_min_plural = pynutil.insert("currency_min: \"") + min_plural_graph + pynutil.insert("\"")
+
+ # format ** euro ** cent
+ decimal_graph_with_minor = None
+ for curr_symbol, _ in maj_singular_labels:
+ preserve_order = pynutil.insert(" preserve_order: true")
+
+ integer_plus_maj = pynini.union(
+ graph_integer + insert_space + pynutil.insert(curr_symbol) @ graph_maj_plural,
+ graph_integer_one + insert_space + pynutil.insert(curr_symbol) @ graph_maj_singular,
+ )
+ # non zero integer part
+ integer_plus_maj = (pynini.closure(NEMO_DIGIT) - "0") @ integer_plus_maj
+
+ graph_fractional_one = (
+ pynutil.insert("fractional_part: \"")
+ + two_digits_fractional_part @ pynini.cross("1", "un")
+ + pynutil.insert("\"")
+ )
+
+ graph_fractional = (
+ two_digits_fractional_part @ (pynini.closure(NEMO_DIGIT, 1, 2) - "1") @ cardinal.two_digit_non_zero
+ )
+ graph_fractional = pynutil.insert("fractional_part: \"") + graph_fractional + pynutil.insert("\"")
+
+ fractional_plus_min = pynini.union(
+ graph_fractional + insert_space + pynutil.insert(curr_symbol) @ graph_min_plural,
+ graph_fractional_one + insert_space + pynutil.insert(curr_symbol) @ graph_min_singular,
+ )
+
+ decimal_graph_with_minor_curr = (
+ integer_plus_maj + pynini.cross(decimal_separator, NEMO_SPACE) + fractional_plus_min
+ )
+ decimal_graph_with_minor_curr |= pynutil.add_weight(
+ integer_plus_maj
+ + pynini.cross(decimal_separator, NEMO_SPACE)
+ + pynutil.insert("fractional_part: \"")
+ + two_digits_fractional_part @ cardinal.two_digit_non_zero
+ + pynutil.insert("\""),
+ weight=0.0001,
+ )
+
+ decimal_graph_with_minor_curr |= pynutil.delete("0,") + fractional_plus_min
+ decimal_graph_with_minor_curr = pynini.union(
+ pynutil.delete(curr_symbol)
+ + pynini.closure(delete_space, 0, 1)
+ + decimal_graph_with_minor_curr
+ + preserve_order,
+ decimal_graph_with_minor_curr
+ + preserve_order
+ + pynini.closure(delete_space, 0, 1)
+ + pynutil.delete(curr_symbol),
+ )
+
+ decimal_graph_with_minor = (
+ decimal_graph_with_minor_curr
+ if decimal_graph_with_minor is None
+ else pynini.union(decimal_graph_with_minor, decimal_graph_with_minor_curr)
+ )
+
+ final_graph = graph | pynutil.add_weight(decimal_graph_with_minor, -0.001)
+
+ final_graph = self.add_tokens(final_graph)
+ self.fst = final_graph.optimize()
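Stepping back from the FST details, the agreement logic this money grammar encodes — an integer part of "1" pairs with the singular major unit, a minor part worth one pairs with the singular minor unit, everything else takes plurals — can be sketched in plain Python. `spanish_money_units` is a name invented for this sketch, and euro/céntimo stand in for whatever `currency_maj`/`currency_min` resolve to:

```python
def spanish_money_units(integer_part: str, fractional_part: str = None):
    """Pick singular vs. plural currency words the way the grammar above
    does: "1" -> "un euro"; a minor part worth one -> "un céntimo"."""
    parts = [(integer_part, "euro" if integer_part == "1" else "euros")]
    if fractional_part is not None:
        # Leading zeros don't count for agreement: "01" is still "un céntimo"
        singular = fractional_part.lstrip("0") == "1"
        parts.append((fractional_part, "céntimo" if singular else "céntimos"))
    return parts
```

This mirrors why `graph_decimal_plural` is restricted with `(NEMO_SIGMA - "1")`: the grammar must never produce "un euros".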
diff --git a/nemo_text_processing/text_normalization/es/taggers/ordinal.py b/nemo_text_processing/text_normalization/es/taggers/ordinal.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/taggers/ordinal.py
@@ -0,0 +1,186 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import (
+ NEMO_CHAR,
+ NEMO_SIGMA,
+ NEMO_SPACE,
+ GraphFst,
+ delete_space,
+)
+from nemo_text_processing.text_normalization.es.graph_utils import roman_to_int, strip_accent
+from nemo_text_processing.text_normalization.es.utils import get_abs_path
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ digit = pynini.invert(pynini.string_file(get_abs_path("data/ordinals/digit.tsv")))
+ teens = pynini.invert(pynini.string_file(get_abs_path("data/ordinals/teen.tsv")))
+ twenties = pynini.invert(pynini.string_file(get_abs_path("data/ordinals/twenties.tsv")))
+ ties = pynini.invert(pynini.string_file(get_abs_path("data/ordinals/ties.tsv")))
+ hundreds = pynini.invert(pynini.string_file(get_abs_path("data/ordinals/hundreds.tsv")))
+
+ PYNINI_AVAILABLE = True
+
+except (ImportError, ModuleNotFoundError):
+ digit = None
+ teens = None
+ twenties = None
+ ties = None
+ hundreds = None
+
+ PYNINI_AVAILABLE = False
+
+
+def get_one_to_one_thousand(cardinal: 'pynini.FstLike') -> 'pynini.FstLike':
+ """
+    Produces an acceptor for verbalizations of all numbers from 1 to 999 (i.e. below one thousand). Needed for ordinals and fractions.
+
+ Args:
+ cardinal: CardinalFst
+
+ Returns:
+ fst: A pynini.FstLike object
+ """
+ numbers = pynini.string_map([str(_) for _ in range(1, 1000)]) @ cardinal
+ return pynini.project(numbers, "output").optimize()
+
+
+class OrdinalFst(GraphFst):
+ """
+ Finite state transducer for classifying ordinal
+    "21.º" -> ordinal { integer: "vigésimo primero" morphosyntactic_features: "gender_masc" }
+    This class converts ordinals up to the millionth (millonésimo) order (exclusive).
+
+ This FST also records the ending of the ordinal (called "morphosyntactic_features"):
+ either as gender_masc, gender_fem, or apocope. Also introduces plural feature for non-deterministic graphs.
+
+ Args:
+ cardinal: CardinalFst
+ deterministic: if True will provide a single transduction option,
+ for False multiple transduction are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, cardinal: GraphFst, deterministic: bool = True):
+ super().__init__(name="ordinal", kind="classify")
+ cardinal_graph = cardinal.graph
+
+ graph_digit = digit.optimize()
+ graph_teens = teens.optimize()
+ graph_ties = ties.optimize()
+ graph_twenties = twenties.optimize()
+ graph_hundreds = hundreds.optimize()
+
+ if not deterministic:
+ # Some alternative derivations
+            graph_ties = graph_ties | pynini.cross("setenta", "setuagésimo")
+
+ graph_teens = graph_teens | pynini.cross("once", "decimoprimero")
+ graph_teens |= pynini.cross("doce", "decimosegundo")
+
+ graph_digit = graph_digit | pynini.cross("nueve", "nono")
+            graph_digit |= pynini.cross("siete", "sétimo")
+
+ graph_tens_component = (
+ graph_teens
+ | (graph_ties + pynini.closure(pynini.cross(" y ", NEMO_SPACE) + graph_digit, 0, 1))
+ | graph_twenties
+ )
+
+ graph_hundred_component = pynini.union(
+ graph_hundreds + pynini.closure(NEMO_SPACE + pynini.union(graph_tens_component, graph_digit), 0, 1),
+ graph_tens_component,
+ graph_digit,
+ )
+
+ # Need to go up to thousands for fractions
+ self.one_to_one_thousand = get_one_to_one_thousand(cardinal_graph)
+
+        thousands = pynini.cross("mil", "milésimo")
+
+ graph_thousands = (
+ strip_accent(self.one_to_one_thousand) + NEMO_SPACE + thousands
+        ) # Cardinals become a prefix for the thousands series. Since the accent falls on the power of ten, we strip accents from the leading words
+        graph_thousands @= pynini.cdrewrite(delete_space, "", "milésimo", NEMO_SIGMA)  # merge as a prefix
+ graph_thousands |= thousands
+
+ self.multiples_of_thousand = (cardinal_graph @ graph_thousands).optimize()
+
+ if (
+ not deterministic
+ ): # Formally the words preceding the power of ten should be a prefix, but some maintain word boundaries.
+ graph_thousands |= (self.one_to_one_thousand @ graph_hundred_component) + NEMO_SPACE + thousands
+
+ graph_thousands += pynini.closure(NEMO_SPACE + graph_hundred_component, 0, 1)
+
+ ordinal_graph = graph_thousands | graph_hundred_component
+ ordinal_graph = cardinal_graph @ ordinal_graph
+
+ if not deterministic:
+ # The 10's and 20's series can also be two words
+            split_words = pynini.cross("decimo", "décimo ") | pynini.cross("vigesimo", "vigésimo ")
+ split_words = pynini.cdrewrite(split_words, "", NEMO_CHAR, NEMO_SIGMA)
+ ordinal_graph |= ordinal_graph @ split_words
+
+        # If "octavo" is preceded by an "o" within the string, that "o" needs deletion
+ ordinal_graph @= pynini.cdrewrite(pynutil.delete("o"), "", "octavo", NEMO_SIGMA)
+
+ self.graph = ordinal_graph.optimize()
+
+ masc = pynini.accep("gender_masc")
+ fem = pynini.accep("gender_fem")
+ apocope = pynini.accep("apocope")
+
+        delete_period = pynini.closure(pynutil.delete("."), 0, 1)  # Sometimes the period is omitted
+
+        accept_masc = delete_period + pynini.cross("º", masc)
+        accep_fem = delete_period + pynini.cross("ª", fem)
+        accep_apocope = delete_period + pynini.cross("ᵉʳ", apocope)
+
+ # Managing Romanization
+ graph_roman = pynutil.insert("integer: \"") + roman_to_int(ordinal_graph) + pynutil.insert("\"")
+ if not deterministic:
+ # Introduce plural
+ plural = pynini.closure(pynutil.insert("/plural"), 0, 1)
+ accept_masc += plural
+ accep_fem += plural
+
+ # Romanizations have no morphology marker, so in non-deterministic case we provide option for all
+ insert_morphology = pynutil.insert(pynini.union(masc, fem)) + plural
+ insert_morphology |= pynutil.insert(apocope)
+ insert_morphology = (
+ pynutil.insert(" morphosyntactic_features: \"") + insert_morphology + pynutil.insert("\"")
+ )
+
+ graph_roman += insert_morphology
+
+ else:
+ # We assume masculine gender as default
+ graph_roman += pynutil.insert(" morphosyntactic_features: \"gender_masc\"")
+
+ # Rest of graph
+ convert_abbreviation = accept_masc | accep_fem | accep_apocope
+
+ graph = (
+ pynutil.insert("integer: \"")
+ + ordinal_graph
+ + pynutil.insert("\"")
+ + pynutil.insert(" morphosyntactic_features: \"")
+ + convert_abbreviation
+ + pynutil.insert("\"")
+ )
+ graph = pynini.union(graph, graph_roman)
+
+ final_graph = self.add_tokens(graph)
+ self.fst = final_graph.optimize()
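One detail of the ordinal grammar above worth isolating is the final `cdrewrite`: when fused forms put an "o" directly before "octavo" (e.g. "vigesimo" + "octavo"), that "o" is deleted so the output reads "vigesimoctavo". A regex sketch of the same rewrite (`fix_octavo` is a hypothetical helper name, not part of the grammar):

```python
import re

def fix_octavo(ordinal: str) -> str:
    """Mirror ordinal.py's cdrewrite(pynutil.delete("o"), "", "octavo"):
    drop an "o" immediately preceding "octavo" in a fused ordinal."""
    return re.sub(r"o(?=octavo)", "", ordinal)
```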
diff --git a/nemo_text_processing/text_normalization/es/taggers/telephone.py b/nemo_text_processing/text_normalization/es/taggers/telephone.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/taggers/telephone.py
@@ -0,0 +1,156 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import NEMO_SIGMA, GraphFst, insert_space
+from nemo_text_processing.text_normalization.es.graph_utils import ones
+from nemo_text_processing.text_normalization.es.utils import get_abs_path
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ graph_digit = pynini.string_file(get_abs_path("data/numbers/digit.tsv"))
+ graph_ties = pynini.string_file(get_abs_path("data/numbers/ties.tsv"))
+ graph_teen = pynini.string_file(get_abs_path("data/numbers/teen.tsv"))
+ graph_twenties = pynini.string_file(get_abs_path("data/numbers/twenties.tsv"))
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ graph_digit = None
+ graph_ties = None
+ graph_teen = None
+ graph_twenties = None
+
+ PYNINI_AVAILABLE = False
+
+
+class TelephoneFst(GraphFst):
+ """
+ Finite state transducer for classifying telephone numbers, e.g.
+ 123-123-5678 -> { number_part: "uno dos tres uno dos tres cinco seis siete ocho" }.
+ In Spanish, digits are generally read individually, or as 2-digit numbers,
+    e.g. "123" = "uno dos tres",
+ "1234" = "doce treinta y cuatro".
+    This will verbalize sequences of 10 (3+3+4, e.g. 123-456-7890),
+    9 (3+3+3, e.g. 123-456-789) and 8 (4+4, e.g. 1234-5678) digits.
+
+ (we ignore more complicated cases such as "doscientos y dos" or "tres nueves").
+
+ Args:
+ deterministic: if True will provide a single transduction option,
+ for False multiple transduction are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, deterministic: bool = True):
+ super().__init__(name="telephone", kind="classify")
+
+ # create `single_digits` and `double_digits` graphs as these will be
+ # the building blocks of possible telephone numbers
+ single_digits = pynini.invert(graph_digit).optimize() | pynini.cross("0", "cero")
+
+ double_digits = pynini.union(
+ graph_twenties,
+ graph_teen,
+ (graph_ties + pynutil.delete("0")),
+ (graph_ties + insert_space + pynutil.insert("y") + insert_space + graph_digit),
+ )
+ double_digits = pynini.invert(double_digits)
+
+ # define `ten_digit_graph`, `nine_digit_graph`, `eight_digit_graph`
+ # which produces telephone numbers spoken (1) only with single digits,
+ # or (2) spoken with double digits (and sometimes single digits)
+
+ # 10-digit option (1): all single digits
+ ten_digit_graph = (
+ pynini.closure(single_digits + insert_space, 3, 3)
+ + pynutil.delete("-")
+ + pynini.closure(single_digits + insert_space, 3, 3)
+ + pynutil.delete("-")
+ + pynini.closure(single_digits + insert_space, 3, 3)
+ + single_digits
+ )
+
+ # 9-digit option (1): all single digits
+ nine_digit_graph = (
+ pynini.closure(single_digits + insert_space, 3, 3)
+ + pynutil.delete("-")
+ + pynini.closure(single_digits + insert_space, 3, 3)
+ + pynutil.delete("-")
+ + pynini.closure(single_digits + insert_space, 2, 2)
+ + single_digits
+ )
+
+ # 8-digit option (1): all single digits
+ eight_digit_graph = (
+ pynini.closure(single_digits + insert_space, 4, 4)
+ + pynutil.delete("-")
+ + pynini.closure(single_digits + insert_space, 3, 3)
+ + single_digits
+ )
+
+ if not deterministic:
+ # 10-digit option (2): (1+2) + (1+2) + (2+2) digits
+ ten_digit_graph |= (
+ single_digits
+ + insert_space
+ + double_digits
+ + insert_space
+ + pynutil.delete("-")
+ + single_digits
+ + insert_space
+ + double_digits
+ + insert_space
+ + pynutil.delete("-")
+ + double_digits
+ + insert_space
+ + double_digits
+ )
+
+ # 9-digit option (2): (1+2) + (1+2) + (1+2) digits
+ nine_digit_graph |= (
+ single_digits
+ + insert_space
+ + double_digits
+ + insert_space
+ + pynutil.delete("-")
+ + single_digits
+ + insert_space
+ + double_digits
+ + insert_space
+ + pynutil.delete("-")
+ + single_digits
+ + insert_space
+ + double_digits
+ )
+
+ # 8-digit option (2): (2+2) + (2+2) digits
+ eight_digit_graph |= (
+ double_digits
+ + insert_space
+ + double_digits
+ + insert_space
+ + pynutil.delete("-")
+ + double_digits
+ + insert_space
+ + double_digits
+ )
+
+ number_part = pynini.union(ten_digit_graph, nine_digit_graph, eight_digit_graph)
+ number_part @= pynini.cdrewrite(pynini.cross(ones, "uno"), "", "", NEMO_SIGMA)
+
+ number_part = pynutil.insert("number_part: \"") + number_part + pynutil.insert("\"")
+
+ graph = number_part
+ final_graph = self.add_tokens(graph)
+ self.fst = final_graph.optimize()
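The single-digit reading that `ten_digit_graph` and its 9- and 8-digit siblings produce is easy to mirror outside pynini. Here is a rough plain-Python equivalent of the deterministic path (hyphens deleted, each digit verbalized; `verbalize_phone` is a name invented for this sketch):

```python
DIGITS = {
    "0": "cero", "1": "uno", "2": "dos", "3": "tres", "4": "cuatro",
    "5": "cinco", "6": "seis", "7": "siete", "8": "ocho", "9": "nueve",
}

def verbalize_phone(number: str) -> str:
    """Read a hyphenated telephone number digit by digit, as in the
    tagger's single-digit option (non-digits such as "-" are dropped)."""
    return " ".join(DIGITS[d] for d in number if d.isdigit())
```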
diff --git a/nemo_text_processing/text_normalization/es/taggers/time.py b/nemo_text_processing/text_normalization/es/taggers/time.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/taggers/time.py
@@ -0,0 +1,218 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import (
+ NEMO_DIGIT,
+ NEMO_SIGMA,
+ GraphFst,
+ delete_space,
+ insert_space,
+)
+from nemo_text_processing.text_normalization.es.utils import get_abs_path
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ time_zone_graph = pynini.string_file(get_abs_path("data/time/time_zone.tsv"))
+ suffix = pynini.string_file(get_abs_path("data/time/time_suffix.tsv"))
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ time_zone_graph = None
+ suffix = None
+
+ PYNINI_AVAILABLE = False
+
+
+class TimeFst(GraphFst):
+ """
+ Finite state transducer for classifying time, e.g.
+ "02:15 est" -> time { hours: "dos" minutes: "quince" zone: "e s t"}
+ "2 h" -> time { hours: "dos" }
+ "9 h" -> time { hours: "nueve" }
+ "02:15:10 h" -> time { hours: "dos" minutes: "quince" seconds: "diez"}
+
+ Args:
+ cardinal: CardinalFst
+ deterministic: if True will provide a single transduction option,
+ for False multiple transduction are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, cardinal: GraphFst, deterministic: bool = True):
+ super().__init__(name="time", kind="classify", deterministic=deterministic)
+
+ delete_time_delimiter = pynutil.delete(pynini.union(".", ":"))
+
+        one = pynini.string_map([("un", "una"), ("ún", "una")])
+ change_one = pynini.cdrewrite(one, "", "", NEMO_SIGMA)
+ cardinal_graph = cardinal.graph @ change_one
+
+ day_suffix = pynutil.insert("suffix: \"") + suffix + pynutil.insert("\"")
+ day_suffix = delete_space + insert_space + day_suffix
+
+ delete_hora_suffix = delete_space + insert_space + pynutil.delete("h")
+ delete_minute_suffix = delete_space + insert_space + pynutil.delete("min")
+ delete_second_suffix = delete_space + insert_space + pynutil.delete("s")
+
+ labels_hour_24 = [
+ str(x) for x in range(0, 25)
+ ] # Can see both systems. Twelve hour requires am/pm for ambiguity resolution
+ labels_hour_12 = [str(x) for x in range(1, 13)]
+ labels_minute_single = [str(x) for x in range(1, 10)]
+ labels_minute_double = [str(x) for x in range(10, 60)]
+
+ delete_leading_zero_to_double_digit = (
+ pynini.closure(pynutil.delete("0") | (NEMO_DIGIT - "0"), 0, 1) + NEMO_DIGIT
+ )
+
+ graph_24 = (
+ pynini.closure(NEMO_DIGIT, 1, 2) @ delete_leading_zero_to_double_digit @ pynini.union(*labels_hour_24)
+ )
+ graph_12 = (
+ pynini.closure(NEMO_DIGIT, 1, 2) @ delete_leading_zero_to_double_digit @ pynini.union(*labels_hour_12)
+ )
+
+ graph_hour_24 = graph_24 @ cardinal_graph
+ graph_hour_12 = graph_12 @ cardinal_graph
+
+ graph_minute_single = delete_leading_zero_to_double_digit @ pynini.union(*labels_minute_single)
+ graph_minute_double = pynini.union(*labels_minute_double)
+
+ graph_minute = pynini.union(graph_minute_single, graph_minute_double) @ cardinal_graph
+
+ final_graph_hour_only_24 = (
+ pynutil.insert("hours: \"") + graph_hour_24 + pynutil.insert("\"") + delete_hora_suffix
+ )
+ final_graph_hour_only_12 = pynutil.insert("hours: \"") + graph_hour_12 + pynutil.insert("\"") + day_suffix
+
+ final_graph_hour_24 = pynutil.insert("hours: \"") + graph_hour_24 + pynutil.insert("\"")
+ final_graph_hour_12 = pynutil.insert("hours: \"") + graph_hour_12 + pynutil.insert("\"")
+
+ final_graph_minute = pynutil.insert("minutes: \"") + graph_minute + pynutil.insert("\"")
+ final_graph_second = pynutil.insert("seconds: \"") + graph_minute + pynutil.insert("\"")
+ final_time_zone_optional = pynini.closure(
+ delete_space + insert_space + pynutil.insert("zone: \"") + time_zone_graph + pynutil.insert("\""), 0, 1,
+ )
+
+ # 02.30 h
+ graph_hm = (
+ final_graph_hour_24
+ + delete_time_delimiter
+ + (pynutil.delete("00") | (insert_space + final_graph_minute))
+ + pynini.closure(
+ delete_time_delimiter + (pynini.cross("00", " seconds: \"0\"") | (insert_space + final_graph_second)),
+ 0,
+ 1,
+ ) # For seconds 2.30.35 h
+ + pynini.closure(delete_hora_suffix, 0, 1) # 2.30 is valid if unambiguous
+ + final_time_zone_optional
+ )
+
+ # 2 h 30 min
+ graph_hm |= (
+ final_graph_hour_24
+ + delete_hora_suffix
+ + delete_space
+ + (pynutil.delete("00") | (insert_space + final_graph_minute))
+ + delete_minute_suffix
+ + pynini.closure(
+ delete_space
+ + (pynini.cross("00", " seconds: \"0\"") | (insert_space + final_graph_second))
+ + delete_second_suffix,
+ 0,
+ 1,
+ ) # For seconds
+ + final_time_zone_optional
+ )
+
+ # 2.30 a. m. (Only for 12 hour clock)
+ graph_hm |= (
+ final_graph_hour_12
+ + delete_time_delimiter
+ + (pynutil.delete("00") | (insert_space + final_graph_minute))
+ + pynini.closure(
+ delete_time_delimiter + (pynini.cross("00", " seconds: \"0\"") | (insert_space + final_graph_second)),
+ 0,
+ 1,
+ ) # For seconds 2.30.35 a. m.
+ + day_suffix
+ + final_time_zone_optional
+ )
+
+ graph_h = (
+ pynini.union(final_graph_hour_only_24, final_graph_hour_only_12) + final_time_zone_optional
+ ) # Should always have a time indicator, else we'll pass to cardinals
+
+ if not deterministic:
+ # This includes alternate vocalization (hour menos min, min para hour), here we shift the times and indicate a `style` tag
+ hour_shift_24 = pynini.invert(pynini.string_file(get_abs_path("data/time/hour_to_24.tsv")))
+ hour_shift_12 = pynini.invert(pynini.string_file(get_abs_path("data/time/hour_to_12.tsv")))
+ minute_shift = pynini.string_file(get_abs_path("data/time/minute_to.tsv"))
+
+ graph_hour_to_24 = graph_24 @ hour_shift_24 @ cardinal_graph
+ graph_hour_to_12 = graph_12 @ hour_shift_12 @ cardinal_graph
+
+ graph_minute_to = pynini.union(graph_minute_single, graph_minute_double) @ minute_shift @ cardinal_graph
+
+ final_graph_hour_to_24 = pynutil.insert("hours: \"") + graph_hour_to_24 + pynutil.insert("\"")
+ final_graph_hour_to_12 = pynutil.insert("hours: \"") + graph_hour_to_12 + pynutil.insert("\"")
+
+ final_graph_minute_to = pynutil.insert("minutes: \"") + graph_minute_to + pynutil.insert("\"")
+
+ graph_menos = pynutil.insert(" style: \"1\"")
+ graph_para = pynutil.insert(" style: \"2\"")
+
+ final_graph_style = graph_menos | graph_para
+
+ # 02.30 h (omitting seconds since a bit awkward)
+ graph_hm |= (
+ final_graph_hour_to_24
+ + delete_time_delimiter
+ + insert_space
+ + final_graph_minute_to
+ + pynini.closure(delete_hora_suffix, 0, 1) # 2.30 is valid if unambiguous
+ + final_time_zone_optional
+ + final_graph_style
+ )
+
+ # 2 h 30 min
+ graph_hm |= (
+ final_graph_hour_to_24
+ + delete_hora_suffix
+ + delete_space
+ + insert_space
+ + final_graph_minute_to
+ + delete_minute_suffix
+ + final_time_zone_optional
+ + final_graph_style
+ )
+
+ # 2.30 a. m. (Only for 12 hour clock)
+ graph_hm |= (
+ final_graph_hour_to_12
+ + delete_time_delimiter
+ + insert_space
+ + final_graph_minute_to
+ + day_suffix
+ + final_time_zone_optional
+ + final_graph_style
+ )
+
+ final_graph = graph_hm | graph_h
+ if deterministic:
+ final_graph = final_graph + pynutil.insert(" preserve_order: true")
+ final_graph = final_graph.optimize()
+ final_graph = self.add_tokens(final_graph)
+ self.fst = final_graph.optimize()
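The non-deterministic branch relies on `hour_to_24.tsv`, `hour_to_12.tsv` and `minute_to.tsv`, which are not shown in this diff; presumably they encode the usual "menos" arithmetic of naming the next hour and counting the minutes remaining. That assumed arithmetic, sketched (`menos_shift` is a hypothetical helper, not part of the grammar):

```python
def menos_shift(hour: int, minute: int):
    """Assumed content of the hour_to/minute_to mappings: 2:40 is read
    against hour 3 with 20 minutes to go ("las tres menos veinte")."""
    assert 0 < minute < 60  # only meaningful past the hour
    return (hour + 1) % 24, 60 - minute
```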
diff --git a/nemo_text_processing/text_normalization/es/taggers/tokenize_and_classify.py b/nemo_text_processing/text_normalization/es/taggers/tokenize_and_classify.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/taggers/tokenize_and_classify.py
@@ -0,0 +1,157 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+
+from nemo_text_processing.text_normalization.en.graph_utils import (
+ NEMO_WHITE_SPACE,
+ GraphFst,
+ delete_extra_space,
+ delete_space,
+ generator_main,
+)
+from nemo_text_processing.text_normalization.en.taggers.punctuation import PunctuationFst
+from nemo_text_processing.text_normalization.es.taggers.cardinal import CardinalFst
+from nemo_text_processing.text_normalization.es.taggers.date import DateFst
+from nemo_text_processing.text_normalization.es.taggers.decimals import DecimalFst
+from nemo_text_processing.text_normalization.es.taggers.electronic import ElectronicFst
+from nemo_text_processing.text_normalization.es.taggers.fraction import FractionFst
+from nemo_text_processing.text_normalization.es.taggers.measure import MeasureFst
+from nemo_text_processing.text_normalization.es.taggers.money import MoneyFst
+from nemo_text_processing.text_normalization.es.taggers.ordinal import OrdinalFst
+from nemo_text_processing.text_normalization.es.taggers.telephone import TelephoneFst
+from nemo_text_processing.text_normalization.es.taggers.time import TimeFst
+from nemo_text_processing.text_normalization.es.taggers.whitelist import WhiteListFst
+from nemo_text_processing.text_normalization.es.taggers.word import WordFst
+
+from nemo.utils import logging
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ PYNINI_AVAILABLE = False
+
+
+class ClassifyFst(GraphFst):
+ """
+    Final class that composes all other classification grammars. This class can process an entire sentence that is lower cased.
+    For deployment, this grammar will be compiled and exported to an OpenFst Finite State aRchive (FAR) file.
+    More details on deployment at NeMo/tools/text_processing_deployment.
+
+ Args:
+ input_case: accepting either "lower_cased" or "cased" input.
+ deterministic: if True will provide a single transduction option,
+ for False multiple options (used for audio-based normalization)
+ cache_dir: path to a dir with .far grammar file. Set to None to avoid using cache.
+ overwrite_cache: set to True to overwrite .far files
+ whitelist: path to a file with whitelist replacements
+ """
+
+ def __init__(
+ self,
+ input_case: str,
+ deterministic: bool = False,
+ cache_dir: str = None,
+ overwrite_cache: bool = False,
+ whitelist: str = None,
+ ):
+ super().__init__(name="tokenize_and_classify", kind="classify", deterministic=deterministic)
+ far_file = None
+ if cache_dir is not None and cache_dir != "None":
+ os.makedirs(cache_dir, exist_ok=True)
+ whitelist_file = os.path.basename(whitelist) if whitelist else ""
+ far_file = os.path.join(
+ cache_dir, f"_{input_case}_es_tn_{deterministic}_deterministic{whitelist_file}.far"
+ )
+ if not overwrite_cache and far_file and os.path.exists(far_file):
+ self.fst = pynini.Far(far_file, mode="r")["tokenize_and_classify"]
+ logging.info(f"ClassifyFst.fst was restored from {far_file}.")
+ else:
+ logging.info(f"Creating ClassifyFst grammars. This might take some time...")
+
+ self.cardinal = CardinalFst(deterministic=deterministic)
+ cardinal_graph = self.cardinal.fst
+
+ self.ordinal = OrdinalFst(cardinal=self.cardinal, deterministic=deterministic)
+ ordinal_graph = self.ordinal.fst
+
+ self.decimal = DecimalFst(cardinal=self.cardinal, deterministic=deterministic)
+ decimal_graph = self.decimal.fst
+
+ self.fraction = FractionFst(cardinal=self.cardinal, ordinal=self.ordinal, deterministic=deterministic)
+ fraction_graph = self.fraction.fst
+ self.measure = MeasureFst(
+ cardinal=self.cardinal, decimal=self.decimal, fraction=self.fraction, deterministic=deterministic
+ )
+ measure_graph = self.measure.fst
+ self.date = DateFst(cardinal=self.cardinal, deterministic=deterministic)
+ date_graph = self.date.fst
+ word_graph = WordFst(deterministic=deterministic).fst
+ self.time = TimeFst(self.cardinal, deterministic=deterministic)
+ time_graph = self.time.fst
+ self.telephone = TelephoneFst(deterministic=deterministic)
+ telephone_graph = self.telephone.fst
+ self.electronic = ElectronicFst(deterministic=deterministic)
+ electronic_graph = self.electronic.fst
+ self.money = MoneyFst(cardinal=self.cardinal, decimal=self.decimal, deterministic=deterministic)
+ money_graph = self.money.fst
+ self.whitelist = WhiteListFst(input_case=input_case, deterministic=deterministic, input_file=whitelist)
+ whitelist_graph = self.whitelist.fst
+ punct_graph = PunctuationFst(deterministic=deterministic).fst
+
+ classify = (
+ pynutil.add_weight(whitelist_graph, 1.01)
+ | pynutil.add_weight(time_graph, 1.09)
+ | pynutil.add_weight(measure_graph, 1.08)
+ | pynutil.add_weight(cardinal_graph, 1.1)
+ | pynutil.add_weight(fraction_graph, 1.09)
+ | pynutil.add_weight(date_graph, 1.1)
+ | pynutil.add_weight(ordinal_graph, 1.1)
+ | pynutil.add_weight(decimal_graph, 1.1)
+ | pynutil.add_weight(money_graph, 1.1)
+ | pynutil.add_weight(telephone_graph, 1.1)
+ | pynutil.add_weight(electronic_graph, 1.1)
+ | pynutil.add_weight(word_graph, 200)
+ )
+ punct = pynutil.insert("tokens { ") + pynutil.add_weight(punct_graph, weight=2.1) + pynutil.insert(" }")
+ punct = pynini.closure(
+ pynini.compose(pynini.closure(NEMO_WHITE_SPACE, 1), delete_extra_space)
+ | (pynutil.insert(" ") + punct),
+ 1,
+ )
+ token = pynutil.insert("tokens { ") + classify + pynutil.insert(" }")
+ token_plus_punct = (
+ pynini.closure(punct + pynutil.insert(" ")) + token + pynini.closure(pynutil.insert(" ") + punct)
+ )
+
+ graph = token_plus_punct + pynini.closure(
+ (
+ pynini.compose(pynini.closure(NEMO_WHITE_SPACE, 1), delete_extra_space)
+ | (pynutil.insert(" ") + punct + pynutil.insert(" "))
+ )
+ + token_plus_punct
+ )
+
+ graph = delete_space + graph + delete_space
+ graph |= punct
+
+ self.fst = graph.optimize()
+
+ if far_file:
+ generator_main(far_file, {"tokenize_and_classify": self.fst})
+ logging.info(f"ClassifyFst grammars are saved to {far_file}.")
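The weights passed to `pynutil.add_weight` above decide which grammar claims a token when several accept it: the lowest-weight (shortest) path wins, and `word_graph`'s weight of 200 makes it the catch-all fallback. A toy model of that selection, with predicates standing in for FST acceptance:

```python
def classify(token: str, taggers: dict) -> str:
    """Pick the lowest-weight tagger that accepts the token, mimicking
    the weighted union in ClassifyFst. `taggers` maps a tagger name to
    an (accepts, weight) pair."""
    candidates = [
        (weight, name)
        for name, (accepts, weight) in taggers.items()
        if accepts(token)
    ]
    return min(candidates)[1]
```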
diff --git a/nemo_text_processing/text_normalization/es/taggers/whitelist.py b/nemo_text_processing/text_normalization/es/taggers/whitelist.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/taggers/whitelist.py
@@ -0,0 +1,69 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import GraphFst, convert_space
+from nemo_text_processing.text_normalization.es.utils import get_abs_path, load_labels
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ PYNINI_AVAILABLE = False
+
+
+class WhiteListFst(GraphFst):
+ """
+ Finite state transducer for classifying whitelist, e.g.
+        "sr." -> tokens { name: "señor" }
+    This class has the highest priority among all classifier grammars. Whitelisted tokens are defined and loaded from "data/whitelist.tsv".
+
+ Args:
+ input_case: accepting either "lower_cased" or "cased" input.
+ deterministic: if True will provide a single transduction option,
+ for False multiple options (used for audio-based normalization)
+ input_file: path to a file with whitelist replacements
+ """
+
+ def __init__(self, input_case: str, deterministic: bool = True, input_file: str = None):
+ super().__init__(name="whitelist", kind="classify", deterministic=deterministic)
+
+ def _get_whitelist_graph(input_case, file):
+ whitelist = load_labels(file)
+ if input_case == "lower_cased":
+ whitelist = [[x[0].lower()] + x[1:] for x in whitelist]
+ graph = pynini.string_map(whitelist)
+ return graph
+
+ graph = _get_whitelist_graph(input_case, get_abs_path("data/whitelist.tsv"))
+ if not deterministic and input_case != "lower_cased":
+ graph |= pynutil.add_weight(
+ _get_whitelist_graph("lower_cased", get_abs_path("data/whitelist.tsv")), weight=0.0001
+ )
+
+ if input_file:
+ whitelist_provided = _get_whitelist_graph(input_case, input_file)
+ if not deterministic:
+ graph |= whitelist_provided
+ else:
+ graph = whitelist_provided
+
+ if not deterministic:
+ units_graph = _get_whitelist_graph(input_case, file=get_abs_path("data/measures/measurements.tsv"))
+ graph |= units_graph
+
+ self.graph = graph
+ self.final_graph = convert_space(self.graph).optimize()
+ self.fst = (pynutil.insert("name: \"") + self.final_graph + pynutil.insert("\"")).optimize()
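A plain-Python sketch (not part of the grammar) of the case handling inside `_get_whitelist_graph` above: when `input_case == "lower_cased"`, only the source side of each label pair is lowercased, while the replacement is left untouched.

```python
def lowercase_sources(whitelist, input_case):
    # Mirrors the list comprehension in _get_whitelist_graph:
    # lowercase only the first (source) column of each TSV row.
    if input_case == "lower_cased":
        return [[row[0].lower()] + row[1:] for row in whitelist]
    return whitelist

labels = [["Sr.", "señor"], ["Dra.", "doctora"]]
lowered = lowercase_sources(labels, "lower_cased")
```

For "cased" input the rows pass through unchanged, which is why the non-deterministic branch adds the lowercased variant only as a weighted alternative.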
diff --git a/nemo_text_processing/text_normalization/es/taggers/word.py b/nemo_text_processing/text_normalization/es/taggers/word.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/taggers/word.py
@@ -0,0 +1,39 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import NEMO_NOT_SPACE, GraphFst
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ PYNINI_AVAILABLE = False
+
+
+class WordFst(GraphFst):
+ """
+ Finite state transducer for classifying word.
+ e.g. dormir -> tokens { name: "dormir" }
+
+ Args:
+ deterministic: if True will provide a single transduction option,
+            for False multiple transductions are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, deterministic: bool = True):
+ super().__init__(name="word", kind="classify")
+ word = pynutil.insert("name: \"") + pynini.closure(NEMO_NOT_SPACE, 1) + pynutil.insert("\"")
+ self.fst = word.optimize()
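The serialization this tagger performs can be mimicked in plain Python (illustrative only; the real FST also handles closure over non-space characters and weighting):

```python
def to_name_token(word):
    # Wrap a whitespace-free word the way WordFst does:
    # insert 'name: "' before it and '"' after it.
    assert " " not in word, "WordFst only accepts tokens without spaces"
    return 'name: "{}"'.format(word)

token = to_name_token("dormir")
```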
diff --git a/nemo_text_processing/text_normalization/es/utils.py b/nemo_text_processing/text_normalization/es/utils.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/utils.py
@@ -0,0 +1,42 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import csv
+import os
+
+
+def get_abs_path(rel_path):
+ """
+ Get absolute path
+
+ Args:
+ rel_path: relative path to this file
+
+ Returns absolute path
+ """
+ return os.path.dirname(os.path.abspath(__file__)) + '/' + rel_path
+
+
+def load_labels(abs_path):
+ """
+    loads a TSV file at the given absolute path as a list of label mappings
+
+    Args:
+        abs_path: absolute path to the file
+
+    Returns list of mappings
+ """
+    with open(abs_path) as label_tsv:
+        labels = list(csv.reader(label_tsv, delimiter="\t"))
+    return labels
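A self-contained demonstration of the `load_labels` semantics, writing a temporary TSV first (this sketch uses a context manager, so the file handle is also closed):

```python
import csv
import os
import tempfile

def load_labels(abs_path):
    # Same row format as the helper above: one list per TSV line.
    with open(abs_path) as label_tsv:
        return list(csv.reader(label_tsv, delimiter="\t"))

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "whitelist.tsv")
    with open(path, "w") as f:
        f.write("sr.\tseñor\ndra.\tdoctora\n")
    labels = load_labels(path)
```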
diff --git a/nemo_text_processing/text_normalization/es/verbalizers/__init__.py b/nemo_text_processing/text_normalization/es/verbalizers/__init__.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/verbalizers/__init__.py
@@ -0,0 +1,13 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/nemo_text_processing/text_normalization/es/verbalizers/cardinal.py b/nemo_text_processing/text_normalization/es/verbalizers/cardinal.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/verbalizers/cardinal.py
@@ -0,0 +1,57 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import NEMO_NOT_QUOTE, GraphFst
+from nemo_text_processing.text_normalization.es.graph_utils import shift_cardinal_gender, strip_cardinal_apocope
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ PYNINI_AVAILABLE = False
+
+
+class CardinalFst(GraphFst):
+ """
+ Finite state transducer for verbalizing cardinals
+ e.g. cardinal { integer: "dos" } -> "dos"
+
+ Args:
+ deterministic: if True will provide a single transduction option,
+            for False multiple transductions are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, deterministic: bool = True):
+ super().__init__(name="cardinal", kind="verbalize", deterministic=deterministic)
+ optional_sign = pynini.closure(pynini.cross("negative: \"true\" ", "menos "), 0, 1)
+ self.optional_sign = optional_sign
+
+ integer = pynini.closure(NEMO_NOT_QUOTE, 1)
+ self.integer = pynutil.delete(" \"") + integer + pynutil.delete("\"")
+
+ integer = pynutil.delete("integer:") + self.integer
+ self.numbers = integer
+ graph = optional_sign + self.numbers
+
+ if not deterministic:
+ # For alternate renderings
+ no_adjust = graph
+ fem_adjust = shift_cardinal_gender(graph)
+ apocope_adjust = strip_cardinal_apocope(graph)
+ graph = no_adjust | fem_adjust | apocope_adjust
+
+ delete_tokens = self.delete_tokens(graph)
+ self.fst = delete_tokens.optimize()
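`strip_cardinal_apocope` is an FST helper from `es/graph_utils.py`; a rough plain-Python stand-in for what it does (the mapping here is a small assumed sample, not the full table):

```python
import re

APOCOPE = {"veintiún": "veintiuno", "un": "uno"}  # assumed sample entries

def strip_cardinal_apocope(text):
    # Restore full cardinal forms at word boundaries, e.g. "un" -> "uno".
    # Longest-first alternation avoids partial matches.
    pattern = r"\b(?:%s)\b" % "|".join(sorted(APOCOPE, key=len, reverse=True))
    return re.sub(pattern, lambda m: APOCOPE[m.group(0)], text)

restored = strip_cardinal_apocope("veintiún euros")
```

The word-boundary anchors keep feminine forms like "una" untouched, which is why the verbalizer can union the adjusted and unadjusted renderings safely.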
diff --git a/nemo_text_processing/text_normalization/es/verbalizers/date.py b/nemo_text_processing/text_normalization/es/verbalizers/date.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/verbalizers/date.py
@@ -0,0 +1,86 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import (
+ NEMO_NOT_QUOTE,
+ NEMO_SIGMA,
+ NEMO_SPACE,
+ GraphFst,
+ delete_preserve_order,
+)
+from nemo_text_processing.text_normalization.es.graph_utils import strip_cardinal_apocope
+from nemo_text_processing.text_normalization.es.taggers.date import articles
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ PYNINI_AVAILABLE = False
+
+
+class DateFst(GraphFst):
+ """
+ Finite state transducer for verbalizing date, e.g.
+ date { day: "treinta y uno" month: "marzo" year: "dos mil" } -> "treinta y uno de marzo de dos mil"
+ date { day: "uno" month: "mayo" year: "del mil novecientos noventa" } -> "primero de mayo del mil novecientos noventa"
+
+ Args:
+ deterministic: if True will provide a single transduction option,
+            for False multiple transductions are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, deterministic: bool = True):
+ super().__init__(name="date", kind="verbalize", deterministic=deterministic)
+
+ day_cardinal = pynutil.delete("day: \"") + pynini.closure(NEMO_NOT_QUOTE, 1) + pynutil.delete("\"")
+ day = strip_cardinal_apocope(day_cardinal)
+
+ primero = pynini.cdrewrite(pynini.cross("uno", "primero"), "[BOS]", "[EOS]", NEMO_SIGMA)
+ day = (
+ (day @ primero) if deterministic else pynini.union(day, day @ primero)
+ ) # Primero for first day is traditional, but will vary depending on region
+
+ month = pynutil.delete("month: \"") + pynini.closure(NEMO_NOT_QUOTE, 1) + pynutil.delete("\"")
+
+ year = (
+ pynutil.delete("year: \"")
+ + articles
+ + NEMO_SPACE
+ + pynini.closure(NEMO_NOT_QUOTE, 1)
+ + pynutil.delete("\"")
+ )
+
+        # Insert the preposition if it wasn't originally attached to the year (i.e. a space was present)
+ year = pynutil.add_weight(year, -0.001)
+ year |= (
+ pynutil.delete("year: \"")
+ + pynutil.insert("de ")
+ + pynini.closure(NEMO_NOT_QUOTE, 1)
+ + pynutil.delete("\"")
+ )
+
+ # day month year
+ graph_dmy = day + pynini.cross(NEMO_SPACE, " de ") + month + pynini.closure(pynini.accep(" ") + year, 0, 1)
+
+ graph_mdy = month + NEMO_SPACE + day + pynini.closure(NEMO_SPACE + year, 0, 1)
+ if deterministic:
+            graph_mdy += pynutil.delete(" preserve_order: true")  # Only accepted if explicitly passed
+
+ self.graph = graph_dmy | graph_mdy
+ final_graph = self.graph + delete_preserve_order
+
+ delete_tokens = self.delete_tokens(final_graph)
+ self.fst = delete_tokens.optimize()
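The `primero` rewrite above is anchored at `[BOS]` and `[EOS]`, i.e. it only fires when the day string is exactly "uno"; in plain Python the rule amounts to:

```python
def verbalize_day(day):
    # "uno" alone becomes the ordinal "primero"; compounds like
    # "treinta y uno" stay cardinal, matching the BOS/EOS anchors.
    return "primero" if day == "uno" else day

dmy = "{} de {} de {}".format(verbalize_day("uno"), "mayo", "dos mil")
```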
diff --git a/nemo_text_processing/text_normalization/es/verbalizers/decimals.py b/nemo_text_processing/text_normalization/es/verbalizers/decimals.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/verbalizers/decimals.py
@@ -0,0 +1,87 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import (
+ NEMO_NOT_QUOTE,
+ GraphFst,
+ delete_preserve_order,
+ delete_space,
+ insert_space,
+)
+from nemo_text_processing.text_normalization.es import LOCALIZATION
+from nemo_text_processing.text_normalization.es.graph_utils import (
+ shift_cardinal_gender,
+ shift_number_gender,
+ strip_cardinal_apocope,
+)
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ PYNINI_AVAILABLE = False
+
+
+class DecimalFst(GraphFst):
+ """
+    Finite state transducer for verbalizing decimal, e.g.
+    decimal { negative: "true" integer_part: "dos" fractional_part: "cuatro cero" quantity: "billones" } -> menos dos coma cuatro cero billones
+    decimal { integer_part: "un" quantity: "billón" } -> un billón
+
+ Args:
+ deterministic: if True will provide a single transduction option,
+            for False multiple transductions are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, deterministic: bool = True):
+        super().__init__(name="decimal", kind="verbalize", deterministic=deterministic)
+
+ self.optional_sign = pynini.closure(pynini.cross("negative: \"true\"", "menos ") + delete_space, 0, 1)
+ self.integer = pynutil.delete("integer_part: \"") + pynini.closure(NEMO_NOT_QUOTE, 1) + pynutil.delete("\"")
+ self.fractional_default = (
+ pynutil.delete("fractional_part: \"") + pynini.closure(NEMO_NOT_QUOTE, 1) + pynutil.delete("\"")
+ )
+
+ conjunction = pynutil.insert(" punto ") if LOCALIZATION == "am" else pynutil.insert(" coma ")
+ if not deterministic:
+ conjunction |= pynutil.insert(pynini.union(" con ", " y "))
+ self.fractional_default |= strip_cardinal_apocope(self.fractional_default)
+ self.fractional = conjunction + self.fractional_default
+
+ self.quantity = (
+ delete_space
+ + insert_space
+ + pynutil.delete("quantity: \"")
+ + pynini.closure(NEMO_NOT_QUOTE, 1)
+ + pynutil.delete("\"")
+ )
+ self.optional_quantity = pynini.closure(self.quantity, 0, 1)
+
+ graph = self.optional_sign + pynini.union(
+ (self.integer + self.quantity), (self.integer + delete_space + self.fractional + self.optional_quantity)
+ )
+
+ self.numbers = graph.optimize()
+ self.numbers_no_quantity = self.integer + delete_space + self.fractional + self.optional_quantity
+
+ if not deterministic:
+ graph |= self.optional_sign + (
+ shift_cardinal_gender(self.integer + delete_space) + shift_number_gender(self.fractional)
+ )
+
+ graph += delete_preserve_order
+ delete_tokens = self.delete_tokens(graph)
+ self.fst = delete_tokens.optimize()
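The locale switch on the decimal separator can be summarized in plain Python (`LOCALIZATION` comes from `text_normalization/es/__init__.py`; the values used here are assumptions based on the conjunction choice above):

```python
def verbalize_decimal(integer_part, fractional_part, localization="es"):
    # "am" (Latin American) locales read the separator as "punto",
    # otherwise "coma", mirroring the conjunction choice in DecimalFst.
    sep = " punto " if localization == "am" else " coma "
    return integer_part + sep + fractional_part

castilian = verbalize_decimal("dos", "cuatro cero")
latam = verbalize_decimal("dos", "cuatro cero", localization="am")
```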
diff --git a/nemo_text_processing/text_normalization/es/verbalizers/electronic.py b/nemo_text_processing/text_normalization/es/verbalizers/electronic.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/verbalizers/electronic.py
@@ -0,0 +1,91 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import (
+ NEMO_NOT_QUOTE,
+ NEMO_SIGMA,
+ GraphFst,
+ delete_preserve_order,
+ insert_space,
+)
+from nemo_text_processing.text_normalization.es.utils import get_abs_path
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ digit_no_zero = pynini.invert(pynini.string_file(get_abs_path("data/numbers/digit.tsv")))
+ zero = pynini.invert(pynini.string_file(get_abs_path("data/numbers/zero.tsv")))
+
+ graph_symbols = pynini.string_file(get_abs_path("data/electronic/symbols.tsv"))
+ server_common = pynini.string_file(get_abs_path("data/electronic/server_name.tsv"))
+ domain_common = pynini.string_file(get_abs_path("data/electronic/domain.tsv"))
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ digit_no_zero = None
+ zero = None
+
+ graph_symbols = None
+ server_common = None
+ domain_common = None
+
+ PYNINI_AVAILABLE = False
+
+
+class ElectronicFst(GraphFst):
+ """
+ Finite state transducer for verbalizing electronic
+ e.g. electronic { username: "abc" domain: "hotmail.com" } -> "a b c arroba hotmail punto com"
+ -> "a b c arroba h o t m a i l punto c o m"
+ -> "a b c arroba hotmail punto c o m"
+ -> "a b c at h o t m a i l punto com"
+ Args:
+ deterministic: if True will provide a single transduction option,
+            for False multiple transductions are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, deterministic: bool = True):
+ super().__init__(name="electronic", kind="verbalize", deterministic=deterministic)
+
+ graph_digit_no_zero = (
+ digit_no_zero @ pynini.cdrewrite(pynini.cross("un", "uno"), "", "", NEMO_SIGMA).optimize()
+ )
+ graph_digit = graph_digit_no_zero | zero
+
+ def add_space_after_char():
+ return pynini.closure(NEMO_NOT_QUOTE - pynini.accep(" ") + insert_space) + (
+ NEMO_NOT_QUOTE - pynini.accep(" ")
+ )
+
+ verbalize_characters = pynini.cdrewrite(graph_symbols | graph_digit, "", "", NEMO_SIGMA)
+
+ user_name = pynutil.delete("username: \"") + add_space_after_char() + pynutil.delete("\"")
+ user_name @= verbalize_characters
+
+ convert_defaults = pynutil.add_weight(NEMO_NOT_QUOTE, weight=0.0001) | domain_common | server_common
+ domain = convert_defaults + pynini.closure(insert_space + convert_defaults)
+ domain @= verbalize_characters
+
+ domain = pynutil.delete("domain: \"") + domain + pynutil.delete("\"")
+ protocol = (
+ pynutil.delete("protocol: \"")
+ + add_space_after_char() @ pynini.cdrewrite(graph_symbols, "", "", NEMO_SIGMA)
+ + pynutil.delete("\"")
+ )
+ self.graph = (pynini.closure(protocol + pynini.accep(" "), 0, 1) + domain) | (
+ user_name + pynini.accep(" ") + pynutil.insert("arroba ") + domain
+ )
+ delete_tokens = self.delete_tokens(self.graph + delete_preserve_order)
+ self.fst = delete_tokens.optimize()
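`add_space_after_char` composed with `verbalize_characters` amounts to spelling a string out character by character, with symbols replaced by their spoken names; a plain-Python sketch (the symbol table is a small assumed sample):

```python
SYMBOLS = {".": "punto", "-": "guión", "_": "guión bajo"}  # assumed sample

def spell_out(text):
    # One spoken unit per character, space-separated.
    return " ".join(SYMBOLS.get(ch, ch) for ch in text)

username = spell_out("abc")
```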
diff --git a/nemo_text_processing/text_normalization/es/verbalizers/fraction.py b/nemo_text_processing/text_normalization/es/verbalizers/fraction.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/verbalizers/fraction.py
@@ -0,0 +1,184 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import (
+ NEMO_CHAR,
+ NEMO_NOT_QUOTE,
+ NEMO_NOT_SPACE,
+ NEMO_SIGMA,
+ NEMO_SPACE,
+ GraphFst,
+ delete_space,
+ insert_space,
+)
+from nemo_text_processing.text_normalization.es.graph_utils import (
+ accents,
+ shift_cardinal_gender,
+ strip_cardinal_apocope,
+)
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ PYNINI_AVAILABLE = False
+
+
+class FractionFst(GraphFst):
+ """
+ Finite state transducer for verbalizing fraction
+ e.g. tokens { fraction { integer: "treinta y tres" numerator: "cuatro" denominator: "quinto" } } ->
+ treinta y tres y cuatro quintos
+
+
+ Args:
+ deterministic: if True will provide a single transduction option,
+            for False multiple transductions are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, deterministic: bool = True):
+ super().__init__(name="fraction", kind="verbalize", deterministic=deterministic)
+
+ # Derivational strings append 'avo' as a suffix. Adding space for processing aid
+ fraction_stem = pynutil.insert(" avo")
+ plural = pynutil.insert("s")
+
+ integer = (
+ pynutil.delete("integer_part: \"")
+ + strip_cardinal_apocope(pynini.closure(NEMO_NOT_QUOTE))
+ + pynutil.delete("\"")
+ )
+
+ numerator_one = pynutil.delete("numerator: \"") + pynini.accep("un") + pynutil.delete("\" ")
+ numerator = (
+ pynutil.delete("numerator: \"")
+ + pynini.difference(pynini.closure(NEMO_NOT_QUOTE), "un")
+ + pynutil.delete("\" ")
+ )
+
+ denominator_add_stem = pynutil.delete("denominator: \"") + (
+ pynini.closure(NEMO_NOT_QUOTE)
+ + fraction_stem
+ + pynutil.delete("\" morphosyntactic_features: \"add_root\"")
+ )
+ denominator_ordinal = pynutil.delete("denominator: \"") + (
+ pynini.closure(NEMO_NOT_QUOTE) + pynutil.delete("\" morphosyntactic_features: \"ordinal\"")
+ )
+ denominator_cardinal = pynutil.delete("denominator: \"") + (
+ pynini.closure(NEMO_NOT_QUOTE) + pynutil.delete("\"")
+ )
+
+ denominator_singular = pynini.union(denominator_add_stem, denominator_ordinal)
+ denominator_plural = denominator_singular + plural
+
+ if not deterministic:
+ # Occasional exceptions
+ denominator_singular |= denominator_add_stem @ pynini.string_map(
+                [("once avo", "undécimo"), ("doce avo", "duodécimo")]
+ )
+
+ # Merging operations
+ merge = pynini.cdrewrite(
+ pynini.cross(" y ", "i"), "", "", NEMO_SIGMA
+ ) # The denominator must be a single word, with the conjunction "y" replaced by i
+ merge @= pynini.cdrewrite(delete_space, "", pynini.difference(NEMO_CHAR, "parte"), NEMO_SIGMA)
+
+ # The merger can produce duplicate vowels. This is not allowed in orthography
+        delete_duplicates = pynini.string_map([("aa", "a"), ("oo", "o")])  # Removes doubled vowels
+ delete_duplicates = pynini.cdrewrite(delete_duplicates, "", "", NEMO_SIGMA)
+
+ remove_accents = pynini.cdrewrite(
+ accents,
+ pynini.union(NEMO_SPACE, pynini.accep("[BOS]")) + pynini.closure(NEMO_NOT_SPACE),
+            pynini.closure(NEMO_NOT_SPACE) + pynini.union("avo", "ava", "ésimo", "ésima"),
+ NEMO_SIGMA,
+ )
+ merge_into_single_word = merge @ remove_accents @ delete_duplicates
+
+ fraction_default = numerator + delete_space + insert_space + (denominator_plural @ merge_into_single_word)
+ fraction_with_one = (
+ numerator_one + delete_space + insert_space + (denominator_singular @ merge_into_single_word)
+ )
+
+ fraction_with_cardinal = strip_cardinal_apocope(numerator | numerator_one)
+ fraction_with_cardinal += (
+ delete_space + pynutil.insert(" sobre ") + strip_cardinal_apocope(denominator_cardinal)
+ )
+
+ conjunction = pynutil.insert(" y ")
+
+ if not deterministic:
+ # There is an alternative rendering where ordinals act as adjectives for 'parte'. This requires use of the feminine
+ # Other rules will manage use of "un" at end, so just worry about endings
+ exceptions = pynini.string_map([("tercia", "tercera")])
+ apply_exceptions = pynini.cdrewrite(exceptions, "", "", NEMO_SIGMA)
+ vowel_change = pynini.cdrewrite(pynini.cross("o", "a"), "", pynini.accep("[EOS]"), NEMO_SIGMA)
+
+ denominator_singular_fem = shift_cardinal_gender(denominator_singular) @ vowel_change @ apply_exceptions
+ denominator_plural_fem = denominator_singular_fem + plural
+
+ numerator_one_fem = shift_cardinal_gender(numerator_one)
+ numerator_fem = shift_cardinal_gender(numerator)
+
+ fraction_with_cardinal |= (
+ (numerator_one_fem | numerator_fem)
+ + delete_space
+ + pynutil.insert(" sobre ")
+ + shift_cardinal_gender(denominator_cardinal)
+ )
+
+ # Still need to manage stems
+ merge_stem = pynini.cdrewrite(
+ delete_space, "", pynini.union("avo", "ava", "avos", "avas"), NEMO_SIGMA
+ ) # For managing alternative spacing
+ merge_stem @= remove_accents @ delete_duplicates
+
+ fraction_with_one_fem = numerator_one_fem + delete_space + insert_space
+ fraction_with_one_fem += pynini.union(
+ denominator_singular_fem @ merge_stem, denominator_singular_fem @ merge_into_single_word
+            ) # Both forms exist
+ fraction_with_one_fem @= pynini.cdrewrite(pynini.cross("una media", "media"), "", "", NEMO_SIGMA)
+ fraction_with_one_fem += pynutil.insert(" parte")
+
+ fraction_default_fem = numerator_fem + delete_space + insert_space
+ fraction_default_fem += pynini.union(
+ denominator_plural_fem @ merge_stem, denominator_plural_fem @ merge_into_single_word
+ )
+ fraction_default_fem += pynutil.insert(" partes")
+
+ fraction_default |= (
+ numerator + delete_space + insert_space + denominator_plural @ merge_stem
+ ) # Case of no merger
+ fraction_default |= fraction_default_fem
+
+ fraction_with_one |= numerator_one + delete_space + insert_space + denominator_singular @ merge_stem
+ fraction_with_one |= fraction_with_one_fem
+
+ # Integers are influenced by dominant noun, need to allow feminine forms as well
+ integer |= shift_cardinal_gender(integer)
+
+ # Remove 'un medio'
+ fraction_with_one @= pynini.cdrewrite(pynini.cross("un medio", "medio"), "", "", NEMO_SIGMA)
+
+ integer = pynini.closure(integer + delete_space + conjunction, 0, 1)
+
+ fraction = fraction_with_one | fraction_default | fraction_with_cardinal
+
+ graph = integer + fraction
+
+ self.graph = graph
+ delete_tokens = self.delete_tokens(self.graph)
+ self.fst = delete_tokens.optimize()
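The `merge_into_single_word` pipeline (replace " y " with "i", delete the space, then collapse doubled vowels) can be approximated in plain Python; accent removal and the "parte" exception are deliberately omitted in this sketch:

```python
def merge_denominator(text):
    # " y " -> "i", drop remaining spaces, then collapse the "aa"/"oo"
    # sequences the merger can create (orthography disallows them).
    text = text.replace(" y ", "i")
    text = text.replace(" ", "")
    return text.replace("aa", "a").replace("oo", "o")

thirtieths = merge_denominator("treinta avos")
sixty_fourths = merge_denominator("sesenta y cuatro avos")
```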
diff --git a/nemo_text_processing/text_normalization/es/verbalizers/measure.py b/nemo_text_processing/text_normalization/es/verbalizers/measure.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/verbalizers/measure.py
@@ -0,0 +1,110 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import (
+ NEMO_NOT_QUOTE,
+ NEMO_SIGMA,
+ NEMO_SPACE,
+ NEMO_WHITE_SPACE,
+ GraphFst,
+ delete_extra_space,
+ delete_preserve_order,
+)
+from nemo_text_processing.text_normalization.es.graph_utils import ones, shift_cardinal_gender
+from nemo_text_processing.text_normalization.es.utils import get_abs_path
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ unit_plural_fem = pynini.string_file(get_abs_path("data/measures/measurements_plural_fem.tsv"))
+ unit_plural_masc = pynini.string_file(get_abs_path("data/measures/measurements_plural_masc.tsv"))
+
+ unit_singular_fem = pynini.project(unit_plural_fem, "input")
+ unit_singular_masc = pynini.project(unit_plural_masc, "input")
+
+ unit_plural_fem = pynini.project(unit_plural_fem, "output")
+ unit_plural_masc = pynini.project(unit_plural_masc, "output")
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ unit_plural_fem = None
+ unit_plural_masc = None
+
+ unit_singular_fem = None
+ unit_singular_masc = None
+
+ PYNINI_AVAILABLE = False
+
+
+class MeasureFst(GraphFst):
+ """
+ Finite state transducer for verbalizing measure, e.g.
+ measure { cardinal { integer: "dos" units: "gramos" } } -> "dos gramos"
+ measure { cardinal { integer_part: "dos" quantity: "millones" units: "gramos" } } -> "dos millones de gramos"
+
+ Args:
+ decimal: DecimalFst
+ cardinal: CardinalFst
+ fraction: FractionFst
+ deterministic: if True will provide a single transduction option,
+            for False multiple transductions are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, decimal: GraphFst, cardinal: GraphFst, fraction: GraphFst, deterministic: bool):
+ super().__init__(name="measure", kind="verbalize", deterministic=deterministic)
+
+ graph_decimal = decimal.fst
+ graph_cardinal = cardinal.fst
+ graph_fraction = fraction.fst
+
+ unit_masc = (unit_plural_masc | unit_singular_masc) + pynini.closure(
+ NEMO_WHITE_SPACE + "por" + pynini.closure(NEMO_NOT_QUOTE, 1), 0, 1
+ )
+ unit_masc |= "por" + pynini.closure(NEMO_NOT_QUOTE, 1)
+ unit_masc = pynutil.delete("units: \"") + (pynini.closure(NEMO_NOT_QUOTE) @ unit_masc) + pynutil.delete("\"")
+
+ unit_fem = (unit_plural_fem | unit_singular_fem) + pynini.closure(
+ NEMO_WHITE_SPACE + "por" + pynini.closure(NEMO_NOT_QUOTE, 1), 0, 1
+ )
+ unit_fem = pynutil.delete("units: \"") + (pynini.closure(NEMO_NOT_QUOTE) @ unit_fem) + pynutil.delete("\"")
+
+ graph_masc = (graph_cardinal | graph_decimal | graph_fraction) + NEMO_WHITE_SPACE + unit_masc
+ graph_fem = (
+ shift_cardinal_gender(graph_cardinal | graph_decimal | graph_fraction) + NEMO_WHITE_SPACE + unit_fem
+ )
+ graph = graph_masc | graph_fem
+
+ graph = (
+ pynini.cdrewrite(
+ pynutil.insert(" de"), "quantity: \"" + pynini.closure(NEMO_NOT_QUOTE, 1), "\"", NEMO_SIGMA
+ )
+ @ graph
+ ) # billones de xyz
+
+ graph @= pynini.cdrewrite(pynini.cross(ones, "uno"), "", NEMO_WHITE_SPACE + "por", NEMO_SIGMA)
+
+        # To manage alphanumeric combinations ("a-8, 5x"), we let them use a weighted default path.
+ alpha_num_unit = pynutil.delete("units: \"") + pynini.closure(NEMO_NOT_QUOTE) + pynutil.delete("\"")
+ graph_alpha_num = pynini.union(
+ (graph_cardinal | graph_decimal) + NEMO_SPACE + alpha_num_unit,
+ alpha_num_unit + delete_extra_space + (graph_cardinal | graph_decimal),
+ )
+
+ graph |= pynutil.add_weight(graph_alpha_num, 0.01)
+
+ graph += delete_preserve_order
+
+ delete_tokens = self.delete_tokens(graph)
+ self.fst = delete_tokens.optimize()
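`shift_cardinal_gender` makes the number agree with a feminine unit; a dictionary-based stand-in (the entries are assumed examples, not the full mapping):

```python
FEM_FORMS = {"un": "una", "uno": "una", "doscientos": "doscientas"}  # sample

def shift_cardinal_gender(number_words):
    # Word-by-word swap to feminine forms where a mapping exists;
    # unmapped words (e.g. "dos") pass through unchanged.
    return " ".join(FEM_FORMS.get(w, w) for w in number_words.split())

masc = "doscientos gramos"                       # masculine unit: no shift
fem = shift_cardinal_gender("doscientos") + " toneladas"
```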
diff --git a/nemo_text_processing/text_normalization/es/verbalizers/money.py b/nemo_text_processing/text_normalization/es/verbalizers/money.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/verbalizers/money.py
@@ -0,0 +1,195 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import (
+ NEMO_NOT_QUOTE,
+ NEMO_SIGMA,
+ NEMO_SPACE,
+ GraphFst,
+ delete_preserve_order,
+)
+from nemo_text_processing.text_normalization.es.graph_utils import (
+ shift_cardinal_gender,
+ shift_number_gender,
+ strip_cardinal_apocope,
+)
+from nemo_text_processing.text_normalization.es.utils import get_abs_path
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ fem = pynini.string_file((get_abs_path("data/money/currency_plural_fem.tsv")))
+ masc = pynini.string_file((get_abs_path("data/money/currency_plural_masc.tsv")))
+
+ fem_singular = pynini.project(fem, "input")
+ masc_singular = pynini.project(masc, "input")
+
+ fem_plural = pynini.project(fem, "output")
+ masc_plural = pynini.project(masc, "output")
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ fem_plural = None
+ masc_plural = None
+
+ fem_singular = None
+ masc_singular = None
+
+ PYNINI_AVAILABLE = False
+
+
+class MoneyFst(GraphFst):
+ """
+ Finite state transducer for verbalizing money, e.g.
+ money { currency_maj: "euro" integer_part: "un"} -> "un euro"
+ money { currency_maj: "euro" integer_part: "un" fractional_part: "cero cero un"} -> "uno coma cero cero uno euros"
+ money { integer_part: "un" currency_maj: "libra" fractional_part: "cuarenta" preserve_order: true} -> "una libra cuarenta"
+ money { integer_part: "un" currency_maj: "libra" fractional_part: "cuarenta" currency_min: "peniques" preserve_order: true} -> "una libra con cuarenta peniques"
+ money { fractional_part: "un" currency_min: "penique" preserve_order: true} -> "un penique"
+
+ Args:
+ decimal: GraphFst
+ deterministic: if True will provide a single transduction option,
+            for False multiple transductions are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, decimal: GraphFst, deterministic: bool = True):
+ super().__init__(name="money", kind="verbalize", deterministic=deterministic)
+
+ maj_singular_masc = (
+ pynutil.delete("currency_maj: \"")
+ + (pynini.closure(NEMO_NOT_QUOTE, 1) @ masc_singular)
+ + pynutil.delete("\"")
+ )
+ maj_singular_fem = (
+ pynutil.delete("currency_maj: \"")
+ + (pynini.closure(NEMO_NOT_QUOTE, 1) @ fem_singular)
+ + pynutil.delete("\"")
+ )
+
+ maj_plural_masc = (
+ pynutil.delete("currency_maj: \"")
+ + (pynini.closure(NEMO_NOT_QUOTE, 1) @ masc_plural)
+ + pynutil.delete("\"")
+ )
+ maj_plural_fem = (
+ pynutil.delete("currency_maj: \"")
+ + (pynini.closure(NEMO_NOT_QUOTE, 1) @ fem_plural)
+ + pynutil.delete("\"")
+ )
+
+ maj_masc = maj_plural_masc | maj_singular_masc # Tagger kept quantity resolution stable
+ maj_fem = maj_plural_fem | maj_singular_fem
+
+ min_singular_masc = (
+ pynutil.delete("currency_min: \"")
+ + (pynini.closure(NEMO_NOT_QUOTE, 1) @ masc_singular)
+ + pynutil.delete("\"")
+ )
+ min_singular_fem = (
+ pynutil.delete("currency_min: \"")
+ + (pynini.closure(NEMO_NOT_QUOTE, 1) @ fem_singular)
+ + pynutil.delete("\"")
+ )
+
+ min_plural_masc = (
+ pynutil.delete("currency_min: \"")
+ + (pynini.closure(NEMO_NOT_QUOTE, 1) @ masc_plural)
+ + pynutil.delete("\"")
+ )
+ min_plural_fem = (
+ pynutil.delete("currency_min: \"")
+ + (pynini.closure(NEMO_NOT_QUOTE, 1) @ fem_plural)
+ + pynutil.delete("\"")
+ )
+
+ min_masc = min_plural_masc | min_singular_masc
+ min_fem = min_plural_fem | min_singular_fem
+
+ fractional_part = (
+ pynutil.delete("fractional_part: \"") + pynini.closure(NEMO_NOT_QUOTE, 1) + pynutil.delete("\"")
+ )
+
+ integer_part = pynutil.delete("integer_part: \"") + pynini.closure(NEMO_NOT_QUOTE, 1) + pynutil.delete("\"")
+ optional_add_and = pynini.closure(pynutil.insert(pynini.union("con ", "y ")), 0, 1)
+
+ # *** currency_maj
+ graph_integer_masc = integer_part + NEMO_SPACE + maj_masc
+ graph_integer_fem = shift_cardinal_gender(integer_part) + NEMO_SPACE + maj_fem
+ graph_integer = graph_integer_fem | graph_integer_masc
+
+ # *** currency_maj + (***) | ((con) *** current_min)
+ graph_integer_with_minor_masc = (
+ integer_part
+ + NEMO_SPACE
+ + maj_masc
+ + NEMO_SPACE
+ + pynini.union(
+ optional_add_and + strip_cardinal_apocope(fractional_part),
+ (optional_add_and + fractional_part + NEMO_SPACE + min_masc),
+ (optional_add_and + shift_cardinal_gender(fractional_part) + NEMO_SPACE + min_fem),
+            ) # the minor currency may have a different gender
+ + delete_preserve_order
+ )
+
+ graph_integer_with_minor_fem = (
+ shift_cardinal_gender(integer_part)
+ + NEMO_SPACE
+ + maj_fem
+ + NEMO_SPACE
+ + pynini.union(
+ optional_add_and + shift_cardinal_gender(fractional_part),
+ (optional_add_and + fractional_part + NEMO_SPACE + min_masc),
+ (optional_add_and + shift_cardinal_gender(fractional_part) + NEMO_SPACE + min_fem),
+            ) # the minor currency may have a different gender
+ + delete_preserve_order
+ )
+
+ graph_integer_with_minor = graph_integer_with_minor_fem | graph_integer_with_minor_masc
+
+ # *** coma *** currency_maj
+ graph_decimal_masc = decimal.numbers + NEMO_SPACE + maj_masc
+
+ # Need to fix some of the inner parts, so don't use decimal here (note: quantities covered by masc)
+ graph_decimal_fem = (
+ pynini.accep("integer_part: \"")
+ + shift_cardinal_gender(pynini.closure(NEMO_NOT_QUOTE, 1))
+ + pynini.accep("\"")
+ + NEMO_SPACE
+ + pynini.accep("fractional_part: \"")
+ + shift_number_gender(pynini.closure(NEMO_NOT_QUOTE, 1))
+ + pynini.accep("\"")
+ + NEMO_SIGMA
+ )
+ graph_decimal_fem @= decimal.numbers_no_quantity
+ graph_decimal_fem += NEMO_SPACE + maj_fem
+
+ graph_decimal = graph_decimal_fem | graph_decimal_masc
+ graph_decimal = (
+ pynini.cdrewrite(
+ pynutil.insert(" de"), "quantity: \"" + pynini.closure(NEMO_NOT_QUOTE, 1), "\"", NEMO_SIGMA
+ )
+ @ graph_decimal
+ ) # formally it's millones/billones de ***
+
+ # *** current_min
+ graph_minor_masc = fractional_part + NEMO_SPACE + min_masc + delete_preserve_order
+ graph_minor_fem = shift_cardinal_gender(fractional_part) + NEMO_SPACE + min_fem + delete_preserve_order
+ graph_minor = graph_minor_fem | graph_minor_masc
+
+ graph = graph_integer | graph_integer_with_minor | graph_decimal | graph_minor
+
+ delete_tokens = self.delete_tokens(graph)
+ self.fst = delete_tokens.optimize()
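The serialized token format these verbalizers consume can be opaque on first read. Purely as an illustration (plain Python, not a NeMo or pynini API; `parse_money_token` and `verbalize_simple` are hypothetical names), this sketch mimics the simple integer-plus-major-currency path (`graph_integer`) from the docstring above:

```python
import re

# Toy stand-in for the graph_integer path: pull the quoted fields out of a
# serialized token and emit integer_part followed by currency_maj.
def parse_money_token(token: str) -> dict:
    return dict(re.findall(r'(\w+): "([^"]*)"', token))

def verbalize_simple(token: str) -> str:
    fields = parse_money_token(token)
    return f'{fields["integer_part"]} {fields["currency_maj"]}'

print(verbalize_simple('money { currency_maj: "euro" integer_part: "un" }'))  # un euro
```

The real grammar additionally handles gender agreement, minor currencies, and `preserve_order`, none of which this sketch attempts.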
diff --git a/nemo_text_processing/text_normalization/es/verbalizers/ordinal.py b/nemo_text_processing/text_normalization/es/verbalizers/ordinal.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/verbalizers/ordinal.py
@@ -0,0 +1,76 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import NEMO_NOT_QUOTE, NEMO_SIGMA, NEMO_SPACE, GraphFst
+from nemo_text_processing.text_normalization.es.graph_utils import shift_number_gender
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ PYNINI_AVAILABLE = False
+
+
+class OrdinalFst(GraphFst):
+ """
+ Finite state transducer for verbalizing ordinals
+    e.g. ordinal { integer: "tercer" } -> "tercero"
+ -> "tercera"
+ -> "tercer"
+
+ Args:
+ deterministic: if True will provide a single transduction option,
+            for False multiple transductions are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, deterministic: bool = True):
+ super().__init__(name="ordinal", kind="verbalize", deterministic=deterministic)
+
+ graph = pynutil.delete("integer: \"") + pynini.closure(NEMO_NOT_QUOTE, 1) + pynutil.delete("\"")
+
+        # masculine gender is left as is
+ graph_masc = graph + pynutil.delete(" morphosyntactic_features: \"gender_masc")
+
+ # shift gender
+ graph_fem_ending = graph @ pynini.cdrewrite(
+ pynini.cross("o", "a"), "", NEMO_SPACE | pynini.accep("[EOS]"), NEMO_SIGMA
+ )
+ graph_fem = shift_number_gender(graph_fem_ending) + pynutil.delete(" morphosyntactic_features: \"gender_fem")
+
+ # Apocope just changes tercero and primero. May occur if someone wrote 11.er (uncommon)
+ graph_apocope = (
+ pynini.cross("tercero", "tercer")
+ | pynini.cross("primero", "primer")
+            | pynini.cross("undécimo", "decimoprimer")
+ ) # In case someone wrote 11.er with deterministic
+ graph_apocope = (graph @ pynini.cdrewrite(graph_apocope, "", "", NEMO_SIGMA)) + pynutil.delete(
+ " morphosyntactic_features: \"apocope"
+ )
+
+ graph = graph_apocope | graph_masc | graph_fem
+
+ if not deterministic:
+ # Plural graph
+ graph_plural = pynini.cdrewrite(
+ pynutil.insert("s"), pynini.union("o", "a"), NEMO_SPACE | pynini.accep("[EOS]"), NEMO_SIGMA
+ )
+
+ graph |= (graph @ graph_plural) + pynutil.delete("/plural")
+
+ self.graph = graph + pynutil.delete("\"")
+
+ delete_tokens = self.delete_tokens(self.graph)
+ self.fst = delete_tokens.optimize()
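The feminine branch above relies on a context-dependent rewrite (`pynini.cdrewrite`) that turns a word-final "o" into "a" before a space or end of string. A plain-Python analogue of just that rewrite (`feminize_ending` is a hypothetical helper; the accent handling done by `shift_number_gender` is omitted):

```python
import re

# Mirrors the cdrewrite rule: rewrite "o" to "a" only when followed by a
# space or the end of the string, i.e. only at word-final position.
def feminize_ending(text: str) -> str:
    return re.sub(r"o(?=\s|$)", "a", text)

print(feminize_ending("vigesimo tercero"))  # vigesima tercera
```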
diff --git a/nemo_text_processing/text_normalization/es/verbalizers/telephone.py b/nemo_text_processing/text_normalization/es/verbalizers/telephone.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/verbalizers/telephone.py
@@ -0,0 +1,42 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import NEMO_NOT_QUOTE, GraphFst
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ PYNINI_AVAILABLE = False
+
+
+class TelephoneFst(GraphFst):
+ """
+ Finite state transducer for verbalizing telephone, e.g.
+ telephone { number_part: "uno dos tres uno dos tres cinco seis siete ocho" }
+ -> uno dos tres uno dos tres cinco seis siete ocho
+
+ Args:
+ deterministic: if True will provide a single transduction option,
+            for False multiple transductions are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, deterministic: bool = True):
+        super().__init__(name="telephone", kind="verbalize", deterministic=deterministic)
+
+ number_part = pynutil.delete("number_part: \"") + pynini.closure(NEMO_NOT_QUOTE, 1) + pynutil.delete("\"")
+ delete_tokens = self.delete_tokens(number_part)
+ self.fst = delete_tokens.optimize()
diff --git a/nemo_text_processing/text_normalization/es/verbalizers/time.py b/nemo_text_processing/text_normalization/es/verbalizers/time.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/verbalizers/time.py
@@ -0,0 +1,269 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import (
+ NEMO_NOT_QUOTE,
+ NEMO_SIGMA,
+ GraphFst,
+ delete_preserve_order,
+ delete_space,
+ insert_space,
+)
+from nemo_text_processing.text_normalization.es.utils import get_abs_path
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ alt_minutes = pynini.string_file(get_abs_path("data/time/alt_minutes.tsv"))
+
+ morning_times = pynini.string_file(get_abs_path("data/time/morning_times.tsv"))
+ afternoon_times = pynini.string_file(get_abs_path("data/time/afternoon_times.tsv"))
+ evening_times = pynini.string_file(get_abs_path("data/time/evening_times.tsv"))
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ alt_minutes = None
+
+ morning_times = None
+ afternoon_times = None
+ evening_times = None
+
+ PYNINI_AVAILABLE = False
+
+
+class TimeFst(GraphFst):
+ """
+ Finite state transducer for verbalizing time, e.g.
+ time { hours: "doce" minutes: "media" suffix: "a m" } -> doce y media de la noche
+    time { hours: "doce" } -> doce
+
+ Args:
+ deterministic: if True will provide a single transduction option,
+            for False multiple transductions are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, deterministic: bool = True):
+ super().__init__(name="time", kind="verbalize", deterministic=deterministic)
+
+ change_minutes = pynini.cdrewrite(alt_minutes, pynini.accep("[BOS]"), pynini.accep("[EOS]"), NEMO_SIGMA)
+
+        morning_phrases = pynini.cross("am", "de la mañana")
+ afternoon_phrases = pynini.cross("pm", "de la tarde")
+ evening_phrases = pynini.cross("pm", "de la noche")
+
+ # For the 12's
+ mid_times = pynini.accep("doce")
+ mid_phrases = (
+            pynini.string_map([("pm", "del mediodía"), ("am", "de la noche")])
+ if deterministic
+ else pynini.string_map(
+ [
+                    ("pm", "de la mañana"),
+                    ("pm", "del día"),
+                    ("pm", "del mediodía"),
+ ("am", "de la noche"),
+ ("am", "de la medianoche"),
+ ]
+ )
+ )
+
+ hour = (
+ pynutil.delete("hours:")
+ + delete_space
+ + pynutil.delete("\"")
+ + pynini.closure(NEMO_NOT_QUOTE, 1)
+ + pynutil.delete("\"")
+ )
+ minute = (
+ pynutil.delete("minutes:")
+ + delete_space
+ + pynutil.delete("\"")
+ + pynini.closure(NEMO_NOT_QUOTE, 1)
+ + pynutil.delete("\"")
+ )
+ minute = (minute @ change_minutes) if deterministic else pynini.union(minute, minute @ change_minutes)
+
+ suffix = (
+ pynutil.delete("suffix:")
+ + delete_space
+ + pynutil.delete("\"")
+ + pynini.closure(NEMO_NOT_QUOTE, 1)
+ + pynutil.delete("\"")
+ )
+ zone = (
+ pynutil.delete("zone:")
+ + delete_space
+ + pynutil.delete("\"")
+ + pynini.closure(NEMO_NOT_QUOTE, 1)
+ + pynutil.delete("\"")
+ )
+ optional_zone = pynini.closure(delete_space + insert_space + zone, 0, 1)
+ second = (
+ pynutil.delete("seconds:")
+ + delete_space
+ + pynutil.delete("\"")
+ + pynini.closure(NEMO_NOT_QUOTE, 1)
+ + pynutil.delete("\"")
+ )
+
+ graph_hms = (
+ hour
+ + pynutil.insert(" horas ")
+ + delete_space
+ + minute
+ + pynutil.insert(" minutos y ")
+ + delete_space
+ + second
+ + pynutil.insert(" segundos")
+ )
+
+ graph_hm = hour + delete_space + pynutil.insert(" y ") + minute
+ graph_hm |= pynini.union(
+ (hour @ morning_times)
+ + delete_space
+ + pynutil.insert(" y ")
+ + minute
+ + delete_space
+ + insert_space
+ + (suffix @ morning_phrases),
+ (hour @ afternoon_times)
+ + delete_space
+ + pynutil.insert(" y ")
+ + minute
+ + delete_space
+ + insert_space
+ + (suffix @ afternoon_phrases),
+ (hour @ evening_times)
+ + delete_space
+ + pynutil.insert(" y ")
+ + minute
+ + delete_space
+ + insert_space
+ + (suffix @ evening_phrases),
+ (hour @ mid_times)
+ + delete_space
+ + pynutil.insert(" y ")
+ + minute
+ + delete_space
+ + insert_space
+ + (suffix @ mid_phrases),
+ )
+
+ graph_h = pynini.union(
+ hour,
+ (hour @ morning_times) + delete_space + insert_space + (suffix @ morning_phrases),
+ (hour @ afternoon_times) + delete_space + insert_space + (suffix @ afternoon_phrases),
+ (hour @ evening_times) + delete_space + insert_space + (suffix @ evening_phrases),
+ (hour @ mid_times) + delete_space + insert_space + (suffix @ mid_phrases),
+ )
+
+ graph = (graph_hms | graph_hm | graph_h) + optional_zone
+
+ if not deterministic:
+ graph_style_1 = pynutil.delete(" style: \"1\"")
+ graph_style_2 = pynutil.delete(" style: \"2\"")
+
+ graph_menos = hour + delete_space + pynutil.insert(" menos ") + minute + graph_style_1
+ graph_menos |= (
+ (hour @ morning_times)
+ + delete_space
+ + pynutil.insert(" menos ")
+ + minute
+ + delete_space
+ + insert_space
+ + (suffix @ morning_phrases)
+ + graph_style_1
+ )
+ graph_menos |= (
+ (hour @ afternoon_times)
+ + delete_space
+ + pynutil.insert(" menos ")
+ + minute
+ + delete_space
+ + insert_space
+ + (suffix @ afternoon_phrases)
+ + graph_style_1
+ )
+ graph_menos |= (
+ (hour @ evening_times)
+ + delete_space
+ + pynutil.insert(" menos ")
+ + minute
+ + delete_space
+ + insert_space
+ + (suffix @ evening_phrases)
+ + graph_style_1
+ )
+ graph_menos |= (
+ (hour @ mid_times)
+ + delete_space
+ + pynutil.insert(" menos ")
+ + minute
+ + delete_space
+ + insert_space
+ + (suffix @ mid_phrases)
+ + graph_style_1
+ )
+ graph_menos += optional_zone
+
+ graph_para = minute + pynutil.insert(" para las ") + delete_space + hour + graph_style_2
+ graph_para |= (
+ minute
+ + pynutil.insert(" para las ")
+ + delete_space
+ + (hour @ morning_times)
+ + delete_space
+ + insert_space
+ + (suffix @ morning_phrases)
+ + graph_style_2
+ )
+ graph_para |= (
+ minute
+ + pynutil.insert(" para las ")
+ + delete_space
+ + (hour @ afternoon_times)
+ + delete_space
+ + insert_space
+ + (suffix @ afternoon_phrases)
+ + graph_style_2
+ )
+ graph_para |= (
+ minute
+ + pynutil.insert(" para las ")
+ + delete_space
+ + (hour @ evening_times)
+ + delete_space
+ + insert_space
+ + (suffix @ evening_phrases)
+ + graph_style_2
+ )
+ graph_para |= (
+ minute
+ + pynutil.insert(" para las ")
+ + delete_space
+ + (hour @ mid_times)
+ + delete_space
+ + insert_space
+ + (suffix @ mid_phrases)
+ + graph_style_2
+ )
+ graph_para += optional_zone
+ graph_para @= pynini.cdrewrite(
+ pynini.cross(" las ", " la "), "para", "una", NEMO_SIGMA
+ ) # Need agreement with one
+
+ graph |= graph_menos | graph_para
+ delete_tokens = self.delete_tokens(graph + delete_preserve_order)
+ self.fst = delete_tokens.optimize()
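The "menos" alternative above verbalizes times past the half hour subtractively, e.g. 7:40 as "ocho menos veinte". Condensing the tagger and verbalizer stages into one toy function (the number table is a small illustrative sample and `menos_style` is a hypothetical name, not part of the grammar):

```python
# Subtractive reading used by the non-deterministic "menos" branch:
# H:MM with MM > 30 reads as (H+1) "menos" (60 - MM).
NUM = {7: "siete", 8: "ocho", 15: "cuarto", 20: "veinte", 40: "cuarenta"}

def menos_style(hour: int, minute: int) -> str:
    if minute <= 30:
        raise ValueError("subtractive reading applies past the half hour")
    return f"{NUM[hour % 12 + 1]} menos {NUM[60 - minute]}"

print(menos_style(7, 40))  # ocho menos veinte
```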
diff --git a/nemo_text_processing/text_normalization/es/verbalizers/verbalize.py b/nemo_text_processing/text_normalization/es/verbalizers/verbalize.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/verbalizers/verbalize.py
@@ -0,0 +1,73 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import GraphFst
+from nemo_text_processing.text_normalization.en.verbalizers.whitelist import WhiteListFst
+from nemo_text_processing.text_normalization.es.verbalizers.cardinal import CardinalFst
+from nemo_text_processing.text_normalization.es.verbalizers.date import DateFst
+from nemo_text_processing.text_normalization.es.verbalizers.decimals import DecimalFst
+from nemo_text_processing.text_normalization.es.verbalizers.electronic import ElectronicFst
+from nemo_text_processing.text_normalization.es.verbalizers.fraction import FractionFst
+from nemo_text_processing.text_normalization.es.verbalizers.measure import MeasureFst
+from nemo_text_processing.text_normalization.es.verbalizers.money import MoneyFst
+from nemo_text_processing.text_normalization.es.verbalizers.ordinal import OrdinalFst
+from nemo_text_processing.text_normalization.es.verbalizers.telephone import TelephoneFst
+from nemo_text_processing.text_normalization.es.verbalizers.time import TimeFst
+
+
+class VerbalizeFst(GraphFst):
+ """
+ Composes other verbalizer grammars.
+    For deployment, this grammar will be compiled and exported to an OpenFst Finite State Archive (FAR) file.
+    More details on deployment at NeMo/tools/text_processing_deployment.
+
+ Args:
+ deterministic: if True will provide a single transduction option,
+ for False multiple options (used for audio-based normalization)
+ """
+
+ def __init__(self, deterministic: bool = True):
+ super().__init__(name="verbalize", kind="verbalize", deterministic=deterministic)
+ cardinal = CardinalFst(deterministic=deterministic)
+ cardinal_graph = cardinal.fst
+ ordinal = OrdinalFst(deterministic=deterministic)
+ ordinal_graph = ordinal.fst
+ decimal = DecimalFst(deterministic=deterministic)
+ decimal_graph = decimal.fst
+ fraction = FractionFst(deterministic=deterministic)
+ fraction_graph = fraction.fst
+ date = DateFst(deterministic=deterministic)
+ date_graph = date.fst
+ measure = MeasureFst(cardinal=cardinal, decimal=decimal, fraction=fraction, deterministic=deterministic)
+ measure_graph = measure.fst
+ electronic = ElectronicFst(deterministic=deterministic)
+ electronic_graph = electronic.fst
+ whitelist_graph = WhiteListFst(deterministic=deterministic).fst
+ money_graph = MoneyFst(decimal=decimal, deterministic=deterministic).fst
+ telephone_graph = TelephoneFst(deterministic=deterministic).fst
+ time_graph = TimeFst(deterministic=deterministic).fst
+
+ graph = (
+ cardinal_graph
+ | measure_graph
+ | decimal_graph
+ | ordinal_graph
+ | date_graph
+ | electronic_graph
+ | money_graph
+ | fraction_graph
+ | whitelist_graph
+ | telephone_graph
+ | time_graph
+ )
+ self.fst = graph
diff --git a/nemo_text_processing/text_normalization/es/verbalizers/verbalize_final.py b/nemo_text_processing/text_normalization/es/verbalizers/verbalize_final.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/verbalizers/verbalize_final.py
@@ -0,0 +1,52 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import GraphFst, delete_extra_space, delete_space
+from nemo_text_processing.text_normalization.en.verbalizers.word import WordFst
+from nemo_text_processing.text_normalization.es.verbalizers.verbalize import VerbalizeFst
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ PYNINI_AVAILABLE = False
+
+
+class VerbalizeFinalFst(GraphFst):
+ """
+ Finite state transducer that verbalizes an entire sentence
+
+ Args:
+ deterministic: if True will provide a single transduction option,
+ for False multiple options (used for audio-based normalization)
+ """
+
+ def __init__(self, deterministic: bool = True):
+ super().__init__(name="verbalize_final", kind="verbalize", deterministic=deterministic)
+ verbalize = VerbalizeFst(deterministic=deterministic).fst
+ word = WordFst(deterministic=deterministic).fst
+ types = verbalize | word
+ graph = (
+ pynutil.delete("tokens")
+ + delete_space
+ + pynutil.delete("{")
+ + delete_space
+ + types
+ + delete_space
+ + pynutil.delete("}")
+ )
+ graph = delete_space + pynini.closure(graph + delete_extra_space) + graph + delete_space
+ self.fst = graph
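At this outer level the grammar only strips the `tokens { ... }` wrappers and normalizes the spacing between the verbalized pieces. A regex sketch of that envelope step (the inner bodies are assumed to be already verbalized; `strip_token_wrappers` is an illustrative name):

```python
import re

# Drop the "tokens { ... }" envelopes and join the inner, already
# verbalized pieces with single spaces.
def strip_token_wrappers(serialized: str) -> str:
    inner = re.findall(r"tokens\s*\{\s*(.*?)\s*\}", serialized)
    return " ".join(inner)

print(strip_token_wrappers("tokens { treinta } tokens { euros }"))  # treinta euros
```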
diff --git a/nemo_text_processing/text_normalization/normalize.py b/nemo_text_processing/text_normalization/normalize.py
--- a/nemo_text_processing/text_normalization/normalize.py
+++ b/nemo_text_processing/text_normalization/normalize.py
@@ -46,8 +46,8 @@
class Normalizer:
"""
- Normalizer class that converts text from written to spoken form.
- Useful for TTS preprocessing.
+ Normalizer class that converts text from written to spoken form.
+ Useful for TTS preprocessing.
Args:
input_case: expected input capitalization
@@ -83,10 +83,11 @@ def __init__(
from nemo_text_processing.text_normalization.ru.taggers.tokenize_and_classify import ClassifyFst
from nemo_text_processing.text_normalization.ru.verbalizers.verbalize_final import VerbalizeFinalFst
elif lang == 'de':
- # Ru TN only support non-deterministic cases and produces multiple normalization options
- # use normalize_with_audio.py
from nemo_text_processing.text_normalization.de.taggers.tokenize_and_classify import ClassifyFst
from nemo_text_processing.text_normalization.de.verbalizers.verbalize_final import VerbalizeFinalFst
+ elif lang == 'es':
+ from nemo_text_processing.text_normalization.es.taggers.tokenize_and_classify import ClassifyFst
+ from nemo_text_processing.text_normalization.es.verbalizers.verbalize_final import VerbalizeFinalFst
self.tagger = ClassifyFst(
input_case=input_case,
deterministic=deterministic,
@@ -106,7 +107,7 @@ def __init__(
def normalize_list(self, texts: List[str], verbose=False, punct_post_process: bool = False) -> List[str]:
"""
- NeMo text normalizer
+ NeMo text normalizer
Args:
texts: list of input strings
@@ -357,7 +358,7 @@ def select_verbalizer(self, lattice: 'pynini.FstLike') -> str:
def parse_args():
parser = ArgumentParser()
parser.add_argument("input_string", help="input string", type=str)
- parser.add_argument("--language", help="language", choices=["en", "de"], default="en", type=str)
+ parser.add_argument("--language", help="language", choices=["en", "de", "es"], default="en", type=str)
parser.add_argument(
"--input_case", help="input capitalization", choices=["lower_cased", "cased"], default="cased", type=str
)
diff --git a/nemo_text_processing/text_normalization/normalize_with_audio.py b/nemo_text_processing/text_normalization/normalize_with_audio.py
--- a/nemo_text_processing/text_normalization/normalize_with_audio.py
+++ b/nemo_text_processing/text_normalization/normalize_with_audio.py
@@ -55,15 +55,15 @@
"audio_data" - path to the audio file
"text" - raw text
"pred_text" - ASR model prediction
-
+
See https://github.com/NVIDIA/NeMo/blob/main/examples/asr/transcribe_speech.py on how to add ASR predictions
-
+
When the manifest is ready, run:
python normalize_with_audio.py \
--audio_data PATH/TO/MANIFEST.JSON \
- --language en
-
-
+ --language en
+
+
To run with a single audio file, specify path to audio and text with:
python normalize_with_audio.py \
--audio_data PATH/TO/AUDIO.WAV \
@@ -71,18 +71,18 @@
--text raw text OR PATH/TO/.TXT/FILE
--model QuartzNet15x5Base-En \
--verbose
-
+
To see possible normalization options for a text input without an audio file (could be used for debugging), run:
python python normalize_with_audio.py --text "RAW TEXT"
-
+
Specify `--cache_dir` to generate .far grammars once and re-used them for faster inference
"""
class NormalizerWithAudio(Normalizer):
"""
- Normalizer class that converts text from written to spoken form.
- Useful for TTS preprocessing.
+ Normalizer class that converts text from written to spoken form.
+ Useful for TTS preprocessing.
Args:
input_case: expected input capitalization
@@ -282,7 +282,7 @@ def parse_args():
"--input_case", help="input capitalization", choices=["lower_cased", "cased"], default="cased", type=str
)
parser.add_argument(
- "--language", help="Select target language", choices=["en", "ru", "de"], default="en", type=str
+ "--language", help="Select target language", choices=["en", "ru", "de", "es"], default="en", type=str
)
parser.add_argument("--audio_data", default=None, help="path to an audio file or .json manifest")
parser.add_argument(
diff --git a/tools/text_processing_deployment/pynini_export.py b/tools/text_processing_deployment/pynini_export.py
--- a/tools/text_processing_deployment/pynini_export.py
+++ b/tools/text_processing_deployment/pynini_export.py
@@ -67,7 +67,7 @@ def tn_grammars(**kwargs):
def export_grammars(output_dir, grammars):
"""
- Exports tokenizer_and_classify and verbalize Fsts as OpenFst finite state archive (FAR) files.
+ Exports tokenizer_and_classify and verbalize Fsts as OpenFst finite state archive (FAR) files.
Args:
output_dir: directory to export FAR files to. Subdirectories will be created for tagger and verbalizer respectively.
@@ -109,7 +109,7 @@ def parse_args():
if __name__ == '__main__':
args = parse_args()
- if args.language in ['ru', 'fr', 'es', 'vi'] and args.grammars == 'tn_grammars':
+ if args.language in ['ru', 'fr', 'vi'] and args.grammars == 'tn_grammars':
raise ValueError('Only ITN grammars could be deployed in Sparrowhawk for the selected languages.')
if args.language == 'en':
@@ -148,6 +148,10 @@ def parse_args():
from nemo_text_processing.inverse_text_normalization.es.verbalizers.verbalize import (
VerbalizeFst as ITNVerbalizeFst,
)
+ from nemo_text_processing.text_normalization.es.taggers.tokenize_and_classify import (
+ ClassifyFst as TNClassifyFst,
+ )
+ from nemo_text_processing.text_normalization.es.verbalizers.verbalize import VerbalizeFst as TNVerbalizeFst
elif args.language == 'fr':
from nemo_text_processing.inverse_text_normalization.fr.taggers.tokenize_and_classify import (
ClassifyFst as ITNClassifyFst,
</patch>
diff --git a/tests/nemo_text_processing/es/data_text_normalization/test_cases_cardinal.txt b/tests/nemo_text_processing/es/data_text_normalization/test_cases_cardinal.txt
new file mode 100644
--- /dev/null
+++ b/tests/nemo_text_processing/es/data_text_normalization/test_cases_cardinal.txt
@@ -0,0 +1,86 @@
+1~un
+2~dos
+3~tres
+4~cuatro
+5~cinco
+6~seis
+7~siete
+8~ocho
+9~nueve
+10~diez
+11~once
+12~doce
+13~trece
+14~catorce
+15~quince
+16~dieciséis
+17~diecisiete
+18~dieciocho
+19~diecinueve
+20~veinte
+21~veintiún
+22~veintidós
+23~veintitrés
+24~veinticuatro
+25~veinticinco
+26~veintiséis
+27~veintisiete
+28~veintiocho
+29~veintinueve
+30~treinta
+31~treinta y un
+40~cuarenta
+41~cuarenta y un
+50~cincuenta
+51~cincuenta y un
+60~sesenta
+70~setenta
+80~ochenta
+90~noventa
+100~cien
+101~ciento un
+120~ciento veinte
+121~ciento veintiún
+130~ciento treinta
+131~ciento treinta y un
+200~doscientos
+201~doscientos un
+300~trescientos
+301~trescientos un
+1000~mil
+1 000~mil
+1.000~mil
+1001~mil un
+1010~mil diez
+1020~mil veinte
+1021~mil veintiún
+1100~mil cien
+1101~mil ciento un
+1110~mil ciento diez
+1111~mil ciento once
+1234~mil doscientos treinta y cuatro
+2000~dos mil
+2001~dos mil un
+2010~dos mil diez
+2020~dos mil veinte
+2100~dos mil cien
+2101~dos mil ciento un
+2110~dos mil ciento diez
+2111~dos mil ciento once
+2222~dos mil doscientos veintidós
+10000~diez mil
+10 000~diez mil
+10.000~diez mil
+100000~cien mil
+100 000~cien mil
+100.000~cien mil
+1 000 000~un millón
+1.000.000~un millón
+1 234 568~un millón doscientos treinta y cuatro mil quinientos sesenta y ocho
+2.000.000~dos millones
+1.000.000.000~mil millones
+2.000.000.000~dos mil millones
+3 000 000 000 000~tres billones
+3.000.000.000.000~tres billones
+100 000 000 000 000 000 000 000~cien mil trillones
+100 000 000 000 000 000 000 001~cien mil trillones un
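Each test file pairs a written form with its expected spoken form, separated by a tilde. A minimal loader for this format (the real harness lives under tests/nemo_text_processing and feeds the left column through the Spanish Normalizer; this reader is only a sketch):

```python
# Parse "<written>~<spoken>" lines in the format used by the test files above.
def load_cases(text: str):
    return [tuple(line.split("~", 1)) for line in text.strip().splitlines()]

sample = "1~un\n100~cien\n1.000~mil"
for written, spoken in load_cases(sample):
    print(f"{written} -> {spoken}")
```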
diff --git a/tests/nemo_text_processing/es/data_text_normalization/test_cases_date.txt b/tests/nemo_text_processing/es/data_text_normalization/test_cases_date.txt
new file mode 100644
--- /dev/null
+++ b/tests/nemo_text_processing/es/data_text_normalization/test_cases_date.txt
@@ -0,0 +1,13 @@
+1 enero~primero de enero
+5 febrero~cinco de febrero
+20 de marzo~veinte de marzo
+abril 30~treinta de abril
+31 marzo~treinta y uno de marzo
+10 mayo 1990~diez de mayo de mil novecientos noventa
+junio 11 2000~once de junio de dos mil
+30 julio del 2020~treinta de julio del dos mil veinte
+30-2-1990~treinta de febrero de mil novecientos noventa
+30/2/1990~treinta de febrero de mil novecientos noventa
+30.2.1990~treinta de febrero de mil novecientos noventa
+1990-2-30~treinta de febrero de mil novecientos noventa
+1990-02-30~treinta de febrero de mil novecientos noventa
\ No newline at end of file
diff --git a/tests/nemo_text_processing/es/data_text_normalization/test_cases_decimal.txt b/tests/nemo_text_processing/es/data_text_normalization/test_cases_decimal.txt
new file mode 100644
--- /dev/null
+++ b/tests/nemo_text_processing/es/data_text_normalization/test_cases_decimal.txt
@@ -0,0 +1,27 @@
+0,1~cero coma un
+0,01~cero coma cero un
+0,010~cero coma cero uno cero
+1,0101~uno coma cero uno cero un
+0,0~cero coma cero
+1,0~uno coma cero
+1,00~uno coma cero cero
+1,1~uno coma un
+233,32~doscientos treinta y tres coma treinta y dos
+32,22 millones~treinta y dos coma veintidós millones
+320 320,22 millones~trescientos veinte mil trescientos veinte coma veintidós millones
+5.002,232~cinco mil dos coma doscientos treinta y dos
+3,2 trillones~tres coma dos trillones
+3 millones~tres millones
+3 000 millones~tres mil millones
+3000 millones~tres mil millones
+3.000 millones~tres mil millones
+3.001 millones~tres mil un millones
+1 millón~un millón
+1 000 millones~mil millones
+1000 millones~mil millones
+1.000 millones~mil millones
+2,33302 millones~dos coma tres tres tres cero dos millones
+1,5332 millón~uno coma cinco tres tres dos millón
+1,53322 millón~uno coma cinco tres tres dos dos millón
+1,53321 millón~uno coma cinco tres tres dos un millón
+101,010101 millones~ciento uno coma cero uno cero uno cero un millones
\ No newline at end of file
diff --git a/tests/nemo_text_processing/es/data_text_normalization/test_cases_electronic.txt b/tests/nemo_text_processing/es/data_text_normalization/test_cases_electronic.txt
new file mode 100644
--- /dev/null
+++ b/tests/nemo_text_processing/es/data_text_normalization/test_cases_electronic.txt
@@ -0,0 +1,12 @@
+a.bc@gmail.com~a punto b c arroba gmail punto com
+cdf@abc.edu~c d f arroba a b c punto e d u
+abc@gmail.abc~a b c arroba gmail punto a b c
+abc@abc.com~a b c arroba a b c punto com
+asdf123@abc.com~a s d f uno dos tres arroba a b c punto com
+a1b2@abc.com~a uno b dos arroba a b c punto com
+ab3.sdd.3@gmail.com~a b tres punto s d d punto tres arroba gmail punto com
+https://www.nvidia.com~h t t p s dos puntos barra barra w w w punto nvidia punto com
+www.nvidia.com~w w w punto nvidia punto com
+www.abc.es/efg~w w w punto a b c punto es barra e f g
+www.abc.es~w w w punto a b c punto es
+http://www.ourdailynews.com.sm~h t t p dos puntos barra barra w w w punto o u r d a i l y n e w s punto com punto s m
\ No newline at end of file
diff --git a/tests/nemo_text_processing/es/data_text_normalization/test_cases_fraction.txt b/tests/nemo_text_processing/es/data_text_normalization/test_cases_fraction.txt
new file mode 100644
--- /dev/null
+++ b/tests/nemo_text_processing/es/data_text_normalization/test_cases_fraction.txt
@@ -0,0 +1,76 @@
+1/2~medio
+1 1/2~uno y medio
+3/2~tres medios
+1 3/2~uno y tres medios
+1/3~un tercio
+2/3~dos tercios
+1/4~un cuarto
+2/4~dos cuartos
+1/5~un quinto
+2/5~dos quintos
+1/6~un sexto
+2/6~dos sextos
+1/7~un séptimo
+2/7~dos séptimos
+1/8~un octavo
+2/8~dos octavos
+1/9~un noveno
+2/9~dos novenos
+1/10~un décimo
+2/10~dos décimos
+1/11~un onceavo
+1/12~un doceavo
+1/13~un treceavo
+1/14~un catorceavo
+1/15~un quinceavo
+1/16~un dieciseisavo
+1/17~un diecisieteavo
+1/18~un dieciochoavo
+1/19~un diecinueveavo
+1/20~un veinteavo
+1/21~un veintiunavo
+1/22~un veintidosavo
+1/30~un treintavo
+1/31~un treintaiunavo
+1/40~un cuarentavo
+1/41~un cuarentaiunavo
+1/50~un cincuentavo
+1/60~un sesentavo
+1/70~un setentavo
+1/80~un ochentavo
+1/90~un noventavo
+1/100~un centésimo
+2/100~dos centésimos
+1 2/100~uno y dos centésimos
+1/101~uno sobre ciento uno
+1/110~uno sobre ciento diez
+1/111~uno sobre ciento once
+1/112~uno sobre ciento doce
+1/123~uno sobre ciento veintitrés
+1/134~uno sobre ciento treinta y cuatro
+1/200~un ducentésimo
+1/201~uno sobre doscientos uno
+1/234~uno sobre doscientos treinta y cuatro
+1/300~un tricentésimo
+1/345~uno sobre trescientos cuarenta y cinco
+1/400~un cuadringentésimo
+1/456~uno sobre cuatrocientos cincuenta y seis
+1/500~un quingentésimo
+1/600~un sexcentésimo
+1/700~un septingentésimo
+1/800~un octingentésimo
+1/900~un noningentésimo
+1/1000~un milésimo
+2/1000~dos milésimos
+1 2/1000~uno y dos milésimos
+1/1001~uno sobre mil uno
+1/1100~uno sobre mil cien
+1/1200~uno sobre mil doscientos
+1/1234~uno sobre mil doscientos treinta y cuatro
+1/2000~un dosmilésimo
+1/5000~un cincomilésimo
+1/10000~un diezmilésimo
+1/100.000~un cienmilésimo
+1/1.000.000~un millonésimo
+1/100.000.000~un cienmillonésimo
+1/1.200.000.000~un mildoscientosmillonésimo
\ No newline at end of file
diff --git a/tests/nemo_text_processing/es/data_text_normalization/test_cases_measure.txt b/tests/nemo_text_processing/es/data_text_normalization/test_cases_measure.txt
new file mode 100644
--- /dev/null
+++ b/tests/nemo_text_processing/es/data_text_normalization/test_cases_measure.txt
@@ -0,0 +1,17 @@
+1,2-a~uno coma dos a
+a-5~a cinco
+200 m~doscientos metros
+3 h~tres horas
+1 h~una hora
+245 mph~doscientas cuarenta y cinco millas por hora
+2 kg~dos kilogramos
+60,2400 kg~sesenta coma dos cuatro cero cero kilogramos
+-60,2400 kg~menos sesenta coma dos cuatro cero cero kilogramos
+8,52 %~ocho coma cincuenta y dos por ciento
+-8,52 %~menos ocho coma cincuenta y dos por ciento
+1 %~uno por ciento
+3 cm~tres centímetros
+4 s~cuatro segundos
+5 l~cinco litros
+4,51/s~cuatro coma cincuenta y uno por segundo
+0,0101 s~cero coma cero uno cero un segundos
\ No newline at end of file
diff --git a/tests/nemo_text_processing/es/data_text_normalization/test_cases_money.txt b/tests/nemo_text_processing/es/data_text_normalization/test_cases_money.txt
new file mode 100644
--- /dev/null
+++ b/tests/nemo_text_processing/es/data_text_normalization/test_cases_money.txt
@@ -0,0 +1,24 @@
+$1~un dólar
+1 $~un dólar
+$1,50~un dólar cincuenta centavos
+1,50 $~un dólar cincuenta centavos
+£200.000.001~doscientos millones una libras
+200.000.001 £~doscientos millones una libras
+2 billones de euros~dos billones de euros
+€2 billones~dos billones de euros
+€ 2 billones~dos billones de euros
+€ 2,3 billones~dos coma tres billones de euros
+2,3 billones de euros~dos coma tres billones de euros
+€5,50~cinco euros cincuenta céntimos
+5,50 €~cinco euros cincuenta céntimos
+5,01 €~cinco euros un céntimo
+5,01 £~cinco libras un penique
+21 czk~veintiuna coronas checas
+czk21~veintiuna coronas checas
+czk21,1 millones~veintiuna coma una millones de coronas checas
+czk 5,50 billones~cinco coma cincuenta billones de coronas checas
+rs 5,50 billones~cinco coma cincuenta billones de rupias
+czk5,50 billones~cinco coma cincuenta billones de coronas checas
+0,55 $~cincuenta y cinco centavos
+1,01 $~un dólar un centavo
+¥12,05~doce yenes cinco centavos
\ No newline at end of file
diff --git a/tests/nemo_text_processing/es/data_text_normalization/test_cases_normalize_with_audio.txt b/tests/nemo_text_processing/es/data_text_normalization/test_cases_normalize_with_audio.txt
new file mode 100644
--- /dev/null
+++ b/tests/nemo_text_processing/es/data_text_normalization/test_cases_normalize_with_audio.txt
@@ -0,0 +1,120 @@
+~121
+ciento veintiún
+ciento veintiuno
+ciento veintiuna
+121
+~200
+doscientos
+doscientas
+200
+~201
+doscientos un
+doscientos uno
+doscientas una
+201
+~1
+un
+uno
+una
+1
+~550.000.001
+quinientos cincuenta millones un
+quinientos cincuenta millones una
+quinientos cincuenta millones uno
+550.000.001
+~500.501
+quinientos mil quinientos un
+quinientos mil quinientos uno
+quinientas mil quinientas una
+500.501
+~500.001.º
+quinientosmilésimo primero
+quingentésimo milésimo primero
+quinientosmilésimos primeros
+quingentésimos milésimos primeros
+500.001.º
+~500.001.ª
+quinientasmilésima primera
+quingentésima milésima primera
+quinientasmilésimas primeras
+quingentésimas milésimas primeras
+500.001.ª
+~11.ª
+décima primera
+decimoprimera
+décimas primeras
+decimoprimeras
+undécima
+undécimas
+11.ª
+~11.º
+décimo primero
+decimoprimero
+décimos primeros
+decimoprimeros
+undécimo
+undécimos
+11.º
+~12.º
+décimo segundo
+decimosegundo
+décimos segundos
+decimosegundos
+duodécimo
+duodécimos
+12.º
+~200,0101
+doscientos coma cero uno cero un
+doscientos coma cero uno cero uno
+doscientas coma cero una cero una
+200,0101
+~1.000.200,21
+un millón doscientos coma veintiún
+un millón doscientos coma veintiuno
+un millón doscientas coma veintiuna
+un millón doscientos coma dos un
+un millón doscientos coma dos uno
+un millón doscientas coma dos una
+1.000.200,21
+~1/12
+un doceavo
+una doceava parte
+un duodécimo
+una duodécima parte
+uno sobre doce
+1/12
+~5/200
+cinco ducentésimos
+cinco ducentésimas partes
+cinco sobre doscientos
+5/200
+~1 5/3
+uno y cinco tercios
+una y cinco terceras partes
+uno y cinco sobre tres
+una y cinco sobre tres
+~1/5/2020
+primero de mayo de dos mil veinte
+uno de mayo de dos mil veinte
+cinco de enero de dos mil veinte
+~$5,50
+cinco dólares con cincuenta
+cinco dólares y cincuenta
+cinco dólares cincuenta
+cinco dólares con cincuenta centavos
+cinco dólares y cincuenta centavos
+cinco dólares cincuenta centavos
+~2.30 h
+dos y treinta
+dos y media
+tres menos treinta
+tres menos media
+treinta para las tres
+~12.30 a.m.
+doce y treinta de la medianoche
+doce y treinta de la noche
+doce y media de la medianoche
+doce y media de la noche
+una menos treinta de la mañana
+una menos media de la mañana
+treinta para la una de la mañana
\ No newline at end of file
diff --git a/tests/nemo_text_processing/es/data_text_normalization/test_cases_ordinal.txt b/tests/nemo_text_processing/es/data_text_normalization/test_cases_ordinal.txt
new file mode 100644
--- /dev/null
+++ b/tests/nemo_text_processing/es/data_text_normalization/test_cases_ordinal.txt
@@ -0,0 +1,137 @@
+1.ᵉʳ~primer
+1.º~primero
+1.ª~primera
+2.º~segundo
+2.ª~segunda
+ii~segundo
+II~segundo
+3.ᵉʳ~tercer
+3.º~tercero
+3.ª~tercera
+4.º~cuarto
+4.ª~cuarta
+5.º~quinto
+5.ª~quinta
+6.º~sexto
+6.ª~sexta
+7.º~séptimo
+7.ª~séptima
+8.º~octavo
+8.ª~octava
+9.º~noveno
+9.ª~novena
+10.º~décimo
+10.ª~décima
+11.ᵉʳ~decimoprimer
+11.º~undécimo
+11.ª~undécima
+12.º~duodécimo
+12.ª~duodécima
+13.ᵉʳ~decimotercer
+13.º~decimotercero
+13.ª~decimotercera
+14.º~decimocuarto
+14.ª~decimocuarta
+15.º~decimoquinto
+15.ª~decimoquinta
+16.º~decimosexto
+16.ª~decimosexta
+17.º~decimoséptimo
+17.ª~decimoséptima
+18.º~decimoctavo
+18.ª~decimoctava
+19.º~decimonoveno
+19.ª~decimonovena
+20.º~vigésimo
+20.ª~vigésima
+21.ᵉʳ~vigesimoprimer
+21.º~vigesimoprimero
+21.ª~vigesimoprimera
+30.º~trigésimo
+30.ª~trigésima
+31.ᵉʳ~trigésimo primer
+31.º~trigésimo primero
+31.ª~trigésima primera
+40.º~cuadragésimo
+40.ª~cuadragésima
+41.ᵉʳ~cuadragésimo primer
+41.º~cuadragésimo primero
+41.ª~cuadragésima primera
+50.º~quincuagésimo
+50.ª~quincuagésima
+51.ᵉʳ~quincuagésimo primer
+51.º~quincuagésimo primero
+51.ª~quincuagésima primera
+60.º~sexagésimo
+60.ª~sexagésima
+70.º~septuagésimo
+70.ª~septuagésima
+80.º~octogésimo
+80.ª~octogésima
+90.º~nonagésimo
+90.ª~nonagésima
+100.º~centésimo
+100.ª~centésima
+101.ᵉʳ~centésimo primer
+101.º~centésimo primero
+101.ª~centésima primera
+134.º~centésimo trigésimo cuarto
+134.ª~centésima trigésima cuarta
+200.º~ducentésimo
+200.ª~ducentésima
+300.º~tricentésimo
+300.ª~tricentésima
+400.º~cuadringentésimo
+400.ª~cuadringentésima
+500.º~quingentésimo
+500.ª~quingentésima
+600.º~sexcentésimo
+600.ª~sexcentésima
+700.º~septingentésimo
+700.ª~septingentésima
+800.º~octingentésimo
+800.ª~octingentésima
+900.º~noningentésimo
+900.ª~noningentésima
+1000.º~milésimo
+1000.ª~milésima
+1001.ᵉʳ~milésimo primer
+1 000.º~milésimo
+1 000.ª~milésima
+1 001.ᵉʳ~milésimo primer
+1.000.º~milésimo
+1.000.ª~milésima
+1.001.ᵉʳ~milésimo primer
+1248.º~milésimo ducentésimo cuadragésimo octavo
+1248.ª~milésima ducentésima cuadragésima octava
+2000.º~dosmilésimo
+100 000.º~cienmilésimo
+i~primero
+I~primero
+ii~segundo
+II~segundo
+iii~tercero
+III~tercero
+iv~cuarto
+IV~cuarto
+V~quinto
+VI~sexto
+VII~séptimo
+VIII~octavo
+IX~noveno
+X~décimo
+XI~undécimo
+XII~duodécimo
+XIII~decimotercero
+XX~vigésimo
+XXI~vigesimoprimero
+XXX~trigésimo
+XL~cuadragésimo
+L~quincuagésimo
+XC~nonagésimo
+C~centésimo
+CD~cuadringentésimo
+D~quingentésimo
+CM~noningentésimo
+999.º~noningentésimo nonagésimo noveno
+cmxcix~noningentésimo nonagésimo noveno
\ No newline at end of file
diff --git a/tests/nemo_text_processing/es/data_text_normalization/test_cases_telephone.txt b/tests/nemo_text_processing/es/data_text_normalization/test_cases_telephone.txt
new file mode 100644
--- /dev/null
+++ b/tests/nemo_text_processing/es/data_text_normalization/test_cases_telephone.txt
@@ -0,0 +1,3 @@
+123-123-5678~uno dos tres uno dos tres cinco seis siete ocho
+123-456-789~uno dos tres cuatro cinco seis siete ocho nueve
+1234-5678~uno dos tres cuatro cinco seis siete ocho
\ No newline at end of file
diff --git a/tests/nemo_text_processing/es/data_text_normalization/test_cases_time.txt b/tests/nemo_text_processing/es/data_text_normalization/test_cases_time.txt
new file mode 100644
--- /dev/null
+++ b/tests/nemo_text_processing/es/data_text_normalization/test_cases_time.txt
@@ -0,0 +1,26 @@
+1.00~una
+1:00~una
+01:00~una
+01 h~una
+3 h~tres horas
+1 h~una hora
+1.05 h~una y cinco
+01.05 h~una y cinco
+1.00 h~una
+1.00 a.m.~una de la mañana
+1.00 a.m~una de la mañana
+1.00 p.m.~una de la tarde
+1.00 p.m est~una de la tarde e s t
+1.00 est~una e s t
+5:02 est~cinco y dos e s t
+5:02 p.m pst~cinco y dos de la noche p s t
+5:02 p.m.~cinco y dos de la noche
+12.15~doce y cuarto
+12.15 a.m.~doce y cuarto de la noche
+12.15 p.m.~doce y cuarto del mediodía
+13.30~trece y media
+14.05~catorce y cinco
+24:50~veinticuatro y cincuenta
+3:02:32 pst~tres horas dos minutos y treinta y dos segundos p s t
+00:52~cero y cincuenta y dos
+0:52~cero y cincuenta y dos
diff --git a/tests/nemo_text_processing/es/data_text_normalization/test_cases_whitelist.txt b/tests/nemo_text_processing/es/data_text_normalization/test_cases_whitelist.txt
new file mode 100644
--- /dev/null
+++ b/tests/nemo_text_processing/es/data_text_normalization/test_cases_whitelist.txt
@@ -0,0 +1,3 @@
+el dr.~el doctor
+sr. rodriguez~señor rodriguez
+182 esq. toledo~ciento ochenta y dos esquina toledo
\ No newline at end of file
diff --git a/tests/nemo_text_processing/es/data_text_normalization/test_cases_word.txt b/tests/nemo_text_processing/es/data_text_normalization/test_cases_word.txt
new file mode 100644
--- /dev/null
+++ b/tests/nemo_text_processing/es/data_text_normalization/test_cases_word.txt
@@ -0,0 +1,48 @@
+~
+yahoo!~yahoo!
+veinte!~veinte!
+โ~โ
+aaa~aaa
+aabach~aabach
+aabenraa~aabenraa
+aabye~aabye
+aaccessed~aaccessed
+aach~aach
+aachen's~aachen's
+aadri~aadri
+aafia~aafia
+aagaard~aagaard
+aagadu~aagadu
+aagard~aagard
+aagathadi~aagathadi
+aaghart's~aaghart's
+aagnes~aagnes
+aagomoni~aagomoni
+aagon~aagon
+aagoo~aagoo
+aagot~aagot
+aahar~aahar
+aahh~aahh
+aahperd~aahperd
+aaibinterstate~aaibinterstate
+aajab~aajab
+aakasa~aakasa
+aakervik~aakervik
+aakirkeby~aakirkeby
+aalam~aalam
+aalbaek~aalbaek
+aaldiu~aaldiu
+aalem~aalem
+a'ali~a'ali
+aalilaassamthey~aalilaassamthey
+aalin~aalin
+aaliyan~aaliyan
+aaliyan's~aaliyan's
+aamadu~aamadu
+aamara~aamara
+aambala~aambala
+aamera~aamera
+aamer's~aamer's
+aamina~aamina
+aaminah~aaminah
+aamjiwnaang~aamjiwnaang
diff --git a/tests/nemo_text_processing/es/test_cardinal.py b/tests/nemo_text_processing/es/test_cardinal.py
--- a/tests/nemo_text_processing/es/test_cardinal.py
+++ b/tests/nemo_text_processing/es/test_cardinal.py
@@ -22,7 +22,8 @@
class TestCardinal:
- inverse_normalizer_es = (
+
+ inverse_normalizer = (
InverseNormalizer(lang='es', cache_dir=CACHE_DIR, overwrite_cache=False) if PYNINI_AVAILABLE else None
)
@@ -32,6 +33,34 @@ class TestCardinal:
)
@pytest.mark.run_only_on('CPU')
@pytest.mark.unit
- def test_denorm_es(self, test_input, expected):
- pred = self.inverse_normalizer_es.inverse_normalize(test_input, verbose=False)
+ def test_denorm(self, test_input, expected):
+ pred = self.inverse_normalizer.inverse_normalize(test_input, verbose=False)
assert pred == expected
+
+ normalizer = (
+ Normalizer(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE
+ else None
+ )
+
+ normalizer_with_audio = (
+ NormalizerWithAudio(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE and CACHE_DIR
+ else None
+ )
+
+ @parameterized.expand(parse_test_case_file('es/data_text_normalization/test_cases_cardinal.txt'))
+ @pytest.mark.skipif(
+ not PYNINI_AVAILABLE, reason="`pynini` not installed, please install via nemo_text_processing/setup.sh"
+ )
+ @pytest.mark.run_only_on('CPU')
+ @pytest.mark.unit
+ def test_norm(self, test_input, expected):
+ pred = self.normalizer.normalize(test_input, verbose=False)
+ assert pred == expected
+
+ if self.normalizer_with_audio:
+ pred_non_deterministic = self.normalizer_with_audio.normalize(
+ test_input, n_tagged=1000, punct_post_process=False
+ )
+ assert expected in pred_non_deterministic
diff --git a/tests/nemo_text_processing/es/test_date.py b/tests/nemo_text_processing/es/test_date.py
--- a/tests/nemo_text_processing/es/test_date.py
+++ b/tests/nemo_text_processing/es/test_date.py
@@ -22,7 +22,7 @@
class TestDate:
- inverse_normalizer_es = (
+ inverse_normalizer = (
InverseNormalizer(lang='es', cache_dir=CACHE_DIR, overwrite_cache=False) if PYNINI_AVAILABLE else None
)
@@ -32,6 +32,34 @@ class TestDate:
)
@pytest.mark.run_only_on('CPU')
@pytest.mark.unit
- def test_denorm_es(self, test_input, expected):
- pred = self.inverse_normalizer_es.inverse_normalize(test_input, verbose=False)
+ def test_denorm(self, test_input, expected):
+ pred = self.inverse_normalizer.inverse_normalize(test_input, verbose=False)
assert pred == expected
+
+ normalizer = (
+ Normalizer(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE
+ else None
+ )
+
+ normalizer_with_audio = (
+ NormalizerWithAudio(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE and CACHE_DIR
+ else None
+ )
+
+ @parameterized.expand(parse_test_case_file('es/data_text_normalization/test_cases_date.txt'))
+ @pytest.mark.skipif(
+ not PYNINI_AVAILABLE, reason="`pynini` not installed, please install via nemo_text_processing/setup.sh"
+ )
+ @pytest.mark.run_only_on('CPU')
+ @pytest.mark.unit
+ def test_norm(self, test_input, expected):
+ pred = self.normalizer.normalize(test_input, verbose=False)
+ assert pred == expected
+
+ if self.normalizer_with_audio:
+ pred_non_deterministic = self.normalizer_with_audio.normalize(
+ test_input, n_tagged=1000, punct_post_process=False
+ )
+ assert expected in pred_non_deterministic
diff --git a/tests/nemo_text_processing/es/test_decimal.py b/tests/nemo_text_processing/es/test_decimal.py
--- a/tests/nemo_text_processing/es/test_decimal.py
+++ b/tests/nemo_text_processing/es/test_decimal.py
@@ -22,7 +22,7 @@
class TestDecimal:
- inverse_normalizer_es = (
+ inverse_normalizer = (
InverseNormalizer(lang='es', cache_dir=CACHE_DIR, overwrite_cache=False) if PYNINI_AVAILABLE else None
)
@@ -32,6 +32,34 @@ class TestDecimal:
)
@pytest.mark.run_only_on('CPU')
@pytest.mark.unit
- def test_denorm_es(self, test_input, expected):
- pred = self.inverse_normalizer_es.inverse_normalize(test_input, verbose=False)
+ def test_denorm(self, test_input, expected):
+ pred = self.inverse_normalizer.inverse_normalize(test_input, verbose=False)
assert pred == expected
+
+ normalizer = (
+ Normalizer(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE
+ else None
+ )
+
+ normalizer_with_audio = (
+ NormalizerWithAudio(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE and CACHE_DIR
+ else None
+ )
+
+ @parameterized.expand(parse_test_case_file('es/data_text_normalization/test_cases_decimal.txt'))
+ @pytest.mark.skipif(
+ not PYNINI_AVAILABLE, reason="`pynini` not installed, please install via nemo_text_processing/setup.sh"
+ )
+ @pytest.mark.run_only_on('CPU')
+ @pytest.mark.unit
+ def test_norm(self, test_input, expected):
+ pred = self.normalizer.normalize(test_input, verbose=False)
+ assert pred == expected
+
+ if self.normalizer_with_audio:
+ pred_non_deterministic = self.normalizer_with_audio.normalize(
+ test_input, n_tagged=1000, punct_post_process=False
+ )
+ assert expected in pred_non_deterministic
diff --git a/tests/nemo_text_processing/es/test_electronic.py b/tests/nemo_text_processing/es/test_electronic.py
--- a/tests/nemo_text_processing/es/test_electronic.py
+++ b/tests/nemo_text_processing/es/test_electronic.py
@@ -35,3 +35,31 @@ class TestElectronic:
def test_denorm_es(self, test_input, expected):
pred = self.inverse_normalizer_es.inverse_normalize(test_input, verbose=False)
assert pred == expected
+
+ normalizer = (
+ Normalizer(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE
+ else None
+ )
+
+ normalizer_with_audio = (
+ NormalizerWithAudio(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE and CACHE_DIR
+ else None
+ )
+
+ @parameterized.expand(parse_test_case_file('es/data_text_normalization/test_cases_electronic.txt'))
+ @pytest.mark.skipif(
+ not PYNINI_AVAILABLE, reason="`pynini` not installed, please install via nemo_text_processing/setup.sh"
+ )
+ @pytest.mark.run_only_on('CPU')
+ @pytest.mark.unit
+ def test_norm(self, test_input, expected):
+ pred = self.normalizer.normalize(test_input, verbose=False)
+ assert pred == expected
+
+ if self.normalizer_with_audio:
+ pred_non_deterministic = self.normalizer_with_audio.normalize(
+ test_input, n_tagged=1000, punct_post_process=False
+ )
+ assert expected in pred_non_deterministic
diff --git a/tests/nemo_text_processing/es/test_fraction.py b/tests/nemo_text_processing/es/test_fraction.py
new file mode 100644
--- /dev/null
+++ b/tests/nemo_text_processing/es/test_fraction.py
@@ -0,0 +1,51 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+import pytest
+from nemo_text_processing.text_normalization.normalize import Normalizer
+from nemo_text_processing.text_normalization.normalize_with_audio import NormalizerWithAudio
+from parameterized import parameterized
+
+from ..utils import CACHE_DIR, PYNINI_AVAILABLE, parse_test_case_file
+
+
+class TestFraction:
+ normalizer = (
+ Normalizer(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE
+ else None
+ )
+
+ normalizer_with_audio = (
+ NormalizerWithAudio(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE and CACHE_DIR
+ else None
+ )
+
+ @parameterized.expand(parse_test_case_file('es/data_text_normalization/test_cases_fraction.txt'))
+ @pytest.mark.skipif(
+ not PYNINI_AVAILABLE, reason="`pynini` not installed, please install via nemo_text_processing/setup.sh"
+ )
+ @pytest.mark.run_only_on('CPU')
+ @pytest.mark.unit
+ def test_norm(self, test_input, expected):
+ pred = self.normalizer.normalize(test_input, verbose=False)
+ assert pred == expected
+
+ if self.normalizer_with_audio:
+ pred_non_deterministic = self.normalizer_with_audio.normalize(
+ test_input, n_tagged=1000, punct_post_process=False
+ )
+ assert expected in pred_non_deterministic
diff --git a/tests/nemo_text_processing/es/test_measure.py b/tests/nemo_text_processing/es/test_measure.py
--- a/tests/nemo_text_processing/es/test_measure.py
+++ b/tests/nemo_text_processing/es/test_measure.py
@@ -36,3 +36,31 @@ class TestMeasure:
def test_denorm_es(self, test_input, expected):
pred = self.inverse_normalizer_es.inverse_normalize(test_input, verbose=False)
assert pred == expected
+
+ normalizer = (
+ Normalizer(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE
+ else None
+ )
+
+ normalizer_with_audio = (
+ NormalizerWithAudio(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE and CACHE_DIR
+ else None
+ )
+
+ @parameterized.expand(parse_test_case_file('es/data_text_normalization/test_cases_measure.txt'))
+ @pytest.mark.skipif(
+ not PYNINI_AVAILABLE, reason="`pynini` not installed, please install via nemo_text_processing/setup.sh"
+ )
+ @pytest.mark.run_only_on('CPU')
+ @pytest.mark.unit
+ def test_norm(self, test_input, expected):
+ pred = self.normalizer.normalize(test_input, verbose=False)
+ assert pred == expected
+
+ if self.normalizer_with_audio:
+ pred_non_deterministic = self.normalizer_with_audio.normalize(
+ test_input, n_tagged=1000, punct_post_process=False
+ )
+ assert expected in pred_non_deterministic
diff --git a/tests/nemo_text_processing/es/test_money.py b/tests/nemo_text_processing/es/test_money.py
--- a/tests/nemo_text_processing/es/test_money.py
+++ b/tests/nemo_text_processing/es/test_money.py
@@ -23,7 +23,7 @@
class TestMoney:
- inverse_normalizer_es = (
+ inverse_normalizer = (
InverseNormalizer(lang='es', cache_dir=CACHE_DIR, overwrite_cache=False) if PYNINI_AVAILABLE else None
)
@@ -33,6 +33,34 @@ class TestMoney:
)
@pytest.mark.run_only_on('CPU')
@pytest.mark.unit
- def test_denorm_es(self, test_input, expected):
- pred = self.inverse_normalizer_es.inverse_normalize(test_input, verbose=False)
+ def test_denorm(self, test_input, expected):
+ pred = self.inverse_normalizer.inverse_normalize(test_input, verbose=False)
assert pred == expected
+
+ normalizer = (
+ Normalizer(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE
+ else None
+ )
+
+ normalizer_with_audio = (
+ NormalizerWithAudio(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE and CACHE_DIR
+ else None
+ )
+
+ @parameterized.expand(parse_test_case_file('es/data_text_normalization/test_cases_money.txt'))
+ @pytest.mark.skipif(
+ not PYNINI_AVAILABLE, reason="`pynini` not installed, please install via nemo_text_processing/setup.sh"
+ )
+ @pytest.mark.run_only_on('CPU')
+ @pytest.mark.unit
+ def test_norm(self, test_input, expected):
+ pred = self.normalizer.normalize(test_input, verbose=False)
+ assert pred == expected
+
+ if self.normalizer_with_audio:
+ pred_non_deterministic = self.normalizer_with_audio.normalize(
+ test_input, n_tagged=1000, punct_post_process=False
+ )
+ assert expected in pred_non_deterministic
diff --git a/tests/nemo_text_processing/es/test_normalization_with_audio.py b/tests/nemo_text_processing/es/test_normalization_with_audio.py
new file mode 100644
--- /dev/null
+++ b/tests/nemo_text_processing/es/test_normalization_with_audio.py
@@ -0,0 +1,40 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import pytest
+from nemo_text_processing.text_normalization.normalize_with_audio import NormalizerWithAudio
+from parameterized import parameterized
+
+from ..utils import CACHE_DIR, PYNINI_AVAILABLE, get_test_cases_multiple
+
+
+class TestNormalizeWithAudio:
+
+ normalizer_es = (
+ NormalizerWithAudio(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE
+ else None
+ )
+
+ @parameterized.expand(get_test_cases_multiple('es/data_text_normalization/test_cases_normalize_with_audio.txt'))
+ @pytest.mark.skipif(
+ not PYNINI_AVAILABLE, reason="`pynini` not installed, please install via nemo_text_processing/setup.sh"
+ )
+ @pytest.mark.run_only_on('CPU')
+ @pytest.mark.unit
+ def test_norm(self, test_input, expected):
+ pred = self.normalizer_es.normalize(test_input, n_tagged=1000, punct_post_process=False)
+ assert len(set(pred).intersection(set(expected))) == len(
+ expected
+ ), f'missing: {set(expected).difference(set(pred))}'
diff --git a/tests/nemo_text_processing/es/test_ordinal.py b/tests/nemo_text_processing/es/test_ordinal.py
--- a/tests/nemo_text_processing/es/test_ordinal.py
+++ b/tests/nemo_text_processing/es/test_ordinal.py
@@ -23,7 +23,7 @@
class TestOrdinal:
- inverse_normalizer_es = (
+ inverse_normalizer = (
InverseNormalizer(lang='es', cache_dir=CACHE_DIR, overwrite_cache=False) if PYNINI_AVAILABLE else None
)
@@ -33,6 +33,33 @@ class TestOrdinal:
)
@pytest.mark.run_only_on('CPU')
@pytest.mark.unit
- def test_denorm_es(self, test_input, expected):
- pred = self.inverse_normalizer_es.inverse_normalize(test_input, verbose=False)
+ def test_denorm(self, test_input, expected):
+ pred = self.inverse_normalizer.inverse_normalize(test_input, verbose=False)
assert pred == expected
+
+ normalizer = (
+ Normalizer(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE
+ else None
+ )
+ normalizer_with_audio = (
+ NormalizerWithAudio(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE and CACHE_DIR
+ else None
+ )
+
+ @parameterized.expand(parse_test_case_file('es/data_text_normalization/test_cases_ordinal.txt'))
+ @pytest.mark.skipif(
+ not PYNINI_AVAILABLE, reason="`pynini` not installed, please install via nemo_text_processing/setup.sh"
+ )
+ @pytest.mark.run_only_on('CPU')
+ @pytest.mark.unit
+ def test_norm(self, test_input, expected):
+ pred = self.normalizer.normalize(test_input, verbose=False)
+ assert pred == expected
+
+ if self.normalizer_with_audio:
+ pred_non_deterministic = self.normalizer_with_audio.normalize(
+ test_input, n_tagged=30, punct_post_process=False,
+ )
+ assert expected in pred_non_deterministic
diff --git a/tests/nemo_text_processing/es/test_sparrowhawk_normalization.sh b/tests/nemo_text_processing/es/test_sparrowhawk_normalization.sh
new file mode 100644
--- /dev/null
+++ b/tests/nemo_text_processing/es/test_sparrowhawk_normalization.sh
@@ -0,0 +1,84 @@
+#! /bin/sh
+
+PROJECT_DIR=/workspace/tests
+
+runtest () {
+ input=$1
+ cd /workspace/sparrowhawk/documentation/grammars
+
+ # read test file
+ while read testcase; do
+ IFS='~' read written spoken <<< $testcase
+ denorm_pred=$(echo $written | normalizer_main --config=sparrowhawk_configuration.ascii_proto 2>&1 | tail -n 1)
+
+ # trim white space
+ spoken="$(echo -e "${spoken}" | sed -e 's/^[[:space:]]*//' -e 's/[[:space:]]*$//')"
+ denorm_pred="$(echo -e "${denorm_pred}" | sed -e 's/^[[:space:]]*//' -e 's/[[:space:]]*$//')"
+
+ # input expected actual
+ assertEquals "$written" "$spoken" "$denorm_pred"
+ done < "$input"
+}
+
+testTNCardinal() {
+ input=$PROJECT_DIR/es/data_text_normalization/test_cases_cardinal.txt
+ runtest $input
+}
+
+testTNDate() {
+ input=$PROJECT_DIR/es/data_text_normalization/test_cases_date.txt
+ runtest $input
+}
+
+testTNDecimal() {
+ input=$PROJECT_DIR/es/data_text_normalization/test_cases_decimal.txt
+ runtest $input
+}
+
+testTNElectronic() {
+ input=$PROJECT_DIR/es/data_text_normalization/test_cases_electronic.txt
+ runtest $input
+}
+
+testTNFraction() {
+ input=$PROJECT_DIR/es/data_text_normalization/test_cases_fraction.txt
+ runtest $input
+}
+
+testTNMoney() {
+ input=$PROJECT_DIR/es/data_text_normalization/test_cases_money.txt
+ runtest $input
+}
+
+testTNOrdinal() {
+ input=$PROJECT_DIR/es/data_text_normalization/test_cases_ordinal.txt
+ runtest $input
+}
+
+testTNTelephone() {
+ input=$PROJECT_DIR/es/data_text_normalization/test_cases_telephone.txt
+ runtest $input
+}
+
+testTNTime() {
+ input=$PROJECT_DIR/es/data_text_normalization/test_cases_time.txt
+ runtest $input
+}
+
+testTNMeasure() {
+ input=$PROJECT_DIR/es/data_text_normalization/test_cases_measure.txt
+ runtest $input
+}
+
+testTNWhitelist() {
+ input=$PROJECT_DIR/es/data_text_normalization/test_cases_whitelist.txt
+ runtest $input
+}
+
+testTNWord() {
+ input=$PROJECT_DIR/es/data_text_normalization/test_cases_word.txt
+ runtest $input
+}
+
+# Load shUnit2
+. $PROJECT_DIR/../shunit2/shunit2
diff --git a/tests/nemo_text_processing/es/test_telephone.py b/tests/nemo_text_processing/es/test_telephone.py
--- a/tests/nemo_text_processing/es/test_telephone.py
+++ b/tests/nemo_text_processing/es/test_telephone.py
@@ -36,3 +36,31 @@ class TestTelephone:
def test_denorm_es(self, test_input, expected):
pred = self.inverse_normalizer_es.inverse_normalize(test_input, verbose=False)
assert pred == expected
+
+ normalizer = (
+ Normalizer(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE
+ else None
+ )
+
+ normalizer_with_audio = (
+ NormalizerWithAudio(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE and CACHE_DIR
+ else None
+ )
+
+ @parameterized.expand(parse_test_case_file('es/data_text_normalization/test_cases_telephone.txt'))
+ @pytest.mark.skipif(
+ not PYNINI_AVAILABLE, reason="`pynini` not installed, please install via nemo_text_processing/setup.sh"
+ )
+ @pytest.mark.run_only_on('CPU')
+ @pytest.mark.unit
+ def test_norm(self, test_input, expected):
+ pred = self.normalizer.normalize(test_input, verbose=False)
+ assert pred == expected
+
+ if self.normalizer_with_audio:
+ pred_non_deterministic = self.normalizer_with_audio.normalize(
+ test_input, n_tagged=1000, punct_post_process=False
+ )
+ assert expected in pred_non_deterministic
diff --git a/tests/nemo_text_processing/es/test_time.py b/tests/nemo_text_processing/es/test_time.py
--- a/tests/nemo_text_processing/es/test_time.py
+++ b/tests/nemo_text_processing/es/test_time.py
@@ -35,3 +35,31 @@ class TestTime:
def test_denorm_es(self, test_input, expected):
pred = self.inverse_normalizer_es.inverse_normalize(test_input, verbose=False)
assert pred == expected
+
+ normalizer = (
+ Normalizer(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE
+ else None
+ )
+
+ normalizer_with_audio = (
+ NormalizerWithAudio(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE and CACHE_DIR
+ else None
+ )
+
+ @parameterized.expand(parse_test_case_file('es/data_text_normalization/test_cases_time.txt'))
+ @pytest.mark.skipif(
+ not PYNINI_AVAILABLE, reason="`pynini` not installed, please install via nemo_text_processing/setup.sh"
+ )
+ @pytest.mark.run_only_on('CPU')
+ @pytest.mark.unit
+ def test_norm(self, test_input, expected):
+ pred = self.normalizer.normalize(test_input, verbose=False)
+ assert pred == expected
+
+ if self.normalizer_with_audio:
+ pred_non_deterministic = self.normalizer_with_audio.normalize(
+ test_input, n_tagged=1000, punct_post_process=False
+ )
+ assert expected in pred_non_deterministic
diff --git a/tests/nemo_text_processing/es/test_whitelist.py b/tests/nemo_text_processing/es/test_whitelist.py
--- a/tests/nemo_text_processing/es/test_whitelist.py
+++ b/tests/nemo_text_processing/es/test_whitelist.py
@@ -35,3 +35,30 @@ class TestWhitelist:
def test_denorm_es(self, test_input, expected):
pred = self.inverse_normalizer_es.inverse_normalize(test_input, verbose=False)
assert pred == expected
+
+ normalizer = (
+ Normalizer(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE
+ else None
+ )
+ normalizer_with_audio = (
+ NormalizerWithAudio(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE and CACHE_DIR
+ else None
+ )
+
+ @parameterized.expand(parse_test_case_file('es/data_text_normalization/test_cases_whitelist.txt'))
+ @pytest.mark.skipif(
+ not PYNINI_AVAILABLE, reason="`pynini` not installed, please install via nemo_text_processing/setup.sh"
+ )
+ @pytest.mark.run_only_on('CPU')
+ @pytest.mark.unit
+ def test_norm(self, test_input, expected):
+ pred = self.normalizer.normalize(test_input, verbose=False)
+ assert pred == expected
+
+ if self.normalizer_with_audio:
+ pred_non_deterministic = self.normalizer_with_audio.normalize(
+ test_input, n_tagged=10, punct_post_process=False
+ )
+ assert expected in pred_non_deterministic
diff --git a/tests/nemo_text_processing/es/test_word.py b/tests/nemo_text_processing/es/test_word.py
--- a/tests/nemo_text_processing/es/test_word.py
+++ b/tests/nemo_text_processing/es/test_word.py
@@ -35,3 +35,30 @@ class TestWord:
def test_denorm_es(self, test_input, expected):
pred = self.inverse_normalizer_es.inverse_normalize(test_input, verbose=False)
assert pred == expected
+
+ normalizer_es = (
+ Normalizer(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE
+ else None
+ )
+ normalizer_with_audio_es = (
+ NormalizerWithAudio(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE and CACHE_DIR
+ else None
+ )
+
+ @parameterized.expand(parse_test_case_file('es/data_text_normalization/test_cases_word.txt'))
+ @pytest.mark.skipif(
+ not PYNINI_AVAILABLE, reason="`pynini` not installed, please install via nemo_text_processing/setup.sh"
+ )
+ @pytest.mark.run_only_on('CPU')
+ @pytest.mark.unit
+ def test_norm(self, test_input, expected):
+ pred = self.normalizer_es.normalize(test_input, verbose=False)
+ assert pred == expected, f"input: {test_input}"
+
+ if self.normalizer_with_audio_es:
+ pred_non_deterministic = self.normalizer_with_audio_es.normalize(
+ test_input, n_tagged=150, punct_post_process=False
+ )
+ assert expected in pred_non_deterministic, f"input: {test_input}"
NVIDIA__NeMo-7582
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Ubuntu 22.04 Python 3.11 [asr]: multiple errors `dataclasses ValueError: mutable default * for field * is not allowed: use default_factory`
**Describe the bug**
After installing the latest stable release (`1.19.1`) from PyPI, or the latest current commit, with `[asr]` extras, I'm getting this error from `hydra-core==1.2.0` when trying to import `nemo.collections.asr`:
```
Traceback (most recent call last):
File "/home/alex/T7/src/speaker_verification/nemo_speaker/test.py", line 5, in <module>
import nemo.collections.asr as nemo_asr
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/__init__.py", line 15, in <module>
from nemo.collections.asr import data, losses, models, modules
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/losses/__init__.py", line 15, in <module>
from nemo.collections.asr.losses.angularloss import AngularSoftmaxLoss
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/losses/angularloss.py", line 18, in <module>
from nemo.core.classes import Loss, Typing, typecheck
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/core/__init__.py", line 16, in <module>
from nemo.core.classes import *
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/core/classes/__init__.py", line 16, in <module>
import hydra
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/__init__.py", line 5, in <module>
from hydra import utils
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/utils.py", line 8, in <module>
import hydra._internal.instantiate._instantiate2
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 12, in <module>
from hydra._internal.utils import _locate
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/_internal/utils.py", line 18, in <module>
from hydra.core.utils import get_valid_filename, validate_config_path
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/core/utils.py", line 20, in <module>
from hydra.core.hydra_config import HydraConfig
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/core/hydra_config.py", line 6, in <module>
from hydra.conf import HydraConf
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/conf/__init__.py", line 46, in <module>
class JobConf:
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/conf/__init__.py", line 75, in JobConf
@dataclass
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1230, in dataclass
return wrap(cls)
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1220, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 958, in _process_class
cls_fields.append(_get_field(cls, name, type, kw_only))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 815, in _get_field
raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'hydra.conf.JobConf.JobConfig.OverrideDirname'> for field override_dirname is not allowed: use default_factory
```
If I then manually upgrade `hydra-core` to the current latest (`1.3.2`), I get similar errors from `nemo-toolkit`:
```
Traceback (most recent call last):
File "/home/alex/T7/src/speaker_verification/nemo_speaker/test.py", line 5, in <module>
import nemo.collections.asr as nemo_asr
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/__init__.py", line 15, in <module>
from nemo.collections.asr import data, losses, models, modules
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/losses/__init__.py", line 16, in <module>
from nemo.collections.asr.losses.audio_losses import SDRLoss
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/losses/audio_losses.py", line 21, in <module>
from nemo.collections.asr.parts.preprocessing.features import make_seq_mask_like
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/__init__.py", line 16, in <module>
from nemo.collections.asr.parts.preprocessing.features import FeaturizerFactory, FilterbankFeatures, WaveformFeaturizer
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/features.py", line 44, in <module>
from nemo.collections.asr.parts.preprocessing.perturb import AudioAugmentor
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/perturb.py", line 50, in <module>
from nemo.collections.common.parts.preprocessing import collections, parsers
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/common/__init__.py", line 16, in <module>
from nemo.collections.common import data, losses, parts, tokenizers
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/common/parts/__init__.py", line 15, in <module>
from nemo.collections.common.parts.adapter_modules import LinearAdapter, LinearAdapterConfig
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/common/parts/adapter_modules.py", line 147, in <module>
@dataclass
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1230, in dataclass
return wrap(cls)
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1220, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 958, in _process_class
cls_fields.append(_get_field(cls, name, type, kw_only))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 815, in _get_field
raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'nemo.core.classes.mixins.adapter_mixin_strategies.ResidualAddAdapterStrategyConfig'> for field adapter_strategy is not allowed: use default_factory
```
It's easy to fix with a patch like this:
```
--- nemo/collections/common/parts/adapter_modules.py.orig 2023-08-04 13:55:53.464534800 +0200
+++ nemo/collections/common/parts/adapter_modules.py 2023-08-04 14:05:45.579537700 +0200
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass, is_dataclass
+from dataclasses import dataclass, is_dataclass, field
from typing import Any, Optional
from hydra.utils import instantiate
@@ -151,5 +151,5 @@ class LinearAdapterConfig:
activation: str = 'swish'
norm_position: str = 'pre'
dropout: float = 0.0
- adapter_strategy: Optional[Any] = adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
+ adapter_strategy: Optional[Any] = field(default_factory=adapter_mixin_strategies.ResidualAddAdapterStrategyConfig)
_target_: str = "{0}.{1}".format(LinearAdapter.__module__, LinearAdapter.__name__)
```
However, another error of the same kind then comes up:
```
Traceback (most recent call last):
File "/home/alex/T7/src/speaker_verification/nemo_speaker/test.py", line 5, in <module>
import nemo.collections.asr as nemo_asr
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/__init__.py", line 15, in <module>
from nemo.collections.asr import data, losses, models, modules
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/models/__init__.py", line 18, in <module>
from nemo.collections.asr.models.clustering_diarizer import ClusteringDiarizer
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/models/clustering_diarizer.py", line 29, in <module>
from nemo.collections.asr.metrics.der import score_labels
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/metrics/der.py", line 24, in <module>
from nemo.collections.asr.metrics.wer import word_error_rate
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/metrics/wer.py", line 27, in <module>
from nemo.collections.asr.parts.submodules import ctc_beam_decoding, ctc_greedy_decoding
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/parts/submodules/ctc_beam_decoding.py", line 593, in <module>
@dataclass
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1230, in dataclass
return wrap(cls)
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1220, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 958, in _process_class
cls_fields.append(_get_field(cls, name, type, kw_only))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 815, in _get_field
raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'nemo.collections.asr.parts.submodules.ctc_beam_decoding.FlashlightConfig'> for field flashlight_cfg is not allowed: use default_factory
```
This can also be fixed accordingly, but all in all, it appears these issues are pretty common in the code base.
Looks like NeMo isn't Python 3.11 ready, at least the `asr` collection.
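The root cause is straightforward to reproduce in isolation. Since Python 3.11, `dataclasses` detects mutable defaults via `__hash__`: an instance of a regular (non-frozen) dataclass is unhashable, so using one as a field default raises the `ValueError` above, and the fix is to defer construction with `default_factory`. A minimal sketch with illustrative stand-in names (not NeMo's actual classes):

```python
from dataclasses import dataclass, field

@dataclass
class StrategyConfig:        # stand-in for e.g. ResidualAddAdapterStrategyConfig
    stochastic_depth: float = 0.0

@dataclass
class AdapterConfig:
    in_features: int = 256
    # On Python 3.11+ the commented line below raises at class-definition
    # time ("mutable default ... use default_factory"), because a regular
    # dataclass instance is unhashable and therefore treated as mutable:
    #   adapter_strategy: StrategyConfig = StrategyConfig()
    adapter_strategy: StrategyConfig = field(default_factory=StrategyConfig)

a, b = AdapterConfig(), AdapterConfig()
print(a.adapter_strategy.stochastic_depth)       # 0.0
print(a.adapter_strategy is b.adapter_strategy)  # False: each gets a fresh default
```

As a side benefit of the `field(default_factory=...)` rewrite, every config instance now gets its own strategy object instead of all instances sharing one default.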
**Steps/Code to reproduce bug**
With Python 3.11
1. Install `nemo-toolkit[asr]` either 1.19.1, or current HEAD from git (b498d438fc4c35ebf364a9a1c5cd3e29a2c0fe50)
2. Run `import nemo.collections.asr`
**Expected behavior**
Import `nemo.collections.asr` without errors
**Environment overview (please complete the following information)**
- Environment location: Bare-metal
- Method of NeMo install: `pip install nemo-toolkit==1.19.1` or from source b498d438fc4c35ebf364a9a1c5cd3e29a2c0fe50
**Environment details**
- Ubuntu 22.04
- PyTorch version 2.0.1
- Python version 3.11.4
</issue>
<code>
[start of README.rst]
|status| |documentation| |codeql| |license| |pypi| |pyversion| |downloads| |black|
.. |status| image:: http://www.repostatus.org/badges/latest/active.svg
:target: http://www.repostatus.org/#active
:alt: Project Status: Active – The project has reached a stable, usable state and is being actively developed.
.. |documentation| image:: https://readthedocs.com/projects/nvidia-nemo/badge/?version=main
:alt: Documentation
:target: https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/
.. |license| image:: https://img.shields.io/badge/License-Apache%202.0-brightgreen.svg
:target: https://github.com/NVIDIA/NeMo/blob/master/LICENSE
:alt: NeMo core license and license for collections in this repo
.. |pypi| image:: https://badge.fury.io/py/nemo-toolkit.svg
:target: https://badge.fury.io/py/nemo-toolkit
:alt: Release version
.. |pyversion| image:: https://img.shields.io/pypi/pyversions/nemo-toolkit.svg
:target: https://badge.fury.io/py/nemo-toolkit
:alt: Python version
.. |downloads| image:: https://static.pepy.tech/personalized-badge/nemo-toolkit?period=total&units=international_system&left_color=grey&right_color=brightgreen&left_text=downloads
:target: https://pepy.tech/project/nemo-toolkit
:alt: PyPi total downloads
.. |codeql| image:: https://github.com/nvidia/nemo/actions/workflows/codeql.yml/badge.svg?branch=main&event=push
:target: https://github.com/nvidia/nemo/actions/workflows/codeql.yml
:alt: CodeQL
.. |black| image:: https://img.shields.io/badge/code%20style-black-000000.svg
:target: https://github.com/psf/black
:alt: Code style: black
.. _main-readme:
**NVIDIA NeMo**
===============
Introduction
------------
NVIDIA NeMo is a conversational AI toolkit built for researchers working on automatic speech recognition (ASR),
text-to-speech synthesis (TTS), large language models (LLMs), and
natural language processing (NLP).
The primary objective of NeMo is to help researchers from industry and academia to reuse prior work (code and pretrained models)
and make it easier to create new `conversational AI models <https://developer.nvidia.com/conversational-ai#started>`_.
All NeMo models are trained with `Lightning <https://github.com/Lightning-AI/lightning>`_ and
training is automatically scalable to 1000s of GPUs.
Additionally, NeMo Megatron LLM models can be trained up to 1 trillion parameters using tensor and pipeline model parallelism.
NeMo models can be optimized for inference and deployed for production use-cases with `NVIDIA Riva <https://developer.nvidia.com/riva>`_.
Getting started with NeMo is simple.
State of the Art pretrained NeMo models are freely available on `HuggingFace Hub <https://huggingface.co/models?library=nemo&sort=downloads&search=nvidia>`_ and
`NVIDIA NGC <https://catalog.ngc.nvidia.com/models?query=nemo&orderBy=weightPopularDESC>`_.
These models can be used to transcribe audio, synthesize speech, or translate text in just a few lines of code.
We have extensive `tutorials <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/starthere/tutorials.html>`_ that
can all be run on `Google Colab <https://colab.research.google.com>`_.
For advanced users that want to train NeMo models from scratch or finetune existing NeMo models
we have a full suite of `example scripts <https://github.com/NVIDIA/NeMo/tree/main/examples>`_ that support multi-GPU/multi-node training.
For scaling NeMo LLM training on Slurm clusters or public clouds, please see the `NVIDIA NeMo Megatron Launcher <https://github.com/NVIDIA/NeMo-Megatron-Launcher>`_.
The NM launcher has extensive recipes, scripts, utilities, and documentation for training NeMo LLMs and also has an `Autoconfigurator <https://github.com/NVIDIA/NeMo-Megatron-Launcher#53-using-autoconfigurator-to-find-the-optimal-configuration>`_
which can be used to find the optimal model parallel configuration for training on a specific cluster.
Also see our `introductory video <https://www.youtube.com/embed/wBgpMf_KQVw>`_ for a high level overview of NeMo.
Key Features
------------
* Speech processing
* `HuggingFace Space for Audio Transcription (File, Microphone and YouTube) <https://huggingface.co/spaces/smajumdar/nemo_multilingual_language_id>`_
* `Automatic Speech Recognition (ASR) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/intro.html>`_
* Supported ASR models: `<https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/asr/models.html>`_
* Jasper, QuartzNet, CitriNet, ContextNet
* Conformer-CTC, Conformer-Transducer, FastConformer-CTC, FastConformer-Transducer
* Squeezeformer-CTC and Squeezeformer-Transducer
* LSTM-Transducer (RNNT) and LSTM-CTC
* Supports the following decoders/losses:
* CTC
* Transducer/RNNT
* Hybrid Transducer/CTC
* NeMo Original `Multi-blank Transducers <https://arxiv.org/abs/2211.03541>`_ and `Token-and-Duration Transducers (TDT) <https://arxiv.org/abs/2304.06795>`_
* Streaming/Buffered ASR (CTC/Transducer) - `Chunked Inference Examples <https://github.com/NVIDIA/NeMo/tree/stable/examples/asr/asr_chunked_inference>`_
* Cache-aware Streaming Conformer with multiple lookaheads - `<https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/asr/models.html#cache-aware-streaming-conformer>`_
* Beam Search decoding
* `Language Modelling for ASR (CTC and RNNT) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/asr_language_modeling.html>`_: N-gram LM in fusion with Beam Search decoding, Neural Rescoring with Transformer
* `Support of long audios for Conformer with memory efficient local attention <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/results.html#inference-on-long-audio>`_
* `Speech Classification, Speech Command Recognition and Language Identification <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/speech_classification/intro.html>`_: MatchboxNet (Command Recognition), AmberNet (LangID)
* `Voice activity Detection (VAD) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/asr/speech_classification/models.html#marblenet-vad>`_: MarbleNet
* ASR with VAD Inference - `Example <https://github.com/NVIDIA/NeMo/tree/stable/examples/asr/asr_vad>`_
* `Speaker Recognition <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/speaker_recognition/intro.html>`_: TitaNet, ECAPA_TDNN, SpeakerNet
* `Speaker Diarization <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/speaker_diarization/intro.html>`_
* Clustering Diarizer: TitaNet, ECAPA_TDNN, SpeakerNet
* Neural Diarizer: MSDD (Multi-scale Diarization Decoder)
* `Speech Intent Detection and Slot Filling <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/speech_intent_slot/intro.html>`_: Conformer-Transformer
* `Pretrained models on different languages. <https://ngc.nvidia.com/catalog/collections/nvidia:nemo_asr>`_: English, Spanish, German, Russian, Chinese, French, Italian, Polish, ...
* `NGC collection of pre-trained speech processing models. <https://ngc.nvidia.com/catalog/collections/nvidia:nemo_asr>`_
* Natural Language Processing
* `NeMo Megatron pre-training of Large Language Models <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/nlp/nemo_megatron/intro.html>`_
* `Neural Machine Translation (NMT) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/machine_translation/machine_translation.html>`_
* `Punctuation and Capitalization <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/punctuation_and_capitalization.html>`_
* `Token classification (named entity recognition) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/token_classification.html>`_
* `Text classification <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/text_classification.html>`_
* `Joint Intent and Slot Classification <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/joint_intent_slot.html>`_
* `Question answering <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/question_answering.html>`_
* `GLUE benchmark <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/glue_benchmark.html>`_
* `Information retrieval <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/information_retrieval.html>`_
* `Entity Linking <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/entity_linking.html>`_
* `Dialogue State Tracking <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/sgd_qa.html>`_
* `Prompt Learning <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/nemo_megatron/prompt_learning.html>`_
* `NGC collection of pre-trained NLP models. <https://ngc.nvidia.com/catalog/collections/nvidia:nemo_nlp>`_
* `Synthetic Tabular Data Generation <https://developer.nvidia.com/blog/generating-synthetic-data-with-transformers-a-solution-for-enterprise-data-challenges/>`_
* Text-to-Speech Synthesis (TTS):
* `Documentation <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/tts/intro.html#>`_
* Mel-Spectrogram generators: FastPitch, SSL FastPitch, Mixer-TTS/Mixer-TTS-X, RAD-TTS, Tacotron2
* Vocoders: HiFiGAN, UnivNet, WaveGlow
* End-to-End Models: VITS
* `Pre-trained Model Checkpoints in NVIDIA GPU Cloud (NGC) <https://ngc.nvidia.com/catalog/collections/nvidia:nemo_tts>`_
* `Tools <https://github.com/NVIDIA/NeMo/tree/stable/tools>`_
* `Text Processing (text normalization and inverse text normalization) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/text_normalization/intro.html>`_
* `NeMo Forced Aligner <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/tools/nemo_forced_aligner.html>`_
* `CTC-Segmentation tool <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/tools/ctc_segmentation.html>`_
* `Speech Data Explorer <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/tools/speech_data_explorer.html>`_: a dash-based tool for interactive exploration of ASR/TTS datasets
* `Speech Data Processor <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/tools/speech_data_processor.html>`_
Built for speed, NeMo can utilize NVIDIA's Tensor Cores and scale out training to multiple GPUs and multiple nodes.
Requirements
------------
1) Python 3.10 or above
2) Pytorch 1.13.1 or above
3) NVIDIA GPU for training
Documentation
-------------
.. |main| image:: https://readthedocs.com/projects/nvidia-nemo/badge/?version=main
:alt: Documentation Status
:scale: 100%
:target: https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/
.. |stable| image:: https://readthedocs.com/projects/nvidia-nemo/badge/?version=stable
:alt: Documentation Status
:scale: 100%
:target: https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/
+---------+-------------+------------------------------------------------------------------------------------------------------------------------------------------+
| Version | Status | Description |
+=========+=============+==========================================================================================================================================+
| Latest | |main| | `Documentation of the latest (i.e. main) branch. <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/>`_ |
+---------+-------------+------------------------------------------------------------------------------------------------------------------------------------------+
| Stable | |stable| | `Documentation of the stable (i.e. most recent release) branch. <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/>`_ |
+---------+-------------+------------------------------------------------------------------------------------------------------------------------------------------+
Tutorials
---------
A great way to start with NeMo is by checking `one of our tutorials <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/starthere/tutorials.html>`_.
Getting help with NeMo
----------------------
FAQ can be found on NeMo's `Discussions board <https://github.com/NVIDIA/NeMo/discussions>`_. You are welcome to ask questions or start discussions there.
Installation
------------
Conda
~~~~~
We recommend installing NeMo in a fresh Conda environment.
.. code-block:: bash
conda create --name nemo python==3.10.12
conda activate nemo
Install PyTorch using their `configurator <https://pytorch.org/get-started/locally/>`_.
.. code-block:: bash
conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia
The command used to install PyTorch may depend on your system. Please use the configurator linked above to find the right command for your system.
Pip
~~~
Use this installation mode if you want the latest released version.
.. code-block:: bash
apt-get update && apt-get install -y libsndfile1 ffmpeg
pip install Cython
pip install nemo_toolkit['all']
Depending on the shell used, you may need to use ``"nemo_toolkit[all]"`` instead in the above command.
Pip from source
~~~~~~~~~~~~~~~
Use this installation mode if you want the version from a particular GitHub branch (e.g main).
.. code-block:: bash
apt-get update && apt-get install -y libsndfile1 ffmpeg
pip install Cython
python -m pip install git+https://github.com/NVIDIA/NeMo.git@{BRANCH}#egg=nemo_toolkit[all]
From source
~~~~~~~~~~~
Use this installation mode if you are contributing to NeMo.
.. code-block:: bash
apt-get update && apt-get install -y libsndfile1 ffmpeg
git clone https://github.com/NVIDIA/NeMo
cd NeMo
./reinstall.sh
If you only want the toolkit without additional conda-based dependencies, you may replace ``reinstall.sh``
with ``pip install -e .`` when your PWD is the root of the NeMo repository.
RNNT
~~~~
Note that RNNT requires numba to be installed from conda.
.. code-block:: bash
conda remove numba
pip uninstall numba
conda install -c conda-forge numba
NeMo Megatron
~~~~~~~~~~~~~
NeMo Megatron training requires NVIDIA Apex to be installed.
Install it manually if not using the NVIDIA PyTorch container.
To install Apex, run
.. code-block:: bash
git clone https://github.com/NVIDIA/apex.git
cd apex
git checkout 52e18c894223800cb611682dce27d88050edf1de
pip install -v --no-build-isolation --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" --global-option="--fast_layer_norm" --global-option="--distributed_adam" --global-option="--deprecated_fused_adam" ./
It is highly recommended to use the NVIDIA PyTorch or NeMo container if having issues installing Apex or any other dependencies.
While installing Apex, it may raise an error if the CUDA version on your system does not match the CUDA version torch was compiled with.
This raise can be avoided by commenting it here: https://github.com/NVIDIA/apex/blob/master/setup.py#L32
cuda-nvprof is needed to install Apex. The version should match the CUDA version that you are using:
.. code-block:: bash
conda install -c nvidia cuda-nvprof=11.8
packaging is also needed:
.. code-block:: bash
pip install packaging
With the latest versions of Apex, the `pyproject.toml` file in Apex may need to be deleted in order to install locally.
Transformer Engine
~~~~~~~~~~~~~~~~~~
NeMo Megatron GPT has been integrated with `NVIDIA Transformer Engine <https://github.com/NVIDIA/TransformerEngine>`_
Transformer Engine enables FP8 training on NVIDIA Hopper GPUs.
`Install <https://docs.nvidia.com/deeplearning/transformer-engine/user-guide/installation.html>`_ it manually if not using the NVIDIA PyTorch container.
.. code-block:: bash
pip install --upgrade git+https://github.com/NVIDIA/TransformerEngine.git@stable
It is highly recommended to use the NVIDIA PyTorch or NeMo container if having issues installing Transformer Engine or any other dependencies.
Transformer Engine requires PyTorch to be built with CUDA 11.8.
Flash Attention
~~~~~~~~~~~~~~~~~~~~
Transformer Engine already supports Flash Attention for GPT models. If you want to use Flash Attention for non-causal models or use it with attention bias (introduced by position encoding, e.g. Alibi), please install `flash-attn <https://github.com/HazyResearch/flash-attention>`_.
.. code-block:: bash
pip install flash-attn
pip install triton==2.0.0.dev20221202
NLP inference UI
~~~~~~~~~~~~~~~~~~~~
To launch the inference web UI server, please install `gradio <https://gradio.app/>`_.
.. code-block:: bash
pip install gradio==3.34.0
NeMo Text Processing
~~~~~~~~~~~~~~~~~~~~
NeMo Text Processing, specifically (Inverse) Text Normalization, is now a separate repository `https://github.com/NVIDIA/NeMo-text-processing <https://github.com/NVIDIA/NeMo-text-processing>`_.
Docker containers:
~~~~~~~~~~~~~~~~~~
We release NeMo containers alongside NeMo releases. For example, NeMo ``r1.20.0`` comes with container ``nemo:23.06``; you may find more details about released containers on the `releases page <https://github.com/NVIDIA/NeMo/releases>`_.
To use a pre-built container, please run
.. code-block:: bash
docker pull nvcr.io/nvidia/nemo:23.06
To build a NeMo container with the Dockerfile from a branch, please run
.. code-block:: bash
DOCKER_BUILDKIT=1 docker build -f Dockerfile -t nemo:latest .
If you choose to work with the main branch, we recommend using NVIDIA's PyTorch container version 23.06-py3 and then installing from GitHub.
.. code-block:: bash
docker run --gpus all -it --rm -v <nemo_github_folder>:/NeMo --shm-size=8g \
-p 8888:8888 -p 6006:6006 --ulimit memlock=-1 --ulimit \
stack=67108864 --device=/dev/snd nvcr.io/nvidia/pytorch:23.06-py3
Examples
--------
Many examples can be found under the `"Examples" <https://github.com/NVIDIA/NeMo/tree/stable/examples>`_ folder.
Contributing
------------
We welcome community contributions! Please refer to `CONTRIBUTING.md <https://github.com/NVIDIA/NeMo/blob/stable/CONTRIBUTING.md>`_ for the process.
Publications
------------
We provide an ever-growing list of publications that utilize the NeMo framework. Please refer to `PUBLICATIONS.md <https://github.com/NVIDIA/NeMo/tree/stable/PUBLICATIONS.md>`_. We welcome the addition of your own articles to this list!
License
-------
NeMo is under `Apache 2.0 license <https://github.com/NVIDIA/NeMo/blob/stable/LICENSE>`_.
[end of README.rst]
[start of examples/asr/experimental/k2/align_speech_parallel.py]
# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
# Based on examples/asr/transcribe_speech_parallel.py
# ASR alignment with multi-GPU/multi-node support for large datasets
# It supports both tarred and non-tarred datasets
# Arguments
# model: path to a nemo/PTL checkpoint file or name of a pretrained model
# predict_ds: config of the dataset/dataloader
# aligner_args: aligner config
# output_path: path to store the predictions
# model_stride: model downsampling factor, 8 for Citrinet models and 4 for Conformer models
#
# Results of each GPU/worker are written into a file named 'predictions_{rank}.json', and aggregated results of all workers are written into 'predictions_all.json'
Example for non-tarred datasets:
python align_speech_parallel.py \
model=stt_en_conformer_ctc_large \
predict_ds.manifest_filepath=/dataset/manifest_file.json \
predict_ds.batch_size=16 \
output_path=/tmp/
Example for tarred datasets:
python align_speech_parallel.py \
predict_ds.is_tarred=true \
predict_ds.manifest_filepath=/tarred_dataset/tarred_audio_manifest.json \
predict_ds.tarred_audio_filepaths=/tarred_dataset/audio__OP_0..127_CL_.tar \
...
By default the trainer uses all the GPUs available and the default precision is FP32.
You may control these via the trainer config. For example, to run the predictions with AMP on just two GPUs:
python align_speech_parallel.py \
trainer.precision=16 \
trainer.gpus=2 \
...
You may control the dataloader's config by setting the predict_ds:
python align_speech_parallel.py \
predict_ds.num_workers=8 \
predict_ds.min_duration=2.0 \
predict_ds.sample_rate=16000 \
model=stt_en_conformer_ctc_small \
...
You may control the aligner's config by setting the aligner_args:
aligner_args.alignment_type=argmax \
aligner_args.word_output=False \
aligner_args.cpu_decoding=True \
aligner_args.decode_batch_size=8 \
aligner_args.ctc_cfg.prob_suppress_index=-1 \
aligner_args.ctc_cfg.prob_suppress_value=0.5 \
aligner_args.rnnt_cfg.predictor_window_size=10 \
aligner_args.decoder_module_cfg.intersect_pruned=true \
aligner_args.decoder_module_cfg.intersect_conf.search_beam=40 \
...
"""
import os
from dataclasses import dataclass, is_dataclass
from typing import Optional
import pytorch_lightning as ptl
import torch
from omegaconf import MISSING, OmegaConf
from nemo.collections.asr.data.audio_to_ctm_dataset import ASRCTMPredictionWriter
from nemo.collections.asr.models import ASRModel
from nemo.collections.asr.models.configs.aligner_config import K2AlignerWrapperModelConfig
from nemo.collections.asr.models.configs.asr_models_config import ASRDatasetConfig
from nemo.collections.asr.models.k2_aligner_model import AlignerWrapperModel
from nemo.core.config import TrainerConfig, hydra_runner
from nemo.utils import logging
from nemo.utils.get_rank import is_global_rank_zero
@dataclass
class ParallelAlignmentConfig:
model: Optional[str] = None # name
predict_ds: ASRDatasetConfig = ASRDatasetConfig(return_sample_id=True, num_workers=4)
aligner_args: K2AlignerWrapperModelConfig = K2AlignerWrapperModelConfig()
output_path: str = MISSING
model_stride: int = 8
trainer: TrainerConfig = TrainerConfig(gpus=-1, accelerator="ddp")
# these arguments will be ignored
return_predictions: bool = False
use_cer: bool = False
def match_train_config(predict_ds, train_ds):
# Copies the important configurations from the model's train dataset config
# into predict_ds, so that prediction matches the training configuration.
if train_ds is None:
return
predict_ds.sample_rate = train_ds.get("sample_rate", 16000)
cfg_name_list = [
"int_values",
"use_start_end_token",
"blank_index",
"unk_index",
"normalize",
"parser",
"eos_id",
"bos_id",
"pad_id",
]
if is_dataclass(predict_ds):
predict_ds = OmegaConf.structured(predict_ds)
for cfg_name in cfg_name_list:
if hasattr(train_ds, cfg_name):
setattr(predict_ds, cfg_name, getattr(train_ds, cfg_name))
return predict_ds
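As a minimal sketch of this field-copying pattern (the dataclasses and helper below are hypothetical stand-ins, not NeMo types), any listed attribute present on the train config is mirrored onto the predict config, while unrelated fields are left untouched:

```python
from dataclasses import dataclass

# Hypothetical configs standing in for the train/predict dataset configs.
@dataclass
class TrainDS:
    sample_rate: int = 8000
    normalize: str = "per_feature"

@dataclass
class PredictDS:
    sample_rate: int = 16000
    normalize: str = ""
    batch_size: int = 16  # not in the copy list, so it is preserved

def copy_matching(predict, train, names=("sample_rate", "normalize")):
    # Mirror every listed attribute that exists on the train config.
    for name in names:
        if hasattr(train, name):
            setattr(predict, name, getattr(train, name))
    return predict

p = copy_matching(PredictDS(), TrainDS())
print(p.sample_rate, p.normalize, p.batch_size)  # 8000 per_feature 16
```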
@hydra_runner(config_name="AlignmentConfig", schema=ParallelAlignmentConfig)
def main(cfg: ParallelAlignmentConfig):
if cfg.model.endswith(".nemo"):
logging.info("Attempting to initialize from .nemo file")
model = ASRModel.restore_from(restore_path=cfg.model, map_location="cpu")
elif cfg.model.endswith(".ckpt"):
logging.info("Attempting to initialize from .ckpt file")
model = ASRModel.load_from_checkpoint(checkpoint_path=cfg.model, map_location="cpu")
else:
logging.info(
"Attempting to initialize from a pretrained model as the model name does not have the extension of .nemo or .ckpt"
)
model = ASRModel.from_pretrained(model_name=cfg.model, map_location="cpu")
trainer = ptl.Trainer(**cfg.trainer)
cfg.predict_ds.return_sample_id = True
cfg.return_predictions = False
cfg.use_cer = False
cfg.predict_ds = match_train_config(predict_ds=cfg.predict_ds, train_ds=model._cfg.train_ds)
data_loader = model._setup_dataloader_from_config(cfg.predict_ds)
os.makedirs(cfg.output_path, exist_ok=True)
# trainer.global_rank is not valid before predict() is called. Need this hack to find the correct global_rank.
global_rank = trainer.node_rank * trainer.num_devices + int(os.environ.get("LOCAL_RANK", 0))
output_file = os.path.join(cfg.output_path, f"predictions_{global_rank}.json")
output_ctm_dir = os.path.join(cfg.output_path, "ctm")
predictor_writer = ASRCTMPredictionWriter(
dataset=data_loader.dataset,
output_file=output_file,
output_ctm_dir=output_ctm_dir,
time_per_frame=cfg.model_stride * model._cfg.preprocessor['window_stride'],
)
trainer.callbacks.extend([predictor_writer])
aligner_wrapper = AlignerWrapperModel(model=model, cfg=cfg.aligner_args)
trainer.predict(model=aligner_wrapper, dataloaders=data_loader, return_predictions=cfg.return_predictions)
samples_num = predictor_writer.close_output_file()
logging.info(
f"Prediction on rank {global_rank} is done for {samples_num} samples and results are stored in {output_file}."
)
if torch.distributed.is_initialized():
torch.distributed.barrier()
samples_num = 0
if is_global_rank_zero():
output_file = os.path.join(cfg.output_path, "predictions_all.json")
logging.info(f"Prediction files are being aggregated in {output_file}.")
with open(output_file, 'tw', encoding="utf-8") as outf:
for rank in range(trainer.world_size):
input_file = os.path.join(cfg.output_path, f"predictions_{rank}.json")
with open(input_file, 'r', encoding="utf-8") as inpf:
lines = inpf.readlines()
samples_num += len(lines)
outf.writelines(lines)
logging.info(
f"Prediction is done for {samples_num} samples in total on all workers and results are aggregated in {output_file}."
)
if __name__ == '__main__':
main()
[end of examples/asr/experimental/k2/align_speech_parallel.py]
[start of nemo/collections/asr/metrics/rnnt_wer.py]
# Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import copy
import re
from abc import abstractmethod
from dataclasses import dataclass, is_dataclass
from typing import Callable, Dict, List, Optional, Tuple, Union
import editdistance
import numpy as np
import torch
from omegaconf import OmegaConf
from torchmetrics import Metric
from nemo.collections.asr.metrics.wer import move_dimension_to_the_front
from nemo.collections.asr.parts.submodules import rnnt_beam_decoding as beam_decode
from nemo.collections.asr.parts.submodules import rnnt_greedy_decoding as greedy_decode
from nemo.collections.asr.parts.utils.asr_confidence_utils import ConfidenceConfig, ConfidenceMixin
from nemo.collections.asr.parts.utils.rnnt_utils import Hypothesis, NBestHypotheses
from nemo.utils import logging
__all__ = ['RNNTDecoding', 'RNNTWER']
class AbstractRNNTDecoding(ConfidenceMixin):
"""
Used for performing RNN-T auto-regressive decoding of the Decoder+Joint network given the encoder state.
Args:
decoding_cfg: A dict-like object which contains the following key-value pairs.
strategy: str value which represents the type of decoding that can occur.
Possible values are :
- greedy, greedy_batch (for greedy decoding).
- beam, tsd, alsd (for beam search decoding).
compute_hypothesis_token_set: A bool flag, which determines whether to compute a list of decoded
tokens as well as the decoded string. Default is False in order to avoid double decoding
unless required.
preserve_alignments: Bool flag which preserves the history of logprobs generated during
decoding (sample / batched). When set to true, the Hypothesis will contain
the non-null value for `alignments` in it. Here, `alignments` is a List of List of
Tuple(Tensor (of length V + 1), Tensor(scalar, label after argmax)).
In order to obtain this hypothesis, please utilize `rnnt_decoder_predictions_tensor` function
with the `return_hypotheses` flag set to True.
The length of the list corresponds to the Acoustic Length (T).
Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more targets from a vocabulary.
U is the number of target tokens for the current timestep Ti.
compute_timestamps: A bool flag, which determines whether to compute the character/subword, or
word based timestamp mapping the output log-probabilities to discrete intervals of timestamps.
The timestamps will be available in the returned Hypothesis.timestep as a dictionary.
rnnt_timestamp_type: A str value, which represents the types of timestamps that should be calculated.
Can take the following values - "char" for character/subword time stamps, "word" for word level
time stamps and "all" (default), for both character level and word level time stamps.
word_seperator: Str token representing the separator between words.
preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores
generated during decoding (sample / batched). When set to true, the Hypothesis will contain
the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of List of floats.
confidence_cfg: A dict-like object which contains the following key-value pairs related to confidence
scores. In order to obtain hypotheses with confidence scores, please utilize
`rnnt_decoder_predictions_tensor` function with the `preserve_frame_confidence` flag set to True.
preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores
generated during decoding (sample / batched). When set to true, the Hypothesis will contain
the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of List of floats.
The length of the list corresponds to the Acoustic Length (T).
Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more confidence scores.
U is the number of target tokens for the current timestep Ti.
preserve_token_confidence: Bool flag which preserves the history of per-token confidence scores
generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
the non-null value for `token_confidence` in it. Here, `token_confidence` is a List of floats.
The length of the list corresponds to the number of recognized tokens.
preserve_word_confidence: Bool flag which preserves the history of per-word confidence scores
generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
the non-null value for `word_confidence` in it. Here, `word_confidence` is a List of floats.
The length of the list corresponds to the number of recognized words.
exclude_blank: Bool flag indicating that blank token confidence scores are to be excluded
from the `token_confidence`.
aggregation: Which aggregation type to use for collapsing per-token confidence into per-word confidence.
Valid options are `mean`, `min`, `max`, `prod`.
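A minimal sketch of collapsing per-token confidence into a per-word score with these aggregation options (the helper is illustrative, not NeMo's API):

```python
from functools import reduce

# Illustrative aggregation table matching the documented options.
AGGREGATIONS = {
    "mean": lambda xs: sum(xs) / len(xs),
    "min": min,
    "max": max,
    "prod": lambda xs: reduce(lambda a, b: a * b, xs, 1.0),
}

def word_confidence(token_confidence, aggregation="min"):
    # Collapse the per-token scores of one word into a single score.
    return AGGREGATIONS[aggregation](token_confidence)

print(word_confidence([0.9, 0.8, 0.5], "min"))  # 0.5
print(word_confidence([0.9, 0.8, 0.5], "max"))  # 0.9
```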
measure_cfg: A dict-like object which contains the measure name and settings to compute per-frame
confidence scores.
name: The measure name (str).
Supported values:
- 'max_prob' for using the maximum token probability as a confidence.
- 'entropy' for using a normalized entropy of a log-likelihood vector.
entropy_type: Which type of entropy to use (str).
Used if confidence_measure_cfg.name is set to `entropy`.
Supported values:
- 'gibbs' for the (standard) Gibbs entropy. If the alpha (α) is provided,
the formula is the following: H_α = -sum_i((p^α_i)*log(p^α_i)).
Note that for this entropy, the alpha should comply with the following inequality:
(log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= α <= (1+log(V-1))/log(V-1)
where V is the model vocabulary size.
- 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
Tsallis entropy formula is the following: H_α = 1/(α-1)*(1-sum_i(p^α_i)),
where α is a parameter. When α == 1, it works like the Gibbs entropy.
More: https://en.wikipedia.org/wiki/Tsallis_entropy
- 'renyi' for the Rényi entropy.
Rényi entropy formula is the following: H_α = 1/(1-α)*log_2(sum_i(p^α_i)),
where α is a parameter. When α == 1, it works like the Gibbs entropy.
More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
alpha: Power scale for logsoftmax (α for entropies). Here we restrict it to be > 0.
When the alpha equals one, scaling is not applied to 'max_prob',
and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
entropy_norm: A mapping of the entropy value to the interval [0,1].
Supported values:
- 'lin' for using the linear mapping.
- 'exp' for using exponential mapping with linear shift.
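As a minimal sketch of the measures above (the helper name is ours, not NeMo's): for alpha = 1 the Gibbs entropy reduces to the Shannon entropy H = -sum_i(p_i*log(p_i)), and one linear ('lin') mapping to [0, 1] is confidence = 1 - H/log(V), so a near-one-hot frame scores close to 1 and a uniform frame scores 0:

```python
import math

# Illustrative entropy-based frame confidence for the alpha = 1 case,
# using the 'lin' normalization 1 - H / log(V).
def entropy_confidence(probs):
    v = len(probs)  # vocabulary size V
    h = -sum(p * math.log(p) for p in probs if p > 0)  # Shannon entropy
    return 1.0 - h / math.log(v)

print(round(entropy_confidence([0.97, 0.01, 0.01, 0.01]), 3))  # 0.879
print(round(entropy_confidence([0.25, 0.25, 0.25, 0.25]), 3))  # 0.0
```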
The config may further contain the following sub-dictionaries:
"greedy":
max_symbols: int, describing the maximum number of target tokens to decode per
timestep during greedy decoding. Setting to larger values allows longer sentences
to be decoded, at the cost of increased execution time.
preserve_frame_confidence: Same as above, overrides above value.
confidence_measure_cfg: Same as above, overrides confidence_cfg.measure_cfg.
"beam":
beam_size: int, defining the beam size for beam search. Must be >= 1.
If beam_size == 1, will perform cached greedy search. This might give slightly different
results compared to the greedy search above.
score_norm: optional bool, whether to normalize the returned beam score in the hypotheses.
Set to True by default.
return_best_hypothesis: optional bool, whether to return just the best hypothesis or all of the
hypotheses after beam search has concluded. This flag is set by default.
tsd_max_sym_exp: optional int, determines number of symmetric expansions of the target symbols
per timestep of the acoustic model. Larger values will allow longer sentences to be decoded,
at increased cost to execution time.
alsd_max_target_len: optional int or float, determines the potential maximum target sequence length.
If an integer is provided, it can decode sequences of that particular maximum length.
If a float is provided, it can decode sequences of int(alsd_max_target_len * seq_len),
where seq_len is the length of the acoustic model output (T).
NOTE:
If a float is provided, it can be greater than 1!
By default, a float of 2.0 is used so that a target sequence can be at most twice
as long as the acoustic model output length T.
maes_num_steps: Number of adaptive steps to take. From the paper, 2 steps is generally sufficient,
and can be reduced to 1 to improve decoding speed while sacrificing some accuracy. int > 0.
maes_prefix_alpha: Maximum prefix length in prefix search. Must be an integer; it is advised to keep this at 1
in order to reduce the cost of the expensive beam search later. int >= 0.
maes_expansion_beta: Maximum number of prefix expansions allowed, in addition to the beam size.
Effectively, the number of hypothesis = beam_size + maes_expansion_beta. Must be an int >= 0,
and affects the speed of inference since large values will perform large beam search in the next step.
maes_expansion_gamma: Float pruning threshold used in the prune-by-value step when computing the expansions.
The default (2.3) is selected from the paper. It performs a comparison (max_log_prob - gamma <= log_prob[v])
where v is all vocabulary indices in the Vocab set and max_log_prob is the "most" likely token to be
predicted. Gamma therefore provides a margin of additional tokens which can be potential candidates for
expansion apart from the "most likely" candidate.
Lower values will reduce the number of expansions (by increasing pruning-by-value, thereby improving speed
but hurting accuracy). Higher values will increase the number of expansions (by reducing pruning-by-value,
thereby reducing speed but potentially improving accuracy). This is a hyper parameter to be experimentally
tuned on a validation set.
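A minimal sketch of this prune-by-value comparison (the helper name is illustrative, not NeMo's API): only vocabulary indices whose log-probability lies within gamma of the most likely token survive as expansion candidates:

```python
# Keep vocabulary indices v satisfying max_log_prob - gamma <= log_prob[v].
def prune_by_value(log_probs, gamma=2.3):
    best = max(log_probs)
    return [v for v, lp in enumerate(log_probs) if best - gamma <= lp]

print(prune_by_value([-0.1, -1.5, -3.0, -0.6]))  # [0, 1, 3]
```

A smaller gamma prunes more aggressively: with gamma=0.2 only index 0 survives in the example above.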
softmax_temperature: Scales the logits of the joint prior to computing log_softmax.
decoder: The Decoder/Prediction network module.
joint: The Joint network module.
blank_id: The id of the RNNT blank token.
"""
def __init__(self, decoding_cfg, decoder, joint, blank_id: int):
super(AbstractRNNTDecoding, self).__init__()
# Convert dataclass to config object
if is_dataclass(decoding_cfg):
decoding_cfg = OmegaConf.structured(decoding_cfg)
self.cfg = decoding_cfg
self.blank_id = blank_id
self.num_extra_outputs = joint.num_extra_outputs
self.big_blank_durations = self.cfg.get("big_blank_durations", None)
self.durations = self.cfg.get("durations", None)
self.compute_hypothesis_token_set = self.cfg.get("compute_hypothesis_token_set", False)
self.compute_langs = decoding_cfg.get('compute_langs', False)
self.preserve_alignments = self.cfg.get('preserve_alignments', None)
self.joint_fused_batch_size = self.cfg.get('fused_batch_size', None)
self.compute_timestamps = self.cfg.get('compute_timestamps', None)
self.word_seperator = self.cfg.get('word_seperator', ' ')
if self.durations is not None: # this means it's a TDT model.
if blank_id == 0:
raise ValueError("blank_id must equal len(non_blank_vocabs) for TDT models")
if self.big_blank_durations is not None:
raise ValueError("duration and big_blank_durations can't both be not None")
if self.cfg.strategy not in ['greedy', 'greedy_batch']:
raise ValueError("currently only greedy and greedy_batch inference is supported for TDT models")
if self.big_blank_durations is not None: # this means it's a multi-blank model.
if blank_id == 0:
raise ValueError("blank_id must equal len(vocabs) for multi-blank RNN-T models")
if self.cfg.strategy not in ['greedy', 'greedy_batch']:
raise ValueError(
"currently only greedy and greedy_batch inference is supported for multi-blank models"
)
possible_strategies = ['greedy', 'greedy_batch', 'beam', 'tsd', 'alsd', 'maes']
if self.cfg.strategy not in possible_strategies:
raise ValueError(f"Decoding strategy must be one of {possible_strategies}")
# Update preserve alignments
if self.preserve_alignments is None:
if self.cfg.strategy in ['greedy', 'greedy_batch']:
self.preserve_alignments = self.cfg.greedy.get('preserve_alignments', False)
elif self.cfg.strategy in ['beam', 'tsd', 'alsd', 'maes']:
self.preserve_alignments = self.cfg.beam.get('preserve_alignments', False)
# Update compute timestamps
if self.compute_timestamps is None:
if self.cfg.strategy in ['greedy', 'greedy_batch']:
self.compute_timestamps = self.cfg.greedy.get('compute_timestamps', False)
elif self.cfg.strategy in ['beam', 'tsd', 'alsd', 'maes']:
self.compute_timestamps = self.cfg.beam.get('compute_timestamps', False)
# Test if alignments are being preserved for RNNT
if self.compute_timestamps is True and self.preserve_alignments is False:
raise ValueError("If `compute_timestamps` flag is set, then `preserve_alignments` flag must also be set.")
# initialize confidence-related fields
self._init_confidence(self.cfg.get('confidence_cfg', None))
# Confidence estimation is not implemented for these strategies
if (
not self.preserve_frame_confidence
and self.cfg.strategy in ['beam', 'tsd', 'alsd', 'maes']
and self.cfg.beam.get('preserve_frame_confidence', False)
):
raise NotImplementedError(f"Confidence calculation is not supported for strategy `{self.cfg.strategy}`")
if self.cfg.strategy == 'greedy':
if self.big_blank_durations is None:
if self.durations is None:
self.decoding = greedy_decode.GreedyRNNTInfer(
decoder_model=decoder,
joint_model=joint,
blank_index=self.blank_id,
max_symbols_per_step=(
self.cfg.greedy.get('max_symbols', None)
or self.cfg.greedy.get('max_symbols_per_step', None)
),
preserve_alignments=self.preserve_alignments,
preserve_frame_confidence=self.preserve_frame_confidence,
confidence_measure_cfg=self.confidence_measure_cfg,
)
else:
self.decoding = greedy_decode.GreedyTDTInfer(
decoder_model=decoder,
joint_model=joint,
blank_index=self.blank_id,
durations=self.durations,
max_symbols_per_step=(
self.cfg.greedy.get('max_symbols', None)
or self.cfg.greedy.get('max_symbols_per_step', None)
),
preserve_alignments=self.preserve_alignments,
preserve_frame_confidence=self.preserve_frame_confidence,
confidence_measure_cfg=self.confidence_measure_cfg,
)
else:
self.decoding = greedy_decode.GreedyMultiblankRNNTInfer(
decoder_model=decoder,
joint_model=joint,
blank_index=self.blank_id,
big_blank_durations=self.big_blank_durations,
max_symbols_per_step=(
self.cfg.greedy.get('max_symbols', None) or self.cfg.greedy.get('max_symbols_per_step', None)
),
preserve_alignments=self.preserve_alignments,
preserve_frame_confidence=self.preserve_frame_confidence,
confidence_measure_cfg=self.confidence_measure_cfg,
)
elif self.cfg.strategy == 'greedy_batch':
if self.big_blank_durations is None:
if self.durations is None:
self.decoding = greedy_decode.GreedyBatchedRNNTInfer(
decoder_model=decoder,
joint_model=joint,
blank_index=self.blank_id,
max_symbols_per_step=(
self.cfg.greedy.get('max_symbols', None)
or self.cfg.greedy.get('max_symbols_per_step', None)
),
preserve_alignments=self.preserve_alignments,
preserve_frame_confidence=self.preserve_frame_confidence,
confidence_measure_cfg=self.confidence_measure_cfg,
)
else:
self.decoding = greedy_decode.GreedyBatchedTDTInfer(
decoder_model=decoder,
joint_model=joint,
blank_index=self.blank_id,
durations=self.durations,
max_symbols_per_step=(
self.cfg.greedy.get('max_symbols', None)
or self.cfg.greedy.get('max_symbols_per_step', None)
),
preserve_alignments=self.preserve_alignments,
preserve_frame_confidence=self.preserve_frame_confidence,
confidence_measure_cfg=self.confidence_measure_cfg,
)
else:
self.decoding = greedy_decode.GreedyBatchedMultiblankRNNTInfer(
decoder_model=decoder,
joint_model=joint,
blank_index=self.blank_id,
big_blank_durations=self.big_blank_durations,
max_symbols_per_step=(
self.cfg.greedy.get('max_symbols', None) or self.cfg.greedy.get('max_symbols_per_step', None)
),
preserve_alignments=self.preserve_alignments,
preserve_frame_confidence=self.preserve_frame_confidence,
confidence_measure_cfg=self.confidence_measure_cfg,
)
elif self.cfg.strategy == 'beam':
self.decoding = beam_decode.BeamRNNTInfer(
decoder_model=decoder,
joint_model=joint,
beam_size=self.cfg.beam.beam_size,
return_best_hypothesis=decoding_cfg.beam.get('return_best_hypothesis', True),
search_type='default',
score_norm=self.cfg.beam.get('score_norm', True),
softmax_temperature=self.cfg.beam.get('softmax_temperature', 1.0),
preserve_alignments=self.preserve_alignments,
)
elif self.cfg.strategy == 'tsd':
self.decoding = beam_decode.BeamRNNTInfer(
decoder_model=decoder,
joint_model=joint,
beam_size=self.cfg.beam.beam_size,
return_best_hypothesis=decoding_cfg.beam.get('return_best_hypothesis', True),
search_type='tsd',
score_norm=self.cfg.beam.get('score_norm', True),
tsd_max_sym_exp_per_step=self.cfg.beam.get('tsd_max_sym_exp', 10),
softmax_temperature=self.cfg.beam.get('softmax_temperature', 1.0),
preserve_alignments=self.preserve_alignments,
)
elif self.cfg.strategy == 'alsd':
self.decoding = beam_decode.BeamRNNTInfer(
decoder_model=decoder,
joint_model=joint,
beam_size=self.cfg.beam.beam_size,
return_best_hypothesis=decoding_cfg.beam.get('return_best_hypothesis', True),
search_type='alsd',
score_norm=self.cfg.beam.get('score_norm', True),
alsd_max_target_len=self.cfg.beam.get('alsd_max_target_len', 2),
softmax_temperature=self.cfg.beam.get('softmax_temperature', 1.0),
preserve_alignments=self.preserve_alignments,
)
elif self.cfg.strategy == 'maes':
self.decoding = beam_decode.BeamRNNTInfer(
decoder_model=decoder,
joint_model=joint,
beam_size=self.cfg.beam.beam_size,
return_best_hypothesis=decoding_cfg.beam.get('return_best_hypothesis', True),
search_type='maes',
score_norm=self.cfg.beam.get('score_norm', True),
maes_num_steps=self.cfg.beam.get('maes_num_steps', 2),
maes_prefix_alpha=self.cfg.beam.get('maes_prefix_alpha', 1),
maes_expansion_gamma=self.cfg.beam.get('maes_expansion_gamma', 2.3),
maes_expansion_beta=self.cfg.beam.get('maes_expansion_beta', 2.0),
softmax_temperature=self.cfg.beam.get('softmax_temperature', 1.0),
preserve_alignments=self.preserve_alignments,
ngram_lm_model=self.cfg.beam.get('ngram_lm_model', None),
ngram_lm_alpha=self.cfg.beam.get('ngram_lm_alpha', 0.0),
hat_subtract_ilm=self.cfg.beam.get('hat_subtract_ilm', False),
hat_ilm_weight=self.cfg.beam.get('hat_ilm_weight', 0.0),
)
else:
raise ValueError(
f"Incorrect decoding strategy supplied. Must be one of {possible_strategies}\n"
f"but was provided {self.cfg.strategy}"
)
# Update the joint fused batch size or disable it entirely if needed.
self.update_joint_fused_batch_size()
def rnnt_decoder_predictions_tensor(
self,
encoder_output: torch.Tensor,
encoded_lengths: torch.Tensor,
return_hypotheses: bool = False,
partial_hypotheses: Optional[List[Hypothesis]] = None,
) -> Tuple[List[str], Optional[List[List[str]]], Optional[Union[Hypothesis, NBestHypotheses]]]:
"""
Decode an encoder output by autoregressive decoding of the Decoder+Joint networks.
Args:
encoder_output: torch.Tensor of shape [B, D, T].
encoded_lengths: torch.Tensor containing lengths of the padded encoder outputs. Shape [B].
return_hypotheses: bool. If set to True it will return list of Hypothesis or NBestHypotheses
Returns:
If `return_best_hypothesis` is set:
A tuple (hypotheses, None):
hypotheses - list of Hypothesis (best hypothesis per sample).
Look at rnnt_utils.Hypothesis for more information.
If `return_best_hypothesis` is not set:
A tuple(hypotheses, all_hypotheses)
hypotheses - list of Hypothesis (best hypothesis per sample).
Look at rnnt_utils.Hypothesis for more information.
all_hypotheses - list of NBestHypotheses. Each NBestHypotheses further contains a sorted
list of all the hypotheses of the model per sample.
Look at rnnt_utils.NBestHypotheses for more information.
"""
# Compute hypotheses
with torch.inference_mode():
hypotheses_list = self.decoding(
encoder_output=encoder_output, encoded_lengths=encoded_lengths, partial_hypotheses=partial_hypotheses
) # type: [List[Hypothesis]]
# extract the hypotheses
hypotheses_list = hypotheses_list[0] # type: List[Hypothesis]
prediction_list = hypotheses_list
if isinstance(prediction_list[0], NBestHypotheses):
hypotheses = []
all_hypotheses = []
for nbest_hyp in prediction_list: # type: NBestHypotheses
n_hyps = nbest_hyp.n_best_hypotheses # Extract all hypotheses for this sample
decoded_hyps = self.decode_hypothesis(n_hyps) # type: List[str]
# If computing timestamps
if self.compute_timestamps is True:
timestamp_type = self.cfg.get('rnnt_timestamp_type', 'all')
for hyp_idx in range(len(decoded_hyps)):
decoded_hyps[hyp_idx] = self.compute_rnnt_timestamps(decoded_hyps[hyp_idx], timestamp_type)
hypotheses.append(decoded_hyps[0]) # best hypothesis
all_hypotheses.append(decoded_hyps)
if return_hypotheses:
return hypotheses, all_hypotheses
best_hyp_text = [h.text for h in hypotheses]
all_hyp_text = [h.text for hh in all_hypotheses for h in hh]
return best_hyp_text, all_hyp_text
else:
hypotheses = self.decode_hypothesis(prediction_list) # type: List[str]
# If computing timestamps
if self.compute_timestamps is True:
timestamp_type = self.cfg.get('rnnt_timestamp_type', 'all')
for hyp_idx in range(len(hypotheses)):
hypotheses[hyp_idx] = self.compute_rnnt_timestamps(hypotheses[hyp_idx], timestamp_type)
if return_hypotheses:
# greedy decoding, can get high-level confidence scores
if self.preserve_frame_confidence and (
self.preserve_word_confidence or self.preserve_token_confidence
):
hypotheses = self.compute_confidence(hypotheses)
return hypotheses, None
best_hyp_text = [h.text for h in hypotheses]
return best_hyp_text, None
def decode_hypothesis(self, hypotheses_list: List[Hypothesis]) -> List[Union[Hypothesis, NBestHypotheses]]:
"""
Decode a list of hypotheses into a list of strings.
Args:
hypotheses_list: List of Hypothesis.
Returns:
A list of strings.
"""
for ind in range(len(hypotheses_list)):
# Extract the integer encoded hypothesis
prediction = hypotheses_list[ind].y_sequence
if not isinstance(prediction, list):
prediction = prediction.tolist()
# RNN-T sample level is already preprocessed by implicit RNNT decoding
# Simply remove any blank and possibly big blank tokens
if self.big_blank_durations is not None: # multi-blank RNNT
num_extra_outputs = len(self.big_blank_durations)
prediction = [p for p in prediction if p < self.blank_id - num_extra_outputs]
elif self.durations is not None: # TDT model.
prediction = [p for p in prediction if p < self.blank_id]
else: # standard RNN-T
prediction = [p for p in prediction if p != self.blank_id]
# De-tokenize the integer tokens; if not computing timestamps
if self.compute_timestamps is True:
# keep the original predictions, wrap with the number of repetitions per token and alignments
# this is done so that `rnnt_decoder_predictions_tensor()` can process this hypothesis
# in order to compute exact time stamps.
alignments = copy.deepcopy(hypotheses_list[ind].alignments)
token_repetitions = [1] * len(alignments) # preserve number of repetitions per token
hypothesis = (prediction, alignments, token_repetitions)
else:
hypothesis = self.decode_tokens_to_str(prediction)
# TODO: remove
# collapse leading spaces before . , ? for PC models
hypothesis = re.sub(r'(\s+)([\.\,\?])', r'\2', hypothesis)
if self.compute_hypothesis_token_set:
hypotheses_list[ind].tokens = self.decode_ids_to_tokens(prediction)
# De-tokenize the integer tokens
hypotheses_list[ind].text = hypothesis
return hypotheses_list
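The three blank-filtering rules in `decode_hypothesis` can be sketched as a standalone helper. This is a minimal illustration, not the NeMo API; `blank_id`, `big_blank_durations`, and `durations` are stand-ins for the corresponding decoding attributes:

```python
def filter_blanks(prediction, blank_id, big_blank_durations=None, durations=None):
    """Drop blank (and big-blank) token ids from an integer token sequence.

    Mirrors the three cases above: multi-blank RNNT, TDT, and standard RNN-T.
    """
    if big_blank_durations is not None:  # multi-blank RNNT: strip blank and all big blanks
        num_extra_outputs = len(big_blank_durations)
        return [p for p in prediction if p < blank_id - num_extra_outputs]
    if durations is not None:  # TDT model: tokens above blank_id encode durations
        return [p for p in prediction if p < blank_id]
    return [p for p in prediction if p != blank_id]  # standard RNN-T


# Standard RNN-T with blank_id=5: blanks are removed, real tokens kept.
print(filter_blanks([1, 5, 2, 5], blank_id=5))
# Multi-blank with blank_id=7 and two big blanks (ids 5 and 6 dropped as well).
print(filter_blanks([1, 6, 4, 5], blank_id=7, big_blank_durations=[2, 4]))
```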
def compute_confidence(self, hypotheses_list: List[Hypothesis]) -> List[Hypothesis]:
"""
Computes high-level (per-token and/or per-word) confidence scores for a list of hypotheses.
Assumes that `frame_confidence` is present in the hypotheses.
Args:
hypotheses_list: List of Hypothesis.
Returns:
A list of hypotheses with high-level confidence scores.
"""
if self.exclude_blank_from_confidence:
for hyp in hypotheses_list:
hyp.token_confidence = hyp.non_blank_frame_confidence
else:
for hyp in hypotheses_list:
offset = 0
token_confidence = []
if len(hyp.timestep) > 0:
for ts, te in zip(hyp.timestep, hyp.timestep[1:] + [len(hyp.frame_confidence)]):
if ts != te:
# <blank> tokens are considered to belong to the last non-blank token, if any.
token_confidence.append(
self._aggregate_confidence(
[hyp.frame_confidence[ts][offset]]
+ [fc[0] for fc in hyp.frame_confidence[ts + 1 : te]]
)
)
offset = 0
else:
token_confidence.append(hyp.frame_confidence[ts][offset])
offset += 1
hyp.token_confidence = token_confidence
if self.preserve_word_confidence:
for hyp in hypotheses_list:
hyp.word_confidence = self._aggregate_token_confidence(hyp)
return hypotheses_list
@abstractmethod
def decode_tokens_to_str(self, tokens: List[int]) -> str:
"""
Implemented by subclass in order to decode a token id list into a string.
Args:
tokens: List of int representing the token ids.
Returns:
A decoded string.
"""
raise NotImplementedError()
@abstractmethod
def decode_ids_to_tokens(self, tokens: List[int]) -> List[str]:
"""
Implemented by subclass in order to decode a token id list into a token list.
A token list is the string representation of each token id.
Args:
tokens: List of int representing the token ids.
Returns:
A list of decoded tokens.
"""
raise NotImplementedError()
@abstractmethod
def decode_tokens_to_lang(self, tokens: List[int]) -> str:
"""
Implemented by subclass in order to
compute the most likely language ID (LID) string given the tokens.
Args:
tokens: List of int representing the token ids.
Returns:
A decoded LID string.
"""
raise NotImplementedError()
@abstractmethod
def decode_ids_to_langs(self, tokens: List[int]) -> List[str]:
"""
Implemented by subclass in order to
decode a token id list into language ID (LID) list.
Args:
tokens: List of int representing the token ids.
Returns:
A list of decoded LIDS.
"""
raise NotImplementedError()
def update_joint_fused_batch_size(self):
if self.joint_fused_batch_size is None:
# do nothing and let the Joint itself handle setting up of the fused batch
return
if not hasattr(self.decoding.joint, 'set_fused_batch_size'):
logging.warning(
"The joint module does not have `set_fused_batch_size(int)` as a setter function.\n"
"Ignoring update of joint fused batch size."
)
return
if not hasattr(self.decoding.joint, 'set_fuse_loss_wer'):
logging.warning(
"The joint module does not have `set_fuse_loss_wer(bool, RNNTLoss, RNNTWER)` "
"as a setter function.\n"
"Ignoring update of joint fused batch size."
)
return
if self.joint_fused_batch_size > 0:
self.decoding.joint.set_fused_batch_size(self.joint_fused_batch_size)
else:
logging.info("Joint fused batch size <= 0; Will temporarily disable fused batch step in the Joint.")
self.decoding.joint.set_fuse_loss_wer(False)
def compute_rnnt_timestamps(self, hypothesis: Hypothesis, timestamp_type: str = "all"):
assert timestamp_type in ['char', 'word', 'all']
# Unpack the temporary storage
decoded_prediction, alignments, token_repetitions = hypothesis.text
# Retrieve offsets
char_offsets = word_offsets = None
char_offsets = self._compute_offsets(hypothesis, token_repetitions, self.blank_id)
# finally, set the flattened decoded predictions to text field for later text decoding
hypothesis.text = decoded_prediction
# Assert number of offsets and hypothesis tokens are 1:1 match.
num_flattened_tokens = 0
for t in range(len(char_offsets)):
# Subtract one here for the extra RNNT BLANK token emitted to designate "End of timestep"
num_flattened_tokens += len(char_offsets[t]['char']) - 1
if num_flattened_tokens != len(hypothesis.text):
raise ValueError(
f"`char_offsets`: {char_offsets} and `processed_tokens`: {hypothesis.text}"
" have to be of the same length, but are: "
f"`len(offsets)`: {len(char_offsets)} and `len(processed_tokens)`:"
f" {len(hypothesis.text)}"
)
encoded_char_offsets = copy.deepcopy(char_offsets)
# Correctly process the token ids to chars/subwords.
for i, offsets in enumerate(char_offsets):
decoded_chars = []
for char in offsets['char'][:-1]: # ignore the RNNT Blank token at end of every timestep with -1 subset
decoded_chars.append(self.decode_tokens_to_str([int(char)]))
char_offsets[i]["char"] = decoded_chars
# detect char vs subword models
lens = []
for v in char_offsets:
tokens = v["char"]
# each token may be either 1 unicode token or multiple unicode token
# for character based models, only 1 token is used
# for subword, more than one token can be used.
# Computing max, then summing up total lens is a test to check for char vs subword
# For char models, len(lens) == sum(lens)
# but this is violated for subword models.
max_len = max(len(c) for c in tokens)
lens.append(max_len)
# array of one or more chars implies subword based model with multiple char emitted per TxU step (via subword)
if sum(lens) > len(lens):
text_type = 'subword'
else:
# full array of ones implies character based model with 1 char emitted per TxU step
text_type = 'char'
# retrieve word offsets from character offsets
word_offsets = None
if timestamp_type in ['word', 'all']:
if text_type == 'char':
word_offsets = self._get_word_offsets_chars(char_offsets, word_delimiter_char=self.word_seperator)
else:
# utilize the copy of char offsets with the correct integer ids for tokens
# so as to avoid tokenize -> detokenize -> compare -> merge steps.
word_offsets = self._get_word_offsets_subwords_sentencepiece(
encoded_char_offsets,
hypothesis,
decode_ids_to_tokens=self.decode_ids_to_tokens,
decode_tokens_to_str=self.decode_tokens_to_str,
)
# attach results
if len(hypothesis.timestep) > 0:
timestep_info = hypothesis.timestep
else:
timestep_info = []
# Setup defaults
hypothesis.timestep = {"timestep": timestep_info}
# Add char / subword time stamps
if char_offsets is not None and timestamp_type in ['char', 'all']:
hypothesis.timestep['char'] = char_offsets
# Add word time stamps
if word_offsets is not None and timestamp_type in ['word', 'all']:
hypothesis.timestep['word'] = word_offsets
# Convert the flattened token indices to text
hypothesis.text = self.decode_tokens_to_str(hypothesis.text)
return hypothesis
@staticmethod
def _compute_offsets(
hypothesis: Hypothesis, token_repetitions: List[int], rnnt_token: int
) -> List[Dict[str, Union[str, int]]]:
"""
Utility method that calculates the individual time indices where a token starts and ends.
Args:
hypothesis: A Hypothesis object that contains `text` field that holds the character / subword token
emitted at every time step after rnnt collapse.
token_repetitions: A list of ints representing the number of repetitions of each emitted token.
rnnt_token: The integer of the rnnt blank token used during rnnt collapse.
Returns:
A list of dictionaries, each containing "char", "start_offset" and "end_offset".
"""
start_index = 0
# If the exact timestep information is available, utilize the 1st non-rnnt blank token timestep
# as the start index.
if hypothesis.timestep is not None and len(hypothesis.timestep) > 0:
start_index = max(0, hypothesis.timestep[0] - 1)
# Construct the start and end indices brackets
end_indices = np.asarray(token_repetitions).cumsum()
start_indices = np.concatenate(([start_index], end_indices[:-1]))
# Process the TxU dangling alignment tensor, containing pairs of (logits, label)
alignment_labels = [al_logits_labels for al_logits_labels in hypothesis.text[1]]
for t in range(len(alignment_labels)):
for u in range(len(alignment_labels[t])):
alignment_labels[t][u] = alignment_labels[t][u][1] # pick label from (logit, label) tuple
# Merge the results per token into a list of dictionaries
offsets = [
{"char": a, "start_offset": s, "end_offset": e}
for a, s, e in zip(alignment_labels, start_indices, end_indices)
]
# Filter out RNNT token (blank at [t][0] position). This is because blank can only occur at end of a
# time step for RNNT, so if 0th token is blank, then that timestep is skipped.
offsets = list(filter(lambda offset: offset["char"][0] != rnnt_token, offsets))
return offsets
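The start/end index bracketing above can be checked on a toy input. A hypothetical list of repetition counts yields end indices via a cumulative sum, and each start index is the previous token's end index (with the first start defaulting to 0):

```python
import numpy as np

# Hypothetical repetition counts for three emitted tokens.
token_repetitions = [2, 1, 3]

# End index of each token is the running total of repetitions.
end_indices = np.asarray(token_repetitions).cumsum()        # array([2, 3, 6])
# Start index of each token is the end index of the previous token.
start_indices = np.concatenate(([0], end_indices[:-1]))     # array([0, 2, 3])

print(list(zip(start_indices.tolist(), end_indices.tolist())))
```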
@staticmethod
def _get_word_offsets_chars(
offsets: Dict[str, Union[str, float]], word_delimiter_char: str = " "
) -> Dict[str, Union[str, float]]:
"""
Utility method which constructs word time stamps out of character time stamps.
References:
This code is a port of the Hugging Face code for word time stamp construction.
Args:
offsets: A list of dictionaries, each containing "char", "start_offset" and "end_offset".
word_delimiter_char: Character token that represents the word delimiter. By default, " ".
Returns:
A list of dictionaries containing the word offsets. Each item contains "word", "start_offset" and
"end_offset".
"""
word_offsets = []
last_state = "SPACE"
word = ""
start_offset = 0
end_offset = 0
for i, offset in enumerate(offsets):
chars = offset["char"]
for char in chars:
state = "SPACE" if char == word_delimiter_char else "WORD"
if state == last_state:
# If we are in the same state as before, we simply repeat what we've done before
end_offset = offset["end_offset"]
word += char
else:
# Switching state
if state == "SPACE":
# Finishing a word
word_offsets.append({"word": word, "start_offset": start_offset, "end_offset": end_offset})
else:
# Starting a new word
start_offset = offset["start_offset"]
end_offset = offset["end_offset"]
word = char
last_state = state
if last_state == "WORD":
word_offsets.append({"word": word, "start_offset": start_offset, "end_offset": end_offset})
return word_offsets
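The character-to-word state machine above can be exercised in isolation. The sketch below mirrors the same SPACE/WORD transitions on synthetic offset dictionaries (it is an illustration, not the class method itself):

```python
def chars_to_words(offsets, delimiter=" "):
    """Group character offsets into word offsets, as in _get_word_offsets_chars."""
    word_offsets = []
    last_state, word = "SPACE", ""
    start, end = 0, 0
    for offset in offsets:
        for char in offset["char"]:
            state = "SPACE" if char == delimiter else "WORD"
            if state == last_state:
                # Same state: extend the current word (or keep skipping spaces).
                end = offset["end_offset"]
                word += char
            elif state == "SPACE":
                # WORD -> SPACE transition: the current word is finished.
                word_offsets.append({"word": word, "start_offset": start, "end_offset": end})
            else:
                # SPACE -> WORD transition: a new word starts here.
                start, end = offset["start_offset"], offset["end_offset"]
                word = char
            last_state = state
    if last_state == "WORD":
        # Flush the trailing word if the sequence did not end on a delimiter.
        word_offsets.append({"word": word, "start_offset": start, "end_offset": end})
    return word_offsets


sample = [
    {"char": "hi", "start_offset": 0, "end_offset": 1},
    {"char": " ", "start_offset": 2, "end_offset": 2},
    {"char": "go", "start_offset": 3, "end_offset": 4},
]
print(chars_to_words(sample))
```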
@staticmethod
def _get_word_offsets_subwords_sentencepiece(
offsets: Dict[str, Union[str, float]],
hypothesis: Hypothesis,
decode_ids_to_tokens: Callable[[List[int]], str],
decode_tokens_to_str: Callable[[List[int]], str],
) -> Dict[str, Union[str, float]]:
"""
Utility method which constructs word time stamps out of sub-word time stamps.
**Note**: Only supports Sentencepiece based tokenizers !
Args:
offsets: A list of dictionaries, each containing "char", "start_offset" and "end_offset".
hypothesis: Hypothesis object that contains `text` field, where each token is a sub-word id
after rnnt collapse.
decode_ids_to_tokens: A Callable function that accepts a list of integers and maps it to a sub-word.
decode_tokens_to_str: A Callable function that accepts a list of integers and maps it to text / str.
Returns:
A list of dictionaries containing the word offsets. Each item contains "word", "start_offset" and
"end_offset".
"""
word_offsets = []
built_token = []
previous_token_index = 0
# For every offset token
for i, offset in enumerate(offsets):
# For every subword token in offset token list (ignoring the RNNT Blank token at the end)
for char in offset['char'][:-1]:
char = int(char)
# Compute the sub-word text representation, and the decoded text (stripped of sub-word markers).
token = decode_ids_to_tokens([char])[0]
token_text = decode_tokens_to_str([char])
# It is a sub-word token, or contains an identifier at the beginning such as _ or ## that was stripped
# after forcing partial text conversion of the token.
if token != token_text:
# If there are any partially or fully built sub-word token ids, construct to text.
# Note: This is "old" subword, that occurs *after* current sub-word has started.
if built_token:
word_offsets.append(
{
"word": decode_tokens_to_str(built_token),
"start_offset": offsets[previous_token_index]["start_offset"],
"end_offset": offsets[i]["start_offset"],
}
)
# Prepare list of new sub-word ids
built_token.clear()
built_token.append(char)
previous_token_index = i
else:
# If the token does not contain any sub-word start mark, then the sub-word has not completed yet
# Append to current sub-word list.
built_token.append(char)
# Inject the start offset of the first token to word offsets
# This is because we always delay the injection of the first sub-word due to the loop
# condition and the check of whether the built token is ready or not.
# Therefore, without this forced injection, the start_offset appears off by 1.
# This should only be done when these arrays contain more than one element.
if offsets and word_offsets:
word_offsets[0]["start_offset"] = offsets[0]["start_offset"]
# If there are any remaining tokens left, inject them all into the final word offset.
# The start offset of this token is the start time of the next token to process.
# The end offset of this token is the end time of the last token from offsets.
# Note that built_token is a flat list; but offsets contains a nested list which
# may have different dimensionality.
# As such, we can't rely on the length of the list of built_token to index offsets.
if built_token:
# start from the previous token index as this hasn't been committed to word_offsets yet
# if we still have content in built_token
start_offset = offsets[previous_token_index]["start_offset"]
word_offsets.append(
{
"word": decode_tokens_to_str(built_token),
"start_offset": start_offset,
"end_offset": offsets[-1]["end_offset"],
}
)
built_token.clear()
return word_offsets
class RNNTDecoding(AbstractRNNTDecoding):
"""
Used for performing RNN-T auto-regressive decoding of the Decoder+Joint network given the encoder state.
Args:
decoding_cfg: A dict-like object which contains the following key-value pairs.
strategy: str value which represents the type of decoding that can occur.
Possible values are :
- greedy, greedy_batch (for greedy decoding).
- beam, tsd, alsd (for beam search decoding).
compute_hypothesis_token_set: A bool flag, which determines whether to compute a list of decoded
tokens as well as the decoded string. Default is False in order to avoid double decoding
unless required.
preserve_alignments: Bool flag which preserves the history of logprobs generated during
decoding (sample / batched). When set to true, the Hypothesis will contain
the non-null value for `logprobs` in it. Here, `alignments` is a List of List of
Tuple(Tensor (of length V + 1), Tensor(scalar, label after argmax)).
In order to obtain this hypothesis, please utilize `rnnt_decoder_predictions_tensor` function
with the `return_hypotheses` flag set to True.
The length of the list corresponds to the Acoustic Length (T).
Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more targets from a vocabulary.
U is the number of target tokens for the current timestep Ti.
confidence_cfg: A dict-like object which contains the following key-value pairs related to confidence
scores. In order to obtain hypotheses with confidence scores, please utilize
`rnnt_decoder_predictions_tensor` function with the `preserve_frame_confidence` flag set to True.
preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores
generated during decoding (sample / batched). When set to true, the Hypothesis will contain
the non-null value for `frame_confidence` in it. Here, `alignments` is a List of List of floats.
The length of the list corresponds to the Acoustic Length (T).
Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more confidence scores.
U is the number of target tokens for the current timestep Ti.
preserve_token_confidence: Bool flag which preserves the history of per-token confidence scores
generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
the non-null value for `token_confidence` in it. Here, `token_confidence` is a List of floats.
The length of the list corresponds to the number of recognized tokens.
preserve_word_confidence: Bool flag which preserves the history of per-word confidence scores
generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
the non-null value for `word_confidence` in it. Here, `word_confidence` is a List of floats.
The length of the list corresponds to the number of recognized words.
exclude_blank: Bool flag indicating that blank token confidence scores are to be excluded
from the `token_confidence`.
aggregation: Which aggregation type to use for collapsing per-token confidence into per-word confidence.
Valid options are `mean`, `min`, `max`, `prod`.
measure_cfg: A dict-like object which contains the measure name and settings to compute per-frame
confidence scores.
name: The measure name (str).
Supported values:
- 'max_prob' for using the maximum token probability as a confidence.
- 'entropy' for using a normalized entropy of a log-likelihood vector.
entropy_type: Which type of entropy to use (str).
Used if confidence_measure_cfg.name is set to `entropy`.
Supported values:
- 'gibbs' for the (standard) Gibbs entropy. If the alpha (ฮฑ) is provided,
the formula is the following: H_ฮฑ = -sum_i((p^ฮฑ_i)*log(p^ฮฑ_i)).
Note that for this entropy, the alpha should comply the following inequality:
(log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= ฮฑ <= (1+log(V-1))/log(V-1)
where V is the model vocabulary size.
- 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
Tsallis entropy formula is the following: H_ฮฑ = 1/(ฮฑ-1)*(1-sum_i(p^ฮฑ_i)),
where ฮฑ is a parameter. When ฮฑ == 1, it works like the Gibbs entropy.
More: https://en.wikipedia.org/wiki/Tsallis_entropy
- 'renyi' for the Rรฉnyi entropy.
Rรฉnyi entropy formula is the following: H_ฮฑ = 1/(1-ฮฑ)*log_2(sum_i(p^ฮฑ_i)),
where ฮฑ is a parameter. When ฮฑ == 1, it works like the Gibbs entropy.
More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
alpha: Power scale for logsoftmax (ฮฑ for entropies). Here we restrict it to be > 0.
When the alpha equals one, scaling is not applied to 'max_prob',
and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
entropy_norm: A mapping of the entropy value to the interval [0,1].
Supported values:
- 'lin' for using the linear mapping.
- 'exp' for using exponential mapping with linear shift.
The config may further contain the following sub-dictionaries:
"greedy":
max_symbols: int, describing the maximum number of target tokens to decode per
timestep during greedy decoding. Setting to larger values allows longer sentences
to be decoded, at the cost of increased execution time.
preserve_frame_confidence: Same as above, overrides above value.
confidence_measure_cfg: Same as above, overrides confidence_cfg.measure_cfg.
"beam":
beam_size: int, defining the beam size for beam search. Must be >= 1.
If beam_size == 1, will perform cached greedy search. This might be slightly different
results compared to the greedy search above.
score_norm: optional bool, whether to normalize the returned beam score in the hypotheses.
Set to True by default.
return_best_hypothesis: optional bool, whether to return just the best hypothesis or all of the
hypotheses after beam search has concluded. This flag is set by default.
tsd_max_sym_exp: optional int, determines number of symmetric expansions of the target symbols
per timestep of the acoustic model. Larger values will allow longer sentences to be decoded,
at increased cost to execution time.
alsd_max_target_len: optional int or float, determines the potential maximum target sequence length.
If an integer is provided, it can decode sequences of that particular maximum length.
If a float is provided, it can decode sequences of int(alsd_max_target_len * seq_len),
where seq_len is the length of the acoustic model output (T).
NOTE:
If a float is provided, it can be greater than 1!
By default, a float of 2.0 is used so that a target sequence can be at most twice
as long as the acoustic model output length T.
maes_num_steps: Number of adaptive steps to take. From the paper, 2 steps is generally sufficient,
and can be reduced to 1 to improve decoding speed while sacrificing some accuracy. int > 0.
maes_prefix_alpha: Maximum prefix length in prefix search. Must be an integer, and is advised to keep this as 1
in order to reduce expensive beam search cost later. int >= 0.
maes_expansion_beta: Maximum number of prefix expansions allowed, in addition to the beam size.
Effectively, the number of hypothesis = beam_size + maes_expansion_beta. Must be an int >= 0,
and affects the speed of inference since large values will perform large beam search in the next step.
maes_expansion_gamma: Float pruning threshold used in the prune-by-value step when computing the expansions.
The default (2.3) is selected from the paper. It performs a comparison (max_log_prob - gamma <= log_prob[v])
where v is all vocabulary indices in the Vocab set and max_log_prob is the "most" likely token to be
predicted. Gamma therefore provides a margin of additional tokens which can be potential candidates for
expansion apart from the "most likely" candidate.
Lower values will reduce the number of expansions (by increasing pruning-by-value, thereby improving speed
but hurting accuracy). Higher values will increase the number of expansions (by reducing pruning-by-value,
thereby reducing speed but potentially improving accuracy). This is a hyper parameter to be experimentally
tuned on a validation set.
softmax_temperature: Scales the logits of the joint prior to computing log_softmax.
decoder: The Decoder/Prediction network module.
joint: The Joint network module.
vocabulary: The vocabulary (excluding the RNNT blank token) which will be used for decoding.
"""
def __init__(
self, decoding_cfg, decoder, joint, vocabulary,
):
# we need to ensure blank is the last token in the vocab for the case of RNNT and Multi-blank RNNT.
blank_id = len(vocabulary) + joint.num_extra_outputs
if hasattr(decoding_cfg, 'model_type') and decoding_cfg.model_type == 'tdt':
blank_id = len(vocabulary)
self.labels_map = {i: vocabulary[i] for i in range(len(vocabulary))}
super(RNNTDecoding, self).__init__(
decoding_cfg=decoding_cfg, decoder=decoder, joint=joint, blank_id=blank_id,
)
if isinstance(self.decoding, beam_decode.BeamRNNTInfer):
self.decoding.set_decoding_type('char')
def _aggregate_token_confidence(self, hypothesis: Hypothesis) -> List[float]:
"""
Aggregate token confidence scores into word-level confidence scores (character-based aggregation).
Args:
hypothesis: Hypothesis
Returns:
A list of word-level confidence scores.
"""
return self._aggregate_token_confidence_chars(hypothesis.words, hypothesis.token_confidence)
def decode_tokens_to_str(self, tokens: List[int]) -> str:
"""
Decode a token id list into a string.
Args:
tokens: List of int representing the token ids.
Returns:
A decoded string.
"""
hypothesis = ''.join(self.decode_ids_to_tokens(tokens))
return hypothesis
def decode_ids_to_tokens(self, tokens: List[int]) -> List[str]:
"""
Decode a token id list into a token list.
A token list is the string representation of each token id.
Args:
tokens: List of int representing the token ids.
Returns:
A list of decoded tokens.
"""
token_list = [self.labels_map[c] for c in tokens if c < self.blank_id - self.num_extra_outputs]
return token_list
def decode_tokens_to_lang(self, tokens: List[int]) -> str:
"""
Compute the most likely language ID (LID) string given the tokens.
Args:
tokens: List of int representing the token ids.
Returns:
A decoded LID string.
"""
lang = self.tokenizer.ids_to_lang(tokens)
return lang
def decode_ids_to_langs(self, tokens: List[int]) -> List[str]:
"""
Decode a token id list into language ID (LID) list.
Args:
tokens: List of int representing the token ids.
Returns:
A list of decoded LIDS.
"""
lang_list = self.tokenizer.ids_to_text_and_langs(tokens)
return lang_list
class RNNTWER(Metric):
"""
This metric computes numerator and denominator for Overall Word Error Rate (WER) between prediction and reference texts.
When doing distributed training/evaluation the result of res=WER(predictions, targets, target_lengths) calls
will be all-reduced between all workers using SUM operations.
The result contains two numbers: res=[wer_numerator, wer_denominator]. WER=wer_numerator/wer_denominator.
If used with PytorchLightning LightningModule, include wer_numerator and wer_denominator inside validation_step results.
Then aggregate (sum) them at the end of the validation epoch to correctly compute the validation WER.
Example:
def validation_step(self, batch, batch_idx):
...
wer_num, wer_denom = self.__wer(predictions, transcript, transcript_len)
self.val_outputs = {'val_loss': loss_value, 'val_wer_num': wer_num, 'val_wer_denom': wer_denom}
return self.val_outputs
def on_validation_epoch_end(self):
...
wer_num = torch.stack([x['val_wer_num'] for x in self.val_outputs]).sum()
wer_denom = torch.stack([x['val_wer_denom'] for x in self.val_outputs]).sum()
tensorboard_logs = {'validation_loss': val_loss_mean, 'validation_avg_wer': wer_num / wer_denom}
self.val_outputs.clear() # free memory
return {'val_loss': val_loss_mean, 'log': tensorboard_logs}
Args:
decoding: RNNTDecoding object that will perform autoregressive decoding of the RNNT model.
batch_dim_index: Index of the batch dimension.
use_cer: Whether to use Character Error Rate instead of Word Error Rate.
log_prediction: Whether to log a single decoded sample per call.
Returns:
res: a tuple of 3 zero-dimensional float32 `torch.Tensor` objects: a WER score, a sum of Levenshtein's
distances for all prediction - reference pairs, and the total number of words in all references.
"""
full_state_update = True
def __init__(
self, decoding: RNNTDecoding, batch_dim_index=0, use_cer=False, log_prediction=True, dist_sync_on_step=False
):
super(RNNTWER, self).__init__(dist_sync_on_step=dist_sync_on_step)
self.decoding = decoding
self.batch_dim_index = batch_dim_index
self.use_cer = use_cer
self.log_prediction = log_prediction
self.blank_id = self.decoding.blank_id
self.labels_map = self.decoding.labels_map
self.add_state("scores", default=torch.tensor(0), dist_reduce_fx='sum', persistent=False)
self.add_state("words", default=torch.tensor(0), dist_reduce_fx='sum', persistent=False)
def update(
self,
encoder_output: torch.Tensor,
encoded_lengths: torch.Tensor,
targets: torch.Tensor,
target_lengths: torch.Tensor,
) -> torch.Tensor:
words = 0
scores = 0
references = []
with torch.no_grad():
# prediction_cpu_tensor = tensors[0].long().cpu()
targets_cpu_tensor = targets.long().cpu()
targets_cpu_tensor = move_dimension_to_the_front(targets_cpu_tensor, self.batch_dim_index)
tgt_lenths_cpu_tensor = target_lengths.long().cpu()
# iterate over batch
for ind in range(targets_cpu_tensor.shape[0]):
tgt_len = tgt_lenths_cpu_tensor[ind].item()
target = targets_cpu_tensor[ind][:tgt_len].numpy().tolist()
reference = self.decoding.decode_tokens_to_str(target)
references.append(reference)
hypotheses, _ = self.decoding.rnnt_decoder_predictions_tensor(encoder_output, encoded_lengths)
if self.log_prediction:
logging.info("\n")
logging.info(f"reference :{references[0]}")
logging.info(f"predicted :{hypotheses[0]}")
for h, r in zip(hypotheses, references):
if self.use_cer:
h_list = list(h)
r_list = list(r)
else:
h_list = h.split()
r_list = r.split()
words += len(r_list)
# Compute Levenshtein's distance
scores += editdistance.eval(h_list, r_list)
self.scores += torch.tensor(scores, device=self.scores.device, dtype=self.scores.dtype)
self.words += torch.tensor(words, device=self.words.device, dtype=self.words.dtype)
# return torch.tensor([scores, words]).to(predictions.device)
def compute(self):
wer = self.scores.float() / self.words
return wer, self.scores.detach(), self.words.detach()
@dataclass
class RNNTDecodingConfig:
model_type: str = "rnnt" # one of "rnnt", "multiblank" or "tdt"
strategy: str = "greedy_batch"
compute_hypothesis_token_set: bool = False
# preserve decoding alignments
preserve_alignments: Optional[bool] = None
# confidence config
confidence_cfg: ConfidenceConfig = ConfidenceConfig()
# RNNT Joint fused batch size
fused_batch_size: Optional[int] = None
# compute RNNT time stamps
compute_timestamps: Optional[bool] = None
# compute language IDs
compute_langs: bool = False
# token representing the word separator (field name kept as-is for config compatibility)
word_seperator: str = " "
# type of timestamps to calculate
rnnt_timestamp_type: str = "all" # can be char, word or all for both
# greedy decoding config
greedy: greedy_decode.GreedyRNNTInferConfig = greedy_decode.GreedyRNNTInferConfig()
# beam decoding config
beam: beam_decode.BeamRNNTInferConfig = beam_decode.BeamRNNTInferConfig(beam_size=4)
# can be used to change temperature for decoding
temperature: float = 1.0
[end of nemo/collections/asr/metrics/rnnt_wer.py]
[start of nemo/collections/asr/metrics/wer.py]
# Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import re
from abc import abstractmethod
from dataclasses import dataclass, is_dataclass
from typing import Callable, Dict, List, Optional, Tuple, Union
import editdistance
import jiwer
import numpy as np
import torch
from omegaconf import DictConfig, OmegaConf
from torchmetrics import Metric
from nemo.collections.asr.parts.submodules import ctc_beam_decoding, ctc_greedy_decoding
from nemo.collections.asr.parts.utils.asr_confidence_utils import ConfidenceConfig, ConfidenceMixin
from nemo.collections.asr.parts.utils.rnnt_utils import Hypothesis, NBestHypotheses
from nemo.utils import logging, logging_mode
__all__ = ['word_error_rate', 'word_error_rate_detail', 'WER', 'move_dimension_to_the_front']
def word_error_rate(hypotheses: List[str], references: List[str], use_cer=False) -> float:
"""
Computes the average Word Error Rate between two texts represented as
corresponding lists of strings.
Hypotheses and references must have the same length.
Args:
hypotheses (list): list of hypotheses
references(list) : list of references
use_cer (bool): set True to enable cer
Returns:
wer (float): average word error rate
"""
scores = 0
words = 0
if len(hypotheses) != len(references):
raise ValueError(
            "In word error rate calculation, hypotheses and reference"
            " lists must have the same number of elements, but got"
            " {0} and {1} respectively".format(len(hypotheses), len(references))
)
for h, r in zip(hypotheses, references):
if use_cer:
h_list = list(h)
r_list = list(r)
else:
h_list = h.split()
r_list = r.split()
words += len(r_list)
# May deprecate using editdistance in future release for here and rest of codebase
# once we confirm jiwer is reliable.
scores += editdistance.eval(h_list, r_list)
if words != 0:
wer = 1.0 * scores / words
else:
wer = float('inf')
return wer
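The averaging above can be reproduced without the `editdistance` dependency. A minimal sketch, assuming whitespace tokenization and a plain dynamic-programming edit distance (`levenshtein` and `simple_wer` are hypothetical names, not part of this module):

```python
def levenshtein(a, b):
    """Classic DP edit distance between two token lists."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (x != y)))  # substitution
        prev = cur
    return prev[-1]


def simple_wer(hypotheses, references):
    """Mirrors word_error_rate: total edits over total reference words."""
    scores = sum(levenshtein(h.split(), r.split()) for h, r in zip(hypotheses, references))
    words = sum(len(r.split()) for r in references)
    return scores / words if words else float('inf')
```

Note the same convention as above: an empty reference set yields `float('inf')` rather than a division error.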
def word_error_rate_detail(
hypotheses: List[str], references: List[str], use_cer=False
) -> Tuple[float, int, float, float, float]:
"""
Computes Average Word Error Rate with details (insertion rate, deletion rate, substitution rate)
    between two texts represented as corresponding lists of strings.
Hypotheses and references must have same length.
Args:
hypotheses (list): list of hypotheses
references(list) : list of references
use_cer (bool): set True to enable cer
Returns:
wer (float): average word error rate
        words (int): Total number of words/characters of given reference texts
ins_rate (float): average insertion error rate
del_rate (float): average deletion error rate
sub_rate (float): average substitution error rate
"""
scores = 0
words = 0
ops_count = {'substitutions': 0, 'insertions': 0, 'deletions': 0}
if len(hypotheses) != len(references):
raise ValueError(
            "In word error rate calculation, hypotheses and reference"
            " lists must have the same number of elements, but got"
            " {0} and {1} respectively".format(len(hypotheses), len(references))
)
for h, r in zip(hypotheses, references):
if use_cer:
h_list = list(h)
r_list = list(r)
else:
h_list = h.split()
r_list = r.split()
        # Handle empty references separately, since jiwer does not allow empty strings
if len(r_list) == 0:
if len(h_list) != 0:
errors = len(h_list)
ops_count['insertions'] += errors
else:
errors = 0
else:
if use_cer:
measures = jiwer.cer(r, h, return_dict=True)
else:
measures = jiwer.compute_measures(r, h)
errors = measures['insertions'] + measures['deletions'] + measures['substitutions']
ops_count['insertions'] += measures['insertions']
ops_count['deletions'] += measures['deletions']
ops_count['substitutions'] += measures['substitutions']
scores += errors
words += len(r_list)
if words != 0:
wer = 1.0 * scores / words
ins_rate = 1.0 * ops_count['insertions'] / words
del_rate = 1.0 * ops_count['deletions'] / words
sub_rate = 1.0 * ops_count['substitutions'] / words
else:
wer, ins_rate, del_rate, sub_rate = float('inf'), float('inf'), float('inf'), float('inf')
return wer, words, ins_rate, del_rate, sub_rate
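The insertion/deletion/substitution attribution that `jiwer.compute_measures` performs can be illustrated with a backtraced edit-distance table. This is an illustrative sketch only (`edit_ops` is a hypothetical helper, not the jiwer implementation):

```python
def edit_ops(ref, hyp):
    """Count insertions/deletions/substitutions turning ref into hyp."""
    R, H = len(ref), len(hyp)
    cost = [[0] * (H + 1) for _ in range(R + 1)]
    for i in range(1, R + 1):
        cost[i][0] = i
    for j in range(1, H + 1):
        cost[0][j] = j
    for i in range(1, R + 1):
        for j in range(1, H + 1):
            sub = cost[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            cost[i][j] = min(sub, cost[i - 1][j] + 1, cost[i][j - 1] + 1)
    # Backtrace to attribute each edit to an operation type.
    i, j, ops = R, H, {'insertions': 0, 'deletions': 0, 'substitutions': 0}
    while i > 0 or j > 0:
        if i > 0 and j > 0 and cost[i][j] == cost[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1]):
            if ref[i - 1] != hyp[j - 1]:
                ops['substitutions'] += 1
            i, j = i - 1, j - 1
        elif j > 0 and cost[i][j] == cost[i][j - 1] + 1:
            ops['insertions'] += 1   # extra word in the hypothesis
            j -= 1
        else:
            ops['deletions'] += 1    # word missing from the hypothesis
            i -= 1
    return ops
```

Dividing each count by the total number of reference words gives the per-type rates returned above.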
def word_error_rate_per_utt(hypotheses: List[str], references: List[str], use_cer=False) -> Tuple[List[float], float]:
"""
Computes Word Error Rate per utterance and the average WER
    between two texts represented as corresponding lists of strings.
Hypotheses and references must have same length.
Args:
hypotheses (list): list of hypotheses
references(list) : list of references
use_cer (bool): set True to enable cer
Returns:
wer_per_utt (List[float]): word error rate per utterance
avg_wer (float): average word error rate
"""
scores = 0
words = 0
wer_per_utt = []
if len(hypotheses) != len(references):
raise ValueError(
            "In word error rate calculation, hypotheses and reference"
            " lists must have the same number of elements, but got"
            " {0} and {1} respectively".format(len(hypotheses), len(references))
)
for h, r in zip(hypotheses, references):
if use_cer:
h_list = list(h)
r_list = list(r)
else:
h_list = h.split()
r_list = r.split()
        # Handle empty references separately, since jiwer does not allow empty strings
if len(r_list) == 0:
if len(h_list) != 0:
errors = len(h_list)
wer_per_utt.append(float('inf'))
else:
if use_cer:
measures = jiwer.cer(r, h, return_dict=True)
er = measures['cer']
else:
measures = jiwer.compute_measures(r, h)
er = measures['wer']
errors = measures['insertions'] + measures['deletions'] + measures['substitutions']
wer_per_utt.append(er)
scores += errors
words += len(r_list)
if words != 0:
avg_wer = 1.0 * scores / words
else:
avg_wer = float('inf')
return wer_per_utt, avg_wer
def move_dimension_to_the_front(tensor, dim_index):
all_dims = list(range(tensor.ndim))
return tensor.permute(*([dim_index] + all_dims[:dim_index] + all_dims[dim_index + 1 :]))
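The permutation that `move_dimension_to_the_front` builds can be checked without a tensor at all. A small sketch of the same index arithmetic (`front_permutation` is a hypothetical name for illustration):

```python
def front_permutation(ndim, dim_index):
    """Order of dimensions after moving `dim_index` to the front."""
    all_dims = list(range(ndim))
    return [dim_index] + all_dims[:dim_index] + all_dims[dim_index + 1:]
```

For a [Time, Batch, Vocabulary] tensor, moving dim 1 to the front yields the order (1, 0, 2), i.e. [Batch, Time, Vocabulary].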
class AbstractCTCDecoding(ConfidenceMixin):
"""
Used for performing CTC auto-regressive / non-auto-regressive decoding of the logprobs.
Args:
decoding_cfg: A dict-like object which contains the following key-value pairs.
strategy: str value which represents the type of decoding that can occur.
Possible values are :
- greedy (for greedy decoding).
- beam (for DeepSpeed KenLM based decoding).
compute_timestamps: A bool flag, which determines whether to compute the character/subword, or
            word based timestamp mapping the output log-probabilities to discrete intervals of timestamps.
The timestamps will be available in the returned Hypothesis.timestep as a dictionary.
ctc_timestamp_type: A str value, which represents the types of timestamps that should be calculated.
Can take the following values - "char" for character/subword time stamps, "word" for word level
time stamps and "all" (default), for both character level and word level time stamps.
        word_seperator: Str token representing the separator between words.
preserve_alignments: Bool flag which preserves the history of logprobs generated during
decoding (sample / batched). When set to true, the Hypothesis will contain
the non-null value for `logprobs` in it. Here, `logprobs` is a torch.Tensors.
confidence_cfg: A dict-like object which contains the following key-value pairs related to confidence
scores. In order to obtain hypotheses with confidence scores, please utilize
`ctc_decoder_predictions_tensor` function with the `preserve_frame_confidence` flag set to True.
preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores
generated during decoding. When set to true, the Hypothesis will contain
the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of floats.
preserve_token_confidence: Bool flag which preserves the history of per-token confidence scores
generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
the non-null value for `token_confidence` in it. Here, `token_confidence` is a List of floats.
The length of the list corresponds to the number of recognized tokens.
preserve_word_confidence: Bool flag which preserves the history of per-word confidence scores
generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
the non-null value for `word_confidence` in it. Here, `word_confidence` is a List of floats.
The length of the list corresponds to the number of recognized words.
exclude_blank: Bool flag indicating that blank token confidence scores are to be excluded
from the `token_confidence`.
aggregation: Which aggregation type to use for collapsing per-token confidence into per-word confidence.
Valid options are `mean`, `min`, `max`, `prod`.
measure_cfg: A dict-like object which contains the measure name and settings to compute per-frame
confidence scores.
name: The measure name (str).
Supported values:
- 'max_prob' for using the maximum token probability as a confidence.
- 'entropy' for using a normalized entropy of a log-likelihood vector.
entropy_type: Which type of entropy to use (str).
Used if confidence_measure_cfg.name is set to `entropy`.
Supported values:
                        - 'gibbs' for the (standard) Gibbs entropy. If the alpha (α) is provided,
                            the formula is the following: H_α = -sum_i((p^α_i)*log(p^α_i)).
                            Note that for this entropy, the alpha should comply with the following inequality:
                            (log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= α <= (1+log(V-1))/log(V-1)
                            where V is the model vocabulary size.
                        - 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
                            Tsallis entropy formula is the following: H_α = 1/(α-1)*(1-sum_i(p^α_i)),
                            where α is a parameter. When α == 1, it works like the Gibbs entropy.
                            More: https://en.wikipedia.org/wiki/Tsallis_entropy
                        - 'renyi' for the Rényi entropy.
                            Rényi entropy formula is the following: H_α = 1/(1-α)*log_2(sum_i(p^α_i)),
                            where α is a parameter. When α == 1, it works like the Gibbs entropy.
                            More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
                alpha: Power scale for logsoftmax (α for entropies). Here we restrict it to be > 0.
                    When the alpha equals one, scaling is not applied to 'max_prob',
                    and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
entropy_norm: A mapping of the entropy value to the interval [0,1].
Supported values:
- 'lin' for using the linear mapping.
- 'exp' for using exponential mapping with linear shift.
batch_dim_index: Index of the batch dimension of ``targets`` and ``predictions`` parameters of
``ctc_decoder_predictions_tensor`` methods. Can be either 0 or 1.
The config may further contain the following sub-dictionaries:
"greedy":
preserve_alignments: Same as above, overrides above value.
compute_timestamps: Same as above, overrides above value.
preserve_frame_confidence: Same as above, overrides above value.
confidence_measure_cfg: Same as above, overrides confidence_cfg.measure_cfg.
"beam":
beam_size: int, defining the beam size for beam search. Must be >= 1.
If beam_size == 1, will perform cached greedy search. This might be slightly different
results compared to the greedy search above.
return_best_hypothesis: optional bool, whether to return just the best hypothesis or all of the
hypotheses after beam search has concluded. This flag is set by default.
beam_alpha: float, the strength of the Language model on the final score of a token.
final_score = acoustic_score + beam_alpha * lm_score + beam_beta * seq_length.
beam_beta: float, the strength of the sequence length penalty on the final score of a token.
final_score = acoustic_score + beam_alpha * lm_score + beam_beta * seq_length.
kenlm_path: str, path to a KenLM ARPA or .binary file (depending on the strategy chosen).
If the path is invalid (file is not found at path), will raise a deferred error at the moment
of calculation of beam search, so that users may update / change the decoding strategy
to point to the correct file.
blank_id: The id of the RNNT blank token.
"""
def __init__(self, decoding_cfg, blank_id: int):
super().__init__()
        # Convert dataclass to config
if is_dataclass(decoding_cfg):
decoding_cfg = OmegaConf.structured(decoding_cfg)
if not isinstance(decoding_cfg, DictConfig):
decoding_cfg = OmegaConf.create(decoding_cfg)
OmegaConf.set_struct(decoding_cfg, False)
# update minimal config
minimal_cfg = ['greedy']
for item in minimal_cfg:
if item not in decoding_cfg:
decoding_cfg[item] = OmegaConf.create({})
self.cfg = decoding_cfg
self.blank_id = blank_id
self.preserve_alignments = self.cfg.get('preserve_alignments', None)
self.compute_timestamps = self.cfg.get('compute_timestamps', None)
self.batch_dim_index = self.cfg.get('batch_dim_index', 0)
self.word_seperator = self.cfg.get('word_seperator', ' ')
possible_strategies = ['greedy', 'beam', 'pyctcdecode', 'flashlight']
if self.cfg.strategy not in possible_strategies:
raise ValueError(f"Decoding strategy must be one of {possible_strategies}. Given {self.cfg.strategy}")
# Update preserve alignments
if self.preserve_alignments is None:
if self.cfg.strategy in ['greedy']:
self.preserve_alignments = self.cfg.greedy.get('preserve_alignments', False)
else:
self.preserve_alignments = self.cfg.beam.get('preserve_alignments', False)
# Update compute timestamps
if self.compute_timestamps is None:
if self.cfg.strategy in ['greedy']:
self.compute_timestamps = self.cfg.greedy.get('compute_timestamps', False)
elif self.cfg.strategy in ['beam']:
self.compute_timestamps = self.cfg.beam.get('compute_timestamps', False)
# initialize confidence-related fields
self._init_confidence(self.cfg.get('confidence_cfg', None))
# Confidence estimation is not implemented for strategies other than `greedy`
if (
not self.preserve_frame_confidence
and self.cfg.strategy != 'greedy'
and self.cfg.beam.get('preserve_frame_confidence', False)
):
raise NotImplementedError(f"Confidence calculation is not supported for strategy `{self.cfg.strategy}`")
# we need timestamps to extract non-blank per-frame confidence
if self.compute_timestamps is not None:
self.compute_timestamps |= self.preserve_frame_confidence
if self.cfg.strategy == 'greedy':
self.decoding = ctc_greedy_decoding.GreedyCTCInfer(
blank_id=self.blank_id,
preserve_alignments=self.preserve_alignments,
compute_timestamps=self.compute_timestamps,
preserve_frame_confidence=self.preserve_frame_confidence,
confidence_measure_cfg=self.confidence_measure_cfg,
)
elif self.cfg.strategy == 'beam':
self.decoding = ctc_beam_decoding.BeamCTCInfer(
blank_id=blank_id,
beam_size=self.cfg.beam.get('beam_size', 1),
search_type='default',
return_best_hypothesis=self.cfg.beam.get('return_best_hypothesis', True),
preserve_alignments=self.preserve_alignments,
compute_timestamps=self.compute_timestamps,
beam_alpha=self.cfg.beam.get('beam_alpha', 1.0),
beam_beta=self.cfg.beam.get('beam_beta', 0.0),
kenlm_path=self.cfg.beam.get('kenlm_path', None),
)
self.decoding.override_fold_consecutive_value = False
elif self.cfg.strategy == 'pyctcdecode':
self.decoding = ctc_beam_decoding.BeamCTCInfer(
blank_id=blank_id,
beam_size=self.cfg.beam.get('beam_size', 1),
search_type='pyctcdecode',
return_best_hypothesis=self.cfg.beam.get('return_best_hypothesis', True),
preserve_alignments=self.preserve_alignments,
compute_timestamps=self.compute_timestamps,
beam_alpha=self.cfg.beam.get('beam_alpha', 1.0),
beam_beta=self.cfg.beam.get('beam_beta', 0.0),
kenlm_path=self.cfg.beam.get('kenlm_path', None),
pyctcdecode_cfg=self.cfg.beam.get('pyctcdecode_cfg', None),
)
self.decoding.override_fold_consecutive_value = False
elif self.cfg.strategy == 'flashlight':
self.decoding = ctc_beam_decoding.BeamCTCInfer(
blank_id=blank_id,
beam_size=self.cfg.beam.get('beam_size', 1),
search_type='flashlight',
return_best_hypothesis=self.cfg.beam.get('return_best_hypothesis', True),
preserve_alignments=self.preserve_alignments,
compute_timestamps=self.compute_timestamps,
beam_alpha=self.cfg.beam.get('beam_alpha', 1.0),
beam_beta=self.cfg.beam.get('beam_beta', 0.0),
kenlm_path=self.cfg.beam.get('kenlm_path', None),
flashlight_cfg=self.cfg.beam.get('flashlight_cfg', None),
)
self.decoding.override_fold_consecutive_value = False
else:
raise ValueError(
f"Incorrect decoding strategy supplied. Must be one of {possible_strategies}\n"
f"but was provided {self.cfg.strategy}"
)
def ctc_decoder_predictions_tensor(
self,
decoder_outputs: torch.Tensor,
decoder_lengths: torch.Tensor = None,
fold_consecutive: bool = True,
return_hypotheses: bool = False,
) -> Tuple[List[str], Optional[List[List[str]]], Optional[Union[Hypothesis, NBestHypotheses]]]:
"""
Decodes a sequence of labels to words
Args:
            decoder_outputs: A torch.Tensor of shape [Batch, Time, {Vocabulary}] (if ``batch_dim_index == 0``) or
                [Time, Batch, {Vocabulary}] (if ``batch_dim_index == 1``), containing the model's output
                log-probabilities over the label set.
decoder_lengths: Optional tensor of length `Batch` which contains the integer lengths
of the sequence in the padded `predictions` tensor.
fold_consecutive: Bool, determine whether to perform "ctc collapse", folding consecutive tokens
into a single token.
return_hypotheses: Bool flag whether to return just the decoding predictions of the model
or a Hypothesis object that holds information such as the decoded `text`,
                the `alignment` emitted by the CTC Model, and the `length` of the sequence (if available).
May also contain the log-probabilities of the decoder (if this method is called via
transcribe())
Returns:
Either a list of str which represent the CTC decoded strings per sample,
or a list of Hypothesis objects containing additional information.
"""
if isinstance(decoder_outputs, torch.Tensor):
decoder_outputs = move_dimension_to_the_front(decoder_outputs, self.batch_dim_index)
if (
hasattr(self.decoding, 'override_fold_consecutive_value')
and self.decoding.override_fold_consecutive_value is not None
):
logging.info(
f"Beam search requires that consecutive ctc tokens are not folded. \n"
f"Overriding provided value of `fold_consecutive` = {fold_consecutive} to "
f"{self.decoding.override_fold_consecutive_value}",
mode=logging_mode.ONCE,
)
fold_consecutive = self.decoding.override_fold_consecutive_value
with torch.inference_mode():
# Resolve the forward step of the decoding strategy
hypotheses_list = self.decoding(
decoder_output=decoder_outputs, decoder_lengths=decoder_lengths
) # type: List[List[Hypothesis]]
# extract the hypotheses
hypotheses_list = hypotheses_list[0] # type: List[Hypothesis]
if isinstance(hypotheses_list[0], NBestHypotheses):
hypotheses = []
all_hypotheses = []
for nbest_hyp in hypotheses_list: # type: NBestHypotheses
n_hyps = nbest_hyp.n_best_hypotheses # Extract all hypotheses for this sample
decoded_hyps = self.decode_hypothesis(
n_hyps, fold_consecutive
) # type: List[Union[Hypothesis, NBestHypotheses]]
# If computing timestamps
if self.compute_timestamps is True:
timestamp_type = self.cfg.get('ctc_timestamp_type', 'all')
for hyp_idx in range(len(decoded_hyps)):
decoded_hyps[hyp_idx] = self.compute_ctc_timestamps(decoded_hyps[hyp_idx], timestamp_type)
hypotheses.append(decoded_hyps[0]) # best hypothesis
all_hypotheses.append(decoded_hyps)
if return_hypotheses:
return hypotheses, all_hypotheses
best_hyp_text = [h.text for h in hypotheses]
all_hyp_text = [h.text for hh in all_hypotheses for h in hh]
return best_hyp_text, all_hyp_text
else:
hypotheses = self.decode_hypothesis(
hypotheses_list, fold_consecutive
) # type: List[Union[Hypothesis, NBestHypotheses]]
# If computing timestamps
if self.compute_timestamps is True:
# greedy decoding, can get high-level confidence scores
if return_hypotheses and (self.preserve_word_confidence or self.preserve_token_confidence):
hypotheses = self.compute_confidence(hypotheses)
else:
# remove unused token_repetitions from Hypothesis.text
for hyp in hypotheses:
hyp.text = hyp.text[:2]
timestamp_type = self.cfg.get('ctc_timestamp_type', 'all')
for hyp_idx in range(len(hypotheses)):
hypotheses[hyp_idx] = self.compute_ctc_timestamps(hypotheses[hyp_idx], timestamp_type)
if return_hypotheses:
return hypotheses, None
best_hyp_text = [h.text for h in hypotheses]
return best_hyp_text, None
def decode_hypothesis(
self, hypotheses_list: List[Hypothesis], fold_consecutive: bool
) -> List[Union[Hypothesis, NBestHypotheses]]:
"""
Decode a list of hypotheses into a list of strings.
Args:
hypotheses_list: List of Hypothesis.
fold_consecutive: Whether to collapse the ctc blank tokens or not.
Returns:
A list of strings.
"""
for ind in range(len(hypotheses_list)):
# Extract the integer encoded hypothesis
hyp = hypotheses_list[ind]
prediction = hyp.y_sequence
predictions_len = hyp.length if hyp.length > 0 else None
if fold_consecutive:
                if not isinstance(prediction, list):
prediction = prediction.numpy().tolist()
if predictions_len is not None:
prediction = prediction[:predictions_len]
# CTC decoding procedure
decoded_prediction = []
token_lengths = [] # preserve token lengths
token_repetitions = [] # preserve number of repetitions per token
previous = self.blank_id
last_length = 0
last_repetition = 1
for pidx, p in enumerate(prediction):
if (p != previous or previous == self.blank_id) and p != self.blank_id:
decoded_prediction.append(p)
token_lengths.append(pidx - last_length)
last_length = pidx
token_repetitions.append(last_repetition)
last_repetition = 1
if p == previous and previous != self.blank_id:
last_repetition += 1
previous = p
if len(token_repetitions) > 0:
token_repetitions = token_repetitions[1:] + [last_repetition]
else:
if predictions_len is not None:
prediction = prediction[:predictions_len]
decoded_prediction = prediction[prediction != self.blank_id].tolist()
                token_lengths = [1] * len(decoded_prediction)  # each emitted token occupies a single frame
                token_repetitions = [1] * len(decoded_prediction)  # preserve number of repetitions per token
            # De-tokenize the integer tokens into text, unless timestamps are being computed
if self.compute_timestamps is True:
# keep the original predictions, wrap with the number of repetitions per token
# this is done so that `ctc_decoder_predictions_tensor()` can process this hypothesis
# in order to compute exact time stamps.
hypothesis = (decoded_prediction, token_lengths, token_repetitions)
else:
hypothesis = self.decode_tokens_to_str(decoded_prediction)
# TODO: remove
# collapse leading spaces before . , ? for PC models
hypothesis = re.sub(r'(\s+)([\.\,\?])', r'\2', hypothesis)
# Preserve this wrapped hypothesis or decoded text tokens.
hypotheses_list[ind].text = hypothesis
return hypotheses_list
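The core of the fold-consecutive branch above — collapse repeated tokens, then drop blanks — can be isolated into a few lines. A minimal sketch without the timestamp bookkeeping (`ctc_collapse` is a hypothetical name):

```python
def ctc_collapse(prediction, blank_id):
    """Fold consecutive repeats, then drop blank tokens (standard CTC decode)."""
    decoded = []
    previous = blank_id
    for p in prediction:
        # Emit p only when it differs from the previous token (or follows a blank)
        # and is not itself a blank.
        if (p != previous or previous == blank_id) and p != blank_id:
            decoded.append(p)
        previous = p
    return decoded
```

Note that a blank between two identical tokens (`1, 0, 1`) keeps both emissions, while a direct repeat (`1, 1`) folds into one.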
def compute_confidence(self, hypotheses_list: List[Hypothesis]) -> List[Hypothesis]:
"""
Computes high-level (per-token and/or per-word) confidence scores for a list of hypotheses.
Assumes that `frame_confidence` is present in the hypotheses.
Args:
hypotheses_list: List of Hypothesis.
Returns:
A list of hypotheses with high-level confidence scores.
"""
for hyp in hypotheses_list:
if not isinstance(hyp.text, tuple) or len(hyp.text) != 3:
# the method must have been called in the wrong place
raise ValueError(
"""Wrong format of the `text` attribute of a hypothesis.\n
                    Expected: (decoded_prediction, token_lengths, token_repetitions)\n
The method invocation is expected between .decode_hypothesis() and .compute_ctc_timestamps()"""
)
token_repetitions = hyp.text[2]
hyp.text = hyp.text[:2]
token_confidence = []
if self.exclude_blank_from_confidence:
non_blank_frame_confidence = hyp.non_blank_frame_confidence
i = 0
for tr in token_repetitions:
# token repetition can be zero
j = i + tr
token_confidence.append(self._aggregate_confidence(non_blank_frame_confidence[i:j]))
i = j
else:
# <blank> tokens are considered to belong to the last non-blank token, if any.
token_lengths = hyp.text[1]
if len(token_lengths) > 0:
ts = token_lengths[0]
for tl in token_lengths[1:] + [len(hyp.frame_confidence)]:
token_confidence.append(self._aggregate_confidence(hyp.frame_confidence[ts : ts + tl]))
ts += tl
hyp.token_confidence = token_confidence
if self.preserve_word_confidence:
for hyp in hypotheses_list:
hyp.word_confidence = self._aggregate_token_confidence(hyp)
return hypotheses_list
@abstractmethod
def decode_tokens_to_str(self, tokens: List[int]) -> str:
"""
        Implemented by subclass in order to decode a token id list into a string.
Args:
tokens: List of int representing the token ids.
Returns:
A decoded string.
"""
raise NotImplementedError()
@abstractmethod
def decode_ids_to_tokens(self, tokens: List[int]) -> List[str]:
"""
Implemented by subclass in order to decode a token id list into a token list.
A token list is the string representation of each token id.
Args:
tokens: List of int representing the token ids.
Returns:
A list of decoded tokens.
"""
raise NotImplementedError()
def compute_ctc_timestamps(self, hypothesis: Hypothesis, timestamp_type: str = "all"):
"""
Method to compute time stamps at char/subword, and word level given some hypothesis.
        Requires the input hypothesis to contain a `text` field that is a tuple. The tuple contains
        the ctc collapsed integer ids and the length of each emitted token.
        Args:
            hypothesis: A Hypothesis object, with a wrapped `text` field.
                The `text` field must contain a tuple with two values -
                the ctc collapsed integer ids, and
                a list of integers that represents the length of each token.
timestamp_type: A str value that represents the type of time stamp calculated.
Can be one of "char", "word" or "all"
Returns:
A Hypothesis object with a modified `timestep` value, which is now a dictionary containing
the time stamp information.
"""
assert timestamp_type in ['char', 'word', 'all']
# Unpack the temporary storage, and set the decoded predictions
decoded_prediction, token_lengths = hypothesis.text
hypothesis.text = decoded_prediction
# Retrieve offsets
char_offsets = word_offsets = None
char_offsets = self._compute_offsets(hypothesis, token_lengths, self.blank_id)
# Assert number of offsets and hypothesis tokens are 1:1 match.
if len(char_offsets) != len(hypothesis.text):
raise ValueError(
f"`char_offsets`: {char_offsets} and `processed_tokens`: {hypothesis.text}"
" have to be of the same length, but are: "
f"`len(offsets)`: {len(char_offsets)} and `len(processed_tokens)`:"
f" {len(hypothesis.text)}"
)
# Correctly process the token ids to chars/subwords.
for i, char in enumerate(hypothesis.text):
char_offsets[i]["char"] = self.decode_tokens_to_str([char])
# detect char vs subword models
lens = [len(list(v["char"])) > 1 for v in char_offsets]
if any(lens):
text_type = 'subword'
else:
text_type = 'char'
# retrieve word offsets from character offsets
word_offsets = None
if timestamp_type in ['word', 'all']:
if text_type == 'char':
word_offsets = self._get_word_offsets_chars(char_offsets, word_delimiter_char=self.word_seperator)
else:
word_offsets = self._get_word_offsets_subwords_sentencepiece(
char_offsets,
hypothesis,
decode_ids_to_tokens=self.decode_ids_to_tokens,
decode_tokens_to_str=self.decode_tokens_to_str,
)
# attach results
if len(hypothesis.timestep) > 0:
timestep_info = hypothesis.timestep
else:
timestep_info = []
# Setup defaults
hypothesis.timestep = {"timestep": timestep_info}
# Add char / subword time stamps
if char_offsets is not None and timestamp_type in ['char', 'all']:
hypothesis.timestep['char'] = char_offsets
# Add word time stamps
if word_offsets is not None and timestamp_type in ['word', 'all']:
hypothesis.timestep['word'] = word_offsets
# Convert the token indices to text
hypothesis.text = self.decode_tokens_to_str(hypothesis.text)
return hypothesis
@staticmethod
def _compute_offsets(
hypothesis: Hypothesis, token_lengths: List[int], ctc_token: int
) -> List[Dict[str, Union[str, int]]]:
"""
        Utility method that calculates the individual time indices where a token starts and ends.
Args:
hypothesis: A Hypothesis object that contains `text` field that holds the character / subword token
emitted at every time step after ctc collapse.
token_lengths: A list of ints representing the lengths of each emitted token.
ctc_token: The integer of the ctc blank token used during ctc collapse.
        Returns:
            A list of dictionaries, one per non-blank token, each containing "char", "start_offset" and "end_offset".
        """
start_index = 0
# If the exact timestep information is available, utilize the 1st non-ctc blank token timestep
# as the start index.
if hypothesis.timestep is not None and len(hypothesis.timestep) > 0:
start_index = max(0, hypothesis.timestep[0] - 1)
# Construct the start and end indices brackets
end_indices = np.asarray(token_lengths).cumsum()
start_indices = np.concatenate(([start_index], end_indices[:-1]))
# Merge the results per token into a list of dictionaries
offsets = [
{"char": t, "start_offset": s, "end_offset": e}
for t, s, e in zip(hypothesis.text, start_indices, end_indices)
]
# Filter out CTC token
        offsets = list(filter(lambda offset: offset["char"] != ctc_token, offsets))
return offsets
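The start/end bracket construction in `_compute_offsets` is a cumulative sum shifted by one slot. The same arithmetic with the standard library instead of numpy (`token_offsets` is a hypothetical name for illustration):

```python
from itertools import accumulate


def token_offsets(tokens, token_lengths, start_index=0):
    """Cumulative lengths give end indices; shifting them right by one
    slot (prepending start_index) gives the matching start indices."""
    end_indices = list(accumulate(token_lengths))
    start_indices = [start_index] + end_indices[:-1]
    return [{"char": t, "start_offset": s, "end_offset": e}
            for t, s, e in zip(tokens, start_indices, end_indices)]
```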
@staticmethod
def _get_word_offsets_chars(
offsets: Dict[str, Union[str, float]], word_delimiter_char: str = " "
) -> Dict[str, Union[str, float]]:
"""
Utility method which constructs word time stamps out of character time stamps.
References:
This code is a port of the Hugging Face code for word time stamp construction.
Args:
offsets: A list of dictionaries, each containing "char", "start_offset" and "end_offset".
word_delimiter_char: Character token that represents the word delimiter. By default, " ".
Returns:
A list of dictionaries containing the word offsets. Each item contains "word", "start_offset" and
"end_offset".
"""
word_offsets = []
last_state = "SPACE"
word = ""
start_offset = 0
end_offset = 0
for i, offset in enumerate(offsets):
char = offset["char"]
state = "SPACE" if char == word_delimiter_char else "WORD"
if state == last_state:
# If we are in the same state as before, we simply repeat what we've done before
end_offset = offset["end_offset"]
word += char
else:
# Switching state
if state == "SPACE":
# Finishing a word
word_offsets.append({"word": word, "start_offset": start_offset, "end_offset": end_offset})
else:
# Starting a new word
start_offset = offset["start_offset"]
end_offset = offset["end_offset"]
word = char
last_state = state
if last_state == "WORD":
word_offsets.append({"word": word, "start_offset": start_offset, "end_offset": end_offset})
return word_offsets
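The two-state machine above groups consecutive non-delimiter characters into words. An equivalent grouping can be written with `itertools.groupby`; this is a sketch for illustration (`words_from_char_offsets` is a hypothetical name), not a drop-in replacement:

```python
from itertools import groupby


def words_from_char_offsets(char_offsets, delimiter=" "):
    """Runs of non-delimiter chars form words; delimiter runs are dropped."""
    words = []
    for is_word, group in groupby(char_offsets, key=lambda o: o["char"] != delimiter):
        if is_word:
            group = list(group)
            words.append({
                "word": "".join(o["char"] for o in group),
                "start_offset": group[0]["start_offset"],
                "end_offset": group[-1]["end_offset"],
            })
    return words
```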
@staticmethod
def _get_word_offsets_subwords_sentencepiece(
offsets: Dict[str, Union[str, float]],
hypothesis: Hypothesis,
decode_ids_to_tokens: Callable[[List[int]], str],
decode_tokens_to_str: Callable[[List[int]], str],
) -> Dict[str, Union[str, float]]:
"""
Utility method which constructs word time stamps out of sub-word time stamps.
**Note**: Only supports Sentencepiece based tokenizers !
Args:
offsets: A list of dictionaries, each containing "char", "start_offset" and "end_offset".
hypothesis: Hypothesis object that contains `text` field, where each token is a sub-word id
after ctc collapse.
decode_ids_to_tokens: A Callable function that accepts a list of integers and maps it to a sub-word.
decode_tokens_to_str: A Callable function that accepts a list of integers and maps it to text / str.
Returns:
A list of dictionaries containing the word offsets. Each item contains "word", "start_offset" and
"end_offset".
"""
word_offsets = []
built_token = []
previous_token_index = 0
# For every collapsed sub-word token
for i, char in enumerate(hypothesis.text):
# Compute the sub-word text representation, and the decoded text (stripped of sub-word markers).
token = decode_ids_to_tokens([char])[0]
token_text = decode_tokens_to_str([char])
# It is a sub-word token, or contains an identifier at the beginning such as _ or ## that was stripped
# after forcing partial text conversion of the token.
if token != token_text:
                # If there are any partially or fully built sub-word token ids, construct the text.
                # Note: this flushes the previous sub-word, which ended before the current sub-word started.
if len(built_token) > 0:
word_offsets.append(
{
"word": decode_tokens_to_str(built_token),
"start_offset": offsets[previous_token_index]["start_offset"],
"end_offset": offsets[i]["start_offset"],
}
)
# Prepare list of new sub-word ids
built_token.clear()
built_token.append(char)
previous_token_index = i
else:
# If the token does not contain any sub-word start mark, then the sub-word has not completed yet
# Append to current sub-word list.
built_token.append(char)
# Inject the start offset of the first token to word offsets
        # This is because we always delay the injection of the first sub-word due to the loop
# condition and check whether built token is ready or not.
# Therefore without this forced injection, the start_offset appears as off by 1.
if len(word_offsets) == 0:
# alaptev: sometimes word_offsets can be empty
if len(built_token) > 0:
word_offsets.append(
{
"word": decode_tokens_to_str(built_token),
"start_offset": offsets[0]["start_offset"],
"end_offset": offsets[-1]["end_offset"],
}
)
built_token.clear()
else:
word_offsets[0]["start_offset"] = offsets[0]["start_offset"]
# If there are any remaining tokens left, inject them all into the final word offset.
# Note: The start offset of this token is the start time of the first token inside build_token.
# Note: The end offset of this token is the end time of the last token inside build_token
if len(built_token) > 0:
word_offsets.append(
{
"word": decode_tokens_to_str(built_token),
"start_offset": offsets[-(len(built_token))]["start_offset"],
"end_offset": offsets[-1]["end_offset"],
}
)
built_token.clear()
return word_offsets
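The flush-on-word-start pattern above can be condensed when the word-start marker is known up front. A minimal sketch assuming the sentencepiece "▁" marker (`words_from_subword_offsets` is a hypothetical name; the real method instead detects word starts by comparing token and text forms):

```python
def words_from_subword_offsets(offsets, marker="▁"):
    """A token carrying the word-start marker flushes the word built so far."""
    words, built, start = [], [], None
    for off in offsets:
        tok = off["char"]
        if tok.startswith(marker) and built:
            # Same convention as above: the flushed word ends where the next starts.
            words.append({"word": "".join(built),
                          "start_offset": start,
                          "end_offset": off["start_offset"]})
            built, start = [], None
        if start is None:
            start = off["start_offset"]
        built.append(tok.lstrip(marker))
    if built:
        words.append({"word": "".join(built),
                      "start_offset": start,
                      "end_offset": offsets[-1]["end_offset"]})
    return words
```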
@property
def preserve_alignments(self):
return self._preserve_alignments
@preserve_alignments.setter
def preserve_alignments(self, value):
self._preserve_alignments = value
if hasattr(self, 'decoding'):
self.decoding.preserve_alignments = value
@property
def compute_timestamps(self):
return self._compute_timestamps
@compute_timestamps.setter
def compute_timestamps(self, value):
self._compute_timestamps = value
if hasattr(self, 'decoding'):
self.decoding.compute_timestamps = value
@property
def preserve_frame_confidence(self):
return self._preserve_frame_confidence
@preserve_frame_confidence.setter
def preserve_frame_confidence(self, value):
self._preserve_frame_confidence = value
if hasattr(self, 'decoding'):
self.decoding.preserve_frame_confidence = value
class CTCDecoding(AbstractCTCDecoding):
"""
Used for performing CTC auto-regressive / non-auto-regressive decoding of the logprobs for character
based models.
Args:
decoding_cfg: A dict-like object which contains the following key-value pairs.
strategy: str value which represents the type of decoding that can occur.
Possible values are :
- greedy (for greedy decoding).
- beam (for DeepSpeed KenLM based decoding).
compute_timestamps: A bool flag, which determines whether to compute the character/subword, or
            word based timestamp mapping the output log-probabilities to discrete intervals of timestamps.
The timestamps will be available in the returned Hypothesis.timestep as a dictionary.
ctc_timestamp_type: A str value, which represents the types of timestamps that should be calculated.
Can take the following values - "char" for character/subword time stamps, "word" for word level
time stamps and "all" (default), for both character level and word level time stamps.
        word_seperator: Str token representing the separator between words.
preserve_alignments: Bool flag which preserves the history of logprobs generated during
decoding (sample / batched). When set to true, the Hypothesis will contain
            the non-null value for `logprobs` in it. Here, `logprobs` is a torch.Tensor.
confidence_cfg: A dict-like object which contains the following key-value pairs related to confidence
scores. In order to obtain hypotheses with confidence scores, please utilize
`ctc_decoder_predictions_tensor` function with the `preserve_frame_confidence` flag set to True.
preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores
generated during decoding. When set to true, the Hypothesis will contain
the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of floats.
preserve_token_confidence: Bool flag which preserves the history of per-token confidence scores
generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
the non-null value for `token_confidence` in it. Here, `token_confidence` is a List of floats.
The length of the list corresponds to the number of recognized tokens.
preserve_word_confidence: Bool flag which preserves the history of per-word confidence scores
generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
the non-null value for `word_confidence` in it. Here, `word_confidence` is a List of floats.
The length of the list corresponds to the number of recognized words.
exclude_blank: Bool flag indicating that blank token confidence scores are to be excluded
from the `token_confidence`.
aggregation: Which aggregation type to use for collapsing per-token confidence into per-word confidence.
Valid options are `mean`, `min`, `max`, `prod`.
measure_cfg: A dict-like object which contains the measure name and settings to compute per-frame
confidence scores.
name: The measure name (str).
Supported values:
- 'max_prob' for using the maximum token probability as a confidence.
- 'entropy' for using a normalized entropy of a log-likelihood vector.
entropy_type: Which type of entropy to use (str).
Used if confidence_measure_cfg.name is set to `entropy`.
Supported values:
- 'gibbs' for the (standard) Gibbs entropy. If the alpha (ฮฑ) is provided,
the formula is the following: H_ฮฑ = -sum_i((p^ฮฑ_i)*log(p^ฮฑ_i)).
Note that for this entropy, the alpha should comply the following inequality:
(log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= ฮฑ <= (1+log(V-1))/log(V-1)
where V is the model vocabulary size.
- 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
Tsallis entropy formula is the following: H_ฮฑ = 1/(ฮฑ-1)*(1-sum_i(p^ฮฑ_i)),
where ฮฑ is a parameter. When ฮฑ == 1, it works like the Gibbs entropy.
More: https://en.wikipedia.org/wiki/Tsallis_entropy
- 'renyi' for the Rรฉnyi entropy.
Rรฉnyi entropy formula is the following: H_ฮฑ = 1/(1-ฮฑ)*log_2(sum_i(p^ฮฑ_i)),
where ฮฑ is a parameter. When ฮฑ == 1, it works like the Gibbs entropy.
More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
alpha: Power scale for logsoftmax (ฮฑ for entropies). Here we restrict it to be > 0.
When the alpha equals one, scaling is not applied to 'max_prob',
and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
entropy_norm: A mapping of the entropy value to the interval [0,1].
Supported values:
- 'lin' for using the linear mapping.
- 'exp' for using exponential mapping with linear shift.
batch_dim_index: Index of the batch dimension of ``targets`` and ``predictions`` parameters of
``ctc_decoder_predictions_tensor`` methods. Can be either 0 or 1.
The config may further contain the following sub-dictionaries:
"greedy":
preserve_alignments: Same as above, overrides above value.
compute_timestamps: Same as above, overrides above value.
preserve_frame_confidence: Same as above, overrides above value.
confidence_measure_cfg: Same as above, overrides confidence_cfg.measure_cfg.
"beam":
beam_size: int, defining the beam size for beam search. Must be >= 1.
                If beam_size == 1, will perform cached greedy search. This might give slightly different
                results compared to the greedy search above.
return_best_hypothesis: optional bool, whether to return just the best hypothesis or all of the
hypotheses after beam search has concluded. This flag is set by default.
beam_alpha: float, the strength of the Language model on the final score of a token.
final_score = acoustic_score + beam_alpha * lm_score + beam_beta * seq_length.
beam_beta: float, the strength of the sequence length penalty on the final score of a token.
final_score = acoustic_score + beam_alpha * lm_score + beam_beta * seq_length.
kenlm_path: str, path to a KenLM ARPA or .binary file (depending on the strategy chosen).
If the path is invalid (file is not found at path), will raise a deferred error at the moment
of calculation of beam search, so that users may update / change the decoding strategy
to point to the correct file.
        blank_id: The id of the CTC blank token.
"""
def __init__(
self, decoding_cfg, vocabulary,
):
blank_id = len(vocabulary)
self.vocabulary = vocabulary
self.labels_map = dict([(i, vocabulary[i]) for i in range(len(vocabulary))])
super().__init__(decoding_cfg=decoding_cfg, blank_id=blank_id)
# Finalize Beam Search Decoding framework
if isinstance(self.decoding, ctc_beam_decoding.AbstractBeamCTCInfer):
self.decoding.set_vocabulary(self.vocabulary)
self.decoding.set_decoding_type('char')
def _aggregate_token_confidence(self, hypothesis: Hypothesis) -> List[float]:
"""
Implemented by subclass in order to aggregate token confidence to a word-level confidence.
Args:
hypothesis: Hypothesis
Returns:
A list of word-level confidence scores.
"""
return self._aggregate_token_confidence_chars(
self.decode_tokens_to_str(hypothesis.text[0]).split(), hypothesis.token_confidence
)
def decode_tokens_to_str(self, tokens: List[int]) -> str:
"""
        Implemented by subclass in order to decode a token list into a string.
Args:
tokens: List of int representing the token ids.
Returns:
A decoded string.
"""
hypothesis = ''.join(self.decode_ids_to_tokens(tokens))
return hypothesis
def decode_ids_to_tokens(self, tokens: List[int]) -> List[str]:
"""
Implemented by subclass in order to decode a token id list into a token list.
A token list is the string representation of each token id.
Args:
tokens: List of int representing the token ids.
Returns:
A list of decoded tokens.
"""
token_list = [self.labels_map[c] for c in tokens if c != self.blank_id]
return token_list
class WER(Metric):
"""
This metric computes numerator and denominator for Overall Word Error Rate (WER) between prediction and reference
texts. When doing distributed training/evaluation the result of ``res=WER(predictions, targets, target_lengths)``
calls will be all-reduced between all workers using SUM operations. Here ``res`` contains three numbers
    ``res=[wer, total_levenshtein_distance, total_number_of_words]``.
    If used with PytorchLightning LightningModule, include wer_numerator and wer_denominator inside validation_step
    results. Then aggregate (sum) them at the end of the validation epoch to correctly compute validation WER.
Example:
def validation_step(self, batch, batch_idx):
...
wer_num, wer_denom = self.__wer(predictions, transcript, transcript_len)
self.val_outputs = {'val_loss': loss_value, 'val_wer_num': wer_num, 'val_wer_denom': wer_denom}
return self.val_outputs
def on_validation_epoch_end(self):
...
wer_num = torch.stack([x['val_wer_num'] for x in self.val_outputs]).sum()
wer_denom = torch.stack([x['val_wer_denom'] for x in self.val_outputs]).sum()
tensorboard_logs = {'validation_loss': val_loss_mean, 'validation_avg_wer': wer_num / wer_denom}
self.val_outputs.clear() # free memory
return {'val_loss': val_loss_mean, 'log': tensorboard_logs}
Args:
decoding: An instance of CTCDecoding.
use_cer: Whether to use Character Error Rate instead of Word Error Rate.
log_prediction: Whether to log a single decoded sample per call.
fold_consecutive: Whether repeated consecutive characters should be folded into one when decoding.
Returns:
        res: a tuple of 3 zero-dimensional float32 ``torch.Tensor`` objects: a WER score, a sum of Levenshtein
            distances for all prediction-reference pairs, and the total number of words in all references.
"""
full_state_update: bool = True
def __init__(
self,
decoding: CTCDecoding,
use_cer=False,
log_prediction=True,
fold_consecutive=True,
dist_sync_on_step=False,
):
super().__init__(dist_sync_on_step=dist_sync_on_step)
self.decoding = decoding
self.use_cer = use_cer
self.log_prediction = log_prediction
self.fold_consecutive = fold_consecutive
self.add_state("scores", default=torch.tensor(0), dist_reduce_fx='sum', persistent=False)
self.add_state("words", default=torch.tensor(0), dist_reduce_fx='sum', persistent=False)
def update(
self,
predictions: torch.Tensor,
targets: torch.Tensor,
target_lengths: torch.Tensor,
predictions_lengths: torch.Tensor = None,
):
"""
Updates metric state.
Args:
predictions: an integer torch.Tensor of shape ``[Batch, Time, {Vocabulary}]`` (if ``batch_dim_index == 0``) or
``[Time, Batch]`` (if ``batch_dim_index == 1``)
targets: an integer torch.Tensor of shape ``[Batch, Time]`` (if ``batch_dim_index == 0``) or
``[Time, Batch]`` (if ``batch_dim_index == 1``)
target_lengths: an integer torch.Tensor of shape ``[Batch]``
predictions_lengths: an integer torch.Tensor of shape ``[Batch]``
"""
words = 0
scores = 0
references = []
with torch.no_grad():
# prediction_cpu_tensor = tensors[0].long().cpu()
targets_cpu_tensor = targets.long().cpu()
tgt_lenths_cpu_tensor = target_lengths.long().cpu()
# iterate over batch
for ind in range(targets_cpu_tensor.shape[0]):
tgt_len = tgt_lenths_cpu_tensor[ind].item()
target = targets_cpu_tensor[ind][:tgt_len].numpy().tolist()
reference = self.decoding.decode_tokens_to_str(target)
references.append(reference)
hypotheses, _ = self.decoding.ctc_decoder_predictions_tensor(
predictions, predictions_lengths, fold_consecutive=self.fold_consecutive
)
if self.log_prediction:
            logging.info("\n")
logging.info(f"reference:{references[0]}")
logging.info(f"predicted:{hypotheses[0]}")
for h, r in zip(hypotheses, references):
if self.use_cer:
h_list = list(h)
r_list = list(r)
else:
h_list = h.split()
r_list = r.split()
words += len(r_list)
                # Compute Levenshtein distance
scores += editdistance.eval(h_list, r_list)
self.scores = torch.tensor(scores, device=self.scores.device, dtype=self.scores.dtype)
self.words = torch.tensor(words, device=self.words.device, dtype=self.words.dtype)
# return torch.tensor([scores, words]).to(predictions.device)
def compute(self):
scores = self.scores.detach().float()
words = self.words.detach().float()
return scores / words, scores, words
@dataclass
class CTCDecodingConfig:
strategy: str = "greedy"
# preserve decoding alignments
preserve_alignments: Optional[bool] = None
# compute ctc time stamps
compute_timestamps: Optional[bool] = None
    # token representing word separator
word_seperator: str = " "
# type of timestamps to calculate
ctc_timestamp_type: str = "all" # can be char, word or all for both
# batch dimension
batch_dim_index: int = 0
# greedy decoding config
greedy: ctc_greedy_decoding.GreedyCTCInferConfig = ctc_greedy_decoding.GreedyCTCInferConfig()
# beam decoding config
beam: ctc_beam_decoding.BeamCTCInferConfig = ctc_beam_decoding.BeamCTCInferConfig(beam_size=4)
# confidence config
confidence_cfg: ConfidenceConfig = ConfidenceConfig()
# can be used to change temperature for decoding
temperature: float = 1.0
[end of nemo/collections/asr/metrics/wer.py]
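The subword-to-word offset merging performed in `AbstractCTCDecoding` above can be sketched as a small stand-alone function. This is an illustrative simplification (not NeMo code): it assumes SentencePiece-style tokens where "▁" marks a word start, and mirrors the loop's convention that a flushed word ends at the *start* offset of the token that begins the next word, while the final word ends at the last token's end offset.

```python
# Illustrative sketch (not part of NeMo): merge per-token offsets into
# word-level offsets, mirroring the loop in AbstractCTCDecoding above.
# Assumes SentencePiece-style tokens where "\u2581" marks a word start.

from typing import Dict, List


def merge_subword_offsets(tokens: List[str], offsets: List[Dict[str, int]]) -> List[Dict[str, int]]:
    """Collapse sub-word tokens into words with start/end offsets."""
    word_offsets: List[Dict[str, int]] = []
    built: List[str] = []
    prev_idx = 0
    for i, tok in enumerate(tokens):
        if tok.startswith("\u2581"):  # a new word begins here
            if built:
                # Flush the previous word; it ends where this token starts.
                word_offsets.append(
                    {
                        "word": "".join(built).replace("\u2581", " ").strip(),
                        "start_offset": offsets[prev_idx]["start_offset"],
                        "end_offset": offsets[i]["start_offset"],
                    }
                )
                built.clear()
            built.append(tok)
            prev_idx = i
        else:
            built.append(tok)
    # Flush the final word; its end offset is the last token's end offset.
    if built:
        word_offsets.append(
            {
                "word": "".join(built).replace("\u2581", " ").strip(),
                "start_offset": offsets[prev_idx]["start_offset"],
                "end_offset": offsets[-1]["end_offset"],
            }
        )
    return word_offsets
```

The real implementation additionally handles empty `word_offsets` and the off-by-one start-offset correction for the first word, as the comments in the method describe.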
[start of nemo/collections/asr/models/configs/aligner_config.py]
# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from dataclasses import dataclass
from nemo.collections.asr.parts.k2.classes import GraphModuleConfig
@dataclass
class AlignerCTCConfig:
prob_suppress_index: int = -1
prob_suppress_value: float = 1.0
@dataclass
class AlignerRNNTConfig:
predictor_window_size: int = 0
predictor_step_size: int = 1
@dataclass
class AlignerWrapperModelConfig:
alignment_type: str = "forced"
word_output: bool = True
cpu_decoding: bool = False
decode_batch_size: int = 0
ctc_cfg: AlignerCTCConfig = AlignerCTCConfig()
rnnt_cfg: AlignerRNNTConfig = AlignerRNNTConfig()
@dataclass
class K2AlignerWrapperModelConfig(AlignerWrapperModelConfig):
decoder_module_cfg: GraphModuleConfig = GraphModuleConfig()
[end of nemo/collections/asr/models/configs/aligner_config.py]
[start of nemo/collections/asr/models/configs/asr_models_config.py]
# Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from dataclasses import dataclass
from typing import Any, Dict, List, Optional
from omegaconf import MISSING
import nemo.core.classes.dataset
from nemo.collections.asr.metrics.wer import CTCDecodingConfig
from nemo.collections.asr.modules.audio_preprocessing import (
AudioToMelSpectrogramPreprocessorConfig,
SpectrogramAugmentationConfig,
)
from nemo.collections.asr.modules.conv_asr import ConvASRDecoderConfig, ConvASREncoderConfig
from nemo.core.config import modelPT as model_cfg
@dataclass
class ASRDatasetConfig(nemo.core.classes.dataset.DatasetConfig):
manifest_filepath: Optional[Any] = None
sample_rate: int = MISSING
labels: List[str] = MISSING
trim_silence: bool = False
# Tarred dataset support
is_tarred: bool = False
tarred_audio_filepaths: Optional[Any] = None
tarred_shard_strategy: str = "scatter"
shard_manifests: bool = False
shuffle_n: int = 0
# Optional
int_values: Optional[int] = None
augmentor: Optional[Dict[str, Any]] = None
max_duration: Optional[float] = None
min_duration: Optional[float] = None
max_utts: int = 0
blank_index: int = -1
unk_index: int = -1
normalize: bool = False
trim: bool = True
parser: Optional[str] = 'en'
eos_id: Optional[int] = None
bos_id: Optional[int] = None
pad_id: int = 0
use_start_end_token: bool = False
return_sample_id: Optional[bool] = False
# bucketing params
bucketing_strategy: str = "synced_randomized"
bucketing_batch_size: Optional[Any] = None
bucketing_weights: Optional[List[int]] = None
@dataclass
class EncDecCTCConfig(model_cfg.ModelConfig):
# Model global arguments
sample_rate: int = 16000
repeat: int = 1
dropout: float = 0.0
separable: bool = False
labels: List[str] = MISSING
# Dataset configs
train_ds: ASRDatasetConfig = ASRDatasetConfig(manifest_filepath=None, shuffle=True)
validation_ds: ASRDatasetConfig = ASRDatasetConfig(manifest_filepath=None, shuffle=False)
test_ds: ASRDatasetConfig = ASRDatasetConfig(manifest_filepath=None, shuffle=False)
# Optimizer / Scheduler config
optim: Optional[model_cfg.OptimConfig] = model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
# Model component configs
preprocessor: AudioToMelSpectrogramPreprocessorConfig = AudioToMelSpectrogramPreprocessorConfig()
spec_augment: Optional[SpectrogramAugmentationConfig] = SpectrogramAugmentationConfig()
encoder: ConvASREncoderConfig = ConvASREncoderConfig()
decoder: ConvASRDecoderConfig = ConvASRDecoderConfig()
decoding: CTCDecodingConfig = CTCDecodingConfig()
@dataclass
class EncDecCTCModelConfig(model_cfg.NemoConfig):
model: EncDecCTCConfig = EncDecCTCConfig()
@dataclass
class CacheAwareStreamingConfig:
chunk_size: int = 0 # the size of each chunk at each step, it can be a list of two integers to specify different chunk sizes for the first step and others
shift_size: int = 0 # the size of the shift in each step, it can be a list of two integers to specify different shift sizes for the first step and others
cache_drop_size: int = 0 # the number of steps to drop from the cache
last_channel_cache_size: int = 0 # the size of the needed cache for last channel layers
valid_out_len: int = 0 # the number of the steps in the final output which are valid (have the same value as in the offline mode)
pre_encode_cache_size: int = 0 # the size of the needed cache for the pre-encoding part of the model to avoid caching inside the pre-encoding layers
drop_extra_pre_encoded: int = 0 # the number of steps to get dropped after the pre-encoding layer
last_channel_num: int = 0 # number of the last channel layers (like MHA layers) which need caching in the model
last_time_num: int = 0 # number of the last time layers (like convolutions) which need caching in the model
[end of nemo/collections/asr/models/configs/asr_models_config.py]
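The `CacheAwareStreamingConfig` comments note that `chunk_size` and `shift_size` may each be a single int or a two-element list giving a distinct value for the first step versus the rest. The hypothetical helper below (`streaming_windows` is not a NeMo API, just a sketch of that convention) enumerates the resulting `(start, end)` frame windows over an input of a given length.

```python
# Hypothetical helper (not NeMo API): interpret CacheAwareStreamingConfig-style
# chunk/shift sizes, where an int applies to every step and a two-element list
# gives [first_step, other_steps], and yield (start, end) frame windows.

from typing import Iterator, List, Tuple, Union


def streaming_windows(
    total_len: int,
    chunk_size: Union[int, List[int]],
    shift_size: Union[int, List[int]],
) -> Iterator[Tuple[int, int]]:
    # Normalize int -> (first, rest) pairs.
    first_chunk, rest_chunk = (chunk_size, chunk_size) if isinstance(chunk_size, int) else chunk_size
    first_shift, rest_shift = (shift_size, shift_size) if isinstance(shift_size, int) else shift_size
    start, chunk, shift = 0, first_chunk, first_shift
    while start < total_len:
        yield start, min(start + chunk, total_len)
        start += shift
        # After the first step, switch to the "other steps" sizes.
        chunk, shift = rest_chunk, rest_shift
```

For example, `streaming_windows(10, [4, 3], [4, 3])` produces a larger first window followed by uniform shifted windows.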
[start of nemo/collections/asr/models/configs/classification_models_config.py]
# Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from dataclasses import dataclass
from typing import Any, Dict, List, Optional
from omegaconf import MISSING
import nemo.core.classes.dataset
from nemo.collections.asr.modules.audio_preprocessing import (
AudioToMFCCPreprocessorConfig,
CropOrPadSpectrogramAugmentationConfig,
SpectrogramAugmentationConfig,
)
from nemo.collections.asr.modules.conv_asr import ConvASRDecoderClassificationConfig, ConvASREncoderConfig
from nemo.core.config import modelPT as model_cfg
@dataclass
class EncDecClassificationDatasetConfig(nemo.core.classes.dataset.DatasetConfig):
manifest_filepath: Optional[str] = None
sample_rate: int = MISSING
labels: List[str] = MISSING
trim_silence: bool = False
# Tarred dataset support
is_tarred: bool = False
tarred_audio_filepaths: Optional[str] = None
tarred_shard_strategy: str = "scatter"
shuffle_n: int = 0
# Optional
int_values: Optional[int] = None
augmentor: Optional[Dict[str, Any]] = None
max_duration: Optional[float] = None
min_duration: Optional[float] = None
cal_labels_occurrence: Optional[bool] = False
# VAD Optional
vad_stream: Optional[bool] = None
window_length_in_sec: float = 0.31
shift_length_in_sec: float = 0.01
normalize_audio: bool = False
is_regression_task: bool = False
# bucketing params
bucketing_strategy: str = "synced_randomized"
bucketing_batch_size: Optional[Any] = None
bucketing_weights: Optional[List[int]] = None
@dataclass
class EncDecClassificationConfig(model_cfg.ModelConfig):
# Model global arguments
sample_rate: int = 16000
repeat: int = 1
dropout: float = 0.0
separable: bool = True
kernel_size_factor: float = 1.0
labels: List[str] = MISSING
timesteps: int = MISSING
# Dataset configs
train_ds: EncDecClassificationDatasetConfig = EncDecClassificationDatasetConfig(
manifest_filepath=None, shuffle=True, trim_silence=False
)
validation_ds: EncDecClassificationDatasetConfig = EncDecClassificationDatasetConfig(
manifest_filepath=None, shuffle=False
)
test_ds: EncDecClassificationDatasetConfig = EncDecClassificationDatasetConfig(
manifest_filepath=None, shuffle=False
)
# Optimizer / Scheduler config
optim: Optional[model_cfg.OptimConfig] = model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
# Model component configs
preprocessor: AudioToMFCCPreprocessorConfig = AudioToMFCCPreprocessorConfig()
spec_augment: Optional[SpectrogramAugmentationConfig] = SpectrogramAugmentationConfig()
crop_or_pad_augment: Optional[CropOrPadSpectrogramAugmentationConfig] = CropOrPadSpectrogramAugmentationConfig(
audio_length=timesteps
)
encoder: ConvASREncoderConfig = ConvASREncoderConfig()
decoder: ConvASRDecoderClassificationConfig = ConvASRDecoderClassificationConfig()
@dataclass
class EncDecClassificationModelConfig(model_cfg.NemoConfig):
model: EncDecClassificationConfig = EncDecClassificationConfig()
[end of nemo/collections/asr/models/configs/classification_models_config.py]
[start of nemo/collections/asr/models/configs/diarizer_config.py]
# Copyright (c) 2023, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from dataclasses import asdict, dataclass
from typing import Any, Dict, Optional, Tuple, Union
@dataclass
class DiarizerComponentConfig:
"""Dataclass to imitate HydraConfig dict when accessing parameters."""
def get(self, name: str, default: Optional[Any] = None):
return getattr(self, name, default)
def __iter__(self):
for key in asdict(self):
yield key
def dict(self) -> Dict:
return asdict(self)
@dataclass
class ASRDiarizerCTCDecoderParams:
pretrained_language_model: Optional[str] = None # KenLM model file: .arpa model file or .bin binary file.
beam_width: int = 32
alpha: float = 0.5
beta: float = 2.5
@dataclass
class ASRRealigningLMParams:
# Provide a KenLM language model in .arpa format.
arpa_language_model: Optional[str] = None
# Min number of words for the left context.
min_number_of_words: int = 3
# Max number of words for the right context.
max_number_of_words: int = 10
# The threshold for the difference between two log probability values from two hypotheses.
logprob_diff_threshold: float = 1.2
@dataclass
class ASRDiarizerParams(DiarizerComponentConfig):
# if True, speech segmentation for diarization is based on word-timestamps from ASR inference.
asr_based_vad: bool = False
# Threshold (in sec) that caps the gap between two words when generating VAD timestamps using ASR based VAD.
asr_based_vad_threshold: float = 1.0
# Batch size can be dependent on each ASR model. Default batch sizes are applied if set to null.
asr_batch_size: Optional[int] = None
# Native decoder delay. null is recommended to use the default values for each ASR model.
decoder_delay_in_sec: Optional[float] = None
# Offset to set a reference point from the start of the word. Recommended range of values is [-0.05 0.2].
word_ts_anchor_offset: Optional[float] = None
# Select which part of the word timestamp we want to use. The options are: 'start', 'end', 'mid'.
word_ts_anchor_pos: str = "start"
# Fix the word timestamp using VAD output. You must provide a VAD model to use this feature.
fix_word_ts_with_VAD: bool = False
# If True, use colored text to distinguish speakers in the output transcript.
colored_text: bool = False
# If True, the start and end time of each speaker turn is printed in the output transcript.
print_time: bool = True
# If True, the output transcript breaks the line to fix the line width (default is 90 chars)
break_lines: bool = False
@dataclass
class ASRDiarizerConfig(DiarizerComponentConfig):
model_path: Optional[str] = "stt_en_conformer_ctc_large"
parameters: ASRDiarizerParams = ASRDiarizerParams()
ctc_decoder_parameters: ASRDiarizerCTCDecoderParams = ASRDiarizerCTCDecoderParams()
realigning_lm_parameters: ASRRealigningLMParams = ASRRealigningLMParams()
@dataclass
class VADParams(DiarizerComponentConfig):
window_length_in_sec: float = 0.15 # Window length in sec for VAD context input
shift_length_in_sec: float = 0.01 # Shift length in sec for generate frame level VAD prediction
smoothing: Union[str, bool] = "median" # False or type of smoothing method (eg: median)
overlap: float = 0.5 # Overlap ratio for overlapped mean/median smoothing filter
onset: float = 0.1 # Onset threshold for detecting the beginning and end of a speech
offset: float = 0.1 # Offset threshold for detecting the end of a speech
pad_onset: float = 0.1 # Adding durations before each speech segment
pad_offset: float = 0 # Adding durations after each speech segment
min_duration_on: float = 0 # Threshold for small non_speech deletion
min_duration_off: float = 0.2 # Threshold for short speech segment deletion
filter_speech_first: bool = True
@dataclass
class VADConfig(DiarizerComponentConfig):
model_path: str = "vad_multilingual_marblenet" # .nemo local model path or pretrained VAD model name
external_vad_manifest: Optional[str] = None
parameters: VADParams = VADParams()
@dataclass
class SpeakerEmbeddingsParams(DiarizerComponentConfig):
# Window length(s) in sec (floating-point number). either a number or a list. ex) 1.5 or [1.5,1.0,0.5]
window_length_in_sec: Tuple[float] = (1.5, 1.25, 1.0, 0.75, 0.5)
# Shift length(s) in sec (floating-point number). either a number or a list. ex) 0.75 or [0.75,0.5,0.25]
shift_length_in_sec: Tuple[float] = (0.75, 0.625, 0.5, 0.375, 0.25)
# Weight for each scale. None (for single scale) or list with window/shift scale count. ex) [0.33,0.33,0.33]
multiscale_weights: Tuple[float] = (1, 1, 1, 1, 1)
# save speaker embeddings in pickle format. True if clustering result is used for other models, such as MSDD.
save_embeddings: bool = True
@dataclass
class SpeakerEmbeddingsConfig(DiarizerComponentConfig):
# .nemo local model path or pretrained model name (titanet_large, ecapa_tdnn or speakerverification_speakernet)
model_path: Optional[str] = None
parameters: SpeakerEmbeddingsParams = SpeakerEmbeddingsParams()
@dataclass
class ClusteringParams(DiarizerComponentConfig):
# If True, use num of speakers value provided in manifest file.
oracle_num_speakers: bool = False
# Max number of speakers for each recording. If an oracle number of speakers is passed, this value is ignored.
max_num_speakers: int = 8
# If the number of segments is lower than this number, enhanced speaker counting is activated.
enhanced_count_thres: int = 80
# Determines the range of p-value search: 0 < p <= max_rp_threshold.
max_rp_threshold: float = 0.25
# The higher the number, the more values will be examined with more time.
sparse_search_volume: int = 30
# If True, take a majority vote on multiple p-values to estimate the number of speakers.
maj_vote_spk_count: bool = False
@dataclass
class ClusteringConfig(DiarizerComponentConfig):
parameters: ClusteringParams = ClusteringParams()
@dataclass
class MSDDParams(DiarizerComponentConfig):
# If True, use speaker embedding model in checkpoint, else provided speaker embedding model in config will be used.
use_speaker_model_from_ckpt: bool = True
# Batch size for MSDD inference.
infer_batch_size: int = 25
# Sigmoid threshold for generating binarized speaker labels. The smaller the more generous on detecting overlaps.
sigmoid_threshold: Tuple[float] = (0.7,)
# If True, use oracle number of speaker and evaluate F1 score for the given speaker sequences. Default is False.
seq_eval_mode: bool = False
# If True, break the input audio clip to short sequences and calculate cluster average embeddings for inference.
split_infer: bool = True
# The length of split short sequence when split_infer is True.
diar_window_length: int = 50
# If the estimated number of speakers are larger than this number, overlap speech is not estimated.
overlap_infer_spk_limit: int = 5
@dataclass
class MSDDConfig(DiarizerComponentConfig):
model_path: Optional[str] = "diar_msdd_telephonic"
parameters: MSDDParams = MSDDParams()
@dataclass
class DiarizerConfig(DiarizerComponentConfig):
manifest_filepath: Optional[str] = None
out_dir: Optional[str] = None
oracle_vad: bool = False # If True, uses RTTM files provided in the manifest file to get VAD timestamps
collar: float = 0.25 # Collar value for scoring
ignore_overlap: bool = True # Consider or ignore overlap segments while scoring
vad: VADConfig = VADConfig()
speaker_embeddings: SpeakerEmbeddingsConfig = SpeakerEmbeddingsConfig()
clustering: ClusteringConfig = ClusteringConfig()
msdd_model: MSDDConfig = MSDDConfig()
asr: ASRDiarizerConfig = ASRDiarizerConfig()
@dataclass
class NeuralDiarizerInferenceConfig(DiarizerComponentConfig):
diarizer: DiarizerConfig = DiarizerConfig()
device: str = "cpu"
verbose: bool = False
batch_size: int = 64
num_workers: int = 1
sample_rate: int = 16000
name: str = ""
@classmethod
def init_config(cls, diar_model_path: str, vad_model_path: str, map_location: str, verbose: bool):
return NeuralDiarizerInferenceConfig(
DiarizerConfig(
vad=VADConfig(model_path=vad_model_path), msdd_model=MSDDConfig(model_path=diar_model_path),
),
device=map_location,
verbose=verbose,
)
[end of nemo/collections/asr/models/configs/diarizer_config.py]
[start of nemo/collections/asr/models/configs/k2_sequence_models_config.py]
# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from dataclasses import dataclass
from nemo.collections.asr.models.configs.asr_models_config import EncDecCTCConfig
from nemo.collections.asr.parts.k2.classes import GraphModuleConfig as BackendConfig
from nemo.core.config.modelPT import NemoConfig
@dataclass
class GraphModuleConfig:
criterion_type: str = "ml"
loss_type: str = "ctc"
split_batch_size: int = 0
dec_type: str = "topo"
transcribe_training: bool = True
backend_cfg: BackendConfig = BackendConfig()
@dataclass
class EncDecK2SeqConfig(EncDecCTCConfig):
graph_module_cfg: GraphModuleConfig = GraphModuleConfig()
@dataclass
class EncDecK2SeqModelConfig(NemoConfig):
model: EncDecK2SeqConfig = EncDecK2SeqConfig()
[end of nemo/collections/asr/models/configs/k2_sequence_models_config.py]
[start of nemo/collections/asr/models/configs/matchboxnet_config.py]
# Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from dataclasses import dataclass, field
from typing import Any, Callable, List, Optional
from omegaconf import MISSING
from nemo.collections.asr.models.configs import classification_models_config as clf_cfg
from nemo.collections.asr.modules.audio_preprocessing import (
AudioToMFCCPreprocessorConfig,
CropOrPadSpectrogramAugmentationConfig,
SpectrogramAugmentationConfig,
)
from nemo.collections.asr.modules.conv_asr import (
ConvASRDecoderClassificationConfig,
ConvASREncoderConfig,
JasperEncoderConfig,
)
from nemo.core.config import modelPT as model_cfg
# fmt: off
def matchboxnet_3x1x64():
config = [
JasperEncoderConfig(filters=128, repeat=1, kernel=[11], stride=[1], dilation=[1], dropout=0.0,
residual=False, groups=1, separable=True, heads=-1, residual_mode='add',
residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=64, repeat=1, kernel=[13], stride=[1], dilation=[1], dropout=0.0,
residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=64, repeat=1, kernel=[15], stride=[1], dilation=[1], dropout=0.0,
residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=64, repeat=1, kernel=[17], stride=[1], dilation=[1], dropout=0.0,
residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=128, repeat=1, kernel=[29], stride=[1], dilation=[2], dropout=0.0,
residual=False, groups=1, separable=True, heads=-1, residual_mode='add',
residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=128, repeat=1, kernel=[1], stride=[1], dilation=[1], dropout=0.0,
residual=False, groups=1, separable=False, heads=-1, residual_mode='add',
residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False)
]
return config
def matchboxnet_3x1x64_vad():
config = [
JasperEncoderConfig(filters=128, repeat=1, kernel=[11], stride=[1], dilation=[1], dropout=0.0,
residual=False, groups=1, separable=True, heads=-1, residual_mode='add',
residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=64, repeat=1, kernel=[13], stride=[1], dilation=[1], dropout=0.0,
residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=64, repeat=1, kernel=[15], stride=[1], dilation=[1], dropout=0.0,
residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=64, repeat=1, kernel=[17], stride=[1], dilation=[1], dropout=0.0,
residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=128, repeat=1, kernel=[29], stride=[1], dilation=[2], dropout=0.0,
residual=False, groups=1, separable=True, heads=-1, residual_mode='add',
residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=128, repeat=1, kernel=[1], stride=[1], dilation=[1], dropout=0.0,
residual=False, groups=1, separable=False, heads=-1, residual_mode='add',
residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False)
]
return config
# fmt: on
@dataclass
class MatchboxNetModelConfig(clf_cfg.EncDecClassificationConfig):
# Model global arguments
sample_rate: int = 16000
repeat: int = 1
dropout: float = 0.0
separable: bool = True
kernel_size_factor: float = 1.0
timesteps: int = 128
labels: List[str] = MISSING
# Dataset configs
train_ds: clf_cfg.EncDecClassificationDatasetConfig = clf_cfg.EncDecClassificationDatasetConfig(
manifest_filepath=None, shuffle=True, trim_silence=False
)
validation_ds: clf_cfg.EncDecClassificationDatasetConfig = clf_cfg.EncDecClassificationDatasetConfig(
manifest_filepath=None, shuffle=False
)
test_ds: clf_cfg.EncDecClassificationDatasetConfig = clf_cfg.EncDecClassificationDatasetConfig(
manifest_filepath=None, shuffle=False
)
# Optimizer / Scheduler config
optim: Optional[model_cfg.OptimConfig] = model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
# Model general component configs
preprocessor: AudioToMFCCPreprocessorConfig = AudioToMFCCPreprocessorConfig(window_size=0.025)
spec_augment: Optional[SpectrogramAugmentationConfig] = SpectrogramAugmentationConfig(
freq_masks=2, time_masks=2, freq_width=15, time_width=25, rect_masks=5, rect_time=25, rect_freq=15
)
crop_or_pad_augment: Optional[CropOrPadSpectrogramAugmentationConfig] = CropOrPadSpectrogramAugmentationConfig(
audio_length=128
)
encoder: ConvASREncoderConfig = ConvASREncoderConfig(activation="relu")
decoder: ConvASRDecoderClassificationConfig = ConvASRDecoderClassificationConfig()
@dataclass
class MatchboxNetVADModelConfig(MatchboxNetModelConfig):
timesteps: int = 64
labels: List[str] = field(default_factory=lambda: ['background', 'speech'])
crop_or_pad_augment: Optional[CropOrPadSpectrogramAugmentationConfig] = None
class EncDecClassificationModelConfigBuilder(model_cfg.ModelConfigBuilder):
VALID_CONFIGS = ['matchboxnet_3x1x64', 'matchboxnet_3x1x64_vad']
def __init__(self, name: str = 'matchboxnet_3x1x64', encoder_cfg_func: Optional[Callable[[], List[Any]]] = None):
if name not in EncDecClassificationModelConfigBuilder.VALID_CONFIGS:
raise ValueError("`name` must be one of : \n" f"{EncDecClassificationModelConfigBuilder.VALID_CONFIGS}")
self.name = name
if 'matchboxnet_3x1x64_vad' in name:
if encoder_cfg_func is None:
encoder_cfg_func = matchboxnet_3x1x64_vad
model_cfg = MatchboxNetVADModelConfig(
repeat=1,
separable=True,
encoder=ConvASREncoderConfig(jasper=encoder_cfg_func(), activation="relu"),
decoder=ConvASRDecoderClassificationConfig(),
)
elif 'matchboxnet_3x1x64' in name:
if encoder_cfg_func is None:
encoder_cfg_func = matchboxnet_3x1x64
model_cfg = MatchboxNetModelConfig(
repeat=1,
separable=False,
spec_augment=SpectrogramAugmentationConfig(rect_masks=5, rect_freq=50, rect_time=120),
encoder=ConvASREncoderConfig(jasper=encoder_cfg_func(), activation="relu"),
decoder=ConvASRDecoderClassificationConfig(),
)
else:
raise ValueError(f"Invalid config name submitted to {self.__class__.__name__}")
super(EncDecClassificationModelConfigBuilder, self).__init__(model_cfg)
self.model_cfg: clf_cfg.EncDecClassificationConfig = model_cfg # enable type hinting
def set_labels(self, labels: List[str]):
self.model_cfg.labels = labels
def set_separable(self, separable: bool):
self.model_cfg.separable = separable
def set_repeat(self, repeat: int):
self.model_cfg.repeat = repeat
def set_sample_rate(self, sample_rate: int):
self.model_cfg.sample_rate = sample_rate
def set_dropout(self, dropout: float = 0.0):
self.model_cfg.dropout = dropout
def set_timesteps(self, timesteps: int):
self.model_cfg.timesteps = timesteps
def set_is_regression_task(self, is_regression_task: bool):
self.model_cfg.is_regression_task = is_regression_task
    # Note: Autocomplete for users won't work without these overrides
# But practically it is not needed since python will infer at runtime
# def set_train_ds(self, cfg: Optional[clf_cfg.EncDecClassificationDatasetConfig] = None):
# super().set_train_ds(cfg)
#
# def set_validation_ds(self, cfg: Optional[clf_cfg.EncDecClassificationDatasetConfig] = None):
# super().set_validation_ds(cfg)
#
# def set_test_ds(self, cfg: Optional[clf_cfg.EncDecClassificationDatasetConfig] = None):
# super().set_test_ds(cfg)
def _finalize_cfg(self):
# propagate labels
self.model_cfg.train_ds.labels = self.model_cfg.labels
self.model_cfg.validation_ds.labels = self.model_cfg.labels
self.model_cfg.test_ds.labels = self.model_cfg.labels
self.model_cfg.decoder.vocabulary = self.model_cfg.labels
# propagate num classes
self.model_cfg.decoder.num_classes = len(self.model_cfg.labels)
# propagate sample rate
self.model_cfg.sample_rate = self.model_cfg.sample_rate
self.model_cfg.preprocessor.sample_rate = self.model_cfg.sample_rate
self.model_cfg.train_ds.sample_rate = self.model_cfg.sample_rate
self.model_cfg.validation_ds.sample_rate = self.model_cfg.sample_rate
self.model_cfg.test_ds.sample_rate = self.model_cfg.sample_rate
# propagate filters
self.model_cfg.encoder.feat_in = self.model_cfg.preprocessor.features
self.model_cfg.decoder.feat_in = self.model_cfg.encoder.jasper[-1].filters
        # propagate timesteps
if self.model_cfg.crop_or_pad_augment is not None:
self.model_cfg.crop_or_pad_augment.audio_length = self.model_cfg.timesteps
# propagate separable
for layer in self.model_cfg.encoder.jasper[:-1]: # type: JasperEncoderConfig
layer.separable = self.model_cfg.separable
# propagate repeat
for layer in self.model_cfg.encoder.jasper[1:-2]: # type: JasperEncoderConfig
layer.repeat = self.model_cfg.repeat
# propagate dropout
for layer in self.model_cfg.encoder.jasper: # type: JasperEncoderConfig
layer.dropout = self.model_cfg.dropout
def build(self) -> clf_cfg.EncDecClassificationConfig:
return super().build()
[end of nemo/collections/asr/models/configs/matchboxnet_config.py]
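The `_finalize_cfg` step above is the heart of the builder pattern in this file: user-facing fields such as `labels` are fanned out into the dependent component fields (decoder vocabulary, class count, dataset sample rates) just before `build()` returns. A minimal, self-contained sketch of that propagation follows; `MiniCfg`/`MiniBuilder` are illustrative stand-ins, not NeMo classes.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class DecoderCfg:
    vocabulary: List[str] = field(default_factory=list)
    num_classes: int = -1


@dataclass
class MiniCfg:
    labels: List[str] = field(default_factory=list)
    sample_rate: int = 16000
    decoder: DecoderCfg = field(default_factory=DecoderCfg)


class MiniBuilder:
    def __init__(self, cfg: MiniCfg):
        self.model_cfg = cfg

    def set_labels(self, labels: List[str]) -> None:
        self.model_cfg.labels = labels

    def _finalize_cfg(self) -> None:
        # Propagate labels into the decoder, as the NeMo builders do:
        # the vocabulary mirrors the labels and num_classes is derived.
        self.model_cfg.decoder.vocabulary = self.model_cfg.labels
        self.model_cfg.decoder.num_classes = len(self.model_cfg.labels)

    def build(self) -> MiniCfg:
        self._finalize_cfg()
        return self.model_cfg


builder = MiniBuilder(MiniCfg())
builder.set_labels(["background", "speech"])
cfg = builder.build()
```

The design point: setters only record intent, and all cross-field consistency is enforced in one place at build time, so partial configuration in any order still yields a coherent config.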
[start of nemo/collections/asr/models/configs/quartznet_config.py]
# Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from dataclasses import dataclass
from typing import Any, Callable, List, Optional
from omegaconf import MISSING
from nemo.collections.asr.models.configs import asr_models_config as ctc_cfg
from nemo.collections.asr.modules.audio_preprocessing import (
AudioToMelSpectrogramPreprocessorConfig,
SpectrogramAugmentationConfig,
)
from nemo.collections.asr.modules.conv_asr import ConvASRDecoderConfig, ConvASREncoderConfig, JasperEncoderConfig
from nemo.core.config import modelPT as model_cfg
# fmt: off
def qn_15x5():
config = [
JasperEncoderConfig(filters=256, repeat=1, kernel=[33], stride=[2], dilation=[1], dropout=0.0,
residual=False, groups=1, separable=True, heads=-1, residual_mode='add',
residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=256, repeat=5, kernel=[33], stride=[1], dilation=[1], dropout=0.0,
residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=256, repeat=5, kernel=[33], stride=[1], dilation=[1], dropout=0.0,
residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=256, repeat=5, kernel=[33], stride=[1], dilation=[1], dropout=0.0,
residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=256, repeat=5, kernel=[39], stride=[1], dilation=[1], dropout=0.0,
residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=256, repeat=5, kernel=[39], stride=[1], dilation=[1], dropout=0.0,
residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=256, repeat=5, kernel=[39], stride=[1], dilation=[1], dropout=0.0,
residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=512, repeat=5, kernel=[51], stride=[1], dilation=[1], dropout=0.0,
residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=512, repeat=5, kernel=[51], stride=[1], dilation=[1], dropout=0.0,
residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=512, repeat=5, kernel=[51], stride=[1], dilation=[1], dropout=0.0,
residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=512, repeat=5, kernel=[63], stride=[1], dilation=[1], dropout=0.0,
residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=512, repeat=5, kernel=[63], stride=[1], dilation=[1], dropout=0.0,
residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=512, repeat=5, kernel=[63], stride=[1], dilation=[1], dropout=0.0,
residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=512, repeat=5, kernel=[75], stride=[1], dilation=[1], dropout=0.0,
residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=512, repeat=5, kernel=[75], stride=[1], dilation=[1], dropout=0.0,
residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=512, repeat=5, kernel=[75], stride=[1], dilation=[1], dropout=0.0,
residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=512, repeat=1, kernel=[87], stride=[1], dilation=[2], dropout=0.0,
residual=False, groups=1, separable=True, heads=-1, residual_mode='add',
residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=1024, repeat=1, kernel=[1], stride=[1], dilation=[1], dropout=0.0,
residual=False, groups=1, separable=False, heads=-1, residual_mode='add',
residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False)
]
return config
def jasper_10x5_dr():
config = [
JasperEncoderConfig(filters=256, repeat=1, kernel=[11], stride=[2], dilation=[1], dropout=0.2,
residual=False, groups=1, separable=False, heads=-1, residual_mode='add',
residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=256, repeat=5, kernel=[11], stride=[1], dilation=[1], dropout=0.2,
residual=True, groups=1, separable=False, heads=-1, residual_mode='add',
residual_dense=True, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=256, repeat=5, kernel=[11], stride=[1], dilation=[1], dropout=0.2,
residual=True, groups=1, separable=False, heads=-1, residual_mode='add',
residual_dense=True, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=384, repeat=5, kernel=[13], stride=[1], dilation=[1], dropout=0.2,
residual=True, groups=1, separable=False, heads=-1, residual_mode='add',
residual_dense=True, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=384, repeat=5, kernel=[13], stride=[1], dilation=[1], dropout=0.2,
residual=True, groups=1, separable=False, heads=-1, residual_mode='add',
residual_dense=True, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=512, repeat=5, kernel=[17], stride=[1], dilation=[1], dropout=0.2,
residual=True, groups=1, separable=False, heads=-1, residual_mode='add',
residual_dense=True, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=512, repeat=5, kernel=[17], stride=[1], dilation=[1], dropout=0.2,
residual=True, groups=1, separable=False, heads=-1, residual_mode='add',
residual_dense=True, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=640, repeat=5, kernel=[21], stride=[1], dilation=[1], dropout=0.3,
residual=True, groups=1, separable=False, heads=-1, residual_mode='add',
residual_dense=True, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=640, repeat=5, kernel=[21], stride=[1], dilation=[1], dropout=0.3,
residual=True, groups=1, separable=False, heads=-1, residual_mode='add',
residual_dense=True, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=768, repeat=5, kernel=[25], stride=[1], dilation=[1], dropout=0.3,
residual=True, groups=1, separable=False, heads=-1, residual_mode='add',
residual_dense=True, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=768, repeat=5, kernel=[25], stride=[1], dilation=[1], dropout=0.3,
residual=True, groups=1, separable=False, heads=-1, residual_mode='add',
residual_dense=True, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=896, repeat=1, kernel=[29], stride=[1], dilation=[2], dropout=0.4,
residual=False, groups=1, separable=False, heads=-1, residual_mode='add',
residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=1024, repeat=1, kernel=[1], stride=[1], dilation=[1], dropout=0.4,
residual=False, groups=1, separable=False, heads=-1, residual_mode='add',
residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False)
]
return config
# fmt: on
@dataclass
class JasperModelConfig(ctc_cfg.EncDecCTCConfig):
# Model global arguments
sample_rate: int = 16000
repeat: int = 1
dropout: float = 0.0
separable: bool = False
labels: List[str] = MISSING
# Dataset configs
train_ds: ctc_cfg.ASRDatasetConfig = ctc_cfg.ASRDatasetConfig(
manifest_filepath=None, shuffle=True, trim_silence=True
)
validation_ds: ctc_cfg.ASRDatasetConfig = ctc_cfg.ASRDatasetConfig(manifest_filepath=None, shuffle=False)
test_ds: ctc_cfg.ASRDatasetConfig = ctc_cfg.ASRDatasetConfig(manifest_filepath=None, shuffle=False)
# Optimizer / Scheduler config
optim: Optional[model_cfg.OptimConfig] = model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
# Model general component configs
preprocessor: AudioToMelSpectrogramPreprocessorConfig = AudioToMelSpectrogramPreprocessorConfig()
spec_augment: Optional[SpectrogramAugmentationConfig] = SpectrogramAugmentationConfig()
encoder: ConvASREncoderConfig = ConvASREncoderConfig(activation="relu")
decoder: ConvASRDecoderConfig = ConvASRDecoderConfig()
@dataclass
class QuartzNetModelConfig(JasperModelConfig):
separable: bool = True
class EncDecCTCModelConfigBuilder(model_cfg.ModelConfigBuilder):
VALID_CONFIGS = ['quartznet_15x5', 'quartznet_15x5_zh', 'jasper_10x5dr']
def __init__(self, name: str = 'quartznet_15x5', encoder_cfg_func: Optional[Callable[[], List[Any]]] = None):
if name not in EncDecCTCModelConfigBuilder.VALID_CONFIGS:
raise ValueError("`name` must be one of : \n" f"{EncDecCTCModelConfigBuilder.VALID_CONFIGS}")
self.name = name
if 'quartznet_15x5' in name:
if encoder_cfg_func is None:
encoder_cfg_func = qn_15x5
model_cfg = QuartzNetModelConfig(
repeat=5,
separable=True,
spec_augment=SpectrogramAugmentationConfig(rect_masks=5, rect_freq=50, rect_time=120),
encoder=ConvASREncoderConfig(jasper=encoder_cfg_func(), activation="relu"),
decoder=ConvASRDecoderConfig(),
)
elif 'jasper_10x5' in name:
if encoder_cfg_func is None:
encoder_cfg_func = jasper_10x5_dr
model_cfg = JasperModelConfig(
repeat=5,
separable=False,
spec_augment=SpectrogramAugmentationConfig(rect_masks=5, rect_freq=50, rect_time=120),
encoder=ConvASREncoderConfig(jasper=encoder_cfg_func(), activation="relu"),
decoder=ConvASRDecoderConfig(),
)
else:
raise ValueError(f"Invalid config name submitted to {self.__class__.__name__}")
super(EncDecCTCModelConfigBuilder, self).__init__(model_cfg)
self.model_cfg: ctc_cfg.EncDecCTCConfig = model_cfg # enable type hinting
if 'zh' in name:
self.set_dataset_normalize(normalize=False)
def set_labels(self, labels: List[str]):
self.model_cfg.labels = labels
def set_separable(self, separable: bool):
self.model_cfg.separable = separable
def set_repeat(self, repeat: int):
self.model_cfg.repeat = repeat
def set_sample_rate(self, sample_rate: int):
self.model_cfg.sample_rate = sample_rate
def set_dropout(self, dropout: float = 0.0):
self.model_cfg.dropout = dropout
def set_dataset_normalize(self, normalize: bool):
self.model_cfg.train_ds.normalize = normalize
self.model_cfg.validation_ds.normalize = normalize
self.model_cfg.test_ds.normalize = normalize
    # Note: Autocomplete for users won't work without these overrides
# But practically it is not needed since python will infer at runtime
# def set_train_ds(self, cfg: Optional[ctc_cfg.ASRDatasetConfig] = None):
# super().set_train_ds(cfg)
#
# def set_validation_ds(self, cfg: Optional[ctc_cfg.ASRDatasetConfig] = None):
# super().set_validation_ds(cfg)
#
# def set_test_ds(self, cfg: Optional[ctc_cfg.ASRDatasetConfig] = None):
# super().set_test_ds(cfg)
def _finalize_cfg(self):
# propagate labels
self.model_cfg.train_ds.labels = self.model_cfg.labels
self.model_cfg.validation_ds.labels = self.model_cfg.labels
self.model_cfg.test_ds.labels = self.model_cfg.labels
self.model_cfg.decoder.vocabulary = self.model_cfg.labels
# propagate num classes
self.model_cfg.decoder.num_classes = len(self.model_cfg.labels)
# propagate sample rate
self.model_cfg.sample_rate = self.model_cfg.sample_rate
self.model_cfg.preprocessor.sample_rate = self.model_cfg.sample_rate
self.model_cfg.train_ds.sample_rate = self.model_cfg.sample_rate
self.model_cfg.validation_ds.sample_rate = self.model_cfg.sample_rate
self.model_cfg.test_ds.sample_rate = self.model_cfg.sample_rate
# propagate filters
self.model_cfg.encoder.feat_in = self.model_cfg.preprocessor.features
self.model_cfg.decoder.feat_in = self.model_cfg.encoder.jasper[-1].filters
# propagate separable
for layer in self.model_cfg.encoder.jasper[:-1]: # type: JasperEncoderConfig
layer.separable = self.model_cfg.separable
# propagate repeat
for layer in self.model_cfg.encoder.jasper[1:-2]: # type: JasperEncoderConfig
layer.repeat = self.model_cfg.repeat
# propagate dropout
for layer in self.model_cfg.encoder.jasper: # type: JasperEncoderConfig
layer.dropout = self.model_cfg.dropout
def build(self) -> ctc_cfg.EncDecCTCConfig:
return super().build()
[end of nemo/collections/asr/models/configs/quartznet_config.py]
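The encoder-wide propagation in `_finalize_cfg` above is slice-based: `jasper[:-1]` sets `separable` on every block except the final 1x1 layer, while `jasper[1:-2]` sets `repeat` only on the interior blocks, so the stride-2 prologue and the last two layers keep `repeat=1`. A minimal sketch with stand-in layer objects (`Layer` is illustrative, not `JasperEncoderConfig`):

```python
from dataclasses import dataclass


@dataclass
class Layer:  # stand-in for JasperEncoderConfig
    repeat: int = 1
    separable: bool = False


# e.g. one stride-2 prologue, three body blocks, two epilogue layers
layers = [Layer() for _ in range(6)]

# separable applies to all but the final (pointwise) layer
for layer in layers[:-1]:
    layer.separable = True

# repeat applies only to the interior body blocks
for layer in layers[1:-2]:
    layer.repeat = 5
```

This matches the encoder lists defined above, where the first and last two `JasperEncoderConfig` entries are always declared with `repeat=1` regardless of the model-level `repeat` setting.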
[start of nemo/collections/asr/modules/audio_preprocessing.py]
# Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import math
import random
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Any, Dict, Optional, Tuple
import torch
from packaging import version
from nemo.collections.asr.parts.numba.spec_augment import SpecAugmentNumba, spec_augment_launch_heuristics
from nemo.collections.asr.parts.preprocessing.features import (
FilterbankFeatures,
FilterbankFeaturesTA,
make_seq_mask_like,
)
from nemo.collections.asr.parts.submodules.spectr_augment import SpecAugment, SpecCutout
from nemo.core.classes import Exportable, NeuralModule, typecheck
from nemo.core.neural_types import (
AudioSignal,
LengthsType,
MelSpectrogramType,
MFCCSpectrogramType,
NeuralType,
SpectrogramType,
)
from nemo.core.utils import numba_utils
from nemo.core.utils.numba_utils import __NUMBA_MINIMUM_VERSION__
from nemo.utils import logging
try:
import torchaudio
import torchaudio.functional
import torchaudio.transforms
TORCHAUDIO_VERSION = version.parse(torchaudio.__version__)
TORCHAUDIO_VERSION_MIN = version.parse('0.5')
HAVE_TORCHAUDIO = True
except ModuleNotFoundError:
HAVE_TORCHAUDIO = False
__all__ = [
'AudioToMelSpectrogramPreprocessor',
'AudioToSpectrogram',
'SpectrogramToAudio',
'AudioToMFCCPreprocessor',
'SpectrogramAugmentation',
'MaskedPatchAugmentation',
'CropOrPadSpectrogramAugmentation',
]
class AudioPreprocessor(NeuralModule, ABC):
"""
    An interface for Neural Modules that perform audio pre-processing,
    transforming wav files to features.
"""
def __init__(self, win_length, hop_length):
super().__init__()
self.win_length = win_length
self.hop_length = hop_length
self.torch_windows = {
'hann': torch.hann_window,
'hamming': torch.hamming_window,
'blackman': torch.blackman_window,
'bartlett': torch.bartlett_window,
'ones': torch.ones,
None: torch.ones,
}
@typecheck()
@torch.no_grad()
def forward(self, input_signal, length):
processed_signal, processed_length = self.get_features(input_signal, length)
return processed_signal, processed_length
@abstractmethod
def get_features(self, input_signal, length):
# Called by forward(). Subclasses should implement this.
pass
class AudioToMelSpectrogramPreprocessor(AudioPreprocessor, Exportable):
"""Featurizer module that converts wavs to mel spectrograms.
Args:
sample_rate (int): Sample rate of the input audio data.
Defaults to 16000
window_size (float): Size of window for fft in seconds
Defaults to 0.02
window_stride (float): Stride of window for fft in seconds
Defaults to 0.01
n_window_size (int): Size of window for fft in samples
Defaults to None. Use one of window_size or n_window_size.
n_window_stride (int): Stride of window for fft in samples
Defaults to None. Use one of window_stride or n_window_stride.
window (str): Windowing function for fft. can be one of ['hann',
'hamming', 'blackman', 'bartlett']
Defaults to "hann"
normalize (str): Can be one of ['per_feature', 'all_features']; all
other options disable feature normalization. 'all_features'
normalizes the entire spectrogram to be mean 0 with std 1.
            'per_feature' normalizes per channel / freq instead.
Defaults to "per_feature"
n_fft (int): Length of FT window. If None, it uses the smallest power
of 2 that is larger than n_window_size.
Defaults to None
preemph (float): Amount of pre emphasis to add to audio. Can be
disabled by passing None.
Defaults to 0.97
features (int): Number of mel spectrogram freq bins to output.
Defaults to 64
lowfreq (int): Lower bound on mel basis in Hz.
Defaults to 0
        highfreq (int): Upper bound on mel basis in Hz.
Defaults to None
log (bool): Log features.
Defaults to True
log_zero_guard_type(str): Need to avoid taking the log of zero. There
are two options: "add" or "clamp".
Defaults to "add".
log_zero_guard_value(float, or str): Add or clamp requires the number
to add with or clamp to. log_zero_guard_value can either be a float
or "tiny" or "eps". torch.finfo is used if "tiny" or "eps" is
passed.
Defaults to 2**-24.
dither (float): Amount of white-noise dithering.
Defaults to 1e-5
pad_to (int): Ensures that the output size of the time dimension is
a multiple of pad_to.
Defaults to 16
frame_splicing (int): Defaults to 1
exact_pad (bool): If True, sets stft center to False and adds padding, such that num_frames = audio_length
// hop_length. Defaults to False.
pad_value (float): The value that shorter mels are padded with.
Defaults to 0
mag_power (float): The power that the linear spectrogram is raised to
prior to multiplication with mel basis.
Defaults to 2 for a power spec
rng : Random number generator
nb_augmentation_prob (float) : Probability with which narrowband augmentation would be applied to
samples in the batch.
Defaults to 0.0
nb_max_freq (int) : Frequency above which all frequencies will be masked for narrowband augmentation.
Defaults to 4000
use_torchaudio: Whether to use the `torchaudio` implementation.
mel_norm: Normalization used for mel filterbank weights.
Defaults to 'slaney' (area normalization)
stft_exact_pad: Deprecated argument, kept for compatibility with older checkpoints.
stft_conv: Deprecated argument, kept for compatibility with older checkpoints.
"""
def save_to(self, save_path: str):
pass
@classmethod
def restore_from(cls, restore_path: str):
pass
@property
def input_types(self):
"""Returns definitions of module input ports.
"""
return {
"input_signal": NeuralType(('B', 'T'), AudioSignal(freq=self._sample_rate)),
"length": NeuralType(
tuple('B'), LengthsType()
), # Please note that length should be in samples not seconds.
}
@property
def output_types(self):
"""Returns definitions of module output ports.
processed_signal:
0: AxisType(BatchTag)
1: AxisType(MelSpectrogramSignalTag)
2: AxisType(ProcessedTimeTag)
processed_length:
0: AxisType(BatchTag)
"""
return {
"processed_signal": NeuralType(('B', 'D', 'T'), MelSpectrogramType()),
"processed_length": NeuralType(tuple('B'), LengthsType()),
}
def __init__(
self,
sample_rate=16000,
window_size=0.02,
window_stride=0.01,
n_window_size=None,
n_window_stride=None,
window="hann",
normalize="per_feature",
n_fft=None,
preemph=0.97,
features=64,
lowfreq=0,
highfreq=None,
log=True,
log_zero_guard_type="add",
log_zero_guard_value=2 ** -24,
dither=1e-5,
pad_to=16,
frame_splicing=1,
exact_pad=False,
pad_value=0,
mag_power=2.0,
rng=None,
nb_augmentation_prob=0.0,
nb_max_freq=4000,
use_torchaudio: bool = False,
mel_norm="slaney",
stft_exact_pad=False, # Deprecated arguments; kept for config compatibility
stft_conv=False, # Deprecated arguments; kept for config compatibility
):
super().__init__(n_window_size, n_window_stride)
self._sample_rate = sample_rate
if window_size and n_window_size:
raise ValueError(f"{self} received both window_size and " f"n_window_size. Only one should be specified.")
if window_stride and n_window_stride:
raise ValueError(
f"{self} received both window_stride and " f"n_window_stride. Only one should be specified."
)
if window_size:
n_window_size = int(window_size * self._sample_rate)
if window_stride:
n_window_stride = int(window_stride * self._sample_rate)
# Given the long and similar argument list, point to the class and instantiate it by reference
if not use_torchaudio:
featurizer_class = FilterbankFeatures
else:
featurizer_class = FilterbankFeaturesTA
self.featurizer = featurizer_class(
sample_rate=self._sample_rate,
n_window_size=n_window_size,
n_window_stride=n_window_stride,
window=window,
normalize=normalize,
n_fft=n_fft,
preemph=preemph,
nfilt=features,
lowfreq=lowfreq,
highfreq=highfreq,
log=log,
log_zero_guard_type=log_zero_guard_type,
log_zero_guard_value=log_zero_guard_value,
dither=dither,
pad_to=pad_to,
frame_splicing=frame_splicing,
exact_pad=exact_pad,
pad_value=pad_value,
mag_power=mag_power,
rng=rng,
nb_augmentation_prob=nb_augmentation_prob,
nb_max_freq=nb_max_freq,
mel_norm=mel_norm,
stft_exact_pad=stft_exact_pad, # Deprecated arguments; kept for config compatibility
stft_conv=stft_conv, # Deprecated arguments; kept for config compatibility
)
def input_example(self, max_batch: int = 8, max_dim: int = 32000, min_length: int = 200):
batch_size = torch.randint(low=1, high=max_batch, size=[1]).item()
max_length = torch.randint(low=min_length, high=max_dim, size=[1]).item()
signals = torch.rand(size=[batch_size, max_length]) * 2 - 1
lengths = torch.randint(low=min_length, high=max_dim, size=[batch_size])
lengths[0] = max_length
return signals, lengths
def get_features(self, input_signal, length):
return self.featurizer(input_signal, length)
@property
def filter_banks(self):
return self.featurizer.filter_banks
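The constructor above accepts either second-based (`window_size`, `window_stride`) or sample-based (`n_window_size`, `n_window_stride`) arguments, enforces that only one of each pair is given, and converts seconds to samples via the sample rate. A minimal stdlib sketch of just that conversion logic (the helper name is hypothetical, not part of NeMo):

```python
def window_params_in_samples(sample_rate, window_size=None, window_stride=None,
                             n_window_size=None, n_window_stride=None):
    # Mirrors the mutual-exclusion checks in AudioToMelSpectrogramPreprocessor.__init__
    if window_size and n_window_size:
        raise ValueError("Only one of window_size / n_window_size may be given.")
    if window_stride and n_window_stride:
        raise ValueError("Only one of window_stride / n_window_stride may be given.")
    if window_size:
        n_window_size = int(window_size * sample_rate)
    if window_stride:
        n_window_stride = int(window_stride * sample_rate)
    return n_window_size, n_window_stride

# With the class defaults (16 kHz, 20 ms window, 10 ms stride):
print(window_params_in_samples(16000, window_size=0.02, window_stride=0.01))  # (320, 160)
```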
class AudioToMFCCPreprocessor(AudioPreprocessor):
"""Preprocessor that converts wavs to MFCCs.
Uses torchaudio.transforms.MFCC.
Args:
sample_rate: The sample rate of the audio.
Defaults to 16000.
window_size: Size of window for fft in seconds. Used to calculate the
win_length arg for mel spectrogram.
Defaults to 0.02
window_stride: Stride of window for fft in seconds. Used to calculate
the hop_length arg for mel spect.
Defaults to 0.01
n_window_size: Size of window for fft in samples
Defaults to None. Use one of window_size or n_window_size.
n_window_stride: Stride of window for fft in samples
Defaults to None. Use one of window_stride or n_window_stride.
window: Windowing function for fft. can be one of ['hann',
'hamming', 'blackman', 'bartlett', 'none', 'null'].
Defaults to 'hann'
n_fft: Length of FT window. If None, it uses the smallest power of 2
that is larger than n_window_size.
Defaults to None
lowfreq (int): Lower bound on mel basis in Hz.
Defaults to 0
highfreq (int): Upper bound on mel basis in Hz.
Defaults to None
n_mels: Number of mel filterbanks.
Defaults to 64
n_mfcc: Number of coefficients to retain
Defaults to 64
dct_type: Type of discrete cosine transform to use
norm: Type of norm to use
log: Whether to use log-mel spectrograms instead of db-scaled.
Defaults to True.
"""
@property
def input_types(self):
"""Returns definitions of module input ports.
"""
return {
"input_signal": NeuralType(('B', 'T'), AudioSignal(freq=self._sample_rate)),
"length": NeuralType(tuple('B'), LengthsType()),
}
@property
def output_types(self):
"""Returns definitions of module output ports.
"""
return {
"processed_signal": NeuralType(('B', 'D', 'T'), MFCCSpectrogramType()),
"processed_length": NeuralType(tuple('B'), LengthsType()),
}
def save_to(self, save_path: str):
pass
@classmethod
def restore_from(cls, restore_path: str):
pass
def __init__(
self,
sample_rate=16000,
window_size=0.02,
window_stride=0.01,
n_window_size=None,
n_window_stride=None,
window='hann',
n_fft=None,
lowfreq=0.0,
highfreq=None,
n_mels=64,
n_mfcc=64,
dct_type=2,
norm='ortho',
log=True,
):
self._sample_rate = sample_rate
if not HAVE_TORCHAUDIO:
logging.error('Could not import torchaudio. Some features might not work.')
raise ModuleNotFoundError(
"torchaudio is not installed but is necessary for "
"AudioToMFCCPreprocessor. We recommend you try "
"building it from source for the PyTorch version you have."
)
if window_size and n_window_size:
raise ValueError(f"{self} received both window_size and " f"n_window_size. Only one should be specified.")
if window_stride and n_window_stride:
raise ValueError(
f"{self} received both window_stride and " f"n_window_stride. Only one should be specified."
)
# Get win_length (n_window_size) and hop_length (n_window_stride)
if window_size:
n_window_size = int(window_size * self._sample_rate)
if window_stride:
n_window_stride = int(window_stride * self._sample_rate)
super().__init__(n_window_size, n_window_stride)
mel_kwargs = {}
mel_kwargs['f_min'] = lowfreq
mel_kwargs['f_max'] = highfreq
mel_kwargs['n_mels'] = n_mels
mel_kwargs['n_fft'] = n_fft or 2 ** math.ceil(math.log2(n_window_size))
mel_kwargs['win_length'] = n_window_size
mel_kwargs['hop_length'] = n_window_stride
# Set window_fn. None defaults to torch.ones.
window_fn = self.torch_windows.get(window, None)
if window_fn is None:
raise ValueError(
f"Window argument for AudioProcessor is invalid: {window}. "
f"For no window function, use 'ones' or None."
)
mel_kwargs['window_fn'] = window_fn
# Use torchaudio's implementation of MFCCs as featurizer
self.featurizer = torchaudio.transforms.MFCC(
sample_rate=self._sample_rate,
n_mfcc=n_mfcc,
dct_type=dct_type,
norm=norm,
log_mels=log,
melkwargs=mel_kwargs,
)
def get_features(self, input_signal, length):
features = self.featurizer(input_signal)
seq_len = torch.ceil(length.to(torch.float32) / self.hop_length).to(dtype=torch.long)
return features, seq_len
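When `n_fft` is not given, the MFCC preprocessor above picks the smallest power of two not smaller than the window length via `2 ** math.ceil(math.log2(n_window_size))`. A quick stdlib check of that formula (the helper name is illustrative):

```python
import math

def default_n_fft(n_window_size: int) -> int:
    # Smallest power of two >= n_window_size, as used for mel_kwargs['n_fft'].
    return 2 ** math.ceil(math.log2(n_window_size))

print(default_n_fft(320))  # 512 for a 20 ms window at 16 kHz
```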
class SpectrogramAugmentation(NeuralModule):
"""
Performs time and freq cuts in one of two ways.
SpecAugment zeroes out vertical and horizontal sections as described in
SpecAugment (https://arxiv.org/abs/1904.08779). Arguments for use with
SpecAugment are `freq_masks`, `time_masks`, `freq_width`, and `time_width`.
SpecCutout zeroes out rectangulars as described in Cutout
(https://arxiv.org/abs/1708.04552). Arguments for use with Cutout are
`rect_masks`, `rect_freq`, and `rect_time`.
Args:
freq_masks (int): how many frequency segments should be cut.
Defaults to 0.
time_masks (int): how many time segments should be cut
Defaults to 0.
freq_width (int): maximum number of frequencies to be cut in one
segment.
Defaults to 10.
time_width (int): maximum number of time steps to be cut in one
segment
Defaults to 10.
rect_masks (int): how many rectangular masks should be cut
Defaults to 0.
rect_freq (int): maximum size of cut rectangles along the frequency
dimension
Defaults to 20.
rect_time (int): maximum size of cut rectangles along the time
dimension
Defaults to 5.
"""
@property
def input_types(self):
"""Returns definitions of module input types
"""
return {
"input_spec": NeuralType(('B', 'D', 'T'), SpectrogramType()),
"length": NeuralType(tuple('B'), LengthsType()),
}
@property
def output_types(self):
"""Returns definitions of module output types
"""
return {"augmented_spec": NeuralType(('B', 'D', 'T'), SpectrogramType())}
def __init__(
self,
freq_masks=0,
time_masks=0,
freq_width=10,
time_width=10,
rect_masks=0,
rect_time=5,
rect_freq=20,
rng=None,
mask_value=0.0,
use_numba_spec_augment: bool = True,
):
super().__init__()
if rect_masks > 0:
self.spec_cutout = SpecCutout(rect_masks=rect_masks, rect_time=rect_time, rect_freq=rect_freq, rng=rng,)
# self.spec_cutout.to(self._device)
else:
self.spec_cutout = lambda input_spec: input_spec
if freq_masks + time_masks > 0:
self.spec_augment = SpecAugment(
freq_masks=freq_masks,
time_masks=time_masks,
freq_width=freq_width,
time_width=time_width,
rng=rng,
mask_value=mask_value,
)
else:
self.spec_augment = lambda input_spec, length: input_spec
# Check if numba is supported, and use a Numba kernel if it is
if use_numba_spec_augment and numba_utils.numba_cuda_is_supported(__NUMBA_MINIMUM_VERSION__):
logging.info('Numba CUDA SpecAugment kernel is being used')
self.spec_augment_numba = SpecAugmentNumba(
freq_masks=freq_masks,
time_masks=time_masks,
freq_width=freq_width,
time_width=time_width,
rng=rng,
mask_value=mask_value,
)
else:
self.spec_augment_numba = None
@typecheck()
def forward(self, input_spec, length):
augmented_spec = self.spec_cutout(input_spec=input_spec)
# To run the Numba kernel, correct numba version is required as well as
# tensor must be on GPU and length must be provided
if self.spec_augment_numba is not None and spec_augment_launch_heuristics(augmented_spec, length):
augmented_spec = self.spec_augment_numba(input_spec=augmented_spec, length=length)
else:
augmented_spec = self.spec_augment(input_spec=augmented_spec, length=length)
return augmented_spec
class MaskedPatchAugmentation(NeuralModule):
"""
Zeroes out fixed size time patches of the spectrogram.
All samples in batch are guaranteed to have the same amount of masked time steps.
Optionally also performs frequency masking in the same way as SpecAugment.
Args:
patch_size (int): up to how many time steps does one patch consist of.
Defaults to 48.
mask_patches (float): how many patches should be masked in each sample.
if >= 1., interpreted as number of patches (after converting to int)
if <1., interpreted as fraction of total tokens to be masked (number of patches is rounded up)
Defaults to 10.
freq_masks (int): how many frequency segments should be cut.
Defaults to 0.
freq_width (int): maximum number of frequencies to be cut in a segment.
Defaults to 0.
"""
@property
def input_types(self):
"""Returns definitions of module input types
"""
return {
"input_spec": NeuralType(('B', 'D', 'T'), SpectrogramType()),
"length": NeuralType(tuple('B'), LengthsType()),
}
@property
def output_types(self):
"""Returns definitions of module output types
"""
return {"augmented_spec": NeuralType(('B', 'D', 'T'), SpectrogramType())}
def __init__(
self, patch_size: int = 48, mask_patches: float = 10.0, freq_masks: int = 0, freq_width: int = 0,
):
super().__init__()
self.patch_size = patch_size
if mask_patches >= 1:
self.mask_patches = int(mask_patches)
elif mask_patches >= 0:
self._mask_fraction = mask_patches
self.mask_patches = None
else:
raise ValueError('mask_patches cannot be negative')
if freq_masks > 0:
self.spec_augment = SpecAugment(freq_masks=freq_masks, time_masks=0, freq_width=freq_width, time_width=0,)
else:
self.spec_augment = None
@typecheck()
def forward(self, input_spec, length):
augmented_spec = input_spec
min_len = torch.min(length)
if self.mask_patches is None:
# masking specified as fraction
len_fraction = int(min_len * self._mask_fraction)
mask_patches = len_fraction // self.patch_size + int(len_fraction % self.patch_size != 0)
else:
mask_patches = self.mask_patches
if min_len < self.patch_size * mask_patches:
mask_patches = int(min_len // self.patch_size)  # cast: min_len is a tensor; random.sample needs an int k
for idx in range(input_spec.shape[0]):
cur_len = length[idx]
patches = range(cur_len // self.patch_size)
masked_patches = random.sample(patches, mask_patches)
for mp in masked_patches:
augmented_spec[idx, :, mp * self.patch_size : (mp + 1) * self.patch_size] = 0.0
if self.spec_augment is not None:
augmented_spec = self.spec_augment(input_spec=augmented_spec, length=length)
return augmented_spec
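The fraction-to-patch-count logic in `forward` rounds the masked token budget up to whole patches, then clips the count so the shortest sample in the batch can still fit it. A stdlib sketch of just that arithmetic (the function name is hypothetical):

```python
def resolve_mask_patches(min_len, patch_size, mask_patches=None, mask_fraction=None):
    # Mirrors the patch-count resolution in MaskedPatchAugmentation.forward
    if mask_patches is None:
        # fraction of tokens -> number of patches, rounded up
        len_fraction = int(min_len * mask_fraction)
        mask_patches = len_fraction // patch_size + int(len_fraction % patch_size != 0)
    if min_len < patch_size * mask_patches:
        # shortest sample cannot fit the requested budget; clip it
        mask_patches = min_len // patch_size
    return mask_patches

# 30% of 1000 frames with 48-frame patches -> ceil(300 / 48) = 7 patches
print(resolve_mask_patches(1000, 48, mask_fraction=0.3))  # 7
```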
class CropOrPadSpectrogramAugmentation(NeuralModule):
"""
Pad or Crop the incoming Spectrogram to a certain shape.
Args:
audio_length (int): the final number of timesteps that is required.
The signal will be either padded or cropped temporally to this
size.
"""
def __init__(self, audio_length):
super(CropOrPadSpectrogramAugmentation, self).__init__()
self.audio_length = audio_length
@typecheck()
@torch.no_grad()
def forward(self, input_signal, length):
image = input_signal
num_images = image.shape[0]
audio_length = self.audio_length
image_len = image.shape[-1]
# Crop long signal
if image_len > audio_length: # randomly slice
cutout_images = []
offset = torch.randint(low=0, high=image_len - audio_length + 1, size=[num_images])
for idx, sample_offset in enumerate(offset):
cutout_images.append(image[idx : idx + 1, :, sample_offset : sample_offset + audio_length])
image = torch.cat(cutout_images, dim=0)
del cutout_images
else: # symmetrically pad short signal with zeros
pad_left = (audio_length - image_len) // 2
pad_right = (audio_length - image_len) // 2
if (audio_length - image_len) % 2 == 1:
pad_right += 1
image = torch.nn.functional.pad(image, [pad_left, pad_right], mode="constant", value=0)
# Replace dynamic length sequences with static number of timesteps
length = (length * 0) + audio_length
return image, length
@property
def input_types(self):
"""Returns definitions of module output ports.
"""
return {
"input_signal": NeuralType(('B', 'D', 'T'), SpectrogramType()),
"length": NeuralType(tuple('B'), LengthsType()),
}
@property
def output_types(self):
"""Returns definitions of module output ports.
"""
return {
"processed_signal": NeuralType(('B', 'D', 'T'), SpectrogramType()),
"processed_length": NeuralType(tuple('B'), LengthsType()),
}
def save_to(self, save_path: str):
pass
@classmethod
def restore_from(cls, restore_path: str):
pass
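The padding branch of `forward` splits the deficit symmetrically and gives the extra frame, when the deficit is odd, to the right side. That bookkeeping in isolation (helper name illustrative):

```python
def symmetric_pad(image_len: int, audio_length: int):
    # Mirrors the short-signal branch of CropOrPadSpectrogramAugmentation.forward
    pad_left = (audio_length - image_len) // 2
    pad_right = (audio_length - image_len) // 2
    if (audio_length - image_len) % 2 == 1:
        pad_right += 1
    return pad_left, pad_right

print(symmetric_pad(99, 128))  # (14, 15): odd deficit, extra frame on the right
```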
class AudioToSpectrogram(NeuralModule):
"""Transform a batch of input multi-channel signals into a batch of
STFT-based spectrograms.
Args:
fft_length: length of FFT
hop_length: length of hops/shifts of the sliding window
power: exponent for magnitude spectrogram. Default `None` will
return a complex-valued spectrogram
"""
def __init__(self, fft_length: int, hop_length: int, power: Optional[float] = None):
if not HAVE_TORCHAUDIO:
logging.error('Could not import torchaudio. Some features might not work.')
raise ModuleNotFoundError(
f"torchaudio is not installed but is necessary to instantiate a {self.__class__.__name__}"
)
super().__init__()
# For now, assume FFT length is divisible by two
if fft_length % 2 != 0:
raise ValueError(f'fft_length = {fft_length} must be divisible by 2')
self.stft = torchaudio.transforms.Spectrogram(
n_fft=fft_length, hop_length=hop_length, power=power, pad_mode='constant'
)
# number of subbands
self.F = fft_length // 2 + 1
@property
def num_subbands(self) -> int:
return self.F
@property
def input_types(self) -> Dict[str, NeuralType]:
"""Returns definitions of module output ports.
"""
return {
"input": NeuralType(('B', 'C', 'T'), AudioSignal()),
"input_length": NeuralType(('B',), LengthsType(), optional=True),
}
@property
def output_types(self) -> Dict[str, NeuralType]:
"""Returns definitions of module output ports.
"""
return {
"output": NeuralType(('B', 'C', 'D', 'T'), SpectrogramType()),
"output_length": NeuralType(('B',), LengthsType()),
}
@typecheck()
def forward(
self, input: torch.Tensor, input_length: Optional[torch.Tensor] = None
) -> Tuple[torch.Tensor, torch.Tensor]:
"""Convert a batch of C-channel input signals
into a batch of complex-valued spectrograms.
Args:
input: Time-domain input signal with C channels, shape (B, C, T)
input_length: Length of valid entries along the time dimension, shape (B,)
Returns:
Output spectrogram with F subbands and N time frames, shape (B, C, F, N)
and output length with shape (B,).
"""
B, T = input.size(0), input.size(-1)
input = input.view(B, -1, T)
# STFT output (B, C, F, N)
with torch.cuda.amp.autocast(enabled=False):
output = self.stft(input.float())
if input_length is not None:
# Mask padded frames
output_length = self.get_output_length(input_length=input_length)
length_mask: torch.Tensor = make_seq_mask_like(
lengths=output_length, like=output, time_dim=-1, valid_ones=False
)
output = output.masked_fill(length_mask, 0.0)
else:
# Assume all frames are valid for all examples in the batch
output_length = output.size(-1) * torch.ones(B, device=output.device).long()
return output, output_length
def get_output_length(self, input_length: torch.Tensor) -> torch.Tensor:
"""Get length of valid frames for the output.
Args:
input_length: number of valid samples, shape (B,)
Returns:
Number of valid frames, shape (B,)
"""
output_length = input_length.div(self.stft.hop_length, rounding_mode='floor').add(1).long()
return output_length
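`get_output_length` converts a count of valid samples to a count of valid STFT frames with `floor(samples / hop) + 1`, which matches `torchaudio.transforms.Spectrogram` with centered frames. The same formula in plain Python:

```python
def stft_num_frames(num_samples: int, hop_length: int) -> int:
    # Floor division plus one centered frame, as in AudioToSpectrogram.get_output_length
    return num_samples // hop_length + 1

print(stft_num_frames(16000, 256))  # 63 frames for 1 s of 16 kHz audio
```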
class SpectrogramToAudio(NeuralModule):
"""Transform a batch of input multi-channel spectrograms into a batch of
time-domain multi-channel signals.
Args:
fft_length: length of FFT
hop_length: length of hops/shifts of the sliding window
power: exponent for magnitude spectrogram. Default `None` will
return a complex-valued spectrogram
"""
def __init__(self, fft_length: int, hop_length: int):
if not HAVE_TORCHAUDIO:
logging.error('Could not import torchaudio. Some features might not work.')
raise ModuleNotFoundError(
f"torchaudio is not installed but is necessary to instantiate a {self.__class__.__name__}"
)
super().__init__()
# For now, assume FFT length is divisible by two
if fft_length % 2 != 0:
raise ValueError(f'fft_length = {fft_length} must be divisible by 2')
self.istft = torchaudio.transforms.InverseSpectrogram(
n_fft=fft_length, hop_length=hop_length, pad_mode='constant'
)
self.F = fft_length // 2 + 1
@property
def num_subbands(self) -> int:
return self.F
@property
def input_types(self) -> Dict[str, NeuralType]:
"""Returns definitions of module output ports.
"""
return {
"input": NeuralType(('B', 'C', 'D', 'T'), SpectrogramType()),
"input_length": NeuralType(('B',), LengthsType(), optional=True),
}
@property
def output_types(self) -> Dict[str, NeuralType]:
"""Returns definitions of module output ports.
"""
return {
"output": NeuralType(('B', 'C', 'T'), AudioSignal()),
"output_length": NeuralType(('B',), LengthsType()),
}
@typecheck()
def forward(self, input: torch.Tensor, input_length: Optional[torch.Tensor] = None) -> torch.Tensor:
"""Convert input complex-valued spectrogram to a time-domain
signal. Multi-channel IO is supported.
Args:
input: Input spectrogram for C channels, shape (B, C, F, N)
input_length: Length of valid entries along the time dimension, shape (B,)
Returns:
Time-domain signal with T time-domain samples and C channels, (B, C, T)
and output length with shape (B,).
"""
B, F, N = input.size(0), input.size(-2), input.size(-1)
assert F == self.F, f'Number of subbands F={F} not matching self.F={self.F}'
input = input.view(B, -1, F, N)
# iSTFT output (B, C, T)
with torch.cuda.amp.autocast(enabled=False):
output = self.istft(input.cfloat())
if input_length is not None:
# Mask padded samples
output_length = self.get_output_length(input_length=input_length)
length_mask: torch.Tensor = make_seq_mask_like(
lengths=output_length, like=output, time_dim=-1, valid_ones=False
)
output = output.masked_fill(length_mask, 0.0)
else:
# Assume all frames are valid for all examples in the batch
output_length = output.size(-1) * torch.ones(B, device=output.device).long()
return output, output_length
def get_output_length(self, input_length: torch.Tensor) -> torch.Tensor:
"""Get length of valid samples for the output.
Args:
input_length: number of valid frames, shape (B,)
Returns:
Number of valid samples, shape (B,)
"""
output_length = input_length.sub(1).mul(self.istft.hop_length).long()
return output_length
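`SpectrogramToAudio.get_output_length` inverts the frame count with `(frames - 1) * hop`, so a forward/inverse round trip through the two modules above can shorten the signal by up to `hop - 1` samples (the fractional last hop is dropped). A sketch of the round-trip bookkeeping:

```python
def istft_num_samples(num_frames: int, hop_length: int) -> int:
    # As in SpectrogramToAudio.get_output_length: (frames - 1) * hop
    return (num_frames - 1) * hop_length

hop = 256
frames = 16000 // hop + 1              # forward count from AudioToSpectrogram
print(istft_num_samples(frames, hop))  # 15872 <= 16000: shortened by 16000 % hop = 128 samples
```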
@dataclass
class AudioToMelSpectrogramPreprocessorConfig:
_target_: str = "nemo.collections.asr.modules.AudioToMelSpectrogramPreprocessor"
sample_rate: int = 16000
window_size: float = 0.02
window_stride: float = 0.01
n_window_size: Optional[int] = None
n_window_stride: Optional[int] = None
window: str = "hann"
normalize: str = "per_feature"
n_fft: Optional[int] = None
preemph: float = 0.97
features: int = 64
lowfreq: int = 0
highfreq: Optional[int] = None
log: bool = True
log_zero_guard_type: str = "add"
log_zero_guard_value: float = 2 ** -24
dither: float = 1e-5
pad_to: int = 16
frame_splicing: int = 1
exact_pad: bool = False
pad_value: int = 0
mag_power: float = 2.0
rng: Optional[str] = None
nb_augmentation_prob: float = 0.0
nb_max_freq: int = 4000
use_torchaudio: bool = False
mel_norm: str = "slaney"
stft_exact_pad: bool = False # Deprecated argument, kept for compatibility with older checkpoints.
stft_conv: bool = False # Deprecated argument, kept for compatibility with older checkpoints.
@dataclass
class AudioToMFCCPreprocessorConfig:
_target_: str = 'nemo.collections.asr.modules.AudioToMFCCPreprocessor'
sample_rate: int = 16000
window_size: float = 0.02
window_stride: float = 0.01
n_window_size: Optional[int] = None
n_window_stride: Optional[int] = None
window: str = 'hann'
n_fft: Optional[int] = None
lowfreq: Optional[float] = 0.0
highfreq: Optional[float] = None
n_mels: int = 64
n_mfcc: int = 64
dct_type: int = 2
norm: str = 'ortho'
log: bool = True
@dataclass
class SpectrogramAugmentationConfig:
_target_: str = "nemo.collections.asr.modules.SpectrogramAugmentation"
freq_masks: int = 0
time_masks: int = 0
freq_width: int = 0
time_width: Optional[Any] = 0
rect_masks: int = 0
rect_time: int = 0
rect_freq: int = 0
mask_value: float = 0
rng: Optional[Any] = None # random.Random() type
use_numba_spec_augment: bool = True
@dataclass
class CropOrPadSpectrogramAugmentationConfig:
audio_length: int
_target_: str = "nemo.collections.asr.modules.CropOrPadSpectrogramAugmentation"
@dataclass
class MaskedPatchAugmentationConfig:
patch_size: int = 48
mask_patches: float = 10.0
freq_masks: int = 0
freq_width: int = 0
_target_: str = "nemo.collections.asr.modules.MaskedPatchAugmentation"
[end of nemo/collections/asr/modules/audio_preprocessing.py]
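The config dataclasses above mirror the module constructors field-for-field, so a Hydra/OmegaConf config can be built or overridden structurally. A stdlib-only illustration of that pattern, using a trimmed stand-in dataclass (fields abbreviated; this is not the real NeMo config):

```python
from dataclasses import dataclass, replace

@dataclass
class MelPreprocessorConfig:  # trimmed stand-in for AudioToMelSpectrogramPreprocessorConfig
    sample_rate: int = 16000
    window_size: float = 0.02
    features: int = 64

base = MelPreprocessorConfig()
hires = replace(base, sample_rate=44100, features=80)  # override a subset of fields
print(hires)  # MelPreprocessorConfig(sample_rate=44100, window_size=0.02, features=80)
```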
[start of nemo/collections/asr/parts/k2/classes.py]
# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from abc import ABC
from dataclasses import dataclass
from typing import Any, Optional, Tuple
import torch
from omegaconf import DictConfig
from nemo.utils import logging
@dataclass
class GraphIntersectDenseConfig:
"""Graph dense intersection config.
"""
search_beam: float = 20.0
output_beam: float = 10.0
min_active_states: int = 30
max_active_states: int = 10000
@dataclass
class GraphModuleConfig:
"""Config for graph modules.
Typically used with graph losses and decoders.
"""
topo_type: str = "default"
topo_with_self_loops: bool = True
token_lm: Optional[Any] = None
intersect_pruned: bool = False
intersect_conf: GraphIntersectDenseConfig = GraphIntersectDenseConfig()
boost_coeff: float = 0.0
predictor_window_size: int = 0
predictor_step_size: int = 1
class ASRK2Mixin(ABC):
"""k2 Mixin class that simplifies the construction of various models with k2-based losses.
It does the following:
- Sets up the graph loss and decoder (methods _init_k2 and update_k2_modules).
- Registers external graphs, if needed.
- Augments forward(...) with optional graph decoding to get accurate predictions.
"""
def _init_k2(self):
"""
k2-related initialization implementation.
This method is expected to run after the __init__ which sets self._cfg
self._cfg is expected to have the attribute graph_module_cfg
"""
if not hasattr(self, "_cfg"):
raise ValueError("self._cfg must be set before calling _init_k2().")
if not hasattr(self._cfg, "graph_module_cfg") or self._cfg.graph_module_cfg is None:
raise ValueError("self._cfg.graph_module_cfg must be set and cannot be None.")
self.graph_module_cfg = self._cfg.graph_module_cfg
# register token_lm for MAPLoss
criterion_type = self.graph_module_cfg.get("criterion_type", "ml")
self.use_graph_lm = criterion_type == "map"
if self.use_graph_lm:
token_lm_path = self.graph_module_cfg.backend_cfg.get("token_lm", None)
if token_lm_path is None:
raise ValueError(
f"graph_module_cfg.backend_cfg.token_lm is empty. It must be set for criterion_type == `{criterion_type}`"
)
token_lm_path = self.register_artifact('graph_module_cfg.backend_cfg.token_lm', token_lm_path)
self.graph_module_cfg.backend_cfg["token_lm"] = token_lm_path
self.update_k2_modules(self.graph_module_cfg)
def update_k2_modules(self, input_cfg: DictConfig):
"""
Helper function to initialize or update k2 loss and transcribe_decoder.
Args:
input_cfg: DictConfig to take new parameters from. Schema is expected as in
nemo.collections.asr.models.configs.k2_sequence_models_config.GraphModuleConfig
"""
del self.loss
if hasattr(self, "transcribe_decoder"):
del self.transcribe_decoder
if hasattr(self, "joint"):
# RNNT
num_classes = self.joint.num_classes_with_blank - 1
else:
# CTC, MMI, ...
num_classes = self.decoder.num_classes_with_blank - 1
remove_consecutive = input_cfg.backend_cfg.get("topo_with_self_loops", True) and input_cfg.backend_cfg.get(
"topo_type", "default"
) not in ["forced_blank", "identity",]
self._wer.remove_consecutive = remove_consecutive
from nemo.collections.asr.losses.lattice_losses import LatticeLoss
self.loss = LatticeLoss(
num_classes=num_classes,
reduction=self._cfg.get("ctc_reduction", "mean_batch"),
backend="k2",
criterion_type=input_cfg.get("criterion_type", "ml"),
loss_type=input_cfg.get("loss_type", "ctc"),
split_batch_size=input_cfg.get("split_batch_size", 0),
graph_module_cfg=input_cfg.backend_cfg,
)
criterion_type = self.loss.criterion_type
self.use_graph_lm = criterion_type == "map"
transcribe_training = input_cfg.get("transcribe_training", False)
if transcribe_training and criterion_type == "ml":
logging.warning(
f"""You do not need to use transcribe_training=`{transcribe_training}`
with criterion_type=`{criterion_type}`. transcribe_training will be set to False."""
)
transcribe_training = False
self.transcribe_training = transcribe_training
if self.use_graph_lm:
from nemo.collections.asr.modules.graph_decoder import ViterbiDecoderWithGraph
self.transcribe_decoder = ViterbiDecoderWithGraph(
num_classes=num_classes,
backend="k2",
dec_type="token_lm",
return_type="1best",
return_ilabels=True,
output_aligned=True,
split_batch_size=input_cfg.get("split_batch_size", 0),
graph_module_cfg=input_cfg.backend_cfg,
)
def _forward_k2_post_processing(
self, log_probs: torch.Tensor, encoded_length: torch.Tensor, greedy_predictions: torch.Tensor
) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
"""
k2-related post-processing part of .forward()
Args:
log_probs: The log probabilities tensor of shape [B, T, D].
encoded_length: The lengths of the acoustic sequence after propagation through the encoder, of shape [B].
greedy_predictions: The greedy token predictions of the model of shape [B, T]
Returns:
A tuple of 3 elements -
1) The log probabilities tensor of shape [B, T, D].
2) The lengths of the acoustic sequence after propagation through the encoder, of shape [B].
3) The greedy token predictions of the model of shape [B, T] (via argmax)
"""
# greedy_predictions from .forward() are incorrect for criterion_type=`map`
# getting correct greedy_predictions, if needed
if self.use_graph_lm and (not self.training or self.transcribe_training):
greedy_predictions, encoded_length, _ = self.transcribe_decoder.forward(
log_probs=log_probs, log_probs_length=encoded_length
)
return log_probs, encoded_length, greedy_predictions
[end of nemo/collections/asr/parts/k2/classes.py]
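`_init_k2` gates the extra token-LM machinery on `criterion_type == "map"` and insists that `backend_cfg.token_lm` is set in that case. The validation pattern, reduced to plain dictionaries (the helper name is hypothetical):

```python
def validate_graph_cfg(graph_module_cfg: dict) -> bool:
    # Returns use_graph_lm, mirroring the checks in ASRK2Mixin._init_k2
    criterion_type = graph_module_cfg.get("criterion_type", "ml")
    use_graph_lm = criterion_type == "map"
    if use_graph_lm and graph_module_cfg.get("backend_cfg", {}).get("token_lm") is None:
        raise ValueError(
            f"backend_cfg.token_lm must be set for criterion_type == {criterion_type!r}"
        )
    return use_graph_lm

print(validate_graph_cfg({"criterion_type": "ml"}))                       # False
print(validate_graph_cfg({"criterion_type": "map",
                          "backend_cfg": {"token_lm": "token_lm.fst"}}))  # True
```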
[start of nemo/collections/asr/parts/submodules/adapters/multi_head_attention_adapter_module.py]
# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import math
from dataclasses import dataclass
from typing import Any, Optional
import torch
from torch import nn as nn
from nemo.collections.asr.parts.submodules import multi_head_attention as mha
from nemo.collections.common.parts import adapter_modules
from nemo.core.classes.mixins import adapter_mixin_strategies
class MHAResidualAddAdapterStrategy(adapter_mixin_strategies.ResidualAddAdapterStrategy):
"""
An implementation of residual addition of an adapter module with its input for the MHA Adapters.
"""
def forward(self, input: torch.Tensor, adapter: torch.nn.Module, *, module: 'AdapterModuleMixin'):
"""
A basic strategy, comprising of a residual connection over the input, after forward pass by
the underlying adapter. Additional work is done to pack and unpack the dictionary of inputs and outputs.
Note: The `value` tensor is added to the output of the attention adapter as the residual connection.
Args:
input: A dictionary of multiple input arguments for the adapter module.
`query`, `key`, `value`: Original output tensor of the module, or the output of the
previous adapter (if more than one adapters are enabled).
`mask`: Attention mask.
`pos_emb`: Optional positional embedding for relative encoding.
adapter: The adapter module that is currently required to perform the forward pass.
module: The calling module, in its entirety. It is a module that implements `AdapterModuleMixin`,
therefore the strategy can access all other adapters in this module via `module.adapter_layer`.
Returns:
The result tensor, after one of the active adapters has finished its forward passes.
"""
out = self.compute_output(input, adapter, module=module)
# Apply stochastic depth only in training mode and when its probability is non-zero.
p = self.stochastic_depth
if module.training and p > 0.0:
out = self.apply_stochastic_depth(out, input['value'], adapter, module=module)
# Return the residual connection output = input + adapter(input)
result = input['value'] + out
# If l2_lambda is activated, register the loss value
self.compute_auxiliary_losses(result, input['value'], adapter, module=module)
return result
def compute_output(
self, input: torch.Tensor, adapter: torch.nn.Module, *, module: 'AdapterModuleMixin'
) -> torch.Tensor:
"""
Compute the output of a single adapter to some input.
Args:
input: Original output tensor of the module, or the output of the previous adapter (if more than
one adapter is enabled).
adapter: The adapter module that is currently required to perform the forward pass.
module: The calling module, in its entirety. It is a module that implements `AdapterModuleMixin`,
therefore the strategy can access all other adapters in this module via `module.adapter_layer`.
Returns:
The result tensor, after one of the active adapters has finished its forward passes.
"""
if isinstance(input, (list, tuple)):
out = adapter(*input)
elif isinstance(input, dict):
out = adapter(**input)
else:
out = adapter(input)
return out
@dataclass
class MHAResidualAddAdapterStrategyConfig(adapter_mixin_strategies.ResidualAddAdapterStrategyConfig):
_target_: str = "{0}.{1}".format(
MHAResidualAddAdapterStrategy.__module__, MHAResidualAddAdapterStrategy.__name__
) # mandatory field
class MultiHeadAttentionAdapter(mha.MultiHeadAttention, adapter_modules.AdapterModuleUtil):
"""Multi-Head Attention layer of Transformer.
Args:
n_head (int): number of heads
n_feat (int): size of the features
dropout_rate (float): dropout rate
proj_dim (int, optional): Optional integer value for projection before computing attention.
If None, there is no projection (equivalent to proj_dim = n_feat).
If > 0, n_feat will be projected to proj_dim before attention is calculated.
If < 0, proj_dim will be set to n_head, so that each head has a projected dimension of 1.
adapter_strategy: By default, MHAResidualAddAdapterStrategyConfig. An adapter composition function object.
"""
def __init__(
self,
n_head: int,
n_feat: int,
dropout_rate: float,
proj_dim: Optional[int] = None,
adapter_strategy: MHAResidualAddAdapterStrategyConfig = None,
):
super().__init__(n_head=n_head, n_feat=n_feat, dropout_rate=dropout_rate, max_cache_len=0)
self.pre_norm = nn.LayerNorm(n_feat)
# Set the projection dim to number of heads automatically
if proj_dim is not None and proj_dim < 1:
proj_dim = n_head
self.proj_dim = proj_dim
# Recompute weights for projection dim
if self.proj_dim is not None:
if self.proj_dim % n_head != 0:
raise ValueError(f"proj_dim ({proj_dim}) is not divisible by n_head ({n_head})")
self.d_k = self.proj_dim // n_head
self.s_d_k = math.sqrt(self.d_k)
self.linear_q = nn.Linear(n_feat, self.proj_dim)
self.linear_k = nn.Linear(n_feat, self.proj_dim)
self.linear_v = nn.Linear(n_feat, self.proj_dim)
self.linear_out = nn.Linear(self.proj_dim, n_feat)
# Setup adapter strategy
self.setup_adapter_strategy(adapter_strategy)
# reset parameters for Q to be identity operation
self.reset_parameters()
def forward(self, query, key, value, mask, pos_emb=None, cache=None):
"""Compute 'Scaled Dot Product Attention'.
Args:
query (torch.Tensor): (batch, time1, size)
key (torch.Tensor): (batch, time2, size)
value(torch.Tensor): (batch, time2, size)
mask (torch.Tensor): (batch, time1, time2)
cache (torch.Tensor) : (batch, time_cache, size)
returns:
output (torch.Tensor): transformed `value` (batch, time1, d_model) weighted by the query dot key attention
cache (torch.Tensor) : (batch, time_cache_next, size)
"""
# Need to perform duplicate computations as at this point the tensors have been
# separated by the adapter forward
query = self.pre_norm(query)
key = self.pre_norm(key)
value = self.pre_norm(value)
return super().forward(query, key, value, mask, pos_emb, cache=cache)
def reset_parameters(self):
with torch.no_grad():
nn.init.zeros_(self.linear_out.weight)
nn.init.zeros_(self.linear_out.bias)
def get_default_strategy_config(self) -> 'dataclass':
return MHAResidualAddAdapterStrategyConfig()
@dataclass
class MultiHeadAttentionAdapterConfig:
n_head: int
n_feat: int
dropout_rate: float = 0.0
proj_dim: Optional[int] = None
adapter_strategy: Optional[Any] = MHAResidualAddAdapterStrategyConfig()
_target_: str = "{0}.{1}".format(MultiHeadAttentionAdapter.__module__, MultiHeadAttentionAdapter.__name__)
class RelPositionMultiHeadAttentionAdapter(mha.RelPositionMultiHeadAttention, adapter_modules.AdapterModuleUtil):
"""Multi-Head Attention layer of Transformer-XL with support of relative positional encoding.
Paper: https://arxiv.org/abs/1901.02860
Args:
n_head (int): number of heads
n_feat (int): size of the features
dropout_rate (float): dropout rate
proj_dim (int, optional): Optional integer value for projection before computing attention.
If None, there is no projection (equivalent to proj_dim = n_feat).
If > 0, n_feat will be projected to proj_dim before attention is calculated.
If < 0, proj_dim will be set to n_head, so that each head has a projected dimension of 1.
adapter_strategy: By default, MHAResidualAddAdapterStrategyConfig. An adapter composition function object.
"""
def __init__(
self,
n_head: int,
n_feat: int,
dropout_rate: float,
proj_dim: Optional[int] = None,
adapter_strategy: MHAResidualAddAdapterStrategyConfig = None,
):
super().__init__(
n_head=n_head, n_feat=n_feat, dropout_rate=dropout_rate, pos_bias_u=None, pos_bias_v=None, max_cache_len=0
)
self.pre_norm = nn.LayerNorm(n_feat)
# Set the projection dim to number of heads automatically
if proj_dim is not None and proj_dim < 1:
proj_dim = n_head
self.proj_dim = proj_dim
# Recompute weights for projection dim
if self.proj_dim is not None:
if self.proj_dim % n_head != 0:
raise ValueError(f"proj_dim ({proj_dim}) is not divisible by n_head ({n_head})")
self.d_k = self.proj_dim // n_head
self.s_d_k = math.sqrt(self.d_k)
self.linear_q = nn.Linear(n_feat, self.proj_dim)
self.linear_k = nn.Linear(n_feat, self.proj_dim)
self.linear_v = nn.Linear(n_feat, self.proj_dim)
self.linear_out = nn.Linear(self.proj_dim, n_feat)
self.linear_pos = nn.Linear(n_feat, self.proj_dim, bias=False)
self.pos_bias_u = nn.Parameter(torch.FloatTensor(self.h, self.d_k))
self.pos_bias_v = nn.Parameter(torch.FloatTensor(self.h, self.d_k))
# Setup adapter strategy
self.setup_adapter_strategy(adapter_strategy)
# reset parameters for Q to be identity operation
self.reset_parameters()
def forward(self, query, key, value, mask, pos_emb, cache=None):
"""Compute 'Scaled Dot Product Attention' with rel. positional encoding.
Args:
query (torch.Tensor): (batch, time1, size)
key (torch.Tensor): (batch, time2, size)
value(torch.Tensor): (batch, time2, size)
mask (torch.Tensor): (batch, time1, time2)
pos_emb (torch.Tensor) : (batch, time1, size)
cache (torch.Tensor) : (batch, time_cache, size)
Returns:
output (torch.Tensor): transformed `value` (batch, time1, d_model) weighted by the query dot key attention
cache_next (torch.Tensor) : (batch, time_cache_next, size)
"""
# Need to perform duplicate computations as at this point the tensors have been
# separated by the adapter forward
query = self.pre_norm(query)
key = self.pre_norm(key)
value = self.pre_norm(value)
return super().forward(query, key, value, mask, pos_emb, cache=cache)
def reset_parameters(self):
with torch.no_grad():
nn.init.zeros_(self.linear_out.weight)
nn.init.zeros_(self.linear_out.bias)
# NOTE: This exact procedure is apparently highly important.
# The above operation is safe to do, as it is equivalent to self.linear_out.weight *= 0.0 (similarly for bias)
# However:
# DO NOT REPLACE BELOW WITH self.pos_bias_u *= 0.0 OR self.pos_bias_v *= 0.0
# For some reason at init sometimes it will cause the value of the tensor to become NaN
# All operations to compute matrix_ac and matrix_bd will then fail.
nn.init.zeros_(self.pos_bias_u)
nn.init.zeros_(self.pos_bias_v)
def get_default_strategy_config(self) -> 'dataclass':
return MHAResidualAddAdapterStrategyConfig()
@dataclass
class RelPositionMultiHeadAttentionAdapterConfig:
n_head: int
n_feat: int
dropout_rate: float = 0.0
proj_dim: Optional[int] = None
adapter_strategy: Optional[Any] = MHAResidualAddAdapterStrategyConfig()
_target_: str = "{0}.{1}".format(
RelPositionMultiHeadAttentionAdapter.__module__, RelPositionMultiHeadAttentionAdapter.__name__
)
class PositionalEncodingAdapter(mha.PositionalEncoding, adapter_modules.AdapterModuleUtil):
"""
Absolute positional embedding adapter.
.. note::
Absolute positional embedding value is added to the input tensor *without residual connection*!
Therefore, the input is changed; if you only require the positional embedding, drop the returned `x`!
Args:
d_model (int): The input dimension of x.
max_len (int): The max sequence length.
xscale (float): The input scaling factor. Defaults to 1.0.
adapter_strategy (AbstractAdapterStrategy): By default, ReturnResultAdapterStrategyConfig.
An adapter composition function object.
NOTE: Since this is a positional encoding, it will not add a residual !
"""
def __init__(
self,
d_model: int,
max_len: int = 5000,
xscale=1.0,
adapter_strategy: adapter_mixin_strategies.ReturnResultAdapterStrategyConfig = None,
):
super().__init__(
d_model=d_model, dropout_rate=0.0, max_len=max_len, xscale=xscale, dropout_rate_emb=0.0,
)
# Setup adapter strategy
self.setup_adapter_strategy(adapter_strategy)
def get_default_strategy_config(self) -> 'dataclass':
return adapter_mixin_strategies.ReturnResultAdapterStrategyConfig()
@dataclass
class PositionalEncodingAdapterConfig:
d_model: int
max_len: int = 5000
xscale: float = 1.0
adapter_strategy: Optional[Any] = adapter_mixin_strategies.ReturnResultAdapterStrategyConfig()
_target_: str = "{0}.{1}".format(PositionalEncodingAdapter.__module__, PositionalEncodingAdapter.__name__)
class RelPositionalEncodingAdapter(mha.RelPositionalEncoding, adapter_modules.AdapterModuleUtil):
"""
Relative positional encoding for TransformerXL's layers
See : Appendix B in https://arxiv.org/abs/1901.02860
.. note::
Relative positional embedding value is **not** added to the input tensor!
Therefore, the input is returned unchanged; if you only require the positional embedding, drop the returned `x`!
Args:
d_model (int): embedding dim
max_len (int): maximum input length
xscale (bool): whether to scale the input by sqrt(d_model)
adapter_strategy: By default, ReturnResultAdapterStrategyConfig. An adapter composition function object.
"""
def __init__(
self,
d_model: int,
max_len: int = 5000,
xscale=1.0,
adapter_strategy: adapter_mixin_strategies.ReturnResultAdapterStrategyConfig = None,
):
super().__init__(d_model=d_model, dropout_rate=0.0, max_len=max_len, xscale=xscale, dropout_rate_emb=0.0)
# Setup adapter strategy
self.setup_adapter_strategy(adapter_strategy)
def get_default_strategy_config(self) -> 'dataclass':
return adapter_mixin_strategies.ReturnResultAdapterStrategyConfig()
@dataclass
class RelPositionalEncodingAdapterConfig:
d_model: int
max_len: int = 5000
xscale: float = 1.0
adapter_strategy: Optional[Any] = adapter_mixin_strategies.ReturnResultAdapterStrategyConfig()
_target_: str = "{0}.{1}".format(RelPositionalEncodingAdapter.__module__, RelPositionalEncodingAdapter.__name__)
[end of nemo/collections/asr/parts/submodules/adapters/multi_head_attention_adapter_module.py]
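The input-dispatch logic inside `MHAResidualAddAdapterStrategy.compute_output` (a list/tuple is unpacked as positional arguments, a dict as keyword arguments, anything else passed through as a single argument) can be exercised in isolation. A minimal, self-contained sketch; `toy_adapter` is a hypothetical stand-in for a real adapter module, not part of NeMo:

```python
def compute_output(input, adapter):
    # Dispatch on the input container type, mirroring the strategy above:
    # list/tuple -> positional args, dict -> keyword args, else a single arg.
    if isinstance(input, (list, tuple)):
        return adapter(*input)
    elif isinstance(input, dict):
        return adapter(**input)
    return adapter(input)


def toy_adapter(query, key, value):
    # Stand-in adapter: simply returns the `value` argument, which is also
    # the tensor used for the residual connection in the real strategy.
    return value


print(compute_output({'query': 1, 'key': 2, 'value': 3}, toy_adapter))  # 3
print(compute_output((1, 2, 3), toy_adapter))                           # 3
```

This is why the MHA adapters receive a dict of `query`/`key`/`value`/`mask` inputs: the strategy forwards them as keyword arguments without the caller needing to know the adapter's signature.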
[start of nemo/collections/asr/parts/submodules/ctc_beam_decoding.py]
# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import math
import os
from dataclasses import dataclass
from typing import List, Optional, Tuple, Union
import torch
from nemo.collections.asr.parts.utils import rnnt_utils
from nemo.collections.common.tokenizers.tokenizer_spec import TokenizerSpec
from nemo.core.classes import Typing, typecheck
from nemo.core.neural_types import HypothesisType, LengthsType, LogprobsType, NeuralType
from nemo.utils import logging
DEFAULT_TOKEN_OFFSET = 100
def pack_hypotheses(
hypotheses: List[rnnt_utils.NBestHypotheses], logitlen: torch.Tensor,
) -> List[rnnt_utils.NBestHypotheses]:
if logitlen is not None:
if hasattr(logitlen, 'cpu'):
logitlen_cpu = logitlen.to('cpu')
else:
logitlen_cpu = logitlen
for idx, hyp in enumerate(hypotheses): # type: rnnt_utils.NBestHypotheses
for candidate_idx, cand in enumerate(hyp.n_best_hypotheses):
cand.y_sequence = torch.tensor(cand.y_sequence, dtype=torch.long)
if logitlen is not None:
cand.length = logitlen_cpu[idx]
if cand.dec_state is not None:
cand.dec_state = _states_to_device(cand.dec_state)
return hypotheses
def _states_to_device(dec_state, device='cpu'):
if torch.is_tensor(dec_state):
dec_state = dec_state.to(device)
elif isinstance(dec_state, (list, tuple)):
dec_state = tuple(_states_to_device(dec_i, device) for dec_i in dec_state)
return dec_state
class AbstractBeamCTCInfer(Typing):
"""A beam CTC decoder.
Provides a common abstraction for sample level beam decoding.
Args:
blank_id: int, index of the blank token. Can be 0 or len(vocabulary).
beam_size: int, size of the beam used in the underlying beam search engine.
"""
@property
def input_types(self):
"""Returns definitions of module input ports.
"""
return {
"decoder_output": NeuralType(('B', 'T', 'D'), LogprobsType()),
"decoder_lengths": NeuralType(tuple('B'), LengthsType()),
}
@property
def output_types(self):
"""Returns definitions of module output ports.
"""
return {"predictions": [NeuralType(elements_type=HypothesisType())]}
def __init__(self, blank_id: int, beam_size: int):
self.blank_id = blank_id
if beam_size < 1:
raise ValueError("Beam search size cannot be less than 1!")
self.beam_size = beam_size
# Variables set by corresponding setter methods
self.vocab = None
self.decoding_type = None
self.tokenizer = None
# Utility maps for vocabulary
self.vocab_index_map = None
self.index_vocab_map = None
# Internal variable, used to prevent double reduction of consecutive tokens (ctc collapse)
self.override_fold_consecutive_value = None
def set_vocabulary(self, vocab: List[str]):
"""
Set the vocabulary of the decoding framework.
Args:
vocab: List of str. Each token corresponds to its location in the vocabulary emitted by the model.
Note that this vocabulary must NOT contain the "BLANK" token.
"""
self.vocab = vocab
self.vocab_index_map = {v: i for i, v in enumerate(vocab)}
self.index_vocab_map = {i: v for i, v in enumerate(vocab)}
def set_decoding_type(self, decoding_type: str):
"""
Sets the decoding type of the framework. Can support either char or subword models.
Args:
decoding_type: Str corresponding to decoding type. Only supports "char" and "subword".
"""
decoding_type = decoding_type.lower()
supported_types = ['char', 'subword']
if decoding_type not in supported_types:
raise ValueError(
f"Unsupported decoding type. Supported types = {supported_types}.\n" f"Given = {decoding_type}"
)
self.decoding_type = decoding_type
def set_tokenizer(self, tokenizer: TokenizerSpec):
"""
Set the tokenizer of the decoding framework.
Args:
tokenizer: NeMo tokenizer object, which inherits from TokenizerSpec.
"""
self.tokenizer = tokenizer
@typecheck()
def forward(
self, decoder_output: torch.Tensor, decoder_lengths: torch.Tensor,
) -> Tuple[List[Union[rnnt_utils.Hypothesis, rnnt_utils.NBestHypotheses]]]:
"""Returns a list of hypotheses given an input batch of the encoder hidden embedding.
Output token is generated auto-regressively.
Args:
decoder_output: A tensor of size (batch, timesteps, features) or (batch, timesteps) (each timestep is a label).
decoder_lengths: list of int representing the length of each output sequence.
Returns:
packed list containing batch number of sentences (Hypotheses).
"""
raise NotImplementedError()
def __call__(self, *args, **kwargs):
return self.forward(*args, **kwargs)
class BeamCTCInfer(AbstractBeamCTCInfer):
"""A greedy CTC decoder.
Provides a common abstraction for sample level and batch level greedy decoding.
Args:
blank_index: int index of the blank token. Can be 0 or len(vocabulary).
preserve_alignments: Bool flag which preserves the history of logprobs generated during
decoding (sample / batched). When set to true, the Hypothesis will contain
the non-null value for `logprobs` in it. Here, `logprobs` is a torch.Tensors.
compute_timestamps: A bool flag, which determines whether to compute the character/subword, or
word based timestamp mapping the output log-probabilities to discrite intervals of timestamps.
The timestamps will be available in the returned Hypothesis.timestep as a dictionary.
"""
def __init__(
self,
blank_id: int,
beam_size: int,
search_type: str = "default",
return_best_hypothesis: bool = True,
preserve_alignments: bool = False,
compute_timestamps: bool = False,
beam_alpha: float = 1.0,
beam_beta: float = 0.0,
kenlm_path: str = None,
flashlight_cfg: Optional['FlashlightConfig'] = None,
pyctcdecode_cfg: Optional['PyCTCDecodeConfig'] = None,
):
super().__init__(blank_id=blank_id, beam_size=beam_size)
self.search_type = search_type
self.return_best_hypothesis = return_best_hypothesis
self.preserve_alignments = preserve_alignments
self.compute_timestamps = compute_timestamps
if self.compute_timestamps:
raise ValueError(f"Currently this flag is not supported for beam search algorithms.")
self.vocab = None # This must be set by specific method by user before calling forward() !
if search_type == "default" or search_type == "nemo":
self.search_algorithm = self.default_beam_search
elif search_type == "pyctcdecode":
self.search_algorithm = self._pyctcdecode_beam_search
elif search_type == "flashlight":
self.search_algorithm = self.flashlight_beam_search
else:
raise NotImplementedError(
f"The search type ({search_type}) supplied is not supported!\n"
f"Please use one of : (default, nemo, pyctcdecode, flashlight)"
)
# Log the beam search algorithm
logging.info(f"Beam search algorithm: {search_type}")
self.beam_alpha = beam_alpha
self.beam_beta = beam_beta
# Default beam search args
self.kenlm_path = kenlm_path
# PyCTCDecode params
if pyctcdecode_cfg is None:
pyctcdecode_cfg = PyCTCDecodeConfig()
self.pyctcdecode_cfg = pyctcdecode_cfg # type: PyCTCDecodeConfig
if flashlight_cfg is None:
flashlight_cfg = FlashlightConfig()
self.flashlight_cfg = flashlight_cfg
# Default beam search scorer functions
self.default_beam_scorer = None
self.pyctcdecode_beam_scorer = None
self.flashlight_beam_scorer = None
self.token_offset = 0
@typecheck()
def forward(
self, decoder_output: torch.Tensor, decoder_lengths: torch.Tensor,
) -> Tuple[List[Union[rnnt_utils.Hypothesis, rnnt_utils.NBestHypotheses]]]:
"""Returns a list of hypotheses given an input batch of the encoder hidden embedding.
Output token is generated auto-regressively.
Args:
decoder_output: A tensor of size (batch, timesteps, features).
decoder_lengths: list of int representing the length of each output sequence.
Returns:
packed list containing batch number of sentences (Hypotheses).
"""
if self.vocab is None:
raise RuntimeError("Please set the vocabulary with `set_vocabulary()` before calling this function.")
if self.decoding_type is None:
raise ValueError("Please set the decoding type with `set_decoding_type()` before calling this function.")
with torch.no_grad(), torch.inference_mode():
# Process each sequence independently
prediction_tensor = decoder_output
if prediction_tensor.ndim != 3:
raise ValueError(
f"`decoder_output` must be a tensor of shape [B, T, V] (log probs, float). "
f"Provided shape = {prediction_tensor.shape}"
)
# determine type of input - logprobs or labels
out_len = decoder_lengths if decoder_lengths is not None else None
hypotheses = self.search_algorithm(prediction_tensor, out_len)
# Pack results into Hypotheses
packed_result = pack_hypotheses(hypotheses, decoder_lengths)
# Pack the result
if self.return_best_hypothesis and isinstance(packed_result[0], rnnt_utils.NBestHypotheses):
packed_result = [res.n_best_hypotheses[0] for res in packed_result] # type: Hypothesis
return (packed_result,)
@torch.no_grad()
def default_beam_search(
self, x: torch.Tensor, out_len: torch.Tensor
) -> List[Union[rnnt_utils.Hypothesis, rnnt_utils.NBestHypotheses]]:
"""
Open Seq2Seq Beam Search Algorithm (DeepSpeed)
Args:
x: Tensor of shape [B, T, V+1], where B is the batch size, T is the maximum sequence length,
and V is the vocabulary size. The tensor contains log-probabilities.
out_len: Tensor of shape [B], contains lengths of each sequence in the batch.
Returns:
A list of NBestHypotheses objects, one for each sequence in the batch.
"""
if self.compute_timestamps:
raise ValueError(
f"Beam Search with strategy `{self.search_type}` does not support time stamp calculation!"
)
if self.default_beam_scorer is None:
# Check for filepath
if self.kenlm_path is None or not os.path.exists(self.kenlm_path):
raise FileNotFoundError(
f"KenLM binary file not found at : {self.kenlm_path}. "
f"Please set a valid path in the decoding config."
)
# perform token offset for subword models
if self.decoding_type == 'subword':
vocab = [chr(idx + self.token_offset) for idx in range(len(self.vocab))]
else:
# char models
vocab = self.vocab
# Must import at runtime to avoid circular dependency due to module level import.
from nemo.collections.asr.modules.beam_search_decoder import BeamSearchDecoderWithLM
self.default_beam_scorer = BeamSearchDecoderWithLM(
vocab=vocab,
lm_path=self.kenlm_path,
beam_width=self.beam_size,
alpha=self.beam_alpha,
beta=self.beam_beta,
num_cpus=max(1, os.cpu_count()),
input_tensor=False,
)
x = x.to('cpu')
with typecheck.disable_checks():
data = [x[sample_id, : out_len[sample_id], :].softmax(dim=-1) for sample_id in range(len(x))]
beams_batch = self.default_beam_scorer.forward(log_probs=data, log_probs_length=None)
# For each sample in the batch
nbest_hypotheses = []
for beams_idx, beams in enumerate(beams_batch):
# For each beam candidate / hypothesis in each sample
hypotheses = []
for candidate_idx, candidate in enumerate(beams):
hypothesis = rnnt_utils.Hypothesis(
score=0.0, y_sequence=[], dec_state=None, timestep=[], last_token=None
)
# For subword encoding, NeMo will double encode the subword (multiple tokens) into a
# singular unicode id. In doing so, we preserve the semantic of the unicode token, and
# compress the size of the final KenLM ARPA / Binary file.
# In order to do double encoding, we shift the subword by some token offset.
# This step is ignored for character based models.
if self.decoding_type == 'subword':
pred_token_ids = [ord(c) - self.token_offset for c in candidate[1]]
else:
# Char models
pred_token_ids = [self.vocab_index_map[c] for c in candidate[1]]
# We preserve the token ids and the score for this hypothesis
hypothesis.y_sequence = pred_token_ids
hypothesis.score = candidate[0]
# If alignment must be preserved, we preserve a view of the output logprobs.
# Note this view is shared amongst all beams within the sample, be sure to clone it if you
# require specific processing for each sample in the beam.
# This is done to preserve memory.
if self.preserve_alignments:
hypothesis.alignments = x[beams_idx][: out_len[beams_idx]]
hypotheses.append(hypothesis)
# Wrap the result in NBestHypothesis.
hypotheses = rnnt_utils.NBestHypotheses(hypotheses)
nbest_hypotheses.append(hypotheses)
return nbest_hypotheses
@torch.no_grad()
def _pyctcdecode_beam_search(
self, x: torch.Tensor, out_len: torch.Tensor
) -> List[Union[rnnt_utils.Hypothesis, rnnt_utils.NBestHypotheses]]:
"""
PyCTCDecode Beam Search Algorithm. Should support Char and Subword models.
Args:
x: Tensor of shape [B, T, V+1], where B is the batch size, T is the maximum sequence length,
and V is the vocabulary size. The tensor contains log-probabilities.
out_len: Tensor of shape [B], contains lengths of each sequence in the batch.
Returns:
A list of NBestHypotheses objects, one for each sequence in the batch.
"""
if self.compute_timestamps:
raise ValueError(
f"Beam Search with strategy `{self.search_type}` does not support time stamp calculation!"
)
try:
import pyctcdecode
except (ImportError, ModuleNotFoundError):
raise ImportError(
f"Could not load `pyctcdecode` library. Please install it from pip using :\n"
f"pip install --upgrade pyctcdecode"
)
if self.pyctcdecode_beam_scorer is None:
self.pyctcdecode_beam_scorer = pyctcdecode.build_ctcdecoder(
labels=self.vocab, kenlm_model_path=self.kenlm_path, alpha=self.beam_alpha, beta=self.beam_beta
) # type: pyctcdecode.BeamSearchDecoderCTC
x = x.to('cpu').numpy()
with typecheck.disable_checks():
beams_batch = []
for sample_id in range(len(x)):
logprobs = x[sample_id, : out_len[sample_id], :]
result = self.pyctcdecode_beam_scorer.decode_beams(
logprobs,
beam_width=self.beam_size,
beam_prune_logp=self.pyctcdecode_cfg.beam_prune_logp,
token_min_logp=self.pyctcdecode_cfg.token_min_logp,
prune_history=self.pyctcdecode_cfg.prune_history,
hotwords=self.pyctcdecode_cfg.hotwords,
hotword_weight=self.pyctcdecode_cfg.hotword_weight,
lm_start_state=None,
) # Output format: text, last_lm_state, text_frames, logit_score, lm_score
beams_batch.append(result)
nbest_hypotheses = []
for beams_idx, beams in enumerate(beams_batch):
hypotheses = []
for candidate_idx, candidate in enumerate(beams):
# Candidate = (text, last_lm_state, text_frames, logit_score, lm_score)
hypothesis = rnnt_utils.Hypothesis(
score=0.0, y_sequence=[], dec_state=None, timestep=[], last_token=None
)
# TODO: Requires token ids to be returned rather than text.
if self.decoding_type == 'subword':
if self.tokenizer is None:
raise ValueError("Tokenizer must be provided for subword decoding. Use set_tokenizer().")
pred_token_ids = self.tokenizer.text_to_ids(candidate[0])
else:
if self.vocab is None:
raise ValueError("Vocab must be provided for character decoding. Use set_vocab().")
chars = list(candidate[0])
pred_token_ids = [self.vocab_index_map[c] for c in chars]
hypothesis.y_sequence = pred_token_ids
hypothesis.text = candidate[0] # text
hypothesis.score = candidate[4]  # lm_score (combined logit and LM score)
# Inject word level timestamps
hypothesis.timestep = candidate[2] # text_frames
if self.preserve_alignments:
hypothesis.alignments = torch.from_numpy(x[beams_idx][: out_len[beams_idx]])
hypotheses.append(hypothesis)
hypotheses = rnnt_utils.NBestHypotheses(hypotheses)
nbest_hypotheses.append(hypotheses)
return nbest_hypotheses
@torch.no_grad()
def flashlight_beam_search(
self, x: torch.Tensor, out_len: torch.Tensor
) -> List[Union[rnnt_utils.Hypothesis, rnnt_utils.NBestHypotheses]]:
"""
Flashlight Beam Search Algorithm. Should support Char and Subword models.
Args:
x: Tensor of shape [B, T, V+1], where B is the batch size, T is the maximum sequence length,
and V is the vocabulary size. The tensor contains log-probabilities.
out_len: Tensor of shape [B], contains lengths of each sequence in the batch.
Returns:
A list of NBestHypotheses objects, one for each sequence in the batch.
"""
if self.compute_timestamps:
raise ValueError(
f"Beam Search with strategy `{self.search_type}` does not support time stamp calculation!"
)
if self.flashlight_beam_scorer is None:
# Check for filepath
if self.kenlm_path is None or not os.path.exists(self.kenlm_path):
raise FileNotFoundError(
f"KenLM binary file not found at : {self.kenlm_path}. "
f"Please set a valid path in the decoding config."
)
# perform token offset for subword models
# if self.decoding_type == 'subword':
# vocab = [chr(idx + self.token_offset) for idx in range(len(self.vocab))]
# else:
# # char models
# vocab = self.vocab
# Must import at runtime to avoid circular dependency due to module level import.
from nemo.collections.asr.modules.flashlight_decoder import FlashLightKenLMBeamSearchDecoder
self.flashlight_beam_scorer = FlashLightKenLMBeamSearchDecoder(
lm_path=self.kenlm_path,
vocabulary=self.vocab,
tokenizer=self.tokenizer,
lexicon_path=self.flashlight_cfg.lexicon_path,
boost_path=self.flashlight_cfg.boost_path,
beam_size=self.beam_size,
beam_size_token=self.flashlight_cfg.beam_size_token,
beam_threshold=self.flashlight_cfg.beam_threshold,
lm_weight=self.beam_alpha,
word_score=self.beam_beta,
unk_weight=self.flashlight_cfg.unk_weight,
sil_weight=self.flashlight_cfg.sil_weight,
)
x = x.to('cpu')
with typecheck.disable_checks():
beams_batch = self.flashlight_beam_scorer.forward(log_probs=x)
# For each sample in the batch
nbest_hypotheses = []
for beams_idx, beams in enumerate(beams_batch):
# For each beam candidate / hypothesis in each sample
hypotheses = []
for candidate_idx, candidate in enumerate(beams):
hypothesis = rnnt_utils.Hypothesis(
score=0.0, y_sequence=[], dec_state=None, timestep=[], last_token=None
)
# We preserve the token ids and the score for this hypothesis
hypothesis.y_sequence = candidate['tokens'].tolist()
hypothesis.score = candidate['score']
# If alignment must be preserved, we preserve a view of the output logprobs.
# Note this view is shared amongst all beams within the sample, be sure to clone it if you
# require specific processing for each sample in the beam.
# This is done to preserve memory.
if self.preserve_alignments:
hypothesis.alignments = x[beams_idx][: out_len[beams_idx]]
hypotheses.append(hypothesis)
# Wrap the result in NBestHypothesis.
hypotheses = rnnt_utils.NBestHypotheses(hypotheses)
nbest_hypotheses.append(hypotheses)
return nbest_hypotheses
def set_decoding_type(self, decoding_type: str):
super().set_decoding_type(decoding_type)
# Please check train_kenlm.py in scripts/asr_language_modeling/ to find out why we need
# TOKEN_OFFSET for BPE-based models
if self.decoding_type == 'subword':
self.token_offset = DEFAULT_TOKEN_OFFSET
@dataclass
class PyCTCDecodeConfig:
# These arguments cannot be imported from pyctcdecode (optional dependency)
# Therefore we copy the values explicitly
# Taken from pyctcdecode.constant
beam_prune_logp: float = -10.0
token_min_logp: float = -5.0
prune_history: bool = False
hotwords: Optional[List[str]] = None
hotword_weight: float = 10.0
@dataclass
class FlashlightConfig:
lexicon_path: Optional[str] = None
boost_path: Optional[str] = None
beam_size_token: int = 16
beam_threshold: float = 20.0
unk_weight: float = -math.inf
sil_weight: float = 0.0
@dataclass
class BeamCTCInferConfig:
beam_size: int
search_type: str = 'default'
preserve_alignments: bool = False
compute_timestamps: bool = False
return_best_hypothesis: bool = True
beam_alpha: float = 1.0
beam_beta: float = 0.0
kenlm_path: Optional[str] = None
flashlight_cfg: Optional[FlashlightConfig] = FlashlightConfig()
pyctcdecode_cfg: Optional[PyCTCDecodeConfig] = PyCTCDecodeConfig()
[end of nemo/collections/asr/parts/submodules/ctc_beam_decoding.py]
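The subword "double encoding" used by `default_beam_search` maps each vocabulary index to a single unicode character shifted by `DEFAULT_TOKEN_OFFSET` (`chr(idx + offset)`), and the decoded text is mapped back with `ord(c) - offset`. A minimal, self-contained sketch of that round trip; the helper names (`encode_vocab`, `decode_candidate`) and the toy vocabulary are illustrative, not part of NeMo:

```python
DEFAULT_TOKEN_OFFSET = 100  # same constant as in the module above


def encode_vocab(vocab, offset=DEFAULT_TOKEN_OFFSET):
    # Map each vocabulary *index* to a single unicode character,
    # so a multi-character subword becomes one symbol in the KenLM vocabulary.
    return [chr(idx + offset) for idx in range(len(vocab))]


def decode_candidate(text, offset=DEFAULT_TOKEN_OFFSET):
    # Recover the original token ids from the offset-shifted characters.
    return [ord(c) - offset for c in text]


toy_vocab = ["the", "cat", "s"]
chars = encode_vocab(toy_vocab)
ids = decode_candidate("".join(chars))
print(chars)  # ['d', 'e', 'f']
print(ids)    # [0, 1, 2]
```

The offset keeps the encoded symbols out of the low control-character range while preserving a one-to-one mapping between token ids and characters, which is what lets the KenLM ARPA/binary file stay compact.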
[start of nemo/collections/asr/parts/submodules/ctc_greedy_decoding.py]
# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from dataclasses import dataclass
from typing import List, Optional
import torch
from omegaconf import DictConfig, OmegaConf
from nemo.collections.asr.parts.utils import rnnt_utils
from nemo.collections.asr.parts.utils.asr_confidence_utils import ConfidenceMeasureConfig, ConfidenceMeasureMixin
from nemo.core.classes import Typing, typecheck
from nemo.core.neural_types import HypothesisType, LengthsType, LogprobsType, NeuralType
from nemo.utils import logging
def pack_hypotheses(hypotheses: List[rnnt_utils.Hypothesis], logitlen: torch.Tensor,) -> List[rnnt_utils.Hypothesis]:
if logitlen is not None:
if hasattr(logitlen, 'cpu'):
logitlen_cpu = logitlen.to('cpu')
else:
logitlen_cpu = logitlen
for idx, hyp in enumerate(hypotheses): # type: rnnt_utils.Hypothesis
hyp.y_sequence = torch.tensor(hyp.y_sequence, dtype=torch.long)
if logitlen is not None:
hyp.length = logitlen_cpu[idx]
if hyp.dec_state is not None:
hyp.dec_state = _states_to_device(hyp.dec_state)
return hypotheses
def _states_to_device(dec_state, device='cpu'):
if torch.is_tensor(dec_state):
dec_state = dec_state.to(device)
elif isinstance(dec_state, (list, tuple)):
dec_state = tuple(_states_to_device(dec_i, device) for dec_i in dec_state)
return dec_state
class GreedyCTCInfer(Typing, ConfidenceMeasureMixin):
"""A greedy CTC decoder.
Provides a common abstraction for sample level and batch level greedy decoding.
Args:
blank_id: int index of the blank token. Can be 0 or len(vocabulary).
preserve_alignments: Bool flag which preserves the history of logprobs generated during
decoding (sample / batched). When set to true, the Hypothesis will contain
the non-null value for `logprobs` in it. Here, `logprobs` is a torch.Tensor.
compute_timestamps: A bool flag, which determines whether to compute character/subword or
word based timestamps by mapping the output log-probabilities to discrete intervals of timestamps.
The timestamps will be available in the returned Hypothesis.timestep as a dictionary.
preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores
generated during decoding. When set to true, the Hypothesis will contain
the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of floats.
confidence_measure_cfg: A dict-like object which contains the measure name and settings to compute per-frame
confidence scores.
name: The measure name (str).
Supported values:
- 'max_prob' for using the maximum token probability as a confidence.
- 'entropy' for using a normalized entropy of a log-likelihood vector.
entropy_type: Which type of entropy to use (str). Used if confidence_measure_cfg.name is set to `entropy`.
Supported values:
- 'gibbs' for the (standard) Gibbs entropy. If the alpha (α) is provided,
the formula is the following: H_α = -sum_i((p^α_i)*log(p^α_i)).
Note that for this entropy, the alpha should comply with the following inequality:
(log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= α <= (1+log(V-1))/log(V-1)
where V is the model vocabulary size.
- 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
Tsallis entropy formula is the following: H_α = 1/(α-1)*(1-sum_i(p^α_i)),
where α is a parameter. When α == 1, it works like the Gibbs entropy.
More: https://en.wikipedia.org/wiki/Tsallis_entropy
- 'renyi' for the Rényi entropy.
Rényi entropy formula is the following: H_α = 1/(1-α)*log_2(sum_i(p^α_i)),
where α is a parameter. When α == 1, it works like the Gibbs entropy.
More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
alpha: Power scale for logsoftmax (α for entropies). Here we restrict it to be > 0.
When the alpha equals one, scaling is not applied to 'max_prob',
and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
entropy_norm: A mapping of the entropy value to the interval [0,1].
Supported values:
- 'lin' for using the linear mapping.
- 'exp' for using exponential mapping with linear shift.
"""
@property
def input_types(self):
"""Returns definitions of module input ports.
"""
# Input can be of dimension -
# ('B', 'T', 'D') [Log probs] or ('B', 'T') [Labels]
return {
"decoder_output": NeuralType(None, LogprobsType()),
"decoder_lengths": NeuralType(tuple('B'), LengthsType()),
}
@property
def output_types(self):
"""Returns definitions of module output ports.
"""
return {"predictions": [NeuralType(elements_type=HypothesisType())]}
def __init__(
self,
blank_id: int,
preserve_alignments: bool = False,
compute_timestamps: bool = False,
preserve_frame_confidence: bool = False,
confidence_measure_cfg: Optional[DictConfig] = None,
):
super().__init__()
self.blank_id = blank_id
self.preserve_alignments = preserve_alignments
# we need timestamps to extract non-blank per-frame confidence
self.compute_timestamps = compute_timestamps | preserve_frame_confidence
self.preserve_frame_confidence = preserve_frame_confidence
# set confidence calculation measure
self._init_confidence_measure(confidence_measure_cfg)
@typecheck()
def forward(
self, decoder_output: torch.Tensor, decoder_lengths: torch.Tensor,
):
"""Returns a list of hypotheses given an input batch of the encoder hidden embedding.
Output labels are computed greedily per frame; CTC greedy decoding is not autoregressive.
Args:
decoder_output: A tensor of size (batch, timesteps, features) or (batch, timesteps) (each timestep is a label).
decoder_lengths: list of int representing the length of each sequence
output sequence.
Returns:
packed list containing batch number of sentences (Hypotheses).
"""
with torch.inference_mode():
hypotheses = []
# Process each sequence independently
prediction_cpu_tensor = decoder_output.cpu()
if prediction_cpu_tensor.ndim < 2 or prediction_cpu_tensor.ndim > 3:
raise ValueError(
f"`decoder_output` must be a tensor of shape [B, T] (labels, int) or "
f"[B, T, V] (log probs, float). Provided shape = {prediction_cpu_tensor.shape}"
)
# determine type of input - logprobs or labels
if prediction_cpu_tensor.ndim == 2: # labels
greedy_decode = self._greedy_decode_labels
else:
greedy_decode = self._greedy_decode_logprobs
for ind in range(prediction_cpu_tensor.shape[0]):
out_len = decoder_lengths[ind] if decoder_lengths is not None else None
hypothesis = greedy_decode(prediction_cpu_tensor[ind], out_len)
hypotheses.append(hypothesis)
# Pack results into Hypotheses
packed_result = pack_hypotheses(hypotheses, decoder_lengths)
return (packed_result,)
@torch.no_grad()
def _greedy_decode_logprobs(self, x: torch.Tensor, out_len: torch.Tensor):
# x: [T, D]
# out_len: [seq_len]
# Initialize blank state and empty label set in Hypothesis
hypothesis = rnnt_utils.Hypothesis(score=0.0, y_sequence=[], dec_state=None, timestep=[], last_token=None)
prediction = x.detach().cpu()
if out_len is not None:
prediction = prediction[:out_len]
prediction_logprobs, prediction_labels = prediction.max(dim=-1)
non_blank_ids = prediction_labels != self.blank_id
hypothesis.y_sequence = prediction_labels.numpy().tolist()
hypothesis.score = (prediction_logprobs[non_blank_ids]).sum()
if self.preserve_alignments:
# Preserve the logprobs, as well as labels after argmax
hypothesis.alignments = (prediction.clone(), prediction_labels.clone())
if self.compute_timestamps:
hypothesis.timestep = torch.nonzero(non_blank_ids, as_tuple=False)[:, 0].numpy().tolist()
if self.preserve_frame_confidence:
hypothesis.frame_confidence = self._get_confidence(prediction)
return hypothesis
@torch.no_grad()
def _greedy_decode_labels(self, x: torch.Tensor, out_len: torch.Tensor):
# x: [T]
# out_len: [seq_len]
# Initialize blank state and empty label set in Hypothesis
hypothesis = rnnt_utils.Hypothesis(score=0.0, y_sequence=[], dec_state=None, timestep=[], last_token=None)
prediction_labels = x.detach().cpu()
if out_len is not None:
prediction_labels = prediction_labels[:out_len]
non_blank_ids = prediction_labels != self.blank_id
hypothesis.y_sequence = prediction_labels.numpy().tolist()
hypothesis.score = -1.0
if self.preserve_alignments:
raise ValueError("Requested for alignments, but predictions provided were labels, not log probabilities.")
if self.compute_timestamps:
hypothesis.timestep = torch.nonzero(non_blank_ids, as_tuple=False)[:, 0].numpy().tolist()
if self.preserve_frame_confidence:
raise ValueError(
"Requested for per-frame confidence, but predictions provided were labels, not log probabilities."
)
return hypothesis
def __call__(self, *args, **kwargs):
return self.forward(*args, **kwargs)
@dataclass
class GreedyCTCInferConfig:
preserve_alignments: bool = False
compute_timestamps: bool = False
preserve_frame_confidence: bool = False
confidence_measure_cfg: Optional[ConfidenceMeasureConfig] = ConfidenceMeasureConfig()
confidence_method_cfg: str = "DEPRECATED"
def __post_init__(self):
# OmegaConf.structured ensures that post_init check is always executed
self.confidence_measure_cfg = OmegaConf.structured(
self.confidence_measure_cfg
if isinstance(self.confidence_measure_cfg, ConfidenceMeasureConfig)
else ConfidenceMeasureConfig(**self.confidence_measure_cfg)
)
if self.confidence_method_cfg != "DEPRECATED":
logging.warning(
"`confidence_method_cfg` is deprecated and will be removed in the future. "
"Please use `confidence_measure_cfg` instead."
)
# TODO (alaptev): delete the following two lines sometime in the future
logging.warning("Re-writing `confidence_measure_cfg` with the value of `confidence_method_cfg`.")
# OmegaConf.structured ensures that post_init check is always executed
self.confidence_measure_cfg = OmegaConf.structured(
self.confidence_method_cfg
if isinstance(self.confidence_method_cfg, ConfidenceMeasureConfig)
else ConfidenceMeasureConfig(**self.confidence_method_cfg)
)
[end of nemo/collections/asr/parts/submodules/ctc_greedy_decoding.py]
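`GreedyCTCInfer._greedy_decode_logprobs` above stops at the per-frame argmax; collapsing repeated labels and removing blanks is done downstream by the decoding wrapper. A minimal standalone sketch of the full greedy CTC pipeline in pure Python (no torch or NeMo; the toy log-prob matrix is invented for illustration, and blank id 0 is assumed):

```python
# Standalone sketch of greedy CTC decoding (pure Python, no NeMo/torch).
# Step 1 mirrors the per-frame argmax in _greedy_decode_logprobs; step 2
# is the CTC collapse normally done by the decoding wrapper.

BLANK_ID = 0  # blank can be 0 or len(vocabulary); 0 is assumed here

def greedy_ctc_decode(log_probs, blank_id=BLANK_ID):
    """log_probs: T x V list of per-frame log-probabilities."""
    # Step 1: per-frame argmax
    labels = [max(range(len(frame)), key=frame.__getitem__) for frame in log_probs]
    # Step 2: CTC collapse - merge consecutive repeats, then drop blanks
    collapsed = []
    prev = None
    for tok in labels:
        if tok != prev and tok != blank_id:
            collapsed.append(tok)
        prev = tok
    return labels, collapsed

# Toy example: 6 frames over a 3-token vocabulary {blank=0, 1, 2}
toy = [
    [-0.1, -2.0, -3.0],   # argmax 0 (blank)
    [-2.0, -0.1, -3.0],   # argmax 1
    [-2.0, -0.1, -3.0],   # argmax 1 (repeat, merged)
    [-0.1, -2.0, -3.0],   # argmax 0 (blank)
    [-2.0, -0.2, -3.0],   # argmax 1 (new emission after blank)
    [-3.0, -2.0, -0.1],   # argmax 2
]
labels, collapsed = greedy_ctc_decode(toy)
print(labels)      # [0, 1, 1, 0, 1, 2]
print(collapsed)   # [1, 1, 2]
```

The intervening blank frame is what lets the second `1` survive the repeat merge, which is why `compute_timestamps` above tracks non-blank frame indices.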
[start of nemo/collections/asr/parts/submodules/rnnt_greedy_decoding.py]
# Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Copyright 2017 Johns Hopkins University (Shinji Watanabe)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from dataclasses import dataclass
from typing import List, Optional, Tuple, Union
import numpy as np
import torch
from omegaconf import DictConfig, OmegaConf
from nemo.collections.asr.modules import rnnt_abstract
from nemo.collections.asr.parts.utils import rnnt_utils
from nemo.collections.asr.parts.utils.asr_confidence_utils import ConfidenceMeasureConfig, ConfidenceMeasureMixin
from nemo.collections.common.parts.rnn import label_collate
from nemo.core.classes import Typing, typecheck
from nemo.core.neural_types import AcousticEncodedRepresentation, ElementType, HypothesisType, LengthsType, NeuralType
from nemo.utils import logging
def pack_hypotheses(hypotheses: List[rnnt_utils.Hypothesis], logitlen: torch.Tensor,) -> List[rnnt_utils.Hypothesis]:
if hasattr(logitlen, 'cpu'):
logitlen_cpu = logitlen.to('cpu')
else:
logitlen_cpu = logitlen
for idx, hyp in enumerate(hypotheses): # type: rnnt_utils.Hypothesis
hyp.y_sequence = torch.tensor(hyp.y_sequence, dtype=torch.long)
hyp.length = logitlen_cpu[idx]
if hyp.dec_state is not None:
hyp.dec_state = _states_to_device(hyp.dec_state)
return hypotheses
def _states_to_device(dec_state, device='cpu'):
if torch.is_tensor(dec_state):
dec_state = dec_state.to(device)
elif isinstance(dec_state, (list, tuple)):
dec_state = tuple(_states_to_device(dec_i, device) for dec_i in dec_state)
return dec_state
class _GreedyRNNTInfer(Typing, ConfidenceMeasureMixin):
"""A greedy transducer decoder.
Provides a common abstraction for sample level and batch level greedy decoding.
Args:
decoder_model: rnnt_utils.AbstractRNNTDecoder implementation.
joint_model: rnnt_utils.AbstractRNNTJoint implementation.
blank_index: int index of the blank token. Can be 0 or len(vocabulary).
max_symbols_per_step: Optional int. The maximum number of symbols that can be added
to a sequence in a single time step; if set to None then there is
no limit.
preserve_alignments: Bool flag which preserves the history of alignments generated during
greedy decoding (sample / batched). When set to true, the Hypothesis will contain
the non-null value for `alignments` in it. Here, `alignments` is a List of List of
Tuple(Tensor (of length V + 1), Tensor(scalar, label after argmax)).
The length of the list corresponds to the Acoustic Length (T).
Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more targets from a vocabulary.
U is the number of target tokens for the current timestep Ti.
preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores generated
during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of List of floats.
The length of the list corresponds to the Acoustic Length (T).
Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more confidence scores.
U is the number of target tokens for the current timestep Ti.
confidence_measure_cfg: A dict-like object which contains the measure name and settings to compute per-frame
confidence scores.
name: The measure name (str).
Supported values:
- 'max_prob' for using the maximum token probability as a confidence.
- 'entropy' for using a normalized entropy of a log-likelihood vector.
entropy_type: Which type of entropy to use (str). Used if confidence_measure_cfg.name is set to `entropy`.
Supported values:
- 'gibbs' for the (standard) Gibbs entropy. If the alpha (α) is provided,
the formula is the following: H_α = -sum_i((p^α_i)*log(p^α_i)).
Note that for this entropy, the alpha should comply with the following inequality:
(log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= α <= (1+log(V-1))/log(V-1)
where V is the model vocabulary size.
- 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
Tsallis entropy formula is the following: H_α = 1/(α-1)*(1-sum_i(p^α_i)),
where α is a parameter. When α == 1, it works like the Gibbs entropy.
More: https://en.wikipedia.org/wiki/Tsallis_entropy
- 'renyi' for the Rényi entropy.
Rényi entropy formula is the following: H_α = 1/(1-α)*log_2(sum_i(p^α_i)),
where α is a parameter. When α == 1, it works like the Gibbs entropy.
More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
alpha: Power scale for logsoftmax (α for entropies). Here we restrict it to be > 0.
When the alpha equals one, scaling is not applied to 'max_prob',
and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
entropy_norm: A mapping of the entropy value to the interval [0,1].
Supported values:
- 'lin' for using the linear mapping.
- 'exp' for using exponential mapping with linear shift.
"""
@property
def input_types(self):
"""Returns definitions of module input ports.
"""
return {
"encoder_output": NeuralType(('B', 'D', 'T'), AcousticEncodedRepresentation()),
"encoded_lengths": NeuralType(tuple('B'), LengthsType()),
"partial_hypotheses": [NeuralType(elements_type=HypothesisType(), optional=True)], # must always be last
}
@property
def output_types(self):
"""Returns definitions of module output ports.
"""
return {"predictions": [NeuralType(elements_type=HypothesisType())]}
def __init__(
self,
decoder_model: rnnt_abstract.AbstractRNNTDecoder,
joint_model: rnnt_abstract.AbstractRNNTJoint,
blank_index: int,
max_symbols_per_step: Optional[int] = None,
preserve_alignments: bool = False,
preserve_frame_confidence: bool = False,
confidence_measure_cfg: Optional[DictConfig] = None,
):
super().__init__()
self.decoder = decoder_model
self.joint = joint_model
self._blank_index = blank_index
self._SOS = blank_index # Start-of-Signal token index
self.max_symbols = max_symbols_per_step
self.preserve_alignments = preserve_alignments
self.preserve_frame_confidence = preserve_frame_confidence
# set confidence calculation measure
self._init_confidence_measure(confidence_measure_cfg)
def __call__(self, *args, **kwargs):
return self.forward(*args, **kwargs)
@torch.no_grad()
def _pred_step(
self,
label: Union[torch.Tensor, int],
hidden: Optional[torch.Tensor],
add_sos: bool = False,
batch_size: Optional[int] = None,
) -> Tuple[torch.Tensor, torch.Tensor]:
"""
Common prediction step based on the AbstractRNNTDecoder implementation.
Args:
label: (int/torch.Tensor): Label or "Start-of-Signal" token.
hidden: (Optional torch.Tensor): RNN State vector
add_sos (bool): Whether to add a zero vector at the beginning as the "start of sentence" token.
batch_size: Batch size of the output tensor.
Returns:
g: (B, U, H) if add_sos is false, else (B, U + 1, H)
hid: (h, c) where h is the final sequence hidden state and c is
the final cell state:
h (tensor), shape (L, B, H)
c (tensor), shape (L, B, H)
"""
if isinstance(label, torch.Tensor):
# label: [batch, 1]
if label.dtype != torch.long:
label = label.long()
else:
# Label is an integer
if label == self._SOS:
return self.decoder.predict(None, hidden, add_sos=add_sos, batch_size=batch_size)
label = label_collate([[label]])
# output: [B, 1, K]
return self.decoder.predict(label, hidden, add_sos=add_sos, batch_size=batch_size)
def _joint_step(self, enc, pred, log_normalize: Optional[bool] = None):
"""
Common joint step based on AbstractRNNTJoint implementation.
Args:
enc: Output of the Encoder model. A torch.Tensor of shape [B, 1, H1]
pred: Output of the Decoder model. A torch.Tensor of shape [B, 1, H2]
log_normalize: Whether to log normalize or not. None will log normalize only for CPU.
Returns:
logits of shape (B, T=1, U=1, V + 1)
"""
with torch.no_grad():
logits = self.joint.joint(enc, pred)
if log_normalize is None:
if not logits.is_cuda: # Use log softmax only if on CPU
logits = logits.log_softmax(dim=len(logits.shape) - 1)
else:
if log_normalize:
logits = logits.log_softmax(dim=len(logits.shape) - 1)
return logits
class GreedyRNNTInfer(_GreedyRNNTInfer):
"""A greedy transducer decoder.
Sequence level greedy decoding, performed auto-regressively.
Args:
decoder_model: rnnt_utils.AbstractRNNTDecoder implementation.
joint_model: rnnt_utils.AbstractRNNTJoint implementation.
blank_index: int index of the blank token. Can be 0 or len(vocabulary).
max_symbols_per_step: Optional int. The maximum number of symbols that can be added
to a sequence in a single time step; if set to None then there is
no limit.
preserve_alignments: Bool flag which preserves the history of alignments generated during
greedy decoding (sample / batched). When set to true, the Hypothesis will contain
the non-null value for `alignments` in it. Here, `alignments` is a List of List of
Tuple(Tensor (of length V + 1), Tensor(scalar, label after argmax)).
The length of the list corresponds to the Acoustic Length (T).
Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more targets from a vocabulary.
U is the number of target tokens for the current timestep Ti.
preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores generated
during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of List of floats.
The length of the list corresponds to the Acoustic Length (T).
Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more confidence scores.
U is the number of target tokens for the current timestep Ti.
confidence_measure_cfg: A dict-like object which contains the measure name and settings to compute per-frame
confidence scores.
name: The measure name (str).
Supported values:
- 'max_prob' for using the maximum token probability as a confidence.
- 'entropy' for using a normalized entropy of a log-likelihood vector.
entropy_type: Which type of entropy to use (str). Used if confidence_measure_cfg.name is set to `entropy`.
Supported values:
- 'gibbs' for the (standard) Gibbs entropy. If the alpha (α) is provided,
the formula is the following: H_α = -sum_i((p^α_i)*log(p^α_i)).
Note that for this entropy, the alpha should comply with the following inequality:
(log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= α <= (1+log(V-1))/log(V-1)
where V is the model vocabulary size.
- 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
Tsallis entropy formula is the following: H_α = 1/(α-1)*(1-sum_i(p^α_i)),
where α is a parameter. When α == 1, it works like the Gibbs entropy.
More: https://en.wikipedia.org/wiki/Tsallis_entropy
- 'renyi' for the Rényi entropy.
Rényi entropy formula is the following: H_α = 1/(1-α)*log_2(sum_i(p^α_i)),
where α is a parameter. When α == 1, it works like the Gibbs entropy.
More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
alpha: Power scale for logsoftmax (α for entropies). Here we restrict it to be > 0.
When the alpha equals one, scaling is not applied to 'max_prob',
and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
entropy_norm: A mapping of the entropy value to the interval [0,1].
Supported values:
- 'lin' for using the linear mapping.
- 'exp' for using exponential mapping with linear shift.
"""
def __init__(
self,
decoder_model: rnnt_abstract.AbstractRNNTDecoder,
joint_model: rnnt_abstract.AbstractRNNTJoint,
blank_index: int,
max_symbols_per_step: Optional[int] = None,
preserve_alignments: bool = False,
preserve_frame_confidence: bool = False,
confidence_measure_cfg: Optional[DictConfig] = None,
):
super().__init__(
decoder_model=decoder_model,
joint_model=joint_model,
blank_index=blank_index,
max_symbols_per_step=max_symbols_per_step,
preserve_alignments=preserve_alignments,
preserve_frame_confidence=preserve_frame_confidence,
confidence_measure_cfg=confidence_measure_cfg,
)
@typecheck()
def forward(
self,
encoder_output: torch.Tensor,
encoded_lengths: torch.Tensor,
partial_hypotheses: Optional[List[rnnt_utils.Hypothesis]] = None,
):
"""Returns a list of hypotheses given an input batch of the encoder hidden embedding.
Output token is generated auto-regressively.
Args:
encoder_output: A tensor of size (batch, features, timesteps).
encoded_lengths: list of int representing the length of each sequence
output sequence.
Returns:
packed list containing batch number of sentences (Hypotheses).
"""
# Preserve decoder and joint training state
decoder_training_state = self.decoder.training
joint_training_state = self.joint.training
with torch.inference_mode():
# Apply optional preprocessing
encoder_output = encoder_output.transpose(1, 2) # (B, T, D)
self.decoder.eval()
self.joint.eval()
hypotheses = []
# Process each sequence independently
with self.decoder.as_frozen(), self.joint.as_frozen():
for batch_idx in range(encoder_output.size(0)):
inseq = encoder_output[batch_idx, :, :].unsqueeze(1) # [T, 1, D]
logitlen = encoded_lengths[batch_idx]
partial_hypothesis = partial_hypotheses[batch_idx] if partial_hypotheses is not None else None
hypothesis = self._greedy_decode(inseq, logitlen, partial_hypotheses=partial_hypothesis)
hypotheses.append(hypothesis)
# Pack results into Hypotheses
packed_result = pack_hypotheses(hypotheses, encoded_lengths)
self.decoder.train(decoder_training_state)
self.joint.train(joint_training_state)
return (packed_result,)
@torch.no_grad()
def _greedy_decode(
self, x: torch.Tensor, out_len: torch.Tensor, partial_hypotheses: Optional[rnnt_utils.Hypothesis] = None
):
# x: [T, 1, D]
# out_len: [seq_len]
# Initialize blank state and empty label set in Hypothesis
hypothesis = rnnt_utils.Hypothesis(score=0.0, y_sequence=[], dec_state=None, timestep=[], last_token=None)
if partial_hypotheses is not None:
hypothesis.last_token = partial_hypotheses.last_token
hypothesis.y_sequence = (
partial_hypotheses.y_sequence.cpu().tolist()
if isinstance(partial_hypotheses.y_sequence, torch.Tensor)
else partial_hypotheses.y_sequence
)
if partial_hypotheses.dec_state is not None:
hypothesis.dec_state = self.decoder.batch_concat_states([partial_hypotheses.dec_state])
hypothesis.dec_state = _states_to_device(hypothesis.dec_state, x.device)
if self.preserve_alignments:
# Alignments is a 2-dimensional dangling list representing T x U
hypothesis.alignments = [[]]
if self.preserve_frame_confidence:
hypothesis.frame_confidence = [[]]
# For timestep t in X_t
for time_idx in range(out_len):
# Extract encoder embedding at timestep t
# f = x[time_idx, :, :].unsqueeze(0) # [1, 1, D]
f = x.narrow(dim=0, start=time_idx, length=1)
# Setup exit flags and counter
not_blank = True
symbols_added = 0
# While blank is not predicted and we have not run out of max symbols per timestep
while not_blank and (self.max_symbols is None or symbols_added < self.max_symbols):
# In the first timestep, we initialize the network with RNNT Blank
# In later timesteps, we provide previous predicted label as input.
if hypothesis.last_token is None and hypothesis.dec_state is None:
last_label = self._SOS
else:
last_label = label_collate([[hypothesis.last_token]])
# Perform prediction network and joint network steps.
g, hidden_prime = self._pred_step(last_label, hypothesis.dec_state)
# If preserving per-frame confidence, log_normalize must be true
logp = self._joint_step(f, g, log_normalize=True if self.preserve_frame_confidence else None)[
0, 0, 0, :
]
del g
# torch.max(0) op doesn't exist for FP16.
if logp.dtype != torch.float32:
logp = logp.float()
# get index k, of max prob
v, k = logp.max(0)
k = k.item() # K is the label at timestep t_s in inner loop, s >= 0.
if self.preserve_alignments:
# insert logprobs into last timestep
hypothesis.alignments[-1].append((logp.to('cpu'), torch.tensor(k, dtype=torch.int32)))
if self.preserve_frame_confidence:
# insert confidence into last timestep
hypothesis.frame_confidence[-1].append(self._get_confidence(logp))
del logp
# If blank token is predicted, exit inner loop, move onto next timestep t
if k == self._blank_index:
not_blank = False
else:
# Append token to label set, update RNN state.
hypothesis.y_sequence.append(k)
hypothesis.score += float(v)
hypothesis.timestep.append(time_idx)
hypothesis.dec_state = hidden_prime
hypothesis.last_token = k
# Increment token counter.
symbols_added += 1
if self.preserve_alignments:
# convert Ti-th logits into a torch array
hypothesis.alignments.append([]) # blank buffer for next timestep
if self.preserve_frame_confidence:
hypothesis.frame_confidence.append([]) # blank buffer for next timestep
# Remove trailing empty list of Alignments
if self.preserve_alignments:
if len(hypothesis.alignments[-1]) == 0:
del hypothesis.alignments[-1]
# Remove trailing empty list of per-frame confidence
if self.preserve_frame_confidence:
if len(hypothesis.frame_confidence[-1]) == 0:
del hypothesis.frame_confidence[-1]
# Unpack the hidden states
hypothesis.dec_state = self.decoder.batch_select_state(hypothesis.dec_state, 0)
return hypothesis
class GreedyBatchedRNNTInfer(_GreedyRNNTInfer):
"""A batch level greedy transducer decoder.
Batch level greedy decoding, performed auto-regressively.
Args:
decoder_model: rnnt_utils.AbstractRNNTDecoder implementation.
joint_model: rnnt_utils.AbstractRNNTJoint implementation.
blank_index: int index of the blank token. Can be 0 or len(vocabulary).
max_symbols_per_step: Optional int. The maximum number of symbols that can be added
to a sequence in a single time step; if set to None then there is
no limit.
preserve_alignments: Bool flag which preserves the history of alignments generated during
greedy decoding (sample / batched). When set to true, the Hypothesis will contain
the non-null value for `alignments` in it. Here, `alignments` is a List of List of
Tuple(Tensor (of length V + 1), Tensor(scalar, label after argmax)).
The length of the list corresponds to the Acoustic Length (T).
Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more targets from a vocabulary.
U is the number of target tokens for the current timestep Ti.
preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores generated
during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of List of floats.
The length of the list corresponds to the Acoustic Length (T).
Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more confidence scores.
U is the number of target tokens for the current timestep Ti.
confidence_measure_cfg: A dict-like object which contains the measure name and settings to compute per-frame
confidence scores.
name: The measure name (str).
Supported values:
- 'max_prob' for using the maximum token probability as a confidence.
- 'entropy' for using a normalized entropy of a log-likelihood vector.
entropy_type: Which type of entropy to use (str). Used if confidence_measure_cfg.name is set to `entropy`.
Supported values:
- 'gibbs' for the (standard) Gibbs entropy. If the alpha (α) is provided,
the formula is the following: H_α = -sum_i((p^α_i)*log(p^α_i)).
Note that for this entropy, the alpha should comply with the following inequality:
(log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= α <= (1+log(V-1))/log(V-1)
where V is the model vocabulary size.
- 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
Tsallis entropy formula is the following: H_α = 1/(α-1)*(1-sum_i(p^α_i)),
where α is a parameter. When α == 1, it works like the Gibbs entropy.
More: https://en.wikipedia.org/wiki/Tsallis_entropy
- 'renyi' for the Rényi entropy.
Rényi entropy formula is the following: H_α = 1/(1-α)*log_2(sum_i(p^α_i)),
where α is a parameter. When α == 1, it works like the Gibbs entropy.
More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
alpha: Power scale for logsoftmax (α for entropies). Here we restrict it to be > 0.
When the alpha equals one, scaling is not applied to 'max_prob',
and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
entropy_norm: A mapping of the entropy value to the interval [0,1].
Supported values:
- 'lin' for using the linear mapping.
- 'exp' for using exponential mapping with linear shift.
"""
def __init__(
self,
decoder_model: rnnt_abstract.AbstractRNNTDecoder,
joint_model: rnnt_abstract.AbstractRNNTJoint,
blank_index: int,
max_symbols_per_step: Optional[int] = None,
preserve_alignments: bool = False,
preserve_frame_confidence: bool = False,
confidence_measure_cfg: Optional[DictConfig] = None,
):
super().__init__(
decoder_model=decoder_model,
joint_model=joint_model,
blank_index=blank_index,
max_symbols_per_step=max_symbols_per_step,
preserve_alignments=preserve_alignments,
preserve_frame_confidence=preserve_frame_confidence,
confidence_measure_cfg=confidence_measure_cfg,
)
# Depending on availability of `blank_as_pad` support
# switch between more efficient batch decoding technique
if self.decoder.blank_as_pad:
self._greedy_decode = self._greedy_decode_blank_as_pad
else:
self._greedy_decode = self._greedy_decode_masked
@typecheck()
def forward(
self,
encoder_output: torch.Tensor,
encoded_lengths: torch.Tensor,
partial_hypotheses: Optional[List[rnnt_utils.Hypothesis]] = None,
):
"""Returns a list of hypotheses given an input batch of the encoder hidden embedding.
Output token is generated auto-regressively.
Args:
encoder_output: A tensor of size (batch, features, timesteps).
encoded_lengths: list of int representing the length of each sequence in the batch.
Returns:
packed list containing batch number of sentences (Hypotheses).
"""
# Preserve decoder and joint training state
decoder_training_state = self.decoder.training
joint_training_state = self.joint.training
with torch.inference_mode():
# Apply optional preprocessing
encoder_output = encoder_output.transpose(1, 2) # (B, T, D)
logitlen = encoded_lengths
self.decoder.eval()
self.joint.eval()
with self.decoder.as_frozen(), self.joint.as_frozen():
inseq = encoder_output # [B, T, D]
hypotheses = self._greedy_decode(
inseq, logitlen, device=inseq.device, partial_hypotheses=partial_hypotheses
)
# Pack the hypotheses results
packed_result = pack_hypotheses(hypotheses, logitlen)
self.decoder.train(decoder_training_state)
self.joint.train(joint_training_state)
return (packed_result,)
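The forward pass above saves each submodule's training flag, switches to eval for the inference-mode loop, and restores the flag afterwards. A minimal sketch of that save/eval/restore pattern (`DummyModule` is a hypothetical stand-in for `torch.nn.Module`):

```python
class DummyModule:
    """Hypothetical stand-in mirroring torch.nn.Module's `training` flag."""

    def __init__(self):
        self.training = True

    def train(self, mode=True):
        self.training = mode

    def eval(self):
        self.train(False)


def run_in_eval_mode(module, fn):
    # Save the training flag, switch to eval for decoding, then restore it
    # afterwards -- the same pattern used by forward() above.
    prev = module.training
    module.eval()
    try:
        return fn()
    finally:
        module.train(prev)
```

The `try/finally` guarantees the flag is restored even if decoding raises.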
def _greedy_decode_blank_as_pad(
self,
x: torch.Tensor,
out_len: torch.Tensor,
device: torch.device,
partial_hypotheses: Optional[List[rnnt_utils.Hypothesis]] = None,
):
if partial_hypotheses is not None:
raise NotImplementedError("`partial_hypotheses` support is not implemented")
with torch.inference_mode():
# x: [B, T, D]
# out_len: [B]
# device: torch.device
# Initialize list of Hypothesis
batchsize = x.shape[0]
hypotheses = [
rnnt_utils.Hypothesis(score=0.0, y_sequence=[], timestep=[], dec_state=None) for _ in range(batchsize)
]
# Initialize Hidden state matrix (shared by entire batch)
hidden = None
# If alignments need to be preserved, register a dangling list to hold the values
if self.preserve_alignments:
# alignments is a 3-dimensional dangling list representing B x T x U
for hyp in hypotheses:
hyp.alignments = [[]]
# If confidence scores need to be preserved, register a dangling list to hold the values
if self.preserve_frame_confidence:
# frame_confidence is a 3-dimensional dangling list representing B x T x U
for hyp in hypotheses:
hyp.frame_confidence = [[]]
# Last Label buffer + Last Label without blank buffer
# batch level equivalent of the last_label
last_label = torch.full([batchsize, 1], fill_value=self._blank_index, dtype=torch.long, device=device)
# Mask buffers
blank_mask = torch.full([batchsize], fill_value=0, dtype=torch.bool, device=device)
# Get max sequence length
max_out_len = out_len.max()
for time_idx in range(max_out_len):
f = x.narrow(dim=1, start=time_idx, length=1) # [B, 1, D]
# Prepare t timestamp batch variables
not_blank = True
symbols_added = 0
# Reset blank mask
blank_mask.mul_(False)
# Update blank mask with time mask
# Batch: [B, T, D], but Bi may have seq len < max(seq_lens_in_batch)
# Forcibly mask with "blank" tokens for all samples where the current time step T >= seq_len
blank_mask = time_idx >= out_len
# Start inner loop
while not_blank and (self.max_symbols is None or symbols_added < self.max_symbols):
# Batch prediction and joint network steps
# If very first prediction step, submit SOS tag (blank) to pred_step.
# This feeds a zero tensor as input to AbstractRNNTDecoder to prime the state
if time_idx == 0 and symbols_added == 0 and hidden is None:
g, hidden_prime = self._pred_step(self._SOS, hidden, batch_size=batchsize)
else:
# Perform batch step prediction of decoder, getting new states and scores ("g")
g, hidden_prime = self._pred_step(last_label, hidden, batch_size=batchsize)
# Batched joint step - Output = [B, V + 1]
# If preserving per-frame confidence, log_normalize must be true
logp = self._joint_step(f, g, log_normalize=True if self.preserve_frame_confidence else None)[
:, 0, 0, :
]
if logp.dtype != torch.float32:
logp = logp.float()
# Get index k, of max prob for batch
v, k = logp.max(1)
del g
# Update blank mask with current predicted blanks
# This is accumulating blanks over all time steps T and all target steps min(max_symbols, U)
k_is_blank = k == self._blank_index
blank_mask.bitwise_or_(k_is_blank)
all_blanks = torch.all(blank_mask)
del k_is_blank
# If preserving alignments, check if sequence length of sample has been reached
# before adding alignment
if self.preserve_alignments:
# Insert logprobs into last timestep per sample
logp_vals = logp.to('cpu')
logp_ids = logp_vals.max(1)[1]
for batch_idx, is_blank in enumerate(blank_mask):
# we only want to update non-blanks, unless we are at the last step in the loop where
# all elements produced blanks, otherwise there will be duplicate predictions
# saved in alignments
if time_idx < out_len[batch_idx] and (all_blanks or not is_blank):
hypotheses[batch_idx].alignments[-1].append(
(logp_vals[batch_idx], logp_ids[batch_idx])
)
del logp_vals
# If preserving per-frame confidence, check if sequence length of sample has been reached
# before adding confidence scores
if self.preserve_frame_confidence:
# Insert probabilities into last timestep per sample
confidence = self._get_confidence(logp)
for batch_idx, is_blank in enumerate(blank_mask):
if time_idx < out_len[batch_idx] and (all_blanks or not is_blank):
hypotheses[batch_idx].frame_confidence[-1].append(confidence[batch_idx])
del logp
# If all samples predict / have predicted prior blanks, exit loop early
# This is equivalent to if single sample predicted k
if all_blanks:
not_blank = False
else:
# Collect batch indices where blanks occurred now/past
blank_indices = (blank_mask == 1).nonzero(as_tuple=False)
# Recover prior state for all samples which predicted blank now/past
if hidden is not None:
# LSTM has 2 states
hidden_prime = self.decoder.batch_copy_states(hidden_prime, hidden, blank_indices)
elif len(blank_indices) > 0 and hidden is None:
# Reset state if there were some blank and other non-blank predictions in batch
# Original state is filled with zeros so we just multiply
# LSTM has 2 states
hidden_prime = self.decoder.batch_copy_states(hidden_prime, None, blank_indices, value=0.0)
# Recover prior predicted label for all samples which predicted blank now/past
k[blank_indices] = last_label[blank_indices, 0]
# Update new label and hidden state for next iteration
last_label = k.clone().view(-1, 1)
hidden = hidden_prime
# Update predicted labels, accounting for time mask
# If blank was predicted even once, now or in the past,
# Force the current predicted label to also be blank
# This ensures that blanks propagate across all timesteps
# once they have occurred (normally the stopping condition of the sample-level loop).
for kidx, ki in enumerate(k):
if blank_mask[kidx] == 0:
hypotheses[kidx].y_sequence.append(ki)
hypotheses[kidx].timestep.append(time_idx)
hypotheses[kidx].score += float(v[kidx])
symbols_added += 1
# If preserving alignments, convert the current Uj alignments into a torch.Tensor
# Then preserve U at current timestep Ti
# Finally, forward the timestep history to Ti+1 for that sample
# All of this should only be done iff the current time index <= sample-level AM length.
# Otherwise ignore and move to next sample / next timestep.
if self.preserve_alignments:
# convert Ti-th logits into a torch array
for batch_idx in range(batchsize):
# this checks if current timestep <= sample-level AM length
# If current timestep > sample-level AM length, no alignments will be added
# Therefore the list of Uj alignments is empty here.
if len(hypotheses[batch_idx].alignments[-1]) > 0:
hypotheses[batch_idx].alignments.append([]) # blank buffer for next timestep
# Do the same if preserving per-frame confidence
if self.preserve_frame_confidence:
for batch_idx in range(batchsize):
if len(hypotheses[batch_idx].frame_confidence[-1]) > 0:
hypotheses[batch_idx].frame_confidence.append([]) # blank buffer for next timestep
# Remove trailing empty list of alignments at T_{am-len} x Uj
if self.preserve_alignments:
for batch_idx in range(batchsize):
if len(hypotheses[batch_idx].alignments[-1]) == 0:
del hypotheses[batch_idx].alignments[-1]
# Remove trailing empty list of confidence scores at T_{am-len} x Uj
if self.preserve_frame_confidence:
for batch_idx in range(batchsize):
if len(hypotheses[batch_idx].frame_confidence[-1]) == 0:
del hypotheses[batch_idx].frame_confidence[-1]
# Preserve states
for batch_idx in range(batchsize):
hypotheses[batch_idx].dec_state = self.decoder.batch_select_state(hidden, batch_idx)
return hypotheses
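The core of the batched loop above is the sticky blank mask: once a sample predicts blank at a timestep, `bitwise_or_` keeps it masked for the remainder of the inner loop. A torch-free toy sketch of that accumulation (names hypothetical):

```python
def accumulate_blank_mask(step_predictions, blank_id):
    # Sticky blank accumulation: once a sample emits blank it stays masked
    # (the batched decoder does this with blank_mask.bitwise_or_). Returns,
    # per inner-loop step, which samples may still extend their hypothesis.
    batch = len(step_predictions[0])
    mask = [False] * batch
    active_per_step = []
    for step in step_predictions:  # one predicted label per sample
        for b, label in enumerate(step):
            mask[b] = mask[b] or (label == blank_id)
        active_per_step.append([not m for m in mask])
    return active_per_step
```

When every entry of the mask is set, the inner loop exits early, exactly as `torch.all(blank_mask)` does above.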
def _greedy_decode_masked(
self,
x: torch.Tensor,
out_len: torch.Tensor,
device: torch.device,
partial_hypotheses: Optional[List[rnnt_utils.Hypothesis]] = None,
):
if partial_hypotheses is not None:
raise NotImplementedError("`partial_hypotheses` support is not implemented")
# x: [B, T, D]
# out_len: [B]
# device: torch.device
# Initialize state
batchsize = x.shape[0]
hypotheses = [
rnnt_utils.Hypothesis(score=0.0, y_sequence=[], timestep=[], dec_state=None) for _ in range(batchsize)
]
# Initialize Hidden state matrix (shared by entire batch)
hidden = None
# If alignments need to be preserved, register a dangling list to hold the values
if self.preserve_alignments:
# alignments is a 3-dimensional dangling list representing B x T x U
for hyp in hypotheses:
hyp.alignments = [[]]
# If confidence scores need to be preserved, register a dangling list to hold the values
if self.preserve_frame_confidence:
# frame_confidence is a 3-dimensional dangling list representing B x T x U
for hyp in hypotheses:
hyp.frame_confidence = [[]]
# Last Label buffer + Last Label without blank buffer
# batch level equivalent of the last_label
last_label = torch.full([batchsize, 1], fill_value=self._blank_index, dtype=torch.long, device=device)
last_label_without_blank = last_label.clone()
# Mask buffers
blank_mask = torch.full([batchsize], fill_value=0, dtype=torch.bool, device=device)
# Get max sequence length
max_out_len = out_len.max()
with torch.inference_mode():
for time_idx in range(max_out_len):
f = x.narrow(dim=1, start=time_idx, length=1) # [B, 1, D]
# Prepare t timestamp batch variables
not_blank = True
symbols_added = 0
# Reset blank mask
blank_mask.mul_(False)
# Update blank mask with time mask
# Batch: [B, T, D], but Bi may have seq len < max(seq_lens_in_batch)
# Forcibly mask with "blank" tokens for all samples where the current time step T >= seq_len
blank_mask = time_idx >= out_len
# Start inner loop
while not_blank and (self.max_symbols is None or symbols_added < self.max_symbols):
# Batch prediction and joint network steps
# If very first prediction step, submit SOS tag (blank) to pred_step.
# This feeds a zero tensor as input to AbstractRNNTDecoder to prime the state
if time_idx == 0 and symbols_added == 0 and hidden is None:
g, hidden_prime = self._pred_step(self._SOS, hidden, batch_size=batchsize)
else:
# Set a dummy label for the blank value
# This value will be overwritten by "blank" again the last label update below
# This is done as vocabulary of prediction network does not contain "blank" token of RNNT
last_label_without_blank_mask = last_label == self._blank_index
last_label_without_blank[last_label_without_blank_mask] = 0 # temp change of label
last_label_without_blank[~last_label_without_blank_mask] = last_label[
~last_label_without_blank_mask
]
# Perform batch step prediction of decoder, getting new states and scores ("g")
g, hidden_prime = self._pred_step(last_label_without_blank, hidden, batch_size=batchsize)
# Batched joint step - Output = [B, V + 1]
# If preserving per-frame confidence, log_normalize must be true
logp = self._joint_step(f, g, log_normalize=True if self.preserve_frame_confidence else None)[
:, 0, 0, :
]
if logp.dtype != torch.float32:
logp = logp.float()
# Get index k, of max prob for batch
v, k = logp.max(1)
del g
# Update blank mask with current predicted blanks
# This is accumulating blanks over all time steps T and all target steps min(max_symbols, U)
k_is_blank = k == self._blank_index
blank_mask.bitwise_or_(k_is_blank)
all_blanks = torch.all(blank_mask)
# If preserving alignments, check if sequence length of sample has been reached
# before adding alignment
if self.preserve_alignments:
# Insert logprobs into last timestep per sample
logp_vals = logp.to('cpu')
logp_ids = logp_vals.max(1)[1]
for batch_idx, is_blank in enumerate(blank_mask):
# we only want to update non-blanks, unless we are at the last step in the loop where
# all elements produced blanks, otherwise there will be duplicate predictions
# saved in alignments
if time_idx < out_len[batch_idx] and (all_blanks or not is_blank):
hypotheses[batch_idx].alignments[-1].append(
(logp_vals[batch_idx], logp_ids[batch_idx])
)
del logp_vals
# If preserving per-frame confidence, check if sequence length of sample has been reached
# before adding confidence scores
if self.preserve_frame_confidence:
# Insert probabilities into last timestep per sample
confidence = self._get_confidence(logp)
for batch_idx, is_blank in enumerate(blank_mask):
if time_idx < out_len[batch_idx] and (all_blanks or not is_blank):
hypotheses[batch_idx].frame_confidence[-1].append(confidence[batch_idx])
del logp
# If all samples predict / have predicted prior blanks, exit loop early
# This is equivalent to if single sample predicted k
if blank_mask.all():
not_blank = False
else:
# Collect batch indices where blanks occurred now/past
blank_indices = (blank_mask == 1).nonzero(as_tuple=False)
# Recover prior state for all samples which predicted blank now/past
if hidden is not None:
# LSTM has 2 states
hidden_prime = self.decoder.batch_copy_states(hidden_prime, hidden, blank_indices)
elif len(blank_indices) > 0 and hidden is None:
# Reset state if there were some blank and other non-blank predictions in batch
# Original state is filled with zeros so we just multiply
# LSTM has 2 states
hidden_prime = self.decoder.batch_copy_states(hidden_prime, None, blank_indices, value=0.0)
# Recover prior predicted label for all samples which predicted blank now/past
k[blank_indices] = last_label[blank_indices, 0]
# Update new label and hidden state for next iteration
last_label = k.view(-1, 1)
hidden = hidden_prime
# Update predicted labels, accounting for time mask
# If blank was predicted even once, now or in the past,
# Force the current predicted label to also be blank
# This ensures that blanks propagate across all timesteps
# once they have occurred (normally the stopping condition of the sample-level loop).
for kidx, ki in enumerate(k):
if blank_mask[kidx] == 0:
hypotheses[kidx].y_sequence.append(ki)
hypotheses[kidx].timestep.append(time_idx)
hypotheses[kidx].score += float(v[kidx])
symbols_added += 1
# If preserving alignments, convert the current Uj alignments into a torch.Tensor
# Then preserve U at current timestep Ti
# Finally, forward the timestep history to Ti+1 for that sample
# All of this should only be done iff the current time index <= sample-level AM length.
# Otherwise ignore and move to next sample / next timestep.
if self.preserve_alignments:
# convert Ti-th logits into a torch array
for batch_idx in range(batchsize):
# this checks if current timestep <= sample-level AM length
# If current timestep > sample-level AM length, no alignments will be added
# Therefore the list of Uj alignments is empty here.
if len(hypotheses[batch_idx].alignments[-1]) > 0:
hypotheses[batch_idx].alignments.append([]) # blank buffer for next timestep
# Do the same if preserving per-frame confidence
if self.preserve_frame_confidence:
for batch_idx in range(batchsize):
if len(hypotheses[batch_idx].frame_confidence[-1]) > 0:
hypotheses[batch_idx].frame_confidence.append([]) # blank buffer for next timestep
# Remove trailing empty list of alignments at T_{am-len} x Uj
if self.preserve_alignments:
for batch_idx in range(batchsize):
if len(hypotheses[batch_idx].alignments[-1]) == 0:
del hypotheses[batch_idx].alignments[-1]
# Remove trailing empty list of confidence scores at T_{am-len} x Uj
if self.preserve_frame_confidence:
for batch_idx in range(batchsize):
if len(hypotheses[batch_idx].frame_confidence[-1]) == 0:
del hypotheses[batch_idx].frame_confidence[-1]
# Preserve states
for batch_idx in range(batchsize):
hypotheses[batch_idx].dec_state = self.decoder.batch_select_state(hidden, batch_idx)
return hypotheses
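The masked variant above differs from the blank-as-pad variant mainly in how it feeds the prediction network: blank last-labels are swapped for a dummy in-vocabulary id before the step, since the prediction network's vocabulary contains no blank token. A toy sketch of that substitution (names hypothetical):

```python
def strip_blank_labels(last_labels, blank_id, dummy_id=0):
    # The prediction network's vocabulary has no blank token, so samples whose
    # last label is blank are temporarily fed a dummy in-vocabulary id (the
    # masked decoder above uses 0); their states are restored right after.
    return [dummy_id if label == blank_id else label for label in last_labels]
```

The substitution is only a temporary input transformation; `last_label` itself keeps the blank id so the state-recovery step still sees which samples blanked.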
class ExportedModelGreedyBatchedRNNTInfer:
def __init__(self, encoder_model: str, decoder_joint_model: str, max_symbols_per_step: Optional[int] = None):
self.encoder_model_path = encoder_model
self.decoder_joint_model_path = decoder_joint_model
self.max_symbols_per_step = max_symbols_per_step
# Will be populated at runtime
self._blank_index = None
def __call__(self, audio_signal: torch.Tensor, length: torch.Tensor):
"""Returns a list of hypotheses given an input batch of the encoder hidden embedding.
Output token is generated auto-regressively.
Args:
audio_signal: A tensor of size (batch, features, timesteps).
length: list of int representing the length of each sequence in the batch.
Returns:
packed list containing batch number of sentences (Hypotheses).
"""
with torch.no_grad():
# Apply optional preprocessing
encoder_output, encoded_lengths = self.run_encoder(audio_signal=audio_signal, length=length)
if torch.is_tensor(encoder_output):
encoder_output = encoder_output.transpose(1, 2)
else:
encoder_output = encoder_output.transpose([0, 2, 1]) # (B, T, D)
logitlen = encoded_lengths
inseq = encoder_output # [B, T, D]
hypotheses, timestamps = self._greedy_decode(inseq, logitlen)
# Pack the hypotheses results
packed_result = [rnnt_utils.Hypothesis(score=-1.0, y_sequence=[]) for _ in range(len(hypotheses))]
for i in range(len(packed_result)):
packed_result[i].y_sequence = torch.tensor(hypotheses[i], dtype=torch.long)
packed_result[i].length = timestamps[i]
del hypotheses
return packed_result
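Packing raw per-sample token lists into hypothesis records, as `__call__` does above, can be sketched with a simple dataclass (`SimpleHypothesis` is a hypothetical stand-in for `rnnt_utils.Hypothesis`; here `length` just counts recorded timesteps):

```python
from dataclasses import dataclass, field


@dataclass
class SimpleHypothesis:
    # Hypothetical stand-in for rnnt_utils.Hypothesis.
    score: float = -1.0
    y_sequence: list = field(default_factory=list)
    length: int = 0


def pack_results(token_lists, timestamps):
    # Wrap raw per-sample token lists into hypothesis records, mirroring the
    # packing loop in __call__ above.
    return [
        SimpleHypothesis(y_sequence=tokens, length=len(steps))
        for tokens, steps in zip(token_lists, timestamps)
    ]
```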
def _greedy_decode(self, x, out_len):
# x: [B, T, D]
# out_len: [B]
# Initialize state
batchsize = x.shape[0]
hidden = self._get_initial_states(batchsize)
target_lengths = torch.ones(batchsize, dtype=torch.int32)
# Output string buffer
label = [[] for _ in range(batchsize)]
timesteps = [[] for _ in range(batchsize)]
# Last Label buffer + Last Label without blank buffer
# batch level equivalent of the last_label
last_label = torch.full([batchsize, 1], fill_value=self._blank_index, dtype=torch.long).numpy()
if torch.is_tensor(x):
last_label = torch.from_numpy(last_label).to(self.device)
# Mask buffers
blank_mask = torch.full([batchsize], fill_value=0, dtype=torch.bool).numpy()
# Get max sequence length
max_out_len = out_len.max()
for time_idx in range(max_out_len):
f = x[:, time_idx : time_idx + 1, :] # [B, 1, D]
if torch.is_tensor(f):
f = f.transpose(1, 2)
else:
f = f.transpose([0, 2, 1])
# Prepare t timestamp batch variables
not_blank = True
symbols_added = 0
# Reset blank mask
blank_mask *= False
# Update blank mask with time mask
# Batch: [B, T, D], but Bi may have seq len < max(seq_lens_in_batch)
# Forcibly mask with "blank" tokens for all samples where the current time step T >= seq_len
blank_mask = time_idx >= out_len
# Start inner loop
while not_blank and (self.max_symbols_per_step is None or symbols_added < self.max_symbols_per_step):
# Batch prediction and joint network steps
# If very first prediction step, submit SOS tag (blank) to pred_step.
# This feeds a zero tensor as input to AbstractRNNTDecoder to prime the state
if time_idx == 0 and symbols_added == 0:
g = torch.tensor([self._blank_index] * batchsize, dtype=torch.int32).view(-1, 1)
else:
if torch.is_tensor(last_label):
g = last_label.type(torch.int32)
else:
g = last_label.astype(np.int32)
# Batched joint step - Output = [B, V + 1]
joint_out, hidden_prime = self.run_decoder_joint(f, g, target_lengths, *hidden)
logp, pred_lengths = joint_out
logp = logp[:, 0, 0, :]
# Get index k, of max prob for batch
if torch.is_tensor(logp):
v, k = logp.max(1)
else:
k = np.argmax(logp, axis=1).astype(np.int32)
# Update blank mask with current predicted blanks
# This is accumulating blanks over all time steps T and all target steps min(max_symbols, U)
k_is_blank = k == self._blank_index
blank_mask |= k_is_blank
del k_is_blank
del logp
# If all samples predict / have predicted prior blanks, exit loop early
# This is equivalent to if single sample predicted k
if blank_mask.all():
not_blank = False
else:
# Collect batch indices where blanks occurred now/past
if torch.is_tensor(blank_mask):
blank_indices = (blank_mask == 1).nonzero(as_tuple=False)
else:
blank_indices = blank_mask.astype(np.int32).nonzero()
if type(blank_indices) in (list, tuple):
blank_indices = blank_indices[0]
# Recover prior state for all samples which predicted blank now/past
if hidden is not None:
# LSTM has 2 states
for state_id in range(len(hidden)):
hidden_prime[state_id][:, blank_indices, :] = hidden[state_id][:, blank_indices, :]
elif len(blank_indices) > 0 and hidden is None:
# Reset state if there were some blank and other non-blank predictions in batch
# Original state is filled with zeros so we just multiply
# LSTM has 2 states
for state_id in range(len(hidden_prime)):
hidden_prime[state_id][:, blank_indices, :] *= 0.0
# Recover prior predicted label for all samples which predicted blank now/past
k[blank_indices] = last_label[blank_indices, 0]
# Update new label and hidden state for next iteration
if torch.is_tensor(k):
last_label = k.clone().reshape(-1, 1)
else:
last_label = k.copy().reshape(-1, 1)
hidden = hidden_prime
# Update predicted labels, accounting for time mask
# If blank was predicted even once, now or in the past,
# Force the current predicted label to also be blank
# This ensures that blanks propagate across all timesteps
# once they have occurred (normally the stopping condition of the sample-level loop).
for kidx, ki in enumerate(k):
if blank_mask[kidx] == 0:
label[kidx].append(ki)
timesteps[kidx].append(time_idx)
symbols_added += 1
return label, timesteps
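When some batch elements have predicted blank, the loops above restore their previous label (and decoder state) so the extra inner-loop step leaves them untouched. The label-recovery half in isolation (toy sketch, names hypothetical):

```python
def recover_at_blanks(new_labels, old_labels, blank_indices):
    # Samples that predicted blank (now or earlier) keep their previous label,
    # cf. `k[blank_indices] = last_label[blank_indices, 0]` in the loop above;
    # the same index-copy is applied to the hidden states.
    out = list(new_labels)
    for i in blank_indices:
        out[i] = old_labels[i]
    return out
```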
def _setup_blank_index(self):
raise NotImplementedError()
def run_encoder(self, audio_signal, length):
raise NotImplementedError()
def run_decoder_joint(self, enc_logits, targets, target_length, *states):
raise NotImplementedError()
def _get_initial_states(self, batchsize):
raise NotImplementedError()
class ONNXGreedyBatchedRNNTInfer(ExportedModelGreedyBatchedRNNTInfer):
def __init__(self, encoder_model: str, decoder_joint_model: str, max_symbols_per_step: Optional[int] = 10):
super().__init__(
encoder_model=encoder_model,
decoder_joint_model=decoder_joint_model,
max_symbols_per_step=max_symbols_per_step,
)
try:
import onnx
import onnxruntime
except (ModuleNotFoundError, ImportError):
raise ImportError("`onnx` or `onnxruntime` could not be imported, please install the libraries.")
if torch.cuda.is_available():
# Try to use onnxruntime-gpu
providers = ['TensorrtExecutionProvider', 'CUDAExecutionProvider']
else:
# Fall back to CPU and onnxruntime-cpu
providers = ['CPUExecutionProvider']
onnx_session_opt = onnxruntime.SessionOptions()
onnx_session_opt.graph_optimization_level = onnxruntime.GraphOptimizationLevel.ORT_ENABLE_ALL
onnx_model = onnx.load(self.encoder_model_path)
onnx.checker.check_model(onnx_model, full_check=True)
self.encoder_model = onnx_model
self.encoder = onnxruntime.InferenceSession(
onnx_model.SerializeToString(), providers=providers, provider_options=onnx_session_opt
)
onnx_model = onnx.load(self.decoder_joint_model_path)
onnx.checker.check_model(onnx_model, full_check=True)
self.decoder_joint_model = onnx_model
self.decoder_joint = onnxruntime.InferenceSession(
onnx_model.SerializeToString(), providers=providers, provider_options=onnx_session_opt
)
logging.info("Successfully loaded encoder, decoder and joint onnx models !")
# Will be populated at runtime
self._blank_index = None
self.max_symbols_per_step = max_symbols_per_step
self._setup_encoder_input_output_keys()
self._setup_decoder_joint_input_output_keys()
self._setup_blank_index()
def _setup_encoder_input_output_keys(self):
self.encoder_inputs = list(self.encoder_model.graph.input)
self.encoder_outputs = list(self.encoder_model.graph.output)
def _setup_decoder_joint_input_output_keys(self):
self.decoder_joint_inputs = list(self.decoder_joint_model.graph.input)
self.decoder_joint_outputs = list(self.decoder_joint_model.graph.output)
def _setup_blank_index(self):
# ASSUME: Single input with no time length information
dynamic_dim = 257
shapes = self.encoder_inputs[0].type.tensor_type.shape.dim
ip_shape = []
for shape in shapes:
if hasattr(shape, 'dim_param') and 'dynamic' in shape.dim_param:
ip_shape.append(dynamic_dim) # replace dynamic axes with constant
else:
ip_shape.append(int(shape.dim_value))
enc_logits, encoded_length = self.run_encoder(
audio_signal=torch.randn(*ip_shape), length=torch.randint(0, 1, size=(dynamic_dim,))
)
# prepare states
states = self._get_initial_states(batchsize=dynamic_dim)
# run decoder 1 step
joint_out, states = self.run_decoder_joint(enc_logits, None, None, *states)
log_probs, lengths = joint_out
self._blank_index = log_probs.shape[-1] - 1 # last token of vocab size is blank token
logging.info(
f"Enc-Dec-Joint step was evaluated, blank token id = {self._blank_index}; vocab size = {log_probs.shape[-1]}"
)
def run_encoder(self, audio_signal, length):
if hasattr(audio_signal, 'cpu'):
audio_signal = audio_signal.cpu().numpy()
if hasattr(length, 'cpu'):
length = length.cpu().numpy()
ip = {
self.encoder_inputs[0].name: audio_signal,
self.encoder_inputs[1].name: length,
}
enc_out = self.encoder.run(None, ip)
enc_out, encoded_length = enc_out # ASSUME: single output
return enc_out, encoded_length
def run_decoder_joint(self, enc_logits, targets, target_length, *states):
# ASSUME: Decoder is RNN Transducer
if targets is None:
targets = torch.zeros(enc_logits.shape[0], 1, dtype=torch.int32)
target_length = torch.ones(enc_logits.shape[0], dtype=torch.int32)
if hasattr(targets, 'cpu'):
targets = targets.cpu().numpy()
if hasattr(target_length, 'cpu'):
target_length = target_length.cpu().numpy()
ip = {
self.decoder_joint_inputs[0].name: enc_logits,
self.decoder_joint_inputs[1].name: targets,
self.decoder_joint_inputs[2].name: target_length,
}
num_states = 0
if states is not None and len(states) > 0:
num_states = len(states)
for idx, state in enumerate(states):
if hasattr(state, 'cpu'):
state = state.cpu().numpy()
ip[self.decoder_joint_inputs[len(ip)].name] = state
dec_out = self.decoder_joint.run(None, ip)
# unpack dec output
if num_states > 0:
new_states = dec_out[-num_states:]
dec_out = dec_out[:-num_states]
else:
new_states = None
return dec_out, new_states
def _get_initial_states(self, batchsize):
# ASSUME: LSTM STATES of shape (layers, batchsize, dim)
input_state_nodes = [ip for ip in self.decoder_joint_inputs if 'state' in ip.name]
num_states = len(input_state_nodes)
if num_states == 0:
return
input_states = []
for state_id in range(num_states):
node = input_state_nodes[state_id]
ip_shape = []
for shape_idx, shape in enumerate(node.type.tensor_type.shape.dim):
if hasattr(shape, 'dim_param') and 'dynamic' in shape.dim_param:
ip_shape.append(batchsize) # replace dynamic axes with constant
else:
ip_shape.append(int(shape.dim_value))
input_states.append(torch.zeros(*ip_shape))
return input_states
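Building zero-filled initial states requires turning the ONNX graph's symbolic (dynamic) axes into concrete sizes, as `_get_initial_states` does above. A toy sketch of that substitution, with dims given as ints or the literal string 'dynamic' (an assumption for illustration; the real code inspects each dim's `dim_param`):

```python
def concretize_shape(dims, batch_size):
    # Replace symbolic (dynamic) axes with the runtime batch size while
    # keeping static dims as-is. Toy input convention: each dim is either an
    # int or the literal string 'dynamic'.
    return [batch_size if d == 'dynamic' else d for d in dims]
```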
class TorchscriptGreedyBatchedRNNTInfer(ExportedModelGreedyBatchedRNNTInfer):
def __init__(
self,
encoder_model: str,
decoder_joint_model: str,
cfg: DictConfig,
device: str,
max_symbols_per_step: Optional[int] = 10,
):
super().__init__(
encoder_model=encoder_model,
decoder_joint_model=decoder_joint_model,
max_symbols_per_step=max_symbols_per_step,
)
self.cfg = cfg
self.device = device
self.encoder = torch.jit.load(self.encoder_model_path, map_location=self.device)
self.decoder_joint = torch.jit.load(self.decoder_joint_model_path, map_location=self.device)
logging.info("Successfully loaded encoder, decoder and joint torchscript models !")
# Will be populated at runtime
self._blank_index = None
self.max_symbols_per_step = max_symbols_per_step
self._setup_encoder_input_keys()
self._setup_decoder_joint_input_keys()
self._setup_blank_index()
def _setup_encoder_input_keys(self):
arguments = self.encoder.forward.schema.arguments[1:]
self.encoder_inputs = [arg for arg in arguments]
def _setup_decoder_joint_input_keys(self):
arguments = self.decoder_joint.forward.schema.arguments[1:]
self.decoder_joint_inputs = [arg for arg in arguments]
def _setup_blank_index(self):
self._blank_index = len(self.cfg.joint.vocabulary)
logging.info(f"Blank token id = {self._blank_index}; vocab size = {len(self.cfg.joint.vocabulary) + 1}")
def run_encoder(self, audio_signal, length):
enc_out = self.encoder(audio_signal, length)
enc_out, encoded_length = enc_out # ASSUME: single output
return enc_out, encoded_length
def run_decoder_joint(self, enc_logits, targets, target_length, *states):
# ASSUME: Decoder is RNN Transducer
if targets is None:
targets = torch.zeros(enc_logits.shape[0], 1, dtype=torch.int32, device=enc_logits.device)
target_length = torch.ones(enc_logits.shape[0], dtype=torch.int32, device=enc_logits.device)
num_states = 0
if states is not None and len(states) > 0:
num_states = len(states)
dec_out = self.decoder_joint(enc_logits, targets, target_length, *states)
# unpack dec output
if num_states > 0:
new_states = dec_out[-num_states:]
dec_out = dec_out[:-num_states]
else:
new_states = None
return dec_out, new_states
def _get_initial_states(self, batchsize):
# ASSUME: LSTM STATES of shape (layers, batchsize, dim)
input_state_nodes = [ip for ip in self.decoder_joint_inputs if 'state' in ip.name]
num_states = len(input_state_nodes)
if num_states == 0:
return
input_states = []
for state_id in range(num_states):
# Hardcode shape size for LSTM (1 is for num layers in LSTM, which is flattened for export)
ip_shape = [1, batchsize, self.cfg.model_defaults.pred_hidden]
input_states.append(torch.zeros(*ip_shape, device=self.device))
return input_states
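Both exported-model backends unpack the decoder+joint outputs the same way: the trailing `num_states` entries are the new RNN states, the rest are logits. A minimal sketch of that split (names hypothetical):

```python
def split_outputs_and_states(outputs, num_states):
    # The exported decoder+joint returns its logits followed by `num_states`
    # new RNN state tensors; split them apart as run_decoder_joint does above.
    if num_states > 0:
        return list(outputs[:-num_states]), list(outputs[-num_states:])
    return list(outputs), None
```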
class GreedyMultiblankRNNTInfer(GreedyRNNTInfer):
"""A greedy transducer decoder for multi-blank RNN-T.
Sequence level greedy decoding, performed auto-regressively.
Args:
decoder_model: rnnt_utils.AbstractRNNTDecoder implementation.
joint_model: rnnt_utils.AbstractRNNTJoint implementation.
blank_index: int index of the blank token. Must be len(vocabulary) for multi-blank RNNTs.
big_blank_durations: a list containing durations for big blanks the model supports.
max_symbols_per_step: Optional int. The maximum number of symbols that can be added
to a sequence in a single time step; if set to None then there is
no limit.
preserve_alignments: Bool flag which preserves the history of alignments generated during
greedy decoding (sample / batched). When set to true, the Hypothesis will contain
the non-null value for `alignments` in it. Here, `alignments` is a List of List of
Tuple(Tensor (of length V + 1 + num-big-blanks), Tensor(scalar, label after argmax)).
The length of the list corresponds to the Acoustic Length (T).
Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more targets from a vocabulary.
U is the number of target tokens for the current timestep Ti.
preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores generated
during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of List of floats.
The length of the list corresponds to the Acoustic Length (T).
Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more confidence scores.
U is the number of target tokens for the current timestep Ti.
confidence_measure_cfg: A dict-like object which contains the measure name and settings to compute per-frame
confidence scores.
name: The measure name (str).
Supported values:
- 'max_prob' for using the maximum token probability as a confidence.
- 'entropy' for using a normalized entropy of a log-likelihood vector.
entropy_type: Which type of entropy to use (str). Used if confidence_measure_cfg.name is set to `entropy`.
Supported values:
- 'gibbs' for the (standard) Gibbs entropy. If the alpha (α) is provided,
the formula is the following: H_α = -sum_i((p^α_i)*log(p^α_i)).
Note that for this entropy, the alpha should comply with the following inequality:
(log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= α <= (1+log(V-1))/log(V-1)
where V is the model vocabulary size.
- 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
Tsallis entropy formula is the following: H_α = 1/(α-1)*(1-sum_i(p^α_i)),
where α is a parameter. When α == 1, it works like the Gibbs entropy.
More: https://en.wikipedia.org/wiki/Tsallis_entropy
- 'renyi' for the Rényi entropy.
Rényi entropy formula is the following: H_α = 1/(1-α)*log_2(sum_i(p^α_i)),
where α is a parameter. When α == 1, it works like the Gibbs entropy.
More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
alpha: Power scale for logsoftmax (α for entropies). Here we restrict it to be > 0.
When the alpha equals one, scaling is not applied to 'max_prob',
and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
entropy_norm: A mapping of the entropy value to the interval [0,1].
Supported values:
- 'lin' for using the linear mapping.
- 'exp' for using exponential mapping with linear shift.
"""
def __init__(
self,
decoder_model: rnnt_abstract.AbstractRNNTDecoder,
joint_model: rnnt_abstract.AbstractRNNTJoint,
blank_index: int,
big_blank_durations: list,
max_symbols_per_step: Optional[int] = None,
preserve_alignments: bool = False,
preserve_frame_confidence: bool = False,
confidence_measure_cfg: Optional[DictConfig] = None,
):
super().__init__(
decoder_model=decoder_model,
joint_model=joint_model,
blank_index=blank_index,
max_symbols_per_step=max_symbols_per_step,
preserve_alignments=preserve_alignments,
preserve_frame_confidence=preserve_frame_confidence,
confidence_measure_cfg=confidence_measure_cfg,
)
self.big_blank_durations = big_blank_durations
self._SOS = blank_index - len(big_blank_durations)
@torch.no_grad()
def _greedy_decode(
self, x: torch.Tensor, out_len: torch.Tensor, partial_hypotheses: Optional[rnnt_utils.Hypothesis] = None
):
# x: [T, 1, D]
# out_len: [seq_len]
# Initialize blank state and empty label set in Hypothesis
hypothesis = rnnt_utils.Hypothesis(score=0.0, y_sequence=[], dec_state=None, timestep=[], last_token=None)
if partial_hypotheses is not None:
hypothesis.last_token = partial_hypotheses.last_token
hypothesis.y_sequence = (
partial_hypotheses.y_sequence.cpu().tolist()
if isinstance(partial_hypotheses.y_sequence, torch.Tensor)
else partial_hypotheses.y_sequence
)
if partial_hypotheses.dec_state is not None:
hypothesis.dec_state = self.decoder.batch_concat_states([partial_hypotheses.dec_state])
hypothesis.dec_state = _states_to_device(hypothesis.dec_state, x.device)
if self.preserve_alignments:
# Alignments is a 2-dimensional dangling list representing T x U
hypothesis.alignments = [[]]
if self.preserve_frame_confidence:
hypothesis.frame_confidence = [[]]
# if this variable is > 1, it means the last emission was a big-blank and we need to skip frames.
big_blank_duration = 1
# For timestep t in X_t
for time_idx in range(out_len):
if big_blank_duration > 1:
# skip frames until big_blank_duration == 1.
big_blank_duration -= 1
continue
# Extract encoder embedding at timestep t
# f = x[time_idx, :, :].unsqueeze(0) # [1, 1, D]
f = x.narrow(dim=0, start=time_idx, length=1)
# Setup exit flags and counter
not_blank = True
symbols_added = 0
# While blank is not predicted and we don't run out of max symbols per timestep
while not_blank and (self.max_symbols is None or symbols_added < self.max_symbols):
# In the first timestep, we initialize the network with RNNT Blank
# In later timesteps, we provide previous predicted label as input.
if hypothesis.last_token is None and hypothesis.dec_state is None:
last_label = self._SOS
else:
last_label = label_collate([[hypothesis.last_token]])
# Perform prediction network and joint network steps.
g, hidden_prime = self._pred_step(last_label, hypothesis.dec_state)
# If preserving per-frame confidence, log_normalize must be true
logp = self._joint_step(f, g, log_normalize=True if self.preserve_frame_confidence else None)[
0, 0, 0, :
]
del g
# torch.max(0) op doesn't exist for FP16.
if logp.dtype != torch.float32:
logp = logp.float()
# get index k, of max prob
v, k = logp.max(0)
k = k.item() # K is the label at timestep t_s in inner loop, s >= 0.
# Note, we have non-blanks in the vocab first, followed by big blanks, and standard blank at last.
# here we check if it's a big blank and if yes, set the duration variable.
if k >= self._blank_index - len(self.big_blank_durations) and k < self._blank_index:
big_blank_duration = self.big_blank_durations[self._blank_index - k - 1]
if self.preserve_alignments:
# insert logprobs into last timestep
hypothesis.alignments[-1].append((logp.to('cpu'), torch.tensor(k, dtype=torch.int32)))
if self.preserve_frame_confidence:
# insert confidence into last timestep
hypothesis.frame_confidence[-1].append(self._get_confidence(logp))
del logp
# If any type of blank token is predicted, exit inner loop, move onto next timestep t
if k >= self._blank_index - len(self.big_blank_durations):
not_blank = False
else:
# Append token to label set, update RNN state.
hypothesis.y_sequence.append(k)
hypothesis.score += float(v)
hypothesis.timestep.append(time_idx)
hypothesis.dec_state = hidden_prime
hypothesis.last_token = k
# Increment token counter.
symbols_added += 1
if self.preserve_alignments:
# convert Ti-th logits into a torch array
hypothesis.alignments.append([]) # blank buffer for next timestep
if self.preserve_frame_confidence:
hypothesis.frame_confidence.append([]) # blank buffer for next timestep
# Remove trailing empty list of Alignments
if self.preserve_alignments:
if len(hypothesis.alignments[-1]) == 0:
del hypothesis.alignments[-1]
# Remove trailing empty list of per-frame confidence
if self.preserve_frame_confidence:
if len(hypothesis.frame_confidence[-1]) == 0:
del hypothesis.frame_confidence[-1]
# Unpack the hidden states
hypothesis.dec_state = self.decoder.batch_select_state(hypothesis.dec_state, 0)
return hypothesis
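The frame-skipping behaviour of the loop above can be isolated in a small sketch. The helper below is illustrative only (plain Python, not the NeMo API): it takes a sequence of per-frame argmax token ids, assumes the docstring's token layout (vocabulary first, then big blanks, standard blank last, with `big_blank_durations[blank_index - k - 1]` giving the duration of big-blank token `k`), emits at most one symbol per frame for brevity, and skips `duration - 1` frames after each big blank, mirroring the `big_blank_duration` counter in `_greedy_decode`.

```python
# Toy sketch of multi-blank frame skipping (illustrative ids/durations, not the NeMo API).
# Vocab tokens occupy 0..blank_index - num_big_blanks - 1; big blanks occupy
# [blank_index - num_big_blanks, blank_index); the standard blank is blank_index.

def decode_with_frame_skips(per_frame_argmax, blank_index, big_blank_durations):
    labels, skip = [], 1
    for k in per_frame_argmax:
        if skip > 1:  # previous emission was a big blank: skip this frame
            skip -= 1
            continue
        if blank_index - len(big_blank_durations) <= k < blank_index:
            # big blank: set the skip counter from its duration
            skip = big_blank_durations[blank_index - k - 1]
        elif k < blank_index - len(big_blank_durations):
            labels.append(k)  # regular vocabulary token
        # standard blank (k == blank_index): just advance one frame
    return labels
```

With `blank_index=5` and one big blank of duration 2 (token 4), emitting token 4 causes the following frame to be skipped, exactly like the `big_blank_duration > 1: continue` branch above.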
class GreedyBatchedMultiblankRNNTInfer(GreedyBatchedRNNTInfer):
"""A batch level greedy transducer decoder.
Batch level greedy decoding, performed auto-regressively.
Args:
decoder_model: rnnt_utils.AbstractRNNTDecoder implementation.
joint_model: rnnt_utils.AbstractRNNTJoint implementation.
blank_index: int index of the blank token. Must be len(vocabulary) for multi-blank RNNTs.
big_blank_durations: a list containing durations for big blanks the model supports.
max_symbols_per_step: Optional int. The maximum number of symbols that can be added
to a sequence in a single time step; if set to None then there is
no limit.
preserve_alignments: Bool flag which preserves the history of alignments generated during
greedy decoding (sample / batched). When set to true, the Hypothesis will contain
the non-null value for `alignments` in it. Here, `alignments` is a List of List of
Tuple(Tensor (of length V + 1 + num-big-blanks), Tensor(scalar, label after argmax)).
The length of the list corresponds to the Acoustic Length (T).
Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more targets from a vocabulary.
U is the number of target tokens for the current timestep Ti.
preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores generated
during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of List of floats.
The length of the list corresponds to the Acoustic Length (T).
Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more confidence scores.
U is the number of target tokens for the current timestep Ti.
confidence_measure_cfg: A dict-like object which contains the measure name and settings to compute per-frame
confidence scores.
name: The measure name (str).
Supported values:
- 'max_prob' for using the maximum token probability as a confidence.
- 'entropy' for using a normalized entropy of a log-likelihood vector.
entropy_type: Which type of entropy to use (str). Used if confidence_measure_cfg.name is set to `entropy`.
Supported values:
- 'gibbs' for the (standard) Gibbs entropy. If the alpha (α) is provided,
the formula is the following: H_α = -sum_i((p^α_i)*log(p^α_i)).
Note that for this entropy, the alpha should comply with the following inequality:
(log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= α <= (1+log(V-1))/log(V-1)
where V is the model vocabulary size.
- 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
Tsallis entropy formula is the following: H_α = 1/(α-1)*(1-sum_i(p^α_i)),
where α is a parameter. When α == 1, it works like the Gibbs entropy.
More: https://en.wikipedia.org/wiki/Tsallis_entropy
- 'renyi' for the Rényi entropy.
Rényi entropy formula is the following: H_α = 1/(1-α)*log_2(sum_i(p^α_i)),
where α is a parameter. When α == 1, it works like the Gibbs entropy.
More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
alpha: Power scale for logsoftmax (α for entropies). Here we restrict it to be > 0.
When the alpha equals one, scaling is not applied to 'max_prob',
and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
entropy_norm: A mapping of the entropy value to the interval [0,1].
Supported values:
- 'lin' for using the linear mapping.
- 'exp' for using exponential mapping with linear shift.
"""
def __init__(
self,
decoder_model: rnnt_abstract.AbstractRNNTDecoder,
joint_model: rnnt_abstract.AbstractRNNTJoint,
blank_index: int,
big_blank_durations: List[int],
max_symbols_per_step: Optional[int] = None,
preserve_alignments: bool = False,
preserve_frame_confidence: bool = False,
confidence_measure_cfg: Optional[DictConfig] = None,
):
super().__init__(
decoder_model=decoder_model,
joint_model=joint_model,
blank_index=blank_index,
max_symbols_per_step=max_symbols_per_step,
preserve_alignments=preserve_alignments,
preserve_frame_confidence=preserve_frame_confidence,
confidence_measure_cfg=confidence_measure_cfg,
)
self.big_blank_durations = big_blank_durations
# Depending on availability of `blank_as_pad` support
# switch between more efficient batch decoding technique
if self.decoder.blank_as_pad:
self._greedy_decode = self._greedy_decode_blank_as_pad
else:
self._greedy_decode = self._greedy_decode_masked
self._SOS = blank_index - len(big_blank_durations)
def _greedy_decode_blank_as_pad(
self,
x: torch.Tensor,
out_len: torch.Tensor,
device: torch.device,
partial_hypotheses: Optional[List[rnnt_utils.Hypothesis]] = None,
):
if partial_hypotheses is not None:
raise NotImplementedError("`partial_hypotheses` support is not implemented")
with torch.inference_mode():
# x: [B, T, D]
# out_len: [B]
# device: torch.device
# Initialize list of Hypothesis
batchsize = x.shape[0]
hypotheses = [
rnnt_utils.Hypothesis(score=0.0, y_sequence=[], timestep=[], dec_state=None) for _ in range(batchsize)
]
# Initialize Hidden state matrix (shared by entire batch)
hidden = None
# If alignments need to be preserved, register a dangling list to hold the values
if self.preserve_alignments:
# alignments is a 3-dimensional dangling list representing B x T x U
for hyp in hypotheses:
hyp.alignments = [[]]
# If confidence scores need to be preserved, register a dangling list to hold the values
if self.preserve_frame_confidence:
# frame_confidence is a 3-dimensional dangling list representing B x T x U
for hyp in hypotheses:
hyp.frame_confidence = [[]]
# Last Label buffer + Last Label without blank buffer
# batch level equivalent of the last_label
last_label = torch.full([batchsize, 1], fill_value=self._SOS, dtype=torch.long, device=device)
# this mask is true if the emission is *any type* of blank.
blank_mask = torch.full([batchsize], fill_value=0, dtype=torch.bool, device=device)
# Get max sequence length
max_out_len = out_len.max()
# We have a mask for each big blank. A mask being "true" means: the previous emission was exactly the
# big blank with the corresponding duration, or one with a larger duration. E.g., the big_blank_mask for
# duration 2 will be set true if the previous emission was a big blank with duration 4, 3 or 2; but false
# if the previous emission was a standard blank (with duration = 1).
big_blank_masks = [torch.full([batchsize], fill_value=0, dtype=torch.bool, device=device)] * len(
self.big_blank_durations
)
# if this variable > 1, it means the previous emission was big-blank and we need to skip frames.
big_blank_duration = 1
for time_idx in range(max_out_len):
if big_blank_duration > 1:
# skip frames until big_blank_duration == 1
big_blank_duration -= 1
continue
f = x.narrow(dim=1, start=time_idx, length=1) # [B, 1, D]
# Prepare t timestamp batch variables
not_blank = True
symbols_added = 0
# Reset all blank masks
blank_mask.mul_(False)
for i in range(len(big_blank_masks)):
big_blank_masks[i].mul_(False)
# Update blank mask with time mask
# Batch: [B, T, D], but Bi may have seq len < max(seq_lens_in_batch)
# Forcibly mask with "blank" tokens, for all samples where current time step T > seq_len
blank_mask = time_idx >= out_len
for i in range(len(big_blank_masks)):
big_blank_masks[i] = time_idx >= out_len
# Start inner loop
while not_blank and (self.max_symbols is None or symbols_added < self.max_symbols):
# Batch prediction and joint network steps
# If very first prediction step, submit SOS tag (blank) to pred_step.
# This feeds a zero tensor as input to AbstractRNNTDecoder to prime the state
if time_idx == 0 and symbols_added == 0 and hidden is None:
g, hidden_prime = self._pred_step(self._SOS, hidden, batch_size=batchsize)
else:
# Perform batch step prediction of decoder, getting new states and scores ("g")
g, hidden_prime = self._pred_step(last_label, hidden, batch_size=batchsize)
# Batched joint step - Output = [B, V + 1 + num-big-blanks]
# If preserving per-frame confidence, log_normalize must be true
logp = self._joint_step(f, g, log_normalize=True if self.preserve_frame_confidence else None)[
:, 0, 0, :
]
if logp.dtype != torch.float32:
logp = logp.float()
# Get index k, of max prob for batch
v, k = logp.max(1)
del g
# Update blank mask with current predicted blanks
# This is accumulating blanks over all time steps T and all target steps min(max_symbols, U)
k_is_blank = k >= self._blank_index - len(self.big_blank_durations)
blank_mask.bitwise_or_(k_is_blank)
for i in range(len(big_blank_masks)):
# using <= since as we mentioned before, the mask doesn't store exact matches.
# instead, it is True when the predicted blank's duration is >= the duration that the
# mask corresponds to.
k_is_big_blank = k <= self._blank_index - 1 - i
# need to do a bitwise_and since it could also be a non-blank.
k_is_big_blank.bitwise_and_(k_is_blank)
big_blank_masks[i].bitwise_or_(k_is_big_blank)
del k_is_blank
# If preserving alignments, check if sequence length of sample has been reached
# before adding alignment
if self.preserve_alignments:
# Insert logprobs into last timestep per sample
logp_vals = logp.to('cpu')
logp_ids = logp_vals.max(1)[1]
for batch_idx in range(batchsize):
if time_idx < out_len[batch_idx]:
hypotheses[batch_idx].alignments[-1].append(
(logp_vals[batch_idx], logp_ids[batch_idx])
)
del logp_vals
# If preserving per-frame confidence, check if sequence length of sample has been reached
# before adding confidence scores
if self.preserve_frame_confidence:
# Insert probabilities into last timestep per sample
confidence = self._get_confidence(logp)
for batch_idx in range(batchsize):
if time_idx < out_len[batch_idx]:
hypotheses[batch_idx].frame_confidence[-1].append(confidence[batch_idx])
del logp
# If all samples predict / have predicted prior blanks, exit loop early
# This is the batch-level equivalent of a single sample predicting blank
if blank_mask.all():
not_blank = False
else:
# Collect batch indices where blanks occurred now/past
blank_indices = (blank_mask == 1).nonzero(as_tuple=False)
# Recover prior state for all samples which predicted blank now/past
if hidden is not None:
# LSTM has 2 states
hidden_prime = self.decoder.batch_copy_states(hidden_prime, hidden, blank_indices)
elif len(blank_indices) > 0 and hidden is None:
# Reset state if there were some blank and other non-blank predictions in batch
# Original state is filled with zeros so we just multiply
# LSTM has 2 states
hidden_prime = self.decoder.batch_copy_states(hidden_prime, None, blank_indices, value=0.0)
# Recover prior predicted label for all samples which predicted blank now/past
k[blank_indices] = last_label[blank_indices, 0]
# Update new label and hidden state for next iteration
last_label = k.clone().view(-1, 1)
hidden = hidden_prime
# Update predicted labels, accounting for time mask
# If blank was predicted even once, now or in the past,
# Force the current predicted label to also be blank
# This ensures that blanks propagate across all timesteps
# once they have occurred (normally the stopping condition of the sample level loop).
for kidx, ki in enumerate(k):
if blank_mask[kidx] == 0:
hypotheses[kidx].y_sequence.append(ki)
hypotheses[kidx].timestep.append(time_idx)
hypotheses[kidx].score += float(v[kidx])
symbols_added += 1
for i in range(len(big_blank_masks) + 1):
# The task here is to find the shortest blank duration across the batch,
# so we start from the shortest blank duration and go up,
# and stop once we find a duration whose corresponding mask isn't all True.
if i == len(big_blank_masks) or not big_blank_masks[i].all():
big_blank_duration = self.big_blank_durations[i - 1] if i > 0 else 1
break
# If preserving alignments, convert the current Uj alignments into a torch.Tensor
# Then preserve U at current timestep Ti
# Finally, forward the timestep history to Ti+1 for that sample
# All of this should only be done iff the current time index <= sample-level AM length.
# Otherwise ignore and move to next sample / next timestep.
if self.preserve_alignments:
# convert Ti-th logits into a torch array
for batch_idx in range(batchsize):
# this checks if current timestep <= sample-level AM length
# If current timestep > sample-level AM length, no alignments will be added
# Therefore the list of Uj alignments is empty here.
if len(hypotheses[batch_idx].alignments[-1]) > 0:
hypotheses[batch_idx].alignments.append([]) # blank buffer for next timestep
# Do the same if preserving per-frame confidence
if self.preserve_frame_confidence:
for batch_idx in range(batchsize):
if len(hypotheses[batch_idx].frame_confidence[-1]) > 0:
hypotheses[batch_idx].frame_confidence.append([]) # blank buffer for next timestep
# Remove trailing empty list of alignments at T_{am-len} x Uj
if self.preserve_alignments:
for batch_idx in range(batchsize):
if len(hypotheses[batch_idx].alignments[-1]) == 0:
del hypotheses[batch_idx].alignments[-1]
# Remove trailing empty list of confidence scores at T_{am-len} x Uj
if self.preserve_frame_confidence:
for batch_idx in range(batchsize):
if len(hypotheses[batch_idx].frame_confidence[-1]) == 0:
del hypotheses[batch_idx].frame_confidence[-1]
# Preserve states
for batch_idx in range(batchsize):
hypotheses[batch_idx].dec_state = self.decoder.batch_select_state(hidden, batch_idx)
return hypotheses
def _greedy_decode_masked(
self,
x: torch.Tensor,
out_len: torch.Tensor,
device: torch.device,
partial_hypotheses: Optional[List[rnnt_utils.Hypothesis]] = None,
):
if partial_hypotheses is not None:
raise NotImplementedError("`partial_hypotheses` support is not implemented")
if self.big_blank_durations != [1] * len(self.big_blank_durations):
raise NotImplementedError(
"Efficient frame-skipping version for multi-blank masked decoding is not supported."
)
# x: [B, T, D]
# out_len: [B]
# device: torch.device
# Initialize state
batchsize = x.shape[0]
hypotheses = [
rnnt_utils.Hypothesis(score=0.0, y_sequence=[], timestep=[], dec_state=None) for _ in range(batchsize)
]
# Initialize Hidden state matrix (shared by entire batch)
hidden = None
# If alignments need to be preserved, register a dangling list to hold the values
if self.preserve_alignments:
# alignments is a 3-dimensional dangling list representing B x T x U
for hyp in hypotheses:
hyp.alignments = [[]]
# If confidence scores need to be preserved, register a dangling list to hold the values
if self.preserve_frame_confidence:
# frame_confidence is a 3-dimensional dangling list representing B x T x U
for hyp in hypotheses:
hyp.frame_confidence = [[]]
# Last Label buffer + Last Label without blank buffer
# batch level equivalent of the last_label
last_label = torch.full([batchsize, 1], fill_value=self._blank_index, dtype=torch.long, device=device)
last_label_without_blank = last_label.clone()
# Mask buffers
blank_mask = torch.full([batchsize], fill_value=0, dtype=torch.bool, device=device)
# Get max sequence length
max_out_len = out_len.max()
with torch.inference_mode():
for time_idx in range(max_out_len):
f = x.narrow(dim=1, start=time_idx, length=1) # [B, 1, D]
# Prepare t timestamp batch variables
not_blank = True
symbols_added = 0
# Reset blank mask
blank_mask.mul_(False)
# Update blank mask with time mask
# Batch: [B, T, D], but Bi may have seq len < max(seq_lens_in_batch)
# Forcibly mask with "blank" tokens, for all samples where current time step T > seq_len
blank_mask = time_idx >= out_len
# Start inner loop
while not_blank and (self.max_symbols is None or symbols_added < self.max_symbols):
# Batch prediction and joint network steps
# If very first prediction step, submit SOS tag (blank) to pred_step.
# This feeds a zero tensor as input to AbstractRNNTDecoder to prime the state
if time_idx == 0 and symbols_added == 0 and hidden is None:
g, hidden_prime = self._pred_step(self._SOS, hidden, batch_size=batchsize)
else:
# Set a dummy label for the blank value
# This value will be overwritten by "blank" again in the last label update below
# This is done as the vocabulary of the prediction network does not contain the "blank" token of RNNT
last_label_without_blank_mask = last_label >= self._blank_index
last_label_without_blank[last_label_without_blank_mask] = 0 # temp change of label
last_label_without_blank[~last_label_without_blank_mask] = last_label[
~last_label_without_blank_mask
]
# Perform batch step prediction of decoder, getting new states and scores ("g")
g, hidden_prime = self._pred_step(last_label_without_blank, hidden, batch_size=batchsize)
# Batched joint step - Output = [B, V + 1 + num-big-blanks]
# If preserving per-frame confidence, log_normalize must be true
logp = self._joint_step(f, g, log_normalize=True if self.preserve_frame_confidence else None)[
:, 0, 0, :
]
if logp.dtype != torch.float32:
logp = logp.float()
# Get index k, of max prob for batch
v, k = logp.max(1)
del g
# Update blank mask with current predicted blanks
# This is accumulating blanks over all time steps T and all target steps min(max_symbols, U)
k_is_blank = k == self._blank_index
blank_mask.bitwise_or_(k_is_blank)
# If preserving alignments, check if sequence length of sample has been reached
# before adding alignment
if self.preserve_alignments:
# Insert logprobs into last timestep per sample
logp_vals = logp.to('cpu')
logp_ids = logp_vals.max(1)[1]
for batch_idx in range(batchsize):
if time_idx < out_len[batch_idx]:
hypotheses[batch_idx].alignments[-1].append(
(logp_vals[batch_idx], logp_ids[batch_idx])
)
del logp_vals
# If preserving per-frame confidence, check if sequence length of sample has been reached
# before adding confidence scores
if self.preserve_frame_confidence:
# Insert probabilities into last timestep per sample
confidence = self._get_confidence(logp)
for batch_idx in range(batchsize):
if time_idx < out_len[batch_idx]:
hypotheses[batch_idx].frame_confidence[-1].append(confidence[batch_idx])
del logp
# If all samples predict / have predicted prior blanks, exit loop early
# This is the batch-level equivalent of a single sample predicting blank
if blank_mask.all():
not_blank = False
else:
# Collect batch indices where blanks occurred now/past
blank_indices = (blank_mask == 1).nonzero(as_tuple=False)
# Recover prior state for all samples which predicted blank now/past
if hidden is not None:
# LSTM has 2 states
hidden_prime = self.decoder.batch_copy_states(hidden_prime, hidden, blank_indices)
elif len(blank_indices) > 0 and hidden is None:
# Reset state if there were some blank and other non-blank predictions in batch
# Original state is filled with zeros so we just multiply
# LSTM has 2 states
hidden_prime = self.decoder.batch_copy_states(hidden_prime, None, blank_indices, value=0.0)
# Recover prior predicted label for all samples which predicted blank now/past
k[blank_indices] = last_label[blank_indices, 0]
# Update new label and hidden state for next iteration
last_label = k.view(-1, 1)
hidden = hidden_prime
# Update predicted labels, accounting for time mask
# If blank was predicted even once, now or in the past,
# Force the current predicted label to also be blank
# This ensures that blanks propagate across all timesteps
# once they have occurred (normally the stopping condition of the sample level loop).
for kidx, ki in enumerate(k):
if blank_mask[kidx] == 0:
hypotheses[kidx].y_sequence.append(ki)
hypotheses[kidx].timestep.append(time_idx)
hypotheses[kidx].score += float(v[kidx])
symbols_added += 1
# If preserving alignments, convert the current Uj alignments into a torch.Tensor
# Then preserve U at current timestep Ti
# Finally, forward the timestep history to Ti+1 for that sample
# All of this should only be done iff the current time index <= sample-level AM length.
# Otherwise ignore and move to next sample / next timestep.
if self.preserve_alignments:
# convert Ti-th logits into a torch array
for batch_idx in range(batchsize):
# this checks if current timestep <= sample-level AM length
# If current timestep > sample-level AM length, no alignments will be added
# Therefore the list of Uj alignments is empty here.
if len(hypotheses[batch_idx].alignments[-1]) > 0:
hypotheses[batch_idx].alignments.append([]) # blank buffer for next timestep
# Do the same if preserving per-frame confidence
if self.preserve_frame_confidence:
for batch_idx in range(batchsize):
if len(hypotheses[batch_idx].frame_confidence[-1]) > 0:
hypotheses[batch_idx].frame_confidence.append([]) # blank buffer for next timestep
# Remove trailing empty list of alignments at T_{am-len} x Uj
if self.preserve_alignments:
for batch_idx in range(batchsize):
if len(hypotheses[batch_idx].alignments[-1]) == 0:
del hypotheses[batch_idx].alignments[-1]
# Remove trailing empty list of confidence scores at T_{am-len} x Uj
if self.preserve_frame_confidence:
for batch_idx in range(batchsize):
if len(hypotheses[batch_idx].frame_confidence[-1]) == 0:
del hypotheses[batch_idx].frame_confidence[-1]
# Preserve states
for batch_idx in range(batchsize):
hypotheses[batch_idx].dec_state = self.decoder.batch_select_state(hidden, batch_idx)
return hypotheses
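The entropy-based confidence described in the docstrings can be sketched numerically. The helper below is a minimal illustration, not NeMo's actual `_get_confidence`: it computes the Gibbs/Shannon entropy of a log-probability vector (i.e. assuming alpha = 1) and maps it to [0, 1] with the linear ('lin') normalization, so a uniform distribution yields confidence 0 and a near-one-hot distribution yields confidence close to 1.

```python
import math

def gibbs_confidence_lin(log_probs):
    """Toy per-frame confidence: Gibbs/Shannon entropy of a log-prob vector,
    linearly normalized to [0, 1]. Assumes alpha = 1 and entropy_norm = 'lin';
    an illustrative sketch, not the NeMo implementation."""
    probs = [math.exp(lp) for lp in log_probs]
    entropy = -sum(p * math.log(p) for p in probs if p > 0.0)
    max_entropy = math.log(len(log_probs))  # entropy of the uniform distribution
    return 1.0 - entropy / max_entropy
```

For a uniform 4-way distribution the entropy equals log(4), so the confidence is 0; as the distribution sharpens toward one token, the confidence approaches 1.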
@dataclass
class GreedyRNNTInferConfig:
max_symbols_per_step: Optional[int] = 10
preserve_alignments: bool = False
preserve_frame_confidence: bool = False
confidence_measure_cfg: Optional[ConfidenceMeasureConfig] = ConfidenceMeasureConfig()
confidence_method_cfg: str = "DEPRECATED"
def __post_init__(self):
# OmegaConf.structured ensures that post_init check is always executed
self.confidence_measure_cfg = OmegaConf.structured(
self.confidence_measure_cfg
if isinstance(self.confidence_measure_cfg, ConfidenceMeasureConfig)
else ConfidenceMeasureConfig(**self.confidence_measure_cfg)
)
if self.confidence_method_cfg != "DEPRECATED":
logging.warning(
"`confidence_method_cfg` is deprecated and will be removed in the future. "
"Please use `confidence_measure_cfg` instead."
)
# TODO (alaptev): delete the following two lines sometime in the future
logging.warning("Re-writing `confidence_measure_cfg` with the value of `confidence_method_cfg`.")
# OmegaConf.structured ensures that post_init check is always executed
self.confidence_measure_cfg = OmegaConf.structured(
self.confidence_method_cfg
if isinstance(self.confidence_method_cfg, ConfidenceMeasureConfig)
else ConfidenceMeasureConfig(**self.confidence_method_cfg)
)
self.confidence_method_cfg = "DEPRECATED"
@dataclass
class GreedyBatchedRNNTInferConfig:
max_symbols_per_step: Optional[int] = 10
preserve_alignments: bool = False
preserve_frame_confidence: bool = False
confidence_measure_cfg: Optional[ConfidenceMeasureConfig] = ConfidenceMeasureConfig()
confidence_method_cfg: str = "DEPRECATED"
def __post_init__(self):
# OmegaConf.structured ensures that post_init check is always executed
self.confidence_measure_cfg = OmegaConf.structured(
self.confidence_measure_cfg
if isinstance(self.confidence_measure_cfg, ConfidenceMeasureConfig)
else ConfidenceMeasureConfig(**self.confidence_measure_cfg)
)
if self.confidence_method_cfg != "DEPRECATED":
logging.warning(
"`confidence_method_cfg` is deprecated and will be removed in the future. "
"Please use `confidence_measure_cfg` instead."
)
# TODO (alaptev): delete the following two lines sometime in the future
logging.warning("Re-writing `confidence_measure_cfg` with the value of `confidence_method_cfg`.")
# OmegaConf.structured ensures that post_init check is always executed
self.confidence_measure_cfg = OmegaConf.structured(
self.confidence_method_cfg
if isinstance(self.confidence_method_cfg, ConfidenceMeasureConfig)
else ConfidenceMeasureConfig(**self.confidence_method_cfg)
)
self.confidence_method_cfg = "DEPRECATED"
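The `__post_init__` deprecation shim shared by both config dataclasses above follows a standard pattern: detect that the legacy field was set, warn, copy its value into the new field, and reset the legacy field to its sentinel. A stripped-down standalone version (plain `dict` instead of OmegaConf structured configs, and hypothetical field names `measure_cfg`/`method_cfg`):

```python
import warnings
from dataclasses import dataclass, field

@dataclass
class ToyInferConfig:
    # hypothetical stand-ins for confidence_measure_cfg / confidence_method_cfg
    measure_cfg: dict = field(default_factory=lambda: {"name": "max_prob"})
    method_cfg: object = "DEPRECATED"  # legacy alias kept for backward compatibility

    def __post_init__(self):
        if self.method_cfg != "DEPRECATED":
            warnings.warn(
                "`method_cfg` is deprecated; use `measure_cfg` instead.",
                DeprecationWarning,
            )
            # forward the legacy value into the new field, then reset the sentinel
            self.measure_cfg = dict(self.method_cfg)
            self.method_cfg = "DEPRECATED"
```

Using the sentinel string rather than `None` lets the shim distinguish "caller never touched the legacy field" from "caller explicitly passed a value", which is why the originals compare against `"DEPRECATED"`.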
class GreedyTDTInfer(_GreedyRNNTInfer):
"""A greedy TDT decoder.
Sequence level greedy decoding, performed auto-regressively.
Args:
decoder_model: rnnt_utils.AbstractRNNTDecoder implementation.
joint_model: rnnt_utils.AbstractRNNTJoint implementation.
blank_index: int index of the blank token. Must be len(vocabulary) for TDT models.
durations: a list containing durations for TDT.
max_symbols_per_step: Optional int. The maximum number of symbols that can be added
to a sequence in a single time step; if set to None then there is
no limit.
preserve_alignments: Bool flag which preserves the history of alignments generated during
greedy decoding (sample / batched). When set to true, the Hypothesis will contain
the non-null value for `alignments` in it. Here, `alignments` is a List of List of
Tuple(Tensor (of length V + 1 + num-big-blanks), Tensor(scalar, label after argmax)).
The length of the list corresponds to the Acoustic Length (T).
Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more targets from a vocabulary.
U is the number of target tokens for the current timestep Ti.
preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores generated
during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of List of floats.
The length of the list corresponds to the Acoustic Length (T).
Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more confidence scores.
U is the number of target tokens for the current timestep Ti.
confidence_measure_cfg: A dict-like object which contains the measure name and settings to compute per-frame
confidence scores.
name: The measure name (str).
Supported values:
- 'max_prob' for using the maximum token probability as a confidence.
- 'entropy' for using a normalized entropy of a log-likelihood vector.
entropy_type: Which type of entropy to use (str). Used if confidence_measure_cfg.name is set to `entropy`.
Supported values:
- 'gibbs' for the (standard) Gibbs entropy. If the alpha (ฮฑ) is provided,
the formula is the following: H_ฮฑ = -sum_i((p^ฮฑ_i)*log(p^ฮฑ_i)).
Note that for this entropy, the alpha should comply the following inequality:
(log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= ฮฑ <= (1+log(V-1))/log(V-1)
where V is the model vocabulary size.
- 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
Tsallis entropy formula is the following: H_ฮฑ = 1/(ฮฑ-1)*(1-sum_i(p^ฮฑ_i)),
where ฮฑ is a parameter. When ฮฑ == 1, it works like the Gibbs entropy.
More: https://en.wikipedia.org/wiki/Tsallis_entropy
- 'renyi' for the Rรฉnyi entropy.
Rรฉnyi entropy formula is the following: H_ฮฑ = 1/(1-ฮฑ)*log_2(sum_i(p^ฮฑ_i)),
where ฮฑ is a parameter. When ฮฑ == 1, it works like the Gibbs entropy.
More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
alpha: Power scale for logsoftmax (ฮฑ for entropies). Here we restrict it to be > 0.
When the alpha equals one, scaling is not applied to 'max_prob',
and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
entropy_norm: A mapping of the entropy value to the interval [0,1].
Supported values:
- 'lin' for using the linear mapping.
- 'exp' for using exponential mapping with linear shift.
"""
def __init__(
self,
decoder_model: rnnt_abstract.AbstractRNNTDecoder,
joint_model: rnnt_abstract.AbstractRNNTJoint,
blank_index: int,
durations: list,
max_symbols_per_step: Optional[int] = None,
preserve_alignments: bool = False,
preserve_frame_confidence: bool = False,
confidence_measure_cfg: Optional[DictConfig] = None,
):
super().__init__(
decoder_model=decoder_model,
joint_model=joint_model,
blank_index=blank_index,
max_symbols_per_step=max_symbols_per_step,
preserve_alignments=preserve_alignments,
preserve_frame_confidence=preserve_frame_confidence,
confidence_measure_cfg=confidence_measure_cfg,
)
self.durations = durations
@typecheck()
def forward(
self,
encoder_output: torch.Tensor,
encoded_lengths: torch.Tensor,
partial_hypotheses: Optional[List[rnnt_utils.Hypothesis]] = None,
):
"""Returns a list of hypotheses given an input batch of the encoder hidden embedding.
Output token is generated auto-regressively.
Args:
encoder_output: A tensor of size (batch, features, timesteps).
encoded_lengths: list of int representing the length of each output sequence.
Returns:
packed list containing batch number of sentences (Hypotheses).
"""
# Preserve decoder and joint training state
decoder_training_state = self.decoder.training
joint_training_state = self.joint.training
with torch.inference_mode():
# Apply optional preprocessing
encoder_output = encoder_output.transpose(1, 2) # (B, T, D)
self.decoder.eval()
self.joint.eval()
hypotheses = []
# Process each sequence independently
with self.decoder.as_frozen(), self.joint.as_frozen():
for batch_idx in range(encoder_output.size(0)):
inseq = encoder_output[batch_idx, :, :].unsqueeze(1) # [T, 1, D]
logitlen = encoded_lengths[batch_idx]
partial_hypothesis = partial_hypotheses[batch_idx] if partial_hypotheses is not None else None
hypothesis = self._greedy_decode(inseq, logitlen, partial_hypotheses=partial_hypothesis)
hypotheses.append(hypothesis)
# Pack results into Hypotheses
packed_result = pack_hypotheses(hypotheses, encoded_lengths)
self.decoder.train(decoder_training_state)
self.joint.train(joint_training_state)
return (packed_result,)
@torch.no_grad()
def _greedy_decode(
self, x: torch.Tensor, out_len: torch.Tensor, partial_hypotheses: Optional[rnnt_utils.Hypothesis] = None
):
# x: [T, 1, D]
# out_len: [seq_len]
# Initialize blank state and empty label set in Hypothesis
hypothesis = rnnt_utils.Hypothesis(score=0.0, y_sequence=[], dec_state=None, timestep=[], last_token=None)
if partial_hypotheses is not None:
hypothesis.last_token = partial_hypotheses.last_token
hypothesis.y_sequence = (
partial_hypotheses.y_sequence.cpu().tolist()
if isinstance(partial_hypotheses.y_sequence, torch.Tensor)
else partial_hypotheses.y_sequence
)
if partial_hypotheses.dec_state is not None:
hypothesis.dec_state = self.decoder.batch_concat_states([partial_hypotheses.dec_state])
hypothesis.dec_state = _states_to_device(hypothesis.dec_state, x.device)
if self.preserve_alignments:
# Alignments is a 2-dimensional dangling list representing T x U
hypothesis.alignments = [[]]
if self.preserve_frame_confidence:
hypothesis.frame_confidence = [[]]
time_idx = 0
while time_idx < out_len:
# Extract encoder embedding at timestep t
# f = x[time_idx, :, :].unsqueeze(0) # [1, 1, D]
f = x.narrow(dim=0, start=time_idx, length=1)
# Setup exit flags and counter
not_blank = True
symbols_added = 0
need_loop = True
# While blank is not predicted and we don't run out of max symbols per timestep
while need_loop and (self.max_symbols is None or symbols_added < self.max_symbols):
# In the first timestep, we initialize the network with RNNT Blank
# In later timesteps, we provide previous predicted label as input.
if hypothesis.last_token is None and hypothesis.dec_state is None:
last_label = self._SOS
else:
last_label = label_collate([[hypothesis.last_token]])
# Perform prediction network and joint network steps.
g, hidden_prime = self._pred_step(last_label, hypothesis.dec_state)
# Perform the joint step without normalization; token and duration logits are normalized separately below
logits = self._joint_step(f, g, log_normalize=False)
logp = logits[0, 0, 0, : -len(self.durations)]
if self.preserve_frame_confidence:
logp = torch.log_softmax(logp, -1)
duration_logp = torch.log_softmax(logits[0, 0, 0, -len(self.durations) :], dim=-1)
del g
# torch.max(0) op doesn't exist for FP16.
if logp.dtype != torch.float32:
logp = logp.float()
# get index k, of max prob
v, k = logp.max(0)
k = k.item() # K is the label at timestep t_s in inner loop, s >= 0.
d_v, d_k = duration_logp.max(0)
d_k = d_k.item()
skip = self.durations[d_k]
if self.preserve_alignments:
# insert logprobs into last timestep
hypothesis.alignments[-1].append((logp.to('cpu'), torch.tensor(k, dtype=torch.int32)))
if self.preserve_frame_confidence:
# insert confidence into last timestep
hypothesis.frame_confidence[-1].append(self._get_confidence(logp))
del logp
# If blank token is predicted, exit inner loop, move onto next timestep t
if k == self._blank_index:
not_blank = False
else:
# Append token to label set, update RNN state.
hypothesis.y_sequence.append(k)
hypothesis.score += float(v)
hypothesis.timestep.append(time_idx)
hypothesis.dec_state = hidden_prime
hypothesis.last_token = k
# Increment token counter.
symbols_added += 1
time_idx += skip
need_loop = skip == 0
# this rarely happens, but we manually increment the `skip` number
# if blank is emitted and duration=0 is predicted. This prevents possible
# infinite loops.
if skip == 0:
skip = 1
if self.preserve_alignments:
# convert Ti-th logits into a torch array
hypothesis.alignments.append([]) # blank buffer for next timestep
if self.preserve_frame_confidence:
hypothesis.frame_confidence.append([]) # blank buffer for next timestep
if symbols_added == self.max_symbols:
time_idx += 1
# Remove trailing empty list of Alignments
if self.preserve_alignments:
if len(hypothesis.alignments[-1]) == 0:
del hypothesis.alignments[-1]
# Remove trailing empty list of per-frame confidence
if self.preserve_frame_confidence:
if len(hypothesis.frame_confidence[-1]) == 0:
del hypothesis.frame_confidence[-1]
# Unpack the hidden states
hypothesis.dec_state = self.decoder.batch_select_state(hypothesis.dec_state, 0)
return hypothesis
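The per-sample loop above advances `time_idx` by the predicted duration, staying on the same frame while duration 0 is predicted, and forcing a minimum advance of 1 after a blank to avoid an infinite loop. A minimal, torch-free sketch of that time-advance rule (the `tdt_advance` helper and the `preds` sequence are hypothetical illustrations, not part of NeMo):

```python
def tdt_advance(preds, durations, max_frames):
    """Toy model of the TDT time-advance rule.

    preds: list of (is_blank, duration_index) pairs consumed in order.
    Returns the frame indices the decoder visited.
    """
    visited = []
    t, i = 0, 0
    while t < max_frames and i < len(preds):
        visited.append(t)
        is_blank, d = preds[i]
        i += 1
        skip = durations[d]
        # a blank with duration 0 would loop forever on one frame; force skip = 1
        if is_blank and skip == 0:
            skip = 1
        t += skip
    return visited

# durations as in a typical TDT config
frames = tdt_advance(
    preds=[(False, 1), (False, 2), (True, 0), (False, 3)],
    durations=[0, 1, 2, 4],
    max_frames=10,
)
```

Note how the blank at the third step still advances by one frame even though duration 0 was predicted, mirroring the `skip == 0` guard in `_greedy_decode`.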
class GreedyBatchedTDTInfer(_GreedyRNNTInfer):
"""A batch level greedy TDT decoder.
Batch level greedy decoding, performed auto-regressively.
Args:
decoder_model: rnnt_utils.AbstractRNNTDecoder implementation.
joint_model: rnnt_utils.AbstractRNNTJoint implementation.
blank_index: int index of the blank token. Must be len(vocabulary) for TDT models.
durations: a list containing durations.
max_symbols_per_step: Optional int. The maximum number of symbols that can be added
to a sequence in a single time step; if set to None then there is
no limit.
preserve_alignments: Bool flag which preserves the history of alignments generated during
greedy decoding (sample / batched). When set to true, the Hypothesis will contain
the non-null value for `alignments` in it. Here, `alignments` is a List of List of
Tuple(Tensor (of length V + 1 + num-big-blanks), Tensor(scalar, label after argmax)).
The length of the list corresponds to the Acoustic Length (T).
Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more targets from a vocabulary.
U is the number of target tokens for the current timestep Ti.
preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores generated
during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of List of floats.
The length of the list corresponds to the Acoustic Length (T).
Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more confidence scores.
U is the number of target tokens for the current timestep Ti.
confidence_measure_cfg: A dict-like object which contains the measure name and settings to compute per-frame
confidence scores.
name: The measure name (str).
Supported values:
- 'max_prob' for using the maximum token probability as a confidence.
- 'entropy' for using a normalized entropy of a log-likelihood vector.
entropy_type: Which type of entropy to use (str). Used if confidence_measure_cfg.name is set to `entropy`.
Supported values:
- 'gibbs' for the (standard) Gibbs entropy. If the alpha (ฮฑ) is provided,
the formula is the following: H_ฮฑ = -sum_i((p^ฮฑ_i)*log(p^ฮฑ_i)).
Note that for this entropy, the alpha should comply with the following inequality:
(log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= ฮฑ <= (1+log(V-1))/log(V-1)
where V is the model vocabulary size.
- 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
Tsallis entropy formula is the following: H_ฮฑ = 1/(ฮฑ-1)*(1-sum_i(p^ฮฑ_i)),
where ฮฑ is a parameter. When ฮฑ == 1, it works like the Gibbs entropy.
More: https://en.wikipedia.org/wiki/Tsallis_entropy
- 'renyi' for the Rรฉnyi entropy.
Rรฉnyi entropy formula is the following: H_ฮฑ = 1/(1-ฮฑ)*log_2(sum_i(p^ฮฑ_i)),
where ฮฑ is a parameter. When ฮฑ == 1, it works like the Gibbs entropy.
More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
alpha: Power scale for logsoftmax (ฮฑ for entropies). Here we restrict it to be > 0.
When the alpha equals one, scaling is not applied to 'max_prob',
and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
entropy_norm: A mapping of the entropy value to the interval [0,1].
Supported values:
- 'lin' for using the linear mapping.
- 'exp' for using exponential mapping with linear shift.
"""
def __init__(
self,
decoder_model: rnnt_abstract.AbstractRNNTDecoder,
joint_model: rnnt_abstract.AbstractRNNTJoint,
blank_index: int,
durations: List[int],
max_symbols_per_step: Optional[int] = None,
preserve_alignments: bool = False,
preserve_frame_confidence: bool = False,
confidence_measure_cfg: Optional[DictConfig] = None,
):
super().__init__(
decoder_model=decoder_model,
joint_model=joint_model,
blank_index=blank_index,
max_symbols_per_step=max_symbols_per_step,
preserve_alignments=preserve_alignments,
preserve_frame_confidence=preserve_frame_confidence,
confidence_measure_cfg=confidence_measure_cfg,
)
self.durations = durations
# Depending on availability of `blank_as_pad` support,
# switch to the more efficient batched decoding technique
if self.decoder.blank_as_pad:
self._greedy_decode = self._greedy_decode_blank_as_pad
else:
self._greedy_decode = self._greedy_decode_masked
@typecheck()
def forward(
self,
encoder_output: torch.Tensor,
encoded_lengths: torch.Tensor,
partial_hypotheses: Optional[List[rnnt_utils.Hypothesis]] = None,
):
"""Returns a list of hypotheses given an input batch of the encoder hidden embedding.
Output token is generated auto-regressively.
Args:
encoder_output: A tensor of size (batch, features, timesteps).
encoded_lengths: list of int representing the length of each output sequence.
Returns:
packed list containing batch number of sentences (Hypotheses).
"""
# Preserve decoder and joint training state
decoder_training_state = self.decoder.training
joint_training_state = self.joint.training
with torch.inference_mode():
# Apply optional preprocessing
encoder_output = encoder_output.transpose(1, 2) # (B, T, D)
logitlen = encoded_lengths
self.decoder.eval()
self.joint.eval()
with self.decoder.as_frozen(), self.joint.as_frozen():
inseq = encoder_output # [B, T, D]
hypotheses = self._greedy_decode(
inseq, logitlen, device=inseq.device, partial_hypotheses=partial_hypotheses
)
# Pack the hypotheses results
packed_result = pack_hypotheses(hypotheses, logitlen)
self.decoder.train(decoder_training_state)
self.joint.train(joint_training_state)
return (packed_result,)
def _greedy_decode_blank_as_pad(
self,
x: torch.Tensor,
out_len: torch.Tensor,
device: torch.device,
partial_hypotheses: Optional[List[rnnt_utils.Hypothesis]] = None,
):
if partial_hypotheses is not None:
raise NotImplementedError("`partial_hypotheses` support is not implemented")
with torch.inference_mode():
# x: [B, T, D]
# out_len: [B]
# device: torch.device
# Initialize list of Hypothesis
batchsize = x.shape[0]
hypotheses = [
rnnt_utils.Hypothesis(score=0.0, y_sequence=[], timestep=[], dec_state=None) for _ in range(batchsize)
]
# Initialize Hidden state matrix (shared by entire batch)
hidden = None
# If alignments need to be preserved, register a dangling list to hold the values
if self.preserve_alignments:
# alignments is a 3-dimensional dangling list representing B x T x U
for hyp in hypotheses:
hyp.alignments = [[]]
# If confidence scores need to be preserved, register a dangling list to hold the values
if self.preserve_frame_confidence:
# frame_confidence is a 3-dimensional dangling list representing B x T x U
for hyp in hypotheses:
hyp.frame_confidence = [[]]
# Last Label buffer + Last Label without blank buffer
# batch level equivalent of the last_label
last_label = torch.full([batchsize, 1], fill_value=self._blank_index, dtype=torch.long, device=device)
# Mask buffers
blank_mask = torch.full([batchsize], fill_value=0, dtype=torch.bool, device=device)
# Get max sequence length
max_out_len = out_len.max()
# skip is the number of frames the next decoding step should advance by. When skip == 1,
# the next decoding step simply uses the next input frame.
skip = 1
for time_idx in range(max_out_len):
if skip > 1: # if skip > 1 at the current step, we decrement it and skip the current frame.
skip -= 1
continue
f = x.narrow(dim=1, start=time_idx, length=1) # [B, 1, D]
# need_to_stay is a boolean indicating whether the next decoding step should remain on the same frame.
need_to_stay = True
symbols_added = 0
# Reset blank mask
blank_mask.mul_(False)
# Update blank mask with time mask
# Batch: [B, T, D], but Bi may have seq len < max(seq_lens_in_batch)
# Forcibly mask with "blank" tokens, for all sample where current time step T > seq_len
blank_mask = time_idx >= out_len
# Start inner loop
while need_to_stay and (self.max_symbols is None or symbols_added < self.max_symbols):
# Batch prediction and joint network steps
# If very first prediction step, submit SOS tag (blank) to pred_step.
# This feeds a zero tensor as input to AbstractRNNTDecoder to prime the state
if time_idx == 0 and symbols_added == 0 and hidden is None:
g, hidden_prime = self._pred_step(self._SOS, hidden, batch_size=batchsize)
else:
# Perform batch step prediction of decoder, getting new states and scores ("g")
g, hidden_prime = self._pred_step(last_label, hidden, batch_size=batchsize)
# Batched joint step - Output = [B, V + 1 + num-big-blanks]
# Note: log_normalize must not be True here since the joiner output is a concatenation of both token logits and duration logits,
# and they need to be normalized independently.
joined = self._joint_step(f, g, log_normalize=None)
logp = joined[:, 0, 0, : -len(self.durations)]
duration_logp = joined[:, 0, 0, -len(self.durations) :]
if logp.dtype != torch.float32:
logp = logp.float()
duration_logp = duration_logp.float()
# get the max for both token and duration predictions.
v, k = logp.max(1)
dv, dk = duration_logp.max(1)
# here we set the skip value to be the minimum of all predicted durations, hence the torch.min(dk) call.
# Please refer to Section 5.2 of our paper https://arxiv.org/pdf/2304.06795.pdf for explanation of this.
skip = self.durations[int(torch.min(dk))]
# this is a special case: if all samples in the batch emit blanks, we require that skip be at least 1
# so we don't loop forever at the current frame.
if blank_mask.all():
if skip == 0:
skip = 1
need_to_stay = skip == 0
del g
# Update blank mask with current predicted blanks
# This is accumulating blanks over all time steps T and all target steps min(max_symbols, U)
k_is_blank = k == self._blank_index
blank_mask.bitwise_or_(k_is_blank)
del k_is_blank
del logp, duration_logp
# If all samples predict / have predicted prior blanks, exit loop early
# This is equivalent to if single sample predicted k
if not blank_mask.all():
# Collect batch indices where blanks occurred now/past
blank_indices = (blank_mask == 1).nonzero(as_tuple=False)
# Recover prior state for all samples which predicted blank now/past
if hidden is not None:
hidden_prime = self.decoder.batch_copy_states(hidden_prime, hidden, blank_indices)
elif len(blank_indices) > 0 and hidden is None:
# Reset state if there were some blank and other non-blank predictions in batch
# Original state is filled with zeros so we just multiply
# LSTM has 2 states
hidden_prime = self.decoder.batch_copy_states(hidden_prime, None, blank_indices, value=0.0)
# Recover prior predicted label for all samples which predicted blank now/past
k[blank_indices] = last_label[blank_indices, 0]
# Update new label and hidden state for next iteration
last_label = k.clone().view(-1, 1)
hidden = hidden_prime
# Update predicted labels, accounting for time mask
# If blank was predicted even once, now or in the past,
# Force the current predicted label to also be blank
# This ensures that blanks propagate across all timesteps
# once they have occurred (normally the stopping condition of the sample-level loop).
for kidx, ki in enumerate(k):
if blank_mask[kidx] == 0:
hypotheses[kidx].y_sequence.append(ki)
hypotheses[kidx].timestep.append(time_idx)
hypotheses[kidx].score += float(v[kidx])
symbols_added += 1
# Remove trailing empty list of alignments at T_{am-len} x Uj
if self.preserve_alignments:
for batch_idx in range(batchsize):
if len(hypotheses[batch_idx].alignments[-1]) == 0:
del hypotheses[batch_idx].alignments[-1]
# Remove trailing empty list of confidence scores at T_{am-len} x Uj
if self.preserve_frame_confidence:
for batch_idx in range(batchsize):
if len(hypotheses[batch_idx].frame_confidence[-1]) == 0:
del hypotheses[batch_idx].frame_confidence[-1]
# Preserve states
for batch_idx in range(batchsize):
hypotheses[batch_idx].dec_state = self.decoder.batch_select_state(hidden, batch_idx)
return hypotheses
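In the batched loop above, `blank_mask` accumulates blanks with `bitwise_or_`, so a sample that has emitted a blank once in the inner loop stops contributing tokens for the rest of that frame. A small list-based sketch of that accumulation (the `propagate_blank` helper name and its inputs are illustrative only, not part of NeMo):

```python
def propagate_blank(steps, blank=0):
    """steps: per-inner-loop-step lists of predicted tokens, one token per sample."""
    num_samples = len(steps[0])
    mask = [False] * num_samples
    emitted = [[] for _ in range(num_samples)]
    for step in steps:
        # once a sample predicts blank, it stays masked for the rest of the frame
        mask = [m or (k == blank) for m, k in zip(mask, step)]
        for b, k in enumerate(step):
            if not mask[b]:
                emitted[b].append(k)
        if all(mask):  # analogous to the early exit when blank_mask.all()
            break
    return emitted, mask

# sample 1 emits blank immediately and is frozen; sample 0 emits two tokens
emitted, mask = propagate_blank([[1, 0], [2, 3], [0, 0]])
```

Here sample 1 is masked from the first step, so its later prediction (token 3) is discarded, exactly as the batched decoder restores the prior label and state for blanked samples.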
def _greedy_decode_masked(
self,
x: torch.Tensor,
out_len: torch.Tensor,
device: torch.device,
partial_hypotheses: Optional[List[rnnt_utils.Hypothesis]] = None,
):
raise NotImplementedError("masked greedy-batched decode is not supported for TDT models.")
[end of nemo/collections/asr/parts/submodules/rnnt_greedy_decoding.py]
[start of nemo/collections/asr/parts/utils/asr_confidence_utils.py]
# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import math
from abc import ABC, abstractmethod
from dataclasses import dataclass
from functools import partial
from typing import List, Optional
import torch
from omegaconf import DictConfig, OmegaConf
from nemo.collections.asr.parts.utils.rnnt_utils import Hypothesis
from nemo.utils import logging
class ConfidenceMeasureConstants:
NAMES = ("max_prob", "entropy")
ENTROPY_TYPES = ("gibbs", "tsallis", "renyi")
ENTROPY_NORMS = ("lin", "exp")
@classmethod
def print(cls):
return (
cls.__name__
+ ": "
+ str({"NAMES": cls.NAMES, "ENTROPY_TYPES": cls.ENTROPY_TYPES, "ENTROPY_NORMS": cls.ENTROPY_NORMS})
)
class ConfidenceConstants:
AGGREGATIONS = ("mean", "min", "max", "prod")
@classmethod
def print(cls):
return cls.__name__ + ": " + str({"AGGREGATIONS": cls.AGGREGATIONS})
@dataclass
class ConfidenceMeasureConfig:
"""A Config which contains the measure name and settings to compute per-frame confidence scores.
Args:
name: The measure name (str).
Supported values:
- 'max_prob' for using the maximum token probability as a confidence.
- 'entropy' for using a normalized entropy of a log-likelihood vector.
entropy_type: Which type of entropy to use (str).
Used if confidence_measure_cfg.name is set to `entropy`.
Supported values:
- 'gibbs' for the (standard) Gibbs entropy. If the alpha (ฮฑ) is provided,
the formula is the following: H_ฮฑ = -sum_i((p^ฮฑ_i)*log(p^ฮฑ_i)).
Note that for this entropy, the alpha should comply with the following inequality:
(log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= ฮฑ <= (1+log(V-1))/log(V-1)
where V is the model vocabulary size.
- 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
Tsallis entropy formula is the following: H_ฮฑ = 1/(ฮฑ-1)*(1-sum_i(p^ฮฑ_i)),
where ฮฑ is a parameter. When ฮฑ == 1, it works like the Gibbs entropy.
More: https://en.wikipedia.org/wiki/Tsallis_entropy
- 'renyi' for the Rรฉnyi entropy.
Rรฉnyi entropy formula is the following: H_ฮฑ = 1/(1-ฮฑ)*log_2(sum_i(p^ฮฑ_i)),
where ฮฑ is a parameter. When ฮฑ == 1, it works like the Gibbs entropy.
More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
alpha: Power scale for logsoftmax (ฮฑ for entropies). Here we restrict it to be > 0.
When the alpha equals one, scaling is not applied to 'max_prob',
and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
entropy_norm: A mapping of the entropy value to the interval [0,1].
Supported values:
- 'lin' for using the linear mapping.
- 'exp' for using exponential mapping with linear shift.
"""
name: str = "entropy"
entropy_type: str = "tsallis"
alpha: float = 0.33
entropy_norm: str = "exp"
temperature: str = "DEPRECATED"
def __post_init__(self):
if self.temperature != "DEPRECATED":
logging.warning(
"`temperature` is deprecated and will be removed in the future. Please use `alpha` instead."
)
# TODO (alaptev): delete the following two lines sometime in the future
logging.warning("Re-writing `alpha` with the value of `temperature`.")
# self.temperature has type str
self.alpha = float(self.temperature)
self.temperature = "DEPRECATED"
if self.name not in ConfidenceMeasureConstants.NAMES:
raise ValueError(
f"`name` must be one of the following: "
f"{'`' + '`, `'.join(ConfidenceMeasureConstants.NAMES) + '`'}. Provided: `{self.name}`"
)
if self.entropy_type not in ConfidenceMeasureConstants.ENTROPY_TYPES:
raise ValueError(
f"`entropy_type` must be one of the following: "
f"{'`' + '`, `'.join(ConfidenceMeasureConstants.ENTROPY_TYPES) + '`'}. Provided: `{self.entropy_type}`"
)
if self.alpha <= 0.0:
raise ValueError(f"`alpha` must be > 0. Provided: {self.alpha}")
if self.entropy_norm not in ConfidenceMeasureConstants.ENTROPY_NORMS:
raise ValueError(
f"`entropy_norm` must be one of the following: "
f"{'`' + '`, `'.join(ConfidenceMeasureConstants.ENTROPY_NORMS) + '`'}. Provided: `{self.entropy_norm}`"
)
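The validity interval for α quoted in the Gibbs-entropy docstring can be evaluated directly. A small sketch (the `gibbs_alpha_bounds` helper is illustrative, not part of this module) computing the bounds for a given vocabulary size:

```python
import math

def gibbs_alpha_bounds(vocab_size):
    """Bounds from (log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= alpha <= (1+log(V-1))/log(V-1)."""
    log_v = math.log(vocab_size)
    lower = (log_v + 2 - math.sqrt(log_v ** 2 + 4)) / (2 * log_v)
    upper = (1 + math.log(vocab_size - 1)) / math.log(vocab_size - 1)
    return lower, upper

# e.g. for a 1024-token vocabulary the admissible interval straddles alpha == 1
lower, upper = gibbs_alpha_bounds(1024)
```

Note that the default `alpha = 0.33` lies below this Gibbs lower bound for typical vocabulary sizes; it is tuned for the default Tsallis entropy, which has no such restriction.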
@dataclass
class ConfidenceConfig:
"""A config which contains the following key-value pairs related to confidence scores.
Args:
preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores
generated during decoding. When set to true, the Hypothesis will contain
the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of floats.
preserve_token_confidence: Bool flag which preserves the history of per-token confidence scores
generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
the non-null value for `token_confidence` in it. Here, `token_confidence` is a List of floats.
The length of the list corresponds to the number of recognized tokens.
preserve_word_confidence: Bool flag which preserves the history of per-word confidence scores
generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
the non-null value for `word_confidence` in it. Here, `word_confidence` is a List of floats.
The length of the list corresponds to the number of recognized words.
exclude_blank: Bool flag indicating that blank token confidence scores are to be excluded
from the `token_confidence`.
aggregation: Which aggregation type to use for collapsing per-token confidence into per-word confidence.
Valid options are `mean`, `min`, `max`, `prod`.
measure_cfg: A dict-like object which contains the measure name and settings to compute per-frame
confidence scores.
name: The measure name (str).
Supported values:
- 'max_prob' for using the maximum token probability as a confidence.
- 'entropy' for using a normalized entropy of a log-likelihood vector.
entropy_type: Which type of entropy to use (str). Used if confidence_measure_cfg.name is set to `entropy`.
Supported values:
- 'gibbs' for the (standard) Gibbs entropy. If the alpha (ฮฑ) is provided,
the formula is the following: H_ฮฑ = -sum_i((p^ฮฑ_i)*log(p^ฮฑ_i)).
Note that for this entropy, the alpha should comply with the following inequality:
(log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= ฮฑ <= (1+log(V-1))/log(V-1)
where V is the model vocabulary size.
- 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
Tsallis entropy formula is the following: H_ฮฑ = 1/(ฮฑ-1)*(1-sum_i(p^ฮฑ_i)),
where ฮฑ is a parameter. When ฮฑ == 1, it works like the Gibbs entropy.
More: https://en.wikipedia.org/wiki/Tsallis_entropy
- 'renyi' for the Rรฉnyi entropy.
Rรฉnyi entropy formula is the following: H_ฮฑ = 1/(1-ฮฑ)*log_2(sum_i(p^ฮฑ_i)),
where ฮฑ is a parameter. When ฮฑ == 1, it works like the Gibbs entropy.
More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
alpha: Power scale for logsoftmax (ฮฑ for entropies). Here we restrict it to be > 0.
When the alpha equals one, scaling is not applied to 'max_prob',
and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
entropy_norm: A mapping of the entropy value to the interval [0,1].
Supported values:
- 'lin' for using the linear mapping.
- 'exp' for using exponential mapping with linear shift.
"""
preserve_frame_confidence: bool = False
preserve_token_confidence: bool = False
preserve_word_confidence: bool = False
exclude_blank: bool = True
aggregation: str = "min"
measure_cfg: ConfidenceMeasureConfig = ConfidenceMeasureConfig()
method_cfg: str = "DEPRECATED"
def __post_init__(self):
# OmegaConf.structured ensures that post_init check is always executed
self.measure_cfg = OmegaConf.structured(
self.measure_cfg
if isinstance(self.measure_cfg, ConfidenceMeasureConfig)
else ConfidenceMeasureConfig(**self.measure_cfg)
)
if self.method_cfg != "DEPRECATED":
logging.warning(
"`method_cfg` is deprecated and will be removed in the future. Please use `measure_cfg` instead."
)
# TODO (alaptev): delete the following two lines sometime in the future
logging.warning("Re-writing `measure_cfg` with the value of `method_cfg`.")
# OmegaConf.structured ensures that post_init check is always executed
self.measure_cfg = OmegaConf.structured(
self.method_cfg
if isinstance(self.method_cfg, ConfidenceMeasureConfig)
else ConfidenceMeasureConfig(**self.method_cfg)
)
self.method_cfg = "DEPRECATED"
if self.aggregation not in ConfidenceConstants.AGGREGATIONS:
raise ValueError(
f"`aggregation` has to be one of the following: "
f"{'`' + '`, `'.join(ConfidenceConstants.AGGREGATIONS) + '`'}. Provided: `{self.aggregation}`"
)
def get_confidence_measure_bank():
"""Generate a dictionary with confidence measure functionals.
Supported confidence measures:
max_prob: normalized maximum probability
entropy_gibbs_lin: Gibbs entropy with linear normalization
entropy_gibbs_exp: Gibbs entropy with exponential normalization
entropy_tsallis_lin: Tsallis entropy with linear normalization
entropy_tsallis_exp: Tsallis entropy with exponential normalization
entropy_renyi_lin: Rรฉnyi entropy with linear normalization
entropy_renyi_exp: Rรฉnyi entropy with exponential normalization
Returns:
dictionary with lambda functions.
"""
# helper functions
# Gibbs entropy is implemented without alpha
neg_entropy_gibbs = lambda x: (x.exp() * x).sum(-1)
neg_entropy_alpha = lambda x, t: (x * t).exp().sum(-1)
neg_entropy_alpha_gibbs = lambda x, t: ((x * t).exp() * x).sum(-1)
# too big for a lambda
def entropy_tsallis_exp(x, v, t):
exp_neg_max_ent = math.exp((1 - math.pow(v, 1 - t)) / (1 - t))
return (((1 - neg_entropy_alpha(x, t)) / (1 - t)).exp() - exp_neg_max_ent) / (1 - exp_neg_max_ent)
def entropy_gibbs_exp(x, v, t):
exp_neg_max_ent = math.pow(v, -t * math.pow(v, 1 - t))
return ((neg_entropy_alpha_gibbs(x, t) * t).exp() - exp_neg_max_ent) / (1 - exp_neg_max_ent)
# use Gibbs entropies for Tsallis and Rรฉnyi with t == 1.0
entropy_gibbs_lin_baseline = lambda x, v: 1 + neg_entropy_gibbs(x) / math.log(v)
entropy_gibbs_exp_baseline = lambda x, v: (neg_entropy_gibbs(x).exp() * v - 1) / (v - 1)
# fill the measure bank
confidence_measure_bank = {}
# Maximum probability measure, with optional alpha (power) scaling when t != 1.0
confidence_measure_bank["max_prob"] = (
lambda x, v, t: (x.max(dim=-1)[0].exp() * v - 1) / (v - 1)
if t == 1.0
else ((x.max(dim=-1)[0] * t).exp() * math.pow(v, t) - 1) / (math.pow(v, t) - 1)
)
confidence_measure_bank["entropy_gibbs_lin"] = (
lambda x, v, t: entropy_gibbs_lin_baseline(x, v)
if t == 1.0
else 1 + neg_entropy_alpha_gibbs(x, t) / math.log(v) / math.pow(v, 1 - t)
)
confidence_measure_bank["entropy_gibbs_exp"] = (
lambda x, v, t: entropy_gibbs_exp_baseline(x, v) if t == 1.0 else entropy_gibbs_exp(x, v, t)
)
confidence_measure_bank["entropy_tsallis_lin"] = (
lambda x, v, t: entropy_gibbs_lin_baseline(x, v)
if t == 1.0
else 1 + (1 - neg_entropy_alpha(x, t)) / (math.pow(v, 1 - t) - 1)
)
confidence_measure_bank["entropy_tsallis_exp"] = (
lambda x, v, t: entropy_gibbs_exp_baseline(x, v) if t == 1.0 else entropy_tsallis_exp(x, v, t)
)
confidence_measure_bank["entropy_renyi_lin"] = (
lambda x, v, t: entropy_gibbs_lin_baseline(x, v)
if t == 1.0
else 1 + neg_entropy_alpha(x, t).log2() / (t - 1) / math.log(v, 2)
)
confidence_measure_bank["entropy_renyi_exp"] = (
lambda x, v, t: entropy_gibbs_exp_baseline(x, v)
if t == 1.0
else (neg_entropy_alpha(x, t).pow(1 / (t - 1)) * v - 1) / (v - 1)
)
return confidence_measure_bank
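At α == 1.0 the bank maps a uniform distribution to confidence 0 and a certain prediction toward 1. A torch-free re-implementation of two of the measures illustrates this (the function names and probability vectors below are illustrative; the bank itself operates on log-probability tensors):

```python
import math

def max_prob_confidence(probs):
    # normalized maximum probability: maps 1/V -> 0 and 1.0 -> 1
    v = len(probs)
    return (max(probs) * v - 1) / (v - 1)

def gibbs_entropy_lin_confidence(probs):
    # linearly normalized Gibbs entropy: 1 + sum(p * log(p)) / log(V)
    v = len(probs)
    neg_entropy = sum(p * math.log(p) for p in probs if p > 0)
    return 1 + neg_entropy / math.log(v)

uniform = [0.25] * 4        # maximally uncertain -> confidence ~0
peaked = [0.97, 0.01, 0.01, 0.01]  # nearly certain -> confidence close to 1
```

Both measures agree on the extremes but differ in between: `max_prob` looks only at the top token, while the entropy measures account for the whole distribution.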
def get_confidence_aggregation_bank():
"""Generate a dictionary with confidence aggregation functions.
Supported confidence measures:
min: minimum
max: maximum
mean: arithmetic mean
prod: product
Returns:
dictionary with functions.
"""
confidence_aggregation_bank = {"mean": lambda x: sum(x) / len(x), "min": min, "max": max}
# python 3.7 and earlier do not have math.prod
if hasattr(math, "prod"):
confidence_aggregation_bank["prod"] = math.prod
else:
import operator
from functools import reduce
confidence_aggregation_bank["prod"] = lambda x: reduce(operator.mul, x, 1)
return confidence_aggregation_bank
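Collapsing per-token confidence into a per-word confidence is then just a dictionary lookup plus a call. A standalone sketch mirroring the bank above (the token scores are made up for illustration):

```python
import operator
from functools import reduce

# same aggregations as get_confidence_aggregation_bank()
aggregation_bank = {
    "mean": lambda x: sum(x) / len(x),
    "min": min,
    "max": max,
    "prod": lambda x: reduce(operator.mul, x, 1),
}

token_confidence = [0.5, 0.8, 1.0]  # hypothetical per-token scores for one word
word_confidence = {name: fn(token_confidence) for name, fn in aggregation_bank.items()}
```

The default `"min"` aggregation is the most conservative choice: one low-confidence token is enough to flag the whole word.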
class ConfidenceMeasureMixin(ABC):
"""Confidence Measure Mixin class.
It initializes per-frame confidence measure.
"""
def _init_confidence_measure(self, confidence_measure_cfg: Optional[DictConfig] = None):
"""Initialize per-frame confidence measure from config.
"""
# OmegaConf.structured ensures that post_init check is always executed
confidence_measure_cfg = OmegaConf.structured(
ConfidenceMeasureConfig()
if confidence_measure_cfg is None
else ConfidenceMeasureConfig(**confidence_measure_cfg)
)
# set confidence calculation measure
# we suppose that self.blank_id == len(vocabulary)
self.num_tokens = (self.blank_id if hasattr(self, "blank_id") else self._blank_index) + 1
self.alpha = confidence_measure_cfg.alpha
# init confidence measure bank
self.confidence_measure_bank = get_confidence_measure_bank()
measure = None
# construct measure_name
measure_name = ""
if confidence_measure_cfg.name == "max_prob":
measure_name = "max_prob"
elif confidence_measure_cfg.name == "entropy":
measure_name = '_'.join(
[confidence_measure_cfg.name, confidence_measure_cfg.entropy_type, confidence_measure_cfg.entropy_norm]
)
else:
raise ValueError(f"Unsupported `confidence_measure_cfg.name`: `{confidence_measure_cfg.name}`")
if measure_name not in self.confidence_measure_bank:
raise ValueError(f"Unsupported measure setup: `{measure_name}`")
measure = partial(self.confidence_measure_bank[measure_name], v=self.num_tokens, t=self.alpha)
self._get_confidence = lambda x: measure(torch.nan_to_num(x)).tolist()
class ConfidenceMixin(ABC):
"""Confidence Mixin class.
It is responsible for confidence estimation method initialization and high-level confidence score calculation.
"""
def _init_confidence(self, confidence_cfg: Optional[DictConfig] = None):
"""Initialize confidence-related fields and confidence aggregation function from config.
"""
# OmegaConf.structured ensures that post_init check is always executed
confidence_cfg = OmegaConf.structured(
ConfidenceConfig() if confidence_cfg is None else ConfidenceConfig(**confidence_cfg)
)
self.confidence_measure_cfg = confidence_cfg.measure_cfg
# extract the config
self.preserve_word_confidence = confidence_cfg.get('preserve_word_confidence', False)
# set preserve_frame_confidence and preserve_token_confidence to True
# if preserve_word_confidence is True
self.preserve_token_confidence = (
confidence_cfg.get('preserve_token_confidence', False) | self.preserve_word_confidence
)
# set preserve_frame_confidence to True if preserve_token_confidence is True
self.preserve_frame_confidence = (
confidence_cfg.get('preserve_frame_confidence', False) | self.preserve_token_confidence
)
self.exclude_blank_from_confidence = confidence_cfg.get('exclude_blank', True)
self.word_confidence_aggregation = confidence_cfg.get('aggregation', "min")
# define aggregation functions
self.confidence_aggregation_bank = get_confidence_aggregation_bank()
self._aggregate_confidence = self.confidence_aggregation_bank[self.word_confidence_aggregation]
# Update preserve frame confidence
if self.preserve_frame_confidence is False:
if self.cfg.strategy in ['greedy', 'greedy_batch']:
self.preserve_frame_confidence = self.cfg.greedy.get('preserve_frame_confidence', False)
# OmegaConf.structured ensures that post_init check is always executed
confidence_measure_cfg = OmegaConf.structured(self.cfg.greedy).get('confidence_measure_cfg', None)
self.confidence_measure_cfg = (
OmegaConf.structured(ConfidenceMeasureConfig())
if confidence_measure_cfg is None
else OmegaConf.structured(ConfidenceMeasureConfig(**confidence_measure_cfg))
)
@abstractmethod
def compute_confidence(self, hypotheses_list: List[Hypothesis]) -> List[Hypothesis]:
"""Computes high-level (per-token and/or per-word) confidence scores for a list of hypotheses.
Assumes that `frame_confidence` is present in the hypotheses.
Args:
hypotheses_list: List of Hypothesis.
Returns:
A list of hypotheses with high-level confidence scores.
"""
raise NotImplementedError()
@abstractmethod
def _aggregate_token_confidence(self, hypothesis: Hypothesis) -> List[float]:
"""Implemented by subclass in order to aggregate token confidence to a word-level confidence.
Args:
hypothesis: Hypothesis
Returns:
A list of word-level confidence scores.
"""
raise NotImplementedError()
def _aggregate_token_confidence_chars(self, words: List[str], token_confidence: List[float]) -> List[float]:
"""Implementation of token confidence aggregation for character-based models.
Args:
words: List of words of a hypothesis.
token_confidence: List of token-level confidence scores of a hypothesis.
Returns:
A list of word-level confidence scores.
"""
word_confidence = []
i = 0
for word in words:
word_len = len(word)
word_confidence.append(self._aggregate_confidence(token_confidence[i : i + word_len]))
# we assume that there is exactly one space token between words and exclude it from word confidence
i += word_len + 1
return word_confidence
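The character-level aggregation above can be sketched standalone, without the mixin machinery (a hypothetical `aggregate_chars` helper written for illustration; it makes the same assumption as the method: exactly one space token sits between consecutive words):

```python
# Standalone sketch of _aggregate_token_confidence_chars above.
def aggregate_chars(words, token_confidence, aggregate):
    """Collapse per-character confidences into one score per word,
    skipping the single space token assumed between words."""
    word_confidence = []
    i = 0
    for word in words:
        n = len(word)
        word_confidence.append(aggregate(token_confidence[i:i + n]))
        i += n + 1  # +1 skips the space token between words
    return word_confidence

# "hi" -> tokens h, i; then one space token; then "yo" -> y, o
conf = [0.9, 0.7, 0.1, 0.8, 0.6]
print(aggregate_chars(["hi", "yo"], conf, min))  # [0.7, 0.6]
```

Note that the space token's confidence (0.1 here) is deliberately excluded from both words.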
def _aggregate_token_confidence_subwords_sentencepiece(
self, words: List[str], token_confidence: List[float], token_ids: List[int]
) -> List[float]:
"""Implementation of token confidence aggregation for subword-based models.
**Note**: Only supports Sentencepiece based tokenizers !
Args:
words: List of words of a hypothesis.
token_confidence: List of token-level confidence scores of a hypothesis.
token_ids: List of token ids of a hypothesis.
Returns:
A list of word-level confidence scores.
"""
word_confidence = []
# run only if there are final words
if len(words) > 0:
j = 0
prev_unk = False
prev_underline = False
for i, token_id in enumerate(token_ids):
token = self.decode_ids_to_tokens([int(token_id)])[0]
token_text = self.decode_tokens_to_str([int(token_id)])
# treat `<unk>` as a separate word regardless of the next token
# to match the result of `tokenizer.ids_to_text`
if (token != token_text or prev_unk) and i > j:
# do not add confidence for `▁` if the current token starts with `▁`
# to match the result of `tokenizer.ids_to_text`
if not prev_underline:
word_confidence.append(self._aggregate_confidence(token_confidence[j:i]))
j = i
prev_unk = token == '<unk>'
prev_underline = token == '▁'
if not prev_underline:
word_confidence.append(self._aggregate_confidence(token_confidence[j : len(token_ids)]))
if len(words) != len(word_confidence):
raise RuntimeError(
f"""Something went wrong with word-level confidence aggregation.\n
Please check these values for debugging:\n
len(words): {len(words)},\n
len(word_confidence): {len(word_confidence)},\n
recognized text: `{' '.join(words)}`"""
)
return word_confidence
[end of nemo/collections/asr/parts/utils/asr_confidence_utils.py]
[start of nemo/collections/common/parts/adapter_modules.py]
# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from dataclasses import dataclass, is_dataclass
from typing import Any, Optional
from hydra.utils import instantiate
from omegaconf import OmegaConf
from torch import nn as nn
from nemo.collections.common.parts.utils import activation_registry
from nemo.core.classes.mixins import access_mixins, adapter_mixin_strategies
class AdapterModuleUtil(access_mixins.AccessMixin):
"""
Base class of Adapter Modules, providing common functionality to all Adapter Modules.
"""
def setup_adapter_strategy(self, adapter_strategy: Optional[adapter_mixin_strategies.AbstractAdapterStrategy]):
"""
Setup adapter strategy of this class, enabling dynamic change in the way the adapter output is
merged with the input.
When called successfully, will assign the variable `adapter_strategy` to the module.
Args:
adapter_strategy: Can be a None or an implementation of AbstractAdapterStrategy.
"""
# set default adapter strategy
if adapter_strategy is None:
adapter_strategy = self.get_default_strategy_config()
if is_dataclass(adapter_strategy):
adapter_strategy = OmegaConf.structured(adapter_strategy)
OmegaConf.set_struct(adapter_strategy, False)
# The config must have the `_target_` field pointing to the actual adapter strategy class
# which will load that strategy dynamically to this module.
if isinstance(adapter_strategy, dict) or OmegaConf.is_config(adapter_strategy):
self.adapter_strategy = instantiate(adapter_strategy)
elif isinstance(adapter_strategy, adapter_mixin_strategies.AbstractAdapterStrategy):
self.adapter_strategy = adapter_strategy
else:
raise AttributeError(f'`adapter_strategy` provided is invalid : {adapter_strategy}')
def get_default_strategy_config(self) -> 'dataclass':
"""
Returns a default adapter module strategy.
"""
return adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
def adapter_unfreeze(self,):
"""
Sets the requires grad for all parameters in the adapter to True.
This method should be overridden for any custom unfreeze behavior that is required.
For example, if not all params of the adapter should be unfrozen.
"""
for param in self.parameters():
param.requires_grad_(True)
class LinearAdapter(nn.Module, AdapterModuleUtil):
"""
Simple Linear Feedforward Adapter module with LayerNorm and a single hidden layer with an activation function.
Note: The adapter explicitly initializes its final layer with all zeros in order to avoid affecting the
original model when all adapters are disabled.
Args:
in_features: Input dimension of the module. Note that for adapters, input_dim == output_dim.
dim: Hidden dimension of the feed forward network.
activation: Str name for an activation function.
norm_position: Str, can be `pre` or `post`. Defaults to `pre`. Determines whether the normalization
will occur in the first layer or the last layer. Certain architectures may prefer one over the other.
dropout: float value, whether to perform dropout on the output of the last layer of the adapter.
adapter_strategy: By default, ResidualAddAdapterStrategyConfig. An adapter composition function object.
"""
def __init__(
self,
in_features: int,
dim: int,
activation: str = 'swish',
norm_position: str = 'pre',
dropout: float = 0.0,
adapter_strategy: adapter_mixin_strategies.ResidualAddAdapterStrategyConfig = None,
):
super().__init__()
activation = activation_registry[activation]()
# If the activation can be executed in place, do so.
if hasattr(activation, 'inplace'):
activation.inplace = True
assert norm_position in ['pre', 'post']
self.norm_position = norm_position
if norm_position == 'pre':
self.module = nn.Sequential(
nn.LayerNorm(in_features),
nn.Linear(in_features, dim, bias=False),
activation,
nn.Linear(dim, in_features, bias=False),
)
elif norm_position == 'post':
self.module = nn.Sequential(
nn.Linear(in_features, dim, bias=False),
activation,
nn.Linear(dim, in_features, bias=False),
nn.LayerNorm(in_features),
)
if dropout > 0.0:
self.dropout = nn.Dropout(dropout)
else:
self.dropout = None
# Setup adapter strategy
self.setup_adapter_strategy(adapter_strategy)
# reset parameters
self.reset_parameters()
def reset_parameters(self):
# Final layer initializations must be 0
if self.norm_position == 'pre':
self.module[-1].weight.data *= 0
elif self.norm_position == 'post':
self.module[-1].weight.data *= 0
self.module[-1].bias.data *= 0
def forward(self, x):
x = self.module(x)
# Add dropout if available
if self.dropout is not None:
x = self.dropout(x)
return x
@dataclass
class LinearAdapterConfig:
in_features: int
dim: int
activation: str = 'swish'
norm_position: str = 'pre'
dropout: float = 0.0
adapter_strategy: Optional[Any] = adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
_target_: str = "{0}.{1}".format(LinearAdapter.__module__, LinearAdapter.__name__)
[end of nemo/collections/common/parts/adapter_modules.py]
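The comment in `LinearAdapter` notes that the final layer is zero-initialized so that disabled adapters do not affect the host model. Here is a torch-free toy sketch of why that works under a residual-add strategy (matrices as nested lists; the helper names are hypothetical, introduced only for this example):

```python
# Toy illustration of LinearAdapter's zero-initialized final layer
# under a residual-add merge strategy (assumption: no torch, plain lists).
def matvec(w, x):
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in w]

def adapter_forward(x, w1, w2):
    # hidden = relu(w1 @ x); out = w2 @ hidden
    hidden = [max(0.0, h) for h in matvec(w1, x)]
    return matvec(w2, hidden)

def residual_add(x, adapter_out):
    return [a + b for a, b in zip(x, adapter_out)]

x = [1.0, -2.0, 3.0]
w1 = [[0.5, 0.1, -0.3], [0.2, 0.4, 0.6]]        # arbitrary hidden weights
w2_zero = [[0.0, 0.0], [0.0, 0.0], [0.0, 0.0]]  # zero-initialized final layer

# With a zero final layer the adapter output is all zeros, so the
# residual add returns the input unchanged: the host model behaves
# exactly as it did before adapters were attached.
print(residual_add(x, adapter_forward(x, w1, w2_zero)))  # [1.0, -2.0, 3.0]
```

Training then moves the final layer away from zero only where the adapter actually needs to modify the representation.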
[start of nemo/collections/common/tokenizers/en_ja_tokenizers.py]
# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import re
from typing import List
import ipadic
import MeCab
from pangu import spacing
from sacremoses import MosesDetokenizer, MosesPunctNormalizer, MosesTokenizer
class EnJaProcessor:
"""
Tokenizer, Detokenizer and Normalizer utilities for Japanese & English
Args:
lang_id: One of ['en', 'ja'].
"""
def __init__(self, lang_id: str):
self.lang_id = lang_id
self.moses_tokenizer = MosesTokenizer(lang=lang_id)
self.moses_detokenizer = MosesDetokenizer(lang=lang_id)
self.normalizer = MosesPunctNormalizer(
lang=lang_id, pre_replace_unicode_punct=True, post_remove_control_chars=True
)
def detokenize(self, tokens: List[str]) -> str:
"""
Detokenizes a list of tokens
Args:
tokens: list of strings as tokens
Returns:
detokenized Japanese or English string
"""
return self.moses_detokenizer.detokenize(tokens)
def tokenize(self, text) -> str:
"""
Tokenizes text using Moses. Returns a string of tokens.
"""
tokens = self.moses_tokenizer.tokenize(text)
return ' '.join(tokens)
def normalize(self, text) -> str:
# Normalization doesn't handle Japanese periods correctly;
# '。' becomes '.'.
if self.lang_id == 'en':
return self.normalizer.normalize(text)
else:
return text
class JaMecabProcessor:
"""
Tokenizer, Detokenizer and Normalizer utilities for Japanese, based on MeCab
"""
def __init__(self):
self.mecab_tokenizer = MeCab.Tagger(ipadic.MECAB_ARGS + " -Owakati")
def detokenize(self, text: List[str]) -> str:
RE_WS_IN_FW = re.compile(
r'([\u2018\u2019\u201c\u201d\u2e80-\u312f\u3200-\u32ff\u3400-\u4dbf\u4e00-\u9fff\uf900-\ufaff\uff00-\uffef])\s+(?=[\u2018\u2019\u201c\u201d\u2e80-\u312f\u3200-\u32ff\u3400-\u4dbf\u4e00-\u9fff\uf900-\ufaff\uff00-\uffef])'
)
detokenize = lambda s: spacing(RE_WS_IN_FW.sub(r'\1', s)).strip()
return detokenize(' '.join(text))
def tokenize(self, text) -> str:
"""
Tokenizes text using MeCab. Returns a string of tokens.
"""
return self.mecab_tokenizer.parse(text).strip()
def normalize(self, text) -> str:
return text
[end of nemo/collections/common/tokenizers/en_ja_tokenizers.py]
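The regex in `JaMecabProcessor.detokenize` removes whitespace that sits between two fullwidth (CJK and related) characters while keeping ASCII spacing. A trimmed, stdlib-only sketch of that join (assumption: the `pangu.spacing` post-pass is omitted, and only a subset of the original character ranges is shown):

```python
import re

# Sketch of the fullwidth-whitespace join used in
# JaMecabProcessor.detokenize (subset of the original ranges).
RE_WS_IN_FW = re.compile(
    r'([\u2e80-\u312f\u3400-\u4dbf\u4e00-\u9fff])\s+'
    r'(?=[\u2e80-\u312f\u3400-\u4dbf\u4e00-\u9fff])'
)

def join_fullwidth(tokens):
    return RE_WS_IN_FW.sub(r'\1', ' '.join(tokens)).strip()

# Spaces between CJK characters are removed; ASCII spacing is preserved.
print(join_fullwidth(['日本', '語', 'test']))  # 日本語 test
```

The lookahead `(?=...)` matches without consuming the second character, so runs of more than two CJK tokens collapse correctly in a single `sub` pass.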
[start of nemo/collections/nlp/models/machine_translation/mt_enc_dec_config.py]
# Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from dataclasses import dataclass
from typing import Any, Optional, Tuple
from omegaconf.omegaconf import MISSING
from nemo.collections.nlp.data.machine_translation.machine_translation_dataset import TranslationDataConfig
from nemo.collections.nlp.models.enc_dec_nlp_model import EncDecNLPModelConfig
from nemo.collections.nlp.modules.common.token_classifier import TokenClassifierConfig
from nemo.collections.nlp.modules.common.tokenizer_utils import TokenizerConfig
from nemo.collections.nlp.modules.common.transformer.transformer import (
NeMoTransformerConfig,
NeMoTransformerEncoderConfig,
)
from nemo.collections.nlp.modules.common.transformer.transformer_bottleneck import (
NeMoTransformerBottleneckDecoderConfig,
NeMoTransformerBottleneckEncoderConfig,
)
from nemo.core.config.modelPT import OptimConfig, SchedConfig
@dataclass
class MTSchedConfig(SchedConfig):
name: str = 'InverseSquareRootAnnealing'
warmup_ratio: Optional[float] = None
last_epoch: int = -1
# TODO: Refactor this dataclass to support more optimizers (it pins the optimizer to Adam-like optimizers).
@dataclass
class MTOptimConfig(OptimConfig):
name: str = 'adam'
lr: float = 1e-3
betas: Tuple[float, float] = (0.9, 0.98)
weight_decay: float = 0.0
sched: Optional[MTSchedConfig] = MTSchedConfig()
@dataclass
class MTEncDecModelConfig(EncDecNLPModelConfig):
# machine translation configurations
num_val_examples: int = 3
num_test_examples: int = 3
max_generation_delta: int = 10
label_smoothing: Optional[float] = 0.0
beam_size: int = 4
len_pen: float = 0.0
src_language: Any = 'en' # Any = str or List[str]
tgt_language: Any = 'en' # Any = str or List[str]
find_unused_parameters: Optional[bool] = True
shared_tokenizer: Optional[bool] = True
multilingual: Optional[bool] = False
preproc_out_dir: Optional[str] = None
validate_input_ids: Optional[bool] = True
shared_embeddings: bool = False
# network architecture configuration
encoder_tokenizer: Any = MISSING
encoder: Any = MISSING
decoder_tokenizer: Any = MISSING
decoder: Any = MISSING
head: TokenClassifierConfig = TokenClassifierConfig(log_softmax=True)
# dataset configurations
train_ds: Optional[TranslationDataConfig] = TranslationDataConfig(
src_file_name=MISSING,
tgt_file_name=MISSING,
tokens_in_batch=512,
clean=True,
shuffle=True,
cache_ids=False,
use_cache=False,
)
validation_ds: Optional[TranslationDataConfig] = TranslationDataConfig(
src_file_name=MISSING,
tgt_file_name=MISSING,
tokens_in_batch=512,
clean=False,
shuffle=False,
cache_ids=False,
use_cache=False,
)
test_ds: Optional[TranslationDataConfig] = TranslationDataConfig(
src_file_name=MISSING,
tgt_file_name=MISSING,
tokens_in_batch=512,
clean=False,
shuffle=False,
cache_ids=False,
use_cache=False,
)
optim: Optional[OptimConfig] = MTOptimConfig()
@dataclass
class AAYNBaseConfig(MTEncDecModelConfig):
# Attention is All You Need Base Configuration
encoder_tokenizer: TokenizerConfig = TokenizerConfig(library='yttm')
decoder_tokenizer: TokenizerConfig = TokenizerConfig(library='yttm')
encoder: NeMoTransformerEncoderConfig = NeMoTransformerEncoderConfig(
library='nemo',
model_name=None,
pretrained=False,
hidden_size=512,
inner_size=2048,
num_layers=6,
num_attention_heads=8,
ffn_dropout=0.1,
attn_score_dropout=0.1,
attn_layer_dropout=0.1,
)
decoder: NeMoTransformerConfig = NeMoTransformerConfig(
library='nemo',
model_name=None,
pretrained=False,
hidden_size=512,
inner_size=2048,
num_layers=6,
num_attention_heads=8,
ffn_dropout=0.1,
attn_score_dropout=0.1,
attn_layer_dropout=0.1,
)
@dataclass
class MTBottleneckModelConfig(AAYNBaseConfig):
model_type: str = 'nll'
min_logv: float = -6
latent_size: int = -1 # -1 will take value of encoder hidden
non_recon_warmup_batches: int = 200000
recon_per_token: bool = True
log_timing: bool = True
encoder: NeMoTransformerBottleneckEncoderConfig = NeMoTransformerBottleneckEncoderConfig(
library='nemo',
model_name=None,
pretrained=False,
hidden_size=512,
inner_size=2048,
num_layers=6,
num_attention_heads=8,
ffn_dropout=0.1,
attn_score_dropout=0.1,
attn_layer_dropout=0.1,
arch='seq2seq',
hidden_steps=32,
hidden_blocks=1,
hidden_init_method='params',
)
decoder: NeMoTransformerBottleneckDecoderConfig = NeMoTransformerBottleneckDecoderConfig(
library='nemo',
model_name=None,
pretrained=False,
inner_size=2048,
num_layers=6,
num_attention_heads=8,
ffn_dropout=0.1,
attn_score_dropout=0.1,
attn_layer_dropout=0.1,
arch='seq2seq',
)
[end of nemo/collections/nlp/models/machine_translation/mt_enc_dec_config.py]
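The config dataclasses above nest structured defaults (e.g. `MTOptimConfig` carries an `MTSchedConfig` default). A stdlib-only sketch of the same pattern (assumption: the names mirror `MTSchedConfig`/`MTOptimConfig`, but OmegaConf is not used here; note that plain `dataclasses` rejects a dataclass instance as a default because generated `__eq__` makes it unhashable, so `default_factory` is used instead):

```python
from dataclasses import dataclass, field, asdict
from typing import Optional, Tuple

# Stdlib sketch of the nested scheduler/optimizer config pattern above.
@dataclass
class SchedConfig:
    name: str = 'InverseSquareRootAnnealing'
    warmup_ratio: Optional[float] = None
    last_epoch: int = -1

@dataclass
class OptimConfig:
    name: str = 'adam'
    lr: float = 1e-3
    betas: Tuple[float, float] = (0.9, 0.98)
    weight_decay: float = 0.0
    # default_factory builds a fresh SchedConfig per instance
    sched: Optional[SchedConfig] = field(default_factory=SchedConfig)

cfg = OptimConfig(lr=3e-4)
print(cfg.sched.name)  # InverseSquareRootAnnealing
```

OmegaConf's `OmegaConf.structured(...)` accepts such dataclasses directly and adds YAML merging and type validation on top.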
[start of nemo/collections/nlp/models/token_classification/punctuation_capitalization_config.py]
# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from dataclasses import dataclass
from typing import Any, Dict, Optional
from omegaconf.omegaconf import MISSING, DictConfig, OmegaConf, open_dict
from nemo.collections.common.parts.adapter_modules import LinearAdapterConfig
from nemo.collections.nlp.data.token_classification.punctuation_capitalization_dataset import (
PunctuationCapitalizationEvalDataConfig,
PunctuationCapitalizationTrainDataConfig,
legacy_data_config_to_new_data_config,
)
from nemo.core.config import TrainerConfig
from nemo.core.config.modelPT import NemoConfig
from nemo.utils.exp_manager import ExpManagerConfig
@dataclass
class FreezeConfig:
is_enabled: bool = False
"""Whether to freeze the audio encoder weights and add Conformer layers on top of it"""
d_model: Optional[int] = 256
"""`d_model` parameter of ``ConformerLayer``"""
d_ff: Optional[int] = 1024
"""``d_ff`` parameter of ``ConformerLayer``"""
num_layers: Optional[int] = 8
"""``num_layers`` number of ``ConformerLayer`` modules to add on top of audio encoder"""
@dataclass
class AdapterConfig:
config: Optional[LinearAdapterConfig] = None
"""Linear adapter config; see ``collections.common.parts.LinearAdapterConfig``"""
enable: bool = False
"""Use adapters for audio encoder"""
@dataclass
class FusionConfig:
num_layers: Optional[int] = 4
"""Number of layers to use in fusion"""
num_attention_heads: Optional[int] = 4
"""Number of attention heads to use in fusion"""
inner_size: Optional[int] = 2048
"""Fusion inner size"""
@dataclass
class AudioEncoderConfig:
pretrained_model: str = MISSING
"""A configuration for restoring pretrained audio encoder"""
freeze: Optional[FreezeConfig] = None
adapter: Optional[AdapterConfig] = None
fusion: Optional[FusionConfig] = None
@dataclass
class TokenizerConfig:
"""A structure and default values of source text tokenizer."""
vocab_file: Optional[str] = None
"""A path to vocabulary file which is used in ``'word'``, ``'char'``, and HuggingFace tokenizers"""
tokenizer_name: str = MISSING
"""A name of the tokenizer used for tokenization of source sequences. Possible options are ``'sentencepiece'``,
``'word'``, ``'char'``, HuggingFace tokenizers (e.g. ``'bert-base-uncased'``). For more options see function
``nemo.collections.nlp.modules.common.get_tokenizer``. The tokenizer must have properties ``cls_id``, ``pad_id``,
``sep_id``, ``unk_id``."""
special_tokens: Optional[Dict[str, str]] = None
"""A dictionary with special tokens passed to constructors of ``'char'``, ``'word'``, ``'sentencepiece'``, and
various HuggingFace tokenizers."""
tokenizer_model: Optional[str] = None
"""A path to a tokenizer model required for ``'sentencepiece'`` tokenizer."""
@dataclass
class LanguageModelConfig:
"""
A structure and default values of language model configuration of punctuation and capitalization model. BERT like
HuggingFace models are supported. Provide a valid ``pretrained_model_name`` and, optionally, you may
reinitialize model via ``config_file`` or ``config``.
Alternatively you can initialize the language model using ``lm_checkpoint``.
This config is a part of :class:`PunctuationCapitalizationModelConfig` config.
"""
pretrained_model_name: str = MISSING
"""A mandatory parameter containing name of HuggingFace pretrained model. For example, ``'bert-base-uncased'``."""
config_file: Optional[str] = None
"""A path to a file with HuggingFace model config which is used to reinitialize language model."""
config: Optional[Dict] = None
"""A HuggingFace config which is used to reinitialize language model."""
lm_checkpoint: Optional[str] = None
"""A path to a ``torch`` checkpoint of a language model."""
@dataclass
class HeadConfig:
"""
A structure and default values of configuration of capitalization or punctuation model head. This config defines a
multilayer perceptron which is applied to output of a language model. Number of units in the hidden layer is equal
to the dimension of the language model.
This config is a part of :class:`PunctuationCapitalizationModelConfig` config.
"""
num_fc_layers: int = 1
"""A number of hidden layers in a multilayer perceptron."""
fc_dropout: float = 0.1
"""A dropout used in an MLP."""
activation: str = 'relu'
"""An activation used in hidden layers."""
use_transformer_init: bool = True
"""Whether to initialize the weights of the classifier head with the approach that was used for language model
initialization."""
@dataclass
class ClassLabelsConfig:
"""
A structure and default values of a mandatory part of config which contains names of files which are saved in .nemo
checkpoint. These files can also be used for passing label vocabulary to the model. For using them as label
vocabularies you will need to provide path these files in parameter
``model.common_dataset_parameters.label_vocab_dir``. Each line in labels files
contains 1 label. The values are sorted, ``<line number>==<label id>``, starting from ``0``. A label with ``0`` id
must contain neutral label which must be equal to ``model.common_dataset_parameters.pad_label``.
This config is a part of :class:`~CommonDatasetParametersConfig`.
"""
punct_labels_file: str = MISSING
"""A name of punctuation labels file."""
capit_labels_file: str = MISSING
"""A name of capitalization labels file."""
@dataclass
class CommonDatasetParametersConfig:
"""
A structure and default values of common dataset parameters config which includes label and loss mask information.
If you omit parameters ``punct_label_ids``, ``capit_label_ids``, ``label_vocab_dir``, then labels will be inferred
from a training dataset or loaded from a checkpoint.
Parameters ``ignore_extra_tokens`` and ``ignore_start_end`` are responsible for forming loss mask. A loss mask
defines on which tokens loss is computed.
This parameter is a part of config :class:`~PunctuationCapitalizationModelConfig`.
"""
pad_label: str = MISSING
"""A mandatory parameter which should contain label used for punctuation and capitalization label padding. It
also serves as a neutral label for both punctuation and capitalization. If any of ``punct_label_ids``,
``capit_label_ids`` parameters is provided, then ``pad_label`` must have ``0`` id in them. In addition, if ``label_vocab_dir``
is provided, then ``pad_label`` must be on the first lines in files ``class_labels.punct_labels_file`` and
``class_labels.capit_labels_file``."""
ignore_extra_tokens: bool = False
"""Whether to compute loss on not first tokens in words. If this parameter is ``True``, then loss mask is ``False``
for all tokens in a word except the first."""
ignore_start_end: bool = True
"""If ``False``, then loss is computed on [CLS] and [SEP] tokens."""
punct_label_ids: Optional[Dict[str, int]] = None
"""A dictionary with punctuation label ids. ``pad_label`` must have ``0`` id in this dictionary. You can omit this
parameter and pass label ids through ``class_labels.punct_labels_file`` or let the model to infer label ids from
dataset or load them from checkpoint."""
capit_label_ids: Optional[Dict[str, int]] = None
"""A dictionary with capitalization label ids. ``pad_label`` must have ``0`` id in this dictionary. You can omit
this parameter and pass label ids through ``class_labels.capit_labels_file`` or let model to infer label ids from
dataset or load them from checkpoint."""
label_vocab_dir: Optional[str] = None
"""A path to directory which contains class labels files. See :class:`ClassLabelsConfig`. If this parameter is
provided, then labels will be loaded from files which are located in ``label_vocab_dir`` and have names specified
in ``model.class_labels`` configuration section. A label specified in ``pad_label`` has to be on the first lines
of ``model.class_labels`` files."""
@dataclass
class PunctuationCapitalizationModelConfig:
"""
A configuration of
:class:`~nemo.collections.nlp.models.token_classification.punctuation_capitalization_model.PunctuationCapitalizationModel`
model.
See an example of model config in
`nemo/examples/nlp/token_classification/conf/punctuation_capitalization_config.yaml
<https://github.com/NVIDIA/NeMo/blob/main/examples/nlp/token_classification/conf/punctuation_capitalization_config.yaml>`_
This config is a part of :class:`~PunctuationCapitalizationConfig`.
"""
class_labels: ClassLabelsConfig = ClassLabelsConfig()
"""A mandatory parameter containing a dictionary with names of label id files used in .nemo checkpoints.
These file names can also be used for passing label vocabularies to the model. If you wish to use ``class_labels``
for passing vocabularies, please provide path to vocabulary files in
``model.common_dataset_parameters.label_vocab_dir`` parameter."""
common_dataset_parameters: Optional[CommonDatasetParametersConfig] = CommonDatasetParametersConfig()
"""Label ids and loss mask information."""
train_ds: Optional[PunctuationCapitalizationTrainDataConfig] = None
"""A configuration for creating training dataset and data loader."""
validation_ds: Optional[PunctuationCapitalizationEvalDataConfig] = None
"""A configuration for creating validation datasets and data loaders."""
test_ds: Optional[PunctuationCapitalizationEvalDataConfig] = None
"""A configuration for creating test datasets and data loaders."""
punct_head: HeadConfig = HeadConfig()
"""A configuration for creating punctuation MLP head that is applied to a language model outputs."""
capit_head: HeadConfig = HeadConfig()
"""A configuration for creating capitalization MLP head that is applied to a language model outputs."""
tokenizer: Any = TokenizerConfig()
"""A configuration for source text tokenizer."""
language_model: LanguageModelConfig = LanguageModelConfig()
"""A configuration of a BERT-like language model which serves as a model body."""
optim: Optional[Any] = None
"""A configuration of optimizer and learning rate scheduler. There is much variability in such config. For
description see `Optimizers
<https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/core/core.html#optimizers>`_ section in
documentation and `primer <https://github.com/NVIDIA/NeMo/blob/main/tutorials/00_NeMo_Primer.ipynb>`_ tutorial."""
@dataclass
class PunctuationCapitalizationLexicalAudioModelConfig(PunctuationCapitalizationModelConfig):
"""
A configuration of
:class:`~nemo.collections.nlp.models.token_classification.punctuation_lexical_audio_capitalization_model.PunctuationCapitalizationLexicalAudioModel`
model.
See an example of model config in
`nemo/examples/nlp/token_classification/conf/punctuation_capitalization_config.yaml
<https://github.com/NVIDIA/NeMo/blob/main/examples/nlp/token_classification/conf/punctuation_capitalization_lexical_audio_config.yaml>`_
Audio encoder can be frozen during training with ``freeze_audio_encoder`` parameter.
Adapter can be added to audio encoder with ``use_adapters`` and ``adapter_config`` parameters.
More conformer layers can be added on top of pretrained audio encoder with ``frozen_conf_d_model``, ``frozen_conf_d_ff`` and ``frozen_conf_num_layers`` parameters.
"""
train_ds: Optional[PunctuationCapitalizationTrainDataConfig] = None
"""A configuration for creating training dataset and data loader."""
validation_ds: Optional[PunctuationCapitalizationEvalDataConfig] = None
"""A configuration for creating validation datasets and data loaders."""
test_ds: Optional[PunctuationCapitalizationEvalDataConfig] = None
"""A configuration for creating test datasets and data loaders."""
audio_encoder: Optional[AudioEncoderConfig] = None
restore_lexical_encoder_from: Optional[str] = None
"""Path to .nemo checkpoint to load weights from"""
use_weighted_loss: Optional[bool] = False
"""If set to ``True`` CrossEntropyLoss will be weighted"""
@dataclass
class PunctuationCapitalizationConfig(NemoConfig):
"""
A config for punctuation model training and testing.
See an example of full config in
`nemo/examples/nlp/token_classification/conf/punctuation_capitalization_config.yaml
<https://github.com/NVIDIA/NeMo/blob/main/examples/nlp/token_classification/conf/punctuation_capitalization_config.yaml>`_
"""
pretrained_model: Optional[str] = None
"""Can be an NVIDIA NGC cloud model or a path to a .nemo checkpoint. You can get list of possible cloud options
by calling method
:func:`~nemo.collections.nlp.models.token_classification.punctuation_capitalization_model.PunctuationCapitalizationModel.list_available_models`.
"""
name: Optional[str] = 'Punctuation_and_Capitalization'
"""A name of the model. Used for naming output directories and ``.nemo`` checkpoints."""
do_training: bool = True
"""Whether to perform training of the model."""
do_testing: bool = False
"""Whether ot perform testing of the model."""
model: PunctuationCapitalizationModelConfig = PunctuationCapitalizationModelConfig()
"""A configuration for the
:class:`~nemo.collections.nlp.models.token_classification.punctuation_capitalization_model.PunctuationCapitalizationModel`
model."""
trainer: Optional[TrainerConfig] = TrainerConfig()
"""Contains ``Trainer`` Lightning class constructor parameters."""
exp_manager: Optional[ExpManagerConfig] = ExpManagerConfig(name=name, files_to_copy=[])
"""A configuration with various NeMo training options such as output directories, resuming from checkpoint,
tensorboard and W&B logging, and so on. For possible options see :ref:`exp-manager-label`."""
@dataclass
class PunctuationCapitalizationLexicalAudioConfig(PunctuationCapitalizationConfig):
model: PunctuationCapitalizationLexicalAudioModelConfig = PunctuationCapitalizationLexicalAudioModelConfig()
def is_legacy_model_config(model_cfg: DictConfig) -> bool:
"""
Test if model config is old style config. Old style configs are configs which were used before
``common_dataset_parameters`` item was added. Old style datasets use ``dataset`` instead of
``common_dataset_parameters``, ``batch_size`` instead of ``tokens_in_batch``. Old style configs do not support
tarred datasets.
Args:
model_cfg: model configuration
Returns:
whether ``model_config`` is legacy
"""
return 'common_dataset_parameters' not in model_cfg
def legacy_model_config_to_new_model_config(model_cfg: DictConfig) -> DictConfig:
"""
Transform old style config into
:class:`~nemo.collections.nlp.models.token_classification.punctuation_capitalization_config.PunctuationCapitalizationModelConfig`.
Old style configs are configs which were used before ``common_dataset_parameters`` item was added. Old style
datasets use ``dataset`` instead of ``common_dataset_parameters``, ``batch_size`` instead of ``tokens_in_batch``.
Old style configs do not support tarred datasets.
Args:
model_cfg: old style config
Returns:
model config which follows dataclass
:class:`~nemo.collections.nlp.models.token_classification.punctuation_capitalization_config.PunctuationCapitalizationModelConfig`
"""
train_ds = model_cfg.get('train_ds')
validation_ds = model_cfg.get('validation_ds')
test_ds = model_cfg.get('test_ds')
dataset = model_cfg.dataset
punct_head_config = model_cfg.get('punct_head', {})
capit_head_config = model_cfg.get('capit_head', {})
omega_conf = OmegaConf.structured(
PunctuationCapitalizationModelConfig(
class_labels=model_cfg.class_labels,
common_dataset_parameters=CommonDatasetParametersConfig(
pad_label=dataset.pad_label,
ignore_extra_tokens=dataset.get(
'ignore_extra_tokens', CommonDatasetParametersConfig.ignore_extra_tokens
),
ignore_start_end=dataset.get('ignore_start_end', CommonDatasetParametersConfig.ignore_start_end),
punct_label_ids=model_cfg.punct_label_ids,
capit_label_ids=model_cfg.capit_label_ids,
),
train_ds=None
if train_ds is None
else legacy_data_config_to_new_data_config(train_ds, dataset, train=True),
validation_ds=None
if validation_ds is None
else legacy_data_config_to_new_data_config(validation_ds, dataset, train=False),
test_ds=None if test_ds is None else legacy_data_config_to_new_data_config(test_ds, dataset, train=False),
punct_head=HeadConfig(
num_fc_layers=punct_head_config.get('punct_num_fc_layers', HeadConfig.num_fc_layers),
fc_dropout=punct_head_config.get('fc_dropout', HeadConfig.fc_dropout),
activation=punct_head_config.get('activation', HeadConfig.activation),
use_transformer_init=punct_head_config.get('use_transformer_init', HeadConfig.use_transformer_init),
),
capit_head=HeadConfig(
num_fc_layers=capit_head_config.get('capit_num_fc_layers', HeadConfig.num_fc_layers),
fc_dropout=capit_head_config.get('fc_dropout', HeadConfig.fc_dropout),
activation=capit_head_config.get('activation', HeadConfig.activation),
use_transformer_init=capit_head_config.get('use_transformer_init', HeadConfig.use_transformer_init),
),
tokenizer=model_cfg.tokenizer,
language_model=model_cfg.language_model,
optim=model_cfg.optim,
)
)
with open_dict(omega_conf):
retain_during_legacy_conversion = model_cfg.get('retain_during_legacy_conversion', {})
for key in retain_during_legacy_conversion.keys():
omega_conf[key] = retain_during_legacy_conversion[key]
return omega_conf
[end of nemo/collections/nlp/models/token_classification/punctuation_capitalization_config.py]
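The legacy-config detection in `is_legacy_model_config` above reduces to a single key test: old-style configs are recognized purely by the absence of the `common_dataset_parameters` item. A minimal sketch of that rule, using plain dicts in place of omegaconf `DictConfig` objects (both support the `in` operator); the example configs here are hypothetical, not taken from a real checkpoint:

```python
# Simplified stand-in for the legacy check above: old-style configs use a
# nested 'dataset' section and lack 'common_dataset_parameters' entirely.

def is_legacy_model_config(model_cfg: dict) -> bool:
    """Return True when the config predates ``common_dataset_parameters``."""
    return 'common_dataset_parameters' not in model_cfg

legacy_cfg = {
    'dataset': {'pad_label': 'O', 'batch_size': 32},   # old-style layout
    'punct_label_ids': {'O': 0},
}
new_cfg = {
    'common_dataset_parameters': {'pad_label': 'O'},   # new-style marker key
    'train_ds': {'tokens_in_batch': 5000},
}

print(is_legacy_model_config(legacy_cfg))  # True
print(is_legacy_model_config(new_cfg))     # False
```

When the check returns True, `legacy_model_config_to_new_model_config` rebuilds the config around `CommonDatasetParametersConfig`, pulling `pad_label` and the ignore flags out of the old `dataset` section.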
[start of nemo/collections/nlp/modules/common/megatron/megatron_encoders.py]
# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Transformer based language model."""
from MeCab import Model
from nemo.collections.nlp.modules.common.megatron.megatron_perceiver_encoders import MegatronPerceiverEncoderModule
from nemo.collections.nlp.modules.common.megatron.megatron_transformer_encoder import MegatronTransformerEncoderModule
from nemo.collections.nlp.modules.common.megatron.retrieval_transformer import (
MegatronRetrievalTransformerEncoderModule,
)
from nemo.collections.nlp.modules.common.megatron.utils import (
ApexGuardDefaults,
init_method_normal,
scaled_init_method_normal,
)
try:
from apex.transformer.enums import AttnMaskType, ModelType
HAVE_APEX = True
except (ImportError, ModuleNotFoundError):
HAVE_APEX = False
# fake missing classes with None attributes
AttnMaskType = ApexGuardDefaults()
ModelType = ApexGuardDefaults()
try:
from megatron.core import ModelParallelConfig
HAVE_MEGATRON_CORE = True
except (ImportError, ModuleNotFoundError):
ModelParallelConfig = ApexGuardDefaults
HAVE_MEGATRON_CORE = False
__all__ = []
AVAILABLE_ENCODERS = ["transformer", "perceiver", "retro"]
def get_encoder_model(
config: ModelParallelConfig,
arch,
hidden_size,
ffn_hidden_size,
num_layers,
num_attention_heads,
apply_query_key_layer_scaling=False,
kv_channels=None,
init_method=None,
scaled_init_method=None,
encoder_attn_mask_type=AttnMaskType.padding,
pre_process=True,
post_process=True,
init_method_std=0.02,
megatron_amp_O2=False,
hidden_dropout=0.1,
attention_dropout=0.1,
ffn_dropout=0.0,
precision=16,
fp32_residual_connection=False,
activations_checkpoint_method=None,
activations_checkpoint_num_layers=1,
activations_checkpoint_granularity=None,
layernorm_epsilon=1e-5,
bias_activation_fusion=True,
bias_dropout_add_fusion=True,
masked_softmax_fusion=True,
persist_layer_norm=False,
openai_gelu=False,
activation="gelu",
onnx_safe=False,
bias=True,
normalization="layernorm",
headscale=False,
transformer_block_type="pre_ln",
hidden_steps=32,
parent_model_type=ModelType.encoder_or_decoder,
layer_type=None,
chunk_size=64,
num_self_attention_per_cross_attention=1,
layer_number_offset=0, # this is used only for attention norm_factor scaling
megatron_legacy=False,
normalize_attention_scores=True,
sequence_parallel=False,
num_moe_experts=1,
moe_frequency=1,
moe_dropout=0.0,
turn_off_rop=False, # turn off the RoPE positional embedding
version=1, # model version
position_embedding_type='learned_absolute',
use_flash_attention=False,
):
"""Build language model and return along with the key to save."""
if kv_channels is None:
assert (
hidden_size % num_attention_heads == 0
), 'hidden_size must be divisible by num_attention_heads if kv_channels is None'
kv_channels = hidden_size // num_attention_heads
if init_method is None:
init_method = init_method_normal(init_method_std)
if scaled_init_method is None:
scaled_init_method = scaled_init_method_normal(init_method_std, num_layers)
if arch == "transformer":
# Language encoder.
encoder = MegatronTransformerEncoderModule(
config=config,
init_method=init_method,
output_layer_init_method=scaled_init_method,
hidden_size=hidden_size,
num_layers=num_layers,
num_attention_heads=num_attention_heads,
apply_query_key_layer_scaling=apply_query_key_layer_scaling,
kv_channels=kv_channels,
ffn_hidden_size=ffn_hidden_size,
encoder_attn_mask_type=encoder_attn_mask_type,
pre_process=pre_process,
post_process=post_process,
megatron_amp_O2=megatron_amp_O2,
hidden_dropout=hidden_dropout,
attention_dropout=attention_dropout,
ffn_dropout=ffn_dropout,
precision=precision,
fp32_residual_connection=fp32_residual_connection,
activations_checkpoint_method=activations_checkpoint_method,
activations_checkpoint_num_layers=activations_checkpoint_num_layers,
activations_checkpoint_granularity=activations_checkpoint_granularity,
layernorm_epsilon=layernorm_epsilon,
bias_activation_fusion=bias_activation_fusion,
bias_dropout_add_fusion=bias_dropout_add_fusion,
masked_softmax_fusion=masked_softmax_fusion,
persist_layer_norm=persist_layer_norm,
openai_gelu=openai_gelu,
onnx_safe=onnx_safe,
activation=activation,
bias=bias,
normalization=normalization,
transformer_block_type=transformer_block_type,
headscale=headscale,
parent_model_type=parent_model_type,
megatron_legacy=megatron_legacy,
normalize_attention_scores=normalize_attention_scores,
num_moe_experts=num_moe_experts,
moe_frequency=moe_frequency,
moe_dropout=moe_dropout,
position_embedding_type=position_embedding_type,
use_flash_attention=use_flash_attention,
)
elif arch == "retro":
encoder = MegatronRetrievalTransformerEncoderModule(
config=config,
init_method=init_method,
output_layer_init_method=scaled_init_method,
hidden_size=hidden_size,
num_layers=num_layers,
num_attention_heads=num_attention_heads,
apply_query_key_layer_scaling=apply_query_key_layer_scaling,
kv_channels=kv_channels,
layer_type=layer_type,
ffn_hidden_size=ffn_hidden_size,
pre_process=pre_process,
post_process=post_process,
megatron_amp_O2=megatron_amp_O2,
hidden_dropout=hidden_dropout,
attention_dropout=attention_dropout,
precision=precision,
fp32_residual_connection=fp32_residual_connection,
activations_checkpoint_method=activations_checkpoint_method,
activations_checkpoint_num_layers=activations_checkpoint_num_layers,
activations_checkpoint_granularity=activations_checkpoint_granularity,
layernorm_epsilon=layernorm_epsilon,
bias_activation_fusion=bias_activation_fusion,
bias_dropout_add_fusion=bias_dropout_add_fusion,
masked_softmax_fusion=masked_softmax_fusion,
persist_layer_norm=persist_layer_norm,
openai_gelu=openai_gelu,
onnx_safe=onnx_safe,
activation=activation,
bias=bias,
normalization=normalization,
transformer_block_type=transformer_block_type,
parent_model_type=parent_model_type,
chunk_size=chunk_size,
layer_number_offset=layer_number_offset,
megatron_legacy=megatron_legacy,
normalize_attention_scores=normalize_attention_scores,
turn_off_rop=turn_off_rop,
version=version,
)
elif arch == "perceiver":
encoder = MegatronPerceiverEncoderModule(
config=config,
init_method=init_method,
output_layer_init_method=scaled_init_method,
hidden_size=hidden_size,
num_layers=num_layers,
num_attention_heads=num_attention_heads,
apply_query_key_layer_scaling=apply_query_key_layer_scaling,
kv_channels=kv_channels,
ffn_hidden_size=ffn_hidden_size,
encoder_attn_mask_type=encoder_attn_mask_type,
pre_process=pre_process,
post_process=post_process,
megatron_amp_O2=megatron_amp_O2,
hidden_dropout=hidden_dropout,
attention_dropout=attention_dropout,
ffn_dropout=ffn_dropout,
precision=precision,
fp32_residual_connection=fp32_residual_connection,
activations_checkpoint_method=activations_checkpoint_method,
activations_checkpoint_num_layers=activations_checkpoint_num_layers,
activations_checkpoint_granularity=activations_checkpoint_granularity,
layernorm_epsilon=layernorm_epsilon,
bias_activation_fusion=bias_activation_fusion,
bias_dropout_add_fusion=bias_dropout_add_fusion,
masked_softmax_fusion=masked_softmax_fusion,
persist_layer_norm=persist_layer_norm,
openai_gelu=openai_gelu,
onnx_safe=onnx_safe,
activation=activation,
bias=bias,
normalization=normalization,
transformer_block_type=transformer_block_type,
headscale=headscale,
parent_model_type=parent_model_type,
hidden_steps=hidden_steps,
num_self_attention_per_cross_attention=num_self_attention_per_cross_attention,
megatron_legacy=megatron_legacy,
normalize_attention_scores=normalize_attention_scores,
)
else:
raise ValueError(f"Unknown encoder arch = {arch}. Available encoder arch = {AVAILABLE_ENCODERS}")
return encoder
[end of nemo/collections/nlp/modules/common/megatron/megatron_encoders.py]
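The `get_encoder_model` factory above is a string-keyed dispatch over `AVAILABLE_ENCODERS`: the `arch` argument selects the encoder class, and any other value raises a `ValueError` listing the valid options. A minimal sketch of that dispatch shape — the builder values here are placeholder strings standing in for the real Megatron module constructors, which take many more arguments:

```python
# Sketch of the arch dispatch in get_encoder_model; placeholders only.
AVAILABLE_ENCODERS = ["transformer", "perceiver", "retro"]

def build_encoder(arch: str) -> str:
    builders = {
        "transformer": lambda: "MegatronTransformerEncoderModule",
        "retro": lambda: "MegatronRetrievalTransformerEncoderModule",
        "perceiver": lambda: "MegatronPerceiverEncoderModule",
    }
    if arch not in builders:
        # Mirrors the error raised at the end of get_encoder_model.
        raise ValueError(
            f"Unknown encoder arch = {arch}. Available encoder arch = {AVAILABLE_ENCODERS}"
        )
    return builders[arch]()

print(build_encoder("retro"))  # MegatronRetrievalTransformerEncoderModule
```

Note also the `kv_channels` default in the real factory: when left as `None` it is derived as `hidden_size // num_attention_heads`, guarded by an assertion that the division is exact.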
[start of nemo/collections/tts/models/fastpitch.py]
# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import contextlib
from dataclasses import dataclass
from pathlib import Path
from typing import List, Optional
import torch
from hydra.utils import instantiate
from omegaconf import DictConfig, OmegaConf, open_dict
from pytorch_lightning import Trainer
from pytorch_lightning.loggers import TensorBoardLogger
from nemo.collections.common.parts.preprocessing import parsers
from nemo.collections.tts.losses.aligner_loss import BinLoss, ForwardSumLoss
from nemo.collections.tts.losses.fastpitchloss import DurationLoss, EnergyLoss, MelLoss, PitchLoss
from nemo.collections.tts.models.base import SpectrogramGenerator
from nemo.collections.tts.modules.fastpitch import FastPitchModule
from nemo.collections.tts.parts.mixins import FastPitchAdapterModelMixin
from nemo.collections.tts.parts.utils.callbacks import LoggingCallback
from nemo.collections.tts.parts.utils.helpers import (
batch_from_ragged,
g2p_backward_compatible_support,
plot_alignment_to_numpy,
plot_spectrogram_to_numpy,
process_batch,
sample_tts_input,
)
from nemo.core.classes import Exportable
from nemo.core.classes.common import PretrainedModelInfo, typecheck
from nemo.core.neural_types.elements import (
Index,
LengthsType,
MelSpectrogramType,
ProbsType,
RegressionValuesType,
TokenDurationType,
TokenIndex,
TokenLogDurationType,
)
from nemo.core.neural_types.neural_type import NeuralType
from nemo.utils import logging, model_utils
@dataclass
class G2PConfig:
_target_: str = "nemo.collections.tts.g2p.models.en_us_arpabet.EnglishG2p"
phoneme_dict: str = "scripts/tts_dataset_files/cmudict-0.7b_nv22.10"
heteronyms: str = "scripts/tts_dataset_files/heteronyms-052722"
phoneme_probability: float = 0.5
@dataclass
class TextTokenizer:
_target_: str = "nemo.collections.common.tokenizers.text_to_speech.tts_tokenizers.EnglishPhonemesTokenizer"
punct: bool = True
stresses: bool = True
chars: bool = True
apostrophe: bool = True
pad_with_space: bool = True
add_blank_at: bool = True
g2p: G2PConfig = G2PConfig()
@dataclass
class TextTokenizerConfig:
text_tokenizer: TextTokenizer = TextTokenizer()
class FastPitchModel(SpectrogramGenerator, Exportable, FastPitchAdapterModelMixin):
"""FastPitch model (https://arxiv.org/abs/2006.06873) that is used to generate mel spectrogram from text."""
def __init__(self, cfg: DictConfig, trainer: Trainer = None):
# Convert to Hydra 1.0 compatible DictConfig
cfg = model_utils.convert_model_config_to_dict_config(cfg)
cfg = model_utils.maybe_update_config_version(cfg)
# Setup normalizer
self.normalizer = None
self.text_normalizer_call = None
self.text_normalizer_call_kwargs = {}
self._setup_normalizer(cfg)
self.learn_alignment = cfg.get("learn_alignment", False)
# Setup vocabulary (=tokenizer) and input_fft_kwargs (supported only with self.learn_alignment=True)
input_fft_kwargs = {}
if self.learn_alignment:
self.vocab = None
self.ds_class = cfg.train_ds.dataset._target_
self.ds_class_name = self.ds_class.split(".")[-1]
if self.ds_class not in [
"nemo.collections.tts.data.dataset.TTSDataset",
"nemo.collections.tts.data.text_to_speech_dataset.TextToSpeechDataset",
"nemo.collections.tts.torch.data.TTSDataset",
]:
raise ValueError(f"Unknown dataset class: {self.ds_class}.")
self._setup_tokenizer(cfg)
assert self.vocab is not None
input_fft_kwargs["n_embed"] = len(self.vocab.tokens)
input_fft_kwargs["padding_idx"] = self.vocab.pad
self._parser = None
self._tb_logger = None
super().__init__(cfg=cfg, trainer=trainer)
self.bin_loss_warmup_epochs = cfg.get("bin_loss_warmup_epochs", 100)
self.log_images = cfg.get("log_images", False)
self.log_train_images = False
default_prosody_loss_scale = 0.1 if self.learn_alignment else 1.0
dur_loss_scale = cfg.get("dur_loss_scale", default_prosody_loss_scale)
pitch_loss_scale = cfg.get("pitch_loss_scale", default_prosody_loss_scale)
energy_loss_scale = cfg.get("energy_loss_scale", default_prosody_loss_scale)
self.mel_loss_fn = MelLoss()
self.pitch_loss_fn = PitchLoss(loss_scale=pitch_loss_scale)
self.duration_loss_fn = DurationLoss(loss_scale=dur_loss_scale)
self.energy_loss_fn = EnergyLoss(loss_scale=energy_loss_scale)
self.aligner = None
if self.learn_alignment:
aligner_loss_scale = cfg.get("aligner_loss_scale", 1.0)
self.aligner = instantiate(self._cfg.alignment_module)
self.forward_sum_loss_fn = ForwardSumLoss(loss_scale=aligner_loss_scale)
self.bin_loss_fn = BinLoss(loss_scale=aligner_loss_scale)
self.preprocessor = instantiate(self._cfg.preprocessor)
input_fft = instantiate(self._cfg.input_fft, **input_fft_kwargs)
output_fft = instantiate(self._cfg.output_fft)
duration_predictor = instantiate(self._cfg.duration_predictor)
pitch_predictor = instantiate(self._cfg.pitch_predictor)
speaker_encoder = instantiate(self._cfg.get("speaker_encoder", None))
energy_embedding_kernel_size = cfg.get("energy_embedding_kernel_size", 0)
energy_predictor = instantiate(self._cfg.get("energy_predictor", None))
# [TODO] may remove if we change the pre-trained config
# cfg: condition_types = [ "add" ]
n_speakers = cfg.get("n_speakers", 0)
speaker_emb_condition_prosody = cfg.get("speaker_emb_condition_prosody", False)
speaker_emb_condition_decoder = cfg.get("speaker_emb_condition_decoder", False)
speaker_emb_condition_aligner = cfg.get("speaker_emb_condition_aligner", False)
min_token_duration = cfg.get("min_token_duration", 0)
use_log_energy = cfg.get("use_log_energy", True)
if n_speakers > 1 and "add" not in input_fft.cond_input.condition_types:
input_fft.cond_input.condition_types.append("add")
if speaker_emb_condition_prosody:
duration_predictor.cond_input.condition_types.append("add")
pitch_predictor.cond_input.condition_types.append("add")
if speaker_emb_condition_decoder:
output_fft.cond_input.condition_types.append("add")
if speaker_emb_condition_aligner and self.aligner is not None:
self.aligner.cond_input.condition_types.append("add")
self.fastpitch = FastPitchModule(
input_fft,
output_fft,
duration_predictor,
pitch_predictor,
energy_predictor,
self.aligner,
speaker_encoder,
n_speakers,
cfg.symbols_embedding_dim,
cfg.pitch_embedding_kernel_size,
energy_embedding_kernel_size,
cfg.n_mel_channels,
min_token_duration,
cfg.max_token_duration,
use_log_energy,
)
self._input_types = self._output_types = None
self.export_config = {
"emb_range": (0, self.fastpitch.encoder.word_emb.num_embeddings),
"enable_volume": False,
"enable_ragged_batches": False,
}
if self.fastpitch.speaker_emb is not None:
self.export_config["num_speakers"] = cfg.n_speakers
self.log_config = cfg.get("log_config", None)
# Adapter modules setup (from FastPitchAdapterModelMixin)
self.setup_adapters()
def _get_default_text_tokenizer_conf(self):
text_tokenizer: TextTokenizerConfig = TextTokenizerConfig()
return OmegaConf.create(OmegaConf.to_yaml(text_tokenizer))
def _setup_normalizer(self, cfg):
if "text_normalizer" in cfg:
normalizer_kwargs = {}
if "whitelist" in cfg.text_normalizer:
normalizer_kwargs["whitelist"] = self.register_artifact(
'text_normalizer.whitelist', cfg.text_normalizer.whitelist
)
try:
import nemo_text_processing
self.normalizer = instantiate(cfg.text_normalizer, **normalizer_kwargs)
except Exception as e:
logging.error(e)
raise ImportError(
"`nemo_text_processing` not installed, see https://github.com/NVIDIA/NeMo-text-processing for more details"
)
self.text_normalizer_call = self.normalizer.normalize
if "text_normalizer_call_kwargs" in cfg:
self.text_normalizer_call_kwargs = cfg.text_normalizer_call_kwargs
def _setup_tokenizer(self, cfg):
text_tokenizer_kwargs = {}
if "g2p" in cfg.text_tokenizer:
# for backward compatibility
if (
self._is_model_being_restored()
and (cfg.text_tokenizer.g2p.get('_target_', None) is not None)
and cfg.text_tokenizer.g2p["_target_"].startswith("nemo_text_processing.g2p")
):
cfg.text_tokenizer.g2p["_target_"] = g2p_backward_compatible_support(
cfg.text_tokenizer.g2p["_target_"]
)
g2p_kwargs = {}
if "phoneme_dict" in cfg.text_tokenizer.g2p:
g2p_kwargs["phoneme_dict"] = self.register_artifact(
'text_tokenizer.g2p.phoneme_dict', cfg.text_tokenizer.g2p.phoneme_dict,
)
if "heteronyms" in cfg.text_tokenizer.g2p:
g2p_kwargs["heteronyms"] = self.register_artifact(
'text_tokenizer.g2p.heteronyms', cfg.text_tokenizer.g2p.heteronyms,
)
# for backward compatibility
text_tokenizer_kwargs["g2p"] = instantiate(cfg.text_tokenizer.g2p, **g2p_kwargs)
# TODO @xueyang: rename the instance of tokenizer because vocab is misleading.
self.vocab = instantiate(cfg.text_tokenizer, **text_tokenizer_kwargs)
@property
def tb_logger(self):
if self._tb_logger is None:
if self.logger is None or self.logger.experiment is None:
return None
tb_logger = self.logger.experiment
for logger in self.trainer.loggers:
if isinstance(logger, TensorBoardLogger):
tb_logger = logger.experiment
break
self._tb_logger = tb_logger
return self._tb_logger
@property
def parser(self):
if self._parser is not None:
return self._parser
if self.learn_alignment:
self._parser = self.vocab.encode
else:
self._parser = parsers.make_parser(
labels=self._cfg.labels,
name='en',
unk_id=-1,
blank_id=-1,
do_normalize=True,
abbreviation_version="fastpitch",
make_table=False,
)
return self._parser
def parse(self, str_input: str, normalize=True) -> torch.tensor:
if self.training:
logging.warning("parse() is meant to be called in eval mode.")
if normalize and self.text_normalizer_call is not None:
str_input = self.text_normalizer_call(str_input, **self.text_normalizer_call_kwargs)
if self.learn_alignment:
eval_phon_mode = contextlib.nullcontext()
if hasattr(self.vocab, "set_phone_prob"):
eval_phon_mode = self.vocab.set_phone_prob(prob=1.0)
# Disable mixed g2p representation if necessary
with eval_phon_mode:
tokens = self.parser(str_input)
else:
tokens = self.parser(str_input)
x = torch.tensor(tokens).unsqueeze_(0).long().to(self.device)
return x
@typecheck(
input_types={
"text": NeuralType(('B', 'T_text'), TokenIndex()),
"durs": NeuralType(('B', 'T_text'), TokenDurationType()),
"pitch": NeuralType(('B', 'T_audio'), RegressionValuesType()),
"energy": NeuralType(('B', 'T_audio'), RegressionValuesType(), optional=True),
"speaker": NeuralType(('B'), Index(), optional=True),
"pace": NeuralType(optional=True),
"spec": NeuralType(('B', 'D', 'T_spec'), MelSpectrogramType(), optional=True),
"attn_prior": NeuralType(('B', 'T_spec', 'T_text'), ProbsType(), optional=True),
"mel_lens": NeuralType(('B'), LengthsType(), optional=True),
"input_lens": NeuralType(('B'), LengthsType(), optional=True),
# reference_* data is used for multi-speaker FastPitch training
"reference_spec": NeuralType(('B', 'D', 'T_spec'), MelSpectrogramType(), optional=True),
"reference_spec_lens": NeuralType(('B'), LengthsType(), optional=True),
}
)
def forward(
self,
*,
text,
durs=None,
pitch=None,
energy=None,
speaker=None,
pace=1.0,
spec=None,
attn_prior=None,
mel_lens=None,
input_lens=None,
reference_spec=None,
reference_spec_lens=None,
):
return self.fastpitch(
text=text,
durs=durs,
pitch=pitch,
energy=energy,
speaker=speaker,
pace=pace,
spec=spec,
attn_prior=attn_prior,
mel_lens=mel_lens,
input_lens=input_lens,
reference_spec=reference_spec,
reference_spec_lens=reference_spec_lens,
)
@typecheck(output_types={"spect": NeuralType(('B', 'D', 'T_spec'), MelSpectrogramType())})
def generate_spectrogram(
self,
tokens: 'torch.tensor',
speaker: Optional[int] = None,
pace: float = 1.0,
reference_spec: Optional['torch.tensor'] = None,
reference_spec_lens: Optional['torch.tensor'] = None,
) -> torch.tensor:
if self.training:
logging.warning("generate_spectrogram() is meant to be called in eval mode.")
if isinstance(speaker, int):
speaker = torch.tensor([speaker]).to(self.device)
spect, *_ = self(
text=tokens,
durs=None,
pitch=None,
speaker=speaker,
pace=pace,
reference_spec=reference_spec,
reference_spec_lens=reference_spec_lens,
)
return spect
def training_step(self, batch, batch_idx):
attn_prior, durs, speaker, energy, reference_audio, reference_audio_len = (
None,
None,
None,
None,
None,
None,
)
if self.learn_alignment:
if self.ds_class == "nemo.collections.tts.data.text_to_speech_dataset.TextToSpeechDataset":
batch_dict = batch
else:
batch_dict = process_batch(batch, self._train_dl.dataset.sup_data_types_set)
audio = batch_dict.get("audio")
audio_lens = batch_dict.get("audio_lens")
text = batch_dict.get("text")
text_lens = batch_dict.get("text_lens")
attn_prior = batch_dict.get("align_prior_matrix", None)
pitch = batch_dict.get("pitch", None)
energy = batch_dict.get("energy", None)
speaker = batch_dict.get("speaker_id", None)
reference_audio = batch_dict.get("reference_audio", None)
reference_audio_len = batch_dict.get("reference_audio_lens", None)
else:
audio, audio_lens, text, text_lens, durs, pitch, speaker = batch
mels, spec_len = self.preprocessor(input_signal=audio, length=audio_lens)
reference_spec, reference_spec_len = None, None
if reference_audio is not None:
reference_spec, reference_spec_len = self.preprocessor(
input_signal=reference_audio, length=reference_audio_len
)
(
mels_pred,
_,
_,
log_durs_pred,
pitch_pred,
attn_soft,
attn_logprob,
attn_hard,
attn_hard_dur,
pitch,
energy_pred,
energy_tgt,
) = self(
text=text,
durs=durs,
pitch=pitch,
energy=energy,
speaker=speaker,
pace=1.0,
spec=mels if self.learn_alignment else None,
reference_spec=reference_spec,
reference_spec_lens=reference_spec_len,
attn_prior=attn_prior,
mel_lens=spec_len,
input_lens=text_lens,
)
if durs is None:
durs = attn_hard_dur
mel_loss = self.mel_loss_fn(spect_predicted=mels_pred, spect_tgt=mels)
dur_loss = self.duration_loss_fn(log_durs_predicted=log_durs_pred, durs_tgt=durs, len=text_lens)
loss = mel_loss + dur_loss
if self.learn_alignment:
ctc_loss = self.forward_sum_loss_fn(attn_logprob=attn_logprob, in_lens=text_lens, out_lens=spec_len)
bin_loss_weight = min(self.current_epoch / self.bin_loss_warmup_epochs, 1.0)
bin_loss = self.bin_loss_fn(hard_attention=attn_hard, soft_attention=attn_soft) * bin_loss_weight
loss += ctc_loss + bin_loss
pitch_loss = self.pitch_loss_fn(pitch_predicted=pitch_pred, pitch_tgt=pitch, len=text_lens)
energy_loss = self.energy_loss_fn(energy_predicted=energy_pred, energy_tgt=energy_tgt, length=text_lens)
loss += pitch_loss + energy_loss
self.log("t_loss", loss)
self.log("t_mel_loss", mel_loss)
self.log("t_dur_loss", dur_loss)
self.log("t_pitch_loss", pitch_loss)
if energy_tgt is not None:
self.log("t_energy_loss", energy_loss)
if self.learn_alignment:
self.log("t_ctc_loss", ctc_loss)
self.log("t_bin_loss", bin_loss)
# Log images to tensorboard
if self.log_images and self.log_train_images and isinstance(self.logger, TensorBoardLogger):
self.log_train_images = False
self.tb_logger.add_image(
"train_mel_target",
plot_spectrogram_to_numpy(mels[0].data.cpu().float().numpy()),
self.global_step,
dataformats="HWC",
)
spec_predict = mels_pred[0].data.cpu().float().numpy()
self.tb_logger.add_image(
"train_mel_predicted", plot_spectrogram_to_numpy(spec_predict), self.global_step, dataformats="HWC",
)
if self.learn_alignment:
attn = attn_hard[0].data.cpu().float().numpy().squeeze()
self.tb_logger.add_image(
"train_attn", plot_alignment_to_numpy(attn.T), self.global_step, dataformats="HWC",
)
soft_attn = attn_soft[0].data.cpu().float().numpy().squeeze()
self.tb_logger.add_image(
"train_soft_attn", plot_alignment_to_numpy(soft_attn.T), self.global_step, dataformats="HWC",
)
return loss
def validation_step(self, batch, batch_idx):
attn_prior, durs, speaker, energy, reference_audio, reference_audio_len = (
None,
None,
None,
None,
None,
None,
)
if self.learn_alignment:
if self.ds_class == "nemo.collections.tts.data.text_to_speech_dataset.TextToSpeechDataset":
batch_dict = batch
else:
batch_dict = process_batch(batch, self._train_dl.dataset.sup_data_types_set)
audio = batch_dict.get("audio")
audio_lens = batch_dict.get("audio_lens")
text = batch_dict.get("text")
text_lens = batch_dict.get("text_lens")
attn_prior = batch_dict.get("align_prior_matrix", None)
pitch = batch_dict.get("pitch", None)
energy = batch_dict.get("energy", None)
speaker = batch_dict.get("speaker_id", None)
reference_audio = batch_dict.get("reference_audio", None)
reference_audio_len = batch_dict.get("reference_audio_lens", None)
else:
audio, audio_lens, text, text_lens, durs, pitch, speaker = batch
mels, mel_lens = self.preprocessor(input_signal=audio, length=audio_lens)
reference_spec, reference_spec_len = None, None
if reference_audio is not None:
reference_spec, reference_spec_len = self.preprocessor(
input_signal=reference_audio, length=reference_audio_len
)
# Calculate val loss on ground truth durations to better align L2 loss in time
(mels_pred, _, _, log_durs_pred, pitch_pred, _, _, _, attn_hard_dur, pitch, energy_pred, energy_tgt,) = self(
text=text,
durs=durs,
pitch=pitch,
energy=energy,
speaker=speaker,
pace=1.0,
spec=mels if self.learn_alignment else None,
reference_spec=reference_spec,
reference_spec_lens=reference_spec_len,
attn_prior=attn_prior,
mel_lens=mel_lens,
input_lens=text_lens,
)
if durs is None:
durs = attn_hard_dur
mel_loss = self.mel_loss_fn(spect_predicted=mels_pred, spect_tgt=mels)
dur_loss = self.duration_loss_fn(log_durs_predicted=log_durs_pred, durs_tgt=durs, len=text_lens)
pitch_loss = self.pitch_loss_fn(pitch_predicted=pitch_pred, pitch_tgt=pitch, len=text_lens)
energy_loss = self.energy_loss_fn(energy_predicted=energy_pred, energy_tgt=energy_tgt, length=text_lens)
loss = mel_loss + dur_loss + pitch_loss + energy_loss
val_outputs = {
"val_loss": loss,
"mel_loss": mel_loss,
"dur_loss": dur_loss,
"pitch_loss": pitch_loss,
"energy_loss": energy_loss if energy_tgt is not None else None,
"mel_target": mels if batch_idx == 0 else None,
"mel_pred": mels_pred if batch_idx == 0 else None,
}
self.validation_step_outputs.append(val_outputs)
return val_outputs
def on_validation_epoch_end(self):
collect = lambda key: torch.stack([x[key] for x in self.validation_step_outputs]).mean()
val_loss = collect("val_loss")
mel_loss = collect("mel_loss")
dur_loss = collect("dur_loss")
pitch_loss = collect("pitch_loss")
self.log("val_loss", val_loss, sync_dist=True)
self.log("val_mel_loss", mel_loss, sync_dist=True)
self.log("val_dur_loss", dur_loss, sync_dist=True)
self.log("val_pitch_loss", pitch_loss, sync_dist=True)
if self.validation_step_outputs[0]["energy_loss"] is not None:
energy_loss = collect("energy_loss")
self.log("val_energy_loss", energy_loss, sync_dist=True)
_, _, _, _, _, spec_target, spec_predict = self.validation_step_outputs[0].values()
if self.log_images and isinstance(self.logger, TensorBoardLogger):
self.tb_logger.add_image(
"val_mel_target",
plot_spectrogram_to_numpy(spec_target[0].data.cpu().float().numpy()),
self.global_step,
dataformats="HWC",
)
spec_predict = spec_predict[0].data.cpu().float().numpy()
self.tb_logger.add_image(
"val_mel_predicted", plot_spectrogram_to_numpy(spec_predict), self.global_step, dataformats="HWC",
)
self.log_train_images = True
self.validation_step_outputs.clear()  # free memory
def _setup_train_dataloader(self, cfg):
phon_mode = contextlib.nullcontext()
if hasattr(self.vocab, "set_phone_prob"):
phon_mode = self.vocab.set_phone_prob(self.vocab.phoneme_probability)
with phon_mode:
dataset = instantiate(cfg.dataset, text_tokenizer=self.vocab,)
sampler = dataset.get_sampler(cfg.dataloader_params.batch_size)
return torch.utils.data.DataLoader(
dataset, collate_fn=dataset.collate_fn, sampler=sampler, **cfg.dataloader_params
)
def _setup_test_dataloader(self, cfg):
phon_mode = contextlib.nullcontext()
if hasattr(self.vocab, "set_phone_prob"):
phon_mode = self.vocab.set_phone_prob(0.0)
with phon_mode:
dataset = instantiate(cfg.dataset, text_tokenizer=self.vocab,)
return torch.utils.data.DataLoader(dataset, collate_fn=dataset.collate_fn, **cfg.dataloader_params)
def __setup_dataloader_from_config(self, cfg, shuffle_should_be: bool = True, name: str = "train"):
if "dataset" not in cfg or not isinstance(cfg.dataset, DictConfig):
raise ValueError(f"No dataset for {name}")
if "dataloader_params" not in cfg or not isinstance(cfg.dataloader_params, DictConfig):
raise ValueError(f"No dataloader_params for {name}")
if shuffle_should_be:
if 'shuffle' not in cfg.dataloader_params:
logging.warning(
f"Shuffle should be set to True for {self}'s {name} dataloader but was not found in its "
"config. Manually setting to True"
)
with open_dict(cfg.dataloader_params):
cfg.dataloader_params.shuffle = True
elif not cfg.dataloader_params.shuffle:
logging.error(f"The {name} dataloader for {self} has shuffle set to False!!!")
elif cfg.dataloader_params.shuffle:
logging.error(f"The {name} dataloader for {self} has shuffle set to True!!!")
if self.ds_class == "nemo.collections.tts.data.dataset.TTSDataset":
phon_mode = contextlib.nullcontext()
if hasattr(self.vocab, "set_phone_prob"):
phon_mode = self.vocab.set_phone_prob(prob=None if name == "val" else self.vocab.phoneme_probability)
with phon_mode:
dataset = instantiate(
cfg.dataset,
text_normalizer=self.normalizer,
text_normalizer_call_kwargs=self.text_normalizer_call_kwargs,
text_tokenizer=self.vocab,
)
else:
dataset = instantiate(cfg.dataset)
return torch.utils.data.DataLoader(dataset, collate_fn=dataset.collate_fn, **cfg.dataloader_params)
def setup_training_data(self, cfg):
if self.ds_class == "nemo.collections.tts.data.text_to_speech_dataset.TextToSpeechDataset":
self._train_dl = self._setup_train_dataloader(cfg)
else:
self._train_dl = self.__setup_dataloader_from_config(cfg)
def setup_validation_data(self, cfg):
if self.ds_class == "nemo.collections.tts.data.text_to_speech_dataset.TextToSpeechDataset":
self._validation_dl = self._setup_test_dataloader(cfg)
else:
self._validation_dl = self.__setup_dataloader_from_config(cfg, shuffle_should_be=False, name="val")
def setup_test_data(self, cfg):
"""Omitted."""
pass
def configure_callbacks(self):
if not self.log_config:
return []
sample_ds_class = self.log_config.dataset._target_
if sample_ds_class != "nemo.collections.tts.data.text_to_speech_dataset.TextToSpeechDataset":
raise ValueError(f"Logging callback only supported for TextToSpeechDataset, got {sample_ds_class}")
data_loader = self._setup_test_dataloader(self.log_config)
generators = instantiate(self.log_config.generators)
log_dir = Path(self.log_config.log_dir) if self.log_config.log_dir else None
log_callback = LoggingCallback(
generators=generators,
data_loader=data_loader,
log_epochs=self.log_config.log_epochs,
epoch_frequency=self.log_config.epoch_frequency,
output_dir=log_dir,
loggers=self.trainer.loggers,
log_tensorboard=self.log_config.log_tensorboard,
log_wandb=self.log_config.log_wandb,
)
return [log_callback]
@classmethod
def list_available_models(cls) -> 'List[PretrainedModelInfo]':
"""
This method returns a list of pre-trained models which can be instantiated directly from NVIDIA's NGC cloud.
Returns:
List of available pre-trained models.
"""
list_of_models = []
# en-US, single speaker, 22050Hz, LJSpeech (ARPABET).
model = PretrainedModelInfo(
pretrained_model_name="tts_en_fastpitch",
location="https://api.ngc.nvidia.com/v2/models/nvidia/nemo/tts_en_fastpitch/versions/1.8.1/files/tts_en_fastpitch_align.nemo",
description="This model is trained on LJSpeech sampled at 22050Hz and can be used to generate female English voices with an American accent. It is ARPABET-based.",
class_=cls,
)
list_of_models.append(model)
# en-US, single speaker, 22050Hz, LJSpeech (IPA).
model = PretrainedModelInfo(
pretrained_model_name="tts_en_fastpitch_ipa",
location="https://api.ngc.nvidia.com/v2/models/nvidia/nemo/tts_en_fastpitch/versions/IPA_1.13.0/files/tts_en_fastpitch_align_ipa.nemo",
description="This model is trained on LJSpeech sampled at 22050Hz and can be used to generate female English voices with an American accent. It is IPA-based.",
class_=cls,
)
list_of_models.append(model)
# en-US, multi-speaker, 44100Hz, HiFiTTS.
model = PretrainedModelInfo(
pretrained_model_name="tts_en_fastpitch_multispeaker",
location="https://api.ngc.nvidia.com/v2/models/nvidia/nemo/tts_en_multispeaker_fastpitchhifigan/versions/1.10.0/files/tts_en_fastpitch_multispeaker.nemo",
description="This model is trained on HiFiTTS sampled at 44100Hz and can be used to generate male and female English voices with an American accent.",
class_=cls,
)
list_of_models.append(model)
# de-DE, single male speaker, grapheme-based tokenizer, 22050 Hz, Thorsten Müller’s German Neutral-TTS Dataset, 21.02
model = PretrainedModelInfo(
pretrained_model_name="tts_de_fastpitch_singleSpeaker_thorstenNeutral_2102",
location="https://api.ngc.nvidia.com/v2/models/nvidia/nemo/tts_de_fastpitchhifigan/versions/1.15.0/files/tts_de_fastpitch_thorstens2102.nemo",
description="This model is trained on a single male speaker's data in Thorsten Müller’s German Neutral 21.02 Dataset sampled at 22050Hz and can be used to generate male German voices.",
class_=cls,
)
list_of_models.append(model)
# de-DE, single male speaker, grapheme-based tokenizer, 22050 Hz, Thorsten Müller’s German Neutral-TTS Dataset, 22.10
model = PretrainedModelInfo(
pretrained_model_name="tts_de_fastpitch_singleSpeaker_thorstenNeutral_2210",
location="https://api.ngc.nvidia.com/v2/models/nvidia/nemo/tts_de_fastpitchhifigan/versions/1.15.0/files/tts_de_fastpitch_thorstens2210.nemo",
description="This model is trained on a single male speaker's data in Thorsten Müller’s German Neutral 22.10 Dataset sampled at 22050Hz and can be used to generate male German voices.",
class_=cls,
)
list_of_models.append(model)
# de-DE, multi-speaker, 5 speakers, 44100 Hz, HUI-Audio-Corpus-German Clean.
model = PretrainedModelInfo(
pretrained_model_name="tts_de_fastpitch_multispeaker_5",
location="https://api.ngc.nvidia.com/v2/models/nvidia/nemo/tts_de_fastpitch_multispeaker_5/versions/1.11.0/files/tts_de_fastpitch_multispeaker_5.nemo",
description="This model is trained on 5 speakers in HUI-Audio-Corpus-German clean subset sampled at 44100Hz and can be used to generate male and female German voices.",
class_=cls,
)
list_of_models.append(model)
# es, 174 speakers, 44100Hz, OpenSLR (IPA)
model = PretrainedModelInfo(
pretrained_model_name="tts_es_fastpitch_multispeaker",
location="https://api.ngc.nvidia.com/v2/models/nvidia/nemo/tts_es_multispeaker_fastpitchhifigan/versions/1.15.0/files/tts_es_fastpitch_multispeaker.nemo",
description="This model is trained on 174 speakers in 6 crowdsourced Latin American Spanish OpenSLR datasets sampled at 44100Hz and can be used to generate male and female Spanish voices with Latin American accents.",
class_=cls,
)
list_of_models.append(model)
# zh, single female speaker, 22050Hz, SFSpeech Bilingual Chinese/English dataset, improved model using richer
# dict and jieba word segmenter for polyphone disambiguation.
model = PretrainedModelInfo(
pretrained_model_name="tts_zh_fastpitch_sfspeech",
location="https://api.ngc.nvidia.com/v2/models/nvidia/nemo/tts_zh_fastpitch_hifigan_sfspeech/versions/1.15.0/files/tts_zh_fastpitch_sfspeech.nemo",
description="This model is trained on a single female speaker in SFSpeech Bilingual Chinese/English dataset"
" sampled at 22050Hz and can be used to generate female Mandarin Chinese voices. It is improved"
" using richer dict and jieba word segmenter for polyphone disambiguation.",
class_=cls,
)
list_of_models.append(model)
# en, multi speaker, LibriTTS, 16000 Hz
# stft 25ms 10ms matching ASR params
# for use during English ASR training/adaptation
model = PretrainedModelInfo(
pretrained_model_name="tts_en_fastpitch_for_asr_finetuning",
location="https://api.ngc.nvidia.com/v2/models/nvidia/nemo/tts_en_fastpitch_spectrogram_enhancer_for_asr_finetuning/versions/1.20.0/files/tts_en_fastpitch_for_asr_finetuning.nemo",
description="This model is trained on LibriSpeech, train-960 subset."
" STFT parameters follow those commonly used in ASR: 25 ms window, 10 ms hop."
" This model is supposed to be used with its companion SpectrogramEnhancer for"
" ASR fine-tuning. Usage for regular TTS tasks is not advised.",
class_=cls,
)
list_of_models.append(model)
return list_of_models
# Methods for model exportability
def _prepare_for_export(self, **kwargs):
super()._prepare_for_export(**kwargs)
tensor_shape = ('T') if self.export_config["enable_ragged_batches"] else ('B', 'T')
# Define input_types and output_types as required by export()
self._input_types = {
"text": NeuralType(tensor_shape, TokenIndex()),
"pitch": NeuralType(tensor_shape, RegressionValuesType()),
"pace": NeuralType(tensor_shape),
"volume": NeuralType(tensor_shape, optional=True),
"batch_lengths": NeuralType(('B'), optional=True),
"speaker": NeuralType(('B'), Index(), optional=True),
}
self._output_types = {
"spect": NeuralType(('B', 'D', 'T'), MelSpectrogramType()),
"num_frames": NeuralType(('B'), TokenDurationType()),
"durs_predicted": NeuralType(('B', 'T'), TokenDurationType()),
"log_durs_predicted": NeuralType(('B', 'T'), TokenLogDurationType()),
"pitch_predicted": NeuralType(('B', 'T'), RegressionValuesType()),
}
if self.export_config["enable_volume"]:
self._output_types["volume_aligned"] = NeuralType(('B', 'T'), RegressionValuesType())
def _export_teardown(self):
self._input_types = self._output_types = None
@property
def disabled_deployment_input_names(self):
"""Implement this method to return a set of input names disabled for export"""
disabled_inputs = set()
if self.fastpitch.speaker_emb is None:
disabled_inputs.add("speaker")
if not self.export_config["enable_ragged_batches"]:
disabled_inputs.add("batch_lengths")
if not self.export_config["enable_volume"]:
disabled_inputs.add("volume")
return disabled_inputs
@property
def input_types(self):
return self._input_types
@property
def output_types(self):
return self._output_types
def input_example(self, max_batch=1, max_dim=44):
"""
Generates input examples for tracing etc.
Returns:
A tuple of input examples.
"""
par = next(self.fastpitch.parameters())
inputs = sample_tts_input(self.export_config, par.device, max_batch=max_batch, max_dim=max_dim)
if 'enable_ragged_batches' not in self.export_config:
inputs.pop('batch_lengths', None)
return (inputs,)
def forward_for_export(self, text, pitch, pace, volume=None, batch_lengths=None, speaker=None):
if self.export_config["enable_ragged_batches"]:
text, pitch, pace, volume_tensor, lens = batch_from_ragged(
text, pitch, pace, batch_lengths, padding_idx=self.fastpitch.encoder.padding_idx, volume=volume
)
if volume is not None:
volume = volume_tensor
return self.fastpitch.infer(text=text, pitch=pitch, pace=pace, volume=volume, speaker=speaker)
def interpolate_speaker(
self, original_speaker_1, original_speaker_2, weight_speaker_1, weight_speaker_2, new_speaker_id
):
"""
This method performs speaker interpolation between two original speakers the model is trained on.
Inputs:
original_speaker_1: Integer speaker ID of first existing speaker in the model
original_speaker_2: Integer speaker ID of second existing speaker in the model
weight_speaker_1: Floating point weight associated in to first speaker during weight combination
weight_speaker_2: Floating point weight associated in to second speaker during weight combination
new_speaker_id: Integer speaker ID of new interpolated speaker in the model
"""
if self.fastpitch.speaker_emb is None:
raise Exception(
"Current FastPitch model is not a multi-speaker FastPitch model. "
"Speaker interpolation can only be performed with a multi-speaker model."
)
n_speakers = self.fastpitch.speaker_emb.weight.data.size()[0]
if original_speaker_1 >= n_speakers or original_speaker_2 >= n_speakers or new_speaker_id >= n_speakers:
raise Exception(
"Parameters original_speaker_1, original_speaker_2, new_speaker_id should be less than the "
f"total number of speakers FastPitch was trained on (n_speakers = {n_speakers})."
)
speaker_emb_1 = (
self.fastpitch.speaker_emb(torch.tensor(original_speaker_1, dtype=torch.int32).cuda()).clone().detach()
)
speaker_emb_2 = (
self.fastpitch.speaker_emb(torch.tensor(original_speaker_2, dtype=torch.int32).cuda()).clone().detach()
)
new_speaker_emb = weight_speaker_1 * speaker_emb_1 + weight_speaker_2 * speaker_emb_2
self.fastpitch.speaker_emb.weight.data[new_speaker_id] = new_speaker_emb
[end of nemo/collections/tts/models/fastpitch.py]
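The blend computed in `interpolate_speaker` above is a per-dimension linear combination of two speaker embedding vectors. A minimal pure-Python sketch of that arithmetic, with a hypothetical `interpolate_embedding` helper and plain lists standing in for torch tensors:

```python
def interpolate_embedding(emb_1, emb_2, w_1, w_2):
    """Blend two speaker embeddings element-wise: w_1 * e1 + w_2 * e2."""
    if len(emb_1) != len(emb_2):
        raise ValueError("embeddings must have the same dimensionality")
    return [w_1 * a + w_2 * b for a, b in zip(emb_1, emb_2)]


# With weights summing to 1.0, the result lies on the segment between
# the two original embeddings, giving a voice "between" the speakers.
blended = interpolate_embedding([1.0, 0.0], [0.0, 1.0], 0.25, 0.75)
```

In the model itself the blended vector is written back into `speaker_emb.weight.data[new_speaker_id]`, so synthesis with `speaker=new_speaker_id` uses the interpolated voice.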
[start of nemo/collections/tts/models/tacotron2.py]
# Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import contextlib
from dataclasses import dataclass
from typing import Any, Dict, List, Optional
import torch
from hydra.utils import instantiate
from omegaconf import MISSING, DictConfig, OmegaConf, open_dict
from omegaconf.errors import ConfigAttributeError
from pytorch_lightning.loggers import TensorBoardLogger, WandbLogger
from torch import nn
from nemo.collections.common.parts.preprocessing import parsers
from nemo.collections.tts.losses.tacotron2loss import Tacotron2Loss
from nemo.collections.tts.models.base import SpectrogramGenerator
from nemo.collections.tts.parts.utils.helpers import (
g2p_backward_compatible_support,
get_mask_from_lengths,
tacotron2_log_to_tb_func,
tacotron2_log_to_wandb_func,
)
from nemo.core.classes.common import PretrainedModelInfo, typecheck
from nemo.core.neural_types.elements import (
AudioSignal,
EmbeddedTextType,
LengthsType,
LogitsType,
MelSpectrogramType,
SequenceToSequenceAlignmentType,
)
from nemo.core.neural_types.neural_type import NeuralType
from nemo.utils import logging, model_utils
@dataclass
class Preprocessor:
_target_: str = MISSING
pad_value: float = MISSING
@dataclass
class Tacotron2Config:
preprocessor: Preprocessor = Preprocessor()
encoder: Dict[Any, Any] = MISSING
decoder: Dict[Any, Any] = MISSING
postnet: Dict[Any, Any] = MISSING
labels: List = MISSING
train_ds: Optional[Dict[Any, Any]] = None
validation_ds: Optional[Dict[Any, Any]] = None
class Tacotron2Model(SpectrogramGenerator):
"""Tacotron 2 Model that is used to generate mel spectrograms from text"""
def __init__(self, cfg: DictConfig, trainer: 'Trainer' = None):
# Convert to Hydra 1.0 compatible DictConfig
cfg = model_utils.convert_model_config_to_dict_config(cfg)
cfg = model_utils.maybe_update_config_version(cfg)
# setup normalizer
self.normalizer = None
self.text_normalizer_call = None
self.text_normalizer_call_kwargs = {}
self._setup_normalizer(cfg)
# setup tokenizer
self.tokenizer = None
if hasattr(cfg, 'text_tokenizer'):
self._setup_tokenizer(cfg)
self.num_tokens = len(self.tokenizer.tokens)
self.tokenizer_pad = self.tokenizer.pad
self.tokenizer_unk = self.tokenizer.oov
# assert self.tokenizer is not None
else:
self.num_tokens = len(cfg.labels) + 3
super().__init__(cfg=cfg, trainer=trainer)
schema = OmegaConf.structured(Tacotron2Config)
# ModelPT ensures that cfg is a DictConfig, but do this second check in case ModelPT changes
if isinstance(cfg, dict):
cfg = OmegaConf.create(cfg)
elif not isinstance(cfg, DictConfig):
raise ValueError(f"cfg was type: {type(cfg)}. Expected either a dict or a DictConfig")
# Ensure passed cfg is compliant with schema
try:
OmegaConf.merge(cfg, schema)
self.pad_value = cfg.preprocessor.pad_value
except ConfigAttributeError:
self.pad_value = cfg.preprocessor.params.pad_value
logging.warning(
"Your config is using an old NeMo yaml configuration. Please ensure that the yaml matches the "
"current version in the main branch for future compatibility."
)
self._parser = None
self.audio_to_melspec_precessor = instantiate(cfg.preprocessor)
self.text_embedding = nn.Embedding(self.num_tokens, 512)
self.encoder = instantiate(self._cfg.encoder)
self.decoder = instantiate(self._cfg.decoder)
self.postnet = instantiate(self._cfg.postnet)
self.loss = Tacotron2Loss()
self.calculate_loss = True
@property
def parser(self):
if self._parser is not None:
return self._parser
ds_class_name = self._cfg.train_ds.dataset._target_.split(".")[-1]
if ds_class_name == "TTSDataset":
self._parser = None
elif hasattr(self._cfg, "labels"):
self._parser = parsers.make_parser(
labels=self._cfg.labels,
name='en',
unk_id=-1,
blank_id=-1,
do_normalize=True,
abbreviation_version="fastpitch",
make_table=False,
)
else:
raise ValueError("Wanted to setup parser, but model does not have necessary parameters")
return self._parser
def parse(self, text: str, normalize=True) -> torch.Tensor:
if self.training:
logging.warning("parse() is meant to be called in eval mode.")
if normalize and self.text_normalizer_call is not None:
text = self.text_normalizer_call(text, **self.text_normalizer_call_kwargs)
eval_phon_mode = contextlib.nullcontext()
if hasattr(self.tokenizer, "set_phone_prob"):
eval_phon_mode = self.tokenizer.set_phone_prob(prob=1.0)
with eval_phon_mode:
if self.tokenizer is not None:
tokens = self.tokenizer.encode(text)
else:
tokens = self.parser(text)
# Old parser doesn't add bos and eos ids, so manually add them
tokens = [len(self._cfg.labels)] + tokens + [len(self._cfg.labels) + 1]
tokens_tensor = torch.tensor(tokens).unsqueeze_(0).to(self.device)
return tokens_tensor
@property
def input_types(self):
if self.training:
return {
"tokens": NeuralType(('B', 'T'), EmbeddedTextType()),
"token_len": NeuralType(('B'), LengthsType()),
"audio": NeuralType(('B', 'T'), AudioSignal()),
"audio_len": NeuralType(('B'), LengthsType()),
}
else:
return {
"tokens": NeuralType(('B', 'T'), EmbeddedTextType()),
"token_len": NeuralType(('B'), LengthsType()),
"audio": NeuralType(('B', 'T'), AudioSignal(), optional=True),
"audio_len": NeuralType(('B'), LengthsType(), optional=True),
}
@property
def output_types(self):
if not self.calculate_loss and not self.training:
return {
"spec_pred_dec": NeuralType(('B', 'D', 'T'), MelSpectrogramType()),
"spec_pred_postnet": NeuralType(('B', 'D', 'T'), MelSpectrogramType()),
"gate_pred": NeuralType(('B', 'T'), LogitsType()),
"alignments": NeuralType(('B', 'T', 'T'), SequenceToSequenceAlignmentType()),
"pred_length": NeuralType(('B'), LengthsType()),
}
return {
"spec_pred_dec": NeuralType(('B', 'D', 'T'), MelSpectrogramType()),
"spec_pred_postnet": NeuralType(('B', 'D', 'T'), MelSpectrogramType()),
"gate_pred": NeuralType(('B', 'T'), LogitsType()),
"spec_target": NeuralType(('B', 'D', 'T'), MelSpectrogramType()),
"spec_target_len": NeuralType(('B'), LengthsType()),
"alignments": NeuralType(('B', 'T', 'T'), SequenceToSequenceAlignmentType()),
}
@typecheck()
def forward(self, *, tokens, token_len, audio=None, audio_len=None):
if audio is not None and audio_len is not None:
spec_target, spec_target_len = self.audio_to_melspec_precessor(audio, audio_len)
else:
if self.training or self.calculate_loss:
raise ValueError(
"'audio' and 'audio_len' cannot be None when either 'self.training' or 'self.calculate_loss' is True."
)
token_embedding = self.text_embedding(tokens).transpose(1, 2)
encoder_embedding = self.encoder(token_embedding=token_embedding, token_len=token_len)
if self.training:
spec_pred_dec, gate_pred, alignments = self.decoder(
memory=encoder_embedding, decoder_inputs=spec_target, memory_lengths=token_len
)
else:
spec_pred_dec, gate_pred, alignments, pred_length = self.decoder(
memory=encoder_embedding, memory_lengths=token_len
)
spec_pred_postnet = self.postnet(mel_spec=spec_pred_dec)
if not self.calculate_loss and not self.training:
return spec_pred_dec, spec_pred_postnet, gate_pred, alignments, pred_length
return spec_pred_dec, spec_pred_postnet, gate_pred, spec_target, spec_target_len, alignments
@typecheck(
input_types={"tokens": NeuralType(('B', 'T'), EmbeddedTextType())},
output_types={"spec": NeuralType(('B', 'D', 'T'), MelSpectrogramType())},
)
def generate_spectrogram(self, *, tokens):
self.eval()
self.calculate_loss = False
token_len = torch.tensor([len(i) for i in tokens]).to(self.device)
tensors = self(tokens=tokens, token_len=token_len)
spectrogram_pred = tensors[1]
if spectrogram_pred.shape[0] > 1:
# Silence all frames past the predicted end
mask = ~get_mask_from_lengths(tensors[-1])
mask = mask.expand(spectrogram_pred.shape[1], mask.size(0), mask.size(1))
mask = mask.permute(1, 0, 2)
spectrogram_pred.data.masked_fill_(mask, self.pad_value)
return spectrogram_pred
def training_step(self, batch, batch_idx):
audio, audio_len, tokens, token_len = batch
spec_pred_dec, spec_pred_postnet, gate_pred, spec_target, spec_target_len, _ = self.forward(
audio=audio, audio_len=audio_len, tokens=tokens, token_len=token_len
)
loss, _ = self.loss(
spec_pred_dec=spec_pred_dec,
spec_pred_postnet=spec_pred_postnet,
gate_pred=gate_pred,
spec_target=spec_target,
spec_target_len=spec_target_len,
pad_value=self.pad_value,
)
output = {
'loss': loss,
'progress_bar': {'training_loss': loss},
'log': {'loss': loss},
}
return output
def validation_step(self, batch, batch_idx):
audio, audio_len, tokens, token_len = batch
spec_pred_dec, spec_pred_postnet, gate_pred, spec_target, spec_target_len, alignments = self.forward(
audio=audio, audio_len=audio_len, tokens=tokens, token_len=token_len
)
loss, gate_target = self.loss(
spec_pred_dec=spec_pred_dec,
spec_pred_postnet=spec_pred_postnet,
gate_pred=gate_pred,
spec_target=spec_target,
spec_target_len=spec_target_len,
pad_value=self.pad_value,
)
loss = {
"val_loss": loss,
"mel_target": spec_target,
"mel_postnet": spec_pred_postnet,
"gate": gate_pred,
"gate_target": gate_target,
"alignments": alignments,
}
self.validation_step_outputs.append(loss)
return loss
def on_validation_epoch_end(self):
if self.logger is not None and self.logger.experiment is not None:
logger = self.logger.experiment
for logger in self.trainer.loggers:
if isinstance(logger, TensorBoardLogger):
logger = logger.experiment
break
if isinstance(logger, TensorBoardLogger):
tacotron2_log_to_tb_func(
logger,
self.validation_step_outputs[0].values(),
self.global_step,
tag="val",
log_images=True,
add_audio=False,
)
elif isinstance(logger, WandbLogger):
tacotron2_log_to_wandb_func(
logger,
self.validation_step_outputs[0].values(),
self.global_step,
tag="val",
log_images=True,
add_audio=False,
)
avg_loss = torch.stack(
[x['val_loss'] for x in self.validation_step_outputs]
).mean() # This reduces across batches, not workers!
self.log('val_loss', avg_loss)
self.validation_step_outputs.clear() # free memory
def _setup_normalizer(self, cfg):
if "text_normalizer" in cfg:
normalizer_kwargs = {}
if "whitelist" in cfg.text_normalizer:
normalizer_kwargs["whitelist"] = self.register_artifact(
'text_normalizer.whitelist', cfg.text_normalizer.whitelist
)
try:
import nemo_text_processing
self.normalizer = instantiate(cfg.text_normalizer, **normalizer_kwargs)
except Exception as e:
logging.error(e)
raise ImportError(
"`nemo_text_processing` not installed, see https://github.com/NVIDIA/NeMo-text-processing for more details"
)
self.text_normalizer_call = self.normalizer.normalize
if "text_normalizer_call_kwargs" in cfg:
self.text_normalizer_call_kwargs = cfg.text_normalizer_call_kwargs
def _setup_tokenizer(self, cfg):
text_tokenizer_kwargs = {}
if "g2p" in cfg.text_tokenizer and cfg.text_tokenizer.g2p is not None:
# for backward compatibility
if (
self._is_model_being_restored()
and (cfg.text_tokenizer.g2p.get('_target_', None) is not None)
and cfg.text_tokenizer.g2p["_target_"].startswith("nemo_text_processing.g2p")
):
cfg.text_tokenizer.g2p["_target_"] = g2p_backward_compatible_support(
cfg.text_tokenizer.g2p["_target_"]
)
g2p_kwargs = {}
if "phoneme_dict" in cfg.text_tokenizer.g2p:
g2p_kwargs["phoneme_dict"] = self.register_artifact(
'text_tokenizer.g2p.phoneme_dict', cfg.text_tokenizer.g2p.phoneme_dict,
)
if "heteronyms" in cfg.text_tokenizer.g2p:
g2p_kwargs["heteronyms"] = self.register_artifact(
'text_tokenizer.g2p.heteronyms', cfg.text_tokenizer.g2p.heteronyms,
)
text_tokenizer_kwargs["g2p"] = instantiate(cfg.text_tokenizer.g2p, **g2p_kwargs)
self.tokenizer = instantiate(cfg.text_tokenizer, **text_tokenizer_kwargs)
def __setup_dataloader_from_config(self, cfg, shuffle_should_be: bool = True, name: str = "train"):
if "dataset" not in cfg or not isinstance(cfg.dataset, DictConfig):
raise ValueError(f"No dataset for {name}")
if "dataloader_params" not in cfg or not isinstance(cfg.dataloader_params, DictConfig):
raise ValueError(f"No dataloader_params for {name}")
if shuffle_should_be:
if 'shuffle' not in cfg.dataloader_params:
logging.warning(
f"Shuffle should be set to True for {self}'s {name} dataloader but was not found in its "
"config. Manually setting to True"
)
with open_dict(cfg.dataloader_params):
cfg.dataloader_params.shuffle = True
elif not cfg.dataloader_params.shuffle:
logging.error(f"The {name} dataloader for {self} has shuffle set to False!!!")
elif not shuffle_should_be and cfg.dataloader_params.shuffle:
logging.error(f"The {name} dataloader for {self} has shuffle set to True!!!")
dataset = instantiate(
cfg.dataset,
text_normalizer=self.normalizer,
text_normalizer_call_kwargs=self.text_normalizer_call_kwargs,
text_tokenizer=self.tokenizer,
)
return torch.utils.data.DataLoader(dataset, collate_fn=dataset.collate_fn, **cfg.dataloader_params)
def setup_training_data(self, cfg):
self._train_dl = self.__setup_dataloader_from_config(cfg)
def setup_validation_data(self, cfg):
self._validation_dl = self.__setup_dataloader_from_config(cfg, shuffle_should_be=False, name="validation")
@classmethod
def list_available_models(cls) -> 'List[PretrainedModelInfo]':
"""
This method returns a list of pre-trained models which can be instantiated directly from NVIDIA's NGC cloud.
Returns:
List of available pre-trained models.
"""
list_of_models = []
model = PretrainedModelInfo(
pretrained_model_name="tts_en_tacotron2",
location="https://api.ngc.nvidia.com/v2/models/nvidia/nemo/tts_en_tacotron2/versions/1.10.0/files/tts_en_tacotron2.nemo",
description="This model is trained on LJSpeech sampled at 22050Hz, and can be used to generate female English voices with an American accent.",
class_=cls,
aliases=["Tacotron2-22050Hz"],
)
list_of_models.append(model)
return list_of_models
[end of nemo/collections/tts/models/tacotron2.py]
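Both FastPitch and Tacotron2 apply the same shuffle guard inside `__setup_dataloader_from_config`: training loaders default `shuffle` to True when it is unset, and mismatched settings are logged as errors rather than silently accepted. A standalone sketch of that pattern, using a hypothetical `enforce_shuffle` helper and a plain dict in place of a DictConfig:

```python
import logging


def enforce_shuffle(dataloader_params: dict, shuffle_should_be: bool, name: str = "train") -> dict:
    """Default shuffle on for training loaders and flag suspicious settings."""
    if shuffle_should_be:
        if "shuffle" not in dataloader_params:
            # Training data should be shuffled; set it rather than fail.
            logging.warning("Shuffle not set for %s dataloader; defaulting to True", name)
            dataloader_params["shuffle"] = True
        elif not dataloader_params["shuffle"]:
            logging.error("The %s dataloader has shuffle set to False!!!", name)
    elif dataloader_params.get("shuffle", False):
        # Validation/test loaders normally should not shuffle.
        logging.error("The %s dataloader has shuffle set to True!!!", name)
    return dataloader_params
```

Note the guard only warns or errors; it never overrides an explicit user setting except for the missing-key training case.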
[start of nemo/core/config/modelPT.py]
# Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from dataclasses import dataclass, field
from typing import Any, Dict, Optional
from omegaconf import MISSING
from nemo.core import config
from nemo.core.classes.dataset import DatasetConfig
from nemo.utils import exp_manager
@dataclass
class SchedConfig:
name: str = MISSING
min_lr: float = 0.0
last_epoch: int = -1
@dataclass
class OptimConfig:
name: str = MISSING
sched: Optional[SchedConfig] = None
@dataclass
class ModelConfig:
"""
Model component inside ModelPT
"""
# ...
train_ds: Optional[DatasetConfig] = None
validation_ds: Optional[DatasetConfig] = None
test_ds: Optional[DatasetConfig] = None
optim: Optional[OptimConfig] = None
@dataclass
class HydraConfig:
run: Dict[str, Any] = field(default_factory=lambda: {"dir": "."})
job_logging: Dict[str, Any] = field(default_factory=lambda: {"root": {"handlers": None}})
@dataclass
class NemoConfig:
name: str = MISSING
model: ModelConfig = MISSING
trainer: config.TrainerConfig = config.TrainerConfig(
strategy="ddp", enable_checkpointing=False, logger=False, log_every_n_steps=1, accelerator='gpu'
)
exp_manager: Optional[Any] = exp_manager.ExpManagerConfig()
hydra: HydraConfig = HydraConfig()
class ModelConfigBuilder:
def __init__(self, model_cfg: ModelConfig):
"""
Base class for any Model Config Builder.
A Model Config Builder is a utility class that accepts a ModelConfig dataclass,
and via a set of utility methods (that are implemented by the subclassed ModelConfigBuilder),
builds a finalized ModelConfig that can be supplied to a NemoModel dataclass as
the `model` component.
Subclasses *must* implement the private method `_finalize_cfg`.
Inside this method, they must update `self.model_cfg` with all interdependent config
options that need to be set (either updated by user explicitly or with their default value).
The updated model config must then be preserved in `self.model_cfg`.
Example:
# Create the config builder
config_builder = <subclass>ModelConfigBuilder()
# Update the components of the config that are modifiable
config_builder.set_X(X)
config_builder.set_Y(Y)
# Create a "finalized" config dataclass that will contain all the updates
# that were specified by the builder
model_config = config_builder.build()
# Use model config as is (or further update values), then create a new Model
model = nemo.<domain>.models.<ModelName>Model(cfg=model_config, trainer=Trainer())
Supported build methods:
- set_train_ds: All model configs can accept a subclass of `DatasetConfig` as their
training config. Subclasses can override this method to enable auto-complete
by replacing `Optional[DatasetConfig]` with `Optional[<subclass of DatasetConfig>]`.
- set_validation_ds: All model configs can accept a subclass of `DatasetConfig` as their
validation config. Subclasses can override this method to enable auto-complete
by replacing `Optional[DatasetConfig]` with `Optional[<subclass of DatasetConfig>]`.
- set_test_ds: All model configs can accept a subclass of `DatasetConfig` as their
test config. Subclasses can override this method to enable auto-complete
by replacing `Optional[DatasetConfig]` with `Optional[<subclass of DatasetConfig>]`.
- set_optim: A build method that supports changes to the Optimizer (and optionally,
the Scheduler) used for training the model. The function accepts two inputs -
`cfg`: A subclass of `OptimizerParams` - any OptimizerParams subclass can be used,
in order to select an appropriate Optimizer. Examples: AdamParams.
`sched_cfg`: A subclass of `SchedulerParams` - any SchedulerParams subclass can be used,
in order to select an appropriate Scheduler. Examples: CosineAnnealingParams.
Note that this argument is optional.
- build(): The method which should return a "finalized" ModelConfig dataclass.
Subclasses *should* always override this method, and update the signature
of this method with the return type of the Dataclass, so that it enables
autocomplete for the user.
Example:
def build(self) -> EncDecCTCConfig:
return super().build()
Any additional build methods must be added by subclasses of ModelConfigBuilder.
Args:
model_cfg: The base ModelConfig dataclass that the builder updates and finalizes via build().
"""
self.model_cfg = model_cfg
self.train_ds_cfg = None
self.validation_ds_cfg = None
self.test_ds_cfg = None
self.optim_cfg = None
def set_train_ds(self, cfg: Optional[DatasetConfig] = None):
self.model_cfg.train_ds = cfg
def set_validation_ds(self, cfg: Optional[DatasetConfig] = None):
self.model_cfg.validation_ds = cfg
def set_test_ds(self, cfg: Optional[DatasetConfig] = None):
self.model_cfg.test_ds = cfg
def set_optim(self, cfg: config.OptimizerParams, sched_cfg: Optional[config.SchedulerParams] = None):
@dataclass
class WrappedOptimConfig(OptimConfig, cfg.__class__):
pass
# Setup optim
optim_name = cfg.__class__.__name__.replace("Params", "").lower()
wrapped_cfg = WrappedOptimConfig(name=optim_name, sched=None, **vars(cfg))
if sched_cfg is not None:
@dataclass
class WrappedSchedConfig(SchedConfig, sched_cfg.__class__):
pass
# Setup scheduler
sched_name = sched_cfg.__class__.__name__.replace("Params", "")
wrapped_sched_cfg = WrappedSchedConfig(name=sched_name, **vars(sched_cfg))
wrapped_cfg.sched = wrapped_sched_cfg
self.model_cfg.optim = wrapped_cfg
def _finalize_cfg(self):
raise NotImplementedError()
def build(self) -> ModelConfig:
# validate config
self._finalize_cfg()
return self.model_cfg
[end of nemo/core/config/modelPT.py]
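The builder pattern documented in the `ModelConfigBuilder` docstring above can be sketched outside NeMo with plain stdlib dataclasses. All class and field names below are illustrative stand-ins, not NeMo APIs:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class DatasetConfig:
    # toy dataset config; NeMo's DatasetConfig has many more fields
    batch_size: int = 32


@dataclass
class ModelConfig:
    train_ds: Optional[DatasetConfig] = None
    validation_ds: Optional[DatasetConfig] = None


class ModelConfigBuilder:
    def __init__(self, model_cfg: ModelConfig):
        self.model_cfg = model_cfg

    def set_train_ds(self, cfg: Optional[DatasetConfig] = None):
        self.model_cfg.train_ds = cfg

    def set_validation_ds(self, cfg: Optional[DatasetConfig] = None):
        self.model_cfg.validation_ds = cfg

    def build(self) -> ModelConfig:
        # a real subclass would validate/finalize the config here
        return self.model_cfg


builder = ModelConfigBuilder(ModelConfig())
builder.set_train_ds(DatasetConfig(batch_size=64))
model_config = builder.build()
print(model_config.train_ds.batch_size)  # 64
```

The key point, as in the docstring, is that a subclass overrides `build()` with a narrowed return type so IDE autocomplete works on the finalized config.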
[start of nemo/utils/exp_manager.py]
# Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import glob
import os
import subprocess
import sys
import time
import warnings
from dataclasses import dataclass
from datetime import timedelta
from pathlib import Path
from shutil import copy, move
from typing import Any, Dict, List, Optional, Tuple, Union
import pytorch_lightning
import torch
from hydra.core.hydra_config import HydraConfig
from hydra.utils import get_original_cwd
from omegaconf import DictConfig, OmegaConf, open_dict
from pytorch_lightning.callbacks import Callback, ModelCheckpoint
from pytorch_lightning.callbacks.early_stopping import EarlyStopping
from pytorch_lightning.callbacks.timer import Interval, Timer
from pytorch_lightning.loggers import MLFlowLogger, TensorBoardLogger, WandbLogger
from pytorch_lightning.loops import _TrainingEpochLoop
from pytorch_lightning.strategies.ddp import DDPStrategy
from nemo.collections.common.callbacks import EMA
from nemo.constants import NEMO_ENV_VARNAME_TESTING, NEMO_ENV_VARNAME_VERSION
from nemo.utils import logging, timers
from nemo.utils.app_state import AppState
from nemo.utils.callbacks import NeMoModelCheckpoint, PreemptionCallback
from nemo.utils.env_var_parsing import get_envbool
from nemo.utils.exceptions import NeMoBaseException
from nemo.utils.get_rank import is_global_rank_zero
from nemo.utils.lightning_logger_patch import add_filehandlers_to_pl_logger
from nemo.utils.loggers import ClearMLLogger, ClearMLParams, DLLogger, DLLoggerParams, MLFlowParams
from nemo.utils.model_utils import uninject_model_parallel_rank
class NotFoundError(NeMoBaseException):
""" Raised when a file or folder is not found"""
class LoggerMisconfigurationError(NeMoBaseException):
""" Raised when a mismatch between trainer.logger and exp_manager occurs"""
def __init__(self, message):
message = (
message
+ " You can disable lightning's trainer from creating a logger by passing logger=False to its constructor."
)
super().__init__(message)
class CheckpointMisconfigurationError(NeMoBaseException):
""" Raised when a mismatch between trainer.callbacks and exp_manager occurs"""
@dataclass
class EarlyStoppingParams:
monitor: str = "val_loss" # The metric that early stopping should consider.
mode: str = "min" # inform early stopping whether to look for increase or decrease in monitor.
min_delta: float = 0.001 # smallest change to consider as improvement.
patience: int = 10 # how many (continuous) validation cycles to wait with no improvement before stopping training.
verbose: bool = True
strict: bool = True
check_finite: bool = True
stopping_threshold: Optional[float] = None
divergence_threshold: Optional[float] = None
check_on_train_epoch_end: Optional[bool] = None
log_rank_zero_only: bool = False
@dataclass
class CallbackParams:
filepath: Optional[str] = None # Deprecated
dirpath: Optional[str] = None # If None, exp_manager will attempt to handle the filepath
filename: Optional[str] = None # If None, exp_manager will attempt to handle the filepath
monitor: Optional[str] = "val_loss"
verbose: Optional[bool] = True
save_last: Optional[bool] = True
save_top_k: Optional[int] = 3
save_weights_only: Optional[bool] = False
mode: Optional[str] = "min"
auto_insert_metric_name: bool = True
every_n_epochs: Optional[int] = 1
every_n_train_steps: Optional[int] = None
train_time_interval: Optional[str] = None
prefix: Optional[str] = None # If None, exp_manager will attempt to handle the filepath
postfix: str = ".nemo"
save_best_model: bool = False
always_save_nemo: bool = False
save_nemo_on_train_end: Optional[bool] = True # Whether to automatically save the .nemo file during the on_train_end hook
model_parallel_size: Optional[int] = None # tensor parallel size * pipeline parallel size
save_on_train_epoch_end: Optional[bool] = False # Save after training, not after validation
@dataclass
class StepTimingParams:
reduction: Optional[str] = "mean"
# if True torch.cuda.synchronize() is called on start/stop
sync_cuda: Optional[bool] = False
# if positive, defines the size of a sliding window for computing mean
buffer_size: Optional[int] = 1
@dataclass
class EMAParams:
enable: Optional[bool] = False
decay: Optional[float] = 0.999
cpu_offload: Optional[bool] = False
validate_original_weights: Optional[bool] = False
every_n_steps: int = 1
@dataclass
class ExpManagerConfig:
"""Experiment Manager config for validation of passed arguments.
"""
# Log dir creation parameters
explicit_log_dir: Optional[str] = None
exp_dir: Optional[str] = None
name: Optional[str] = None
version: Optional[str] = None
use_datetime_version: Optional[bool] = True
resume_if_exists: Optional[bool] = False
resume_past_end: Optional[bool] = False
resume_ignore_no_checkpoint: Optional[bool] = False
resume_from_checkpoint: Optional[str] = None
# Logging parameters
create_tensorboard_logger: Optional[bool] = True
summary_writer_kwargs: Optional[Dict[Any, Any]] = None
create_wandb_logger: Optional[bool] = False
wandb_logger_kwargs: Optional[Dict[Any, Any]] = None
create_mlflow_logger: Optional[bool] = False
mlflow_logger_kwargs: Optional[MLFlowParams] = MLFlowParams()
create_dllogger_logger: Optional[bool] = False
dllogger_logger_kwargs: Optional[DLLoggerParams] = DLLoggerParams()
create_clearml_logger: Optional[bool] = False
clearml_logger_kwargs: Optional[ClearMLParams] = ClearMLParams()
# Checkpointing parameters
create_checkpoint_callback: Optional[bool] = True
checkpoint_callback_params: Optional[CallbackParams] = CallbackParams()
create_early_stopping_callback: Optional[bool] = False
early_stopping_callback_params: Optional[EarlyStoppingParams] = EarlyStoppingParams()
create_preemption_callback: Optional[bool] = True
# Additional exp_manager arguments
files_to_copy: Optional[List[str]] = None
# logs timing of train/val/test steps
log_step_timing: Optional[bool] = True
step_timing_kwargs: Optional[StepTimingParams] = StepTimingParams()
# Configures creation of log files for different ranks
log_local_rank_0_only: Optional[bool] = False
log_global_rank_0_only: Optional[bool] = False
# disable initial validation when resuming from a checkpoint saved during validation
disable_validation_on_resume: Optional[bool] = True
ema: Optional[EMAParams] = EMAParams()
# Wall clock time limit
max_time_per_run: Optional[str] = None
# time to sleep non 0 ranks during initialization
seconds_to_sleep: float = 5
class TimingCallback(Callback):
"""
Logs execution time of train/val/test steps
"""
def __init__(self, timer_kwargs={}):
self.timer = timers.NamedTimer(**timer_kwargs)
def _on_batch_start(self, name):
# reset only if we do not return mean of a sliding window
if self.timer.buffer_size <= 0:
self.timer.reset(name)
self.timer.start(name)
def _on_batch_end(self, name, pl_module):
self.timer.stop(name)
# Set the `batch_size=1` as WAR for `dataloader_iter`, which is not used for any metric
pl_module.log(
name + ' in s',
self.timer[name],
on_step=True,
on_epoch=False,
batch_size=1,
prog_bar=(name == "train_step_timing"),
)
def on_train_batch_start(self, trainer, pl_module, batch, batch_idx):
self._on_batch_start("train_step_timing")
def on_train_batch_end(self, trainer, pl_module, outputs, batch, batch_idx):
self._on_batch_end("train_step_timing", pl_module)
def on_validation_batch_start(self, trainer, pl_module, batch, batch_idx, dataloader_idx=0):
self._on_batch_start("validation_step_timing")
def on_validation_batch_end(self, trainer, pl_module, outputs, batch, batch_idx, dataloader_idx=0):
self._on_batch_end("validation_step_timing", pl_module)
def on_test_batch_start(self, trainer, pl_module, batch, batch_idx, dataloader_idx=0):
self._on_batch_start("test_step_timing")
def on_test_batch_end(self, trainer, pl_module, outputs, batch, batch_idx, dataloader_idx=0):
self._on_batch_end("test_step_timing", pl_module)
def on_before_backward(self, trainer, pl_module, loss):
self._on_batch_start("train_backward_timing")
def on_after_backward(self, trainer, pl_module):
self._on_batch_end("train_backward_timing", pl_module)
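`TimingCallback` above delegates to NeMo's `NamedTimer` (configured via `StepTimingParams`). The start/stop/sliding-window-mean pattern it relies on can be sketched in a self-contained way; this toy `NamedTimer` is illustrative, not NeMo's implementation:

```python
import time
from collections import defaultdict, deque


class NamedTimer:
    """Toy named timer: start/stop per name, mean over a sliding window."""

    def __init__(self, buffer_size: int = 1):
        self.buffer_size = buffer_size
        self._starts = {}
        # each name keeps at most `buffer_size` recent measurements
        self._buffers = defaultdict(lambda: deque(maxlen=max(buffer_size, 1)))

    def start(self, name: str):
        self._starts[name] = time.perf_counter()

    def stop(self, name: str):
        elapsed = time.perf_counter() - self._starts.pop(name)
        self._buffers[name].append(elapsed)

    def __getitem__(self, name: str) -> float:
        buf = self._buffers[name]
        return sum(buf) / len(buf)  # mean over the sliding window


timer = NamedTimer(buffer_size=2)
for _ in range(3):
    timer.start("train_step_timing")
    time.sleep(0.01)  # stand-in for a training step
    timer.stop("train_step_timing")
print(timer["train_step_timing"])  # mean of the last two measurements
```

With `buffer_size <= 0`, the real callback instead resets the timer on every batch start, so the logged value is the single latest measurement rather than a window mean.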
def exp_manager(trainer: 'pytorch_lightning.Trainer', cfg: Optional[Union[DictConfig, Dict]] = None) -> Optional[Path]:
"""
exp_manager is a helper function used to manage folders for experiments. It follows the pytorch lightning paradigm
of exp_dir/model_or_experiment_name/version. If the lightning trainer has a logger, exp_manager will get exp_dir,
name, and version from the logger. Otherwise it will use the exp_dir and name arguments to create the logging
directory. exp_manager also allows for explicit folder creation via explicit_log_dir.
The version can be a datetime string or an integer. Datetime versioning can be disabled if use_datetime_version is set
to False. It optionally creates TensorBoardLogger, WandBLogger, DLLogger, MLFlowLogger, ClearMLLogger,
ModelCheckpoint objects from pytorch lightning.
It copies sys.argv, and git information if available to the logging directory. It creates a log file for each
process to log their output into.
exp_manager additionally has a resume feature (resume_if_exists) which can be used to continue training from
the constructed log_dir. When you need to continue training repeatedly (like on a cluster where you need
multiple consecutive jobs), you need to avoid creating version folders. Therefore from v1.0.0, when
resume_if_exists is set to True, creation of the version folders is skipped.
Args:
trainer (pytorch_lightning.Trainer): The lightning trainer.
cfg (DictConfig, dict): Can have the following keys:
- explicit_log_dir (str, Path): Can be used to override exp_dir/name/version folder creation. Defaults to
None, which will use exp_dir, name, and version to construct the logging directory.
- exp_dir (str, Path): The base directory to create the logging directory. Defaults to None, which logs to
./nemo_experiments.
- name (str): The name of the experiment. Defaults to None which turns into "default" via name = name or
"default".
- version (str): The version of the experiment. Defaults to None which uses either a datetime string or
lightning's TensorboardLogger system of using version_{int}.
- use_datetime_version (bool): Whether to use a datetime string for version. Defaults to True.
- resume_if_exists (bool): Whether this experiment is resuming from a previous run. If True, it sets
trainer._checkpoint_connector._ckpt_path so that the trainer should auto-resume. exp_manager will move files
under log_dir to log_dir/run_{int}. Defaults to False. From v1.0.0, when resume_if_exists is True,
we would not create version folders to make it easier to find the log folder for next runs.
- resume_past_end (bool): exp_manager errors out if resume_if_exists is True and a checkpoint matching
``*end.ckpt`` is found, indicating a previous training run fully completed. This behaviour can be disabled, in which
case the ``*end.ckpt`` will be loaded, by setting resume_past_end to True. Defaults to False.
- resume_ignore_no_checkpoint (bool): exp_manager errors out if resume_if_exists is True and no checkpoint
could be found. This behaviour can be disabled, in which case exp_manager will print a message and
continue without restoring, by setting resume_ignore_no_checkpoint to True. Defaults to False.
- resume_from_checkpoint (str): Can be used to specify a path to a specific checkpoint file to load from. This will
override any checkpoint found when resume_if_exists is True. Defaults to None.
- create_tensorboard_logger (bool): Whether to create a tensorboard logger and attach it to the pytorch
lightning trainer. Defaults to True.
- summary_writer_kwargs (dict): A dictionary of kwargs that can be passed to lightning's TensorboardLogger
class. Note that log_dir is passed by exp_manager and cannot exist in this dict. Defaults to None.
- create_wandb_logger (bool): Whether to create a Weights and Biases logger and attach it to the pytorch
lightning trainer. Defaults to False.
- wandb_logger_kwargs (dict): A dictionary of kwargs that can be passed to lightning's WandBLogger
class. Note that name and project are required parameters if create_wandb_logger is True.
Defaults to None.
- create_mlflow_logger (bool): Whether to create an MLFlow logger and attach it to the pytorch lightning
trainer. Defaults to False.
- mlflow_logger_kwargs (dict): optional parameters for the MLFlow logger
- create_dllogger_logger (bool): Whether to create a DLLogger logger and attach it to the pytorch lightning
trainer. Defaults to False.
- dllogger_logger_kwargs (dict): optional parameters for the DLLogger logger
- create_clearml_logger (bool): Whether to create a ClearML logger and attach it to the pytorch lightning
trainer. Defaults to False.
- clearml_logger_kwargs (dict): optional parameters for the ClearML logger
- create_checkpoint_callback (bool): Whether to create a ModelCheckpoint callback and attach it to the
pytorch lightning trainer. The ModelCheckpoint saves the top 3 models with the best "val_loss", the most
recent checkpoint under ``*last.ckpt``, and the final checkpoint after training completes under ``*end.ckpt``.
Defaults to True.
- create_early_stopping_callback (bool): Flag to decide if early stopping should be used to stop training. Default is False.
See EarlyStoppingParams dataclass above.
- create_preemption_callback (bool): Flag to decide whether to enable preemption callback to save checkpoints and exit training
immediately upon preemption. Default is True.
- files_to_copy (list): A list of files to copy to the experiment logging directory. Defaults to None which
copies no files.
- log_local_rank_0_only (bool): Whether to only create log files for local rank 0. Defaults to False.
Set this to True if you are using DDP with many GPUs and do not want many log files in your exp dir.
- log_global_rank_0_only (bool): Whether to only create log files for global rank 0. Defaults to False.
Set this to True if you are using DDP with many GPUs and do not want many log files in your exp dir.
- max_time_per_run (str): The maximum wall clock time *per run*. This is intended to be used on clusters where you want
a checkpoint to be saved after this specified time and be able to resume from that checkpoint. Defaults to None.
- seconds_to_sleep (float): seconds to sleep non rank 0 processes for. Used to give enough time for rank 0 to initialize
returns:
log_dir (Path): The final logging directory where logging files are saved. Usually the concatenation of
exp_dir, name, and version.
"""
# Add rank information to logger
# Note: trainer.global_rank and trainer.is_global_zero are not set until trainer.fit, so have to hack around it
local_rank = int(os.environ.get("LOCAL_RANK", 0))
global_rank = trainer.node_rank * trainer.num_devices + local_rank
logging.rank = global_rank
if cfg is None:
logging.error("exp_manager did not receive a cfg argument. It will be disabled.")
return
if trainer.fast_dev_run:
logging.info("Trainer was called with fast_dev_run. exp_manager will return without any functionality.")
return
# Ensure passed cfg is compliant with ExpManagerConfig
schema = OmegaConf.structured(ExpManagerConfig)
if isinstance(cfg, dict):
cfg = OmegaConf.create(cfg)
elif not isinstance(cfg, DictConfig):
raise ValueError(f"cfg was type: {type(cfg)}. Expected either a dict or a DictConfig")
cfg = OmegaConf.create(OmegaConf.to_container(cfg, resolve=True))
cfg = OmegaConf.merge(schema, cfg)
error_checks(trainer, cfg) # Ensures that trainer options are compliant with NeMo and exp_manager arguments
log_dir, exp_dir, name, version = get_log_dir(
trainer=trainer,
exp_dir=cfg.exp_dir,
name=cfg.name,
version=cfg.version,
explicit_log_dir=cfg.explicit_log_dir,
use_datetime_version=cfg.use_datetime_version,
resume_if_exists=cfg.resume_if_exists,
)
check_resume(
trainer,
log_dir,
cfg.resume_if_exists,
cfg.resume_past_end,
cfg.resume_ignore_no_checkpoint,
cfg.checkpoint_callback_params.dirpath,
cfg.resume_from_checkpoint,
)
checkpoint_name = name
# If name returned from get_log_dir is "", use cfg.name for checkpointing
if checkpoint_name is None or checkpoint_name == '':
checkpoint_name = cfg.name or "default"
# Set mlflow name if it's not set, before the main name is erased
if cfg.create_mlflow_logger and (not cfg.mlflow_logger_kwargs.get("experiment_name", None)):
cfg.mlflow_logger_kwargs.experiment_name = cfg.name
logging.warning(
'mlflow logger specified but no experiment name set. Using the same as Tensorboard: %s',
cfg.mlflow_logger_kwargs.experiment_name,
)
cfg.name = name # Used for configure_loggers so that the log_dir is properly set even if name is ""
cfg.version = version
# update app_state with log_dir, exp_dir, etc
app_state = AppState()
app_state.log_dir = log_dir
app_state.exp_dir = exp_dir
app_state.name = name
app_state.version = version
app_state.checkpoint_name = checkpoint_name
app_state.create_checkpoint_callback = cfg.create_checkpoint_callback
app_state.checkpoint_callback_params = cfg.checkpoint_callback_params
# Create the logging directory if it does not exist
os.makedirs(log_dir, exist_ok=True) # Cannot limit creation to global zero as all ranks write to own log file
logging.info(f'Experiments will be logged at {log_dir}')
trainer._default_root_dir = log_dir
if cfg.log_local_rank_0_only is True and cfg.log_global_rank_0_only is True:
raise ValueError(
f"Cannot set both log_local_rank_0_only and log_global_rank_0_only to True. Please set either one or neither."
)
# This is set if the env var NEMO_TESTING is set to True.
nemo_testing = get_envbool(NEMO_ENV_VARNAME_TESTING, False)
# Handle logging to file
log_file = log_dir / f'nemo_log_globalrank-{global_rank}_localrank-{local_rank}.txt'
if cfg.log_local_rank_0_only is True and not nemo_testing:
if local_rank == 0:
logging.add_file_handler(log_file)
elif cfg.log_global_rank_0_only is True and not nemo_testing:
if global_rank == 0:
logging.add_file_handler(log_file)
else:
# Logs on all ranks.
logging.add_file_handler(log_file)
# For some reason, LearningRateLogger requires trainer to have a logger. Safer to create logger on all ranks
# not just global rank 0.
if (
cfg.create_tensorboard_logger
or cfg.create_wandb_logger
or cfg.create_mlflow_logger
or cfg.create_dllogger_logger
or cfg.create_clearml_logger
):
configure_loggers(
trainer,
exp_dir,
log_dir,
cfg.name,
cfg.version,
cfg.checkpoint_callback_params,
cfg.create_tensorboard_logger,
cfg.summary_writer_kwargs,
cfg.create_wandb_logger,
cfg.wandb_logger_kwargs,
cfg.create_mlflow_logger,
cfg.mlflow_logger_kwargs,
cfg.create_dllogger_logger,
cfg.dllogger_logger_kwargs,
cfg.create_clearml_logger,
cfg.clearml_logger_kwargs,
)
# add loggers timing callbacks
if cfg.log_step_timing:
timing_callback = TimingCallback(timer_kwargs=cfg.step_timing_kwargs or {})
trainer.callbacks.insert(0, timing_callback)
if cfg.ema.enable:
ema_callback = EMA(
decay=cfg.ema.decay,
validate_original_weights=cfg.ema.validate_original_weights,
cpu_offload=cfg.ema.cpu_offload,
every_n_steps=cfg.ema.every_n_steps,
)
trainer.callbacks.append(ema_callback)
if cfg.create_early_stopping_callback:
early_stop_callback = EarlyStopping(**cfg.early_stopping_callback_params)
trainer.callbacks.append(early_stop_callback)
if cfg.create_checkpoint_callback:
configure_checkpointing(
trainer,
log_dir,
checkpoint_name,
cfg.resume_if_exists,
cfg.checkpoint_callback_params,
cfg.create_preemption_callback,
)
if cfg.disable_validation_on_resume:
# extend training loop to skip initial validation when resuming from checkpoint
configure_no_restart_validation_training_loop(trainer)
# Setup a stateless timer for use on clusters.
if cfg.max_time_per_run is not None:
found_ptl_timer = False
for idx, callback in enumerate(trainer.callbacks):
if isinstance(callback, Timer):
# NOTE: PTL does not expose a `trainer.max_time`. By the time we are in this function, PTL has already setup a timer if the user specifies `trainer.max_time` so best we can do is replace that.
# Working: If only `trainer.max_time` is set - it behaves as a normal PTL timer. If only `exp_manager.max_time_per_run` is set - it behaves as a StateLessTimer. If both are set, it also behaves as a StateLessTimer.
logging.warning(
f'Found a PTL Timer callback, replacing with a StatelessTimer callback. This will happen if you set trainer.max_time as well as exp_manager.max_time_per_run.'
)
trainer.callbacks[idx] = StatelessTimer(cfg.max_time_per_run)
found_ptl_timer = True
break
if not found_ptl_timer:
trainer.max_time = cfg.max_time_per_run
trainer.callbacks.append(StatelessTimer(cfg.max_time_per_run))
if is_global_rank_zero():
# Move files_to_copy to folder and add git information if present
if cfg.files_to_copy:
for _file in cfg.files_to_copy:
copy(Path(_file), log_dir)
# Create files for cmd args and git info
with open(log_dir / 'cmd-args.log', 'w', encoding='utf-8') as _file:
_file.write(" ".join(sys.argv))
# Try to get git hash
git_repo, git_hash = get_git_hash()
if git_repo:
with open(log_dir / 'git-info.log', 'w', encoding='utf-8') as _file:
_file.write(f'commit hash: {git_hash}')
_file.write(get_git_diff())
# Add err_file logging to global_rank zero
logging.add_err_file_handler(log_dir / 'nemo_error_log.txt')
# Add lightning file logging to global_rank zero
add_filehandlers_to_pl_logger(log_dir / 'lightning_logs.txt', log_dir / 'nemo_error_log.txt')
elif trainer.num_nodes * trainer.num_devices > 1:
# sleep other ranks so rank 0 can finish
# doing the initialization such as moving files
time.sleep(cfg.seconds_to_sleep)
return log_dir
def error_checks(trainer: 'pytorch_lightning.Trainer', cfg: Optional[Union[DictConfig, Dict]] = None):
"""
Checks that the passed trainer is compliant with NeMo and exp_manager's passed configuration. Checks that:
- Throws error when hydra has changed the working directory. This causes issues with lightning's DDP
- Throws error when trainer has loggers defined but create_tensorboard_logger or create_wandb_logger
or create_mlflow_logger or create_dllogger_logger is True
- Prints error messages when 1) run on multi-node and not Slurm, and 2) run on multi-gpu without DDP
"""
if HydraConfig.initialized() and get_original_cwd() != os.getcwd():
raise ValueError(
"Hydra changed the working directory. This interferes with ExpManager's functionality. Please pass "
"hydra.run.dir=. to your python script."
)
if trainer.logger is not None and (
cfg.create_tensorboard_logger or cfg.create_wandb_logger or cfg.create_mlflow_logger
):
raise LoggerMisconfigurationError(
"The pytorch lightning trainer that was passed to exp_manager contained a logger, and either "
f"create_tensorboard_logger: {cfg.create_tensorboard_logger} or create_wandb_logger: "
f"{cfg.create_wandb_logger} or create_mlflow_logger: {cfg.create_mlflow_logger}"
f"or create_dllogger_logger: {cfg.create_dllogger_logger} was set to True. "
"These can only be used if trainer does not already have a logger."
)
if trainer.num_nodes > 1 and not check_slurm(trainer):
logging.error(
"You are running multi-node training without SLURM handling the processes."
" Please note that this is not tested in NeMo and could result in errors."
)
if trainer.num_devices > 1 and not isinstance(trainer.strategy, DDPStrategy):
logging.error(
"You are running multi-gpu without ddp. Please note that this is not tested in NeMo and could result in "
"errors."
)
def check_resume(
trainer: 'pytorch_lightning.Trainer',
log_dir: str,
resume_if_exists: bool = False,
resume_past_end: bool = False,
resume_ignore_no_checkpoint: bool = False,
dirpath: str = None,
resume_from_checkpoint: str = None,
):
"""Checks that resume=True was used correctly with the arguments passed to exp_manager. Sets
trainer._checkpoint_connector._ckpt_path as necessary.
Raises:
NotFoundError: If resume is True, resume_ignore_no_checkpoint is False, and checkpoints could not be found.
ValueError: If resume is True and more than one checkpoint was found.
"""
if not log_dir:
raise ValueError(f"Resuming requires the log_dir {log_dir} to be passed to exp_manager")
checkpoint = None
if resume_from_checkpoint:
checkpoint = resume_from_checkpoint
if resume_if_exists:
# Use <log_dir>/checkpoints/ unless `dirpath` is set
checkpoint_dir = Path(dirpath) if dirpath else Path(Path(log_dir) / "checkpoints")
# when using distributed checkpointing, checkpoint_dir is a directory of directories
# we check for this here
dist_checkpoints = [d for d in list(checkpoint_dir.glob("*")) if d.is_dir()]
end_dist_checkpoints = [d for d in dist_checkpoints if d.match("*end")]
last_dist_checkpoints = [d for d in dist_checkpoints if d.match("*last")]
end_checkpoints = end_dist_checkpoints if end_dist_checkpoints else list(checkpoint_dir.rglob("*end.ckpt"))
last_checkpoints = last_dist_checkpoints if last_dist_checkpoints else list(checkpoint_dir.rglob("*last.ckpt"))
if not checkpoint_dir.exists() or (not len(end_checkpoints) > 0 and not len(last_checkpoints) > 0):
if resume_ignore_no_checkpoint:
warn = f"There were no checkpoints found in checkpoint_dir or no checkpoint folder at checkpoint_dir: {checkpoint_dir}. "
if checkpoint is None:
warn += "Training from scratch."
elif checkpoint == resume_from_checkpoint:
warn += f"Training from {resume_from_checkpoint}."
logging.warning(warn)
else:
raise NotFoundError(
f"There were no checkpoints found in checkpoint_dir or no checkpoint folder at checkpoint_dir: {checkpoint_dir}. Cannot resume."
)
elif len(end_checkpoints) > 0:
if resume_past_end:
if len(end_checkpoints) > 1:
if 'mp_rank' in str(end_checkpoints[0]):
checkpoint = end_checkpoints[0]
else:
raise ValueError(f"Multiple checkpoints {end_checkpoints} match *end.ckpt.")
else:
raise ValueError(
f"Found {end_checkpoints[0]} indicating that the last training run has already completed."
)
elif len(last_checkpoints) > 1:
if 'mp_rank' in str(last_checkpoints[0]) or 'tp_rank' in str(last_checkpoints[0]):
checkpoint = last_checkpoints[0]
checkpoint = uninject_model_parallel_rank(checkpoint)
else:
raise ValueError(f"Multiple checkpoints {last_checkpoints} match *last.ckpt.")
else:
checkpoint = last_checkpoints[0]
# PTL 2.0 supports ckpt_path instead of resume_from_checkpoint as the trainer flag
if checkpoint is not None:
trainer.ckpt_path = str(checkpoint)
logging.info(f'Resuming training from checkpoint: {trainer.ckpt_path}')
if is_global_rank_zero():
# Check to see if any files exist that need to be moved
files_to_move = []
if Path(log_dir).exists():
for child in Path(log_dir).iterdir():
if child.is_file():
files_to_move.append(child)
if len(files_to_move) > 0:
# Move old files to a new folder
other_run_dirs = Path(log_dir).glob("run_*")
run_count = 0
for fold in other_run_dirs:
if fold.is_dir():
run_count += 1
new_run_dir = Path(Path(log_dir) / f"run_{run_count}")
new_run_dir.mkdir()
for _file in files_to_move:
move(str(_file), str(new_run_dir))
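The checkpoint discovery that `check_resume` performs reduces to globbing for `*end.ckpt` and `*last.ckpt` under the checkpoint directory and preferring an end checkpoint (completed run) over a last checkpoint. A minimal, self-contained sketch against a temporary directory (filenames are made up for illustration):

```python
import tempfile
from pathlib import Path

# Build a fake checkpoint directory with illustrative filenames
tmp = Path(tempfile.mkdtemp())
ckpt_dir = tmp / "checkpoints"
ckpt_dir.mkdir()
(ckpt_dir / "epoch=3-last.ckpt").touch()
(ckpt_dir / "epoch=2.ckpt").touch()

# Same glob patterns check_resume uses
end_checkpoints = list(ckpt_dir.rglob("*end.ckpt"))
last_checkpoints = list(ckpt_dir.rglob("*last.ckpt"))

if end_checkpoints:
    resume_path = end_checkpoints[0]   # completed run; errors unless resume_past_end
elif len(last_checkpoints) == 1:
    resume_path = last_checkpoints[0]  # resume from the most recent checkpoint
else:
    resume_path = None                 # nothing to resume from

print(resume_path.name)  # epoch=3-last.ckpt
```

The real function layers model-parallel handling on top of this (multiple `mp_rank`/`tp_rank` matches collapse to one path via `uninject_model_parallel_rank`).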
def check_explicit_log_dir(
trainer: 'pytorch_lightning.Trainer', explicit_log_dir: Union[Path, str], exp_dir: str, name: str, version: str
) -> Tuple[Path, str, str, str]:
""" Checks that the passed arguments are compatible with explicit_log_dir.
Returns:
log_dir (Path): the log_dir
exp_dir (str): the base exp_dir without name nor version
name (str): The name of the experiment
version (str): The version of the experiment
Raise:
LoggerMisconfigurationError
"""
if trainer.logger is not None:
raise LoggerMisconfigurationError(
"The pytorch lightning trainer that was passed to exp_manager contained a logger and explicit_log_dir: "
f"{explicit_log_dir} was passed to exp_manager. Please remove the logger from the lightning trainer."
)
# Checking only (explicit_log_dir) vs (exp_dir and version).
# The `name` will be used as the actual name of checkpoint/archive.
if exp_dir or version:
logging.error(
f"exp_manager received explicit_log_dir: {explicit_log_dir} and at least one of exp_dir: {exp_dir}, "
f"or version: {version}. Please note that exp_dir, name, and version will be ignored."
)
if is_global_rank_zero() and Path(explicit_log_dir).exists():
logging.warning(f"Exp_manager is logging to {explicit_log_dir}, but it already exists.")
return Path(explicit_log_dir), str(explicit_log_dir), "", ""
def get_log_dir(
trainer: 'pytorch_lightning.Trainer',
exp_dir: str = None,
name: str = None,
version: str = None,
explicit_log_dir: str = None,
use_datetime_version: bool = True,
resume_if_exists: bool = False,
) -> Tuple[Path, str, str, str]:
"""
Obtains the log_dir used for exp_manager.
Returns:
log_dir (Path): the log_dir
exp_dir (str): the base exp_dir without name nor version
name (str): The name of the experiment
version (str): The version of the experiment
explicit_log_dir (str): The explicit path to the log folder. Defaults to None.
use_datetime_version (bool): Uses date and time as the version of the log folder. Defaults to True.
resume_if_exists (bool): if resume_if_exists of the exp_manager's config is enabled or not. When enabled, the
version folders would not get created.
Raise:
LoggerMisconfigurationError: If trainer is incompatible with arguments
NotFoundError: If resume is True, resume_ignore_no_checkpoint is False, and checkpoints could not be found.
ValueError: If resume is True and more than one checkpoint was found.
"""
if explicit_log_dir: # If explicit log_dir was passed, short circuit
return check_explicit_log_dir(trainer, explicit_log_dir, exp_dir, name, version)
# Default exp_dir to ./nemo_experiments if None was passed
_exp_dir = exp_dir
if exp_dir is None:
_exp_dir = str(Path.cwd() / 'nemo_experiments')
# If the user has already defined a logger for the trainer, use the logger defaults for logging directory
if trainer.logger is not None:
if trainer.logger.save_dir:
if exp_dir:
raise LoggerMisconfigurationError(
"The pytorch lightning trainer that was passed to exp_manager contained a logger, the logger's "
f"save_dir was not None, and exp_dir ({exp_dir}) was not None. If trainer.logger.save_dir "
"exists, exp_manager will use trainer.logger.save_dir as the logging directory and exp_dir "
"must be None."
)
_exp_dir = trainer.logger.save_dir
if name:
raise LoggerMisconfigurationError(
"The pytorch lightning trainer that was passed to exp_manager contained a logger, and name: "
f"{name} was also passed to exp_manager. If the trainer contains a "
"logger, exp_manager will use trainer.logger.name, and name passed to exp_manager must be None."
)
name = trainer.logger.name
version = f"version_{trainer.logger.version}"
# Use user-defined exp_dir, project_name, exp_name, and versioning options
else:
name = name or "default"
version = version or os.environ.get(NEMO_ENV_VARNAME_VERSION, None)
if not version:
if resume_if_exists:
logging.warning(
"No version folders would be created under the log folder as 'resume_if_exists' is enabled."
)
version = None
elif is_global_rank_zero():
if use_datetime_version:
version = time.strftime('%Y-%m-%d_%H-%M-%S')
else:
tensorboard_logger = TensorBoardLogger(save_dir=Path(_exp_dir), name=name, version=version)
version = f"version_{tensorboard_logger.version}"
os.environ[NEMO_ENV_VARNAME_VERSION] = "" if version is None else version
log_dir = Path(_exp_dir) / Path(str(name)) / Path("" if version is None else str(version))
return log_dir, str(_exp_dir), name, version
def get_git_hash():
"""
Helper function that tries to get the commit hash if running inside a git folder
returns:
Bool: Whether the git subprocess ran without error
str: git subprocess output or error message
"""
try:
return (
True,
subprocess.check_output(['git', 'rev-parse', 'HEAD'], stderr=subprocess.STDOUT).decode(),
)
except subprocess.CalledProcessError as err:
return False, "{}\n".format(err.output.decode("utf-8"))
def get_git_diff():
"""
Helper function that tries to get the git diff if running inside a git folder
    returns:
        str: git subprocess output or error message
"""
try:
return subprocess.check_output(['git', 'diff'], stderr=subprocess.STDOUT).decode()
except subprocess.CalledProcessError as err:
return "{}\n".format(err.output.decode("utf-8"))
def configure_loggers(
trainer: 'pytorch_lightning.Trainer',
    exp_dir: Union[Path, str],
    log_dir: Union[Path, str],
name: str,
version: str,
checkpoint_callback_params: dict,
create_tensorboard_logger: bool,
summary_writer_kwargs: dict,
create_wandb_logger: bool,
wandb_kwargs: dict,
create_mlflow_logger: bool,
mlflow_kwargs: dict,
create_dllogger_logger: bool,
dllogger_kwargs: dict,
create_clearml_logger: bool,
clearml_kwargs: dict,
):
"""
    Creates a TensorBoardLogger and/or WandbLogger / MLFlowLogger / DLLogger / ClearMLLogger and attaches them to the trainer.
Raises ValueError if summary_writer_kwargs or wandb_kwargs are misconfigured.
"""
# Potentially create tensorboard logger and/or WandBLogger / MLFlowLogger / DLLogger
logger_list = []
if create_tensorboard_logger:
if summary_writer_kwargs is None:
summary_writer_kwargs = {}
elif "log_dir" in summary_writer_kwargs:
raise ValueError(
"You cannot pass `log_dir` as part of `summary_writer_kwargs`. `log_dir` is handled by lightning's "
"TensorBoardLogger logger."
)
tensorboard_logger = TensorBoardLogger(save_dir=exp_dir, name=name, version=version, **summary_writer_kwargs)
logger_list.append(tensorboard_logger)
logging.info("TensorboardLogger has been set up")
if create_wandb_logger:
if wandb_kwargs is None:
wandb_kwargs = {}
if "name" not in wandb_kwargs and "project" not in wandb_kwargs:
raise ValueError("name and project are required for wandb_logger")
# Update the wandb save_dir
if wandb_kwargs.get('save_dir', None) is None:
wandb_kwargs['save_dir'] = exp_dir
os.makedirs(wandb_kwargs['save_dir'], exist_ok=True)
wandb_logger = WandbLogger(version=version, **wandb_kwargs)
logger_list.append(wandb_logger)
logging.info("WandBLogger has been set up")
if create_mlflow_logger:
mlflow_logger = MLFlowLogger(run_name=version, **mlflow_kwargs)
logger_list.append(mlflow_logger)
logging.info("MLFlowLogger has been set up")
if create_dllogger_logger:
dllogger_logger = DLLogger(**dllogger_kwargs)
logger_list.append(dllogger_logger)
logging.info("DLLogger has been set up")
if create_clearml_logger:
clearml_logger = ClearMLLogger(
clearml_cfg=clearml_kwargs,
log_dir=log_dir,
prefix=name,
save_best_model=checkpoint_callback_params.save_best_model,
)
logger_list.append(clearml_logger)
logging.info("ClearMLLogger has been set up")
trainer._logger_connector.configure_logger(logger_list)
def configure_checkpointing(
trainer: 'pytorch_lightning.Trainer',
log_dir: Path,
name: str,
resume: bool,
params: 'DictConfig',
create_preemption_callback: bool,
):
""" Adds ModelCheckpoint to trainer. Raises CheckpointMisconfigurationError if trainer already has a ModelCheckpoint
callback
"""
for callback in trainer.callbacks:
if isinstance(callback, ModelCheckpoint):
raise CheckpointMisconfigurationError(
"The pytorch lightning trainer that was passed to exp_manager contained a ModelCheckpoint "
"and create_checkpoint_callback was set to True. Please either set create_checkpoint_callback "
"to False, or remove ModelCheckpoint from the lightning trainer"
)
# Create the callback and attach it to trainer
if "filepath" in params:
if params.filepath is not None:
logging.warning("filepath is deprecated. Please switch to dirpath and filename instead")
if params.dirpath is None:
params.dirpath = Path(params.filepath).parent
if params.filename is None:
params.filename = Path(params.filepath).name
with open_dict(params):
del params["filepath"]
if params.dirpath is None:
params.dirpath = Path(log_dir / 'checkpoints')
if params.filename is None:
params.filename = f'{name}--{{{params.monitor}:.4f}}-{{epoch}}'
if params.prefix is None:
params.prefix = name
NeMoModelCheckpoint.CHECKPOINT_NAME_LAST = params.filename + '-last'
logging.debug(params.dirpath)
logging.debug(params.filename)
logging.debug(params.prefix)
if "val" in params.monitor:
if (
trainer.max_epochs is not None
and trainer.max_epochs != -1
and trainer.max_epochs < trainer.check_val_every_n_epoch
):
logging.error(
"The checkpoint callback was told to monitor a validation value but trainer.max_epochs("
f"{trainer.max_epochs}) was less than trainer.check_val_every_n_epoch({trainer.check_val_every_n_epoch}"
f"). It is very likely this run will fail with ModelCheckpoint(monitor='{params.monitor}') not found "
"in the returned metrics. Please ensure that validation is run within trainer.max_epochs."
)
elif trainer.max_steps is not None and trainer.max_steps != -1:
logging.warning(
"The checkpoint callback was told to monitor a validation value and trainer's max_steps was set to "
f"{trainer.max_steps}. Please ensure that max_steps will run for at least "
f"{trainer.check_val_every_n_epoch} epochs to ensure that checkpointing will not error out."
)
checkpoint_callback = NeMoModelCheckpoint(n_resume=resume, **params)
checkpoint_callback.last_model_path = trainer.ckpt_path or ""
if 'mp_rank' in checkpoint_callback.last_model_path or 'tp_rank' in checkpoint_callback.last_model_path:
checkpoint_callback.last_model_path = uninject_model_parallel_rank(checkpoint_callback.last_model_path)
trainer.callbacks.append(checkpoint_callback)
if create_preemption_callback:
        # Check if cuda is available as preemption is supported only on GPUs
if torch.cuda.is_available():
## By default PreemptionCallback handles SIGTERM. To handle other signals pass the signal in the call as below:
## PreemptionCallback(checkpoint_callback, signal.SIGCHLD)
preemption_callback = PreemptionCallback(checkpoint_callback)
trainer.callbacks.append(preemption_callback)
else:
logging.info("Preemption is supported only on GPUs, disabling preemption")
def check_slurm(trainer):
try:
return trainer.accelerator_connector.is_slurm_managing_tasks
except AttributeError:
return False
class StatelessTimer(Timer):
"""Extension of PTL timers to be per run."""
def __init__(self, duration: timedelta = None, interval: str = Interval.step, verbose: bool = True,) -> None:
super().__init__(duration, interval, verbose)
# Override PTL Timer's state dict to not store elapsed time information so that we can restore and continue training.
def state_dict(self) -> Dict[str, Any]:
return {}
def load_state_dict(self, state_dict: Dict[str, Any]) -> None:
return
def configure_no_restart_validation_training_loop(trainer: pytorch_lightning.Trainer) -> None:
    if type(trainer.fit_loop.epoch_loop) is not _TrainingEpochLoop:
warnings.warn("Detected custom epoch loop. Skipping no validation on restart support.", UserWarning)
return
## Pass trainer object to avoid trainer getting overwritten as None
loop = SkipResumeTrainingValidationLoop(trainer, trainer.min_steps, trainer.max_steps)
trainer.fit_loop.epoch_loop = loop
class SkipResumeTrainingValidationLoop(_TrainingEpochLoop):
"""
Extend the PTL Epoch loop to skip validating when resuming.
This happens when resuming a checkpoint that has already run validation, but loading restores
the training state before validation has run.
"""
def _should_check_val_fx(self) -> bool:
if self.restarting and self.global_step % self.trainer.val_check_batch == 0:
return False
return super()._should_check_val_fx()
def clean_exp_ckpt(exp_log_dir: Union[str, Path], remove_ckpt: bool = True, remove_nemo: bool = False):
"""
Helper method that removes Pytorch Lightning .ckpt files or NeMo .nemo files from the checkpoint directory
Args:
exp_log_dir: str path to the root directory of the current experiment.
remove_ckpt: bool, whether to remove all *.ckpt files in the checkpoints directory.
remove_nemo: bool, whether to remove all *.nemo files in the checkpoints directory.
"""
exp_log_dir = str(exp_log_dir)
if remove_ckpt:
logging.info("Deleting *.ckpt files ...")
ckpt_files = glob.glob(os.path.join(exp_log_dir, "checkpoints", "*.ckpt"))
for filepath in ckpt_files:
os.remove(filepath)
logging.info(f"Deleted file : {filepath}")
if remove_nemo:
logging.info("Deleting *.nemo files ...")
nemo_files = glob.glob(os.path.join(exp_log_dir, "checkpoints", "*.nemo"))
for filepath in nemo_files:
os.remove(filepath)
logging.info(f"Deleted file : {filepath}")
[end of nemo/utils/exp_manager.py]
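The precedence rules implemented by `get_log_dir` above (explicit dir wins; otherwise `<exp_dir>/<name>/<version>`, with datetime versioning and `resume_if_exists` suppressing the version folder) can be sketched with `pathlib` alone. This is a simplified illustration only: `resolve_log_dir` and its defaults are ours, and trainer/logger interaction and rank handling are ignored.

```python
from pathlib import Path
import time


def resolve_log_dir(
    exp_dir=None,
    name=None,
    version=None,
    explicit_log_dir=None,
    use_datetime_version=True,
    resume_if_exists=False,
):
    # An explicit log dir short-circuits everything else, as in get_log_dir.
    if explicit_log_dir:
        return Path(explicit_log_dir)
    # Default base dir mirrors ./nemo_experiments
    exp_dir = exp_dir or str(Path.cwd() / "nemo_experiments")
    name = name or "default"
    if version is None and not resume_if_exists:
        # Datetime versioning; otherwise a fixed "version_N"-style folder
        version = time.strftime('%Y-%m-%d_%H-%M-%S') if use_datetime_version else "version_0"
    log_dir = Path(exp_dir) / name
    # resume_if_exists leaves version as None, so no version folder is appended
    return log_dir / version if version else log_dir


assert resolve_log_dir(explicit_log_dir="/tmp/run") == Path("/tmp/run")
assert resolve_log_dir(exp_dir="/tmp", name="exp", version="v1") == Path("/tmp/exp/v1")
assert resolve_log_dir(exp_dir="/tmp", name="exp", resume_if_exists=True) == Path("/tmp/exp")
```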
[start of scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram.py]
# Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""
# This script evaluates an N-gram language model trained with the KenLM library (https://github.com/kpu/kenlm) in
# fusion with beam search decoders on top of a trained ASR model with a CTC decoder. To evaluate a model with a
# Transducer (RNN-T) decoder, use the script 'scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram_transducer.py'.
# NeMo's beam search decoders are capable of using the KenLM's N-gram models
# to find the best candidates. This script supports both character level and BPE level
# encodings and models which is detected automatically from the type of the model.
# You may train the LM model with 'scripts/asr_language_modeling/ngram_lm/train_kenlm.py'.
# Config Help
To discover all arguments of the script, please run :
python eval_beamsearch_ngram.py --help
python eval_beamsearch_ngram.py --cfg job
# USAGE
python eval_beamsearch_ngram.py nemo_model_file=<path to the .nemo file of the model> \
input_manifest=<path to the evaluation JSON manifest file> \
kenlm_model_file=<path to the binary KenLM model> \
beam_width=[<list of the beam widths, separated with commas>] \
beam_alpha=[<list of the beam alphas, separated with commas>] \
beam_beta=[<list of the beam betas, separated with commas>] \
preds_output_folder=<optional folder to store the predictions> \
probs_cache_file=null \
decoding_mode=beamsearch_ngram
...
# Grid Search for Hyper parameters
For grid search, you can provide a list of arguments as follows -
beam_width=[4,8,16,....] \
beam_alpha=[-2.0,-1.0,...,1.0,2.0] \
beam_beta=[-1.0,-0.5,0.0,...,1.0] \
# You may find more info on how to use this script at:
# https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/asr_language_modeling.html
"""
import contextlib
import json
import os
import pickle
from dataclasses import dataclass, field, is_dataclass
from pathlib import Path
from typing import List, Optional
import editdistance
import numpy as np
import torch
from omegaconf import MISSING, OmegaConf
from sklearn.model_selection import ParameterGrid
from tqdm.auto import tqdm
import nemo.collections.asr as nemo_asr
from nemo.collections.asr.models import EncDecHybridRNNTCTCModel
from nemo.collections.asr.parts.submodules import ctc_beam_decoding
from nemo.collections.asr.parts.utils.transcribe_utils import PunctuationCapitalization, TextProcessingConfig
from nemo.core.config import hydra_runner
from nemo.utils import logging
# fmt: off
@dataclass
class EvalBeamSearchNGramConfig:
"""
Evaluate an ASR model with beam search decoding and n-gram KenLM language model.
"""
# # The path of the '.nemo' file of the ASR model or the name of a pretrained model (ngc / huggingface)
nemo_model_file: str = MISSING
# File paths
input_manifest: str = MISSING # The manifest file of the evaluation set
kenlm_model_file: Optional[str] = None # The path of the KenLM binary model file
preds_output_folder: Optional[str] = None # The optional folder where the predictions are stored
probs_cache_file: Optional[str] = None # The cache file for storing the logprobs of the model
# Parameters for inference
acoustic_batch_size: int = 16 # The batch size to calculate log probabilities
beam_batch_size: int = 128 # The batch size to be used for beam search decoding
device: str = "cuda" # The device to load the model onto to calculate log probabilities
use_amp: bool = False # Whether to use AMP if available to calculate log probabilities
# Beam Search hyperparameters
# The decoding scheme to be used for evaluation.
# Can be one of ["greedy", "beamsearch", "beamsearch_ngram"]
decoding_mode: str = "beamsearch_ngram"
beam_width: List[int] = field(default_factory=lambda: [128]) # The width or list of the widths for the beam search decoding
beam_alpha: List[float] = field(default_factory=lambda: [1.0]) # The alpha parameter or list of the alphas for the beam search decoding
beam_beta: List[float] = field(default_factory=lambda: [0.0]) # The beta parameter or list of the betas for the beam search decoding
decoding_strategy: str = "beam"
decoding: ctc_beam_decoding.BeamCTCInferConfig = ctc_beam_decoding.BeamCTCInferConfig(beam_size=128)
text_processing: Optional[TextProcessingConfig] = TextProcessingConfig(
punctuation_marks = ".,?",
separate_punctuation = False,
do_lowercase = False,
rm_punctuation = False,
)
# fmt: on
def beam_search_eval(
model: nemo_asr.models.ASRModel,
cfg: EvalBeamSearchNGramConfig,
all_probs: List[torch.Tensor],
target_transcripts: List[str],
preds_output_file: str = None,
lm_path: str = None,
beam_alpha: float = 1.0,
beam_beta: float = 0.0,
beam_width: int = 128,
beam_batch_size: int = 128,
progress_bar: bool = True,
punctuation_capitalization: PunctuationCapitalization = None,
):
level = logging.getEffectiveLevel()
logging.setLevel(logging.CRITICAL)
# Reset config
model.change_decoding_strategy(None)
# Override the beam search config with current search candidate configuration
cfg.decoding.beam_size = beam_width
cfg.decoding.beam_alpha = beam_alpha
cfg.decoding.beam_beta = beam_beta
cfg.decoding.return_best_hypothesis = False
cfg.decoding.kenlm_path = cfg.kenlm_model_file
# Update model's decoding strategy config
model.cfg.decoding.strategy = cfg.decoding_strategy
model.cfg.decoding.beam = cfg.decoding
# Update model's decoding strategy
if isinstance(model, EncDecHybridRNNTCTCModel):
model.change_decoding_strategy(model.cfg.decoding, decoder_type='ctc')
decoding = model.ctc_decoding
else:
model.change_decoding_strategy(model.cfg.decoding)
decoding = model.decoding
logging.setLevel(level)
wer_dist_first = cer_dist_first = 0
wer_dist_best = cer_dist_best = 0
words_count = 0
chars_count = 0
sample_idx = 0
if preds_output_file:
out_file = open(preds_output_file, 'w', encoding='utf_8', newline='\n')
if progress_bar:
it = tqdm(
range(int(np.ceil(len(all_probs) / beam_batch_size))),
desc=f"Beam search decoding with width={beam_width}, alpha={beam_alpha}, beta={beam_beta}",
ncols=120,
)
else:
it = range(int(np.ceil(len(all_probs) / beam_batch_size)))
for batch_idx in it:
# disabling type checking
probs_batch = all_probs[batch_idx * beam_batch_size : (batch_idx + 1) * beam_batch_size]
probs_lens = torch.tensor([prob.shape[0] for prob in probs_batch])
with torch.no_grad():
packed_batch = torch.zeros(len(probs_batch), max(probs_lens), probs_batch[0].shape[-1], device='cpu')
for prob_index in range(len(probs_batch)):
packed_batch[prob_index, : probs_lens[prob_index], :] = torch.tensor(
probs_batch[prob_index], device=packed_batch.device, dtype=packed_batch.dtype
)
_, beams_batch = decoding.ctc_decoder_predictions_tensor(
packed_batch, decoder_lengths=probs_lens, return_hypotheses=True,
)
for beams_idx, beams in enumerate(beams_batch):
target = target_transcripts[sample_idx + beams_idx]
target_split_w = target.split()
target_split_c = list(target)
words_count += len(target_split_w)
chars_count += len(target_split_c)
wer_dist_min = cer_dist_min = 10000
for candidate_idx, candidate in enumerate(beams): # type: (int, ctc_beam_decoding.rnnt_utils.Hypothesis)
pred_text = candidate.text
if cfg.text_processing.do_lowercase:
pred_text = punctuation_capitalization.do_lowercase([pred_text])[0]
if cfg.text_processing.rm_punctuation:
pred_text = punctuation_capitalization.rm_punctuation([pred_text])[0]
if cfg.text_processing.separate_punctuation:
pred_text = punctuation_capitalization.separate_punctuation([pred_text])[0]
pred_split_w = pred_text.split()
wer_dist = editdistance.eval(target_split_w, pred_split_w)
pred_split_c = list(pred_text)
cer_dist = editdistance.eval(target_split_c, pred_split_c)
wer_dist_min = min(wer_dist_min, wer_dist)
cer_dist_min = min(cer_dist_min, cer_dist)
if candidate_idx == 0:
# first candidate
wer_dist_first += wer_dist
cer_dist_first += cer_dist
score = candidate.score
if preds_output_file:
out_file.write('{}\t{}\n'.format(pred_text, score))
wer_dist_best += wer_dist_min
cer_dist_best += cer_dist_min
sample_idx += len(probs_batch)
if preds_output_file:
out_file.close()
logging.info(f"Stored the predictions of beam search decoding at '{preds_output_file}'.")
if lm_path:
logging.info(
'WER/CER with beam search decoding and N-gram model = {:.2%}/{:.2%}'.format(
wer_dist_first / words_count, cer_dist_first / chars_count
)
)
else:
logging.info(
'WER/CER with beam search decoding = {:.2%}/{:.2%}'.format(
wer_dist_first / words_count, cer_dist_first / chars_count
)
)
logging.info(
'Oracle WER/CER in candidates with perfect LM= {:.2%}/{:.2%}'.format(
wer_dist_best / words_count, cer_dist_best / chars_count
)
)
logging.info(f"=================================================================================")
return wer_dist_first / words_count, cer_dist_first / chars_count
@hydra_runner(config_path=None, config_name='EvalBeamSearchNGramConfig', schema=EvalBeamSearchNGramConfig)
def main(cfg: EvalBeamSearchNGramConfig):
logging.warning("This file will be renamed to eval_beamsearch_ngram_ctc.py in the future NeMo (1.21) release.")
if is_dataclass(cfg):
cfg = OmegaConf.structured(cfg) # type: EvalBeamSearchNGramConfig
valid_decoding_modes = ["greedy", "beamsearch", "beamsearch_ngram"]
if cfg.decoding_mode not in valid_decoding_modes:
raise ValueError(
f"Given decoding_mode={cfg.decoding_mode} is invalid. Available options are :\n" f"{valid_decoding_modes}"
)
if cfg.nemo_model_file.endswith('.nemo'):
asr_model = nemo_asr.models.ASRModel.restore_from(cfg.nemo_model_file, map_location=torch.device(cfg.device))
else:
logging.warning(
"nemo_model_file does not end with .nemo, therefore trying to load a pretrained model with this name."
)
asr_model = nemo_asr.models.ASRModel.from_pretrained(
cfg.nemo_model_file, map_location=torch.device(cfg.device)
)
target_transcripts = []
manifest_dir = Path(cfg.input_manifest).parent
with open(cfg.input_manifest, 'r', encoding='utf_8') as manifest_file:
audio_file_paths = []
for line in tqdm(manifest_file, desc=f"Reading Manifest {cfg.input_manifest} ...", ncols=120):
data = json.loads(line)
audio_file = Path(data['audio_filepath'])
if not audio_file.is_file() and not audio_file.is_absolute():
audio_file = manifest_dir / audio_file
target_transcripts.append(data['text'])
audio_file_paths.append(str(audio_file.absolute()))
punctuation_capitalization = PunctuationCapitalization(cfg.text_processing.punctuation_marks)
if cfg.text_processing.do_lowercase:
target_transcripts = punctuation_capitalization.do_lowercase(target_transcripts)
if cfg.text_processing.rm_punctuation:
target_transcripts = punctuation_capitalization.rm_punctuation(target_transcripts)
if cfg.text_processing.separate_punctuation:
target_transcripts = punctuation_capitalization.separate_punctuation(target_transcripts)
if cfg.probs_cache_file and os.path.exists(cfg.probs_cache_file):
logging.info(f"Found a pickle file of probabilities at '{cfg.probs_cache_file}'.")
logging.info(f"Loading the cached pickle file of probabilities from '{cfg.probs_cache_file}' ...")
with open(cfg.probs_cache_file, 'rb') as probs_file:
all_probs = pickle.load(probs_file)
if len(all_probs) != len(audio_file_paths):
raise ValueError(
f"The number of samples in the probabilities file '{cfg.probs_cache_file}' does not "
f"match the manifest file. You may need to delete the probabilities cached file."
)
else:
@contextlib.contextmanager
def default_autocast():
yield
if cfg.use_amp:
if torch.cuda.is_available() and hasattr(torch.cuda, 'amp') and hasattr(torch.cuda.amp, 'autocast'):
logging.info("AMP is enabled!\n")
autocast = torch.cuda.amp.autocast
else:
autocast = default_autocast
else:
autocast = default_autocast
with autocast():
with torch.no_grad():
if isinstance(asr_model, EncDecHybridRNNTCTCModel):
asr_model.cur_decoder = 'ctc'
all_logits = asr_model.transcribe(audio_file_paths, batch_size=cfg.acoustic_batch_size, logprobs=True)
all_probs = all_logits
if cfg.probs_cache_file:
logging.info(f"Writing pickle files of probabilities at '{cfg.probs_cache_file}'...")
with open(cfg.probs_cache_file, 'wb') as f_dump:
pickle.dump(all_probs, f_dump)
wer_dist_greedy = 0
cer_dist_greedy = 0
words_count = 0
chars_count = 0
for batch_idx, probs in enumerate(all_probs):
preds = np.argmax(probs, axis=1)
preds_tensor = torch.tensor(preds, device='cpu').unsqueeze(0)
if isinstance(asr_model, EncDecHybridRNNTCTCModel):
pred_text = asr_model.ctc_decoding.ctc_decoder_predictions_tensor(preds_tensor)[0][0]
else:
pred_text = asr_model._wer.decoding.ctc_decoder_predictions_tensor(preds_tensor)[0][0]
if cfg.text_processing.do_lowercase:
pred_text = punctuation_capitalization.do_lowercase([pred_text])[0]
if cfg.text_processing.rm_punctuation:
pred_text = punctuation_capitalization.rm_punctuation([pred_text])[0]
if cfg.text_processing.separate_punctuation:
pred_text = punctuation_capitalization.separate_punctuation([pred_text])[0]
pred_split_w = pred_text.split()
target_split_w = target_transcripts[batch_idx].split()
pred_split_c = list(pred_text)
target_split_c = list(target_transcripts[batch_idx])
wer_dist = editdistance.eval(target_split_w, pred_split_w)
cer_dist = editdistance.eval(target_split_c, pred_split_c)
wer_dist_greedy += wer_dist
cer_dist_greedy += cer_dist
words_count += len(target_split_w)
chars_count += len(target_split_c)
logging.info('Greedy WER/CER = {:.2%}/{:.2%}'.format(wer_dist_greedy / words_count, cer_dist_greedy / chars_count))
asr_model = asr_model.to('cpu')
if cfg.decoding_mode == "beamsearch_ngram":
if not os.path.exists(cfg.kenlm_model_file):
raise FileNotFoundError(f"Could not find the KenLM model file '{cfg.kenlm_model_file}'.")
lm_path = cfg.kenlm_model_file
else:
lm_path = None
# 'greedy' decoding_mode would skip the beam search decoding
if cfg.decoding_mode in ["beamsearch_ngram", "beamsearch"]:
if cfg.beam_width is None or cfg.beam_alpha is None or cfg.beam_beta is None:
raise ValueError("beam_width, beam_alpha and beam_beta are needed to perform beam search decoding.")
params = {'beam_width': cfg.beam_width, 'beam_alpha': cfg.beam_alpha, 'beam_beta': cfg.beam_beta}
hp_grid = ParameterGrid(params)
hp_grid = list(hp_grid)
best_wer_beam_size, best_cer_beam_size = None, None
best_wer_alpha, best_cer_alpha = None, None
best_wer_beta, best_cer_beta = None, None
best_wer, best_cer = 1e6, 1e6
logging.info(f"==============================Starting the beam search decoding===============================")
logging.info(f"Grid search size: {len(hp_grid)}")
logging.info(f"It may take some time...")
logging.info(f"==============================================================================================")
if cfg.preds_output_folder and not os.path.exists(cfg.preds_output_folder):
os.mkdir(cfg.preds_output_folder)
for hp in hp_grid:
if cfg.preds_output_folder:
preds_output_file = os.path.join(
cfg.preds_output_folder,
f"preds_out_width{hp['beam_width']}_alpha{hp['beam_alpha']}_beta{hp['beam_beta']}.tsv",
)
else:
preds_output_file = None
candidate_wer, candidate_cer = beam_search_eval(
asr_model,
cfg,
all_probs=all_probs,
target_transcripts=target_transcripts,
preds_output_file=preds_output_file,
lm_path=lm_path,
beam_width=hp["beam_width"],
beam_alpha=hp["beam_alpha"],
beam_beta=hp["beam_beta"],
beam_batch_size=cfg.beam_batch_size,
progress_bar=True,
punctuation_capitalization=punctuation_capitalization,
)
if candidate_cer < best_cer:
best_cer_beam_size = hp["beam_width"]
best_cer_alpha = hp["beam_alpha"]
best_cer_beta = hp["beam_beta"]
best_cer = candidate_cer
if candidate_wer < best_wer:
best_wer_beam_size = hp["beam_width"]
best_wer_alpha = hp["beam_alpha"]
best_wer_beta = hp["beam_beta"]
best_wer = candidate_wer
logging.info(
f'Best WER Candidate = {best_wer:.2%} :: Beam size = {best_wer_beam_size}, '
f'Beam alpha = {best_wer_alpha}, Beam beta = {best_wer_beta}'
)
logging.info(
f'Best CER Candidate = {best_cer:.2%} :: Beam size = {best_cer_beam_size}, '
f'Beam alpha = {best_cer_alpha}, Beam beta = {best_cer_beta}'
)
logging.info(f"=================================================================================")
if __name__ == '__main__':
main()
[end of scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram.py]
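The greedy WER/CER accumulation in `main` above boils down to summed edit distances over words and characters, divided by the reference word/character counts. A self-contained sketch, with a stdlib Levenshtein implementation standing in for the third-party `editdistance` package the script uses:

```python
def levenshtein(a, b):
    # Classic dynamic-programming edit distance over any two sequences
    # (lists of words for WER, lists of characters for CER).
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[-1] + 1,              # insertion
                           prev[j - 1] + (x != y)))  # substitution / match
        prev = cur
    return prev[-1]


def wer_cer(targets, hypotheses):
    # Accumulate distances and reference lengths over the whole set,
    # mirroring the wer_dist_greedy / cer_dist_greedy loop above.
    wd = cd = words = chars = 0
    for tgt, hyp in zip(targets, hypotheses):
        wd += levenshtein(tgt.split(), hyp.split())
        cd += levenshtein(list(tgt), list(hyp))
        words += len(tgt.split())
        chars += len(tgt)
    return wd / words, cd / chars


wer, cer = wer_cer(["the cat sat"], ["the cat sit"])
assert wer == 1 / 3 and cer == 1 / 11
```

The same `levenshtein` helper serves both metrics because only the tokenization differs, which is exactly why the script splits each transcript into both words and characters.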
[start of scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram_transducer.py]
# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""
# This script evaluates an N-gram language model trained with the KenLM library (https://github.com/kpu/kenlm) in
# fusion with beam search decoders on top of a trained ASR Transducer model. NeMo's beam search decoders are capable of using the
# KenLM's N-gram models to find the best candidates. This script supports both character level and BPE level
# encodings and models which is detected automatically from the type of the model.
# You may train the LM model with 'scripts/asr_language_modeling/ngram_lm/train_kenlm.py'.
# Config Help
To discover all arguments of the script, please run :
python eval_beamsearch_ngram.py --help
python eval_beamsearch_ngram.py --cfg job
# USAGE
python eval_beamsearch_ngram_transducer.py nemo_model_file=<path to the .nemo file of the model> \
    input_manifest=<path to the evaluation JSON manifest file> \
kenlm_model_file=<path to the binary KenLM model> \
beam_width=[<list of the beam widths, separated with commas>] \
beam_alpha=[<list of the beam alphas, separated with commas>] \
preds_output_folder=<optional folder to store the predictions> \
probs_cache_file=null \
    decoding_strategy=<greedy_batch or maes decoding> \
maes_prefix_alpha=[<list of the maes prefix alphas, separated with commas>] \
maes_expansion_gamma=[<list of the maes expansion gammas, separated with commas>] \
hat_subtract_ilm=<in case of HAT model: subtract internal LM or not> \
hat_ilm_weight=[<in case of HAT model: list of the HAT internal LM weights, separated with commas>] \
...
# Grid Search for Hyper parameters
For grid search, you can provide a list of arguments as follows -
beam_width=[4,8,16,....] \
beam_alpha=[-2.0,-1.0,...,1.0,2.0] \
# You may find more info on how to use this script at:
# https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/asr_language_modeling.html
"""
import contextlib
import json
import os
import pickle
import tempfile
from dataclasses import dataclass, field, is_dataclass
from pathlib import Path
from typing import List, Optional
import editdistance
import numpy as np
import torch
from omegaconf import MISSING, OmegaConf
from sklearn.model_selection import ParameterGrid
from tqdm.auto import tqdm
import nemo.collections.asr as nemo_asr
from nemo.collections.asr.parts.submodules import rnnt_beam_decoding
from nemo.core.config import hydra_runner
from nemo.utils import logging
# fmt: off
@dataclass
class EvalBeamSearchNGramConfig:
"""
Evaluate an ASR model with beam search decoding and n-gram KenLM language model.
"""
# # The path of the '.nemo' file of the ASR model or the name of a pretrained model (ngc / huggingface)
nemo_model_file: str = MISSING
# File paths
input_manifest: str = MISSING # The manifest file of the evaluation set
kenlm_model_file: Optional[str] = None # The path of the KenLM binary model file
preds_output_folder: Optional[str] = None # The optional folder where the predictions are stored
probs_cache_file: Optional[str] = None # The cache file for storing the logprobs of the model
# Parameters for inference
acoustic_batch_size: int = 128 # The batch size to calculate log probabilities
beam_batch_size: int = 128 # The batch size to be used for beam search decoding
device: str = "cuda" # The device to load the model onto to calculate log probabilities
use_amp: bool = False # Whether to use AMP if available to calculate log probabilities
num_workers: int = 1 # Number of workers for DataLoader
# The decoding scheme to be used for evaluation
decoding_strategy: str = "greedy_batch" # ["greedy_batch", "beam", "tsd", "alsd", "maes"]
# Beam Search hyperparameters
beam_width: List[int] = field(default_factory=lambda: [8]) # The width or list of the widths for the beam search decoding
beam_alpha: List[float] = field(default_factory=lambda: [0.2]) # The alpha parameter or list of the alphas for the beam search decoding
maes_prefix_alpha: List[int] = field(default_factory=lambda: [2]) # The maes_prefix_alpha or list of the maes_prefix_alpha for the maes decoding
maes_expansion_gamma: List[float] = field(default_factory=lambda: [2.3]) # The maes_expansion_gamma or list of the maes_expansion_gamma for the maes decoding
# HAT related parameters (only for internal lm subtraction)
hat_subtract_ilm: bool = False
hat_ilm_weight: List[float] = field(default_factory=lambda: [0.0])
decoding: rnnt_beam_decoding.BeamRNNTInferConfig = rnnt_beam_decoding.BeamRNNTInferConfig(beam_size=128)
# fmt: on
def decoding_step(
model: nemo_asr.models.ASRModel,
cfg: EvalBeamSearchNGramConfig,
all_probs: List[torch.Tensor],
target_transcripts: List[str],
preds_output_file: str = None,
beam_batch_size: int = 128,
progress_bar: bool = True,
):
level = logging.getEffectiveLevel()
logging.setLevel(logging.CRITICAL)
# Reset config
model.change_decoding_strategy(None)
cfg.decoding.hat_ilm_weight = cfg.decoding.hat_ilm_weight * cfg.hat_subtract_ilm
# Override the beam search config with current search candidate configuration
cfg.decoding.return_best_hypothesis = False
cfg.decoding.ngram_lm_model = cfg.kenlm_model_file
cfg.decoding.hat_subtract_ilm = cfg.hat_subtract_ilm
# Update model's decoding strategy config
model.cfg.decoding.strategy = cfg.decoding_strategy
model.cfg.decoding.beam = cfg.decoding
# Update model's decoding strategy
model.change_decoding_strategy(model.cfg.decoding)
logging.setLevel(level)
wer_dist_first = cer_dist_first = 0
wer_dist_best = cer_dist_best = 0
words_count = 0
chars_count = 0
sample_idx = 0
if preds_output_file:
out_file = open(preds_output_file, 'w', encoding='utf_8', newline='\n')
if progress_bar:
if cfg.decoding_strategy == "greedy_batch":
description = "Greedy_batch decoding.."
else:
description = f"{cfg.decoding_strategy} decoding with bw={cfg.decoding.beam_size}, ba={cfg.decoding.ngram_lm_alpha}, ma={cfg.decoding.maes_prefix_alpha}, mg={cfg.decoding.maes_expansion_gamma}, hat_ilmw={cfg.decoding.hat_ilm_weight}"
it = tqdm(range(int(np.ceil(len(all_probs) / beam_batch_size))), desc=description, ncols=120)
else:
it = range(int(np.ceil(len(all_probs) / beam_batch_size)))
for batch_idx in it:
# disabling type checking
probs_batch = all_probs[batch_idx * beam_batch_size : (batch_idx + 1) * beam_batch_size]
probs_lens = torch.tensor([prob.shape[-1] for prob in probs_batch])
with torch.no_grad():
packed_batch = torch.zeros(len(probs_batch), probs_batch[0].shape[0], max(probs_lens), device='cpu')
for prob_index in range(len(probs_batch)):
packed_batch[prob_index, :, : probs_lens[prob_index]] = torch.tensor(
probs_batch[prob_index].unsqueeze(0), device=packed_batch.device, dtype=packed_batch.dtype
)
best_hyp_batch, beams_batch = model.decoding.rnnt_decoder_predictions_tensor(
packed_batch, probs_lens, return_hypotheses=True,
)
if cfg.decoding_strategy == "greedy_batch":
beams_batch = [[x] for x in best_hyp_batch]
for beams_idx, beams in enumerate(beams_batch):
target = target_transcripts[sample_idx + beams_idx]
target_split_w = target.split()
target_split_c = list(target)
words_count += len(target_split_w)
chars_count += len(target_split_c)
wer_dist_min = cer_dist_min = 10000
for candidate_idx, candidate in enumerate(beams): # type: (int, rnnt_beam_decoding.rnnt_utils.Hypothesis)
pred_text = candidate.text
pred_split_w = pred_text.split()
wer_dist = editdistance.eval(target_split_w, pred_split_w)
pred_split_c = list(pred_text)
cer_dist = editdistance.eval(target_split_c, pred_split_c)
wer_dist_min = min(wer_dist_min, wer_dist)
cer_dist_min = min(cer_dist_min, cer_dist)
if candidate_idx == 0:
# first candidate
wer_dist_first += wer_dist
cer_dist_first += cer_dist
score = candidate.score
if preds_output_file:
out_file.write('{}\t{}\n'.format(pred_text, score))
wer_dist_best += wer_dist_min
cer_dist_best += cer_dist_min
sample_idx += len(probs_batch)
if cfg.decoding_strategy == "greedy_batch":
return wer_dist_first / words_count, cer_dist_first / chars_count
if preds_output_file:
out_file.close()
logging.info(f"Stored the predictions of {cfg.decoding_strategy} decoding at '{preds_output_file}'.")
if cfg.decoding.ngram_lm_model:
logging.info(
f"WER/CER with {cfg.decoding_strategy} decoding and N-gram model = {wer_dist_first / words_count:.2%}/{cer_dist_first / chars_count:.2%}"
)
else:
logging.info(
f"WER/CER with {cfg.decoding_strategy} decoding = {wer_dist_first / words_count:.2%}/{cer_dist_first / chars_count:.2%}"
)
logging.info(
f"Oracle WER/CER in candidates with perfect LM= {wer_dist_best / words_count:.2%}/{cer_dist_best / chars_count:.2%}"
)
logging.info(f"=================================================================================")
return wer_dist_first / words_count, cer_dist_first / chars_count
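The WER/CER accounting in `decoding_step` reduces to summing edit distances over word and character tokenizations of each (target, prediction) pair. A minimal sketch, using a pure-Python Levenshtein distance as a stand-in for `editdistance.eval` (the example strings are made up for illustration):

```python
# Pure-Python Wagner-Fischer edit distance; stands in for editdistance.eval.
def levenshtein(ref, hyp):
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (r != h)))  # substitution
        prev = cur
    return prev[-1]


target, pred = "the cat sat", "the cat sit"
# WER over word tokens, CER over characters, as in decoding_step
wer = levenshtein(target.split(), pred.split()) / len(target.split())
cer = levenshtein(list(target), list(pred)) / len(list(target))
print(f"WER={wer:.2%} CER={cer:.2%}")  # WER=33.33% CER=9.09%
```

In the script the per-utterance distances are accumulated over the whole dataset before dividing, so the reported numbers are corpus-level rates rather than averages of per-utterance rates.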
@hydra_runner(config_path=None, config_name='EvalBeamSearchNGramConfig', schema=EvalBeamSearchNGramConfig)
def main(cfg: EvalBeamSearchNGramConfig):
if is_dataclass(cfg):
cfg = OmegaConf.structured(cfg) # type: EvalBeamSearchNGramConfig
valid_decoding_strategies = ["greedy_batch", "beam", "tsd", "alsd", "maes"]
if cfg.decoding_strategy not in valid_decoding_strategies:
raise ValueError(
f"Given decoding_strategy={cfg.decoding_strategy} is invalid. Available options are:\n"
f"{valid_decoding_strategies}"
)
if cfg.nemo_model_file.endswith('.nemo'):
asr_model = nemo_asr.models.ASRModel.restore_from(cfg.nemo_model_file, map_location=torch.device(cfg.device))
else:
logging.warning(
"nemo_model_file does not end with .nemo, therefore trying to load a pretrained model with this name."
)
asr_model = nemo_asr.models.ASRModel.from_pretrained(
cfg.nemo_model_file, map_location=torch.device(cfg.device)
)
if cfg.kenlm_model_file:
if not os.path.exists(cfg.kenlm_model_file):
raise FileNotFoundError(f"Could not find the KenLM model file '{cfg.kenlm_model_file}'.")
if cfg.decoding_strategy != "maes":
raise ValueError("Decoding with a KenLM model is supported only for the maes decoding algorithm.")
lm_path = cfg.kenlm_model_file
else:
lm_path = None
cfg.beam_alpha = [0.0]
if cfg.hat_subtract_ilm:
assert lm_path, "a KenLM model must be set for HAT internal LM subtraction"
if cfg.decoding_strategy != "maes":
cfg.maes_prefix_alpha, cfg.maes_expansion_gamma, cfg.hat_ilm_weight = [0], [0], [0]
target_transcripts = []
manifest_dir = Path(cfg.input_manifest).parent
with open(cfg.input_manifest, 'r', encoding='utf_8') as manifest_file:
audio_file_paths = []
for line in tqdm(manifest_file, desc=f"Reading Manifest {cfg.input_manifest} ...", ncols=120):
data = json.loads(line)
audio_file = Path(data['audio_filepath'])
if not audio_file.is_file() and not audio_file.is_absolute():
audio_file = manifest_dir / audio_file
target_transcripts.append(data['text'])
audio_file_paths.append(str(audio_file.absolute()))
if cfg.probs_cache_file and os.path.exists(cfg.probs_cache_file):
logging.info(f"Found a pickle file of probabilities at '{cfg.probs_cache_file}'.")
logging.info(f"Loading the cached pickle file of probabilities from '{cfg.probs_cache_file}' ...")
with open(cfg.probs_cache_file, 'rb') as probs_file:
all_probs = pickle.load(probs_file)
if len(all_probs) != len(audio_file_paths):
raise ValueError(
f"The number of samples in the probabilities file '{cfg.probs_cache_file}' does not "
f"match the manifest file. You may need to delete the probabilities cached file."
)
else:
@contextlib.contextmanager
def default_autocast():
yield
if cfg.use_amp:
if torch.cuda.is_available() and hasattr(torch.cuda, 'amp') and hasattr(torch.cuda.amp, 'autocast'):
logging.info("AMP is enabled!\n")
autocast = torch.cuda.amp.autocast
else:
autocast = default_autocast
else:
autocast = default_autocast
# manual calculation of encoder_embeddings
with autocast():
with torch.no_grad():
asr_model.eval()
asr_model.encoder.freeze()
device = next(asr_model.parameters()).device
all_probs = []
with tempfile.TemporaryDirectory() as tmpdir:
with open(os.path.join(tmpdir, 'manifest.json'), 'w', encoding='utf-8') as fp:
for audio_file in audio_file_paths:
entry = {'audio_filepath': audio_file, 'duration': 100000, 'text': ''}
fp.write(json.dumps(entry) + '\n')
config = {
'paths2audio_files': audio_file_paths,
'batch_size': cfg.acoustic_batch_size,
'temp_dir': tmpdir,
'num_workers': cfg.num_workers,
'channel_selector': None,
'augmentor': None,
}
temporary_datalayer = asr_model._setup_transcribe_dataloader(config)
for test_batch in tqdm(temporary_datalayer, desc="Transcribing", disable=True):
encoded, encoded_len = asr_model.forward(
input_signal=test_batch[0].to(device), input_signal_length=test_batch[1].to(device)
)
# dump encoder embeddings per file
for idx in range(encoded.shape[0]):
encoded_no_pad = encoded[idx, :, : encoded_len[idx]]
all_probs.append(encoded_no_pad)
if cfg.probs_cache_file:
logging.info(f"Writing pickle files of probabilities at '{cfg.probs_cache_file}'...")
with open(cfg.probs_cache_file, 'wb') as f_dump:
pickle.dump(all_probs, f_dump)
if cfg.decoding_strategy == "greedy_batch":
asr_model = asr_model.to('cpu')
candidate_wer, candidate_cer = decoding_step(
asr_model,
cfg,
all_probs=all_probs,
target_transcripts=target_transcripts,
beam_batch_size=cfg.beam_batch_size,
progress_bar=True,
)
logging.info(f"Greedy batch WER/CER = {candidate_wer:.2%}/{candidate_cer:.2%}")
asr_model = asr_model.to('cpu')
# 'greedy_batch' decoding_strategy would skip the beam search decoding
if cfg.decoding_strategy in ["beam", "tsd", "alsd", "maes"]:
if cfg.beam_width is None or cfg.beam_alpha is None:
raise ValueError("beam_width and beam_alpha are needed to perform beam search decoding.")
params = {
'beam_width': cfg.beam_width,
'beam_alpha': cfg.beam_alpha,
'maes_prefix_alpha': cfg.maes_prefix_alpha,
'maes_expansion_gamma': cfg.maes_expansion_gamma,
'hat_ilm_weight': cfg.hat_ilm_weight,
}
hp_grid = ParameterGrid(params)
hp_grid = list(hp_grid)
best_wer_beam_size, best_cer_beam_size = None, None
best_wer_alpha, best_cer_alpha = None, None
best_wer, best_cer = 1e6, 1e6
logging.info(
f"==============================Starting the {cfg.decoding_strategy} decoding==============================="
)
logging.info(f"Grid search size: {len(hp_grid)}")
logging.info(f"It may take some time...")
logging.info(f"==============================================================================================")
if cfg.preds_output_folder and not os.path.exists(cfg.preds_output_folder):
os.mkdir(cfg.preds_output_folder)
for hp in hp_grid:
if cfg.preds_output_folder:
results_file = f"preds_out_{cfg.decoding_strategy}_bw{hp['beam_width']}"
if cfg.decoding_strategy == "maes":
results_file = f"{results_file}_ma{hp['maes_prefix_alpha']}_mg{hp['maes_expansion_gamma']}"
if cfg.kenlm_model_file:
results_file = f"{results_file}_ba{hp['beam_alpha']}"
if cfg.hat_subtract_ilm:
results_file = f"{results_file}_hat_ilmw{hp['hat_ilm_weight']}"
preds_output_file = os.path.join(cfg.preds_output_folder, f"{results_file}.tsv")
else:
preds_output_file = None
cfg.decoding.beam_size = hp["beam_width"]
cfg.decoding.ngram_lm_alpha = hp["beam_alpha"]
cfg.decoding.maes_prefix_alpha = hp["maes_prefix_alpha"]
cfg.decoding.maes_expansion_gamma = hp["maes_expansion_gamma"]
cfg.decoding.hat_ilm_weight = hp["hat_ilm_weight"]
candidate_wer, candidate_cer = decoding_step(
asr_model,
cfg,
all_probs=all_probs,
target_transcripts=target_transcripts,
preds_output_file=preds_output_file,
beam_batch_size=cfg.beam_batch_size,
progress_bar=True,
)
if candidate_cer < best_cer:
best_cer_beam_size = hp["beam_width"]
best_cer_alpha = hp["beam_alpha"]
best_cer_ma = hp["maes_prefix_alpha"]
best_cer_mg = hp["maes_expansion_gamma"]
best_cer_hat_ilm_weight = hp["hat_ilm_weight"]
best_cer = candidate_cer
if candidate_wer < best_wer:
best_wer_beam_size = hp["beam_width"]
best_wer_alpha = hp["beam_alpha"]
best_wer_ma = hp["maes_prefix_alpha"]
best_wer_ga = hp["maes_expansion_gamma"]
best_wer_hat_ilm_weight = hp["hat_ilm_weight"]
best_wer = candidate_wer
wer_hat_parameter = ""
if cfg.hat_subtract_ilm:
wer_hat_parameter = f"HAT ilm weight = {best_wer_hat_ilm_weight}, "
logging.info(
f'Best WER Candidate = {best_wer:.2%} :: Beam size = {best_wer_beam_size}, '
f'Beam alpha = {best_wer_alpha}, {wer_hat_parameter}'
f'maes_prefix_alpha = {best_wer_ma}, maes_expansion_gamma = {best_wer_ga} '
)
cer_hat_parameter = ""
if cfg.hat_subtract_ilm:
cer_hat_parameter = f"HAT ilm weight = {best_cer_hat_ilm_weight}"
logging.info(
f'Best CER Candidate = {best_cer:.2%} :: Beam size = {best_cer_beam_size}, '
f'Beam alpha = {best_cer_alpha}, {cer_hat_parameter} '
f'maes_prefix_alpha = {best_cer_ma}, maes_expansion_gamma = {best_cer_mg}'
)
logging.info(f"=================================================================================")
if __name__ == '__main__':
main()
[end of scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram_transducer.py]
[start of scripts/confidence_ensembles/build_ensemble.py]
# Copyright (c) 2023, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
This script provides a functionality to create confidence-based ensembles
from a collection of pretrained models.
For more details see the paper https://arxiv.org/abs/2306.15824
or tutorial in tutorials/asr/Confidence_Ensembles.ipynb
You would typically use this script by providing a yaml config file or overriding
default options from the command line.
Usage examples:
1. Building an ensemble of two monolingual models with default settings (no confidence tuning).
python build_ensemble.py --config-path=. --config-name=ensemble_config.yaml
ensemble.0.model=stt_it_conformer_ctc_large
ensemble.0.training_manifest=<path to the Italian data of 100+ utterances (no transcription required)>
ensemble.1.model=stt_es_conformer_ctc_large
ensemble.1.training_manifest=<path to the Spanish data of 100+ utterances (no transcription required)>
output_path=<path to the desired location of the .nemo checkpoint>
You can have more than 2 models and can control transcription settings (e.g., batch size)
with ``transcription.<any argument of examples/asr/transcribe_speech.py>`` parameters.
2. If you want to get improved results, you can enable tuning of the confidence and logistic regression (LR) parameters.
E.g.
python build_ensemble.py
<all arguments like in the previous example>
ensemble.0.dev_manifest=<path to the dev data that's required for tuning>
...
# IMPORTANT: see the note below if you use > 2 models!
ensemble.N.dev_manifest=<path to the dev data that's required for tuning>
tune_confidence=True # to allow confidence tuning. LR is tuned by default
As with any tuning, it is recommended to have a reasonably large validation set for each model,
otherwise you might overfit to the validation data.
Note that if you add additional models (> 2) you will need to modify ensemble_config.yaml
or create a new one with the added models in there. While it's theoretically possible to
fully override such parameters from the command line, hydra is very unfriendly for such
use-cases, so it's strongly recommended to create new configs.
3. If you want to precisely control tuning grid search, you can do that with
python build_ensemble.py
<all arguments as in the previous examples>
tune_confidence_config.confidence_type='[entropy_renyi_exp,entropy_tsallis_exp]' # only tune over this set
tune_confidence_config.alpha='[0.1,0.5,1.0]' # only tune over this set
You can check the dataclasses in this file for the full list of supported
arguments and their default values.
"""
import atexit
# using default logging to be able to silence unnecessary messages from nemo
import logging
import os
import random
import sys
import tempfile
from copy import deepcopy
from dataclasses import dataclass
from pathlib import Path
from typing import Dict, List, Optional, Tuple
import joblib
import numpy as np
import pytorch_lightning as pl
from omegaconf import MISSING, DictConfig, OmegaConf
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.pipeline import Pipeline, make_pipeline
from sklearn.preprocessing import StandardScaler
from tqdm import tqdm
from nemo.collections.asr.models.confidence_ensemble import (
ConfidenceEnsembleModel,
ConfidenceSpec,
compute_confidence,
get_filtered_logprobs,
)
from nemo.collections.asr.parts.utils.asr_confidence_utils import (
ConfidenceConfig,
ConfidenceMeasureConfig,
get_confidence_aggregation_bank,
get_confidence_measure_bank,
)
from nemo.collections.asr.parts.utils.rnnt_utils import Hypothesis
from nemo.core.config import hydra_runner
LOG = logging.getLogger(__file__)
# add examples/asr to the Python path; if the import still fails, ask the user to fetch the file
try:
sys.path.append(str(Path(__file__).parents[2] / "examples" / "asr"))
import transcribe_speech
except ImportError:
# if users run script normally from nemo repo, this shouldn't be triggered as
# we modify the path above. But if they downloaded the build_ensemble.py as
# an isolated script, we'd ask them to also download corresponding version
# of the transcribe_speech.py
print(
"Current script depends on 'examples/asr/transcribe_speech.py', but can't find it. "
"If it's not present, download it from the NeMo github manually and put it inside this folder."
)
@dataclass
class EnsembleConfig:
# .nemo path or pretrained name
model: str = MISSING
# path to the training data manifest (non-tarred)
training_manifest: str = MISSING
# specify to limit the number of training samples
# 100 is most likely enough, but setting higher default just in case
max_training_samples: int = 1000
# specify to provide dev data manifest for HP tuning
dev_manifest: Optional[str] = None
@dataclass
class TuneConfidenceConfig:
# important parameter, so should always be tuned
exclude_blank: Tuple[bool] = (True, False)
# prod is pretty much always worse, so not including by default
aggregation: Tuple[str] = ("mean", "min", "max")
# not including max prob, as there is always an entropy-based metric
# that's better but otherwise including everything
confidence_type: Tuple[str] = (
"entropy_renyi_exp",
"entropy_renyi_lin",
"entropy_tsallis_exp",
"entropy_tsallis_lin",
"entropy_gibbs_lin",
"entropy_gibbs_exp",
)
# TODO: currently it's not possible to efficiently tune temperature, as we always
# apply log-softmax in the decoder, so to try different values it will be required
# to rerun the decoding, which is very slow. To support this for one-off experiments
# it's possible to modify the code of CTC decoder / Transducer joint to
# remove log-softmax and then apply it directly in this script with the temperature
#
# Alternatively, one can run this script multiple times with different values of
# temperature and pick the best performing ensemble. Note that this will increase
# tuning time by the number of temperature values tried. On the other hand,
# the above approach is a lot more efficient and will only slightly increase
# the total tuning runtime.
# very important to tune for max prob, but for entropy metrics 1.0 is almost always best
# temperature: Tuple[float] = (1.0,)
# not that important, but can sometimes make a small difference
alpha: Tuple[float] = (0.25, 0.33, 0.5, 1.0)
def get_grid_size(self) -> int:
"""Returns the total number of points in the search space."""
if "max_prob" in self.confidence_type:
return (
len(self.exclude_blank)
* len(self.aggregation)
* ((len(self.confidence_type) - 1) * len(self.alpha) + 1)
)
return len(self.exclude_blank) * len(self.aggregation) * len(self.confidence_type) * len(self.alpha)
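The arithmetic in `get_grid_size` can be checked in isolation; `grid_size` below is an illustrative free-function copy (the tuple arguments mirror the dataclass fields, and "max_prob" contributes a single point per (exclude_blank, aggregation) pair because it ignores alpha):

```python
# Illustrative copy of TuneConfidenceConfig.get_grid_size()
def grid_size(exclude_blank, aggregation, confidence_type, alpha):
    if "max_prob" in confidence_type:
        return (
            len(exclude_blank)
            * len(aggregation)
            * ((len(confidence_type) - 1) * len(alpha) + 1)
        )
    return len(exclude_blank) * len(aggregation) * len(confidence_type) * len(alpha)


# Default config: 2 blank options x 3 aggregations x 6 entropy types x 4 alphas
print(grid_size(
    (True, False),
    ("mean", "min", "max"),
    ("entropy_renyi_exp", "entropy_renyi_lin", "entropy_tsallis_exp",
     "entropy_tsallis_lin", "entropy_gibbs_lin", "entropy_gibbs_exp"),
    (0.25, 0.33, 0.5, 1.0),
))  # 144
```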
@dataclass
class TuneLogisticRegressionConfig:
# will have log-uniform grid over this range with that many points
# note that a value of 10000.0 (not regularization) is always added
C_num_points: int = 10
C_min: float = 0.0001
C_max: float = 10.0
# not too important
multi_class: Tuple[str] = ("ovr", "multinomial")
# should try to include weights directly if the data is too imbalanced
class_weight: Tuple = (None, "balanced")
# increase if getting many warnings that algorithm didn't converge
max_iter: int = 1000
@dataclass
class BuildEnsembleConfig:
# where to save the resulting ensemble model
output_path: str = MISSING
# each model specification
ensemble: List[EnsembleConfig] = MISSING
random_seed: int = 0 # for reproducibility
# default confidence, can override
confidence: ConfidenceConfig = ConfidenceConfig(
# we keep frame confidences and apply aggregation manually to get full-utterance confidence
preserve_frame_confidence=True,
exclude_blank=True,
aggregation="mean",
measure_cfg=ConfidenceMeasureConfig(name="entropy", entropy_type="renyi", alpha=0.25, entropy_norm="lin",),
)
temperature: float = 1.0
# this is optional, but can be used to change any aspect of the transcription
# config, such as batch size or amp usage. Note that model, data and confidence
# will be overridden by this script
transcription: transcribe_speech.TranscriptionConfig = transcribe_speech.TranscriptionConfig()
# set to True to tune the confidence.
# requires dev manifests to be specified for each model
tune_confidence: bool = False
# used to specify what to tune over. By default runs tuning over some
# reasonable grid, so that it does not take forever.
# Can be changed as needed
tune_confidence_config: TuneConfidenceConfig = TuneConfidenceConfig()
# very fast to tune and can be important in case of imbalanced datasets
# will automatically set to False if dev data is not available
tune_logistic_regression: bool = True
tune_logistic_regression_config: TuneLogisticRegressionConfig = TuneLogisticRegressionConfig()
def __post_init__(self):
"""Checking that if any dev data is provided, all are provided.
Will also auto-set tune_logistic_regression to False if no dev data
is available.
If tune_confidence is set to True (user choice) and no dev data is
provided, will raise an error.
"""
num_dev_data = 0
for ensemble_cfg in self.ensemble:
num_dev_data += ensemble_cfg.dev_manifest is not None
if num_dev_data == 0:
if self.tune_confidence:
raise ValueError("tune_confidence is set to True, but no dev data is provided")
LOG.info("Setting tune_logistic_regression = False since no dev data is provided")
self.tune_logistic_regression = False
return
if num_dev_data < len(self.ensemble):
raise ValueError(
"Some ensemble configs specify dev data, but some don't. Either all have to specify it or none!"
)
def calculate_score(features: np.ndarray, labels: np.ndarray, pipe: Pipeline) -> Tuple[float, np.ndarray]:
"""Computes the classification score together with the confusion matrix.
The score is the overall accuracy: the sum of the confusion-matrix
diagonal divided by the total number of objects.
Args:
features: numpy array of features of shape [N x D], where N is the
number of objects (typically a total number of utterances in
all datasets) and D is the total number of confidence scores
used to train the model (typically = number of models).
labels: numpy array of shape [N] containing ground-truth model indices.
pipe: classification pipeline (currently, standardization + logistic
regression).
Returns:
tuple: score value in [0, 1] and full classification confusion matrix.
"""
predictions = pipe.predict(features)
conf_m = confusion_matrix(labels, predictions)
score = np.diag(conf_m).sum() / conf_m.sum()
return score, conf_m
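On a toy example (labels and predictions are made up for illustration), the accuracy read off the confusion-matrix diagonal works out as follows; this mirrors the `np.diag(conf_m).sum() / conf_m.sum()` line above, with the confusion matrix built by hand instead of via sklearn:

```python
import numpy as np

# Hypothetical routing results: labels are the "correct" model index per
# utterance, predictions are what the selection pipeline chose.
labels = np.array([0, 0, 1, 1, 1, 2])
predictions = np.array([0, 1, 1, 1, 2, 2])

n_models = 3
conf_m = np.zeros((n_models, n_models), dtype=int)
for t, p in zip(labels, predictions):
    conf_m[t, p] += 1  # rows = true model, columns = predicted model

score = np.diag(conf_m).sum() / conf_m.sum()
print(score)  # 4 of 6 utterances on the diagonal -> ~0.667
```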
def train_model_selection(
training_features: np.ndarray,
training_labels: np.ndarray,
dev_features: Optional[np.ndarray] = None,
dev_labels: Optional[np.ndarray] = None,
tune_lr: bool = False,
tune_lr_cfg: Optional[TuneLogisticRegressionConfig] = None,
verbose: bool = False,
) -> Tuple[Pipeline, float]:
"""Trains model selection block with an (optional) tuning of the parameters.
Returns a pipeline consisting of feature standardization and logistic
regression. If tune_lr is set to True, dev features/labels will be used
to tune the hyperparameters of the logistic regression with the grid
search that's defined via ``tune_lr_cfg``.
If no tuning is requested, uses the following parameters::
best_pipe = make_pipeline(
StandardScaler(),
LogisticRegression(
multi_class="multinomial",
C=10000.0,
max_iter=1000,
class_weight="balanced",
),
)
Args:
training_features: numpy array of features of shape [N x D], where N is
the number of objects (typically a total number of utterances in
all training datasets) and D is the total number of confidence
scores used to train the model (typically = number of models).
training_labels: numpy array of shape [N] containing ground-truth
model indices.
dev_features: same as training, but for the validation subset.
dev_labels: same as training, but for the validation subset.
tune_lr: controls whether tuning of LR hyperparameters is performed.
If set to True, it's required to also provide dev features/labels.
tune_lr_cfg: specifies what values of LR hyperparameters to try.
verbose: if True, will output final training/dev scores.
Returns:
tuple: trained model selection pipeline, best score (or -1 if no tuning
was done).
"""
if not tune_lr:
# default parameters: C=10000.0 disables regularization
best_pipe = make_pipeline(
StandardScaler(),
LogisticRegression(multi_class="multinomial", C=10000.0, max_iter=1000, class_weight="balanced"),
)
max_score = -1
else:
C_pms = np.append(
np.exp(np.linspace(np.log(tune_lr_cfg.C_min), np.log(tune_lr_cfg.C_max), tune_lr_cfg.C_num_points)),
10000.0,
)
max_score = 0
best_pipe = None
for class_weight in tune_lr_cfg.class_weight:
for multi_class in tune_lr_cfg.multi_class:
for C in C_pms:
pipe = make_pipeline(
StandardScaler(),
LogisticRegression(
multi_class=multi_class, C=C, max_iter=tune_lr_cfg.max_iter, class_weight=class_weight
),
)
pipe.fit(training_features, training_labels)
score, confusion = calculate_score(dev_features, dev_labels, pipe)
if score > max_score:
max_score = score
best_pipe = pipe
best_pipe.fit(training_features, training_labels)
if verbose:
accuracy, confusion = calculate_score(training_features, training_labels, best_pipe)
LOG.info("Training fit accuracy: %.4f", accuracy * 100.0)
LOG.info("Training confusion matrix:\n%s", str(confusion))
if dev_features is not None and verbose:
accuracy, confusion = calculate_score(dev_features, dev_labels, best_pipe)
LOG.info("Dev fit accuracy: %.4f", accuracy * 100.0)
LOG.info("Dev confusion matrix:\n%s", str(confusion))
return best_pipe, max_score
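The `C_pms` grid above is log-uniform: points are evenly spaced in log-space between `C_min` and `C_max`, with `10000.0` appended as the effectively-unregularized setting. A standalone sketch using the default `TuneLogisticRegressionConfig` values:

```python
import numpy as np

# Default TuneLogisticRegressionConfig values
C_min, C_max, C_num_points = 0.0001, 10.0, 10

# Evenly spaced in log-space, then exponentiated back; 10000.0 is appended
# as the "no regularization" point.
C_pms = np.append(
    np.exp(np.linspace(np.log(C_min), np.log(C_max), C_num_points)), 10000.0
)
print(np.round(C_pms, 4))

# Log-uniform spacing means consecutive ratios within the grid are constant:
ratios = C_pms[1:C_num_points] / C_pms[: C_num_points - 1]
print(np.allclose(ratios, ratios[0]))  # True
```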
def subsample_manifest(manifest_file: str, max_samples: int) -> str:
"""Will save a subsampled version of the manifest to the same folder.
Have to save to the same folder to support relative paths.
Args:
manifest_file: path to the manifest file that needs subsampling.
max_samples: how many samples to retain. Will randomly select that
many lines from the manifest.
Returns:
str: the path to the subsampled manifest file.
"""
with open(manifest_file, "rt", encoding="utf-8") as fin:
lines = fin.readlines()
if max_samples < len(lines):
lines = random.sample(lines, max_samples)
output_file = manifest_file + "-subsampled"
with open(output_file, "wt", encoding="utf-8") as fout:
fout.write("".join(lines))
return output_file
def cleanup_subsampled_manifests(subsampled_manifests: List[str]):
"""Removes all generated subsamples manifests."""
for manifest in subsampled_manifests:
os.remove(manifest)
def compute_all_confidences(
hypothesis: Hypothesis, tune_confidence_cfg: TuneConfidenceConfig
) -> Dict[ConfidenceSpec, float]:
"""Computes a set of confidence scores from a given hypothesis.
Works with the output of both CTC and Transducer decoding.
Args:
hypothesis: generated hypothesis as returned from the transcribe
method of the ASR model.
tune_confidence_cfg: config specifying what confidence scores to
compute.
Returns:
dict: dictionary with confidence spec -> confidence score mapping.
"""
conf_values = {}
for exclude_blank in tune_confidence_cfg.exclude_blank:
filtered_logprobs = get_filtered_logprobs(hypothesis, exclude_blank)
vocab_size = filtered_logprobs.shape[1]
for aggregation in tune_confidence_cfg.aggregation:
aggr_func = get_confidence_aggregation_bank()[aggregation]
for conf_type in tune_confidence_cfg.confidence_type:
conf_func = get_confidence_measure_bank()[conf_type]
if conf_type == "max_prob": # skipping alpha in this case
conf_value = aggr_func(conf_func(filtered_logprobs, v=vocab_size, t=1.0)).cpu().item()
conf_values[ConfidenceSpec(exclude_blank, aggregation, conf_type, 1.0)] = conf_value
else:
for alpha in tune_confidence_cfg.alpha:
conf_value = aggr_func(conf_func(filtered_logprobs, v=vocab_size, t=alpha)).cpu().item()
conf_values[ConfidenceSpec(exclude_blank, aggregation, conf_type, alpha)] = conf_value
return conf_values
def find_best_confidence(
train_confidences: List[List[Dict[ConfidenceSpec, float]]],
train_labels: List[int],
dev_confidences: List[List[Dict[ConfidenceSpec, float]]],
dev_labels: List[int],
tune_lr: bool,
tune_lr_config: TuneLogisticRegressionConfig,
) -> Tuple[ConfidenceConfig, Pipeline]:
"""Finds the best confidence configuration for model selection.
Will loop over all values in the confidence dictionary and fit the LR
model (optionally tuning its HPs). The best performing confidence (on the
dev set) will be used for the final LR model.
Args:
train_confidences: this is an object of type
``List[List[Dict[ConfidenceSpec, float]]]``. The shape of this
object is [M, N, S], where
M: number of models
N: number of utterances in all training sets
S: number of confidence scores to try
This argument will be used to construct np.array objects for each
of the confidence scores with the shape [M, N]
train_labels: ground-truth labels of the correct model for each data
points. This is a list of size [N]
dev_confidences: same as training, but for the validation subset.
dev_labels: same as training, but for the validation subset.
tune_lr: controls whether tuning of LR hyperparameters is performed.
tune_lr_config: specifies what values of LR hyperparameters to try.
Returns:
tuple: best confidence config, best model selection pipeline
"""
max_score = 0
best_pipe = None
best_conf_spec = None
LOG.info("Evaluating all confidences. Total grid size: %d", len(train_confidences[0][0].keys()))
for conf_spec in tqdm(train_confidences[0][0].keys()):
cur_train_confidences = []
for model_confs in train_confidences:
cur_train_confidences.append([])
for model_conf in model_confs:
cur_train_confidences[-1].append(model_conf[conf_spec])
cur_dev_confidences = []
for model_confs in dev_confidences:
cur_dev_confidences.append([])
for model_conf in model_confs:
cur_dev_confidences[-1].append(model_conf[conf_spec])
# transposing with zip(*list)
training_features = np.array(list(zip(*cur_train_confidences)))
training_labels = np.array(train_labels)
dev_features = np.array(list(zip(*cur_dev_confidences)))
dev_labels = np.array(dev_labels)
pipe, score = train_model_selection(
training_features, training_labels, dev_features, dev_labels, tune_lr, tune_lr_config,
)
if max_score < score:
max_score = score
best_pipe = pipe
best_conf_spec = conf_spec
LOG.info("Found better parameters: %s. New score: %.4f", str(conf_spec), max_score)
return best_conf_spec.to_confidence_config(), best_pipe
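The "transposing with zip(*list)" step inside `find_best_confidence` is what turns per-model confidence lists of shape [M models x N utterances] into a feature matrix with one row per utterance. A small sketch with made-up numbers:

```python
import numpy as np

per_model_confidences = [
    [0.9, 0.2, 0.4],  # model 0's confidence on utterances 0..2
    [0.1, 0.8, 0.6],  # model 1's confidence on utterances 0..2
]

# zip(*list) transposes [M x N] -> [N x M]: each row is now one utterance's
# feature vector, which is what the logistic regression expects.
features = np.array(list(zip(*per_model_confidences)))
print(features.shape)  # (3, 2): one row per utterance, one column per model
```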
@hydra_runner(config_name="BuildEnsembleConfig", schema=BuildEnsembleConfig)
def main(cfg: BuildEnsembleConfig):
# silencing all messages from nemo/ptl to avoid dumping tons of configs to the stdout
logging.getLogger('pytorch_lightning').setLevel(logging.CRITICAL)
logging.getLogger('nemo_logger').setLevel(logging.CRITICAL)
LOG.info(f'Build ensemble config:\n{OmegaConf.to_yaml(cfg)}')
# to ensure post init is called
cfg = BuildEnsembleConfig(**cfg)
pl.seed_everything(cfg.random_seed)
cfg.transcription.random_seed = None # seed is already applied
cfg.transcription.return_transcriptions = True
cfg.transcription.preserve_alignment = True
cfg.transcription.ctc_decoding.temperature = cfg.temperature
cfg.transcription.rnnt_decoding.temperature = cfg.temperature
# this ensures that generated output is after log-softmax for consistency with CTC
train_confidences = []
dev_confidences = []
train_labels = []
dev_labels = []
# registering clean-up function that will hold on to this list and
# should clean up even if there is partial error in some of the transcribe
# calls
subsampled_manifests = []
atexit.register(cleanup_subsampled_manifests, subsampled_manifests)
# note that we loop over the same config.
# This is intentional, as we need to run all models on all datasets
# this loop will do the following things:
# 1. Goes through each model X each training dataset
# 2. Computes predictions by directly calling transcribe_speech.main
# 3. Converts transcription to the confidence score(s) as specified in the config
# 4. If dev sets are provided, computes the same for them
# 5. Creates a list of ground-truth model indices by mapping each model
# to its own training dataset as specified in the config.
# 6. After the loop, we either run tuning over all confidence scores or
# directly use a single score to fit logistic regression and save the
# final ensemble model.
for model_idx, model_cfg in enumerate(cfg.ensemble):
train_model_confidences = []
dev_model_confidences = []
for data_idx, data_cfg in enumerate(cfg.ensemble):
if model_idx == 0: # generating subsampled manifests only one time
subsampled_manifests.append(
subsample_manifest(data_cfg.training_manifest, data_cfg.max_training_samples)
)
subsampled_manifest = subsampled_manifests[data_idx]
if model_cfg.model.endswith(".nemo"):
cfg.transcription.model_path = model_cfg.model
else: # assuming pretrained model
cfg.transcription.pretrained_name = model_cfg.model
cfg.transcription.dataset_manifest = subsampled_manifest
# training
with tempfile.NamedTemporaryFile() as output_file:
cfg.transcription.output_filename = output_file.name
LOG.info("Transcribing training dataset %d with model %d", data_idx, model_idx)
transcriptions = transcribe_speech.main(deepcopy(cfg.transcription))
LOG.info("Generating confidence scores")
# TODO: parallelize this loop?
for transcription in tqdm(transcriptions):
if cfg.tune_confidence:
train_model_confidences.append(
compute_all_confidences(transcription, cfg.tune_confidence_config)
)
else:
train_model_confidences.append(compute_confidence(transcription, cfg.confidence))
if model_idx == 0: # labels are the same for all models
train_labels.append(data_idx)
# optional dev
if data_cfg.dev_manifest is not None:
cfg.transcription.dataset_manifest = data_cfg.dev_manifest
with tempfile.NamedTemporaryFile() as output_file:
cfg.transcription.output_filename = output_file.name
LOG.info("Transcribing dev dataset %d with model %d", data_idx, model_idx)
transcriptions = transcribe_speech.main(deepcopy(cfg.transcription))
LOG.info("Generating confidence scores")
for transcription in tqdm(transcriptions):
if cfg.tune_confidence:
dev_model_confidences.append(
compute_all_confidences(transcription, cfg.tune_confidence_config)
)
else:
dev_model_confidences.append(compute_confidence(transcription, cfg.confidence))
if model_idx == 0: # labels are the same for all models
dev_labels.append(data_idx)
train_confidences.append(train_model_confidences)
if dev_model_confidences:
dev_confidences.append(dev_model_confidences)
if cfg.tune_confidence:
best_confidence, model_selection_block = find_best_confidence(
train_confidences,
train_labels,
dev_confidences,
dev_labels,
cfg.tune_logistic_regression,
cfg.tune_logistic_regression_config,
)
else:
best_confidence = cfg.confidence
# transposing with zip(*list)
training_features = np.array(list(zip(*train_confidences)))
training_labels = np.array(train_labels)
if dev_confidences:
dev_features = np.array(list(zip(*dev_confidences)))
dev_labels = np.array(dev_labels)
else:
dev_features = None
dev_labels = None
model_selection_block, _ = train_model_selection(
training_features,
training_labels,
dev_features,
dev_labels,
cfg.tune_logistic_regression,
cfg.tune_logistic_regression_config,
verbose=True,
)
with tempfile.TemporaryDirectory() as tmpdir:
model_selection_block_path = os.path.join(tmpdir, 'model_selection_block.pkl')
joblib.dump(model_selection_block, model_selection_block_path)
# creating ensemble checkpoint
ensemble_model = ConfidenceEnsembleModel(
cfg=DictConfig(
{
'model_selection_block': model_selection_block_path,
'confidence': best_confidence,
'temperature': cfg.temperature,
'load_models': [model_cfg.model for model_cfg in cfg.ensemble],
}
),
trainer=None,
)
ensemble_model.save_to(cfg.output_path)
if __name__ == '__main__':
main()
[end of scripts/confidence_ensembles/build_ensemble.py]
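The `zip(*list)` transposition used in `build_ensemble.py` above turns a models-by-utterances list of confidence scores into one feature row per utterance, which is the shape the model-selection block is trained on. A stdlib-only sketch with made-up scores (the script itself additionally wraps the result in `np.array`):

```python
# Hypothetical data: one row per model, one confidence score per utterance.
model_confidences = [
    [0.91, 0.12, 0.33],  # model 0 scored on utterances 0..2
    [0.15, 0.88, 0.41],  # model 1 scored on the same utterances
]

# zip(*rows) transposes: each output row now holds one utterance's scores
# across all models -- one feature vector per utterance.
features = [list(row) for row in zip(*model_confidences)]

print(features)  # [[0.91, 0.15], [0.12, 0.88], [0.33, 0.41]]
```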
[start of scripts/speech_recognition/confidence/benchmark_asr_confidence.py]
# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
import os
from dataclasses import dataclass, is_dataclass
from pathlib import Path
from typing import Optional
import pytorch_lightning as pl
import torch
from omegaconf import MISSING, OmegaConf
from sklearn.model_selection import ParameterGrid
from nemo.collections.asr.metrics.rnnt_wer import RNNTDecodingConfig
from nemo.collections.asr.metrics.wer import CTCDecodingConfig
from nemo.collections.asr.models import ASRModel, EncDecRNNTModel
from nemo.collections.asr.parts.utils.asr_confidence_benchmarking_utils import (
apply_confidence_parameters,
run_confidence_benchmark,
)
from nemo.collections.asr.parts.utils.asr_confidence_utils import ConfidenceConfig
from nemo.core.config import hydra_runner
from nemo.utils import logging, model_utils
"""
Get confidence metrics and curve plots for a given model, dataset, and confidence parameters.
# Arguments
model_path: Path to .nemo ASR checkpoint
pretrained_name: Name of pretrained ASR model (from NGC registry)
dataset_manifest: Path to dataset JSON manifest file (in NeMo format)
output_dir: Output directory to store a report and curve plot directories
batch_size: batch size during inference
num_workers: number of workers during inference
cuda: Optional int to enable or disable execution of model on certain CUDA device
amp: Bool to decide if Automatic Mixed Precision should be used during inference
audio_type: Str filetype of the audio. Supported = wav, flac, mp3
target_level: Word- or token-level confidence. Supported = word, token, auto (for computing both word and token)
confidence_cfg: Config with confidence parameters
grid_params: Dictionary with lists of parameters to iteratively benchmark on
# Usage
ASR model can be specified by either "model_path" or "pretrained_name".
Data for transcription are defined with "dataset_manifest".
Results are returned as a benchmark report and curve plots.
python benchmark_asr_confidence.py \
model_path=null \
pretrained_name=null \
dataset_manifest="" \
output_dir="" \
batch_size=64 \
num_workers=8 \
cuda=0 \
amp=True \
target_level="word" \
confidence_cfg.exclude_blank=False \
'grid_params="{\"aggregation\": [\"min\", \"prod\"], \"alpha\": [0.33, 0.5]}"'
"""
def get_experiment_params(cfg):
"""Get experiment parameters from a confidence config and generate the experiment name.
Returns:
List of experiment parameters.
String with the experiment name.
"""
blank = "no_blank" if cfg.exclude_blank else "blank"
aggregation = cfg.aggregation
method_name = cfg.measure_cfg.name
alpha = cfg.measure_cfg.alpha
if method_name == "entropy":
entropy_type = cfg.measure_cfg.entropy_type
entropy_norm = cfg.measure_cfg.entropy_norm
experiment_param_list = [
aggregation,
str(cfg.exclude_blank),
method_name,
entropy_type,
entropy_norm,
str(alpha),
]
experiment_str = "-".join([aggregation, blank, method_name, entropy_type, entropy_norm, str(alpha)])
else:
experiment_param_list = [aggregation, str(cfg.exclude_blank), method_name, "-", "-", str(alpha)]
experiment_str = "-".join([aggregation, blank, method_name, str(alpha)])
return experiment_param_list, experiment_str
@dataclass
class ConfidenceBenchmarkingConfig:
# Required configs
model_path: Optional[str] = None # Path to a .nemo file
pretrained_name: Optional[str] = None # Name of a pretrained model
dataset_manifest: str = MISSING
output_dir: str = MISSING
# General configs
batch_size: int = 32
num_workers: int = 4
# Set `cuda` to int to define CUDA device. If 'None', will look for CUDA
# device anyway, and do inference on CPU only if CUDA device is not found.
# If `cuda` is a negative number, inference will be on CPU only.
cuda: Optional[int] = None
amp: bool = False
audio_type: str = "wav"
# Confidence configs
target_level: str = "auto" # Choices: "word", "token", "auto" (for both word- and token-level confidence)
confidence_cfg: ConfidenceConfig = ConfidenceConfig(preserve_word_confidence=True, preserve_token_confidence=True)
grid_params: Optional[str] = None # a dictionary with lists of parameters to iteratively benchmark on
@hydra_runner(config_name="ConfidenceBenchmarkingConfig", schema=ConfidenceBenchmarkingConfig)
def main(cfg: ConfidenceBenchmarkingConfig):
torch.set_grad_enabled(False)
logging.info(f'Hydra config: {OmegaConf.to_yaml(cfg)}')
if is_dataclass(cfg):
cfg = OmegaConf.structured(cfg)
if cfg.model_path is None and cfg.pretrained_name is None:
raise ValueError("cfg.model_path and cfg.pretrained_name cannot both be None!")
# setup GPU
if cfg.cuda is None:
if torch.cuda.is_available():
device = [0] # use 0th CUDA device
accelerator = 'gpu'
else:
device = 1
accelerator = 'cpu'
elif cfg.cuda < 0:  # a negative number means CPU-only inference, per the config comment above
device = 1
accelerator = 'cpu'
else:
device = [cfg.cuda]
accelerator = 'gpu'
map_location = torch.device('cuda:{}'.format(device[0]) if accelerator == 'gpu' else 'cpu')
# setup model
if cfg.model_path is not None:
# restore model from .nemo file path
model_cfg = ASRModel.restore_from(restore_path=cfg.model_path, return_config=True)
classpath = model_cfg.target # original class path
imported_class = model_utils.import_class_by_path(classpath) # type: ASRModel
logging.info(f"Restoring model : {imported_class.__name__}")
asr_model = imported_class.restore_from(
restore_path=cfg.model_path, map_location=map_location
) # type: ASRModel
else:
# restore model by name
asr_model = ASRModel.from_pretrained(
model_name=cfg.pretrained_name, map_location=map_location
) # type: ASRModel
trainer = pl.Trainer(devices=device, accelerator=accelerator)
asr_model.set_trainer(trainer)
asr_model = asr_model.eval()
# Check if ctc or rnnt model
is_rnnt = isinstance(asr_model, EncDecRNNTModel)
# Check that the model has the `change_decoding_strategy` method
if not hasattr(asr_model, 'change_decoding_strategy'):
raise RuntimeError("The asr_model you are using must have the `change_decoding_strategy` method.")
# get filenames and reference texts from manifest
filepaths = []
reference_texts = []
if os.stat(cfg.dataset_manifest).st_size == 0:
logging.error(f"The input dataset_manifest {cfg.dataset_manifest} is empty. Exiting!")
return None
manifest_dir = Path(cfg.dataset_manifest).parent
with open(cfg.dataset_manifest, 'r') as f:
for line in f:
item = json.loads(line)
audio_file = Path(item['audio_filepath'])
if not audio_file.is_file() and not audio_file.is_absolute():
audio_file = manifest_dir / audio_file
filepaths.append(str(audio_file.absolute()))
reference_texts.append(item['text'])
# setup AMP (optional)
autocast = None
if cfg.amp and torch.cuda.is_available() and hasattr(torch.cuda, 'amp') and hasattr(torch.cuda.amp, 'autocast'):
logging.info("AMP enabled!\n")
autocast = torch.cuda.amp.autocast
# do grid-based benchmarking if grid_params is provided, otherwise a regular one
work_dir = Path(cfg.output_dir)
os.makedirs(work_dir, exist_ok=True)
report_legend = (
",".join(
[
"model_type",
"aggregation",
"blank",
"method_name",
"entropy_type",
"entropy_norm",
"alpha",
"target_level",
"auc_roc",
"auc_pr",
"auc_nt",
"nce",
"ece",
"auc_yc",
"std_yc",
"max_yc",
]
)
+ "\n"
)
model_typename = "RNNT" if is_rnnt else "CTC"
report_file = work_dir / Path("report.csv")
if cfg.grid_params:
asr_model.change_decoding_strategy(
RNNTDecodingConfig(fused_batch_size=-1, strategy="greedy_batch", confidence_cfg=cfg.confidence_cfg)
if is_rnnt
else CTCDecodingConfig(confidence_cfg=cfg.confidence_cfg)
)
params = json.loads(cfg.grid_params)
hp_grid = ParameterGrid(params)
hp_grid = list(hp_grid)
logging.info(f"==============================Running a benchmarking with grid search=========================")
logging.info(f"Grid search size: {len(hp_grid)}")
logging.info(f"Results will be written to:\nreport file `{report_file}`\nand plot directories near the file")
logging.info(f"==============================================================================================")
with open(report_file, "tw", encoding="utf-8") as f:
f.write(report_legend)
f.flush()
for i, hp in enumerate(hp_grid):
logging.info(f"Run # {i + 1}, grid: `{hp}`")
asr_model.change_decoding_strategy(apply_confidence_parameters(asr_model.cfg.decoding, hp))
param_list, experiment_name = get_experiment_params(asr_model.cfg.decoding.confidence_cfg)
plot_dir = work_dir / Path(experiment_name)
results = run_confidence_benchmark(
asr_model,
cfg.target_level,
filepaths,
reference_texts,
cfg.batch_size,
cfg.num_workers,
plot_dir,
autocast,
)
for level, result in results.items():
f.write(f"{model_typename},{','.join(param_list)},{level},{','.join([str(r) for r in result])}\n")
f.flush()
else:
asr_model.change_decoding_strategy(
RNNTDecodingConfig(fused_batch_size=-1, strategy="greedy_batch", confidence_cfg=cfg.confidence_cfg)
if is_rnnt
else CTCDecodingConfig(confidence_cfg=cfg.confidence_cfg)
)
param_list, experiment_name = get_experiment_params(asr_model.cfg.decoding.confidence_cfg)
plot_dir = work_dir / Path(experiment_name)
logging.info(f"==============================Running a single benchmarking===================================")
logging.info(f"Results will be written to:\nreport file `{report_file}`\nand plot directory `{plot_dir}`")
with open(report_file, "tw", encoding="utf-8") as f:
f.write(report_legend)
f.flush()
results = run_confidence_benchmark(
asr_model,
cfg.target_level,
filepaths,
reference_texts,
cfg.batch_size,
cfg.num_workers,
plot_dir,
autocast,
)
for level, result in results.items():
f.write(f"{model_typename},{','.join(param_list)},{level},{','.join([str(r) for r in result])}\n")
logging.info(f"===========================================Done===============================================")
if __name__ == '__main__':
main()
[end of scripts/speech_recognition/confidence/benchmark_asr_confidence.py]
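The `grid_params` JSON shown in the usage string of `benchmark_asr_confidence.py` above is expanded by sklearn's `ParameterGrid` into one benchmarking run per parameter combination. A stdlib-only sketch of the same Cartesian expansion (fixing the key order by sorting is an assumption of this sketch, made so the output order is deterministic):

```python
import itertools
import json

# The example grid from the usage string: 2 aggregations x 2 alphas = 4 runs.
grid_params = '{"aggregation": ["min", "prod"], "alpha": [0.33, 0.5]}'
params = json.loads(grid_params)

keys = sorted(params)  # deterministic key order for this sketch
hp_grid = [
    dict(zip(keys, values))
    for values in itertools.product(*(params[k] for k in keys))
]

print(len(hp_grid))  # 4
print(hp_grid[0])    # {'aggregation': 'min', 'alpha': 0.33}
```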
[start of scripts/speech_recognition/convert_to_tarred_audio_dataset.py]
# Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
# This script converts an existing audio dataset with a manifest to
# a tarred and sharded audio dataset that can be read by the
# TarredAudioToTextDataLayer.
# Please make sure your audio_filepath does NOT contain '-sub'!
# We use that suffix to handle files which have duplicate filenames but different offsets
# (see function create_shard for details).
# Bucketing can help to improve the training speed. You may use --buckets_num to specify the number of buckets.
# It creates multiple tarred datasets, one per bucket, based on the audio durations.
# The range of [min_duration, max_duration) is split into equal-sized buckets.
# We recommend using --sort_in_shards to speed up training by reducing padding in the batches.
# More info on how to use the bucketing feature: https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/datasets.html
# If a valid NVIDIA DALI version is installed, this script will also generate the corresponding DALI index files that
# need to be supplied to the config in order to utilize webdataset for efficient large-dataset handling.
# NOTE: DALI + Webdataset is NOT compatible with Bucketing support!
# Usage:
1) Creating a new tarfile dataset
python convert_to_tarred_audio_dataset.py \
--manifest_path=<path to the manifest file> \
--target_dir=<path to output directory> \
--num_shards=<number of tarfiles that will contain the audio> \
--max_duration=<float representing maximum duration of audio samples> \
--min_duration=<float representing minimum duration of audio samples> \
--shuffle --shuffle_seed=1 \
--sort_in_shards \
--workers=-1
2) Concatenating more tarfiles to a pre-existing tarred dataset
python convert_to_tarred_audio_dataset.py \
--manifest_path=<path to the tarred manifest file> \
--metadata_path=<path to the metadata.yaml (or metadata_version_{X}.yaml) file> \
--target_dir=<path to output directory where the original tarfiles are contained> \
--max_duration=<float representing maximum duration of audio samples> \
--min_duration=<float representing minimum duration of audio samples> \
--shuffle --shuffle_seed=1 \
--sort_in_shards \
--workers=-1 \
--concat_manifest_paths \
<space separated paths to 1 or more manifest files to concatenate into the original tarred dataset>
3) Writing an empty metadata file
python convert_to_tarred_audio_dataset.py \
--target_dir=<path to output directory> \
# any other optional argument
--num_shards=8 \
--max_duration=16.7 \
--min_duration=0.01 \
--shuffle \
--workers=-1 \
--sort_in_shards \
--shuffle_seed=1 \
--write_metadata
"""
import argparse
import copy
import json
import os
import random
import tarfile
from collections import defaultdict
from dataclasses import dataclass, field
from datetime import datetime
from typing import Any, List, Optional
from joblib import Parallel, delayed
from omegaconf import DictConfig, OmegaConf, open_dict
try:
import create_dali_tarred_dataset_index as dali_index
DALI_INDEX_SCRIPT_AVAILABLE = True
except (ImportError, ModuleNotFoundError, FileNotFoundError):
DALI_INDEX_SCRIPT_AVAILABLE = False
parser = argparse.ArgumentParser(
description="Convert an existing ASR dataset to tarballs compatible with TarredAudioToTextDataLayer."
)
parser.add_argument(
"--manifest_path", default=None, type=str, required=False, help="Path to the existing dataset's manifest."
)
parser.add_argument(
'--concat_manifest_paths',
nargs='+',
default=None,
type=str,
required=False,
help="Path to the additional dataset's manifests that will be concatenated with base dataset.",
)
# Optional arguments
parser.add_argument(
"--target_dir",
default='./tarred',
type=str,
help="Target directory for resulting tarballs and manifest. Defaults to `./tarred`. Creates the path if necessary.",
)
parser.add_argument(
"--metadata_path", required=False, default=None, type=str, help="Path to metadata file for the dataset.",
)
parser.add_argument(
"--num_shards",
default=-1,
type=int,
help="Number of shards (tarballs) to create. Used for partitioning data among workers.",
)
parser.add_argument(
'--max_duration',
default=None,
required=True,
type=float,
help='Maximum duration of audio clips in the dataset; clips longer than this are filtered out. Required.',
)
parser.add_argument(
'--min_duration',
default=None,
type=float,
help='Minimum duration of audio clip in the dataset. By default, it is None and will not filter files.',
)
parser.add_argument(
"--shuffle",
action='store_true',
help="Whether or not to randomly shuffle the samples in the manifest before tarring/sharding.",
)
parser.add_argument(
"--keep_files_together",
action='store_true',
help="Whether or not to keep entries from the same file (but different offsets) together when sorting before tarring/sharding.",
)
parser.add_argument(
"--sort_in_shards",
action='store_true',
help="Whether or not to sort samples inside the shards based on their duration.",
)
parser.add_argument(
"--buckets_num", type=int, default=1, help="Number of buckets to create based on duration.",
)
parser.add_argument("--shuffle_seed", type=int, default=None, help="Random seed for use if shuffling is enabled.")
parser.add_argument(
'--write_metadata',
action='store_true',
help=(
"Flag to write a blank metadata with the current call config. "
"Note that the metadata will not contain the number of shards, "
"and it must be filled out by the user."
),
)
parser.add_argument(
"--no_shard_manifests",
action='store_true',
help="Do not write sharded manifests along with the aggregated manifest.",
)
parser.add_argument('--workers', type=int, default=1, help='Number of worker processes')
args = parser.parse_args()
@dataclass
class ASRTarredDatasetConfig:
num_shards: int = -1
shuffle: bool = False
max_duration: Optional[float] = None
min_duration: Optional[float] = None
shuffle_seed: Optional[int] = None
sort_in_shards: bool = True
shard_manifests: bool = True
keep_files_together: bool = False
@dataclass
class ASRTarredDatasetMetadata:
created_datetime: Optional[str] = None
version: int = 0
num_samples_per_shard: Optional[int] = None
is_concatenated_manifest: bool = False
dataset_config: Optional[ASRTarredDatasetConfig] = ASRTarredDatasetConfig()
history: Optional[List[Any]] = field(default_factory=lambda: [])
def __post_init__(self):
self.created_datetime = self.get_current_datetime()
def get_current_datetime(self):
return datetime.now().strftime("%m-%d-%Y %H-%M-%S")
@classmethod
def from_config(cls, config: DictConfig):
obj = cls()
obj.__dict__.update(**config)
return obj
@classmethod
def from_file(cls, filepath: str):
config = OmegaConf.load(filepath)
return ASRTarredDatasetMetadata.from_config(config=config)
class ASRTarredDatasetBuilder:
"""
Helper class that constructs a tarred dataset from scratch, or concatenates tarred datasets
together and constructs manifests for them.
"""
def __init__(self):
self.config = None
def configure(self, config: ASRTarredDatasetConfig):
"""
Sets the config generated from command line overrides.
Args:
config: ASRTarredDatasetConfig dataclass object.
"""
self.config = config # type: ASRTarredDatasetConfig
if self.config.num_shards <= 0:
raise ValueError("`num_shards` must be > 0. Please fill in the metadata information correctly.")
def create_new_dataset(self, manifest_path: str, target_dir: str = "./tarred/", num_workers: int = 1):
"""
Creates a new tarred dataset from a given manifest file.
Args:
manifest_path: Path to the original ASR manifest.
target_dir: Output directory.
num_workers: Integer denoting number of parallel worker processes which will write tarfiles.
Defaults to 1 - which denotes sequential worker process.
Output:
Writes tarfiles, along with the tarred dataset compatible manifest file.
Also preserves a record of the metadata used to construct this tarred dataset.
"""
if self.config is None:
raise ValueError("Config has not been set. Please call `configure(config: ASRTarredDatasetConfig)`")
if manifest_path is None:
raise FileNotFoundError("Manifest filepath cannot be None!")
config = self.config # type: ASRTarredDatasetConfig
if not os.path.exists(target_dir):
os.makedirs(target_dir)
# Read the existing manifest
entries, total_duration, filtered_entries, filtered_duration = self._read_manifest(manifest_path, config)
if len(filtered_entries) > 0:
print(f"Filtered {len(filtered_entries)} files which amounts to {filtered_duration} seconds of audio.")
print(
f"After filtering, manifest has {len(entries)} files which amounts to {total_duration} seconds of audio."
)
if len(entries) == 0:
print("No tarred dataset was created as there were 0 valid samples after filtering!")
return
if config.shuffle:
random.seed(config.shuffle_seed)
print("Shuffling...")
if config.keep_files_together:
filename_entries = defaultdict(list)
for ent in entries:
filename_entries[ent["audio_filepath"]].append(ent)
filenames = list(filename_entries.keys())
random.shuffle(filenames)
shuffled_entries = []
for filename in filenames:
shuffled_entries += filename_entries[filename]
entries = shuffled_entries
else:
random.shuffle(entries)
# Create shards and updated manifest entries
print(f"Number of samples added : {len(entries)}")
print(f"Remainder: {len(entries) % config.num_shards}")
start_indices = []
end_indices = []
# Build indices
for i in range(config.num_shards):
start_idx = (len(entries) // config.num_shards) * i
end_idx = start_idx + (len(entries) // config.num_shards)
print(f"Shard {i} has entries {start_idx} ~ {end_idx}")
files = set()
for ent_id in range(start_idx, end_idx):
files.add(entries[ent_id]["audio_filepath"])
print(f"Shard {i} contains {len(files)} files")
if i == config.num_shards - 1:
# We discard in order to have the same number of entries per shard.
print(f"Have {len(entries) - end_idx} entries left over that will be discarded.")
start_indices.append(start_idx)
end_indices.append(end_idx)
manifest_folder, _ = os.path.split(manifest_path)
with Parallel(n_jobs=num_workers, verbose=config.num_shards) as parallel:
# Call parallel tarfile construction
new_entries_list = parallel(
delayed(self._create_shard)(entries[start_idx:end_idx], target_dir, i, manifest_folder)
for i, (start_idx, end_idx) in enumerate(zip(start_indices, end_indices))
)
if config.shard_manifests:
sharded_manifests_dir = target_dir + '/sharded_manifests'
if not os.path.exists(sharded_manifests_dir):
os.makedirs(sharded_manifests_dir)
for manifest in new_entries_list:
shard_id = manifest[0]['shard_id']
new_manifest_shard_path = os.path.join(sharded_manifests_dir, f'manifest_{shard_id}.json')
with open(new_manifest_shard_path, 'w', encoding='utf-8') as m2:
for entry in manifest:
json.dump(entry, m2)
m2.write('\n')
# Flatten the list of list of entries to a list of entries
new_entries = [sample for manifest in new_entries_list for sample in manifest]
del new_entries_list
print("Total number of entries in manifest :", len(new_entries))
# Write manifest
new_manifest_path = os.path.join(target_dir, 'tarred_audio_manifest.json')
with open(new_manifest_path, 'w', encoding='utf-8') as m2:
for entry in new_entries:
json.dump(entry, m2)
m2.write('\n')
# Write metadata (default metadata for new datasets)
new_metadata_path = os.path.join(target_dir, 'metadata.yaml')
metadata = ASRTarredDatasetMetadata()
# Update metadata
metadata.dataset_config = config
metadata.num_samples_per_shard = len(new_entries) // config.num_shards
# Write metadata
metadata_yaml = OmegaConf.structured(metadata)
OmegaConf.save(metadata_yaml, new_metadata_path, resolve=True)
def create_concatenated_dataset(
self,
base_manifest_path: str,
manifest_paths: List[str],
metadata: ASRTarredDatasetMetadata,
target_dir: str = "./tarred_concatenated/",
num_workers: int = 1,
):
"""
Creates new tarfiles in order to create a concatenated dataset, whose manifest contains the data for
both the original dataset as well as the new data submitted in manifest paths.
Args:
base_manifest_path: Path to the manifest file which contains the information for the original
tarred dataset (with flattened paths).
manifest_paths: List of one or more paths to manifest files that will be concatenated with above
base tarred dataset.
metadata: ASRTarredDatasetMetadata dataclass instance with overrides from command line.
target_dir: Output directory
Output:
Writes tarfiles whose shard indices continue those of the base "concatenated" tarred dataset,
along with a tarred-dataset-compatible manifest file which includes information
about all the datasets that comprise the concatenated dataset.
Also preserves a record of the metadata used to construct this tarred dataset.
"""
if not os.path.exists(target_dir):
os.makedirs(target_dir)
if base_manifest_path is None:
raise FileNotFoundError("Base manifest filepath cannot be None!")
if manifest_paths is None or len(manifest_paths) == 0:
raise FileNotFoundError("List of additional manifest filepaths cannot be None or empty!")
config = ASRTarredDatasetConfig(**(metadata.dataset_config))
# Read the existing manifest (no filtering here)
base_entries, _, _, _ = self._read_manifest(base_manifest_path, config)
print(f"Read base manifest containing {len(base_entries)} samples.")
# Precompute number of samples per shard
if metadata.num_samples_per_shard is None:
num_samples_per_shard = len(base_entries) // config.num_shards
else:
num_samples_per_shard = metadata.num_samples_per_shard
print("Number of samples per shard :", num_samples_per_shard)
# Compute min and max duration and update config (if no metadata passed)
print(f"Selected max duration : {config.max_duration}")
print(f"Selected min duration : {config.min_duration}")
entries = []
for new_manifest_idx in range(len(manifest_paths)):
new_entries, total_duration, filtered_new_entries, filtered_duration = self._read_manifest(
manifest_paths[new_manifest_idx], config
)
if len(filtered_new_entries) > 0:
print(
f"Filtered {len(filtered_new_entries)} files which amounts to {filtered_duration:0.2f}"
f" seconds of audio from manifest {manifest_paths[new_manifest_idx]}."
)
print(
f"After filtering, manifest has {len(new_entries)} files which amounts to {total_duration} seconds of audio."
)
entries.extend(new_entries)
if len(entries) == 0:
print("No tarred dataset was created as there were 0 valid samples after filtering!")
return
if config.shuffle:
random.seed(config.shuffle_seed)
print("Shuffling...")
random.shuffle(entries)
# Drop last section of samples that cannot be added onto a chunk
drop_count = len(entries) % num_samples_per_shard
total_new_entries = len(entries)
if drop_count > 0:
# note: entries[:-0] would drop everything, so only slice when there is a remainder
entries = entries[:-drop_count]
print(
f"Dropping {drop_count} samples from total new samples {total_new_entries} since they cannot "
f"be added into a uniformly sized chunk."
)
# Create shards and updated manifest entries
num_added_shards = len(entries) // num_samples_per_shard
print(f"Number of samples in base dataset : {len(base_entries)}")
print(f"Number of samples in additional datasets : {len(entries)}")
print(f"Number of added shards : {num_added_shards}")
print(f"Remainder: {len(entries) % num_samples_per_shard}")
start_indices = []
end_indices = []
shard_indices = []
for i in range(num_added_shards):
start_idx = (len(entries) // num_added_shards) * i
end_idx = start_idx + (len(entries) // num_added_shards)
shard_idx = i + config.num_shards
print(f"Shard {shard_idx} has entries {start_idx + len(base_entries)} ~ {end_idx + len(base_entries)}")
start_indices.append(start_idx)
end_indices.append(end_idx)
shard_indices.append(shard_idx)
manifest_folder, _ = os.path.split(base_manifest_path)
with Parallel(n_jobs=num_workers, verbose=num_added_shards) as parallel:
# Call parallel tarfile construction
new_entries_list = parallel(
delayed(self._create_shard)(entries[start_idx:end_idx], target_dir, shard_idx, manifest_folder)
for i, (start_idx, end_idx, shard_idx) in enumerate(zip(start_indices, end_indices, shard_indices))
)
if config.shard_manifests:
sharded_manifests_dir = target_dir + '/sharded_manifests'
if not os.path.exists(sharded_manifests_dir):
os.makedirs(sharded_manifests_dir)
for manifest in new_entries_list:
shard_id = manifest[0]['shard_id']
new_manifest_shard_path = os.path.join(sharded_manifests_dir, f'manifest_{shard_id}.json')
with open(new_manifest_shard_path, 'w', encoding='utf-8') as m2:
for entry in manifest:
json.dump(entry, m2)
m2.write('\n')
# Flatten the list of list of entries to a list of entries
new_entries = [sample for manifest in new_entries_list for sample in manifest]
del new_entries_list
# Write manifest
if metadata is None:
new_version = 1 # start with `1`, where `0` indicates the base manifest + dataset
else:
new_version = metadata.version + 1
print("Total number of entries in manifest :", len(base_entries) + len(new_entries))
new_manifest_path = os.path.join(target_dir, f'tarred_audio_manifest_version_{new_version}.json')
with open(new_manifest_path, 'w', encoding='utf-8') as m2:
# First write all the entries of base manifest
for entry in base_entries:
json.dump(entry, m2)
m2.write('\n')
# Finally write the new entries
for entry in new_entries:
json.dump(entry, m2)
m2.write('\n')
# Preserve historical metadata
base_metadata = metadata
# Write metadata (updated metadata for concatenated datasets)
new_metadata_path = os.path.join(target_dir, f'metadata_version_{new_version}.yaml')
metadata = ASRTarredDatasetMetadata()
# Update config
config.num_shards = config.num_shards + num_added_shards
# Update metadata
metadata.version = new_version
metadata.dataset_config = config
metadata.num_samples_per_shard = num_samples_per_shard
metadata.is_concatenated_manifest = True
metadata.created_datetime = metadata.get_current_datetime()
# Attach history
current_metadata = OmegaConf.structured(base_metadata.history)
metadata.history = current_metadata
# Write metadata
metadata_yaml = OmegaConf.structured(metadata)
OmegaConf.save(metadata_yaml, new_metadata_path, resolve=True)
def _read_manifest(self, manifest_path: str, config: ASRTarredDatasetConfig):
"""Read and filters data from the manifest"""
# Read the existing manifest
entries = []
total_duration = 0.0
filtered_entries = []
filtered_duration = 0.0
with open(manifest_path, 'r', encoding='utf-8') as m:
for line in m:
entry = json.loads(line)
if (config.max_duration is None or entry['duration'] < config.max_duration) and (
config.min_duration is None or entry['duration'] >= config.min_duration
):
entries.append(entry)
total_duration += entry["duration"]
else:
filtered_entries.append(entry)
filtered_duration += entry['duration']
return entries, total_duration, filtered_entries, filtered_duration
def _create_shard(self, entries, target_dir, shard_id, manifest_folder):
"""Creates a tarball containing the audio files from `entries`.
"""
if self.config.sort_in_shards:
entries.sort(key=lambda x: x["duration"], reverse=False)
new_entries = []
tar = tarfile.open(os.path.join(target_dir, f'audio_{shard_id}.tar'), mode='w', dereference=True)
count = dict()
for entry in entries:
# We squash the filename since we do not preserve directory structure of audio files in the tarball.
if os.path.exists(entry["audio_filepath"]):
audio_filepath = entry["audio_filepath"]
else:
audio_filepath = os.path.join(manifest_folder, entry["audio_filepath"])
if not os.path.exists(audio_filepath):
raise FileNotFoundError(f"Could not find {entry['audio_filepath']}!")
base, ext = os.path.splitext(audio_filepath)
base = base.replace('/', '_')
# Need the following replacement as long as WebDataset splits on first period
base = base.replace('.', '_')
squashed_filename = f'{base}{ext}'
if squashed_filename not in count:
tar.add(audio_filepath, arcname=squashed_filename)
to_write = squashed_filename
count[squashed_filename] = 1
else:
to_write = base + "-sub" + str(count[squashed_filename]) + ext
count[squashed_filename] += 1
new_entry = {
'audio_filepath': to_write,
'duration': entry['duration'],
'shard_id': shard_id, # Keep shard ID for recordkeeping
}
if 'label' in entry:
new_entry['label'] = entry['label']
if 'text' in entry:
new_entry['text'] = entry['text']
if 'offset' in entry:
new_entry['offset'] = entry['offset']
if 'lang' in entry:
new_entry['lang'] = entry['lang']
new_entries.append(new_entry)
tar.close()
return new_entries
@classmethod
def setup_history(cls, base_metadata: ASRTarredDatasetMetadata, history: List[Any]):
if 'history' in base_metadata.keys():
for history_val in base_metadata.history:
cls.setup_history(history_val, history)
if base_metadata is not None:
metadata_copy = copy.deepcopy(base_metadata)
with open_dict(metadata_copy):
metadata_copy.pop('history', None)
history.append(metadata_copy)
def main():
if args.buckets_num > 1:
bucket_length = (args.max_duration - args.min_duration) / float(args.buckets_num)
for i in range(args.buckets_num):
min_duration = args.min_duration + i * bucket_length
max_duration = min_duration + bucket_length
if i == args.buckets_num - 1:
# add a small number to cover the samples with exactly duration of max_duration in the last bucket.
max_duration += 1e-5
target_dir = os.path.join(args.target_dir, f"bucket{i+1}")
print(f"Creating bucket {i+1} with min_duration={min_duration} and max_duration={max_duration} ...")
print(f"Results are being saved at: {target_dir}.")
create_tar_datasets(min_duration=min_duration, max_duration=max_duration, target_dir=target_dir)
print(f"Bucket {i+1} is created.")
else:
create_tar_datasets(min_duration=args.min_duration, max_duration=args.max_duration, target_dir=args.target_dir)
def create_tar_datasets(min_duration: float, max_duration: float, target_dir: str):
builder = ASRTarredDatasetBuilder()
shard_manifests = False if args.no_shard_manifests else True
if args.write_metadata:
metadata = ASRTarredDatasetMetadata()
dataset_cfg = ASRTarredDatasetConfig(
num_shards=args.num_shards,
shuffle=args.shuffle,
max_duration=max_duration,
min_duration=min_duration,
shuffle_seed=args.shuffle_seed,
sort_in_shards=args.sort_in_shards,
shard_manifests=shard_manifests,
keep_files_together=args.keep_files_together,
)
metadata.dataset_config = dataset_cfg
output_path = os.path.join(target_dir, 'default_metadata.yaml')
OmegaConf.save(metadata, output_path, resolve=True)
print(f"Default metadata written to {output_path}")
exit(0)
if args.concat_manifest_paths is None or len(args.concat_manifest_paths) == 0:
print("Creating new tarred dataset ...")
# Create a tarred dataset from scratch
config = ASRTarredDatasetConfig(
num_shards=args.num_shards,
shuffle=args.shuffle,
max_duration=max_duration,
min_duration=min_duration,
shuffle_seed=args.shuffle_seed,
sort_in_shards=args.sort_in_shards,
shard_manifests=shard_manifests,
keep_files_together=args.keep_files_together,
)
builder.configure(config)
builder.create_new_dataset(manifest_path=args.manifest_path, target_dir=target_dir, num_workers=args.workers)
else:
if args.buckets_num > 1:
raise ValueError("Concatenation feature does not support buckets_num > 1.")
print("Concatenating multiple tarred datasets ...")
# Implicitly update config from base details
if args.metadata_path is not None:
metadata = ASRTarredDatasetMetadata.from_file(args.metadata_path)
else:
raise ValueError("`metadata` yaml file path must be provided!")
# Preserve history
history = []
builder.setup_history(OmegaConf.structured(metadata), history)
metadata.history = history
# Add command line overrides (everything other than num_shards)
metadata.dataset_config.max_duration = max_duration
metadata.dataset_config.min_duration = min_duration
metadata.dataset_config.shuffle = args.shuffle
metadata.dataset_config.shuffle_seed = args.shuffle_seed
metadata.dataset_config.sort_in_shards = args.sort_in_shards
metadata.dataset_config.shard_manifests = shard_manifests
builder.configure(metadata.dataset_config)
# Concatenate a tarred dataset onto a previous one
builder.create_concatenated_dataset(
base_manifest_path=args.manifest_path,
manifest_paths=args.concat_manifest_paths,
metadata=metadata,
target_dir=target_dir,
num_workers=args.workers,
)
if DALI_INDEX_SCRIPT_AVAILABLE and dali_index.INDEX_CREATOR_AVAILABLE:
print("Constructing DALI Tarfile Index - ", target_dir)
index_config = dali_index.DALITarredIndexConfig(tar_dir=target_dir, workers=args.workers)
dali_index.main(index_config)
if __name__ == "__main__":
main()
[end of scripts/speech_recognition/convert_to_tarred_audio_dataset.py]
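The filename-squashing and deduplication scheme in `_create_shard` above can be isolated into a small helper. This is an illustrative sketch (the `squash` function and its `count` dict are hypothetical names, not part of the script):

```python
import os

def squash(audio_filepath, count):
    # Flatten the directory structure into the member name, as _create_shard does:
    # '/' becomes '_', and '.' in the stem becomes '_' because WebDataset splits
    # sample keys on the first period.
    base, ext = os.path.splitext(audio_filepath)
    base = base.replace('/', '_').replace('.', '_')
    squashed = f'{base}{ext}'
    if squashed not in count:
        count[squashed] = 1
        return squashed
    # Duplicate squashed names get a "-subN" suffix so tar members stay unique.
    to_write = base + "-sub" + str(count[squashed]) + ext
    count[squashed] += 1
    return to_write

count = {}
print(squash('data/spk1.v2/utt.wav', count))  # data_spk1_v2_utt.wav
print(squash('data/spk1.v2/utt.wav', count))  # data_spk1_v2_utt-sub1.wav
```

Note that two different source paths can squash to the same name (e.g. `a/b.wav` and `a.b.wav`), which is exactly why the shard builder tracks counts and rewrites duplicates.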
[start of tools/nemo_forced_aligner/align.py]
# Copyright (c) 2023, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import copy
import math
import os
from dataclasses import dataclass, field, is_dataclass
from pathlib import Path
from typing import List, Optional
import torch
from omegaconf import OmegaConf
from utils.data_prep import (
add_t_start_end_to_utt_obj,
get_batch_starts_ends,
get_batch_variables,
get_manifest_lines_batch,
is_entry_in_all_lines,
is_entry_in_any_lines,
)
from utils.make_ass_files import make_ass_files
from utils.make_ctm_files import make_ctm_files
from utils.make_output_manifest import write_manifest_out_line
from utils.viterbi_decoding import viterbi_decoding
from nemo.collections.asr.models.ctc_models import EncDecCTCModel
from nemo.collections.asr.models.hybrid_rnnt_ctc_models import EncDecHybridRNNTCTCModel
from nemo.collections.asr.parts.utils.streaming_utils import FrameBatchASR
from nemo.collections.asr.parts.utils.transcribe_utils import setup_model
from nemo.core.config import hydra_runner
from nemo.utils import logging
"""
Align the utterances in manifest_filepath.
Results are saved in ctm files in output_dir.
Arguments:
pretrained_name: string specifying the name of a CTC NeMo ASR model which will be automatically downloaded
from NGC and used for generating the log-probs which we will use to do alignment.
Note: NFA can only use CTC models (not Transducer models) at the moment.
model_path: string specifying the local filepath to a CTC NeMo ASR model which will be used to generate the
log-probs which we will use to do alignment.
Note: NFA can only use CTC models (not Transducer models) at the moment.
Note: if a model_path is provided, it will override the pretrained_name.
manifest_filepath: filepath to the manifest of the data you want to align,
containing 'audio_filepath' and 'text' fields.
output_dir: the folder where output CTM files and new JSON manifest will be saved.
align_using_pred_text: if True, will transcribe the audio using the specified model and then use that transcription
as the reference text for the forced alignment.
transcribe_device: None, or a string specifying the device that will be used for generating log-probs (i.e. "transcribing").
The string needs to be in a format recognized by torch.device(). If None, NFA will set it to 'cuda' if it is available
(otherwise will set it to 'cpu').
viterbi_device: None, or string specifying the device that will be used for doing Viterbi decoding.
The string needs to be in a format recognized by torch.device(). If None, NFA will set it to 'cuda' if it is available
(otherwise will set it to 'cpu').
batch_size: int specifying batch size that will be used for generating log-probs and doing Viterbi decoding.
use_local_attention: boolean flag specifying whether to try to use local attention for the ASR Model (will only
work if the ASR Model is a Conformer model). If local attention is used, we will set the local attention context
size to [64,64].
additional_segment_grouping_separator: an optional string used to separate the text into smaller segments.
If this is not specified, then the whole text will be treated as a single segment.
remove_blank_tokens_from_ctm: a boolean denoting whether to remove <blank> tokens from token-level output CTMs.
audio_filepath_parts_in_utt_id: int specifying how many of the 'parts' of the audio_filepath
we will use (starting from the final part of the audio_filepath) to determine the
utt_id that will be used in the CTM files. Note also that any spaces that are present in the audio_filepath
will be replaced with dashes, so as not to change the number of space-separated elements in the
CTM files.
e.g. if audio_filepath is "/a/b/c/d/e 1.wav" and audio_filepath_parts_in_utt_id is 1 => utt_id will be "e1"
e.g. if audio_filepath is "/a/b/c/d/e 1.wav" and audio_filepath_parts_in_utt_id is 2 => utt_id will be "d_e1"
e.g. if audio_filepath is "/a/b/c/d/e 1.wav" and audio_filepath_parts_in_utt_id is 3 => utt_id will be "c_d_e1"
use_buffered_infer: False by default; if set to True, use buffered chunked streaming to get the logits
for alignment. This flag is useful when aligning large audio files.
However, the chunk streaming inference currently does not support batch inference,
which means that even if you set batch_size > 1, utterances will be inferred one by one
rather than as a whole batch.
chunk_len_in_secs: float chunk length in seconds
total_buffer_in_secs: float Length of buffer (chunk + left and right padding) in seconds
chunk_batch_size: int batch size for buffered chunk inference,
which will cut one audio into segments and do inference on chunk_batch_size segments at a time
simulate_cache_aware_streaming: False by default; if set to True, use cache-aware streaming to get the logits for alignment
save_output_file_formats: List of strings specifying what type of output files to save (default: ["ctm", "ass"])
ctm_file_config: CTMFileConfig to specify the configuration of the output CTM files
ass_file_config: ASSFileConfig to specify the configuration of the output ASS files
"""
@dataclass
class CTMFileConfig:
remove_blank_tokens: bool = False
# minimum duration (in seconds) for timestamps in the CTM. If any line in the CTM has a
# duration lower than this, it will be enlarged from the middle outwards until it
# meets the minimum_timestamp_duration, or reaches the beginning or end of the audio file.
# Note that this may cause timestamps to overlap.
minimum_timestamp_duration: float = 0
@dataclass
class ASSFileConfig:
fontsize: int = 20
vertical_alignment: str = "center"
# if resegment_text_to_fill_space is True, the ASS files will use new segments
# such that each segment will not take up more than (approximately) max_lines_per_segment
# when the ASS file is applied to a video
resegment_text_to_fill_space: bool = False
max_lines_per_segment: int = 2
text_already_spoken_rgb: List[int] = field(default_factory=lambda: [49, 46, 61]) # dark gray
text_being_spoken_rgb: List[int] = field(default_factory=lambda: [57, 171, 9]) # dark green
text_not_yet_spoken_rgb: List[int] = field(default_factory=lambda: [194, 193, 199]) # light gray
@dataclass
class AlignmentConfig:
# Required configs
pretrained_name: Optional[str] = None
model_path: Optional[str] = None
manifest_filepath: Optional[str] = None
output_dir: Optional[str] = None
# General configs
align_using_pred_text: bool = False
transcribe_device: Optional[str] = None
viterbi_device: Optional[str] = None
batch_size: int = 1
use_local_attention: bool = True
additional_segment_grouping_separator: Optional[str] = None
audio_filepath_parts_in_utt_id: int = 1
# Buffered chunked streaming configs
use_buffered_chunked_streaming: bool = False
chunk_len_in_secs: float = 1.6
total_buffer_in_secs: float = 4.0
chunk_batch_size: int = 32
# Cache aware streaming configs
simulate_cache_aware_streaming: Optional[bool] = False
# Output file configs
save_output_file_formats: List[str] = field(default_factory=lambda: ["ctm", "ass"])
ctm_file_config: CTMFileConfig = field(default_factory=CTMFileConfig)
ass_file_config: ASSFileConfig = field(default_factory=ASSFileConfig)
@hydra_runner(config_name="AlignmentConfig", schema=AlignmentConfig)
def main(cfg: AlignmentConfig):
logging.info(f'Hydra config: {OmegaConf.to_yaml(cfg)}')
if is_dataclass(cfg):
cfg = OmegaConf.structured(cfg)
# Validate config
if cfg.model_path is None and cfg.pretrained_name is None:
raise ValueError("Both cfg.model_path and cfg.pretrained_name cannot be None")
if cfg.model_path is not None and cfg.pretrained_name is not None:
raise ValueError("One of cfg.model_path and cfg.pretrained_name must be None")
if cfg.manifest_filepath is None:
raise ValueError("cfg.manifest_filepath must be specified")
if cfg.output_dir is None:
raise ValueError("cfg.output_dir must be specified")
if cfg.batch_size < 1:
raise ValueError("cfg.batch_size cannot be zero or a negative number")
if cfg.additional_segment_grouping_separator == "" or cfg.additional_segment_grouping_separator == " ":
raise ValueError("cfg.additional_segment_grouping_separator cannot be empty string or space character")
if cfg.ctm_file_config.minimum_timestamp_duration < 0:
raise ValueError("cfg.ctm_file_config.minimum_timestamp_duration cannot be a negative number")
if cfg.ass_file_config.vertical_alignment not in ["top", "center", "bottom"]:
raise ValueError("cfg.ass_file_config.vertical_alignment must be one of 'top', 'center' or 'bottom'")
for rgb_list in [
cfg.ass_file_config.text_already_spoken_rgb,
cfg.ass_file_config.text_being_spoken_rgb,
cfg.ass_file_config.text_not_yet_spoken_rgb,
]:
if len(rgb_list) != 3:
raise ValueError(
"cfg.ass_file_config.text_already_spoken_rgb,"
" cfg.ass_file_config.text_being_spoken_rgb,"
" and cfg.ass_file_config.text_not_yet_spoken_rgb all need to contain"
" exactly 3 elements."
)
# Validate manifest contents
if not is_entry_in_all_lines(cfg.manifest_filepath, "audio_filepath"):
raise RuntimeError(
"At least one line in cfg.manifest_filepath does not contain an 'audio_filepath' entry. "
"All lines must contain an 'audio_filepath' entry."
)
if cfg.align_using_pred_text:
if is_entry_in_any_lines(cfg.manifest_filepath, "pred_text"):
raise RuntimeError(
"Cannot specify cfg.align_using_pred_text=True when the manifest at cfg.manifest_filepath "
"contains 'pred_text' entries. This is because the audio will be transcribed and may produce "
"a different 'pred_text'. This may cause confusion."
)
else:
if not is_entry_in_all_lines(cfg.manifest_filepath, "text"):
raise RuntimeError(
"At least one line in cfg.manifest_filepath does not contain a 'text' entry. "
"NFA requires all lines to contain a 'text' entry when cfg.align_using_pred_text=False."
)
# init devices
if cfg.transcribe_device is None:
transcribe_device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
else:
transcribe_device = torch.device(cfg.transcribe_device)
logging.info(f"Device to be used for transcription step (`transcribe_device`) is {transcribe_device}")
if cfg.viterbi_device is None:
viterbi_device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
else:
viterbi_device = torch.device(cfg.viterbi_device)
logging.info(f"Device to be used for viterbi step (`viterbi_device`) is {viterbi_device}")
if transcribe_device.type == 'cuda' or viterbi_device.type == 'cuda':
logging.warning(
'One or both of transcribe_device and viterbi_device are GPUs. If you run into OOM errors '
'it may help to change both devices to be the CPU.'
)
# load model
model, _ = setup_model(cfg, transcribe_device)
model.eval()
if isinstance(model, EncDecHybridRNNTCTCModel):
model.change_decoding_strategy(decoder_type="ctc")
if cfg.use_local_attention:
logging.info(
"Flag use_local_attention is set to True => will try to use local attention for model if it allows it"
)
model.change_attention_model(self_attention_model="rel_pos_local_attn", att_context_size=[64, 64])
if not (isinstance(model, EncDecCTCModel) or isinstance(model, EncDecHybridRNNTCTCModel)):
raise NotImplementedError(
f"Model is not an instance of NeMo EncDecCTCModel or ENCDecHybridRNNTCTCModel."
" Currently only instances of these models are supported"
)
if cfg.ctm_file_config.minimum_timestamp_duration > 0:
logging.warning(
f"cfg.ctm_file_config.minimum_timestamp_duration has been set to {cfg.ctm_file_config.minimum_timestamp_duration} seconds. "
"This may cause the alignments for some tokens/words/additional segments to be overlapping."
)
buffered_chunk_params = {}
if cfg.use_buffered_chunked_streaming:
model_cfg = copy.deepcopy(model._cfg)
OmegaConf.set_struct(model_cfg.preprocessor, False)
# some changes for streaming scenario
model_cfg.preprocessor.dither = 0.0
model_cfg.preprocessor.pad_to = 0
if model_cfg.preprocessor.normalize != "per_feature":
logging.error(
"Only EncDecCTCModelBPE models trained with per_feature normalization are supported currently"
)
# Disable config overwriting
OmegaConf.set_struct(model_cfg.preprocessor, True)
feature_stride = model_cfg.preprocessor['window_stride']
model_stride_in_secs = feature_stride * cfg.model_downsample_factor
total_buffer = cfg.total_buffer_in_secs
chunk_len = float(cfg.chunk_len_in_secs)
tokens_per_chunk = math.ceil(chunk_len / model_stride_in_secs)
mid_delay = math.ceil((chunk_len + (total_buffer - chunk_len) / 2) / model_stride_in_secs)
logging.info(f"tokens_per_chunk is {tokens_per_chunk}, mid_delay is {mid_delay}")
model = FrameBatchASR(
asr_model=model,
frame_len=chunk_len,
total_buffer=cfg.total_buffer_in_secs,
batch_size=cfg.chunk_batch_size,
)
buffered_chunk_params = {
"delay": mid_delay,
"model_stride_in_secs": model_stride_in_secs,
"tokens_per_chunk": tokens_per_chunk,
}
# get start and end line IDs of batches
starts, ends = get_batch_starts_ends(cfg.manifest_filepath, cfg.batch_size)
# init output_timestep_duration = None and we will calculate and update it during the first batch
output_timestep_duration = None
# init f_manifest_out
os.makedirs(cfg.output_dir, exist_ok=True)
tgt_manifest_name = str(Path(cfg.manifest_filepath).stem) + "_with_output_file_paths.json"
tgt_manifest_filepath = str(Path(cfg.output_dir) / tgt_manifest_name)
f_manifest_out = open(tgt_manifest_filepath, 'w')
# get alignment and save in CTM batch-by-batch
for start, end in zip(starts, ends):
manifest_lines_batch = get_manifest_lines_batch(cfg.manifest_filepath, start, end)
(log_probs_batch, y_batch, T_batch, U_batch, utt_obj_batch, output_timestep_duration,) = get_batch_variables(
manifest_lines_batch,
model,
cfg.additional_segment_grouping_separator,
cfg.align_using_pred_text,
cfg.audio_filepath_parts_in_utt_id,
output_timestep_duration,
cfg.simulate_cache_aware_streaming,
cfg.use_buffered_chunked_streaming,
buffered_chunk_params,
)
alignments_batch = viterbi_decoding(log_probs_batch, y_batch, T_batch, U_batch, viterbi_device)
for utt_obj, alignment_utt in zip(utt_obj_batch, alignments_batch):
utt_obj = add_t_start_end_to_utt_obj(utt_obj, alignment_utt, output_timestep_duration)
if "ctm" in cfg.save_output_file_formats:
utt_obj = make_ctm_files(utt_obj, cfg.output_dir, cfg.ctm_file_config,)
if "ass" in cfg.save_output_file_formats:
utt_obj = make_ass_files(utt_obj, cfg.output_dir, cfg.ass_file_config)
write_manifest_out_line(
f_manifest_out, utt_obj,
)
f_manifest_out.close()
return None
if __name__ == "__main__":
main()
[end of tools/nemo_forced_aligner/align.py]
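The align loop above iterates `(start, end)` pairs produced by `get_batch_starts_ends` from `utils.data_prep`. A hypothetical sketch of such inclusive batch-boundary computation (an assumption about its semantics, not the actual helper) could look like:

```python
def batch_starts_ends(num_lines, batch_size):
    # Split manifest line indices 0..num_lines-1 into consecutive batches.
    # Ends are inclusive, mirroring the (start, end) pairs the align loop iterates.
    starts = list(range(0, num_lines, batch_size))
    ends = [min(start + batch_size - 1, num_lines - 1) for start in starts]
    return starts, ends

print(batch_starts_ends(10, 4))  # ([0, 4, 8], [3, 7, 9])
```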
</code>
I need you to solve this issue by regenerating the full files in the code base that you would like to change. You can change as many files as you like. Please respond with a list of files and their revised contents in the following format.
<example>
[start of /src/this_file.py]
import os
def euclidean(a, b):
if b == 0:
return a
return euclidean(b, a % b)
[end of /src/this_file.py]
[start of /src/another_file.py]
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
x, y = x0, y0
sx = -1 if x0 > x1 else 1
sy = -1 if y0 > y1 else 1
if dx > dy:
err = dx / 2.0
while x != x1:
points.append((x, y))
err -= dy
if err < 0:
y += sy
err += dx
x += sx
else:
err = dy / 2.0
while y != y1:
points.append((x, y))
err -= dx
if err < 0:
x += sx
err += dy
y += sy
points.append((x, y))
return points
[end of /src/another_file.py]
</example>
repo: NVIDIA/NeMo
base_commit: 8a892b86186dbdf61803d75570cb5c58471e9dda
problem_statement:
Ubuntu 22.04 Python 3.11 [asr]: multiple errors `dataclasses ValueError: mutable default * for field * is not allowed: use default_factory`
**Describe the bug**
After installing the latest stable `1.19.1` from PyPI, or the latest commit with `[asr]` extras, I'm getting this error from `hydra-core==1.2.0` when trying to import `nemo.collections.asr`:
```
Traceback (most recent call last):
File "/home/alex/T7/src/speaker_verification/nemo_speaker/test.py", line 5, in <module>
import nemo.collections.asr as nemo_asr
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/__init__.py", line 15, in <module>
from nemo.collections.asr import data, losses, models, modules
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/losses/__init__.py", line 15, in <module>
from nemo.collections.asr.losses.angularloss import AngularSoftmaxLoss
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/losses/angularloss.py", line 18, in <module>
from nemo.core.classes import Loss, Typing, typecheck
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/core/__init__.py", line 16, in <module>
from nemo.core.classes import *
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/core/classes/__init__.py", line 16, in <module>
import hydra
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/__init__.py", line 5, in <module>
from hydra import utils
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/utils.py", line 8, in <module>
import hydra._internal.instantiate._instantiate2
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 12, in <module>
from hydra._internal.utils import _locate
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/_internal/utils.py", line 18, in <module>
from hydra.core.utils import get_valid_filename, validate_config_path
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/core/utils.py", line 20, in <module>
from hydra.core.hydra_config import HydraConfig
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/core/hydra_config.py", line 6, in <module>
from hydra.conf import HydraConf
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/conf/__init__.py", line 46, in <module>
class JobConf:
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/conf/__init__.py", line 75, in JobConf
@dataclass
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1230, in dataclass
return wrap(cls)
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1220, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 958, in _process_class
cls_fields.append(_get_field(cls, name, type, kw_only))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 815, in _get_field
raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'hydra.conf.JobConf.JobConfig.OverrideDirname'> for field override_dirname is not allowed: use default_factory
```
If I then manually upgrade `hydra-core` to the current latest (`1.3.2`), I get similar errors from `nemo-toolkit`:
```
Traceback (most recent call last):
File "/home/alex/T7/src/speaker_verification/nemo_speaker/test.py", line 5, in <module>
import nemo.collections.asr as nemo_asr
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/__init__.py", line 15, in <module>
from nemo.collections.asr import data, losses, models, modules
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/losses/__init__.py", line 16, in <module>
from nemo.collections.asr.losses.audio_losses import SDRLoss
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/losses/audio_losses.py", line 21, in <module>
from nemo.collections.asr.parts.preprocessing.features import make_seq_mask_like
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/__init__.py", line 16, in <module>
from nemo.collections.asr.parts.preprocessing.features import FeaturizerFactory, FilterbankFeatures, WaveformFeaturizer
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/features.py", line 44, in <module>
from nemo.collections.asr.parts.preprocessing.perturb import AudioAugmentor
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/perturb.py", line 50, in <module>
from nemo.collections.common.parts.preprocessing import collections, parsers
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/common/__init__.py", line 16, in <module>
from nemo.collections.common import data, losses, parts, tokenizers
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/common/parts/__init__.py", line 15, in <module>
from nemo.collections.common.parts.adapter_modules import LinearAdapter, LinearAdapterConfig
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/common/parts/adapter_modules.py", line 147, in <module>
@dataclass
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1230, in dataclass
return wrap(cls)
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1220, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 958, in _process_class
cls_fields.append(_get_field(cls, name, type, kw_only))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 815, in _get_field
raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'nemo.core.classes.mixins.adapter_mixin_strategies.ResidualAddAdapterStrategyConfig'> for field adapter_strategy is not allowed: use default_factory
```
It's easy to fix with a patch like this:
```
--- nemo/collections/common/parts/adapter_modules.py.orig 2023-08-04 13:55:53.464534800 +0200
+++ nemo/collections/common/parts/adapter_modules.py 2023-08-04 14:05:45.579537700 +0200
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass, is_dataclass
+from dataclasses import dataclass, is_dataclass, field
from typing import Any, Optional
from hydra.utils import instantiate
@@ -151,5 +151,5 @@ class LinearAdapterConfig:
activation: str = 'swish'
norm_position: str = 'pre'
dropout: float = 0.0
- adapter_strategy: Optional[Any] = adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
+ adapter_strategy: Optional[Any] = field(default_factory=adapter_mixin_strategies.ResidualAddAdapterStrategyConfig)
_target_: str = "{0}.{1}".format(LinearAdapter.__module__, LinearAdapter.__name__)
```
However then another error of the kind comes up:
```
Traceback (most recent call last):
File "/home/alex/T7/src/speaker_verification/nemo_speaker/test.py", line 5, in <module>
import nemo.collections.asr as nemo_asr
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/__init__.py", line 15, in <module>
from nemo.collections.asr import data, losses, models, modules
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/models/__init__.py", line 18, in <module>
from nemo.collections.asr.models.clustering_diarizer import ClusteringDiarizer
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/models/clustering_diarizer.py", line 29, in <module>
from nemo.collections.asr.metrics.der import score_labels
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/metrics/der.py", line 24, in <module>
from nemo.collections.asr.metrics.wer import word_error_rate
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/metrics/wer.py", line 27, in <module>
from nemo.collections.asr.parts.submodules import ctc_beam_decoding, ctc_greedy_decoding
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/parts/submodules/ctc_beam_decoding.py", line 593, in <module>
@dataclass
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1230, in dataclass
return wrap(cls)
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1220, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 958, in _process_class
cls_fields.append(_get_field(cls, name, type, kw_only))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 815, in _get_field
raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'nemo.collections.asr.parts.submodules.ctc_beam_decoding.FlashlightConfig'> for field flashlight_cfg is not allowed: use default_factory
```
This can also be fixed the same way, but all in all, these mutable-default issues appear to be pretty common throughout the code base.
Looks like NeMo isn't Python 3.11 ready, at least the `asr` collection.
**Steps/Code to reproduce bug**
With Python 3.11
1. Install `nemo-toolkit[asr]` either 1.19.1, or current HEAD from git (b498d438fc4c35ebf364a9a1c5cd3e29a2c0fe50)
2. Run `import nemo.collections.asr`
**Expected behavior**
Import `nemo.collections.asr` without errors
**Environment overview (please complete the following information)**
- Environment location: Bare-metal
- Method of NeMo install: `pip install nemo-toolkit==1.19.1` or from source b498d438fc4c35ebf364a9a1c5cd3e29a2c0fe50
**Environment details**
- Ubuntu 22.04
- PyTorch version 2.0.1
- Python version 3.11.4
Seems to be similar to #7002
Interesting. The fix is easy but needs to be applied to basically every single place that has this constructor for our adapter configs. Let me see if I can update it. But no guarantees on how soon fixes will come in main.
Looking forward to it @titu1994! Thanks!
@titu1994 I was looking to use NeMo speaker diarization with Python 3.11 and hit this dataclass issue. I patched everything involved in the specific code paths I needed: https://github.com/lmnt-com/NeMo/commit/d89acf9f0152e43dee29d7d1c4667ee34c26ffd7
I was using the neural diarizer as described in https://github.com/NVIDIA/NeMo/tree/main/examples/speaker_tasks/diarization
I'd be happy to upstream this if it's helpful.
I haven't checked whether this is backwards compatible for earlier python/dataclass versions, do you know?
For reference, here is what led me to this issue, though it duplicates the above discussion:
[Similar error](https://github.com/huggingface/datasets/issues/5230)
[StackOverflow solution](https://stackoverflow.com/questions/53632152/why-cant-dataclasses-have-mutable-defaults-in-their-class-attributes-declaratio)
@shaper Thanks for sharing. For brevity, you don't really need a `lambda` when you don't pass any init parameters, like this:
```
field(default_factory=lambda: ConfidenceConfig())
```
You can just do
```
field(default_factory=ConfidenceConfig)
```
It's only needed when you do pass parameter(s), like
```
field(default_factory=lambda: beam_decode.BeamRNNTInferConfig(beam_size=4))
```
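A minimal sketch of the difference (the config names here are made up for illustration, not taken from NeMo):

```python
from dataclasses import dataclass, field


@dataclass
class BeamConfig:
    beam_size: int = 1


@dataclass
class DecodingConfig:
    # No constructor arguments: pass the class itself as the factory.
    greedy: BeamConfig = field(default_factory=BeamConfig)
    # With arguments: a lambda (or functools.partial) defers the call
    # until each DecodingConfig instance is created.
    beam: BeamConfig = field(default_factory=lambda: BeamConfig(beam_size=4))


cfg = DecodingConfig()
print(cfg.greedy.beam_size)  # 1
print(cfg.beam.beam_size)    # 4
```

`functools.partial(BeamConfig, beam_size=4)` is an equivalent lambda-free spelling of the second form.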
This issue is stale because it has been open for 30 days with no activity. Remove stale label or comment or this will be closed in 7 days.
I have the same issue. @tango4j suggested using one of the models from https://huggingface.co/spaces/hf-audio/open_asr_leaderboard, but I cannot import nemo.collections.asr:
```
Traceback (most recent call last):
File "/opt/pycharm-2022.3.3/plugins/python/helpers/pycharm/docrunner.py", line 138, in __run
exec(compile(example.source, filename, "single",
File "<doctest NeMoASR[2]>", line 1, in <module>
NeMoASR().apply_asr(file)
^^^^^^^^^
File "/home/cbj/python/cbj/cbj/transcribe/pretrained.py", line 504, in __init__
import nemo.collections.asr as nemo_asr
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/__init__.py", line 15, in <module>
from nemo.collections.asr import data, losses, models, modules
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/losses/__init__.py", line 16, in <module>
from nemo.collections.asr.losses.audio_losses import SDRLoss
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/losses/audio_losses.py", line 21, in <module>
from nemo.collections.asr.parts.preprocessing.features import make_seq_mask_like
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/__init__.py", line 16, in <module>
from nemo.collections.asr.parts.preprocessing.features import FeaturizerFactory, FilterbankFeatures, WaveformFeaturizer
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/features.py", line 44, in <module>
from nemo.collections.asr.parts.preprocessing.perturb import AudioAugmentor
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/perturb.py", line 50, in <module>
from nemo.collections.common.parts.preprocessing import collections, parsers
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/common/__init__.py", line 16, in <module>
from nemo.collections.common import data, losses, parts, tokenizers
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/common/parts/__init__.py", line 15, in <module>
from nemo.collections.common.parts.adapter_modules import LinearAdapter, LinearAdapterConfig
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/common/parts/adapter_modules.py", line 147, in <module>
@dataclass
^^^^^^^^^
File "/opt/py/2023/lib/python3.11/dataclasses.py", line 1230, in dataclass
return wrap(cls)
^^^^^^^^^
File "/opt/py/2023/lib/python3.11/dataclasses.py", line 1220, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/py/2023/lib/python3.11/dataclasses.py", line 958, in _process_class
cls_fields.append(_get_field(cls, name, type, kw_only))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/py/2023/lib/python3.11/dataclasses.py", line 815, in _get_field
raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'nemo.core.classes.mixins.adapter_mixin_strategies.ResidualAddAdapterStrategyConfig'> for field adapter_strategy is not allowed: use default_factory
```
For documentation (I had to search through the provided links):
Mutable defaults were never allowed in dataclasses, but Python 3.11 tightened the check: instead of rejecting only a few known types (dict, list, set), the dataclass machinery now treats any unhashable default value as mutable.
An alternative to `default_factory` would be frozen dataclasses (whose instances are hashable), but I don't know whether the configs in this code base are mutated after creation.
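A small self-contained sketch of what changed (no NeMo involved; a plain `@dataclass` with the default `eq=True, frozen=False` sets `__hash__ = None`, which is what Python 3.11 keys on):

```python
import sys
from dataclasses import dataclass, field


@dataclass
class Cfg:
    x: int = 0


# Instances of a plain @dataclass are unhashable -- the 3.11 indicator.
print(Cfg.__hash__)  # None

# On Python 3.11+ using such an instance as a class-level default
# raises at class-creation time; 3.10 and earlier accept it.
if sys.version_info >= (3, 11):
    try:
        @dataclass
        class Broken:
            cfg: Cfg = Cfg()
    except ValueError as e:
        print(type(e).__name__)  # ValueError


# default_factory works on every Python version with dataclasses,
# and gives each instance its own Cfg rather than a shared one.
@dataclass
class Fixed:
    cfg: Cfg = field(default_factory=Cfg)


a, b = Fixed(), Fixed()
print(a.cfg is b.cfg)  # False
```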
You need to update to NeMo 1.20; omegaconf shipped a fix that should resolve this.
I have NeMo 1.20.0.
I installed NeMo yesterday with `pip install nemo_toolkit` and `pip install pytorch_lightning`, so it should be the newest PyPI version.
```
$ pip show nemo_toolkit
Name: nemo-toolkit
Version: 1.20.0
Summary: NeMo - a toolkit for Conversational AI
Home-page: https://github.com/nvidia/nemo
Author: NVIDIA
Author-email: nemo-toolkit@nvidia.com
License: Apache2
Location: /opt/py/2023/lib/python3.11/site-packages
Requires: huggingface-hub, numba, numpy, onnx, python-dateutil, ruamel.yaml, scikit-learn, setuptools, tensorboard, text-unidecode, torch, tqdm, wget, wrapt
Required-by:
$ pip show omegaconf
Name: omegaconf
Version: 2.3.0
Summary: A flexible configuration library
Home-page: https://github.com/omry/omegaconf
Author: Omry Yadan
Author-email: omry@yadan.net
License:
Location: /home/cbj/.local/lib/python3.11/site-packages
Requires: antlr4-python3-runtime, PyYAML
Required-by: hydra-core
$ python -c "import nemo.collections.asr as nemo_asr"
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/__init__.py", line 15, in <module>
from nemo.collections.asr import data, losses, models, modules
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/losses/__init__.py", line 16, in <module>
from nemo.collections.asr.losses.audio_losses import SDRLoss
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/losses/audio_losses.py", line 21, in <module>
from nemo.collections.asr.parts.preprocessing.features import make_seq_mask_like
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/__init__.py", line 16, in <module>
from nemo.collections.asr.parts.preprocessing.features import FeaturizerFactory, FilterbankFeatures, WaveformFeaturizer
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/features.py", line 44, in <module>
from nemo.collections.asr.parts.preprocessing.perturb import AudioAugmentor
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/perturb.py", line 50, in <module>
from nemo.collections.common.parts.preprocessing import collections, parsers
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/common/__init__.py", line 16, in <module>
from nemo.collections.common import data, losses, parts, tokenizers
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/common/parts/__init__.py", line 15, in <module>
from nemo.collections.common.parts.adapter_modules import LinearAdapter, LinearAdapterConfig
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/common/parts/adapter_modules.py", line 147, in <module>
@dataclass
^^^^^^^^^
File "/opt/py/2023/lib/python3.11/dataclasses.py", line 1230, in dataclass
return wrap(cls)
^^^^^^^^^
File "/opt/py/2023/lib/python3.11/dataclasses.py", line 1220, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/py/2023/lib/python3.11/dataclasses.py", line 958, in _process_class
cls_fields.append(_get_field(cls, name, type, kw_only))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/py/2023/lib/python3.11/dataclasses.py", line 815, in _get_field
raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'nemo.core.classes.mixins.adapter_mixin_strategies.ResidualAddAdapterStrategyConfig'> for field adapter_strategy is not allowed: use default_factory
```
Hmm ok I'll take a look
2023-09-30T01:26:50Z
<patch>
diff --git a/examples/asr/experimental/k2/align_speech_parallel.py b/examples/asr/experimental/k2/align_speech_parallel.py
--- a/examples/asr/experimental/k2/align_speech_parallel.py
+++ b/examples/asr/experimental/k2/align_speech_parallel.py
@@ -74,7 +74,7 @@
import os
-from dataclasses import dataclass, is_dataclass
+from dataclasses import dataclass, field, is_dataclass
from typing import Optional
import pytorch_lightning as ptl
@@ -94,12 +94,14 @@
@dataclass
class ParallelAlignmentConfig:
model: Optional[str] = None # name
- predict_ds: ASRDatasetConfig = ASRDatasetConfig(return_sample_id=True, num_workers=4)
- aligner_args: K2AlignerWrapperModelConfig = K2AlignerWrapperModelConfig()
+ predict_ds: ASRDatasetConfig = field(
+ default_factory=lambda: ASRDatasetConfig(return_sample_id=True, num_workers=4)
+ )
+ aligner_args: K2AlignerWrapperModelConfig = field(default_factory=lambda: K2AlignerWrapperModelConfig())
output_path: str = MISSING
model_stride: int = 8
- trainer: TrainerConfig = TrainerConfig(gpus=-1, accelerator="ddp")
+ trainer: TrainerConfig = field(default_factory=lambda: TrainerConfig(gpus=-1, accelerator="ddp"))
# there arguments will be ignored
return_predictions: bool = False
diff --git a/nemo/collections/asr/metrics/rnnt_wer.py b/nemo/collections/asr/metrics/rnnt_wer.py
--- a/nemo/collections/asr/metrics/rnnt_wer.py
+++ b/nemo/collections/asr/metrics/rnnt_wer.py
@@ -15,7 +15,7 @@
import copy
import re
from abc import abstractmethod
-from dataclasses import dataclass, is_dataclass
+from dataclasses import dataclass, field, is_dataclass
from typing import Callable, Dict, List, Optional, Tuple, Union
import editdistance
@@ -1299,7 +1299,7 @@ class RNNTDecodingConfig:
preserve_alignments: Optional[bool] = None
# confidence config
- confidence_cfg: ConfidenceConfig = ConfidenceConfig()
+ confidence_cfg: ConfidenceConfig = field(default_factory=lambda: ConfidenceConfig())
# RNNT Joint fused batch size
fused_batch_size: Optional[int] = None
@@ -1317,10 +1317,10 @@ class RNNTDecodingConfig:
rnnt_timestamp_type: str = "all" # can be char, word or all for both
# greedy decoding config
- greedy: greedy_decode.GreedyRNNTInferConfig = greedy_decode.GreedyRNNTInferConfig()
+ greedy: greedy_decode.GreedyRNNTInferConfig = field(default_factory=lambda: greedy_decode.GreedyRNNTInferConfig())
# beam decoding config
- beam: beam_decode.BeamRNNTInferConfig = beam_decode.BeamRNNTInferConfig(beam_size=4)
+ beam: beam_decode.BeamRNNTInferConfig = field(default_factory=lambda: beam_decode.BeamRNNTInferConfig(beam_size=4))
# can be used to change temperature for decoding
temperature: float = 1.0
diff --git a/nemo/collections/asr/metrics/wer.py b/nemo/collections/asr/metrics/wer.py
--- a/nemo/collections/asr/metrics/wer.py
+++ b/nemo/collections/asr/metrics/wer.py
@@ -14,7 +14,7 @@
import re
from abc import abstractmethod
-from dataclasses import dataclass, is_dataclass
+from dataclasses import dataclass, field, is_dataclass
from typing import Callable, Dict, List, Optional, Tuple, Union
import editdistance
@@ -1297,13 +1297,17 @@ class CTCDecodingConfig:
batch_dim_index: int = 0
# greedy decoding config
- greedy: ctc_greedy_decoding.GreedyCTCInferConfig = ctc_greedy_decoding.GreedyCTCInferConfig()
+ greedy: ctc_greedy_decoding.GreedyCTCInferConfig = field(
+ default_factory=lambda: ctc_greedy_decoding.GreedyCTCInferConfig()
+ )
# beam decoding config
- beam: ctc_beam_decoding.BeamCTCInferConfig = ctc_beam_decoding.BeamCTCInferConfig(beam_size=4)
+ beam: ctc_beam_decoding.BeamCTCInferConfig = field(
+ default_factory=lambda: ctc_beam_decoding.BeamCTCInferConfig(beam_size=4)
+ )
# confidence config
- confidence_cfg: ConfidenceConfig = ConfidenceConfig()
+ confidence_cfg: ConfidenceConfig = field(default_factory=lambda: ConfidenceConfig())
# can be used to change temperature for decoding
temperature: float = 1.0
diff --git a/nemo/collections/asr/models/configs/aligner_config.py b/nemo/collections/asr/models/configs/aligner_config.py
--- a/nemo/collections/asr/models/configs/aligner_config.py
+++ b/nemo/collections/asr/models/configs/aligner_config.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from nemo.collections.asr.parts.k2.classes import GraphModuleConfig
@@ -35,10 +35,10 @@ class AlignerWrapperModelConfig:
word_output: bool = True
cpu_decoding: bool = False
decode_batch_size: int = 0
- ctc_cfg: AlignerCTCConfig = AlignerCTCConfig()
- rnnt_cfg: AlignerRNNTConfig = AlignerRNNTConfig()
+ ctc_cfg: AlignerCTCConfig = field(default_factory=lambda: AlignerCTCConfig())
+ rnnt_cfg: AlignerRNNTConfig = field(default_factory=lambda: AlignerRNNTConfig())
@dataclass
class K2AlignerWrapperModelConfig(AlignerWrapperModelConfig):
- decoder_module_cfg: GraphModuleConfig = GraphModuleConfig()
+ decoder_module_cfg: GraphModuleConfig = field(default_factory=lambda: GraphModuleConfig())
diff --git a/nemo/collections/asr/models/configs/asr_models_config.py b/nemo/collections/asr/models/configs/asr_models_config.py
--- a/nemo/collections/asr/models/configs/asr_models_config.py
+++ b/nemo/collections/asr/models/configs/asr_models_config.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional
from omegaconf import MISSING
@@ -74,24 +74,32 @@ class EncDecCTCConfig(model_cfg.ModelConfig):
labels: List[str] = MISSING
# Dataset configs
- train_ds: ASRDatasetConfig = ASRDatasetConfig(manifest_filepath=None, shuffle=True)
- validation_ds: ASRDatasetConfig = ASRDatasetConfig(manifest_filepath=None, shuffle=False)
- test_ds: ASRDatasetConfig = ASRDatasetConfig(manifest_filepath=None, shuffle=False)
+ train_ds: ASRDatasetConfig = field(default_factory=lambda: ASRDatasetConfig(manifest_filepath=None, shuffle=True))
+ validation_ds: ASRDatasetConfig = field(
+ default_factory=lambda: ASRDatasetConfig(manifest_filepath=None, shuffle=False)
+ )
+ test_ds: ASRDatasetConfig = field(default_factory=lambda: ASRDatasetConfig(manifest_filepath=None, shuffle=False))
# Optimizer / Scheduler config
- optim: Optional[model_cfg.OptimConfig] = model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
+ optim: Optional[model_cfg.OptimConfig] = field(
+ default_factory=lambda: model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
+ )
# Model component configs
- preprocessor: AudioToMelSpectrogramPreprocessorConfig = AudioToMelSpectrogramPreprocessorConfig()
- spec_augment: Optional[SpectrogramAugmentationConfig] = SpectrogramAugmentationConfig()
- encoder: ConvASREncoderConfig = ConvASREncoderConfig()
- decoder: ConvASRDecoderConfig = ConvASRDecoderConfig()
- decoding: CTCDecodingConfig = CTCDecodingConfig()
+ preprocessor: AudioToMelSpectrogramPreprocessorConfig = field(
+ default_factory=lambda: AudioToMelSpectrogramPreprocessorConfig()
+ )
+ spec_augment: Optional[SpectrogramAugmentationConfig] = field(
+ default_factory=lambda: SpectrogramAugmentationConfig()
+ )
+ encoder: ConvASREncoderConfig = field(default_factory=lambda: ConvASREncoderConfig())
+ decoder: ConvASRDecoderConfig = field(default_factory=lambda: ConvASRDecoderConfig())
+ decoding: CTCDecodingConfig = field(default_factory=lambda: CTCDecodingConfig())
@dataclass
class EncDecCTCModelConfig(model_cfg.NemoConfig):
- model: EncDecCTCConfig = EncDecCTCConfig()
+ model: EncDecCTCConfig = field(default_factory=lambda: EncDecCTCConfig())
@dataclass
diff --git a/nemo/collections/asr/models/configs/classification_models_config.py b/nemo/collections/asr/models/configs/classification_models_config.py
--- a/nemo/collections/asr/models/configs/classification_models_config.py
+++ b/nemo/collections/asr/models/configs/classification_models_config.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional
from omegaconf import MISSING
@@ -72,30 +72,40 @@ class EncDecClassificationConfig(model_cfg.ModelConfig):
timesteps: int = MISSING
# Dataset configs
- train_ds: EncDecClassificationDatasetConfig = EncDecClassificationDatasetConfig(
- manifest_filepath=None, shuffle=True, trim_silence=False
+ train_ds: EncDecClassificationDatasetConfig = field(
+ default_factory=lambda: EncDecClassificationDatasetConfig(
+ manifest_filepath=None, shuffle=True, trim_silence=False
+ )
)
- validation_ds: EncDecClassificationDatasetConfig = EncDecClassificationDatasetConfig(
- manifest_filepath=None, shuffle=False
+ validation_ds: EncDecClassificationDatasetConfig = field(
+ default_factory=lambda: EncDecClassificationDatasetConfig(manifest_filepath=None, shuffle=False)
)
- test_ds: EncDecClassificationDatasetConfig = EncDecClassificationDatasetConfig(
- manifest_filepath=None, shuffle=False
+ test_ds: EncDecClassificationDatasetConfig = field(
+ default_factory=lambda: EncDecClassificationDatasetConfig(manifest_filepath=None, shuffle=False)
)
# Optimizer / Scheduler config
- optim: Optional[model_cfg.OptimConfig] = model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
+ optim: Optional[model_cfg.OptimConfig] = field(
+ default_factory=lambda: model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
+ )
# Model component configs
- preprocessor: AudioToMFCCPreprocessorConfig = AudioToMFCCPreprocessorConfig()
- spec_augment: Optional[SpectrogramAugmentationConfig] = SpectrogramAugmentationConfig()
- crop_or_pad_augment: Optional[CropOrPadSpectrogramAugmentationConfig] = CropOrPadSpectrogramAugmentationConfig(
- audio_length=timesteps
+ preprocessor: AudioToMFCCPreprocessorConfig = field(default_factory=lambda: AudioToMFCCPreprocessorConfig())
+ spec_augment: Optional[SpectrogramAugmentationConfig] = field(
+ default_factory=lambda: SpectrogramAugmentationConfig()
+ )
+ crop_or_pad_augment: Optional[CropOrPadSpectrogramAugmentationConfig] = field(
+ default_factory=lambda: CropOrPadSpectrogramAugmentationConfig(audio_length=-1)
)
- encoder: ConvASREncoderConfig = ConvASREncoderConfig()
- decoder: ConvASRDecoderClassificationConfig = ConvASRDecoderClassificationConfig()
+ encoder: ConvASREncoderConfig = field(default_factory=lambda: ConvASREncoderConfig())
+ decoder: ConvASRDecoderClassificationConfig = field(default_factory=lambda: ConvASRDecoderClassificationConfig())
+
+ def __post_init__(self):
+ if self.crop_or_pad_augment is not None:
+ self.crop_or_pad_augment.audio_length = self.timesteps
@dataclass
class EncDecClassificationModelConfig(model_cfg.NemoConfig):
- model: EncDecClassificationConfig = EncDecClassificationConfig()
+ model: EncDecClassificationConfig = field(default_factory=lambda: EncDecClassificationConfig())
diff --git a/nemo/collections/asr/models/configs/diarizer_config.py b/nemo/collections/asr/models/configs/diarizer_config.py
--- a/nemo/collections/asr/models/configs/diarizer_config.py
+++ b/nemo/collections/asr/models/configs/diarizer_config.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import asdict, dataclass
+from dataclasses import asdict, dataclass, field
from typing import Any, Dict, Optional, Tuple, Union
@@ -78,9 +78,9 @@ class ASRDiarizerParams(DiarizerComponentConfig):
@dataclass
class ASRDiarizerConfig(DiarizerComponentConfig):
model_path: Optional[str] = "stt_en_conformer_ctc_large"
- parameters: ASRDiarizerParams = ASRDiarizerParams()
- ctc_decoder_parameters: ASRDiarizerCTCDecoderParams = ASRDiarizerCTCDecoderParams()
- realigning_lm_parameters: ASRRealigningLMParams = ASRRealigningLMParams()
+ parameters: ASRDiarizerParams = field(default_factory=lambda: ASRDiarizerParams())
+ ctc_decoder_parameters: ASRDiarizerCTCDecoderParams = field(default_factory=lambda: ASRDiarizerCTCDecoderParams())
+ realigning_lm_parameters: ASRRealigningLMParams = field(default_factory=lambda: ASRRealigningLMParams())
@dataclass
@@ -102,7 +102,7 @@ class VADParams(DiarizerComponentConfig):
class VADConfig(DiarizerComponentConfig):
model_path: str = "vad_multilingual_marblenet" # .nemo local model path or pretrained VAD model name
external_vad_manifest: Optional[str] = None
- parameters: VADParams = VADParams()
+ parameters: VADParams = field(default_factory=lambda: VADParams())
@dataclass
@@ -121,7 +121,7 @@ class SpeakerEmbeddingsParams(DiarizerComponentConfig):
class SpeakerEmbeddingsConfig(DiarizerComponentConfig):
# .nemo local model path or pretrained model name (titanet_large, ecapa_tdnn or speakerverification_speakernet)
model_path: Optional[str] = None
- parameters: SpeakerEmbeddingsParams = SpeakerEmbeddingsParams()
+ parameters: SpeakerEmbeddingsParams = field(default_factory=lambda: SpeakerEmbeddingsParams())
@dataclass
@@ -142,7 +142,7 @@ class ClusteringParams(DiarizerComponentConfig):
@dataclass
class ClusteringConfig(DiarizerComponentConfig):
- parameters: ClusteringParams = ClusteringParams()
+ parameters: ClusteringParams = field(default_factory=lambda: ClusteringParams())
@dataclass
@@ -166,7 +166,7 @@ class MSDDParams(DiarizerComponentConfig):
@dataclass
class MSDDConfig(DiarizerComponentConfig):
model_path: Optional[str] = "diar_msdd_telephonic"
- parameters: MSDDParams = MSDDParams()
+ parameters: MSDDParams = field(default_factory=lambda: MSDDParams())
@dataclass
@@ -176,16 +176,16 @@ class DiarizerConfig(DiarizerComponentConfig):
oracle_vad: bool = False # If True, uses RTTM files provided in the manifest file to get VAD timestamps
collar: float = 0.25 # Collar value for scoring
ignore_overlap: bool = True # Consider or ignore overlap segments while scoring
- vad: VADConfig = VADConfig()
- speaker_embeddings: SpeakerEmbeddingsConfig = SpeakerEmbeddingsConfig()
- clustering: ClusteringConfig = ClusteringConfig()
- msdd_model: MSDDConfig = MSDDConfig()
- asr: ASRDiarizerConfig = ASRDiarizerConfig()
+ vad: VADConfig = field(default_factory=lambda: VADConfig())
+ speaker_embeddings: SpeakerEmbeddingsConfig = field(default_factory=lambda: SpeakerEmbeddingsConfig())
+ clustering: ClusteringConfig = field(default_factory=lambda: ClusteringConfig())
+ msdd_model: MSDDConfig = field(default_factory=lambda: MSDDConfig())
+ asr: ASRDiarizerConfig = field(default_factory=lambda: ASRDiarizerConfig())
@dataclass
class NeuralDiarizerInferenceConfig(DiarizerComponentConfig):
- diarizer: DiarizerConfig = DiarizerConfig()
+ diarizer: DiarizerConfig = field(default_factory=lambda: DiarizerConfig())
device: str = "cpu"
verbose: bool = False
batch_size: int = 64
diff --git a/nemo/collections/asr/models/configs/k2_sequence_models_config.py b/nemo/collections/asr/models/configs/k2_sequence_models_config.py
--- a/nemo/collections/asr/models/configs/k2_sequence_models_config.py
+++ b/nemo/collections/asr/models/configs/k2_sequence_models_config.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from nemo.collections.asr.models.configs.asr_models_config import EncDecCTCConfig
from nemo.collections.asr.parts.k2.classes import GraphModuleConfig as BackendConfig
@@ -26,14 +26,14 @@ class GraphModuleConfig:
split_batch_size: int = 0
dec_type: str = "topo"
transcribe_training: bool = True
- backend_cfg: BackendConfig = BackendConfig()
+ backend_cfg: BackendConfig = field(default_factory=lambda: BackendConfig())
@dataclass
class EncDecK2SeqConfig(EncDecCTCConfig):
- graph_module_cfg: GraphModuleConfig = GraphModuleConfig()
+ graph_module_cfg: GraphModuleConfig = field(default_factory=lambda: GraphModuleConfig())
@dataclass
class EncDecK2SeqModelConfig(NemoConfig):
- model: EncDecK2SeqConfig = EncDecK2SeqConfig()
+ model: EncDecK2SeqConfig = field(default_factory=lambda: EncDecK2SeqConfig())
diff --git a/nemo/collections/asr/models/configs/matchboxnet_config.py b/nemo/collections/asr/models/configs/matchboxnet_config.py
--- a/nemo/collections/asr/models/configs/matchboxnet_config.py
+++ b/nemo/collections/asr/models/configs/matchboxnet_config.py
@@ -107,30 +107,38 @@ class MatchboxNetModelConfig(clf_cfg.EncDecClassificationConfig):
labels: List[str] = MISSING
# Dataset configs
- train_ds: clf_cfg.EncDecClassificationDatasetConfig = clf_cfg.EncDecClassificationDatasetConfig(
- manifest_filepath=None, shuffle=True, trim_silence=False
+ train_ds: clf_cfg.EncDecClassificationDatasetConfig = field(
+ default_factory=lambda: clf_cfg.EncDecClassificationDatasetConfig(
+ manifest_filepath=None, shuffle=True, trim_silence=False
+ )
)
- validation_ds: clf_cfg.EncDecClassificationDatasetConfig = clf_cfg.EncDecClassificationDatasetConfig(
- manifest_filepath=None, shuffle=False
+ validation_ds: clf_cfg.EncDecClassificationDatasetConfig = field(
+ default_factory=lambda: clf_cfg.EncDecClassificationDatasetConfig(manifest_filepath=None, shuffle=False)
)
- test_ds: clf_cfg.EncDecClassificationDatasetConfig = clf_cfg.EncDecClassificationDatasetConfig(
- manifest_filepath=None, shuffle=False
+ test_ds: clf_cfg.EncDecClassificationDatasetConfig = field(
+ default_factory=lambda: clf_cfg.EncDecClassificationDatasetConfig(manifest_filepath=None, shuffle=False)
)
# Optimizer / Scheduler config
- optim: Optional[model_cfg.OptimConfig] = model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
+ optim: Optional[model_cfg.OptimConfig] = field(
+ default_factory=lambda: model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
+ )
# Model general component configs
- preprocessor: AudioToMFCCPreprocessorConfig = AudioToMFCCPreprocessorConfig(window_size=0.025)
- spec_augment: Optional[SpectrogramAugmentationConfig] = SpectrogramAugmentationConfig(
- freq_masks=2, time_masks=2, freq_width=15, time_width=25, rect_masks=5, rect_time=25, rect_freq=15
+ preprocessor: AudioToMFCCPreprocessorConfig = field(
+ default_factory=lambda: AudioToMFCCPreprocessorConfig(window_size=0.025)
+ )
+ spec_augment: Optional[SpectrogramAugmentationConfig] = field(
+ default_factory=lambda: SpectrogramAugmentationConfig(
+ freq_masks=2, time_masks=2, freq_width=15, time_width=25, rect_masks=5, rect_time=25, rect_freq=15
+ )
)
- crop_or_pad_augment: Optional[CropOrPadSpectrogramAugmentationConfig] = CropOrPadSpectrogramAugmentationConfig(
- audio_length=128
+ crop_or_pad_augment: Optional[CropOrPadSpectrogramAugmentationConfig] = field(
+ default_factory=lambda: CropOrPadSpectrogramAugmentationConfig(audio_length=128)
)
- encoder: ConvASREncoderConfig = ConvASREncoderConfig(activation="relu")
- decoder: ConvASRDecoderClassificationConfig = ConvASRDecoderClassificationConfig()
+ encoder: ConvASREncoderConfig = field(default_factory=lambda: ConvASREncoderConfig(activation="relu"))
+ decoder: ConvASRDecoderClassificationConfig = field(default_factory=lambda: ConvASRDecoderClassificationConfig())
@dataclass
diff --git a/nemo/collections/asr/models/configs/quartznet_config.py b/nemo/collections/asr/models/configs/quartznet_config.py
--- a/nemo/collections/asr/models/configs/quartznet_config.py
+++ b/nemo/collections/asr/models/configs/quartznet_config.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import Any, Callable, List, Optional
from omegaconf import MISSING
@@ -174,20 +174,30 @@ class JasperModelConfig(ctc_cfg.EncDecCTCConfig):
labels: List[str] = MISSING
# Dataset configs
- train_ds: ctc_cfg.ASRDatasetConfig = ctc_cfg.ASRDatasetConfig(
- manifest_filepath=None, shuffle=True, trim_silence=True
+ train_ds: ctc_cfg.ASRDatasetConfig = field(
+ default_factory=lambda: ctc_cfg.ASRDatasetConfig(manifest_filepath=None, shuffle=True, trim_silence=True)
+ )
+ validation_ds: ctc_cfg.ASRDatasetConfig = field(
+ default_factory=lambda: ctc_cfg.ASRDatasetConfig(manifest_filepath=None, shuffle=False)
+ )
+ test_ds: ctc_cfg.ASRDatasetConfig = field(
+ default_factory=lambda: ctc_cfg.ASRDatasetConfig(manifest_filepath=None, shuffle=False)
)
- validation_ds: ctc_cfg.ASRDatasetConfig = ctc_cfg.ASRDatasetConfig(manifest_filepath=None, shuffle=False)
- test_ds: ctc_cfg.ASRDatasetConfig = ctc_cfg.ASRDatasetConfig(manifest_filepath=None, shuffle=False)
# Optimizer / Scheduler config
- optim: Optional[model_cfg.OptimConfig] = model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
+ optim: Optional[model_cfg.OptimConfig] = field(
+ default_factory=lambda: model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
+ )
# Model general component configs
- preprocessor: AudioToMelSpectrogramPreprocessorConfig = AudioToMelSpectrogramPreprocessorConfig()
- spec_augment: Optional[SpectrogramAugmentationConfig] = SpectrogramAugmentationConfig()
- encoder: ConvASREncoderConfig = ConvASREncoderConfig(activation="relu")
- decoder: ConvASRDecoderConfig = ConvASRDecoderConfig()
+ preprocessor: AudioToMelSpectrogramPreprocessorConfig = field(
+ default_factory=lambda: AudioToMelSpectrogramPreprocessorConfig()
+ )
+ spec_augment: Optional[SpectrogramAugmentationConfig] = field(
+ default_factory=lambda: SpectrogramAugmentationConfig()
+ )
+ encoder: ConvASREncoderConfig = field(default_factory=lambda: ConvASREncoderConfig(activation="relu"))
+ decoder: ConvASRDecoderConfig = field(default_factory=lambda: ConvASRDecoderConfig())
@dataclass
diff --git a/nemo/collections/asr/modules/audio_preprocessing.py b/nemo/collections/asr/modules/audio_preprocessing.py
--- a/nemo/collections/asr/modules/audio_preprocessing.py
+++ b/nemo/collections/asr/modules/audio_preprocessing.py
@@ -634,6 +634,12 @@ def __init__(self, audio_length):
super(CropOrPadSpectrogramAugmentation, self).__init__()
self.audio_length = audio_length
+ if self.audio_length < 0:
+ raise ValueError(
+ 'audio_length must be non-negative. If using a dataclass with OmegaConf, '
+ 'please call OmegaConf.to_object(cfg) to call appropriate __post_init__ methods.'
+ )
+
@typecheck()
@torch.no_grad()
def forward(self, input_signal, length):
diff --git a/nemo/collections/asr/parts/k2/classes.py b/nemo/collections/asr/parts/k2/classes.py
--- a/nemo/collections/asr/parts/k2/classes.py
+++ b/nemo/collections/asr/parts/k2/classes.py
@@ -13,7 +13,7 @@
# limitations under the License.
from abc import ABC
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import Any, Optional, Tuple
import torch
@@ -43,7 +43,7 @@ class GraphModuleConfig:
topo_with_self_loops: bool = True
token_lm: Optional[Any] = None
intersect_pruned: bool = False
- intersect_conf: GraphIntersectDenseConfig = GraphIntersectDenseConfig()
+ intersect_conf: GraphIntersectDenseConfig = field(default_factory=lambda: GraphIntersectDenseConfig())
boost_coeff: float = 0.0
predictor_window_size: int = 0
predictor_step_size: int = 1
diff --git a/nemo/collections/asr/parts/submodules/adapters/multi_head_attention_adapter_module.py b/nemo/collections/asr/parts/submodules/adapters/multi_head_attention_adapter_module.py
--- a/nemo/collections/asr/parts/submodules/adapters/multi_head_attention_adapter_module.py
+++ b/nemo/collections/asr/parts/submodules/adapters/multi_head_attention_adapter_module.py
@@ -13,7 +13,7 @@
# limitations under the License.
import math
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import Any, Optional
import torch
@@ -183,7 +183,7 @@ class MultiHeadAttentionAdapterConfig:
n_feat: int
dropout_rate: float = 0.0
proj_dim: Optional[int] = None
- adapter_strategy: Optional[Any] = MHAResidualAddAdapterStrategyConfig()
+ adapter_strategy: Optional[Any] = field(default_factory=lambda: MHAResidualAddAdapterStrategyConfig())
_target_: str = "{0}.{1}".format(MultiHeadAttentionAdapter.__module__, MultiHeadAttentionAdapter.__name__)
@@ -287,7 +287,7 @@ class RelPositionMultiHeadAttentionAdapterConfig:
n_feat: int
dropout_rate: float = 0.0
proj_dim: Optional[int] = None
- adapter_strategy: Optional[Any] = MHAResidualAddAdapterStrategyConfig()
+ adapter_strategy: Optional[Any] = field(default_factory=lambda: MHAResidualAddAdapterStrategyConfig())
_target_: str = "{0}.{1}".format(
RelPositionMultiHeadAttentionAdapter.__module__, RelPositionMultiHeadAttentionAdapter.__name__
)
@@ -336,7 +336,9 @@ class PositionalEncodingAdapterConfig:
d_model: int
max_len: int = 5000
xscale: float = 1.0
- adapter_strategy: Optional[Any] = adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
+ adapter_strategy: Optional[Any] = field(
+ default_factory=lambda: adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
+ )
_target_: str = "{0}.{1}".format(PositionalEncodingAdapter.__module__, PositionalEncodingAdapter.__name__)
@@ -378,5 +380,7 @@ class RelPositionalEncodingAdapterConfig:
d_model: int
max_len: int = 5000
xscale: float = 1.0
- adapter_strategy: Optional[Any] = adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
+ adapter_strategy: Optional[Any] = field(
+ default_factory=lambda: adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
+ )
_target_: str = "{0}.{1}".format(RelPositionalEncodingAdapter.__module__, RelPositionalEncodingAdapter.__name__)
diff --git a/nemo/collections/asr/parts/submodules/ctc_beam_decoding.py b/nemo/collections/asr/parts/submodules/ctc_beam_decoding.py
--- a/nemo/collections/asr/parts/submodules/ctc_beam_decoding.py
+++ b/nemo/collections/asr/parts/submodules/ctc_beam_decoding.py
@@ -14,7 +14,7 @@
import math
import os
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import List, Optional, Tuple, Union
import torch
@@ -602,5 +602,5 @@ class BeamCTCInferConfig:
beam_beta: float = 0.0
kenlm_path: Optional[str] = None
- flashlight_cfg: Optional[FlashlightConfig] = FlashlightConfig()
- pyctcdecode_cfg: Optional[PyCTCDecodeConfig] = PyCTCDecodeConfig()
+ flashlight_cfg: Optional[FlashlightConfig] = field(default_factory=lambda: FlashlightConfig())
+ pyctcdecode_cfg: Optional[PyCTCDecodeConfig] = field(default_factory=lambda: PyCTCDecodeConfig())
diff --git a/nemo/collections/asr/parts/submodules/ctc_greedy_decoding.py b/nemo/collections/asr/parts/submodules/ctc_greedy_decoding.py
--- a/nemo/collections/asr/parts/submodules/ctc_greedy_decoding.py
+++ b/nemo/collections/asr/parts/submodules/ctc_greedy_decoding.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import List, Optional
import torch
@@ -253,7 +253,9 @@ class GreedyCTCInferConfig:
preserve_alignments: bool = False
compute_timestamps: bool = False
preserve_frame_confidence: bool = False
- confidence_measure_cfg: Optional[ConfidenceMeasureConfig] = ConfidenceMeasureConfig()
+ confidence_measure_cfg: Optional[ConfidenceMeasureConfig] = field(
+ default_factory=lambda: ConfidenceMeasureConfig()
+ )
confidence_method_cfg: str = "DEPRECATED"
def __post_init__(self):
diff --git a/nemo/collections/asr/parts/submodules/rnnt_greedy_decoding.py b/nemo/collections/asr/parts/submodules/rnnt_greedy_decoding.py
--- a/nemo/collections/asr/parts/submodules/rnnt_greedy_decoding.py
+++ b/nemo/collections/asr/parts/submodules/rnnt_greedy_decoding.py
@@ -26,7 +26,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import List, Optional, Tuple, Union
import numpy as np
@@ -2185,7 +2185,9 @@ class GreedyRNNTInferConfig:
max_symbols_per_step: Optional[int] = 10
preserve_alignments: bool = False
preserve_frame_confidence: bool = False
- confidence_measure_cfg: Optional[ConfidenceMeasureConfig] = ConfidenceMeasureConfig()
+ confidence_measure_cfg: Optional[ConfidenceMeasureConfig] = field(
+ default_factory=lambda: ConfidenceMeasureConfig()
+ )
confidence_method_cfg: str = "DEPRECATED"
def __post_init__(self):
@@ -2217,7 +2219,9 @@ class GreedyBatchedRNNTInferConfig:
max_symbols_per_step: Optional[int] = 10
preserve_alignments: bool = False
preserve_frame_confidence: bool = False
- confidence_measure_cfg: Optional[ConfidenceMeasureConfig] = ConfidenceMeasureConfig()
+ confidence_measure_cfg: Optional[ConfidenceMeasureConfig] = field(
+ default_factory=lambda: ConfidenceMeasureConfig()
+ )
confidence_method_cfg: str = "DEPRECATED"
def __post_init__(self):
diff --git a/nemo/collections/asr/parts/utils/asr_confidence_utils.py b/nemo/collections/asr/parts/utils/asr_confidence_utils.py
--- a/nemo/collections/asr/parts/utils/asr_confidence_utils.py
+++ b/nemo/collections/asr/parts/utils/asr_confidence_utils.py
@@ -14,7 +14,7 @@
import math
from abc import ABC, abstractmethod
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from functools import partial
from typing import List, Optional
@@ -181,7 +181,7 @@ class ConfidenceConfig:
preserve_word_confidence: bool = False
exclude_blank: bool = True
aggregation: str = "min"
- measure_cfg: ConfidenceMeasureConfig = ConfidenceMeasureConfig()
+ measure_cfg: ConfidenceMeasureConfig = field(default_factory=lambda: ConfidenceMeasureConfig())
method_cfg: str = "DEPRECATED"
def __post_init__(self):
diff --git a/nemo/collections/common/parts/adapter_modules.py b/nemo/collections/common/parts/adapter_modules.py
--- a/nemo/collections/common/parts/adapter_modules.py
+++ b/nemo/collections/common/parts/adapter_modules.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass, is_dataclass
+from dataclasses import dataclass, field, is_dataclass
from typing import Any, Optional
from hydra.utils import instantiate
@@ -160,5 +160,7 @@ class LinearAdapterConfig:
activation: str = 'swish'
norm_position: str = 'pre'
dropout: float = 0.0
- adapter_strategy: Optional[Any] = adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
+ adapter_strategy: Optional[Any] = field(
+ default_factory=lambda: adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
+ )
_target_: str = "{0}.{1}".format(LinearAdapter.__module__, LinearAdapter.__name__)
diff --git a/nemo/collections/common/tokenizers/en_ja_tokenizers.py b/nemo/collections/common/tokenizers/en_ja_tokenizers.py
--- a/nemo/collections/common/tokenizers/en_ja_tokenizers.py
+++ b/nemo/collections/common/tokenizers/en_ja_tokenizers.py
@@ -14,11 +14,19 @@
import re
from typing import List
-import ipadic
-import MeCab
from pangu import spacing
from sacremoses import MosesDetokenizer, MosesPunctNormalizer, MosesTokenizer
+try:
+ import ipadic
+ import MeCab
+
+ HAVE_MECAB = True
+ HAVE_IPADIC = True
+except (ImportError, ModuleNotFoundError):
+ HAVE_MECAB = False
+ HAVE_IPADIC = False
+
class EnJaProcessor:
"""
@@ -67,6 +75,9 @@ class JaMecabProcessor:
"""
def __init__(self):
+ if not HAVE_MECAB or not HAVE_IPADIC:
+ raise ImportError("Please ensure that you have installed `MeCab` and `ipadic` to use JaMecabProcessor")
+
self.mecab_tokenizer = MeCab.Tagger(ipadic.MECAB_ARGS + " -Owakati")
def detokenize(self, text: List[str]) -> str:
diff --git a/nemo/collections/nlp/models/machine_translation/mt_enc_dec_config.py b/nemo/collections/nlp/models/machine_translation/mt_enc_dec_config.py
--- a/nemo/collections/nlp/models/machine_translation/mt_enc_dec_config.py
+++ b/nemo/collections/nlp/models/machine_translation/mt_enc_dec_config.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import Any, Optional, Tuple
from omegaconf.omegaconf import MISSING
@@ -46,7 +46,7 @@ class MTOptimConfig(OptimConfig):
lr: float = 1e-3
betas: Tuple[float, float] = (0.9, 0.98)
weight_decay: float = 0.0
- sched: Optional[MTSchedConfig] = MTSchedConfig()
+ sched: Optional[MTSchedConfig] = field(default_factory=lambda: MTSchedConfig())
@dataclass
@@ -74,70 +74,80 @@ class MTEncDecModelConfig(EncDecNLPModelConfig):
decoder_tokenizer: Any = MISSING
decoder: Any = MISSING
- head: TokenClassifierConfig = TokenClassifierConfig(log_softmax=True)
+ head: TokenClassifierConfig = field(default_factory=lambda: TokenClassifierConfig(log_softmax=True))
# dataset configurations
- train_ds: Optional[TranslationDataConfig] = TranslationDataConfig(
- src_file_name=MISSING,
- tgt_file_name=MISSING,
- tokens_in_batch=512,
- clean=True,
- shuffle=True,
- cache_ids=False,
- use_cache=False,
+ train_ds: Optional[TranslationDataConfig] = field(
+ default_factory=lambda: TranslationDataConfig(
+ src_file_name=MISSING,
+ tgt_file_name=MISSING,
+ tokens_in_batch=512,
+ clean=True,
+ shuffle=True,
+ cache_ids=False,
+ use_cache=False,
+ )
)
- validation_ds: Optional[TranslationDataConfig] = TranslationDataConfig(
- src_file_name=MISSING,
- tgt_file_name=MISSING,
- tokens_in_batch=512,
- clean=False,
- shuffle=False,
- cache_ids=False,
- use_cache=False,
+ validation_ds: Optional[TranslationDataConfig] = field(
+ default_factory=lambda: TranslationDataConfig(
+ src_file_name=MISSING,
+ tgt_file_name=MISSING,
+ tokens_in_batch=512,
+ clean=False,
+ shuffle=False,
+ cache_ids=False,
+ use_cache=False,
+ )
)
- test_ds: Optional[TranslationDataConfig] = TranslationDataConfig(
- src_file_name=MISSING,
- tgt_file_name=MISSING,
- tokens_in_batch=512,
- clean=False,
- shuffle=False,
- cache_ids=False,
- use_cache=False,
+ test_ds: Optional[TranslationDataConfig] = field(
+ default_factory=lambda: TranslationDataConfig(
+ src_file_name=MISSING,
+ tgt_file_name=MISSING,
+ tokens_in_batch=512,
+ clean=False,
+ shuffle=False,
+ cache_ids=False,
+ use_cache=False,
+ )
)
- optim: Optional[OptimConfig] = MTOptimConfig()
+ optim: Optional[OptimConfig] = field(default_factory=lambda: MTOptimConfig())
@dataclass
class AAYNBaseConfig(MTEncDecModelConfig):
# Attention is All You Need Base Configuration
- encoder_tokenizer: TokenizerConfig = TokenizerConfig(library='yttm')
- decoder_tokenizer: TokenizerConfig = TokenizerConfig(library='yttm')
-
- encoder: NeMoTransformerEncoderConfig = NeMoTransformerEncoderConfig(
- library='nemo',
- model_name=None,
- pretrained=False,
- hidden_size=512,
- inner_size=2048,
- num_layers=6,
- num_attention_heads=8,
- ffn_dropout=0.1,
- attn_score_dropout=0.1,
- attn_layer_dropout=0.1,
+ encoder_tokenizer: TokenizerConfig = field(default_factory=lambda: TokenizerConfig(library='yttm'))
+ decoder_tokenizer: TokenizerConfig = field(default_factory=lambda: TokenizerConfig(library='yttm'))
+
+ encoder: NeMoTransformerEncoderConfig = field(
+ default_factory=lambda: NeMoTransformerEncoderConfig(
+ library='nemo',
+ model_name=None,
+ pretrained=False,
+ hidden_size=512,
+ inner_size=2048,
+ num_layers=6,
+ num_attention_heads=8,
+ ffn_dropout=0.1,
+ attn_score_dropout=0.1,
+ attn_layer_dropout=0.1,
+ )
)
- decoder: NeMoTransformerConfig = NeMoTransformerConfig(
- library='nemo',
- model_name=None,
- pretrained=False,
- hidden_size=512,
- inner_size=2048,
- num_layers=6,
- num_attention_heads=8,
- ffn_dropout=0.1,
- attn_score_dropout=0.1,
- attn_layer_dropout=0.1,
+ decoder: NeMoTransformerConfig = field(
+ default_factory=lambda: NeMoTransformerConfig(
+ library='nemo',
+ model_name=None,
+ pretrained=False,
+ hidden_size=512,
+ inner_size=2048,
+ num_layers=6,
+ num_attention_heads=8,
+ ffn_dropout=0.1,
+ attn_score_dropout=0.1,
+ attn_layer_dropout=0.1,
+ )
)
@@ -150,32 +160,36 @@ class MTBottleneckModelConfig(AAYNBaseConfig):
recon_per_token: bool = True
log_timing: bool = True
- encoder: NeMoTransformerBottleneckEncoderConfig = NeMoTransformerBottleneckEncoderConfig(
- library='nemo',
- model_name=None,
- pretrained=False,
- hidden_size=512,
- inner_size=2048,
- num_layers=6,
- num_attention_heads=8,
- ffn_dropout=0.1,
- attn_score_dropout=0.1,
- attn_layer_dropout=0.1,
- arch='seq2seq',
- hidden_steps=32,
- hidden_blocks=1,
- hidden_init_method='params',
+ encoder: NeMoTransformerBottleneckEncoderConfig = field(
+ default_factory=lambda: NeMoTransformerBottleneckEncoderConfig(
+ library='nemo',
+ model_name=None,
+ pretrained=False,
+ hidden_size=512,
+ inner_size=2048,
+ num_layers=6,
+ num_attention_heads=8,
+ ffn_dropout=0.1,
+ attn_score_dropout=0.1,
+ attn_layer_dropout=0.1,
+ arch='seq2seq',
+ hidden_steps=32,
+ hidden_blocks=1,
+ hidden_init_method='params',
+ )
)
- decoder: NeMoTransformerBottleneckDecoderConfig = NeMoTransformerBottleneckDecoderConfig(
- library='nemo',
- model_name=None,
- pretrained=False,
- inner_size=2048,
- num_layers=6,
- num_attention_heads=8,
- ffn_dropout=0.1,
- attn_score_dropout=0.1,
- attn_layer_dropout=0.1,
- arch='seq2seq',
+ decoder: NeMoTransformerBottleneckDecoderConfig = field(
+ default_factory=lambda: NeMoTransformerBottleneckDecoderConfig(
+ library='nemo',
+ model_name=None,
+ pretrained=False,
+ inner_size=2048,
+ num_layers=6,
+ num_attention_heads=8,
+ ffn_dropout=0.1,
+ attn_score_dropout=0.1,
+ attn_layer_dropout=0.1,
+ arch='seq2seq',
+ )
)
diff --git a/nemo/collections/nlp/models/token_classification/punctuation_capitalization_config.py b/nemo/collections/nlp/models/token_classification/punctuation_capitalization_config.py
--- a/nemo/collections/nlp/models/token_classification/punctuation_capitalization_config.py
+++ b/nemo/collections/nlp/models/token_classification/punctuation_capitalization_config.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import Any, Dict, Optional
from omegaconf.omegaconf import MISSING, DictConfig, OmegaConf, open_dict
@@ -215,13 +215,15 @@ class PunctuationCapitalizationModelConfig:
This config is a part of :class:`~PunctuationCapitalizationConfig`.
"""
- class_labels: ClassLabelsConfig = ClassLabelsConfig()
+ class_labels: ClassLabelsConfig = field(default_factory=lambda: ClassLabelsConfig())
"""A mandatory parameter containing a dictionary with names of label id files used in .nemo checkpoints.
These file names can also be used for passing label vocabularies to the model. If you wish to use ``class_labels``
for passing vocabularies, please provide path to vocabulary files in
``model.common_dataset_parameters.label_vocab_dir`` parameter."""
- common_dataset_parameters: Optional[CommonDatasetParametersConfig] = CommonDatasetParametersConfig()
+ common_dataset_parameters: Optional[CommonDatasetParametersConfig] = field(
+ default_factory=lambda: CommonDatasetParametersConfig()
+ )
"""Label ids and loss mask information information."""
train_ds: Optional[PunctuationCapitalizationTrainDataConfig] = None
@@ -233,16 +235,16 @@ class PunctuationCapitalizationModelConfig:
test_ds: Optional[PunctuationCapitalizationEvalDataConfig] = None
"""A configuration for creating test datasets and data loaders."""
- punct_head: HeadConfig = HeadConfig()
+ punct_head: HeadConfig = field(default_factory=lambda: HeadConfig())
"""A configuration for creating punctuation MLP head that is applied to a language model outputs."""
- capit_head: HeadConfig = HeadConfig()
+ capit_head: HeadConfig = field(default_factory=lambda: HeadConfig())
"""A configuration for creating capitalization MLP head that is applied to a language model outputs."""
- tokenizer: Any = TokenizerConfig()
+ tokenizer: Any = field(default_factory=lambda: TokenizerConfig())
"""A configuration for source text tokenizer."""
- language_model: LanguageModelConfig = LanguageModelConfig()
+ language_model: LanguageModelConfig = field(default_factory=lambda: LanguageModelConfig())
"""A configuration of a BERT-like language model which serves as a model body."""
optim: Optional[Any] = None
@@ -311,22 +313,30 @@ class PunctuationCapitalizationConfig(NemoConfig):
do_testing: bool = False
"""Whether ot perform testing of the model."""
- model: PunctuationCapitalizationModelConfig = PunctuationCapitalizationModelConfig()
+ model: PunctuationCapitalizationModelConfig = field(default_factory=lambda: PunctuationCapitalizationModelConfig())
"""A configuration for the
:class:`~nemo.collections.nlp.models.token_classification.punctuation_capitalization_model.PunctuationCapitalizationModel`
model."""
- trainer: Optional[TrainerConfig] = TrainerConfig()
+ trainer: Optional[TrainerConfig] = field(default_factory=lambda: TrainerConfig())
"""Contains ``Trainer`` Lightning class constructor parameters."""
- exp_manager: Optional[ExpManagerConfig] = ExpManagerConfig(name=name, files_to_copy=[])
+ exp_manager: Optional[ExpManagerConfig] = field(
+ default_factory=lambda: ExpManagerConfig(name=None, files_to_copy=[])
+ )
"""A configuration with various NeMo training options such as output directories, resuming from checkpoint,
tensorboard and W&B logging, and so on. For possible options see :ref:`exp-manager-label`."""
+ def __post_init__(self):
+ if self.exp_manager is not None:
+ self.exp_manager.name = self.name
+
@dataclass
class PunctuationCapitalizationLexicalAudioConfig(PunctuationCapitalizationConfig):
- model: PunctuationCapitalizationLexicalAudioModelConfig = PunctuationCapitalizationLexicalAudioModelConfig()
+ model: PunctuationCapitalizationLexicalAudioModelConfig = field(
+ default_factory=lambda: PunctuationCapitalizationLexicalAudioModelConfig()
+ )
def is_legacy_model_config(model_cfg: DictConfig) -> bool:
diff --git a/nemo/collections/nlp/modules/common/megatron/megatron_encoders.py b/nemo/collections/nlp/modules/common/megatron/megatron_encoders.py
--- a/nemo/collections/nlp/modules/common/megatron/megatron_encoders.py
+++ b/nemo/collections/nlp/modules/common/megatron/megatron_encoders.py
@@ -13,7 +13,6 @@
# limitations under the License.
"""Transformer based language model."""
-from MeCab import Model
from nemo.collections.nlp.modules.common.megatron.megatron_perceiver_encoders import MegatronPerceiverEncoderModule
from nemo.collections.nlp.modules.common.megatron.megatron_transformer_encoder import MegatronTransformerEncoderModule
from nemo.collections.nlp.modules.common.megatron.retrieval_transformer import (
@@ -25,6 +24,13 @@
scaled_init_method_normal,
)
+try:
+ from MeCab import Model
+
+ HAVE_MECAB = True
+except (ImportError, ModuleNotFoundError):
+ HAVE_MECAB = False
+
try:
from apex.transformer.enums import AttnMaskType, ModelType
diff --git a/nemo/collections/tts/models/fastpitch.py b/nemo/collections/tts/models/fastpitch.py
--- a/nemo/collections/tts/models/fastpitch.py
+++ b/nemo/collections/tts/models/fastpitch.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import contextlib
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from pathlib import Path
from typing import List, Optional
@@ -70,12 +70,12 @@ class TextTokenizer:
apostrophe: bool = True
pad_with_space: bool = True
add_blank_at: bool = True
- g2p: G2PConfig = G2PConfig()
+ g2p: G2PConfig = field(default_factory=lambda: G2PConfig())
@dataclass
class TextTokenizerConfig:
- text_tokenizer: TextTokenizer = TextTokenizer()
+ text_tokenizer: TextTokenizer = field(default_factory=lambda: TextTokenizer())
class FastPitchModel(SpectrogramGenerator, Exportable, FastPitchAdapterModelMixin):
diff --git a/nemo/collections/tts/models/tacotron2.py b/nemo/collections/tts/models/tacotron2.py
--- a/nemo/collections/tts/models/tacotron2.py
+++ b/nemo/collections/tts/models/tacotron2.py
@@ -13,7 +13,7 @@
# limitations under the License.
import contextlib
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional
import torch
@@ -53,7 +53,7 @@ class Preprocessor:
@dataclass
class Tacotron2Config:
- preprocessor: Preprocessor = Preprocessor()
+ preprocessor: Preprocessor = field(default_factory=lambda: Preprocessor())
encoder: Dict[Any, Any] = MISSING
decoder: Dict[Any, Any] = MISSING
postnet: Dict[Any, Any] = MISSING
diff --git a/nemo/core/config/modelPT.py b/nemo/core/config/modelPT.py
--- a/nemo/core/config/modelPT.py
+++ b/nemo/core/config/modelPT.py
@@ -58,11 +58,13 @@ class HydraConfig:
class NemoConfig:
name: str = MISSING
model: ModelConfig = MISSING
- trainer: config.TrainerConfig = config.TrainerConfig(
- strategy="ddp", enable_checkpointing=False, logger=False, log_every_n_steps=1, accelerator='gpu'
+ trainer: config.TrainerConfig = field(
+ default_factory=lambda: config.TrainerConfig(
+ strategy="ddp", enable_checkpointing=False, logger=False, log_every_n_steps=1, accelerator='gpu'
+ )
)
- exp_manager: Optional[Any] = exp_manager.ExpManagerConfig()
- hydra: HydraConfig = HydraConfig()
+ exp_manager: Optional[Any] = field(default_factory=lambda: exp_manager.ExpManagerConfig())
+ hydra: HydraConfig = field(default_factory=lambda: HydraConfig())
class ModelConfigBuilder:
diff --git a/nemo/utils/exp_manager.py b/nemo/utils/exp_manager.py
--- a/nemo/utils/exp_manager.py
+++ b/nemo/utils/exp_manager.py
@@ -18,7 +18,7 @@
import sys
import time
import warnings
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from datetime import timedelta
from pathlib import Path
from shutil import copy, move
@@ -146,28 +146,30 @@ class ExpManagerConfig:
create_wandb_logger: Optional[bool] = False
wandb_logger_kwargs: Optional[Dict[Any, Any]] = None
create_mlflow_logger: Optional[bool] = False
- mlflow_logger_kwargs: Optional[MLFlowParams] = MLFlowParams()
+ mlflow_logger_kwargs: Optional[MLFlowParams] = field(default_factory=lambda: MLFlowParams())
create_dllogger_logger: Optional[bool] = False
- dllogger_logger_kwargs: Optional[DLLoggerParams] = DLLoggerParams()
+ dllogger_logger_kwargs: Optional[DLLoggerParams] = field(default_factory=lambda: DLLoggerParams())
create_clearml_logger: Optional[bool] = False
- clearml_logger_kwargs: Optional[ClearMLParams] = ClearMLParams()
+ clearml_logger_kwargs: Optional[ClearMLParams] = field(default_factory=lambda: ClearMLParams())
# Checkpointing parameters
create_checkpoint_callback: Optional[bool] = True
- checkpoint_callback_params: Optional[CallbackParams] = CallbackParams()
+ checkpoint_callback_params: Optional[CallbackParams] = field(default_factory=lambda: CallbackParams())
create_early_stopping_callback: Optional[bool] = False
- early_stopping_callback_params: Optional[EarlyStoppingParams] = EarlyStoppingParams()
+ early_stopping_callback_params: Optional[EarlyStoppingParams] = field(
+ default_factory=lambda: EarlyStoppingParams()
+ )
create_preemption_callback: Optional[bool] = True
# Additional exp_manager arguments
files_to_copy: Optional[List[str]] = None
# logs timing of train/val/test steps
log_step_timing: Optional[bool] = True
- step_timing_kwargs: Optional[StepTimingParams] = StepTimingParams()
+ step_timing_kwargs: Optional[StepTimingParams] = field(default_factory=lambda: StepTimingParams())
# Configures creation of log files for different ranks
log_local_rank_0_only: Optional[bool] = False
log_global_rank_0_only: Optional[bool] = False
# disable initial validation when resuming from a checkpoint saved during validation
disable_validation_on_resume: Optional[bool] = True
- ema: Optional[EMAParams] = EMAParams()
+ ema: Optional[EMAParams] = field(default_factory=lambda: EMAParams())
# Wall clock time limit
max_time_per_run: Optional[str] = None
# time to sleep non 0 ranks during initialization
diff --git a/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram.py b/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram.py
--- a/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram.py
+++ b/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram.py
@@ -112,14 +112,14 @@ class EvalBeamSearchNGramConfig:
beam_beta: List[float] = field(default_factory=lambda: [0.0]) # The beta parameter or list of the betas for the beam search decoding
decoding_strategy: str = "beam"
- decoding: ctc_beam_decoding.BeamCTCInferConfig = ctc_beam_decoding.BeamCTCInferConfig(beam_size=128)
+ decoding: ctc_beam_decoding.BeamCTCInferConfig = field(default_factory=lambda: ctc_beam_decoding.BeamCTCInferConfig(beam_size=128))
- text_processing: Optional[TextProcessingConfig] = TextProcessingConfig(
+ text_processing: Optional[TextProcessingConfig] = field(default_factory=lambda: TextProcessingConfig(
punctuation_marks = ".,?",
separate_punctuation = False,
do_lowercase = False,
rm_punctuation = False,
- )
+ ))
# fmt: on
diff --git a/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram_transducer.py b/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram_transducer.py
--- a/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram_transducer.py
+++ b/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram_transducer.py
@@ -115,7 +115,7 @@ class EvalBeamSearchNGramConfig:
hat_subtract_ilm: bool = False
hat_ilm_weight: List[float] = field(default_factory=lambda: [0.0])
- decoding: rnnt_beam_decoding.BeamRNNTInferConfig = rnnt_beam_decoding.BeamRNNTInferConfig(beam_size=128)
+ decoding: rnnt_beam_decoding.BeamRNNTInferConfig = field(default_factory=lambda: rnnt_beam_decoding.BeamRNNTInferConfig(beam_size=128))
# fmt: on
diff --git a/scripts/confidence_ensembles/build_ensemble.py b/scripts/confidence_ensembles/build_ensemble.py
--- a/scripts/confidence_ensembles/build_ensemble.py
+++ b/scripts/confidence_ensembles/build_ensemble.py
@@ -75,7 +75,7 @@
import sys
import tempfile
from copy import deepcopy
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from pathlib import Path
from typing import Dict, List, Optional, Tuple
@@ -209,19 +209,23 @@ class BuildEnsembleConfig:
random_seed: int = 0 # for reproducibility
# default confidence, can override
- confidence: ConfidenceConfig = ConfidenceConfig(
- # we keep frame confidences and apply aggregation manually to get full-utterance confidence
- preserve_frame_confidence=True,
- exclude_blank=True,
- aggregation="mean",
- measure_cfg=ConfidenceMeasureConfig(name="entropy", entropy_type="renyi", alpha=0.25, entropy_norm="lin",),
+ confidence: ConfidenceConfig = field(
+ default_factory=lambda: ConfidenceConfig(
+ # we keep frame confidences and apply aggregation manually to get full-utterance confidence
+ preserve_frame_confidence=True,
+ exclude_blank=True,
+ aggregation="mean",
+ measure_cfg=ConfidenceMeasureConfig(name="entropy", entropy_type="renyi", alpha=0.25, entropy_norm="lin",),
+ )
)
temperature: float = 1.0
# this is optional, but can be used to change any aspect of the transcription
# config, such as batch size or amp usage. Note that model, data and confidence
# will be overriden by this script
- transcription: transcribe_speech.TranscriptionConfig = transcribe_speech.TranscriptionConfig()
+ transcription: transcribe_speech.TranscriptionConfig = field(
+ default_factory=lambda: transcribe_speech.TranscriptionConfig()
+ )
# set to True to tune the confidence.
# requires dev manifests to be specified for each model
@@ -229,12 +233,14 @@ class BuildEnsembleConfig:
# used to specify what to tune over. By default runs tuning over some
# reasonalbe grid, so that it does not take forever.
# Can be changed as needed
- tune_confidence_config: TuneConfidenceConfig = TuneConfidenceConfig()
+ tune_confidence_config: TuneConfidenceConfig = field(default_factory=lambda: TuneConfidenceConfig())
# very fast to tune and can be important in case of imbalanced datasets
# will automatically set to False if dev data is not available
tune_logistic_regression: bool = True
- tune_logistic_regression_config: TuneLogisticRegressionConfig = TuneLogisticRegressionConfig()
+ tune_logistic_regression_config: TuneLogisticRegressionConfig = field(
+ default_factory=lambda: TuneLogisticRegressionConfig()
+ )
def __post_init__(self):
"""Checking that if any dev data is provided, all are provided.
diff --git a/scripts/speech_recognition/confidence/benchmark_asr_confidence.py b/scripts/speech_recognition/confidence/benchmark_asr_confidence.py
--- a/scripts/speech_recognition/confidence/benchmark_asr_confidence.py
+++ b/scripts/speech_recognition/confidence/benchmark_asr_confidence.py
@@ -14,7 +14,7 @@
import json
import os
-from dataclasses import dataclass, is_dataclass
+from dataclasses import dataclass, field, is_dataclass
from pathlib import Path
from typing import Optional
@@ -124,7 +124,9 @@ class ConfidenceBenchmarkingConfig:
# Confidence configs
target_level: str = "auto" # Choices: "word", "token", "auto" (for both word- and token-level confidence)
- confidence_cfg: ConfidenceConfig = ConfidenceConfig(preserve_word_confidence=True, preserve_token_confidence=True)
+ confidence_cfg: ConfidenceConfig = field(
+ default_factory=lambda: ConfidenceConfig(preserve_word_confidence=True, preserve_token_confidence=True)
+ )
grid_params: Optional[str] = None # a dictionary with lists of parameters to iteratively benchmark on
diff --git a/scripts/speech_recognition/convert_to_tarred_audio_dataset.py b/scripts/speech_recognition/convert_to_tarred_audio_dataset.py
--- a/scripts/speech_recognition/convert_to_tarred_audio_dataset.py
+++ b/scripts/speech_recognition/convert_to_tarred_audio_dataset.py
@@ -202,7 +202,7 @@ class ASRTarredDatasetMetadata:
num_samples_per_shard: Optional[int] = None
is_concatenated_manifest: bool = False
- dataset_config: Optional[ASRTarredDatasetConfig] = ASRTarredDatasetConfig()
+ dataset_config: Optional[ASRTarredDatasetConfig] = field(default_factory=lambda: ASRTarredDatasetConfig())
history: Optional[List[Any]] = field(default_factory=lambda: [])
def __post_init__(self):
diff --git a/tools/nemo_forced_aligner/align.py b/tools/nemo_forced_aligner/align.py
--- a/tools/nemo_forced_aligner/align.py
+++ b/tools/nemo_forced_aligner/align.py
@@ -149,8 +149,8 @@ class AlignmentConfig:
# Output file configs
save_output_file_formats: List[str] = field(default_factory=lambda: ["ctm", "ass"])
- ctm_file_config: CTMFileConfig = CTMFileConfig()
- ass_file_config: ASSFileConfig = ASSFileConfig()
+ ctm_file_config: CTMFileConfig = field(default_factory=lambda: CTMFileConfig())
+ ass_file_config: ASSFileConfig = field(default_factory=lambda: ASSFileConfig())
@hydra_runner(config_name="AlignmentConfig", schema=AlignmentConfig)
</patch>
|
diff --git a/tests/collections/asr/test_text_to_text_dataset.py b/tests/collections/asr/test_text_to_text_dataset.py
--- a/tests/collections/asr/test_text_to_text_dataset.py
+++ b/tests/collections/asr/test_text_to_text_dataset.py
@@ -15,7 +15,7 @@
import json
import multiprocessing
import os
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from pathlib import Path
import pytest
@@ -110,7 +110,7 @@ class TextTokenizerCfg:
apostrophe: bool = True
pad_with_space: bool = True
add_blank_at: bool = True
- g2p: G2PConfig = G2PConfig()
+ g2p: G2PConfig = field(default_factory=lambda: G2PConfig())
config = OmegaConf.create(OmegaConf.to_yaml(TextTokenizerCfg()))
return instantiate(config)
|
1.0
| |||
NVIDIA__NeMo-7616
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Ubuntu 22.04 Python 3.11 [asr]: multiple errors `dataclasses ValueError: mutable default * for field * is not allowed: use default_factory`
**Describe the bug**
After installing the latest stable `1.19.1` from PyPI, or the latest current commit with `[asr]` extras, I'm getting this error from `hydra-core==1.2.0` when trying to import `nemo.collections.asr`:
```
Traceback (most recent call last):
File "/home/alex/T7/src/speaker_verification/nemo_speaker/test.py", line 5, in <module>
import nemo.collections.asr as nemo_asr
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/__init__.py", line 15, in <module>
from nemo.collections.asr import data, losses, models, modules
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/losses/__init__.py", line 15, in <module>
from nemo.collections.asr.losses.angularloss import AngularSoftmaxLoss
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/losses/angularloss.py", line 18, in <module>
from nemo.core.classes import Loss, Typing, typecheck
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/core/__init__.py", line 16, in <module>
from nemo.core.classes import *
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/core/classes/__init__.py", line 16, in <module>
import hydra
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/__init__.py", line 5, in <module>
from hydra import utils
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/utils.py", line 8, in <module>
import hydra._internal.instantiate._instantiate2
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 12, in <module>
from hydra._internal.utils import _locate
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/_internal/utils.py", line 18, in <module>
from hydra.core.utils import get_valid_filename, validate_config_path
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/core/utils.py", line 20, in <module>
from hydra.core.hydra_config import HydraConfig
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/core/hydra_config.py", line 6, in <module>
from hydra.conf import HydraConf
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/conf/__init__.py", line 46, in <module>
class JobConf:
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/conf/__init__.py", line 75, in JobConf
@dataclass
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1230, in dataclass
return wrap(cls)
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1220, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 958, in _process_class
cls_fields.append(_get_field(cls, name, type, kw_only))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 815, in _get_field
raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'hydra.conf.JobConf.JobConfig.OverrideDirname'> for field override_dirname is not allowed: use default_factory
```
If I then manually upgrade `hydra-core` to the current latest (`1.3.2`), I get similar errors from `nemo-toolkit`:
```
Traceback (most recent call last):
File "/home/alex/T7/src/speaker_verification/nemo_speaker/test.py", line 5, in <module>
import nemo.collections.asr as nemo_asr
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/__init__.py", line 15, in <module>
from nemo.collections.asr import data, losses, models, modules
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/losses/__init__.py", line 16, in <module>
from nemo.collections.asr.losses.audio_losses import SDRLoss
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/losses/audio_losses.py", line 21, in <module>
from nemo.collections.asr.parts.preprocessing.features import make_seq_mask_like
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/__init__.py", line 16, in <module>
from nemo.collections.asr.parts.preprocessing.features import FeaturizerFactory, FilterbankFeatures, WaveformFeaturizer
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/features.py", line 44, in <module>
from nemo.collections.asr.parts.preprocessing.perturb import AudioAugmentor
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/perturb.py", line 50, in <module>
from nemo.collections.common.parts.preprocessing import collections, parsers
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/common/__init__.py", line 16, in <module>
from nemo.collections.common import data, losses, parts, tokenizers
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/common/parts/__init__.py", line 15, in <module>
from nemo.collections.common.parts.adapter_modules import LinearAdapter, LinearAdapterConfig
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/common/parts/adapter_modules.py", line 147, in <module>
@dataclass
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1230, in dataclass
return wrap(cls)
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1220, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 958, in _process_class
cls_fields.append(_get_field(cls, name, type, kw_only))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 815, in _get_field
raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'nemo.core.classes.mixins.adapter_mixin_strategies.ResidualAddAdapterStrategyConfig'> for field adapter_strategy is not allowed: use default_factory
```
It's easy to fix with a patch like this:
```
--- nemo/collections/common/parts/adapter_modules.py.orig 2023-08-04 13:55:53.464534800 +0200
+++ nemo/collections/common/parts/adapter_modules.py 2023-08-04 14:05:45.579537700 +0200
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass, is_dataclass
+from dataclasses import dataclass, is_dataclass, field
from typing import Any, Optional
from hydra.utils import instantiate
@@ -151,5 +151,5 @@ class LinearAdapterConfig:
activation: str = 'swish'
norm_position: str = 'pre'
dropout: float = 0.0
- adapter_strategy: Optional[Any] = adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
+ adapter_strategy: Optional[Any] = field(default_factory=adapter_mixin_strategies.ResidualAddAdapterStrategyConfig)
_target_: str = "{0}.{1}".format(LinearAdapter.__module__, LinearAdapter.__name__)
```
However, another error of the same kind then comes up:
```
Traceback (most recent call last):
File "/home/alex/T7/src/speaker_verification/nemo_speaker/test.py", line 5, in <module>
import nemo.collections.asr as nemo_asr
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/__init__.py", line 15, in <module>
from nemo.collections.asr import data, losses, models, modules
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/models/__init__.py", line 18, in <module>
from nemo.collections.asr.models.clustering_diarizer import ClusteringDiarizer
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/models/clustering_diarizer.py", line 29, in <module>
from nemo.collections.asr.metrics.der import score_labels
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/metrics/der.py", line 24, in <module>
from nemo.collections.asr.metrics.wer import word_error_rate
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/metrics/wer.py", line 27, in <module>
from nemo.collections.asr.parts.submodules import ctc_beam_decoding, ctc_greedy_decoding
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/parts/submodules/ctc_beam_decoding.py", line 593, in <module>
@dataclass
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1230, in dataclass
return wrap(cls)
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1220, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 958, in _process_class
cls_fields.append(_get_field(cls, name, type, kw_only))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 815, in _get_field
raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'nemo.collections.asr.parts.submodules.ctc_beam_decoding.FlashlightConfig'> for field flashlight_cfg is not allowed: use default_factory
```
This can also be fixed accordingly, but all in all, it appears these issues are pretty common in the code base.
Looks like NeMo isn't Python 3.11 ready, at least the `asr` collection.
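The mechanism behind all of these errors, and the `default_factory` fix applied above, can be reduced to a small self-contained sketch. The `SubConfig`/`Config` names here are illustrative, not from NeMo; the behavior shown is Python 3.11's stricter mutable-default check in `dataclasses`:

```python
from dataclasses import dataclass, field


@dataclass
class SubConfig:
    hidden_size: int = 128


@dataclass
class Config:
    # On Python 3.11+, `sub: SubConfig = SubConfig()` raises
    # "ValueError: mutable default ... is not allowed: use default_factory",
    # because instances of a regular dataclass are unhashable (eq=True sets
    # __hash__ to None). default_factory builds a fresh instance per Config():
    sub: SubConfig = field(default_factory=SubConfig)
    # Use a lambda when the default needs constructor arguments:
    wide: SubConfig = field(default_factory=lambda: SubConfig(hidden_size=512))


a, b = Config(), Config()
a.sub.hidden_size = 256  # mutating one instance must not leak into the other
print(a.sub.hidden_size, b.sub.hidden_size, b.wide.hidden_size)  # prints: 256 128 512
```

This also explains why the class-level default was unsafe even on older Pythons: a single shared instance would have been mutated by every config object that touched it, which is exactly what `default_factory` prevents.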
**Steps/Code to reproduce bug**
With Python 3.11
1. Install `nemo-toolkit[asr]` either 1.19.1, or current HEAD from git (b498d438fc4c35ebf364a9a1c5cd3e29a2c0fe50)
2. Run `import nemo.collections.asr`
**Expected behavior**
Import `nemo.collections.asr` without errors
**Environment overview (please complete the following information)**
- Environment location: Bare-metal
- Method of NeMo install: `pip install nemo-toolkit==1.19.1` or from source b498d438fc4c35ebf364a9a1c5cd3e29a2c0fe50
**Environment details**
- Ubuntu 22.04
- PyTorch version 2.0.1
- Python version 3.11.4
</issue>
<code>
[start of README.rst]
|status| |documentation| |codeql| |license| |pypi| |pyversion| |downloads| |black|
.. |status| image:: http://www.repostatus.org/badges/latest/active.svg
:target: http://www.repostatus.org/#active
   :alt: Project Status: Active – The project has reached a stable, usable state and is being actively developed.
.. |documentation| image:: https://readthedocs.com/projects/nvidia-nemo/badge/?version=main
:alt: Documentation
:target: https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/
.. |license| image:: https://img.shields.io/badge/License-Apache%202.0-brightgreen.svg
:target: https://github.com/NVIDIA/NeMo/blob/master/LICENSE
:alt: NeMo core license and license for collections in this repo
.. |pypi| image:: https://badge.fury.io/py/nemo-toolkit.svg
:target: https://badge.fury.io/py/nemo-toolkit
:alt: Release version
.. |pyversion| image:: https://img.shields.io/pypi/pyversions/nemo-toolkit.svg
:target: https://badge.fury.io/py/nemo-toolkit
:alt: Python version
.. |downloads| image:: https://static.pepy.tech/personalized-badge/nemo-toolkit?period=total&units=international_system&left_color=grey&right_color=brightgreen&left_text=downloads
:target: https://pepy.tech/project/nemo-toolkit
:alt: PyPi total downloads
.. |codeql| image:: https://github.com/nvidia/nemo/actions/workflows/codeql.yml/badge.svg?branch=main&event=push
:target: https://github.com/nvidia/nemo/actions/workflows/codeql.yml
:alt: CodeQL
.. |black| image:: https://img.shields.io/badge/code%20style-black-000000.svg
:target: https://github.com/psf/black
:alt: Code style: black
.. _main-readme:
**NVIDIA NeMo**
===============
Introduction
------------
NVIDIA NeMo is a conversational AI toolkit built for researchers working on automatic speech recognition (ASR),
text-to-speech synthesis (TTS), large language models (LLMs), and
natural language processing (NLP).
The primary objective of NeMo is to help researchers from industry and academia to reuse prior work (code and pretrained models)
and make it easier to create new `conversational AI models <https://developer.nvidia.com/conversational-ai#started>`_.
All NeMo models are trained with `Lightning <https://github.com/Lightning-AI/lightning>`_ and
training is automatically scalable to 1000s of GPUs.
Additionally, NeMo Megatron LLM models can be trained up to 1 trillion parameters using tensor and pipeline model parallelism.
NeMo models can be optimized for inference and deployed for production use-cases with `NVIDIA Riva <https://developer.nvidia.com/riva>`_.
Getting started with NeMo is simple.
State of the Art pretrained NeMo models are freely available on `HuggingFace Hub <https://huggingface.co/models?library=nemo&sort=downloads&search=nvidia>`_ and
`NVIDIA NGC <https://catalog.ngc.nvidia.com/models?query=nemo&orderBy=weightPopularDESC>`_.
These models can be used to transcribe audio, synthesize speech, or translate text in just a few lines of code.
We have extensive `tutorials <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/starthere/tutorials.html>`_ that
can all be run on `Google Colab <https://colab.research.google.com>`_.
For advanced users that want to train NeMo models from scratch or finetune existing NeMo models
we have a full suite of `example scripts <https://github.com/NVIDIA/NeMo/tree/main/examples>`_ that support multi-GPU/multi-node training.
For scaling NeMo LLM training on Slurm clusters or public clouds, please see the `NVIDIA NeMo Megatron Launcher <https://github.com/NVIDIA/NeMo-Megatron-Launcher>`_.
The NM launcher has extensive recipes, scripts, utilities, and documentation for training NeMo LLMs and also has an `Autoconfigurator <https://github.com/NVIDIA/NeMo-Megatron-Launcher#53-using-autoconfigurator-to-find-the-optimal-configuration>`_
which can be used to find the optimal model parallel configuration for training on a specific cluster.
Also see the two introductory videos below for a high level overview of NeMo.
* Developing State-Of-The-Art Conversational AI Models in Three Lines of Code.
* NVIDIA NeMo: Toolkit for Conversational AI at PyData Yerevan 2022.
|three_lines| |pydata|
.. |pydata| image:: https://img.youtube.com/vi/J-P6Sczmas8/maxres3.jpg
:target: https://www.youtube.com/embed/J-P6Sczmas8?mute=0&start=14&autoplay=0
:width: 600
:alt: Develop Conversational AI Models in 3 Lines
.. |three_lines| image:: https://img.youtube.com/vi/wBgpMf_KQVw/maxresdefault.jpg
:target: https://www.youtube.com/embed/wBgpMf_KQVw?mute=0&start=0&autoplay=0
:width: 600
:alt: Introduction at PyData@Yerevan 2022
Key Features
------------
* Speech processing
* `HuggingFace Space for Audio Transcription (File, Microphone and YouTube) <https://huggingface.co/spaces/smajumdar/nemo_multilingual_language_id>`_
* `Automatic Speech Recognition (ASR) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/intro.html>`_
* Supported ASR models: `<https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/asr/models.html>`_
* Jasper, QuartzNet, CitriNet, ContextNet
* Conformer-CTC, Conformer-Transducer, FastConformer-CTC, FastConformer-Transducer
* Squeezeformer-CTC and Squeezeformer-Transducer
* LSTM-Transducer (RNNT) and LSTM-CTC
* Supports the following decoders/losses:
* CTC
* Transducer/RNNT
* Hybrid Transducer/CTC
* NeMo Original `Multi-blank Transducers <https://arxiv.org/abs/2211.03541>`_ and `Token-and-Duration Transducers (TDT) <https://arxiv.org/abs/2304.06795>`_
* Streaming/Buffered ASR (CTC/Transducer) - `Chunked Inference Examples <https://github.com/NVIDIA/NeMo/tree/stable/examples/asr/asr_chunked_inference>`_
* Cache-aware Streaming Conformer with multiple lookaheads - `<https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/asr/models.html#cache-aware-streaming-conformer>`_
* Beam Search decoding
* `Language Modelling for ASR (CTC and RNNT) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/asr_language_modeling.html>`_: N-gram LM in fusion with Beam Search decoding, Neural Rescoring with Transformer
* `Support of long audios for Conformer with memory efficient local attention <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/results.html#inference-on-long-audio>`_
* `Speech Classification, Speech Command Recognition and Language Identification <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/speech_classification/intro.html>`_: MatchboxNet (Command Recognition), AmberNet (LangID)
* `Voice activity Detection (VAD) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/asr/speech_classification/models.html#marblenet-vad>`_: MarbleNet
* ASR with VAD Inference - `Example <https://github.com/NVIDIA/NeMo/tree/stable/examples/asr/asr_vad>`_
* `Speaker Recognition <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/speaker_recognition/intro.html>`_: TitaNet, ECAPA_TDNN, SpeakerNet
* `Speaker Diarization <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/speaker_diarization/intro.html>`_
* Clustering Diarizer: TitaNet, ECAPA_TDNN, SpeakerNet
* Neural Diarizer: MSDD (Multi-scale Diarization Decoder)
* `Speech Intent Detection and Slot Filling <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/speech_intent_slot/intro.html>`_: Conformer-Transformer
* `Pretrained models on different languages. <https://ngc.nvidia.com/catalog/collections/nvidia:nemo_asr>`_: English, Spanish, German, Russian, Chinese, French, Italian, Polish, ...
* `NGC collection of pre-trained speech processing models. <https://ngc.nvidia.com/catalog/collections/nvidia:nemo_asr>`_
* Natural Language Processing
* `NeMo Megatron pre-training of Large Language Models <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/nlp/nemo_megatron/intro.html>`_
* `Neural Machine Translation (NMT) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/machine_translation/machine_translation.html>`_
* `Punctuation and Capitalization <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/punctuation_and_capitalization.html>`_
* `Token classification (named entity recognition) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/token_classification.html>`_
* `Text classification <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/text_classification.html>`_
* `Joint Intent and Slot Classification <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/joint_intent_slot.html>`_
* `Question answering <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/question_answering.html>`_
* `GLUE benchmark <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/glue_benchmark.html>`_
* `Information retrieval <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/information_retrieval.html>`_
* `Entity Linking <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/entity_linking.html>`_
* `Dialogue State Tracking <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/sgd_qa.html>`_
* `Prompt Learning <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/nemo_megatron/prompt_learning.html>`_
* `NGC collection of pre-trained NLP models. <https://ngc.nvidia.com/catalog/collections/nvidia:nemo_nlp>`_
* `Synthetic Tabular Data Generation <https://developer.nvidia.com/blog/generating-synthetic-data-with-transformers-a-solution-for-enterprise-data-challenges/>`_
* Text-to-Speech Synthesis (TTS):
* `Documentation <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/tts/intro.html#>`_
* Mel-Spectrogram generators: FastPitch, SSL FastPitch, Mixer-TTS/Mixer-TTS-X, RAD-TTS, Tacotron2
* Vocoders: HiFiGAN, UnivNet, WaveGlow
* End-to-End Models: VITS
* `Pre-trained Model Checkpoints in NVIDIA GPU Cloud (NGC) <https://ngc.nvidia.com/catalog/collections/nvidia:nemo_tts>`_
* `Tools <https://github.com/NVIDIA/NeMo/tree/stable/tools>`_
* `Text Processing (text normalization and inverse text normalization) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/text_normalization/intro.html>`_
* `NeMo Forced Aligner <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/tools/nemo_forced_aligner.html>`_
* `CTC-Segmentation tool <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/tools/ctc_segmentation.html>`_
* `Speech Data Explorer <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/tools/speech_data_explorer.html>`_: a dash-based tool for interactive exploration of ASR/TTS datasets
* `Speech Data Processor <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/tools/speech_data_processor.html>`_
Built for speed, NeMo can utilize NVIDIA's Tensor Cores and scale out training to multiple GPUs and multiple nodes.
Requirements
------------
1) Python 3.10 or above
2) Pytorch 1.13.1 or above
3) NVIDIA GPU for training
Documentation
-------------
.. |main| image:: https://readthedocs.com/projects/nvidia-nemo/badge/?version=main
:alt: Documentation Status
:scale: 100%
:target: https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/
.. |stable| image:: https://readthedocs.com/projects/nvidia-nemo/badge/?version=stable
:alt: Documentation Status
:scale: 100%
:target: https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/
+---------+-------------+------------------------------------------------------------------------------------------------------------------------------------------+
| Version | Status | Description |
+=========+=============+==========================================================================================================================================+
| Latest | |main| | `Documentation of the latest (i.e. main) branch. <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/>`_ |
+---------+-------------+------------------------------------------------------------------------------------------------------------------------------------------+
| Stable | |stable| | `Documentation of the stable (i.e. most recent release) branch. <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/>`_ |
+---------+-------------+------------------------------------------------------------------------------------------------------------------------------------------+
Tutorials
---------
A great way to start with NeMo is by checking `one of our tutorials <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/starthere/tutorials.html>`_.
Getting help with NeMo
----------------------
FAQ can be found on NeMo's `Discussions board <https://github.com/NVIDIA/NeMo/discussions>`_. You are welcome to ask questions or start discussions there.
Installation
------------
Conda
~~~~~
We recommend installing NeMo in a fresh Conda environment.
.. code-block:: bash
conda create --name nemo python==3.10.12
conda activate nemo
Install PyTorch using their `configurator <https://pytorch.org/get-started/locally/>`_.
.. code-block:: bash
conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia
The command used to install PyTorch may depend on your system. Please use the configurator linked above to find the right command for your system.
Pip
~~~
Use this installation mode if you want the latest released version.
.. code-block:: bash
apt-get update && apt-get install -y libsndfile1 ffmpeg
pip install Cython
pip install nemo_toolkit['all']
Depending on the shell used, you may need to use ``"nemo_toolkit[all]"`` instead in the above command.
Pip from source
~~~~~~~~~~~~~~~
Use this installation mode if you want the version from a particular GitHub branch (e.g main).
.. code-block:: bash
apt-get update && apt-get install -y libsndfile1 ffmpeg
pip install Cython
python -m pip install git+https://github.com/NVIDIA/NeMo.git@{BRANCH}#egg=nemo_toolkit[all]
From source
~~~~~~~~~~~
Use this installation mode if you are contributing to NeMo.
.. code-block:: bash
apt-get update && apt-get install -y libsndfile1 ffmpeg
git clone https://github.com/NVIDIA/NeMo
cd NeMo
./reinstall.sh
If you only want the toolkit without additional conda-based dependencies, you may replace ``reinstall.sh``
with ``pip install -e .`` when your PWD is the root of the NeMo repository.
RNNT
~~~~
Note that RNNT requires numba to be installed from conda.
.. code-block:: bash
conda remove numba
pip uninstall numba
conda install -c conda-forge numba
NeMo Megatron
~~~~~~~~~~~~~
NeMo Megatron training requires NVIDIA Apex to be installed.
Install it manually if not using the NVIDIA PyTorch container.
To install Apex, run
.. code-block:: bash
git clone https://github.com/NVIDIA/apex.git
cd apex
git checkout 52e18c894223800cb611682dce27d88050edf1de
pip install -v --no-build-isolation --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" --global-option="--fast_layer_norm" --global-option="--distributed_adam" --global-option="--deprecated_fused_adam" ./
It is highly recommended to use the NVIDIA PyTorch or NeMo container if having issues installing Apex or any other dependencies.
While installing Apex, it may raise an error if the CUDA version on your system does not match the CUDA version torch was compiled with.
This error can be avoided by commenting out the raise statement here: https://github.com/NVIDIA/apex/blob/master/setup.py#L32
cuda-nvprof is needed to install Apex. The version should match the CUDA version that you are using:
.. code-block:: bash
conda install -c nvidia cuda-nvprof=11.8
packaging is also needed:
.. code-block:: bash
pip install packaging
With the latest versions of Apex, the ``pyproject.toml`` file in Apex may need to be deleted in order to install locally.
Transformer Engine
~~~~~~~~~~~~~~~~~~
NeMo Megatron GPT has been integrated with `NVIDIA Transformer Engine <https://github.com/NVIDIA/TransformerEngine>`_
Transformer Engine enables FP8 training on NVIDIA Hopper GPUs.
`Install <https://docs.nvidia.com/deeplearning/transformer-engine/user-guide/installation.html>`_ it manually if not using the NVIDIA PyTorch container.
.. code-block:: bash
pip install --upgrade git+https://github.com/NVIDIA/TransformerEngine.git@stable
It is highly recommended to use the NVIDIA PyTorch or NeMo container if having issues installing Transformer Engine or any other dependencies.
Transformer Engine requires PyTorch to be built with CUDA 11.8.
Flash Attention
~~~~~~~~~~~~~~~~~~~~
Transformer Engine already supports Flash Attention for GPT models. If you want to use Flash Attention for non-causal models, or use it with an attention bias (introduced by position encodings, e.g. ALiBi), please install `flash-attn <https://github.com/HazyResearch/flash-attention>`_.
.. code-block:: bash
pip install flash-attn
pip install triton==2.0.0.dev20221202
NLP inference UI
~~~~~~~~~~~~~~~~~~~~
To launch the inference web UI server, please install `gradio <https://gradio.app/>`_.
.. code-block:: bash
pip install gradio==3.34.0
NeMo Text Processing
~~~~~~~~~~~~~~~~~~~~
NeMo Text Processing, specifically (Inverse) Text Normalization, is now a separate repository `https://github.com/NVIDIA/NeMo-text-processing <https://github.com/NVIDIA/NeMo-text-processing>`_.
Docker containers:
~~~~~~~~~~~~~~~~~~
We release NeMo containers alongside NeMo releases. For example, NeMo ``r1.20.0`` comes with the container ``nemo:23.06``; you can find more details about released containers on the `releases page <https://github.com/NVIDIA/NeMo/releases>`_.
To use a built container, run
.. code-block:: bash
docker pull nvcr.io/nvidia/nemo:23.06
To build a NeMo container with the Dockerfile from a branch, run
.. code-block:: bash
DOCKER_BUILDKIT=1 docker build -f Dockerfile -t nemo:latest .
If you choose to work with the main branch, we recommend using NVIDIA's PyTorch container version 23.06-py3 and then installing from GitHub.
.. code-block:: bash
docker run --gpus all -it --rm -v <nemo_github_folder>:/NeMo --shm-size=8g \
-p 8888:8888 -p 6006:6006 --ulimit memlock=-1 --ulimit \
stack=67108864 --device=/dev/snd nvcr.io/nvidia/pytorch:23.06-py3
Examples
--------
Many examples can be found under the `"Examples" <https://github.com/NVIDIA/NeMo/tree/stable/examples>`_ folder.
Contributing
------------
We welcome community contributions! Please refer to `CONTRIBUTING.md <https://github.com/NVIDIA/NeMo/blob/stable/CONTRIBUTING.md>`_ for the process.
Publications
------------
We provide an ever-growing list of publications that utilize the NeMo framework. Please refer to `PUBLICATIONS.md <https://github.com/NVIDIA/NeMo/tree/stable/PUBLICATIONS.md>`_. We welcome the addition of your own articles to this list!
License
-------
NeMo is under `Apache 2.0 license <https://github.com/NVIDIA/NeMo/blob/stable/LICENSE>`_.
[end of README.rst]
[start of examples/asr/experimental/k2/align_speech_parallel.py]
# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
# Based on examples/asr/transcribe_speech_parallel.py
# ASR alignment with multi-GPU/multi-node support for large datasets
# It supports both tarred and non-tarred datasets
# Arguments
# model: path to a nemo/PTL checkpoint file or name of a pretrained model
# predict_ds: config of the dataset/dataloader
# aligner_args: aligner config
# output_path: path to store the predictions
# model_stride: model downsampling factor, 8 for Citrinet models and 4 for Conformer models
#
# Results of each GPU/worker are written into a file named 'predictions_{rank}.json', and aggregated results of all workers are written into 'predictions_all.json'
Example for non-tarred datasets:
python align_speech_parallel.py \
model=stt_en_conformer_ctc_large \
predict_ds.manifest_filepath=/dataset/manifest_file.json \
predict_ds.batch_size=16 \
output_path=/tmp/
Example for tarred datasets:
python align_speech_parallel.py \
predict_ds.is_tarred=true \
predict_ds.manifest_filepath=/tarred_dataset/tarred_audio_manifest.json \
predict_ds.tarred_audio_filepaths=/tarred_dataset/audio__OP_0..127_CL_.tar \
...
By default the trainer uses all the GPUs available and default precision is FP32.
By setting the trainer config you may control these configs. For example to do the predictions with AMP on just two GPUs:
python align_speech_parallel.py \
trainer.precision=16 \
trainer.gpus=2 \
...
You may control the dataloader's config by setting the predict_ds:
python align_speech_parallel.py \
predict_ds.num_workers=8 \
predict_ds.min_duration=2.0 \
predict_ds.sample_rate=16000 \
model=stt_en_conformer_ctc_small \
...
You may control the aligner's config by setting the aligner_args:
aligner_args.alignment_type=argmax \
aligner_args.word_output=False \
aligner_args.cpu_decoding=True \
aligner_args.decode_batch_size=8 \
aligner_args.ctc_cfg.prob_suppress_index=-1 \
aligner_args.ctc_cfg.prob_suppress_value=0.5 \
aligner_args.rnnt_cfg.predictor_window_size=10 \
aligner_args.decoder_module_cfg.intersect_pruned=true \
aligner_args.decoder_module_cfg.intersect_conf.search_beam=40 \
...
"""
import os
from dataclasses import dataclass, is_dataclass
from typing import Optional
import pytorch_lightning as ptl
import torch
from omegaconf import MISSING, OmegaConf
from nemo.collections.asr.data.audio_to_ctm_dataset import ASRCTMPredictionWriter
from nemo.collections.asr.models import ASRModel
from nemo.collections.asr.models.configs.aligner_config import K2AlignerWrapperModelConfig
from nemo.collections.asr.models.configs.asr_models_config import ASRDatasetConfig
from nemo.collections.asr.models.k2_aligner_model import AlignerWrapperModel
from nemo.core.config import TrainerConfig, hydra_runner
from nemo.utils import logging
from nemo.utils.get_rank import is_global_rank_zero
@dataclass
class ParallelAlignmentConfig:
    model: Optional[str] = None  # path to a .nemo/.ckpt checkpoint file or name of a pretrained model
predict_ds: ASRDatasetConfig = ASRDatasetConfig(return_sample_id=True, num_workers=4)
aligner_args: K2AlignerWrapperModelConfig = K2AlignerWrapperModelConfig()
output_path: str = MISSING
model_stride: int = 8
trainer: TrainerConfig = TrainerConfig(gpus=-1, accelerator="ddp")
    # these arguments will be ignored
return_predictions: bool = False
use_cer: bool = False
def match_train_config(predict_ds, train_ds):
    # Copies the important configurations from the model's train dataset into
    # predict_ds so that prediction uses settings that match training.
if train_ds is None:
return
predict_ds.sample_rate = train_ds.get("sample_rate", 16000)
cfg_name_list = [
"int_values",
"use_start_end_token",
"blank_index",
"unk_index",
"normalize",
"parser",
"eos_id",
"bos_id",
"pad_id",
]
if is_dataclass(predict_ds):
predict_ds = OmegaConf.structured(predict_ds)
for cfg_name in cfg_name_list:
if hasattr(train_ds, cfg_name):
setattr(predict_ds, cfg_name, getattr(train_ds, cfg_name))
return predict_ds
@hydra_runner(config_name="AlignmentConfig", schema=ParallelAlignmentConfig)
def main(cfg: ParallelAlignmentConfig):
if cfg.model.endswith(".nemo"):
logging.info("Attempting to initialize from .nemo file")
model = ASRModel.restore_from(restore_path=cfg.model, map_location="cpu")
elif cfg.model.endswith(".ckpt"):
logging.info("Attempting to initialize from .ckpt file")
model = ASRModel.load_from_checkpoint(checkpoint_path=cfg.model, map_location="cpu")
else:
logging.info(
"Attempting to initialize from a pretrained model as the model name does not have the extension of .nemo or .ckpt"
)
model = ASRModel.from_pretrained(model_name=cfg.model, map_location="cpu")
trainer = ptl.Trainer(**cfg.trainer)
cfg.predict_ds.return_sample_id = True
cfg.return_predictions = False
cfg.use_cer = False
cfg.predict_ds = match_train_config(predict_ds=cfg.predict_ds, train_ds=model._cfg.train_ds)
data_loader = model._setup_dataloader_from_config(cfg.predict_ds)
os.makedirs(cfg.output_path, exist_ok=True)
# trainer.global_rank is not valid before predict() is called. Need this hack to find the correct global_rank.
global_rank = trainer.node_rank * trainer.num_devices + int(os.environ.get("LOCAL_RANK", 0))
output_file = os.path.join(cfg.output_path, f"predictions_{global_rank}.json")
output_ctm_dir = os.path.join(cfg.output_path, "ctm")
predictor_writer = ASRCTMPredictionWriter(
dataset=data_loader.dataset,
output_file=output_file,
output_ctm_dir=output_ctm_dir,
time_per_frame=cfg.model_stride * model._cfg.preprocessor['window_stride'],
)
trainer.callbacks.extend([predictor_writer])
aligner_wrapper = AlignerWrapperModel(model=model, cfg=cfg.aligner_args)
trainer.predict(model=aligner_wrapper, dataloaders=data_loader, return_predictions=cfg.return_predictions)
samples_num = predictor_writer.close_output_file()
logging.info(
f"Prediction on rank {global_rank} is done for {samples_num} samples and results are stored in {output_file}."
)
if torch.distributed.is_initialized():
torch.distributed.barrier()
samples_num = 0
if is_global_rank_zero():
        output_file = os.path.join(cfg.output_path, "predictions_all.json")
logging.info(f"Prediction files are being aggregated in {output_file}.")
with open(output_file, 'tw', encoding="utf-8") as outf:
for rank in range(trainer.world_size):
input_file = os.path.join(cfg.output_path, f"predictions_{rank}.json")
with open(input_file, 'r', encoding="utf-8") as inpf:
lines = inpf.readlines()
samples_num += len(lines)
outf.writelines(lines)
logging.info(
f"Prediction is done for {samples_num} samples in total on all workers and results are aggregated in {output_file}."
)
if __name__ == '__main__':
main()
[end of examples/asr/experimental/k2/align_speech_parallel.py]
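The rank-zero aggregation step in `main()` above can be sketched as a standalone helper (the function name is hypothetical; it mirrors the loop that concatenates the per-rank `predictions_{rank}.json` files, which are newline-delimited JSON):

```python
import os


def aggregate_predictions(output_path: str, world_size: int) -> int:
    """Merge per-rank prediction files into a single predictions_all.json.

    Each rank is assumed to have written newline-delimited JSON records to
    predictions_{rank}.json inside output_path; records are concatenated in
    rank order. Returns the total number of aggregated samples.
    """
    out_file = os.path.join(output_path, "predictions_all.json")
    samples_num = 0
    with open(out_file, "w", encoding="utf-8") as outf:
        for rank in range(world_size):
            in_file = os.path.join(output_path, f"predictions_{rank}.json")
            with open(in_file, "r", encoding="utf-8") as inpf:
                lines = inpf.readlines()
                samples_num += len(lines)
                outf.writelines(lines)
    return samples_num
```

Because ranks are visited in order, the merged file preserves a deterministic ordering even though workers finish at different times.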
[start of nemo/collections/asr/metrics/rnnt_wer.py]
# Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import copy
import re
from abc import abstractmethod
from dataclasses import dataclass, is_dataclass
from typing import Callable, Dict, List, Optional, Tuple, Union
import editdistance
import numpy as np
import torch
from omegaconf import OmegaConf
from torchmetrics import Metric
from nemo.collections.asr.metrics.wer import move_dimension_to_the_front
from nemo.collections.asr.parts.submodules import rnnt_beam_decoding as beam_decode
from nemo.collections.asr.parts.submodules import rnnt_greedy_decoding as greedy_decode
from nemo.collections.asr.parts.utils.asr_confidence_utils import ConfidenceConfig, ConfidenceMixin
from nemo.collections.asr.parts.utils.rnnt_utils import Hypothesis, NBestHypotheses
from nemo.utils import logging
__all__ = ['RNNTDecoding', 'RNNTWER']
class AbstractRNNTDecoding(ConfidenceMixin):
"""
Used for performing RNN-T auto-regressive decoding of the Decoder+Joint network given the encoder state.
Args:
decoding_cfg: A dict-like object which contains the following key-value pairs.
strategy: str value which represents the type of decoding that can occur.
Possible values are :
- greedy, greedy_batch (for greedy decoding).
- beam, tsd, alsd (for beam search decoding).
compute_hypothesis_token_set: A bool flag, which determines whether to compute a list of decoded
tokens as well as the decoded string. Default is False in order to avoid double decoding
unless required.
preserve_alignments: Bool flag which preserves the history of logprobs generated during
decoding (sample / batched). When set to true, the Hypothesis will contain
the non-null value for `alignments` in it. Here, `alignments` is a List of List of
Tuple(Tensor (of length V + 1), Tensor(scalar, label after argmax)).
In order to obtain this hypothesis, please utilize `rnnt_decoder_predictions_tensor` function
with the `return_hypotheses` flag set to True.
The length of the list corresponds to the Acoustic Length (T).
Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more targets from a vocabulary.
U is the number of target tokens for the current timestep Ti.
compute_timestamps: A bool flag, which determines whether to compute the character/subword, or
word based timestamp mapping the output log-probabilities to discrete intervals of timestamps.
The timestamps will be available in the returned Hypothesis.timestep as a dictionary.
rnnt_timestamp_type: A str value, which represents the types of timestamps that should be calculated.
Can take the following values - "char" for character/subword time stamps, "word" for word level
time stamps and "all" (default), for both character level and word level time stamps.
            word_seperator: Str token representing the separator between words.
            preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores
                generated during decoding (sample / batched). When set to true, the Hypothesis will contain
                the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of List of floats.
confidence_cfg: A dict-like object which contains the following key-value pairs related to confidence
scores. In order to obtain hypotheses with confidence scores, please utilize
`rnnt_decoder_predictions_tensor` function with the `preserve_frame_confidence` flag set to True.
preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores
generated during decoding (sample / batched). When set to true, the Hypothesis will contain
                    the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of List of floats.
The length of the list corresponds to the Acoustic Length (T).
Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more confidence scores.
U is the number of target tokens for the current timestep Ti.
preserve_token_confidence: Bool flag which preserves the history of per-token confidence scores
generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
the non-null value for `token_confidence` in it. Here, `token_confidence` is a List of floats.
The length of the list corresponds to the number of recognized tokens.
preserve_word_confidence: Bool flag which preserves the history of per-word confidence scores
generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
the non-null value for `word_confidence` in it. Here, `word_confidence` is a List of floats.
The length of the list corresponds to the number of recognized words.
exclude_blank: Bool flag indicating that blank token confidence scores are to be excluded
from the `token_confidence`.
aggregation: Which aggregation type to use for collapsing per-token confidence into per-word confidence.
Valid options are `mean`, `min`, `max`, `prod`.
method_cfg: A dict-like object which contains the method name and settings to compute per-frame
confidence scores.
name: The method name (str).
Supported values:
- 'max_prob' for using the maximum token probability as a confidence.
- 'entropy' for using a normalized entropy of a log-likelihood vector.
entropy_type: Which type of entropy to use (str).
Used if confidence_method_cfg.name is set to `entropy`.
Supported values:
- 'gibbs' for the (standard) Gibbs entropy. If the alpha (ฮฑ) is provided,
the formula is the following: H_ฮฑ = -sum_i((p^ฮฑ_i)*log(p^ฮฑ_i)).
                            Note that for this entropy, the alpha should comply with the following inequality:
(log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= ฮฑ <= (1+log(V-1))/log(V-1)
where V is the model vocabulary size.
- 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
Tsallis entropy formula is the following: H_ฮฑ = 1/(ฮฑ-1)*(1-sum_i(p^ฮฑ_i)),
where ฮฑ is a parameter. When ฮฑ == 1, it works like the Gibbs entropy.
More: https://en.wikipedia.org/wiki/Tsallis_entropy
- 'renyi' for the Rรฉnyi entropy.
Rรฉnyi entropy formula is the following: H_ฮฑ = 1/(1-ฮฑ)*log_2(sum_i(p^ฮฑ_i)),
where ฮฑ is a parameter. When ฮฑ == 1, it works like the Gibbs entropy.
More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
alpha: Power scale for logsoftmax (ฮฑ for entropies). Here we restrict it to be > 0.
When the alpha equals one, scaling is not applied to 'max_prob',
and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
entropy_norm: A mapping of the entropy value to the interval [0,1].
Supported values:
- 'lin' for using the linear mapping.
- 'exp' for using exponential mapping with linear shift.
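            For intuition, a minimal standalone sketch (not NeMo's implementation; the function name and the
            power-scaling detail are illustrative) of a normalized Gibbs-entropy confidence for a single frame
            of probabilities:

```python
import math


def gibbs_entropy_confidence(probs, alpha=1.0, norm="lin"):
    # Hypothetical sketch: confidence of one frame from its probability vector.
    # alpha power-scales the distribution; norm maps entropy into [0, 1].
    v = len(probs)
    if alpha != 1.0:
        scaled = [p ** alpha for p in probs]
        total = sum(scaled)
        probs = [p / total for p in scaled]
    # Gibbs (Shannon) entropy; maximal for the uniform distribution.
    h = -sum(p * math.log(p) for p in probs if p > 0.0)
    h_max = math.log(v)
    if norm == "lin":
        # 'lin': linearly map H in [0, log V] onto confidence in [0, 1]
        return 1.0 - h / h_max
    # 'exp': exponential mapping with linear shift
    return (math.exp(-h) - math.exp(-h_max)) / (1.0 - math.exp(-h_max))
```

            Under either mapping, a uniform distribution yields confidence 0 and a one-hot
            distribution yields confidence 1.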
The config may further contain the following sub-dictionaries:
"greedy":
max_symbols: int, describing the maximum number of target tokens to decode per
timestep during greedy decoding. Setting to larger values allows longer sentences
to be decoded, at the cost of increased execution time.
preserve_frame_confidence: Same as above, overrides above value.
confidence_method_cfg: Same as above, overrides confidence_cfg.method_cfg.
"beam":
beam_size: int, defining the beam size for beam search. Must be >= 1.
                    If beam_size == 1, will perform cached greedy search. This might give slightly different
                    results compared to the greedy search above.
score_norm: optional bool, whether to normalize the returned beam score in the hypotheses.
Set to True by default.
return_best_hypothesis: optional bool, whether to return just the best hypothesis or all of the
hypotheses after beam search has concluded. This flag is set by default.
tsd_max_sym_exp: optional int, determines number of symmetric expansions of the target symbols
per timestep of the acoustic model. Larger values will allow longer sentences to be decoded,
at increased cost to execution time.
alsd_max_target_len: optional int or float, determines the potential maximum target sequence length.
If an integer is provided, it can decode sequences of that particular maximum length.
If a float is provided, it can decode sequences of int(alsd_max_target_len * seq_len),
where seq_len is the length of the acoustic model output (T).
NOTE:
If a float is provided, it can be greater than 1!
By default, a float of 2.0 is used so that a target sequence can be at most twice
as long as the acoustic model output length T.
maes_num_steps: Number of adaptive steps to take. From the paper, 2 steps is generally sufficient,
and can be reduced to 1 to improve decoding speed while sacrificing some accuracy. int > 0.
                maes_prefix_alpha: Maximum prefix length in prefix search. Must be an integer, and it is advised to keep this as 1
in order to reduce expensive beam search cost later. int >= 0.
maes_expansion_beta: Maximum number of prefix expansions allowed, in addition to the beam size.
Effectively, the number of hypothesis = beam_size + maes_expansion_beta. Must be an int >= 0,
and affects the speed of inference since large values will perform large beam search in the next step.
maes_expansion_gamma: Float pruning threshold used in the prune-by-value step when computing the expansions.
The default (2.3) is selected from the paper. It performs a comparison (max_log_prob - gamma <= log_prob[v])
where v is all vocabulary indices in the Vocab set and max_log_prob is the "most" likely token to be
predicted. Gamma therefore provides a margin of additional tokens which can be potential candidates for
expansion apart from the "most likely" candidate.
Lower values will reduce the number of expansions (by increasing pruning-by-value, thereby improving speed
but hurting accuracy). Higher values will increase the number of expansions (by reducing pruning-by-value,
thereby reducing speed but potentially improving accuracy). This is a hyper parameter to be experimentally
tuned on a validation set.
softmax_temperature: Scales the logits of the joint prior to computing log_softmax.
decoder: The Decoder/Prediction network module.
joint: The Joint network module.
blank_id: The id of the RNNT blank token.
"""
def __init__(self, decoding_cfg, decoder, joint, blank_id: int):
super(AbstractRNNTDecoding, self).__init__()
# Convert dataclass to config object
if is_dataclass(decoding_cfg):
decoding_cfg = OmegaConf.structured(decoding_cfg)
self.cfg = decoding_cfg
self.blank_id = blank_id
self.num_extra_outputs = joint.num_extra_outputs
self.big_blank_durations = self.cfg.get("big_blank_durations", None)
self.durations = self.cfg.get("durations", None)
self.compute_hypothesis_token_set = self.cfg.get("compute_hypothesis_token_set", False)
self.compute_langs = decoding_cfg.get('compute_langs', False)
self.preserve_alignments = self.cfg.get('preserve_alignments', None)
self.joint_fused_batch_size = self.cfg.get('fused_batch_size', None)
self.compute_timestamps = self.cfg.get('compute_timestamps', None)
self.word_seperator = self.cfg.get('word_seperator', ' ')
if self.durations is not None: # this means it's a TDT model.
if blank_id == 0:
raise ValueError("blank_id must equal len(non_blank_vocabs) for TDT models")
if self.big_blank_durations is not None:
raise ValueError("duration and big_blank_durations can't both be not None")
if self.cfg.strategy not in ['greedy', 'greedy_batch']:
raise ValueError("currently only greedy and greedy_batch inference is supported for TDT models")
if self.big_blank_durations is not None: # this means it's a multi-blank model.
if blank_id == 0:
raise ValueError("blank_id must equal len(vocabs) for multi-blank RNN-T models")
if self.cfg.strategy not in ['greedy', 'greedy_batch']:
raise ValueError(
"currently only greedy and greedy_batch inference is supported for multi-blank models"
)
possible_strategies = ['greedy', 'greedy_batch', 'beam', 'tsd', 'alsd', 'maes']
if self.cfg.strategy not in possible_strategies:
raise ValueError(f"Decoding strategy must be one of {possible_strategies}")
# Update preserve alignments
if self.preserve_alignments is None:
if self.cfg.strategy in ['greedy', 'greedy_batch']:
self.preserve_alignments = self.cfg.greedy.get('preserve_alignments', False)
elif self.cfg.strategy in ['beam', 'tsd', 'alsd', 'maes']:
self.preserve_alignments = self.cfg.beam.get('preserve_alignments', False)
# Update compute timestamps
if self.compute_timestamps is None:
if self.cfg.strategy in ['greedy', 'greedy_batch']:
self.compute_timestamps = self.cfg.greedy.get('compute_timestamps', False)
elif self.cfg.strategy in ['beam', 'tsd', 'alsd', 'maes']:
self.compute_timestamps = self.cfg.beam.get('compute_timestamps', False)
# Test if alignments are being preserved for RNNT
if self.compute_timestamps is True and self.preserve_alignments is False:
            raise ValueError("If `compute_timestamps` flag is set, then `preserve_alignments` flag must also be set.")
# initialize confidence-related fields
self._init_confidence(self.cfg.get('confidence_cfg', None))
# Confidence estimation is not implemented for these strategies
if (
not self.preserve_frame_confidence
and self.cfg.strategy in ['beam', 'tsd', 'alsd', 'maes']
and self.cfg.beam.get('preserve_frame_confidence', False)
):
raise NotImplementedError(f"Confidence calculation is not supported for strategy `{self.cfg.strategy}`")
if self.cfg.strategy == 'greedy':
if self.big_blank_durations is None:
if self.durations is None:
self.decoding = greedy_decode.GreedyRNNTInfer(
decoder_model=decoder,
joint_model=joint,
blank_index=self.blank_id,
max_symbols_per_step=(
self.cfg.greedy.get('max_symbols', None)
or self.cfg.greedy.get('max_symbols_per_step', None)
),
preserve_alignments=self.preserve_alignments,
preserve_frame_confidence=self.preserve_frame_confidence,
confidence_method_cfg=self.confidence_method_cfg,
)
else:
self.decoding = greedy_decode.GreedyTDTInfer(
decoder_model=decoder,
joint_model=joint,
blank_index=self.blank_id,
durations=self.durations,
max_symbols_per_step=(
self.cfg.greedy.get('max_symbols', None)
or self.cfg.greedy.get('max_symbols_per_step', None)
),
preserve_alignments=self.preserve_alignments,
preserve_frame_confidence=self.preserve_frame_confidence,
confidence_method_cfg=self.confidence_method_cfg,
)
else:
self.decoding = greedy_decode.GreedyMultiblankRNNTInfer(
decoder_model=decoder,
joint_model=joint,
blank_index=self.blank_id,
big_blank_durations=self.big_blank_durations,
max_symbols_per_step=(
self.cfg.greedy.get('max_symbols', None) or self.cfg.greedy.get('max_symbols_per_step', None)
),
preserve_alignments=self.preserve_alignments,
preserve_frame_confidence=self.preserve_frame_confidence,
confidence_method_cfg=self.confidence_method_cfg,
)
elif self.cfg.strategy == 'greedy_batch':
if self.big_blank_durations is None:
if self.durations is None:
self.decoding = greedy_decode.GreedyBatchedRNNTInfer(
decoder_model=decoder,
joint_model=joint,
blank_index=self.blank_id,
max_symbols_per_step=(
self.cfg.greedy.get('max_symbols', None)
or self.cfg.greedy.get('max_symbols_per_step', None)
),
preserve_alignments=self.preserve_alignments,
preserve_frame_confidence=self.preserve_frame_confidence,
confidence_method_cfg=self.confidence_method_cfg,
)
else:
self.decoding = greedy_decode.GreedyBatchedTDTInfer(
decoder_model=decoder,
joint_model=joint,
blank_index=self.blank_id,
durations=self.durations,
max_symbols_per_step=(
self.cfg.greedy.get('max_symbols', None)
or self.cfg.greedy.get('max_symbols_per_step', None)
),
preserve_alignments=self.preserve_alignments,
preserve_frame_confidence=self.preserve_frame_confidence,
confidence_method_cfg=self.confidence_method_cfg,
)
else:
self.decoding = greedy_decode.GreedyBatchedMultiblankRNNTInfer(
decoder_model=decoder,
joint_model=joint,
blank_index=self.blank_id,
big_blank_durations=self.big_blank_durations,
max_symbols_per_step=(
self.cfg.greedy.get('max_symbols', None) or self.cfg.greedy.get('max_symbols_per_step', None)
),
preserve_alignments=self.preserve_alignments,
preserve_frame_confidence=self.preserve_frame_confidence,
confidence_method_cfg=self.confidence_method_cfg,
)
elif self.cfg.strategy == 'beam':
self.decoding = beam_decode.BeamRNNTInfer(
decoder_model=decoder,
joint_model=joint,
beam_size=self.cfg.beam.beam_size,
return_best_hypothesis=decoding_cfg.beam.get('return_best_hypothesis', True),
search_type='default',
score_norm=self.cfg.beam.get('score_norm', True),
softmax_temperature=self.cfg.beam.get('softmax_temperature', 1.0),
preserve_alignments=self.preserve_alignments,
)
elif self.cfg.strategy == 'tsd':
self.decoding = beam_decode.BeamRNNTInfer(
decoder_model=decoder,
joint_model=joint,
beam_size=self.cfg.beam.beam_size,
return_best_hypothesis=decoding_cfg.beam.get('return_best_hypothesis', True),
search_type='tsd',
score_norm=self.cfg.beam.get('score_norm', True),
tsd_max_sym_exp_per_step=self.cfg.beam.get('tsd_max_sym_exp', 10),
softmax_temperature=self.cfg.beam.get('softmax_temperature', 1.0),
preserve_alignments=self.preserve_alignments,
)
elif self.cfg.strategy == 'alsd':
self.decoding = beam_decode.BeamRNNTInfer(
decoder_model=decoder,
joint_model=joint,
beam_size=self.cfg.beam.beam_size,
return_best_hypothesis=decoding_cfg.beam.get('return_best_hypothesis', True),
search_type='alsd',
score_norm=self.cfg.beam.get('score_norm', True),
alsd_max_target_len=self.cfg.beam.get('alsd_max_target_len', 2),
softmax_temperature=self.cfg.beam.get('softmax_temperature', 1.0),
preserve_alignments=self.preserve_alignments,
)
elif self.cfg.strategy == 'maes':
self.decoding = beam_decode.BeamRNNTInfer(
decoder_model=decoder,
joint_model=joint,
beam_size=self.cfg.beam.beam_size,
return_best_hypothesis=decoding_cfg.beam.get('return_best_hypothesis', True),
search_type='maes',
score_norm=self.cfg.beam.get('score_norm', True),
maes_num_steps=self.cfg.beam.get('maes_num_steps', 2),
maes_prefix_alpha=self.cfg.beam.get('maes_prefix_alpha', 1),
maes_expansion_gamma=self.cfg.beam.get('maes_expansion_gamma', 2.3),
maes_expansion_beta=self.cfg.beam.get('maes_expansion_beta', 2.0),
softmax_temperature=self.cfg.beam.get('softmax_temperature', 1.0),
preserve_alignments=self.preserve_alignments,
ngram_lm_model=self.cfg.beam.get('ngram_lm_model', None),
ngram_lm_alpha=self.cfg.beam.get('ngram_lm_alpha', 0.0),
hat_subtract_ilm=self.cfg.beam.get('hat_subtract_ilm', False),
hat_ilm_weight=self.cfg.beam.get('hat_ilm_weight', 0.0),
)
else:
raise ValueError(
f"Incorrect decoding strategy supplied. Must be one of {possible_strategies}\n"
f"but was provided {self.cfg.strategy}"
)
# Update the joint fused batch size or disable it entirely if needed.
self.update_joint_fused_batch_size()
def rnnt_decoder_predictions_tensor(
self,
encoder_output: torch.Tensor,
encoded_lengths: torch.Tensor,
return_hypotheses: bool = False,
partial_hypotheses: Optional[List[Hypothesis]] = None,
) -> Tuple[List[str], Optional[List[List[str]]], Optional[Union[Hypothesis, NBestHypotheses]]]:
"""
Decode an encoder output by autoregressive decoding of the Decoder+Joint networks.
Args:
encoder_output: torch.Tensor of shape [B, D, T].
encoded_lengths: torch.Tensor containing lengths of the padded encoder outputs. Shape [B].
return_hypotheses: bool. If set to True it will return list of Hypothesis or NBestHypotheses
Returns:
If `return_best_hypothesis` is set:
A tuple (hypotheses, None):
hypotheses - list of Hypothesis (best hypothesis per sample).
Look at rnnt_utils.Hypothesis for more information.
If `return_best_hypothesis` is not set:
A tuple(hypotheses, all_hypotheses)
hypotheses - list of Hypothesis (best hypothesis per sample).
Look at rnnt_utils.Hypothesis for more information.
all_hypotheses - list of NBestHypotheses. Each NBestHypotheses further contains a sorted
list of all the hypotheses of the model per sample.
Look at rnnt_utils.NBestHypotheses for more information.
"""
# Compute hypotheses
with torch.inference_mode():
hypotheses_list = self.decoding(
encoder_output=encoder_output, encoded_lengths=encoded_lengths, partial_hypotheses=partial_hypotheses
) # type: [List[Hypothesis]]
# extract the hypotheses
hypotheses_list = hypotheses_list[0] # type: List[Hypothesis]
prediction_list = hypotheses_list
if isinstance(prediction_list[0], NBestHypotheses):
hypotheses = []
all_hypotheses = []
for nbest_hyp in prediction_list: # type: NBestHypotheses
n_hyps = nbest_hyp.n_best_hypotheses # Extract all hypotheses for this sample
decoded_hyps = self.decode_hypothesis(n_hyps) # type: List[str]
# If computing timestamps
if self.compute_timestamps is True:
timestamp_type = self.cfg.get('rnnt_timestamp_type', 'all')
for hyp_idx in range(len(decoded_hyps)):
decoded_hyps[hyp_idx] = self.compute_rnnt_timestamps(decoded_hyps[hyp_idx], timestamp_type)
hypotheses.append(decoded_hyps[0]) # best hypothesis
all_hypotheses.append(decoded_hyps)
if return_hypotheses:
return hypotheses, all_hypotheses
best_hyp_text = [h.text for h in hypotheses]
all_hyp_text = [h.text for hh in all_hypotheses for h in hh]
return best_hyp_text, all_hyp_text
else:
hypotheses = self.decode_hypothesis(prediction_list) # type: List[str]
# If computing timestamps
if self.compute_timestamps is True:
timestamp_type = self.cfg.get('rnnt_timestamp_type', 'all')
for hyp_idx in range(len(hypotheses)):
hypotheses[hyp_idx] = self.compute_rnnt_timestamps(hypotheses[hyp_idx], timestamp_type)
if return_hypotheses:
# greedy decoding, can get high-level confidence scores
if self.preserve_frame_confidence and (
self.preserve_word_confidence or self.preserve_token_confidence
):
hypotheses = self.compute_confidence(hypotheses)
return hypotheses, None
best_hyp_text = [h.text for h in hypotheses]
return best_hyp_text, None
def decode_hypothesis(self, hypotheses_list: List[Hypothesis]) -> List[Union[Hypothesis, NBestHypotheses]]:
"""
Decode a list of hypotheses into a list of strings.
Args:
hypotheses_list: List of Hypothesis.
Returns:
A list of strings.
"""
for ind in range(len(hypotheses_list)):
# Extract the integer encoded hypothesis
prediction = hypotheses_list[ind].y_sequence
if type(prediction) != list:
prediction = prediction.tolist()
# RNN-T sample level is already preprocessed by implicit RNNT decoding
# Simply remove any blank and possibly big blank tokens
if self.big_blank_durations is not None: # multi-blank RNNT
num_extra_outputs = len(self.big_blank_durations)
prediction = [p for p in prediction if p < self.blank_id - num_extra_outputs]
elif self.durations is not None: # TDT model.
prediction = [p for p in prediction if p < self.blank_id]
else: # standard RNN-T
prediction = [p for p in prediction if p != self.blank_id]
# De-tokenize the integer tokens; if not computing timestamps
if self.compute_timestamps is True:
# keep the original predictions, wrap with the number of repetitions per token and alignments
# this is done so that `rnnt_decoder_predictions_tensor()` can process this hypothesis
# in order to compute exact time stamps.
alignments = copy.deepcopy(hypotheses_list[ind].alignments)
token_repetitions = [1] * len(alignments) # preserve number of repetitions per token
hypothesis = (prediction, alignments, token_repetitions)
else:
hypothesis = self.decode_tokens_to_str(prediction)
# TODO: remove
# collapse leading spaces before . , ? for PC models
hypothesis = re.sub(r'(\s+)([\.\,\?])', r'\2', hypothesis)
if self.compute_hypothesis_token_set:
hypotheses_list[ind].tokens = self.decode_ids_to_tokens(prediction)
# De-tokenize the integer tokens
hypotheses_list[ind].text = hypothesis
return hypotheses_list
def compute_confidence(self, hypotheses_list: List[Hypothesis]) -> List[Hypothesis]:
"""
Computes high-level (per-token and/or per-word) confidence scores for a list of hypotheses.
Assumes that `frame_confidence` is present in the hypotheses.
Args:
hypotheses_list: List of Hypothesis.
Returns:
A list of hypotheses with high-level confidence scores.
"""
if self.exclude_blank_from_confidence:
for hyp in hypotheses_list:
hyp.token_confidence = hyp.non_blank_frame_confidence
else:
for hyp in hypotheses_list:
offset = 0
token_confidence = []
if len(hyp.timestep) > 0:
for ts, te in zip(hyp.timestep, hyp.timestep[1:] + [len(hyp.frame_confidence)]):
if ts != te:
# <blank> tokens are considered to belong to the last non-blank token, if any.
token_confidence.append(
self._aggregate_confidence(
[hyp.frame_confidence[ts][offset]]
+ [fc[0] for fc in hyp.frame_confidence[ts + 1 : te]]
)
)
offset = 0
else:
token_confidence.append(hyp.frame_confidence[ts][offset])
offset += 1
hyp.token_confidence = token_confidence
if self.preserve_word_confidence:
for hyp in hypotheses_list:
hyp.word_confidence = self._aggregate_token_confidence(hyp)
return hypotheses_list
@abstractmethod
def decode_tokens_to_str(self, tokens: List[int]) -> str:
"""
Implemented by subclass in order to decoder a token id list into a string.
Args:
tokens: List of int representing the token ids.
Returns:
A decoded string.
"""
raise NotImplementedError()
@abstractmethod
def decode_ids_to_tokens(self, tokens: List[int]) -> List[str]:
"""
Implemented by subclass in order to decode a token id list into a token list.
A token list is the string representation of each token id.
Args:
tokens: List of int representing the token ids.
Returns:
A list of decoded tokens.
"""
raise NotImplementedError()
@abstractmethod
def decode_tokens_to_lang(self, tokens: List[int]) -> str:
"""
Implemented by subclass in order to
compute the most likely language ID (LID) string given the tokens.
Args:
tokens: List of int representing the token ids.
Returns:
A decoded LID string.
"""
raise NotImplementedError()
@abstractmethod
def decode_ids_to_langs(self, tokens: List[int]) -> List[str]:
"""
Implemented by subclass in order to
decode a token id list into language ID (LID) list.
Args:
tokens: List of int representing the token ids.
Returns:
A list of decoded LIDS.
"""
raise NotImplementedError()
def update_joint_fused_batch_size(self):
if self.joint_fused_batch_size is None:
# do nothing and let the Joint itself handle setting up of the fused batch
return
if not hasattr(self.decoding.joint, 'set_fused_batch_size'):
logging.warning(
"The joint module does not have `set_fused_batch_size(int)` as a setter function.\n"
"Ignoring update of joint fused batch size."
)
return
if not hasattr(self.decoding.joint, 'set_fuse_loss_wer'):
logging.warning(
"The joint module does not have `set_fuse_loss_wer(bool, RNNTLoss, RNNTWER)` "
"as a setter function.\n"
"Ignoring update of joint fused batch size."
)
return
if self.joint_fused_batch_size > 0:
self.decoding.joint.set_fused_batch_size(self.joint_fused_batch_size)
else:
logging.info("Joint fused batch size <= 0; Will temporarily disable fused batch step in the Joint.")
self.decoding.joint.set_fuse_loss_wer(False)
def compute_rnnt_timestamps(self, hypothesis: Hypothesis, timestamp_type: str = "all"):
assert timestamp_type in ['char', 'word', 'all']
# Unpack the temporary storage
decoded_prediction, alignments, token_repetitions = hypothesis.text
# Retrieve offsets
char_offsets = word_offsets = None
char_offsets = self._compute_offsets(hypothesis, token_repetitions, self.blank_id)
# finally, set the flattened decoded predictions to text field for later text decoding
hypothesis.text = decoded_prediction
# Assert number of offsets and hypothesis tokens are 1:1 match.
num_flattened_tokens = 0
for t in range(len(char_offsets)):
# Subtract one here for the extra RNNT BLANK token emitted to designate "End of timestep"
num_flattened_tokens += len(char_offsets[t]['char']) - 1
if num_flattened_tokens != len(hypothesis.text):
raise ValueError(
f"`char_offsets`: {char_offsets} and `processed_tokens`: {hypothesis.text}"
" have to be of the same length, but are: "
f"`len(offsets)`: {len(char_offsets)} and `len(processed_tokens)`:"
f" {len(hypothesis.text)}"
)
encoded_char_offsets = copy.deepcopy(char_offsets)
# Correctly process the token ids to chars/subwords.
for i, offsets in enumerate(char_offsets):
decoded_chars = []
for char in offsets['char'][:-1]: # ignore the RNNT Blank token at end of every timestep with -1 subset
decoded_chars.append(self.decode_tokens_to_str([int(char)]))
char_offsets[i]["char"] = decoded_chars
# detect char vs subword models
lens = []
for v in char_offsets:
tokens = v["char"]
# each token may be either 1 unicode token or multiple unicode token
# for character based models, only 1 token is used
# for subword, more than one token can be used.
# Computing max, then summing up total lens is a test to check for char vs subword
# For char models, len(lens) == sum(lens)
# but this is violated for subword models.
max_len = max(len(c) for c in tokens)
lens.append(max_len)
# array of one or more chars implies subword based model with multiple char emitted per TxU step (via subword)
if sum(lens) > len(lens):
text_type = 'subword'
else:
# full array of ones implies character based model with 1 char emitted per TxU step
text_type = 'char'
# retrieve word offsets from character offsets
word_offsets = None
if timestamp_type in ['word', 'all']:
if text_type == 'char':
word_offsets = self._get_word_offsets_chars(char_offsets, word_delimiter_char=self.word_seperator)
else:
# utilize the copy of char offsets with the correct integer ids for tokens
# so as to avoid tokenize -> detokenize -> compare -> merge steps.
word_offsets = self._get_word_offsets_subwords_sentencepiece(
encoded_char_offsets,
hypothesis,
decode_ids_to_tokens=self.decode_ids_to_tokens,
decode_tokens_to_str=self.decode_tokens_to_str,
)
# attach results
if len(hypothesis.timestep) > 0:
timestep_info = hypothesis.timestep
else:
timestep_info = []
# Setup defaults
hypothesis.timestep = {"timestep": timestep_info}
# Add char / subword time stamps
if char_offsets is not None and timestamp_type in ['char', 'all']:
hypothesis.timestep['char'] = char_offsets
# Add word time stamps
if word_offsets is not None and timestamp_type in ['word', 'all']:
hypothesis.timestep['word'] = word_offsets
# Convert the flattened token indices to text
hypothesis.text = self.decode_tokens_to_str(hypothesis.text)
return hypothesis
@staticmethod
def _compute_offsets(
hypothesis: Hypothesis, token_repetitions: List[int], rnnt_token: int
) -> List[Dict[str, Union[str, int]]]:
"""
        Utility method that calculates the individual time indices where a token starts and ends.
Args:
hypothesis: A Hypothesis object that contains `text` field that holds the character / subword token
emitted at every time step after rnnt collapse.
token_repetitions: A list of ints representing the number of repetitions of each emitted token.
rnnt_token: The integer of the rnnt blank token used during rnnt collapse.
        Returns:
            A list of dictionaries, each containing "char", "start_offset" and "end_offset".
        """
start_index = 0
# If the exact timestep information is available, utilize the 1st non-rnnt blank token timestep
# as the start index.
if hypothesis.timestep is not None and len(hypothesis.timestep) > 0:
start_index = max(0, hypothesis.timestep[0] - 1)
# Construct the start and end indices brackets
end_indices = np.asarray(token_repetitions).cumsum()
start_indices = np.concatenate(([start_index], end_indices[:-1]))
# Process the TxU dangling alignment tensor, containing pairs of (logits, label)
alignment_labels = [al_logits_labels for al_logits_labels in hypothesis.text[1]]
for t in range(len(alignment_labels)):
for u in range(len(alignment_labels[t])):
alignment_labels[t][u] = alignment_labels[t][u][1] # pick label from (logit, label) tuple
# Merge the results per token into a list of dictionaries
offsets = [
{"char": a, "start_offset": s, "end_offset": e}
for a, s, e in zip(alignment_labels, start_indices, end_indices)
]
# Filter out RNNT token (blank at [t][0] position). This is because blank can only occur at end of a
# time step for RNNT, so if 0th token is blank, then that timestep is skipped.
offsets = list(filter(lambda offsets: offsets["char"][0] != rnnt_token, offsets))
return offsets
@staticmethod
def _get_word_offsets_chars(
offsets: Dict[str, Union[str, float]], word_delimiter_char: str = " "
) -> Dict[str, Union[str, float]]:
"""
Utility method which constructs word time stamps out of character time stamps.
References:
This code is a port of the Hugging Face code for word time stamp construction.
Args:
offsets: A list of dictionaries, each containing "char", "start_offset" and "end_offset".
word_delimiter_char: Character token that represents the word delimiter. By default, " ".
Returns:
A list of dictionaries containing the word offsets. Each item contains "word", "start_offset" and
"end_offset".
"""
word_offsets = []
last_state = "SPACE"
word = ""
start_offset = 0
end_offset = 0
for i, offset in enumerate(offsets):
chars = offset["char"]
for char in chars:
state = "SPACE" if char == word_delimiter_char else "WORD"
if state == last_state:
# If we are in the same state as before, we simply repeat what we've done before
end_offset = offset["end_offset"]
word += char
else:
# Switching state
if state == "SPACE":
# Finishing a word
word_offsets.append({"word": word, "start_offset": start_offset, "end_offset": end_offset})
else:
# Starting a new word
start_offset = offset["start_offset"]
end_offset = offset["end_offset"]
word = char
last_state = state
if last_state == "WORD":
word_offsets.append({"word": word, "start_offset": start_offset, "end_offset": end_offset})
return word_offsets
@staticmethod
def _get_word_offsets_subwords_sentencepiece(
offsets: Dict[str, Union[str, float]],
hypothesis: Hypothesis,
decode_ids_to_tokens: Callable[[List[int]], str],
decode_tokens_to_str: Callable[[List[int]], str],
) -> Dict[str, Union[str, float]]:
"""
Utility method which constructs word time stamps out of sub-word time stamps.
**Note**: Only supports Sentencepiece based tokenizers !
Args:
offsets: A list of dictionaries, each containing "char", "start_offset" and "end_offset".
hypothesis: Hypothesis object that contains `text` field, where each token is a sub-word id
after rnnt collapse.
decode_ids_to_tokens: A Callable function that accepts a list of integers and maps it to a sub-word.
decode_tokens_to_str: A Callable function that accepts a list of integers and maps it to text / str.
Returns:
A list of dictionaries containing the word offsets. Each item contains "word", "start_offset" and
"end_offset".
"""
word_offsets = []
built_token = []
previous_token_index = 0
# For every offset token
for i, offset in enumerate(offsets):
# For every subword token in offset token list (ignoring the RNNT Blank token at the end)
for char in offset['char'][:-1]:
char = int(char)
# Compute the sub-word text representation, and the decoded text (stripped of sub-word markers).
token = decode_ids_to_tokens([char])[0]
token_text = decode_tokens_to_str([char])
# It is a sub-word token, or contains an identifier at the beginning such as _ or ## that was stripped
# after forcing partial text conversion of the token.
if token != token_text:
# If there are any partially or fully built sub-word token ids, construct to text.
# Note: This is "old" subword, that occurs *after* current sub-word has started.
if built_token:
word_offsets.append(
{
"word": decode_tokens_to_str(built_token),
"start_offset": offsets[previous_token_index]["start_offset"],
"end_offset": offsets[i]["start_offset"],
}
)
# Prepare list of new sub-word ids
built_token.clear()
built_token.append(char)
previous_token_index = i
else:
# If the token does not contain any sub-word start mark, then the sub-word has not completed yet
# Append to current sub-word list.
built_token.append(char)
# Inject the start offset of the first token to word offsets
        # This is because we always delay the injection of the first sub-word due to the loop
# condition and check whether built token is ready or not.
# Therefore without this forced injection, the start_offset appears as off by 1.
# This should only be done when these arrays contain more than one element.
if offsets and word_offsets:
word_offsets[0]["start_offset"] = offsets[0]["start_offset"]
# If there are any remaining tokens left, inject them all into the final word offset.
# The start offset of this token is the start time of the next token to process.
# The end offset of this token is the end time of the last token from offsets.
# Note that built_token is a flat list; but offsets contains a nested list which
# may have different dimensionality.
# As such, we can't rely on the length of the list of built_token to index offsets.
if built_token:
# start from the previous token index as this hasn't been committed to word_offsets yet
# if we still have content in built_token
start_offset = offsets[previous_token_index]["start_offset"]
word_offsets.append(
{
"word": decode_tokens_to_str(built_token),
"start_offset": start_offset,
"end_offset": offsets[-1]["end_offset"],
}
)
built_token.clear()
return word_offsets
class RNNTDecoding(AbstractRNNTDecoding):
"""
Used for performing RNN-T auto-regressive decoding of the Decoder+Joint network given the encoder state.
Args:
decoding_cfg: A dict-like object which contains the following key-value pairs.
strategy: str value which represents the type of decoding that can occur.
Possible values are :
- greedy, greedy_batch (for greedy decoding).
- beam, tsd, alsd (for beam search decoding).
compute_hypothesis_token_set: A bool flag, which determines whether to compute a list of decoded
tokens as well as the decoded string. Default is False in order to avoid double decoding
unless required.
preserve_alignments: Bool flag which preserves the history of logprobs generated during
decoding (sample / batched). When set to true, the Hypothesis will contain
the non-null value for `logprobs` in it. Here, `alignments` is a List of List of
Tuple(Tensor (of length V + 1), Tensor(scalar, label after argmax)).
In order to obtain this hypothesis, please utilize `rnnt_decoder_predictions_tensor` function
with the `return_hypotheses` flag set to True.
The length of the list corresponds to the Acoustic Length (T).
Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more targets from a vocabulary.
U is the number of target tokens for the current timestep Ti.
confidence_cfg: A dict-like object which contains the following key-value pairs related to confidence
scores. In order to obtain hypotheses with confidence scores, please utilize
`rnnt_decoder_predictions_tensor` function with the `preserve_frame_confidence` flag set to True.
preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores
generated during decoding (sample / batched). When set to true, the Hypothesis will contain
                the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of List of floats.
The length of the list corresponds to the Acoustic Length (T).
Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more confidence scores.
U is the number of target tokens for the current timestep Ti.
preserve_token_confidence: Bool flag which preserves the history of per-token confidence scores
generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
the non-null value for `token_confidence` in it. Here, `token_confidence` is a List of floats.
The length of the list corresponds to the number of recognized tokens.
preserve_word_confidence: Bool flag which preserves the history of per-word confidence scores
generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
the non-null value for `word_confidence` in it. Here, `word_confidence` is a List of floats.
The length of the list corresponds to the number of recognized words.
exclude_blank: Bool flag indicating that blank token confidence scores are to be excluded
from the `token_confidence`.
aggregation: Which aggregation type to use for collapsing per-token confidence into per-word confidence.
Valid options are `mean`, `min`, `max`, `prod`.
method_cfg: A dict-like object which contains the method name and settings to compute per-frame
confidence scores.
name: The method name (str).
Supported values:
- 'max_prob' for using the maximum token probability as a confidence.
- 'entropy' for using a normalized entropy of a log-likelihood vector.
entropy_type: Which type of entropy to use (str).
Used if confidence_method_cfg.name is set to `entropy`.
Supported values:
- 'gibbs' for the (standard) Gibbs entropy. If the alpha (ฮฑ) is provided,
the formula is the following: H_ฮฑ = -sum_i((p^ฮฑ_i)*log(p^ฮฑ_i)).
Note that for this entropy, the alpha should comply the following inequality:
(log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= ฮฑ <= (1+log(V-1))/log(V-1)
where V is the model vocabulary size.
- 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
Tsallis entropy formula is the following: H_ฮฑ = 1/(ฮฑ-1)*(1-sum_i(p^ฮฑ_i)),
where ฮฑ is a parameter. When ฮฑ == 1, it works like the Gibbs entropy.
More: https://en.wikipedia.org/wiki/Tsallis_entropy
- 'renyi' for the Rรฉnyi entropy.
Rรฉnyi entropy formula is the following: H_ฮฑ = 1/(1-ฮฑ)*log_2(sum_i(p^ฮฑ_i)),
where ฮฑ is a parameter. When ฮฑ == 1, it works like the Gibbs entropy.
More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
alpha: Power scale for logsoftmax (ฮฑ for entropies). Here we restrict it to be > 0.
When the alpha equals one, scaling is not applied to 'max_prob',
and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
entropy_norm: A mapping of the entropy value to the interval [0,1].
Supported values:
- 'lin' for using the linear mapping.
- 'exp' for using exponential mapping with linear shift.
The config may further contain the following sub-dictionaries:
"greedy":
max_symbols: int, describing the maximum number of target tokens to decode per
timestep during greedy decoding. Setting to larger values allows longer sentences
to be decoded, at the cost of increased execution time.
preserve_frame_confidence: Same as above, overrides above value.
confidence_method_cfg: Same as above, overrides confidence_cfg.method_cfg.
"beam":
beam_size: int, defining the beam size for beam search. Must be >= 1.
If beam_size == 1, will perform cached greedy search. This might be slightly different
results compared to the greedy search above.
score_norm: optional bool, whether to normalize the returned beam score in the hypotheses.
Set to True by default.
return_best_hypothesis: optional bool, whether to return just the best hypothesis or all of the
hypotheses after beam search has concluded. This flag is set by default.
tsd_max_sym_exp: optional int, determines number of symmetric expansions of the target symbols
per timestep of the acoustic model. Larger values will allow longer sentences to be decoded,
at increased cost to execution time.
alsd_max_target_len: optional int or float, determines the potential maximum target sequence length.
If an integer is provided, it can decode sequences of that particular maximum length.
If a float is provided, it can decode sequences of int(alsd_max_target_len * seq_len),
where seq_len is the length of the acoustic model output (T).
NOTE:
If a float is provided, it can be greater than 1!
By default, a float of 2.0 is used so that a target sequence can be at most twice
as long as the acoustic model output length T.
maes_num_steps: Number of adaptive steps to take. From the paper, 2 steps is generally sufficient,
and can be reduced to 1 to improve decoding speed while sacrificing some accuracy. int > 0.
            maes_prefix_alpha: Maximum prefix length in prefix search. Must be an integer, and it is advised to keep this as 1
in order to reduce expensive beam search cost later. int >= 0.
maes_expansion_beta: Maximum number of prefix expansions allowed, in addition to the beam size.
Effectively, the number of hypothesis = beam_size + maes_expansion_beta. Must be an int >= 0,
and affects the speed of inference since large values will perform large beam search in the next step.
maes_expansion_gamma: Float pruning threshold used in the prune-by-value step when computing the expansions.
The default (2.3) is selected from the paper. It performs a comparison (max_log_prob - gamma <= log_prob[v])
where v is all vocabulary indices in the Vocab set and max_log_prob is the "most" likely token to be
predicted. Gamma therefore provides a margin of additional tokens which can be potential candidates for
expansion apart from the "most likely" candidate.
Lower values will reduce the number of expansions (by increasing pruning-by-value, thereby improving speed
but hurting accuracy). Higher values will increase the number of expansions (by reducing pruning-by-value,
thereby reducing speed but potentially improving accuracy). This is a hyper parameter to be experimentally
tuned on a validation set.
softmax_temperature: Scales the logits of the joint prior to computing log_softmax.
decoder: The Decoder/Prediction network module.
joint: The Joint network module.
vocabulary: The vocabulary (excluding the RNNT blank token) which will be used for decoding.
"""
def __init__(
self, decoding_cfg, decoder, joint, vocabulary,
):
# we need to ensure blank is the last token in the vocab for the case of RNNT and Multi-blank RNNT.
blank_id = len(vocabulary) + joint.num_extra_outputs
if hasattr(decoding_cfg, 'model_type') and decoding_cfg.model_type == 'tdt':
blank_id = len(vocabulary)
self.labels_map = dict([(i, vocabulary[i]) for i in range(len(vocabulary))])
super(RNNTDecoding, self).__init__(
decoding_cfg=decoding_cfg, decoder=decoder, joint=joint, blank_id=blank_id,
)
if isinstance(self.decoding, beam_decode.BeamRNNTInfer):
self.decoding.set_decoding_type('char')
def _aggregate_token_confidence(self, hypothesis: Hypothesis) -> List[float]:
"""
Implemented by subclass in order to aggregate token confidence to a word-level confidence.
Args:
hypothesis: Hypothesis
Returns:
A list of word-level confidence scores.
"""
return self._aggregate_token_confidence_chars(hypothesis.words, hypothesis.token_confidence)
def decode_tokens_to_str(self, tokens: List[int]) -> str:
"""
        Implemented by subclass in order to decode a token id list into a string.
Args:
tokens: List of int representing the token ids.
Returns:
A decoded string.
"""
hypothesis = ''.join(self.decode_ids_to_tokens(tokens))
return hypothesis
def decode_ids_to_tokens(self, tokens: List[int]) -> List[str]:
"""
Implemented by subclass in order to decode a token id list into a token list.
A token list is the string representation of each token id.
Args:
tokens: List of int representing the token ids.
Returns:
A list of decoded tokens.
"""
token_list = [self.labels_map[c] for c in tokens if c < self.blank_id - self.num_extra_outputs]
return token_list
def decode_tokens_to_lang(self, tokens: List[int]) -> str:
"""
Compute the most likely language ID (LID) string given the tokens.
Args:
tokens: List of int representing the token ids.
Returns:
A decoded LID string.
"""
lang = self.tokenizer.ids_to_lang(tokens)
return lang
def decode_ids_to_langs(self, tokens: List[int]) -> List[str]:
"""
Decode a token id list into language ID (LID) list.
Args:
tokens: List of int representing the token ids.
Returns:
A list of decoded LIDS.
"""
lang_list = self.tokenizer.ids_to_text_and_langs(tokens)
return lang_list
class RNNTWER(Metric):
"""
This metric computes numerator and denominator for Overall Word Error Rate (WER) between prediction and reference texts.
When doing distributed training/evaluation the result of res=WER(predictions, targets, target_lengths) calls
will be all-reduced between all workers using SUM operations.
Here contains two numbers res=[wer_numerator, wer_denominator]. WER=wer_numerator/wer_denominator.
If used with PytorchLightning LightningModule, include wer_numerator and wer_denominators inside validation_step results.
    Then aggregate (sum) them at the end of the validation epoch to correctly compute validation WER.
Example:
def validation_step(self, batch, batch_idx):
...
wer_num, wer_denom = self.__wer(predictions, transcript, transcript_len)
self.val_outputs = {'val_loss': loss_value, 'val_wer_num': wer_num, 'val_wer_denom': wer_denom}
return self.val_outputs
def on_validation_epoch_end(self):
...
wer_num = torch.stack([x['val_wer_num'] for x in self.val_outputs]).sum()
wer_denom = torch.stack([x['val_wer_denom'] for x in self.val_outputs]).sum()
tensorboard_logs = {'validation_loss': val_loss_mean, 'validation_avg_wer': wer_num / wer_denom}
self.val_outputs.clear() # free memory
return {'val_loss': val_loss_mean, 'log': tensorboard_logs}
Args:
decoding: RNNTDecoding object that will perform autoregressive decoding of the RNNT model.
batch_dim_index: Index of the batch dimension.
        use_cer: Whether to use Character Error Rate instead of Word Error Rate.
log_prediction: Whether to log a single decoded sample per call.
Returns:
        res: a tuple of 3 zero dimensional float32 ``torch.Tensor`` objects: a WER score, a sum of Levenshtein's
distances for all prediction - reference pairs, total number of words in all references.
"""
full_state_update = True
def __init__(
self, decoding: RNNTDecoding, batch_dim_index=0, use_cer=False, log_prediction=True, dist_sync_on_step=False
):
super(RNNTWER, self).__init__(dist_sync_on_step=dist_sync_on_step)
self.decoding = decoding
self.batch_dim_index = batch_dim_index
self.use_cer = use_cer
self.log_prediction = log_prediction
self.blank_id = self.decoding.blank_id
self.labels_map = self.decoding.labels_map
self.add_state("scores", default=torch.tensor(0), dist_reduce_fx='sum', persistent=False)
self.add_state("words", default=torch.tensor(0), dist_reduce_fx='sum', persistent=False)
def update(
self,
encoder_output: torch.Tensor,
encoded_lengths: torch.Tensor,
targets: torch.Tensor,
target_lengths: torch.Tensor,
) -> torch.Tensor:
words = 0
scores = 0
references = []
with torch.no_grad():
# prediction_cpu_tensor = tensors[0].long().cpu()
targets_cpu_tensor = targets.long().cpu()
targets_cpu_tensor = move_dimension_to_the_front(targets_cpu_tensor, self.batch_dim_index)
tgt_lenths_cpu_tensor = target_lengths.long().cpu()
# iterate over batch
for ind in range(targets_cpu_tensor.shape[0]):
tgt_len = tgt_lenths_cpu_tensor[ind].item()
target = targets_cpu_tensor[ind][:tgt_len].numpy().tolist()
reference = self.decoding.decode_tokens_to_str(target)
references.append(reference)
hypotheses, _ = self.decoding.rnnt_decoder_predictions_tensor(encoder_output, encoded_lengths)
if self.log_prediction:
logging.info(f"\n")
logging.info(f"reference :{references[0]}")
logging.info(f"predicted :{hypotheses[0]}")
for h, r in zip(hypotheses, references):
if self.use_cer:
h_list = list(h)
r_list = list(r)
else:
h_list = h.split()
r_list = r.split()
words += len(r_list)
# Compute Levenshtein's distance
scores += editdistance.eval(h_list, r_list)
self.scores += torch.tensor(scores, device=self.scores.device, dtype=self.scores.dtype)
self.words += torch.tensor(words, device=self.words.device, dtype=self.words.dtype)
# return torch.tensor([scores, words]).to(predictions.device)
def compute(self):
wer = self.scores.float() / self.words
return wer, self.scores.detach(), self.words.detach()
@dataclass
class RNNTDecodingConfig:
model_type: str = "rnnt" # one of "rnnt", "multiblank" or "tdt"
strategy: str = "greedy_batch"
compute_hypothesis_token_set: bool = False
# preserve decoding alignments
preserve_alignments: Optional[bool] = None
# confidence config
confidence_cfg: ConfidenceConfig = ConfidenceConfig()
# RNNT Joint fused batch size
fused_batch_size: Optional[int] = None
# compute RNNT time stamps
compute_timestamps: Optional[bool] = None
# compute language IDs
compute_langs: bool = False
    # token representing word separator
word_seperator: str = " "
# type of timestamps to calculate
rnnt_timestamp_type: str = "all" # can be char, word or all for both
# greedy decoding config
greedy: greedy_decode.GreedyRNNTInferConfig = greedy_decode.GreedyRNNTInferConfig()
# beam decoding config
beam: beam_decode.BeamRNNTInferConfig = beam_decode.BeamRNNTInferConfig(beam_size=4)
# can be used to change temperature for decoding
temperature: float = 1.0
[end of nemo/collections/asr/metrics/rnnt_wer.py]
[start of nemo/collections/asr/metrics/wer.py]
# Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import re
from abc import abstractmethod
from dataclasses import dataclass, is_dataclass
from typing import Callable, Dict, List, Optional, Tuple, Union
import editdistance
import jiwer
import numpy as np
import torch
from omegaconf import DictConfig, OmegaConf
from torchmetrics import Metric
from nemo.collections.asr.parts.submodules import ctc_beam_decoding, ctc_greedy_decoding
from nemo.collections.asr.parts.utils.asr_confidence_utils import ConfidenceConfig, ConfidenceMixin
from nemo.collections.asr.parts.utils.rnnt_utils import Hypothesis, NBestHypotheses
from nemo.utils import logging, logging_mode
__all__ = ['word_error_rate', 'word_error_rate_detail', 'WER', 'move_dimension_to_the_front']
def word_error_rate(hypotheses: List[str], references: List[str], use_cer=False) -> float:
"""
Computes Average Word Error rate between two texts represented as
corresponding lists of string.
Hypotheses and references must have same length.
Args:
hypotheses (list): list of hypotheses
references(list) : list of references
use_cer (bool): set True to enable cer
Returns:
wer (float): average word error rate
"""
scores = 0
words = 0
if len(hypotheses) != len(references):
raise ValueError(
"In word error rate calculation, hypotheses and reference"
" lists must have the same number of elements. But I got:"
"{0} and {1} correspondingly".format(len(hypotheses), len(references))
)
for h, r in zip(hypotheses, references):
if use_cer:
h_list = list(h)
r_list = list(r)
else:
h_list = h.split()
r_list = r.split()
words += len(r_list)
# May deprecate using editdistance in future release for here and rest of codebase
# once we confirm jiwer is reliable.
scores += editdistance.eval(h_list, r_list)
if words != 0:
wer = 1.0 * scores / words
else:
wer = float('inf')
return wer
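Concretely, the averaging above (total token-level Levenshtein distance divided by total reference words) can be reproduced with a stdlib-only sketch; `levenshtein` below is a hypothetical stand-in for `editdistance.eval`:

```python
def levenshtein(a, b):
    # classic dynamic-programming edit distance over two sequences
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, start=1):
        cur = [i]
        for j, y in enumerate(b, start=1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (x != y)))    # substitution / match
        prev = cur
    return prev[-1]

def simple_wer(hypotheses, references):
    # mirrors word_error_rate: sum of per-pair distances over total reference words
    scores = sum(levenshtein(h.split(), r.split()) for h, r in zip(hypotheses, references))
    words = sum(len(r.split()) for r in references)
    return scores / words if words else float("inf")
```

For instance, `simple_wer(["the cat sat"], ["the cat sat on"])` counts one deletion against four reference words, i.e. 0.25.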
def word_error_rate_detail(
hypotheses: List[str], references: List[str], use_cer=False
) -> Tuple[float, int, float, float, float]:
"""
Computes Average Word Error Rate with details (insertion rate, deletion rate, substitution rate)
between two texts represented as corresponding lists of string.
Hypotheses and references must have same length.
Args:
hypotheses (list): list of hypotheses
references(list) : list of references
use_cer (bool): set True to enable cer
Returns:
wer (float): average word error rate
        words (int): Total number of words/characters of given reference texts
ins_rate (float): average insertion error rate
del_rate (float): average deletion error rate
sub_rate (float): average substitution error rate
"""
scores = 0
words = 0
ops_count = {'substitutions': 0, 'insertions': 0, 'deletions': 0}
if len(hypotheses) != len(references):
raise ValueError(
"In word error rate calculation, hypotheses and reference"
" lists must have the same number of elements. But I got:"
"{0} and {1} correspondingly".format(len(hypotheses), len(references))
)
for h, r in zip(hypotheses, references):
if use_cer:
h_list = list(h)
r_list = list(r)
else:
h_list = h.split()
r_list = r.split()
# To get rid of the issue that jiwer does not allow empty string
if len(r_list) == 0:
if len(h_list) != 0:
errors = len(h_list)
ops_count['insertions'] += errors
else:
errors = 0
else:
if use_cer:
measures = jiwer.cer(r, h, return_dict=True)
else:
measures = jiwer.compute_measures(r, h)
errors = measures['insertions'] + measures['deletions'] + measures['substitutions']
ops_count['insertions'] += measures['insertions']
ops_count['deletions'] += measures['deletions']
ops_count['substitutions'] += measures['substitutions']
scores += errors
words += len(r_list)
if words != 0:
wer = 1.0 * scores / words
ins_rate = 1.0 * ops_count['insertions'] / words
del_rate = 1.0 * ops_count['deletions'] / words
sub_rate = 1.0 * ops_count['substitutions'] / words
else:
wer, ins_rate, del_rate, sub_rate = float('inf'), float('inf'), float('inf'), float('inf')
return wer, words, ins_rate, del_rate, sub_rate
def word_error_rate_per_utt(hypotheses: List[str], references: List[str], use_cer=False) -> Tuple[List[float], float]:
"""
Computes Word Error Rate per utterance and the average WER
between two texts represented as corresponding lists of string.
Hypotheses and references must have same length.
Args:
hypotheses (list): list of hypotheses
references(list) : list of references
use_cer (bool): set True to enable cer
Returns:
wer_per_utt (List[float]): word error rate per utterance
avg_wer (float): average word error rate
"""
scores = 0
words = 0
wer_per_utt = []
if len(hypotheses) != len(references):
raise ValueError(
"In word error rate calculation, hypotheses and reference"
" lists must have the same number of elements. But I got:"
"{0} and {1} correspondingly".format(len(hypotheses), len(references))
)
for h, r in zip(hypotheses, references):
if use_cer:
h_list = list(h)
r_list = list(r)
else:
h_list = h.split()
r_list = r.split()
# To get rid of the issue that jiwer does not allow empty string
if len(r_list) == 0:
if len(h_list) != 0:
errors = len(h_list)
wer_per_utt.append(float('inf'))
else:
# both reference and hypothesis are empty: a perfect match,
# and `errors` must still be defined for the `scores` update below
errors = 0
wer_per_utt.append(0.0)
else:
if use_cer:
measures = jiwer.cer(r, h, return_dict=True)
er = measures['cer']
else:
measures = jiwer.compute_measures(r, h)
er = measures['wer']
errors = measures['insertions'] + measures['deletions'] + measures['substitutions']
wer_per_utt.append(er)
scores += errors
words += len(r_list)
if words != 0:
avg_wer = 1.0 * scores / words
else:
avg_wer = float('inf')
return wer_per_utt, avg_wer
def move_dimension_to_the_front(tensor, dim_index):
all_dims = list(range(tensor.ndim))
return tensor.permute(*([dim_index] + all_dims[:dim_index] + all_dims[dim_index + 1 :]))
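For clarity, the permutation built above simply moves `dim_index` to position 0 and keeps the remaining axes in their original order. A minimal sketch of just the index math, without torch (hypothetical helper name):

```python
def front_permutation(ndim, dim_index):
    # Same axis ordering that move_dimension_to_the_front passes to permute().
    dims = list(range(ndim))
    return [dim_index] + dims[:dim_index] + dims[dim_index + 1:]

front_permutation(3, 1)  # [1, 0, 2]: a [B, T, V] tensor becomes [T, B, V]
```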
class AbstractCTCDecoding(ConfidenceMixin):
"""
Used for performing CTC auto-regressive / non-auto-regressive decoding of the logprobs.
Args:
decoding_cfg: A dict-like object which contains the following key-value pairs.
strategy: str value which represents the type of decoding that can occur.
Possible values are :
- greedy (for greedy decoding).
- beam (for DeepSpeed KenLM based decoding).
compute_timestamps: A bool flag, which determines whether to compute the character/subword, or
word based timestamp mapping the output log-probabilities to discrete intervals of timestamps.
The timestamps will be available in the returned Hypothesis.timestep as a dictionary.
ctc_timestamp_type: A str value, which represents the types of timestamps that should be calculated.
Can take the following values - "char" for character/subword time stamps, "word" for word level
time stamps and "all" (default), for both character level and word level time stamps.
word_seperator: Str token representing the separator between words.
preserve_alignments: Bool flag which preserves the history of logprobs generated during
decoding (sample / batched). When set to true, the Hypothesis will contain
the non-null value for `logprobs` in it. Here, `logprobs` is a torch.Tensors.
confidence_cfg: A dict-like object which contains the following key-value pairs related to confidence
scores. In order to obtain hypotheses with confidence scores, please utilize
`ctc_decoder_predictions_tensor` function with the `preserve_frame_confidence` flag set to True.
preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores
generated during decoding. When set to true, the Hypothesis will contain
the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of floats.
preserve_token_confidence: Bool flag which preserves the history of per-token confidence scores
generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
the non-null value for `token_confidence` in it. Here, `token_confidence` is a List of floats.
The length of the list corresponds to the number of recognized tokens.
preserve_word_confidence: Bool flag which preserves the history of per-word confidence scores
generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
the non-null value for `word_confidence` in it. Here, `word_confidence` is a List of floats.
The length of the list corresponds to the number of recognized words.
exclude_blank: Bool flag indicating that blank token confidence scores are to be excluded
from the `token_confidence`.
aggregation: Which aggregation type to use for collapsing per-token confidence into per-word confidence.
Valid options are `mean`, `min`, `max`, `prod`.
method_cfg: A dict-like object which contains the method name and settings to compute per-frame
confidence scores.
name: The method name (str).
Supported values:
- 'max_prob' for using the maximum token probability as a confidence.
- 'entropy' for using a normalized entropy of a log-likelihood vector.
entropy_type: Which type of entropy to use (str).
Used if confidence_method_cfg.name is set to `entropy`.
Supported values:
- 'gibbs' for the (standard) Gibbs entropy. If the alpha (α) is provided,
the formula is the following: H_α = -sum_i((p^α_i)*log(p^α_i)).
Note that for this entropy, the alpha should comply with the following inequality:
(log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= α <= (1+log(V-1))/log(V-1)
where V is the model vocabulary size.
- 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
Tsallis entropy formula is the following: H_α = 1/(α-1)*(1-sum_i(p^α_i)),
where α is a parameter. When α == 1, it works like the Gibbs entropy.
More: https://en.wikipedia.org/wiki/Tsallis_entropy
- 'renyi' for the Rényi entropy.
Rényi entropy formula is the following: H_α = 1/(1-α)*log_2(sum_i(p^α_i)),
where α is a parameter. When α == 1, it works like the Gibbs entropy.
More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
alpha: Power scale for logsoftmax (α for entropies). Here we restrict it to be > 0.
When the alpha equals one, scaling is not applied to 'max_prob',
and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
entropy_norm: A mapping of the entropy value to the interval [0,1].
Supported values:
- 'lin' for using the linear mapping.
- 'exp' for using exponential mapping with linear shift.
batch_dim_index: Index of the batch dimension of ``targets`` and ``predictions`` parameters of
``ctc_decoder_predictions_tensor`` methods. Can be either 0 or 1.
The config may further contain the following sub-dictionaries:
"greedy":
preserve_alignments: Same as above, overrides above value.
compute_timestamps: Same as above, overrides above value.
preserve_frame_confidence: Same as above, overrides above value.
confidence_method_cfg: Same as above, overrides confidence_cfg.method_cfg.
"beam":
beam_size: int, defining the beam size for beam search. Must be >= 1.
If beam_size == 1, will perform cached greedy search. This might give slightly different
results compared to the greedy search above.
return_best_hypothesis: optional bool, whether to return just the best hypothesis or all of the
hypotheses after beam search has concluded. This flag is set by default.
beam_alpha: float, the strength of the Language model on the final score of a token.
final_score = acoustic_score + beam_alpha * lm_score + beam_beta * seq_length.
beam_beta: float, the strength of the sequence length penalty on the final score of a token.
final_score = acoustic_score + beam_alpha * lm_score + beam_beta * seq_length.
kenlm_path: str, path to a KenLM ARPA or .binary file (depending on the strategy chosen).
If the path is invalid (file is not found at path), will raise a deferred error at the moment
of calculation of beam search, so that users may update / change the decoding strategy
to point to the correct file.
blank_id: The id of the RNNT blank token.
"""
def __init__(self, decoding_cfg, blank_id: int):
super().__init__()
# Convert dataclass to config
if is_dataclass(decoding_cfg):
decoding_cfg = OmegaConf.structured(decoding_cfg)
if not isinstance(decoding_cfg, DictConfig):
decoding_cfg = OmegaConf.create(decoding_cfg)
OmegaConf.set_struct(decoding_cfg, False)
# update minimal config
minimal_cfg = ['greedy']
for item in minimal_cfg:
if item not in decoding_cfg:
decoding_cfg[item] = OmegaConf.create({})
self.cfg = decoding_cfg
self.blank_id = blank_id
self.preserve_alignments = self.cfg.get('preserve_alignments', None)
self.compute_timestamps = self.cfg.get('compute_timestamps', None)
self.batch_dim_index = self.cfg.get('batch_dim_index', 0)
self.word_seperator = self.cfg.get('word_seperator', ' ')
possible_strategies = ['greedy', 'beam', 'pyctcdecode', 'flashlight']
if self.cfg.strategy not in possible_strategies:
raise ValueError(f"Decoding strategy must be one of {possible_strategies}. Given {self.cfg.strategy}")
# Update preserve alignments
if self.preserve_alignments is None:
if self.cfg.strategy in ['greedy']:
self.preserve_alignments = self.cfg.greedy.get('preserve_alignments', False)
else:
self.preserve_alignments = self.cfg.beam.get('preserve_alignments', False)
# Update compute timestamps
if self.compute_timestamps is None:
if self.cfg.strategy in ['greedy']:
self.compute_timestamps = self.cfg.greedy.get('compute_timestamps', False)
elif self.cfg.strategy in ['beam']:
self.compute_timestamps = self.cfg.beam.get('compute_timestamps', False)
# initialize confidence-related fields
self._init_confidence(self.cfg.get('confidence_cfg', None))
# Confidence estimation is not implemented for strategies other than `greedy`
if (
not self.preserve_frame_confidence
and self.cfg.strategy != 'greedy'
and self.cfg.beam.get('preserve_frame_confidence', False)
):
raise NotImplementedError(f"Confidence calculation is not supported for strategy `{self.cfg.strategy}`")
# we need timestamps to extract non-blank per-frame confidence
if self.compute_timestamps is not None:
self.compute_timestamps |= self.preserve_frame_confidence
if self.cfg.strategy == 'greedy':
self.decoding = ctc_greedy_decoding.GreedyCTCInfer(
blank_id=self.blank_id,
preserve_alignments=self.preserve_alignments,
compute_timestamps=self.compute_timestamps,
preserve_frame_confidence=self.preserve_frame_confidence,
confidence_method_cfg=self.confidence_method_cfg,
)
elif self.cfg.strategy == 'beam':
self.decoding = ctc_beam_decoding.BeamCTCInfer(
blank_id=blank_id,
beam_size=self.cfg.beam.get('beam_size', 1),
search_type='default',
return_best_hypothesis=self.cfg.beam.get('return_best_hypothesis', True),
preserve_alignments=self.preserve_alignments,
compute_timestamps=self.compute_timestamps,
beam_alpha=self.cfg.beam.get('beam_alpha', 1.0),
beam_beta=self.cfg.beam.get('beam_beta', 0.0),
kenlm_path=self.cfg.beam.get('kenlm_path', None),
)
self.decoding.override_fold_consecutive_value = False
elif self.cfg.strategy == 'pyctcdecode':
self.decoding = ctc_beam_decoding.BeamCTCInfer(
blank_id=blank_id,
beam_size=self.cfg.beam.get('beam_size', 1),
search_type='pyctcdecode',
return_best_hypothesis=self.cfg.beam.get('return_best_hypothesis', True),
preserve_alignments=self.preserve_alignments,
compute_timestamps=self.compute_timestamps,
beam_alpha=self.cfg.beam.get('beam_alpha', 1.0),
beam_beta=self.cfg.beam.get('beam_beta', 0.0),
kenlm_path=self.cfg.beam.get('kenlm_path', None),
pyctcdecode_cfg=self.cfg.beam.get('pyctcdecode_cfg', None),
)
self.decoding.override_fold_consecutive_value = False
elif self.cfg.strategy == 'flashlight':
self.decoding = ctc_beam_decoding.BeamCTCInfer(
blank_id=blank_id,
beam_size=self.cfg.beam.get('beam_size', 1),
search_type='flashlight',
return_best_hypothesis=self.cfg.beam.get('return_best_hypothesis', True),
preserve_alignments=self.preserve_alignments,
compute_timestamps=self.compute_timestamps,
beam_alpha=self.cfg.beam.get('beam_alpha', 1.0),
beam_beta=self.cfg.beam.get('beam_beta', 0.0),
kenlm_path=self.cfg.beam.get('kenlm_path', None),
flashlight_cfg=self.cfg.beam.get('flashlight_cfg', None),
)
self.decoding.override_fold_consecutive_value = False
else:
raise ValueError(
f"Incorrect decoding strategy supplied. Must be one of {possible_strategies}\n"
f"but was provided {self.cfg.strategy}"
)
def ctc_decoder_predictions_tensor(
self,
decoder_outputs: torch.Tensor,
decoder_lengths: torch.Tensor = None,
fold_consecutive: bool = True,
return_hypotheses: bool = False,
) -> Tuple[List[str], Optional[List[List[str]]], Optional[Union[Hypothesis, NBestHypotheses]]]:
"""
Decodes a sequence of labels to words
Args:
decoder_outputs: A torch.Tensor of shape [Batch, Time, {Vocabulary}] (if ``batch_dim_index == 0``)
or [Time, Batch, {Vocabulary}] (if ``batch_dim_index == 1``) containing the decoder
log-probabilities over the label set.
decoder_lengths: Optional tensor of length `Batch` which contains the integer lengths
of the sequence in the padded `predictions` tensor.
fold_consecutive: Bool, determine whether to perform "ctc collapse", folding consecutive tokens
into a single token.
return_hypotheses: Bool flag whether to return just the decoding predictions of the model
or a Hypothesis object that holds information such as the decoded `text`,
the `alignment` emitted by the CTC Model, and the `length` of the sequence (if available).
May also contain the log-probabilities of the decoder (if this method is called via
transcribe())
Returns:
Either a list of str which represent the CTC decoded strings per sample,
or a list of Hypothesis objects containing additional information.
"""
if isinstance(decoder_outputs, torch.Tensor):
decoder_outputs = move_dimension_to_the_front(decoder_outputs, self.batch_dim_index)
if (
hasattr(self.decoding, 'override_fold_consecutive_value')
and self.decoding.override_fold_consecutive_value is not None
):
logging.info(
f"Beam search requires that consecutive ctc tokens are not folded. \n"
f"Overriding provided value of `fold_consecutive` = {fold_consecutive} to "
f"{self.decoding.override_fold_consecutive_value}",
mode=logging_mode.ONCE,
)
fold_consecutive = self.decoding.override_fold_consecutive_value
with torch.inference_mode():
# Resolve the forward step of the decoding strategy
hypotheses_list = self.decoding(
decoder_output=decoder_outputs, decoder_lengths=decoder_lengths
) # type: List[List[Hypothesis]]
# extract the hypotheses
hypotheses_list = hypotheses_list[0] # type: List[Hypothesis]
if isinstance(hypotheses_list[0], NBestHypotheses):
hypotheses = []
all_hypotheses = []
for nbest_hyp in hypotheses_list: # type: NBestHypotheses
n_hyps = nbest_hyp.n_best_hypotheses # Extract all hypotheses for this sample
decoded_hyps = self.decode_hypothesis(
n_hyps, fold_consecutive
) # type: List[Union[Hypothesis, NBestHypotheses]]
# If computing timestamps
if self.compute_timestamps is True:
timestamp_type = self.cfg.get('ctc_timestamp_type', 'all')
for hyp_idx in range(len(decoded_hyps)):
decoded_hyps[hyp_idx] = self.compute_ctc_timestamps(decoded_hyps[hyp_idx], timestamp_type)
hypotheses.append(decoded_hyps[0]) # best hypothesis
all_hypotheses.append(decoded_hyps)
if return_hypotheses:
return hypotheses, all_hypotheses
best_hyp_text = [h.text for h in hypotheses]
all_hyp_text = [h.text for hh in all_hypotheses for h in hh]
return best_hyp_text, all_hyp_text
else:
hypotheses = self.decode_hypothesis(
hypotheses_list, fold_consecutive
) # type: List[Union[Hypothesis, NBestHypotheses]]
# If computing timestamps
if self.compute_timestamps is True:
# greedy decoding, can get high-level confidence scores
if return_hypotheses and (self.preserve_word_confidence or self.preserve_token_confidence):
hypotheses = self.compute_confidence(hypotheses)
else:
# remove unused token_repetitions from Hypothesis.text
for hyp in hypotheses:
hyp.text = hyp.text[:2]
timestamp_type = self.cfg.get('ctc_timestamp_type', 'all')
for hyp_idx in range(len(hypotheses)):
hypotheses[hyp_idx] = self.compute_ctc_timestamps(hypotheses[hyp_idx], timestamp_type)
if return_hypotheses:
return hypotheses, None
best_hyp_text = [h.text for h in hypotheses]
return best_hyp_text, None
def decode_hypothesis(
self, hypotheses_list: List[Hypothesis], fold_consecutive: bool
) -> List[Union[Hypothesis, NBestHypotheses]]:
"""
Decode a list of hypotheses into a list of strings.
Args:
hypotheses_list: List of Hypothesis.
fold_consecutive: Whether to collapse the ctc blank tokens or not.
Returns:
A list of strings.
"""
for ind in range(len(hypotheses_list)):
# Extract the integer encoded hypothesis
hyp = hypotheses_list[ind]
prediction = hyp.y_sequence
predictions_len = hyp.length if hyp.length > 0 else None
if fold_consecutive:
if not isinstance(prediction, list):
prediction = prediction.numpy().tolist()
if predictions_len is not None:
prediction = prediction[:predictions_len]
# CTC decoding procedure
decoded_prediction = []
token_lengths = [] # preserve token lengths
token_repetitions = [] # preserve number of repetitions per token
previous = self.blank_id
last_length = 0
last_repetition = 1
for pidx, p in enumerate(prediction):
if (p != previous or previous == self.blank_id) and p != self.blank_id:
decoded_prediction.append(p)
token_lengths.append(pidx - last_length)
last_length = pidx
token_repetitions.append(last_repetition)
last_repetition = 1
if p == previous and previous != self.blank_id:
last_repetition += 1
previous = p
if len(token_repetitions) > 0:
token_repetitions = token_repetitions[1:] + [last_repetition]
else:
if predictions_len is not None:
prediction = prediction[:predictions_len]
decoded_prediction = prediction[prediction != self.blank_id].tolist()
token_lengths = [1] * len(decoded_prediction)  # preserve the length of each token
token_repetitions = [1] * len(decoded_prediction) # preserve number of repetitions per token
# De-tokenize the integer tokens; if not computing timestamps
if self.compute_timestamps is True:
# keep the original predictions, wrap with the number of repetitions per token
# this is done so that `ctc_decoder_predictions_tensor()` can process this hypothesis
# in order to compute exact time stamps.
hypothesis = (decoded_prediction, token_lengths, token_repetitions)
else:
hypothesis = self.decode_tokens_to_str(decoded_prediction)
# TODO: remove
# collapse leading spaces before . , ? for PC models
hypothesis = re.sub(r'(\s+)([\.\,\?])', r'\2', hypothesis)
# Preserve this wrapped hypothesis or decoded text tokens.
hypotheses_list[ind].text = hypothesis
return hypotheses_list
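The inner loop of `decode_hypothesis` applies the standard CTC collapse rule: emit a token when it differs from its predecessor (or follows a blank) and is not the blank itself, so a blank between two identical tokens keeps both. A minimal standalone sketch of just that rule (hypothetical helper, blank id assumed to be 0):

```python
def ctc_collapse(prediction, blank_id=0):
    # Mirrors the emission condition used in decode_hypothesis above.
    decoded = []
    previous = blank_id
    for p in prediction:
        if (p != previous or previous == blank_id) and p != blank_id:
            decoded.append(p)
        previous = p
    return decoded

ctc_collapse([0, 1, 1, 0, 2, 2])  # -> [1, 2]
ctc_collapse([1, 0, 1])           # -> [1, 1]; the blank separates the repeats
```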
def compute_confidence(self, hypotheses_list: List[Hypothesis]) -> List[Hypothesis]:
"""
Computes high-level (per-token and/or per-word) confidence scores for a list of hypotheses.
Assumes that `frame_confidence` is present in the hypotheses.
Args:
hypotheses_list: List of Hypothesis.
Returns:
A list of hypotheses with high-level confidence scores.
"""
for hyp in hypotheses_list:
if not isinstance(hyp.text, tuple) or len(hyp.text) != 3:
# the method must have been called in the wrong place
raise ValueError(
"""Wrong format of the `text` attribute of a hypothesis.\n
Expected: (decoded_prediction, token_lengths, token_repetitions)\n
The method invocation is expected between .decode_hypothesis() and .compute_ctc_timestamps()"""
)
token_repetitions = hyp.text[2]
hyp.text = hyp.text[:2]
token_confidence = []
if self.exclude_blank_from_confidence:
non_blank_frame_confidence = hyp.non_blank_frame_confidence
i = 0
for tr in token_repetitions:
# token repetition can be zero
j = i + tr
token_confidence.append(self._aggregate_confidence(non_blank_frame_confidence[i:j]))
i = j
else:
# <blank> tokens are considered to belong to the last non-blank token, if any.
token_lengths = hyp.text[1]
if len(token_lengths) > 0:
ts = token_lengths[0]
for tl in token_lengths[1:] + [len(hyp.frame_confidence)]:
token_confidence.append(self._aggregate_confidence(hyp.frame_confidence[ts : ts + tl]))
ts += tl
hyp.token_confidence = token_confidence
if self.preserve_word_confidence:
for hyp in hypotheses_list:
hyp.word_confidence = self._aggregate_token_confidence(hyp)
return hypotheses_list
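The `exclude_blank_from_confidence` branch above slices the per-frame confidences into per-token groups using the repetition counts. A hedged sketch of that slicing with a mean aggregator (the real code dispatches to the configured aggregation, and the helper name here is hypothetical):

```python
def aggregate_token_confidence(frame_confidence, token_repetitions):
    # Each token owns `tr` consecutive non-blank frames; aggregate each slice.
    token_confidence, i = [], 0
    for tr in token_repetitions:
        j = i + tr
        chunk = frame_confidence[i:j]
        # token repetition can be zero, so guard against an empty slice
        token_confidence.append(sum(chunk) / len(chunk) if chunk else 0.0)
        i = j
    return token_confidence

aggregate_token_confidence([0.9, 0.8, 0.4, 0.6], [2, 2])  # approximately [0.85, 0.5]
```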
@abstractmethod
def decode_tokens_to_str(self, tokens: List[int]) -> str:
"""
Implemented by subclass in order to decode a token id list into a string.
Args:
tokens: List of int representing the token ids.
Returns:
A decoded string.
"""
raise NotImplementedError()
@abstractmethod
def decode_ids_to_tokens(self, tokens: List[int]) -> List[str]:
"""
Implemented by subclass in order to decode a token id list into a token list.
A token list is the string representation of each token id.
Args:
tokens: List of int representing the token ids.
Returns:
A list of decoded tokens.
"""
raise NotImplementedError()
def compute_ctc_timestamps(self, hypothesis: Hypothesis, timestamp_type: str = "all"):
"""
Method to compute time stamps at char/subword, and word level given some hypothesis.
Requires the input hypothesis to contain a `text` field that is a tuple. The tuple contains -
the ctc collapsed integer ids, and the length (in frames) of each token.
Args:
hypothesis: A Hypothesis object, with a wrapped `text` field.
The `text` field must contain a tuple with two values -
The ctc collapsed integer ids
A list of integers that represents the length (in frames) of each token.
timestamp_type: A str value that represents the type of time stamp calculated.
Can be one of "char", "word" or "all"
Returns:
A Hypothesis object with a modified `timestep` value, which is now a dictionary containing
the time stamp information.
"""
assert timestamp_type in ['char', 'word', 'all']
# Unpack the temporary storage, and set the decoded predictions
decoded_prediction, token_lengths = hypothesis.text
hypothesis.text = decoded_prediction
# Retrieve offsets
char_offsets = word_offsets = None
char_offsets = self._compute_offsets(hypothesis, token_lengths, self.blank_id)
# Assert number of offsets and hypothesis tokens are 1:1 match.
if len(char_offsets) != len(hypothesis.text):
raise ValueError(
f"`char_offsets`: {char_offsets} and `processed_tokens`: {hypothesis.text}"
" have to be of the same length, but are: "
f"`len(offsets)`: {len(char_offsets)} and `len(processed_tokens)`:"
f" {len(hypothesis.text)}"
)
# Correctly process the token ids to chars/subwords.
for i, char in enumerate(hypothesis.text):
char_offsets[i]["char"] = self.decode_tokens_to_str([char])
# detect char vs subword models
lens = [len(list(v["char"])) > 1 for v in char_offsets]
if any(lens):
text_type = 'subword'
else:
text_type = 'char'
# retrieve word offsets from character offsets
word_offsets = None
if timestamp_type in ['word', 'all']:
if text_type == 'char':
word_offsets = self._get_word_offsets_chars(char_offsets, word_delimiter_char=self.word_seperator)
else:
word_offsets = self._get_word_offsets_subwords_sentencepiece(
char_offsets,
hypothesis,
decode_ids_to_tokens=self.decode_ids_to_tokens,
decode_tokens_to_str=self.decode_tokens_to_str,
)
# attach results
if len(hypothesis.timestep) > 0:
timestep_info = hypothesis.timestep
else:
timestep_info = []
# Setup defaults
hypothesis.timestep = {"timestep": timestep_info}
# Add char / subword time stamps
if char_offsets is not None and timestamp_type in ['char', 'all']:
hypothesis.timestep['char'] = char_offsets
# Add word time stamps
if word_offsets is not None and timestamp_type in ['word', 'all']:
hypothesis.timestep['word'] = word_offsets
# Convert the token indices to text
hypothesis.text = self.decode_tokens_to_str(hypothesis.text)
return hypothesis
@staticmethod
def _compute_offsets(
hypothesis: Hypothesis, token_lengths: List[int], ctc_token: int
) -> List[Dict[str, Union[str, int]]]:
"""
Utility method that calculates the individual time indices where a token starts and ends.
Args:
hypothesis: A Hypothesis object that contains `text` field that holds the character / subword token
emitted at every time step after ctc collapse.
token_lengths: A list of ints representing the lengths of each emitted token.
ctc_token: The integer of the ctc blank token used during ctc collapse.
Returns:
A list of dicts, each containing the token ("char"), its "start_offset" and its "end_offset".
"""
start_index = 0
# If the exact timestep information is available, utilize the 1st non-ctc blank token timestep
# as the start index.
if hypothesis.timestep is not None and len(hypothesis.timestep) > 0:
start_index = max(0, hypothesis.timestep[0] - 1)
# Construct the start and end indices brackets
end_indices = np.asarray(token_lengths).cumsum()
start_indices = np.concatenate(([start_index], end_indices[:-1]))
# Merge the results per token into a list of dictionaries
offsets = [
{"char": t, "start_offset": s, "end_offset": e}
for t, s, e in zip(hypothesis.text, start_indices, end_indices)
]
# Filter out CTC token
offsets = list(filter(lambda offsets: offsets["char"] != ctc_token, offsets))
return offsets
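The cumulative-sum bracketing above can be illustrated without numpy; the token lengths and characters below are hypothetical, and the start index is taken as 0 (the real method may shift it by the first non-blank timestep):

```python
from itertools import accumulate

token_lengths = [2, 1, 3]
end_indices = list(accumulate(token_lengths))  # [2, 3, 6]
start_indices = [0] + end_indices[:-1]         # [0, 2, 3]
offsets = [
    {"char": t, "start_offset": s, "end_offset": e}
    for t, s, e in zip("abc", start_indices, end_indices)
]
# offsets[1] == {"char": "b", "start_offset": 2, "end_offset": 3}
```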
@staticmethod
def _get_word_offsets_chars(
offsets: Dict[str, Union[str, float]], word_delimiter_char: str = " "
) -> Dict[str, Union[str, float]]:
"""
Utility method which constructs word time stamps out of character time stamps.
References:
This code is a port of the Hugging Face code for word time stamp construction.
Args:
offsets: A list of dictionaries, each containing "char", "start_offset" and "end_offset".
word_delimiter_char: Character token that represents the word delimiter. By default, " ".
Returns:
A list of dictionaries containing the word offsets. Each item contains "word", "start_offset" and
"end_offset".
"""
word_offsets = []
last_state = "SPACE"
word = ""
start_offset = 0
end_offset = 0
for i, offset in enumerate(offsets):
char = offset["char"]
state = "SPACE" if char == word_delimiter_char else "WORD"
if state == last_state:
# If we are in the same state as before, we simply repeat what we've done before
end_offset = offset["end_offset"]
word += char
else:
# Switching state
if state == "SPACE":
# Finishing a word
word_offsets.append({"word": word, "start_offset": start_offset, "end_offset": end_offset})
else:
# Starting a new word
start_offset = offset["start_offset"]
end_offset = offset["end_offset"]
word = char
last_state = state
if last_state == "WORD":
word_offsets.append({"word": word, "start_offset": start_offset, "end_offset": end_offset})
return word_offsets
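The SPACE/WORD state machine above can be exercised end-to-end on a toy transcript. The compact reimplementation below is intended to be behaviorally equivalent (character offsets are hypothetical, one frame per character):

```python
def word_offsets_from_chars(offsets, delimiter=" "):
    words, word, start, end, last = [], "", 0, 0, "SPACE"
    for o in offsets:
        state = "SPACE" if o["char"] == delimiter else "WORD"
        if state == last:            # extend the current run
            end = o["end_offset"]
            word += o["char"]
        elif state == "SPACE":       # a word just finished
            words.append({"word": word, "start_offset": start, "end_offset": end})
        else:                        # a new word starts
            start, end, word = o["start_offset"], o["end_offset"], o["char"]
        last = state
    if last == "WORD":               # flush the trailing word
        words.append({"word": word, "start_offset": start, "end_offset": end})
    return words

chars = [{"char": c, "start_offset": i, "end_offset": i + 1} for i, c in enumerate("hi yo")]
word_offsets_from_chars(chars)
# -> [{'word': 'hi', 'start_offset': 0, 'end_offset': 2},
#     {'word': 'yo', 'start_offset': 3, 'end_offset': 5}]
```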
@staticmethod
def _get_word_offsets_subwords_sentencepiece(
offsets: Dict[str, Union[str, float]],
hypothesis: Hypothesis,
decode_ids_to_tokens: Callable[[List[int]], str],
decode_tokens_to_str: Callable[[List[int]], str],
) -> Dict[str, Union[str, float]]:
"""
Utility method which constructs word time stamps out of sub-word time stamps.
**Note**: Only supports Sentencepiece based tokenizers !
Args:
offsets: A list of dictionaries, each containing "char", "start_offset" and "end_offset".
hypothesis: Hypothesis object that contains `text` field, where each token is a sub-word id
after ctc collapse.
decode_ids_to_tokens: A Callable function that accepts a list of integers and maps it to a sub-word.
decode_tokens_to_str: A Callable function that accepts a list of integers and maps it to text / str.
Returns:
A list of dictionaries containing the word offsets. Each item contains "word", "start_offset" and
"end_offset".
"""
word_offsets = []
built_token = []
previous_token_index = 0
# For every collapsed sub-word token
for i, char in enumerate(hypothesis.text):
# Compute the sub-word text representation, and the decoded text (stripped of sub-word markers).
token = decode_ids_to_tokens([char])[0]
token_text = decode_tokens_to_str([char])
# It is a sub-word token, or contains an identifier at the beginning such as _ or ## that was stripped
# after forcing partial text conversion of the token.
if token != token_text:
# If there are any partially or fully built sub-word token ids, construct to text.
# Note: This is "old" subword, that occurs *after* current sub-word has started.
if len(built_token) > 0:
word_offsets.append(
{
"word": decode_tokens_to_str(built_token),
"start_offset": offsets[previous_token_index]["start_offset"],
"end_offset": offsets[i]["start_offset"],
}
)
# Prepare list of new sub-word ids
built_token.clear()
built_token.append(char)
previous_token_index = i
else:
# If the token does not contain any sub-word start mark, then the sub-word has not completed yet
# Append to current sub-word list.
built_token.append(char)
# Inject the start offset of the first token to word offsets
# This is because we always delay the injection of the first sub-word due to the loop
# condition and the check of whether the built token is ready or not.
# Therefore, without this forced injection, the start_offset appears off by 1.
if len(word_offsets) == 0:
# alaptev: sometimes word_offsets can be empty
if len(built_token) > 0:
word_offsets.append(
{
"word": decode_tokens_to_str(built_token),
"start_offset": offsets[0]["start_offset"],
"end_offset": offsets[-1]["end_offset"],
}
)
built_token.clear()
else:
word_offsets[0]["start_offset"] = offsets[0]["start_offset"]
# If there are any remaining tokens left, inject them all into the final word offset.
# Note: The start offset of this token is the start time of the first token inside build_token.
# Note: The end offset of this token is the end time of the last token inside build_token
if len(built_token) > 0:
word_offsets.append(
{
"word": decode_tokens_to_str(built_token),
"start_offset": offsets[-(len(built_token))]["start_offset"],
"end_offset": offsets[-1]["end_offset"],
}
)
built_token.clear()
return word_offsets
@property
def preserve_alignments(self):
return self._preserve_alignments
@preserve_alignments.setter
def preserve_alignments(self, value):
self._preserve_alignments = value
if hasattr(self, 'decoding'):
self.decoding.preserve_alignments = value
@property
def compute_timestamps(self):
return self._compute_timestamps
@compute_timestamps.setter
def compute_timestamps(self, value):
self._compute_timestamps = value
if hasattr(self, 'decoding'):
self.decoding.compute_timestamps = value
@property
def preserve_frame_confidence(self):
return self._preserve_frame_confidence
@preserve_frame_confidence.setter
def preserve_frame_confidence(self, value):
self._preserve_frame_confidence = value
if hasattr(self, 'decoding'):
self.decoding.preserve_frame_confidence = value
class CTCDecoding(AbstractCTCDecoding):
"""
Used for performing CTC auto-regressive / non-auto-regressive decoding of the logprobs for character
based models.
Args:
decoding_cfg: A dict-like object which contains the following key-value pairs.
strategy: str value which represents the type of decoding that can occur.
Possible values are :
- greedy (for greedy decoding).
- beam (for DeepSpeed KenLM based decoding).
compute_timestamps: A bool flag, which determines whether to compute the character/subword, or
word based timestamp mapping the output log-probabilities to discrete intervals of timestamps.
The timestamps will be available in the returned Hypothesis.timestep as a dictionary.
ctc_timestamp_type: A str value, which represents the types of timestamps that should be calculated.
Can take the following values - "char" for character/subword time stamps, "word" for word level
time stamps and "all" (default), for both character level and word level time stamps.
word_seperator: Str token representing the separator between words.
preserve_alignments: Bool flag which preserves the history of logprobs generated during
decoding (sample / batched). When set to true, the Hypothesis will contain
the non-null value for `logprobs` in it. Here, `logprobs` is a torch.Tensors.
confidence_cfg: A dict-like object which contains the following key-value pairs related to confidence
scores. In order to obtain hypotheses with confidence scores, please utilize
`ctc_decoder_predictions_tensor` function with the `preserve_frame_confidence` flag set to True.
preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores
generated during decoding. When set to true, the Hypothesis will contain
the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of floats.
preserve_token_confidence: Bool flag which preserves the history of per-token confidence scores
generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
the non-null value for `token_confidence` in it. Here, `token_confidence` is a List of floats.
The length of the list corresponds to the number of recognized tokens.
preserve_word_confidence: Bool flag which preserves the history of per-word confidence scores
generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
the non-null value for `word_confidence` in it. Here, `word_confidence` is a List of floats.
The length of the list corresponds to the number of recognized words.
exclude_blank: Bool flag indicating that blank token confidence scores are to be excluded
from the `token_confidence`.
aggregation: Which aggregation type to use for collapsing per-token confidence into per-word confidence.
Valid options are `mean`, `min`, `max`, `prod`.
method_cfg: A dict-like object which contains the method name and settings to compute per-frame
confidence scores.
name: The method name (str).
Supported values:
- 'max_prob' for using the maximum token probability as a confidence.
- 'entropy' for using a normalized entropy of a log-likelihood vector.
entropy_type: Which type of entropy to use (str).
Used if confidence_method_cfg.name is set to `entropy`.
Supported values:
- 'max_prob' for using the maximum token probability as a confidence.
- 'entropy' for using a normalized entropy of a log-likelihood vector.
entropy_type: Which type of entropy to use (str).
Used if confidence_method_cfg.name is set to `entropy`.
Supported values:
- 'gibbs' for the (standard) Gibbs entropy. If the alpha (α) is provided,
the formula is the following: H_α = -sum_i((p^α_i)*log(p^α_i)).
Note that for this entropy, the alpha should comply with the following inequality:
(log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= α <= (1+log(V-1))/log(V-1)
where V is the model vocabulary size.
- 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
Tsallis entropy formula is the following: H_α = 1/(α-1)*(1-sum_i(p^α_i)),
where α is a parameter. When α == 1, it works like the Gibbs entropy.
More: https://en.wikipedia.org/wiki/Tsallis_entropy
- 'renyi' for the Rényi entropy.
Rényi entropy formula is the following: H_α = 1/(1-α)*log_2(sum_i(p^α_i)),
where α is a parameter. When α == 1, it works like the Gibbs entropy.
More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
alpha: Power scale for logsoftmax (α for entropies). Here we restrict it to be > 0.
When the alpha equals one, scaling is not applied to 'max_prob',
and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
entropy_norm: A mapping of the entropy value to the interval [0,1].
Supported values:
- 'lin' for using the linear mapping.
- 'exp' for using exponential mapping with linear shift.
batch_dim_index: Index of the batch dimension of ``targets`` and ``predictions`` parameters of
``ctc_decoder_predictions_tensor`` methods. Can be either 0 or 1.
The config may further contain the following sub-dictionaries:
"greedy":
preserve_alignments: Same as above, overrides above value.
compute_timestamps: Same as above, overrides above value.
preserve_frame_confidence: Same as above, overrides above value.
confidence_method_cfg: Same as above, overrides confidence_cfg.method_cfg.
"beam":
beam_size: int, defining the beam size for beam search. Must be >= 1.
If beam_size == 1, will perform cached greedy search. This might produce slightly different
results compared to the greedy search above.
return_best_hypothesis: optional bool, whether to return just the best hypothesis or all of the
hypotheses after beam search has concluded. This flag is set by default.
beam_alpha: float, the strength of the Language model on the final score of a token.
final_score = acoustic_score + beam_alpha * lm_score + beam_beta * seq_length.
beam_beta: float, the strength of the sequence length penalty on the final score of a token.
final_score = acoustic_score + beam_alpha * lm_score + beam_beta * seq_length.
kenlm_path: str, path to a KenLM ARPA or .binary file (depending on the strategy chosen).
If the path is invalid (file is not found at path), will raise a deferred error at the moment
of calculation of beam search, so that users may update / change the decoding strategy
to point to the correct file.
blank_id: The id of the RNNT blank token.
"""
def __init__(
self, decoding_cfg, vocabulary,
):
blank_id = len(vocabulary)
self.vocabulary = vocabulary
self.labels_map = {i: vocabulary[i] for i in range(len(vocabulary))}
super().__init__(decoding_cfg=decoding_cfg, blank_id=blank_id)
# Finalize Beam Search Decoding framework
if isinstance(self.decoding, ctc_beam_decoding.AbstractBeamCTCInfer):
self.decoding.set_vocabulary(self.vocabulary)
self.decoding.set_decoding_type('char')
def _aggregate_token_confidence(self, hypothesis: Hypothesis) -> List[float]:
"""
Implemented by subclass in order to aggregate token confidence into word-level confidence scores.
Args:
hypothesis: Hypothesis
Returns:
A list of word-level confidence scores.
"""
return self._aggregate_token_confidence_chars(
self.decode_tokens_to_str(hypothesis.text[0]).split(), hypothesis.token_confidence
)
def decode_tokens_to_str(self, tokens: List[int]) -> str:
"""
Implemented by subclass in order to decode a token list into a string.
Args:
tokens: List of int representing the token ids.
Returns:
A decoded string.
"""
hypothesis = ''.join(self.decode_ids_to_tokens(tokens))
return hypothesis
def decode_ids_to_tokens(self, tokens: List[int]) -> List[str]:
"""
Implemented by subclass in order to decode a token id list into a token list.
A token list is the string representation of each token id.
Args:
tokens: List of int representing the token ids.
Returns:
A list of decoded tokens.
"""
token_list = [self.labels_map[c] for c in tokens if c != self.blank_id]
return token_list
class WER(Metric):
"""
This metric computes numerator and denominator for Overall Word Error Rate (WER) between prediction and reference
texts. When doing distributed training/evaluation, the result of ``res=WER(predictions, targets, target_lengths)``
calls will be all-reduced between all workers using SUM operations. Here ``res`` contains three numbers
``res=[wer, total_levenshtein_distance, total_number_of_words]``.
If used with a PyTorch Lightning LightningModule, include wer_numerator and wer_denominator inside validation_step
results. Then aggregate (sum) them at the end of the validation epoch to correctly compute the validation WER.
Example:
def validation_step(self, batch, batch_idx):
...
wer_num, wer_denom = self.__wer(predictions, transcript, transcript_len)
self.val_outputs = {'val_loss': loss_value, 'val_wer_num': wer_num, 'val_wer_denom': wer_denom}
return self.val_outputs
def on_validation_epoch_end(self):
...
wer_num = torch.stack([x['val_wer_num'] for x in self.val_outputs]).sum()
wer_denom = torch.stack([x['val_wer_denom'] for x in self.val_outputs]).sum()
tensorboard_logs = {'validation_loss': val_loss_mean, 'validation_avg_wer': wer_num / wer_denom}
self.val_outputs.clear() # free memory
return {'val_loss': val_loss_mean, 'log': tensorboard_logs}
Args:
decoding: An instance of CTCDecoding.
use_cer: Whether to use Character Error Rate instead of Word Error Rate.
log_prediction: Whether to log a single decoded sample per call.
fold_consecutive: Whether repeated consecutive characters should be folded into one when decoding.
Returns:
res: a tuple of 3 zero-dimensional float32 ``torch.Tensor`` objects: a WER score, a sum of Levenshtein
distances for all prediction-reference pairs, and the total number of words in all references.
"""
full_state_update: bool = True
def __init__(
self,
decoding: CTCDecoding,
use_cer=False,
log_prediction=True,
fold_consecutive=True,
dist_sync_on_step=False,
):
super().__init__(dist_sync_on_step=dist_sync_on_step)
self.decoding = decoding
self.use_cer = use_cer
self.log_prediction = log_prediction
self.fold_consecutive = fold_consecutive
self.add_state("scores", default=torch.tensor(0), dist_reduce_fx='sum', persistent=False)
self.add_state("words", default=torch.tensor(0), dist_reduce_fx='sum', persistent=False)
def update(
self,
predictions: torch.Tensor,
targets: torch.Tensor,
target_lengths: torch.Tensor,
predictions_lengths: torch.Tensor = None,
):
"""
Updates metric state.
Args:
predictions: an integer torch.Tensor of shape ``[Batch, Time, {Vocabulary}]`` (if ``batch_dim_index == 0``) or
``[Time, Batch]`` (if ``batch_dim_index == 1``)
targets: an integer torch.Tensor of shape ``[Batch, Time]`` (if ``batch_dim_index == 0``) or
``[Time, Batch]`` (if ``batch_dim_index == 1``)
target_lengths: an integer torch.Tensor of shape ``[Batch]``
predictions_lengths: an integer torch.Tensor of shape ``[Batch]``
"""
words = 0
scores = 0
references = []
with torch.no_grad():
# prediction_cpu_tensor = tensors[0].long().cpu()
targets_cpu_tensor = targets.long().cpu()
tgt_lengths_cpu_tensor = target_lengths.long().cpu()
# iterate over batch
for ind in range(targets_cpu_tensor.shape[0]):
tgt_len = tgt_lengths_cpu_tensor[ind].item()
target = targets_cpu_tensor[ind][:tgt_len].numpy().tolist()
reference = self.decoding.decode_tokens_to_str(target)
references.append(reference)
hypotheses, _ = self.decoding.ctc_decoder_predictions_tensor(
predictions, predictions_lengths, fold_consecutive=self.fold_consecutive
)
if self.log_prediction:
logging.info(f"\n")
logging.info(f"reference:{references[0]}")
logging.info(f"predicted:{hypotheses[0]}")
for h, r in zip(hypotheses, references):
if self.use_cer:
h_list = list(h)
r_list = list(r)
else:
h_list = h.split()
r_list = r.split()
words += len(r_list)
# Compute Levenshtein distance
scores += editdistance.eval(h_list, r_list)
self.scores = torch.tensor(scores, device=self.scores.device, dtype=self.scores.dtype)
self.words = torch.tensor(words, device=self.words.device, dtype=self.words.dtype)
# return torch.tensor([scores, words]).to(predictions.device)
def compute(self):
scores = self.scores.detach().float()
words = self.words.detach().float()
return scores / words, scores, words
@dataclass
class CTCDecodingConfig:
strategy: str = "greedy"
# preserve decoding alignments
preserve_alignments: Optional[bool] = None
# compute ctc time stamps
compute_timestamps: Optional[bool] = None
# token representing the word separator
word_seperator: str = " "
# type of timestamps to calculate
ctc_timestamp_type: str = "all" # can be char, word or all for both
# batch dimension
batch_dim_index: int = 0
# greedy decoding config
greedy: ctc_greedy_decoding.GreedyCTCInferConfig = ctc_greedy_decoding.GreedyCTCInferConfig()
# beam decoding config
beam: ctc_beam_decoding.BeamCTCInferConfig = ctc_beam_decoding.BeamCTCInferConfig(beam_size=4)
# confidence config
confidence_cfg: ConfidenceConfig = ConfidenceConfig()
# can be used to change temperature for decoding
temperature: float = 1.0
[end of nemo/collections/asr/metrics/wer.py]
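The Gibbs/Tsallis/Rényi confidence formulas spelled out in the `CTCDecoding` docstring above can be checked numerically. The helper below is a minimal pure-Python sketch of those formulas; the function name, and the choice to keep the Gibbs branch at α = 1, are assumptions of this sketch and not NeMo's actual confidence implementation:

```python
import math

def entropy_confidence(probs, kind="gibbs", alpha=1.0, norm="lin"):
    """Per-frame confidence from one probability vector (illustrative only).

    Confidence is 1 minus the normalized entropy, so a peaked distribution
    maps to ~1.0 and the uniform distribution maps to 0.0.
    """
    v = len(probs)  # vocabulary size V
    if kind == "gibbs" or alpha == 1.0:
        # Shannon/Gibbs entropy; per the docstring, Tsallis and Renyi
        # reduce to this case when alpha == 1 (sketch keeps Gibbs here).
        h = -sum(p * math.log(p) for p in probs if p > 0)
        h_max = math.log(v)  # entropy of the uniform distribution
    elif kind == "tsallis":
        h = (1.0 - sum(p ** alpha for p in probs)) / (alpha - 1.0)
        h_max = (1.0 - v ** (1.0 - alpha)) / (alpha - 1.0)
    elif kind == "renyi":
        h = math.log2(sum(p ** alpha for p in probs)) / (1.0 - alpha)
        h_max = math.log2(v)
    else:
        raise ValueError(f"unknown entropy type: {kind}")
    if norm == "lin":  # linear mapping of H onto [0, 1]
        return 1.0 - h / h_max
    # 'exp': exponential mapping with linear shift
    return (math.exp(-h) - math.exp(-h_max)) / (1.0 - math.exp(-h_max))
```

With either normalization, a uniform distribution yields confidence 0 and a one-hot distribution yields confidence 1, which is the property the `entropy_norm` options above are designed to guarantee.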
[start of nemo/collections/asr/models/configs/aligner_config.py]
# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from dataclasses import dataclass
from nemo.collections.asr.parts.k2.classes import GraphModuleConfig
@dataclass
class AlignerCTCConfig:
prob_suppress_index: int = -1
prob_suppress_value: float = 1.0
@dataclass
class AlignerRNNTConfig:
predictor_window_size: int = 0
predictor_step_size: int = 1
@dataclass
class AlignerWrapperModelConfig:
alignment_type: str = "forced"
word_output: bool = True
cpu_decoding: bool = False
decode_batch_size: int = 0
ctc_cfg: AlignerCTCConfig = AlignerCTCConfig()
rnnt_cfg: AlignerRNNTConfig = AlignerRNNTConfig()
@dataclass
class K2AlignerWrapperModelConfig(AlignerWrapperModelConfig):
decoder_module_cfg: GraphModuleConfig = GraphModuleConfig()
[end of nemo/collections/asr/models/configs/aligner_config.py]
[start of nemo/collections/asr/models/configs/asr_models_config.py]
# Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from dataclasses import dataclass
from typing import Any, Dict, List, Optional
from omegaconf import MISSING
import nemo.core.classes.dataset
from nemo.collections.asr.metrics.wer import CTCDecodingConfig
from nemo.collections.asr.modules.audio_preprocessing import (
AudioToMelSpectrogramPreprocessorConfig,
SpectrogramAugmentationConfig,
)
from nemo.collections.asr.modules.conv_asr import ConvASRDecoderConfig, ConvASREncoderConfig
from nemo.core.config import modelPT as model_cfg
@dataclass
class ASRDatasetConfig(nemo.core.classes.dataset.DatasetConfig):
manifest_filepath: Optional[Any] = None
sample_rate: int = MISSING
labels: List[str] = MISSING
trim_silence: bool = False
# Tarred dataset support
is_tarred: bool = False
tarred_audio_filepaths: Optional[Any] = None
tarred_shard_strategy: str = "scatter"
shard_manifests: bool = False
shuffle_n: int = 0
# Optional
int_values: Optional[int] = None
augmentor: Optional[Dict[str, Any]] = None
max_duration: Optional[float] = None
min_duration: Optional[float] = None
max_utts: int = 0
blank_index: int = -1
unk_index: int = -1
normalize: bool = False
trim: bool = True
parser: Optional[str] = 'en'
eos_id: Optional[int] = None
bos_id: Optional[int] = None
pad_id: int = 0
use_start_end_token: bool = False
return_sample_id: Optional[bool] = False
# bucketing params
bucketing_strategy: str = "synced_randomized"
bucketing_batch_size: Optional[Any] = None
bucketing_weights: Optional[List[int]] = None
@dataclass
class EncDecCTCConfig(model_cfg.ModelConfig):
# Model global arguments
sample_rate: int = 16000
repeat: int = 1
dropout: float = 0.0
separable: bool = False
labels: List[str] = MISSING
# Dataset configs
train_ds: ASRDatasetConfig = ASRDatasetConfig(manifest_filepath=None, shuffle=True)
validation_ds: ASRDatasetConfig = ASRDatasetConfig(manifest_filepath=None, shuffle=False)
test_ds: ASRDatasetConfig = ASRDatasetConfig(manifest_filepath=None, shuffle=False)
# Optimizer / Scheduler config
optim: Optional[model_cfg.OptimConfig] = model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
# Model component configs
preprocessor: AudioToMelSpectrogramPreprocessorConfig = AudioToMelSpectrogramPreprocessorConfig()
spec_augment: Optional[SpectrogramAugmentationConfig] = SpectrogramAugmentationConfig()
encoder: ConvASREncoderConfig = ConvASREncoderConfig()
decoder: ConvASRDecoderConfig = ConvASRDecoderConfig()
decoding: CTCDecodingConfig = CTCDecodingConfig()
@dataclass
class EncDecCTCModelConfig(model_cfg.NemoConfig):
model: EncDecCTCConfig = EncDecCTCConfig()
@dataclass
class CacheAwareStreamingConfig:
chunk_size: int = 0 # the size of each chunk at each step, it can be a list of two integers to specify different chunk sizes for the first step and others
shift_size: int = 0 # the size of the shift in each step, it can be a list of two integers to specify different shift sizes for the first step and others
cache_drop_size: int = 0 # the number of steps to drop from the cache
last_channel_cache_size: int = 0 # the size of the needed cache for last channel layers
valid_out_len: int = 0 # the number of steps in the final output which are valid (have the same value as in the offline mode)
pre_encode_cache_size: int = 0 # the size of the needed cache for the pre-encoding part of the model to avoid caching inside the pre-encoding layers
drop_extra_pre_encoded: int = 0 # the number of steps to get dropped after the pre-encoding layer
last_channel_num: int = 0 # number of the last channel layers (like MHA layers) which need caching in the model
last_time_num: int = 0 # number of the last time layers (like convolutions) which need caching in the model
[end of nemo/collections/asr/models/configs/asr_models_config.py]
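The structured configs above are normally composed and overridden through Hydra/OmegaConf, but the underlying pattern is plain dataclasses. The sketch below shows the override pattern standalone; `DatasetCfg` and `ModelCfg` are hypothetical, trimmed stand-ins for `ASRDatasetConfig` and `EncDecCTCConfig`, not NeMo classes:

```python
from dataclasses import asdict, dataclass, field, replace
from typing import List

@dataclass
class DatasetCfg:  # hypothetical, trimmed analogue of ASRDatasetConfig
    manifest_filepath: str = ""
    sample_rate: int = 16000
    shuffle: bool = True

@dataclass
class ModelCfg:  # hypothetical, trimmed analogue of EncDecCTCConfig
    sample_rate: int = 16000
    labels: List[str] = field(default_factory=lambda: ["a", "b", "c"])
    train_ds: DatasetCfg = field(default_factory=DatasetCfg)

cfg = ModelCfg()
# Override nested defaults immutably, the way an OmegaConf merge of a
# YAML override file would; `cfg` itself keeps its 16 kHz defaults.
cfg_8k = replace(cfg, sample_rate=8000,
                 train_ds=replace(cfg.train_ds, sample_rate=8000))
```

Note the `default_factory` wrapping of mutable fields in this sketch: the NeMo configs above assign dataclass instances directly as defaults, which OmegaConf's structured-config machinery accepts but plain `dataclasses` on recent Python versions would reject.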
[start of nemo/collections/asr/models/configs/classification_models_config.py]
# Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from dataclasses import dataclass
from typing import Any, Dict, List, Optional
from omegaconf import MISSING
import nemo.core.classes.dataset
from nemo.collections.asr.modules.audio_preprocessing import (
AudioToMFCCPreprocessorConfig,
CropOrPadSpectrogramAugmentationConfig,
SpectrogramAugmentationConfig,
)
from nemo.collections.asr.modules.conv_asr import ConvASRDecoderClassificationConfig, ConvASREncoderConfig
from nemo.core.config import modelPT as model_cfg
@dataclass
class EncDecClassificationDatasetConfig(nemo.core.classes.dataset.DatasetConfig):
manifest_filepath: Optional[str] = None
sample_rate: int = MISSING
labels: List[str] = MISSING
trim_silence: bool = False
# Tarred dataset support
is_tarred: bool = False
tarred_audio_filepaths: Optional[str] = None
tarred_shard_strategy: str = "scatter"
shuffle_n: int = 0
# Optional
int_values: Optional[int] = None
augmentor: Optional[Dict[str, Any]] = None
max_duration: Optional[float] = None
min_duration: Optional[float] = None
cal_labels_occurrence: Optional[bool] = False
# VAD Optional
vad_stream: Optional[bool] = None
window_length_in_sec: float = 0.31
shift_length_in_sec: float = 0.01
normalize_audio: bool = False
is_regression_task: bool = False
# bucketing params
bucketing_strategy: str = "synced_randomized"
bucketing_batch_size: Optional[Any] = None
bucketing_weights: Optional[List[int]] = None
@dataclass
class EncDecClassificationConfig(model_cfg.ModelConfig):
# Model global arguments
sample_rate: int = 16000
repeat: int = 1
dropout: float = 0.0
separable: bool = True
kernel_size_factor: float = 1.0
labels: List[str] = MISSING
timesteps: int = MISSING
# Dataset configs
train_ds: EncDecClassificationDatasetConfig = EncDecClassificationDatasetConfig(
manifest_filepath=None, shuffle=True, trim_silence=False
)
validation_ds: EncDecClassificationDatasetConfig = EncDecClassificationDatasetConfig(
manifest_filepath=None, shuffle=False
)
test_ds: EncDecClassificationDatasetConfig = EncDecClassificationDatasetConfig(
manifest_filepath=None, shuffle=False
)
# Optimizer / Scheduler config
optim: Optional[model_cfg.OptimConfig] = model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
# Model component configs
preprocessor: AudioToMFCCPreprocessorConfig = AudioToMFCCPreprocessorConfig()
spec_augment: Optional[SpectrogramAugmentationConfig] = SpectrogramAugmentationConfig()
crop_or_pad_augment: Optional[CropOrPadSpectrogramAugmentationConfig] = CropOrPadSpectrogramAugmentationConfig(
audio_length=timesteps
)
encoder: ConvASREncoderConfig = ConvASREncoderConfig()
decoder: ConvASRDecoderClassificationConfig = ConvASRDecoderClassificationConfig()
@dataclass
class EncDecClassificationModelConfig(model_cfg.NemoConfig):
model: EncDecClassificationConfig = EncDecClassificationConfig()
[end of nemo/collections/asr/models/configs/classification_models_config.py]
[start of nemo/collections/asr/models/configs/diarizer_config.py]
# Copyright (c) 2023, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from dataclasses import asdict, dataclass
from typing import Any, Dict, Optional, Tuple, Union
@dataclass
class DiarizerComponentConfig:
"""Dataclass to imitate HydraConfig dict when accessing parameters."""
def get(self, name: str, default: Optional[Any] = None):
return getattr(self, name, default)
def __iter__(self):
for key in asdict(self):
yield key
def dict(self) -> Dict:
return asdict(self)
@dataclass
class ASRDiarizerCTCDecoderParams:
pretrained_language_model: Optional[str] = None # KenLM model file: .arpa model file or .bin binary file.
beam_width: int = 32
alpha: float = 0.5
beta: float = 2.5
@dataclass
class ASRRealigningLMParams:
# Provide a KenLM language model in .arpa format.
arpa_language_model: Optional[str] = None
# Min number of words for the left context.
min_number_of_words: int = 3
# Max number of words for the right context.
max_number_of_words: int = 10
# The threshold for the difference between two log probability values from two hypotheses.
logprob_diff_threshold: float = 1.2
@dataclass
class ASRDiarizerParams(DiarizerComponentConfig):
# if True, speech segmentation for diarization is based on word-timestamps from ASR inference.
asr_based_vad: bool = False
# Threshold (in sec) that caps the gap between two words when generating VAD timestamps using ASR based VAD.
asr_based_vad_threshold: float = 1.0
# Batch size can be dependent on each ASR model. Default batch sizes are applied if set to null.
asr_batch_size: Optional[int] = None
# Native decoder delay. null is recommended to use the default values for each ASR model.
decoder_delay_in_sec: Optional[float] = None
# Offset to set a reference point from the start of the word. Recommended range of values is [-0.05 0.2].
word_ts_anchor_offset: Optional[float] = None
# Select which part of the word timestamp we want to use. The options are: 'start', 'end', 'mid'.
word_ts_anchor_pos: str = "start"
# Fix the word timestamp using VAD output. You must provide a VAD model to use this feature.
fix_word_ts_with_VAD: bool = False
# If True, use colored text to distinguish speakers in the output transcript.
colored_text: bool = False
# If True, the start and end time of each speaker turn is printed in the output transcript.
print_time: bool = True
# If True, the output transcript breaks the line to fix the line width (default is 90 chars)
break_lines: bool = False
@dataclass
class ASRDiarizerConfig(DiarizerComponentConfig):
model_path: Optional[str] = "stt_en_conformer_ctc_large"
parameters: ASRDiarizerParams = ASRDiarizerParams()
ctc_decoder_parameters: ASRDiarizerCTCDecoderParams = ASRDiarizerCTCDecoderParams()
realigning_lm_parameters: ASRRealigningLMParams = ASRRealigningLMParams()
@dataclass
class VADParams(DiarizerComponentConfig):
window_length_in_sec: float = 0.15 # Window length in sec for VAD context input
shift_length_in_sec: float = 0.01 # Shift length in sec for generate frame level VAD prediction
smoothing: Union[str, bool] = "median" # False or type of smoothing method (eg: median)
overlap: float = 0.5 # Overlap ratio for overlapped mean/median smoothing filter
onset: float = 0.1 # Onset threshold for detecting the beginning and end of a speech
offset: float = 0.1 # Offset threshold for detecting the end of a speech
pad_onset: float = 0.1 # Adding durations before each speech segment
pad_offset: float = 0 # Adding durations after each speech segment
min_duration_on: float = 0 # Threshold for small non_speech deletion
min_duration_off: float = 0.2 # Threshold for short speech segment deletion
filter_speech_first: bool = True
@dataclass
class VADConfig(DiarizerComponentConfig):
model_path: str = "vad_multilingual_marblenet" # .nemo local model path or pretrained VAD model name
external_vad_manifest: Optional[str] = None
parameters: VADParams = VADParams()
@dataclass
class SpeakerEmbeddingsParams(DiarizerComponentConfig):
# Window length(s) in sec (floating-point number). either a number or a list. ex) 1.5 or [1.5,1.0,0.5]
window_length_in_sec: Tuple[float] = (1.5, 1.25, 1.0, 0.75, 0.5)
# Shift length(s) in sec (floating-point number). either a number or a list. ex) 0.75 or [0.75,0.5,0.25]
shift_length_in_sec: Tuple[float] = (0.75, 0.625, 0.5, 0.375, 0.25)
# Weight for each scale. None (for single scale) or list with window/shift scale count. ex) [0.33,0.33,0.33]
multiscale_weights: Tuple[float] = (1, 1, 1, 1, 1)
# save speaker embeddings in pickle format. True if clustering result is used for other models, such as MSDD.
save_embeddings: bool = True
@dataclass
class SpeakerEmbeddingsConfig(DiarizerComponentConfig):
# .nemo local model path or pretrained model name (titanet_large, ecapa_tdnn or speakerverification_speakernet)
model_path: Optional[str] = None
parameters: SpeakerEmbeddingsParams = SpeakerEmbeddingsParams()
@dataclass
class ClusteringParams(DiarizerComponentConfig):
# If True, use num of speakers value provided in manifest file.
oracle_num_speakers: bool = False
# Max number of speakers for each recording. If an oracle number of speakers is passed, this value is ignored.
max_num_speakers: int = 8
# If the number of segments is lower than this number, enhanced speaker counting is activated.
enhanced_count_thres: int = 80
# Determines the range of p-value search: 0 < p <= max_rp_threshold.
max_rp_threshold: float = 0.25
# The higher the number, the more p-values are examined, at the cost of more time.
sparse_search_volume: int = 30
# If True, take a majority vote on multiple p-values to estimate the number of speakers.
maj_vote_spk_count: bool = False
@dataclass
class ClusteringConfig(DiarizerComponentConfig):
parameters: ClusteringParams = ClusteringParams()
@dataclass
class MSDDParams(DiarizerComponentConfig):
# If True, use speaker embedding model in checkpoint, else provided speaker embedding model in config will be used.
use_speaker_model_from_ckpt: bool = True
# Batch size for MSDD inference.
infer_batch_size: int = 25
# Sigmoid threshold for generating binarized speaker labels. The smaller the value, the more generous the overlap detection.
sigmoid_threshold: Tuple[float] = (0.7,)
# If True, use the oracle number of speakers and evaluate F1 score for the given speaker sequences. Default is False.
seq_eval_mode: bool = False
# If True, break the input audio clip into short sequences and calculate cluster average embeddings for inference.
split_infer: bool = True
# The length of split short sequence when split_infer is True.
diar_window_length: int = 50
# If the estimated number of speakers is larger than this number, overlap speech is not estimated.
overlap_infer_spk_limit: int = 5
@dataclass
class MSDDConfig(DiarizerComponentConfig):
model_path: Optional[str] = "diar_msdd_telephonic"
parameters: MSDDParams = MSDDParams()
@dataclass
class DiarizerConfig(DiarizerComponentConfig):
manifest_filepath: Optional[str] = None
out_dir: Optional[str] = None
oracle_vad: bool = False # If True, uses RTTM files provided in the manifest file to get VAD timestamps
collar: float = 0.25 # Collar value for scoring
ignore_overlap: bool = True # Consider or ignore overlap segments while scoring
vad: VADConfig = VADConfig()
speaker_embeddings: SpeakerEmbeddingsConfig = SpeakerEmbeddingsConfig()
clustering: ClusteringConfig = ClusteringConfig()
msdd_model: MSDDConfig = MSDDConfig()
asr: ASRDiarizerConfig = ASRDiarizerConfig()
@dataclass
class NeuralDiarizerInferenceConfig(DiarizerComponentConfig):
diarizer: DiarizerConfig = DiarizerConfig()
device: str = "cpu"
verbose: bool = False
batch_size: int = 64
num_workers: int = 1
sample_rate: int = 16000
name: str = ""
@classmethod
def init_config(cls, diar_model_path: str, vad_model_path: str, map_location: str, verbose: bool):
return NeuralDiarizerInferenceConfig(
DiarizerConfig(
vad=VADConfig(model_path=vad_model_path), msdd_model=MSDDConfig(model_path=diar_model_path),
),
device=map_location,
verbose=verbose,
)
[end of nemo/collections/asr/models/configs/diarizer_config.py]
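The `get`/`__iter__`/`dict` helpers on `DiarizerComponentConfig` above let plain dataclasses be consumed by code written against Hydra-style dict configs. A standalone sketch of that duck-typing (the `ComponentConfig` and `VadCfg` names and fields here are hypothetical, modeled on the pattern above):

```python
from dataclasses import asdict, dataclass
from typing import Any, Optional

@dataclass
class ComponentConfig:
    """Re-statement of the DiarizerComponentConfig pattern above."""

    def get(self, name: str, default: Optional[Any] = None):
        return getattr(self, name, default)

    def __iter__(self):
        for key in asdict(self):
            yield key

    def dict(self):
        return asdict(self)

@dataclass
class VadCfg(ComponentConfig):  # hypothetical subset of VADParams
    onset: float = 0.1
    offset: float = 0.1

cfg = VadCfg()
# Code written against a DictConfig can stay unchanged:
onset = cfg.get("onset", 0.5)        # present field: attribute lookup
missing = cfg.get("pad_onset", 0.0)  # absent field: falls back to default
keys = list(cfg)                     # iterates field names like a dict
```

This is why the comment on the base class says it "imitates HydraConfig dict" access: downstream diarizer code can call `cfg.get(...)` or iterate keys without caring whether it received an OmegaConf `DictConfig` or one of these dataclasses.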
[start of nemo/collections/asr/models/configs/k2_sequence_models_config.py]
# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from dataclasses import dataclass
from nemo.collections.asr.models.configs.asr_models_config import EncDecCTCConfig
from nemo.collections.asr.parts.k2.classes import GraphModuleConfig as BackendConfig
from nemo.core.config.modelPT import NemoConfig
@dataclass
class GraphModuleConfig:
criterion_type: str = "ml"
loss_type: str = "ctc"
split_batch_size: int = 0
dec_type: str = "topo"
transcribe_training: bool = True
backend_cfg: BackendConfig = BackendConfig()
@dataclass
class EncDecK2SeqConfig(EncDecCTCConfig):
graph_module_cfg: GraphModuleConfig = GraphModuleConfig()
@dataclass
class EncDecK2SeqModelConfig(NemoConfig):
model: EncDecK2SeqConfig = EncDecK2SeqConfig()
[end of nemo/collections/asr/models/configs/k2_sequence_models_config.py]
[start of nemo/collections/asr/models/configs/matchboxnet_config.py]
# Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from dataclasses import dataclass, field
from typing import Any, Callable, List, Optional
from omegaconf import MISSING
from nemo.collections.asr.models.configs import classification_models_config as clf_cfg
from nemo.collections.asr.modules.audio_preprocessing import (
AudioToMFCCPreprocessorConfig,
CropOrPadSpectrogramAugmentationConfig,
SpectrogramAugmentationConfig,
)
from nemo.collections.asr.modules.conv_asr import (
ConvASRDecoderClassificationConfig,
ConvASREncoderConfig,
JasperEncoderConfig,
)
from nemo.core.config import modelPT as model_cfg
# fmt: off
def matchboxnet_3x1x64():
config = [
JasperEncoderConfig(filters=128, repeat=1, kernel=[11], stride=[1], dilation=[1], dropout=0.0,
residual=False, groups=1, separable=True, heads=-1, residual_mode='add',
residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=64, repeat=1, kernel=[13], stride=[1], dilation=[1], dropout=0.0,
residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=64, repeat=1, kernel=[15], stride=[1], dilation=[1], dropout=0.0,
residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=64, repeat=1, kernel=[17], stride=[1], dilation=[1], dropout=0.0,
residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=128, repeat=1, kernel=[29], stride=[1], dilation=[2], dropout=0.0,
residual=False, groups=1, separable=True, heads=-1, residual_mode='add',
residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=128, repeat=1, kernel=[1], stride=[1], dilation=[1], dropout=0.0,
residual=False, groups=1, separable=False, heads=-1, residual_mode='add',
residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False)
]
return config
def matchboxnet_3x1x64_vad():
config = [
JasperEncoderConfig(filters=128, repeat=1, kernel=[11], stride=[1], dilation=[1], dropout=0.0,
residual=False, groups=1, separable=True, heads=-1, residual_mode='add',
residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=64, repeat=1, kernel=[13], stride=[1], dilation=[1], dropout=0.0,
residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=64, repeat=1, kernel=[15], stride=[1], dilation=[1], dropout=0.0,
residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=64, repeat=1, kernel=[17], stride=[1], dilation=[1], dropout=0.0,
residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=128, repeat=1, kernel=[29], stride=[1], dilation=[2], dropout=0.0,
residual=False, groups=1, separable=True, heads=-1, residual_mode='add',
residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=128, repeat=1, kernel=[1], stride=[1], dilation=[1], dropout=0.0,
residual=False, groups=1, separable=False, heads=-1, residual_mode='add',
residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False)
]
return config
# fmt: on
@dataclass
class MatchboxNetModelConfig(clf_cfg.EncDecClassificationConfig):
# Model global arguments
sample_rate: int = 16000
repeat: int = 1
dropout: float = 0.0
separable: bool = True
kernel_size_factor: float = 1.0
timesteps: int = 128
labels: List[str] = MISSING
# Dataset configs
train_ds: clf_cfg.EncDecClassificationDatasetConfig = clf_cfg.EncDecClassificationDatasetConfig(
manifest_filepath=None, shuffle=True, trim_silence=False
)
validation_ds: clf_cfg.EncDecClassificationDatasetConfig = clf_cfg.EncDecClassificationDatasetConfig(
manifest_filepath=None, shuffle=False
)
test_ds: clf_cfg.EncDecClassificationDatasetConfig = clf_cfg.EncDecClassificationDatasetConfig(
manifest_filepath=None, shuffle=False
)
# Optimizer / Scheduler config
optim: Optional[model_cfg.OptimConfig] = model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
# Model general component configs
preprocessor: AudioToMFCCPreprocessorConfig = AudioToMFCCPreprocessorConfig(window_size=0.025)
spec_augment: Optional[SpectrogramAugmentationConfig] = SpectrogramAugmentationConfig(
freq_masks=2, time_masks=2, freq_width=15, time_width=25, rect_masks=5, rect_time=25, rect_freq=15
)
crop_or_pad_augment: Optional[CropOrPadSpectrogramAugmentationConfig] = CropOrPadSpectrogramAugmentationConfig(
audio_length=128
)
encoder: ConvASREncoderConfig = ConvASREncoderConfig(activation="relu")
decoder: ConvASRDecoderClassificationConfig = ConvASRDecoderClassificationConfig()
@dataclass
class MatchboxNetVADModelConfig(MatchboxNetModelConfig):
timesteps: int = 64
labels: List[str] = field(default_factory=lambda: ['background', 'speech'])
crop_or_pad_augment: Optional[CropOrPadSpectrogramAugmentationConfig] = None
class EncDecClassificationModelConfigBuilder(model_cfg.ModelConfigBuilder):
VALID_CONFIGS = ['matchboxnet_3x1x64', 'matchboxnet_3x1x64_vad']
def __init__(self, name: str = 'matchboxnet_3x1x64', encoder_cfg_func: Optional[Callable[[], List[Any]]] = None):
if name not in EncDecClassificationModelConfigBuilder.VALID_CONFIGS:
raise ValueError("`name` must be one of : \n" f"{EncDecClassificationModelConfigBuilder.VALID_CONFIGS}")
self.name = name
if 'matchboxnet_3x1x64_vad' in name:
if encoder_cfg_func is None:
encoder_cfg_func = matchboxnet_3x1x64_vad
model_cfg = MatchboxNetVADModelConfig(
repeat=1,
separable=True,
encoder=ConvASREncoderConfig(jasper=encoder_cfg_func(), activation="relu"),
decoder=ConvASRDecoderClassificationConfig(),
)
elif 'matchboxnet_3x1x64' in name:
if encoder_cfg_func is None:
encoder_cfg_func = matchboxnet_3x1x64
model_cfg = MatchboxNetModelConfig(
repeat=1,
separable=False,
spec_augment=SpectrogramAugmentationConfig(rect_masks=5, rect_freq=50, rect_time=120),
encoder=ConvASREncoderConfig(jasper=encoder_cfg_func(), activation="relu"),
decoder=ConvASRDecoderClassificationConfig(),
)
else:
raise ValueError(f"Invalid config name submitted to {self.__class__.__name__}")
super(EncDecClassificationModelConfigBuilder, self).__init__(model_cfg)
self.model_cfg: clf_cfg.EncDecClassificationConfig = model_cfg # enable type hinting
def set_labels(self, labels: List[str]):
self.model_cfg.labels = labels
def set_separable(self, separable: bool):
self.model_cfg.separable = separable
def set_repeat(self, repeat: int):
self.model_cfg.repeat = repeat
def set_sample_rate(self, sample_rate: int):
self.model_cfg.sample_rate = sample_rate
def set_dropout(self, dropout: float = 0.0):
self.model_cfg.dropout = dropout
def set_timesteps(self, timesteps: int):
self.model_cfg.timesteps = timesteps
def set_is_regression_task(self, is_regression_task: bool):
self.model_cfg.is_regression_task = is_regression_task
    # Note: Autocomplete for users won't work without these overrides
    # But practically it is not needed since python will infer at runtime
# def set_train_ds(self, cfg: Optional[clf_cfg.EncDecClassificationDatasetConfig] = None):
# super().set_train_ds(cfg)
#
# def set_validation_ds(self, cfg: Optional[clf_cfg.EncDecClassificationDatasetConfig] = None):
# super().set_validation_ds(cfg)
#
# def set_test_ds(self, cfg: Optional[clf_cfg.EncDecClassificationDatasetConfig] = None):
# super().set_test_ds(cfg)
def _finalize_cfg(self):
# propagate labels
self.model_cfg.train_ds.labels = self.model_cfg.labels
self.model_cfg.validation_ds.labels = self.model_cfg.labels
self.model_cfg.test_ds.labels = self.model_cfg.labels
self.model_cfg.decoder.vocabulary = self.model_cfg.labels
# propagate num classes
self.model_cfg.decoder.num_classes = len(self.model_cfg.labels)
# propagate sample rate
self.model_cfg.sample_rate = self.model_cfg.sample_rate
self.model_cfg.preprocessor.sample_rate = self.model_cfg.sample_rate
self.model_cfg.train_ds.sample_rate = self.model_cfg.sample_rate
self.model_cfg.validation_ds.sample_rate = self.model_cfg.sample_rate
self.model_cfg.test_ds.sample_rate = self.model_cfg.sample_rate
# propagate filters
self.model_cfg.encoder.feat_in = self.model_cfg.preprocessor.features
self.model_cfg.decoder.feat_in = self.model_cfg.encoder.jasper[-1].filters
        # propagate timesteps
if self.model_cfg.crop_or_pad_augment is not None:
self.model_cfg.crop_or_pad_augment.audio_length = self.model_cfg.timesteps
# propagate separable
for layer in self.model_cfg.encoder.jasper[:-1]: # type: JasperEncoderConfig
layer.separable = self.model_cfg.separable
# propagate repeat
for layer in self.model_cfg.encoder.jasper[1:-2]: # type: JasperEncoderConfig
layer.repeat = self.model_cfg.repeat
# propagate dropout
for layer in self.model_cfg.encoder.jasper: # type: JasperEncoderConfig
layer.dropout = self.model_cfg.dropout
def build(self) -> clf_cfg.EncDecClassificationConfig:
return super().build()
[end of nemo/collections/asr/models/configs/matchboxnet_config.py]
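The `_finalize_cfg` method above propagates top-level builder fields (separable, repeat, dropout) into nested per-layer encoder configs using slice-based rules. A minimal self-contained sketch of that propagation pattern, using plain dataclasses in place of the NeMo config classes (the `Layer`/`Cfg` names here are hypothetical, chosen only for illustration):

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Layer:
    filters: int
    separable: bool = False
    repeat: int = 1
    dropout: float = 0.0


@dataclass
class Cfg:
    dropout: float
    repeat: int
    separable: bool
    layers: List[Layer] = field(default_factory=list)


def finalize(cfg: Cfg) -> Cfg:
    # Mirror the slicing used by _finalize_cfg:
    # - separable applies to all but the last layer
    # - repeat applies to all but the first layer and the last two
    # - dropout applies to every layer
    for layer in cfg.layers[:-1]:
        layer.separable = cfg.separable
    for layer in cfg.layers[1:-2]:
        layer.repeat = cfg.repeat
    for layer in cfg.layers:
        layer.dropout = cfg.dropout
    return cfg


cfg = finalize(
    Cfg(dropout=0.1, repeat=5, separable=True,
        layers=[Layer(128), Layer(64), Layer(64), Layer(128), Layer(1024)])
)
```

The slice boundaries matter: the prologue (first) and epilogue (last two) blocks keep `repeat=1`, matching how the MatchboxNet/QuartzNet topologies keep their stem and head blocks un-repeated.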
[start of nemo/collections/asr/models/configs/quartznet_config.py]
# Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from dataclasses import dataclass
from typing import Any, Callable, List, Optional
from omegaconf import MISSING
from nemo.collections.asr.models.configs import asr_models_config as ctc_cfg
from nemo.collections.asr.modules.audio_preprocessing import (
AudioToMelSpectrogramPreprocessorConfig,
SpectrogramAugmentationConfig,
)
from nemo.collections.asr.modules.conv_asr import ConvASRDecoderConfig, ConvASREncoderConfig, JasperEncoderConfig
from nemo.core.config import modelPT as model_cfg
# fmt: off
def qn_15x5():
config = [
JasperEncoderConfig(filters=256, repeat=1, kernel=[33], stride=[2], dilation=[1], dropout=0.0,
residual=False, groups=1, separable=True, heads=-1, residual_mode='add',
residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=256, repeat=5, kernel=[33], stride=[1], dilation=[1], dropout=0.0,
residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=256, repeat=5, kernel=[33], stride=[1], dilation=[1], dropout=0.0,
residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=256, repeat=5, kernel=[33], stride=[1], dilation=[1], dropout=0.0,
residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=256, repeat=5, kernel=[39], stride=[1], dilation=[1], dropout=0.0,
residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=256, repeat=5, kernel=[39], stride=[1], dilation=[1], dropout=0.0,
residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=256, repeat=5, kernel=[39], stride=[1], dilation=[1], dropout=0.0,
residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=512, repeat=5, kernel=[51], stride=[1], dilation=[1], dropout=0.0,
residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=512, repeat=5, kernel=[51], stride=[1], dilation=[1], dropout=0.0,
residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=512, repeat=5, kernel=[51], stride=[1], dilation=[1], dropout=0.0,
residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=512, repeat=5, kernel=[63], stride=[1], dilation=[1], dropout=0.0,
residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=512, repeat=5, kernel=[63], stride=[1], dilation=[1], dropout=0.0,
residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=512, repeat=5, kernel=[63], stride=[1], dilation=[1], dropout=0.0,
residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=512, repeat=5, kernel=[75], stride=[1], dilation=[1], dropout=0.0,
residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=512, repeat=5, kernel=[75], stride=[1], dilation=[1], dropout=0.0,
residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=512, repeat=5, kernel=[75], stride=[1], dilation=[1], dropout=0.0,
residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=512, repeat=1, kernel=[87], stride=[1], dilation=[2], dropout=0.0,
residual=False, groups=1, separable=True, heads=-1, residual_mode='add',
residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=1024, repeat=1, kernel=[1], stride=[1], dilation=[1], dropout=0.0,
residual=False, groups=1, separable=False, heads=-1, residual_mode='add',
residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False)
]
return config
def jasper_10x5_dr():
config = [
JasperEncoderConfig(filters=256, repeat=1, kernel=[11], stride=[2], dilation=[1], dropout=0.2,
residual=False, groups=1, separable=False, heads=-1, residual_mode='add',
residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=256, repeat=5, kernel=[11], stride=[1], dilation=[1], dropout=0.2,
residual=True, groups=1, separable=False, heads=-1, residual_mode='add',
residual_dense=True, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=256, repeat=5, kernel=[11], stride=[1], dilation=[1], dropout=0.2,
residual=True, groups=1, separable=False, heads=-1, residual_mode='add',
residual_dense=True, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=384, repeat=5, kernel=[13], stride=[1], dilation=[1], dropout=0.2,
residual=True, groups=1, separable=False, heads=-1, residual_mode='add',
residual_dense=True, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=384, repeat=5, kernel=[13], stride=[1], dilation=[1], dropout=0.2,
residual=True, groups=1, separable=False, heads=-1, residual_mode='add',
residual_dense=True, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=512, repeat=5, kernel=[17], stride=[1], dilation=[1], dropout=0.2,
residual=True, groups=1, separable=False, heads=-1, residual_mode='add',
residual_dense=True, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=512, repeat=5, kernel=[17], stride=[1], dilation=[1], dropout=0.2,
residual=True, groups=1, separable=False, heads=-1, residual_mode='add',
residual_dense=True, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=640, repeat=5, kernel=[21], stride=[1], dilation=[1], dropout=0.3,
residual=True, groups=1, separable=False, heads=-1, residual_mode='add',
residual_dense=True, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=640, repeat=5, kernel=[21], stride=[1], dilation=[1], dropout=0.3,
residual=True, groups=1, separable=False, heads=-1, residual_mode='add',
residual_dense=True, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=768, repeat=5, kernel=[25], stride=[1], dilation=[1], dropout=0.3,
residual=True, groups=1, separable=False, heads=-1, residual_mode='add',
residual_dense=True, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=768, repeat=5, kernel=[25], stride=[1], dilation=[1], dropout=0.3,
residual=True, groups=1, separable=False, heads=-1, residual_mode='add',
residual_dense=True, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=896, repeat=1, kernel=[29], stride=[1], dilation=[2], dropout=0.4,
residual=False, groups=1, separable=False, heads=-1, residual_mode='add',
residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
JasperEncoderConfig(filters=1024, repeat=1, kernel=[1], stride=[1], dilation=[1], dropout=0.4,
residual=False, groups=1, separable=False, heads=-1, residual_mode='add',
residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False)
]
return config
# fmt: on
@dataclass
class JasperModelConfig(ctc_cfg.EncDecCTCConfig):
# Model global arguments
sample_rate: int = 16000
repeat: int = 1
dropout: float = 0.0
separable: bool = False
labels: List[str] = MISSING
# Dataset configs
train_ds: ctc_cfg.ASRDatasetConfig = ctc_cfg.ASRDatasetConfig(
manifest_filepath=None, shuffle=True, trim_silence=True
)
validation_ds: ctc_cfg.ASRDatasetConfig = ctc_cfg.ASRDatasetConfig(manifest_filepath=None, shuffle=False)
test_ds: ctc_cfg.ASRDatasetConfig = ctc_cfg.ASRDatasetConfig(manifest_filepath=None, shuffle=False)
# Optimizer / Scheduler config
optim: Optional[model_cfg.OptimConfig] = model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
# Model general component configs
preprocessor: AudioToMelSpectrogramPreprocessorConfig = AudioToMelSpectrogramPreprocessorConfig()
spec_augment: Optional[SpectrogramAugmentationConfig] = SpectrogramAugmentationConfig()
encoder: ConvASREncoderConfig = ConvASREncoderConfig(activation="relu")
decoder: ConvASRDecoderConfig = ConvASRDecoderConfig()
@dataclass
class QuartzNetModelConfig(JasperModelConfig):
separable: bool = True
class EncDecCTCModelConfigBuilder(model_cfg.ModelConfigBuilder):
VALID_CONFIGS = ['quartznet_15x5', 'quartznet_15x5_zh', 'jasper_10x5dr']
def __init__(self, name: str = 'quartznet_15x5', encoder_cfg_func: Optional[Callable[[], List[Any]]] = None):
if name not in EncDecCTCModelConfigBuilder.VALID_CONFIGS:
raise ValueError("`name` must be one of : \n" f"{EncDecCTCModelConfigBuilder.VALID_CONFIGS}")
self.name = name
if 'quartznet_15x5' in name:
if encoder_cfg_func is None:
encoder_cfg_func = qn_15x5
model_cfg = QuartzNetModelConfig(
repeat=5,
separable=True,
spec_augment=SpectrogramAugmentationConfig(rect_masks=5, rect_freq=50, rect_time=120),
encoder=ConvASREncoderConfig(jasper=encoder_cfg_func(), activation="relu"),
decoder=ConvASRDecoderConfig(),
)
elif 'jasper_10x5' in name:
if encoder_cfg_func is None:
encoder_cfg_func = jasper_10x5_dr
model_cfg = JasperModelConfig(
repeat=5,
separable=False,
spec_augment=SpectrogramAugmentationConfig(rect_masks=5, rect_freq=50, rect_time=120),
encoder=ConvASREncoderConfig(jasper=encoder_cfg_func(), activation="relu"),
decoder=ConvASRDecoderConfig(),
)
else:
raise ValueError(f"Invalid config name submitted to {self.__class__.__name__}")
super(EncDecCTCModelConfigBuilder, self).__init__(model_cfg)
self.model_cfg: ctc_cfg.EncDecCTCConfig = model_cfg # enable type hinting
if 'zh' in name:
self.set_dataset_normalize(normalize=False)
def set_labels(self, labels: List[str]):
self.model_cfg.labels = labels
def set_separable(self, separable: bool):
self.model_cfg.separable = separable
def set_repeat(self, repeat: int):
self.model_cfg.repeat = repeat
def set_sample_rate(self, sample_rate: int):
self.model_cfg.sample_rate = sample_rate
def set_dropout(self, dropout: float = 0.0):
self.model_cfg.dropout = dropout
def set_dataset_normalize(self, normalize: bool):
self.model_cfg.train_ds.normalize = normalize
self.model_cfg.validation_ds.normalize = normalize
self.model_cfg.test_ds.normalize = normalize
    # Note: Autocomplete for users won't work without these overrides
    # But practically it is not needed since python will infer at runtime
# def set_train_ds(self, cfg: Optional[ctc_cfg.ASRDatasetConfig] = None):
# super().set_train_ds(cfg)
#
# def set_validation_ds(self, cfg: Optional[ctc_cfg.ASRDatasetConfig] = None):
# super().set_validation_ds(cfg)
#
# def set_test_ds(self, cfg: Optional[ctc_cfg.ASRDatasetConfig] = None):
# super().set_test_ds(cfg)
def _finalize_cfg(self):
# propagate labels
self.model_cfg.train_ds.labels = self.model_cfg.labels
self.model_cfg.validation_ds.labels = self.model_cfg.labels
self.model_cfg.test_ds.labels = self.model_cfg.labels
self.model_cfg.decoder.vocabulary = self.model_cfg.labels
# propagate num classes
self.model_cfg.decoder.num_classes = len(self.model_cfg.labels)
# propagate sample rate
self.model_cfg.sample_rate = self.model_cfg.sample_rate
self.model_cfg.preprocessor.sample_rate = self.model_cfg.sample_rate
self.model_cfg.train_ds.sample_rate = self.model_cfg.sample_rate
self.model_cfg.validation_ds.sample_rate = self.model_cfg.sample_rate
self.model_cfg.test_ds.sample_rate = self.model_cfg.sample_rate
# propagate filters
self.model_cfg.encoder.feat_in = self.model_cfg.preprocessor.features
self.model_cfg.decoder.feat_in = self.model_cfg.encoder.jasper[-1].filters
# propagate separable
for layer in self.model_cfg.encoder.jasper[:-1]: # type: JasperEncoderConfig
layer.separable = self.model_cfg.separable
# propagate repeat
for layer in self.model_cfg.encoder.jasper[1:-2]: # type: JasperEncoderConfig
layer.repeat = self.model_cfg.repeat
# propagate dropout
for layer in self.model_cfg.encoder.jasper: # type: JasperEncoderConfig
layer.dropout = self.model_cfg.dropout
def build(self) -> ctc_cfg.EncDecCTCConfig:
return super().build()
[end of nemo/collections/asr/models/configs/quartznet_config.py]
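Both builder classes follow the same pattern: validate a config name against `VALID_CONFIGS`, populate a dataclass, expose `set_*` mutators, and derive dependent fields (e.g. `num_classes` from `labels`) at `build()` time. A stripped-down, self-contained sketch of that lifecycle (the `ModelCfg`/`CfgBuilder` names are hypothetical; method chaining is added here for brevity and is not part of the NeMo API):

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class ModelCfg:
    labels: List[str] = field(default_factory=list)
    sample_rate: int = 16000
    num_classes: int = 0


class CfgBuilder:
    VALID_CONFIGS = ['quartznet_15x5', 'jasper_10x5dr']

    def __init__(self, name: str = 'quartznet_15x5'):
        # Fail fast on unknown topology names, as the NeMo builders do.
        if name not in self.VALID_CONFIGS:
            raise ValueError(f"`name` must be one of: {self.VALID_CONFIGS}")
        self.cfg = ModelCfg()

    def set_labels(self, labels: List[str]) -> "CfgBuilder":
        self.cfg.labels = labels
        return self

    def build(self) -> ModelCfg:
        # Finalize: derive dependent fields from user-set ones.
        self.cfg.num_classes = len(self.cfg.labels)
        return self.cfg


cfg = CfgBuilder('quartznet_15x5').set_labels(['a', 'b', 'c']).build()
```

Deriving `num_classes` (and, in the real builders, decoder vocabulary and feature widths) only at build time means users set each value exactly once and cannot leave the nested sub-configs inconsistent.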
[start of nemo/collections/asr/modules/audio_preprocessing.py]
# Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import math
import random
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Any, Dict, Optional, Tuple
import torch
from packaging import version
from nemo.collections.asr.parts.numba.spec_augment import SpecAugmentNumba, spec_augment_launch_heuristics
from nemo.collections.asr.parts.preprocessing.features import (
FilterbankFeatures,
FilterbankFeaturesTA,
make_seq_mask_like,
)
from nemo.collections.asr.parts.submodules.spectr_augment import SpecAugment, SpecCutout
from nemo.core.classes import Exportable, NeuralModule, typecheck
from nemo.core.neural_types import (
AudioSignal,
LengthsType,
MelSpectrogramType,
MFCCSpectrogramType,
NeuralType,
SpectrogramType,
)
from nemo.core.utils import numba_utils
from nemo.core.utils.numba_utils import __NUMBA_MINIMUM_VERSION__
from nemo.utils import logging
try:
import torchaudio
import torchaudio.functional
import torchaudio.transforms
TORCHAUDIO_VERSION = version.parse(torchaudio.__version__)
TORCHAUDIO_VERSION_MIN = version.parse('0.5')
HAVE_TORCHAUDIO = True
except ModuleNotFoundError:
HAVE_TORCHAUDIO = False
__all__ = [
'AudioToMelSpectrogramPreprocessor',
'AudioToSpectrogram',
'SpectrogramToAudio',
'AudioToMFCCPreprocessor',
'SpectrogramAugmentation',
'MaskedPatchAugmentation',
'CropOrPadSpectrogramAugmentation',
]
class AudioPreprocessor(NeuralModule, ABC):
"""
An interface for Neural Modules that performs audio pre-processing,
transforming the wav files to features.
"""
def __init__(self, win_length, hop_length):
super().__init__()
self.win_length = win_length
self.hop_length = hop_length
self.torch_windows = {
'hann': torch.hann_window,
'hamming': torch.hamming_window,
'blackman': torch.blackman_window,
'bartlett': torch.bartlett_window,
'ones': torch.ones,
None: torch.ones,
}
@typecheck()
@torch.no_grad()
def forward(self, input_signal, length):
processed_signal, processed_length = self.get_features(input_signal, length)
return processed_signal, processed_length
@abstractmethod
def get_features(self, input_signal, length):
# Called by forward(). Subclasses should implement this.
pass
class AudioToMelSpectrogramPreprocessor(AudioPreprocessor, Exportable):
"""Featurizer module that converts wavs to mel spectrograms.
Args:
sample_rate (int): Sample rate of the input audio data.
Defaults to 16000
window_size (float): Size of window for fft in seconds
Defaults to 0.02
window_stride (float): Stride of window for fft in seconds
Defaults to 0.01
n_window_size (int): Size of window for fft in samples
Defaults to None. Use one of window_size or n_window_size.
n_window_stride (int): Stride of window for fft in samples
Defaults to None. Use one of window_stride or n_window_stride.
window (str): Windowing function for fft. can be one of ['hann',
'hamming', 'blackman', 'bartlett']
Defaults to "hann"
normalize (str): Can be one of ['per_feature', 'all_features']; all
other options disable feature normalization. 'all_features'
normalizes the entire spectrogram to be mean 0 with std 1.
            'per_feature' normalizes per channel / freq instead.
Defaults to "per_feature"
n_fft (int): Length of FT window. If None, it uses the smallest power
of 2 that is larger than n_window_size.
Defaults to None
preemph (float): Amount of pre emphasis to add to audio. Can be
disabled by passing None.
Defaults to 0.97
features (int): Number of mel spectrogram freq bins to output.
Defaults to 64
lowfreq (int): Lower bound on mel basis in Hz.
Defaults to 0
        highfreq (int): Upper bound on mel basis in Hz.
Defaults to None
log (bool): Log features.
Defaults to True
log_zero_guard_type(str): Need to avoid taking the log of zero. There
are two options: "add" or "clamp".
Defaults to "add".
log_zero_guard_value(float, or str): Add or clamp requires the number
to add with or clamp to. log_zero_guard_value can either be a float
or "tiny" or "eps". torch.finfo is used if "tiny" or "eps" is
passed.
Defaults to 2**-24.
dither (float): Amount of white-noise dithering.
Defaults to 1e-5
pad_to (int): Ensures that the output size of the time dimension is
a multiple of pad_to.
Defaults to 16
frame_splicing (int): Defaults to 1
exact_pad (bool): If True, sets stft center to False and adds padding, such that num_frames = audio_length
// hop_length. Defaults to False.
pad_value (float): The value that shorter mels are padded with.
Defaults to 0
mag_power (float): The power that the linear spectrogram is raised to
prior to multiplication with mel basis.
Defaults to 2 for a power spec
rng : Random number generator
nb_augmentation_prob (float) : Probability with which narrowband augmentation would be applied to
samples in the batch.
Defaults to 0.0
nb_max_freq (int) : Frequency above which all frequencies will be masked for narrowband augmentation.
Defaults to 4000
use_torchaudio: Whether to use the `torchaudio` implementation.
mel_norm: Normalization used for mel filterbank weights.
Defaults to 'slaney' (area normalization)
stft_exact_pad: Deprecated argument, kept for compatibility with older checkpoints.
stft_conv: Deprecated argument, kept for compatibility with older checkpoints.
"""
def save_to(self, save_path: str):
pass
@classmethod
def restore_from(cls, restore_path: str):
pass
@property
def input_types(self):
"""Returns definitions of module input ports.
"""
return {
"input_signal": NeuralType(('B', 'T'), AudioSignal(freq=self._sample_rate)),
"length": NeuralType(
tuple('B'), LengthsType()
), # Please note that length should be in samples not seconds.
}
@property
def output_types(self):
"""Returns definitions of module output ports.
processed_signal:
0: AxisType(BatchTag)
1: AxisType(MelSpectrogramSignalTag)
2: AxisType(ProcessedTimeTag)
processed_length:
0: AxisType(BatchTag)
"""
return {
"processed_signal": NeuralType(('B', 'D', 'T'), MelSpectrogramType()),
"processed_length": NeuralType(tuple('B'), LengthsType()),
}
def __init__(
self,
sample_rate=16000,
window_size=0.02,
window_stride=0.01,
n_window_size=None,
n_window_stride=None,
window="hann",
normalize="per_feature",
n_fft=None,
preemph=0.97,
features=64,
lowfreq=0,
highfreq=None,
log=True,
log_zero_guard_type="add",
log_zero_guard_value=2 ** -24,
dither=1e-5,
pad_to=16,
frame_splicing=1,
exact_pad=False,
pad_value=0,
mag_power=2.0,
rng=None,
nb_augmentation_prob=0.0,
nb_max_freq=4000,
use_torchaudio: bool = False,
mel_norm="slaney",
stft_exact_pad=False, # Deprecated arguments; kept for config compatibility
stft_conv=False, # Deprecated arguments; kept for config compatibility
):
super().__init__(n_window_size, n_window_stride)
self._sample_rate = sample_rate
if window_size and n_window_size:
raise ValueError(f"{self} received both window_size and " f"n_window_size. Only one should be specified.")
if window_stride and n_window_stride:
raise ValueError(
f"{self} received both window_stride and " f"n_window_stride. Only one should be specified."
)
if window_size:
n_window_size = int(window_size * self._sample_rate)
if window_stride:
n_window_stride = int(window_stride * self._sample_rate)
# Given the long and similar argument list, point to the class and instantiate it by reference
if not use_torchaudio:
featurizer_class = FilterbankFeatures
else:
featurizer_class = FilterbankFeaturesTA
self.featurizer = featurizer_class(
sample_rate=self._sample_rate,
n_window_size=n_window_size,
n_window_stride=n_window_stride,
window=window,
normalize=normalize,
n_fft=n_fft,
preemph=preemph,
nfilt=features,
lowfreq=lowfreq,
highfreq=highfreq,
log=log,
log_zero_guard_type=log_zero_guard_type,
log_zero_guard_value=log_zero_guard_value,
dither=dither,
pad_to=pad_to,
frame_splicing=frame_splicing,
exact_pad=exact_pad,
pad_value=pad_value,
mag_power=mag_power,
rng=rng,
nb_augmentation_prob=nb_augmentation_prob,
nb_max_freq=nb_max_freq,
mel_norm=mel_norm,
stft_exact_pad=stft_exact_pad, # Deprecated arguments; kept for config compatibility
stft_conv=stft_conv, # Deprecated arguments; kept for config compatibility
)
def input_example(self, max_batch: int = 8, max_dim: int = 32000, min_length: int = 200):
batch_size = torch.randint(low=1, high=max_batch, size=[1]).item()
max_length = torch.randint(low=min_length, high=max_dim, size=[1]).item()
signals = torch.rand(size=[batch_size, max_length]) * 2 - 1
lengths = torch.randint(low=min_length, high=max_dim, size=[batch_size])
lengths[0] = max_length
return signals, lengths
def get_features(self, input_signal, length):
return self.featurizer(input_signal, length)
@property
def filter_banks(self):
return self.featurizer.filter_banks
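As a minimal illustration of the constructor logic above, the conversion from second-based `window_size`/`window_stride` to sample counts (and the default `n_fft` as the next power of two) can be sketched without NeMo; `window_params_in_samples` is a hypothetical helper, not part of the module:

```python
import math

# Hypothetical helper mirroring the arithmetic in
# AudioToMelSpectrogramPreprocessor.__init__ and FilterbankFeatures.
def window_params_in_samples(sample_rate, window_size, window_stride):
    # seconds -> samples
    n_window_size = int(window_size * sample_rate)
    n_window_stride = int(window_stride * sample_rate)
    # n_fft defaults to the smallest power of 2 >= n_window_size
    n_fft = 2 ** math.ceil(math.log2(n_window_size))
    return n_window_size, n_window_stride, n_fft
```

With the defaults (16 kHz, 20 ms window, 10 ms stride) this gives a 320-sample window, a 160-sample hop, and a 512-point FFT.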
class AudioToMFCCPreprocessor(AudioPreprocessor):
"""Preprocessor that converts wavs to MFCCs.
Uses torchaudio.transforms.MFCC.
Args:
sample_rate: The sample rate of the audio.
Defaults to 16000.
window_size: Size of window for fft in seconds. Used to calculate the
win_length arg for mel spectrogram.
Defaults to 0.02
        window_stride: Stride of window for fft in seconds. Used to calculate
            the hop_length arg for mel spectrogram.
Defaults to 0.01
n_window_size: Size of window for fft in samples
Defaults to None. Use one of window_size or n_window_size.
n_window_stride: Stride of window for fft in samples
Defaults to None. Use one of window_stride or n_window_stride.
window: Windowing function for fft. can be one of ['hann',
'hamming', 'blackman', 'bartlett', 'none', 'null'].
Defaults to 'hann'
n_fft: Length of FT window. If None, it uses the smallest power of 2
that is larger than n_window_size.
Defaults to None
lowfreq (int): Lower bound on mel basis in Hz.
Defaults to 0
        highfreq (int): Upper bound on mel basis in Hz.
Defaults to None
n_mels: Number of mel filterbanks.
Defaults to 64
n_mfcc: Number of coefficients to retain
Defaults to 64
dct_type: Type of discrete cosine transform to use
norm: Type of norm to use
log: Whether to use log-mel spectrograms instead of db-scaled.
Defaults to True.
"""
@property
def input_types(self):
"""Returns definitions of module input ports.
"""
return {
"input_signal": NeuralType(('B', 'T'), AudioSignal(freq=self._sample_rate)),
"length": NeuralType(tuple('B'), LengthsType()),
}
@property
def output_types(self):
"""Returns definitions of module output ports.
"""
return {
"processed_signal": NeuralType(('B', 'D', 'T'), MFCCSpectrogramType()),
"processed_length": NeuralType(tuple('B'), LengthsType()),
}
def save_to(self, save_path: str):
pass
@classmethod
def restore_from(cls, restore_path: str):
pass
def __init__(
self,
sample_rate=16000,
window_size=0.02,
window_stride=0.01,
n_window_size=None,
n_window_stride=None,
window='hann',
n_fft=None,
lowfreq=0.0,
highfreq=None,
n_mels=64,
n_mfcc=64,
dct_type=2,
norm='ortho',
log=True,
):
self._sample_rate = sample_rate
if not HAVE_TORCHAUDIO:
logging.error('Could not import torchaudio. Some features might not work.')
raise ModuleNotFoundError(
"torchaudio is not installed but is necessary for "
"AudioToMFCCPreprocessor. We recommend you try "
"building it from source for the PyTorch version you have."
)
if window_size and n_window_size:
raise ValueError(f"{self} received both window_size and " f"n_window_size. Only one should be specified.")
if window_stride and n_window_stride:
raise ValueError(
f"{self} received both window_stride and " f"n_window_stride. Only one should be specified."
)
# Get win_length (n_window_size) and hop_length (n_window_stride)
if window_size:
n_window_size = int(window_size * self._sample_rate)
if window_stride:
n_window_stride = int(window_stride * self._sample_rate)
super().__init__(n_window_size, n_window_stride)
mel_kwargs = {}
mel_kwargs['f_min'] = lowfreq
mel_kwargs['f_max'] = highfreq
mel_kwargs['n_mels'] = n_mels
mel_kwargs['n_fft'] = n_fft or 2 ** math.ceil(math.log2(n_window_size))
mel_kwargs['win_length'] = n_window_size
mel_kwargs['hop_length'] = n_window_stride
# Set window_fn. None defaults to torch.ones.
window_fn = self.torch_windows.get(window, None)
if window_fn is None:
raise ValueError(
                f"Window argument for AudioProcessor is invalid: {window}. "
                f"For no window function, use 'ones' or None."
)
mel_kwargs['window_fn'] = window_fn
# Use torchaudio's implementation of MFCCs as featurizer
self.featurizer = torchaudio.transforms.MFCC(
sample_rate=self._sample_rate,
n_mfcc=n_mfcc,
dct_type=dct_type,
norm=norm,
log_mels=log,
melkwargs=mel_kwargs,
)
def get_features(self, input_signal, length):
features = self.featurizer(input_signal)
seq_len = torch.ceil(length.to(torch.float32) / self.hop_length).to(dtype=torch.long)
return features, seq_len
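The `seq_len` computed in `get_features` is simply the ceiling of the valid sample count divided by the hop length; a pure-Python sketch of that formula (the helper name is illustrative, not part of the module):

```python
import math

# Mirrors seq_len = ceil(length / hop_length) in
# AudioToMFCCPreprocessor.get_features, without torch.
def mfcc_frame_count(num_samples, hop_length):
    return math.ceil(num_samples / hop_length)
```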
class SpectrogramAugmentation(NeuralModule):
"""
Performs time and freq cuts in one of two ways.
SpecAugment zeroes out vertical and horizontal sections as described in
SpecAugment (https://arxiv.org/abs/1904.08779). Arguments for use with
SpecAugment are `freq_masks`, `time_masks`, `freq_width`, and `time_width`.
SpecCutout zeroes out rectangulars as described in Cutout
(https://arxiv.org/abs/1708.04552). Arguments for use with Cutout are
`rect_masks`, `rect_freq`, and `rect_time`.
Args:
freq_masks (int): how many frequency segments should be cut.
Defaults to 0.
time_masks (int): how many time segments should be cut
Defaults to 0.
freq_width (int): maximum number of frequencies to be cut in one
segment.
Defaults to 10.
time_width (int): maximum number of time steps to be cut in one
segment
Defaults to 10.
rect_masks (int): how many rectangular masks should be cut
Defaults to 0.
        rect_freq (int): maximum size of cut rectangles along the frequency
            dimension
            Defaults to 20.
        rect_time (int): maximum size of cut rectangles along the time
            dimension
            Defaults to 5.
"""
@property
def input_types(self):
"""Returns definitions of module input types
"""
return {
"input_spec": NeuralType(('B', 'D', 'T'), SpectrogramType()),
"length": NeuralType(tuple('B'), LengthsType()),
}
@property
def output_types(self):
"""Returns definitions of module output types
"""
return {"augmented_spec": NeuralType(('B', 'D', 'T'), SpectrogramType())}
def __init__(
self,
freq_masks=0,
time_masks=0,
freq_width=10,
time_width=10,
rect_masks=0,
rect_time=5,
rect_freq=20,
rng=None,
mask_value=0.0,
use_numba_spec_augment: bool = True,
):
super().__init__()
if rect_masks > 0:
self.spec_cutout = SpecCutout(rect_masks=rect_masks, rect_time=rect_time, rect_freq=rect_freq, rng=rng,)
# self.spec_cutout.to(self._device)
else:
self.spec_cutout = lambda input_spec: input_spec
if freq_masks + time_masks > 0:
self.spec_augment = SpecAugment(
freq_masks=freq_masks,
time_masks=time_masks,
freq_width=freq_width,
time_width=time_width,
rng=rng,
mask_value=mask_value,
)
else:
self.spec_augment = lambda input_spec, length: input_spec
# Check if numba is supported, and use a Numba kernel if it is
if use_numba_spec_augment and numba_utils.numba_cuda_is_supported(__NUMBA_MINIMUM_VERSION__):
logging.info('Numba CUDA SpecAugment kernel is being used')
self.spec_augment_numba = SpecAugmentNumba(
freq_masks=freq_masks,
time_masks=time_masks,
freq_width=freq_width,
time_width=time_width,
rng=rng,
mask_value=mask_value,
)
else:
self.spec_augment_numba = None
@typecheck()
def forward(self, input_spec, length):
augmented_spec = self.spec_cutout(input_spec=input_spec)
# To run the Numba kernel, correct numba version is required as well as
# tensor must be on GPU and length must be provided
if self.spec_augment_numba is not None and spec_augment_launch_heuristics(augmented_spec, length):
augmented_spec = self.spec_augment_numba(input_spec=augmented_spec, length=length)
else:
augmented_spec = self.spec_augment(input_spec=augmented_spec, length=length)
return augmented_spec
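The effect of a single SpecAugment-style time mask can be sketched in pure Python: pick a random width up to `time_width`, then zero a contiguous band of frames. This is an illustrative toy (`apply_time_mask` is hypothetical, operating on a 1-D list rather than a spectrogram tensor):

```python
import random

# Toy sketch of one time mask: zero out a random contiguous band of up to
# `time_width` frames, as SpecAugment does along the time axis.
def apply_time_mask(frames, time_width, rng=None):
    rng = rng or random.Random(0)
    width = rng.randint(0, min(time_width, len(frames)))
    if width == 0:
        return list(frames)
    start = rng.randint(0, len(frames) - width)
    return [0.0 if start <= i < start + width else v for i, v in enumerate(frames)]
```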
class MaskedPatchAugmentation(NeuralModule):
"""
Zeroes out fixed size time patches of the spectrogram.
All samples in batch are guaranteed to have the same amount of masked time steps.
Optionally also performs frequency masking in the same way as SpecAugment.
Args:
patch_size (int): up to how many time steps does one patch consist of.
Defaults to 48.
mask_patches (float): how many patches should be masked in each sample.
if >= 1., interpreted as number of patches (after converting to int)
if <1., interpreted as fraction of total tokens to be masked (number of patches is rounded up)
Defaults to 10.
freq_masks (int): how many frequency segments should be cut.
Defaults to 0.
freq_width (int): maximum number of frequencies to be cut in a segment.
Defaults to 0.
"""
@property
def input_types(self):
"""Returns definitions of module input types
"""
return {
"input_spec": NeuralType(('B', 'D', 'T'), SpectrogramType()),
"length": NeuralType(tuple('B'), LengthsType()),
}
@property
def output_types(self):
"""Returns definitions of module output types
"""
return {"augmented_spec": NeuralType(('B', 'D', 'T'), SpectrogramType())}
def __init__(
self, patch_size: int = 48, mask_patches: float = 10.0, freq_masks: int = 0, freq_width: int = 0,
):
super().__init__()
self.patch_size = patch_size
if mask_patches >= 1:
self.mask_patches = int(mask_patches)
elif mask_patches >= 0:
self._mask_fraction = mask_patches
self.mask_patches = None
else:
raise ValueError('mask_patches cannot be negative')
if freq_masks > 0:
self.spec_augment = SpecAugment(freq_masks=freq_masks, time_masks=0, freq_width=freq_width, time_width=0,)
else:
self.spec_augment = None
@typecheck()
def forward(self, input_spec, length):
augmented_spec = input_spec
min_len = torch.min(length)
if self.mask_patches is None:
# masking specified as fraction
len_fraction = int(min_len * self._mask_fraction)
mask_patches = len_fraction // self.patch_size + int(len_fraction % self.patch_size != 0)
else:
mask_patches = self.mask_patches
if min_len < self.patch_size * mask_patches:
mask_patches = min_len // self.patch_size
for idx in range(input_spec.shape[0]):
cur_len = length[idx]
patches = range(cur_len // self.patch_size)
masked_patches = random.sample(patches, mask_patches)
for mp in masked_patches:
augmented_spec[idx, :, mp * self.patch_size : (mp + 1) * self.patch_size] = 0.0
if self.spec_augment is not None:
augmented_spec = self.spec_augment(input_spec=augmented_spec, length=length)
return augmented_spec
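The patch-count arithmetic in `forward` above (fractional budgets rounded up to whole patches, capped by the shortest sample) can be isolated as a small pure-Python function; `patches_to_mask` is a hypothetical helper mirroring that logic:

```python
# Mirrors MaskedPatchAugmentation's patch-count logic:
# - mask_patches >= 1 is a literal patch count,
# - mask_patches < 1 is a fraction of min_len, ceil-divided by patch_size,
# - the result is capped so all patches fit into the shortest sample.
def patches_to_mask(min_len, patch_size, mask_patches):
    if mask_patches >= 1:
        n = int(mask_patches)
    else:
        len_fraction = int(min_len * mask_patches)
        n = len_fraction // patch_size + int(len_fraction % patch_size != 0)
    if min_len < patch_size * n:
        n = min_len // patch_size
    return n
```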
class CropOrPadSpectrogramAugmentation(NeuralModule):
"""
Pad or Crop the incoming Spectrogram to a certain shape.
Args:
audio_length (int): the final number of timesteps that is required.
The signal will be either padded or cropped temporally to this
size.
"""
def __init__(self, audio_length):
super(CropOrPadSpectrogramAugmentation, self).__init__()
self.audio_length = audio_length
@typecheck()
@torch.no_grad()
def forward(self, input_signal, length):
image = input_signal
num_images = image.shape[0]
audio_length = self.audio_length
image_len = image.shape[-1]
# Crop long signal
if image_len > audio_length: # randomly slice
cutout_images = []
            offsets = torch.randint(low=0, high=image_len - audio_length + 1, size=[num_images])
            for idx, offset in enumerate(offsets):
cutout_images.append(image[idx : idx + 1, :, offset : offset + audio_length])
image = torch.cat(cutout_images, dim=0)
del cutout_images
else: # symmetrically pad short signal with zeros
pad_left = (audio_length - image_len) // 2
pad_right = (audio_length - image_len) // 2
if (audio_length - image_len) % 2 == 1:
pad_right += 1
image = torch.nn.functional.pad(image, [pad_left, pad_right], mode="constant", value=0)
# Replace dynamic length sequences with static number of timesteps
length = (length * 0) + audio_length
return image, length
@property
def input_types(self):
        """Returns definitions of module input ports."""
return {
"input_signal": NeuralType(('B', 'D', 'T'), SpectrogramType()),
"length": NeuralType(tuple('B'), LengthsType()),
}
@property
def output_types(self):
"""Returns definitions of module output ports.
"""
return {
"processed_signal": NeuralType(('B', 'D', 'T'), SpectrogramType()),
"processed_length": NeuralType(tuple('B'), LengthsType()),
}
def save_to(self, save_path: str):
pass
@classmethod
def restore_from(cls, restore_path: str):
pass
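The symmetric padding branch of `CropOrPadSpectrogramAugmentation.forward` can be sketched without torch: the deficit is split evenly, with any odd leftover frame going to the right. `symmetric_pad` is a hypothetical helper mirroring that arithmetic:

```python
# Mirrors the padding math in CropOrPadSpectrogramAugmentation.forward:
# split the deficit evenly; an odd deficit puts the extra frame on the right.
def symmetric_pad(image_len, audio_length):
    pad_left = (audio_length - image_len) // 2
    pad_right = (audio_length - image_len) // 2
    if (audio_length - image_len) % 2 == 1:
        pad_right += 1
    return pad_left, pad_right
```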
class AudioToSpectrogram(NeuralModule):
"""Transform a batch of input multi-channel signals into a batch of
STFT-based spectrograms.
Args:
fft_length: length of FFT
hop_length: length of hops/shifts of the sliding window
power: exponent for magnitude spectrogram. Default `None` will
return a complex-valued spectrogram
"""
def __init__(self, fft_length: int, hop_length: int, power: Optional[float] = None):
if not HAVE_TORCHAUDIO:
logging.error('Could not import torchaudio. Some features might not work.')
raise ModuleNotFoundError(
                f"torchaudio is not installed but is necessary to instantiate a {self.__class__.__name__}"
)
super().__init__()
# For now, assume FFT length is divisible by two
if fft_length % 2 != 0:
raise ValueError(f'fft_length = {fft_length} must be divisible by 2')
self.stft = torchaudio.transforms.Spectrogram(
n_fft=fft_length, hop_length=hop_length, power=power, pad_mode='constant'
)
# number of subbands
self.F = fft_length // 2 + 1
@property
def num_subbands(self) -> int:
return self.F
@property
def input_types(self) -> Dict[str, NeuralType]:
        """Returns definitions of module input ports."""
return {
"input": NeuralType(('B', 'C', 'T'), AudioSignal()),
"input_length": NeuralType(('B',), LengthsType(), optional=True),
}
@property
def output_types(self) -> Dict[str, NeuralType]:
"""Returns definitions of module output ports.
"""
return {
"output": NeuralType(('B', 'C', 'D', 'T'), SpectrogramType()),
"output_length": NeuralType(('B',), LengthsType()),
}
@typecheck()
def forward(
self, input: torch.Tensor, input_length: Optional[torch.Tensor] = None
) -> Tuple[torch.Tensor, torch.Tensor]:
"""Convert a batch of C-channel input signals
into a batch of complex-valued spectrograms.
Args:
input: Time-domain input signal with C channels, shape (B, C, T)
input_length: Length of valid entries along the time dimension, shape (B,)
Returns:
Output spectrogram with F subbands and N time frames, shape (B, C, F, N)
and output length with shape (B,).
"""
B, T = input.size(0), input.size(-1)
input = input.view(B, -1, T)
# STFT output (B, C, F, N)
with torch.cuda.amp.autocast(enabled=False):
output = self.stft(input.float())
if input_length is not None:
# Mask padded frames
output_length = self.get_output_length(input_length=input_length)
length_mask: torch.Tensor = make_seq_mask_like(
lengths=output_length, like=output, time_dim=-1, valid_ones=False
)
output = output.masked_fill(length_mask, 0.0)
else:
# Assume all frames are valid for all examples in the batch
output_length = output.size(-1) * torch.ones(B, device=output.device).long()
return output, output_length
def get_output_length(self, input_length: torch.Tensor) -> torch.Tensor:
"""Get length of valid frames for the output.
Args:
input_length: number of valid samples, shape (B,)
Returns:
Number of valid frames, shape (B,)
"""
output_length = input_length.div(self.stft.hop_length, rounding_mode='floor').add(1).long()
return output_length
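With the `'constant'` (centered) padding used by the `Spectrogram` transform, the frame count in `get_output_length` reduces to `floor(num_samples / hop_length) + 1`; a pure-Python sketch (the helper name is illustrative):

```python
# Mirrors AudioToSpectrogram.get_output_length:
# number of STFT frames for a centered STFT with the given hop length.
def stft_frame_count(num_samples, hop_length):
    return num_samples // hop_length + 1
```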
class SpectrogramToAudio(NeuralModule):
"""Transform a batch of input multi-channel spectrograms into a batch of
time-domain multi-channel signals.
Args:
fft_length: length of FFT
hop_length: length of hops/shifts of the sliding window
    """
def __init__(self, fft_length: int, hop_length: int):
if not HAVE_TORCHAUDIO:
logging.error('Could not import torchaudio. Some features might not work.')
raise ModuleNotFoundError(
                f"torchaudio is not installed but is necessary to instantiate a {self.__class__.__name__}"
)
super().__init__()
# For now, assume FFT length is divisible by two
if fft_length % 2 != 0:
raise ValueError(f'fft_length = {fft_length} must be divisible by 2')
self.istft = torchaudio.transforms.InverseSpectrogram(
n_fft=fft_length, hop_length=hop_length, pad_mode='constant'
)
self.F = fft_length // 2 + 1
@property
def num_subbands(self) -> int:
return self.F
@property
def input_types(self) -> Dict[str, NeuralType]:
        """Returns definitions of module input ports."""
return {
"input": NeuralType(('B', 'C', 'D', 'T'), SpectrogramType()),
"input_length": NeuralType(('B',), LengthsType(), optional=True),
}
@property
def output_types(self) -> Dict[str, NeuralType]:
"""Returns definitions of module output ports.
"""
return {
"output": NeuralType(('B', 'C', 'T'), AudioSignal()),
"output_length": NeuralType(('B',), LengthsType()),
}
@typecheck()
def forward(self, input: torch.Tensor, input_length: Optional[torch.Tensor] = None) -> torch.Tensor:
"""Convert input complex-valued spectrogram to a time-domain
signal. Multi-channel IO is supported.
Args:
input: Input spectrogram for C channels, shape (B, C, F, N)
input_length: Length of valid entries along the time dimension, shape (B,)
Returns:
Time-domain signal with T time-domain samples and C channels, (B, C, T)
and output length with shape (B,).
"""
B, F, N = input.size(0), input.size(-2), input.size(-1)
assert F == self.F, f'Number of subbands F={F} not matching self.F={self.F}'
input = input.view(B, -1, F, N)
# iSTFT output (B, C, T)
with torch.cuda.amp.autocast(enabled=False):
output = self.istft(input.cfloat())
if input_length is not None:
# Mask padded samples
output_length = self.get_output_length(input_length=input_length)
length_mask: torch.Tensor = make_seq_mask_like(
lengths=output_length, like=output, time_dim=-1, valid_ones=False
)
output = output.masked_fill(length_mask, 0.0)
else:
# Assume all frames are valid for all examples in the batch
output_length = output.size(-1) * torch.ones(B, device=output.device).long()
return output, output_length
def get_output_length(self, input_length: torch.Tensor) -> torch.Tensor:
"""Get length of valid samples for the output.
Args:
input_length: number of valid frames, shape (B,)
Returns:
Number of valid samples, shape (B,)
"""
output_length = input_length.sub(1).mul(self.istft.hop_length).long()
return output_length
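`SpectrogramToAudio.get_output_length` is the inverse of the frame-count formula in `AudioToSpectrogram`: `(num_frames - 1) * hop_length` recovers the sample count for hop-aligned lengths. A pure-Python sketch of the round trip (helper names are illustrative):

```python
# Mirrors SpectrogramToAudio.get_output_length.
def istft_sample_count(num_frames, hop_length):
    return (num_frames - 1) * hop_length

# Forward direction, as in AudioToSpectrogram.get_output_length.
def stft_frame_count(num_samples, hop_length):
    return num_samples // hop_length + 1
```

For a hop-aligned input (e.g. 16000 samples at hop 160), the STFT/iSTFT length computations round-trip exactly.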
@dataclass
class AudioToMelSpectrogramPreprocessorConfig:
_target_: str = "nemo.collections.asr.modules.AudioToMelSpectrogramPreprocessor"
sample_rate: int = 16000
window_size: float = 0.02
window_stride: float = 0.01
n_window_size: Optional[int] = None
n_window_stride: Optional[int] = None
window: str = "hann"
normalize: str = "per_feature"
n_fft: Optional[int] = None
preemph: float = 0.97
features: int = 64
lowfreq: int = 0
highfreq: Optional[int] = None
log: bool = True
log_zero_guard_type: str = "add"
log_zero_guard_value: float = 2 ** -24
dither: float = 1e-5
pad_to: int = 16
frame_splicing: int = 1
exact_pad: bool = False
pad_value: int = 0
mag_power: float = 2.0
rng: Optional[str] = None
nb_augmentation_prob: float = 0.0
nb_max_freq: int = 4000
use_torchaudio: bool = False
mel_norm: str = "slaney"
stft_exact_pad: bool = False # Deprecated argument, kept for compatibility with older checkpoints.
stft_conv: bool = False # Deprecated argument, kept for compatibility with older checkpoints.
@dataclass
class AudioToMFCCPreprocessorConfig:
_target_: str = 'nemo.collections.asr.modules.AudioToMFCCPreprocessor'
sample_rate: int = 16000
window_size: float = 0.02
window_stride: float = 0.01
n_window_size: Optional[int] = None
n_window_stride: Optional[int] = None
window: str = 'hann'
n_fft: Optional[int] = None
lowfreq: Optional[float] = 0.0
highfreq: Optional[float] = None
n_mels: int = 64
n_mfcc: int = 64
dct_type: int = 2
norm: str = 'ortho'
log: bool = True
@dataclass
class SpectrogramAugmentationConfig:
_target_: str = "nemo.collections.asr.modules.SpectrogramAugmentation"
freq_masks: int = 0
time_masks: int = 0
freq_width: int = 0
time_width: Optional[Any] = 0
rect_masks: int = 0
rect_time: int = 0
rect_freq: int = 0
mask_value: float = 0
rng: Optional[Any] = None # random.Random() type
use_numba_spec_augment: bool = True
@dataclass
class CropOrPadSpectrogramAugmentationConfig:
audio_length: int
_target_: str = "nemo.collections.asr.modules.CropOrPadSpectrogramAugmentation"
@dataclass
class MaskedPatchAugmentationConfig:
patch_size: int = 48
mask_patches: float = 10.0
freq_masks: int = 0
freq_width: int = 0
_target_: str = "nemo.collections.asr.modules.MaskedPatchAugmentation"
[end of nemo/collections/asr/modules/audio_preprocessing.py]
[start of nemo/collections/asr/parts/k2/classes.py]
# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from abc import ABC
from dataclasses import dataclass
from typing import Any, Optional, Tuple
import torch
from omegaconf import DictConfig
from nemo.utils import logging
@dataclass
class GraphIntersectDenseConfig:
"""Graph dense intersection config.
"""
search_beam: float = 20.0
output_beam: float = 10.0
min_active_states: int = 30
max_active_states: int = 10000
@dataclass
class GraphModuleConfig:
"""Config for graph modules.
Typically used with graph losses and decoders.
"""
topo_type: str = "default"
topo_with_self_loops: bool = True
token_lm: Optional[Any] = None
intersect_pruned: bool = False
intersect_conf: GraphIntersectDenseConfig = GraphIntersectDenseConfig()
boost_coeff: float = 0.0
predictor_window_size: int = 0
predictor_step_size: int = 1
class ASRK2Mixin(ABC):
"""k2 Mixin class that simplifies the construction of various models with k2-based losses.
It does the following:
- Sets up the graph loss and decoder (methods _init_k2 and update_k2_modules).
- Registers external graphs, if needed.
- Augments forward(...) with optional graph decoding to get accurate predictions.
"""
def _init_k2(self):
"""
k2-related initialization implementation.
This method is expected to run after the __init__ which sets self._cfg
self._cfg is expected to have the attribute graph_module_cfg
"""
if not hasattr(self, "_cfg"):
raise ValueError("self._cfg must be set before calling _init_k2().")
if not hasattr(self._cfg, "graph_module_cfg") or self._cfg.graph_module_cfg is None:
raise ValueError("self._cfg.graph_module_cfg must be set and cannot be None.")
self.graph_module_cfg = self._cfg.graph_module_cfg
# register token_lm for MAPLoss
criterion_type = self.graph_module_cfg.get("criterion_type", "ml")
self.use_graph_lm = criterion_type == "map"
if self.use_graph_lm:
token_lm_path = self.graph_module_cfg.backend_cfg.get("token_lm", None)
if token_lm_path is None:
raise ValueError(
f"graph_module_cfg.backend_cfg.token_lm is empty. It must be set for criterion_type == `{criterion_type}`"
)
token_lm_path = self.register_artifact('graph_module_cfg.backend_cfg.token_lm', token_lm_path)
self.graph_module_cfg.backend_cfg["token_lm"] = token_lm_path
self.update_k2_modules(self.graph_module_cfg)
def update_k2_modules(self, input_cfg: DictConfig):
"""
Helper function to initialize or update k2 loss and transcribe_decoder.
Args:
input_cfg: DictConfig to take new parameters from. Schema is expected as in
nemo.collections.asr.models.configs.k2_sequence_models_config.GraphModuleConfig
"""
del self.loss
if hasattr(self, "transcribe_decoder"):
del self.transcribe_decoder
if hasattr(self, "joint"):
# RNNT
num_classes = self.joint.num_classes_with_blank - 1
else:
# CTC, MMI, ...
num_classes = self.decoder.num_classes_with_blank - 1
remove_consecutive = input_cfg.backend_cfg.get("topo_with_self_loops", True) and input_cfg.backend_cfg.get(
"topo_type", "default"
) not in ["forced_blank", "identity",]
self._wer.remove_consecutive = remove_consecutive
from nemo.collections.asr.losses.lattice_losses import LatticeLoss
self.loss = LatticeLoss(
num_classes=num_classes,
reduction=self._cfg.get("ctc_reduction", "mean_batch"),
backend="k2",
criterion_type=input_cfg.get("criterion_type", "ml"),
loss_type=input_cfg.get("loss_type", "ctc"),
split_batch_size=input_cfg.get("split_batch_size", 0),
graph_module_cfg=input_cfg.backend_cfg,
)
criterion_type = self.loss.criterion_type
self.use_graph_lm = criterion_type == "map"
transcribe_training = input_cfg.get("transcribe_training", False)
if transcribe_training and criterion_type == "ml":
logging.warning(
f"""You do not need to use transcribe_training=`{transcribe_training}`
with criterion_type=`{criterion_type}`. transcribe_training will be set to False."""
)
transcribe_training = False
self.transcribe_training = transcribe_training
if self.use_graph_lm:
from nemo.collections.asr.modules.graph_decoder import ViterbiDecoderWithGraph
self.transcribe_decoder = ViterbiDecoderWithGraph(
num_classes=num_classes,
backend="k2",
dec_type="token_lm",
return_type="1best",
return_ilabels=True,
output_aligned=True,
split_batch_size=input_cfg.get("split_batch_size", 0),
graph_module_cfg=input_cfg.backend_cfg,
)
def _forward_k2_post_processing(
self, log_probs: torch.Tensor, encoded_length: torch.Tensor, greedy_predictions: torch.Tensor
) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
"""
        k2-related post-processing part of .forward()
Args:
log_probs: The log probabilities tensor of shape [B, T, D].
encoded_length: The lengths of the acoustic sequence after propagation through the encoder, of shape [B].
greedy_predictions: The greedy token predictions of the model of shape [B, T]
Returns:
A tuple of 3 elements -
1) The log probabilities tensor of shape [B, T, D].
2) The lengths of the acoustic sequence after propagation through the encoder, of shape [B].
3) The greedy token predictions of the model of shape [B, T] (via argmax)
"""
# greedy_predictions from .forward() are incorrect for criterion_type=`map`
# getting correct greedy_predictions, if needed
if self.use_graph_lm and (not self.training or self.transcribe_training):
greedy_predictions, encoded_length, _ = self.transcribe_decoder.forward(
log_probs=log_probs, log_probs_length=encoded_length
)
return log_probs, encoded_length, greedy_predictions
[end of nemo/collections/asr/parts/k2/classes.py]
[start of nemo/collections/asr/parts/submodules/adapters/multi_head_attention_adapter_module.py]
# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import math
from dataclasses import dataclass
from typing import Any, Optional
import torch
from torch import nn as nn
from nemo.collections.asr.parts.submodules import multi_head_attention as mha
from nemo.collections.common.parts import adapter_modules
from nemo.core.classes.mixins import adapter_mixin_strategies
class MHAResidualAddAdapterStrategy(adapter_mixin_strategies.ResidualAddAdapterStrategy):
"""
An implementation of residual addition of an adapter module with its input for the MHA Adapters.
"""
    def forward(self, input: dict, adapter: torch.nn.Module, *, module: 'AdapterModuleMixin'):
"""
A basic strategy, comprising of a residual connection over the input, after forward pass by
the underlying adapter. Additional work is done to pack and unpack the dictionary of inputs and outputs.
Note: The `value` tensor is added to the output of the attention adapter as the residual connection.
Args:
input: A dictionary of multiple input arguments for the adapter module.
                `query`, `key`, `value`: Original output tensor of the module, or the output of the
                    previous adapter (if more than one adapter is enabled).
`mask`: Attention mask.
`pos_emb`: Optional positional embedding for relative encoding.
adapter: The adapter module that is currently required to perform the forward pass.
module: The calling module, in its entirety. It is a module that implements `AdapterModuleMixin`,
therefore the strategy can access all other adapters in this module via `module.adapter_layer`.
Returns:
The result tensor, after one of the active adapters has finished its forward passes.
"""
out = self.compute_output(input, adapter, module=module)
# If not in training mode, or probability of stochastic depth is 0, skip step.
p = self.stochastic_depth
if not module.training or p == 0.0:
pass
else:
out = self.apply_stochastic_depth(out, input['value'], adapter, module=module)
# Return the residual connection output = input + adapter(input)
result = input['value'] + out
# If l2_lambda is activated, register the loss value
self.compute_auxiliary_losses(result, input['value'], adapter, module=module)
return result
def compute_output(
self, input: torch.Tensor, adapter: torch.nn.Module, *, module: 'AdapterModuleMixin'
) -> torch.Tensor:
"""
Compute the output of a single adapter to some input.
Args:
            input: Original output tensor of the module, or the output of the previous adapter (if more than
                one adapter is enabled).
adapter: The adapter module that is currently required to perform the forward pass.
module: The calling module, in its entirety. It is a module that implements `AdapterModuleMixin`,
therefore the strategy can access all other adapters in this module via `module.adapter_layer`.
Returns:
The result tensor, after one of the active adapters has finished its forward passes.
"""
if isinstance(input, (list, tuple)):
out = adapter(*input)
elif isinstance(input, dict):
out = adapter(**input)
else:
out = adapter(input)
return out
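The dispatch in `compute_output` is plain Python: list/tuple inputs are unpacked positionally, dicts as keyword arguments, and anything else is passed through as a single argument. A self-contained sketch of that pattern (`call_adapter` is a hypothetical stand-in, not part of the module):

```python
# Mirrors the input dispatch in MHAResidualAddAdapterStrategy.compute_output.
def call_adapter(adapter, input):
    if isinstance(input, (list, tuple)):
        return adapter(*input)    # positional unpacking
    elif isinstance(input, dict):
        return adapter(**input)   # keyword unpacking
    return adapter(input)         # single-argument passthrough
```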
@dataclass
class MHAResidualAddAdapterStrategyConfig(adapter_mixin_strategies.ResidualAddAdapterStrategyConfig):
_target_: str = "{0}.{1}".format(
MHAResidualAddAdapterStrategy.__module__, MHAResidualAddAdapterStrategy.__name__
) # mandatory field
class MultiHeadAttentionAdapter(mha.MultiHeadAttention, adapter_modules.AdapterModuleUtil):
"""Multi-Head Attention layer of Transformer.
Args:
n_head (int): number of heads
n_feat (int): size of the features
dropout_rate (float): dropout rate
proj_dim (int, optional): Optional integer value for projection before computing attention.
If None, then there is no projection (equivalent to proj_dim = n_feat).
If > 0, then will project the n_feat to proj_dim before calculating attention.
            If < 0, then proj_dim will equal n_head, so that each head has a projected dimension of 1.
adapter_strategy: By default, MHAResidualAddAdapterStrategyConfig. An adapter composition function object.
"""
def __init__(
self,
n_head: int,
n_feat: int,
dropout_rate: float,
proj_dim: Optional[int] = None,
adapter_strategy: MHAResidualAddAdapterStrategy = None,
):
super().__init__(n_head=n_head, n_feat=n_feat, dropout_rate=dropout_rate, max_cache_len=0)
self.pre_norm = nn.LayerNorm(n_feat)
# Set the projection dim to number of heads automatically
if proj_dim is not None and proj_dim < 1:
proj_dim = n_head
self.proj_dim = proj_dim
# Recompute weights for projection dim
if self.proj_dim is not None:
if self.proj_dim % n_head != 0:
raise ValueError(f"proj_dim ({proj_dim}) is not divisible by n_head ({n_head})")
self.d_k = self.proj_dim // n_head
self.s_d_k = math.sqrt(self.d_k)
self.linear_q = nn.Linear(n_feat, self.proj_dim)
self.linear_k = nn.Linear(n_feat, self.proj_dim)
self.linear_v = nn.Linear(n_feat, self.proj_dim)
self.linear_out = nn.Linear(self.proj_dim, n_feat)
# Setup adapter strategy
self.setup_adapter_strategy(adapter_strategy)
# reset parameters for Q to be identity operation
self.reset_parameters()
def forward(self, query, key, value, mask, pos_emb=None, cache=None):
"""Compute 'Scaled Dot Product Attention'.
Args:
query (torch.Tensor): (batch, time1, size)
key (torch.Tensor): (batch, time2, size)
value(torch.Tensor): (batch, time2, size)
mask (torch.Tensor): (batch, time1, time2)
cache (torch.Tensor) : (batch, time_cache, size)
returns:
output (torch.Tensor): transformed `value` (batch, time1, d_model) weighted by the query dot key attention
cache (torch.Tensor) : (batch, time_cache_next, size)
"""
# Need to perform duplicate computations as at this point the tensors have been
# separated by the adapter forward
query = self.pre_norm(query)
key = self.pre_norm(key)
value = self.pre_norm(value)
return super().forward(query, key, value, mask, pos_emb, cache=cache)
def reset_parameters(self):
with torch.no_grad():
nn.init.zeros_(self.linear_out.weight)
nn.init.zeros_(self.linear_out.bias)
def get_default_strategy_config(self) -> 'dataclass':
return MHAResidualAddAdapterStrategyConfig()
@dataclass
class MultiHeadAttentionAdapterConfig:
n_head: int
n_feat: int
dropout_rate: float = 0.0
proj_dim: Optional[int] = None
adapter_strategy: Optional[Any] = MHAResidualAddAdapterStrategyConfig()
_target_: str = "{0}.{1}".format(MultiHeadAttentionAdapter.__module__, MultiHeadAttentionAdapter.__name__)
class RelPositionMultiHeadAttentionAdapter(mha.RelPositionMultiHeadAttention, adapter_modules.AdapterModuleUtil):
"""Multi-Head Attention layer of Transformer-XL with support of relative positional encoding.
Paper: https://arxiv.org/abs/1901.02860
Args:
n_head (int): number of heads
n_feat (int): size of the features
dropout_rate (float): dropout rate
proj_dim (int, optional): Optional integer value for projection before computing attention.
If None, then there is no projection (equivalent to proj_dim = n_feat).
If > 0, then will project the n_feat to proj_dim before calculating attention.
If <0, then will equal n_head, so that each head has a projected dimension of 1.
adapter_strategy: By default, MHAResidualAddAdapterStrategyConfig. An adapter composition function object.
"""
def __init__(
self,
n_head: int,
n_feat: int,
dropout_rate: float,
proj_dim: Optional[int] = None,
adapter_strategy: MHAResidualAddAdapterStrategyConfig = None,
):
super().__init__(
n_head=n_head, n_feat=n_feat, dropout_rate=dropout_rate, pos_bias_u=None, pos_bias_v=None, max_cache_len=0
)
self.pre_norm = nn.LayerNorm(n_feat)
# Set the projection dim to number of heads automatically
if proj_dim is not None and proj_dim < 1:
proj_dim = n_head
self.proj_dim = proj_dim
# Recompute weights for projection dim
if self.proj_dim is not None:
if self.proj_dim % n_head != 0:
raise ValueError(f"proj_dim ({proj_dim}) is not divisible by n_head ({n_head})")
self.d_k = self.proj_dim // n_head
self.s_d_k = math.sqrt(self.d_k)
self.linear_q = nn.Linear(n_feat, self.proj_dim)
self.linear_k = nn.Linear(n_feat, self.proj_dim)
self.linear_v = nn.Linear(n_feat, self.proj_dim)
self.linear_out = nn.Linear(self.proj_dim, n_feat)
self.linear_pos = nn.Linear(n_feat, self.proj_dim, bias=False)
self.pos_bias_u = nn.Parameter(torch.FloatTensor(self.h, self.d_k))
self.pos_bias_v = nn.Parameter(torch.FloatTensor(self.h, self.d_k))
# Setup adapter strategy
self.setup_adapter_strategy(adapter_strategy)
# reset parameters for Q to be identity operation
self.reset_parameters()
def forward(self, query, key, value, mask, pos_emb, cache=None):
"""Compute 'Scaled Dot Product Attention' with rel. positional encoding.
Args:
query (torch.Tensor): (batch, time1, size)
key (torch.Tensor): (batch, time2, size)
value(torch.Tensor): (batch, time2, size)
mask (torch.Tensor): (batch, time1, time2)
pos_emb (torch.Tensor) : (batch, time1, size)
cache (torch.Tensor) : (batch, time_cache, size)
Returns:
output (torch.Tensor): transformed `value` (batch, time1, d_model) weighted by the query dot key attention
cache_next (torch.Tensor) : (batch, time_cache_next, size)
"""
# Need to perform duplicate computations as at this point the tensors have been
# separated by the adapter forward
query = self.pre_norm(query)
key = self.pre_norm(key)
value = self.pre_norm(value)
return super().forward(query, key, value, mask, pos_emb, cache=cache)
def reset_parameters(self):
with torch.no_grad():
nn.init.zeros_(self.linear_out.weight)
nn.init.zeros_(self.linear_out.bias)
# NOTE: This exact procedure apparently highly important.
# Above operation is safe to do as self.linear_out.weight *= 0.0 (similar for bias)
# However:
# DO NOT REPLACE BELOW WITH self.pos_bias_u *= 0.0 OR self.pos_bias_v *= 0.0
# For some reason at init sometimes it will cause the value of the tensor to become NaN
# All operations to compute matrix_ac and matrix_bd will then fail.
nn.init.zeros_(self.pos_bias_u)
nn.init.zeros_(self.pos_bias_v)
def get_default_strategy_config(self) -> 'dataclass':
return MHAResidualAddAdapterStrategyConfig()
@dataclass
class RelPositionMultiHeadAttentionAdapterConfig:
n_head: int
n_feat: int
dropout_rate: float = 0.0
proj_dim: Optional[int] = None
adapter_strategy: Optional[Any] = MHAResidualAddAdapterStrategyConfig()
_target_: str = "{0}.{1}".format(
RelPositionMultiHeadAttentionAdapter.__module__, RelPositionMultiHeadAttentionAdapter.__name__
)
class PositionalEncodingAdapter(mha.PositionalEncoding, adapter_modules.AdapterModuleUtil):
"""
Absolute positional embedding adapter.
.. note::
Absolute positional embedding value is added to the input tensor *without residual connection* !
Therefore, the input is changed; if you only require the positional embedding, drop the returned `x`!
Args:
d_model (int): The input dimension of x.
max_len (int): The max sequence length.
xscale (float): The input scaling factor. Defaults to 1.0.
adapter_strategy (AbstractAdapterStrategy): By default, ReturnResultAdapterStrategyConfig.
An adapter composition function object.
NOTE: Since this is a positional encoding, it will not add a residual !
"""
def __init__(
self,
d_model: int,
max_len: int = 5000,
xscale=1.0,
adapter_strategy: adapter_mixin_strategies.ReturnResultAdapterStrategyConfig = None,
):
super().__init__(
d_model=d_model, dropout_rate=0.0, max_len=max_len, xscale=xscale, dropout_rate_emb=0.0,
)
# Setup adapter strategy
self.setup_adapter_strategy(adapter_strategy)
def get_default_strategy_config(self) -> 'dataclass':
return adapter_mixin_strategies.ReturnResultAdapterStrategyConfig()
@dataclass
class PositionalEncodingAdapterConfig:
d_model: int
max_len: int = 5000
xscale: float = 1.0
adapter_strategy: Optional[Any] = adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
_target_: str = "{0}.{1}".format(PositionalEncodingAdapter.__module__, PositionalEncodingAdapter.__name__)
class RelPositionalEncodingAdapter(mha.RelPositionalEncoding, adapter_modules.AdapterModuleUtil):
"""
Relative positional encoding for TransformerXL's layers
See : Appendix B in https://arxiv.org/abs/1901.02860
.. note::
Relative positional embedding value is **not** added to the input tensor !
Therefore, the input is not changed; if you only require the positional embedding, drop the returned `x`!
Args:
d_model (int): embedding dim
max_len (int): maximum input length
xscale (bool): whether to scale the input by sqrt(d_model)
adapter_strategy: By default, ReturnResultAdapterStrategyConfig. An adapter composition function object.
"""
def __init__(
self,
d_model: int,
max_len: int = 5000,
xscale=1.0,
adapter_strategy: adapter_mixin_strategies.ReturnResultAdapterStrategyConfig = None,
):
super().__init__(d_model=d_model, dropout_rate=0.0, max_len=max_len, xscale=xscale, dropout_rate_emb=0.0)
# Setup adapter strategy
self.setup_adapter_strategy(adapter_strategy)
def get_default_strategy_config(self) -> 'dataclass':
return adapter_mixin_strategies.ReturnResultAdapterStrategyConfig()
@dataclass
class RelPositionalEncodingAdapterConfig:
d_model: int
max_len: int = 5000
xscale: float = 1.0
adapter_strategy: Optional[Any] = adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
_target_: str = "{0}.{1}".format(RelPositionalEncodingAdapter.__module__, RelPositionalEncodingAdapter.__name__)
[end of nemo/collections/asr/parts/submodules/adapters/multi_head_attention_adapter_module.py]
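The projection-dimension handling shared by both attention adapters above (None means no projection, i.e. equivalent to `proj_dim = n_feat`; a value below 1 collapses to `n_head` so each head gets a projected dimension of 1; any other value must divide evenly among the heads) can be sketched in isolation. This is a minimal illustration only — `resolve_proj_dim` is a hypothetical helper name, not part of the NeMo API:

```python
def resolve_proj_dim(n_feat, n_head, proj_dim=None):
    """Mirror the adapters' proj_dim logic; returns (proj_dim, d_k)."""
    if proj_dim is not None and proj_dim < 1:
        proj_dim = n_head  # one projected dimension per head
    if proj_dim is None:
        proj_dim = n_feat  # no projection: keep the feature size
    if proj_dim % n_head != 0:
        raise ValueError(f"proj_dim ({proj_dim}) is not divisible by n_head ({n_head})")
    return proj_dim, proj_dim // n_head  # d_k = per-head dimension

# proj_dim=None keeps n_feat; a negative proj_dim collapses to n_head
assert resolve_proj_dim(256, 4) == (256, 64)
assert resolve_proj_dim(256, 4, -1) == (4, 1)
```

Note that in the actual adapters a `proj_dim` of None skips rebuilding the linear layers entirely; the helper only models the arithmetic and the divisibility check.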
[start of nemo/collections/asr/parts/submodules/ctc_beam_decoding.py]
# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import math
import os
from dataclasses import dataclass
from typing import List, Optional, Tuple, Union
import torch
from nemo.collections.asr.parts.utils import rnnt_utils
from nemo.collections.common.tokenizers.tokenizer_spec import TokenizerSpec
from nemo.core.classes import Typing, typecheck
from nemo.core.neural_types import HypothesisType, LengthsType, LogprobsType, NeuralType
from nemo.utils import logging
DEFAULT_TOKEN_OFFSET = 100
def pack_hypotheses(
hypotheses: List[rnnt_utils.NBestHypotheses], logitlen: torch.Tensor,
) -> List[rnnt_utils.NBestHypotheses]:
if logitlen is not None:
if hasattr(logitlen, 'cpu'):
logitlen_cpu = logitlen.to('cpu')
else:
logitlen_cpu = logitlen
for idx, hyp in enumerate(hypotheses): # type: rnnt_utils.NBestHypotheses
for candidate_idx, cand in enumerate(hyp.n_best_hypotheses):
cand.y_sequence = torch.tensor(cand.y_sequence, dtype=torch.long)
if logitlen is not None:
cand.length = logitlen_cpu[idx]
if cand.dec_state is not None:
cand.dec_state = _states_to_device(cand.dec_state)
return hypotheses
def _states_to_device(dec_state, device='cpu'):
if torch.is_tensor(dec_state):
dec_state = dec_state.to(device)
elif isinstance(dec_state, (list, tuple)):
dec_state = tuple(_states_to_device(dec_i, device) for dec_i in dec_state)
return dec_state
class AbstractBeamCTCInfer(Typing):
"""A beam CTC decoder.
Provides a common abstraction for sample level beam decoding.
Args:
blank_id: int, index of the blank token. Can be 0 or len(vocabulary).
beam_size: int, size of the beam used in the underlying beam search engine.
"""
@property
def input_types(self):
"""Returns definitions of module input ports.
"""
return {
"decoder_output": NeuralType(('B', 'T', 'D'), LogprobsType()),
"decoder_lengths": NeuralType(tuple('B'), LengthsType()),
}
@property
def output_types(self):
"""Returns definitions of module output ports.
"""
return {"predictions": [NeuralType(elements_type=HypothesisType())]}
def __init__(self, blank_id: int, beam_size: int):
self.blank_id = blank_id
if beam_size < 1:
raise ValueError("Beam search size cannot be less than 1!")
self.beam_size = beam_size
# Variables set by corresponding setter methods
self.vocab = None
self.decoding_type = None
self.tokenizer = None
# Utility maps for vocabulary
self.vocab_index_map = None
self.index_vocab_map = None
# Internal variable, used to prevent double reduction of consecutive tokens (ctc collapse)
self.override_fold_consecutive_value = None
def set_vocabulary(self, vocab: List[str]):
"""
Set the vocabulary of the decoding framework.
Args:
vocab: List of str. Each token corresponds to its location in the vocabulary emitted by the model.
Note that this vocabulary must NOT contain the "BLANK" token.
"""
self.vocab = vocab
self.vocab_index_map = {v: i for i, v in enumerate(vocab)}
self.index_vocab_map = {i: v for i, v in enumerate(vocab)}
def set_decoding_type(self, decoding_type: str):
"""
Sets the decoding type of the framework. Can support either char or subword models.
Args:
decoding_type: Str corresponding to decoding type. Only supports "char" and "subword".
"""
decoding_type = decoding_type.lower()
supported_types = ['char', 'subword']
if decoding_type not in supported_types:
raise ValueError(
f"Unsupported decoding type. Supported types = {supported_types}.\n" f"Given = {decoding_type}"
)
self.decoding_type = decoding_type
def set_tokenizer(self, tokenizer: TokenizerSpec):
"""
Set the tokenizer of the decoding framework.
Args:
tokenizer: NeMo tokenizer object, which inherits from TokenizerSpec.
"""
self.tokenizer = tokenizer
@typecheck()
def forward(
self, decoder_output: torch.Tensor, decoder_lengths: torch.Tensor,
) -> Tuple[List[Union[rnnt_utils.Hypothesis, rnnt_utils.NBestHypotheses]]]:
"""Returns a list of hypotheses given an input batch of the encoder hidden embedding.
Output token is generated auto-regressively.
Args:
decoder_output: A tensor of size (batch, timesteps, features) or (batch, timesteps) (each timestep is a label).
decoder_lengths: list of int representing the length of each sequence
output sequence.
Returns:
packed list containing batch number of sentences (Hypotheses).
"""
raise NotImplementedError()
def __call__(self, *args, **kwargs):
return self.forward(*args, **kwargs)
class BeamCTCInfer(AbstractBeamCTCInfer):
"""A beam CTC decoder.
Provides a common abstraction for sample level beam decoding.
Args:
blank_id: int, index of the blank token. Can be 0 or len(vocabulary).
preserve_alignments: Bool flag which preserves the history of logprobs generated during
decoding (sample / batched). When set to true, the Hypothesis will contain
the non-null value for `logprobs` in it. Here, `logprobs` is a torch.Tensor.
compute_timestamps: A bool flag, which determines whether to compute the character/subword, or
word based timestamp mapping the output log-probabilities to discrete intervals of timestamps.
The timestamps will be available in the returned Hypothesis.timestep as a dictionary.
"""
def __init__(
self,
blank_id: int,
beam_size: int,
search_type: str = "default",
return_best_hypothesis: bool = True,
preserve_alignments: bool = False,
compute_timestamps: bool = False,
beam_alpha: float = 1.0,
beam_beta: float = 0.0,
kenlm_path: str = None,
flashlight_cfg: Optional['FlashlightConfig'] = None,
pyctcdecode_cfg: Optional['PyCTCDecodeConfig'] = None,
):
super().__init__(blank_id=blank_id, beam_size=beam_size)
self.search_type = search_type
self.return_best_hypothesis = return_best_hypothesis
self.preserve_alignments = preserve_alignments
self.compute_timestamps = compute_timestamps
if self.compute_timestamps:
raise ValueError("Currently this flag is not supported for beam search algorithms.")
self.vocab = None # This must be set by specific method by user before calling forward() !
if search_type == "default" or search_type == "nemo":
self.search_algorithm = self.default_beam_search
elif search_type == "pyctcdecode":
self.search_algorithm = self._pyctcdecode_beam_search
elif search_type == "flashlight":
self.search_algorithm = self.flashlight_beam_search
else:
raise NotImplementedError(
f"The search type ({search_type}) supplied is not supported!\n"
f"Please use one of : (default, nemo, pyctcdecode, flashlight)"
)
# Log the beam search algorithm
logging.info(f"Beam search algorithm: {search_type}")
self.beam_alpha = beam_alpha
self.beam_beta = beam_beta
# Default beam search args
self.kenlm_path = kenlm_path
# PyCTCDecode params
if pyctcdecode_cfg is None:
pyctcdecode_cfg = PyCTCDecodeConfig()
self.pyctcdecode_cfg = pyctcdecode_cfg # type: PyCTCDecodeConfig
if flashlight_cfg is None:
flashlight_cfg = FlashlightConfig()
self.flashlight_cfg = flashlight_cfg
# Default beam search scorer functions
self.default_beam_scorer = None
self.pyctcdecode_beam_scorer = None
self.flashlight_beam_scorer = None
self.token_offset = 0
@typecheck()
def forward(
self, decoder_output: torch.Tensor, decoder_lengths: torch.Tensor,
) -> Tuple[List[Union[rnnt_utils.Hypothesis, rnnt_utils.NBestHypotheses]]]:
"""Returns a list of hypotheses given an input batch of the encoder hidden embedding.
Output token is generated auto-regressively.
Args:
decoder_output: A tensor of size (batch, timesteps, features).
decoder_lengths: list of int representing the length of each sequence
output sequence.
Returns:
packed list containing batch number of sentences (Hypotheses).
"""
if self.vocab is None:
raise RuntimeError("Please set the vocabulary with `set_vocabulary()` before calling this function.")
if self.decoding_type is None:
raise ValueError("Please set the decoding type with `set_decoding_type()` before calling this function.")
with torch.no_grad(), torch.inference_mode():
# Process each sequence independently
prediction_tensor = decoder_output
if prediction_tensor.ndim != 3:
raise ValueError(
f"`decoder_output` must be a tensor of shape [B, T, V] (log probs, float). "
f"Provided shape = {prediction_tensor.shape}"
)
# determine type of input - logprobs or labels
out_len = decoder_lengths if decoder_lengths is not None else None
hypotheses = self.search_algorithm(prediction_tensor, out_len)
# Pack results into Hypotheses
packed_result = pack_hypotheses(hypotheses, decoder_lengths)
# Pack the result
if self.return_best_hypothesis and isinstance(packed_result[0], rnnt_utils.NBestHypotheses):
packed_result = [res.n_best_hypotheses[0] for res in packed_result] # type: Hypothesis
return (packed_result,)
@torch.no_grad()
def default_beam_search(
self, x: torch.Tensor, out_len: torch.Tensor
) -> List[Union[rnnt_utils.Hypothesis, rnnt_utils.NBestHypotheses]]:
"""
Open Seq2Seq Beam Search Algorithm (DeepSpeed)
Args:
x: Tensor of shape [B, T, V+1], where B is the batch size, T is the maximum sequence length,
and V is the vocabulary size. The tensor contains log-probabilities.
out_len: Tensor of shape [B], contains lengths of each sequence in the batch.
Returns:
A list of NBestHypotheses objects, one for each sequence in the batch.
"""
if self.compute_timestamps:
raise ValueError(
f"Beam Search with strategy `{self.search_type}` does not support time stamp calculation!"
)
if self.default_beam_scorer is None:
# Check for filepath
if self.kenlm_path is None or not os.path.exists(self.kenlm_path):
raise FileNotFoundError(
f"KenLM binary file not found at : {self.kenlm_path}. "
f"Please set a valid path in the decoding config."
)
# perform token offset for subword models
if self.decoding_type == 'subword':
vocab = [chr(idx + self.token_offset) for idx in range(len(self.vocab))]
else:
# char models
vocab = self.vocab
# Must import at runtime to avoid circular dependency due to module level import.
from nemo.collections.asr.modules.beam_search_decoder import BeamSearchDecoderWithLM
self.default_beam_scorer = BeamSearchDecoderWithLM(
vocab=vocab,
lm_path=self.kenlm_path,
beam_width=self.beam_size,
alpha=self.beam_alpha,
beta=self.beam_beta,
num_cpus=max(1, os.cpu_count()),
input_tensor=False,
)
x = x.to('cpu')
with typecheck.disable_checks():
data = [x[sample_id, : out_len[sample_id], :].softmax(dim=-1) for sample_id in range(len(x))]
beams_batch = self.default_beam_scorer.forward(log_probs=data, log_probs_length=None)
# For each sample in the batch
nbest_hypotheses = []
for beams_idx, beams in enumerate(beams_batch):
# For each beam candidate / hypothesis in each sample
hypotheses = []
for candidate_idx, candidate in enumerate(beams):
hypothesis = rnnt_utils.Hypothesis(
score=0.0, y_sequence=[], dec_state=None, timestep=[], last_token=None
)
# For subword encoding, NeMo will double encode the subword (multiple tokens) into a
# singular unicode id. In doing so, we preserve the semantic of the unicode token, and
# compress the size of the final KenLM ARPA / Binary file.
# In order to do double encoding, we shift the subword by some token offset.
# This step is ignored for character based models.
if self.decoding_type == 'subword':
pred_token_ids = [ord(c) - self.token_offset for c in candidate[1]]
else:
# Char models
pred_token_ids = [self.vocab_index_map[c] for c in candidate[1]]
# We preserve the token ids and the score for this hypothesis
hypothesis.y_sequence = pred_token_ids
hypothesis.score = candidate[0]
# If alignment must be preserved, we preserve a view of the output logprobs.
# Note this view is shared amongst all beams within the sample, be sure to clone it if you
# require specific processing for each sample in the beam.
# This is done to preserve memory.
if self.preserve_alignments:
hypothesis.alignments = x[beams_idx][: out_len[beams_idx]]
hypotheses.append(hypothesis)
# Wrap the result in NBestHypothesis.
hypotheses = rnnt_utils.NBestHypotheses(hypotheses)
nbest_hypotheses.append(hypotheses)
return nbest_hypotheses
@torch.no_grad()
def _pyctcdecode_beam_search(
self, x: torch.Tensor, out_len: torch.Tensor
) -> List[Union[rnnt_utils.Hypothesis, rnnt_utils.NBestHypotheses]]:
"""
PyCTCDecode Beam Search Algorithm. Should support Char and Subword models.
Args:
x: Tensor of shape [B, T, V+1], where B is the batch size, T is the maximum sequence length,
and V is the vocabulary size. The tensor contains log-probabilities.
out_len: Tensor of shape [B], contains lengths of each sequence in the batch.
Returns:
A list of NBestHypotheses objects, one for each sequence in the batch.
"""
if self.compute_timestamps:
raise ValueError(
f"Beam Search with strategy `{self.search_type}` does not support time stamp calculation!"
)
try:
import pyctcdecode
except (ImportError, ModuleNotFoundError):
raise ImportError(
f"Could not load `pyctcdecode` library. Please install it from pip using :\n"
f"pip install --upgrade pyctcdecode"
)
if self.pyctcdecode_beam_scorer is None:
self.pyctcdecode_beam_scorer = pyctcdecode.build_ctcdecoder(
labels=self.vocab, kenlm_model_path=self.kenlm_path, alpha=self.beam_alpha, beta=self.beam_beta
) # type: pyctcdecode.BeamSearchDecoderCTC
x = x.to('cpu').numpy()
with typecheck.disable_checks():
beams_batch = []
for sample_id in range(len(x)):
logprobs = x[sample_id, : out_len[sample_id], :]
result = self.pyctcdecode_beam_scorer.decode_beams(
logprobs,
beam_width=self.beam_size,
beam_prune_logp=self.pyctcdecode_cfg.beam_prune_logp,
token_min_logp=self.pyctcdecode_cfg.token_min_logp,
prune_history=self.pyctcdecode_cfg.prune_history,
hotwords=self.pyctcdecode_cfg.hotwords,
hotword_weight=self.pyctcdecode_cfg.hotword_weight,
lm_start_state=None,
) # Output format: text, last_lm_state, text_frames, logit_score, lm_score
beams_batch.append(result)
nbest_hypotheses = []
for beams_idx, beams in enumerate(beams_batch):
hypotheses = []
for candidate_idx, candidate in enumerate(beams):
# Candidate = (text, last_lm_state, text_frames, logit_score, lm_score)
hypothesis = rnnt_utils.Hypothesis(
score=0.0, y_sequence=[], dec_state=None, timestep=[], last_token=None
)
# TODO: Requires token ids to be returned rather than text.
if self.decoding_type == 'subword':
if self.tokenizer is None:
raise ValueError("Tokenizer must be provided for subword decoding. Use set_tokenizer().")
pred_token_ids = self.tokenizer.text_to_ids(candidate[0])
else:
if self.vocab is None:
raise ValueError("Vocab must be provided for character decoding. Use set_vocabulary().")
chars = list(candidate[0])
pred_token_ids = [self.vocab_index_map[c] for c in chars]
hypothesis.y_sequence = pred_token_ids
hypothesis.text = candidate[0] # text
hypothesis.score = candidate[4] # lm_score
# Inject word level timestamps
hypothesis.timestep = candidate[2] # text_frames
if self.preserve_alignments:
hypothesis.alignments = torch.from_numpy(x[beams_idx][: out_len[beams_idx]])
hypotheses.append(hypothesis)
hypotheses = rnnt_utils.NBestHypotheses(hypotheses)
nbest_hypotheses.append(hypotheses)
return nbest_hypotheses
@torch.no_grad()
def flashlight_beam_search(
self, x: torch.Tensor, out_len: torch.Tensor
) -> List[Union[rnnt_utils.Hypothesis, rnnt_utils.NBestHypotheses]]:
"""
Flashlight Beam Search Algorithm. Should support Char and Subword models.
Args:
x: Tensor of shape [B, T, V+1], where B is the batch size, T is the maximum sequence length,
and V is the vocabulary size. The tensor contains log-probabilities.
out_len: Tensor of shape [B], contains lengths of each sequence in the batch.
Returns:
A list of NBestHypotheses objects, one for each sequence in the batch.
"""
if self.compute_timestamps:
raise ValueError(
f"Beam Search with strategy `{self.search_type}` does not support time stamp calculation!"
)
if self.flashlight_beam_scorer is None:
# Check for filepath
if self.kenlm_path is None or not os.path.exists(self.kenlm_path):
raise FileNotFoundError(
f"KenLM binary file not found at : {self.kenlm_path}. "
f"Please set a valid path in the decoding config."
)
# perform token offset for subword models
# if self.decoding_type == 'subword':
# vocab = [chr(idx + self.token_offset) for idx in range(len(self.vocab))]
# else:
# # char models
# vocab = self.vocab
# Must import at runtime to avoid circular dependency due to module level import.
from nemo.collections.asr.modules.flashlight_decoder import FlashLightKenLMBeamSearchDecoder
self.flashlight_beam_scorer = FlashLightKenLMBeamSearchDecoder(
lm_path=self.kenlm_path,
vocabulary=self.vocab,
tokenizer=self.tokenizer,
lexicon_path=self.flashlight_cfg.lexicon_path,
boost_path=self.flashlight_cfg.boost_path,
beam_size=self.beam_size,
beam_size_token=self.flashlight_cfg.beam_size_token,
beam_threshold=self.flashlight_cfg.beam_threshold,
lm_weight=self.beam_alpha,
word_score=self.beam_beta,
unk_weight=self.flashlight_cfg.unk_weight,
sil_weight=self.flashlight_cfg.sil_weight,
)
x = x.to('cpu')
with typecheck.disable_checks():
beams_batch = self.flashlight_beam_scorer.forward(log_probs=x)
# For each sample in the batch
nbest_hypotheses = []
for beams_idx, beams in enumerate(beams_batch):
# For each beam candidate / hypothesis in each sample
hypotheses = []
for candidate_idx, candidate in enumerate(beams):
hypothesis = rnnt_utils.Hypothesis(
score=0.0, y_sequence=[], dec_state=None, timestep=[], last_token=None
)
# We preserve the token ids and the score for this hypothesis
hypothesis.y_sequence = candidate['tokens'].tolist()
hypothesis.score = candidate['score']
# If alignment must be preserved, we preserve a view of the output logprobs.
# Note this view is shared amongst all beams within the sample, be sure to clone it if you
# require specific processing for each sample in the beam.
# This is done to preserve memory.
if self.preserve_alignments:
hypothesis.alignments = x[beams_idx][: out_len[beams_idx]]
hypotheses.append(hypothesis)
# Wrap the result in NBestHypothesis.
hypotheses = rnnt_utils.NBestHypotheses(hypotheses)
nbest_hypotheses.append(hypotheses)
return nbest_hypotheses
def set_decoding_type(self, decoding_type: str):
super().set_decoding_type(decoding_type)
# Please check train_kenlm.py in scripts/asr_language_modeling/ to find out why we need
# TOKEN_OFFSET for BPE-based models
if self.decoding_type == 'subword':
self.token_offset = DEFAULT_TOKEN_OFFSET
@dataclass
class PyCTCDecodeConfig:
# These arguments cannot be imported from pyctcdecode (optional dependency)
# Therefore we copy the values explicitly
# Taken from pyctcdecode.constant
beam_prune_logp: float = -10.0
token_min_logp: float = -5.0
prune_history: bool = False
hotwords: Optional[List[str]] = None
hotword_weight: float = 10.0
@dataclass
class FlashlightConfig:
lexicon_path: Optional[str] = None
boost_path: Optional[str] = None
beam_size_token: int = 16
beam_threshold: float = 20.0
unk_weight: float = -math.inf
sil_weight: float = 0.0
@dataclass
class BeamCTCInferConfig:
beam_size: int
search_type: str = 'default'
preserve_alignments: bool = False
compute_timestamps: bool = False
return_best_hypothesis: bool = True
beam_alpha: float = 1.0
beam_beta: float = 0.0
kenlm_path: Optional[str] = None
flashlight_cfg: Optional[FlashlightConfig] = FlashlightConfig()
pyctcdecode_cfg: Optional[PyCTCDecodeConfig] = PyCTCDecodeConfig()
[end of nemo/collections/asr/parts/submodules/ctc_beam_decoding.py]
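The token-offset trick used by `default_beam_search` for subword models — each vocabulary index is "double encoded" as a single unicode character shifted by `DEFAULT_TOKEN_OFFSET`, then shifted back with `ord(c) - self.token_offset` when unpacking beam candidates — round-trips like this. A minimal sketch under the same offset; the helper names are illustrative, not NeMo API:

```python
DEFAULT_TOKEN_OFFSET = 100  # same constant as in ctc_beam_decoding.py

def encode_ids(token_ids):
    # Map each subword id to one unicode char, shifted past control characters.
    # This is what builds the vocab handed to the KenLM-backed beam scorer.
    return ''.join(chr(i + DEFAULT_TOKEN_OFFSET) for i in token_ids)

def decode_chars(text):
    # Inverse mapping applied to each beam candidate's text.
    return [ord(c) - DEFAULT_TOKEN_OFFSET for c in text]

ids = [0, 5, 42]
assert decode_chars(encode_ids(ids)) == ids  # lossless round trip
```

The shift keeps every encoded character printable and semantically distinct, which is why the same `DEFAULT_TOKEN_OFFSET` must be used when training the KenLM ARPA/binary model and when decoding.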
[start of nemo/collections/asr/parts/submodules/ctc_greedy_decoding.py]
# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from dataclasses import dataclass
from typing import List, Optional
import torch
from omegaconf import DictConfig, OmegaConf
from nemo.collections.asr.parts.utils import rnnt_utils
from nemo.collections.asr.parts.utils.asr_confidence_utils import ConfidenceMethodConfig, ConfidenceMethodMixin
from nemo.core.classes import Typing, typecheck
from nemo.core.neural_types import HypothesisType, LengthsType, LogprobsType, NeuralType
from nemo.utils import logging
def pack_hypotheses(hypotheses: List[rnnt_utils.Hypothesis], logitlen: torch.Tensor,) -> List[rnnt_utils.Hypothesis]:
if logitlen is not None:
if hasattr(logitlen, 'cpu'):
logitlen_cpu = logitlen.to('cpu')
else:
logitlen_cpu = logitlen
for idx, hyp in enumerate(hypotheses): # type: rnnt_utils.Hypothesis
hyp.y_sequence = torch.tensor(hyp.y_sequence, dtype=torch.long)
if logitlen is not None:
hyp.length = logitlen_cpu[idx]
if hyp.dec_state is not None:
hyp.dec_state = _states_to_device(hyp.dec_state)
return hypotheses
def _states_to_device(dec_state, device='cpu'):
if torch.is_tensor(dec_state):
dec_state = dec_state.to(device)
elif isinstance(dec_state, (list, tuple)):
dec_state = tuple(_states_to_device(dec_i, device) for dec_i in dec_state)
return dec_state
class GreedyCTCInfer(Typing, ConfidenceMethodMixin):
"""A greedy CTC decoder.
Provides a common abstraction for sample level and batch level greedy decoding.
Args:
blank_id: int, index of the blank token. Can be 0 or len(vocabulary).
preserve_alignments: Bool flag which preserves the history of logprobs generated during
decoding (sample / batched). When set to true, the Hypothesis will contain
the non-null value for `logprobs` in it. Here, `logprobs` is a torch.Tensor.
compute_timestamps: A bool flag, which determines whether to compute the character/subword, or
word based timestamp mapping the output log-probabilities to discrete intervals of timestamps.
The timestamps will be available in the returned Hypothesis.timestep as a dictionary.
preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores
generated during decoding. When set to true, the Hypothesis will contain
the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of floats.
confidence_method_cfg: A dict-like object which contains the method name and settings to compute per-frame
confidence scores.
name: The method name (str).
Supported values:
- 'max_prob' for using the maximum token probability as a confidence.
- 'entropy' for using a normalized entropy of a log-likelihood vector.
entropy_type: Which type of entropy to use (str). Used if confidence_method_cfg.name is set to `entropy`.
Supported values:
- 'gibbs' for the (standard) Gibbs entropy. If the alpha (α) is provided,
the formula is the following: H_α = -sum_i((p^α_i)*log(p^α_i)).
Note that for this entropy, the alpha should comply with the following inequality:
(log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= α <= (1+log(V-1))/log(V-1)
where V is the model vocabulary size.
- 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
Tsallis entropy formula is the following: H_α = 1/(α-1)*(1-sum_i(p^α_i)),
where α is a parameter. When α == 1, it works like the Gibbs entropy.
More: https://en.wikipedia.org/wiki/Tsallis_entropy
- 'renyi' for the Rényi entropy.
Rényi entropy formula is the following: H_α = 1/(1-α)*log_2(sum_i(p^α_i)),
where α is a parameter. When α == 1, it works like the Gibbs entropy.
More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
alpha: Power scale for logsoftmax (α for entropies). Here we restrict it to be > 0.
When the alpha equals one, scaling is not applied to 'max_prob',
and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
entropy_norm: A mapping of the entropy value to the interval [0,1].
Supported values:
- 'lin' for using the linear mapping.
- 'exp' for using exponential mapping with linear shift.
"""
@property
def input_types(self):
"""Returns definitions of module input ports.
"""
# Input can be of dimension -
# ('B', 'T', 'D') [Log probs] or ('B', 'T') [Labels]
return {
"decoder_output": NeuralType(None, LogprobsType()),
"decoder_lengths": NeuralType(tuple('B'), LengthsType()),
}
@property
def output_types(self):
"""Returns definitions of module output ports.
"""
return {"predictions": [NeuralType(elements_type=HypothesisType())]}
def __init__(
self,
blank_id: int,
preserve_alignments: bool = False,
compute_timestamps: bool = False,
preserve_frame_confidence: bool = False,
confidence_method_cfg: Optional[DictConfig] = None,
):
super().__init__()
self.blank_id = blank_id
self.preserve_alignments = preserve_alignments
# we need timestamps to extract non-blank per-frame confidence
self.compute_timestamps = compute_timestamps | preserve_frame_confidence
self.preserve_frame_confidence = preserve_frame_confidence
# set confidence calculation method
self._init_confidence_method(confidence_method_cfg)
@typecheck()
def forward(
self, decoder_output: torch.Tensor, decoder_lengths: torch.Tensor,
):
"""Returns a list of hypotheses given an input batch of the encoder hidden embedding.
Output labels are generated greedily, frame by frame (CTC greedy decoding is non-autoregressive).
Args:
decoder_output: A tensor of size (batch, timesteps, features) or (batch, timesteps) (each timestep is a label).
decoder_lengths: list of int representing the length of each sequence
output sequence.
Returns:
packed list containing batch number of sentences (Hypotheses).
"""
with torch.inference_mode():
hypotheses = []
# Process each sequence independently
prediction_cpu_tensor = decoder_output.cpu()
if prediction_cpu_tensor.ndim < 2 or prediction_cpu_tensor.ndim > 3:
raise ValueError(
f"`decoder_output` must be a tensor of shape [B, T] (labels, int) or "
f"[B, T, V] (log probs, float). Provided shape = {prediction_cpu_tensor.shape}"
)
# determine type of input - logprobs or labels
if prediction_cpu_tensor.ndim == 2: # labels
greedy_decode = self._greedy_decode_labels
else:
greedy_decode = self._greedy_decode_logprobs
for ind in range(prediction_cpu_tensor.shape[0]):
out_len = decoder_lengths[ind] if decoder_lengths is not None else None
hypothesis = greedy_decode(prediction_cpu_tensor[ind], out_len)
hypotheses.append(hypothesis)
# Pack results into Hypotheses
packed_result = pack_hypotheses(hypotheses, decoder_lengths)
return (packed_result,)
@torch.no_grad()
def _greedy_decode_logprobs(self, x: torch.Tensor, out_len: torch.Tensor):
# x: [T, D]
# out_len: [seq_len]
# Initialize blank state and empty label set in Hypothesis
hypothesis = rnnt_utils.Hypothesis(score=0.0, y_sequence=[], dec_state=None, timestep=[], last_token=None)
prediction = x.detach().cpu()
if out_len is not None:
prediction = prediction[:out_len]
prediction_logprobs, prediction_labels = prediction.max(dim=-1)
non_blank_ids = prediction_labels != self.blank_id
hypothesis.y_sequence = prediction_labels.numpy().tolist()
hypothesis.score = (prediction_logprobs[non_blank_ids]).sum()
if self.preserve_alignments:
# Preserve the logprobs, as well as labels after argmax
hypothesis.alignments = (prediction.clone(), prediction_labels.clone())
if self.compute_timestamps:
hypothesis.timestep = torch.nonzero(non_blank_ids, as_tuple=False)[:, 0].numpy().tolist()
if self.preserve_frame_confidence:
hypothesis.frame_confidence = self._get_confidence(prediction)
return hypothesis
@torch.no_grad()
def _greedy_decode_labels(self, x: torch.Tensor, out_len: torch.Tensor):
# x: [T]
# out_len: [seq_len]
# Initialize blank state and empty label set in Hypothesis
hypothesis = rnnt_utils.Hypothesis(score=0.0, y_sequence=[], dec_state=None, timestep=[], last_token=None)
prediction_labels = x.detach().cpu()
if out_len is not None:
prediction_labels = prediction_labels[:out_len]
non_blank_ids = prediction_labels != self.blank_id
hypothesis.y_sequence = prediction_labels.numpy().tolist()
hypothesis.score = -1.0
if self.preserve_alignments:
raise ValueError("Requested for alignments, but predictions provided were labels, not log probabilities.")
if self.compute_timestamps:
hypothesis.timestep = torch.nonzero(non_blank_ids, as_tuple=False)[:, 0].numpy().tolist()
if self.preserve_frame_confidence:
raise ValueError(
"Requested for per-frame confidence, but predictions provided were labels, not log probabilities."
)
return hypothesis
def __call__(self, *args, **kwargs):
return self.forward(*args, **kwargs)
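For log-prob inputs, the per-sample decode above reduces to a frame-wise argmax plus blank filtering. A minimal pure-Python sketch of that logic follows; the function and variable names are illustrative only, not part of the NeMo API, and the real implementation operates on tensors:

```python
import math

def greedy_decode_logprobs(logprobs, blank_id):
    """Pick the argmax label per frame; the score sums the non-blank
    log-probs, and timesteps record frames where a non-blank label won."""
    labels, score, timesteps = [], 0.0, []
    for t, frame in enumerate(logprobs):
        best = max(range(len(frame)), key=frame.__getitem__)
        labels.append(best)
        if best != blank_id:
            score += frame[best]
            timesteps.append(t)
    return labels, score, timesteps

# Toy 3-frame, 3-class log-prob matrix with blank_id = 2.
lp = [
    [math.log(0.7), math.log(0.2), math.log(0.1)],
    [math.log(0.1), math.log(0.2), math.log(0.7)],
    [math.log(0.2), math.log(0.7), math.log(0.1)],
]
labels, score, timesteps = greedy_decode_logprobs(lp, blank_id=2)
```

This mirrors how `hypothesis.y_sequence` keeps every frame's label (blanks included) while `score` and `timestep` only reflect non-blank frames.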
@dataclass
class GreedyCTCInferConfig:
preserve_alignments: bool = False
compute_timestamps: bool = False
preserve_frame_confidence: bool = False
confidence_method_cfg: Optional[ConfidenceMethodConfig] = ConfidenceMethodConfig()
def __post_init__(self):
# OmegaConf.structured ensures that post_init check is always executed
self.confidence_method_cfg = OmegaConf.structured(
self.confidence_method_cfg
if isinstance(self.confidence_method_cfg, ConfidenceMethodConfig)
else ConfidenceMethodConfig(**self.confidence_method_cfg)
)
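The entropy formulas quoted in the docstrings above can be checked with a short stdlib sketch. These functions compute the raw entropies only; the 'lin'/'exp' mapping to [0, 1] (and the final confidence value) lives in `asr_confidence_utils` and is not reproduced here:

```python
import math

def gibbs_entropy(probs, alpha=1.0):
    # H_a = -sum_i((p^a_i) * log(p^a_i))
    return -sum((p ** alpha) * math.log(p ** alpha) for p in probs if p > 0)

def tsallis_entropy(probs, alpha):
    # H_a = 1/(a-1) * (1 - sum_i(p^a_i)); alpha -> 1 recovers the Gibbs entropy
    return (1.0 - sum(p ** alpha for p in probs)) / (alpha - 1.0)

def renyi_entropy(probs, alpha):
    # H_a = 1/(1-a) * log_2(sum_i(p^a_i)); alpha -> 1 recovers the Gibbs entropy
    return math.log2(sum(p ** alpha for p in probs)) / (1.0 - alpha)

uniform = [0.25] * 4             # maximally uncertain distribution
peaked = [0.97, 0.01, 0.01, 0.01]  # confident distribution
```

As expected, the uniform distribution attains the maximum Gibbs entropy log(V), while a peaked distribution scores much lower, which is what makes a normalized entropy usable as an (inverted) confidence signal.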
[end of nemo/collections/asr/parts/submodules/ctc_greedy_decoding.py]
[start of nemo/collections/asr/parts/submodules/rnnt_greedy_decoding.py]
# Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Copyright 2017 Johns Hopkins University (Shinji Watanabe)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from dataclasses import dataclass
from typing import List, Optional, Tuple, Union
import numpy as np
import torch
from omegaconf import DictConfig, OmegaConf
from nemo.collections.asr.modules import rnnt_abstract
from nemo.collections.asr.parts.utils import rnnt_utils
from nemo.collections.asr.parts.utils.asr_confidence_utils import ConfidenceMethodConfig, ConfidenceMethodMixin
from nemo.collections.common.parts.rnn import label_collate
from nemo.core.classes import Typing, typecheck
from nemo.core.neural_types import AcousticEncodedRepresentation, ElementType, HypothesisType, LengthsType, NeuralType
from nemo.utils import logging
def pack_hypotheses(hypotheses: List[rnnt_utils.Hypothesis], logitlen: torch.Tensor,) -> List[rnnt_utils.Hypothesis]:
if hasattr(logitlen, 'cpu'):
logitlen_cpu = logitlen.to('cpu')
else:
logitlen_cpu = logitlen
for idx, hyp in enumerate(hypotheses): # type: rnnt_utils.Hypothesis
hyp.y_sequence = torch.tensor(hyp.y_sequence, dtype=torch.long)
hyp.length = logitlen_cpu[idx]
if hyp.dec_state is not None:
hyp.dec_state = _states_to_device(hyp.dec_state)
return hypotheses
def _states_to_device(dec_state, device='cpu'):
if torch.is_tensor(dec_state):
dec_state = dec_state.to(device)
elif isinstance(dec_state, (list, tuple)):
dec_state = tuple(_states_to_device(dec_i, device) for dec_i in dec_state)
return dec_state
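`_states_to_device` walks nested lists/tuples of decoder states and moves each tensor leaf to the target device. The same recursion can be sketched without torch, using a hypothetical stand-in tensor class (`FakeTensor` is for illustration only):

```python
class FakeTensor:
    """Stand-in for torch.Tensor carrying only a .device attribute."""
    def __init__(self, device='cuda'):
        self.device = device
    def to(self, device):
        return FakeTensor(device)

def states_to_device(dec_state, device='cpu'):
    # Mirrors _states_to_device: recurse through nested lists/tuples,
    # moving every tensor leaf to the target device.
    if isinstance(dec_state, FakeTensor):
        return dec_state.to(device)
    if isinstance(dec_state, (list, tuple)):
        return tuple(states_to_device(s, device) for s in dec_state)
    return dec_state

state = (FakeTensor(), [FakeTensor(), FakeTensor()])
moved = states_to_device(state)
```

Note that, as in the original, nested lists come back as tuples after the move.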
class _GreedyRNNTInfer(Typing, ConfidenceMethodMixin):
"""A greedy transducer decoder.
Provides a common abstraction for sample level and batch level greedy decoding.
Args:
decoder_model: rnnt_utils.AbstractRNNTDecoder implementation.
joint_model: rnnt_utils.AbstractRNNTJoint implementation.
blank_index: int index of the blank token. Can be 0 or len(vocabulary).
max_symbols_per_step: Optional int. The maximum number of symbols that can be added
to a sequence in a single time step; if set to None then there is
no limit.
preserve_alignments: Bool flag which preserves the history of alignments generated during
greedy decoding (sample / batched). When set to true, the Hypothesis will contain
the non-null value for `alignments` in it. Here, `alignments` is a List of List of
Tuple(Tensor (of length V + 1), Tensor(scalar, label after argmax)).
The length of the list corresponds to the Acoustic Length (T).
Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more targets from a vocabulary.
U is the number of target tokens for the current timestep Ti.
preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores generated
during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of List of floats.
The length of the list corresponds to the Acoustic Length (T).
Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more confidence scores.
U is the number of target tokens for the current timestep Ti.
confidence_method_cfg: A dict-like object which contains the method name and settings to compute per-frame
confidence scores.
name: The method name (str).
Supported values:
- 'max_prob' for using the maximum token probability as a confidence.
- 'entropy' for using a normalized entropy of a log-likelihood vector.
entropy_type: Which type of entropy to use (str). Used if confidence_method_cfg.name is set to `entropy`.
Supported values:
- 'gibbs' for the (standard) Gibbs entropy. If the alpha (ฮฑ) is provided,
the formula is the following: H_ฮฑ = -sum_i((p^ฮฑ_i)*log(p^ฮฑ_i)).
Note that for this entropy, the alpha should comply with the following inequality:
(log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= ฮฑ <= (1+log(V-1))/log(V-1)
where V is the model vocabulary size.
- 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
Tsallis entropy formula is the following: H_ฮฑ = 1/(ฮฑ-1)*(1-sum_i(p^ฮฑ_i)),
where ฮฑ is a parameter. When ฮฑ == 1, it works like the Gibbs entropy.
More: https://en.wikipedia.org/wiki/Tsallis_entropy
- 'renyi' for the Rรฉnyi entropy.
Rรฉnyi entropy formula is the following: H_ฮฑ = 1/(1-ฮฑ)*log_2(sum_i(p^ฮฑ_i)),
where ฮฑ is a parameter. When ฮฑ == 1, it works like the Gibbs entropy.
More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
alpha: Power scale for logsoftmax (ฮฑ for entropies). Here we restrict it to be > 0.
When the alpha equals one, scaling is not applied to 'max_prob',
and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
entropy_norm: A mapping of the entropy value to the interval [0,1].
Supported values:
- 'lin' for using the linear mapping.
- 'exp' for using exponential mapping with linear shift.
"""
@property
def input_types(self):
"""Returns definitions of module input ports.
"""
return {
"encoder_output": NeuralType(('B', 'D', 'T'), AcousticEncodedRepresentation()),
"encoded_lengths": NeuralType(tuple('B'), LengthsType()),
"partial_hypotheses": [NeuralType(elements_type=HypothesisType(), optional=True)], # must always be last
}
@property
def output_types(self):
"""Returns definitions of module output ports.
"""
return {"predictions": [NeuralType(elements_type=HypothesisType())]}
def __init__(
self,
decoder_model: rnnt_abstract.AbstractRNNTDecoder,
joint_model: rnnt_abstract.AbstractRNNTJoint,
blank_index: int,
max_symbols_per_step: Optional[int] = None,
preserve_alignments: bool = False,
preserve_frame_confidence: bool = False,
confidence_method_cfg: Optional[DictConfig] = None,
):
super().__init__()
self.decoder = decoder_model
self.joint = joint_model
self._blank_index = blank_index
self._SOS = blank_index # Start of single index
self.max_symbols = max_symbols_per_step
self.preserve_alignments = preserve_alignments
self.preserve_frame_confidence = preserve_frame_confidence
# set confidence calculation method
self._init_confidence_method(confidence_method_cfg)
def __call__(self, *args, **kwargs):
return self.forward(*args, **kwargs)
@torch.no_grad()
def _pred_step(
self,
label: Union[torch.Tensor, int],
hidden: Optional[torch.Tensor],
add_sos: bool = False,
batch_size: Optional[int] = None,
) -> Tuple[torch.Tensor, torch.Tensor]:
"""
Common prediction step based on the AbstractRNNTDecoder implementation.
Args:
label: (int/torch.Tensor): Label or "Start-of-Signal" token.
hidden: (Optional torch.Tensor): RNN State vector
add_sos (bool): Whether to add a zero vector at the beginning as "start of sentence" token.
batch_size: Batch size of the output tensor.
Returns:
g: (B, U, H) if add_sos is false, else (B, U + 1, H)
hid: (h, c) where h is the final sequence hidden state and c is
the final cell state:
h (tensor), shape (L, B, H)
c (tensor), shape (L, B, H)
"""
if isinstance(label, torch.Tensor):
# label: [batch, 1]
if label.dtype != torch.long:
label = label.long()
else:
# Label is an integer
if label == self._SOS:
return self.decoder.predict(None, hidden, add_sos=add_sos, batch_size=batch_size)
label = label_collate([[label]])
# output: [B, 1, K]
return self.decoder.predict(label, hidden, add_sos=add_sos, batch_size=batch_size)
def _joint_step(self, enc, pred, log_normalize: Optional[bool] = None):
"""
Common joint step based on AbstractRNNTJoint implementation.
Args:
enc: Output of the Encoder model. A torch.Tensor of shape [B, 1, H1]
pred: Output of the Decoder model. A torch.Tensor of shape [B, 1, H2]
log_normalize: Whether to apply log-softmax to the joint logits. If None, log-normalize only when the logits are on CPU.
Returns:
logits of shape (B, T=1, U=1, V + 1)
"""
with torch.no_grad():
logits = self.joint.joint(enc, pred)
if log_normalize is None:
if not logits.is_cuda: # Use log softmax only if on CPU
logits = logits.log_softmax(dim=len(logits.shape) - 1)
else:
if log_normalize:
logits = logits.log_softmax(dim=len(logits.shape) - 1)
return logits
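The log normalization that `_joint_step` applies on CPU is a plain log-softmax over the last dimension. A numerically stable stdlib sketch (subtracting the max before exponentiating, as torch does internally):

```python
import math

def log_softmax(logits):
    # Numerically stable log-softmax: shift by the max before exponentiating,
    # then subtract the log-sum-exp from every logit.
    m = max(logits)
    lse = m + math.log(sum(math.exp(x - m) for x in logits))
    return [x - lse for x in logits]

logp = log_softmax([2.0, 1.0, 0.1])
```

Because log-softmax is a monotone shift of the logits, skipping it on GPU (as the code above does by default) leaves the argmax in the greedy loop unchanged; it only matters when actual log-probabilities are needed, e.g. for per-frame confidence.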
class GreedyRNNTInfer(_GreedyRNNTInfer):
"""A greedy transducer decoder.
Sequence level greedy decoding, performed auto-regressively.
Args:
decoder_model: rnnt_utils.AbstractRNNTDecoder implementation.
joint_model: rnnt_utils.AbstractRNNTJoint implementation.
blank_index: int index of the blank token. Can be 0 or len(vocabulary).
max_symbols_per_step: Optional int. The maximum number of symbols that can be added
to a sequence in a single time step; if set to None then there is
no limit.
preserve_alignments: Bool flag which preserves the history of alignments generated during
greedy decoding (sample / batched). When set to true, the Hypothesis will contain
the non-null value for `alignments` in it. Here, `alignments` is a List of List of
Tuple(Tensor (of length V + 1), Tensor(scalar, label after argmax)).
The length of the list corresponds to the Acoustic Length (T).
Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more targets from a vocabulary.
U is the number of target tokens for the current timestep Ti.
preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores generated
during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of List of floats.
The length of the list corresponds to the Acoustic Length (T).
Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more confidence scores.
U is the number of target tokens for the current timestep Ti.
confidence_method_cfg: A dict-like object which contains the method name and settings to compute per-frame
confidence scores.
name: The method name (str).
Supported values:
- 'max_prob' for using the maximum token probability as a confidence.
- 'entropy' for using a normalized entropy of a log-likelihood vector.
entropy_type: Which type of entropy to use (str). Used if confidence_method_cfg.name is set to `entropy`.
Supported values:
- 'gibbs' for the (standard) Gibbs entropy. If the alpha (ฮฑ) is provided,
the formula is the following: H_ฮฑ = -sum_i((p^ฮฑ_i)*log(p^ฮฑ_i)).
Note that for this entropy, the alpha should comply with the following inequality:
(log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= ฮฑ <= (1+log(V-1))/log(V-1)
where V is the model vocabulary size.
- 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
Tsallis entropy formula is the following: H_ฮฑ = 1/(ฮฑ-1)*(1-sum_i(p^ฮฑ_i)),
where ฮฑ is a parameter. When ฮฑ == 1, it works like the Gibbs entropy.
More: https://en.wikipedia.org/wiki/Tsallis_entropy
- 'renyi' for the Rรฉnyi entropy.
Rรฉnyi entropy formula is the following: H_ฮฑ = 1/(1-ฮฑ)*log_2(sum_i(p^ฮฑ_i)),
where ฮฑ is a parameter. When ฮฑ == 1, it works like the Gibbs entropy.
More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
alpha: Power scale for logsoftmax (ฮฑ for entropies). Here we restrict it to be > 0.
When the alpha equals one, scaling is not applied to 'max_prob',
and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
entropy_norm: A mapping of the entropy value to the interval [0,1].
Supported values:
- 'lin' for using the linear mapping.
- 'exp' for using exponential mapping with linear shift.
"""
def __init__(
self,
decoder_model: rnnt_abstract.AbstractRNNTDecoder,
joint_model: rnnt_abstract.AbstractRNNTJoint,
blank_index: int,
max_symbols_per_step: Optional[int] = None,
preserve_alignments: bool = False,
preserve_frame_confidence: bool = False,
confidence_method_cfg: Optional[DictConfig] = None,
):
super().__init__(
decoder_model=decoder_model,
joint_model=joint_model,
blank_index=blank_index,
max_symbols_per_step=max_symbols_per_step,
preserve_alignments=preserve_alignments,
preserve_frame_confidence=preserve_frame_confidence,
confidence_method_cfg=confidence_method_cfg,
)
@typecheck()
def forward(
self,
encoder_output: torch.Tensor,
encoded_lengths: torch.Tensor,
partial_hypotheses: Optional[List[rnnt_utils.Hypothesis]] = None,
):
"""Returns a list of hypotheses given an input batch of the encoder hidden embedding.
Output token is generated auto-regressively.
Args:
encoder_output: A tensor of size (batch, features, timesteps).
encoded_lengths: list of int representing the length of each sequence
output sequence.
Returns:
packed list containing batch number of sentences (Hypotheses).
"""
# Preserve decoder and joint training state
decoder_training_state = self.decoder.training
joint_training_state = self.joint.training
with torch.inference_mode():
# Apply optional preprocessing
encoder_output = encoder_output.transpose(1, 2) # (B, T, D)
self.decoder.eval()
self.joint.eval()
hypotheses = []
# Process each sequence independently
with self.decoder.as_frozen(), self.joint.as_frozen():
for batch_idx in range(encoder_output.size(0)):
inseq = encoder_output[batch_idx, :, :].unsqueeze(1) # [T, 1, D]
logitlen = encoded_lengths[batch_idx]
partial_hypothesis = partial_hypotheses[batch_idx] if partial_hypotheses is not None else None
hypothesis = self._greedy_decode(inseq, logitlen, partial_hypotheses=partial_hypothesis)
hypotheses.append(hypothesis)
# Pack results into Hypotheses
packed_result = pack_hypotheses(hypotheses, encoded_lengths)
self.decoder.train(decoder_training_state)
self.joint.train(joint_training_state)
return (packed_result,)
@torch.no_grad()
def _greedy_decode(
self, x: torch.Tensor, out_len: torch.Tensor, partial_hypotheses: Optional[rnnt_utils.Hypothesis] = None
):
# x: [T, 1, D]
# out_len: [seq_len]
# Initialize blank state and empty label set in Hypothesis
hypothesis = rnnt_utils.Hypothesis(score=0.0, y_sequence=[], dec_state=None, timestep=[], last_token=None)
if partial_hypotheses is not None:
hypothesis.last_token = partial_hypotheses.last_token
hypothesis.y_sequence = (
partial_hypotheses.y_sequence.cpu().tolist()
if isinstance(partial_hypotheses.y_sequence, torch.Tensor)
else partial_hypotheses.y_sequence
)
if partial_hypotheses.dec_state is not None:
hypothesis.dec_state = self.decoder.batch_concat_states([partial_hypotheses.dec_state])
hypothesis.dec_state = _states_to_device(hypothesis.dec_state, x.device)
if self.preserve_alignments:
# Alignments is a 2-dimensional dangling list representing T x U
hypothesis.alignments = [[]]
if self.preserve_frame_confidence:
hypothesis.frame_confidence = [[]]
# For timestep t in X_t
for time_idx in range(out_len):
# Extract encoder embedding at timestep t
# f = x[time_idx, :, :].unsqueeze(0) # [1, 1, D]
f = x.narrow(dim=0, start=time_idx, length=1)
# Setup exit flags and counter
not_blank = True
symbols_added = 0
# While blank is not predicted and we haven't hit max symbols for this timestep
while not_blank and (self.max_symbols is None or symbols_added < self.max_symbols):
# In the first timestep, we initialize the network with RNNT Blank
# In later timesteps, we provide previous predicted label as input.
if hypothesis.last_token is None and hypothesis.dec_state is None:
last_label = self._SOS
else:
last_label = label_collate([[hypothesis.last_token]])
# Perform prediction network and joint network steps.
g, hidden_prime = self._pred_step(last_label, hypothesis.dec_state)
# If preserving per-frame confidence, log_normalize must be true
logp = self._joint_step(f, g, log_normalize=True if self.preserve_frame_confidence else None)[
0, 0, 0, :
]
del g
# torch.max(0) op doesn't exist for FP16.
if logp.dtype != torch.float32:
logp = logp.float()
# get index k, of max prob
v, k = logp.max(0)
k = k.item() # K is the label at timestep t_s in inner loop, s >= 0.
if self.preserve_alignments:
# insert logprobs into last timestep
hypothesis.alignments[-1].append((logp.to('cpu'), torch.tensor(k, dtype=torch.int32)))
if self.preserve_frame_confidence:
# insert confidence into last timestep
hypothesis.frame_confidence[-1].append(self._get_confidence(logp))
del logp
# If blank token is predicted, exit inner loop, move onto next timestep t
if k == self._blank_index:
not_blank = False
else:
# Append token to label set, update RNN state.
hypothesis.y_sequence.append(k)
hypothesis.score += float(v)
hypothesis.timestep.append(time_idx)
hypothesis.dec_state = hidden_prime
hypothesis.last_token = k
# Increment token counter.
symbols_added += 1
if self.preserve_alignments:
# convert Ti-th logits into a torch array
hypothesis.alignments.append([]) # blank buffer for next timestep
if self.preserve_frame_confidence:
hypothesis.frame_confidence.append([]) # blank buffer for next timestep
# Remove trailing empty list of Alignments
if self.preserve_alignments:
if len(hypothesis.alignments[-1]) == 0:
del hypothesis.alignments[-1]
# Remove trailing empty list of per-frame confidence
if self.preserve_frame_confidence:
if len(hypothesis.frame_confidence[-1]) == 0:
del hypothesis.frame_confidence[-1]
# Unpack the hidden states
hypothesis.dec_state = self.decoder.batch_select_state(hypothesis.dec_state, 0)
return hypothesis
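The time-synchronous loop in `_greedy_decode` — advance one encoder frame, emit argmax labels until blank is predicted or `max_symbols` is reached — can be sketched with a toy joint function. Here the `script` lookup stands in for the prediction and joint networks and is purely illustrative:

```python
BLANK = 0

def greedy_rnnt_decode(joint_argmax, num_frames, max_symbols=3):
    """Toy version of the time-synchronous RNNT loop: at each frame, keep
    emitting the joint's argmax label until it predicts blank (or the
    per-frame symbol cap is hit), then advance to the next frame.
    `joint_argmax(t, last_label)` stands in for the pred + joint networks."""
    y, last = [], BLANK
    for t in range(num_frames):
        symbols_added = 0
        while symbols_added < max_symbols:
            k = joint_argmax(t, last)
            if k == BLANK:
                break  # blank: move on to the next frame
            y.append(k)  # non-blank: emit and feed back as the next input
            last = k
            symbols_added += 1
    return y

# Toy "joint": frame 0 emits label 1 then blank; frame 1 emits 2, 3, then blank.
script = {(0, 0): 1, (0, 1): 0, (1, 1): 2, (1, 2): 3, (1, 3): 0}
result = greedy_rnnt_decode(lambda t, last: script.get((t, last), BLANK), num_frames=2)
```

Unlike CTC, several labels can be emitted per frame, which is why `max_symbols` exists as a safety cap on the inner loop.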
class GreedyBatchedRNNTInfer(_GreedyRNNTInfer):
"""A batch level greedy transducer decoder.
Batch level greedy decoding, performed auto-regressively.
Args:
decoder_model: rnnt_utils.AbstractRNNTDecoder implementation.
joint_model: rnnt_utils.AbstractRNNTJoint implementation.
blank_index: int index of the blank token. Can be 0 or len(vocabulary).
max_symbols_per_step: Optional int. The maximum number of symbols that can be added
to a sequence in a single time step; if set to None then there is
no limit.
preserve_alignments: Bool flag which preserves the history of alignments generated during
greedy decoding (sample / batched). When set to true, the Hypothesis will contain
the non-null value for `alignments` in it. Here, `alignments` is a List of List of
Tuple(Tensor (of length V + 1), Tensor(scalar, label after argmax)).
The length of the list corresponds to the Acoustic Length (T).
Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more targets from a vocabulary.
U is the number of target tokens for the current timestep Ti.
preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores generated
during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of List of floats.
The length of the list corresponds to the Acoustic Length (T).
Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more confidence scores.
U is the number of target tokens for the current timestep Ti.
confidence_method_cfg: A dict-like object which contains the method name and settings to compute per-frame
confidence scores.
name: The method name (str).
Supported values:
- 'max_prob' for using the maximum token probability as a confidence.
- 'entropy' for using a normalized entropy of a log-likelihood vector.
entropy_type: Which type of entropy to use (str). Used if confidence_method_cfg.name is set to `entropy`.
Supported values:
- 'gibbs' for the (standard) Gibbs entropy. If the alpha (ฮฑ) is provided,
the formula is the following: H_ฮฑ = -sum_i((p^ฮฑ_i)*log(p^ฮฑ_i)).
Note that for this entropy, the alpha should comply with the following inequality:
(log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= ฮฑ <= (1+log(V-1))/log(V-1)
where V is the model vocabulary size.
- 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
Tsallis entropy formula is the following: H_ฮฑ = 1/(ฮฑ-1)*(1-sum_i(p^ฮฑ_i)),
where ฮฑ is a parameter. When ฮฑ == 1, it works like the Gibbs entropy.
More: https://en.wikipedia.org/wiki/Tsallis_entropy
- 'renyi' for the Rรฉnyi entropy.
Rรฉnyi entropy formula is the following: H_ฮฑ = 1/(1-ฮฑ)*log_2(sum_i(p^ฮฑ_i)),
where ฮฑ is a parameter. When ฮฑ == 1, it works like the Gibbs entropy.
More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
alpha: Power scale for logsoftmax (ฮฑ for entropies). Here we restrict it to be > 0.
When the alpha equals one, scaling is not applied to 'max_prob',
and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
entropy_norm: A mapping of the entropy value to the interval [0,1].
Supported values:
- 'lin' for using the linear mapping.
- 'exp' for using exponential mapping with linear shift.
"""
def __init__(
self,
decoder_model: rnnt_abstract.AbstractRNNTDecoder,
joint_model: rnnt_abstract.AbstractRNNTJoint,
blank_index: int,
max_symbols_per_step: Optional[int] = None,
preserve_alignments: bool = False,
preserve_frame_confidence: bool = False,
confidence_method_cfg: Optional[DictConfig] = None,
):
super().__init__(
decoder_model=decoder_model,
joint_model=joint_model,
blank_index=blank_index,
max_symbols_per_step=max_symbols_per_step,
preserve_alignments=preserve_alignments,
preserve_frame_confidence=preserve_frame_confidence,
confidence_method_cfg=confidence_method_cfg,
)
# Depending on availability of `blank_as_pad` support
# switch between more efficient batch decoding technique
if self.decoder.blank_as_pad:
self._greedy_decode = self._greedy_decode_blank_as_pad
else:
self._greedy_decode = self._greedy_decode_masked
@typecheck()
def forward(
self,
encoder_output: torch.Tensor,
encoded_lengths: torch.Tensor,
partial_hypotheses: Optional[List[rnnt_utils.Hypothesis]] = None,
):
"""Returns a list of hypotheses given an input batch of the encoder hidden embedding.
Output token is generated auto-regressively.
Args:
encoder_output: A tensor of size (batch, features, timesteps).
encoded_lengths: list of int representing the length of each sequence
output sequence.
Returns:
packed list containing batch number of sentences (Hypotheses).
"""
# Preserve decoder and joint training state
decoder_training_state = self.decoder.training
joint_training_state = self.joint.training
with torch.inference_mode():
# Apply optional preprocessing
encoder_output = encoder_output.transpose(1, 2) # (B, T, D)
logitlen = encoded_lengths
self.decoder.eval()
self.joint.eval()
with self.decoder.as_frozen(), self.joint.as_frozen():
inseq = encoder_output # [B, T, D]
hypotheses = self._greedy_decode(
inseq, logitlen, device=inseq.device, partial_hypotheses=partial_hypotheses
)
# Pack the hypotheses results
packed_result = pack_hypotheses(hypotheses, logitlen)
self.decoder.train(decoder_training_state)
self.joint.train(joint_training_state)
return (packed_result,)
def _greedy_decode_blank_as_pad(
self,
x: torch.Tensor,
out_len: torch.Tensor,
device: torch.device,
partial_hypotheses: Optional[List[rnnt_utils.Hypothesis]] = None,
):
if partial_hypotheses is not None:
raise NotImplementedError("`partial_hypotheses` support is not implemented")
with torch.inference_mode():
# x: [B, T, D]
# out_len: [B]
# device: torch.device
# Initialize list of Hypothesis
batchsize = x.shape[0]
hypotheses = [
rnnt_utils.Hypothesis(score=0.0, y_sequence=[], timestep=[], dec_state=None) for _ in range(batchsize)
]
# Initialize Hidden state matrix (shared by entire batch)
hidden = None
# If alignments need to be preserved, register a dangling list to hold the values
if self.preserve_alignments:
# alignments is a 3-dimensional dangling list representing B x T x U
for hyp in hypotheses:
hyp.alignments = [[]]
# If confidence scores need to be preserved, register a dangling list to hold the values
if self.preserve_frame_confidence:
# frame_confidence is a 3-dimensional dangling list representing B x T x U
for hyp in hypotheses:
hyp.frame_confidence = [[]]
# Last Label buffer + Last Label without blank buffer
# batch level equivalent of the last_label
last_label = torch.full([batchsize, 1], fill_value=self._blank_index, dtype=torch.long, device=device)
# Mask buffers
blank_mask = torch.full([batchsize], fill_value=0, dtype=torch.bool, device=device)
# Get max sequence length
max_out_len = out_len.max()
for time_idx in range(max_out_len):
f = x.narrow(dim=1, start=time_idx, length=1) # [B, 1, D]
# Prepare t timestamp batch variables
not_blank = True
symbols_added = 0
# Reset blank mask
blank_mask.mul_(False)
# Update blank mask with time mask
# Batch: [B, T, D], but Bi may have seq len < max(seq_lens_in_batch)
# Forcibly mask with "blank" tokens for all samples where the current time step T >= seq_len
blank_mask = time_idx >= out_len
# Start inner loop
while not_blank and (self.max_symbols is None or symbols_added < self.max_symbols):
# Batch prediction and joint network steps
# If very first prediction step, submit SOS tag (blank) to pred_step.
# This feeds a zero tensor as input to AbstractRNNTDecoder to prime the state
if time_idx == 0 and symbols_added == 0 and hidden is None:
g, hidden_prime = self._pred_step(self._SOS, hidden, batch_size=batchsize)
else:
# Perform batch step prediction of decoder, getting new states and scores ("g")
g, hidden_prime = self._pred_step(last_label, hidden, batch_size=batchsize)
# Batched joint step - Output = [B, V + 1]
# If preserving per-frame confidence, log_normalize must be true
logp = self._joint_step(f, g, log_normalize=True if self.preserve_frame_confidence else None)[
:, 0, 0, :
]
if logp.dtype != torch.float32:
logp = logp.float()
# Get index k, of max prob for batch
v, k = logp.max(1)
del g
# Update blank mask with current predicted blanks
# This is accumulating blanks over all time steps T and all target steps min(max_symbols, U)
k_is_blank = k == self._blank_index
blank_mask.bitwise_or_(k_is_blank)
all_blanks = torch.all(blank_mask)
del k_is_blank
# If preserving alignments, check if sequence length of sample has been reached
# before adding alignment
if self.preserve_alignments:
# Insert logprobs into last timestep per sample
logp_vals = logp.to('cpu')
logp_ids = logp_vals.max(1)[1]
for batch_idx, is_blank in enumerate(blank_mask):
# we only want to update non-blanks, unless we are at the last step in the loop where
# all elements produced blanks, otherwise there will be duplicate predictions
# saved in alignments
if time_idx < out_len[batch_idx] and (all_blanks or not is_blank):
hypotheses[batch_idx].alignments[-1].append(
(logp_vals[batch_idx], logp_ids[batch_idx])
)
del logp_vals
# If preserving per-frame confidence, check if sequence length of sample has been reached
# before adding confidence scores
if self.preserve_frame_confidence:
# Insert probabilities into last timestep per sample
confidence = self._get_confidence(logp)
for batch_idx, is_blank in enumerate(blank_mask):
if time_idx < out_len[batch_idx] and (all_blanks or not is_blank):
hypotheses[batch_idx].frame_confidence[-1].append(confidence[batch_idx])
del logp
# If all samples predict / have predicted prior blanks, exit loop early
# This is equivalent to if single sample predicted k
if all_blanks:
not_blank = False
else:
# Collect batch indices where blanks occurred now/past
blank_indices = (blank_mask == 1).nonzero(as_tuple=False)
# Recover prior state for all samples which predicted blank now/past
if hidden is not None:
# LSTM has 2 states
hidden_prime = self.decoder.batch_copy_states(hidden_prime, hidden, blank_indices)
elif len(blank_indices) > 0 and hidden is None:
# Reset state if there were some blank and other non-blank predictions in batch
# Original state is filled with zeros so we just multiply
# LSTM has 2 states
hidden_prime = self.decoder.batch_copy_states(hidden_prime, None, blank_indices, value=0.0)
# Recover prior predicted label for all samples which predicted blank now/past
k[blank_indices] = last_label[blank_indices, 0]
# Update new label and hidden state for next iteration
last_label = k.clone().view(-1, 1)
hidden = hidden_prime
# Update predicted labels, accounting for time mask
# If blank was predicted even once, now or in the past,
# Force the current predicted label to also be blank
# This ensures that blanks propagate across all timesteps
# once they have occurred (normally the stopping condition of the sample-level loop).
for kidx, ki in enumerate(k):
if blank_mask[kidx] == 0:
hypotheses[kidx].y_sequence.append(ki)
hypotheses[kidx].timestep.append(time_idx)
hypotheses[kidx].score += float(v[kidx])
symbols_added += 1
# If preserving alignments, convert the current Uj alignments into a torch.Tensor
# Then preserve U at current timestep Ti
# Finally, forward the timestep history to Ti+1 for that sample
# All of this should only be done iff the current time index <= sample-level AM length.
# Otherwise ignore and move to next sample / next timestep.
if self.preserve_alignments:
# convert Ti-th logits into a torch array
for batch_idx in range(batchsize):
# this checks if current timestep <= sample-level AM length
# If current timestep > sample-level AM length, no alignments will be added
# Therefore the list of Uj alignments is empty here.
if len(hypotheses[batch_idx].alignments[-1]) > 0:
hypotheses[batch_idx].alignments.append([]) # blank buffer for next timestep
# Do the same if preserving per-frame confidence
if self.preserve_frame_confidence:
for batch_idx in range(batchsize):
if len(hypotheses[batch_idx].frame_confidence[-1]) > 0:
hypotheses[batch_idx].frame_confidence.append([]) # blank buffer for next timestep
# Remove trailing empty list of alignments at T_{am-len} x Uj
if self.preserve_alignments:
for batch_idx in range(batchsize):
if len(hypotheses[batch_idx].alignments[-1]) == 0:
del hypotheses[batch_idx].alignments[-1]
# Remove trailing empty list of confidence scores at T_{am-len} x Uj
if self.preserve_frame_confidence:
for batch_idx in range(batchsize):
if len(hypotheses[batch_idx].frame_confidence[-1]) == 0:
del hypotheses[batch_idx].frame_confidence[-1]
# Preserve states
for batch_idx in range(batchsize):
hypotheses[batch_idx].dec_state = self.decoder.batch_select_state(hidden, batch_idx)
return hypotheses
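The batched loop above hinges on two masking tricks: the blank mask is first seeded from the per-sample lengths (`time_idx >= out_len`), then newly predicted blanks are OR-accumulated into it so a sample stays frozen once it emits blank. A minimal pure-Python sketch of that bookkeeping (`step_blank_mask` is an illustrative helper, with plain lists standing in for the torch tensors):

```python
def step_blank_mask(time_idx, out_len, predicted, blank_id):
    """Mimic the blank-mask updates of the batched greedy loop.

    time_idx: current encoder frame index
    out_len: per-sample valid lengths (list of int)
    predicted: per-sample argmax label ids at this step
    blank_id: index of the RNNT blank token
    Returns the accumulated blank mask for this frame.
    """
    # Time mask: samples whose sequence already ended are forced blank
    blank_mask = [time_idx >= length for length in out_len]
    # OR in the freshly predicted blanks (bitwise_or_ in the torch code)
    for i, k in enumerate(predicted):
        blank_mask[i] = blank_mask[i] or (k == blank_id)
    return blank_mask

# sample 1 is masked purely because its length (3) is exhausted at frame 3
print(step_blank_mask(time_idx=3, out_len=[5, 3], predicted=[7, 7], blank_id=28))
```

When every entry of the mask is True the inner loop exits early, which is the batched equivalent of a single sample predicting blank.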
def _greedy_decode_masked(
self,
x: torch.Tensor,
out_len: torch.Tensor,
device: torch.device,
partial_hypotheses: Optional[List[rnnt_utils.Hypothesis]] = None,
):
if partial_hypotheses is not None:
raise NotImplementedError("`partial_hypotheses` support is not implemented")
# x: [B, T, D]
# out_len: [B]
# device: torch.device
# Initialize state
batchsize = x.shape[0]
hypotheses = [
rnnt_utils.Hypothesis(score=0.0, y_sequence=[], timestep=[], dec_state=None) for _ in range(batchsize)
]
# Initialize Hidden state matrix (shared by entire batch)
hidden = None
# If alignments need to be preserved, register a dangling list to hold the values
if self.preserve_alignments:
# alignments is a 3-dimensional dangling list representing B x T x U
for hyp in hypotheses:
hyp.alignments = [[]]
# If confidence scores need to be preserved, register a dangling list to hold the values
if self.preserve_frame_confidence:
# frame_confidence is a 3-dimensional dangling list representing B x T x U
for hyp in hypotheses:
hyp.frame_confidence = [[]]
# Last Label buffer + Last Label without blank buffer
# batch level equivalent of the last_label
last_label = torch.full([batchsize, 1], fill_value=self._blank_index, dtype=torch.long, device=device)
last_label_without_blank = last_label.clone()
# Mask buffers
blank_mask = torch.full([batchsize], fill_value=0, dtype=torch.bool, device=device)
# Get max sequence length
max_out_len = out_len.max()
with torch.inference_mode():
for time_idx in range(max_out_len):
f = x.narrow(dim=1, start=time_idx, length=1) # [B, 1, D]
# Prepare t timestamp batch variables
not_blank = True
symbols_added = 0
# Reset blank mask
blank_mask.mul_(False)
# Update blank mask with time mask
# Batch: [B, T, D], but Bi may have seq len < max(seq_lens_in_batch)
# Forcibly mask with "blank" tokens, for all samples where current time step T > seq_len
blank_mask = time_idx >= out_len
# Start inner loop
while not_blank and (self.max_symbols is None or symbols_added < self.max_symbols):
# Batch prediction and joint network steps
# If very first prediction step, submit SOS tag (blank) to pred_step.
# This feeds a zero tensor as input to AbstractRNNTDecoder to prime the state
if time_idx == 0 and symbols_added == 0 and hidden is None:
g, hidden_prime = self._pred_step(self._SOS, hidden, batch_size=batchsize)
else:
# Set a dummy label for the blank value
# This value will be overwritten by "blank" again the last label update below
# This is done as vocabulary of prediction network does not contain "blank" token of RNNT
last_label_without_blank_mask = last_label == self._blank_index
last_label_without_blank[last_label_without_blank_mask] = 0 # temp change of label
last_label_without_blank[~last_label_without_blank_mask] = last_label[
~last_label_without_blank_mask
]
# Perform batch step prediction of decoder, getting new states and scores ("g")
g, hidden_prime = self._pred_step(last_label_without_blank, hidden, batch_size=batchsize)
# Batched joint step - Output = [B, V + 1]
# If preserving per-frame confidence, log_normalize must be true
logp = self._joint_step(f, g, log_normalize=True if self.preserve_frame_confidence else None)[
:, 0, 0, :
]
if logp.dtype != torch.float32:
logp = logp.float()
# Get index k, of max prob for batch
v, k = logp.max(1)
del g
# Update blank mask with current predicted blanks
# This is accumulating blanks over all time steps T and all target steps min(max_symbols, U)
k_is_blank = k == self._blank_index
blank_mask.bitwise_or_(k_is_blank)
all_blanks = torch.all(blank_mask)
# If preserving alignments, check if sequence length of sample has been reached
# before adding alignment
if self.preserve_alignments:
# Insert logprobs into last timestep per sample
logp_vals = logp.to('cpu')
logp_ids = logp_vals.max(1)[1]
for batch_idx, is_blank in enumerate(blank_mask):
# we only want to update non-blanks, unless we are at the last step in the loop where
# all elements produced blanks, otherwise there will be duplicate predictions
# saved in alignments
if time_idx < out_len[batch_idx] and (all_blanks or not is_blank):
hypotheses[batch_idx].alignments[-1].append(
(logp_vals[batch_idx], logp_ids[batch_idx])
)
del logp_vals
# If preserving per-frame confidence, check if sequence length of sample has been reached
# before adding confidence scores
if self.preserve_frame_confidence:
# Insert probabilities into last timestep per sample
confidence = self._get_confidence(logp)
for batch_idx, is_blank in enumerate(blank_mask):
if time_idx < out_len[batch_idx] and (all_blanks or not is_blank):
hypotheses[batch_idx].frame_confidence[-1].append(confidence[batch_idx])
del logp
# If all samples predict / have predicted prior blanks, exit loop early
# This is equivalent to if single sample predicted k
if all_blanks:
not_blank = False
else:
# Collect batch indices where blanks occurred now/past
blank_indices = (blank_mask == 1).nonzero(as_tuple=False)
# Recover prior state for all samples which predicted blank now/past
if hidden is not None:
# LSTM has 2 states
hidden_prime = self.decoder.batch_copy_states(hidden_prime, hidden, blank_indices)
elif len(blank_indices) > 0 and hidden is None:
# Reset state if there were some blank and other non-blank predictions in batch
# Original state is filled with zeros so we just multiply
# LSTM has 2 states
hidden_prime = self.decoder.batch_copy_states(hidden_prime, None, blank_indices, value=0.0)
# Recover prior predicted label for all samples which predicted blank now/past
k[blank_indices] = last_label[blank_indices, 0]
# Update new label and hidden state for next iteration
last_label = k.view(-1, 1)
hidden = hidden_prime
# Update predicted labels, accounting for time mask
# If blank was predicted even once, now or in the past,
# Force the current predicted label to also be blank
# This ensures that blanks propagate across all timesteps
# once they have occurred (normally the stopping condition of the sample-level loop).
for kidx, ki in enumerate(k):
if blank_mask[kidx] == 0:
hypotheses[kidx].y_sequence.append(ki)
hypotheses[kidx].timestep.append(time_idx)
hypotheses[kidx].score += float(v[kidx])
symbols_added += 1
# If preserving alignments, convert the current Uj alignments into a torch.Tensor
# Then preserve U at current timestep Ti
# Finally, forward the timestep history to Ti+1 for that sample
# All of this should only be done iff the current time index <= sample-level AM length.
# Otherwise ignore and move to next sample / next timestep.
if self.preserve_alignments:
# convert Ti-th logits into a torch array
for batch_idx in range(batchsize):
# this checks if current timestep <= sample-level AM length
# If current timestep > sample-level AM length, no alignments will be added
# Therefore the list of Uj alignments is empty here.
if len(hypotheses[batch_idx].alignments[-1]) > 0:
hypotheses[batch_idx].alignments.append([]) # blank buffer for next timestep
# Do the same if preserving per-frame confidence
if self.preserve_frame_confidence:
for batch_idx in range(batchsize):
if len(hypotheses[batch_idx].frame_confidence[-1]) > 0:
hypotheses[batch_idx].frame_confidence.append([]) # blank buffer for next timestep
# Remove trailing empty list of alignments at T_{am-len} x Uj
if self.preserve_alignments:
for batch_idx in range(batchsize):
if len(hypotheses[batch_idx].alignments[-1]) == 0:
del hypotheses[batch_idx].alignments[-1]
# Remove trailing empty list of confidence scores at T_{am-len} x Uj
if self.preserve_frame_confidence:
for batch_idx in range(batchsize):
if len(hypotheses[batch_idx].frame_confidence[-1]) == 0:
del hypotheses[batch_idx].frame_confidence[-1]
# Preserve states
for batch_idx in range(batchsize):
hypotheses[batch_idx].dec_state = self.decoder.batch_select_state(hidden, batch_idx)
return hypotheses
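Because the prediction network's embedding table has no entry for the RNNT blank, the masked variant above temporarily rewrites blank labels to a dummy in-vocabulary token (index 0) before the `_pred_step` call; the state-copy logic afterwards discards the predictions made from those dummies. A pure-Python sketch of just that substitution (`labels_without_blank` is an illustrative helper, not a NeMo API):

```python
def labels_without_blank(last_label, blank_id, dummy=0):
    """Replace blank ids with a dummy in-vocabulary token.

    Mirrors the `last_label_without_blank` masking: blank cannot be
    embedded by the prediction network, so it is swapped for a
    placeholder, and the affected samples' states are restored later.
    """
    return [dummy if label == blank_id else label for label in last_label]

print(labels_without_blank([28, 4, 28, 11], blank_id=28))  # -> [0, 4, 0, 11]
```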
class ExportedModelGreedyBatchedRNNTInfer:
def __init__(self, encoder_model: str, decoder_joint_model: str, max_symbols_per_step: Optional[int] = None):
self.encoder_model_path = encoder_model
self.decoder_joint_model_path = decoder_joint_model
self.max_symbols_per_step = max_symbols_per_step
# Will be populated at runtime
self._blank_index = None
def __call__(self, audio_signal: torch.Tensor, length: torch.Tensor):
"""Returns a list of hypotheses given an input batch of the encoder hidden embedding.
Output token is generated auto-regressively.
Args:
encoder_output: A tensor of size (batch, features, timesteps).
encoded_lengths: list of int representing the length of each sequence
output sequence.
Returns:
packed list containing batch number of sentences (Hypotheses).
"""
with torch.no_grad():
# Apply optional preprocessing
encoder_output, encoded_lengths = self.run_encoder(audio_signal=audio_signal, length=length)
if torch.is_tensor(encoder_output):
encoder_output = encoder_output.transpose(1, 2)
else:
encoder_output = encoder_output.transpose([0, 2, 1]) # (B, T, D)
logitlen = encoded_lengths
inseq = encoder_output # [B, T, D]
hypotheses, timestamps = self._greedy_decode(inseq, logitlen)
# Pack the hypotheses results
packed_result = [rnnt_utils.Hypothesis(score=-1.0, y_sequence=[]) for _ in range(len(hypotheses))]
for i in range(len(packed_result)):
packed_result[i].y_sequence = torch.tensor(hypotheses[i], dtype=torch.long)
packed_result[i].length = timestamps[i]
del hypotheses
return packed_result
def _greedy_decode(self, x, out_len):
# x: [B, T, D]
# out_len: [B]
# Initialize state
batchsize = x.shape[0]
hidden = self._get_initial_states(batchsize)
target_lengths = torch.ones(batchsize, dtype=torch.int32)
# Output string buffer
label = [[] for _ in range(batchsize)]
timesteps = [[] for _ in range(batchsize)]
# Last Label buffer + Last Label without blank buffer
# batch level equivalent of the last_label
last_label = torch.full([batchsize, 1], fill_value=self._blank_index, dtype=torch.long).numpy()
if torch.is_tensor(x):
last_label = torch.from_numpy(last_label).to(self.device)
# Mask buffers
blank_mask = torch.full([batchsize], fill_value=0, dtype=torch.bool).numpy()
# Get max sequence length
max_out_len = out_len.max()
for time_idx in range(max_out_len):
f = x[:, time_idx : time_idx + 1, :] # [B, 1, D]
if torch.is_tensor(f):
f = f.transpose(1, 2)
else:
f = f.transpose([0, 2, 1])
# Prepare t timestamp batch variables
not_blank = True
symbols_added = 0
# Reset blank mask
blank_mask *= False
# Update blank mask with time mask
# Batch: [B, T, D], but Bi may have seq len < max(seq_lens_in_batch)
# Forcibly mask with "blank" tokens, for all samples where current time step T > seq_len
blank_mask = time_idx >= out_len
# Start inner loop
while not_blank and (self.max_symbols_per_step is None or symbols_added < self.max_symbols_per_step):
# Batch prediction and joint network steps
# If very first prediction step, submit SOS tag (blank) to pred_step.
# This feeds a zero tensor as input to AbstractRNNTDecoder to prime the state
if time_idx == 0 and symbols_added == 0:
g = torch.tensor([self._blank_index] * batchsize, dtype=torch.int32).view(-1, 1)
else:
if torch.is_tensor(last_label):
g = last_label.type(torch.int32)
else:
g = last_label.astype(np.int32)
# Batched joint step - Output = [B, V + 1]
joint_out, hidden_prime = self.run_decoder_joint(f, g, target_lengths, *hidden)
logp, pred_lengths = joint_out
logp = logp[:, 0, 0, :]
# Get index k, of max prob for batch
if torch.is_tensor(logp):
v, k = logp.max(1)
else:
k = np.argmax(logp, axis=1).astype(np.int32)
# Update blank mask with current predicted blanks
# This is accumulating blanks over all time steps T and all target steps min(max_symbols, U)
k_is_blank = k == self._blank_index
blank_mask |= k_is_blank
del k_is_blank
del logp
# If all samples predict / have predicted prior blanks, exit loop early
# This is equivalent to if single sample predicted k
if blank_mask.all():
not_blank = False
else:
# Collect batch indices where blanks occurred now/past
if torch.is_tensor(blank_mask):
blank_indices = (blank_mask == 1).nonzero(as_tuple=False)
else:
blank_indices = blank_mask.astype(np.int32).nonzero()
if type(blank_indices) in (list, tuple):
blank_indices = blank_indices[0]
# Recover prior state for all samples which predicted blank now/past
if hidden is not None:
# LSTM has 2 states
for state_id in range(len(hidden)):
hidden_prime[state_id][:, blank_indices, :] = hidden[state_id][:, blank_indices, :]
elif len(blank_indices) > 0 and hidden is None:
# Reset state if there were some blank and other non-blank predictions in batch
# Original state is filled with zeros so we just multiply
# LSTM has 2 states
for state_id in range(len(hidden_prime)):
hidden_prime[state_id][:, blank_indices, :] *= 0.0
# Recover prior predicted label for all samples which predicted blank now/past
k[blank_indices] = last_label[blank_indices, 0]
# Update new label and hidden state for next iteration
if torch.is_tensor(k):
last_label = k.clone().reshape(-1, 1)
else:
last_label = k.copy().reshape(-1, 1)
hidden = hidden_prime
# Update predicted labels, accounting for time mask
# If blank was predicted even once, now or in the past,
# Force the current predicted label to also be blank
# This ensures that blanks propagate across all timesteps
# once they have occurred (normally the stopping condition of the sample-level loop).
for kidx, ki in enumerate(k):
if blank_mask[kidx] == 0:
label[kidx].append(ki)
timesteps[kidx].append(time_idx)
symbols_added += 1
return label, timesteps
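A key invariant of the loop above is that a sample which has already emitted blank must not advance its label: `k[blank_indices] = last_label[blank_indices, 0]` restores the previous label before `k` becomes `last_label` for the next inner step. The same rule in isolation, as a pure-Python sketch (`recover_blanked` is an illustrative helper):

```python
def recover_blanked(k, last_label, blank_mask):
    """For samples that emitted blank (now or earlier), keep the old label.

    Mirrors `k[blank_indices] = last_label[blank_indices, 0]` from the
    batched greedy loop: blanked samples are frozen at their previous
    label while the rest of the batch keeps decoding.
    """
    return [last_label[i] if blank_mask[i] else k[i] for i in range(len(k))]

print(recover_blanked([5, 9, 2], [1, 1, 1], [False, True, False]))  # -> [5, 1, 2]
```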
def _setup_blank_index(self):
raise NotImplementedError()
def run_encoder(self, audio_signal, length):
raise NotImplementedError()
def run_decoder_joint(self, enc_logits, targets, target_length, *states):
raise NotImplementedError()
def _get_initial_states(self, batchsize):
raise NotImplementedError()
class ONNXGreedyBatchedRNNTInfer(ExportedModelGreedyBatchedRNNTInfer):
def __init__(self, encoder_model: str, decoder_joint_model: str, max_symbols_per_step: Optional[int] = 10):
super().__init__(
encoder_model=encoder_model,
decoder_joint_model=decoder_joint_model,
max_symbols_per_step=max_symbols_per_step,
)
try:
import onnx
import onnxruntime
except (ModuleNotFoundError, ImportError):
raise ImportError("`onnx` or `onnxruntime` could not be imported, please install the libraries.")
if torch.cuda.is_available():
# Try to use onnxruntime-gpu
providers = ['TensorrtExecutionProvider', 'CUDAExecutionProvider']
else:
# Fall back to CPU and onnxruntime-cpu
providers = ['CPUExecutionProvider']
onnx_session_opt = onnxruntime.SessionOptions()
onnx_session_opt.graph_optimization_level = onnxruntime.GraphOptimizationLevel.ORT_ENABLE_ALL
onnx_model = onnx.load(self.encoder_model_path)
onnx.checker.check_model(onnx_model, full_check=True)
self.encoder_model = onnx_model
self.encoder = onnxruntime.InferenceSession(
onnx_model.SerializeToString(), providers=providers, provider_options=onnx_session_opt
)
onnx_model = onnx.load(self.decoder_joint_model_path)
onnx.checker.check_model(onnx_model, full_check=True)
self.decoder_joint_model = onnx_model
self.decoder_joint = onnxruntime.InferenceSession(
onnx_model.SerializeToString(), providers=providers, provider_options=onnx_session_opt
)
logging.info("Successfully loaded encoder, decoder and joint onnx models !")
# Will be populated at runtime
self._blank_index = None
self.max_symbols_per_step = max_symbols_per_step
self._setup_encoder_input_output_keys()
self._setup_decoder_joint_input_output_keys()
self._setup_blank_index()
def _setup_encoder_input_output_keys(self):
self.encoder_inputs = list(self.encoder_model.graph.input)
self.encoder_outputs = list(self.encoder_model.graph.output)
def _setup_decoder_joint_input_output_keys(self):
self.decoder_joint_inputs = list(self.decoder_joint_model.graph.input)
self.decoder_joint_outputs = list(self.decoder_joint_model.graph.output)
def _setup_blank_index(self):
# ASSUME: Single input with no time length information
dynamic_dim = 257
shapes = self.encoder_inputs[0].type.tensor_type.shape.dim
ip_shape = []
for shape in shapes:
if hasattr(shape, 'dim_param') and 'dynamic' in shape.dim_param:
ip_shape.append(dynamic_dim) # replace dynamic axes with constant
else:
ip_shape.append(int(shape.dim_value))
enc_logits, encoded_length = self.run_encoder(
audio_signal=torch.randn(*ip_shape), length=torch.randint(0, 1, size=(dynamic_dim,))
)
# prepare states
states = self._get_initial_states(batchsize=dynamic_dim)
# run decoder 1 step
joint_out, states = self.run_decoder_joint(enc_logits, None, None, *states)
log_probs, lengths = joint_out
self._blank_index = log_probs.shape[-1] - 1 # last token of vocab size is blank token
logging.info(
f"Enc-Dec-Joint step was evaluated, blank token id = {self._blank_index}; vocab size = {log_probs.shape[-1]}"
)
def run_encoder(self, audio_signal, length):
if hasattr(audio_signal, 'cpu'):
audio_signal = audio_signal.cpu().numpy()
if hasattr(length, 'cpu'):
length = length.cpu().numpy()
ip = {
self.encoder_inputs[0].name: audio_signal,
self.encoder_inputs[1].name: length,
}
enc_out = self.encoder.run(None, ip)
enc_out, encoded_length = enc_out # ASSUME: single output
return enc_out, encoded_length
def run_decoder_joint(self, enc_logits, targets, target_length, *states):
# ASSUME: Decoder is RNN Transducer
if targets is None:
targets = torch.zeros(enc_logits.shape[0], 1, dtype=torch.int32)
target_length = torch.ones(enc_logits.shape[0], dtype=torch.int32)
if hasattr(targets, 'cpu'):
targets = targets.cpu().numpy()
if hasattr(target_length, 'cpu'):
target_length = target_length.cpu().numpy()
ip = {
self.decoder_joint_inputs[0].name: enc_logits,
self.decoder_joint_inputs[1].name: targets,
self.decoder_joint_inputs[2].name: target_length,
}
num_states = 0
if states is not None and len(states) > 0:
num_states = len(states)
for idx, state in enumerate(states):
if hasattr(state, 'cpu'):
state = state.cpu().numpy()
ip[self.decoder_joint_inputs[len(ip)].name] = state
dec_out = self.decoder_joint.run(None, ip)
# unpack dec output
if num_states > 0:
new_states = dec_out[-num_states:]
dec_out = dec_out[:-num_states]
else:
new_states = None
return dec_out, new_states
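An exported decoder+joint graph returns one flat list of outputs, with the recurrent states (e.g. the LSTM h and c) appended at the end; `run_decoder_joint` above peels them off by counting from the tail. A small sketch of that unpacking (`split_outputs` is an illustrative helper, and the string entries stand in for real arrays):

```python
def split_outputs(flat_outputs, num_states):
    """Split a flat ONNX Runtime output list into (outputs, states).

    The trailing `num_states` entries are the recurrent states; with no
    states the whole list is returned unchanged, mirroring the
    unpacking in `run_decoder_joint`.
    """
    if num_states > 0:
        return flat_outputs[:-num_states], flat_outputs[-num_states:]
    return flat_outputs, None

print(split_outputs(["logp", "len", "h", "c"], num_states=2))
```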
def _get_initial_states(self, batchsize):
# ASSUME: LSTM STATES of shape (layers, batchsize, dim)
input_state_nodes = [ip for ip in self.decoder_joint_inputs if 'state' in ip.name]
num_states = len(input_state_nodes)
if num_states == 0:
return
input_states = []
for state_id in range(num_states):
node = input_state_nodes[state_id]
ip_shape = []
for shape_idx, shape in enumerate(node.type.tensor_type.shape.dim):
if hasattr(shape, 'dim_param') and 'dynamic' in shape.dim_param:
ip_shape.append(batchsize) # replace dynamic axes with constant
else:
ip_shape.append(int(shape.dim_value))
input_states.append(torch.zeros(*ip_shape))
return input_states
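Building the zero-filled initial states requires turning the graph's symbolic shapes into concrete ones: any dynamic axis is replaced by the requested batch size, while fixed axes keep their `dim_value`. A simplified sketch of that shape resolution (here a plain `str` stands in for an ONNX `dim_param`; `concretize_shape` is an illustrative helper):

```python
def concretize_shape(dims, batchsize):
    """Replace symbolic (dynamic) axes with a concrete batch size.

    ONNX graph inputs describe each axis either by an integer
    `dim_value` or a symbolic `dim_param`; this toy version treats any
    string entry as a dynamic axis, as done when allocating the
    zero-filled initial LSTM states.
    """
    return [batchsize if isinstance(d, str) else int(d) for d in dims]

print(concretize_shape([1, "dynamic_batch", 640], batchsize=8))  # -> [1, 8, 640]
```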
class TorchscriptGreedyBatchedRNNTInfer(ExportedModelGreedyBatchedRNNTInfer):
def __init__(
self,
encoder_model: str,
decoder_joint_model: str,
cfg: DictConfig,
device: str,
max_symbols_per_step: Optional[int] = 10,
):
super().__init__(
encoder_model=encoder_model,
decoder_joint_model=decoder_joint_model,
max_symbols_per_step=max_symbols_per_step,
)
self.cfg = cfg
self.device = device
self.encoder = torch.jit.load(self.encoder_model_path, map_location=self.device)
self.decoder_joint = torch.jit.load(self.decoder_joint_model_path, map_location=self.device)
logging.info("Successfully loaded encoder, decoder and joint torchscript models !")
# Will be populated at runtime
self._blank_index = None
self.max_symbols_per_step = max_symbols_per_step
self._setup_encoder_input_keys()
self._setup_decoder_joint_input_keys()
self._setup_blank_index()
def _setup_encoder_input_keys(self):
arguments = self.encoder.forward.schema.arguments[1:]
self.encoder_inputs = [arg for arg in arguments]
def _setup_decoder_joint_input_keys(self):
arguments = self.decoder_joint.forward.schema.arguments[1:]
self.decoder_joint_inputs = [arg for arg in arguments]
def _setup_blank_index(self):
self._blank_index = len(self.cfg.joint.vocabulary)
logging.info(f"Blank token id = {self._blank_index}; vocab size = {len(self.cfg.joint.vocabulary) + 1}")
def run_encoder(self, audio_signal, length):
enc_out = self.encoder(audio_signal, length)
enc_out, encoded_length = enc_out # ASSUME: single output
return enc_out, encoded_length
def run_decoder_joint(self, enc_logits, targets, target_length, *states):
# ASSUME: Decoder is RNN Transducer
if targets is None:
targets = torch.zeros(enc_logits.shape[0], 1, dtype=torch.int32, device=enc_logits.device)
target_length = torch.ones(enc_logits.shape[0], dtype=torch.int32, device=enc_logits.device)
num_states = 0
if states is not None and len(states) > 0:
num_states = len(states)
dec_out = self.decoder_joint(enc_logits, targets, target_length, *states)
# unpack dec output
if num_states > 0:
new_states = dec_out[-num_states:]
dec_out = dec_out[:-num_states]
else:
new_states = None
return dec_out, new_states
def _get_initial_states(self, batchsize):
# ASSUME: LSTM STATES of shape (layers, batchsize, dim)
input_state_nodes = [ip for ip in self.decoder_joint_inputs if 'state' in ip.name]
num_states = len(input_state_nodes)
if num_states == 0:
return
input_states = []
for state_id in range(num_states):
# Hardcode shape size for LSTM (1 is for num layers in LSTM, which is flattened for export)
ip_shape = [1, batchsize, self.cfg.model_defaults.pred_hidden]
input_states.append(torch.zeros(*ip_shape, device=self.device))
return input_states
class GreedyMultiblankRNNTInfer(GreedyRNNTInfer):
"""A greedy transducer decoder for multi-blank RNN-T.
Sequence level greedy decoding, performed auto-regressively.
Args:
decoder_model: rnnt_utils.AbstractRNNTDecoder implementation.
joint_model: rnnt_utils.AbstractRNNTJoint implementation.
blank_index: int index of the blank token. Must be len(vocabulary) for multi-blank RNNTs.
big_blank_durations: a list containing durations for big blanks the model supports.
max_symbols_per_step: Optional int. The maximum number of symbols that can be added
to a sequence in a single time step; if set to None then there is
no limit.
preserve_alignments: Bool flag which preserves the history of alignments generated during
greedy decoding (sample / batched). When set to true, the Hypothesis will contain
the non-null value for `alignments` in it. Here, `alignments` is a List of List of
Tuple(Tensor (of length V + 1 + num-big-blanks), Tensor(scalar, label after argmax)).
The length of the list corresponds to the Acoustic Length (T).
Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more targets from a vocabulary.
U is the number of target tokens for the current timestep Ti.
preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores generated
during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of List of floats.
The length of the list corresponds to the Acoustic Length (T).
Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more confidence scores.
U is the number of target tokens for the current timestep Ti.
confidence_method_cfg: A dict-like object which contains the method name and settings to compute per-frame
confidence scores.
name: The method name (str).
Supported values:
- 'max_prob' for using the maximum token probability as a confidence.
- 'entropy' for using a normalized entropy of a log-likelihood vector.
entropy_type: Which type of entropy to use (str). Used if confidence_method_cfg.name is set to `entropy`.
Supported values:
- 'gibbs' for the (standard) Gibbs entropy. If the alpha (α) is provided,
the formula is the following: H_α = -sum_i((p^α_i)*log(p^α_i)).
Note that for this entropy, the alpha should comply with the following inequality:
(log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= α <= (1+log(V-1))/log(V-1)
where V is the model vocabulary size.
- 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
Tsallis entropy formula is the following: H_α = 1/(α-1)*(1-sum_i(p^α_i)),
where α is a parameter. When α == 1, it works like the Gibbs entropy.
More: https://en.wikipedia.org/wiki/Tsallis_entropy
- 'renyi' for the Rényi entropy.
Rényi entropy formula is the following: H_α = 1/(1-α)*log_2(sum_i(p^α_i)),
where α is a parameter. When α == 1, it works like the Gibbs entropy.
More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
alpha: Power scale for logsoftmax (α for entropies). Here we restrict it to be > 0.
When the alpha equals one, scaling is not applied to 'max_prob',
and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
entropy_norm: A mapping of the entropy value to the interval [0,1].
Supported values:
- 'lin' for using the linear mapping.
- 'exp' for using exponential mapping with linear shift.
"""
def __init__(
self,
decoder_model: rnnt_abstract.AbstractRNNTDecoder,
joint_model: rnnt_abstract.AbstractRNNTJoint,
blank_index: int,
big_blank_durations: list,
max_symbols_per_step: Optional[int] = None,
preserve_alignments: bool = False,
preserve_frame_confidence: bool = False,
confidence_method_cfg: Optional[DictConfig] = None,
):
super().__init__(
decoder_model=decoder_model,
joint_model=joint_model,
blank_index=blank_index,
max_symbols_per_step=max_symbols_per_step,
preserve_alignments=preserve_alignments,
preserve_frame_confidence=preserve_frame_confidence,
confidence_method_cfg=confidence_method_cfg,
)
self.big_blank_durations = big_blank_durations
self._SOS = blank_index - len(big_blank_durations)
@torch.no_grad()
def _greedy_decode(
self, x: torch.Tensor, out_len: torch.Tensor, partial_hypotheses: Optional[rnnt_utils.Hypothesis] = None
):
# x: [T, 1, D]
# out_len: [seq_len]
# Initialize blank state and empty label set in Hypothesis
hypothesis = rnnt_utils.Hypothesis(score=0.0, y_sequence=[], dec_state=None, timestep=[], last_token=None)
if partial_hypotheses is not None:
hypothesis.last_token = partial_hypotheses.last_token
hypothesis.y_sequence = (
partial_hypotheses.y_sequence.cpu().tolist()
if isinstance(partial_hypotheses.y_sequence, torch.Tensor)
else partial_hypotheses.y_sequence
)
if partial_hypotheses.dec_state is not None:
hypothesis.dec_state = self.decoder.batch_concat_states([partial_hypotheses.dec_state])
hypothesis.dec_state = _states_to_device(hypothesis.dec_state, x.device)
if self.preserve_alignments:
# Alignments is a 2-dimensional dangling list representing T x U
hypothesis.alignments = [[]]
if self.preserve_frame_confidence:
hypothesis.frame_confidence = [[]]
# if this variable is > 1, it means the last emission was a big-blank and we need to skip frames.
big_blank_duration = 1
# For timestep t in X_t
for time_idx in range(out_len):
if big_blank_duration > 1:
# skip frames until big_blank_duration == 1.
big_blank_duration -= 1
continue
# Extract encoder embedding at timestep t
# f = x[time_idx, :, :].unsqueeze(0) # [1, 1, D]
f = x.narrow(dim=0, start=time_idx, length=1)
# Setup exit flags and counter
not_blank = True
symbols_added = 0
# While blank is not predicted, or we don't run out of max symbols per timestep
while not_blank and (self.max_symbols is None or symbols_added < self.max_symbols):
# In the first timestep, we initialize the network with RNNT Blank
# In later timesteps, we provide previous predicted label as input.
if hypothesis.last_token is None and hypothesis.dec_state is None:
last_label = self._SOS
else:
last_label = label_collate([[hypothesis.last_token]])
# Perform prediction network and joint network steps.
g, hidden_prime = self._pred_step(last_label, hypothesis.dec_state)
# If preserving per-frame confidence, log_normalize must be true
logp = self._joint_step(f, g, log_normalize=True if self.preserve_frame_confidence else None)[
0, 0, 0, :
]
del g
# torch.max(0) op doesn't exist for FP16.
if logp.dtype != torch.float32:
logp = logp.float()
# get index k, of max prob
v, k = logp.max(0)
k = k.item() # K is the label at timestep t_s in inner loop, s >= 0.
# Note, we have non-blanks in the vocab first, followed by big blanks, and standard blank at last.
# here we check if it's a big blank and if yes, set the duration variable.
if k >= self._blank_index - len(self.big_blank_durations) and k < self._blank_index:
big_blank_duration = self.big_blank_durations[self._blank_index - k - 1]
if self.preserve_alignments:
# insert logprobs into last timestep
hypothesis.alignments[-1].append((logp.to('cpu'), torch.tensor(k, dtype=torch.int32)))
if self.preserve_frame_confidence:
# insert confidence into last timestep
hypothesis.frame_confidence[-1].append(self._get_confidence(logp))
del logp
# If any type of blank token is predicted, exit inner loop, move onto next timestep t
if k >= self._blank_index - len(self.big_blank_durations):
not_blank = False
else:
# Append token to label set, update RNN state.
hypothesis.y_sequence.append(k)
hypothesis.score += float(v)
hypothesis.timestep.append(time_idx)
hypothesis.dec_state = hidden_prime
hypothesis.last_token = k
# Increment token counter.
symbols_added += 1
if self.preserve_alignments:
# convert Ti-th logits into a torch array
hypothesis.alignments.append([]) # blank buffer for next timestep
if self.preserve_frame_confidence:
hypothesis.frame_confidence.append([]) # blank buffer for next timestep
# Remove trailing empty list of Alignments
if self.preserve_alignments:
if len(hypothesis.alignments[-1]) == 0:
del hypothesis.alignments[-1]
# Remove trailing empty list of per-frame confidence
if self.preserve_frame_confidence:
if len(hypothesis.frame_confidence[-1]) == 0:
del hypothesis.frame_confidence[-1]
# Unpack the hidden states
hypothesis.dec_state = self.decoder.batch_select_state(hypothesis.dec_state, 0)
return hypothesis
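The frame-skipping driven by the `big_blank_duration` counter above can be illustrated in isolation. The sketch below is not part of the decoder; it uses made-up per-frame durations to show which encoder frames the greedy loop actually visits.

```python
# Hedged sketch: mirrors the `big_blank_duration` skip logic of the loop above.
def frames_decoded(durations_per_frame):
    """Given the blank duration emitted at each visited frame
    (1 = standard blank, >1 = big blank), return the frame indices
    the greedy loop actually processes."""
    visited = []
    big_blank_duration = 1
    for time_idx in range(len(durations_per_frame)):
        if big_blank_duration > 1:
            # skip frames until big_blank_duration == 1
            big_blank_duration -= 1
            continue
        visited.append(time_idx)
        big_blank_duration = durations_per_frame[time_idx]
    return visited

# A big blank of duration 3 at frame 1 makes the loop jump from
# frame 1 straight to frame 4.
print(frames_decoded([1, 3, 1, 1, 1, 1]))  # -> [0, 1, 4, 5]
```

With only standard blanks (all durations 1), every frame is visited, which reduces to ordinary greedy RNNT decoding.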
class GreedyBatchedMultiblankRNNTInfer(GreedyBatchedRNNTInfer):
"""A batch level greedy transducer decoder.
Batch level greedy decoding, performed auto-regressively.
Args:
decoder_model: rnnt_utils.AbstractRNNTDecoder implementation.
joint_model: rnnt_utils.AbstractRNNTJoint implementation.
blank_index: int index of the blank token. Must be len(vocabulary) for multi-blank RNNTs.
big_blank_durations: a list containing durations for big blanks the model supports.
max_symbols_per_step: Optional int. The maximum number of symbols that can be added
to a sequence in a single time step; if set to None then there is
no limit.
preserve_alignments: Bool flag which preserves the history of alignments generated during
greedy decoding (sample / batched). When set to true, the Hypothesis will contain
the non-null value for `alignments` in it. Here, `alignments` is a List of List of
Tuple(Tensor (of length V + 1 + num-big-blanks), Tensor(scalar, label after argmax)).
The length of the list corresponds to the Acoustic Length (T).
Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more targets from a vocabulary.
U is the number of target tokens for the current timestep Ti.
preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores generated
during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of List of floats.
The length of the list corresponds to the Acoustic Length (T).
Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more confidence scores.
U is the number of target tokens for the current timestep Ti.
confidence_method_cfg: A dict-like object which contains the method name and settings to compute per-frame
confidence scores.
name: The method name (str).
Supported values:
- 'max_prob' for using the maximum token probability as a confidence.
- 'entropy' for using a normalized entropy of a log-likelihood vector.
entropy_type: Which type of entropy to use (str). Used if confidence_method_cfg.name is set to `entropy`.
Supported values:
- 'gibbs' for the (standard) Gibbs entropy. If the alpha (α) is provided,
the formula is the following: H_α = -sum_i((p^α_i)*log(p^α_i)).
Note that for this entropy, the alpha should comply with the following inequality:
(log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= α <= (1+log(V-1))/log(V-1)
where V is the model vocabulary size.
- 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
Tsallis entropy formula is the following: H_α = 1/(α-1)*(1-sum_i(p^α_i)),
where α is a parameter. When α == 1, it works like the Gibbs entropy.
More: https://en.wikipedia.org/wiki/Tsallis_entropy
- 'renyi' for the Rényi entropy.
Rényi entropy formula is the following: H_α = 1/(1-α)*log_2(sum_i(p^α_i)),
where α is a parameter. When α == 1, it works like the Gibbs entropy.
More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
alpha: Power scale for logsoftmax (α for entropies). Here we restrict it to be > 0.
When the alpha equals one, scaling is not applied to 'max_prob',
and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
entropy_norm: A mapping of the entropy value to the interval [0,1].
Supported values:
- 'lin' for using the linear mapping.
- 'exp' for using exponential mapping with linear shift.
"""
def __init__(
self,
decoder_model: rnnt_abstract.AbstractRNNTDecoder,
joint_model: rnnt_abstract.AbstractRNNTJoint,
blank_index: int,
big_blank_durations: List[int],
max_symbols_per_step: Optional[int] = None,
preserve_alignments: bool = False,
preserve_frame_confidence: bool = False,
confidence_method_cfg: Optional[DictConfig] = None,
):
super().__init__(
decoder_model=decoder_model,
joint_model=joint_model,
blank_index=blank_index,
max_symbols_per_step=max_symbols_per_step,
preserve_alignments=preserve_alignments,
preserve_frame_confidence=preserve_frame_confidence,
confidence_method_cfg=confidence_method_cfg,
)
self.big_blank_durations = big_blank_durations
# Depending on availability of `blank_as_pad` support
# switch between more efficient batch decoding technique
if self.decoder.blank_as_pad:
self._greedy_decode = self._greedy_decode_blank_as_pad
else:
self._greedy_decode = self._greedy_decode_masked
self._SOS = blank_index - len(big_blank_durations)
def _greedy_decode_blank_as_pad(
self,
x: torch.Tensor,
out_len: torch.Tensor,
device: torch.device,
partial_hypotheses: Optional[List[rnnt_utils.Hypothesis]] = None,
):
if partial_hypotheses is not None:
raise NotImplementedError("`partial_hypotheses` support is not implemented")
with torch.inference_mode():
# x: [B, T, D]
# out_len: [B]
# device: torch.device
# Initialize list of Hypothesis
batchsize = x.shape[0]
hypotheses = [
rnnt_utils.Hypothesis(score=0.0, y_sequence=[], timestep=[], dec_state=None) for _ in range(batchsize)
]
# Initialize Hidden state matrix (shared by entire batch)
hidden = None
# If alignments need to be preserved, register a dangling list to hold the values
if self.preserve_alignments:
# alignments is a 3-dimensional dangling list representing B x T x U
for hyp in hypotheses:
hyp.alignments = [[]]
# If confidence scores need to be preserved, register a dangling list to hold the values
if self.preserve_frame_confidence:
# frame_confidence is a 3-dimensional dangling list representing B x T x U
for hyp in hypotheses:
hyp.frame_confidence = [[]]
# Last Label buffer + Last Label without blank buffer
# batch level equivalent of the last_label
last_label = torch.full([batchsize, 1], fill_value=self._SOS, dtype=torch.long, device=device)
# this mask is true for if the emission is *any type* of blank.
blank_mask = torch.full([batchsize], fill_value=0, dtype=torch.bool, device=device)
# Get max sequence length
max_out_len = out_len.max()
# We have a mask for each big blank. A mask value of "true" means: the previous emission was exactly the
# big-blank with the corresponding duration, or one with a larger duration. E.g., the big_blank_mask for
# duration 2 will be set true if the previous emission was a big blank with duration 4, 3 or 2; but false
# if the previous emission was a standard blank (with duration = 1).
big_blank_masks = [torch.full([batchsize], fill_value=0, dtype=torch.bool, device=device)] * len(
self.big_blank_durations
)
# if this variable > 1, it means the previous emission was big-blank and we need to skip frames.
big_blank_duration = 1
for time_idx in range(max_out_len):
if big_blank_duration > 1:
# skip frames until big_blank_duration == 1
big_blank_duration -= 1
continue
f = x.narrow(dim=1, start=time_idx, length=1) # [B, 1, D]
# Prepare t timestamp batch variables
not_blank = True
symbols_added = 0
# Reset all blank masks
blank_mask.mul_(False)
for i in range(len(big_blank_masks)):
big_blank_masks[i].mul_(False)
# Update blank mask with time mask
# Batch: [B, T, D], but Bi may have seq len < max(seq_lens_in_batch)
# Forcibly mask with "blank" tokens, for all sample where current time step T > seq_len
blank_mask = time_idx >= out_len
for i in range(len(big_blank_masks)):
big_blank_masks[i] = time_idx >= out_len
# Start inner loop
while not_blank and (self.max_symbols is None or symbols_added < self.max_symbols):
# Batch prediction and joint network steps
# If very first prediction step, submit SOS tag (blank) to pred_step.
# This feeds a zero tensor as input to AbstractRNNTDecoder to prime the state
if time_idx == 0 and symbols_added == 0 and hidden is None:
g, hidden_prime = self._pred_step(self._SOS, hidden, batch_size=batchsize)
else:
# Perform batch step prediction of decoder, getting new states and scores ("g")
g, hidden_prime = self._pred_step(last_label, hidden, batch_size=batchsize)
# Batched joint step - Output = [B, V + 1 + num-big-blanks]
# If preserving per-frame confidence, log_normalize must be true
logp = self._joint_step(f, g, log_normalize=True if self.preserve_frame_confidence else None)[
:, 0, 0, :
]
if logp.dtype != torch.float32:
logp = logp.float()
# Get index k, of max prob for batch
v, k = logp.max(1)
del g
# Update blank mask with current predicted blanks
# This is accumulating blanks over all time steps T and all target steps min(max_symbols, U)
k_is_blank = k >= self._blank_index - len(self.big_blank_durations)
blank_mask.bitwise_or_(k_is_blank)
for i in range(len(big_blank_masks)):
# using <= since as we mentioned before, the mask doesn't store exact matches.
# instead, it is True when the predicted blank's duration is >= the duration that the
# mask corresponds to.
k_is_big_blank = k <= self._blank_index - 1 - i
# need to do a bitwise_and since it could also be a non-blank.
k_is_big_blank.bitwise_and_(k_is_blank)
big_blank_masks[i].bitwise_or_(k_is_big_blank)
del k_is_blank
# If preserving alignments, check if sequence length of sample has been reached
# before adding alignment
if self.preserve_alignments:
# Insert logprobs into last timestep per sample
logp_vals = logp.to('cpu')
logp_ids = logp_vals.max(1)[1]
for batch_idx in range(batchsize):
if time_idx < out_len[batch_idx]:
hypotheses[batch_idx].alignments[-1].append(
(logp_vals[batch_idx], logp_ids[batch_idx])
)
del logp_vals
# If preserving per-frame confidence, check if sequence length of sample has been reached
# before adding confidence scores
if self.preserve_frame_confidence:
# Insert probabilities into last timestep per sample
confidence = self._get_confidence(logp)
for batch_idx in range(batchsize):
if time_idx < out_len[batch_idx]:
hypotheses[batch_idx].frame_confidence[-1].append(confidence[batch_idx])
del logp
# If all samples predict / have predicted prior blanks, exit loop early
# This is equivalent to if single sample predicted k
if blank_mask.all():
not_blank = False
else:
# Collect batch indices where blanks occurred now/past
blank_indices = (blank_mask == 1).nonzero(as_tuple=False)
# Recover prior state for all samples which predicted blank now/past
if hidden is not None:
# LSTM has 2 states
hidden_prime = self.decoder.batch_copy_states(hidden_prime, hidden, blank_indices)
elif len(blank_indices) > 0 and hidden is None:
# Reset state if there were some blank and other non-blank predictions in batch
# Original state is filled with zeros so we just multiply
# LSTM has 2 states
hidden_prime = self.decoder.batch_copy_states(hidden_prime, None, blank_indices, value=0.0)
# Recover prior predicted label for all samples which predicted blank now/past
k[blank_indices] = last_label[blank_indices, 0]
# Update new label and hidden state for next iteration
last_label = k.clone().view(-1, 1)
hidden = hidden_prime
# Update predicted labels, accounting for time mask
# If blank was predicted even once, now or in the past,
# Force the current predicted label to also be blank
# This ensures that blanks propagate across all timesteps
# once they have occurred (normally the stopping condition of the sample-level loop).
for kidx, ki in enumerate(k):
if blank_mask[kidx] == 0:
hypotheses[kidx].y_sequence.append(ki)
hypotheses[kidx].timestep.append(time_idx)
hypotheses[kidx].score += float(v[kidx])
symbols_added += 1
for i in range(len(big_blank_masks) + 1):
# The task here is find the shortest blank duration of all batches.
# so we start from the shortest blank duration and go up,
# and stop once we found the duration whose corresponding mask isn't all True.
if i == len(big_blank_masks) or not big_blank_masks[i].all():
big_blank_duration = self.big_blank_durations[i - 1] if i > 0 else 1
break
# If preserving alignments, convert the current Uj alignments into a torch.Tensor
# Then preserve U at current timestep Ti
# Finally, forward the timestep history to Ti+1 for that sample
# All of this should only be done iff the current time index <= sample-level AM length.
# Otherwise ignore and move to next sample / next timestep.
if self.preserve_alignments:
# convert Ti-th logits into a torch array
for batch_idx in range(batchsize):
# this checks if current timestep <= sample-level AM length
# If current timestep > sample-level AM length, no alignments will be added
# Therefore the list of Uj alignments is empty here.
if len(hypotheses[batch_idx].alignments[-1]) > 0:
hypotheses[batch_idx].alignments.append([]) # blank buffer for next timestep
# Do the same if preserving per-frame confidence
if self.preserve_frame_confidence:
for batch_idx in range(batchsize):
if len(hypotheses[batch_idx].frame_confidence[-1]) > 0:
hypotheses[batch_idx].frame_confidence.append([]) # blank buffer for next timestep
# Remove trailing empty list of alignments at T_{am-len} x Uj
if self.preserve_alignments:
for batch_idx in range(batchsize):
if len(hypotheses[batch_idx].alignments[-1]) == 0:
del hypotheses[batch_idx].alignments[-1]
# Remove trailing empty list of confidence scores at T_{am-len} x Uj
if self.preserve_frame_confidence:
for batch_idx in range(batchsize):
if len(hypotheses[batch_idx].frame_confidence[-1]) == 0:
del hypotheses[batch_idx].frame_confidence[-1]
# Preserve states
for batch_idx in range(batchsize):
hypotheses[batch_idx].dec_state = self.decoder.batch_select_state(hidden, batch_idx)
return hypotheses
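The "shortest common skip" selection over `big_blank_masks` above can be sketched standalone. This is a toy re-expression with plain lists, assuming (as in the loop above) that `big_blank_masks[i]` is True for every batch element whose last blank had at least the i-th big-blank duration, and that durations are sorted ascending:

```python
# Hedged sketch of the batch-level skip selection above (toy values).
def common_skip(big_blank_masks, big_blank_durations):
    """Return the number of frames the whole batch can safely skip:
    start from the shortest duration and stop at the first mask
    that is not all-True."""
    for i in range(len(big_blank_masks) + 1):
        if i == len(big_blank_masks) or not all(big_blank_masks[i]):
            return big_blank_durations[i - 1] if i > 0 else 1
    return 1  # unreachable; keeps the function total

# One sample only emitted a standard blank -> no frames can be skipped.
print(common_skip([[True, False], [False, False]], [2, 4]))  # -> 1
# Every sample emitted at least a duration-2 big blank -> skip 2 frames.
print(common_skip([[True, True], [True, False]], [2, 4]))    # -> 2
```

Because all samples in a batch must advance in lockstep, the batch can only skip as far as its most conservative (shortest-duration) member allows.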
def _greedy_decode_masked(
self,
x: torch.Tensor,
out_len: torch.Tensor,
device: torch.device,
partial_hypotheses: Optional[List[rnnt_utils.Hypothesis]] = None,
):
if partial_hypotheses is not None:
raise NotImplementedError("`partial_hypotheses` support is not implemented")
if self.big_blank_durations != [1] * len(self.big_blank_durations):
raise NotImplementedError(
"Efficient frame-skipping version for multi-blank masked decoding is not supported."
)
# x: [B, T, D]
# out_len: [B]
# device: torch.device
# Initialize state
batchsize = x.shape[0]
hypotheses = [
rnnt_utils.Hypothesis(score=0.0, y_sequence=[], timestep=[], dec_state=None) for _ in range(batchsize)
]
# Initialize Hidden state matrix (shared by entire batch)
hidden = None
# If alignments need to be preserved, register a dangling list to hold the values
# (Hypothesis initializes `alignments` to None by default, so no else branch is needed)
if self.preserve_alignments:
# alignments is a 3-dimensional dangling list representing B x T x U
for hyp in hypotheses:
hyp.alignments = [[]]
# If confidence scores need to be preserved, register a dangling list to hold the values
if self.preserve_frame_confidence:
# frame_confidence is a 3-dimensional dangling list representing B x T x U
for hyp in hypotheses:
hyp.frame_confidence = [[]]
# Last Label buffer + Last Label without blank buffer
# batch level equivalent of the last_label
last_label = torch.full([batchsize, 1], fill_value=self._blank_index, dtype=torch.long, device=device)
last_label_without_blank = last_label.clone()
# Mask buffers
blank_mask = torch.full([batchsize], fill_value=0, dtype=torch.bool, device=device)
# Get max sequence length
max_out_len = out_len.max()
with torch.inference_mode():
for time_idx in range(max_out_len):
f = x.narrow(dim=1, start=time_idx, length=1) # [B, 1, D]
# Prepare t timestamp batch variables
not_blank = True
symbols_added = 0
# Reset blank mask
blank_mask.mul_(False)
# Update blank mask with time mask
# Batch: [B, T, D], but Bi may have seq len < max(seq_lens_in_batch)
# Forcibly mask with "blank" tokens, for all sample where current time step T > seq_len
blank_mask = time_idx >= out_len
# Start inner loop
while not_blank and (self.max_symbols is None or symbols_added < self.max_symbols):
# Batch prediction and joint network steps
# If very first prediction step, submit SOS tag (blank) to pred_step.
# This feeds a zero tensor as input to AbstractRNNTDecoder to prime the state
if time_idx == 0 and symbols_added == 0 and hidden is None:
g, hidden_prime = self._pred_step(self._SOS, hidden, batch_size=batchsize)
else:
# Set a dummy label for the blank value
# This value will be overwritten by "blank" again the last label update below
# This is done as vocabulary of prediction network does not contain "blank" token of RNNT
last_label_without_blank_mask = last_label >= self._blank_index
last_label_without_blank[last_label_without_blank_mask] = 0 # temp change of label
last_label_without_blank[~last_label_without_blank_mask] = last_label[
~last_label_without_blank_mask
]
# Perform batch step prediction of decoder, getting new states and scores ("g")
g, hidden_prime = self._pred_step(last_label_without_blank, hidden, batch_size=batchsize)
# Batched joint step - Output = [B, V + 1 + num-big-blanks]
# If preserving per-frame confidence, log_normalize must be true
logp = self._joint_step(f, g, log_normalize=True if self.preserve_frame_confidence else None)[
:, 0, 0, :
]
if logp.dtype != torch.float32:
logp = logp.float()
# Get index k, of max prob for batch
v, k = logp.max(1)
del g
# Update blank mask with current predicted blanks
# This is accumulating blanks over all time steps T and all target steps min(max_symbols, U)
k_is_blank = k == self._blank_index
blank_mask.bitwise_or_(k_is_blank)
# If preserving alignments, check if sequence length of sample has been reached
# before adding alignment
if self.preserve_alignments:
# Insert logprobs into last timestep per sample
logp_vals = logp.to('cpu')
logp_ids = logp_vals.max(1)[1]
for batch_idx in range(batchsize):
if time_idx < out_len[batch_idx]:
hypotheses[batch_idx].alignments[-1].append(
(logp_vals[batch_idx], logp_ids[batch_idx])
)
del logp_vals
# If preserving per-frame confidence, check if sequence length of sample has been reached
# before adding confidence scores
if self.preserve_frame_confidence:
# Insert probabilities into last timestep per sample
confidence = self._get_confidence(logp)
for batch_idx in range(batchsize):
if time_idx < out_len[batch_idx]:
hypotheses[batch_idx].frame_confidence[-1].append(confidence[batch_idx])
del logp
# If all samples predict / have predicted prior blanks, exit loop early
# This is equivalent to if single sample predicted k
if blank_mask.all():
not_blank = False
else:
# Collect batch indices where blanks occurred now/past
blank_indices = (blank_mask == 1).nonzero(as_tuple=False)
# Recover prior state for all samples which predicted blank now/past
if hidden is not None:
# LSTM has 2 states
hidden_prime = self.decoder.batch_copy_states(hidden_prime, hidden, blank_indices)
elif len(blank_indices) > 0 and hidden is None:
# Reset state if there were some blank and other non-blank predictions in batch
# Original state is filled with zeros so we just multiply
# LSTM has 2 states
hidden_prime = self.decoder.batch_copy_states(hidden_prime, None, blank_indices, value=0.0)
# Recover prior predicted label for all samples which predicted blank now/past
k[blank_indices] = last_label[blank_indices, 0]
# Update new label and hidden state for next iteration
last_label = k.view(-1, 1)
hidden = hidden_prime
# Update predicted labels, accounting for time mask
# If blank was predicted even once, now or in the past,
# Force the current predicted label to also be blank
# This ensures that blanks propagate across all timesteps
# once they have occurred (normally the stopping condition of the sample-level loop).
for kidx, ki in enumerate(k):
if blank_mask[kidx] == 0:
hypotheses[kidx].y_sequence.append(ki)
hypotheses[kidx].timestep.append(time_idx)
hypotheses[kidx].score += float(v[kidx])
symbols_added += 1
# If preserving alignments, convert the current Uj alignments into a torch.Tensor
# Then preserve U at current timestep Ti
# Finally, forward the timestep history to Ti+1 for that sample
# All of this should only be done iff the current time index <= sample-level AM length.
# Otherwise ignore and move to next sample / next timestep.
if self.preserve_alignments:
# convert Ti-th logits into a torch array
for batch_idx in range(batchsize):
# this checks if current timestep <= sample-level AM length
# If current timestep > sample-level AM length, no alignments will be added
# Therefore the list of Uj alignments is empty here.
if len(hypotheses[batch_idx].alignments[-1]) > 0:
hypotheses[batch_idx].alignments.append([]) # blank buffer for next timestep
# Do the same if preserving per-frame confidence
if self.preserve_frame_confidence:
for batch_idx in range(batchsize):
if len(hypotheses[batch_idx].frame_confidence[-1]) > 0:
hypotheses[batch_idx].frame_confidence.append([]) # blank buffer for next timestep
# Remove trailing empty list of alignments at T_{am-len} x Uj
if self.preserve_alignments:
for batch_idx in range(batchsize):
if len(hypotheses[batch_idx].alignments[-1]) == 0:
del hypotheses[batch_idx].alignments[-1]
# Remove trailing empty list of confidence scores at T_{am-len} x Uj
if self.preserve_frame_confidence:
for batch_idx in range(batchsize):
if len(hypotheses[batch_idx].frame_confidence[-1]) == 0:
del hypotheses[batch_idx].frame_confidence[-1]
# Preserve states
for batch_idx in range(batchsize):
hypotheses[batch_idx].dec_state = self.decoder.batch_select_state(hidden, batch_idx)
return hypotheses
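The dummy-label substitution in `_greedy_decode_masked` above (the prediction network's vocabulary contains no blank token, so entries whose last label is blank are temporarily fed a dummy label) can be sketched with plain lists; the values here are toy:

```python
# Hedged sketch of the blank-masking trick above (toy values).
blank_index = 4
last_label = [2, 4, 0, 4]  # 4 == blank in this toy setup

# Replace blank labels with a dummy value (0); real labels pass through.
last_label_without_blank = [0 if lab >= blank_index else lab for lab in last_label]
print(last_label_without_blank)  # -> [2, 0, 0, 0]
```

The dummy value is harmless because the corresponding decoder states and labels are restored from the previous step for every entry whose blank mask is set.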
@dataclass
class GreedyRNNTInferConfig:
max_symbols_per_step: Optional[int] = 10
preserve_alignments: bool = False
preserve_frame_confidence: bool = False
confidence_method_cfg: Optional[ConfidenceMethodConfig] = ConfidenceMethodConfig()
def __post_init__(self):
# OmegaConf.structured ensures that post_init check is always executed
self.confidence_method_cfg = OmegaConf.structured(
self.confidence_method_cfg
if isinstance(self.confidence_method_cfg, ConfidenceMethodConfig)
else ConfidenceMethodConfig(**self.confidence_method_cfg)
)
@dataclass
class GreedyBatchedRNNTInferConfig:
max_symbols_per_step: Optional[int] = 10
preserve_alignments: bool = False
preserve_frame_confidence: bool = False
confidence_method_cfg: Optional[ConfidenceMethodConfig] = ConfidenceMethodConfig()
def __post_init__(self):
# OmegaConf.structured ensures that post_init check is always executed
self.confidence_method_cfg = OmegaConf.structured(
self.confidence_method_cfg
if isinstance(self.confidence_method_cfg, ConfidenceMethodConfig)
else ConfidenceMethodConfig(**self.confidence_method_cfg)
)
class GreedyTDTInfer(_GreedyRNNTInfer):
"""A greedy TDT decoder.
Sequence level greedy decoding, performed auto-regressively.
Args:
decoder_model: rnnt_utils.AbstractRNNTDecoder implementation.
joint_model: rnnt_utils.AbstractRNNTJoint implementation.
blank_index: int index of the blank token. Must be len(vocabulary) for TDT models.
durations: a list containing durations for TDT.
max_symbols_per_step: Optional int. The maximum number of symbols that can be added
to a sequence in a single time step; if set to None then there is
no limit.
preserve_alignments: Bool flag which preserves the history of alignments generated during
greedy decoding (sample / batched). When set to true, the Hypothesis will contain
the non-null value for `alignments` in it. Here, `alignments` is a List of List of
Tuple(Tensor (of length V + 1 + num-big-blanks), Tensor(scalar, label after argmax)).
The length of the list corresponds to the Acoustic Length (T).
Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more targets from a vocabulary.
U is the number of target tokens for the current timestep Ti.
preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores generated
during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of List of floats.
The length of the list corresponds to the Acoustic Length (T).
Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more confidence scores.
U is the number of target tokens for the current timestep Ti.
confidence_method_cfg: A dict-like object which contains the method name and settings to compute per-frame
confidence scores.
name: The method name (str).
Supported values:
- 'max_prob' for using the maximum token probability as a confidence.
- 'entropy' for using a normalized entropy of a log-likelihood vector.
entropy_type: Which type of entropy to use (str). Used if confidence_method_cfg.name is set to `entropy`.
Supported values:
- 'gibbs' for the (standard) Gibbs entropy. If the alpha (α) is provided,
the formula is the following: H_α = -sum_i((p^α_i)*log(p^α_i)).
Note that for this entropy, the alpha should comply with the following inequality:
(log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= α <= (1+log(V-1))/log(V-1)
where V is the model vocabulary size.
- 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
Tsallis entropy formula is the following: H_α = 1/(α-1)*(1-sum_i(p^α_i)),
where α is a parameter. When α == 1, it works like the Gibbs entropy.
More: https://en.wikipedia.org/wiki/Tsallis_entropy
- 'renyi' for the Rényi entropy.
Rényi entropy formula is the following: H_α = 1/(1-α)*log_2(sum_i(p^α_i)),
where α is a parameter. When α == 1, it works like the Gibbs entropy.
More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
alpha: Power scale for logsoftmax (α for entropies). Here we restrict it to be > 0.
When the alpha equals one, scaling is not applied to 'max_prob',
and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
entropy_norm: A mapping of the entropy value to the interval [0,1].
Supported values:
- 'lin' for using the linear mapping.
- 'exp' for using exponential mapping with linear shift.
"""
def __init__(
self,
decoder_model: rnnt_abstract.AbstractRNNTDecoder,
joint_model: rnnt_abstract.AbstractRNNTJoint,
blank_index: int,
durations: list,
max_symbols_per_step: Optional[int] = None,
preserve_alignments: bool = False,
preserve_frame_confidence: bool = False,
confidence_method_cfg: Optional[DictConfig] = None,
):
super().__init__(
decoder_model=decoder_model,
joint_model=joint_model,
blank_index=blank_index,
max_symbols_per_step=max_symbols_per_step,
preserve_alignments=preserve_alignments,
preserve_frame_confidence=preserve_frame_confidence,
confidence_method_cfg=confidence_method_cfg,
)
self.durations = durations
@typecheck()
def forward(
self,
encoder_output: torch.Tensor,
encoded_lengths: torch.Tensor,
partial_hypotheses: Optional[List[rnnt_utils.Hypothesis]] = None,
):
"""Returns a list of hypotheses given an input batch of the encoder hidden embedding.
Output token is generated auto-regressively.
Args:
encoder_output: A tensor of size (batch, features, timesteps).
encoded_lengths: list of int representing the length of each sequence
output sequence.
Returns:
packed list containing batch number of sentences (Hypotheses).
"""
# Preserve decoder and joint training state
decoder_training_state = self.decoder.training
joint_training_state = self.joint.training
with torch.inference_mode():
# Apply optional preprocessing
encoder_output = encoder_output.transpose(1, 2) # (B, T, D)
self.decoder.eval()
self.joint.eval()
hypotheses = []
# Process each sequence independently
with self.decoder.as_frozen(), self.joint.as_frozen():
for batch_idx in range(encoder_output.size(0)):
inseq = encoder_output[batch_idx, :, :].unsqueeze(1) # [T, 1, D]
logitlen = encoded_lengths[batch_idx]
partial_hypothesis = partial_hypotheses[batch_idx] if partial_hypotheses is not None else None
hypothesis = self._greedy_decode(inseq, logitlen, partial_hypotheses=partial_hypothesis)
hypotheses.append(hypothesis)
# Pack results into Hypotheses
packed_result = pack_hypotheses(hypotheses, encoded_lengths)
self.decoder.train(decoder_training_state)
self.joint.train(joint_training_state)
return (packed_result,)
@torch.no_grad()
def _greedy_decode(
self, x: torch.Tensor, out_len: torch.Tensor, partial_hypotheses: Optional[rnnt_utils.Hypothesis] = None
):
# x: [T, 1, D]
# out_len: [seq_len]
# Initialize blank state and empty label set in Hypothesis
hypothesis = rnnt_utils.Hypothesis(score=0.0, y_sequence=[], dec_state=None, timestep=[], last_token=None)
if partial_hypotheses is not None:
hypothesis.last_token = partial_hypotheses.last_token
hypothesis.y_sequence = (
partial_hypotheses.y_sequence.cpu().tolist()
if isinstance(partial_hypotheses.y_sequence, torch.Tensor)
else partial_hypotheses.y_sequence
)
if partial_hypotheses.dec_state is not None:
hypothesis.dec_state = self.decoder.batch_concat_states([partial_hypotheses.dec_state])
hypothesis.dec_state = _states_to_device(hypothesis.dec_state, x.device)
if self.preserve_alignments:
# Alignments is a 2-dimensional dangling list representing T x U
hypothesis.alignments = [[]]
if self.preserve_frame_confidence:
hypothesis.frame_confidence = [[]]
time_idx = 0
while time_idx < out_len:
# Extract encoder embedding at timestep t
# f = x[time_idx, :, :].unsqueeze(0) # [1, 1, D]
f = x.narrow(dim=0, start=time_idx, length=1)
# Setup exit flags and counter
not_blank = True
symbols_added = 0
need_loop = True
# While blank is not predicted, or we don't run out of max symbols per timestep
while need_loop and (self.max_symbols is None or symbols_added < self.max_symbols):
# In the first timestep, we initialize the network with RNNT Blank
# In later timesteps, we provide previous predicted label as input.
if hypothesis.last_token is None and hypothesis.dec_state is None:
last_label = self._SOS
else:
last_label = label_collate([[hypothesis.last_token]])
# Perform prediction network and joint network steps.
g, hidden_prime = self._pred_step(last_label, hypothesis.dec_state)
# log_normalize is False here; log-softmax is applied manually below when confidence is preserved
logits = self._joint_step(f, g, log_normalize=False)
logp = logits[0, 0, 0, : -len(self.durations)]
if self.preserve_frame_confidence:
logp = torch.log_softmax(logp, -1)
duration_logp = torch.log_softmax(logits[0, 0, 0, -len(self.durations) :], dim=-1)
del g
# torch.max(0) op doesn't exist for FP16.
if logp.dtype != torch.float32:
logp = logp.float()
# get index k, of max prob
v, k = logp.max(0)
k = k.item() # K is the label at timestep t_s in inner loop, s >= 0.
d_v, d_k = duration_logp.max(0)
d_k = d_k.item()
skip = self.durations[d_k]
if self.preserve_alignments:
# insert logprobs into last timestep
hypothesis.alignments[-1].append((logp.to('cpu'), torch.tensor(k, dtype=torch.int32)))
if self.preserve_frame_confidence:
# insert confidence into last timestep
hypothesis.frame_confidence[-1].append(self._get_confidence(logp))
del logp
# If blank token is predicted, exit inner loop, move onto next timestep t
if k == self._blank_index:
not_blank = False
else:
# Append token to label set, update RNN state.
hypothesis.y_sequence.append(k)
hypothesis.score += float(v)
hypothesis.timestep.append(time_idx)
hypothesis.dec_state = hidden_prime
hypothesis.last_token = k
# Increment token counter.
symbols_added += 1
time_idx += skip
need_loop = skip == 0
# this rarely happens, but we manually increment the `skip` number
# if blank is emitted and duration=0 is predicted. This prevents possible
# infinite loops.
if skip == 0:
skip = 1
if self.preserve_alignments:
# convert Ti-th logits into a torch array
hypothesis.alignments.append([]) # blank buffer for next timestep
if self.preserve_frame_confidence:
hypothesis.frame_confidence.append([]) # blank buffer for next timestep
if symbols_added == self.max_symbols:
time_idx += 1
# Remove trailing empty list of Alignments
if self.preserve_alignments:
if len(hypothesis.alignments[-1]) == 0:
del hypothesis.alignments[-1]
# Remove trailing empty list of per-frame confidence
if self.preserve_frame_confidence:
if len(hypothesis.frame_confidence[-1]) == 0:
del hypothesis.frame_confidence[-1]
# Unpack the hidden states
hypothesis.dec_state = self.decoder.batch_select_state(hypothesis.dec_state, 0)
return hypothesis
class GreedyBatchedTDTInfer(_GreedyRNNTInfer):
"""A batch level greedy TDT decoder.
Batch level greedy decoding, performed auto-regressively.
Args:
decoder_model: rnnt_utils.AbstractRNNTDecoder implementation.
joint_model: rnnt_utils.AbstractRNNTJoint implementation.
blank_index: int index of the blank token. Must be len(vocabulary) for TDT models.
durations: a list containing durations.
max_symbols_per_step: Optional int. The maximum number of symbols that can be added
to a sequence in a single time step; if set to None then there is
no limit.
preserve_alignments: Bool flag which preserves the history of alignments generated during
greedy decoding (sample / batched). When set to true, the Hypothesis will contain
the non-null value for `alignments` in it. Here, `alignments` is a List of List of
Tuple(Tensor (of length V + 1 + num-big-blanks), Tensor(scalar, label after argmax)).
The length of the list corresponds to the Acoustic Length (T).
Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more targets from a vocabulary.
U is the number of target tokens for the current timestep Ti.
preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores generated
during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of List of floats.
The length of the list corresponds to the Acoustic Length (T).
Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more confidence scores.
U is the number of target tokens for the current timestep Ti.
confidence_method_cfg: A dict-like object which contains the method name and settings to compute per-frame
confidence scores.
name: The method name (str).
Supported values:
- 'max_prob' for using the maximum token probability as a confidence.
- 'entropy' for using a normalized entropy of a log-likelihood vector.
entropy_type: Which type of entropy to use (str). Used if confidence_method_cfg.name is set to `entropy`.
Supported values:
- 'gibbs' for the (standard) Gibbs entropy. If the alpha (α) is provided,
the formula is the following: H_α = -sum_i((p^α_i)*log(p^α_i)).
Note that for this entropy, the alpha should comply with the following inequality:
(log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= α <= (1+log(V-1))/log(V-1)
where V is the model vocabulary size.
- 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
Tsallis entropy formula is the following: H_α = 1/(α-1)*(1-sum_i(p^α_i)),
where α is a parameter. When α == 1, it works like the Gibbs entropy.
More: https://en.wikipedia.org/wiki/Tsallis_entropy
- 'renyi' for the Rényi entropy.
Rényi entropy formula is the following: H_α = 1/(1-α)*log_2(sum_i(p^α_i)),
where α is a parameter. When α == 1, it works like the Gibbs entropy.
More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
alpha: Power scale for logsoftmax (α for entropies). Here we restrict it to be > 0.
When the alpha equals one, scaling is not applied to 'max_prob',
and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
entropy_norm: A mapping of the entropy value to the interval [0,1].
Supported values:
- 'lin' for using the linear mapping.
- 'exp' for using exponential mapping with linear shift.
"""
def __init__(
self,
decoder_model: rnnt_abstract.AbstractRNNTDecoder,
joint_model: rnnt_abstract.AbstractRNNTJoint,
blank_index: int,
durations: List[int],
max_symbols_per_step: Optional[int] = None,
preserve_alignments: bool = False,
preserve_frame_confidence: bool = False,
confidence_method_cfg: Optional[DictConfig] = None,
):
super().__init__(
decoder_model=decoder_model,
joint_model=joint_model,
blank_index=blank_index,
max_symbols_per_step=max_symbols_per_step,
preserve_alignments=preserve_alignments,
preserve_frame_confidence=preserve_frame_confidence,
confidence_method_cfg=confidence_method_cfg,
)
self.durations = durations
# Depending on availability of `blank_as_pad` support
# switch between more efficient batch decoding technique
if self.decoder.blank_as_pad:
self._greedy_decode = self._greedy_decode_blank_as_pad
else:
self._greedy_decode = self._greedy_decode_masked
@typecheck()
def forward(
self,
encoder_output: torch.Tensor,
encoded_lengths: torch.Tensor,
partial_hypotheses: Optional[List[rnnt_utils.Hypothesis]] = None,
):
"""Returns a list of hypotheses given an input batch of the encoder hidden embedding.
Output token is generated auto-regressively.
Args:
encoder_output: A tensor of size (batch, features, timesteps).
encoded_lengths: list of int representing the length of each sequence
output sequence.
Returns:
packed list containing batch number of sentences (Hypotheses).
"""
# Preserve decoder and joint training state
decoder_training_state = self.decoder.training
joint_training_state = self.joint.training
with torch.inference_mode():
# Apply optional preprocessing
encoder_output = encoder_output.transpose(1, 2) # (B, T, D)
logitlen = encoded_lengths
self.decoder.eval()
self.joint.eval()
with self.decoder.as_frozen(), self.joint.as_frozen():
inseq = encoder_output # [B, T, D]
hypotheses = self._greedy_decode(
inseq, logitlen, device=inseq.device, partial_hypotheses=partial_hypotheses
)
# Pack the hypotheses results
packed_result = pack_hypotheses(hypotheses, logitlen)
self.decoder.train(decoder_training_state)
self.joint.train(joint_training_state)
return (packed_result,)
def _greedy_decode_blank_as_pad(
self,
x: torch.Tensor,
out_len: torch.Tensor,
device: torch.device,
partial_hypotheses: Optional[List[rnnt_utils.Hypothesis]] = None,
):
if partial_hypotheses is not None:
raise NotImplementedError("`partial_hypotheses` support is not implemented")
with torch.inference_mode():
# x: [B, T, D]
# out_len: [B]
# device: torch.device
# Initialize list of Hypothesis
batchsize = x.shape[0]
hypotheses = [
rnnt_utils.Hypothesis(score=0.0, y_sequence=[], timestep=[], dec_state=None) for _ in range(batchsize)
]
# Initialize Hidden state matrix (shared by entire batch)
hidden = None
# If alignments need to be preserved, register a dangling list to hold the values
if self.preserve_alignments:
# alignments is a 3-dimensional dangling list representing B x T x U
for hyp in hypotheses:
hyp.alignments = [[]]
# If confidence scores need to be preserved, register a dangling list to hold the values
if self.preserve_frame_confidence:
# frame_confidence is a 3-dimensional dangling list representing B x T x U
for hyp in hypotheses:
hyp.frame_confidence = [[]]
# Last Label buffer + Last Label without blank buffer
# batch level equivalent of the last_label
last_label = torch.full([batchsize, 1], fill_value=self._blank_index, dtype=torch.long, device=device)
# Mask buffers
blank_mask = torch.full([batchsize], fill_value=0, dtype=torch.bool, device=device)
# Get max sequence length
max_out_len = out_len.max()
# skip means the number of frames the next decoding step should "jump" to. When skip == 1
# it means the next decoding step will just use the next input frame.
skip = 1
for time_idx in range(max_out_len):
if skip > 1: # if skip > 1 at the current step, we decrement it and skip the current frame.
skip -= 1
continue
f = x.narrow(dim=1, start=time_idx, length=1) # [B, 1, D]
# need_to_stay is a boolean indicates whether the next decoding step should remain in the same frame.
need_to_stay = True
symbols_added = 0
# Reset blank mask
blank_mask.mul_(False)
# Update blank mask with time mask
# Batch: [B, T, D], but Bi may have seq len < max(seq_lens_in_batch)
# Forcibly mask with "blank" tokens, for all samples where current time step T >= seq_len
blank_mask = time_idx >= out_len
# Start inner loop
while need_to_stay and (self.max_symbols is None or symbols_added < self.max_symbols):
# Batch prediction and joint network steps
# If very first prediction step, submit SOS tag (blank) to pred_step.
# This feeds a zero tensor as input to AbstractRNNTDecoder to prime the state
if time_idx == 0 and symbols_added == 0 and hidden is None:
g, hidden_prime = self._pred_step(self._SOS, hidden, batch_size=batchsize)
else:
# Perform batch step prediction of decoder, getting new states and scores ("g")
g, hidden_prime = self._pred_step(last_label, hidden, batch_size=batchsize)
# Batched joint step - Output = [B, V + 1 + num-big-blanks]
# Note: log_normalize must not be True here since the joint output is a concatenation of token logits and duration logits,
# and they need to be normalized independently.
joined = self._joint_step(f, g, log_normalize=None)
logp = joined[:, 0, 0, : -len(self.durations)]
duration_logp = joined[:, 0, 0, -len(self.durations) :]
if logp.dtype != torch.float32:
logp = logp.float()
duration_logp = duration_logp.float()
# get the max for both token and duration predictions.
v, k = logp.max(1)
dv, dk = duration_logp.max(1)
# here we set the skip value to be the minimum of all predicted durations, hence the "torch.min(dk)" call.
# Please refer to Section 5.2 of our paper https://arxiv.org/pdf/2304.06795.pdf for an explanation of this.
skip = self.durations[int(torch.min(dk))]
# this is a special case: if all batches emit blanks, we require that skip be at least 1
# so we don't loop forever at the current frame.
if blank_mask.all():
if skip == 0:
skip = 1
need_to_stay = skip == 0
del g
# Update blank mask with current predicted blanks
# This is accumulating blanks over all time steps T and all target steps min(max_symbols, U)
k_is_blank = k == self._blank_index
blank_mask.bitwise_or_(k_is_blank)
del k_is_blank
del logp, duration_logp
# If all samples predict / have predicted prior blanks, exit loop early
# This is equivalent to if single sample predicted k
if not blank_mask.all():
# Collect batch indices where blanks occurred now/past
blank_indices = (blank_mask == 1).nonzero(as_tuple=False)
# Recover prior state for all samples which predicted blank now/past
if hidden is not None:
hidden_prime = self.decoder.batch_copy_states(hidden_prime, hidden, blank_indices)
elif len(blank_indices) > 0 and hidden is None:
# Reset state if there were some blank and other non-blank predictions in batch
# Original state is filled with zeros so we just multiply
# LSTM has 2 states
hidden_prime = self.decoder.batch_copy_states(hidden_prime, None, blank_indices, value=0.0)
# Recover prior predicted label for all samples which predicted blank now/past
k[blank_indices] = last_label[blank_indices, 0]
# Update new label and hidden state for next iteration
last_label = k.clone().view(-1, 1)
hidden = hidden_prime
# Update predicted labels, accounting for time mask
# If blank was predicted even once, now or in the past,
# Force the current predicted label to also be blank
# This ensures that blanks propagate across all timesteps
# once they have occurred (normally the stopping condition of the sample-level loop).
for kidx, ki in enumerate(k):
if blank_mask[kidx] == 0:
hypotheses[kidx].y_sequence.append(ki)
hypotheses[kidx].timestep.append(time_idx)
hypotheses[kidx].score += float(v[kidx])
symbols_added += 1
# Remove trailing empty list of alignments at T_{am-len} x Uj
if self.preserve_alignments:
for batch_idx in range(batchsize):
if len(hypotheses[batch_idx].alignments[-1]) == 0:
del hypotheses[batch_idx].alignments[-1]
# Remove trailing empty list of confidence scores at T_{am-len} x Uj
if self.preserve_frame_confidence:
for batch_idx in range(batchsize):
if len(hypotheses[batch_idx].frame_confidence[-1]) == 0:
del hypotheses[batch_idx].frame_confidence[-1]
# Preserve states
for batch_idx in range(batchsize):
hypotheses[batch_idx].dec_state = self.decoder.batch_select_state(hidden, batch_idx)
return hypotheses
def _greedy_decode_masked(
self,
x: torch.Tensor,
out_len: torch.Tensor,
device: torch.device,
partial_hypotheses: Optional[List[rnnt_utils.Hypothesis]] = None,
):
raise NotImplementedError("masked greedy-batched decode is not supported for TDT models.")
[end of nemo/collections/asr/parts/submodules/rnnt_greedy_decoding.py]
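The duration-based frame skipping implemented by the TDT greedy decoders above can be illustrated in isolation. The following is a minimal sketch, not the real decoder: `fake_predictions` is a hypothetical stub standing in for the (token, duration) pairs the joint network would emit, and the loop is simplified to consume one prediction per step. It shows how the predicted duration advances `time_idx`, and how a blank emitted with duration 0 is bumped to a skip of 1 to prevent an infinite loop on the same frame:

```python
# Sketch of TDT frame-skip control flow (hypothetical stub, not the NeMo decoder).
def tdt_skip_schedule(fake_predictions, out_len, blank_index, durations):
    """Return the frame indices visited, given (token_idx, duration_idx) pairs."""
    visited = []
    preds = iter(fake_predictions)
    time_idx = 0
    while time_idx < out_len:
        visited.append(time_idx)
        k, d_k = next(preds)          # token index and duration index at this step
        skip = durations[d_k]
        # Mirror the decoder's special case: blank with duration 0 forces skip = 1
        # so decoding does not loop forever at the current frame.
        if k == blank_index and skip == 0:
            skip = 1
        time_idx += skip
    return visited

# durations = [0, 1, 2]; the blank token has index 3 in this toy vocabulary
print(tdt_skip_schedule([(1, 2), (3, 0), (2, 1), (3, 2)],
                        out_len=5, blank_index=3, durations=[0, 1, 2]))
# -> [0, 2, 3, 4]
```

Note that a non-blank token with duration 0 keeps `time_idx` in place, which corresponds to the inner "stay on this frame" loop of the real decoder.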
[start of nemo/collections/asr/parts/utils/asr_confidence_utils.py]
# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import math
from abc import ABC, abstractmethod
from dataclasses import dataclass
from functools import partial
from typing import List, Optional
import torch
from omegaconf import DictConfig, OmegaConf
from nemo.collections.asr.parts.utils.rnnt_utils import Hypothesis
from nemo.utils import logging
class ConfidenceMethodConstants:
NAMES = ("max_prob", "entropy")
ENTROPY_TYPES = ("gibbs", "tsallis", "renyi")
ENTROPY_NORMS = ("lin", "exp")
@classmethod
def print(cls):
return (
cls.__name__
+ ": "
+ str({"NAMES": cls.NAMES, "ENTROPY_TYPES": cls.ENTROPY_TYPES, "ENTROPY_NORMS": cls.ENTROPY_NORMS})
)
class ConfidenceConstants:
AGGREGATIONS = ("mean", "min", "max", "prod")
@classmethod
def print(cls):
return cls.__name__ + ": " + str({"AGGREGATIONS": cls.AGGREGATIONS})
@dataclass
class ConfidenceMethodConfig:
"""A Config which contains the method name and settings to compute per-frame confidence scores.
Args:
name: The method name (str).
Supported values:
- 'max_prob' for using the maximum token probability as a confidence.
- 'entropy' for using a normalized entropy of a log-likelihood vector.
entropy_type: Which type of entropy to use (str).
Used if confidence_method_cfg.name is set to `entropy`.
Supported values:
- 'gibbs' for the (standard) Gibbs entropy. If the alpha (α) is provided,
the formula is the following: H_α = -sum_i((p^α_i)*log(p^α_i)).
Note that for this entropy, the alpha should comply with the following inequality:
(log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= α <= (1+log(V-1))/log(V-1)
where V is the model vocabulary size.
- 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
Tsallis entropy formula is the following: H_α = 1/(α-1)*(1-sum_i(p^α_i)),
where α is a parameter. When α == 1, it works like the Gibbs entropy.
More: https://en.wikipedia.org/wiki/Tsallis_entropy
- 'renyi' for the Rényi entropy.
Rényi entropy formula is the following: H_α = 1/(1-α)*log_2(sum_i(p^α_i)),
where α is a parameter. When α == 1, it works like the Gibbs entropy.
More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
alpha: Power scale for logsoftmax (α for entropies). Here we restrict it to be > 0.
When the alpha equals one, scaling is not applied to 'max_prob',
and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
entropy_norm: A mapping of the entropy value to the interval [0,1].
Supported values:
- 'lin' for using the linear mapping.
- 'exp' for using exponential mapping with linear shift.
"""
name: str = "entropy"
entropy_type: str = "tsallis"
alpha: float = 0.33
entropy_norm: str = "exp"
temperature: str = "DEPRECATED"
def __post_init__(self):
if self.temperature != "DEPRECATED":
# self.temperature has type str
self.alpha = float(self.temperature)
self.temperature = "DEPRECATED"
if self.name not in ConfidenceMethodConstants.NAMES:
raise ValueError(
f"`name` must be one of the following: "
f"{'`' + '`, `'.join(ConfidenceMethodConstants.NAMES) + '`'}. Provided: `{self.name}`"
)
if self.entropy_type not in ConfidenceMethodConstants.ENTROPY_TYPES:
raise ValueError(
f"`entropy_type` must be one of the following: "
f"{'`' + '`, `'.join(ConfidenceMethodConstants.ENTROPY_TYPES) + '`'}. Provided: `{self.entropy_type}`"
)
if self.alpha <= 0.0:
raise ValueError(f"`alpha` must be > 0. Provided: {self.alpha}")
if self.entropy_norm not in ConfidenceMethodConstants.ENTROPY_NORMS:
raise ValueError(
f"`entropy_norm` must be one of the following: "
f"{'`' + '`, `'.join(ConfidenceMethodConstants.ENTROPY_NORMS) + '`'}. Provided: `{self.entropy_norm}`"
)
@dataclass
class ConfidenceConfig:
"""A config which contains the following key-value pairs related to confidence scores.
Args:
preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores
generated during decoding. When set to true, the Hypothesis will contain
the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of floats.
preserve_token_confidence: Bool flag which preserves the history of per-token confidence scores
generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
the non-null value for `token_confidence` in it. Here, `token_confidence` is a List of floats.
The length of the list corresponds to the number of recognized tokens.
preserve_word_confidence: Bool flag which preserves the history of per-word confidence scores
generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
the non-null value for `word_confidence` in it. Here, `word_confidence` is a List of floats.
The length of the list corresponds to the number of recognized words.
exclude_blank: Bool flag indicating that blank token confidence scores are to be excluded
from the `token_confidence`.
aggregation: Which aggregation type to use for collapsing per-token confidence into per-word confidence.
Valid options are `mean`, `min`, `max`, `prod`.
method_cfg: A dict-like object which contains the method name and settings to compute per-frame
confidence scores.
name: The method name (str).
Supported values:
- 'max_prob' for using the maximum token probability as a confidence.
- 'entropy' for using a normalized entropy of a log-likelihood vector.
entropy_type: Which type of entropy to use (str). Used if confidence_method_cfg.name is set to `entropy`.
Supported values:
- 'gibbs' for the (standard) Gibbs entropy. If the alpha (α) is provided,
the formula is the following: H_α = -sum_i((p^α_i)*log(p^α_i)).
Note that for this entropy, the alpha should comply with the following inequality:
(log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= α <= (1+log(V-1))/log(V-1)
where V is the model vocabulary size.
- 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
Tsallis entropy formula is the following: H_α = 1/(α-1)*(1-sum_i(p^α_i)),
where α is a parameter. When α == 1, it works like the Gibbs entropy.
More: https://en.wikipedia.org/wiki/Tsallis_entropy
- 'renyi' for the Rényi entropy.
Rényi entropy formula is the following: H_α = 1/(1-α)*log_2(sum_i(p^α_i)),
where α is a parameter. When α == 1, it works like the Gibbs entropy.
More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
alpha: Power scale for logsoftmax (α for entropies). Here we restrict it to be > 0.
When the alpha equals one, scaling is not applied to 'max_prob',
and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
entropy_norm: A mapping of the entropy value to the interval [0,1].
Supported values:
- 'lin' for using the linear mapping.
- 'exp' for using exponential mapping with linear shift.
"""
preserve_frame_confidence: bool = False
preserve_token_confidence: bool = False
preserve_word_confidence: bool = False
exclude_blank: bool = True
aggregation: str = "min"
method_cfg: ConfidenceMethodConfig = ConfidenceMethodConfig()
def __post_init__(self):
# OmegaConf.structured ensures that post_init check is always executed
self.method_cfg = OmegaConf.structured(
self.method_cfg
if isinstance(self.method_cfg, ConfidenceMethodConfig)
else ConfidenceMethodConfig(**self.method_cfg)
)
if self.aggregation not in ConfidenceConstants.AGGREGATIONS:
raise ValueError(
f"`aggregation` has to be one of the following: "
f"{'`' + '`, `'.join(ConfidenceConstants.AGGREGATIONS) + '`'}. Provided: `{self.aggregation}`"
)
def get_confidence_measure_bank():
"""Generate a dictionary with confidence measure functionals.
Supported confidence measures:
max_prob: normalized maximum probability
entropy_gibbs_lin: Gibbs entropy with linear normalization
entropy_gibbs_exp: Gibbs entropy with exponential normalization
entropy_tsallis_lin: Tsallis entropy with linear normalization
entropy_tsallis_exp: Tsallis entropy with exponential normalization
entropy_renyi_lin: Rényi entropy with linear normalization
entropy_renyi_exp: Rényi entropy with exponential normalization
Returns:
dictionary with lambda functions.
"""
# helper functions
# Gibbs entropy is implemented without alpha
neg_entropy_gibbs = lambda x: (x.exp() * x).sum(-1)
neg_entropy_alpha = lambda x, t: (x * t).exp().sum(-1)
neg_entropy_alpha_gibbs = lambda x, t: ((x * t).exp() * x).sum(-1)
# too big for a lambda
def entropy_tsallis_exp(x, v, t):
exp_neg_max_ent = math.exp((1 - math.pow(v, 1 - t)) / (1 - t))
return (((1 - neg_entropy_alpha(x, t)) / (1 - t)).exp() - exp_neg_max_ent) / (1 - exp_neg_max_ent)
def entropy_gibbs_exp(x, v, t):
exp_neg_max_ent = math.pow(v, -t * math.pow(v, 1 - t))
return ((neg_entropy_alpha_gibbs(x, t) * t).exp() - exp_neg_max_ent) / (1 - exp_neg_max_ent)
# use Gibbs entropies for Tsallis and Rényi with t == 1.0
entropy_gibbs_lin_baseline = lambda x, v: 1 + neg_entropy_gibbs(x) / math.log(v)
entropy_gibbs_exp_baseline = lambda x, v: (neg_entropy_gibbs(x).exp() * v - 1) / (v - 1)
# fill the measure bank
confidence_measure_bank = {}
# Maximum probability measure is implemented without alpha
confidence_measure_bank["max_prob"] = (
lambda x, v, t: (x.max(dim=-1)[0].exp() * v - 1) / (v - 1)
if t == 1.0
else ((x.max(dim=-1)[0] * t).exp() * math.pow(v, t) - 1) / (math.pow(v, t) - 1)
)
confidence_measure_bank["entropy_gibbs_lin"] = (
lambda x, v, t: entropy_gibbs_lin_baseline(x, v)
if t == 1.0
else 1 + neg_entropy_alpha_gibbs(x, t) / math.log(v) / math.pow(v, 1 - t)
)
confidence_measure_bank["entropy_gibbs_exp"] = (
lambda x, v, t: entropy_gibbs_exp_baseline(x, v) if t == 1.0 else entropy_gibbs_exp(x, v, t)
)
confidence_measure_bank["entropy_tsallis_lin"] = (
lambda x, v, t: entropy_gibbs_lin_baseline(x, v)
if t == 1.0
else 1 + (1 - neg_entropy_alpha(x, t)) / (math.pow(v, 1 - t) - 1)
)
confidence_measure_bank["entropy_tsallis_exp"] = (
lambda x, v, t: entropy_gibbs_exp_baseline(x, v) if t == 1.0 else entropy_tsallis_exp(x, v, t)
)
confidence_measure_bank["entropy_renyi_lin"] = (
lambda x, v, t: entropy_gibbs_lin_baseline(x, v)
if t == 1.0
else 1 + neg_entropy_alpha(x, t).log2() / (t - 1) / math.log(v, 2)
)
confidence_measure_bank["entropy_renyi_exp"] = (
lambda x, v, t: entropy_gibbs_exp_baseline(x, v)
if t == 1.0
else (neg_entropy_alpha(x, t).pow(1 / (t - 1)) * v - 1) / (v - 1)
)
return confidence_measure_bank
def get_confidence_aggregation_bank():
"""Generate a dictionary with confidence aggregation functions.
Supported confidence aggregation functions:
min: minimum
max: maximum
mean: arithmetic mean
prod: product
Returns:
dictionary with functions.
"""
confidence_aggregation_bank = {"mean": lambda x: sum(x) / len(x), "min": min, "max": max}
# python 3.7 and earlier do not have math.prod
if hasattr(math, "prod"):
confidence_aggregation_bank["prod"] = math.prod
else:
import operator
from functools import reduce
confidence_aggregation_bank["prod"] = lambda x: reduce(operator.mul, x, 1)
return confidence_aggregation_bank
class ConfidenceMethodMixin(ABC):
"""Confidence Method Mixin class.
It initializes per-frame confidence method.
"""
def _init_confidence_method(self, confidence_method_cfg: Optional[DictConfig] = None):
"""Initialize per-frame confidence method from config.
"""
# OmegaConf.structured ensures that post_init check is always executed
confidence_method_cfg = OmegaConf.structured(
ConfidenceMethodConfig()
if confidence_method_cfg is None
else ConfidenceMethodConfig(**confidence_method_cfg)
)
# set confidence calculation method
# we suppose that self.blank_id == len(vocabulary)
self.num_tokens = (self.blank_id if hasattr(self, "blank_id") else self._blank_index) + 1
self.alpha = confidence_method_cfg.alpha
# init confidence measure bank
self.confidence_measure_bank = get_confidence_measure_bank()
measure = None
# construct measure_name
measure_name = ""
if confidence_method_cfg.name == "max_prob":
measure_name = "max_prob"
elif confidence_method_cfg.name == "entropy":
measure_name = '_'.join(
[confidence_method_cfg.name, confidence_method_cfg.entropy_type, confidence_method_cfg.entropy_norm]
)
else:
raise ValueError(f"Unsupported `confidence_method_cfg.name`: `{confidence_method_cfg.name}`")
if measure_name not in self.confidence_measure_bank:
raise ValueError(f"Unsupported measure setup: `{measure_name}`")
measure = partial(self.confidence_measure_bank[measure_name], v=self.num_tokens, t=self.alpha)
self._get_confidence = lambda x: measure(torch.nan_to_num(x)).tolist()
class ConfidenceMixin(ABC):
"""Confidence Mixin class.
It is responsible for confidence estimation method initialization and high-level confidence score calculation.
"""
def _init_confidence(self, confidence_cfg: Optional[DictConfig] = None):
"""Initialize confidence-related fields and confidence aggregation function from config.
"""
# OmegaConf.structured ensures that post_init check is always executed
confidence_cfg = OmegaConf.structured(
ConfidenceConfig() if confidence_cfg is None else ConfidenceConfig(**confidence_cfg)
)
self.confidence_method_cfg = confidence_cfg.method_cfg
# extract the config
self.preserve_word_confidence = confidence_cfg.get('preserve_word_confidence', False)
# set preserve_frame_confidence and preserve_token_confidence to True
# if preserve_word_confidence is True
self.preserve_token_confidence = (
confidence_cfg.get('preserve_token_confidence', False) | self.preserve_word_confidence
)
# set preserve_frame_confidence to True if preserve_token_confidence is True
self.preserve_frame_confidence = (
confidence_cfg.get('preserve_frame_confidence', False) | self.preserve_token_confidence
)
self.exclude_blank_from_confidence = confidence_cfg.get('exclude_blank', True)
self.word_confidence_aggregation = confidence_cfg.get('aggregation', "min")
# define aggregation functions
self.confidence_aggregation_bank = get_confidence_aggregation_bank()
self._aggregate_confidence = self.confidence_aggregation_bank[self.word_confidence_aggregation]
# Update preserve frame confidence
if self.preserve_frame_confidence is False:
if self.cfg.strategy in ['greedy', 'greedy_batch']:
self.preserve_frame_confidence = self.cfg.greedy.get('preserve_frame_confidence', False)
# OmegaConf.structured ensures that post_init check is always executed
confidence_method_cfg = OmegaConf.structured(self.cfg.greedy).get('confidence_method_cfg', None)
self.confidence_method_cfg = (
OmegaConf.structured(ConfidenceMethodConfig())
if confidence_method_cfg is None
else OmegaConf.structured(ConfidenceMethodConfig(**confidence_method_cfg))
)
@abstractmethod
def compute_confidence(self, hypotheses_list: List[Hypothesis]) -> List[Hypothesis]:
"""Computes high-level (per-token and/or per-word) confidence scores for a list of hypotheses.
Assumes that `frame_confidence` is present in the hypotheses.
Args:
hypotheses_list: List of Hypothesis.
Returns:
A list of hypotheses with high-level confidence scores.
"""
raise NotImplementedError()
@abstractmethod
def _aggregate_token_confidence(self, hypothesis: Hypothesis) -> List[float]:
"""Implemented by subclass in order to aggregate token confidence to a word-level confidence.
Args:
hypothesis: Hypothesis
Returns:
A list of word-level confidence scores.
"""
raise NotImplementedError()
def _aggregate_token_confidence_chars(self, words: List[str], token_confidence: List[float]) -> List[float]:
"""Implementation of token confidence aggregation for character-based models.
Args:
words: List of words of a hypothesis.
token_confidence: List of token-level confidence scores of a hypothesis.
Returns:
A list of word-level confidence scores.
"""
word_confidence = []
i = 0
for word in words:
word_len = len(word)
word_confidence.append(self._aggregate_confidence(token_confidence[i : i + word_len]))
# we assume that there is exactly one space token between words and exclude it from word confidence
i += word_len + 1
return word_confidence
def _aggregate_token_confidence_subwords_sentencepiece(
self, words: List[str], token_confidence: List[float], token_ids: List[int]
) -> List[float]:
"""Implementation of token confidence aggregation for subword-based models.
**Note**: Only supports SentencePiece-based tokenizers!
Args:
words: List of words of a hypothesis.
token_confidence: List of token-level confidence scores of a hypothesis.
token_ids: List of token ids of a hypothesis.
Returns:
A list of word-level confidence scores.
"""
word_confidence = []
# run only if there are final words
if len(words) > 0:
j = 0
prev_unk = False
prev_underline = False
for i, token_id in enumerate(token_ids):
token = self.decode_ids_to_tokens([int(token_id)])[0]
token_text = self.decode_tokens_to_str([int(token_id)])
# treat `<unk>` as a separate word regardless of the next token
# to match the result of `tokenizer.ids_to_text`
if (token != token_text or prev_unk) and i > j:
# do not add confidence for `▁` if the current token starts with `▁`
# to match the result of `tokenizer.ids_to_text`
if not prev_underline:
word_confidence.append(self._aggregate_confidence(token_confidence[j:i]))
j = i
prev_unk = token == '<unk>'
prev_underline = token == '▁'
if not prev_underline:
word_confidence.append(self._aggregate_confidence(token_confidence[j : len(token_ids)]))
if len(words) != len(word_confidence):
raise RuntimeError(
f"""Something went wrong with word-level confidence aggregation.\n
Please check these values for debugging:\n
len(words): {len(words)},\n
len(word_confidence): {len(word_confidence)},\n
recognized text: `{' '.join(words)}`"""
)
return word_confidence
[end of nemo/collections/asr/parts/utils/asr_confidence_utils.py]
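The measure bank above maps a log-probability vector onto a confidence score in [0, 1]. As a quick illustration, the `max_prob` formula used in `get_confidence_measure_bank` at alpha == 1 can be re-implemented in plain Python (a standalone sketch, not imported from the module); it sends a uniform distribution to ~0 and a near-one-hot distribution to ~1:

```python
import math

# Standalone reimplementation of the bank's "max_prob" measure at alpha == 1:
# conf = (exp(max log-prob) * V - 1) / (V - 1)
def max_prob_confidence(log_probs):
    v = len(log_probs)
    return (math.exp(max(log_probs)) * v - 1) / (v - 1)

v = 8
uniform = [math.log(1.0 / v)] * v
near_one_hot = [math.log(1.0 - 1e-9)] + [math.log(1e-9 / (v - 1))] * (v - 1)
print(max_prob_confidence(uniform))       # ~0.0 (no information in the argmax)
print(max_prob_confidence(near_one_hot))  # ~1.0 (fully confident prediction)
```

The affine rescaling by V is what distinguishes this measure from the raw maximum probability: a uniform distribution yields confidence 0 rather than 1/V.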
[start of nemo/collections/common/parts/adapter_modules.py]
# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from dataclasses import dataclass, is_dataclass
from typing import Any, Optional
from hydra.utils import instantiate
from omegaconf import OmegaConf
from torch import nn as nn
from nemo.collections.common.parts.utils import activation_registry
from nemo.core.classes.mixins import access_mixins, adapter_mixin_strategies
class AdapterModuleUtil(access_mixins.AccessMixin):
"""
Base class of Adapter Modules, providing common functionality to all Adapter Modules.
"""
def setup_adapter_strategy(self, adapter_strategy: Optional[adapter_mixin_strategies.AbstractAdapterStrategy]):
"""
Setup adapter strategy of this class, enabling dynamic change in the way the adapter output is
merged with the input.
When called successfully, will assign the variable `adapter_strategy` to the module.
Args:
adapter_strategy: Can be a None or an implementation of AbstractAdapterStrategy.
"""
# set default adapter strategy
if adapter_strategy is None:
adapter_strategy = self.get_default_strategy_config()
if is_dataclass(adapter_strategy):
adapter_strategy = OmegaConf.structured(adapter_strategy)
OmegaConf.set_struct(adapter_strategy, False)
# The config must have the `_target_` field pointing to the actual adapter strategy class
# which will load that strategy dynamically to this module.
if isinstance(adapter_strategy, dict) or OmegaConf.is_config(adapter_strategy):
self.adapter_strategy = instantiate(adapter_strategy)
elif isinstance(adapter_strategy, adapter_mixin_strategies.AbstractAdapterStrategy):
self.adapter_strategy = adapter_strategy
else:
raise AttributeError(f'`adapter_strategy` provided is invalid : {adapter_strategy}')
def get_default_strategy_config(self) -> 'dataclass':
"""
Returns a default adapter module strategy.
"""
return adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
def adapter_unfreeze(self,):
"""
Sets the requires grad for all parameters in the adapter to True.
This method should be overridden for any custom unfreeze behavior that is required.
For example, if not all params of the adapter should be unfrozen.
"""
for param in self.parameters():
param.requires_grad_(True)
class LinearAdapter(nn.Module, AdapterModuleUtil):
"""
Simple Linear Feedforward Adapter module with LayerNorm and a single hidden layer with an activation function.
Note: The adapter explicitly initializes its final layer with all zeros in order to avoid affecting the
original model when all adapters are disabled.
Args:
in_features: Input dimension of the module. Note that for adapters, input_dim == output_dim.
dim: Hidden dimension of the feed forward network.
activation: Str name for an activation function.
norm_position: Str, can be `pre` or `post`. Defaults to `pre`. Determines whether the normalization
will occur in the first layer or the last layer. Certain architectures may prefer one over the other.
dropout: float value, whether to perform dropout on the output of the last layer of the adapter.
adapter_strategy: By default, ResidualAddAdapterStrategyConfig. An adapter composition function object.
"""
def __init__(
self,
in_features: int,
dim: int,
activation: str = 'swish',
norm_position: str = 'pre',
dropout: float = 0.0,
adapter_strategy: adapter_mixin_strategies.ResidualAddAdapterStrategyConfig = None,
):
super().__init__()
activation = activation_registry[activation]()
# If the activation can be executed in place, do so.
if hasattr(activation, 'inplace'):
activation.inplace = True
assert norm_position in ['pre', 'post']
self.norm_position = norm_position
if norm_position == 'pre':
self.module = nn.Sequential(
nn.LayerNorm(in_features),
nn.Linear(in_features, dim, bias=False),
activation,
nn.Linear(dim, in_features, bias=False),
)
elif norm_position == 'post':
self.module = nn.Sequential(
nn.Linear(in_features, dim, bias=False),
activation,
nn.Linear(dim, in_features, bias=False),
nn.LayerNorm(in_features),
)
if dropout > 0.0:
self.dropout = nn.Dropout(dropout)
else:
self.dropout = None
# Setup adapter strategy
self.setup_adapter_strategy(adapter_strategy)
# reset parameters
self.reset_parameters()
def reset_parameters(self):
# Final layer initializations must be 0
if self.norm_position == 'pre':
self.module[-1].weight.data *= 0
elif self.norm_position == 'post':
self.module[-1].weight.data *= 0
self.module[-1].bias.data *= 0
def forward(self, x):
x = self.module(x)
# Add dropout if available
if self.dropout is not None:
x = self.dropout(x)
return x
@dataclass
class LinearAdapterConfig:
in_features: int
dim: int
activation: str = 'swish'
norm_position: str = 'pre'
dropout: float = 0.0
adapter_strategy: Optional[Any] = adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
_target_: str = "{0}.{1}".format(LinearAdapter.__module__, LinearAdapter.__name__)
[end of nemo/collections/common/parts/adapter_modules.py]
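A property worth noting in `LinearAdapter.reset_parameters` is that zeroing the final projection makes a freshly added adapter a no-op under a residual-add strategy. A dependency-free sketch illustrates why (plain Python lists stand in for torch tensors; `matvec`, `adapter_forward`, and `residual_add` are hypothetical helper names, and ReLU stands in for swish):

```python
# Sketch: with the last projection zero-initialized, adapter(x) == 0,
# so the residual-add merge returns the input unchanged.

def matvec(weight, x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in weight]

def adapter_forward(x, w_in, w_out):
    hidden = [max(0.0, h) for h in matvec(w_in, x)]  # ReLU stand-in for swish
    return matvec(w_out, hidden)

def residual_add(x, adapter_out):
    return [a + b for a, b in zip(x, adapter_out)]

x = [1.0, -2.0, 3.0]
w_in = [[0.5, 0.1, -0.3], [0.2, 0.4, 0.6]]    # in_features=3 -> dim=2
w_out = [[0.0, 0.0], [0.0, 0.0], [0.0, 0.0]]  # zero-initialized final layer
out = residual_add(x, adapter_forward(x, w_in, w_out))
print(out)  # [1.0, -2.0, 3.0] — identical to x
```

This is why the docstring above says a newly initialized adapter does not affect the original model: whatever `w_in` learns is multiplied by zeros until training moves `w_out` away from its initialization.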
[start of nemo/collections/common/tokenizers/en_ja_tokenizers.py]
# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import re
from typing import List
import ipadic
import MeCab
from pangu import spacing
from sacremoses import MosesDetokenizer, MosesPunctNormalizer, MosesTokenizer
class EnJaProcessor:
"""
Tokenizer, Detokenizer and Normalizer utilities for Japanese & English
Args:
lang_id: One of ['en', 'ja'].
"""
def __init__(self, lang_id: str):
self.lang_id = lang_id
self.moses_tokenizer = MosesTokenizer(lang=lang_id)
self.moses_detokenizer = MosesDetokenizer(lang=lang_id)
self.normalizer = MosesPunctNormalizer(
lang=lang_id, pre_replace_unicode_punct=True, post_remove_control_chars=True
)
def detokenize(self, tokens: List[str]) -> str:
"""
Detokenizes a list of tokens
Args:
tokens: list of strings as tokens
Returns:
detokenized Japanese or English string
"""
return self.moses_detokenizer.detokenize(tokens)
def tokenize(self, text) -> str:
"""
Tokenizes text using Moses. Returns a string of tokens.
"""
tokens = self.moses_tokenizer.tokenize(text)
return ' '.join(tokens)
def normalize(self, text) -> str:
# Normalization doesn't handle Japanese periods correctly:
# '。' becomes '.'.
if self.lang_id == 'en':
return self.normalizer.normalize(text)
else:
return text
class JaMecabProcessor:
"""
Tokenizer, Detokenizer and Normalizer utilities for Japanese using MeCab
"""
def __init__(self):
self.mecab_tokenizer = MeCab.Tagger(ipadic.MECAB_ARGS + " -Owakati")
def detokenize(self, text: List[str]) -> str:
RE_WS_IN_FW = re.compile(
r'([\u2018\u2019\u201c\u201d\u2e80-\u312f\u3200-\u32ff\u3400-\u4dbf\u4e00-\u9fff\uf900-\ufaff\uff00-\uffef])\s+(?=[\u2018\u2019\u201c\u201d\u2e80-\u312f\u3200-\u32ff\u3400-\u4dbf\u4e00-\u9fff\uf900-\ufaff\uff00-\uffef])'
)
detokenize = lambda s: spacing(RE_WS_IN_FW.sub(r'\1', s)).strip()
return detokenize(' '.join(text))
def tokenize(self, text) -> str:
"""
Tokenizes text using MeCab. Returns a string of tokens.
"""
return self.mecab_tokenizer.parse(text).strip()
def normalize(self, text) -> str:
return text
[end of nemo/collections/common/tokenizers/en_ja_tokenizers.py]
[start of nemo/collections/nlp/models/machine_translation/mt_enc_dec_config.py]
# Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from dataclasses import dataclass
from typing import Any, Optional, Tuple
from omegaconf.omegaconf import MISSING
from nemo.collections.nlp.data.machine_translation.machine_translation_dataset import TranslationDataConfig
from nemo.collections.nlp.models.enc_dec_nlp_model import EncDecNLPModelConfig
from nemo.collections.nlp.modules.common.token_classifier import TokenClassifierConfig
from nemo.collections.nlp.modules.common.tokenizer_utils import TokenizerConfig
from nemo.collections.nlp.modules.common.transformer.transformer import (
NeMoTransformerConfig,
NeMoTransformerEncoderConfig,
)
from nemo.collections.nlp.modules.common.transformer.transformer_bottleneck import (
NeMoTransformerBottleneckDecoderConfig,
NeMoTransformerBottleneckEncoderConfig,
)
from nemo.core.config.modelPT import OptimConfig, SchedConfig
@dataclass
class MTSchedConfig(SchedConfig):
name: str = 'InverseSquareRootAnnealing'
warmup_ratio: Optional[float] = None
last_epoch: int = -1
# TODO: Refactor this dataclass to support more optimizers (it pins the optimizer to Adam-like optimizers).
@dataclass
class MTOptimConfig(OptimConfig):
name: str = 'adam'
lr: float = 1e-3
betas: Tuple[float, float] = (0.9, 0.98)
weight_decay: float = 0.0
sched: Optional[MTSchedConfig] = MTSchedConfig()
@dataclass
class MTEncDecModelConfig(EncDecNLPModelConfig):
# machine translation configurations
num_val_examples: int = 3
num_test_examples: int = 3
max_generation_delta: int = 10
label_smoothing: Optional[float] = 0.0
beam_size: int = 4
len_pen: float = 0.0
src_language: Any = 'en' # Any = str or List[str]
tgt_language: Any = 'en' # Any = str or List[str]
find_unused_parameters: Optional[bool] = True
shared_tokenizer: Optional[bool] = True
multilingual: Optional[bool] = False
preproc_out_dir: Optional[str] = None
validate_input_ids: Optional[bool] = True
shared_embeddings: bool = False
# network architecture configuration
encoder_tokenizer: Any = MISSING
encoder: Any = MISSING
decoder_tokenizer: Any = MISSING
decoder: Any = MISSING
head: TokenClassifierConfig = TokenClassifierConfig(log_softmax=True)
# dataset configurations
train_ds: Optional[TranslationDataConfig] = TranslationDataConfig(
src_file_name=MISSING,
tgt_file_name=MISSING,
tokens_in_batch=512,
clean=True,
shuffle=True,
cache_ids=False,
use_cache=False,
)
validation_ds: Optional[TranslationDataConfig] = TranslationDataConfig(
src_file_name=MISSING,
tgt_file_name=MISSING,
tokens_in_batch=512,
clean=False,
shuffle=False,
cache_ids=False,
use_cache=False,
)
test_ds: Optional[TranslationDataConfig] = TranslationDataConfig(
src_file_name=MISSING,
tgt_file_name=MISSING,
tokens_in_batch=512,
clean=False,
shuffle=False,
cache_ids=False,
use_cache=False,
)
optim: Optional[OptimConfig] = MTOptimConfig()
@dataclass
class AAYNBaseConfig(MTEncDecModelConfig):
# Attention is All You Need Base Configuration
encoder_tokenizer: TokenizerConfig = TokenizerConfig(library='yttm')
decoder_tokenizer: TokenizerConfig = TokenizerConfig(library='yttm')
encoder: NeMoTransformerEncoderConfig = NeMoTransformerEncoderConfig(
library='nemo',
model_name=None,
pretrained=False,
hidden_size=512,
inner_size=2048,
num_layers=6,
num_attention_heads=8,
ffn_dropout=0.1,
attn_score_dropout=0.1,
attn_layer_dropout=0.1,
)
decoder: NeMoTransformerConfig = NeMoTransformerConfig(
library='nemo',
model_name=None,
pretrained=False,
hidden_size=512,
inner_size=2048,
num_layers=6,
num_attention_heads=8,
ffn_dropout=0.1,
attn_score_dropout=0.1,
attn_layer_dropout=0.1,
)
@dataclass
class MTBottleneckModelConfig(AAYNBaseConfig):
model_type: str = 'nll'
min_logv: float = -6
latent_size: int = -1 # -1 takes the value of the encoder hidden size
non_recon_warmup_batches: int = 200000
recon_per_token: bool = True
log_timing: bool = True
encoder: NeMoTransformerBottleneckEncoderConfig = NeMoTransformerBottleneckEncoderConfig(
library='nemo',
model_name=None,
pretrained=False,
hidden_size=512,
inner_size=2048,
num_layers=6,
num_attention_heads=8,
ffn_dropout=0.1,
attn_score_dropout=0.1,
attn_layer_dropout=0.1,
arch='seq2seq',
hidden_steps=32,
hidden_blocks=1,
hidden_init_method='params',
)
decoder: NeMoTransformerBottleneckDecoderConfig = NeMoTransformerBottleneckDecoderConfig(
library='nemo',
model_name=None,
pretrained=False,
inner_size=2048,
num_layers=6,
num_attention_heads=8,
ffn_dropout=0.1,
attn_score_dropout=0.1,
attn_layer_dropout=0.1,
arch='seq2seq',
)
[end of nemo/collections/nlp/models/machine_translation/mt_enc_dec_config.py]
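The configs above follow a common pattern: a base dataclass supplies defaults, and subclasses override entire nested sub-configs (as `AAYNBaseConfig` does for `encoder`/`decoder`). A small stdlib-only sketch of that pattern (hypothetical field names, no OmegaConf) shows how the override mechanics work:

```python
from dataclasses import dataclass, field, replace

# Stdlib-only sketch of the config pattern above: a base dataclass holds
# defaults, a subclass overrides a nested sub-config, and `replace` gives
# per-experiment overrides. Field names are illustrative, not NeMo's.

@dataclass
class EncoderConfig:
    hidden_size: int = 512
    num_layers: int = 6

@dataclass
class ModelConfig:
    beam_size: int = 4
    encoder: EncoderConfig = field(default_factory=EncoderConfig)

@dataclass
class BigModelConfig(ModelConfig):
    # subclass swaps in a larger encoder while inheriting everything else
    encoder: EncoderConfig = field(
        default_factory=lambda: EncoderConfig(hidden_size=1024, num_layers=12)
    )

base = ModelConfig()
big = BigModelConfig()
tuned = replace(big, beam_size=8)  # per-run override without mutation
print(base.encoder.hidden_size, big.encoder.hidden_size, tuned.beam_size)  # 512 1024 8
```

In NeMo itself the dataclasses are additionally wrapped by `OmegaConf.structured`, which layers YAML/CLI overrides on top of the same defaults.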
[start of nemo/collections/nlp/models/token_classification/punctuation_capitalization_config.py]
# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from dataclasses import dataclass
from typing import Any, Dict, Optional
from omegaconf.omegaconf import MISSING, DictConfig, OmegaConf, open_dict
from nemo.collections.common.parts.adapter_modules import LinearAdapterConfig
from nemo.collections.nlp.data.token_classification.punctuation_capitalization_dataset import (
PunctuationCapitalizationEvalDataConfig,
PunctuationCapitalizationTrainDataConfig,
legacy_data_config_to_new_data_config,
)
from nemo.core.config import TrainerConfig
from nemo.core.config.modelPT import NemoConfig
from nemo.utils.exp_manager import ExpManagerConfig
@dataclass
class FreezeConfig:
is_enabled: bool = False
"""Freeze audio encoder weight and add Conformer Layers on top of it"""
d_model: Optional[int] = 256
"""`d_model` parameter of ``ConformerLayer``"""
d_ff: Optional[int] = 1024
"""``d_ff`` parameter of ``ConformerLayer``"""
num_layers: Optional[int] = 8
"""``num_layers`` number of ``ConformerLayer`` modules to add on top of audio encoder"""
@dataclass
class AdapterConfig:
config: Optional[LinearAdapterConfig] = None
"""Linear adapter config see ``collections.common.parts.LinearAdapterConfig``"""
enable: bool = False
"""Use adapters for audio encoder"""
@dataclass
class FusionConfig:
num_layers: Optional[int] = 4
""""Number of layers to use in fusion"""
num_attention_heads: Optional[int] = 4
"""Number of attention heads to use in fusion"""
inner_size: Optional[int] = 2048
"""Fusion inner size"""
@dataclass
class AudioEncoderConfig:
pretrained_model: str = MISSING
"""A configuration for restoring pretrained audio encoder"""
freeze: Optional[FreezeConfig] = None
adapter: Optional[AdapterConfig] = None
fusion: Optional[FusionConfig] = None
@dataclass
class TokenizerConfig:
"""A structure and default values of source text tokenizer."""
vocab_file: Optional[str] = None
"""A path to vocabulary file which is used in ``'word'``, ``'char'``, and HuggingFace tokenizers"""
tokenizer_name: str = MISSING
"""A name of the tokenizer used for tokenization of source sequences. Possible options are ``'sentencepiece'``,
``'word'``, ``'char'``, HuggingFace tokenizers (e.g. ``'bert-base-uncased'``). For more options see function
``nemo.collections.nlp.modules.common.get_tokenizer``. The tokenizer must have properties ``cls_id``, ``pad_id``,
``sep_id``, ``unk_id``."""
special_tokens: Optional[Dict[str, str]] = None
"""A dictionary with special tokens passed to constructors of ``'char'``, ``'word'``, ``'sentencepiece'``, and
various HuggingFace tokenizers."""
tokenizer_model: Optional[str] = None
"""A path to a tokenizer model required for ``'sentencepiece'`` tokenizer."""
@dataclass
class LanguageModelConfig:
"""
A structure and default values of language model configuration of punctuation and capitalization model. BERT like
HuggingFace models are supported. Provide a valid ``pretrained_model_name`` and, optionally, you may
reinitialize model via ``config_file`` or ``config``.
Alternatively you can initialize the language model using ``lm_checkpoint``.
This config is a part of :class:`PunctuationCapitalizationModelConfig` config.
"""
pretrained_model_name: str = MISSING
"""A mandatory parameter containing name of HuggingFace pretrained model. For example, ``'bert-base-uncased'``."""
config_file: Optional[str] = None
"""A path to a file with HuggingFace model config which is used to reinitialize language model."""
config: Optional[Dict] = None
"""A HuggingFace config which is used to reinitialize language model."""
lm_checkpoint: Optional[str] = None
"""A path to a ``torch`` checkpoint of a language model."""
@dataclass
class HeadConfig:
"""
A structure and default values of configuration of capitalization or punctuation model head. This config defines a
multilayer perceptron which is applied to output of a language model. Number of units in the hidden layer is equal
to the dimension of the language model.
This config is a part of :class:`PunctuationCapitalizationModelConfig` config.
"""
num_fc_layers: int = 1
"""A number of hidden layers in a multilayer perceptron."""
fc_dropout: float = 0.1
"""A dropout used in an MLP."""
activation: str = 'relu'
"""An activation used in hidden layers."""
use_transformer_init: bool = True
"""Whether to initialize the weights of the classifier head with the approach that was used for language model
initialization."""
@dataclass
class ClassLabelsConfig:
"""
A structure and default values of a mandatory part of config which contains names of files which are saved in .nemo
checkpoint. These files can also be used for passing label vocabulary to the model. For using them as label
vocabularies you will need to provide paths to these files in the parameter
``model.common_dataset_parameters.label_vocab_dir``. Each line in a labels file
contains 1 label. The values are sorted, ``<line number>==<label id>``, starting from ``0``. A label with ``0`` id
must contain neutral label which must be equal to ``model.common_dataset_parameters.pad_label``.
This config is a part of :class:`~CommonDatasetParametersConfig`.
"""
punct_labels_file: str = MISSING
"""A name of punctuation labels file."""
capit_labels_file: str = MISSING
"""A name of capitalization labels file."""
@dataclass
class CommonDatasetParametersConfig:
"""
A structure and default values of common dataset parameters config which includes label and loss mask information.
If you omit parameters ``punct_label_ids``, ``capit_label_ids``, ``label_vocab_dir``, then labels will be inferred
from a training dataset or loaded from a checkpoint.
Parameters ``ignore_extra_tokens`` and ``ignore_start_end`` are responsible for forming loss mask. A loss mask
defines on which tokens loss is computed.
This parameter is a part of config :class:`~PunctuationCapitalizationModelConfig`.
"""
pad_label: str = MISSING
"""A mandatory parameter which should contain label used for punctuation and capitalization label padding. It
also serves as a neutral label for both punctuation and capitalization. If any of ``punct_label_ids``,
``capit_label_ids`` parameters is provided, then ``pad_label`` must have ``0`` id in them. In addition, if ``label_vocab_dir``
is provided, then ``pad_label`` must be on the first lines in files ``class_labels.punct_labels_file`` and
``class_labels.capit_labels_file``."""
ignore_extra_tokens: bool = False
"""Whether to compute loss on not first tokens in words. If this parameter is ``True``, then loss mask is ``False``
for all tokens in a word except the first."""
ignore_start_end: bool = True
"""If ``False``, then loss is computed on [CLS] and [SEP] tokens."""
punct_label_ids: Optional[Dict[str, int]] = None
"""A dictionary with punctuation label ids. ``pad_label`` must have ``0`` id in this dictionary. You can omit this
parameter and pass label ids through ``class_labels.punct_labels_file`` or let the model infer label ids from
dataset or load them from checkpoint."""
capit_label_ids: Optional[Dict[str, int]] = None
"""A dictionary with capitalization label ids. ``pad_label`` must have ``0`` id in this dictionary. You can omit
this parameter and pass label ids through ``class_labels.capit_labels_file`` or let the model infer label ids from
dataset or load them from checkpoint."""
label_vocab_dir: Optional[str] = None
"""A path to directory which contains class labels files. See :class:`ClassLabelsConfig`. If this parameter is
provided, then labels will be loaded from files which are located in ``label_vocab_dir`` and have names specified
in ``model.class_labels`` configuration section. A label specified in ``pad_label`` has to be on the first lines
of ``model.class_labels`` files."""
@dataclass
class PunctuationCapitalizationModelConfig:
"""
A configuration of
:class:`~nemo.collections.nlp.models.token_classification.punctuation_capitalization_model.PunctuationCapitalizationModel`
model.
See an example of model config in
`nemo/examples/nlp/token_classification/conf/punctuation_capitalization_config.yaml
<https://github.com/NVIDIA/NeMo/blob/main/examples/nlp/token_classification/conf/punctuation_capitalization_config.yaml>`_
This config is a part of :class:`~PunctuationCapitalizationConfig`.
"""
class_labels: ClassLabelsConfig = ClassLabelsConfig()
"""A mandatory parameter containing a dictionary with names of label id files used in .nemo checkpoints.
These file names can also be used for passing label vocabularies to the model. If you wish to use ``class_labels``
for passing vocabularies, please provide path to vocabulary files in
``model.common_dataset_parameters.label_vocab_dir`` parameter."""
common_dataset_parameters: Optional[CommonDatasetParametersConfig] = CommonDatasetParametersConfig()
"""Label ids and loss mask information information."""
train_ds: Optional[PunctuationCapitalizationTrainDataConfig] = None
"""A configuration for creating training dataset and data loader."""
validation_ds: Optional[PunctuationCapitalizationEvalDataConfig] = None
"""A configuration for creating validation datasets and data loaders."""
test_ds: Optional[PunctuationCapitalizationEvalDataConfig] = None
"""A configuration for creating test datasets and data loaders."""
punct_head: HeadConfig = HeadConfig()
"""A configuration for creating punctuation MLP head that is applied to a language model outputs."""
capit_head: HeadConfig = HeadConfig()
"""A configuration for creating capitalization MLP head that is applied to a language model outputs."""
tokenizer: Any = TokenizerConfig()
"""A configuration for source text tokenizer."""
language_model: LanguageModelConfig = LanguageModelConfig()
"""A configuration of a BERT-like language model which serves as a model body."""
optim: Optional[Any] = None
"""A configuration of optimizer and learning rate scheduler. There is much variability in such config. For
description see `Optimizers
<https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/core/core.html#optimizers>`_ section in
documentation and the `primer <https://github.com/NVIDIA/NeMo/blob/main/tutorials/00_NeMo_Primer.ipynb>`_ tutorial."""
@dataclass
class PunctuationCapitalizationLexicalAudioModelConfig(PunctuationCapitalizationModelConfig):
"""
A configuration of
:class:`~nemo.collections.nlp.models.token_classification.punctuation_lexical_audio_capitalization_model.PunctuationCapitalizationLexicalAudioModel`
model.
See an example of model config in
`nemo/examples/nlp/token_classification/conf/punctuation_capitalization_config.yaml
<https://github.com/NVIDIA/NeMo/blob/main/examples/nlp/token_classification/conf/punctuation_capitalization_lexical_audio_config.yaml>`_
Audio encoder can be frozen during training with ``freeze_audio_encoder`` parameter.
Adapter can be added to audio encoder with ``use_adapters`` and ``adapter_config`` parameters.
More conformer layers can be added on top of pretrained audio encoder with ``frozen_conf_d_model``, ``frozen_conf_d_ff`` and ``frozen_conf_num_layers`` parameters.
"""
train_ds: Optional[PunctuationCapitalizationTrainDataConfig] = None
"""A configuration for creating training dataset and data loader."""
validation_ds: Optional[PunctuationCapitalizationEvalDataConfig] = None
"""A configuration for creating validation datasets and data loaders."""
test_ds: Optional[PunctuationCapitalizationEvalDataConfig] = None
"""A configuration for creating test datasets and data loaders."""
audio_encoder: Optional[AudioEncoderConfig] = None
restore_lexical_encoder_from: Optional[str] = None
""""Path to .nemo checkpoint to load weights from""" # add more comments
use_weighted_loss: Optional[bool] = False
"""If set to ``True`` CrossEntropyLoss will be weighted"""
@dataclass
class PunctuationCapitalizationConfig(NemoConfig):
"""
A config for punctuation model training and testing.
See an example of full config in
`nemo/examples/nlp/token_classification/conf/punctuation_capitalization_config.yaml
<https://github.com/NVIDIA/NeMo/blob/main/examples/nlp/token_classification/conf/punctuation_capitalization_config.yaml>`_
"""
pretrained_model: Optional[str] = None
"""Can be an NVIDIA's NGC cloud model or a path to a .nemo checkpoint. You can get list of possible cloud options
by calling method
:func:`~nemo.collections.nlp.models.token_classification.punctuation_capitalization_model.PunctuationCapitalizationModel.list_available_models`.
"""
name: Optional[str] = 'Punctuation_and_Capitalization'
"""A name of the model. Used for naming output directories and ``.nemo`` checkpoints."""
do_training: bool = True
"""Whether to perform training of the model."""
do_testing: bool = False
"""Whether ot perform testing of the model."""
model: PunctuationCapitalizationModelConfig = PunctuationCapitalizationModelConfig()
"""A configuration for the
:class:`~nemo.collections.nlp.models.token_classification.punctuation_capitalization_model.PunctuationCapitalizationModel`
model."""
trainer: Optional[TrainerConfig] = TrainerConfig()
"""Contains ``Trainer`` Lightning class constructor parameters."""
exp_manager: Optional[ExpManagerConfig] = ExpManagerConfig(name=name, files_to_copy=[])
"""A configuration with various NeMo training options such as output directories, resuming from checkpoint,
tensorboard and W&B logging, and so on. For possible options see :ref:`exp-manager-label`."""
@dataclass
class PunctuationCapitalizationLexicalAudioConfig(PunctuationCapitalizationConfig):
model: PunctuationCapitalizationLexicalAudioModelConfig = PunctuationCapitalizationLexicalAudioModelConfig()
def is_legacy_model_config(model_cfg: DictConfig) -> bool:
"""
Test if model config is old style config. Old style configs are configs which were used before
``common_dataset_parameters`` item was added. Old style datasets use ``dataset`` instead of
``common_dataset_parameters``, ``batch_size`` instead of ``tokens_in_batch``. Old style configs do not support
tarred datasets.
Args:
model_cfg: model configuration
Returns:
whether ``model_config`` is legacy
"""
return 'common_dataset_parameters' not in model_cfg
def legacy_model_config_to_new_model_config(model_cfg: DictConfig) -> DictConfig:
"""
Transform old style config into
:class:`~nemo.collections.nlp.models.token_classification.punctuation_capitalization_config.PunctuationCapitalizationModelConfig`.
Old style configs are configs which were used before ``common_dataset_parameters`` item was added. Old style
datasets use ``dataset`` instead of ``common_dataset_parameters``, ``batch_size`` instead of ``tokens_in_batch``.
Old style configs do not support tarred datasets.
Args:
model_cfg: old style config
Returns:
model config which follows dataclass
:class:`~nemo.collections.nlp.models.token_classification.punctuation_capitalization_config.PunctuationCapitalizationModelConfig`
"""
train_ds = model_cfg.get('train_ds')
validation_ds = model_cfg.get('validation_ds')
test_ds = model_cfg.get('test_ds')
dataset = model_cfg.dataset
punct_head_config = model_cfg.get('punct_head', {})
capit_head_config = model_cfg.get('capit_head', {})
omega_conf = OmegaConf.structured(
PunctuationCapitalizationModelConfig(
class_labels=model_cfg.class_labels,
common_dataset_parameters=CommonDatasetParametersConfig(
pad_label=dataset.pad_label,
ignore_extra_tokens=dataset.get(
'ignore_extra_tokens', CommonDatasetParametersConfig.ignore_extra_tokens
),
ignore_start_end=dataset.get('ignore_start_end', CommonDatasetParametersConfig.ignore_start_end),
punct_label_ids=model_cfg.punct_label_ids,
capit_label_ids=model_cfg.capit_label_ids,
),
train_ds=None
if train_ds is None
else legacy_data_config_to_new_data_config(train_ds, dataset, train=True),
validation_ds=None
if validation_ds is None
else legacy_data_config_to_new_data_config(validation_ds, dataset, train=False),
test_ds=None if test_ds is None else legacy_data_config_to_new_data_config(test_ds, dataset, train=False),
punct_head=HeadConfig(
num_fc_layers=punct_head_config.get('punct_num_fc_layers', HeadConfig.num_fc_layers),
fc_dropout=punct_head_config.get('fc_dropout', HeadConfig.fc_dropout),
activation=punct_head_config.get('activation', HeadConfig.activation),
use_transformer_init=punct_head_config.get('use_transformer_init', HeadConfig.use_transformer_init),
),
capit_head=HeadConfig(
num_fc_layers=capit_head_config.get('capit_num_fc_layers', HeadConfig.num_fc_layers),
fc_dropout=capit_head_config.get('fc_dropout', HeadConfig.fc_dropout),
activation=capit_head_config.get('activation', HeadConfig.activation),
use_transformer_init=capit_head_config.get('use_transformer_init', HeadConfig.use_transformer_init),
),
tokenizer=model_cfg.tokenizer,
language_model=model_cfg.language_model,
optim=model_cfg.optim,
)
)
with open_dict(omega_conf):
retain_during_legacy_conversion = model_cfg.get('retain_during_legacy_conversion', {})
for key in retain_during_legacy_conversion.keys():
omega_conf[key] = retain_during_legacy_conversion[key]
return omega_conf
[end of nemo/collections/nlp/models/token_classification/punctuation_capitalization_config.py]
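The legacy-config check above keys on a single section name, and the migration function then remaps old field names onto the new dataclass layout. A minimal dict-based sketch of that detect-then-migrate flow (plain dicts instead of `DictConfig`, and only a couple of representative fields mapped) looks like this:

```python
# Sketch of legacy-config detection and migration using plain dicts in
# place of OmegaConf DictConfig; only a few fields are mapped here.

def is_legacy_model_config(model_cfg):
    # old-style configs predate the 'common_dataset_parameters' section
    return 'common_dataset_parameters' not in model_cfg

def migrate(model_cfg):
    if not is_legacy_model_config(model_cfg):
        return model_cfg
    dataset = model_cfg.get('dataset', {})
    return {
        'common_dataset_parameters': {
            'pad_label': dataset.get('pad_label'),
            'ignore_start_end': dataset.get('ignore_start_end', True),
        },
        # old-style 'batch_size' corresponds to new-style 'tokens_in_batch'
        'train_ds': {
            'tokens_in_batch': model_cfg.get('train_ds', {}).get('batch_size'),
        },
    }

legacy = {'dataset': {'pad_label': 'O'}, 'train_ds': {'batch_size': 32}}
new = migrate(legacy)
print(is_legacy_model_config(legacy), is_legacy_model_config(new))  # True False
```

The real `legacy_model_config_to_new_model_config` does the same thing but emits an `OmegaConf.structured` config validated against `PunctuationCapitalizationModelConfig`.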
[start of nemo/collections/nlp/modules/common/megatron/megatron_encoders.py]
# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Transformer based language model."""
from nemo.collections.nlp.modules.common.megatron.megatron_perceiver_encoders import MegatronPerceiverEncoderModule
from nemo.collections.nlp.modules.common.megatron.megatron_transformer_encoder import MegatronTransformerEncoderModule
from nemo.collections.nlp.modules.common.megatron.retrieval_transformer import (
MegatronRetrievalTransformerEncoderModule,
)
from nemo.collections.nlp.modules.common.megatron.utils import (
ApexGuardDefaults,
init_method_normal,
scaled_init_method_normal,
)
try:
from apex.transformer.enums import AttnMaskType, ModelType
HAVE_APEX = True
except (ImportError, ModuleNotFoundError):
HAVE_APEX = False
# fake missing classes with None attributes
AttnMaskType = ApexGuardDefaults()
ModelType = ApexGuardDefaults()
try:
from megatron.core import ModelParallelConfig
HAVE_MEGATRON_CORE = True
except (ImportError, ModuleNotFoundError):
ModelParallelConfig = ApexGuardDefaults
HAVE_MEGATRON_CORE = False
__all__ = []
AVAILABLE_ENCODERS = ["transformer", "perceiver", "retro"]
def get_encoder_model(
config: ModelParallelConfig,
arch,
hidden_size,
ffn_hidden_size,
num_layers,
num_attention_heads,
apply_query_key_layer_scaling=False,
kv_channels=None,
init_method=None,
scaled_init_method=None,
encoder_attn_mask_type=AttnMaskType.padding,
pre_process=True,
post_process=True,
init_method_std=0.02,
megatron_amp_O2=False,
hidden_dropout=0.1,
attention_dropout=0.1,
ffn_dropout=0.0,
precision=16,
fp32_residual_connection=False,
activations_checkpoint_method=None,
activations_checkpoint_num_layers=1,
activations_checkpoint_granularity=None,
layernorm_epsilon=1e-5,
bias_activation_fusion=True,
bias_dropout_add_fusion=True,
masked_softmax_fusion=True,
persist_layer_norm=False,
openai_gelu=False,
activation="gelu",
onnx_safe=False,
bias=True,
normalization="layernorm",
headscale=False,
transformer_block_type="pre_ln",
hidden_steps=32,
parent_model_type=ModelType.encoder_or_decoder,
layer_type=None,
chunk_size=64,
num_self_attention_per_cross_attention=1,
    layer_number_offset=0,  # this is used only for attention norm_factor scaling
megatron_legacy=False,
normalize_attention_scores=True,
sequence_parallel=False,
num_moe_experts=1,
moe_frequency=1,
moe_dropout=0.0,
    turn_off_rop=False,  # turn off the rotary (RoPE) positional embedding
version=1, # model version
position_embedding_type='learned_absolute',
use_flash_attention=False,
):
"""Build language model and return along with the key to save."""
if kv_channels is None:
assert (
hidden_size % num_attention_heads == 0
), 'hidden_size must be divisible by num_attention_heads if kv_channels is None'
kv_channels = hidden_size // num_attention_heads
if init_method is None:
init_method = init_method_normal(init_method_std)
if scaled_init_method is None:
scaled_init_method = scaled_init_method_normal(init_method_std, num_layers)
if arch == "transformer":
# Language encoder.
encoder = MegatronTransformerEncoderModule(
config=config,
init_method=init_method,
output_layer_init_method=scaled_init_method,
hidden_size=hidden_size,
num_layers=num_layers,
num_attention_heads=num_attention_heads,
apply_query_key_layer_scaling=apply_query_key_layer_scaling,
kv_channels=kv_channels,
ffn_hidden_size=ffn_hidden_size,
encoder_attn_mask_type=encoder_attn_mask_type,
pre_process=pre_process,
post_process=post_process,
megatron_amp_O2=megatron_amp_O2,
hidden_dropout=hidden_dropout,
attention_dropout=attention_dropout,
ffn_dropout=ffn_dropout,
precision=precision,
fp32_residual_connection=fp32_residual_connection,
activations_checkpoint_method=activations_checkpoint_method,
activations_checkpoint_num_layers=activations_checkpoint_num_layers,
activations_checkpoint_granularity=activations_checkpoint_granularity,
layernorm_epsilon=layernorm_epsilon,
bias_activation_fusion=bias_activation_fusion,
bias_dropout_add_fusion=bias_dropout_add_fusion,
masked_softmax_fusion=masked_softmax_fusion,
persist_layer_norm=persist_layer_norm,
openai_gelu=openai_gelu,
onnx_safe=onnx_safe,
activation=activation,
bias=bias,
normalization=normalization,
transformer_block_type=transformer_block_type,
headscale=headscale,
parent_model_type=parent_model_type,
megatron_legacy=megatron_legacy,
normalize_attention_scores=normalize_attention_scores,
num_moe_experts=num_moe_experts,
moe_frequency=moe_frequency,
moe_dropout=moe_dropout,
position_embedding_type=position_embedding_type,
use_flash_attention=use_flash_attention,
)
elif arch == "retro":
encoder = MegatronRetrievalTransformerEncoderModule(
config=config,
init_method=init_method,
output_layer_init_method=scaled_init_method,
hidden_size=hidden_size,
num_layers=num_layers,
num_attention_heads=num_attention_heads,
apply_query_key_layer_scaling=apply_query_key_layer_scaling,
kv_channels=kv_channels,
layer_type=layer_type,
ffn_hidden_size=ffn_hidden_size,
pre_process=pre_process,
post_process=post_process,
megatron_amp_O2=megatron_amp_O2,
hidden_dropout=hidden_dropout,
attention_dropout=attention_dropout,
precision=precision,
fp32_residual_connection=fp32_residual_connection,
activations_checkpoint_method=activations_checkpoint_method,
activations_checkpoint_num_layers=activations_checkpoint_num_layers,
activations_checkpoint_granularity=activations_checkpoint_granularity,
layernorm_epsilon=layernorm_epsilon,
bias_activation_fusion=bias_activation_fusion,
bias_dropout_add_fusion=bias_dropout_add_fusion,
masked_softmax_fusion=masked_softmax_fusion,
persist_layer_norm=persist_layer_norm,
openai_gelu=openai_gelu,
onnx_safe=onnx_safe,
activation=activation,
bias=bias,
normalization=normalization,
transformer_block_type=transformer_block_type,
parent_model_type=parent_model_type,
chunk_size=chunk_size,
layer_number_offset=layer_number_offset,
megatron_legacy=megatron_legacy,
normalize_attention_scores=normalize_attention_scores,
turn_off_rop=turn_off_rop,
version=version,
)
elif arch == "perceiver":
encoder = MegatronPerceiverEncoderModule(
config=config,
init_method=init_method,
output_layer_init_method=scaled_init_method,
hidden_size=hidden_size,
num_layers=num_layers,
num_attention_heads=num_attention_heads,
apply_query_key_layer_scaling=apply_query_key_layer_scaling,
kv_channels=kv_channels,
ffn_hidden_size=ffn_hidden_size,
encoder_attn_mask_type=encoder_attn_mask_type,
pre_process=pre_process,
post_process=post_process,
megatron_amp_O2=megatron_amp_O2,
hidden_dropout=hidden_dropout,
attention_dropout=attention_dropout,
ffn_dropout=ffn_dropout,
precision=precision,
fp32_residual_connection=fp32_residual_connection,
activations_checkpoint_method=activations_checkpoint_method,
activations_checkpoint_num_layers=activations_checkpoint_num_layers,
activations_checkpoint_granularity=activations_checkpoint_granularity,
layernorm_epsilon=layernorm_epsilon,
bias_activation_fusion=bias_activation_fusion,
bias_dropout_add_fusion=bias_dropout_add_fusion,
masked_softmax_fusion=masked_softmax_fusion,
persist_layer_norm=persist_layer_norm,
openai_gelu=openai_gelu,
onnx_safe=onnx_safe,
activation=activation,
bias=bias,
normalization=normalization,
transformer_block_type=transformer_block_type,
headscale=headscale,
parent_model_type=parent_model_type,
hidden_steps=hidden_steps,
num_self_attention_per_cross_attention=num_self_attention_per_cross_attention,
megatron_legacy=megatron_legacy,
normalize_attention_scores=normalize_attention_scores,
)
else:
raise ValueError(f"Unknown encoder arch = {arch}. Available encoder arch = {AVAILABLE_ENCODERS}")
return encoder
[end of nemo/collections/nlp/modules/common/megatron/megatron_encoders.py]
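When `kv_channels` is not supplied, `get_encoder_model` above derives it as the per-head size `hidden_size // num_attention_heads`, asserting divisibility first. A minimal standalone sketch of that fallback (the helper name `resolve_kv_channels` is hypothetical, not part of NeMo):

```python
def resolve_kv_channels(hidden_size, num_attention_heads, kv_channels=None):
    """Mirror get_encoder_model's fallback: default kv_channels to the per-head size."""
    if kv_channels is None:
        # hidden_size must split evenly across attention heads
        assert (
            hidden_size % num_attention_heads == 0
        ), 'hidden_size must be divisible by num_attention_heads if kv_channels is None'
        kv_channels = hidden_size // num_attention_heads
    return kv_channels


print(resolve_kv_channels(1024, 16))  # per-head size: 64
print(resolve_kv_channels(1024, 16, kv_channels=128))  # explicit value wins: 128
```

An explicitly passed `kv_channels` is returned unchanged, so callers can decouple the attention key/value width from the hidden size when they need to.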
[start of nemo/collections/tts/models/fastpitch.py]
# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import contextlib
from dataclasses import dataclass
from pathlib import Path
from typing import List, Optional
import torch
from hydra.utils import instantiate
from omegaconf import DictConfig, OmegaConf, open_dict
from pytorch_lightning import Trainer
from pytorch_lightning.loggers import TensorBoardLogger
from nemo.collections.common.parts.preprocessing import parsers
from nemo.collections.tts.losses.aligner_loss import BinLoss, ForwardSumLoss
from nemo.collections.tts.losses.fastpitchloss import DurationLoss, EnergyLoss, MelLoss, PitchLoss
from nemo.collections.tts.models.base import SpectrogramGenerator
from nemo.collections.tts.modules.fastpitch import FastPitchModule
from nemo.collections.tts.parts.mixins import FastPitchAdapterModelMixin
from nemo.collections.tts.parts.utils.callbacks import LoggingCallback
from nemo.collections.tts.parts.utils.helpers import (
batch_from_ragged,
g2p_backward_compatible_support,
plot_alignment_to_numpy,
plot_spectrogram_to_numpy,
process_batch,
sample_tts_input,
)
from nemo.core.classes import Exportable
from nemo.core.classes.common import PretrainedModelInfo, typecheck
from nemo.core.neural_types.elements import (
Index,
LengthsType,
MelSpectrogramType,
ProbsType,
RegressionValuesType,
TokenDurationType,
TokenIndex,
TokenLogDurationType,
)
from nemo.core.neural_types.neural_type import NeuralType
from nemo.utils import logging, model_utils
@dataclass
class G2PConfig:
_target_: str = "nemo.collections.tts.g2p.models.en_us_arpabet.EnglishG2p"
phoneme_dict: str = "scripts/tts_dataset_files/cmudict-0.7b_nv22.10"
heteronyms: str = "scripts/tts_dataset_files/heteronyms-052722"
phoneme_probability: float = 0.5
@dataclass
class TextTokenizer:
_target_: str = "nemo.collections.common.tokenizers.text_to_speech.tts_tokenizers.EnglishPhonemesTokenizer"
punct: bool = True
stresses: bool = True
chars: bool = True
apostrophe: bool = True
pad_with_space: bool = True
add_blank_at: bool = True
g2p: G2PConfig = G2PConfig()
@dataclass
class TextTokenizerConfig:
text_tokenizer: TextTokenizer = TextTokenizer()
class FastPitchModel(SpectrogramGenerator, Exportable, FastPitchAdapterModelMixin):
"""FastPitch model (https://arxiv.org/abs/2006.06873) that is used to generate mel spectrogram from text."""
def __init__(self, cfg: DictConfig, trainer: Trainer = None):
# Convert to Hydra 1.0 compatible DictConfig
cfg = model_utils.convert_model_config_to_dict_config(cfg)
cfg = model_utils.maybe_update_config_version(cfg)
# Setup normalizer
self.normalizer = None
self.text_normalizer_call = None
self.text_normalizer_call_kwargs = {}
self._setup_normalizer(cfg)
self.learn_alignment = cfg.get("learn_alignment", False)
# Setup vocabulary (=tokenizer) and input_fft_kwargs (supported only with self.learn_alignment=True)
input_fft_kwargs = {}
if self.learn_alignment:
self.vocab = None
self.ds_class = cfg.train_ds.dataset._target_
self.ds_class_name = self.ds_class.split(".")[-1]
            if self.ds_class not in [
"nemo.collections.tts.data.dataset.TTSDataset",
"nemo.collections.tts.data.text_to_speech_dataset.TextToSpeechDataset",
"nemo.collections.tts.torch.data.TTSDataset",
]:
raise ValueError(f"Unknown dataset class: {self.ds_class}.")
self._setup_tokenizer(cfg)
assert self.vocab is not None
input_fft_kwargs["n_embed"] = len(self.vocab.tokens)
input_fft_kwargs["padding_idx"] = self.vocab.pad
self._parser = None
self._tb_logger = None
super().__init__(cfg=cfg, trainer=trainer)
self.bin_loss_warmup_epochs = cfg.get("bin_loss_warmup_epochs", 100)
self.log_images = cfg.get("log_images", False)
self.log_train_images = False
default_prosody_loss_scale = 0.1 if self.learn_alignment else 1.0
dur_loss_scale = cfg.get("dur_loss_scale", default_prosody_loss_scale)
pitch_loss_scale = cfg.get("pitch_loss_scale", default_prosody_loss_scale)
energy_loss_scale = cfg.get("energy_loss_scale", default_prosody_loss_scale)
self.mel_loss_fn = MelLoss()
self.pitch_loss_fn = PitchLoss(loss_scale=pitch_loss_scale)
self.duration_loss_fn = DurationLoss(loss_scale=dur_loss_scale)
self.energy_loss_fn = EnergyLoss(loss_scale=energy_loss_scale)
self.aligner = None
if self.learn_alignment:
aligner_loss_scale = cfg.get("aligner_loss_scale", 1.0)
self.aligner = instantiate(self._cfg.alignment_module)
self.forward_sum_loss_fn = ForwardSumLoss(loss_scale=aligner_loss_scale)
self.bin_loss_fn = BinLoss(loss_scale=aligner_loss_scale)
self.preprocessor = instantiate(self._cfg.preprocessor)
input_fft = instantiate(self._cfg.input_fft, **input_fft_kwargs)
output_fft = instantiate(self._cfg.output_fft)
duration_predictor = instantiate(self._cfg.duration_predictor)
pitch_predictor = instantiate(self._cfg.pitch_predictor)
speaker_encoder = instantiate(self._cfg.get("speaker_encoder", None))
energy_embedding_kernel_size = cfg.get("energy_embedding_kernel_size", 0)
energy_predictor = instantiate(self._cfg.get("energy_predictor", None))
# [TODO] may remove if we change the pre-trained config
# cfg: condition_types = [ "add" ]
n_speakers = cfg.get("n_speakers", 0)
speaker_emb_condition_prosody = cfg.get("speaker_emb_condition_prosody", False)
speaker_emb_condition_decoder = cfg.get("speaker_emb_condition_decoder", False)
speaker_emb_condition_aligner = cfg.get("speaker_emb_condition_aligner", False)
min_token_duration = cfg.get("min_token_duration", 0)
use_log_energy = cfg.get("use_log_energy", True)
if n_speakers > 1 and "add" not in input_fft.cond_input.condition_types:
input_fft.cond_input.condition_types.append("add")
if speaker_emb_condition_prosody:
duration_predictor.cond_input.condition_types.append("add")
pitch_predictor.cond_input.condition_types.append("add")
if speaker_emb_condition_decoder:
output_fft.cond_input.condition_types.append("add")
if speaker_emb_condition_aligner and self.aligner is not None:
self.aligner.cond_input.condition_types.append("add")
self.fastpitch = FastPitchModule(
input_fft,
output_fft,
duration_predictor,
pitch_predictor,
energy_predictor,
self.aligner,
speaker_encoder,
n_speakers,
cfg.symbols_embedding_dim,
cfg.pitch_embedding_kernel_size,
energy_embedding_kernel_size,
cfg.n_mel_channels,
min_token_duration,
cfg.max_token_duration,
use_log_energy,
)
self._input_types = self._output_types = None
self.export_config = {
"emb_range": (0, self.fastpitch.encoder.word_emb.num_embeddings),
"enable_volume": False,
"enable_ragged_batches": False,
}
if self.fastpitch.speaker_emb is not None:
self.export_config["num_speakers"] = cfg.n_speakers
self.log_config = cfg.get("log_config", None)
# Adapter modules setup (from FastPitchAdapterModelMixin)
self.setup_adapters()
def _get_default_text_tokenizer_conf(self):
text_tokenizer: TextTokenizerConfig = TextTokenizerConfig()
return OmegaConf.create(OmegaConf.to_yaml(text_tokenizer))
def _setup_normalizer(self, cfg):
if "text_normalizer" in cfg:
normalizer_kwargs = {}
if "whitelist" in cfg.text_normalizer:
normalizer_kwargs["whitelist"] = self.register_artifact(
'text_normalizer.whitelist', cfg.text_normalizer.whitelist
)
try:
import nemo_text_processing
self.normalizer = instantiate(cfg.text_normalizer, **normalizer_kwargs)
except Exception as e:
logging.error(e)
raise ImportError(
"`nemo_text_processing` not installed, see https://github.com/NVIDIA/NeMo-text-processing for more details"
)
self.text_normalizer_call = self.normalizer.normalize
if "text_normalizer_call_kwargs" in cfg:
self.text_normalizer_call_kwargs = cfg.text_normalizer_call_kwargs
def _setup_tokenizer(self, cfg):
text_tokenizer_kwargs = {}
if "g2p" in cfg.text_tokenizer:
# for backward compatibility
if (
self._is_model_being_restored()
and (cfg.text_tokenizer.g2p.get('_target_', None) is not None)
and cfg.text_tokenizer.g2p["_target_"].startswith("nemo_text_processing.g2p")
):
cfg.text_tokenizer.g2p["_target_"] = g2p_backward_compatible_support(
cfg.text_tokenizer.g2p["_target_"]
)
g2p_kwargs = {}
if "phoneme_dict" in cfg.text_tokenizer.g2p:
g2p_kwargs["phoneme_dict"] = self.register_artifact(
'text_tokenizer.g2p.phoneme_dict', cfg.text_tokenizer.g2p.phoneme_dict,
)
if "heteronyms" in cfg.text_tokenizer.g2p:
g2p_kwargs["heteronyms"] = self.register_artifact(
'text_tokenizer.g2p.heteronyms', cfg.text_tokenizer.g2p.heteronyms,
)
            # for backward compatibility
text_tokenizer_kwargs["g2p"] = instantiate(cfg.text_tokenizer.g2p, **g2p_kwargs)
# TODO @xueyang: rename the instance of tokenizer because vocab is misleading.
self.vocab = instantiate(cfg.text_tokenizer, **text_tokenizer_kwargs)
@property
def tb_logger(self):
if self._tb_logger is None:
            if self.logger is None or self.logger.experiment is None:
return None
tb_logger = self.logger.experiment
for logger in self.trainer.loggers:
if isinstance(logger, TensorBoardLogger):
tb_logger = logger.experiment
break
self._tb_logger = tb_logger
return self._tb_logger
@property
def parser(self):
if self._parser is not None:
return self._parser
if self.learn_alignment:
self._parser = self.vocab.encode
else:
self._parser = parsers.make_parser(
labels=self._cfg.labels,
name='en',
unk_id=-1,
blank_id=-1,
do_normalize=True,
abbreviation_version="fastpitch",
make_table=False,
)
return self._parser
def parse(self, str_input: str, normalize=True) -> torch.tensor:
if self.training:
logging.warning("parse() is meant to be called in eval mode.")
if normalize and self.text_normalizer_call is not None:
str_input = self.text_normalizer_call(str_input, **self.text_normalizer_call_kwargs)
if self.learn_alignment:
eval_phon_mode = contextlib.nullcontext()
if hasattr(self.vocab, "set_phone_prob"):
eval_phon_mode = self.vocab.set_phone_prob(prob=1.0)
# Disable mixed g2p representation if necessary
with eval_phon_mode:
tokens = self.parser(str_input)
else:
tokens = self.parser(str_input)
x = torch.tensor(tokens).unsqueeze_(0).long().to(self.device)
return x
@typecheck(
input_types={
"text": NeuralType(('B', 'T_text'), TokenIndex()),
"durs": NeuralType(('B', 'T_text'), TokenDurationType()),
"pitch": NeuralType(('B', 'T_audio'), RegressionValuesType()),
"energy": NeuralType(('B', 'T_audio'), RegressionValuesType(), optional=True),
"speaker": NeuralType(('B'), Index(), optional=True),
"pace": NeuralType(optional=True),
"spec": NeuralType(('B', 'D', 'T_spec'), MelSpectrogramType(), optional=True),
"attn_prior": NeuralType(('B', 'T_spec', 'T_text'), ProbsType(), optional=True),
"mel_lens": NeuralType(('B'), LengthsType(), optional=True),
"input_lens": NeuralType(('B'), LengthsType(), optional=True),
# reference_* data is used for multi-speaker FastPitch training
"reference_spec": NeuralType(('B', 'D', 'T_spec'), MelSpectrogramType(), optional=True),
"reference_spec_lens": NeuralType(('B'), LengthsType(), optional=True),
}
)
def forward(
self,
*,
text,
durs=None,
pitch=None,
energy=None,
speaker=None,
pace=1.0,
spec=None,
attn_prior=None,
mel_lens=None,
input_lens=None,
reference_spec=None,
reference_spec_lens=None,
):
return self.fastpitch(
text=text,
durs=durs,
pitch=pitch,
energy=energy,
speaker=speaker,
pace=pace,
spec=spec,
attn_prior=attn_prior,
mel_lens=mel_lens,
input_lens=input_lens,
reference_spec=reference_spec,
reference_spec_lens=reference_spec_lens,
)
@typecheck(output_types={"spect": NeuralType(('B', 'D', 'T_spec'), MelSpectrogramType())})
def generate_spectrogram(
self,
tokens: 'torch.tensor',
speaker: Optional[int] = None,
pace: float = 1.0,
reference_spec: Optional['torch.tensor'] = None,
reference_spec_lens: Optional['torch.tensor'] = None,
) -> torch.tensor:
if self.training:
logging.warning("generate_spectrogram() is meant to be called in eval mode.")
if isinstance(speaker, int):
speaker = torch.tensor([speaker]).to(self.device)
spect, *_ = self(
text=tokens,
durs=None,
pitch=None,
speaker=speaker,
pace=pace,
reference_spec=reference_spec,
reference_spec_lens=reference_spec_lens,
)
return spect
def training_step(self, batch, batch_idx):
attn_prior, durs, speaker, energy, reference_audio, reference_audio_len = (
None,
None,
None,
None,
None,
None,
)
if self.learn_alignment:
if self.ds_class == "nemo.collections.tts.data.text_to_speech_dataset.TextToSpeechDataset":
batch_dict = batch
else:
batch_dict = process_batch(batch, self._train_dl.dataset.sup_data_types_set)
audio = batch_dict.get("audio")
audio_lens = batch_dict.get("audio_lens")
text = batch_dict.get("text")
text_lens = batch_dict.get("text_lens")
attn_prior = batch_dict.get("align_prior_matrix", None)
pitch = batch_dict.get("pitch", None)
energy = batch_dict.get("energy", None)
speaker = batch_dict.get("speaker_id", None)
reference_audio = batch_dict.get("reference_audio", None)
reference_audio_len = batch_dict.get("reference_audio_lens", None)
else:
audio, audio_lens, text, text_lens, durs, pitch, speaker = batch
mels, spec_len = self.preprocessor(input_signal=audio, length=audio_lens)
reference_spec, reference_spec_len = None, None
if reference_audio is not None:
reference_spec, reference_spec_len = self.preprocessor(
input_signal=reference_audio, length=reference_audio_len
)
(
mels_pred,
_,
_,
log_durs_pred,
pitch_pred,
attn_soft,
attn_logprob,
attn_hard,
attn_hard_dur,
pitch,
energy_pred,
energy_tgt,
) = self(
text=text,
durs=durs,
pitch=pitch,
energy=energy,
speaker=speaker,
pace=1.0,
spec=mels if self.learn_alignment else None,
reference_spec=reference_spec,
reference_spec_lens=reference_spec_len,
attn_prior=attn_prior,
mel_lens=spec_len,
input_lens=text_lens,
)
if durs is None:
durs = attn_hard_dur
mel_loss = self.mel_loss_fn(spect_predicted=mels_pred, spect_tgt=mels)
dur_loss = self.duration_loss_fn(log_durs_predicted=log_durs_pred, durs_tgt=durs, len=text_lens)
loss = mel_loss + dur_loss
if self.learn_alignment:
ctc_loss = self.forward_sum_loss_fn(attn_logprob=attn_logprob, in_lens=text_lens, out_lens=spec_len)
            bin_loss_weight = min(self.current_epoch / self.bin_loss_warmup_epochs, 1.0)
bin_loss = self.bin_loss_fn(hard_attention=attn_hard, soft_attention=attn_soft) * bin_loss_weight
loss += ctc_loss + bin_loss
pitch_loss = self.pitch_loss_fn(pitch_predicted=pitch_pred, pitch_tgt=pitch, len=text_lens)
energy_loss = self.energy_loss_fn(energy_predicted=energy_pred, energy_tgt=energy_tgt, length=text_lens)
loss += pitch_loss + energy_loss
self.log("t_loss", loss)
self.log("t_mel_loss", mel_loss)
self.log("t_dur_loss", dur_loss)
self.log("t_pitch_loss", pitch_loss)
if energy_tgt is not None:
self.log("t_energy_loss", energy_loss)
if self.learn_alignment:
self.log("t_ctc_loss", ctc_loss)
self.log("t_bin_loss", bin_loss)
# Log images to tensorboard
if self.log_images and self.log_train_images and isinstance(self.logger, TensorBoardLogger):
self.log_train_images = False
self.tb_logger.add_image(
"train_mel_target",
plot_spectrogram_to_numpy(mels[0].data.cpu().float().numpy()),
self.global_step,
dataformats="HWC",
)
spec_predict = mels_pred[0].data.cpu().float().numpy()
self.tb_logger.add_image(
"train_mel_predicted", plot_spectrogram_to_numpy(spec_predict), self.global_step, dataformats="HWC",
)
if self.learn_alignment:
attn = attn_hard[0].data.cpu().float().numpy().squeeze()
self.tb_logger.add_image(
"train_attn", plot_alignment_to_numpy(attn.T), self.global_step, dataformats="HWC",
)
soft_attn = attn_soft[0].data.cpu().float().numpy().squeeze()
self.tb_logger.add_image(
"train_soft_attn", plot_alignment_to_numpy(soft_attn.T), self.global_step, dataformats="HWC",
)
return loss
def validation_step(self, batch, batch_idx):
attn_prior, durs, speaker, energy, reference_audio, reference_audio_len = (
None,
None,
None,
None,
None,
None,
)
if self.learn_alignment:
if self.ds_class == "nemo.collections.tts.data.text_to_speech_dataset.TextToSpeechDataset":
batch_dict = batch
else:
batch_dict = process_batch(batch, self._train_dl.dataset.sup_data_types_set)
audio = batch_dict.get("audio")
audio_lens = batch_dict.get("audio_lens")
text = batch_dict.get("text")
text_lens = batch_dict.get("text_lens")
attn_prior = batch_dict.get("align_prior_matrix", None)
pitch = batch_dict.get("pitch", None)
energy = batch_dict.get("energy", None)
speaker = batch_dict.get("speaker_id", None)
reference_audio = batch_dict.get("reference_audio", None)
reference_audio_len = batch_dict.get("reference_audio_lens", None)
else:
audio, audio_lens, text, text_lens, durs, pitch, speaker = batch
mels, mel_lens = self.preprocessor(input_signal=audio, length=audio_lens)
reference_spec, reference_spec_len = None, None
if reference_audio is not None:
reference_spec, reference_spec_len = self.preprocessor(
input_signal=reference_audio, length=reference_audio_len
)
# Calculate val loss on ground truth durations to better align L2 loss in time
(mels_pred, _, _, log_durs_pred, pitch_pred, _, _, _, attn_hard_dur, pitch, energy_pred, energy_tgt,) = self(
text=text,
durs=durs,
pitch=pitch,
energy=energy,
speaker=speaker,
pace=1.0,
spec=mels if self.learn_alignment else None,
reference_spec=reference_spec,
reference_spec_lens=reference_spec_len,
attn_prior=attn_prior,
mel_lens=mel_lens,
input_lens=text_lens,
)
if durs is None:
durs = attn_hard_dur
mel_loss = self.mel_loss_fn(spect_predicted=mels_pred, spect_tgt=mels)
dur_loss = self.duration_loss_fn(log_durs_predicted=log_durs_pred, durs_tgt=durs, len=text_lens)
pitch_loss = self.pitch_loss_fn(pitch_predicted=pitch_pred, pitch_tgt=pitch, len=text_lens)
energy_loss = self.energy_loss_fn(energy_predicted=energy_pred, energy_tgt=energy_tgt, length=text_lens)
loss = mel_loss + dur_loss + pitch_loss + energy_loss
val_outputs = {
"val_loss": loss,
"mel_loss": mel_loss,
"dur_loss": dur_loss,
"pitch_loss": pitch_loss,
"energy_loss": energy_loss if energy_tgt is not None else None,
"mel_target": mels if batch_idx == 0 else None,
"mel_pred": mels_pred if batch_idx == 0 else None,
}
self.validation_step_outputs.append(val_outputs)
return val_outputs
def on_validation_epoch_end(self):
collect = lambda key: torch.stack([x[key] for x in self.validation_step_outputs]).mean()
val_loss = collect("val_loss")
mel_loss = collect("mel_loss")
dur_loss = collect("dur_loss")
pitch_loss = collect("pitch_loss")
self.log("val_loss", val_loss, sync_dist=True)
self.log("val_mel_loss", mel_loss, sync_dist=True)
self.log("val_dur_loss", dur_loss, sync_dist=True)
self.log("val_pitch_loss", pitch_loss, sync_dist=True)
if self.validation_step_outputs[0]["energy_loss"] is not None:
energy_loss = collect("energy_loss")
self.log("val_energy_loss", energy_loss, sync_dist=True)
_, _, _, _, _, spec_target, spec_predict = self.validation_step_outputs[0].values()
if self.log_images and isinstance(self.logger, TensorBoardLogger):
self.tb_logger.add_image(
"val_mel_target",
plot_spectrogram_to_numpy(spec_target[0].data.cpu().float().numpy()),
self.global_step,
dataformats="HWC",
)
spec_predict = spec_predict[0].data.cpu().float().numpy()
self.tb_logger.add_image(
"val_mel_predicted", plot_spectrogram_to_numpy(spec_predict), self.global_step, dataformats="HWC",
)
self.log_train_images = True
        self.validation_step_outputs.clear()  # free memory
def _setup_train_dataloader(self, cfg):
phon_mode = contextlib.nullcontext()
if hasattr(self.vocab, "set_phone_prob"):
phon_mode = self.vocab.set_phone_prob(self.vocab.phoneme_probability)
with phon_mode:
dataset = instantiate(cfg.dataset, text_tokenizer=self.vocab,)
sampler = dataset.get_sampler(cfg.dataloader_params.batch_size)
return torch.utils.data.DataLoader(
dataset, collate_fn=dataset.collate_fn, sampler=sampler, **cfg.dataloader_params
)
def _setup_test_dataloader(self, cfg):
phon_mode = contextlib.nullcontext()
if hasattr(self.vocab, "set_phone_prob"):
phon_mode = self.vocab.set_phone_prob(0.0)
with phon_mode:
dataset = instantiate(cfg.dataset, text_tokenizer=self.vocab,)
return torch.utils.data.DataLoader(dataset, collate_fn=dataset.collate_fn, **cfg.dataloader_params)
def __setup_dataloader_from_config(self, cfg, shuffle_should_be: bool = True, name: str = "train"):
if "dataset" not in cfg or not isinstance(cfg.dataset, DictConfig):
raise ValueError(f"No dataset for {name}")
if "dataloader_params" not in cfg or not isinstance(cfg.dataloader_params, DictConfig):
raise ValueError(f"No dataloader_params for {name}")
if shuffle_should_be:
if 'shuffle' not in cfg.dataloader_params:
logging.warning(
f"Shuffle should be set to True for {self}'s {name} dataloader but was not found in its "
"config. Manually setting to True"
)
with open_dict(cfg.dataloader_params):
cfg.dataloader_params.shuffle = True
elif not cfg.dataloader_params.shuffle:
logging.error(f"The {name} dataloader for {self} has shuffle set to False!!!")
elif cfg.dataloader_params.shuffle:
logging.error(f"The {name} dataloader for {self} has shuffle set to True!!!")
if self.ds_class == "nemo.collections.tts.data.dataset.TTSDataset":
phon_mode = contextlib.nullcontext()
if hasattr(self.vocab, "set_phone_prob"):
phon_mode = self.vocab.set_phone_prob(prob=None if name == "val" else self.vocab.phoneme_probability)
with phon_mode:
dataset = instantiate(
cfg.dataset,
text_normalizer=self.normalizer,
text_normalizer_call_kwargs=self.text_normalizer_call_kwargs,
text_tokenizer=self.vocab,
)
else:
dataset = instantiate(cfg.dataset)
return torch.utils.data.DataLoader(dataset, collate_fn=dataset.collate_fn, **cfg.dataloader_params)
def setup_training_data(self, cfg):
if self.ds_class == "nemo.collections.tts.data.text_to_speech_dataset.TextToSpeechDataset":
self._train_dl = self._setup_train_dataloader(cfg)
else:
self._train_dl = self.__setup_dataloader_from_config(cfg)
def setup_validation_data(self, cfg):
if self.ds_class == "nemo.collections.tts.data.text_to_speech_dataset.TextToSpeechDataset":
self._validation_dl = self._setup_test_dataloader(cfg)
else:
self._validation_dl = self.__setup_dataloader_from_config(cfg, shuffle_should_be=False, name="val")
def setup_test_data(self, cfg):
"""Omitted."""
pass
def configure_callbacks(self):
if not self.log_config:
return []
sample_ds_class = self.log_config.dataset._target_
if sample_ds_class != "nemo.collections.tts.data.text_to_speech_dataset.TextToSpeechDataset":
raise ValueError(f"Logging callback only supported for TextToSpeechDataset, got {sample_ds_class}")
data_loader = self._setup_test_dataloader(self.log_config)
generators = instantiate(self.log_config.generators)
log_dir = Path(self.log_config.log_dir) if self.log_config.log_dir else None
log_callback = LoggingCallback(
generators=generators,
data_loader=data_loader,
log_epochs=self.log_config.log_epochs,
epoch_frequency=self.log_config.epoch_frequency,
output_dir=log_dir,
loggers=self.trainer.loggers,
log_tensorboard=self.log_config.log_tensorboard,
log_wandb=self.log_config.log_wandb,
)
return [log_callback]
@classmethod
def list_available_models(cls) -> 'List[PretrainedModelInfo]':
"""
This method returns a list of pre-trained model which can be instantiated directly from NVIDIA's NGC cloud.
Returns:
List of available pre-trained models.
"""
list_of_models = []
# en-US, single speaker, 22050Hz, LJSpeech (ARPABET).
model = PretrainedModelInfo(
pretrained_model_name="tts_en_fastpitch",
location="https://api.ngc.nvidia.com/v2/models/nvidia/nemo/tts_en_fastpitch/versions/1.8.1/files/tts_en_fastpitch_align.nemo",
description="This model is trained on LJSpeech sampled at 22050Hz with and can be used to generate female English voices with an American accent. It is ARPABET-based.",
class_=cls,
)
list_of_models.append(model)
# en-US, single speaker, 22050Hz, LJSpeech (IPA).
model = PretrainedModelInfo(
pretrained_model_name="tts_en_fastpitch_ipa",
location="https://api.ngc.nvidia.com/v2/models/nvidia/nemo/tts_en_fastpitch/versions/IPA_1.13.0/files/tts_en_fastpitch_align_ipa.nemo",
description="This model is trained on LJSpeech sampled at 22050Hz with and can be used to generate female English voices with an American accent. It is IPA-based.",
class_=cls,
)
list_of_models.append(model)
# en-US, multi-speaker, 44100Hz, HiFiTTS.
model = PretrainedModelInfo(
pretrained_model_name="tts_en_fastpitch_multispeaker",
location="https://api.ngc.nvidia.com/v2/models/nvidia/nemo/tts_en_multispeaker_fastpitchhifigan/versions/1.10.0/files/tts_en_fastpitch_multispeaker.nemo",
description="This model is trained on HiFITTS sampled at 44100Hz with and can be used to generate male and female English voices with an American accent.",
class_=cls,
)
list_of_models.append(model)
        # de-DE, single male speaker, grapheme-based tokenizer, 22050 Hz, Thorsten Müller's German Neutral-TTS Dataset, 21.02
model = PretrainedModelInfo(
pretrained_model_name="tts_de_fastpitch_singleSpeaker_thorstenNeutral_2102",
location="https://api.ngc.nvidia.com/v2/models/nvidia/nemo/tts_de_fastpitchhifigan/versions/1.15.0/files/tts_de_fastpitch_thorstens2102.nemo",
description="This model is trained on a single male speaker data in Thorsten Mรผllerโs German Neutral 21.02 Dataset sampled at 22050Hz and can be used to generate male German voices.",
class_=cls,
)
list_of_models.append(model)
        # de-DE, single male speaker, grapheme-based tokenizer, 22050 Hz, Thorsten Müller's German Neutral-TTS Dataset, 22.10
model = PretrainedModelInfo(
pretrained_model_name="tts_de_fastpitch_singleSpeaker_thorstenNeutral_2210",
location="https://api.ngc.nvidia.com/v2/models/nvidia/nemo/tts_de_fastpitchhifigan/versions/1.15.0/files/tts_de_fastpitch_thorstens2210.nemo",
description="This model is trained on single male speaker data from Thorsten Müller's German Neutral 22.10 Dataset sampled at 22050Hz and can be used to generate male German voices.",
class_=cls,
)
list_of_models.append(model)
# de-DE, multi-speaker, 5 speakers, 44100 Hz, HUI-Audio-Corpus-German Clean.
model = PretrainedModelInfo(
pretrained_model_name="tts_de_fastpitch_multispeaker_5",
location="https://api.ngc.nvidia.com/v2/models/nvidia/nemo/tts_de_fastpitch_multispeaker_5/versions/1.11.0/files/tts_de_fastpitch_multispeaker_5.nemo",
description="This model is trained on 5 speakers in HUI-Audio-Corpus-German clean subset sampled at 44100Hz and can be used to generate male and female German voices.",
class_=cls,
)
list_of_models.append(model)
# es, 174 speakers, 44100Hz, OpenSLR (IPA)
model = PretrainedModelInfo(
pretrained_model_name="tts_es_fastpitch_multispeaker",
location="https://api.ngc.nvidia.com/v2/models/nvidia/nemo/tts_es_multispeaker_fastpitchhifigan/versions/1.15.0/files/tts_es_fastpitch_multispeaker.nemo",
description="This model is trained on 174 speakers in 6 crowdsourced Latin American Spanish OpenSLR datasets sampled at 44100Hz and can be used to generate male and female Spanish voices with Latin American accents.",
class_=cls,
)
list_of_models.append(model)
# zh, single female speaker, 22050Hz, SFSpeech Bilingual Chinese/English dataset, improved model using richer
# dict and jieba word segmenter for polyphone disambiguation.
model = PretrainedModelInfo(
pretrained_model_name="tts_zh_fastpitch_sfspeech",
location="https://api.ngc.nvidia.com/v2/models/nvidia/nemo/tts_zh_fastpitch_hifigan_sfspeech/versions/1.15.0/files/tts_zh_fastpitch_sfspeech.nemo",
description="This model is trained on a single female speaker in SFSpeech Bilingual Chinese/English dataset"
" sampled at 22050Hz and can be used to generate female Mandarin Chinese voices. It is improved"
" using richer dict and jieba word segmenter for polyphone disambiguation.",
class_=cls,
)
list_of_models.append(model)
# en, multi speaker, LibriTTS, 16000 Hz
# STFT 25 ms window / 10 ms hop, matching ASR params
# for use during English ASR training/adaptation
model = PretrainedModelInfo(
pretrained_model_name="tts_en_fastpitch_for_asr_finetuning",
location="https://api.ngc.nvidia.com/v2/models/nvidia/nemo/tts_en_fastpitch_spectrogram_enhancer_for_asr_finetuning/versions/1.20.0/files/tts_en_fastpitch_for_asr_finetuning.nemo",
description="This model is trained on LibriSpeech, train-960 subset."
" STFT parameters follow those commonly used in ASR: 25 ms window, 10 ms hop."
" This model is supposed to be used with its companion SpectrogramEnhancer for"
" ASR fine-tuning. Usage for regular TTS tasks is not advised.",
class_=cls,
)
list_of_models.append(model)
return list_of_models
# Methods for model exportability
def _prepare_for_export(self, **kwargs):
super()._prepare_for_export(**kwargs)
tensor_shape = ('T') if self.export_config["enable_ragged_batches"] else ('B', 'T')
# Define input_types and output_types as required by export()
self._input_types = {
"text": NeuralType(tensor_shape, TokenIndex()),
"pitch": NeuralType(tensor_shape, RegressionValuesType()),
"pace": NeuralType(tensor_shape),
"volume": NeuralType(tensor_shape, optional=True),
"batch_lengths": NeuralType(('B'), optional=True),
"speaker": NeuralType(('B'), Index(), optional=True),
}
self._output_types = {
"spect": NeuralType(('B', 'D', 'T'), MelSpectrogramType()),
"num_frames": NeuralType(('B'), TokenDurationType()),
"durs_predicted": NeuralType(('B', 'T'), TokenDurationType()),
"log_durs_predicted": NeuralType(('B', 'T'), TokenLogDurationType()),
"pitch_predicted": NeuralType(('B', 'T'), RegressionValuesType()),
}
if self.export_config["enable_volume"]:
self._output_types["volume_aligned"] = NeuralType(('B', 'T'), RegressionValuesType())
def _export_teardown(self):
self._input_types = self._output_types = None
@property
def disabled_deployment_input_names(self):
"""Implement this method to return a set of input names disabled for export"""
disabled_inputs = set()
if self.fastpitch.speaker_emb is None:
disabled_inputs.add("speaker")
if not self.export_config["enable_ragged_batches"]:
disabled_inputs.add("batch_lengths")
if not self.export_config["enable_volume"]:
disabled_inputs.add("volume")
return disabled_inputs
@property
def input_types(self):
return self._input_types
@property
def output_types(self):
return self._output_types
def input_example(self, max_batch=1, max_dim=44):
"""
Generates input examples for tracing etc.
Returns:
A tuple of input examples.
"""
par = next(self.fastpitch.parameters())
inputs = sample_tts_input(self.export_config, par.device, max_batch=max_batch, max_dim=max_dim)
if 'enable_ragged_batches' not in self.export_config:
inputs.pop('batch_lengths', None)
return (inputs,)
def forward_for_export(self, text, pitch, pace, volume=None, batch_lengths=None, speaker=None):
if self.export_config["enable_ragged_batches"]:
text, pitch, pace, volume_tensor, lens = batch_from_ragged(
text, pitch, pace, batch_lengths, padding_idx=self.fastpitch.encoder.padding_idx, volume=volume
)
if volume is not None:
volume = volume_tensor
return self.fastpitch.infer(text=text, pitch=pitch, pace=pace, volume=volume, speaker=speaker)
def interpolate_speaker(
self, original_speaker_1, original_speaker_2, weight_speaker_1, weight_speaker_2, new_speaker_id
):
"""
This method performs speaker interpolation between two original speakers the model is trained on.
Inputs:
original_speaker_1: Integer speaker ID of first existing speaker in the model
original_speaker_2: Integer speaker ID of second existing speaker in the model
weight_speaker_1: Floating point weight associated in to first speaker during weight combination
weight_speaker_2: Floating point weight associated in to second speaker during weight combination
new_speaker_id: Integer speaker ID of new interpolated speaker in the model
"""
if self.fastpitch.speaker_emb is None:
raise Exception(
"Current FastPitch model is not a multi-speaker FastPitch model. Speaker interpolation can only"
" be performed with a multi-speaker model"
)
n_speakers = self.fastpitch.speaker_emb.weight.data.size()[0]
if original_speaker_1 >= n_speakers or original_speaker_2 >= n_speakers or new_speaker_id >= n_speakers:
raise Exception(
f"Parameters original_speaker_1, original_speaker_2, new_speaker_id should be less than the total"
f" number of speakers FastPitch was trained on (n_speakers = {n_speakers})."
)
speaker_emb_1 = (
self.fastpitch.speaker_emb(torch.tensor(original_speaker_1, dtype=torch.int32).cuda()).clone().detach()
)
speaker_emb_2 = (
self.fastpitch.speaker_emb(torch.tensor(original_speaker_2, dtype=torch.int32).cuda()).clone().detach()
)
new_speaker_emb = weight_speaker_1 * speaker_emb_1 + weight_speaker_2 * speaker_emb_2
self.fastpitch.speaker_emb.weight.data[new_speaker_id] = new_speaker_emb
[end of nemo/collections/tts/models/fastpitch.py]
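The `interpolate_speaker` method above reduces to an element-wise weighted sum of two embedding vectors before the result is written back into the embedding table. A minimal sketch of that arithmetic in plain Python (the helper name and list-based embeddings are illustrative, not part of the NeMo API):

```python
def interpolate_embeddings(emb_1, emb_2, weight_1, weight_2):
    """Element-wise weighted combination of two speaker embeddings.

    Mirrors the torch expression used in FastPitchModel.interpolate_speaker:
    new_emb = w1 * emb1 + w2 * emb2, here over plain Python lists.
    """
    if len(emb_1) != len(emb_2):
        raise ValueError("Embeddings must have the same dimensionality")
    return [weight_1 * a + weight_2 * b for a, b in zip(emb_1, emb_2)]


# Interpolating halfway between two 4-dim embeddings
mixed = interpolate_embeddings([1.0, 0.0, 2.0, 4.0], [0.0, 1.0, 2.0, 0.0], 0.5, 0.5)
```

In the real model the weights are typically chosen so that `weight_1 + weight_2 == 1`, keeping the interpolated embedding on the line segment between the two original speakers.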
[start of nemo/collections/tts/models/tacotron2.py]
# Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import contextlib
from dataclasses import dataclass
from typing import Any, Dict, List, Optional
import torch
from hydra.utils import instantiate
from omegaconf import MISSING, DictConfig, OmegaConf, open_dict
from omegaconf.errors import ConfigAttributeError
from pytorch_lightning.loggers import TensorBoardLogger, WandbLogger
from torch import nn
from nemo.collections.common.parts.preprocessing import parsers
from nemo.collections.tts.losses.tacotron2loss import Tacotron2Loss
from nemo.collections.tts.models.base import SpectrogramGenerator
from nemo.collections.tts.parts.utils.helpers import (
g2p_backward_compatible_support,
get_mask_from_lengths,
tacotron2_log_to_tb_func,
tacotron2_log_to_wandb_func,
)
from nemo.core.classes.common import PretrainedModelInfo, typecheck
from nemo.core.neural_types.elements import (
AudioSignal,
EmbeddedTextType,
LengthsType,
LogitsType,
MelSpectrogramType,
SequenceToSequenceAlignmentType,
)
from nemo.core.neural_types.neural_type import NeuralType
from nemo.utils import logging, model_utils
@dataclass
class Preprocessor:
_target_: str = MISSING
pad_value: float = MISSING
@dataclass
class Tacotron2Config:
preprocessor: Preprocessor = Preprocessor()
encoder: Dict[Any, Any] = MISSING
decoder: Dict[Any, Any] = MISSING
postnet: Dict[Any, Any] = MISSING
labels: List = MISSING
train_ds: Optional[Dict[Any, Any]] = None
validation_ds: Optional[Dict[Any, Any]] = None
class Tacotron2Model(SpectrogramGenerator):
"""Tacotron 2 Model that is used to generate mel spectrograms from text"""
def __init__(self, cfg: DictConfig, trainer: 'Trainer' = None):
# Convert to Hydra 1.0 compatible DictConfig
cfg = model_utils.convert_model_config_to_dict_config(cfg)
cfg = model_utils.maybe_update_config_version(cfg)
# setup normalizer
self.normalizer = None
self.text_normalizer_call = None
self.text_normalizer_call_kwargs = {}
self._setup_normalizer(cfg)
# setup tokenizer
self.tokenizer = None
if hasattr(cfg, 'text_tokenizer'):
self._setup_tokenizer(cfg)
self.num_tokens = len(self.tokenizer.tokens)
self.tokenizer_pad = self.tokenizer.pad
self.tokenizer_unk = self.tokenizer.oov
# assert self.tokenizer is not None
else:
self.num_tokens = len(cfg.labels) + 3
super().__init__(cfg=cfg, trainer=trainer)
schema = OmegaConf.structured(Tacotron2Config)
# ModelPT ensures that cfg is a DictConfig, but do this second check in case ModelPT changes
if isinstance(cfg, dict):
cfg = OmegaConf.create(cfg)
elif not isinstance(cfg, DictConfig):
raise ValueError(f"cfg was type: {type(cfg)}. Expected either a dict or a DictConfig")
# Ensure passed cfg is compliant with schema
try:
OmegaConf.merge(cfg, schema)
self.pad_value = cfg.preprocessor.pad_value
except ConfigAttributeError:
self.pad_value = cfg.preprocessor.params.pad_value
logging.warning(
"Your config is using an old NeMo yaml configuration. Please ensure that the yaml matches the "
"current version in the main branch for future compatibility."
)
self._parser = None
self.audio_to_melspec_precessor = instantiate(cfg.preprocessor)
self.text_embedding = nn.Embedding(self.num_tokens, 512)
self.encoder = instantiate(self._cfg.encoder)
self.decoder = instantiate(self._cfg.decoder)
self.postnet = instantiate(self._cfg.postnet)
self.loss = Tacotron2Loss()
self.calculate_loss = True
@property
def parser(self):
if self._parser is not None:
return self._parser
ds_class_name = self._cfg.train_ds.dataset._target_.split(".")[-1]
if ds_class_name == "TTSDataset":
self._parser = None
elif hasattr(self._cfg, "labels"):
self._parser = parsers.make_parser(
labels=self._cfg.labels,
name='en',
unk_id=-1,
blank_id=-1,
do_normalize=True,
abbreviation_version="fastpitch",
make_table=False,
)
else:
raise ValueError("Wanted to setup parser, but model does not have necessary parameters")
return self._parser
def parse(self, text: str, normalize=True) -> torch.Tensor:
if self.training:
logging.warning("parse() is meant to be called in eval mode.")
if normalize and self.text_normalizer_call is not None:
text = self.text_normalizer_call(text, **self.text_normalizer_call_kwargs)
eval_phon_mode = contextlib.nullcontext()
if hasattr(self.tokenizer, "set_phone_prob"):
eval_phon_mode = self.tokenizer.set_phone_prob(prob=1.0)
with eval_phon_mode:
if self.tokenizer is not None:
tokens = self.tokenizer.encode(text)
else:
tokens = self.parser(text)
# Old parser doesn't add bos and eos ids, so manually add them
tokens = [len(self._cfg.labels)] + tokens + [len(self._cfg.labels) + 1]
tokens_tensor = torch.tensor(tokens).unsqueeze_(0).to(self.device)
return tokens_tensor
@property
def input_types(self):
if self.training:
return {
"tokens": NeuralType(('B', 'T'), EmbeddedTextType()),
"token_len": NeuralType(('B'), LengthsType()),
"audio": NeuralType(('B', 'T'), AudioSignal()),
"audio_len": NeuralType(('B'), LengthsType()),
}
else:
return {
"tokens": NeuralType(('B', 'T'), EmbeddedTextType()),
"token_len": NeuralType(('B'), LengthsType()),
"audio": NeuralType(('B', 'T'), AudioSignal(), optional=True),
"audio_len": NeuralType(('B'), LengthsType(), optional=True),
}
@property
def output_types(self):
if not self.calculate_loss and not self.training:
return {
"spec_pred_dec": NeuralType(('B', 'D', 'T'), MelSpectrogramType()),
"spec_pred_postnet": NeuralType(('B', 'D', 'T'), MelSpectrogramType()),
"gate_pred": NeuralType(('B', 'T'), LogitsType()),
"alignments": NeuralType(('B', 'T', 'T'), SequenceToSequenceAlignmentType()),
"pred_length": NeuralType(('B'), LengthsType()),
}
return {
"spec_pred_dec": NeuralType(('B', 'D', 'T'), MelSpectrogramType()),
"spec_pred_postnet": NeuralType(('B', 'D', 'T'), MelSpectrogramType()),
"gate_pred": NeuralType(('B', 'T'), LogitsType()),
"spec_target": NeuralType(('B', 'D', 'T'), MelSpectrogramType()),
"spec_target_len": NeuralType(('B'), LengthsType()),
"alignments": NeuralType(('B', 'T', 'T'), SequenceToSequenceAlignmentType()),
}
@typecheck()
def forward(self, *, tokens, token_len, audio=None, audio_len=None):
if audio is not None and audio_len is not None:
spec_target, spec_target_len = self.audio_to_melspec_precessor(audio, audio_len)
else:
if self.training or self.calculate_loss:
raise ValueError(
"'audio' and 'audio_len' cannot be None when either 'self.training' or 'self.calculate_loss' is True."
)
token_embedding = self.text_embedding(tokens).transpose(1, 2)
encoder_embedding = self.encoder(token_embedding=token_embedding, token_len=token_len)
if self.training:
spec_pred_dec, gate_pred, alignments = self.decoder(
memory=encoder_embedding, decoder_inputs=spec_target, memory_lengths=token_len
)
else:
spec_pred_dec, gate_pred, alignments, pred_length = self.decoder(
memory=encoder_embedding, memory_lengths=token_len
)
spec_pred_postnet = self.postnet(mel_spec=spec_pred_dec)
if not self.calculate_loss and not self.training:
return spec_pred_dec, spec_pred_postnet, gate_pred, alignments, pred_length
return spec_pred_dec, spec_pred_postnet, gate_pred, spec_target, spec_target_len, alignments
@typecheck(
input_types={"tokens": NeuralType(('B', 'T'), EmbeddedTextType())},
output_types={"spec": NeuralType(('B', 'D', 'T'), MelSpectrogramType())},
)
def generate_spectrogram(self, *, tokens):
self.eval()
self.calculate_loss = False
token_len = torch.tensor([len(i) for i in tokens]).to(self.device)
tensors = self(tokens=tokens, token_len=token_len)
spectrogram_pred = tensors[1]
if spectrogram_pred.shape[0] > 1:
# Silence all frames past the predicted end
mask = ~get_mask_from_lengths(tensors[-1])
mask = mask.expand(spectrogram_pred.shape[1], mask.size(0), mask.size(1))
mask = mask.permute(1, 0, 2)
spectrogram_pred.data.masked_fill_(mask, self.pad_value)
return spectrogram_pred
def training_step(self, batch, batch_idx):
audio, audio_len, tokens, token_len = batch
spec_pred_dec, spec_pred_postnet, gate_pred, spec_target, spec_target_len, _ = self.forward(
audio=audio, audio_len=audio_len, tokens=tokens, token_len=token_len
)
loss, _ = self.loss(
spec_pred_dec=spec_pred_dec,
spec_pred_postnet=spec_pred_postnet,
gate_pred=gate_pred,
spec_target=spec_target,
spec_target_len=spec_target_len,
pad_value=self.pad_value,
)
output = {
'loss': loss,
'progress_bar': {'training_loss': loss},
'log': {'loss': loss},
}
return output
def validation_step(self, batch, batch_idx):
audio, audio_len, tokens, token_len = batch
spec_pred_dec, spec_pred_postnet, gate_pred, spec_target, spec_target_len, alignments = self.forward(
audio=audio, audio_len=audio_len, tokens=tokens, token_len=token_len
)
loss, gate_target = self.loss(
spec_pred_dec=spec_pred_dec,
spec_pred_postnet=spec_pred_postnet,
gate_pred=gate_pred,
spec_target=spec_target,
spec_target_len=spec_target_len,
pad_value=self.pad_value,
)
loss = {
"val_loss": loss,
"mel_target": spec_target,
"mel_postnet": spec_pred_postnet,
"gate": gate_pred,
"gate_target": gate_target,
"alignments": alignments,
}
self.validation_step_outputs.append(loss)
return loss
def on_validation_epoch_end(self):
if self.logger is not None and self.logger.experiment is not None:
logger = self.logger.experiment
for logger in self.trainer.loggers:
if isinstance(logger, TensorBoardLogger):
logger = logger.experiment
break
if isinstance(logger, TensorBoardLogger):
tacotron2_log_to_tb_func(
logger,
self.validation_step_outputs[0].values(),
self.global_step,
tag="val",
log_images=True,
add_audio=False,
)
elif isinstance(logger, WandbLogger):
tacotron2_log_to_wandb_func(
logger,
self.validation_step_outputs[0].values(),
self.global_step,
tag="val",
log_images=True,
add_audio=False,
)
avg_loss = torch.stack(
[x['val_loss'] for x in self.validation_step_outputs]
).mean() # This reduces across batches, not workers!
self.log('val_loss', avg_loss)
self.validation_step_outputs.clear() # free memory
def _setup_normalizer(self, cfg):
if "text_normalizer" in cfg:
normalizer_kwargs = {}
if "whitelist" in cfg.text_normalizer:
normalizer_kwargs["whitelist"] = self.register_artifact(
'text_normalizer.whitelist', cfg.text_normalizer.whitelist
)
try:
import nemo_text_processing
self.normalizer = instantiate(cfg.text_normalizer, **normalizer_kwargs)
except Exception as e:
logging.error(e)
raise ImportError(
"`nemo_text_processing` not installed, see https://github.com/NVIDIA/NeMo-text-processing for more details"
)
self.text_normalizer_call = self.normalizer.normalize
if "text_normalizer_call_kwargs" in cfg:
self.text_normalizer_call_kwargs = cfg.text_normalizer_call_kwargs
def _setup_tokenizer(self, cfg):
text_tokenizer_kwargs = {}
if "g2p" in cfg.text_tokenizer and cfg.text_tokenizer.g2p is not None:
# for backward compatibility
if (
self._is_model_being_restored()
and (cfg.text_tokenizer.g2p.get('_target_', None) is not None)
and cfg.text_tokenizer.g2p["_target_"].startswith("nemo_text_processing.g2p")
):
cfg.text_tokenizer.g2p["_target_"] = g2p_backward_compatible_support(
cfg.text_tokenizer.g2p["_target_"]
)
g2p_kwargs = {}
if "phoneme_dict" in cfg.text_tokenizer.g2p:
g2p_kwargs["phoneme_dict"] = self.register_artifact(
'text_tokenizer.g2p.phoneme_dict', cfg.text_tokenizer.g2p.phoneme_dict,
)
if "heteronyms" in cfg.text_tokenizer.g2p:
g2p_kwargs["heteronyms"] = self.register_artifact(
'text_tokenizer.g2p.heteronyms', cfg.text_tokenizer.g2p.heteronyms,
)
text_tokenizer_kwargs["g2p"] = instantiate(cfg.text_tokenizer.g2p, **g2p_kwargs)
self.tokenizer = instantiate(cfg.text_tokenizer, **text_tokenizer_kwargs)
def __setup_dataloader_from_config(self, cfg, shuffle_should_be: bool = True, name: str = "train"):
if "dataset" not in cfg or not isinstance(cfg.dataset, DictConfig):
raise ValueError(f"No dataset for {name}")
if "dataloader_params" not in cfg or not isinstance(cfg.dataloader_params, DictConfig):
raise ValueError(f"No dataloader_params for {name}")
if shuffle_should_be:
if 'shuffle' not in cfg.dataloader_params:
logging.warning(
f"Shuffle should be set to True for {self}'s {name} dataloader but was not found in its "
"config. Manually setting to True"
)
with open_dict(cfg.dataloader_params):
cfg.dataloader_params.shuffle = True
elif not cfg.dataloader_params.shuffle:
logging.error(f"The {name} dataloader for {self} has shuffle set to False!!!")
elif not shuffle_should_be and cfg.dataloader_params.shuffle:
logging.error(f"The {name} dataloader for {self} has shuffle set to True!!!")
dataset = instantiate(
cfg.dataset,
text_normalizer=self.normalizer,
text_normalizer_call_kwargs=self.text_normalizer_call_kwargs,
text_tokenizer=self.tokenizer,
)
return torch.utils.data.DataLoader(dataset, collate_fn=dataset.collate_fn, **cfg.dataloader_params)
def setup_training_data(self, cfg):
self._train_dl = self.__setup_dataloader_from_config(cfg)
def setup_validation_data(self, cfg):
self._validation_dl = self.__setup_dataloader_from_config(cfg, shuffle_should_be=False, name="validation")
@classmethod
def list_available_models(cls) -> 'List[PretrainedModelInfo]':
"""
This method returns a list of pre-trained model which can be instantiated directly from NVIDIA's NGC cloud.
Returns:
List of available pre-trained models.
"""
list_of_models = []
model = PretrainedModelInfo(
pretrained_model_name="tts_en_tacotron2",
location="https://api.ngc.nvidia.com/v2/models/nvidia/nemo/tts_en_tacotron2/versions/1.10.0/files/tts_en_tacotron2.nemo",
description="This model is trained on LJSpeech sampled at 22050Hz, and can be used to generate female English voices with an American accent.",
class_=cls,
aliases=["Tacotron2-22050Hz"],
)
list_of_models.append(model)
return list_of_models
[end of nemo/collections/tts/models/tacotron2.py]
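The legacy-parser path in `Tacotron2Model.parse` wraps the token sequence with BOS/EOS ids manually, reserving the two ids just past the label set (which is why `num_tokens` is `len(cfg.labels) + 3` in `__init__`). A minimal sketch of that wrapping (the function name is illustrative; the id choice follows the code above):

```python
def wrap_with_bos_eos(tokens, n_labels):
    """Prepend BOS and append EOS ids for the legacy parser path.

    Tacotron2Model.parse reserves ids past the label set:
    BOS = n_labels, EOS = n_labels + 1.
    """
    return [n_labels] + list(tokens) + [n_labels + 1]


# With a 40-symbol label set, ids 40 and 41 mark sequence boundaries
wrapped = wrap_with_bos_eos([3, 7, 2], n_labels=40)  # [40, 3, 7, 2, 41]
```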
[start of nemo/core/config/modelPT.py]
# Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from dataclasses import dataclass, field
from typing import Any, Dict, Optional
from omegaconf import MISSING
from nemo.core import config
from nemo.core.classes.dataset import DatasetConfig
from nemo.utils import exp_manager
@dataclass
class SchedConfig:
name: str = MISSING
min_lr: float = 0.0
last_epoch: int = -1
@dataclass
class OptimConfig:
name: str = MISSING
sched: Optional[SchedConfig] = None
@dataclass
class ModelConfig:
"""
Model component inside ModelPT
"""
# ...
train_ds: Optional[DatasetConfig] = None
validation_ds: Optional[DatasetConfig] = None
test_ds: Optional[DatasetConfig] = None
optim: Optional[OptimConfig] = None
@dataclass
class HydraConfig:
run: Dict[str, Any] = field(default_factory=lambda: {"dir": "."})
job_logging: Dict[str, Any] = field(default_factory=lambda: {"root": {"handlers": None}})
@dataclass
class NemoConfig:
name: str = MISSING
model: ModelConfig = MISSING
trainer: config.TrainerConfig = config.TrainerConfig(
strategy="ddp", enable_checkpointing=False, logger=False, log_every_n_steps=1, accelerator='gpu'
)
exp_manager: Optional[Any] = exp_manager.ExpManagerConfig()
hydra: HydraConfig = HydraConfig()
class ModelConfigBuilder:
def __init__(self, model_cfg: ModelConfig):
"""
Base class for any Model Config Builder.
A Model Config Builder is a utility class that accepts a ModelConfig dataclass,
and via a set of utility methods (that are implemented by the subclassed ModelConfigBuilder),
builds a finalized ModelConfig that can be supplied to a NemoModel dataclass as
the `model` component.
Subclasses *must* implement the private method `_finalize_cfg`.
Inside this method, they must update `self.model_cfg` with all interdependent config
options that need to be set (either updated by user explicitly or with their default value).
The updated model config must then be preserved in `self.model_cfg`.
Example:
# Create the config builder
config_builder = <subclass>ModelConfigBuilder()
# Update the components of the config that are modifiable
config_builder.set_X(X)
config_builder.set_Y(Y)
# Create a "finalized" config dataclass that will contain all the updates
# that were specified by the builder
model_config = config_builder.build()
# Use model config as is (or further update values), then create a new Model
model = nemo.<domain>.models.<ModelName>Model(cfg=model_config, trainer=Trainer())
Supported build methods:
- set_train_ds: All model configs can accept a subclass of `DatasetConfig` as their
training config. Subclasses can override this method to enable auto-complete
by replacing `Optional[DatasetConfig]` with `Optional[<subclass of DatasetConfig>]`.
- set_validation_ds: All model configs can accept a subclass of `DatasetConfig` as their
validation config. Subclasses can override this method to enable auto-complete
by replacing `Optional[DatasetConfig]` with `Optional[<subclass of DatasetConfig>]`.
- set_test_ds: All model configs can accept a subclass of `DatasetConfig` as their
test config. Subclasses can override this method to enable auto-complete
by replacing `Optional[DatasetConfig]` with `Optional[<subclass of DatasetConfig>]`.
- set_optim: A build method that supports changes to the Optimizer (and optionally,
the Scheduler) used for training the model. The function accepts two inputs -
`cfg`: A subclass of `OptimizerParams` - any OptimizerParams subclass can be used,
in order to select an appropriate Optimizer. Examples: AdamParams.
`sched_cfg`: A subclass of `SchedulerParams` - any SchedulerParams subclass can be used,
in order to select an appropriate Scheduler. Examples: CosineAnnealingParams.
Note that this argument is optional.
- build(): The method which should return a "finalized" ModelConfig dataclass.
Subclasses *should* always override this method, and update the signature
of this method with the return type of the Dataclass, so that it enables
autocomplete for the user.
Example:
def build(self) -> EncDecCTCConfig:
return super().build()
Any additional build methods must be added by subclasses of ModelConfigBuilder.
Args:
model_cfg:
"""
self.model_cfg = model_cfg
self.train_ds_cfg = None
self.validation_ds_cfg = None
self.test_ds_cfg = None
self.optim_cfg = None
def set_train_ds(self, cfg: Optional[DatasetConfig] = None):
self.model_cfg.train_ds = cfg
def set_validation_ds(self, cfg: Optional[DatasetConfig] = None):
self.model_cfg.validation_ds = cfg
def set_test_ds(self, cfg: Optional[DatasetConfig] = None):
self.model_cfg.test_ds = cfg
def set_optim(self, cfg: config.OptimizerParams, sched_cfg: Optional[config.SchedulerParams] = None):
@dataclass
class WrappedOptimConfig(OptimConfig, cfg.__class__):
pass
# Setup optim
optim_name = cfg.__class__.__name__.replace("Params", "").lower()
wrapped_cfg = WrappedOptimConfig(name=optim_name, sched=None, **vars(cfg))
if sched_cfg is not None:
@dataclass
class WrappedSchedConfig(SchedConfig, sched_cfg.__class__):
pass
# Setup scheduler
sched_name = sched_cfg.__class__.__name__.replace("Params", "")
wrapped_sched_cfg = WrappedSchedConfig(name=sched_name, **vars(sched_cfg))
wrapped_cfg.sched = wrapped_sched_cfg
self.model_cfg.optim = wrapped_cfg
def _finalize_cfg(self):
raise NotImplementedError()
def build(self) -> ModelConfig:
# validate config
self._finalize_cfg()
return self.model_cfg
[end of nemo/core/config/modelPT.py]
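The builder workflow described in the `ModelConfigBuilder` docstring (set components, then `build()` finalizes interdependent defaults) can be sketched with a toy config; all `Toy*` names here are illustrative stand-ins, not NeMo classes:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ToyDatasetConfig:
    batch_size: int = 32


@dataclass
class ToyModelConfig:
    train_ds: Optional[ToyDatasetConfig] = None
    validation_ds: Optional[ToyDatasetConfig] = None


class ToyModelConfigBuilder:
    """Mimics the set_X(...) / build() pattern of ModelConfigBuilder."""

    def __init__(self, model_cfg: ToyModelConfig):
        self.model_cfg = model_cfg

    def set_train_ds(self, cfg: Optional[ToyDatasetConfig] = None):
        self.model_cfg.train_ds = cfg

    def _finalize_cfg(self):
        # Subclasses resolve interdependent defaults here before build() returns.
        if self.model_cfg.validation_ds is None:
            self.model_cfg.validation_ds = ToyDatasetConfig()

    def build(self) -> ToyModelConfig:
        self._finalize_cfg()
        return self.model_cfg


builder = ToyModelConfigBuilder(ToyModelConfig())
builder.set_train_ds(ToyDatasetConfig(batch_size=16))
cfg = builder.build()
```

The finalized `cfg` would then be handed to a model constructor as its `model` component, exactly as the docstring's example does with `nemo.<domain>.models.<ModelName>Model(cfg=model_config, ...)`.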
[start of nemo/utils/exp_manager.py]
# Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import glob
import os
import subprocess
import sys
import time
import warnings
from dataclasses import dataclass
from datetime import timedelta
from pathlib import Path
from shutil import copy, move
from typing import Any, Dict, List, Optional, Tuple, Union
import pytorch_lightning
import torch
from hydra.core.hydra_config import HydraConfig
from hydra.utils import get_original_cwd
from omegaconf import DictConfig, OmegaConf, open_dict
from pytorch_lightning.callbacks import Callback, ModelCheckpoint
from pytorch_lightning.callbacks.early_stopping import EarlyStopping
from pytorch_lightning.callbacks.timer import Interval, Timer
from pytorch_lightning.loggers import MLFlowLogger, TensorBoardLogger, WandbLogger
from pytorch_lightning.loops import _TrainingEpochLoop
from pytorch_lightning.strategies.ddp import DDPStrategy
from nemo.collections.common.callbacks import EMA
from nemo.constants import NEMO_ENV_VARNAME_TESTING, NEMO_ENV_VARNAME_VERSION
from nemo.utils import logging, timers
from nemo.utils.app_state import AppState
from nemo.utils.callbacks import NeMoModelCheckpoint, PreemptionCallback
from nemo.utils.env_var_parsing import get_envbool
from nemo.utils.exceptions import NeMoBaseException
from nemo.utils.get_rank import is_global_rank_zero
from nemo.utils.lightning_logger_patch import add_filehandlers_to_pl_logger
from nemo.utils.loggers import ClearMLLogger, ClearMLParams, DLLogger, DLLoggerParams, MLFlowParams
from nemo.utils.model_utils import uninject_model_parallel_rank
class NotFoundError(NeMoBaseException):
""" Raised when a file or folder is not found"""
class LoggerMisconfigurationError(NeMoBaseException):
""" Raised when a mismatch between trainer.logger and exp_manager occurs"""
def __init__(self, message):
message = (
message
+ " You can disable lightning's trainer from creating a logger by passing logger=False to its constructor."
)
super().__init__(message)
class CheckpointMisconfigurationError(NeMoBaseException):
""" Raised when a mismatch between trainer.callbacks and exp_manager occurs"""
@dataclass
class EarlyStoppingParams:
monitor: str = "val_loss" # The metric that early stopping should consider.
mode: str = "min" # inform early stopping whether to look for increase or decrease in monitor.
min_delta: float = 0.001 # smallest change to consider as improvement.
patience: int = 10 # how many consecutive validation cycles to wait with no improvement before stopping training.
verbose: bool = True
strict: bool = True
check_finite: bool = True
stopping_threshold: Optional[float] = None
divergence_threshold: Optional[float] = None
check_on_train_epoch_end: Optional[bool] = None
log_rank_zero_only: bool = False
@dataclass
class CallbackParams:
filepath: Optional[str] = None # Deprecated
dirpath: Optional[str] = None # If None, exp_manager will attempt to handle the filepath
filename: Optional[str] = None # If None, exp_manager will attempt to handle the filepath
monitor: Optional[str] = "val_loss"
verbose: Optional[bool] = True
save_last: Optional[bool] = True
save_top_k: Optional[int] = 3
save_weights_only: Optional[bool] = False
mode: Optional[str] = "min"
auto_insert_metric_name: bool = True
every_n_epochs: Optional[int] = 1
every_n_train_steps: Optional[int] = None
train_time_interval: Optional[str] = None
prefix: Optional[str] = None # If None, exp_manager will attempt to handle the filepath
postfix: str = ".nemo"
save_best_model: bool = False
always_save_nemo: bool = False
save_nemo_on_train_end: Optional[bool] = True # Whether to automatically save .nemo file during the on_train_end hook
model_parallel_size: Optional[int] = None # tensor parallel size * pipeline parallel size
save_on_train_epoch_end: Optional[bool] = False # Save after training, not after validation
@dataclass
class StepTimingParams:
reduction: Optional[str] = "mean"
# if True torch.cuda.synchronize() is called on start/stop
sync_cuda: Optional[bool] = False
# if positive, defines the size of a sliding window for computing mean
buffer_size: Optional[int] = 1
@dataclass
class EMAParams:
enable: Optional[bool] = False
decay: Optional[float] = 0.999
cpu_offload: Optional[bool] = False
validate_original_weights: Optional[bool] = False
every_n_steps: int = 1
@dataclass
class ExpManagerConfig:
"""Experiment Manager config for validation of passed arguments.
"""
# Log dir creation parameters
explicit_log_dir: Optional[str] = None
exp_dir: Optional[str] = None
name: Optional[str] = None
version: Optional[str] = None
use_datetime_version: Optional[bool] = True
resume_if_exists: Optional[bool] = False
resume_past_end: Optional[bool] = False
resume_ignore_no_checkpoint: Optional[bool] = False
resume_from_checkpoint: Optional[str] = None
# Logging parameters
create_tensorboard_logger: Optional[bool] = True
summary_writer_kwargs: Optional[Dict[Any, Any]] = None
create_wandb_logger: Optional[bool] = False
wandb_logger_kwargs: Optional[Dict[Any, Any]] = None
create_mlflow_logger: Optional[bool] = False
mlflow_logger_kwargs: Optional[MLFlowParams] = MLFlowParams()
create_dllogger_logger: Optional[bool] = False
dllogger_logger_kwargs: Optional[DLLoggerParams] = DLLoggerParams()
create_clearml_logger: Optional[bool] = False
clearml_logger_kwargs: Optional[ClearMLParams] = ClearMLParams()
# Checkpointing parameters
create_checkpoint_callback: Optional[bool] = True
checkpoint_callback_params: Optional[CallbackParams] = CallbackParams()
create_early_stopping_callback: Optional[bool] = False
early_stopping_callback_params: Optional[EarlyStoppingParams] = EarlyStoppingParams()
create_preemption_callback: Optional[bool] = True
# Additional exp_manager arguments
files_to_copy: Optional[List[str]] = None
# logs timing of train/val/test steps
log_step_timing: Optional[bool] = True
step_timing_kwargs: Optional[StepTimingParams] = StepTimingParams()
# Configures creation of log files for different ranks
log_local_rank_0_only: Optional[bool] = False
log_global_rank_0_only: Optional[bool] = False
# disable initial validation when resuming from a checkpoint saved during validation
disable_validation_on_resume: Optional[bool] = True
ema: Optional[EMAParams] = EMAParams()
# Wall clock time limit
max_time_per_run: Optional[str] = None
# time to sleep non 0 ranks during initialization
seconds_to_sleep: float = 5
class TimingCallback(Callback):
"""
Logs execution time of train/val/test steps
"""
def __init__(self, timer_kwargs={}):
self.timer = timers.NamedTimer(**timer_kwargs)
def _on_batch_start(self, name):
# reset only if we do not return mean of a sliding window
if self.timer.buffer_size <= 0:
self.timer.reset(name)
self.timer.start(name)
def _on_batch_end(self, name, pl_module):
self.timer.stop(name)
# Set `batch_size=1` as a workaround for `dataloader_iter`; the value is not used for any metric
pl_module.log(
name + ' in s',
self.timer[name],
on_step=True,
on_epoch=False,
batch_size=1,
prog_bar=(name == "train_step_timing"),
)
def on_train_batch_start(self, trainer, pl_module, batch, batch_idx):
self._on_batch_start("train_step_timing")
def on_train_batch_end(self, trainer, pl_module, outputs, batch, batch_idx):
self._on_batch_end("train_step_timing", pl_module)
def on_validation_batch_start(self, trainer, pl_module, batch, batch_idx, dataloader_idx=0):
self._on_batch_start("validation_step_timing")
def on_validation_batch_end(self, trainer, pl_module, outputs, batch, batch_idx, dataloader_idx=0):
self._on_batch_end("validation_step_timing", pl_module)
def on_test_batch_start(self, trainer, pl_module, batch, batch_idx, dataloader_idx=0):
self._on_batch_start("test_step_timing")
def on_test_batch_end(self, trainer, pl_module, outputs, batch, batch_idx, dataloader_idx=0):
self._on_batch_end("test_step_timing", pl_module)
def on_before_backward(self, trainer, pl_module, loss):
self._on_batch_start("train_backward_timing")
def on_after_backward(self, trainer, pl_module):
self._on_batch_end("train_backward_timing", pl_module)
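TimingCallback delegates to `nemo.utils.timers.NamedTimer`. A minimal, hedged sketch of the behaviour the callback relies on, mirroring only the `buffer_size` sliding-window mean from StepTimingParams (class and method names here are illustrative):

```python
# Illustrative mini version of a named timer with a sliding-window mean.
import time
from collections import deque

class MiniNamedTimer:
    def __init__(self, buffer_size=1):
        self.buffer_size = buffer_size
        self._starts = {}
        self._buffers = {}

    def start(self, name):
        self._starts[name] = time.perf_counter()

    def stop(self, name):
        dt = time.perf_counter() - self._starts.pop(name)
        # buffer_size <= 0 means "unbounded"; positive means sliding window
        buf = self._buffers.setdefault(name, deque(maxlen=self.buffer_size or None))
        buf.append(dt)

    def __getitem__(self, name):
        buf = self._buffers[name]
        return sum(buf) / len(buf)  # mean over the sliding window

timer = MiniNamedTimer(buffer_size=2)
timer.start("step"); timer.stop("step")
mean = timer["step"]
```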
def exp_manager(trainer: 'pytorch_lightning.Trainer', cfg: Optional[Union[DictConfig, Dict]] = None) -> Optional[Path]:
"""
exp_manager is a helper function used to manage folders for experiments. It follows the pytorch lightning paradigm
of exp_dir/model_or_experiment_name/version. If the lightning trainer has a logger, exp_manager will get exp_dir,
name, and version from the logger. Otherwise it will use the exp_dir and name arguments to create the logging
directory. exp_manager also allows for explicit folder creation via explicit_log_dir.
The version can be a datetime string or an integer. The datetime version can be disabled if use_datetime_version is set
to False. It optionally creates TensorBoardLogger, WandBLogger, DLLogger, MLFlowLogger, ClearMLLogger,
ModelCheckpoint objects from pytorch lightning.
It copies sys.argv, and git information if available to the logging directory. It creates a log file for each
process to log their output into.
exp_manager additionally has a resume feature (resume_if_exists) which can be used to continue training from
the constructed log_dir. When you need to continue training repeatedly (such as on a cluster where you schedule
multiple consecutive jobs), you need to avoid creating version folders. Therefore, from v1.0.0, when
resume_if_exists is set to True, creating version folders is skipped.
Args:
trainer (pytorch_lightning.Trainer): The lightning trainer.
cfg (DictConfig, dict): Can have the following keys:
- explicit_log_dir (str, Path): Can be used to override exp_dir/name/version folder creation. Defaults to
None, which will use exp_dir, name, and version to construct the logging directory.
- exp_dir (str, Path): The base directory to create the logging directory. Defaults to None, which logs to
./nemo_experiments.
- name (str): The name of the experiment. Defaults to None which turns into "default" via name = name or
"default".
- version (str): The version of the experiment. Defaults to None which uses either a datetime string or
lightning's TensorboardLogger system of using version_{int}.
- use_datetime_version (bool): Whether to use a datetime string for version. Defaults to True.
- resume_if_exists (bool): Whether this experiment is resuming from a previous run. If True, it sets
trainer.ckpt_path so that the trainer auto-resumes. exp_manager will move files
under log_dir to log_dir/run_{int}. Defaults to False. From v1.0.0, when resume_if_exists is True,
version folders are not created, to make it easier to find the log folder for subsequent runs.
- resume_past_end (bool): exp_manager errors out if resume_if_exists is True and a checkpoint matching
``*end.ckpt`` (indicating a previous training run fully completed) exists. This behaviour can be disabled by
setting resume_past_end to True, in which case the ``*end.ckpt`` will be loaded. Defaults to False.
- resume_ignore_no_checkpoint (bool): exp_manager errors out if resume_if_exists is True and no checkpoint
could be found. This behaviour can be disabled, in which case exp_manager will print a message and
continue without restoring, by setting resume_ignore_no_checkpoint to True. Defaults to False.
- resume_from_checkpoint (str): Can be used to specify a path to a specific checkpoint file to load from. This will
override any checkpoint found when resume_if_exists is True. Defaults to None.
- create_tensorboard_logger (bool): Whether to create a tensorboard logger and attach it to the pytorch
lightning trainer. Defaults to True.
- summary_writer_kwargs (dict): A dictionary of kwargs that can be passed to lightning's TensorboardLogger
class. Note that log_dir is passed by exp_manager and cannot exist in this dict. Defaults to None.
- create_wandb_logger (bool): Whether to create a Weights and Biases logger and attach it to the pytorch
lightning trainer. Defaults to False.
- wandb_logger_kwargs (dict): A dictionary of kwargs that can be passed to lightning's WandBLogger
class. Note that name and project are required parameters if create_wandb_logger is True.
Defaults to None.
- create_mlflow_logger (bool): Whether to create an MLFlow logger and attach it to the pytorch lightning
trainer. Defaults to False.
- mlflow_logger_kwargs (dict): optional parameters for the MLFlow logger
- create_dllogger_logger (bool): Whether to create a DLLogger logger and attach it to the pytorch lightning
trainer. Defaults to False.
- dllogger_logger_kwargs (dict): optional parameters for the DLLogger logger
- create_clearml_logger (bool): Whether to create a ClearML logger and attach it to the pytorch lightning
trainer. Defaults to False.
- clearml_logger_kwargs (dict): optional parameters for the ClearML logger
- create_checkpoint_callback (bool): Whether to create a ModelCheckpoint callback and attach it to the
pytorch lightning trainer. The ModelCheckpoint saves the top 3 models with the best "val_loss", the most
recent checkpoint under ``*last.ckpt``, and the final checkpoint after training completes under ``*end.ckpt``.
Defaults to True.
- create_early_stopping_callback (bool): Flag to decide if early stopping should be used to stop training. Default is False.
See EarlyStoppingParams dataclass above.
- create_preemption_callback (bool): Flag to decide whether to enable preemption callback to save checkpoints and exit training
immediately upon preemption. Default is True.
- files_to_copy (list): A list of files to copy to the experiment logging directory. Defaults to None which
copies no files.
- log_local_rank_0_only (bool): Whether to only create log files for local rank 0. Defaults to False.
Set this to True if you are using DDP with many GPUs and do not want many log files in your exp dir.
- log_global_rank_0_only (bool): Whether to only create log files for global rank 0. Defaults to False.
Set this to True if you are using DDP with many GPUs and do not want many log files in your exp dir.
- max_time_per_run (str): The maximum wall clock time *per run*. This is intended to be used on clusters where you want
a checkpoint to be saved after this specified time and be able to resume from that checkpoint. Defaults to None.
- seconds_to_sleep (float): Seconds that non-rank-0 processes sleep for during initialization. Used to give rank 0 enough time to initialize.
returns:
log_dir (Path): The final logging directory where logging files are saved. Usually the concatenation of
exp_dir, name, and version.
"""
# Add rank information to logger
# Note: trainer.global_rank and trainer.is_global_zero are not set until trainer.fit, so have to hack around it
local_rank = int(os.environ.get("LOCAL_RANK", 0))
global_rank = trainer.node_rank * trainer.num_devices + local_rank
logging.rank = global_rank
if cfg is None:
logging.error("exp_manager did not receive a cfg argument. It will be disabled.")
return
if trainer.fast_dev_run:
logging.info("Trainer was called with fast_dev_run. exp_manager will return without any functionality.")
return
# Ensure passed cfg is compliant with ExpManagerConfig
schema = OmegaConf.structured(ExpManagerConfig)
if isinstance(cfg, dict):
cfg = OmegaConf.create(cfg)
elif not isinstance(cfg, DictConfig):
raise ValueError(f"cfg was type: {type(cfg)}. Expected either a dict or a DictConfig")
cfg = OmegaConf.create(OmegaConf.to_container(cfg, resolve=True))
cfg = OmegaConf.merge(schema, cfg)
error_checks(trainer, cfg) # Ensures that trainer options are compliant with NeMo and exp_manager arguments
log_dir, exp_dir, name, version = get_log_dir(
trainer=trainer,
exp_dir=cfg.exp_dir,
name=cfg.name,
version=cfg.version,
explicit_log_dir=cfg.explicit_log_dir,
use_datetime_version=cfg.use_datetime_version,
resume_if_exists=cfg.resume_if_exists,
)
check_resume(
trainer,
log_dir,
cfg.resume_if_exists,
cfg.resume_past_end,
cfg.resume_ignore_no_checkpoint,
cfg.checkpoint_callback_params.dirpath,
cfg.resume_from_checkpoint,
)
checkpoint_name = name
# If name returned from get_log_dir is "", use cfg.name for checkpointing
if checkpoint_name is None or checkpoint_name == '':
checkpoint_name = cfg.name or "default"
# Set mlflow name if it's not set, before the main name is erased
if cfg.create_mlflow_logger and (not cfg.mlflow_logger_kwargs.get("experiment_name", None)):
cfg.mlflow_logger_kwargs.experiment_name = cfg.name
logging.warning(
'mlflow logger specified but no experiment name set. Using the same as Tensorboard: %s',
cfg.mlflow_logger_kwargs.experiment_name,
)
cfg.name = name # Used for configure_loggers so that the log_dir is properly set even if name is ""
cfg.version = version
# update app_state with log_dir, exp_dir, etc
app_state = AppState()
app_state.log_dir = log_dir
app_state.exp_dir = exp_dir
app_state.name = name
app_state.version = version
app_state.checkpoint_name = checkpoint_name
app_state.create_checkpoint_callback = cfg.create_checkpoint_callback
app_state.checkpoint_callback_params = cfg.checkpoint_callback_params
# Create the logging directory if it does not exist
os.makedirs(log_dir, exist_ok=True) # Cannot limit creation to global zero as all ranks write to own log file
logging.info(f'Experiments will be logged at {log_dir}')
trainer._default_root_dir = log_dir
if cfg.log_local_rank_0_only is True and cfg.log_global_rank_0_only is True:
raise ValueError(
"Cannot set both log_local_rank_0_only and log_global_rank_0_only to True. Please set either one or neither."
)
# This is set if the env var NEMO_TESTING is set to True.
nemo_testing = get_envbool(NEMO_ENV_VARNAME_TESTING, False)
# Handle logging to file
log_file = log_dir / f'nemo_log_globalrank-{global_rank}_localrank-{local_rank}.txt'
if cfg.log_local_rank_0_only is True and not nemo_testing:
if local_rank == 0:
logging.add_file_handler(log_file)
elif cfg.log_global_rank_0_only is True and not nemo_testing:
if global_rank == 0:
logging.add_file_handler(log_file)
else:
# Logs on all ranks.
logging.add_file_handler(log_file)
# For some reason, LearningRateLogger requires trainer to have a logger. Safer to create logger on all ranks
# not just global rank 0.
if (
cfg.create_tensorboard_logger
or cfg.create_wandb_logger
or cfg.create_mlflow_logger
or cfg.create_dllogger_logger
or cfg.create_clearml_logger
):
configure_loggers(
trainer,
exp_dir,
log_dir,
cfg.name,
cfg.version,
cfg.checkpoint_callback_params,
cfg.create_tensorboard_logger,
cfg.summary_writer_kwargs,
cfg.create_wandb_logger,
cfg.wandb_logger_kwargs,
cfg.create_mlflow_logger,
cfg.mlflow_logger_kwargs,
cfg.create_dllogger_logger,
cfg.dllogger_logger_kwargs,
cfg.create_clearml_logger,
cfg.clearml_logger_kwargs,
)
# add loggers timing callbacks
if cfg.log_step_timing:
timing_callback = TimingCallback(timer_kwargs=cfg.step_timing_kwargs or {})
trainer.callbacks.insert(0, timing_callback)
if cfg.ema.enable:
ema_callback = EMA(
decay=cfg.ema.decay,
validate_original_weights=cfg.ema.validate_original_weights,
cpu_offload=cfg.ema.cpu_offload,
every_n_steps=cfg.ema.every_n_steps,
)
trainer.callbacks.append(ema_callback)
if cfg.create_early_stopping_callback:
early_stop_callback = EarlyStopping(**cfg.early_stopping_callback_params)
trainer.callbacks.append(early_stop_callback)
if cfg.create_checkpoint_callback:
configure_checkpointing(
trainer,
log_dir,
checkpoint_name,
cfg.resume_if_exists,
cfg.checkpoint_callback_params,
cfg.create_preemption_callback,
)
if cfg.disable_validation_on_resume:
# extend training loop to skip initial validation when resuming from checkpoint
configure_no_restart_validation_training_loop(trainer)
# Setup a stateless timer for use on clusters.
if cfg.max_time_per_run is not None:
found_ptl_timer = False
for idx, callback in enumerate(trainer.callbacks):
if isinstance(callback, Timer):
# NOTE: PTL does not expose a `trainer.max_time`. By the time we are in this function, PTL has already setup a timer if the user specifies `trainer.max_time` so best we can do is replace that.
# Working: If only `trainer.max_time` is set - it behaves as a normal PTL timer. If only `exp_manager.max_time_per_run` is set - it behaves as a StateLessTimer. If both are set, it also behaves as a StateLessTimer.
logging.warning(
'Found a PTL Timer callback, replacing it with a StatelessTimer callback. This will happen if you set trainer.max_time as well as exp_manager.max_time_per_run.'
)
trainer.callbacks[idx] = StatelessTimer(cfg.max_time_per_run)
found_ptl_timer = True
break
if not found_ptl_timer:
trainer.max_time = cfg.max_time_per_run
trainer.callbacks.append(StatelessTimer(cfg.max_time_per_run))
if is_global_rank_zero():
# Move files_to_copy to folder and add git information if present
if cfg.files_to_copy:
for _file in cfg.files_to_copy:
copy(Path(_file), log_dir)
# Create files for cmd args and git info
with open(log_dir / 'cmd-args.log', 'w', encoding='utf-8') as _file:
_file.write(" ".join(sys.argv))
# Try to get git hash
git_repo, git_hash = get_git_hash()
if git_repo:
with open(log_dir / 'git-info.log', 'w', encoding='utf-8') as _file:
_file.write(f'commit hash: {git_hash}')
_file.write(get_git_diff())
# Add err_file logging to global_rank zero
logging.add_err_file_handler(log_dir / 'nemo_error_log.txt')
# Add lightning file logging to global_rank zero
add_filehandlers_to_pl_logger(log_dir / 'lightning_logs.txt', log_dir / 'nemo_error_log.txt')
elif trainer.num_nodes * trainer.num_devices > 1:
# sleep other ranks so rank 0 can finish
# doing the initialization such as moving files
time.sleep(cfg.seconds_to_sleep)
return log_dir
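exp_manager validates the user config by merging it into the structured ExpManagerConfig schema via OmegaConf. A hedged, stdlib-only sketch of that merge pattern (OmegaConf additionally validates value types; `MiniConfig` and `merge` are illustrative names, not the real API):

```python
# Stdlib sketch of "schema defaults + user overrides, unknown keys rejected",
# which is roughly what OmegaConf.structured + OmegaConf.merge give exp_manager.
from dataclasses import dataclass, fields

@dataclass
class MiniConfig:
    exp_dir: str = "./nemo_experiments"
    name: str = "default"
    resume_if_exists: bool = False

def merge(schema_cls, user_cfg: dict):
    allowed = {f.name for f in fields(schema_cls)}
    unknown = set(user_cfg) - allowed
    if unknown:
        # OmegaConf raises on keys that are not part of the structured schema
        raise ValueError(f"Unknown config keys: {sorted(unknown)}")
    return schema_cls(**user_cfg)

cfg = merge(MiniConfig, {"name": "my_run", "resume_if_exists": True})
```

Keys absent from the user dict keep their schema defaults, which is why ExpManagerConfig can evolve without breaking existing YAML configs.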
def error_checks(trainer: 'pytorch_lightning.Trainer', cfg: Optional[Union[DictConfig, Dict]] = None):
"""
Checks that the passed trainer is compliant with NeMo and exp_manager's passed configuration. Checks that:
- Throws error when hydra has changed the working directory. This causes issues with lightning's DDP
- Throws error when trainer has loggers defined but create_tensorboard_logger or create_wandB_logger
or create_mlflow_logger or create_dllogger_logger is True
- Prints error messages when 1) run on multi-node and not Slurm, and 2) run on multi-gpu without DDP
"""
if HydraConfig.initialized() and get_original_cwd() != os.getcwd():
raise ValueError(
"Hydra changed the working directory. This interferes with ExpManager's functionality. Please pass "
"hydra.run.dir=. to your python script."
)
if trainer.logger is not None and (
cfg.create_tensorboard_logger or cfg.create_wandb_logger or cfg.create_mlflow_logger or cfg.create_dllogger_logger
):
raise LoggerMisconfigurationError(
"The pytorch lightning trainer that was passed to exp_manager contained a logger, and either "
f"create_tensorboard_logger: {cfg.create_tensorboard_logger} or create_wandb_logger: "
f"{cfg.create_wandb_logger} or create_mlflow_logger: {cfg.create_mlflow_logger} "
f"or create_dllogger_logger: {cfg.create_dllogger_logger} was set to True. "
"These can only be used if trainer does not already have a logger."
)
if trainer.num_nodes > 1 and not check_slurm(trainer):
logging.error(
"You are running multi-node training without SLURM handling the processes."
" Please note that this is not tested in NeMo and could result in errors."
)
if trainer.num_devices > 1 and not isinstance(trainer.strategy, DDPStrategy):
logging.error(
"You are running multi-gpu without DDP. Please note that this is not tested in NeMo and could result in "
"errors."
)
def check_resume(
trainer: 'pytorch_lightning.Trainer',
log_dir: str,
resume_if_exists: bool = False,
resume_past_end: bool = False,
resume_ignore_no_checkpoint: bool = False,
dirpath: str = None,
resume_from_checkpoint: str = None,
):
"""Checks that resume=True was used correctly with the arguments passed to exp_manager. Sets
trainer.ckpt_path as necessary.
Raises:
NotFoundError: If resume is True, resume_ignore_no_checkpoint is False, and checkpoints could not be found.
ValueError: If resume is True, and more than one checkpoint was found.
"""
if not log_dir:
raise ValueError(f"Resuming requires the log_dir {log_dir} to be passed to exp_manager")
checkpoint = None
if resume_from_checkpoint:
checkpoint = resume_from_checkpoint
if resume_if_exists:
# Use <log_dir>/checkpoints/ unless `dirpath` is set
checkpoint_dir = Path(dirpath) if dirpath else Path(Path(log_dir) / "checkpoints")
# when using distributed checkpointing, checkpoint_dir is a directory of directories
# we check for this here
dist_checkpoints = [d for d in list(checkpoint_dir.glob("*")) if d.is_dir()]
end_dist_checkpoints = [d for d in dist_checkpoints if d.match("*end")]
last_dist_checkpoints = [d for d in dist_checkpoints if d.match("*last")]
end_checkpoints = end_dist_checkpoints if end_dist_checkpoints else list(checkpoint_dir.rglob("*end.ckpt"))
last_checkpoints = last_dist_checkpoints if last_dist_checkpoints else list(checkpoint_dir.rglob("*last.ckpt"))
if not checkpoint_dir.exists() or (not len(end_checkpoints) > 0 and not len(last_checkpoints) > 0):
if resume_ignore_no_checkpoint:
warn = f"There were no checkpoints found in checkpoint_dir or no checkpoint folder at checkpoint_dir: {checkpoint_dir}. "
if checkpoint is None:
warn += "Training from scratch."
elif checkpoint == resume_from_checkpoint:
warn += f"Training from {resume_from_checkpoint}."
logging.warning(warn)
else:
raise NotFoundError(
f"There were no checkpoints found in checkpoint_dir or no checkpoint folder at checkpoint_dir: {checkpoint_dir}. Cannot resume."
)
elif len(end_checkpoints) > 0:
if resume_past_end:
if len(end_checkpoints) > 1:
if 'mp_rank' in str(end_checkpoints[0]):
checkpoint = end_checkpoints[0]
else:
raise ValueError(f"Multiple checkpoints {end_checkpoints} that matches *end.ckpt.")
else:
raise ValueError(
f"Found {end_checkpoints[0]} indicating that the last training run has already completed."
)
elif len(last_checkpoints) > 1:
if 'mp_rank' in str(last_checkpoints[0]) or 'tp_rank' in str(last_checkpoints[0]):
checkpoint = last_checkpoints[0]
checkpoint = uninject_model_parallel_rank(checkpoint)
else:
raise ValueError(f"Multiple checkpoints {last_checkpoints} that matches *last.ckpt.")
else:
checkpoint = last_checkpoints[0]
# PTL 2.0 supports ckpt_path instead of resume_from_checkpoint as the trainer flag
if checkpoint is not None:
trainer.ckpt_path = str(checkpoint)
logging.info(f'Resuming training from checkpoint: {trainer.ckpt_path}')
if is_global_rank_zero():
# Check to see if any files exist that need to be moved
files_to_move = []
if Path(log_dir).exists():
for child in Path(log_dir).iterdir():
if child.is_file():
files_to_move.append(child)
if len(files_to_move) > 0:
# Move old files to a new folder
other_run_dirs = Path(log_dir).glob("run_*")
run_count = 0
for fold in other_run_dirs:
if fold.is_dir():
run_count += 1
new_run_dir = Path(Path(log_dir) / f"run_{run_count}")
new_run_dir.mkdir()
for _file in files_to_move:
move(str(_file), str(new_run_dir))
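The tail of check_resume rotates loose files in log_dir into a fresh `run_{N}` folder, where N counts the run folders already present. A hedged, self-contained sketch of that rotation (`rotate_run_dir` is an illustrative name):

```python
# Sketch of the run-folder rotation: loose files move into run_{N}.
import tempfile
from pathlib import Path
from shutil import move

def rotate_run_dir(log_dir: Path):
    files = [c for c in log_dir.iterdir() if c.is_file()]
    if not files:
        return None
    # N = number of existing run_* directories
    run_count = sum(1 for d in log_dir.glob("run_*") if d.is_dir())
    new_run = log_dir / f"run_{run_count}"
    new_run.mkdir()
    for f in files:
        move(str(f), str(new_run))
    return new_run

tmp = Path(tempfile.mkdtemp())
(tmp / "cmd-args.log").write_text("old run")
new_run = rotate_run_dir(tmp)
```

Each resumed run therefore preserves the previous run's logs instead of overwriting them.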
def check_explicit_log_dir(
trainer: 'pytorch_lightning.Trainer', explicit_log_dir: Union[Path, str], exp_dir: str, name: str, version: str
) -> Tuple[Path, str, str, str]:
""" Checks that the passed arguments are compatible with explicit_log_dir.
Returns:
log_dir (Path): the log_dir
exp_dir (str): the base exp_dir without name or version
name (str): The name of the experiment
version (str): The version of the experiment
Raise:
LoggerMisconfigurationError
"""
if trainer.logger is not None:
raise LoggerMisconfigurationError(
"The pytorch lightning trainer that was passed to exp_manager contained a logger and explicit_log_dir: "
f"{explicit_log_dir} was passed to exp_manager. Please remove the logger from the lightning trainer."
)
# Checking only (explicit_log_dir) vs (exp_dir and version).
# The `name` will be used as the actual name of checkpoint/archive.
if exp_dir or version:
logging.error(
f"exp_manager received explicit_log_dir: {explicit_log_dir} and at least one of exp_dir: {exp_dir}, "
f"or version: {version}. Please note that exp_dir, name, and version will be ignored."
)
if is_global_rank_zero() and Path(explicit_log_dir).exists():
logging.warning(f"Exp_manager is logging to {explicit_log_dir}, but it already exists.")
return Path(explicit_log_dir), str(explicit_log_dir), "", ""
def get_log_dir(
trainer: 'pytorch_lightning.Trainer',
exp_dir: str = None,
name: str = None,
version: str = None,
explicit_log_dir: str = None,
use_datetime_version: bool = True,
resume_if_exists: bool = False,
) -> Tuple[Path, str, str, str]:
"""
Obtains the log_dir used for exp_manager.
Args:
trainer (pytorch_lightning.Trainer): The lightning trainer.
exp_dir (str): The base directory for logging. Defaults to None.
name (str): The name of the experiment. Defaults to None.
version (str): The version of the experiment. Defaults to None.
explicit_log_dir (str): The explicit path to the log folder. Defaults to None.
use_datetime_version (bool): Uses date and time as the version of the log folder. Defaults to True.
resume_if_exists (bool): Whether resume_if_exists of the exp_manager's config is enabled. When enabled, the
version folders are not created.
Returns:
log_dir (Path): the log_dir
exp_dir (str): the base exp_dir without name or version
name (str): The name of the experiment
version (str): The version of the experiment
Raises:
LoggerMisconfigurationError: If trainer is incompatible with arguments.
"""
if explicit_log_dir: # If explicit log_dir was passed, short circuit
return check_explicit_log_dir(trainer, explicit_log_dir, exp_dir, name, version)
# Default exp_dir to ./nemo_experiments if None was passed
_exp_dir = exp_dir
if exp_dir is None:
_exp_dir = str(Path.cwd() / 'nemo_experiments')
# If the user has already defined a logger for the trainer, use the logger defaults for logging directory
if trainer.logger is not None:
if trainer.logger.save_dir:
if exp_dir:
raise LoggerMisconfigurationError(
"The pytorch lightning trainer that was passed to exp_manager contained a logger, the logger's "
f"save_dir was not None, and exp_dir ({exp_dir}) was not None. If trainer.logger.save_dir "
"exists, exp_manager will use trainer.logger.save_dir as the logging directory and exp_dir "
"must be None."
)
_exp_dir = trainer.logger.save_dir
if name:
raise LoggerMisconfigurationError(
"The pytorch lightning trainer that was passed to exp_manager contained a logger, and name: "
f"{name} was also passed to exp_manager. If the trainer contains a "
"logger, exp_manager will use trainer.logger.name, and name passed to exp_manager must be None."
)
name = trainer.logger.name
version = f"version_{trainer.logger.version}"
# Use user-defined exp_dir, project_name, exp_name, and versioning options
else:
name = name or "default"
version = version or os.environ.get(NEMO_ENV_VARNAME_VERSION, None)
if not version:
if resume_if_exists:
logging.warning(
"No version folders would be created under the log folder as 'resume_if_exists' is enabled."
)
version = None
elif is_global_rank_zero():
if use_datetime_version:
version = time.strftime('%Y-%m-%d_%H-%M-%S')
else:
tensorboard_logger = TensorBoardLogger(save_dir=Path(_exp_dir), name=name, version=version)
version = f"version_{tensorboard_logger.version}"
os.environ[NEMO_ENV_VARNAME_VERSION] = "" if version is None else version
log_dir = Path(_exp_dir) / Path(str(name)) / Path("" if version is None else str(version))
return log_dir, str(_exp_dir), name, version
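A hedged sketch of the directory layout get_log_dir ultimately produces, `<exp_dir>/<name>/<version>`, including the datetime-style version (`build_log_dir` is an illustrative helper, not NeMo's API):

```python
# Sketch of the <exp_dir>/<name>/<version> layout with datetime versioning.
import time
from pathlib import Path

def build_log_dir(exp_dir, name=None, version=None, use_datetime_version=True):
    name = name or "default"
    if version is None and use_datetime_version:
        version = time.strftime('%Y-%m-%d_%H-%M-%S')
    # An empty version segment is dropped by Path joining, as in the real code
    return Path(exp_dir) / name / (version or "")

log_dir = build_log_dir("/tmp/nemo_experiments", name="asr", version="v1")
default_dir = build_log_dir("/tmp/exp", use_datetime_version=False)
```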
def get_git_hash():
"""
Helper function that tries to get the commit hash if running inside a git folder
returns:
Bool: Whether the git subprocess ran without error
str: git subprocess output or error message
"""
try:
return (
True,
subprocess.check_output(['git', 'rev-parse', 'HEAD'], stderr=subprocess.STDOUT).decode(),
)
except subprocess.CalledProcessError as err:
return False, "{}\n".format(err.output.decode("utf-8"))
def get_git_diff():
"""
Helper function that tries to get the git diff if running inside a git folder
returns:
str: git subprocess output or error message
"""
try:
return subprocess.check_output(['git', 'diff'], stderr=subprocess.STDOUT).decode()
except subprocess.CalledProcessError as err:
return "{}\n".format(err.output.decode("utf-8"))
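Both git helpers follow the same capture-or-report pattern around `subprocess.check_output`. A hedged, generic sketch (`run_capture` is an illustrative name; like the originals it does not catch FileNotFoundError for a missing executable):

```python
# Capture stdout+stderr of a command; report success as a flag, not an exception.
import subprocess
import sys

def run_capture(cmd):
    try:
        return True, subprocess.check_output(cmd, stderr=subprocess.STDOUT).decode()
    except subprocess.CalledProcessError as err:
        # Non-zero exit: return the captured output as the error message
        return False, err.output.decode()

ok, out = run_capture([sys.executable, "-c", "print('hello')"])
bad, _err = run_capture([sys.executable, "-c", "raise SystemExit(1)"])
```

Routing stderr into stdout via `stderr=subprocess.STDOUT` is what lets the error branch return git's own diagnostic text.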
def configure_loggers(
trainer: 'pytorch_lightning.Trainer',
exp_dir: [Path, str],
log_dir: [Path, str],
name: str,
version: str,
checkpoint_callback_params: dict,
create_tensorboard_logger: bool,
summary_writer_kwargs: dict,
create_wandb_logger: bool,
wandb_kwargs: dict,
create_mlflow_logger: bool,
mlflow_kwargs: dict,
create_dllogger_logger: bool,
dllogger_kwargs: dict,
create_clearml_logger: bool,
clearml_kwargs: dict,
):
"""
Creates TensorboardLogger and/or WandBLogger / MLFlowLogger / DLLogger / ClearMLLogger and attaches them to trainer.
Raises ValueError if summary_writer_kwargs or wandb_kwargs are misconfigured.
"""
# Potentially create tensorboard logger and/or WandBLogger / MLFlowLogger / DLLogger
logger_list = []
if create_tensorboard_logger:
if summary_writer_kwargs is None:
summary_writer_kwargs = {}
elif "log_dir" in summary_writer_kwargs:
raise ValueError(
"You cannot pass `log_dir` as part of `summary_writer_kwargs`. `log_dir` is handled by lightning's "
"TensorBoardLogger logger."
)
tensorboard_logger = TensorBoardLogger(save_dir=exp_dir, name=name, version=version, **summary_writer_kwargs)
logger_list.append(tensorboard_logger)
logging.info("TensorboardLogger has been set up")
if create_wandb_logger:
if wandb_kwargs is None:
wandb_kwargs = {}
if "name" not in wandb_kwargs and "project" not in wandb_kwargs:
raise ValueError("name and project are required for wandb_logger")
# Update the wandb save_dir
if wandb_kwargs.get('save_dir', None) is None:
wandb_kwargs['save_dir'] = exp_dir
os.makedirs(wandb_kwargs['save_dir'], exist_ok=True)
wandb_logger = WandbLogger(version=version, **wandb_kwargs)
logger_list.append(wandb_logger)
logging.info("WandBLogger has been set up")
if create_mlflow_logger:
mlflow_logger = MLFlowLogger(run_name=version, **mlflow_kwargs)
logger_list.append(mlflow_logger)
logging.info("MLFlowLogger has been set up")
if create_dllogger_logger:
dllogger_logger = DLLogger(**dllogger_kwargs)
logger_list.append(dllogger_logger)
logging.info("DLLogger has been set up")
if create_clearml_logger:
clearml_logger = ClearMLLogger(
clearml_cfg=clearml_kwargs,
log_dir=log_dir,
prefix=name,
save_best_model=checkpoint_callback_params.save_best_model,
)
logger_list.append(clearml_logger)
logging.info("ClearMLLogger has been set up")
trainer._logger_connector.configure_logger(logger_list)
def configure_checkpointing(
trainer: 'pytorch_lightning.Trainer',
log_dir: Path,
name: str,
resume: bool,
params: 'DictConfig',
create_preemption_callback: bool,
):
""" Adds ModelCheckpoint to trainer. Raises CheckpointMisconfigurationError if trainer already has a ModelCheckpoint
callback
"""
for callback in trainer.callbacks:
if isinstance(callback, ModelCheckpoint):
raise CheckpointMisconfigurationError(
"The pytorch lightning trainer that was passed to exp_manager contained a ModelCheckpoint "
"and create_checkpoint_callback was set to True. Please either set create_checkpoint_callback "
"to False, or remove ModelCheckpoint from the lightning trainer"
)
# Create the callback and attach it to trainer
if "filepath" in params:
if params.filepath is not None:
logging.warning("filepath is deprecated. Please switch to dirpath and filename instead")
if params.dirpath is None:
params.dirpath = Path(params.filepath).parent
if params.filename is None:
params.filename = Path(params.filepath).name
with open_dict(params):
del params["filepath"]
if params.dirpath is None:
params.dirpath = Path(log_dir / 'checkpoints')
if params.filename is None:
params.filename = f'{name}--{{{params.monitor}:.4f}}-{{epoch}}'
if params.prefix is None:
params.prefix = name
NeMoModelCheckpoint.CHECKPOINT_NAME_LAST = params.filename + '-last'
logging.debug(params.dirpath)
logging.debug(params.filename)
logging.debug(params.prefix)
if "val" in params.monitor:
if (
trainer.max_epochs is not None
and trainer.max_epochs != -1
and trainer.max_epochs < trainer.check_val_every_n_epoch
):
logging.error(
"The checkpoint callback was told to monitor a validation value but trainer.max_epochs("
f"{trainer.max_epochs}) was less than trainer.check_val_every_n_epoch({trainer.check_val_every_n_epoch}"
f"). It is very likely this run will fail with ModelCheckpoint(monitor='{params.monitor}') not found "
"in the returned metrics. Please ensure that validation is run within trainer.max_epochs."
)
elif trainer.max_steps is not None and trainer.max_steps != -1:
logging.warning(
"The checkpoint callback was told to monitor a validation value and trainer's max_steps was set to "
f"{trainer.max_steps}. Please ensure that max_steps will run for at least "
f"{trainer.check_val_every_n_epoch} epochs to ensure that checkpointing will not error out."
)
checkpoint_callback = NeMoModelCheckpoint(n_resume=resume, **params)
checkpoint_callback.last_model_path = trainer.ckpt_path or ""
if 'mp_rank' in checkpoint_callback.last_model_path or 'tp_rank' in checkpoint_callback.last_model_path:
checkpoint_callback.last_model_path = uninject_model_parallel_rank(checkpoint_callback.last_model_path)
trainer.callbacks.append(checkpoint_callback)
if create_preemption_callback:
# Check if CUDA is available, as preemption is supported only on GPUs
if torch.cuda.is_available():
## By default PreemptionCallback handles SIGTERM. To handle other signals pass the signal in the call as below:
## PreemptionCallback(checkpoint_callback, signal.SIGCHLD)
preemption_callback = PreemptionCallback(checkpoint_callback)
trainer.callbacks.append(preemption_callback)
else:
logging.info("Preemption is supported only on GPUs, disabling preemption")
def check_slurm(trainer):
try:
return trainer.accelerator_connector.is_slurm_managing_tasks
except AttributeError:
return False
class StatelessTimer(Timer):
"""Extension of PTL timers to be per run."""
def __init__(self, duration: timedelta = None, interval: str = Interval.step, verbose: bool = True,) -> None:
super().__init__(duration, interval, verbose)
# Override PTL Timer's state dict to not store elapsed time information so that we can restore and continue training.
def state_dict(self) -> Dict[str, Any]:
return {}
def load_state_dict(self, state_dict: Dict[str, Any]) -> None:
return
def configure_no_restart_validation_training_loop(trainer: pytorch_lightning.Trainer) -> None:
if type(trainer.fit_loop.epoch_loop) != _TrainingEpochLoop:
warnings.warn("Detected custom epoch loop. Skipping no validation on restart support.", UserWarning)
return
## Pass trainer object to avoid trainer getting overwritten as None
loop = SkipResumeTrainingValidationLoop(trainer, trainer.min_steps, trainer.max_steps)
trainer.fit_loop.epoch_loop = loop
class SkipResumeTrainingValidationLoop(_TrainingEpochLoop):
"""
Extend the PTL Epoch loop to skip validating when resuming.
This happens when resuming a checkpoint that has already run validation, but loading restores
the training state before validation has run.
"""
def _should_check_val_fx(self) -> bool:
if self.restarting and self.global_step % self.trainer.val_check_batch == 0:
return False
return super()._should_check_val_fx()
def clean_exp_ckpt(exp_log_dir: Union[str, Path], remove_ckpt: bool = True, remove_nemo: bool = False):
"""
Helper method that removes Pytorch Lightning .ckpt files or NeMo .nemo files from the checkpoint directory
Args:
exp_log_dir: str path to the root directory of the current experiment.
remove_ckpt: bool, whether to remove all *.ckpt files in the checkpoints directory.
remove_nemo: bool, whether to remove all *.nemo files in the checkpoints directory.
"""
exp_log_dir = str(exp_log_dir)
if remove_ckpt:
logging.info("Deleting *.ckpt files ...")
ckpt_files = glob.glob(os.path.join(exp_log_dir, "checkpoints", "*.ckpt"))
for filepath in ckpt_files:
os.remove(filepath)
logging.info(f"Deleted file : {filepath}")
if remove_nemo:
logging.info("Deleting *.nemo files ...")
nemo_files = glob.glob(os.path.join(exp_log_dir, "checkpoints", "*.nemo"))
for filepath in nemo_files:
os.remove(filepath)
logging.info(f"Deleted file : {filepath}")
[end of nemo/utils/exp_manager.py]
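The deprecated-`filepath` handling in the checkpoint-callback code above splits a legacy checkpoint path into the `dirpath`/`filename` pair that newer ModelCheckpoint configs expect. A minimal sketch of that migration, using only `pathlib` (the helper name is hypothetical, not part of NeMo):

```python
from pathlib import Path

# Hypothetical helper mirroring the deprecated-`filepath` migration above:
# derive the (dirpath, filename) pair from a legacy checkpoint filepath.
def split_legacy_filepath(filepath: str):
    p = Path(filepath)
    return str(p.parent), p.name

dirpath, filename = split_legacy_filepath("results/exp1/checkpoints/model-{epoch}")
print(dirpath)   # results/exp1/checkpoints
print(filename)  # model-{epoch}
```

This matches the code path where `params.dirpath` falls back to `Path(params.filepath).parent` and `params.filename` to `Path(params.filepath).name`.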
[start of scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram.py]
# Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""
# This script would evaluate an N-gram language model trained with KenLM library (https://github.com/kpu/kenlm) in
# fusion with beam search decoders on top of a trained ASR model with CTC decoder. To evaluate a model with
# Transducer (RNN-T) decoder use another script 'scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram_transducer.py'.
# NeMo's beam search decoders are capable of using the KenLM's N-gram models
# to find the best candidates. This script supports both character level and BPE level
# encodings and models which is detected automatically from the type of the model.
# You may train the LM model with 'scripts/asr_language_modeling/ngram_lm/train_kenlm.py'.
# Config Help
To discover all arguments of the script, please run:
python eval_beamsearch_ngram.py --help
python eval_beamsearch_ngram.py --cfg job
# USAGE
python eval_beamsearch_ngram.py nemo_model_file=<path to the .nemo file of the model> \
input_manifest=<path to the evaluation JSON manifest file> \
kenlm_model_file=<path to the binary KenLM model> \
beam_width=[<list of the beam widths, separated with commas>] \
beam_alpha=[<list of the beam alphas, separated with commas>] \
beam_beta=[<list of the beam betas, separated with commas>] \
preds_output_folder=<optional folder to store the predictions> \
probs_cache_file=null \
decoding_mode=beamsearch_ngram
...
# Grid Search for Hyper parameters
For grid search, you can provide a list of arguments as follows -
beam_width=[4,8,16,....] \
beam_alpha=[-2.0,-1.0,...,1.0,2.0] \
beam_beta=[-1.0,-0.5,0.0,...,1.0] \
# You may find more info on how to use this script at:
# https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/asr_language_modeling.html
"""
import contextlib
import json
import os
import pickle
from dataclasses import dataclass, field, is_dataclass
from pathlib import Path
from typing import List, Optional
import editdistance
import numpy as np
import torch
from omegaconf import MISSING, OmegaConf
from sklearn.model_selection import ParameterGrid
from tqdm.auto import tqdm
import nemo.collections.asr as nemo_asr
from nemo.collections.asr.models import EncDecHybridRNNTCTCModel
from nemo.collections.asr.parts.submodules import ctc_beam_decoding
from nemo.collections.asr.parts.utils.transcribe_utils import PunctuationCapitalization, TextProcessingConfig
from nemo.core.config import hydra_runner
from nemo.utils import logging
# fmt: off
@dataclass
class EvalBeamSearchNGramConfig:
"""
Evaluate an ASR model with beam search decoding and n-gram KenLM language model.
"""
# # The path of the '.nemo' file of the ASR model or the name of a pretrained model (ngc / huggingface)
nemo_model_file: str = MISSING
# File paths
input_manifest: str = MISSING # The manifest file of the evaluation set
kenlm_model_file: Optional[str] = None # The path of the KenLM binary model file
preds_output_folder: Optional[str] = None # The optional folder where the predictions are stored
probs_cache_file: Optional[str] = None # The cache file for storing the logprobs of the model
# Parameters for inference
acoustic_batch_size: int = 16 # The batch size to calculate log probabilities
beam_batch_size: int = 128 # The batch size to be used for beam search decoding
device: str = "cuda" # The device to load the model onto to calculate log probabilities
use_amp: bool = False # Whether to use AMP if available to calculate log probabilities
# Beam Search hyperparameters
# The decoding scheme to be used for evaluation.
# Can be one of ["greedy", "beamsearch", "beamsearch_ngram"]
decoding_mode: str = "beamsearch_ngram"
beam_width: List[int] = field(default_factory=lambda: [128]) # The width or list of the widths for the beam search decoding
beam_alpha: List[float] = field(default_factory=lambda: [1.0]) # The alpha parameter or list of the alphas for the beam search decoding
beam_beta: List[float] = field(default_factory=lambda: [0.0]) # The beta parameter or list of the betas for the beam search decoding
decoding_strategy: str = "beam"
decoding: ctc_beam_decoding.BeamCTCInferConfig = ctc_beam_decoding.BeamCTCInferConfig(beam_size=128)
text_processing: Optional[TextProcessingConfig] = TextProcessingConfig(
punctuation_marks = ".,?",
separate_punctuation = False,
do_lowercase = False,
rm_punctuation = False,
)
# fmt: on
def beam_search_eval(
model: nemo_asr.models.ASRModel,
cfg: EvalBeamSearchNGramConfig,
all_probs: List[torch.Tensor],
target_transcripts: List[str],
preds_output_file: str = None,
lm_path: str = None,
beam_alpha: float = 1.0,
beam_beta: float = 0.0,
beam_width: int = 128,
beam_batch_size: int = 128,
progress_bar: bool = True,
punctuation_capitalization: PunctuationCapitalization = None,
):
level = logging.getEffectiveLevel()
logging.setLevel(logging.CRITICAL)
# Reset config
model.change_decoding_strategy(None)
# Override the beam search config with current search candidate configuration
cfg.decoding.beam_size = beam_width
cfg.decoding.beam_alpha = beam_alpha
cfg.decoding.beam_beta = beam_beta
cfg.decoding.return_best_hypothesis = False
cfg.decoding.kenlm_path = cfg.kenlm_model_file
# Update model's decoding strategy config
model.cfg.decoding.strategy = cfg.decoding_strategy
model.cfg.decoding.beam = cfg.decoding
# Update model's decoding strategy
if isinstance(model, EncDecHybridRNNTCTCModel):
model.change_decoding_strategy(model.cfg.decoding, decoder_type='ctc')
decoding = model.ctc_decoding
else:
model.change_decoding_strategy(model.cfg.decoding)
decoding = model.decoding
logging.setLevel(level)
wer_dist_first = cer_dist_first = 0
wer_dist_best = cer_dist_best = 0
words_count = 0
chars_count = 0
sample_idx = 0
if preds_output_file:
out_file = open(preds_output_file, 'w', encoding='utf_8', newline='\n')
if progress_bar:
it = tqdm(
range(int(np.ceil(len(all_probs) / beam_batch_size))),
desc=f"Beam search decoding with width={beam_width}, alpha={beam_alpha}, beta={beam_beta}",
ncols=120,
)
else:
it = range(int(np.ceil(len(all_probs) / beam_batch_size)))
for batch_idx in it:
# disabling type checking
probs_batch = all_probs[batch_idx * beam_batch_size : (batch_idx + 1) * beam_batch_size]
probs_lens = torch.tensor([prob.shape[0] for prob in probs_batch])
with torch.no_grad():
packed_batch = torch.zeros(len(probs_batch), max(probs_lens), probs_batch[0].shape[-1], device='cpu')
for prob_index in range(len(probs_batch)):
packed_batch[prob_index, : probs_lens[prob_index], :] = torch.tensor(
probs_batch[prob_index], device=packed_batch.device, dtype=packed_batch.dtype
)
_, beams_batch = decoding.ctc_decoder_predictions_tensor(
packed_batch, decoder_lengths=probs_lens, return_hypotheses=True,
)
for beams_idx, beams in enumerate(beams_batch):
target = target_transcripts[sample_idx + beams_idx]
target_split_w = target.split()
target_split_c = list(target)
words_count += len(target_split_w)
chars_count += len(target_split_c)
wer_dist_min = cer_dist_min = 10000
for candidate_idx, candidate in enumerate(beams): # type: (int, ctc_beam_decoding.rnnt_utils.Hypothesis)
pred_text = candidate.text
if cfg.text_processing.do_lowercase:
pred_text = punctuation_capitalization.do_lowercase([pred_text])[0]
if cfg.text_processing.rm_punctuation:
pred_text = punctuation_capitalization.rm_punctuation([pred_text])[0]
if cfg.text_processing.separate_punctuation:
pred_text = punctuation_capitalization.separate_punctuation([pred_text])[0]
pred_split_w = pred_text.split()
wer_dist = editdistance.eval(target_split_w, pred_split_w)
pred_split_c = list(pred_text)
cer_dist = editdistance.eval(target_split_c, pred_split_c)
wer_dist_min = min(wer_dist_min, wer_dist)
cer_dist_min = min(cer_dist_min, cer_dist)
if candidate_idx == 0:
# first candidate
wer_dist_first += wer_dist
cer_dist_first += cer_dist
score = candidate.score
if preds_output_file:
out_file.write('{}\t{}\n'.format(pred_text, score))
wer_dist_best += wer_dist_min
cer_dist_best += cer_dist_min
sample_idx += len(probs_batch)
if preds_output_file:
out_file.close()
logging.info(f"Stored the predictions of beam search decoding at '{preds_output_file}'.")
if lm_path:
logging.info(
'WER/CER with beam search decoding and N-gram model = {:.2%}/{:.2%}'.format(
wer_dist_first / words_count, cer_dist_first / chars_count
)
)
else:
logging.info(
'WER/CER with beam search decoding = {:.2%}/{:.2%}'.format(
wer_dist_first / words_count, cer_dist_first / chars_count
)
)
logging.info(
'Oracle WER/CER in candidates with perfect LM= {:.2%}/{:.2%}'.format(
wer_dist_best / words_count, cer_dist_best / chars_count
)
)
logging.info(f"=================================================================================")
return wer_dist_first / words_count, cer_dist_first / chars_count
@hydra_runner(config_path=None, config_name='EvalBeamSearchNGramConfig', schema=EvalBeamSearchNGramConfig)
def main(cfg: EvalBeamSearchNGramConfig):
logging.warning("This file will be renamed to eval_beamsearch_ngram_ctc.py in the future NeMo (1.21) release.")
if is_dataclass(cfg):
cfg = OmegaConf.structured(cfg) # type: EvalBeamSearchNGramConfig
valid_decoding_modes = ["greedy", "beamsearch", "beamsearch_ngram"]
if cfg.decoding_mode not in valid_decoding_modes:
raise ValueError(
f"Given decoding_mode={cfg.decoding_mode} is invalid. Available options are :\n" f"{valid_decoding_modes}"
)
if cfg.nemo_model_file.endswith('.nemo'):
asr_model = nemo_asr.models.ASRModel.restore_from(cfg.nemo_model_file, map_location=torch.device(cfg.device))
else:
logging.warning(
"nemo_model_file does not end with .nemo, therefore trying to load a pretrained model with this name."
)
asr_model = nemo_asr.models.ASRModel.from_pretrained(
cfg.nemo_model_file, map_location=torch.device(cfg.device)
)
target_transcripts = []
manifest_dir = Path(cfg.input_manifest).parent
with open(cfg.input_manifest, 'r', encoding='utf_8') as manifest_file:
audio_file_paths = []
for line in tqdm(manifest_file, desc=f"Reading Manifest {cfg.input_manifest} ...", ncols=120):
data = json.loads(line)
audio_file = Path(data['audio_filepath'])
if not audio_file.is_file() and not audio_file.is_absolute():
audio_file = manifest_dir / audio_file
target_transcripts.append(data['text'])
audio_file_paths.append(str(audio_file.absolute()))
punctuation_capitalization = PunctuationCapitalization(cfg.text_processing.punctuation_marks)
if cfg.text_processing.do_lowercase:
target_transcripts = punctuation_capitalization.do_lowercase(target_transcripts)
if cfg.text_processing.rm_punctuation:
target_transcripts = punctuation_capitalization.rm_punctuation(target_transcripts)
if cfg.text_processing.separate_punctuation:
target_transcripts = punctuation_capitalization.separate_punctuation(target_transcripts)
if cfg.probs_cache_file and os.path.exists(cfg.probs_cache_file):
logging.info(f"Found a pickle file of probabilities at '{cfg.probs_cache_file}'.")
logging.info(f"Loading the cached pickle file of probabilities from '{cfg.probs_cache_file}' ...")
with open(cfg.probs_cache_file, 'rb') as probs_file:
all_probs = pickle.load(probs_file)
if len(all_probs) != len(audio_file_paths):
raise ValueError(
f"The number of samples in the probabilities file '{cfg.probs_cache_file}' does not "
f"match the manifest file. You may need to delete the probabilities cached file."
)
else:
@contextlib.contextmanager
def default_autocast():
yield
if cfg.use_amp:
if torch.cuda.is_available() and hasattr(torch.cuda, 'amp') and hasattr(torch.cuda.amp, 'autocast'):
logging.info("AMP is enabled!\n")
autocast = torch.cuda.amp.autocast
else:
autocast = default_autocast
else:
autocast = default_autocast
with autocast():
with torch.no_grad():
if isinstance(asr_model, EncDecHybridRNNTCTCModel):
asr_model.cur_decoder = 'ctc'
all_logits = asr_model.transcribe(audio_file_paths, batch_size=cfg.acoustic_batch_size, logprobs=True)
all_probs = all_logits
if cfg.probs_cache_file:
logging.info(f"Writing pickle files of probabilities at '{cfg.probs_cache_file}'...")
with open(cfg.probs_cache_file, 'wb') as f_dump:
pickle.dump(all_probs, f_dump)
wer_dist_greedy = 0
cer_dist_greedy = 0
words_count = 0
chars_count = 0
for batch_idx, probs in enumerate(all_probs):
preds = np.argmax(probs, axis=1)
preds_tensor = torch.tensor(preds, device='cpu').unsqueeze(0)
if isinstance(asr_model, EncDecHybridRNNTCTCModel):
pred_text = asr_model.ctc_decoding.ctc_decoder_predictions_tensor(preds_tensor)[0][0]
else:
pred_text = asr_model._wer.decoding.ctc_decoder_predictions_tensor(preds_tensor)[0][0]
if cfg.text_processing.do_lowercase:
pred_text = punctuation_capitalization.do_lowercase([pred_text])[0]
if cfg.text_processing.rm_punctuation:
pred_text = punctuation_capitalization.rm_punctuation([pred_text])[0]
if cfg.text_processing.separate_punctuation:
pred_text = punctuation_capitalization.separate_punctuation([pred_text])[0]
pred_split_w = pred_text.split()
target_split_w = target_transcripts[batch_idx].split()
pred_split_c = list(pred_text)
target_split_c = list(target_transcripts[batch_idx])
wer_dist = editdistance.eval(target_split_w, pred_split_w)
cer_dist = editdistance.eval(target_split_c, pred_split_c)
wer_dist_greedy += wer_dist
cer_dist_greedy += cer_dist
words_count += len(target_split_w)
chars_count += len(target_split_c)
logging.info('Greedy WER/CER = {:.2%}/{:.2%}'.format(wer_dist_greedy / words_count, cer_dist_greedy / chars_count))
asr_model = asr_model.to('cpu')
if cfg.decoding_mode == "beamsearch_ngram":
if not os.path.exists(cfg.kenlm_model_file):
raise FileNotFoundError(f"Could not find the KenLM model file '{cfg.kenlm_model_file}'.")
lm_path = cfg.kenlm_model_file
else:
lm_path = None
# 'greedy' decoding_mode would skip the beam search decoding
if cfg.decoding_mode in ["beamsearch_ngram", "beamsearch"]:
if cfg.beam_width is None or cfg.beam_alpha is None or cfg.beam_beta is None:
raise ValueError("beam_width, beam_alpha and beam_beta are needed to perform beam search decoding.")
params = {'beam_width': cfg.beam_width, 'beam_alpha': cfg.beam_alpha, 'beam_beta': cfg.beam_beta}
hp_grid = ParameterGrid(params)
hp_grid = list(hp_grid)
best_wer_beam_size, best_cer_beam_size = None, None
best_wer_alpha, best_cer_alpha = None, None
best_wer_beta, best_cer_beta = None, None
best_wer, best_cer = 1e6, 1e6
logging.info(f"==============================Starting the beam search decoding===============================")
logging.info(f"Grid search size: {len(hp_grid)}")
logging.info(f"It may take some time...")
logging.info(f"==============================================================================================")
if cfg.preds_output_folder and not os.path.exists(cfg.preds_output_folder):
os.mkdir(cfg.preds_output_folder)
for hp in hp_grid:
if cfg.preds_output_folder:
preds_output_file = os.path.join(
cfg.preds_output_folder,
f"preds_out_width{hp['beam_width']}_alpha{hp['beam_alpha']}_beta{hp['beam_beta']}.tsv",
)
else:
preds_output_file = None
candidate_wer, candidate_cer = beam_search_eval(
asr_model,
cfg,
all_probs=all_probs,
target_transcripts=target_transcripts,
preds_output_file=preds_output_file,
lm_path=lm_path,
beam_width=hp["beam_width"],
beam_alpha=hp["beam_alpha"],
beam_beta=hp["beam_beta"],
beam_batch_size=cfg.beam_batch_size,
progress_bar=True,
punctuation_capitalization=punctuation_capitalization,
)
if candidate_cer < best_cer:
best_cer_beam_size = hp["beam_width"]
best_cer_alpha = hp["beam_alpha"]
best_cer_beta = hp["beam_beta"]
best_cer = candidate_cer
if candidate_wer < best_wer:
best_wer_beam_size = hp["beam_width"]
best_wer_alpha = hp["beam_alpha"]
best_wer_beta = hp["beam_beta"]
best_wer = candidate_wer
logging.info(
f'Best WER Candidate = {best_wer:.2%} :: Beam size = {best_wer_beam_size}, '
f'Beam alpha = {best_wer_alpha}, Beam beta = {best_wer_beta}'
)
logging.info(
f'Best CER Candidate = {best_cer:.2%} :: Beam size = {best_cer_beam_size}, '
f'Beam alpha = {best_cer_alpha}, Beam beta = {best_cer_beta}'
)
logging.info(f"=================================================================================")
if __name__ == '__main__':
main()
[end of scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram.py]
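The evaluation loops above accumulate corpus-level WER/CER as (sum of edit distances) over (total reference token count). A minimal pure-Python sketch of that accumulation, with a small Levenshtein function standing in for `editdistance.eval` (a simplification of the script's logic, not NeMo code):

```python
# Classic dynamic-programming edit distance over token sequences,
# standing in for editdistance.eval in the scripts above.
def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

# Corpus-level WER: total word edit distance / total reference words,
# mirroring how wer_dist_first and words_count are accumulated above.
def corpus_wer(references, hypotheses):
    dist = sum(levenshtein(r.split(), h.split()) for r, h in zip(references, hypotheses))
    words = sum(len(r.split()) for r in references)
    return dist / words

print(corpus_wer(["the cat sat"], ["the cat sat down"]))  # 0.3333333333333333
```

CER is computed the same way, with `list(text)` (character sequences) in place of `text.split()`.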
[start of scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram_transducer.py]
# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""
# This script would evaluate an N-gram language model trained with KenLM library (https://github.com/kpu/kenlm) in
# fusion with beam search decoders on top of a trained ASR Transducer model. NeMo's beam search decoders are capable of using the
# KenLM's N-gram models to find the best candidates. This script supports both character level and BPE level
# encodings and models which is detected automatically from the type of the model.
# You may train the LM model with 'scripts/asr_language_modeling/ngram_lm/train_kenlm.py'.
# Config Help
To discover all arguments of the script, please run:
python eval_beamsearch_ngram_transducer.py --help
python eval_beamsearch_ngram_transducer.py --cfg job
# USAGE
python eval_beamsearch_ngram_transducer.py nemo_model_file=<path to the .nemo file of the model> \
input_manifest=<path to the evaluation JSON manifest file> \
kenlm_model_file=<path to the binary KenLM model> \
beam_width=[<list of the beam widths, separated with commas>] \
beam_alpha=[<list of the beam alphas, separated with commas>] \
preds_output_folder=<optional folder to store the predictions> \
probs_cache_file=null \
decoding_strategy=<greedy_batch or maes decoding>
maes_prefix_alpha=[<list of the maes prefix alphas, separated with commas>] \
maes_expansion_gamma=[<list of the maes expansion gammas, separated with commas>] \
hat_subtract_ilm=<in case of HAT model: subtract internal LM or not> \
hat_ilm_weight=[<in case of HAT model: list of the HAT internal LM weights, separated with commas>] \
...
# Grid Search for Hyper parameters
For grid search, you can provide a list of arguments as follows -
beam_width=[4,8,16,....] \
beam_alpha=[-2.0,-1.0,...,1.0,2.0] \
# You may find more info on how to use this script at:
# https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/asr_language_modeling.html
"""
import contextlib
import json
import os
import pickle
import tempfile
from dataclasses import dataclass, field, is_dataclass
from pathlib import Path
from typing import List, Optional
import editdistance
import numpy as np
import torch
from omegaconf import MISSING, OmegaConf
from sklearn.model_selection import ParameterGrid
from tqdm.auto import tqdm
import nemo.collections.asr as nemo_asr
from nemo.collections.asr.parts.submodules import rnnt_beam_decoding
from nemo.core.config import hydra_runner
from nemo.utils import logging
# fmt: off
@dataclass
class EvalBeamSearchNGramConfig:
"""
Evaluate an ASR model with beam search decoding and n-gram KenLM language model.
"""
# # The path of the '.nemo' file of the ASR model or the name of a pretrained model (ngc / huggingface)
nemo_model_file: str = MISSING
# File paths
input_manifest: str = MISSING # The manifest file of the evaluation set
kenlm_model_file: Optional[str] = None # The path of the KenLM binary model file
preds_output_folder: Optional[str] = None # The optional folder where the predictions are stored
probs_cache_file: Optional[str] = None # The cache file for storing the logprobs of the model
# Parameters for inference
acoustic_batch_size: int = 128 # The batch size to calculate log probabilities
beam_batch_size: int = 128 # The batch size to be used for beam search decoding
device: str = "cuda" # The device to load the model onto to calculate log probabilities
use_amp: bool = False # Whether to use AMP if available to calculate log probabilities
num_workers: int = 1 # Number of workers for DataLoader
# The decoding scheme to be used for evaluation
decoding_strategy: str = "greedy_batch" # ["greedy_batch", "beam", "tsd", "alsd", "maes"]
# Beam Search hyperparameters
beam_width: List[int] = field(default_factory=lambda: [8]) # The width or list of the widths for the beam search decoding
beam_alpha: List[float] = field(default_factory=lambda: [0.2]) # The alpha parameter or list of the alphas for the beam search decoding
maes_prefix_alpha: List[int] = field(default_factory=lambda: [2]) # The maes_prefix_alpha or list of the maes_prefix_alpha for the maes decoding
maes_expansion_gamma: List[float] = field(default_factory=lambda: [2.3]) # The maes_expansion_gamma or list of the maes_expansion_gamma for the maes decoding
# HAT related parameters (only for internal lm subtraction)
hat_subtract_ilm: bool = False
hat_ilm_weight: List[float] = field(default_factory=lambda: [0.0])
decoding: rnnt_beam_decoding.BeamRNNTInferConfig = rnnt_beam_decoding.BeamRNNTInferConfig(beam_size=128)
# fmt: on
def decoding_step(
model: nemo_asr.models.ASRModel,
cfg: EvalBeamSearchNGramConfig,
all_probs: List[torch.Tensor],
target_transcripts: List[str],
preds_output_file: str = None,
beam_batch_size: int = 128,
progress_bar: bool = True,
):
level = logging.getEffectiveLevel()
logging.setLevel(logging.CRITICAL)
# Reset config
model.change_decoding_strategy(None)
cfg.decoding.hat_ilm_weight = cfg.decoding.hat_ilm_weight * cfg.hat_subtract_ilm
# Override the beam search config with current search candidate configuration
cfg.decoding.return_best_hypothesis = False
cfg.decoding.ngram_lm_model = cfg.kenlm_model_file
cfg.decoding.hat_subtract_ilm = cfg.hat_subtract_ilm
# Update model's decoding strategy config
model.cfg.decoding.strategy = cfg.decoding_strategy
model.cfg.decoding.beam = cfg.decoding
# Update model's decoding strategy
model.change_decoding_strategy(model.cfg.decoding)
logging.setLevel(level)
wer_dist_first = cer_dist_first = 0
wer_dist_best = cer_dist_best = 0
words_count = 0
chars_count = 0
sample_idx = 0
if preds_output_file:
out_file = open(preds_output_file, 'w', encoding='utf_8', newline='\n')
if progress_bar:
if cfg.decoding_strategy == "greedy_batch":
description = "Greedy_batch decoding.."
else:
description = f"{cfg.decoding_strategy} decoding with bw={cfg.decoding.beam_size}, ba={cfg.decoding.ngram_lm_alpha}, ma={cfg.decoding.maes_prefix_alpha}, mg={cfg.decoding.maes_expansion_gamma}, hat_ilmw={cfg.decoding.hat_ilm_weight}"
it = tqdm(range(int(np.ceil(len(all_probs) / beam_batch_size))), desc=description, ncols=120)
else:
it = range(int(np.ceil(len(all_probs) / beam_batch_size)))
for batch_idx in it:
# disabling type checking
probs_batch = all_probs[batch_idx * beam_batch_size : (batch_idx + 1) * beam_batch_size]
probs_lens = torch.tensor([prob.shape[-1] for prob in probs_batch])
with torch.no_grad():
packed_batch = torch.zeros(len(probs_batch), probs_batch[0].shape[0], max(probs_lens), device='cpu')
for prob_index in range(len(probs_batch)):
packed_batch[prob_index, :, : probs_lens[prob_index]] = torch.tensor(
probs_batch[prob_index].unsqueeze(0), device=packed_batch.device, dtype=packed_batch.dtype
)
best_hyp_batch, beams_batch = model.decoding.rnnt_decoder_predictions_tensor(
packed_batch, probs_lens, return_hypotheses=True,
)
if cfg.decoding_strategy == "greedy_batch":
beams_batch = [[x] for x in best_hyp_batch]
for beams_idx, beams in enumerate(beams_batch):
target = target_transcripts[sample_idx + beams_idx]
target_split_w = target.split()
target_split_c = list(target)
words_count += len(target_split_w)
chars_count += len(target_split_c)
wer_dist_min = cer_dist_min = 10000
for candidate_idx, candidate in enumerate(beams): # type: (int, rnnt_beam_decoding.rnnt_utils.Hypothesis)
pred_text = candidate.text
pred_split_w = pred_text.split()
wer_dist = editdistance.eval(target_split_w, pred_split_w)
pred_split_c = list(pred_text)
cer_dist = editdistance.eval(target_split_c, pred_split_c)
wer_dist_min = min(wer_dist_min, wer_dist)
cer_dist_min = min(cer_dist_min, cer_dist)
if candidate_idx == 0:
# first candidate
wer_dist_first += wer_dist
cer_dist_first += cer_dist
score = candidate.score
if preds_output_file:
out_file.write('{}\t{}\n'.format(pred_text, score))
wer_dist_best += wer_dist_min
cer_dist_best += cer_dist_min
sample_idx += len(probs_batch)
if cfg.decoding_strategy == "greedy_batch":
return wer_dist_first / words_count, cer_dist_first / chars_count
if preds_output_file:
out_file.close()
logging.info(f"Stored the predictions of {cfg.decoding_strategy} decoding at '{preds_output_file}'.")
if cfg.decoding.ngram_lm_model:
logging.info(
f"WER/CER with {cfg.decoding_strategy} decoding and N-gram model = {wer_dist_first / words_count:.2%}/{cer_dist_first / chars_count:.2%}"
)
else:
logging.info(
f"WER/CER with {cfg.decoding_strategy} decoding = {wer_dist_first / words_count:.2%}/{cer_dist_first / chars_count:.2%}"
)
logging.info(
f"Oracle WER/CER in candidates with perfect LM= {wer_dist_best / words_count:.2%}/{cer_dist_best / chars_count:.2%}"
)
logging.info(f"=================================================================================")
return wer_dist_first / words_count, cer_dist_first / chars_count
@hydra_runner(config_path=None, config_name='EvalBeamSearchNGramConfig', schema=EvalBeamSearchNGramConfig)
def main(cfg: EvalBeamSearchNGramConfig):
if is_dataclass(cfg):
cfg = OmegaConf.structured(cfg) # type: EvalBeamSearchNGramConfig
valid_decoding_strategies = ["greedy_batch", "beam", "tsd", "alsd", "maes"]
if cfg.decoding_strategy not in valid_decoding_strategies:
raise ValueError(
f"Given decoding_strategy={cfg.decoding_strategy} is invalid. Available options are:\n"
f"{valid_decoding_strategies}"
)
if cfg.nemo_model_file.endswith('.nemo'):
asr_model = nemo_asr.models.ASRModel.restore_from(cfg.nemo_model_file, map_location=torch.device(cfg.device))
else:
logging.warning(
"nemo_model_file does not end with .nemo, therefore trying to load a pretrained model with this name."
)
asr_model = nemo_asr.models.ASRModel.from_pretrained(
cfg.nemo_model_file, map_location=torch.device(cfg.device)
)
if cfg.kenlm_model_file:
if not os.path.exists(cfg.kenlm_model_file):
raise FileNotFoundError(f"Could not find the KenLM model file '{cfg.kenlm_model_file}'.")
if cfg.decoding_strategy != "maes":
raise ValueError(f"Decoding with kenlm model is supported only for maes decoding algorithm.")
lm_path = cfg.kenlm_model_file
else:
lm_path = None
cfg.beam_alpha = [0.0]
if cfg.hat_subtract_ilm:
assert lm_path, "kenlm must be set for hat internal lm subtraction"
if cfg.decoding_strategy != "maes":
cfg.maes_prefix_alpha, cfg.maes_expansion_gamma, cfg.hat_ilm_weight = [0], [0], [0]
target_transcripts = []
manifest_dir = Path(cfg.input_manifest).parent
with open(cfg.input_manifest, 'r', encoding='utf_8') as manifest_file:
audio_file_paths = []
for line in tqdm(manifest_file, desc=f"Reading Manifest {cfg.input_manifest} ...", ncols=120):
data = json.loads(line)
audio_file = Path(data['audio_filepath'])
if not audio_file.is_file() and not audio_file.is_absolute():
audio_file = manifest_dir / audio_file
target_transcripts.append(data['text'])
audio_file_paths.append(str(audio_file.absolute()))
if cfg.probs_cache_file and os.path.exists(cfg.probs_cache_file):
logging.info(f"Found a pickle file of probabilities at '{cfg.probs_cache_file}'.")
logging.info(f"Loading the cached pickle file of probabilities from '{cfg.probs_cache_file}' ...")
with open(cfg.probs_cache_file, 'rb') as probs_file:
all_probs = pickle.load(probs_file)
if len(all_probs) != len(audio_file_paths):
raise ValueError(
f"The number of samples in the probabilities file '{cfg.probs_cache_file}' does not "
f"match the manifest file. You may need to delete the probabilities cached file."
)
else:
@contextlib.contextmanager
def default_autocast():
yield
if cfg.use_amp:
if torch.cuda.is_available() and hasattr(torch.cuda, 'amp') and hasattr(torch.cuda.amp, 'autocast'):
logging.info("AMP is enabled!\n")
autocast = torch.cuda.amp.autocast
else:
autocast = default_autocast
else:
autocast = default_autocast
# manual calculation of encoder_embeddings
with autocast():
with torch.no_grad():
asr_model.eval()
asr_model.encoder.freeze()
device = next(asr_model.parameters()).device
all_probs = []
with tempfile.TemporaryDirectory() as tmpdir:
with open(os.path.join(tmpdir, 'manifest.json'), 'w', encoding='utf-8') as fp:
for audio_file in audio_file_paths:
entry = {'audio_filepath': audio_file, 'duration': 100000, 'text': ''}
fp.write(json.dumps(entry) + '\n')
config = {
'paths2audio_files': audio_file_paths,
'batch_size': cfg.acoustic_batch_size,
'temp_dir': tmpdir,
'num_workers': cfg.num_workers,
'channel_selector': None,
'augmentor': None,
}
temporary_datalayer = asr_model._setup_transcribe_dataloader(config)
for test_batch in tqdm(temporary_datalayer, desc="Transcribing", disable=True):
encoded, encoded_len = asr_model.forward(
input_signal=test_batch[0].to(device), input_signal_length=test_batch[1].to(device)
)
# dump encoder embeddings per file
for idx in range(encoded.shape[0]):
encoded_no_pad = encoded[idx, :, : encoded_len[idx]]
all_probs.append(encoded_no_pad)
if cfg.probs_cache_file:
logging.info(f"Writing pickle files of probabilities at '{cfg.probs_cache_file}'...")
with open(cfg.probs_cache_file, 'wb') as f_dump:
pickle.dump(all_probs, f_dump)
if cfg.decoding_strategy == "greedy_batch":
asr_model = asr_model.to('cpu')
candidate_wer, candidate_cer = decoding_step(
asr_model,
cfg,
all_probs=all_probs,
target_transcripts=target_transcripts,
beam_batch_size=cfg.beam_batch_size,
progress_bar=True,
)
logging.info(f"Greedy batch WER/CER = {candidate_wer:.2%}/{candidate_cer:.2%}")
asr_model = asr_model.to('cpu')
# 'greedy_batch' decoding_strategy would skip the beam search decoding
if cfg.decoding_strategy in ["beam", "tsd", "alsd", "maes"]:
if cfg.beam_width is None or cfg.beam_alpha is None:
raise ValueError("beam_width and beam_alpha are needed to perform beam search decoding.")
params = {
'beam_width': cfg.beam_width,
'beam_alpha': cfg.beam_alpha,
'maes_prefix_alpha': cfg.maes_prefix_alpha,
'maes_expansion_gamma': cfg.maes_expansion_gamma,
'hat_ilm_weight': cfg.hat_ilm_weight,
}
hp_grid = list(ParameterGrid(params))
best_wer_beam_size, best_cer_beam_size = None, None
best_wer_alpha, best_cer_alpha = None, None
best_wer, best_cer = 1e6, 1e6
logging.info(
f"==============================Starting the {cfg.decoding_strategy} decoding==============================="
)
logging.info(f"Grid search size: {len(hp_grid)}")
logging.info("It may take some time...")
logging.info("==============================================================================================")
if cfg.preds_output_folder and not os.path.exists(cfg.preds_output_folder):
os.mkdir(cfg.preds_output_folder)
for hp in hp_grid:
if cfg.preds_output_folder:
results_file = f"preds_out_{cfg.decoding_strategy}_bw{hp['beam_width']}"
if cfg.decoding_strategy == "maes":
results_file = f"{results_file}_ma{hp['maes_prefix_alpha']}_mg{hp['maes_expansion_gamma']}"
if cfg.kenlm_model_file:
results_file = f"{results_file}_ba{hp['beam_alpha']}"
if cfg.hat_subtract_ilm:
results_file = f"{results_file}_hat_ilmw{hp['hat_ilm_weight']}"
preds_output_file = os.path.join(cfg.preds_output_folder, f"{results_file}.tsv")
else:
preds_output_file = None
cfg.decoding.beam_size = hp["beam_width"]
cfg.decoding.ngram_lm_alpha = hp["beam_alpha"]
cfg.decoding.maes_prefix_alpha = hp["maes_prefix_alpha"]
cfg.decoding.maes_expansion_gamma = hp["maes_expansion_gamma"]
cfg.decoding.hat_ilm_weight = hp["hat_ilm_weight"]
candidate_wer, candidate_cer = decoding_step(
asr_model,
cfg,
all_probs=all_probs,
target_transcripts=target_transcripts,
preds_output_file=preds_output_file,
beam_batch_size=cfg.beam_batch_size,
progress_bar=True,
)
if candidate_cer < best_cer:
best_cer_beam_size = hp["beam_width"]
best_cer_alpha = hp["beam_alpha"]
best_cer_ma = hp["maes_prefix_alpha"]
best_cer_mg = hp["maes_expansion_gamma"]
best_cer_hat_ilm_weight = hp["hat_ilm_weight"]
best_cer = candidate_cer
if candidate_wer < best_wer:
best_wer_beam_size = hp["beam_width"]
best_wer_alpha = hp["beam_alpha"]
best_wer_ma = hp["maes_prefix_alpha"]
best_wer_ga = hp["maes_expansion_gamma"]
best_wer_hat_ilm_weight = hp["hat_ilm_weight"]
best_wer = candidate_wer
wer_hat_parameter = ""
if cfg.hat_subtract_ilm:
wer_hat_parameter = f"HAT ilm weight = {best_wer_hat_ilm_weight}, "
logging.info(
f'Best WER Candidate = {best_wer:.2%} :: Beam size = {best_wer_beam_size}, '
f'Beam alpha = {best_wer_alpha}, {wer_hat_parameter}'
f'maes_prefix_alpha = {best_wer_ma}, maes_expansion_gamma = {best_wer_ga} '
)
cer_hat_parameter = ""
if cfg.hat_subtract_ilm:
cer_hat_parameter = f"HAT ilm weight = {best_cer_hat_ilm_weight}"
logging.info(
f'Best CER Candidate = {best_cer:.2%} :: Beam size = {best_cer_beam_size}, '
f'Beam alpha = {best_cer_alpha}, {cer_hat_parameter} '
f'maes_prefix_alpha = {best_cer_ma}, maes_expansion_gamma = {best_cer_mg}'
)
logging.info(f"=================================================================================")
if __name__ == '__main__':
main()
[end of scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram_transducer.py]
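The sweep in the script above enumerates every combination of the decoding hyperparameters with scikit-learn's `ParameterGrid`. A minimal stdlib sketch of the same expansion (the parameter values below are hypothetical toy values, not the script's defaults):

```python
from itertools import product

def parameter_grid(params):
    # Expand a dict of lists into every combination of the listed values,
    # one dict per combination -- the same shape ParameterGrid yields.
    keys = sorted(params)
    return [dict(zip(keys, values)) for values in product(*(params[k] for k in keys))]

params = {
    'beam_width': [4, 8],            # hypothetical sweep values
    'beam_alpha': [0.0, 0.5],
    'maes_prefix_alpha': [1],
}
grid = parameter_grid(params)
print(len(grid))  # 2 * 2 * 1 = 4 combinations
```

Each entry of `grid` can then be applied to the decoding config exactly as the `for hp in hp_grid:` loop does.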
[start of scripts/confidence_ensembles/build_ensemble.py]
# Copyright (c) 2023, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
This script provides functionality to create confidence-based ensembles
from a collection of pretrained models.
For more details see the paper https://arxiv.org/abs/2306.15824
or tutorial in tutorials/asr/Confidence_Ensembles.ipynb
You would typically use this script by providing a yaml config file or overriding
default options from command line.
Usage examples:
1. Building an ensemble of two monolingual models with default settings (no confidence tuning).
python build_ensemble.py --config-path=. --config-name=ensemble_config.yaml
ensemble.0.model=stt_it_conformer_ctc_large
ensemble.0.training_manifest=<path to the Italian data of 100+ utterances (no transcription required)>
ensemble.1.model=stt_es_conformer_ctc_large
ensemble.1.training_manifest=<path to the Spanish data of 100+ utterances (no transcription required)>
output_path=<path to the desired location of the .nemo checkpoint>
You can have more than 2 models and can control transcription settings (e.g., batch size)
with ``transcription.<any argument of examples/asr/transcribe_speech.py>`` parameters.
2. If you want to get improved results, you can enable tuning of the confidence and logistic regression (LR) parameters.
E.g.
python build_ensemble.py
<all arguments like in the previous example>
ensemble.0.dev_manifest=<path to the dev data that's required for tuning>
...
# IMPORTANT: see the note below if you use > 2 models!
ensemble.N.dev_manifest=<path to the dev data that's required for tuning>
tune_confidence=True # to allow confidence tuning. LR is tuned by default
As with any tuning, it is recommended to have reasonably large validation set for each model,
otherwise you might overfit to the validation data.
Note that if you add additional models (> 2) you will need to modify ensemble_config.yaml
or create a new one with added models in there. While it's theoretically possible to
fully override such parameters from commandline, hydra is very unfriendly for such
use-cases, so it's strongly recommended to create new configs.
3. If you want to precisely control tuning grid search, you can do that with
python build_ensemble.py
<all arguments as in the previous examples>
tune_confidence_config.confidence_type='[entropy_renyi_exp,entropy_tsallis_exp]' # only tune over this set
tune_confidence_config.alpha='[0.1,0.5,1.0]' # only tune over this set
You can check the dataclasses in this file for the full list of supported
arguments and their default values.
"""
import atexit
# using default logging to be able to silence unnecessary messages from nemo
import logging
import os
import random
import sys
import tempfile
from copy import deepcopy
from dataclasses import dataclass
from pathlib import Path
from typing import Dict, List, Optional, Tuple
import joblib
import numpy as np
import pytorch_lightning as pl
from omegaconf import MISSING, DictConfig, OmegaConf
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.pipeline import Pipeline, make_pipeline
from sklearn.preprocessing import StandardScaler
from tqdm import tqdm
from nemo.collections.asr.models.confidence_ensemble import (
ConfidenceEnsembleModel,
ConfidenceSpec,
compute_confidence,
get_filtered_logprobs,
)
from nemo.collections.asr.parts.utils.asr_confidence_utils import (
ConfidenceConfig,
ConfidenceMethodConfig,
get_confidence_aggregation_bank,
get_confidence_measure_bank,
)
from nemo.collections.asr.parts.utils.rnnt_utils import Hypothesis
from nemo.core.config import hydra_runner
LOG = logging.getLogger(__file__)
# adding Python path. If not found, asking user to get the file
try:
sys.path.append(str(Path(__file__).parents[2] / "examples" / "asr"))
import transcribe_speech
except ImportError:
# if users run script normally from nemo repo, this shouldn't be triggered as
# we modify the path above. But if they downloaded the build_ensemble.py as
# an isolated script, we'd ask them to also download corresponding version
# of the transcribe_speech.py
print(
"Current script depends on 'examples/asr/transcribe_speech.py', but can't find it. "
"If it's not present, download it from the NeMo github manually and put inside this folder."
)
@dataclass
class EnsembleConfig:
# .nemo path or pretrained name
model: str = MISSING
# path to the training data manifest (non-tarred)
training_manifest: str = MISSING
# specify to limit the number of training samples
# 100 is most likely enough, but setting higher default just in case
max_training_samples: int = 1000
# specify to provide dev data manifest for HP tuning
dev_manifest: Optional[str] = None
@dataclass
class TuneConfidenceConfig:
# important parameter, so should always be tuned
exclude_blank: Tuple[bool] = (True, False)
# prod is pretty much always worse, so not including by default
aggregation: Tuple[str] = ("mean", "min", "max")
# not including max prob, as there is always an entropy-based metric
# that's better; otherwise including everything
confidence_type: Tuple[str] = (
"entropy_renyi_exp",
"entropy_renyi_lin",
"entropy_tsallis_exp",
"entropy_tsallis_lin",
"entropy_gibbs_lin",
"entropy_gibbs_exp",
)
# TODO: currently it's not possible to efficiently tune temperature, as we always
# apply log-softmax in the decoder, so to try different values it will be required
# to rerun the decoding, which is very slow. To support this for one-off experiments
# it's possible to modify the code of CTC decoder / Transducer joint to
# remove log-softmax and then apply it directly in this script with the temperature
#
# Alternatively, one can run this script multiple times with different values of
# temperature and pick the best performing ensemble. Note that this will increase
# tuning time by the number of temperature values tried. On the other hand,
# the above approach is a lot more efficient and will only slightly increase
# the total tuning runtime.
# very important to tune for max prob, but for entropy metrics 1.0 is almost always best
# temperature: Tuple[float] = (1.0,)
# not that important, but can sometimes make a small difference
alpha: Tuple[float] = (0.25, 0.33, 0.5, 1.0)
def get_grid_size(self) -> int:
"""Returns the total number of points in the search space."""
if "max_prob" in self.confidence_type:
return (
len(self.exclude_blank)
* len(self.aggregation)
* ((len(self.confidence_type) - 1) * len(self.alpha) + 1)
)
return len(self.exclude_blank) * len(self.aggregation) * len(self.confidence_type) * len(self.alpha)
@dataclass
class TuneLogisticRegressionConfig:
# will have log-uniform grid over this range with that many points
# note that a value of 10000.0 (no regularization) is always added
C_num_points: int = 10
C_min: float = 0.0001
C_max: float = 10.0
# not too important
multi_class: Tuple[str] = ("ovr", "multinomial")
# should try to include weights directly if the data is too imbalanced
class_weight: Tuple = (None, "balanced")
# increase if getting many warnings that algorithm didn't converge
max_iter: int = 1000
@dataclass
class BuildEnsembleConfig:
# where to save the resulting ensemble model
output_path: str = MISSING
# each model specification
ensemble: List[EnsembleConfig] = MISSING
random_seed: int = 0 # for reproducibility
# default confidence, can override
confidence: ConfidenceConfig = ConfidenceConfig(
# we keep frame confidences and apply aggregation manually to get full-utterance confidence
preserve_frame_confidence=True,
exclude_blank=True,
aggregation="mean",
method_cfg=ConfidenceMethodConfig(name="entropy", entropy_type="renyi", alpha=0.25, entropy_norm="lin",),
)
temperature: float = 1.0
# this is optional, but can be used to change any aspect of the transcription
# config, such as batch size or amp usage. Note that model, data and confidence
# will be overridden by this script
transcription: transcribe_speech.TranscriptionConfig = transcribe_speech.TranscriptionConfig()
# set to True to tune the confidence.
# requires dev manifests to be specified for each model
tune_confidence: bool = False
# used to specify what to tune over. By default runs tuning over some
# reasonable grid, so that it does not take forever.
# Can be changed as needed
tune_confidence_config: TuneConfidenceConfig = TuneConfidenceConfig()
# very fast to tune and can be important in case of imbalanced datasets
# will automatically set to False if dev data is not available
tune_logistic_regression: bool = True
tune_logistic_regression_config: TuneLogisticRegressionConfig = TuneLogisticRegressionConfig()
def __post_init__(self):
"""Checking that if any dev data is provided, all are provided.
Will also auto-set tune_logistic_regression to False if no dev data
is available.
If tune_confidence is set to True (user choice) and no dev data is
provided, will raise an error.
"""
num_dev_data = 0
for ensemble_cfg in self.ensemble:
num_dev_data += ensemble_cfg.dev_manifest is not None
if num_dev_data == 0:
if self.tune_confidence:
raise ValueError("tune_confidence is set to True, but no dev data is provided")
LOG.info("Setting tune_logistic_regression = False since no dev data is provided")
self.tune_logistic_regression = False
return
if num_dev_data < len(self.ensemble):
raise ValueError(
"Some ensemble configs specify dev data, but some don't. Either all have to specify it or none!"
)
def calculate_score(features: np.ndarray, labels: np.ndarray, pipe: Pipeline) -> Tuple[float, np.ndarray]:
"""Score is always calculated as mean of the per-class scores.
This is done to account for possible class imbalances.
Args:
features: numpy array of features of shape [N x D], where N is the
number of objects (typically a total number of utterances in
all datasets) and D is the total number of confidence scores
used to train the model (typically = number of models).
labels: numpy array of shape [N] containing ground-truth model indices.
pipe: classification pipeline (currently, standardization + logistic
regression).
Returns:
tuple: score value in [0, 1] and full classification confusion matrix.
"""
predictions = pipe.predict(features)
conf_m = confusion_matrix(labels, predictions)
score = np.diag(conf_m).sum() / conf_m.sum()
return score, conf_m
def train_model_selection(
training_features: np.ndarray,
training_labels: np.ndarray,
dev_features: Optional[np.ndarray] = None,
dev_labels: Optional[np.ndarray] = None,
tune_lr: bool = False,
tune_lr_cfg: Optional[TuneLogisticRegressionConfig] = None,
verbose: bool = False,
) -> Tuple[Pipeline, float]:
"""Trains model selection block with an (optional) tuning of the parameters.
Returns a pipeline consisting of feature standardization and logistic
regression. If tune_lr is set to True, dev features/labels will be used
to tune the hyperparameters of the logistic regression with the grid
search that's defined via ``tune_lr_cfg``.
If no tuning is requested, uses the following parameters::
best_pipe = make_pipeline(
StandardScaler(),
LogisticRegression(
multi_class="multinomial",
C=10000.0,
max_iter=1000,
class_weight="balanced",
),
)
Args:
training_features: numpy array of features of shape [N x D], where N is
the number of objects (typically a total number of utterances in
all training datasets) and D is the total number of confidence
scores used to train the model (typically = number of models).
training_labels: numpy array of shape [N] containing ground-truth
model indices.
dev_features: same as training, but for the validation subset.
dev_labels: same as training, but for the validation subset.
tune_lr: controls whether tuning of LR hyperparameters is performed.
If set to True, it's required to also provide dev features/labels.
tune_lr_cfg: specifies what values of LR hyperparameters to try.
verbose: if True, will output final training/dev scores.
Returns:
tuple: trained model selection pipeline, best score (or -1 if no tuning
was done).
"""
if not tune_lr:
# default parameters: C=10000.0 disables regularization
best_pipe = make_pipeline(
StandardScaler(),
LogisticRegression(multi_class="multinomial", C=10000.0, max_iter=1000, class_weight="balanced"),
)
max_score = -1
else:
C_pms = np.append(
np.exp(np.linspace(np.log(tune_lr_cfg.C_min), np.log(tune_lr_cfg.C_max), tune_lr_cfg.C_num_points)),
10000.0,
)
max_score = 0
best_pipe = None
for class_weight in tune_lr_cfg.class_weight:
for multi_class in tune_lr_cfg.multi_class:
for C in C_pms:
pipe = make_pipeline(
StandardScaler(),
LogisticRegression(
multi_class=multi_class, C=C, max_iter=tune_lr_cfg.max_iter, class_weight=class_weight
),
)
pipe.fit(training_features, training_labels)
score, confusion = calculate_score(dev_features, dev_labels, pipe)
if score > max_score:
max_score = score
best_pipe = pipe
best_pipe.fit(training_features, training_labels)
if verbose:
accuracy, confusion = calculate_score(training_features, training_labels, best_pipe)
LOG.info("Training fit accuracy: %.4f", accuracy * 100.0)
LOG.info("Training confusion matrix:\n%s", str(confusion))
if dev_features is not None and verbose:
accuracy, confusion = calculate_score(dev_features, dev_labels, best_pipe)
LOG.info("Dev fit accuracy: %.4f", accuracy * 100.0)
LOG.info("Dev confusion matrix:\n%s", str(confusion))
return best_pipe, max_score
def subsample_manifest(manifest_file: str, max_samples: int) -> str:
"""Will save a subsampled version of the manifest to the same folder.
Have to save to the same folder to support relative paths.
Args:
manifest_file: path to the manifest file that needs subsampling.
max_samples: how many samples to retain. Will randomly select that
many lines from the manifest.
Returns:
str: the path to the subsampled manifest file.
"""
with open(manifest_file, "rt", encoding="utf-8") as fin:
lines = fin.readlines()
if max_samples < len(lines):
lines = random.sample(lines, max_samples)
output_file = manifest_file + "-subsampled"
with open(output_file, "wt", encoding="utf-8") as fout:
fout.write("".join(lines))
return output_file
def cleanup_subsampled_manifests(subsampled_manifests: List[str]):
"""Removes all generated subsamples manifests."""
for manifest in subsampled_manifests:
os.remove(manifest)
def compute_all_confidences(
hypothesis: Hypothesis, tune_confidence_cfg: TuneConfidenceConfig
) -> Dict[ConfidenceSpec, float]:
"""Computes a set of confidence scores from a given hypothesis.
Works with the output of both CTC and Transducer decoding.
Args:
hypothesis: generated hypothesis as returned from the transcribe
method of the ASR model.
tune_confidence_cfg: config specifying what confidence scores to
compute.
Returns:
dict: dictionary with confidence spec -> confidence score mapping.
"""
conf_values = {}
for exclude_blank in tune_confidence_cfg.exclude_blank:
filtered_logprobs = get_filtered_logprobs(hypothesis, exclude_blank)
vocab_size = filtered_logprobs.shape[1]
for aggregation in tune_confidence_cfg.aggregation:
aggr_func = get_confidence_aggregation_bank()[aggregation]
for conf_type in tune_confidence_cfg.confidence_type:
conf_func = get_confidence_measure_bank()[conf_type]
if conf_type == "max_prob": # skipping alpha in this case
conf_value = aggr_func(conf_func(filtered_logprobs, v=vocab_size, t=1.0)).cpu().item()
conf_values[ConfidenceSpec(exclude_blank, aggregation, conf_type, 1.0)] = conf_value
else:
for alpha in tune_confidence_cfg.alpha:
conf_value = aggr_func(conf_func(filtered_logprobs, v=vocab_size, t=alpha)).cpu().item()
conf_values[ConfidenceSpec(exclude_blank, aggregation, conf_type, alpha)] = conf_value
return conf_values
def find_best_confidence(
train_confidences: List[List[Dict[ConfidenceSpec, float]]],
train_labels: List[int],
dev_confidences: List[List[Dict[ConfidenceSpec, float]]],
dev_labels: List[int],
tune_lr: bool,
tune_lr_config: TuneConfidenceConfig,
) -> Tuple[ConfidenceConfig, Pipeline]:
"""Finds the best confidence configuration for model selection.
Will loop over all values in the confidence dictionary and fit the LR
model (optionally tuning its HPs). The best performing confidence (on the
dev set) will be used for the final LR model.
Args:
train_confidences: this is an object of type
``List[List[Dict[ConfidenceSpec, float]]]``. The shape of this
object is [M, N, S], where
M: number of models
N: number of utterances in all training sets
S: number of confidence scores to try
This argument will be used to construct np.array objects for each
of the confidence scores with the shape [M, N]
train_labels: ground-truth labels of the correct model for each data
points. This is a list of size [N]
dev_confidences: same as training, but for the validation subset.
dev_labels: same as training, but for the validation subset.
tune_lr: controls whether tuning of LR hyperparameters is performed.
tune_lr_cfg: specifies what values of LR hyperparameters to try.
Returns:
tuple: best confidence config, best model selection pipeline
"""
max_score = 0
best_pipe = None
best_conf_spec = None
LOG.info("Evaluating all confidences. Total grid size: %d", len(train_confidences[0][0].keys()))
for conf_spec in tqdm(train_confidences[0][0].keys()):
cur_train_confidences = []
for model_confs in train_confidences:
cur_train_confidences.append([])
for model_conf in model_confs:
cur_train_confidences[-1].append(model_conf[conf_spec])
cur_dev_confidences = []
for model_confs in dev_confidences:
cur_dev_confidences.append([])
for model_conf in model_confs:
cur_dev_confidences[-1].append(model_conf[conf_spec])
# transposing with zip(*list)
training_features = np.array(list(zip(*cur_train_confidences)))
training_labels = np.array(train_labels)
dev_features = np.array(list(zip(*cur_dev_confidences)))
dev_labels = np.array(dev_labels)
pipe, score = train_model_selection(
training_features, training_labels, dev_features, dev_labels, tune_lr, tune_lr_config,
)
if max_score < score:
max_score = score
best_pipe = pipe
best_conf_spec = conf_spec
LOG.info("Found better parameters: %s. New score: %.4f", str(conf_spec), max_score)
return best_conf_spec.to_confidence_config(), best_pipe
@hydra_runner(config_name="BuildEnsembleConfig", schema=BuildEnsembleConfig)
def main(cfg: BuildEnsembleConfig):
# silencing all messages from nemo/ptl to avoid dumping tons of configs to the stdout
logging.getLogger('pytorch_lightning').setLevel(logging.CRITICAL)
logging.getLogger('nemo_logger').setLevel(logging.CRITICAL)
LOG.info(f'Build ensemble config:\n{OmegaConf.to_yaml(cfg)}')
# to ensure post init is called
cfg = BuildEnsembleConfig(**cfg)
pl.seed_everything(cfg.random_seed)
cfg.transcription.random_seed = None # seed is already applied
cfg.transcription.return_transcriptions = True
cfg.transcription.preserve_alignment = True
cfg.transcription.ctc_decoding.temperature = cfg.temperature
cfg.transcription.rnnt_decoding.temperature = cfg.temperature
# this ensures that generated output is after log-softmax for consistency with CTC
train_confidences = []
dev_confidences = []
train_labels = []
dev_labels = []
# registering clean-up function that will hold on to this list and
# should clean up even if there is partial error in some of the transcribe
# calls
subsampled_manifests = []
atexit.register(cleanup_subsampled_manifests, subsampled_manifests)
# note that we loop over the same config.
# This is intentional, as we need to run all models on all datasets
# this loop will do the following things:
# 1. Goes through each model X each training dataset
# 2. Computes predictions by directly calling transcribe_speech.main
# 3. Converts transcription to the confidence score(s) as specified in the config
# 4. If dev sets are provided, computes the same for them
# 5. Creates a list of ground-truth model indices by mapping each model
# to its own training dataset as specified in the config.
# 6. After the loop, we either run tuning over all confidence scores or
# directly use a single score to fit logistic regression and save the
# final ensemble model.
for model_idx, model_cfg in enumerate(cfg.ensemble):
train_model_confidences = []
dev_model_confidences = []
for data_idx, data_cfg in enumerate(cfg.ensemble):
if model_idx == 0: # generating subsampled manifests only one time
subsampled_manifests.append(
subsample_manifest(data_cfg.training_manifest, data_cfg.max_training_samples)
)
subsampled_manifest = subsampled_manifests[data_idx]
if model_cfg.model.endswith(".nemo"):
cfg.transcription.model_path = model_cfg.model
else: # assuming pretrained model
cfg.transcription.pretrained_name = model_cfg.model
cfg.transcription.dataset_manifest = subsampled_manifest
# training
with tempfile.NamedTemporaryFile() as output_file:
cfg.transcription.output_filename = output_file.name
LOG.info("Transcribing training dataset %d with model %d", data_idx, model_idx)
transcriptions = transcribe_speech.main(deepcopy(cfg.transcription))
LOG.info("Generating confidence scores")
# TODO: parallelize this loop?
for transcription in tqdm(transcriptions):
if cfg.tune_confidence:
train_model_confidences.append(
compute_all_confidences(transcription, cfg.tune_confidence_config)
)
else:
train_model_confidences.append(compute_confidence(transcription, cfg.confidence))
if model_idx == 0: # labels are the same for all models
train_labels.append(data_idx)
# optional dev
if data_cfg.dev_manifest is not None:
cfg.transcription.dataset_manifest = data_cfg.dev_manifest
with tempfile.NamedTemporaryFile() as output_file:
cfg.transcription.output_filename = output_file.name
LOG.info("Transcribing dev dataset %d with model %d", data_idx, model_idx)
transcriptions = transcribe_speech.main(deepcopy(cfg.transcription))
LOG.info("Generating confidence scores")
for transcription in tqdm(transcriptions):
if cfg.tune_confidence:
dev_model_confidences.append(
compute_all_confidences(transcription, cfg.tune_confidence_config)
)
else:
dev_model_confidences.append(compute_confidence(transcription, cfg.confidence))
if model_idx == 0: # labels are the same for all models
dev_labels.append(data_idx)
train_confidences.append(train_model_confidences)
if dev_model_confidences:
dev_confidences.append(dev_model_confidences)
if cfg.tune_confidence:
best_confidence, model_selection_block = find_best_confidence(
train_confidences,
train_labels,
dev_confidences,
dev_labels,
cfg.tune_logistic_regression,
cfg.tune_logistic_regression_config,
)
else:
best_confidence = cfg.confidence
# transposing with zip(*list)
training_features = np.array(list(zip(*train_confidences)))
training_labels = np.array(train_labels)
if dev_confidences:
dev_features = np.array(list(zip(*dev_confidences)))
dev_labels = np.array(dev_labels)
else:
dev_features = None
dev_labels = None
model_selection_block, _ = train_model_selection(
training_features,
training_labels,
dev_features,
dev_labels,
cfg.tune_logistic_regression,
cfg.tune_logistic_regression_config,
verbose=True,
)
with tempfile.TemporaryDirectory() as tmpdir:
model_selection_block_path = os.path.join(tmpdir, 'model_selection_block.pkl')
joblib.dump(model_selection_block, model_selection_block_path)
# creating ensemble checkpoint
ensemble_model = ConfidenceEnsembleModel(
cfg=DictConfig(
{
'model_selection_block': model_selection_block_path,
'confidence': best_confidence,
'temperature': cfg.temperature,
'load_models': [model_cfg.model for model_cfg in cfg.ensemble],
}
),
trainer=None,
)
ensemble_model.save_to(cfg.output_path)
if __name__ == '__main__':
main()
[end of scripts/confidence_ensembles/build_ensemble.py]
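`calculate_score` in the script above derives its score from the confusion matrix as trace / total, i.e. overall accuracy over all utterances. A dependency-free sketch of that computation (the label/prediction arrays below are made-up toy data, not script output):

```python
from collections import Counter

def confusion_and_accuracy(labels, predictions, num_classes):
    # counts[(true, pred)] gives the confusion matrix entries;
    # accuracy = trace / total, matching calculate_score's
    # np.diag(conf_m).sum() / conf_m.sum().
    counts = Counter(zip(labels, predictions))
    conf_m = [[counts.get((t, p), 0) for p in range(num_classes)] for t in range(num_classes)]
    correct = sum(conf_m[i][i] for i in range(num_classes))
    total = sum(sum(row) for row in conf_m)
    return conf_m, correct / total

labels      = [0, 0, 1, 1, 1]   # hypothetical ground-truth model indices
predictions = [0, 1, 1, 1, 0]   # hypothetical pipeline predictions
conf_m, score = confusion_and_accuracy(labels, predictions, 2)
print(score)  # 3 correct out of 5 -> 0.6
```

In the ensemble script the predictions come from the standardization + logistic-regression pipeline, and the same score drives both the confidence grid search and the LR hyperparameter search.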
[start of scripts/speech_recognition/confidence/benchmark_asr_confidence.py]
# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
import os
from dataclasses import dataclass, is_dataclass
from pathlib import Path
from typing import Optional
import pytorch_lightning as pl
import torch
from omegaconf import MISSING, OmegaConf
from sklearn.model_selection import ParameterGrid
from nemo.collections.asr.metrics.rnnt_wer import RNNTDecodingConfig
from nemo.collections.asr.metrics.wer import CTCDecodingConfig
from nemo.collections.asr.models import ASRModel, EncDecRNNTModel
from nemo.collections.asr.parts.utils.asr_confidence_benchmarking_utils import (
apply_confidence_parameters,
run_confidence_benchmark,
)
from nemo.collections.asr.parts.utils.asr_confidence_utils import ConfidenceConfig
from nemo.core.config import hydra_runner
from nemo.utils import logging, model_utils
"""
Get confidence metrics and curve plots for a given model, dataset, and confidence parameters.
# Arguments
model_path: Path to .nemo ASR checkpoint
pretrained_name: Name of pretrained ASR model (from NGC registry)
dataset_manifest: Path to dataset JSON manifest file (in NeMo format)
output_dir: Output directory to store a report and curve plot directories
batch_size: batch size during inference
num_workers: number of workers during inference
cuda: Optional int to enable or disable execution of model on certain CUDA device
amp: Bool to decide if Automatic Mixed Precision should be used during inference
audio_type: Str filetype of the audio. Supported = wav, flac, mp3
target_level: Word- or token-level confidence. Supported = word, token, auto (for computing both word and token)
confidence_cfg: Config with confidence parameters
grid_params: Dictionary with lists of parameters to iteratively benchmark on
# Usage
ASR model can be specified by either "model_path" or "pretrained_name".
Data for transcription are defined with "dataset_manifest".
Results are returned as a benchmark report and curve plots.
python benchmark_asr_confidence.py \
model_path=null \
pretrained_name=null \
dataset_manifest="" \
output_dir="" \
batch_size=64 \
num_workers=8 \
cuda=0 \
amp=True \
target_level="word" \
confidence_cfg.exclude_blank=False \
'grid_params="{\"aggregation\": [\"min\", \"prod\"], \"alpha\": [0.33, 0.5]}"'
"""
def get_experiment_params(cfg):
"""Get experiment parameters from a confidence config and generate the experiment name.
Returns:
List of experiment parameters.
String with the experiment name.
"""
blank = "no_blank" if cfg.exclude_blank else "blank"
aggregation = cfg.aggregation
method_name = cfg.method_cfg.name
alpha = cfg.method_cfg.alpha
if method_name == "entropy":
entropy_type = cfg.method_cfg.entropy_type
entropy_norm = cfg.method_cfg.entropy_norm
experiment_param_list = [
aggregation,
str(cfg.exclude_blank),
method_name,
entropy_type,
entropy_norm,
str(alpha),
]
experiment_str = "-".join([aggregation, blank, method_name, entropy_type, entropy_norm, str(alpha)])
else:
experiment_param_list = [aggregation, str(cfg.exclude_blank), method_name, "-", "-", str(alpha)]
experiment_str = "-".join([aggregation, blank, method_name, str(alpha)])
return experiment_param_list, experiment_str
@dataclass
class ConfidenceBenchmarkingConfig:
# Required configs
model_path: Optional[str] = None # Path to a .nemo file
pretrained_name: Optional[str] = None # Name of a pretrained model
dataset_manifest: str = MISSING
output_dir: str = MISSING
# General configs
batch_size: int = 32
num_workers: int = 4
# Set `cuda` to int to define CUDA device. If 'None', will look for CUDA
# device anyway, and do inference on CPU only if CUDA device is not found.
# If `cuda` is a negative number, inference will be on CPU only.
cuda: Optional[int] = None
amp: bool = False
audio_type: str = "wav"
# Confidence configs
target_level: str = "auto" # Choices: "word", "token", "auto" (for both word- and token-level confidence)
confidence_cfg: ConfidenceConfig = ConfidenceConfig(preserve_word_confidence=True, preserve_token_confidence=True)
grid_params: Optional[str] = None # a dictionary with lists of parameters to iteratively benchmark on
@hydra_runner(config_name="ConfidenceBenchmarkingConfig", schema=ConfidenceBenchmarkingConfig)
def main(cfg: ConfidenceBenchmarkingConfig):
torch.set_grad_enabled(False)
logging.info(f'Hydra config: {OmegaConf.to_yaml(cfg)}')
if is_dataclass(cfg):
cfg = OmegaConf.structured(cfg)
if cfg.model_path is None and cfg.pretrained_name is None:
        raise ValueError("cfg.model_path and cfg.pretrained_name cannot both be None!")
# setup GPU
if cfg.cuda is None:
if torch.cuda.is_available():
device = [0] # use 0th CUDA device
accelerator = 'gpu'
else:
device = 1
accelerator = 'cpu'
else:
device = [cfg.cuda]
accelerator = 'gpu'
map_location = torch.device('cuda:{}'.format(device[0]) if accelerator == 'gpu' else 'cpu')
# setup model
if cfg.model_path is not None:
# restore model from .nemo file path
model_cfg = ASRModel.restore_from(restore_path=cfg.model_path, return_config=True)
classpath = model_cfg.target # original class path
imported_class = model_utils.import_class_by_path(classpath) # type: ASRModel
logging.info(f"Restoring model : {imported_class.__name__}")
asr_model = imported_class.restore_from(
restore_path=cfg.model_path, map_location=map_location
) # type: ASRModel
else:
# restore model by name
asr_model = ASRModel.from_pretrained(
model_name=cfg.pretrained_name, map_location=map_location
) # type: ASRModel
trainer = pl.Trainer(devices=device, accelerator=accelerator)
asr_model.set_trainer(trainer)
asr_model = asr_model.eval()
# Check if ctc or rnnt model
is_rnnt = isinstance(asr_model, EncDecRNNTModel)
# Check that the model has the `change_decoding_strategy` method
if not hasattr(asr_model, 'change_decoding_strategy'):
raise RuntimeError("The asr_model you are using must have the `change_decoding_strategy` method.")
# get filenames and reference texts from manifest
filepaths = []
reference_texts = []
if os.stat(cfg.dataset_manifest).st_size == 0:
logging.error(f"The input dataset_manifest {cfg.dataset_manifest} is empty. Exiting!")
return None
manifest_dir = Path(cfg.dataset_manifest).parent
with open(cfg.dataset_manifest, 'r') as f:
for line in f:
item = json.loads(line)
audio_file = Path(item['audio_filepath'])
if not audio_file.is_file() and not audio_file.is_absolute():
audio_file = manifest_dir / audio_file
filepaths.append(str(audio_file.absolute()))
reference_texts.append(item['text'])
# setup AMP (optional)
autocast = None
if cfg.amp and torch.cuda.is_available() and hasattr(torch.cuda, 'amp') and hasattr(torch.cuda.amp, 'autocast'):
logging.info("AMP enabled!\n")
autocast = torch.cuda.amp.autocast
# do grid-based benchmarking if grid_params is provided, otherwise a regular one
work_dir = Path(cfg.output_dir)
os.makedirs(work_dir, exist_ok=True)
report_legend = (
",".join(
[
"model_type",
"aggregation",
"blank",
"method_name",
"entropy_type",
"entropy_norm",
"alpha",
"target_level",
"auc_roc",
"auc_pr",
"auc_nt",
"nce",
"ece",
"auc_yc",
"std_yc",
"max_yc",
]
)
+ "\n"
)
model_typename = "RNNT" if is_rnnt else "CTC"
report_file = work_dir / Path("report.csv")
if cfg.grid_params:
asr_model.change_decoding_strategy(
RNNTDecodingConfig(fused_batch_size=-1, strategy="greedy_batch", confidence_cfg=cfg.confidence_cfg)
if is_rnnt
else CTCDecodingConfig(confidence_cfg=cfg.confidence_cfg)
)
params = json.loads(cfg.grid_params)
hp_grid = ParameterGrid(params)
hp_grid = list(hp_grid)
logging.info(f"==============================Running a benchmarking with grid search=========================")
logging.info(f"Grid search size: {len(hp_grid)}")
logging.info(f"Results will be written to:\nreport file `{report_file}`\nand plot directories near the file")
logging.info(f"==============================================================================================")
with open(report_file, "tw", encoding="utf-8") as f:
f.write(report_legend)
f.flush()
for i, hp in enumerate(hp_grid):
logging.info(f"Run # {i + 1}, grid: `{hp}`")
asr_model.change_decoding_strategy(apply_confidence_parameters(asr_model.cfg.decoding, hp))
param_list, experiment_name = get_experiment_params(asr_model.cfg.decoding.confidence_cfg)
plot_dir = work_dir / Path(experiment_name)
results = run_confidence_benchmark(
asr_model,
cfg.target_level,
filepaths,
reference_texts,
cfg.batch_size,
cfg.num_workers,
plot_dir,
autocast,
)
for level, result in results.items():
f.write(f"{model_typename},{','.join(param_list)},{level},{','.join([str(r) for r in result])}\n")
f.flush()
else:
asr_model.change_decoding_strategy(
RNNTDecodingConfig(fused_batch_size=-1, strategy="greedy_batch", confidence_cfg=cfg.confidence_cfg)
if is_rnnt
else CTCDecodingConfig(confidence_cfg=cfg.confidence_cfg)
)
param_list, experiment_name = get_experiment_params(asr_model.cfg.decoding.confidence_cfg)
plot_dir = work_dir / Path(experiment_name)
logging.info(f"==============================Running a single benchmarking===================================")
logging.info(f"Results will be written to:\nreport file `{report_file}`\nand plot directory `{plot_dir}`")
with open(report_file, "tw", encoding="utf-8") as f:
f.write(report_legend)
f.flush()
            results = run_confidence_benchmark(
                asr_model,
                cfg.target_level,
                filepaths,
                reference_texts,
                cfg.batch_size,
                cfg.num_workers,
                plot_dir,
                autocast,
            )
for level, result in results.items():
f.write(f"{model_typename},{','.join(param_list)},{level},{','.join([str(r) for r in result])}\n")
logging.info(f"===========================================Done===============================================")
if __name__ == '__main__':
main()
[end of scripts/speech_recognition/confidence/benchmark_asr_confidence.py]
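For reference, a minimal stdlib-only sketch (not part of the script) of what the `grid_params` expansion in the benchmarking loop does: `json.loads` plus sklearn's `ParameterGrid` turn a JSON dict of value lists into one hyperparameter dict per run. The `expand_grid` helper below is hypothetical and only mimics `ParameterGrid` with `itertools.product`:

```python
import itertools
import json


def expand_grid(grid_params_json: str):
    """Expand a JSON dict of parameter lists into all combinations,
    mimicking what sklearn's ParameterGrid does for the grid search."""
    params = json.loads(grid_params_json)
    keys = sorted(params)
    return [
        dict(zip(keys, values))
        for values in itertools.product(*(params[k] for k in keys))
    ]


# Same example grid as in the usage string above: 2 x 2 = 4 runs.
grid = expand_grid('{"aggregation": ["min", "prod"], "alpha": [0.33, 0.5]}')
assert len(grid) == 4
assert {"aggregation": "min", "alpha": 0.33} in grid
```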
[start of scripts/speech_recognition/convert_to_tarred_audio_dataset.py]
# Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
# This script converts an existing audio dataset with a manifest to
# a tarred and sharded audio dataset that can be read by the
# TarredAudioToTextDataLayer.
# Please make sure your audio_filepath DOES NOT CONTAIN '-sub'!
# Because we will use it to handle files which have duplicate filenames but with different offsets
# (see function create_shard for details)
# Bucketing can help to improve the training speed. You may use --buckets_num to specify the number of buckets.
# It creates multiple tarred datasets, one per bucket, based on the audio durations.
# The range of [min_duration, max_duration) is split into equal sized buckets.
# It is recommended to use --sort_in_shards to speed up training by reducing padding within batches.
# More info on how to use bucketing feature: https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/datasets.html
# If valid NVIDIA DALI version is installed, will also generate the corresponding DALI index files that need to be
# supplied to the config in order to utilize webdataset for efficient large dataset handling.
# NOTE: DALI + Webdataset is NOT compatible with Bucketing support !
# Usage:
1) Creating a new tarfile dataset
python convert_to_tarred_audio_dataset.py \
--manifest_path=<path to the manifest file> \
--target_dir=<path to output directory> \
--num_shards=<number of tarfiles that will contain the audio> \
--max_duration=<float representing maximum duration of audio samples> \
--min_duration=<float representing minimum duration of audio samples> \
--shuffle --shuffle_seed=1 \
--sort_in_shards \
--workers=-1
2) Concatenating more tarfiles to a pre-existing tarred dataset
python convert_to_tarred_audio_dataset.py \
--manifest_path=<path to the tarred manifest file> \
--metadata_path=<path to the metadata.yaml (or metadata_version_{X}.yaml) file> \
--target_dir=<path to output directory where the original tarfiles are contained> \
--max_duration=<float representing maximum duration of audio samples> \
--min_duration=<float representing minimum duration of audio samples> \
--shuffle --shuffle_seed=1 \
--sort_in_shards \
--workers=-1 \
--concat_manifest_paths \
<space separated paths to 1 or more manifest files to concatenate into the original tarred dataset>
3) Writing an empty metadata file
python convert_to_tarred_audio_dataset.py \
--target_dir=<path to output directory> \
# any other optional argument
--num_shards=8 \
--max_duration=16.7 \
--min_duration=0.01 \
--shuffle \
--workers=-1 \
--sort_in_shards \
--shuffle_seed=1 \
--write_metadata
"""
import argparse
import copy
import json
import os
import random
import tarfile
from collections import defaultdict
from dataclasses import dataclass, field
from datetime import datetime
from typing import Any, List, Optional
from joblib import Parallel, delayed
from omegaconf import DictConfig, OmegaConf, open_dict
try:
import create_dali_tarred_dataset_index as dali_index
DALI_INDEX_SCRIPT_AVAILABLE = True
except (ImportError, ModuleNotFoundError, FileNotFoundError):
DALI_INDEX_SCRIPT_AVAILABLE = False
parser = argparse.ArgumentParser(
description="Convert an existing ASR dataset to tarballs compatible with TarredAudioToTextDataLayer."
)
parser.add_argument(
"--manifest_path", default=None, type=str, required=False, help="Path to the existing dataset's manifest."
)
parser.add_argument(
'--concat_manifest_paths',
nargs='+',
default=None,
type=str,
required=False,
help="Path to the additional dataset's manifests that will be concatenated with base dataset.",
)
# Optional arguments
parser.add_argument(
"--target_dir",
default='./tarred',
type=str,
help="Target directory for resulting tarballs and manifest. Defaults to `./tarred`. Creates the path if necessary.",
)
parser.add_argument(
"--metadata_path", required=False, default=None, type=str, help="Path to metadata file for the dataset.",
)
parser.add_argument(
"--num_shards",
default=-1,
type=int,
help="Number of shards (tarballs) to create. Used for partitioning data among workers.",
)
parser.add_argument(
'--max_duration',
default=None,
required=True,
type=float,
help='Maximum duration of audio clip in the dataset. By default, it is None and is required to be set.',
)
parser.add_argument(
'--min_duration',
default=None,
type=float,
help='Minimum duration of audio clip in the dataset. By default, it is None and will not filter files.',
)
parser.add_argument(
"--shuffle",
action='store_true',
help="Whether or not to randomly shuffle the samples in the manifest before tarring/sharding.",
)
parser.add_argument(
"--keep_files_together",
action='store_true',
help="Whether or not to keep entries from the same file (but different offsets) together when sorting before tarring/sharding.",
)
parser.add_argument(
"--sort_in_shards",
action='store_true',
help="Whether or not to sort samples inside the shards based on their duration.",
)
parser.add_argument(
"--buckets_num", type=int, default=1, help="Number of buckets to create based on duration.",
)
parser.add_argument("--shuffle_seed", type=int, default=None, help="Random seed for use if shuffling is enabled.")
parser.add_argument(
'--write_metadata',
action='store_true',
help=(
"Flag to write a blank metadata with the current call config. "
"Note that the metadata will not contain the number of shards, "
"and it must be filled out by the user."
),
)
parser.add_argument(
"--no_shard_manifests",
action='store_true',
help="Do not write sharded manifests along with the aggregated manifest.",
)
parser.add_argument('--workers', type=int, default=1, help='Number of worker processes')
args = parser.parse_args()
@dataclass
class ASRTarredDatasetConfig:
num_shards: int = -1
shuffle: bool = False
max_duration: Optional[float] = None
min_duration: Optional[float] = None
shuffle_seed: Optional[int] = None
sort_in_shards: bool = True
shard_manifests: bool = True
keep_files_together: bool = False
@dataclass
class ASRTarredDatasetMetadata:
created_datetime: Optional[str] = None
version: int = 0
num_samples_per_shard: Optional[int] = None
is_concatenated_manifest: bool = False
dataset_config: Optional[ASRTarredDatasetConfig] = ASRTarredDatasetConfig()
history: Optional[List[Any]] = field(default_factory=lambda: [])
def __post_init__(self):
self.created_datetime = self.get_current_datetime()
def get_current_datetime(self):
return datetime.now().strftime("%m-%d-%Y %H-%M-%S")
@classmethod
def from_config(cls, config: DictConfig):
obj = cls()
obj.__dict__.update(**config)
return obj
@classmethod
def from_file(cls, filepath: str):
config = OmegaConf.load(filepath)
return ASRTarredDatasetMetadata.from_config(config=config)
class ASRTarredDatasetBuilder:
"""
Helper class that constructs a tarred dataset from scratch, or concatenates tarred datasets
together and constructs manifests for them.
"""
def __init__(self):
self.config = None
def configure(self, config: ASRTarredDatasetConfig):
"""
Sets the config generated from command line overrides.
Args:
config: ASRTarredDatasetConfig dataclass object.
"""
self.config = config # type: ASRTarredDatasetConfig
        if self.config.num_shards <= 0:
            raise ValueError("`num_shards` must be > 0. Please fill in the metadata information correctly.")
    def create_new_dataset(self, manifest_path: str, target_dir: str = "./tarred/", num_workers: int = 1):
"""
Creates a new tarred dataset from a given manifest file.
Args:
manifest_path: Path to the original ASR manifest.
target_dir: Output directory.
num_workers: Integer denoting number of parallel worker processes which will write tarfiles.
Defaults to 1 - which denotes sequential worker process.
Output:
Writes tarfiles, along with the tarred dataset compatible manifest file.
Also preserves a record of the metadata used to construct this tarred dataset.
"""
if self.config is None:
raise ValueError("Config has not been set. Please call `configure(config: ASRTarredDatasetConfig)`")
if manifest_path is None:
raise FileNotFoundError("Manifest filepath cannot be None !")
config = self.config # type: ASRTarredDatasetConfig
if not os.path.exists(target_dir):
os.makedirs(target_dir)
# Read the existing manifest
entries, total_duration, filtered_entries, filtered_duration = self._read_manifest(manifest_path, config)
if len(filtered_entries) > 0:
print(f"Filtered {len(filtered_entries)} files which amounts to {filtered_duration} seconds of audio.")
print(
f"After filtering, manifest has {len(entries)} files which amounts to {total_duration} seconds of audio."
)
if len(entries) == 0:
print("No tarred dataset was created as there were 0 valid samples after filtering!")
return
if config.shuffle:
random.seed(config.shuffle_seed)
print("Shuffling...")
if config.keep_files_together:
filename_entries = defaultdict(list)
for ent in entries:
filename_entries[ent["audio_filepath"]].append(ent)
filenames = list(filename_entries.keys())
random.shuffle(filenames)
shuffled_entries = []
for filename in filenames:
shuffled_entries += filename_entries[filename]
entries = shuffled_entries
else:
random.shuffle(entries)
# Create shards and updated manifest entries
print(f"Number of samples added : {len(entries)}")
print(f"Remainder: {len(entries) % config.num_shards}")
start_indices = []
end_indices = []
# Build indices
for i in range(config.num_shards):
start_idx = (len(entries) // config.num_shards) * i
end_idx = start_idx + (len(entries) // config.num_shards)
print(f"Shard {i} has entries {start_idx} ~ {end_idx}")
files = set()
for ent_id in range(start_idx, end_idx):
files.add(entries[ent_id]["audio_filepath"])
print(f"Shard {i} contains {len(files)} files")
if i == config.num_shards - 1:
# We discard in order to have the same number of entries per shard.
print(f"Have {len(entries) - end_idx} entries left over that will be discarded.")
start_indices.append(start_idx)
end_indices.append(end_idx)
manifest_folder, _ = os.path.split(manifest_path)
with Parallel(n_jobs=num_workers, verbose=config.num_shards) as parallel:
# Call parallel tarfile construction
new_entries_list = parallel(
delayed(self._create_shard)(entries[start_idx:end_idx], target_dir, i, manifest_folder)
for i, (start_idx, end_idx) in enumerate(zip(start_indices, end_indices))
)
if config.shard_manifests:
sharded_manifests_dir = target_dir + '/sharded_manifests'
if not os.path.exists(sharded_manifests_dir):
os.makedirs(sharded_manifests_dir)
for manifest in new_entries_list:
shard_id = manifest[0]['shard_id']
new_manifest_shard_path = os.path.join(sharded_manifests_dir, f'manifest_{shard_id}.json')
with open(new_manifest_shard_path, 'w', encoding='utf-8') as m2:
for entry in manifest:
json.dump(entry, m2)
m2.write('\n')
# Flatten the list of list of entries to a list of entries
new_entries = [sample for manifest in new_entries_list for sample in manifest]
del new_entries_list
print("Total number of entries in manifest :", len(new_entries))
# Write manifest
new_manifest_path = os.path.join(target_dir, 'tarred_audio_manifest.json')
with open(new_manifest_path, 'w', encoding='utf-8') as m2:
for entry in new_entries:
json.dump(entry, m2)
m2.write('\n')
# Write metadata (default metadata for new datasets)
new_metadata_path = os.path.join(target_dir, 'metadata.yaml')
metadata = ASRTarredDatasetMetadata()
# Update metadata
metadata.dataset_config = config
metadata.num_samples_per_shard = len(new_entries) // config.num_shards
# Write metadata
metadata_yaml = OmegaConf.structured(metadata)
OmegaConf.save(metadata_yaml, new_metadata_path, resolve=True)
def create_concatenated_dataset(
self,
base_manifest_path: str,
manifest_paths: List[str],
metadata: ASRTarredDatasetMetadata,
target_dir: str = "./tarred_concatenated/",
num_workers: int = 1,
):
"""
Creates new tarfiles in order to create a concatenated dataset, whose manifest contains the data for
both the original dataset as well as the new data submitted in manifest paths.
Args:
base_manifest_path: Path to the manifest file which contains the information for the original
tarred dataset (with flattened paths).
manifest_paths: List of one or more paths to manifest files that will be concatenated with above
base tarred dataset.
metadata: ASRTarredDatasetMetadata dataclass instance with overrides from command line.
target_dir: Output directory
Output:
Writes tarfiles which with indices mapping to a "concatenated" tarred dataset,
along with the tarred dataset compatible manifest file which includes information
about all the datasets that comprise the concatenated dataset.
Also preserves a record of the metadata used to construct this tarred dataset.
"""
if not os.path.exists(target_dir):
os.makedirs(target_dir)
if base_manifest_path is None:
raise FileNotFoundError("Base manifest filepath cannot be None !")
if manifest_paths is None or len(manifest_paths) == 0:
raise FileNotFoundError("List of additional manifest filepaths cannot be None !")
config = ASRTarredDatasetConfig(**(metadata.dataset_config))
# Read the existing manifest (no filtering here)
base_entries, _, _, _ = self._read_manifest(base_manifest_path, config)
print(f"Read base manifest containing {len(base_entries)} samples.")
# Precompute number of samples per shard
if metadata.num_samples_per_shard is None:
num_samples_per_shard = len(base_entries) // config.num_shards
else:
num_samples_per_shard = metadata.num_samples_per_shard
print("Number of samples per shard :", num_samples_per_shard)
# Compute min and max duration and update config (if no metadata passed)
print(f"Selected max duration : {config.max_duration}")
print(f"Selected min duration : {config.min_duration}")
entries = []
for new_manifest_idx in range(len(manifest_paths)):
new_entries, total_duration, filtered_new_entries, filtered_duration = self._read_manifest(
manifest_paths[new_manifest_idx], config
)
if len(filtered_new_entries) > 0:
print(
f"Filtered {len(filtered_new_entries)} files which amounts to {filtered_duration:0.2f}"
f" seconds of audio from manifest {manifest_paths[new_manifest_idx]}."
)
            print(
                f"After filtering, manifest has {len(new_entries)} files which amounts to {total_duration} seconds of audio."
            )
entries.extend(new_entries)
if len(entries) == 0:
print("No tarred dataset was created as there were 0 valid samples after filtering!")
return
if config.shuffle:
random.seed(config.shuffle_seed)
print("Shuffling...")
random.shuffle(entries)
        # Drop last section of samples that cannot be added onto a chunk
        drop_count = len(entries) % num_samples_per_shard
        total_new_entries = len(entries)
        if drop_count > 0:
            # Guard the slice: entries[:-0] would discard every entry.
            entries = entries[:-drop_count]
        print(
            f"Dropping {drop_count} samples from total new samples {total_new_entries} since they cannot "
            f"be added into a uniformly sized chunk."
        )
# Create shards and updated manifest entries
num_added_shards = len(entries) // num_samples_per_shard
print(f"Number of samples in base dataset : {len(base_entries)}")
print(f"Number of samples in additional datasets : {len(entries)}")
print(f"Number of added shards : {num_added_shards}")
print(f"Remainder: {len(entries) % num_samples_per_shard}")
start_indices = []
end_indices = []
shard_indices = []
for i in range(num_added_shards):
start_idx = (len(entries) // num_added_shards) * i
end_idx = start_idx + (len(entries) // num_added_shards)
shard_idx = i + config.num_shards
print(f"Shard {shard_idx} has entries {start_idx + len(base_entries)} ~ {end_idx + len(base_entries)}")
start_indices.append(start_idx)
end_indices.append(end_idx)
shard_indices.append(shard_idx)
manifest_folder, _ = os.path.split(base_manifest_path)
with Parallel(n_jobs=num_workers, verbose=num_added_shards) as parallel:
# Call parallel tarfile construction
new_entries_list = parallel(
delayed(self._create_shard)(entries[start_idx:end_idx], target_dir, shard_idx, manifest_folder)
for i, (start_idx, end_idx, shard_idx) in enumerate(zip(start_indices, end_indices, shard_indices))
)
if config.shard_manifests:
sharded_manifests_dir = target_dir + '/sharded_manifests'
if not os.path.exists(sharded_manifests_dir):
os.makedirs(sharded_manifests_dir)
for manifest in new_entries_list:
shard_id = manifest[0]['shard_id']
new_manifest_shard_path = os.path.join(sharded_manifests_dir, f'manifest_{shard_id}.json')
with open(new_manifest_shard_path, 'w', encoding='utf-8') as m2:
for entry in manifest:
json.dump(entry, m2)
m2.write('\n')
# Flatten the list of list of entries to a list of entries
new_entries = [sample for manifest in new_entries_list for sample in manifest]
del new_entries_list
# Write manifest
if metadata is None:
new_version = 1 # start with `1`, where `0` indicates the base manifest + dataset
else:
new_version = metadata.version + 1
print("Total number of entries in manifest :", len(base_entries) + len(new_entries))
new_manifest_path = os.path.join(target_dir, f'tarred_audio_manifest_version_{new_version}.json')
with open(new_manifest_path, 'w', encoding='utf-8') as m2:
# First write all the entries of base manifest
for entry in base_entries:
json.dump(entry, m2)
m2.write('\n')
# Finally write the new entries
for entry in new_entries:
json.dump(entry, m2)
m2.write('\n')
# Preserve historical metadata
base_metadata = metadata
# Write metadata (updated metadata for concatenated datasets)
new_metadata_path = os.path.join(target_dir, f'metadata_version_{new_version}.yaml')
metadata = ASRTarredDatasetMetadata()
# Update config
config.num_shards = config.num_shards + num_added_shards
# Update metadata
metadata.version = new_version
metadata.dataset_config = config
metadata.num_samples_per_shard = num_samples_per_shard
metadata.is_concatenated_manifest = True
metadata.created_datetime = metadata.get_current_datetime()
# Attach history
current_metadata = OmegaConf.structured(base_metadata.history)
metadata.history = current_metadata
# Write metadata
metadata_yaml = OmegaConf.structured(metadata)
OmegaConf.save(metadata_yaml, new_metadata_path, resolve=True)
def _read_manifest(self, manifest_path: str, config: ASRTarredDatasetConfig):
"""Read and filters data from the manifest"""
# Read the existing manifest
entries = []
total_duration = 0.0
filtered_entries = []
filtered_duration = 0.0
with open(manifest_path, 'r', encoding='utf-8') as m:
for line in m:
entry = json.loads(line)
if (config.max_duration is None or entry['duration'] < config.max_duration) and (
config.min_duration is None or entry['duration'] >= config.min_duration
):
entries.append(entry)
total_duration += entry["duration"]
else:
filtered_entries.append(entry)
filtered_duration += entry['duration']
return entries, total_duration, filtered_entries, filtered_duration
def _create_shard(self, entries, target_dir, shard_id, manifest_folder):
"""Creates a tarball containing the audio files from `entries`.
"""
if self.config.sort_in_shards:
entries.sort(key=lambda x: x["duration"], reverse=False)
new_entries = []
tar = tarfile.open(os.path.join(target_dir, f'audio_{shard_id}.tar'), mode='w', dereference=True)
count = dict()
for entry in entries:
# We squash the filename since we do not preserve directory structure of audio files in the tarball.
if os.path.exists(entry["audio_filepath"]):
audio_filepath = entry["audio_filepath"]
else:
audio_filepath = os.path.join(manifest_folder, entry["audio_filepath"])
if not os.path.exists(audio_filepath):
raise FileNotFoundError(f"Could not find {entry['audio_filepath']}!")
base, ext = os.path.splitext(audio_filepath)
base = base.replace('/', '_')
# Need the following replacement as long as WebDataset splits on first period
base = base.replace('.', '_')
squashed_filename = f'{base}{ext}'
if squashed_filename not in count:
tar.add(audio_filepath, arcname=squashed_filename)
to_write = squashed_filename
count[squashed_filename] = 1
else:
to_write = base + "-sub" + str(count[squashed_filename]) + ext
count[squashed_filename] += 1
new_entry = {
'audio_filepath': to_write,
'duration': entry['duration'],
'shard_id': shard_id, # Keep shard ID for recordkeeping
}
if 'label' in entry:
new_entry['label'] = entry['label']
if 'text' in entry:
new_entry['text'] = entry['text']
if 'offset' in entry:
new_entry['offset'] = entry['offset']
if 'lang' in entry:
new_entry['lang'] = entry['lang']
new_entries.append(new_entry)
tar.close()
return new_entries
@classmethod
def setup_history(cls, base_metadata: ASRTarredDatasetMetadata, history: List[Any]):
if 'history' in base_metadata.keys():
for history_val in base_metadata.history:
cls.setup_history(history_val, history)
if base_metadata is not None:
metadata_copy = copy.deepcopy(base_metadata)
with open_dict(metadata_copy):
metadata_copy.pop('history', None)
history.append(metadata_copy)
def main():
if args.buckets_num > 1:
bucket_length = (args.max_duration - args.min_duration) / float(args.buckets_num)
for i in range(args.buckets_num):
min_duration = args.min_duration + i * bucket_length
max_duration = min_duration + bucket_length
if i == args.buckets_num - 1:
# add a small number to cover the samples with exactly duration of max_duration in the last bucket.
max_duration += 1e-5
target_dir = os.path.join(args.target_dir, f"bucket{i+1}")
print(f"Creating bucket {i+1} with min_duration={min_duration} and max_duration={max_duration} ...")
print(f"Results are being saved at: {target_dir}.")
create_tar_datasets(min_duration=min_duration, max_duration=max_duration, target_dir=target_dir)
print(f"Bucket {i+1} is created.")
else:
create_tar_datasets(min_duration=args.min_duration, max_duration=args.max_duration, target_dir=args.target_dir)
def create_tar_datasets(min_duration: float, max_duration: float, target_dir: str):
builder = ASRTarredDatasetBuilder()
shard_manifests = False if args.no_shard_manifests else True
if args.write_metadata:
metadata = ASRTarredDatasetMetadata()
dataset_cfg = ASRTarredDatasetConfig(
num_shards=args.num_shards,
shuffle=args.shuffle,
max_duration=max_duration,
min_duration=min_duration,
shuffle_seed=args.shuffle_seed,
sort_in_shards=args.sort_in_shards,
shard_manifests=shard_manifests,
keep_files_together=args.keep_files_together,
)
metadata.dataset_config = dataset_cfg
output_path = os.path.join(target_dir, 'default_metadata.yaml')
OmegaConf.save(metadata, output_path, resolve=True)
print(f"Default metadata written to {output_path}")
exit(0)
if args.concat_manifest_paths is None or len(args.concat_manifest_paths) == 0:
print("Creating new tarred dataset ...")
# Create a tarred dataset from scratch
config = ASRTarredDatasetConfig(
num_shards=args.num_shards,
shuffle=args.shuffle,
max_duration=max_duration,
min_duration=min_duration,
shuffle_seed=args.shuffle_seed,
sort_in_shards=args.sort_in_shards,
shard_manifests=shard_manifests,
keep_files_together=args.keep_files_together,
)
builder.configure(config)
builder.create_new_dataset(manifest_path=args.manifest_path, target_dir=target_dir, num_workers=args.workers)
else:
if args.buckets_num > 1:
raise ValueError("Concatenation feature does not support buckets_num > 1.")
print("Concatenating multiple tarred datasets ...")
# Implicitly update config from base details
if args.metadata_path is not None:
metadata = ASRTarredDatasetMetadata.from_file(args.metadata_path)
else:
raise ValueError("`metadata` yaml file path must be provided!")
# Preserve history
history = []
builder.setup_history(OmegaConf.structured(metadata), history)
metadata.history = history
# Add command line overrides (everything other than num_shards)
metadata.dataset_config.max_duration = max_duration
metadata.dataset_config.min_duration = min_duration
metadata.dataset_config.shuffle = args.shuffle
metadata.dataset_config.shuffle_seed = args.shuffle_seed
metadata.dataset_config.sort_in_shards = args.sort_in_shards
metadata.dataset_config.shard_manifests = shard_manifests
builder.configure(metadata.dataset_config)
# Concatenate a tarred dataset onto a previous one
builder.create_concatenated_dataset(
base_manifest_path=args.manifest_path,
manifest_paths=args.concat_manifest_paths,
metadata=metadata,
target_dir=target_dir,
num_workers=args.workers,
)
if DALI_INDEX_SCRIPT_AVAILABLE and dali_index.INDEX_CREATOR_AVAILABLE:
print("Constructing DALI Tarfile Index - ", target_dir)
index_config = dali_index.DALITarredIndexConfig(tar_dir=target_dir, workers=args.workers)
dali_index.main(index_config)
if __name__ == "__main__":
main()
[end of scripts/speech_recognition/convert_to_tarred_audio_dataset.py]
[start of tools/nemo_forced_aligner/align.py]
# Copyright (c) 2023, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import copy
import math
import os
from dataclasses import dataclass, field, is_dataclass
from pathlib import Path
from typing import List, Optional
import torch
from omegaconf import OmegaConf
from utils.data_prep import (
add_t_start_end_to_utt_obj,
get_batch_starts_ends,
get_batch_variables,
get_manifest_lines_batch,
is_entry_in_all_lines,
is_entry_in_any_lines,
)
from utils.make_ass_files import make_ass_files
from utils.make_ctm_files import make_ctm_files
from utils.make_output_manifest import write_manifest_out_line
from utils.viterbi_decoding import viterbi_decoding
from nemo.collections.asr.models.ctc_models import EncDecCTCModel
from nemo.collections.asr.models.hybrid_rnnt_ctc_models import EncDecHybridRNNTCTCModel
from nemo.collections.asr.parts.utils.streaming_utils import FrameBatchASR
from nemo.collections.asr.parts.utils.transcribe_utils import setup_model
from nemo.core.config import hydra_runner
from nemo.utils import logging
"""
Align the utterances in manifest_filepath.
Results are saved in ctm files in output_dir.
Arguments:
pretrained_name: string specifying the name of a CTC NeMo ASR model which will be automatically downloaded
from NGC and used for generating the log-probs which we will use to do alignment.
Note: NFA can only use CTC models (not Transducer models) at the moment.
model_path: string specifying the local filepath to a CTC NeMo ASR model which will be used to generate the
log-probs which we will use to do alignment.
Note: NFA can only use CTC models (not Transducer models) at the moment.
Note: if a model_path is provided, it will override the pretrained_name.
manifest_filepath: filepath to the manifest of the data you want to align,
containing 'audio_filepath' and 'text' fields.
output_dir: the folder where output CTM files and new JSON manifest will be saved.
align_using_pred_text: if True, will transcribe the audio using the specified model and then use that transcription
as the reference text for the forced alignment.
transcribe_device: None, or a string specifying the device that will be used for generating log-probs (i.e. "transcribing").
The string needs to be in a format recognized by torch.device(). If None, NFA will set it to 'cuda' if it is available
(otherwise will set it to 'cpu').
viterbi_device: None, or string specifying the device that will be used for doing Viterbi decoding.
The string needs to be in a format recognized by torch.device(). If None, NFA will set it to 'cuda' if it is available
(otherwise will set it to 'cpu').
batch_size: int specifying batch size that will be used for generating log-probs and doing Viterbi decoding.
use_local_attention: boolean flag specifying whether to try to use local attention for the ASR Model (will only
work if the ASR Model is a Conformer model). If local attention is used, we will set the local attention context
size to [64,64].
additional_segment_grouping_separator: an optional string used to separate the text into smaller segments.
If this is not specified, then the whole text will be treated as a single segment.
remove_blank_tokens_from_ctm: a boolean denoting whether to remove <blank> tokens from token-level output CTMs.
audio_filepath_parts_in_utt_id: int specifying how many of the 'parts' of the audio_filepath
we will use (starting from the final part of the audio_filepath) to determine the
utt_id that will be used in the CTM files. Note also that any spaces that are present in the audio_filepath
will be replaced with dashes, so as not to change the number of space-separated elements in the
CTM files.
e.g. if audio_filepath is "/a/b/c/d/e 1.wav" and audio_filepath_parts_in_utt_id is 1 => utt_id will be "e1"
e.g. if audio_filepath is "/a/b/c/d/e 1.wav" and audio_filepath_parts_in_utt_id is 2 => utt_id will be "d_e1"
e.g. if audio_filepath is "/a/b/c/d/e 1.wav" and audio_filepath_parts_in_utt_id is 3 => utt_id will be "c_d_e1"
use_buffered_chunked_streaming: False, if set True, use buffered chunked streaming to get the logits for alignment.
This flag is useful when aligning large audio files.
However, chunked streaming inference does not currently support batching,
which means that even if you set batch_size > 1, utterances will be inferred
one at a time instead of as a whole batch.
chunk_len_in_secs: float chunk length in seconds
total_buffer_in_secs: float Length of buffer (chunk + left and right padding) in seconds
chunk_batch_size: int batch size for buffered chunk inference,
which will cut one audio into segments and do inference on chunk_batch_size segments at a time
simulate_cache_aware_streaming: False, if set True, use cache-aware streaming to get the logits for alignment
save_output_file_formats: List of strings specifying what type of output files to save (default: ["ctm", "ass"])
ctm_file_config: CTMFileConfig to specify the configuration of the output CTM files
ass_file_config: ASSFileConfig to specify the configuration of the output ASS files
"""
@dataclass
class CTMFileConfig:
remove_blank_tokens: bool = False
# minimum duration (in seconds) for timestamps in the CTM. If any line in the CTM has a
# duration lower than this, it will be enlarged from the middle outwards until it
# meets the minimum_timestamp_duration, or reaches the beginning or end of the audio file.
# Note that this may cause timestamps to overlap.
minimum_timestamp_duration: float = 0
@dataclass
class ASSFileConfig:
fontsize: int = 20
vertical_alignment: str = "center"
# if resegment_text_to_fill_space is True, the ASS files will use new segments
# such that each segment will not take up more than (approximately) max_lines_per_segment
# when the ASS file is applied to a video
resegment_text_to_fill_space: bool = False
max_lines_per_segment: int = 2
text_already_spoken_rgb: List[int] = field(default_factory=lambda: [49, 46, 61]) # dark gray
text_being_spoken_rgb: List[int] = field(default_factory=lambda: [57, 171, 9]) # dark green
text_not_yet_spoken_rgb: List[int] = field(default_factory=lambda: [194, 193, 199]) # light gray
@dataclass
class AlignmentConfig:
# Required configs
pretrained_name: Optional[str] = None
model_path: Optional[str] = None
manifest_filepath: Optional[str] = None
output_dir: Optional[str] = None
# General configs
align_using_pred_text: bool = False
transcribe_device: Optional[str] = None
viterbi_device: Optional[str] = None
batch_size: int = 1
use_local_attention: bool = True
additional_segment_grouping_separator: Optional[str] = None
audio_filepath_parts_in_utt_id: int = 1
# Buffered chunked streaming configs
use_buffered_chunked_streaming: bool = False
chunk_len_in_secs: float = 1.6
total_buffer_in_secs: float = 4.0
chunk_batch_size: int = 32
# Cache aware streaming configs
simulate_cache_aware_streaming: Optional[bool] = False
# Output file configs
save_output_file_formats: List[str] = field(default_factory=lambda: ["ctm", "ass"])
ctm_file_config: CTMFileConfig = field(default_factory=CTMFileConfig)
ass_file_config: ASSFileConfig = field(default_factory=ASSFileConfig)
@hydra_runner(config_name="AlignmentConfig", schema=AlignmentConfig)
def main(cfg: AlignmentConfig):
logging.info(f'Hydra config: {OmegaConf.to_yaml(cfg)}')
if is_dataclass(cfg):
cfg = OmegaConf.structured(cfg)
# Validate config
if cfg.model_path is None and cfg.pretrained_name is None:
raise ValueError("Both cfg.model_path and cfg.pretrained_name cannot be None")
if cfg.model_path is not None and cfg.pretrained_name is not None:
raise ValueError("One of cfg.model_path and cfg.pretrained_name must be None")
if cfg.manifest_filepath is None:
raise ValueError("cfg.manifest_filepath must be specified")
if cfg.output_dir is None:
raise ValueError("cfg.output_dir must be specified")
if cfg.batch_size < 1:
raise ValueError("cfg.batch_size cannot be zero or a negative number")
if cfg.additional_segment_grouping_separator == "" or cfg.additional_segment_grouping_separator == " ":
raise ValueError("cfg.additional_segment_grouping_separator cannot be empty string or space character")
if cfg.ctm_file_config.minimum_timestamp_duration < 0:
raise ValueError("cfg.ctm_file_config.minimum_timestamp_duration cannot be a negative number")
if cfg.ass_file_config.vertical_alignment not in ["top", "center", "bottom"]:
raise ValueError("cfg.ass_file_config.vertical_alignment must be one of 'top', 'center' or 'bottom'")
for rgb_list in [
cfg.ass_file_config.text_already_spoken_rgb,
cfg.ass_file_config.text_being_spoken_rgb,
cfg.ass_file_config.text_not_yet_spoken_rgb,
]:
if len(rgb_list) != 3:
raise ValueError(
"cfg.ass_file_config.text_already_spoken_rgb,"
" cfg.ass_file_config.text_being_spoken_rgb,"
" and cfg.ass_file_config.text_not_yet_spoken_rgb all need to contain"
" exactly 3 elements."
)
# Validate manifest contents
if not is_entry_in_all_lines(cfg.manifest_filepath, "audio_filepath"):
raise RuntimeError(
"At least one line in cfg.manifest_filepath does not contain an 'audio_filepath' entry. "
"All lines must contain an 'audio_filepath' entry."
)
if cfg.align_using_pred_text:
if is_entry_in_any_lines(cfg.manifest_filepath, "pred_text"):
raise RuntimeError(
"Cannot specify cfg.align_using_pred_text=True when the manifest at cfg.manifest_filepath "
"contains 'pred_text' entries. This is because the audio will be transcribed and may produce "
"a different 'pred_text'. This may cause confusion."
)
else:
if not is_entry_in_all_lines(cfg.manifest_filepath, "text"):
raise RuntimeError(
"At least one line in cfg.manifest_filepath does not contain a 'text' entry. "
"NFA requires all lines to contain a 'text' entry when cfg.align_using_pred_text=False."
)
# init devices
if cfg.transcribe_device is None:
transcribe_device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
else:
transcribe_device = torch.device(cfg.transcribe_device)
logging.info(f"Device to be used for transcription step (`transcribe_device`) is {transcribe_device}")
if cfg.viterbi_device is None:
viterbi_device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
else:
viterbi_device = torch.device(cfg.viterbi_device)
logging.info(f"Device to be used for viterbi step (`viterbi_device`) is {viterbi_device}")
if transcribe_device.type == 'cuda' or viterbi_device.type == 'cuda':
logging.warning(
'One or both of transcribe_device and viterbi_device are GPUs. If you run into OOM errors '
'it may help to change both devices to be the CPU.'
)
# load model
model, _ = setup_model(cfg, transcribe_device)
model.eval()
if isinstance(model, EncDecHybridRNNTCTCModel):
model.change_decoding_strategy(decoder_type="ctc")
if cfg.use_local_attention:
logging.info(
"Flag use_local_attention is set to True => will try to use local attention for model if it allows it"
)
model.change_attention_model(self_attention_model="rel_pos_local_attn", att_context_size=[64, 64])
if not (isinstance(model, EncDecCTCModel) or isinstance(model, EncDecHybridRNNTCTCModel)):
raise NotImplementedError(
f"Model is not an instance of NeMo EncDecCTCModel or EncDecHybridRNNTCTCModel."
" Currently only instances of these models are supported"
)
if cfg.ctm_file_config.minimum_timestamp_duration > 0:
logging.warning(
f"cfg.ctm_file_config.minimum_timestamp_duration has been set to {cfg.ctm_file_config.minimum_timestamp_duration} seconds. "
"This may cause the alignments for some tokens/words/additional segments to be overlapping."
)
buffered_chunk_params = {}
if cfg.use_buffered_chunked_streaming:
model_cfg = copy.deepcopy(model._cfg)
OmegaConf.set_struct(model_cfg.preprocessor, False)
# some changes for streaming scenario
model_cfg.preprocessor.dither = 0.0
model_cfg.preprocessor.pad_to = 0
if model_cfg.preprocessor.normalize != "per_feature":
logging.error(
"Only EncDecCTCModelBPE models trained with per_feature normalization are supported currently"
)
# Disable config overwriting
OmegaConf.set_struct(model_cfg.preprocessor, True)
feature_stride = model_cfg.preprocessor['window_stride']
model_stride_in_secs = feature_stride * cfg.model_downsample_factor
total_buffer = cfg.total_buffer_in_secs
chunk_len = float(cfg.chunk_len_in_secs)
tokens_per_chunk = math.ceil(chunk_len / model_stride_in_secs)
mid_delay = math.ceil((chunk_len + (total_buffer - chunk_len) / 2) / model_stride_in_secs)
logging.info(f"tokens_per_chunk is {tokens_per_chunk}, mid_delay is {mid_delay}")
model = FrameBatchASR(
asr_model=model,
frame_len=chunk_len,
total_buffer=cfg.total_buffer_in_secs,
batch_size=cfg.chunk_batch_size,
)
buffered_chunk_params = {
"delay": mid_delay,
"model_stride_in_secs": model_stride_in_secs,
"tokens_per_chunk": tokens_per_chunk,
}
# get start and end line IDs of batches
starts, ends = get_batch_starts_ends(cfg.manifest_filepath, cfg.batch_size)
# init output_timestep_duration = None and we will calculate and update it during the first batch
output_timestep_duration = None
# init f_manifest_out
os.makedirs(cfg.output_dir, exist_ok=True)
tgt_manifest_name = str(Path(cfg.manifest_filepath).stem) + "_with_output_file_paths.json"
tgt_manifest_filepath = str(Path(cfg.output_dir) / tgt_manifest_name)
f_manifest_out = open(tgt_manifest_filepath, 'w')
# get alignment and save in CTM batch-by-batch
for start, end in zip(starts, ends):
manifest_lines_batch = get_manifest_lines_batch(cfg.manifest_filepath, start, end)
(log_probs_batch, y_batch, T_batch, U_batch, utt_obj_batch, output_timestep_duration,) = get_batch_variables(
manifest_lines_batch,
model,
cfg.additional_segment_grouping_separator,
cfg.align_using_pred_text,
cfg.audio_filepath_parts_in_utt_id,
output_timestep_duration,
cfg.simulate_cache_aware_streaming,
cfg.use_buffered_chunked_streaming,
buffered_chunk_params,
)
alignments_batch = viterbi_decoding(log_probs_batch, y_batch, T_batch, U_batch, viterbi_device)
for utt_obj, alignment_utt in zip(utt_obj_batch, alignments_batch):
utt_obj = add_t_start_end_to_utt_obj(utt_obj, alignment_utt, output_timestep_duration)
if "ctm" in cfg.save_output_file_formats:
utt_obj = make_ctm_files(utt_obj, cfg.output_dir, cfg.ctm_file_config,)
if "ass" in cfg.save_output_file_formats:
utt_obj = make_ass_files(utt_obj, cfg.output_dir, cfg.ass_file_config)
write_manifest_out_line(
f_manifest_out, utt_obj,
)
f_manifest_out.close()
return None
if __name__ == "__main__":
main()
[end of tools/nemo_forced_aligner/align.py]
</code>
I need you to solve this issue by regenerating the full files in the code base that you would like to change. You can change as many files as you like. Please respond with a list of files and their revised contents in the following format.
<example>
[start of /src/this_file.py]
import os
def euclidean(a, b):
if b == 0:
return a
return euclidean(b, a % b)
[end of /src/this_file.py]
[start of /src/another_file.py]
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
x, y = x0, y0
sx = -1 if x0 > x1 else 1
sy = -1 if y0 > y1 else 1
if dx > dy:
err = dx / 2.0
while x != x1:
points.append((x, y))
err -= dy
if err < 0:
y += sy
err += dx
x += sx
else:
err = dy / 2.0
while y != y1:
points.append((x, y))
err -= dx
if err < 0:
x += sx
err += dy
y += sy
points.append((x, y))
return points
[end of /src/another_file.py]
</example>
|
NVIDIA/NeMo
|
15db83ec4a65e649d83b61d7a4a58d911586e853
|
Ubuntu 22.04 Python 3.11 [asr]: multiple errors `dataclasses ValueError: mutable default * for field * is not allowed: use default_factory`
**Describe the bug**
After installing the latest stable `1.19.1` from PyPI, or the latest current commit with `[asr]` extras, I'm getting this error from `hydra-core==1.2.0` when trying to import `nemo.collections.asr`:
```
Traceback (most recent call last):
File "/home/alex/T7/src/speaker_verification/nemo_speaker/test.py", line 5, in <module>
import nemo.collections.asr as nemo_asr
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/__init__.py", line 15, in <module>
from nemo.collections.asr import data, losses, models, modules
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/losses/__init__.py", line 15, in <module>
from nemo.collections.asr.losses.angularloss import AngularSoftmaxLoss
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/losses/angularloss.py", line 18, in <module>
from nemo.core.classes import Loss, Typing, typecheck
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/core/__init__.py", line 16, in <module>
from nemo.core.classes import *
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/core/classes/__init__.py", line 16, in <module>
import hydra
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/__init__.py", line 5, in <module>
from hydra import utils
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/utils.py", line 8, in <module>
import hydra._internal.instantiate._instantiate2
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 12, in <module>
from hydra._internal.utils import _locate
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/_internal/utils.py", line 18, in <module>
from hydra.core.utils import get_valid_filename, validate_config_path
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/core/utils.py", line 20, in <module>
from hydra.core.hydra_config import HydraConfig
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/core/hydra_config.py", line 6, in <module>
from hydra.conf import HydraConf
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/conf/__init__.py", line 46, in <module>
class JobConf:
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/conf/__init__.py", line 75, in JobConf
@dataclass
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1230, in dataclass
return wrap(cls)
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1220, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 958, in _process_class
cls_fields.append(_get_field(cls, name, type, kw_only))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 815, in _get_field
raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'hydra.conf.JobConf.JobConfig.OverrideDirname'> for field override_dirname is not allowed: use default_factory
```
If I then manually upgrade `hydra-core` to the current latest (`1.3.2`), I get similar errors from `nemo-toolkit`:
```
Traceback (most recent call last):
File "/home/alex/T7/src/speaker_verification/nemo_speaker/test.py", line 5, in <module>
import nemo.collections.asr as nemo_asr
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/__init__.py", line 15, in <module>
from nemo.collections.asr import data, losses, models, modules
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/losses/__init__.py", line 16, in <module>
from nemo.collections.asr.losses.audio_losses import SDRLoss
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/losses/audio_losses.py", line 21, in <module>
from nemo.collections.asr.parts.preprocessing.features import make_seq_mask_like
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/__init__.py", line 16, in <module>
from nemo.collections.asr.parts.preprocessing.features import FeaturizerFactory, FilterbankFeatures, WaveformFeaturizer
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/features.py", line 44, in <module>
from nemo.collections.asr.parts.preprocessing.perturb import AudioAugmentor
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/perturb.py", line 50, in <module>
from nemo.collections.common.parts.preprocessing import collections, parsers
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/common/__init__.py", line 16, in <module>
from nemo.collections.common import data, losses, parts, tokenizers
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/common/parts/__init__.py", line 15, in <module>
from nemo.collections.common.parts.adapter_modules import LinearAdapter, LinearAdapterConfig
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/common/parts/adapter_modules.py", line 147, in <module>
@dataclass
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1230, in dataclass
return wrap(cls)
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1220, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 958, in _process_class
cls_fields.append(_get_field(cls, name, type, kw_only))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 815, in _get_field
raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'nemo.core.classes.mixins.adapter_mixin_strategies.ResidualAddAdapterStrategyConfig'> for field adapter_strategy is not allowed: use default_factory
```
It's easy to fix with a patch like this:
```
--- nemo/collections/common/parts/adapter_modules.py.orig 2023-08-04 13:55:53.464534800 +0200
+++ nemo/collections/common/parts/adapter_modules.py 2023-08-04 14:05:45.579537700 +0200
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass, is_dataclass
+from dataclasses import dataclass, is_dataclass, field
from typing import Any, Optional
from hydra.utils import instantiate
@@ -151,5 +151,5 @@ class LinearAdapterConfig:
activation: str = 'swish'
norm_position: str = 'pre'
dropout: float = 0.0
- adapter_strategy: Optional[Any] = adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
+ adapter_strategy: Optional[Any] = field(default_factory=adapter_mixin_strategies.ResidualAddAdapterStrategyConfig)
_target_: str = "{0}.{1}".format(LinearAdapter.__module__, LinearAdapter.__name__)
```
However then another error of the kind comes up:
```
Traceback (most recent call last):
File "/home/alex/T7/src/speaker_verification/nemo_speaker/test.py", line 5, in <module>
import nemo.collections.asr as nemo_asr
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/__init__.py", line 15, in <module>
from nemo.collections.asr import data, losses, models, modules
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/models/__init__.py", line 18, in <module>
from nemo.collections.asr.models.clustering_diarizer import ClusteringDiarizer
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/models/clustering_diarizer.py", line 29, in <module>
from nemo.collections.asr.metrics.der import score_labels
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/metrics/der.py", line 24, in <module>
from nemo.collections.asr.metrics.wer import word_error_rate
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/metrics/wer.py", line 27, in <module>
from nemo.collections.asr.parts.submodules import ctc_beam_decoding, ctc_greedy_decoding
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/parts/submodules/ctc_beam_decoding.py", line 593, in <module>
@dataclass
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1230, in dataclass
return wrap(cls)
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1220, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 958, in _process_class
cls_fields.append(_get_field(cls, name, type, kw_only))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 815, in _get_field
raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'nemo.collections.asr.parts.submodules.ctc_beam_decoding.FlashlightConfig'> for field flashlight_cfg is not allowed: use default_factory
```
This can also be fixed accordingly, but all in all, it appears these issues are pretty common in the code base.
Looks like NeMo isn't Python 3.11 ready, at least the `asr` collection.
**Steps/Code to reproduce bug**
With Python 3.11
1. Install `nemo-toolkit[asr]` either 1.19.1, or current HEAD from git (b498d438fc4c35ebf364a9a1c5cd3e29a2c0fe50)
2. Run `import nemo.collections.asr`
**Expected behavior**
Import `nemo.collections.asr` without errors
**Environment overview (please complete the following information)**
- Environment location: Bare-metal
- Method of NeMo install: `pip install nemo-toolkit==1.19.1` or from source b498d438fc4c35ebf364a9a1c5cd3e29a2c0fe50
**Environment details**
- Ubuntu 22.04
- PyTorch version 2.0.1
- Python version 3.11.4
|
Seems to be a similar to #7002
Interesting. The fix is easy but needs to be applied to basically every single place that has this constructor for our adapter configs. Let me see if I can update it. But no guarantees on how soon fixes will come in main.
Looking forward to it @titu1994 ! Thanks!
@titu1994 I was looking to use NeMo speaker diarization with Python 3.11 and hit this dataclass issue. I patched everything involved in the specific code paths I needed: https://github.com/lmnt-com/NeMo/commit/d89acf9f0152e43dee29d7d1c4667ee34c26ffd7
I was using the neural diarizer as described in https://github.com/NVIDIA/NeMo/tree/main/examples/speaker_tasks/diarization
I'd be happy to upstream this if it's helpful.
I haven't checked whether this is backwards compatible for earlier python/dataclass versions, do you know?
For reference, what led me to this issue, though it's duplicative to the above discussion:
[Similar error](https://github.com/huggingface/datasets/issues/5230)
[StackOverflow solution](https://stackoverflow.com/questions/53632152/why-cant-dataclasses-have-mutable-defaults-in-their-class-attributes-declaratio)
@shaper Thanks for sharing. For brevity, you don't really need a `lambda` when you don't pass any init parameters, like this:
```
field(default_factory=lambda: ConfidenceConfig())
```
You can just do
```
field(default_factory=ConfidenceConfig)
```
It's only needed when you do pass parameter(s), like
```
field(default_factory=lambda: beam_decode.BeamRNNTInferConfig(beam_size=4))
```
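Putting the two forms together, here is a self-contained sketch; the `BeamConfig`/`DecodingConfig` names are made up for illustration and do not correspond to actual NeMo classes:

```python
from dataclasses import dataclass, field

@dataclass
class BeamConfig:
    beam_size: int = 1

@dataclass
class DecodingConfig:
    # no init arguments needed: pass the class itself as the factory
    greedy: BeamConfig = field(default_factory=BeamConfig)
    # init arguments needed: wrap the call in a lambda so it runs per instance
    beam: BeamConfig = field(default_factory=lambda: BeamConfig(beam_size=4))

cfg = DecodingConfig()
print(cfg.greedy.beam_size, cfg.beam.beam_size)  # 1 4
```

Either way, the factory is called once per `DecodingConfig()` instantiation, so each instance gets its own fresh sub-config instead of sharing one class-level object.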
This issue is stale because it has been open for 30 days with no activity. Remove stale label or comment or this will be closed in 7 days.
I have the same issue. @tango4j suggested using one of the models from https://huggingface.co/spaces/hf-audio/open_asr_leaderboard, but I cannot import nemo.collections.asr:
```
Traceback (most recent call last):
File "/opt/pycharm-2022.3.3/plugins/python/helpers/pycharm/docrunner.py", line 138, in __run
exec(compile(example.source, filename, "single",
File "<doctest NeMoASR[2]>", line 1, in <module>
NeMoASR().apply_asr(file)
^^^^^^^^^
File "/home/cbj/python/cbj/cbj/transcribe/pretrained.py", line 504, in __init__
import nemo.collections.asr as nemo_asr
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/__init__.py", line 15, in <module>
from nemo.collections.asr import data, losses, models, modules
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/losses/__init__.py", line 16, in <module>
from nemo.collections.asr.losses.audio_losses import SDRLoss
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/losses/audio_losses.py", line 21, in <module>
from nemo.collections.asr.parts.preprocessing.features import make_seq_mask_like
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/__init__.py", line 16, in <module>
from nemo.collections.asr.parts.preprocessing.features import FeaturizerFactory, FilterbankFeatures, WaveformFeaturizer
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/features.py", line 44, in <module>
from nemo.collections.asr.parts.preprocessing.perturb import AudioAugmentor
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/perturb.py", line 50, in <module>
from nemo.collections.common.parts.preprocessing import collections, parsers
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/common/__init__.py", line 16, in <module>
from nemo.collections.common import data, losses, parts, tokenizers
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/common/parts/__init__.py", line 15, in <module>
from nemo.collections.common.parts.adapter_modules import LinearAdapter, LinearAdapterConfig
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/common/parts/adapter_modules.py", line 147, in <module>
@dataclass
^^^^^^^^^
File "/opt/py/2023/lib/python3.11/dataclasses.py", line 1230, in dataclass
return wrap(cls)
^^^^^^^^^
File "/opt/py/2023/lib/python3.11/dataclasses.py", line 1220, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/py/2023/lib/python3.11/dataclasses.py", line 958, in _process_class
cls_fields.append(_get_field(cls, name, type, kw_only))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/py/2023/lib/python3.11/dataclasses.py", line 815, in _get_field
raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'nemo.core.classes.mixins.adapter_mixin_strategies.ResidualAddAdapterStrategyConfig'> for field adapter_strategy is not allowed: use default_factory
```
For documentation purposes (I had to search through the provided links): mutable defaults were never allowed in dataclasses (by convention), but in Python 3.11 the check was improved: instead of checking a few specific types (`dict`, `list`, `set`), hashability is now used as the indicator of mutability.
An alternative to `default_factory` would be to use frozen dataclasses, but I don't know whether the configs are used as mutable objects in this code base.
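The behavior described above can be sketched with a minimal standalone example (using hypothetical `Inner`/`Outer` classes, not NeMo code): on Python 3.11+ a non-frozen dataclass instance used as a field default is rejected, because non-frozen dataclasses are unhashable and 3.11 now treats "unhashable" as "mutable". The `default_factory` form, which is exactly the fix applied throughout the patch below, gives each instance its own copy:

```python
from dataclasses import dataclass, field

@dataclass
class Inner:
    value: int = 0

# On Python 3.11+, writing `inner: Inner = Inner()` below would raise:
#   ValueError: mutable default <class 'Inner'> for field inner is not
#   allowed: use default_factory
# because Inner is non-frozen, hence unhashable, hence treated as mutable.

@dataclass
class Outer:
    # default_factory is called once per Outer instance, so instances
    # never share a single mutable Inner object.
    inner: Inner = field(default_factory=Inner)

a, b = Outer(), Outer()
a.inner.value = 5
print(b.inner.value)  # b has its own Inner, unaffected by a
```

This also shows why `default_factory` is preferable to the pre-3.11 shared-instance defaults even where they were accepted: mutating one config instance no longer silently mutates every other instance built from the same class.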
You need to update to NeMo 1.20; omegaconf made a fix that should resolve this.
I have NeMo 1.20.0.
I installed NeMo yesterday with `pip install nemo_toolkit` and `pip install pytorch_lightning`, so it should be the newest PyPI version.
```
$ pip show nemo_toolkit
Name: nemo-toolkit
Version: 1.20.0
Summary: NeMo - a toolkit for Conversational AI
Home-page: https://github.com/nvidia/nemo
Author: NVIDIA
Author-email: nemo-toolkit@nvidia.com
License: Apache2
Location: /opt/py/2023/lib/python3.11/site-packages
Requires: huggingface-hub, numba, numpy, onnx, python-dateutil, ruamel.yaml, scikit-learn, setuptools, tensorboard, text-unidecode, torch, tqdm, wget, wrapt
Required-by:
$ pip show omegaconf
Name: omegaconf
Version: 2.3.0
Summary: A flexible configuration library
Home-page: https://github.com/omry/omegaconf
Author: Omry Yadan
Author-email: omry@yadan.net
License:
Location: /home/cbj/.local/lib/python3.11/site-packages
Requires: antlr4-python3-runtime, PyYAML
Required-by: hydra-core
$ python -c "import nemo.collections.asr as nemo_asr"
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/__init__.py", line 15, in <module>
from nemo.collections.asr import data, losses, models, modules
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/losses/__init__.py", line 16, in <module>
from nemo.collections.asr.losses.audio_losses import SDRLoss
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/losses/audio_losses.py", line 21, in <module>
from nemo.collections.asr.parts.preprocessing.features import make_seq_mask_like
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/__init__.py", line 16, in <module>
from nemo.collections.asr.parts.preprocessing.features import FeaturizerFactory, FilterbankFeatures, WaveformFeaturizer
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/features.py", line 44, in <module>
from nemo.collections.asr.parts.preprocessing.perturb import AudioAugmentor
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/perturb.py", line 50, in <module>
from nemo.collections.common.parts.preprocessing import collections, parsers
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/common/__init__.py", line 16, in <module>
from nemo.collections.common import data, losses, parts, tokenizers
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/common/parts/__init__.py", line 15, in <module>
from nemo.collections.common.parts.adapter_modules import LinearAdapter, LinearAdapterConfig
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/common/parts/adapter_modules.py", line 147, in <module>
@dataclass
^^^^^^^^^
File "/opt/py/2023/lib/python3.11/dataclasses.py", line 1230, in dataclass
return wrap(cls)
^^^^^^^^^
File "/opt/py/2023/lib/python3.11/dataclasses.py", line 1220, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/py/2023/lib/python3.11/dataclasses.py", line 958, in _process_class
cls_fields.append(_get_field(cls, name, type, kw_only))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/py/2023/lib/python3.11/dataclasses.py", line 815, in _get_field
raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'nemo.core.classes.mixins.adapter_mixin_strategies.ResidualAddAdapterStrategyConfig'> for field adapter_strategy is not allowed: use default_factory
```
Hmm ok I'll take a look
2023-10-03T19:14:38Z
<patch>
diff --git a/examples/asr/experimental/k2/align_speech_parallel.py b/examples/asr/experimental/k2/align_speech_parallel.py
--- a/examples/asr/experimental/k2/align_speech_parallel.py
+++ b/examples/asr/experimental/k2/align_speech_parallel.py
@@ -74,7 +74,7 @@
import os
-from dataclasses import dataclass, is_dataclass
+from dataclasses import dataclass, field, is_dataclass
from typing import Optional
import pytorch_lightning as ptl
@@ -94,12 +94,14 @@
@dataclass
class ParallelAlignmentConfig:
model: Optional[str] = None # name
- predict_ds: ASRDatasetConfig = ASRDatasetConfig(return_sample_id=True, num_workers=4)
- aligner_args: K2AlignerWrapperModelConfig = K2AlignerWrapperModelConfig()
+ predict_ds: ASRDatasetConfig = field(
+ default_factory=lambda: ASRDatasetConfig(return_sample_id=True, num_workers=4)
+ )
+ aligner_args: K2AlignerWrapperModelConfig = field(default_factory=lambda: K2AlignerWrapperModelConfig())
output_path: str = MISSING
model_stride: int = 8
- trainer: TrainerConfig = TrainerConfig(gpus=-1, accelerator="ddp")
+ trainer: TrainerConfig = field(default_factory=lambda: TrainerConfig(gpus=-1, accelerator="ddp"))
# there arguments will be ignored
return_predictions: bool = False
diff --git a/nemo/collections/asr/metrics/rnnt_wer.py b/nemo/collections/asr/metrics/rnnt_wer.py
--- a/nemo/collections/asr/metrics/rnnt_wer.py
+++ b/nemo/collections/asr/metrics/rnnt_wer.py
@@ -15,7 +15,7 @@
import copy
import re
from abc import abstractmethod
-from dataclasses import dataclass, is_dataclass
+from dataclasses import dataclass, field, is_dataclass
from typing import Callable, Dict, List, Optional, Tuple, Union
import editdistance
@@ -1299,7 +1299,7 @@ class RNNTDecodingConfig:
preserve_alignments: Optional[bool] = None
# confidence config
- confidence_cfg: ConfidenceConfig = ConfidenceConfig()
+ confidence_cfg: ConfidenceConfig = field(default_factory=lambda: ConfidenceConfig())
# RNNT Joint fused batch size
fused_batch_size: Optional[int] = None
@@ -1317,10 +1317,10 @@ class RNNTDecodingConfig:
rnnt_timestamp_type: str = "all" # can be char, word or all for both
# greedy decoding config
- greedy: greedy_decode.GreedyRNNTInferConfig = greedy_decode.GreedyRNNTInferConfig()
+ greedy: greedy_decode.GreedyRNNTInferConfig = field(default_factory=lambda: greedy_decode.GreedyRNNTInferConfig())
# beam decoding config
- beam: beam_decode.BeamRNNTInferConfig = beam_decode.BeamRNNTInferConfig(beam_size=4)
+ beam: beam_decode.BeamRNNTInferConfig = field(default_factory=lambda: beam_decode.BeamRNNTInferConfig(beam_size=4))
# can be used to change temperature for decoding
temperature: float = 1.0
diff --git a/nemo/collections/asr/metrics/wer.py b/nemo/collections/asr/metrics/wer.py
--- a/nemo/collections/asr/metrics/wer.py
+++ b/nemo/collections/asr/metrics/wer.py
@@ -14,7 +14,7 @@
import re
from abc import abstractmethod
-from dataclasses import dataclass, is_dataclass
+from dataclasses import dataclass, field, is_dataclass
from typing import Callable, Dict, List, Optional, Tuple, Union
import editdistance
@@ -1297,13 +1297,17 @@ class CTCDecodingConfig:
batch_dim_index: int = 0
# greedy decoding config
- greedy: ctc_greedy_decoding.GreedyCTCInferConfig = ctc_greedy_decoding.GreedyCTCInferConfig()
+ greedy: ctc_greedy_decoding.GreedyCTCInferConfig = field(
+ default_factory=lambda: ctc_greedy_decoding.GreedyCTCInferConfig()
+ )
# beam decoding config
- beam: ctc_beam_decoding.BeamCTCInferConfig = ctc_beam_decoding.BeamCTCInferConfig(beam_size=4)
+ beam: ctc_beam_decoding.BeamCTCInferConfig = field(
+ default_factory=lambda: ctc_beam_decoding.BeamCTCInferConfig(beam_size=4)
+ )
# confidence config
- confidence_cfg: ConfidenceConfig = ConfidenceConfig()
+ confidence_cfg: ConfidenceConfig = field(default_factory=lambda: ConfidenceConfig())
# can be used to change temperature for decoding
temperature: float = 1.0
diff --git a/nemo/collections/asr/models/configs/aligner_config.py b/nemo/collections/asr/models/configs/aligner_config.py
--- a/nemo/collections/asr/models/configs/aligner_config.py
+++ b/nemo/collections/asr/models/configs/aligner_config.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from nemo.collections.asr.parts.k2.classes import GraphModuleConfig
@@ -35,10 +35,10 @@ class AlignerWrapperModelConfig:
word_output: bool = True
cpu_decoding: bool = False
decode_batch_size: int = 0
- ctc_cfg: AlignerCTCConfig = AlignerCTCConfig()
- rnnt_cfg: AlignerRNNTConfig = AlignerRNNTConfig()
+ ctc_cfg: AlignerCTCConfig = field(default_factory=lambda: AlignerCTCConfig())
+ rnnt_cfg: AlignerRNNTConfig = field(default_factory=lambda: AlignerRNNTConfig())
@dataclass
class K2AlignerWrapperModelConfig(AlignerWrapperModelConfig):
- decoder_module_cfg: GraphModuleConfig = GraphModuleConfig()
+ decoder_module_cfg: GraphModuleConfig = field(default_factory=lambda: GraphModuleConfig())
diff --git a/nemo/collections/asr/models/configs/asr_models_config.py b/nemo/collections/asr/models/configs/asr_models_config.py
--- a/nemo/collections/asr/models/configs/asr_models_config.py
+++ b/nemo/collections/asr/models/configs/asr_models_config.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional
from omegaconf import MISSING
@@ -74,24 +74,32 @@ class EncDecCTCConfig(model_cfg.ModelConfig):
labels: List[str] = MISSING
# Dataset configs
- train_ds: ASRDatasetConfig = ASRDatasetConfig(manifest_filepath=None, shuffle=True)
- validation_ds: ASRDatasetConfig = ASRDatasetConfig(manifest_filepath=None, shuffle=False)
- test_ds: ASRDatasetConfig = ASRDatasetConfig(manifest_filepath=None, shuffle=False)
+ train_ds: ASRDatasetConfig = field(default_factory=lambda: ASRDatasetConfig(manifest_filepath=None, shuffle=True))
+ validation_ds: ASRDatasetConfig = field(
+ default_factory=lambda: ASRDatasetConfig(manifest_filepath=None, shuffle=False)
+ )
+ test_ds: ASRDatasetConfig = field(default_factory=lambda: ASRDatasetConfig(manifest_filepath=None, shuffle=False))
# Optimizer / Scheduler config
- optim: Optional[model_cfg.OptimConfig] = model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
+ optim: Optional[model_cfg.OptimConfig] = field(
+ default_factory=lambda: model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
+ )
# Model component configs
- preprocessor: AudioToMelSpectrogramPreprocessorConfig = AudioToMelSpectrogramPreprocessorConfig()
- spec_augment: Optional[SpectrogramAugmentationConfig] = SpectrogramAugmentationConfig()
- encoder: ConvASREncoderConfig = ConvASREncoderConfig()
- decoder: ConvASRDecoderConfig = ConvASRDecoderConfig()
- decoding: CTCDecodingConfig = CTCDecodingConfig()
+ preprocessor: AudioToMelSpectrogramPreprocessorConfig = field(
+ default_factory=lambda: AudioToMelSpectrogramPreprocessorConfig()
+ )
+ spec_augment: Optional[SpectrogramAugmentationConfig] = field(
+ default_factory=lambda: SpectrogramAugmentationConfig()
+ )
+ encoder: ConvASREncoderConfig = field(default_factory=lambda: ConvASREncoderConfig())
+ decoder: ConvASRDecoderConfig = field(default_factory=lambda: ConvASRDecoderConfig())
+ decoding: CTCDecodingConfig = field(default_factory=lambda: CTCDecodingConfig())
@dataclass
class EncDecCTCModelConfig(model_cfg.NemoConfig):
- model: EncDecCTCConfig = EncDecCTCConfig()
+ model: EncDecCTCConfig = field(default_factory=lambda: EncDecCTCConfig())
@dataclass
diff --git a/nemo/collections/asr/models/configs/classification_models_config.py b/nemo/collections/asr/models/configs/classification_models_config.py
--- a/nemo/collections/asr/models/configs/classification_models_config.py
+++ b/nemo/collections/asr/models/configs/classification_models_config.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional
from omegaconf import MISSING
@@ -72,30 +72,40 @@ class EncDecClassificationConfig(model_cfg.ModelConfig):
timesteps: int = MISSING
# Dataset configs
- train_ds: EncDecClassificationDatasetConfig = EncDecClassificationDatasetConfig(
- manifest_filepath=None, shuffle=True, trim_silence=False
+ train_ds: EncDecClassificationDatasetConfig = field(
+ default_factory=lambda: EncDecClassificationDatasetConfig(
+ manifest_filepath=None, shuffle=True, trim_silence=False
+ )
)
- validation_ds: EncDecClassificationDatasetConfig = EncDecClassificationDatasetConfig(
- manifest_filepath=None, shuffle=False
+ validation_ds: EncDecClassificationDatasetConfig = field(
+ default_factory=lambda: EncDecClassificationDatasetConfig(manifest_filepath=None, shuffle=False)
)
- test_ds: EncDecClassificationDatasetConfig = EncDecClassificationDatasetConfig(
- manifest_filepath=None, shuffle=False
+ test_ds: EncDecClassificationDatasetConfig = field(
+ default_factory=lambda: EncDecClassificationDatasetConfig(manifest_filepath=None, shuffle=False)
)
# Optimizer / Scheduler config
- optim: Optional[model_cfg.OptimConfig] = model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
+ optim: Optional[model_cfg.OptimConfig] = field(
+ default_factory=lambda: model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
+ )
# Model component configs
- preprocessor: AudioToMFCCPreprocessorConfig = AudioToMFCCPreprocessorConfig()
- spec_augment: Optional[SpectrogramAugmentationConfig] = SpectrogramAugmentationConfig()
- crop_or_pad_augment: Optional[CropOrPadSpectrogramAugmentationConfig] = CropOrPadSpectrogramAugmentationConfig(
- audio_length=timesteps
+ preprocessor: AudioToMFCCPreprocessorConfig = field(default_factory=lambda: AudioToMFCCPreprocessorConfig())
+ spec_augment: Optional[SpectrogramAugmentationConfig] = field(
+ default_factory=lambda: SpectrogramAugmentationConfig()
+ )
+ crop_or_pad_augment: Optional[CropOrPadSpectrogramAugmentationConfig] = field(
+ default_factory=lambda: CropOrPadSpectrogramAugmentationConfig(audio_length=-1)
)
- encoder: ConvASREncoderConfig = ConvASREncoderConfig()
- decoder: ConvASRDecoderClassificationConfig = ConvASRDecoderClassificationConfig()
+ encoder: ConvASREncoderConfig = field(default_factory=lambda: ConvASREncoderConfig())
+ decoder: ConvASRDecoderClassificationConfig = field(default_factory=lambda: ConvASRDecoderClassificationConfig())
+
+ def __post_init__(self):
+ if self.crop_or_pad_augment is not None:
+ self.crop_or_pad_augment.audio_length = self.timesteps
@dataclass
class EncDecClassificationModelConfig(model_cfg.NemoConfig):
- model: EncDecClassificationConfig = EncDecClassificationConfig()
+ model: EncDecClassificationConfig = field(default_factory=lambda: EncDecClassificationConfig())
diff --git a/nemo/collections/asr/models/configs/diarizer_config.py b/nemo/collections/asr/models/configs/diarizer_config.py
--- a/nemo/collections/asr/models/configs/diarizer_config.py
+++ b/nemo/collections/asr/models/configs/diarizer_config.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import asdict, dataclass
+from dataclasses import asdict, dataclass, field
from typing import Any, Dict, Optional, Tuple, Union
@@ -78,9 +78,9 @@ class ASRDiarizerParams(DiarizerComponentConfig):
@dataclass
class ASRDiarizerConfig(DiarizerComponentConfig):
model_path: Optional[str] = "stt_en_conformer_ctc_large"
- parameters: ASRDiarizerParams = ASRDiarizerParams()
- ctc_decoder_parameters: ASRDiarizerCTCDecoderParams = ASRDiarizerCTCDecoderParams()
- realigning_lm_parameters: ASRRealigningLMParams = ASRRealigningLMParams()
+ parameters: ASRDiarizerParams = field(default_factory=lambda: ASRDiarizerParams())
+ ctc_decoder_parameters: ASRDiarizerCTCDecoderParams = field(default_factory=lambda: ASRDiarizerCTCDecoderParams())
+ realigning_lm_parameters: ASRRealigningLMParams = field(default_factory=lambda: ASRRealigningLMParams())
@dataclass
@@ -102,7 +102,7 @@ class VADParams(DiarizerComponentConfig):
class VADConfig(DiarizerComponentConfig):
model_path: str = "vad_multilingual_marblenet" # .nemo local model path or pretrained VAD model name
external_vad_manifest: Optional[str] = None
- parameters: VADParams = VADParams()
+ parameters: VADParams = field(default_factory=lambda: VADParams())
@dataclass
@@ -121,7 +121,7 @@ class SpeakerEmbeddingsParams(DiarizerComponentConfig):
class SpeakerEmbeddingsConfig(DiarizerComponentConfig):
# .nemo local model path or pretrained model name (titanet_large, ecapa_tdnn or speakerverification_speakernet)
model_path: Optional[str] = None
- parameters: SpeakerEmbeddingsParams = SpeakerEmbeddingsParams()
+ parameters: SpeakerEmbeddingsParams = field(default_factory=lambda: SpeakerEmbeddingsParams())
@dataclass
@@ -142,7 +142,7 @@ class ClusteringParams(DiarizerComponentConfig):
@dataclass
class ClusteringConfig(DiarizerComponentConfig):
- parameters: ClusteringParams = ClusteringParams()
+ parameters: ClusteringParams = field(default_factory=lambda: ClusteringParams())
@dataclass
@@ -166,7 +166,7 @@ class MSDDParams(DiarizerComponentConfig):
@dataclass
class MSDDConfig(DiarizerComponentConfig):
model_path: Optional[str] = "diar_msdd_telephonic"
- parameters: MSDDParams = MSDDParams()
+ parameters: MSDDParams = field(default_factory=lambda: MSDDParams())
@dataclass
@@ -176,16 +176,16 @@ class DiarizerConfig(DiarizerComponentConfig):
oracle_vad: bool = False # If True, uses RTTM files provided in the manifest file to get VAD timestamps
collar: float = 0.25 # Collar value for scoring
ignore_overlap: bool = True # Consider or ignore overlap segments while scoring
- vad: VADConfig = VADConfig()
- speaker_embeddings: SpeakerEmbeddingsConfig = SpeakerEmbeddingsConfig()
- clustering: ClusteringConfig = ClusteringConfig()
- msdd_model: MSDDConfig = MSDDConfig()
- asr: ASRDiarizerConfig = ASRDiarizerConfig()
+ vad: VADConfig = field(default_factory=lambda: VADConfig())
+ speaker_embeddings: SpeakerEmbeddingsConfig = field(default_factory=lambda: SpeakerEmbeddingsConfig())
+ clustering: ClusteringConfig = field(default_factory=lambda: ClusteringConfig())
+ msdd_model: MSDDConfig = field(default_factory=lambda: MSDDConfig())
+ asr: ASRDiarizerConfig = field(default_factory=lambda: ASRDiarizerConfig())
@dataclass
class NeuralDiarizerInferenceConfig(DiarizerComponentConfig):
- diarizer: DiarizerConfig = DiarizerConfig()
+ diarizer: DiarizerConfig = field(default_factory=lambda: DiarizerConfig())
device: str = "cpu"
verbose: bool = False
batch_size: int = 64
diff --git a/nemo/collections/asr/models/configs/k2_sequence_models_config.py b/nemo/collections/asr/models/configs/k2_sequence_models_config.py
--- a/nemo/collections/asr/models/configs/k2_sequence_models_config.py
+++ b/nemo/collections/asr/models/configs/k2_sequence_models_config.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from nemo.collections.asr.models.configs.asr_models_config import EncDecCTCConfig
from nemo.collections.asr.parts.k2.classes import GraphModuleConfig as BackendConfig
@@ -26,14 +26,14 @@ class GraphModuleConfig:
split_batch_size: int = 0
dec_type: str = "topo"
transcribe_training: bool = True
- backend_cfg: BackendConfig = BackendConfig()
+ backend_cfg: BackendConfig = field(default_factory=lambda: BackendConfig())
@dataclass
class EncDecK2SeqConfig(EncDecCTCConfig):
- graph_module_cfg: GraphModuleConfig = GraphModuleConfig()
+ graph_module_cfg: GraphModuleConfig = field(default_factory=lambda: GraphModuleConfig())
@dataclass
class EncDecK2SeqModelConfig(NemoConfig):
- model: EncDecK2SeqConfig = EncDecK2SeqConfig()
+ model: EncDecK2SeqConfig = field(default_factory=lambda: EncDecK2SeqConfig())
diff --git a/nemo/collections/asr/models/configs/matchboxnet_config.py b/nemo/collections/asr/models/configs/matchboxnet_config.py
--- a/nemo/collections/asr/models/configs/matchboxnet_config.py
+++ b/nemo/collections/asr/models/configs/matchboxnet_config.py
@@ -107,30 +107,38 @@ class MatchboxNetModelConfig(clf_cfg.EncDecClassificationConfig):
labels: List[str] = MISSING
# Dataset configs
- train_ds: clf_cfg.EncDecClassificationDatasetConfig = clf_cfg.EncDecClassificationDatasetConfig(
- manifest_filepath=None, shuffle=True, trim_silence=False
+ train_ds: clf_cfg.EncDecClassificationDatasetConfig = field(
+ default_factory=lambda: clf_cfg.EncDecClassificationDatasetConfig(
+ manifest_filepath=None, shuffle=True, trim_silence=False
+ )
)
- validation_ds: clf_cfg.EncDecClassificationDatasetConfig = clf_cfg.EncDecClassificationDatasetConfig(
- manifest_filepath=None, shuffle=False
+ validation_ds: clf_cfg.EncDecClassificationDatasetConfig = field(
+ default_factory=lambda: clf_cfg.EncDecClassificationDatasetConfig(manifest_filepath=None, shuffle=False)
)
- test_ds: clf_cfg.EncDecClassificationDatasetConfig = clf_cfg.EncDecClassificationDatasetConfig(
- manifest_filepath=None, shuffle=False
+ test_ds: clf_cfg.EncDecClassificationDatasetConfig = field(
+ default_factory=lambda: clf_cfg.EncDecClassificationDatasetConfig(manifest_filepath=None, shuffle=False)
)
# Optimizer / Scheduler config
- optim: Optional[model_cfg.OptimConfig] = model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
+ optim: Optional[model_cfg.OptimConfig] = field(
+ default_factory=lambda: model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
+ )
# Model general component configs
- preprocessor: AudioToMFCCPreprocessorConfig = AudioToMFCCPreprocessorConfig(window_size=0.025)
- spec_augment: Optional[SpectrogramAugmentationConfig] = SpectrogramAugmentationConfig(
- freq_masks=2, time_masks=2, freq_width=15, time_width=25, rect_masks=5, rect_time=25, rect_freq=15
+ preprocessor: AudioToMFCCPreprocessorConfig = field(
+ default_factory=lambda: AudioToMFCCPreprocessorConfig(window_size=0.025)
+ )
+ spec_augment: Optional[SpectrogramAugmentationConfig] = field(
+ default_factory=lambda: SpectrogramAugmentationConfig(
+ freq_masks=2, time_masks=2, freq_width=15, time_width=25, rect_masks=5, rect_time=25, rect_freq=15
+ )
)
- crop_or_pad_augment: Optional[CropOrPadSpectrogramAugmentationConfig] = CropOrPadSpectrogramAugmentationConfig(
- audio_length=128
+ crop_or_pad_augment: Optional[CropOrPadSpectrogramAugmentationConfig] = field(
+ default_factory=lambda: CropOrPadSpectrogramAugmentationConfig(audio_length=128)
)
- encoder: ConvASREncoderConfig = ConvASREncoderConfig(activation="relu")
- decoder: ConvASRDecoderClassificationConfig = ConvASRDecoderClassificationConfig()
+ encoder: ConvASREncoderConfig = field(default_factory=lambda: ConvASREncoderConfig(activation="relu"))
+ decoder: ConvASRDecoderClassificationConfig = field(default_factory=lambda: ConvASRDecoderClassificationConfig())
@dataclass
diff --git a/nemo/collections/asr/models/configs/quartznet_config.py b/nemo/collections/asr/models/configs/quartznet_config.py
--- a/nemo/collections/asr/models/configs/quartznet_config.py
+++ b/nemo/collections/asr/models/configs/quartznet_config.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import Any, Callable, List, Optional
from omegaconf import MISSING
@@ -174,20 +174,30 @@ class JasperModelConfig(ctc_cfg.EncDecCTCConfig):
labels: List[str] = MISSING
# Dataset configs
- train_ds: ctc_cfg.ASRDatasetConfig = ctc_cfg.ASRDatasetConfig(
- manifest_filepath=None, shuffle=True, trim_silence=True
+ train_ds: ctc_cfg.ASRDatasetConfig = field(
+ default_factory=lambda: ctc_cfg.ASRDatasetConfig(manifest_filepath=None, shuffle=True, trim_silence=True)
+ )
+ validation_ds: ctc_cfg.ASRDatasetConfig = field(
+ default_factory=lambda: ctc_cfg.ASRDatasetConfig(manifest_filepath=None, shuffle=False)
+ )
+ test_ds: ctc_cfg.ASRDatasetConfig = field(
+ default_factory=lambda: ctc_cfg.ASRDatasetConfig(manifest_filepath=None, shuffle=False)
)
- validation_ds: ctc_cfg.ASRDatasetConfig = ctc_cfg.ASRDatasetConfig(manifest_filepath=None, shuffle=False)
- test_ds: ctc_cfg.ASRDatasetConfig = ctc_cfg.ASRDatasetConfig(manifest_filepath=None, shuffle=False)
# Optimizer / Scheduler config
- optim: Optional[model_cfg.OptimConfig] = model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
+ optim: Optional[model_cfg.OptimConfig] = field(
+ default_factory=lambda: model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
+ )
# Model general component configs
- preprocessor: AudioToMelSpectrogramPreprocessorConfig = AudioToMelSpectrogramPreprocessorConfig()
- spec_augment: Optional[SpectrogramAugmentationConfig] = SpectrogramAugmentationConfig()
- encoder: ConvASREncoderConfig = ConvASREncoderConfig(activation="relu")
- decoder: ConvASRDecoderConfig = ConvASRDecoderConfig()
+ preprocessor: AudioToMelSpectrogramPreprocessorConfig = field(
+ default_factory=lambda: AudioToMelSpectrogramPreprocessorConfig()
+ )
+ spec_augment: Optional[SpectrogramAugmentationConfig] = field(
+ default_factory=lambda: SpectrogramAugmentationConfig()
+ )
+ encoder: ConvASREncoderConfig = field(default_factory=lambda: ConvASREncoderConfig(activation="relu"))
+ decoder: ConvASRDecoderConfig = field(default_factory=lambda: ConvASRDecoderConfig())
@dataclass
diff --git a/nemo/collections/asr/modules/audio_preprocessing.py b/nemo/collections/asr/modules/audio_preprocessing.py
--- a/nemo/collections/asr/modules/audio_preprocessing.py
+++ b/nemo/collections/asr/modules/audio_preprocessing.py
@@ -634,6 +634,12 @@ def __init__(self, audio_length):
super(CropOrPadSpectrogramAugmentation, self).__init__()
self.audio_length = audio_length
+ if self.audio_length < 0:
+ raise ValueError(
+ 'audio_length must be non-negative. If using a dataclass with OmegaConf, '
+ 'please call OmegaConf.to_object(cfg) to call appropriate __post_init__ methods.'
+ )
+
@typecheck()
@torch.no_grad()
def forward(self, input_signal, length):
diff --git a/nemo/collections/asr/parts/k2/classes.py b/nemo/collections/asr/parts/k2/classes.py
--- a/nemo/collections/asr/parts/k2/classes.py
+++ b/nemo/collections/asr/parts/k2/classes.py
@@ -13,7 +13,7 @@
# limitations under the License.
from abc import ABC
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import Any, Optional, Tuple
import torch
@@ -43,7 +43,7 @@ class GraphModuleConfig:
topo_with_self_loops: bool = True
token_lm: Optional[Any] = None
intersect_pruned: bool = False
- intersect_conf: GraphIntersectDenseConfig = GraphIntersectDenseConfig()
+ intersect_conf: GraphIntersectDenseConfig = field(default_factory=lambda: GraphIntersectDenseConfig())
boost_coeff: float = 0.0
predictor_window_size: int = 0
predictor_step_size: int = 1
diff --git a/nemo/collections/asr/parts/submodules/adapters/multi_head_attention_adapter_module.py b/nemo/collections/asr/parts/submodules/adapters/multi_head_attention_adapter_module.py
--- a/nemo/collections/asr/parts/submodules/adapters/multi_head_attention_adapter_module.py
+++ b/nemo/collections/asr/parts/submodules/adapters/multi_head_attention_adapter_module.py
@@ -13,7 +13,7 @@
# limitations under the License.
import math
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import Any, Optional
import torch
@@ -183,7 +183,7 @@ class MultiHeadAttentionAdapterConfig:
n_feat: int
dropout_rate: float = 0.0
proj_dim: Optional[int] = None
- adapter_strategy: Optional[Any] = MHAResidualAddAdapterStrategyConfig()
+ adapter_strategy: Optional[Any] = field(default_factory=lambda: MHAResidualAddAdapterStrategyConfig())
_target_: str = "{0}.{1}".format(MultiHeadAttentionAdapter.__module__, MultiHeadAttentionAdapter.__name__)
@@ -287,7 +287,7 @@ class RelPositionMultiHeadAttentionAdapterConfig:
n_feat: int
dropout_rate: float = 0.0
proj_dim: Optional[int] = None
- adapter_strategy: Optional[Any] = MHAResidualAddAdapterStrategyConfig()
+ adapter_strategy: Optional[Any] = field(default_factory=lambda: MHAResidualAddAdapterStrategyConfig())
_target_: str = "{0}.{1}".format(
RelPositionMultiHeadAttentionAdapter.__module__, RelPositionMultiHeadAttentionAdapter.__name__
)
@@ -336,7 +336,9 @@ class PositionalEncodingAdapterConfig:
d_model: int
max_len: int = 5000
xscale: float = 1.0
- adapter_strategy: Optional[Any] = adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
+ adapter_strategy: Optional[Any] = field(
+ default_factory=lambda: adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
+ )
_target_: str = "{0}.{1}".format(PositionalEncodingAdapter.__module__, PositionalEncodingAdapter.__name__)
@@ -378,5 +380,7 @@ class RelPositionalEncodingAdapterConfig:
d_model: int
max_len: int = 5000
xscale: float = 1.0
- adapter_strategy: Optional[Any] = adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
+ adapter_strategy: Optional[Any] = field(
+ default_factory=lambda: adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
+ )
_target_: str = "{0}.{1}".format(RelPositionalEncodingAdapter.__module__, RelPositionalEncodingAdapter.__name__)
diff --git a/nemo/collections/asr/parts/submodules/ctc_beam_decoding.py b/nemo/collections/asr/parts/submodules/ctc_beam_decoding.py
--- a/nemo/collections/asr/parts/submodules/ctc_beam_decoding.py
+++ b/nemo/collections/asr/parts/submodules/ctc_beam_decoding.py
@@ -14,7 +14,7 @@
import math
import os
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import List, Optional, Tuple, Union
import torch
@@ -602,5 +602,5 @@ class BeamCTCInferConfig:
beam_beta: float = 0.0
kenlm_path: Optional[str] = None
- flashlight_cfg: Optional[FlashlightConfig] = FlashlightConfig()
- pyctcdecode_cfg: Optional[PyCTCDecodeConfig] = PyCTCDecodeConfig()
+ flashlight_cfg: Optional[FlashlightConfig] = field(default_factory=lambda: FlashlightConfig())
+ pyctcdecode_cfg: Optional[PyCTCDecodeConfig] = field(default_factory=lambda: PyCTCDecodeConfig())
diff --git a/nemo/collections/asr/parts/submodules/ctc_greedy_decoding.py b/nemo/collections/asr/parts/submodules/ctc_greedy_decoding.py
--- a/nemo/collections/asr/parts/submodules/ctc_greedy_decoding.py
+++ b/nemo/collections/asr/parts/submodules/ctc_greedy_decoding.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import List, Optional
import torch
@@ -253,7 +253,7 @@ class GreedyCTCInferConfig:
preserve_alignments: bool = False
compute_timestamps: bool = False
preserve_frame_confidence: bool = False
- confidence_method_cfg: Optional[ConfidenceMethodConfig] = ConfidenceMethodConfig()
+ confidence_method_cfg: Optional[ConfidenceMethodConfig] = field(default_factory=lambda: ConfidenceMethodConfig())
def __post_init__(self):
# OmegaConf.structured ensures that post_init check is always executed
diff --git a/nemo/collections/asr/parts/submodules/rnnt_greedy_decoding.py b/nemo/collections/asr/parts/submodules/rnnt_greedy_decoding.py
--- a/nemo/collections/asr/parts/submodules/rnnt_greedy_decoding.py
+++ b/nemo/collections/asr/parts/submodules/rnnt_greedy_decoding.py
@@ -26,7 +26,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import List, Optional, Tuple, Union
import numpy as np
@@ -2185,7 +2185,7 @@ class GreedyRNNTInferConfig:
max_symbols_per_step: Optional[int] = 10
preserve_alignments: bool = False
preserve_frame_confidence: bool = False
- confidence_method_cfg: Optional[ConfidenceMethodConfig] = ConfidenceMethodConfig()
+ confidence_method_cfg: Optional[ConfidenceMethodConfig] = field(default_factory=lambda: ConfidenceMethodConfig())
def __post_init__(self):
# OmegaConf.structured ensures that post_init check is always executed
@@ -2201,7 +2201,7 @@ class GreedyBatchedRNNTInferConfig:
max_symbols_per_step: Optional[int] = 10
preserve_alignments: bool = False
preserve_frame_confidence: bool = False
- confidence_method_cfg: Optional[ConfidenceMethodConfig] = ConfidenceMethodConfig()
+ confidence_method_cfg: Optional[ConfidenceMethodConfig] = field(default_factory=lambda: ConfidenceMethodConfig())
def __post_init__(self):
# OmegaConf.structured ensures that post_init check is always executed
diff --git a/nemo/collections/asr/parts/utils/asr_confidence_utils.py b/nemo/collections/asr/parts/utils/asr_confidence_utils.py
--- a/nemo/collections/asr/parts/utils/asr_confidence_utils.py
+++ b/nemo/collections/asr/parts/utils/asr_confidence_utils.py
@@ -14,7 +14,7 @@
import math
from abc import ABC, abstractmethod
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from functools import partial
from typing import List, Optional
@@ -175,7 +175,7 @@ class ConfidenceConfig:
preserve_word_confidence: bool = False
exclude_blank: bool = True
aggregation: str = "min"
- method_cfg: ConfidenceMethodConfig = ConfidenceMethodConfig()
+ method_cfg: ConfidenceMethodConfig = field(default_factory=lambda: ConfidenceMethodConfig())
def __post_init__(self):
# OmegaConf.structured ensures that post_init check is always executed
diff --git a/nemo/collections/common/parts/adapter_modules.py b/nemo/collections/common/parts/adapter_modules.py
--- a/nemo/collections/common/parts/adapter_modules.py
+++ b/nemo/collections/common/parts/adapter_modules.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass, is_dataclass
+from dataclasses import dataclass, field, is_dataclass
from typing import Any, Optional
from hydra.utils import instantiate
@@ -160,5 +160,7 @@ class LinearAdapterConfig:
activation: str = 'swish'
norm_position: str = 'pre'
dropout: float = 0.0
- adapter_strategy: Optional[Any] = adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
+ adapter_strategy: Optional[Any] = field(
+ default_factory=lambda: adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
+ )
_target_: str = "{0}.{1}".format(LinearAdapter.__module__, LinearAdapter.__name__)
diff --git a/nemo/collections/common/tokenizers/en_ja_tokenizers.py b/nemo/collections/common/tokenizers/en_ja_tokenizers.py
--- a/nemo/collections/common/tokenizers/en_ja_tokenizers.py
+++ b/nemo/collections/common/tokenizers/en_ja_tokenizers.py
@@ -14,11 +14,19 @@
import re
from typing import List
-import ipadic
-import MeCab
from pangu import spacing
from sacremoses import MosesDetokenizer, MosesPunctNormalizer, MosesTokenizer
+try:
+ import ipadic
+ import MeCab
+
+ HAVE_MECAB = True
+ HAVE_IPADIC = True
+except (ImportError, ModuleNotFoundError):
+ HAVE_MECAB = False
+ HAVE_IPADIC = False
+
class EnJaProcessor:
"""
@@ -67,6 +75,9 @@ class JaMecabProcessor:
"""
def __init__(self):
+ if not HAVE_MECAB or not HAVE_IPADIC:
+ raise ImportError("Please ensure that you have installed `MeCab` and `ipadic` to use JaMecabProcessor")
+
self.mecab_tokenizer = MeCab.Tagger(ipadic.MECAB_ARGS + " -Owakati")
def detokenize(self, text: List[str]) -> str:
diff --git a/nemo/collections/nlp/models/machine_translation/mt_enc_dec_config.py b/nemo/collections/nlp/models/machine_translation/mt_enc_dec_config.py
--- a/nemo/collections/nlp/models/machine_translation/mt_enc_dec_config.py
+++ b/nemo/collections/nlp/models/machine_translation/mt_enc_dec_config.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import Any, Optional, Tuple
from omegaconf.omegaconf import MISSING
@@ -46,7 +46,7 @@ class MTOptimConfig(OptimConfig):
lr: float = 1e-3
betas: Tuple[float, float] = (0.9, 0.98)
weight_decay: float = 0.0
- sched: Optional[MTSchedConfig] = MTSchedConfig()
+ sched: Optional[MTSchedConfig] = field(default_factory=lambda: MTSchedConfig())
@dataclass
@@ -74,70 +74,80 @@ class MTEncDecModelConfig(EncDecNLPModelConfig):
decoder_tokenizer: Any = MISSING
decoder: Any = MISSING
- head: TokenClassifierConfig = TokenClassifierConfig(log_softmax=True)
+ head: TokenClassifierConfig = field(default_factory=lambda: TokenClassifierConfig(log_softmax=True))
# dataset configurations
- train_ds: Optional[TranslationDataConfig] = TranslationDataConfig(
- src_file_name=MISSING,
- tgt_file_name=MISSING,
- tokens_in_batch=512,
- clean=True,
- shuffle=True,
- cache_ids=False,
- use_cache=False,
+ train_ds: Optional[TranslationDataConfig] = field(
+ default_factory=lambda: TranslationDataConfig(
+ src_file_name=MISSING,
+ tgt_file_name=MISSING,
+ tokens_in_batch=512,
+ clean=True,
+ shuffle=True,
+ cache_ids=False,
+ use_cache=False,
+ )
)
- validation_ds: Optional[TranslationDataConfig] = TranslationDataConfig(
- src_file_name=MISSING,
- tgt_file_name=MISSING,
- tokens_in_batch=512,
- clean=False,
- shuffle=False,
- cache_ids=False,
- use_cache=False,
+ validation_ds: Optional[TranslationDataConfig] = field(
+ default_factory=lambda: TranslationDataConfig(
+ src_file_name=MISSING,
+ tgt_file_name=MISSING,
+ tokens_in_batch=512,
+ clean=False,
+ shuffle=False,
+ cache_ids=False,
+ use_cache=False,
+ )
)
- test_ds: Optional[TranslationDataConfig] = TranslationDataConfig(
- src_file_name=MISSING,
- tgt_file_name=MISSING,
- tokens_in_batch=512,
- clean=False,
- shuffle=False,
- cache_ids=False,
- use_cache=False,
+ test_ds: Optional[TranslationDataConfig] = field(
+ default_factory=lambda: TranslationDataConfig(
+ src_file_name=MISSING,
+ tgt_file_name=MISSING,
+ tokens_in_batch=512,
+ clean=False,
+ shuffle=False,
+ cache_ids=False,
+ use_cache=False,
+ )
)
- optim: Optional[OptimConfig] = MTOptimConfig()
+ optim: Optional[OptimConfig] = field(default_factory=lambda: MTOptimConfig())
@dataclass
class AAYNBaseConfig(MTEncDecModelConfig):
# Attention is All You Need Base Configuration
- encoder_tokenizer: TokenizerConfig = TokenizerConfig(library='yttm')
- decoder_tokenizer: TokenizerConfig = TokenizerConfig(library='yttm')
-
- encoder: NeMoTransformerEncoderConfig = NeMoTransformerEncoderConfig(
- library='nemo',
- model_name=None,
- pretrained=False,
- hidden_size=512,
- inner_size=2048,
- num_layers=6,
- num_attention_heads=8,
- ffn_dropout=0.1,
- attn_score_dropout=0.1,
- attn_layer_dropout=0.1,
+ encoder_tokenizer: TokenizerConfig = field(default_factory=lambda: TokenizerConfig(library='yttm'))
+ decoder_tokenizer: TokenizerConfig = field(default_factory=lambda: TokenizerConfig(library='yttm'))
+
+ encoder: NeMoTransformerEncoderConfig = field(
+ default_factory=lambda: NeMoTransformerEncoderConfig(
+ library='nemo',
+ model_name=None,
+ pretrained=False,
+ hidden_size=512,
+ inner_size=2048,
+ num_layers=6,
+ num_attention_heads=8,
+ ffn_dropout=0.1,
+ attn_score_dropout=0.1,
+ attn_layer_dropout=0.1,
+ )
)
- decoder: NeMoTransformerConfig = NeMoTransformerConfig(
- library='nemo',
- model_name=None,
- pretrained=False,
- hidden_size=512,
- inner_size=2048,
- num_layers=6,
- num_attention_heads=8,
- ffn_dropout=0.1,
- attn_score_dropout=0.1,
- attn_layer_dropout=0.1,
+ decoder: NeMoTransformerConfig = field(
+ default_factory=lambda: NeMoTransformerConfig(
+ library='nemo',
+ model_name=None,
+ pretrained=False,
+ hidden_size=512,
+ inner_size=2048,
+ num_layers=6,
+ num_attention_heads=8,
+ ffn_dropout=0.1,
+ attn_score_dropout=0.1,
+ attn_layer_dropout=0.1,
+ )
)
@@ -150,32 +160,36 @@ class MTBottleneckModelConfig(AAYNBaseConfig):
recon_per_token: bool = True
log_timing: bool = True
- encoder: NeMoTransformerBottleneckEncoderConfig = NeMoTransformerBottleneckEncoderConfig(
- library='nemo',
- model_name=None,
- pretrained=False,
- hidden_size=512,
- inner_size=2048,
- num_layers=6,
- num_attention_heads=8,
- ffn_dropout=0.1,
- attn_score_dropout=0.1,
- attn_layer_dropout=0.1,
- arch='seq2seq',
- hidden_steps=32,
- hidden_blocks=1,
- hidden_init_method='params',
+ encoder: NeMoTransformerBottleneckEncoderConfig = field(
+ default_factory=lambda: NeMoTransformerBottleneckEncoderConfig(
+ library='nemo',
+ model_name=None,
+ pretrained=False,
+ hidden_size=512,
+ inner_size=2048,
+ num_layers=6,
+ num_attention_heads=8,
+ ffn_dropout=0.1,
+ attn_score_dropout=0.1,
+ attn_layer_dropout=0.1,
+ arch='seq2seq',
+ hidden_steps=32,
+ hidden_blocks=1,
+ hidden_init_method='params',
+ )
)
- decoder: NeMoTransformerBottleneckDecoderConfig = NeMoTransformerBottleneckDecoderConfig(
- library='nemo',
- model_name=None,
- pretrained=False,
- inner_size=2048,
- num_layers=6,
- num_attention_heads=8,
- ffn_dropout=0.1,
- attn_score_dropout=0.1,
- attn_layer_dropout=0.1,
- arch='seq2seq',
+ decoder: NeMoTransformerBottleneckDecoderConfig = field(
+ default_factory=lambda: NeMoTransformerBottleneckDecoderConfig(
+ library='nemo',
+ model_name=None,
+ pretrained=False,
+ inner_size=2048,
+ num_layers=6,
+ num_attention_heads=8,
+ ffn_dropout=0.1,
+ attn_score_dropout=0.1,
+ attn_layer_dropout=0.1,
+ arch='seq2seq',
+ )
)
diff --git a/nemo/collections/nlp/models/token_classification/punctuation_capitalization_config.py b/nemo/collections/nlp/models/token_classification/punctuation_capitalization_config.py
--- a/nemo/collections/nlp/models/token_classification/punctuation_capitalization_config.py
+++ b/nemo/collections/nlp/models/token_classification/punctuation_capitalization_config.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import Any, Dict, Optional
from omegaconf.omegaconf import MISSING, DictConfig, OmegaConf, open_dict
@@ -215,13 +215,15 @@ class PunctuationCapitalizationModelConfig:
This config is a part of :class:`~PunctuationCapitalizationConfig`.
"""
- class_labels: ClassLabelsConfig = ClassLabelsConfig()
+ class_labels: ClassLabelsConfig = field(default_factory=lambda: ClassLabelsConfig())
"""A mandatory parameter containing a dictionary with names of label id files used in .nemo checkpoints.
These file names can also be used for passing label vocabularies to the model. If you wish to use ``class_labels``
for passing vocabularies, please provide path to vocabulary files in
``model.common_dataset_parameters.label_vocab_dir`` parameter."""
- common_dataset_parameters: Optional[CommonDatasetParametersConfig] = CommonDatasetParametersConfig()
+ common_dataset_parameters: Optional[CommonDatasetParametersConfig] = field(
+ default_factory=lambda: CommonDatasetParametersConfig()
+ )
"""Label ids and loss mask information information."""
train_ds: Optional[PunctuationCapitalizationTrainDataConfig] = None
@@ -233,16 +235,16 @@ class PunctuationCapitalizationModelConfig:
test_ds: Optional[PunctuationCapitalizationEvalDataConfig] = None
"""A configuration for creating test datasets and data loaders."""
- punct_head: HeadConfig = HeadConfig()
+ punct_head: HeadConfig = field(default_factory=lambda: HeadConfig())
"""A configuration for creating punctuation MLP head that is applied to a language model outputs."""
- capit_head: HeadConfig = HeadConfig()
+ capit_head: HeadConfig = field(default_factory=lambda: HeadConfig())
"""A configuration for creating capitalization MLP head that is applied to a language model outputs."""
- tokenizer: Any = TokenizerConfig()
+ tokenizer: Any = field(default_factory=lambda: TokenizerConfig())
"""A configuration for source text tokenizer."""
- language_model: LanguageModelConfig = LanguageModelConfig()
+ language_model: LanguageModelConfig = field(default_factory=lambda: LanguageModelConfig())
"""A configuration of a BERT-like language model which serves as a model body."""
optim: Optional[Any] = None
@@ -311,22 +313,30 @@ class PunctuationCapitalizationConfig(NemoConfig):
do_testing: bool = False
"""Whether ot perform testing of the model."""
- model: PunctuationCapitalizationModelConfig = PunctuationCapitalizationModelConfig()
+ model: PunctuationCapitalizationModelConfig = field(default_factory=lambda: PunctuationCapitalizationModelConfig())
"""A configuration for the
:class:`~nemo.collections.nlp.models.token_classification.punctuation_capitalization_model.PunctuationCapitalizationModel`
model."""
- trainer: Optional[TrainerConfig] = TrainerConfig()
+ trainer: Optional[TrainerConfig] = field(default_factory=lambda: TrainerConfig())
"""Contains ``Trainer`` Lightning class constructor parameters."""
- exp_manager: Optional[ExpManagerConfig] = ExpManagerConfig(name=name, files_to_copy=[])
+ exp_manager: Optional[ExpManagerConfig] = field(
+ default_factory=lambda: ExpManagerConfig(name=None, files_to_copy=[])
+ )
"""A configuration with various NeMo training options such as output directories, resuming from checkpoint,
tensorboard and W&B logging, and so on. For possible options see :ref:`exp-manager-label`."""
+ def __post_init__(self):
+ if self.exp_manager is not None:
+ self.exp_manager.name = self.name
+
@dataclass
class PunctuationCapitalizationLexicalAudioConfig(PunctuationCapitalizationConfig):
- model: PunctuationCapitalizationLexicalAudioModelConfig = PunctuationCapitalizationLexicalAudioModelConfig()
+ model: PunctuationCapitalizationLexicalAudioModelConfig = field(
+ default_factory=lambda: PunctuationCapitalizationLexicalAudioModelConfig()
+ )
def is_legacy_model_config(model_cfg: DictConfig) -> bool:
diff --git a/nemo/collections/nlp/modules/common/megatron/megatron_encoders.py b/nemo/collections/nlp/modules/common/megatron/megatron_encoders.py
--- a/nemo/collections/nlp/modules/common/megatron/megatron_encoders.py
+++ b/nemo/collections/nlp/modules/common/megatron/megatron_encoders.py
@@ -13,7 +13,6 @@
# limitations under the License.
"""Transformer based language model."""
-from MeCab import Model
from nemo.collections.nlp.modules.common.megatron.megatron_perceiver_encoders import MegatronPerceiverEncoderModule
from nemo.collections.nlp.modules.common.megatron.megatron_transformer_encoder import MegatronTransformerEncoderModule
from nemo.collections.nlp.modules.common.megatron.retrieval_transformer import (
@@ -25,6 +24,13 @@
scaled_init_method_normal,
)
+try:
+ from MeCab import Model
+
+ HAVE_MECAB = True
+except (ImportError, ModuleNotFoundError):
+ HAVE_MECAB = False
+
try:
from apex.transformer.enums import AttnMaskType, ModelType
diff --git a/nemo/collections/tts/models/fastpitch.py b/nemo/collections/tts/models/fastpitch.py
--- a/nemo/collections/tts/models/fastpitch.py
+++ b/nemo/collections/tts/models/fastpitch.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import contextlib
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from pathlib import Path
from typing import List, Optional
@@ -70,12 +70,12 @@ class TextTokenizer:
apostrophe: bool = True
pad_with_space: bool = True
add_blank_at: bool = True
- g2p: G2PConfig = G2PConfig()
+ g2p: G2PConfig = field(default_factory=lambda: G2PConfig())
@dataclass
class TextTokenizerConfig:
- text_tokenizer: TextTokenizer = TextTokenizer()
+ text_tokenizer: TextTokenizer = field(default_factory=lambda: TextTokenizer())
class FastPitchModel(SpectrogramGenerator, Exportable, FastPitchAdapterModelMixin):
diff --git a/nemo/collections/tts/models/tacotron2.py b/nemo/collections/tts/models/tacotron2.py
--- a/nemo/collections/tts/models/tacotron2.py
+++ b/nemo/collections/tts/models/tacotron2.py
@@ -13,7 +13,7 @@
# limitations under the License.
import contextlib
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional
import torch
@@ -53,7 +53,7 @@ class Preprocessor:
@dataclass
class Tacotron2Config:
- preprocessor: Preprocessor = Preprocessor()
+ preprocessor: Preprocessor = field(default_factory=lambda: Preprocessor())
encoder: Dict[Any, Any] = MISSING
decoder: Dict[Any, Any] = MISSING
postnet: Dict[Any, Any] = MISSING
diff --git a/nemo/core/config/modelPT.py b/nemo/core/config/modelPT.py
--- a/nemo/core/config/modelPT.py
+++ b/nemo/core/config/modelPT.py
@@ -58,11 +58,13 @@ class HydraConfig:
class NemoConfig:
name: str = MISSING
model: ModelConfig = MISSING
- trainer: config.TrainerConfig = config.TrainerConfig(
- strategy="ddp", enable_checkpointing=False, logger=False, log_every_n_steps=1, accelerator='gpu'
+ trainer: config.TrainerConfig = field(
+ default_factory=lambda: config.TrainerConfig(
+ strategy="ddp", enable_checkpointing=False, logger=False, log_every_n_steps=1, accelerator='gpu'
+ )
)
- exp_manager: Optional[Any] = exp_manager.ExpManagerConfig()
- hydra: HydraConfig = HydraConfig()
+ exp_manager: Optional[Any] = field(default_factory=lambda: exp_manager.ExpManagerConfig())
+ hydra: HydraConfig = field(default_factory=lambda: HydraConfig())
class ModelConfigBuilder:
diff --git a/nemo/utils/exp_manager.py b/nemo/utils/exp_manager.py
--- a/nemo/utils/exp_manager.py
+++ b/nemo/utils/exp_manager.py
@@ -18,7 +18,7 @@
import sys
import time
import warnings
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from datetime import timedelta
from pathlib import Path
from shutil import copy, move
@@ -146,28 +146,30 @@ class ExpManagerConfig:
create_wandb_logger: Optional[bool] = False
wandb_logger_kwargs: Optional[Dict[Any, Any]] = None
create_mlflow_logger: Optional[bool] = False
- mlflow_logger_kwargs: Optional[MLFlowParams] = MLFlowParams()
+ mlflow_logger_kwargs: Optional[MLFlowParams] = field(default_factory=lambda: MLFlowParams())
create_dllogger_logger: Optional[bool] = False
- dllogger_logger_kwargs: Optional[DLLoggerParams] = DLLoggerParams()
+ dllogger_logger_kwargs: Optional[DLLoggerParams] = field(default_factory=lambda: DLLoggerParams())
create_clearml_logger: Optional[bool] = False
- clearml_logger_kwargs: Optional[ClearMLParams] = ClearMLParams()
+ clearml_logger_kwargs: Optional[ClearMLParams] = field(default_factory=lambda: ClearMLParams())
# Checkpointing parameters
create_checkpoint_callback: Optional[bool] = True
- checkpoint_callback_params: Optional[CallbackParams] = CallbackParams()
+ checkpoint_callback_params: Optional[CallbackParams] = field(default_factory=lambda: CallbackParams())
create_early_stopping_callback: Optional[bool] = False
- early_stopping_callback_params: Optional[EarlyStoppingParams] = EarlyStoppingParams()
+ early_stopping_callback_params: Optional[EarlyStoppingParams] = field(
+ default_factory=lambda: EarlyStoppingParams()
+ )
create_preemption_callback: Optional[bool] = True
# Additional exp_manager arguments
files_to_copy: Optional[List[str]] = None
# logs timing of train/val/test steps
log_step_timing: Optional[bool] = True
- step_timing_kwargs: Optional[StepTimingParams] = StepTimingParams()
+ step_timing_kwargs: Optional[StepTimingParams] = field(default_factory=lambda: StepTimingParams())
# Configures creation of log files for different ranks
log_local_rank_0_only: Optional[bool] = False
log_global_rank_0_only: Optional[bool] = False
# disable initial validation when resuming from a checkpoint saved during validation
disable_validation_on_resume: Optional[bool] = True
- ema: Optional[EMAParams] = EMAParams()
+ ema: Optional[EMAParams] = field(default_factory=lambda: EMAParams())
# Wall clock time limit
max_time_per_run: Optional[str] = None
# time to sleep non 0 ranks during initialization
diff --git a/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram.py b/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram.py
--- a/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram.py
+++ b/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram.py
@@ -112,14 +112,14 @@ class EvalBeamSearchNGramConfig:
beam_beta: List[float] = field(default_factory=lambda: [0.0]) # The beta parameter or list of the betas for the beam search decoding
decoding_strategy: str = "beam"
- decoding: ctc_beam_decoding.BeamCTCInferConfig = ctc_beam_decoding.BeamCTCInferConfig(beam_size=128)
+ decoding: ctc_beam_decoding.BeamCTCInferConfig = field(default_factory=lambda: ctc_beam_decoding.BeamCTCInferConfig(beam_size=128))
- text_processing: Optional[TextProcessingConfig] = TextProcessingConfig(
+ text_processing: Optional[TextProcessingConfig] = field(default_factory=lambda: TextProcessingConfig(
punctuation_marks = ".,?",
separate_punctuation = False,
do_lowercase = False,
rm_punctuation = False,
- )
+ ))
# fmt: on
diff --git a/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram_transducer.py b/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram_transducer.py
--- a/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram_transducer.py
+++ b/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram_transducer.py
@@ -115,7 +115,7 @@ class EvalBeamSearchNGramConfig:
hat_subtract_ilm: bool = False
hat_ilm_weight: List[float] = field(default_factory=lambda: [0.0])
- decoding: rnnt_beam_decoding.BeamRNNTInferConfig = rnnt_beam_decoding.BeamRNNTInferConfig(beam_size=128)
+ decoding: rnnt_beam_decoding.BeamRNNTInferConfig = field(default_factory=lambda: rnnt_beam_decoding.BeamRNNTInferConfig(beam_size=128))
# fmt: on
diff --git a/scripts/confidence_ensembles/build_ensemble.py b/scripts/confidence_ensembles/build_ensemble.py
--- a/scripts/confidence_ensembles/build_ensemble.py
+++ b/scripts/confidence_ensembles/build_ensemble.py
@@ -75,7 +75,7 @@
import sys
import tempfile
from copy import deepcopy
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from pathlib import Path
from typing import Dict, List, Optional, Tuple
@@ -209,19 +209,23 @@ class BuildEnsembleConfig:
random_seed: int = 0 # for reproducibility
# default confidence, can override
- confidence: ConfidenceConfig = ConfidenceConfig(
- # we keep frame confidences and apply aggregation manually to get full-utterance confidence
- preserve_frame_confidence=True,
- exclude_blank=True,
- aggregation="mean",
- method_cfg=ConfidenceMethodConfig(name="entropy", entropy_type="renyi", alpha=0.25, entropy_norm="lin",),
+ confidence: ConfidenceConfig = field(
+ default_factory=lambda: ConfidenceConfig(
+ # we keep frame confidences and apply aggregation manually to get full-utterance confidence
+ preserve_frame_confidence=True,
+ exclude_blank=True,
+ aggregation="mean",
+ measure_cfg=ConfidenceMethodConfig(name="entropy", entropy_type="renyi", alpha=0.25, entropy_norm="lin",),
+ )
)
temperature: float = 1.0
# this is optional, but can be used to change any aspect of the transcription
# config, such as batch size or amp usage. Note that model, data and confidence
# will be overriden by this script
- transcription: transcribe_speech.TranscriptionConfig = transcribe_speech.TranscriptionConfig()
+ transcription: transcribe_speech.TranscriptionConfig = field(
+ default_factory=lambda: transcribe_speech.TranscriptionConfig()
+ )
# set to True to tune the confidence.
# requires dev manifests to be specified for each model
@@ -229,12 +233,14 @@ class BuildEnsembleConfig:
# used to specify what to tune over. By default runs tuning over some
# reasonalbe grid, so that it does not take forever.
# Can be changed as needed
- tune_confidence_config: TuneConfidenceConfig = TuneConfidenceConfig()
+ tune_confidence_config: TuneConfidenceConfig = field(default_factory=lambda: TuneConfidenceConfig())
# very fast to tune and can be important in case of imbalanced datasets
# will automatically set to False if dev data is not available
tune_logistic_regression: bool = True
- tune_logistic_regression_config: TuneLogisticRegressionConfig = TuneLogisticRegressionConfig()
+ tune_logistic_regression_config: TuneLogisticRegressionConfig = field(
+ default_factory=lambda: TuneLogisticRegressionConfig()
+ )
def __post_init__(self):
"""Checking that if any dev data is provided, all are provided.
diff --git a/scripts/speech_recognition/confidence/benchmark_asr_confidence.py b/scripts/speech_recognition/confidence/benchmark_asr_confidence.py
--- a/scripts/speech_recognition/confidence/benchmark_asr_confidence.py
+++ b/scripts/speech_recognition/confidence/benchmark_asr_confidence.py
@@ -14,7 +14,7 @@
import json
import os
-from dataclasses import dataclass, is_dataclass
+from dataclasses import dataclass, field, is_dataclass
from pathlib import Path
from typing import Optional
@@ -124,7 +124,9 @@ class ConfidenceBenchmarkingConfig:
# Confidence configs
target_level: str = "auto" # Choices: "word", "token", "auto" (for both word- and token-level confidence)
- confidence_cfg: ConfidenceConfig = ConfidenceConfig(preserve_word_confidence=True, preserve_token_confidence=True)
+ confidence_cfg: ConfidenceConfig = field(
+ default_factory=lambda: ConfidenceConfig(preserve_word_confidence=True, preserve_token_confidence=True)
+ )
grid_params: Optional[str] = None # a dictionary with lists of parameters to iteratively benchmark on
diff --git a/scripts/speech_recognition/convert_to_tarred_audio_dataset.py b/scripts/speech_recognition/convert_to_tarred_audio_dataset.py
--- a/scripts/speech_recognition/convert_to_tarred_audio_dataset.py
+++ b/scripts/speech_recognition/convert_to_tarred_audio_dataset.py
@@ -202,7 +202,7 @@ class ASRTarredDatasetMetadata:
num_samples_per_shard: Optional[int] = None
is_concatenated_manifest: bool = False
- dataset_config: Optional[ASRTarredDatasetConfig] = ASRTarredDatasetConfig()
+ dataset_config: Optional[ASRTarredDatasetConfig] = field(default_factory=lambda: ASRTarredDatasetConfig())
history: Optional[List[Any]] = field(default_factory=lambda: [])
def __post_init__(self):
diff --git a/tools/nemo_forced_aligner/align.py b/tools/nemo_forced_aligner/align.py
--- a/tools/nemo_forced_aligner/align.py
+++ b/tools/nemo_forced_aligner/align.py
@@ -149,8 +149,8 @@ class AlignmentConfig:
# Output file configs
save_output_file_formats: List[str] = field(default_factory=lambda: ["ctm", "ass"])
- ctm_file_config: CTMFileConfig = CTMFileConfig()
- ass_file_config: ASSFileConfig = ASSFileConfig()
+ ctm_file_config: CTMFileConfig = field(default_factory=lambda: CTMFileConfig())
+ ass_file_config: ASSFileConfig = field(default_factory=lambda: ASSFileConfig())
@hydra_runner(config_name="AlignmentConfig", schema=AlignmentConfig)
</patch>
</s>
</patch>
|
diff --git a/tests/collections/asr/test_text_to_text_dataset.py b/tests/collections/asr/test_text_to_text_dataset.py
--- a/tests/collections/asr/test_text_to_text_dataset.py
+++ b/tests/collections/asr/test_text_to_text_dataset.py
@@ -15,7 +15,7 @@
import json
import multiprocessing
import os
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from pathlib import Path
import pytest
@@ -118,7 +118,7 @@ class TextTokenizerCfg:
apostrophe: bool = True
pad_with_space: bool = True
add_blank_at: bool = True
- g2p: G2PConfig = G2PConfig()
+ g2p: G2PConfig = field(default_factory=lambda: G2PConfig())
config = OmegaConf.create(OmegaConf.to_yaml(TextTokenizerCfg()))
return instantiate(config)
|
1.0
| |||
slackapi__python-slack-events-api-71
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Passing Flask app proxy as server
Hi Guys,
I have an app factory in my setup, and the app object is usually invoked as:
`from flask import current_app as app`
However, slackeventsapi complains about the app object:
`TypeError("Server must be an instance of Flask")`
I have fixed it by adding the following to server.py:
`from werkzeug.local import LocalProxy # Importing the localproxy class`
Line 25
Changed from:
` if isinstance(server, Flask):`
to:
`if isinstance(server, Flask) or isinstance(server, LocalProxy):`
Basically, if a Flask app proxy is passed, the API will carry on without complaining, since the proxy has the same methods as the Flask app object.
I hope this helps other people and that it is considered as a solution; if more information is needed I am happy to provide it.
Thanks for the good work with the API.
### What type of issue is this? (place an `x` in one of the `[ ]`)
- [X] bug ?
- [X] enhancement (feature request)
- [ ] question
- [ ] documentation related
- [ ] testing related
- [ ] discussion
### Requirements
* [X] I've read and understood the [Contributing guidelines](https://github.com/slackapi/python-slack-events-api/blob/master/.github/contributing.md) and have done my best effort to follow them.
* [X] I've read and agree to the [Code of Conduct](https://slackhq.github.io/code-of-conduct).
* [X] I've searched for any related issues and avoided creating a duplicate issue.
#### Reproducible in:
slackeventsapi version: slackeventsapi==2.1.0
python version: Python 3.7.3
OS version(s):
</issue>
<code>
[start of README.rst]
Slack Events API adapter for Python
===================================
.. image:: https://badge.fury.io/py/slackeventsapi.svg
:target: https://pypi.org/project/slackeventsapi/
.. image:: https://travis-ci.org/slackapi/python-slack-events-api.svg?branch=master
:target: https://travis-ci.org/slackapi/python-slack-events-api
.. image:: https://codecov.io/gh/slackapi/python-slack-events-api/branch/master/graph/badge.svg
:target: https://codecov.io/gh/slackapi/python-slack-events-api
The Slack Events Adapter is a Python-based solution to receive and parse events
from Slack's Events API. This library uses an event emitter framework to allow
you to easily process Slack events by simply attaching functions
to event listeners.
This adapter enhances and simplifies Slack's Events API by incorporating useful best practices, patterns, and opportunities to abstract out common tasks.
💡 We wrote a `blog post which explains how`_ the Events API can help you, why we built these tools, and how you can use them to build production-ready Slack apps.
.. _blog post which explains how: https://medium.com/@SlackAPI/enhancing-slacks-events-api-7535827829ab
🤖 Installation
----------------
.. code:: shell

   pip install slackeventsapi
🤖 App Setup
--------------------
Before you can use the `Events API`_ you must
`create a Slack App`_, and turn on
`Event Subscriptions`_.
💡 When you add the Request URL to your app's Event Subscription settings,
Slack will send a request containing a `challenge` code to verify that your
server is alive. This package handles that URL Verification event for you, so
all you need to do is start the example app, start ngrok and configure your
URL accordingly.
✅
Once you have your `Request URL` verified, your app is ready to start
receiving Team Events.
🎉 Your server will begin receiving Events from Slack's Events API as soon as a
user has authorized your app.
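Under the hood, the URL verification handling described above amounts to echoing the `challenge` field back to Slack. A minimal sketch (not the package's public API; the payload below is a made-up example):

```python
import json

def handle_url_verification(raw_body):
    # Mirror what the adapter does internally for Slack's
    # url_verification event: echo the challenge value back
    # in the response body so Slack can verify the endpoint.
    event_data = json.loads(raw_body)
    if "challenge" in event_data:
        return event_data["challenge"]
    return None

payload = '{"token": "t", "challenge": "abc123", "type": "url_verification"}'
print(handle_url_verification(payload))  # prints abc123
```

With the adapter installed you never call anything like this yourself; it runs automatically when Slack posts the verification request.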
🤖 Development workflow:
===========================
(1) Create a Slack app on https://api.slack.com/apps
(2) Add a `bot user` for your app
(3) Start the example app on your **Request URL** endpoint
(4) Start ngrok and copy the **HTTPS** URL
(5) Add your **Request URL** and subscribe your app to events
(6) Go to your ngrok URL (e.g. https://myapp12.ngrok.com/) and auth your app
**🎉 Once your app has been authorized, you will begin receiving Slack Events**
⚠️ Ngrok is a great tool for developing Slack apps, but we don't recommend using ngrok
for production apps.
🤖 Usage
----------
**⚠️ Keep your app's credentials safe!**
- For development, keep them in virtualenv variables.
- For production, use a secure data store.
- Never post your app's credentials to github.
.. code:: python
SLACK_SIGNING_SECRET = os.environ["SLACK_SIGNING_SECRET"]
Create a Slack Event Adapter for receiving actions via the Events API
-----------------------------------------------------------------------
**Using the built-in Flask server:**
.. code:: python
from slackeventsapi import SlackEventAdapter
slack_events_adapter = SlackEventAdapter(SLACK_SIGNING_SECRET, endpoint="/slack/events")
# Create an event listener for "reaction_added" events and print the emoji name
@slack_events_adapter.on("reaction_added")
def reaction_added(event_data):
emoji = event_data["event"]["reaction"]
print(emoji)
# Start the server on port 3000
slack_events_adapter.start(port=3000)
**Using your existing Flask instance:**
.. code:: python
from flask import Flask
from slackeventsapi import SlackEventAdapter
# This `app` represents your existing Flask app
app = Flask(__name__)
# An example of one of your Flask app's routes
@app.route("/")
def hello():
return "Hello there!"
# Bind the Events API route to your existing Flask app by passing the server
# instance as the last param, or with `server=app`.
slack_events_adapter = SlackEventAdapter(SLACK_SIGNING_SECRET, "/slack/events", app)
# Create an event listener for "reaction_added" events and print the emoji name
@slack_events_adapter.on("reaction_added")
def reaction_added(event_data):
emoji = event_data["event"]["reaction"]
print(emoji)
# Start the server on port 3000
if __name__ == "__main__":
app.run(port=3000)
For a comprehensive list of available Slack `Events` and more information on
`Scopes`, see https://api.slack.com/events-api
🤖 Example event listeners
-----------------------------
See `example.py`_ for usage examples. This example also utilizes the
SlackClient Web API client.
.. _example.py: /example/
🤖 Support
-----------
Need help? Join `Slack Community`_ and talk to us in `#slack-api`_.
You can also `create an Issue`_ right here on GitHub.
.. _Events API: https://api.slack.com/events-api
.. _create a Slack App: https://api.slack.com/apps/new
.. _Event Subscriptions: https://api.slack.com/events-api#subscriptions
.. _Slack Community: http://slackcommunity.com/
.. _#slack-api: https://dev4slack.slack.com/messages/slack-api/
.. _create an Issue: https://github.com/slackapi/python-slack-events-api/issues/new
[end of README.rst]
[start of /dev/null]
[end of /dev/null]
[start of slackeventsapi/server.py]
from flask import Flask, request, make_response, Blueprint
import json
import platform
import sys
import hmac
import hashlib
from time import time
from .version import __version__
class SlackServer(Flask):
def __init__(self, signing_secret, endpoint, emitter, server):
self.signing_secret = signing_secret
self.emitter = emitter
self.endpoint = endpoint
self.package_info = self.get_package_info()
# If a server is passed in, bind the event handler routes to it,
# otherwise create a new Flask instance.
if server:
if isinstance(server, Flask) or isinstance(server, Blueprint):
self.bind_route(server)
else:
raise TypeError("Server must be an instance of Flask or Blueprint")
else:
Flask.__init__(self, __name__)
self.bind_route(self)
def get_package_info(self):
client_name = __name__.split('.')[0]
client_version = __version__ # Version is returned from version.py
# Collect the package info, Python version and OS version.
package_info = {
"client": "{0}/{1}".format(client_name, client_version),
"python": "Python/{v.major}.{v.minor}.{v.micro}".format(v=sys.version_info),
"system": "{0}/{1}".format(platform.system(), platform.release())
}
# Concatenate and format the user-agent string to be passed into request headers
ua_string = []
for key, val in package_info.items():
ua_string.append(val)
return " ".join(ua_string)
def verify_signature(self, timestamp, signature):
# Verify the request signature of the request sent from Slack
# Generate a new hash using the app's signing secret and request data
# Compare the generated hash and incoming request signature
# Python 2.7.6 doesn't support compare_digest
# It's recommended to use Python 2.7.7+
# noqa See https://docs.python.org/2/whatsnew/2.7.html#pep-466-network-security-enhancements-for-python-2-7
req = str.encode('v0:' + str(timestamp) + ':') + request.get_data()
request_hash = 'v0=' + hmac.new(
str.encode(self.signing_secret),
req, hashlib.sha256
).hexdigest()
if hasattr(hmac, "compare_digest"):
# Compare byte strings for Python 2
if (sys.version_info[0] == 2):
return hmac.compare_digest(bytes(request_hash), bytes(signature))
else:
return hmac.compare_digest(request_hash, signature)
else:
if len(request_hash) != len(signature):
return False
result = 0
if isinstance(request_hash, bytes) and isinstance(signature, bytes):
for x, y in zip(request_hash, signature):
result |= x ^ y
else:
for x, y in zip(request_hash, signature):
result |= ord(x) ^ ord(y)
return result == 0
def bind_route(self, server):
@server.route(self.endpoint, methods=['GET', 'POST'])
def event():
# If a GET request is made, return 404.
if request.method == 'GET':
return make_response("These are not the slackbots you're looking for.", 404)
# Each request comes with request timestamp and request signature
# emit an error if the timestamp is out of range
req_timestamp = request.headers.get('X-Slack-Request-Timestamp')
if abs(time() - int(req_timestamp)) > 60 * 5:
slack_exception = SlackEventAdapterException('Invalid request timestamp')
self.emitter.emit('error', slack_exception)
return make_response("", 403)
# Verify the request signature using the app's signing secret
# emit an error if the signature can't be verified
req_signature = request.headers.get('X-Slack-Signature')
if not self.verify_signature(req_timestamp, req_signature):
slack_exception = SlackEventAdapterException('Invalid request signature')
self.emitter.emit('error', slack_exception)
return make_response("", 403)
# Parse the request payload into JSON
event_data = json.loads(request.data.decode('utf-8'))
# Echo the URL verification challenge code back to Slack
if "challenge" in event_data:
return make_response(
event_data.get("challenge"), 200, {"content_type": "application/json"}
)
# Parse the Event payload and emit the event to the event listener
if "event" in event_data:
event_type = event_data["event"]["type"]
self.emitter.emit(event_type, event_data)
response = make_response("", 200)
response.headers['X-Slack-Powered-By'] = self.package_info
return response
class SlackEventAdapterException(Exception):
"""
Base exception for all errors raised by the SlackClient library
"""
def __init__(self, msg=None):
if msg is None:
# default error message
msg = "An error occurred in the SlackEventsApiAdapter library"
super(SlackEventAdapterException, self).__init__(msg)
[end of slackeventsapi/server.py]
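As a sanity check against `verify_signature` in the listing above, a request signature can be computed the same way from the outside using only the stdlib. This is a sketch with made-up values (the secret, timestamp, and body are not real):

```python
import hashlib
import hmac

def sign_request(signing_secret, timestamp, body):
    # Build the v0 basestring exactly as verify_signature does:
    # "v0:<timestamp>:" + raw request body, then HMAC-SHA256 it
    # with the app's signing secret and prefix the hex digest.
    basestring = "v0:{0}:".format(timestamp).encode() + body
    digest = hmac.new(signing_secret.encode(), basestring, hashlib.sha256)
    return "v0=" + digest.hexdigest()

secret = "8f742231b10e8888abcd99yyyzzz85a5"   # made-up signing secret
signature = sign_request(secret, "1531420618", b'{"type":"event_callback"}')
# "v0=" plus 64 hex characters of SHA-256 output
assert signature.startswith("v0=") and len(signature) == 67
```

A test client can use this to forge valid `X-Slack-Signature` headers against a locally running adapter.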
</code>
I need you to solve this issue by regenerating the full files in the code base that you would like to change. You can change as many files as you like. Please respond with a list of files and their revised contents in the following format.
<example>
[start of /src/this_file.py]
import os
def euclidean(a, b):
if b == 0:
return a
return euclidean(b, a % b)
[end of /src/this_file.py]
[start of /src/another_file.py]
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
x, y = x0, y0
sx = -1 if x0 > x1 else 1
sy = -1 if y0 > y1 else 1
if dx > dy:
err = dx / 2.0
while x != x1:
points.append((x, y))
err -= dy
if err < 0:
y += sy
err += dx
x += sx
else:
err = dy / 2.0
while y != y1:
points.append((x
err -= dx
if err < 0:
x += sx
err += dy
y += sy
points.append((x, y))
return points
[end of /src/another_file.py]
</example>
|
slackapi/python-slack-events-api
|
0c0ce604b502508622fb14c278a0d64841fa32e3
|
Passing Flask app proxy as server
Hi Guys,
I have an app factory on my setup and the app object usually it is invoked as :
`from flask import current_app as app`
However, the slackeventsapi complains about the app object :
`TypeError("Server must be an instance of Flask")`
I have fixed adding the following to server.py :
`from werkzeug.local import LocalProxy # Importing the localproxy class`
Line 25
Changed from :
` if isinstance(server, Flask):`
to :
`if isinstance(server, Flask) or isinstance(server, LocalProxy):`
Basically, if a Flask app proxy is passed, the API will carry on without complaining since it has the same methods as the Flask app object.
I hope this helps other people and that it is considered as a solution. If more information is needed, I am happy to provide it.
Thanks for the good work with the API.
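The proposed guard can be sketched with stand-in classes (the real `Flask`, `Blueprint`, and `LocalProxy` come from `flask` and `werkzeug.local`; the stubs below only illustrate the `isinstance` check, and `check_server` is a made-up name):

```python
# Stand-in classes for illustration only; flask.current_app is a
# werkzeug.local.LocalProxy wrapping the active Flask application.
class Flask(object): pass
class Blueprint(object): pass
class LocalProxy(object): pass

def check_server(server):
    # Proposed guard: also accept a LocalProxy such as current_app,
    # since it forwards attribute access to the real Flask app.
    if isinstance(server, (Flask, Blueprint, LocalProxy)):
        return True
    raise TypeError(
        "Server must be an instance of Flask, Blueprint, or LocalProxy")

assert check_server(LocalProxy())
```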
### What type of issue is this? (place an `x` in one of the `[ ]`)
- [X] bug ?
- [X] enhancement (feature request)
- [ ] question
- [ ] documentation related
- [ ] testing related
- [ ] discussion
### Requirements
* [X] I've read and understood the [Contributing guidelines](https://github.com/slackapi/python-slack-events-api/blob/master/.github/contributing.md) and have done my best effort to follow them.
* [X] I've read and agree to the [Code of Conduct](https://slackhq.github.io/code-of-conduct).
* [X] I've searched for any related issues and avoided creating a duplicate issue.
#### Reproducible in:
slackeventsapi version: slackeventsapi==2.1.0
python version: Python 3.7.3
OS version(s):
|
2020-06-12T06:58:10Z
|
<patch>
diff --git a/example/current_app/main.py b/example/current_app/main.py
new file mode 100644
--- /dev/null
+++ b/example/current_app/main.py
@@ -0,0 +1,49 @@
+# ------------------
+# Only for running this script here
+import sys
+from os.path import dirname
+sys.path.insert(1, f"{dirname(__file__)}/../..")
+# ------------------
+
+import os
+from slack import WebClient
+import logging
+logging.basicConfig(level=logging.DEBUG)
+
+from flask import Flask
+
+app = Flask(__name__)
+
+with app.app_context():
+ from test_module.slack_app import slack_events_adapter
+
+ slack_bot_token = os.environ["SLACK_BOT_TOKEN"]
+ slack_client = WebClient(slack_bot_token)
+
+
+ @slack_events_adapter.on("message")
+ def handle_message(event_data):
+ message = event_data["event"]
+ if message.get("subtype") is None and "hi" in message.get('text'):
+ channel = message["channel"]
+ message = "Hi <@%s>! :tada:" % message["user"]
+ slack_client.chat_postMessage(channel=channel, text=message)
+
+
+ @slack_events_adapter.on("error")
+ def error_handler(err):
+ print("ERROR: " + str(err))
+
+# (Terminal A)
+# source env/bin/activate
+# (env) $ export SLACK_BOT_TOKEN=xoxb-***
+# (env) $ export SLACK_SIGNING_SECRET=**
+# (env) $ cd example/current_app
+# (env) $ FLASK_APP=main.py FLASK_ENV=development flask run --port 3000
+
+# (Terminal B)
+# ngrok http 3000
+
+# in Slack
+# /invite @{your app's bot user}
+# post a message "hi" in the channel
diff --git a/slackeventsapi/server.py b/slackeventsapi/server.py
--- a/slackeventsapi/server.py
+++ b/slackeventsapi/server.py
@@ -1,10 +1,13 @@
-from flask import Flask, request, make_response, Blueprint
+import hashlib
+import hmac
import json
import platform
import sys
-import hmac
-import hashlib
from time import time
+
+from flask import Flask, request, make_response, Blueprint
+from werkzeug.local import LocalProxy
+
from .version import __version__
@@ -18,10 +21,10 @@ def __init__(self, signing_secret, endpoint, emitter, server):
# If a server is passed in, bind the event handler routes to it,
# otherwise create a new Flask instance.
if server:
- if isinstance(server, Flask) or isinstance(server, Blueprint):
+ if isinstance(server, (Flask, Blueprint, LocalProxy)):
self.bind_route(server)
else:
- raise TypeError("Server must be an instance of Flask or Blueprint")
+ raise TypeError("Server must be an instance of Flask, Blueprint, or LocalProxy")
else:
Flask.__init__(self, __name__)
self.bind_route(self)
</patch>
|
diff --git a/example/current_app/test_module/__init__.py b/example/current_app/test_module/__init__.py
new file mode 100644
diff --git a/example/current_app/test_module/slack_app.py b/example/current_app/test_module/slack_app.py
new file mode 100644
--- /dev/null
+++ b/example/current_app/test_module/slack_app.py
@@ -0,0 +1,16 @@
+# ------------------
+# Only for running this script here
+import logging
+import sys
+from os.path import dirname
+
+sys.path.insert(1, f"{dirname(__file__)}/../../..")
+logging.basicConfig(level=logging.DEBUG)
+# ------------------
+
+from flask import current_app as app
+from slackeventsapi import SlackEventAdapter
+import os
+
+slack_signing_secret = os.environ["SLACK_SIGNING_SECRET"]
+slack_events_adapter = SlackEventAdapter(slack_signing_secret, "/slack/events", app)
diff --git a/tests/test_server.py b/tests/test_server.py
--- a/tests/test_server.py
+++ b/tests/test_server.py
@@ -18,7 +18,7 @@ def test_server_not_flask():
with pytest.raises(TypeError) as e:
invalid_flask = "I am not a Flask"
SlackEventAdapter("SIGNING_SECRET", "/slack/events", invalid_flask)
- assert e.value.args[0] == 'Server must be an instance of Flask or Blueprint'
+ assert e.value.args[0] == 'Server must be an instance of Flask, Blueprint, or LocalProxy'
def test_blueprint_server():
|
1.0
| ||||
celery__celery-2598
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
CELERY_RESULT_SERIALIZER = 'json' breaks Exception marshaling
Setting `CELERY_RESULT_SERIALIZER = json` and raising an exception in the worker leads to this:
```
/path/to/lib/python2.7/site-packages/celery/result.py in get(self, timeout, propagate, interval, no_ack, follow_parents, EXCEPTION_STATES, PROPAGATE_STATES)
173 status = meta['status']
174 if status in PROPAGATE_STATES and propagate:
--> 175 raise meta['result']
176 return meta['result']
177 wait = get # deprecated alias to :meth:`get`.
TypeError: exceptions must be old-style classes or derived from BaseException, not dict
```
where the contents of `meta['result']` are (in my case):
```
{u'exc_message': u'unknown keys: nam', u'exc_type': u'ValueError'}
```
so it _looks_ like celery could convert the dict to a real exception before raising, but it does not currently. Changing back to `pickle` works as expected.
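One way such a conversion could look — rebuilding a builtin exception from the serialized dict — is sketched below. This is an illustration only, not Celery's actual helper; `exception_from_meta` is a made-up name:

```python
import builtins

def exception_from_meta(result):
    # Look the type name up among the builtin exceptions and fall
    # back to a plain Exception for anything unknown or unsafe.
    exc_type = getattr(builtins, result.get("exc_type", ""), None)
    if not (isinstance(exc_type, type) and issubclass(exc_type, BaseException)):
        exc_type = Exception
    return exc_type(result.get("exc_message", ""))

exc = exception_from_meta({"exc_type": "ValueError",
                           "exc_message": "unknown keys: nam"})
assert isinstance(exc, ValueError)
```

With a helper like this, `get()` could raise a real `ValueError` instead of the raw dict.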
bug can be reproduced with the following:
``` python
# jsonresults.py
from celery.app.base import Celery
CELERY_RESULT_SERIALIZER = 'json'
CELERY_RESULT_BACKEND = 'amqp'
app = Celery(config_source=__name__)
@app.task
def hello():
raise ValueError('go away')
```
worker:
```
# python -m celery --app=jsonresults:app worker
```
caller:
``` python
import jsonresults
jsonresults.hello.delay().get()
```
</issue>
<code>
[start of README.rst]
=================================
celery - Distributed Task Queue
=================================
.. image:: http://cloud.github.com/downloads/celery/celery/celery_128.png
|build-status| |coverage-status|
:Version: 3.2.0a1 (Cipater)
:Web: http://celeryproject.org/
:Download: http://pypi.python.org/pypi/celery/
:Source: http://github.com/celery/celery/
:Keywords: task queue, job queue, asynchronous, async, rabbitmq, amqp, redis,
python, webhooks, queue, distributed
--
What is a Task Queue?
=====================
Task queues are used as a mechanism to distribute work across threads or
machines.
A task queue's input is a unit of work, called a task, dedicated worker
processes then constantly monitor the queue for new work to perform.
Celery communicates via messages, usually using a broker
to mediate between clients and workers. To initiate a task a client puts a
message on the queue, the broker then delivers the message to a worker.
A Celery system can consist of multiple workers and brokers, giving way
to high availability and horizontal scaling.
Celery is a library written in Python, but the protocol can be implemented in
any language. So far there's RCelery_ for the Ruby programming language, and a
`PHP client`_, but language interoperability can also be achieved
by `using webhooks`_.
.. _RCelery: https://github.com/leapfrogonline/rcelery
.. _`PHP client`: https://github.com/gjedeer/celery-php
.. _`using webhooks`:
http://docs.celeryproject.org/en/latest/userguide/remote-tasks.html
What do I need?
===============
Celery version 3.0 runs on,
- Python (2.6, 2.7, 3.3, 3.4)
- PyPy (1.8, 1.9)
- Jython (2.5, 2.7).
This is the last version to support Python 2.5,
and from Celery 3.1, Python 2.6 or later is required.
The last version to support Python 2.4 was Celery series 2.2.
*Celery* is usually used with a message broker to send and receive messages.
The RabbitMQ, Redis transports are feature complete,
but there's also experimental support for a myriad of other solutions, including
using SQLite for local development.
*Celery* can run on a single machine, on multiple machines, or even
across datacenters.
Get Started
===========
If this is the first time you're trying to use Celery, or you are
new to Celery 3.0 coming from previous versions then you should read our
getting started tutorials:
- `First steps with Celery`_
Tutorial teaching you the bare minimum needed to get started with Celery.
- `Next steps`_
A more complete overview, showing more features.
.. _`First steps with Celery`:
http://docs.celeryproject.org/en/latest/getting-started/first-steps-with-celery.html
.. _`Next steps`:
http://docs.celeryproject.org/en/latest/getting-started/next-steps.html
Celery is...
============
- **Simple**
Celery is easy to use and maintain, and does *not need configuration files*.
It has an active, friendly community you can talk to for support,
including a `mailing-list`_ and an IRC channel.
Here's one of the simplest applications you can make::
from celery import Celery
app = Celery('hello', broker='amqp://guest@localhost//')
@app.task
def hello():
return 'hello world'
- **Highly Available**
Workers and clients will automatically retry in the event
of connection loss or failure, and some brokers support
HA in way of *Master/Master* or *Master/Slave* replication.
- **Fast**
A single Celery process can process millions of tasks a minute,
with sub-millisecond round-trip latency (using RabbitMQ,
py-librabbitmq, and optimized settings).
- **Flexible**
Almost every part of *Celery* can be extended or used on its own,
Custom pool implementations, serializers, compression schemes, logging,
schedulers, consumers, producers, autoscalers, broker transports and much more.
It supports...
============
- **Message Transports**
- RabbitMQ_, Redis_,
- MongoDB_ (experimental), Amazon SQS (experimental),
- CouchDB_ (experimental), SQLAlchemy_ (experimental),
- Django ORM (experimental), `IronMQ`_
- and more...
- **Concurrency**
- Prefork, Eventlet_, gevent_, threads/single threaded
- **Result Stores**
- AMQP, Redis
- memcached, MongoDB
- SQLAlchemy, Django ORM
- Apache Cassandra, IronCache
- **Serialization**
- *pickle*, *json*, *yaml*, *msgpack*.
- *zlib*, *bzip2* compression.
- Cryptographic message signing.
.. _`Eventlet`: http://eventlet.net/
.. _`gevent`: http://gevent.org/
.. _RabbitMQ: http://rabbitmq.com
.. _Redis: http://redis.io
.. _MongoDB: http://mongodb.org
.. _Beanstalk: http://kr.github.com/beanstalkd
.. _CouchDB: http://couchdb.apache.org
.. _SQLAlchemy: http://sqlalchemy.org
.. _`IronMQ`: http://iron.io
Framework Integration
=====================
Celery is easy to integrate with web frameworks, some of which even have
integration packages:
+--------------------+------------------------+
| `Django`_ | not needed |
+--------------------+------------------------+
| `Pyramid`_ | `pyramid_celery`_ |
+--------------------+------------------------+
| `Pylons`_ | `celery-pylons`_ |
+--------------------+------------------------+
| `Flask`_ | not needed |
+--------------------+------------------------+
| `web2py`_ | `web2py-celery`_ |
+--------------------+------------------------+
| `Tornado`_ | `tornado-celery`_ |
+--------------------+------------------------+
The integration packages are not strictly necessary, but they can make
development easier, and sometimes they add important hooks like closing
database connections at ``fork``.
.. _`Django`: http://djangoproject.com/
.. _`Pylons`: http://www.pylonsproject.org/
.. _`Flask`: http://flask.pocoo.org/
.. _`web2py`: http://web2py.com/
.. _`Bottle`: http://bottlepy.org/
.. _`Pyramid`: http://docs.pylonsproject.org/en/latest/docs/pyramid.html
.. _`pyramid_celery`: http://pypi.python.org/pypi/pyramid_celery/
.. _`django-celery`: http://pypi.python.org/pypi/django-celery
.. _`celery-pylons`: http://pypi.python.org/pypi/celery-pylons
.. _`web2py-celery`: http://code.google.com/p/web2py-celery/
.. _`Tornado`: http://www.tornadoweb.org/
.. _`tornado-celery`: http://github.com/mher/tornado-celery/
.. _celery-documentation:
Documentation
=============
The `latest documentation`_ with user guides, tutorials and API reference
is hosted at Read The Docs.
.. _`latest documentation`: http://docs.celeryproject.org/en/latest/
.. _celery-installation:
Installation
============
You can install Celery either via the Python Package Index (PyPI)
or from source.
To install using `pip`,::
$ pip install -U Celery
To install using `easy_install`,::
$ easy_install -U Celery
.. _bundles:
Bundles
-------
Celery also defines a group of bundles that can be used
to install Celery and the dependencies for a given feature.
You can specify these in your requirements or on the ``pip`` comand-line
by using brackets. Multiple bundles can be specified by separating them by
commas.
::
$ pip install "celery[librabbitmq]"
$ pip install "celery[librabbitmq,redis,auth,msgpack]"
The following bundles are available:
Serializers
~~~~~~~~~~~
:celery[auth]:
for using the auth serializer.
:celery[msgpack]:
for using the msgpack serializer.
:celery[yaml]:
for using the yaml serializer.
Concurrency
~~~~~~~~~~~
:celery[eventlet]:
for using the eventlet pool.
:celery[gevent]:
for using the gevent pool.
:celery[threads]:
for using the thread pool.
Transports and Backends
~~~~~~~~~~~~~~~~~~~~~~~
:celery[librabbitmq]:
for using the librabbitmq C library.
:celery[redis]:
for using Redis as a message transport or as a result backend.
:celery[mongodb]:
for using MongoDB as a message transport (*experimental*),
or as a result backend (*supported*).
:celery[sqs]:
for using Amazon SQS as a message transport (*experimental*).
:celery[memcache]:
for using memcached as a result backend.
:celery[cassandra]:
for using Apache Cassandra as a result backend.
:celery[couchdb]:
for using CouchDB as a message transport (*experimental*).
:celery[couchbase]:
for using CouchBase as a result backend.
:celery[beanstalk]:
for using Beanstalk as a message transport (*experimental*).
:celery[zookeeper]:
for using Zookeeper as a message transport.
:celery[zeromq]:
for using ZeroMQ as a message transport (*experimental*).
:celery[sqlalchemy]:
for using SQLAlchemy as a message transport (*experimental*),
or as a result backend (*supported*).
:celery[pyro]:
for using the Pyro4 message transport (*experimental*).
:celery[slmq]:
for using the SoftLayer Message Queue transport (*experimental*).
.. _celery-installing-from-source:
Downloading and installing from source
--------------------------------------
Download the latest version of Celery from
http://pypi.python.org/pypi/celery/
You can install it by doing the following,::
$ tar xvfz celery-0.0.0.tar.gz
$ cd celery-0.0.0
$ python setup.py build
# python setup.py install
The last command must be executed as a privileged user if
you are not currently using a virtualenv.
.. _celery-installing-from-git:
Using the development version
-----------------------------
With pip
~~~~~~~~
The Celery development version also requires the development
versions of ``kombu``, ``amqp`` and ``billiard``.
You can install the latest snapshot of these using the following
pip commands::
$ pip install https://github.com/celery/celery/zipball/master#egg=celery
$ pip install https://github.com/celery/billiard/zipball/master#egg=billiard
$ pip install https://github.com/celery/py-amqp/zipball/master#egg=amqp
$ pip install https://github.com/celery/kombu/zipball/master#egg=kombu
With git
~~~~~~~~
Please see the Contributing section.
.. _getting-help:
Getting Help
============
.. _mailing-list:
Mailing list
------------
For discussions about the usage, development, and future of celery,
please join the `celery-users`_ mailing list.
.. _`celery-users`: http://groups.google.com/group/celery-users/
.. _irc-channel:
IRC
---
Come chat with us on IRC. The **#celery** channel is located at the `Freenode`_
network.
.. _`Freenode`: http://freenode.net
.. _bug-tracker:
Bug tracker
===========
If you have any suggestions, bug reports or annoyances please report them
to our issue tracker at http://github.com/celery/celery/issues/
.. _wiki:
Wiki
====
http://wiki.github.com/celery/celery/
.. _contributing-short:
Contributing
============
Development of `celery` happens at Github: http://github.com/celery/celery
You are highly encouraged to participate in the development
of `celery`. If you don't like Github (for some reason) you're welcome
to send regular patches.
Be sure to also read the `Contributing to Celery`_ section in the
documentation.
.. _`Contributing to Celery`:
http://docs.celeryproject.org/en/master/contributing.html
.. _license:
License
=======
This software is licensed under the `New BSD License`. See the ``LICENSE``
file in the top distribution directory for the full license text.
.. # vim: syntax=rst expandtab tabstop=4 shiftwidth=4 shiftround
.. image:: https://d2weczhvl823v0.cloudfront.net/celery/celery/trend.png
:alt: Bitdeli badge
:target: https://bitdeli.com/free
.. |build-status| image:: https://travis-ci.org/celery/celery.svg?branch=master
:target: https://travis-ci.org/celery/celery
.. |coverage-status| image:: https://coveralls.io/repos/celery/celery/badge.svg
:target: https://coveralls.io/r/celery/celery
[end of README.rst]
[start of celery/backends/amqp.py]
# -*- coding: utf-8 -*-
"""
celery.backends.amqp
~~~~~~~~~~~~~~~~~~~~
The AMQP result backend.
This backend publishes results as messages.
"""
from __future__ import absolute_import
import socket
from collections import deque
from operator import itemgetter
from kombu import Exchange, Queue, Producer, Consumer
from celery import states
from celery.exceptions import TimeoutError
from celery.five import range, monotonic
from celery.utils.functional import dictfilter
from celery.utils.log import get_logger
from celery.utils.timeutils import maybe_s_to_ms
from .base import BaseBackend
__all__ = ['BacklogLimitExceeded', 'AMQPBackend']
logger = get_logger(__name__)
class BacklogLimitExceeded(Exception):
"""Too much state history to fast-forward."""
def repair_uuid(s):
# Historically the dashes in UUIDS are removed from AMQ entity names,
# but there is no known reason to. Hopefully we'll be able to fix
# this in v4.0.
return '%s-%s-%s-%s-%s' % (s[:8], s[8:12], s[12:16], s[16:20], s[20:])
class NoCacheQueue(Queue):
can_cache_declaration = False
class AMQPBackend(BaseBackend):
"""Publishes results by sending messages."""
Exchange = Exchange
Queue = NoCacheQueue
Consumer = Consumer
Producer = Producer
BacklogLimitExceeded = BacklogLimitExceeded
persistent = True
supports_autoexpire = True
supports_native_join = True
retry_policy = {
'max_retries': 20,
'interval_start': 0,
'interval_step': 1,
'interval_max': 1,
}
def __init__(self, app, connection=None, exchange=None, exchange_type=None,
persistent=None, serializer=None, auto_delete=True, **kwargs):
super(AMQPBackend, self).__init__(app, **kwargs)
conf = self.app.conf
self._connection = connection
self.persistent = self.prepare_persistent(persistent)
self.delivery_mode = 2 if self.persistent else 1
exchange = exchange or conf.CELERY_RESULT_EXCHANGE
exchange_type = exchange_type or conf.CELERY_RESULT_EXCHANGE_TYPE
self.exchange = self._create_exchange(
exchange, exchange_type, self.delivery_mode,
)
self.serializer = serializer or conf.CELERY_RESULT_SERIALIZER
self.auto_delete = auto_delete
self.queue_arguments = dictfilter({
'x-expires': maybe_s_to_ms(self.expires),
})
def _create_exchange(self, name, type='direct', delivery_mode=2):
return self.Exchange(name=name,
type=type,
delivery_mode=delivery_mode,
durable=self.persistent,
auto_delete=False)
def _create_binding(self, task_id):
name = self.rkey(task_id)
return self.Queue(name=name,
exchange=self.exchange,
routing_key=name,
durable=self.persistent,
auto_delete=self.auto_delete,
queue_arguments=self.queue_arguments)
def revive(self, channel):
pass
def rkey(self, task_id):
return task_id.replace('-', '')
def destination_for(self, task_id, request):
if request:
return self.rkey(task_id), request.correlation_id or task_id
return self.rkey(task_id), task_id
def store_result(self, task_id, result, status,
traceback=None, request=None, **kwargs):
"""Send task return value and status."""
routing_key, correlation_id = self.destination_for(task_id, request)
if not routing_key:
return
with self.app.amqp.producer_pool.acquire(block=True) as producer:
producer.publish(
{'task_id': task_id, 'status': status,
'result': self.encode_result(result, status),
'traceback': traceback,
'children': self.current_task_children(request)},
exchange=self.exchange,
routing_key=routing_key,
correlation_id=correlation_id,
serializer=self.serializer,
retry=True, retry_policy=self.retry_policy,
declare=self.on_reply_declare(task_id),
delivery_mode=self.delivery_mode,
)
return result
def on_reply_declare(self, task_id):
return [self._create_binding(task_id)]
def wait_for(self, task_id, timeout=None, cache=True,
no_ack=True, on_interval=None,
READY_STATES=states.READY_STATES,
PROPAGATE_STATES=states.PROPAGATE_STATES,
**kwargs):
cached_meta = self._cache.get(task_id)
if cache and cached_meta and \
cached_meta['status'] in READY_STATES:
return cached_meta
else:
try:
return self.consume(task_id, timeout=timeout, no_ack=no_ack,
on_interval=on_interval)
except socket.timeout:
raise TimeoutError('The operation timed out.')
def get_task_meta(self, task_id, backlog_limit=1000):
# Polling and using basic_get
with self.app.pool.acquire_channel(block=True) as (_, channel):
binding = self._create_binding(task_id)(channel)
binding.declare()
prev = latest = acc = None
for i in range(backlog_limit): # spool ffwd
acc = binding.get(
accept=self.accept, no_ack=False,
)
if not acc: # no more messages
break
if acc.payload['task_id'] == task_id:
prev, latest = latest, acc
if prev:
# backends are not expected to keep history,
# so we delete everything except the most recent state.
prev.ack()
prev = None
else:
raise self.BacklogLimitExceeded(task_id)
if latest:
payload = self._cache[task_id] = latest.payload
latest.requeue()
return payload
else:
# no new state, use previous
try:
return self._cache[task_id]
except KeyError:
# result probably pending.
return {'status': states.PENDING, 'result': None}
poll = get_task_meta # XXX compat
def drain_events(self, connection, consumer,
timeout=None, on_interval=None, now=monotonic, wait=None):
wait = wait or connection.drain_events
results = {}
def callback(meta, message):
if meta['status'] in states.READY_STATES:
results[meta['task_id']] = meta
consumer.callbacks[:] = [callback]
time_start = now()
while 1:
# Total time spent may exceed a single call to wait()
if timeout and now() - time_start >= timeout:
raise socket.timeout()
try:
wait(timeout=1)
except socket.timeout:
pass
if on_interval:
on_interval()
if results: # got event on the wanted channel.
break
self._cache.update(results)
return results
def consume(self, task_id, timeout=None, no_ack=True, on_interval=None):
wait = self.drain_events
with self.app.pool.acquire_channel(block=True) as (conn, channel):
binding = self._create_binding(task_id)
with self.Consumer(channel, binding,
no_ack=no_ack, accept=self.accept) as consumer:
while 1:
try:
return wait(
conn, consumer, timeout, on_interval)[task_id]
except KeyError:
continue
def _many_bindings(self, ids):
return [self._create_binding(task_id) for task_id in ids]
def get_many(self, task_ids, timeout=None, no_ack=True, on_message=None,
now=monotonic, getfields=itemgetter('status', 'task_id'),
READY_STATES=states.READY_STATES,
PROPAGATE_STATES=states.PROPAGATE_STATES, **kwargs):
with self.app.pool.acquire_channel(block=True) as (conn, channel):
ids = set(task_ids)
cached_ids = set()
mark_cached = cached_ids.add
for task_id in ids:
try:
cached = self._cache[task_id]
except KeyError:
pass
else:
if cached['status'] in READY_STATES:
yield task_id, cached
mark_cached(task_id)
ids.difference_update(cached_ids)
results = deque()
push_result = results.append
push_cache = self._cache.__setitem__
decode_result = self.meta_from_decoded
def _on_message(message):
body = decode_result(message.decode())
if on_message is not None:
on_message(body)
state, uid = getfields(body)
if state in READY_STATES:
push_result(body) \
if uid in task_ids else push_cache(uid, body)
bindings = self._many_bindings(task_ids)
with self.Consumer(channel, bindings, on_message=_on_message,
accept=self.accept, no_ack=no_ack):
wait = conn.drain_events
popleft = results.popleft
while ids:
wait(timeout=timeout)
while results:
state = popleft()
task_id = state['task_id']
ids.discard(task_id)
push_cache(task_id, state)
yield task_id, state
def reload_task_result(self, task_id):
raise NotImplementedError(
'reload_task_result is not supported by this backend.')
def reload_group_result(self, task_id):
"""Reload group result, even if it has been previously fetched."""
raise NotImplementedError(
'reload_group_result is not supported by this backend.')
def save_group(self, group_id, result):
raise NotImplementedError(
'save_group is not supported by this backend.')
def restore_group(self, group_id, cache=True):
raise NotImplementedError(
'restore_group is not supported by this backend.')
def delete_group(self, group_id):
raise NotImplementedError(
'delete_group is not supported by this backend.')
def __reduce__(self, args=(), kwargs={}):
kwargs.update(
connection=self._connection,
exchange=self.exchange.name,
exchange_type=self.exchange.type,
persistent=self.persistent,
serializer=self.serializer,
auto_delete=self.auto_delete,
expires=self.expires,
)
return super(AMQPBackend, self).__reduce__(args, kwargs)
[end of celery/backends/amqp.py]
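The `drain_events()` method in the file above enforces one total deadline across many short `wait(timeout=1)` calls. A minimal standalone sketch of that pattern (the `drain_until` helper and the fake wait function are illustrative stand-ins, not celery APIs):

```python
import socket
from time import monotonic

def drain_until(predicate, wait, timeout=None, on_interval=None):
    # Wait in short slices, tracking total elapsed time so the overall
    # timeout holds even though each individual wait() call is short.
    start = monotonic()
    while not predicate():
        if timeout and monotonic() - start >= timeout:
            raise socket.timeout()
        try:
            wait(timeout=0.01)
        except socket.timeout:
            pass  # a single slice timing out is not fatal
        if on_interval:
            on_interval()

calls = []
def fake_wait(timeout=None):
    calls.append(timeout)

# The predicate plays the role of "a result for the wanted task arrived".
drain_until(lambda: len(calls) >= 3, fake_wait)
assert calls == [0.01, 0.01, 0.01]
```

The real method additionally copies matching results into `self._cache`; the loop shape is the part sketched here.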
</code>
I need you to solve this issue by regenerating the full files in the code base that you would like to change. You can change as many files as you like. Please respond with a list of files and their revised contents in the following format.
<example>
[start of /src/this_file.py]
import os
def euclidean(a, b):
if b == 0:
return a
return euclidean(b, a % b)
[end of /src/this_file.py]
[start of /src/another_file.py]
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
x, y = x0, y0
sx = -1 if x0 > x1 else 1
sy = -1 if y0 > y1 else 1
if dx > dy:
err = dx / 2.0
while x != x1:
points.append((x, y))
err -= dy
if err < 0:
y += sy
err += dx
x += sx
else:
err = dy / 2.0
while y != y1:
points.append((x, y))
err -= dx
if err < 0:
x += sx
err += dy
y += sy
points.append((x, y))
return points
[end of /src/another_file.py]
</example>
repo: celery/celery
base_commit: 6592ff64b6b024a4b68abcc53b151888fdf0dee3

problem_statement:
CELERY_RESULT_SERIALIZER = 'json' breaks Exception marshaling
Setting `CELERY_RESULT_SERIALIZER = json` and raising an exception in the worker leads to this:
```
/path/to/lib/python2.7/site-packages/celery/result.py in get(self, timeout, propagate, interval, no_ack, follow_parents, EXCEPTION_STATES, PROPAGATE_STATES)
173 status = meta['status']
174 if status in PROPAGATE_STATES and propagate:
--> 175 raise meta['result']
176 return meta['result']
177 wait = get # deprecated alias to :meth:`get`.
TypeError: exceptions must be old-style classes or derived from BaseException, not dict
```
where the contents of `meta['result']` are (in my case):
```
{u'exc_message': u'unknown keys: nam', u'exc_type': u'ValueError'}
```
so it _looks_ like celery could convert the dict to a real exception before raising, but it does not currently. Changing back to `pickle` works as expected.
The bug can be reproduced with the following:
``` python
# jsonresults.py
from celery.app.base import Celery
CELERY_RESULT_SERIALIZER = 'json'
CELERY_RESULT_BACKEND = 'amqp'
app = Celery(config_source=__name__)
@app.task
def hello():
raise ValueError('go away')
```
worker:
```
# python -m celery --app=jsonresults:app worker
```
caller:
``` python
import jsonresults
jsonresults.hello.delay().get()
```
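The conversion the report hints at — turning the serialized `{'exc_type': ..., 'exc_message': ...}` dict back into a raisable exception — can be sketched in isolation. The helper name below is hypothetical; only the dict keys come from the traceback above:

```python
import builtins

def exception_from_meta(meta):
    # Resolve the type name against the built-in exceptions; fall back
    # to plain Exception for unknown or non-exception names.
    exc_cls = getattr(builtins, meta['exc_type'], Exception)
    if not (isinstance(exc_cls, type) and issubclass(exc_cls, BaseException)):
        exc_cls = Exception
    return exc_cls(meta['exc_message'])

err = exception_from_meta({'exc_type': 'ValueError',
                           'exc_message': 'go away'})
assert isinstance(err, ValueError)
assert str(err) == 'go away'
```

A real implementation would also need to handle exception classes defined outside `builtins` (e.g. in application modules), which is where pickle's behaviour differs from json.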
hints_text: This is biting me as well. Any news?
created_at: 2015-04-29T14:52:17Z

patch:
<patch>
diff --git a/celery/backends/amqp.py b/celery/backends/amqp.py
--- a/celery/backends/amqp.py
+++ b/celery/backends/amqp.py
@@ -195,7 +195,7 @@ def drain_events(self, connection, consumer,
 
         def callback(meta, message):
             if meta['status'] in states.READY_STATES:
-                results[meta['task_id']] = meta
+                results[meta['task_id']] = self.meta_from_decoded(meta)
 
         consumer.callbacks[:] = [callback]
         time_start = now()
</patch>
test_patch:
diff --git a/celery/tests/backends/test_amqp.py b/celery/tests/backends/test_amqp.py
--- a/celery/tests/backends/test_amqp.py
+++ b/celery/tests/backends/test_amqp.py
@@ -13,6 +13,7 @@
 from celery.backends.amqp import AMQPBackend
 from celery.exceptions import TimeoutError
 from celery.five import Empty, Queue, range
+from celery.result import AsyncResult
 from celery.utils import uuid
 
 from celery.tests.case import (
@@ -246,10 +247,20 @@ def test_wait_for(self):
         with self.assertRaises(TimeoutError):
             b.wait_for(tid, timeout=0.01, cache=False)
 
-    def test_drain_events_remaining_timeouts(self):
+    def test_drain_events_decodes_exceptions_in_meta(self):
+        tid = uuid()
+        b = self.create_backend(serializer="json")
+        b.store_result(tid, RuntimeError("aap"), states.FAILURE)
+        result = AsyncResult(tid, backend=b)
 
-        class Connection(object):
+        with self.assertRaises(Exception) as cm:
+            result.get()
+        self.assertEqual(cm.exception.__class__.__name__, "RuntimeError")
+        self.assertEqual(str(cm.exception), "aap")
+
+    def test_drain_events_remaining_timeouts(self):
+        class Connection(object):
 
             def drain_events(self, timeout=None):
                 pass
version: 1.0

instance_id: celery__celery-2840

text:
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Message being acknowledged on WorkerLostError when CELERY_ACKS_LATE=True
When using celery v3.0.24, with `CELERY_ACKS_LATE = True`, if the OOM killer kills the celery worker, then the worker acknowledges the message.
As per [this](https://github.com/celery/celery/commit/e810420c) commit, `exc_info.internal` comes in as `false`, which means it is not an internal error, and so the message is acknowledged.
The desirable behaviour in such a case would be to not acknowledge the message (and to be able to tell whether it was an OOM error), so that some other worker can pick it up.
As a workaround, I've commented out the [code](https://github.com/siddharth96/celery/commit/427695d1b23034dadda85fd7a48f7367831be4fa) where celery acknowledges the message, because otherwise the message will be lost.
</issue>
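A minimal sketch of the behaviour the issue asks for, independent of celery: with late acks, a worker-lost failure (e.g. an OOM kill) should be requeued rather than acknowledged. All names below are illustrative stand-ins, not celery's real handler API:

```python
class FakeMessage:
    """Stand-in for a broker message; records what the handler did."""
    def __init__(self):
        self.acked = False
        self.requeued = False

    def ack(self):
        self.acked = True

    def reject(self, requeue=False):
        self.requeued = requeue

def on_task_failure(message, worker_lost, acks_late):
    # With acks_late, a message whose worker died should go back on
    # the queue so another worker can pick it up; other failures are
    # terminal, so the message is acknowledged as usual.
    if acks_late and worker_lost:
        message.reject(requeue=True)
    else:
        message.ack()

msg = FakeMessage()
on_task_failure(msg, worker_lost=True, acks_late=True)
assert msg.requeued and not msg.acked
```

Whether a worker-lost task is safe to rerun depends on it being idempotent, which is the usual caveat with late acknowledgements.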
<code>
[start of README.rst]
1 =================================
2 celery - Distributed Task Queue
3 =================================
4
5 .. image:: http://cloud.github.com/downloads/celery/celery/celery_128.png
6
7 |build-status| |coverage-status|
8
9 :Version: 3.2.0a1 (Cipater)
10 :Web: http://celeryproject.org/
11 :Download: http://pypi.python.org/pypi/celery/
12 :Source: http://github.com/celery/celery/
13 :Keywords: task queue, job queue, asynchronous, async, rabbitmq, amqp, redis,
14 python, webhooks, queue, distributed
15
16 --
17
18 What is a Task Queue?
19 =====================
20
21 Task queues are used as a mechanism to distribute work across threads or
22 machines.
23
24 A task queue's input is a unit of work, called a task, dedicated worker
25 processes then constantly monitor the queue for new work to perform.
26
27 Celery communicates via messages, usually using a broker
28 to mediate between clients and workers. To initiate a task a client puts a
29 message on the queue, the broker then delivers the message to a worker.
30
31 A Celery system can consist of multiple workers and brokers, giving way
32 to high availability and horizontal scaling.
33
34 Celery is a library written in Python, but the protocol can be implemented in
35 any language. So far there's RCelery_ for the Ruby programming language, and a
36 `PHP client`_, but language interoperability can also be achieved
37 by `using webhooks`_.
38
39 .. _RCelery: https://github.com/leapfrogonline/rcelery
40 .. _`PHP client`: https://github.com/gjedeer/celery-php
41 .. _`using webhooks`:
42 http://docs.celeryproject.org/en/latest/userguide/remote-tasks.html
43
44 What do I need?
45 ===============
46
47 Celery version 3.0 runs on,
48
49 - Python (2.6, 2.7, 3.3, 3.4)
50 - PyPy (1.8, 1.9)
51 - Jython (2.5, 2.7).
52
53 This is the last version to support Python 2.5,
54 and from Celery 3.1, Python 2.6 or later is required.
55 The last version to support Python 2.4 was Celery series 2.2.
56
57 *Celery* is usually used with a message broker to send and receive messages.
58 The RabbitMQ, Redis transports are feature complete,
59 but there's also experimental support for a myriad of other solutions, including
60 using SQLite for local development.
61
62 *Celery* can run on a single machine, on multiple machines, or even
63 across datacenters.
64
65 Get Started
66 ===========
67
68 If this is the first time you're trying to use Celery, or you are
69 new to Celery 3.0 coming from previous versions then you should read our
70 getting started tutorials:
71
72 - `First steps with Celery`_
73
74 Tutorial teaching you the bare minimum needed to get started with Celery.
75
76 - `Next steps`_
77
78 A more complete overview, showing more features.
79
80 .. _`First steps with Celery`:
81 http://docs.celeryproject.org/en/latest/getting-started/first-steps-with-celery.html
82
83 .. _`Next steps`:
84 http://docs.celeryproject.org/en/latest/getting-started/next-steps.html
85
86 Celery is...
87 ============
88
89 - **Simple**
90
91 Celery is easy to use and maintain, and does *not need configuration files*.
92
93 It has an active, friendly community you can talk to for support,
94 including a `mailing-list`_ and an IRC channel.
95
96 Here's one of the simplest applications you can make::
97
98 from celery import Celery
99
100 app = Celery('hello', broker='amqp://guest@localhost//')
101
102 @app.task
103 def hello():
104 return 'hello world'
105
106 - **Highly Available**
107
108 Workers and clients will automatically retry in the event
109 of connection loss or failure, and some brokers support
110 HA in way of *Master/Master* or *Master/Slave* replication.
111
112 - **Fast**
113
114 A single Celery process can process millions of tasks a minute,
115 with sub-millisecond round-trip latency (using RabbitMQ,
116 py-librabbitmq, and optimized settings).
117
118 - **Flexible**
119
120 Almost every part of *Celery* can be extended or used on its own,
121 Custom pool implementations, serializers, compression schemes, logging,
122 schedulers, consumers, producers, autoscalers, broker transports and much more.
123
124 It supports...
125 ==============
126
127 - **Message Transports**
128
129 - RabbitMQ_, Redis_,
130 - MongoDB_ (experimental), Amazon SQS (experimental),
131 - CouchDB_ (experimental), SQLAlchemy_ (experimental),
132 - Django ORM (experimental), `IronMQ`_
133 - and more...
134
135 - **Concurrency**
136
137 - Prefork, Eventlet_, gevent_, threads/single threaded
138
139 - **Result Stores**
140
141 - AMQP, Redis
142 - memcached, MongoDB
143 - SQLAlchemy, Django ORM
144 - Apache Cassandra, IronCache
145
146 - **Serialization**
147
148 - *pickle*, *json*, *yaml*, *msgpack*.
149 - *zlib*, *bzip2* compression.
150 - Cryptographic message signing.
151
152 .. _`Eventlet`: http://eventlet.net/
153 .. _`gevent`: http://gevent.org/
154
155 .. _RabbitMQ: http://rabbitmq.com
156 .. _Redis: http://redis.io
157 .. _MongoDB: http://mongodb.org
158 .. _Beanstalk: http://kr.github.com/beanstalkd
159 .. _CouchDB: http://couchdb.apache.org
160 .. _SQLAlchemy: http://sqlalchemy.org
161 .. _`IronMQ`: http://iron.io
162
163 Framework Integration
164 =====================
165
166 Celery is easy to integrate with web frameworks, some of which even have
167 integration packages:
168
169 +--------------------+----------------------------------------------------+
170 | `Django`_ | not needed |
171 +--------------------+----------------------------------------------------+
172 | `Pyramid`_ | `pyramid_celery`_ |
173 +--------------------+----------------------------------------------------+
174 | `Pylons`_ | `celery-pylons`_ |
175 +--------------------+----------------------------------------------------+
176 | `Flask`_ | not needed |
177 +--------------------+----------------------------------------------------+
178 | `web2py`_ | `web2py-celery`_ |
179 +--------------------+----------------------------------------------------+
180 | `Tornado`_ | `tornado-celery`_ | `another tornado-celery`_ |
181 +--------------------+----------------------------------------------------+
182
183 The integration packages are not strictly necessary, but they can make
184 development easier, and sometimes they add important hooks like closing
185 database connections at ``fork``.
186
187 .. _`Django`: http://djangoproject.com/
188 .. _`Pylons`: http://www.pylonsproject.org/
189 .. _`Flask`: http://flask.pocoo.org/
190 .. _`web2py`: http://web2py.com/
191 .. _`Bottle`: http://bottlepy.org/
192 .. _`Pyramid`: http://docs.pylonsproject.org/en/latest/docs/pyramid.html
193 .. _`pyramid_celery`: http://pypi.python.org/pypi/pyramid_celery/
194 .. _`django-celery`: http://pypi.python.org/pypi/django-celery
195 .. _`celery-pylons`: http://pypi.python.org/pypi/celery-pylons
196 .. _`web2py-celery`: http://code.google.com/p/web2py-celery/
197 .. _`Tornado`: http://www.tornadoweb.org/
198 .. _`tornado-celery`: http://github.com/mher/tornado-celery/
199 .. _`another tornado-celery`: https://github.com/mayflaver/tornado-celery
200
201 .. _celery-documentation:
202
203 Documentation
204 =============
205
206 The `latest documentation`_ with user guides, tutorials and API reference
207 is hosted at Read The Docs.
208
209 .. _`latest documentation`: http://docs.celeryproject.org/en/latest/
210
211 .. _celery-installation:
212
213 Installation
214 ============
215
216 You can install Celery either via the Python Package Index (PyPI)
217 or from source.
218
219 To install using `pip`,::
220
221 $ pip install -U Celery
222
223 To install using `easy_install`,::
224
225 $ easy_install -U Celery
226
227 .. _bundles:
228
229 Bundles
230 -------
231
232 Celery also defines a group of bundles that can be used
233 to install Celery and the dependencies for a given feature.
234
235 You can specify these in your requirements or on the ``pip`` command-line
236 by using brackets. Multiple bundles can be specified by separating them by
237 commas.
238 ::
239
240 $ pip install "celery[librabbitmq]"
241
242 $ pip install "celery[librabbitmq,redis,auth,msgpack]"
243
244 The following bundles are available:
245
246 Serializers
247 ~~~~~~~~~~~
248
249 :celery[auth]:
250 for using the auth serializer.
251
252 :celery[msgpack]:
253 for using the msgpack serializer.
254
255 :celery[yaml]:
256 for using the yaml serializer.
257
258 Concurrency
259 ~~~~~~~~~~~
260
261 :celery[eventlet]:
262 for using the eventlet pool.
263
264 :celery[gevent]:
265 for using the gevent pool.
266
267 :celery[threads]:
268 for using the thread pool.
269
270 Transports and Backends
271 ~~~~~~~~~~~~~~~~~~~~~~~
272
273 :celery[librabbitmq]:
274 for using the librabbitmq C library.
275
276 :celery[redis]:
277 for using Redis as a message transport or as a result backend.
278
279 :celery[mongodb]:
280 for using MongoDB as a message transport (*experimental*),
281 or as a result backend (*supported*).
282
283 :celery[sqs]:
284 for using Amazon SQS as a message transport (*experimental*).
285
286 :celery[memcache]:
287 for using memcached as a result backend.
288
289 :celery[cassandra]:
290 for using Apache Cassandra as a result backend.
291
292 :celery[couchdb]:
293 for using CouchDB as a message transport (*experimental*).
294
295 :celery[couchbase]:
296 for using CouchBase as a result backend.
297
298 :celery[beanstalk]:
299 for using Beanstalk as a message transport (*experimental*).
300
301 :celery[zookeeper]:
302 for using Zookeeper as a message transport.
303
304 :celery[zeromq]:
305 for using ZeroMQ as a message transport (*experimental*).
306
307 :celery[sqlalchemy]:
308 for using SQLAlchemy as a message transport (*experimental*),
309 or as a result backend (*supported*).
310
311 :celery[pyro]:
312 for using the Pyro4 message transport (*experimental*).
313
314 :celery[slmq]:
315 for using the SoftLayer Message Queue transport (*experimental*).
316
317 .. _celery-installing-from-source:
318
319 Downloading and installing from source
320 --------------------------------------
321
322 Download the latest version of Celery from
323 http://pypi.python.org/pypi/celery/
324
325 You can install it by doing the following,::
326
327 $ tar xvfz celery-0.0.0.tar.gz
328 $ cd celery-0.0.0
329 $ python setup.py build
330 # python setup.py install
331
332 The last command must be executed as a privileged user if
333 you are not currently using a virtualenv.
334
335 .. _celery-installing-from-git:
336
337 Using the development version
338 -----------------------------
339
340 With pip
341 ~~~~~~~~
342
343 The Celery development version also requires the development
344 versions of ``kombu``, ``amqp`` and ``billiard``.
345
346 You can install the latest snapshot of these using the following
347 pip commands::
348
349 $ pip install https://github.com/celery/celery/zipball/master#egg=celery
350 $ pip install https://github.com/celery/billiard/zipball/master#egg=billiard
351 $ pip install https://github.com/celery/py-amqp/zipball/master#egg=amqp
352 $ pip install https://github.com/celery/kombu/zipball/master#egg=kombu
353
354 With git
355 ~~~~~~~~
356
357 Please see the Contributing section.
358
359 .. _getting-help:
360
361 Getting Help
362 ============
363
364 .. _mailing-list:
365
366 Mailing list
367 ------------
368
369 For discussions about the usage, development, and future of celery,
370 please join the `celery-users`_ mailing list.
371
372 .. _`celery-users`: http://groups.google.com/group/celery-users/
373
374 .. _irc-channel:
375
376 IRC
377 ---
378
379 Come chat with us on IRC. The **#celery** channel is located at the `Freenode`_
380 network.
381
382 .. _`Freenode`: http://freenode.net
383
384 .. _bug-tracker:
385
386 Bug tracker
387 ===========
388
389 If you have any suggestions, bug reports or annoyances please report them
390 to our issue tracker at http://github.com/celery/celery/issues/
391
392 .. _wiki:
393
394 Wiki
395 ====
396
397 http://wiki.github.com/celery/celery/
398
399
400 .. _maintainers:
401
402 Maintainers
403 ===========
404
405 - `@ask`_ (primary maintainer)
406 - `@thedrow`_
407 - `@chrisgogreen`_
408 - `@PMickael`_
409 - `@malinoff`_
410 - And you? We really need more: https://github.com/celery/celery/issues/2534
411
412 .. _`@ask`: http://github.com/ask
413 .. _`@thedrow`: http://github.com/thedrow
414 .. _`@chrisgogreen`: http://github.com/chrisgogreen
415 .. _`@PMickael`: http://github.com/PMickael
416 .. _`@malinoff`: http://github.com/malinoff
417
418
419 .. _contributing-short:
420
421 Contributing
422 ============
423
424 Development of `celery` happens at Github: http://github.com/celery/celery
425
426 You are highly encouraged to participate in the development
427 of `celery`. If you don't like Github (for some reason) you're welcome
428 to send regular patches.
429
430 Be sure to also read the `Contributing to Celery`_ section in the
431 documentation.
432
433 .. _`Contributing to Celery`:
434 http://docs.celeryproject.org/en/master/contributing.html
435
436 .. _license:
437
438 License
439 =======
440
441 This software is licensed under the `New BSD License`. See the ``LICENSE``
442 file in the top distribution directory for the full license text.
443
444 .. # vim: syntax=rst expandtab tabstop=4 shiftwidth=4 shiftround
445
446
447 .. image:: https://d2weczhvl823v0.cloudfront.net/celery/celery/trend.png
448 :alt: Bitdeli badge
449 :target: https://bitdeli.com/free
450
451 .. |build-status| image:: https://travis-ci.org/celery/celery.svg?branch=master
452 :target: https://travis-ci.org/celery/celery
453 .. |coverage-status| image:: https://coveralls.io/repos/celery/celery/badge.svg
454 :target: https://coveralls.io/r/celery/celery
455
[end of README.rst]
[start of celery/app/defaults.py]
1 # -*- coding: utf-8 -*-
2 """
3 celery.app.defaults
4 ~~~~~~~~~~~~~~~~~~~
5
6 Configuration introspection and defaults.
7
8 """
9 from __future__ import absolute_import
10
11 import sys
12
13 from collections import deque, namedtuple
14 from datetime import timedelta
15
16 from celery.five import items
17 from celery.utils import strtobool
18 from celery.utils.functional import memoize
19
20 __all__ = ['Option', 'NAMESPACES', 'flatten', 'find']
21
22 is_jython = sys.platform.startswith('java')
23 is_pypy = hasattr(sys, 'pypy_version_info')
24
25 DEFAULT_POOL = 'prefork'
26 if is_jython:
27 DEFAULT_POOL = 'threads'
28 elif is_pypy:
29 if sys.pypy_version_info[0:3] < (1, 5, 0):
30 DEFAULT_POOL = 'solo'
31 else:
32 DEFAULT_POOL = 'prefork'
33
34 DEFAULT_ACCEPT_CONTENT = ['json', 'pickle', 'msgpack', 'yaml']
35 DEFAULT_PROCESS_LOG_FMT = """
36 [%(asctime)s: %(levelname)s/%(processName)s] %(message)s
37 """.strip()
38 DEFAULT_LOG_FMT = '[%(asctime)s: %(levelname)s] %(message)s'
39 DEFAULT_TASK_LOG_FMT = """[%(asctime)s: %(levelname)s/%(processName)s] \
40 %(task_name)s[%(task_id)s]: %(message)s"""
41
42 _BROKER_OLD = {'deprecate_by': '2.5', 'remove_by': '4.0',
43 'alt': 'BROKER_URL setting'}
44 _REDIS_OLD = {'deprecate_by': '2.5', 'remove_by': '4.0',
45 'alt': 'URL form of CELERY_RESULT_BACKEND'}
46
47 searchresult = namedtuple('searchresult', ('namespace', 'key', 'type'))
48
49
50 class Option(object):
51 alt = None
52 deprecate_by = None
53 remove_by = None
54 typemap = dict(string=str, int=int, float=float, any=lambda v: v,
55 bool=strtobool, dict=dict, tuple=tuple)
56
57 def __init__(self, default=None, *args, **kwargs):
58 self.default = default
59 self.type = kwargs.get('type') or 'string'
60 for attr, value in items(kwargs):
61 setattr(self, attr, value)
62
63 def to_python(self, value):
64 return self.typemap[self.type](value)
65
66 def __repr__(self):
67 return '<Option: type->{0} default->{1!r}>'.format(self.type,
68 self.default)
69
70 NAMESPACES = {
71 'BROKER': {
72 'URL': Option(None, type='string'),
73 'CONNECTION_TIMEOUT': Option(4, type='float'),
74 'CONNECTION_RETRY': Option(True, type='bool'),
75 'CONNECTION_MAX_RETRIES': Option(100, type='int'),
76 'FAILOVER_STRATEGY': Option(None, type='string'),
77 'HEARTBEAT': Option(None, type='int'),
78 'HEARTBEAT_CHECKRATE': Option(3.0, type='int'),
79 'LOGIN_METHOD': Option(None, type='string'),
80 'POOL_LIMIT': Option(10, type='int'),
81 'USE_SSL': Option(False, type='bool'),
82 'TRANSPORT': Option(type='string'),
83 'TRANSPORT_OPTIONS': Option({}, type='dict'),
84 'HOST': Option(type='string', **_BROKER_OLD),
85 'PORT': Option(type='int', **_BROKER_OLD),
86 'USER': Option(type='string', **_BROKER_OLD),
87 'PASSWORD': Option(type='string', **_BROKER_OLD),
88 'VHOST': Option(type='string', **_BROKER_OLD),
89 },
90 'CASSANDRA': {
91 'COLUMN_FAMILY': Option(type='string'),
92 'DETAILED_MODE': Option(False, type='bool'),
93 'KEYSPACE': Option(type='string'),
94 'READ_CONSISTENCY': Option(type='string'),
95 'SERVERS': Option(type='list'),
96 'WRITE_CONSISTENCY': Option(type='string'),
97 },
98 'CELERY': {
99 'ACCEPT_CONTENT': Option(DEFAULT_ACCEPT_CONTENT, type='list'),
100 'ACKS_LATE': Option(False, type='bool'),
101 'ALWAYS_EAGER': Option(False, type='bool'),
102 'ANNOTATIONS': Option(type='any'),
103 'BROADCAST_QUEUE': Option('celeryctl'),
104 'BROADCAST_EXCHANGE': Option('celeryctl'),
105 'BROADCAST_EXCHANGE_TYPE': Option('fanout'),
106 'CACHE_BACKEND': Option(),
107 'CACHE_BACKEND_OPTIONS': Option({}, type='dict'),
108 'CHORD_PROPAGATES': Option(True, type='bool'),
109 'COUCHBASE_BACKEND_SETTINGS': Option(None, type='dict'),
110 'CREATE_MISSING_QUEUES': Option(True, type='bool'),
111 'DEFAULT_RATE_LIMIT': Option(type='string'),
112 'DISABLE_RATE_LIMITS': Option(False, type='bool'),
113 'DEFAULT_ROUTING_KEY': Option('celery'),
114 'DEFAULT_QUEUE': Option('celery'),
115 'DEFAULT_EXCHANGE': Option('celery'),
116 'DEFAULT_EXCHANGE_TYPE': Option('direct'),
117 'DEFAULT_DELIVERY_MODE': Option(2, type='string'),
118 'EAGER_PROPAGATES_EXCEPTIONS': Option(False, type='bool'),
119 'ENABLE_UTC': Option(True, type='bool'),
120 'ENABLE_REMOTE_CONTROL': Option(True, type='bool'),
121 'EVENT_SERIALIZER': Option('json'),
122 'EVENT_QUEUE_EXPIRES': Option(60.0, type='float'),
123 'EVENT_QUEUE_TTL': Option(5.0, type='float'),
124 'IMPORTS': Option((), type='tuple'),
125 'INCLUDE': Option((), type='tuple'),
126 'IGNORE_RESULT': Option(False, type='bool'),
127 'MAX_CACHED_RESULTS': Option(100, type='int'),
128 'MESSAGE_COMPRESSION': Option(type='string'),
129 'MONGODB_BACKEND_SETTINGS': Option(type='dict'),
130 'REDIS_HOST': Option(type='string', **_REDIS_OLD),
131 'REDIS_PORT': Option(type='int', **_REDIS_OLD),
132 'REDIS_DB': Option(type='int', **_REDIS_OLD),
133 'REDIS_PASSWORD': Option(type='string', **_REDIS_OLD),
134 'REDIS_MAX_CONNECTIONS': Option(type='int'),
135 'RESULT_BACKEND': Option(type='string'),
136 'RESULT_DB_SHORT_LIVED_SESSIONS': Option(False, type='bool'),
137 'RESULT_DB_TABLENAMES': Option(type='dict'),
138 'RESULT_DBURI': Option(),
139 'RESULT_ENGINE_OPTIONS': Option(type='dict'),
140 'RESULT_EXCHANGE': Option('celeryresults'),
141 'RESULT_EXCHANGE_TYPE': Option('direct'),
142 'RESULT_SERIALIZER': Option('json'),
143 'RESULT_PERSISTENT': Option(None, type='bool'),
144 'RIAK_BACKEND_SETTINGS': Option(type='dict'),
145 'ROUTES': Option(type='any'),
146 'SEND_EVENTS': Option(False, type='bool'),
147 'SEND_TASK_ERROR_EMAILS': Option(False, type='bool'),
148 'SEND_TASK_SENT_EVENT': Option(False, type='bool'),
149 'STORE_ERRORS_EVEN_IF_IGNORED': Option(False, type='bool'),
150 'TASK_PROTOCOL': Option(1, type='int'),
151 'TASK_PUBLISH_RETRY': Option(True, type='bool'),
152 'TASK_PUBLISH_RETRY_POLICY': Option({
153 'max_retries': 3,
154 'interval_start': 0,
155 'interval_max': 1,
156 'interval_step': 0.2}, type='dict'),
157 'TASK_RESULT_EXPIRES': Option(timedelta(days=1), type='float'),
158 'TASK_SERIALIZER': Option('json'),
159 'TIMEZONE': Option(type='string'),
160 'TRACK_STARTED': Option(False, type='bool'),
161 'REDIRECT_STDOUTS': Option(True, type='bool'),
162 'REDIRECT_STDOUTS_LEVEL': Option('WARNING'),
163 'QUEUES': Option(type='dict'),
164 'QUEUE_HA_POLICY': Option(None, type='string'),
165 'SECURITY_KEY': Option(type='string'),
166 'SECURITY_CERTIFICATE': Option(type='string'),
167 'SECURITY_CERT_STORE': Option(type='string'),
168 'WORKER_DIRECT': Option(False, type='bool'),
169 },
170 'CELERYD': {
171 'AGENT': Option(None, type='string'),
172 'AUTOSCALER': Option('celery.worker.autoscale:Autoscaler'),
173 'AUTORELOADER': Option('celery.worker.autoreload:Autoreloader'),
174 'CONCURRENCY': Option(0, type='int'),
175 'TIMER': Option(type='string'),
176 'TIMER_PRECISION': Option(1.0, type='float'),
177 'FORCE_EXECV': Option(False, type='bool'),
178 'HIJACK_ROOT_LOGGER': Option(True, type='bool'),
179 'CONSUMER': Option('celery.worker.consumer:Consumer', type='string'),
180 'LOG_FORMAT': Option(DEFAULT_PROCESS_LOG_FMT),
181 'LOG_COLOR': Option(type='bool'),
182 'LOG_LEVEL': Option('WARN', deprecate_by='2.4', remove_by='4.0',
183 alt='--loglevel argument'),
184 'LOG_FILE': Option(deprecate_by='2.4', remove_by='4.0',
185 alt='--logfile argument'),
186 'MAX_TASKS_PER_CHILD': Option(type='int'),
187 'POOL': Option(DEFAULT_POOL),
188 'POOL_PUTLOCKS': Option(True, type='bool'),
189 'POOL_RESTARTS': Option(False, type='bool'),
190 'PREFETCH_MULTIPLIER': Option(4, type='int'),
191 'STATE_DB': Option(),
192 'TASK_LOG_FORMAT': Option(DEFAULT_TASK_LOG_FMT),
193 'TASK_SOFT_TIME_LIMIT': Option(type='float'),
194 'TASK_TIME_LIMIT': Option(type='float'),
195 'WORKER_LOST_WAIT': Option(10.0, type='float')
196 },
197 'CELERYBEAT': {
198 'SCHEDULE': Option({}, type='dict'),
199 'SCHEDULER': Option('celery.beat:PersistentScheduler'),
200 'SCHEDULE_FILENAME': Option('celerybeat-schedule'),
201 'SYNC_EVERY': Option(0, type='int'),
202 'MAX_LOOP_INTERVAL': Option(0, type='float'),
203 'LOG_LEVEL': Option('INFO', deprecate_by='2.4', remove_by='4.0',
204 alt='--loglevel argument'),
205 'LOG_FILE': Option(deprecate_by='2.4', remove_by='4.0',
206 alt='--logfile argument'),
207 },
208 'CELERYMON': {
209 'LOG_LEVEL': Option('INFO', deprecate_by='2.4', remove_by='4.0',
210 alt='--loglevel argument'),
211 'LOG_FILE': Option(deprecate_by='2.4', remove_by='4.0',
212 alt='--logfile argument'),
213 'LOG_FORMAT': Option(DEFAULT_LOG_FMT),
214 },
215 'EMAIL': {
216 'HOST': Option('localhost'),
217 'PORT': Option(25, type='int'),
218 'HOST_USER': Option(),
219 'HOST_PASSWORD': Option(),
220 'TIMEOUT': Option(2, type='float'),
221 'USE_SSL': Option(False, type='bool'),
222 'USE_TLS': Option(False, type='bool'),
223 'CHARSET': Option('us-ascii'),
224 },
225 'SERVER_EMAIL': Option('celery@localhost'),
226 'ADMINS': Option((), type='tuple'),
227 }
228
229
230 def flatten(d, ns=''):
231 stack = deque([(ns, d)])
232 while stack:
233 name, space = stack.popleft()
234 for key, value in items(space):
235 if isinstance(value, dict):
236 stack.append((name + key + '_', value))
237 else:
238 yield name + key, value
239 DEFAULTS = {key: value.default for key, value in flatten(NAMESPACES)}
240
241
242 def find_deprecated_settings(source):
243 from celery.utils import warn_deprecated
244 for name, opt in flatten(NAMESPACES):
245 if (opt.deprecate_by or opt.remove_by) and getattr(source, name, None):
246 warn_deprecated(description='The {0!r} setting'.format(name),
247 deprecation=opt.deprecate_by,
248 removal=opt.remove_by,
249 alternative='Use the {0.alt} instead'.format(opt))
250 return source
251
252
253 @memoize(maxsize=None)
254 def find(name, namespace='celery'):
255 # - Try specified namespace first.
256 namespace = namespace.upper()
257 try:
258 return searchresult(
259 namespace, name.upper(), NAMESPACES[namespace][name.upper()],
260 )
261 except KeyError:
262 # - Try all the other namespaces.
263 for ns, keys in items(NAMESPACES):
264 if ns.upper() == name.upper():
265 return searchresult(None, ns, keys)
266 elif isinstance(keys, dict):
267 try:
268 return searchresult(ns, name.upper(), keys[name.upper()])
269 except KeyError:
270 pass
271 # - See if name is a qualname last.
272 return searchresult(None, name.upper(), DEFAULTS[name.upper()])
273
[end of celery/app/defaults.py]
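The `flatten()` generator above is what turns the nested `NAMESPACES` table into the flat `CELERYD_*`-style keys stored in `DEFAULTS`, and `find()` then resolves setting names against that table. A minimal, self-contained sketch of the same breadth-first walk, using a toy namespace dict with plain default values in place of `Option` instances:

```python
from collections import deque

# Toy stand-in for the real NAMESPACES table; the actual mapping holds
# Option instances rather than bare defaults.
NAMESPACES = {
    'CELERYD': {'POOL_RESTARTS': False, 'PREFETCH_MULTIPLIER': 4},
    'EMAIL': {'PORT': 25},
}


def flatten(d, ns=''):
    # Breadth-first walk that joins nested keys with '_', mirroring how
    # celery.app.defaults.flatten() builds flat setting names.
    stack = deque([(ns, d)])
    while stack:
        name, space = stack.popleft()
        for key, value in space.items():
            if isinstance(value, dict):
                stack.append((name + key + '_', value))
            else:
                yield name + key, value


DEFAULTS = dict(flatten(NAMESPACES))
# DEFAULTS now maps e.g. 'CELERYD_PREFETCH_MULTIPLIER' -> 4
```

The same `flatten()` output also drives `find_deprecated_settings()`, which is why deprecation warnings can name the flat setting directly.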
[start of celery/app/task.py]
1 # -*- coding: utf-8 -*-
2 """
3 celery.app.task
4 ~~~~~~~~~~~~~~~
5
6 Task Implementation: Task request context, and the base task class.
7
8 """
9 from __future__ import absolute_import
10
11 import sys
12
13 from billiard.einfo import ExceptionInfo
14
15 from celery import current_app, group
16 from celery import states
17 from celery._state import _task_stack
18 from celery.canvas import signature
19 from celery.exceptions import Ignore, MaxRetriesExceededError, Reject, Retry
20 from celery.five import class_property, items
21 from celery.result import EagerResult
22 from celery.utils import abstract
23 from celery.utils import uuid, maybe_reraise
24 from celery.utils.functional import mattrgetter, maybe_list
25 from celery.utils.imports import instantiate
26 from celery.utils.mail import ErrorMail
27
28 from .annotations import resolve_all as resolve_all_annotations
29 from .registry import _unpickle_task_v2
30 from .utils import appstr
31
32 __all__ = ['Context', 'Task']
33
34 #: extracts attributes related to publishing a message from an object.
35 extract_exec_options = mattrgetter(
36 'queue', 'routing_key', 'exchange', 'priority', 'expires',
37 'serializer', 'delivery_mode', 'compression', 'time_limit',
38 'soft_time_limit', 'immediate', 'mandatory', # imm+man is deprecated
39 )
40
41 # We take __repr__ very seriously around here ;)
42 R_BOUND_TASK = '<class {0.__name__} of {app}{flags}>'
43 R_UNBOUND_TASK = '<unbound {0.__name__}{flags}>'
44 R_SELF_TASK = '<@task {0.name} bound to other {0.__self__}>'
45 R_INSTANCE = '<@task: {0.name} of {app}{flags}>'
46
47 #: Here for backwards compatibility as tasks no longer use a custom metaclass.
48 TaskType = type
49
50
51 def _strflags(flags, default=''):
52 if flags:
53 return ' ({0})'.format(', '.join(flags))
54 return default
55
56
57 def _reprtask(task, fmt=None, flags=None):
58 flags = list(flags) if flags is not None else []
59 flags.append('v2 compatible') if task.__v2_compat__ else None
60 if not fmt:
61 fmt = R_BOUND_TASK if task._app else R_UNBOUND_TASK
62 return fmt.format(
63 task, flags=_strflags(flags),
64 app=appstr(task._app) if task._app else None,
65 )
66
67
68 class Context(object):
69 # Default context
70 logfile = None
71 loglevel = None
72 hostname = None
73 id = None
74 args = None
75 kwargs = None
76 retries = 0
77 eta = None
78 expires = None
79 is_eager = False
80 headers = None
81 delivery_info = None
82 reply_to = None
83 root_id = None
84 parent_id = None
85 correlation_id = None
86 taskset = None # compat alias to group
87 group = None
88 chord = None
89 utc = None
90 called_directly = True
91 callbacks = None
92 errbacks = None
93 timelimit = None
94 _children = None # see property
95 _protected = 0
96
97 def __init__(self, *args, **kwargs):
98 self.update(*args, **kwargs)
99
100 def update(self, *args, **kwargs):
101 return self.__dict__.update(*args, **kwargs)
102
103 def clear(self):
104 return self.__dict__.clear()
105
106 def get(self, key, default=None):
107 return getattr(self, key, default)
108
109 def __repr__(self):
110 return '<Context: {0!r}>'.format(vars(self))
111
112 @property
113 def children(self):
114 # children must be an empty list for every thread
115 if self._children is None:
116 self._children = []
117 return self._children
118
119
120 class Task(object):
121 """Task base class.
122
123 When called tasks apply the :meth:`run` method. This method must
124 be defined by all tasks (that is unless the :meth:`__call__` method
125 is overridden).
126
127 """
128 __trace__ = None
129 __v2_compat__ = False # set by old base in celery.task.base
130
131 ErrorMail = ErrorMail
132 MaxRetriesExceededError = MaxRetriesExceededError
133
134 #: Execution strategy used, or the qualified name of one.
135 Strategy = 'celery.worker.strategy:default'
136
137 #: This is the instance bound to if the task is a method of a class.
138 __self__ = None
139
140 #: The application instance associated with this task class.
141 _app = None
142
143 #: Name of the task.
144 name = None
145
146 #: If :const:`True` the task is an abstract base class.
147 abstract = True
148
149 #: Maximum number of retries before giving up. If set to :const:`None`,
150 #: it will **never** stop retrying.
151 max_retries = 3
152
153 #: Default time in seconds before a retry of the task should be
154 #: executed. 3 minutes by default.
155 default_retry_delay = 3 * 60
156
157 #: Rate limit for this task type. Examples: :const:`None` (no rate
158 #: limit), `'100/s'` (hundred tasks a second), `'100/m'` (hundred tasks
159 #: a minute), `'100/h'` (hundred tasks an hour)
160 rate_limit = None
161
162 #: If enabled the worker will not store task state and return values
163 #: for this task. Defaults to the :setting:`CELERY_IGNORE_RESULT`
164 #: setting.
165 ignore_result = None
166
167 #: If enabled the request will keep track of subtasks started by
168 #: this task, and this information will be sent with the result
169 #: (``result.children``).
170 trail = True
171
172 #: When enabled errors will be stored even if the task is otherwise
173 #: configured to ignore results.
174 store_errors_even_if_ignored = None
175
176 #: If enabled an email will be sent to :setting:`ADMINS` whenever a task
177 #: of this type fails.
178 send_error_emails = None
179
180 #: The name of a serializer that is registered with
181 #: :mod:`kombu.serialization.registry`. Default is `'pickle'`.
182 serializer = None
183
184 #: Hard time limit.
185 #: Defaults to the :setting:`CELERYD_TASK_TIME_LIMIT` setting.
186 time_limit = None
187
188 #: Soft time limit.
189 #: Defaults to the :setting:`CELERYD_TASK_SOFT_TIME_LIMIT` setting.
190 soft_time_limit = None
191
192 #: The result store backend used for this task.
193 backend = None
194
195 #: If disabled this task won't be registered automatically.
196 autoregister = True
197
198 #: If enabled the task will report its status as 'started' when the task
199 #: is executed by a worker. Disabled by default as the normal behaviour
200 #: is to not report that level of granularity. Tasks are either pending,
201 #: finished, or waiting to be retried.
202 #:
203 #: Having a 'started' status can be useful for when there are long
204 #: running tasks and there is a need to report which task is currently
205 #: running.
206 #:
207 #: The application default can be overridden using the
208 #: :setting:`CELERY_TRACK_STARTED` setting.
209 track_started = None
210
211 #: When enabled messages for this task will be acknowledged **after**
212 #: the task has been executed, and not *just before* which is the
213 #: default behavior.
214 #:
215 #: Please note that this means the task may be executed twice if the
216 #: worker crashes mid execution (which may be acceptable for some
217 #: applications).
218 #:
219 #: The application default can be overridden with the
220 #: :setting:`CELERY_ACKS_LATE` setting.
221 acks_late = None
222
223 #: Tuple of expected exceptions.
224 #:
225 #: These are errors that are expected in normal operation
226 #: and that should not be regarded as a real error by the worker.
227 #: Currently this means that the state will be updated to an error
228 #: state, but the worker will not log the event as an error.
229 throws = ()
230
231 #: Default task expiry time.
232 expires = None
233
234 #: Task request stack, the current request will be the topmost.
235 request_stack = None
236
237 #: Some may expect a request to exist even if the task has not been
238 #: called. This should probably be deprecated.
239 _default_request = None
240
241 _exec_options = None
242
243 __bound__ = False
244
245 from_config = (
246 ('send_error_emails', 'CELERY_SEND_TASK_ERROR_EMAILS'),
247 ('serializer', 'CELERY_TASK_SERIALIZER'),
248 ('rate_limit', 'CELERY_DEFAULT_RATE_LIMIT'),
249 ('track_started', 'CELERY_TRACK_STARTED'),
250 ('acks_late', 'CELERY_ACKS_LATE'),
251 ('ignore_result', 'CELERY_IGNORE_RESULT'),
252 ('store_errors_even_if_ignored',
253 'CELERY_STORE_ERRORS_EVEN_IF_IGNORED'),
254 )
255
256 #: ignored
257 accept_magic_kwargs = False
258
259 _backend = None # set by backend property.
260
263 # - Tasks are lazily bound, so that configuration is not set
264 # - until the task is actually used
265
266 @classmethod
267 def bind(self, app):
268 was_bound, self.__bound__ = self.__bound__, True
269 self._app = app
270 conf = app.conf
271 self._exec_options = None # clear option cache
272
273 for attr_name, config_name in self.from_config:
274 if getattr(self, attr_name, None) is None:
275 setattr(self, attr_name, conf[config_name])
276
277 # decorate with annotations from config.
278 if not was_bound:
279 self.annotate()
280
281 from celery.utils.threads import LocalStack
282 self.request_stack = LocalStack()
283
284 # PeriodicTask uses this to add itself to the PeriodicTask schedule.
285 self.on_bound(app)
286
287 return app
288
289 @classmethod
290 def on_bound(self, app):
291 """This method can be defined to do additional actions when the
292 task class is bound to an app."""
293 pass
294
295 @classmethod
296 def _get_app(self):
297 if self._app is None:
298 self._app = current_app
299 if not self.__bound__:
300 # The app property's __set__ method is not called
301 # if Task.app is set (on the class), so must bind on use.
302 self.bind(self._app)
303 return self._app
304 app = class_property(_get_app, bind)
305
306 @classmethod
307 def annotate(self):
308 for d in resolve_all_annotations(self.app.annotations, self):
309 for key, value in items(d):
310 if key.startswith('@'):
311 self.add_around(key[1:], value)
312 else:
313 setattr(self, key, value)
314
315 @classmethod
316 def add_around(self, attr, around):
317 orig = getattr(self, attr)
318 if getattr(orig, '__wrapped__', None):
319 orig = orig.__wrapped__
320 meth = around(orig)
321 meth.__wrapped__ = orig
322 setattr(self, attr, meth)
323
324 def __call__(self, *args, **kwargs):
325 _task_stack.push(self)
326 self.push_request(args=args, kwargs=kwargs)
327 try:
328 # add self if this is a bound task
329 if self.__self__ is not None:
330 return self.run(self.__self__, *args, **kwargs)
331 return self.run(*args, **kwargs)
332 finally:
333 self.pop_request()
334 _task_stack.pop()
335
336 def __reduce__(self):
337 # - tasks are pickled into the name of the task only, and the receiver
338 # - simply grabs it from the local registry.
339 # - in later versions the module of the task is also included,
340 # - and the receiving side tries to import that module so that
341 # - it will work even if the task has not been registered.
342 mod = type(self).__module__
343 mod = mod if mod and mod in sys.modules else None
344 return (_unpickle_task_v2, (self.name, mod), None)
345
346 def run(self, *args, **kwargs):
347 """The body of the task executed by workers."""
348 raise NotImplementedError('Tasks must define the run method.')
349
350 def start_strategy(self, app, consumer, **kwargs):
351 return instantiate(self.Strategy, self, app, consumer, **kwargs)
352
353 def delay(self, *args, **kwargs):
354 """Star argument version of :meth:`apply_async`.
355
356 Does not support the extra options enabled by :meth:`apply_async`.
357
358 :param \*args: positional arguments passed on to the task.
359 :param \*\*kwargs: keyword arguments passed on to the task.
360
361 :returns: :class:`celery.result.AsyncResult`
362
363 """
364 return self.apply_async(args, kwargs)
365
366 def apply_async(self, args=None, kwargs=None, task_id=None, producer=None,
367 link=None, link_error=None, shadow=None, **options):
368 """Apply tasks asynchronously by sending a message.
369
370 :keyword args: The positional arguments to pass on to the
371 task (a :class:`list` or :class:`tuple`).
372
373 :keyword kwargs: The keyword arguments to pass on to the
374 task (a :class:`dict`)
375
376 :keyword countdown: Number of seconds into the future that the
377 task should execute. Defaults to immediate
378 execution.
379
380 :keyword eta: A :class:`~datetime.datetime` object describing
381 the absolute time and date of when the task should
382 be executed. May not be specified if `countdown`
383 is also supplied.
384
385 :keyword expires: Either a :class:`int`, describing the number of
386 seconds, or a :class:`~datetime.datetime` object
387 that describes the absolute time and date of when
388 the task should expire. The task will not be
389 executed after the expiration time.
390
391 :keyword shadow: Override task name used in logs/monitoring
392 (default from :meth:`shadow_name`).
393
394 :keyword connection: Re-use existing broker connection instead
395 of establishing a new one.
396
397 :keyword retry: If enabled sending of the task message will be retried
398 in the event of connection loss or failure. Default
399 is taken from the :setting:`CELERY_TASK_PUBLISH_RETRY`
400 setting. Note that you need to handle the
401 producer/connection manually for this to work.
402
403 :keyword retry_policy: Override the retry policy used. See the
404 :setting:`CELERY_TASK_PUBLISH_RETRY_POLICY`
405 setting.
406
407 :keyword routing_key: Custom routing key used to route the task to a
408 worker server. If in combination with a
409 ``queue`` argument only used to specify custom
410 routing keys to topic exchanges.
411
412 :keyword queue: The queue to route the task to. This must be a key
413 present in :setting:`CELERY_QUEUES`, or
414 :setting:`CELERY_CREATE_MISSING_QUEUES` must be
415 enabled. See :ref:`guide-routing` for more
416 information.
417
418 :keyword exchange: Named custom exchange to send the task to.
419 Usually not used in combination with the ``queue``
420 argument.
421
422 :keyword priority: The task priority, a number between 0 and 9.
423 Defaults to the :attr:`priority` attribute.
424
425 :keyword serializer: A string identifying the default
426 serialization method to use. Can be `pickle`,
427 `json`, `yaml`, `msgpack` or any custom
428 serialization method that has been registered
429 with :mod:`kombu.serialization.registry`.
430 Defaults to the :attr:`serializer` attribute.
431
432 :keyword compression: A string identifying the compression method
433 to use. Can be one of ``zlib``, ``bzip2``,
434 or any custom compression methods registered with
435 :func:`kombu.compression.register`. Defaults to
436 the :setting:`CELERY_MESSAGE_COMPRESSION`
437 setting.
438 :keyword link: A single, or a list of tasks to apply if the
439 task exits successfully.
440 :keyword link_error: A single, or a list of tasks to apply
441 if an error occurs while executing the task.
442
443 :keyword producer: :class:`kombu.Producer` instance to use.
444
445 :keyword add_to_parent: If set to True (default) and the task
446 is applied while executing another task, then the result
447 will be appended to the parent task's ``request.children``
448 attribute. Trailing can also be disabled by default using the
449 :attr:`trail` attribute
450
451 :keyword publisher: Deprecated alias to ``producer``.
452
453 :keyword headers: Message headers to be sent in the
454 task (a :class:`dict`)
455
456 :returns: :class:`celery.result.AsyncResult` if
457 :setting:`CELERY_ALWAYS_EAGER` is not set, otherwise
458 :class:`celery.result.EagerResult`.
459
460 Also supports all keyword arguments supported by
461 :meth:`kombu.Producer.publish`.
462
463 .. note::
464 If the :setting:`CELERY_ALWAYS_EAGER` setting is set, it will
465 be replaced by a local :func:`apply` call instead.
466
467 """
468 try:
469 check_arguments = self.__header__
470 except AttributeError:
471 pass
472 else:
473 check_arguments(*(args or ()), **(kwargs or {}))
474
475 app = self._get_app()
476 if app.conf.CELERY_ALWAYS_EAGER:
477 return self.apply(args, kwargs, task_id=task_id or uuid(),
478 link=link, link_error=link_error, **options)
479 # add 'self' if this is a "task_method".
480 if self.__self__ is not None:
481 args = args if isinstance(args, tuple) else tuple(args or ())
482 args = (self.__self__,) + args
483 shadow = shadow or self.shadow_name(args, kwargs, options)
484
485 preopts = self._get_exec_options()
486 options = dict(preopts, **options) if options else preopts
487 return app.send_task(
488 self.name, args, kwargs, task_id=task_id, producer=producer,
489 link=link, link_error=link_error, result_cls=self.AsyncResult,
490 shadow=shadow,
491 **options
492 )
493
494 def shadow_name(self, args, kwargs, options):
495 """Override for custom task name in worker logs/monitoring.
496
497 :param args: Task positional arguments.
498 :param kwargs: Task keyword arguments.
499 :param options: Task execution options.
500
501 **Example**:
502
503 .. code-block:: python
504
505 from celery.utils.imports import qualname
506
507 def shadow_name(task, args, kwargs, options):
508 return qualname(args[0])
509
510 @app.task(shadow_name=shadow_name, serializer='pickle')
511 def apply_function_async(fun, *args, **kwargs):
512 return fun(*args, **kwargs)
513
514 """
515 pass
516
517 def signature_from_request(self, request=None, args=None, kwargs=None,
518 queue=None, **extra_options):
519 request = self.request if request is None else request
520 args = request.args if args is None else args
521 kwargs = request.kwargs if kwargs is None else kwargs
522 limit_hard, limit_soft = request.timelimit or (None, None)
523 options = {
524 'task_id': request.id,
525 'link': request.callbacks,
526 'link_error': request.errbacks,
527 'group_id': request.group,
528 'chord': request.chord,
529 'soft_time_limit': limit_soft,
530 'time_limit': limit_hard,
531 'reply_to': request.reply_to,
532 'headers': request.headers,
533 }
534 options.update(
535 {'queue': queue} if queue else (request.delivery_info or {}),
536 )
537 return self.signature(
538 args, kwargs, options, type=self, **extra_options
539 )
540 subtask_from_request = signature_from_request
541
542 def retry(self, args=None, kwargs=None, exc=None, throw=True,
543 eta=None, countdown=None, max_retries=None, **options):
544 """Retry the task.
545
546 :param args: Positional arguments to retry with.
547 :param kwargs: Keyword arguments to retry with.
548 :keyword exc: Custom exception to report when the max retry
549 limit has been exceeded (default:
550 :exc:`~@MaxRetriesExceededError`).
551
552 If this argument is set and retry is called while
553 an exception was raised (``sys.exc_info()`` is set)
554 it will attempt to reraise the current exception.
555
556 If no exception was raised it will raise the ``exc``
557 argument provided.
558 :keyword countdown: Time in seconds to delay the retry for.
559 :keyword eta: Explicit time and date to run the retry at
560 (must be a :class:`~datetime.datetime` instance).
561 :keyword max_retries: If set, overrides the default retry limit.
562 A value of :const:`None` means "use the default", so if you want
563 infinite retries you would have to set the :attr:`max_retries`
564 attribute of the task to :const:`None` first.
565 :keyword time_limit: If set, overrides the default time limit.
566 :keyword soft_time_limit: If set, overrides the default soft
567 time limit.
568 :keyword \*\*options: Any extra options to pass on to
569 :meth:`apply_async`.
570 :keyword throw: If this is :const:`False`, do not raise the
571 :exc:`~@Retry` exception,
572 that tells the worker to mark the task as being
573 retried. Note that this means the task will be
574 marked as failed if the task raises an exception,
575 or successful if it returns.
576
577 :raises celery.exceptions.Retry: To tell the worker that
578 the task has been re-sent for retry. This always happens,
579 unless the `throw` keyword argument has been explicitly set
580 to :const:`False`, and is considered normal operation.
581
582 **Example**
583
584 .. code-block:: pycon
585
586 >>> from imaginary_twitter_lib import Twitter
587 >>> from proj.celery import app
588
589 >>> @app.task(bind=True)
590 ... def tweet(self, auth, message):
591 ... twitter = Twitter(oauth=auth)
592 ... try:
593 ... twitter.post_status_update(message)
594 ... except twitter.FailWhale as exc:
595 ... # Retry in 5 minutes.
596 ... raise self.retry(countdown=60 * 5, exc=exc)
597
598 Although the task will never return above as `retry` raises an
599 exception to notify the worker, we use `raise` in front of the retry
600 to convey that the rest of the block will not be executed.
601
602 """
603 request = self.request
604 retries = request.retries + 1
605 max_retries = self.max_retries if max_retries is None else max_retries
606
607 # Not in worker or emulated by (apply/always_eager),
608 # so just raise the original exception.
609 if request.called_directly:
610 maybe_reraise() # raise orig stack if PyErr_Occurred
611 raise exc or Retry('Task can be retried', None)
612
613 if not eta and countdown is None:
614 countdown = self.default_retry_delay
615
616 is_eager = request.is_eager
617 S = self.signature_from_request(
618 request, args, kwargs,
619 countdown=countdown, eta=eta, retries=retries,
620 **options
621 )
622
623 if max_retries is not None and retries > max_retries:
624 if exc:
625 # first try to reraise the original exception
626 maybe_reraise()
627 # or if not in an except block then raise the custom exc.
628 raise exc
629 raise self.MaxRetriesExceededError(
630 "Can't retry {0}[{1}] args:{2} kwargs:{3}".format(
631 self.name, request.id, S.args, S.kwargs))
632
633 ret = Retry(exc=exc, when=eta or countdown)
634
635 if is_eager:
636 # if task was executed eagerly using apply(),
637 # then the retry must also be executed eagerly.
638 S.apply().get()
639 if throw:
640 raise ret
641 return ret
642
643 try:
644 S.apply_async()
645 except Exception as exc:
646 raise Reject(exc, requeue=False)
647 if throw:
648 raise ret
649 return ret
650
651 def apply(self, args=None, kwargs=None,
652 link=None, link_error=None, **options):
653 """Execute this task locally, by blocking until the task returns.
654
655 :param args: positional arguments passed on to the task.
656 :param kwargs: keyword arguments passed on to the task.
657 :keyword throw: Re-raise task exceptions. Defaults to
658 the :setting:`CELERY_EAGER_PROPAGATES_EXCEPTIONS`
659 setting.
660
661 :returns: :class:`celery.result.EagerResult`
662
663 """
664 # trace imports Task, so need to import inline.
665 from celery.app.trace import build_tracer
666
667 app = self._get_app()
668 args = args or ()
669 # add 'self' if this is a bound method.
670 if self.__self__ is not None:
671 args = (self.__self__,) + tuple(args)
672 kwargs = kwargs or {}
673 task_id = options.get('task_id') or uuid()
674 retries = options.get('retries', 0)
675 throw = app.either('CELERY_EAGER_PROPAGATES_EXCEPTIONS',
676 options.pop('throw', None))
677
678 # Make sure we get the task instance, not class.
679 task = app._tasks[self.name]
680
681 request = {'id': task_id,
682 'retries': retries,
683 'is_eager': True,
684 'logfile': options.get('logfile'),
685 'loglevel': options.get('loglevel', 0),
686 'callbacks': maybe_list(link),
687 'errbacks': maybe_list(link_error),
688 'headers': options.get('headers'),
689 'delivery_info': {'is_eager': True}}
690 tb = None
691 tracer = build_tracer(
692 task.name, task, eager=True,
693 propagate=throw, app=self._get_app(),
694 )
695 ret = tracer(task_id, args, kwargs, request)
696 retval = ret.retval
697 if isinstance(retval, ExceptionInfo):
698 retval, tb = retval.exception, retval.traceback
699 state = states.SUCCESS if ret.info is None else ret.info.state
700 return EagerResult(task_id, retval, state, traceback=tb)
701
702 def AsyncResult(self, task_id, **kwargs):
703 """Get AsyncResult instance for this kind of task.
704
705 :param task_id: Task id to get result for.
706
707 """
708 return self._get_app().AsyncResult(task_id, backend=self.backend,
709 task_name=self.name, **kwargs)
710
711 def signature(self, args=None, *starargs, **starkwargs):
712 """Return :class:`~celery.signature` object for
713 this task, wrapping arguments and execution options
714 for a single task invocation."""
715 starkwargs.setdefault('app', self.app)
716 return signature(self, args, *starargs, **starkwargs)
717 subtask = signature
718
719 def s(self, *args, **kwargs):
720 """``.s(*a, **k) -> .signature(a, k)``"""
721 return self.signature(args, kwargs)
722
723 def si(self, *args, **kwargs):
724 """``.si(*a, **k) -> .signature(a, k, immutable=True)``"""
725 return self.signature(args, kwargs, immutable=True)
726
727 def chunks(self, it, n):
728 """Creates a :class:`~celery.canvas.chunks` task for this task."""
729 from celery import chunks
730 return chunks(self.s(), it, n, app=self.app)
731
732 def map(self, it):
733 """Creates a :class:`~celery.canvas.xmap` task from ``it``."""
734 from celery import xmap
735 return xmap(self.s(), it, app=self.app)
736
737 def starmap(self, it):
738 """Creates a :class:`~celery.canvas.xstarmap` task from ``it``."""
739 from celery import xstarmap
740 return xstarmap(self.s(), it, app=self.app)
741
742 def send_event(self, type_, **fields):
743 req = self.request
744 with self.app.events.default_dispatcher(hostname=req.hostname) as d:
745 return d.send(type_, uuid=req.id, **fields)
746
747 def replace(self, sig):
748 """Replace the current task, with a new task inheriting the
749 same task id.
750
751 :param sig: :class:`@signature`
752
753 Note: This will raise :exc:`~@Ignore`, so the best practice
754 is to always use ``raise self.replace(...)`` to convey
755 to the reader that the task will not continue after being replaced.
758
759 """
760 chord = self.request.chord
761 if isinstance(sig, group):
762 sig |= self.app.tasks['celery.accumulate'].s(index=0).set(
763 chord=chord,
764 )
765 chord = None
766 sig.freeze(self.request.id,
767 group_id=self.request.group,
768 chord=chord,
769 root_id=self.request.root_id)
770 sig.delay()
771 raise Ignore('Chord member replaced by new task')
772
773 def add_to_chord(self, sig, lazy=False):
774 """Add signature to the chord the current task is a member of.
775
776 :param sig: Signature to extend chord with.
777 :param lazy: If enabled the new task will not actually be called,
778 and ``sig.delay()`` must be called manually.
779
780 Currently only supported by the Redis result backend when
781 ``?new_join=1`` is enabled.
782
783 """
784 if not self.request.chord:
785 raise ValueError('Current task is not member of any chord')
786 result = sig.freeze(group_id=self.request.group,
787 chord=self.request.chord,
788 root_id=self.request.root_id)
789 self.backend.add_to_chord(self.request.group, result)
790 return sig.delay() if not lazy else sig
791
792 def update_state(self, task_id=None, state=None, meta=None):
793 """Update task state.
794
795 :keyword task_id: Id of the task to update, defaults to the
796 id of the current task
797 :keyword state: New state (:class:`str`).
798 :keyword meta: State metadata (:class:`dict`).
799
802 """
803 if task_id is None:
804 task_id = self.request.id
805 self.backend.store_result(task_id, meta, state)
806
807 def on_success(self, retval, task_id, args, kwargs):
808 """Success handler.
809
810 Run by the worker if the task executes successfully.
811
812 :param retval: The return value of the task.
813 :param task_id: Unique id of the executed task.
814 :param args: Original arguments for the executed task.
815 :param kwargs: Original keyword arguments for the executed task.
816
817 The return value of this handler is ignored.
818
819 """
820 pass
821
822 def on_retry(self, exc, task_id, args, kwargs, einfo):
823 """Retry handler.
824
825 This is run by the worker when the task is to be retried.
826
827 :param exc: The exception sent to :meth:`retry`.
828 :param task_id: Unique id of the retried task.
829 :param args: Original arguments for the retried task.
830 :param kwargs: Original keyword arguments for the retried task.
831
832 :keyword einfo: :class:`~billiard.einfo.ExceptionInfo`
833 instance, containing the traceback.
834
835 The return value of this handler is ignored.
836
837 """
838 pass
839
840 def on_failure(self, exc, task_id, args, kwargs, einfo):
841 """Error handler.
842
843 This is run by the worker when the task fails.
844
845 :param exc: The exception raised by the task.
846 :param task_id: Unique id of the failed task.
847 :param args: Original arguments for the task that failed.
848 :param kwargs: Original keyword arguments for the task
849 that failed.
850
851 :keyword einfo: :class:`~billiard.einfo.ExceptionInfo`
852 instance, containing the traceback.
853
854 The return value of this handler is ignored.
855
856 """
857 pass
858
859 def after_return(self, status, retval, task_id, args, kwargs, einfo):
860 """Handler called after the task returns.
861
862 :param status: Current task state.
863 :param retval: Task return value/exception.
864 :param task_id: Unique id of the task.
865 :param args: Original arguments for the task.
866 :param kwargs: Original keyword arguments for the task.
867
868 :keyword einfo: :class:`~billiard.einfo.ExceptionInfo`
869 instance, containing the traceback (if any).
870
871 The return value of this handler is ignored.
872
873 """
874 pass
875
876 def send_error_email(self, context, exc, **kwargs):
877 if self.send_error_emails and \
878 not getattr(self, 'disable_error_emails', None):
879 self.ErrorMail(self, **kwargs).send(context, exc)
880
881 def add_trail(self, result):
882 if self.trail:
883 self.request.children.append(result)
884 return result
885
886 def push_request(self, *args, **kwargs):
887 self.request_stack.push(Context(*args, **kwargs))
888
889 def pop_request(self):
890 self.request_stack.pop()
891
892 def __repr__(self):
893 """`repr(task)`"""
894 return _reprtask(self, R_SELF_TASK if self.__self__ else R_INSTANCE)
895
896 def _get_request(self):
897 """Get current request object."""
898 req = self.request_stack.top
899 if req is None:
900 # task was not called, but some may still expect a request
901 # to be there, perhaps that should be deprecated.
902 if self._default_request is None:
903 self._default_request = Context()
904 return self._default_request
905 return req
906 request = property(_get_request)
907
908 def _get_exec_options(self):
909 if self._exec_options is None:
910 self._exec_options = extract_exec_options(self)
911 return self._exec_options
912
913 @property
914 def backend(self):
915 backend = self._backend
916 if backend is None:
917 return self.app.backend
918 return backend
919
920 @backend.setter
921 def backend(self, value): # noqa
922 self._backend = value
923
924 @property
925 def __name__(self):
926 return self.__class__.__name__
927 abstract.CallableTask.register(Task)
928 BaseTask = Task # compat alias
929
[end of celery/app/task.py]
[start of celery/worker/request.py]
1 # -*- coding: utf-8 -*-
2 """
3 celery.worker.request
4 ~~~~~~~~~~~~~~~~~~~~~
5
6 This module defines the :class:`Request` class,
7 which specifies how tasks are executed.
8
9 """
10 from __future__ import absolute_import, unicode_literals
11
12 import logging
13 import socket
14 import sys
15
16 from datetime import datetime
17 from weakref import ref
18
19 from kombu.utils.encoding import safe_repr, safe_str
20
21 from celery import signals
22 from celery.app.trace import trace_task, trace_task_ret
23 from celery.exceptions import (
24 Ignore, TaskRevokedError, InvalidTaskError,
25 SoftTimeLimitExceeded, TimeLimitExceeded,
26 WorkerLostError, Terminated, Retry, Reject,
27 )
28 from celery.five import string
29 from celery.platforms import signals as _signals
30 from celery.utils.functional import noop
31 from celery.utils.log import get_logger
32 from celery.utils.timeutils import maybe_iso8601, timezone, maybe_make_aware
33 from celery.utils.serialization import get_pickled_exception
34
35 from . import state
36
37 __all__ = ['Request']
38
39 IS_PYPY = hasattr(sys, 'pypy_version_info')
40
41 logger = get_logger(__name__)
42 debug, info, warn, error = (logger.debug, logger.info,
43 logger.warning, logger.error)
44 _does_info = False
45 _does_debug = False
46
47
48 def __optimize__():
49 # this is also called by celery.app.trace.setup_worker_optimizations
50 global _does_debug
51 global _does_info
52 _does_debug = logger.isEnabledFor(logging.DEBUG)
53 _does_info = logger.isEnabledFor(logging.INFO)
54 __optimize__()
55
56 # Localize
57 tz_utc = timezone.utc
58 tz_or_local = timezone.tz_or_local
59 send_revoked = signals.task_revoked.send
60
61 task_accepted = state.task_accepted
62 task_ready = state.task_ready
63 revoked_tasks = state.revoked
64
65
66 class Request(object):
67 """A request for task execution."""
68 acknowledged = False
69 time_start = None
70 worker_pid = None
71 time_limits = (None, None)
72 _already_revoked = False
73 _terminate_on_ack = None
74 _apply_result = None
75 _tzlocal = None
76
77 if not IS_PYPY: # pragma: no cover
78 __slots__ = (
79 'app', 'type', 'name', 'id', 'on_ack', 'body',
80 'hostname', 'eventer', 'connection_errors', 'task', 'eta',
81 'expires', 'request_dict', 'on_reject', 'utc',
82 'content_type', 'content_encoding',
83 '__weakref__', '__dict__',
84 )
85
86 def __init__(self, message, on_ack=noop,
87 hostname=None, eventer=None, app=None,
88 connection_errors=None, request_dict=None,
89 task=None, on_reject=noop, body=None,
90 headers=None, decoded=False, utc=True,
91 maybe_make_aware=maybe_make_aware,
92 maybe_iso8601=maybe_iso8601, **opts):
93 if headers is None:
94 headers = message.headers
95 if body is None:
96 body = message.body
97 self.app = app
98 self.message = message
99 self.body = body
100 self.utc = utc
101 if decoded:
102 self.content_type = self.content_encoding = None
103 else:
104 self.content_type, self.content_encoding = (
105 message.content_type, message.content_encoding,
106 )
107
108 self.id = headers['id']
109 type = self.type = self.name = headers['task']
110 if 'shadow' in headers:
111 self.name = headers['shadow']
112 if 'timelimit' in headers:
113 self.time_limits = headers['timelimit']
114 self.on_ack = on_ack
115 self.on_reject = on_reject
116 self.hostname = hostname or socket.gethostname()
117 self.eventer = eventer
118 self.connection_errors = connection_errors or ()
119 self.task = task or self.app.tasks[type]
120
121 # timezone means the message is timezone-aware, and the only timezone
122 # supported at this point is UTC.
123 eta = headers.get('eta')
124 if eta is not None:
125 try:
126 eta = maybe_iso8601(eta)
127 except (AttributeError, ValueError, TypeError) as exc:
128 raise InvalidTaskError(
129 'invalid eta value {0!r}: {1}'.format(eta, exc))
130 self.eta = maybe_make_aware(eta, self.tzlocal)
131 else:
132 self.eta = None
133
134 expires = headers.get('expires')
135 if expires is not None:
136 try:
137 expires = maybe_iso8601(expires)
138 except (AttributeError, ValueError, TypeError) as exc:
139 raise InvalidTaskError(
140 'invalid expires value {0!r}: {1}'.format(expires, exc))
141 self.expires = maybe_make_aware(expires, self.tzlocal)
142 else:
143 self.expires = None
144
145 delivery_info = message.delivery_info or {}
146 properties = message.properties or {}
147 headers.update({
148 'reply_to': properties.get('reply_to'),
149 'correlation_id': properties.get('correlation_id'),
150 'delivery_info': {
151 'exchange': delivery_info.get('exchange'),
152 'routing_key': delivery_info.get('routing_key'),
153 'priority': delivery_info.get('priority'),
154 'redelivered': delivery_info.get('redelivered'),
155 }
156
157 })
158 self.request_dict = headers
159
160 @property
161 def delivery_info(self):
162 return self.request_dict['delivery_info']
163
164 def execute_using_pool(self, pool, **kwargs):
165 """Used by the worker to send this task to the pool.
166
167 :param pool: A :class:`celery.concurrency.base.TaskPool` instance.
168
169 :raises celery.exceptions.TaskRevokedError: if the task was revoked
170 and ignored.
171
172 """
173 task_id = self.id
174 task = self.task
175 if self.revoked():
176 raise TaskRevokedError(task_id)
177
178 time_limit, soft_time_limit = self.time_limits
179 time_limit = time_limit or task.time_limit
180 soft_time_limit = soft_time_limit or task.soft_time_limit
181 result = pool.apply_async(
182 trace_task_ret,
183 args=(self.type, task_id, self.request_dict, self.body,
184 self.content_type, self.content_encoding),
185 accept_callback=self.on_accepted,
186 timeout_callback=self.on_timeout,
187 callback=self.on_success,
188 error_callback=self.on_failure,
189 soft_timeout=soft_time_limit,
190 timeout=time_limit,
191 correlation_id=task_id,
192 )
193 # cannot create weakref to None
194 self._apply_result = ref(result) if result is not None else result
195 return result
196
197 def execute(self, loglevel=None, logfile=None):
198 """Execute the task in a :func:`~celery.app.trace.trace_task`.
199
200 :keyword loglevel: The loglevel used by the task.
201 :keyword logfile: The logfile used by the task.
202
203 """
204 if self.revoked():
205 return
206
207 # acknowledge task as being processed.
208 if not self.task.acks_late:
209 self.acknowledge()
210
211 request = self.request_dict
212 args, kwargs, embed = self.message.payload
213 request.update({'loglevel': loglevel, 'logfile': logfile,
214 'hostname': self.hostname, 'is_eager': False,
215 'args': args, 'kwargs': kwargs}, **embed or {})
216 retval = trace_task(self.task, self.id, args, kwargs, request,
217 hostname=self.hostname, loader=self.app.loader,
218 app=self.app)[0]
219 self.acknowledge()
220 return retval
221
222 def maybe_expire(self):
223 """If expired, mark the task as revoked."""
224 if self.expires:
225 now = datetime.now(self.expires.tzinfo)
226 if now > self.expires:
227 revoked_tasks.add(self.id)
228 return True
229
230 def terminate(self, pool, signal=None):
231 signal = _signals.signum(signal or 'TERM')
232 if self.time_start:
233 pool.terminate_job(self.worker_pid, signal)
234 self._announce_revoked('terminated', True, signal, False)
235 else:
236 self._terminate_on_ack = pool, signal
237 if self._apply_result is not None:
238 obj = self._apply_result() # is a weakref
239 if obj is not None:
240 obj.terminate(signal)
241
242 def _announce_revoked(self, reason, terminated, signum, expired):
243 task_ready(self)
244 self.send_event('task-revoked',
245 terminated=terminated, signum=signum, expired=expired)
246 if self.store_errors:
247 self.task.backend.mark_as_revoked(self.id, reason, request=self)
248 self.acknowledge()
249 self._already_revoked = True
250 send_revoked(self.task, request=self,
251 terminated=terminated, signum=signum, expired=expired)
252
253 def revoked(self):
254 """If revoked, skip task and mark state."""
255 expired = False
256 if self._already_revoked:
257 return True
258 if self.expires:
259 expired = self.maybe_expire()
260 if self.id in revoked_tasks:
261 info('Discarding revoked task: %s[%s]', self.name, self.id)
262 self._announce_revoked(
263 'expired' if expired else 'revoked', False, None, expired,
264 )
265 return True
266 return False
267
268 def send_event(self, type, **fields):
269 if self.eventer and self.eventer.enabled:
270 self.eventer.send(type, uuid=self.id, **fields)
271
272 def on_accepted(self, pid, time_accepted):
273 """Handler called when task is accepted by worker pool."""
274 self.worker_pid = pid
275 self.time_start = time_accepted
276 task_accepted(self)
277 if not self.task.acks_late:
278 self.acknowledge()
279 self.send_event('task-started')
280 if _does_debug:
281 debug('Task accepted: %s[%s] pid:%r', self.name, self.id, pid)
282 if self._terminate_on_ack is not None:
283 self.terminate(*self._terminate_on_ack)
284
285 def on_timeout(self, soft, timeout):
286 """Handler called if the task times out."""
287 task_ready(self)
288 if soft:
289 warn('Soft time limit (%ss) exceeded for %s[%s]',
290 soft, self.name, self.id)
291 exc = SoftTimeLimitExceeded(soft)
292 else:
293 error('Hard time limit (%ss) exceeded for %s[%s]',
294 timeout, self.name, self.id)
295 exc = TimeLimitExceeded(timeout)
296
297 if self.store_errors:
298 self.task.backend.mark_as_failure(self.id, exc, request=self)
299
300 if self.task.acks_late:
301 self.acknowledge()
302
303 def on_success(self, failed__retval__runtime, **kwargs):
304 """Handler called if the task was successfully processed."""
305 failed, retval, runtime = failed__retval__runtime
306 if failed:
307 if isinstance(retval.exception, (SystemExit, KeyboardInterrupt)):
308 raise retval.exception
309 return self.on_failure(retval, return_ok=True)
310 task_ready(self)
311
312 if self.task.acks_late:
313 self.acknowledge()
314
315 self.send_event('task-succeeded', result=retval, runtime=runtime)
316
317 def on_retry(self, exc_info):
318 """Handler called if the task should be retried."""
319 if self.task.acks_late:
320 self.acknowledge()
321
322 self.send_event('task-retried',
323 exception=safe_repr(exc_info.exception.exc),
324 traceback=safe_str(exc_info.traceback))
325
326 def on_failure(self, exc_info, send_failed_event=True, return_ok=False):
327 """Handler called if the task raised an exception."""
328 task_ready(self)
329
330 if isinstance(exc_info.exception, MemoryError):
331 raise MemoryError('Process got: %s' % (exc_info.exception,))
332 elif isinstance(exc_info.exception, Reject):
333 return self.reject(requeue=exc_info.exception.requeue)
334 elif isinstance(exc_info.exception, Ignore):
335 return self.acknowledge()
336
337 exc = exc_info.exception
338
339 if isinstance(exc, Retry):
340 return self.on_retry(exc_info)
341
342 # These are special cases where the process would not have had
343 # time to write the result.
344 if self.store_errors:
345 if isinstance(exc, Terminated):
346 self._announce_revoked(
347 'terminated', True, string(exc), False)
348 send_failed_event = False # already sent revoked event
349 elif isinstance(exc, WorkerLostError) or not return_ok:
350 self.task.backend.mark_as_failure(
351 self.id, exc, request=self,
352 )
353 # (acks_late) acknowledge after result stored.
354 if self.task.acks_late:
355 self.acknowledge()
356
357 if send_failed_event:
358 self.send_event(
359 'task-failed',
360 exception=safe_repr(get_pickled_exception(exc_info.exception)),
361 traceback=exc_info.traceback,
362 )
363
364 if not return_ok:
365 error('Task handler raised error: %r', exc,
366 exc_info=exc_info.exc_info)
367
368 def acknowledge(self):
369 """Acknowledge task."""
370 if not self.acknowledged:
371 self.on_ack(logger, self.connection_errors)
372 self.acknowledged = True
373
374 def reject(self, requeue=False):
375 if not self.acknowledged:
376 self.on_reject(logger, self.connection_errors, requeue)
377 self.acknowledged = True
378
379 def info(self, safe=False):
380 return {'id': self.id,
381 'name': self.name,
382 'type': self.type,
383 'body': self.body,
384 'hostname': self.hostname,
385 'time_start': self.time_start,
386 'acknowledged': self.acknowledged,
387 'delivery_info': self.delivery_info,
388 'worker_pid': self.worker_pid}
389
390 def __str__(self):
391 return ' '.join([
392 self.humaninfo(),
393 ' eta:[{0}]'.format(self.eta) if self.eta else '',
394 ' expires:[{0}]'.format(self.expires) if self.expires else '',
395 ])
396 shortinfo = __str__
397
398 def humaninfo(self):
399 return '{0.name}[{0.id}]'.format(self)
400
401 def __repr__(self):
402 return '<{0}: {1}>'.format(type(self).__name__, self.humaninfo())
403
404 @property
405 def tzlocal(self):
406 if self._tzlocal is None:
407 self._tzlocal = self.app.conf.CELERY_TIMEZONE
408 return self._tzlocal
409
410 @property
411 def store_errors(self):
412 return (not self.task.ignore_result or
413 self.task.store_errors_even_if_ignored)
414
415 @property
416 def task_id(self):
417 # XXX compat
418 return self.id
419
420 @task_id.setter # noqa
421 def task_id(self, value):
422 self.id = value
423
424 @property
425 def task_name(self):
426 # XXX compat
427 return self.name
428
429 @task_name.setter # noqa
430 def task_name(self, value):
431 self.name = value
432
433 @property
434 def reply_to(self):
435 # used by rpc backend when failures reported by parent process
436 return self.request_dict['reply_to']
437
438 @property
439 def correlation_id(self):
440 # used similarly to reply_to
441 return self.request_dict['correlation_id']
442
443
444 def create_request_cls(base, task, pool, hostname, eventer,
445 ref=ref, revoked_tasks=revoked_tasks,
446 task_ready=task_ready):
447 from celery.app.trace import trace_task_ret as trace
448 default_time_limit = task.time_limit
449 default_soft_time_limit = task.soft_time_limit
450 apply_async = pool.apply_async
451 acks_late = task.acks_late
452 events = eventer and eventer.enabled
453
454 class Request(base):
455
456 def execute_using_pool(self, pool, **kwargs):
457 task_id = self.id
458 if (self.expires or task_id in revoked_tasks) and self.revoked():
459 raise TaskRevokedError(task_id)
460
461 time_limit, soft_time_limit = self.time_limits
462 time_limit = time_limit or default_time_limit
463 soft_time_limit = soft_time_limit or default_soft_time_limit
464 result = apply_async(
465 trace,
466 args=(self.type, task_id, self.request_dict, self.body,
467 self.content_type, self.content_encoding),
468 accept_callback=self.on_accepted,
469 timeout_callback=self.on_timeout,
470 callback=self.on_success,
471 error_callback=self.on_failure,
472 soft_timeout=soft_time_limit,
473 timeout=time_limit,
474 correlation_id=task_id,
475 )
476 # cannot create weakref to None
477 self._apply_result = ref(result) if result is not None else result
478 return result
479
480 def on_success(self, failed__retval__runtime, **kwargs):
481 failed, retval, runtime = failed__retval__runtime
482 if failed:
483 if isinstance(retval.exception, (
484 SystemExit, KeyboardInterrupt)):
485 raise retval.exception
486 return self.on_failure(retval, return_ok=True)
487 task_ready(self)
488
489 if acks_late:
490 self.acknowledge()
491
492 if events:
493 self.send_event(
494 'task-succeeded', result=retval, runtime=runtime,
495 )
496
497 return Request
498
[end of celery/worker/request.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
celery/celery
|
045b52f1450d6d5cc500e0057a4b498250dc5692
|
Message being acknowledged on WorkerLostError when CELERY_ACKS_LATE=True
When using celery v3.0.24, with `CELERY_ACKS_LATE = True` , if the OOM killer kills the celery worker, then the worker acknowledges the message.
As per [this](https://github.com/celery/celery/commit/e810420c) commit, the `exc_info.internal` comes in as `false`, which means it is not an internal error, and so the message is acknowledged.
The desirable behaviour in such a case would be to not acknowledge the message (and to be able to know whether it is an OOM error), so that some other worker can pick it up.
As a workaround, I've commented out the [code](https://github.com/siddharth96/celery/commit/427695d1b23034dadda85fd7a48f7367831be4fa) where celery acknowledges the message, because in such a case the message will be lost.
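The acknowledgement behaviour being requested can be condensed into a small sketch (plain Python; `WorkerLostError` here is a local stand-in for `celery.exceptions.WorkerLostError`, and `desired_ack` is a hypothetical helper, not celery API):

```python
class WorkerLostError(Exception):
    """Local stand-in for celery.exceptions.WorkerLostError."""

def desired_ack(acks_late, exc):
    # Do not ack a lost-worker failure, so the broker can redeliver the
    # message to another worker; everything else keeps the usual
    # acks_late semantics (ack after the task finishes or fails).
    if acks_late and isinstance(exc, WorkerLostError):
        return False
    return True
```

With `acks_late` enabled, only a lost-worker failure would leave the message unacknowledged for redelivery.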
|
This is deliberate: if a task is killed, it may mean that the next invocation will also cause the same thing to happen. If the task is redelivered it may cause a loop where the same conditions occur again and again. Also, sadly, you cannot distinguish processes killed by OOM from processes killed by other means, and if an administrator kills -9 a task going amok, you usually don't want that task to be called again.
There could be a configuration option for not acking terminated tasks, but I'm not sure how useful that would be.
A better solution could be to use `basic_reject(requeue=False)` instead of `basic_ack`; that way you can configure
a dead-letter queue so that the killed tasks will be sent to a queue for manual inspection.
I must say, regardless of the status of this feature request, the documentation is misleading. Specifically, [this FAQ makes it seem that process failures would NOT acknowledge messages](http://celery.readthedocs.org/en/latest/faq.html#faq-acks-late-vs-retry). And [this FAQ boldface states](http://celery.readthedocs.org/en/latest/faq.html#id54) that in the event of a kill signal (9), that acks_late will allow the task to re-run (which again, is patently wrong based on this poorly documented behavior). Nowhere in the docs have I found that if the process _dies_, the message will be acknowledged, regardless of acks_late or not. (for instance, I have a set of 10k+ tasks, and some 1% of tasks wind up acknowledged but incomplete when a WorkerLostError is thrown in connection with the worker, although there are no other errors of any kind in any of my logs related to that task).
TL;DR at the least, appropriately document the current state when describing the functionality and limitations of acks_late. A work-around would be helpful -- I'm not sure I understand the solution of using `basic_reject`, although I'll keep looking into it.
The docs are referring to killing the worker process with KILL, not the child processes. The term worker will always refer to the worker instance, not the pool processes. The section about acks_late is probably not very helpful and should be removed.
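The dead-letter-queue suggestion above can be sketched as plain queue arguments. Only the `x-dead-letter-*` keys are real RabbitMQ queue-argument extensions; the exchange and routing-key names are placeholders, and with kombu this dict would typically be passed as a queue's `queue_arguments`:

```python
# Placeholder names; only the 'x-dead-letter-*' keys are standard RabbitMQ
# queue arguments. A basic_reject(requeue=False) on a queue declared with
# these arguments re-publishes the message to the 'dead_letters' exchange
# for manual inspection instead of discarding it.
DEAD_LETTER_ARGS = {
    'x-dead-letter-exchange': 'dead_letters',
    'x-dead-letter-routing-key': 'worker.lost',
}
```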
|
2015-10-06T05:34:34Z
|
<patch>
diff --git a/celery/app/defaults.py b/celery/app/defaults.py
--- a/celery/app/defaults.py
+++ b/celery/app/defaults.py
@@ -132,6 +132,7 @@ def __repr__(self):
'REDIS_DB': Option(type='int', **_REDIS_OLD),
'REDIS_PASSWORD': Option(type='string', **_REDIS_OLD),
'REDIS_MAX_CONNECTIONS': Option(type='int'),
+ 'REJECT_ON_WORKER_LOST': Option(type='bool'),
'RESULT_BACKEND': Option(type='string'),
'RESULT_DB_SHORT_LIVED_SESSIONS': Option(False, type='bool'),
'RESULT_DB_TABLENAMES': Option(type='dict'),
diff --git a/celery/app/task.py b/celery/app/task.py
--- a/celery/app/task.py
+++ b/celery/app/task.py
@@ -220,6 +220,12 @@ class Task(object):
#: :setting:`CELERY_ACKS_LATE` setting.
acks_late = None
+ #: When CELERY_ACKS_LATE is set to True, the default behavior to
+ #: handle worker crash is to acknowledge the message. Setting
+ #: this to true allows the message to be rejected and requeued so
+ #: it will be executed again by another worker.
+ reject_on_worker_lost = None
+
#: Tuple of expected exceptions.
#:
#: These are errors that are expected in normal operation
@@ -248,6 +254,7 @@ class Task(object):
('rate_limit', 'CELERY_DEFAULT_RATE_LIMIT'),
('track_started', 'CELERY_TRACK_STARTED'),
('acks_late', 'CELERY_ACKS_LATE'),
+ ('reject_on_worker_lost', 'CELERY_REJECT_ON_WORKER_LOST'),
('ignore_result', 'CELERY_IGNORE_RESULT'),
('store_errors_even_if_ignored',
'CELERY_STORE_ERRORS_EVEN_IF_IGNORED'),
diff --git a/celery/worker/request.py b/celery/worker/request.py
--- a/celery/worker/request.py
+++ b/celery/worker/request.py
@@ -326,7 +326,6 @@ def on_retry(self, exc_info):
def on_failure(self, exc_info, send_failed_event=True, return_ok=False):
"""Handler called if the task raised an exception."""
task_ready(self)
-
if isinstance(exc_info.exception, MemoryError):
raise MemoryError('Process got: %s' % (exc_info.exception,))
elif isinstance(exc_info.exception, Reject):
@@ -352,7 +351,13 @@ def on_failure(self, exc_info, send_failed_event=True, return_ok=False):
)
# (acks_late) acknowledge after result stored.
if self.task.acks_late:
- self.acknowledge()
+ reject_and_requeue = (self.task.reject_on_worker_lost and
+ isinstance(exc, WorkerLostError) and
+ self.delivery_info.get('redelivered', False) is False)
+ if reject_and_requeue:
+ self.reject(requeue=True)
+ else:
+ self.acknowledge()
if send_failed_event:
self.send_event(
</patch>
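The condition the patch adds to `Request.on_failure` can be mirrored in isolation (a pure-Python sketch with local names, not celery code):

```python
class WorkerLostError(Exception):
    """Local stand-in for celery.exceptions.WorkerLostError."""

def reject_and_requeue(reject_on_worker_lost, exc, redelivered):
    # Mirrors the patched condition: requeue only on the first delivery,
    # so a message that repeatedly kills workers cannot loop forever.
    return bool(reject_on_worker_lost and
                isinstance(exc, WorkerLostError) and
                redelivered is False)
```

Note the `redelivered is False` guard: once the broker has redelivered the message, a second worker loss falls back to acknowledging it.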
|
diff --git a/celery/tests/worker/test_request.py b/celery/tests/worker/test_request.py
--- a/celery/tests/worker/test_request.py
+++ b/celery/tests/worker/test_request.py
@@ -325,6 +325,20 @@ def test_on_failure_Reject_rejects_with_requeue(self):
req_logger, req.connection_errors, True,
)
+ def test_on_failure_WrokerLostError_rejects_with_requeue(self):
+ einfo = None
+ try:
+ raise WorkerLostError()
+ except:
+ einfo = ExceptionInfo(internal=True)
+ req = self.get_request(self.add.s(2, 2))
+ req.task.acks_late = True
+ req.task.reject_on_worker_lost = True
+ req.delivery_info['redelivered'] = False
+ req.on_failure(einfo)
+ req.on_reject.assert_called_with(req_logger,
+ req.connection_errors, True)
+
def test_tzlocal_is_cached(self):
req = self.get_request(self.add.s(2, 2))
req._tzlocal = 'foo'
|
1.0
| |||
NVIDIA__NeMo-473
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Jasper Encoder Export failed
The export of Jasper Encoder is failing. I am using the core API [deployment_export](https://nvidia.github.io/NeMo/api-docs/nemo.html#nemo.core.neural_factory.NeuralModuleFactory.deployment_export) like in the script: https://github.com/NVIDIA/NeMo/blob/403238f82d26879ba5fca53fbf75b3cdc70fb49b/scripts/export_jasper_to_onnx.py#L92
I believe the issue (as shown below) is that the `input_example` provided does not match the `output_example`.
```
/opt/conda/lib/python3.6/site-packages/torch/jit/__init__.py:1023: TracerWarning: Output nr 1. of the traced function does not match the corresponding output of the Python function. Detailed error:
Not within tolerance rtol=1e-05 atol=1e-05 at input[0, 870, 67] (0.6547648906707764 vs. 0.6546438932418823) and 812 other locations (0.00%)
check_tolerance, _force_outplace, True, _module_class)
[NeMo E 2020-02-23 19:10:07 actions:1023] module export failed for JasperEncoder with exception number of output names provided (2) exceeded number of outputs (1)
```
**What is the correct `input_example` and `output_example` to export JasperEncoder?**
The full output can be seen here:
```
adrianaf@2a520c7abb1e:/tmp/NeMo$ ! python /tmp/NeMo/scripts/export_jasper_to_onnx.py --config /raid/datasets/asr/data/config_files/WSJ-test_acoustic_quartznet15x5.yaml --nn_encoder /home/adrianaf/projects/nemo_asr_app/models/quartznet15x5/JasperEncoder-STEP-247400.pt --nn_decoder /home/adrianaf/projects/nemo_asr_app/models/quartznet15x5/JasperDecoderForCTC-STEP-247400.pt --onnx_encoder /raid/datasets/asr/data/models/ONNX/pre-trained_encoder.onnx --onnx_decoder /raid/datasets/asr/data/models/ONNX/pre-trained_decoder.onnx
/opt/conda/lib/python3.6/site-packages/torchvision/io/_video_opt.py:17: UserWarning: video reader based on ffmpeg c++ ops not available
warnings.warn("video reader based on ffmpeg c++ ops not available")
/tmp/NeMo/nemo/collections/asr/audio_preprocessing.py:48: UserWarning: Could not import torchaudio. Some features might not work.
warnings.warn('Could not import torchaudio. Some features might not work.')
[NeMo I 2020-02-23 19:09:42 export_jasper_to_onnx:48] Loading config file...
[NeMo I 2020-02-23 19:09:42 export_jasper_to_onnx:52] Determining model shape...
[NeMo I 2020-02-23 19:09:42 export_jasper_to_onnx:60] Num encoder input features: 64
[NeMo I 2020-02-23 19:09:42 export_jasper_to_onnx:61] Num decoder input features: 1024
[NeMo W 2020-02-23 19:09:42 deprecated:68] Function ``_get_trainer`` is deprecated. It is going to be removed in the future version.
[NeMo I 2020-02-23 19:09:42 export_jasper_to_onnx:65] Initializing models...
[NeMo I 2020-02-23 19:09:45 export_jasper_to_onnx:76] Loading checkpoints...
[NeMo I 2020-02-23 19:09:45 export_jasper_to_onnx:91] Exporting encoder...
[NeMo W 2020-02-23 19:09:45 neural_factory:627] Turned off 170 masked convolutions
[NeMo I 2020-02-23 19:09:45 actions:937] Module is JasperEncoder. We are removing input and output length ports since they are not needed for deployment
[NeMo W 2020-02-23 19:09:46 deprecated:68] Function ``local_parameters`` is deprecated. It is going to be removed in the 0.11 version.
/opt/conda/lib/python3.6/site-packages/torch/jit/__init__.py:1023: TracerWarning: Output nr 1. of the traced function does not match the corresponding output of the Python function. Detailed error:
Not within tolerance rtol=1e-05 atol=1e-05 at input[0, 870, 67] (0.6547648906707764 vs. 0.6546438932418823) and 812 other locations (0.00%)
check_tolerance, _force_outplace, True, _module_class)
[NeMo E 2020-02-23 19:10:07 actions:1023] module export failed for JasperEncoder with exception number of output names provided (2) exceeded number of outputs (1)
[NeMo I 2020-02-23 19:10:07 export_jasper_to_onnx:98] Exporting decoder...
graph(%encoder_output : Float(1, 1024, 128),
%1 : Float(29, 1024, 1),
%2 : Float(29)):
%3 : Float(1, 29, 128) = onnx::Conv[dilations=[1], group=1, kernel_shape=[1], pads=[0, 0], strides=[1]](%encoder_output, %1, %2), scope: JasperDecoderForCTC/Sequential[decoder_layers]/Conv1d[0] # /opt/conda/lib/python3.6/site-packages/torch/nn/modules/conv.py:202:0
%4 : Float(1, 128, 29) = onnx::Transpose[perm=[0, 2, 1]](%3), scope: JasperDecoderForCTC # /tmp/NeMo/nemo/collections/asr/jasper.py:235:0
%output : Float(1, 128, 29) = onnx::LogSoftmax[axis=2](%4), scope: JasperDecoderForCTC # /opt/conda/lib/python3.6/site-packages/torch/nn/functional.py:1317:0
return (%output)
/opt/conda/lib/python3.6/site-packages/torch/onnx/utils.py:774: UserWarning: No names were found for specified dynamic axes of provided input.Automatically generated names will be applied to each dynamic axes of input encoder_output
'Automatically generated names will be applied to each dynamic axes of input {}'.format(key))
/opt/conda/lib/python3.6/site-packages/torch/onnx/utils.py:774: UserWarning: No names were found for specified dynamic axes of provided input.Automatically generated names will be applied to each dynamic axes of input output
'Automatically generated names will be applied to each dynamic axes of input {}'.format(key))
[NeMo I 2020-02-23 19:10:07 export_jasper_to_onnx:105] Export completed successfully.
```
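The failing check in the log can be illustrated in isolation (plain Python; `validate_output_names` and the port names are hypothetical stand-ins for what the ONNX exporter enforces):

```python
def validate_output_names(output_names, outputs):
    # torch.onnx.export raises a similar error when it is handed more
    # output names than the traced module actually returns.
    if len(output_names) > len(outputs):
        raise ValueError(
            'number of output names provided (%d) exceeded number of '
            'outputs (%d)' % (len(output_names), len(outputs)))

# After the length ports are stripped for deployment the traced encoder
# returns a single tensor, but two names are still supplied:
try:
    validate_output_names(['outputs', 'encoded_lengths'], [object()])
except ValueError as exc:
    print(exc)  # number of output names provided (2) exceeded number of outputs (1)
```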
</issue>
<code>
[start of README.rst]
1 .. image:: http://www.repostatus.org/badges/latest/active.svg
2 :target: http://www.repostatus.org/#active
3 :alt: Project Status: Active โ The project has reached a stable, usable state and is being actively developed.
4
5 .. image:: https://img.shields.io/badge/documentation-github.io-blue.svg
6 :target: https://nvidia.github.io/NeMo/
7 :alt: NeMo documentation on GitHub pages
8
9 .. image:: https://img.shields.io/badge/License-Apache%202.0-brightgreen.svg
10 :target: https://github.com/NVIDIA/NeMo/blob/master/LICENSE
11 :alt: NeMo core license and license for collections in this repo
12
13 .. image:: https://img.shields.io/lgtm/grade/python/g/NVIDIA/NeMo.svg?logo=lgtm&logoWidth=18
14 :target: https://lgtm.com/projects/g/NVIDIA/NeMo/context:python
15 :alt: Language grade: Python
16
17 .. image:: https://img.shields.io/lgtm/alerts/g/NVIDIA/NeMo.svg?logo=lgtm&logoWidth=18
18 :target: https://lgtm.com/projects/g/NVIDIA/NeMo/alerts/
19 :alt: Total alerts
20
21 .. image:: https://img.shields.io/badge/code%20style-black-000000.svg
22 :target: https://github.com/psf/black
23 :alt: Code style: black
24
25
26
27 NVIDIA Neural Modules: NeMo
28 ===========================
29
30 NeMo is a toolkit for defining and building `Conversational AI <https://developer.nvidia.com/conversational-ai#started>`_ applications.
31
32 Goal of the NeMo toolkit is to make it possible for researchers to easily compose complex neural network architectures for conversational AI using reusable components. Built for speed, NeMo can utilize NVIDIA's Tensor Cores and scale out training to multiple GPUs and multiple nodes.
33
34 **Neural Modules** are conceptual blocks of neural networks that take *typed* inputs and produce *typed* outputs. Such modules typically represent data layers, encoders, decoders, language models, loss functions, or methods of combining activations.
35
36 The toolkit comes with extendable collections of pre-built modules for automatic speech recognition (ASR), natural language processing (NLP) and text synthesis (TTS).
37
38 **Introduction**
39
40 * Watch `this video <https://nvidia.github.io/NeMo/>`_ for a quick walk-through.
41
42 * Documentation (latest released version): https://nvidia.github.io/NeMo/
43
44 * Read NVIDIA `Developer Blog for example applications <https://devblogs.nvidia.com/how-to-build-domain-specific-automatic-speech-recognition-models-on-gpus/>`_
45
46 * Read NVIDIA `Developer Blog for Quartznet ASR model <https://devblogs.nvidia.com/develop-smaller-speech-recognition-models-with-nvidias-nemo-framework/>`_
47
48 * Recommended version to install is **0.9.0** via ``pip install nemo-toolkit``
49
50 * Recommended NVIDIA `NGC NeMo Toolkit container <https://ngc.nvidia.com/catalog/containers/nvidia:nemo>`_
51
52 * Pretrained models are available on NVIDIA `NGC Model repository <https://ngc.nvidia.com/catalog/models?orderBy=modifiedDESC&query=nemo&quickFilter=models&filters=>`_
53
54
55 Getting started
56 ~~~~~~~~~~~~~~~
57
58 THE LATEST STABLE VERSION OF NeMo is **0.9.0** (Available via PIP).
59
60 **Requirements**
61
62 1) Python 3.6 or 3.7
63 2) PyTorch 1.4.* with GPU support
64 3) (optional, for best performance) NVIDIA APEX. Install from here: https://github.com/NVIDIA/apex
65
66 **NeMo Docker Container**
67 NVIDIA `NGC NeMo Toolkit container <https://ngc.nvidia.com/catalog/containers/nvidia:nemo>`_ is now available.
68
69 * Pull the docker: ``docker pull nvcr.io/nvidia/nemo:v0.9``
70 * Run: ``docker run --runtime=nvidia -it --rm -v <nemo_github_folder>:/NeMo --shm-size=8g -p 8888:8888 -p 6006:6006 --ulimit memlock=-1 --ulimit stack=67108864 nvcr.io/nvidia/nemo:v0.9``
71
72 If you are using the NVIDIA `NGC PyTorch container <https://ngc.nvidia.com/catalog/containers/nvidia:pytorch>`_ follow these instructions
73
74 * Pull the docker: ``docker pull nvcr.io/nvidia/pytorch:20.01-py3``
75 * Run: ``docker run --runtime=nvidia -it --rm -v <nemo_github_folder>:/NeMo --shm-size=8g -p 8888:8888 -p 6006:6006 --ulimit memlock=-1 --ulimit stack=67108864 nvcr.io/nvidia/pytorch:20.01-py3``
76 * ``apt-get update && apt-get install -y libsndfile1``
77 * ``pip install nemo_toolkit`` NeMo core
78 * ``pip install nemo_asr`` NeMo ASR (Speech Recognition) collection
79 * ``pip install nemo_nlp`` NeMo NLP (Natural Language Processing) collection
80 * ``pip install nemo_tts`` NeMo TTS (Speech Synthesis) collection
81
82 See ``examples/start_here`` for the simplest example. The ``examples`` folder contains several more examples covering various NLP and ASR tasks.
83
84 **Tutorials**
85
86 * `Speech recognition <https://nvidia.github.io/NeMo/asr/intro.html>`_
87 * `Natural language processing <https://nvidia.github.io/NeMo/nlp/intro.html>`_
88 * `Speech Synthesis <https://nvidia.github.io/NeMo/tts/intro.html>`_
89
90
91 DEVELOPMENT
92 ~~~~~~~~~~~
93 If you'd like to use the master branch and/or develop NeMo, run the ``reinstall.sh`` script.
94
95 `Documentation (master branch) <http://nemo-master-docs.s3-website.us-east-2.amazonaws.com/>`_.
96
97 **Installing From Github**
98
99 If you prefer to use NeMo's latest development version (from GitHub), follow the steps below:
100
101 1) Clone the repository ``git clone https://github.com/NVIDIA/NeMo.git``
102 2) Go to NeMo folder and re-install the toolkit with collections:
103
104 .. code-block:: bash
105
106 ./reinstall.sh
107
108 **Style tests**
109
110 .. code-block:: bash
111
112 python setup.py style # Checks overall project code style and output issues with diff.
113 python setup.py style --fix # Tries to fix error in-place.
114 python setup.py style --scope=tests # Operates within certain scope (dir of file).
115
116 **Unittests**
117
118 These commands run the unit tests:
119
120 .. code-block:: bash
121
122 ./reinstall.sh
123     pytest tests
124
125
126 Citation
127 ~~~~~~~~
128
129 If you are using NeMo please cite the following publication
130
131 .. code-block:: tex
132
133 @misc{nemo2019,
134 title={NeMo: a toolkit for building AI applications using Neural Modules},
135 author={Oleksii Kuchaiev and Jason Li and Huyen Nguyen and Oleksii Hrinchuk and Ryan Leary and Boris Ginsburg and Samuel Kriman and Stanislav Beliaev and Vitaly Lavrukhin and Jack Cook and Patrice Castonguay and Mariya Popova and Jocelyn Huang and Jonathan M. Cohen},
136 year={2019},
137 eprint={1909.09577},
138 archivePrefix={arXiv},
139 primaryClass={cs.LG}
140 }
141
142
[end of README.rst]
[start of nemo/backends/pytorch/actions.py]
1 # Copyright (c) 2019 NVIDIA Corporation
2 import copy
3 import importlib
4 import itertools
5 import json
6 import os
7 from collections import defaultdict
8 from contextlib import ExitStack
9 from pathlib import Path
10 from typing import List, Optional
11
12 import torch
13 import torch.distributed as dist
14 import torch.nn as nn
15 import torch.optim as optim
16 from torch.nn.parallel import DistributedDataParallel as DDP
17
18 from nemo import logging
19 from nemo.backends.pytorch.module_wrapper import TrainableNeuralModuleWrapper
20 from nemo.backends.pytorch.nm import DataLayerNM, TrainableNM
21 from nemo.backends.pytorch.optimizers import AdamW, Novograd, master_params
22 from nemo.core import DeploymentFormat, DeviceType, NeuralModule, NmTensor
23 from nemo.core.callbacks import ActionCallback, EvaluatorCallback, SimpleLossLoggerCallback
24 from nemo.core.neural_factory import Actions, ModelMode, Optimization
25 from nemo.core.neural_types import *
26 from nemo.utils.helpers import get_checkpoint_from_dir
27
28 # these imports will happen on as-needed basis
29 amp = None
30 # convert_syncbn = None
31 # create_syncbn_process_group = None
32 LARC = None
33 FusedLAMB = None
34 FusedAdam = None
35 FusedNovoGrad = None
36
37 AmpOptimizations = {
38 Optimization.mxprO0: "O0",
39 Optimization.mxprO1: "O1",
40 Optimization.mxprO2: "O2",
41 Optimization.mxprO3: "O3",
42 }
43
44 _float_2_half_req = {
45 Optimization.mxprO1,
46 Optimization.mxprO2,
47 Optimization.mxprO3,
48 }
49
50
51 class PtActions(Actions):
52 def __init__(
53 self, local_rank=None, global_rank=None, tb_writer=None, optimization_level=Optimization.mxprO0,
54 ):
55 need_apex = local_rank is not None or optimization_level != Optimization.mxprO0
56 if need_apex:
57 try:
58 apex = importlib.import_module('apex')
59 if optimization_level != Optimization.mxprO0:
60 global amp
61 amp = importlib.import_module('apex.amp')
62 if local_rank is not None:
63 # global convert_syncbn
64 # global create_syncbn_process_group
65 global LARC
66 global FusedLAMB
67 global FusedAdam
68 global FusedNovoGrad
69 parallel = importlib.import_module('apex.parallel')
70 apex_optimizer = importlib.import_module('apex.optimizers')
71 # convert_syncbn = parallel.convert_syncbn_model
72 # create_syncbn_process_group = parallel.create_syncbn_process_group
73 LARC = parallel.LARC
74 FusedLAMB = apex_optimizer.FusedLAMB
75 FusedAdam = apex_optimizer.FusedAdam
76 FusedNovoGrad = apex_optimizer.FusedNovoGrad
77
78 except ImportError:
79 raise ImportError(
80                     "NVIDIA Apex is necessary for distributed training and "
81                     "mixed precision training. It only works on GPUs. "
82                     "Please install Apex from "
83 "https://www.github.com/nvidia/apex"
84 )
85
86 super(PtActions, self).__init__(
87 local_rank=local_rank, global_rank=global_rank, optimization_level=optimization_level,
88 )
89
90 # will be [unique_instance_id -> (NMModule, PTModule)]
91 self.module_reference_table = {}
92 self.step = 0
93 self.epoch_num = 0
94 self.optimizers = []
95 self.tb_writer = tb_writer
96 self._modules = set()
97 self.cache = None
98 self.amp_initialized = False
99
100 @property
101 def modules(self):
102 return self._modules
103
104 def __get_top_sorted_modules_and_dataloader(self, hook):
105 """
106 Constructs DAG leading to hook and creates its topological order.
107 It also populates self.module_reference_table.
108 Args:
109 hook: an NmTensor or a list of NmTensors representing leaf nodes
110 in DAG
111
112 Returns:
113 list of modules with their call arguments and outputs, and dataset
114 """
115
116 def create_node(producer, producer_args):
117 if producer_args is None:
118 return tuple((producer, ()))
119 else:
120 return tuple((producer, tuple([(k, v) for k, v in producer_args.items()]),))
121
122 def is_in_degree_zero(node, processed_nodes):
123             """Returns True if the node has an in-degree of zero."""
124 if node[1] == ():
125 return True
126 for portname, nmtensor in node[1]:
127 nd = create_node(nmtensor.producer, nmtensor.producer_args)
128 if nd not in processed_nodes:
129 return False
130 return True
131
132 hooks = hook if isinstance(hook, list) else [hook]
133
134 # ensures that no tensors are processed twice
135 processed_nmtensors = set()
136
137 indices_to_remove = []
138 # Check for duplicates in hook
139         for i, nmtensor in enumerate(hooks):
140 if nmtensor in processed_nmtensors:
141 indices_to_remove.append(i)
142 else:
143 processed_nmtensors.add(nmtensor)
144
145 for i in reversed(indices_to_remove):
146             hooks.pop(i)
147
148 _top_sorted_modules = []
149 all_nodes = {}
150
151 # extract all nodes to all_nodes set
152 hooks_lst = list(hooks)
153 while len(hooks_lst) > 0:
154 # take nmtensor from the end of the list
155 nmtensor = hooks_lst.pop()
156 node = create_node(nmtensor.producer, nmtensor.producer_args)
157 # Store nmtensor as an output of its producer
158 # first make sure all keys are present per output port
159 # and nm is inside all_nodes
160 if node not in all_nodes:
161 all_nodes[node] = {k: None for k in nmtensor.producer.output_ports}
162 # second, populate output port with current nmtensor
163 # where applicable
164 all_nodes[node][nmtensor.name] = nmtensor
165 processed_nmtensors.add(nmtensor)
166 if nmtensor.producer_args is not None and nmtensor.producer_args != {}:
167 for _, new_nmtensor in nmtensor.producer_args.items():
168 if new_nmtensor not in processed_nmtensors:
169 # put in the start of list
170 hooks_lst.insert(0, new_nmtensor)
171
172 all_node_with_output = []
173 # Iterate over all_nodes to create new nodes that include its output
174 # now all nodes have (module, input tensors, output tensors)
175 for node in all_nodes:
176 all_node_with_output.append(tuple((node[0], node[1], all_nodes[node])))
177
178 processed_nodes = []
179 while len(all_node_with_output) > 0:
180 for node in all_node_with_output.copy():
181 # if node's in_degree is zero it can be added to
182 # _top_sorted_modules
183 # this will also reduce in_degree of its children
184 if is_in_degree_zero(node, processed_nodes):
185 _top_sorted_modules.append(node)
186 processed_nodes.append((node[0], node[1]))
187 all_node_with_output.remove(node)
188
189 # Create top_sorted_modules aka callchain
190 top_sorted_modules = []
191 for i, m in enumerate(_top_sorted_modules):
192 top_sorted_modules.append((m[0], dict(m[1]), m[2]))
193 # Ensure that there is only one dataset in callchain
194 if i > 0 and isinstance(m[0], DataLayerNM):
195 raise ValueError("There were more than one DataLayer NeuralModule inside your DAG.")
196
197 if not isinstance(top_sorted_modules[0][0], DataLayerNM):
198 raise ValueError("The first module in your DAG was not a DataLayer NeuralModule.")
199
200 tdataset = top_sorted_modules[0][0].dataset
201
202 # populate self.module_reference_table
203 for m in top_sorted_modules:
204 if m[0].factory is None and self._local_rank is not None:
205 raise ValueError(
206 "Neural module {0} was created without "
207                     "NeuralModuleFactory, but you are trying to "
208                     "run in distributed mode. Please instantiate "
209                     "NeuralModuleFactory first and pass its "
210                     "instance as `factory` parameter to all your "
211 "Neural Module objects."
212 "".format(str(m[0]))
213 )
214 key = m[0].unique_instance_id
215 if key not in self.module_reference_table:
216 if isinstance(m[0], TrainableNeuralModuleWrapper):
217 self.module_reference_table[key] = (m[0], m[0]._pt_module)
218 else:
219 self.module_reference_table[key] = (m[0], m[0])
220
221 return top_sorted_modules, tdataset
222
223 def create_optimizer(self, optimizer, things_to_optimize, optimizer_params=None):
224 """
225 Wrapper function around __setup_optimizer()
226
227 Args:
228 optimizer : A instantiated PyTorch optimizer or string. For
229 currently supported strings, see __setup_optimizer().
230 things_to_optimize (list): Must be a list of Neural Modules and/or
231 parameters. If a Neural Module is passed, all trainable
232 parameters are extracted and passed to the optimizer.
233 optimizer_params (dict): Optional parameters dictionary.
234
235 Returns:
236 Optimizer
237 """
238
239 optimizer_instance = None
240 optimizer_class = None
241 if isinstance(optimizer, str):
242 optimizer_class = optimizer
243 elif isinstance(optimizer, torch.optim.Optimizer):
244 optimizer_instance = optimizer
245 else:
246 raise ValueError("`optimizer` must be a string or an instance of torch.optim.Optimizer")
247
248 modules_to_optimize = []
249 tensors_to_optimize = []
250 if not isinstance(things_to_optimize, list):
251 things_to_optimize = [things_to_optimize]
252 for thing in things_to_optimize:
253 if isinstance(thing, NeuralModule):
254 modules_to_optimize.append(thing)
255 elif isinstance(thing, NmTensor):
256 tensors_to_optimize.append(thing)
257 else:
258 raise ValueError(
259                         "{} passed to create_optimizer() was neither a neural module nor a neural module tensor".format(thing)
260 )
261
262 if tensors_to_optimize:
263 call_chain, _ = self.__get_top_sorted_modules_and_dataloader(tensors_to_optimize)
264
265 for module in call_chain:
266 if module[0] not in modules_to_optimize:
267 modules_to_optimize.append(module[0])
268
269 # Extract trainable weights which will be optimized
270 params_list = [p.parameters() for p in modules_to_optimize if isinstance(p, TrainableNM) or p.is_trainable()]
271 params_to_optimize = itertools.chain(*params_list)
272
273 if optimizer_params is None:
274 optimizer_params = {}
275 # Init amp
276 optimizer = self.__setup_optimizer(
277 optimizer_instance=optimizer_instance,
278 optimizer_class=optimizer_class,
279 optimization_params=optimizer_params,
280 params_to_optimize=params_to_optimize,
281 )
282
283 self.optimizers.append(optimizer)
284 return optimizer
285
286 @staticmethod
287 def __setup_optimizer(
288 optimizer_instance, optimizer_class, optimization_params, params_to_optimize,
289 ):
290
291 if optimizer_instance is None:
292 # Setup optimizer instance, by default it is SGD
293 lr = optimization_params["lr"]
294 if optimizer_class.lower() == "sgd":
295 optimizer = optim.SGD(
296 params_to_optimize,
297 lr=lr,
298 momentum=optimization_params.get("momentum", 0.9),
299 weight_decay=optimization_params.get("weight_decay", 0.0),
300 )
301 elif optimizer_class.lower() == "adam":
302 optimizer = optim.Adam(
303 params=params_to_optimize, lr=lr, betas=optimization_params.get("betas", (0.9, 0.999)),
304 )
305 elif optimizer_class.lower() == "fused_adam":
306 optimizer = FusedAdam(params=params_to_optimize, lr=lr)
307 elif optimizer_class.lower() == "adam_w":
308 optimizer = AdamW(
309 params=params_to_optimize,
310 lr=lr,
311 weight_decay=optimization_params.get("weight_decay", 0.0),
312 betas=optimization_params.get("betas", (0.9, 0.999)),
313 )
314 elif optimizer_class.lower() == "novograd":
315 optimizer = Novograd(
316 params_to_optimize,
317 lr=lr,
318 weight_decay=optimization_params.get("weight_decay", 0.0),
319 luc=optimization_params.get("luc", False),
320 luc_trust=optimization_params.get("luc_eta", 1e-3),
321 betas=optimization_params.get("betas", (0.95, 0.25)),
322 )
323 elif optimizer_class.lower() == "fused_novograd":
324 optimizer = FusedNovoGrad(
325 params_to_optimize,
326 lr=lr,
327 weight_decay=optimization_params.get("weight_decay", 0.0),
328 reg_inside_moment=True,
329 grad_averaging=False,
330 betas=optimization_params.get("betas", (0.95, 0.25)),
331 )
332 elif optimizer_class.lower() == "fused_lamb":
333 optimizer = FusedLAMB(params_to_optimize, lr=lr,)
334 else:
335 raise ValueError("Unknown optimizer class: {0}".format(optimizer_class))
336
337 if optimization_params.get("larc", False):
338 logging.info("Enabling larc")
339 optimizer = LARC(optimizer, trust_coefficient=optimization_params.get("larc_eta", 2e-2),)
340 else:
341             logging.info("Optimizer instance: {0} is provided.".format(optimizer_instance))
342 if optimizer_class is not None and optimizer_class != "":
343 logging.warning("Ignoring `optimizer_class` parameter because `optimizer_instance` is provided")
344 if optimization_params is not None and optimization_params != {}:
345 logging.warning(
346 "Ignoring `optimization_params` parameter for "
347 "optimizer because `optimizer_instance` is provided"
348 )
349 optimizer = optimizer_instance
350 return optimizer
351
352 def __initialize_amp(
353 self, optimizer, optim_level, amp_max_loss_scale=2.0 ** 24, amp_min_loss_scale=1.0,
354 ):
355 if optim_level not in AmpOptimizations:
356 raise ValueError(f"__initialize_amp() was called with unknown optim_level={optim_level}")
357 # in this case, nothing to do here
358 if optim_level == Optimization.mxprO0:
359 return optimizer
360
361 if len(self.modules) < 1:
362 raise ValueError("There were no modules to initialize")
363 pt_modules = []
364 for module in self.modules:
365 if isinstance(module, nn.Module):
366 pt_modules.append(module)
367 elif isinstance(module, TrainableNeuralModuleWrapper):
368 pt_modules.append(module._pt_module)
369
370 _, optimizer = amp.initialize(
371 max_loss_scale=amp_max_loss_scale,
372 min_loss_scale=amp_min_loss_scale,
373 models=pt_modules,
374 optimizers=optimizer,
375 opt_level=AmpOptimizations[optim_level],
376 )
377 self.amp_initialized = True
378 return optimizer
379
380 def __nm_graph_forward_pass(
381 self, call_chain, registered_tensors, mode=ModelMode.train, use_cache=False,
382 ):
383 for ind in range(1, len(call_chain)):
384 if use_cache:
385 in_cache = True
386 for tensor in call_chain[ind][2].values():
387 if tensor is None:
388 # NM has an output tensor that is not used in the
389 # current call chain, so we don't care if it's not in
390 # cache
391 continue
392 if tensor.unique_name not in registered_tensors:
393 in_cache = False
394 if in_cache:
395 continue
396 call_args = call_chain[ind][1]
397 # module = call_chain[ind][0]
398 m_id = call_chain[ind][0].unique_instance_id
399 pmodule = self.module_reference_table[m_id][1]
400
401 # if self._local_rank is not None:
402 # if isinstance(pmodule, DDP):
403 # if disable_allreduce:
404 # pmodule.disable_allreduce()
405 # else:
406 # pmodule.enable_allreduce()
407
408 if mode == ModelMode.train:
409 # if module.is_trainable():
410 if isinstance(pmodule, nn.Module):
411 pmodule.train()
412 elif mode == ModelMode.eval:
413 # if module.is_trainable():
414 if isinstance(pmodule, nn.Module):
415 pmodule.eval()
416 else:
417 raise ValueError("Unknown ModelMode")
418 # prepare call signature for `module`
419 call_set = {}
420 for tensor_name, nmtensor in call_args.items():
421 # _add_uuid_2_name(nmtensor.name, nmtensor.producer._uuid)
422 key = nmtensor.unique_name
423 call_set[tensor_name] = registered_tensors[key]
424 # actual PyTorch module call with signature
425 if isinstance(self.module_reference_table[m_id][0], TrainableNeuralModuleWrapper,):
426 new_tensors = pmodule(**call_set)
427 else:
428 new_tensors = pmodule(force_pt=True, **call_set)
429
430 if not isinstance(new_tensors, List):
431 if not isinstance(new_tensors, tuple):
432 new_tensors = [new_tensors]
433 else:
434 new_tensors = list(new_tensors)
435 for t_tensor, nm_tensor in zip(new_tensors, call_chain[ind][2].values()):
436 if nm_tensor is None:
437 continue
438 t_name = nm_tensor.unique_name
439 if t_name not in registered_tensors:
440 registered_tensors[t_name] = t_tensor
441 else:
442                     raise ValueError("An NMTensor was produced twice in " f"the same DAG. {t_name}")
443
444 @staticmethod
445     def pad_tensor(t: torch.Tensor, target_size: torch.Tensor):
446 padded_shape = target_size.cpu().data.numpy().tolist()
447 padded_t = torch.zeros(padded_shape).cuda().type_as(t)
448 t_size = t.size()
449 if len(t_size) == 0:
450 padded_t = t
451 elif len(t_size) == 1:
452 padded_t[: t_size[0]] = t
453 elif len(t_size) == 2:
454 padded_t[: t_size[0], : t_size[1]] = t
455 elif len(t_size) == 3:
456 padded_t[: t_size[0], : t_size[1], : t_size[2]] = t
457 elif len(t_size) == 4:
458 padded_t[: t_size[0], : t_size[1], : t_size[2], : t_size[3]] = t
459 else:
460 raise NotImplementedError
461 return padded_t
462
463 @staticmethod
464 def depad_tensor(t: torch.Tensor, original_size: torch.Size):
465 t_size = original_size
466 if len(t_size) == 0:
467 depadded_t = t
468 elif len(t_size) == 1:
469 depadded_t = t[: t_size[0]]
470 elif len(t_size) == 2:
471 depadded_t = t[: t_size[0], : t_size[1]]
472 elif len(t_size) == 3:
473 depadded_t = t[: t_size[0], : t_size[1], : t_size[2]]
474 elif len(t_size) == 4:
475 depadded_t = t[: t_size[0], : t_size[1], : t_size[2], : t_size[3]]
476 else:
477 raise NotImplementedError
478 return depadded_t
479
480 def _eval(self, tensors_2_evaluate, callback, step, verbose=False):
481 """
482 Evaluation process.
483 WARNING THIS function assumes that all tensors_2_evaluate are based
484 on a single datalayer
485 Args:
486 tensors_2_evaluate: list of NmTensors to evaluate
487 callback: instance of EvaluatorCallback
488 step: current training step, used for logging
489
490 Returns:
491 None
492 """
493 with torch.no_grad():
494 # each call chain corresponds to a tensor in tensors_2_evaluate
495 call_chain, _ = self.__get_top_sorted_modules_and_dataloader(hook=tensors_2_evaluate)
496 # "Retrieve" data layer from call chain.
497 dl_nm = call_chain[0][0]
498
499 # Prepare eval_dataloader
500 # For distributed training it should have disjoint subsets of
501 # all data on every worker
502 is_distributed = False
503 world_size = None
504 if dl_nm.placement == DeviceType.AllGpu:
505 assert dist.is_initialized()
506 is_distributed = True
507 world_size = torch.distributed.get_world_size()
508 # logging.info(
509 # "Doing distributed evaluation. Rank {0} of {1}".format(
510 # self.local_rank, world_size
511 # )
512 # )
513 if dl_nm.dataset is not None:
514 sampler = torch.utils.data.distributed.DistributedSampler(
515 dataset=dl_nm.dataset, shuffle=dl_nm.shuffle
516 )
517 eval_dataloader = torch.utils.data.DataLoader(
518 dataset=dl_nm.dataset,
519 sampler=sampler,
520 num_workers=dl_nm.num_workers,
521 batch_size=dl_nm.batch_size,
522 shuffle=False,
523 )
524 else:
525 eval_dataloader = dl_nm.data_iterator
526
527 if hasattr(eval_dataloader, 'sampler'):
528 eval_dataloader.sampler.set_epoch(0)
529 else: # Not distributed
530 if dl_nm.dataset is not None:
531 # Todo: remove local_parameters
532 eval_dataloader = torch.utils.data.DataLoader(
533 dataset=dl_nm.dataset,
534 sampler=None, # not distributed sampler
535 num_workers=dl_nm.num_workers,
536 batch_size=dl_nm.batch_size,
537 shuffle=dl_nm.shuffle,
538 )
539 else:
540 eval_dataloader = dl_nm.data_iterator
541 # after this eval_dataloader is ready to be used
542 # reset global_var_dict - results of evaluation will be stored
543 # there
544
545 callback.clear_global_var_dict()
546 dl_device = dl_nm._device
547
548 # Evaluation mini-batch for loop
549 num_batches = None
550 if hasattr(eval_dataloader, "__len__"):
551 num_batches = len(eval_dataloader)
552 for epoch_i, data in enumerate(eval_dataloader, 0):
553 if (
554 verbose
555 and num_batches is not None
556 and (num_batches < 10 or (epoch_i % int(num_batches / 10) == 0))
557 ):
558 logging.info(f"Evaluating batch {epoch_i} out of {num_batches}")
559 tensors = []
560 if isinstance(data, torch.Tensor):
561 data = (data,)
562 for d in data:
563 if isinstance(d, torch.Tensor):
564 tensors.append(d.to(dl_device))
565 else:
566 tensors.append(d)
567
568 registered_e_tensors = {
569 t.unique_name: d for t, d in zip(call_chain[0][2].values(), tensors) if t is not None
570 }
571 self.__nm_graph_forward_pass(
572 call_chain=call_chain, registered_tensors=registered_e_tensors, mode=ModelMode.eval,
573 )
574
575 if not is_distributed or self.global_rank == 0:
576 values_dict = {}
577 # If distributed. For the outer loop, we need to ensure that
578 # all processes loop through the elements in the same order
579 for t2e in tensors_2_evaluate:
580 key = t2e.unique_name
581 if key not in registered_e_tensors.keys():
582 logging.info("WARNING: Tensor {} was not found during eval".format(key))
583 continue
584 if is_distributed:
585 # where we will all_gather results from all workers
586 tensors_list = []
587 # where we will all_gather tensor sizes
588 tensor_on_worker = registered_e_tensors[key]
589 if tensor_on_worker.shape != torch.Size([]):
590 tensor_on_worker_size_as_tensor = torch.tensor(tensor_on_worker.shape).cuda()
591 sizes = []
592 for ind in range(world_size):
593 sizes.append(torch.empty_like(tensor_on_worker_size_as_tensor))
594 dist.all_gather(sizes, tensor_on_worker_size_as_tensor)
595 mx_dim, _ = torch.max(torch.stack(sizes), dim=0)
596 else: # this is a singleton. For example, loss value
597 sizes = [torch.Size([])] * world_size
598 mx_dim = None
599 for ind in range(world_size):
600 # we have to use max shape for all_gather
601 if mx_dim is None: # singletons
602 tensors_list.append(torch.tensor(2).cuda().type_as(tensor_on_worker))
603 else: # non-singletons
604 tensors_list.append(
605 torch.empty(mx_dim.cpu().data.numpy().tolist()).cuda().type_as(tensor_on_worker)
606 )
607
608 if mx_dim is not None:
609 t_to_send = self.pad_tensor(tensor_on_worker, mx_dim)
610 else:
611 t_to_send = tensor_on_worker
612 dist.all_gather(tensors_list, t_to_send)
613 tensors_list = [self.depad_tensor(t, size) for t, size in zip(tensors_list, sizes)]
614 if self.global_rank == 0:
615 values_dict["IS_FROM_DIST_EVAL"] = True
616 values_dict[key] = tensors_list
617 else: # NON-DISTRIBUTED TRAINING
618 values_dict["IS_FROM_DIST_EVAL"] = False
619 values_dict[key] = [registered_e_tensors[key]]
620 if callback.user_iter_callback and (self.global_rank is None or self.global_rank == 0):
621 # values_dict will contain results from all workers
622 callback.user_iter_callback(values_dict, callback._global_var_dict)
623
624 # final aggregation (over minibatches) and logging of results
625             # should happen only on one worker
626 if callback.user_done_callback and (self.global_rank is None or self.global_rank == 0):
627 vals_to_log = callback.user_done_callback(callback._global_var_dict)
628 # log results to Tensorboard or Weights & Biases
629 if vals_to_log is not None:
630 if hasattr(callback, 'swriter') and callback.swriter is not None:
631 if hasattr(callback, 'tb_writer_func') and callback.tb_writer_func is not None:
632 callback.tb_writer_func(callback.swriter, vals_to_log, step)
633 else:
634 for key, val in vals_to_log.items():
635 callback.swriter.add_scalar(key, val, step)
636 if hasattr(callback, 'wandb_log'):
637 callback.wandb_log(vals_to_log)
638
639 def _infer(
640 self, tensors_to_return, verbose=False, cache=False, use_cache=False, offload_to_cpu=True,
641 ):
642 """
643 Does the same as _eval() just with tensors instead of eval callback.
644 """
645 # Checking that cache is used properly
646 if cache and use_cache:
647 raise ValueError(
648 "cache and use_cache were both set. However cache must first be created prior to using it."
649 )
650 if cache:
651 if self.cache is not None:
652 raise ValueError("cache was set but was not empty")
653 self.cache = []
654 if use_cache:
655 if not self.cache:
656 raise ValueError("use_cache was set, but cache was empty")
657
658 with torch.no_grad():
659 # each call chain corresponds to a tensor in tensors_2_evaluate
660 dl_nm = None
661 call_chain, _ = self.__get_top_sorted_modules_and_dataloader(hook=tensors_to_return)
662 dl_nm = call_chain[0][0]
663
664 # Prepare eval_dataloader
665 # For distributed training it should have disjoint subsets of
666 # all data on every worker
667 is_distributed = False
668 world_size = None
669 if dl_nm.placement == DeviceType.AllGpu:
670 if self.cache or use_cache:
671 raise NotImplementedError("Caching is not available for distributed training.")
672 assert dist.is_initialized()
673 is_distributed = True
674 world_size = torch.distributed.get_world_size()
675 # logging.info(
676 # "Doing distributed evaluation. Rank {0} of {1}".format(
677 # self.local_rank, world_size
678 # )
679 # )
680 if dl_nm.dataset is not None:
681 sampler = torch.utils.data.distributed.DistributedSampler(
682 dataset=dl_nm.dataset, shuffle=dl_nm.shuffle
683 )
684 eval_dataloader = torch.utils.data.DataLoader(
685 dataset=dl_nm.dataset,
686 sampler=sampler,
687 num_workers=dl_nm.num_workers,
688 batch_size=dl_nm.batch_size,
689 shuffle=False,
690 )
691 else:
692 eval_dataloader = dl_nm.data_iterator
693                 if hasattr(eval_dataloader, 'sampler'): eval_dataloader.sampler.set_epoch(0)
694 elif not use_cache: # Not distributed and not using cache
695 # Dataloaders are only used if use_cache is False
696 # When caching, the DAG must cache all outputs from dataloader
697 if dl_nm.dataset is not None:
698 # Todo: remove local_parameters
699 eval_dataloader = torch.utils.data.DataLoader(
700 dataset=dl_nm.dataset,
701 sampler=None, # not distributed sampler
702 num_workers=dl_nm.num_workers,
703 batch_size=dl_nm.batch_size,
704 shuffle=dl_nm.shuffle,
705 )
706 else:
707 eval_dataloader = dl_nm.data_iterator
708 # after this eval_dataloader is ready to be used
709 # reset global_var_dict - results of evaluation will be stored
710 # there
711
712 if not is_distributed or self.global_rank == 0:
713 values_dict = {}
714 for t in tensors_to_return:
715 values_dict[t.unique_name] = []
716 dl_device = dl_nm._device
717
718 # Evaluation mini-batch for loop
719 if use_cache:
720 num_batches = len(self.cache)
721 loop_iterator = self.cache
722 else:
723 num_batches = len(eval_dataloader)
724 loop_iterator = eval_dataloader
725
726 for epoch_i, data in enumerate(loop_iterator, 0):
727 logging.debug(torch.cuda.memory_allocated())
728 if verbose and (num_batches < 10 or (epoch_i % int(num_batches / 10) == 0)):
729 logging.info(f"Evaluating batch {epoch_i} out of {num_batches}")
730 tensors = []
731 if use_cache:
732 registered_e_tensors = data
733 # delete tensors_to_return
734 for t in tensors_to_return:
735 if t.unique_name in registered_e_tensors:
736 del registered_e_tensors[t.unique_name]
737 # Need to check for device type mismatch
738 for t in registered_e_tensors:
739 registered_e_tensors[t].to(dl_device)
740 else:
741 if isinstance(data, torch.Tensor):
742 data = (data,)
743 for d in data:
744 if isinstance(d, torch.Tensor):
745 tensors.append(d.to(dl_device))
746 else:
747 tensors.append(d)
748
749 registered_e_tensors = {
750 t.unique_name: d for t, d in zip(call_chain[0][2].values(), tensors) if t is not None
751 }
752 self.__nm_graph_forward_pass(
753 call_chain=call_chain,
754 registered_tensors=registered_e_tensors,
755 mode=ModelMode.eval,
756 use_cache=use_cache,
757 )
758
759 # if offload_to_cpu:
760 # # Take all cuda tensors and save them to value_dict as
761 # # cpu tensors to save GPU memory
762 # for name, tensor in registered_e_tensors.items():
763 # if isinstance(tensor, torch.Tensor):
764 # registered_e_tensors[name] = tensor.cpu()
765 if cache:
766 self.append_to_cache(registered_e_tensors, offload_to_cpu)
767
768 # If distributed. For the outer loop, we need to ensure that
769 # all processes loop through the elements in the same order
770 for t2e in tensors_to_return:
771 key = t2e.unique_name
772 if key not in registered_e_tensors.keys():
773 logging.info("WARNING: Tensor {} was not found during eval".format(key))
774 continue
775 if is_distributed:
776 # where we will all_gather results from all workers
777 tensors_list = []
778 # where we will all_gather tensor sizes
779 tensor_on_worker = registered_e_tensors[key]
780 if tensor_on_worker.shape != torch.Size([]):
781 tensor_on_worker_size_as_tensor = torch.tensor(tensor_on_worker.shape).cuda()
782 sizes = []
783 for ind in range(world_size):
784 sizes.append(torch.empty_like(tensor_on_worker_size_as_tensor))
785 dist.all_gather(sizes, tensor_on_worker_size_as_tensor)
786 mx_dim, _ = torch.max(torch.stack(sizes), dim=0)
787 else: # this is a singleton. For example, loss value
788 sizes = [torch.Size([])] * world_size
789 mx_dim = None
790 for ind in range(world_size):
791 # we have to use max shape for all_gather
792 if mx_dim is None: # singletons
793 tensors_list.append(torch.tensor(2).cuda().type_as(tensor_on_worker))
794 else: # non-singletons
795 tensors_list.append(
796 torch.empty(mx_dim.cpu().data.numpy().tolist()).cuda().type_as(tensor_on_worker)
797 )
798
799 if mx_dim is not None:
800 t_to_send = self.pad_tensor(tensor_on_worker, mx_dim)
801 else:
802 t_to_send = tensor_on_worker
803 dist.all_gather(tensors_list, t_to_send)
804 tensors_list = [self.depad_tensor(t, size) for t, size in zip(tensors_list, sizes)]
805 if offload_to_cpu:
806 tensors_list = [t.cpu() for t in tensors_list]
807 if self.global_rank == 0:
808 values_dict[key] += tensors_list
809 else: # NON-DISTRIBUTED TRAINING
810 tensor = registered_e_tensors[key]
811 if offload_to_cpu and isinstance(tensor, torch.Tensor):
812 tensor = tensor.cpu()
813 values_dict[key] += [tensor]
814
815 if not is_distributed or self.global_rank == 0:
816 inferred_tensors = []
817 for t in tensors_to_return:
818 inferred_tensors.append(values_dict[t.unique_name])
819 return inferred_tensors
820
821 # For all other ranks
822 return None
823
824 def append_to_cache(self, registered_tensors: dict, offload_to_cpu):
825         """Simple helper function to add results of __nm_graph_forward_pass to
826 current cache.
827 """
828 if offload_to_cpu:
829 for t in registered_tensors:
830 registered_tensors[t] = registered_tensors[t].cpu()
831 self.cache.append(registered_tensors)
832
833 def clear_cache(self):
834         """Simple helper function to clear the cache by setting self.cache to
835 None
836 """
837 self.cache = None
838
839 def save_state_to(self, path: str):
840 """
841 Saves current state such as step, epoch and optimizer parameters.
842 
843 Args:
844 path (str): path of the checkpoint file to write.
845 
846 Returns: None
847 """
848 state = {
849 "step": self.step,
850 "epoch_num": self.epoch_num,
851 "optimizer_state": [opt.state_dict() for opt in self.optimizers],
852 }
853 torch.save(state, path)
854
855 def restore_state_from(self, path: str):
856 """
857 Restores state such as step, epoch and optimizer parameters.
858 
859 Args:
860 path (str): path of the checkpoint file to read.
861 
862 Returns: None
863 """
864 if os.path.isfile(path):
865 # map_location could be cuda:<device_id> but cpu seems to be more
866 # general since we are also saving step and epoch_num
867 # load_state_dict should move the variables to the relevant device
868 checkpoint = torch.load(path, map_location="cpu")
869 self.step = checkpoint["step"]
870 self.epoch_num = checkpoint["epoch_num"]
871 if checkpoint["optimizer_state"]:
872 for opt, opt_chkpt in zip(self.optimizers, checkpoint["optimizer_state"]):
873 opt.load_state_dict(opt_chkpt)
874 else:
875 raise FileNotFoundError("Could not find checkpoint file: {0}".format(path))
876
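`save_state_to` and `restore_state_from` persist a small dict of step, epoch and optimizer state, guarding the restore with an `os.path.isfile` check. The same round-trip pattern, sketched with stdlib `pickle` standing in for `torch.save`/`torch.load` (the file name and state values below are hypothetical):

```python
import os
import pickle
import tempfile

state = {"step": 120, "epoch_num": 3, "optimizer_state": [{"lr": 1e-3}]}

path = os.path.join(tempfile.mkdtemp(), "trainer-state.pkl")
with open(path, "wb") as f:       # analogue of torch.save(state, path)
    pickle.dump(state, f)

if os.path.isfile(path):          # same guard as restore_state_from
    with open(path, "rb") as f:   # analogue of torch.load(path, map_location="cpu")
        restored = pickle.load(f)
else:
    raise FileNotFoundError("Could not find checkpoint file: {0}".format(path))
```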
877 @staticmethod
878 def _check_all_tensors(list_of_tensors):
879 """Method that checks if the passed list contains only NmTensors.
880 """
881 if not isinstance(list_of_tensors, list):
882 return False
883 for tensor in list_of_tensors:
884 if not isinstance(tensor, NmTensor):
885 return False
886 return True
887
888 @staticmethod
889 def _check_tuples(list_of_tuples):
890 """Method that checks that each tuple in the passed list contains an
891 optimizer in the first element and a list of NmTensors in the second.
892 """
893 for tup in list_of_tuples:
894 if not (isinstance(tup[0], torch.optim.Optimizer) and PtActions._check_all_tensors(tup[1])):
895 return False
896 return True
897
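Both validators above are flat `isinstance` scans: `_check_all_tensors` accepts only a list whose every element is an `NmTensor`, and `_check_tuples` additionally checks the optimizer slot of each tuple. A generic stdlib sketch of the same shape check (the helper name is hypothetical):

```python
def check_all_of_type(items, cls):
    # Mirrors _check_all_tensors: reject non-lists, then scan every element.
    if not isinstance(items, list):
        return False
    return all(isinstance(item, cls) for item in items)
```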
898 def _get_all_modules(self, training_loop, callbacks, logging_callchain=None):
899 """Gets all neural modules that will be used by train() and eval() via
900 EvaluatorCallbacks. Saves all modules to self.modules
901 """
902 # If there is a SimpleLossLoggerCallback, create a logging_callchain
903 # with all callchains from training_loop and
904 # SimpleLossLoggerCallback.tensors
905 if logging_callchain:
906 for module in logging_callchain:
907 self.modules.add(module[0])
908
909 # Else grab all callchains from training_loop
910 else:
911 for step in training_loop:
912 for module in step[2]:
913 self.modules.add(module[0])
914
915 # Lastly, grab all eval modules
916 if callbacks is not None:
917 for callback in callbacks:
918 if isinstance(callback, EvaluatorCallback):
919 (callchain, _,) = self.__get_top_sorted_modules_and_dataloader(hook=callback.eval_tensors)
920 for module in callchain:
921 self.modules.add(module[0])
922
923 @staticmethod
924 def __module_export(module, output, d_format: DeploymentFormat, input_example=None, output_example=None):
925 # Check if output already exists
926 destination = Path(output)
927 if destination.exists():
928 raise FileExistsError(f"Destination {output} already exists. " f"Aborting export.")
929
930 input_names = list(module.input_ports.keys())
931 output_names = list(module.output_ports.keys())
932 dynamic_axes = defaultdict(list)
933
934 def __extract_dynamic_axes(port_name: str, ntype: NeuralType, dynamic_axes: defaultdict):
935 if ntype.axes:
936 for ind, axis in enumerate(ntype.axes):
937 if axis.kind == AxisKind.Batch or axis.kind == AxisKind.Time:
938 dynamic_axes[port_name].append(ind)
939
940 # This is a hack for Jasper to Jarvis export -- needs a re-design
941 inputs_to_drop = set()
942 outputs_to_drop = set()
943 if type(module).__name__ == "JasperEncoder":
944 logging.info(
945 "Module is JasperEncoder. We are removing input and output length ports since they are not needed for "
946 "deployment"
947 )
948 inputs_to_drop.add("length")
949 outputs_to_drop.add("encoded_lengths")
950
951 # for input_ports
952 for port_name, ntype in module.input_ports.items():
953 if port_name in inputs_to_drop:
954 input_names.remove(port_name)
955 continue
956 __extract_dynamic_axes(port_name, ntype, dynamic_axes)
957 # for output_ports
958 for port_name, ntype in module.output_ports.items():
959 if port_name in outputs_to_drop:
960 output_names.remove(port_name)
961 continue
962 __extract_dynamic_axes(port_name, ntype, dynamic_axes)
963
964 if len(dynamic_axes) == 0:
965 dynamic_axes = None
966
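`__extract_dynamic_axes` walks each port's axes and records the indices whose kind is Batch or Time, so ONNX export treats those dimensions as variable-sized. A stdlib sketch of the same bookkeeping (the port metadata below is hypothetical, not taken from a real module):

```python
from collections import defaultdict

# Hypothetical port metadata: port name -> ordered axis kinds.
ports = {
    "audio_signal": ["Batch", "Dimension", "Time"],
    "length": ["Batch"],
}

dynamic_axes = defaultdict(list)
for port_name, axes in ports.items():
    for ind, kind in enumerate(axes):
        if kind in ("Batch", "Time"):  # only these vary at inference time
            dynamic_axes[port_name].append(ind)

# Same normalization as the export code: no dynamic axes -> None.
dynamic_axes = dict(dynamic_axes) or None
```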
967 # Make a deep copy of init parameters.
968 init_params_copy = copy.deepcopy(module._init_params)
969
970 # Remove NeMo-related things from the module
971 # We need to change __call__ method. Note that this will change the
972 # whole class, not just this object! Which is why we need to repair it
973 # in the finally block
974 type(module).__call__ = torch.nn.Module.__call__
975
976 # Reset standard instance field - making the file (probably) lighter.
977 module._init_params = None
978 module._placement = None
979 module._factory = None
980 module._device = None
981
982 module.eval()
983 try:
984 if d_format == DeploymentFormat.TORCHSCRIPT:
985 if input_example is None:
986 # Route 1 - via torch.jit.script
987 traced_m = torch.jit.script(module)
988 traced_m.save(output)
989 else:
990 # Route 2 - via tracing
991 traced_m = torch.jit.trace(module, input_example)
992 traced_m.save(output)
993 elif d_format == DeploymentFormat.ONNX or d_format == DeploymentFormat.TRTONNX:
994 if input_example is None:
995 raise ValueError('Example input is None, but ONNX tracing was attempted')
996 if output_example is None:
997 if isinstance(input_example, tuple):
998 output_example = module.forward(*input_example)
999 else:
1000 output_example = module.forward(input_example)
1001 with torch.jit.optimized_execution(True):
1002 jitted_model = torch.jit.trace(module, input_example)
1003
1004 torch.onnx.export(
1005 jitted_model,
1006 input_example,
1007 output,
1008 input_names=input_names,
1009 output_names=output_names,
1010 verbose=False,
1011 export_params=True,
1012 do_constant_folding=True,
1013 dynamic_axes=dynamic_axes,
1014 opset_version=11,
1015 example_outputs=output_example,
1016 )
1017 # fn = output + ".readable"
1018 # with open(fn, 'w') as f:
1019 # tempModel = onnx.load(output)
1020 # onnx.save(tempModel, output + ".copy")
1021 # onnx.checker.check_model(tempModel)
1022 # pgraph = onnx.helper.printable_graph(tempModel.graph)
1023 # f.write(pgraph)
1024
1025 elif d_format == DeploymentFormat.PYTORCH:
1026 torch.save(module.state_dict(), output)
1027 with open(output + ".json", 'w') as outfile:
1028 json.dump(init_params_copy, outfile)
1029
1030 else:
1031 raise NotImplementedError(f"Not supported deployment format: {d_format}")
1032 except Exception as e: # nopep8
1033 logging.error(f'Module export failed for {module} with exception {e}')
1034 finally:
1035
1036 def __old_call__(self, force_pt=False, *input, **kwargs):
1037 pt_call = len(input) > 0 or force_pt
1038 if pt_call:
1039 return nn.Module.__call__(self, *input, **kwargs)
1040 else:
1041 return NeuralModule.__call__(self, **kwargs)
1042
1043 type(module).__call__ = __old_call__
1044
1045 @staticmethod
1046 def deployment_export(module, output: str, d_format: DeploymentFormat, input_example=None, output_example=None):
1047 """Exports Neural Module instance for deployment.
1048 
1049 Args:
1050 module: neural module to export
1051 output (str): where export results should be saved
1052 d_format (DeploymentFormat): which deployment format to use
1053 input_example: sometimes tracing will require input examples
1054 (for the TORCHSCRIPT and ONNX formats)
1055 output_example: should match the result of inference on
1056 input_example
1057 """
1058
1059 with torch.no_grad():
1060 PtActions.__module_export(
1061 module=module,
1062 output=output,
1063 d_format=d_format,
1064 input_example=input_example,
1065 output_example=output_example,
1066 )
1067
1068 def train(
1069 self,
1070 tensors_to_optimize,
1071 optimizer=None,
1072 optimization_params=None,
1073 callbacks: Optional[List[ActionCallback]] = None,
1074 lr_policy=None,
1075 batches_per_step=None,
1076 stop_on_nan_loss=False,
1077 synced_batchnorm=False,
1078 synced_batchnorm_groupsize=0,
1079 gradient_predivide=False,
1080 amp_max_loss_scale=2.0 ** 24,
1081 ):
1082 if gradient_predivide:
1083 logging.error(
1084 "gradient_predivide is currently disabled, and is under consideration for removal in future versions. "
1085 "If this functionality is needed, please raise a github issue."
1086 )
1087 if not optimization_params:
1088 optimization_params = {}
1089 num_epochs = optimization_params.get("num_epochs", None)
1090 max_steps = optimization_params.get("max_steps", None)
1091 if num_epochs is None and max_steps is None:
1092 raise ValueError("You must specify either max_steps or num_epochs")
1093 grad_norm_clip = optimization_params.get('grad_norm_clip', None)
1094
1095 if batches_per_step is None:
1096 batches_per_step = 1
1097 # this is necessary because we average gradients over batch
1098 bps_scale = torch.FloatTensor([1.0 / batches_per_step]).squeeze()
1099
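`bps_scale` pre-divides every micro-batch's backward pass by `batches_per_step`, so after accumulation the summed gradient equals the average over the whole step. The arithmetic, sketched with hypothetical gradient values:

```python
# Gradient accumulation: each micro-batch backward is scaled by
# 1 / batches_per_step (the role bps_scale plays in the training loop).
batches_per_step = 4
micro_batch_grads = [2.0, 4.0, 6.0, 8.0]   # hypothetical per-batch gradients

accumulated = 0.0
for g in micro_batch_grads:
    accumulated += g * (1.0 / batches_per_step)

average = sum(micro_batch_grads) / len(micro_batch_grads)
```

The optimizer then steps once per `batches_per_step` backward passes, seeing the averaged gradient.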
1100 if tensors_to_optimize is None:
1101 # This is Evaluation Mode
1102 self._init_callbacks(callbacks)
1103 # Do action start callbacks
1104 self._perform_on_action_end(callbacks=callbacks)
1105 return
1106 # Check if tensors_to_optimize is just a list of NmTensors
1107 elif tensors_to_optimize is not None and (
1108 isinstance(tensors_to_optimize[0], NmTensor) and PtActions._check_all_tensors(tensors_to_optimize)
1109 ):
1110 # Parse graph into a topologically sorted sequence of neural
1111 # modules' calls
1112 (opt_call_chain, t_dataset,) = self.__get_top_sorted_modules_and_dataloader(hook=tensors_to_optimize)
1113 # Extract trainable weights which will be optimized
1114 params_list = [
1115 p[0].parameters() for p in opt_call_chain if isinstance(p[0], TrainableNM) or p[0].is_trainable()
1116 ]
1117 params_to_optimize = itertools.chain(*params_list)
1118
1119 # Setup optimizer instance. By default it is SGD
1120 optimizer_instance = None
1121 optimizer_class = None
1122 if isinstance(optimizer, str):
1123 optimizer_class = optimizer
1124 elif isinstance(optimizer, torch.optim.Optimizer):
1125 optimizer_instance = optimizer
1126 else:
1127 raise ValueError("optimizer was not understood")
1128 optimizer = self.__setup_optimizer(
1129 optimizer_instance=optimizer_instance,
1130 optimizer_class=optimizer_class,
1131 optimization_params=optimization_params,
1132 params_to_optimize=params_to_optimize,
1133 )
1134
1135 training_loop = [(optimizer, tensors_to_optimize, opt_call_chain)]
1136
1137 self.optimizers.append(optimizer)
1138 assert (
1139 len(self.optimizers) == 1
1140 ), "There was more than one optimizer, was create_optimizer() called before train()?"
1141
1142 elif PtActions._check_tuples(tensors_to_optimize):
1143 if batches_per_step != 1:
1144 raise ValueError("Gradient accumulation with multiple optimizers is not supported")
1145 datasets = []
1146 training_loop = []
1147 for step in tensors_to_optimize:
1148 (step_call_chain, dataset,) = self.__get_top_sorted_modules_and_dataloader(hook=step[1])
1149 datasets.append(dataset)
1150 training_loop.append((step[0], step[1], step_call_chain))
1151
1152 t_dataset = datasets[0]
1153 for dataset in datasets:
1154 if type(dataset) is not type(t_dataset):
1155 raise ValueError("Multiple different training datasets were found; only one is supported.")
1156 else:
1157 raise ValueError("tensors_to_optimize was not understood")
1158
1159 logging_callchain = None
1160 # callbacks setup
1161 if callbacks is not None:
1162 for callback in callbacks:
1163 if not isinstance(callback, ActionCallback):
1164 raise ValueError("A callback was received that was not a child of ActionCallback")
1165 elif isinstance(callback, SimpleLossLoggerCallback):
1166 if logging_callchain:
1167 raise ValueError("We only support one logger callback but more than one were found")
1168 logger_step_freq = callback._step_freq
1169 logging_tensors = callback.tensors
1170 all_tensors = logging_tensors
1171 for step in training_loop:
1172 all_tensors = all_tensors + step[1]
1173 (logging_callchain, _,) = self.__get_top_sorted_modules_and_dataloader(hook=all_tensors)
1174
1175 self._get_all_modules(training_loop, callbacks, logging_callchain)
1176
1177 # Initialize Amp if needed
1178 if self._optim_level in AmpOptimizations:
1179 # Store mapping of self.optimizers to optimizer in callchain
1180 training_loop_opts = []
1181 for opt in training_loop:
1182 training_loop_opts.append(self.optimizers.index(opt[0]))
1183 self.optimizers = self.__initialize_amp(
1184 optimizer=self.optimizers,
1185 optim_level=self._optim_level,
1186 amp_max_loss_scale=amp_max_loss_scale,
1187 amp_min_loss_scale=optimization_params.get('amp_min_loss_scale', 1.0),
1188 )
1189 # Use stored mapping to map amp_init opts to training loop
1190 for i, step in enumerate(training_loop):
1191 training_loop[i] = (
1192 self.optimizers[training_loop_opts[i]],
1193 step[1],
1194 step[2],
1195 )
1196
1197 dataNM = training_loop[0][2][0][0]
1198 if dataNM.placement == DeviceType.AllGpu:
1199 # if len(training_loop) > 1:
1200 # raise NotImplementedError(
1201 # "Distributed training does not work with multiple "
1202 # "optimizers")
1203 logging.info("Doing distributed training")
1204 if t_dataset is not None:
1205 train_sampler = torch.utils.data.distributed.DistributedSampler(
1206 dataset=t_dataset, shuffle=dataNM.shuffle
1207 )
1208 train_dataloader = torch.utils.data.DataLoader(
1209 dataset=t_dataset,
1210 sampler=train_sampler,
1211 num_workers=dataNM.num_workers,
1212 batch_size=dataNM.batch_size,
1213 shuffle=False,
1214 )
1215 else:
1216 train_dataloader = dataNM.data_iterator
1217 if hasattr(train_dataloader, 'sampler'):
1218 train_sampler = train_dataloader.sampler
1219 else:
1220 train_sampler = None
1221
1222 for train_iter in training_loop:
1223 call_chain = train_iter[2]
1224 for i in range(1, len(call_chain) - 1):
1225 key = call_chain[i][0].unique_instance_id
1226 pmodule = self.module_reference_table[key][1]
1227 if not isinstance(pmodule, DDP) and isinstance(pmodule, torch.nn.Module):
1228 # gpf = 1
1229 # if gradient_predivide:
1230 # gpf = dist.get_world_size()
1231 # pmodule = DDP(pmodule, gradient_predivide_factor=gpf) # Old Apex Method
1232
1233 # Per pytorch docs, convert sync bn prior to DDP
1234 if synced_batchnorm:
1235 world_size = dist.get_world_size()
1236 sync_batchnorm_group = None
1237 if synced_batchnorm_groupsize > 0:
1238 if world_size % synced_batchnorm_groupsize != 0:
1239 raise ValueError(
1240 f"Synchronized batch norm group size ({synced_batchnorm_groupsize}) must be 0"
1241 f" or divide total number of GPUs ({world_size})."
1242 )
1243 # Find ranks of other nodes in the same batchnorm group
1244 rank = torch.distributed.get_rank()
1245 group = rank // synced_batchnorm_groupsize
1246 group_rank_ids = range(
1247 group * synced_batchnorm_groupsize, (group + 1) * synced_batchnorm_groupsize
1248 )
1249 sync_batchnorm_group = torch.distributed.new_group(group_rank_ids)
1250
1251 pmodule = nn.SyncBatchNorm.convert_sync_batchnorm(
1252 pmodule, process_group=sync_batchnorm_group
1253 )
1254
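The group arithmetic above partitions ranks into consecutive blocks of `synced_batchnorm_groupsize`: rank r lands in group r // groupsize, and that group spans [group * groupsize, (group + 1) * groupsize). The same computation, with hypothetical world and group sizes:

```python
world_size = 8
synced_batchnorm_groupsize = 4
assert world_size % synced_batchnorm_groupsize == 0  # same validity check as above

def group_rank_ids(rank):
    # Ranks sharing a synchronized batch norm group with `rank`.
    group = rank // synced_batchnorm_groupsize
    return list(range(group * synced_batchnorm_groupsize,
                      (group + 1) * synced_batchnorm_groupsize))
```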
1255 # By default, disable broadcast_buffers. This disables batch norm synchronization on forward
1256 # pass
1257 pmodule = DDP(
1258 pmodule, device_ids=[self.local_rank], broadcast_buffers=False, find_unused_parameters=True
1259 )
1260
1261 # # Convert batchnorm modules to synced if applicable
1262 # if synced_batchnorm and isinstance(pmodule, torch.nn.Module):
1263 # world_size = dist.get_world_size()
1264 # if synced_batchnorm_groupsize > 0 and world_size % synced_batchnorm_groupsize != 0:
1265 # raise ValueError(
1266 # f"Synchronized batch norm group size"
1267 # f" ({synced_batchnorm_groupsize}) must be 0"
1268 # f" or divide total number of GPUs"
1269 # f" ({world_size})."
1270 # )
1271 # process_group = create_syncbn_process_group(synced_batchnorm_groupsize)
1272 # pmodule = convert_syncbn(pmodule, process_group=process_group)
1273
1274 self.module_reference_table[key] = (
1275 self.module_reference_table[key][0],
1276 pmodule,
1277 )
1278 # single GPU/CPU training
1279 else:
1280 if t_dataset is not None:
1281 train_sampler = None
1282 train_dataloader = torch.utils.data.DataLoader(
1283 dataset=t_dataset,
1284 sampler=None,
1285 num_workers=dataNM.num_workers,
1286 batch_size=dataNM.batch_size,
1287 shuffle=dataNM.shuffle,
1288 )
1289 else:
1290 train_dataloader = dataNM.data_iterator
1291 train_sampler = None
1292
1293 self._init_callbacks(callbacks)
1294 # Do action start callbacks
1295 self._perform_on_action_start(callbacks=callbacks)
1296
1297 # MAIN TRAINING LOOP
1298 # iteration over epochs
1299 while num_epochs is None or self.epoch_num < num_epochs:
1300 if train_sampler is not None:
1301 train_sampler.set_epoch(self.epoch_num)
1302 if max_steps is not None and self.step >= max_steps:
1303 break
1304
1305 # Register epochs start with callbacks
1306 self._perform_on_epoch_start(callbacks=callbacks)
1307
1308 # iteration over batches in epoch
1309 batch_counter = 0
1310 for _, data in enumerate(train_dataloader, 0):
1311 if max_steps is not None and self.step >= max_steps:
1312 break
1313
1314 if batch_counter == 0:
1315 # Started step, zero gradients
1316 curr_optimizer = training_loop[self.step % len(training_loop)][0]
1317 curr_optimizer.zero_grad()
1318 # Register iteration start with callbacks
1319 self._perform_on_iteration_start(callbacks=callbacks)
1320
1321 # set learning rate policy
1322 if lr_policy is not None:
1323 adjusted_lr = lr_policy(optimization_params["lr"], self.step, self.epoch_num)
1324 for param_group in curr_optimizer.param_groups:
1325 param_group["lr"] = adjusted_lr
1326 if self.tb_writer is not None:
1327 value = curr_optimizer.param_groups[0]['lr']
1328 self.tb_writer.add_scalar('param/lr', value, self.step)
1329 if callbacks is not None:
1330 for callback in callbacks:
1331 callback.learning_rate = curr_optimizer.param_groups[0]['lr']
1332
1333 # registered_tensors will contain created tensors
1334 # named by output port and uuid of module which created them
1335 # Get and properly name tensors returned by data layer
1336 curr_call_chain = training_loop[self.step % len(training_loop)][2]
1337 dl_device = curr_call_chain[0][0]._device
1338 if logging_callchain and self.step % logger_step_freq == 0:
1339 curr_call_chain = logging_callchain
1340 tensors = []
1341 if isinstance(data, torch.Tensor):
1342 data = (data,)
1343 for d in data:
1344 if isinstance(d, torch.Tensor):
1345 tensors.append(d.to(dl_device))
1346 else:
1347 tensors.append(d)
1348
1349 registered_tensors = {
1350 t.unique_name: d for t, d in zip(curr_call_chain[0][2].values(), tensors) if t is not None
1351 }
1352 disable_allreduce = batch_counter < (batches_per_step - 1)
1353 self.__nm_graph_forward_pass(
1354 call_chain=curr_call_chain, registered_tensors=registered_tensors,
1355 )
1356
1357 curr_tensors_to_optimize = training_loop[self.step % len(training_loop)][1]
1358 final_loss = 0
1359 nan = False
1360 for tensor in curr_tensors_to_optimize:
1361 if (
1362 torch.isnan(registered_tensors[tensor.unique_name]).any()
1363 or torch.isinf(registered_tensors[tensor.unique_name]).any()
1364 ):
1365 if stop_on_nan_loss:
1366 raise ValueError('Loss is NaN or inf - exiting')
1367 logging.warning('Loss is NaN or inf')
1368 curr_optimizer.zero_grad()
1369 nan = True
1370 break
1371 final_loss += registered_tensors[tensor.unique_name]
1372 if nan:
1373 continue
1374 if self._optim_level in AmpOptimizations and self._optim_level != Optimization.mxprO0:
1375 with amp.scale_loss(final_loss, curr_optimizer, delay_unscale=disable_allreduce) as scaled_loss:
1376 if torch.isnan(scaled_loss).any() or torch.isinf(scaled_loss).any():
1377 if stop_on_nan_loss:
1378 raise ValueError('Loss is NaN or inf - exiting')
1379 logging.warning('Loss is NaN or inf')
1380 curr_optimizer.zero_grad()
1381 continue
1382 if disable_allreduce:
1383 with ExitStack() as stack:
1384 for mod in self.get_DDP_modules(curr_call_chain):
1385 stack.enter_context(mod.no_sync())
1386 scaled_loss.backward(bps_scale.to(scaled_loss.get_device()))
1387 else:
1388 scaled_loss.backward(bps_scale.to(scaled_loss.get_device()))
1389 # no AMP optimizations needed
1390 else:
1391 # multi-GPU, float32
1392 if self._local_rank is not None:
1393 if disable_allreduce:
1394 with ExitStack() as stack:
1395 for mod in self.get_DDP_modules(curr_call_chain):
1396 stack.enter_context(mod.no_sync())
1397 final_loss.backward(bps_scale.to(final_loss.get_device()))
1398 else:
1399 final_loss.backward(bps_scale.to(final_loss.get_device()))
1400 # single device (CPU or GPU)
1401 else:
1402 # Workaround enabling gradients to be backpropagated on CPUs.
1403 if final_loss.get_device() < 0:
1404 final_loss.backward(bps_scale)
1405 else:
1406 final_loss.backward(bps_scale.to(final_loss.get_device()))
1407
1408 batch_counter += 1
1409
1410 if batch_counter == batches_per_step:
1411 # Ended step. Do optimizer update
1412 if grad_norm_clip is not None:
1413 torch.nn.utils.clip_grad_norm_(master_params(curr_optimizer), grad_norm_clip)
1414 curr_optimizer.step()
1415 batch_counter = 0
1416 # Register iteration end with callbacks
1417 self._update_callbacks(
1418 callbacks=callbacks, registered_tensors=registered_tensors,
1419 )
1420 self._perform_on_iteration_end(callbacks=callbacks)
1421 self.step += 1
1422 # End of epoch for loop
1423 # Register epochs end with callbacks
1424 self._perform_on_epoch_end(callbacks=callbacks)
1425 self.epoch_num += 1
1426 self._perform_on_action_end(callbacks=callbacks)
1427
1428 def infer(
1429 self,
1430 tensors,
1431 checkpoint_dir=None,
1432 ckpt_pattern='',
1433 verbose=True,
1434 cache=False,
1435 use_cache=False,
1436 offload_to_cpu=True,
1437 modules_to_restore=None,
1438 ):
1439 """See NeuralModuleFactory.infer()
1440 """
1441
1442 call_chain, _ = self.__get_top_sorted_modules_and_dataloader(hook=tensors)
1443 if checkpoint_dir:
1444 # Find all modules that need to be restored
1445 if modules_to_restore is None:
1446 modules_to_restore = []
1447 modules_to_restore_name = []
1448 for op in call_chain:
1449 if op[0].num_weights > 0:
1450 modules_to_restore.append(op[0])
1451
1452 if not isinstance(modules_to_restore, list):
1453 modules_to_restore = [modules_to_restore]
1454 modules_to_restore_name = []
1455 for mod in modules_to_restore:
1456 if not isinstance(mod, NeuralModule):
1457 raise ValueError("Found something that was not a Neural Module inside modules_to_restore")
1458 elif mod.num_weights == 0:
1459 raise ValueError("Found a Neural Module with 0 weights inside modules_to_restore")
1460 modules_to_restore_name.append(str(mod))
1461
1462 module_checkpoints = get_checkpoint_from_dir(modules_to_restore_name, checkpoint_dir, ckpt_pattern)
1463
1464 for mod, checkpoint in zip(modules_to_restore, module_checkpoints):
1465 logging.info(f"Restoring {mod} from {checkpoint}")
1466 mod.restore_from(checkpoint, self._local_rank)
1467
1468 # Init Amp
1469 if (
1470 self._optim_level in AmpOptimizations
1471 and self._optim_level != Optimization.mxprO0
1472 and not self.amp_initialized
1473 ):
1474 pt_modules = []
1475 for i in range(len(call_chain)):
1476 if isinstance(call_chain[i][0], nn.Module):
1477 pt_modules.append(call_chain[i][0])
1478 elif isinstance(call_chain[i][0], TrainableNeuralModuleWrapper):
1479 pt_modules.append(call_chain[i][0]._pt_module)
1480
1481 amp.initialize(
1482 min_loss_scale=1.0, models=pt_modules, optimizers=None, opt_level=AmpOptimizations[self._optim_level],
1483 )
1484 self.amp_initialized = True
1485
1486 # Run infer
1487 return self._infer(
1488 tensors_to_return=tensors,
1489 verbose=verbose,
1490 cache=cache,
1491 use_cache=use_cache,
1492 offload_to_cpu=offload_to_cpu,
1493 )
1494
1495 def get_DDP_modules(self, call_chain):
1496 modules = []
1497 for ind in range(1, len(call_chain)):
1498 m_id = call_chain[ind][0].unique_instance_id
1499 module = self.module_reference_table[m_id][1]
1500 if isinstance(module, DDP):
1501 modules.append(module)
1502
1503 return modules
1504
[end of nemo/backends/pytorch/actions.py]
[start of nemo/collections/asr/jasper.py]
1 # Copyright (c) 2019 NVIDIA Corporation
2 from typing import Optional
3
4 import torch
5 import torch.nn as nn
6 import torch.nn.functional as F
7
8 import nemo
9 from .parts.jasper import JasperBlock, init_weights, jasper_activations
10 from nemo.backends.pytorch.nm import TrainableNM
11 from nemo.core.neural_types import *
12 from nemo.utils.decorators import add_port_docs
13
14 logging = nemo.logging
15
16
17 class JasperEncoder(TrainableNM):
18 """
19 Jasper Encoder creates the pre-processing (prologue), Jasper convolution
20 block, and the first 3 post-processing (epilogue) layers as described in
21 Jasper (https://arxiv.org/abs/1904.03288)
22
23 Args:
24 jasper (list): A list of dictionaries. Each element in the list
25 represents the configuration of one Jasper Block. Each element
26 should contain::
27
28 {
29 # Required parameters
30 'filters' (int) # Number of output channels,
31 'repeat' (int) # Number of sub-blocks,
32 'kernel' (int) # Size of conv kernel,
33 'stride' (int) # Conv stride
34 'dilation' (int) # Conv dilation
35 'dropout' (float) # Dropout probability
36 'residual' (bool) # Whether to use residual or not.
37 # Optional parameters
38 'residual_dense' (bool) # Whether to use Dense Residuals
39 # or not. 'residual' must be True for 'residual_dense'
40 # to be enabled.
41 # Defaults to False.
42 'separable' (bool) # Whether to use separable convolutions.
43 # Defaults to False
44 'groups' (int) # Number of groups in each conv layer.
45 # Defaults to 1
46 'heads' (int) # Sharing of separable filters
47 # Defaults to -1
48 'tied' (bool) # Whether to use the same weights for all
49 # sub-blocks.
50 # Defaults to False
51 'se' (bool) # Whether to add Squeeze and Excitation
52 # sub-blocks.
53 # Defaults to False
54 'se_reduction_ratio' (int) # The reduction ratio of the Squeeze
55 # sub-module.
56 # Must be an integer > 1.
57 # Defaults to 16
58 'kernel_size_factor' (float) # Conv kernel size multiplier
59 # Can be either an int or float
60 # Kernel size is recomputed as below:
61 # new_kernel_size = int(max(1, (kernel_size * kernel_size_factor)))
62 # to prevent kernel sizes smaller than 1.
63 # Note: If rescaled kernel size is an even integer,
64 # adds 1 to the rescaled kernel size to allow "same"
65 # padding.
66 }
67
68 activation (str): Activation function used for each sub-blocks. Can be
69 one of ["hardtanh", "relu", "selu"].
70 feat_in (int): Number of channels being input to this module
71 normalization_mode (str): Normalization to be used in each sub-block.
72 Can be one of ["batch", "layer", "instance", "group"]
73 Defaults to "batch".
74 residual_mode (str): Type of residual connection.
75 Can be "add" or "max".
76 Defaults to "add".
77 norm_groups (int): Number of groups for "group" normalization type.
78 If set to -1, number of channels is used.
79 Defaults to -1.
80 conv_mask (bool): Controls the use of sequence length masking prior
81 to convolutions.
82 Defaults to True.
83 frame_splicing (int): Defaults to 1.
84 init_mode (str): Describes how neural network parameters are
85 initialized. Options are ['xavier_uniform', 'xavier_normal',
86 'kaiming_uniform','kaiming_normal'].
87 Defaults to "xavier_uniform".
88 """
89
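The `kernel_size_factor` rescaling described in the docstring above clamps the product at 1 and bumps even results up by one so "same" padding stays possible. A sketch of that recomputation (the helper name is hypothetical):

```python
def rescaled_kernel_size(kernel_size, kernel_size_factor):
    # new_kernel_size = int(max(1, kernel_size * kernel_size_factor)),
    # then +1 when even so "same" padding remains symmetric.
    new_kernel_size = int(max(1, kernel_size * kernel_size_factor))
    if new_kernel_size % 2 == 0:
        new_kernel_size += 1
    return new_kernel_size
```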
90 length: Optional[torch.Tensor]
91
92 @property
93 @add_port_docs()
94 def input_ports(self):
95 """Returns definitions of module input ports.
96 """
97 return {
98 # "audio_signal": NeuralType(
99 # {0: AxisType(BatchTag), 1: AxisType(SpectrogramSignalTag), 2: AxisType(ProcessedTimeTag),}
100 # ),
101 # "length": NeuralType({0: AxisType(BatchTag)}),
102 "audio_signal": NeuralType(('B', 'D', 'T'), SpectrogramType()),
103 "length": NeuralType(tuple('B'), LengthsType()),
104 }
105
106 @property
107 @add_port_docs()
108 def output_ports(self):
109 """Returns definitions of module output ports.
110 """
111 return {
112 # "outputs": NeuralType(
113 # {0: AxisType(BatchTag), 1: AxisType(EncodedRepresentationTag), 2: AxisType(ProcessedTimeTag),}
114 # ),
115 # "encoded_lengths": NeuralType({0: AxisType(BatchTag)}),
116 "outputs": NeuralType(('B', 'D', 'T'), AcousticEncodedRepresentation()),
117 "encoded_lengths": NeuralType(tuple('B'), LengthsType()),
118 }
119
120 @property
121 def disabled_deployment_input_ports(self):
122 return set(["length"])
123
124 @property
125 def disabled_deployment_output_ports(self):
126 return set(["encoded_lengths"])
127
128 def prepare_for_deployment(self):
129 m_count = 0
130 for m in self.modules():
131 if type(m).__name__ == "MaskedConv1d":
132 m.use_mask = False
133 m_count += 1
134 logging.warning(f"Turned off {m_count} masked convolutions")
135
136 def __init__(
137 self,
138 jasper,
139 activation,
140 feat_in,
141 normalization_mode="batch",
142 residual_mode="add",
143 norm_groups=-1,
144 conv_mask=True,
145 frame_splicing=1,
146 init_mode='xavier_uniform',
147 ):
148 super().__init__()
149
150 activation = jasper_activations[activation]()
151 feat_in = feat_in * frame_splicing
152
153 residual_panes = []
154 encoder_layers = []
155 self.dense_residual = False
156 for lcfg in jasper:
157 dense_res = []
158 if lcfg.get('residual_dense', False):
159 residual_panes.append(feat_in)
160 dense_res = residual_panes
161 self.dense_residual = True
162 groups = lcfg.get('groups', 1)
163 separable = lcfg.get('separable', False)
164 heads = lcfg.get('heads', -1)
165 se = lcfg.get('se', False)
166 se_reduction_ratio = lcfg.get('se_reduction_ratio', 16)
167 kernel_size_factor = lcfg.get('kernel_size_factor', 1.0)
168 encoder_layers.append(
169 JasperBlock(
170 feat_in,
171 lcfg['filters'],
172 repeat=lcfg['repeat'],
173 kernel_size=lcfg['kernel'],
174 stride=lcfg['stride'],
175 dilation=lcfg['dilation'],
176 dropout=lcfg['dropout'],
177 residual=lcfg['residual'],
178 groups=groups,
179 separable=separable,
180 heads=heads,
181 residual_mode=residual_mode,
182 normalization=normalization_mode,
183 norm_groups=norm_groups,
184 activation=activation,
185 residual_panes=dense_res,
186 conv_mask=conv_mask,
187 se=se,
188 se_reduction_ratio=se_reduction_ratio,
189 kernel_size_factor=kernel_size_factor,
190 )
191 )
192 feat_in = lcfg['filters']
193
194 self.encoder = nn.Sequential(*encoder_layers)
195 self.apply(lambda x: init_weights(x, mode=init_mode))
196 self.to(self._device)
197
198 def forward(self, audio_signal, length=None):
199 # type: (Tensor, Optional[Tensor]) -> Tensor, Optional[Tensor]
200
201 s_input, length = self.encoder(([audio_signal], length))
202 if length is None:
203 return s_input[-1]
204 return s_input[-1], length
205
206
207 class JasperDecoderForCTC(TrainableNM):
208 """
209 Jasper Decoder creates the final layer in Jasper that maps from the outputs
210 of Jasper Encoder to the vocabulary of interest.
211
212 Args:
213 feat_in (int): Number of channels being input to this module
214 num_classes (int): Number of characters in ASR model's vocab/labels.
215 This count should not include the CTC blank symbol.
216 init_mode (str): Describes how neural network parameters are
217 initialized. Options are ['xavier_uniform', 'xavier_normal',
218 'kaiming_uniform','kaiming_normal'].
219 Defaults to "xavier_uniform".
220 """
221
222 @property
223 @add_port_docs()
224 def input_ports(self):
225 """Returns definitions of module input ports.
226 """
227 return {
228 # "encoder_output": NeuralType(
229 # {0: AxisType(BatchTag), 1: AxisType(EncodedRepresentationTag), 2: AxisType(ProcessedTimeTag),}
230 # )
231 "encoder_output": NeuralType(('B', 'D', 'T'), AcousticEncodedRepresentation())
232 }
233
234 @property
235 @add_port_docs()
236 def output_ports(self):
237 """Returns definitions of module output ports.
238 """
239 # return {"output": NeuralType({0: AxisType(BatchTag), 1: AxisType(TimeTag), 2: AxisType(ChannelTag),})}
240 return {"output": NeuralType(('B', 'T', 'D'), LogprobsType())}
241
242 def __init__(self, feat_in, num_classes, init_mode="xavier_uniform"):
243 super().__init__()
244
245 self._feat_in = feat_in
246 # Add 1 for blank char
247 self._num_classes = num_classes + 1
248
249 self.decoder_layers = nn.Sequential(nn.Conv1d(self._feat_in, self._num_classes, kernel_size=1, bias=True))
250 self.apply(lambda x: init_weights(x, mode=init_mode))
251 self.to(self._device)
252
253 def forward(self, encoder_output):
254 return F.log_softmax(self.decoder_layers(encoder_output).transpose(1, 2), dim=-1)
255
256
257 class JasperDecoderForClassification(TrainableNM):
258 """
259 Jasper Decoder creates the final layer in Jasper that maps from the outputs
260 of Jasper Encoder to one class label.
261
262 Args:
263 feat_in (int): Number of channels being input to this module
264         num_classes (int): Number of target classes predicted by this
265             classification decoder (no CTC blank symbol is involved).
266 init_mode (str): Describes how neural network parameters are
267 initialized. Options are ['xavier_uniform', 'xavier_normal',
268 'kaiming_uniform','kaiming_normal'].
269 Defaults to "xavier_uniform".
270 """
271
272 @property
273 def input_ports(self):
274 """Returns definitions of module input ports.
275 """
276 return {
277 # "encoder_output": NeuralType(
278 # {0: AxisType(BatchTag), 1: AxisType(EncodedRepresentationTag), 2: AxisType(ProcessedTimeTag)}
279 # )
280 "encoder_output": NeuralType(('B', 'D', 'T'), AcousticEncodedRepresentation())
281 }
282
283 @property
284 def output_ports(self):
285 """Returns definitions of module output ports.
286 """
287 # return {"logits": NeuralType({0: AxisType(BatchTag), 1: AxisType(ChannelTag)})}
288 return {"logits": NeuralType(('B', 'D'), LogitsType())}
289
290 def __init__(
291 self, *, feat_in, num_classes, init_mode="xavier_uniform", return_logits=True, pooling_type='avg', **kwargs
292 ):
293 TrainableNM.__init__(self, **kwargs)
294
295 self._feat_in = feat_in
296 self._return_logits = return_logits
297 self._num_classes = num_classes
298
299 if pooling_type == 'avg':
300 self.pooling = nn.AdaptiveAvgPool1d(1)
301 elif pooling_type == 'max':
302 self.pooling = nn.AdaptiveMaxPool1d(1)
303 else:
304 raise ValueError('Pooling type chosen is not valid. Must be either `avg` or `max`')
305
306 self.decoder_layers = nn.Sequential(nn.Linear(self._feat_in, self._num_classes, bias=True))
307 self.apply(lambda x: init_weights(x, mode=init_mode))
308 self.to(self._device)
309
310 def forward(self, encoder_output):
311 batch, in_channels, timesteps = encoder_output.size()
312
313 encoder_output = self.pooling(encoder_output).view(batch, in_channels) # [B, C]
314 logits = self.decoder_layers(encoder_output) # [B, num_classes]
315
316 if self._return_logits:
317 return logits
318
319 return F.softmax(logits, dim=-1)
320
[end of nemo/collections/asr/jasper.py]
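`JasperDecoderForCTC.forward` above is a 1x1 convolution (a per-frame linear projection to `num_classes + 1` logits, where the extra class is the CTC blank) followed by `log_softmax` over the class dimension. A minimal pure-Python sketch of that final normalization step, using hypothetical toy logits and no torch dependency:

```python
import math

def log_softmax(logits):
    """Numerically stable log-softmax over one frame of per-class logits."""
    m = max(logits)
    log_z = m + math.log(sum(math.exp(x - m) for x in logits))
    return [x - log_z for x in logits]

# Hypothetical single decoder frame: num_classes + 1 logits,
# the extra slot standing in for the CTC blank symbol.
frame_logits = [2.0, 0.5, -1.0, 0.0]
log_probs = log_softmax(frame_logits)

# Exponentiating the log-probabilities recovers a distribution summing to 1.
total = sum(math.exp(lp) for lp in log_probs)
```

The max-subtraction mirrors what `F.log_softmax` does internally; staying in log space keeps the downstream CTC loss numerically stable.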
[start of nemo/core/neural_factory.py]
1 # ! /usr/bin/python
2 # -*- coding: utf-8 -*-
3
4 # Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved.
5 #
6 # Licensed under the Apache License, Version 2.0 (the "License");
7 # you may not use this file except in compliance with the License.
8 # You may obtain a copy of the License at
9 #
10 # http://www.apache.org/licenses/LICENSE-2.0
11 #
12 # Unless required by applicable law or agreed to in writing, software
13 # distributed under the License is distributed on an "AS IS" BASIS,
14 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
15 # See the License for the specific language governing permissions and
16 # limitations under the License.
17
18 __all__ = [
19 'Backend',
20 'ModelMode',
21 'Optimization',
22 'DeviceType',
23 'Actions',
24 'NeuralModuleFactory',
25 'DeploymentFormat',
26 ]
27
28 import random
29 from abc import ABC, abstractmethod
30 from enum import Enum
31 from typing import List, Optional
32
33 import numpy as np
34
35 import nemo
36 from ..utils import ExpManager
37 from .callbacks import ActionCallback, EvaluatorCallback
38 from .neural_types import *
39 from nemo.utils.decorators import deprecated
40
41 logging = nemo.logging
42
43
44 class DeploymentFormat(Enum):
45 """Which format to use when exporting a Neural Module for deployment"""
46
47 AUTO = 0
48 PYTORCH = 1
49 TORCHSCRIPT = 2
50 ONNX = 3
51 TRTONNX = 4
52
53
54 class Backend(Enum):
55 """Supported backends. For now, it is only PyTorch."""
56
57 PyTorch = 1
58 NotSupported = 2
59
60
61 class ModelMode(Enum):
62 """Training Mode or Evaluation/Inference"""
63
64 train = 0
65 eval = 1
66
67
68 class Optimization(Enum):
69 """Various levels of Apex/amp Optimization.
70     WARNING: This might have an effect on model accuracy."""
71
72 mxprO0 = 0
73 mxprO1 = 1
74 mxprO2 = 2
75 mxprO3 = 3
76
77
78 class DeviceType(Enum):
79 """Device types where Neural Modules can be placed."""
80
81 GPU = 1
82 CPU = 2
83 AllGpu = 3
84
85
86 class Actions(ABC):
87 """Basic actions allowed on graphs of Neural Modules"""
88
89 def __init__(self, local_rank, global_rank, optimization_level=Optimization.mxprO0):
90 self._local_rank = local_rank
91 self._global_rank = global_rank
92 self._optim_level = optimization_level
93 self.step = None
94 self.epoch_num = None
95
96 @property
97 def local_rank(self):
98 """Local rank during distributed execution. None if single GPU/CPU
99
100 Returns:
101             (int) rank of worker, or None if not in distributed mode
102 """
103 return self._local_rank
104
105 @property
106 def global_rank(self):
107 """Global rank during distributed execution. None if single GPU/CPU
108
109 Returns:
110             (int) rank of worker, or None if not in distributed mode
111 """
112 return self._global_rank
113
114 @abstractmethod
115 def train(
116 self,
117 tensors_to_optimize: List[NmTensor],
118 callbacks: Optional[List[ActionCallback]],
119 lr_policy=None,
120 batches_per_step=None,
121 stop_on_nan_loss=False,
122 ):
123 """This action executes training and (optionally) evaluation.
124
125 Args:
126             tensors_to_optimize: which tensors to optimize. Typically this is
127                 a single loss tensor.
128 callbacks: list of callback objects
129 lr_policy: function which should take (initial_lr, step, epoch) and
130 return learning rate
131 batches_per_step: number of mini-batches to process before one
132 optimizer step. (default: None, same as 1). Use this
133 to simulate larger batch sizes on hardware which could not fit
134 larger batch in memory otherwise. Effectively, this will make
135 "algorithmic" batch size per GPU/worker = batches_per_step*
136 batch_size
137 stop_on_nan_loss: (default: False) If set to True, the training
138 will stop if loss=nan. If set to False, the training will
139 continue, but the gradients will be zeroed before next
140 mini-batch.
141
142 Returns:
143 None
144 """
145 pass
146
147 @abstractmethod
148 def infer(self, tensors: List[NmTensor]):
149 """This action executes inference. Nothing is optimized.
150 Args:
151 tensors: which tensors to evaluate.
152
153 Returns:
154 None
155 """
156 pass
157
158 @abstractmethod
159 def save_state_to(self, path: str):
160 """
161 Saves current state such as step, epoch and optimizer parameters
162 Args:
163 path:
164
165 Returns:
166
167 """
168 pass
169
170 @abstractmethod
171 def restore_state_from(self, path: str):
172 """
173 Restores state such as step, epoch and optimizer parameters
174 Args:
175 path:
176
177 Returns:
178
179 """
180 pass
181
182 @abstractmethod
183 def create_optimizer(self, optimizer, things_to_optimize, optimizer_params):
184 """
185         Creates an optimizer object to be used in the train() method.
186
187 Args:
188 optimizer: Specifies which optimizer to use.
189 things_to_optimize: A list of neural modules or tensors to be
190 optimized.
191 optimizer_params: Specifies the parameters of the optimizer
192
193 Returns:
194 Optimizer
195 """
196 pass
197
198 def _perform_on_iteration_start(self, callbacks):
199 # TODO: Most of these checks can be relaxed since we enforce callbacks
200 # to be a list of ActionCallback objects
201 if callbacks is not None and isinstance(callbacks, List) and len(callbacks) > 0:
202 for callback in callbacks:
203 callback.on_iteration_start()
204
205 def _perform_on_iteration_end(self, callbacks):
206 if callbacks is not None and isinstance(callbacks, List) and len(callbacks) > 0:
207 for callback in callbacks:
208 callback.on_iteration_end()
209
210 def _perform_on_action_start(self, callbacks):
211 if callbacks is not None and isinstance(callbacks, List) and len(callbacks) > 0:
212 for callback in callbacks:
213 callback.on_action_start()
214
215 def _perform_on_action_end(self, callbacks):
216 if callbacks is not None and isinstance(callbacks, List) and len(callbacks) > 0:
217 for callback in callbacks:
218 callback.on_action_end()
219
220 def _perform_on_epoch_start(self, callbacks):
221 if callbacks is not None and isinstance(callbacks, List) and len(callbacks) > 0:
222 for callback in callbacks:
223 callback.on_epoch_start()
224
225 def _perform_on_epoch_end(self, callbacks):
226 if callbacks is not None and isinstance(callbacks, List) and len(callbacks) > 0:
227 for callback in callbacks:
228 callback.on_epoch_end()
229
230 def _init_callbacks(self, callbacks):
231 if callbacks is not None and isinstance(callbacks, List) and len(callbacks) > 0:
232 for callback in callbacks:
233 callback.action = self
234
235 def _update_callbacks(
236 self, callbacks=None, registered_tensors=None,
237 ):
238 # if self.local_rank is None or self.local_rank == 0:
239 if callbacks is not None and isinstance(callbacks, List) and len(callbacks) > 0:
240 for callback in callbacks:
241 callback._registered_tensors = registered_tensors
242
243
244 def _str_to_opt_level(opt_str: str) -> Optimization:
245 number = int(opt_str[1:])
246 if number not in Optimization._value2member_map_:
247 raise ValueError(f"Unknown optimization value {opt_str}")
248 return Optimization(number)
249
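`_str_to_opt_level` above turns strings such as `"O2"` into `Optimization` members. A self-contained sketch of that parsing (the enum values are copied here so the snippet runs on its own):

```python
from enum import Enum

class Optimization(Enum):
    """Copy of the mixed-precision levels defined above, for a standalone sketch."""
    mxprO0 = 0
    mxprO1 = 1
    mxprO2 = 2
    mxprO3 = 3

def str_to_opt_level(opt_str):
    """Parse strings such as 'O2' into an Optimization member."""
    number = int(opt_str[1:])  # drop the leading 'O'
    if number not in Optimization._value2member_map_:
        raise ValueError(f"Unknown optimization value {opt_str}")
    return Optimization(number)

level = str_to_opt_level("O2")
```

`_value2member_map_` is the reverse value-to-member lookup that `Enum` maintains, which is why the membership test works without iterating the enum.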
250
251 class NeuralModuleFactory(object):
252 _DEFAULT = None
253
254 """
255 Neural Module Factory instance is used to create neural modules and
256 trainers
257
258 Args:
259 backend (Backend): Currently only Backend.PyTorch is supported
260 local_rank (int): Process rank. Should be set by distributed runner
261 optimization_level (Optimization): Level of optimization to use. Will
262 be passed to neural modules and actions created by this factory.
263         placement (DeviceType): where to place NeuralModule instances by default
264 cudnn_benchmark (bool): (default False) If set to True it will use
265 cudnnFind method to find the best kernels instead of using
266 heuristics. If the shapes of your inputs are constant this
267 should help, for various shapes it can slow things down. Give it
268 few iterations to warmup if set to True. Currently only supported
269 by PyTorch backend.
270 random_seed (int): (default None) Sets random seed to control for
271 randomness. This should be used for debugging purposes as it might
272 have negative impact on performance. Can't be used when
273 `cudnn_benchmark=True`.
276 set_default (bool): (default True) True if should set this instance as
277 default factory for modules instantiating.
278 """
279
280 def __init__(
281 self,
282 backend=Backend.PyTorch,
283 local_rank=None,
284 optimization_level=Optimization.mxprO0,
285 placement=None,
286 cudnn_benchmark=False,
287 random_seed=None,
288 set_default=True,
289 log_dir=None,
290 checkpoint_dir=None,
291 tensorboard_dir=None,
292 create_tb_writer=False,
293 files_to_copy=None,
294 add_time_to_log_dir=False,
295 ):
296 self._local_rank = local_rank
297 self._global_rank = None
298
299 if isinstance(optimization_level, str):
300 optimization_level = _str_to_opt_level(optimization_level)
301 self._optim_level = optimization_level
302
303 if placement is None:
304 if local_rank is not None:
305 device = DeviceType.AllGpu
306 else:
307 device = DeviceType.GPU
308
309 self._placement = device
310 else:
311 self._placement = placement
312
313 self._backend = backend
314 self._world_size = 1
315 broadcast_func = None
316 if backend == Backend.PyTorch:
317 # TODO: Move all framework specific code from this file
318 import torch
319
320 if self._placement != DeviceType.CPU:
321 if not torch.cuda.is_available():
322 raise ValueError(
323 "You requested to use GPUs but CUDA is "
324 "not installed. You can try running using"
325 " CPU-only. To do this, instantiate your"
326 " factory with placement=DeviceType.CPU"
327 "\n"
328 "Note that this is slow and is not "
329 "well supported."
330 )
331
332 torch.backends.cudnn.benchmark = cudnn_benchmark
333 if random_seed is not None and cudnn_benchmark:
334 raise ValueError("cudnn_benchmark can not be set to True when random_seed is not None.")
335 if random_seed is not None:
336 torch.backends.cudnn.deterministic = True
337 torch.backends.cudnn.benchmark = False
338 torch.manual_seed(random_seed)
339 np.random.seed(random_seed)
340 random.seed(random_seed)
341
342 if self._local_rank is not None:
343 torch.distributed.init_process_group(backend="nccl", init_method="env://")
344
345 cuda_set = True
346 # Try to set cuda device. This should fail if self._local_rank
347 # is greater than the number of available GPUs
348 try:
349 torch.cuda.set_device(self._local_rank)
350 except RuntimeError:
351 # Note in this case, all tensors are now sent to GPU 0
352 # who could crash because of OOM. Thus init_process_group()
353 # must be done before any cuda tensors are allocated
354 cuda_set = False
355 cuda_set_t = torch.cuda.IntTensor([cuda_set])
356
357 # Do an all_reduce to ensure all workers obtained a GPU
358 # For the strangest reason, BAND doesn't work so I am resorting
359 # to MIN.
360 torch.distributed.all_reduce(cuda_set_t, op=torch.distributed.ReduceOp.MIN)
361 if cuda_set_t.item() == 0:
362 raise RuntimeError(
363 "There was an error initializing distributed training."
364 " Perhaps you specified more gpus than you have "
365 "available"
366 )
367
368 del cuda_set_t
369 torch.cuda.empty_cache()
370 # Remove test tensor from memory
371
372 self._world_size = torch.distributed.get_world_size()
373 self._global_rank = torch.distributed.get_rank()
374
375 def torch_broadcast_wrapper(str_len=None, string=None, src=0):
376 """Wrapper function to broadcast string values across all
377 workers
378 """
379 # Create byte cuda torch tensor
380 if string is not None:
381 string_tensor = torch.tensor(list(string.encode()), dtype=torch.uint8).cuda()
382 else:
383 string_tensor = torch.tensor([0] * str_len, dtype=torch.uint8).cuda()
384 # Run broadcast
385 torch.distributed.broadcast(string_tensor, src)
386 # turn byte tensor back to string
387 return_string = string_tensor.cpu().numpy()
388                     return_string = bytes(return_string).decode()
389 return return_string
390
391 broadcast_func = torch_broadcast_wrapper
392 else:
393 raise NotImplementedError("Only Pytorch backend is currently supported.")
394
395 # Create ExpManager
396 # if log_dir is None, only create logger
397 self._exp_manager = ExpManager(
398 work_dir=log_dir,
399 ckpt_dir=checkpoint_dir,
400 use_tb=create_tb_writer,
401 tb_dir=tensorboard_dir,
402 local_rank=local_rank,
403 global_rank=self._global_rank,
404 files_to_copy=files_to_copy,
405 add_time=add_time_to_log_dir,
406 exist_ok=True,
407 broadcast_func=broadcast_func,
408 )
409 self._tb_writer = self._exp_manager.tb_writer
410
411 # Create trainer
412 self._trainer = self._get_trainer(tb_writer=self._tb_writer)
413
414 if set_default:
415 NeuralModuleFactory.set_default_factory(self)
416
417 @classmethod
418 def get_default_factory(cls):
419 return cls._DEFAULT
420
421 @classmethod
422 def set_default_factory(cls, factory):
423 cls._DEFAULT = factory
424
425 @classmethod
426 def reset_default_factory(cls):
427 cls._DEFAULT = None
428
429 @staticmethod
430 def __name_import(name):
431 components = name.split(".")
432 mod = __import__(components[0])
433 for comp in components[1:]:
434 mod = getattr(mod, comp)
435 return mod
436
437 @deprecated(version=0.11)
438 def __get_pytorch_module(self, name, collection, params, pretrained):
439 # TK: "factory" is not passed as parameter anymore.
440 # params["factory"] = self
441
442 if collection == "toys" or collection == "tutorials" or collection == "other":
443 constructor = NeuralModuleFactory.__name_import("nemo.backends.pytorch.tutorials." + name)
444 elif collection == "nemo_nlp":
445 constructor = NeuralModuleFactory.__name_import("nemo_nlp." + name)
446 if name == "BERT" and pretrained is True:
447 params["pretrained"] = True
448 elif collection == "nemo_asr":
449 constructor = NeuralModuleFactory.__name_import("nemo_asr." + name)
450 elif collection == "nemo_lpr":
451 constructor = NeuralModuleFactory.__name_import("nemo_lpr." + name)
452 elif collection == 'common':
453 constructor = NeuralModuleFactory.__name_import('nemo.backends.pytorch.common.' + name)
454 elif collection == "torchvision":
455 import torchvision.models as tv_models
456 import nemo.backends.pytorch.module_wrapper as mw
457 import torch.nn as nn
458
459 if name == "ImageFolderDataLayer":
460 constructor = NeuralModuleFactory.__name_import("nemo.backends.pytorch.torchvision.data." + name)
461 instance = constructor(**params)
462 return instance
463 else:
464 _nm_name = name.lower()
465 if _nm_name == "resnet18":
466 input_ports = {
467 "x": NeuralType(
468 {
469 0: AxisType(BatchTag),
470 1: AxisType(ChannelTag),
471 2: AxisType(HeightTag, 224),
472 3: AxisType(WidthTag, 224),
473 }
474 )
475 }
476 output_ports = {"output": NeuralType({0: AxisType(BatchTag), 1: AxisType(ChannelTag)})}
477
478 pt_model = tv_models.resnet18(pretrained=pretrained)
479 num_classes = params.get("num_classes", None)
480 if num_classes is not None:
481 pt_model.fc = nn.Linear(512, params["num_classes"])
482 return mw.TrainableNeuralModuleWrapper(
483 pt_nn_module=pt_model, input_ports_dict=input_ports, output_ports_dict=output_ports,
484 )
485 elif _nm_name == "resnet50":
486 input_ports = {
487 "x": NeuralType(
488 {
489 0: AxisType(BatchTag),
490 1: AxisType(ChannelTag),
491 2: AxisType(HeightTag, 224),
492 3: AxisType(WidthTag, 224),
493 }
494 )
495 }
496 output_ports = {"output": NeuralType({0: AxisType(BatchTag), 1: AxisType(ChannelTag)})}
497
498 pt_model = tv_models.resnet50(pretrained=pretrained)
499 num_classes = params.get("num_classes", None)
500 if num_classes is not None:
501 pt_model.fc = nn.Linear(2048, params["num_classes"])
502 return mw.TrainableNeuralModuleWrapper(
503 pt_nn_module=pt_model, input_ports_dict=input_ports, output_ports_dict=output_ports,
504 )
505 else:
506 collection_path = "nemo.collections." + collection + "." + name
507 constructor = NeuralModuleFactory.__name_import(collection_path)
508 if name == "BERT" and pretrained is True:
509 params["pretrained"] = True
510
511 # TK: "placement" is not passed as parameter anymore.
512 # if "placement" not in params:
513 # params["placement"] = self._placement
514 instance = constructor(**params)
515 return instance
516
517 @deprecated(version=0.11)
518 def get_module(self, name, collection, params, pretrained=False):
519 """
520 Creates NeuralModule instance
521
522 Args:
523 name (str): name of NeuralModule which instance should be returned.
524 params (dict): local parameters which should be passed to
525 NeuralModule's constructor.
526 collection (str): in which collection to look for
527 `neural_module_name`
528 pretrained (bool): return pre-trained instance or randomly
529 initialized (default)
530
531 Returns:
532 NeuralModule instance
533 """
534
535 # TK: "optimization_level" is not passed as parameter anymore.
536 # if params is not None and "optimization_level" in params:
537 # if params["optimization_level"] != self._optim_level:
538 # logging.warning(
539 # "Module's {0} requested optimization level {1} is"
540 # "different from the one specified by factory - {2}."
541 # "Using: {3} for this module".format(
542 # name, params["optimization_level"], self._optim_level, params["optimization_level"],
543 # )
544 # )
545 # else:
546 # if params is None:
547 # params = {}
548 # params["optimization_level"] = self._optim_level
549
550 if self._backend == Backend.PyTorch:
551 return self.__get_pytorch_module(name=name, collection=collection, params=params, pretrained=pretrained,)
552 else:
553 return None
554
555 def create_optimizer(self, optimizer, things_to_optimize, optimizer_params):
556 return self._trainer.create_optimizer(
557 optimizer=optimizer, things_to_optimize=things_to_optimize, optimizer_params=optimizer_params,
558 )
559
560 def train(
561 self,
562 tensors_to_optimize,
563 optimizer=None,
564 optimization_params=None,
565 callbacks: Optional[List[ActionCallback]] = None,
566 lr_policy=None,
567 batches_per_step=None,
568 stop_on_nan_loss=False,
569 synced_batchnorm=False,
570 synced_batchnorm_groupsize=0,
571 gradient_predivide=False,
572 amp_max_loss_scale=2.0 ** 24,
573 reset=False,
574 ):
575 if reset:
576 self.reset_trainer()
577 return self._trainer.train(
578 tensors_to_optimize=tensors_to_optimize,
579 optimizer=optimizer,
580 optimization_params=optimization_params,
581 callbacks=callbacks,
582 lr_policy=lr_policy,
583 batches_per_step=batches_per_step,
584 stop_on_nan_loss=stop_on_nan_loss,
585 synced_batchnorm=synced_batchnorm,
586 synced_batchnorm_groupsize=synced_batchnorm_groupsize,
587 gradient_predivide=gradient_predivide,
588 amp_max_loss_scale=amp_max_loss_scale,
589 )
590
591 def eval(self, callbacks: List[EvaluatorCallback]):
592 if callbacks is None or len(callbacks) == 0:
593             raise ValueError("You need to provide at least one evaluation callback to eval")
594 for callback in callbacks:
595 if not isinstance(callback, EvaluatorCallback):
596                 raise TypeError("All callbacks passed to the eval action must be inherited from EvaluatorCallback")
597 self.train(
598 tensors_to_optimize=None, optimizer='sgd', callbacks=callbacks, optimization_params={'num_epochs': 1},
599 )
600
601 def deployment_export(
602 self, module, output: str, d_format: DeploymentFormat, input_example=None, output_example=None
603 ):
604 """Exports Neural Module instance for deployment.
605
606 Args:
607 module: neural module to export
608 output (str): where export results should be saved
609 d_format (DeploymentFormat): which deployment format to use
610 input_example: sometimes tracing will require input examples
611 output_example: Should match inference on input_example
612 """
613 module.prepare_for_deployment()
614
615 return self._trainer.deployment_export(
616 module=module,
617 output=output,
618 d_format=d_format,
619 input_example=input_example,
620 output_example=output_example,
621 )
622
623 def infer(
624 self,
625 tensors: List[NmTensor],
626 checkpoint_dir=None,
627 ckpt_pattern='',
628 verbose=True,
629 cache=False,
630 use_cache=False,
631 offload_to_cpu=True,
632 modules_to_restore=None,
633 ):
634 """Runs inference to obtain values for tensors
635
636 Args:
637 tensors (list[NmTensor]): List of NeMo tensors that we want to get
638 values of.
639 checkpoint_dir (str): Path to checkpoint directory. Default is None
640 which does not load checkpoints.
641 ckpt_pattern (str): Pattern used to check for checkpoints inside
642 checkpoint_dir. Default is '' which matches any checkpoints
643 inside checkpoint_dir.
644 verbose (bool): Controls printing. Defaults to True.
645 cache (bool): If True, cache all `tensors` and intermediate tensors
646 so that future calls that have use_cache set will avoid
647 computation. Defaults to False.
648             use_cache (bool): Values from `tensors` will always be re-computed.
649 It will re-use intermediate tensors from the DAG leading to
650 `tensors`. If you want something to be re-computed, put it into
651 `tensors` list. Defaults to False.
652 offload_to_cpu (bool): If True, all evaluated tensors are moved to
653 cpu memory after each inference batch. Defaults to True.
654 modules_to_restore (list): Defaults to None, in which case all
655 NMs inside callchain with weights will be restored. If
656 specified only the modules inside this list will be restored.
657
658 Returns:
659 List of evaluated tensors. Each element in the list is also a list
660 where each element is now a batch of tensor values.
661 """
662 return self._trainer.infer(
663 tensors=tensors,
664 checkpoint_dir=checkpoint_dir,
665 ckpt_pattern=ckpt_pattern,
666 verbose=verbose,
667 cache=cache,
668 use_cache=use_cache,
669 offload_to_cpu=offload_to_cpu,
670 modules_to_restore=modules_to_restore,
671 )
672
673 def clear_cache(self):
674 """Helper function to clean inference cache."""
675 self._trainer.clear_cache()
676
677 @deprecated(version="future")
678 def _get_trainer(self, tb_writer=None):
679 if self._backend == Backend.PyTorch:
680 constructor = NeuralModuleFactory.__name_import("nemo.backends.pytorch.PtActions")
681 instance = constructor(
682 local_rank=self._local_rank,
683 global_rank=self._global_rank,
684 tb_writer=tb_writer,
685 optimization_level=self._optim_level,
686 )
687 return instance
688 else:
689 raise ValueError("Only PyTorch backend is currently supported.")
690
691 @deprecated(
692 version="future",
693 explanation="Please use .train(...), .eval(...), .infer(...) and "
694 f".create_optimizer(...) of the NeuralModuleFactory instance directly.",
695 )
696 def get_trainer(self, tb_writer=None):
697 if self._trainer:
698 logging.warning(
699 "The trainer instance was created during initialization of "
700 "Neural factory, using the already created instance."
701 )
702 return self._trainer
703 return self._get_trainer(tb_writer)
704
705 def reset_trainer(self):
706 del self._trainer
707 self._trainer = self._get_trainer(tb_writer=self._tb_writer)
708
709 def sync_all_processes(self, status=True):
710         """ Helper function for testing that allows process 0 to inform all
711 other processes of failures. Does nothing if not using distributed
712 training. Usage example can be seen in examples/asr/jasper_an4.py
713
714 Args:
715             status (bool): Defaults to True. If any process passes False, it
716 will trigger a graceful exit on all other processes. It is
717 assumed that the process that passed False will print an error
718 message on its own and exit
719 """
720 if self._world_size == 1:
721 logging.info("sync_all_processes does nothing if there is one process")
722 return
723 if self._backend == Backend.PyTorch:
724 import torch
725
726 status_tensor = torch.cuda.IntTensor([status])
727 torch.distributed.all_reduce(status_tensor, op=torch.distributed.ReduceOp.MIN)
728 if status_tensor.item() == 0:
729 logging.error("At least one process had a failure")
730 if status:
731 raise ValueError(
732 f"Process with global rank {self._global_rank} entered"
733 " sync_all_processes with a passing status, but "
734 "another process indicated a failure"
735 )
736
737 @property
738 def world_size(self):
739 return self._world_size
740
741 @property
742 def tb_writer(self):
743 return self._tb_writer
744
745 @property
746 def placement(self):
747 return self._placement
748
749 @property
750 def optim_level(self):
751 return self._optim_level
752
753 @property
754     @deprecated(version=0.11, explanation="Please use ``nemo.logging`` instead")
755 def logger(self):
756 return nemo.logging
757
758 @property
759 def checkpoint_dir(self):
760 return self._exp_manager.ckpt_dir
761
762 @property
763 def work_dir(self):
764 return self._exp_manager.work_dir
765
766 @property
767 def global_rank(self):
768 return self._global_rank
769
[end of nemo/core/neural_factory.py]
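The `torch_broadcast_wrapper` closure in `neural_factory.py` above shares a string across workers by packing it into a uint8 tensor, broadcasting it, and unpacking it. The byte-level round-trip it relies on can be sketched without torch; the string value here is purely illustrative:

```python
# Pure-Python sketch of the encode/broadcast/decode round-trip performed by
# torch_broadcast_wrapper; the torch.distributed.broadcast call itself is elided.
message = "some/checkpoint/dir"  # hypothetical string shared from rank 0

# Rank 0: the string becomes a list of byte values, the payload of the uint8 tensor.
payload = list(message.encode())

# (torch.distributed.broadcast would copy `payload` to every worker here.)

# Every rank: the received byte values are turned back into a string.
received = bytes(payload).decode()
```

Non-source ranks only need the byte length in advance (the wrapper's `str_len` argument) so they can allocate a tensor of the right size before the broadcast fills it.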
[start of nemo/core/neural_modules.py]
1 # ! /usr/bin/python
2 # -*- coding: utf-8 -*-
3
4 # Copyright (c) 2019-, NVIDIA CORPORATION. All rights reserved.
5 #
6 # Licensed under the Apache License, Version 2.0 (the "License");
7 # you may not use this file except in compliance with the License.
8 # You may obtain a copy of the License at
9 #
10 # http://www.apache.org/licenses/LICENSE-2.0
11 #
12 # Unless required by applicable law or agreed to in writing, software
13 # distributed under the License is distributed on an "AS IS" BASIS,
14 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
15 # See the License for the specific language governing permissions and
16 # limitations under the License.
17
18 """This file contains NeuralModule and NmTensor classes."""
19 __all__ = ['WeightShareTransform', 'NeuralModule']
20
21 import collections
22 import uuid
23 from abc import ABC, abstractmethod
24 from collections import namedtuple
25 from enum import Enum
26 from inspect import getargvalues, getfullargspec, stack
27 from os import path
28 from typing import Dict, List, Optional, Set, Tuple
29
30 from ruamel.yaml import YAML
31
32 from .neural_types import (
33 CanNotInferResultNeuralType,
34 NeuralPortNameMismatchError,
35 NeuralPortNmTensorMismatchError,
36 NeuralType,
37 NeuralTypeComparisonResult,
38 NmTensor,
39 )
40 from nemo import logging
41 from nemo.core import NeuralModuleFactory
42 from nemo.package_info import __version__ as nemo_version
43 from nemo.utils.decorators.deprecated import deprecated
44
45 YAML = YAML(typ='safe')
46
47
48 class WeightShareTransform(Enum):
49 """When sharing parameters, what kind of transform to apply."""
50
51 SAME = 0
52 TRANSPOSE = 1
53
54
55 PretrainedModelInfo = namedtuple(
56     "PretrainedModelInfo", ("pretrained_model_name", "description", "parameters", "location"),
57 )
58
59
60 class NeuralModule(ABC):
61 """Abstract class that every Neural Module must inherit from.
62 """
63
64 def __init__(self):
65
66 # Get default factory.
67 self._factory = NeuralModuleFactory.get_default_factory()
68
69 # Set module properties from factory else use defaults
70 self._placement = self._factory.placement
71 # If one needs to change that should override it manually.
72
73 # Optimization level.
74 self._opt_level = self._factory.optim_level
75
76 # Get object UUID.
77 self._uuid = str(uuid.uuid4())
78
79 # Retrieve dictionary of parameters (keys, values) passed to init.
80 self._init_params = self.__extract_init_params()
81
82         # Print the types of the values.
83 # for key, value in self._init_params.items():
84 # print("{}: {} ({})".format(key, value, type(value)))
85
86 # Validate the parameters.
87 # self._validate_params(self._init_params)
88
89 @property
90 def init_params(self) -> Optional[Dict]:
91 """
92 Property returning parameters used to instantiate the module.
93
94 Returns:
95 Dictionary containing parameters used to instantiate the module.
96 """
97 return self._init_params
98
99 def __extract_init_params(self):
100 """
101         Retrieves the dictionary of parameters (keys, values) passed to the constructor of a class
102         derived (also indirectly) from the Neural Module class.
103
104 Returns:
105 Dictionary containing parameters passed to init().
106 """
107 # Get names of arguments of the original module init method.
108 init_keys = getfullargspec(type(self).__init__).args
109
110 # Remove self.
111 if "self" in init_keys:
112 init_keys.remove("self")
113
114 # Create list of params.
115 init_params = {}.fromkeys(init_keys)
116
117 # Retrieve values of those params from the call list.
118 for frame in stack()[1:]:
119 localvars = getargvalues(frame[0]).locals
120 # print("localvars: ", localvars)
121 for key in init_keys:
122 # Found the variable!
123 if key in localvars.keys():
124 # Save the value.
125 init_params[key] = localvars[key]
126
127 # Return parameters.
128 return init_params
129
130 def __validate_params(self, params):
131 """
132 Checks whether dictionary contains parameters being primitive types (string, int, float etc.)
133 or (lists of)+ primitive types.
134
135 Args:
136 params: dictionary of parameters.
137
138 Returns:
139 True if all parameters were ok, False otherwise.
140 """
141 ok = True
142
143 # Iterate over parameters and check them one by one.
144 for key, variable in params.items():
145 if not self.__is_of_allowed_type(variable):
146 logging.warning(
147 "Parameter '{}' contains a variable '{}' of type '{}' which is not allowed.".format(
148 key, variable, type(variable)
149 )
150 )
151 ok = False
152
153 # Return the result.
154 return ok
155
156 def __is_of_allowed_type(self, var):
157 """
158 A recursive function that checks if a given variable is of allowed type.
159
160         Args:
161             var: variable to be checked.
162
163 Returns:
164 True if all parameters were ok, False otherwise.
165 """
166 # Special case: None is also allowed.
167 if var is None:
168 return True
169
170 var_type = type(var)
171
172 # If this is list - check its elements.
173 if var_type == list:
174 for list_var in var:
175 if not self.__is_of_allowed_type(list_var):
176 return False
177
178 # If this is dict - check its elements.
179 elif var_type == dict:
180 for _, dict_var in var.items():
181 if not self.__is_of_allowed_type(dict_var):
182 return False
183
184 elif var_type not in (str, int, float, bool):
185 return False
186
187 # Well, seems that everything is ok.
188 return True
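The recursive allow-list check can be sketched as a standalone function; `is_exportable` and `ALLOWED_TYPES` are illustrative names, not NeMo API:

```python
ALLOWED_TYPES = (str, int, float, bool)


def is_exportable(var):
    """Recursively check that `var` is a primitive, or (lists/dicts of)+ primitives."""
    if var is None:  # None is explicitly allowed, mirroring the special case above.
        return True
    if isinstance(var, list):
        return all(is_exportable(v) for v in var)
    if isinstance(var, dict):
        return all(is_exportable(v) for v in var.values())
    return type(var) in ALLOWED_TYPES


print(is_exportable({"lr": 0.1, "layers": [64, 64], "name": None}))  # True
print(is_exportable({"activation": len}))  # False: a function is not serializable
```

A parameter dictionary passing this check is safe to dump to YAML, which is why the generic `export_to_config` refuses anything that fails it.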
189
190 def _create_config_header(self):
191 """ A protected method that create a header stored later in the configuration file. """
192
193 # Get module "full specification".
194 module_full_spec = str(self.__module__) + "." + str(self.__class__.__qualname__)
195 module_class_name = type(self).__name__
196 # print(module_full_spec)
197
198 # Check whether module belongs to a collection.
199 spec_list = module_full_spec.split(".")
200
201 # Do not check Neural Modules from unit tests.
202 if spec_list[0] == "tests":
203 # Set collection variables.
204 collection_type = "tests"
205 collection_version = None
206 else:
207 # Check if component belongs to any collection
208 if len(spec_list) < 3 or (spec_list[0] != "nemo" and spec_list[1] != "collection"):
209 logging.warning(
210 "Module `{}` does not belong to any collection. This won't be allowed in the next release.".format(
211 module_class_name
212 )
213 )
214 collection_type = "unknown"
215 collection_version = None
216 else:
217 # Ok, set collection.
218 collection_type = spec_list[2]
219 collection_version = None
220 # TODO: to be SET!
221 # print(getattr("nemo.collections.nlp", __version__))
222
223 # Create a "header" with module "specification".
224 header = {
225 "nemo_core_version": nemo_version,
226 "collection_type": collection_type,
227 "collection_version": collection_version,
228 # "class": module_class_name, # Operating only on full_spec now.
229 "full_spec": module_full_spec,
230 }
231 return header
232
233 def export_to_config(self, config_file):
234 """
235 A function that exports module "configuration" (i.e. init parameters) to a YAML file.
236         Raises a ValueError exception in case the parameters couldn't be exported.
237
238 Args:
239 config_file: path (absolute or relative) and name of the config file (YML)
240 """
241 # Check if generic export will work.
242 if not self.__validate_params(self._init_params):
243 raise ValueError(
244                 "Generic configuration export supports only the use of parameters of primitive types (string, int, float) "
245 F"or (lists of/dicts of) primitive types. Please implement your own custom `export_to_config()` and "
246 F"`import_from_config()` methods for your custom Module class."
247 )
248
249         # Create an absolute path.
250 abs_path_file = path.expanduser(config_file)
251
252 # Create the dictionary to be exported.
253 to_export = {}
254
255 # Add "header" with module "specification".
256 to_export["header"] = self._create_config_header()
257
258 # Add init parameters.
259 to_export["init_params"] = self._init_params
260 # print(to_export)
261
262 # All parameters are ok, let's export.
263 with open(abs_path_file, 'w') as outfile:
264 YAML.dump(to_export, outfile)
265
266 logging.info(
267 "Configuration of module {} ({}) exported to {}".format(self._uuid, type(self).__name__, abs_path_file)
268 )
269
270 @classmethod
271 def _validate_config_file(cls, config_file, section_name=None):
272 """
273 Class method validating whether the config file has a proper content (sections, specification etc.).
274 Raises an ImportError exception when config file is invalid or
275 incompatible (when called from a particular class).
276
277 Args:
278 config_file: path (absolute or relative) and name of the config file (YML)
279
280 section_name: section in the configuration file storing module configuration (optional, DEFAULT: None)
281
282 Returns:
283 A loaded configuration file (dictionary).
284 """
285         # Create an absolute path.
286 abs_path_file = path.expanduser(config_file)
287
288 # Open the config file.
289 with open(abs_path_file, 'r') as stream:
290 loaded_config = YAML.load(stream)
291
292 # Check section.
293 if section_name is not None:
294 if section_name not in loaded_config:
295 raise ImportError(
296 "The loaded config `{}` doesn't contain the indicated `{}` section".format(
297 config_file, section_name
298 )
299 )
300 # Section exists - use only it for configuration.
301 loaded_config = loaded_config[section_name]
302
303 # Make sure that the config is valid.
304 if "header" not in loaded_config:
305 raise ImportError("The loaded config `{}` doesn't contain the `header` section".format(config_file))
306
307 if "init_params" not in loaded_config:
308 raise ImportError("The loaded config `{}` doesn't contain the `init_params` section".format(config_file))
309
310 # Parse the "full specification".
311 spec_list = loaded_config["header"]["full_spec"].split(".")
312
313 # Check if config contains data of a compatible class.
314 if cls.__name__ != "NeuralModule" and spec_list[-1] != cls.__name__:
315 txt = "The loaded file `{}` contains configuration of ".format(config_file)
316 txt = txt + "`{}` thus cannot be used for instantiation of an object of type `{}`".format(
317 spec_list[-1], cls.__name__
318 )
319 raise ImportError(txt)
320
321 # Success - return configuration.
322 return loaded_config
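The section/header validation logic boils down to a few dictionary checks plus a comparison against the last element of the dotted `full_spec`. A minimal sketch, operating on a plain dict instead of a loaded YAML file (`validate_loaded_config` is a hypothetical name):

```python
def validate_loaded_config(cfg, expected_class=None):
    """Check that a loaded config dict has the sections the importer relies on."""
    if "header" not in cfg:
        raise ImportError("config is missing the `header` section")
    if "init_params" not in cfg:
        raise ImportError("config is missing the `init_params` section")
    # The last element of the dotted spec is the class name.
    spec_list = cfg["header"]["full_spec"].split(".")
    if expected_class is not None and spec_list[-1] != expected_class:
        raise ImportError(
            "config describes `{}`, cannot instantiate `{}`".format(spec_list[-1], expected_class)
        )
    return cfg


cfg = {"header": {"full_spec": "nemo.collections.asr.JasperEncoder"}, "init_params": {}}
validate_loaded_config(cfg, expected_class="JasperEncoder")  # passes
```

Raising `ImportError` for a class mismatch mirrors the behavior above: calling `SomeModule.import_from_config` on a config written by a different module class fails fast instead of instantiating the wrong type.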
323
324 @classmethod
325 def import_from_config(cls, config_file, section_name=None, overwrite_params={}):
326 """
327 Class method importing the configuration file.
328 Raises an ImportError exception when config file is invalid or
329 incompatible (when called from a particular class).
330
331 Args:
332 config_file: path (absolute or relative) and name of the config file (YML)
333
334 section_name: section in the configuration file storing module configuration (optional, DEFAULT: None)
335
336 overwrite_params: Dictionary containing parameters that will be added to or overwrite (!) the default
337 parameters loaded from the configuration file
338
339 Returns:
340 Instance of the created NeuralModule object.
341 """
342 # Validate the content of the configuration file (its header).
343 loaded_config = cls._validate_config_file(config_file, section_name)
344
345 # Parse the "full specification".
346 spec_list = loaded_config["header"]["full_spec"].split(".")
347
348 # Get object class from "full specification".
349 mod_obj = __import__(spec_list[0])
350 for spec in spec_list[1:]:
351 mod_obj = getattr(mod_obj, spec)
352 # print(mod_obj)
353
354 # Get init parameters.
355 init_params = loaded_config["init_params"]
356 # Update parameters with additional ones.
357 init_params.update(overwrite_params)
358
359 # Create and return the object.
360 obj = mod_obj(**init_params)
361 logging.info(
362 "Instantiated a new Neural Module of type `{}` using configuration loaded from the `{}` file".format(
363 spec_list[-1], config_file
364 )
365 )
366 return obj
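The dynamic class lookup used above (`__import__` on the first path element, then a chain of `getattr` calls) can be exercised on its own. `resolve_class` is an illustrative name; the sketch assumes each dotted element is reachable as an attribute of the previous one, which holds for class names and for submodules the package imports itself:

```python
def resolve_class(full_spec):
    """Import the top-level module, then walk attributes down the dotted path."""
    spec_list = full_spec.split(".")
    obj = __import__(spec_list[0])  # e.g. the `collections` module
    for name in spec_list[1:]:
        obj = getattr(obj, name)    # e.g. collections -> OrderedDict
    return obj


ODict = resolve_class("collections.OrderedDict")
od = ODict(a=1)  # the resolved object is directly callable
```

This is why the config header stores the *full* specification: the string alone is enough to rebuild the class object and call it with the stored `init_params`.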
367
368 @deprecated(version=0.11)
369 @staticmethod
370 def create_ports(**kwargs):
371         """ Deprecated method, to be removed in the next release."""
372 raise Exception(
373 'Deprecated method. Please implement ``inputs`` and ``outputs`` \
374 properties to define module ports instead'
375 )
376
377 @property
378 @abstractmethod
379 def input_ports(self) -> Optional[Dict[str, NeuralType]]:
380 """Returns definitions of module input ports
381
382 Returns:
383 A (dict) of module's input ports names to NeuralTypes mapping
384 """
385
386 @property
387 @abstractmethod
388 def output_ports(self) -> Optional[Dict[str, NeuralType]]:
389 """Returns definitions of module output ports
390
391 Returns:
392 A (dict) of module's output ports names to NeuralTypes mapping
393 """
394
395 @property
396 def disabled_deployment_input_ports(self) -> Optional[Set[str]]:
397 """Returns names of input ports that will not be included in an export
398
399 Returns:
400 A (set) of module's input port names that are not exportable
401 """
402 return set([])
403
404 @property
405 def disabled_deployment_output_ports(self) -> Optional[Set[str]]:
406 """Returns names of output ports that will not be included in an export
407
408 Returns:
409 A (set) of module's output port names that are not exportable
410 """
411 return set([])
412
413 def prepare_for_deployment(self) -> None:
414 """Patch the module if required to prepare for deployment
415
416 """
417 return
418
419 @staticmethod
420 def pretrained_storage():
421 return ''
422
423 def __call__(self, **kwargs):
424 """This method allows objects to be called with their port names
425
426 Args:
427 kwargs: Input ports and their values. For example:
428 ...
429 mymodule1 = Subclass1_of_NeuralModule(...)
430 mymodule2 = Subclass2_of_NeuralModule(...)
431 ...
432 out_port1, out_port2 = mymodule1(input_port1=value1,
433 input_port2=value2,
434 input_port3=value3)
435 out_port11 = mymodule2(input_port1=out_port2)
436 ...
437
438 Returns:
439 NmTensor object or tuple of NmTensor objects
440 """
441 # Get input and output ports definitions.
442 input_port_defs = self.input_ports
443 output_port_defs = self.output_ports
444
445 first_input_nmtensor_type = None
446 input_nmtensors_are_of_same_type = True
447 for port_name, tgv in kwargs.items():
448 # make sure that passed arguments correspond to input port names
449 if port_name not in input_port_defs.keys():
450 raise NeuralPortNameMismatchError("Wrong input port name: {0}".format(port_name))
451
452 input_port = input_port_defs[port_name]
453 type_comatibility = input_port.compare(tgv)
454 if (
455 type_comatibility != NeuralTypeComparisonResult.SAME
456 and type_comatibility != NeuralTypeComparisonResult.GREATER
457 ):
458 raise NeuralPortNmTensorMismatchError(
459 "\n\nIn {0}. \n"
460 "Port: {1} and a NmTensor it was fed are \n"
461 "of incompatible neural types:\n\n{2} \n\n and \n\n{3}"
462 "\n\nType comparison result: {4}".format(
463 self.__class__.__name__, port_name, input_port_defs[port_name], tgv, type_comatibility,
464 )
465 )
466
467 # if first_input_nmtensor_type is None:
468 # first_input_nmtensor_type = NeuralType(tgv._axis2type)
469 # else:
470 # if first_input_nmtensor_type._axis2type is None:
471 # input_nmtensors_are_of_same_type = True
472 # else:
473 # input_nmtensors_are_of_same_type = first_input_nmtensor_type.compare(
474 # tgv
475 # ) == NeuralTypeComparisonResult.SAME and len(first_input_nmtensor_type._axis2type)
476 # if not (
477 # type_comatibility == NeuralTypeComparisonResult.SAME
478 # or type_comatibility == NeuralTypeComparisonResult.GREATER
479 # ):
480 # raise NeuralPortNmTensorMismatchError(
481 # "\n\nIn {0}. \n"
482 # "Port: {1} and a NmTensor it was fed are \n"
483 # "of incompatible neural types:\n\n{2} \n\n and \n\n{3}"
484 # "\n\nType comparison result: {4}".format(
485 # self.__class__.__name__, port_name, input_port_defs[port_name], tgv, type_comatibility,
486 # )
487 # )
488 # if type_comatibility == NeuralTypeComparisonResult.LESS:
489 # print('Types were raised')
490
491 if len(output_port_defs) == 1:
492 out_name = list(output_port_defs)[0]
493 out_type = output_port_defs[out_name]
494 if out_type is None:
495 if input_nmtensors_are_of_same_type:
496 out_type = first_input_nmtensor_type
497 else:
498 raise CanNotInferResultNeuralType(
499 "Can't infer output neural type. Likely your inputs are of different type."
500 )
501 return NmTensor(producer=self, producer_args=kwargs, name=out_name, ntype=out_type,)
502 else:
503 result = []
504 for out_port, n_type in output_port_defs.items():
505 out_type = n_type
506 if out_type is None:
507 if input_nmtensors_are_of_same_type:
508 out_type = first_input_nmtensor_type
509 else:
510 raise CanNotInferResultNeuralType(
511 "Can't infer output neural type. Likely your inputs are of different type."
512 )
513 result.append(NmTensor(producer=self, producer_args=kwargs, name=out_port, ntype=out_type,))
514
515 # Creating ad-hoc class for returning from module's forward pass.
516 output_class_name = f'{self.__class__.__name__}Output'
517 field_names = list(output_port_defs)
518 result_type = collections.namedtuple(typename=output_class_name, field_names=field_names,)
519
520 # Tie tuple of output tensors with corresponding names.
521 result = result_type(*result)
522
523 return result
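The multi-output branch above names its results by building an ad-hoc `namedtuple` class per module. A small standalone sketch of that pattern (`bundle_outputs` is a hypothetical helper, and plain strings stand in for `NmTensor` objects):

```python
import collections


def bundle_outputs(producer_name, port_values):
    """Tie output values to their port names via an ad-hoc namedtuple class."""
    output_class_name = "{}Output".format(producer_name)
    # Field order follows the (insertion-ordered) dict of port definitions.
    result_type = collections.namedtuple(output_class_name, list(port_values))
    return result_type(*port_values.values())


out = bundle_outputs("JasperEncoder", {"outputs": "t1", "encoded_lengths": "t2"})
print(out.outputs, out.encoded_lengths)  # t1 t2
```

Callers can then either unpack the result positionally, as a plain tuple, or access each tensor by its port name, which keeps call sites readable when a module has many outputs.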
524
525 def __str__(self):
526 return self.__class__.__name__
527
528 @abstractmethod
529 def get_weights(self) -> Optional[Dict[(str, bool)]]:
530 """Returns NeuralModule's weights copy.
531
532 Returns:
533 Dictionary of name -> (weights, trainable)"""
534 pass
535
536 @abstractmethod
537 def set_weights(
538 self,
539 name2weight: Dict[(str, Tuple[str, bool])],
540 name2name_and_transform: Dict[(str, Tuple[str, WeightShareTransform])] = None,
541 ):
542 """Sets weight from given values. For every named weight in
543 name2weight,
544 if weight with the same name is found in the model, it will be set to
545 found value.
546
547 WARNING: This will NOT tie weights. It will copy values.
548
549         If ``name2name_and_transform`` is provided then it will set weights
550         using
551         name mapping and transform. For example, suppose ``object1.X = 3x5
552         weight``.
553         Then, if ``name2name_and_transform['X']=('Y',
554         WeightShareTransform.TRANSPOSE)``
555         and ``Y`` is a 5x3 weight and ``name2weight['Y'] = Y``. Then:
556 ``object1.set_weights(name2weight, name2name_and_transform)`` will
557 set object1.X=transpose(Y).
558
559 Args:
560 name2weight (dict): dictionary of name to (weight, trainable).
561 Typically this is output of get_weights method.
562 name2name_and_transform: mapping from name -> (name, transform)
563 """
564 pass
565
566 @staticmethod
567 def list_pretrained_models() -> Optional[List[PretrainedModelInfo]]:
568 """List all available pre-trained models (e.g. weights) for this NM.
569
570 Returns:
571 A list of PretrainedModelInfo tuples.
572 The pretrained_model_name field of the tuple can be used to
573 retrieve pre-trained model's weights (pass it as
574 pretrained_model_name argument to the module's constructor)
575 """
576 return None
577
578 def get_config_dict_and_checkpoint(self, pretrained_model_name):
579 """WARNING: This part is work in progress"""
580 return None
581
582 @abstractmethod
583 def tie_weights_with(
584 self,
585 module,
586 weight_names=List[str],
587 name2name_and_transform: Dict[(str, Tuple[str, WeightShareTransform])] = None,
588 ):
589 """Ties weights between self and module. For every weight name in
590 weight_names, if weight with the same name is found in self, it will
591 be tied
592         with the same weight from ``module``.
593 
594         WARNING: Once weights are tied, updates to one module's weights
595         will affect
596         the other module's weights.
597
598
599         If ``name2name_and_transform`` is provided then it will set weights
600         using
601         name mapping and transform. For example, suppose ``object1.X = 3x5
602 weights``
603 and ``object2.Y = 5x3 weights``. Then these weights can be tied like
604 this:
605
606 .. code-block:: python
607
608 object1.tie_weights_with(object2, weight_names=['X'],
609 name2name_and_transform =
610 { 'X': ('Y', WeightShareTransform.TRANSPOSE)})
611
612
613 Args:
614 module: with which module to tie weights
615 weight_names (List[str]): list of self weights' names
616 name2name_and_transform: mapping from name -> (name, transform)
617 """
618 pass
619
620 def is_trainable(self) -> bool:
621 """
622 Checks if NeuralModule is trainable.
623 A NeuralModule is trainable IFF it contains at least one trainable
624 weight
625
626 Returns:
627 True if module has trainable weights, False otherwise
628 """
629 weights = self.get_weights()
630 if weights is None:
631 return False
632 for name, w in weights.items():
633 if w[1]:
634 return True
635 return False
636
637 @abstractmethod
638 def save_to(self, path: str):
639 """Save module state to file.
640
641 Args:
642             path (string): path to file where to save.
643 """
644 pass
645
646 @abstractmethod
647 def restore_from(self, path: str):
648 """Restore module's state from file.
649
650 Args:
651 path (string): path to where to restore from.
652 """
653 pass
654
655 @abstractmethod
656 def freeze(self, weights: Set[str] = None):
657 """Freeze weights
658
659 Args:
660 weights (set): set of weight names to freeze
661                 If None, all weights are frozen.
662 """
663 pass
664
665 @abstractmethod
666 def unfreeze(self, weights: Set[str] = None):
667 """Unfreeze weights
668
669 Args:
670 weights (set): set of weight names to unfreeze
671                 If None, all weights are unfrozen.
672 """
673 pass
674
675 @property
676 def placement(self):
677 """Module's placement. Currently CPU or GPU.
678 DataParallel and ModelParallel will come later.
679
680 Returns:
681 (DeviceType) Device where NM's weights are located
682 """
683 return self._placement
684
685 @property
686 @deprecated(version=0.11)
687 def local_parameters(self) -> Optional[Dict]:
688 """Get module's parameters
689
690 Returns:
691 module's parameters
692 """
693 return self._init_params
694 # return self._local_parameters
695
696 @property
697 def unique_instance_id(self):
698 """A unique instance id for this object
699
700 Returns:
701             A unique uuid which can be used to identify this object
702 """
703 return self._uuid
704
705 @property
706 def factory(self):
707 """ Neural module factory which created this module
708 Returns: NeuralModuleFactory instance or None
709 """
710 return self._factory
711
712 @property
713 @abstractmethod
714 def num_weights(self):
715 """Number of module's weights
716 """
717 pass
718
[end of nemo/core/neural_modules.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
NVIDIA/NeMo
|
ba4616f1f011d599de87f0cb3315605e715d402a
|
Jasper Encoder Export failed
The export of Jasper Encoder is failing. I am using the core API [deployment_export](https://nvidia.github.io/NeMo/api-docs/nemo.html#nemo.core.neural_factory.NeuralModuleFactory.deployment_export) like in the script: https://github.com/NVIDIA/NeMo/blob/403238f82d26879ba5fca53fbf75b3cdc70fb49b/scripts/export_jasper_to_onnx.py#L92
I believe the issue (as shown below) is that the `input_example` provided does not match the `output_example`.
```
/opt/conda/lib/python3.6/site-packages/torch/jit/__init__.py:1023: TracerWarning: Output nr 1. of the traced function does not match the corresponding output of the Python function. Detailed error:
Not within tolerance rtol=1e-05 atol=1e-05 at input[0, 870, 67] (0.6547648906707764 vs. 0.6546438932418823) and 812 other locations (0.00%)
check_tolerance, _force_outplace, True, _module_class)
[NeMo E 2020-02-23 19:10:07 actions:1023] module export failed for JasperEncoder with exception number of output names provided (2) exceeded number of outputs (1)
```
**What is the correct `input_example` and `output_example` to export JasperEncoder?**
The full output can be seen here:
```
adrianaf@2a520c7abb1e:/tmp/NeMo$ ! python /tmp/NeMo/scripts/export_jasper_to_onnx.py --config /raid/datasets/asr/data/config_files/WSJ-test_acoustic_quartznet15x5.yaml --nn_encoder /home/adrianaf/projects/nemo_asr_app/models/quartznet15x5/JasperEncoder-STEP-247400.pt --nn_decoder /home/adrianaf/projects/nemo_asr_app/models/quartznet15x5/JasperDecoderForCTC-STEP-247400.pt --onnx_encoder /raid/datasets/asr/data/models/ONNX/pre-trained_encoder.onnx --onnx_decoder /raid/datasets/asr/data/models/ONNX/pre-trained_decoder.onnx
/opt/conda/lib/python3.6/site-packages/torchvision/io/_video_opt.py:17: UserWarning: video reader based on ffmpeg c++ ops not available
warnings.warn("video reader based on ffmpeg c++ ops not available")
/tmp/NeMo/nemo/collections/asr/audio_preprocessing.py:48: UserWarning: Could not import torchaudio. Some features might not work.
warnings.warn('Could not import torchaudio. Some features might not work.')
[NeMo I 2020-02-23 19:09:42 export_jasper_to_onnx:48] Loading config file...
[NeMo I 2020-02-23 19:09:42 export_jasper_to_onnx:52] Determining model shape...
[NeMo I 2020-02-23 19:09:42 export_jasper_to_onnx:60] Num encoder input features: 64
[NeMo I 2020-02-23 19:09:42 export_jasper_to_onnx:61] Num decoder input features: 1024
[NeMo W 2020-02-23 19:09:42 deprecated:68] Function ``_get_trainer`` is deprecated. It is going to be removed in the future version.
[NeMo I 2020-02-23 19:09:42 export_jasper_to_onnx:65] Initializing models...
[NeMo I 2020-02-23 19:09:45 export_jasper_to_onnx:76] Loading checkpoints...
[NeMo I 2020-02-23 19:09:45 export_jasper_to_onnx:91] Exporting encoder...
[NeMo W 2020-02-23 19:09:45 neural_factory:627] Turned off 170 masked convolutions
[NeMo I 2020-02-23 19:09:45 actions:937] Module is JasperEncoder. We are removing input and output length ports since they are not needed for deployment
[NeMo W 2020-02-23 19:09:46 deprecated:68] Function ``local_parameters`` is deprecated. It is going to be removed in the 0.11 version.
/opt/conda/lib/python3.6/site-packages/torch/jit/__init__.py:1023: TracerWarning: Output nr 1. of the traced function does not match the corresponding output of the Python function. Detailed error:
Not within tolerance rtol=1e-05 atol=1e-05 at input[0, 870, 67] (0.6547648906707764 vs. 0.6546438932418823) and 812 other locations (0.00%)
check_tolerance, _force_outplace, True, _module_class)
[NeMo E 2020-02-23 19:10:07 actions:1023] module export failed for JasperEncoder with exception number of output names provided (2) exceeded number of outputs (1)
[NeMo I 2020-02-23 19:10:07 export_jasper_to_onnx:98] Exporting decoder...
graph(%encoder_output : Float(1, 1024, 128),
%1 : Float(29, 1024, 1),
%2 : Float(29)):
%3 : Float(1, 29, 128) = onnx::Conv[dilations=[1], group=1, kernel_shape=[1], pads=[0, 0], strides=[1]](%encoder_output, %1, %2), scope: JasperDecoderForCTC/Sequential[decoder_layers]/Conv1d[0] # /opt/conda/lib/python3.6/site-packages/torch/nn/modules/conv.py:202:0
%4 : Float(1, 128, 29) = onnx::Transpose[perm=[0, 2, 1]](%3), scope: JasperDecoderForCTC # /tmp/NeMo/nemo/collections/asr/jasper.py:235:0
%output : Float(1, 128, 29) = onnx::LogSoftmax[axis=2](%4), scope: JasperDecoderForCTC # /opt/conda/lib/python3.6/site-packages/torch/nn/functional.py:1317:0
return (%output)
/opt/conda/lib/python3.6/site-packages/torch/onnx/utils.py:774: UserWarning: No names were found for specified dynamic axes of provided input.Automatically generated names will be applied to each dynamic axes of input encoder_output
'Automatically generated names will be applied to each dynamic axes of input {}'.format(key))
/opt/conda/lib/python3.6/site-packages/torch/onnx/utils.py:774: UserWarning: No names were found for specified dynamic axes of provided input.Automatically generated names will be applied to each dynamic axes of input output
'Automatically generated names will be applied to each dynamic axes of input {}'.format(key))
[NeMo I 2020-02-23 19:10:07 export_jasper_to_onnx:105] Export completed successfully.
```
|
2020-03-10T03:03:23Z
|
<patch>
diff --git a/nemo/backends/pytorch/actions.py b/nemo/backends/pytorch/actions.py
--- a/nemo/backends/pytorch/actions.py
+++ b/nemo/backends/pytorch/actions.py
@@ -937,26 +937,16 @@ def __extract_dynamic_axes(port_name: str, ntype: NeuralType, dynamic_axes: defa
if axis.kind == AxisKind.Batch or axis.kind == AxisKind.Time:
dynamic_axes[port_name].append(ind)
- # This is a hack for Jasper to Jarvis export -- need re-design for this
- inputs_to_drop = set()
- outputs_to_drop = set()
- if type(module).__name__ == "JasperEncoder":
- logging.info(
- "Module is JasperEncoder. We are removing input and output length ports since they are not needed for "
- "deployment"
- )
- inputs_to_drop.add("length")
- outputs_to_drop.add("encoded_lengths")
-
+ # extract dynamic axes and remove unnecessary inputs/outputs
# for input_ports
for port_name, ntype in module.input_ports.items():
- if port_name in inputs_to_drop:
+ if port_name in module._disabled_deployment_input_ports:
input_names.remove(port_name)
continue
__extract_dynamic_axes(port_name, ntype, dynamic_axes)
# for output_ports
for port_name, ntype in module.output_ports.items():
- if port_name in outputs_to_drop:
+ if port_name in module._disabled_deployment_output_ports:
output_names.remove(port_name)
continue
__extract_dynamic_axes(port_name, ntype, dynamic_axes)
diff --git a/nemo/collections/asr/jasper.py b/nemo/collections/asr/jasper.py
--- a/nemo/collections/asr/jasper.py
+++ b/nemo/collections/asr/jasper.py
@@ -118,14 +118,14 @@ def output_ports(self):
}
@property
- def disabled_deployment_input_ports(self):
+ def _disabled_deployment_input_ports(self):
return set(["length"])
@property
- def disabled_deployment_output_ports(self):
+ def _disabled_deployment_output_ports(self):
return set(["encoded_lengths"])
- def prepare_for_deployment(self):
+ def _prepare_for_deployment(self):
m_count = 0
for m in self.modules():
if type(m).__name__ == "MaskedConv1d":
diff --git a/nemo/core/neural_factory.py b/nemo/core/neural_factory.py
--- a/nemo/core/neural_factory.py
+++ b/nemo/core/neural_factory.py
@@ -610,7 +610,7 @@ def deployment_export(
input_example: sometimes tracing will require input examples
output_example: Should match inference on input_example
"""
- module.prepare_for_deployment()
+ module._prepare_for_deployment()
return self._trainer.deployment_export(
module=module,
diff --git a/nemo/core/neural_modules.py b/nemo/core/neural_modules.py
--- a/nemo/core/neural_modules.py
+++ b/nemo/core/neural_modules.py
@@ -393,7 +393,7 @@ def output_ports(self) -> Optional[Dict[str, NeuralType]]:
"""
@property
- def disabled_deployment_input_ports(self) -> Optional[Set[str]]:
+ def _disabled_deployment_input_ports(self) -> Optional[Set[str]]:
"""Returns names of input ports that will not be included in an export
Returns:
@@ -402,7 +402,7 @@ def disabled_deployment_input_ports(self) -> Optional[Set[str]]:
return set([])
@property
- def disabled_deployment_output_ports(self) -> Optional[Set[str]]:
+ def _disabled_deployment_output_ports(self) -> Optional[Set[str]]:
"""Returns names of output ports that will not be included in an export
Returns:
@@ -410,7 +410,7 @@ def disabled_deployment_output_ports(self) -> Optional[Set[str]]:
"""
return set([])
- def prepare_for_deployment(self) -> None:
+ def _prepare_for_deployment(self) -> None:
"""Patch the module if required to prepare for deployment
"""
</patch>
|
diff --git a/tests/unit/core/test_deploy_export.py b/tests/unit/core/test_deploy_export.py
--- a/tests/unit/core/test_deploy_export.py
+++ b/tests/unit/core/test_deploy_export.py
@@ -46,9 +46,11 @@
import nemo.collections.nlp.nm.trainables.common.token_classification_nm
from nemo import logging
+TRT_ONNX_DISABLED = False
+
# Check if the required libraries and runtimes are installed.
+# Only initialize GPU after this runner is activated.
try:
- # Only initialize GPU after this runner is activated.
import pycuda.autoinit
# This import causes pycuda to automatically manage CUDA context creation and cleanup.
@@ -63,16 +65,17 @@
)
from .tensorrt_runner import TensorRTRunnerV2
except:
- # Skip tests.
- pytestmark = pytest.mark.skip
+ TRT_ONNX_DISABLED = True
@pytest.mark.usefixtures("neural_factory")
class TestDeployExport(TestCase):
- def setUp(self):
- logging.setLevel(logging.WARNING)
- device = nemo.core.DeviceType.GPU
- self.nf = nemo.core.NeuralModuleFactory(backend=nemo.core.Backend.PyTorch, placement=device)
+ # def setUp(self):
+ # super().setUp()
+
+ # logging.setLevel(logging.WARNING)
+ # device = nemo.core.DeviceType.GPU
+ # self.nf = nemo.core.NeuralModuleFactory(backend=nemo.core.Backend.PyTorch, placement=device)
def __test_export_route(self, module, out_name, mode, input_example=None):
out = Path(out_name)
@@ -112,7 +115,13 @@ def __test_export_route(self, module, out_name, mode, input_example=None):
loader_cache = DataLoaderCache(data_loader)
profile_shapes = OrderedDict()
names = list(module.input_ports) + list(module.output_ports)
-
+ names = list(
+ filter(
+ lambda x: x
+ not in (module._disabled_deployment_input_ports | module._disabled_deployment_output_ports),
+ names,
+ )
+ )
if isinstance(input_example, tuple):
si = [tuple(input_example[i].shape) for i in range(len(input_example))]
elif isinstance(input_example, OrderedDict):
@@ -152,7 +161,7 @@ def __test_export_route(self, module, out_name, mode, input_example=None):
input_names = list(input_metadata.keys())
for i in range(len(input_names)):
input_name = input_names[i]
- if input_name in module.disabled_deployment_input_ports:
+ if input_name in module._disabled_deployment_input_ports:
continue
inputs[input_name] = (
input_example[input_name].cpu().numpy()
@@ -209,8 +218,8 @@ def __test_export_route(self, module, out_name, mode, input_example=None):
ort_inputs = ort_session.get_inputs()
for i in range(len(input_names)):
input_name = input_names[i]
- if input_name in module.disabled_deployment_input_ports:
- input_name = ort_inputs[i].name
+ if input_name in module._disabled_deployment_input_ports:
+ continue
inputs[input_name] = (
input_example[input_name].cpu().numpy()
if isinstance(input_example, OrderedDict)
@@ -263,9 +272,10 @@ def __test_export_route(self, module, out_name, mode, input_example=None):
def __test_export_route_all(self, module, out_name, input_example=None):
if input_example is not None:
- self.__test_export_route(
- module, out_name + '.trt.onnx', nemo.core.DeploymentFormat.TRTONNX, input_example=input_example
- )
+ if not TRT_ONNX_DISABLED:
+ self.__test_export_route(
+ module, out_name + '.trt.onnx', nemo.core.DeploymentFormat.TRTONNX, input_example=input_example
+ )
self.__test_export_route(module, out_name + '.onnx', nemo.core.DeploymentFormat.ONNX, input_example)
self.__test_export_route(module, out_name + '.pt', nemo.core.DeploymentFormat.PYTORCH, input_example)
self.__test_export_route(module, out_name + '.ts', nemo.core.DeploymentFormat.TORCHSCRIPT, input_example)
@@ -323,9 +333,7 @@ def test_jasper_encoder(self):
)
self.__test_export_route_all(
- module=jasper_encoder,
- out_name="jasper_encoder",
- input_example=(torch.randn(16, 64, 256).cuda(), torch.randn(256).cuda()),
+ module=jasper_encoder, out_name="jasper_encoder", input_example=torch.randn(16, 64, 256).cuda(),
)
@pytest.mark.unit
@@ -343,7 +351,5 @@ def test_quartz_encoder(self):
)
self.__test_export_route_all(
- module=jasper_encoder,
- out_name="quartz_encoder",
- input_example=(torch.randn(16, 64, 256).cuda(), torch.randint(20, (16,)).cuda()),
+ module=jasper_encoder, out_name="quartz_encoder", input_example=torch.randn(16, 64, 256).cuda(),
)
|
1.0
| ||||
NVIDIA__NeMo-3632
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
./reinstall.sh crashes due to not being able to uninstall llvmlite
Starting off of `nemo:1.5.1` container, cloning the NeMo repo to a folder inside of it and calling `./reinstall.sh` fails with
```
ERROR: Cannot uninstall 'llvmlite'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall.
```
`pip install -e`, on the other hand, succeeds, installing `nemo:1.7.0rc0` and `numpy:1.22.2`; the rest of the packages remain untouched.
It seems that `./reinstall.sh`, which worked fine a week or so ago when following the same procedure to upgrade to `nemo:1.6.0rc`, has redeveloped issue #841. The solution remains the same: first call
```
pip install --ignore-installed llvmlite
```
followed by `./reinstall.sh`. In this case, apart from `llvmlite`, the following packages are updated:
```
ftfy-6.0.3 nemo-toolkit-1.7.0rc0 numba-0.55.1 pytorch-lightning-1.5.9 sacrebleu-2.0.0 setuptools-59.5.0
```
Interestingly `numpy` in this case is left at `1.21.5`.
</issue>
<code>
[start of README.rst]
1
2 |status| |documentation| |license| |lgtm_grade| |lgtm_alerts| |black|
3
4 .. |status| image:: http://www.repostatus.org/badges/latest/active.svg
5 :target: http://www.repostatus.org/#active
6 :alt: Project Status: Active – The project has reached a stable, usable state and is being actively developed.
7
8 .. |documentation| image:: https://readthedocs.com/projects/nvidia-nemo/badge/?version=main
9 :alt: Documentation
10 :target: https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/
11
12 .. |license| image:: https://img.shields.io/badge/License-Apache%202.0-brightgreen.svg
13 :target: https://github.com/NVIDIA/NeMo/blob/master/LICENSE
14 :alt: NeMo core license and license for collections in this repo
15
16 .. |lgtm_grade| image:: https://img.shields.io/lgtm/grade/python/g/NVIDIA/NeMo.svg?logo=lgtm&logoWidth=18
17 :target: https://lgtm.com/projects/g/NVIDIA/NeMo/context:python
18 :alt: Language grade: Python
19
20 .. |lgtm_alerts| image:: https://img.shields.io/lgtm/alerts/g/NVIDIA/NeMo.svg?logo=lgtm&logoWidth=18
21 :target: https://lgtm.com/projects/g/NVIDIA/NeMo/alerts/
22 :alt: Total alerts
23
24 .. |black| image:: https://img.shields.io/badge/code%20style-black-000000.svg
25 :target: https://github.com/psf/black
26 :alt: Code style: black
27
28 .. _main-readme:
29
30 **NVIDIA NeMo**
31 ===============
32
33 Introduction
34 ------------
35
36 NVIDIA NeMo is a conversational AI toolkit built for researchers working on automatic speech recognition (ASR), natural language processing (NLP), and text-to-speech synthesis (TTS).
37 The primary objective of NeMo is to help researchers from industry and academia to reuse prior work (code and pretrained models) and make it easier to create new `conversational AI models <https://developer.nvidia.com/conversational-ai#started>`_.
38
39 `Pre-trained NeMo models. <https://catalog.ngc.nvidia.com/models?query=nemo&orderBy=weightPopularDESC>`_
40
41 `Introductory video. <https://www.youtube.com/embed/wBgpMf_KQVw>`_
42
43 Key Features
44 ------------
45
46 * Speech processing
47 * `Automatic Speech Recognition (ASR) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/intro.html>`_
48 * Supported models: Jasper, QuartzNet, CitriNet, Conformer-CTC, Conformer-Transducer, ContextNet, ...
49 * Supports CTC and Transducer/RNNT losses/decoders
50 * Beam Search decoding
51 * `Language Modelling for ASR <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/asr_language_modeling.html>`_: N-gram LM in fusion with Beam Search decoding, Neural Rescoring with Transformer
52 * Streaming and Buffered ASR (CTC/Transducer) - `Chunked Inference Examples <https://github.com/NVIDIA/NeMo/tree/main/examples/asr/asr_chunked_inference>`_
53 * `Speech Classification and Speech Command Recognition <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/speech_classification/intro.html>`_: MatchboxNet (Command Recognition)
54 * `Voice activity Detection (VAD) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/asr/speech_classification/models.html#marblenet-vad>`_: MarbleNet
55 * `Speaker Recognition <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/speaker_recognition/intro.html>`_: TitaNet, ECAPA_TDNN, SpeakerNet
56 * `Speaker Diarization <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/speaker_diarization/intro.html>`_: TitaNet, ECAPA_TDNN, SpeakerNet
57 * `Pretrained models on different languages. <https://ngc.nvidia.com/catalog/collections/nvidia:nemo_asr>`_: English, Spanish, German, Russian, Chinese, French, Italian, Polish, ...
58 * `NGC collection of pre-trained speech processing models. <https://ngc.nvidia.com/catalog/collections/nvidia:nemo_asr>`_
59 * Natural Language Processing
60 * `Compatible with Hugging Face Transformers and NVIDIA Megatron <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/megatron_finetuning.html>`_
61 * `Neural Machine Translation (NMT) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/machine_translation.html>`_
62 * `Punctuation and Capitalization <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/punctuation_and_capitalization.html>`_
63 * `Token classification (named entity recognition) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/token_classification.html>`_
64 * `Text classification <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/text_classification.html>`_
65 * `Joint Intent and Slot Classification <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/joint_intent_slot.html>`_
66 * `BERT pre-training <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/bert_pretraining.html>`_
67 * `Question answering <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/question_answering.html>`_
68 * `GLUE benchmark <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/glue_benchmark.html>`_
69 * `Information retrieval <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/information_retrieval.html>`_
70 * `Entity Linking <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/entity_linking.html>`_
71 * `Dialogue State Tracking <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/sgd_qa.html>`_
72 * `Neural Duplex Text Normalization <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/text_normalization.html>`_
73 * `Prompt Tuning <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/megatron_finetuning.html#prompt-tuning>`_
74 * `NGC collection of pre-trained NLP models. <https://ngc.nvidia.com/catalog/collections/nvidia:nemo_nlp>`_
75 * `Speech synthesis (TTS) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/tts/intro.html#>`_
76 * Spectrogram generation: Tacotron2, GlowTTS, TalkNet, FastPitch, FastSpeech2, Mixer-TTS, Mixer-TTS-X
77 * Vocoders: WaveGlow, SqueezeWave, UniGlow, MelGAN, HiFiGAN, UnivNet
78 * End-to-end speech generation: FastPitch_HifiGan_E2E, FastSpeech2_HifiGan_E2E
79 * `NGC collection of pre-trained TTS models. <https://ngc.nvidia.com/catalog/collections/nvidia:nemo_tts>`_
80 * `Tools <https://github.com/NVIDIA/NeMo/tree/main/tools>`_
81 * `Text Processing (text normalization and inverse text normalization) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/tools/text_processing_deployment.html>`_
82 * `CTC-Segmentation tool <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/tools/ctc_segmentation.html>`_
83 * `Speech Data Explorer <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/tools/speech_data_explorer.html>`_: a dash-based tool for interactive exploration of ASR/TTS datasets
84
85
86 Built for speed, NeMo can utilize NVIDIA's Tensor Cores and scale out training to multiple GPUs and multiple nodes.
87
88 Requirements
89 ------------
90
91 1) Python 3.6, 3.7 or 3.8
92 2) Pytorch 1.10.0 or above
93 3) NVIDIA GPU for training
94
95 Documentation
96 -------------
97
98 .. |main| image:: https://readthedocs.com/projects/nvidia-nemo/badge/?version=main
99 :alt: Documentation Status
100 :scale: 100%
101 :target: https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/
102
103 .. |stable| image:: https://readthedocs.com/projects/nvidia-nemo/badge/?version=stable
104 :alt: Documentation Status
105 :scale: 100%
106 :target: https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/
107
108 +---------+-------------+------------------------------------------------------------------------------------------------------------------------------------------+
109 | Version | Status | Description |
110 +=========+=============+==========================================================================================================================================+
111 | Latest | |main| | `Documentation of the latest (i.e. main) branch. <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/>`_ |
112 +---------+-------------+------------------------------------------------------------------------------------------------------------------------------------------+
113 | Stable | |stable| | `Documentation of the stable (i.e. most recent release) branch. <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/>`_ |
114 +---------+-------------+------------------------------------------------------------------------------------------------------------------------------------------+
115
116 Tutorials
117 ---------
118 A great way to start with NeMo is by checking `one of our tutorials <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/starthere/tutorials.html>`_.
119
120 Getting help with NeMo
121 ----------------------
122 FAQ can be found on NeMo's `Discussions board <https://github.com/NVIDIA/NeMo/discussions>`_. You are welcome to ask questions or start discussions there.
123
124
125 Installation
126 ------------
127
128 Pip
129 ~~~
130 Use this installation mode if you want the latest released version.
131
132 .. code-block:: bash
133
134 apt-get update && apt-get install -y libsndfile1 ffmpeg
135 pip install Cython
136 pip install nemo_toolkit['all']
137
138 .. note::
139
140 Depending on the shell used, you may need to use ``"nemo_toolkit[all]"`` instead in the above command.
141
142 Pip from source
143 ~~~~~~~~~~~~~~~
144 Use this installation mode if you want a version from a particular GitHub branch (e.g. main).
145
146 .. code-block:: bash
147
148 apt-get update && apt-get install -y libsndfile1 ffmpeg
149 pip install Cython
150 python -m pip install git+https://github.com/NVIDIA/NeMo.git@{BRANCH}#egg=nemo_toolkit[all]
151
152
153 From source
154 ~~~~~~~~~~~
155 Use this installation mode if you are contributing to NeMo.
156
157 .. code-block:: bash
158
159 apt-get update && apt-get install -y libsndfile1 ffmpeg
160 git clone https://github.com/NVIDIA/NeMo
161 cd NeMo
162 ./reinstall.sh
163
164 .. note::
165
166 If you only want the toolkit without additional conda-based dependencies, you may replace ``reinstall.sh``
167 with ``pip install -e .`` when your PWD is the root of the NeMo repository.
168
169 RNNT
170 ~~~~
171 Note that RNNT requires numba to be installed from conda.
172
173 .. code-block:: bash
174
175 conda remove numba
176 pip uninstall numba
177 conda install -c conda-forge numba
178
179 Megatron GPT
180 ~~~~~~~~~~~~
181 Megatron GPT training requires NVIDIA Apex to be installed.
182
183 .. code-block:: bash
184
185 git clone https://github.com/NVIDIA/apex
186 cd apex
187 git checkout c8bcc98176ad8c3a0717082600c70c907891f9cb
188 pip install -v --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" --global-option="--fast_layer_norm" ./
189
190 Docker containers:
191 ~~~~~~~~~~~~~~~~~~
192 To build a nemo container with Dockerfile from a branch, please run
193
194 .. code-block:: bash
195
196 DOCKER_BUILDKIT=1 docker build -f Dockerfile -t nemo:latest .
197
198
199 If you choose to work with the main branch, we recommend using NVIDIA's PyTorch container version 22.01-py3 and then installing from GitHub.
200
201 .. code-block:: bash
202
203 docker run --gpus all -it --rm -v <nemo_github_folder>:/NeMo --shm-size=8g \
204 -p 8888:8888 -p 6006:6006 --ulimit memlock=-1 --ulimit \
205 stack=67108864 --device=/dev/snd nvcr.io/nvidia/pytorch:22.01-py3
206
207 Examples
208 --------
209
210 Many examples can be found under `"Examples" <https://github.com/NVIDIA/NeMo/tree/stable/examples>`_ folder.
211
212
213 Contributing
214 ------------
215
216 We welcome community contributions! Please refer to `CONTRIBUTING.md <https://github.com/NVIDIA/NeMo/blob/stable/CONTRIBUTING.md>`_ for the process.
217
218 Publications
219 ------------
220
221 We provide an ever-growing list of publications that utilize the NeMo framework. Please refer to `PUBLICATIONS.md <https://github.com/NVIDIA/NeMo/blob/main/PUBLICATIONS.md>`_. We welcome the addition of your own articles to this list!
222
223 Citation
224 --------
225
226 .. code-block:: bash
227
228 @article{kuchaiev2019nemo,
229 title={Nemo: a toolkit for building ai applications using neural modules},
230 author={Kuchaiev, Oleksii and Li, Jason and Nguyen, Huyen and Hrinchuk, Oleksii and Leary, Ryan and Ginsburg, Boris and Kriman, Samuel and Beliaev, Stanislav and Lavrukhin, Vitaly and Cook, Jack and others},
231 journal={arXiv preprint arXiv:1909.09577},
232 year={2019}
233 }
234
235 License
236 -------
237 NeMo is under `Apache 2.0 license <https://github.com/NVIDIA/NeMo/blob/stable/LICENSE>`_.
238
[end of README.rst]
[start of /dev/null]
1
[end of /dev/null]
[start of nemo_text_processing/text_normalization/__init__.py]
1 # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from nemo.utils import logging
16
17 try:
18 import pynini
19
20 PYNINI_AVAILABLE = True
21 except (ModuleNotFoundError, ImportError):
22 logging.warning(
23 "`pynini` is not installed! \n"
24 "Please run the `nemo_text_processing/setup.sh` script "
25 "prior to usage of this toolkit."
26 )
27
28 PYNINI_AVAILABLE = False
29
[end of nemo_text_processing/text_normalization/__init__.py]
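The try/except guard in this `__init__.py` is the standard optional-dependency pattern: attempt the import and record a module-level availability flag. A minimal stdlib-only sketch of the same idea (the module names below are illustrative stand-ins, not part of NeMo):

```python
import importlib.util


def optional_flag(module_name: str) -> bool:
    """Return True if the named module is importable, without importing it."""
    return importlib.util.find_spec(module_name) is not None


# Mirrors PYNINI_AVAILABLE: True only when the optional dependency is present.
JSON_AVAILABLE = optional_flag("json")                   # stdlib, always present
MISSING_AVAILABLE = optional_flag("no_such_module_xyz")  # hypothetical name
```

Checking `find_spec` avoids the side effects of a real import; NeMo performs the real import because the module is used afterwards, catching `ModuleNotFoundError`/`ImportError` to set the flag.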
[start of nemo_text_processing/text_normalization/en/graph_utils.py]
1 # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
2 # Copyright 2015 and onwards Google, Inc.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15
16 import os
17 import string
18 from pathlib import Path
19 from typing import Dict
20
21 from nemo_text_processing.text_normalization.en.utils import get_abs_path
22
23 try:
24 import pynini
25 from pynini import Far
26 from pynini.export import export
27 from pynini.examples import plurals
28 from pynini.lib import byte, pynutil, utf8
29
30 NEMO_CHAR = utf8.VALID_UTF8_CHAR
31
32 NEMO_DIGIT = byte.DIGIT
33 NEMO_LOWER = pynini.union(*string.ascii_lowercase).optimize()
34 NEMO_UPPER = pynini.union(*string.ascii_uppercase).optimize()
35 NEMO_ALPHA = pynini.union(NEMO_LOWER, NEMO_UPPER).optimize()
36 NEMO_ALNUM = pynini.union(NEMO_DIGIT, NEMO_ALPHA).optimize()
37 NEMO_HEX = pynini.union(*string.hexdigits).optimize()
38 NEMO_NON_BREAKING_SPACE = u"\u00A0"
39 NEMO_SPACE = " "
40 NEMO_WHITE_SPACE = pynini.union(" ", "\t", "\n", "\r", u"\u00A0").optimize()
41 NEMO_NOT_SPACE = pynini.difference(NEMO_CHAR, NEMO_WHITE_SPACE).optimize()
42 NEMO_NOT_QUOTE = pynini.difference(NEMO_CHAR, r'"').optimize()
43
44 NEMO_PUNCT = pynini.union(*map(pynini.escape, string.punctuation)).optimize()
45 NEMO_GRAPH = pynini.union(NEMO_ALNUM, NEMO_PUNCT).optimize()
46
47 NEMO_SIGMA = pynini.closure(NEMO_CHAR)
48
49 delete_space = pynutil.delete(pynini.closure(NEMO_WHITE_SPACE))
50 insert_space = pynutil.insert(" ")
51 delete_extra_space = pynini.cross(pynini.closure(NEMO_WHITE_SPACE, 1), " ")
52 delete_preserve_order = pynini.closure(
53 pynutil.delete(" preserve_order: true")
54 | (pynutil.delete(" field_order: \"") + NEMO_NOT_QUOTE + pynutil.delete("\""))
55 )
56
57 suppletive = pynini.string_file(get_abs_path("data/suppletive.tsv"))
58 # _v = pynini.union("a", "e", "i", "o", "u")
59 _c = pynini.union(
60 "b", "c", "d", "f", "g", "h", "j", "k", "l", "m", "n", "p", "q", "r", "s", "t", "v", "w", "x", "y", "z"
61 )
62 _ies = NEMO_SIGMA + _c + pynini.cross("y", "ies")
63 _es = NEMO_SIGMA + pynini.union("s", "sh", "ch", "x", "z") + pynutil.insert("es")
64 _s = NEMO_SIGMA + pynutil.insert("s")
65
66 graph_plural = plurals._priority_union(
67 suppletive, plurals._priority_union(_ies, plurals._priority_union(_es, _s, NEMO_SIGMA), NEMO_SIGMA), NEMO_SIGMA
68 ).optimize()
69
70 SINGULAR_TO_PLURAL = graph_plural
71 PLURAL_TO_SINGULAR = pynini.invert(graph_plural)
72 TO_LOWER = pynini.union(*[pynini.cross(x, y) for x, y in zip(string.ascii_uppercase, string.ascii_lowercase)])
73 TO_UPPER = pynini.invert(TO_LOWER)
74
75 PYNINI_AVAILABLE = True
76 except (ModuleNotFoundError, ImportError):
77 # Create placeholders
78 NEMO_CHAR = None
79
80 NEMO_DIGIT = None
81 NEMO_LOWER = None
82 NEMO_UPPER = None
83 NEMO_ALPHA = None
84 NEMO_ALNUM = None
85 NEMO_HEX = None
86 NEMO_NON_BREAKING_SPACE = u"\u00A0"
87 NEMO_SPACE = " "
88 NEMO_WHITE_SPACE = None
89 NEMO_NOT_SPACE = None
90 NEMO_NOT_QUOTE = None
91
92 NEMO_PUNCT = None
93 NEMO_GRAPH = None
94
95 NEMO_SIGMA = None
96
97 delete_space = None
98 insert_space = None
99 delete_extra_space = None
100 delete_preserve_order = None
101
102 suppletive = None
103 # _v = pynini.union("a", "e", "i", "o", "u")
104 _c = None
105 _ies = None
106 _es = None
107 _s = None
108
109 graph_plural = None
110
111 SINGULAR_TO_PLURAL = None
112 PLURAL_TO_SINGULAR = None
113 TO_LOWER = None
114 TO_UPPER = None
115
116 PYNINI_AVAILABLE = False
117
118
119 def generator_main(file_name: str, graphs: Dict[str, 'pynini.FstLike']):
120 """
121 Exports graph as OpenFst finite state archive (FAR) file with given file name and rule name.
122
123 Args:
124 file_name: exported file name
125 graphs: Mapping of a rule name and Pynini WFST graph to be exported
126 """
127 exporter = export.Exporter(file_name)
128 for rule, graph in graphs.items():
129 exporter[rule] = graph.optimize()
130 exporter.close()
131 print(f'Created {file_name}')
132
133
134 def get_plurals(fst):
135 """
136 Given singular forms, returns their plurals
137
138 Args:
139 fst: Fst
140
141 Returns plurals for the given singular forms
142 """
143 return SINGULAR_TO_PLURAL @ fst
144
145
146 def get_singulars(fst):
147 """
148 Given plural forms, returns their singulars
149
150 Args:
151 fst: Fst
152
153 Returns singulars for the given plural forms
154 """
155 return PLURAL_TO_SINGULAR @ fst
156
157
158 def convert_space(fst) -> 'pynini.FstLike':
159 """
160 Converts space to nonbreaking space.
161 Used only in tagger grammars for transducing token values within quotes, e.g. name: "hello kitty"
162 This makes the transducer significantly slower, so use it only when there could be potential spaces within quotes; otherwise leave it out.
163
164 Args:
165 fst: input fst
166
167 Returns output fst where breaking spaces are converted to non breaking spaces
168 """
169 return fst @ pynini.cdrewrite(pynini.cross(NEMO_SPACE, NEMO_NON_BREAKING_SPACE), "", "", NEMO_SIGMA)
170
171
172 class GraphFst:
173 """
174 Base class for all grammar fsts.
175
176 Args:
177 name: name of grammar class
178 kind: either 'classify' or 'verbalize'
179 deterministic: if True will provide a single transduction option,
180 for False multiple transduction are generated (used for audio-based normalization)
181 """
182
183 def __init__(self, name: str, kind: str, deterministic: bool = True):
184 self.name = name
185 self.kind = kind
186 self._fst = None
187 self.deterministic = deterministic
188
189 self.far_path = Path(os.path.dirname(__file__) + '/grammars/' + kind + '/' + name + '.far')
190 if self.far_exist():
191 self._fst = Far(self.far_path, mode="r", arc_type="standard", far_type="default").get_fst()
192
193 def far_exist(self) -> bool:
194 """
195 Returns true if FAR can be loaded
196 """
197 return self.far_path.exists()
198
199 @property
200 def fst(self) -> 'pynini.FstLike':
201 return self._fst
202
203 @fst.setter
204 def fst(self, fst):
205 self._fst = fst
206
207 def add_tokens(self, fst) -> 'pynini.FstLike':
208 """
209 Wraps the class name around the given fst
210
211 Args:
212 fst: input fst
213
214 Returns:
215 Fst: fst
216 """
217 return pynutil.insert(f"{self.name} {{ ") + fst + pynutil.insert(" }")
218
219 def delete_tokens(self, fst) -> 'pynini.FstLike':
220 """
221 Deletes the class-name wrapper around the output of the given fst
222
223 Args:
224 fst: input fst
225
226 Returns:
227 Fst: fst
228 """
229 res = (
230 pynutil.delete(f"{self.name}")
231 + delete_space
232 + pynutil.delete("{")
233 + delete_space
234 + fst
235 + delete_space
236 + pynutil.delete("}")
237 )
238 return res @ pynini.cdrewrite(pynini.cross(u"\u00A0", " "), "", "", NEMO_SIGMA)
239
[end of nemo_text_processing/text_normalization/en/graph_utils.py]
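The `_ies`/`_es`/`_s` unions above encode regular English pluralization with a priority order (consonant + "y" becomes "ies", sibilant endings take "es", everything else takes "s"). A rough pure-Python approximation of that order (suppletive forms from `suppletive.tsv` are omitted; the function name is illustrative):

```python
def pluralize(word: str) -> str:
    """Approximate the _ies / _es / _s priority union from the grammar above."""
    consonants = set("bcdfghjklmnpqrstvwxyz")  # same letters as the _c union
    if len(word) > 1 and word.endswith("y") and word[-2] in consonants:
        return word[:-1] + "ies"               # city -> cities
    if word.endswith(("s", "sh", "ch", "x", "z")):
        return word + "es"                     # box -> boxes
    return word + "s"                          # cat -> cats
```

Like `plurals._priority_union`, the branches are tried in order, so "city" is caught by the "ies" rule before the default "s" rule can apply.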
[start of nemo_text_processing/text_normalization/en/taggers/punctuation.py]
1 # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
2 # Copyright 2015 and onwards Google, Inc.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15
16 import sys
17 from unicodedata import category
18
19 from nemo_text_processing.text_normalization.en.graph_utils import GraphFst
20
21 try:
22 import pynini
23 from pynini.lib import pynutil
24
25 PYNINI_AVAILABLE = True
26 except (ModuleNotFoundError, ImportError):
27 PYNINI_AVAILABLE = False
28
29
30 class PunctuationFst(GraphFst):
31 """
32 Finite state transducer for classifying punctuation
33 e.g. a, -> tokens { name: "a" } tokens { name: "," }
34
35 Args:
36 deterministic: if True will provide a single transduction option,
37 for False multiple transduction are generated (used for audio-based normalization)
38
39 """
40
41 def __init__(self, deterministic: bool = True):
42 super().__init__(name="punctuation", kind="classify", deterministic=deterministic)
43
44 s = "!#%&\'()*+,-./:;<=>?@^_`{|}~\""
45
46 punct_unicode = [chr(i) for i in range(sys.maxunicode) if category(chr(i)).startswith("P")]
47 punct_unicode.remove('[')
48 punct_unicode.remove(']')
49 punct = pynini.union(*s) | pynini.union(*punct_unicode)
50
51 self.graph = punct
52 self.fst = (pynutil.insert("name: \"") + self.graph + pynutil.insert("\"")).optimize()
53
[end of nemo_text_processing/text_normalization/en/taggers/punctuation.py]
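The tagger above builds its symbol set by uniting an ASCII punctuation string with every Unicode code point whose general category starts with "P", then removing the square brackets. The same set can be computed in plain Python (a sketch of the set construction only, not the pynini union):

```python
import sys
from unicodedata import category

# ASCII punctuation, as in the tagger's literal string `s`.
ascii_punct = set("!#%&'()*+,-./:;<=>?@^_`{|}~\"")
# Every code point whose Unicode general category starts with "P" (Po, Pd, Ps, ...).
unicode_punct = {chr(i) for i in range(sys.maxunicode) if category(chr(i)).startswith("P")}
punct = (ascii_punct | unicode_punct) - {"[", "]"}
```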
[start of nemo_text_processing/text_normalization/en/verbalizers/whitelist.py]
1 # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
2 # Copyright 2015 and onwards Google, Inc.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15
16
17 from nemo_text_processing.text_normalization.en.graph_utils import NEMO_CHAR, NEMO_SIGMA, GraphFst, delete_space
18
19 try:
20 import pynini
21 from pynini.lib import pynutil
22
23 PYNINI_AVAILABLE = True
24 except (ModuleNotFoundError, ImportError):
25 PYNINI_AVAILABLE = False
26
27
28 class WhiteListFst(GraphFst):
29 """
30 Finite state transducer for verbalizing whitelist
31 e.g. tokens { name: "misses" } } -> misses
32
33 Args:
34 deterministic: if True will provide a single transduction option,
35 for False multiple transduction are generated (used for audio-based normalization)
36 """
37
38 def __init__(self, deterministic: bool = True):
39 super().__init__(name="whitelist", kind="verbalize", deterministic=deterministic)
40 graph = (
41 pynutil.delete("name:")
42 + delete_space
43 + pynutil.delete("\"")
44 + pynini.closure(NEMO_CHAR - " ", 1)
45 + pynutil.delete("\"")
46 )
47 graph = graph @ pynini.cdrewrite(pynini.cross(u"\u00A0", " "), "", "", NEMO_SIGMA)
48 self.fst = graph.optimize()
49
[end of nemo_text_processing/text_normalization/en/verbalizers/whitelist.py]
[start of nemo_text_processing/text_normalization/en/verbalizers/word.py]
1 # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
2 # Copyright 2015 and onwards Google, Inc.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15
16 from nemo_text_processing.text_normalization.en.graph_utils import NEMO_CHAR, NEMO_SIGMA, GraphFst, delete_space
17
18 try:
19 import pynini
20 from pynini.lib import pynutil
21
22 PYNINI_AVAILABLE = True
23 except (ModuleNotFoundError, ImportError):
24 PYNINI_AVAILABLE = False
25
26
27 class WordFst(GraphFst):
28 """
29 Finite state transducer for verbalizing word
30 e.g. tokens { name: "sleep" } -> sleep
31
32 Args:
33 deterministic: if True will provide a single transduction option,
34 for False multiple transduction are generated (used for audio-based normalization)
35 """
36
37 def __init__(self, deterministic: bool = True):
38 super().__init__(name="word", kind="verbalize", deterministic=deterministic)
39 chars = pynini.closure(NEMO_CHAR - " ", 1)
40 char = pynutil.delete("name:") + delete_space + pynutil.delete("\"") + chars + pynutil.delete("\"")
41 graph = char @ pynini.cdrewrite(pynini.cross(u"\u00A0", " "), "", "", NEMO_SIGMA)
42
43 self.fst = graph.optimize()
44
[end of nemo_text_processing/text_normalization/en/verbalizers/word.py]
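Both verbalizers above do the same two things: strip the `name: "..."` wrapper produced by the tagger, and turn non-breaking spaces back into regular spaces. A regex-based sketch of that transformation (illustrative only; NeMo performs it with a pynini `cdrewrite`, not a regex):

```python
import re

_WORD_TOKEN = re.compile(r'name:\s*"([^"]*)"')


def verbalize_word(token: str) -> str:
    """Extract the quoted value and restore breaking spaces (U+00A0 -> ' ')."""
    match = _WORD_TOKEN.search(token)
    if match is None:
        raise ValueError(f"not a word token: {token!r}")
    return match.group(1).replace("\u00a0", " ")
```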
[start of nemo_text_processing/text_normalization/normalize.py]
1 # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import itertools
16 import os
17 import re
18 from argparse import ArgumentParser
19 from collections import OrderedDict
20 from math import factorial
21 from typing import Dict, List, Union
22
23 from nemo_text_processing.text_normalization.data_loader_utils import get_installation_msg, pre_process
24 from nemo_text_processing.text_normalization.token_parser import PRESERVE_ORDER_KEY, TokenParser
25 from tqdm import tqdm
26
27 try:
28 import pynini
29
30 PYNINI_AVAILABLE = True
31
32 except (ModuleNotFoundError, ImportError):
33 PYNINI_AVAILABLE = False
34
35 try:
36 from nemo.collections.common.tokenizers.moses_tokenizers import MosesProcessor
37 from nemo.collections.nlp.data.text_normalization.utils import post_process_punct
38
39 NLP_AVAILABLE = True
40 except (ModuleNotFoundError, ImportError):
41 NLP_AVAILABLE = False
42
43
44 SPACE_DUP = re.compile(' {2,}')
45
46
47 class Normalizer:
48 """
49 Normalizer class that converts text from written to spoken form.
50 Useful for TTS preprocessing.
51
52 Args:
53 input_case: expected input capitalization
54 lang: language specifying the TN rules, by default: English
55 cache_dir: path to a dir with .far grammar file. Set to None to avoid using cache.
56 overwrite_cache: set to True to overwrite .far files
57 whitelist: path to a file with whitelist replacements
58 """
59
60 def __init__(
61 self,
62 input_case: str,
63 lang: str = 'en',
64 deterministic: bool = True,
65 cache_dir: str = None,
66 overwrite_cache: bool = False,
67 whitelist: str = None,
68 ):
69 assert input_case in ["lower_cased", "cased"]
70
71 if not PYNINI_AVAILABLE:
72 raise ImportError(get_installation_msg())
73
74 if lang == 'en' and deterministic:
75 from nemo_text_processing.text_normalization.en.taggers.tokenize_and_classify import ClassifyFst
76 from nemo_text_processing.text_normalization.en.verbalizers.verbalize_final import VerbalizeFinalFst
77 elif lang == 'en' and not deterministic:
78 from nemo_text_processing.text_normalization.en.taggers.tokenize_and_classify_with_audio import ClassifyFst
79 from nemo_text_processing.text_normalization.en.verbalizers.verbalize_final import VerbalizeFinalFst
80 elif lang == 'ru':
81 # Ru TN only supports non-deterministic cases and produces multiple normalization options
82 # use normalize_with_audio.py
83 from nemo_text_processing.text_normalization.ru.taggers.tokenize_and_classify import ClassifyFst
84 from nemo_text_processing.text_normalization.ru.verbalizers.verbalize_final import VerbalizeFinalFst
85 elif lang == 'de':
86 # De TN only supports non-deterministic cases and produces multiple normalization options
87 # use normalize_with_audio.py
88 from nemo_text_processing.text_normalization.de.taggers.tokenize_and_classify import ClassifyFst
89 from nemo_text_processing.text_normalization.de.verbalizers.verbalize_final import VerbalizeFinalFst
90 self.tagger = ClassifyFst(
91 input_case=input_case,
92 deterministic=deterministic,
93 cache_dir=cache_dir,
94 overwrite_cache=overwrite_cache,
95 whitelist=whitelist,
96 )
97 self.verbalizer = VerbalizeFinalFst(deterministic=deterministic)
98 self.parser = TokenParser()
99 self.lang = lang
100
101 if NLP_AVAILABLE:
102 self.processor = MosesProcessor(lang_id=lang)
103 else:
104 self.processor = None
105 print("NeMo NLP is not available. Moses de-tokenization will be skipped.")
106
107 def normalize_list(self, texts: List[str], verbose=False, punct_post_process: bool = False) -> List[str]:
108 """
109 NeMo text normalizer
110
111 Args:
112 texts: list of input strings
113 verbose: whether to print intermediate meta information
114
115 Returns converted list of input strings
116 """
117 res = []
118 for input in tqdm(texts):
119 try:
120 text = self.normalize(input, verbose=verbose, punct_post_process=punct_post_process)
121 except Exception:
122 print(input)
123 raise
124 res.append(text)
125 return res
126
127 def _estimate_number_of_permutations_in_nested_dict(
128 self, token_group: Dict[str, Union[OrderedDict, str, bool]]
129 ) -> int:
130 num_perms = 1
131 for k, inner in token_group.items():
132 if isinstance(inner, dict):
133 num_perms *= self._estimate_number_of_permutations_in_nested_dict(inner)
134 num_perms *= factorial(len(token_group))
135 return num_perms
136
137 def _split_tokens_to_reduce_number_of_permutations(
138 self, tokens: List[dict], max_number_of_permutations_per_split: int = 729
139 ) -> List[List[dict]]:
140 """
141 Splits a sequence of tokens in a smaller sequences of tokens in a way that maximum number of composite
142 tokens permutations does not exceed ``max_number_of_permutations_per_split``.
143
144 For example,
145
146 .. code-block:: python
147 tokens = [
148 {"tokens": {"date": {"year": "twenty eighteen", "month": "december", "day": "thirty one"}}},
149 {"tokens": {"date": {"year": "twenty eighteen", "month": "january", "day": "eight"}}},
150 ]
151 split = normalizer._split_tokens_to_reduce_number_of_permutations(
152 tokens, max_number_of_permutations_per_split=6
153 )
154 assert split == [
155 [{"tokens": {"date": {"year": "twenty eighteen", "month": "december", "day": "thirty one"}}}],
156 [{"tokens": {"date": {"year": "twenty eighteen", "month": "january", "day": "eight"}}}],
157 ]
158
159 Date tokens contain 3 items each which gives 6 permutations for every date. Since there are 2 dates, total
160 number of permutations would be ``6 * 6 == 36``. Parameter ``max_number_of_permutations_per_split`` equals 6,
161 so input sequence of tokens is split into 2 smaller sequences.
162
163 Args:
164 tokens (:obj:`List[dict]`): a list of dictionaries, possibly nested.
165 max_number_of_permutations_per_split (:obj:`int`, `optional`, defaults to :obj:`729`): a maximum number
166 of permutations which can be generated from input sequence of tokens.
167
168 Returns:
169 :obj:`List[List[dict]]`: a list of smaller sequences of tokens resulting from ``tokens`` split.
170 """
171 splits = []
172 prev_end_of_split = 0
173 current_number_of_permutations = 1
174 for i, token_group in enumerate(tokens):
175 n = self._estimate_number_of_permutations_in_nested_dict(token_group)
176 if n * current_number_of_permutations > max_number_of_permutations_per_split:
177 splits.append(tokens[prev_end_of_split:i])
178 prev_end_of_split = i
179 current_number_of_permutations = 1
180 if n > max_number_of_permutations_per_split:
181 raise ValueError(
182 f"Could not split token list with respect to condition that every split can generate number of "
183 f"permutations less or equal to "
184 f"`max_number_of_permutations_per_split={max_number_of_permutations_per_split}`. "
185 f"There is an unsplittable token group that generates more than "
186 f"{max_number_of_permutations_per_split} permutations. Try to increase "
187 f"`max_number_of_permutations_per_split` parameter."
188 )
189 current_number_of_permutations *= n
190 splits.append(tokens[prev_end_of_split:])
191 assert sum([len(s) for s in splits]) == len(tokens)
192 return splits
193
194 def normalize(
195 self, text: str, verbose: bool = False, punct_pre_process: bool = False, punct_post_process: bool = False
196 ) -> str:
197 """
198 Main function. Normalizes tokens from written to spoken form
199 e.g. 12 kg -> twelve kilograms
200
201 Args:
202 text: string that may include semiotic classes
203 verbose: whether to print intermediate meta information
204 punct_pre_process: whether to perform punctuation pre-processing, for example, [25] -> [ 25 ]
205 punct_post_process: whether to normalize punctuation
206
207 Returns: spoken form
208 """
209 original_text = text
210 if punct_pre_process:
211 text = pre_process(text)
212 text = text.strip()
213 if not text:
214 if verbose:
215 print(text)
216 return text
217 text = pynini.escape(text)
218 tagged_lattice = self.find_tags(text)
219 tagged_text = self.select_tag(tagged_lattice)
220 if verbose:
221 print(tagged_text)
222 self.parser(tagged_text)
223 tokens = self.parser.parse()
224 split_tokens = self._split_tokens_to_reduce_number_of_permutations(tokens)
225 output = ""
226 for s in split_tokens:
227 tags_reordered = self.generate_permutations(s)
228 verbalizer_lattice = None
229 for tagged_text in tags_reordered:
230 tagged_text = pynini.escape(tagged_text)
231
232 verbalizer_lattice = self.find_verbalizer(tagged_text)
233 if verbalizer_lattice.num_states() != 0:
234 break
235 if verbalizer_lattice is None:
236 raise ValueError(f"No permutations were generated from tokens {s}")
237 output += ' ' + self.select_verbalizer(verbalizer_lattice)
238 output = SPACE_DUP.sub(' ', output[1:])
239 if punct_post_process:
240 # do post-processing based on Moses detokenizer
241 if self.processor:
242 output = self.processor.moses_detokenizer.detokenize([output], unescape=False)
243 output = post_process_punct(input=original_text, normalized_text=output)
244 else:
245 print("NEMO_NLP collection is not available: skipping punctuation post_processing")
246 return output
247
248 def _permute(self, d: OrderedDict) -> List[str]:
249 """
250 Creates reorderings of dictionary elements and serializes as strings
251
252 Args:
253 d: (nested) dictionary of key value pairs
254
255 Return permutations of different string serializations of key value pairs
256 """
257 l = []
258 if PRESERVE_ORDER_KEY in d.keys():
259 d_permutations = [d.items()]
260 else:
261 d_permutations = itertools.permutations(d.items())
262 for perm in d_permutations:
263 subl = [""]
264 for k, v in perm:
265 if isinstance(v, str):
266 subl = ["".join(x) for x in itertools.product(subl, [f"{k}: \"{v}\" "])]
267 elif isinstance(v, OrderedDict):
268 rec = self._permute(v)
269 subl = ["".join(x) for x in itertools.product(subl, [f" {k} {{ "], rec, [f" }} "])]
270 elif isinstance(v, bool):
271 subl = ["".join(x) for x in itertools.product(subl, [f"{k}: true "])]
272 else:
273 raise ValueError()
274 l.extend(subl)
275 return l
276
277 def generate_permutations(self, tokens: List[dict]):
278 """
279 Generates permutations of string serializations of list of dictionaries
280
281 Args:
282 tokens: list of dictionaries
283
284 Returns string serialization of list of dictionaries
285 """
286
287 def _helper(prefix: str, tokens: List[dict], idx: int):
288 """
289 Generates permutations of string serializations of given dictionary
290
291 Args:
292 tokens: list of dictionaries
293 prefix: prefix string
294 idx: index of next dictionary
295
296 Returns string serialization of dictionary
297 """
298 if idx == len(tokens):
299 yield prefix
300 return
301 token_options = self._permute(tokens[idx])
302 for token_option in token_options:
303 yield from _helper(prefix + token_option, tokens, idx + 1)
304
305 return _helper("", tokens, 0)
306
307 def find_tags(self, text: str) -> 'pynini.FstLike':
308 """
309 Given text use tagger Fst to tag text
310
311 Args:
312 text: sentence
313
314 Returns: tagged lattice
315 """
316 lattice = text @ self.tagger.fst
317 return lattice
318
319 def select_tag(self, lattice: 'pynini.FstLike') -> str:
320 """
321 Given tagged lattice return shortest path
322
323 Args:
324 lattice: tagged lattice
325
326 Returns: shortest path
327 """
328 tagged_text = pynini.shortestpath(lattice, nshortest=1, unique=True).string()
329 return tagged_text
330
331 def find_verbalizer(self, tagged_text: str) -> 'pynini.FstLike':
332 """
333 Given tagged text creates verbalization lattice
334 This is context-independent.
335
336 Args:
337 tagged_text: input text
338
339 Returns: verbalized lattice
340 """
341 lattice = tagged_text @ self.verbalizer.fst
342 return lattice
343
344 def select_verbalizer(self, lattice: 'pynini.FstLike') -> str:
345 """
346 Given verbalized lattice return shortest path
347
348 Args:
349 lattice: verbalization lattice
350
351 Returns: shortest path
352 """
353 output = pynini.shortestpath(lattice, nshortest=1, unique=True).string()
354 return output
355
356
357 def parse_args():
358 parser = ArgumentParser()
359 parser.add_argument("input_string", help="input string", type=str)
360 parser.add_argument("--language", help="language", choices=["en", "de"], default="en", type=str)
361 parser.add_argument(
362 "--input_case", help="input capitalization", choices=["lower_cased", "cased"], default="cased", type=str
363 )
364 parser.add_argument("--verbose", help="print info for debugging", action='store_true')
365 parser.add_argument(
366 "--punct_post_process", help="set to True to enable punctuation post processing", action="store_true"
367 )
368 parser.add_argument(
369 "--punct_pre_process", help="set to True to enable punctuation pre processing", action="store_true"
370 )
371 parser.add_argument("--overwrite_cache", help="set to True to re-create .far grammar files", action="store_true")
372 parser.add_argument("--whitelist", help="path to a file with whitelist replacements", default=None, type=str)
373 parser.add_argument(
374 "--cache_dir",
375 help="path to a dir with .far grammar file. Set to None to avoid using cache",
376 default=None,
377 type=str,
378 )
379 return parser.parse_args()
380
381
382 if __name__ == "__main__":
383 args = parse_args()
384 whitelist = os.path.abspath(args.whitelist) if args.whitelist else None
385 normalizer = Normalizer(
386 input_case=args.input_case,
387 cache_dir=args.cache_dir,
388 overwrite_cache=args.overwrite_cache,
389 whitelist=whitelist,
390 lang=args.language,
391 )
392 print(
393 normalizer.normalize(
394 args.input_string,
395 verbose=args.verbose,
396 punct_pre_process=args.punct_pre_process,
397 punct_post_process=args.punct_post_process,
398 )
399 )
400
[end of nemo_text_processing/text_normalization/normalize.py]
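The permutation-count estimate that drives `_split_tokens_to_reduce_number_of_permutations` can be sketched standalone. This is a minimal re-implementation for illustration only (the function name `estimate_permutations` is hypothetical, not part of the repository):

```python
from math import factorial

def estimate_permutations(token_group):
    # Mirrors Normalizer._estimate_number_of_permutations_in_nested_dict:
    # multiply the estimates of all nested dicts, then multiply by the
    # factorial of the number of keys at this level, since every ordering
    # of the keys is a distinct serialization.
    num_perms = 1
    for _, inner in token_group.items():
        if isinstance(inner, dict):
            num_perms *= estimate_permutations(inner)
    num_perms *= factorial(len(token_group))
    return num_perms

date = {"tokens": {"date": {"year": "twenty eighteen", "month": "december", "day": "thirty one"}}}
print(estimate_permutations(date))  # → 6  (3! orderings of year/month/day)
```

This matches the docstring example above: each date token with three fields yields 6 permutations, so two such tokens would yield 36 combined, triggering a split when the cap is 6.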
[start of nemo_text_processing/text_normalization/normalize_with_audio.py]
1 # Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import json
16 import os
17 import time
18 from argparse import ArgumentParser
19 from glob import glob
20 from typing import List, Tuple
21
22 from joblib import Parallel, delayed
23 from nemo_text_processing.text_normalization.normalize import Normalizer
24 from tqdm import tqdm
25
26 try:
27 from nemo.collections.asr.metrics.wer import word_error_rate
28 from nemo.collections.asr.models import ASRModel
29
30 ASR_AVAILABLE = True
31 except (ModuleNotFoundError, ImportError):
32 ASR_AVAILABLE = False
33
34 try:
35 import pynini
36 from pynini.lib import rewrite
37
38 PYNINI_AVAILABLE = True
39 except (ModuleNotFoundError, ImportError):
40 PYNINI_AVAILABLE = False
41
42 try:
43 from nemo.collections.nlp.data.text_normalization.utils import post_process_punct
44 from nemo_text_processing.text_normalization.data_loader_utils import pre_process
45
46 NLP_AVAILABLE = True
47 except (ModuleNotFoundError, ImportError):
48 NLP_AVAILABLE = False
49
50 """
51 The script provides multiple normalization options and chooses the best one that minimizes CER of the ASR output
52 (most of the semiotic classes use deterministic=False flag).
53
54 To run this script with a .json manifest file, the manifest file should contain the following fields:
55 "audio_data" - path to the audio file
56 "text" - raw text
57 "pred_text" - ASR model prediction
58
59 See https://github.com/NVIDIA/NeMo/blob/main/examples/asr/transcribe_speech.py on how to add ASR predictions
60
61 When the manifest is ready, run:
62 python normalize_with_audio.py \
63 --audio_data PATH/TO/MANIFEST.JSON \
64 --language en
65
66
67 To run with a single audio file, specify path to audio and text with:
68 python normalize_with_audio.py \
69 --audio_data PATH/TO/AUDIO.WAV \
70 --language en \
71 --text raw text OR PATH/TO/.TXT/FILE
72 --model QuartzNet15x5Base-En \
73 --verbose
74
75 To see possible normalization options for a text input without an audio file (could be used for debugging), run:
76 python normalize_with_audio.py --text "RAW TEXT"
77
78 Specify `--cache_dir` to generate .far grammars once and re-use them for faster inference
79 """
80
81
82 class NormalizerWithAudio(Normalizer):
83 """
84 Normalizer class that converts text from written to spoken form.
85 Useful for TTS preprocessing.
86
87 Args:
88 input_case: expected input capitalization
89 lang: language
90 cache_dir: path to a dir with .far grammar file. Set to None to avoid using cache.
91 overwrite_cache: set to True to overwrite .far files
92 whitelist: path to a file with whitelist replacements
93 """
94
95 def __init__(
96 self,
97 input_case: str,
98 lang: str = 'en',
99 cache_dir: str = None,
100 overwrite_cache: bool = False,
101 whitelist: str = None,
102 ):
103
104 super().__init__(
105 input_case=input_case,
106 lang=lang,
107 deterministic=False,
108 cache_dir=cache_dir,
109 overwrite_cache=overwrite_cache,
110 whitelist=whitelist,
111 )
112
113 def normalize(self, text: str, n_tagged: int, punct_post_process: bool = True, verbose: bool = False,) -> str:
114 """
115 Main function. Normalizes tokens from written to spoken form
116 e.g. 12 kg -> twelve kilograms
117
118 Args:
119 text: string that may include semiotic classes
120 n_tagged: number of tagged options to consider, -1 - to get all possible tagged options
121 punct_post_process: whether to normalize punctuation
122 verbose: whether to print intermediate meta information
123
124 Returns:
125 normalized text options (usually there are multiple ways of normalizing a given semiotic class)
126 """
127 original_text = text
128
129 if self.lang == "en":
130 text = pre_process(text)
131 text = text.strip()
132 if not text:
133 if verbose:
134 print(text)
135 return text
136 text = pynini.escape(text)
137
138 if n_tagged == -1:
139 if self.lang == "en":
140 try:
141 tagged_texts = rewrite.rewrites(text, self.tagger.fst_no_digits)
142 except pynini.lib.rewrite.Error:
143 tagged_texts = rewrite.rewrites(text, self.tagger.fst)
144 else:
145 tagged_texts = rewrite.rewrites(text, self.tagger.fst)
146 else:
147 if self.lang == "en":
148 try:
149 tagged_texts = rewrite.top_rewrites(text, self.tagger.fst_no_digits, nshortest=n_tagged)
150 except pynini.lib.rewrite.Error:
151 tagged_texts = rewrite.top_rewrites(text, self.tagger.fst, nshortest=n_tagged)
152 else:
153 tagged_texts = rewrite.top_rewrites(text, self.tagger.fst, nshortest=n_tagged)
154
155 # non-deterministic Eng normalization uses tagger composed with verbalizer, no permutation in between
156 if self.lang == "en":
157 normalized_texts = tagged_texts
158 else:
159 normalized_texts = []
160 for tagged_text in tagged_texts:
161 self._verbalize(tagged_text, normalized_texts, verbose=verbose)
162
163 if len(normalized_texts) == 0:
164 raise ValueError()
165
166 if punct_post_process:
167 # do post-processing based on Moses detokenizer
168 if self.processor:
169 normalized_texts = [self.processor.detokenize([t]) for t in normalized_texts]
170 normalized_texts = [
171 post_process_punct(input=original_text, normalized_text=t) for t in normalized_texts
172 ]
173
174 normalized_texts = set(normalized_texts)
175 return normalized_texts
176
177 def _verbalize(self, tagged_text: str, normalized_texts: List[str], verbose: bool = False):
178 """
179 Verbalizes tagged text
180
181 Args:
182 tagged_text: text with tags
183 normalized_texts: list of possible normalization options
184 verbose: if true prints intermediate classification results
185 """
186
187 def get_verbalized_text(tagged_text):
188 return rewrite.rewrites(tagged_text, self.verbalizer.fst)
189
190 self.parser(tagged_text)
191 tokens = self.parser.parse()
192 tags_reordered = self.generate_permutations(tokens)
193 for tagged_text_reordered in tags_reordered:
194 try:
195 tagged_text_reordered = pynini.escape(tagged_text_reordered)
196 normalized_texts.extend(get_verbalized_text(tagged_text_reordered))
197 if verbose:
198 print(tagged_text_reordered)
199
200 except pynini.lib.rewrite.Error:
201 continue
202
203 def select_best_match(
204 self,
205 normalized_texts: List[str],
206 input_text: str,
207 pred_text: str,
208 verbose: bool = False,
209 remove_punct: bool = False,
210 ):
211 """
212 Selects the best normalization option based on the lowest CER
213
214 Args:
215 normalized_texts: normalized text options
216 input_text: input text
217 pred_text: ASR model transcript of the audio file corresponding to the normalized text
218 verbose: whether to print intermediate meta information
219 remove_punct: whether to remove punctuation before calculating CER
220
221 Returns:
222 normalized text with the lowest CER and CER value
223 """
224 if pred_text == "":
225 return input_text, 1000
226
227 normalized_texts_cer = calculate_cer(normalized_texts, pred_text, remove_punct)
228 normalized_texts_cer = sorted(normalized_texts_cer, key=lambda x: x[1])
229 normalized_text, cer = normalized_texts_cer[0]
230
231 if verbose:
232 print('-' * 30)
233 for option in normalized_texts:
234 print(option)
235 print('-' * 30)
236 return normalized_text, cer
237
238
239 def calculate_cer(normalized_texts: List[str], pred_text: str, remove_punct=False) -> List[Tuple[str, float]]:
240 """
241 Calculates character error rate (CER)
242
243 Args:
244 normalized_texts: normalized text options
245 pred_text: ASR model output
246
247 Returns: normalized options with corresponding CER
248 """
249 normalized_options = []
250 for text in normalized_texts:
251 text_clean = text.replace('-', ' ').lower()
252 if remove_punct:
253 for punct in "!?:;,.-()*+-/<=>@^_":
254 text_clean = text_clean.replace(punct, "")
255 cer = round(word_error_rate([pred_text], [text_clean], use_cer=True) * 100, 2)
256 normalized_options.append((text, cer))
257 return normalized_options
258
259
260 def get_asr_model(asr_model):
261 """
262 Returns ASR Model
263
264 Args:
265 asr_model: NeMo ASR model
266 """
267 if os.path.exists(asr_model):
268 asr_model = ASRModel.restore_from(asr_model)
269 elif asr_model in ASRModel.get_available_model_names():
270 asr_model = ASRModel.from_pretrained(asr_model)
271 else:
272 raise ValueError(
273 f'Provide path to the pretrained checkpoint or choose from {ASRModel.get_available_model_names()}'
274 )
275 return asr_model
276
277
278 def parse_args():
279 parser = ArgumentParser()
280 parser.add_argument("--text", help="input string or path to a .txt file", default=None, type=str)
281 parser.add_argument(
282 "--input_case", help="input capitalization", choices=["lower_cased", "cased"], default="cased", type=str
283 )
284 parser.add_argument(
285 "--language", help="Select target language", choices=["en", "ru", "de"], default="en", type=str
286 )
287 parser.add_argument("--audio_data", default=None, help="path to an audio file or .json manifest")
288 parser.add_argument(
289 '--model', type=str, default='QuartzNet15x5Base-En', help='Pre-trained model name or path to model checkpoint'
290 )
291 parser.add_argument(
292 "--n_tagged",
293 type=int,
294 default=30,
295 help="number of tagged options to consider, -1 - return all possible tagged options",
296 )
297 parser.add_argument("--verbose", help="print info for debugging", action="store_true")
298 parser.add_argument(
299 "--no_remove_punct_for_cer",
300 help="Set to True to NOT remove punctuation before calculating CER",
301 action="store_true",
302 )
303 parser.add_argument(
304 "--no_punct_post_process", help="set to True to disable punctuation post processing", action="store_true"
305 )
306 parser.add_argument("--overwrite_cache", help="set to True to re-create .far grammar files", action="store_true")
307 parser.add_argument("--whitelist", help="path to a file with whitelist replacements", default=None, type=str)
308 parser.add_argument(
309 "--cache_dir",
310 help="path to a dir with .far grammar file. Set to None to avoid using cache",
311 default=None,
312 type=str,
313 )
314 parser.add_argument("--n_jobs", default=-2, type=int, help="The maximum number of concurrently running jobs")
315 parser.add_argument("--batch_size", default=200, type=int, help="Number of examples for each process")
316 return parser.parse_args()
317
318
319 def _normalize_line(normalizer: NormalizerWithAudio, n_tagged, verbose, line: str, remove_punct, punct_post_process):
320 line = json.loads(line)
321 pred_text = line["pred_text"]
322
323 normalized_texts = normalizer.normalize(
324 text=line["text"], verbose=verbose, n_tagged=n_tagged, punct_post_process=punct_post_process,
325 )
326
327 normalized_text, cer = normalizer.select_best_match(
328 normalized_texts=normalized_texts,
329 input_text=line["text"],
330 pred_text=pred_text,
331 verbose=verbose,
332 remove_punct=remove_punct,
333 )
334 line["nemo_normalized"] = normalized_text
335 line["CER_nemo_normalized"] = cer
336 return line
337
338
339 def normalize_manifest(
340 normalizer,
341 audio_data: str,
342 n_jobs: int,
343 n_tagged: int,
344 remove_punct: bool,
345 punct_post_process: bool,
346 batch_size: int,
347 ):
348 """
349 Args:
350 audio_data: path to .json manifest file.
351 """
352
353 def __process_batch(batch_idx, batch, dir_name):
354 normalized_lines = [
355 _normalize_line(
356 normalizer,
357 n_tagged,
358 verbose=False,
359 line=line,
360 remove_punct=remove_punct,
361 punct_post_process=punct_post_process,
362 )
363 for line in tqdm(batch)
364 ]
365
366 with open(f"{dir_name}/{batch_idx}.json", "w") as f_out:
367 for line in normalized_lines:
368 f_out.write(json.dumps(line, ensure_ascii=False) + '\n')
369
370 print(f"Batch -- {batch_idx} -- is complete")
371 return normalized_lines
372
373 manifest_out = audio_data.replace('.json', '_normalized.json')
374 with open(audio_data, 'r') as f:
375 lines = f.readlines()
376
377 print(f'Normalizing {len(lines)} lines of {audio_data}...')
378
379 # to save intermediate results to a file
380 batch = min(len(lines), batch_size)
381
382 tmp_dir = manifest_out.replace(".json", "_parts")
383 os.makedirs(tmp_dir, exist_ok=True)
384
385 Parallel(n_jobs=n_jobs)(
386 delayed(__process_batch)(idx, lines[i : i + batch], tmp_dir)
387 for idx, i in enumerate(range(0, len(lines), batch))
388 )
389
390 # aggregate all intermediate files
391 with open(manifest_out, "w") as f_out:
392 for batch_f in sorted(glob(f"{tmp_dir}/*.json")):
393 with open(batch_f, "r") as f_in:
394 lines = f_in.read()
395 f_out.write(lines)
396
397 print(f'Normalized version saved at {manifest_out}')
398
399
400 if __name__ == "__main__":
401 args = parse_args()
402
403 if not ASR_AVAILABLE and args.audio_data:
404 raise ValueError("NeMo ASR collection is not installed.")
405 start = time.time()
406 args.whitelist = os.path.abspath(args.whitelist) if args.whitelist else None
407 if args.text is not None:
408 normalizer = NormalizerWithAudio(
409 input_case=args.input_case,
410 lang=args.language,
411 cache_dir=args.cache_dir,
412 overwrite_cache=args.overwrite_cache,
413 whitelist=args.whitelist,
414 )
415
416 if os.path.exists(args.text):
417 with open(args.text, 'r') as f:
418 args.text = f.read().strip()
419 normalized_texts = normalizer.normalize(
420 text=args.text,
421 verbose=args.verbose,
422 n_tagged=args.n_tagged,
423 punct_post_process=not args.no_punct_post_process,
424 )
425
426 if args.audio_data:
427 asr_model = get_asr_model(args.model)
428 pred_text = asr_model.transcribe([args.audio_data])[0]
429 normalized_text, cer = normalizer.select_best_match(
430 normalized_texts=normalized_texts,
431 pred_text=pred_text,
432 input_text=args.text,
433 verbose=args.verbose,
434 remove_punct=not args.no_remove_punct_for_cer,
435 )
436 print(f"Transcript: {pred_text}")
437 print(f"Normalized: {normalized_text}")
438 else:
439 print("Normalization options:")
440 for norm_text in normalized_texts:
441 print(norm_text)
442 elif not os.path.exists(args.audio_data):
443 raise ValueError(f"{args.audio_data} not found.")
444 elif args.audio_data.endswith('.json'):
445 normalizer = NormalizerWithAudio(
446 input_case=args.input_case,
447 lang=args.language,
448 cache_dir=args.cache_dir,
449 overwrite_cache=args.overwrite_cache,
450 whitelist=args.whitelist,
451 )
452 normalize_manifest(
453 normalizer=normalizer,
454 audio_data=args.audio_data,
455 n_jobs=args.n_jobs,
456 n_tagged=args.n_tagged,
457 remove_punct=not args.no_remove_punct_for_cer,
458 punct_post_process=not args.no_punct_post_process,
459 batch_size=args.batch_size,
460 )
461 else:
462 raise ValueError(
463 "Provide either path to .json manifest in '--audio_data' OR "
464 + "'--audio_data' path to audio file and '--text' path to a text file OR"
465 "'--text' string text (for debugging without audio)"
466 )
467 print(f'Execution time: {round((time.time() - start)/60, 2)} min.')
468
[end of nemo_text_processing/text_normalization/normalize_with_audio.py]
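`select_best_match` ranks candidate normalizations by CER against the ASR transcript and keeps the lowest-scoring one. A minimal stand-in sketch of that selection step, using `difflib` in place of NeMo's `word_error_rate` (so the scoring metric here is an assumption, not the library's actual CER):

```python
import difflib

def pick_best(options, pred_text):
    # Score each candidate by a character-level distance to the ASR
    # transcript (1 - difflib similarity ratio, a rough CER stand-in),
    # then return the candidate with the lowest distance.
    scored = [
        (opt, 1.0 - difflib.SequenceMatcher(None, opt.lower(), pred_text.lower()).ratio())
        for opt in options
    ]
    scored.sort(key=lambda x: x[1])
    return scored[0]

options = ["one hundred twenty three", "one two three"]
best, dist = pick_best(options, "one hundred twenty three")
print(best)  # → one hundred twenty three
```

As in `select_best_match`, ties and near-ties fall to whichever candidate sorts first; the real implementation additionally supports stripping punctuation before scoring.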
[start of tools/text_processing_deployment/pynini_export.py]
1 # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
2 # Copyright 2015 and onwards Google, Inc.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15
16
17 import os
18 import time
19 from argparse import ArgumentParser
20
21 from nemo.utils import logging
22
23 try:
24 import pynini
25 from nemo_text_processing.text_normalization.en.graph_utils import generator_main
26
27 PYNINI_AVAILABLE = True
28 except (ModuleNotFoundError, ImportError):
29
30 logging.warning(
31 "`pynini` is not installed ! \n"
32 "Please run the `nemo_text_processing/setup.sh` script"
33 "prior to usage of this toolkit."
34 )
35
36 PYNINI_AVAILABLE = False
37
38
39 # This script exports compiled grammars inside nemo_text_processing into OpenFst finite state archive files
40 # tokenize_and_classify.far and verbalize.far for production purposes
41
42
43 def itn_grammars(**kwargs):
44 d = {}
45 d['classify'] = {
46 'TOKENIZE_AND_CLASSIFY': ITNClassifyFst(
47 cache_dir=kwargs["cache_dir"], overwrite_cache=kwargs["overwrite_cache"]
48 ).fst
49 }
50 d['verbalize'] = {'ALL': ITNVerbalizeFst().fst, 'REDUP': pynini.accep("REDUP")}
51 return d
52
53
54 def tn_grammars(**kwargs):
55 d = {}
56 d['classify'] = {
57 'TOKENIZE_AND_CLASSIFY': TNClassifyFst(
58 input_case=kwargs["input_case"],
59 deterministic=True,
60 cache_dir=kwargs["cache_dir"],
61 overwrite_cache=kwargs["overwrite_cache"],
62 ).fst
63 }
64 d['verbalize'] = {'ALL': TNVerbalizeFst(deterministic=True).fst, 'REDUP': pynini.accep("REDUP")}
65 return d
66
67
68 def export_grammars(output_dir, grammars):
69 """
70 Exports tokenizer_and_classify and verbalize Fsts as OpenFst finite state archive (FAR) files.
71
72 Args:
73 output_dir: directory to export FAR files to. Subdirectories will be created for tagger and verbalizer respectively.
74 grammars: grammars to be exported
75 """
76
77 for category, graphs in grammars.items():
78 out_dir = os.path.join(output_dir, category)
79 if not os.path.exists(out_dir):
80 os.makedirs(out_dir)
81 time.sleep(1)
82 if category == "classify":
83 category = "tokenize_and_classify"
84 generator_main(f"{out_dir}/{category}.far", graphs)
85
86
87 def parse_args():
88 parser = ArgumentParser()
89 parser.add_argument("--output_dir", help="output directory for grammars", required=True, type=str)
90 parser.add_argument(
91 "--language", help="language", choices=["en", "de", "es", "ru", 'fr', 'vi'], type=str, default='en'
92 )
93 parser.add_argument(
94 "--grammars", help="grammars to be exported", choices=["tn_grammars", "itn_grammars"], type=str, required=True
95 )
96 parser.add_argument(
97 "--input_case", help="input capitalization", choices=["lower_cased", "cased"], default="cased", type=str
98 )
99 parser.add_argument("--overwrite_cache", help="set to True to re-create .far grammar files", action="store_true")
100 parser.add_argument(
101 "--cache_dir",
102 help="path to a dir with .far grammar file. Set to None to avoid using cache",
103 default=None,
104 type=str,
105 )
106 return parser.parse_args()
107
108
109 if __name__ == '__main__':
110 args = parse_args()
111
112 if args.language in ['ru', 'fr', 'es', 'vi'] and args.grammars == 'tn_grammars':
113 raise ValueError('Only ITN grammars could be deployed in Sparrowhawk for the selected languages.')
114
115 if args.language == 'en':
116 from nemo_text_processing.inverse_text_normalization.en.taggers.tokenize_and_classify import (
117 ClassifyFst as ITNClassifyFst,
118 )
119 from nemo_text_processing.inverse_text_normalization.en.verbalizers.verbalize import (
120 VerbalizeFst as ITNVerbalizeFst,
121 )
122 from nemo_text_processing.text_normalization.en.taggers.tokenize_and_classify import (
123 ClassifyFst as TNClassifyFst,
124 )
125 from nemo_text_processing.text_normalization.en.verbalizers.verbalize import VerbalizeFst as TNVerbalizeFst
126 elif args.language == 'de':
127 from nemo_text_processing.inverse_text_normalization.de.taggers.tokenize_and_classify import (
128 ClassifyFst as ITNClassifyFst,
129 )
130 from nemo_text_processing.inverse_text_normalization.de.verbalizers.verbalize import (
131 VerbalizeFst as ITNVerbalizeFst,
132 )
133 from nemo_text_processing.text_normalization.de.taggers.tokenize_and_classify import (
134 ClassifyFst as TNClassifyFst,
135 )
136 from nemo_text_processing.text_normalization.de.verbalizers.verbalize import VerbalizeFst as TNVerbalizeFst
137 elif args.language == 'ru':
138 from nemo_text_processing.inverse_text_normalization.ru.taggers.tokenize_and_classify import (
139 ClassifyFst as ITNClassifyFst,
140 )
141 from nemo_text_processing.inverse_text_normalization.ru.verbalizers.verbalize import (
142 VerbalizeFst as ITNVerbalizeFst,
143 )
144 elif args.language == 'es':
145 from nemo_text_processing.inverse_text_normalization.es.taggers.tokenize_and_classify import (
146 ClassifyFst as ITNClassifyFst,
147 )
148 from nemo_text_processing.inverse_text_normalization.es.verbalizers.verbalize import (
149 VerbalizeFst as ITNVerbalizeFst,
150 )
151 elif args.language == 'fr':
152 from nemo_text_processing.inverse_text_normalization.fr.taggers.tokenize_and_classify import (
153 ClassifyFst as ITNClassifyFst,
154 )
155 from nemo_text_processing.inverse_text_normalization.fr.verbalizers.verbalize import (
156 VerbalizeFst as ITNVerbalizeFst,
157 )
158 elif args.language == 'vi':
159 from nemo_text_processing.inverse_text_normalization.vi.taggers.tokenize_and_classify import (
160 ClassifyFst as ITNClassifyFst,
161 )
162 from nemo_text_processing.inverse_text_normalization.vi.verbalizers.verbalize import (
163 VerbalizeFst as ITNVerbalizeFst,
164 )
165
166 output_dir = os.path.join(args.output_dir, args.language)
167 export_grammars(
168 output_dir=output_dir,
169 grammars=locals()[args.grammars](
170 input_case=args.input_case, cache_dir=args.cache_dir, overwrite_cache=args.overwrite_cache
171 ),
172 )
173
[end of tools/text_processing_deployment/pynini_export.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
NVIDIA/NeMo
|
022f0292aecbc98d591d49423d5045235394f793
|
./reinstall.sh crashes due to not being able to uninstall llvmlite
Starting off of `nemo:1.5.1` container, cloning the NeMo repo to a folder inside of it and calling `./reinstall.sh` fails with
```
ERROR: Cannot uninstall 'llvmlite'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall.
```
`pip install -e` on the other hand succeeds installing `nemo:1.7.0rc0` and `numpy:1.22.2`, the rest of the packages remain untouched.
It seems that `./reinstall.sh` which used to work fine, a week or so ago when following the same procedure to upgrade to `nemo:1.6.0rc` redeveloped issue #841. The solution remains the same, first call
```
pip install --ignore-installed llvmlite
```
followed by `./reinstall.sh`. In this case, apart from `llvml`, the following packages are updated
```
ftfy-6.0.3 nemo-toolkit-1.7.0rc0 numba-0.55.1 pytorch-lightning-1.5.9 sacrebleu-2.0.0 setuptools-59.5.0
```
Interestingly `numpy` in this case is left at `1.21.5`.
|
2022-02-09T05:12:31Z
|
<patch>
diff --git a/nemo_text_processing/text_normalization/__init__.py b/nemo_text_processing/text_normalization/__init__.py
--- a/nemo_text_processing/text_normalization/__init__.py
+++ b/nemo_text_processing/text_normalization/__init__.py
@@ -21,7 +21,7 @@
except (ModuleNotFoundError, ImportError):
logging.warning(
"`pynini` is not installed ! \n"
- "Please run the `nemo_text_processing/setup.sh` script"
+ "Please run the `nemo_text_processing/setup.sh` script "
"prior to usage of this toolkit."
)
diff --git a/nemo_text_processing/text_normalization/en/graph_utils.py b/nemo_text_processing/text_normalization/en/graph_utils.py
--- a/nemo_text_processing/text_normalization/en/graph_utils.py
+++ b/nemo_text_processing/text_normalization/en/graph_utils.py
@@ -159,7 +159,7 @@ def convert_space(fst) -> 'pynini.FstLike':
"""
Converts space to nonbreaking space.
Used only in tagger grammars for transducing token values within quotes, e.g. name: "hello kitty"
- This is making transducer significantly slower, so only use when there could be potential spaces within quotes, otherwise leave it.
+ This is making transducer significantly slower, so only use when there could be potential spaces within quotes, otherwise leave it.
Args:
fst: input fst
@@ -208,9 +208,9 @@ def add_tokens(self, fst) -> 'pynini.FstLike':
"""
Wraps class name around to given fst
- Args:
+ Args:
fst: input fst
-
+
Returns:
Fst: fst
"""
diff --git a/nemo_text_processing/text_normalization/en/taggers/punctuation.py b/nemo_text_processing/text_normalization/en/taggers/punctuation.py
--- a/nemo_text_processing/text_normalization/en/taggers/punctuation.py
+++ b/nemo_text_processing/text_normalization/en/taggers/punctuation.py
@@ -22,7 +22,7 @@
import pynini
from pynini.lib import pynutil
- PYNINI_AVAILABLE = False
+ PYNINI_AVAILABLE = True
except (ModuleNotFoundError, ImportError):
PYNINI_AVAILABLE = False
diff --git a/nemo_text_processing/text_normalization/en/verbalizers/whitelist.py b/nemo_text_processing/text_normalization/en/verbalizers/whitelist.py
--- a/nemo_text_processing/text_normalization/en/verbalizers/whitelist.py
+++ b/nemo_text_processing/text_normalization/en/verbalizers/whitelist.py
@@ -12,8 +12,6 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
-
-
from nemo_text_processing.text_normalization.en.graph_utils import NEMO_CHAR, NEMO_SIGMA, GraphFst, delete_space
try:
@@ -21,6 +19,7 @@
from pynini.lib import pynutil
PYNINI_AVAILABLE = True
+
except (ModuleNotFoundError, ImportError):
PYNINI_AVAILABLE = False
diff --git a/nemo_text_processing/text_normalization/en/verbalizers/word.py b/nemo_text_processing/text_normalization/en/verbalizers/word.py
--- a/nemo_text_processing/text_normalization/en/verbalizers/word.py
+++ b/nemo_text_processing/text_normalization/en/verbalizers/word.py
@@ -12,7 +12,6 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
-
from nemo_text_processing.text_normalization.en.graph_utils import NEMO_CHAR, NEMO_SIGMA, GraphFst, delete_space
try:
@@ -20,6 +19,7 @@
from pynini.lib import pynutil
PYNINI_AVAILABLE = True
+
except (ModuleNotFoundError, ImportError):
PYNINI_AVAILABLE = False
diff --git a/nemo_text_processing/text_normalization/es/__init__.py b/nemo_text_processing/text_normalization/es/__init__.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/__init__.py
@@ -0,0 +1,15 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+LOCALIZATION = "eu" # Set to am for alternate formatting
diff --git a/nemo_text_processing/text_normalization/es/data/__init__.py b/nemo_text_processing/text_normalization/es/data/__init__.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/data/__init__.py
@@ -0,0 +1,13 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/nemo_text_processing/text_normalization/es/data/dates/__init__.py b/nemo_text_processing/text_normalization/es/data/dates/__init__.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/data/dates/__init__.py
@@ -0,0 +1,13 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/nemo_text_processing/text_normalization/es/data/electronic/__init__.py b/nemo_text_processing/text_normalization/es/data/electronic/__init__.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/data/electronic/__init__.py
@@ -0,0 +1,13 @@
+# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/nemo_text_processing/text_normalization/es/data/fractions/__init__.py b/nemo_text_processing/text_normalization/es/data/fractions/__init__.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/data/fractions/__init__.py
@@ -0,0 +1,13 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/nemo_text_processing/text_normalization/es/data/measures/__init__.py b/nemo_text_processing/text_normalization/es/data/measures/__init__.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/data/measures/__init__.py
@@ -0,0 +1,13 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/nemo_text_processing/text_normalization/es/data/money/__init__.py b/nemo_text_processing/text_normalization/es/data/money/__init__.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/data/money/__init__.py
@@ -0,0 +1,13 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/nemo_text_processing/text_normalization/es/data/numbers/__init__.py b/nemo_text_processing/text_normalization/es/data/numbers/__init__.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/data/numbers/__init__.py
@@ -0,0 +1,13 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/nemo_text_processing/text_normalization/es/data/ordinals/__init__.py b/nemo_text_processing/text_normalization/es/data/ordinals/__init__.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/data/ordinals/__init__.py
@@ -0,0 +1,13 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/nemo_text_processing/text_normalization/es/data/roman/__init__.py b/nemo_text_processing/text_normalization/es/data/roman/__init__.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/data/roman/__init__.py
@@ -0,0 +1,13 @@
+# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/nemo_text_processing/text_normalization/es/data/time/__init__.py b/nemo_text_processing/text_normalization/es/data/time/__init__.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/data/time/__init__.py
@@ -0,0 +1,13 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/nemo_text_processing/text_normalization/es/graph_utils.py b/nemo_text_processing/text_normalization/es/graph_utils.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/graph_utils.py
@@ -0,0 +1,179 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import NEMO_SIGMA, NEMO_SPACE
+from nemo_text_processing.text_normalization.es import LOCALIZATION
+from nemo_text_processing.text_normalization.es.utils import get_abs_path, load_labels
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ digits = pynini.project(pynini.string_file(get_abs_path("data/numbers/digit.tsv")), "input")
+ tens = pynini.project(pynini.string_file(get_abs_path("data/numbers/ties.tsv")), "input")
+ teens = pynini.project(pynini.string_file(get_abs_path("data/numbers/teen.tsv")), "input")
+ twenties = pynini.project(pynini.string_file(get_abs_path("data/numbers/twenties.tsv")), "input")
+ hundreds = pynini.project(pynini.string_file(get_abs_path("data/numbers/hundreds.tsv")), "input")
+
+ accents = pynini.string_map([("á", "a"), ("é", "e"), ("í", "i"), ("ó", "o"), ("ú", "u")])
+
+ if LOCALIZATION == "am": # Setting localization for central and northern america formatting
+ cardinal_separator = pynini.string_map([",", NEMO_SPACE])
+ decimal_separator = pynini.accep(".")
+ else:
+ cardinal_separator = pynini.string_map([".", NEMO_SPACE])
+ decimal_separator = pynini.accep(",")
+
+ ones = pynini.union("un", "ún")
+ fem_ones = pynini.union(pynini.cross("un", "una"), pynini.cross("ún", "una"), pynini.cross("uno", "una"))
+ one_to_one_hundred = pynini.union(digits, tens, teens, twenties, tens + pynini.accep(" y ") + digits)
+ fem_hundreds = hundreds @ pynini.cdrewrite(pynini.cross("ientos", "ientas"), "", "", NEMO_SIGMA)
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ digits = None
+ tens = None
+ teens = None
+ twenties = None
+ hundreds = None
+
+ accents = None
+
+ cardinal_separator = None
+ decimal_separator = None
+
+ ones = None
+ fem_ones = None
+ one_to_one_hundred = None
+ fem_hundreds = None
+
+ PYNINI_AVAILABLE = False
+
+
+def strip_accent(fst: 'pynini.FstLike') -> 'pynini.FstLike':
+ """
+ Converts all accented vowels to non-accented equivalents
+
+ Args:
+ fst: Any fst. Composes vowel conversion onto fst's output strings
+ """
+ return fst @ pynini.cdrewrite(accents, "", "", NEMO_SIGMA)
+
+
+def shift_cardinal_gender(fst: 'pynini.FstLike') -> 'pynini.FstLike':
+ """
+ Applies gender conversion rules to a cardinal string. These include: rendering all masculine forms of "uno" (including apocopated forms) as "una" and
+ Converting all gendered numbers in the hundreds series (200,300,400...) to feminine equivalent (e.g. "doscientos" -> "doscientas"). Conversion only applies
+ to value place for <1000 and multiple of 1000. (e.g. "doscientos mil doscientos" -> "doscientas mil doscientas".) For place values greater than the thousands, there
+ is no gender shift as the higher powers of ten ("millones", "billones") are masculine nouns and any conversion would be formally
+ ungrammatical.
+ e.g.
+ "doscientos" -> "doscientas"
+ "doscientos mil" -> "doscientas mil"
+ "doscientos millones" -> "doscientos millones"
+ "doscientos mil millones" -> "doscientos mil millones"
+ "doscientos millones doscientos mil doscientos" -> "doscientos millones doscientas mil doscientas"
+
+ Args:
+ fst: Any fst. Composes conversion onto fst's output strings
+ """
+ before_mil = (
+ NEMO_SPACE
+ + (pynini.accep("mil") | pynini.accep("milésimo"))
+ + pynini.closure(NEMO_SPACE + hundreds, 0, 1)
+ + pynini.closure(NEMO_SPACE + one_to_one_hundred, 0, 1)
+ + pynini.union(pynini.accep("[EOS]"), pynini.accep("\""), decimal_separator)
+ )
+ before_double_digits = pynini.closure(NEMO_SPACE + one_to_one_hundred, 0, 1) + pynini.union(
+ pynini.accep("[EOS]"), pynini.accep("\"")
+ )
+
+ fem_allign = pynini.cdrewrite(fem_hundreds, "", before_mil, NEMO_SIGMA) # doscientas mil dosciento
+ fem_allign @= pynini.cdrewrite(fem_hundreds, "", before_double_digits, NEMO_SIGMA) # doscientas mil doscienta
+ fem_allign @= pynini.cdrewrite(
+ fem_ones, "", pynini.union("[EOS]", "\"", decimal_separator), NEMO_SIGMA
+ ) # If before a quote or EOS, we know it's the end of a string
+
+ return fst @ fem_allign
+
+
+def shift_number_gender(fst: 'pynini.FstLike') -> 'pynini.FstLike':
+ """
+ Performs gender conversion on all verbalized numbers in output. All values in the hundreds series (200,300,400) are changed to
+ feminine gender (e.g. "doscientos" -> "doscientas") and all forms of "uno" (including apocopated forms) are converted to "una".
+ This has no boundary restriction and will perform shift across all values in output string.
+ e.g.
+ "doscientos" -> "doscientas"
+ "doscientos millones" -> "doscientas millones"
+ "doscientos millones doscientos" -> "doscientas millones doscientas"
+
+ Args:
+ fst: Any fst. Composes conversion onto fst's output strings
+ """
+ fem_allign = pynini.cdrewrite(fem_hundreds, "", "", NEMO_SIGMA)
+ fem_allign @= pynini.cdrewrite(
+ fem_ones, "", pynini.union(NEMO_SPACE, pynini.accep("[EOS]"), pynini.accep("\"")), NEMO_SIGMA
+ ) # If before a quote or EOS, we know it's the end of a string
+
+ return fst @ fem_allign
+
+
+def strip_cardinal_apocope(fst: 'pynini.FstLike') -> 'pynini.FstLike':
+ """
+ Reverts apocope on cardinal strings in line with formation rules. e.g. "un" -> "uno". Due to cardinal formation rules, this in effect only
+ affects strings where the final value is a variation of "un".
+ e.g.
+ "un" -> "uno"
+ "veintiún" -> "veintiuno"
+
+ Args:
+ fst: Any fst. Composes conversion onto fst's output strings
+ """
+ # Since cardinals use apocope by default for large values (e.g. "millón"), this only needs to act on the last instance of one
+ strip = pynini.cross("un", "uno") | pynini.cross("ún", "uno")
+ strip = pynini.cdrewrite(strip, "", pynini.union("[EOS]", "\""), NEMO_SIGMA)
+ return fst @ strip
+
+
+def roman_to_int(fst: 'pynini.FstLike') -> 'pynini.FstLike':
+ """
+ Alters given fst to convert Roman integers (lower and upper cased) into Arabic numerals. Valid for values up to 1000.
+ e.g.
+ "V" -> "5"
+ "i" -> "1"
+
+ Args:
+ fst: Any fst. Composes fst onto Roman conversion outputs.
+ """
+
+ def _load_roman(file: str):
+ roman = load_labels(get_abs_path(file))
+ roman_numerals = [(x, y) for x, y in roman] + [(x.upper(), y) for x, y in roman]
+ return pynini.string_map(roman_numerals)
+
+ digit = _load_roman("data/roman/digit.tsv")
+ ties = _load_roman("data/roman/ties.tsv")
+ hundreds = _load_roman("data/roman/hundreds.tsv")
+
+ graph = (
+ digit
+ | ties + (digit | pynutil.add_weight(pynutil.insert("0"), 0.01))
+ | (
+ hundreds
+ + (ties | pynutil.add_weight(pynutil.insert("0"), 0.01))
+ + (digit | pynutil.add_weight(pynutil.insert("0"), 0.01))
+ )
+ ).optimize()
+
+ return graph @ fst
diff --git a/nemo_text_processing/text_normalization/es/taggers/__init__.py b/nemo_text_processing/text_normalization/es/taggers/__init__.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/taggers/__init__.py
@@ -0,0 +1,13 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/nemo_text_processing/text_normalization/es/taggers/cardinal.py b/nemo_text_processing/text_normalization/es/taggers/cardinal.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/taggers/cardinal.py
@@ -0,0 +1,190 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import (
+ NEMO_ALPHA,
+ NEMO_DIGIT,
+ NEMO_SIGMA,
+ NEMO_SPACE,
+ NEMO_WHITE_SPACE,
+ GraphFst,
+ delete_space,
+ insert_space,
+)
+from nemo_text_processing.text_normalization.es.graph_utils import cardinal_separator
+from nemo_text_processing.text_normalization.es.utils import get_abs_path
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ zero = pynini.invert(pynini.string_file(get_abs_path("data/numbers/zero.tsv")))
+ digit = pynini.invert(pynini.string_file(get_abs_path("data/numbers/digit.tsv")))
+ teen = pynini.invert(pynini.string_file(get_abs_path("data/numbers/teen.tsv")))
+ ties = pynini.invert(pynini.string_file(get_abs_path("data/numbers/ties.tsv")))
+ twenties = pynini.invert(pynini.string_file(get_abs_path("data/numbers/twenties.tsv")))
+ hundreds = pynini.invert(pynini.string_file(get_abs_path("data/numbers/hundreds.tsv")))
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ zero = None
+ digit = None
+ teen = None
+ ties = None
+ twenties = None
+ hundreds = None
+
+ PYNINI_AVAILABLE = False
+
+
+def filter_punctuation(fst: 'pynini.FstLike') -> 'pynini.FstLike':
+ """
+ Helper function for parsing number strings. Converts common cardinal strings (groups of three digits delineated by 'cardinal_separator' - see graph_utils)
+ and converts to a string of digits:
+ "1 000" -> "1000"
+ "1.000.000" -> "1000000"
+ Args:
+ fst: Any pynini.FstLike object. Function composes fst onto string parser fst
+
+ Returns:
+ fst: A pynini.FstLike object
+ """
+ exactly_three_digits = NEMO_DIGIT ** 3 # for blocks of three
+ up_to_three_digits = pynini.closure(NEMO_DIGIT, 1, 3) # for start of string
+
+ cardinal_string = pynini.closure(
+ NEMO_DIGIT, 1
+ ) # For string w/o punctuation (used for page numbers, thousand series)
+
+ cardinal_string |= (
+ up_to_three_digits
+ + pynutil.delete(cardinal_separator)
+ + pynini.closure(exactly_three_digits + pynutil.delete(cardinal_separator))
+ + exactly_three_digits
+ )
+
+ return cardinal_string @ fst
+
+
+class CardinalFst(GraphFst):
+ """
+ Finite state transducer for classifying cardinals, e.g.
+ "1000" -> cardinal { integer: "mil" }
+ "2.000.000" -> cardinal { integer: "dos millones" }
+
+ Args:
+ deterministic: if True will provide a single transduction option,
+ for False multiple transduction are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, deterministic: bool = True):
+ super().__init__(name="cardinal", kind="classify", deterministic=deterministic)
+
+ # Any single digit
+ graph_digit = digit
+ digits_no_one = (NEMO_DIGIT - "1") @ graph_digit
+
+ # Any double digit
+ graph_tens = teen
+ graph_tens |= ties + (pynutil.delete('0') | (pynutil.insert(" y ") + graph_digit))
+ graph_tens |= twenties
+
+ self.tens = graph_tens.optimize()
+
+ self.two_digit_non_zero = pynini.union(
+ graph_digit, graph_tens, (pynini.cross("0", NEMO_SPACE) + graph_digit)
+ ).optimize()
+
+ # Three digit strings
+ graph_hundreds = hundreds + pynini.union(
+ pynutil.delete("00"), (insert_space + graph_tens), (pynini.cross("0", NEMO_SPACE) + graph_digit)
+ )
+ graph_hundreds |= pynini.cross("100", "cien")
+ graph_hundreds |= (
+ pynini.cross("1", "ciento") + insert_space + pynini.union(graph_tens, pynutil.delete("0") + graph_digit)
+ )
+
+ self.hundreds = graph_hundreds.optimize()
+
+ # For all three digit strings with leading zeroes (graph appends '0's to manage place in string)
+ graph_hundreds_component = pynini.union(graph_hundreds, pynutil.delete("0") + graph_tens)
+
+ graph_hundreds_component_at_least_one_none_zero_digit = graph_hundreds_component | (
+ pynutil.delete("00") + graph_digit
+ )
+ graph_hundreds_component_at_least_one_none_zero_digit_no_one = graph_hundreds_component | (
+ pynutil.delete("00") + digits_no_one
+ )
+
+ graph_thousands_component_at_least_one_none_zero_digit = pynini.union(
+ pynutil.delete("000") + graph_hundreds_component_at_least_one_none_zero_digit,
+ graph_hundreds_component_at_least_one_none_zero_digit_no_one
+ + pynutil.insert(" mil")
+ + ((insert_space + graph_hundreds_component_at_least_one_none_zero_digit) | pynutil.delete("000")),
+ pynini.cross("001", "mil")
+ + ((insert_space + graph_hundreds_component_at_least_one_none_zero_digit) | pynutil.delete("000")),
+ )
+
+ graph_thousands_component_at_least_one_none_zero_digit_no_one = pynini.union(
+ pynutil.delete("000") + graph_hundreds_component_at_least_one_none_zero_digit_no_one,
+ graph_hundreds_component_at_least_one_none_zero_digit_no_one
+ + pynutil.insert(" mil")
+ + ((insert_space + graph_hundreds_component_at_least_one_none_zero_digit) | pynutil.delete("000")),
+ pynini.cross("001", "mil")
+ + ((insert_space + graph_hundreds_component_at_least_one_none_zero_digit) | pynutil.delete("000")),
+ )
+
+ graph_million = pynutil.add_weight(pynini.cross("000001", "un millón"), -0.001)
+ graph_million |= graph_thousands_component_at_least_one_none_zero_digit_no_one + pynutil.insert(" millones")
+ graph_million |= pynutil.delete("000000")
+ graph_million += insert_space
+
+ graph_billion = pynutil.add_weight(pynini.cross("000001", "un billón"), -0.001)
+ graph_billion |= graph_thousands_component_at_least_one_none_zero_digit_no_one + pynutil.insert(" billones")
+ graph_billion |= pynutil.delete("000000")
+ graph_billion += insert_space
+
+ graph_trillion = pynutil.add_weight(pynini.cross("000001", "un trillón"), -0.001)
+ graph_trillion |= graph_thousands_component_at_least_one_none_zero_digit_no_one + pynutil.insert(" trillones")
+ graph_trillion |= pynutil.delete("000000")
+ graph_trillion += insert_space
+
+ graph = (
+ graph_trillion
+ + graph_billion
+ + graph_million
+ + (graph_thousands_component_at_least_one_none_zero_digit | pynutil.delete("000000"))
+ )
+
+ self.graph = (
+ ((NEMO_DIGIT - "0") + pynini.closure(NEMO_DIGIT, 0))
+ @ pynini.cdrewrite(pynini.closure(pynutil.insert("0")), "[BOS]", "", NEMO_SIGMA)
+ @ NEMO_DIGIT ** 24
+ @ graph
+ @ pynini.cdrewrite(delete_space, "[BOS]", "", NEMO_SIGMA)
+ @ pynini.cdrewrite(delete_space, "", "[EOS]", NEMO_SIGMA)
+ @ pynini.cdrewrite(
+ pynini.cross(pynini.closure(NEMO_WHITE_SPACE, 2), NEMO_SPACE), NEMO_ALPHA, NEMO_ALPHA, NEMO_SIGMA
+ )
+ )
+ self.graph |= zero
+
+ self.graph = filter_punctuation(self.graph).optimize()
+
+ optional_minus_graph = pynini.closure(pynutil.insert("negative: ") + pynini.cross("-", "\"true\" "), 0, 1)
+
+ final_graph = optional_minus_graph + pynutil.insert("integer: \"") + self.graph + pynutil.insert("\"")
+
+ final_graph = self.add_tokens(final_graph)
+ self.fst = final_graph.optimize()
diff --git a/nemo_text_processing/text_normalization/es/taggers/date.py b/nemo_text_processing/text_normalization/es/taggers/date.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/taggers/date.py
@@ -0,0 +1,107 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import NEMO_DIGIT, NEMO_SPACE, GraphFst, delete_extra_space
+from nemo_text_processing.text_normalization.es.utils import get_abs_path
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+    articles = pynini.union("de", "del", "el", "del año")
+ delete_leading_zero = (pynutil.delete("0") | (NEMO_DIGIT - "0")) + NEMO_DIGIT
+ month_numbers = pynini.string_file(get_abs_path("data/dates/months.tsv"))
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ articles = None
+ delete_leading_zero = None
+ month_numbers = None
+
+ PYNINI_AVAILABLE = False
+
+
+class DateFst(GraphFst):
+ """
+ Finite state transducer for classifying date, e.g.
+        "01.04.2010" -> date { day: "un" month: "abril" year: "dos mil diez" preserve_order: true }
+ "marzo 4 2000" -> date { month: "marzo" day: "cuatro" year: "dos mil" }
+ "1990-20-01" -> date { year: "mil novecientos noventa" day: "veinte" month: "enero" }
+
+ Args:
+ cardinal: cardinal GraphFst
+ deterministic: if True will provide a single transduction option,
+ for False multiple transduction are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, cardinal: GraphFst, deterministic: bool):
+ super().__init__(name="date", kind="classify", deterministic=deterministic)
+
+ number_to_month = month_numbers.optimize()
+ month_graph = pynini.project(number_to_month, "output")
+
+ numbers = cardinal.graph
+ optional_leading_zero = delete_leading_zero | NEMO_DIGIT
+
+ # 01, 31, 1
+ digit_day = optional_leading_zero @ pynini.union(*[str(x) for x in range(1, 32)]) @ numbers
+ day = (pynutil.insert("day: \"") + digit_day + pynutil.insert("\"")).optimize()
+
+ digit_month = optional_leading_zero @ pynini.union(*[str(x) for x in range(1, 13)])
+ number_to_month = digit_month @ number_to_month
+
+ month_name = (pynutil.insert("month: \"") + month_graph + pynutil.insert("\"")).optimize()
+ month_number = (pynutil.insert("month: \"") + number_to_month + pynutil.insert("\"")).optimize()
+
+ # prefer cardinal over year
+ year = (NEMO_DIGIT - "0") + pynini.closure(NEMO_DIGIT, 1, 3) # 90, 990, 1990
+ year @= numbers
+ self.year = year
+
+ year_only = pynutil.insert("year: \"") + year + pynutil.insert("\"")
+ year_with_articles = (
+ pynutil.insert("year: \"") + pynini.closure(articles + NEMO_SPACE, 0, 1) + year + pynutil.insert("\"")
+ )
+
+ graph_dmy = (
+ day
+ + pynini.closure(pynutil.delete(" de"))
+ + NEMO_SPACE
+ + month_name
+ + pynini.closure(NEMO_SPACE + year_with_articles, 0, 1)
+ )
+
+ graph_mdy = ( # English influences on language
+ month_name + delete_extra_space + day + pynini.closure(NEMO_SPACE + year_with_articles, 0, 1)
+ )
+
+ separators = [".", "-", "/"]
+ for sep in separators:
+ year_optional = pynini.closure(pynini.cross(sep, NEMO_SPACE) + year_only, 0, 1)
+ new_graph = day + pynini.cross(sep, NEMO_SPACE) + month_number + year_optional
+ graph_dmy |= new_graph
+ if not deterministic:
+ new_graph = month_number + pynini.cross(sep, NEMO_SPACE) + day + year_optional
+ graph_mdy |= new_graph
+
+ dash = "-"
+ day_optional = pynini.closure(pynini.cross(dash, NEMO_SPACE) + day, 0, 1)
+ graph_ymd = NEMO_DIGIT ** 4 @ year_only + pynini.cross(dash, NEMO_SPACE) + month_number + day_optional
+
+ final_graph = graph_dmy + pynutil.insert(" preserve_order: true")
+ final_graph |= graph_ymd
+ final_graph |= graph_mdy
+
+ self.final_graph = final_graph.optimize()
+ self.fst = self.add_tokens(self.final_graph).optimize()
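The separator loop in `DateFst` accepts `.`, `-`, and `/` between day and month while stripping a single leading zero. A hypothetical plain-Python analogue of that day/month handling (the `MONTHS` table is a tiny stand-in for `data/dates/months.tsv`, shown only for illustration):

```python
# Hypothetical analogue of DateFst's separator handling: split on ".", "-"
# or "/", drop the leading zero, and validate day/month ranges.
# MONTHS is an illustrative subset of a months.tsv-style mapping.

MONTHS = {1: "enero", 2: "febrero", 3: "marzo", 4: "abril", 12: "diciembre"}

def parse_dm(text: str):
    for sep in (".", "-", "/"):
        if sep in text:
            day_str, month_str = text.split(sep)[:2]
            break
    else:
        return None
    day, month = int(day_str), int(month_str)
    if 1 <= day <= 31 and month in MONTHS:
        return {"day": day, "month": MONTHS[month]}
    return None
```

The real grammar additionally verbalizes the day through the cardinal FST and keeps an optional year; this sketch only shows the separator and zero-stripping logic.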
diff --git a/nemo_text_processing/text_normalization/es/taggers/decimals.py b/nemo_text_processing/text_normalization/es/taggers/decimals.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/taggers/decimals.py
@@ -0,0 +1,138 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import (
+ NEMO_DIGIT,
+ NEMO_SIGMA,
+ NEMO_SPACE,
+ GraphFst,
+ delete_space,
+ insert_space,
+)
+from nemo_text_processing.text_normalization.es.graph_utils import (
+ cardinal_separator,
+ decimal_separator,
+ strip_cardinal_apocope,
+)
+from nemo_text_processing.text_normalization.es.utils import get_abs_path
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ quantities = pynini.string_file(get_abs_path("data/numbers/quantities.tsv"))
+ digit = pynini.invert(pynini.string_file(get_abs_path("data/numbers/digit.tsv")))
+ zero = pynini.invert(pynini.string_file(get_abs_path("data/numbers/zero.tsv")))
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ quantities = None
+ digit = None
+ zero = None
+
+ PYNINI_AVAILABLE = False
+
+
+def get_quantity(decimal_graph: 'pynini.FstLike', cardinal_graph: 'pynini.FstLike') -> 'pynini.FstLike':
+ """
+ Returns FST that transforms either a cardinal or decimal followed by a quantity into a numeral,
+ e.g. 2 millones -> integer_part: "dos" quantity: "millones"
+    e.g. 2,4 millones -> integer_part: "dos" fractional_part: "cuatro" quantity: "millones"
+    e.g. 2,400 millones -> integer_part: "dos mil cuatrocientos" fractional_part: "cuatro" quantity: "millones"
+
+ Args:
+ decimal_graph: DecimalFST
+ cardinal_graph: CardinalFST
+ """
+ numbers = pynini.closure(NEMO_DIGIT, 1, 6) @ cardinal_graph
+ numbers = pynini.cdrewrite(pynutil.delete(cardinal_separator), "", "", NEMO_SIGMA) @ numbers
+
+ res = (
+ pynutil.insert("integer_part: \"")
+ + numbers # The cardinal we're passing only produces 'un' for one, so gender agreement is safe (all quantities are masculine). Limit to 10^6 power.
+ + pynutil.insert("\"")
+ + NEMO_SPACE
+ + pynutil.insert("quantity: \"")
+ + quantities
+ + pynutil.insert("\"")
+ )
+ res |= decimal_graph + NEMO_SPACE + pynutil.insert("quantity: \"") + quantities + pynutil.insert("\"")
+ return res
+
+
+class DecimalFst(GraphFst):
+ """
+ Finite state transducer for classifying decimal, e.g.
+ -11,4006 billones -> decimal { negative: "true" integer_part: "once" fractional_part: "cuatro cero cero seis" quantity: "billones" preserve_order: true }
+ 1 billรณn -> decimal { integer_part: "un" quantity: "billรณn" preserve_order: true }
+ Args:
+ cardinal: CardinalFst
+ deterministic: if True will provide a single transduction option,
+ for False multiple transduction are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, cardinal: GraphFst, deterministic: bool = True):
+ super().__init__(name="decimal", kind="classify", deterministic=deterministic)
+ graph_digit = digit | zero
+
+ if not deterministic:
+ graph = pynini.union(graph_digit, cardinal.hundreds, cardinal.tens)
+ graph += pynini.closure(insert_space + graph)
+
+ else:
+ # General pattern seems to be 1-3 digits: map as cardinal, default to digits otherwise \
+ graph = pynini.union(
+ graph_digit,
+ cardinal.tens,
+ cardinal.hundreds,
+ graph_digit + pynini.closure(insert_space + graph_digit, 3),
+ zero
+ + pynini.closure(insert_space + zero)
+ + pynini.closure(insert_space + graph_digit), # For cases such as "1,010"
+ )
+
+ # Need to strip apocope everywhere BUT end of string
+ reverse_apocope = pynini.string_map([("un", "uno"), ("รบn", "uno")])
+ apply_reverse_apocope = pynini.cdrewrite(reverse_apocope, "", NEMO_SPACE, NEMO_SIGMA)
+ graph @= apply_reverse_apocope
+
+ # Technically decimals should be space delineated groups of three, e.g. (1,333 333). This removes any possible spaces
+ strip_formatting = pynini.cdrewrite(delete_space, "", "", NEMO_SIGMA)
+ graph = strip_formatting @ graph
+
+ self.graph = graph.optimize()
+
+ graph_separator = pynutil.delete(decimal_separator)
+ optional_graph_negative = pynini.closure(pynutil.insert("negative: ") + pynini.cross("-", "\"true\" "), 0, 1)
+
+ self.graph_fractional = pynutil.insert("fractional_part: \"") + self.graph + pynutil.insert("\"")
+
+ # Integer graph maintains apocope except for ones place
+ graph_integer = (
+ strip_cardinal_apocope(cardinal.graph)
+ if deterministic
+ else pynini.union(cardinal.graph, strip_cardinal_apocope(cardinal.graph))
+ ) # Gives us forms w/ and w/o apocope
+ self.graph_integer = pynutil.insert("integer_part: \"") + graph_integer + pynutil.insert("\"")
+ final_graph_wo_sign = self.graph_integer + graph_separator + insert_space + self.graph_fractional
+
+ self.final_graph_wo_negative = (
+ final_graph_wo_sign | get_quantity(final_graph_wo_sign, cardinal.graph).optimize()
+ )
+ final_graph = optional_graph_negative + self.final_graph_wo_negative
+
+ final_graph += pynutil.insert(" preserve_order: true")
+ final_graph = self.add_tokens(final_graph)
+
+ self.fst = final_graph.optimize()
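The "reverse apocope" rewrite in `DecimalFst` above turns `un`/`ún` into `uno` whenever a space follows, i.e. everywhere except string-final position. A regex sketch of that one `cdrewrite` rule (an illustration, not the pynini rule itself):

```python
import re

# Regex sketch of DecimalFst's "reverse apocope" cdrewrite: "un"/"ún"
# become "uno" when followed by a space, leaving a string-final "un" alone.

def reverse_apocope(text: str) -> str:
    return re.sub(r"(un|ún)(?= )", "uno", text)
```

Note the lookahead mirrors the `NEMO_SPACE` right context of the `cdrewrite` call, so a trailing `un` (which keeps its apocope in the integer part) is untouched.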
diff --git a/nemo_text_processing/text_normalization/es/taggers/electronic.py b/nemo_text_processing/text_normalization/es/taggers/electronic.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/taggers/electronic.py
@@ -0,0 +1,84 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import NEMO_ALPHA, NEMO_DIGIT, GraphFst, insert_space
+from nemo_text_processing.text_normalization.es.utils import get_abs_path, load_labels
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ common_domains = [x[0] for x in load_labels(get_abs_path("data/electronic/domain.tsv"))]
+ symbols = [x[0] for x in load_labels(get_abs_path("data/electronic/symbols.tsv"))]
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ common_domains = None
+ symbols = None
+
+ PYNINI_AVAILABLE = False
+
+
+class ElectronicFst(GraphFst):
+ """
+ Finite state transducer for classifying electronic: email addresses
+ e.g. "abc@hotmail.com" -> electronic { username: "abc" domain: "hotmail.com" preserve_order: true }
+ e.g. "www.abc.com/123" -> electronic { protocol: "www." domain: "abc.com/123" preserve_order: true }
+
+ Args:
+ deterministic: if True will provide a single transduction option,
+ for False multiple transduction are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, deterministic: bool = True):
+ super().__init__(name="electronic", kind="classify", deterministic=deterministic)
+
+ dot = pynini.accep(".")
+ accepted_common_domains = pynini.union(*common_domains)
+ accepted_symbols = pynini.union(*symbols) - dot
+ accepted_characters = pynini.closure(NEMO_ALPHA | NEMO_DIGIT | accepted_symbols)
+        accepted_characters_with_dot = pynini.closure(NEMO_ALPHA | NEMO_DIGIT | accepted_symbols | dot)
+
+        # email
+        username = (
+            pynutil.insert("username: \"")
+            + accepted_characters_with_dot
+ + pynutil.insert("\"")
+ + pynini.cross('@', ' ')
+ )
+ domain_graph = accepted_characters + dot + accepted_characters
+ domain_graph = pynutil.insert("domain: \"") + domain_graph + pynutil.insert("\"")
+ domain_common_graph = (
+ pynutil.insert("domain: \"")
+ + accepted_characters
+ + accepted_common_domains
+ + pynini.closure((accepted_symbols | dot) + pynini.closure(accepted_characters, 1), 0, 1)
+ + pynutil.insert("\"")
+ )
+ graph = (username + domain_graph) | domain_common_graph
+
+ # url
+ protocol_start = pynini.accep("https://") | pynini.accep("http://")
+ protocol_end = (
+ pynini.accep("www.")
+ if deterministic
+ else pynini.accep("www.") | pynini.cross("www.", "doble ve doble ve doble ve.")
+ )
+ protocol = protocol_start | protocol_end | (protocol_start + protocol_end)
+ protocol = pynutil.insert("protocol: \"") + protocol + pynutil.insert("\"")
+ graph |= protocol + insert_space + (domain_graph | domain_common_graph)
+ self.graph = graph
+
+ final_graph = self.add_tokens(self.graph + pynutil.insert(" preserve_order: true"))
+ self.fst = final_graph.optimize()
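As a rough illustration of the email branch of `ElectronicFst` above, a plain-Python stand-in that emits the same field layout (this mimics the output shape only, not the FST itself):

```python
# Rough stand-in for ElectronicFst's email branch: split on "@" into
# username/domain fields; the domain must contain a dot, as in the grammar.

def tag_email(text: str):
    if text.count("@") != 1:
        return None
    username, domain = text.split("@")
    if "." not in domain or not username:
        return None
    return 'username: "%s" domain: "%s"' % (username, domain)
```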
diff --git a/nemo_text_processing/text_normalization/es/taggers/fraction.py b/nemo_text_processing/text_normalization/es/taggers/fraction.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/taggers/fraction.py
@@ -0,0 +1,124 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import (
+ NEMO_CHAR,
+ NEMO_DIGIT,
+ NEMO_SIGMA,
+ NEMO_SPACE,
+ GraphFst,
+)
+from nemo_text_processing.text_normalization.es.utils import get_abs_path
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ ordinal_exceptions = pynini.string_file(get_abs_path("data/fractions/ordinal_exceptions.tsv"))
+ higher_powers_of_ten = pynini.string_file(get_abs_path("data/fractions/powers_of_ten.tsv"))
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ ordinal_exceptions = None
+ higher_powers_of_ten = None
+
+ PYNINI_AVAILABLE = False
+
+
+class FractionFst(GraphFst):
+ """
+ Finite state transducer for classifying fraction
+ "23 4/5" ->
+    tokens { fraction { integer_part: "veintitrés" numerator: "cuatro" denominator: "quinto" morphosyntactic_features: "ordinal" } }
+
+ Args:
+ cardinal: CardinalFst
+ ordinal: OrdinalFst
+ deterministic: if True will provide a single transduction option,
+ for False multiple transduction are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, cardinal: GraphFst, ordinal: GraphFst, deterministic: bool = True):
+ super().__init__(name="fraction", kind="classify", deterministic=deterministic)
+ cardinal_graph = cardinal.graph
+ ordinal_graph = ordinal.graph
+
+ # 2-10 are all ordinals
+ three_to_ten = pynini.string_map(["2", "3", "4", "5", "6", "7", "8", "9", "10",])
+ block_three_to_ten = pynutil.delete(three_to_ten) # To block cardinal productions
+ if not deterministic: # Multiples of tens are sometimes rendered as ordinals
+ three_to_ten |= pynini.string_map(["20", "30", "40", "50", "60", "70", "80", "90",])
+ graph_three_to_ten = three_to_ten @ ordinal_graph
+ graph_three_to_ten @= pynini.cdrewrite(ordinal_exceptions, "", "", NEMO_SIGMA)
+
+ # Higher powers of tens (and multiples) are converted to ordinals.
+ hundreds = pynini.string_map(["100", "200", "300", "400", "500", "600", "700", "800", "900",])
+ graph_hundreds = hundreds @ ordinal_graph
+
+ multiples_of_thousand = ordinal.multiples_of_thousand # So we can have X milรฉsimos
+
+ graph_higher_powers_of_ten = (
+ pynini.closure(ordinal.one_to_one_thousand + NEMO_SPACE, 0, 1)
+ + pynini.closure("mil ", 0, 1)
+ + pynini.closure(ordinal.one_to_one_thousand + NEMO_SPACE, 0, 1)
+ ) # x millones / x mil millones / x mil z millones
+ graph_higher_powers_of_ten += higher_powers_of_ten
+ graph_higher_powers_of_ten = cardinal_graph @ graph_higher_powers_of_ten
+ graph_higher_powers_of_ten @= pynini.cdrewrite(
+ pynutil.delete("un "), pynini.accep("[BOS]"), pynini.project(higher_powers_of_ten, "output"), NEMO_SIGMA
+ ) # we drop 'un' from these ordinals (millionths, not one-millionths)
+
+ graph_higher_powers_of_ten = multiples_of_thousand | graph_hundreds | graph_higher_powers_of_ten
+ block_higher_powers_of_ten = pynutil.delete(
+ pynini.project(graph_higher_powers_of_ten, "input")
+ ) # For cardinal graph
+
+ graph_fractions_ordinals = graph_higher_powers_of_ten | graph_three_to_ten
+ graph_fractions_ordinals += pynutil.insert(
+ "\" morphosyntactic_features: \"ordinal\""
+ ) # We note the root for processing later
+
+ # Blocking the digits and hundreds from Cardinal graph
+ graph_fractions_cardinals = pynini.cdrewrite(
+ block_three_to_ten | block_higher_powers_of_ten, pynini.accep("[BOS]"), pynini.accep("[EOS]"), NEMO_SIGMA
+ )
+ graph_fractions_cardinals @= NEMO_CHAR.plus @ pynini.cdrewrite(
+ pynutil.delete("0"), pynini.accep("[BOS]"), pynini.accep("[EOS]"), NEMO_SIGMA
+        ) # Empty characters become '0' for NEMO_CHAR fst, so we need to block them
+ graph_fractions_cardinals @= cardinal_graph
+ graph_fractions_cardinals += pynutil.insert(
+ "\" morphosyntactic_features: \"add_root\""
+ ) # blocking these entries to reduce erroneous possibilities in debugging
+
+ if deterministic:
+ graph_fractions_cardinals = (
+ pynini.closure(NEMO_DIGIT, 1, 2) @ graph_fractions_cardinals
+ ) # Past hundreds the conventional scheme can be hard to read. For determinism we stop here
+
+ graph_denominator = pynini.union(
+ graph_fractions_ordinals,
+ graph_fractions_cardinals,
+ pynutil.add_weight(cardinal_graph + pynutil.insert("\""), 0.001),
+ ) # Last form is simply recording the cardinal. Weighting so last resort
+
+ integer = pynutil.insert("integer_part: \"") + cardinal_graph + pynutil.insert("\"") + NEMO_SPACE
+ numerator = (
+ pynutil.insert("numerator: \"") + cardinal_graph + (pynini.cross("/", "\" ") | pynini.cross(" / ", "\" "))
+ )
+ denominator = pynutil.insert("denominator: \"") + graph_denominator
+
+ self.graph = pynini.closure(integer, 0, 1) + numerator + denominator
+
+ final_graph = self.add_tokens(self.graph)
+ self.fst = final_graph.optimize()
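The denominator logic in `FractionFst` above renders small denominators with ordinal roots and falls back to a weighted plain-cardinal branch otherwise. An illustrative plain-Python subset of that idea (the table below is a hand-picked sketch, not the contents of the grammar's data files):

```python
# Illustrative subset of the ordinal-denominator idea: denominators in the
# 2-10 range take ordinal roots; anything outside this toy table falls back
# to the digit string, mirroring the weighted last-resort cardinal branch.

ORDINAL_ROOTS = {3: "tercio", 4: "cuarto", 5: "quinto", 10: "décimo"}

def denominator_word(n: int) -> str:
    return ORDINAL_ROOTS.get(n, str(n))
```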
diff --git a/nemo_text_processing/text_normalization/es/taggers/measure.py b/nemo_text_processing/text_normalization/es/taggers/measure.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/taggers/measure.py
@@ -0,0 +1,184 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import (
+ NEMO_ALPHA,
+ NEMO_DIGIT,
+ NEMO_NON_BREAKING_SPACE,
+ NEMO_SPACE,
+ GraphFst,
+ convert_space,
+ delete_space,
+ insert_space,
+)
+from nemo_text_processing.text_normalization.es.graph_utils import strip_cardinal_apocope
+from nemo_text_processing.text_normalization.es.utils import get_abs_path
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ unit = pynini.string_file(get_abs_path("data/measures/measurements.tsv"))
+ unit_plural_fem = pynini.string_file(get_abs_path("data/measures/measurements_plural_fem.tsv"))
+ unit_plural_masc = pynini.string_file(get_abs_path("data/measures/measurements_plural_masc.tsv"))
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ unit = None
+ unit_plural_fem = None
+ unit_plural_masc = None
+
+ PYNINI_AVAILABLE = False
+
+
+class MeasureFst(GraphFst):
+ """
+ Finite state transducer for classifying measure, e.g.
+ "2,4 g" -> measure { cardinal { integer_part: "dos" fractional_part: "cuatro" units: "gramos" preserve_order: true } }
+ "1 g" -> measure { cardinal { integer: "un" units: "gramo" preserve_order: true } }
+        "1 millón g" -> measure { cardinal { integer: "un" quantity: "millón" units: "gramos" preserve_order: true } }
+    This class also converts words containing numbers and letters
+    e.g. "a-8" -> "a ocho"
+    e.g. "1,2-a" -> "uno coma dos a"
+
+
+ Args:
+ cardinal: CardinalFst
+ decimal: DecimalFst
+ fraction: FractionFst
+ deterministic: if True will provide a single transduction option,
+ for False multiple transduction are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, cardinal: GraphFst, decimal: GraphFst, fraction: GraphFst, deterministic: bool = True):
+ super().__init__(name="measure", kind="classify", deterministic=deterministic)
+ cardinal_graph = cardinal.graph
+
+ unit_singular = unit
+ unit_plural = unit_singular @ (unit_plural_fem | unit_plural_masc)
+
+ graph_unit_singular = convert_space(unit_singular)
+ graph_unit_plural = convert_space(unit_plural)
+
+ optional_graph_negative = pynini.closure("-", 0, 1)
+
+ graph_unit_denominator = (
+ pynini.cross("/", "por") + pynutil.insert(NEMO_NON_BREAKING_SPACE) + graph_unit_singular
+ )
+
+ optional_unit_denominator = pynini.closure(
+ pynutil.insert(NEMO_NON_BREAKING_SPACE) + graph_unit_denominator, 0, 1,
+ )
+
+ unit_plural = (
+ pynutil.insert("units: \"")
+ + ((graph_unit_plural + optional_unit_denominator) | graph_unit_denominator)
+ + pynutil.insert("\"")
+ )
+
+ unit_singular_graph = (
+ pynutil.insert("units: \"")
+ + ((graph_unit_singular + optional_unit_denominator) | graph_unit_denominator)
+ + pynutil.insert("\"")
+ )
+
+ subgraph_decimal = decimal.fst + insert_space + pynini.closure(NEMO_SPACE, 0, 1) + unit_plural
+
+ subgraph_cardinal = (
+ (optional_graph_negative + (pynini.closure(NEMO_DIGIT) - "1")) @ cardinal.fst
+ + insert_space
+ + pynini.closure(delete_space, 0, 1)
+ + unit_plural
+ )
+
+ subgraph_cardinal |= (
+ (optional_graph_negative + pynini.accep("1")) @ cardinal.fst
+ + insert_space
+ + pynini.closure(delete_space, 0, 1)
+ + unit_singular_graph
+ )
+
+ subgraph_fraction = fraction.fst + insert_space + pynini.closure(delete_space, 0, 1) + unit_plural
+
+ decimal_times = (
+ pynutil.insert("decimal { ")
+ + decimal.final_graph_wo_negative
+ + pynutil.insert(" } units: \"")
+ + pynini.union('x', 'X')
+ + pynutil.insert("\"")
+ )
+
+ cardinal_times = (
+ pynutil.insert("cardinal { integer: \"")
+ + strip_cardinal_apocope(cardinal_graph)
+ + pynutil.insert("\" } units: \"")
+ + pynini.union('x', 'X')
+ + pynutil.insert("\"")
+ )
+
+ cardinal_dash_alpha = (
+ pynutil.insert("cardinal { integer: \"")
+ + strip_cardinal_apocope(cardinal_graph)
+ + pynutil.delete('-')
+ + pynutil.insert("\" } units: \"")
+ + pynini.closure(NEMO_ALPHA, 1)
+ + pynutil.insert("\"")
+ )
+
+ decimal_dash_alpha = (
+ pynutil.insert("decimal { ")
+ + decimal.final_graph_wo_negative
+ + pynutil.delete('-')
+ + pynutil.insert(" } units: \"")
+ + pynini.closure(NEMO_ALPHA, 1)
+ + pynutil.insert("\"")
+ )
+
+ alpha_dash_cardinal = (
+ pynutil.insert("units: \"")
+ + pynini.closure(NEMO_ALPHA, 1)
+ + pynutil.delete('-')
+ + pynutil.insert("\"")
+ + pynutil.insert(" cardinal { integer: \"")
+ + cardinal_graph
+ + pynutil.insert("\" } preserve_order: true")
+ )
+
+ alpha_dash_decimal = (
+ pynutil.insert("units: \"")
+ + pynini.closure(NEMO_ALPHA, 1)
+ + pynutil.delete('-')
+ + pynutil.insert("\"")
+ + pynutil.insert(" decimal { ")
+ + decimal.final_graph_wo_negative
+ + pynutil.insert(" } preserve_order: true")
+ )
+
+ final_graph = (
+ subgraph_decimal
+ | subgraph_cardinal
+ | subgraph_fraction
+ | cardinal_dash_alpha
+ | alpha_dash_cardinal
+ | decimal_dash_alpha
+ | decimal_times
+ | cardinal_times
+ | alpha_dash_decimal
+ )
+ final_graph += pynutil.insert(" preserve_order: true")
+ final_graph = self.add_tokens(final_graph)
+
+ self.fst = final_graph.optimize()
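`MeasureFst` above selects the singular unit only when the quantity (possibly negated) is exactly `1`, and the plural form otherwise. A plain-Python sketch of that split, where `UNITS` is a tiny illustrative stand-in for the measurements TSV data:

```python
# Plain-Python sketch of MeasureFst's singular/plural split: the quantity
# "1" (with optional minus sign) selects the singular unit, anything else
# the plural form. UNITS is a toy stand-in for measurements.tsv.

UNITS = {"g": ("gramo", "gramos"), "km": ("kilómetro", "kilómetros")}

def unit_for(value: str, symbol: str) -> str:
    singular, plural = UNITS[symbol]
    return singular if value.lstrip("-") == "1" else plural
```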
diff --git a/nemo_text_processing/text_normalization/es/taggers/money.py b/nemo_text_processing/text_normalization/es/taggers/money.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/taggers/money.py
@@ -0,0 +1,194 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import (
+ NEMO_ALPHA,
+ NEMO_DIGIT,
+ NEMO_SIGMA,
+ NEMO_SPACE,
+ GraphFst,
+ delete_space,
+ insert_space,
+)
+from nemo_text_processing.text_normalization.es.graph_utils import decimal_separator
+from nemo_text_processing.text_normalization.es.utils import get_abs_path, load_labels
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ maj_singular_labels = load_labels(get_abs_path("data/money/currency_major.tsv"))
+ maj_singular = pynini.string_file((get_abs_path("data/money/currency_major.tsv")))
+ min_singular = pynini.string_file(get_abs_path("data/money/currency_minor.tsv"))
+ fem_plural = pynini.string_file((get_abs_path("data/money/currency_plural_fem.tsv")))
+ masc_plural = pynini.string_file((get_abs_path("data/money/currency_plural_masc.tsv")))
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ maj_singular_labels = None
+ min_singular = None
+ maj_singular = None
+ fem_plural = None
+ masc_plural = None
+
+ PYNINI_AVAILABLE = False
+
+
+class MoneyFst(GraphFst):
+ """
+ Finite state transducer for classifying money, e.g.
+    "€1" -> money { currency_maj: "euro" integer_part: "un"}
+    "€1,000" -> money { currency_maj: "euro" integer_part: "un" }
+    "€1,001" -> money { currency_maj: "euro" integer_part: "un" fractional_part: "cero cero un" }
+    "£1,4" -> money { integer_part: "una" currency_maj: "libra" fractional_part: "cuarenta" preserve_order: true }
+           -> money { integer_part: "una" currency_maj: "libra" fractional_part: "cuarenta" currency_min: "penique" preserve_order: true }
+    "0,01 £" -> money { fractional_part: "un" currency_min: "penique" preserve_order: true }
+    "0,02 £" -> money { fractional_part: "dos" currency_min: "peniques" preserve_order: true }
+    "£0,01 million" -> money { currency_maj: "libra" integer_part: "cero" fractional_part: "cero un" quantity: "million" }
+
+ Args:
+ cardinal: CardinalFst
+ decimal: DecimalFst
+ deterministic: if True will provide a single transduction option,
+ for False multiple transduction are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, cardinal: GraphFst, decimal: GraphFst, deterministic: bool = True):
+ super().__init__(name="money", kind="classify", deterministic=deterministic)
+ cardinal_graph = cardinal.graph
+ graph_decimal_final = decimal.final_graph_wo_negative
+
+ maj_singular_graph = maj_singular
+ min_singular_graph = min_singular
+ maj_plural_graph = maj_singular @ (fem_plural | masc_plural)
+ min_plural_graph = min_singular @ (fem_plural | masc_plural)
+
+ graph_maj_singular = pynutil.insert("currency_maj: \"") + maj_singular_graph + pynutil.insert("\"")
+ graph_maj_plural = pynutil.insert("currency_maj: \"") + maj_plural_graph + pynutil.insert("\"")
+
+ graph_integer_one = pynutil.insert("integer_part: \"") + pynini.cross("1", "un") + pynutil.insert("\"")
+
+ decimal_with_quantity = (NEMO_SIGMA + NEMO_ALPHA) @ graph_decimal_final
+
+ graph_decimal_plural = pynini.union(
+ graph_maj_plural + pynini.closure(delete_space, 0, 1) + insert_space + graph_decimal_final, # $1,05
+ graph_decimal_final + pynini.closure(delete_space, 0, 1) + insert_space + graph_maj_plural, # 1,05 $
+ )
+ graph_decimal_plural = (
+ (NEMO_SIGMA - "1") + decimal_separator + NEMO_SIGMA
+ ) @ graph_decimal_plural # Can't have "un euros"
+
+ graph_decimal_singular = pynini.union(
+ graph_maj_singular + pynini.closure(delete_space, 0, 1) + insert_space + graph_decimal_final, # $1,05
+ graph_decimal_final + pynini.closure(delete_space, 0, 1) + insert_space + graph_maj_singular, # 1,05 $
+ )
+ graph_decimal_singular = (pynini.accep("1") + decimal_separator + NEMO_SIGMA) @ graph_decimal_singular
+
+ graph_decimal = pynini.union(
+ graph_decimal_singular,
+ graph_decimal_plural,
+ graph_maj_plural + pynini.closure(delete_space, 0, 1) + insert_space + decimal_with_quantity,
+ )
+
+ graph_integer = (
+ pynutil.insert("integer_part: \"") + ((NEMO_SIGMA - "1") @ cardinal_graph) + pynutil.insert("\"")
+ )
+
+ graph_integer_only = pynini.union(
+ graph_maj_singular + pynini.closure(delete_space, 0, 1) + insert_space + graph_integer_one,
+ graph_integer_one + pynini.closure(delete_space, 0, 1) + insert_space + graph_maj_singular,
+ )
+ graph_integer_only |= pynini.union(
+ graph_maj_plural + pynini.closure(delete_space, 0, 1) + insert_space + graph_integer,
+ graph_integer + pynini.closure(delete_space, 0, 1) + insert_space + graph_maj_plural,
+ )
+
+ graph = graph_integer_only | graph_decimal
+
+ # remove trailing zeros of non zero number in the first 2 digits and fill up to 2 digits
+ # e.g. 2000 -> 20, 0200->02, 01 -> 01, 10 -> 10
+ # not accepted: 002, 00, 0,
+ two_digits_fractional_part = (
+ pynini.closure(NEMO_DIGIT) + (NEMO_DIGIT - "0") + pynini.closure(pynutil.delete("0"))
+ ) @ (
+ (pynutil.delete("0") + (NEMO_DIGIT - "0"))
+ | ((NEMO_DIGIT - "0") + pynutil.insert("0"))
+ | ((NEMO_DIGIT - "0") + NEMO_DIGIT)
+ )
+
+ graph_min_singular = pynutil.insert("currency_min: \"") + min_singular_graph + pynutil.insert("\"")
+ graph_min_plural = pynutil.insert("currency_min: \"") + min_plural_graph + pynutil.insert("\"")
+
+ # format ** euro ** cent
+ decimal_graph_with_minor = None
+ for curr_symbol, _ in maj_singular_labels:
+ preserve_order = pynutil.insert(" preserve_order: true")
+
+ integer_plus_maj = pynini.union(
+ graph_integer + insert_space + pynutil.insert(curr_symbol) @ graph_maj_plural,
+ graph_integer_one + insert_space + pynutil.insert(curr_symbol) @ graph_maj_singular,
+ )
+ # non zero integer part
+ integer_plus_maj = (pynini.closure(NEMO_DIGIT) - "0") @ integer_plus_maj
+
+ graph_fractional_one = (
+ pynutil.insert("fractional_part: \"")
+ + two_digits_fractional_part @ pynini.cross("1", "un")
+ + pynutil.insert("\"")
+ )
+
+ graph_fractional = (
+ two_digits_fractional_part @ (pynini.closure(NEMO_DIGIT, 1, 2) - "1") @ cardinal.two_digit_non_zero
+ )
+ graph_fractional = pynutil.insert("fractional_part: \"") + graph_fractional + pynutil.insert("\"")
+
+ fractional_plus_min = pynini.union(
+ graph_fractional + insert_space + pynutil.insert(curr_symbol) @ graph_min_plural,
+ graph_fractional_one + insert_space + pynutil.insert(curr_symbol) @ graph_min_singular,
+ )
+
+ decimal_graph_with_minor_curr = (
+ integer_plus_maj + pynini.cross(decimal_separator, NEMO_SPACE) + fractional_plus_min
+ )
+ decimal_graph_with_minor_curr |= pynutil.add_weight(
+ integer_plus_maj
+ + pynini.cross(decimal_separator, NEMO_SPACE)
+ + pynutil.insert("fractional_part: \"")
+ + two_digits_fractional_part @ cardinal.two_digit_non_zero
+ + pynutil.insert("\""),
+ weight=0.0001,
+ )
+
+ decimal_graph_with_minor_curr |= pynutil.delete("0,") + fractional_plus_min
+ decimal_graph_with_minor_curr = pynini.union(
+ pynutil.delete(curr_symbol)
+ + pynini.closure(delete_space, 0, 1)
+ + decimal_graph_with_minor_curr
+ + preserve_order,
+ decimal_graph_with_minor_curr
+ + preserve_order
+ + pynini.closure(delete_space, 0, 1)
+ + pynutil.delete(curr_symbol),
+ )
+
+ decimal_graph_with_minor = (
+ decimal_graph_with_minor_curr
+ if decimal_graph_with_minor is None
+ else pynini.union(decimal_graph_with_minor, decimal_graph_with_minor_curr)
+ )
+
+ final_graph = graph | pynutil.add_weight(decimal_graph_with_minor, -0.001)
+
+ final_graph = self.add_tokens(final_graph)
+ self.fst = final_graph.optimize()
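The two-digit fractional-part rule described in the comment near the top of this file (2000 -> 20, 0200 -> 02, rejecting 002/00/0) can be mirrored in plain Python. This is an illustrative sketch of the stated rule only, not the FST implementation; the function name and the `None` return for rejected inputs are assumptions:

```python
def normalize_minor_units(fraction: str):
    """Normalize a currency fractional part to exactly two digits.

    Sketch of the MoneyFst comment: trailing zeros of a non-zero number
    whose significant digits sit in the first two positions are removed,
    and the result is right-padded with "0" to two digits.
    Returns None for inputs the grammar rejects (e.g. "002", "00", "0").
    """
    stripped = fraction.rstrip("0")
    # Reject all-zero inputs and numbers whose significant digits
    # extend past the first two positions.
    if not stripped or len(stripped) > 2:
        return None
    return stripped.ljust(2, "0")
```

For example, `normalize_minor_units("2000")` yields `"20"` and `normalize_minor_units("0200")` yields `"02"`, matching the examples in the comment.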
diff --git a/nemo_text_processing/text_normalization/es/taggers/ordinal.py b/nemo_text_processing/text_normalization/es/taggers/ordinal.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/taggers/ordinal.py
@@ -0,0 +1,186 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import (
+ NEMO_CHAR,
+ NEMO_SIGMA,
+ NEMO_SPACE,
+ GraphFst,
+ delete_space,
+)
+from nemo_text_processing.text_normalization.es.graph_utils import roman_to_int, strip_accent
+from nemo_text_processing.text_normalization.es.utils import get_abs_path
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ digit = pynini.invert(pynini.string_file(get_abs_path("data/ordinals/digit.tsv")))
+ teens = pynini.invert(pynini.string_file(get_abs_path("data/ordinals/teen.tsv")))
+ twenties = pynini.invert(pynini.string_file(get_abs_path("data/ordinals/twenties.tsv")))
+ ties = pynini.invert(pynini.string_file(get_abs_path("data/ordinals/ties.tsv")))
+ hundreds = pynini.invert(pynini.string_file(get_abs_path("data/ordinals/hundreds.tsv")))
+
+ PYNINI_AVAILABLE = True
+
+except (ImportError, ModuleNotFoundError):
+ digit = None
+ teens = None
+ twenties = None
+ ties = None
+ hundreds = None
+
+ PYNINI_AVAILABLE = False
+
+
+def get_one_to_one_thousand(cardinal: 'pynini.FstLike') -> 'pynini.FstLike':
+ """
+ Produces an acceptor for verbalizations of all numbers from 1 to 1000. Needed for ordinals and fractions.
+
+ Args:
+        cardinal: cardinal graph (pynini.FstLike)
+
+ Returns:
+ fst: A pynini.FstLike object
+ """
+ numbers = pynini.string_map([str(_) for _ in range(1, 1000)]) @ cardinal
+ return pynini.project(numbers, "output").optimize()
+
+
+class OrdinalFst(GraphFst):
+ """
+ Finite state transducer for classifying ordinal
+    "21.º" -> ordinal { integer: "vigésimo primero" morphosyntactic_features: "gender_masc" }
+    This class converts ordinals up to the millionth (millonésimo) order (exclusive).
+
+ This FST also records the ending of the ordinal (called "morphosyntactic_features"):
+ either as gender_masc, gender_fem, or apocope. Also introduces plural feature for non-deterministic graphs.
+
+ Args:
+ cardinal: CardinalFst
+ deterministic: if True will provide a single transduction option,
+ for False multiple transduction are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, cardinal: GraphFst, deterministic: bool = True):
+ super().__init__(name="ordinal", kind="classify")
+ cardinal_graph = cardinal.graph
+
+ graph_digit = digit.optimize()
+ graph_teens = teens.optimize()
+ graph_ties = ties.optimize()
+ graph_twenties = twenties.optimize()
+ graph_hundreds = hundreds.optimize()
+
+ if not deterministic:
+ # Some alternative derivations
+            graph_ties = graph_ties | pynini.cross("setenta", "setuagésimo")
+
+ graph_teens = graph_teens | pynini.cross("once", "decimoprimero")
+ graph_teens |= pynini.cross("doce", "decimosegundo")
+
+ graph_digit = graph_digit | pynini.cross("nueve", "nono")
+ graph_digit |= pynini.cross("siete", "sรฉtimo")
+
+ graph_tens_component = (
+ graph_teens
+ | (graph_ties + pynini.closure(pynini.cross(" y ", NEMO_SPACE) + graph_digit, 0, 1))
+ | graph_twenties
+ )
+
+ graph_hundred_component = pynini.union(
+ graph_hundreds + pynini.closure(NEMO_SPACE + pynini.union(graph_tens_component, graph_digit), 0, 1),
+ graph_tens_component,
+ graph_digit,
+ )
+
+ # Need to go up to thousands for fractions
+ self.one_to_one_thousand = get_one_to_one_thousand(cardinal_graph)
+
+        thousands = pynini.cross("mil", "milésimo")
+
+ graph_thousands = (
+ strip_accent(self.one_to_one_thousand) + NEMO_SPACE + thousands
+        ) # Cardinals become a prefix for the thousands series. Since the accent falls on the power of ten, we strip accents from the leading words
+        graph_thousands @= pynini.cdrewrite(delete_space, "", "milésimo", NEMO_SIGMA) # merge as a prefix
+ graph_thousands |= thousands
+
+ self.multiples_of_thousand = (cardinal_graph @ graph_thousands).optimize()
+
+ if (
+ not deterministic
+        ): # Formally, the words preceding the power of ten form a prefix, but some spellings maintain word boundaries.
+ graph_thousands |= (self.one_to_one_thousand @ graph_hundred_component) + NEMO_SPACE + thousands
+
+ graph_thousands += pynini.closure(NEMO_SPACE + graph_hundred_component, 0, 1)
+
+ ordinal_graph = graph_thousands | graph_hundred_component
+ ordinal_graph = cardinal_graph @ ordinal_graph
+
+ if not deterministic:
+ # The 10's and 20's series can also be two words
+            split_words = pynini.cross("decimo", "décimo ") | pynini.cross("vigesimo", "vigésimo ")
+ split_words = pynini.cdrewrite(split_words, "", NEMO_CHAR, NEMO_SIGMA)
+ ordinal_graph |= ordinal_graph @ split_words
+
+        # If "octavo" is preceded by an "o" within the string, that "o" needs deletion
+ ordinal_graph @= pynini.cdrewrite(pynutil.delete("o"), "", "octavo", NEMO_SIGMA)
+
+ self.graph = ordinal_graph.optimize()
+
+ masc = pynini.accep("gender_masc")
+ fem = pynini.accep("gender_fem")
+ apocope = pynini.accep("apocope")
+
+        delete_period = pynini.closure(pynutil.delete("."), 0, 1) # Sometimes the period is omitted
+
+        accept_masc = delete_period + pynini.cross("º", masc)
+        accep_fem = delete_period + pynini.cross("ª", fem)
+        accep_apocope = delete_period + pynini.cross("ᵉʳ", apocope)
+
+ # Managing Romanization
+ graph_roman = pynutil.insert("integer: \"") + roman_to_int(ordinal_graph) + pynutil.insert("\"")
+ if not deterministic:
+ # Introduce plural
+ plural = pynini.closure(pynutil.insert("/plural"), 0, 1)
+ accept_masc += plural
+ accep_fem += plural
+
+ # Romanizations have no morphology marker, so in non-deterministic case we provide option for all
+ insert_morphology = pynutil.insert(pynini.union(masc, fem)) + plural
+ insert_morphology |= pynutil.insert(apocope)
+ insert_morphology = (
+ pynutil.insert(" morphosyntactic_features: \"") + insert_morphology + pynutil.insert("\"")
+ )
+
+ graph_roman += insert_morphology
+
+ else:
+ # We assume masculine gender as default
+ graph_roman += pynutil.insert(" morphosyntactic_features: \"gender_masc\"")
+
+ # Rest of graph
+ convert_abbreviation = accept_masc | accep_fem | accep_apocope
+
+ graph = (
+ pynutil.insert("integer: \"")
+ + ordinal_graph
+ + pynutil.insert("\"")
+ + pynutil.insert(" morphosyntactic_features: \"")
+ + convert_abbreviation
+ + pynutil.insert("\"")
+ )
+ graph = pynini.union(graph, graph_roman)
+
+ final_graph = self.add_tokens(graph)
+ self.fst = final_graph.optimize()
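The ordinal grammar above composes `roman_to_int` (imported from `es.graph_utils`) with the ordinal graph to handle Romanized ordinals. The actual helper builds an FST; purely as a point of reference, a plain-Python analogue of the Roman-numeral mapping might look like this (illustrative only, not the helper's implementation):

```python
ROMAN_VALUES = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}


def roman_numeral_to_int(roman: str) -> int:
    """Plain-Python analogue of a Roman-numeral-to-integer mapping."""
    total = 0
    # Pad with a space so the last symbol is always compared against a non-numeral.
    for symbol, following in zip(roman, roman[1:] + " "):
        value = ROMAN_VALUES[symbol]
        # Subtractive notation: a smaller value before a larger one (IV, IX, XC, ...)
        if following in ROMAN_VALUES and ROMAN_VALUES[following] > value:
            total -= value
        else:
            total += value
    return total
```

So "XXI" maps to 21, which the ordinal graph then renders as "vigésimo primero", as in the class docstring.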
diff --git a/nemo_text_processing/text_normalization/es/taggers/telephone.py b/nemo_text_processing/text_normalization/es/taggers/telephone.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/taggers/telephone.py
@@ -0,0 +1,156 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import NEMO_SIGMA, GraphFst, insert_space
+from nemo_text_processing.text_normalization.es.graph_utils import ones
+from nemo_text_processing.text_normalization.es.utils import get_abs_path
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ graph_digit = pynini.string_file(get_abs_path("data/numbers/digit.tsv"))
+ graph_ties = pynini.string_file(get_abs_path("data/numbers/ties.tsv"))
+ graph_teen = pynini.string_file(get_abs_path("data/numbers/teen.tsv"))
+ graph_twenties = pynini.string_file(get_abs_path("data/numbers/twenties.tsv"))
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ graph_digit = None
+ graph_ties = None
+ graph_teen = None
+ graph_twenties = None
+
+ PYNINI_AVAILABLE = False
+
+
+class TelephoneFst(GraphFst):
+ """
+ Finite state transducer for classifying telephone numbers, e.g.
+ 123-123-5678 -> { number_part: "uno dos tres uno dos tres cinco seis siete ocho" }.
+ In Spanish, digits are generally read individually, or as 2-digit numbers,
+ eg. "123" = "uno dos tres",
+ "1234" = "doce treinta y cuatro".
+    This will verbalize sequences of 10 (3+3+4, e.g. 123-456-7890),
+    9 (3+3+3, e.g. 123-456-789), and 8 (4+4, e.g. 1234-5678) digits.
+
+ (we ignore more complicated cases such as "doscientos y dos" or "tres nueves").
+
+ Args:
+ deterministic: if True will provide a single transduction option,
+ for False multiple transduction are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, deterministic: bool = True):
+ super().__init__(name="telephone", kind="classify")
+
+ # create `single_digits` and `double_digits` graphs as these will be
+ # the building blocks of possible telephone numbers
+ single_digits = pynini.invert(graph_digit).optimize() | pynini.cross("0", "cero")
+
+ double_digits = pynini.union(
+ graph_twenties,
+ graph_teen,
+ (graph_ties + pynutil.delete("0")),
+ (graph_ties + insert_space + pynutil.insert("y") + insert_space + graph_digit),
+ )
+ double_digits = pynini.invert(double_digits)
+
+ # define `ten_digit_graph`, `nine_digit_graph`, `eight_digit_graph`
+ # which produces telephone numbers spoken (1) only with single digits,
+ # or (2) spoken with double digits (and sometimes single digits)
+
+ # 10-digit option (1): all single digits
+ ten_digit_graph = (
+ pynini.closure(single_digits + insert_space, 3, 3)
+ + pynutil.delete("-")
+ + pynini.closure(single_digits + insert_space, 3, 3)
+ + pynutil.delete("-")
+ + pynini.closure(single_digits + insert_space, 3, 3)
+ + single_digits
+ )
+
+ # 9-digit option (1): all single digits
+ nine_digit_graph = (
+ pynini.closure(single_digits + insert_space, 3, 3)
+ + pynutil.delete("-")
+ + pynini.closure(single_digits + insert_space, 3, 3)
+ + pynutil.delete("-")
+ + pynini.closure(single_digits + insert_space, 2, 2)
+ + single_digits
+ )
+
+ # 8-digit option (1): all single digits
+ eight_digit_graph = (
+ pynini.closure(single_digits + insert_space, 4, 4)
+ + pynutil.delete("-")
+ + pynini.closure(single_digits + insert_space, 3, 3)
+ + single_digits
+ )
+
+ if not deterministic:
+ # 10-digit option (2): (1+2) + (1+2) + (2+2) digits
+ ten_digit_graph |= (
+ single_digits
+ + insert_space
+ + double_digits
+ + insert_space
+ + pynutil.delete("-")
+ + single_digits
+ + insert_space
+ + double_digits
+ + insert_space
+ + pynutil.delete("-")
+ + double_digits
+ + insert_space
+ + double_digits
+ )
+
+ # 9-digit option (2): (1+2) + (1+2) + (1+2) digits
+ nine_digit_graph |= (
+ single_digits
+ + insert_space
+ + double_digits
+ + insert_space
+ + pynutil.delete("-")
+ + single_digits
+ + insert_space
+ + double_digits
+ + insert_space
+ + pynutil.delete("-")
+ + single_digits
+ + insert_space
+ + double_digits
+ )
+
+ # 8-digit option (2): (2+2) + (2+2) digits
+ eight_digit_graph |= (
+ double_digits
+ + insert_space
+ + double_digits
+ + insert_space
+ + pynutil.delete("-")
+ + double_digits
+ + insert_space
+ + double_digits
+ )
+
+ number_part = pynini.union(ten_digit_graph, nine_digit_graph, eight_digit_graph)
+ number_part @= pynini.cdrewrite(pynini.cross(ones, "uno"), "", "", NEMO_SIGMA)
+
+ number_part = pynutil.insert("number_part: \"") + number_part + pynutil.insert("\"")
+
+ graph = number_part
+ final_graph = self.add_tokens(graph)
+ self.fst = final_graph.optimize()
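The single-digit reading in the docstring (123-123-5678 -> "uno dos tres uno dos tres cinco seis siete ocho") can be sketched in plain Python. The helper below is illustrative only, not part of the grammar, and covers just the all-single-digits option:

```python
DIGIT_NAMES = {
    "0": "cero", "1": "uno", "2": "dos", "3": "tres", "4": "cuatro",
    "5": "cinco", "6": "seis", "7": "siete", "8": "ocho", "9": "nueve",
}


def verbalize_phone_digits(number: str) -> str:
    """Read a telephone number digit by digit, dropping separators."""
    return " ".join(DIGIT_NAMES[ch] for ch in number if ch.isdigit())
```

`verbalize_phone_digits("123-123-5678")` returns `"uno dos tres uno dos tres cinco seis siete ocho"`, the docstring example above.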
diff --git a/nemo_text_processing/text_normalization/es/taggers/time.py b/nemo_text_processing/text_normalization/es/taggers/time.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/taggers/time.py
@@ -0,0 +1,218 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import (
+ NEMO_DIGIT,
+ NEMO_SIGMA,
+ GraphFst,
+ delete_space,
+ insert_space,
+)
+from nemo_text_processing.text_normalization.es.utils import get_abs_path
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ time_zone_graph = pynini.string_file(get_abs_path("data/time/time_zone.tsv"))
+ suffix = pynini.string_file(get_abs_path("data/time/time_suffix.tsv"))
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ time_zone_graph = None
+ suffix = None
+
+ PYNINI_AVAILABLE = False
+
+
+class TimeFst(GraphFst):
+ """
+ Finite state transducer for classifying time, e.g.
+ "02:15 est" -> time { hours: "dos" minutes: "quince" zone: "e s t"}
+ "2 h" -> time { hours: "dos" }
+ "9 h" -> time { hours: "nueve" }
+ "02:15:10 h" -> time { hours: "dos" minutes: "quince" seconds: "diez"}
+
+ Args:
+ cardinal: CardinalFst
+ deterministic: if True will provide a single transduction option,
+ for False multiple transduction are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, cardinal: GraphFst, deterministic: bool = True):
+ super().__init__(name="time", kind="classify", deterministic=deterministic)
+
+ delete_time_delimiter = pynutil.delete(pynini.union(".", ":"))
+
+ one = pynini.string_map([("un", "una"), ("รบn", "una")])
+ change_one = pynini.cdrewrite(one, "", "", NEMO_SIGMA)
+ cardinal_graph = cardinal.graph @ change_one
+
+ day_suffix = pynutil.insert("suffix: \"") + suffix + pynutil.insert("\"")
+ day_suffix = delete_space + insert_space + day_suffix
+
+ delete_hora_suffix = delete_space + insert_space + pynutil.delete("h")
+ delete_minute_suffix = delete_space + insert_space + pynutil.delete("min")
+ delete_second_suffix = delete_space + insert_space + pynutil.delete("s")
+
+ labels_hour_24 = [
+ str(x) for x in range(0, 25)
+        ] # Both systems appear in text; the twelve-hour clock requires am/pm to resolve ambiguity
+ labels_hour_12 = [str(x) for x in range(1, 13)]
+ labels_minute_single = [str(x) for x in range(1, 10)]
+ labels_minute_double = [str(x) for x in range(10, 60)]
+
+ delete_leading_zero_to_double_digit = (
+ pynini.closure(pynutil.delete("0") | (NEMO_DIGIT - "0"), 0, 1) + NEMO_DIGIT
+ )
+
+ graph_24 = (
+ pynini.closure(NEMO_DIGIT, 1, 2) @ delete_leading_zero_to_double_digit @ pynini.union(*labels_hour_24)
+ )
+ graph_12 = (
+ pynini.closure(NEMO_DIGIT, 1, 2) @ delete_leading_zero_to_double_digit @ pynini.union(*labels_hour_12)
+ )
+
+ graph_hour_24 = graph_24 @ cardinal_graph
+ graph_hour_12 = graph_12 @ cardinal_graph
+
+ graph_minute_single = delete_leading_zero_to_double_digit @ pynini.union(*labels_minute_single)
+ graph_minute_double = pynini.union(*labels_minute_double)
+
+ graph_minute = pynini.union(graph_minute_single, graph_minute_double) @ cardinal_graph
+
+ final_graph_hour_only_24 = (
+ pynutil.insert("hours: \"") + graph_hour_24 + pynutil.insert("\"") + delete_hora_suffix
+ )
+ final_graph_hour_only_12 = pynutil.insert("hours: \"") + graph_hour_12 + pynutil.insert("\"") + day_suffix
+
+ final_graph_hour_24 = pynutil.insert("hours: \"") + graph_hour_24 + pynutil.insert("\"")
+ final_graph_hour_12 = pynutil.insert("hours: \"") + graph_hour_12 + pynutil.insert("\"")
+
+ final_graph_minute = pynutil.insert("minutes: \"") + graph_minute + pynutil.insert("\"")
+ final_graph_second = pynutil.insert("seconds: \"") + graph_minute + pynutil.insert("\"")
+ final_time_zone_optional = pynini.closure(
+ delete_space + insert_space + pynutil.insert("zone: \"") + time_zone_graph + pynutil.insert("\""), 0, 1,
+ )
+
+ # 02.30 h
+ graph_hm = (
+ final_graph_hour_24
+ + delete_time_delimiter
+ + (pynutil.delete("00") | (insert_space + final_graph_minute))
+ + pynini.closure(
+ delete_time_delimiter + (pynini.cross("00", " seconds: \"0\"") | (insert_space + final_graph_second)),
+ 0,
+ 1,
+ ) # For seconds 2.30.35 h
+ + pynini.closure(delete_hora_suffix, 0, 1) # 2.30 is valid if unambiguous
+ + final_time_zone_optional
+ )
+
+ # 2 h 30 min
+ graph_hm |= (
+ final_graph_hour_24
+ + delete_hora_suffix
+ + delete_space
+ + (pynutil.delete("00") | (insert_space + final_graph_minute))
+ + delete_minute_suffix
+ + pynini.closure(
+ delete_space
+ + (pynini.cross("00", " seconds: \"0\"") | (insert_space + final_graph_second))
+ + delete_second_suffix,
+ 0,
+ 1,
+ ) # For seconds
+ + final_time_zone_optional
+ )
+
+ # 2.30 a. m. (Only for 12 hour clock)
+ graph_hm |= (
+ final_graph_hour_12
+ + delete_time_delimiter
+ + (pynutil.delete("00") | (insert_space + final_graph_minute))
+ + pynini.closure(
+ delete_time_delimiter + (pynini.cross("00", " seconds: \"0\"") | (insert_space + final_graph_second)),
+ 0,
+ 1,
+ ) # For seconds 2.30.35 a. m.
+ + day_suffix
+ + final_time_zone_optional
+ )
+
+ graph_h = (
+ pynini.union(final_graph_hour_only_24, final_graph_hour_only_12) + final_time_zone_optional
+ ) # Should always have a time indicator, else we'll pass to cardinals
+
+ if not deterministic:
+ # This includes alternate vocalization (hour menos min, min para hour), here we shift the times and indicate a `style` tag
+ hour_shift_24 = pynini.invert(pynini.string_file(get_abs_path("data/time/hour_to_24.tsv")))
+ hour_shift_12 = pynini.invert(pynini.string_file(get_abs_path("data/time/hour_to_12.tsv")))
+ minute_shift = pynini.string_file(get_abs_path("data/time/minute_to.tsv"))
+
+ graph_hour_to_24 = graph_24 @ hour_shift_24 @ cardinal_graph
+ graph_hour_to_12 = graph_12 @ hour_shift_12 @ cardinal_graph
+
+ graph_minute_to = pynini.union(graph_minute_single, graph_minute_double) @ minute_shift @ cardinal_graph
+
+ final_graph_hour_to_24 = pynutil.insert("hours: \"") + graph_hour_to_24 + pynutil.insert("\"")
+ final_graph_hour_to_12 = pynutil.insert("hours: \"") + graph_hour_to_12 + pynutil.insert("\"")
+
+ final_graph_minute_to = pynutil.insert("minutes: \"") + graph_minute_to + pynutil.insert("\"")
+
+ graph_menos = pynutil.insert(" style: \"1\"")
+ graph_para = pynutil.insert(" style: \"2\"")
+
+ final_graph_style = graph_menos | graph_para
+
+ # 02.30 h (omitting seconds since a bit awkward)
+ graph_hm |= (
+ final_graph_hour_to_24
+ + delete_time_delimiter
+ + insert_space
+ + final_graph_minute_to
+ + pynini.closure(delete_hora_suffix, 0, 1) # 2.30 is valid if unambiguous
+ + final_time_zone_optional
+ + final_graph_style
+ )
+
+ # 2 h 30 min
+ graph_hm |= (
+ final_graph_hour_to_24
+ + delete_hora_suffix
+ + delete_space
+ + insert_space
+ + final_graph_minute_to
+ + delete_minute_suffix
+ + final_time_zone_optional
+ + final_graph_style
+ )
+
+ # 2.30 a. m. (Only for 12 hour clock)
+ graph_hm |= (
+ final_graph_hour_to_12
+ + delete_time_delimiter
+ + insert_space
+ + final_graph_minute_to
+ + day_suffix
+ + final_time_zone_optional
+ + final_graph_style
+ )
+
+ final_graph = graph_hm | graph_h
+ if deterministic:
+ final_graph = final_graph + pynutil.insert(" preserve_order: true")
+ final_graph = final_graph.optimize()
+ final_graph = self.add_tokens(final_graph)
+ self.fst = final_graph.optimize()
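The non-deterministic branch above shifts times for the "hour menos minute" / "minute para hour" readings via `hour_to_24.tsv`, `hour_to_12.tsv` and `minute_to.tsv`. Assuming those tables encode the usual shift (hour advanced by one, minutes counted down from 60 — an assumption, since the table contents are not shown here), a plain-Python sketch for the twelve-hour case would be:

```python
def to_menos_reading(hour: int, minute: int):
    """Shift a 12-hour time for the "hour menos minute" reading.

    Assumed table behavior: 2:40 -> (3, 20), i.e. "tres menos veinte".
    Requires 1 <= hour <= 12 and 0 < minute < 60.
    """
    return (hour % 12) + 1, 60 - minute
```

For example, 12:45 shifts to (1, 15), i.e. "una menos cuarto" territory.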
diff --git a/nemo_text_processing/text_normalization/es/taggers/tokenize_and_classify.py b/nemo_text_processing/text_normalization/es/taggers/tokenize_and_classify.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/taggers/tokenize_and_classify.py
@@ -0,0 +1,157 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+
+from nemo_text_processing.text_normalization.en.graph_utils import (
+ NEMO_WHITE_SPACE,
+ GraphFst,
+ delete_extra_space,
+ delete_space,
+ generator_main,
+)
+from nemo_text_processing.text_normalization.en.taggers.punctuation import PunctuationFst
+from nemo_text_processing.text_normalization.es.taggers.cardinal import CardinalFst
+from nemo_text_processing.text_normalization.es.taggers.date import DateFst
+from nemo_text_processing.text_normalization.es.taggers.decimals import DecimalFst
+from nemo_text_processing.text_normalization.es.taggers.electronic import ElectronicFst
+from nemo_text_processing.text_normalization.es.taggers.fraction import FractionFst
+from nemo_text_processing.text_normalization.es.taggers.measure import MeasureFst
+from nemo_text_processing.text_normalization.es.taggers.money import MoneyFst
+from nemo_text_processing.text_normalization.es.taggers.ordinal import OrdinalFst
+from nemo_text_processing.text_normalization.es.taggers.telephone import TelephoneFst
+from nemo_text_processing.text_normalization.es.taggers.time import TimeFst
+from nemo_text_processing.text_normalization.es.taggers.whitelist import WhiteListFst
+from nemo_text_processing.text_normalization.es.taggers.word import WordFst
+
+from nemo.utils import logging
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ PYNINI_AVAILABLE = False
+
+
+class ClassifyFst(GraphFst):
+ """
+    Final class that composes all other classification grammars. This class can process an entire sentence that is lower-cased.
+    For deployment, this grammar will be compiled and exported to an OpenFst Finite State Archive (FAR) file.
+    More details on deployment can be found at NeMo/tools/text_processing_deployment.
+
+ Args:
+ input_case: accepting either "lower_cased" or "cased" input.
+ deterministic: if True will provide a single transduction option,
+ for False multiple options (used for audio-based normalization)
+ cache_dir: path to a dir with .far grammar file. Set to None to avoid using cache.
+ overwrite_cache: set to True to overwrite .far files
+ whitelist: path to a file with whitelist replacements
+ """
+
+ def __init__(
+ self,
+ input_case: str,
+ deterministic: bool = False,
+ cache_dir: str = None,
+ overwrite_cache: bool = False,
+ whitelist: str = None,
+ ):
+ super().__init__(name="tokenize_and_classify", kind="classify", deterministic=deterministic)
+ far_file = None
+ if cache_dir is not None and cache_dir != "None":
+ os.makedirs(cache_dir, exist_ok=True)
+ whitelist_file = os.path.basename(whitelist) if whitelist else ""
+ far_file = os.path.join(
+ cache_dir, f"_{input_case}_es_tn_{deterministic}_deterministic{whitelist_file}.far"
+ )
+ if not overwrite_cache and far_file and os.path.exists(far_file):
+ self.fst = pynini.Far(far_file, mode="r")["tokenize_and_classify"]
+ logging.info(f"ClassifyFst.fst was restored from {far_file}.")
+ else:
+ logging.info(f"Creating ClassifyFst grammars. This might take some time...")
+
+ self.cardinal = CardinalFst(deterministic=deterministic)
+ cardinal_graph = self.cardinal.fst
+
+ self.ordinal = OrdinalFst(cardinal=self.cardinal, deterministic=deterministic)
+ ordinal_graph = self.ordinal.fst
+
+ self.decimal = DecimalFst(cardinal=self.cardinal, deterministic=deterministic)
+ decimal_graph = self.decimal.fst
+
+ self.fraction = FractionFst(cardinal=self.cardinal, ordinal=self.ordinal, deterministic=deterministic)
+ fraction_graph = self.fraction.fst
+ self.measure = MeasureFst(
+ cardinal=self.cardinal, decimal=self.decimal, fraction=self.fraction, deterministic=deterministic
+ )
+ measure_graph = self.measure.fst
+ self.date = DateFst(cardinal=self.cardinal, deterministic=deterministic)
+ date_graph = self.date.fst
+ word_graph = WordFst(deterministic=deterministic).fst
+ self.time = TimeFst(self.cardinal, deterministic=deterministic)
+ time_graph = self.time.fst
+ self.telephone = TelephoneFst(deterministic=deterministic)
+ telephone_graph = self.telephone.fst
+ self.electronic = ElectronicFst(deterministic=deterministic)
+ electronic_graph = self.electronic.fst
+ self.money = MoneyFst(cardinal=self.cardinal, decimal=self.decimal, deterministic=deterministic)
+ money_graph = self.money.fst
+ self.whitelist = WhiteListFst(input_case=input_case, deterministic=deterministic, input_file=whitelist)
+ whitelist_graph = self.whitelist.fst
+ punct_graph = PunctuationFst(deterministic=deterministic).fst
+
+ classify = (
+ pynutil.add_weight(whitelist_graph, 1.01)
+ | pynutil.add_weight(time_graph, 1.09)
+ | pynutil.add_weight(measure_graph, 1.08)
+ | pynutil.add_weight(cardinal_graph, 1.1)
+ | pynutil.add_weight(fraction_graph, 1.09)
+ | pynutil.add_weight(date_graph, 1.1)
+ | pynutil.add_weight(ordinal_graph, 1.1)
+ | pynutil.add_weight(decimal_graph, 1.1)
+ | pynutil.add_weight(money_graph, 1.1)
+ | pynutil.add_weight(telephone_graph, 1.1)
+ | pynutil.add_weight(electronic_graph, 1.1)
+ | pynutil.add_weight(word_graph, 200)
+ )
+ punct = pynutil.insert("tokens { ") + pynutil.add_weight(punct_graph, weight=2.1) + pynutil.insert(" }")
+ punct = pynini.closure(
+ pynini.compose(pynini.closure(NEMO_WHITE_SPACE, 1), delete_extra_space)
+ | (pynutil.insert(" ") + punct),
+ 1,
+ )
+ token = pynutil.insert("tokens { ") + classify + pynutil.insert(" }")
+ token_plus_punct = (
+ pynini.closure(punct + pynutil.insert(" ")) + token + pynini.closure(pynutil.insert(" ") + punct)
+ )
+
+ graph = token_plus_punct + pynini.closure(
+ (
+ pynini.compose(pynini.closure(NEMO_WHITE_SPACE, 1), delete_extra_space)
+ | (pynutil.insert(" ") + punct + pynutil.insert(" "))
+ )
+ + token_plus_punct
+ )
+
+ graph = delete_space + graph + delete_space
+ graph |= punct
+
+ self.fst = graph.optimize()
+
+ if far_file:
+ generator_main(far_file, {"tokenize_and_classify": self.fst})
+ logging.info(f"ClassifyFst grammars are saved to {far_file}.")
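In the weighted union above, lower weights win during shortest-path decoding, so the whitelist (1.01) outranks every other grammar and `word` (200) is the catch-all fallback. A toy plain-Python analogue of that priority scheme (the predicates, weights structure, and names are illustrative, not the FST mechanics):

```python
def classify(token: str, grammars: dict) -> str:
    """Pick the matching grammar with the smallest weight, mirroring how
    shortest-path decoding prefers lower-weight arcs in the union."""
    matches = [(weight, name) for name, (matcher, weight) in grammars.items() if matcher(token)]
    return min(matches)[1]


# Toy stand-ins for the real grammars; only the relative weights matter here.
grammars = {
    "whitelist": (lambda t: t == "sr.", 1.01),
    "time": (lambda t: ":" in t, 1.09),
    "word": (lambda t: True, 200.0),
}
```

With these stand-ins, `classify("sr.", grammars)` picks `"whitelist"`, while any otherwise-unmatched token falls through to `"word"`.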
diff --git a/nemo_text_processing/text_normalization/es/taggers/whitelist.py b/nemo_text_processing/text_normalization/es/taggers/whitelist.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/taggers/whitelist.py
@@ -0,0 +1,69 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import GraphFst, convert_space
+from nemo_text_processing.text_normalization.es.utils import get_abs_path, load_labels
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ PYNINI_AVAILABLE = False
+
+
+class WhiteListFst(GraphFst):
+ """
+ Finite state transducer for classifying whitelist, e.g.
+ "sr." -> tokens { name: "seรฑor" }
+ This class has highest priority among all classifier grammars. Whitelisted tokens are defined and loaded from "data/whitelist.tsv".
+
+ Args:
+ input_case: accepting either "lower_cased" or "cased" input.
+ deterministic: if True will provide a single transduction option,
+ for False multiple options (used for audio-based normalization)
+ input_file: path to a file with whitelist replacements
+ """
+
+ def __init__(self, input_case: str, deterministic: bool = True, input_file: str = None):
+ super().__init__(name="whitelist", kind="classify", deterministic=deterministic)
+
+ def _get_whitelist_graph(input_case, file):
+ whitelist = load_labels(file)
+ if input_case == "lower_cased":
+ whitelist = [[x[0].lower()] + x[1:] for x in whitelist]
+ graph = pynini.string_map(whitelist)
+ return graph
+
+ graph = _get_whitelist_graph(input_case, get_abs_path("data/whitelist.tsv"))
+ if not deterministic and input_case != "lower_cased":
+ graph |= pynutil.add_weight(
+ _get_whitelist_graph("lower_cased", get_abs_path("data/whitelist.tsv")), weight=0.0001
+ )
+
+ if input_file:
+ whitelist_provided = _get_whitelist_graph(input_case, input_file)
+ if not deterministic:
+ graph |= whitelist_provided
+ else:
+ graph = whitelist_provided
+
+ if not deterministic:
+ units_graph = _get_whitelist_graph(input_case, file=get_abs_path("data/measures/measurements.tsv"))
+ graph |= units_graph
+
+ self.graph = graph
+ self.final_graph = convert_space(self.graph).optimize()
+ self.fst = (pynutil.insert("name: \"") + self.final_graph + pynutil.insert("\"")).optimize()
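The `_get_whitelist_graph` helper above lower-cases only the written side of each label when `input_case == "lower_cased"`. A dict-based sketch of the same behavior (illustrative; the real helper returns a pynini string map, not a dict):

```python
def build_whitelist(labels, input_case: str) -> dict:
    """Dict-based sketch of _get_whitelist_graph's case handling."""
    if input_case == "lower_cased":
        # Lower-case only the written side; the spoken side is untouched.
        labels = [[written.lower()] + rest for written, *rest in labels]
    return {written: spoken for written, spoken in labels}
```

So `[["Sr.", "señor"]]` with `"lower_cased"` input becomes `{"sr.": "señor"}`.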
diff --git a/nemo_text_processing/text_normalization/es/taggers/word.py b/nemo_text_processing/text_normalization/es/taggers/word.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/taggers/word.py
@@ -0,0 +1,39 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import NEMO_NOT_SPACE, GraphFst
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ PYNINI_AVAILABLE = False
+
+
+class WordFst(GraphFst):
+ """
+ Finite state transducer for classifying word.
+ e.g. dormir -> tokens { name: "dormir" }
+
+ Args:
+ deterministic: if True will provide a single transduction option,
+ for False multiple transduction are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, deterministic: bool = True):
+ super().__init__(name="word", kind="classify")
+ word = pynutil.insert("name: \"") + pynini.closure(NEMO_NOT_SPACE, 1) + pynutil.insert("\"")
+ self.fst = word.optimize()
diff --git a/nemo_text_processing/text_normalization/es/utils.py b/nemo_text_processing/text_normalization/es/utils.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/utils.py
@@ -0,0 +1,42 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import csv
+import os
+
+
+def get_abs_path(rel_path):
+ """
+ Get absolute path
+
+ Args:
+ rel_path: relative path to this file
+
+ Returns absolute path
+ """
+ return os.path.dirname(os.path.abspath(__file__)) + '/' + rel_path
+
+
+def load_labels(abs_path):
+ """
+    loads a tab-separated file as a list of label rows
+
+    Args:
+        abs_path: absolute path to the file
+
+    Returns a list of labels, one list of columns per row
+ """
+    with open(abs_path, encoding="utf-8") as label_tsv:
+        labels = list(csv.reader(label_tsv, delimiter="\t"))
+    return labels
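`load_labels` above is a thin wrapper over `csv.reader` with a tab delimiter: each TSV line becomes a list of column strings. A self-contained sketch using an in-memory file, with hypothetical rows:

```python
import csv
import io

# Minimal sketch of how load_labels parses a TSV: each row becomes a list
# of column values. The sample rows are hypothetical, not the contents of
# any real data file in the package.
sample_tsv = "Dr.\tdoctor\nSr.\tsenor\n"
rows = list(csv.reader(io.StringIO(sample_tsv), delimiter="\t"))
print(rows)  # [['Dr.', 'doctor'], ['Sr.', 'senor']]
```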
diff --git a/nemo_text_processing/text_normalization/es/verbalizers/__init__.py b/nemo_text_processing/text_normalization/es/verbalizers/__init__.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/verbalizers/__init__.py
@@ -0,0 +1,13 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/nemo_text_processing/text_normalization/es/verbalizers/cardinal.py b/nemo_text_processing/text_normalization/es/verbalizers/cardinal.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/verbalizers/cardinal.py
@@ -0,0 +1,57 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import NEMO_NOT_QUOTE, GraphFst
+from nemo_text_processing.text_normalization.es.graph_utils import shift_cardinal_gender, strip_cardinal_apocope
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ PYNINI_AVAILABLE = False
+
+
+class CardinalFst(GraphFst):
+ """
+ Finite state transducer for verbalizing cardinals
+ e.g. cardinal { integer: "dos" } -> "dos"
+
+ Args:
+ deterministic: if True will provide a single transduction option,
+ for False multiple transduction are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, deterministic: bool = True):
+ super().__init__(name="cardinal", kind="verbalize", deterministic=deterministic)
+ optional_sign = pynini.closure(pynini.cross("negative: \"true\" ", "menos "), 0, 1)
+ self.optional_sign = optional_sign
+
+ integer = pynini.closure(NEMO_NOT_QUOTE, 1)
+ self.integer = pynutil.delete(" \"") + integer + pynutil.delete("\"")
+
+ integer = pynutil.delete("integer:") + self.integer
+ self.numbers = integer
+ graph = optional_sign + self.numbers
+
+ if not deterministic:
+ # For alternate renderings
+ no_adjust = graph
+ fem_adjust = shift_cardinal_gender(graph)
+ apocope_adjust = strip_cardinal_apocope(graph)
+ graph = no_adjust | fem_adjust | apocope_adjust
+
+ delete_tokens = self.delete_tokens(graph)
+ self.fst = delete_tokens.optimize()
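The sign handling in the cardinal verbalizer above maps the token field `negative: "true"` to a leading "menos ". A pure-Python approximation (no pynini) of that behavior, for illustration only:

```python
# Sketch of the verbalizer's optional_sign rule: negative: "true" is
# rendered as a "menos " prefix before the spelled-out integer.
def verbalize_cardinal(integer: str, negative: bool = False) -> str:
    return ("menos " if negative else "") + integer

print(verbalize_cardinal("dos"))                 # dos
print(verbalize_cardinal("dos", negative=True))  # menos dos
```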
diff --git a/nemo_text_processing/text_normalization/es/verbalizers/date.py b/nemo_text_processing/text_normalization/es/verbalizers/date.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/verbalizers/date.py
@@ -0,0 +1,86 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import (
+ NEMO_NOT_QUOTE,
+ NEMO_SIGMA,
+ NEMO_SPACE,
+ GraphFst,
+ delete_preserve_order,
+)
+from nemo_text_processing.text_normalization.es.graph_utils import strip_cardinal_apocope
+from nemo_text_processing.text_normalization.es.taggers.date import articles
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ PYNINI_AVAILABLE = False
+
+
+class DateFst(GraphFst):
+ """
+ Finite state transducer for verbalizing date, e.g.
+ date { day: "treinta y uno" month: "marzo" year: "dos mil" } -> "treinta y uno de marzo de dos mil"
+ date { day: "uno" month: "mayo" year: "del mil novecientos noventa" } -> "primero de mayo del mil novecientos noventa"
+
+ Args:
+ deterministic: if True will provide a single transduction option,
+ for False multiple transduction are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, deterministic: bool = True):
+ super().__init__(name="date", kind="verbalize", deterministic=deterministic)
+
+ day_cardinal = pynutil.delete("day: \"") + pynini.closure(NEMO_NOT_QUOTE, 1) + pynutil.delete("\"")
+ day = strip_cardinal_apocope(day_cardinal)
+
+ primero = pynini.cdrewrite(pynini.cross("uno", "primero"), "[BOS]", "[EOS]", NEMO_SIGMA)
+ day = (
+ (day @ primero) if deterministic else pynini.union(day, day @ primero)
+ ) # Primero for first day is traditional, but will vary depending on region
+
+ month = pynutil.delete("month: \"") + pynini.closure(NEMO_NOT_QUOTE, 1) + pynutil.delete("\"")
+
+ year = (
+ pynutil.delete("year: \"")
+ + articles
+ + NEMO_SPACE
+ + pynini.closure(NEMO_NOT_QUOTE, 1)
+ + pynutil.delete("\"")
+ )
+
+ # Insert preposition if wasn't originally with the year. This would mean a space was present
+ year = pynutil.add_weight(year, -0.001)
+ year |= (
+ pynutil.delete("year: \"")
+ + pynutil.insert("de ")
+ + pynini.closure(NEMO_NOT_QUOTE, 1)
+ + pynutil.delete("\"")
+ )
+
+ # day month year
+ graph_dmy = day + pynini.cross(NEMO_SPACE, " de ") + month + pynini.closure(pynini.accep(" ") + year, 0, 1)
+
+ graph_mdy = month + NEMO_SPACE + day + pynini.closure(NEMO_SPACE + year, 0, 1)
+ if deterministic:
+ graph_mdy += pynutil.delete(" preserve_order: true") # Only accepts this if was explicitly passed
+
+ self.graph = graph_dmy | graph_mdy
+ final_graph = self.graph + delete_preserve_order
+
+ delete_tokens = self.delete_tokens(final_graph)
+ self.fst = delete_tokens.optimize()
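The `primero` rewrite above replaces a day verbalized exactly as "uno" with the traditional ordinal "primero"; in non-deterministic mode both renderings are kept. A pure-Python approximation of the deterministic path (the FST uses a `cdrewrite` rule, this is just an illustration):

```python
# Sketch of the day-one rewrite: when the whole day string is "uno",
# deterministic mode outputs "primero"; other days pass through.
def verbalize_day(day: str, deterministic: bool = True) -> str:
    if day == "uno" and deterministic:
        return "primero"
    return day

print(verbalize_day("uno"))            # primero
print(verbalize_day("treinta y uno"))  # treinta y uno
```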
diff --git a/nemo_text_processing/text_normalization/es/verbalizers/decimals.py b/nemo_text_processing/text_normalization/es/verbalizers/decimals.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/verbalizers/decimals.py
@@ -0,0 +1,87 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import (
+ NEMO_NOT_QUOTE,
+ GraphFst,
+ delete_preserve_order,
+ delete_space,
+ insert_space,
+)
+from nemo_text_processing.text_normalization.es import LOCALIZATION
+from nemo_text_processing.text_normalization.es.graph_utils import (
+ shift_cardinal_gender,
+ shift_number_gender,
+ strip_cardinal_apocope,
+)
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ PYNINI_AVAILABLE = False
+
+
+class DecimalFst(GraphFst):
+ """
+    Finite state transducer for verbalizing decimal, e.g.
+    decimal { negative: "true" integer_part: "dos" fractional_part: "cuatro cero" quantity: "billones" } -> menos dos coma cuatro cero billones
+    decimal { integer_part: "un" quantity: "billón" } -> un billón
+
+ Args:
+ deterministic: if True will provide a single transduction option,
+ for False multiple transduction are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, deterministic: bool = True):
+        super().__init__(name="decimal", kind="verbalize", deterministic=deterministic)
+
+ self.optional_sign = pynini.closure(pynini.cross("negative: \"true\"", "menos ") + delete_space, 0, 1)
+ self.integer = pynutil.delete("integer_part: \"") + pynini.closure(NEMO_NOT_QUOTE, 1) + pynutil.delete("\"")
+ self.fractional_default = (
+ pynutil.delete("fractional_part: \"") + pynini.closure(NEMO_NOT_QUOTE, 1) + pynutil.delete("\"")
+ )
+
+ conjunction = pynutil.insert(" punto ") if LOCALIZATION == "am" else pynutil.insert(" coma ")
+ if not deterministic:
+ conjunction |= pynutil.insert(pynini.union(" con ", " y "))
+ self.fractional_default |= strip_cardinal_apocope(self.fractional_default)
+ self.fractional = conjunction + self.fractional_default
+
+ self.quantity = (
+ delete_space
+ + insert_space
+ + pynutil.delete("quantity: \"")
+ + pynini.closure(NEMO_NOT_QUOTE, 1)
+ + pynutil.delete("\"")
+ )
+ self.optional_quantity = pynini.closure(self.quantity, 0, 1)
+
+ graph = self.optional_sign + pynini.union(
+ (self.integer + self.quantity), (self.integer + delete_space + self.fractional + self.optional_quantity)
+ )
+
+ self.numbers = graph.optimize()
+ self.numbers_no_quantity = self.integer + delete_space + self.fractional + self.optional_quantity
+
+ if not deterministic:
+ graph |= self.optional_sign + (
+ shift_cardinal_gender(self.integer + delete_space) + shift_number_gender(self.fractional)
+ )
+
+ graph += delete_preserve_order
+ delete_tokens = self.delete_tokens(graph)
+ self.fst = delete_tokens.optimize()
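The separator choice above depends on the package-level `LOCALIZATION` flag: Latin American ("am") localization reads the decimal mark as "punto", otherwise "coma". A minimal sketch of that branch, assuming `LOCALIZATION` is a plain string as imported above:

```python
# Sketch of the conjunction choice in the decimal verbalizer:
# "am" localization -> "punto", any other value -> "coma".
def decimal_separator(localization: str) -> str:
    return " punto " if localization == "am" else " coma "

print("dos" + decimal_separator("am") + "cuatro")  # dos punto cuatro
print("dos" + decimal_separator("es") + "cuatro")  # dos coma cuatro
```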
diff --git a/nemo_text_processing/text_normalization/es/verbalizers/electronic.py b/nemo_text_processing/text_normalization/es/verbalizers/electronic.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/verbalizers/electronic.py
@@ -0,0 +1,91 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import (
+ NEMO_NOT_QUOTE,
+ NEMO_SIGMA,
+ GraphFst,
+ delete_preserve_order,
+ insert_space,
+)
+from nemo_text_processing.text_normalization.es.utils import get_abs_path
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ digit_no_zero = pynini.invert(pynini.string_file(get_abs_path("data/numbers/digit.tsv")))
+ zero = pynini.invert(pynini.string_file(get_abs_path("data/numbers/zero.tsv")))
+
+ graph_symbols = pynini.string_file(get_abs_path("data/electronic/symbols.tsv"))
+ server_common = pynini.string_file(get_abs_path("data/electronic/server_name.tsv"))
+ domain_common = pynini.string_file(get_abs_path("data/electronic/domain.tsv"))
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ digit_no_zero = None
+ zero = None
+
+ graph_symbols = None
+ server_common = None
+ domain_common = None
+
+ PYNINI_AVAILABLE = False
+
+
+class ElectronicFst(GraphFst):
+ """
+ Finite state transducer for verbalizing electronic
+ e.g. electronic { username: "abc" domain: "hotmail.com" } -> "a b c arroba hotmail punto com"
+ -> "a b c arroba h o t m a i l punto c o m"
+ -> "a b c arroba hotmail punto c o m"
+        -> "a b c arroba h o t m a i l punto com"
+ Args:
+ deterministic: if True will provide a single transduction option,
+ for False multiple transduction are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, deterministic: bool = True):
+ super().__init__(name="electronic", kind="verbalize", deterministic=deterministic)
+
+ graph_digit_no_zero = (
+ digit_no_zero @ pynini.cdrewrite(pynini.cross("un", "uno"), "", "", NEMO_SIGMA).optimize()
+ )
+ graph_digit = graph_digit_no_zero | zero
+
+ def add_space_after_char():
+ return pynini.closure(NEMO_NOT_QUOTE - pynini.accep(" ") + insert_space) + (
+ NEMO_NOT_QUOTE - pynini.accep(" ")
+ )
+
+ verbalize_characters = pynini.cdrewrite(graph_symbols | graph_digit, "", "", NEMO_SIGMA)
+
+ user_name = pynutil.delete("username: \"") + add_space_after_char() + pynutil.delete("\"")
+ user_name @= verbalize_characters
+
+ convert_defaults = pynutil.add_weight(NEMO_NOT_QUOTE, weight=0.0001) | domain_common | server_common
+ domain = convert_defaults + pynini.closure(insert_space + convert_defaults)
+ domain @= verbalize_characters
+
+ domain = pynutil.delete("domain: \"") + domain + pynutil.delete("\"")
+ protocol = (
+ pynutil.delete("protocol: \"")
+ + add_space_after_char() @ pynini.cdrewrite(graph_symbols, "", "", NEMO_SIGMA)
+ + pynutil.delete("\"")
+ )
+ self.graph = (pynini.closure(protocol + pynini.accep(" "), 0, 1) + domain) | (
+ user_name + pynini.accep(" ") + pynutil.insert("arroba ") + domain
+ )
+ delete_tokens = self.delete_tokens(self.graph + delete_preserve_order)
+ self.fst = delete_tokens.optimize()
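`add_space_after_char` above reads a field out character by character, inserting a space between non-space symbols (so a username "abc" is spoken "a b c"). A pure-Python approximation of that spacing step:

```python
# Sketch of add_space_after_char: non-space characters are separated by
# single spaces so they are read out individually.
def space_out(chars: str) -> str:
    return " ".join(c for c in chars if c != " ")

print(space_out("abc"))  # a b c
```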
diff --git a/nemo_text_processing/text_normalization/es/verbalizers/fraction.py b/nemo_text_processing/text_normalization/es/verbalizers/fraction.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/verbalizers/fraction.py
@@ -0,0 +1,184 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import (
+ NEMO_CHAR,
+ NEMO_NOT_QUOTE,
+ NEMO_NOT_SPACE,
+ NEMO_SIGMA,
+ NEMO_SPACE,
+ GraphFst,
+ delete_space,
+ insert_space,
+)
+from nemo_text_processing.text_normalization.es.graph_utils import (
+ accents,
+ shift_cardinal_gender,
+ strip_cardinal_apocope,
+)
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ PYNINI_AVAILABLE = False
+
+
+class FractionFst(GraphFst):
+ """
+ Finite state transducer for verbalizing fraction
+ e.g. tokens { fraction { integer: "treinta y tres" numerator: "cuatro" denominator: "quinto" } } ->
+ treinta y tres y cuatro quintos
+
+
+ Args:
+ deterministic: if True will provide a single transduction option,
+ for False multiple transduction are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, deterministic: bool = True):
+ super().__init__(name="fraction", kind="verbalize", deterministic=deterministic)
+
+ # Derivational strings append 'avo' as a suffix. Adding space for processing aid
+ fraction_stem = pynutil.insert(" avo")
+ plural = pynutil.insert("s")
+
+ integer = (
+ pynutil.delete("integer_part: \"")
+ + strip_cardinal_apocope(pynini.closure(NEMO_NOT_QUOTE))
+ + pynutil.delete("\"")
+ )
+
+ numerator_one = pynutil.delete("numerator: \"") + pynini.accep("un") + pynutil.delete("\" ")
+ numerator = (
+ pynutil.delete("numerator: \"")
+ + pynini.difference(pynini.closure(NEMO_NOT_QUOTE), "un")
+ + pynutil.delete("\" ")
+ )
+
+ denominator_add_stem = pynutil.delete("denominator: \"") + (
+ pynini.closure(NEMO_NOT_QUOTE)
+ + fraction_stem
+ + pynutil.delete("\" morphosyntactic_features: \"add_root\"")
+ )
+ denominator_ordinal = pynutil.delete("denominator: \"") + (
+ pynini.closure(NEMO_NOT_QUOTE) + pynutil.delete("\" morphosyntactic_features: \"ordinal\"")
+ )
+ denominator_cardinal = pynutil.delete("denominator: \"") + (
+ pynini.closure(NEMO_NOT_QUOTE) + pynutil.delete("\"")
+ )
+
+ denominator_singular = pynini.union(denominator_add_stem, denominator_ordinal)
+ denominator_plural = denominator_singular + plural
+
+ if not deterministic:
+ # Occasional exceptions
+ denominator_singular |= denominator_add_stem @ pynini.string_map(
+                [("once avo", "undécimo"), ("doce avo", "duodécimo")]
+ )
+
+ # Merging operations
+ merge = pynini.cdrewrite(
+ pynini.cross(" y ", "i"), "", "", NEMO_SIGMA
+ ) # The denominator must be a single word, with the conjunction "y" replaced by i
+ merge @= pynini.cdrewrite(delete_space, "", pynini.difference(NEMO_CHAR, "parte"), NEMO_SIGMA)
+
+ # The merger can produce duplicate vowels. This is not allowed in orthography
+ delete_duplicates = pynini.string_map([("aa", "a"), ("oo", "o")]) # Removes vowels
+ delete_duplicates = pynini.cdrewrite(delete_duplicates, "", "", NEMO_SIGMA)
+
+ remove_accents = pynini.cdrewrite(
+ accents,
+ pynini.union(NEMO_SPACE, pynini.accep("[BOS]")) + pynini.closure(NEMO_NOT_SPACE),
+            pynini.closure(NEMO_NOT_SPACE) + pynini.union("avo", "ava", "ésimo", "ésima"),
+ NEMO_SIGMA,
+ )
+ merge_into_single_word = merge @ remove_accents @ delete_duplicates
+
+ fraction_default = numerator + delete_space + insert_space + (denominator_plural @ merge_into_single_word)
+ fraction_with_one = (
+ numerator_one + delete_space + insert_space + (denominator_singular @ merge_into_single_word)
+ )
+
+ fraction_with_cardinal = strip_cardinal_apocope(numerator | numerator_one)
+ fraction_with_cardinal += (
+ delete_space + pynutil.insert(" sobre ") + strip_cardinal_apocope(denominator_cardinal)
+ )
+
+ conjunction = pynutil.insert(" y ")
+
+ if not deterministic:
+ # There is an alternative rendering where ordinals act as adjectives for 'parte'. This requires use of the feminine
+ # Other rules will manage use of "un" at end, so just worry about endings
+ exceptions = pynini.string_map([("tercia", "tercera")])
+ apply_exceptions = pynini.cdrewrite(exceptions, "", "", NEMO_SIGMA)
+ vowel_change = pynini.cdrewrite(pynini.cross("o", "a"), "", pynini.accep("[EOS]"), NEMO_SIGMA)
+
+ denominator_singular_fem = shift_cardinal_gender(denominator_singular) @ vowel_change @ apply_exceptions
+ denominator_plural_fem = denominator_singular_fem + plural
+
+ numerator_one_fem = shift_cardinal_gender(numerator_one)
+ numerator_fem = shift_cardinal_gender(numerator)
+
+ fraction_with_cardinal |= (
+ (numerator_one_fem | numerator_fem)
+ + delete_space
+ + pynutil.insert(" sobre ")
+ + shift_cardinal_gender(denominator_cardinal)
+ )
+
+ # Still need to manage stems
+ merge_stem = pynini.cdrewrite(
+ delete_space, "", pynini.union("avo", "ava", "avos", "avas"), NEMO_SIGMA
+ ) # For managing alternative spacing
+ merge_stem @= remove_accents @ delete_duplicates
+
+ fraction_with_one_fem = numerator_one_fem + delete_space + insert_space
+ fraction_with_one_fem += pynini.union(
+ denominator_singular_fem @ merge_stem, denominator_singular_fem @ merge_into_single_word
+ ) # Both forms exists
+ fraction_with_one_fem @= pynini.cdrewrite(pynini.cross("una media", "media"), "", "", NEMO_SIGMA)
+ fraction_with_one_fem += pynutil.insert(" parte")
+
+ fraction_default_fem = numerator_fem + delete_space + insert_space
+ fraction_default_fem += pynini.union(
+ denominator_plural_fem @ merge_stem, denominator_plural_fem @ merge_into_single_word
+ )
+ fraction_default_fem += pynutil.insert(" partes")
+
+ fraction_default |= (
+ numerator + delete_space + insert_space + denominator_plural @ merge_stem
+ ) # Case of no merger
+ fraction_default |= fraction_default_fem
+
+ fraction_with_one |= numerator_one + delete_space + insert_space + denominator_singular @ merge_stem
+ fraction_with_one |= fraction_with_one_fem
+
+ # Integers are influenced by dominant noun, need to allow feminine forms as well
+ integer |= shift_cardinal_gender(integer)
+
+ # Remove 'un medio'
+ fraction_with_one @= pynini.cdrewrite(pynini.cross("un medio", "medio"), "", "", NEMO_SIGMA)
+
+ integer = pynini.closure(integer + delete_space + conjunction, 0, 1)
+
+ fraction = fraction_with_one | fraction_default | fraction_with_cardinal
+
+ graph = integer + fraction
+
+ self.graph = graph
+ delete_tokens = self.delete_tokens(self.graph)
+ self.fst = delete_tokens.optimize()
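The merger above fuses a multi-word denominator into one word: the conjunction " y " becomes "i", the remaining spaces are deleted, and duplicate vowels created by the join are collapsed. A pure-Python approximation of those string rewrites (it omits the accent-removal step, so it is an illustration, not a substitute for the FST rules):

```python
# Rough sketch of merge_into_single_word for denominators:
# " y " -> "i", drop spaces, then collapse "aa"/"oo" left by the join.
def merge_denominator(text: str) -> str:
    merged = text.replace(" y ", "i").replace(" ", "")
    return merged.replace("aa", "a").replace("oo", "o")

print(merge_denominator("treinta y dos avos"))  # treintaidosavos
print(merge_denominator("cincuenta avos"))      # cincuentavos
```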
diff --git a/nemo_text_processing/text_normalization/es/verbalizers/measure.py b/nemo_text_processing/text_normalization/es/verbalizers/measure.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/verbalizers/measure.py
@@ -0,0 +1,110 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import (
+ NEMO_NOT_QUOTE,
+ NEMO_SIGMA,
+ NEMO_SPACE,
+ NEMO_WHITE_SPACE,
+ GraphFst,
+ delete_extra_space,
+ delete_preserve_order,
+)
+from nemo_text_processing.text_normalization.es.graph_utils import ones, shift_cardinal_gender
+from nemo_text_processing.text_normalization.es.utils import get_abs_path
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ unit_plural_fem = pynini.string_file(get_abs_path("data/measures/measurements_plural_fem.tsv"))
+ unit_plural_masc = pynini.string_file(get_abs_path("data/measures/measurements_plural_masc.tsv"))
+
+ unit_singular_fem = pynini.project(unit_plural_fem, "input")
+ unit_singular_masc = pynini.project(unit_plural_masc, "input")
+
+ unit_plural_fem = pynini.project(unit_plural_fem, "output")
+ unit_plural_masc = pynini.project(unit_plural_masc, "output")
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ unit_plural_fem = None
+ unit_plural_masc = None
+
+ unit_singular_fem = None
+ unit_singular_masc = None
+
+ PYNINI_AVAILABLE = False
+
+
+class MeasureFst(GraphFst):
+ """
+ Finite state transducer for verbalizing measure, e.g.
+ measure { cardinal { integer: "dos" units: "gramos" } } -> "dos gramos"
+ measure { cardinal { integer_part: "dos" quantity: "millones" units: "gramos" } } -> "dos millones de gramos"
+
+ Args:
+ decimal: DecimalFst
+ cardinal: CardinalFst
+ fraction: FractionFst
+ deterministic: if True will provide a single transduction option,
+ for False multiple transduction are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, decimal: GraphFst, cardinal: GraphFst, fraction: GraphFst, deterministic: bool):
+ super().__init__(name="measure", kind="verbalize", deterministic=deterministic)
+
+ graph_decimal = decimal.fst
+ graph_cardinal = cardinal.fst
+ graph_fraction = fraction.fst
+
+ unit_masc = (unit_plural_masc | unit_singular_masc) + pynini.closure(
+ NEMO_WHITE_SPACE + "por" + pynini.closure(NEMO_NOT_QUOTE, 1), 0, 1
+ )
+ unit_masc |= "por" + pynini.closure(NEMO_NOT_QUOTE, 1)
+ unit_masc = pynutil.delete("units: \"") + (pynini.closure(NEMO_NOT_QUOTE) @ unit_masc) + pynutil.delete("\"")
+
+ unit_fem = (unit_plural_fem | unit_singular_fem) + pynini.closure(
+ NEMO_WHITE_SPACE + "por" + pynini.closure(NEMO_NOT_QUOTE, 1), 0, 1
+ )
+ unit_fem = pynutil.delete("units: \"") + (pynini.closure(NEMO_NOT_QUOTE) @ unit_fem) + pynutil.delete("\"")
+
+ graph_masc = (graph_cardinal | graph_decimal | graph_fraction) + NEMO_WHITE_SPACE + unit_masc
+ graph_fem = (
+ shift_cardinal_gender(graph_cardinal | graph_decimal | graph_fraction) + NEMO_WHITE_SPACE + unit_fem
+ )
+ graph = graph_masc | graph_fem
+
+ graph = (
+ pynini.cdrewrite(
+ pynutil.insert(" de"), "quantity: \"" + pynini.closure(NEMO_NOT_QUOTE, 1), "\"", NEMO_SIGMA
+ )
+ @ graph
+ ) # billones de xyz
+
+ graph @= pynini.cdrewrite(pynini.cross(ones, "uno"), "", NEMO_WHITE_SPACE + "por", NEMO_SIGMA)
+
+        # To manage alphanumeric combinations ("a-8, 5x"), we let them use a weighted default path.
+ alpha_num_unit = pynutil.delete("units: \"") + pynini.closure(NEMO_NOT_QUOTE) + pynutil.delete("\"")
+ graph_alpha_num = pynini.union(
+ (graph_cardinal | graph_decimal) + NEMO_SPACE + alpha_num_unit,
+ alpha_num_unit + delete_extra_space + (graph_cardinal | graph_decimal),
+ )
+
+ graph |= pynutil.add_weight(graph_alpha_num, 0.01)
+
+ graph += delete_preserve_order
+
+ delete_tokens = self.delete_tokens(graph)
+ self.fst = delete_tokens.optimize()
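The `cdrewrite` rule above inserts "de" after a quantity word, since Spanish requires it between a large-number quantity and its unit ("dos millones de gramos"). A trivial sketch of the surface form that rule produces:

```python
# Sketch of the quantity rule in the measure verbalizer: a quantity word
# such as "millones" takes "de" before the unit.
def join_quantity_unit(number: str, quantity: str, unit: str) -> str:
    return f"{number} {quantity} de {unit}"

print(join_quantity_unit("dos", "millones", "gramos"))  # dos millones de gramos
```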
diff --git a/nemo_text_processing/text_normalization/es/verbalizers/money.py b/nemo_text_processing/text_normalization/es/verbalizers/money.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/verbalizers/money.py
@@ -0,0 +1,195 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import (
+ NEMO_NOT_QUOTE,
+ NEMO_SIGMA,
+ NEMO_SPACE,
+ GraphFst,
+ delete_preserve_order,
+)
+from nemo_text_processing.text_normalization.es.graph_utils import (
+ shift_cardinal_gender,
+ shift_number_gender,
+ strip_cardinal_apocope,
+)
+from nemo_text_processing.text_normalization.es.utils import get_abs_path
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+    fem = pynini.string_file(get_abs_path("data/money/currency_plural_fem.tsv"))
+    masc = pynini.string_file(get_abs_path("data/money/currency_plural_masc.tsv"))
+
+ fem_singular = pynini.project(fem, "input")
+ masc_singular = pynini.project(masc, "input")
+
+ fem_plural = pynini.project(fem, "output")
+ masc_plural = pynini.project(masc, "output")
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ fem_plural = None
+ masc_plural = None
+
+ fem_singular = None
+ masc_singular = None
+
+ PYNINI_AVAILABLE = False
+
+
+class MoneyFst(GraphFst):
+ """
+ Finite state transducer for verbalizing money, e.g.
+ money { currency_maj: "euro" integer_part: "un"} -> "un euro"
+ money { currency_maj: "euro" integer_part: "un" fractional_part: "cero cero un"} -> "uno coma cero cero uno euros"
+ money { integer_part: "un" currency_maj: "libra" fractional_part: "cuarenta" preserve_order: true} -> "una libra cuarenta"
+ money { integer_part: "un" currency_maj: "libra" fractional_part: "cuarenta" currency_min: "peniques" preserve_order: true} -> "una libra con cuarenta peniques"
+ money { fractional_part: "un" currency_min: "penique" preserve_order: true} -> "un penique"
+
+ Args:
+ decimal: GraphFst
+ deterministic: if True will provide a single transduction option,
+ for False multiple transduction are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, decimal: GraphFst, deterministic: bool = True):
+ super().__init__(name="money", kind="verbalize", deterministic=deterministic)
+
+ maj_singular_masc = (
+ pynutil.delete("currency_maj: \"")
+ + (pynini.closure(NEMO_NOT_QUOTE, 1) @ masc_singular)
+ + pynutil.delete("\"")
+ )
+ maj_singular_fem = (
+ pynutil.delete("currency_maj: \"")
+ + (pynini.closure(NEMO_NOT_QUOTE, 1) @ fem_singular)
+ + pynutil.delete("\"")
+ )
+
+ maj_plural_masc = (
+ pynutil.delete("currency_maj: \"")
+ + (pynini.closure(NEMO_NOT_QUOTE, 1) @ masc_plural)
+ + pynutil.delete("\"")
+ )
+ maj_plural_fem = (
+ pynutil.delete("currency_maj: \"")
+ + (pynini.closure(NEMO_NOT_QUOTE, 1) @ fem_plural)
+ + pynutil.delete("\"")
+ )
+
+ maj_masc = maj_plural_masc | maj_singular_masc # Tagger kept quantity resolution stable
+ maj_fem = maj_plural_fem | maj_singular_fem
+
+ min_singular_masc = (
+ pynutil.delete("currency_min: \"")
+ + (pynini.closure(NEMO_NOT_QUOTE, 1) @ masc_singular)
+ + pynutil.delete("\"")
+ )
+ min_singular_fem = (
+ pynutil.delete("currency_min: \"")
+ + (pynini.closure(NEMO_NOT_QUOTE, 1) @ fem_singular)
+ + pynutil.delete("\"")
+ )
+
+ min_plural_masc = (
+ pynutil.delete("currency_min: \"")
+ + (pynini.closure(NEMO_NOT_QUOTE, 1) @ masc_plural)
+ + pynutil.delete("\"")
+ )
+ min_plural_fem = (
+ pynutil.delete("currency_min: \"")
+ + (pynini.closure(NEMO_NOT_QUOTE, 1) @ fem_plural)
+ + pynutil.delete("\"")
+ )
+
+ min_masc = min_plural_masc | min_singular_masc
+ min_fem = min_plural_fem | min_singular_fem
+
+ fractional_part = (
+ pynutil.delete("fractional_part: \"") + pynini.closure(NEMO_NOT_QUOTE, 1) + pynutil.delete("\"")
+ )
+
+ integer_part = pynutil.delete("integer_part: \"") + pynini.closure(NEMO_NOT_QUOTE, 1) + pynutil.delete("\"")
+ optional_add_and = pynini.closure(pynutil.insert(pynini.union("con ", "y ")), 0, 1)
+
+ # *** currency_maj
+ graph_integer_masc = integer_part + NEMO_SPACE + maj_masc
+ graph_integer_fem = shift_cardinal_gender(integer_part) + NEMO_SPACE + maj_fem
+ graph_integer = graph_integer_fem | graph_integer_masc
+
+ # *** currency_maj + (***) | ((con) *** current_min)
+ graph_integer_with_minor_masc = (
+ integer_part
+ + NEMO_SPACE
+ + maj_masc
+ + NEMO_SPACE
+ + pynini.union(
+ optional_add_and + strip_cardinal_apocope(fractional_part),
+ (optional_add_and + fractional_part + NEMO_SPACE + min_masc),
+ (optional_add_and + shift_cardinal_gender(fractional_part) + NEMO_SPACE + min_fem),
+ ) # Could be minor currency that is different gender
+ + delete_preserve_order
+ )
+
+ graph_integer_with_minor_fem = (
+ shift_cardinal_gender(integer_part)
+ + NEMO_SPACE
+ + maj_fem
+ + NEMO_SPACE
+ + pynini.union(
+ optional_add_and + shift_cardinal_gender(fractional_part),
+ (optional_add_and + fractional_part + NEMO_SPACE + min_masc),
+ (optional_add_and + shift_cardinal_gender(fractional_part) + NEMO_SPACE + min_fem),
+ ) # Could be minor currency that is different gender
+ + delete_preserve_order
+ )
+
+ graph_integer_with_minor = graph_integer_with_minor_fem | graph_integer_with_minor_masc
+
+ # *** coma *** currency_maj
+ graph_decimal_masc = decimal.numbers + NEMO_SPACE + maj_masc
+
+ # Need to fix some of the inner parts, so don't use decimal here (note: quantities covered by masc)
+ graph_decimal_fem = (
+ pynini.accep("integer_part: \"")
+ + shift_cardinal_gender(pynini.closure(NEMO_NOT_QUOTE, 1))
+ + pynini.accep("\"")
+ + NEMO_SPACE
+ + pynini.accep("fractional_part: \"")
+ + shift_number_gender(pynini.closure(NEMO_NOT_QUOTE, 1))
+ + pynini.accep("\"")
+ + NEMO_SIGMA
+ )
+ graph_decimal_fem @= decimal.numbers_no_quantity
+ graph_decimal_fem += NEMO_SPACE + maj_fem
+
+ graph_decimal = graph_decimal_fem | graph_decimal_masc
+ graph_decimal = (
+ pynini.cdrewrite(
+ pynutil.insert(" de"), "quantity: \"" + pynini.closure(NEMO_NOT_QUOTE, 1), "\"", NEMO_SIGMA
+ )
+ @ graph_decimal
+ ) # formally it's millones/billones de ***
+
+ # *** current_min
+ graph_minor_masc = fractional_part + NEMO_SPACE + min_masc + delete_preserve_order
+ graph_minor_fem = shift_cardinal_gender(fractional_part) + NEMO_SPACE + min_fem + delete_preserve_order
+ graph_minor = graph_minor_fem | graph_minor_masc
+
+ graph = graph_integer | graph_integer_with_minor | graph_decimal | graph_minor
+
+ delete_tokens = self.delete_tokens(graph)
+ self.fst = delete_tokens.optimize()
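As a cross-check of the serialized-token handling above, here is a minimal pure-Python sketch of how a `money { ... }` token could be unpacked into spoken order. This is our own illustration, not the pynini implementation; the field names come from the docstring example, and the `con` connector mirrors `optional_add_and`. Gender agreement (`shift_cardinal_gender`) is deliberately left out.

```python
import re

def verbalize_money(token: str) -> str:
    """Toy verbalizer: pull quoted fields out of a serialized money token
    and join them in spoken order, inserting 'con' between the major and
    minor currency parts (cf. optional_add_and in the FST above)."""
    fields = dict(re.findall(r'(\w+): "([^"]*)"', token))
    parts = []
    if "integer_part" in fields:
        parts += [fields["integer_part"], fields["currency_maj"]]
    if "fractional_part" in fields:
        if parts:  # only connect when a major part was spoken first
            parts.append("con")
        parts.append(fields["fractional_part"])
        if "currency_min" in fields:
            parts.append(fields["currency_min"])
    return " ".join(parts)
```

For example, a minor-only token such as `money { fractional_part: "un" currency_min: "penique" preserve_order: true }` reproduces the docstring's `un penique` (the unquoted `preserve_order: true` is simply ignored by the regex).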
diff --git a/nemo_text_processing/text_normalization/es/verbalizers/ordinal.py b/nemo_text_processing/text_normalization/es/verbalizers/ordinal.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/verbalizers/ordinal.py
@@ -0,0 +1,76 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import NEMO_NOT_QUOTE, NEMO_SIGMA, NEMO_SPACE, GraphFst
+from nemo_text_processing.text_normalization.es.graph_utils import shift_number_gender
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ PYNINI_AVAILABLE = False
+
+
+class OrdinalFst(GraphFst):
+ """
+ Finite state transducer for verbalizing ordinals
+ e.g. ordinal { integer: "tercer" } -> "tercero"
+ -> "tercera"
+ -> "tercer"
+
+ Args:
+ deterministic: if True will provide a single transduction option,
+ for False multiple transductions are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, deterministic: bool = True):
+ super().__init__(name="ordinal", kind="verbalize", deterministic=deterministic)
+
+ graph = pynutil.delete("integer: \"") + pynini.closure(NEMO_NOT_QUOTE, 1) + pynutil.delete("\"")
+
+ # masculine gender we leave as is
+ graph_masc = graph + pynutil.delete(" morphosyntactic_features: \"gender_masc")
+
+ # shift gender
+ graph_fem_ending = graph @ pynini.cdrewrite(
+ pynini.cross("o", "a"), "", NEMO_SPACE | pynini.accep("[EOS]"), NEMO_SIGMA
+ )
+ graph_fem = shift_number_gender(graph_fem_ending) + pynutil.delete(" morphosyntactic_features: \"gender_fem")
+
+ # Apocope just changes tercero and primero. May occur if someone wrote 11.er (uncommon)
+ graph_apocope = (
+ pynini.cross("tercero", "tercer")
+ | pynini.cross("primero", "primer")
+ | pynini.cross("undécimo", "decimoprimer")
+ ) # In case someone wrote 11.er with deterministic
+ graph_apocope = (graph @ pynini.cdrewrite(graph_apocope, "", "", NEMO_SIGMA)) + pynutil.delete(
+ " morphosyntactic_features: \"apocope"
+ )
+
+ graph = graph_apocope | graph_masc | graph_fem
+
+ if not deterministic:
+ # Plural graph
+ graph_plural = pynini.cdrewrite(
+ pynutil.insert("s"), pynini.union("o", "a"), NEMO_SPACE | pynini.accep("[EOS]"), NEMO_SIGMA
+ )
+
+ graph |= (graph @ graph_plural) + pynutil.delete("/plural")
+
+ self.graph = graph + pynutil.delete("\"")
+
+ delete_tokens = self.delete_tokens(self.graph)
+ self.fst = delete_tokens.optimize()
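The gender and apocope rewrites above are ordinary string substitutions once lifted out of the FST. A rough pure-Python equivalent, as our own sketch, covering only the word-final `o`→`a` rule and the two common apocope forms:

```python
def feminine(ordinal: str) -> str:
    """Word-final -o -> -a on every word, mirroring the cdrewrite whose
    right context is NEMO_SPACE or end of string."""
    return " ".join(w[:-1] + "a" if w.endswith("o") else w
                    for w in ordinal.split())

# Shortened masculine forms used before a noun, cf. graph_apocope above.
APOCOPE = {"tercero": "tercer", "primero": "primer"}

def apocope(ordinal: str) -> str:
    """Apply the apocopated form where one exists, else pass through."""
    return APOCOPE.get(ordinal, ordinal)
```

So `feminine("vigésimo segundo")` yields `vigésima segunda`, matching the behavior the `gender_fem` feature selects in the grammar.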
diff --git a/nemo_text_processing/text_normalization/es/verbalizers/telephone.py b/nemo_text_processing/text_normalization/es/verbalizers/telephone.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/verbalizers/telephone.py
@@ -0,0 +1,42 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import NEMO_NOT_QUOTE, GraphFst
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ PYNINI_AVAILABLE = False
+
+
+class TelephoneFst(GraphFst):
+ """
+ Finite state transducer for verbalizing telephone, e.g.
+ telephone { number_part: "uno dos tres uno dos tres cinco seis siete ocho" }
+ -> uno dos tres uno dos tres cinco seis siete ocho
+
+ Args:
+ deterministic: if True will provide a single transduction option,
+ for False multiple transductions are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, deterministic: bool = True):
+ super().__init__(name="telephone", kind="verbalize")
+
+ number_part = pynutil.delete("number_part: \"") + pynini.closure(NEMO_NOT_QUOTE, 1) + pynutil.delete("\"")
+ delete_tokens = self.delete_tokens(number_part)
+ self.fst = delete_tokens.optimize()
diff --git a/nemo_text_processing/text_normalization/es/verbalizers/time.py b/nemo_text_processing/text_normalization/es/verbalizers/time.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/verbalizers/time.py
@@ -0,0 +1,269 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import (
+ NEMO_NOT_QUOTE,
+ NEMO_SIGMA,
+ GraphFst,
+ delete_preserve_order,
+ delete_space,
+ insert_space,
+)
+from nemo_text_processing.text_normalization.es.utils import get_abs_path
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ alt_minutes = pynini.string_file(get_abs_path("data/time/alt_minutes.tsv"))
+
+ morning_times = pynini.string_file(get_abs_path("data/time/morning_times.tsv"))
+ afternoon_times = pynini.string_file(get_abs_path("data/time/afternoon_times.tsv"))
+ evening_times = pynini.string_file(get_abs_path("data/time/evening_times.tsv"))
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ alt_minutes = None
+
+ morning_times = None
+ afternoon_times = None
+ evening_times = None
+
+ PYNINI_AVAILABLE = False
+
+
+class TimeFst(GraphFst):
+ """
+ Finite state transducer for verbalizing time, e.g.
+ time { hours: "doce" minutes: "media" suffix: "a m" } -> doce y media de la noche
+ time { hours: "doce" } -> doce
+
+ Args:
+ deterministic: if True will provide a single transduction option,
+ for False multiple transductions are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, deterministic: bool = True):
+ super().__init__(name="time", kind="verbalize", deterministic=deterministic)
+
+ change_minutes = pynini.cdrewrite(alt_minutes, pynini.accep("[BOS]"), pynini.accep("[EOS]"), NEMO_SIGMA)
+
+ morning_phrases = pynini.cross("am", "de la mañana")
+ afternoon_phrases = pynini.cross("pm", "de la tarde")
+ evening_phrases = pynini.cross("pm", "de la noche")
+
+ # For the 12's
+ mid_times = pynini.accep("doce")
+ mid_phrases = (
+ pynini.string_map([("pm", "del mediodía"), ("am", "de la noche")])
+ if deterministic
+ else pynini.string_map(
+ [
+ ("pm", "de la mañana"),
+ ("pm", "del día"),
+ ("pm", "del mediodía"),
+ ("am", "de la noche"),
+ ("am", "de la medianoche"),
+ ]
+ )
+ )
+
+ hour = (
+ pynutil.delete("hours:")
+ + delete_space
+ + pynutil.delete("\"")
+ + pynini.closure(NEMO_NOT_QUOTE, 1)
+ + pynutil.delete("\"")
+ )
+ minute = (
+ pynutil.delete("minutes:")
+ + delete_space
+ + pynutil.delete("\"")
+ + pynini.closure(NEMO_NOT_QUOTE, 1)
+ + pynutil.delete("\"")
+ )
+ minute = (minute @ change_minutes) if deterministic else pynini.union(minute, minute @ change_minutes)
+
+ suffix = (
+ pynutil.delete("suffix:")
+ + delete_space
+ + pynutil.delete("\"")
+ + pynini.closure(NEMO_NOT_QUOTE, 1)
+ + pynutil.delete("\"")
+ )
+ zone = (
+ pynutil.delete("zone:")
+ + delete_space
+ + pynutil.delete("\"")
+ + pynini.closure(NEMO_NOT_QUOTE, 1)
+ + pynutil.delete("\"")
+ )
+ optional_zone = pynini.closure(delete_space + insert_space + zone, 0, 1)
+ second = (
+ pynutil.delete("seconds:")
+ + delete_space
+ + pynutil.delete("\"")
+ + pynini.closure(NEMO_NOT_QUOTE, 1)
+ + pynutil.delete("\"")
+ )
+
+ graph_hms = (
+ hour
+ + pynutil.insert(" horas ")
+ + delete_space
+ + minute
+ + pynutil.insert(" minutos y ")
+ + delete_space
+ + second
+ + pynutil.insert(" segundos")
+ )
+
+ graph_hm = hour + delete_space + pynutil.insert(" y ") + minute
+ graph_hm |= pynini.union(
+ (hour @ morning_times)
+ + delete_space
+ + pynutil.insert(" y ")
+ + minute
+ + delete_space
+ + insert_space
+ + (suffix @ morning_phrases),
+ (hour @ afternoon_times)
+ + delete_space
+ + pynutil.insert(" y ")
+ + minute
+ + delete_space
+ + insert_space
+ + (suffix @ afternoon_phrases),
+ (hour @ evening_times)
+ + delete_space
+ + pynutil.insert(" y ")
+ + minute
+ + delete_space
+ + insert_space
+ + (suffix @ evening_phrases),
+ (hour @ mid_times)
+ + delete_space
+ + pynutil.insert(" y ")
+ + minute
+ + delete_space
+ + insert_space
+ + (suffix @ mid_phrases),
+ )
+
+ graph_h = pynini.union(
+ hour,
+ (hour @ morning_times) + delete_space + insert_space + (suffix @ morning_phrases),
+ (hour @ afternoon_times) + delete_space + insert_space + (suffix @ afternoon_phrases),
+ (hour @ evening_times) + delete_space + insert_space + (suffix @ evening_phrases),
+ (hour @ mid_times) + delete_space + insert_space + (suffix @ mid_phrases),
+ )
+
+ graph = (graph_hms | graph_hm | graph_h) + optional_zone
+
+ if not deterministic:
+ graph_style_1 = pynutil.delete(" style: \"1\"")
+ graph_style_2 = pynutil.delete(" style: \"2\"")
+
+ graph_menos = hour + delete_space + pynutil.insert(" menos ") + minute + graph_style_1
+ graph_menos |= (
+ (hour @ morning_times)
+ + delete_space
+ + pynutil.insert(" menos ")
+ + minute
+ + delete_space
+ + insert_space
+ + (suffix @ morning_phrases)
+ + graph_style_1
+ )
+ graph_menos |= (
+ (hour @ afternoon_times)
+ + delete_space
+ + pynutil.insert(" menos ")
+ + minute
+ + delete_space
+ + insert_space
+ + (suffix @ afternoon_phrases)
+ + graph_style_1
+ )
+ graph_menos |= (
+ (hour @ evening_times)
+ + delete_space
+ + pynutil.insert(" menos ")
+ + minute
+ + delete_space
+ + insert_space
+ + (suffix @ evening_phrases)
+ + graph_style_1
+ )
+ graph_menos |= (
+ (hour @ mid_times)
+ + delete_space
+ + pynutil.insert(" menos ")
+ + minute
+ + delete_space
+ + insert_space
+ + (suffix @ mid_phrases)
+ + graph_style_1
+ )
+ graph_menos += optional_zone
+
+ graph_para = minute + pynutil.insert(" para las ") + delete_space + hour + graph_style_2
+ graph_para |= (
+ minute
+ + pynutil.insert(" para las ")
+ + delete_space
+ + (hour @ morning_times)
+ + delete_space
+ + insert_space
+ + (suffix @ morning_phrases)
+ + graph_style_2
+ )
+ graph_para |= (
+ minute
+ + pynutil.insert(" para las ")
+ + delete_space
+ + (hour @ afternoon_times)
+ + delete_space
+ + insert_space
+ + (suffix @ afternoon_phrases)
+ + graph_style_2
+ )
+ graph_para |= (
+ minute
+ + pynutil.insert(" para las ")
+ + delete_space
+ + (hour @ evening_times)
+ + delete_space
+ + insert_space
+ + (suffix @ evening_phrases)
+ + graph_style_2
+ )
+ graph_para |= (
+ minute
+ + pynutil.insert(" para las ")
+ + delete_space
+ + (hour @ mid_times)
+ + delete_space
+ + insert_space
+ + (suffix @ mid_phrases)
+ + graph_style_2
+ )
+ graph_para += optional_zone
+ graph_para @= pynini.cdrewrite(
+ pynini.cross(" las ", " la "), "para", "una", NEMO_SIGMA
+ ) # Need agreement with one
+
+ graph |= graph_menos | graph_para
+ delete_tokens = self.delete_tokens(graph + delete_preserve_order)
+ self.fst = delete_tokens.optimize()
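The non-deterministic `menos`/`para` branches encode the two subtractive clock readings. A toy sketch with integer inputs and a truncated word table (the helper names and tables here are our own; the real grammar operates on already-verbalized strings):

```python
HOURS = {1: "una", 2: "dos", 3: "tres", 4: "cuatro", 5: "cinco", 6: "seis",
         7: "siete", 8: "ocho", 9: "nueve", 10: "diez", 11: "once", 12: "doce"}
MINUTES = {15: "cuarto", 30: "media"}  # truncated: only the idiomatic forms

def _next_hour(hour: int) -> int:
    # subtractive readings name the upcoming hour on the 12-hour clock
    return hour % 12 + 1

def time_menos(hour: int, minute: int) -> str:
    """Subtractive reading (style 1): 3:45 -> 'las cuatro menos cuarto'."""
    nxt = _next_hour(hour)
    article = "la" if nxt == 1 else "las"
    rem = 60 - minute
    return f"{article} {HOURS[nxt]} menos {MINUTES.get(rem, str(rem))}"

def time_para(hour: int, minute: int) -> str:
    """'para' reading (style 2): 12:45 -> 'cuarto para la una'."""
    nxt = _next_hour(hour)
    article = "la" if nxt == 1 else "las"  # agreement with 'una'
    rem = 60 - minute
    return f"{MINUTES.get(rem, str(rem))} para {article} {HOURS[nxt]}"
```

The one-o'clock case is why the grammar ends with a `cdrewrite` turning `para las una` into `para la una`; the sketch handles the same agreement with the `article` branch.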
diff --git a/nemo_text_processing/text_normalization/es/verbalizers/verbalize.py b/nemo_text_processing/text_normalization/es/verbalizers/verbalize.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/verbalizers/verbalize.py
@@ -0,0 +1,73 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import GraphFst
+from nemo_text_processing.text_normalization.en.verbalizers.whitelist import WhiteListFst
+from nemo_text_processing.text_normalization.es.verbalizers.cardinal import CardinalFst
+from nemo_text_processing.text_normalization.es.verbalizers.date import DateFst
+from nemo_text_processing.text_normalization.es.verbalizers.decimals import DecimalFst
+from nemo_text_processing.text_normalization.es.verbalizers.electronic import ElectronicFst
+from nemo_text_processing.text_normalization.es.verbalizers.fraction import FractionFst
+from nemo_text_processing.text_normalization.es.verbalizers.measure import MeasureFst
+from nemo_text_processing.text_normalization.es.verbalizers.money import MoneyFst
+from nemo_text_processing.text_normalization.es.verbalizers.ordinal import OrdinalFst
+from nemo_text_processing.text_normalization.es.verbalizers.telephone import TelephoneFst
+from nemo_text_processing.text_normalization.es.verbalizers.time import TimeFst
+
+
+class VerbalizeFst(GraphFst):
+ """
+ Composes other verbalizer grammars.
+ For deployment, this grammar will be compiled and exported to an OpenFst Finite State Archive (FAR) file.
+ More details on deployment at NeMo/tools/text_processing_deployment.
+
+ Args:
+ deterministic: if True will provide a single transduction option,
+ for False multiple options are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, deterministic: bool = True):
+ super().__init__(name="verbalize", kind="verbalize", deterministic=deterministic)
+ cardinal = CardinalFst(deterministic=deterministic)
+ cardinal_graph = cardinal.fst
+ ordinal = OrdinalFst(deterministic=deterministic)
+ ordinal_graph = ordinal.fst
+ decimal = DecimalFst(deterministic=deterministic)
+ decimal_graph = decimal.fst
+ fraction = FractionFst(deterministic=deterministic)
+ fraction_graph = fraction.fst
+ date = DateFst(deterministic=deterministic)
+ date_graph = date.fst
+ measure = MeasureFst(cardinal=cardinal, decimal=decimal, fraction=fraction, deterministic=deterministic)
+ measure_graph = measure.fst
+ electronic = ElectronicFst(deterministic=deterministic)
+ electronic_graph = electronic.fst
+ whitelist_graph = WhiteListFst(deterministic=deterministic).fst
+ money_graph = MoneyFst(decimal=decimal, deterministic=deterministic).fst
+ telephone_graph = TelephoneFst(deterministic=deterministic).fst
+ time_graph = TimeFst(deterministic=deterministic).fst
+
+ graph = (
+ cardinal_graph
+ | measure_graph
+ | decimal_graph
+ | ordinal_graph
+ | date_graph
+ | electronic_graph
+ | money_graph
+ | fraction_graph
+ | whitelist_graph
+ | telephone_graph
+ | time_graph
+ )
+ self.fst = graph
diff --git a/nemo_text_processing/text_normalization/es/verbalizers/verbalize_final.py b/nemo_text_processing/text_normalization/es/verbalizers/verbalize_final.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/verbalizers/verbalize_final.py
@@ -0,0 +1,52 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import GraphFst, delete_extra_space, delete_space
+from nemo_text_processing.text_normalization.en.verbalizers.word import WordFst
+from nemo_text_processing.text_normalization.es.verbalizers.verbalize import VerbalizeFst
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ PYNINI_AVAILABLE = False
+
+
+class VerbalizeFinalFst(GraphFst):
+ """
+ Finite state transducer that verbalizes an entire sentence
+
+ Args:
+ deterministic: if True will provide a single transduction option,
+ for False multiple options are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, deterministic: bool = True):
+ super().__init__(name="verbalize_final", kind="verbalize", deterministic=deterministic)
+ verbalize = VerbalizeFst(deterministic=deterministic).fst
+ word = WordFst(deterministic=deterministic).fst
+ types = verbalize | word
+ graph = (
+ pynutil.delete("tokens")
+ + delete_space
+ + pynutil.delete("{")
+ + delete_space
+ + types
+ + delete_space
+ + pynutil.delete("}")
+ )
+ graph = delete_space + pynini.closure(graph + delete_extra_space) + graph + delete_space
+ self.fst = graph
diff --git a/nemo_text_processing/text_normalization/normalize.py b/nemo_text_processing/text_normalization/normalize.py
--- a/nemo_text_processing/text_normalization/normalize.py
+++ b/nemo_text_processing/text_normalization/normalize.py
@@ -46,8 +46,8 @@
class Normalizer:
"""
- Normalizer class that converts text from written to spoken form.
- Useful for TTS preprocessing.
+ Normalizer class that converts text from written to spoken form.
+ Useful for TTS preprocessing.
Args:
input_case: expected input capitalization
@@ -83,10 +83,11 @@ def __init__(
from nemo_text_processing.text_normalization.ru.taggers.tokenize_and_classify import ClassifyFst
from nemo_text_processing.text_normalization.ru.verbalizers.verbalize_final import VerbalizeFinalFst
elif lang == 'de':
- # Ru TN only support non-deterministic cases and produces multiple normalization options
- # use normalize_with_audio.py
from nemo_text_processing.text_normalization.de.taggers.tokenize_and_classify import ClassifyFst
from nemo_text_processing.text_normalization.de.verbalizers.verbalize_final import VerbalizeFinalFst
+ elif lang == 'es':
+ from nemo_text_processing.text_normalization.es.taggers.tokenize_and_classify import ClassifyFst
+ from nemo_text_processing.text_normalization.es.verbalizers.verbalize_final import VerbalizeFinalFst
self.tagger = ClassifyFst(
input_case=input_case,
deterministic=deterministic,
@@ -106,7 +107,7 @@ def __init__(
def normalize_list(self, texts: List[str], verbose=False, punct_post_process: bool = False) -> List[str]:
"""
- NeMo text normalizer
+ NeMo text normalizer
Args:
texts: list of input strings
@@ -357,7 +358,7 @@ def select_verbalizer(self, lattice: 'pynini.FstLike') -> str:
def parse_args():
parser = ArgumentParser()
parser.add_argument("input_string", help="input string", type=str)
- parser.add_argument("--language", help="language", choices=["en", "de"], default="en", type=str)
+ parser.add_argument("--language", help="language", choices=["en", "de", "es"], default="en", type=str)
parser.add_argument(
"--input_case", help="input capitalization", choices=["lower_cased", "cased"], default="cased", type=str
)
diff --git a/nemo_text_processing/text_normalization/normalize_with_audio.py b/nemo_text_processing/text_normalization/normalize_with_audio.py
--- a/nemo_text_processing/text_normalization/normalize_with_audio.py
+++ b/nemo_text_processing/text_normalization/normalize_with_audio.py
@@ -55,15 +55,15 @@
"audio_data" - path to the audio file
"text" - raw text
"pred_text" - ASR model prediction
-
+
See https://github.com/NVIDIA/NeMo/blob/main/examples/asr/transcribe_speech.py on how to add ASR predictions
-
+
When the manifest is ready, run:
python normalize_with_audio.py \
--audio_data PATH/TO/MANIFEST.JSON \
- --language en
-
-
+ --language en
+
+
To run with a single audio file, specify path to audio and text with:
python normalize_with_audio.py \
--audio_data PATH/TO/AUDIO.WAV \
@@ -71,18 +71,18 @@
--text raw text OR PATH/TO/.TXT/FILE
--model QuartzNet15x5Base-En \
--verbose
-
+
To see possible normalization options for a text input without an audio file (could be used for debugging), run:
python python normalize_with_audio.py --text "RAW TEXT"
-
+
Specify `--cache_dir` to generate .far grammars once and re-used them for faster inference
"""
class NormalizerWithAudio(Normalizer):
"""
- Normalizer class that converts text from written to spoken form.
- Useful for TTS preprocessing.
+ Normalizer class that converts text from written to spoken form.
+ Useful for TTS preprocessing.
Args:
input_case: expected input capitalization
@@ -282,7 +282,7 @@ def parse_args():
"--input_case", help="input capitalization", choices=["lower_cased", "cased"], default="cased", type=str
)
parser.add_argument(
- "--language", help="Select target language", choices=["en", "ru", "de"], default="en", type=str
+ "--language", help="Select target language", choices=["en", "ru", "de", "es"], default="en", type=str
)
parser.add_argument("--audio_data", default=None, help="path to an audio file or .json manifest")
parser.add_argument(
diff --git a/tools/text_processing_deployment/pynini_export.py b/tools/text_processing_deployment/pynini_export.py
--- a/tools/text_processing_deployment/pynini_export.py
+++ b/tools/text_processing_deployment/pynini_export.py
@@ -67,7 +67,7 @@ def tn_grammars(**kwargs):
def export_grammars(output_dir, grammars):
"""
- Exports tokenizer_and_classify and verbalize Fsts as OpenFst finite state archive (FAR) files.
+ Exports tokenizer_and_classify and verbalize Fsts as OpenFst finite state archive (FAR) files.
Args:
output_dir: directory to export FAR files to. Subdirectories will be created for tagger and verbalizer respectively.
@@ -109,7 +109,7 @@ def parse_args():
if __name__ == '__main__':
args = parse_args()
- if args.language in ['ru', 'fr', 'es', 'vi'] and args.grammars == 'tn_grammars':
+ if args.language in ['ru', 'fr', 'vi'] and args.grammars == 'tn_grammars':
raise ValueError('Only ITN grammars could be deployed in Sparrowhawk for the selected languages.')
if args.language == 'en':
@@ -148,6 +148,10 @@ def parse_args():
from nemo_text_processing.inverse_text_normalization.es.verbalizers.verbalize import (
VerbalizeFst as ITNVerbalizeFst,
)
+ from nemo_text_processing.text_normalization.es.taggers.tokenize_and_classify import (
+ ClassifyFst as TNClassifyFst,
+ )
+ from nemo_text_processing.text_normalization.es.verbalizers.verbalize import VerbalizeFst as TNVerbalizeFst
elif args.language == 'fr':
from nemo_text_processing.inverse_text_normalization.fr.taggers.tokenize_and_classify import (
ClassifyFst as ITNClassifyFst,
</patch>
diff --git a/tests/nemo_text_processing/es/data_text_normalization/test_cases_cardinal.txt b/tests/nemo_text_processing/es/data_text_normalization/test_cases_cardinal.txt
new file mode 100644
--- /dev/null
+++ b/tests/nemo_text_processing/es/data_text_normalization/test_cases_cardinal.txt
@@ -0,0 +1,86 @@
+1~un
+2~dos
+3~tres
+4~cuatro
+5~cinco
+6~seis
+7~siete
+8~ocho
+9~nueve
+10~diez
+11~once
+12~doce
+13~trece
+14~catorce
+15~quince
+16~dieciséis
+17~diecisiete
+18~dieciocho
+19~diecinueve
+20~veinte
+21~veintiún
+22~veintidós
+23~veintitrés
+24~veinticuatro
+25~veinticinco
+26~veintiséis
+27~veintisiete
+28~veintiocho
+29~veintinueve
+30~treinta
+31~treinta y un
+40~cuarenta
+41~cuarenta y un
+50~cincuenta
+51~cincuenta y un
+60~sesenta
+70~setenta
+80~ochenta
+90~noventa
+100~cien
+101~ciento un
+120~ciento veinte
+121~ciento veintiún
+130~ciento treinta
+131~ciento treinta y un
+200~doscientos
+201~doscientos un
+300~trescientos
+301~trescientos un
+1000~mil
+1 000~mil
+1.000~mil
+1001~mil un
+1010~mil diez
+1020~mil veinte
+1021~mil veintiún
+1100~mil cien
+1101~mil ciento un
+1110~mil ciento diez
+1111~mil ciento once
+1234~mil doscientos treinta y cuatro
+2000~dos mil
+2001~dos mil un
+2010~dos mil diez
+2020~dos mil veinte
+2100~dos mil cien
+2101~dos mil ciento un
+2110~dos mil ciento diez
+2111~dos mil ciento once
+2222~dos mil doscientos veintidós
+10000~diez mil
+10 000~diez mil
+10.000~diez mil
+100000~cien mil
+100 000~cien mil
+100.000~cien mil
+1 000 000~un millón
+1.000.000~un millón
+1 234 568~un millรณn doscientos treinta y cuatro mil quinientos sesenta y ocho
+2.000.000~dos millones
+1.000.000.000~mil millones
+2.000.000.000~dos mil millones
+3 000 000 000 000~tres billones
+3.000.000.000.000~tres billones
+100 000 000 000 000 000 000 000~cien mil trillones
+100 000 000 000 000 000 000 001~cien mil trillones un
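These expectations consistently use the apocopated `un` (`veintiún`, `treinta y un`) rather than `uno`. A tiny sketch of that convention, as our own helper rather than anything in the grammar:

```python
def apocopate(cardinal: str) -> str:
    """Drop the final vowel of a trailing 'uno', with the accent shift
    for 'veintiuno' -> 'veintiún' (convention assumed from this test file)."""
    if cardinal.endswith("veintiuno"):
        return cardinal[:-len("veintiuno")] + "veintiún"
    if cardinal.endswith("uno"):
        return cardinal[:-1]
    return cardinal
```

Numbers that do not end in `uno` (e.g. `mil`, `diez`) pass through unchanged, matching the rest of the file.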
diff --git a/tests/nemo_text_processing/es/data_text_normalization/test_cases_date.txt b/tests/nemo_text_processing/es/data_text_normalization/test_cases_date.txt
new file mode 100644
--- /dev/null
+++ b/tests/nemo_text_processing/es/data_text_normalization/test_cases_date.txt
@@ -0,0 +1,13 @@
+1 enero~primero de enero
+5 febrero~cinco de febrero
+20 de marzo~veinte de marzo
+abril 30~treinta de abril
+31 marzo~treinta y uno de marzo
+10 mayo 1990~diez de mayo de mil novecientos noventa
+junio 11 2000~once de junio de dos mil
+30 julio del 2020~treinta de julio del dos mil veinte
+30-2-1990~treinta de febrero de mil novecientos noventa
+30/2/1990~treinta de febrero de mil novecientos noventa
+30.2.1990~treinta de febrero de mil novecientos noventa
+1990-2-30~treinta de febrero de mil novecientos noventa
+1990-02-30~treinta de febrero de mil novecientos noventa
\ No newline at end of file
diff --git a/tests/nemo_text_processing/es/data_text_normalization/test_cases_decimal.txt b/tests/nemo_text_processing/es/data_text_normalization/test_cases_decimal.txt
new file mode 100644
--- /dev/null
+++ b/tests/nemo_text_processing/es/data_text_normalization/test_cases_decimal.txt
@@ -0,0 +1,27 @@
+0,1~cero coma un
+0,01~cero coma cero un
+0,010~cero coma cero uno cero
+1,0101~uno coma cero uno cero un
+0,0~cero coma cero
+1,0~uno coma cero
+1,00~uno coma cero cero
+1,1~uno coma un
+233,32~doscientos treinta y tres coma treinta y dos
+32,22 millones~treinta y dos coma veintidós millones
+320 320,22 millones~trescientos veinte mil trescientos veinte coma veintidós millones
+5.002,232~cinco mil dos coma doscientos treinta y dos
+3,2 trillones~tres coma dos trillones
+3 millones~tres millones
+3 000 millones~tres mil millones
+3000 millones~tres mil millones
+3.000 millones~tres mil millones
+3.001 millones~tres mil un millones
+1 millón~un millón
+1 000 millones~mil millones
+1000 millones~mil millones
+1.000 millones~mil millones
+2,33302 millones~dos coma tres tres tres cero dos millones
+1,5332 millón~uno coma cinco tres tres dos millón
+1,53322 millón~uno coma cinco tres tres dos dos millón
+1,53321 millón~uno coma cinco tres tres dos un millón
+101,010101 millones~ciento uno coma cero uno cero uno cero un millones
\ No newline at end of file
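The fractional parts above are read digit by digit, with `1` spoken as `uno` in non-final position but apocopated to `un` at the end (`0,010` → `cero uno cero`, `0,01` → `cero un`). A small sketch of that rule, as our own helper:

```python
DIGITS = {"0": "cero", "1": "un", "2": "dos", "3": "tres", "4": "cuatro",
          "5": "cinco", "6": "seis", "7": "siete", "8": "ocho", "9": "nueve"}

def fractional_part(digits: str) -> str:
    """Digit-by-digit reading; '1' is 'uno' unless it is the last digit,
    where the apocopated 'un' is used."""
    words = []
    for i, d in enumerate(digits):
        if d == "1" and i < len(digits) - 1:
            words.append("uno")  # non-final '1' keeps the full form
        else:
            words.append(DIGITS[d])
    return " ".join(words)
```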
diff --git a/tests/nemo_text_processing/es/data_text_normalization/test_cases_electronic.txt b/tests/nemo_text_processing/es/data_text_normalization/test_cases_electronic.txt
new file mode 100644
--- /dev/null
+++ b/tests/nemo_text_processing/es/data_text_normalization/test_cases_electronic.txt
@@ -0,0 +1,12 @@
+a.bc@gmail.com~a punto b c arroba gmail punto com
+cdf@abc.edu~c d f arroba a b c punto e d u
+abc@gmail.abc~a b c arroba gmail punto a b c
+abc@abc.com~a b c arroba a b c punto com
+asdf123@abc.com~a s d f uno dos tres arroba a b c punto com
+a1b2@abc.com~a uno b dos arroba a b c punto com
+ab3.sdd.3@gmail.com~a b tres punto s d d punto tres arroba gmail punto com
+https://www.nvidia.com~h t t p s dos puntos barra barra w w w punto nvidia punto com
+www.nvidia.com~w w w punto nvidia punto com
+www.abc.es/efg~w w w punto a b c punto es barra e f g
+www.abc.es~w w w punto a b c punto es
+http://www.ourdailynews.com.sm~h t t p dos puntos barra barra w w w punto o u r d a i l y n e w s punto com punto s m
\ No newline at end of file
diff --git a/tests/nemo_text_processing/es/data_text_normalization/test_cases_fraction.txt b/tests/nemo_text_processing/es/data_text_normalization/test_cases_fraction.txt
new file mode 100644
--- /dev/null
+++ b/tests/nemo_text_processing/es/data_text_normalization/test_cases_fraction.txt
@@ -0,0 +1,76 @@
+1/2~medio
+1 1/2~uno y medio
+3/2~tres medios
+1 3/2~uno y tres medios
+1/3~un tercio
+2/3~dos tercios
+1/4~un cuarto
+2/4~dos cuartos
+1/5~un quinto
+2/5~dos quintos
+1/6~un sexto
+2/6~dos sextos
+1/7~un séptimo
+2/7~dos séptimos
+1/8~un octavo
+2/8~dos octavos
+1/9~un noveno
+2/9~dos novenos
+1/10~un décimo
+2/10~dos décimos
+1/11~un onceavo
+1/12~un doceavo
+1/13~un treceavo
+1/14~un catorceavo
+1/15~un quinceavo
+1/16~un dieciseisavo
+1/17~un diecisieteavo
+1/18~un dieciochoavo
+1/19~un diecinueveavo
+1/20~un veinteavo
+1/21~un veintiunavo
+1/22~un veintidosavo
+1/30~un treintavo
+1/31~un treintaiunavo
+1/40~un cuarentavo
+1/41~un cuarentaiunavo
+1/50~un cincuentavo
+1/60~un sesentavo
+1/70~un setentavo
+1/80~un ochentavo
+1/90~un noventavo
+1/100~un centésimo
+2/100~dos centésimos
+1 2/100~uno y dos centésimos
+1/101~uno sobre ciento uno
+1/110~uno sobre ciento diez
+1/111~uno sobre ciento once
+1/112~uno sobre ciento doce
+1/123~uno sobre ciento veintitrés
+1/134~uno sobre ciento treinta y cuatro
+1/200~un ducentésimo
+1/201~uno sobre doscientos uno
+1/234~uno sobre doscientos treinta y cuatro
+1/300~un tricentésimo
+1/345~uno sobre trescientos cuarenta y cinco
+1/400~un cuadringentésimo
+1/456~uno sobre cuatrocientos cincuenta y seis
+1/500~un quingentésimo
+1/600~un sexcentésimo
+1/700~un septingentésimo
+1/800~un octingentésimo
+1/900~un noningentésimo
+1/1000~un milésimo
+2/1000~dos milésimos
+1 2/1000~uno y dos milésimos
+1/1001~uno sobre mil uno
+1/1100~uno sobre mil cien
+1/1200~uno sobre mil doscientos
+1/1234~uno sobre mil doscientos treinta y cuatro
+1/2000~un dosmilésimo
+1/5000~un cincomilésimo
+1/10000~un diezmilésimo
+1/100.000~un cienmilésimo
+1/1.000.000~un millonésimo
+1/100.000.000~un cienmillonésimo
+1/1.200.000.000~un mildoscientosmillonésimo
\ No newline at end of file
diff --git a/tests/nemo_text_processing/es/data_text_normalization/test_cases_measure.txt b/tests/nemo_text_processing/es/data_text_normalization/test_cases_measure.txt
new file mode 100644
--- /dev/null
+++ b/tests/nemo_text_processing/es/data_text_normalization/test_cases_measure.txt
@@ -0,0 +1,17 @@
+1,2-a~uno coma dos a
+a-5~a cinco
+200 m~doscientos metros
+3 h~tres horas
+1 h~una hora
+245 mph~doscientas cuarenta y cinco millas por hora
+2 kg~dos kilogramos
+60,2400 kg~sesenta coma dos cuatro cero cero kilogramos
+-60,2400 kg~menos sesenta coma dos cuatro cero cero kilogramos
+8,52 %~ocho coma cincuenta y dos por ciento
+-8,52 %~menos ocho coma cincuenta y dos por ciento
+1 %~uno por ciento
+3 cm~tres centímetros
+4 s~cuatro segundos
+5 l~cinco litros
+4,51/s~cuatro coma cincuenta y uno por segundo
+0,0101 s~cero coma cero uno cero un segundos
\ No newline at end of file
diff --git a/tests/nemo_text_processing/es/data_text_normalization/test_cases_money.txt b/tests/nemo_text_processing/es/data_text_normalization/test_cases_money.txt
new file mode 100644
--- /dev/null
+++ b/tests/nemo_text_processing/es/data_text_normalization/test_cases_money.txt
@@ -0,0 +1,24 @@
+$1~un dólar
+1 $~un dólar
+$1,50~un dólar cincuenta centavos
+1,50 $~un dólar cincuenta centavos
+£200.000.001~doscientos millones una libras
+200.000.001 £~doscientos millones una libras
+2 billones de euros~dos billones de euros
+€2 billones~dos billones de euros
+€ 2 billones~dos billones de euros
+€ 2,3 billones~dos coma tres billones de euros
+2,3 billones de euros~dos coma tres billones de euros
+€5,50~cinco euros cincuenta céntimos
+5,50 €~cinco euros cincuenta céntimos
+5,01 €~cinco euros un céntimo
+5,01 £~cinco libras un penique
+21 czk~veintiuna coronas checas
+czk21~veintiuna coronas checas
+czk21,1 millones~veintiuna coma una millones de coronas checas
+czk 5,50 billones~cinco coma cincuenta billones de coronas checas
+rs 5,50 billones~cinco coma cincuenta billones de rupias
+czk5,50 billones~cinco coma cincuenta billones de coronas checas
+0,55 $~cincuenta y cinco centavos
+1,01 $~un dólar un centavo
+¥12,05~doce yenes cinco centavos
\ No newline at end of file
diff --git a/tests/nemo_text_processing/es/data_text_normalization/test_cases_normalize_with_audio.txt b/tests/nemo_text_processing/es/data_text_normalization/test_cases_normalize_with_audio.txt
new file mode 100644
--- /dev/null
+++ b/tests/nemo_text_processing/es/data_text_normalization/test_cases_normalize_with_audio.txt
@@ -0,0 +1,120 @@
+~121
+ciento veintiún
+ciento veintiuno
+ciento veintiuna
+121
+~200
+doscientos
+doscientas
+200
+~201
+doscientos un
+doscientos uno
+doscientas una
+201
+~1
+un
+uno
+una
+1
+~550.000.001
+quinientos cincuenta millones un
+quinientos cincuenta millones una
+quinientos cincuenta millones uno
+550.000.001
+~500.501
+quinientos mil quinientos un
+quinientos mil quinientos uno
+quinientas mil quinientas una
+500.501
+~500.001.º
+quinientosmilésimo primero
+quingentésimo milésimo primero
+quinientosmilésimos primeros
+quingentésimos milésimos primeros
+500.001.º
+~500.001.ª
+quinientasmilésima primera
+quingentésima milésima primera
+quinientasmilésimas primeras
+quingentésimas milésimas primeras
+500.001.ª
+~11.ª
+décima primera
+decimoprimera
+décimas primeras
+decimoprimeras
+undécima
+undécimas
+11.ª
+~11.º
+décimo primero
+decimoprimero
+décimos primeros
+decimoprimeros
+undécimo
+undécimos
+11.º
+~12.º
+décimo segundo
+decimosegundo
+décimos segundos
+decimosegundos
+duodécimo
+duodécimos
+12.º
+~200,0101
+doscientos coma cero uno cero un
+doscientos coma cero uno cero uno
+doscientas coma cero una cero una
+200,0101
+~1.000.200,21
+un millón doscientos coma veintiún
+un millón doscientos coma veintiuno
+un millón doscientas coma veintiuna
+un millón doscientos coma dos un
+un millón doscientos coma dos uno
+un millón doscientas coma dos una
+1.000.200,21
+~1/12
+un doceavo
+una doceava parte
+un duodécimo
+una duodécima parte
+uno sobre doce
+1/12
+~5/200
+cinco ducentésimos
+cinco ducentésimas partes
+cinco sobre doscientos
+5/200
+~1 5/3
+uno y cinco tercios
+una y cinco terceras partes
+uno y cinco sobre tres
+una y cinco sobre tres
+~1/5/2020
+primero de mayo de dos mil veinte
+uno de mayo de dos mil veinte
+cinco de enero de dos mil veinte
+~$5,50
+cinco dólares con cincuenta
+cinco dólares y cincuenta
+cinco dólares cincuenta
+cinco dólares con cincuenta centavos
+cinco dólares y cincuenta centavos
+cinco dólares cincuenta centavos
+~2.30 h
+dos y treinta
+dos y media
+tres menos treinta
+tres menos media
+treinta para las tres
+~12.30 a.m.
+doce y treinta de la medianoche
+doce y treinta de la noche
+doce y media de la medianoche
+doce y media de la noche
+una menos treinta de la mañana
+una menos media de la mañana
+treinta para la una de la mañana
\ No newline at end of file
diff --git a/tests/nemo_text_processing/es/data_text_normalization/test_cases_ordinal.txt b/tests/nemo_text_processing/es/data_text_normalization/test_cases_ordinal.txt
new file mode 100644
--- /dev/null
+++ b/tests/nemo_text_processing/es/data_text_normalization/test_cases_ordinal.txt
@@ -0,0 +1,137 @@
+1.ᵉʳ~primer
+1.º~primero
+1.ª~primera
+2.º~segundo
+2.ª~segunda
+ii~segundo
+II~segundo
+3.ᵉʳ~tercer
+3.º~tercero
+3.ª~tercera
+4.º~cuarto
+4.ª~cuarta
+5.º~quinto
+5.ª~quinta
+6.º~sexto
+6.ª~sexta
+7.º~séptimo
+7.ª~séptima
+8.º~octavo
+8.ª~octava
+9.º~noveno
+9.ª~novena
+10.º~décimo
+10.ª~décima
+11.ᵉʳ~decimoprimer
+11.º~undécimo
+11.ª~undécima
+12.º~duodécimo
+12.ª~duodécima
+13.ᵉʳ~decimotercer
+13.º~decimotercero
+13.ª~decimotercera
+14.º~decimocuarto
+14.ª~decimocuarta
+15.º~decimoquinto
+15.ª~decimoquinta
+16.º~decimosexto
+16.ª~decimosexta
+17.º~decimoséptimo
+17.ª~decimoséptima
+18.º~decimoctavo
+18.ª~decimoctava
+19.º~decimonoveno
+19.ª~decimonovena
+20.º~vigésimo
+20.ª~vigésima
+21.ᵉʳ~vigesimoprimer
+21.º~vigesimoprimero
+21.ª~vigesimoprimera
+30.º~trigésimo
+30.ª~trigésima
+31.ᵉʳ~trigésimo primer
+31.º~trigésimo primero
+31.ª~trigésima primera
+40.º~cuadragésimo
+40.ª~cuadragésima
+41.ᵉʳ~cuadragésimo primer
+41.º~cuadragésimo primero
+41.ª~cuadragésima primera
+50.º~quincuagésimo
+50.ª~quincuagésima
+51.ᵉʳ~quincuagésimo primer
+51.º~quincuagésimo primero
+51.ª~quincuagésima primera
+60.º~sexagésimo
+60.ª~sexagésima
+70.º~septuagésimo
+70.ª~septuagésima
+80.º~octogésimo
+80.ª~octogésima
+90.º~nonagésimo
+90.ª~nonagésima
+100.º~centésimo
+100.ª~centésima
+101.ᵉʳ~centésimo primer
+101.º~centésimo primero
+101.ª~centésima primera
+134.º~centésimo trigésimo cuarto
+134.ª~centésima trigésima cuarta
+200.º~ducentésimo
+200.ª~ducentésima
+300.º~tricentésimo
+300.ª~tricentésima
+400.º~cuadringentésimo
+400.ª~cuadringentésima
+500.º~quingentésimo
+500.ª~quingentésima
+600.º~sexcentésimo
+600.ª~sexcentésima
+700.º~septingentésimo
+700.ª~septingentésima
+800.º~octingentésimo
+800.ª~octingentésima
+900.º~noningentésimo
+900.ª~noningentésima
+1000.º~milésimo
+1000.ª~milésima
+1001.ᵉʳ~milésimo primer
+1 000.º~milésimo
+1 000.ª~milésima
+1 001.ᵉʳ~milésimo primer
+1.000.º~milésimo
+1.000.ª~milésima
+1.001.ᵉʳ~milésimo primer
+1248.º~milésimo ducentésimo cuadragésimo octavo
+1248.ª~milésima ducentésima cuadragésima octava
+2000.º~dosmilésimo
+100 000.º~cienmilésimo
+i~primero
+I~primero
+ii~segundo
+II~segundo
+iii~tercero
+III~tercero
+iv~cuarto
+IV~cuarto
+V~quinto
+VI~sexto
+VII~séptimo
+VIII~octavo
+IX~noveno
+X~décimo
+XI~undécimo
+XII~duodécimo
+XIII~decimotercero
+XX~vigésimo
+XXI~vigesimoprimero
+XXX~trigésimo
+XL~cuadragésimo
+L~quincuagésimo
+XC~nonagésimo
+C~centésimo
+CD~cuadringentésimo
+D~quingentésimo
+CM~noningentésimo
+999.º~noningentésimo nonagésimo noveno
+cmxcix~noningentésimo nonagésimo noveno
\ No newline at end of file
diff --git a/tests/nemo_text_processing/es/data_text_normalization/test_cases_telephone.txt b/tests/nemo_text_processing/es/data_text_normalization/test_cases_telephone.txt
new file mode 100644
--- /dev/null
+++ b/tests/nemo_text_processing/es/data_text_normalization/test_cases_telephone.txt
@@ -0,0 +1,3 @@
+123-123-5678~uno dos tres uno dos tres cinco seis siete ocho
+123-456-789~uno dos tres cuatro cinco seis siete ocho nueve
+1234-5678~uno dos tres cuatro cinco seis siete ocho
\ No newline at end of file
diff --git a/tests/nemo_text_processing/es/data_text_normalization/test_cases_time.txt b/tests/nemo_text_processing/es/data_text_normalization/test_cases_time.txt
new file mode 100644
--- /dev/null
+++ b/tests/nemo_text_processing/es/data_text_normalization/test_cases_time.txt
@@ -0,0 +1,26 @@
+1.00~una
+1:00~una
+01:00~una
+01 h~una
+3 h~tres horas
+1 h~una hora
+1.05 h~una y cinco
+01.05 h~una y cinco
+1.00 h~una
+1.00 a.m.~una de la mañana
+1.00 a.m~una de la mañana
+1.00 p.m.~una de la tarde
+1.00 p.m est~una de la tarde e s t
+1.00 est~una e s t
+5:02 est~cinco y dos e s t
+5:02 p.m pst~cinco y dos de la noche p s t
+5:02 p.m.~cinco y dos de la noche
+12.15~doce y cuarto
+12.15 a.m.~doce y cuarto de la noche
+12.15 p.m.~doce y cuarto del mediodía
+13.30~trece y media
+14.05~catorce y cinco
+24:50~veinticuatro y cincuenta
+3:02:32 pst~tres horas dos minutos y treinta y dos segundos p s t
+00:52~cero y cincuenta y dos
+0:52~cero y cincuenta y dos
diff --git a/tests/nemo_text_processing/es/data_text_normalization/test_cases_whitelist.txt b/tests/nemo_text_processing/es/data_text_normalization/test_cases_whitelist.txt
new file mode 100644
--- /dev/null
+++ b/tests/nemo_text_processing/es/data_text_normalization/test_cases_whitelist.txt
@@ -0,0 +1,3 @@
+el dr.~el doctor
+sr. rodriguez~señor rodriguez
+182 esq. toledo~ciento ochenta y dos esquina toledo
\ No newline at end of file
diff --git a/tests/nemo_text_processing/es/data_text_normalization/test_cases_word.txt b/tests/nemo_text_processing/es/data_text_normalization/test_cases_word.txt
new file mode 100644
--- /dev/null
+++ b/tests/nemo_text_processing/es/data_text_normalization/test_cases_word.txt
@@ -0,0 +1,48 @@
+~
+yahoo!~yahoo!
+veinte!~veinte!
+โ~โ
+aaa~aaa
+aabach~aabach
+aabenraa~aabenraa
+aabye~aabye
+aaccessed~aaccessed
+aach~aach
+aachen's~aachen's
+aadri~aadri
+aafia~aafia
+aagaard~aagaard
+aagadu~aagadu
+aagard~aagard
+aagathadi~aagathadi
+aaghart's~aaghart's
+aagnes~aagnes
+aagomoni~aagomoni
+aagon~aagon
+aagoo~aagoo
+aagot~aagot
+aahar~aahar
+aahh~aahh
+aahperd~aahperd
+aaibinterstate~aaibinterstate
+aajab~aajab
+aakasa~aakasa
+aakervik~aakervik
+aakirkeby~aakirkeby
+aalam~aalam
+aalbaek~aalbaek
+aaldiu~aaldiu
+aalem~aalem
+a'ali~a'ali
+aalilaassamthey~aalilaassamthey
+aalin~aalin
+aaliyan~aaliyan
+aaliyan's~aaliyan's
+aamadu~aamadu
+aamara~aamara
+aambala~aambala
+aamera~aamera
+aamer's~aamer's
+aamina~aamina
+aaminah~aaminah
+aamjiwnaang~aamjiwnaang
diff --git a/tests/nemo_text_processing/es/test_cardinal.py b/tests/nemo_text_processing/es/test_cardinal.py
--- a/tests/nemo_text_processing/es/test_cardinal.py
+++ b/tests/nemo_text_processing/es/test_cardinal.py
@@ -22,7 +22,8 @@
class TestCardinal:
- inverse_normalizer_es = (
+
+ inverse_normalizer = (
InverseNormalizer(lang='es', cache_dir=CACHE_DIR, overwrite_cache=False) if PYNINI_AVAILABLE else None
)
@@ -32,6 +33,34 @@ class TestCardinal:
)
@pytest.mark.run_only_on('CPU')
@pytest.mark.unit
- def test_denorm_es(self, test_input, expected):
- pred = self.inverse_normalizer_es.inverse_normalize(test_input, verbose=False)
+ def test_denorm(self, test_input, expected):
+ pred = self.inverse_normalizer.inverse_normalize(test_input, verbose=False)
assert pred == expected
+
+ normalizer = (
+ Normalizer(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE
+ else None
+ )
+
+ normalizer_with_audio = (
+ NormalizerWithAudio(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE and CACHE_DIR
+ else None
+ )
+
+ @parameterized.expand(parse_test_case_file('es/data_text_normalization/test_cases_cardinal.txt'))
+ @pytest.mark.skipif(
+ not PYNINI_AVAILABLE, reason="`pynini` not installed, please install via nemo_text_processing/setup.sh"
+ )
+ @pytest.mark.run_only_on('CPU')
+ @pytest.mark.unit
+ def test_norm(self, test_input, expected):
+ pred = self.normalizer.normalize(test_input, verbose=False)
+ assert pred == expected
+
+ if self.normalizer_with_audio:
+ pred_non_deterministic = self.normalizer_with_audio.normalize(
+ test_input, n_tagged=1000, punct_post_process=False
+ )
+ assert expected in pred_non_deterministic
diff --git a/tests/nemo_text_processing/es/test_date.py b/tests/nemo_text_processing/es/test_date.py
--- a/tests/nemo_text_processing/es/test_date.py
+++ b/tests/nemo_text_processing/es/test_date.py
@@ -22,7 +22,7 @@
class TestDate:
- inverse_normalizer_es = (
+ inverse_normalizer = (
InverseNormalizer(lang='es', cache_dir=CACHE_DIR, overwrite_cache=False) if PYNINI_AVAILABLE else None
)
@@ -32,6 +32,34 @@ class TestDate:
)
@pytest.mark.run_only_on('CPU')
@pytest.mark.unit
- def test_denorm_es(self, test_input, expected):
- pred = self.inverse_normalizer_es.inverse_normalize(test_input, verbose=False)
+ def test_denorm(self, test_input, expected):
+ pred = self.inverse_normalizer.inverse_normalize(test_input, verbose=False)
assert pred == expected
+
+ normalizer = (
+ Normalizer(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE
+ else None
+ )
+
+ normalizer_with_audio = (
+ NormalizerWithAudio(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE and CACHE_DIR
+ else None
+ )
+
+ @parameterized.expand(parse_test_case_file('es/data_text_normalization/test_cases_date.txt'))
+ @pytest.mark.skipif(
+ not PYNINI_AVAILABLE, reason="`pynini` not installed, please install via nemo_text_processing/setup.sh"
+ )
+ @pytest.mark.run_only_on('CPU')
+ @pytest.mark.unit
+ def test_norm(self, test_input, expected):
+ pred = self.normalizer.normalize(test_input, verbose=False)
+ assert pred == expected
+
+ if self.normalizer_with_audio:
+ pred_non_deterministic = self.normalizer_with_audio.normalize(
+ test_input, n_tagged=1000, punct_post_process=False
+ )
+ assert expected in pred_non_deterministic
diff --git a/tests/nemo_text_processing/es/test_decimal.py b/tests/nemo_text_processing/es/test_decimal.py
--- a/tests/nemo_text_processing/es/test_decimal.py
+++ b/tests/nemo_text_processing/es/test_decimal.py
@@ -22,7 +22,7 @@
class TestDecimal:
- inverse_normalizer_es = (
+ inverse_normalizer = (
InverseNormalizer(lang='es', cache_dir=CACHE_DIR, overwrite_cache=False) if PYNINI_AVAILABLE else None
)
@@ -32,6 +32,34 @@ class TestDecimal:
)
@pytest.mark.run_only_on('CPU')
@pytest.mark.unit
- def test_denorm_es(self, test_input, expected):
- pred = self.inverse_normalizer_es.inverse_normalize(test_input, verbose=False)
+ def test_denorm(self, test_input, expected):
+ pred = self.inverse_normalizer.inverse_normalize(test_input, verbose=False)
assert pred == expected
+
+ normalizer = (
+ Normalizer(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE
+ else None
+ )
+
+ normalizer_with_audio = (
+ NormalizerWithAudio(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE and CACHE_DIR
+ else None
+ )
+
+ @parameterized.expand(parse_test_case_file('es/data_text_normalization/test_cases_decimal.txt'))
+ @pytest.mark.skipif(
+ not PYNINI_AVAILABLE, reason="`pynini` not installed, please install via nemo_text_processing/setup.sh"
+ )
+ @pytest.mark.run_only_on('CPU')
+ @pytest.mark.unit
+ def test_norm(self, test_input, expected):
+ pred = self.normalizer.normalize(test_input, verbose=False)
+ assert pred == expected
+
+ if self.normalizer_with_audio:
+ pred_non_deterministic = self.normalizer_with_audio.normalize(
+ test_input, n_tagged=1000, punct_post_process=False
+ )
+ assert expected in pred_non_deterministic
diff --git a/tests/nemo_text_processing/es/test_electronic.py b/tests/nemo_text_processing/es/test_electronic.py
--- a/tests/nemo_text_processing/es/test_electronic.py
+++ b/tests/nemo_text_processing/es/test_electronic.py
@@ -35,3 +35,31 @@ class TestElectronic:
def test_denorm_es(self, test_input, expected):
pred = self.inverse_normalizer_es.inverse_normalize(test_input, verbose=False)
assert pred == expected
+
+ normalizer = (
+ Normalizer(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE
+ else None
+ )
+
+ normalizer_with_audio = (
+ NormalizerWithAudio(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE and CACHE_DIR
+ else None
+ )
+
+ @parameterized.expand(parse_test_case_file('es/data_text_normalization/test_cases_electronic.txt'))
+ @pytest.mark.skipif(
+ not PYNINI_AVAILABLE, reason="`pynini` not installed, please install via nemo_text_processing/setup.sh"
+ )
+ @pytest.mark.run_only_on('CPU')
+ @pytest.mark.unit
+ def test_norm(self, test_input, expected):
+ pred = self.normalizer.normalize(test_input, verbose=False)
+ assert pred == expected
+
+ if self.normalizer_with_audio:
+ pred_non_deterministic = self.normalizer_with_audio.normalize(
+ test_input, n_tagged=1000, punct_post_process=False
+ )
+ assert expected in pred_non_deterministic
diff --git a/tests/nemo_text_processing/es/test_fraction.py b/tests/nemo_text_processing/es/test_fraction.py
new file mode 100644
--- /dev/null
+++ b/tests/nemo_text_processing/es/test_fraction.py
@@ -0,0 +1,51 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+import pytest
+from nemo_text_processing.text_normalization.normalize import Normalizer
+from nemo_text_processing.text_normalization.normalize_with_audio import NormalizerWithAudio
+from parameterized import parameterized
+
+from ..utils import CACHE_DIR, PYNINI_AVAILABLE, parse_test_case_file
+
+
+class TestFraction:
+ normalizer = (
+ Normalizer(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE
+ else None
+ )
+
+ normalizer_with_audio = (
+ NormalizerWithAudio(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE and CACHE_DIR
+ else None
+ )
+
+ @parameterized.expand(parse_test_case_file('es/data_text_normalization/test_cases_fraction.txt'))
+ @pytest.mark.skipif(
+ not PYNINI_AVAILABLE, reason="`pynini` not installed, please install via nemo_text_processing/setup.sh"
+ )
+ @pytest.mark.run_only_on('CPU')
+ @pytest.mark.unit
+ def test_norm(self, test_input, expected):
+ pred = self.normalizer.normalize(test_input, verbose=False)
+ assert pred == expected
+
+ if self.normalizer_with_audio:
+ pred_non_deterministic = self.normalizer_with_audio.normalize(
+ test_input, n_tagged=1000, punct_post_process=False
+ )
+ assert expected in pred_non_deterministic
diff --git a/tests/nemo_text_processing/es/test_measure.py b/tests/nemo_text_processing/es/test_measure.py
--- a/tests/nemo_text_processing/es/test_measure.py
+++ b/tests/nemo_text_processing/es/test_measure.py
@@ -36,3 +36,31 @@ class TestMeasure:
def test_denorm_es(self, test_input, expected):
pred = self.inverse_normalizer_es.inverse_normalize(test_input, verbose=False)
assert pred == expected
+
+ normalizer = (
+ Normalizer(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE
+ else None
+ )
+
+ normalizer_with_audio = (
+ NormalizerWithAudio(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE and CACHE_DIR
+ else None
+ )
+
+ @parameterized.expand(parse_test_case_file('es/data_text_normalization/test_cases_measure.txt'))
+ @pytest.mark.skipif(
+ not PYNINI_AVAILABLE, reason="`pynini` not installed, please install via nemo_text_processing/setup.sh"
+ )
+ @pytest.mark.run_only_on('CPU')
+ @pytest.mark.unit
+ def test_norm(self, test_input, expected):
+ pred = self.normalizer.normalize(test_input, verbose=False)
+ assert pred == expected
+
+ if self.normalizer_with_audio:
+ pred_non_deterministic = self.normalizer_with_audio.normalize(
+ test_input, n_tagged=1000, punct_post_process=False
+ )
+ assert expected in pred_non_deterministic
diff --git a/tests/nemo_text_processing/es/test_money.py b/tests/nemo_text_processing/es/test_money.py
--- a/tests/nemo_text_processing/es/test_money.py
+++ b/tests/nemo_text_processing/es/test_money.py
@@ -23,7 +23,7 @@
class TestMoney:
- inverse_normalizer_es = (
+ inverse_normalizer = (
InverseNormalizer(lang='es', cache_dir=CACHE_DIR, overwrite_cache=False) if PYNINI_AVAILABLE else None
)
@@ -33,6 +33,34 @@ class TestMoney:
)
@pytest.mark.run_only_on('CPU')
@pytest.mark.unit
- def test_denorm_es(self, test_input, expected):
- pred = self.inverse_normalizer_es.inverse_normalize(test_input, verbose=False)
+ def test_denorm(self, test_input, expected):
+ pred = self.inverse_normalizer.inverse_normalize(test_input, verbose=False)
assert pred == expected
+
+ normalizer = (
+ Normalizer(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE
+ else None
+ )
+
+ normalizer_with_audio = (
+ NormalizerWithAudio(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE and CACHE_DIR
+ else None
+ )
+
+ @parameterized.expand(parse_test_case_file('es/data_text_normalization/test_cases_money.txt'))
+ @pytest.mark.skipif(
+ not PYNINI_AVAILABLE, reason="`pynini` not installed, please install via nemo_text_processing/setup.sh"
+ )
+ @pytest.mark.run_only_on('CPU')
+ @pytest.mark.unit
+ def test_norm(self, test_input, expected):
+ pred = self.normalizer.normalize(test_input, verbose=False)
+ assert pred == expected
+
+ if self.normalizer_with_audio:
+ pred_non_deterministic = self.normalizer_with_audio.normalize(
+ test_input, n_tagged=1000, punct_post_process=False
+ )
+ assert expected in pred_non_deterministic
diff --git a/tests/nemo_text_processing/es/test_normalization_with_audio.py b/tests/nemo_text_processing/es/test_normalization_with_audio.py
new file mode 100644
--- /dev/null
+++ b/tests/nemo_text_processing/es/test_normalization_with_audio.py
@@ -0,0 +1,43 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import pytest
+from nemo_text_processing.text_normalization.normalize_with_audio import NormalizerWithAudio
+from parameterized import parameterized
+
+from ..utils import CACHE_DIR, PYNINI_AVAILABLE, get_test_cases_multiple
+
+
+class TestNormalizeWithAudio:
+
+ normalizer_es = (
+ NormalizerWithAudio(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE
+ else None
+ )
+
+ @parameterized.expand(get_test_cases_multiple('es/data_text_normalization/test_cases_normalize_with_audio.txt'))
+ @pytest.mark.skipif(
+ not PYNINI_AVAILABLE, reason="`pynini` not installed, please install via nemo_text_processing/setup.sh"
+ )
+ @pytest.mark.run_only_on('CPU')
+ @pytest.mark.unit
+ def test_norm(self, test_input, expected):
+ pred = self.normalizer_es.normalize(test_input, n_tagged=1000, punct_post_process=False)
+ print(expected)
+ print("pred")
+ print(pred)
+ assert len(set(pred).intersection(set(expected))) == len(
+ expected
+ ), f'missing: {set(expected).difference(set(pred))}'
diff --git a/tests/nemo_text_processing/es/test_ordinal.py b/tests/nemo_text_processing/es/test_ordinal.py
--- a/tests/nemo_text_processing/es/test_ordinal.py
+++ b/tests/nemo_text_processing/es/test_ordinal.py
@@ -23,7 +23,7 @@
class TestOrdinal:
- inverse_normalizer_es = (
+ inverse_normalizer = (
InverseNormalizer(lang='es', cache_dir=CACHE_DIR, overwrite_cache=False) if PYNINI_AVAILABLE else None
)
@@ -33,6 +33,33 @@ class TestOrdinal:
)
@pytest.mark.run_only_on('CPU')
@pytest.mark.unit
- def test_denorm_es(self, test_input, expected):
- pred = self.inverse_normalizer_es.inverse_normalize(test_input, verbose=False)
+ def test_denorm(self, test_input, expected):
+ pred = self.inverse_normalizer.inverse_normalize(test_input, verbose=False)
assert pred == expected
+
+ normalizer = (
+ Normalizer(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE
+ else None
+ )
+ normalizer_with_audio = (
+ NormalizerWithAudio(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE and CACHE_DIR
+ else None
+ )
+
+ @parameterized.expand(parse_test_case_file('es/data_text_normalization/test_cases_ordinal.txt'))
+ @pytest.mark.skipif(
+ not PYNINI_AVAILABLE, reason="`pynini` not installed, please install via nemo_text_processing/setup.sh"
+ )
+ @pytest.mark.run_only_on('CPU')
+ @pytest.mark.unit
+ def test_norm(self, test_input, expected):
+ pred = self.normalizer.normalize(test_input, verbose=False)
+ assert pred == expected
+
+ if self.normalizer_with_audio:
+ pred_non_deterministic = self.normalizer_with_audio.normalize(
+ test_input, n_tagged=30, punct_post_process=False,
+ )
+ assert expected in pred_non_deterministic
diff --git a/tests/nemo_text_processing/es/test_sparrowhawk_normalization.sh b/tests/nemo_text_processing/es/test_sparrowhawk_normalization.sh
new file mode 100644
--- /dev/null
+++ b/tests/nemo_text_processing/es/test_sparrowhawk_normalization.sh
@@ -0,0 +1,84 @@
+#! /bin/sh
+
+PROJECT_DIR=/workspace/tests
+
+runtest () {
+ input=$1
+ cd /workspace/sparrowhawk/documentation/grammars
+
+ # read test file
+ while read testcase; do
+ IFS='~' read written spoken <<< $testcase
+ denorm_pred=$(echo $written | normalizer_main --config=sparrowhawk_configuration.ascii_proto 2>&1 | tail -n 1)
+
+ # trim white space
+ spoken="$(echo -e "${spoken}" | sed -e 's/^[[:space:]]*//' -e 's/[[:space:]]*$//')"
+ denorm_pred="$(echo -e "${denorm_pred}" | sed -e 's/^[[:space:]]*//' -e 's/[[:space:]]*$//')"
+
+ # input expected actual
+ assertEquals "$written" "$spoken" "$denorm_pred"
+ done < "$input"
+}
+
+testTNCardinal() {
+ input=$PROJECT_DIR/es/data_text_normalization/test_cases_cardinal.txt
+ runtest $input
+}
+
+testTNDate() {
+ input=$PROJECT_DIR/es/data_text_normalization/test_cases_date.txt
+ runtest $input
+}
+
+testTNDecimal() {
+ input=$PROJECT_DIR/es/data_text_normalization/test_cases_decimal.txt
+ runtest $input
+}
+
+testTNElectronic() {
+ input=$PROJECT_DIR/es/data_text_normalization/test_cases_electronic.txt
+ runtest $input
+}
+
+testTNFraction() {
+ input=$PROJECT_DIR/es/data_text_normalization/test_cases_fraction.txt
+ runtest $input
+}
+
+testTNMoney() {
+ input=$PROJECT_DIR/es/data_text_normalization/test_cases_money.txt
+ runtest $input
+}
+
+testTNOrdinal() {
+ input=$PROJECT_DIR/es/data_text_normalization/test_cases_ordinal.txt
+ runtest $input
+}
+
+testTNTelephone() {
+ input=$PROJECT_DIR/es/data_text_normalization/test_cases_ordinal.txt
+ runtest $input
+}
+
+testTNTime() {
+ input=$PROJECT_DIR/es/data_text_normalization/test_cases_time.txt
+ runtest $input
+}
+
+testTNMeasure() {
+ input=$PROJECT_DIR/es/data_text_normalization/test_cases_measure.txt
+ runtest $input
+}
+
+testTNWhitelist() {
+ input=$PROJECT_DIR/es/data_text_normalization/test_cases_whitelist.txt
+ runtest $input
+}
+
+testTNWord() {
+ input=$PROJECT_DIR/es/data_text_normalization/test_cases_word.txt
+ runtest $input
+}
+
+# Load shUnit2
+. $PROJECT_DIR/../shunit2/shunit2
diff --git a/tests/nemo_text_processing/es/test_telephone.py b/tests/nemo_text_processing/es/test_telephone.py
--- a/tests/nemo_text_processing/es/test_telephone.py
+++ b/tests/nemo_text_processing/es/test_telephone.py
@@ -36,3 +36,31 @@ class TestTelephone:
def test_denorm_es(self, test_input, expected):
pred = self.inverse_normalizer_es.inverse_normalize(test_input, verbose=False)
assert pred == expected
+
+ normalizer = (
+ Normalizer(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE
+ else None
+ )
+
+ normalizer_with_audio = (
+ NormalizerWithAudio(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE and CACHE_DIR
+ else None
+ )
+
+ @parameterized.expand(parse_test_case_file('es/data_text_normalization/test_cases_telephone.txt'))
+ @pytest.mark.skipif(
+ not PYNINI_AVAILABLE, reason="`pynini` not installed, please install via nemo_text_processing/setup.sh"
+ )
+ @pytest.mark.run_only_on('CPU')
+ @pytest.mark.unit
+ def test_norm(self, test_input, expected):
+ pred = self.normalizer.normalize(test_input, verbose=False)
+ assert pred == expected
+
+ if self.normalizer_with_audio:
+ pred_non_deterministic = self.normalizer_with_audio.normalize(
+ test_input, n_tagged=1000, punct_post_process=False
+ )
+ assert expected in pred_non_deterministic
diff --git a/tests/nemo_text_processing/es/test_time.py b/tests/nemo_text_processing/es/test_time.py
--- a/tests/nemo_text_processing/es/test_time.py
+++ b/tests/nemo_text_processing/es/test_time.py
@@ -35,3 +35,31 @@ class TestTime:
def test_denorm_es(self, test_input, expected):
pred = self.inverse_normalizer_es.inverse_normalize(test_input, verbose=False)
assert pred == expected
+
+ normalizer = (
+ Normalizer(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE
+ else None
+ )
+
+ normalizer_with_audio = (
+ NormalizerWithAudio(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE and CACHE_DIR
+ else None
+ )
+
+ @parameterized.expand(parse_test_case_file('es/data_text_normalization/test_cases_time.txt'))
+ @pytest.mark.skipif(
+ not PYNINI_AVAILABLE, reason="`pynini` not installed, please install via nemo_text_processing/setup.sh"
+ )
+ @pytest.mark.run_only_on('CPU')
+ @pytest.mark.unit
+ def test_norm(self, test_input, expected):
+ pred = self.normalizer.normalize(test_input, verbose=False)
+ assert pred == expected
+
+ if self.normalizer_with_audio:
+ pred_non_deterministic = self.normalizer_with_audio.normalize(
+ test_input, n_tagged=1000, punct_post_process=False
+ )
+ assert expected in pred_non_deterministic
diff --git a/tests/nemo_text_processing/es/test_whitelist.py b/tests/nemo_text_processing/es/test_whitelist.py
--- a/tests/nemo_text_processing/es/test_whitelist.py
+++ b/tests/nemo_text_processing/es/test_whitelist.py
@@ -35,3 +35,30 @@ class TestWhitelist:
def test_denorm_es(self, test_input, expected):
pred = self.inverse_normalizer_es.inverse_normalize(test_input, verbose=False)
assert pred == expected
+
+ normalizer = (
+ Normalizer(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE
+ else None
+ )
+ normalizer_with_audio = (
+ NormalizerWithAudio(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE and CACHE_DIR
+ else None
+ )
+
+ @parameterized.expand(parse_test_case_file('es/data_text_normalization/test_cases_whitelist.txt'))
+ @pytest.mark.skipif(
+ not PYNINI_AVAILABLE, reason="`pynini` not installed, please install via nemo_text_processing/setup.sh"
+ )
+ @pytest.mark.run_only_on('CPU')
+ @pytest.mark.unit
+ def test_norm(self, test_input, expected):
+ pred = self.normalizer.normalize(test_input, verbose=False)
+ assert pred == expected
+
+ if self.normalizer_with_audio:
+ pred_non_deterministic = self.normalizer_with_audio.normalize(
+ test_input, n_tagged=10, punct_post_process=False
+ )
+ assert expected in pred_non_deterministic
diff --git a/tests/nemo_text_processing/es/test_word.py b/tests/nemo_text_processing/es/test_word.py
--- a/tests/nemo_text_processing/es/test_word.py
+++ b/tests/nemo_text_processing/es/test_word.py
@@ -35,3 +35,30 @@ class TestWord:
def test_denorm_es(self, test_input, expected):
pred = self.inverse_normalizer_es.inverse_normalize(test_input, verbose=False)
assert pred == expected
+
+ normalizer_es = (
+ Normalizer(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE
+ else None
+ )
+ normalizer_with_audio_es = (
+ NormalizerWithAudio(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE and CACHE_DIR
+ else None
+ )
+
+ @parameterized.expand(parse_test_case_file('es/data_text_normalization/test_cases_word.txt'))
+ @pytest.mark.skipif(
+ not PYNINI_AVAILABLE, reason="`pynini` not installed, please install via nemo_text_processing/setup.sh"
+ )
+ @pytest.mark.run_only_on('CPU')
+ @pytest.mark.unit
+ def test_norm(self, test_input, expected):
+ pred = self.normalizer_es.normalize(test_input, verbose=False)
+ assert pred == expected, f"input: {test_input}"
+
+ if self.normalizer_with_audio_es:
+ pred_non_deterministic = self.normalizer_with_audio_es.normalize(
+ test_input, n_tagged=150, punct_post_process=False
+ )
+ assert expected in pred_non_deterministic, f"input: {test_input}"
|
1.0
| ||||
NVIDIA__NeMo-7582
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Ubuntu 22.04 Python 3.11 [asr]: multiple errors `dataclasses ValueError: mutable default * for field * is not allowed: use default_factory`
**Describe the bug**
After installing latest stable `1.19.1` from pipI, or the latest current commit with `[asr]` extras, I'm getting this error from `hydra-core==1.2.0` when trying to import `nemo.collections.asr`:
```
Traceback (most recent call last):
File "/home/alex/T7/src/speaker_verification/nemo_speaker/test.py", line 5, in <module>
import nemo.collections.asr as nemo_asr
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/__init__.py", line 15, in <module>
from nemo.collections.asr import data, losses, models, modules
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/losses/__init__.py", line 15, in <module>
from nemo.collections.asr.losses.angularloss import AngularSoftmaxLoss
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/losses/angularloss.py", line 18, in <module>
from nemo.core.classes import Loss, Typing, typecheck
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/core/__init__.py", line 16, in <module>
from nemo.core.classes import *
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/core/classes/__init__.py", line 16, in <module>
import hydra
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/__init__.py", line 5, in <module>
from hydra import utils
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/utils.py", line 8, in <module>
import hydra._internal.instantiate._instantiate2
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 12, in <module>
from hydra._internal.utils import _locate
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/_internal/utils.py", line 18, in <module>
from hydra.core.utils import get_valid_filename, validate_config_path
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/core/utils.py", line 20, in <module>
from hydra.core.hydra_config import HydraConfig
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/core/hydra_config.py", line 6, in <module>
from hydra.conf import HydraConf
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/conf/__init__.py", line 46, in <module>
class JobConf:
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/conf/__init__.py", line 75, in JobConf
@dataclass
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1230, in dataclass
return wrap(cls)
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1220, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 958, in _process_class
cls_fields.append(_get_field(cls, name, type, kw_only))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 815, in _get_field
raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'hydra.conf.JobConf.JobConfig.OverrideDirname'> for field override_dirname is not allowed: use default_factory
```
If I then manually upgrade `hydra-core` to the current latest (`1.3.2`), I get similar errors from `nemo-toolkit`:
```
Traceback (most recent call last):
File "/home/alex/T7/src/speaker_verification/nemo_speaker/test.py", line 5, in <module>
import nemo.collections.asr as nemo_asr
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/__init__.py", line 15, in <module>
from nemo.collections.asr import data, losses, models, modules
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/losses/__init__.py", line 16, in <module>
from nemo.collections.asr.losses.audio_losses import SDRLoss
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/losses/audio_losses.py", line 21, in <module>
from nemo.collections.asr.parts.preprocessing.features import make_seq_mask_like
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/__init__.py", line 16, in <module>
from nemo.collections.asr.parts.preprocessing.features import FeaturizerFactory, FilterbankFeatures, WaveformFeaturizer
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/features.py", line 44, in <module>
from nemo.collections.asr.parts.preprocessing.perturb import AudioAugmentor
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/perturb.py", line 50, in <module>
from nemo.collections.common.parts.preprocessing import collections, parsers
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/common/__init__.py", line 16, in <module>
from nemo.collections.common import data, losses, parts, tokenizers
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/common/parts/__init__.py", line 15, in <module>
from nemo.collections.common.parts.adapter_modules import LinearAdapter, LinearAdapterConfig
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/common/parts/adapter_modules.py", line 147, in <module>
@dataclass
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1230, in dataclass
return wrap(cls)
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1220, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 958, in _process_class
cls_fields.append(_get_field(cls, name, type, kw_only))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 815, in _get_field
raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'nemo.core.classes.mixins.adapter_mixin_strategies.ResidualAddAdapterStrategyConfig'> for field adapter_strategy is not allowed: use default_factory
```
It's easy to fix with a patch like this:
```
--- nemo/collections/common/parts/adapter_modules.py.orig 2023-08-04 13:55:53.464534800 +0200
+++ nemo/collections/common/parts/adapter_modules.py 2023-08-04 14:05:45.579537700 +0200
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass, is_dataclass
+from dataclasses import dataclass, is_dataclass, field
from typing import Any, Optional
from hydra.utils import instantiate
@@ -151,5 +151,5 @@ class LinearAdapterConfig:
activation: str = 'swish'
norm_position: str = 'pre'
dropout: float = 0.0
- adapter_strategy: Optional[Any] = adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
+ adapter_strategy: Optional[Any] = field(default_factory=adapter_mixin_strategies.ResidualAddAdapterStrategyConfig)
_target_: str = "{0}.{1}".format(LinearAdapter.__module__, LinearAdapter.__name__)
```
However then another error of the kind comes up:
```
Traceback (most recent call last):
File "/home/alex/T7/src/speaker_verification/nemo_speaker/test.py", line 5, in <module>
import nemo.collections.asr as nemo_asr
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/__init__.py", line 15, in <module>
from nemo.collections.asr import data, losses, models, modules
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/models/__init__.py", line 18, in <module>
from nemo.collections.asr.models.clustering_diarizer import ClusteringDiarizer
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/models/clustering_diarizer.py", line 29, in <module>
from nemo.collections.asr.metrics.der import score_labels
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/metrics/der.py", line 24, in <module>
from nemo.collections.asr.metrics.wer import word_error_rate
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/metrics/wer.py", line 27, in <module>
from nemo.collections.asr.parts.submodules import ctc_beam_decoding, ctc_greedy_decoding
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/parts/submodules/ctc_beam_decoding.py", line 593, in <module>
@dataclass
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1230, in dataclass
return wrap(cls)
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1220, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 958, in _process_class
cls_fields.append(_get_field(cls, name, type, kw_only))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 815, in _get_field
raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'nemo.collections.asr.parts.submodules.ctc_beam_decoding.FlashlightConfig'> for field flashlight_cfg is not allowed: use default_factory
```
This can also be fixed accordingly, but all in all, it appears these issues are pretty common in the code base.
Looks like NeMo isn't Python 3.11 ready, at least the `asr` collection.
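The pattern behind all of these fixes can be reduced to a few lines. The sketch below uses simplified stand-in class names (not the actual NeMo API) to show why Python 3.11 rejects the old style and how `default_factory` resolves it:

```python
from dataclasses import dataclass, field

# Simplified stand-ins for NeMo's config classes (names are illustrative,
# not the actual NeMo API).
@dataclass
class ResidualAddStrategyConfig:
    stochastic_depth: float = 0.0

@dataclass
class LinearAdapterConfig:
    in_features: int = 256
    # A plain instance default (`= ResidualAddStrategyConfig()`) is rejected
    # by dataclasses on Python 3.11+ because a non-frozen dataclass instance
    # is unhashable and therefore treated as a mutable default. On older
    # Pythons it silently shares ONE object across every LinearAdapterConfig.
    # default_factory builds a fresh object per instance instead:
    adapter_strategy: ResidualAddStrategyConfig = field(
        default_factory=ResidualAddStrategyConfig
    )

a = LinearAdapterConfig()
b = LinearAdapterConfig()
assert a.adapter_strategy is not b.adapter_strategy  # no shared state
```

So beyond satisfying the 3.11 check, `default_factory` also fixes a latent shared-mutable-state bug on older Python versions.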
**Steps/Code to reproduce bug**
With Python 3.11
1. Install `nemo-toolkit[asr]` either 1.19.1, or current HEAD from git (b498d438fc4c35ebf364a9a1c5cd3e29a2c0fe50)
2. Run `import nemo.collections.asr`
**Expected behavior**
Import `nemo.collections.asr` without errors
**Environment overview (please complete the following information)**
- Environment location: Bare-metal
- Method of NeMo install: `pip install nemo-toolkit==1.19.1` or from source b498d438fc4c35ebf364a9a1c5cd3e29a2c0fe50
**Environment details**
- Ubuntu 22.04
- PyTorch version 2.0.1
- Python version 3.11.4
</issue>
<code>
[start of README.rst]
1
2 |status| |documentation| |codeql| |license| |pypi| |pyversion| |downloads| |black|
3
4 .. |status| image:: http://www.repostatus.org/badges/latest/active.svg
5 :target: http://www.repostatus.org/#active
6 :alt: Project Status: Active โ The project has reached a stable, usable state and is being actively developed.
7
8 .. |documentation| image:: https://readthedocs.com/projects/nvidia-nemo/badge/?version=main
9 :alt: Documentation
10 :target: https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/
11
12 .. |license| image:: https://img.shields.io/badge/License-Apache%202.0-brightgreen.svg
13 :target: https://github.com/NVIDIA/NeMo/blob/master/LICENSE
14 :alt: NeMo core license and license for collections in this repo
15
16 .. |pypi| image:: https://badge.fury.io/py/nemo-toolkit.svg
17 :target: https://badge.fury.io/py/nemo-toolkit
18 :alt: Release version
19
20 .. |pyversion| image:: https://img.shields.io/pypi/pyversions/nemo-toolkit.svg
21 :target: https://badge.fury.io/py/nemo-toolkit
22 :alt: Python version
23
24 .. |downloads| image:: https://static.pepy.tech/personalized-badge/nemo-toolkit?period=total&units=international_system&left_color=grey&right_color=brightgreen&left_text=downloads
25 :target: https://pepy.tech/project/nemo-toolkit
26 :alt: PyPi total downloads
27
28 .. |codeql| image:: https://github.com/nvidia/nemo/actions/workflows/codeql.yml/badge.svg?branch=main&event=push
29 :target: https://github.com/nvidia/nemo/actions/workflows/codeql.yml
30 :alt: CodeQL
31
32 .. |black| image:: https://img.shields.io/badge/code%20style-black-000000.svg
33 :target: https://github.com/psf/black
34 :alt: Code style: black
35
36 .. _main-readme:
37
38 **NVIDIA NeMo**
39 ===============
40
41 Introduction
42 ------------
43
44 NVIDIA NeMo is a conversational AI toolkit built for researchers working on automatic speech recognition (ASR),
45 text-to-speech synthesis (TTS), large language models (LLMs), and
46 natural language processing (NLP).
47 The primary objective of NeMo is to help researchers from industry and academia to reuse prior work (code and pretrained models)
48 and make it easier to create new `conversational AI models <https://developer.nvidia.com/conversational-ai#started>`_.
49
50 All NeMo models are trained with `Lightning <https://github.com/Lightning-AI/lightning>`_ and
51 training is automatically scalable to 1000s of GPUs.
52 Additionally, NeMo Megatron LLM models can be trained up to 1 trillion parameters using tensor and pipeline model parallelism.
53 NeMo models can be optimized for inference and deployed for production use-cases with `NVIDIA Riva <https://developer.nvidia.com/riva>`_.
54
55 Getting started with NeMo is simple.
56 State of the Art pretrained NeMo models are freely available on `HuggingFace Hub <https://huggingface.co/models?library=nemo&sort=downloads&search=nvidia>`_ and
57 `NVIDIA NGC <https://catalog.ngc.nvidia.com/models?query=nemo&orderBy=weightPopularDESC>`_.
58 These models can be used to transcribe audio, synthesize speech, or translate text in just a few lines of code.
59
60 We have extensive `tutorials <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/starthere/tutorials.html>`_ that
61 can all be run on `Google Colab <https://colab.research.google.com>`_.
62
63 For advanced users that want to train NeMo models from scratch or finetune existing NeMo models
64 we have a full suite of `example scripts <https://github.com/NVIDIA/NeMo/tree/main/examples>`_ that support multi-GPU/multi-node training.
65
66 For scaling NeMo LLM training on Slurm clusters or public clouds, please see the `NVIDIA NeMo Megatron Launcher <https://github.com/NVIDIA/NeMo-Megatron-Launcher>`_.
67 The NM launcher has extensive recipes, scripts, utilities, and documentation for training NeMo LLMs and also has an `Autoconfigurator <https://github.com/NVIDIA/NeMo-Megatron-Launcher#53-using-autoconfigurator-to-find-the-optimal-configuration>`_
68 which can be used to find the optimal model parallel configuration for training on a specific cluster.
69
70 Also see our `introductory video <https://www.youtube.com/embed/wBgpMf_KQVw>`_ for a high level overview of NeMo.
71
72 Key Features
73 ------------
74
75 * Speech processing
76 * `HuggingFace Space for Audio Transcription (File, Microphone and YouTube) <https://huggingface.co/spaces/smajumdar/nemo_multilingual_language_id>`_
77 * `Automatic Speech Recognition (ASR) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/intro.html>`_
78 * Supported ASR models: `<https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/asr/models.html>`_
79 * Jasper, QuartzNet, CitriNet, ContextNet
80 * Conformer-CTC, Conformer-Transducer, FastConformer-CTC, FastConformer-Transducer
81 * Squeezeformer-CTC and Squeezeformer-Transducer
82 * LSTM-Transducer (RNNT) and LSTM-CTC
83 * Supports the following decoders/losses:
84 * CTC
85 * Transducer/RNNT
86 * Hybrid Transducer/CTC
87 * NeMo Original `Multi-blank Transducers <https://arxiv.org/abs/2211.03541>`_ and `Token-and-Duration Transducers (TDT) <https://arxiv.org/abs/2304.06795>`_
88 * Streaming/Buffered ASR (CTC/Transducer) - `Chunked Inference Examples <https://github.com/NVIDIA/NeMo/tree/stable/examples/asr/asr_chunked_inference>`_
89 * Cache-aware Streaming Conformer with multiple lookaheads - `<https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/asr/models.html#cache-aware-streaming-conformer>`_
90 * Beam Search decoding
91 * `Language Modelling for ASR (CTC and RNNT) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/asr_language_modeling.html>`_: N-gram LM in fusion with Beam Search decoding, Neural Rescoring with Transformer
92 * `Support of long audios for Conformer with memory efficient local attention <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/results.html#inference-on-long-audio>`_
93 * `Speech Classification, Speech Command Recognition and Language Identification <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/speech_classification/intro.html>`_: MatchboxNet (Command Recognition), AmberNet (LangID)
94 * `Voice activity Detection (VAD) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/asr/speech_classification/models.html#marblenet-vad>`_: MarbleNet
95 * ASR with VAD Inference - `Example <https://github.com/NVIDIA/NeMo/tree/stable/examples/asr/asr_vad>`_
96 * `Speaker Recognition <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/speaker_recognition/intro.html>`_: TitaNet, ECAPA_TDNN, SpeakerNet
97 * `Speaker Diarization <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/speaker_diarization/intro.html>`_
98 * Clustering Diarizer: TitaNet, ECAPA_TDNN, SpeakerNet
99 * Neural Diarizer: MSDD (Multi-scale Diarization Decoder)
100 * `Speech Intent Detection and Slot Filling <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/speech_intent_slot/intro.html>`_: Conformer-Transformer
101 * `Pretrained models on different languages. <https://ngc.nvidia.com/catalog/collections/nvidia:nemo_asr>`_: English, Spanish, German, Russian, Chinese, French, Italian, Polish, ...
102 * `NGC collection of pre-trained speech processing models. <https://ngc.nvidia.com/catalog/collections/nvidia:nemo_asr>`_
103 * Natural Language Processing
104 * `NeMo Megatron pre-training of Large Language Models <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/nlp/nemo_megatron/intro.html>`_
105 * `Neural Machine Translation (NMT) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/machine_translation/machine_translation.html>`_
106 * `Punctuation and Capitalization <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/punctuation_and_capitalization.html>`_
107 * `Token classification (named entity recognition) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/token_classification.html>`_
108 * `Text classification <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/text_classification.html>`_
109 * `Joint Intent and Slot Classification <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/joint_intent_slot.html>`_
110 * `Question answering <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/question_answering.html>`_
111 * `GLUE benchmark <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/glue_benchmark.html>`_
112 * `Information retrieval <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/information_retrieval.html>`_
113 * `Entity Linking <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/entity_linking.html>`_
114 * `Dialogue State Tracking <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/sgd_qa.html>`_
115 * `Prompt Learning <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/nemo_megatron/prompt_learning.html>`_
116 * `NGC collection of pre-trained NLP models. <https://ngc.nvidia.com/catalog/collections/nvidia:nemo_nlp>`_
117 * `Synthetic Tabular Data Generation <https://developer.nvidia.com/blog/generating-synthetic-data-with-transformers-a-solution-for-enterprise-data-challenges/>`_
118 * Text-to-Speech Synthesis (TTS):
119 * `Documentation <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/tts/intro.html#>`_
120 * Mel-Spectrogram generators: FastPitch, SSL FastPitch, Mixer-TTS/Mixer-TTS-X, RAD-TTS, Tacotron2
121 * Vocoders: HiFiGAN, UnivNet, WaveGlow
122 * End-to-End Models: VITS
123 * `Pre-trained Model Checkpoints in NVIDIA GPU Cloud (NGC) <https://ngc.nvidia.com/catalog/collections/nvidia:nemo_tts>`_
124 * `Tools <https://github.com/NVIDIA/NeMo/tree/stable/tools>`_
125 * `Text Processing (text normalization and inverse text normalization) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/text_normalization/intro.html>`_
126 * `NeMo Forced Aligner <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/tools/nemo_forced_aligner.html>`_
127 * `CTC-Segmentation tool <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/tools/ctc_segmentation.html>`_
128 * `Speech Data Explorer <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/tools/speech_data_explorer.html>`_: a dash-based tool for interactive exploration of ASR/TTS datasets
129 * `Speech Data Processor <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/tools/speech_data_processor.html>`_
130
131
132 Built for speed, NeMo can utilize NVIDIA's Tensor Cores and scale out training to multiple GPUs and multiple nodes.
133
134 Requirements
135 ------------
136
137 1) Python 3.10 or above
138 2) Pytorch 1.13.1 or above
139 3) NVIDIA GPU for training
140
141 Documentation
142 -------------
143
144 .. |main| image:: https://readthedocs.com/projects/nvidia-nemo/badge/?version=main
145 :alt: Documentation Status
146 :scale: 100%
147 :target: https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/
148
149 .. |stable| image:: https://readthedocs.com/projects/nvidia-nemo/badge/?version=stable
150 :alt: Documentation Status
151 :scale: 100%
152 :target: https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/
153
154 +---------+-------------+------------------------------------------------------------------------------------------------------------------------------------------+
155 | Version | Status | Description |
156 +=========+=============+==========================================================================================================================================+
157 | Latest | |main| | `Documentation of the latest (i.e. main) branch. <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/>`_ |
158 +---------+-------------+------------------------------------------------------------------------------------------------------------------------------------------+
159 | Stable | |stable| | `Documentation of the stable (i.e. most recent release) branch. <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/>`_ |
160 +---------+-------------+------------------------------------------------------------------------------------------------------------------------------------------+
161
162 Tutorials
163 ---------
164 A great way to start with NeMo is by checking `one of our tutorials <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/starthere/tutorials.html>`_.
165
166 Getting help with NeMo
167 ----------------------
168 FAQ can be found on NeMo's `Discussions board <https://github.com/NVIDIA/NeMo/discussions>`_. You are welcome to ask questions or start discussions there.
169
170
171 Installation
172 ------------
173
174 Conda
175 ~~~~~
176
177 We recommend installing NeMo in a fresh Conda environment.
178
179 .. code-block:: bash
180
181 conda create --name nemo python==3.10.12
182 conda activate nemo
183
184 Install PyTorch using their `configurator <https://pytorch.org/get-started/locally/>`_.
185
186 .. code-block:: bash
187
188 conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia
189
190 The command used to install PyTorch may depend on your system. Please use the configurator linked above to find the right command for your system.
191
192 Pip
193 ~~~
194 Use this installation mode if you want the latest released version.
195
196 .. code-block:: bash
197
198 apt-get update && apt-get install -y libsndfile1 ffmpeg
199 pip install Cython
200 pip install nemo_toolkit['all']
201
202 Depending on the shell used, you may need to use ``"nemo_toolkit[all]"`` instead in the above command.
203
204 Pip from source
205 ~~~~~~~~~~~~~~~
206 Use this installation mode if you want the version from a particular GitHub branch (e.g main).
207
208 .. code-block:: bash
209
210 apt-get update && apt-get install -y libsndfile1 ffmpeg
211 pip install Cython
212 python -m pip install git+https://github.com/NVIDIA/NeMo.git@{BRANCH}#egg=nemo_toolkit[all]
213
214
215 From source
216 ~~~~~~~~~~~
217 Use this installation mode if you are contributing to NeMo.
218
219 .. code-block:: bash
220
221 apt-get update && apt-get install -y libsndfile1 ffmpeg
222 git clone https://github.com/NVIDIA/NeMo
223 cd NeMo
224 ./reinstall.sh
225
226 If you only want the toolkit without additional conda-based dependencies, you may replace ``reinstall.sh``
227 with ``pip install -e .`` when your PWD is the root of the NeMo repository.
228
229 RNNT
230 ~~~~
231 Note that RNNT requires numba to be installed from conda.
232
233 .. code-block:: bash
234
235 conda remove numba
236 pip uninstall numba
237 conda install -c conda-forge numba
238
239 NeMo Megatron
240 ~~~~~~~~~~~~~
241 NeMo Megatron training requires NVIDIA Apex to be installed.
242 Install it manually if not using the NVIDIA PyTorch container.
243
244 To install Apex, run
245
246 .. code-block:: bash
247
248 git clone https://github.com/NVIDIA/apex.git
249 cd apex
250 git checkout 52e18c894223800cb611682dce27d88050edf1de
251 pip install -v --no-build-isolation --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" --global-option="--fast_layer_norm" --global-option="--distributed_adam" --global-option="--deprecated_fused_adam" ./
252
253 It is highly recommended to use the NVIDIA PyTorch or NeMo container if you run into issues installing Apex or any other dependencies.
254
255 While installing Apex, it may raise an error if the CUDA version on your system does not match the CUDA version torch was compiled with.
256 This check can be avoided by commenting out the raise here: https://github.com/NVIDIA/apex/blob/master/setup.py#L32
257
258 cuda-nvprof is needed to install Apex. The version should match the CUDA version that you are using:
259
260 .. code-block:: bash
261
262 conda install -c nvidia cuda-nvprof=11.8
263
264 packaging is also needed:
265
266 .. code-block:: bash
267
268 pip install packaging
269
270 With the latest versions of Apex, the `pyproject.toml` file in Apex may need to be deleted in order to install locally.
271
272
273 Transformer Engine
274 ~~~~~~~~~~~~~~~~~~
275 NeMo Megatron GPT has been integrated with `NVIDIA Transformer Engine <https://github.com/NVIDIA/TransformerEngine>`_
276 Transformer Engine enables FP8 training on NVIDIA Hopper GPUs.
277 `Install <https://docs.nvidia.com/deeplearning/transformer-engine/user-guide/installation.html>`_ it manually if not using the NVIDIA PyTorch container.
278
279 .. code-block:: bash
280
281 pip install --upgrade git+https://github.com/NVIDIA/TransformerEngine.git@stable
282
283 It is highly recommended to use the NVIDIA PyTorch or NeMo container if you run into issues installing Transformer Engine or any other dependencies.
284
285 Transformer Engine requires PyTorch to be built with CUDA 11.8.
286
287
288 Flash Attention
289 ~~~~~~~~~~~~~~~~~~~~
290 Transformer Engine already supports Flash Attention for GPT models. If you want to use Flash Attention for non-causal models or use it with an attention bias (introduced by position encoding, e.g. Alibi), please install `flash-attn <https://github.com/HazyResearch/flash-attention>`_.
291
292 .. code-block:: bash
293
294 pip install flash-attn
295 pip install triton==2.0.0.dev20221202
296
297 NLP inference UI
298 ~~~~~~~~~~~~~~~~~~~~
299 To launch the inference web UI server, please install `gradio <https://gradio.app/>`_.
300
301 .. code-block:: bash
302
303 pip install gradio==3.34.0
304
305 NeMo Text Processing
306 ~~~~~~~~~~~~~~~~~~~~
307 NeMo Text Processing, specifically (Inverse) Text Normalization, is now a separate repository: `NVIDIA/NeMo-text-processing <https://github.com/NVIDIA/NeMo-text-processing>`_.
308
309 Docker containers:
310 ~~~~~~~~~~~~~~~~~~
311 We release NeMo containers alongside NeMo releases. For example, NeMo ``r1.20.0`` comes with container ``nemo:23.06``; you may find more details about released containers on the `releases page <https://github.com/NVIDIA/NeMo/releases>`_.
312
313 To use a pre-built container, please run
314
315 .. code-block:: bash
316
317 docker pull nvcr.io/nvidia/nemo:23.06
318
319 To build a NeMo container with the Dockerfile from a branch, please run
320
321 .. code-block:: bash
322
323 DOCKER_BUILDKIT=1 docker build -f Dockerfile -t nemo:latest .
324
325
326 If you choose to work with the main branch, we recommend using NVIDIA's PyTorch container version 23.06-py3 and then installing NeMo from GitHub.
327
328 .. code-block:: bash
329
330 docker run --gpus all -it --rm -v <nemo_github_folder>:/NeMo --shm-size=8g \
331 -p 8888:8888 -p 6006:6006 --ulimit memlock=-1 --ulimit \
332 stack=67108864 --device=/dev/snd nvcr.io/nvidia/pytorch:23.06-py3
333
334 Examples
335 --------
336
337 Many examples can be found under the `"Examples" <https://github.com/NVIDIA/NeMo/tree/stable/examples>`_ folder.
338
339
340 Contributing
341 ------------
342
343 We welcome community contributions! Please refer to `CONTRIBUTING.md <https://github.com/NVIDIA/NeMo/blob/stable/CONTRIBUTING.md>`_ for the process.
344
345 Publications
346 ------------
347
348 We provide an ever-growing list of publications that utilize the NeMo framework. Please refer to `PUBLICATIONS.md <https://github.com/NVIDIA/NeMo/tree/stable/PUBLICATIONS.md>`_. We welcome the addition of your own articles to this list!
349
350 License
351 -------
352 NeMo is under `Apache 2.0 license <https://github.com/NVIDIA/NeMo/blob/stable/LICENSE>`_.
353
[end of README.rst]
[start of examples/asr/experimental/k2/align_speech_parallel.py]
1 # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """
16 # Based on examples/asr/transcribe_speech_parallel.py
17 # ASR alignment with multi-GPU/multi-node support for large datasets
18 # It supports both tarred and non-tarred datasets
19 # Arguments
20 # model: path to a nemo/PTL checkpoint file or name of a pretrained model
21 # predict_ds: config of the dataset/dataloader
22 # aligner_args: aligner config
23 # output_path: path to store the predictions
24 # model_stride: model downsampling factor, 8 for Citrinet models and 4 for Conformer models
25 #
26 # Results of each GPU/worker are written into a file named 'predictions_{rank}.json', and aggregated results of all workers are written into 'predictions_all.json'
27
28 Example for non-tarred datasets:
29
30 python align_speech_parallel.py \
31 model=stt_en_conformer_ctc_large \
32 predict_ds.manifest_filepath=/dataset/manifest_file.json \
33 predict_ds.batch_size=16 \
34 output_path=/tmp/
35
36 Example for tarred datasets:
37
38 python align_speech_parallel.py \
39 predict_ds.is_tarred=true \
40 predict_ds.manifest_filepath=/tarred_dataset/tarred_audio_manifest.json \
41 predict_ds.tarred_audio_filepaths=/tarred_dataset/audio__OP_0..127_CL_.tar \
42 ...
43
44 By default, the trainer uses all available GPUs, and the default precision is FP32.
45 You may control these settings via the trainer config. For example, to run the predictions with AMP on just two GPUs:
46
47 python align_speech_parallel.py \
48 trainer.precision=16 \
49 trainer.gpus=2 \
50 ...
51
52 You may control the dataloader's config by setting the predict_ds:
53
54 python align_speech_parallel.py \
55 predict_ds.num_workers=8 \
56 predict_ds.min_duration=2.0 \
57 predict_ds.sample_rate=16000 \
58 model=stt_en_conformer_ctc_small \
59 ...
60
61 You may control the aligner's config by setting the aligner_args:
62 aligner_args.alignment_type=argmax \
63 aligner_args.word_output=False \
64 aligner_args.cpu_decoding=True \
65 aligner_args.decode_batch_size=8 \
66 aligner_args.ctc_cfg.prob_suppress_index=-1 \
67 aligner_args.ctc_cfg.prob_suppress_value=0.5 \
68 aligner_args.rnnt_cfg.predictor_window_size=10 \
69 aligner_args.decoder_module_cfg.intersect_pruned=true \
70 aligner_args.decoder_module_cfg.intersect_conf.search_beam=40 \
71 ...
72
73 """
74
75
76 import os
77 from dataclasses import dataclass, is_dataclass
78 from typing import Optional
79
80 import pytorch_lightning as ptl
81 import torch
82 from omegaconf import MISSING, OmegaConf
83
84 from nemo.collections.asr.data.audio_to_ctm_dataset import ASRCTMPredictionWriter
85 from nemo.collections.asr.models import ASRModel
86 from nemo.collections.asr.models.configs.aligner_config import K2AlignerWrapperModelConfig
87 from nemo.collections.asr.models.configs.asr_models_config import ASRDatasetConfig
88 from nemo.collections.asr.models.k2_aligner_model import AlignerWrapperModel
89 from nemo.core.config import TrainerConfig, hydra_runner
90 from nemo.utils import logging
91 from nemo.utils.get_rank import is_global_rank_zero
92
93
94 @dataclass
95 class ParallelAlignmentConfig:
96 model: Optional[str] = None # name
97 predict_ds: ASRDatasetConfig = ASRDatasetConfig(return_sample_id=True, num_workers=4)
98 aligner_args: K2AlignerWrapperModelConfig = K2AlignerWrapperModelConfig()
99 output_path: str = MISSING
100 model_stride: int = 8
101
102 trainer: TrainerConfig = TrainerConfig(gpus=-1, accelerator="ddp")
103
104 # these arguments will be ignored
105 return_predictions: bool = False
106 use_cer: bool = False
107
108
109 def match_train_config(predict_ds, train_ds):
110 # Copies the important configurations from the model's train dataset config
111 # into predict_ds so that prediction matches the training configuration.
112 if train_ds is None:
113 return
114
115 predict_ds.sample_rate = train_ds.get("sample_rate", 16000)
116 cfg_name_list = [
117 "int_values",
118 "use_start_end_token",
119 "blank_index",
120 "unk_index",
121 "normalize",
122 "parser",
123 "eos_id",
124 "bos_id",
125 "pad_id",
126 ]
127
128 if is_dataclass(predict_ds):
129 predict_ds = OmegaConf.structured(predict_ds)
130 for cfg_name in cfg_name_list:
131 if hasattr(train_ds, cfg_name):
132 setattr(predict_ds, cfg_name, getattr(train_ds, cfg_name))
133
134 return predict_ds
135
136
137 @hydra_runner(config_name="AlignmentConfig", schema=ParallelAlignmentConfig)
138 def main(cfg: ParallelAlignmentConfig):
139 if cfg.model.endswith(".nemo"):
140 logging.info("Attempting to initialize from .nemo file")
141 model = ASRModel.restore_from(restore_path=cfg.model, map_location="cpu")
142 elif cfg.model.endswith(".ckpt"):
143 logging.info("Attempting to initialize from .ckpt file")
144 model = ASRModel.load_from_checkpoint(checkpoint_path=cfg.model, map_location="cpu")
145 else:
146 logging.info(
147 "Attempting to initialize from a pretrained model as the model name does not have the extension of .nemo or .ckpt"
148 )
149 model = ASRModel.from_pretrained(model_name=cfg.model, map_location="cpu")
150
151 trainer = ptl.Trainer(**cfg.trainer)
152
153 cfg.predict_ds.return_sample_id = True
154 cfg.return_predictions = False
155 cfg.use_cer = False
156 cfg.predict_ds = match_train_config(predict_ds=cfg.predict_ds, train_ds=model._cfg.train_ds)
157 data_loader = model._setup_dataloader_from_config(cfg.predict_ds)
158
159 os.makedirs(cfg.output_path, exist_ok=True)
160 # trainer.global_rank is not valid before predict() is called. Need this hack to find the correct global_rank.
161 global_rank = trainer.node_rank * trainer.num_devices + int(os.environ.get("LOCAL_RANK", 0))
162 output_file = os.path.join(cfg.output_path, f"predictions_{global_rank}.json")
163 output_ctm_dir = os.path.join(cfg.output_path, "ctm")
164 predictor_writer = ASRCTMPredictionWriter(
165 dataset=data_loader.dataset,
166 output_file=output_file,
167 output_ctm_dir=output_ctm_dir,
168 time_per_frame=cfg.model_stride * model._cfg.preprocessor['window_stride'],
169 )
170 trainer.callbacks.extend([predictor_writer])
171
172 aligner_wrapper = AlignerWrapperModel(model=model, cfg=cfg.aligner_args)
173 trainer.predict(model=aligner_wrapper, dataloaders=data_loader, return_predictions=cfg.return_predictions)
174 samples_num = predictor_writer.close_output_file()
175
176 logging.info(
177 f"Prediction on rank {global_rank} is done for {samples_num} samples and results are stored in {output_file}."
178 )
179
180 if torch.distributed.is_initialized():
181 torch.distributed.barrier()
182
183 samples_num = 0
184 if is_global_rank_zero():
185 output_file = os.path.join(cfg.output_path, "predictions_all.json")
186 logging.info(f"Prediction files are being aggregated in {output_file}.")
187 with open(output_file, 'tw', encoding="utf-8") as outf:
188 for rank in range(trainer.world_size):
189 input_file = os.path.join(cfg.output_path, f"predictions_{rank}.json")
190 with open(input_file, 'r', encoding="utf-8") as inpf:
191 lines = inpf.readlines()
192 samples_num += len(lines)
193 outf.writelines(lines)
194 logging.info(
195 f"Prediction is done for {samples_num} samples in total on all workers and results are aggregated in {output_file}."
196 )
197
198
199 if __name__ == '__main__':
200 main()
201
[end of examples/asr/experimental/k2/align_speech_parallel.py]
[start of nemo/collections/asr/metrics/rnnt_wer.py]
1 # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import copy
16 import re
17 from abc import abstractmethod
18 from dataclasses import dataclass, is_dataclass
19 from typing import Callable, Dict, List, Optional, Tuple, Union
20
21 import editdistance
22 import numpy as np
23 import torch
24 from omegaconf import OmegaConf
25 from torchmetrics import Metric
26
27 from nemo.collections.asr.metrics.wer import move_dimension_to_the_front
28 from nemo.collections.asr.parts.submodules import rnnt_beam_decoding as beam_decode
29 from nemo.collections.asr.parts.submodules import rnnt_greedy_decoding as greedy_decode
30 from nemo.collections.asr.parts.utils.asr_confidence_utils import ConfidenceConfig, ConfidenceMixin
31 from nemo.collections.asr.parts.utils.rnnt_utils import Hypothesis, NBestHypotheses
32 from nemo.utils import logging
33
34 __all__ = ['RNNTDecoding', 'RNNTWER']
35
36
37 class AbstractRNNTDecoding(ConfidenceMixin):
38 """
39 Used for performing RNN-T auto-regressive decoding of the Decoder+Joint network given the encoder state.
40
41 Args:
42 decoding_cfg: A dict-like object which contains the following key-value pairs.
43 strategy: str value which represents the type of decoding that can occur.
44 Possible values are:
45 - greedy, greedy_batch (for greedy decoding).
46 - beam, tsd, alsd (for beam search decoding).
47
48 compute_hypothesis_token_set: A bool flag, which determines whether to compute a list of decoded
49 tokens as well as the decoded string. Default is False in order to avoid double decoding
50 unless required.
51
52 preserve_alignments: Bool flag which preserves the history of logprobs generated during
53 decoding (sample / batched). When set to true, the Hypothesis will contain
54 the non-null value for `alignments` in it. Here, `alignments` is a List of List of
55 Tuple(Tensor (of length V + 1), Tensor(scalar, label after argmax)).
56
57 In order to obtain this hypothesis, please utilize `rnnt_decoder_predictions_tensor` function
58 with the `return_hypotheses` flag set to True.
59
60 The length of the list corresponds to the Acoustic Length (T).
61 Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more targets from a vocabulary.
62 U is the number of target tokens for the current timestep Ti.
63
64 compute_timestamps: A bool flag, which determines whether to compute the character/subword, or
65 word based timestamp mapping the output log-probabilities to discrete intervals of timestamps.
66 The timestamps will be available in the returned Hypothesis.timestep as a dictionary.
67
68 rnnt_timestamp_type: A str value, which represents the types of timestamps that should be calculated.
69 Can take the following values - "char" for character/subword time stamps, "word" for word level
70 time stamps and "all" (default), for both character level and word level time stamps.
71
72 word_seperator: Str token representing the separator between words.
73
74 preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores
75 generated during decoding (sample / batched). When set to true, the Hypothesis will contain
76 the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of List of floats.
77
78 confidence_cfg: A dict-like object which contains the following key-value pairs related to confidence
79 scores. In order to obtain hypotheses with confidence scores, please utilize
80 `rnnt_decoder_predictions_tensor` function with the `preserve_frame_confidence` flag set to True.
81
82 preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores
83 generated during decoding (sample / batched). When set to true, the Hypothesis will contain
84 the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of List of floats.
85
86 The length of the list corresponds to the Acoustic Length (T).
87 Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more confidence scores.
88 U is the number of target tokens for the current timestep Ti.
89 preserve_token_confidence: Bool flag which preserves the history of per-token confidence scores
90 generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
91 the non-null value for `token_confidence` in it. Here, `token_confidence` is a List of floats.
92
93 The length of the list corresponds to the number of recognized tokens.
94 preserve_word_confidence: Bool flag which preserves the history of per-word confidence scores
95 generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
96 the non-null value for `word_confidence` in it. Here, `word_confidence` is a List of floats.
97
98 The length of the list corresponds to the number of recognized words.
99 exclude_blank: Bool flag indicating that blank token confidence scores are to be excluded
100 from the `token_confidence`.
101 aggregation: Which aggregation type to use for collapsing per-token confidence into per-word confidence.
102 Valid options are `mean`, `min`, `max`, `prod`.
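The four aggregation options above can be sketched as a small helper; the function name and signature below are hypothetical, for illustration only, and are not NeMo's actual API:

```python
from functools import reduce

def aggregate_word_confidence(token_conf, how="prod"):
    """Collapse per-token confidence scores into one per-word score.

    Hypothetical helper illustrating the `aggregation` options
    (`mean`, `min`, `max`, `prod`); not the NeMo implementation.
    """
    if how == "mean":
        return sum(token_conf) / len(token_conf)
    if how == "min":
        return min(token_conf)
    if how == "max":
        return max(token_conf)
    if how == "prod":
        # Product of per-token scores; penalizes any low-confidence token.
        return reduce(lambda a, b: a * b, token_conf, 1.0)
    raise ValueError(f"unknown aggregation: {how}")
```

Note that `prod` is the strictest choice: a single low-confidence token drags the whole word's score down, while `max` is the most lenient.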
103 measure_cfg: A dict-like object which contains the measure name and settings to compute per-frame
104 confidence scores.
105
106 name: The measure name (str).
107 Supported values:
108 - 'max_prob' for using the maximum token probability as a confidence.
109 - 'entropy' for using a normalized entropy of a log-likelihood vector.
110
111 entropy_type: Which type of entropy to use (str).
112 Used if confidence_measure_cfg.name is set to `entropy`.
113 Supported values:
114 - 'gibbs' for the (standard) Gibbs entropy. If the alpha (α) is provided,
115 the formula is the following: H_α = -sum_i((p^α_i)*log(p^α_i)).
116 Note that for this entropy, the alpha should comply with the following inequality:
117 (log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= α <= (1+log(V-1))/log(V-1)
118 where V is the model vocabulary size.
119 - 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
120 Tsallis entropy formula is the following: H_α = 1/(α-1)*(1-sum_i(p^α_i)),
121 where α is a parameter. When α == 1, it works like the Gibbs entropy.
122 More: https://en.wikipedia.org/wiki/Tsallis_entropy
123 - 'renyi' for the Rényi entropy.
124 Rényi entropy formula is the following: H_α = 1/(1-α)*log_2(sum_i(p^α_i)),
125 where α is a parameter. When α == 1, it works like the Gibbs entropy.
126 More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
127
128 alpha: Power scale for logsoftmax (α for entropies). Here we restrict it to be > 0.
129 When the alpha equals one, scaling is not applied to 'max_prob',
130 and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
131
132 entropy_norm: A mapping of the entropy value to the interval [0,1].
133 Supported values:
134 - 'lin' for using the linear mapping.
135 - 'exp' for using exponential mapping with linear shift.
136
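A worked sketch of the `entropy` measure for the `gibbs` type with alpha equal to one and `entropy_norm` set to `lin`; the helper name and the exact normalization by log(V) are illustrative assumptions, not the NeMo implementation:

```python
import math

def gibbs_entropy_confidence(probs):
    """Shannon/Gibbs (alpha == 1) entropy confidence, linearly mapped to [0, 1].

    H = -sum_i(p_i * log(p_i)) is divided by its maximum value log(V) and
    flipped, so a one-hot distribution gives 1 and a uniform one gives 0.
    Illustrative sketch only.
    """
    v = len(probs)  # vocabulary size V
    entropy = -sum(p * math.log(p) for p in probs if p > 0.0)
    return 1.0 - entropy / math.log(v)
```

A uniform distribution over the vocabulary yields confidence 0 (maximal uncertainty), while a one-hot distribution yields confidence 1.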
137 The config may further contain the following sub-dictionaries:
138 "greedy":
139 max_symbols: int, describing the maximum number of target tokens to decode per
140 timestep during greedy decoding. Setting to larger values allows longer sentences
141 to be decoded, at the cost of increased execution time.
142 preserve_frame_confidence: Same as above, overrides above value.
143 confidence_measure_cfg: Same as above, overrides confidence_cfg.measure_cfg.
144
145 "beam":
146 beam_size: int, defining the beam size for beam search. Must be >= 1.
147 If beam_size == 1, a cached greedy search will be performed. This might give slightly
148 different results compared to the greedy search above.
149
150 score_norm: optional bool, whether to normalize the returned beam score in the hypotheses.
151 Set to True by default.
152
153 return_best_hypothesis: optional bool, whether to return just the best hypothesis or all of the
154 hypotheses after beam search has concluded. This flag is set by default.
155
156 tsd_max_sym_exp: optional int, determines number of symmetric expansions of the target symbols
157 per timestep of the acoustic model. Larger values will allow longer sentences to be decoded,
158 at increased cost to execution time.
159
160 alsd_max_target_len: optional int or float, determines the potential maximum target sequence length.
161 If an integer is provided, it can decode sequences of that particular maximum length.
162 If a float is provided, it can decode sequences of int(alsd_max_target_len * seq_len),
163 where seq_len is the length of the acoustic model output (T).
164
165 NOTE:
166 If a float is provided, it can be greater than 1!
167 By default, a float of 2.0 is used so that a target sequence can be at most twice
168 as long as the acoustic model output length T.
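The int-versus-float behaviour described above amounts to roughly the following (the helper name is hypothetical, used only to illustrate the rule):

```python
def resolve_alsd_max_target_len(alsd_max_target_len, seq_len):
    # int -> absolute cap on the decoded target length;
    # float -> multiple of the acoustic model output length T (may exceed 1.0).
    if isinstance(alsd_max_target_len, float):
        return int(alsd_max_target_len * seq_len)
    return int(alsd_max_target_len)
```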
169
170 maes_num_steps: Number of adaptive steps to take. From the paper, 2 steps is generally sufficient,
171 and can be reduced to 1 to improve decoding speed while sacrificing some accuracy. int > 0.
172
173 maes_prefix_alpha: Maximum prefix length in prefix search. Must be an integer; it is advised to keep this at 1
174 in order to reduce the cost of the expensive beam search later. int >= 0.
175
176 maes_expansion_beta: Maximum number of prefix expansions allowed, in addition to the beam size.
177 Effectively, the number of hypothesis = beam_size + maes_expansion_beta. Must be an int >= 0,
178 and affects the speed of inference since large values will perform large beam search in the next step.
179
180 maes_expansion_gamma: Float pruning threshold used in the prune-by-value step when computing the expansions.
181 The default (2.3) is selected from the paper. It performs a comparison (max_log_prob - gamma <= log_prob[v])
182 where v is all vocabulary indices in the Vocab set and max_log_prob is the "most" likely token to be
183 predicted. Gamma therefore provides a margin of additional tokens which can be potential candidates for
184 expansion apart from the "most likely" candidate.
185 Lower values will reduce the number of expansions (by increasing pruning-by-value, thereby improving speed
186 but hurting accuracy). Higher values will increase the number of expansions (by reducing pruning-by-value,
187 thereby reducing speed but potentially improving accuracy). This is a hyper parameter to be experimentally
188 tuned on a validation set.
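The prune-by-value comparison described above can be sketched as follows; the function name is hypothetical, and only the comparison itself comes from the text:

```python
def prune_by_value(log_probs, gamma=2.3):
    """Keep vocabulary indices v with max_log_prob - gamma <= log_prob[v].

    A sketch of the maes_expansion_gamma pruning rule: smaller gamma prunes
    more aggressively (fewer expansions, faster), larger gamma keeps more
    expansion candidates (slower, potentially more accurate).
    """
    best = max(log_probs)
    return [v for v, lp in enumerate(log_probs) if best - gamma <= lp]
```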
189
190 softmax_temperature: Scales the logits of the joint prior to computing log_softmax.
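Assuming the conventional logits-divided-by-temperature form (the source only states that the logits are scaled, so this exact form is an assumption), the effect can be sketched as:

```python
import math

def log_softmax_with_temperature(logits, temperature=1.0):
    # Scale logits by 1/temperature, then apply a numerically stable
    # log_softmax. temperature > 1 flattens the distribution;
    # temperature < 1 sharpens it. Illustrative sketch only.
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    lse = m + math.log(sum(math.exp(x - m) for x in scaled))
    return [x - lse for x in scaled]
```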
191
192 decoder: The Decoder/Prediction network module.
193 joint: The Joint network module.
194 blank_id: The id of the RNNT blank token.
195 """
196
197 def __init__(self, decoding_cfg, decoder, joint, blank_id: int):
198 super(AbstractRNNTDecoding, self).__init__()
199
200 # Convert dataclass to config object
201 if is_dataclass(decoding_cfg):
202 decoding_cfg = OmegaConf.structured(decoding_cfg)
203
204 self.cfg = decoding_cfg
205 self.blank_id = blank_id
206 self.num_extra_outputs = joint.num_extra_outputs
207 self.big_blank_durations = self.cfg.get("big_blank_durations", None)
208 self.durations = self.cfg.get("durations", None)
209 self.compute_hypothesis_token_set = self.cfg.get("compute_hypothesis_token_set", False)
210 self.compute_langs = decoding_cfg.get('compute_langs', False)
211 self.preserve_alignments = self.cfg.get('preserve_alignments', None)
212 self.joint_fused_batch_size = self.cfg.get('fused_batch_size', None)
213 self.compute_timestamps = self.cfg.get('compute_timestamps', None)
214 self.word_seperator = self.cfg.get('word_seperator', ' ')
215
216 if self.durations is not None: # this means it's a TDT model.
217 if blank_id == 0:
218 raise ValueError("blank_id must equal len(non_blank_vocabs) for TDT models")
219 if self.big_blank_durations is not None:
220 raise ValueError("duration and big_blank_durations can't both be not None")
221 if self.cfg.strategy not in ['greedy', 'greedy_batch']:
222 raise ValueError("currently only greedy and greedy_batch inference is supported for TDT models")
223
224 if self.big_blank_durations is not None: # this means it's a multi-blank model.
225 if blank_id == 0:
226 raise ValueError("blank_id must equal len(vocabs) for multi-blank RNN-T models")
227 if self.cfg.strategy not in ['greedy', 'greedy_batch']:
228 raise ValueError(
229 "currently only greedy and greedy_batch inference is supported for multi-blank models"
230 )
231
232 possible_strategies = ['greedy', 'greedy_batch', 'beam', 'tsd', 'alsd', 'maes']
233 if self.cfg.strategy not in possible_strategies:
234 raise ValueError(f"Decoding strategy must be one of {possible_strategies}")
235
236 # Update preserve alignments
237 if self.preserve_alignments is None:
238 if self.cfg.strategy in ['greedy', 'greedy_batch']:
239 self.preserve_alignments = self.cfg.greedy.get('preserve_alignments', False)
240
241 elif self.cfg.strategy in ['beam', 'tsd', 'alsd', 'maes']:
242 self.preserve_alignments = self.cfg.beam.get('preserve_alignments', False)
243
244 # Update compute timestamps
245 if self.compute_timestamps is None:
246 if self.cfg.strategy in ['greedy', 'greedy_batch']:
247 self.compute_timestamps = self.cfg.greedy.get('compute_timestamps', False)
248
249 elif self.cfg.strategy in ['beam', 'tsd', 'alsd', 'maes']:
250 self.compute_timestamps = self.cfg.beam.get('compute_timestamps', False)
251
252 # Test if alignments are being preserved for RNNT
253 if self.compute_timestamps is True and self.preserve_alignments is False:
254 raise ValueError("If `compute_timestamps` flag is set, then `preserve_alignments` flag must also be set.")
255
256 # initialize confidence-related fields
257 self._init_confidence(self.cfg.get('confidence_cfg', None))
258
259 # Confidence estimation is not implemented for these strategies
260 if (
261 not self.preserve_frame_confidence
262 and self.cfg.strategy in ['beam', 'tsd', 'alsd', 'maes']
263 and self.cfg.beam.get('preserve_frame_confidence', False)
264 ):
265 raise NotImplementedError(f"Confidence calculation is not supported for strategy `{self.cfg.strategy}`")
266
267 if self.cfg.strategy == 'greedy':
268 if self.big_blank_durations is None:
269 if self.durations is None:
270 self.decoding = greedy_decode.GreedyRNNTInfer(
271 decoder_model=decoder,
272 joint_model=joint,
273 blank_index=self.blank_id,
274 max_symbols_per_step=(
275 self.cfg.greedy.get('max_symbols', None)
276 or self.cfg.greedy.get('max_symbols_per_step', None)
277 ),
278 preserve_alignments=self.preserve_alignments,
279 preserve_frame_confidence=self.preserve_frame_confidence,
280 confidence_measure_cfg=self.confidence_measure_cfg,
281 )
282 else:
283 self.decoding = greedy_decode.GreedyTDTInfer(
284 decoder_model=decoder,
285 joint_model=joint,
286 blank_index=self.blank_id,
287 durations=self.durations,
288 max_symbols_per_step=(
289 self.cfg.greedy.get('max_symbols', None)
290 or self.cfg.greedy.get('max_symbols_per_step', None)
291 ),
292 preserve_alignments=self.preserve_alignments,
293 preserve_frame_confidence=self.preserve_frame_confidence,
294 confidence_measure_cfg=self.confidence_measure_cfg,
295 )
296 else:
297 self.decoding = greedy_decode.GreedyMultiblankRNNTInfer(
298 decoder_model=decoder,
299 joint_model=joint,
300 blank_index=self.blank_id,
301 big_blank_durations=self.big_blank_durations,
302 max_symbols_per_step=(
303 self.cfg.greedy.get('max_symbols', None) or self.cfg.greedy.get('max_symbols_per_step', None)
304 ),
305 preserve_alignments=self.preserve_alignments,
306 preserve_frame_confidence=self.preserve_frame_confidence,
307 confidence_measure_cfg=self.confidence_measure_cfg,
308 )
309
310 elif self.cfg.strategy == 'greedy_batch':
311 if self.big_blank_durations is None:
312 if self.durations is None:
313 self.decoding = greedy_decode.GreedyBatchedRNNTInfer(
314 decoder_model=decoder,
315 joint_model=joint,
316 blank_index=self.blank_id,
317 max_symbols_per_step=(
318 self.cfg.greedy.get('max_symbols', None)
319 or self.cfg.greedy.get('max_symbols_per_step', None)
320 ),
321 preserve_alignments=self.preserve_alignments,
322 preserve_frame_confidence=self.preserve_frame_confidence,
323 confidence_measure_cfg=self.confidence_measure_cfg,
324 )
325 else:
326 self.decoding = greedy_decode.GreedyBatchedTDTInfer(
327 decoder_model=decoder,
328 joint_model=joint,
329 blank_index=self.blank_id,
330 durations=self.durations,
331 max_symbols_per_step=(
332 self.cfg.greedy.get('max_symbols', None)
333 or self.cfg.greedy.get('max_symbols_per_step', None)
334 ),
335 preserve_alignments=self.preserve_alignments,
336 preserve_frame_confidence=self.preserve_frame_confidence,
337 confidence_measure_cfg=self.confidence_measure_cfg,
338 )
339
340 else:
341 self.decoding = greedy_decode.GreedyBatchedMultiblankRNNTInfer(
342 decoder_model=decoder,
343 joint_model=joint,
344 blank_index=self.blank_id,
345 big_blank_durations=self.big_blank_durations,
346 max_symbols_per_step=(
347 self.cfg.greedy.get('max_symbols', None) or self.cfg.greedy.get('max_symbols_per_step', None)
348 ),
349 preserve_alignments=self.preserve_alignments,
350 preserve_frame_confidence=self.preserve_frame_confidence,
351 confidence_measure_cfg=self.confidence_measure_cfg,
352 )
353
354 elif self.cfg.strategy == 'beam':
355
356 self.decoding = beam_decode.BeamRNNTInfer(
357 decoder_model=decoder,
358 joint_model=joint,
359 beam_size=self.cfg.beam.beam_size,
360 return_best_hypothesis=decoding_cfg.beam.get('return_best_hypothesis', True),
361 search_type='default',
362 score_norm=self.cfg.beam.get('score_norm', True),
363 softmax_temperature=self.cfg.beam.get('softmax_temperature', 1.0),
364 preserve_alignments=self.preserve_alignments,
365 )
366
367 elif self.cfg.strategy == 'tsd':
368
369 self.decoding = beam_decode.BeamRNNTInfer(
370 decoder_model=decoder,
371 joint_model=joint,
372 beam_size=self.cfg.beam.beam_size,
373 return_best_hypothesis=decoding_cfg.beam.get('return_best_hypothesis', True),
374 search_type='tsd',
375 score_norm=self.cfg.beam.get('score_norm', True),
376 tsd_max_sym_exp_per_step=self.cfg.beam.get('tsd_max_sym_exp', 10),
377 softmax_temperature=self.cfg.beam.get('softmax_temperature', 1.0),
378 preserve_alignments=self.preserve_alignments,
379 )
380
381 elif self.cfg.strategy == 'alsd':
382
383 self.decoding = beam_decode.BeamRNNTInfer(
384 decoder_model=decoder,
385 joint_model=joint,
386 beam_size=self.cfg.beam.beam_size,
387 return_best_hypothesis=decoding_cfg.beam.get('return_best_hypothesis', True),
388 search_type='alsd',
389 score_norm=self.cfg.beam.get('score_norm', True),
390 alsd_max_target_len=self.cfg.beam.get('alsd_max_target_len', 2),
391 softmax_temperature=self.cfg.beam.get('softmax_temperature', 1.0),
392 preserve_alignments=self.preserve_alignments,
393 )
394
395 elif self.cfg.strategy == 'maes':
396
397 self.decoding = beam_decode.BeamRNNTInfer(
398 decoder_model=decoder,
399 joint_model=joint,
400 beam_size=self.cfg.beam.beam_size,
401 return_best_hypothesis=decoding_cfg.beam.get('return_best_hypothesis', True),
402 search_type='maes',
403 score_norm=self.cfg.beam.get('score_norm', True),
404 maes_num_steps=self.cfg.beam.get('maes_num_steps', 2),
405 maes_prefix_alpha=self.cfg.beam.get('maes_prefix_alpha', 1),
406 maes_expansion_gamma=self.cfg.beam.get('maes_expansion_gamma', 2.3),
407 maes_expansion_beta=self.cfg.beam.get('maes_expansion_beta', 2.0),
408 softmax_temperature=self.cfg.beam.get('softmax_temperature', 1.0),
409 preserve_alignments=self.preserve_alignments,
410 ngram_lm_model=self.cfg.beam.get('ngram_lm_model', None),
411 ngram_lm_alpha=self.cfg.beam.get('ngram_lm_alpha', 0.0),
412 hat_subtract_ilm=self.cfg.beam.get('hat_subtract_ilm', False),
413 hat_ilm_weight=self.cfg.beam.get('hat_ilm_weight', 0.0),
414 )
415
416 else:
417
418 raise ValueError(
419 f"Incorrect decoding strategy supplied. Must be one of {possible_strategies}\n"
420 f"but was provided {self.cfg.strategy}"
421 )
422
423 # Update the joint fused batch size or disable it entirely if needed.
424 self.update_joint_fused_batch_size()
425
426 def rnnt_decoder_predictions_tensor(
427 self,
428 encoder_output: torch.Tensor,
429 encoded_lengths: torch.Tensor,
430 return_hypotheses: bool = False,
431 partial_hypotheses: Optional[List[Hypothesis]] = None,
432 ) -> Tuple[List[str], Optional[List[List[str]]], Optional[Union[Hypothesis, NBestHypotheses]]]:
433 """
434 Decode an encoder output by autoregressive decoding of the Decoder+Joint networks.
435
436 Args:
437 encoder_output: torch.Tensor of shape [B, D, T].
438 encoded_lengths: torch.Tensor containing lengths of the padded encoder outputs. Shape [B].
439 return_hypotheses: bool. If set to True it will return list of Hypothesis or NBestHypotheses
440
441 Returns:
442 If `return_best_hypothesis` is set:
443 A tuple (hypotheses, None):
444 hypotheses - list of Hypothesis (best hypothesis per sample).
445 Look at rnnt_utils.Hypothesis for more information.
446
447 If `return_best_hypothesis` is not set:
448 A tuple(hypotheses, all_hypotheses)
449 hypotheses - list of Hypothesis (best hypothesis per sample).
450 Look at rnnt_utils.Hypothesis for more information.
451 all_hypotheses - list of NBestHypotheses. Each NBestHypotheses further contains a sorted
452 list of all the hypotheses of the model per sample.
453 Look at rnnt_utils.NBestHypotheses for more information.
454 """
455 # Compute hypotheses
456 with torch.inference_mode():
457 hypotheses_list = self.decoding(
458 encoder_output=encoder_output, encoded_lengths=encoded_lengths, partial_hypotheses=partial_hypotheses
459 ) # type: [List[Hypothesis]]
460
461 # extract the hypotheses
462 hypotheses_list = hypotheses_list[0] # type: List[Hypothesis]
463
464 prediction_list = hypotheses_list
465
466 if isinstance(prediction_list[0], NBestHypotheses):
467 hypotheses = []
468 all_hypotheses = []
469
470 for nbest_hyp in prediction_list: # type: NBestHypotheses
471 n_hyps = nbest_hyp.n_best_hypotheses # Extract all hypotheses for this sample
472 decoded_hyps = self.decode_hypothesis(n_hyps) # type: List[str]
473
474 # If computing timestamps
475 if self.compute_timestamps is True:
476 timestamp_type = self.cfg.get('rnnt_timestamp_type', 'all')
477 for hyp_idx in range(len(decoded_hyps)):
478 decoded_hyps[hyp_idx] = self.compute_rnnt_timestamps(decoded_hyps[hyp_idx], timestamp_type)
479
480 hypotheses.append(decoded_hyps[0]) # best hypothesis
481 all_hypotheses.append(decoded_hyps)
482
483 if return_hypotheses:
484 return hypotheses, all_hypotheses
485
486 best_hyp_text = [h.text for h in hypotheses]
487 all_hyp_text = [h.text for hh in all_hypotheses for h in hh]
488 return best_hyp_text, all_hyp_text
489
490 else:
491 hypotheses = self.decode_hypothesis(prediction_list) # type: List[str]
492
493 # If computing timestamps
494 if self.compute_timestamps is True:
495 timestamp_type = self.cfg.get('rnnt_timestamp_type', 'all')
496 for hyp_idx in range(len(hypotheses)):
497 hypotheses[hyp_idx] = self.compute_rnnt_timestamps(hypotheses[hyp_idx], timestamp_type)
498
499 if return_hypotheses:
500 # greedy decoding, can get high-level confidence scores
501 if self.preserve_frame_confidence and (
502 self.preserve_word_confidence or self.preserve_token_confidence
503 ):
504 hypotheses = self.compute_confidence(hypotheses)
505 return hypotheses, None
506
507 best_hyp_text = [h.text for h in hypotheses]
508 return best_hyp_text, None
509
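The best-vs-all text extraction in the n-best branch above can be exercised in isolation. The `Hyp` dataclass below is a hypothetical stand-in for `rnnt_utils.Hypothesis`; only the `text` field matters for this sketch:

```python
from dataclasses import dataclass


# Hypothetical stand-in for rnnt_utils.Hypothesis.
@dataclass
class Hyp:
    text: str


# Per-sample n-best lists (toy data), best hypothesis first in each list.
all_hypotheses = [[Hyp("a1"), Hyp("a2")], [Hyp("b1")]]
hypotheses = [nbest[0] for nbest in all_hypotheses]  # best hypothesis per sample

best_hyp_text = [h.text for h in hypotheses]
all_hyp_text = [h.text for hh in all_hypotheses for h in hh]

print(best_hyp_text)  # -> ['a1', 'b1']
print(all_hyp_text)   # -> ['a1', 'a2', 'b1']
```

The nested comprehension flattens every per-sample n-best list into one list, in sample order.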
510 def decode_hypothesis(self, hypotheses_list: List[Hypothesis]) -> List[Union[Hypothesis, NBestHypotheses]]:
511 """
512 Decode a list of hypotheses into a list of strings.
513
514 Args:
515 hypotheses_list: List of Hypothesis.
516
517        Returns:
518            The same list of hypotheses, with the decoded string set in the `text` field of each entry.
519 """
520 for ind in range(len(hypotheses_list)):
521 # Extract the integer encoded hypothesis
522 prediction = hypotheses_list[ind].y_sequence
523
524            if not isinstance(prediction, list):
525                prediction = prediction.tolist()
526
527 # RNN-T sample level is already preprocessed by implicit RNNT decoding
528 # Simply remove any blank and possibly big blank tokens
529 if self.big_blank_durations is not None: # multi-blank RNNT
530 num_extra_outputs = len(self.big_blank_durations)
531 prediction = [p for p in prediction if p < self.blank_id - num_extra_outputs]
532 elif self.durations is not None: # TDT model.
533 prediction = [p for p in prediction if p < self.blank_id]
534 else: # standard RNN-T
535 prediction = [p for p in prediction if p != self.blank_id]
536
537 # De-tokenize the integer tokens; if not computing timestamps
538 if self.compute_timestamps is True:
539 # keep the original predictions, wrap with the number of repetitions per token and alignments
540 # this is done so that `rnnt_decoder_predictions_tensor()` can process this hypothesis
541 # in order to compute exact time stamps.
542 alignments = copy.deepcopy(hypotheses_list[ind].alignments)
543 token_repetitions = [1] * len(alignments) # preserve number of repetitions per token
544 hypothesis = (prediction, alignments, token_repetitions)
545 else:
546 hypothesis = self.decode_tokens_to_str(prediction)
547
548 # TODO: remove
549 # collapse leading spaces before . , ? for PC models
550 hypothesis = re.sub(r'(\s+)([\.\,\?])', r'\2', hypothesis)
551
552 if self.compute_hypothesis_token_set:
553 hypotheses_list[ind].tokens = self.decode_ids_to_tokens(prediction)
554
555 # De-tokenize the integer tokens
556 hypotheses_list[ind].text = hypothesis
557
558 return hypotheses_list
559
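The per-model blank filtering in `decode_hypothesis` can be sketched on toy token ids. The `blank_id` and id layout below are illustrative assumptions, not taken from a real vocabulary:

```python
# Illustrative id layout: vocabulary tokens < 8, big blanks at ids 8-9, blank at 10.
blank_id = 10
prediction = [3, 10, 5, 8, 10, 7]

# Standard RNN-T: drop only the blank token itself.
standard = [p for p in prediction if p != blank_id]

# Multi-blank RNN-T: big blanks occupy the ids just below blank_id,
# so keep only ids strictly below (blank_id - num_extra_outputs).
big_blank_durations = [2, 4]
num_extra_outputs = len(big_blank_durations)
multi_blank = [p for p in prediction if p < blank_id - num_extra_outputs]

# TDT: duration outputs are handled separately, keep ids strictly below blank_id.
tdt = [p for p in prediction if p < blank_id]

print(standard)     # -> [3, 5, 8, 7]
print(multi_blank)  # -> [3, 5, 7]
print(tdt)          # -> [3, 5, 8, 7]
```

Note how id 8 survives standard and TDT filtering but is dropped by the multi-blank filter, which treats it as a big blank.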
560 def compute_confidence(self, hypotheses_list: List[Hypothesis]) -> List[Hypothesis]:
561 """
562 Computes high-level (per-token and/or per-word) confidence scores for a list of hypotheses.
563 Assumes that `frame_confidence` is present in the hypotheses.
564
565 Args:
566 hypotheses_list: List of Hypothesis.
567
568 Returns:
569 A list of hypotheses with high-level confidence scores.
570 """
571 if self.exclude_blank_from_confidence:
572 for hyp in hypotheses_list:
573 hyp.token_confidence = hyp.non_blank_frame_confidence
574 else:
575 for hyp in hypotheses_list:
576 offset = 0
577 token_confidence = []
578 if len(hyp.timestep) > 0:
579 for ts, te in zip(hyp.timestep, hyp.timestep[1:] + [len(hyp.frame_confidence)]):
580 if ts != te:
581 # <blank> tokens are considered to belong to the last non-blank token, if any.
582 token_confidence.append(
583 self._aggregate_confidence(
584 [hyp.frame_confidence[ts][offset]]
585 + [fc[0] for fc in hyp.frame_confidence[ts + 1 : te]]
586 )
587 )
588 offset = 0
589 else:
590 token_confidence.append(hyp.frame_confidence[ts][offset])
591 offset += 1
592 hyp.token_confidence = token_confidence
593 if self.preserve_word_confidence:
594 for hyp in hypotheses_list:
595 hyp.word_confidence = self._aggregate_token_confidence(hyp)
596 return hypotheses_list
597
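The timestep-bracketing loop in `compute_confidence` can be illustrated on toy data. This sketch assumes one emitted token per frame (so the intra-frame `offset` bookkeeping of the method is omitted) and uses the arithmetic mean in place of `_aggregate_confidence`; both are simplifying assumptions:

```python
import statistics

# Hypothetical per-frame confidence scores: frame_confidence[t] lists the
# scores of the emissions at frame t.
frame_confidence = [[0.9], [0.8], [0.6], [0.95]]
# Frames at which non-blank tokens were emitted (hypothetical).
timestep = [1, 3]

token_confidence = []
# Pair each emitting frame with the start of the next one (or the end of audio);
# intervening blank frames are folded into the preceding non-blank token.
for ts, te in zip(timestep, timestep[1:] + [len(frame_confidence)]):
    scores = [frame_confidence[ts][0]] + [fc[0] for fc in frame_confidence[ts + 1 : te]]
    token_confidence.append(statistics.fmean(scores))

print(token_confidence)  # first token averages frames 1-2, second is frame 3 alone
```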
598 @abstractmethod
599 def decode_tokens_to_str(self, tokens: List[int]) -> str:
600 """
601        Implemented by subclass in order to decode a token id list into a string.
602
603 Args:
604 tokens: List of int representing the token ids.
605
606 Returns:
607 A decoded string.
608 """
609 raise NotImplementedError()
610
611 @abstractmethod
612 def decode_ids_to_tokens(self, tokens: List[int]) -> List[str]:
613 """
614 Implemented by subclass in order to decode a token id list into a token list.
615 A token list is the string representation of each token id.
616
617 Args:
618 tokens: List of int representing the token ids.
619
620 Returns:
621 A list of decoded tokens.
622 """
623 raise NotImplementedError()
624
625 @abstractmethod
626 def decode_tokens_to_lang(self, tokens: List[int]) -> str:
627 """
628 Implemented by subclass in order to
629 compute the most likely language ID (LID) string given the tokens.
630
631 Args:
632 tokens: List of int representing the token ids.
633
634 Returns:
635 A decoded LID string.
636 """
637 raise NotImplementedError()
638
639 @abstractmethod
640 def decode_ids_to_langs(self, tokens: List[int]) -> List[str]:
641 """
642 Implemented by subclass in order to
643 decode a token id list into language ID (LID) list.
644
645 Args:
646 tokens: List of int representing the token ids.
647
648 Returns:
649 A list of decoded LIDS.
650 """
651 raise NotImplementedError()
652
653 def update_joint_fused_batch_size(self):
654 if self.joint_fused_batch_size is None:
655 # do nothing and let the Joint itself handle setting up of the fused batch
656 return
657
658 if not hasattr(self.decoding.joint, 'set_fused_batch_size'):
659 logging.warning(
660 "The joint module does not have `set_fused_batch_size(int)` as a setter function.\n"
661 "Ignoring update of joint fused batch size."
662 )
663 return
664
665 if not hasattr(self.decoding.joint, 'set_fuse_loss_wer'):
666 logging.warning(
667 "The joint module does not have `set_fuse_loss_wer(bool, RNNTLoss, RNNTWER)` "
668 "as a setter function.\n"
669 "Ignoring update of joint fused batch size."
670 )
671 return
672
673 if self.joint_fused_batch_size > 0:
674 self.decoding.joint.set_fused_batch_size(self.joint_fused_batch_size)
675 else:
676 logging.info("Joint fused batch size <= 0; Will temporarily disable fused batch step in the Joint.")
677 self.decoding.joint.set_fuse_loss_wer(False)
678
679 def compute_rnnt_timestamps(self, hypothesis: Hypothesis, timestamp_type: str = "all"):
680 assert timestamp_type in ['char', 'word', 'all']
681
682 # Unpack the temporary storage
683 decoded_prediction, alignments, token_repetitions = hypothesis.text
684
685 # Retrieve offsets
686 char_offsets = word_offsets = None
687 char_offsets = self._compute_offsets(hypothesis, token_repetitions, self.blank_id)
688
689 # finally, set the flattened decoded predictions to text field for later text decoding
690 hypothesis.text = decoded_prediction
691
692 # Assert number of offsets and hypothesis tokens are 1:1 match.
693 num_flattened_tokens = 0
694 for t in range(len(char_offsets)):
695 # Subtract one here for the extra RNNT BLANK token emitted to designate "End of timestep"
696 num_flattened_tokens += len(char_offsets[t]['char']) - 1
697
698 if num_flattened_tokens != len(hypothesis.text):
699 raise ValueError(
700 f"`char_offsets`: {char_offsets} and `processed_tokens`: {hypothesis.text}"
701 " have to be of the same length, but are: "
702 f"`len(offsets)`: {len(char_offsets)} and `len(processed_tokens)`:"
703 f" {len(hypothesis.text)}"
704 )
705
706 encoded_char_offsets = copy.deepcopy(char_offsets)
707
708 # Correctly process the token ids to chars/subwords.
709 for i, offsets in enumerate(char_offsets):
710 decoded_chars = []
711 for char in offsets['char'][:-1]: # ignore the RNNT Blank token at end of every timestep with -1 subset
712 decoded_chars.append(self.decode_tokens_to_str([int(char)]))
713 char_offsets[i]["char"] = decoded_chars
714
715 # detect char vs subword models
716 lens = []
717 for v in char_offsets:
718 tokens = v["char"]
719 # each token may be either 1 unicode token or multiple unicode token
720 # for character based models, only 1 token is used
721 # for subword, more than one token can be used.
722 # Computing max, then summing up total lens is a test to check for char vs subword
723 # For char models, len(lens) == sum(lens)
724 # but this is violated for subword models.
725 max_len = max(len(c) for c in tokens)
726 lens.append(max_len)
727
728 # array of one or more chars implies subword based model with multiple char emitted per TxU step (via subword)
729 if sum(lens) > len(lens):
730 text_type = 'subword'
731 else:
732 # full array of ones implies character based model with 1 char emitted per TxU step
733 text_type = 'char'
734
735 # retrieve word offsets from character offsets
736 word_offsets = None
737 if timestamp_type in ['word', 'all']:
738 if text_type == 'char':
739 word_offsets = self._get_word_offsets_chars(char_offsets, word_delimiter_char=self.word_seperator)
740 else:
741 # utilize the copy of char offsets with the correct integer ids for tokens
742 # so as to avoid tokenize -> detokenize -> compare -> merge steps.
743 word_offsets = self._get_word_offsets_subwords_sentencepiece(
744 encoded_char_offsets,
745 hypothesis,
746 decode_ids_to_tokens=self.decode_ids_to_tokens,
747 decode_tokens_to_str=self.decode_tokens_to_str,
748 )
749
750 # attach results
751 if len(hypothesis.timestep) > 0:
752 timestep_info = hypothesis.timestep
753 else:
754 timestep_info = []
755
756 # Setup defaults
757 hypothesis.timestep = {"timestep": timestep_info}
758
759 # Add char / subword time stamps
760 if char_offsets is not None and timestamp_type in ['char', 'all']:
761 hypothesis.timestep['char'] = char_offsets
762
763 # Add word time stamps
764 if word_offsets is not None and timestamp_type in ['word', 'all']:
765 hypothesis.timestep['word'] = word_offsets
766
767 # Convert the flattened token indices to text
768 hypothesis.text = self.decode_tokens_to_str(hypothesis.text)
769
770 return hypothesis
771
772 @staticmethod
773 def _compute_offsets(
774 hypothesis: Hypothesis, token_repetitions: List[int], rnnt_token: int
775 ) -> List[Dict[str, Union[str, int]]]:
776 """
777        Utility method that calculates the individual time indices where a token starts and ends.
778
779 Args:
780 hypothesis: A Hypothesis object that contains `text` field that holds the character / subword token
781 emitted at every time step after rnnt collapse.
782 token_repetitions: A list of ints representing the number of repetitions of each emitted token.
783 rnnt_token: The integer of the rnnt blank token used during rnnt collapse.
784
785 Returns:
786            A list of dictionaries, each containing "char", "start_offset" and "end_offset" for one token.
787 """
788 start_index = 0
789
790 # If the exact timestep information is available, utilize the 1st non-rnnt blank token timestep
791 # as the start index.
792 if hypothesis.timestep is not None and len(hypothesis.timestep) > 0:
793 start_index = max(0, hypothesis.timestep[0] - 1)
794
795 # Construct the start and end indices brackets
796 end_indices = np.asarray(token_repetitions).cumsum()
797 start_indices = np.concatenate(([start_index], end_indices[:-1]))
798
799 # Process the TxU dangling alignment tensor, containing pairs of (logits, label)
800 alignment_labels = [al_logits_labels for al_logits_labels in hypothesis.text[1]]
801 for t in range(len(alignment_labels)):
802 for u in range(len(alignment_labels[t])):
803 alignment_labels[t][u] = alignment_labels[t][u][1] # pick label from (logit, label) tuple
804
805 # Merge the results per token into a list of dictionaries
806 offsets = [
807 {"char": a, "start_offset": s, "end_offset": e}
808 for a, s, e in zip(alignment_labels, start_indices, end_indices)
809 ]
810
811 # Filter out RNNT token (blank at [t][0] position). This is because blank can only occur at end of a
812 # time step for RNNT, so if 0th token is blank, then that timestep is skipped.
813 offsets = list(filter(lambda offsets: offsets["char"][0] != rnnt_token, offsets))
814 return offsets
815
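The start/end bracket construction in `_compute_offsets` (a cumulative sum of repetition counts, shifted by one element for the starts) can be reproduced with stdlib tools on toy counts:

```python
from itertools import accumulate

# Hypothetical repetition counts: each emitted token lasted this many frames.
token_repetitions = [2, 1, 3]
start_index = 0  # frame index of the first non-blank emission

# End index of each token is the running total of repetitions;
# each token starts where the previous one ended.
end_indices = list(accumulate(token_repetitions))
start_indices = [start_index] + end_indices[:-1]

brackets = list(zip(start_indices, end_indices))
print(brackets)  # -> [(0, 2), (2, 3), (3, 6)]
```

This mirrors the `cumsum` / `concatenate` pair in the method without requiring NumPy.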
816 @staticmethod
817    def _get_word_offsets_chars(
818        offsets: List[Dict[str, Union[str, float]]], word_delimiter_char: str = " "
819    ) -> List[Dict[str, Union[str, float]]]:
820 """
821 Utility method which constructs word time stamps out of character time stamps.
822
823 References:
824 This code is a port of the Hugging Face code for word time stamp construction.
825
826 Args:
827 offsets: A list of dictionaries, each containing "char", "start_offset" and "end_offset".
828 word_delimiter_char: Character token that represents the word delimiter. By default, " ".
829
830 Returns:
831 A list of dictionaries containing the word offsets. Each item contains "word", "start_offset" and
832 "end_offset".
833 """
834 word_offsets = []
835
836 last_state = "SPACE"
837 word = ""
838 start_offset = 0
839 end_offset = 0
840 for i, offset in enumerate(offsets):
841 chars = offset["char"]
842 for char in chars:
843 state = "SPACE" if char == word_delimiter_char else "WORD"
844
845 if state == last_state:
846 # If we are in the same state as before, we simply repeat what we've done before
847 end_offset = offset["end_offset"]
848 word += char
849 else:
850 # Switching state
851 if state == "SPACE":
852 # Finishing a word
853 word_offsets.append({"word": word, "start_offset": start_offset, "end_offset": end_offset})
854 else:
855 # Starting a new word
856 start_offset = offset["start_offset"]
857 end_offset = offset["end_offset"]
858 word = char
859
860 last_state = state
861
862 if last_state == "WORD":
863 word_offsets.append({"word": word, "start_offset": start_offset, "end_offset": end_offset})
864
865 return word_offsets
866
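The SPACE/WORD state machine in `_get_word_offsets_chars` can be run standalone. For brevity this sketch flattens each offset to a single character, whereas the method iterates over a per-timestep character list:

```python
# Hypothetical character offsets (one char per entry) spelling "hi yo".
offsets = [
    {"char": "h", "start_offset": 0, "end_offset": 1},
    {"char": "i", "start_offset": 1, "end_offset": 2},
    {"char": " ", "start_offset": 2, "end_offset": 3},
    {"char": "y", "start_offset": 3, "end_offset": 4},
    {"char": "o", "start_offset": 4, "end_offset": 5},
]

word_offsets = []
state, word, start, end = "SPACE", "", 0, 0
for off in offsets:
    new_state = "SPACE" if off["char"] == " " else "WORD"
    if new_state == state:
        # Same state: extend the current run.
        end = off["end_offset"]
        word += off["char"]
    elif new_state == "SPACE":
        # WORD -> SPACE transition: a word just finished.
        word_offsets.append({"word": word, "start_offset": start, "end_offset": end})
    else:
        # SPACE -> WORD transition: a new word starts.
        start, end, word = off["start_offset"], off["end_offset"], off["char"]
    state = new_state
if state == "WORD":
    # Flush the trailing word.
    word_offsets.append({"word": word, "start_offset": start, "end_offset": end})

print(word_offsets)
```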
867 @staticmethod
868    def _get_word_offsets_subwords_sentencepiece(
869        offsets: List[Dict[str, Union[str, float]]],
870        hypothesis: Hypothesis,
871        decode_ids_to_tokens: Callable[[List[int]], List[str]],
872        decode_tokens_to_str: Callable[[List[int]], str],
873    ) -> List[Dict[str, Union[str, float]]]:
874 """
875 Utility method which constructs word time stamps out of sub-word time stamps.
876
877 **Note**: Only supports Sentencepiece based tokenizers !
878
879 Args:
880 offsets: A list of dictionaries, each containing "char", "start_offset" and "end_offset".
881 hypothesis: Hypothesis object that contains `text` field, where each token is a sub-word id
882 after rnnt collapse.
883 decode_ids_to_tokens: A Callable function that accepts a list of integers and maps it to a sub-word.
884 decode_tokens_to_str: A Callable function that accepts a list of integers and maps it to text / str.
885
886 Returns:
887 A list of dictionaries containing the word offsets. Each item contains "word", "start_offset" and
888 "end_offset".
889 """
890 word_offsets = []
891 built_token = []
892 previous_token_index = 0
893 # For every offset token
894 for i, offset in enumerate(offsets):
895 # For every subword token in offset token list (ignoring the RNNT Blank token at the end)
896 for char in offset['char'][:-1]:
897 char = int(char)
898
899 # Compute the sub-word text representation, and the decoded text (stripped of sub-word markers).
900 token = decode_ids_to_tokens([char])[0]
901 token_text = decode_tokens_to_str([char])
902
903                # It is a sub-word token that contains an identifier at the beginning, such as _ or ##,
904                # which was stripped after forcing partial text conversion of the token.
905 if token != token_text:
906 # If there are any partially or fully built sub-word token ids, construct to text.
907 # Note: This is "old" subword, that occurs *after* current sub-word has started.
908 if built_token:
909 word_offsets.append(
910 {
911 "word": decode_tokens_to_str(built_token),
912 "start_offset": offsets[previous_token_index]["start_offset"],
913 "end_offset": offsets[i]["start_offset"],
914 }
915 )
916
917 # Prepare list of new sub-word ids
918 built_token.clear()
919 built_token.append(char)
920 previous_token_index = i
921 else:
922 # If the token does not contain any sub-word start mark, then the sub-word has not completed yet
923 # Append to current sub-word list.
924 built_token.append(char)
925
926 # Inject the start offset of the first token to word offsets
927        # This is because we always delay the injection of the first sub-word due to the loop
928 # condition and check whether built token is ready or not.
929 # Therefore without this forced injection, the start_offset appears as off by 1.
930 # This should only be done when these arrays contain more than one element.
931 if offsets and word_offsets:
932 word_offsets[0]["start_offset"] = offsets[0]["start_offset"]
933
934 # If there are any remaining tokens left, inject them all into the final word offset.
935 # The start offset of this token is the start time of the next token to process.
936 # The end offset of this token is the end time of the last token from offsets.
937 # Note that built_token is a flat list; but offsets contains a nested list which
938 # may have different dimensionality.
939 # As such, we can't rely on the length of the list of built_token to index offsets.
940 if built_token:
941 # start from the previous token index as this hasn't been committed to word_offsets yet
942 # if we still have content in built_token
943 start_offset = offsets[previous_token_index]["start_offset"]
944 word_offsets.append(
945 {
946 "word": decode_tokens_to_str(built_token),
947 "start_offset": start_offset,
948 "end_offset": offsets[-1]["end_offset"],
949 }
950 )
951 built_token.clear()
952
953 return word_offsets
954
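The word-boundary detection above relies on the fact that converting a word-start sub-word to text strips its marker, so `token != token_text` flags the start of a new word. A toy sentencepiece-style tokenizer (entirely hypothetical vocabulary) makes this concrete:

```python
# Toy vocabulary: a leading "▁" marks a word start and is stripped when
# converting tokens to plain text. Entirely hypothetical.
VOCAB = {0: "▁he", 1: "llo", 2: "▁wo", 3: "rld"}


def decode_ids_to_tokens(ids):
    return [VOCAB[i] for i in ids]


def decode_tokens_to_str(ids):
    return "".join(decode_ids_to_tokens(ids)).replace("▁", " ").strip()


words = []
built_token = []
for tok_id in [0, 1, 2, 3]:
    token = decode_ids_to_tokens([tok_id])[0]
    token_text = decode_tokens_to_str([tok_id])
    if token != token_text:  # word-start marker was stripped: a new word begins
        if built_token:
            words.append(decode_tokens_to_str(built_token))
        built_token = [tok_id]
    else:  # continuation sub-word: keep building the current word
        built_token.append(tok_id)
if built_token:  # flush the final partially-built word
    words.append(decode_tokens_to_str(built_token))

print(words)  # -> ['hello', 'world']
```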
955
956 class RNNTDecoding(AbstractRNNTDecoding):
957 """
958 Used for performing RNN-T auto-regressive decoding of the Decoder+Joint network given the encoder state.
959
960 Args:
961 decoding_cfg: A dict-like object which contains the following key-value pairs.
962 strategy: str value which represents the type of decoding that can occur.
963 Possible values are :
964 - greedy, greedy_batch (for greedy decoding).
965 - beam, tsd, alsd (for beam search decoding).
966
967 compute_hypothesis_token_set: A bool flag, which determines whether to compute a list of decoded
968 tokens as well as the decoded string. Default is False in order to avoid double decoding
969 unless required.
970
971 preserve_alignments: Bool flag which preserves the history of logprobs generated during
972 decoding (sample / batched). When set to true, the Hypothesis will contain
973 the non-null value for `logprobs` in it. Here, `alignments` is a List of List of
974 Tuple(Tensor (of length V + 1), Tensor(scalar, label after argmax)).
975
976 In order to obtain this hypothesis, please utilize `rnnt_decoder_predictions_tensor` function
977 with the `return_hypotheses` flag set to True.
978
979 The length of the list corresponds to the Acoustic Length (T).
980 Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more targets from a vocabulary.
981 U is the number of target tokens for the current timestep Ti.
982
983 confidence_cfg: A dict-like object which contains the following key-value pairs related to confidence
984 scores. In order to obtain hypotheses with confidence scores, please utilize
985 `rnnt_decoder_predictions_tensor` function with the `preserve_frame_confidence` flag set to True.
986
987 preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores
988 generated during decoding (sample / batched). When set to true, the Hypothesis will contain
989                the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of List of floats.
990
991 The length of the list corresponds to the Acoustic Length (T).
992 Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more confidence scores.
993 U is the number of target tokens for the current timestep Ti.
994 preserve_token_confidence: Bool flag which preserves the history of per-token confidence scores
995 generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
996 the non-null value for `token_confidence` in it. Here, `token_confidence` is a List of floats.
997
998 The length of the list corresponds to the number of recognized tokens.
999 preserve_word_confidence: Bool flag which preserves the history of per-word confidence scores
1000 generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
1001 the non-null value for `word_confidence` in it. Here, `word_confidence` is a List of floats.
1002
1003 The length of the list corresponds to the number of recognized words.
1004 exclude_blank: Bool flag indicating that blank token confidence scores are to be excluded
1005 from the `token_confidence`.
1006 aggregation: Which aggregation type to use for collapsing per-token confidence into per-word confidence.
1007 Valid options are `mean`, `min`, `max`, `prod`.
1008 measure_cfg: A dict-like object which contains the measure name and settings to compute per-frame
1009 confidence scores.
1010
1011 name: The measure name (str).
1012 Supported values:
1013 - 'max_prob' for using the maximum token probability as a confidence.
1014 - 'entropy' for using a normalized entropy of a log-likelihood vector.
1015
1016 entropy_type: Which type of entropy to use (str).
1017 Used if confidence_measure_cfg.name is set to `entropy`.
1018 Supported values:
1019                    - 'gibbs' for the (standard) Gibbs entropy. If the alpha (α) is provided,
1020                        the formula is the following: H_α = -sum_i((p^α_i)*log(p^α_i)).
1021                        Note that for this entropy, the alpha should comply with the following inequality:
1022                        (log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= α <= (1+log(V-1))/log(V-1)
1023                        where V is the model vocabulary size.
1024                    - 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
1025                        Tsallis entropy formula is the following: H_α = 1/(α-1)*(1-sum_i(p^α_i)),
1026                        where α is a parameter. When α == 1, it works like the Gibbs entropy.
1027                        More: https://en.wikipedia.org/wiki/Tsallis_entropy
1028                    - 'renyi' for the Rényi entropy.
1029                        Rényi entropy formula is the following: H_α = 1/(1-α)*log_2(sum_i(p^α_i)),
1030                        where α is a parameter. When α == 1, it works like the Gibbs entropy.
1031                        More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
1032
1033                alpha: Power scale for logsoftmax (α for entropies). Here we restrict it to be > 0.
1034                    When the alpha equals one, scaling is not applied to 'max_prob',
1035                    and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
1036
1037 entropy_norm: A mapping of the entropy value to the interval [0,1].
1038 Supported values:
1039 - 'lin' for using the linear mapping.
1040 - 'exp' for using exponential mapping with linear shift.
1041
1042 The config may further contain the following sub-dictionaries:
1043 "greedy":
1044 max_symbols: int, describing the maximum number of target tokens to decode per
1045 timestep during greedy decoding. Setting to larger values allows longer sentences
1046 to be decoded, at the cost of increased execution time.
1047
1048 preserve_frame_confidence: Same as above, overrides above value.
1049
1050 confidence_measure_cfg: Same as above, overrides confidence_cfg.measure_cfg.
1051
1052 "beam":
1053 beam_size: int, defining the beam size for beam search. Must be >= 1.
1054                    If beam_size == 1, will perform cached greedy search. This might produce slightly different
1055 results compared to the greedy search above.
1056
1057 score_norm: optional bool, whether to normalize the returned beam score in the hypotheses.
1058 Set to True by default.
1059
1060 return_best_hypothesis: optional bool, whether to return just the best hypothesis or all of the
1061 hypotheses after beam search has concluded. This flag is set by default.
1062
1063 tsd_max_sym_exp: optional int, determines number of symmetric expansions of the target symbols
1064 per timestep of the acoustic model. Larger values will allow longer sentences to be decoded,
1065 at increased cost to execution time.
1066
1067 alsd_max_target_len: optional int or float, determines the potential maximum target sequence length.
1068 If an integer is provided, it can decode sequences of that particular maximum length.
1069 If a float is provided, it can decode sequences of int(alsd_max_target_len * seq_len),
1070 where seq_len is the length of the acoustic model output (T).
1071
1072 NOTE:
1073 If a float is provided, it can be greater than 1!
1074 By default, a float of 2.0 is used so that a target sequence can be at most twice
1075 as long as the acoustic model output length T.
1076
1077 maes_num_steps: Number of adaptive steps to take. From the paper, 2 steps is generally sufficient,
1078 and can be reduced to 1 to improve decoding speed while sacrificing some accuracy. int > 0.
1079
1080                maes_prefix_alpha: Maximum prefix length in prefix search. Must be an integer, and it is advised to keep this as 1
1081 in order to reduce expensive beam search cost later. int >= 0.
1082
1083 maes_expansion_beta: Maximum number of prefix expansions allowed, in addition to the beam size.
1084 Effectively, the number of hypothesis = beam_size + maes_expansion_beta. Must be an int >= 0,
1085 and affects the speed of inference since large values will perform large beam search in the next step.
1086
1087 maes_expansion_gamma: Float pruning threshold used in the prune-by-value step when computing the expansions.
1088 The default (2.3) is selected from the paper. It performs a comparison (max_log_prob - gamma <= log_prob[v])
1089 where v is all vocabulary indices in the Vocab set and max_log_prob is the "most" likely token to be
1090 predicted. Gamma therefore provides a margin of additional tokens which can be potential candidates for
1091 expansion apart from the "most likely" candidate.
1092 Lower values will reduce the number of expansions (by increasing pruning-by-value, thereby improving speed
1093 but hurting accuracy). Higher values will increase the number of expansions (by reducing pruning-by-value,
1094 thereby reducing speed but potentially improving accuracy). This is a hyper parameter to be experimentally
1095 tuned on a validation set.
1096
1097 softmax_temperature: Scales the logits of the joint prior to computing log_softmax.
1098
1099 decoder: The Decoder/Prediction network module.
1100 joint: The Joint network module.
1101 vocabulary: The vocabulary (excluding the RNNT blank token) which will be used for decoding.
1102 """
1103
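The three entropy formulas quoted in the docstring above can be evaluated directly. The distribution and α below are arbitrary toy values, and the normalization to [0, 1] ('lin'/'exp' mapping) is omitted:

```python
import math

# Toy probability vector and power-scale parameter (hypothetical values).
p = [0.7, 0.2, 0.1]
alpha = 2.0

# Gibbs: H_a = -sum_i((p_i^a) * log(p_i^a))
gibbs = -sum((pi ** alpha) * math.log(pi ** alpha) for pi in p)

# Tsallis: H_a = 1/(a-1) * (1 - sum_i(p_i^a))
tsallis = (1.0 - sum(pi ** alpha for pi in p)) / (alpha - 1.0)

# Renyi: H_a = 1/(1-a) * log_2(sum_i(p_i^a))
renyi = math.log2(sum(pi ** alpha for pi in p)) / (1.0 - alpha)

print(round(gibbs, 4), round(tsallis, 4), round(renyi, 4))
```

As the docstring notes, all three collapse to the Shannon entropy in the limit α → 1.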
1104 def __init__(
1105 self, decoding_cfg, decoder, joint, vocabulary,
1106 ):
1107 # we need to ensure blank is the last token in the vocab for the case of RNNT and Multi-blank RNNT.
1108 blank_id = len(vocabulary) + joint.num_extra_outputs
1109
1110 if hasattr(decoding_cfg, 'model_type') and decoding_cfg.model_type == 'tdt':
1111 blank_id = len(vocabulary)
1112
1113 self.labels_map = dict([(i, vocabulary[i]) for i in range(len(vocabulary))])
1114
1115 super(RNNTDecoding, self).__init__(
1116 decoding_cfg=decoding_cfg, decoder=decoder, joint=joint, blank_id=blank_id,
1117 )
1118
1119 if isinstance(self.decoding, beam_decode.BeamRNNTInfer):
1120 self.decoding.set_decoding_type('char')
1121
1122 def _aggregate_token_confidence(self, hypothesis: Hypothesis) -> List[float]:
1123 """
1124 Implemented by subclass in order to aggregate token confidence to a word-level confidence.
1125
1126 Args:
1127 hypothesis: Hypothesis
1128
1129 Returns:
1130 A list of word-level confidence scores.
1131 """
1132 return self._aggregate_token_confidence_chars(hypothesis.words, hypothesis.token_confidence)
1133
1134 def decode_tokens_to_str(self, tokens: List[int]) -> str:
1135 """
1136        Implemented by subclass in order to decode a token id list into a string.
1137
1138 Args:
1139 tokens: List of int representing the token ids.
1140
1141 Returns:
1142 A decoded string.
1143 """
1144 hypothesis = ''.join(self.decode_ids_to_tokens(tokens))
1145 return hypothesis
1146
1147 def decode_ids_to_tokens(self, tokens: List[int]) -> List[str]:
1148 """
1149 Implemented by subclass in order to decode a token id list into a token list.
1150 A token list is the string representation of each token id.
1151
1152 Args:
1153 tokens: List of int representing the token ids.
1154
1155 Returns:
1156 A list of decoded tokens.
1157 """
1158 token_list = [self.labels_map[c] for c in tokens if c < self.blank_id - self.num_extra_outputs]
1159 return token_list
1160
1161 def decode_tokens_to_lang(self, tokens: List[int]) -> str:
1162 """
1163 Compute the most likely language ID (LID) string given the tokens.
1164
1165 Args:
1166 tokens: List of int representing the token ids.
1167
1168 Returns:
1169 A decoded LID string.
1170 """
1171 lang = self.tokenizer.ids_to_lang(tokens)
1172 return lang
1173
1174 def decode_ids_to_langs(self, tokens: List[int]) -> List[str]:
1175 """
1176 Decode a token id list into language ID (LID) list.
1177
1178 Args:
1179 tokens: List of int representing the token ids.
1180
1181 Returns:
1182 A list of decoded LIDS.
1183 """
1184 lang_list = self.tokenizer.ids_to_text_and_langs(tokens)
1185 return lang_list
1186
1187
1188 class RNNTWER(Metric):
1189 """
1190 This metric computes numerator and denominator for Overall Word Error Rate (WER) between prediction and reference texts.
1191 When doing distributed training/evaluation the result of res=WER(predictions, targets, target_lengths) calls
1192 will be all-reduced between all workers using SUM operations.
1193 Here contains two numbers res=[wer_numerator, wer_denominator]. WER=wer_numerator/wer_denominator.
1194
1195 If used with PytorchLightning LightningModule, include wer_numerator and wer_denominators inside validation_step results.
1196    Then aggregate (sum) them at the end of the validation epoch to correctly compute validation WER.
1197
1198 Example:
1199 def validation_step(self, batch, batch_idx):
1200 ...
1201 wer_num, wer_denom = self.__wer(predictions, transcript, transcript_len)
1202 self.val_outputs = {'val_loss': loss_value, 'val_wer_num': wer_num, 'val_wer_denom': wer_denom}
1203 return self.val_outputs
1204
1205 def on_validation_epoch_end(self):
1206 ...
1207 wer_num = torch.stack([x['val_wer_num'] for x in self.val_outputs]).sum()
1208 wer_denom = torch.stack([x['val_wer_denom'] for x in self.val_outputs]).sum()
1209 tensorboard_logs = {'validation_loss': val_loss_mean, 'validation_avg_wer': wer_num / wer_denom}
1210 self.val_outputs.clear() # free memory
1211 return {'val_loss': val_loss_mean, 'log': tensorboard_logs}
1212
1213 Args:
1214 decoding: RNNTDecoding object that will perform autoregressive decoding of the RNNT model.
1215 batch_dim_index: Index of the batch dimension.
1216        use_cer: Whether to use Character Error Rate instead of Word Error Rate.
1217 log_prediction: Whether to log a single decoded sample per call.
1218
1219 Returns:
1220        res: a tuple of 3 zero dimensional float32 ``torch.Tensor`` objects: a WER score, a sum of Levenshtein's
1221 distances for all prediction - reference pairs, total number of words in all references.
1222 """
1223
1224 full_state_update = True
1225
1226 def __init__(
1227 self, decoding: RNNTDecoding, batch_dim_index=0, use_cer=False, log_prediction=True, dist_sync_on_step=False
1228 ):
1229 super(RNNTWER, self).__init__(dist_sync_on_step=dist_sync_on_step)
1230 self.decoding = decoding
1231 self.batch_dim_index = batch_dim_index
1232 self.use_cer = use_cer
1233 self.log_prediction = log_prediction
1234 self.blank_id = self.decoding.blank_id
1235 self.labels_map = self.decoding.labels_map
1236
1237 self.add_state("scores", default=torch.tensor(0), dist_reduce_fx='sum', persistent=False)
1238 self.add_state("words", default=torch.tensor(0), dist_reduce_fx='sum', persistent=False)
1239
1240 def update(
1241 self,
1242 encoder_output: torch.Tensor,
1243 encoded_lengths: torch.Tensor,
1244 targets: torch.Tensor,
1245 target_lengths: torch.Tensor,
1246 ) -> None:
1247 words = 0
1248 scores = 0
1249 references = []
1250 with torch.no_grad():
1251 # prediction_cpu_tensor = tensors[0].long().cpu()
1252 targets_cpu_tensor = targets.long().cpu()
1253 targets_cpu_tensor = move_dimension_to_the_front(targets_cpu_tensor, self.batch_dim_index)
1254 tgt_lenths_cpu_tensor = target_lengths.long().cpu()
1255
1256 # iterate over batch
1257 for ind in range(targets_cpu_tensor.shape[0]):
1258 tgt_len = tgt_lenths_cpu_tensor[ind].item()
1259 target = targets_cpu_tensor[ind][:tgt_len].numpy().tolist()
1260
1261 reference = self.decoding.decode_tokens_to_str(target)
1262 references.append(reference)
1263
1264 hypotheses, _ = self.decoding.rnnt_decoder_predictions_tensor(encoder_output, encoded_lengths)
1265
1266 if self.log_prediction:
1267 logging.info(f"\n")
1268 logging.info(f"reference :{references[0]}")
1269 logging.info(f"predicted :{hypotheses[0]}")
1270
1271 for h, r in zip(hypotheses, references):
1272 if self.use_cer:
1273 h_list = list(h)
1274 r_list = list(r)
1275 else:
1276 h_list = h.split()
1277 r_list = r.split()
1278 words += len(r_list)
1279 # Compute Levenshtein's distance
1280 scores += editdistance.eval(h_list, r_list)
1281
1282 self.scores += torch.tensor(scores, device=self.scores.device, dtype=self.scores.dtype)
1283 self.words += torch.tensor(words, device=self.words.device, dtype=self.words.dtype)
1284 # return torch.tensor([scores, words]).to(predictions.device)
1285
1286 def compute(self):
1287 wer = self.scores.float() / self.words
1288 return wer, self.scores.detach(), self.words.detach()
1289
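Because `compute` divides the summed scores by the summed words, the correct epoch-level reduction is to sum the per-batch numerators and denominators before dividing; averaging per-batch WERs would weight short batches as much as long ones. A stdlib-only sketch with made-up per-batch values:

```python
# Hypothetical (scores, words) pairs, as accumulated by RNNTWER.update per batch.
batches = [(3, 10), (1, 8), (2, 12)]

wer_num = sum(s for s, _ in batches)    # total edit distance
wer_denom = sum(w for _, w in batches)  # total reference words
epoch_wer = wer_num / wer_denom         # 6 / 30 = 0.2
```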
1290
1291 @dataclass
1292 class RNNTDecodingConfig:
1293 model_type: str = "rnnt" # one of "rnnt", "multiblank" or "tdt"
1294 strategy: str = "greedy_batch"
1295
1296 compute_hypothesis_token_set: bool = False
1297
1298 # preserve decoding alignments
1299 preserve_alignments: Optional[bool] = None
1300
1301 # confidence config
1302 confidence_cfg: ConfidenceConfig = ConfidenceConfig()
1303
1304 # RNNT Joint fused batch size
1305 fused_batch_size: Optional[int] = None
1306
1307 # compute RNNT time stamps
1308 compute_timestamps: Optional[bool] = None
1309
1310 # compute language IDs
1311 compute_langs: bool = False
1312
1313 # token representing word separator
1314 word_seperator: str = " "
1315
1316 # type of timestamps to calculate
1317 rnnt_timestamp_type: str = "all" # can be char, word or all for both
1318
1319 # greedy decoding config
1320 greedy: greedy_decode.GreedyRNNTInferConfig = greedy_decode.GreedyRNNTInferConfig()
1321
1322 # beam decoding config
1323 beam: beam_decode.BeamRNNTInferConfig = beam_decode.BeamRNNTInferConfig(beam_size=4)
1324
1325 # can be used to change temperature for decoding
1326 temperature: float = 1.0
1327
[end of nemo/collections/asr/metrics/rnnt_wer.py]
[start of nemo/collections/asr/metrics/wer.py]
1 # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import re
16 from abc import abstractmethod
17 from dataclasses import dataclass, is_dataclass
18 from typing import Callable, Dict, List, Optional, Tuple, Union
19
20 import editdistance
21 import jiwer
22 import numpy as np
23 import torch
24 from omegaconf import DictConfig, OmegaConf
25 from torchmetrics import Metric
26
27 from nemo.collections.asr.parts.submodules import ctc_beam_decoding, ctc_greedy_decoding
28 from nemo.collections.asr.parts.utils.asr_confidence_utils import ConfidenceConfig, ConfidenceMixin
29 from nemo.collections.asr.parts.utils.rnnt_utils import Hypothesis, NBestHypotheses
30 from nemo.utils import logging, logging_mode
31
32 __all__ = ['word_error_rate', 'word_error_rate_detail', 'WER', 'move_dimension_to_the_front']
33
34
35 def word_error_rate(hypotheses: List[str], references: List[str], use_cer=False) -> float:
36 """
37 Computes Average Word Error rate between two texts represented as
38 corresponding lists of string.
39
40 Hypotheses and references must have same length.
41
42 Args:
43 hypotheses (list): list of hypotheses
44 references(list) : list of references
45 use_cer (bool): set True to enable cer
46
47 Returns:
48 wer (float): average word error rate
49 """
50 scores = 0
51 words = 0
52 if len(hypotheses) != len(references):
53 raise ValueError(
54 "In word error rate calculation, hypotheses and reference"
55 " lists must have the same number of elements. But I got:"
56 "{0} and {1} respectively".format(len(hypotheses), len(references))
57 )
58 for h, r in zip(hypotheses, references):
59 if use_cer:
60 h_list = list(h)
61 r_list = list(r)
62 else:
63 h_list = h.split()
64 r_list = r.split()
65 words += len(r_list)
66 # May deprecate using editdistance in future release for here and rest of codebase
67 # once we confirm jiwer is reliable.
68 scores += editdistance.eval(h_list, r_list)
69 if words != 0:
70 wer = 1.0 * scores / words
71 else:
72 wer = float('inf')
73 return wer
74
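`word_error_rate` above leans on the third-party `editdistance` package for the Levenshtein distance; the same computation can be sketched stdlib-only with a hand-rolled dynamic-programming edit distance (names here are illustrative, not part of the module):

```python
def levenshtein(a, b):
    # Classic dynamic-programming edit distance over two token sequences.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

hyp = "the cat sat".split()
ref = "the cat sat down".split()
wer = levenshtein(hyp, ref) / len(ref)  # one deletion over four reference words
```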
75
76 def word_error_rate_detail(
77 hypotheses: List[str], references: List[str], use_cer=False
78 ) -> Tuple[float, int, float, float, float]:
79 """
80 Computes Average Word Error Rate with details (insertion rate, deletion rate, substitution rate)
81 between two texts represented as corresponding lists of string.
82
83 Hypotheses and references must have same length.
84
85 Args:
86 hypotheses (list): list of hypotheses
87 references(list) : list of references
88 use_cer (bool): set True to enable cer
89
90 Returns:
91 wer (float): average word error rate
92 words (int): Total number of words/characters of given reference texts
93 ins_rate (float): average insertion error rate
94 del_rate (float): average deletion error rate
95 sub_rate (float): average substitution error rate
96 """
97 scores = 0
98 words = 0
99 ops_count = {'substitutions': 0, 'insertions': 0, 'deletions': 0}
100
101 if len(hypotheses) != len(references):
102 raise ValueError(
103 "In word error rate calculation, hypotheses and reference"
104 " lists must have the same number of elements. But I got:"
105 "{0} and {1} respectively".format(len(hypotheses), len(references))
106 )
107
108 for h, r in zip(hypotheses, references):
109 if use_cer:
110 h_list = list(h)
111 r_list = list(r)
112 else:
113 h_list = h.split()
114 r_list = r.split()
115
116 # Work around jiwer not accepting empty reference strings
117 if len(r_list) == 0:
118 if len(h_list) != 0:
119 errors = len(h_list)
120 ops_count['insertions'] += errors
121 else:
122 errors = 0
123 else:
124 if use_cer:
125 measures = jiwer.cer(r, h, return_dict=True)
126 else:
127 measures = jiwer.compute_measures(r, h)
128
129 errors = measures['insertions'] + measures['deletions'] + measures['substitutions']
130 ops_count['insertions'] += measures['insertions']
131 ops_count['deletions'] += measures['deletions']
132 ops_count['substitutions'] += measures['substitutions']
133
134 scores += errors
135 words += len(r_list)
136
137 if words != 0:
138 wer = 1.0 * scores / words
139 ins_rate = 1.0 * ops_count['insertions'] / words
140 del_rate = 1.0 * ops_count['deletions'] / words
141 sub_rate = 1.0 * ops_count['substitutions'] / words
142 else:
143 wer, ins_rate, del_rate, sub_rate = float('inf'), float('inf'), float('inf'), float('inf')
144
145 return wer, words, ins_rate, del_rate, sub_rate
146
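The empty-reference guard in `word_error_rate_detail` exists because jiwer rejects empty strings; in that case every hypothesis token is counted as an insertion. A minimal mirror of just that branch (helper name is made up for illustration):

```python
def empty_reference_errors(h_list, r_list):
    # Mirrors the len(r_list) == 0 branch of word_error_rate_detail:
    # with no reference, every hypothesis token is an insertion error.
    assert len(r_list) == 0, "non-empty references are delegated to jiwer"
    return len(h_list), {'insertions': len(h_list), 'deletions': 0, 'substitutions': 0}

errors, ops = empty_reference_errors("a b c".split(), [])
```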
147
148 def word_error_rate_per_utt(hypotheses: List[str], references: List[str], use_cer=False) -> Tuple[List[float], float]:
149 """
150 Computes Word Error Rate per utterance and the average WER
151 between two texts represented as corresponding lists of string.
152
153 Hypotheses and references must have same length.
154
155 Args:
156 hypotheses (list): list of hypotheses
157 references(list) : list of references
158 use_cer (bool): set True to enable cer
159
160 Returns:
161 wer_per_utt (List[float]): word error rate per utterance
162 avg_wer (float): average word error rate
163 """
164 scores = 0
165 words = 0
166 wer_per_utt = []
167
168 if len(hypotheses) != len(references):
169 raise ValueError(
170 "In word error rate calculation, hypotheses and reference"
171 " lists must have the same number of elements. But I got:"
172 "{0} and {1} respectively".format(len(hypotheses), len(references))
173 )
174
175 for h, r in zip(hypotheses, references):
176 if use_cer:
177 h_list = list(h)
178 r_list = list(r)
179 else:
180 h_list = h.split()
181 r_list = r.split()
182
183 # Work around jiwer not accepting empty reference strings
184 if len(r_list) == 0:
185 if len(h_list) != 0:
186 errors = len(h_list)
187 wer_per_utt.append(float('inf'))
188 else:
189 if use_cer:
190 measures = jiwer.cer(r, h, return_dict=True)
191 er = measures['cer']
192 else:
193 measures = jiwer.compute_measures(r, h)
194 er = measures['wer']
195
196 errors = measures['insertions'] + measures['deletions'] + measures['substitutions']
197 wer_per_utt.append(er)
198
199 scores += errors
200 words += len(r_list)
201
202 if words != 0:
203 avg_wer = 1.0 * scores / words
204 else:
205 avg_wer = float('inf')
206
207 return wer_per_utt, avg_wer
208
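Note that the `avg_wer` returned above is corpus-level (total errors over total reference words), not the mean of the per-utterance WERs; the two can differ substantially when utterance lengths vary. A small numeric illustration with made-up counts:

```python
# (errors, reference words) for two utterances of very different length.
per_utt = [(1, 1), (0, 9)]

mean_of_utt_wers = sum(e / w for e, w in per_utt) / len(per_utt)      # 0.5
corpus_wer = sum(e for e, _ in per_utt) / sum(w for _, w in per_utt)  # 0.1
```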
209
210 def move_dimension_to_the_front(tensor, dim_index):
211 all_dims = list(range(tensor.ndim))
212 return tensor.permute(*([dim_index] + all_dims[:dim_index] + all_dims[dim_index + 1 :]))
213
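The permutation that `move_dimension_to_the_front` hands to `tensor.permute` can be checked without torch; a sketch of just the index arithmetic:

```python
def front_permutation(ndim, dim_index):
    # Axis order produced by move_dimension_to_the_front for a rank-ndim tensor:
    # the chosen axis first, all remaining axes in their original order.
    all_dims = list(range(ndim))
    return [dim_index] + all_dims[:dim_index] + all_dims[dim_index + 1:]
```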
214
215 class AbstractCTCDecoding(ConfidenceMixin):
216 """
217 Used for performing CTC auto-regressive / non-auto-regressive decoding of the logprobs.
218
219 Args:
220 decoding_cfg: A dict-like object which contains the following key-value pairs.
221 strategy: str value which represents the type of decoding that can occur.
222 Possible values are :
223 - greedy (for greedy decoding).
224 - beam (for DeepSpeed KenLM based decoding).
225
226 compute_timestamps: A bool flag which determines whether to compute character/subword or
227 word based timestamps, mapping the output log-probabilities to discrete intervals of time.
228 The timestamps will be available in the returned Hypothesis.timestep as a dictionary.
229
230 ctc_timestamp_type: A str value, which represents the types of timestamps that should be calculated.
231 Can take the following values - "char" for character/subword time stamps, "word" for word level
232 time stamps and "all" (default), for both character level and word level time stamps.
233
234 word_seperator: Str token representing the separator between words.
235
236 preserve_alignments: Bool flag which preserves the history of logprobs generated during
237 decoding (sample / batched). When set to true, the Hypothesis will contain
238 the non-null value for `logprobs` in it. Here, `logprobs` is a torch.Tensor.
239
240 confidence_cfg: A dict-like object which contains the following key-value pairs related to confidence
241 scores. In order to obtain hypotheses with confidence scores, please utilize
242 `ctc_decoder_predictions_tensor` function with the `preserve_frame_confidence` flag set to True.
243
244 preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores
245 generated during decoding. When set to true, the Hypothesis will contain
246 the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of floats.
247 preserve_token_confidence: Bool flag which preserves the history of per-token confidence scores
248 generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
249 the non-null value for `token_confidence` in it. Here, `token_confidence` is a List of floats.
250
251 The length of the list corresponds to the number of recognized tokens.
252 preserve_word_confidence: Bool flag which preserves the history of per-word confidence scores
253 generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
254 the non-null value for `word_confidence` in it. Here, `word_confidence` is a List of floats.
255
256 The length of the list corresponds to the number of recognized words.
257 exclude_blank: Bool flag indicating that blank token confidence scores are to be excluded
258 from the `token_confidence`.
259 aggregation: Which aggregation type to use for collapsing per-token confidence into per-word confidence.
260 Valid options are `mean`, `min`, `max`, `prod`.
261 measure_cfg: A dict-like object which contains the measure name and settings to compute per-frame
262 confidence scores.
263
264 name: The measure name (str).
265 Supported values:
266 - 'max_prob' for using the maximum token probability as a confidence.
267 - 'entropy' for using a normalized entropy of a log-likelihood vector.
268
269 entropy_type: Which type of entropy to use (str).
270 Used if confidence_measure_cfg.name is set to `entropy`.
271 Supported values:
272 - 'gibbs' for the (standard) Gibbs entropy. If the alpha (α) is provided,
273 the formula is the following: H_α = -sum_i((p^α_i)*log(p^α_i)).
274 Note that for this entropy, the alpha should comply with the following inequality:
275 (log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= α <= (1+log(V-1))/log(V-1)
276 where V is the model vocabulary size.
277 - 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
278 Tsallis entropy formula is the following: H_α = 1/(α-1)*(1-sum_i(p^α_i)),
279 where α is a parameter. When α == 1, it works like the Gibbs entropy.
280 More: https://en.wikipedia.org/wiki/Tsallis_entropy
281 - 'renyi' for the Rényi entropy.
282 Rényi entropy formula is the following: H_α = 1/(1-α)*log_2(sum_i(p^α_i)),
283 where α is a parameter. When α == 1, it works like the Gibbs entropy.
284 More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
285
286 alpha: Power scale for logsoftmax (α for entropies). Here we restrict it to be > 0.
287 When the alpha equals one, scaling is not applied to 'max_prob',
288 and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
289
290 entropy_norm: A mapping of the entropy value to the interval [0,1].
291 Supported values:
292 - 'lin' for using the linear mapping.
293 - 'exp' for using exponential mapping with linear shift.
294
295 batch_dim_index: Index of the batch dimension of ``targets`` and ``predictions`` parameters of
296 ``ctc_decoder_predictions_tensor`` methods. Can be either 0 or 1.
297
298 The config may further contain the following sub-dictionaries:
299 "greedy":
300 preserve_alignments: Same as above, overrides above value.
301 compute_timestamps: Same as above, overrides above value.
302 preserve_frame_confidence: Same as above, overrides above value.
303 confidence_measure_cfg: Same as above, overrides confidence_cfg.measure_cfg.
304
305 "beam":
306 beam_size: int, defining the beam size for beam search. Must be >= 1.
307 If beam_size == 1, will perform cached greedy search. This might give slightly different
308 results compared to the greedy search above.
309
310 return_best_hypothesis: optional bool, whether to return just the best hypothesis or all of the
311 hypotheses after beam search has concluded. This flag is set by default.
312
313 beam_alpha: float, the strength of the Language model on the final score of a token.
314 final_score = acoustic_score + beam_alpha * lm_score + beam_beta * seq_length.
315
316 beam_beta: float, the strength of the sequence length penalty on the final score of a token.
317 final_score = acoustic_score + beam_alpha * lm_score + beam_beta * seq_length.
318
319 kenlm_path: str, path to a KenLM ARPA or .binary file (depending on the strategy chosen).
320 If the path is invalid (file is not found at path), will raise a deferred error at the moment
321 of calculation of beam search, so that users may update / change the decoding strategy
322 to point to the correct file.
323
324 blank_id: The id of the CTC blank token.
325 """
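The entropy formulas in the docstring can be checked numerically. A stdlib sketch of the raw (unnormalized, no [0,1] mapping) measures, confirming that both Tsallis and Rényi approach the Gibbs/Shannon entropy as α → 1:

```python
import math

def gibbs(p):
    # Standard Gibbs/Shannon entropy (natural log), alpha == 1 case.
    return -sum(pi * math.log(pi) for pi in p)

def tsallis(p, alpha):
    # H_a = 1/(a-1) * (1 - sum_i(p^a_i))
    return (1.0 - sum(pi ** alpha for pi in p)) / (alpha - 1.0)

def renyi(p, alpha):
    # H_a = 1/(1-a) * log_2(sum_i(p^a_i)) -- log base 2, as in the docstring.
    return math.log2(sum(pi ** alpha for pi in p)) / (1.0 - alpha)

p = [0.5, 0.3, 0.2]
```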
326
327 def __init__(self, decoding_cfg, blank_id: int):
328 super().__init__()
329
330 # Convert dataclass to config
331 if is_dataclass(decoding_cfg):
332 decoding_cfg = OmegaConf.structured(decoding_cfg)
333
334 if not isinstance(decoding_cfg, DictConfig):
335 decoding_cfg = OmegaConf.create(decoding_cfg)
336
337 OmegaConf.set_struct(decoding_cfg, False)
338
339 # update minimal config
340 minimal_cfg = ['greedy']
341 for item in minimal_cfg:
342 if item not in decoding_cfg:
343 decoding_cfg[item] = OmegaConf.create({})
344
345 self.cfg = decoding_cfg
346 self.blank_id = blank_id
347 self.preserve_alignments = self.cfg.get('preserve_alignments', None)
348 self.compute_timestamps = self.cfg.get('compute_timestamps', None)
349 self.batch_dim_index = self.cfg.get('batch_dim_index', 0)
350 self.word_seperator = self.cfg.get('word_seperator', ' ')
351
352 possible_strategies = ['greedy', 'beam', 'pyctcdecode', 'flashlight']
353 if self.cfg.strategy not in possible_strategies:
354 raise ValueError(f"Decoding strategy must be one of {possible_strategies}. Given {self.cfg.strategy}")
355
356 # Update preserve alignments
357 if self.preserve_alignments is None:
358 if self.cfg.strategy in ['greedy']:
359 self.preserve_alignments = self.cfg.greedy.get('preserve_alignments', False)
360 else:
361 self.preserve_alignments = self.cfg.beam.get('preserve_alignments', False)
362
363 # Update compute timestamps
364 if self.compute_timestamps is None:
365 if self.cfg.strategy in ['greedy']:
366 self.compute_timestamps = self.cfg.greedy.get('compute_timestamps', False)
367 elif self.cfg.strategy in ['beam']:
368 self.compute_timestamps = self.cfg.beam.get('compute_timestamps', False)
369
370 # initialize confidence-related fields
371 self._init_confidence(self.cfg.get('confidence_cfg', None))
372
373 # Confidence estimation is not implemented for strategies other than `greedy`
374 if (
375 not self.preserve_frame_confidence
376 and self.cfg.strategy != 'greedy'
377 and self.cfg.beam.get('preserve_frame_confidence', False)
378 ):
379 raise NotImplementedError(f"Confidence calculation is not supported for strategy `{self.cfg.strategy}`")
380
381 # we need timestamps to extract non-blank per-frame confidence
382 if self.compute_timestamps is not None:
383 self.compute_timestamps |= self.preserve_frame_confidence
384
385 if self.cfg.strategy == 'greedy':
386
387 self.decoding = ctc_greedy_decoding.GreedyCTCInfer(
388 blank_id=self.blank_id,
389 preserve_alignments=self.preserve_alignments,
390 compute_timestamps=self.compute_timestamps,
391 preserve_frame_confidence=self.preserve_frame_confidence,
392 confidence_measure_cfg=self.confidence_measure_cfg,
393 )
394
395 elif self.cfg.strategy == 'beam':
396
397 self.decoding = ctc_beam_decoding.BeamCTCInfer(
398 blank_id=blank_id,
399 beam_size=self.cfg.beam.get('beam_size', 1),
400 search_type='default',
401 return_best_hypothesis=self.cfg.beam.get('return_best_hypothesis', True),
402 preserve_alignments=self.preserve_alignments,
403 compute_timestamps=self.compute_timestamps,
404 beam_alpha=self.cfg.beam.get('beam_alpha', 1.0),
405 beam_beta=self.cfg.beam.get('beam_beta', 0.0),
406 kenlm_path=self.cfg.beam.get('kenlm_path', None),
407 )
408
409 self.decoding.override_fold_consecutive_value = False
410
411 elif self.cfg.strategy == 'pyctcdecode':
412
413 self.decoding = ctc_beam_decoding.BeamCTCInfer(
414 blank_id=blank_id,
415 beam_size=self.cfg.beam.get('beam_size', 1),
416 search_type='pyctcdecode',
417 return_best_hypothesis=self.cfg.beam.get('return_best_hypothesis', True),
418 preserve_alignments=self.preserve_alignments,
419 compute_timestamps=self.compute_timestamps,
420 beam_alpha=self.cfg.beam.get('beam_alpha', 1.0),
421 beam_beta=self.cfg.beam.get('beam_beta', 0.0),
422 kenlm_path=self.cfg.beam.get('kenlm_path', None),
423 pyctcdecode_cfg=self.cfg.beam.get('pyctcdecode_cfg', None),
424 )
425
426 self.decoding.override_fold_consecutive_value = False
427
428 elif self.cfg.strategy == 'flashlight':
429
430 self.decoding = ctc_beam_decoding.BeamCTCInfer(
431 blank_id=blank_id,
432 beam_size=self.cfg.beam.get('beam_size', 1),
433 search_type='flashlight',
434 return_best_hypothesis=self.cfg.beam.get('return_best_hypothesis', True),
435 preserve_alignments=self.preserve_alignments,
436 compute_timestamps=self.compute_timestamps,
437 beam_alpha=self.cfg.beam.get('beam_alpha', 1.0),
438 beam_beta=self.cfg.beam.get('beam_beta', 0.0),
439 kenlm_path=self.cfg.beam.get('kenlm_path', None),
440 flashlight_cfg=self.cfg.beam.get('flashlight_cfg', None),
441 )
442
443 self.decoding.override_fold_consecutive_value = False
444
445 else:
446 raise ValueError(
447 f"Incorrect decoding strategy supplied. Must be one of {possible_strategies}\n"
448 f"but was provided {self.cfg.strategy}"
449 )
450
451 def ctc_decoder_predictions_tensor(
452 self,
453 decoder_outputs: torch.Tensor,
454 decoder_lengths: torch.Tensor = None,
455 fold_consecutive: bool = True,
456 return_hypotheses: bool = False,
457 ) -> Tuple[List[str], Optional[List[List[str]]], Optional[Union[Hypothesis, NBestHypotheses]]]:
458 """
459 Decodes a sequence of labels to words
460
461 Args:
462 decoder_outputs: A torch.Tensor of shape [Batch, Time, {Vocabulary}] (if ``batch_index_dim == 0``) or [Time, Batch, {Vocabulary}]
463 (if ``batch_index_dim == 1``) of integer indices that correspond to the index of some character in the
464 label set.
465 decoder_lengths: Optional tensor of length `Batch` which contains the integer lengths
466 of the sequence in the padded `predictions` tensor.
467 fold_consecutive: Bool, determine whether to perform "ctc collapse", folding consecutive tokens
468 into a single token.
469 return_hypotheses: Bool flag whether to return just the decoding predictions of the model
470 or a Hypothesis object that holds information such as the decoded `text`,
471 the `alignment` emitted by the CTC Model, and the `length` of the sequence (if available).
472 May also contain the log-probabilities of the decoder (if this method is called via
473 transcribe())
474
475 Returns:
476 Either a list of str which represent the CTC decoded strings per sample,
477 or a list of Hypothesis objects containing additional information.
478 """
479
480 if isinstance(decoder_outputs, torch.Tensor):
481 decoder_outputs = move_dimension_to_the_front(decoder_outputs, self.batch_dim_index)
482
483 if (
484 hasattr(self.decoding, 'override_fold_consecutive_value')
485 and self.decoding.override_fold_consecutive_value is not None
486 ):
487 logging.info(
488 f"Beam search requires that consecutive ctc tokens are not folded. \n"
489 f"Overriding provided value of `fold_consecutive` = {fold_consecutive} to "
490 f"{self.decoding.override_fold_consecutive_value}",
491 mode=logging_mode.ONCE,
492 )
493 fold_consecutive = self.decoding.override_fold_consecutive_value
494
495 with torch.inference_mode():
496 # Resolve the forward step of the decoding strategy
497 hypotheses_list = self.decoding(
498 decoder_output=decoder_outputs, decoder_lengths=decoder_lengths
499 ) # type: List[List[Hypothesis]]
500
501 # extract the hypotheses
502 hypotheses_list = hypotheses_list[0] # type: List[Hypothesis]
503
504 if isinstance(hypotheses_list[0], NBestHypotheses):
505 hypotheses = []
506 all_hypotheses = []
507
508 for nbest_hyp in hypotheses_list: # type: NBestHypotheses
509 n_hyps = nbest_hyp.n_best_hypotheses # Extract all hypotheses for this sample
510 decoded_hyps = self.decode_hypothesis(
511 n_hyps, fold_consecutive
512 ) # type: List[Union[Hypothesis, NBestHypotheses]]
513
514 # If computing timestamps
515 if self.compute_timestamps is True:
516 timestamp_type = self.cfg.get('ctc_timestamp_type', 'all')
517 for hyp_idx in range(len(decoded_hyps)):
518 decoded_hyps[hyp_idx] = self.compute_ctc_timestamps(decoded_hyps[hyp_idx], timestamp_type)
519
520 hypotheses.append(decoded_hyps[0]) # best hypothesis
521 all_hypotheses.append(decoded_hyps)
522
523 if return_hypotheses:
524 return hypotheses, all_hypotheses
525
526 best_hyp_text = [h.text for h in hypotheses]
527 all_hyp_text = [h.text for hh in all_hypotheses for h in hh]
528 return best_hyp_text, all_hyp_text
529
530 else:
531 hypotheses = self.decode_hypothesis(
532 hypotheses_list, fold_consecutive
533 ) # type: List[Union[Hypothesis, NBestHypotheses]]
534
535 # If computing timestamps
536 if self.compute_timestamps is True:
537 # greedy decoding, can get high-level confidence scores
538 if return_hypotheses and (self.preserve_word_confidence or self.preserve_token_confidence):
539 hypotheses = self.compute_confidence(hypotheses)
540 else:
541 # remove unused token_repetitions from Hypothesis.text
542 for hyp in hypotheses:
543 hyp.text = hyp.text[:2]
544 timestamp_type = self.cfg.get('ctc_timestamp_type', 'all')
545 for hyp_idx in range(len(hypotheses)):
546 hypotheses[hyp_idx] = self.compute_ctc_timestamps(hypotheses[hyp_idx], timestamp_type)
547
548 if return_hypotheses:
549 return hypotheses, None
550
551 best_hyp_text = [h.text for h in hypotheses]
552 return best_hyp_text, None
553
554 def decode_hypothesis(
555 self, hypotheses_list: List[Hypothesis], fold_consecutive: bool
556 ) -> List[Union[Hypothesis, NBestHypotheses]]:
557 """
558 Decode a list of hypotheses into a list of strings.
559
560 Args:
561 hypotheses_list: List of Hypothesis.
562 fold_consecutive: Whether to collapse the ctc blank tokens or not.
563
564 Returns:
565 A list of Hypothesis, with the decoded string (or a wrapped tuple, when computing timestamps) stored in `text`.
566 """
567 for ind in range(len(hypotheses_list)):
568 # Extract the integer encoded hypothesis
569 hyp = hypotheses_list[ind]
570 prediction = hyp.y_sequence
571 predictions_len = hyp.length if hyp.length > 0 else None
572
573 if fold_consecutive:
574 if not isinstance(prediction, list):
575 prediction = prediction.numpy().tolist()
576
577 if predictions_len is not None:
578 prediction = prediction[:predictions_len]
579
580 # CTC decoding procedure
581 decoded_prediction = []
582 token_lengths = [] # preserve token lengths
583 token_repetitions = [] # preserve number of repetitions per token
584
585 previous = self.blank_id
586 last_length = 0
587 last_repetition = 1
588
589 for pidx, p in enumerate(prediction):
590 if (p != previous or previous == self.blank_id) and p != self.blank_id:
591 decoded_prediction.append(p)
592
593 token_lengths.append(pidx - last_length)
594 last_length = pidx
595 token_repetitions.append(last_repetition)
596 last_repetition = 1
597
598 if p == previous and previous != self.blank_id:
599 last_repetition += 1
600
601 previous = p
602
603 if len(token_repetitions) > 0:
604 token_repetitions = token_repetitions[1:] + [last_repetition]
605
606 else:
607 if predictions_len is not None:
608 prediction = prediction[:predictions_len]
609 decoded_prediction = prediction[prediction != self.blank_id].tolist()
610 token_lengths = [1] * len(decoded_prediction) # preserve token lengths
611 token_repetitions = [1] * len(decoded_prediction) # preserve number of repetitions per token
612
613 # De-tokenize the integer tokens, unless computing timestamps
614 if self.compute_timestamps is True:
615 # keep the original predictions, wrap with the number of repetitions per token
616 # this is done so that `ctc_decoder_predictions_tensor()` can process this hypothesis
617 # in order to compute exact time stamps.
618 hypothesis = (decoded_prediction, token_lengths, token_repetitions)
619 else:
620 hypothesis = self.decode_tokens_to_str(decoded_prediction)
621
622 # TODO: remove
623 # collapse leading spaces before . , ? for PC models
624 hypothesis = re.sub(r'(\s+)([\.\,\?])', r'\2', hypothesis)
625
626 # Preserve this wrapped hypothesis or decoded text tokens.
627 hypotheses_list[ind].text = hypothesis
628
629 return hypotheses_list
630
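The CTC collapse rule used in the loop above can be isolated into a few lines; a stdlib sketch over plain integer ids with an assumed blank id of 0:

```python
def ctc_collapse(prediction, blank_id=0):
    # Emit a token when it differs from the previous frame (or directly
    # follows a blank) and is not itself the blank -- same condition as
    # the decode_hypothesis loop above.
    decoded, previous = [], blank_id
    for p in prediction:
        if (p != previous or previous == blank_id) and p != blank_id:
            decoded.append(p)
        previous = p
    return decoded
```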
631 def compute_confidence(self, hypotheses_list: List[Hypothesis]) -> List[Hypothesis]:
632 """
633 Computes high-level (per-token and/or per-word) confidence scores for a list of hypotheses.
634 Assumes that `frame_confidence` is present in the hypotheses.
635
636 Args:
637 hypotheses_list: List of Hypothesis.
638
639 Returns:
640 A list of hypotheses with high-level confidence scores.
641 """
642 for hyp in hypotheses_list:
643 if not isinstance(hyp.text, tuple) or len(hyp.text) != 3:
644 # the method must have been called in the wrong place
645 raise ValueError(
646 """Wrong format of the `text` attribute of a hypothesis.\n
647 Expected: (decoded_prediction, token_lengths, token_repetitions)\n
648 The method invocation is expected between .decode_hypothesis() and .compute_ctc_timestamps()"""
649 )
650 token_repetitions = hyp.text[2]
651 hyp.text = hyp.text[:2]
652 token_confidence = []
653 if self.exclude_blank_from_confidence:
654 non_blank_frame_confidence = hyp.non_blank_frame_confidence
655 i = 0
656 for tr in token_repetitions:
657 # token repetition can be zero
658 j = i + tr
659 token_confidence.append(self._aggregate_confidence(non_blank_frame_confidence[i:j]))
660 i = j
661 else:
662 # <blank> tokens are considered to belong to the last non-blank token, if any.
663 token_lengths = hyp.text[1]
664 if len(token_lengths) > 0:
665 ts = token_lengths[0]
666 for tl in token_lengths[1:] + [len(hyp.frame_confidence)]:
667 token_confidence.append(self._aggregate_confidence(hyp.frame_confidence[ts : ts + tl]))
668 ts += tl
669 hyp.token_confidence = token_confidence
670 if self.preserve_word_confidence:
671 for hyp in hypotheses_list:
672 hyp.word_confidence = self._aggregate_token_confidence(hyp)
673 return hypotheses_list
674
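The exclude-blank branch of `compute_confidence` groups non-blank frame confidences by how often each token repeated and then aggregates each group; a stdlib sketch with mean aggregation (the helper name is made up, and the real aggregation comes from ConfidenceMixin's `_aggregate_confidence`):

```python
def token_confidence_by_repetition(frame_conf, token_repetitions):
    # Group consecutive non-blank frame confidences per token, then average.
    out, i = [], 0
    for tr in token_repetitions:
        group = frame_conf[i:i + tr]
        # A repetition count can be zero; fall back to 0.0 for empty groups.
        out.append(sum(group) / len(group) if group else 0.0)
        i += tr
    return out
```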
675 @abstractmethod
676 def decode_tokens_to_str(self, tokens: List[int]) -> str:
677 """
678 Implemented by subclass in order to decoder a token id list into a string.
679
680 Args:
681 tokens: List of int representing the token ids.
682
683 Returns:
684 A decoded string.
685 """
686 raise NotImplementedError()
687
688 @abstractmethod
689 def decode_ids_to_tokens(self, tokens: List[int]) -> List[str]:
690 """
691 Implemented by subclass in order to decode a token id list into a token list.
692 A token list is the string representation of each token id.
693
694 Args:
695 tokens: List of int representing the token ids.
696
697 Returns:
698 A list of decoded tokens.
699 """
700 raise NotImplementedError()
701
702 def compute_ctc_timestamps(self, hypothesis: Hypothesis, timestamp_type: str = "all"):
703 """
704 Method to compute time stamps at char/subword, and word level given some hypothesis.
705 Requires the input hypothesis to contain a `text` field that is the tuple. The tuple contains -
706 the ctc collapsed integer ids, and the number of repetitions of each token.
707
708 Args:
709 hypothesis: A Hypothesis object, with a wrapped `text` field.
710 The `text` field must contain a tuple with two values -
711 The ctc collapsed integer ids
712 A list of integers that represents the number of repetitions per token.
713 timestamp_type: A str value that represents the type of time stamp calculated.
714 Can be one of "char", "word" or "all"
715
716 Returns:
717 A Hypothesis object with a modified `timestep` value, which is now a dictionary containing
718 the time stamp information.
719 """
720 assert timestamp_type in ['char', 'word', 'all']
721
722 # Unpack the temporary storage, and set the decoded predictions
723 decoded_prediction, token_lengths = hypothesis.text
724 hypothesis.text = decoded_prediction
725
726 # Retrieve offsets
727 char_offsets = word_offsets = None
728 char_offsets = self._compute_offsets(hypothesis, token_lengths, self.blank_id)
729
730 # Assert number of offsets and hypothesis tokens are 1:1 match.
731 if len(char_offsets) != len(hypothesis.text):
732 raise ValueError(
733 f"`char_offsets`: {char_offsets} and `processed_tokens`: {hypothesis.text}"
734 " have to be of the same length, but are: "
735 f"`len(offsets)`: {len(char_offsets)} and `len(processed_tokens)`:"
736 f" {len(hypothesis.text)}"
737 )
738
739 # Correctly process the token ids to chars/subwords.
740 for i, char in enumerate(hypothesis.text):
741 char_offsets[i]["char"] = self.decode_tokens_to_str([char])
742
743 # detect char vs subword models
744 lens = [len(list(v["char"])) > 1 for v in char_offsets]
745 if any(lens):
746 text_type = 'subword'
747 else:
748 text_type = 'char'
749
750 # retrieve word offsets from character offsets
751 word_offsets = None
752 if timestamp_type in ['word', 'all']:
753 if text_type == 'char':
754 word_offsets = self._get_word_offsets_chars(char_offsets, word_delimiter_char=self.word_seperator)
755 else:
756 word_offsets = self._get_word_offsets_subwords_sentencepiece(
757 char_offsets,
758 hypothesis,
759 decode_ids_to_tokens=self.decode_ids_to_tokens,
760 decode_tokens_to_str=self.decode_tokens_to_str,
761 )
762
763 # attach results
764 if len(hypothesis.timestep) > 0:
765 timestep_info = hypothesis.timestep
766 else:
767 timestep_info = []
768
769 # Setup defaults
770 hypothesis.timestep = {"timestep": timestep_info}
771
772 # Add char / subword time stamps
773 if char_offsets is not None and timestamp_type in ['char', 'all']:
774 hypothesis.timestep['char'] = char_offsets
775
776 # Add word time stamps
777 if word_offsets is not None and timestamp_type in ['word', 'all']:
778 hypothesis.timestep['word'] = word_offsets
779
780 # Convert the token indices to text
781 hypothesis.text = self.decode_tokens_to_str(hypothesis.text)
782
783 return hypothesis
784
785 @staticmethod
786 def _compute_offsets(
787 hypothesis: Hypothesis, token_lengths: List[int], ctc_token: int
788 ) -> List[Dict[str, Union[str, int]]]:
789 """
790         Utility method that calculates the individual time indices where a token starts and ends.
791
792 Args:
793 hypothesis: A Hypothesis object that contains `text` field that holds the character / subword token
794 emitted at every time step after ctc collapse.
795 token_lengths: A list of ints representing the lengths of each emitted token.
796 ctc_token: The integer of the ctc blank token used during ctc collapse.
797
798 Returns:
799             A list of dictionaries, one per emitted token, each containing "char", "start_offset" and "end_offset".
800 """
801 start_index = 0
802
803 # If the exact timestep information is available, utilize the 1st non-ctc blank token timestep
804 # as the start index.
805 if hypothesis.timestep is not None and len(hypothesis.timestep) > 0:
806 start_index = max(0, hypothesis.timestep[0] - 1)
807
808 # Construct the start and end indices brackets
809 end_indices = np.asarray(token_lengths).cumsum()
810 start_indices = np.concatenate(([start_index], end_indices[:-1]))
811
812 # Merge the results per token into a list of dictionaries
813 offsets = [
814 {"char": t, "start_offset": s, "end_offset": e}
815 for t, s, e in zip(hypothesis.text, start_indices, end_indices)
816 ]
817
818 # Filter out CTC token
819 offsets = list(filter(lambda offsets: offsets["char"] != ctc_token, offsets))
820 return offsets
821
822 @staticmethod
823 def _get_word_offsets_chars(
824 offsets: Dict[str, Union[str, float]], word_delimiter_char: str = " "
825 ) -> Dict[str, Union[str, float]]:
826 """
827 Utility method which constructs word time stamps out of character time stamps.
828
829 References:
830 This code is a port of the Hugging Face code for word time stamp construction.
831
832 Args:
833 offsets: A list of dictionaries, each containing "char", "start_offset" and "end_offset".
834 word_delimiter_char: Character token that represents the word delimiter. By default, " ".
835
836 Returns:
837 A list of dictionaries containing the word offsets. Each item contains "word", "start_offset" and
838 "end_offset".
839 """
840 word_offsets = []
841
842 last_state = "SPACE"
843 word = ""
844 start_offset = 0
845 end_offset = 0
846 for i, offset in enumerate(offsets):
847 char = offset["char"]
848 state = "SPACE" if char == word_delimiter_char else "WORD"
849
850 if state == last_state:
851 # If we are in the same state as before, we simply repeat what we've done before
852 end_offset = offset["end_offset"]
853 word += char
854 else:
855 # Switching state
856 if state == "SPACE":
857 # Finishing a word
858 word_offsets.append({"word": word, "start_offset": start_offset, "end_offset": end_offset})
859 else:
860 # Starting a new word
861 start_offset = offset["start_offset"]
862 end_offset = offset["end_offset"]
863 word = char
864
865 last_state = state
866 if last_state == "WORD":
867 word_offsets.append({"word": word, "start_offset": start_offset, "end_offset": end_offset})
868
869 return word_offsets
870
871 @staticmethod
872 def _get_word_offsets_subwords_sentencepiece(
873 offsets: Dict[str, Union[str, float]],
874 hypothesis: Hypothesis,
875 decode_ids_to_tokens: Callable[[List[int]], str],
876 decode_tokens_to_str: Callable[[List[int]], str],
877 ) -> Dict[str, Union[str, float]]:
878 """
879 Utility method which constructs word time stamps out of sub-word time stamps.
880
881         **Note**: Only supports SentencePiece-based tokenizers!
882
883 Args:
884 offsets: A list of dictionaries, each containing "char", "start_offset" and "end_offset".
885 hypothesis: Hypothesis object that contains `text` field, where each token is a sub-word id
886 after ctc collapse.
887 decode_ids_to_tokens: A Callable function that accepts a list of integers and maps it to a sub-word.
888 decode_tokens_to_str: A Callable function that accepts a list of integers and maps it to text / str.
889
890 Returns:
891 A list of dictionaries containing the word offsets. Each item contains "word", "start_offset" and
892 "end_offset".
893 """
894 word_offsets = []
895 built_token = []
896 previous_token_index = 0
897 # For every collapsed sub-word token
898 for i, char in enumerate(hypothesis.text):
899 # Compute the sub-word text representation, and the decoded text (stripped of sub-word markers).
900 token = decode_ids_to_tokens([char])[0]
901 token_text = decode_tokens_to_str([char])
902
903 # It is a sub-word token, or contains an identifier at the beginning such as _ or ## that was stripped
904 # after forcing partial text conversion of the token.
905 if token != token_text:
906 # If there are any partially or fully built sub-word token ids, construct to text.
907 # Note: This is "old" subword, that occurs *after* current sub-word has started.
908 if len(built_token) > 0:
909 word_offsets.append(
910 {
911 "word": decode_tokens_to_str(built_token),
912 "start_offset": offsets[previous_token_index]["start_offset"],
913 "end_offset": offsets[i]["start_offset"],
914 }
915 )
916
917 # Prepare list of new sub-word ids
918 built_token.clear()
919 built_token.append(char)
920 previous_token_index = i
921 else:
922 # If the token does not contain any sub-word start mark, then the sub-word has not completed yet
923 # Append to current sub-word list.
924 built_token.append(char)
925
926 # Inject the start offset of the first token to word offsets
927         # This is because we always delay the injection of the first sub-word due to the loop
928 # condition and check whether built token is ready or not.
929 # Therefore without this forced injection, the start_offset appears as off by 1.
930 if len(word_offsets) == 0:
931 # alaptev: sometimes word_offsets can be empty
932 if len(built_token) > 0:
933 word_offsets.append(
934 {
935 "word": decode_tokens_to_str(built_token),
936 "start_offset": offsets[0]["start_offset"],
937 "end_offset": offsets[-1]["end_offset"],
938 }
939 )
940 built_token.clear()
941 else:
942 word_offsets[0]["start_offset"] = offsets[0]["start_offset"]
943
944 # If there are any remaining tokens left, inject them all into the final word offset.
945 # Note: The start offset of this token is the start time of the first token inside build_token.
946 # Note: The end offset of this token is the end time of the last token inside build_token
947 if len(built_token) > 0:
948 word_offsets.append(
949 {
950 "word": decode_tokens_to_str(built_token),
951 "start_offset": offsets[-(len(built_token))]["start_offset"],
952 "end_offset": offsets[-1]["end_offset"],
953 }
954 )
955 built_token.clear()
956
957 return word_offsets
958
959 @property
960 def preserve_alignments(self):
961 return self._preserve_alignments
962
963 @preserve_alignments.setter
964 def preserve_alignments(self, value):
965 self._preserve_alignments = value
966
967 if hasattr(self, 'decoding'):
968 self.decoding.preserve_alignments = value
969
970 @property
971 def compute_timestamps(self):
972 return self._compute_timestamps
973
974 @compute_timestamps.setter
975 def compute_timestamps(self, value):
976 self._compute_timestamps = value
977
978 if hasattr(self, 'decoding'):
979 self.decoding.compute_timestamps = value
980
981 @property
982 def preserve_frame_confidence(self):
983 return self._preserve_frame_confidence
984
985 @preserve_frame_confidence.setter
986 def preserve_frame_confidence(self, value):
987 self._preserve_frame_confidence = value
988
989 if hasattr(self, 'decoding'):
990 self.decoding.preserve_frame_confidence = value
991
992
993 class CTCDecoding(AbstractCTCDecoding):
994 """
995 Used for performing CTC auto-regressive / non-auto-regressive decoding of the logprobs for character
996 based models.
997
998 Args:
999 decoding_cfg: A dict-like object which contains the following key-value pairs.
1000 strategy: str value which represents the type of decoding that can occur.
1001                 Possible values are:
1002 - greedy (for greedy decoding).
1003 - beam (for DeepSpeed KenLM based decoding).
1004
1005 compute_timestamps: A bool flag, which determines whether to compute the character/subword, or
1006             word based timestamp mapping the output log-probabilities to discrete intervals of timestamps.
1007 The timestamps will be available in the returned Hypothesis.timestep as a dictionary.
1008
1009 ctc_timestamp_type: A str value, which represents the types of timestamps that should be calculated.
1010 Can take the following values - "char" for character/subword time stamps, "word" for word level
1011 time stamps and "all" (default), for both character level and word level time stamps.
1012
1013             word_seperator: Str token representing the separator between words.
1014
1015 preserve_alignments: Bool flag which preserves the history of logprobs generated during
1016 decoding (sample / batched). When set to true, the Hypothesis will contain
1017 the non-null value for `logprobs` in it. Here, `logprobs` is a torch.Tensors.
1018
1019 confidence_cfg: A dict-like object which contains the following key-value pairs related to confidence
1020 scores. In order to obtain hypotheses with confidence scores, please utilize
1021 `ctc_decoder_predictions_tensor` function with the `preserve_frame_confidence` flag set to True.
1022
1023 preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores
1024 generated during decoding. When set to true, the Hypothesis will contain
1025 the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of floats.
1026 preserve_token_confidence: Bool flag which preserves the history of per-token confidence scores
1027 generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
1028 the non-null value for `token_confidence` in it. Here, `token_confidence` is a List of floats.
1029
1030 The length of the list corresponds to the number of recognized tokens.
1031 preserve_word_confidence: Bool flag which preserves the history of per-word confidence scores
1032 generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
1033 the non-null value for `word_confidence` in it. Here, `word_confidence` is a List of floats.
1034
1035 The length of the list corresponds to the number of recognized words.
1036 exclude_blank: Bool flag indicating that blank token confidence scores are to be excluded
1037 from the `token_confidence`.
1038 aggregation: Which aggregation type to use for collapsing per-token confidence into per-word confidence.
1039 Valid options are `mean`, `min`, `max`, `prod`.
1040 measure_cfg: A dict-like object which contains the measure name and settings to compute per-frame
1041 confidence scores.
1042
1043 name: The measure name (str).
1044 Supported values:
1045 - 'max_prob' for using the maximum token probability as a confidence.
1046 - 'entropy' for using a normalized entropy of a log-likelihood vector.
1047
1048 entropy_type: Which type of entropy to use (str).
1049 Used if confidence_measure_cfg.name is set to `entropy`.
1050 Supported values:
1051                             - 'gibbs' for the (standard) Gibbs entropy. If the alpha (α) is provided,
1052                             the formula is the following: H_α = -sum_i((p^α_i)*log(p^α_i)).
1053                             Note that for this entropy, the alpha should satisfy the following inequality:
1054                             (log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= α <= (1+log(V-1))/log(V-1)
1055                             where V is the model vocabulary size.
1056                             - 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
1057                             Tsallis entropy formula is the following: H_α = 1/(α-1)*(1-sum_i(p^α_i)),
1058                             where α is a parameter. When α == 1, it works like the Gibbs entropy.
1059                             More: https://en.wikipedia.org/wiki/Tsallis_entropy
1060                             - 'renyi' for the Rényi entropy.
1061                             Rényi entropy formula is the following: H_α = 1/(1-α)*log_2(sum_i(p^α_i)),
1062                             where α is a parameter. When α == 1, it works like the Gibbs entropy.
1063                             More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
1064
1065                         alpha: Power scale for logsoftmax (α for entropies). Here we restrict it to be > 0.
1066                             When the alpha equals one, scaling is not applied to 'max_prob',
1067                             and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
1068
1069 entropy_norm: A mapping of the entropy value to the interval [0,1].
1070 Supported values:
1071 - 'lin' for using the linear mapping.
1072 - 'exp' for using exponential mapping with linear shift.
1073
1074 batch_dim_index: Index of the batch dimension of ``targets`` and ``predictions`` parameters of
1075 ``ctc_decoder_predictions_tensor`` methods. Can be either 0 or 1.
1076
1077 The config may further contain the following sub-dictionaries:
1078 "greedy":
1079 preserve_alignments: Same as above, overrides above value.
1080 compute_timestamps: Same as above, overrides above value.
1081 preserve_frame_confidence: Same as above, overrides above value.
1082 confidence_measure_cfg: Same as above, overrides confidence_cfg.measure_cfg.
1083
1084 "beam":
1085 beam_size: int, defining the beam size for beam search. Must be >= 1.
1086                 If beam_size == 1, will perform cached greedy search. This might give slightly different
1087 results compared to the greedy search above.
1088
1089 return_best_hypothesis: optional bool, whether to return just the best hypothesis or all of the
1090 hypotheses after beam search has concluded. This flag is set by default.
1091
1092 beam_alpha: float, the strength of the Language model on the final score of a token.
1093 final_score = acoustic_score + beam_alpha * lm_score + beam_beta * seq_length.
1094
1095 beam_beta: float, the strength of the sequence length penalty on the final score of a token.
1096 final_score = acoustic_score + beam_alpha * lm_score + beam_beta * seq_length.
1097
1098 kenlm_path: str, path to a KenLM ARPA or .binary file (depending on the strategy chosen).
1099 If the path is invalid (file is not found at path), will raise a deferred error at the moment
1100 of calculation of beam search, so that users may update / change the decoding strategy
1101 to point to the correct file.
1102
1103         blank_id: The id of the CTC blank token.
1104 """
1105
1106 def __init__(
1107 self, decoding_cfg, vocabulary,
1108 ):
1109 blank_id = len(vocabulary)
1110 self.vocabulary = vocabulary
1111 self.labels_map = dict([(i, vocabulary[i]) for i in range(len(vocabulary))])
1112
1113 super().__init__(decoding_cfg=decoding_cfg, blank_id=blank_id)
1114
1115 # Finalize Beam Search Decoding framework
1116 if isinstance(self.decoding, ctc_beam_decoding.AbstractBeamCTCInfer):
1117 self.decoding.set_vocabulary(self.vocabulary)
1118 self.decoding.set_decoding_type('char')
1119
1120 def _aggregate_token_confidence(self, hypothesis: Hypothesis) -> List[float]:
1121 """
1122 Implemented by subclass in order to aggregate token confidence to a word-level confidence.
1123
1124 Args:
1125 hypothesis: Hypothesis
1126
1127 Returns:
1128 A list of word-level confidence scores.
1129 """
1130 return self._aggregate_token_confidence_chars(
1131 self.decode_tokens_to_str(hypothesis.text[0]).split(), hypothesis.token_confidence
1132 )
1133
1134 def decode_tokens_to_str(self, tokens: List[int]) -> str:
1135 """
1136         Implemented by subclass in order to decode a token list into a string.
1137
1138 Args:
1139 tokens: List of int representing the token ids.
1140
1141 Returns:
1142 A decoded string.
1143 """
1144 hypothesis = ''.join(self.decode_ids_to_tokens(tokens))
1145 return hypothesis
1146
1147 def decode_ids_to_tokens(self, tokens: List[int]) -> List[str]:
1148 """
1149 Implemented by subclass in order to decode a token id list into a token list.
1150 A token list is the string representation of each token id.
1151
1152 Args:
1153 tokens: List of int representing the token ids.
1154
1155 Returns:
1156 A list of decoded tokens.
1157 """
1158 token_list = [self.labels_map[c] for c in tokens if c != self.blank_id]
1159 return token_list
1160
1161
1162 class WER(Metric):
1163 """
1164 This metric computes numerator and denominator for Overall Word Error Rate (WER) between prediction and reference
1165 texts. When doing distributed training/evaluation the result of ``res=WER(predictions, targets, target_lengths)``
1166 calls will be all-reduced between all workers using SUM operations. Here ``res`` contains three numbers
1167     ``res=[wer, total_levenshtein_distance, total_number_of_words]``.
1168
1169 If used with PytorchLightning LightningModule, include wer_numerator and wer_denominators inside validation_step
1170     results. Then aggregate (sum) them at the end of the validation epoch to correctly compute validation WER.
1171
1172 Example:
1173 def validation_step(self, batch, batch_idx):
1174 ...
1175 wer_num, wer_denom = self.__wer(predictions, transcript, transcript_len)
1176 self.val_outputs = {'val_loss': loss_value, 'val_wer_num': wer_num, 'val_wer_denom': wer_denom}
1177 return self.val_outputs
1178
1179 def on_validation_epoch_end(self):
1180 ...
1181 wer_num = torch.stack([x['val_wer_num'] for x in self.val_outputs]).sum()
1182 wer_denom = torch.stack([x['val_wer_denom'] for x in self.val_outputs]).sum()
1183 tensorboard_logs = {'validation_loss': val_loss_mean, 'validation_avg_wer': wer_num / wer_denom}
1184 self.val_outputs.clear() # free memory
1185 return {'val_loss': val_loss_mean, 'log': tensorboard_logs}
1186
1187 Args:
1188 decoding: An instance of CTCDecoding.
1189 use_cer: Whether to use Character Error Rate instead of Word Error Rate.
1190 log_prediction: Whether to log a single decoded sample per call.
1191 fold_consecutive: Whether repeated consecutive characters should be folded into one when decoding.
1192
1193 Returns:
1194         res: a tuple of 3 zero-dimensional float32 ``torch.Tensor`` objects: a WER score, a sum of Levenshtein
1195 distances for all prediction - reference pairs, total number of words in all references.
1196 """
1197
1198 full_state_update: bool = True
1199
1200 def __init__(
1201 self,
1202 decoding: CTCDecoding,
1203 use_cer=False,
1204 log_prediction=True,
1205 fold_consecutive=True,
1206 dist_sync_on_step=False,
1207 ):
1208 super().__init__(dist_sync_on_step=dist_sync_on_step)
1209
1210 self.decoding = decoding
1211 self.use_cer = use_cer
1212 self.log_prediction = log_prediction
1213 self.fold_consecutive = fold_consecutive
1214
1215 self.add_state("scores", default=torch.tensor(0), dist_reduce_fx='sum', persistent=False)
1216 self.add_state("words", default=torch.tensor(0), dist_reduce_fx='sum', persistent=False)
1217
1218 def update(
1219 self,
1220 predictions: torch.Tensor,
1221 targets: torch.Tensor,
1222 target_lengths: torch.Tensor,
1223 predictions_lengths: torch.Tensor = None,
1224 ):
1225 """
1226 Updates metric state.
1227 Args:
1228 predictions: an integer torch.Tensor of shape ``[Batch, Time, {Vocabulary}]`` (if ``batch_dim_index == 0``) or
1229 ``[Time, Batch]`` (if ``batch_dim_index == 1``)
1230 targets: an integer torch.Tensor of shape ``[Batch, Time]`` (if ``batch_dim_index == 0``) or
1231 ``[Time, Batch]`` (if ``batch_dim_index == 1``)
1232 target_lengths: an integer torch.Tensor of shape ``[Batch]``
1233 predictions_lengths: an integer torch.Tensor of shape ``[Batch]``
1234 """
1235 words = 0
1236 scores = 0
1237 references = []
1238 with torch.no_grad():
1239 # prediction_cpu_tensor = tensors[0].long().cpu()
1240 targets_cpu_tensor = targets.long().cpu()
1241 tgt_lenths_cpu_tensor = target_lengths.long().cpu()
1242
1243 # iterate over batch
1244 for ind in range(targets_cpu_tensor.shape[0]):
1245 tgt_len = tgt_lenths_cpu_tensor[ind].item()
1246 target = targets_cpu_tensor[ind][:tgt_len].numpy().tolist()
1247 reference = self.decoding.decode_tokens_to_str(target)
1248 references.append(reference)
1249
1250 hypotheses, _ = self.decoding.ctc_decoder_predictions_tensor(
1251 predictions, predictions_lengths, fold_consecutive=self.fold_consecutive
1252 )
1253
1254 if self.log_prediction:
1255 logging.info(f"\n")
1256 logging.info(f"reference:{references[0]}")
1257 logging.info(f"predicted:{hypotheses[0]}")
1258
1259 for h, r in zip(hypotheses, references):
1260 if self.use_cer:
1261 h_list = list(h)
1262 r_list = list(r)
1263 else:
1264 h_list = h.split()
1265 r_list = r.split()
1266 words += len(r_list)
1267             # Compute Levenshtein distance
1268 scores += editdistance.eval(h_list, r_list)
1269
1270 self.scores = torch.tensor(scores, device=self.scores.device, dtype=self.scores.dtype)
1271 self.words = torch.tensor(words, device=self.words.device, dtype=self.words.dtype)
1272 # return torch.tensor([scores, words]).to(predictions.device)
1273
1274 def compute(self):
1275 scores = self.scores.detach().float()
1276 words = self.words.detach().float()
1277 return scores / words, scores, words
1278
1279
1280 @dataclass
1281 class CTCDecodingConfig:
1282 strategy: str = "greedy"
1283
1284 # preserve decoding alignments
1285 preserve_alignments: Optional[bool] = None
1286
1287 # compute ctc time stamps
1288 compute_timestamps: Optional[bool] = None
1289
1290     # token representing word separator
1291 word_seperator: str = " "
1292
1293 # type of timestamps to calculate
1294 ctc_timestamp_type: str = "all" # can be char, word or all for both
1295
1296 # batch dimension
1297 batch_dim_index: int = 0
1298
1299 # greedy decoding config
1300 greedy: ctc_greedy_decoding.GreedyCTCInferConfig = ctc_greedy_decoding.GreedyCTCInferConfig()
1301
1302 # beam decoding config
1303 beam: ctc_beam_decoding.BeamCTCInferConfig = ctc_beam_decoding.BeamCTCInferConfig(beam_size=4)
1304
1305 # confidence config
1306 confidence_cfg: ConfidenceConfig = ConfidenceConfig()
1307
1308 # can be used to change temperature for decoding
1309 temperature: float = 1.0
1310
[end of nemo/collections/asr/metrics/wer.py]
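The accumulation that `WER.update` / `WER.compute` perform in the file above can be sketched with the stdlib alone. This is a minimal sketch, not NeMo code — the `levenshtein` helper stands in for the third-party `editdistance.eval` call used there:

```python
# Stdlib-only sketch of the WER accumulation: per hypothesis/reference pair,
# split into words (or characters for CER), sum edit distances and reference
# lengths, then report scores / words, mirroring WER.compute().

def levenshtein(a, b):
    """Dynamic-programming edit distance between two token sequences."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        curr = [i]
        for j, y in enumerate(b, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (x != y)))  # substitution
        prev = curr
    return prev[-1]


def word_error_rate(hypotheses, references, use_cer=False):
    scores, words = 0, 0
    for h, r in zip(hypotheses, references):
        h_list = list(h) if use_cer else h.split()
        r_list = list(r) if use_cer else r.split()
        words += len(r_list)
        scores += levenshtein(h_list, r_list)
    return scores / words, scores, words
```

For example, `word_error_rate(["a b c"], ["a x c"])` counts one substitution over three reference words; with `use_cer=True` the same loop runs over characters, mirroring the `self.use_cer` branch in `WER.update`.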
[start of nemo/collections/asr/models/configs/aligner_config.py]
1 # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from dataclasses import dataclass
16
17 from nemo.collections.asr.parts.k2.classes import GraphModuleConfig
18
19
20 @dataclass
21 class AlignerCTCConfig:
22 prob_suppress_index: int = -1
23 prob_suppress_value: float = 1.0
24
25
26 @dataclass
27 class AlignerRNNTConfig:
28 predictor_window_size: int = 0
29 predictor_step_size: int = 1
30
31
32 @dataclass
33 class AlignerWrapperModelConfig:
34 alignment_type: str = "forced"
35 word_output: bool = True
36 cpu_decoding: bool = False
37 decode_batch_size: int = 0
38 ctc_cfg: AlignerCTCConfig = AlignerCTCConfig()
39 rnnt_cfg: AlignerRNNTConfig = AlignerRNNTConfig()
40
41
42 @dataclass
43 class K2AlignerWrapperModelConfig(AlignerWrapperModelConfig):
44 decoder_module_cfg: GraphModuleConfig = GraphModuleConfig()
45
[end of nemo/collections/asr/models/configs/aligner_config.py]
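The dataclass configs above are plain structured defaults that are later merged with user-provided overrides (via OmegaConf in NeMo). A stdlib-only sketch of that composition, using hypothetical mirrors of two of the classes above (note `default_factory`, which current Python requires for mutable dataclass defaults):

```python
from dataclasses import dataclass, field, replace, asdict


@dataclass
class AlignerCTCConfig:  # hypothetical mirror of the config above
    prob_suppress_index: int = -1
    prob_suppress_value: float = 1.0


@dataclass
class AlignerWrapperModelConfig:  # hypothetical mirror with the nested config
    alignment_type: str = "forced"
    word_output: bool = True
    # default_factory gives every instance its own nested config object
    ctc_cfg: AlignerCTCConfig = field(default_factory=AlignerCTCConfig)


# Override a top-level field without touching the remaining defaults
cfg = replace(AlignerWrapperModelConfig(), alignment_type="argmax")
cfg.ctc_cfg.prob_suppress_value = 0.5  # override a nested field in place
```

`asdict(cfg)` then yields the same nested dict layout a YAML override file would produce.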
[start of nemo/collections/asr/models/configs/asr_models_config.py]
1 # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from dataclasses import dataclass
16 from typing import Any, Dict, List, Optional
17
18 from omegaconf import MISSING
19
20 import nemo.core.classes.dataset
21 from nemo.collections.asr.metrics.wer import CTCDecodingConfig
22 from nemo.collections.asr.modules.audio_preprocessing import (
23 AudioToMelSpectrogramPreprocessorConfig,
24 SpectrogramAugmentationConfig,
25 )
26 from nemo.collections.asr.modules.conv_asr import ConvASRDecoderConfig, ConvASREncoderConfig
27 from nemo.core.config import modelPT as model_cfg
28
29
30 @dataclass
31 class ASRDatasetConfig(nemo.core.classes.dataset.DatasetConfig):
32 manifest_filepath: Optional[Any] = None
33 sample_rate: int = MISSING
34 labels: List[str] = MISSING
35 trim_silence: bool = False
36
37 # Tarred dataset support
38 is_tarred: bool = False
39 tarred_audio_filepaths: Optional[Any] = None
40 tarred_shard_strategy: str = "scatter"
41 shard_manifests: bool = False
42 shuffle_n: int = 0
43
44 # Optional
45 int_values: Optional[int] = None
46 augmentor: Optional[Dict[str, Any]] = None
47 max_duration: Optional[float] = None
48 min_duration: Optional[float] = None
49 max_utts: int = 0
50 blank_index: int = -1
51 unk_index: int = -1
52 normalize: bool = False
53 trim: bool = True
54 parser: Optional[str] = 'en'
55 eos_id: Optional[int] = None
56 bos_id: Optional[int] = None
57 pad_id: int = 0
58 use_start_end_token: bool = False
59 return_sample_id: Optional[bool] = False
60
61 # bucketing params
62 bucketing_strategy: str = "synced_randomized"
63 bucketing_batch_size: Optional[Any] = None
64 bucketing_weights: Optional[List[int]] = None
65
66
67 @dataclass
68 class EncDecCTCConfig(model_cfg.ModelConfig):
69 # Model global arguments
70 sample_rate: int = 16000
71 repeat: int = 1
72 dropout: float = 0.0
73 separable: bool = False
74 labels: List[str] = MISSING
75
76 # Dataset configs
77 train_ds: ASRDatasetConfig = ASRDatasetConfig(manifest_filepath=None, shuffle=True)
78 validation_ds: ASRDatasetConfig = ASRDatasetConfig(manifest_filepath=None, shuffle=False)
79 test_ds: ASRDatasetConfig = ASRDatasetConfig(manifest_filepath=None, shuffle=False)
80
81 # Optimizer / Scheduler config
82 optim: Optional[model_cfg.OptimConfig] = model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
83
84 # Model component configs
85 preprocessor: AudioToMelSpectrogramPreprocessorConfig = AudioToMelSpectrogramPreprocessorConfig()
86 spec_augment: Optional[SpectrogramAugmentationConfig] = SpectrogramAugmentationConfig()
87 encoder: ConvASREncoderConfig = ConvASREncoderConfig()
88 decoder: ConvASRDecoderConfig = ConvASRDecoderConfig()
89 decoding: CTCDecodingConfig = CTCDecodingConfig()
90
91
92 @dataclass
93 class EncDecCTCModelConfig(model_cfg.NemoConfig):
94 model: EncDecCTCConfig = EncDecCTCConfig()
95
96
97 @dataclass
98 class CacheAwareStreamingConfig:
99 chunk_size: int = 0 # the size of each chunk at each step, it can be a list of two integers to specify different chunk sizes for the first step and others
100 shift_size: int = 0 # the size of the shift in each step, it can be a list of two integers to specify different shift sizes for the first step and others
101
102 cache_drop_size: int = 0 # the number of steps to drop from the cache
103 last_channel_cache_size: int = 0 # the size of the needed cache for last channel layers
104
105 valid_out_len: int = 0 # the number of the steps in the final output which are valid (have the same value as in the offline mode)
106
107 pre_encode_cache_size: int = 0 # the size of the needed cache for the pre-encoding part of the model to avoid caching inside the pre-encoding layers
108 drop_extra_pre_encoded: int = 0 # the number of steps to get dropped after the pre-encoding layer
109
110 last_channel_num: int = 0 # number of the last channel layers (like MHA layers) which need caching in the model
111 last_time_num: int = 0 # number of the last time layers (like convolutions) which need caching in the model
112
[end of nemo/collections/asr/models/configs/asr_models_config.py]
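To illustrate how `chunk_size` and `shift_size` in `CacheAwareStreamingConfig` interact, here is a hypothetical helper (not part of NeMo) that partitions a frame sequence into the per-step windows a cache-aware streaming encoder would consume; windows overlap whenever `shift_size < chunk_size`:

```python
def streaming_windows(num_frames, chunk_size, shift_size):
    """Return (start, end) frame brackets for each streaming step.

    Sketch only: real cache-aware streaming additionally tracks
    cache_drop_size, pre_encode_cache_size, etc. from the config above.
    """
    windows = []
    start = 0
    while start < num_frames:
        end = min(start + chunk_size, num_frames)
        windows.append((start, end))
        start += shift_size
    return windows
```

With `chunk_size=4, shift_size=2`, consecutive steps share half their frames; with `shift_size == chunk_size` the stream is cut into disjoint chunks.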
[start of nemo/collections/asr/models/configs/classification_models_config.py]
1 # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from dataclasses import dataclass
16 from typing import Any, Dict, List, Optional
17
18 from omegaconf import MISSING
19
20 import nemo.core.classes.dataset
21 from nemo.collections.asr.modules.audio_preprocessing import (
22 AudioToMFCCPreprocessorConfig,
23 CropOrPadSpectrogramAugmentationConfig,
24 SpectrogramAugmentationConfig,
25 )
26 from nemo.collections.asr.modules.conv_asr import ConvASRDecoderClassificationConfig, ConvASREncoderConfig
27 from nemo.core.config import modelPT as model_cfg
28
29
30 @dataclass
31 class EncDecClassificationDatasetConfig(nemo.core.classes.dataset.DatasetConfig):
32 manifest_filepath: Optional[str] = None
33 sample_rate: int = MISSING
34 labels: List[str] = MISSING
35 trim_silence: bool = False
36
37 # Tarred dataset support
38 is_tarred: bool = False
39 tarred_audio_filepaths: Optional[str] = None
40 tarred_shard_strategy: str = "scatter"
41 shuffle_n: int = 0
42
43 # Optional
44 int_values: Optional[int] = None
45 augmentor: Optional[Dict[str, Any]] = None
46 max_duration: Optional[float] = None
47 min_duration: Optional[float] = None
48 cal_labels_occurrence: Optional[bool] = False
49
50 # VAD Optional
51 vad_stream: Optional[bool] = None
52 window_length_in_sec: float = 0.31
53 shift_length_in_sec: float = 0.01
54 normalize_audio: bool = False
55 is_regression_task: bool = False
56
57 # bucketing params
58 bucketing_strategy: str = "synced_randomized"
59 bucketing_batch_size: Optional[Any] = None
60 bucketing_weights: Optional[List[int]] = None
61
62
63 @dataclass
64 class EncDecClassificationConfig(model_cfg.ModelConfig):
65 # Model global arguments
66 sample_rate: int = 16000
67 repeat: int = 1
68 dropout: float = 0.0
69 separable: bool = True
70 kernel_size_factor: float = 1.0
71 labels: List[str] = MISSING
72 timesteps: int = MISSING
73
74 # Dataset configs
75 train_ds: EncDecClassificationDatasetConfig = EncDecClassificationDatasetConfig(
76 manifest_filepath=None, shuffle=True, trim_silence=False
77 )
78 validation_ds: EncDecClassificationDatasetConfig = EncDecClassificationDatasetConfig(
79 manifest_filepath=None, shuffle=False
80 )
81 test_ds: EncDecClassificationDatasetConfig = EncDecClassificationDatasetConfig(
82 manifest_filepath=None, shuffle=False
83 )
84
85 # Optimizer / Scheduler config
86 optim: Optional[model_cfg.OptimConfig] = model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
87
88 # Model component configs
89 preprocessor: AudioToMFCCPreprocessorConfig = AudioToMFCCPreprocessorConfig()
90 spec_augment: Optional[SpectrogramAugmentationConfig] = SpectrogramAugmentationConfig()
91 crop_or_pad_augment: Optional[CropOrPadSpectrogramAugmentationConfig] = CropOrPadSpectrogramAugmentationConfig(
92 audio_length=timesteps
93 )
94
95 encoder: ConvASREncoderConfig = ConvASREncoderConfig()
96 decoder: ConvASRDecoderClassificationConfig = ConvASRDecoderClassificationConfig()
97
98
99 @dataclass
100 class EncDecClassificationModelConfig(model_cfg.NemoConfig):
101 model: EncDecClassificationConfig = EncDecClassificationConfig()
102
[end of nemo/collections/asr/models/configs/classification_models_config.py]
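Configs such as `EncDecClassificationConfig` above assign nested dataclass instances directly as field defaults, a style that OmegaConf structured configs support. With plain stdlib dataclasses (which reject mutable defaults on Python 3.11+), the equivalent pattern uses `field(default_factory=...)`. A minimal illustrative sketch of that pattern, not NeMo code:

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class DatasetConfig:
    manifest_filepath: Optional[str] = None
    shuffle: bool = False
    labels: List[str] = field(default_factory=list)


@dataclass
class ClassificationConfig:
    sample_rate: int = 16000
    labels: List[str] = field(default_factory=list)
    # default_factory builds a fresh nested config for every instance,
    # so instances never share mutable state.
    train_ds: DatasetConfig = field(default_factory=lambda: DatasetConfig(shuffle=True))
    validation_ds: DatasetConfig = field(default_factory=DatasetConfig)


a = ClassificationConfig(labels=["yes", "no"])
b = ClassificationConfig()
a.train_ds.labels.append("yes")
assert b.train_ds.labels == []       # no shared state between instances
assert a.train_ds.shuffle is True    # per-field default applied
```

The same nesting shape as the real config is kept (train/validation dataset blocks under a model block), but every name here is illustrative.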
[start of nemo/collections/asr/models/configs/diarizer_config.py]
1 # Copyright (c) 2023, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from dataclasses import asdict, dataclass
16 from typing import Any, Dict, Optional, Tuple, Union
17
18
19 @dataclass
20 class DiarizerComponentConfig:
21 """Dataclass to imitate HydraConfig dict when accessing parameters."""
22
23 def get(self, name: str, default: Optional[Any] = None):
24 return getattr(self, name, default)
25
26 def __iter__(self):
27 for key in asdict(self):
28 yield key
29
30 def dict(self) -> Dict:
31 return asdict(self)
32
33
34 @dataclass
35 class ASRDiarizerCTCDecoderParams:
36 pretrained_language_model: Optional[str] = None # KenLM model file: .arpa model file or .bin binary file.
37 beam_width: int = 32
38 alpha: float = 0.5
39 beta: float = 2.5
40
41
42 @dataclass
43 class ASRRealigningLMParams:
44 # Provide a KenLM language model in .arpa format.
45 arpa_language_model: Optional[str] = None
46 # Min number of words for the left context.
47 min_number_of_words: int = 3
48 # Max number of words for the right context.
49 max_number_of_words: int = 10
50 # The threshold for the difference between two log probability values from two hypotheses.
51 logprob_diff_threshold: float = 1.2
52
53
54 @dataclass
55 class ASRDiarizerParams(DiarizerComponentConfig):
56 # If True, speech segmentation for diarization is based on word timestamps from ASR inference.
57 asr_based_vad: bool = False
58 # Threshold (in sec) that caps the gap between two words when generating VAD timestamps using ASR based VAD.
59 asr_based_vad_threshold: float = 1.0
60 # Batch size can be dependent on each ASR model. Default batch sizes are applied if set to null.
61 asr_batch_size: Optional[int] = None
62 # Native decoder delay. Use null to apply the default value for each ASR model.
63 decoder_delay_in_sec: Optional[float] = None
64 # Offset to set a reference point from the start of the word. Recommended range of values is [-0.05 0.2].
65 word_ts_anchor_offset: Optional[float] = None
66 # Select which part of the word timestamp we want to use. The options are: 'start', 'end', 'mid'.
67 word_ts_anchor_pos: str = "start"
68 # Fix the word timestamp using VAD output. You must provide a VAD model to use this feature.
69 fix_word_ts_with_VAD: bool = False
70 # If True, use colored text to distinguish speakers in the output transcript.
71 colored_text: bool = False
72 # If True, the start and end time of each speaker turn is printed in the output transcript.
73 print_time: bool = True
74 # If True, the output transcript breaks the line to fix the line width (default is 90 chars)
75 break_lines: bool = False
76
77
78 @dataclass
79 class ASRDiarizerConfig(DiarizerComponentConfig):
80 model_path: Optional[str] = "stt_en_conformer_ctc_large"
81 parameters: ASRDiarizerParams = ASRDiarizerParams()
82 ctc_decoder_parameters: ASRDiarizerCTCDecoderParams = ASRDiarizerCTCDecoderParams()
83 realigning_lm_parameters: ASRRealigningLMParams = ASRRealigningLMParams()
84
85
86 @dataclass
87 class VADParams(DiarizerComponentConfig):
88 window_length_in_sec: float = 0.15 # Window length in sec for VAD context input
89 shift_length_in_sec: float = 0.01 # Shift length in sec for generating frame-level VAD predictions
90 smoothing: Union[str, bool] = "median" # False or type of smoothing method (eg: median)
91 overlap: float = 0.5 # Overlap ratio for overlapped mean/median smoothing filter
92 onset: float = 0.1 # Onset threshold for detecting the beginning of a speech segment
93 offset: float = 0.1 # Offset threshold for detecting the end of a speech segment
94 pad_onset: float = 0.1 # Duration added before each speech segment
95 pad_offset: float = 0 # Duration added after each speech segment
96 min_duration_on: float = 0 # Threshold for small non_speech deletion
97 min_duration_off: float = 0.2 # Threshold for short speech segment deletion
98 filter_speech_first: bool = True
99
100
101 @dataclass
102 class VADConfig(DiarizerComponentConfig):
103 model_path: str = "vad_multilingual_marblenet" # .nemo local model path or pretrained VAD model name
104 external_vad_manifest: Optional[str] = None
105 parameters: VADParams = VADParams()
106
107
108 @dataclass
109 class SpeakerEmbeddingsParams(DiarizerComponentConfig):
110 # Window length(s) in sec (floating-point number). Either a number or a list. ex) 1.5 or [1.5,1.0,0.5]
111 window_length_in_sec: Tuple[float] = (1.5, 1.25, 1.0, 0.75, 0.5)
112 # Shift length(s) in sec (floating-point number). Either a number or a list. ex) 0.75 or [0.75,0.5,0.25]
113 shift_length_in_sec: Tuple[float] = (0.75, 0.625, 0.5, 0.375, 0.25)
114 # Weight for each scale. None (for single scale) or list with window/shift scale count. ex) [0.33,0.33,0.33]
115 multiscale_weights: Tuple[float] = (1, 1, 1, 1, 1)
116 # Save speaker embeddings in pickle format. Set True if the clustering result is used by other models, such as MSDD.
117 save_embeddings: bool = True
118
119
120 @dataclass
121 class SpeakerEmbeddingsConfig(DiarizerComponentConfig):
122 # .nemo local model path or pretrained model name (titanet_large, ecapa_tdnn or speakerverification_speakernet)
123 model_path: Optional[str] = None
124 parameters: SpeakerEmbeddingsParams = SpeakerEmbeddingsParams()
125
126
127 @dataclass
128 class ClusteringParams(DiarizerComponentConfig):
129 # If True, use num of speakers value provided in manifest file.
130 oracle_num_speakers: bool = False
131 # Max number of speakers for each recording. If an oracle number of speakers is passed, this value is ignored.
132 max_num_speakers: int = 8
133 # If the number of segments is lower than this number, enhanced speaker counting is activated.
134 enhanced_count_thres: int = 80
135 # Determines the range of p-value search: 0 < p <= max_rp_threshold.
136 max_rp_threshold: float = 0.25
137 # The higher the number, the more values are examined, at the cost of more time.
138 sparse_search_volume: int = 30
139 # If True, take a majority vote on multiple p-values to estimate the number of speakers.
140 maj_vote_spk_count: bool = False
141
142
143 @dataclass
144 class ClusteringConfig(DiarizerComponentConfig):
145 parameters: ClusteringParams = ClusteringParams()
146
147
148 @dataclass
149 class MSDDParams(DiarizerComponentConfig):
150 # If True, use the speaker embedding model from the checkpoint; otherwise, use the speaker embedding model provided in the config.
151 use_speaker_model_from_ckpt: bool = True
152 # Batch size for MSDD inference.
153 infer_batch_size: int = 25
154 # Sigmoid threshold for generating binarized speaker labels. The smaller the value, the more generously overlaps are detected.
155 sigmoid_threshold: Tuple[float] = (0.7,)
156 # If True, use oracle number of speaker and evaluate F1 score for the given speaker sequences. Default is False.
157 seq_eval_mode: bool = False
158 # If True, break the input audio clip into short sequences and calculate cluster-average embeddings for inference.
159 split_infer: bool = True
160 # The length of split short sequence when split_infer is True.
161 diar_window_length: int = 50
162 # If the estimated number of speakers is larger than this number, overlap speech is not estimated.
163 overlap_infer_spk_limit: int = 5
164
165
166 @dataclass
167 class MSDDConfig(DiarizerComponentConfig):
168 model_path: Optional[str] = "diar_msdd_telephonic"
169 parameters: MSDDParams = MSDDParams()
170
171
172 @dataclass
173 class DiarizerConfig(DiarizerComponentConfig):
174 manifest_filepath: Optional[str] = None
175 out_dir: Optional[str] = None
176 oracle_vad: bool = False # If True, uses RTTM files provided in the manifest file to get VAD timestamps
177 collar: float = 0.25 # Collar value for scoring
178 ignore_overlap: bool = True # Consider or ignore overlap segments while scoring
179 vad: VADConfig = VADConfig()
180 speaker_embeddings: SpeakerEmbeddingsConfig = SpeakerEmbeddingsConfig()
181 clustering: ClusteringConfig = ClusteringConfig()
182 msdd_model: MSDDConfig = MSDDConfig()
183 asr: ASRDiarizerConfig = ASRDiarizerConfig()
184
185
186 @dataclass
187 class NeuralDiarizerInferenceConfig(DiarizerComponentConfig):
188 diarizer: DiarizerConfig = DiarizerConfig()
189 device: str = "cpu"
190 verbose: bool = False
191 batch_size: int = 64
192 num_workers: int = 1
193 sample_rate: int = 16000
194 name: str = ""
195
196 @classmethod
197 def init_config(cls, diar_model_path: str, vad_model_path: str, map_location: str, verbose: bool):
198 return NeuralDiarizerInferenceConfig(
199 DiarizerConfig(
200 vad=VADConfig(model_path=vad_model_path), msdd_model=MSDDConfig(model_path=diar_model_path),
201 ),
202 device=map_location,
203 verbose=verbose,
204 )
205
[end of nemo/collections/asr/models/configs/diarizer_config.py]
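The `DiarizerComponentConfig` base class in the file above gives plain dataclasses dict-like `get`/iteration semantics so they can stand in for a Hydra/OmegaConf config dict. A minimal, self-contained sketch of that pattern (the class and field names here are illustrative, not part of NeMo):

```python
from dataclasses import asdict, dataclass
from typing import Any, Dict, Optional


@dataclass
class DictLikeConfig:
    """Dataclass that imitates dict-style access, mirroring DiarizerComponentConfig."""

    def get(self, name: str, default: Optional[Any] = None):
        # Fall back to `default` when the attribute does not exist, like dict.get.
        return getattr(self, name, default)

    def __iter__(self):
        # Iterating yields field names, like iterating over a dict's keys.
        for key in asdict(self):
            yield key

    def dict(self) -> Dict:
        return asdict(self)


@dataclass
class VadParams(DictLikeConfig):
    onset: float = 0.1
    offset: float = 0.1


params = VadParams()
assert params.get("onset") == 0.1           # attribute access via get()
assert params.get("missing", 42) == 42      # default for unknown keys
assert list(params) == ["onset", "offset"]  # iteration yields field names
```

This lets downstream code written against `cfg.get(...)`-style dict access accept either a real Hydra dict or one of these dataclasses unchanged.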
[start of nemo/collections/asr/models/configs/k2_sequence_models_config.py]
1 # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from dataclasses import dataclass
16
17 from nemo.collections.asr.models.configs.asr_models_config import EncDecCTCConfig
18 from nemo.collections.asr.parts.k2.classes import GraphModuleConfig as BackendConfig
19 from nemo.core.config.modelPT import NemoConfig
20
21
22 @dataclass
23 class GraphModuleConfig:
24 criterion_type: str = "ml"
25 loss_type: str = "ctc"
26 split_batch_size: int = 0
27 dec_type: str = "topo"
28 transcribe_training: bool = True
29 backend_cfg: BackendConfig = BackendConfig()
30
31
32 @dataclass
33 class EncDecK2SeqConfig(EncDecCTCConfig):
34 graph_module_cfg: GraphModuleConfig = GraphModuleConfig()
35
36
37 @dataclass
38 class EncDecK2SeqModelConfig(NemoConfig):
39 model: EncDecK2SeqConfig = EncDecK2SeqConfig()
40
[end of nemo/collections/asr/models/configs/k2_sequence_models_config.py]
[start of nemo/collections/asr/models/configs/matchboxnet_config.py]
1 # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from dataclasses import dataclass, field
16 from typing import Any, Callable, List, Optional
17
18 from omegaconf import MISSING
19
20 from nemo.collections.asr.models.configs import classification_models_config as clf_cfg
21 from nemo.collections.asr.modules.audio_preprocessing import (
22 AudioToMFCCPreprocessorConfig,
23 CropOrPadSpectrogramAugmentationConfig,
24 SpectrogramAugmentationConfig,
25 )
26 from nemo.collections.asr.modules.conv_asr import (
27 ConvASRDecoderClassificationConfig,
28 ConvASREncoderConfig,
29 JasperEncoderConfig,
30 )
31 from nemo.core.config import modelPT as model_cfg
32
33
34 # fmt: off
35 def matchboxnet_3x1x64():
36 config = [
37 JasperEncoderConfig(filters=128, repeat=1, kernel=[11], stride=[1], dilation=[1], dropout=0.0,
38 residual=False, groups=1, separable=True, heads=-1, residual_mode='add',
39 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
40 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
41 JasperEncoderConfig(filters=64, repeat=1, kernel=[13], stride=[1], dilation=[1], dropout=0.0,
42 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
43 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
44 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
45 JasperEncoderConfig(filters=64, repeat=1, kernel=[15], stride=[1], dilation=[1], dropout=0.0,
46 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
47 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
48 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
49 JasperEncoderConfig(filters=64, repeat=1, kernel=[17], stride=[1], dilation=[1], dropout=0.0,
50 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
51 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
52 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
53 JasperEncoderConfig(filters=128, repeat=1, kernel=[29], stride=[1], dilation=[2], dropout=0.0,
54 residual=False, groups=1, separable=True, heads=-1, residual_mode='add',
55 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
56 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
57 JasperEncoderConfig(filters=128, repeat=1, kernel=[1], stride=[1], dilation=[1], dropout=0.0,
58 residual=False, groups=1, separable=False, heads=-1, residual_mode='add',
59 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
60 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False)
61 ]
62 return config
63
64
65 def matchboxnet_3x1x64_vad():
66 config = [
67 JasperEncoderConfig(filters=128, repeat=1, kernel=[11], stride=[1], dilation=[1], dropout=0.0,
68 residual=False, groups=1, separable=True, heads=-1, residual_mode='add',
69 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
70 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
71 JasperEncoderConfig(filters=64, repeat=1, kernel=[13], stride=[1], dilation=[1], dropout=0.0,
72 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
73 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
74 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
75 JasperEncoderConfig(filters=64, repeat=1, kernel=[15], stride=[1], dilation=[1], dropout=0.0,
76 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
77 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
78 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
79 JasperEncoderConfig(filters=64, repeat=1, kernel=[17], stride=[1], dilation=[1], dropout=0.0,
80 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
81 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
82 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
83 JasperEncoderConfig(filters=128, repeat=1, kernel=[29], stride=[1], dilation=[2], dropout=0.0,
84 residual=False, groups=1, separable=True, heads=-1, residual_mode='add',
85 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
86 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
87 JasperEncoderConfig(filters=128, repeat=1, kernel=[1], stride=[1], dilation=[1], dropout=0.0,
88 residual=False, groups=1, separable=False, heads=-1, residual_mode='add',
89 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
90 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False)
91 ]
92 return config
93
94
95 # fmt: on
96
97
98 @dataclass
99 class MatchboxNetModelConfig(clf_cfg.EncDecClassificationConfig):
100 # Model global arguments
101 sample_rate: int = 16000
102 repeat: int = 1
103 dropout: float = 0.0
104 separable: bool = True
105 kernel_size_factor: float = 1.0
106 timesteps: int = 128
107 labels: List[str] = MISSING
108
109 # Dataset configs
110 train_ds: clf_cfg.EncDecClassificationDatasetConfig = clf_cfg.EncDecClassificationDatasetConfig(
111 manifest_filepath=None, shuffle=True, trim_silence=False
112 )
113 validation_ds: clf_cfg.EncDecClassificationDatasetConfig = clf_cfg.EncDecClassificationDatasetConfig(
114 manifest_filepath=None, shuffle=False
115 )
116 test_ds: clf_cfg.EncDecClassificationDatasetConfig = clf_cfg.EncDecClassificationDatasetConfig(
117 manifest_filepath=None, shuffle=False
118 )
119
120 # Optimizer / Scheduler config
121 optim: Optional[model_cfg.OptimConfig] = model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
122
123 # Model general component configs
124 preprocessor: AudioToMFCCPreprocessorConfig = AudioToMFCCPreprocessorConfig(window_size=0.025)
125 spec_augment: Optional[SpectrogramAugmentationConfig] = SpectrogramAugmentationConfig(
126 freq_masks=2, time_masks=2, freq_width=15, time_width=25, rect_masks=5, rect_time=25, rect_freq=15
127 )
128 crop_or_pad_augment: Optional[CropOrPadSpectrogramAugmentationConfig] = CropOrPadSpectrogramAugmentationConfig(
129 audio_length=128
130 )
131
132 encoder: ConvASREncoderConfig = ConvASREncoderConfig(activation="relu")
133 decoder: ConvASRDecoderClassificationConfig = ConvASRDecoderClassificationConfig()
134
135
136 @dataclass
137 class MatchboxNetVADModelConfig(MatchboxNetModelConfig):
138 timesteps: int = 64
139 labels: List[str] = field(default_factory=lambda: ['background', 'speech'])
140
141 crop_or_pad_augment: Optional[CropOrPadSpectrogramAugmentationConfig] = None
142
143
144 class EncDecClassificationModelConfigBuilder(model_cfg.ModelConfigBuilder):
145 VALID_CONFIGS = ['matchboxnet_3x1x64', 'matchboxnet_3x1x64_vad']
146
147 def __init__(self, name: str = 'matchboxnet_3x1x64', encoder_cfg_func: Optional[Callable[[], List[Any]]] = None):
148 if name not in EncDecClassificationModelConfigBuilder.VALID_CONFIGS:
149 raise ValueError("`name` must be one of : \n" f"{EncDecClassificationModelConfigBuilder.VALID_CONFIGS}")
150
151 self.name = name
152
153 if 'matchboxnet_3x1x64_vad' in name:
154 if encoder_cfg_func is None:
155 encoder_cfg_func = matchboxnet_3x1x64_vad
156
157 model_cfg = MatchboxNetVADModelConfig(
158 repeat=1,
159 separable=True,
160 encoder=ConvASREncoderConfig(jasper=encoder_cfg_func(), activation="relu"),
161 decoder=ConvASRDecoderClassificationConfig(),
162 )
163
164 elif 'matchboxnet_3x1x64' in name:
165 if encoder_cfg_func is None:
166 encoder_cfg_func = matchboxnet_3x1x64
167
168 model_cfg = MatchboxNetModelConfig(
169 repeat=1,
170 separable=False,
171 spec_augment=SpectrogramAugmentationConfig(rect_masks=5, rect_freq=50, rect_time=120),
172 encoder=ConvASREncoderConfig(jasper=encoder_cfg_func(), activation="relu"),
173 decoder=ConvASRDecoderClassificationConfig(),
174 )
175
176 else:
177 raise ValueError(f"Invalid config name submitted to {self.__class__.__name__}")
178
179 super(EncDecClassificationModelConfigBuilder, self).__init__(model_cfg)
180 self.model_cfg: clf_cfg.EncDecClassificationConfig = model_cfg # enable type hinting
181
182 def set_labels(self, labels: List[str]):
183 self.model_cfg.labels = labels
184
185 def set_separable(self, separable: bool):
186 self.model_cfg.separable = separable
187
188 def set_repeat(self, repeat: int):
189 self.model_cfg.repeat = repeat
190
191 def set_sample_rate(self, sample_rate: int):
192 self.model_cfg.sample_rate = sample_rate
193
194 def set_dropout(self, dropout: float = 0.0):
195 self.model_cfg.dropout = dropout
196
197 def set_timesteps(self, timesteps: int):
198 self.model_cfg.timesteps = timesteps
199
200 def set_is_regression_task(self, is_regression_task: bool):
201 self.model_cfg.is_regression_task = is_regression_task
202
203 # Note: Autocomplete for users won't work without these overrides.
204 # In practice it is not needed, since Python will infer the types at runtime.
205
206 # def set_train_ds(self, cfg: Optional[clf_cfg.EncDecClassificationDatasetConfig] = None):
207 # super().set_train_ds(cfg)
208 #
209 # def set_validation_ds(self, cfg: Optional[clf_cfg.EncDecClassificationDatasetConfig] = None):
210 # super().set_validation_ds(cfg)
211 #
212 # def set_test_ds(self, cfg: Optional[clf_cfg.EncDecClassificationDatasetConfig] = None):
213 # super().set_test_ds(cfg)
214
215 def _finalize_cfg(self):
216 # propagate labels
217 self.model_cfg.train_ds.labels = self.model_cfg.labels
218 self.model_cfg.validation_ds.labels = self.model_cfg.labels
219 self.model_cfg.test_ds.labels = self.model_cfg.labels
220 self.model_cfg.decoder.vocabulary = self.model_cfg.labels
221
222 # propagate num classes
223 self.model_cfg.decoder.num_classes = len(self.model_cfg.labels)
224
225 # propagate sample rate
226 self.model_cfg.sample_rate = self.model_cfg.sample_rate
227 self.model_cfg.preprocessor.sample_rate = self.model_cfg.sample_rate
228 self.model_cfg.train_ds.sample_rate = self.model_cfg.sample_rate
229 self.model_cfg.validation_ds.sample_rate = self.model_cfg.sample_rate
230 self.model_cfg.test_ds.sample_rate = self.model_cfg.sample_rate
231
232 # propagate filters
233 self.model_cfg.encoder.feat_in = self.model_cfg.preprocessor.features
234 self.model_cfg.decoder.feat_in = self.model_cfg.encoder.jasper[-1].filters
235
236 # propagate timesteps
237 if self.model_cfg.crop_or_pad_augment is not None:
238 self.model_cfg.crop_or_pad_augment.audio_length = self.model_cfg.timesteps
239
240 # propagate separable
241 for layer in self.model_cfg.encoder.jasper[:-1]: # type: JasperEncoderConfig
242 layer.separable = self.model_cfg.separable
243
244 # propagate repeat
245 for layer in self.model_cfg.encoder.jasper[1:-2]: # type: JasperEncoderConfig
246 layer.repeat = self.model_cfg.repeat
247
248 # propagate dropout
249 for layer in self.model_cfg.encoder.jasper: # type: JasperEncoderConfig
250 layer.dropout = self.model_cfg.dropout
251
252 def build(self) -> clf_cfg.EncDecClassificationConfig:
253 return super().build()
254
[end of nemo/collections/asr/models/configs/matchboxnet_config.py]
[start of nemo/collections/asr/models/configs/quartznet_config.py]
1 # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from dataclasses import dataclass
16 from typing import Any, Callable, List, Optional
17
18 from omegaconf import MISSING
19
20 from nemo.collections.asr.models.configs import asr_models_config as ctc_cfg
21 from nemo.collections.asr.modules.audio_preprocessing import (
22 AudioToMelSpectrogramPreprocessorConfig,
23 SpectrogramAugmentationConfig,
24 )
25 from nemo.collections.asr.modules.conv_asr import ConvASRDecoderConfig, ConvASREncoderConfig, JasperEncoderConfig
26 from nemo.core.config import modelPT as model_cfg
27
28
29 # fmt: off
30 def qn_15x5():
31 config = [
32 JasperEncoderConfig(filters=256, repeat=1, kernel=[33], stride=[2], dilation=[1], dropout=0.0,
33 residual=False, groups=1, separable=True, heads=-1, residual_mode='add',
34 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
35 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
36 JasperEncoderConfig(filters=256, repeat=5, kernel=[33], stride=[1], dilation=[1], dropout=0.0,
37 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
38 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
39 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
40 JasperEncoderConfig(filters=256, repeat=5, kernel=[33], stride=[1], dilation=[1], dropout=0.0,
41 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
42 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
43 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
44 JasperEncoderConfig(filters=256, repeat=5, kernel=[33], stride=[1], dilation=[1], dropout=0.0,
45 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
46 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
47 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
48 JasperEncoderConfig(filters=256, repeat=5, kernel=[39], stride=[1], dilation=[1], dropout=0.0,
49 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
50 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
51 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
52 JasperEncoderConfig(filters=256, repeat=5, kernel=[39], stride=[1], dilation=[1], dropout=0.0,
53 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
54 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
55 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
56 JasperEncoderConfig(filters=256, repeat=5, kernel=[39], stride=[1], dilation=[1], dropout=0.0,
57 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
58 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
59 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
60 JasperEncoderConfig(filters=512, repeat=5, kernel=[51], stride=[1], dilation=[1], dropout=0.0,
61 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
62 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
63 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
64 JasperEncoderConfig(filters=512, repeat=5, kernel=[51], stride=[1], dilation=[1], dropout=0.0,
65 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
66 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
67 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
68 JasperEncoderConfig(filters=512, repeat=5, kernel=[51], stride=[1], dilation=[1], dropout=0.0,
69 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
70 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
71 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
72 JasperEncoderConfig(filters=512, repeat=5, kernel=[63], stride=[1], dilation=[1], dropout=0.0,
73 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
74 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
75 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
76 JasperEncoderConfig(filters=512, repeat=5, kernel=[63], stride=[1], dilation=[1], dropout=0.0,
77 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
78 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
79 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
80 JasperEncoderConfig(filters=512, repeat=5, kernel=[63], stride=[1], dilation=[1], dropout=0.0,
81 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
82 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
83 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
84 JasperEncoderConfig(filters=512, repeat=5, kernel=[75], stride=[1], dilation=[1], dropout=0.0,
85 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
86 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
87 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
88 JasperEncoderConfig(filters=512, repeat=5, kernel=[75], stride=[1], dilation=[1], dropout=0.0,
89 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
90 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
91 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
92 JasperEncoderConfig(filters=512, repeat=5, kernel=[75], stride=[1], dilation=[1], dropout=0.0,
93 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
94 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
95 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
96 JasperEncoderConfig(filters=512, repeat=1, kernel=[87], stride=[1], dilation=[2], dropout=0.0,
97 residual=False, groups=1, separable=True, heads=-1, residual_mode='add',
98 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
99 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
100 JasperEncoderConfig(filters=1024, repeat=1, kernel=[1], stride=[1], dilation=[1], dropout=0.0,
101 residual=False, groups=1, separable=False, heads=-1, residual_mode='add',
102 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
103 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False)
104 ]
105 return config
106
107
108 def jasper_10x5_dr():
109 config = [
110 JasperEncoderConfig(filters=256, repeat=1, kernel=[11], stride=[2], dilation=[1], dropout=0.2,
111 residual=False, groups=1, separable=False, heads=-1, residual_mode='add',
112 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
113 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
114 JasperEncoderConfig(filters=256, repeat=5, kernel=[11], stride=[1], dilation=[1], dropout=0.2,
115 residual=True, groups=1, separable=False, heads=-1, residual_mode='add',
116 residual_dense=True, se=False, se_reduction_ratio=8, se_context_size=-1,
117 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
118 JasperEncoderConfig(filters=256, repeat=5, kernel=[11], stride=[1], dilation=[1], dropout=0.2,
119 residual=True, groups=1, separable=False, heads=-1, residual_mode='add',
120 residual_dense=True, se=False, se_reduction_ratio=8, se_context_size=-1,
121 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
122 JasperEncoderConfig(filters=384, repeat=5, kernel=[13], stride=[1], dilation=[1], dropout=0.2,
123 residual=True, groups=1, separable=False, heads=-1, residual_mode='add',
124 residual_dense=True, se=False, se_reduction_ratio=8, se_context_size=-1,
125 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
126 JasperEncoderConfig(filters=384, repeat=5, kernel=[13], stride=[1], dilation=[1], dropout=0.2,
127 residual=True, groups=1, separable=False, heads=-1, residual_mode='add',
128 residual_dense=True, se=False, se_reduction_ratio=8, se_context_size=-1,
129 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
130 JasperEncoderConfig(filters=512, repeat=5, kernel=[17], stride=[1], dilation=[1], dropout=0.2,
131 residual=True, groups=1, separable=False, heads=-1, residual_mode='add',
132 residual_dense=True, se=False, se_reduction_ratio=8, se_context_size=-1,
133 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
134 JasperEncoderConfig(filters=512, repeat=5, kernel=[17], stride=[1], dilation=[1], dropout=0.2,
135 residual=True, groups=1, separable=False, heads=-1, residual_mode='add',
136 residual_dense=True, se=False, se_reduction_ratio=8, se_context_size=-1,
137 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
138 JasperEncoderConfig(filters=640, repeat=5, kernel=[21], stride=[1], dilation=[1], dropout=0.3,
139 residual=True, groups=1, separable=False, heads=-1, residual_mode='add',
140 residual_dense=True, se=False, se_reduction_ratio=8, se_context_size=-1,
141 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
142 JasperEncoderConfig(filters=640, repeat=5, kernel=[21], stride=[1], dilation=[1], dropout=0.3,
143 residual=True, groups=1, separable=False, heads=-1, residual_mode='add',
144 residual_dense=True, se=False, se_reduction_ratio=8, se_context_size=-1,
145 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
146 JasperEncoderConfig(filters=768, repeat=5, kernel=[25], stride=[1], dilation=[1], dropout=0.3,
147 residual=True, groups=1, separable=False, heads=-1, residual_mode='add',
148 residual_dense=True, se=False, se_reduction_ratio=8, se_context_size=-1,
149 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
150 JasperEncoderConfig(filters=768, repeat=5, kernel=[25], stride=[1], dilation=[1], dropout=0.3,
151 residual=True, groups=1, separable=False, heads=-1, residual_mode='add',
152 residual_dense=True, se=False, se_reduction_ratio=8, se_context_size=-1,
153 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
154 JasperEncoderConfig(filters=896, repeat=1, kernel=[29], stride=[1], dilation=[2], dropout=0.4,
155 residual=False, groups=1, separable=False, heads=-1, residual_mode='add',
156 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
157 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
158 JasperEncoderConfig(filters=1024, repeat=1, kernel=[1], stride=[1], dilation=[1], dropout=0.4,
159 residual=False, groups=1, separable=False, heads=-1, residual_mode='add',
160 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
161 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False)
162 ]
163 return config
164 # fmt: on
165
166
167 @dataclass
168 class JasperModelConfig(ctc_cfg.EncDecCTCConfig):
169 # Model global arguments
170 sample_rate: int = 16000
171 repeat: int = 1
172 dropout: float = 0.0
173 separable: bool = False
174 labels: List[str] = MISSING
175
176 # Dataset configs
177 train_ds: ctc_cfg.ASRDatasetConfig = ctc_cfg.ASRDatasetConfig(
178 manifest_filepath=None, shuffle=True, trim_silence=True
179 )
180 validation_ds: ctc_cfg.ASRDatasetConfig = ctc_cfg.ASRDatasetConfig(manifest_filepath=None, shuffle=False)
181 test_ds: ctc_cfg.ASRDatasetConfig = ctc_cfg.ASRDatasetConfig(manifest_filepath=None, shuffle=False)
182
183 # Optimizer / Scheduler config
184 optim: Optional[model_cfg.OptimConfig] = model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
185
186 # Model general component configs
187 preprocessor: AudioToMelSpectrogramPreprocessorConfig = AudioToMelSpectrogramPreprocessorConfig()
188 spec_augment: Optional[SpectrogramAugmentationConfig] = SpectrogramAugmentationConfig()
189 encoder: ConvASREncoderConfig = ConvASREncoderConfig(activation="relu")
190 decoder: ConvASRDecoderConfig = ConvASRDecoderConfig()
191
192
193 @dataclass
194 class QuartzNetModelConfig(JasperModelConfig):
195 separable: bool = True
196
197
198 class EncDecCTCModelConfigBuilder(model_cfg.ModelConfigBuilder):
199 VALID_CONFIGS = ['quartznet_15x5', 'quartznet_15x5_zh', 'jasper_10x5dr']
200
201 def __init__(self, name: str = 'quartznet_15x5', encoder_cfg_func: Optional[Callable[[], List[Any]]] = None):
202 if name not in EncDecCTCModelConfigBuilder.VALID_CONFIGS:
203 raise ValueError("`name` must be one of : \n" f"{EncDecCTCModelConfigBuilder.VALID_CONFIGS}")
204
205 self.name = name
206
207 if 'quartznet_15x5' in name:
208 if encoder_cfg_func is None:
209 encoder_cfg_func = qn_15x5
210
211 model_cfg = QuartzNetModelConfig(
212 repeat=5,
213 separable=True,
214 spec_augment=SpectrogramAugmentationConfig(rect_masks=5, rect_freq=50, rect_time=120),
215 encoder=ConvASREncoderConfig(jasper=encoder_cfg_func(), activation="relu"),
216 decoder=ConvASRDecoderConfig(),
217 )
218
219 elif 'jasper_10x5' in name:
220 if encoder_cfg_func is None:
221 encoder_cfg_func = jasper_10x5_dr
222
223 model_cfg = JasperModelConfig(
224 repeat=5,
225 separable=False,
226 spec_augment=SpectrogramAugmentationConfig(rect_masks=5, rect_freq=50, rect_time=120),
227 encoder=ConvASREncoderConfig(jasper=encoder_cfg_func(), activation="relu"),
228 decoder=ConvASRDecoderConfig(),
229 )
230
231 else:
232 raise ValueError(f"Invalid config name submitted to {self.__class__.__name__}")
233
234 super(EncDecCTCModelConfigBuilder, self).__init__(model_cfg)
235 self.model_cfg: ctc_cfg.EncDecCTCConfig = model_cfg # enable type hinting
236
237 if 'zh' in name:
238 self.set_dataset_normalize(normalize=False)
239
240 def set_labels(self, labels: List[str]):
241 self.model_cfg.labels = labels
242
243 def set_separable(self, separable: bool):
244 self.model_cfg.separable = separable
245
246 def set_repeat(self, repeat: int):
247 self.model_cfg.repeat = repeat
248
249 def set_sample_rate(self, sample_rate: int):
250 self.model_cfg.sample_rate = sample_rate
251
252 def set_dropout(self, dropout: float = 0.0):
253 self.model_cfg.dropout = dropout
254
255 def set_dataset_normalize(self, normalize: bool):
256 self.model_cfg.train_ds.normalize = normalize
257 self.model_cfg.validation_ds.normalize = normalize
258 self.model_cfg.test_ds.normalize = normalize
259
260     # Note: Autocomplete for users won't work without these overrides
261 # But practically it is not needed since python will infer at runtime
262
263 # def set_train_ds(self, cfg: Optional[ctc_cfg.ASRDatasetConfig] = None):
264 # super().set_train_ds(cfg)
265 #
266 # def set_validation_ds(self, cfg: Optional[ctc_cfg.ASRDatasetConfig] = None):
267 # super().set_validation_ds(cfg)
268 #
269 # def set_test_ds(self, cfg: Optional[ctc_cfg.ASRDatasetConfig] = None):
270 # super().set_test_ds(cfg)
271
272 def _finalize_cfg(self):
273 # propagate labels
274 self.model_cfg.train_ds.labels = self.model_cfg.labels
275 self.model_cfg.validation_ds.labels = self.model_cfg.labels
276 self.model_cfg.test_ds.labels = self.model_cfg.labels
277 self.model_cfg.decoder.vocabulary = self.model_cfg.labels
278
279 # propagate num classes
280 self.model_cfg.decoder.num_classes = len(self.model_cfg.labels)
281
282 # propagate sample rate
283
284 self.model_cfg.preprocessor.sample_rate = self.model_cfg.sample_rate
285 self.model_cfg.train_ds.sample_rate = self.model_cfg.sample_rate
286 self.model_cfg.validation_ds.sample_rate = self.model_cfg.sample_rate
287 self.model_cfg.test_ds.sample_rate = self.model_cfg.sample_rate
288
289 # propagate filters
290 self.model_cfg.encoder.feat_in = self.model_cfg.preprocessor.features
291 self.model_cfg.decoder.feat_in = self.model_cfg.encoder.jasper[-1].filters
292
293 # propagate separable
294 for layer in self.model_cfg.encoder.jasper[:-1]: # type: JasperEncoderConfig
295 layer.separable = self.model_cfg.separable
296
297 # propagate repeat
298 for layer in self.model_cfg.encoder.jasper[1:-2]: # type: JasperEncoderConfig
299 layer.repeat = self.model_cfg.repeat
300
301 # propagate dropout
302 for layer in self.model_cfg.encoder.jasper: # type: JasperEncoderConfig
303 layer.dropout = self.model_cfg.dropout
304
305 def build(self) -> ctc_cfg.EncDecCTCConfig:
306 return super().build()
307
[end of nemo/collections/asr/models/configs/quartznet_config.py]
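The `_finalize_cfg` step above copies the top-level `labels` into every dataset config and the decoder, and derives `num_classes` from its length. A minimal, self-contained sketch of that propagation pattern (the dataclasses below are simplified stand-ins, not the NeMo classes):

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical, simplified stand-ins for the config dataclasses above --
# just enough structure to show the one-directional propagation that
# `_finalize_cfg` performs.

@dataclass
class DatasetCfg:
    labels: List[str] = field(default_factory=list)

@dataclass
class DecoderCfg:
    vocabulary: List[str] = field(default_factory=list)
    num_classes: int = -1

@dataclass
class ModelCfg:
    labels: List[str] = field(default_factory=list)
    train_ds: DatasetCfg = field(default_factory=DatasetCfg)
    decoder: DecoderCfg = field(default_factory=DecoderCfg)

    def finalize(self) -> "ModelCfg":
        # Labels flow from the model config down into each component.
        self.train_ds.labels = self.labels
        self.decoder.vocabulary = self.labels
        self.decoder.num_classes = len(self.labels)
        return self

cfg = ModelCfg(labels=["a", "b", "c"]).finalize()
```

The real builder additionally propagates sample rate, filter sizes, `separable`, `repeat`, and `dropout` in the same one-directional way.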
[start of nemo/collections/asr/modules/audio_preprocessing.py]
1 # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import math
16 import random
17 from abc import ABC, abstractmethod
18 from dataclasses import dataclass
19 from typing import Any, Dict, Optional, Tuple
20
21 import torch
22 from packaging import version
23
24 from nemo.collections.asr.parts.numba.spec_augment import SpecAugmentNumba, spec_augment_launch_heuristics
25 from nemo.collections.asr.parts.preprocessing.features import (
26 FilterbankFeatures,
27 FilterbankFeaturesTA,
28 make_seq_mask_like,
29 )
30 from nemo.collections.asr.parts.submodules.spectr_augment import SpecAugment, SpecCutout
31 from nemo.core.classes import Exportable, NeuralModule, typecheck
32 from nemo.core.neural_types import (
33 AudioSignal,
34 LengthsType,
35 MelSpectrogramType,
36 MFCCSpectrogramType,
37 NeuralType,
38 SpectrogramType,
39 )
40 from nemo.core.utils import numba_utils
41 from nemo.core.utils.numba_utils import __NUMBA_MINIMUM_VERSION__
42 from nemo.utils import logging
43
44 try:
45 import torchaudio
46 import torchaudio.functional
47 import torchaudio.transforms
48
49 TORCHAUDIO_VERSION = version.parse(torchaudio.__version__)
50 TORCHAUDIO_VERSION_MIN = version.parse('0.5')
51
52 HAVE_TORCHAUDIO = True
53 except ModuleNotFoundError:
54 HAVE_TORCHAUDIO = False
55
56 __all__ = [
57 'AudioToMelSpectrogramPreprocessor',
58 'AudioToSpectrogram',
59 'SpectrogramToAudio',
60 'AudioToMFCCPreprocessor',
61 'SpectrogramAugmentation',
62 'MaskedPatchAugmentation',
63 'CropOrPadSpectrogramAugmentation',
64 ]
65
66
67 class AudioPreprocessor(NeuralModule, ABC):
68 """
69 An interface for Neural Modules that performs audio pre-processing,
70 transforming the wav files to features.
71 """
72
73 def __init__(self, win_length, hop_length):
74 super().__init__()
75
76 self.win_length = win_length
77 self.hop_length = hop_length
78
79 self.torch_windows = {
80 'hann': torch.hann_window,
81 'hamming': torch.hamming_window,
82 'blackman': torch.blackman_window,
83 'bartlett': torch.bartlett_window,
84 'ones': torch.ones,
85 None: torch.ones,
86 }
87
88 @typecheck()
89 @torch.no_grad()
90 def forward(self, input_signal, length):
91 processed_signal, processed_length = self.get_features(input_signal, length)
92
93 return processed_signal, processed_length
94
95 @abstractmethod
96 def get_features(self, input_signal, length):
97 # Called by forward(). Subclasses should implement this.
98 pass
99
100
101 class AudioToMelSpectrogramPreprocessor(AudioPreprocessor, Exportable):
102 """Featurizer module that converts wavs to mel spectrograms.
103
104 Args:
105 sample_rate (int): Sample rate of the input audio data.
106 Defaults to 16000
107 window_size (float): Size of window for fft in seconds
108 Defaults to 0.02
109 window_stride (float): Stride of window for fft in seconds
110 Defaults to 0.01
111 n_window_size (int): Size of window for fft in samples
112 Defaults to None. Use one of window_size or n_window_size.
113 n_window_stride (int): Stride of window for fft in samples
114 Defaults to None. Use one of window_stride or n_window_stride.
115 window (str): Windowing function for fft. can be one of ['hann',
116 'hamming', 'blackman', 'bartlett']
117 Defaults to "hann"
118 normalize (str): Can be one of ['per_feature', 'all_features']; all
119 other options disable feature normalization. 'all_features'
120 normalizes the entire spectrogram to be mean 0 with std 1.
121             'per_feature' normalizes per channel / freq instead.
122 Defaults to "per_feature"
123 n_fft (int): Length of FT window. If None, it uses the smallest power
124 of 2 that is larger than n_window_size.
125 Defaults to None
126 preemph (float): Amount of pre emphasis to add to audio. Can be
127 disabled by passing None.
128 Defaults to 0.97
129 features (int): Number of mel spectrogram freq bins to output.
130 Defaults to 64
131 lowfreq (int): Lower bound on mel basis in Hz.
132 Defaults to 0
133         highfreq (int): Upper bound on mel basis in Hz.
134 Defaults to None
135 log (bool): Log features.
136 Defaults to True
137 log_zero_guard_type(str): Need to avoid taking the log of zero. There
138 are two options: "add" or "clamp".
139 Defaults to "add".
140 log_zero_guard_value(float, or str): Add or clamp requires the number
141 to add with or clamp to. log_zero_guard_value can either be a float
142 or "tiny" or "eps". torch.finfo is used if "tiny" or "eps" is
143 passed.
144 Defaults to 2**-24.
145 dither (float): Amount of white-noise dithering.
146 Defaults to 1e-5
147 pad_to (int): Ensures that the output size of the time dimension is
148 a multiple of pad_to.
149 Defaults to 16
150 frame_splicing (int): Defaults to 1
151 exact_pad (bool): If True, sets stft center to False and adds padding, such that num_frames = audio_length
152 // hop_length. Defaults to False.
153 pad_value (float): The value that shorter mels are padded with.
154 Defaults to 0
155 mag_power (float): The power that the linear spectrogram is raised to
156 prior to multiplication with mel basis.
157 Defaults to 2 for a power spec
158 rng : Random number generator
159 nb_augmentation_prob (float) : Probability with which narrowband augmentation would be applied to
160 samples in the batch.
161 Defaults to 0.0
162 nb_max_freq (int) : Frequency above which all frequencies will be masked for narrowband augmentation.
163 Defaults to 4000
164 use_torchaudio: Whether to use the `torchaudio` implementation.
165 mel_norm: Normalization used for mel filterbank weights.
166 Defaults to 'slaney' (area normalization)
167 stft_exact_pad: Deprecated argument, kept for compatibility with older checkpoints.
168 stft_conv: Deprecated argument, kept for compatibility with older checkpoints.
169 """
170
171 def save_to(self, save_path: str):
172 pass
173
174 @classmethod
175 def restore_from(cls, restore_path: str):
176 pass
177
178 @property
179 def input_types(self):
180 """Returns definitions of module input ports.
181 """
182 return {
183 "input_signal": NeuralType(('B', 'T'), AudioSignal(freq=self._sample_rate)),
184 "length": NeuralType(
185 tuple('B'), LengthsType()
186 ), # Please note that length should be in samples not seconds.
187 }
188
189 @property
190 def output_types(self):
191 """Returns definitions of module output ports.
192
193 processed_signal:
194 0: AxisType(BatchTag)
195 1: AxisType(MelSpectrogramSignalTag)
196 2: AxisType(ProcessedTimeTag)
197 processed_length:
198 0: AxisType(BatchTag)
199 """
200 return {
201 "processed_signal": NeuralType(('B', 'D', 'T'), MelSpectrogramType()),
202 "processed_length": NeuralType(tuple('B'), LengthsType()),
203 }
204
205 def __init__(
206 self,
207 sample_rate=16000,
208 window_size=0.02,
209 window_stride=0.01,
210 n_window_size=None,
211 n_window_stride=None,
212 window="hann",
213 normalize="per_feature",
214 n_fft=None,
215 preemph=0.97,
216 features=64,
217 lowfreq=0,
218 highfreq=None,
219 log=True,
220 log_zero_guard_type="add",
221 log_zero_guard_value=2 ** -24,
222 dither=1e-5,
223 pad_to=16,
224 frame_splicing=1,
225 exact_pad=False,
226 pad_value=0,
227 mag_power=2.0,
228 rng=None,
229 nb_augmentation_prob=0.0,
230 nb_max_freq=4000,
231 use_torchaudio: bool = False,
232 mel_norm="slaney",
233 stft_exact_pad=False, # Deprecated arguments; kept for config compatibility
234 stft_conv=False, # Deprecated arguments; kept for config compatibility
235 ):
236 super().__init__(n_window_size, n_window_stride)
237
238 self._sample_rate = sample_rate
239 if window_size and n_window_size:
240 raise ValueError(f"{self} received both window_size and " f"n_window_size. Only one should be specified.")
241 if window_stride and n_window_stride:
242 raise ValueError(
243 f"{self} received both window_stride and " f"n_window_stride. Only one should be specified."
244 )
245 if window_size:
246 n_window_size = int(window_size * self._sample_rate)
247 if window_stride:
248 n_window_stride = int(window_stride * self._sample_rate)
249
250 # Given the long and similar argument list, point to the class and instantiate it by reference
251 if not use_torchaudio:
252 featurizer_class = FilterbankFeatures
253 else:
254 featurizer_class = FilterbankFeaturesTA
255 self.featurizer = featurizer_class(
256 sample_rate=self._sample_rate,
257 n_window_size=n_window_size,
258 n_window_stride=n_window_stride,
259 window=window,
260 normalize=normalize,
261 n_fft=n_fft,
262 preemph=preemph,
263 nfilt=features,
264 lowfreq=lowfreq,
265 highfreq=highfreq,
266 log=log,
267 log_zero_guard_type=log_zero_guard_type,
268 log_zero_guard_value=log_zero_guard_value,
269 dither=dither,
270 pad_to=pad_to,
271 frame_splicing=frame_splicing,
272 exact_pad=exact_pad,
273 pad_value=pad_value,
274 mag_power=mag_power,
275 rng=rng,
276 nb_augmentation_prob=nb_augmentation_prob,
277 nb_max_freq=nb_max_freq,
278 mel_norm=mel_norm,
279 stft_exact_pad=stft_exact_pad, # Deprecated arguments; kept for config compatibility
280 stft_conv=stft_conv, # Deprecated arguments; kept for config compatibility
281 )
282
283 def input_example(self, max_batch: int = 8, max_dim: int = 32000, min_length: int = 200):
284 batch_size = torch.randint(low=1, high=max_batch, size=[1]).item()
285 max_length = torch.randint(low=min_length, high=max_dim, size=[1]).item()
286 signals = torch.rand(size=[batch_size, max_length]) * 2 - 1
287 lengths = torch.randint(low=min_length, high=max_dim, size=[batch_size])
288 lengths[0] = max_length
289 return signals, lengths
290
291 def get_features(self, input_signal, length):
292 return self.featurizer(input_signal, length)
293
294 @property
295 def filter_banks(self):
296 return self.featurizer.filter_banks
297
298
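The constructor above accepts the analysis window either in seconds (`window_size` / `window_stride`) or directly in samples (`n_window_size` / `n_window_stride`), and, per the docstring, `n_fft` defaults to the smallest power of two no smaller than the window length. A short sketch of that arithmetic (the concrete values are illustrative, matching the class defaults):

```python
import math

# Illustrative values; mirrors the seconds -> samples conversion in
# AudioToMelSpectrogramPreprocessor.__init__ and the documented default
# n_fft rule (smallest power of two >= n_window_size).
sample_rate = 16000
window_size = 0.02    # 20 ms window
window_stride = 0.01  # 10 ms hop

n_window_size = int(window_size * sample_rate)      # 320 samples
n_window_stride = int(window_stride * sample_rate)  # 160 samples
n_fft = 2 ** math.ceil(math.log2(n_window_size))    # 512
```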
299 class AudioToMFCCPreprocessor(AudioPreprocessor):
300 """Preprocessor that converts wavs to MFCCs.
301 Uses torchaudio.transforms.MFCC.
302
303 Args:
304 sample_rate: The sample rate of the audio.
305 Defaults to 16000.
306 window_size: Size of window for fft in seconds. Used to calculate the
307 win_length arg for mel spectrogram.
308 Defaults to 0.02
309         window_stride: Stride of window for fft in seconds. Used to calculate
310 the hop_length arg for mel spect.
311 Defaults to 0.01
312 n_window_size: Size of window for fft in samples
313 Defaults to None. Use one of window_size or n_window_size.
314 n_window_stride: Stride of window for fft in samples
315 Defaults to None. Use one of window_stride or n_window_stride.
316 window: Windowing function for fft. can be one of ['hann',
317 'hamming', 'blackman', 'bartlett', 'none', 'null'].
318 Defaults to 'hann'
319 n_fft: Length of FT window. If None, it uses the smallest power of 2
320 that is larger than n_window_size.
321 Defaults to None
322 lowfreq (int): Lower bound on mel basis in Hz.
323 Defaults to 0
324         highfreq (int): Upper bound on mel basis in Hz.
325 Defaults to None
326 n_mels: Number of mel filterbanks.
327 Defaults to 64
328 n_mfcc: Number of coefficients to retain
329 Defaults to 64
330 dct_type: Type of discrete cosine transform to use
331 norm: Type of norm to use
332 log: Whether to use log-mel spectrograms instead of db-scaled.
333 Defaults to True.
334 """
335
336 @property
337 def input_types(self):
338 """Returns definitions of module input ports.
339 """
340 return {
341 "input_signal": NeuralType(('B', 'T'), AudioSignal(freq=self._sample_rate)),
342 "length": NeuralType(tuple('B'), LengthsType()),
343 }
344
345 @property
346 def output_types(self):
347 """Returns definitions of module output ports.
348 """
349 return {
350 "processed_signal": NeuralType(('B', 'D', 'T'), MFCCSpectrogramType()),
351 "processed_length": NeuralType(tuple('B'), LengthsType()),
352 }
353
354 def save_to(self, save_path: str):
355 pass
356
357 @classmethod
358 def restore_from(cls, restore_path: str):
359 pass
360
361 def __init__(
362 self,
363 sample_rate=16000,
364 window_size=0.02,
365 window_stride=0.01,
366 n_window_size=None,
367 n_window_stride=None,
368 window='hann',
369 n_fft=None,
370 lowfreq=0.0,
371 highfreq=None,
372 n_mels=64,
373 n_mfcc=64,
374 dct_type=2,
375 norm='ortho',
376 log=True,
377 ):
378 self._sample_rate = sample_rate
379 if not HAVE_TORCHAUDIO:
380 logging.error('Could not import torchaudio. Some features might not work.')
381
382 raise ModuleNotFoundError(
383 "torchaudio is not installed but is necessary for "
384 "AudioToMFCCPreprocessor. We recommend you try "
385 "building it from source for the PyTorch version you have."
386 )
387 if window_size and n_window_size:
388 raise ValueError(f"{self} received both window_size and " f"n_window_size. Only one should be specified.")
389 if window_stride and n_window_stride:
390 raise ValueError(
391 f"{self} received both window_stride and " f"n_window_stride. Only one should be specified."
392 )
393 # Get win_length (n_window_size) and hop_length (n_window_stride)
394 if window_size:
395 n_window_size = int(window_size * self._sample_rate)
396 if window_stride:
397 n_window_stride = int(window_stride * self._sample_rate)
398
399 super().__init__(n_window_size, n_window_stride)
400
401 mel_kwargs = {}
402
403 mel_kwargs['f_min'] = lowfreq
404 mel_kwargs['f_max'] = highfreq
405 mel_kwargs['n_mels'] = n_mels
406
407 mel_kwargs['n_fft'] = n_fft or 2 ** math.ceil(math.log2(n_window_size))
408
409 mel_kwargs['win_length'] = n_window_size
410 mel_kwargs['hop_length'] = n_window_stride
411
412 # Set window_fn. None defaults to torch.ones.
413 window_fn = self.torch_windows.get(window, None)
414 if window_fn is None:
415 raise ValueError(
416 f"Window argument for AudioProcessor is invalid: {window}."
417 f"For no window function, use 'ones' or None."
418 )
419 mel_kwargs['window_fn'] = window_fn
420
421 # Use torchaudio's implementation of MFCCs as featurizer
422 self.featurizer = torchaudio.transforms.MFCC(
423 sample_rate=self._sample_rate,
424 n_mfcc=n_mfcc,
425 dct_type=dct_type,
426 norm=norm,
427 log_mels=log,
428 melkwargs=mel_kwargs,
429 )
430
431 def get_features(self, input_signal, length):
432 features = self.featurizer(input_signal)
433 seq_len = torch.ceil(length.to(torch.float32) / self.hop_length).to(dtype=torch.long)
434 return features, seq_len
435
436
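`get_features` above derives the output sequence length from the raw sample count: the featurizer emits one frame per hop, so the frame count is `ceil(length / hop_length)`. In isolation (hop length chosen to match the default 10 ms stride at 16 kHz):

```python
import math

# Illustrative hop length in samples; the frame count is the ceiling of
# samples / hop, as computed in AudioToMFCCPreprocessor.get_features.
hop_length = 160
lengths = [16000, 16001, 159]
seq_lens = [math.ceil(n / hop_length) for n in lengths]  # [100, 101, 1]
```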
437 class SpectrogramAugmentation(NeuralModule):
438 """
439 Performs time and freq cuts in one of two ways.
440 SpecAugment zeroes out vertical and horizontal sections as described in
441 SpecAugment (https://arxiv.org/abs/1904.08779). Arguments for use with
442 SpecAugment are `freq_masks`, `time_masks`, `freq_width`, and `time_width`.
443 SpecCutout zeroes out rectangulars as described in Cutout
444 (https://arxiv.org/abs/1708.04552). Arguments for use with Cutout are
445 `rect_masks`, `rect_freq`, and `rect_time`.
446
447 Args:
448 freq_masks (int): how many frequency segments should be cut.
449 Defaults to 0.
450 time_masks (int): how many time segments should be cut
451 Defaults to 0.
452 freq_width (int): maximum number of frequencies to be cut in one
453 segment.
454 Defaults to 10.
455 time_width (int): maximum number of time steps to be cut in one
456 segment
457 Defaults to 10.
458 rect_masks (int): how many rectangular masks should be cut
459 Defaults to 0.
460 rect_freq (int): maximum size of cut rectangles along the frequency
461 dimension
462             Defaults to 20.
463 rect_time (int): maximum size of cut rectangles along the time
464 dimension
465             Defaults to 5.
466 """
467
468 @property
469 def input_types(self):
470 """Returns definitions of module input types
471 """
472 return {
473 "input_spec": NeuralType(('B', 'D', 'T'), SpectrogramType()),
474 "length": NeuralType(tuple('B'), LengthsType()),
475 }
476
477 @property
478 def output_types(self):
479 """Returns definitions of module output types
480 """
481 return {"augmented_spec": NeuralType(('B', 'D', 'T'), SpectrogramType())}
482
483 def __init__(
484 self,
485 freq_masks=0,
486 time_masks=0,
487 freq_width=10,
488 time_width=10,
489 rect_masks=0,
490 rect_time=5,
491 rect_freq=20,
492 rng=None,
493 mask_value=0.0,
494 use_numba_spec_augment: bool = True,
495 ):
496 super().__init__()
497
498 if rect_masks > 0:
499 self.spec_cutout = SpecCutout(rect_masks=rect_masks, rect_time=rect_time, rect_freq=rect_freq, rng=rng,)
500 # self.spec_cutout.to(self._device)
501 else:
502 self.spec_cutout = lambda input_spec: input_spec
503 if freq_masks + time_masks > 0:
504 self.spec_augment = SpecAugment(
505 freq_masks=freq_masks,
506 time_masks=time_masks,
507 freq_width=freq_width,
508 time_width=time_width,
509 rng=rng,
510 mask_value=mask_value,
511 )
512 else:
513 self.spec_augment = lambda input_spec, length: input_spec
514
515 # Check if numba is supported, and use a Numba kernel if it is
516 if use_numba_spec_augment and numba_utils.numba_cuda_is_supported(__NUMBA_MINIMUM_VERSION__):
517 logging.info('Numba CUDA SpecAugment kernel is being used')
518 self.spec_augment_numba = SpecAugmentNumba(
519 freq_masks=freq_masks,
520 time_masks=time_masks,
521 freq_width=freq_width,
522 time_width=time_width,
523 rng=rng,
524 mask_value=mask_value,
525 )
526 else:
527 self.spec_augment_numba = None
528
529 @typecheck()
530 def forward(self, input_spec, length):
531 augmented_spec = self.spec_cutout(input_spec=input_spec)
532
533 # To run the Numba kernel, correct numba version is required as well as
534 # tensor must be on GPU and length must be provided
535 if self.spec_augment_numba is not None and spec_augment_launch_heuristics(augmented_spec, length):
536 augmented_spec = self.spec_augment_numba(input_spec=augmented_spec, length=length)
537 else:
538 augmented_spec = self.spec_augment(input_spec=augmented_spec, length=length)
539 return augmented_spec
540
541
542 class MaskedPatchAugmentation(NeuralModule):
543 """
544 Zeroes out fixed size time patches of the spectrogram.
545 All samples in batch are guaranteed to have the same amount of masked time steps.
546 Optionally also performs frequency masking in the same way as SpecAugment.
547 Args:
548 patch_size (int): up to how many time steps does one patch consist of.
549 Defaults to 48.
550 mask_patches (float): how many patches should be masked in each sample.
551 if >= 1., interpreted as number of patches (after converting to int)
552 if <1., interpreted as fraction of total tokens to be masked (number of patches is rounded up)
553 Defaults to 10.
554 freq_masks (int): how many frequency segments should be cut.
555 Defaults to 0.
556 freq_width (int): maximum number of frequencies to be cut in a segment.
557 Defaults to 0.
558 """
559
560 @property
561 def input_types(self):
562 """Returns definitions of module input types
563 """
564 return {
565 "input_spec": NeuralType(('B', 'D', 'T'), SpectrogramType()),
566 "length": NeuralType(tuple('B'), LengthsType()),
567 }
568
569 @property
570 def output_types(self):
571 """Returns definitions of module output types
572 """
573 return {"augmented_spec": NeuralType(('B', 'D', 'T'), SpectrogramType())}
574
575 def __init__(
576 self, patch_size: int = 48, mask_patches: float = 10.0, freq_masks: int = 0, freq_width: int = 0,
577 ):
578 super().__init__()
579 self.patch_size = patch_size
580 if mask_patches >= 1:
581 self.mask_patches = int(mask_patches)
582 elif mask_patches >= 0:
583 self._mask_fraction = mask_patches
584 self.mask_patches = None
585 else:
586 raise ValueError('mask_patches cannot be negative')
587
588 if freq_masks > 0:
589 self.spec_augment = SpecAugment(freq_masks=freq_masks, time_masks=0, freq_width=freq_width, time_width=0,)
590 else:
591 self.spec_augment = None
592
593 @typecheck()
594 def forward(self, input_spec, length):
595 augmented_spec = input_spec
596
597 min_len = torch.min(length)
598
599 if self.mask_patches is None:
600 # masking specified as fraction
601 len_fraction = int(min_len * self._mask_fraction)
602 mask_patches = len_fraction // self.patch_size + int(len_fraction % self.patch_size != 0)
603 else:
604 mask_patches = self.mask_patches
605
606 if min_len < self.patch_size * mask_patches:
607 mask_patches = min_len // self.patch_size
608
609 for idx in range(input_spec.shape[0]):
610 cur_len = length[idx]
611 patches = range(cur_len // self.patch_size)
612 masked_patches = random.sample(patches, mask_patches)
613
614 for mp in masked_patches:
615 augmented_spec[idx, :, mp * self.patch_size : (mp + 1) * self.patch_size] = 0.0
616
617 if self.spec_augment is not None:
618 augmented_spec = self.spec_augment(input_spec=augmented_spec, length=length)
619
620 return augmented_spec
621
622
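When `mask_patches` is given as a fraction, `forward` above converts it to a whole number of patches by rounding up, then caps it so the patches still fit into the shortest sample in the batch. The arithmetic in isolation (values are illustrative):

```python
# Illustrative values; mirrors the fraction -> patch-count conversion in
# MaskedPatchAugmentation.forward.
patch_size = 48
mask_fraction = 0.3   # mask ~30% of the shortest sample
min_len = 480         # shortest sample in the batch, in time steps

len_fraction = int(min_len * mask_fraction)
# Round up to whole patches ...
mask_patches = len_fraction // patch_size + int(len_fraction % patch_size != 0)
# ... then cap so the patches fit into the shortest sample.
if min_len < patch_size * mask_patches:
    mask_patches = min_len // patch_size
```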
623 class CropOrPadSpectrogramAugmentation(NeuralModule):
624 """
625 Pad or Crop the incoming Spectrogram to a certain shape.
626
627 Args:
628 audio_length (int): the final number of timesteps that is required.
629 The signal will be either padded or cropped temporally to this
630 size.
631 """
632
633 def __init__(self, audio_length):
634 super(CropOrPadSpectrogramAugmentation, self).__init__()
635 self.audio_length = audio_length
636
637 @typecheck()
638 @torch.no_grad()
639 def forward(self, input_signal, length):
640 image = input_signal
641 num_images = image.shape[0]
642
643 audio_length = self.audio_length
644 image_len = image.shape[-1]
645
646 # Crop long signal
647 if image_len > audio_length: # randomly slice
648 cutout_images = []
649             offsets = torch.randint(low=0, high=image_len - audio_length + 1, size=[num_images])
650
651             for idx, offset in enumerate(offsets):
652 cutout_images.append(image[idx : idx + 1, :, offset : offset + audio_length])
653
654 image = torch.cat(cutout_images, dim=0)
655 del cutout_images
656
657 else: # symmetrically pad short signal with zeros
658 pad_left = (audio_length - image_len) // 2
659 pad_right = (audio_length - image_len) // 2
660
661 if (audio_length - image_len) % 2 == 1:
662 pad_right += 1
663
664 image = torch.nn.functional.pad(image, [pad_left, pad_right], mode="constant", value=0)
665
666 # Replace dynamic length sequences with static number of timesteps
667 length = (length * 0) + audio_length
668
669 return image, length
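The crop-or-pad logic above can be sketched on a plain 1-D sequence (a simplified, single-example stand-in for the spectrogram batch; the random offset mirrors the `torch.randint` call, and the extra pad element goes on the right exactly as in the module):

```python
import random


def crop_or_pad(sig, target_len, seed=0):
    """Randomly crop a long sequence to `target_len`, or pad a short one
    symmetrically with zeros (odd remainder padded on the right)."""
    cur = len(sig)
    if cur > target_len:
        off = random.Random(seed).randint(0, cur - target_len)
        return sig[off : off + target_len]
    pad = target_len - cur
    left, right = pad // 2, pad // 2 + (pad % 2)
    return [0.0] * left + list(sig) + [0.0] * right
```

After this call the length is always exactly `target_len`, which is why the module also overwrites `length` with a constant.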
670
671 @property
672 def input_types(self):
673 """Returns definitions of module output ports.
674 """
675 return {
676 "input_signal": NeuralType(('B', 'D', 'T'), SpectrogramType()),
677 "length": NeuralType(tuple('B'), LengthsType()),
678 }
679
680 @property
681 def output_types(self):
682 """Returns definitions of module output ports.
683 """
684 return {
685 "processed_signal": NeuralType(('B', 'D', 'T'), SpectrogramType()),
686 "processed_length": NeuralType(tuple('B'), LengthsType()),
687 }
688
689 def save_to(self, save_path: str):
690 pass
691
692 @classmethod
693 def restore_from(cls, restore_path: str):
694 pass
695
696
697 class AudioToSpectrogram(NeuralModule):
698 """Transform a batch of input multi-channel signals into a batch of
699 STFT-based spectrograms.
700
701 Args:
702 fft_length: length of FFT
703 hop_length: length of hops/shifts of the sliding window
704 power: exponent for magnitude spectrogram. Default `None` will
705 return a complex-valued spectrogram
706 """
707
708 def __init__(self, fft_length: int, hop_length: int, power: Optional[float] = None):
709 if not HAVE_TORCHAUDIO:
710 logging.error('Could not import torchaudio. Some features might not work.')
711
712 raise ModuleNotFoundError(
713                f"torchaudio is not installed but is necessary to instantiate a {self.__class__.__name__}"
714 )
715
716 super().__init__()
717
718 # For now, assume FFT length is divisible by two
719 if fft_length % 2 != 0:
720 raise ValueError(f'fft_length = {fft_length} must be divisible by 2')
721
722 self.stft = torchaudio.transforms.Spectrogram(
723 n_fft=fft_length, hop_length=hop_length, power=power, pad_mode='constant'
724 )
725
726 # number of subbands
727 self.F = fft_length // 2 + 1
728
729 @property
730 def num_subbands(self) -> int:
731 return self.F
732
733 @property
734 def input_types(self) -> Dict[str, NeuralType]:
735 """Returns definitions of module output ports.
736 """
737 return {
738 "input": NeuralType(('B', 'C', 'T'), AudioSignal()),
739 "input_length": NeuralType(('B',), LengthsType(), optional=True),
740 }
741
742 @property
743 def output_types(self) -> Dict[str, NeuralType]:
744 """Returns definitions of module output ports.
745 """
746 return {
747 "output": NeuralType(('B', 'C', 'D', 'T'), SpectrogramType()),
748 "output_length": NeuralType(('B',), LengthsType()),
749 }
750
751 @typecheck()
752 def forward(
753 self, input: torch.Tensor, input_length: Optional[torch.Tensor] = None
754 ) -> Tuple[torch.Tensor, torch.Tensor]:
755 """Convert a batch of C-channel input signals
756 into a batch of complex-valued spectrograms.
757
758 Args:
759 input: Time-domain input signal with C channels, shape (B, C, T)
760 input_length: Length of valid entries along the time dimension, shape (B,)
761
762 Returns:
763 Output spectrogram with F subbands and N time frames, shape (B, C, F, N)
764 and output length with shape (B,).
765 """
766 B, T = input.size(0), input.size(-1)
767 input = input.view(B, -1, T)
768
769 # STFT output (B, C, F, N)
770 with torch.cuda.amp.autocast(enabled=False):
771 output = self.stft(input.float())
772
773 if input_length is not None:
774 # Mask padded frames
775 output_length = self.get_output_length(input_length=input_length)
776
777 length_mask: torch.Tensor = make_seq_mask_like(
778 lengths=output_length, like=output, time_dim=-1, valid_ones=False
779 )
780 output = output.masked_fill(length_mask, 0.0)
781 else:
782 # Assume all frames are valid for all examples in the batch
783 output_length = output.size(-1) * torch.ones(B, device=output.device).long()
784
785 return output, output_length
786
787 def get_output_length(self, input_length: torch.Tensor) -> torch.Tensor:
788 """Get length of valid frames for the output.
789
790 Args:
791 input_length: number of valid samples, shape (B,)
792
793 Returns:
794 Number of valid frames, shape (B,)
795 """
796 output_length = input_length.div(self.stft.hop_length, rounding_mode='floor').add(1).long()
797 return output_length
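With torchaudio's default centre padding, the number of STFT frames depends only on the hop length, not on `fft_length`. The formula implemented by `get_output_length` above can be sketched and sanity-checked standalone:

```python
def stft_num_frames(num_samples: int, hop_length: int) -> int:
    """Number of frames of a centred (padded) STFT: floor(T / hop) + 1,
    matching AudioToSpectrogram.get_output_length."""
    return num_samples // hop_length + 1
```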
798
799
800 class SpectrogramToAudio(NeuralModule):
801 """Transform a batch of input multi-channel spectrograms into a batch of
802 time-domain multi-channel signals.
803
804 Args:
805 fft_length: length of FFT
806 hop_length: length of hops/shifts of the sliding window
809 """
810
811 def __init__(self, fft_length: int, hop_length: int):
812 if not HAVE_TORCHAUDIO:
813 logging.error('Could not import torchaudio. Some features might not work.')
814
815 raise ModuleNotFoundError(
816                f"torchaudio is not installed but is necessary to instantiate a {self.__class__.__name__}"
817 )
818
819 super().__init__()
820
821 # For now, assume FFT length is divisible by two
822 if fft_length % 2 != 0:
823 raise ValueError(f'fft_length = {fft_length} must be divisible by 2')
824
825 self.istft = torchaudio.transforms.InverseSpectrogram(
826 n_fft=fft_length, hop_length=hop_length, pad_mode='constant'
827 )
828
829 self.F = fft_length // 2 + 1
830
831 @property
832 def num_subbands(self) -> int:
833 return self.F
834
835 @property
836 def input_types(self) -> Dict[str, NeuralType]:
837 """Returns definitions of module output ports.
838 """
839 return {
840 "input": NeuralType(('B', 'C', 'D', 'T'), SpectrogramType()),
841 "input_length": NeuralType(('B',), LengthsType(), optional=True),
842 }
843
844 @property
845 def output_types(self) -> Dict[str, NeuralType]:
846 """Returns definitions of module output ports.
847 """
848 return {
849 "output": NeuralType(('B', 'C', 'T'), AudioSignal()),
850 "output_length": NeuralType(('B',), LengthsType()),
851 }
852
853 @typecheck()
854 def forward(self, input: torch.Tensor, input_length: Optional[torch.Tensor] = None) -> torch.Tensor:
855 """Convert input complex-valued spectrogram to a time-domain
856 signal. Multi-channel IO is supported.
857
858 Args:
859 input: Input spectrogram for C channels, shape (B, C, F, N)
860 input_length: Length of valid entries along the time dimension, shape (B,)
861
862 Returns:
863 Time-domain signal with T time-domain samples and C channels, (B, C, T)
864 and output length with shape (B,).
865 """
866 B, F, N = input.size(0), input.size(-2), input.size(-1)
867 assert F == self.F, f'Number of subbands F={F} not matching self.F={self.F}'
868 input = input.view(B, -1, F, N)
869
870 # iSTFT output (B, C, T)
871 with torch.cuda.amp.autocast(enabled=False):
872 output = self.istft(input.cfloat())
873
874 if input_length is not None:
875 # Mask padded samples
876 output_length = self.get_output_length(input_length=input_length)
877
878 length_mask: torch.Tensor = make_seq_mask_like(
879 lengths=output_length, like=output, time_dim=-1, valid_ones=False
880 )
881 output = output.masked_fill(length_mask, 0.0)
882 else:
883            # Assume all samples are valid for all examples in the batch
884 output_length = output.size(-1) * torch.ones(B, device=output.device).long()
885
886 return output, output_length
887
888 def get_output_length(self, input_length: torch.Tensor) -> torch.Tensor:
889 """Get length of valid samples for the output.
890
891 Args:
892 input_length: number of valid frames, shape (B,)
893
894 Returns:
895 Number of valid samples, shape (B,)
896 """
897 output_length = input_length.sub(1).mul(self.istft.hop_length).long()
898 return output_length
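The inverse mapping drops the centre padding again: reconstructing from N frames yields (N - 1) * hop samples. Together with the forward formula this means lengths that are a multiple of the hop length round-trip exactly (a standalone sketch mirroring the two `get_output_length` methods):

```python
def stft_num_frames(num_samples: int, hop_length: int) -> int:
    """Forward direction: frames produced by a centred STFT."""
    return num_samples // hop_length + 1


def istft_num_samples(num_frames: int, hop_length: int) -> int:
    """Inverse direction: samples reconstructed by the iSTFT,
    matching SpectrogramToAudio.get_output_length."""
    return (num_frames - 1) * hop_length
```

For example, 1280 samples at hop 128 give 11 frames, and 11 frames reconstruct back to exactly 1280 samples.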
899
900
901 @dataclass
902 class AudioToMelSpectrogramPreprocessorConfig:
903 _target_: str = "nemo.collections.asr.modules.AudioToMelSpectrogramPreprocessor"
904 sample_rate: int = 16000
905 window_size: float = 0.02
906 window_stride: float = 0.01
907 n_window_size: Optional[int] = None
908 n_window_stride: Optional[int] = None
909 window: str = "hann"
910 normalize: str = "per_feature"
911 n_fft: Optional[int] = None
912 preemph: float = 0.97
913 features: int = 64
914 lowfreq: int = 0
915 highfreq: Optional[int] = None
916 log: bool = True
917 log_zero_guard_type: str = "add"
918 log_zero_guard_value: float = 2 ** -24
919 dither: float = 1e-5
920 pad_to: int = 16
921 frame_splicing: int = 1
922 exact_pad: bool = False
923 pad_value: int = 0
924 mag_power: float = 2.0
925 rng: Optional[str] = None
926 nb_augmentation_prob: float = 0.0
927 nb_max_freq: int = 4000
928 use_torchaudio: bool = False
929 mel_norm: str = "slaney"
930 stft_exact_pad: bool = False # Deprecated argument, kept for compatibility with older checkpoints.
931 stft_conv: bool = False # Deprecated argument, kept for compatibility with older checkpoints.
932
933
934 @dataclass
935 class AudioToMFCCPreprocessorConfig:
936 _target_: str = 'nemo.collections.asr.modules.AudioToMFCCPreprocessor'
937 sample_rate: int = 16000
938 window_size: float = 0.02
939 window_stride: float = 0.01
940 n_window_size: Optional[int] = None
941 n_window_stride: Optional[int] = None
942 window: str = 'hann'
943 n_fft: Optional[int] = None
944 lowfreq: Optional[float] = 0.0
945 highfreq: Optional[float] = None
946 n_mels: int = 64
947 n_mfcc: int = 64
948 dct_type: int = 2
949 norm: str = 'ortho'
950 log: bool = True
951
952
953 @dataclass
954 class SpectrogramAugmentationConfig:
955 _target_: str = "nemo.collections.asr.modules.SpectrogramAugmentation"
956 freq_masks: int = 0
957 time_masks: int = 0
958 freq_width: int = 0
959 time_width: Optional[Any] = 0
960 rect_masks: int = 0
961 rect_time: int = 0
962 rect_freq: int = 0
963 mask_value: float = 0
964 rng: Optional[Any] = None # random.Random() type
965 use_numba_spec_augment: bool = True
966
967
968 @dataclass
969 class CropOrPadSpectrogramAugmentationConfig:
970 audio_length: int
971 _target_: str = "nemo.collections.asr.modules.CropOrPadSpectrogramAugmentation"
972
973
974 @dataclass
975 class MaskedPatchAugmentationConfig:
976 patch_size: int = 48
977 mask_patches: float = 10.0
978 freq_masks: int = 0
979 freq_width: int = 0
980 _target_: str = "nemo.collections.asr.modules.MaskedPatchAugmentation"
981
[end of nemo/collections/asr/modules/audio_preprocessing.py]
[start of nemo/collections/asr/parts/k2/classes.py]
1 # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from abc import ABC
16 from dataclasses import dataclass
17 from typing import Any, Optional, Tuple
18
19 import torch
20 from omegaconf import DictConfig
21
22 from nemo.utils import logging
23
24
25 @dataclass
26 class GraphIntersectDenseConfig:
27 """Graph dense intersection config.
28 """
29
30 search_beam: float = 20.0
31 output_beam: float = 10.0
32 min_active_states: int = 30
33 max_active_states: int = 10000
34
35
36 @dataclass
37 class GraphModuleConfig:
38 """Config for graph modules.
39 Typically used with graph losses and decoders.
40 """
41
42 topo_type: str = "default"
43 topo_with_self_loops: bool = True
44 token_lm: Optional[Any] = None
45 intersect_pruned: bool = False
46 intersect_conf: GraphIntersectDenseConfig = GraphIntersectDenseConfig()
47 boost_coeff: float = 0.0
48 predictor_window_size: int = 0
49 predictor_step_size: int = 1
50
51
52 class ASRK2Mixin(ABC):
53 """k2 Mixin class that simplifies the construction of various models with k2-based losses.
54
55 It does the following:
56 - Sets up the graph loss and decoder (methods _init_k2 and update_k2_modules).
57 - Registers external graphs, if needed.
58 - Augments forward(...) with optional graph decoding to get accurate predictions.
59 """
60
61 def _init_k2(self):
62 """
63 k2-related initialization implementation.
64
65 This method is expected to run after the __init__ which sets self._cfg
66 self._cfg is expected to have the attribute graph_module_cfg
67 """
68 if not hasattr(self, "_cfg"):
69 raise ValueError("self._cfg must be set before calling _init_k2().")
70 if not hasattr(self._cfg, "graph_module_cfg") or self._cfg.graph_module_cfg is None:
71 raise ValueError("self._cfg.graph_module_cfg must be set and cannot be None.")
72 self.graph_module_cfg = self._cfg.graph_module_cfg
73
74 # register token_lm for MAPLoss
75 criterion_type = self.graph_module_cfg.get("criterion_type", "ml")
76 self.use_graph_lm = criterion_type == "map"
77 if self.use_graph_lm:
78 token_lm_path = self.graph_module_cfg.backend_cfg.get("token_lm", None)
79 if token_lm_path is None:
80 raise ValueError(
81 f"graph_module_cfg.backend_cfg.token_lm is empty. It must be set for criterion_type == `{criterion_type}`"
82 )
83 token_lm_path = self.register_artifact('graph_module_cfg.backend_cfg.token_lm', token_lm_path)
84 self.graph_module_cfg.backend_cfg["token_lm"] = token_lm_path
85
86 self.update_k2_modules(self.graph_module_cfg)
87
88 def update_k2_modules(self, input_cfg: DictConfig):
89 """
90 Helper function to initialize or update k2 loss and transcribe_decoder.
91
92 Args:
93 input_cfg: DictConfig to take new parameters from. Schema is expected as in
94 nemo.collections.asr.models.configs.k2_sequence_models_config.GraphModuleConfig
95 """
96 del self.loss
97 if hasattr(self, "transcribe_decoder"):
98 del self.transcribe_decoder
99
100 if hasattr(self, "joint"):
101 # RNNT
102 num_classes = self.joint.num_classes_with_blank - 1
103 else:
104 # CTC, MMI, ...
105 num_classes = self.decoder.num_classes_with_blank - 1
106 remove_consecutive = input_cfg.backend_cfg.get("topo_with_self_loops", True) and input_cfg.backend_cfg.get(
107 "topo_type", "default"
108 ) not in ["forced_blank", "identity",]
109 self._wer.remove_consecutive = remove_consecutive
110
111 from nemo.collections.asr.losses.lattice_losses import LatticeLoss
112
113 self.loss = LatticeLoss(
114 num_classes=num_classes,
115 reduction=self._cfg.get("ctc_reduction", "mean_batch"),
116 backend="k2",
117 criterion_type=input_cfg.get("criterion_type", "ml"),
118 loss_type=input_cfg.get("loss_type", "ctc"),
119 split_batch_size=input_cfg.get("split_batch_size", 0),
120 graph_module_cfg=input_cfg.backend_cfg,
121 )
122
123 criterion_type = self.loss.criterion_type
124 self.use_graph_lm = criterion_type == "map"
125 transcribe_training = input_cfg.get("transcribe_training", False)
126 if transcribe_training and criterion_type == "ml":
127 logging.warning(
128 f"""You do not need to use transcribe_training=`{transcribe_training}`
129 with criterion_type=`{criterion_type}`. transcribe_training will be set to False."""
130 )
131 transcribe_training = False
132 self.transcribe_training = transcribe_training
133 if self.use_graph_lm:
134 from nemo.collections.asr.modules.graph_decoder import ViterbiDecoderWithGraph
135
136 self.transcribe_decoder = ViterbiDecoderWithGraph(
137 num_classes=num_classes,
138 backend="k2",
139 dec_type="token_lm",
140 return_type="1best",
141 return_ilabels=True,
142 output_aligned=True,
143 split_batch_size=input_cfg.get("split_batch_size", 0),
144 graph_module_cfg=input_cfg.backend_cfg,
145 )
146
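`remove_consecutive` above tells the WER computation whether repeated tokens should be merged before scoring, i.e. the standard CTC collapse rule. A minimal stdlib sketch of that rule (token ids and the blank id are illustrative):

```python
def ctc_collapse(tokens, blank_id, remove_consecutive=True):
    """Collapse a frame-level token sequence: optionally merge consecutive
    duplicates, then drop blanks -- the standard CTC decoding rule."""
    out = []
    prev = None
    for t in tokens:
        if remove_consecutive and t == prev:
            continue
        prev = t
        if t != blank_id:
            out.append(t)
    return out
```

Topologies with self-loops emit repeated labels for a single symbol, which is why they need `remove_consecutive=True`, while `forced_blank`/`identity` topologies do not.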
147 def _forward_k2_post_processing(
148 self, log_probs: torch.Tensor, encoded_length: torch.Tensor, greedy_predictions: torch.Tensor
149 ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
150 """
151        k2-related post-processing part of .forward()
152
153 Args:
154 log_probs: The log probabilities tensor of shape [B, T, D].
155 encoded_length: The lengths of the acoustic sequence after propagation through the encoder, of shape [B].
156 greedy_predictions: The greedy token predictions of the model of shape [B, T]
157
158 Returns:
159 A tuple of 3 elements -
160 1) The log probabilities tensor of shape [B, T, D].
161 2) The lengths of the acoustic sequence after propagation through the encoder, of shape [B].
162 3) The greedy token predictions of the model of shape [B, T] (via argmax)
163 """
164 # greedy_predictions from .forward() are incorrect for criterion_type=`map`
165 # getting correct greedy_predictions, if needed
166 if self.use_graph_lm and (not self.training or self.transcribe_training):
167 greedy_predictions, encoded_length, _ = self.transcribe_decoder.forward(
168 log_probs=log_probs, log_probs_length=encoded_length
169 )
170 return log_probs, encoded_length, greedy_predictions
171
[end of nemo/collections/asr/parts/k2/classes.py]
[start of nemo/collections/asr/parts/submodules/adapters/multi_head_attention_adapter_module.py]
1 # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import math
16 from dataclasses import dataclass
17 from typing import Any, Optional
18
19 import torch
20 from torch import nn as nn
21
22 from nemo.collections.asr.parts.submodules import multi_head_attention as mha
23 from nemo.collections.common.parts import adapter_modules
24 from nemo.core.classes.mixins import adapter_mixin_strategies
25
26
27 class MHAResidualAddAdapterStrategy(adapter_mixin_strategies.ResidualAddAdapterStrategy):
28 """
29 An implementation of residual addition of an adapter module with its input for the MHA Adapters.
30 """
31
32 def forward(self, input: torch.Tensor, adapter: torch.nn.Module, *, module: 'AdapterModuleMixin'):
33 """
34 A basic strategy, comprising of a residual connection over the input, after forward pass by
35 the underlying adapter. Additional work is done to pack and unpack the dictionary of inputs and outputs.
36
37 Note: The `value` tensor is added to the output of the attention adapter as the residual connection.
38
39 Args:
40 input: A dictionary of multiple input arguments for the adapter module.
41 `query`, `key`, `value`: Original output tensor of the module, or the output of the
42                    previous adapter (if more than one adapter is enabled).
43 `mask`: Attention mask.
44 `pos_emb`: Optional positional embedding for relative encoding.
45 adapter: The adapter module that is currently required to perform the forward pass.
46 module: The calling module, in its entirety. It is a module that implements `AdapterModuleMixin`,
47 therefore the strategy can access all other adapters in this module via `module.adapter_layer`.
48
49 Returns:
50 The result tensor, after one of the active adapters has finished its forward passes.
51 """
52 out = self.compute_output(input, adapter, module=module)
53
54 # If not in training mode, or probability of stochastic depth is 0, skip step.
55 p = self.stochastic_depth
56 if not module.training or p == 0.0:
57 pass
58 else:
59 out = self.apply_stochastic_depth(out, input['value'], adapter, module=module)
60
61 # Return the residual connection output = input + adapter(input)
62 result = input['value'] + out
63
64 # If l2_lambda is activated, register the loss value
65 self.compute_auxiliary_losses(result, input['value'], adapter, module=module)
66
67 return result
68
69 def compute_output(
70 self, input: torch.Tensor, adapter: torch.nn.Module, *, module: 'AdapterModuleMixin'
71 ) -> torch.Tensor:
72 """
73 Compute the output of a single adapter to some input.
74
75 Args:
76 input: Original output tensor of the module, or the output of the previous adapter (if more than
77                one adapter is enabled).
78 adapter: The adapter module that is currently required to perform the forward pass.
79 module: The calling module, in its entirety. It is a module that implements `AdapterModuleMixin`,
80 therefore the strategy can access all other adapters in this module via `module.adapter_layer`.
81
82 Returns:
83 The result tensor, after one of the active adapters has finished its forward passes.
84 """
85 if isinstance(input, (list, tuple)):
86 out = adapter(*input)
87 elif isinstance(input, dict):
88 out = adapter(**input)
89 else:
90 out = adapter(input)
91 return out
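`compute_output`'s type-based unpacking can be isolated into a tiny dispatcher, which makes the three supported call shapes easy to see at a glance (a simplified sketch; `fn` stands in for the adapter module):

```python
def dispatch(fn, inp):
    """Unpack positional (list/tuple), keyword (dict), or single-tensor
    inputs before calling `fn`, mirroring compute_output above."""
    if isinstance(inp, (list, tuple)):
        return fn(*inp)       # positional arguments
    if isinstance(inp, dict):
        return fn(**inp)      # keyword arguments (query/key/value/mask/...)
    return fn(inp)            # single input
```

The MHA adapters rely on the dict form, since their forward pass takes `query`, `key`, `value`, and `mask` separately.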
92
93
94 @dataclass
95 class MHAResidualAddAdapterStrategyConfig(adapter_mixin_strategies.ResidualAddAdapterStrategyConfig):
96 _target_: str = "{0}.{1}".format(
97 MHAResidualAddAdapterStrategy.__module__, MHAResidualAddAdapterStrategy.__name__
98 ) # mandatory field
99
100
101 class MultiHeadAttentionAdapter(mha.MultiHeadAttention, adapter_modules.AdapterModuleUtil):
102 """Multi-Head Attention layer of Transformer.
103 Args:
104 n_head (int): number of heads
105 n_feat (int): size of the features
106 dropout_rate (float): dropout rate
107 proj_dim (int, optional): Optional integer value for projection before computing attention.
108 If None, then there is no projection (equivalent to proj_dim = n_feat).
109 If > 0, then will project the n_feat to proj_dim before calculating attention.
110            If < 0, proj_dim will equal n_head, so that each head has a projected dimension of 1.
111 adapter_strategy: By default, MHAResidualAddAdapterStrategyConfig. An adapter composition function object.
112 """
113
114 def __init__(
115 self,
116 n_head: int,
117 n_feat: int,
118 dropout_rate: float,
119 proj_dim: Optional[int] = None,
120 adapter_strategy: MHAResidualAddAdapterStrategy = None,
121 ):
122 super().__init__(n_head=n_head, n_feat=n_feat, dropout_rate=dropout_rate, max_cache_len=0)
123
124 self.pre_norm = nn.LayerNorm(n_feat)
125
126 # Set the projection dim to number of heads automatically
127 if proj_dim is not None and proj_dim < 1:
128 proj_dim = n_head
129
130 self.proj_dim = proj_dim
131
132 # Recompute weights for projection dim
133 if self.proj_dim is not None:
134 if self.proj_dim % n_head != 0:
135 raise ValueError(f"proj_dim ({proj_dim}) is not divisible by n_head ({n_head})")
136
137 self.d_k = self.proj_dim // n_head
138 self.s_d_k = math.sqrt(self.d_k)
139 self.linear_q = nn.Linear(n_feat, self.proj_dim)
140 self.linear_k = nn.Linear(n_feat, self.proj_dim)
141 self.linear_v = nn.Linear(n_feat, self.proj_dim)
142 self.linear_out = nn.Linear(self.proj_dim, n_feat)
143
144 # Setup adapter strategy
145 self.setup_adapter_strategy(adapter_strategy)
146
147 # reset parameters for Q to be identity operation
148 self.reset_parameters()
149
150 def forward(self, query, key, value, mask, pos_emb=None, cache=None):
151 """Compute 'Scaled Dot Product Attention'.
152 Args:
153 query (torch.Tensor): (batch, time1, size)
154 key (torch.Tensor): (batch, time2, size)
155 value(torch.Tensor): (batch, time2, size)
156 mask (torch.Tensor): (batch, time1, time2)
157 cache (torch.Tensor) : (batch, time_cache, size)
158
159 returns:
160 output (torch.Tensor): transformed `value` (batch, time1, d_model) weighted by the query dot key attention
161 cache (torch.Tensor) : (batch, time_cache_next, size)
162 """
163 # Need to perform duplicate computations as at this point the tensors have been
164 # separated by the adapter forward
165 query = self.pre_norm(query)
166 key = self.pre_norm(key)
167 value = self.pre_norm(value)
168
169 return super().forward(query, key, value, mask, pos_emb, cache=cache)
170
171 def reset_parameters(self):
172 with torch.no_grad():
173 nn.init.zeros_(self.linear_out.weight)
174 nn.init.zeros_(self.linear_out.bias)
175
176 def get_default_strategy_config(self) -> 'dataclass':
177 return MHAResidualAddAdapterStrategyConfig()
178
179
180 @dataclass
181 class MultiHeadAttentionAdapterConfig:
182 n_head: int
183 n_feat: int
184 dropout_rate: float = 0.0
185 proj_dim: Optional[int] = None
186 adapter_strategy: Optional[Any] = MHAResidualAddAdapterStrategyConfig()
187 _target_: str = "{0}.{1}".format(MultiHeadAttentionAdapter.__module__, MultiHeadAttentionAdapter.__name__)
188
189
190 class RelPositionMultiHeadAttentionAdapter(mha.RelPositionMultiHeadAttention, adapter_modules.AdapterModuleUtil):
191 """Multi-Head Attention layer of Transformer-XL with support of relative positional encoding.
192 Paper: https://arxiv.org/abs/1901.02860
193 Args:
194 n_head (int): number of heads
195 n_feat (int): size of the features
196 dropout_rate (float): dropout rate
197 proj_dim (int, optional): Optional integer value for projection before computing attention.
198 If None, then there is no projection (equivalent to proj_dim = n_feat).
199 If > 0, then will project the n_feat to proj_dim before calculating attention.
200            If < 0, proj_dim will equal n_head, so that each head has a projected dimension of 1.
201 adapter_strategy: By default, MHAResidualAddAdapterStrategyConfig. An adapter composition function object.
202 """
203
204 def __init__(
205 self,
206 n_head: int,
207 n_feat: int,
208 dropout_rate: float,
209 proj_dim: Optional[int] = None,
210 adapter_strategy: MHAResidualAddAdapterStrategyConfig = None,
211 ):
212 super().__init__(
213 n_head=n_head, n_feat=n_feat, dropout_rate=dropout_rate, pos_bias_u=None, pos_bias_v=None, max_cache_len=0
214 )
215
216 self.pre_norm = nn.LayerNorm(n_feat)
217
218 # Set the projection dim to number of heads automatically
219 if proj_dim is not None and proj_dim < 1:
220 proj_dim = n_head
221
222 self.proj_dim = proj_dim
223
224 # Recompute weights for projection dim
225 if self.proj_dim is not None:
226 if self.proj_dim % n_head != 0:
227 raise ValueError(f"proj_dim ({proj_dim}) is not divisible by n_head ({n_head})")
228
229 self.d_k = self.proj_dim // n_head
230 self.s_d_k = math.sqrt(self.d_k)
231 self.linear_q = nn.Linear(n_feat, self.proj_dim)
232 self.linear_k = nn.Linear(n_feat, self.proj_dim)
233 self.linear_v = nn.Linear(n_feat, self.proj_dim)
234 self.linear_out = nn.Linear(self.proj_dim, n_feat)
235 self.linear_pos = nn.Linear(n_feat, self.proj_dim, bias=False)
236 self.pos_bias_u = nn.Parameter(torch.FloatTensor(self.h, self.d_k))
237 self.pos_bias_v = nn.Parameter(torch.FloatTensor(self.h, self.d_k))
238
239 # Setup adapter strategy
240 self.setup_adapter_strategy(adapter_strategy)
241
242 # reset parameters for Q to be identity operation
243 self.reset_parameters()
244
245 def forward(self, query, key, value, mask, pos_emb, cache=None):
246 """Compute 'Scaled Dot Product Attention' with rel. positional encoding.
247 Args:
248 query (torch.Tensor): (batch, time1, size)
249 key (torch.Tensor): (batch, time2, size)
250 value(torch.Tensor): (batch, time2, size)
251 mask (torch.Tensor): (batch, time1, time2)
252 pos_emb (torch.Tensor) : (batch, time1, size)
253 cache (torch.Tensor) : (batch, time_cache, size)
254 Returns:
255 output (torch.Tensor): transformed `value` (batch, time1, d_model) weighted by the query dot key attention
256 cache_next (torch.Tensor) : (batch, time_cache_next, size)
257 """
258 # Need to perform duplicate computations as at this point the tensors have been
259 # separated by the adapter forward
260 query = self.pre_norm(query)
261 key = self.pre_norm(key)
262 value = self.pre_norm(value)
263
264 return super().forward(query, key, value, mask, pos_emb, cache=cache)
265
266 def reset_parameters(self):
267 with torch.no_grad():
268 nn.init.zeros_(self.linear_out.weight)
269 nn.init.zeros_(self.linear_out.bias)
270
271            # NOTE: This exact procedure is apparently highly important.
272            # The above operation is safe, as it is equivalent to self.linear_out.weight *= 0.0 (similarly for the bias).
273 # However:
274 # DO NOT REPLACE BELOW WITH self.pos_bias_u *= 0.0 OR self.pos_bias_v *= 0.0
275 # For some reason at init sometimes it will cause the value of the tensor to become NaN
276 # All operations to compute matrix_ac and matrix_bd will then fail.
277 nn.init.zeros_(self.pos_bias_u)
278 nn.init.zeros_(self.pos_bias_v)
279
280 def get_default_strategy_config(self) -> 'dataclass':
281 return MHAResidualAddAdapterStrategyConfig()
282
283
284 @dataclass
285 class RelPositionMultiHeadAttentionAdapterConfig:
286 n_head: int
287 n_feat: int
288 dropout_rate: float = 0.0
289 proj_dim: Optional[int] = None
290 adapter_strategy: Optional[Any] = MHAResidualAddAdapterStrategyConfig()
291 _target_: str = "{0}.{1}".format(
292 RelPositionMultiHeadAttentionAdapter.__module__, RelPositionMultiHeadAttentionAdapter.__name__
293 )
294
295
296 class PositionalEncodingAdapter(mha.PositionalEncoding, adapter_modules.AdapterModuleUtil):
297
298 """
299 Absolute positional embedding adapter.
300
301 .. note::
302
303 Absolute positional embedding value is added to the input tensor *without residual connection* !
304        Therefore, the input is changed; if you only require the positional embedding, drop the returned `x` !
305
306 Args:
307 d_model (int): The input dimension of x.
308 max_len (int): The max sequence length.
309 xscale (float): The input scaling factor. Defaults to 1.0.
310 adapter_strategy (AbstractAdapterStrategy): By default, ReturnResultAdapterStrategyConfig.
311 An adapter composition function object.
312 NOTE: Since this is a positional encoding, it will not add a residual !
313 """
314
315 def __init__(
316 self,
317 d_model: int,
318 max_len: int = 5000,
319 xscale=1.0,
320 adapter_strategy: adapter_mixin_strategies.ReturnResultAdapterStrategyConfig = None,
321 ):
322
323 super().__init__(
324 d_model=d_model, dropout_rate=0.0, max_len=max_len, xscale=xscale, dropout_rate_emb=0.0,
325 )
326
327 # Setup adapter strategy
328 self.setup_adapter_strategy(adapter_strategy)
329
330 def get_default_strategy_config(self) -> 'dataclass':
331 return adapter_mixin_strategies.ReturnResultAdapterStrategyConfig()
332
333
334 @dataclass
335 class PositionalEncodingAdapterConfig:
336 d_model: int
337 max_len: int = 5000
338 xscale: float = 1.0
339 adapter_strategy: Optional[Any] = adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
340 _target_: str = "{0}.{1}".format(PositionalEncodingAdapter.__module__, PositionalEncodingAdapter.__name__)
341
342
343 class RelPositionalEncodingAdapter(mha.RelPositionalEncoding, adapter_modules.AdapterModuleUtil):
344 """
345 Relative positional encoding for TransformerXL's layers
346 See : Appendix B in https://arxiv.org/abs/1901.02860
347
348 .. note::
349
350 Relative positional embedding value is **not** added to the input tensor !
351        Therefore, the input is left unchanged; if you only require the positional embedding, drop the returned `x` !
352
353 Args:
354 d_model (int): embedding dim
355 max_len (int): maximum input length
356 xscale (bool): whether to scale the input by sqrt(d_model)
357 adapter_strategy: By default, ReturnResultAdapterStrategyConfig. An adapter composition function object.
358 """
359
360 def __init__(
361 self,
362 d_model: int,
363 max_len: int = 5000,
364 xscale=1.0,
365 adapter_strategy: adapter_mixin_strategies.ReturnResultAdapterStrategyConfig = None,
366 ):
367 super().__init__(d_model=d_model, dropout_rate=0.0, max_len=max_len, xscale=xscale, dropout_rate_emb=0.0)
368
369 # Setup adapter strategy
370 self.setup_adapter_strategy(adapter_strategy)
371
372 def get_default_strategy_config(self) -> 'dataclass':
373 return adapter_mixin_strategies.ReturnResultAdapterStrategyConfig()
374
375
376 @dataclass
377 class RelPositionalEncodingAdapterConfig:
378 d_model: int
379 max_len: int = 5000
380 xscale: float = 1.0
381 adapter_strategy: Optional[Any] = adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
382 _target_: str = "{0}.{1}".format(RelPositionalEncodingAdapter.__module__, RelPositionalEncodingAdapter.__name__)
383
[end of nemo/collections/asr/parts/submodules/adapters/multi_head_attention_adapter_module.py]
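The adapter classes above delegate the actual table construction to NeMo's `PositionalEncoding` / `RelPositionalEncoding` parents. As a standalone sketch (plain Python, not NeMo's implementation) of the classic sinusoidal table such parents build, where `pe[pos][2i] = sin(pos / 10000^(2i/d_model))` and `pe[pos][2i+1] = cos(pos / 10000^(2i/d_model))`:

```python
import math

def sinusoidal_positional_encoding(max_len, d_model):
    # Build a [max_len, d_model] table of sinusoidal positional encodings.
    # Even columns hold sin terms, odd columns hold the matching cos terms.
    pe = [[0.0] * d_model for _ in range(max_len)]
    for pos in range(max_len):
        for i in range(0, d_model, 2):
            angle = pos / (10000 ** (i / d_model))
            pe[pos][i] = math.sin(angle)
            if i + 1 < d_model:
                pe[pos][i + 1] = math.cos(angle)
    return pe

pe = sinusoidal_positional_encoding(max_len=8, d_model=4)
```

Position 0 encodes to alternating 0.0 (sin) and 1.0 (cos) entries, which is why dropping the returned `x` and keeping only the embedding is safe when, as the note above says, the embedding is not added to the input.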
[start of nemo/collections/asr/parts/submodules/ctc_beam_decoding.py]
1 # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import math
16 import os
17 from dataclasses import dataclass
18 from typing import List, Optional, Tuple, Union
19
20 import torch
21
22 from nemo.collections.asr.parts.utils import rnnt_utils
23 from nemo.collections.common.tokenizers.tokenizer_spec import TokenizerSpec
24 from nemo.core.classes import Typing, typecheck
25 from nemo.core.neural_types import HypothesisType, LengthsType, LogprobsType, NeuralType
26 from nemo.utils import logging
27
28 DEFAULT_TOKEN_OFFSET = 100
29
30
31 def pack_hypotheses(
32 hypotheses: List[rnnt_utils.NBestHypotheses], logitlen: torch.Tensor,
33 ) -> List[rnnt_utils.NBestHypotheses]:
34
35 if logitlen is not None:
36 if hasattr(logitlen, 'cpu'):
37 logitlen_cpu = logitlen.to('cpu')
38 else:
39 logitlen_cpu = logitlen
40
41 for idx, hyp in enumerate(hypotheses): # type: rnnt_utils.NBestHypotheses
42 for candidate_idx, cand in enumerate(hyp.n_best_hypotheses):
43 cand.y_sequence = torch.tensor(cand.y_sequence, dtype=torch.long)
44
45 if logitlen is not None:
46 cand.length = logitlen_cpu[idx]
47
48 if cand.dec_state is not None:
49 cand.dec_state = _states_to_device(cand.dec_state)
50
51 return hypotheses
52
53
54 def _states_to_device(dec_state, device='cpu'):
55 if torch.is_tensor(dec_state):
56 dec_state = dec_state.to(device)
57
58 elif isinstance(dec_state, (list, tuple)):
59 dec_state = tuple(_states_to_device(dec_i, device) for dec_i in dec_state)
60
61 return dec_state
62
63
64 class AbstractBeamCTCInfer(Typing):
65 """A beam CTC decoder.
66
67 Provides a common abstraction for sample level beam decoding.
68
69 Args:
70 blank_id: int, index of the blank token. Can be 0 or len(vocabulary).
71 beam_size: int, size of the beam used in the underlying beam search engine.
72
73 """
74
75 @property
76 def input_types(self):
77 """Returns definitions of module input ports.
78 """
79 return {
80 "decoder_output": NeuralType(('B', 'T', 'D'), LogprobsType()),
81 "decoder_lengths": NeuralType(tuple('B'), LengthsType()),
82 }
83
84 @property
85 def output_types(self):
86 """Returns definitions of module output ports.
87 """
88 return {"predictions": [NeuralType(elements_type=HypothesisType())]}
89
90 def __init__(self, blank_id: int, beam_size: int):
91 self.blank_id = blank_id
92
93 if beam_size < 1:
94 raise ValueError("Beam search size cannot be less than 1!")
95
96 self.beam_size = beam_size
97
98 # Variables set by corresponding setter methods
99 self.vocab = None
100 self.decoding_type = None
101 self.tokenizer = None
102
103 # Utility maps for vocabulary
104 self.vocab_index_map = None
105 self.index_vocab_map = None
106
107 # Internal variable, used to prevent double reduction of consecutive tokens (ctc collapse)
108 self.override_fold_consecutive_value = None
109
110 def set_vocabulary(self, vocab: List[str]):
111 """
112 Set the vocabulary of the decoding framework.
113
114 Args:
115 vocab: List of str. Each token corresponds to its location in the vocabulary emitted by the model.
116 Note that this vocabulary must NOT contain the "BLANK" token.
117 """
118 self.vocab = vocab
119 self.vocab_index_map = {v: i for i, v in enumerate(vocab)}
120 self.index_vocab_map = {i: v for i, v in enumerate(vocab)}
121
122 def set_decoding_type(self, decoding_type: str):
123 """
124 Sets the decoding type of the framework. Can support either char or subword models.
125
126 Args:
127 decoding_type: Str corresponding to decoding type. Only supports "char" and "subword".
128 """
129 decoding_type = decoding_type.lower()
130 supported_types = ['char', 'subword']
131
132 if decoding_type not in supported_types:
133 raise ValueError(
134 f"Unsupported decoding type. Supported types = {supported_types}.\n" f"Given = {decoding_type}"
135 )
136
137 self.decoding_type = decoding_type
138
139 def set_tokenizer(self, tokenizer: TokenizerSpec):
140 """
141 Set the tokenizer of the decoding framework.
142
143 Args:
144 tokenizer: NeMo tokenizer object, which inherits from TokenizerSpec.
145 """
146 self.tokenizer = tokenizer
147
148 @typecheck()
149 def forward(
150 self, decoder_output: torch.Tensor, decoder_lengths: torch.Tensor,
151 ) -> Tuple[List[Union[rnnt_utils.Hypothesis, rnnt_utils.NBestHypotheses]]]:
152 """Returns a list of hypotheses given an input batch of the encoder hidden embedding.
153         Output token is generated auto-regressively.
154
155 Args:
156 decoder_output: A tensor of size (batch, timesteps, features) or (batch, timesteps) (each timestep is a label).
157 decoder_lengths: list of int representing the length of each sequence
158 output sequence.
159
160 Returns:
161 packed list containing batch number of sentences (Hypotheses).
162 """
163 raise NotImplementedError()
164
165 def __call__(self, *args, **kwargs):
166 return self.forward(*args, **kwargs)
167
168
169 class BeamCTCInfer(AbstractBeamCTCInfer):
170     """A beam CTC decoder.
171 
172     Provides a common abstraction for sample level beam decoding.
173 
174     Args:
175         blank_id: int, index of the blank token. Can be 0 or len(vocabulary).
176 preserve_alignments: Bool flag which preserves the history of logprobs generated during
177 decoding (sample / batched). When set to true, the Hypothesis will contain
178             the non-null value for `logprobs` in it. Here, `logprobs` is a torch.Tensor.
179 compute_timestamps: A bool flag, which determines whether to compute the character/subword, or
180             word based timestamp mapping the output log-probabilities to discrete intervals of timestamps.
181 The timestamps will be available in the returned Hypothesis.timestep as a dictionary.
182
183 """
184
185 def __init__(
186 self,
187 blank_id: int,
188 beam_size: int,
189 search_type: str = "default",
190 return_best_hypothesis: bool = True,
191 preserve_alignments: bool = False,
192 compute_timestamps: bool = False,
193 beam_alpha: float = 1.0,
194 beam_beta: float = 0.0,
195 kenlm_path: str = None,
196 flashlight_cfg: Optional['FlashlightConfig'] = None,
197 pyctcdecode_cfg: Optional['PyCTCDecodeConfig'] = None,
198 ):
199 super().__init__(blank_id=blank_id, beam_size=beam_size)
200
201 self.search_type = search_type
202 self.return_best_hypothesis = return_best_hypothesis
203 self.preserve_alignments = preserve_alignments
204 self.compute_timestamps = compute_timestamps
205
206 if self.compute_timestamps:
207             raise ValueError("Currently the compute_timestamps flag is not supported for beam search algorithms.")
208
209 self.vocab = None # This must be set by specific method by user before calling forward() !
210
211 if search_type == "default" or search_type == "nemo":
212 self.search_algorithm = self.default_beam_search
213 elif search_type == "pyctcdecode":
214 self.search_algorithm = self._pyctcdecode_beam_search
215 elif search_type == "flashlight":
216 self.search_algorithm = self.flashlight_beam_search
217 else:
218 raise NotImplementedError(
219 f"The search type ({search_type}) supplied is not supported!\n"
220                 f"Please use one of : (default, nemo, pyctcdecode, flashlight)"
221 )
222
223 # Log the beam search algorithm
224 logging.info(f"Beam search algorithm: {search_type}")
225
226 self.beam_alpha = beam_alpha
227 self.beam_beta = beam_beta
228
229 # Default beam search args
230 self.kenlm_path = kenlm_path
231
232 # PyCTCDecode params
233 if pyctcdecode_cfg is None:
234 pyctcdecode_cfg = PyCTCDecodeConfig()
235 self.pyctcdecode_cfg = pyctcdecode_cfg # type: PyCTCDecodeConfig
236
237 if flashlight_cfg is None:
238 flashlight_cfg = FlashlightConfig()
239 self.flashlight_cfg = flashlight_cfg
240
241 # Default beam search scorer functions
242 self.default_beam_scorer = None
243 self.pyctcdecode_beam_scorer = None
244 self.flashlight_beam_scorer = None
245 self.token_offset = 0
246
247 @typecheck()
248 def forward(
249 self, decoder_output: torch.Tensor, decoder_lengths: torch.Tensor,
250 ) -> Tuple[List[Union[rnnt_utils.Hypothesis, rnnt_utils.NBestHypotheses]]]:
251 """Returns a list of hypotheses given an input batch of the encoder hidden embedding.
252         Output token is generated auto-regressively.
253
254 Args:
255 decoder_output: A tensor of size (batch, timesteps, features).
256 decoder_lengths: list of int representing the length of each sequence
257 output sequence.
258
259 Returns:
260 packed list containing batch number of sentences (Hypotheses).
261 """
262 if self.vocab is None:
263 raise RuntimeError("Please set the vocabulary with `set_vocabulary()` before calling this function.")
264
265 if self.decoding_type is None:
266 raise ValueError("Please set the decoding type with `set_decoding_type()` before calling this function.")
267
268 with torch.no_grad(), torch.inference_mode():
269 # Process each sequence independently
270 prediction_tensor = decoder_output
271
272 if prediction_tensor.ndim != 3:
273 raise ValueError(
274 f"`decoder_output` must be a tensor of shape [B, T, V] (log probs, float). "
275 f"Provided shape = {prediction_tensor.shape}"
276 )
277
278 # determine type of input - logprobs or labels
279 out_len = decoder_lengths if decoder_lengths is not None else None
280 hypotheses = self.search_algorithm(prediction_tensor, out_len)
281
282 # Pack results into Hypotheses
283 packed_result = pack_hypotheses(hypotheses, decoder_lengths)
284
285 # Pack the result
286 if self.return_best_hypothesis and isinstance(packed_result[0], rnnt_utils.NBestHypotheses):
287 packed_result = [res.n_best_hypotheses[0] for res in packed_result] # type: Hypothesis
288
289 return (packed_result,)
290
291 @torch.no_grad()
292 def default_beam_search(
293 self, x: torch.Tensor, out_len: torch.Tensor
294 ) -> List[Union[rnnt_utils.Hypothesis, rnnt_utils.NBestHypotheses]]:
295 """
296         Open Seq2Seq Beam Search Algorithm (DeepSpeech)
297
298 Args:
299 x: Tensor of shape [B, T, V+1], where B is the batch size, T is the maximum sequence length,
300 and V is the vocabulary size. The tensor contains log-probabilities.
301 out_len: Tensor of shape [B], contains lengths of each sequence in the batch.
302
303 Returns:
304 A list of NBestHypotheses objects, one for each sequence in the batch.
305 """
306 if self.compute_timestamps:
307 raise ValueError(
308 f"Beam Search with strategy `{self.search_type}` does not support time stamp calculation!"
309 )
310
311 if self.default_beam_scorer is None:
312 # Check for filepath
313 if self.kenlm_path is None or not os.path.exists(self.kenlm_path):
314 raise FileNotFoundError(
315 f"KenLM binary file not found at : {self.kenlm_path}. "
316 f"Please set a valid path in the decoding config."
317 )
318
319 # perform token offset for subword models
320 if self.decoding_type == 'subword':
321 vocab = [chr(idx + self.token_offset) for idx in range(len(self.vocab))]
322 else:
323 # char models
324 vocab = self.vocab
325
326 # Must import at runtime to avoid circular dependency due to module level import.
327 from nemo.collections.asr.modules.beam_search_decoder import BeamSearchDecoderWithLM
328
329 self.default_beam_scorer = BeamSearchDecoderWithLM(
330 vocab=vocab,
331 lm_path=self.kenlm_path,
332 beam_width=self.beam_size,
333 alpha=self.beam_alpha,
334 beta=self.beam_beta,
335 num_cpus=max(1, os.cpu_count()),
336 input_tensor=False,
337 )
338
339 x = x.to('cpu')
340
341 with typecheck.disable_checks():
342 data = [x[sample_id, : out_len[sample_id], :].softmax(dim=-1) for sample_id in range(len(x))]
343 beams_batch = self.default_beam_scorer.forward(log_probs=data, log_probs_length=None)
344
345 # For each sample in the batch
346 nbest_hypotheses = []
347 for beams_idx, beams in enumerate(beams_batch):
348 # For each beam candidate / hypothesis in each sample
349 hypotheses = []
350 for candidate_idx, candidate in enumerate(beams):
351 hypothesis = rnnt_utils.Hypothesis(
352 score=0.0, y_sequence=[], dec_state=None, timestep=[], last_token=None
353 )
354
355 # For subword encoding, NeMo will double encode the subword (multiple tokens) into a
356 # singular unicode id. In doing so, we preserve the semantic of the unicode token, and
357 # compress the size of the final KenLM ARPA / Binary file.
358 # In order to do double encoding, we shift the subword by some token offset.
359 # This step is ignored for character based models.
360 if self.decoding_type == 'subword':
361 pred_token_ids = [ord(c) - self.token_offset for c in candidate[1]]
362 else:
363 # Char models
364 pred_token_ids = [self.vocab_index_map[c] for c in candidate[1]]
365
366 # We preserve the token ids and the score for this hypothesis
367 hypothesis.y_sequence = pred_token_ids
368 hypothesis.score = candidate[0]
369
370 # If alignment must be preserved, we preserve a view of the output logprobs.
371 # Note this view is shared amongst all beams within the sample, be sure to clone it if you
372 # require specific processing for each sample in the beam.
373 # This is done to preserve memory.
374 if self.preserve_alignments:
375 hypothesis.alignments = x[beams_idx][: out_len[beams_idx]]
376
377 hypotheses.append(hypothesis)
378
379 # Wrap the result in NBestHypothesis.
380 hypotheses = rnnt_utils.NBestHypotheses(hypotheses)
381 nbest_hypotheses.append(hypotheses)
382
383 return nbest_hypotheses
384
385 @torch.no_grad()
386 def _pyctcdecode_beam_search(
387 self, x: torch.Tensor, out_len: torch.Tensor
388 ) -> List[Union[rnnt_utils.Hypothesis, rnnt_utils.NBestHypotheses]]:
389 """
390 PyCTCDecode Beam Search Algorithm. Should support Char and Subword models.
391
392 Args:
393 x: Tensor of shape [B, T, V+1], where B is the batch size, T is the maximum sequence length,
394 and V is the vocabulary size. The tensor contains log-probabilities.
395 out_len: Tensor of shape [B], contains lengths of each sequence in the batch.
396
397 Returns:
398 A list of NBestHypotheses objects, one for each sequence in the batch.
399 """
400 if self.compute_timestamps:
401 raise ValueError(
402 f"Beam Search with strategy `{self.search_type}` does not support time stamp calculation!"
403 )
404
405 try:
406 import pyctcdecode
407 except (ImportError, ModuleNotFoundError):
408 raise ImportError(
409 f"Could not load `pyctcdecode` library. Please install it from pip using :\n"
410 f"pip install --upgrade pyctcdecode"
411 )
412
413 if self.pyctcdecode_beam_scorer is None:
414 self.pyctcdecode_beam_scorer = pyctcdecode.build_ctcdecoder(
415 labels=self.vocab, kenlm_model_path=self.kenlm_path, alpha=self.beam_alpha, beta=self.beam_beta
416 ) # type: pyctcdecode.BeamSearchDecoderCTC
417
418 x = x.to('cpu').numpy()
419
420 with typecheck.disable_checks():
421 beams_batch = []
422 for sample_id in range(len(x)):
423 logprobs = x[sample_id, : out_len[sample_id], :]
424 result = self.pyctcdecode_beam_scorer.decode_beams(
425 logprobs,
426 beam_width=self.beam_size,
427 beam_prune_logp=self.pyctcdecode_cfg.beam_prune_logp,
428 token_min_logp=self.pyctcdecode_cfg.token_min_logp,
429 prune_history=self.pyctcdecode_cfg.prune_history,
430 hotwords=self.pyctcdecode_cfg.hotwords,
431 hotword_weight=self.pyctcdecode_cfg.hotword_weight,
432 lm_start_state=None,
433 ) # Output format: text, last_lm_state, text_frames, logit_score, lm_score
434 beams_batch.append(result)
435
436 nbest_hypotheses = []
437 for beams_idx, beams in enumerate(beams_batch):
438 hypotheses = []
439 for candidate_idx, candidate in enumerate(beams):
440 # Candidate = (text, last_lm_state, text_frames, logit_score, lm_score)
441 hypothesis = rnnt_utils.Hypothesis(
442 score=0.0, y_sequence=[], dec_state=None, timestep=[], last_token=None
443 )
444
445 # TODO: Requires token ids to be returned rather than text.
446 if self.decoding_type == 'subword':
447 if self.tokenizer is None:
448 raise ValueError("Tokenizer must be provided for subword decoding. Use set_tokenizer().")
449
450 pred_token_ids = self.tokenizer.text_to_ids(candidate[0])
451 else:
452 if self.vocab is None:
453                         raise ValueError("Vocab must be provided for character decoding. Use set_vocabulary().")
454
455 chars = list(candidate[0])
456 pred_token_ids = [self.vocab_index_map[c] for c in chars]
457
458 hypothesis.y_sequence = pred_token_ids
459 hypothesis.text = candidate[0] # text
460                 hypothesis.score = candidate[4]  # lm_score
461
462 # Inject word level timestamps
463 hypothesis.timestep = candidate[2] # text_frames
464
465 if self.preserve_alignments:
466 hypothesis.alignments = torch.from_numpy(x[beams_idx][: out_len[beams_idx]])
467
468 hypotheses.append(hypothesis)
469
470 hypotheses = rnnt_utils.NBestHypotheses(hypotheses)
471 nbest_hypotheses.append(hypotheses)
472
473 return nbest_hypotheses
474
475 @torch.no_grad()
476 def flashlight_beam_search(
477 self, x: torch.Tensor, out_len: torch.Tensor
478 ) -> List[Union[rnnt_utils.Hypothesis, rnnt_utils.NBestHypotheses]]:
479 """
480 Flashlight Beam Search Algorithm. Should support Char and Subword models.
481
482 Args:
483 x: Tensor of shape [B, T, V+1], where B is the batch size, T is the maximum sequence length,
484 and V is the vocabulary size. The tensor contains log-probabilities.
485 out_len: Tensor of shape [B], contains lengths of each sequence in the batch.
486
487 Returns:
488 A list of NBestHypotheses objects, one for each sequence in the batch.
489 """
490 if self.compute_timestamps:
491 raise ValueError(
492 f"Beam Search with strategy `{self.search_type}` does not support time stamp calculation!"
493 )
494
495 if self.flashlight_beam_scorer is None:
496 # Check for filepath
497 if self.kenlm_path is None or not os.path.exists(self.kenlm_path):
498 raise FileNotFoundError(
499 f"KenLM binary file not found at : {self.kenlm_path}. "
500 f"Please set a valid path in the decoding config."
501 )
502
503 # perform token offset for subword models
504 # if self.decoding_type == 'subword':
505 # vocab = [chr(idx + self.token_offset) for idx in range(len(self.vocab))]
506 # else:
507 # # char models
508 # vocab = self.vocab
509
510 # Must import at runtime to avoid circular dependency due to module level import.
511 from nemo.collections.asr.modules.flashlight_decoder import FlashLightKenLMBeamSearchDecoder
512
513 self.flashlight_beam_scorer = FlashLightKenLMBeamSearchDecoder(
514 lm_path=self.kenlm_path,
515 vocabulary=self.vocab,
516 tokenizer=self.tokenizer,
517 lexicon_path=self.flashlight_cfg.lexicon_path,
518 boost_path=self.flashlight_cfg.boost_path,
519 beam_size=self.beam_size,
520 beam_size_token=self.flashlight_cfg.beam_size_token,
521 beam_threshold=self.flashlight_cfg.beam_threshold,
522 lm_weight=self.beam_alpha,
523 word_score=self.beam_beta,
524 unk_weight=self.flashlight_cfg.unk_weight,
525 sil_weight=self.flashlight_cfg.sil_weight,
526 )
527
528 x = x.to('cpu')
529
530 with typecheck.disable_checks():
531 beams_batch = self.flashlight_beam_scorer.forward(log_probs=x)
532
533 # For each sample in the batch
534 nbest_hypotheses = []
535 for beams_idx, beams in enumerate(beams_batch):
536 # For each beam candidate / hypothesis in each sample
537 hypotheses = []
538 for candidate_idx, candidate in enumerate(beams):
539 hypothesis = rnnt_utils.Hypothesis(
540 score=0.0, y_sequence=[], dec_state=None, timestep=[], last_token=None
541 )
542
543 # We preserve the token ids and the score for this hypothesis
544 hypothesis.y_sequence = candidate['tokens'].tolist()
545 hypothesis.score = candidate['score']
546
547 # If alignment must be preserved, we preserve a view of the output logprobs.
548 # Note this view is shared amongst all beams within the sample, be sure to clone it if you
549 # require specific processing for each sample in the beam.
550 # This is done to preserve memory.
551 if self.preserve_alignments:
552 hypothesis.alignments = x[beams_idx][: out_len[beams_idx]]
553
554 hypotheses.append(hypothesis)
555
556 # Wrap the result in NBestHypothesis.
557 hypotheses = rnnt_utils.NBestHypotheses(hypotheses)
558 nbest_hypotheses.append(hypotheses)
559
560 return nbest_hypotheses
561
562 def set_decoding_type(self, decoding_type: str):
563 super().set_decoding_type(decoding_type)
564
565 # Please check train_kenlm.py in scripts/asr_language_modeling/ to find out why we need
566 # TOKEN_OFFSET for BPE-based models
567 if self.decoding_type == 'subword':
568 self.token_offset = DEFAULT_TOKEN_OFFSET
569
570
571 @dataclass
572 class PyCTCDecodeConfig:
573 # These arguments cannot be imported from pyctcdecode (optional dependency)
574 # Therefore we copy the values explicitly
575 # Taken from pyctcdecode.constant
576 beam_prune_logp: float = -10.0
577 token_min_logp: float = -5.0
578 prune_history: bool = False
579 hotwords: Optional[List[str]] = None
580 hotword_weight: float = 10.0
581
582
583 @dataclass
584 class FlashlightConfig:
585 lexicon_path: Optional[str] = None
586 boost_path: Optional[str] = None
587 beam_size_token: int = 16
588 beam_threshold: float = 20.0
589 unk_weight: float = -math.inf
590 sil_weight: float = 0.0
591
592
593 @dataclass
594 class BeamCTCInferConfig:
595 beam_size: int
596 search_type: str = 'default'
597 preserve_alignments: bool = False
598 compute_timestamps: bool = False
599 return_best_hypothesis: bool = True
600
601 beam_alpha: float = 1.0
602 beam_beta: float = 0.0
603 kenlm_path: Optional[str] = None
604
605 flashlight_cfg: Optional[FlashlightConfig] = FlashlightConfig()
606 pyctcdecode_cfg: Optional[PyCTCDecodeConfig] = PyCTCDecodeConfig()
607
[end of nemo/collections/asr/parts/submodules/ctc_beam_decoding.py]
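In `default_beam_search` above, subword ids are shifted by `token_offset` and re-encoded as single unicode characters (`chr(idx + self.token_offset)`) before being handed to the KenLM scorer, then recovered afterwards with `ord(c) - self.token_offset`. A minimal round-trip sketch of that double-encoding scheme (the helper names here are illustrative, not part of NeMo's API):

```python
DEFAULT_TOKEN_OFFSET = 100  # mirrors the constant defined in ctc_beam_decoding.py

def encode_ids_as_chars(token_ids, offset=DEFAULT_TOKEN_OFFSET):
    # Each subword id becomes a single unicode character, shifted by the offset
    # so ids 0..N land on printable/valid code points.
    return ''.join(chr(i + offset) for i in token_ids)

def decode_chars_to_ids(text, offset=DEFAULT_TOKEN_OFFSET):
    # Inverse mapping: recover the original subword ids from the characters.
    return [ord(c) - offset for c in text]

ids = [0, 5, 42]
encoded = encode_ids_as_chars(ids)
decoded = decode_chars_to_ids(encoded)
```

The round trip is lossless, which is what lets the ARPA/binary KenLM file treat each multi-character subword as one compact "character" token.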
[start of nemo/collections/asr/parts/submodules/ctc_greedy_decoding.py]
1 # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from dataclasses import dataclass
16 from typing import List, Optional
17
18 import torch
19 from omegaconf import DictConfig, OmegaConf
20
21 from nemo.collections.asr.parts.utils import rnnt_utils
22 from nemo.collections.asr.parts.utils.asr_confidence_utils import ConfidenceMeasureConfig, ConfidenceMeasureMixin
23 from nemo.core.classes import Typing, typecheck
24 from nemo.core.neural_types import HypothesisType, LengthsType, LogprobsType, NeuralType
25 from nemo.utils import logging
26
27
28 def pack_hypotheses(hypotheses: List[rnnt_utils.Hypothesis], logitlen: torch.Tensor,) -> List[rnnt_utils.Hypothesis]:
29
30 if logitlen is not None:
31 if hasattr(logitlen, 'cpu'):
32 logitlen_cpu = logitlen.to('cpu')
33 else:
34 logitlen_cpu = logitlen
35
36 for idx, hyp in enumerate(hypotheses): # type: rnnt_utils.Hypothesis
37 hyp.y_sequence = torch.tensor(hyp.y_sequence, dtype=torch.long)
38
39 if logitlen is not None:
40 hyp.length = logitlen_cpu[idx]
41
42 if hyp.dec_state is not None:
43 hyp.dec_state = _states_to_device(hyp.dec_state)
44
45 return hypotheses
46
47
48 def _states_to_device(dec_state, device='cpu'):
49 if torch.is_tensor(dec_state):
50 dec_state = dec_state.to(device)
51
52 elif isinstance(dec_state, (list, tuple)):
53 dec_state = tuple(_states_to_device(dec_i, device) for dec_i in dec_state)
54
55 return dec_state
56
57
58 class GreedyCTCInfer(Typing, ConfidenceMeasureMixin):
59 """A greedy CTC decoder.
60
61 Provides a common abstraction for sample level and batch level greedy decoding.
62
63 Args:
64 blank_index: int index of the blank token. Can be 0 or len(vocabulary).
65 preserve_alignments: Bool flag which preserves the history of logprobs generated during
66 decoding (sample / batched). When set to true, the Hypothesis will contain
67             the non-null value for `logprobs` in it. Here, `logprobs` is a torch.Tensor.
68 compute_timestamps: A bool flag, which determines whether to compute the character/subword, or
69             word based timestamp mapping the output log-probabilities to discrete intervals of timestamps.
70 The timestamps will be available in the returned Hypothesis.timestep as a dictionary.
71 preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores
72 generated during decoding. When set to true, the Hypothesis will contain
73 the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of floats.
74 confidence_measure_cfg: A dict-like object which contains the measure name and settings to compute per-frame
75 confidence scores.
76
77 name: The measure name (str).
78 Supported values:
79 - 'max_prob' for using the maximum token probability as a confidence.
80 - 'entropy' for using a normalized entropy of a log-likelihood vector.
81
82 entropy_type: Which type of entropy to use (str). Used if confidence_measure_cfg.name is set to `entropy`.
83 Supported values:
84 - 'gibbs' for the (standard) Gibbs entropy. If the alpha (ฮฑ) is provided,
85 the formula is the following: H_ฮฑ = -sum_i((p^ฮฑ_i)*log(p^ฮฑ_i)).
86 Note that for this entropy, the alpha should comply the following inequality:
87 (log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= ฮฑ <= (1+log(V-1))/log(V-1)
88 where V is the model vocabulary size.
89 - 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
90 Tsallis entropy formula is the following: H_ฮฑ = 1/(ฮฑ-1)*(1-sum_i(p^ฮฑ_i)),
91 where ฮฑ is a parameter. When ฮฑ == 1, it works like the Gibbs entropy.
92 More: https://en.wikipedia.org/wiki/Tsallis_entropy
93 - 'renyi' for the Rรฉnyi entropy.
94 Rรฉnyi entropy formula is the following: H_ฮฑ = 1/(1-ฮฑ)*log_2(sum_i(p^ฮฑ_i)),
95 where ฮฑ is a parameter. When ฮฑ == 1, it works like the Gibbs entropy.
96 More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
97
98 alpha: Power scale for logsoftmax (ฮฑ for entropies). Here we restrict it to be > 0.
99 When the alpha equals one, scaling is not applied to 'max_prob',
100 and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
101
102 entropy_norm: A mapping of the entropy value to the interval [0,1].
103 Supported values:
104 - 'lin' for using the linear mapping.
105 - 'exp' for using exponential mapping with linear shift.
106
107 """
108
109 @property
110 def input_types(self):
111 """Returns definitions of module input ports.
112 """
113         # Input can be of dimension -
114 # ('B', 'T', 'D') [Log probs] or ('B', 'T') [Labels]
115
116 return {
117 "decoder_output": NeuralType(None, LogprobsType()),
118 "decoder_lengths": NeuralType(tuple('B'), LengthsType()),
119 }
120
121 @property
122 def output_types(self):
123 """Returns definitions of module output ports.
124 """
125 return {"predictions": [NeuralType(elements_type=HypothesisType())]}
126
127 def __init__(
128 self,
129 blank_id: int,
130 preserve_alignments: bool = False,
131 compute_timestamps: bool = False,
132 preserve_frame_confidence: bool = False,
133 confidence_measure_cfg: Optional[DictConfig] = None,
134 ):
135 super().__init__()
136
137 self.blank_id = blank_id
138 self.preserve_alignments = preserve_alignments
139 # we need timestamps to extract non-blank per-frame confidence
140 self.compute_timestamps = compute_timestamps | preserve_frame_confidence
141 self.preserve_frame_confidence = preserve_frame_confidence
142
143 # set confidence calculation measure
144 self._init_confidence_measure(confidence_measure_cfg)
145
146 @typecheck()
147 def forward(
148 self, decoder_output: torch.Tensor, decoder_lengths: torch.Tensor,
149 ):
150 """Returns a list of hypotheses given an input batch of the encoder hidden embedding.
151         Output token is generated auto-regressively.
152
153 Args:
154 decoder_output: A tensor of size (batch, timesteps, features) or (batch, timesteps) (each timestep is a label).
155 decoder_lengths: list of int representing the length of each sequence
156 output sequence.
157
158 Returns:
159 packed list containing batch number of sentences (Hypotheses).
160 """
161 with torch.inference_mode():
162 hypotheses = []
163 # Process each sequence independently
164 prediction_cpu_tensor = decoder_output.cpu()
165
166 if prediction_cpu_tensor.ndim < 2 or prediction_cpu_tensor.ndim > 3:
167 raise ValueError(
168 f"`decoder_output` must be a tensor of shape [B, T] (labels, int) or "
169 f"[B, T, V] (log probs, float). Provided shape = {prediction_cpu_tensor.shape}"
170 )
171
172 # determine type of input - logprobs or labels
173 if prediction_cpu_tensor.ndim == 2: # labels
174 greedy_decode = self._greedy_decode_labels
175 else:
176 greedy_decode = self._greedy_decode_logprobs
177
178 for ind in range(prediction_cpu_tensor.shape[0]):
179 out_len = decoder_lengths[ind] if decoder_lengths is not None else None
180 hypothesis = greedy_decode(prediction_cpu_tensor[ind], out_len)
181 hypotheses.append(hypothesis)
182
183 # Pack results into Hypotheses
184 packed_result = pack_hypotheses(hypotheses, decoder_lengths)
185
186 return (packed_result,)
187
188 @torch.no_grad()
189 def _greedy_decode_logprobs(self, x: torch.Tensor, out_len: torch.Tensor):
190 # x: [T, D]
191 # out_len: [seq_len]
192
193 # Initialize blank state and empty label set in Hypothesis
194 hypothesis = rnnt_utils.Hypothesis(score=0.0, y_sequence=[], dec_state=None, timestep=[], last_token=None)
195 prediction = x.detach().cpu()
196
197 if out_len is not None:
198 prediction = prediction[:out_len]
199
200 prediction_logprobs, prediction_labels = prediction.max(dim=-1)
201
202 non_blank_ids = prediction_labels != self.blank_id
203 hypothesis.y_sequence = prediction_labels.numpy().tolist()
204 hypothesis.score = (prediction_logprobs[non_blank_ids]).sum()
205
206 if self.preserve_alignments:
207 # Preserve the logprobs, as well as labels after argmax
208 hypothesis.alignments = (prediction.clone(), prediction_labels.clone())
209
210 if self.compute_timestamps:
211 hypothesis.timestep = torch.nonzero(non_blank_ids, as_tuple=False)[:, 0].numpy().tolist()
212
213 if self.preserve_frame_confidence:
214 hypothesis.frame_confidence = self._get_confidence(prediction)
215
216 return hypothesis
217
218 @torch.no_grad()
219 def _greedy_decode_labels(self, x: torch.Tensor, out_len: torch.Tensor):
220 # x: [T]
221 # out_len: [seq_len]
222
223 # Initialize blank state and empty label set in Hypothesis
224 hypothesis = rnnt_utils.Hypothesis(score=0.0, y_sequence=[], dec_state=None, timestep=[], last_token=None)
225 prediction_labels = x.detach().cpu()
226
227 if out_len is not None:
228 prediction_labels = prediction_labels[:out_len]
229
230 non_blank_ids = prediction_labels != self.blank_id
231 hypothesis.y_sequence = prediction_labels.numpy().tolist()
232 hypothesis.score = -1.0
233
234 if self.preserve_alignments:
235 raise ValueError("Requested for alignments, but predictions provided were labels, not log probabilities.")
236
237 if self.compute_timestamps:
238 hypothesis.timestep = torch.nonzero(non_blank_ids, as_tuple=False)[:, 0].numpy().tolist()
239
240 if self.preserve_frame_confidence:
241             raise ValueError(
242                 "Per-frame confidence was requested, but the predictions provided were labels, not log probabilities."
243             )
244
245 return hypothesis
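The `_greedy_decode_labels` path above reduces to: take the per-frame argmax labels, drop blanks, and record the frame indices of the surviving tokens. A minimal pure-Python sketch of that timestep extraction, using plain lists instead of tensors (`non_blank_timesteps` is an illustrative helper, not part of NeMo):

```python
# Sketch of the timestep computation in _greedy_decode_labels: keep the
# frame indices at which a non-blank token was predicted. This mirrors
# torch.nonzero(prediction_labels != blank_id) on a plain Python list.
def non_blank_timesteps(labels, blank_id):
    return [t for t, lab in enumerate(labels) if lab != blank_id]

labels = [3, 0, 0, 5, 5, 0, 2]  # per-frame argmax labels, blank_id = 0
print(non_blank_timesteps(labels, blank_id=0))  # -> [0, 3, 4, 6]
```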
246
247 def __call__(self, *args, **kwargs):
248 return self.forward(*args, **kwargs)
249
250
251 @dataclass
252 class GreedyCTCInferConfig:
253 preserve_alignments: bool = False
254 compute_timestamps: bool = False
255 preserve_frame_confidence: bool = False
256 confidence_measure_cfg: Optional[ConfidenceMeasureConfig] = ConfidenceMeasureConfig()
257 confidence_method_cfg: str = "DEPRECATED"
258
259 def __post_init__(self):
260 # OmegaConf.structured ensures that post_init check is always executed
261 self.confidence_measure_cfg = OmegaConf.structured(
262 self.confidence_measure_cfg
263 if isinstance(self.confidence_measure_cfg, ConfidenceMeasureConfig)
264 else ConfidenceMeasureConfig(**self.confidence_measure_cfg)
265 )
266 if self.confidence_method_cfg != "DEPRECATED":
267 logging.warning(
268 "`confidence_method_cfg` is deprecated and will be removed in the future. "
269 "Please use `confidence_measure_cfg` instead."
270 )
271
272 # TODO (alaptev): delete the following two lines sometime in the future
273 logging.warning("Re-writing `confidence_measure_cfg` with the value of `confidence_method_cfg`.")
274 # OmegaConf.structured ensures that post_init check is always executed
275 self.confidence_measure_cfg = OmegaConf.structured(
276 self.confidence_method_cfg
277 if isinstance(self.confidence_method_cfg, ConfidenceMeasureConfig)
278 else ConfidenceMeasureConfig(**self.confidence_method_cfg)
279 )
280
[end of nemo/collections/asr/parts/submodules/ctc_greedy_decoding.py]
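The `__post_init__` of `GreedyCTCInferConfig` above implements a simple deprecation shim: if the old `confidence_method_cfg` field holds anything other than the `"DEPRECATED"` sentinel, warn and copy it over `confidence_measure_cfg`. A minimal sketch of that pattern with stand-in dataclasses (no OmegaConf; the class names here are invented for illustration):

```python
import warnings
from dataclasses import dataclass, field

@dataclass
class MeasureCfg:
    name: str = "entropy"

@dataclass
class InferCfg:
    confidence_measure_cfg: MeasureCfg = field(default_factory=MeasureCfg)
    confidence_method_cfg: object = "DEPRECATED"  # sentinel: "old field unused"

    def __post_init__(self):
        # Anything other than the sentinel means the caller set the old field.
        if self.confidence_method_cfg != "DEPRECATED":
            warnings.warn("confidence_method_cfg is deprecated; use confidence_measure_cfg")
            self.confidence_measure_cfg = self.confidence_method_cfg

cfg = InferCfg(confidence_method_cfg=MeasureCfg(name="max_prob"))
print(cfg.confidence_measure_cfg.name)  # -> max_prob
```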
[start of nemo/collections/asr/parts/submodules/rnnt_greedy_decoding.py]
1 # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 # Copyright 2017 Johns Hopkins University (Shinji Watanabe)
16 #
17 # Licensed under the Apache License, Version 2.0 (the "License");
18 # you may not use this file except in compliance with the License.
19 # You may obtain a copy of the License at
20 #
21 # http://www.apache.org/licenses/LICENSE-2.0
22 #
23 # Unless required by applicable law or agreed to in writing, software
24 # distributed under the License is distributed on an "AS IS" BASIS,
25 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
26 # See the License for the specific language governing permissions and
27 # limitations under the License.
28
29 from dataclasses import dataclass
30 from typing import List, Optional, Tuple, Union
31
32 import numpy as np
33 import torch
34 from omegaconf import DictConfig, OmegaConf
35
36 from nemo.collections.asr.modules import rnnt_abstract
37 from nemo.collections.asr.parts.utils import rnnt_utils
38 from nemo.collections.asr.parts.utils.asr_confidence_utils import ConfidenceMeasureConfig, ConfidenceMeasureMixin
39 from nemo.collections.common.parts.rnn import label_collate
40 from nemo.core.classes import Typing, typecheck
41 from nemo.core.neural_types import AcousticEncodedRepresentation, ElementType, HypothesisType, LengthsType, NeuralType
42 from nemo.utils import logging
43
44
45 def pack_hypotheses(hypotheses: List[rnnt_utils.Hypothesis], logitlen: torch.Tensor,) -> List[rnnt_utils.Hypothesis]:
46
47 if hasattr(logitlen, 'cpu'):
48 logitlen_cpu = logitlen.to('cpu')
49 else:
50 logitlen_cpu = logitlen
51
52 for idx, hyp in enumerate(hypotheses): # type: rnnt_utils.Hypothesis
53 hyp.y_sequence = torch.tensor(hyp.y_sequence, dtype=torch.long)
54 hyp.length = logitlen_cpu[idx]
55
56 if hyp.dec_state is not None:
57 hyp.dec_state = _states_to_device(hyp.dec_state)
58
59 return hypotheses
60
61
62 def _states_to_device(dec_state, device='cpu'):
63 if torch.is_tensor(dec_state):
64 dec_state = dec_state.to(device)
65
66 elif isinstance(dec_state, (list, tuple)):
67 dec_state = tuple(_states_to_device(dec_i, device) for dec_i in dec_state)
68
69 return dec_state
70
71
72 class _GreedyRNNTInfer(Typing, ConfidenceMeasureMixin):
73 """A greedy transducer decoder.
74
75 Provides a common abstraction for sample level and batch level greedy decoding.
76
77 Args:
78         decoder_model: rnnt_abstract.AbstractRNNTDecoder implementation.
79         joint_model: rnnt_abstract.AbstractRNNTJoint implementation.
80 blank_index: int index of the blank token. Can be 0 or len(vocabulary).
81 max_symbols_per_step: Optional int. The maximum number of symbols that can be added
82 to a sequence in a single time step; if set to None then there is
83 no limit.
84 preserve_alignments: Bool flag which preserves the history of alignments generated during
85 greedy decoding (sample / batched). When set to true, the Hypothesis will contain
86 the non-null value for `alignments` in it. Here, `alignments` is a List of List of
87 Tuple(Tensor (of length V + 1), Tensor(scalar, label after argmax)).
88
89 The length of the list corresponds to the Acoustic Length (T).
90 Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more targets from a vocabulary.
91 U is the number of target tokens for the current timestep Ti.
92 preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores generated
93 during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
94 the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of List of floats.
95
96 The length of the list corresponds to the Acoustic Length (T).
97 Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more confidence scores.
98 U is the number of target tokens for the current timestep Ti.
99 confidence_measure_cfg: A dict-like object which contains the measure name and settings to compute per-frame
100 confidence scores.
101
102 name: The measure name (str).
103 Supported values:
104 - 'max_prob' for using the maximum token probability as a confidence.
105 - 'entropy' for using a normalized entropy of a log-likelihood vector.
106
107 entropy_type: Which type of entropy to use (str). Used if confidence_measure_cfg.name is set to `entropy`.
108 Supported values:
109 - 'gibbs' for the (standard) Gibbs entropy. If the alpha (ฮฑ) is provided,
110 the formula is the following: H_ฮฑ = -sum_i((p^ฮฑ_i)*log(p^ฮฑ_i)).
111                     Note that for this entropy, the alpha should comply with the following inequality:
112 (log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= ฮฑ <= (1+log(V-1))/log(V-1)
113 where V is the model vocabulary size.
114 - 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
115 Tsallis entropy formula is the following: H_ฮฑ = 1/(ฮฑ-1)*(1-sum_i(p^ฮฑ_i)),
116 where ฮฑ is a parameter. When ฮฑ == 1, it works like the Gibbs entropy.
117 More: https://en.wikipedia.org/wiki/Tsallis_entropy
118 - 'renyi' for the Rรฉnyi entropy.
119 Rรฉnyi entropy formula is the following: H_ฮฑ = 1/(1-ฮฑ)*log_2(sum_i(p^ฮฑ_i)),
120 where ฮฑ is a parameter. When ฮฑ == 1, it works like the Gibbs entropy.
121 More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
122
123 alpha: Power scale for logsoftmax (ฮฑ for entropies). Here we restrict it to be > 0.
124 When the alpha equals one, scaling is not applied to 'max_prob',
125 and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
126
127 entropy_norm: A mapping of the entropy value to the interval [0,1].
128 Supported values:
129 - 'lin' for using the linear mapping.
130 - 'exp' for using exponential mapping with linear shift.
131 """
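The Gibbs entropy measure documented above, combined with the 'lin' normalization, maps a probability vector to a confidence in [0, 1]: compute H = -sum_i(p_i*log(p_i)), divide by log(V), and flip it so that 1 means fully confident. A pure-Python sketch over a plain probability list (the real code works on log-prob tensors via `ConfidenceMeasureMixin`; `gibbs_confidence` is an illustrative name):

```python
import math

# Gibbs (Shannon) entropy confidence with linear normalization:
# confidence = 1 - H(p) / log(V), so a uniform distribution gives 0.0
# and a one-hot distribution gives 1.0.
def gibbs_confidence(probs):
    v = len(probs)
    entropy = -sum(p * math.log(p) for p in probs if p > 0.0)
    return 1.0 - entropy / math.log(v)

print(gibbs_confidence([0.25, 0.25, 0.25, 0.25]))  # -> 0.0 (maximally unsure)
print(gibbs_confidence([1.0, 0.0, 0.0, 0.0]))      # -> 1.0 (fully confident)
```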
132
133 @property
134 def input_types(self):
135 """Returns definitions of module input ports.
136 """
137 return {
138 "encoder_output": NeuralType(('B', 'D', 'T'), AcousticEncodedRepresentation()),
139 "encoded_lengths": NeuralType(tuple('B'), LengthsType()),
140 "partial_hypotheses": [NeuralType(elements_type=HypothesisType(), optional=True)], # must always be last
141 }
142
143 @property
144 def output_types(self):
145 """Returns definitions of module output ports.
146 """
147 return {"predictions": [NeuralType(elements_type=HypothesisType())]}
148
149 def __init__(
150 self,
151 decoder_model: rnnt_abstract.AbstractRNNTDecoder,
152 joint_model: rnnt_abstract.AbstractRNNTJoint,
153 blank_index: int,
154 max_symbols_per_step: Optional[int] = None,
155 preserve_alignments: bool = False,
156 preserve_frame_confidence: bool = False,
157 confidence_measure_cfg: Optional[DictConfig] = None,
158 ):
159 super().__init__()
160 self.decoder = decoder_model
161 self.joint = joint_model
162
163 self._blank_index = blank_index
164         self._SOS = blank_index  # Start-of-Signal token index
165 self.max_symbols = max_symbols_per_step
166 self.preserve_alignments = preserve_alignments
167 self.preserve_frame_confidence = preserve_frame_confidence
168
169 # set confidence calculation measure
170 self._init_confidence_measure(confidence_measure_cfg)
171
172 def __call__(self, *args, **kwargs):
173 return self.forward(*args, **kwargs)
174
175 @torch.no_grad()
176 def _pred_step(
177 self,
178 label: Union[torch.Tensor, int],
179 hidden: Optional[torch.Tensor],
180 add_sos: bool = False,
181 batch_size: Optional[int] = None,
182 ) -> Tuple[torch.Tensor, torch.Tensor]:
183 """
184 Common prediction step based on the AbstractRNNTDecoder implementation.
185
186 Args:
187 label: (int/torch.Tensor): Label or "Start-of-Signal" token.
188 hidden: (Optional torch.Tensor): RNN State vector
189             add_sos (bool): Whether to add a zero vector at the beginning as a "start of sentence" token.
190 batch_size: Batch size of the output tensor.
191
192 Returns:
193 g: (B, U, H) if add_sos is false, else (B, U + 1, H)
194 hid: (h, c) where h is the final sequence hidden state and c is
195 the final cell state:
196 h (tensor), shape (L, B, H)
197 c (tensor), shape (L, B, H)
198 """
199 if isinstance(label, torch.Tensor):
200 # label: [batch, 1]
201 if label.dtype != torch.long:
202 label = label.long()
203
204 else:
205 # Label is an integer
206 if label == self._SOS:
207 return self.decoder.predict(None, hidden, add_sos=add_sos, batch_size=batch_size)
208
209 label = label_collate([[label]])
210
211 # output: [B, 1, K]
212 return self.decoder.predict(label, hidden, add_sos=add_sos, batch_size=batch_size)
213
214 def _joint_step(self, enc, pred, log_normalize: Optional[bool] = None):
215 """
216 Common joint step based on AbstractRNNTJoint implementation.
217
218 Args:
219 enc: Output of the Encoder model. A torch.Tensor of shape [B, 1, H1]
220 pred: Output of the Decoder model. A torch.Tensor of shape [B, 1, H2]
221 log_normalize: Whether to log normalize or not. None will log normalize only for CPU.
222
223 Returns:
224 logits of shape (B, T=1, U=1, V + 1)
225 """
226 with torch.no_grad():
227 logits = self.joint.joint(enc, pred)
228
229 if log_normalize is None:
230 if not logits.is_cuda: # Use log softmax only if on CPU
231 logits = logits.log_softmax(dim=len(logits.shape) - 1)
232 else:
233 if log_normalize:
234 logits = logits.log_softmax(dim=len(logits.shape) - 1)
235
236 return logits
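`_joint_step` above applies log-softmax only when the logits live on CPU (the `log_normalize=None` default), since argmax decoding on GPU does not need normalized scores; normalization is forced only when callers request it (e.g. for per-frame confidence). For reference, a stdlib sketch of the log-softmax itself (the real code uses `torch.Tensor.log_softmax`):

```python
import math

# Numerically stable log-softmax: subtract the max before exponentiating,
# matching what logits.log_softmax(dim=-1) does per score vector.
def log_softmax(xs):
    m = max(xs)
    lse = m + math.log(sum(math.exp(x - m) for x in xs))
    return [x - lse for x in xs]

logits = [2.0, 1.0, 0.1]
logp = log_softmax(logits)
# Probabilities sum to one, and the argmax is unchanged by normalization.
print(abs(sum(math.exp(v) for v in logp) - 1.0) < 1e-9)  # -> True
```

This is also why skipping normalization on GPU is safe for greedy decoding: log-softmax is a monotone shift per vector, so the argmax token is identical either way.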
237
238
239 class GreedyRNNTInfer(_GreedyRNNTInfer):
240 """A greedy transducer decoder.
241
242 Sequence level greedy decoding, performed auto-regressively.
243
244 Args:
245         decoder_model: rnnt_abstract.AbstractRNNTDecoder implementation.
246         joint_model: rnnt_abstract.AbstractRNNTJoint implementation.
247 blank_index: int index of the blank token. Can be 0 or len(vocabulary).
248 max_symbols_per_step: Optional int. The maximum number of symbols that can be added
249 to a sequence in a single time step; if set to None then there is
250 no limit.
251 preserve_alignments: Bool flag which preserves the history of alignments generated during
252 greedy decoding (sample / batched). When set to true, the Hypothesis will contain
253 the non-null value for `alignments` in it. Here, `alignments` is a List of List of
254 Tuple(Tensor (of length V + 1), Tensor(scalar, label after argmax)).
255
256 The length of the list corresponds to the Acoustic Length (T).
257 Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more targets from a vocabulary.
258 U is the number of target tokens for the current timestep Ti.
259 preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores generated
260 during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
261 the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of List of floats.
262
263 The length of the list corresponds to the Acoustic Length (T).
264 Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more confidence scores.
265 U is the number of target tokens for the current timestep Ti.
266 confidence_measure_cfg: A dict-like object which contains the measure name and settings to compute per-frame
267 confidence scores.
268
269 name: The measure name (str).
270 Supported values:
271 - 'max_prob' for using the maximum token probability as a confidence.
272 - 'entropy' for using a normalized entropy of a log-likelihood vector.
273
274 entropy_type: Which type of entropy to use (str). Used if confidence_measure_cfg.name is set to `entropy`.
275 Supported values:
276 - 'gibbs' for the (standard) Gibbs entropy. If the alpha (ฮฑ) is provided,
277 the formula is the following: H_ฮฑ = -sum_i((p^ฮฑ_i)*log(p^ฮฑ_i)).
278                     Note that for this entropy, the alpha should comply with the following inequality:
279 (log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= ฮฑ <= (1+log(V-1))/log(V-1)
280 where V is the model vocabulary size.
281 - 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
282 Tsallis entropy formula is the following: H_ฮฑ = 1/(ฮฑ-1)*(1-sum_i(p^ฮฑ_i)),
283 where ฮฑ is a parameter. When ฮฑ == 1, it works like the Gibbs entropy.
284 More: https://en.wikipedia.org/wiki/Tsallis_entropy
285 - 'renyi' for the Rรฉnyi entropy.
286 Rรฉnyi entropy formula is the following: H_ฮฑ = 1/(1-ฮฑ)*log_2(sum_i(p^ฮฑ_i)),
287 where ฮฑ is a parameter. When ฮฑ == 1, it works like the Gibbs entropy.
288 More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
289
290 alpha: Power scale for logsoftmax (ฮฑ for entropies). Here we restrict it to be > 0.
291 When the alpha equals one, scaling is not applied to 'max_prob',
292 and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
293
294 entropy_norm: A mapping of the entropy value to the interval [0,1].
295 Supported values:
296 - 'lin' for using the linear mapping.
297 - 'exp' for using exponential mapping with linear shift.
298 """
299
300 def __init__(
301 self,
302 decoder_model: rnnt_abstract.AbstractRNNTDecoder,
303 joint_model: rnnt_abstract.AbstractRNNTJoint,
304 blank_index: int,
305 max_symbols_per_step: Optional[int] = None,
306 preserve_alignments: bool = False,
307 preserve_frame_confidence: bool = False,
308 confidence_measure_cfg: Optional[DictConfig] = None,
309 ):
310 super().__init__(
311 decoder_model=decoder_model,
312 joint_model=joint_model,
313 blank_index=blank_index,
314 max_symbols_per_step=max_symbols_per_step,
315 preserve_alignments=preserve_alignments,
316 preserve_frame_confidence=preserve_frame_confidence,
317 confidence_measure_cfg=confidence_measure_cfg,
318 )
319
320 @typecheck()
321 def forward(
322 self,
323 encoder_output: torch.Tensor,
324 encoded_lengths: torch.Tensor,
325 partial_hypotheses: Optional[List[rnnt_utils.Hypothesis]] = None,
326 ):
327 """Returns a list of hypotheses given an input batch of the encoder hidden embedding.
328 Output token is generated auto-regressively.
329
330 Args:
331 encoder_output: A tensor of size (batch, features, timesteps).
332 encoded_lengths: list of int representing the length of each sequence
333 output sequence.
334
335 Returns:
336 packed list containing batch number of sentences (Hypotheses).
337 """
338 # Preserve decoder and joint training state
339 decoder_training_state = self.decoder.training
340 joint_training_state = self.joint.training
341
342 with torch.inference_mode():
343 # Apply optional preprocessing
344 encoder_output = encoder_output.transpose(1, 2) # (B, T, D)
345
346 self.decoder.eval()
347 self.joint.eval()
348
349 hypotheses = []
350 # Process each sequence independently
351 with self.decoder.as_frozen(), self.joint.as_frozen():
352 for batch_idx in range(encoder_output.size(0)):
353 inseq = encoder_output[batch_idx, :, :].unsqueeze(1) # [T, 1, D]
354 logitlen = encoded_lengths[batch_idx]
355
356 partial_hypothesis = partial_hypotheses[batch_idx] if partial_hypotheses is not None else None
357 hypothesis = self._greedy_decode(inseq, logitlen, partial_hypotheses=partial_hypothesis)
358 hypotheses.append(hypothesis)
359
360 # Pack results into Hypotheses
361 packed_result = pack_hypotheses(hypotheses, encoded_lengths)
362
363 self.decoder.train(decoder_training_state)
364 self.joint.train(joint_training_state)
365
366 return (packed_result,)
367
368 @torch.no_grad()
369 def _greedy_decode(
370 self, x: torch.Tensor, out_len: torch.Tensor, partial_hypotheses: Optional[rnnt_utils.Hypothesis] = None
371 ):
372 # x: [T, 1, D]
373 # out_len: [seq_len]
374
375 # Initialize blank state and empty label set in Hypothesis
376 hypothesis = rnnt_utils.Hypothesis(score=0.0, y_sequence=[], dec_state=None, timestep=[], last_token=None)
377
378 if partial_hypotheses is not None:
379 hypothesis.last_token = partial_hypotheses.last_token
380 hypothesis.y_sequence = (
381 partial_hypotheses.y_sequence.cpu().tolist()
382 if isinstance(partial_hypotheses.y_sequence, torch.Tensor)
383 else partial_hypotheses.y_sequence
384 )
385 if partial_hypotheses.dec_state is not None:
386 hypothesis.dec_state = self.decoder.batch_concat_states([partial_hypotheses.dec_state])
387 hypothesis.dec_state = _states_to_device(hypothesis.dec_state, x.device)
388
389 if self.preserve_alignments:
390 # Alignments is a 2-dimensional dangling list representing T x U
391 hypothesis.alignments = [[]]
392
393 if self.preserve_frame_confidence:
394 hypothesis.frame_confidence = [[]]
395
396 # For timestep t in X_t
397 for time_idx in range(out_len):
398 # Extract encoder embedding at timestep t
399 # f = x[time_idx, :, :].unsqueeze(0) # [1, 1, D]
400 f = x.narrow(dim=0, start=time_idx, length=1)
401
402 # Setup exit flags and counter
403 not_blank = True
404 symbols_added = 0
405             # While blank is not predicted and we have not run out of max symbols per timestep
406 while not_blank and (self.max_symbols is None or symbols_added < self.max_symbols):
407 # In the first timestep, we initialize the network with RNNT Blank
408 # In later timesteps, we provide previous predicted label as input.
409 if hypothesis.last_token is None and hypothesis.dec_state is None:
410 last_label = self._SOS
411 else:
412 last_label = label_collate([[hypothesis.last_token]])
413
414 # Perform prediction network and joint network steps.
415 g, hidden_prime = self._pred_step(last_label, hypothesis.dec_state)
416 # If preserving per-frame confidence, log_normalize must be true
417 logp = self._joint_step(f, g, log_normalize=True if self.preserve_frame_confidence else None)[
418 0, 0, 0, :
419 ]
420
421 del g
422
423                 # torch.max(0) op doesn't exist for FP16.
424 if logp.dtype != torch.float32:
425 logp = logp.float()
426
427 # get index k, of max prob
428 v, k = logp.max(0)
429 k = k.item() # K is the label at timestep t_s in inner loop, s >= 0.
430
431 if self.preserve_alignments:
432 # insert logprobs into last timestep
433 hypothesis.alignments[-1].append((logp.to('cpu'), torch.tensor(k, dtype=torch.int32)))
434
435 if self.preserve_frame_confidence:
436 # insert confidence into last timestep
437 hypothesis.frame_confidence[-1].append(self._get_confidence(logp))
438
439 del logp
440
441 # If blank token is predicted, exit inner loop, move onto next timestep t
442 if k == self._blank_index:
443 not_blank = False
444 else:
445 # Append token to label set, update RNN state.
446 hypothesis.y_sequence.append(k)
447 hypothesis.score += float(v)
448 hypothesis.timestep.append(time_idx)
449 hypothesis.dec_state = hidden_prime
450 hypothesis.last_token = k
451
452 # Increment token counter.
453 symbols_added += 1
454
455 if self.preserve_alignments:
456 # convert Ti-th logits into a torch array
457 hypothesis.alignments.append([]) # blank buffer for next timestep
458
459 if self.preserve_frame_confidence:
460 hypothesis.frame_confidence.append([]) # blank buffer for next timestep
461
462 # Remove trailing empty list of Alignments
463 if self.preserve_alignments:
464 if len(hypothesis.alignments[-1]) == 0:
465 del hypothesis.alignments[-1]
466
467 # Remove trailing empty list of per-frame confidence
468 if self.preserve_frame_confidence:
469 if len(hypothesis.frame_confidence[-1]) == 0:
470 del hypothesis.frame_confidence[-1]
471
472 # Unpack the hidden states
473 hypothesis.dec_state = self.decoder.batch_select_state(hypothesis.dec_state, 0)
474
475 return hypothesis
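The control flow of `_greedy_decode` above — an outer loop over time frames and an inner loop that keeps emitting argmax tokens until blank wins or `max_symbols` is hit, advancing the decoder state only on non-blank emissions — can be sketched with a toy, framework-free joint function (`toy_joint` and its emit-once-per-frame behavior are invented for illustration):

```python
# Toy greedy transducer loop: per frame, emit argmax tokens until the
# blank id wins or max_symbols is reached; state is only advanced on
# non-blank emissions, mirroring the hypothesis updates above.
def greedy_transducer(frames, joint, blank_id, max_symbols=10):
    tokens, state = [], None
    for f in frames:
        symbols_added = 0
        while symbols_added < max_symbols:
            scores, new_state = joint(f, state)
            k = max(range(len(scores)), key=scores.__getitem__)
            if k == blank_id:
                break  # blank predicted: move on to the next time frame
            tokens.append(k)
            state = new_state
            symbols_added += 1
    return tokens

def toy_joint(frame, state):
    # Emit the token whose id equals `frame` once, then predict blank (id 0).
    scores = [0.0, 0.0, 0.0]
    scores[0 if state == frame else frame] = 1.0
    return scores, frame

print(greedy_transducer([1, 2], toy_joint, blank_id=0))  # -> [1, 2]
```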
476
477
478 class GreedyBatchedRNNTInfer(_GreedyRNNTInfer):
479 """A batch level greedy transducer decoder.
480
481 Batch level greedy decoding, performed auto-regressively.
482
483 Args:
484         decoder_model: rnnt_abstract.AbstractRNNTDecoder implementation.
485         joint_model: rnnt_abstract.AbstractRNNTJoint implementation.
486 blank_index: int index of the blank token. Can be 0 or len(vocabulary).
487 max_symbols_per_step: Optional int. The maximum number of symbols that can be added
488 to a sequence in a single time step; if set to None then there is
489 no limit.
490 preserve_alignments: Bool flag which preserves the history of alignments generated during
491 greedy decoding (sample / batched). When set to true, the Hypothesis will contain
492 the non-null value for `alignments` in it. Here, `alignments` is a List of List of
493 Tuple(Tensor (of length V + 1), Tensor(scalar, label after argmax)).
494
495 The length of the list corresponds to the Acoustic Length (T).
496 Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more targets from a vocabulary.
497 U is the number of target tokens for the current timestep Ti.
498 preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores generated
499 during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
500 the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of List of floats.
501
502 The length of the list corresponds to the Acoustic Length (T).
503 Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more confidence scores.
504 U is the number of target tokens for the current timestep Ti.
505 confidence_measure_cfg: A dict-like object which contains the measure name and settings to compute per-frame
506 confidence scores.
507
508 name: The measure name (str).
509 Supported values:
510 - 'max_prob' for using the maximum token probability as a confidence.
511 - 'entropy' for using a normalized entropy of a log-likelihood vector.
512
513 entropy_type: Which type of entropy to use (str). Used if confidence_measure_cfg.name is set to `entropy`.
514 Supported values:
515 - 'gibbs' for the (standard) Gibbs entropy. If the alpha (ฮฑ) is provided,
516 the formula is the following: H_ฮฑ = -sum_i((p^ฮฑ_i)*log(p^ฮฑ_i)).
517                     Note that for this entropy, the alpha should comply with the following inequality:
518 (log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= ฮฑ <= (1+log(V-1))/log(V-1)
519 where V is the model vocabulary size.
520 - 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
521 Tsallis entropy formula is the following: H_ฮฑ = 1/(ฮฑ-1)*(1-sum_i(p^ฮฑ_i)),
522 where ฮฑ is a parameter. When ฮฑ == 1, it works like the Gibbs entropy.
523 More: https://en.wikipedia.org/wiki/Tsallis_entropy
524 - 'renyi' for the Rรฉnyi entropy.
525 Rรฉnyi entropy formula is the following: H_ฮฑ = 1/(1-ฮฑ)*log_2(sum_i(p^ฮฑ_i)),
526 where ฮฑ is a parameter. When ฮฑ == 1, it works like the Gibbs entropy.
527 More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
528
529 alpha: Power scale for logsoftmax (ฮฑ for entropies). Here we restrict it to be > 0.
530 When the alpha equals one, scaling is not applied to 'max_prob',
531 and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
532
533 entropy_norm: A mapping of the entropy value to the interval [0,1].
534 Supported values:
535 - 'lin' for using the linear mapping.
536 - 'exp' for using exponential mapping with linear shift.
537 """
538
539 def __init__(
540 self,
541 decoder_model: rnnt_abstract.AbstractRNNTDecoder,
542 joint_model: rnnt_abstract.AbstractRNNTJoint,
543 blank_index: int,
544 max_symbols_per_step: Optional[int] = None,
545 preserve_alignments: bool = False,
546 preserve_frame_confidence: bool = False,
547 confidence_measure_cfg: Optional[DictConfig] = None,
548 ):
549 super().__init__(
550 decoder_model=decoder_model,
551 joint_model=joint_model,
552 blank_index=blank_index,
553 max_symbols_per_step=max_symbols_per_step,
554 preserve_alignments=preserve_alignments,
555 preserve_frame_confidence=preserve_frame_confidence,
556 confidence_measure_cfg=confidence_measure_cfg,
557 )
558
559 # Depending on availability of `blank_as_pad` support
560 # switch between more efficient batch decoding technique
561 if self.decoder.blank_as_pad:
562 self._greedy_decode = self._greedy_decode_blank_as_pad
563 else:
564 self._greedy_decode = self._greedy_decode_masked
565
566 @typecheck()
567 def forward(
568 self,
569 encoder_output: torch.Tensor,
570 encoded_lengths: torch.Tensor,
571 partial_hypotheses: Optional[List[rnnt_utils.Hypothesis]] = None,
572 ):
573 """Returns a list of hypotheses given an input batch of the encoder hidden embedding.
574 Output token is generated auto-regressively.
575
576 Args:
577 encoder_output: A tensor of size (batch, features, timesteps).
578 encoded_lengths: list of int representing the length of each sequence
579 output sequence.
580
581 Returns:
582 packed list containing batch number of sentences (Hypotheses).
583 """
584 # Preserve decoder and joint training state
585 decoder_training_state = self.decoder.training
586 joint_training_state = self.joint.training
587
588 with torch.inference_mode():
589 # Apply optional preprocessing
590 encoder_output = encoder_output.transpose(1, 2) # (B, T, D)
591 logitlen = encoded_lengths
592
593 self.decoder.eval()
594 self.joint.eval()
595
596 with self.decoder.as_frozen(), self.joint.as_frozen():
597 inseq = encoder_output # [B, T, D]
598 hypotheses = self._greedy_decode(
599 inseq, logitlen, device=inseq.device, partial_hypotheses=partial_hypotheses
600 )
601
602 # Pack the hypotheses results
603 packed_result = pack_hypotheses(hypotheses, logitlen)
604
605 self.decoder.train(decoder_training_state)
606 self.joint.train(joint_training_state)
607
608 return (packed_result,)
609
610 def _greedy_decode_blank_as_pad(
611 self,
612 x: torch.Tensor,
613 out_len: torch.Tensor,
614 device: torch.device,
615 partial_hypotheses: Optional[List[rnnt_utils.Hypothesis]] = None,
616 ):
617 if partial_hypotheses is not None:
618             raise NotImplementedError("`partial_hypotheses` support is not implemented")
619
620 with torch.inference_mode():
621 # x: [B, T, D]
622 # out_len: [B]
623 # device: torch.device
624
625 # Initialize list of Hypothesis
626 batchsize = x.shape[0]
627 hypotheses = [
628 rnnt_utils.Hypothesis(score=0.0, y_sequence=[], timestep=[], dec_state=None) for _ in range(batchsize)
629 ]
630
631 # Initialize Hidden state matrix (shared by entire batch)
632 hidden = None
633
634 # If alignments need to be preserved, register a dangling list to hold the values
635 if self.preserve_alignments:
636 # alignments is a 3-dimensional dangling list representing B x T x U
637 for hyp in hypotheses:
638 hyp.alignments = [[]]
639
640 # If confidence scores need to be preserved, register a dangling list to hold the values
641 if self.preserve_frame_confidence:
642 # frame_confidence is a 3-dimensional dangling list representing B x T x U
643 for hyp in hypotheses:
644 hyp.frame_confidence = [[]]
645
646 # Last Label buffer + Last Label without blank buffer
647 # batch level equivalent of the last_label
648 last_label = torch.full([batchsize, 1], fill_value=self._blank_index, dtype=torch.long, device=device)
649
650 # Mask buffers
651 blank_mask = torch.full([batchsize], fill_value=0, dtype=torch.bool, device=device)
652
653 # Get max sequence length
654 max_out_len = out_len.max()
655 for time_idx in range(max_out_len):
656 f = x.narrow(dim=1, start=time_idx, length=1) # [B, 1, D]
657
658 # Prepare t timestamp batch variables
659 not_blank = True
660 symbols_added = 0
661
662 # Reset blank mask
663 blank_mask.mul_(False)
664
665 # Update blank mask with time mask
666 # Batch: [B, T, D], but Bi may have seq len < max(seq_lens_in_batch)
667                 # Forcibly mask with "blank" tokens, for all samples where current time step T >= seq_len
668 blank_mask = time_idx >= out_len
669 # Start inner loop
670 while not_blank and (self.max_symbols is None or symbols_added < self.max_symbols):
671 # Batch prediction and joint network steps
672 # If very first prediction step, submit SOS tag (blank) to pred_step.
673 # This feeds a zero tensor as input to AbstractRNNTDecoder to prime the state
674 if time_idx == 0 and symbols_added == 0 and hidden is None:
675 g, hidden_prime = self._pred_step(self._SOS, hidden, batch_size=batchsize)
676 else:
677 # Perform batch step prediction of decoder, getting new states and scores ("g")
678 g, hidden_prime = self._pred_step(last_label, hidden, batch_size=batchsize)
679
680 # Batched joint step - Output = [B, V + 1]
681 # If preserving per-frame confidence, log_normalize must be true
682 logp = self._joint_step(f, g, log_normalize=True if self.preserve_frame_confidence else None)[
683 :, 0, 0, :
684 ]
685
686 if logp.dtype != torch.float32:
687 logp = logp.float()
688
689 # Get index k, of max prob for batch
690 v, k = logp.max(1)
691 del g
692
693 # Update blank mask with current predicted blanks
694 # This is accumulating blanks over all time steps T and all target steps min(max_symbols, U)
695 k_is_blank = k == self._blank_index
696 blank_mask.bitwise_or_(k_is_blank)
697 all_blanks = torch.all(blank_mask)
698
699 del k_is_blank
700
701 # If preserving alignments, check if sequence length of sample has been reached
702 # before adding alignment
703 if self.preserve_alignments:
704 # Insert logprobs into last timestep per sample
705 logp_vals = logp.to('cpu')
706 logp_ids = logp_vals.max(1)[1]
707 for batch_idx, is_blank in enumerate(blank_mask):
708 # we only want to update non-blanks, unless we are at the last step in the loop where
709 # all elements produced blanks, otherwise there will be duplicate predictions
710 # saved in alignments
711 if time_idx < out_len[batch_idx] and (all_blanks or not is_blank):
712 hypotheses[batch_idx].alignments[-1].append(
713 (logp_vals[batch_idx], logp_ids[batch_idx])
714 )
715 del logp_vals
716
717 # If preserving per-frame confidence, check if sequence length of sample has been reached
718 # before adding confidence scores
719 if self.preserve_frame_confidence:
720 # Insert probabilities into last timestep per sample
721 confidence = self._get_confidence(logp)
722 for batch_idx, is_blank in enumerate(blank_mask):
723 if time_idx < out_len[batch_idx] and (all_blanks or not is_blank):
724 hypotheses[batch_idx].frame_confidence[-1].append(confidence[batch_idx])
725 del logp
726
727 # If all samples predict / have predicted prior blanks, exit loop early
728 # This is equivalent to if single sample predicted k
729 if all_blanks:
730 not_blank = False
731 else:
732 # Collect batch indices where blanks occurred now/past
733 blank_indices = (blank_mask == 1).nonzero(as_tuple=False)
734
735 # Recover prior state for all samples which predicted blank now/past
736 if hidden is not None:
737 # LSTM has 2 states
738 hidden_prime = self.decoder.batch_copy_states(hidden_prime, hidden, blank_indices)
739
740 elif len(blank_indices) > 0 and hidden is None:
741 # Reset state if there were some blank and other non-blank predictions in batch
742 # Original state is filled with zeros so we just multiply
743 # LSTM has 2 states
744 hidden_prime = self.decoder.batch_copy_states(hidden_prime, None, blank_indices, value=0.0)
745
746 # Recover prior predicted label for all samples which predicted blank now/past
747 k[blank_indices] = last_label[blank_indices, 0]
748
749 # Update new label and hidden state for next iteration
750 last_label = k.clone().view(-1, 1)
751 hidden = hidden_prime
752
753 # Update predicted labels, accounting for time mask
754 # If blank was predicted even once, now or in the past,
755 # Force the current predicted label to also be blank
756 # This ensures that blanks propagate across all timesteps
757 # once they have occurred (normally stopping condition of sample level loop).
758 for kidx, ki in enumerate(k):
759 if blank_mask[kidx] == 0:
760 hypotheses[kidx].y_sequence.append(ki)
761 hypotheses[kidx].timestep.append(time_idx)
762 hypotheses[kidx].score += float(v[kidx])
763 symbols_added += 1
764
765 # If preserving alignments, convert the current Uj alignments into a torch.Tensor
766 # Then preserve U at current timestep Ti
767 # Finally, forward the timestep history to Ti+1 for that sample
768 # All of this should only be done if the current time index <= sample-level AM length.
769 # Otherwise ignore and move to next sample / next timestep.
770 if self.preserve_alignments:
771
772 # convert Ti-th logits into a torch array
773 for batch_idx in range(batchsize):
774
775 # this checks if current timestep <= sample-level AM length
776 # If current timestep > sample-level AM length, no alignments will be added
777 # Therefore the list of Uj alignments is empty here.
778 if len(hypotheses[batch_idx].alignments[-1]) > 0:
779 hypotheses[batch_idx].alignments.append([]) # blank buffer for next timestep
780
781 # Do the same if preserving per-frame confidence
782 if self.preserve_frame_confidence:
783
784 for batch_idx in range(batchsize):
785 if len(hypotheses[batch_idx].frame_confidence[-1]) > 0:
786 hypotheses[batch_idx].frame_confidence.append([]) # blank buffer for next timestep
787
788 # Remove trailing empty list of alignments at T_{am-len} x Uj
789 if self.preserve_alignments:
790 for batch_idx in range(batchsize):
791 if len(hypotheses[batch_idx].alignments[-1]) == 0:
792 del hypotheses[batch_idx].alignments[-1]
793
794 # Remove trailing empty list of confidence scores at T_{am-len} x Uj
795 if self.preserve_frame_confidence:
796 for batch_idx in range(batchsize):
797 if len(hypotheses[batch_idx].frame_confidence[-1]) == 0:
798 del hypotheses[batch_idx].frame_confidence[-1]
799
800 # Preserve states
801 for batch_idx in range(batchsize):
802 hypotheses[batch_idx].dec_state = self.decoder.batch_select_state(hidden, batch_idx)
803
804 return hypotheses
805
806 def _greedy_decode_masked(
807 self,
808 x: torch.Tensor,
809 out_len: torch.Tensor,
810 device: torch.device,
811 partial_hypotheses: Optional[List[rnnt_utils.Hypothesis]] = None,
812 ):
813 if partial_hypotheses is not None:
814 raise NotImplementedError("`partial_hypotheses` support is not implemented")
815
816 # x: [B, T, D]
817 # out_len: [B]
818 # device: torch.device
819
820 # Initialize state
821 batchsize = x.shape[0]
822 hypotheses = [
823 rnnt_utils.Hypothesis(score=0.0, y_sequence=[], timestep=[], dec_state=None) for _ in range(batchsize)
824 ]
825
826 # Initialize Hidden state matrix (shared by entire batch)
827 hidden = None
828
829 # If alignments need to be preserved, register a dangling list to hold the values
830 if self.preserve_alignments:
831 # alignments is a 3-dimensional dangling list representing B x T x U
832 for hyp in hypotheses:
833 hyp.alignments = [[]]
836
837 # If confidence scores need to be preserved, register a dangling list to hold the values
838 if self.preserve_frame_confidence:
839 # frame_confidence is a 3-dimensional dangling list representing B x T x U
840 for hyp in hypotheses:
841 hyp.frame_confidence = [[]]
842
843 # Last Label buffer + Last Label without blank buffer
844 # batch level equivalent of the last_label
845 last_label = torch.full([batchsize, 1], fill_value=self._blank_index, dtype=torch.long, device=device)
846 last_label_without_blank = last_label.clone()
847
848 # Mask buffers
849 blank_mask = torch.full([batchsize], fill_value=0, dtype=torch.bool, device=device)
850
851 # Get max sequence length
852 max_out_len = out_len.max()
853
854 with torch.inference_mode():
855 for time_idx in range(max_out_len):
856 f = x.narrow(dim=1, start=time_idx, length=1) # [B, 1, D]
857
858 # Prepare t timestamp batch variables
859 not_blank = True
860 symbols_added = 0
861
862 # Reset blank mask
863 blank_mask.mul_(False)
864
865 # Update blank mask with time mask
866 # Batch: [B, T, D], but Bi may have seq len < max(seq_lens_in_batch)
867 # Forcibly mask with "blank" tokens, for all sample where current time step T > seq_len
868 blank_mask = time_idx >= out_len
869
870 # Start inner loop
871 while not_blank and (self.max_symbols is None or symbols_added < self.max_symbols):
872 # Batch prediction and joint network steps
873 # If very first prediction step, submit SOS tag (blank) to pred_step.
874 # This feeds a zero tensor as input to AbstractRNNTDecoder to prime the state
875 if time_idx == 0 and symbols_added == 0 and hidden is None:
876 g, hidden_prime = self._pred_step(self._SOS, hidden, batch_size=batchsize)
877 else:
878 # Set a dummy label for the blank value
879 # This value will be overwritten by "blank" again the last label update below
880 # This is done as vocabulary of prediction network does not contain "blank" token of RNNT
881 last_label_without_blank_mask = last_label == self._blank_index
882 last_label_without_blank[last_label_without_blank_mask] = 0 # temp change of label
883 last_label_without_blank[~last_label_without_blank_mask] = last_label[
884 ~last_label_without_blank_mask
885 ]
886
887 # Perform batch step prediction of decoder, getting new states and scores ("g")
888 g, hidden_prime = self._pred_step(last_label_without_blank, hidden, batch_size=batchsize)
889
890 # Batched joint step - Output = [B, V + 1]
891 # If preserving per-frame confidence, log_normalize must be true
892 logp = self._joint_step(f, g, log_normalize=True if self.preserve_frame_confidence else None)[
893 :, 0, 0, :
894 ]
895
896 if logp.dtype != torch.float32:
897 logp = logp.float()
898
899 # Get index k, of max prob for batch
900 v, k = logp.max(1)
901 del g
902
903 # Update blank mask with current predicted blanks
904 # This is accumulating blanks over all time steps T and all target steps min(max_symbols, U)
905 k_is_blank = k == self._blank_index
906 blank_mask.bitwise_or_(k_is_blank)
907 all_blanks = torch.all(blank_mask)
908
909 # If preserving alignments, check if sequence length of sample has been reached
910 # before adding alignment
911 if self.preserve_alignments:
912 # Insert logprobs into last timestep per sample
913 logp_vals = logp.to('cpu')
914 logp_ids = logp_vals.max(1)[1]
915 for batch_idx, is_blank in enumerate(blank_mask):
916 # we only want to update non-blanks, unless we are at the last step in the loop where
917 # all elements produced blanks, otherwise there will be duplicate predictions
918 # saved in alignments
919 if time_idx < out_len[batch_idx] and (all_blanks or not is_blank):
920 hypotheses[batch_idx].alignments[-1].append(
921 (logp_vals[batch_idx], logp_ids[batch_idx])
922 )
923
924 del logp_vals
925
926 # If preserving per-frame confidence, check if sequence length of sample has been reached
927 # before adding confidence scores
928 if self.preserve_frame_confidence:
929 # Insert probabilities into last timestep per sample
930 confidence = self._get_confidence(logp)
931 for batch_idx, is_blank in enumerate(blank_mask):
932 if time_idx < out_len[batch_idx] and (all_blanks or not is_blank):
933 hypotheses[batch_idx].frame_confidence[-1].append(confidence[batch_idx])
934 del logp
935
936 # If all samples predict / have predicted prior blanks, exit loop early
937 # This is equivalent to if single sample predicted k
938 if blank_mask.all():
939 not_blank = False
940 else:
941 # Collect batch indices where blanks occurred now/past
942 blank_indices = (blank_mask == 1).nonzero(as_tuple=False)
943
944 # Recover prior state for all samples which predicted blank now/past
945 if hidden is not None:
946 # LSTM has 2 states
947 hidden_prime = self.decoder.batch_copy_states(hidden_prime, hidden, blank_indices)
948
949 elif len(blank_indices) > 0 and hidden is None:
950 # Reset state if there were some blank and other non-blank predictions in batch
951 # Original state is filled with zeros so we just multiply
952 # LSTM has 2 states
953 hidden_prime = self.decoder.batch_copy_states(hidden_prime, None, blank_indices, value=0.0)
954
955 # Recover prior predicted label for all samples which predicted blank now/past
956 k[blank_indices] = last_label[blank_indices, 0]
957
958 # Update new label and hidden state for next iteration
959 last_label = k.view(-1, 1)
960 hidden = hidden_prime
961
962 # Update predicted labels, accounting for time mask
963 # If blank was predicted even once, now or in the past,
964 # Force the current predicted label to also be blank
965 # This ensures that blanks propagate across all timesteps
966 # once they have occurred (normally stopping condition of sample level loop).
967 for kidx, ki in enumerate(k):
968 if blank_mask[kidx] == 0:
969 hypotheses[kidx].y_sequence.append(ki)
970 hypotheses[kidx].timestep.append(time_idx)
971 hypotheses[kidx].score += float(v[kidx])
972
973 symbols_added += 1
974
975 # If preserving alignments, convert the current Uj alignments into a torch.Tensor
976 # Then preserve U at current timestep Ti
977 # Finally, forward the timestep history to Ti+1 for that sample
978 # All of this should only be done if the current time index <= sample-level AM length.
979 # Otherwise ignore and move to next sample / next timestep.
980 if self.preserve_alignments:
981
982 # convert Ti-th logits into a torch array
983 for batch_idx in range(batchsize):
984
985 # this checks if current timestep <= sample-level AM length
986 # If current timestep > sample-level AM length, no alignments will be added
987 # Therefore the list of Uj alignments is empty here.
988 if len(hypotheses[batch_idx].alignments[-1]) > 0:
989 hypotheses[batch_idx].alignments.append([]) # blank buffer for next timestep
990
991 # Do the same if preserving per-frame confidence
992 if self.preserve_frame_confidence:
993
994 for batch_idx in range(batchsize):
995 if len(hypotheses[batch_idx].frame_confidence[-1]) > 0:
996 hypotheses[batch_idx].frame_confidence.append([]) # blank buffer for next timestep
997
998 # Remove trailing empty list of alignments at T_{am-len} x Uj
999 if self.preserve_alignments:
1000 for batch_idx in range(batchsize):
1001 if len(hypotheses[batch_idx].alignments[-1]) == 0:
1002 del hypotheses[batch_idx].alignments[-1]
1003
1004 # Remove trailing empty list of confidence scores at T_{am-len} x Uj
1005 if self.preserve_frame_confidence:
1006 for batch_idx in range(batchsize):
1007 if len(hypotheses[batch_idx].frame_confidence[-1]) == 0:
1008 del hypotheses[batch_idx].frame_confidence[-1]
1009
1010 # Preserve states
1011 for batch_idx in range(batchsize):
1012 hypotheses[batch_idx].dec_state = self.decoder.batch_select_state(hidden, batch_idx)
1013
1014 return hypotheses
1015
1016
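The batched loops above all rely on one bookkeeping invariant: once a sample emits blank at a time step (now or earlier in the inner loop), its label is frozen by copying the previous label back, and the inner loop exits only when every sample has blanked. A minimal pure-Python sketch of that masking rule, with toy values and an illustrative helper name (`masked_update` is not part of this module; `-1` stands in for the blank index):

```python
# Toy illustration of the blank-mask bookkeeping used in the batched
# greedy loops above (plain lists stand in for the torch tensors).

def masked_update(blank_mask, last_label, new_label):
    """Accumulate blanks and freeze labels for blanked samples.

    blank_mask: list[bool], True once a sample has emitted blank.
    last_label: list[int], labels carried over from the previous step.
    new_label:  list[int], labels just predicted (-1 denotes blank here).
    Returns the updated (blank_mask, labels) pair.
    """
    out = []
    for i, k in enumerate(new_label):
        if k == -1:  # this sample predicted blank now
            blank_mask[i] = True
        # blanked samples (now or earlier) keep their previous label
        out.append(last_label[i] if blank_mask[i] else k)
    return blank_mask, out

mask = [False, False, False]
mask, label = masked_update(mask, [5, 7, 9], [3, -1, 4])
# sample 1 blanked and keeps label 7; samples 0 and 2 advance
all_blank = all(mask)  # inner loop exits only when every sample blanked
```

The same idea drives `decoder.batch_copy_states` in the real code: the decoder hidden states of blanked samples are restored from the previous step in exactly the way the labels are.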
1017 class ExportedModelGreedyBatchedRNNTInfer:
1018 def __init__(self, encoder_model: str, decoder_joint_model: str, max_symbols_per_step: Optional[int] = None):
1019 self.encoder_model_path = encoder_model
1020 self.decoder_joint_model_path = decoder_joint_model
1021 self.max_symbols_per_step = max_symbols_per_step
1022
1023 # Will be populated at runtime
1024 self._blank_index = None
1025
1026 def __call__(self, audio_signal: torch.Tensor, length: torch.Tensor):
1027 """Returns a list of hypotheses given an input batch of the encoder hidden embedding.
1028 Output token is generated auto-regressively.
1029
1030 Args:
1031 audio_signal: A tensor of size (batch, features, timesteps).
1032 length: list of int representing the length of each sequence
1033 in the batch.
1034
1035 Returns:
1036 packed list containing batch number of sentences (Hypotheses).
1037 """
1038 with torch.no_grad():
1039 # Apply optional preprocessing
1040 encoder_output, encoded_lengths = self.run_encoder(audio_signal=audio_signal, length=length)
1041
1042 if torch.is_tensor(encoder_output):
1043 encoder_output = encoder_output.transpose(1, 2)
1044 else:
1045 encoder_output = encoder_output.transpose([0, 2, 1]) # (B, T, D)
1046 logitlen = encoded_lengths
1047
1048 inseq = encoder_output # [B, T, D]
1049 hypotheses, timestamps = self._greedy_decode(inseq, logitlen)
1050
1051 # Pack the hypotheses results
1052 packed_result = [rnnt_utils.Hypothesis(score=-1.0, y_sequence=[]) for _ in range(len(hypotheses))]
1053 for i in range(len(packed_result)):
1054 packed_result[i].y_sequence = torch.tensor(hypotheses[i], dtype=torch.long)
1055 packed_result[i].length = timestamps[i]
1056
1057 del hypotheses
1058
1059 return packed_result
1060
1061 def _greedy_decode(self, x, out_len):
1062 # x: [B, T, D]
1063 # out_len: [B]
1064
1065 # Initialize state
1066 batchsize = x.shape[0]
1067 hidden = self._get_initial_states(batchsize)
1068 target_lengths = torch.ones(batchsize, dtype=torch.int32)
1069
1070 # Output string buffer
1071 label = [[] for _ in range(batchsize)]
1072 timesteps = [[] for _ in range(batchsize)]
1073
1074 # Last Label buffer + Last Label without blank buffer
1075 # batch level equivalent of the last_label
1076 last_label = torch.full([batchsize, 1], fill_value=self._blank_index, dtype=torch.long).numpy()
1077 if torch.is_tensor(x):
1078 last_label = torch.from_numpy(last_label).to(self.device)
1079
1080 # Mask buffers
1081 blank_mask = torch.full([batchsize], fill_value=0, dtype=torch.bool).numpy()
1082
1083 # Get max sequence length
1084 max_out_len = out_len.max()
1085 for time_idx in range(max_out_len):
1086 f = x[:, time_idx : time_idx + 1, :] # [B, 1, D]
1087
1088 if torch.is_tensor(f):
1089 f = f.transpose(1, 2)
1090 else:
1091 f = f.transpose([0, 2, 1])
1092
1093 # Prepare t timestamp batch variables
1094 not_blank = True
1095 symbols_added = 0
1096
1097 # Reset blank mask
1098 blank_mask *= False
1099
1100 # Update blank mask with time mask
1101 # Batch: [B, T, D], but Bi may have seq len < max(seq_lens_in_batch)
1102 # Forcibly mask with "blank" tokens, for all sample where current time step T > seq_len
1103 blank_mask = time_idx >= out_len
1104 # Start inner loop
1105 while not_blank and (self.max_symbols_per_step is None or symbols_added < self.max_symbols_per_step):
1106
1107 # Batch prediction and joint network steps
1108 # If very first prediction step, submit SOS tag (blank) to pred_step.
1109 # This feeds a zero tensor as input to AbstractRNNTDecoder to prime the state
1110 if time_idx == 0 and symbols_added == 0:
1111 g = torch.tensor([self._blank_index] * batchsize, dtype=torch.int32).view(-1, 1)
1112 else:
1113 if torch.is_tensor(last_label):
1114 g = last_label.type(torch.int32)
1115 else:
1116 g = last_label.astype(np.int32)
1117
1118 # Batched joint step - Output = [B, V + 1]
1119 joint_out, hidden_prime = self.run_decoder_joint(f, g, target_lengths, *hidden)
1120 logp, pred_lengths = joint_out
1121 logp = logp[:, 0, 0, :]
1122
1123 # Get index k, of max prob for batch
1124 if torch.is_tensor(logp):
1125 v, k = logp.max(1)
1126 else:
1127 k = np.argmax(logp, axis=1).astype(np.int32)
1128
1129 # Update blank mask with current predicted blanks
1130 # This is accumulating blanks over all time steps T and all target steps min(max_symbols, U)
1131 k_is_blank = k == self._blank_index
1132 blank_mask |= k_is_blank
1133
1134 del k_is_blank
1135 del logp
1136
1137 # If all samples predict / have predicted prior blanks, exit loop early
1138 # This is equivalent to if single sample predicted k
1139 if blank_mask.all():
1140 not_blank = False
1141
1142 else:
1143 # Collect batch indices where blanks occurred now/past
1144 if torch.is_tensor(blank_mask):
1145 blank_indices = (blank_mask == 1).nonzero(as_tuple=False)
1146 else:
1147 blank_indices = blank_mask.astype(np.int32).nonzero()
1148
1149 if type(blank_indices) in (list, tuple):
1150 blank_indices = blank_indices[0]
1151
1152 # Recover prior state for all samples which predicted blank now/past
1153 if hidden is not None:
1154 # LSTM has 2 states
1155 for state_id in range(len(hidden)):
1156 hidden_prime[state_id][:, blank_indices, :] = hidden[state_id][:, blank_indices, :]
1157
1158 elif len(blank_indices) > 0 and hidden is None:
1159 # Reset state if there were some blank and other non-blank predictions in batch
1160 # Original state is filled with zeros so we just multiply
1161 # LSTM has 2 states
1162 for state_id in range(len(hidden_prime)):
1163 hidden_prime[state_id][:, blank_indices, :] *= 0.0
1164
1165 # Recover prior predicted label for all samples which predicted blank now/past
1166 k[blank_indices] = last_label[blank_indices, 0]
1167
1168 # Update new label and hidden state for next iteration
1169 if torch.is_tensor(k):
1170 last_label = k.clone().reshape(-1, 1)
1171 else:
1172 last_label = k.copy().reshape(-1, 1)
1173 hidden = hidden_prime
1174
1175 # Update predicted labels, accounting for time mask
1176 # If blank was predicted even once, now or in the past,
1177 # Force the current predicted label to also be blank
1178 # This ensures that blanks propagate across all timesteps
1179 # once they have occurred (normally stopping condition of sample level loop).
1180 for kidx, ki in enumerate(k):
1181 if blank_mask[kidx] == 0:
1182 label[kidx].append(ki)
1183 timesteps[kidx].append(time_idx)
1184
1185 symbols_added += 1
1186
1187 return label, timesteps
1188
1189 def _setup_blank_index(self):
1190 raise NotImplementedError()
1191
1192 def run_encoder(self, audio_signal, length):
1193 raise NotImplementedError()
1194
1195 def run_decoder_joint(self, enc_logits, targets, target_length, *states):
1196 raise NotImplementedError()
1197
1198 def _get_initial_states(self, batchsize):
1199 raise NotImplementedError()
1200
1201
1202 class ONNXGreedyBatchedRNNTInfer(ExportedModelGreedyBatchedRNNTInfer):
1203 def __init__(self, encoder_model: str, decoder_joint_model: str, max_symbols_per_step: Optional[int] = 10):
1204 super().__init__(
1205 encoder_model=encoder_model,
1206 decoder_joint_model=decoder_joint_model,
1207 max_symbols_per_step=max_symbols_per_step,
1208 )
1209
1210 try:
1211 import onnx
1212 import onnxruntime
1213 except (ModuleNotFoundError, ImportError):
1214 raise ImportError("`onnx` or `onnxruntime` could not be imported, please install the libraries.")
1215
1216 if torch.cuda.is_available():
1217 # Try to use onnxruntime-gpu
1218 providers = ['TensorrtExecutionProvider', 'CUDAExecutionProvider']
1219 else:
1220 # Fall back to CPU and onnxruntime-cpu
1221 providers = ['CPUExecutionProvider']
1222
1223 onnx_session_opt = onnxruntime.SessionOptions()
1224 onnx_session_opt.graph_optimization_level = onnxruntime.GraphOptimizationLevel.ORT_ENABLE_ALL
1225
1226 onnx_model = onnx.load(self.encoder_model_path)
1227 onnx.checker.check_model(onnx_model, full_check=True)
1228 self.encoder_model = onnx_model
1229 self.encoder = onnxruntime.InferenceSession(
1230 onnx_model.SerializeToString(), sess_options=onnx_session_opt, providers=providers
1231 )
1232
1233 onnx_model = onnx.load(self.decoder_joint_model_path)
1234 onnx.checker.check_model(onnx_model, full_check=True)
1235 self.decoder_joint_model = onnx_model
1236 self.decoder_joint = onnxruntime.InferenceSession(
1237 onnx_model.SerializeToString(), sess_options=onnx_session_opt, providers=providers
1238 )
1239
1240 logging.info("Successfully loaded encoder, decoder and joint ONNX models!")
1241
1242 # Will be populated at runtime
1243 self._blank_index = None
1244 self.max_symbols_per_step = max_symbols_per_step
1245
1246 self._setup_encoder_input_output_keys()
1247 self._setup_decoder_joint_input_output_keys()
1248 self._setup_blank_index()
1249
1250 def _setup_encoder_input_output_keys(self):
1251 self.encoder_inputs = list(self.encoder_model.graph.input)
1252 self.encoder_outputs = list(self.encoder_model.graph.output)
1253
1254 def _setup_decoder_joint_input_output_keys(self):
1255 self.decoder_joint_inputs = list(self.decoder_joint_model.graph.input)
1256 self.decoder_joint_outputs = list(self.decoder_joint_model.graph.output)
1257
1258 def _setup_blank_index(self):
1259 # ASSUME: Single input with no time length information
1260 dynamic_dim = 257
1261 shapes = self.encoder_inputs[0].type.tensor_type.shape.dim
1262 ip_shape = []
1263 for shape in shapes:
1264 if hasattr(shape, 'dim_param') and 'dynamic' in shape.dim_param:
1265 ip_shape.append(dynamic_dim) # replace dynamic axes with constant
1266 else:
1267 ip_shape.append(int(shape.dim_value))
1268
1269 enc_logits, encoded_length = self.run_encoder(
1270 audio_signal=torch.randn(*ip_shape), length=torch.randint(0, 1, size=(dynamic_dim,))
1271 )
1272
1273 # prepare states
1274 states = self._get_initial_states(batchsize=dynamic_dim)
1275
1276 # run decoder 1 step
1277 joint_out, states = self.run_decoder_joint(enc_logits, None, None, *states)
1278 log_probs, lengths = joint_out
1279
1280 self._blank_index = log_probs.shape[-1] - 1 # last token of vocab size is blank token
1281 logging.info(
1282 f"Enc-Dec-Joint step was evaluated, blank token id = {self._blank_index}; vocab size = {log_probs.shape[-1]}"
1283 )
1284
1285 def run_encoder(self, audio_signal, length):
1286 if hasattr(audio_signal, 'cpu'):
1287 audio_signal = audio_signal.cpu().numpy()
1288
1289 if hasattr(length, 'cpu'):
1290 length = length.cpu().numpy()
1291
1292 ip = {
1293 self.encoder_inputs[0].name: audio_signal,
1294 self.encoder_inputs[1].name: length,
1295 }
1296 enc_out = self.encoder.run(None, ip)
1297 enc_out, encoded_length = enc_out # ASSUME: single output
1298 return enc_out, encoded_length
1299
1300 def run_decoder_joint(self, enc_logits, targets, target_length, *states):
1301 # ASSUME: Decoder is RNN Transducer
1302 if targets is None:
1303 targets = torch.zeros(enc_logits.shape[0], 1, dtype=torch.int32)
1304 target_length = torch.ones(enc_logits.shape[0], dtype=torch.int32)
1305
1306 if hasattr(targets, 'cpu'):
1307 targets = targets.cpu().numpy()
1308
1309 if hasattr(target_length, 'cpu'):
1310 target_length = target_length.cpu().numpy()
1311
1312 ip = {
1313 self.decoder_joint_inputs[0].name: enc_logits,
1314 self.decoder_joint_inputs[1].name: targets,
1315 self.decoder_joint_inputs[2].name: target_length,
1316 }
1317
1318 num_states = 0
1319 if states is not None and len(states) > 0:
1320 num_states = len(states)
1321 for idx, state in enumerate(states):
1322 if hasattr(state, 'cpu'):
1323 state = state.cpu().numpy()
1324
1325 ip[self.decoder_joint_inputs[len(ip)].name] = state
1326
1327 dec_out = self.decoder_joint.run(None, ip)
1328
1329 # unpack dec output
1330 if num_states > 0:
1331 new_states = dec_out[-num_states:]
1332 dec_out = dec_out[:-num_states]
1333 else:
1334 new_states = None
1335
1336 return dec_out, new_states
1337
1338 def _get_initial_states(self, batchsize):
1339 # ASSUME: LSTM STATES of shape (layers, batchsize, dim)
1340 input_state_nodes = [ip for ip in self.decoder_joint_inputs if 'state' in ip.name]
1341 num_states = len(input_state_nodes)
1342 if num_states == 0:
1343 return
1344
1345 input_states = []
1346 for state_id in range(num_states):
1347 node = input_state_nodes[state_id]
1348 ip_shape = []
1349 for shape_idx, shape in enumerate(node.type.tensor_type.shape.dim):
1350 if hasattr(shape, 'dim_param') and 'dynamic' in shape.dim_param:
1351 ip_shape.append(batchsize) # replace dynamic axes with constant
1352 else:
1353 ip_shape.append(int(shape.dim_value))
1354
1355 input_states.append(torch.zeros(*ip_shape))
1356
1357 return input_states
1358
1359
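Both `_setup_blank_index` and `_get_initial_states` above resolve ONNX input shapes the same way: any axis whose `dim_param` marks it as dynamic is replaced by a chosen constant (a probe batch size or sequence length), while fixed axes keep their `dim_value`. A small self-contained sketch of that rule, where plain tuples stand in for the onnx `TensorShapeProto.Dimension` objects (the helper name `resolve_shape` is illustrative only):

```python
# Sketch of the dynamic-axis resolution used when probing exported models.
# Each dim is modeled as ("dynamic", None) or ("fixed", <int>), standing in
# for an onnx dimension carrying either dim_param or dim_value.

def resolve_shape(dims, fill_value):
    """Replace dynamic axes with `fill_value`; keep static axes as-is."""
    shape = []
    for kind, value in dims:
        shape.append(fill_value if kind == "dynamic" else int(value))
    return shape

# e.g. an encoder input declared as [batch(dynamic), 80, time(dynamic)]
print(resolve_shape([("dynamic", None), ("fixed", 80), ("dynamic", None)], 257))
# -> [257, 80, 257]
```

This is why `_setup_blank_index` can feed `torch.randn(*ip_shape)` through the session: after resolution the shape is fully concrete, and the vocabulary size (and hence the blank index) can be read off the joint output.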
1360 class TorchscriptGreedyBatchedRNNTInfer(ExportedModelGreedyBatchedRNNTInfer):
1361 def __init__(
1362 self,
1363 encoder_model: str,
1364 decoder_joint_model: str,
1365 cfg: DictConfig,
1366 device: str,
1367 max_symbols_per_step: Optional[int] = 10,
1368 ):
1369 super().__init__(
1370 encoder_model=encoder_model,
1371 decoder_joint_model=decoder_joint_model,
1372 max_symbols_per_step=max_symbols_per_step,
1373 )
1374
1375 self.cfg = cfg
1376 self.device = device
1377
1378 self.encoder = torch.jit.load(self.encoder_model_path, map_location=self.device)
1379 self.decoder_joint = torch.jit.load(self.decoder_joint_model_path, map_location=self.device)
1380
1381 logging.info("Successfully loaded encoder, decoder and joint TorchScript models!")
1382
1383 # Will be populated at runtime
1384 self._blank_index = None
1385 self.max_symbols_per_step = max_symbols_per_step
1386
1387 self._setup_encoder_input_keys()
1388 self._setup_decoder_joint_input_keys()
1389 self._setup_blank_index()
1390
1391 def _setup_encoder_input_keys(self):
1392 arguments = self.encoder.forward.schema.arguments[1:]
1393 self.encoder_inputs = [arg for arg in arguments]
1394
1395 def _setup_decoder_joint_input_keys(self):
1396 arguments = self.decoder_joint.forward.schema.arguments[1:]
1397 self.decoder_joint_inputs = [arg for arg in arguments]
1398
1399 def _setup_blank_index(self):
1400 self._blank_index = len(self.cfg.joint.vocabulary)
1401
1402 logging.info(f"Blank token id = {self._blank_index}; vocab size = {len(self.cfg.joint.vocabulary) + 1}")
1403
1404 def run_encoder(self, audio_signal, length):
1405 enc_out = self.encoder(audio_signal, length)
1406 enc_out, encoded_length = enc_out # ASSUME: single output
1407 return enc_out, encoded_length
1408
1409 def run_decoder_joint(self, enc_logits, targets, target_length, *states):
1410 # ASSUME: Decoder is RNN Transducer
1411 if targets is None:
1412 targets = torch.zeros(enc_logits.shape[0], 1, dtype=torch.int32, device=enc_logits.device)
1413 target_length = torch.ones(enc_logits.shape[0], dtype=torch.int32, device=enc_logits.device)
1414
1415 num_states = 0
1416 if states is not None and len(states) > 0:
1417 num_states = len(states)
1418
1419 dec_out = self.decoder_joint(enc_logits, targets, target_length, *states)
1420
1421 # unpack dec output
1422 if num_states > 0:
1423 new_states = dec_out[-num_states:]
1424 dec_out = dec_out[:-num_states]
1425 else:
1426 new_states = None
1427
1428 return dec_out, new_states
1429
1430 def _get_initial_states(self, batchsize):
1431 # ASSUME: LSTM STATES of shape (layers, batchsize, dim)
1432 input_state_nodes = [ip for ip in self.decoder_joint_inputs if 'state' in ip.name]
1433 num_states = len(input_state_nodes)
1434 if num_states == 0:
1435 return
1436
1437 input_states = []
1438 for state_id in range(num_states):
1439 # Hardcode shape size for LSTM (1 is for num layers in LSTM, which is flattened for export)
1440 ip_shape = [1, batchsize, self.cfg.model_defaults.pred_hidden]
1441 input_states.append(torch.zeros(*ip_shape, device=self.device))
1442
1443 return input_states
1444
1445
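The ONNX and TorchScript backends above unpack the decoder/joint outputs identically: when `num_states` state tensors were passed in, the last `num_states` entries of the flat output list are the updated recurrent states and everything before them is the regular decoder output. As a standalone sketch of that tail-splitting convention (the helper name `split_outputs` is illustrative, not part of this module):

```python
# Sketch of the output unpacking shared by run_decoder_joint in both the
# ONNX and TorchScript paths: the exported decoder/joint returns its regular
# outputs followed by the updated states, so states are split off the tail.

def split_outputs(outputs, num_states):
    """Separate a flat output list into (decoder outputs, new states)."""
    if num_states > 0:
        return outputs[:-num_states], outputs[-num_states:]
    return outputs, None

# e.g. an LSTM-based decoder with two state tensors (h, c)
dec_out, new_states = split_outputs(["logp", "lengths", "h", "c"], 2)
print(dec_out, new_states)  # -> ['logp', 'lengths'] ['h', 'c']
```

Keeping the split in the shared base class means a stateless decoder (`num_states == 0`) simply gets `None` back for its states, and the greedy loop never has to special-case it.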
1446 class GreedyMultiblankRNNTInfer(GreedyRNNTInfer):
1447 """A greedy transducer decoder for multi-blank RNN-T.
1448
1449 Sequence level greedy decoding, performed auto-regressively.
1450
1451 Args:
1452 decoder_model: rnnt_utils.AbstractRNNTDecoder implementation.
1453 joint_model: rnnt_utils.AbstractRNNTJoint implementation.
1454 blank_index: int index of the blank token. Must be len(vocabulary) for multi-blank RNNTs.
1455 big_blank_durations: a list containing durations for big blanks the model supports.
1456 max_symbols_per_step: Optional int. The maximum number of symbols that can be added
1457 to a sequence in a single time step; if set to None then there is
1458 no limit.
1459 preserve_alignments: Bool flag which preserves the history of alignments generated during
1460 greedy decoding (sample / batched). When set to true, the Hypothesis will contain
1461 the non-null value for `alignments` in it. Here, `alignments` is a List of List of
1462 Tuple(Tensor (of length V + 1 + num-big-blanks), Tensor(scalar, label after argmax)).
1463 The length of the list corresponds to the Acoustic Length (T).
1464 Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more targets from a vocabulary.
1465 U is the number of target tokens for the current timestep Ti.
1466 preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores generated
1467 during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
1468 the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of List of floats.
1469 The length of the list corresponds to the Acoustic Length (T).
1470 Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more confidence scores.
1471 U is the number of target tokens for the current timestep Ti.
1472 confidence_measure_cfg: A dict-like object which contains the measure name and settings to compute per-frame
1473 confidence scores.
1474
1475 name: The measure name (str).
1476 Supported values:
1477 - 'max_prob' for using the maximum token probability as a confidence.
1478 - 'entropy' for using a normalized entropy of a log-likelihood vector.
1479
1480 entropy_type: Which type of entropy to use (str). Used if confidence_measure_cfg.name is set to `entropy`.
1481 Supported values:
1482 - 'gibbs' for the (standard) Gibbs entropy. If the alpha (α) is provided,
1483 the formula is the following: H_α = -sum_i((p^α_i)*log(p^α_i)).
1484 Note that for this entropy, the alpha should comply with the following inequality:
1485 (log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= α <= (1+log(V-1))/log(V-1)
1486 where V is the model vocabulary size.
1487 - 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
1488 Tsallis entropy formula is the following: H_α = 1/(α-1)*(1-sum_i(p^α_i)),
1489 where α is a parameter. When α == 1, it works like the Gibbs entropy.
1490 More: https://en.wikipedia.org/wiki/Tsallis_entropy
1491 - 'renyi' for the Rényi entropy.
1492 Rényi entropy formula is the following: H_α = 1/(1-α)*log_2(sum_i(p^α_i)),
1493 where α is a parameter. When α == 1, it works like the Gibbs entropy.
1494 More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
1495
1496 alpha: Power scale for logsoftmax (α for entropies). Here we restrict it to be > 0.
1497 When the alpha equals one, scaling is not applied to 'max_prob',
1498 and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
1499
1500 entropy_norm: A mapping of the entropy value to the interval [0,1].
1501 Supported values:
1502 - 'lin' for using the linear mapping.
1503 - 'exp' for using exponential mapping with linear shift.
1504 """
1505
1506 def __init__(
1507 self,
1508 decoder_model: rnnt_abstract.AbstractRNNTDecoder,
1509 joint_model: rnnt_abstract.AbstractRNNTJoint,
1510 blank_index: int,
1511 big_blank_durations: list,
1512 max_symbols_per_step: Optional[int] = None,
1513 preserve_alignments: bool = False,
1514 preserve_frame_confidence: bool = False,
1515 confidence_measure_cfg: Optional[DictConfig] = None,
1516 ):
1517 super().__init__(
1518 decoder_model=decoder_model,
1519 joint_model=joint_model,
1520 blank_index=blank_index,
1521 max_symbols_per_step=max_symbols_per_step,
1522 preserve_alignments=preserve_alignments,
1523 preserve_frame_confidence=preserve_frame_confidence,
1524 confidence_measure_cfg=confidence_measure_cfg,
1525 )
1526 self.big_blank_durations = big_blank_durations
1527 self._SOS = blank_index - len(big_blank_durations)
1528
1529 @torch.no_grad()
1530 def _greedy_decode(
1531 self, x: torch.Tensor, out_len: torch.Tensor, partial_hypotheses: Optional[rnnt_utils.Hypothesis] = None
1532 ):
1533 # x: [T, 1, D]
1534 # out_len: [seq_len]
1535
1536 # Initialize blank state and empty label set in Hypothesis
1537 hypothesis = rnnt_utils.Hypothesis(score=0.0, y_sequence=[], dec_state=None, timestep=[], last_token=None)
1538
1539 if partial_hypotheses is not None:
1540 hypothesis.last_token = partial_hypotheses.last_token
1541 hypothesis.y_sequence = (
1542 partial_hypotheses.y_sequence.cpu().tolist()
1543 if isinstance(partial_hypotheses.y_sequence, torch.Tensor)
1544 else partial_hypotheses.y_sequence
1545 )
1546 if partial_hypotheses.dec_state is not None:
1547 hypothesis.dec_state = self.decoder.batch_concat_states([partial_hypotheses.dec_state])
1548 hypothesis.dec_state = _states_to_device(hypothesis.dec_state, x.device)
1549
1550 if self.preserve_alignments:
1551 # Alignments is a 2-dimensional dangling list representing T x U
1552 hypothesis.alignments = [[]]
1553
1554 if self.preserve_frame_confidence:
1555 hypothesis.frame_confidence = [[]]
1556
1557 # if this variable is > 1, it means the last emission was a big-blank and we need to skip frames.
1558 big_blank_duration = 1
1559
1560 # For timestep t in X_t
1561 for time_idx in range(out_len):
1562 if big_blank_duration > 1:
1563 # skip frames until big_blank_duration == 1.
1564 big_blank_duration -= 1
1565 continue
1566 # Extract encoder embedding at timestep t
1567 # f = x[time_idx, :, :].unsqueeze(0) # [1, 1, D]
1568 f = x.narrow(dim=0, start=time_idx, length=1)
1569
1570 # Setup exit flags and counter
1571 not_blank = True
1572 symbols_added = 0
1573
1574 # While blank is not predicted and we don't run out of max symbols per timestep
1575 while not_blank and (self.max_symbols is None or symbols_added < self.max_symbols):
1576 # In the first timestep, we initialize the network with RNNT Blank
1577 # In later timesteps, we provide previous predicted label as input.
1578 if hypothesis.last_token is None and hypothesis.dec_state is None:
1579 last_label = self._SOS
1580 else:
1581 last_label = label_collate([[hypothesis.last_token]])
1582
1583 # Perform prediction network and joint network steps.
1584 g, hidden_prime = self._pred_step(last_label, hypothesis.dec_state)
1585 # If preserving per-frame confidence, log_normalize must be true
1586 logp = self._joint_step(f, g, log_normalize=True if self.preserve_frame_confidence else None)[
1587 0, 0, 0, :
1588 ]
1589
1590 del g
1591
1592 # torch.max(0) op doesn't exist for FP16.
1593 if logp.dtype != torch.float32:
1594 logp = logp.float()
1595
1596 # get index k, of max prob
1597 v, k = logp.max(0)
1598 k = k.item() # K is the label at timestep t_s in inner loop, s >= 0.
1599
1600 # Note, we have non-blanks in the vocab first, followed by big blanks, and standard blank at last.
1601 # here we check if it's a big blank and if yes, set the duration variable.
1602 if k >= self._blank_index - len(self.big_blank_durations) and k < self._blank_index:
1603 big_blank_duration = self.big_blank_durations[self._blank_index - k - 1]
1604
1605 if self.preserve_alignments:
1606 # insert logprobs into last timestep
1607 hypothesis.alignments[-1].append((logp.to('cpu'), torch.tensor(k, dtype=torch.int32)))
1608
1609 if self.preserve_frame_confidence:
1610 # insert confidence into last timestep
1611 hypothesis.frame_confidence[-1].append(self._get_confidence(logp))
1612
1613 del logp
1614
1615 # If any type of blank token is predicted, exit inner loop, move onto next timestep t
1616 if k >= self._blank_index - len(self.big_blank_durations):
1617 not_blank = False
1618 else:
1619 # Append token to label set, update RNN state.
1620 hypothesis.y_sequence.append(k)
1621 hypothesis.score += float(v)
1622 hypothesis.timestep.append(time_idx)
1623 hypothesis.dec_state = hidden_prime
1624 hypothesis.last_token = k
1625
1626 # Increment token counter.
1627 symbols_added += 1
1628
1629 if self.preserve_alignments:
1630 # convert Ti-th logits into a torch array
1631 hypothesis.alignments.append([]) # blank buffer for next timestep
1632
1633 if self.preserve_frame_confidence:
1634 hypothesis.frame_confidence.append([]) # blank buffer for next timestep
1635
1636 # Remove trailing empty list of Alignments
1637 if self.preserve_alignments:
1638 if len(hypothesis.alignments[-1]) == 0:
1639 del hypothesis.alignments[-1]
1640
1641 # Remove trailing empty list of per-frame confidence
1642 if self.preserve_frame_confidence:
1643 if len(hypothesis.frame_confidence[-1]) == 0:
1644 del hypothesis.frame_confidence[-1]
1645
1646 # Unpack the hidden states
1647 hypothesis.dec_state = self.decoder.batch_select_state(hypothesis.dec_state, 0)
1648
1649 return hypothesis
1650
1651
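As the decode loop above shows, the multi-blank vocabulary is laid out with regular tokens first, then big blanks, then the standard blank last, and a big blank's duration is looked up as `big_blank_durations[blank_index - k - 1]`. A small sketch with hypothetical sizes (vocab of 5, durations `[2, 4, 8]`) makes the mapping concrete:

```python
# Hypothetical layout: vocabulary of size 5, big blanks with durations [2, 4, 8].
vocab_size = 5
big_blank_durations = [2, 4, 8]
blank_index = vocab_size + len(big_blank_durations)  # standard blank is last: index 8

def duration_for_label(k):
    """Mirror of the decoder's check: how many frames does emitting label k consume?"""
    if k < blank_index - len(big_blank_durations):
        return 1  # ordinary vocabulary token: no frames are skipped afterwards
    if k < blank_index:
        # big blank: indices 5, 6, 7 map to durations[2], durations[1], durations[0]
        return big_blank_durations[blank_index - k - 1]
    return 1  # standard blank consumes a single frame
```

So the big-blank index adjacent to the standard blank carries the shortest duration, and the first big-blank index carries the longest.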
1652 class GreedyBatchedMultiblankRNNTInfer(GreedyBatchedRNNTInfer):
1653 """A batch level greedy transducer decoder.
1654 Batch level greedy decoding, performed auto-regressively.
1655 Args:
1656 decoder_model: rnnt_utils.AbstractRNNTDecoder implementation.
1657 joint_model: rnnt_utils.AbstractRNNTJoint implementation.
1658 blank_index: int index of the blank token. Must be len(vocabulary) for multi-blank RNNTs.
1659 big_blank_durations: a list containing durations for big blanks the model supports.
1660 max_symbols_per_step: Optional int. The maximum number of symbols that can be added
1661 to a sequence in a single time step; if set to None then there is
1662 no limit.
1663 preserve_alignments: Bool flag which preserves the history of alignments generated during
1664 greedy decoding (sample / batched). When set to true, the Hypothesis will contain
1665 the non-null value for `alignments` in it. Here, `alignments` is a List of List of
1666 Tuple(Tensor (of length V + 1 + num-big-blanks), Tensor(scalar, label after argmax)).
1667 The length of the list corresponds to the Acoustic Length (T).
1668 Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more targets from a vocabulary.
1669 U is the number of target tokens for the current timestep Ti.
1670 preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores generated
1671 during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
1672 the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of List of floats.
1673 The length of the list corresponds to the Acoustic Length (T).
1674 Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more confidence scores.
1675 U is the number of target tokens for the current timestep Ti.
1676 confidence_measure_cfg: A dict-like object which contains the measure name and settings to compute per-frame
1677 confidence scores.
1678
1679 name: The measure name (str).
1680 Supported values:
1681 - 'max_prob' for using the maximum token probability as a confidence.
1682 - 'entropy' for using a normalized entropy of a log-likelihood vector.
1683
1684 entropy_type: Which type of entropy to use (str). Used if confidence_measure_cfg.name is set to `entropy`.
1685 Supported values:
1686 - 'gibbs' for the (standard) Gibbs entropy. If the alpha (α) is provided,
1687 the formula is the following: H_α = -sum_i((p^α_i)*log(p^α_i)).
1688 Note that for this entropy, the alpha should comply with the following inequality:
1689 (log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= α <= (1+log(V-1))/log(V-1)
1690 where V is the model vocabulary size.
1691 - 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
1692 Tsallis entropy formula is the following: H_α = 1/(α-1)*(1-sum_i(p^α_i)),
1693 where α is a parameter. When α == 1, it works like the Gibbs entropy.
1694 More: https://en.wikipedia.org/wiki/Tsallis_entropy
1695 - 'renyi' for the Rényi entropy.
1696 Rényi entropy formula is the following: H_α = 1/(1-α)*log_2(sum_i(p^α_i)),
1697 where α is a parameter. When α == 1, it works like the Gibbs entropy.
1698 More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
1699
1700 alpha: Power scale for logsoftmax (α for entropies). Here we restrict it to be > 0.
1701 When the alpha equals one, scaling is not applied to 'max_prob',
1702 and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
1703
1704 entropy_norm: A mapping of the entropy value to the interval [0,1].
1705 Supported values:
1706 - 'lin' for using the linear mapping.
1707 - 'exp' for using exponential mapping with linear shift.
1708 """
1709
1710 def __init__(
1711 self,
1712 decoder_model: rnnt_abstract.AbstractRNNTDecoder,
1713 joint_model: rnnt_abstract.AbstractRNNTJoint,
1714 blank_index: int,
1715 big_blank_durations: List[int],
1716 max_symbols_per_step: Optional[int] = None,
1717 preserve_alignments: bool = False,
1718 preserve_frame_confidence: bool = False,
1719 confidence_measure_cfg: Optional[DictConfig] = None,
1720 ):
1721 super().__init__(
1722 decoder_model=decoder_model,
1723 joint_model=joint_model,
1724 blank_index=blank_index,
1725 max_symbols_per_step=max_symbols_per_step,
1726 preserve_alignments=preserve_alignments,
1727 preserve_frame_confidence=preserve_frame_confidence,
1728 confidence_measure_cfg=confidence_measure_cfg,
1729 )
1730 self.big_blank_durations = big_blank_durations
1731
1732 # Depending on availability of `blank_as_pad` support
1733 # switch between more efficient batch decoding technique
1734 if self.decoder.blank_as_pad:
1735 self._greedy_decode = self._greedy_decode_blank_as_pad
1736 else:
1737 self._greedy_decode = self._greedy_decode_masked
1738 self._SOS = blank_index - len(big_blank_durations)
1739
1740 def _greedy_decode_blank_as_pad(
1741 self,
1742 x: torch.Tensor,
1743 out_len: torch.Tensor,
1744 device: torch.device,
1745 partial_hypotheses: Optional[List[rnnt_utils.Hypothesis]] = None,
1746 ):
1747 if partial_hypotheses is not None:
1748 raise NotImplementedError("`partial_hypotheses` support is not implemented")
1749
1750 with torch.inference_mode():
1751 # x: [B, T, D]
1752 # out_len: [B]
1753 # device: torch.device
1754
1755 # Initialize list of Hypothesis
1756 batchsize = x.shape[0]
1757 hypotheses = [
1758 rnnt_utils.Hypothesis(score=0.0, y_sequence=[], timestep=[], dec_state=None) for _ in range(batchsize)
1759 ]
1760
1761 # Initialize Hidden state matrix (shared by entire batch)
1762 hidden = None
1763
1764 # If alignments need to be preserved, register a dangling list to hold the values
1765 if self.preserve_alignments:
1766 # alignments is a 3-dimensional dangling list representing B x T x U
1767 for hyp in hypotheses:
1768 hyp.alignments = [[]]
1769
1770 # If confidence scores need to be preserved, register a dangling list to hold the values
1771 if self.preserve_frame_confidence:
1772 # frame_confidence is a 3-dimensional dangling list representing B x T x U
1773 for hyp in hypotheses:
1774 hyp.frame_confidence = [[]]
1775
1776 # Last Label buffer + Last Label without blank buffer
1777 # batch level equivalent of the last_label
1778 last_label = torch.full([batchsize, 1], fill_value=self._SOS, dtype=torch.long, device=device)
1779
1780 # this mask is true for if the emission is *any type* of blank.
1781 blank_mask = torch.full([batchsize], fill_value=0, dtype=torch.bool, device=device)
1782
1783 # Get max sequence length
1784 max_out_len = out_len.max()
1785
1786 # We have a mask for each big blank. A mask is "true" means: the previous emission is exactly the big-blank
1787 # with the corresponding duration, or has larger duration. E.g., for big_blank_mask for duration 2, it will
1788 # be set true if the previous emission was a big blank with duration 4, or 3 or 2; but false if previous
1789 # emission was a standard blank (with duration = 1).
1790 big_blank_masks = [torch.full([batchsize], fill_value=0, dtype=torch.bool, device=device)] * len(
1791 self.big_blank_durations
1792 )
1793
1794 # if this variable > 1, it means the previous emission was big-blank and we need to skip frames.
1795 big_blank_duration = 1
1796
1797 for time_idx in range(max_out_len):
1798 if big_blank_duration > 1:
1799 # skip frames until big_blank_duration == 1
1800 big_blank_duration -= 1
1801 continue
1802 f = x.narrow(dim=1, start=time_idx, length=1) # [B, 1, D]
1803
1804 # Prepare t timestamp batch variables
1805 not_blank = True
1806 symbols_added = 0
1807
1808 # Reset all blank masks
1809 blank_mask.mul_(False)
1810 for i in range(len(big_blank_masks)):
1811 big_blank_masks[i].mul_(False)
1812
1813 # Update blank mask with time mask
1814 # Batch: [B, T, D], but Bi may have seq len < max(seq_lens_in_batch)
1815 # Forcibly mask with "blank" tokens, for all sample where current time step T > seq_len
1816 blank_mask = time_idx >= out_len
1817 for i in range(len(big_blank_masks)):
1818 big_blank_masks[i] = time_idx >= out_len
1819
1820 # Start inner loop
1821 while not_blank and (self.max_symbols is None or symbols_added < self.max_symbols):
1822 # Batch prediction and joint network steps
1823 # If very first prediction step, submit SOS tag (blank) to pred_step.
1824 # This feeds a zero tensor as input to AbstractRNNTDecoder to prime the state
1825 if time_idx == 0 and symbols_added == 0 and hidden is None:
1826 g, hidden_prime = self._pred_step(self._SOS, hidden, batch_size=batchsize)
1827 else:
1828 # Perform batch step prediction of decoder, getting new states and scores ("g")
1829 g, hidden_prime = self._pred_step(last_label, hidden, batch_size=batchsize)
1830
1831 # Batched joint step - Output = [B, V + 1 + num-big-blanks]
1832 # If preserving per-frame confidence, log_normalize must be true
1833 logp = self._joint_step(f, g, log_normalize=True if self.preserve_frame_confidence else None)[
1834 :, 0, 0, :
1835 ]
1836
1837 if logp.dtype != torch.float32:
1838 logp = logp.float()
1839
1840 # Get index k, of max prob for batch
1841 v, k = logp.max(1)
1842 del g
1843
1844 # Update blank mask with current predicted blanks
1845 # This is accumulating blanks over all time steps T and all target steps min(max_symbols, U)
1846 k_is_blank = k >= self._blank_index - len(self.big_blank_durations)
1847 blank_mask.bitwise_or_(k_is_blank)
1848
1849 for i in range(len(big_blank_masks)):
1850 # using <= since, as mentioned before, the mask doesn't store exact matches.
1851 # instead, it is True when the predicted blank's duration is >= the duration that the
1852 # mask corresponds to.
1853 k_is_big_blank = k <= self._blank_index - 1 - i
1854
1855 # need to do a bitwise_and since it could also be a non-blank.
1856 k_is_big_blank.bitwise_and_(k_is_blank)
1857 big_blank_masks[i].bitwise_or_(k_is_big_blank)
1858
1859 del k_is_blank
1860
1861 # If preserving alignments, check if sequence length of sample has been reached
1862 # before adding alignment
1863 if self.preserve_alignments:
1864 # Insert logprobs into last timestep per sample
1865 logp_vals = logp.to('cpu')
1866 logp_ids = logp_vals.max(1)[1]
1867 for batch_idx in range(batchsize):
1868 if time_idx < out_len[batch_idx]:
1869 hypotheses[batch_idx].alignments[-1].append(
1870 (logp_vals[batch_idx], logp_ids[batch_idx])
1871 )
1872 del logp_vals
1873
1874 # If preserving per-frame confidence, check if sequence length of sample has been reached
1875 # before adding confidence scores
1876 if self.preserve_frame_confidence:
1877 # Insert probabilities into last timestep per sample
1878 confidence = self._get_confidence(logp)
1879 for batch_idx in range(batchsize):
1880 if time_idx < out_len[batch_idx]:
1881 hypotheses[batch_idx].frame_confidence[-1].append(confidence[batch_idx])
1882 del logp
1883
1884 # If all samples predict / have predicted prior blanks, exit loop early
1885 # This is equivalent to if single sample predicted k
1886 if blank_mask.all():
1887 not_blank = False
1888 else:
1889 # Collect batch indices where blanks occurred now/past
1890 blank_indices = (blank_mask == 1).nonzero(as_tuple=False)
1891
1892 # Recover prior state for all samples which predicted blank now/past
1893 if hidden is not None:
1894 # LSTM has 2 states
1895 hidden_prime = self.decoder.batch_copy_states(hidden_prime, hidden, blank_indices)
1896
1897 elif len(blank_indices) > 0 and hidden is None:
1898 # Reset state if there were some blank and other non-blank predictions in batch
1899 # Original state is filled with zeros so we just multiply
1900 # LSTM has 2 states
1901 hidden_prime = self.decoder.batch_copy_states(hidden_prime, None, blank_indices, value=0.0)
1902
1903 # Recover prior predicted label for all samples which predicted blank now/past
1904 k[blank_indices] = last_label[blank_indices, 0]
1905
1906 # Update new label and hidden state for next iteration
1907 last_label = k.clone().view(-1, 1)
1908 hidden = hidden_prime
1909
1910 # Update predicted labels, accounting for time mask
1911 # If blank was predicted even once, now or in the past,
1912 # Force the current predicted label to also be blank
1913 # This ensures that blanks propagate across all timesteps
1914 # once they have occurred (normally the stopping condition of the sample-level loop).
1915 for kidx, ki in enumerate(k):
1916 if blank_mask[kidx] == 0:
1917 hypotheses[kidx].y_sequence.append(ki)
1918 hypotheses[kidx].timestep.append(time_idx)
1919 hypotheses[kidx].score += float(v[kidx])
1920
1921 symbols_added += 1
1922
1923 for i in range(len(big_blank_masks) + 1):
1924 # The task here is to find the shortest blank duration across the batch,
1925 # so we start from the shortest blank duration and go up,
1926 # and stop once we find a duration whose corresponding mask isn't all True.
1927 if i == len(big_blank_masks) or not big_blank_masks[i].all():
1928 big_blank_duration = self.big_blank_durations[i - 1] if i > 0 else 1
1929 break
1930
1931 # If preserving alignments, convert the current Uj alignments into a torch.Tensor
1932 # Then preserve U at current timestep Ti
1933 # Finally, forward the timestep history to Ti+1 for that sample
1934 # All of this should only be done iff the current time index <= sample-level AM length.
1935 # Otherwise ignore and move to next sample / next timestep.
1936 if self.preserve_alignments:
1937
1938 # convert Ti-th logits into a torch array
1939 for batch_idx in range(batchsize):
1940
1941 # this checks if current timestep <= sample-level AM length
1942 # If current timestep > sample-level AM length, no alignments will be added
1943 # Therefore the list of Uj alignments is empty here.
1944 if len(hypotheses[batch_idx].alignments[-1]) > 0:
1945 hypotheses[batch_idx].alignments.append([]) # blank buffer for next timestep
1946
1947 # Do the same if preserving per-frame confidence
1948 if self.preserve_frame_confidence:
1949
1950 for batch_idx in range(batchsize):
1951 if len(hypotheses[batch_idx].frame_confidence[-1]) > 0:
1952 hypotheses[batch_idx].frame_confidence.append([]) # blank buffer for next timestep
1953
1954 # Remove trailing empty list of alignments at T_{am-len} x Uj
1955 if self.preserve_alignments:
1956 for batch_idx in range(batchsize):
1957 if len(hypotheses[batch_idx].alignments[-1]) == 0:
1958 del hypotheses[batch_idx].alignments[-1]
1959
1960 # Remove trailing empty list of confidence scores at T_{am-len} x Uj
1961 if self.preserve_frame_confidence:
1962 for batch_idx in range(batchsize):
1963 if len(hypotheses[batch_idx].frame_confidence[-1]) == 0:
1964 del hypotheses[batch_idx].frame_confidence[-1]
1965
1966 # Preserve states
1967 for batch_idx in range(batchsize):
1968 hypotheses[batch_idx].dec_state = self.decoder.batch_select_state(hidden, batch_idx)
1969
1970 return hypotheses
1971
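The per-timestep frame-skip decision at the end of `_greedy_decode_blank_as_pad` (the loop over `big_blank_masks` after the inner while-loop) picks the largest duration that every batch element can safely skip. A sketch of the same logic with plain Python lists standing in for the boolean tensors, assuming (as the mask construction does) that the duration list is sorted ascending:

```python
def shortest_skip(big_blank_masks, big_blank_durations):
    """big_blank_masks[i][b] is True iff batch element b last emitted a blank with
    duration >= big_blank_durations[i] (durations sorted ascending). Returns the
    number of frames the whole batch can skip: 1 as soon as any element emitted
    a standard blank, otherwise the largest duration whose mask is all True."""
    for i in range(len(big_blank_masks) + 1):
        if i == len(big_blank_masks) or not all(big_blank_masks[i]):
            return big_blank_durations[i - 1] if i > 0 else 1
```

Because the masks encode "duration >= d" rather than exact matches, the first mask that is not all-True bounds the skip by the previous (smaller) duration.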
1972 def _greedy_decode_masked(
1973 self,
1974 x: torch.Tensor,
1975 out_len: torch.Tensor,
1976 device: torch.device,
1977 partial_hypotheses: Optional[List[rnnt_utils.Hypothesis]] = None,
1978 ):
1979 if partial_hypotheses is not None:
1980 raise NotImplementedError("`partial_hypotheses` support is not implemented")
1981
1982 if self.big_blank_durations != [1] * len(self.big_blank_durations):
1983 raise NotImplementedError(
1984 "Efficient frame-skipping version for multi-blank masked decoding is not supported."
1985 )
1986
1987 # x: [B, T, D]
1988 # out_len: [B]
1989 # device: torch.device
1990
1991 # Initialize state
1992 batchsize = x.shape[0]
1993 hypotheses = [
1994 rnnt_utils.Hypothesis(score=0.0, y_sequence=[], timestep=[], dec_state=None) for _ in range(batchsize)
1995 ]
1996
1997 # Initialize Hidden state matrix (shared by entire batch)
1998 hidden = None
1999
2000 # If alignments need to be preserved, register a dangling list to hold the values
2001 if self.preserve_alignments:
2002 # alignments is a 3-dimensional dangling list representing B x T x U
2003 for hyp in hypotheses:
2004 hyp.alignments = [[]]
2007
2008 # If confidence scores need to be preserved, register a dangling list to hold the values
2009 if self.preserve_frame_confidence:
2010 # frame_confidence is a 3-dimensional dangling list representing B x T x U
2011 for hyp in hypotheses:
2012 hyp.frame_confidence = [[]]
2013
2014 # Last Label buffer + Last Label without blank buffer
2015 # batch level equivalent of the last_label
2016 last_label = torch.full([batchsize, 1], fill_value=self._blank_index, dtype=torch.long, device=device)
2017 last_label_without_blank = last_label.clone()
2018
2019 # Mask buffers
2020 blank_mask = torch.full([batchsize], fill_value=0, dtype=torch.bool, device=device)
2021
2022 # Get max sequence length
2023 max_out_len = out_len.max()
2024
2025 with torch.inference_mode():
2026 for time_idx in range(max_out_len):
2027 f = x.narrow(dim=1, start=time_idx, length=1) # [B, 1, D]
2028
2029 # Prepare t timestamp batch variables
2030 not_blank = True
2031 symbols_added = 0
2032
2033 # Reset blank mask
2034 blank_mask.mul_(False)
2035
2036 # Update blank mask with time mask
2037 # Batch: [B, T, D], but Bi may have seq len < max(seq_lens_in_batch)
2038 # Forcibly mask with "blank" tokens, for all sample where current time step T > seq_len
2039 blank_mask = time_idx >= out_len
2040
2041 # Start inner loop
2042 while not_blank and (self.max_symbols is None or symbols_added < self.max_symbols):
2043 # Batch prediction and joint network steps
2044 # If very first prediction step, submit SOS tag (blank) to pred_step.
2045 # This feeds a zero tensor as input to AbstractRNNTDecoder to prime the state
2046 if time_idx == 0 and symbols_added == 0 and hidden is None:
2047 g, hidden_prime = self._pred_step(self._SOS, hidden, batch_size=batchsize)
2048 else:
2049 # Set a dummy label for the blank value
2050 # This value will be overwritten by "blank" again at the last label update below
2051 # This is done as vocabulary of prediction network does not contain "blank" token of RNNT
2052 last_label_without_blank_mask = last_label >= self._blank_index
2053 last_label_without_blank[last_label_without_blank_mask] = 0 # temp change of label
2054 last_label_without_blank[~last_label_without_blank_mask] = last_label[
2055 ~last_label_without_blank_mask
2056 ]
2057
2058 # Perform batch step prediction of decoder, getting new states and scores ("g")
2059 g, hidden_prime = self._pred_step(last_label_without_blank, hidden, batch_size=batchsize)
2060
2061 # Batched joint step - Output = [B, V + 1 + num-big-blanks]
2062 # If preserving per-frame confidence, log_normalize must be true
2063 logp = self._joint_step(f, g, log_normalize=True if self.preserve_frame_confidence else None)[
2064 :, 0, 0, :
2065 ]
2066
2067 if logp.dtype != torch.float32:
2068 logp = logp.float()
2069
2070 # Get index k, of max prob for batch
2071 v, k = logp.max(1)
2072 del g
2073
2074 # Update blank mask with current predicted blanks
2075 # This is accumulating blanks over all time steps T and all target steps min(max_symbols, U)
2076 k_is_blank = k == self._blank_index
2077 blank_mask.bitwise_or_(k_is_blank)
2078
2079 # If preserving alignments, check if sequence length of sample has been reached
2080 # before adding alignment
2081 if self.preserve_alignments:
2082 # Insert logprobs into last timestep per sample
2083 logp_vals = logp.to('cpu')
2084 logp_ids = logp_vals.max(1)[1]
2085 for batch_idx in range(batchsize):
2086 if time_idx < out_len[batch_idx]:
2087 hypotheses[batch_idx].alignments[-1].append(
2088 (logp_vals[batch_idx], logp_ids[batch_idx])
2089 )
2090 del logp_vals
2091
2092 # If preserving per-frame confidence, check if sequence length of sample has been reached
2093 # before adding confidence scores
2094 if self.preserve_frame_confidence:
2095 # Insert probabilities into last timestep per sample
2096 confidence = self._get_confidence(logp)
2097 for batch_idx in range(batchsize):
2098 if time_idx < out_len[batch_idx]:
2099 hypotheses[batch_idx].frame_confidence[-1].append(confidence[batch_idx])
2100 del logp
2101
2102 # If all samples predict / have predicted prior blanks, exit loop early
2103 # This is equivalent to if single sample predicted k
2104 if blank_mask.all():
2105 not_blank = False
2106 else:
2107 # Collect batch indices where blanks occurred now/past
2108 blank_indices = (blank_mask == 1).nonzero(as_tuple=False)
2109
2110 # Recover prior state for all samples which predicted blank now/past
2111 if hidden is not None:
2112 # LSTM has 2 states
2113 hidden_prime = self.decoder.batch_copy_states(hidden_prime, hidden, blank_indices)
2114
2115 elif len(blank_indices) > 0 and hidden is None:
2116 # Reset state if there were some blank and other non-blank predictions in batch
2117 # Original state is filled with zeros so we just multiply
2118 # LSTM has 2 states
2119 hidden_prime = self.decoder.batch_copy_states(hidden_prime, None, blank_indices, value=0.0)
2120
2121 # Recover prior predicted label for all samples which predicted blank now/past
2122 k[blank_indices] = last_label[blank_indices, 0]
2123
2124 # Update new label and hidden state for next iteration
2125 last_label = k.view(-1, 1)
2126 hidden = hidden_prime
2127
2128 # Update predicted labels, accounting for time mask
2129 # If blank was predicted even once, now or in the past,
2130 # Force the current predicted label to also be blank
2131 # This ensures that blanks propagate across all timesteps
2132 # once they have occurred (normally the stopping condition of the sample-level loop).
2133 for kidx, ki in enumerate(k):
2134 if blank_mask[kidx] == 0:
2135 hypotheses[kidx].y_sequence.append(ki)
2136 hypotheses[kidx].timestep.append(time_idx)
2137 hypotheses[kidx].score += float(v[kidx])
2138
2139 symbols_added += 1
2140
2141 # If preserving alignments, convert the current Uj alignments into a torch.Tensor
2142 # Then preserve U at current timestep Ti
2143 # Finally, forward the timestep history to Ti+1 for that sample
2144 # All of this should only be done iff the current time index <= sample-level AM length.
2145 # Otherwise ignore and move to next sample / next timestep.
2146 if self.preserve_alignments:
2147
2148 # convert Ti-th logits into a torch array
2149 for batch_idx in range(batchsize):
2150
2151 # this checks if current timestep <= sample-level AM length
2152 # If current timestep > sample-level AM length, no alignments will be added
2153 # Therefore the list of Uj alignments is empty here.
2154 if len(hypotheses[batch_idx].alignments[-1]) > 0:
2155 hypotheses[batch_idx].alignments.append([]) # blank buffer for next timestep
2156
2157 # Do the same if preserving per-frame confidence
2158 if self.preserve_frame_confidence:
2159
2160 for batch_idx in range(batchsize):
2161 if len(hypotheses[batch_idx].frame_confidence[-1]) > 0:
2162 hypotheses[batch_idx].frame_confidence.append([]) # blank buffer for next timestep
2163
2164 # Remove trailing empty list of alignments at T_{am-len} x Uj
2165 if self.preserve_alignments:
2166 for batch_idx in range(batchsize):
2167 if len(hypotheses[batch_idx].alignments[-1]) == 0:
2168 del hypotheses[batch_idx].alignments[-1]
2169
2170 # Remove trailing empty list of confidence scores at T_{am-len} x Uj
2171 if self.preserve_frame_confidence:
2172 for batch_idx in range(batchsize):
2173 if len(hypotheses[batch_idx].frame_confidence[-1]) == 0:
2174 del hypotheses[batch_idx].frame_confidence[-1]
2175
2176 # Preserve states
2177 for batch_idx in range(batchsize):
2178 hypotheses[batch_idx].dec_state = self.decoder.batch_select_state(hidden, batch_idx)
2179
2180 return hypotheses
2181
2182
2183 @dataclass
2184 class GreedyRNNTInferConfig:
2185 max_symbols_per_step: Optional[int] = 10
2186 preserve_alignments: bool = False
2187 preserve_frame_confidence: bool = False
2188 confidence_measure_cfg: Optional[ConfidenceMeasureConfig] = ConfidenceMeasureConfig()
2189 confidence_method_cfg: str = "DEPRECATED"
2190
2191 def __post_init__(self):
2192 # OmegaConf.structured ensures that post_init check is always executed
2193 self.confidence_measure_cfg = OmegaConf.structured(
2194 self.confidence_measure_cfg
2195 if isinstance(self.confidence_measure_cfg, ConfidenceMeasureConfig)
2196 else ConfidenceMeasureConfig(**self.confidence_measure_cfg)
2197 )
2198 if self.confidence_method_cfg != "DEPRECATED":
2199 logging.warning(
2200 "`confidence_method_cfg` is deprecated and will be removed in the future. "
2201 "Please use `confidence_measure_cfg` instead."
2202 )
2203
2204 # TODO (alaptev): delete the following two lines sometime in the future
2205 logging.warning("Re-writing `confidence_measure_cfg` with the value of `confidence_method_cfg`.")
2206 # OmegaConf.structured ensures that post_init check is always executed
2207 self.confidence_measure_cfg = OmegaConf.structured(
2208 self.confidence_method_cfg
2209 if isinstance(self.confidence_method_cfg, ConfidenceMeasureConfig)
2210 else ConfidenceMeasureConfig(**self.confidence_method_cfg)
2211 )
2212 self.confidence_method_cfg = "DEPRECATED"
2213
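The `__post_init__` shim above funnels the deprecated `confidence_method_cfg` into `confidence_measure_cfg`, accepting either a config instance or a plain dict for each. The same pattern, detached from OmegaConf and with a hypothetical stand-in for `ConfidenceMeasureConfig`, looks like this:

```python
from dataclasses import dataclass, field

# Minimal stand-in for ConfidenceMeasureConfig, only to illustrate the
# deprecation-shim pattern; the real field set is defined elsewhere.
@dataclass
class MeasureCfg:
    name: str = "entropy"

@dataclass
class InferCfg:
    confidence_measure_cfg: object = field(default_factory=MeasureCfg)
    confidence_method_cfg: object = "DEPRECATED"

    def __post_init__(self):
        # Accept either a MeasureCfg instance or a plain dict for the new field.
        if not isinstance(self.confidence_measure_cfg, MeasureCfg):
            self.confidence_measure_cfg = MeasureCfg(**self.confidence_measure_cfg)
        # If the caller still sets the old field, it overrides the new one,
        # then is neutralized so downstream code sees a single source of truth.
        if self.confidence_method_cfg != "DEPRECATED":
            cfg = self.confidence_method_cfg
            self.confidence_measure_cfg = cfg if isinstance(cfg, MeasureCfg) else MeasureCfg(**cfg)
            self.confidence_method_cfg = "DEPRECATED"
```

`OmegaConf.structured` in the real code additionally validates field names and types; the sketch keeps only the precedence and normalization behavior.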
2214
2215 @dataclass
2216 class GreedyBatchedRNNTInferConfig:
2217 max_symbols_per_step: Optional[int] = 10
2218 preserve_alignments: bool = False
2219 preserve_frame_confidence: bool = False
2220 confidence_measure_cfg: Optional[ConfidenceMeasureConfig] = ConfidenceMeasureConfig()
2221 confidence_method_cfg: str = "DEPRECATED"
2222
2223 def __post_init__(self):
2224 # OmegaConf.structured ensures that post_init check is always executed
2225 self.confidence_measure_cfg = OmegaConf.structured(
2226 self.confidence_measure_cfg
2227 if isinstance(self.confidence_measure_cfg, ConfidenceMeasureConfig)
2228 else ConfidenceMeasureConfig(**self.confidence_measure_cfg)
2229 )
2230 if self.confidence_method_cfg != "DEPRECATED":
2231 logging.warning(
2232 "`confidence_method_cfg` is deprecated and will be removed in the future. "
2233 "Please use `confidence_measure_cfg` instead."
2234 )
2235
2236 # TODO (alaptev): delete the following two lines sometime in the future
2237 logging.warning("Re-writing `confidence_measure_cfg` with the value of `confidence_method_cfg`.")
2238 # OmegaConf.structured ensures that post_init check is always executed
2239 self.confidence_measure_cfg = OmegaConf.structured(
2240 self.confidence_method_cfg
2241 if isinstance(self.confidence_method_cfg, ConfidenceMeasureConfig)
2242 else ConfidenceMeasureConfig(**self.confidence_method_cfg)
2243 )
2244 self.confidence_method_cfg = "DEPRECATED"
2245
2246
2247 class GreedyTDTInfer(_GreedyRNNTInfer):
2248 """A greedy TDT decoder.
2249
2250 Sequence level greedy decoding, performed auto-regressively.
2251
2252 Args:
2253 decoder_model: rnnt_utils.AbstractRNNTDecoder implementation.
2254 joint_model: rnnt_utils.AbstractRNNTJoint implementation.
2255 blank_index: int index of the blank token. Must be len(vocabulary) for TDT models.
2256 durations: a list containing durations for TDT.
2257 max_symbols_per_step: Optional int. The maximum number of symbols that can be added
2258 to a sequence in a single time step; if set to None then there is
2259 no limit.
2260 preserve_alignments: Bool flag which preserves the history of alignments generated during
2261 greedy decoding (sample / batched). When set to true, the Hypothesis will contain
2262 the non-null value for `alignments` in it. Here, `alignments` is a List of List of
2264                 Tuple(Tensor (of length V + 1 + num-durations), Tensor(scalar, label after argmax)).
2264 The length of the list corresponds to the Acoustic Length (T).
2265 Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more targets from a vocabulary.
2266 U is the number of target tokens for the current timestep Ti.
2267 preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores generated
2268 during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
2269 the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of List of floats.
2270 The length of the list corresponds to the Acoustic Length (T).
2271 Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more confidence scores.
2272 U is the number of target tokens for the current timestep Ti.
2273 confidence_measure_cfg: A dict-like object which contains the measure name and settings to compute per-frame
2274 confidence scores.
2275
2276 name: The measure name (str).
2277 Supported values:
2278 - 'max_prob' for using the maximum token probability as a confidence.
2279 - 'entropy' for using a normalized entropy of a log-likelihood vector.
2280
2281 entropy_type: Which type of entropy to use (str). Used if confidence_measure_cfg.name is set to `entropy`.
2282 Supported values:
2284                 - 'gibbs' for the (standard) Gibbs entropy. If the alpha (α) is provided,
2285                     the formula is the following: H_α = -sum_i((p^α_i)*log(p^α_i)).
2286                     Note that for this entropy, the alpha should comply with the following inequality:
2287                     (log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= α <= (1+log(V-1))/log(V-1)
2288                     where V is the model vocabulary size.
2289                 - 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
2290                     Tsallis entropy formula is the following: H_α = 1/(α-1)*(1-sum_i(p^α_i)),
2291                     where α is a parameter. When α == 1, it works like the Gibbs entropy.
2292                     More: https://en.wikipedia.org/wiki/Tsallis_entropy
2293                 - 'renyi' for the Rényi entropy.
2294                     Rényi entropy formula is the following: H_α = 1/(1-α)*log_2(sum_i(p^α_i)),
2295                     where α is a parameter. When α == 1, it works like the Gibbs entropy.
2296                     More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
2297
2298             alpha: Power scale for logsoftmax (α for entropies). Here we restrict it to be > 0.
2298 When the alpha equals one, scaling is not applied to 'max_prob',
2299 and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
2300
2301 entropy_norm: A mapping of the entropy value to the interval [0,1].
2302 Supported values:
2303 - 'lin' for using the linear mapping.
2304 - 'exp' for using exponential mapping with linear shift.
2305 """
2306
2307 def __init__(
2308 self,
2309 decoder_model: rnnt_abstract.AbstractRNNTDecoder,
2310 joint_model: rnnt_abstract.AbstractRNNTJoint,
2311 blank_index: int,
2312 durations: list,
2313 max_symbols_per_step: Optional[int] = None,
2314 preserve_alignments: bool = False,
2315 preserve_frame_confidence: bool = False,
2316 confidence_measure_cfg: Optional[DictConfig] = None,
2317 ):
2318 super().__init__(
2319 decoder_model=decoder_model,
2320 joint_model=joint_model,
2321 blank_index=blank_index,
2322 max_symbols_per_step=max_symbols_per_step,
2323 preserve_alignments=preserve_alignments,
2324 preserve_frame_confidence=preserve_frame_confidence,
2325 confidence_measure_cfg=confidence_measure_cfg,
2326 )
2327 self.durations = durations
2328
2329 @typecheck()
2330 def forward(
2331 self,
2332 encoder_output: torch.Tensor,
2333 encoded_lengths: torch.Tensor,
2334 partial_hypotheses: Optional[List[rnnt_utils.Hypothesis]] = None,
2335 ):
2336 """Returns a list of hypotheses given an input batch of the encoder hidden embedding.
2337 Output token is generated auto-regressively.
2338 Args:
2339 encoder_output: A tensor of size (batch, features, timesteps).
2340 encoded_lengths: list of int representing the length of each sequence
2341 output sequence.
2342 Returns:
2343 packed list containing batch number of sentences (Hypotheses).
2344 """
2345 # Preserve decoder and joint training state
2346 decoder_training_state = self.decoder.training
2347 joint_training_state = self.joint.training
2348
2349 with torch.inference_mode():
2350 # Apply optional preprocessing
2351 encoder_output = encoder_output.transpose(1, 2) # (B, T, D)
2352
2353 self.decoder.eval()
2354 self.joint.eval()
2355
2356 hypotheses = []
2357 # Process each sequence independently
2358 with self.decoder.as_frozen(), self.joint.as_frozen():
2359 for batch_idx in range(encoder_output.size(0)):
2360 inseq = encoder_output[batch_idx, :, :].unsqueeze(1) # [T, 1, D]
2361 logitlen = encoded_lengths[batch_idx]
2362
2363 partial_hypothesis = partial_hypotheses[batch_idx] if partial_hypotheses is not None else None
2364 hypothesis = self._greedy_decode(inseq, logitlen, partial_hypotheses=partial_hypothesis)
2365 hypotheses.append(hypothesis)
2366
2367 # Pack results into Hypotheses
2368 packed_result = pack_hypotheses(hypotheses, encoded_lengths)
2369
2370 self.decoder.train(decoder_training_state)
2371 self.joint.train(joint_training_state)
2372
2373 return (packed_result,)
2374
2375 @torch.no_grad()
2376 def _greedy_decode(
2377 self, x: torch.Tensor, out_len: torch.Tensor, partial_hypotheses: Optional[rnnt_utils.Hypothesis] = None
2378 ):
2379 # x: [T, 1, D]
2380 # out_len: [seq_len]
2381
2382 # Initialize blank state and empty label set in Hypothesis
2383 hypothesis = rnnt_utils.Hypothesis(score=0.0, y_sequence=[], dec_state=None, timestep=[], last_token=None)
2384
2385 if partial_hypotheses is not None:
2386 hypothesis.last_token = partial_hypotheses.last_token
2387 hypothesis.y_sequence = (
2388 partial_hypotheses.y_sequence.cpu().tolist()
2389 if isinstance(partial_hypotheses.y_sequence, torch.Tensor)
2390 else partial_hypotheses.y_sequence
2391 )
2392 if partial_hypotheses.dec_state is not None:
2393 hypothesis.dec_state = self.decoder.batch_concat_states([partial_hypotheses.dec_state])
2394 hypothesis.dec_state = _states_to_device(hypothesis.dec_state, x.device)
2395
2396 if self.preserve_alignments:
2397 # Alignments is a 2-dimensional dangling list representing T x U
2398 hypothesis.alignments = [[]]
2399
2400 if self.preserve_frame_confidence:
2401 hypothesis.frame_confidence = [[]]
2402
2403 time_idx = 0
2404 while time_idx < out_len:
2405 # Extract encoder embedding at timestep t
2406 # f = x[time_idx, :, :].unsqueeze(0) # [1, 1, D]
2407 f = x.narrow(dim=0, start=time_idx, length=1)
2408
2409 # Setup exit flags and counter
2410 not_blank = True
2411 symbols_added = 0
2412
2413 need_loop = True
2415                 # While a zero predicted duration keeps us on the current frame, and we don't run out of max symbols per timestep
2415 while need_loop and (self.max_symbols is None or symbols_added < self.max_symbols):
2416 # In the first timestep, we initialize the network with RNNT Blank
2417 # In later timesteps, we provide previous predicted label as input.
2418 if hypothesis.last_token is None and hypothesis.dec_state is None:
2419 last_label = self._SOS
2420 else:
2421 last_label = label_collate([[hypothesis.last_token]])
2422
2423 # Perform prediction network and joint network steps.
2424 g, hidden_prime = self._pred_step(last_label, hypothesis.dec_state)
2426                 # Note: log_normalize is False here because token and duration logits must be normalized separately;
2426 logits = self._joint_step(f, g, log_normalize=False)
2427 logp = logits[0, 0, 0, : -len(self.durations)]
2428 if self.preserve_frame_confidence:
2429 logp = torch.log_softmax(logp, -1)
2430
2431 duration_logp = torch.log_softmax(logits[0, 0, 0, -len(self.durations) :], dim=-1)
2432 del g
2433
2435                 # torch.max(0) op doesn't exist for FP16.
2435 if logp.dtype != torch.float32:
2436 logp = logp.float()
2437
2438 # get index k, of max prob
2439 v, k = logp.max(0)
2440 k = k.item() # K is the label at timestep t_s in inner loop, s >= 0.
2441
2442 d_v, d_k = duration_logp.max(0)
2443 d_k = d_k.item()
2444
2445 skip = self.durations[d_k]
2446
2447 if self.preserve_alignments:
2448 # insert logprobs into last timestep
2449 hypothesis.alignments[-1].append((logp.to('cpu'), torch.tensor(k, dtype=torch.int32)))
2450
2451 if self.preserve_frame_confidence:
2452 # insert confidence into last timestep
2453 hypothesis.frame_confidence[-1].append(self._get_confidence(logp))
2454
2455 del logp
2456
2458                 # If blank token is predicted, do not update the hypothesis (time still advances by the predicted duration)
2458 if k == self._blank_index:
2459 not_blank = False
2460 else:
2461 # Append token to label set, update RNN state.
2462 hypothesis.y_sequence.append(k)
2463 hypothesis.score += float(v)
2464 hypothesis.timestep.append(time_idx)
2465 hypothesis.dec_state = hidden_prime
2466 hypothesis.last_token = k
2467
2468 # Increment token counter.
2469 symbols_added += 1
2470 time_idx += skip
2471 need_loop = skip == 0
2472
2473 # this rarely happens, but we manually increment the `skip` number
2474 # if blank is emitted and duration=0 is predicted. This prevents possible
2475 # infinite loops.
2476 if skip == 0:
2477 skip = 1
2478
2479 if self.preserve_alignments:
2480 # convert Ti-th logits into a torch array
2481 hypothesis.alignments.append([]) # blank buffer for next timestep
2482
2483 if self.preserve_frame_confidence:
2484 hypothesis.frame_confidence.append([]) # blank buffer for next timestep
2485
2486 if symbols_added == self.max_symbols:
2487 time_idx += 1
2488
2489 # Remove trailing empty list of Alignments
2490 if self.preserve_alignments:
2491 if len(hypothesis.alignments[-1]) == 0:
2492 del hypothesis.alignments[-1]
2493
2494 # Remove trailing empty list of per-frame confidence
2495 if self.preserve_frame_confidence:
2496 if len(hypothesis.frame_confidence[-1]) == 0:
2497 del hypothesis.frame_confidence[-1]
2498
2499 # Unpack the hidden states
2500 hypothesis.dec_state = self.decoder.batch_select_state(hypothesis.dec_state, 0)
2501
2502 return hypothesis
2503
2504
2505 class GreedyBatchedTDTInfer(_GreedyRNNTInfer):
2506 """A batch level greedy TDT decoder.
2507 Batch level greedy decoding, performed auto-regressively.
2508 Args:
2509 decoder_model: rnnt_utils.AbstractRNNTDecoder implementation.
2510 joint_model: rnnt_utils.AbstractRNNTJoint implementation.
2511 blank_index: int index of the blank token. Must be len(vocabulary) for TDT models.
2512 durations: a list containing durations.
2513 max_symbols_per_step: Optional int. The maximum number of symbols that can be added
2514 to a sequence in a single time step; if set to None then there is
2515 no limit.
2516 preserve_alignments: Bool flag which preserves the history of alignments generated during
2517 greedy decoding (sample / batched). When set to true, the Hypothesis will contain
2518 the non-null value for `alignments` in it. Here, `alignments` is a List of List of
2520                 Tuple(Tensor (of length V + 1 + num-durations), Tensor(scalar, label after argmax)).
2520 The length of the list corresponds to the Acoustic Length (T).
2521 Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more targets from a vocabulary.
2522 U is the number of target tokens for the current timestep Ti.
2523 preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores generated
2524 during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
2525 the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of List of floats.
2526 The length of the list corresponds to the Acoustic Length (T).
2527 Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more confidence scores.
2528 U is the number of target tokens for the current timestep Ti.
2529 confidence_measure_cfg: A dict-like object which contains the measure name and settings to compute per-frame
2530 confidence scores.
2531
2532 name: The measure name (str).
2533 Supported values:
2534 - 'max_prob' for using the maximum token probability as a confidence.
2535 - 'entropy' for using a normalized entropy of a log-likelihood vector.
2536
2537 entropy_type: Which type of entropy to use (str). Used if confidence_measure_cfg.name is set to `entropy`.
2538 Supported values:
2540                 - 'gibbs' for the (standard) Gibbs entropy. If the alpha (α) is provided,
2541                     the formula is the following: H_α = -sum_i((p^α_i)*log(p^α_i)).
2542                     Note that for this entropy, the alpha should comply with the following inequality:
2543                     (log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= α <= (1+log(V-1))/log(V-1)
2544                     where V is the model vocabulary size.
2545                 - 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
2546                     Tsallis entropy formula is the following: H_α = 1/(α-1)*(1-sum_i(p^α_i)),
2547                     where α is a parameter. When α == 1, it works like the Gibbs entropy.
2548                     More: https://en.wikipedia.org/wiki/Tsallis_entropy
2549                 - 'renyi' for the Rényi entropy.
2550                     Rényi entropy formula is the following: H_α = 1/(1-α)*log_2(sum_i(p^α_i)),
2551                     where α is a parameter. When α == 1, it works like the Gibbs entropy.
2552                     More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
2553
2554             alpha: Power scale for logsoftmax (α for entropies). Here we restrict it to be > 0.
2554 When the alpha equals one, scaling is not applied to 'max_prob',
2555 and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
2556
2557 entropy_norm: A mapping of the entropy value to the interval [0,1].
2558 Supported values:
2559 - 'lin' for using the linear mapping.
2560 - 'exp' for using exponential mapping with linear shift.
2561 """
2562
2563 def __init__(
2564 self,
2565 decoder_model: rnnt_abstract.AbstractRNNTDecoder,
2566 joint_model: rnnt_abstract.AbstractRNNTJoint,
2567 blank_index: int,
2568 durations: List[int],
2569 max_symbols_per_step: Optional[int] = None,
2570 preserve_alignments: bool = False,
2571 preserve_frame_confidence: bool = False,
2572 confidence_measure_cfg: Optional[DictConfig] = None,
2573 ):
2574 super().__init__(
2575 decoder_model=decoder_model,
2576 joint_model=joint_model,
2577 blank_index=blank_index,
2578 max_symbols_per_step=max_symbols_per_step,
2579 preserve_alignments=preserve_alignments,
2580 preserve_frame_confidence=preserve_frame_confidence,
2581 confidence_measure_cfg=confidence_measure_cfg,
2582 )
2583 self.durations = durations
2584
2585 # Depending on availability of `blank_as_pad` support
2586 # switch between more efficient batch decoding technique
2587 if self.decoder.blank_as_pad:
2588 self._greedy_decode = self._greedy_decode_blank_as_pad
2589 else:
2590 self._greedy_decode = self._greedy_decode_masked
2591
2592 @typecheck()
2593 def forward(
2594 self,
2595 encoder_output: torch.Tensor,
2596 encoded_lengths: torch.Tensor,
2597 partial_hypotheses: Optional[List[rnnt_utils.Hypothesis]] = None,
2598 ):
2599 """Returns a list of hypotheses given an input batch of the encoder hidden embedding.
2600 Output token is generated auto-regressively.
2601 Args:
2602 encoder_output: A tensor of size (batch, features, timesteps).
2603 encoded_lengths: list of int representing the length of each sequence
2604 output sequence.
2605 Returns:
2606 packed list containing batch number of sentences (Hypotheses).
2607 """
2608 # Preserve decoder and joint training state
2609 decoder_training_state = self.decoder.training
2610 joint_training_state = self.joint.training
2611
2612 with torch.inference_mode():
2613 # Apply optional preprocessing
2614 encoder_output = encoder_output.transpose(1, 2) # (B, T, D)
2615 logitlen = encoded_lengths
2616
2617 self.decoder.eval()
2618 self.joint.eval()
2619
2620 with self.decoder.as_frozen(), self.joint.as_frozen():
2621 inseq = encoder_output # [B, T, D]
2622 hypotheses = self._greedy_decode(
2623 inseq, logitlen, device=inseq.device, partial_hypotheses=partial_hypotheses
2624 )
2625
2626 # Pack the hypotheses results
2627 packed_result = pack_hypotheses(hypotheses, logitlen)
2628
2629 self.decoder.train(decoder_training_state)
2630 self.joint.train(joint_training_state)
2631
2632 return (packed_result,)
2633
2634 def _greedy_decode_blank_as_pad(
2635 self,
2636 x: torch.Tensor,
2637 out_len: torch.Tensor,
2638 device: torch.device,
2639 partial_hypotheses: Optional[List[rnnt_utils.Hypothesis]] = None,
2640 ):
2641 if partial_hypotheses is not None:
2643             raise NotImplementedError("`partial_hypotheses` support is not implemented")
2643
2644 with torch.inference_mode():
2645 # x: [B, T, D]
2646 # out_len: [B]
2647 # device: torch.device
2648
2649 # Initialize list of Hypothesis
2650 batchsize = x.shape[0]
2651 hypotheses = [
2652 rnnt_utils.Hypothesis(score=0.0, y_sequence=[], timestep=[], dec_state=None) for _ in range(batchsize)
2653 ]
2654
2655 # Initialize Hidden state matrix (shared by entire batch)
2656 hidden = None
2657
2659             # If alignments need to be preserved, register a dangling list to hold the values
2659 if self.preserve_alignments:
2660 # alignments is a 3-dimensional dangling list representing B x T x U
2661 for hyp in hypotheses:
2662 hyp.alignments = [[]]
2663
2665             # If confidence scores need to be preserved, register a dangling list to hold the values
2665 if self.preserve_frame_confidence:
2666 # frame_confidence is a 3-dimensional dangling list representing B x T x U
2667 for hyp in hypotheses:
2668 hyp.frame_confidence = [[]]
2669
2670 # Last Label buffer + Last Label without blank buffer
2671 # batch level equivalent of the last_label
2672 last_label = torch.full([batchsize, 1], fill_value=self._blank_index, dtype=torch.long, device=device)
2673
2674 # Mask buffers
2675 blank_mask = torch.full([batchsize], fill_value=0, dtype=torch.bool, device=device)
2676
2677 # Get max sequence length
2678 max_out_len = out_len.max()
2679
2680 # skip means the number of frames the next decoding step should "jump" to. When skip == 1
2681 # it means the next decoding step will just use the next input frame.
2682 skip = 1
2683 for time_idx in range(max_out_len):
2684 if skip > 1: # if skip > 1 at the current step, we decrement it and skip the current frame.
2685 skip -= 1
2686 continue
2687 f = x.narrow(dim=1, start=time_idx, length=1) # [B, 1, D]
2688
2689 # need_to_stay is a boolean indicates whether the next decoding step should remain in the same frame.
2690 need_to_stay = True
2691 symbols_added = 0
2692
2693 # Reset blank mask
2694 blank_mask.mul_(False)
2695
2696 # Update blank mask with time mask
2697 # Batch: [B, T, D], but Bi may have seq len < max(seq_lens_in_batch)
2699             # Forcibly mask with "blank" tokens, for all samples where the current time step T >= seq_len
2699 blank_mask = time_idx >= out_len
2700
2701 # Start inner loop
2702 while need_to_stay and (self.max_symbols is None or symbols_added < self.max_symbols):
2703 # Batch prediction and joint network steps
2704 # If very first prediction step, submit SOS tag (blank) to pred_step.
2705 # This feeds a zero tensor as input to AbstractRNNTDecoder to prime the state
2706 if time_idx == 0 and symbols_added == 0 and hidden is None:
2707 g, hidden_prime = self._pred_step(self._SOS, hidden, batch_size=batchsize)
2708 else:
2709 # Perform batch step prediction of decoder, getting new states and scores ("g")
2710 g, hidden_prime = self._pred_step(last_label, hidden, batch_size=batchsize)
2711
2713                     # Batched joint step - Output = [B, V + 1 + num-durations]
2714                     # Note: log_normalize must not be True here since the joint output is a concatenation of token logits and duration logits,
2715                     # and they need to be normalized independently.
2715 joined = self._joint_step(f, g, log_normalize=None)
2716 logp = joined[:, 0, 0, : -len(self.durations)]
2717 duration_logp = joined[:, 0, 0, -len(self.durations) :]
2718
2719 if logp.dtype != torch.float32:
2720 logp = logp.float()
2721 duration_logp = duration_logp.float()
2722
2723 # get the max for both token and duration predictions.
2724 v, k = logp.max(1)
2725 dv, dk = duration_logp.max(1)
2726
2728                     # here we set the skip value to be the minimum of all predicted durations, hence the "torch.min(dk)" call here.
2728 # Please refer to Section 5.2 of our paper https://arxiv.org/pdf/2304.06795.pdf for explanation of this.
2729 skip = self.durations[int(torch.min(dk))]
2730
2731 # this is a special case: if all batches emit blanks, we require that skip be at least 1
2732 # so we don't loop forever at the current frame.
2733 if blank_mask.all():
2734 if skip == 0:
2735 skip = 1
2736
2737 need_to_stay = skip == 0
2738 del g
2739
2740 # Update blank mask with current predicted blanks
2741 # This is accumulating blanks over all time steps T and all target steps min(max_symbols, U)
2742 k_is_blank = k == self._blank_index
2743 blank_mask.bitwise_or_(k_is_blank)
2744
2745 del k_is_blank
2746 del logp, duration_logp
2747
2748 # If all samples predict / have predicted prior blanks, exit loop early
2749 # This is equivalent to if single sample predicted k
2750 if not blank_mask.all():
2751 # Collect batch indices where blanks occurred now/past
2752 blank_indices = (blank_mask == 1).nonzero(as_tuple=False)
2753
2754 # Recover prior state for all samples which predicted blank now/past
2755 if hidden is not None:
2756 hidden_prime = self.decoder.batch_copy_states(hidden_prime, hidden, blank_indices)
2757
2758 elif len(blank_indices) > 0 and hidden is None:
2759 # Reset state if there were some blank and other non-blank predictions in batch
2760 # Original state is filled with zeros so we just multiply
2761 # LSTM has 2 states
2762 hidden_prime = self.decoder.batch_copy_states(hidden_prime, None, blank_indices, value=0.0)
2763
2764 # Recover prior predicted label for all samples which predicted blank now/past
2765 k[blank_indices] = last_label[blank_indices, 0]
2766
2767 # Update new label and hidden state for next iteration
2768 last_label = k.clone().view(-1, 1)
2769 hidden = hidden_prime
2770
2771 # Update predicted labels, accounting for time mask
2772 # If blank was predicted even once, now or in the past,
2773 # Force the current predicted label to also be blank
2775                     # This ensures that blanks propagate across all timesteps
2776                     # once they have occurred (normally the stopping condition of the sample-level loop).
2776 for kidx, ki in enumerate(k):
2777 if blank_mask[kidx] == 0:
2778 hypotheses[kidx].y_sequence.append(ki)
2779 hypotheses[kidx].timestep.append(time_idx)
2780 hypotheses[kidx].score += float(v[kidx])
2781
2782 symbols_added += 1
2783
2784 # Remove trailing empty list of alignments at T_{am-len} x Uj
2785 if self.preserve_alignments:
2786 for batch_idx in range(batchsize):
2787 if len(hypotheses[batch_idx].alignments[-1]) == 0:
2788 del hypotheses[batch_idx].alignments[-1]
2789
2790 # Remove trailing empty list of confidence scores at T_{am-len} x Uj
2791 if self.preserve_frame_confidence:
2792 for batch_idx in range(batchsize):
2793 if len(hypotheses[batch_idx].frame_confidence[-1]) == 0:
2794 del hypotheses[batch_idx].frame_confidence[-1]
2795
2796 # Preserve states
2797 for batch_idx in range(batchsize):
2798 hypotheses[batch_idx].dec_state = self.decoder.batch_select_state(hidden, batch_idx)
2799
2800 return hypotheses
2801
2802 def _greedy_decode_masked(
2803 self,
2804 x: torch.Tensor,
2805 out_len: torch.Tensor,
2806 device: torch.device,
2807 partial_hypotheses: Optional[List[rnnt_utils.Hypothesis]] = None,
2808 ):
2809 raise NotImplementedError("masked greedy-batched decode is not supported for TDT models.")
2810
[end of nemo/collections/asr/parts/submodules/rnnt_greedy_decoding.py]
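For orientation, the frame-skipping control flow that `GreedyTDTInfer._greedy_decode` implements above (jointly predict a token and a duration, advance the time index by the predicted duration, and guard against a blank emitted with duration 0) can be sketched outside of NeMo as follows. This is a minimal illustration of the loop shape only: `tdt_greedy_skip` and its `(token, duration)` input format are hypothetical stand-ins for the model's predictions, not NeMo's API.

```python
def tdt_greedy_skip(predictions, num_frames, max_symbols=10):
    """Sketch of TDT greedy decoding control flow (not NeMo's implementation).

    predictions: dict mapping time_idx -> list of (token, duration) pairs,
        consumed in order while the decoder stays on that frame. A missing
        frame defaults to (blank, 1).
    Returns (emitted tokens excluding blanks, list of visited frame indices).
    """
    BLANK = -1
    emitted, visited = [], []
    time_idx = 0
    while time_idx < num_frames:
        visited.append(time_idx)
        symbols_added = 0
        need_loop = True
        step_iter = iter(predictions.get(time_idx, [(BLANK, 1)]))
        while need_loop and symbols_added < max_symbols:
            token, skip = next(step_iter, (BLANK, 1))
            if token == BLANK and skip == 0:
                skip = 1  # guard: blank with duration 0 would loop forever
            if token != BLANK:
                emitted.append(token)
            symbols_added += 1
            time_idx += skip          # advance by the predicted duration
            need_loop = skip == 0     # stay on the frame only when duration 0
        if symbols_added == max_symbols:
            time_idx += 1             # force progress when the symbol cap is hit
    return emitted, visited
```

Note how a non-blank token with duration 0 keeps the decoder on the same acoustic frame (emitting multiple symbols per timestep), while any positive duration skips ahead, which is what makes TDT decoding faster than frame-by-frame RNNT.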
[start of nemo/collections/asr/parts/utils/asr_confidence_utils.py]
1 # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import math
16 from abc import ABC, abstractmethod
17 from dataclasses import dataclass
18 from functools import partial
19 from typing import List, Optional
20
21 import torch
22 from omegaconf import DictConfig, OmegaConf
23
24 from nemo.collections.asr.parts.utils.rnnt_utils import Hypothesis
25 from nemo.utils import logging
26
27
28 class ConfidenceMeasureConstants:
29 NAMES = ("max_prob", "entropy")
30 ENTROPY_TYPES = ("gibbs", "tsallis", "renyi")
31 ENTROPY_NORMS = ("lin", "exp")
32
33 @classmethod
34 def print(cls):
35 return (
36 cls.__name__
37 + ": "
38 + str({"NAMES": cls.NAMES, "ENTROPY_TYPES": cls.ENTROPY_TYPES, "ENTROPY_NORMS": cls.ENTROPY_NORMS})
39 )
40
41
42 class ConfidenceConstants:
43 AGGREGATIONS = ("mean", "min", "max", "prod")
44
45 @classmethod
46 def print(cls):
47 return cls.__name__ + ": " + str({"AGGREGATIONS": cls.AGGREGATIONS})
48
49
50 @dataclass
51 class ConfidenceMeasureConfig:
52 """A Config which contains the measure name and settings to compute per-frame confidence scores.
53
54 Args:
55 name: The measure name (str).
56 Supported values:
57 - 'max_prob' for using the maximum token probability as a confidence.
58 - 'entropy' for using a normalized entropy of a log-likelihood vector.
59
60 entropy_type: Which type of entropy to use (str).
61 Used if confidence_measure_cfg.name is set to `entropy`.
62 Supported values:
63                 - 'gibbs' for the (standard) Gibbs entropy. If the alpha (α) is provided,
64                     the formula is the following: H_α = -sum_i((p^α_i)*log(p^α_i)).
65                     Note that for this entropy, the alpha should comply with the following inequality:
66                     (log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= α <= (1+log(V-1))/log(V-1)
67                     where V is the model vocabulary size.
68                 - 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
69                     Tsallis entropy formula is the following: H_α = 1/(α-1)*(1-sum_i(p^α_i)),
70                     where α is a parameter. When α == 1, it works like the Gibbs entropy.
71                     More: https://en.wikipedia.org/wiki/Tsallis_entropy
72                 - 'renyi' for the Rényi entropy.
73                     Rényi entropy formula is the following: H_α = 1/(1-α)*log_2(sum_i(p^α_i)),
74                     where α is a parameter. When α == 1, it works like the Gibbs entropy.
75                     More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
76
77             alpha: Power scale for logsoftmax (α for entropies). Here we restrict it to be > 0.
78 When the alpha equals one, scaling is not applied to 'max_prob',
79 and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
80
81 entropy_norm: A mapping of the entropy value to the interval [0,1].
82 Supported values:
83 - 'lin' for using the linear mapping.
84 - 'exp' for using exponential mapping with linear shift.
85 """
86
87 name: str = "entropy"
88 entropy_type: str = "tsallis"
89 alpha: float = 0.33
90 entropy_norm: str = "exp"
91 temperature: str = "DEPRECATED"
92
93 def __post_init__(self):
94 if self.temperature != "DEPRECATED":
95 logging.warning(
96 "`temperature` is deprecated and will be removed in the future. Please use `alpha` instead."
97 )
98
99 # TODO (alaptev): delete the following two lines sometime in the future
100 logging.warning("Re-writing `alpha` with the value of `temperature`.")
101 # self.temperature has type str
102 self.alpha = float(self.temperature)
103 self.temperature = "DEPRECATED"
104 if self.name not in ConfidenceMeasureConstants.NAMES:
105 raise ValueError(
106 f"`name` must be one of the following: "
107 f"{'`' + '`, `'.join(ConfidenceMeasureConstants.NAMES) + '`'}. Provided: `{self.name}`"
108 )
109 if self.entropy_type not in ConfidenceMeasureConstants.ENTROPY_TYPES:
110 raise ValueError(
111 f"`entropy_type` must be one of the following: "
112 f"{'`' + '`, `'.join(ConfidenceMeasureConstants.ENTROPY_TYPES) + '`'}. Provided: `{self.entropy_type}`"
113 )
114 if self.alpha <= 0.0:
115 raise ValueError(f"`alpha` must be > 0. Provided: {self.alpha}")
116 if self.entropy_norm not in ConfidenceMeasureConstants.ENTROPY_NORMS:
117 raise ValueError(
118 f"`entropy_norm` must be one of the following: "
119 f"{'`' + '`, `'.join(ConfidenceMeasureConstants.ENTROPY_NORMS) + '`'}. Provided: `{self.entropy_norm}`"
120 )
121
122
123 @dataclass
124 class ConfidenceConfig:
125 """A config which contains the following key-value pairs related to confidence scores.
126
127 Args:
128 preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores
129 generated during decoding. When set to true, the Hypothesis will contain
130 the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of floats.
131 preserve_token_confidence: Bool flag which preserves the history of per-token confidence scores
132 generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
133 the non-null value for `token_confidence` in it. Here, `token_confidence` is a List of floats.
134
135 The length of the list corresponds to the number of recognized tokens.
136 preserve_word_confidence: Bool flag which preserves the history of per-word confidence scores
137 generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
138 the non-null value for `word_confidence` in it. Here, `word_confidence` is a List of floats.
139
140 The length of the list corresponds to the number of recognized words.
141 exclude_blank: Bool flag indicating that blank token confidence scores are to be excluded
142 from the `token_confidence`.
143 aggregation: Which aggregation type to use for collapsing per-token confidence into per-word confidence.
144 Valid options are `mean`, `min`, `max`, `prod`.
145 measure_cfg: A dict-like object which contains the measure name and settings to compute per-frame
146 confidence scores.
147
148 name: The measure name (str).
149 Supported values:
150 - 'max_prob' for using the maximum token probability as a confidence.
151 - 'entropy' for using a normalized entropy of a log-likelihood vector.
152
153 entropy_type: Which type of entropy to use (str). Used if confidence_measure_cfg.name is set to `entropy`.
154 Supported values:
155                 - 'gibbs' for the (standard) Gibbs entropy. If the alpha (α) is provided,
156                     the formula is the following: H_α = -sum_i((p^α_i)*log(p^α_i)).
157                     Note that for this entropy, the alpha should comply with the following inequality:
158                     (log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= α <= (1+log(V-1))/log(V-1)
159                     where V is the model vocabulary size.
160                 - 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
161                     Tsallis entropy formula is the following: H_α = 1/(α-1)*(1-sum_i(p^α_i)),
162                     where α is a parameter. When α == 1, it works like the Gibbs entropy.
163                     More: https://en.wikipedia.org/wiki/Tsallis_entropy
164                 - 'renyi' for the Rényi entropy.
165                     Rényi entropy formula is the following: H_α = 1/(1-α)*log_2(sum_i(p^α_i)),
166                     where α is a parameter. When α == 1, it works like the Gibbs entropy.
167                     More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
168
169             alpha: Power scale for logsoftmax (α for entropies). Here we restrict it to be > 0.
170 When the alpha equals one, scaling is not applied to 'max_prob',
171 and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
172
173 entropy_norm: A mapping of the entropy value to the interval [0,1].
174 Supported values:
175 - 'lin' for using the linear mapping.
176 - 'exp' for using exponential mapping with linear shift.
177 """
178
179 preserve_frame_confidence: bool = False
180 preserve_token_confidence: bool = False
181 preserve_word_confidence: bool = False
182 exclude_blank: bool = True
183 aggregation: str = "min"
184 measure_cfg: ConfidenceMeasureConfig = ConfidenceMeasureConfig()
185 method_cfg: str = "DEPRECATED"
186
187 def __post_init__(self):
188 # OmegaConf.structured ensures that post_init check is always executed
189 self.measure_cfg = OmegaConf.structured(
190 self.measure_cfg
191 if isinstance(self.measure_cfg, ConfidenceMeasureConfig)
192 else ConfidenceMeasureConfig(**self.measure_cfg)
193 )
194 if self.method_cfg != "DEPRECATED":
195 logging.warning(
196 "`method_cfg` is deprecated and will be removed in the future. Please use `measure_cfg` instead."
197 )
198
199 # TODO (alaptev): delete the following two lines sometime in the future
200 logging.warning("Re-writing `measure_cfg` with the value of `method_cfg`.")
201 # OmegaConf.structured ensures that post_init check is always executed
202 self.measure_cfg = OmegaConf.structured(
203 self.method_cfg
204 if isinstance(self.method_cfg, ConfidenceMeasureConfig)
205 else ConfidenceMeasureConfig(**self.method_cfg)
206 )
207 self.method_cfg = "DEPRECATED"
208 if self.aggregation not in ConfidenceConstants.AGGREGATIONS:
209 raise ValueError(
210 f"`aggregation` has to be one of the following: "
211 f"{'`' + '`, `'.join(ConfidenceConstants.AGGREGATIONS) + '`'}. Provided: `{self.aggregation}`"
212 )
213
214
215 def get_confidence_measure_bank():
216 """Generate a dictionary with confidence measure functionals.
217
218 Supported confidence measures:
219 max_prob: normalized maximum probability
220 entropy_gibbs_lin: Gibbs entropy with linear normalization
221 entropy_gibbs_exp: Gibbs entropy with exponential normalization
222 entropy_tsallis_lin: Tsallis entropy with linear normalization
223 entropy_tsallis_exp: Tsallis entropy with exponential normalization
224 entropy_renyi_lin: Rényi entropy with linear normalization
225 entropy_renyi_exp: Rényi entropy with exponential normalization
226
227 Returns:
228 dictionary with lambda functions.
229 """
230 # helper functions
231 # Gibbs entropy is implemented without alpha
232 neg_entropy_gibbs = lambda x: (x.exp() * x).sum(-1)
233 neg_entropy_alpha = lambda x, t: (x * t).exp().sum(-1)
234 neg_entropy_alpha_gibbs = lambda x, t: ((x * t).exp() * x).sum(-1)
235 # too big for a lambda
236 def entropy_tsallis_exp(x, v, t):
237 exp_neg_max_ent = math.exp((1 - math.pow(v, 1 - t)) / (1 - t))
238 return (((1 - neg_entropy_alpha(x, t)) / (1 - t)).exp() - exp_neg_max_ent) / (1 - exp_neg_max_ent)
239
240 def entropy_gibbs_exp(x, v, t):
241 exp_neg_max_ent = math.pow(v, -t * math.pow(v, 1 - t))
242 return ((neg_entropy_alpha_gibbs(x, t) * t).exp() - exp_neg_max_ent) / (1 - exp_neg_max_ent)
243
244 # use Gibbs entropies for Tsallis and Rényi with t == 1.0
245 entropy_gibbs_lin_baseline = lambda x, v: 1 + neg_entropy_gibbs(x) / math.log(v)
246 entropy_gibbs_exp_baseline = lambda x, v: (neg_entropy_gibbs(x).exp() * v - 1) / (v - 1)
247 # fill the measure bank
248 confidence_measure_bank = {}
249 # Maximum probability measure is implemented without alpha
250 confidence_measure_bank["max_prob"] = (
251 lambda x, v, t: (x.max(dim=-1)[0].exp() * v - 1) / (v - 1)
252 if t == 1.0
253 else ((x.max(dim=-1)[0] * t).exp() * math.pow(v, t) - 1) / (math.pow(v, t) - 1)
254 )
255 confidence_measure_bank["entropy_gibbs_lin"] = (
256 lambda x, v, t: entropy_gibbs_lin_baseline(x, v)
257 if t == 1.0
258 else 1 + neg_entropy_alpha_gibbs(x, t) / math.log(v) / math.pow(v, 1 - t)
259 )
260 confidence_measure_bank["entropy_gibbs_exp"] = (
261 lambda x, v, t: entropy_gibbs_exp_baseline(x, v) if t == 1.0 else entropy_gibbs_exp(x, v, t)
262 )
263 confidence_measure_bank["entropy_tsallis_lin"] = (
264 lambda x, v, t: entropy_gibbs_lin_baseline(x, v)
265 if t == 1.0
266 else 1 + (1 - neg_entropy_alpha(x, t)) / (math.pow(v, 1 - t) - 1)
267 )
268 confidence_measure_bank["entropy_tsallis_exp"] = (
269 lambda x, v, t: entropy_gibbs_exp_baseline(x, v) if t == 1.0 else entropy_tsallis_exp(x, v, t)
270 )
271 confidence_measure_bank["entropy_renyi_lin"] = (
272 lambda x, v, t: entropy_gibbs_lin_baseline(x, v)
273 if t == 1.0
274 else 1 + neg_entropy_alpha(x, t).log2() / (t - 1) / math.log(v, 2)
275 )
276 confidence_measure_bank["entropy_renyi_exp"] = (
277 lambda x, v, t: entropy_gibbs_exp_baseline(x, v)
278 if t == 1.0
279 else (neg_entropy_alpha(x, t).pow(1 / (t - 1)) * v - 1) / (v - 1)
280 )
281 return confidence_measure_bank
282
283
284 def get_confidence_aggregation_bank():
285 """Generate a dictionary with confidence aggregation functions.
286
287 Supported aggregation functions:
288 min: minimum
289 max: maximum
290 mean: arithmetic mean
291 prod: product
292
293 Returns:
294 dictionary with functions.
295 """
296 confidence_aggregation_bank = {"mean": lambda x: sum(x) / len(x), "min": min, "max": max}
297 # python 3.7 and earlier do not have math.prod
298 if hasattr(math, "prod"):
299 confidence_aggregation_bank["prod"] = math.prod
300 else:
301 import operator
302 from functools import reduce
303
304 confidence_aggregation_bank["prod"] = lambda x: reduce(operator.mul, x, 1)
305 return confidence_aggregation_bank
306
307
308 class ConfidenceMeasureMixin(ABC):
309 """Confidence Measure Mixin class.
310
311 It initializes per-frame confidence measure.
312 """
313
314 def _init_confidence_measure(self, confidence_measure_cfg: Optional[DictConfig] = None):
315 """Initialize per-frame confidence measure from config.
316 """
317 # OmegaConf.structured ensures that post_init check is always executed
318 confidence_measure_cfg = OmegaConf.structured(
319 ConfidenceMeasureConfig()
320 if confidence_measure_cfg is None
321 else ConfidenceMeasureConfig(**confidence_measure_cfg)
322 )
323
324 # set confidence calculation measure
325 # we suppose that self.blank_id == len(vocabulary)
326 self.num_tokens = (self.blank_id if hasattr(self, "blank_id") else self._blank_index) + 1
327 self.alpha = confidence_measure_cfg.alpha
328
329 # init confidence measure bank
330 self.confidence_measure_bank = get_confidence_measure_bank()
331
332 measure = None
333 # construct measure_name
334 measure_name = ""
335 if confidence_measure_cfg.name == "max_prob":
336 measure_name = "max_prob"
337 elif confidence_measure_cfg.name == "entropy":
338 measure_name = '_'.join(
339 [confidence_measure_cfg.name, confidence_measure_cfg.entropy_type, confidence_measure_cfg.entropy_norm]
340 )
341 else:
342 raise ValueError(f"Unsupported `confidence_measure_cfg.name`: `{confidence_measure_cfg.name}`")
343 if measure_name not in self.confidence_measure_bank:
344 raise ValueError(f"Unsupported measure setup: `{measure_name}`")
345 measure = partial(self.confidence_measure_bank[measure_name], v=self.num_tokens, t=self.alpha)
346 self._get_confidence = lambda x: measure(torch.nan_to_num(x)).tolist()
347
348
349 class ConfidenceMixin(ABC):
350 """Confidence Mixin class.
351
352 It is responsible for confidence estimation method initialization and high-level confidence score calculation.
353 """
354
355 def _init_confidence(self, confidence_cfg: Optional[DictConfig] = None):
356 """Initialize confidence-related fields and confidence aggregation function from config.
357 """
358 # OmegaConf.structured ensures that post_init check is always executed
359 confidence_cfg = OmegaConf.structured(
360 ConfidenceConfig() if confidence_cfg is None else ConfidenceConfig(**confidence_cfg)
361 )
362 self.confidence_measure_cfg = confidence_cfg.measure_cfg
363
364 # extract the config
365 self.preserve_word_confidence = confidence_cfg.get('preserve_word_confidence', False)
366 # set preserve_frame_confidence and preserve_token_confidence to True
367 # if preserve_word_confidence is True
368 self.preserve_token_confidence = (
369 confidence_cfg.get('preserve_token_confidence', False) | self.preserve_word_confidence
370 )
371 # set preserve_frame_confidence to True if preserve_token_confidence is True
372 self.preserve_frame_confidence = (
373 confidence_cfg.get('preserve_frame_confidence', False) | self.preserve_token_confidence
374 )
375 self.exclude_blank_from_confidence = confidence_cfg.get('exclude_blank', True)
376 self.word_confidence_aggregation = confidence_cfg.get('aggregation', "min")
377
378 # define aggregation functions
379 self.confidence_aggregation_bank = get_confidence_aggregation_bank()
380 self._aggregate_confidence = self.confidence_aggregation_bank[self.word_confidence_aggregation]
381
382 # Update preserve frame confidence
383 if self.preserve_frame_confidence is False:
384 if self.cfg.strategy in ['greedy', 'greedy_batch']:
385 self.preserve_frame_confidence = self.cfg.greedy.get('preserve_frame_confidence', False)
386 # OmegaConf.structured ensures that post_init check is always executed
387 confidence_measure_cfg = OmegaConf.structured(self.cfg.greedy).get('confidence_measure_cfg', None)
388 self.confidence_measure_cfg = (
389 OmegaConf.structured(ConfidenceMeasureConfig())
390 if confidence_measure_cfg is None
391 else OmegaConf.structured(ConfidenceMeasureConfig(**confidence_measure_cfg))
392 )
393
394 @abstractmethod
395 def compute_confidence(self, hypotheses_list: List[Hypothesis]) -> List[Hypothesis]:
396 """Computes high-level (per-token and/or per-word) confidence scores for a list of hypotheses.
397 Assumes that `frame_confidence` is present in the hypotheses.
398
399 Args:
400 hypotheses_list: List of Hypothesis.
401
402 Returns:
403 A list of hypotheses with high-level confidence scores.
404 """
405 raise NotImplementedError()
406
407 @abstractmethod
408 def _aggregate_token_confidence(self, hypothesis: Hypothesis) -> List[float]:
409 """Implemented by subclass in order to aggregate token confidence to a word-level confidence.
410
411 Args:
412 hypothesis: Hypothesis
413
414 Returns:
415 A list of word-level confidence scores.
416 """
417 raise NotImplementedError()
418
419 def _aggregate_token_confidence_chars(self, words: List[str], token_confidence: List[float]) -> List[float]:
420 """Implementation of token confidence aggregation for character-based models.
421
422 Args:
423 words: List of words of a hypothesis.
424 token_confidence: List of token-level confidence scores of a hypothesis.
425
426 Returns:
427 A list of word-level confidence scores.
428 """
429 word_confidence = []
430 i = 0
431 for word in words:
432 word_len = len(word)
433 word_confidence.append(self._aggregate_confidence(token_confidence[i : i + word_len]))
434 # we assume that there is exactly one space token between words and exclude it from word confidence
435 i += word_len + 1
436 return word_confidence
437
438 def _aggregate_token_confidence_subwords_sentencepiece(
439 self, words: List[str], token_confidence: List[float], token_ids: List[int]
440 ) -> List[float]:
441 """Implementation of token confidence aggregation for subword-based models.
442
443 **Note**: Only supports Sentencepiece based tokenizers !
444
445 Args:
446 words: List of words of a hypothesis.
447 token_confidence: List of token-level confidence scores of a hypothesis.
448 token_ids: List of token ids of a hypothesis.
449
450 Returns:
451 A list of word-level confidence scores.
452 """
453 word_confidence = []
454 # run only if there are final words
455 if len(words) > 0:
456 j = 0
457 prev_unk = False
458 prev_underline = False
459 for i, token_id in enumerate(token_ids):
460 token = self.decode_ids_to_tokens([int(token_id)])[0]
461 token_text = self.decode_tokens_to_str([int(token_id)])
462 # treat `<unk>` as a separate word regardless of the next token
463 # to match the result of `tokenizer.ids_to_text`
464 if (token != token_text or prev_unk) and i > j:
465 # do not add confidence for `▁` if the current token starts with `▁`
466 # to match the result of `tokenizer.ids_to_text`
467 if not prev_underline:
468 word_confidence.append(self._aggregate_confidence(token_confidence[j:i]))
469 j = i
470 prev_unk = token == '<unk>'
471 prev_underline = token == '▁'
472 if not prev_underline:
473 word_confidence.append(self._aggregate_confidence(token_confidence[j : len(token_ids)]))
474 if len(words) != len(word_confidence):
475 raise RuntimeError(
476 f"""Something went wrong with word-level confidence aggregation.\n
477 Please check these values for debugging:\n
478 len(words): {len(words)},\n
479 len(word_confidence): {len(word_confidence)},\n
480 recognized text: `{' '.join(words)}`"""
481 )
482 return word_confidence
483
[end of nemo/collections/asr/parts/utils/asr_confidence_utils.py]
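As a sanity check on the exponentially normalized Tsallis-entropy measure above, here is a minimal stdlib-only sketch that mirrors `entropy_tsallis_exp` on plain lists instead of torch tensors (the function name `tsallis_exp_confidence` is illustrative, not part of the module). It should map a uniform distribution (maximum entropy) to a confidence near 0 and a near-one-hot distribution to a confidence near 1.

```python
import math

def tsallis_exp_confidence(log_probs, v, t):
    """Exponentially normalized Tsallis-entropy confidence for one frame.

    log_probs: list of log-probabilities over a vocabulary of size v.
    t: the alpha parameter (t != 1.0; t == 1.0 falls back to Gibbs in the real bank).
    """
    # sum_i p_i^t, computed from log-probabilities
    neg_entropy_alpha = sum(math.exp(lp * t) for lp in log_probs)
    # exp of the negated maximum entropy, used as the normalization shift
    exp_neg_max_ent = math.exp((1 - math.pow(v, 1 - t)) / (1 - t))
    return (math.exp((1 - neg_entropy_alpha) / (1 - t)) - exp_neg_max_ent) / (1 - exp_neg_max_ent)

v = 8
# A uniform distribution has maximum entropy -> confidence close to 0
uniform = [math.log(1.0 / v)] * v
assert abs(tsallis_exp_confidence(uniform, v, t=0.5)) < 1e-6

# A near-one-hot (certain) distribution has near-zero entropy -> confidence close to 1;
# a tiny epsilon keeps log() defined for the non-argmax entries
eps = 1e-12
one_hot = [math.log(1.0 - (v - 1) * eps)] + [math.log(eps)] * (v - 1)
assert tsallis_exp_confidence(one_hot, v, t=0.5) > 0.99
```

The two assertions correspond to the normalization goal of mapping entropy values onto the interval [0, 1].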
[start of nemo/collections/common/parts/adapter_modules.py]
1 # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from dataclasses import dataclass, is_dataclass
16 from typing import Any, Optional
17
18 from hydra.utils import instantiate
19 from omegaconf import OmegaConf
20 from torch import nn as nn
21
22 from nemo.collections.common.parts.utils import activation_registry
23 from nemo.core.classes.mixins import access_mixins, adapter_mixin_strategies
24
25
26 class AdapterModuleUtil(access_mixins.AccessMixin):
27 """
28 Base class of Adapter Modules, providing common functionality to all Adapter Modules.
29 """
30
31 def setup_adapter_strategy(self, adapter_strategy: Optional[adapter_mixin_strategies.AbstractAdapterStrategy]):
32 """
33 Setup adapter strategy of this class, enabling dynamic change in the way the adapter output is
34 merged with the input.
35
36 When called successfully, will assign the variable `adapter_strategy` to the module.
37
38 Args:
39 adapter_strategy: Can be a None or an implementation of AbstractAdapterStrategy.
40 """
41 # set default adapter strategy
42 if adapter_strategy is None:
43 adapter_strategy = self.get_default_strategy_config()
44
45 if is_dataclass(adapter_strategy):
46 adapter_strategy = OmegaConf.structured(adapter_strategy)
47 OmegaConf.set_struct(adapter_strategy, False)
48
49 # The config must have the `_target_` field pointing to the actual adapter strategy class
50 # which will load that strategy dynamically to this module.
51 if isinstance(adapter_strategy, dict) or OmegaConf.is_config(adapter_strategy):
52 self.adapter_strategy = instantiate(adapter_strategy)
53 elif isinstance(adapter_strategy, adapter_mixin_strategies.AbstractAdapterStrategy):
54 self.adapter_strategy = adapter_strategy
55 else:
56 raise AttributeError(f'`adapter_strategy` provided is invalid : {adapter_strategy}')
57
58 def get_default_strategy_config(self) -> 'dataclass':
59 """
60 Returns a default adapter module strategy.
61 """
62 return adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
63
64 def adapter_unfreeze(self,):
65 """
66 Sets the requires grad for all parameters in the adapter to True.
67 This method should be overridden for any custom unfreeze behavior that is required.
68 For example, if not all params of the adapter should be unfrozen.
69 """
70 for param in self.parameters():
71 param.requires_grad_(True)
72
73
74 class LinearAdapter(nn.Module, AdapterModuleUtil):
75
76 """
77 Simple Linear Feedforward Adapter module with LayerNorm and a single hidden layer with an activation function.
78 Note: The adapter explicitly initializes its final layer with all zeros in order to avoid affecting the
79 original model when all adapters are disabled.
80
81 Args:
82 in_features: Input dimension of the module. Note that for adapters, input_dim == output_dim.
83 dim: Hidden dimension of the feed forward network.
84 activation: Str name for an activation function.
85 norm_position: Str, can be `pre` or `post`. Defaults to `pre`. Determines whether the normalization
86 will occur in the first layer or the last layer. Certain architectures may prefer one over the other.
87 dropout: float value, whether to perform dropout on the output of the last layer of the adapter.
88 adapter_strategy: By default, ResidualAddAdapterStrategyConfig. An adapter composition function object.
89 """
90
91 def __init__(
92 self,
93 in_features: int,
94 dim: int,
95 activation: str = 'swish',
96 norm_position: str = 'pre',
97 dropout: float = 0.0,
98 adapter_strategy: adapter_mixin_strategies.ResidualAddAdapterStrategyConfig = None,
99 ):
100 super().__init__()
101
102 activation = activation_registry[activation]()
103 # If the activation can be executed in place, do so.
104 if hasattr(activation, 'inplace'):
105 activation.inplace = True
106
107 assert norm_position in ['pre', 'post']
108 self.norm_position = norm_position
109
110 if norm_position == 'pre':
111 self.module = nn.Sequential(
112 nn.LayerNorm(in_features),
113 nn.Linear(in_features, dim, bias=False),
114 activation,
115 nn.Linear(dim, in_features, bias=False),
116 )
117
118 elif norm_position == 'post':
119 self.module = nn.Sequential(
120 nn.Linear(in_features, dim, bias=False),
121 activation,
122 nn.Linear(dim, in_features, bias=False),
123 nn.LayerNorm(in_features),
124 )
125
126 if dropout > 0.0:
127 self.dropout = nn.Dropout(dropout)
128 else:
129 self.dropout = None
130
131 # Setup adapter strategy
132 self.setup_adapter_strategy(adapter_strategy)
133
134 # reset parameters
135 self.reset_parameters()
136
137 def reset_parameters(self):
138 # Final layer initializations must be 0
139 if self.norm_position == 'pre':
140 self.module[-1].weight.data *= 0
141
142 elif self.norm_position == 'post':
143 self.module[-1].weight.data *= 0
144 self.module[-1].bias.data *= 0
145
146 def forward(self, x):
147 x = self.module(x)
148
149 # Add dropout if available
150 if self.dropout is not None:
151 x = self.dropout(x)
152
153 return x
154
155
156 @dataclass
157 class LinearAdapterConfig:
158 in_features: int
159 dim: int
160 activation: str = 'swish'
161 norm_position: str = 'pre'
162 dropout: float = 0.0
163 adapter_strategy: Optional[Any] = adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
164 _target_: str = "{0}.{1}".format(LinearAdapter.__module__, LinearAdapter.__name__)
165
[end of nemo/collections/common/parts/adapter_modules.py]
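The zero initialization in `reset_parameters` makes the adapter an identity map under the default residual-add strategy at the start of training, so enabling adapters cannot perturb a pretrained model's outputs. A hypothetical numbers-only sketch (no torch; `matvec` and `residual_adapter` are made-up helpers for illustration, and the activation is omitted for brevity):

```python
def matvec(w, x):
    # plain matrix-vector product on nested lists
    return [sum(wij * xj for wij, xj in zip(row, x)) for row in w]

def adapter_forward(x, w_in, w_out):
    # down-projection followed by up-projection (activation omitted)
    hidden = matvec(w_in, x)
    return matvec(w_out, hidden)

def residual_adapter(x, w_in, w_out):
    # the residual-add strategy: input + adapter(input)
    return [xi + ai for xi, ai in zip(x, adapter_forward(x, w_in, w_out))]

x = [1.0, -2.0, 3.0]
w_in = [[0.5, 0.1, -0.3], [0.2, 0.4, 0.6]]          # 3 -> 2 down-projection
w_out_zero = [[0.0, 0.0], [0.0, 0.0], [0.0, 0.0]]   # 2 -> 3, zero-initialized

# With the final layer zeroed, the residual adapter is exactly the identity
assert residual_adapter(x, w_in, w_out_zero) == x
```

Once training updates `w_out`, the adapter starts contributing a learned delta on top of the frozen backbone output.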
[start of nemo/collections/common/tokenizers/en_ja_tokenizers.py]
1 # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 import re
15 from typing import List
16
17 import ipadic
18 import MeCab
19 from pangu import spacing
20 from sacremoses import MosesDetokenizer, MosesPunctNormalizer, MosesTokenizer
21
22
23 class EnJaProcessor:
24 """
25 Tokenizer, Detokenizer and Normalizer utilities for Japanese & English
26 Args:
27 lang_id: One of ['en', 'ja'].
28 """
29
30 def __init__(self, lang_id: str):
31 self.lang_id = lang_id
32 self.moses_tokenizer = MosesTokenizer(lang=lang_id)
33 self.moses_detokenizer = MosesDetokenizer(lang=lang_id)
34 self.normalizer = MosesPunctNormalizer(
35 lang=lang_id, pre_replace_unicode_punct=True, post_remove_control_chars=True
36 )
37
38 def detokenize(self, tokens: List[str]) -> str:
39 """
40 Detokenizes a list of tokens
41 Args:
42 tokens: list of strings as tokens
43 Returns:
44 detokenized Japanese or English string
45 """
46 return self.moses_detokenizer.detokenize(tokens)
47
48 def tokenize(self, text) -> str:
49 """
50 Tokenizes text using Moses. Returns a string of tokens.
51 """
52 tokens = self.moses_tokenizer.tokenize(text)
53 return ' '.join(tokens)
54
55 def normalize(self, text) -> str:
56 # Normalization doesn't handle Japanese periods correctly;
57 # '。' becomes '.'.
58 if self.lang_id == 'en':
59 return self.normalizer.normalize(text)
60 else:
61 return text
62
63
64 class JaMecabProcessor:
65 """
66 Tokenizer, Detokenizer and Normalizer utilities for Japanese using MeCab
67 """
68
69 def __init__(self):
70 self.mecab_tokenizer = MeCab.Tagger(ipadic.MECAB_ARGS + " -Owakati")
71
72 def detokenize(self, text: List[str]) -> str:
73 RE_WS_IN_FW = re.compile(
74 r'([\u2018\u2019\u201c\u201d\u2e80-\u312f\u3200-\u32ff\u3400-\u4dbf\u4e00-\u9fff\uf900-\ufaff\uff00-\uffef])\s+(?=[\u2018\u2019\u201c\u201d\u2e80-\u312f\u3200-\u32ff\u3400-\u4dbf\u4e00-\u9fff\uf900-\ufaff\uff00-\uffef])'
75 )
76
77 detokenize = lambda s: spacing(RE_WS_IN_FW.sub(r'\1', s)).strip()
78 return detokenize(' '.join(text))
79
80 def tokenize(self, text) -> str:
81 """
82 Tokenizes text using MeCab. Returns a string of tokens.
83 """
84 return self.mecab_tokenizer.parse(text).strip()
85
86 def normalize(self, text) -> str:
87 return text
88
[end of nemo/collections/common/tokenizers/en_ja_tokenizers.py]
[start of nemo/collections/nlp/models/machine_translation/mt_enc_dec_config.py]
1 # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from dataclasses import dataclass
16 from typing import Any, Optional, Tuple
17
18 from omegaconf.omegaconf import MISSING
19
20 from nemo.collections.nlp.data.machine_translation.machine_translation_dataset import TranslationDataConfig
21 from nemo.collections.nlp.models.enc_dec_nlp_model import EncDecNLPModelConfig
22 from nemo.collections.nlp.modules.common.token_classifier import TokenClassifierConfig
23 from nemo.collections.nlp.modules.common.tokenizer_utils import TokenizerConfig
24 from nemo.collections.nlp.modules.common.transformer.transformer import (
25 NeMoTransformerConfig,
26 NeMoTransformerEncoderConfig,
27 )
28 from nemo.collections.nlp.modules.common.transformer.transformer_bottleneck import (
29 NeMoTransformerBottleneckDecoderConfig,
30 NeMoTransformerBottleneckEncoderConfig,
31 )
32 from nemo.core.config.modelPT import OptimConfig, SchedConfig
33
34
35 @dataclass
36 class MTSchedConfig(SchedConfig):
37 name: str = 'InverseSquareRootAnnealing'
38 warmup_ratio: Optional[float] = None
39 last_epoch: int = -1
40
41
42 # TODO: Refactor this dataclass to to support more optimizers (it pins the optimizer to Adam-like optimizers).
43 @dataclass
44 class MTOptimConfig(OptimConfig):
45 name: str = 'adam'
46 lr: float = 1e-3
47 betas: Tuple[float, float] = (0.9, 0.98)
48 weight_decay: float = 0.0
49 sched: Optional[MTSchedConfig] = MTSchedConfig()
50
51
52 @dataclass
53 class MTEncDecModelConfig(EncDecNLPModelConfig):
54 # machine translation configurations
55 num_val_examples: int = 3
56 num_test_examples: int = 3
57 max_generation_delta: int = 10
58 label_smoothing: Optional[float] = 0.0
59 beam_size: int = 4
60 len_pen: float = 0.0
61 src_language: Any = 'en' # Any = str or List[str]
62 tgt_language: Any = 'en' # Any = str or List[str]
63 find_unused_parameters: Optional[bool] = True
64 shared_tokenizer: Optional[bool] = True
65 multilingual: Optional[bool] = False
66 preproc_out_dir: Optional[str] = None
67 validate_input_ids: Optional[bool] = True
68 shared_embeddings: bool = False
69
70 # network architecture configuration
71 encoder_tokenizer: Any = MISSING
72 encoder: Any = MISSING
73
74 decoder_tokenizer: Any = MISSING
75 decoder: Any = MISSING
76
77 head: TokenClassifierConfig = TokenClassifierConfig(log_softmax=True)
78
79 # dataset configurations
80 train_ds: Optional[TranslationDataConfig] = TranslationDataConfig(
81 src_file_name=MISSING,
82 tgt_file_name=MISSING,
83 tokens_in_batch=512,
84 clean=True,
85 shuffle=True,
86 cache_ids=False,
87 use_cache=False,
88 )
89 validation_ds: Optional[TranslationDataConfig] = TranslationDataConfig(
90 src_file_name=MISSING,
91 tgt_file_name=MISSING,
92 tokens_in_batch=512,
93 clean=False,
94 shuffle=False,
95 cache_ids=False,
96 use_cache=False,
97 )
98 test_ds: Optional[TranslationDataConfig] = TranslationDataConfig(
99 src_file_name=MISSING,
100 tgt_file_name=MISSING,
101 tokens_in_batch=512,
102 clean=False,
103 shuffle=False,
104 cache_ids=False,
105 use_cache=False,
106 )
107 optim: Optional[OptimConfig] = MTOptimConfig()
108
109
110 @dataclass
111 class AAYNBaseConfig(MTEncDecModelConfig):
112
113 # Attention is All You Need Base Configuration
114 encoder_tokenizer: TokenizerConfig = TokenizerConfig(library='yttm')
115 decoder_tokenizer: TokenizerConfig = TokenizerConfig(library='yttm')
116
117 encoder: NeMoTransformerEncoderConfig = NeMoTransformerEncoderConfig(
118 library='nemo',
119 model_name=None,
120 pretrained=False,
121 hidden_size=512,
122 inner_size=2048,
123 num_layers=6,
124 num_attention_heads=8,
125 ffn_dropout=0.1,
126 attn_score_dropout=0.1,
127 attn_layer_dropout=0.1,
128 )
129
130 decoder: NeMoTransformerConfig = NeMoTransformerConfig(
131 library='nemo',
132 model_name=None,
133 pretrained=False,
134 hidden_size=512,
135 inner_size=2048,
136 num_layers=6,
137 num_attention_heads=8,
138 ffn_dropout=0.1,
139 attn_score_dropout=0.1,
140 attn_layer_dropout=0.1,
141 )
142
143
144 @dataclass
145 class MTBottleneckModelConfig(AAYNBaseConfig):
146 model_type: str = 'nll'
147 min_logv: float = -6
148 latent_size: int = -1 # -1 will take value of encoder hidden
149 non_recon_warmup_batches: int = 200000
150 recon_per_token: bool = True
151 log_timing: bool = True
152
153 encoder: NeMoTransformerBottleneckEncoderConfig = NeMoTransformerBottleneckEncoderConfig(
154 library='nemo',
155 model_name=None,
156 pretrained=False,
157 hidden_size=512,
158 inner_size=2048,
159 num_layers=6,
160 num_attention_heads=8,
161 ffn_dropout=0.1,
162 attn_score_dropout=0.1,
163 attn_layer_dropout=0.1,
164 arch='seq2seq',
165 hidden_steps=32,
166 hidden_blocks=1,
167 hidden_init_method='params',
168 )
169
170 decoder: NeMoTransformerBottleneckDecoderConfig = NeMoTransformerBottleneckDecoderConfig(
171 library='nemo',
172 model_name=None,
173 pretrained=False,
174 inner_size=2048,
175 num_layers=6,
176 num_attention_heads=8,
177 ffn_dropout=0.1,
178 attn_score_dropout=0.1,
179 attn_layer_dropout=0.1,
180 arch='seq2seq',
181 )
182
[end of nemo/collections/nlp/models/machine_translation/mt_enc_dec_config.py]
[start of nemo/collections/nlp/models/token_classification/punctuation_capitalization_config.py]
1 # Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from dataclasses import dataclass
16 from typing import Any, Dict, Optional
17
18 from omegaconf.omegaconf import MISSING, DictConfig, OmegaConf, open_dict
19
20 from nemo.collections.common.parts.adapter_modules import LinearAdapterConfig
21 from nemo.collections.nlp.data.token_classification.punctuation_capitalization_dataset import (
22 PunctuationCapitalizationEvalDataConfig,
23 PunctuationCapitalizationTrainDataConfig,
24 legacy_data_config_to_new_data_config,
25 )
26 from nemo.core.config import TrainerConfig
27 from nemo.core.config.modelPT import NemoConfig
28 from nemo.utils.exp_manager import ExpManagerConfig
29
30
31 @dataclass
32 class FreezeConfig:
33 is_enabled: bool = False
34 """Freeze audio encoder weight and add Conformer Layers on top of it"""
35 d_model: Optional[int] = 256
36 """`d_model` parameter of ``ConformerLayer``"""
37 d_ff: Optional[int] = 1024
38 """``d_ff`` parameter of ``ConformerLayer``"""
39 num_layers: Optional[int] = 8
40 """``num_layers`` number of ``ConformerLayer`` modules to add on top of audio encoder"""
41
42
43 @dataclass
44 class AdapterConfig:
45 config: Optional[LinearAdapterConfig] = None
46 """Linear adapter config see ``collections.common.parts.LinearAdapterConfig``"""
47 enable: bool = False
48 """Use adapters for audio encoder"""
49
50
51 @dataclass
52 class FusionConfig:
53 num_layers: Optional[int] = 4
54 """Number of layers to use in fusion"""
55 num_attention_heads: Optional[int] = 4
56 """Number of attention heads to use in fusion"""
57 inner_size: Optional[int] = 2048
58 """Fusion inner size"""
59
60
61 @dataclass
62 class AudioEncoderConfig:
63 pretrained_model: str = MISSING
64 """A configuration for restoring pretrained audio encoder"""
65 freeze: Optional[FreezeConfig] = None
66 adapter: Optional[AdapterConfig] = None
67 fusion: Optional[FusionConfig] = None
68
69
70 @dataclass
71 class TokenizerConfig:
72 """A structure and default values of source text tokenizer."""
73
74 vocab_file: Optional[str] = None
75 """A path to vocabulary file which is used in ``'word'``, ``'char'``, and HuggingFace tokenizers"""
76
77 tokenizer_name: str = MISSING
78 """A name of the tokenizer used for tokenization of source sequences. Possible options are ``'sentencepiece'``,
79 ``'word'``, ``'char'``, HuggingFace tokenizers (e.g. ``'bert-base-uncased'``). For more options see function
80 ``nemo.collections.nlp.modules.common.get_tokenizer``. The tokenizer must have properties ``cls_id``, ``pad_id``,
81 ``sep_id``, ``unk_id``."""
82
83 special_tokens: Optional[Dict[str, str]] = None
84 """A dictionary with special tokens passed to constructors of ``'char'``, ``'word'``, ``'sentencepiece'``, and
85 various HuggingFace tokenizers."""
86
87 tokenizer_model: Optional[str] = None
88 """A path to a tokenizer model required for ``'sentencepiece'`` tokenizer."""
89
90
91 @dataclass
92 class LanguageModelConfig:
93 """
94 A structure and default values of language model configuration of punctuation and capitalization model. BERT like
95 HuggingFace models are supported. Provide a valid ``pretrained_model_name`` and, optionally, you may
96 reinitialize model via ``config_file`` or ``config``.
97
98 Alternatively you can initialize the language model using ``lm_checkpoint``.
99
100 This config is a part of :class:`PunctuationCapitalizationModelConfig` config.
101 """
102
103 pretrained_model_name: str = MISSING
104 """A mandatory parameter containing name of HuggingFace pretrained model. For example, ``'bert-base-uncased'``."""
105
106 config_file: Optional[str] = None
107 """A path to a file with HuggingFace model config which is used to reinitialize language model."""
108
109 config: Optional[Dict] = None
110 """A HuggingFace config which is used to reinitialize language model."""
111
112 lm_checkpoint: Optional[str] = None
113 """A path to a ``torch`` checkpoint of a language model."""
114
115
116 @dataclass
117 class HeadConfig:
118 """
119 A structure and default values of configuration of capitalization or punctuation model head. This config defines a
120 multilayer perceptron which is applied to output of a language model. Number of units in the hidden layer is equal
121 to the dimension of the language model.
122
123 This config is a part of :class:`PunctuationCapitalizationModelConfig` config.
124 """
125
126 num_fc_layers: int = 1
127 """A number of hidden layers in a multilayer perceptron."""
128
129 fc_dropout: float = 0.1
130 """A dropout used in an MLP."""
131
132 activation: str = 'relu'
133 """An activation used in hidden layers."""
134
135 use_transformer_init: bool = True
136 """Whether to initialize the weights of the classifier head with the approach that was used for language model
137 initialization."""
138
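As a rough illustration of what ``HeadConfig`` describes, the sketch below (a hypothetical helper, not NeMo code) counts the parameters of an MLP head with ``num_fc_layers`` hidden layers whose width equals the language model dimension, followed by a final projection to the number of classes:

```python
# Hypothetical sketch of the head described by HeadConfig: `num_fc_layers`
# hidden linear layers of width `lm_hidden`, then a class projection.
def head_param_count(lm_hidden: int, num_classes: int, num_fc_layers: int = 1) -> int:
    params = 0
    for _ in range(num_fc_layers):
        params += lm_hidden * lm_hidden + lm_hidden  # hidden linear: weights + bias
    params += lm_hidden * num_classes + num_classes  # final class projection
    return params

print(head_param_count(768, 5, num_fc_layers=1))
```

Activations and dropout add no parameters, so they are omitted from the count.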
139
140 @dataclass
141 class ClassLabelsConfig:
142 """
143 A structure and default values of a mandatory part of config which contains names of files which are saved in .nemo
144 checkpoint. These files can also be used for passing label vocabularies to the model. To use them as label
145 vocabularies, provide the path of the directory that contains these files in the parameter
146 ``model.common_dataset_parameters.label_vocab_dir``. Each line of a labels file
147 contains 1 label. Labels are assigned ids by line number, ``<line number>==<label id>``, starting from ``0``. The
148 label with id ``0`` must be the neutral label, which must be equal to ``model.common_dataset_parameters.pad_label``.
149
150 This config is a part of :class:`~CommonDatasetParametersConfig`.
151 """
152
153 punct_labels_file: str = MISSING
154 """A name of punctuation labels file."""
155
156 capit_labels_file: str = MISSING
157 """A name of capitalization labels file."""
158
159
160 @dataclass
161 class CommonDatasetParametersConfig:
162 """
163 A structure and default values of common dataset parameters config which includes label and loss mask information.
164 If you omit parameters ``punct_label_ids``, ``capit_label_ids``, ``label_vocab_dir``, then labels will be inferred
165 from a training dataset or loaded from a checkpoint.
166
167 Parameters ``ignore_extra_tokens`` and ``ignore_start_end`` are responsible for forming loss mask. A loss mask
168 defines on which tokens loss is computed.
169
170 This config is a part of :class:`~PunctuationCapitalizationModelConfig`.
171 """
172
173 pad_label: str = MISSING
174 """A mandatory parameter which should contain the label used for punctuation and capitalization label padding. It
175 also serves as a neutral label for both punctuation and capitalization. If either of the ``punct_label_ids`` and
176 ``capit_label_ids`` parameters is provided, then ``pad_label`` must have id ``0`` in it. In addition, if
177 ``label_vocab_dir`` is provided, then ``pad_label`` must be on the first line of both
178 ``class_labels.punct_labels_file`` and ``class_labels.capit_labels_file``."""
179
180 ignore_extra_tokens: bool = False
181 """Whether to ignore all tokens of a word except the first when computing loss. If this parameter is ``True``,
182 then the loss mask is ``False`` for all tokens of a word except the first."""
183
184 ignore_start_end: bool = True
185 """If ``False``, then loss is computed on [CLS] and [SEP] tokens."""
186
187 punct_label_ids: Optional[Dict[str, int]] = None
188 """A dictionary with punctuation label ids. ``pad_label`` must have id ``0`` in this dictionary. You can omit this
189 parameter and pass label ids through ``class_labels.punct_labels_file``, or let the model infer label ids from the
190 dataset or load them from a checkpoint."""
191
192 capit_label_ids: Optional[Dict[str, int]] = None
193 """A dictionary with capitalization label ids. ``pad_label`` must have id ``0`` in this dictionary. You can omit
194 this parameter and pass label ids through ``class_labels.capit_labels_file``, or let the model infer label ids from
195 the dataset or load them from a checkpoint."""
196
197 label_vocab_dir: Optional[str] = None
198 """A path to a directory which contains class labels files. See :class:`ClassLabelsConfig`. If this parameter is
199 provided, then labels will be loaded from files which are located in ``label_vocab_dir`` and have names specified
200 in the ``model.class_labels`` configuration section. The label specified in ``pad_label`` has to be on the first
201 line of the ``model.class_labels`` files."""
202
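The constraint documented above, that ``pad_label`` must have id ``0`` in ``punct_label_ids`` and ``capit_label_ids``, can be sketched as a small check; the helper name below is hypothetical, not part of NeMo:

```python
# Hypothetical validator for the documented constraint that `pad_label`
# must map to id 0 in a label-id dictionary.
def check_pad_label(pad_label: str, label_ids: dict) -> None:
    if label_ids.get(pad_label) != 0:
        raise ValueError(
            f"pad_label {pad_label!r} must have id 0, got {label_ids.get(pad_label)!r}"
        )

check_pad_label("O", {"O": 0, ",": 1, ".": 2})  # satisfies the constraint, no error
```

A dictionary where ``pad_label`` is missing or has a nonzero id would raise ``ValueError``.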
203
204 @dataclass
205 class PunctuationCapitalizationModelConfig:
206 """
207 A configuration of
208 :class:`~nemo.collections.nlp.models.token_classification.punctuation_capitalization_model.PunctuationCapitalizationModel`
209 model.
210
211 See an example of model config in
212 `nemo/examples/nlp/token_classification/conf/punctuation_capitalization_config.yaml
213 <https://github.com/NVIDIA/NeMo/blob/main/examples/nlp/token_classification/conf/punctuation_capitalization_config.yaml>`_
214
215 This config is a part of :class:`~PunctuationCapitalizationConfig`.
216 """
217
218 class_labels: ClassLabelsConfig = ClassLabelsConfig()
219 """A mandatory parameter containing a dictionary with names of label id files used in .nemo checkpoints.
220 These file names can also be used for passing label vocabularies to the model. If you wish to use ``class_labels``
221 for passing vocabularies, please provide path to vocabulary files in
222 ``model.common_dataset_parameters.label_vocab_dir`` parameter."""
223
224 common_dataset_parameters: Optional[CommonDatasetParametersConfig] = CommonDatasetParametersConfig()
225 """Label ids and loss mask information."""
226
227 train_ds: Optional[PunctuationCapitalizationTrainDataConfig] = None
228 """A configuration for creating training dataset and data loader."""
229
230 validation_ds: Optional[PunctuationCapitalizationEvalDataConfig] = None
231 """A configuration for creating validation datasets and data loaders."""
232
233 test_ds: Optional[PunctuationCapitalizationEvalDataConfig] = None
234 """A configuration for creating test datasets and data loaders."""
235
236 punct_head: HeadConfig = HeadConfig()
237 """A configuration for creating the punctuation MLP head that is applied to language model outputs."""
238
239 capit_head: HeadConfig = HeadConfig()
240 """A configuration for creating the capitalization MLP head that is applied to language model outputs."""
241
242 tokenizer: Any = TokenizerConfig()
243 """A configuration for source text tokenizer."""
244
245 language_model: LanguageModelConfig = LanguageModelConfig()
246 """A configuration of a BERT-like language model which serves as a model body."""
247
248 optim: Optional[Any] = None
249 """A configuration of the optimizer and learning rate scheduler. Such configs vary widely. For a description see
250 the `Optimizers
251 <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/core/core.html#optimizers>`_ section in the
252 documentation and the `primer <https://github.com/NVIDIA/NeMo/blob/main/tutorials/00_NeMo_Primer.ipynb>`_ tutorial."""
253
254
255 @dataclass
256 class PunctuationCapitalizationLexicalAudioModelConfig(PunctuationCapitalizationModelConfig):
257 """
258 A configuration of
259 :class:`~nemo.collections.nlp.models.token_classification.punctuation_lexical_audio_capitalization_model.PunctuationCapitalizationLexicalAudioModel`
260 model.
261
262 See an example of model config in
263 `nemo/examples/nlp/token_classification/conf/punctuation_capitalization_lexical_audio_config.yaml
264 <https://github.com/NVIDIA/NeMo/blob/main/examples/nlp/token_classification/conf/punctuation_capitalization_lexical_audio_config.yaml>`_
265
266 Audio encoder can be frozen during training with ``freeze_audio_encoder`` parameter.
267 Adapter can be added to audio encoder with ``use_adapters`` and ``adapter_config`` parameters.
268 More conformer layers can be added on top of pretrained audio encoder with ``frozen_conf_d_model``, ``frozen_conf_d_ff`` and ``frozen_conf_num_layers`` parameters.
269 """
270
271 train_ds: Optional[PunctuationCapitalizationTrainDataConfig] = None
272 """A configuration for creating training dataset and data loader."""
273
274 validation_ds: Optional[PunctuationCapitalizationEvalDataConfig] = None
275 """A configuration for creating validation datasets and data loaders."""
276
277 test_ds: Optional[PunctuationCapitalizationEvalDataConfig] = None
278 """A configuration for creating test datasets and data loaders."""
279
280 audio_encoder: Optional[AudioEncoderConfig] = None
281
282 restore_lexical_encoder_from: Optional[str] = None
283 """A path to a .nemo checkpoint to load lexical encoder weights from."""
284
285 use_weighted_loss: Optional[bool] = False
286 """If set to ``True``, a weighted ``CrossEntropyLoss`` will be used."""
287
288
289 @dataclass
290 class PunctuationCapitalizationConfig(NemoConfig):
291 """
292 A config for punctuation model training and testing.
293
294 See an example of full config in
295 `nemo/examples/nlp/token_classification/conf/punctuation_capitalization_config.yaml
296 <https://github.com/NVIDIA/NeMo/blob/main/examples/nlp/token_classification/conf/punctuation_capitalization_config.yaml>`_
297 """
298
299 pretrained_model: Optional[str] = None
300 """Can be the name of an NVIDIA NGC cloud model or a path to a .nemo checkpoint. You can get a list of possible cloud options
301 by calling method
302 :func:`~nemo.collections.nlp.models.token_classification.punctuation_capitalization_model.PunctuationCapitalizationModel.list_available_models`.
303 """
304
305 name: Optional[str] = 'Punctuation_and_Capitalization'
306 """A name of the model. Used for naming output directories and ``.nemo`` checkpoints."""
307
308 do_training: bool = True
309 """Whether to perform training of the model."""
310
311 do_testing: bool = False
312 """Whether to perform testing of the model."""
313
314 model: PunctuationCapitalizationModelConfig = PunctuationCapitalizationModelConfig()
315 """A configuration for the
316 :class:`~nemo.collections.nlp.models.token_classification.punctuation_capitalization_model.PunctuationCapitalizationModel`
317 model."""
318
319 trainer: Optional[TrainerConfig] = TrainerConfig()
320 """Contains ``Trainer`` Lightning class constructor parameters."""
321
322 exp_manager: Optional[ExpManagerConfig] = ExpManagerConfig(name=name, files_to_copy=[])
323 """A configuration with various NeMo training options such as output directories, resuming from checkpoint,
324 tensorboard and W&B logging, and so on. For possible options see :ref:`exp-manager-label`."""
325
326
327 @dataclass
328 class PunctuationCapitalizationLexicalAudioConfig(PunctuationCapitalizationConfig):
329 model: PunctuationCapitalizationLexicalAudioModelConfig = PunctuationCapitalizationLexicalAudioModelConfig()
330
331
332 def is_legacy_model_config(model_cfg: DictConfig) -> bool:
333 """
334 Test if model config is old style config. Old style configs are configs which were used before
335 the ``common_dataset_parameters`` item was added. Old style configs use ``dataset`` instead of
336 ``common_dataset_parameters``, ``batch_size`` instead of ``tokens_in_batch``. Old style configs do not support
337 tarred datasets.
338
339 Args:
340 model_cfg: model configuration
341
342 Returns:
343 whether ``model_config`` is legacy
344 """
345 return 'common_dataset_parameters' not in model_cfg
346
347
348 def legacy_model_config_to_new_model_config(model_cfg: DictConfig) -> DictConfig:
349 """
350 Transform old style config into
351 :class:`~nemo.collections.nlp.models.token_classification.punctuation_capitalization_config.PunctuationCapitalizationModelConfig`.
352 Old style configs are configs which were used before ``common_dataset_parameters`` item was added. Old style
353 configs use ``dataset`` instead of ``common_dataset_parameters``, and ``batch_size`` instead of ``tokens_in_batch``.
354 Old style configs do not support tarred datasets.
355
356 Args:
357 model_cfg: old style config
358
359 Returns:
360 model config which follows dataclass
361 :class:`~nemo.collections.nlp.models.token_classification.punctuation_capitalization_config.PunctuationCapitalizationModelConfig`
362 """
363 train_ds = model_cfg.get('train_ds')
364 validation_ds = model_cfg.get('validation_ds')
365 test_ds = model_cfg.get('test_ds')
366 dataset = model_cfg.dataset
367 punct_head_config = model_cfg.get('punct_head', {})
368 capit_head_config = model_cfg.get('capit_head', {})
369 omega_conf = OmegaConf.structured(
370 PunctuationCapitalizationModelConfig(
371 class_labels=model_cfg.class_labels,
372 common_dataset_parameters=CommonDatasetParametersConfig(
373 pad_label=dataset.pad_label,
374 ignore_extra_tokens=dataset.get(
375 'ignore_extra_tokens', CommonDatasetParametersConfig.ignore_extra_tokens
376 ),
377 ignore_start_end=dataset.get('ignore_start_end', CommonDatasetParametersConfig.ignore_start_end),
378 punct_label_ids=model_cfg.punct_label_ids,
379 capit_label_ids=model_cfg.capit_label_ids,
380 ),
381 train_ds=None
382 if train_ds is None
383 else legacy_data_config_to_new_data_config(train_ds, dataset, train=True),
384 validation_ds=None
385 if validation_ds is None
386 else legacy_data_config_to_new_data_config(validation_ds, dataset, train=False),
387 test_ds=None if test_ds is None else legacy_data_config_to_new_data_config(test_ds, dataset, train=False),
388 punct_head=HeadConfig(
389 num_fc_layers=punct_head_config.get('punct_num_fc_layers', HeadConfig.num_fc_layers),
390 fc_dropout=punct_head_config.get('fc_dropout', HeadConfig.fc_dropout),
391 activation=punct_head_config.get('activation', HeadConfig.activation),
392 use_transformer_init=punct_head_config.get('use_transformer_init', HeadConfig.use_transformer_init),
393 ),
394 capit_head=HeadConfig(
395 num_fc_layers=capit_head_config.get('capit_num_fc_layers', HeadConfig.num_fc_layers),
396 fc_dropout=capit_head_config.get('fc_dropout', HeadConfig.fc_dropout),
397 activation=capit_head_config.get('activation', HeadConfig.activation),
398 use_transformer_init=capit_head_config.get('use_transformer_init', HeadConfig.use_transformer_init),
399 ),
400 tokenizer=model_cfg.tokenizer,
401 language_model=model_cfg.language_model,
402 optim=model_cfg.optim,
403 )
404 )
405 with open_dict(omega_conf):
406 retain_during_legacy_conversion = model_cfg.get('retain_during_legacy_conversion', {})
407 for key in retain_during_legacy_conversion.keys():
408 omega_conf[key] = retain_during_legacy_conversion[key]
409 return omega_conf
410
[end of nemo/collections/nlp/models/token_classification/punctuation_capitalization_config.py]
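The legacy-config check above can be exercised with plain dicts standing in for ``omegaconf.DictConfig`` (both support the ``in`` operator); the configs below are illustrative stand-ins, not real NeMo configs:

```python
# Mirror of is_legacy_model_config from the file above; plain dicts stand in
# for omegaconf.DictConfig, which supports the same `in` membership test.
def is_legacy_model_config(model_cfg) -> bool:
    return 'common_dataset_parameters' not in model_cfg

legacy_cfg = {"dataset": {"pad_label": "O", "batch_size": 32}}   # old style: `dataset`, `batch_size`
new_cfg = {"common_dataset_parameters": {"pad_label": "O"},      # new style marker key
           "train_ds": {"tokens_in_batch": 5000}}

assert is_legacy_model_config(legacy_cfg)
assert not is_legacy_model_config(new_cfg)
```

A legacy config detected this way would then be passed to ``legacy_model_config_to_new_model_config`` for conversion.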
[start of nemo/collections/nlp/modules/common/megatron/megatron_encoders.py]
1 # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Transformer based language model."""
17 from nemo.collections.nlp.modules.common.megatron.megatron_perceiver_encoders import MegatronPerceiverEncoderModule
18 from nemo.collections.nlp.modules.common.megatron.megatron_transformer_encoder import MegatronTransformerEncoderModule
19 from nemo.collections.nlp.modules.common.megatron.retrieval_transformer import (
20 MegatronRetrievalTransformerEncoderModule,
21 )
22 from nemo.collections.nlp.modules.common.megatron.utils import (
23 ApexGuardDefaults,
24 init_method_normal,
25 scaled_init_method_normal,
26 )
27
28 try:
29 from apex.transformer.enums import AttnMaskType, ModelType
30
31 HAVE_APEX = True
32 except (ImportError, ModuleNotFoundError):
33 HAVE_APEX = False
34 # fake missing classes with None attributes
35 AttnMaskType = ApexGuardDefaults()
36 ModelType = ApexGuardDefaults()
37
38 try:
39 from megatron.core import ModelParallelConfig
40
41 HAVE_MEGATRON_CORE = True
42
43 except (ImportError, ModuleNotFoundError):
44
45 ModelParallelConfig = ApexGuardDefaults
46
47 HAVE_MEGATRON_CORE = False
48
49 __all__ = []
50
51 AVAILABLE_ENCODERS = ["transformer", "perceiver", "retro"]
52
53
54 def get_encoder_model(
55 config: ModelParallelConfig,
56 arch,
57 hidden_size,
58 ffn_hidden_size,
59 num_layers,
60 num_attention_heads,
61 apply_query_key_layer_scaling=False,
62 kv_channels=None,
63 init_method=None,
64 scaled_init_method=None,
65 encoder_attn_mask_type=AttnMaskType.padding,
66 pre_process=True,
67 post_process=True,
68 init_method_std=0.02,
69 megatron_amp_O2=False,
70 hidden_dropout=0.1,
71 attention_dropout=0.1,
72 ffn_dropout=0.0,
73 precision=16,
74 fp32_residual_connection=False,
75 activations_checkpoint_method=None,
76 activations_checkpoint_num_layers=1,
77 activations_checkpoint_granularity=None,
78 layernorm_epsilon=1e-5,
79 bias_activation_fusion=True,
80 bias_dropout_add_fusion=True,
81 masked_softmax_fusion=True,
82 persist_layer_norm=False,
83 openai_gelu=False,
84 activation="gelu",
85 onnx_safe=False,
86 bias=True,
87 normalization="layernorm",
88 headscale=False,
89 transformer_block_type="pre_ln",
90 hidden_steps=32,
91 parent_model_type=ModelType.encoder_or_decoder,
92 layer_type=None,
93 chunk_size=64,
94 num_self_attention_per_cross_attention=1,
95 layer_number_offset=0,  # this is used only for attention norm_factor scaling
96 megatron_legacy=False,
97 normalize_attention_scores=True,
98 sequence_parallel=False,
99 num_moe_experts=1,
100 moe_frequency=1,
101 moe_dropout=0.0,
102 turn_off_rop=False,  # turn off the RoPE (rotary) positional embedding
103 version=1, # model version
104 position_embedding_type='learned_absolute',
105 use_flash_attention=False,
106 ):
107 """Build the encoder model and return it."""
108
109 if kv_channels is None:
110 assert (
111 hidden_size % num_attention_heads == 0
112 ), 'hidden_size must be divisible by num_attention_heads if kv_channels is None'
113 kv_channels = hidden_size // num_attention_heads
114
115 if init_method is None:
116 init_method = init_method_normal(init_method_std)
117
118 if scaled_init_method is None:
119 scaled_init_method = scaled_init_method_normal(init_method_std, num_layers)
120
121 if arch == "transformer":
122 # Language encoder.
123 encoder = MegatronTransformerEncoderModule(
124 config=config,
125 init_method=init_method,
126 output_layer_init_method=scaled_init_method,
127 hidden_size=hidden_size,
128 num_layers=num_layers,
129 num_attention_heads=num_attention_heads,
130 apply_query_key_layer_scaling=apply_query_key_layer_scaling,
131 kv_channels=kv_channels,
132 ffn_hidden_size=ffn_hidden_size,
133 encoder_attn_mask_type=encoder_attn_mask_type,
134 pre_process=pre_process,
135 post_process=post_process,
136 megatron_amp_O2=megatron_amp_O2,
137 hidden_dropout=hidden_dropout,
138 attention_dropout=attention_dropout,
139 ffn_dropout=ffn_dropout,
140 precision=precision,
141 fp32_residual_connection=fp32_residual_connection,
142 activations_checkpoint_method=activations_checkpoint_method,
143 activations_checkpoint_num_layers=activations_checkpoint_num_layers,
144 activations_checkpoint_granularity=activations_checkpoint_granularity,
145 layernorm_epsilon=layernorm_epsilon,
146 bias_activation_fusion=bias_activation_fusion,
147 bias_dropout_add_fusion=bias_dropout_add_fusion,
148 masked_softmax_fusion=masked_softmax_fusion,
149 persist_layer_norm=persist_layer_norm,
150 openai_gelu=openai_gelu,
151 onnx_safe=onnx_safe,
152 activation=activation,
153 bias=bias,
154 normalization=normalization,
155 transformer_block_type=transformer_block_type,
156 headscale=headscale,
157 parent_model_type=parent_model_type,
158 megatron_legacy=megatron_legacy,
159 normalize_attention_scores=normalize_attention_scores,
160 num_moe_experts=num_moe_experts,
161 moe_frequency=moe_frequency,
162 moe_dropout=moe_dropout,
163 position_embedding_type=position_embedding_type,
164 use_flash_attention=use_flash_attention,
165 )
166 elif arch == "retro":
167 encoder = MegatronRetrievalTransformerEncoderModule(
168 config=config,
169 init_method=init_method,
170 output_layer_init_method=scaled_init_method,
171 hidden_size=hidden_size,
172 num_layers=num_layers,
173 num_attention_heads=num_attention_heads,
174 apply_query_key_layer_scaling=apply_query_key_layer_scaling,
175 kv_channels=kv_channels,
176 layer_type=layer_type,
177 ffn_hidden_size=ffn_hidden_size,
178 pre_process=pre_process,
179 post_process=post_process,
180 megatron_amp_O2=megatron_amp_O2,
181 hidden_dropout=hidden_dropout,
182 attention_dropout=attention_dropout,
183 precision=precision,
184 fp32_residual_connection=fp32_residual_connection,
185 activations_checkpoint_method=activations_checkpoint_method,
186 activations_checkpoint_num_layers=activations_checkpoint_num_layers,
187 activations_checkpoint_granularity=activations_checkpoint_granularity,
188 layernorm_epsilon=layernorm_epsilon,
189 bias_activation_fusion=bias_activation_fusion,
190 bias_dropout_add_fusion=bias_dropout_add_fusion,
191 masked_softmax_fusion=masked_softmax_fusion,
192 persist_layer_norm=persist_layer_norm,
193 openai_gelu=openai_gelu,
194 onnx_safe=onnx_safe,
195 activation=activation,
196 bias=bias,
197 normalization=normalization,
198 transformer_block_type=transformer_block_type,
199 parent_model_type=parent_model_type,
200 chunk_size=chunk_size,
201 layer_number_offset=layer_number_offset,
202 megatron_legacy=megatron_legacy,
203 normalize_attention_scores=normalize_attention_scores,
204 turn_off_rop=turn_off_rop,
205 version=version,
206 )
207 elif arch == "perceiver":
208 encoder = MegatronPerceiverEncoderModule(
209 config=config,
210 init_method=init_method,
211 output_layer_init_method=scaled_init_method,
212 hidden_size=hidden_size,
213 num_layers=num_layers,
214 num_attention_heads=num_attention_heads,
215 apply_query_key_layer_scaling=apply_query_key_layer_scaling,
216 kv_channels=kv_channels,
217 ffn_hidden_size=ffn_hidden_size,
218 encoder_attn_mask_type=encoder_attn_mask_type,
219 pre_process=pre_process,
220 post_process=post_process,
221 megatron_amp_O2=megatron_amp_O2,
222 hidden_dropout=hidden_dropout,
223 attention_dropout=attention_dropout,
224 ffn_dropout=ffn_dropout,
225 precision=precision,
226 fp32_residual_connection=fp32_residual_connection,
227 activations_checkpoint_method=activations_checkpoint_method,
228 activations_checkpoint_num_layers=activations_checkpoint_num_layers,
229 activations_checkpoint_granularity=activations_checkpoint_granularity,
230 layernorm_epsilon=layernorm_epsilon,
231 bias_activation_fusion=bias_activation_fusion,
232 bias_dropout_add_fusion=bias_dropout_add_fusion,
233 masked_softmax_fusion=masked_softmax_fusion,
234 persist_layer_norm=persist_layer_norm,
235 openai_gelu=openai_gelu,
236 onnx_safe=onnx_safe,
237 activation=activation,
238 bias=bias,
239 normalization=normalization,
240 transformer_block_type=transformer_block_type,
241 headscale=headscale,
242 parent_model_type=parent_model_type,
243 hidden_steps=hidden_steps,
244 num_self_attention_per_cross_attention=num_self_attention_per_cross_attention,
245 megatron_legacy=megatron_legacy,
246 normalize_attention_scores=normalize_attention_scores,
247 )
248 else:
249 raise ValueError(f"Unknown encoder arch = {arch}. Available encoder arch = {AVAILABLE_ENCODERS}")
250
251 return encoder
252
[end of nemo/collections/nlp/modules/common/megatron/megatron_encoders.py]
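The ``kv_channels`` fallback at the top of ``get_encoder_model`` can be isolated into a small sketch (a hypothetical helper mirroring the logic above): when ``kv_channels`` is not given, the per-head key/value width defaults to ``hidden_size // num_attention_heads``.

```python
# Sketch of the kv_channels default from get_encoder_model above.
def default_kv_channels(hidden_size: int, num_attention_heads: int, kv_channels=None) -> int:
    if kv_channels is None:
        assert hidden_size % num_attention_heads == 0, (
            'hidden_size must be divisible by num_attention_heads if kv_channels is None'
        )
        kv_channels = hidden_size // num_attention_heads
    return kv_channels

print(default_kv_channels(768, 12))  # 64, a BERT-base-like geometry
```

An explicitly passed ``kv_channels`` is returned unchanged, so the divisibility assertion only applies to the derived default.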
[start of nemo/collections/tts/models/fastpitch.py]
1 # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 import contextlib
15 from dataclasses import dataclass
16 from pathlib import Path
17 from typing import List, Optional
18
19 import torch
20 from hydra.utils import instantiate
21 from omegaconf import DictConfig, OmegaConf, open_dict
22 from pytorch_lightning import Trainer
23 from pytorch_lightning.loggers import TensorBoardLogger
24
25 from nemo.collections.common.parts.preprocessing import parsers
26 from nemo.collections.tts.losses.aligner_loss import BinLoss, ForwardSumLoss
27 from nemo.collections.tts.losses.fastpitchloss import DurationLoss, EnergyLoss, MelLoss, PitchLoss
28 from nemo.collections.tts.models.base import SpectrogramGenerator
29 from nemo.collections.tts.modules.fastpitch import FastPitchModule
30 from nemo.collections.tts.parts.mixins import FastPitchAdapterModelMixin
31 from nemo.collections.tts.parts.utils.callbacks import LoggingCallback
32 from nemo.collections.tts.parts.utils.helpers import (
33 batch_from_ragged,
34 g2p_backward_compatible_support,
35 plot_alignment_to_numpy,
36 plot_spectrogram_to_numpy,
37 process_batch,
38 sample_tts_input,
39 )
40 from nemo.core.classes import Exportable
41 from nemo.core.classes.common import PretrainedModelInfo, typecheck
42 from nemo.core.neural_types.elements import (
43 Index,
44 LengthsType,
45 MelSpectrogramType,
46 ProbsType,
47 RegressionValuesType,
48 TokenDurationType,
49 TokenIndex,
50 TokenLogDurationType,
51 )
52 from nemo.core.neural_types.neural_type import NeuralType
53 from nemo.utils import logging, model_utils
54
55
56 @dataclass
57 class G2PConfig:
58 _target_: str = "nemo.collections.tts.g2p.models.en_us_arpabet.EnglishG2p"
59 phoneme_dict: str = "scripts/tts_dataset_files/cmudict-0.7b_nv22.10"
60 heteronyms: str = "scripts/tts_dataset_files/heteronyms-052722"
61 phoneme_probability: float = 0.5
62
63
64 @dataclass
65 class TextTokenizer:
66 _target_: str = "nemo.collections.common.tokenizers.text_to_speech.tts_tokenizers.EnglishPhonemesTokenizer"
67 punct: bool = True
68 stresses: bool = True
69 chars: bool = True
70 apostrophe: bool = True
71 pad_with_space: bool = True
72 add_blank_at: bool = True
73 g2p: G2PConfig = G2PConfig()
74
75
76 @dataclass
77 class TextTokenizerConfig:
78 text_tokenizer: TextTokenizer = TextTokenizer()
79
80
81 class FastPitchModel(SpectrogramGenerator, Exportable, FastPitchAdapterModelMixin):
82 """FastPitch model (https://arxiv.org/abs/2006.06873) that is used to generate mel spectrogram from text."""
83
84 def __init__(self, cfg: DictConfig, trainer: Trainer = None):
85 # Convert to Hydra 1.0 compatible DictConfig
86 cfg = model_utils.convert_model_config_to_dict_config(cfg)
87 cfg = model_utils.maybe_update_config_version(cfg)
88
89 # Setup normalizer
90 self.normalizer = None
91 self.text_normalizer_call = None
92 self.text_normalizer_call_kwargs = {}
93 self._setup_normalizer(cfg)
94
95 self.learn_alignment = cfg.get("learn_alignment", False)
96
97 # Setup vocabulary (=tokenizer) and input_fft_kwargs (supported only with self.learn_alignment=True)
98 input_fft_kwargs = {}
99 if self.learn_alignment:
100 self.vocab = None
101
102 self.ds_class = cfg.train_ds.dataset._target_
103 self.ds_class_name = self.ds_class.split(".")[-1]
104 if self.ds_class not in [
105 "nemo.collections.tts.data.dataset.TTSDataset",
106 "nemo.collections.tts.data.text_to_speech_dataset.TextToSpeechDataset",
107 "nemo.collections.tts.torch.data.TTSDataset",
108 ]:
109 raise ValueError(f"Unknown dataset class: {self.ds_class}.")
110
111 self._setup_tokenizer(cfg)
112 assert self.vocab is not None
113 input_fft_kwargs["n_embed"] = len(self.vocab.tokens)
114 input_fft_kwargs["padding_idx"] = self.vocab.pad
115
116 self._parser = None
117 self._tb_logger = None
118 super().__init__(cfg=cfg, trainer=trainer)
119
120 self.bin_loss_warmup_epochs = cfg.get("bin_loss_warmup_epochs", 100)
121 self.log_images = cfg.get("log_images", False)
122 self.log_train_images = False
123
124 default_prosody_loss_scale = 0.1 if self.learn_alignment else 1.0
125 dur_loss_scale = cfg.get("dur_loss_scale", default_prosody_loss_scale)
126 pitch_loss_scale = cfg.get("pitch_loss_scale", default_prosody_loss_scale)
127 energy_loss_scale = cfg.get("energy_loss_scale", default_prosody_loss_scale)
128
129 self.mel_loss_fn = MelLoss()
130 self.pitch_loss_fn = PitchLoss(loss_scale=pitch_loss_scale)
131 self.duration_loss_fn = DurationLoss(loss_scale=dur_loss_scale)
132 self.energy_loss_fn = EnergyLoss(loss_scale=energy_loss_scale)
133
134 self.aligner = None
135 if self.learn_alignment:
136 aligner_loss_scale = cfg.get("aligner_loss_scale", 1.0)
137 self.aligner = instantiate(self._cfg.alignment_module)
138 self.forward_sum_loss_fn = ForwardSumLoss(loss_scale=aligner_loss_scale)
139 self.bin_loss_fn = BinLoss(loss_scale=aligner_loss_scale)
140
141 self.preprocessor = instantiate(self._cfg.preprocessor)
142 input_fft = instantiate(self._cfg.input_fft, **input_fft_kwargs)
143 output_fft = instantiate(self._cfg.output_fft)
144 duration_predictor = instantiate(self._cfg.duration_predictor)
145 pitch_predictor = instantiate(self._cfg.pitch_predictor)
146 speaker_encoder = instantiate(self._cfg.get("speaker_encoder", None))
147 energy_embedding_kernel_size = cfg.get("energy_embedding_kernel_size", 0)
148 energy_predictor = instantiate(self._cfg.get("energy_predictor", None))
149
150 # [TODO] may remove if we change the pre-trained config
151 # cfg: condition_types = [ "add" ]
152 n_speakers = cfg.get("n_speakers", 0)
153 speaker_emb_condition_prosody = cfg.get("speaker_emb_condition_prosody", False)
154 speaker_emb_condition_decoder = cfg.get("speaker_emb_condition_decoder", False)
155 speaker_emb_condition_aligner = cfg.get("speaker_emb_condition_aligner", False)
156 min_token_duration = cfg.get("min_token_duration", 0)
157 use_log_energy = cfg.get("use_log_energy", True)
158 if n_speakers > 1 and "add" not in input_fft.cond_input.condition_types:
159 input_fft.cond_input.condition_types.append("add")
160 if speaker_emb_condition_prosody:
161 duration_predictor.cond_input.condition_types.append("add")
162 pitch_predictor.cond_input.condition_types.append("add")
163 if speaker_emb_condition_decoder:
164 output_fft.cond_input.condition_types.append("add")
165 if speaker_emb_condition_aligner and self.aligner is not None:
166 self.aligner.cond_input.condition_types.append("add")
167
168 self.fastpitch = FastPitchModule(
169 input_fft,
170 output_fft,
171 duration_predictor,
172 pitch_predictor,
173 energy_predictor,
174 self.aligner,
175 speaker_encoder,
176 n_speakers,
177 cfg.symbols_embedding_dim,
178 cfg.pitch_embedding_kernel_size,
179 energy_embedding_kernel_size,
180 cfg.n_mel_channels,
181 min_token_duration,
182 cfg.max_token_duration,
183 use_log_energy,
184 )
185 self._input_types = self._output_types = None
186 self.export_config = {
187 "emb_range": (0, self.fastpitch.encoder.word_emb.num_embeddings),
188 "enable_volume": False,
189 "enable_ragged_batches": False,
190 }
191 if self.fastpitch.speaker_emb is not None:
192 self.export_config["num_speakers"] = cfg.n_speakers
193
194 self.log_config = cfg.get("log_config", None)
195
196 # Adapter modules setup (from FastPitchAdapterModelMixin)
197 self.setup_adapters()
198
199 def _get_default_text_tokenizer_conf(self):
200 text_tokenizer: TextTokenizerConfig = TextTokenizerConfig()
201 return OmegaConf.create(OmegaConf.to_yaml(text_tokenizer))
202
203 def _setup_normalizer(self, cfg):
204 if "text_normalizer" in cfg:
205 normalizer_kwargs = {}
206
207 if "whitelist" in cfg.text_normalizer:
208 normalizer_kwargs["whitelist"] = self.register_artifact(
209 'text_normalizer.whitelist', cfg.text_normalizer.whitelist
210 )
211 try:
212 import nemo_text_processing
213
214 self.normalizer = instantiate(cfg.text_normalizer, **normalizer_kwargs)
215 except Exception as e:
216 logging.error(e)
217 raise ImportError(
218 "`nemo_text_processing` not installed, see https://github.com/NVIDIA/NeMo-text-processing for more details"
219 )
220
221 self.text_normalizer_call = self.normalizer.normalize
222 if "text_normalizer_call_kwargs" in cfg:
223 self.text_normalizer_call_kwargs = cfg.text_normalizer_call_kwargs
224
225 def _setup_tokenizer(self, cfg):
226 text_tokenizer_kwargs = {}
227
228 if "g2p" in cfg.text_tokenizer:
229 # for backward compatibility
230 if (
231 self._is_model_being_restored()
232 and (cfg.text_tokenizer.g2p.get('_target_', None) is not None)
233 and cfg.text_tokenizer.g2p["_target_"].startswith("nemo_text_processing.g2p")
234 ):
235 cfg.text_tokenizer.g2p["_target_"] = g2p_backward_compatible_support(
236 cfg.text_tokenizer.g2p["_target_"]
237 )
238
239 g2p_kwargs = {}
240
241 if "phoneme_dict" in cfg.text_tokenizer.g2p:
242 g2p_kwargs["phoneme_dict"] = self.register_artifact(
243 'text_tokenizer.g2p.phoneme_dict', cfg.text_tokenizer.g2p.phoneme_dict,
244 )
245
246 if "heteronyms" in cfg.text_tokenizer.g2p:
247 g2p_kwargs["heteronyms"] = self.register_artifact(
248 'text_tokenizer.g2p.heteronyms', cfg.text_tokenizer.g2p.heteronyms,
249 )
250
251             # for backward compatibility
252 text_tokenizer_kwargs["g2p"] = instantiate(cfg.text_tokenizer.g2p, **g2p_kwargs)
253
254 # TODO @xueyang: rename the instance of tokenizer because vocab is misleading.
255 self.vocab = instantiate(cfg.text_tokenizer, **text_tokenizer_kwargs)
256
257 @property
258 def tb_logger(self):
259 if self._tb_logger is None:
260             if self.logger is None or self.logger.experiment is None:
261 return None
262 tb_logger = self.logger.experiment
263 for logger in self.trainer.loggers:
264 if isinstance(logger, TensorBoardLogger):
265 tb_logger = logger.experiment
266 break
267 self._tb_logger = tb_logger
268 return self._tb_logger
269
270 @property
271 def parser(self):
272 if self._parser is not None:
273 return self._parser
274
275 if self.learn_alignment:
276 self._parser = self.vocab.encode
277 else:
278 self._parser = parsers.make_parser(
279 labels=self._cfg.labels,
280 name='en',
281 unk_id=-1,
282 blank_id=-1,
283 do_normalize=True,
284 abbreviation_version="fastpitch",
285 make_table=False,
286 )
287 return self._parser
288
289 def parse(self, str_input: str, normalize=True) -> torch.tensor:
290 if self.training:
291 logging.warning("parse() is meant to be called in eval mode.")
292
293 if normalize and self.text_normalizer_call is not None:
294 str_input = self.text_normalizer_call(str_input, **self.text_normalizer_call_kwargs)
295
296 if self.learn_alignment:
297 eval_phon_mode = contextlib.nullcontext()
298 if hasattr(self.vocab, "set_phone_prob"):
299 eval_phon_mode = self.vocab.set_phone_prob(prob=1.0)
300
301 # Disable mixed g2p representation if necessary
302 with eval_phon_mode:
303 tokens = self.parser(str_input)
304 else:
305 tokens = self.parser(str_input)
306
307 x = torch.tensor(tokens).unsqueeze_(0).long().to(self.device)
308 return x
309
310 @typecheck(
311 input_types={
312 "text": NeuralType(('B', 'T_text'), TokenIndex()),
313 "durs": NeuralType(('B', 'T_text'), TokenDurationType()),
314 "pitch": NeuralType(('B', 'T_audio'), RegressionValuesType()),
315 "energy": NeuralType(('B', 'T_audio'), RegressionValuesType(), optional=True),
316 "speaker": NeuralType(('B'), Index(), optional=True),
317 "pace": NeuralType(optional=True),
318 "spec": NeuralType(('B', 'D', 'T_spec'), MelSpectrogramType(), optional=True),
319 "attn_prior": NeuralType(('B', 'T_spec', 'T_text'), ProbsType(), optional=True),
320 "mel_lens": NeuralType(('B'), LengthsType(), optional=True),
321 "input_lens": NeuralType(('B'), LengthsType(), optional=True),
322 # reference_* data is used for multi-speaker FastPitch training
323 "reference_spec": NeuralType(('B', 'D', 'T_spec'), MelSpectrogramType(), optional=True),
324 "reference_spec_lens": NeuralType(('B'), LengthsType(), optional=True),
325 }
326 )
327 def forward(
328 self,
329 *,
330 text,
331 durs=None,
332 pitch=None,
333 energy=None,
334 speaker=None,
335 pace=1.0,
336 spec=None,
337 attn_prior=None,
338 mel_lens=None,
339 input_lens=None,
340 reference_spec=None,
341 reference_spec_lens=None,
342 ):
343 return self.fastpitch(
344 text=text,
345 durs=durs,
346 pitch=pitch,
347 energy=energy,
348 speaker=speaker,
349 pace=pace,
350 spec=spec,
351 attn_prior=attn_prior,
352 mel_lens=mel_lens,
353 input_lens=input_lens,
354 reference_spec=reference_spec,
355 reference_spec_lens=reference_spec_lens,
356 )
357
358 @typecheck(output_types={"spect": NeuralType(('B', 'D', 'T_spec'), MelSpectrogramType())})
359 def generate_spectrogram(
360 self,
361 tokens: 'torch.tensor',
362 speaker: Optional[int] = None,
363 pace: float = 1.0,
364 reference_spec: Optional['torch.tensor'] = None,
365 reference_spec_lens: Optional['torch.tensor'] = None,
366 ) -> torch.tensor:
367 if self.training:
368 logging.warning("generate_spectrogram() is meant to be called in eval mode.")
369 if isinstance(speaker, int):
370 speaker = torch.tensor([speaker]).to(self.device)
371 spect, *_ = self(
372 text=tokens,
373 durs=None,
374 pitch=None,
375 speaker=speaker,
376 pace=pace,
377 reference_spec=reference_spec,
378 reference_spec_lens=reference_spec_lens,
379 )
380 return spect
381
382 def training_step(self, batch, batch_idx):
383 attn_prior, durs, speaker, energy, reference_audio, reference_audio_len = (
384 None,
385 None,
386 None,
387 None,
388 None,
389 None,
390 )
391 if self.learn_alignment:
392 if self.ds_class == "nemo.collections.tts.data.text_to_speech_dataset.TextToSpeechDataset":
393 batch_dict = batch
394 else:
395 batch_dict = process_batch(batch, self._train_dl.dataset.sup_data_types_set)
396 audio = batch_dict.get("audio")
397 audio_lens = batch_dict.get("audio_lens")
398 text = batch_dict.get("text")
399 text_lens = batch_dict.get("text_lens")
400 attn_prior = batch_dict.get("align_prior_matrix", None)
401 pitch = batch_dict.get("pitch", None)
402 energy = batch_dict.get("energy", None)
403 speaker = batch_dict.get("speaker_id", None)
404 reference_audio = batch_dict.get("reference_audio", None)
405 reference_audio_len = batch_dict.get("reference_audio_lens", None)
406 else:
407 audio, audio_lens, text, text_lens, durs, pitch, speaker = batch
408
409 mels, spec_len = self.preprocessor(input_signal=audio, length=audio_lens)
410 reference_spec, reference_spec_len = None, None
411 if reference_audio is not None:
412 reference_spec, reference_spec_len = self.preprocessor(
413 input_signal=reference_audio, length=reference_audio_len
414 )
415
416 (
417 mels_pred,
418 _,
419 _,
420 log_durs_pred,
421 pitch_pred,
422 attn_soft,
423 attn_logprob,
424 attn_hard,
425 attn_hard_dur,
426 pitch,
427 energy_pred,
428 energy_tgt,
429 ) = self(
430 text=text,
431 durs=durs,
432 pitch=pitch,
433 energy=energy,
434 speaker=speaker,
435 pace=1.0,
436 spec=mels if self.learn_alignment else None,
437 reference_spec=reference_spec,
438 reference_spec_lens=reference_spec_len,
439 attn_prior=attn_prior,
440 mel_lens=spec_len,
441 input_lens=text_lens,
442 )
443 if durs is None:
444 durs = attn_hard_dur
445
446 mel_loss = self.mel_loss_fn(spect_predicted=mels_pred, spect_tgt=mels)
447 dur_loss = self.duration_loss_fn(log_durs_predicted=log_durs_pred, durs_tgt=durs, len=text_lens)
448 loss = mel_loss + dur_loss
449 if self.learn_alignment:
450 ctc_loss = self.forward_sum_loss_fn(attn_logprob=attn_logprob, in_lens=text_lens, out_lens=spec_len)
451             bin_loss_weight = min(self.current_epoch / self.bin_loss_warmup_epochs, 1.0)
452 bin_loss = self.bin_loss_fn(hard_attention=attn_hard, soft_attention=attn_soft) * bin_loss_weight
453 loss += ctc_loss + bin_loss
454
455 pitch_loss = self.pitch_loss_fn(pitch_predicted=pitch_pred, pitch_tgt=pitch, len=text_lens)
456 energy_loss = self.energy_loss_fn(energy_predicted=energy_pred, energy_tgt=energy_tgt, length=text_lens)
457 loss += pitch_loss + energy_loss
458
459 self.log("t_loss", loss)
460 self.log("t_mel_loss", mel_loss)
461 self.log("t_dur_loss", dur_loss)
462 self.log("t_pitch_loss", pitch_loss)
463 if energy_tgt is not None:
464 self.log("t_energy_loss", energy_loss)
465 if self.learn_alignment:
466 self.log("t_ctc_loss", ctc_loss)
467 self.log("t_bin_loss", bin_loss)
468
469 # Log images to tensorboard
470 if self.log_images and self.log_train_images and isinstance(self.logger, TensorBoardLogger):
471 self.log_train_images = False
472
473 self.tb_logger.add_image(
474 "train_mel_target",
475 plot_spectrogram_to_numpy(mels[0].data.cpu().float().numpy()),
476 self.global_step,
477 dataformats="HWC",
478 )
479 spec_predict = mels_pred[0].data.cpu().float().numpy()
480 self.tb_logger.add_image(
481 "train_mel_predicted", plot_spectrogram_to_numpy(spec_predict), self.global_step, dataformats="HWC",
482 )
483 if self.learn_alignment:
484 attn = attn_hard[0].data.cpu().float().numpy().squeeze()
485 self.tb_logger.add_image(
486 "train_attn", plot_alignment_to_numpy(attn.T), self.global_step, dataformats="HWC",
487 )
488 soft_attn = attn_soft[0].data.cpu().float().numpy().squeeze()
489 self.tb_logger.add_image(
490 "train_soft_attn", plot_alignment_to_numpy(soft_attn.T), self.global_step, dataformats="HWC",
491 )
492
493 return loss
494
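The binarization-loss warm-up in `training_step` above scales `bin_loss` by a weight that ramps linearly from 0 to 1 over `bin_loss_warmup_epochs` and then stays at 1. A minimal sketch of that schedule (the function name is illustrative, not part of NeMo):

```python
def bin_loss_warmup_weight(current_epoch: int, warmup_epochs: int) -> float:
    """Linear ramp from 0 to 1 over `warmup_epochs`, then constant at 1."""
    return min(current_epoch / warmup_epochs, 1.0)

# halfway through warm-up the binarization loss contributes at half strength
print(bin_loss_warmup_weight(50, 100))   # 0.5
print(bin_loss_warmup_weight(200, 100))  # 1.0
```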
495 def validation_step(self, batch, batch_idx):
496 attn_prior, durs, speaker, energy, reference_audio, reference_audio_len = (
497 None,
498 None,
499 None,
500 None,
501 None,
502 None,
503 )
504 if self.learn_alignment:
505 if self.ds_class == "nemo.collections.tts.data.text_to_speech_dataset.TextToSpeechDataset":
506 batch_dict = batch
507 else:
508 batch_dict = process_batch(batch, self._train_dl.dataset.sup_data_types_set)
509 audio = batch_dict.get("audio")
510 audio_lens = batch_dict.get("audio_lens")
511 text = batch_dict.get("text")
512 text_lens = batch_dict.get("text_lens")
513 attn_prior = batch_dict.get("align_prior_matrix", None)
514 pitch = batch_dict.get("pitch", None)
515 energy = batch_dict.get("energy", None)
516 speaker = batch_dict.get("speaker_id", None)
517 reference_audio = batch_dict.get("reference_audio", None)
518 reference_audio_len = batch_dict.get("reference_audio_lens", None)
519 else:
520 audio, audio_lens, text, text_lens, durs, pitch, speaker = batch
521
522 mels, mel_lens = self.preprocessor(input_signal=audio, length=audio_lens)
523 reference_spec, reference_spec_len = None, None
524 if reference_audio is not None:
525 reference_spec, reference_spec_len = self.preprocessor(
526 input_signal=reference_audio, length=reference_audio_len
527 )
528
529 # Calculate val loss on ground truth durations to better align L2 loss in time
530 (mels_pred, _, _, log_durs_pred, pitch_pred, _, _, _, attn_hard_dur, pitch, energy_pred, energy_tgt,) = self(
531 text=text,
532 durs=durs,
533 pitch=pitch,
534 energy=energy,
535 speaker=speaker,
536 pace=1.0,
537 spec=mels if self.learn_alignment else None,
538 reference_spec=reference_spec,
539 reference_spec_lens=reference_spec_len,
540 attn_prior=attn_prior,
541 mel_lens=mel_lens,
542 input_lens=text_lens,
543 )
544 if durs is None:
545 durs = attn_hard_dur
546
547 mel_loss = self.mel_loss_fn(spect_predicted=mels_pred, spect_tgt=mels)
548 dur_loss = self.duration_loss_fn(log_durs_predicted=log_durs_pred, durs_tgt=durs, len=text_lens)
549 pitch_loss = self.pitch_loss_fn(pitch_predicted=pitch_pred, pitch_tgt=pitch, len=text_lens)
550 energy_loss = self.energy_loss_fn(energy_predicted=energy_pred, energy_tgt=energy_tgt, length=text_lens)
551 loss = mel_loss + dur_loss + pitch_loss + energy_loss
552
553 val_outputs = {
554 "val_loss": loss,
555 "mel_loss": mel_loss,
556 "dur_loss": dur_loss,
557 "pitch_loss": pitch_loss,
558 "energy_loss": energy_loss if energy_tgt is not None else None,
559 "mel_target": mels if batch_idx == 0 else None,
560 "mel_pred": mels_pred if batch_idx == 0 else None,
561 }
562 self.validation_step_outputs.append(val_outputs)
563 return val_outputs
564
565 def on_validation_epoch_end(self):
566 collect = lambda key: torch.stack([x[key] for x in self.validation_step_outputs]).mean()
567 val_loss = collect("val_loss")
568 mel_loss = collect("mel_loss")
569 dur_loss = collect("dur_loss")
570 pitch_loss = collect("pitch_loss")
571 self.log("val_loss", val_loss, sync_dist=True)
572 self.log("val_mel_loss", mel_loss, sync_dist=True)
573 self.log("val_dur_loss", dur_loss, sync_dist=True)
574 self.log("val_pitch_loss", pitch_loss, sync_dist=True)
575 if self.validation_step_outputs[0]["energy_loss"] is not None:
576 energy_loss = collect("energy_loss")
577 self.log("val_energy_loss", energy_loss, sync_dist=True)
578
579 _, _, _, _, _, spec_target, spec_predict = self.validation_step_outputs[0].values()
580
581 if self.log_images and isinstance(self.logger, TensorBoardLogger):
582 self.tb_logger.add_image(
583 "val_mel_target",
584 plot_spectrogram_to_numpy(spec_target[0].data.cpu().float().numpy()),
585 self.global_step,
586 dataformats="HWC",
587 )
588 spec_predict = spec_predict[0].data.cpu().float().numpy()
589 self.tb_logger.add_image(
590 "val_mel_predicted", plot_spectrogram_to_numpy(spec_predict), self.global_step, dataformats="HWC",
591 )
592 self.log_train_images = True
593         self.validation_step_outputs.clear()  # free memory
594
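`on_validation_epoch_end` above reduces the per-step loss dictionaries by stacking each key across `validation_step_outputs` and taking the mean. The same reduction, sketched on plain floats rather than the `torch.stack(...).mean()` the model uses on tensors:

```python
def collect(key, outputs):
    """Mean of outputs[i][key] across all recorded validation steps."""
    values = [step[key] for step in outputs]
    return sum(values) / len(values)

steps = [{"val_loss": 1.0}, {"val_loss": 3.0}]
print(collect("val_loss", steps))  # 2.0
```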
595 def _setup_train_dataloader(self, cfg):
596 phon_mode = contextlib.nullcontext()
597 if hasattr(self.vocab, "set_phone_prob"):
598 phon_mode = self.vocab.set_phone_prob(self.vocab.phoneme_probability)
599
600 with phon_mode:
601 dataset = instantiate(cfg.dataset, text_tokenizer=self.vocab,)
602
603 sampler = dataset.get_sampler(cfg.dataloader_params.batch_size)
604 return torch.utils.data.DataLoader(
605 dataset, collate_fn=dataset.collate_fn, sampler=sampler, **cfg.dataloader_params
606 )
607
608 def _setup_test_dataloader(self, cfg):
609 phon_mode = contextlib.nullcontext()
610 if hasattr(self.vocab, "set_phone_prob"):
611 phon_mode = self.vocab.set_phone_prob(0.0)
612
613 with phon_mode:
614 dataset = instantiate(cfg.dataset, text_tokenizer=self.vocab,)
615
616 return torch.utils.data.DataLoader(dataset, collate_fn=dataset.collate_fn, **cfg.dataloader_params)
617
618 def __setup_dataloader_from_config(self, cfg, shuffle_should_be: bool = True, name: str = "train"):
619 if "dataset" not in cfg or not isinstance(cfg.dataset, DictConfig):
620 raise ValueError(f"No dataset for {name}")
621 if "dataloader_params" not in cfg or not isinstance(cfg.dataloader_params, DictConfig):
622 raise ValueError(f"No dataloader_params for {name}")
623 if shuffle_should_be:
624 if 'shuffle' not in cfg.dataloader_params:
625 logging.warning(
626 f"Shuffle should be set to True for {self}'s {name} dataloader but was not found in its "
627 "config. Manually setting to True"
628 )
629 with open_dict(cfg.dataloader_params):
630 cfg.dataloader_params.shuffle = True
631 elif not cfg.dataloader_params.shuffle:
632 logging.error(f"The {name} dataloader for {self} has shuffle set to False!!!")
633 elif cfg.dataloader_params.shuffle:
634 logging.error(f"The {name} dataloader for {self} has shuffle set to True!!!")
635
636 if self.ds_class == "nemo.collections.tts.data.dataset.TTSDataset":
637 phon_mode = contextlib.nullcontext()
638 if hasattr(self.vocab, "set_phone_prob"):
639 phon_mode = self.vocab.set_phone_prob(prob=None if name == "val" else self.vocab.phoneme_probability)
640
641 with phon_mode:
642 dataset = instantiate(
643 cfg.dataset,
644 text_normalizer=self.normalizer,
645 text_normalizer_call_kwargs=self.text_normalizer_call_kwargs,
646 text_tokenizer=self.vocab,
647 )
648 else:
649 dataset = instantiate(cfg.dataset)
650
651 return torch.utils.data.DataLoader(dataset, collate_fn=dataset.collate_fn, **cfg.dataloader_params)
652
653 def setup_training_data(self, cfg):
654 if self.ds_class == "nemo.collections.tts.data.text_to_speech_dataset.TextToSpeechDataset":
655 self._train_dl = self._setup_train_dataloader(cfg)
656 else:
657 self._train_dl = self.__setup_dataloader_from_config(cfg)
658
659 def setup_validation_data(self, cfg):
660 if self.ds_class == "nemo.collections.tts.data.text_to_speech_dataset.TextToSpeechDataset":
661 self._validation_dl = self._setup_test_dataloader(cfg)
662 else:
663 self._validation_dl = self.__setup_dataloader_from_config(cfg, shuffle_should_be=False, name="val")
664
665 def setup_test_data(self, cfg):
666 """Omitted."""
667 pass
668
669 def configure_callbacks(self):
670 if not self.log_config:
671 return []
672
673 sample_ds_class = self.log_config.dataset._target_
674 if sample_ds_class != "nemo.collections.tts.data.text_to_speech_dataset.TextToSpeechDataset":
675 raise ValueError(f"Logging callback only supported for TextToSpeechDataset, got {sample_ds_class}")
676
677 data_loader = self._setup_test_dataloader(self.log_config)
678
679 generators = instantiate(self.log_config.generators)
680 log_dir = Path(self.log_config.log_dir) if self.log_config.log_dir else None
681 log_callback = LoggingCallback(
682 generators=generators,
683 data_loader=data_loader,
684 log_epochs=self.log_config.log_epochs,
685 epoch_frequency=self.log_config.epoch_frequency,
686 output_dir=log_dir,
687 loggers=self.trainer.loggers,
688 log_tensorboard=self.log_config.log_tensorboard,
689 log_wandb=self.log_config.log_wandb,
690 )
691
692 return [log_callback]
693
694 @classmethod
695 def list_available_models(cls) -> 'List[PretrainedModelInfo]':
696 """
697 This method returns a list of pre-trained model which can be instantiated directly from NVIDIA's NGC cloud.
698 Returns:
699 List of available pre-trained models.
700 """
701 list_of_models = []
702
703 # en-US, single speaker, 22050Hz, LJSpeech (ARPABET).
704 model = PretrainedModelInfo(
705 pretrained_model_name="tts_en_fastpitch",
706 location="https://api.ngc.nvidia.com/v2/models/nvidia/nemo/tts_en_fastpitch/versions/1.8.1/files/tts_en_fastpitch_align.nemo",
707             description="This model is trained on LJSpeech sampled at 22050Hz and can be used to generate female English voices with an American accent. It is ARPABET-based.",
708 class_=cls,
709 )
710 list_of_models.append(model)
711
712 # en-US, single speaker, 22050Hz, LJSpeech (IPA).
713 model = PretrainedModelInfo(
714 pretrained_model_name="tts_en_fastpitch_ipa",
715 location="https://api.ngc.nvidia.com/v2/models/nvidia/nemo/tts_en_fastpitch/versions/IPA_1.13.0/files/tts_en_fastpitch_align_ipa.nemo",
716             description="This model is trained on LJSpeech sampled at 22050Hz and can be used to generate female English voices with an American accent. It is IPA-based.",
717 class_=cls,
718 )
719 list_of_models.append(model)
720
721 # en-US, multi-speaker, 44100Hz, HiFiTTS.
722 model = PretrainedModelInfo(
723 pretrained_model_name="tts_en_fastpitch_multispeaker",
724 location="https://api.ngc.nvidia.com/v2/models/nvidia/nemo/tts_en_multispeaker_fastpitchhifigan/versions/1.10.0/files/tts_en_fastpitch_multispeaker.nemo",
725             description="This model is trained on HiFiTTS sampled at 44100Hz and can be used to generate male and female English voices with an American accent.",
726 class_=cls,
727 )
728 list_of_models.append(model)
729
730         # de-DE, single male speaker, grapheme-based tokenizer, 22050 Hz, Thorsten Müller's German Neutral-TTS Dataset, 21.02
731 model = PretrainedModelInfo(
732 pretrained_model_name="tts_de_fastpitch_singleSpeaker_thorstenNeutral_2102",
733 location="https://api.ngc.nvidia.com/v2/models/nvidia/nemo/tts_de_fastpitchhifigan/versions/1.15.0/files/tts_de_fastpitch_thorstens2102.nemo",
734             description="This model is trained on a single male speaker's data in Thorsten Müller's German Neutral 21.02 Dataset sampled at 22050Hz and can be used to generate male German voices.",
735 class_=cls,
736 )
737 list_of_models.append(model)
738
739         # de-DE, single male speaker, grapheme-based tokenizer, 22050 Hz, Thorsten Müller's German Neutral-TTS Dataset, 22.10
740 model = PretrainedModelInfo(
741 pretrained_model_name="tts_de_fastpitch_singleSpeaker_thorstenNeutral_2210",
742 location="https://api.ngc.nvidia.com/v2/models/nvidia/nemo/tts_de_fastpitchhifigan/versions/1.15.0/files/tts_de_fastpitch_thorstens2210.nemo",
743             description="This model is trained on a single male speaker's data in Thorsten Müller's German Neutral 22.10 Dataset sampled at 22050Hz and can be used to generate male German voices.",
744 class_=cls,
745 )
746 list_of_models.append(model)
747
748 # de-DE, multi-speaker, 5 speakers, 44100 Hz, HUI-Audio-Corpus-German Clean.
749 model = PretrainedModelInfo(
750 pretrained_model_name="tts_de_fastpitch_multispeaker_5",
751 location="https://api.ngc.nvidia.com/v2/models/nvidia/nemo/tts_de_fastpitch_multispeaker_5/versions/1.11.0/files/tts_de_fastpitch_multispeaker_5.nemo",
752             description="This model is trained on 5 speakers in the HUI-Audio-Corpus-German clean subset sampled at 44100Hz and can be used to generate male and female German voices.",
753 class_=cls,
754 )
755 list_of_models.append(model)
756
757 # es, 174 speakers, 44100Hz, OpenSLR (IPA)
758 model = PretrainedModelInfo(
759 pretrained_model_name="tts_es_fastpitch_multispeaker",
760 location="https://api.ngc.nvidia.com/v2/models/nvidia/nemo/tts_es_multispeaker_fastpitchhifigan/versions/1.15.0/files/tts_es_fastpitch_multispeaker.nemo",
761 description="This model is trained on 174 speakers in 6 crowdsourced Latin American Spanish OpenSLR datasets sampled at 44100Hz and can be used to generate male and female Spanish voices with Latin American accents.",
762 class_=cls,
763 )
764 list_of_models.append(model)
765
766 # zh, single female speaker, 22050Hz, SFSpeech Bilingual Chinese/English dataset, improved model using richer
767 # dict and jieba word segmenter for polyphone disambiguation.
768 model = PretrainedModelInfo(
769 pretrained_model_name="tts_zh_fastpitch_sfspeech",
770 location="https://api.ngc.nvidia.com/v2/models/nvidia/nemo/tts_zh_fastpitch_hifigan_sfspeech/versions/1.15.0/files/tts_zh_fastpitch_sfspeech.nemo",
771 description="This model is trained on a single female speaker in SFSpeech Bilingual Chinese/English dataset"
772 " sampled at 22050Hz and can be used to generate female Mandarin Chinese voices. It is improved"
773 " using richer dict and jieba word segmenter for polyphone disambiguation.",
774 class_=cls,
775 )
776 list_of_models.append(model)
777
778 # en, multi speaker, LibriTTS, 16000 Hz
779 # stft 25ms 10ms matching ASR params
780         # for use during English ASR training/adaptation
781 model = PretrainedModelInfo(
782 pretrained_model_name="tts_en_fastpitch_for_asr_finetuning",
783 location="https://api.ngc.nvidia.com/v2/models/nvidia/nemo/tts_en_fastpitch_spectrogram_enhancer_for_asr_finetuning/versions/1.20.0/files/tts_en_fastpitch_for_asr_finetuning.nemo",
784 description="This model is trained on LibriSpeech, train-960 subset."
785 " STFT parameters follow those commonly used in ASR: 25 ms window, 10 ms hop."
786             " This model is supposed to be used with its companion SpectrogramEnhancer for "
787 " ASR fine-tuning. Usage for regular TTS tasks is not advised.",
788 class_=cls,
789 )
790 list_of_models.append(model)
791
792 return list_of_models
793
794 # Methods for model exportability
795 def _prepare_for_export(self, **kwargs):
796 super()._prepare_for_export(**kwargs)
797
798 tensor_shape = ('T') if self.export_config["enable_ragged_batches"] else ('B', 'T')
799
800 # Define input_types and output_types as required by export()
801 self._input_types = {
802 "text": NeuralType(tensor_shape, TokenIndex()),
803 "pitch": NeuralType(tensor_shape, RegressionValuesType()),
804 "pace": NeuralType(tensor_shape),
805 "volume": NeuralType(tensor_shape, optional=True),
806 "batch_lengths": NeuralType(('B'), optional=True),
807 "speaker": NeuralType(('B'), Index(), optional=True),
808 }
809 self._output_types = {
810 "spect": NeuralType(('B', 'D', 'T'), MelSpectrogramType()),
811 "num_frames": NeuralType(('B'), TokenDurationType()),
812 "durs_predicted": NeuralType(('B', 'T'), TokenDurationType()),
813 "log_durs_predicted": NeuralType(('B', 'T'), TokenLogDurationType()),
814 "pitch_predicted": NeuralType(('B', 'T'), RegressionValuesType()),
815 }
816 if self.export_config["enable_volume"]:
817 self._output_types["volume_aligned"] = NeuralType(('B', 'T'), RegressionValuesType())
818
819 def _export_teardown(self):
820 self._input_types = self._output_types = None
821
822 @property
823 def disabled_deployment_input_names(self):
824 """Implement this method to return a set of input names disabled for export"""
825 disabled_inputs = set()
826 if self.fastpitch.speaker_emb is None:
827 disabled_inputs.add("speaker")
828 if not self.export_config["enable_ragged_batches"]:
829 disabled_inputs.add("batch_lengths")
830 if not self.export_config["enable_volume"]:
831 disabled_inputs.add("volume")
832 return disabled_inputs
833
834 @property
835 def input_types(self):
836 return self._input_types
837
838 @property
839 def output_types(self):
840 return self._output_types
841
842 def input_example(self, max_batch=1, max_dim=44):
843 """
844 Generates input examples for tracing etc.
845 Returns:
846 A tuple of input examples.
847 """
848 par = next(self.fastpitch.parameters())
849 inputs = sample_tts_input(self.export_config, par.device, max_batch=max_batch, max_dim=max_dim)
850 if 'enable_ragged_batches' not in self.export_config:
851 inputs.pop('batch_lengths', None)
852 return (inputs,)
853
854 def forward_for_export(self, text, pitch, pace, volume=None, batch_lengths=None, speaker=None):
855 if self.export_config["enable_ragged_batches"]:
856 text, pitch, pace, volume_tensor, lens = batch_from_ragged(
857 text, pitch, pace, batch_lengths, padding_idx=self.fastpitch.encoder.padding_idx, volume=volume
858 )
859 if volume is not None:
860 volume = volume_tensor
861 return self.fastpitch.infer(text=text, pitch=pitch, pace=pace, volume=volume, speaker=speaker)
862
863 def interpolate_speaker(
864 self, original_speaker_1, original_speaker_2, weight_speaker_1, weight_speaker_2, new_speaker_id
865 ):
866 """
867 This method performs speaker interpolation between two original speakers the model is trained on.
868
869 Inputs:
870 original_speaker_1: Integer speaker ID of first existing speaker in the model
871 original_speaker_2: Integer speaker ID of second existing speaker in the model
872             weight_speaker_1: Floating point weight applied to the first speaker's embedding in the combination
873             weight_speaker_2: Floating point weight applied to the second speaker's embedding in the combination
874 new_speaker_id: Integer speaker ID of new interpolated speaker in the model
875 """
876 if self.fastpitch.speaker_emb is None:
877 raise Exception(
878 "Current FastPitch model is not a multi-speaker FastPitch model. Speaker interpolation can only \
879 be performed with a multi-speaker model"
880 )
881 n_speakers = self.fastpitch.speaker_emb.weight.data.size()[0]
882 if original_speaker_1 >= n_speakers or original_speaker_2 >= n_speakers or new_speaker_id >= n_speakers:
883 raise Exception(
884                 f"Parameters original_speaker_1, original_speaker_2, new_speaker_id should be less than the \
885                 total number of speakers FastPitch was trained on (n_speakers = {n_speakers})."
886 )
887 speaker_emb_1 = (
888 self.fastpitch.speaker_emb(torch.tensor(original_speaker_1, dtype=torch.int32).cuda()).clone().detach()
889 )
890 speaker_emb_2 = (
891 self.fastpitch.speaker_emb(torch.tensor(original_speaker_2, dtype=torch.int32).cuda()).clone().detach()
892 )
893 new_speaker_emb = weight_speaker_1 * speaker_emb_1 + weight_speaker_2 * speaker_emb_2
894 self.fastpitch.speaker_emb.weight.data[new_speaker_id] = new_speaker_emb
895
[end of nemo/collections/tts/models/fastpitch.py]
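`interpolate_speaker` at the end of `fastpitch.py` builds the new speaker's embedding as a weighted sum of two existing speaker embeddings. The core arithmetic, sketched on plain lists rather than the `torch` tensors the model uses (names are illustrative):

```python
def interpolate_embedding(emb_1, emb_2, w_1, w_2):
    """Element-wise weighted combination: w_1 * emb_1 + w_2 * emb_2."""
    return [w_1 * a + w_2 * b for a, b in zip(emb_1, emb_2)]

# 25%/75% blend of two toy 2-dimensional speaker embeddings
print(interpolate_embedding([1.0, 0.0], [0.0, 1.0], 0.25, 0.75))  # [0.25, 0.75]
```

Setting the two weights to sum to 1 keeps the interpolated embedding on the segment between the originals, which is the usual choice when synthesizing an "in-between" voice.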
[start of nemo/collections/tts/models/tacotron2.py]
1 # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import contextlib
16 from dataclasses import dataclass
17 from typing import Any, Dict, List, Optional
18
19 import torch
20 from hydra.utils import instantiate
21 from omegaconf import MISSING, DictConfig, OmegaConf, open_dict
22 from omegaconf.errors import ConfigAttributeError
23 from pytorch_lightning.loggers import TensorBoardLogger, WandbLogger
24 from torch import nn
25
26 from nemo.collections.common.parts.preprocessing import parsers
27 from nemo.collections.tts.losses.tacotron2loss import Tacotron2Loss
28 from nemo.collections.tts.models.base import SpectrogramGenerator
29 from nemo.collections.tts.parts.utils.helpers import (
30 g2p_backward_compatible_support,
31 get_mask_from_lengths,
32 tacotron2_log_to_tb_func,
33 tacotron2_log_to_wandb_func,
34 )
35 from nemo.core.classes.common import PretrainedModelInfo, typecheck
36 from nemo.core.neural_types.elements import (
37 AudioSignal,
38 EmbeddedTextType,
39 LengthsType,
40 LogitsType,
41 MelSpectrogramType,
42 SequenceToSequenceAlignmentType,
43 )
44 from nemo.core.neural_types.neural_type import NeuralType
45 from nemo.utils import logging, model_utils
46
47
48 @dataclass
49 class Preprocessor:
50 _target_: str = MISSING
51 pad_value: float = MISSING
52
53
54 @dataclass
55 class Tacotron2Config:
56 preprocessor: Preprocessor = Preprocessor()
57 encoder: Dict[Any, Any] = MISSING
58 decoder: Dict[Any, Any] = MISSING
59 postnet: Dict[Any, Any] = MISSING
60 labels: List = MISSING
61 train_ds: Optional[Dict[Any, Any]] = None
62 validation_ds: Optional[Dict[Any, Any]] = None
63
64
65 class Tacotron2Model(SpectrogramGenerator):
66 """Tacotron 2 Model that is used to generate mel spectrograms from text"""
67
68 def __init__(self, cfg: DictConfig, trainer: 'Trainer' = None):
69 # Convert to Hydra 1.0 compatible DictConfig
70 cfg = model_utils.convert_model_config_to_dict_config(cfg)
71 cfg = model_utils.maybe_update_config_version(cfg)
72
73 # setup normalizer
74 self.normalizer = None
75 self.text_normalizer_call = None
76 self.text_normalizer_call_kwargs = {}
77 self._setup_normalizer(cfg)
78
79 # setup tokenizer
80 self.tokenizer = None
81 if hasattr(cfg, 'text_tokenizer'):
82 self._setup_tokenizer(cfg)
83
84 self.num_tokens = len(self.tokenizer.tokens)
85 self.tokenizer_pad = self.tokenizer.pad
86 self.tokenizer_unk = self.tokenizer.oov
87 # assert self.tokenizer is not None
88 else:
89 self.num_tokens = len(cfg.labels) + 3
90
91 super().__init__(cfg=cfg, trainer=trainer)
92
93 schema = OmegaConf.structured(Tacotron2Config)
94 # ModelPT ensures that cfg is a DictConfig, but do this second check in case ModelPT changes
95 if isinstance(cfg, dict):
96 cfg = OmegaConf.create(cfg)
97 elif not isinstance(cfg, DictConfig):
98 raise ValueError(f"cfg was type: {type(cfg)}. Expected either a dict or a DictConfig")
99 # Ensure passed cfg is compliant with schema
100 try:
101 OmegaConf.merge(cfg, schema)
102 self.pad_value = cfg.preprocessor.pad_value
103 except ConfigAttributeError:
104 self.pad_value = cfg.preprocessor.params.pad_value
105 logging.warning(
106 "Your config is using an old NeMo yaml configuration. Please ensure that the yaml matches the "
107 "current version in the main branch for future compatibility."
108 )
109
110 self._parser = None
111 self.audio_to_melspec_precessor = instantiate(cfg.preprocessor)
112 self.text_embedding = nn.Embedding(self.num_tokens, 512)
113 self.encoder = instantiate(self._cfg.encoder)
114 self.decoder = instantiate(self._cfg.decoder)
115 self.postnet = instantiate(self._cfg.postnet)
116 self.loss = Tacotron2Loss()
117 self.calculate_loss = True
118
119 @property
120 def parser(self):
121 if self._parser is not None:
122 return self._parser
123
124 ds_class_name = self._cfg.train_ds.dataset._target_.split(".")[-1]
125 if ds_class_name == "TTSDataset":
126 self._parser = None
127 elif hasattr(self._cfg, "labels"):
128 self._parser = parsers.make_parser(
129 labels=self._cfg.labels,
130 name='en',
131 unk_id=-1,
132 blank_id=-1,
133 do_normalize=True,
134 abbreviation_version="fastpitch",
135 make_table=False,
136 )
137 else:
138 raise ValueError("Wanted to setup parser, but model does not have necessary parameters")
139
140 return self._parser
141
142 def parse(self, text: str, normalize=True) -> torch.Tensor:
143 if self.training:
144 logging.warning("parse() is meant to be called in eval mode.")
145 if normalize and self.text_normalizer_call is not None:
146 text = self.text_normalizer_call(text, **self.text_normalizer_call_kwargs)
147
148 eval_phon_mode = contextlib.nullcontext()
149 if hasattr(self.tokenizer, "set_phone_prob"):
150 eval_phon_mode = self.tokenizer.set_phone_prob(prob=1.0)
151
152 with eval_phon_mode:
153 if self.tokenizer is not None:
154 tokens = self.tokenizer.encode(text)
155 else:
156 tokens = self.parser(text)
157 # Old parser doesn't add bos and eos ids, so manually add them
158 tokens = [len(self._cfg.labels)] + tokens + [len(self._cfg.labels) + 1]
159 tokens_tensor = torch.tensor(tokens).unsqueeze_(0).to(self.device)
160 return tokens_tensor
161
162 @property
163 def input_types(self):
164 if self.training:
165 return {
166 "tokens": NeuralType(('B', 'T'), EmbeddedTextType()),
167 "token_len": NeuralType(('B'), LengthsType()),
168 "audio": NeuralType(('B', 'T'), AudioSignal()),
169 "audio_len": NeuralType(('B'), LengthsType()),
170 }
171 else:
172 return {
173 "tokens": NeuralType(('B', 'T'), EmbeddedTextType()),
174 "token_len": NeuralType(('B'), LengthsType()),
175 "audio": NeuralType(('B', 'T'), AudioSignal(), optional=True),
176 "audio_len": NeuralType(('B'), LengthsType(), optional=True),
177 }
178
179 @property
180 def output_types(self):
181 if not self.calculate_loss and not self.training:
182 return {
183 "spec_pred_dec": NeuralType(('B', 'D', 'T'), MelSpectrogramType()),
184 "spec_pred_postnet": NeuralType(('B', 'D', 'T'), MelSpectrogramType()),
185 "gate_pred": NeuralType(('B', 'T'), LogitsType()),
186 "alignments": NeuralType(('B', 'T', 'T'), SequenceToSequenceAlignmentType()),
187 "pred_length": NeuralType(('B'), LengthsType()),
188 }
189 return {
190 "spec_pred_dec": NeuralType(('B', 'D', 'T'), MelSpectrogramType()),
191 "spec_pred_postnet": NeuralType(('B', 'D', 'T'), MelSpectrogramType()),
192 "gate_pred": NeuralType(('B', 'T'), LogitsType()),
193 "spec_target": NeuralType(('B', 'D', 'T'), MelSpectrogramType()),
194 "spec_target_len": NeuralType(('B'), LengthsType()),
195 "alignments": NeuralType(('B', 'T', 'T'), SequenceToSequenceAlignmentType()),
196 }
197
198 @typecheck()
199 def forward(self, *, tokens, token_len, audio=None, audio_len=None):
200 if audio is not None and audio_len is not None:
201 spec_target, spec_target_len = self.audio_to_melspec_precessor(audio, audio_len)
202 else:
203 if self.training or self.calculate_loss:
204 raise ValueError(
205 "'audio' and 'audio_len' cannot be None when either 'self.training' or 'self.calculate_loss' is True."
206 )
207
208 token_embedding = self.text_embedding(tokens).transpose(1, 2)
209 encoder_embedding = self.encoder(token_embedding=token_embedding, token_len=token_len)
210
211 if self.training:
212 spec_pred_dec, gate_pred, alignments = self.decoder(
213 memory=encoder_embedding, decoder_inputs=spec_target, memory_lengths=token_len
214 )
215 else:
216 spec_pred_dec, gate_pred, alignments, pred_length = self.decoder(
217 memory=encoder_embedding, memory_lengths=token_len
218 )
219
220 spec_pred_postnet = self.postnet(mel_spec=spec_pred_dec)
221
222 if not self.calculate_loss and not self.training:
223 return spec_pred_dec, spec_pred_postnet, gate_pred, alignments, pred_length
224
225 return spec_pred_dec, spec_pred_postnet, gate_pred, spec_target, spec_target_len, alignments
226
227 @typecheck(
228 input_types={"tokens": NeuralType(('B', 'T'), EmbeddedTextType())},
229 output_types={"spec": NeuralType(('B', 'D', 'T'), MelSpectrogramType())},
230 )
231 def generate_spectrogram(self, *, tokens):
232 self.eval()
233 self.calculate_loss = False
234 token_len = torch.tensor([len(i) for i in tokens]).to(self.device)
235 tensors = self(tokens=tokens, token_len=token_len)
236 spectrogram_pred = tensors[1]
237
238 if spectrogram_pred.shape[0] > 1:
239 # Silence all frames past the predicted end
240 mask = ~get_mask_from_lengths(tensors[-1])
241 mask = mask.expand(spectrogram_pred.shape[1], mask.size(0), mask.size(1))
242 mask = mask.permute(1, 0, 2)
243 spectrogram_pred.data.masked_fill_(mask, self.pad_value)
244
245 return spectrogram_pred
246
247 def training_step(self, batch, batch_idx):
248 audio, audio_len, tokens, token_len = batch
249 spec_pred_dec, spec_pred_postnet, gate_pred, spec_target, spec_target_len, _ = self.forward(
250 audio=audio, audio_len=audio_len, tokens=tokens, token_len=token_len
251 )
252
253 loss, _ = self.loss(
254 spec_pred_dec=spec_pred_dec,
255 spec_pred_postnet=spec_pred_postnet,
256 gate_pred=gate_pred,
257 spec_target=spec_target,
258 spec_target_len=spec_target_len,
259 pad_value=self.pad_value,
260 )
261
262 output = {
263 'loss': loss,
264 'progress_bar': {'training_loss': loss},
265 'log': {'loss': loss},
266 }
267 return output
268
269 def validation_step(self, batch, batch_idx):
270 audio, audio_len, tokens, token_len = batch
271 spec_pred_dec, spec_pred_postnet, gate_pred, spec_target, spec_target_len, alignments = self.forward(
272 audio=audio, audio_len=audio_len, tokens=tokens, token_len=token_len
273 )
274
275 loss, gate_target = self.loss(
276 spec_pred_dec=spec_pred_dec,
277 spec_pred_postnet=spec_pred_postnet,
278 gate_pred=gate_pred,
279 spec_target=spec_target,
280 spec_target_len=spec_target_len,
281 pad_value=self.pad_value,
282 )
283 loss = {
284 "val_loss": loss,
285 "mel_target": spec_target,
286 "mel_postnet": spec_pred_postnet,
287 "gate": gate_pred,
288 "gate_target": gate_target,
289 "alignments": alignments,
290 }
291 self.validation_step_outputs.append(loss)
292 return loss
293
294 def on_validation_epoch_end(self):
295 if self.logger is not None and self.logger.experiment is not None:
296 logger = self.logger.experiment
297 for logger in self.trainer.loggers:
298 if isinstance(logger, TensorBoardLogger):
299 logger = logger.experiment
300 break
301 if isinstance(logger, TensorBoardLogger):
302 tacotron2_log_to_tb_func(
303 logger,
304 self.validation_step_outputs[0].values(),
305 self.global_step,
306 tag="val",
307 log_images=True,
308 add_audio=False,
309 )
310 elif isinstance(logger, WandbLogger):
311 tacotron2_log_to_wandb_func(
312 logger,
313 self.validation_step_outputs[0].values(),
314 self.global_step,
315 tag="val",
316 log_images=True,
317 add_audio=False,
318 )
319 avg_loss = torch.stack(
320 [x['val_loss'] for x in self.validation_step_outputs]
321 ).mean() # This reduces across batches, not workers!
322 self.log('val_loss', avg_loss)
323 self.validation_step_outputs.clear() # free memory
324
325 def _setup_normalizer(self, cfg):
326 if "text_normalizer" in cfg:
327 normalizer_kwargs = {}
328
329 if "whitelist" in cfg.text_normalizer:
330 normalizer_kwargs["whitelist"] = self.register_artifact(
331 'text_normalizer.whitelist', cfg.text_normalizer.whitelist
332 )
333
334 try:
335 import nemo_text_processing
336
337 self.normalizer = instantiate(cfg.text_normalizer, **normalizer_kwargs)
338 except Exception as e:
339 logging.error(e)
340 raise ImportError(
341 "`nemo_text_processing` not installed, see https://github.com/NVIDIA/NeMo-text-processing for more details"
342 )
343
344 self.text_normalizer_call = self.normalizer.normalize
345 if "text_normalizer_call_kwargs" in cfg:
346 self.text_normalizer_call_kwargs = cfg.text_normalizer_call_kwargs
347
348 def _setup_tokenizer(self, cfg):
349 text_tokenizer_kwargs = {}
350 if "g2p" in cfg.text_tokenizer and cfg.text_tokenizer.g2p is not None:
351 # for backward compatibility
352 if (
353 self._is_model_being_restored()
354 and (cfg.text_tokenizer.g2p.get('_target_', None) is not None)
355 and cfg.text_tokenizer.g2p["_target_"].startswith("nemo_text_processing.g2p")
356 ):
357 cfg.text_tokenizer.g2p["_target_"] = g2p_backward_compatible_support(
358 cfg.text_tokenizer.g2p["_target_"]
359 )
360
361 g2p_kwargs = {}
362
363 if "phoneme_dict" in cfg.text_tokenizer.g2p:
364 g2p_kwargs["phoneme_dict"] = self.register_artifact(
365 'text_tokenizer.g2p.phoneme_dict', cfg.text_tokenizer.g2p.phoneme_dict,
366 )
367
368 if "heteronyms" in cfg.text_tokenizer.g2p:
369 g2p_kwargs["heteronyms"] = self.register_artifact(
370 'text_tokenizer.g2p.heteronyms', cfg.text_tokenizer.g2p.heteronyms,
371 )
372
373 text_tokenizer_kwargs["g2p"] = instantiate(cfg.text_tokenizer.g2p, **g2p_kwargs)
374
375 self.tokenizer = instantiate(cfg.text_tokenizer, **text_tokenizer_kwargs)
376
377 def __setup_dataloader_from_config(self, cfg, shuffle_should_be: bool = True, name: str = "train"):
378 if "dataset" not in cfg or not isinstance(cfg.dataset, DictConfig):
379 raise ValueError(f"No dataset for {name}")
380 if "dataloader_params" not in cfg or not isinstance(cfg.dataloader_params, DictConfig):
381 raise ValueError(f"No dataloder_params for {name}")
382 if shuffle_should_be:
383 if 'shuffle' not in cfg.dataloader_params:
384 logging.warning(
385 f"Shuffle should be set to True for {self}'s {name} dataloader but was not found in its "
386 "config. Manually setting to True"
387 )
388 with open_dict(cfg.dataloader_params):
389 cfg.dataloader_params.shuffle = True
390 elif not cfg.dataloader_params.shuffle:
391 logging.error(f"The {name} dataloader for {self} has shuffle set to False!!!")
392 elif not shuffle_should_be and cfg.dataloader_params.shuffle:
393 logging.error(f"The {name} dataloader for {self} has shuffle set to True!!!")
394
395 dataset = instantiate(
396 cfg.dataset,
397 text_normalizer=self.normalizer,
398 text_normalizer_call_kwargs=self.text_normalizer_call_kwargs,
399 text_tokenizer=self.tokenizer,
400 )
401
402 return torch.utils.data.DataLoader(dataset, collate_fn=dataset.collate_fn, **cfg.dataloader_params)
403
404 def setup_training_data(self, cfg):
405 self._train_dl = self.__setup_dataloader_from_config(cfg)
406
407 def setup_validation_data(self, cfg):
408 self._validation_dl = self.__setup_dataloader_from_config(cfg, shuffle_should_be=False, name="validation")
409
410 @classmethod
411 def list_available_models(cls) -> 'List[PretrainedModelInfo]':
412 """
413 This method returns a list of pre-trained models which can be instantiated directly from NVIDIA's NGC cloud.
414 Returns:
415 List of available pre-trained models.
416 """
417 list_of_models = []
418 model = PretrainedModelInfo(
419 pretrained_model_name="tts_en_tacotron2",
420 location="https://api.ngc.nvidia.com/v2/models/nvidia/nemo/tts_en_tacotron2/versions/1.10.0/files/tts_en_tacotron2.nemo",
421 description="This model is trained on LJSpeech sampled at 22050Hz, and can be used to generate female English voices with an American accent.",
422 class_=cls,
423 aliases=["Tacotron2-22050Hz"],
424 )
425 list_of_models.append(model)
426 return list_of_models
427
[end of nemo/collections/tts/models/tacotron2.py]
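The length-masking step in `generate_spectrogram` above (silencing all predicted frames past each utterance's predicted end) can be sketched without torch. `lengths_to_mask` and `silence_past_end` below are hypothetical stand-ins for NeMo's `get_mask_from_lengths` and the `masked_fill_` call, using plain Python lists for brevity:

```python
# Minimal sketch of the length-masking trick from generate_spectrogram.
# Each batch entry is treated as a 1-D sequence of frames; the real model
# applies the inverted mask across the mel-channel dimension as well.

def lengths_to_mask(lengths, max_len=None):
    """mask[b][t] is True for valid frames, i.e. t < lengths[b]."""
    max_len = max_len if max_len is not None else max(lengths)
    return [[t < length for t in range(max_len)] for length in lengths]


def silence_past_end(spectrograms, lengths, pad_value=0.0):
    """Overwrite every frame past the predicted end with the pad value."""
    mask = lengths_to_mask(lengths, max_len=len(spectrograms[0]))
    return [
        [frame if valid else pad_value for frame, valid in zip(spec, row)]
        for spec, row in zip(spectrograms, mask)
    ]
```

In the model itself the mask is inverted (`~get_mask_from_lengths(...)`), expanded over the mel channels, and applied in place with `masked_fill_(mask, self.pad_value)`; the logic is the same.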
[start of nemo/core/config/modelPT.py]
1 # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from dataclasses import dataclass, field
16 from typing import Any, Dict, Optional
17
18 from omegaconf import MISSING
19
20 from nemo.core import config
21 from nemo.core.classes.dataset import DatasetConfig
22 from nemo.utils import exp_manager
23
24
25 @dataclass
26 class SchedConfig:
27 name: str = MISSING
28 min_lr: float = 0.0
29 last_epoch: int = -1
30
31
32 @dataclass
33 class OptimConfig:
34 name: str = MISSING
35 sched: Optional[SchedConfig] = None
36
37
38 @dataclass
39 class ModelConfig:
40 """
41 Model component inside ModelPT
42 """
43
44 # ...
45 train_ds: Optional[DatasetConfig] = None
46 validation_ds: Optional[DatasetConfig] = None
47 test_ds: Optional[DatasetConfig] = None
48 optim: Optional[OptimConfig] = None
49
50
51 @dataclass
52 class HydraConfig:
53 run: Dict[str, Any] = field(default_factory=lambda: {"dir": "."})
54 job_logging: Dict[str, Any] = field(default_factory=lambda: {"root": {"handlers": None}})
55
56
57 @dataclass
58 class NemoConfig:
59 name: str = MISSING
60 model: ModelConfig = MISSING
61 trainer: config.TrainerConfig = config.TrainerConfig(
62 strategy="ddp", enable_checkpointing=False, logger=False, log_every_n_steps=1, accelerator='gpu'
63 )
64 exp_manager: Optional[Any] = exp_manager.ExpManagerConfig()
65 hydra: HydraConfig = HydraConfig()
66
67
68 class ModelConfigBuilder:
69 def __init__(self, model_cfg: ModelConfig):
70 """
71 Base class for any Model Config Builder.
72
73 A Model Config Builder is a utility class that accepts a ModelConfig dataclass,
74 and via a set of utility methods (that are implemented by the subclassed ModelConfigBuilder),
75 builds a finalized ModelConfig that can be supplied to a NemoModel dataclass as
76 the `model` component.
77
78 Subclasses *must* implement the private method `_finalize_cfg`.
79 Inside this method, they must update `self.model_cfg` with all interdependent config
80 options that need to be set (either updated by user explicitly or with their default value).
81
82 The updated model config must then be preserved in `self.model_cfg`.
83
84 Example:
85 # Create the config builder
86 config_builder = <subclass>ModelConfigBuilder()
87
88 # Update the components of the config that are modifiable
89 config_builder.set_X(X)
90 config_builder.set_Y(Y)
91
92 # Create a "finalized" config dataclass that will contain all the updates
93 # that were specified by the builder
94 model_config = config_builder.build()
95
96 # Use model config as is (or further update values), then create a new Model
97 model = nemo.<domain>.models.<ModelName>Model(cfg=model_config, trainer=Trainer())
98
99 Supported build methods:
100 - set_train_ds: All model configs can accept a subclass of `DatasetConfig` as their
101 training config. Subclasses can override this method to enable auto-complete
102 by replacing `Optional[DatasetConfig]` with `Optional[<subclass of DatasetConfig>]`.
103
104 - set_validation_ds: All model configs can accept a subclass of `DatasetConfig` as their
105 validation config. Subclasses can override this method to enable auto-complete
106 by replacing `Optional[DatasetConfig]` with `Optional[<subclass of DatasetConfig>]`.
107
108 - set_test_ds: All model configs can accept a subclass of `DatasetConfig` as their
109 test config. Subclasses can override this method to enable auto-complete
110 by replacing `Optional[DatasetConfig]` with `Optional[<subclass of DatasetConfig>]`.
111
112 - set_optim: A build method that supports changes to the Optimizer (and optionally,
113 the Scheduler) used for training the model. The function accepts two inputs -
114
115 `cfg`: A subclass of `OptimizerParams` - any OptimizerParams subclass can be used,
116 in order to select an appropriate Optimizer. Examples: AdamParams.
117
118 `sched_cfg`: A subclass of `SchedulerParams` - any SchedulerParams subclass can be used,
119 in order to select an appropriate Scheduler. Examples: CosineAnnealingParams.
120 Note that this argument is optional.
121
122 - build(): The method which should return a "finalized" ModelConfig dataclass.
123 Subclasses *should* always override this method, and update the signature
124 of this method with the return type of the Dataclass, so that it enables
125 autocomplete for the user.
126
127 Example:
128 def build(self) -> EncDecCTCConfig:
129 return super().build()
130
131 Any additional build methods must be added by subclasses of ModelConfigBuilder.
132
133 Args:
134 model_cfg:
135 """
136 self.model_cfg = model_cfg
137 self.train_ds_cfg = None
138 self.validation_ds_cfg = None
139 self.test_ds_cfg = None
140 self.optim_cfg = None
141
142 def set_train_ds(self, cfg: Optional[DatasetConfig] = None):
143 self.model_cfg.train_ds = cfg
144
145 def set_validation_ds(self, cfg: Optional[DatasetConfig] = None):
146 self.model_cfg.validation_ds = cfg
147
148 def set_test_ds(self, cfg: Optional[DatasetConfig] = None):
149 self.model_cfg.test_ds = cfg
150
151 def set_optim(self, cfg: config.OptimizerParams, sched_cfg: Optional[config.SchedulerParams] = None):
152 @dataclass
153 class WrappedOptimConfig(OptimConfig, cfg.__class__):
154 pass
155
156 # Setup optim
157 optim_name = cfg.__class__.__name__.replace("Params", "").lower()
158 wrapped_cfg = WrappedOptimConfig(name=optim_name, sched=None, **vars(cfg))
159
160 if sched_cfg is not None:
161
162 @dataclass
163 class WrappedSchedConfig(SchedConfig, sched_cfg.__class__):
164 pass
165
166 # Setup scheduler
167 sched_name = sched_cfg.__class__.__name__.replace("Params", "")
168 wrapped_sched_cfg = WrappedSchedConfig(name=sched_name, **vars(sched_cfg))
169
170 wrapped_cfg.sched = wrapped_sched_cfg
171
172 self.model_cfg.optim = wrapped_cfg
173
174 def _finalize_cfg(self):
175 raise NotImplementedError()
176
177 def build(self) -> ModelConfig:
178 # validate config
179 self._finalize_cfg()
180
181 return self.model_cfg
182
[end of nemo/core/config/modelPT.py]
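The builder pattern described in `ModelConfigBuilder`'s docstring can be illustrated with a self-contained toy version. The `Toy*` classes below are illustrative stand-ins, not NeMo's API; they show the set-then-finalize flow (`set_X(...)` calls followed by `build()`, which runs `_finalize_cfg` before returning the config):

```python
from dataclasses import dataclass
from typing import Optional

# Toy stand-ins for NeMo's DatasetConfig / ModelConfig dataclasses.
@dataclass
class ToyDatasetConfig:
    batch_size: int = 32

@dataclass
class ToyModelConfig:
    train_ds: Optional[ToyDatasetConfig] = None
    validation_ds: Optional[ToyDatasetConfig] = None

class ToyModelConfigBuilder:
    def __init__(self, model_cfg: ToyModelConfig):
        self.model_cfg = model_cfg

    def set_train_ds(self, cfg: Optional[ToyDatasetConfig] = None):
        self.model_cfg.train_ds = cfg

    def set_validation_ds(self, cfg: Optional[ToyDatasetConfig] = None):
        self.model_cfg.validation_ds = cfg

    def _finalize_cfg(self):
        # Subclasses resolve interdependent options here; this toy version
        # just enforces that a training dataset was configured.
        if self.model_cfg.train_ds is None:
            raise ValueError("train_ds must be set before build()")

    def build(self) -> ToyModelConfig:
        self._finalize_cfg()
        return self.model_cfg

builder = ToyModelConfigBuilder(ToyModelConfig())
builder.set_train_ds(ToyDatasetConfig(batch_size=16))
cfg = builder.build()
```

The real builder additionally wraps optimizer/scheduler params into `OptimConfig`/`SchedConfig` subclasses (see `set_optim` above), but the control flow is the same.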
[start of nemo/utils/exp_manager.py]
1 # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import glob
16 import os
17 import subprocess
18 import sys
19 import time
20 import warnings
21 from dataclasses import dataclass
22 from datetime import timedelta
23 from pathlib import Path
24 from shutil import copy, move
25 from typing import Any, Dict, List, Optional, Tuple, Union
26
27 import pytorch_lightning
28 import torch
29 from hydra.core.hydra_config import HydraConfig
30 from hydra.utils import get_original_cwd
31 from omegaconf import DictConfig, OmegaConf, open_dict
32 from pytorch_lightning.callbacks import Callback, ModelCheckpoint
33 from pytorch_lightning.callbacks.early_stopping import EarlyStopping
34 from pytorch_lightning.callbacks.timer import Interval, Timer
35 from pytorch_lightning.loggers import MLFlowLogger, TensorBoardLogger, WandbLogger
36 from pytorch_lightning.loops import _TrainingEpochLoop
37 from pytorch_lightning.strategies.ddp import DDPStrategy
38
39 from nemo.collections.common.callbacks import EMA
40 from nemo.constants import NEMO_ENV_VARNAME_TESTING, NEMO_ENV_VARNAME_VERSION
41 from nemo.utils import logging, timers
42 from nemo.utils.app_state import AppState
43 from nemo.utils.callbacks import NeMoModelCheckpoint, PreemptionCallback
44 from nemo.utils.env_var_parsing import get_envbool
45 from nemo.utils.exceptions import NeMoBaseException
46 from nemo.utils.get_rank import is_global_rank_zero
47 from nemo.utils.lightning_logger_patch import add_filehandlers_to_pl_logger
48 from nemo.utils.loggers import ClearMLLogger, ClearMLParams, DLLogger, DLLoggerParams, MLFlowParams
49 from nemo.utils.model_utils import uninject_model_parallel_rank
50
51
52 class NotFoundError(NeMoBaseException):
53 """ Raised when a file or folder is not found"""
54
55
56 class LoggerMisconfigurationError(NeMoBaseException):
57 """ Raised when a mismatch between trainer.logger and exp_manager occurs"""
58
59 def __init__(self, message):
60 message = (
61 message
62 + " You can disable lightning's trainer from creating a logger by passing logger=False to its constructor."
63 )
64 super().__init__(message)
65
66
67 class CheckpointMisconfigurationError(NeMoBaseException):
68 """ Raised when a mismatch between trainer.callbacks and exp_manager occurs"""
69
70
71 @dataclass
72 class EarlyStoppingParams:
73 monitor: str = "val_loss" # The metric that early stopping should consider.
74 mode: str = "min" # inform early stopping whether to look for increase or decrease in monitor.
75 min_delta: float = 0.001 # smallest change to consider as improvement.
76 patience: int = 10  # number of consecutive validation cycles with no improvement before stopping training.
77 verbose: bool = True
78 strict: bool = True
79 check_finite: bool = True
80 stopping_threshold: Optional[float] = None
81 divergence_threshold: Optional[float] = None
82 check_on_train_epoch_end: Optional[bool] = None
83 log_rank_zero_only: bool = False
84
85
86 @dataclass
87 class CallbackParams:
88 filepath: Optional[str] = None # Deprecated
89 dirpath: Optional[str] = None # If None, exp_manager will attempt to handle the filepath
90 filename: Optional[str] = None # If None, exp_manager will attempt to handle the filepath
91 monitor: Optional[str] = "val_loss"
92 verbose: Optional[bool] = True
93 save_last: Optional[bool] = True
94 save_top_k: Optional[int] = 3
95 save_weights_only: Optional[bool] = False
96 mode: Optional[str] = "min"
97 auto_insert_metric_name: bool = True
98 every_n_epochs: Optional[int] = 1
99 every_n_train_steps: Optional[int] = None
100 train_time_interval: Optional[str] = None
101 prefix: Optional[str] = None # If None, exp_manager will attempt to handle the filepath
102 postfix: str = ".nemo"
103 save_best_model: bool = False
104 always_save_nemo: bool = False
105 save_nemo_on_train_end: Optional[bool] = True  # Whether to automatically save .nemo file during the on_train_end hook
106 model_parallel_size: Optional[int] = None # tensor parallel size * pipeline parallel size
107 save_on_train_epoch_end: Optional[bool] = False # Save after training, not after validation
108
109
110 @dataclass
111 class StepTimingParams:
112 reduction: Optional[str] = "mean"
113 # if True torch.cuda.synchronize() is called on start/stop
114 sync_cuda: Optional[bool] = False
115 # if positive, defines the size of a sliding window for computing mean
116 buffer_size: Optional[int] = 1
117
118
119 @dataclass
120 class EMAParams:
121 enable: Optional[bool] = False
122 decay: Optional[float] = 0.999
123 cpu_offload: Optional[bool] = False
124 validate_original_weights: Optional[bool] = False
125 every_n_steps: int = 1
126
127
128 @dataclass
129 class ExpManagerConfig:
130 """Experiment Manager config for validation of passed arguments.
131 """
132
133 # Log dir creation parameters
134 explicit_log_dir: Optional[str] = None
135 exp_dir: Optional[str] = None
136 name: Optional[str] = None
137 version: Optional[str] = None
138 use_datetime_version: Optional[bool] = True
139 resume_if_exists: Optional[bool] = False
140 resume_past_end: Optional[bool] = False
141 resume_ignore_no_checkpoint: Optional[bool] = False
142 resume_from_checkpoint: Optional[str] = None
143 # Logging parameters
144 create_tensorboard_logger: Optional[bool] = True
145 summary_writer_kwargs: Optional[Dict[Any, Any]] = None
146 create_wandb_logger: Optional[bool] = False
147 wandb_logger_kwargs: Optional[Dict[Any, Any]] = None
148 create_mlflow_logger: Optional[bool] = False
149 mlflow_logger_kwargs: Optional[MLFlowParams] = MLFlowParams()
150 create_dllogger_logger: Optional[bool] = False
151 dllogger_logger_kwargs: Optional[DLLoggerParams] = DLLoggerParams()
152 create_clearml_logger: Optional[bool] = False
153 clearml_logger_kwargs: Optional[ClearMLParams] = ClearMLParams()
154 # Checkpointing parameters
155 create_checkpoint_callback: Optional[bool] = True
156 checkpoint_callback_params: Optional[CallbackParams] = CallbackParams()
157 create_early_stopping_callback: Optional[bool] = False
158 early_stopping_callback_params: Optional[EarlyStoppingParams] = EarlyStoppingParams()
159 create_preemption_callback: Optional[bool] = True
160 # Additional exp_manager arguments
161 files_to_copy: Optional[List[str]] = None
162 # logs timing of train/val/test steps
163 log_step_timing: Optional[bool] = True
164 step_timing_kwargs: Optional[StepTimingParams] = StepTimingParams()
165 # Configures creation of log files for different ranks
166 log_local_rank_0_only: Optional[bool] = False
167 log_global_rank_0_only: Optional[bool] = False
168 # disable initial validation when resuming from a checkpoint saved during validation
169 disable_validation_on_resume: Optional[bool] = True
170 ema: Optional[EMAParams] = EMAParams()
171 # Wall clock time limit
172 max_time_per_run: Optional[str] = None
173 # time to sleep non 0 ranks during initialization
174 seconds_to_sleep: float = 5
175
176
177 class TimingCallback(Callback):
178 """
179 Logs execution time of train/val/test steps
180 """
181
182 def __init__(self, timer_kwargs={}):
183 self.timer = timers.NamedTimer(**timer_kwargs)
184
185 def _on_batch_start(self, name):
186 # reset only if we do not return mean of a sliding window
187 if self.timer.buffer_size <= 0:
188 self.timer.reset(name)
189
190 self.timer.start(name)
191
192 def _on_batch_end(self, name, pl_module):
193 self.timer.stop(name)
194 # Set `batch_size=1` as a workaround for `dataloader_iter`, which is not used for any metric
195 pl_module.log(
196 name + ' in s',
197 self.timer[name],
198 on_step=True,
199 on_epoch=False,
200 batch_size=1,
201 prog_bar=(name == "train_step_timing"),
202 )
203
204 def on_train_batch_start(self, trainer, pl_module, batch, batch_idx):
205 self._on_batch_start("train_step_timing")
206
207 def on_train_batch_end(self, trainer, pl_module, outputs, batch, batch_idx):
208 self._on_batch_end("train_step_timing", pl_module)
209
210 def on_validation_batch_start(self, trainer, pl_module, batch, batch_idx, dataloader_idx=0):
211 self._on_batch_start("validation_step_timing")
212
213 def on_validation_batch_end(self, trainer, pl_module, outputs, batch, batch_idx, dataloader_idx=0):
214 self._on_batch_end("validation_step_timing", pl_module)
215
216 def on_test_batch_start(self, trainer, pl_module, batch, batch_idx, dataloader_idx=0):
217 self._on_batch_start("test_step_timing")
218
219 def on_test_batch_end(self, trainer, pl_module, outputs, batch, batch_idx, dataloader_idx=0):
220 self._on_batch_end("test_step_timing", pl_module)
221
222 def on_before_backward(self, trainer, pl_module, loss):
223 self._on_batch_start("train_backward_timing")
224
225 def on_after_backward(self, trainer, pl_module):
226 self._on_batch_end("train_backward_timing", pl_module)
227
228
229 def exp_manager(trainer: 'pytorch_lightning.Trainer', cfg: Optional[Union[DictConfig, Dict]] = None) -> Optional[Path]:
230 """
231 exp_manager is a helper function used to manage folders for experiments. It follows the pytorch lightning paradigm
232 of exp_dir/model_or_experiment_name/version. If the lightning trainer has a logger, exp_manager will get exp_dir,
233 name, and version from the logger. Otherwise it will use the exp_dir and name arguments to create the logging
234 directory. exp_manager also allows for explicit folder creation via explicit_log_dir.
235
236 The version can be a datetime string or an integer. The datetime version can be disabled if use_datetime_version is set
237 to False. It optionally creates TensorBoardLogger, WandBLogger, DLLogger, MLFlowLogger, ClearMLLogger,
238 ModelCheckpoint objects from pytorch lightning.
239 It copies sys.argv, and git information if available to the logging directory. It creates a log file for each
240 process to log their output into.
241
242 exp_manager additionally has a resume feature (resume_if_exists) which can be used to continue training from
243 the constructed log_dir. When you need to continue training repeatedly (such as on a cluster where you need
244 multiple consecutive jobs), you need to avoid creating version folders. Therefore, from v1.0.0, when
245 resume_if_exists is set to True, version folders are not created.
246
247 Args:
248 trainer (pytorch_lightning.Trainer): The lightning trainer.
249 cfg (DictConfig, dict): Can have the following keys:
250
251 - explicit_log_dir (str, Path): Can be used to override exp_dir/name/version folder creation. Defaults to
252 None, which will use exp_dir, name, and version to construct the logging directory.
253 - exp_dir (str, Path): The base directory to create the logging directory. Defaults to None, which logs to
254 ./nemo_experiments.
255 - name (str): The name of the experiment. Defaults to None which turns into "default" via name = name or
256 "default".
257 - version (str): The version of the experiment. Defaults to None which uses either a datetime string or
258 lightning's TensorboardLogger system of using version_{int}.
259 - use_datetime_version (bool): Whether to use a datetime string for version. Defaults to True.
260 - resume_if_exists (bool): Whether this experiment is resuming from a previous run. If True, it sets
261 trainer._checkpoint_connector._ckpt_path so that the trainer should auto-resume. exp_manager will move files
262 under log_dir to log_dir/run_{int}. Defaults to False. From v1.0.0, when resume_if_exists is True,
263 we would not create version folders to make it easier to find the log folder for next runs.
264 - resume_past_end (bool): exp_manager errors out if resume_if_exists is True and a checkpoint matching
265 ``*end.ckpt`` (indicating that a previous training run fully completed) is found. This behaviour can be
266 disabled, in which case the ``*end.ckpt`` will be loaded, by setting resume_past_end to True. Defaults to False.
267 - resume_ignore_no_checkpoint (bool): exp_manager errors out if resume_if_exists is True and no checkpoint
268 could be found. This behaviour can be disabled, in which case exp_manager will print a message and
269 continue without restoring, by setting resume_ignore_no_checkpoint to True. Defaults to False.
270 - resume_from_checkpoint (str): Can be used to specify a path to a specific checkpoint file to load from. This will
271 override any checkpoint found when resume_if_exists is True. Defaults to None.
272 - create_tensorboard_logger (bool): Whether to create a tensorboard logger and attach it to the pytorch
273 lightning trainer. Defaults to True.
274 - summary_writer_kwargs (dict): A dictionary of kwargs that can be passed to lightning's TensorboardLogger
275 class. Note that log_dir is passed by exp_manager and cannot exist in this dict. Defaults to None.
276 - create_wandb_logger (bool): Whether to create a Weights and Biases logger and attach it to the pytorch
277 lightning trainer. Defaults to False.
278 - wandb_logger_kwargs (dict): A dictionary of kwargs that can be passed to lightning's WandBLogger
279 class. Note that name and project are required parameters if create_wandb_logger is True.
280 Defaults to None.
281 - create_mlflow_logger (bool): Whether to create an MLFlow logger and attach it to the pytorch lightning
282 trainer. Defaults to False
283 - mlflow_logger_kwargs (dict): optional parameters for the MLFlow logger
284 - create_dllogger_logger (bool): Whether to create a DLLogger logger and attach it to the pytorch lightning
285 trainer. Defaults to False
286 - dllogger_logger_kwargs (dict): optional parameters for the DLLogger logger
287 - create_clearml_logger (bool): Whether to create a ClearML logger and attach it to the pytorch lightning
288 trainer. Defaults to False
289 - clearml_logger_kwargs (dict): optional parameters for the ClearML logger
290 - create_checkpoint_callback (bool): Whether to create a ModelCheckpoint callback and attach it to the
291 pytorch lightning trainer. The ModelCheckpoint saves the top 3 models with the best "val_loss", the most
292 recent checkpoint under ``*last.ckpt``, and the final checkpoint after training completes under ``*end.ckpt``.
293 Defaults to True.
294 - create_early_stopping_callback (bool): Flag to decide if early stopping should be used to stop training. Default is False.
295 See EarlyStoppingParams dataclass above.
296 - create_preemption_callback (bool): Flag to decide whether to enable preemption callback to save checkpoints and exit training
297 immediately upon preemption. Default is True.
298 - files_to_copy (list): A list of files to copy to the experiment logging directory. Defaults to None which
299 copies no files.
300 - log_local_rank_0_only (bool): Whether to only create log files for local rank 0. Defaults to False.
301 Set this to True if you are using DDP with many GPUs and do not want many log files in your exp dir.
302 - log_global_rank_0_only (bool): Whether to only create log files for global rank 0. Defaults to False.
303 Set this to True if you are using DDP with many GPUs and do not want many log files in your exp dir.
304 - max_time (str): The maximum wall clock time *per run*. This is intended to be used on clusters where you want
305 a checkpoint to be saved after this specified time and be able to resume from that checkpoint. Defaults to None.
306 - seconds_to_sleep (float): Seconds to sleep non-zero-rank processes for. Used to give rank 0 enough time to initialize.
307
308 returns:
309 log_dir (Path): The final logging directory where logging files are saved. Usually the concatenation of
310 exp_dir, name, and version.
311 """
312 # Add rank information to logger
313 # Note: trainer.global_rank and trainer.is_global_zero are not set until trainer.fit, so have to hack around it
314 local_rank = int(os.environ.get("LOCAL_RANK", 0))
315 global_rank = trainer.node_rank * trainer.num_devices + local_rank
316 logging.rank = global_rank
317
318 if cfg is None:
319 logging.error("exp_manager did not receive a cfg argument. It will be disabled.")
320 return
321 if trainer.fast_dev_run:
322 logging.info("Trainer was called with fast_dev_run. exp_manager will return without any functionality.")
323 return
324
325 # Ensure passed cfg is compliant with ExpManagerConfig
326 schema = OmegaConf.structured(ExpManagerConfig)
327 if isinstance(cfg, dict):
328 cfg = OmegaConf.create(cfg)
329 elif not isinstance(cfg, DictConfig):
330 raise ValueError(f"cfg was type: {type(cfg)}. Expected either a dict or a DictConfig")
331 cfg = OmegaConf.create(OmegaConf.to_container(cfg, resolve=True))
332 cfg = OmegaConf.merge(schema, cfg)
333
334 error_checks(trainer, cfg) # Ensures that trainer options are compliant with NeMo and exp_manager arguments
335
336 log_dir, exp_dir, name, version = get_log_dir(
337 trainer=trainer,
338 exp_dir=cfg.exp_dir,
339 name=cfg.name,
340 version=cfg.version,
341 explicit_log_dir=cfg.explicit_log_dir,
342 use_datetime_version=cfg.use_datetime_version,
343 resume_if_exists=cfg.resume_if_exists,
344 )
345
346 check_resume(
347 trainer,
348 log_dir,
349 cfg.resume_if_exists,
350 cfg.resume_past_end,
351 cfg.resume_ignore_no_checkpoint,
352 cfg.checkpoint_callback_params.dirpath,
353 cfg.resume_from_checkpoint,
354 )
355
356 checkpoint_name = name
357 # If name returned from get_log_dir is "", use cfg.name for checkpointing
358 if checkpoint_name is None or checkpoint_name == '':
359 checkpoint_name = cfg.name or "default"
360
361 # Set mlflow name if it's not set, before the main name is erased
362 if cfg.create_mlflow_logger and (not cfg.mlflow_logger_kwargs.get("experiment_name", None)):
363 cfg.mlflow_logger_kwargs.experiment_name = cfg.name
364 logging.warning(
365 'mlflow logger specified but no experiment name set. Using the same as Tensorboard: %s',
366 cfg.mlflow_logger_kwargs.experiment_name,
367 )
368
369 cfg.name = name # Used for configure_loggers so that the log_dir is properly set even if name is ""
370 cfg.version = version
371
372 # update app_state with log_dir, exp_dir, etc
373 app_state = AppState()
374 app_state.log_dir = log_dir
375 app_state.exp_dir = exp_dir
376 app_state.name = name
377 app_state.version = version
378 app_state.checkpoint_name = checkpoint_name
379 app_state.create_checkpoint_callback = cfg.create_checkpoint_callback
380 app_state.checkpoint_callback_params = cfg.checkpoint_callback_params
381
382 # Create the logging directory if it does not exist
383 os.makedirs(log_dir, exist_ok=True) # Cannot limit creation to global zero as all ranks write to own log file
384 logging.info(f'Experiments will be logged at {log_dir}')
385 trainer._default_root_dir = log_dir
386
387 if cfg.log_local_rank_0_only is True and cfg.log_global_rank_0_only is True:
388 raise ValueError(
389 f"Cannot set both log_local_rank_0_only and log_global_rank_0_only to True. Please set either one or neither."
390 )
391
392 # This is set if the env var NEMO_TESTING is set to True.
393 nemo_testing = get_envbool(NEMO_ENV_VARNAME_TESTING, False)
394
395 # Handle logging to file
396 log_file = log_dir / f'nemo_log_globalrank-{global_rank}_localrank-{local_rank}.txt'
397 if cfg.log_local_rank_0_only is True and not nemo_testing:
398 if local_rank == 0:
399 logging.add_file_handler(log_file)
400 elif cfg.log_global_rank_0_only is True and not nemo_testing:
401 if global_rank == 0:
402 logging.add_file_handler(log_file)
403 else:
404 # Logs on all ranks.
405 logging.add_file_handler(log_file)
406
407 # For some reason, LearningRateLogger requires trainer to have a logger. Safer to create logger on all ranks
408 # not just global rank 0.
409 if (
410 cfg.create_tensorboard_logger
411 or cfg.create_wandb_logger
412 or cfg.create_mlflow_logger
413 or cfg.create_dllogger_logger
414 or cfg.create_clearml_logger
415 ):
416 configure_loggers(
417 trainer,
418 exp_dir,
419 log_dir,
420 cfg.name,
421 cfg.version,
422 cfg.checkpoint_callback_params,
423 cfg.create_tensorboard_logger,
424 cfg.summary_writer_kwargs,
425 cfg.create_wandb_logger,
426 cfg.wandb_logger_kwargs,
427 cfg.create_mlflow_logger,
428 cfg.mlflow_logger_kwargs,
429 cfg.create_dllogger_logger,
430 cfg.dllogger_logger_kwargs,
431 cfg.create_clearml_logger,
432 cfg.clearml_logger_kwargs,
433 )
434
435 # add loggers timing callbacks
436 if cfg.log_step_timing:
437 timing_callback = TimingCallback(timer_kwargs=cfg.step_timing_kwargs or {})
438 trainer.callbacks.insert(0, timing_callback)
439
440 if cfg.ema.enable:
441 ema_callback = EMA(
442 decay=cfg.ema.decay,
443 validate_original_weights=cfg.ema.validate_original_weights,
444 cpu_offload=cfg.ema.cpu_offload,
445 every_n_steps=cfg.ema.every_n_steps,
446 )
447 trainer.callbacks.append(ema_callback)
448
449 if cfg.create_early_stopping_callback:
450 early_stop_callback = EarlyStopping(**cfg.early_stopping_callback_params)
451 trainer.callbacks.append(early_stop_callback)
452
453 if cfg.create_checkpoint_callback:
454 configure_checkpointing(
455 trainer,
456 log_dir,
457 checkpoint_name,
458 cfg.resume_if_exists,
459 cfg.checkpoint_callback_params,
460 cfg.create_preemption_callback,
461 )
462
463 if cfg.disable_validation_on_resume:
464 # extend training loop to skip initial validation when resuming from checkpoint
465 configure_no_restart_validation_training_loop(trainer)
466 # Setup a stateless timer for use on clusters.
467 if cfg.max_time_per_run is not None:
468 found_ptl_timer = False
469 for idx, callback in enumerate(trainer.callbacks):
470 if isinstance(callback, Timer):
471 # NOTE: PTL does not expose a `trainer.max_time`. By the time we are in this function, PTL has already setup a timer if the user specifies `trainer.max_time` so best we can do is replace that.
472 # Working: If only `trainer.max_time` is set - it behaves as a normal PTL timer. If only `exp_manager.max_time_per_run` is set - it behaves as a StateLessTimer. If both are set, it also behaves as a StateLessTimer.
473 logging.warning(
'Found a PTL Timer callback, replacing with a StatelessTimer callback. This will happen if you set trainer.max_time as well as exp_manager.max_time_per_run.'
475 )
476 trainer.callbacks[idx] = StatelessTimer(cfg.max_time_per_run)
477 found_ptl_timer = True
478 break
479
480 if not found_ptl_timer:
481 trainer.max_time = cfg.max_time_per_run
482 trainer.callbacks.append(StatelessTimer(cfg.max_time_per_run))
483
484 if is_global_rank_zero():
485 # Move files_to_copy to folder and add git information if present
486 if cfg.files_to_copy:
487 for _file in cfg.files_to_copy:
488 copy(Path(_file), log_dir)
489
490 # Create files for cmd args and git info
491 with open(log_dir / 'cmd-args.log', 'w', encoding='utf-8') as _file:
492 _file.write(" ".join(sys.argv))
493
494 # Try to get git hash
495 git_repo, git_hash = get_git_hash()
496 if git_repo:
497 with open(log_dir / 'git-info.log', 'w', encoding='utf-8') as _file:
498 _file.write(f'commit hash: {git_hash}')
499 _file.write(get_git_diff())
500
501 # Add err_file logging to global_rank zero
502 logging.add_err_file_handler(log_dir / 'nemo_error_log.txt')
503
504 # Add lightning file logging to global_rank zero
505 add_filehandlers_to_pl_logger(log_dir / 'lightning_logs.txt', log_dir / 'nemo_error_log.txt')
506
507 elif trainer.num_nodes * trainer.num_devices > 1:
508 # sleep other ranks so rank 0 can finish
509 # doing the initialization such as moving files
510 time.sleep(cfg.seconds_to_sleep)
511
512 return log_dir
513
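As context for the rank handling at the top of `exp_manager` (reading `LOCAL_RANK` from the environment because `trainer.global_rank` is not set until `trainer.fit`), the arithmetic can be sketched as a standalone helper. `compute_global_rank` is a name introduced here for illustration, not a NeMo API:

```python
import os


def compute_global_rank(node_rank: int, num_devices: int) -> int:
    # Same arithmetic exp_manager uses before trainer.fit populates
    # trainer.global_rank: one contiguous rank per (node, local device) pair.
    local_rank = int(os.environ.get("LOCAL_RANK", 0))
    return node_rank * num_devices + local_rank
```

With `LOCAL_RANK=3`, node rank 1, and 8 devices per node, this yields global rank 11.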
514
515 def error_checks(trainer: 'pytorch_lightning.Trainer', cfg: Optional[Union[DictConfig, Dict]] = None):
516 """
517 Checks that the passed trainer is compliant with NeMo and exp_manager's passed configuration. Checks that:
518 - Throws error when hydra has changed the working directory. This causes issues with lightning's DDP
519 - Throws error when trainer has loggers defined but create_tensorboard_logger or create_wandB_logger
520 or create_mlflow_logger or create_dllogger_logger is True
521 - Prints error messages when 1) run on multi-node and not Slurm, and 2) run on multi-gpu without DDP
522 """
523 if HydraConfig.initialized() and get_original_cwd() != os.getcwd():
524 raise ValueError(
525 "Hydra changed the working directory. This interferes with ExpManger's functionality. Please pass "
526 "hydra.run.dir=. to your python script."
527 )
528 if trainer.logger is not None and (
529 cfg.create_tensorboard_logger or cfg.create_wandb_logger or cfg.create_mlflow_logger
530 ):
531 raise LoggerMisconfigurationError(
532 "The pytorch lightning trainer that was passed to exp_manager contained a logger, and either "
533 f"create_tensorboard_logger: {cfg.create_tensorboard_logger} or create_wandb_logger: "
534 f"{cfg.create_wandb_logger} or create_mlflow_logger: {cfg.create_mlflow_logger}"
535 f"or create_dllogger_logger: {cfg.create_mlflow_logger} was set to True. "
536 "These can only be used if trainer does not already have a logger."
537 )
538 if trainer.num_nodes > 1 and not check_slurm(trainer):
539 logging.error(
540 "You are running multi-node training without SLURM handling the processes."
541 " Please note that this is not tested in NeMo and could result in errors."
542 )
543 if trainer.num_devices > 1 and not isinstance(trainer.strategy, DDPStrategy):
544 logging.error(
545 "You are running multi-gpu without ddp.Please note that this is not tested in NeMo and could result in "
546 "errors."
547 )
548
549
550 def check_resume(
551 trainer: 'pytorch_lightning.Trainer',
552 log_dir: str,
553 resume_if_exists: bool = False,
554 resume_past_end: bool = False,
555 resume_ignore_no_checkpoint: bool = False,
556 dirpath: str = None,
557 resume_from_checkpoint: str = None,
558 ):
559 """Checks that resume=True was used correctly with the arguments pass to exp_manager. Sets
560 trainer._checkpoint_connector._ckpt_path as necessary.
561
562 Note:
563 This function does not return a value. When a usable checkpoint is found, it
564 sets trainer.ckpt_path so that PyTorch Lightning resumes training from it;
565 otherwise it warns, raises, or leaves the trainer untouched depending on the
566 resume_* flags.
567
568 Raises:
569 NotFoundError: If resume is True, resume_ignore_no_checkpoint is False, and checkpoints could not be found.
570 ValueError: If resume is True and more than one matching checkpoint was found.
571 """
572
573 if not log_dir:
574 raise ValueError(f"Resuming requires the log_dir {log_dir} to be passed to exp_manager")
575
576 checkpoint = None
577 if resume_from_checkpoint:
578 checkpoint = resume_from_checkpoint
579 if resume_if_exists:
580 # Use <log_dir>/checkpoints/ unless `dirpath` is set
581 checkpoint_dir = Path(dirpath) if dirpath else Path(Path(log_dir) / "checkpoints")
582
583 # when using distributed checkpointing, checkpoint_dir is a directory of directories
584 # we check for this here
585 dist_checkpoints = [d for d in list(checkpoint_dir.glob("*")) if d.is_dir()]
586 end_dist_checkpoints = [d for d in dist_checkpoints if d.match("*end")]
587 last_dist_checkpoints = [d for d in dist_checkpoints if d.match("*last")]
588
589 end_checkpoints = end_dist_checkpoints if end_dist_checkpoints else list(checkpoint_dir.rglob("*end.ckpt"))
590 last_checkpoints = last_dist_checkpoints if last_dist_checkpoints else list(checkpoint_dir.rglob("*last.ckpt"))
591
592 if not checkpoint_dir.exists() or (not len(end_checkpoints) > 0 and not len(last_checkpoints) > 0):
593 if resume_ignore_no_checkpoint:
594 warn = f"There were no checkpoints found in checkpoint_dir or no checkpoint folder at checkpoint_dir :{checkpoint_dir}. "
595 if checkpoint is None:
596 warn += "Training from scratch."
597 elif checkpoint == resume_from_checkpoint:
598 warn += f"Training from {resume_from_checkpoint}."
599 logging.warning(warn)
600 else:
601 raise NotFoundError(
602 f"There were no checkpoints found in checkpoint_dir or no checkpoint folder at checkpoint_dir :{checkpoint_dir}. Cannot resume."
603 )
604 elif len(end_checkpoints) > 0:
605 if resume_past_end:
606 if len(end_checkpoints) > 1:
607 if 'mp_rank' in str(end_checkpoints[0]):
608 checkpoint = end_checkpoints[0]
609 else:
610 raise ValueError(f"Multiple checkpoints {end_checkpoints} that matches *end.ckpt.")
611 else:
612 raise ValueError(
613 f"Found {end_checkpoints[0]} indicating that the last training run has already completed."
614 )
615 elif len(last_checkpoints) > 1:
616 if 'mp_rank' in str(last_checkpoints[0]) or 'tp_rank' in str(last_checkpoints[0]):
617 checkpoint = last_checkpoints[0]
618 checkpoint = uninject_model_parallel_rank(checkpoint)
619 else:
620 raise ValueError(f"Multiple checkpoints {last_checkpoints} that matches *last.ckpt.")
621 else:
622 checkpoint = last_checkpoints[0]
623
624 # PTL 2.0 supports ckpt_path instead of resume_from_checkpoint as the trainer flag
625 if checkpoint is not None:
626 trainer.ckpt_path = str(checkpoint)
627 logging.info(f'Resuming training from checkpoint: {trainer.ckpt_path}')
628
629 if is_global_rank_zero():
630 # Check to see if any files exist that need to be moved
631 files_to_move = []
632 if Path(log_dir).exists():
633 for child in Path(log_dir).iterdir():
634 if child.is_file():
635 files_to_move.append(child)
636
637 if len(files_to_move) > 0:
638 # Move old files to a new folder
639 other_run_dirs = Path(log_dir).glob("run_*")
640 run_count = 0
641 for fold in other_run_dirs:
642 if fold.is_dir():
643 run_count += 1
644 new_run_dir = Path(Path(log_dir) / f"run_{run_count}")
645 new_run_dir.mkdir()
646 for _file in files_to_move:
647 move(str(_file), str(new_run_dir))
648
649
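The selection logic in `check_resume` reduces to a precedence rule over the `*end.ckpt` and `*last.ckpt` matches. A simplified, stdlib-only sketch of that rule (model-parallel `mp_rank`/`tp_rank` handling omitted; `pick_resume_checkpoint` is a hypothetical helper, assuming the checkpoint lists were already gathered via `rglob`):

```python
from pathlib import Path


def pick_resume_checkpoint(end_checkpoints, last_checkpoints, resume_past_end=False):
    # Precedence rule from check_resume, simplified: a *end.ckpt means the run
    # finished (error unless resume_past_end); otherwise resume from *last.ckpt.
    if end_checkpoints:
        if not resume_past_end:
            raise ValueError(f"Found {end_checkpoints[0]}: the last training run already completed.")
        if len(end_checkpoints) > 1:
            raise ValueError(f"Multiple checkpoints match *end.ckpt: {end_checkpoints}")
        return end_checkpoints[0]
    if len(last_checkpoints) > 1:
        raise ValueError(f"Multiple checkpoints match *last.ckpt: {last_checkpoints}")
    return last_checkpoints[0] if last_checkpoints else None
```

Returning `None` corresponds to the "no checkpoint found" branch, which the real function turns into a warning or a `NotFoundError` depending on `resume_ignore_no_checkpoint`.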
650 def check_explicit_log_dir(
651 trainer: 'pytorch_lightning.Trainer', explicit_log_dir: Union[Path, str], exp_dir: str, name: str, version: str
652 ) -> Tuple[Path, str, str, str]:
653 """ Checks that the passed arguments are compatible with explicit_log_dir.
654
655 Returns:
656 log_dir (Path): the log_dir
657 exp_dir (str): the base exp_dir without name nor version
658 name (str): The name of the experiment
659 version (str): The version of the experiment
660
661 Raises:
662 LoggerMisconfigurationError
663 """
664 if trainer.logger is not None:
665 raise LoggerMisconfigurationError(
666 "The pytorch lightning trainer that was passed to exp_manager contained a logger and explicit_log_dir: "
667 f"{explicit_log_dir} was pass to exp_manager. Please remove the logger from the lightning trainer."
668 )
669 # Checking only (explicit_log_dir) vs (exp_dir and version).
670 # The `name` will be used as the actual name of checkpoint/archive.
671 if exp_dir or version:
672 logging.error(
673 f"exp_manager received explicit_log_dir: {explicit_log_dir} and at least one of exp_dir: {exp_dir}, "
674 f"or version: {version}. Please note that exp_dir, name, and version will be ignored."
675 )
676 if is_global_rank_zero() and Path(explicit_log_dir).exists():
677 logging.warning(f"Exp_manager is logging to {explicit_log_dir}, but it already exists.")
678 return Path(explicit_log_dir), str(explicit_log_dir), "", ""
679
680
681 def get_log_dir(
682 trainer: 'pytorch_lightning.Trainer',
683 exp_dir: str = None,
684 name: str = None,
685 version: str = None,
686 explicit_log_dir: str = None,
687 use_datetime_version: bool = True,
688 resume_if_exists: bool = False,
689 ) -> Tuple[Path, str, str, str]:
690 """
691 Obtains the log_dir used for exp_manager.
692
693 Returns:
694 log_dir (Path): the log_dir
695 exp_dir (str): the base exp_dir without name nor version
696 name (str): The name of the experiment
697 version (str): The version of the experiment
698 explicit_log_dir (str): The explicit path to the log folder. Defaults to None.
699 use_datetime_version (bool): Uses date and time as the version of the log folder. Defaults to True.
700 resume_if_exists (bool): Whether resume_if_exists is enabled in the exp_manager config. When enabled, no
701 version folder is created.
702
703 Raises:
704 LoggerMisconfigurationError: If trainer is incompatible with arguments
705 NotFoundError: If resume is True, resume_ignore_no_checkpoint is False, and checkpoints could not be found.
706 ValueError: If resume is True and more than one matching checkpoint was found.
707 """
708 if explicit_log_dir: # If explicit log_dir was passed, short circuit
709 return check_explicit_log_dir(trainer, explicit_log_dir, exp_dir, name, version)
710
711 # Default exp_dir to ./nemo_experiments if None was passed
712 _exp_dir = exp_dir
713 if exp_dir is None:
714 _exp_dir = str(Path.cwd() / 'nemo_experiments')
715
716 # If the user has already defined a logger for the trainer, use the logger defaults for logging directory
717 if trainer.logger is not None:
718 if trainer.logger.save_dir:
719 if exp_dir:
720 raise LoggerMisconfigurationError(
721 "The pytorch lightning trainer that was passed to exp_manager contained a logger, the logger's "
722 f"save_dir was not None, and exp_dir ({exp_dir}) was not None. If trainer.logger.save_dir "
723 "exists, exp_manager will use trainer.logger.save_dir as the logging directory and exp_dir "
724 "must be None."
725 )
726 _exp_dir = trainer.logger.save_dir
727 if name:
728 raise LoggerMisconfigurationError(
729 "The pytorch lightning trainer that was passed to exp_manager contained a logger, and name: "
730 f"{name} was also passed to exp_manager. If the trainer contains a "
731 "logger, exp_manager will use trainer.logger.name, and name passed to exp_manager must be None."
732 )
733 name = trainer.logger.name
734 version = f"version_{trainer.logger.version}"
735 # Use user-defined exp_dir, project_name, exp_name, and versioning options
736 else:
737 name = name or "default"
738 version = version or os.environ.get(NEMO_ENV_VARNAME_VERSION, None)
739
740 if not version:
741 if resume_if_exists:
742 logging.warning(
743 "No version folders would be created under the log folder as 'resume_if_exists' is enabled."
744 )
745 version = None
746 elif is_global_rank_zero():
747 if use_datetime_version:
748 version = time.strftime('%Y-%m-%d_%H-%M-%S')
749 else:
750 tensorboard_logger = TensorBoardLogger(save_dir=Path(_exp_dir), name=name, version=version)
751 version = f"version_{tensorboard_logger.version}"
752 os.environ[NEMO_ENV_VARNAME_VERSION] = "" if version is None else version
753
754 log_dir = Path(_exp_dir) / Path(str(name)) / Path("" if version is None else str(version))
755 return log_dir, str(_exp_dir), name, version
756
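`get_log_dir`'s default chain can be summarized in a few lines. A minimal sketch, assuming the same fallbacks (cwd `nemo_experiments`, name `"default"`, a datetime-stamped version); `build_log_dir` is an illustrative name, not part of the module:

```python
import time
from pathlib import Path


def build_log_dir(exp_dir=None, name=None, version=None, use_datetime_version=True):
    # Mirrors get_log_dir's fallback chain: cwd/nemo_experiments, "default" name,
    # and a datetime-stamped version when none is given.
    exp_dir = exp_dir or str(Path.cwd() / "nemo_experiments")
    name = name or "default"
    if version is None and use_datetime_version:
        version = time.strftime('%Y-%m-%d_%H-%M-%S')
    return Path(exp_dir) / name / (version or "")
```

So `build_log_dir("/tmp/exp", "run", "v1")` yields `/tmp/exp/run/v1`, matching the `exp_dir / name / version` concatenation the real function returns.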
757
758 def get_git_hash():
759 """
760 Helper function that tries to get the commit hash if running inside a git folder
761
762 returns:
763 Bool: Whether the git subprocess ran without error
764 str: git subprocess output or error message
765 """
766 try:
767 return (
768 True,
769 subprocess.check_output(['git', 'rev-parse', 'HEAD'], stderr=subprocess.STDOUT).decode(),
770 )
771 except subprocess.CalledProcessError as err:
772 return False, "{}\n".format(err.output.decode("utf-8"))
773
774
775 def get_git_diff():
776 """
777 Helper function that tries to get the git diff if running inside a git folder
778
779 returns:
780 Bool: Whether the git subprocess ran without error
781 str: git subprocess output or error message
782 """
783 try:
784 return subprocess.check_output(['git', 'diff'], stderr=subprocess.STDOUT).decode()
785 except subprocess.CalledProcessError as err:
786 return "{}\n".format(err.output.decode("utf-8"))
787
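As a usage sketch, the two git helpers above can be combined into the string that `exp_manager` writes to `git-info.log`; `get_git_info` is a hypothetical wrapper introduced here, which degrades gracefully when git or a repository is unavailable:

```python
import subprocess


def get_git_info() -> str:
    # Combines the hash and diff lookups into the git-info.log payload;
    # falls back to an error string outside a repo or without git installed.
    try:
        commit = subprocess.check_output(
            ['git', 'rev-parse', 'HEAD'], stderr=subprocess.STDOUT
        ).decode().strip()
        diff = subprocess.check_output(['git', 'diff'], stderr=subprocess.STDOUT).decode()
        return f'commit hash: {commit}\n{diff}'
    except (subprocess.CalledProcessError, FileNotFoundError) as err:
        return f'not a git repository or git unavailable: {err}\n'
```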
788
789 def configure_loggers(
790 trainer: 'pytorch_lightning.Trainer',
791 exp_dir: [Path, str],
792 log_dir: [Path, str],
793 name: str,
794 version: str,
795 checkpoint_callback_params: dict,
796 create_tensorboard_logger: bool,
797 summary_writer_kwargs: dict,
798 create_wandb_logger: bool,
799 wandb_kwargs: dict,
800 create_mlflow_logger: bool,
801 mlflow_kwargs: dict,
802 create_dllogger_logger: bool,
803 dllogger_kwargs: dict,
804 create_clearml_logger: bool,
805 clearml_kwargs: dict,
806 ):
807 """
808 Creates TensorboardLogger and/or WandBLogger / MLFlowLogger / DLlogger / ClearMLLogger and attach them to trainer.
809 Raises ValueError if summary_writer_kwargs or wandb_kwargs are misconfigured.
810 """
811 # Potentially create tensorboard logger and/or WandBLogger / MLFlowLogger / DLLogger
812 logger_list = []
813 if create_tensorboard_logger:
814 if summary_writer_kwargs is None:
815 summary_writer_kwargs = {}
816 elif "log_dir" in summary_writer_kwargs:
817 raise ValueError(
818 "You cannot pass `log_dir` as part of `summary_writer_kwargs`. `log_dir` is handled by lightning's "
819 "TensorBoardLogger logger."
820 )
821 tensorboard_logger = TensorBoardLogger(save_dir=exp_dir, name=name, version=version, **summary_writer_kwargs)
822 logger_list.append(tensorboard_logger)
823 logging.info("TensorboardLogger has been set up")
824
825 if create_wandb_logger:
826 if wandb_kwargs is None:
827 wandb_kwargs = {}
828 if "name" not in wandb_kwargs and "project" not in wandb_kwargs:
829 raise ValueError("name and project are required for wandb_logger")
830
831 # Update the wandb save_dir
832 if wandb_kwargs.get('save_dir', None) is None:
833 wandb_kwargs['save_dir'] = exp_dir
834 os.makedirs(wandb_kwargs['save_dir'], exist_ok=True)
835 wandb_logger = WandbLogger(version=version, **wandb_kwargs)
836
837 logger_list.append(wandb_logger)
838 logging.info("WandBLogger has been set up")
839
840 if create_mlflow_logger:
841 mlflow_logger = MLFlowLogger(run_name=version, **mlflow_kwargs)
842
843 logger_list.append(mlflow_logger)
844 logging.info("MLFlowLogger has been set up")
845
846 if create_dllogger_logger:
847 dllogger_logger = DLLogger(**dllogger_kwargs)
848
849 logger_list.append(dllogger_logger)
850 logging.info("DLLogger has been set up")
851
852 if create_clearml_logger:
853 clearml_logger = ClearMLLogger(
854 clearml_cfg=clearml_kwargs,
855 log_dir=log_dir,
856 prefix=name,
857 save_best_model=checkpoint_callback_params.save_best_model,
858 )
859
860 logger_list.append(clearml_logger)
861 logging.info("ClearMLLogger has been set up")
862
863 trainer._logger_connector.configure_logger(logger_list)
864
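The argument checks in `configure_loggers` can be exercised ahead of time. A small pre-flight sketch (`validate_logger_kwargs` is an illustrative helper; it mirrors the documented constraints above, not every logger branch):

```python
def validate_logger_kwargs(summary_writer_kwargs=None, wandb_kwargs=None):
    # Pre-flight version of configure_loggers' checks: log_dir belongs to
    # lightning's TensorBoardLogger, and wandb requires name/project.
    summary_writer_kwargs = summary_writer_kwargs or {}
    if "log_dir" in summary_writer_kwargs:
        raise ValueError("`log_dir` is handled by lightning's TensorBoardLogger logger.")
    if wandb_kwargs is not None and "name" not in wandb_kwargs and "project" not in wandb_kwargs:
        raise ValueError("name and project are required for wandb_logger")
    return summary_writer_kwargs, wandb_kwargs
```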
865
866 def configure_checkpointing(
867 trainer: 'pytorch_lightning.Trainer',
868 log_dir: Path,
869 name: str,
870 resume: bool,
871 params: 'DictConfig',
872 create_preemption_callback: bool,
873 ):
874 """ Adds ModelCheckpoint to trainer. Raises CheckpointMisconfigurationError if trainer already has a ModelCheckpoint
875 callback
876 """
877 for callback in trainer.callbacks:
878 if isinstance(callback, ModelCheckpoint):
879 raise CheckpointMisconfigurationError(
880 "The pytorch lightning trainer that was passed to exp_manager contained a ModelCheckpoint "
881 "and create_checkpoint_callback was set to True. Please either set create_checkpoint_callback "
882 "to False, or remove ModelCheckpoint from the lightning trainer"
883 )
884 # Create the callback and attach it to trainer
885 if "filepath" in params:
886 if params.filepath is not None:
887 logging.warning("filepath is deprecated. Please switch to dirpath and filename instead")
888 if params.dirpath is None:
889 params.dirpath = Path(params.filepath).parent
890 if params.filename is None:
891 params.filename = Path(params.filepath).name
892 with open_dict(params):
893 del params["filepath"]
894 if params.dirpath is None:
895 params.dirpath = Path(log_dir / 'checkpoints')
896 if params.filename is None:
897 params.filename = f'{name}--{{{params.monitor}:.4f}}-{{epoch}}'
898 if params.prefix is None:
899 params.prefix = name
900 NeMoModelCheckpoint.CHECKPOINT_NAME_LAST = params.filename + '-last'
901
902 logging.debug(params.dirpath)
903 logging.debug(params.filename)
904 logging.debug(params.prefix)
905
906 if "val" in params.monitor:
907 if (
908 trainer.max_epochs is not None
909 and trainer.max_epochs != -1
910 and trainer.max_epochs < trainer.check_val_every_n_epoch
911 ):
912 logging.error(
913 "The checkpoint callback was told to monitor a validation value but trainer.max_epochs("
914 f"{trainer.max_epochs}) was less than trainer.check_val_every_n_epoch({trainer.check_val_every_n_epoch}"
915 f"). It is very likely this run will fail with ModelCheckpoint(monitor='{params.monitor}') not found "
916 "in the returned metrics. Please ensure that validation is run within trainer.max_epochs."
917 )
918 elif trainer.max_steps is not None and trainer.max_steps != -1:
919 logging.warning(
920 "The checkpoint callback was told to monitor a validation value and trainer's max_steps was set to "
921 f"{trainer.max_steps}. Please ensure that max_steps will run for at least "
922 f"{trainer.check_val_every_n_epoch} epochs to ensure that checkpointing will not error out."
923 )
924
925 checkpoint_callback = NeMoModelCheckpoint(n_resume=resume, **params)
926 checkpoint_callback.last_model_path = trainer.ckpt_path or ""
927 if 'mp_rank' in checkpoint_callback.last_model_path or 'tp_rank' in checkpoint_callback.last_model_path:
928 checkpoint_callback.last_model_path = uninject_model_parallel_rank(checkpoint_callback.last_model_path)
929 trainer.callbacks.append(checkpoint_callback)
930 if create_preemption_callback:
931 # Check if cuda is available as preemption is supported only on GPUs
932 if torch.cuda.is_available():
933 ## By default PreemptionCallback handles SIGTERM. To handle other signals pass the signal in the call as below:
934 ## PreemptionCallback(checkpoint_callback, signal.SIGCHLD)
935 preemption_callback = PreemptionCallback(checkpoint_callback)
936 trainer.callbacks.append(preemption_callback)
937 else:
938 logging.info("Preemption is supported only on GPUs, disabling preemption")
939
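The default filename template built in `configure_checkpointing` expands into a `ModelCheckpoint` pattern whose inner braces are left for PTL to fill at save time. A sketch of the same f-string, with the `-last` suffix appended as for `CHECKPOINT_NAME_LAST` (`default_checkpoint_filename` is a name introduced here):

```python
def default_checkpoint_filename(name: str, monitor: str = "val_loss") -> str:
    # Reproduces the default template: doubled braces survive the f-string,
    # so {monitor:.4f} and {epoch} remain placeholders for ModelCheckpoint.
    return f'{name}--{{{monitor}:.4f}}-{{epoch}}'
```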
940
941 def check_slurm(trainer):
942 try:
943 return trainer.accelerator_connector.is_slurm_managing_tasks
944 except AttributeError:
945 return False
946
947
948 class StatelessTimer(Timer):
949 """Extension of PTL timers to be per run."""
950
951 def __init__(self, duration: timedelta = None, interval: str = Interval.step, verbose: bool = True,) -> None:
952 super().__init__(duration, interval, verbose)
953
954 # Override PTL Timer's state dict to not store elapsed time information so that we can restore and continue training.
955 def state_dict(self) -> Dict[str, Any]:
956 return {}
957
958 def load_state_dict(self, state_dict: Dict[str, Any]) -> None:
959 return
960
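`StatelessTimer` works because an empty `state_dict` means no elapsed time is restored on resume, so each run gets the full `max_time_per_run` budget. A self-contained sketch of the contrast (`PersistentTimer` and `StatelessTimerSketch` are stand-ins for the PTL classes, illustrating the assumed behavior only):

```python
class PersistentTimer:
    # Stand-in for PTL's Timer: elapsed time survives checkpoint save/restore,
    # so a resumed run inherits the wall-clock time already spent.
    def __init__(self):
        self.elapsed = 0.0

    def state_dict(self):
        return {"elapsed": self.elapsed}

    def load_state_dict(self, state):
        self.elapsed = state.get("elapsed", 0.0)


class StatelessTimerSketch(PersistentTimer):
    # StatelessTimer's trick: persist nothing, so every resumed run starts
    # its time budget from zero.
    def state_dict(self):
        return {}

    def load_state_dict(self, state):
        return
```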
961
962 def configure_no_restart_validation_training_loop(trainer: pytorch_lightning.Trainer) -> None:
963 if type(trainer.fit_loop.epoch_loop) != _TrainingEpochLoop:
964 warnings.warn("Detected custom epoch loop. Skipping no validation on restart support.", UserWarning)
965 return
966 ## Pass trainer object to avoid trainer getting overwritten as None
967 loop = SkipResumeTrainingValidationLoop(trainer, trainer.min_steps, trainer.max_steps)
968 trainer.fit_loop.epoch_loop = loop
969
970
971 class SkipResumeTrainingValidationLoop(_TrainingEpochLoop):
972 """
973 Extend the PTL Epoch loop to skip validating when resuming.
974 This happens when resuming a checkpoint that has already run validation, but loading restores
975 the training state before validation has run.
976 """
977
978 def _should_check_val_fx(self) -> bool:
979 if self.restarting and self.global_step % self.trainer.val_check_batch == 0:
980 return False
981 return super()._should_check_val_fx()
982
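The skip rule in `SkipResumeTrainingValidationLoop` reduces to one modular check. An illustrative, framework-free sketch (`should_run_validation` is a name introduced here; the real override defers to the stock PTL check in the non-skip case):

```python
def should_run_validation(restarting: bool, global_step: int, val_check_batch: int) -> bool:
    # Decision rule from _should_check_val_fx: a run restarting exactly on a
    # validation boundary has already validated at that step, so skip it once.
    if restarting and global_step % val_check_batch == 0:
        return False
    return True  # stands in for super()._should_check_val_fx()
```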
983
984 def clean_exp_ckpt(exp_log_dir: Union[str, Path], remove_ckpt: bool = True, remove_nemo: bool = False):
985 """
986 Helper method that removes Pytorch Lightning .ckpt files or NeMo .nemo files from the checkpoint directory
987
988 Args:
989 exp_log_dir: str path to the root directory of the current experiment.
990 remove_ckpt: bool, whether to remove all *.ckpt files in the checkpoints directory.
991 remove_nemo: bool, whether to remove all *.nemo files in the checkpoints directory.
992 """
993 exp_log_dir = str(exp_log_dir)
994
995 if remove_ckpt:
996 logging.info("Deleting *.ckpt files ...")
997 ckpt_files = glob.glob(os.path.join(exp_log_dir, "checkpoints", "*.ckpt"))
998 for filepath in ckpt_files:
999 os.remove(filepath)
1000 logging.info(f"Deleted file : {filepath}")
1001
1002 if remove_nemo:
1003 logging.info("Deleting *.nemo files ...")
1004 nemo_files = glob.glob(os.path.join(exp_log_dir, "checkpoints", "*.nemo"))
1005 for filepath in nemo_files:
1006 os.remove(filepath)
1007 logging.info(f"Deleted file : {filepath}")
1008
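The glob-and-delete pattern used by `clean_exp_ckpt` can be demonstrated end to end against a throwaway directory; `remove_checkpoint_files` is an illustrative generalization (pattern parameterized), not part of the module:

```python
import glob
import os
import tempfile


def remove_checkpoint_files(exp_log_dir, pattern="*.ckpt"):
    # Same glob-and-delete loop as clean_exp_ckpt, parameterized by pattern.
    removed = []
    for filepath in glob.glob(os.path.join(str(exp_log_dir), "checkpoints", pattern)):
        os.remove(filepath)
        removed.append(filepath)
    return removed


# Demo against a throwaway experiment directory
demo_dir = tempfile.mkdtemp()
os.makedirs(os.path.join(demo_dir, "checkpoints"))
open(os.path.join(demo_dir, "checkpoints", "model.ckpt"), "w").close()
removed = remove_checkpoint_files(demo_dir)
```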
[end of nemo/utils/exp_manager.py]
[start of scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram.py]
1 # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 #
15
16 """
17 # This script would evaluate an N-gram language model trained with KenLM library (https://github.com/kpu/kenlm) in
18 # fusion with beam search decoders on top of a trained ASR model with CTC decoder. To evaluate a model with
19 # Transducer (RNN-T) decoder use another script 'scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram_transducer.py'.
20 # NeMo's beam search decoders are capable of using the KenLM's N-gram models
21 # to find the best candidates. This script supports both character-level and BPE-level
22 # encodings, which are detected automatically from the type of the model.
23 # You may train the LM model with 'scripts/asr_language_modeling/ngram_lm/train_kenlm.py'.
24
25 # Config Help
26
27 To discover all arguments of the script, please run :
28 python eval_beamsearch_ngram.py --help
29 python eval_beamsearch_ngram.py --cfg job
30
31 # USAGE
32
33 python eval_beamsearch_ngram.py nemo_model_file=<path to the .nemo file of the model> \
34 input_manifest=<path to the evaluation JSON manifest file> \
35 kenlm_model_file=<path to the binary KenLM model> \
36 beam_width=[<list of the beam widths, separated with commas>] \
37 beam_alpha=[<list of the beam alphas, separated with commas>] \
38 beam_beta=[<list of the beam betas, separated with commas>] \
39 preds_output_folder=<optional folder to store the predictions> \
40 probs_cache_file=null \
41 decoding_mode=beamsearch_ngram \
42 ...
43
44
45 # Grid Search for Hyper parameters
46
47 For grid search, you can provide a list of arguments as follows -
48
49 beam_width=[4,8,16,....] \
50 beam_alpha=[-2.0,-1.0,...,1.0,2.0] \
51 beam_beta=[-1.0,-0.5,0.0,...,1.0] \
52
53 # You may find more info on how to use this script at:
54 # https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/asr_language_modeling.html
55
56 """
57
58
59 import contextlib
60 import json
61 import os
62 import pickle
63 from dataclasses import dataclass, field, is_dataclass
64 from pathlib import Path
65 from typing import List, Optional
66
67 import editdistance
68 import numpy as np
69 import torch
70 from omegaconf import MISSING, OmegaConf
71 from sklearn.model_selection import ParameterGrid
72 from tqdm.auto import tqdm
73
74 import nemo.collections.asr as nemo_asr
75 from nemo.collections.asr.models import EncDecHybridRNNTCTCModel
76 from nemo.collections.asr.parts.submodules import ctc_beam_decoding
77 from nemo.collections.asr.parts.utils.transcribe_utils import PunctuationCapitalization, TextProcessingConfig
78 from nemo.core.config import hydra_runner
79 from nemo.utils import logging
80
81 # fmt: off
82
83
84 @dataclass
85 class EvalBeamSearchNGramConfig:
86 """
87 Evaluate an ASR model with beam search decoding and n-gram KenLM language model.
88 """
89 # The path of the '.nemo' file of the ASR model or the name of a pretrained model (ngc / huggingface)
90 nemo_model_file: str = MISSING
91
92 # File paths
93 input_manifest: str = MISSING # The manifest file of the evaluation set
94 kenlm_model_file: Optional[str] = None # The path of the KenLM binary model file
95 preds_output_folder: Optional[str] = None # The optional folder where the predictions are stored
96 probs_cache_file: Optional[str] = None # The cache file for storing the logprobs of the model
97
98 # Parameters for inference
99 acoustic_batch_size: int = 16 # The batch size to calculate log probabilities
100 beam_batch_size: int = 128 # The batch size to be used for beam search decoding
101 device: str = "cuda" # The device to load the model onto to calculate log probabilities
102 use_amp: bool = False # Whether to use AMP if available to calculate log probabilities
103
104 # Beam Search hyperparameters
105
106 # The decoding scheme to be used for evaluation.
107 # Can be one of ["greedy", "beamsearch", "beamsearch_ngram"]
108 decoding_mode: str = "beamsearch_ngram"
109
110 beam_width: List[int] = field(default_factory=lambda: [128]) # The width or list of the widths for the beam search decoding
111 beam_alpha: List[float] = field(default_factory=lambda: [1.0]) # The alpha parameter or list of the alphas for the beam search decoding
112 beam_beta: List[float] = field(default_factory=lambda: [0.0]) # The beta parameter or list of the betas for the beam search decoding
113
114 decoding_strategy: str = "beam"
115 decoding: ctc_beam_decoding.BeamCTCInferConfig = ctc_beam_decoding.BeamCTCInferConfig(beam_size=128)
116
117 text_processing: Optional[TextProcessingConfig] = TextProcessingConfig(
118 punctuation_marks = ".,?",
119 separate_punctuation = False,
120 do_lowercase = False,
121 rm_punctuation = False,
122 )
123 # fmt: on
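A minimal, stdlib-only sketch of the config pattern above: list-valued hyperparameters get per-instance defaults via `field(default_factory=...)`, so a single config object can describe a whole search grid. The field names mirror `EvalBeamSearchNGramConfig`, but this is an illustration, not the NeMo schema.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class BeamSearchConfig:
    # each hyperparameter is a list so one config can describe a grid
    beam_width: List[int] = field(default_factory=lambda: [128])
    beam_alpha: List[float] = field(default_factory=lambda: [1.0])
    beam_beta: List[float] = field(default_factory=lambda: [0.0])
    kenlm_model_file: Optional[str] = None

default_cfg = BeamSearchConfig()
sweep_cfg = BeamSearchConfig(beam_width=[4, 8, 16])
# default_factory gives each instance its own list, so mutating one
# config never leaks into another
sweep_cfg.beam_alpha.append(2.0)
print(default_cfg.beam_alpha)  # [1.0]
```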
124
125
126 def beam_search_eval(
127 model: nemo_asr.models.ASRModel,
128 cfg: EvalBeamSearchNGramConfig,
129 all_probs: List[torch.Tensor],
130 target_transcripts: List[str],
131 preds_output_file: str = None,
132 lm_path: str = None,
133 beam_alpha: float = 1.0,
134 beam_beta: float = 0.0,
135 beam_width: int = 128,
136 beam_batch_size: int = 128,
137 progress_bar: bool = True,
138 punctuation_capitalization: PunctuationCapitalization = None,
139 ):
140 level = logging.getEffectiveLevel()
141 logging.setLevel(logging.CRITICAL)
142 # Reset config
143 model.change_decoding_strategy(None)
144
145 # Override the beam search config with current search candidate configuration
146 cfg.decoding.beam_size = beam_width
147 cfg.decoding.beam_alpha = beam_alpha
148 cfg.decoding.beam_beta = beam_beta
149 cfg.decoding.return_best_hypothesis = False
150 cfg.decoding.kenlm_path = cfg.kenlm_model_file
151
152 # Update model's decoding strategy config
153 model.cfg.decoding.strategy = cfg.decoding_strategy
154 model.cfg.decoding.beam = cfg.decoding
155
156 # Update model's decoding strategy
157 if isinstance(model, EncDecHybridRNNTCTCModel):
158 model.change_decoding_strategy(model.cfg.decoding, decoder_type='ctc')
159 decoding = model.ctc_decoding
160 else:
161 model.change_decoding_strategy(model.cfg.decoding)
162 decoding = model.decoding
163 logging.setLevel(level)
164
165 wer_dist_first = cer_dist_first = 0
166 wer_dist_best = cer_dist_best = 0
167 words_count = 0
168 chars_count = 0
169 sample_idx = 0
170 if preds_output_file:
171 out_file = open(preds_output_file, 'w', encoding='utf_8', newline='\n')
172
173 if progress_bar:
174 it = tqdm(
175 range(int(np.ceil(len(all_probs) / beam_batch_size))),
176 desc=f"Beam search decoding with width={beam_width}, alpha={beam_alpha}, beta={beam_beta}",
177 ncols=120,
178 )
179 else:
180 it = range(int(np.ceil(len(all_probs) / beam_batch_size)))
181 for batch_idx in it:
182 # disabling type checking
183 probs_batch = all_probs[batch_idx * beam_batch_size : (batch_idx + 1) * beam_batch_size]
184 probs_lens = torch.tensor([prob.shape[0] for prob in probs_batch])
185 with torch.no_grad():
186 packed_batch = torch.zeros(len(probs_batch), max(probs_lens), probs_batch[0].shape[-1], device='cpu')
187
188 for prob_index in range(len(probs_batch)):
189 packed_batch[prob_index, : probs_lens[prob_index], :] = torch.tensor(
190 probs_batch[prob_index], device=packed_batch.device, dtype=packed_batch.dtype
191 )
192
193 _, beams_batch = decoding.ctc_decoder_predictions_tensor(
194 packed_batch, decoder_lengths=probs_lens, return_hypotheses=True,
195 )
196
197 for beams_idx, beams in enumerate(beams_batch):
198 target = target_transcripts[sample_idx + beams_idx]
199 target_split_w = target.split()
200 target_split_c = list(target)
201 words_count += len(target_split_w)
202 chars_count += len(target_split_c)
203 wer_dist_min = cer_dist_min = 10000
204 for candidate_idx, candidate in enumerate(beams): # type: (int, ctc_beam_decoding.rnnt_utils.Hypothesis)
205 pred_text = candidate.text
206 if cfg.text_processing.do_lowercase:
207 pred_text = punctuation_capitalization.do_lowercase([pred_text])[0]
208 if cfg.text_processing.rm_punctuation:
209 pred_text = punctuation_capitalization.rm_punctuation([pred_text])[0]
210 if cfg.text_processing.separate_punctuation:
211 pred_text = punctuation_capitalization.separate_punctuation([pred_text])[0]
212 pred_split_w = pred_text.split()
213 wer_dist = editdistance.eval(target_split_w, pred_split_w)
214 pred_split_c = list(pred_text)
215 cer_dist = editdistance.eval(target_split_c, pred_split_c)
216
217 wer_dist_min = min(wer_dist_min, wer_dist)
218 cer_dist_min = min(cer_dist_min, cer_dist)
219
220 if candidate_idx == 0:
221 # first candidate
222 wer_dist_first += wer_dist
223 cer_dist_first += cer_dist
224
225 score = candidate.score
226 if preds_output_file:
227 out_file.write('{}\t{}\n'.format(pred_text, score))
228 wer_dist_best += wer_dist_min
229 cer_dist_best += cer_dist_min
230 sample_idx += len(probs_batch)
231
232 if preds_output_file:
233 out_file.close()
234 logging.info(f"Stored the predictions of beam search decoding at '{preds_output_file}'.")
235
236 if lm_path:
237 logging.info(
238 'WER/CER with beam search decoding and N-gram model = {:.2%}/{:.2%}'.format(
239 wer_dist_first / words_count, cer_dist_first / chars_count
240 )
241 )
242 else:
243 logging.info(
244 'WER/CER with beam search decoding = {:.2%}/{:.2%}'.format(
245 wer_dist_first / words_count, cer_dist_first / chars_count
246 )
247 )
248 logging.info(
249 'Oracle WER/CER in candidates with perfect LM= {:.2%}/{:.2%}'.format(
250 wer_dist_best / words_count, cer_dist_best / chars_count
251 )
252 )
253 logging.info(f"=================================================================================")
254
255 return wer_dist_first / words_count, cer_dist_first / chars_count
256
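The WER/CER bookkeeping in `beam_search_eval` reduces to an edit distance over word or character lists. The script uses the `editdistance` package; below is a self-contained stdlib sketch of the same computation:

```python
def levenshtein(ref, hyp):
    """Edit distance between two token sequences (words or characters)."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        curr = [i]
        for j, h in enumerate(hyp, start=1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (r != h)))  # substitution
        prev = curr
    return prev[-1]

def wer(ref_text, hyp_text):
    ref_words = ref_text.split()
    return levenshtein(ref_words, hyp_text.split()) / len(ref_words)

print(wer("the cat sat", "the cat sat"))                         # 0.0
print(wer("the cat sat on the mat", "the cat sat on a mat"))     # 0.16666666666666666
```

CER is the same computation over `list(text)` instead of `text.split()`, exactly as in the loop above.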
257
258 @hydra_runner(config_path=None, config_name='EvalBeamSearchNGramConfig', schema=EvalBeamSearchNGramConfig)
259 def main(cfg: EvalBeamSearchNGramConfig):
260 logging.warning("This file will be renamed to eval_beamsearch_ngram_ctc.py in the future NeMo (1.21) release.")
261 if is_dataclass(cfg):
262 cfg = OmegaConf.structured(cfg) # type: EvalBeamSearchNGramConfig
263
264 valid_decoding_modes = ["greedy", "beamsearch", "beamsearch_ngram"]
265 if cfg.decoding_mode not in valid_decoding_modes:
266 raise ValueError(
267 f"Given decoding_mode={cfg.decoding_mode} is invalid. Available options are :\n" f"{valid_decoding_modes}"
268 )
269
270 if cfg.nemo_model_file.endswith('.nemo'):
271 asr_model = nemo_asr.models.ASRModel.restore_from(cfg.nemo_model_file, map_location=torch.device(cfg.device))
272 else:
273 logging.warning(
274 "nemo_model_file does not end with .nemo, therefore trying to load a pretrained model with this name."
275 )
276 asr_model = nemo_asr.models.ASRModel.from_pretrained(
277 cfg.nemo_model_file, map_location=torch.device(cfg.device)
278 )
279
280 target_transcripts = []
281 manifest_dir = Path(cfg.input_manifest).parent
282 with open(cfg.input_manifest, 'r', encoding='utf_8') as manifest_file:
283 audio_file_paths = []
284 for line in tqdm(manifest_file, desc=f"Reading Manifest {cfg.input_manifest} ...", ncols=120):
285 data = json.loads(line)
286 audio_file = Path(data['audio_filepath'])
287 if not audio_file.is_file() and not audio_file.is_absolute():
288 audio_file = manifest_dir / audio_file
289 target_transcripts.append(data['text'])
290 audio_file_paths.append(str(audio_file.absolute()))
291
292 punctuation_capitalization = PunctuationCapitalization(cfg.text_processing.punctuation_marks)
293 if cfg.text_processing.do_lowercase:
294 target_transcripts = punctuation_capitalization.do_lowercase(target_transcripts)
295 if cfg.text_processing.rm_punctuation:
296 target_transcripts = punctuation_capitalization.rm_punctuation(target_transcripts)
297 if cfg.text_processing.separate_punctuation:
298 target_transcripts = punctuation_capitalization.separate_punctuation(target_transcripts)
299
300 if cfg.probs_cache_file and os.path.exists(cfg.probs_cache_file):
301 logging.info(f"Found a pickle file of probabilities at '{cfg.probs_cache_file}'.")
302 logging.info(f"Loading the cached pickle file of probabilities from '{cfg.probs_cache_file}' ...")
303 with open(cfg.probs_cache_file, 'rb') as probs_file:
304 all_probs = pickle.load(probs_file)
305
306 if len(all_probs) != len(audio_file_paths):
307 raise ValueError(
308 f"The number of samples in the probabilities file '{cfg.probs_cache_file}' does not "
309 f"match the manifest file. You may need to delete the probabilities cached file."
310 )
311 else:
312
313 @contextlib.contextmanager
314 def default_autocast():
315 yield
316
317 if cfg.use_amp:
318 if torch.cuda.is_available() and hasattr(torch.cuda, 'amp') and hasattr(torch.cuda.amp, 'autocast'):
319 logging.info("AMP is enabled!\n")
320 autocast = torch.cuda.amp.autocast
321
322 else:
323 autocast = default_autocast
324 else:
325
326 autocast = default_autocast
327
328 with autocast():
329 with torch.no_grad():
330 if isinstance(asr_model, EncDecHybridRNNTCTCModel):
331 asr_model.cur_decoder = 'ctc'
332 all_logits = asr_model.transcribe(audio_file_paths, batch_size=cfg.acoustic_batch_size, logprobs=True)
333
334 all_probs = all_logits
335 if cfg.probs_cache_file:
336 logging.info(f"Writing pickle files of probabilities at '{cfg.probs_cache_file}'...")
337 with open(cfg.probs_cache_file, 'wb') as f_dump:
338 pickle.dump(all_probs, f_dump)
339
340 wer_dist_greedy = 0
341 cer_dist_greedy = 0
342 words_count = 0
343 chars_count = 0
344 for batch_idx, probs in enumerate(all_probs):
345 preds = np.argmax(probs, axis=1)
346 preds_tensor = torch.tensor(preds, device='cpu').unsqueeze(0)
347 if isinstance(asr_model, EncDecHybridRNNTCTCModel):
348 pred_text = asr_model.ctc_decoding.ctc_decoder_predictions_tensor(preds_tensor)[0][0]
349 else:
350 pred_text = asr_model._wer.decoding.ctc_decoder_predictions_tensor(preds_tensor)[0][0]
351
352 if cfg.text_processing.do_lowercase:
353 pred_text = punctuation_capitalization.do_lowercase([pred_text])[0]
354 if cfg.text_processing.rm_punctuation:
355 pred_text = punctuation_capitalization.rm_punctuation([pred_text])[0]
356 if cfg.text_processing.separate_punctuation:
357 pred_text = punctuation_capitalization.separate_punctuation([pred_text])[0]
358
359 pred_split_w = pred_text.split()
360 target_split_w = target_transcripts[batch_idx].split()
361 pred_split_c = list(pred_text)
362 target_split_c = list(target_transcripts[batch_idx])
363
364 wer_dist = editdistance.eval(target_split_w, pred_split_w)
365 cer_dist = editdistance.eval(target_split_c, pred_split_c)
366
367 wer_dist_greedy += wer_dist
368 cer_dist_greedy += cer_dist
369 words_count += len(target_split_w)
370 chars_count += len(target_split_c)
371
372 logging.info('Greedy WER/CER = {:.2%}/{:.2%}'.format(wer_dist_greedy / words_count, cer_dist_greedy / chars_count))
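The greedy baseline above takes the per-frame argmax and relies on `ctc_decoder_predictions_tensor` to collapse the frame sequence into text. A minimal sketch of the standard CTC collapse rule (merge consecutive repeats, then drop blanks); the toy vocabulary is an assumption for illustration only:

```python
def ctc_greedy_collapse(frame_ids, blank_id):
    """Merge consecutive repeats, then drop blanks (standard CTC rule)."""
    out, prev = [], None
    for t in frame_ids:
        if t != prev and t != blank_id:
            out.append(t)
        prev = t
    return out

# toy vocabulary; index 0 is the CTC blank
vocab = ["_", "c", "a", "t"]
frames = [0, 1, 1, 0, 2, 2, 2, 0, 3, 3]   # per-frame argmax ids
ids = ctc_greedy_collapse(frames, blank_id=0)
print("".join(vocab[i] for i in ids))  # cat
```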
373
374 asr_model = asr_model.to('cpu')
375
376 if cfg.decoding_mode == "beamsearch_ngram":
377 if not os.path.exists(cfg.kenlm_model_file):
378 raise FileNotFoundError(f"Could not find the KenLM model file '{cfg.kenlm_model_file}'.")
379 lm_path = cfg.kenlm_model_file
380 else:
381 lm_path = None
382
383 # 'greedy' decoding_mode would skip the beam search decoding
384 if cfg.decoding_mode in ["beamsearch_ngram", "beamsearch"]:
385 if cfg.beam_width is None or cfg.beam_alpha is None or cfg.beam_beta is None:
386 raise ValueError("beam_width, beam_alpha and beam_beta are needed to perform beam search decoding.")
387 params = {'beam_width': cfg.beam_width, 'beam_alpha': cfg.beam_alpha, 'beam_beta': cfg.beam_beta}
388 hp_grid = ParameterGrid(params)
389 hp_grid = list(hp_grid)
390
391 best_wer_beam_size, best_cer_beam_size = None, None
392 best_wer_alpha, best_cer_alpha = None, None
393 best_wer_beta, best_cer_beta = None, None
394 best_wer, best_cer = 1e6, 1e6
395
396 logging.info(f"==============================Starting the beam search decoding===============================")
397 logging.info(f"Grid search size: {len(hp_grid)}")
398 logging.info(f"It may take some time...")
399 logging.info(f"==============================================================================================")
400
401 if cfg.preds_output_folder and not os.path.exists(cfg.preds_output_folder):
402 os.mkdir(cfg.preds_output_folder)
403 for hp in hp_grid:
404 if cfg.preds_output_folder:
405 preds_output_file = os.path.join(
406 cfg.preds_output_folder,
407 f"preds_out_width{hp['beam_width']}_alpha{hp['beam_alpha']}_beta{hp['beam_beta']}.tsv",
408 )
409 else:
410 preds_output_file = None
411
412 candidate_wer, candidate_cer = beam_search_eval(
413 asr_model,
414 cfg,
415 all_probs=all_probs,
416 target_transcripts=target_transcripts,
417 preds_output_file=preds_output_file,
418 lm_path=lm_path,
419 beam_width=hp["beam_width"],
420 beam_alpha=hp["beam_alpha"],
421 beam_beta=hp["beam_beta"],
422 beam_batch_size=cfg.beam_batch_size,
423 progress_bar=True,
424 punctuation_capitalization=punctuation_capitalization,
425 )
426
427 if candidate_cer < best_cer:
428 best_cer_beam_size = hp["beam_width"]
429 best_cer_alpha = hp["beam_alpha"]
430 best_cer_beta = hp["beam_beta"]
431 best_cer = candidate_cer
432
433 if candidate_wer < best_wer:
434 best_wer_beam_size = hp["beam_width"]
435 best_wer_alpha = hp["beam_alpha"]
436 best_wer_beta = hp["beam_beta"]
437 best_wer = candidate_wer
438
439 logging.info(
440 f'Best WER Candidate = {best_wer:.2%} :: Beam size = {best_wer_beam_size}, '
441 f'Beam alpha = {best_wer_alpha}, Beam beta = {best_wer_beta}'
442 )
443
444 logging.info(
445 f'Best CER Candidate = {best_cer:.2%} :: Beam size = {best_cer_beam_size}, '
446 f'Beam alpha = {best_cer_alpha}, Beam beta = {best_cer_beta}'
447 )
448 logging.info(f"=================================================================================")
449
450
451 if __name__ == '__main__':
452 main()
453
[end of scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram.py]
[start of scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram_transducer.py]
1 # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 #
15
16 """
17 # This script evaluates an N-gram language model trained with the KenLM library (https://github.com/kpu/kenlm) in
18 # fusion with beam search decoders on top of a trained ASR Transducer model. NeMo's beam search decoders can use
19 # KenLM's N-gram models to find the best candidates. This script supports both character-level and BPE-level
20 # encodings and models, which are detected automatically from the type of the model.
21 # You may train the LM with 'scripts/asr_language_modeling/ngram_lm/train_kenlm.py'.
22
23 # Config Help
24
25 To discover all arguments of the script, please run:
26 python eval_beamsearch_ngram_transducer.py --help
27 python eval_beamsearch_ngram_transducer.py --cfg job
28
29 # USAGE
30
31 python eval_beamsearch_ngram_transducer.py nemo_model_file=<path to the .nemo file of the model> \
32 input_manifest=<path to the evaluation JSON manifest file> \
33 kenlm_model_file=<path to the binary KenLM model> \
34 beam_width=[<list of the beam widths, separated with commas>] \
35 beam_alpha=[<list of the beam alphas, separated with commas>] \
36 preds_output_folder=<optional folder to store the predictions> \
37 probs_cache_file=null \
38 decoding_strategy=<greedy_batch or maes decoding> \
39 maes_prefix_alpha=[<list of the maes prefix alphas, separated with commas>] \
40 maes_expansion_gamma=[<list of the maes expansion gammas, separated with commas>] \
41 hat_subtract_ilm=<in case of HAT model: subtract internal LM or not> \
42 hat_ilm_weight=[<in case of HAT model: list of the HAT internal LM weights, separated with commas>] \
43 ...
44
45
46 # Grid Search for Hyper parameters
47
48 For grid search, you can provide a list of arguments as follows -
49
50 beam_width=[4,8,16,....] \
51 beam_alpha=[-2.0,-1.0,...,1.0,2.0] \
52
53 # You may find more info on how to use this script at:
54 # https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/asr_language_modeling.html
55
56 """
57
58
59 import contextlib
60 import json
61 import os
62 import pickle
63 import tempfile
64 from dataclasses import dataclass, field, is_dataclass
65 from pathlib import Path
66 from typing import List, Optional
67
68 import editdistance
69 import numpy as np
70 import torch
71 from omegaconf import MISSING, OmegaConf
72 from sklearn.model_selection import ParameterGrid
73 from tqdm.auto import tqdm
74
75 import nemo.collections.asr as nemo_asr
76 from nemo.collections.asr.parts.submodules import rnnt_beam_decoding
77 from nemo.core.config import hydra_runner
78 from nemo.utils import logging
79
80 # fmt: off
81
82
83 @dataclass
84 class EvalBeamSearchNGramConfig:
85 """
86 Evaluate an ASR model with beam search decoding and n-gram KenLM language model.
87 """
88 # The path of the '.nemo' file of the ASR model or the name of a pretrained model (ngc / huggingface)
89 nemo_model_file: str = MISSING
90
91 # File paths
92 input_manifest: str = MISSING # The manifest file of the evaluation set
93 kenlm_model_file: Optional[str] = None # The path of the KenLM binary model file
94 preds_output_folder: Optional[str] = None # The optional folder where the predictions are stored
95 probs_cache_file: Optional[str] = None # The cache file for storing the logprobs of the model
96
97 # Parameters for inference
98 acoustic_batch_size: int = 128 # The batch size to calculate log probabilities
99 beam_batch_size: int = 128 # The batch size to be used for beam search decoding
100 device: str = "cuda" # The device to load the model onto to calculate log probabilities
101 use_amp: bool = False # Whether to use AMP if available to calculate log probabilities
102 num_workers: int = 1 # Number of workers for DataLoader
103
104 # The decoding scheme to be used for evaluation
105 decoding_strategy: str = "greedy_batch" # ["greedy_batch", "beam", "tsd", "alsd", "maes"]
106
107 # Beam Search hyperparameters
108 beam_width: List[int] = field(default_factory=lambda: [8]) # The width or list of the widths for the beam search decoding
109 beam_alpha: List[float] = field(default_factory=lambda: [0.2]) # The alpha parameter or list of the alphas for the beam search decoding
110
111 maes_prefix_alpha: List[int] = field(default_factory=lambda: [2]) # The maes_prefix_alpha or list of the maes_prefix_alpha for the maes decoding
112 maes_expansion_gamma: List[float] = field(default_factory=lambda: [2.3]) # The maes_expansion_gamma or list of the maes_expansion_gamma for the maes decoding
113
114 # HAT related parameters (only for internal lm subtraction)
115 hat_subtract_ilm: bool = False
116 hat_ilm_weight: List[float] = field(default_factory=lambda: [0.0])
117
118 decoding: rnnt_beam_decoding.BeamRNNTInferConfig = rnnt_beam_decoding.BeamRNNTInferConfig(beam_size=128)
119
120
121 # fmt: on
122
123
124 def decoding_step(
125 model: nemo_asr.models.ASRModel,
126 cfg: EvalBeamSearchNGramConfig,
127 all_probs: List[torch.Tensor],
128 target_transcripts: List[str],
129 preds_output_file: str = None,
130 beam_batch_size: int = 128,
131 progress_bar: bool = True,
132 ):
133 level = logging.getEffectiveLevel()
134 logging.setLevel(logging.CRITICAL)
135 # Reset config
136 model.change_decoding_strategy(None)
137
138 cfg.decoding.hat_ilm_weight = cfg.decoding.hat_ilm_weight * cfg.hat_subtract_ilm
139 # Override the beam search config with current search candidate configuration
140 cfg.decoding.return_best_hypothesis = False
141 cfg.decoding.ngram_lm_model = cfg.kenlm_model_file
142 cfg.decoding.hat_subtract_ilm = cfg.hat_subtract_ilm
143
144 # Update model's decoding strategy config
145 model.cfg.decoding.strategy = cfg.decoding_strategy
146 model.cfg.decoding.beam = cfg.decoding
147
148 # Update model's decoding strategy
149 model.change_decoding_strategy(model.cfg.decoding)
150 logging.setLevel(level)
151
152 wer_dist_first = cer_dist_first = 0
153 wer_dist_best = cer_dist_best = 0
154 words_count = 0
155 chars_count = 0
156 sample_idx = 0
157 if preds_output_file:
158 out_file = open(preds_output_file, 'w', encoding='utf_8', newline='\n')
159
160 if progress_bar:
161 if cfg.decoding_strategy == "greedy_batch":
162 description = "Greedy_batch decoding.."
163 else:
164 description = f"{cfg.decoding_strategy} decoding with bw={cfg.decoding.beam_size}, ba={cfg.decoding.ngram_lm_alpha}, ma={cfg.decoding.maes_prefix_alpha}, mg={cfg.decoding.maes_expansion_gamma}, hat_ilmw={cfg.decoding.hat_ilm_weight}"
165 it = tqdm(range(int(np.ceil(len(all_probs) / beam_batch_size))), desc=description, ncols=120)
166 else:
167 it = range(int(np.ceil(len(all_probs) / beam_batch_size)))
168 for batch_idx in it:
169 # disabling type checking
170 probs_batch = all_probs[batch_idx * beam_batch_size : (batch_idx + 1) * beam_batch_size]
171 probs_lens = torch.tensor([prob.shape[-1] for prob in probs_batch])
172 with torch.no_grad():
173 packed_batch = torch.zeros(len(probs_batch), probs_batch[0].shape[0], max(probs_lens), device='cpu')
174
175 for prob_index in range(len(probs_batch)):
176 packed_batch[prob_index, :, : probs_lens[prob_index]] = torch.tensor(
177 probs_batch[prob_index].unsqueeze(0), device=packed_batch.device, dtype=packed_batch.dtype
178 )
179 best_hyp_batch, beams_batch = model.decoding.rnnt_decoder_predictions_tensor(
180 packed_batch, probs_lens, return_hypotheses=True,
181 )
182 if cfg.decoding_strategy == "greedy_batch":
183 beams_batch = [[x] for x in best_hyp_batch]
184
185 for beams_idx, beams in enumerate(beams_batch):
186 target = target_transcripts[sample_idx + beams_idx]
187 target_split_w = target.split()
188 target_split_c = list(target)
189 words_count += len(target_split_w)
190 chars_count += len(target_split_c)
191 wer_dist_min = cer_dist_min = 10000
192 for candidate_idx, candidate in enumerate(beams): # type: (int, rnnt_beam_decoding.rnnt_utils.Hypothesis)
193 pred_text = candidate.text
194 pred_split_w = pred_text.split()
195 wer_dist = editdistance.eval(target_split_w, pred_split_w)
196 pred_split_c = list(pred_text)
197 cer_dist = editdistance.eval(target_split_c, pred_split_c)
198
199 wer_dist_min = min(wer_dist_min, wer_dist)
200 cer_dist_min = min(cer_dist_min, cer_dist)
201
202 if candidate_idx == 0:
203 # first candidate
204 wer_dist_first += wer_dist
205 cer_dist_first += cer_dist
206
207 score = candidate.score
208 if preds_output_file:
209 out_file.write('{}\t{}\n'.format(pred_text, score))
210 wer_dist_best += wer_dist_min
211 cer_dist_best += cer_dist_min
212 sample_idx += len(probs_batch)
213
214 if cfg.decoding_strategy == "greedy_batch":
215 return wer_dist_first / words_count, cer_dist_first / chars_count
216
217 if preds_output_file:
218 out_file.close()
219 logging.info(f"Stored the predictions of {cfg.decoding_strategy} decoding at '{preds_output_file}'.")
220
221 if cfg.decoding.ngram_lm_model:
222 logging.info(
223 f"WER/CER with {cfg.decoding_strategy} decoding and N-gram model = {wer_dist_first / words_count:.2%}/{cer_dist_first / chars_count:.2%}"
224 )
225 else:
226 logging.info(
227 f"WER/CER with {cfg.decoding_strategy} decoding = {wer_dist_first / words_count:.2%}/{cer_dist_first / chars_count:.2%}"
228 )
229 logging.info(
230 f"Oracle WER/CER in candidates with perfect LM= {wer_dist_best / words_count:.2%}/{cer_dist_best / chars_count:.2%}"
231 )
232 logging.info(f"=================================================================================")
233
234 return wer_dist_first / words_count, cer_dist_first / chars_count
235
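The `wer_dist_first` / `wer_dist_best` counters above contrast the top beam candidate with an oracle that picks the best candidate per utterance. A self-contained sketch, with a small edit-distance helper standing in for the `editdistance` package:

```python
def edit_distance(a, b):
    """Levenshtein distance between two token sequences."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        curr = [i]
        for j, y in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[-1] + 1, prev[j - 1] + (x != y)))
        prev = curr
    return prev[-1]

def first_vs_oracle(reference, beam_candidates):
    """Word distances for the top candidate and the best one in the beam."""
    ref = reference.split()
    dists = [edit_distance(ref, c.split()) for c in beam_candidates]
    return dists[0], min(dists)

first, oracle = first_vs_oracle(
    "the cat sat", ["a cat sat", "the cat sat", "the cat"]
)
print(first, oracle)  # 1 0
```

The gap between the two numbers bounds how much a better rescoring LM could still help: the oracle is the WER achievable with a perfect candidate selector.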
236
237 @hydra_runner(config_path=None, config_name='EvalBeamSearchNGramConfig', schema=EvalBeamSearchNGramConfig)
238 def main(cfg: EvalBeamSearchNGramConfig):
239 if is_dataclass(cfg):
240 cfg = OmegaConf.structured(cfg) # type: EvalBeamSearchNGramConfig
241
242 valid_decoding_strategies = ["greedy_batch", "beam", "tsd", "alsd", "maes"]
243 if cfg.decoding_strategy not in valid_decoding_strategies:
244 raise ValueError(
245 f"Given decoding_strategy={cfg.decoding_strategy} is invalid. Available options are:\n"
246 f"{valid_decoding_strategies}"
247 )
248
249 if cfg.nemo_model_file.endswith('.nemo'):
250 asr_model = nemo_asr.models.ASRModel.restore_from(cfg.nemo_model_file, map_location=torch.device(cfg.device))
251 else:
252 logging.warning(
253 "nemo_model_file does not end with .nemo, therefore trying to load a pretrained model with this name."
254 )
255 asr_model = nemo_asr.models.ASRModel.from_pretrained(
256 cfg.nemo_model_file, map_location=torch.device(cfg.device)
257 )
258
259 if cfg.kenlm_model_file:
260 if not os.path.exists(cfg.kenlm_model_file):
261 raise FileNotFoundError(f"Could not find the KenLM model file '{cfg.kenlm_model_file}'.")
262 if cfg.decoding_strategy != "maes":
263 raise ValueError(f"Decoding with kenlm model is supported only for maes decoding algorithm.")
264 lm_path = cfg.kenlm_model_file
265 else:
266 lm_path = None
267 cfg.beam_alpha = [0.0]
268 if cfg.hat_subtract_ilm:
269 assert lm_path, "kenlm must be set for hat internal lm subtraction"
270
271 if cfg.decoding_strategy != "maes":
272 cfg.maes_prefix_alpha, cfg.maes_expansion_gamma, cfg.hat_ilm_weight = [0], [0], [0]
273
274 target_transcripts = []
275 manifest_dir = Path(cfg.input_manifest).parent
276 with open(cfg.input_manifest, 'r', encoding='utf_8') as manifest_file:
277 audio_file_paths = []
278 for line in tqdm(manifest_file, desc=f"Reading Manifest {cfg.input_manifest} ...", ncols=120):
279 data = json.loads(line)
280 audio_file = Path(data['audio_filepath'])
281 if not audio_file.is_file() and not audio_file.is_absolute():
282 audio_file = manifest_dir / audio_file
283 target_transcripts.append(data['text'])
284 audio_file_paths.append(str(audio_file.absolute()))
285
286 if cfg.probs_cache_file and os.path.exists(cfg.probs_cache_file):
287 logging.info(f"Found a pickle file of probabilities at '{cfg.probs_cache_file}'.")
288 logging.info(f"Loading the cached pickle file of probabilities from '{cfg.probs_cache_file}' ...")
289 with open(cfg.probs_cache_file, 'rb') as probs_file:
290 all_probs = pickle.load(probs_file)
291
292 if len(all_probs) != len(audio_file_paths):
293 raise ValueError(
294 f"The number of samples in the probabilities file '{cfg.probs_cache_file}' does not "
295 f"match the manifest file. You may need to delete the probabilities cached file."
296 )
297 else:
298
299 @contextlib.contextmanager
300 def default_autocast():
301 yield
302
303 if cfg.use_amp:
304 if torch.cuda.is_available() and hasattr(torch.cuda, 'amp') and hasattr(torch.cuda.amp, 'autocast'):
305 logging.info("AMP is enabled!\n")
306 autocast = torch.cuda.amp.autocast
307
308 else:
309 autocast = default_autocast
310 else:
311
312 autocast = default_autocast
313
314 # manual calculation of encoder_embeddings
315 with autocast():
316 with torch.no_grad():
317 asr_model.eval()
318 asr_model.encoder.freeze()
319 device = next(asr_model.parameters()).device
320 all_probs = []
321 with tempfile.TemporaryDirectory() as tmpdir:
322 with open(os.path.join(tmpdir, 'manifest.json'), 'w', encoding='utf-8') as fp:
323 for audio_file in audio_file_paths:
324 entry = {'audio_filepath': audio_file, 'duration': 100000, 'text': ''}
325 fp.write(json.dumps(entry) + '\n')
326 config = {
327 'paths2audio_files': audio_file_paths,
328 'batch_size': cfg.acoustic_batch_size,
329 'temp_dir': tmpdir,
330 'num_workers': cfg.num_workers,
331 'channel_selector': None,
332 'augmentor': None,
333 }
334 temporary_datalayer = asr_model._setup_transcribe_dataloader(config)
335 for test_batch in tqdm(temporary_datalayer, desc="Transcribing", disable=True):
336 encoded, encoded_len = asr_model.forward(
337 input_signal=test_batch[0].to(device), input_signal_length=test_batch[1].to(device)
338 )
339 # dump encoder embeddings per file
340 for idx in range(encoded.shape[0]):
341 encoded_no_pad = encoded[idx, :, : encoded_len[idx]]
342 all_probs.append(encoded_no_pad)
343
344 if cfg.probs_cache_file:
345 logging.info(f"Writing pickle files of probabilities at '{cfg.probs_cache_file}'...")
346 with open(cfg.probs_cache_file, 'wb') as f_dump:
347 pickle.dump(all_probs, f_dump)
348
349 if cfg.decoding_strategy == "greedy_batch":
350 asr_model = asr_model.to('cpu')
351 candidate_wer, candidate_cer = decoding_step(
352 asr_model,
353 cfg,
354 all_probs=all_probs,
355 target_transcripts=target_transcripts,
356 beam_batch_size=cfg.beam_batch_size,
357 progress_bar=True,
358 )
359 logging.info(f"Greedy batch WER/CER = {candidate_wer:.2%}/{candidate_cer:.2%}")
360
361 asr_model = asr_model.to('cpu')
362
363 # 'greedy_batch' decoding_strategy would skip the beam search decoding
364 if cfg.decoding_strategy in ["beam", "tsd", "alsd", "maes"]:
365 if cfg.beam_width is None or cfg.beam_alpha is None:
366 raise ValueError("beam_width and beam_alpha are needed to perform beam search decoding.")
367 params = {
368 'beam_width': cfg.beam_width,
369 'beam_alpha': cfg.beam_alpha,
370 'maes_prefix_alpha': cfg.maes_prefix_alpha,
371 'maes_expansion_gamma': cfg.maes_expansion_gamma,
372 'hat_ilm_weight': cfg.hat_ilm_weight,
373 }
374 hp_grid = ParameterGrid(params)
375 hp_grid = list(hp_grid)
376
377 best_wer_beam_size, best_cer_beam_size = None, None
378 best_wer_alpha, best_cer_alpha = None, None
379 best_wer, best_cer = 1e6, 1e6
380
381 logging.info(
382 f"==============================Starting the {cfg.decoding_strategy} decoding==============================="
383 )
384 logging.info(f"Grid search size: {len(hp_grid)}")
385 logging.info(f"It may take some time...")
386 logging.info(f"==============================================================================================")
387
388 if cfg.preds_output_folder and not os.path.exists(cfg.preds_output_folder):
389 os.mkdir(cfg.preds_output_folder)
390 for hp in hp_grid:
391 if cfg.preds_output_folder:
392 results_file = f"preds_out_{cfg.decoding_strategy}_bw{hp['beam_width']}"
393 if cfg.decoding_strategy == "maes":
394 results_file = f"{results_file}_ma{hp['maes_prefix_alpha']}_mg{hp['maes_expansion_gamma']}"
395 if cfg.kenlm_model_file:
396 results_file = f"{results_file}_ba{hp['beam_alpha']}"
397 if cfg.hat_subtract_ilm:
398 results_file = f"{results_file}_hat_ilmw{hp['hat_ilm_weight']}"
399 preds_output_file = os.path.join(cfg.preds_output_folder, f"{results_file}.tsv")
400 else:
401 preds_output_file = None
402
403 cfg.decoding.beam_size = hp["beam_width"]
404 cfg.decoding.ngram_lm_alpha = hp["beam_alpha"]
405 cfg.decoding.maes_prefix_alpha = hp["maes_prefix_alpha"]
406 cfg.decoding.maes_expansion_gamma = hp["maes_expansion_gamma"]
407 cfg.decoding.hat_ilm_weight = hp["hat_ilm_weight"]
408
409 candidate_wer, candidate_cer = decoding_step(
410 asr_model,
411 cfg,
412 all_probs=all_probs,
413 target_transcripts=target_transcripts,
414 preds_output_file=preds_output_file,
415 beam_batch_size=cfg.beam_batch_size,
416 progress_bar=True,
417 )
418
419 if candidate_cer < best_cer:
420 best_cer_beam_size = hp["beam_width"]
421 best_cer_alpha = hp["beam_alpha"]
422 best_cer_ma = hp["maes_prefix_alpha"]
423 best_cer_mg = hp["maes_expansion_gamma"]
424 best_cer_hat_ilm_weight = hp["hat_ilm_weight"]
425 best_cer = candidate_cer
426
427 if candidate_wer < best_wer:
428 best_wer_beam_size = hp["beam_width"]
429 best_wer_alpha = hp["beam_alpha"]
430 best_wer_ma = hp["maes_prefix_alpha"]
431 best_wer_ga = hp["maes_expansion_gamma"]
432 best_wer_hat_ilm_weight = hp["hat_ilm_weight"]
433 best_wer = candidate_wer
434
435 wer_hat_parameter = ""
436 if cfg.hat_subtract_ilm:
437 wer_hat_parameter = f"HAT ilm weight = {best_wer_hat_ilm_weight}, "
438 logging.info(
439 f'Best WER Candidate = {best_wer:.2%} :: Beam size = {best_wer_beam_size}, '
440 f'Beam alpha = {best_wer_alpha}, {wer_hat_parameter}'
441 f'maes_prefix_alpha = {best_wer_ma}, maes_expansion_gamma = {best_wer_ga} '
442 )
443
444 cer_hat_parameter = ""
445 if cfg.hat_subtract_ilm:
446 cer_hat_parameter = f"HAT ilm weight = {best_cer_hat_ilm_weight}"
447 logging.info(
448 f'Best CER Candidate = {best_cer:.2%} :: Beam size = {best_cer_beam_size}, '
449 f'Beam alpha = {best_cer_alpha}, {cer_hat_parameter} '
450 f'maes_prefix_alpha = {best_cer_ma}, maes_expansion_gamma = {best_cer_mg}'
451 )
452 logging.info(f"=================================================================================")
453
454
455 if __name__ == '__main__':
456 main()
457
[end of scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram_transducer.py]
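The script above enumerates its decoding hyperparameters with scikit-learn's `ParameterGrid`, which simply expands every combination of the listed values and lets the loop keep the best-scoring setting. A pure-Python sketch of that expansion, with illustrative values (the WER/CER scoring step is omitted):

```python
# Minimal sketch of the hyperparameter grid expansion performed by
# sklearn.model_selection.ParameterGrid in the script above: every
# combination of the listed values becomes one candidate setting.
# The parameter names mirror the script's `params` dict; the values
# here are illustrative only.
from itertools import product


def expand_grid(params):
    """Expand a dict of value lists into a list of single-value dicts."""
    keys = sorted(params)
    return [dict(zip(keys, values)) for values in product(*(params[k] for k in keys))]


params = {'beam_width': [4, 8], 'beam_alpha': [0.5, 1.0, 2.0]}
grid = expand_grid(params)
# 2 beam widths x 3 alphas -> 6 candidate settings to score
```

Each dict in `grid` is then applied to the decoding config one at a time, exactly as the `for hp in hp_grid:` loop does.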
[start of scripts/confidence_ensembles/build_ensemble.py]
1 # Copyright (c) 2023, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """
16 This script provides a functionality to create confidence-based ensembles
17 from a collection of pretrained models.
18
19 For more details see the paper https://arxiv.org/abs/2306.15824
20 or tutorial in tutorials/asr/Confidence_Ensembles.ipynb
21
22 You would typically use this script by providing a yaml config file or overriding
23 default options from command line.
24
25 Usage examples:
26
27 1. Building an ensemble of two monolingual models with default settings (no confidence tuning).
28
29 python build_ensemble.py --config-path=. --config-name=ensemble_config.yaml
30 ensemble.0.model=stt_it_conformer_ctc_large
31 ensemble.0.training_manifest=<path to the Italian data of 100+ utterances (no transcription required)>
32 ensemble.1.model=stt_es_conformer_ctc_large
33 ensemble.1.training_manifest=<path to the Spanish data of 100+ utterances (no transcription required)>
34 output_path=<path to the desired location of the .nemo checkpoint>
35
36 You can have more than 2 models and can control transcription settings (e.g., batch size)
37 with ``transcription.<any argument of examples/asr/transcribe_speech.py>`` parameters.
38
39 2. If you want to get improved results, you can enable tuning of the confidence and logistic regression (LR) parameters.
40 E.g.
41
42 python build_ensemble.py
43 <all arguments like in the previous example>
44 ensemble.0.dev_manifest=<path to the dev data that's required for tuning>
45 ...
46 # IMPORTANT: see the note below if you use > 2 models!
47 ensemble.N.dev_manifest=<path to the dev data that's required for tuning>
48 tune_confidence=True # to allow confidence tuning. LR is tuned by default
49
50 As with any tuning, it is recommended to have reasonably large validation set for each model,
51 otherwise you might overfit to the validation data.
52
53 Note that if you add additional models (> 2) you will need to modify ensemble_config.yaml
54 or create a new one with added models in there. While it's theoretically possible to
55 fully override such parameters from commandline, hydra is very unfriendly for such
56 use-cases, so it's strongly recommended to create new configs.
57
58 3. If you want to precisely control tuning grid search, you can do that with
59
60 python build_ensemble.py
61 <all arguments as in the previous examples>
62 tune_confidence_config.confidence_type='[entropy_renyi_exp,entropy_tsallis_exp]' # only tune over this set
63 tune_confidence_config.alpha='[0.1,0.5,1.0]' # only tune over this set
64
65 You can check the dataclasses in this file for the full list of supported
66 arguments and their default values.
67 """
68
69 import atexit
70
71 # using default logging to be able to silence unnecessary messages from nemo
72 import logging
73 import os
74 import random
75 import sys
76 import tempfile
77 from copy import deepcopy
78 from dataclasses import dataclass
79 from pathlib import Path
80 from typing import Dict, List, Optional, Tuple
81
82 import joblib
83 import numpy as np
84 import pytorch_lightning as pl
85 from omegaconf import MISSING, DictConfig, OmegaConf
86 from sklearn.linear_model import LogisticRegression
87 from sklearn.metrics import confusion_matrix
88 from sklearn.pipeline import Pipeline, make_pipeline
89 from sklearn.preprocessing import StandardScaler
90 from tqdm import tqdm
91
92 from nemo.collections.asr.models.confidence_ensemble import (
93 ConfidenceEnsembleModel,
94 ConfidenceSpec,
95 compute_confidence,
96 get_filtered_logprobs,
97 )
98 from nemo.collections.asr.parts.utils.asr_confidence_utils import (
99 ConfidenceConfig,
100 ConfidenceMeasureConfig,
101 get_confidence_aggregation_bank,
102 get_confidence_measure_bank,
103 )
104 from nemo.collections.asr.parts.utils.rnnt_utils import Hypothesis
105 from nemo.core.config import hydra_runner
106
107 LOG = logging.getLogger(__file__)
108
109 # adding Python path. If not found, asking user to get the file
110 try:
111 sys.path.append(str(Path(__file__).parents[2] / "examples" / "asr"))
112 import transcribe_speech
113 except ImportError:
114 # if users run script normally from nemo repo, this shouldn't be triggered as
115 # we modify the path above. But if they downloaded the build_ensemble.py as
116 # an isolated script, we'd ask them to also download corresponding version
117 # of the transcribe_speech.py
118 print(
119 "Current script depends on 'examples/asr/transcribe_speech.py', but can't find it. "
120 "If it's not present, download it from the NeMo github manually and put inside this folder."
121 )
122
123
124 @dataclass
125 class EnsembleConfig:
126 # .nemo path or pretrained name
127 model: str = MISSING
128 # path to the training data manifest (non-tarred)
129 training_manifest: str = MISSING
130 # specify to limit the number of training samples
131 # 100 is most likely enough, but setting higher default just in case
132 max_training_samples: int = 1000
133 # specify to provide dev data manifest for HP tuning
134 dev_manifest: Optional[str] = None
135
136
137 @dataclass
138 class TuneConfidenceConfig:
139 # important parameter, so should always be tuned
140 exclude_blank: Tuple[bool] = (True, False)
141 # prod is pretty much always worse, so not including by default
142 aggregation: Tuple[str] = ("mean", "min", "max")
143 # not including max prob, as there is always an entropy-based metric
144 # that's better but otherwise including everything
145 confidence_type: Tuple[str] = (
146 "entropy_renyi_exp",
147 "entropy_renyi_lin",
148 "entropy_tsallis_exp",
149 "entropy_tsallis_lin",
150 "entropy_gibbs_lin",
151 "entropy_gibbs_exp",
152 )
153
154 # TODO: currently it's not possible to efficiently tune temperature, as we always
155 # apply log-softmax in the decoder, so to try different values it will be required
156 # to rerun the decoding, which is very slow. To support this for one-off experiments
157 # it's possible to modify the code of CTC decoder / Transducer joint to
158 # remove log-softmax and then apply it directly in this script with the temperature
159 #
160 # Alternatively, one can run this script multiple times with different values of
161 # temperature and pick the best performing ensemble. Note that this will increase
162 # tuning time by the number of temperature values tried. On the other hand,
163 # the above approach is a lot more efficient and will only slightly increase
164 # the total tuning runtime.
165
166 # very important to tune for max prob, but for entropy metrics 1.0 is almost always best
167 # temperature: Tuple[float] = (1.0,)
168
169 # not that important, but can sometimes make a small difference
170 alpha: Tuple[float] = (0.25, 0.33, 0.5, 1.0)
171
172 def get_grid_size(self) -> int:
173 """Returns the total number of points in the search space."""
174 if "max_prob" in self.confidence_type:
175 return (
176 len(self.exclude_blank)
177 * len(self.aggregation)
178 * ((len(self.confidence_type) - 1) * len(self.alpha) + 1)
179 )
180 return len(self.exclude_blank) * len(self.aggregation) * len(self.confidence_type) * len(self.alpha)
181
182
183 @dataclass
184 class TuneLogisticRegressionConfig:
185 # will have log-uniform grid over this range with that many points
186     # note that a value of 10000.0 (effectively no regularization) is always added
187 C_num_points: int = 10
188 C_min: float = 0.0001
189 C_max: float = 10.0
190
191 # not too important
192 multi_class: Tuple[str] = ("ovr", "multinomial")
193
194 # should try to include weights directly if the data is too imbalanced
195 class_weight: Tuple = (None, "balanced")
196
197 # increase if getting many warnings that algorithm didn't converge
198 max_iter: int = 1000
199
200
201 @dataclass
202 class BuildEnsembleConfig:
203 # where to save the resulting ensemble model
204 output_path: str = MISSING
205
206 # each model specification
207 ensemble: List[EnsembleConfig] = MISSING
208
209 random_seed: int = 0 # for reproducibility
210
211 # default confidence, can override
212 confidence: ConfidenceConfig = ConfidenceConfig(
213 # we keep frame confidences and apply aggregation manually to get full-utterance confidence
214 preserve_frame_confidence=True,
215 exclude_blank=True,
216 aggregation="mean",
217 measure_cfg=ConfidenceMeasureConfig(name="entropy", entropy_type="renyi", alpha=0.25, entropy_norm="lin",),
218 )
219 temperature: float = 1.0
220
221 # this is optional, but can be used to change any aspect of the transcription
222 # config, such as batch size or amp usage. Note that model, data and confidence
223     # will be overridden by this script
224 transcription: transcribe_speech.TranscriptionConfig = transcribe_speech.TranscriptionConfig()
225
226 # set to True to tune the confidence.
227 # requires dev manifests to be specified for each model
228 tune_confidence: bool = False
229 # used to specify what to tune over. By default runs tuning over some
230     # reasonable grid, so that it does not take forever.
231 # Can be changed as needed
232 tune_confidence_config: TuneConfidenceConfig = TuneConfidenceConfig()
233
234 # very fast to tune and can be important in case of imbalanced datasets
235 # will automatically set to False if dev data is not available
236 tune_logistic_regression: bool = True
237 tune_logistic_regression_config: TuneLogisticRegressionConfig = TuneLogisticRegressionConfig()
238
239 def __post_init__(self):
240 """Checking that if any dev data is provided, all are provided.
241
242 Will also auto-set tune_logistic_regression to False if no dev data
243 is available.
244
245 If tune_confidence is set to True (user choice) and no dev data is
246 provided, will raise an error.
247 """
248 num_dev_data = 0
249 for ensemble_cfg in self.ensemble:
250 num_dev_data += ensemble_cfg.dev_manifest is not None
251 if num_dev_data == 0:
252 if self.tune_confidence:
253 raise ValueError("tune_confidence is set to True, but no dev data is provided")
254 LOG.info("Setting tune_logistic_regression = False since no dev data is provided")
255 self.tune_logistic_regression = False
256 return
257
258 if num_dev_data < len(self.ensemble):
259 raise ValueError(
260 "Some ensemble configs specify dev data, but some don't. Either all have to specify it or none!"
261 )
262
263
264 def calculate_score(features: np.ndarray, labels: np.ndarray, pipe: Pipeline) -> Tuple[float, np.ndarray]:
265     """Score is calculated as the overall classification accuracy.
266
267     This is the sum of the confusion-matrix diagonal over its total sum.
268
269 Args:
270 features: numpy array of features of shape [N x D], where N is the
271 number of objects (typically a total number of utterances in
272 all datasets) and D is the total number of confidence scores
273 used to train the model (typically = number of models).
274         labels: numpy array of shape [N] containing ground-truth model indices.
275 pipe: classification pipeline (currently, standardization + logistic
276 regression).
277
278 Returns:
279 tuple: score value in [0, 1] and full classification confusion matrix.
280 """
281 predictions = pipe.predict(features)
282 conf_m = confusion_matrix(labels, predictions)
283 score = np.diag(conf_m).sum() / conf_m.sum()
284 return score, conf_m
285
286
287 def train_model_selection(
288 training_features: np.ndarray,
289 training_labels: np.ndarray,
290 dev_features: Optional[np.ndarray] = None,
291 dev_labels: Optional[np.ndarray] = None,
292 tune_lr: bool = False,
293 tune_lr_cfg: Optional[TuneLogisticRegressionConfig] = None,
294 verbose: bool = False,
295 ) -> Tuple[Pipeline, float]:
296 """Trains model selection block with an (optional) tuning of the parameters.
297
298 Returns a pipeline consisting of feature standardization and logistic
299 regression. If tune_lr is set to True, dev features/labels will be used
300 to tune the hyperparameters of the logistic regression with the grid
301 search that's defined via ``tune_lr_cfg``.
302
303 If no tuning is requested, uses the following parameters::
304
305 best_pipe = make_pipeline(
306 StandardScaler(),
307 LogisticRegression(
308 multi_class="multinomial",
309 C=10000.0,
310 max_iter=1000,
311 class_weight="balanced",
312 ),
313 )
314
315 Args:
316 training_features: numpy array of features of shape [N x D], where N is
317 the number of objects (typically a total number of utterances in
318 all training datasets) and D is the total number of confidence
319 scores used to train the model (typically = number of models).
320         training_labels: numpy array of shape [N] containing ground-truth
321 model indices.
322 dev_features: same as training, but for the validation subset.
323 dev_labels: same as training, but for the validation subset.
324 tune_lr: controls whether tuning of LR hyperparameters is performed.
325 If set to True, it's required to also provide dev features/labels.
326 tune_lr_cfg: specifies what values of LR hyperparameters to try.
327 verbose: if True, will output final training/dev scores.
328
329 Returns:
330 tuple: trained model selection pipeline, best score (or -1 if no tuning
331 was done).
332 """
333 if not tune_lr:
334 # default parameters: C=10000.0 disables regularization
335 best_pipe = make_pipeline(
336 StandardScaler(),
337 LogisticRegression(multi_class="multinomial", C=10000.0, max_iter=1000, class_weight="balanced"),
338 )
339 max_score = -1
340 else:
341 C_pms = np.append(
342 np.exp(np.linspace(np.log(tune_lr_cfg.C_min), np.log(tune_lr_cfg.C_max), tune_lr_cfg.C_num_points)),
343 10000.0,
344 )
345 max_score = 0
346 best_pipe = None
347 for class_weight in tune_lr_cfg.class_weight:
348 for multi_class in tune_lr_cfg.multi_class:
349 for C in C_pms:
350 pipe = make_pipeline(
351 StandardScaler(),
352 LogisticRegression(
353 multi_class=multi_class, C=C, max_iter=tune_lr_cfg.max_iter, class_weight=class_weight
354 ),
355 )
356 pipe.fit(training_features, training_labels)
357 score, confusion = calculate_score(dev_features, dev_labels, pipe)
358 if score > max_score:
359 max_score = score
360 best_pipe = pipe
361
362 best_pipe.fit(training_features, training_labels)
363 if verbose:
364 accuracy, confusion = calculate_score(training_features, training_labels, best_pipe)
365 LOG.info("Training fit accuracy: %.4f", accuracy * 100.0)
366 LOG.info("Training confusion matrix:\n%s", str(confusion))
367 if dev_features is not None and verbose:
368 accuracy, confusion = calculate_score(dev_features, dev_labels, best_pipe)
369 LOG.info("Dev fit accuracy: %.4f", accuracy * 100.0)
370 LOG.info("Dev confusion matrix:\n%s", str(confusion))
371
372 return best_pipe, max_score
373
374
375 def subsample_manifest(manifest_file: str, max_samples: int) -> str:
376 """Will save a subsampled version of the manifest to the same folder.
377
378 Have to save to the same folder to support relative paths.
379
380 Args:
381 manifest_file: path to the manifest file that needs subsampling.
382 max_samples: how many samples to retain. Will randomly select that
383 many lines from the manifest.
384
385 Returns:
386 str: the path to the subsampled manifest file.
387 """
388 with open(manifest_file, "rt", encoding="utf-8") as fin:
389 lines = fin.readlines()
390 if max_samples < len(lines):
391 lines = random.sample(lines, max_samples)
392 output_file = manifest_file + "-subsampled"
393 with open(output_file, "wt", encoding="utf-8") as fout:
394 fout.write("".join(lines))
395 return output_file
396
397
398 def cleanup_subsampled_manifests(subsampled_manifests: List[str]):
399     """Removes all generated subsampled manifests."""
400 for manifest in subsampled_manifests:
401 os.remove(manifest)
402
403
404 def compute_all_confidences(
405 hypothesis: Hypothesis, tune_confidence_cfg: TuneConfidenceConfig
406 ) -> Dict[ConfidenceSpec, float]:
407 """Computes a set of confidence scores from a given hypothesis.
408
409 Works with the output of both CTC and Transducer decoding.
410
411 Args:
412 hypothesis: generated hypothesis as returned from the transcribe
413 method of the ASR model.
414 tune_confidence_cfg: config specifying what confidence scores to
415 compute.
416
417 Returns:
418         dict: dictionary with confidence spec -> confidence score mapping.
419 """
420 conf_values = {}
421
422 for exclude_blank in tune_confidence_cfg.exclude_blank:
423 filtered_logprobs = get_filtered_logprobs(hypothesis, exclude_blank)
424 vocab_size = filtered_logprobs.shape[1]
425 for aggregation in tune_confidence_cfg.aggregation:
426 aggr_func = get_confidence_aggregation_bank()[aggregation]
427 for conf_type in tune_confidence_cfg.confidence_type:
428 conf_func = get_confidence_measure_bank()[conf_type]
429 if conf_type == "max_prob": # skipping alpha in this case
430 conf_value = aggr_func(conf_func(filtered_logprobs, v=vocab_size, t=1.0)).cpu().item()
431 conf_values[ConfidenceSpec(exclude_blank, aggregation, conf_type, 1.0)] = conf_value
432 else:
433 for alpha in tune_confidence_cfg.alpha:
434 conf_value = aggr_func(conf_func(filtered_logprobs, v=vocab_size, t=alpha)).cpu().item()
435 conf_values[ConfidenceSpec(exclude_blank, aggregation, conf_type, alpha)] = conf_value
436
437 return conf_values
438
439
440 def find_best_confidence(
441 train_confidences: List[List[Dict[ConfidenceSpec, float]]],
442 train_labels: List[int],
443 dev_confidences: List[List[Dict[ConfidenceSpec, float]]],
444 dev_labels: List[int],
445 tune_lr: bool,
446     tune_lr_config: TuneLogisticRegressionConfig,
447 ) -> Tuple[ConfidenceConfig, Pipeline]:
448 """Finds the best confidence configuration for model selection.
449
450 Will loop over all values in the confidence dictionary and fit the LR
451 model (optionally tuning its HPs). The best performing confidence (on the
452 dev set) will be used for the final LR model.
453
454 Args:
455 train_confidences: this is an object of type
456 ``List[List[Dict[ConfidenceSpec, float]]]``. The shape of this
457 object is [M, N, S], where
458 M: number of models
459 N: number of utterances in all training sets
460 S: number of confidence scores to try
461
462 This argument will be used to construct np.array objects for each
463 of the confidence scores with the shape [M, N]
464
465 train_labels: ground-truth labels of the correct model for each data
466             point. This is a list of size [N]
467 dev_confidences: same as training, but for the validation subset.
468 dev_labels: same as training, but for the validation subset.
469 tune_lr: controls whether tuning of LR hyperparameters is performed.
470         tune_lr_config: specifies what values of LR hyperparameters to try.
471
472 Returns:
473 tuple: best confidence config, best model selection pipeline
474 """
475 max_score = 0
476 best_pipe = None
477 best_conf_spec = None
478     LOG.info("Evaluating all confidences. Total grid size: %d", len(train_confidences[0][0].keys()))
479 for conf_spec in tqdm(train_confidences[0][0].keys()):
480 cur_train_confidences = []
481 for model_confs in train_confidences:
482 cur_train_confidences.append([])
483 for model_conf in model_confs:
484 cur_train_confidences[-1].append(model_conf[conf_spec])
485 cur_dev_confidences = []
486 for model_confs in dev_confidences:
487 cur_dev_confidences.append([])
488 for model_conf in model_confs:
489 cur_dev_confidences[-1].append(model_conf[conf_spec])
490 # transposing with zip(*list)
491 training_features = np.array(list(zip(*cur_train_confidences)))
492 training_labels = np.array(train_labels)
493 dev_features = np.array(list(zip(*cur_dev_confidences)))
494 dev_labels = np.array(dev_labels)
495 pipe, score = train_model_selection(
496 training_features, training_labels, dev_features, dev_labels, tune_lr, tune_lr_config,
497 )
498 if max_score < score:
499 max_score = score
500 best_pipe = pipe
501 best_conf_spec = conf_spec
502 LOG.info("Found better parameters: %s. New score: %.4f", str(conf_spec), max_score)
503
504 return best_conf_spec.to_confidence_config(), best_pipe
505
506
507 @hydra_runner(config_name="BuildEnsembleConfig", schema=BuildEnsembleConfig)
508 def main(cfg: BuildEnsembleConfig):
509 # silencing all messages from nemo/ptl to avoid dumping tons of configs to the stdout
510 logging.getLogger('pytorch_lightning').setLevel(logging.CRITICAL)
511 logging.getLogger('nemo_logger').setLevel(logging.CRITICAL)
512 LOG.info(f'Build ensemble config:\n{OmegaConf.to_yaml(cfg)}')
513
514 # to ensure post init is called
515 cfg = BuildEnsembleConfig(**cfg)
516
517 pl.seed_everything(cfg.random_seed)
518 cfg.transcription.random_seed = None # seed is already applied
519 cfg.transcription.return_transcriptions = True
520 cfg.transcription.preserve_alignment = True
521 cfg.transcription.ctc_decoding.temperature = cfg.temperature
522 cfg.transcription.rnnt_decoding.temperature = cfg.temperature
523 # this ensures that generated output is after log-softmax for consistency with CTC
524
525 train_confidences = []
526 dev_confidences = []
527 train_labels = []
528 dev_labels = []
529
530 # registering clean-up function that will hold on to this list and
531 # should clean up even if there is partial error in some of the transcribe
532 # calls
533 subsampled_manifests = []
534 atexit.register(cleanup_subsampled_manifests, subsampled_manifests)
535
536 # note that we loop over the same config.
537 # This is intentional, as we need to run all models on all datasets
538 # this loop will do the following things:
539 # 1. Goes through each model X each training dataset
540 # 2. Computes predictions by directly calling transcribe_speech.main
541 # 3. Converts transcription to the confidence score(s) as specified in the config
542 # 4. If dev sets are provided, computes the same for them
543 # 5. Creates a list of ground-truth model indices by mapping each model
544 # to its own training dataset as specified in the config.
545 # 6. After the loop, we either run tuning over all confidence scores or
546 # directly use a single score to fit logistic regression and save the
547 # final ensemble model.
548 for model_idx, model_cfg in enumerate(cfg.ensemble):
549 train_model_confidences = []
550 dev_model_confidences = []
551 for data_idx, data_cfg in enumerate(cfg.ensemble):
552 if model_idx == 0: # generating subsampled manifests only one time
553 subsampled_manifests.append(
554 subsample_manifest(data_cfg.training_manifest, data_cfg.max_training_samples)
555 )
556 subsampled_manifest = subsampled_manifests[data_idx]
557
558 if model_cfg.model.endswith(".nemo"):
559 cfg.transcription.model_path = model_cfg.model
560 else: # assuming pretrained model
561 cfg.transcription.pretrained_name = model_cfg.model
562
563 cfg.transcription.dataset_manifest = subsampled_manifest
564
565 # training
566 with tempfile.NamedTemporaryFile() as output_file:
567 cfg.transcription.output_filename = output_file.name
568 LOG.info("Transcribing training dataset %d with model %d", data_idx, model_idx)
569 transcriptions = transcribe_speech.main(deepcopy(cfg.transcription))
570 LOG.info("Generating confidence scores")
571 # TODO: parallelize this loop?
572 for transcription in tqdm(transcriptions):
573 if cfg.tune_confidence:
574 train_model_confidences.append(
575 compute_all_confidences(transcription, cfg.tune_confidence_config)
576 )
577 else:
578 train_model_confidences.append(compute_confidence(transcription, cfg.confidence))
579 if model_idx == 0: # labels are the same for all models
580 train_labels.append(data_idx)
581
582 # optional dev
583 if data_cfg.dev_manifest is not None:
584 cfg.transcription.dataset_manifest = data_cfg.dev_manifest
585 with tempfile.NamedTemporaryFile() as output_file:
586 cfg.transcription.output_filename = output_file.name
587 LOG.info("Transcribing dev dataset %d with model %d", data_idx, model_idx)
588 transcriptions = transcribe_speech.main(deepcopy(cfg.transcription))
589 LOG.info("Generating confidence scores")
590 for transcription in tqdm(transcriptions):
591 if cfg.tune_confidence:
592 dev_model_confidences.append(
593 compute_all_confidences(transcription, cfg.tune_confidence_config)
594 )
595 else:
596 dev_model_confidences.append(compute_confidence(transcription, cfg.confidence))
597 if model_idx == 0: # labels are the same for all models
598 dev_labels.append(data_idx)
599
600 train_confidences.append(train_model_confidences)
601 if dev_model_confidences:
602 dev_confidences.append(dev_model_confidences)
603
604 if cfg.tune_confidence:
605 best_confidence, model_selection_block = find_best_confidence(
606 train_confidences,
607 train_labels,
608 dev_confidences,
609 dev_labels,
610 cfg.tune_logistic_regression,
611 cfg.tune_logistic_regression_config,
612 )
613 else:
614 best_confidence = cfg.confidence
615 # transposing with zip(*list)
616 training_features = np.array(list(zip(*train_confidences)))
617 training_labels = np.array(train_labels)
618 if dev_confidences:
619 dev_features = np.array(list(zip(*dev_confidences)))
620 dev_labels = np.array(dev_labels)
621 else:
622 dev_features = None
623 dev_labels = None
624 model_selection_block, _ = train_model_selection(
625 training_features,
626 training_labels,
627 dev_features,
628 dev_labels,
629 cfg.tune_logistic_regression,
630 cfg.tune_logistic_regression_config,
631 verbose=True,
632 )
633
634 with tempfile.TemporaryDirectory() as tmpdir:
635 model_selection_block_path = os.path.join(tmpdir, 'model_selection_block.pkl')
636 joblib.dump(model_selection_block, model_selection_block_path)
637
638 # creating ensemble checkpoint
639 ensemble_model = ConfidenceEnsembleModel(
640 cfg=DictConfig(
641 {
642 'model_selection_block': model_selection_block_path,
643 'confidence': best_confidence,
644 'temperature': cfg.temperature,
645 'load_models': [model_cfg.model for model_cfg in cfg.ensemble],
646 }
647 ),
648 trainer=None,
649 )
650 ensemble_model.save_to(cfg.output_path)
651
652
653 if __name__ == '__main__':
654 main()
655
[end of scripts/confidence_ensembles/build_ensemble.py]
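`build_ensemble.py` turns the per-model confidence lists into the `[N x D]` feature matrix with the `zip(*list)` transpose noted in its comments (`# transposing with zip(*list)`): M rows of N per-utterance scores become N rows of M per-model scores, which is the layout the logistic-regression pipeline expects. A minimal pure-Python sketch with illustrative values:

```python
# Sketch of the feature-matrix construction in build_ensemble.py:
# per-model confidence lists of shape [M models x N utterances] are
# transposed with zip(*list), giving one row per utterance whose columns
# are the M per-model scores (the [N x D] layout, here D = M).
model_confidences = [
    [0.9, 0.2, 0.8],  # model 0's confidence on utterances 0..2
    [0.3, 0.7, 0.6],  # model 1's confidence on utterances 0..2
]
features = [list(row) for row in zip(*model_confidences)]
# each row pairs the two models' scores for one utterance
```

In the script the transposed rows are wrapped in `np.array(...)` before being fed to the scikit-learn pipeline, but the transpose itself is exactly this `zip(*list)` idiom.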
[start of scripts/speech_recognition/confidence/benchmark_asr_confidence.py]
1 # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import json
16 import os
17 from dataclasses import dataclass, is_dataclass
18 from pathlib import Path
19 from typing import Optional
20
21 import pytorch_lightning as pl
22 import torch
23 from omegaconf import MISSING, OmegaConf
24 from sklearn.model_selection import ParameterGrid
25
26 from nemo.collections.asr.metrics.rnnt_wer import RNNTDecodingConfig
27 from nemo.collections.asr.metrics.wer import CTCDecodingConfig
28 from nemo.collections.asr.models import ASRModel, EncDecRNNTModel
29 from nemo.collections.asr.parts.utils.asr_confidence_benchmarking_utils import (
30 apply_confidence_parameters,
31 run_confidence_benchmark,
32 )
33 from nemo.collections.asr.parts.utils.asr_confidence_utils import ConfidenceConfig
34 from nemo.core.config import hydra_runner
35 from nemo.utils import logging
36
37 """
38 Get confidence metrics and curve plots for a given model, dataset, and confidence parameters.
39
40 # Arguments
41 model_path: Path to .nemo ASR checkpoint
42 pretrained_name: Name of pretrained ASR model (from NGC registry)
43 dataset_manifest: Path to dataset JSON manifest file (in NeMo format)
44 output_dir: Output directory to store a report and curve plot directories
45
46 batch_size: batch size during inference
47 num_workers: number of workers during inference
48
49 cuda: Optional int to enable or disable execution of model on certain CUDA device
50 amp: Bool to decide if Automatic Mixed Precision should be used during inference
51 audio_type: Str filetype of the audio. Supported = wav, flac, mp3
52
53 target_level: Word- or token-level confidence. Supported = word, token, auto (for computing both word and token)
54 confidence_cfg: Config with confidence parameters
55 grid_params: Dictionary with lists of parameters to iteratively benchmark on
56
57 # Usage
58 ASR model can be specified by either "model_path" or "pretrained_name".
59 Data for transcription are defined with "dataset_manifest".
60 Results are returned as a benchmark report and curve plots.
61
62 python benchmark_asr_confidence.py \
63 model_path=null \
64 pretrained_name=null \
65 dataset_manifest="" \
66 output_dir="" \
67 batch_size=64 \
68 num_workers=8 \
69 cuda=0 \
70 amp=True \
71 target_level="word" \
72 confidence_cfg.exclude_blank=False \
73 'grid_params="{\"aggregation\": [\"min\", \"prod\"], \"alpha\": [0.33, 0.5]}"'
74 """
75
76
77 def get_experiment_params(cfg):
78 """Get experiment parameters from a confidence config and generate the experiment name.
79
80 Returns:
81 List of experiment parameters.
82 String with the experiment name.
83 """
84 blank = "no_blank" if cfg.exclude_blank else "blank"
85 aggregation = cfg.aggregation
86 method_name = cfg.measure_cfg.name
87 alpha = cfg.measure_cfg.alpha
88 if method_name == "entropy":
89 entropy_type = cfg.measure_cfg.entropy_type
90 entropy_norm = cfg.measure_cfg.entropy_norm
91 experiment_param_list = [
92 aggregation,
93 str(cfg.exclude_blank),
94 method_name,
95 entropy_type,
96 entropy_norm,
97 str(alpha),
98 ]
99 experiment_str = "-".join([aggregation, blank, method_name, entropy_type, entropy_norm, str(alpha)])
100 else:
101 experiment_param_list = [aggregation, str(cfg.exclude_blank), method_name, "-", "-", str(alpha)]
102 experiment_str = "-".join([aggregation, blank, method_name, str(alpha)])
103 return experiment_param_list, experiment_str
104
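As a quick illustration of the naming scheme built by `get_experiment_params`, the following standalone sketch mirrors the entropy branch using a `SimpleNamespace` stand-in for the Hydra confidence config (the field values here are made up for the example):

```python
from types import SimpleNamespace

# Hypothetical stand-in for asr_model.cfg.decoding.confidence_cfg
cfg = SimpleNamespace(
    exclude_blank=False,
    aggregation="prod",
    measure_cfg=SimpleNamespace(
        name="entropy", alpha=0.33, entropy_type="tsallis", entropy_norm="exp"
    ),
)

blank = "no_blank" if cfg.exclude_blank else "blank"
# Same join order as the entropy branch above
experiment_str = "-".join(
    [cfg.aggregation, blank, cfg.measure_cfg.name,
     cfg.measure_cfg.entropy_type, cfg.measure_cfg.entropy_norm,
     str(cfg.measure_cfg.alpha)]
)
print(experiment_str)  # prod-blank-entropy-tsallis-exp-0.33
```

The resulting string doubles as the per-experiment plot directory name, so every grid point lands in its own folder.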
105
106 @dataclass
107 class ConfidenceBenchmarkingConfig:
108 # Required configs
109 model_path: Optional[str] = None # Path to a .nemo file
110 pretrained_name: Optional[str] = None # Name of a pretrained model
111 dataset_manifest: str = MISSING
112 output_dir: str = MISSING
113
114 # General configs
115 batch_size: int = 32
116 num_workers: int = 4
117
118 # Set `cuda` to int to define CUDA device. If 'None', will look for CUDA
119 # device anyway, and do inference on CPU only if CUDA device is not found.
120 # If `cuda` is a negative number, inference will be on CPU only.
121 cuda: Optional[int] = None
122 amp: bool = False
123 audio_type: str = "wav"
124
125 # Confidence configs
126 target_level: str = "auto" # Choices: "word", "token", "auto" (for both word- and token-level confidence)
127 confidence_cfg: ConfidenceConfig = ConfidenceConfig(preserve_word_confidence=True, preserve_token_confidence=True)
128 grid_params: Optional[str] = None # a dictionary with lists of parameters to iteratively benchmark on
129
130
131 @hydra_runner(config_name="ConfidenceBenchmarkingConfig", schema=ConfidenceBenchmarkingConfig)
132 def main(cfg: ConfidenceBenchmarkingConfig):
133 torch.set_grad_enabled(False)
134
135 logging.info(f'Hydra config: {OmegaConf.to_yaml(cfg)}')
136
137 if is_dataclass(cfg):
138 cfg = OmegaConf.structured(cfg)
139
140 if cfg.model_path is None and cfg.pretrained_name is None:
141 raise ValueError("Both cfg.model_path and cfg.pretrained_name cannot be None!")
142
143 # setup GPU
144 if cfg.cuda is None:
145 if torch.cuda.is_available():
146 device = [0] # use 0th CUDA device
147 accelerator = 'gpu'
148 else:
149 device = 1
150 accelerator = 'cpu'
151 else:
152 device = [cfg.cuda]
153 accelerator = 'gpu'
154
155 map_location = torch.device('cuda:{}'.format(device[0]) if accelerator == 'gpu' else 'cpu')
156
157 # setup model
158 if cfg.model_path is not None:
159 # restore model from .nemo file path
160 model_cfg = ASRModel.restore_from(restore_path=cfg.model_path, return_config=True)
161 classpath = model_cfg.target # original class path
162 imported_class = model_utils.import_class_by_path(classpath) # type: ASRModel
163 logging.info(f"Restoring model : {imported_class.__name__}")
164 asr_model = imported_class.restore_from(
165 restore_path=cfg.model_path, map_location=map_location
166 ) # type: ASRModel
167 else:
168 # restore model by name
169 asr_model = ASRModel.from_pretrained(
170 model_name=cfg.pretrained_name, map_location=map_location
171 ) # type: ASRModel
172
173 trainer = pl.Trainer(devices=device, accelerator=accelerator)
174 asr_model.set_trainer(trainer)
175 asr_model = asr_model.eval()
176
177 # Check if ctc or rnnt model
178 is_rnnt = isinstance(asr_model, EncDecRNNTModel)
179
180 # Check that the model has the `change_decoding_strategy` method
181 if not hasattr(asr_model, 'change_decoding_strategy'):
182 raise RuntimeError("The asr_model you are using must have the `change_decoding_strategy` method.")
183
184 # get filenames and reference texts from manifest
185 filepaths = []
186 reference_texts = []
187 if os.stat(cfg.dataset_manifest).st_size == 0:
188 logging.error(f"The input dataset_manifest {cfg.dataset_manifest} is empty. Exiting!")
189 return None
190 manifest_dir = Path(cfg.dataset_manifest).parent
191 with open(cfg.dataset_manifest, 'r') as f:
192 for line in f:
193 item = json.loads(line)
194 audio_file = Path(item['audio_filepath'])
195 if not audio_file.is_file() and not audio_file.is_absolute():
196 audio_file = manifest_dir / audio_file
197 filepaths.append(str(audio_file.absolute()))
198 reference_texts.append(item['text'])
199
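The path-resolution rule in the loop above (a relative `audio_filepath` that does not exist on disk is anchored at the manifest's directory) can be exercised in isolation; the manifest location and entries below are made up for the example:

```python
import json
from pathlib import Path

manifest_dir = Path("/data/librispeech")  # hypothetical manifest location
lines = [
    '{"audio_filepath": "/abs/path/a.wav", "text": "hello", "duration": 1.0}',
    '{"audio_filepath": "wavs/b.wav", "text": "world", "duration": 2.0}',
]

filepaths = []
for line in lines:
    item = json.loads(line)
    audio_file = Path(item["audio_filepath"])
    # Relative paths that do not resolve on disk are interpreted relative to the manifest dir
    if not audio_file.is_file() and not audio_file.is_absolute():
        audio_file = manifest_dir / audio_file
    filepaths.append(str(audio_file))

print(filepaths)  # ['/abs/path/a.wav', '/data/librispeech/wavs/b.wav']
```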
200 # setup AMP (optional)
201 autocast = None
202 if cfg.amp and torch.cuda.is_available() and hasattr(torch.cuda, 'amp') and hasattr(torch.cuda.amp, 'autocast'):
203 logging.info("AMP enabled!\n")
204 autocast = torch.cuda.amp.autocast
205
206 # do grid-based benchmarking if grid_params is provided, otherwise a regular one
207 work_dir = Path(cfg.output_dir)
208 os.makedirs(work_dir, exist_ok=True)
209 report_legend = (
210 ",".join(
211 [
212 "model_type",
213 "aggregation",
214 "blank",
215 "method_name",
216 "entropy_type",
217 "entropy_norm",
218 "alpha",
219 "target_level",
220 "auc_roc",
221 "auc_pr",
222 "auc_nt",
223 "nce",
224 "ece",
225 "auc_yc",
226 "std_yc",
227 "max_yc",
228 ]
229 )
230 + "\n"
231 )
232 model_typename = "RNNT" if is_rnnt else "CTC"
233 report_file = work_dir / Path("report.csv")
234 if cfg.grid_params:
235 asr_model.change_decoding_strategy(
236 RNNTDecodingConfig(fused_batch_size=-1, strategy="greedy_batch", confidence_cfg=cfg.confidence_cfg)
237 if is_rnnt
238 else CTCDecodingConfig(confidence_cfg=cfg.confidence_cfg)
239 )
240 params = json.loads(cfg.grid_params)
241 hp_grid = ParameterGrid(params)
242 hp_grid = list(hp_grid)
243
244 logging.info(f"==============================Running a benchmarking with grid search=========================")
245 logging.info(f"Grid search size: {len(hp_grid)}")
246 logging.info(f"Results will be written to:\nreport file `{report_file}`\nand plot directories near the file")
247 logging.info(f"==============================================================================================")
248
249 with open(report_file, "tw", encoding="utf-8") as f:
250 f.write(report_legend)
251 f.flush()
252 for i, hp in enumerate(hp_grid):
253 logging.info(f"Run # {i + 1}, grid: `{hp}`")
254 asr_model.change_decoding_strategy(apply_confidence_parameters(asr_model.cfg.decoding, hp))
255 param_list, experiment_name = get_experiment_params(asr_model.cfg.decoding.confidence_cfg)
256 plot_dir = work_dir / Path(experiment_name)
257 results = run_confidence_benchmark(
258 asr_model,
259 cfg.target_level,
260 filepaths,
261 reference_texts,
262 cfg.batch_size,
263 cfg.num_workers,
264 plot_dir,
265 autocast,
266 )
267 for level, result in results.items():
268 f.write(f"{model_typename},{','.join(param_list)},{level},{','.join([str(r) for r in result])}\n")
269 f.flush()
270 else:
271 asr_model.change_decoding_strategy(
272 RNNTDecodingConfig(fused_batch_size=-1, strategy="greedy_batch", confidence_cfg=cfg.confidence_cfg)
273 if is_rnnt
274 else CTCDecodingConfig(confidence_cfg=cfg.confidence_cfg)
275 )
276 param_list, experiment_name = get_experiment_params(asr_model.cfg.decoding.confidence_cfg)
277 plot_dir = work_dir / Path(experiment_name)
278
279 logging.info(f"==============================Running a single benchmarking===================================")
280 logging.info(f"Results will be written to:\nreport file `{report_file}`\nand plot directory `{plot_dir}`")
281
282 with open(report_file, "tw", encoding="utf-8") as f:
283 f.write(report_legend)
284 f.flush()
285 results = run_confidence_benchmark(
286 asr_model,
287             cfg.target_level,
288             filepaths,
289             reference_texts,
290             cfg.batch_size,
291             cfg.num_workers,
292 plot_dir,
293 autocast,
294 )
295 for level, result in results.items():
296 f.write(f"{model_typename},{','.join(param_list)},{level},{','.join([str(r) for r in result])}\n")
297 logging.info(f"===========================================Done===============================================")
298
299
300 if __name__ == '__main__':
301 main()
302
[end of scripts/speech_recognition/confidence/benchmark_asr_confidence.py]
[start of scripts/speech_recognition/convert_to_tarred_audio_dataset.py]
1 # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """
15 # This script converts an existing audio dataset with a manifest to
16 # a tarred and sharded audio dataset that can be read by the
17 # TarredAudioToTextDataLayer.
18
19 # Please make sure your audio_filepath DOES NOT CONTAIN '-sub'!
20 # Because we will use it to handle files which have duplicate filenames but with different offsets
21 # (see function create_shard for details)
22
23
24 # Bucketing can help to improve the training speed. You may use --buckets_num to specify the number of buckets.
25 # It creates multiple tarred datasets, one per bucket, based on the audio durations.
26 # The range of [min_duration, max_duration) is split into equal sized buckets.
27 # Recommend to use --sort_in_shards to speedup the training by reducing the paddings in the batches
28 # More info on how to use bucketing feature: https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/datasets.html
29
30 # If valid NVIDIA DALI version is installed, will also generate the corresponding DALI index files that need to be
31 # supplied to the config in order to utilize webdataset for efficient large dataset handling.
32 # NOTE: DALI + Webdataset is NOT compatible with Bucketing support !
33
34 # Usage:
35 1) Creating a new tarfile dataset
36
37 python convert_to_tarred_audio_dataset.py \
38 --manifest_path=<path to the manifest file> \
39 --target_dir=<path to output directory> \
40 --num_shards=<number of tarfiles that will contain the audio> \
41 --max_duration=<float representing maximum duration of audio samples> \
42 --min_duration=<float representing minimum duration of audio samples> \
43 --shuffle --shuffle_seed=1 \
44 --sort_in_shards \
45 --workers=-1
46
47
48 2) Concatenating more tarfiles to a pre-existing tarred dataset
49
50 python convert_to_tarred_audio_dataset.py \
51 --manifest_path=<path to the tarred manifest file> \
52 --metadata_path=<path to the metadata.yaml (or metadata_version_{X}.yaml) file> \
53 --target_dir=<path to output directory where the original tarfiles are contained> \
54 --max_duration=<float representing maximum duration of audio samples> \
55 --min_duration=<float representing minimum duration of audio samples> \
56 --shuffle --shuffle_seed=1 \
57 --sort_in_shards \
58 --workers=-1 \
59 --concat_manifest_paths \
60 <space separated paths to 1 or more manifest files to concatenate into the original tarred dataset>
61
62 3) Writing an empty metadata file
63
64 python convert_to_tarred_audio_dataset.py \
65 --target_dir=<path to output directory> \
66 # any other optional argument
67 --num_shards=8 \
68 --max_duration=16.7 \
69 --min_duration=0.01 \
70 --shuffle \
71 --workers=-1 \
72 --sort_in_shards \
73 --shuffle_seed=1 \
74 --write_metadata
75
76 """
77 import argparse
78 import copy
79 import json
80 import os
81 import random
82 import tarfile
83 from collections import defaultdict
84 from dataclasses import dataclass, field
85 from datetime import datetime
86 from typing import Any, List, Optional
87
88 from joblib import Parallel, delayed
89 from omegaconf import DictConfig, OmegaConf, open_dict
90
91 try:
92 import create_dali_tarred_dataset_index as dali_index
93
94 DALI_INDEX_SCRIPT_AVAILABLE = True
95 except (ImportError, ModuleNotFoundError, FileNotFoundError):
96 DALI_INDEX_SCRIPT_AVAILABLE = False
97
98 parser = argparse.ArgumentParser(
99 description="Convert an existing ASR dataset to tarballs compatible with TarredAudioToTextDataLayer."
100 )
101 parser.add_argument(
102 "--manifest_path", default=None, type=str, required=False, help="Path to the existing dataset's manifest."
103 )
104
105 parser.add_argument(
106 '--concat_manifest_paths',
107 nargs='+',
108 default=None,
109 type=str,
110 required=False,
111 help="Path to the additional dataset's manifests that will be concatenated with base dataset.",
112 )
113
114 # Optional arguments
115 parser.add_argument(
116 "--target_dir",
117 default='./tarred',
118 type=str,
119 help="Target directory for resulting tarballs and manifest. Defaults to `./tarred`. Creates the path if necessary.",
120 )
121
122 parser.add_argument(
123 "--metadata_path", required=False, default=None, type=str, help="Path to metadata file for the dataset.",
124 )
125
126 parser.add_argument(
127 "--num_shards",
128 default=-1,
129 type=int,
130 help="Number of shards (tarballs) to create. Used for partitioning data among workers.",
131 )
132 parser.add_argument(
133 '--max_duration',
134 default=None,
135 required=True,
136 type=float,
137 help='Maximum duration of audio clip in the dataset. By default, it is None and is required to be set.',
138 )
139 parser.add_argument(
140 '--min_duration',
141 default=None,
142 type=float,
143 help='Minimum duration of audio clip in the dataset. By default, it is None and will not filter files.',
144 )
145 parser.add_argument(
146 "--shuffle",
147 action='store_true',
148 help="Whether or not to randomly shuffle the samples in the manifest before tarring/sharding.",
149 )
150
151 parser.add_argument(
152 "--keep_files_together",
153 action='store_true',
154 help="Whether or not to keep entries from the same file (but different offsets) together when sorting before tarring/sharding.",
155 )
156
157 parser.add_argument(
158 "--sort_in_shards",
159 action='store_true',
160 help="Whether or not to sort samples inside the shards based on their duration.",
161 )
162
163 parser.add_argument(
164 "--buckets_num", type=int, default=1, help="Number of buckets to create based on duration.",
165 )
166
167 parser.add_argument("--shuffle_seed", type=int, default=None, help="Random seed for use if shuffling is enabled.")
168 parser.add_argument(
169 '--write_metadata',
170 action='store_true',
171 help=(
172 "Flag to write a blank metadata with the current call config. "
173 "Note that the metadata will not contain the number of shards, "
174 "and it must be filled out by the user."
175 ),
176 )
177 parser.add_argument(
178 "--no_shard_manifests",
179 action='store_true',
180 help="Do not write sharded manifests along with the aggregated manifest.",
181 )
182 parser.add_argument('--workers', type=int, default=1, help='Number of worker processes')
183 args = parser.parse_args()
184
185
186 @dataclass
187 class ASRTarredDatasetConfig:
188 num_shards: int = -1
189 shuffle: bool = False
190 max_duration: Optional[float] = None
191 min_duration: Optional[float] = None
192 shuffle_seed: Optional[int] = None
193 sort_in_shards: bool = True
194 shard_manifests: bool = True
195 keep_files_together: bool = False
196
197
198 @dataclass
199 class ASRTarredDatasetMetadata:
200 created_datetime: Optional[str] = None
201 version: int = 0
202 num_samples_per_shard: Optional[int] = None
203 is_concatenated_manifest: bool = False
204
205     dataset_config: Optional[ASRTarredDatasetConfig] = field(default_factory=ASRTarredDatasetConfig)
206 history: Optional[List[Any]] = field(default_factory=lambda: [])
207
208 def __post_init__(self):
209 self.created_datetime = self.get_current_datetime()
210
211 def get_current_datetime(self):
212 return datetime.now().strftime("%m-%d-%Y %H-%M-%S")
213
214 @classmethod
215 def from_config(cls, config: DictConfig):
216 obj = cls()
217 obj.__dict__.update(**config)
218 return obj
219
220 @classmethod
221 def from_file(cls, filepath: str):
222 config = OmegaConf.load(filepath)
223 return ASRTarredDatasetMetadata.from_config(config=config)
224
225
226 class ASRTarredDatasetBuilder:
227 """
228 Helper class that constructs a tarred dataset from scratch, or concatenates tarred datasets
229 together and constructs manifests for them.
230 """
231
232 def __init__(self):
233 self.config = None
234
235 def configure(self, config: ASRTarredDatasetConfig):
236 """
237 Sets the config generated from command line overrides.
238
239 Args:
240 config: ASRTarredDatasetConfig dataclass object.
241 """
242 self.config = config # type: ASRTarredDatasetConfig
243
244         if self.config.num_shards <= 0:
245 raise ValueError("`num_shards` must be > 0. Please fill in the metadata information correctly.")
246
247     def create_new_dataset(self, manifest_path: str, target_dir: str = "./tarred/", num_workers: int = 1):
248 """
249 Creates a new tarred dataset from a given manifest file.
250
251 Args:
252 manifest_path: Path to the original ASR manifest.
253 target_dir: Output directory.
254 num_workers: Integer denoting number of parallel worker processes which will write tarfiles.
255 Defaults to 1 - which denotes sequential worker process.
256
257 Output:
258 Writes tarfiles, along with the tarred dataset compatible manifest file.
259 Also preserves a record of the metadata used to construct this tarred dataset.
260 """
261 if self.config is None:
262 raise ValueError("Config has not been set. Please call `configure(config: ASRTarredDatasetConfig)`")
263
264 if manifest_path is None:
265 raise FileNotFoundError("Manifest filepath cannot be None !")
266
267 config = self.config # type: ASRTarredDatasetConfig
268
269 if not os.path.exists(target_dir):
270 os.makedirs(target_dir)
271
272 # Read the existing manifest
273 entries, total_duration, filtered_entries, filtered_duration = self._read_manifest(manifest_path, config)
274
275 if len(filtered_entries) > 0:
276 print(f"Filtered {len(filtered_entries)} files which amounts to {filtered_duration} seconds of audio.")
277 print(
278 f"After filtering, manifest has {len(entries)} files which amounts to {total_duration} seconds of audio."
279 )
280
281 if len(entries) == 0:
282 print("No tarred dataset was created as there were 0 valid samples after filtering!")
283 return
284 if config.shuffle:
285 random.seed(config.shuffle_seed)
286 print("Shuffling...")
287 if config.keep_files_together:
288 filename_entries = defaultdict(list)
289 for ent in entries:
290 filename_entries[ent["audio_filepath"]].append(ent)
291 filenames = list(filename_entries.keys())
292 random.shuffle(filenames)
293 shuffled_entries = []
294 for filename in filenames:
295 shuffled_entries += filename_entries[filename]
296 entries = shuffled_entries
297 else:
298 random.shuffle(entries)
299
300 # Create shards and updated manifest entries
301 print(f"Number of samples added : {len(entries)}")
302 print(f"Remainder: {len(entries) % config.num_shards}")
303
304 start_indices = []
305 end_indices = []
306 # Build indices
307 for i in range(config.num_shards):
308 start_idx = (len(entries) // config.num_shards) * i
309 end_idx = start_idx + (len(entries) // config.num_shards)
310 print(f"Shard {i} has entries {start_idx} ~ {end_idx}")
311 files = set()
312 for ent_id in range(start_idx, end_idx):
313 files.add(entries[ent_id]["audio_filepath"])
314 print(f"Shard {i} contains {len(files)} files")
315 if i == config.num_shards - 1:
316 # We discard in order to have the same number of entries per shard.
317 print(f"Have {len(entries) - end_idx} entries left over that will be discarded.")
318
319 start_indices.append(start_idx)
320 end_indices.append(end_idx)
321
322 manifest_folder, _ = os.path.split(manifest_path)
323
324 with Parallel(n_jobs=num_workers, verbose=config.num_shards) as parallel:
325 # Call parallel tarfile construction
326 new_entries_list = parallel(
327 delayed(self._create_shard)(entries[start_idx:end_idx], target_dir, i, manifest_folder)
328 for i, (start_idx, end_idx) in enumerate(zip(start_indices, end_indices))
329 )
330
331 if config.shard_manifests:
332 sharded_manifests_dir = target_dir + '/sharded_manifests'
333 if not os.path.exists(sharded_manifests_dir):
334 os.makedirs(sharded_manifests_dir)
335
336 for manifest in new_entries_list:
337 shard_id = manifest[0]['shard_id']
338 new_manifest_shard_path = os.path.join(sharded_manifests_dir, f'manifest_{shard_id}.json')
339 with open(new_manifest_shard_path, 'w', encoding='utf-8') as m2:
340 for entry in manifest:
341 json.dump(entry, m2)
342 m2.write('\n')
343
344 # Flatten the list of list of entries to a list of entries
345 new_entries = [sample for manifest in new_entries_list for sample in manifest]
346 del new_entries_list
347
348 print("Total number of entries in manifest :", len(new_entries))
349
350 # Write manifest
351 new_manifest_path = os.path.join(target_dir, 'tarred_audio_manifest.json')
352 with open(new_manifest_path, 'w', encoding='utf-8') as m2:
353 for entry in new_entries:
354 json.dump(entry, m2)
355 m2.write('\n')
356
357 # Write metadata (default metadata for new datasets)
358 new_metadata_path = os.path.join(target_dir, 'metadata.yaml')
359 metadata = ASRTarredDatasetMetadata()
360
361 # Update metadata
362 metadata.dataset_config = config
363 metadata.num_samples_per_shard = len(new_entries) // config.num_shards
364
365 # Write metadata
366 metadata_yaml = OmegaConf.structured(metadata)
367 OmegaConf.save(metadata_yaml, new_metadata_path, resolve=True)
368
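The shard partitioning in `create_new_dataset` uses integer division, so every shard receives exactly `len(entries) // num_shards` samples and the trailing remainder is discarded from the last shard. A minimal sketch of that index math (standalone, not NeMo code):

```python
def shard_bounds(num_entries: int, num_shards: int):
    """Return (start, end) index pairs; trailing remainder entries are discarded."""
    per_shard = num_entries // num_shards
    return [(i * per_shard, (i + 1) * per_shard) for i in range(num_shards)]

bounds = shard_bounds(10, 3)
print(bounds)               # [(0, 3), (3, 6), (6, 9)]
print(10 - bounds[-1][1])   # 1 entry left over and discarded
```

Keeping shards uniform in size is what lets `create_concatenated_dataset` later reuse `num_samples_per_shard` when appending new shards.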
369 def create_concatenated_dataset(
370 self,
371 base_manifest_path: str,
372 manifest_paths: List[str],
373 metadata: ASRTarredDatasetMetadata,
374 target_dir: str = "./tarred_concatenated/",
375 num_workers: int = 1,
376 ):
377 """
378 Creates new tarfiles in order to create a concatenated dataset, whose manifest contains the data for
379 both the original dataset as well as the new data submitted in manifest paths.
380
381 Args:
382 base_manifest_path: Path to the manifest file which contains the information for the original
383 tarred dataset (with flattened paths).
384 manifest_paths: List of one or more paths to manifest files that will be concatenated with above
385 base tarred dataset.
386 metadata: ASRTarredDatasetMetadata dataclass instance with overrides from command line.
387 target_dir: Output directory
388
389 Output:
390 Writes tarfiles which with indices mapping to a "concatenated" tarred dataset,
391 along with the tarred dataset compatible manifest file which includes information
392 about all the datasets that comprise the concatenated dataset.
393
394 Also preserves a record of the metadata used to construct this tarred dataset.
395 """
396 if not os.path.exists(target_dir):
397 os.makedirs(target_dir)
398
399 if base_manifest_path is None:
400 raise FileNotFoundError("Base manifest filepath cannot be None !")
401
402 if manifest_paths is None or len(manifest_paths) == 0:
403 raise FileNotFoundError("List of additional manifest filepaths cannot be None !")
404
405 config = ASRTarredDatasetConfig(**(metadata.dataset_config))
406
407 # Read the existing manifest (no filtering here)
408 base_entries, _, _, _ = self._read_manifest(base_manifest_path, config)
409 print(f"Read base manifest containing {len(base_entries)} samples.")
410
411 # Precompute number of samples per shard
412 if metadata.num_samples_per_shard is None:
413 num_samples_per_shard = len(base_entries) // config.num_shards
414 else:
415 num_samples_per_shard = metadata.num_samples_per_shard
416
417 print("Number of samples per shard :", num_samples_per_shard)
418
419 # Compute min and max duration and update config (if no metadata passed)
420 print(f"Selected max duration : {config.max_duration}")
421 print(f"Selected min duration : {config.min_duration}")
422
423 entries = []
424 for new_manifest_idx in range(len(manifest_paths)):
425 new_entries, total_duration, filtered_new_entries, filtered_duration = self._read_manifest(
426 manifest_paths[new_manifest_idx], config
427 )
428
429 if len(filtered_new_entries) > 0:
430 print(
431 f"Filtered {len(filtered_new_entries)} files which amounts to {filtered_duration:0.2f}"
432 f" seconds of audio from manifest {manifest_paths[new_manifest_idx]}."
433 )
434 print(
435 f"After filtering, manifest has {len(entries)} files which amounts to {total_duration} seconds of audio."
436 )
437
438 entries.extend(new_entries)
439
440 if len(entries) == 0:
441 print("No tarred dataset was created as there were 0 valid samples after filtering!")
442 return
443
444 if config.shuffle:
445 random.seed(config.shuffle_seed)
446 print("Shuffling...")
447 random.shuffle(entries)
448
449 # Drop last section of samples that cannot be added onto a chunk
450 drop_count = len(entries) % num_samples_per_shard
451 total_new_entries = len(entries)
452         entries = entries[: len(entries) - drop_count]  # entries[:-0] would drop everything when drop_count == 0
453
454 print(
455 f"Dropping {drop_count} samples from total new samples {total_new_entries} since they cannot "
456 f"be added into a uniformly sized chunk."
457 )
458
459 # Create shards and updated manifest entries
460 num_added_shards = len(entries) // num_samples_per_shard
461
462 print(f"Number of samples in base dataset : {len(base_entries)}")
463 print(f"Number of samples in additional datasets : {len(entries)}")
464 print(f"Number of added shards : {num_added_shards}")
465 print(f"Remainder: {len(entries) % num_samples_per_shard}")
466
467 start_indices = []
468 end_indices = []
469 shard_indices = []
470 for i in range(num_added_shards):
471 start_idx = (len(entries) // num_added_shards) * i
472 end_idx = start_idx + (len(entries) // num_added_shards)
473 shard_idx = i + config.num_shards
474 print(f"Shard {shard_idx} has entries {start_idx + len(base_entries)} ~ {end_idx + len(base_entries)}")
475
476 start_indices.append(start_idx)
477 end_indices.append(end_idx)
478 shard_indices.append(shard_idx)
479
480 manifest_folder, _ = os.path.split(base_manifest_path)
481
482 with Parallel(n_jobs=num_workers, verbose=num_added_shards) as parallel:
483 # Call parallel tarfile construction
484 new_entries_list = parallel(
485 delayed(self._create_shard)(entries[start_idx:end_idx], target_dir, shard_idx, manifest_folder)
486 for i, (start_idx, end_idx, shard_idx) in enumerate(zip(start_indices, end_indices, shard_indices))
487 )
488
489 if config.shard_manifests:
490 sharded_manifests_dir = target_dir + '/sharded_manifests'
491 if not os.path.exists(sharded_manifests_dir):
492 os.makedirs(sharded_manifests_dir)
493
494 for manifest in new_entries_list:
495 shard_id = manifest[0]['shard_id']
496 new_manifest_shard_path = os.path.join(sharded_manifests_dir, f'manifest_{shard_id}.json')
497 with open(new_manifest_shard_path, 'w', encoding='utf-8') as m2:
498 for entry in manifest:
499 json.dump(entry, m2)
500 m2.write('\n')
501
502 # Flatten the list of list of entries to a list of entries
503 new_entries = [sample for manifest in new_entries_list for sample in manifest]
504 del new_entries_list
505
506 # Write manifest
507 if metadata is None:
508 new_version = 1 # start with `1`, where `0` indicates the base manifest + dataset
509 else:
510 new_version = metadata.version + 1
511
512 print("Total number of entries in manifest :", len(base_entries) + len(new_entries))
513
514 new_manifest_path = os.path.join(target_dir, f'tarred_audio_manifest_version_{new_version}.json')
515 with open(new_manifest_path, 'w', encoding='utf-8') as m2:
516 # First write all the entries of base manifest
517 for entry in base_entries:
518 json.dump(entry, m2)
519 m2.write('\n')
520
521 # Finally write the new entries
522 for entry in new_entries:
523 json.dump(entry, m2)
524 m2.write('\n')
525
526 # Preserve historical metadata
527 base_metadata = metadata
528
529 # Write metadata (updated metadata for concatenated datasets)
530 new_metadata_path = os.path.join(target_dir, f'metadata_version_{new_version}.yaml')
531 metadata = ASRTarredDatasetMetadata()
532
533 # Update config
534 config.num_shards = config.num_shards + num_added_shards
535
536 # Update metadata
537 metadata.version = new_version
538 metadata.dataset_config = config
539 metadata.num_samples_per_shard = num_samples_per_shard
540 metadata.is_concatenated_manifest = True
541 metadata.created_datetime = metadata.get_current_datetime()
542
543 # Attach history
544 current_metadata = OmegaConf.structured(base_metadata.history)
545 metadata.history = current_metadata
546
547 # Write metadata
548 metadata_yaml = OmegaConf.structured(metadata)
549 OmegaConf.save(metadata_yaml, new_metadata_path, resolve=True)
550
551 def _read_manifest(self, manifest_path: str, config: ASRTarredDatasetConfig):
552 """Read and filters data from the manifest"""
553 # Read the existing manifest
554 entries = []
555 total_duration = 0.0
556 filtered_entries = []
557 filtered_duration = 0.0
558 with open(manifest_path, 'r', encoding='utf-8') as m:
559 for line in m:
560 entry = json.loads(line)
561 if (config.max_duration is None or entry['duration'] < config.max_duration) and (
562 config.min_duration is None or entry['duration'] >= config.min_duration
563 ):
564 entries.append(entry)
565 total_duration += entry["duration"]
566 else:
567 filtered_entries.append(entry)
568 filtered_duration += entry['duration']
569
570 return entries, total_duration, filtered_entries, filtered_duration
571
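The filtering predicate in `_read_manifest` keeps entries with `min_duration <= duration < max_duration`, with either bound optional. A toy run of the same predicate on hypothetical durations:

```python
entries = [{"duration": d} for d in (0.3, 1.5, 8.0, 20.0)]
min_duration, max_duration = 0.5, 16.7  # example bounds; None disables a bound

kept = [
    e for e in entries
    if (max_duration is None or e["duration"] < max_duration)
    and (min_duration is None or e["duration"] >= min_duration)
]
total = sum(e["duration"] for e in kept)
print(len(kept), total)  # 2 9.5
```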
572 def _create_shard(self, entries, target_dir, shard_id, manifest_folder):
573 """Creates a tarball containing the audio files from `entries`.
574 """
575 if self.config.sort_in_shards:
576 entries.sort(key=lambda x: x["duration"], reverse=False)
577
578 new_entries = []
579 tar = tarfile.open(os.path.join(target_dir, f'audio_{shard_id}.tar'), mode='w', dereference=True)
580
581 count = dict()
582 for entry in entries:
583 # We squash the filename since we do not preserve directory structure of audio files in the tarball.
584 if os.path.exists(entry["audio_filepath"]):
585 audio_filepath = entry["audio_filepath"]
586 else:
587 audio_filepath = os.path.join(manifest_folder, entry["audio_filepath"])
588 if not os.path.exists(audio_filepath):
589 raise FileNotFoundError(f"Could not find {entry['audio_filepath']}!")
590
591 base, ext = os.path.splitext(audio_filepath)
592 base = base.replace('/', '_')
593 # Need the following replacement as long as WebDataset splits on first period
594 base = base.replace('.', '_')
595 squashed_filename = f'{base}{ext}'
596 if squashed_filename not in count:
597 tar.add(audio_filepath, arcname=squashed_filename)
598 to_write = squashed_filename
599 count[squashed_filename] = 1
600 else:
601 to_write = base + "-sub" + str(count[squashed_filename]) + ext
602 count[squashed_filename] += 1
603
604 new_entry = {
605 'audio_filepath': to_write,
606 'duration': entry['duration'],
607 'shard_id': shard_id, # Keep shard ID for recordkeeping
608 }
609
610 if 'label' in entry:
611 new_entry['label'] = entry['label']
612
613 if 'text' in entry:
614 new_entry['text'] = entry['text']
615
616 if 'offset' in entry:
617 new_entry['offset'] = entry['offset']
618
619 if 'lang' in entry:
620 new_entry['lang'] = entry['lang']
621
622 new_entries.append(new_entry)
623
624 tar.close()
625 return new_entries
626
627 @classmethod
628 def setup_history(cls, base_metadata: ASRTarredDatasetMetadata, history: List[Any]):
629 if 'history' in base_metadata.keys():
630 for history_val in base_metadata.history:
631 cls.setup_history(history_val, history)
632
633 if base_metadata is not None:
634 metadata_copy = copy.deepcopy(base_metadata)
635 with open_dict(metadata_copy):
636 metadata_copy.pop('history', None)
637 history.append(metadata_copy)
638
639
640 def main():
641 if args.buckets_num > 1:
642 bucket_length = (args.max_duration - args.min_duration) / float(args.buckets_num)
643 for i in range(args.buckets_num):
644 min_duration = args.min_duration + i * bucket_length
645 max_duration = min_duration + bucket_length
646 if i == args.buckets_num - 1:
647 # add a small number to cover the samples with exactly duration of max_duration in the last bucket.
648 max_duration += 1e-5
649 target_dir = os.path.join(args.target_dir, f"bucket{i+1}")
650 print(f"Creating bucket {i+1} with min_duration={min_duration} and max_duration={max_duration} ...")
651 print(f"Results are being saved at: {target_dir}.")
652 create_tar_datasets(min_duration=min_duration, max_duration=max_duration, target_dir=target_dir)
653 print(f"Bucket {i+1} is created.")
654 else:
655 create_tar_datasets(min_duration=args.min_duration, max_duration=args.max_duration, target_dir=args.target_dir)
656
657
658 def create_tar_datasets(min_duration: float, max_duration: float, target_dir: str):
659 builder = ASRTarredDatasetBuilder()
660
661 shard_manifests = False if args.no_shard_manifests else True
662
663 if args.write_metadata:
664 metadata = ASRTarredDatasetMetadata()
665 dataset_cfg = ASRTarredDatasetConfig(
666 num_shards=args.num_shards,
667 shuffle=args.shuffle,
668 max_duration=max_duration,
669 min_duration=min_duration,
670 shuffle_seed=args.shuffle_seed,
671 sort_in_shards=args.sort_in_shards,
672 shard_manifests=shard_manifests,
673 keep_files_together=args.keep_files_together,
674 )
675 metadata.dataset_config = dataset_cfg
676
677 output_path = os.path.join(target_dir, 'default_metadata.yaml')
678 OmegaConf.save(metadata, output_path, resolve=True)
679 print(f"Default metadata written to {output_path}")
680 exit(0)
681
682 if args.concat_manifest_paths is None or len(args.concat_manifest_paths) == 0:
683 print("Creating new tarred dataset ...")
684
685 # Create a tarred dataset from scratch
686 config = ASRTarredDatasetConfig(
687 num_shards=args.num_shards,
688 shuffle=args.shuffle,
689 max_duration=max_duration,
690 min_duration=min_duration,
691 shuffle_seed=args.shuffle_seed,
692 sort_in_shards=args.sort_in_shards,
693 shard_manifests=shard_manifests,
694 keep_files_together=args.keep_files_together,
695 )
696 builder.configure(config)
697 builder.create_new_dataset(manifest_path=args.manifest_path, target_dir=target_dir, num_workers=args.workers)
698
699 else:
700 if args.buckets_num > 1:
701 raise ValueError("Concatenation feature does not support buckets_num > 1.")
702 print("Concatenating multiple tarred datasets ...")
703
704 # Implicitly update config from base details
705 if args.metadata_path is not None:
706 metadata = ASRTarredDatasetMetadata.from_file(args.metadata_path)
707 else:
708 raise ValueError("`metadata` yaml file path must be provided!")
709
710 # Preserve history
711 history = []
712 builder.setup_history(OmegaConf.structured(metadata), history)
713 metadata.history = history
714
715 # Add command line overrides (everything other than num_shards)
716 metadata.dataset_config.max_duration = max_duration
717 metadata.dataset_config.min_duration = min_duration
718 metadata.dataset_config.shuffle = args.shuffle
719 metadata.dataset_config.shuffle_seed = args.shuffle_seed
720 metadata.dataset_config.sort_in_shards = args.sort_in_shards
721 metadata.dataset_config.shard_manifests = shard_manifests
722
723 builder.configure(metadata.dataset_config)
724
725 # Concatenate a tarred dataset onto a previous one
726 builder.create_concatenated_dataset(
727 base_manifest_path=args.manifest_path,
728 manifest_paths=args.concat_manifest_paths,
729 metadata=metadata,
730 target_dir=target_dir,
731 num_workers=args.workers,
732 )
733
734 if DALI_INDEX_SCRIPT_AVAILABLE and dali_index.INDEX_CREATOR_AVAILABLE:
735 print("Constructing DALI Tarfile Index - ", target_dir)
736 index_config = dali_index.DALITarredIndexConfig(tar_dir=target_dir, workers=args.workers)
737 dali_index.main(index_config)
738
739
740 if __name__ == "__main__":
741 main()
742
[end of scripts/speech_recognition/convert_to_tarred_audio_dataset.py]
[start of tools/nemo_forced_aligner/align.py]
1 # Copyright (c) 2023, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import copy
16 import math
17 import os
18 from dataclasses import dataclass, field, is_dataclass
19 from pathlib import Path
20 from typing import List, Optional
21
22 import torch
23 from omegaconf import OmegaConf
24 from utils.data_prep import (
25 add_t_start_end_to_utt_obj,
26 get_batch_starts_ends,
27 get_batch_variables,
28 get_manifest_lines_batch,
29 is_entry_in_all_lines,
30 is_entry_in_any_lines,
31 )
32 from utils.make_ass_files import make_ass_files
33 from utils.make_ctm_files import make_ctm_files
34 from utils.make_output_manifest import write_manifest_out_line
35 from utils.viterbi_decoding import viterbi_decoding
36
37 from nemo.collections.asr.models.ctc_models import EncDecCTCModel
38 from nemo.collections.asr.models.hybrid_rnnt_ctc_models import EncDecHybridRNNTCTCModel
39 from nemo.collections.asr.parts.utils.streaming_utils import FrameBatchASR
40 from nemo.collections.asr.parts.utils.transcribe_utils import setup_model
41 from nemo.core.config import hydra_runner
42 from nemo.utils import logging
43
44 """
45 Align the utterances in manifest_filepath.
46 Results are saved in ctm files in output_dir.
47
48 Arguments:
49 pretrained_name: string specifying the name of a CTC NeMo ASR model which will be automatically downloaded
50 from NGC and used for generating the log-probs which we will use to do alignment.
51 Note: NFA can only use CTC models (not Transducer models) at the moment.
52 model_path: string specifying the local filepath to a CTC NeMo ASR model which will be used to generate the
53 log-probs which we will use to do alignment.
54 Note: NFA can only use CTC models (not Transducer models) at the moment.
55 Note: if a model_path is provided, it will override the pretrained_name.
56 manifest_filepath: filepath to the manifest of the data you want to align,
57 containing 'audio_filepath' and 'text' fields.
58 output_dir: the folder where output CTM files and new JSON manifest will be saved.
59 align_using_pred_text: if True, will transcribe the audio using the specified model and then use that transcription
60 as the reference text for the forced alignment.
61 transcribe_device: None, or a string specifying the device that will be used for generating log-probs (i.e. "transcribing").
62 The string needs to be in a format recognized by torch.device(). If None, NFA will set it to 'cuda' if it is available
63 (otherwise will set it to 'cpu').
64 viterbi_device: None, or string specifying the device that will be used for doing Viterbi decoding.
65 The string needs to be in a format recognized by torch.device(). If None, NFA will set it to 'cuda' if it is available
66 (otherwise will set it to 'cpu').
67 batch_size: int specifying batch size that will be used for generating log-probs and doing Viterbi decoding.
68 use_local_attention: boolean flag specifying whether to try to use local attention for the ASR Model (will only
69 work if the ASR Model is a Conformer model). If local attention is used, we will set the local attention context
70 size to [64,64].
71 additional_segment_grouping_separator: an optional string used to separate the text into smaller segments.
72 If this is not specified, then the whole text will be treated as a single segment.
73 remove_blank_tokens_from_ctm: a boolean denoting whether to remove <blank> tokens from token-level output CTMs.
74 audio_filepath_parts_in_utt_id: int specifying how many of the 'parts' of the audio_filepath
75 we will use (starting from the final part of the audio_filepath) to determine the
76 utt_id that will be used in the CTM files. Note also that any spaces that are present in the audio_filepath
77 will be replaced with dashes, so as not to change the number of space-separated elements in the
78 CTM files.
79 e.g. if audio_filepath is "/a/b/c/d/e 1.wav" and audio_filepath_parts_in_utt_id is 1 => utt_id will be "e1"
80 e.g. if audio_filepath is "/a/b/c/d/e 1.wav" and audio_filepath_parts_in_utt_id is 2 => utt_id will be "d_e1"
81 e.g. if audio_filepath is "/a/b/c/d/e 1.wav" and audio_filepath_parts_in_utt_id is 3 => utt_id will be "c_d_e1"
82 use_buffered_infer: False, if set True, using streaming to do get the logits for alignment
83 This flag is useful when aligning large audio file.
84 However, currently the chunk streaming inference does not support batch inference,
85 which means even you set batch_size > 1, it will only infer one by one instead of doing
86 the whole batch inference together.
87 chunk_len_in_secs: float chunk length in seconds
88 total_buffer_in_secs: float Length of buffer (chunk + left and right padding) in seconds
89 chunk_batch_size: int batch size for buffered chunk inference,
90 which will cut one audio into segments and do inference on chunk_batch_size segments at a time
91
92 simulate_cache_aware_streaming: False, if set True, using cache aware streaming to do get the logits for alignment
93
94 save_output_file_formats: List of strings specifying what type of output files to save (default: ["ctm", "ass"])
95 ctm_file_config: CTMFileConfig to specify the configuration of the output CTM files
96 ass_file_config: ASSFileConfig to specify the configuration of the output ASS files
97 """
98
99
100 @dataclass
101 class CTMFileConfig:
102 remove_blank_tokens: bool = False
103 # minimum duration (in seconds) for timestamps in the CTM. If any line in the CTM has a
104 # duration lower than this, it will be enlarged from the middle outwards until it
105 # meets the minimum_timestamp_duration, or reaches the beginning or end of the audio file.
106 # Note that this may cause timestamps to overlap.
107 minimum_timestamp_duration: float = 0
108
109
110 @dataclass
111 class ASSFileConfig:
112 fontsize: int = 20
113 vertical_alignment: str = "center"
114 # if resegment_text_to_fill_space is True, the ASS files will use new segments
115 # such that each segment will not take up more than (approximately) max_lines_per_segment
116 # when the ASS file is applied to a video
117 resegment_text_to_fill_space: bool = False
118 max_lines_per_segment: int = 2
119 text_already_spoken_rgb: List[int] = field(default_factory=lambda: [49, 46, 61]) # dark gray
120 text_being_spoken_rgb: List[int] = field(default_factory=lambda: [57, 171, 9]) # dark green
121 text_not_yet_spoken_rgb: List[int] = field(default_factory=lambda: [194, 193, 199]) # light gray
122
123
124 @dataclass
125 class AlignmentConfig:
126 # Required configs
127 pretrained_name: Optional[str] = None
128 model_path: Optional[str] = None
129 manifest_filepath: Optional[str] = None
130 output_dir: Optional[str] = None
131
132 # General configs
133 align_using_pred_text: bool = False
134 transcribe_device: Optional[str] = None
135 viterbi_device: Optional[str] = None
136 batch_size: int = 1
137 use_local_attention: bool = True
138 additional_segment_grouping_separator: Optional[str] = None
139 audio_filepath_parts_in_utt_id: int = 1
140
141 # Buffered chunked streaming configs
142 use_buffered_chunked_streaming: bool = False
143 chunk_len_in_secs: float = 1.6
144 total_buffer_in_secs: float = 4.0
145 chunk_batch_size: int = 32
146
147 # Cache aware streaming configs
148 simulate_cache_aware_streaming: Optional[bool] = False
149
150 # Output file configs
151 save_output_file_formats: List[str] = field(default_factory=lambda: ["ctm", "ass"])
152 ctm_file_config: CTMFileConfig = field(default_factory=CTMFileConfig)
153 ass_file_config: ASSFileConfig = field(default_factory=ASSFileConfig)
154
155
156 @hydra_runner(config_name="AlignmentConfig", schema=AlignmentConfig)
157 def main(cfg: AlignmentConfig):
158
159 logging.info(f'Hydra config: {OmegaConf.to_yaml(cfg)}')
160
161 if is_dataclass(cfg):
162 cfg = OmegaConf.structured(cfg)
163
164 # Validate config
165 if cfg.model_path is None and cfg.pretrained_name is None:
166 raise ValueError("Both cfg.model_path and cfg.pretrained_name cannot be None")
167
168 if cfg.model_path is not None and cfg.pretrained_name is not None:
169 raise ValueError("One of cfg.model_path and cfg.pretrained_name must be None")
170
171 if cfg.manifest_filepath is None:
172 raise ValueError("cfg.manifest_filepath must be specified")
173
174 if cfg.output_dir is None:
175 raise ValueError("cfg.output_dir must be specified")
176
177 if cfg.batch_size < 1:
178 raise ValueError("cfg.batch_size cannot be zero or a negative number")
179
180 if cfg.additional_segment_grouping_separator == "" or cfg.additional_segment_grouping_separator == " ":
181 raise ValueError("cfg.additional_grouping_separator cannot be empty string or space character")
182
183 if cfg.ctm_file_config.minimum_timestamp_duration < 0:
184 raise ValueError("cfg.minimum_timestamp_duration cannot be a negative number")
185
186 if cfg.ass_file_config.vertical_alignment not in ["top", "center", "bottom"]:
187 raise ValueError("cfg.ass_file_config.vertical_alignment must be one of 'top', 'center' or 'bottom'")
188
189 for rgb_list in [
190 cfg.ass_file_config.text_already_spoken_rgb,
191 cfg.ass_file_config.text_being_spoken_rgb,
192 cfg.ass_file_config.text_not_yet_spoken_rgb,
193 ]:
194 if len(rgb_list) != 3:
195 raise ValueError(
196 "cfg.ass_file_config.text_already_spoken_rgb,"
197 " cfg.ass_file_config.text_being_spoken_rgb,"
198 " and cfg.ass_file_config.text_not_yet_spoken_rgb all need to contain"
199 " exactly 3 elements."
201
202 # Validate manifest contents
203 if not is_entry_in_all_lines(cfg.manifest_filepath, "audio_filepath"):
204 raise RuntimeError(
205 "At least one line in cfg.manifest_filepath does not contain an 'audio_filepath' entry. "
206 "All lines must contain an 'audio_filepath' entry."
207 )
208
209 if cfg.align_using_pred_text:
210 if is_entry_in_any_lines(cfg.manifest_filepath, "pred_text"):
211 raise RuntimeError(
212 "Cannot specify cfg.align_using_pred_text=True when the manifest at cfg.manifest_filepath "
213 "contains 'pred_text' entries. This is because the audio will be transcribed and may produce "
214 "a different 'pred_text'. This may cause confusion."
215 )
216 else:
217 if not is_entry_in_all_lines(cfg.manifest_filepath, "text"):
218 raise RuntimeError(
219 "At least one line in cfg.manifest_filepath does not contain a 'text' entry. "
220 "NFA requires all lines to contain a 'text' entry when cfg.align_using_pred_text=False."
221 )
222
223 # init devices
224 if cfg.transcribe_device is None:
225 transcribe_device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
226 else:
227 transcribe_device = torch.device(cfg.transcribe_device)
228 logging.info(f"Device to be used for transcription step (`transcribe_device`) is {transcribe_device}")
229
230 if cfg.viterbi_device is None:
231 viterbi_device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
232 else:
233 viterbi_device = torch.device(cfg.viterbi_device)
234 logging.info(f"Device to be used for viterbi step (`viterbi_device`) is {viterbi_device}")
235
236 if transcribe_device.type == 'cuda' or viterbi_device.type == 'cuda':
237 logging.warning(
238 'One or both of transcribe_device and viterbi_device are GPUs. If you run into OOM errors '
239 'it may help to change both devices to be the CPU.'
240 )
241
242 # load model
243 model, _ = setup_model(cfg, transcribe_device)
244 model.eval()
245
246 if isinstance(model, EncDecHybridRNNTCTCModel):
247 model.change_decoding_strategy(decoder_type="ctc")
248
249 if cfg.use_local_attention:
250 logging.info(
251 "Flag use_local_attention is set to True => will try to use local attention for model if it allows it"
252 )
253 model.change_attention_model(self_attention_model="rel_pos_local_attn", att_context_size=[64, 64])
254
255 if not (isinstance(model, EncDecCTCModel) or isinstance(model, EncDecHybridRNNTCTCModel)):
256 raise NotImplementedError(
257 "Model is not an instance of NeMo EncDecCTCModel or EncDecHybridRNNTCTCModel."
258 " Currently only instances of these models are supported."
259 )
260
261 if cfg.ctm_file_config.minimum_timestamp_duration > 0:
262 logging.warning(
263 f"cfg.ctm_file_config.minimum_timestamp_duration has been set to {cfg.ctm_file_config.minimum_timestamp_duration} seconds. "
264 "This may cause the alignments for some tokens/words/additional segments to be overlapping."
265 )
266
267 buffered_chunk_params = {}
268 if cfg.use_buffered_chunked_streaming:
269 model_cfg = copy.deepcopy(model._cfg)
270
271 OmegaConf.set_struct(model_cfg.preprocessor, False)
272 # some changes for streaming scenario
273 model_cfg.preprocessor.dither = 0.0
274 model_cfg.preprocessor.pad_to = 0
275
276 if model_cfg.preprocessor.normalize != "per_feature":
277 logging.error(
278 "Only EncDecCTCModelBPE models trained with per_feature normalization are supported currently"
279 )
280 # Disable config overwriting
281 OmegaConf.set_struct(model_cfg.preprocessor, True)
282
283 feature_stride = model_cfg.preprocessor['window_stride']
284 model_stride_in_secs = feature_stride * cfg.model_downsample_factor
285 total_buffer = cfg.total_buffer_in_secs
286 chunk_len = float(cfg.chunk_len_in_secs)
287 tokens_per_chunk = math.ceil(chunk_len / model_stride_in_secs)
288 mid_delay = math.ceil((chunk_len + (total_buffer - chunk_len) / 2) / model_stride_in_secs)
289 logging.info(f"tokens_per_chunk is {tokens_per_chunk}, mid_delay is {mid_delay}")
290
291 model = FrameBatchASR(
292 asr_model=model,
293 frame_len=chunk_len,
294 total_buffer=cfg.total_buffer_in_secs,
295 batch_size=cfg.chunk_batch_size,
296 )
297 buffered_chunk_params = {
298 "delay": mid_delay,
299 "model_stride_in_secs": model_stride_in_secs,
300 "tokens_per_chunk": tokens_per_chunk,
301 }
302 # get start and end line IDs of batches
303 starts, ends = get_batch_starts_ends(cfg.manifest_filepath, cfg.batch_size)
304
305 # init output_timestep_duration = None and we will calculate and update it during the first batch
306 output_timestep_duration = None
307
308 # init f_manifest_out
309 os.makedirs(cfg.output_dir, exist_ok=True)
310 tgt_manifest_name = str(Path(cfg.manifest_filepath).stem) + "_with_output_file_paths.json"
311 tgt_manifest_filepath = str(Path(cfg.output_dir) / tgt_manifest_name)
312 f_manifest_out = open(tgt_manifest_filepath, 'w')
313
314 # get alignment and save in CTM batch-by-batch
315 for start, end in zip(starts, ends):
316 manifest_lines_batch = get_manifest_lines_batch(cfg.manifest_filepath, start, end)
317
318 (log_probs_batch, y_batch, T_batch, U_batch, utt_obj_batch, output_timestep_duration,) = get_batch_variables(
319 manifest_lines_batch,
320 model,
321 cfg.additional_segment_grouping_separator,
322 cfg.align_using_pred_text,
323 cfg.audio_filepath_parts_in_utt_id,
324 output_timestep_duration,
325 cfg.simulate_cache_aware_streaming,
326 cfg.use_buffered_chunked_streaming,
327 buffered_chunk_params,
328 )
329
330 alignments_batch = viterbi_decoding(log_probs_batch, y_batch, T_batch, U_batch, viterbi_device)
331
332 for utt_obj, alignment_utt in zip(utt_obj_batch, alignments_batch):
333
334 utt_obj = add_t_start_end_to_utt_obj(utt_obj, alignment_utt, output_timestep_duration)
335
336 if "ctm" in cfg.save_output_file_formats:
337 utt_obj = make_ctm_files(utt_obj, cfg.output_dir, cfg.ctm_file_config,)
338
339 if "ass" in cfg.save_output_file_formats:
340 utt_obj = make_ass_files(utt_obj, cfg.output_dir, cfg.ass_file_config)
341
342 write_manifest_out_line(
343 f_manifest_out, utt_obj,
344 )
345
346 f_manifest_out.close()
347
348 return None
349
350
351 if __name__ == "__main__":
352 main()
353
[end of tools/nemo_forced_aligner/align.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
NVIDIA/NeMo
|
8a892b86186dbdf61803d75570cb5c58471e9dda
|
Ubuntu 22.04 Python 3.11 [asr]: multiple errors `dataclasses ValueError: mutable default * for field * is not allowed: use default_factory`
**Describe the bug**
After installing the latest stable `1.19.1` from PyPI, or the current HEAD commit with `[asr]` extras, I'm getting this error from `hydra-core==1.2.0` when trying to import `nemo.collections.asr`:
```
Traceback (most recent call last):
File "/home/alex/T7/src/speaker_verification/nemo_speaker/test.py", line 5, in <module>
import nemo.collections.asr as nemo_asr
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/__init__.py", line 15, in <module>
from nemo.collections.asr import data, losses, models, modules
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/losses/__init__.py", line 15, in <module>
from nemo.collections.asr.losses.angularloss import AngularSoftmaxLoss
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/losses/angularloss.py", line 18, in <module>
from nemo.core.classes import Loss, Typing, typecheck
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/core/__init__.py", line 16, in <module>
from nemo.core.classes import *
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/core/classes/__init__.py", line 16, in <module>
import hydra
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/__init__.py", line 5, in <module>
from hydra import utils
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/utils.py", line 8, in <module>
import hydra._internal.instantiate._instantiate2
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 12, in <module>
from hydra._internal.utils import _locate
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/_internal/utils.py", line 18, in <module>
from hydra.core.utils import get_valid_filename, validate_config_path
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/core/utils.py", line 20, in <module>
from hydra.core.hydra_config import HydraConfig
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/core/hydra_config.py", line 6, in <module>
from hydra.conf import HydraConf
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/conf/__init__.py", line 46, in <module>
class JobConf:
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/conf/__init__.py", line 75, in JobConf
@dataclass
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1230, in dataclass
return wrap(cls)
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1220, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 958, in _process_class
cls_fields.append(_get_field(cls, name, type, kw_only))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 815, in _get_field
raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'hydra.conf.JobConf.JobConfig.OverrideDirname'> for field override_dirname is not allowed: use default_factory
```
If I then manually upgrade `hydra-core` to the current latest (`1.3.2`), I get similar errors from `nemo-toolkit`:
```
Traceback (most recent call last):
File "/home/alex/T7/src/speaker_verification/nemo_speaker/test.py", line 5, in <module>
import nemo.collections.asr as nemo_asr
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/__init__.py", line 15, in <module>
from nemo.collections.asr import data, losses, models, modules
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/losses/__init__.py", line 16, in <module>
from nemo.collections.asr.losses.audio_losses import SDRLoss
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/losses/audio_losses.py", line 21, in <module>
from nemo.collections.asr.parts.preprocessing.features import make_seq_mask_like
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/__init__.py", line 16, in <module>
from nemo.collections.asr.parts.preprocessing.features import FeaturizerFactory, FilterbankFeatures, WaveformFeaturizer
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/features.py", line 44, in <module>
from nemo.collections.asr.parts.preprocessing.perturb import AudioAugmentor
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/perturb.py", line 50, in <module>
from nemo.collections.common.parts.preprocessing import collections, parsers
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/common/__init__.py", line 16, in <module>
from nemo.collections.common import data, losses, parts, tokenizers
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/common/parts/__init__.py", line 15, in <module>
from nemo.collections.common.parts.adapter_modules import LinearAdapter, LinearAdapterConfig
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/common/parts/adapter_modules.py", line 147, in <module>
@dataclass
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1230, in dataclass
return wrap(cls)
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1220, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 958, in _process_class
cls_fields.append(_get_field(cls, name, type, kw_only))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 815, in _get_field
raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'nemo.core.classes.mixins.adapter_mixin_strategies.ResidualAddAdapterStrategyConfig'> for field adapter_strategy is not allowed: use default_factory
```
It's easy to fix with a patch like this:
```
--- nemo/collections/common/parts/adapter_modules.py.orig 2023-08-04 13:55:53.464534800 +0200
+++ nemo/collections/common/parts/adapter_modules.py 2023-08-04 14:05:45.579537700 +0200
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass, is_dataclass
+from dataclasses import dataclass, is_dataclass, field
from typing import Any, Optional
from hydra.utils import instantiate
@@ -151,5 +151,5 @@ class LinearAdapterConfig:
activation: str = 'swish'
norm_position: str = 'pre'
dropout: float = 0.0
- adapter_strategy: Optional[Any] = adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
+ adapter_strategy: Optional[Any] = field(default_factory=adapter_mixin_strategies.ResidualAddAdapterStrategyConfig)
_target_: str = "{0}.{1}".format(LinearAdapter.__module__, LinearAdapter.__name__)
```
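
The `field(default_factory=...)` fix above generalizes. On Python 3.11, `dataclasses` rejects any *unhashable* default value — which includes instances of ordinary non-frozen dataclasses — rather than only `list`/`dict`/`set` as in earlier versions. A minimal sketch of the portable spelling (the `StrategyConfig`/`AdapterConfig` names here are illustrative stand-ins, not actual NeMo classes):

```python
from dataclasses import dataclass, field

@dataclass
class StrategyConfig:
    scale: float = 1.0

# `strategy: StrategyConfig = StrategyConfig()` raises ValueError on Python 3.11+
# because a non-frozen dataclass instance is unhashable. default_factory is the
# portable spelling, and it also gives every instance its own nested config object.
@dataclass
class AdapterConfig:
    strategy: StrategyConfig = field(default_factory=StrategyConfig)

a, b = AdapterConfig(), AdapterConfig()
print(a.strategy is b.strategy)  # False: no shared mutable default
```

The same spelling works on older Python versions too, so applying it repo-wide should be backwards compatible.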
However then another error of the kind comes up:
```
Traceback (most recent call last):
File "/home/alex/T7/src/speaker_verification/nemo_speaker/test.py", line 5, in <module>
import nemo.collections.asr as nemo_asr
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/__init__.py", line 15, in <module>
from nemo.collections.asr import data, losses, models, modules
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/models/__init__.py", line 18, in <module>
from nemo.collections.asr.models.clustering_diarizer import ClusteringDiarizer
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/models/clustering_diarizer.py", line 29, in <module>
from nemo.collections.asr.metrics.der import score_labels
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/metrics/der.py", line 24, in <module>
from nemo.collections.asr.metrics.wer import word_error_rate
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/metrics/wer.py", line 27, in <module>
from nemo.collections.asr.parts.submodules import ctc_beam_decoding, ctc_greedy_decoding
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/parts/submodules/ctc_beam_decoding.py", line 593, in <module>
@dataclass
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1230, in dataclass
return wrap(cls)
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1220, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 958, in _process_class
cls_fields.append(_get_field(cls, name, type, kw_only))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 815, in _get_field
raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'nemo.collections.asr.parts.submodules.ctc_beam_decoding.FlashlightConfig'> for field flashlight_cfg is not allowed: use default_factory
```
This can also be fixed accordingly, but all in all, these issues appear to be pretty common across the code base.
Looks like NeMo isn't Python 3.11 ready, at least the `asr` collection.
**Steps/Code to reproduce bug**
With Python 3.11
1. Install `nemo-toolkit[asr]` either 1.19.1, or current HEAD from git (b498d438fc4c35ebf364a9a1c5cd3e29a2c0fe50)
2. Run `import nemo.collections.asr`
**Expected behavior**
Import `nemo.collections.asr` without errors
**Environment overview (please complete the following information)**
- Environment location: Bare-metal
- Method of NeMo install: `pip install nemo-toolkit==1.19.1` or from source b498d438fc4c35ebf364a9a1c5cd3e29a2c0fe50
**Environment details**
- Ubuntu 22.04
- PyTorch version 2.0.1
- Python version 3.11.4
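The failure can be reproduced in isolation, without NeMo at all (the config names below are simplified stand-ins for illustration, not NeMo's real field sets):

```python
import sys
from dataclasses import dataclass


@dataclass
class StrategyConfig:  # stand-in for ResidualAddAdapterStrategyConfig
    stochastic_depth: float = 0.0


# Python <= 3.10 only rejected list/dict/set as class-level defaults, so a
# dataclass instance slipped through; Python 3.11 rejects any unhashable
# default at class-creation time with the ValueError seen above.
try:
    @dataclass
    class AdapterConfig:
        adapter_strategy: StrategyConfig = StrategyConfig()
    accepted = True
except ValueError:
    accepted = False

# Accepted exactly when the interpreter is older than 3.11.
print(accepted == (sys.version_info < (3, 11)))  # prints True on any version
```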
Seems to be similar to #7002
Interesting. The fix is easy but needs to be applied to basically every single place that has this constructor for our adapter configs. Let me see if I can update it. But no guarantees on how soon fixes will come in main.
Looking forward to it @titu1994! Thanks!
@titu1994 I was looking to use NeMo speaker diarization with Python 3.11 and hit this dataclass issue. I patched everything involved in the specific code paths I needed: https://github.com/lmnt-com/NeMo/commit/d89acf9f0152e43dee29d7d1c4667ee34c26ffd7
I was using the neural diarizer as described in https://github.com/NVIDIA/NeMo/tree/main/examples/speaker_tasks/diarization
I'd be happy to upstream this if it's helpful.
I haven't checked whether this is backwards compatible with earlier Python/dataclasses versions; do you know?
For reference, here is what led me to this issue, though it's duplicative of the above discussion:
[Similar error](https://github.com/huggingface/datasets/issues/5230)
[StackOverflow solution](https://stackoverflow.com/questions/53632152/why-cant-dataclasses-have-mutable-defaults-in-their-class-attributes-declaratio)
@shaper Thanks for sharing. For brevity, you don't really need a `lambda` when you don't pass any init parameters, like this:
```
field(default_factory=lambda: ConfidenceConfig())
```
You can just do
```
field(default_factory=ConfidenceConfig)
```
It's only needed when you do pass parameter(s), like
```
field(default_factory=lambda: beam_decode.BeamRNNTInferConfig(beam_size=4))
```
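Putting the two forms side by side, a minimal sketch (the config classes here are simplified stand-ins, not NeMo's real field sets):

```python
from dataclasses import dataclass, field


@dataclass
class ConfidenceConfig:  # stand-in
    preserve_frame_confidence: bool = False


@dataclass
class BeamRNNTInferConfig:  # stand-in
    beam_size: int = 1


@dataclass
class RNNTDecodingConfig:
    # No init parameters: the class itself is a perfectly good factory.
    confidence_cfg: ConfidenceConfig = field(default_factory=ConfidenceConfig)
    # With init parameters: wrap the call in a lambda.
    beam: BeamRNNTInferConfig = field(
        default_factory=lambda: BeamRNNTInferConfig(beam_size=4)
    )


a, b = RNNTDecodingConfig(), RNNTDecodingConfig()
print(a.beam.beam_size)                      # 4
print(a.confidence_cfg is b.confidence_cfg)  # False: a fresh instance per object
```

Either way, the factory runs once per instance, which is the whole point: no default object is shared between instances.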
I have the same issue. @tango4j suggested using one of the models from https://huggingface.co/spaces/hf-audio/open_asr_leaderboard, but I cannot import nemo.collections.asr:
```
Traceback (most recent call last):
File "/opt/pycharm-2022.3.3/plugins/python/helpers/pycharm/docrunner.py", line 138, in __run
exec(compile(example.source, filename, "single",
File "<doctest NeMoASR[2]>", line 1, in <module>
NeMoASR().apply_asr(file)
^^^^^^^^^
File "/home/cbj/python/cbj/cbj/transcribe/pretrained.py", line 504, in __init__
import nemo.collections.asr as nemo_asr
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/__init__.py", line 15, in <module>
from nemo.collections.asr import data, losses, models, modules
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/losses/__init__.py", line 16, in <module>
from nemo.collections.asr.losses.audio_losses import SDRLoss
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/losses/audio_losses.py", line 21, in <module>
from nemo.collections.asr.parts.preprocessing.features import make_seq_mask_like
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/__init__.py", line 16, in <module>
from nemo.collections.asr.parts.preprocessing.features import FeaturizerFactory, FilterbankFeatures, WaveformFeaturizer
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/features.py", line 44, in <module>
from nemo.collections.asr.parts.preprocessing.perturb import AudioAugmentor
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/perturb.py", line 50, in <module>
from nemo.collections.common.parts.preprocessing import collections, parsers
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/common/__init__.py", line 16, in <module>
from nemo.collections.common import data, losses, parts, tokenizers
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/common/parts/__init__.py", line 15, in <module>
from nemo.collections.common.parts.adapter_modules import LinearAdapter, LinearAdapterConfig
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/common/parts/adapter_modules.py", line 147, in <module>
@dataclass
^^^^^^^^^
File "/opt/py/2023/lib/python3.11/dataclasses.py", line 1230, in dataclass
return wrap(cls)
^^^^^^^^^
File "/opt/py/2023/lib/python3.11/dataclasses.py", line 1220, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/py/2023/lib/python3.11/dataclasses.py", line 958, in _process_class
cls_fields.append(_get_field(cls, name, type, kw_only))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/py/2023/lib/python3.11/dataclasses.py", line 815, in _get_field
raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'nemo.core.classes.mixins.adapter_mixin_strategies.ResidualAddAdapterStrategyConfig'> for field adapter_strategy is not allowed: use default_factory
```
For documentation (I had to search in the provided links):
Mutable default values were never allowed in dataclasses, but Python 3.11 broadened the check: instead of special-casing a few types (dict, list, set), it now treats any unhashable default as mutable and rejects it.
An alternative to default_factory would be to use frozen dataclasses, but I don't know whether the configs in this code base are used as mutable objects or not.
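For illustration, a small sketch of both points, assuming nothing NeMo-specific: a plain dataclass gets `__hash__ = None` (so 3.11 rejects its instances as defaults), while a frozen one stays hashable and is accepted:

```python
from dataclasses import dataclass


@dataclass
class MutableCfg:
    x: int = 0


@dataclass(frozen=True)
class FrozenCfg:
    x: int = 0


# eq=True without frozen sets __hash__ to None; frozen=True generates it.
print(MutableCfg.__hash__ is None)  # True
print(FrozenCfg.__hash__ is None)   # False


# Because FrozenCfg is hashable (and immutable, so sharing one default
# instance across all Holder objects is safe), even Python 3.11 accepts
# an instance of it as a class-level default.
@dataclass
class Holder:
    cfg: FrozenCfg = FrozenCfg()


print(Holder().cfg.x)  # 0
```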
You need to update to NeMo 1.20; omegaconf made a fix that should resolve this.
I have NeMo 1.20.0.
I installed NeMo yesterday with `pip install nemo_toolkit` and `pip install pytorch_lightning`, so it should be the newest PyPI version.
```
$ pip show nemo_toolkit
Name: nemo-toolkit
Version: 1.20.0
Summary: NeMo - a toolkit for Conversational AI
Home-page: https://github.com/nvidia/nemo
Author: NVIDIA
Author-email: nemo-toolkit@nvidia.com
License: Apache2
Location: /opt/py/2023/lib/python3.11/site-packages
Requires: huggingface-hub, numba, numpy, onnx, python-dateutil, ruamel.yaml, scikit-learn, setuptools, tensorboard, text-unidecode, torch, tqdm, wget, wrapt
Required-by:
$ pip show omegaconf
Name: omegaconf
Version: 2.3.0
Summary: A flexible configuration library
Home-page: https://github.com/omry/omegaconf
Author: Omry Yadan
Author-email: omry@yadan.net
License:
Location: /home/cbj/.local/lib/python3.11/site-packages
Requires: antlr4-python3-runtime, PyYAML
Required-by: hydra-core
$ python -c "import nemo.collections.asr as nemo_asr"
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/__init__.py", line 15, in <module>
from nemo.collections.asr import data, losses, models, modules
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/losses/__init__.py", line 16, in <module>
from nemo.collections.asr.losses.audio_losses import SDRLoss
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/losses/audio_losses.py", line 21, in <module>
from nemo.collections.asr.parts.preprocessing.features import make_seq_mask_like
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/__init__.py", line 16, in <module>
from nemo.collections.asr.parts.preprocessing.features import FeaturizerFactory, FilterbankFeatures, WaveformFeaturizer
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/features.py", line 44, in <module>
from nemo.collections.asr.parts.preprocessing.perturb import AudioAugmentor
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/perturb.py", line 50, in <module>
from nemo.collections.common.parts.preprocessing import collections, parsers
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/common/__init__.py", line 16, in <module>
from nemo.collections.common import data, losses, parts, tokenizers
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/common/parts/__init__.py", line 15, in <module>
from nemo.collections.common.parts.adapter_modules import LinearAdapter, LinearAdapterConfig
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/common/parts/adapter_modules.py", line 147, in <module>
@dataclass
^^^^^^^^^
File "/opt/py/2023/lib/python3.11/dataclasses.py", line 1230, in dataclass
return wrap(cls)
^^^^^^^^^
File "/opt/py/2023/lib/python3.11/dataclasses.py", line 1220, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/py/2023/lib/python3.11/dataclasses.py", line 958, in _process_class
cls_fields.append(_get_field(cls, name, type, kw_only))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/py/2023/lib/python3.11/dataclasses.py", line 815, in _get_field
raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'nemo.core.classes.mixins.adapter_mixin_strategies.ResidualAddAdapterStrategyConfig'> for field adapter_strategy is not allowed: use default_factory
```
Hmm ok I'll take a look
2023-09-30T01:26:50Z
<patch>
diff --git a/examples/asr/experimental/k2/align_speech_parallel.py b/examples/asr/experimental/k2/align_speech_parallel.py
--- a/examples/asr/experimental/k2/align_speech_parallel.py
+++ b/examples/asr/experimental/k2/align_speech_parallel.py
@@ -74,7 +74,7 @@
import os
-from dataclasses import dataclass, is_dataclass
+from dataclasses import dataclass, field, is_dataclass
from typing import Optional
import pytorch_lightning as ptl
@@ -94,12 +94,14 @@
@dataclass
class ParallelAlignmentConfig:
model: Optional[str] = None # name
- predict_ds: ASRDatasetConfig = ASRDatasetConfig(return_sample_id=True, num_workers=4)
- aligner_args: K2AlignerWrapperModelConfig = K2AlignerWrapperModelConfig()
+ predict_ds: ASRDatasetConfig = field(
+ default_factory=lambda: ASRDatasetConfig(return_sample_id=True, num_workers=4)
+ )
+ aligner_args: K2AlignerWrapperModelConfig = field(default_factory=lambda: K2AlignerWrapperModelConfig())
output_path: str = MISSING
model_stride: int = 8
- trainer: TrainerConfig = TrainerConfig(gpus=-1, accelerator="ddp")
+ trainer: TrainerConfig = field(default_factory=lambda: TrainerConfig(gpus=-1, accelerator="ddp"))
# there arguments will be ignored
return_predictions: bool = False
diff --git a/nemo/collections/asr/metrics/rnnt_wer.py b/nemo/collections/asr/metrics/rnnt_wer.py
--- a/nemo/collections/asr/metrics/rnnt_wer.py
+++ b/nemo/collections/asr/metrics/rnnt_wer.py
@@ -15,7 +15,7 @@
import copy
import re
from abc import abstractmethod
-from dataclasses import dataclass, is_dataclass
+from dataclasses import dataclass, field, is_dataclass
from typing import Callable, Dict, List, Optional, Tuple, Union
import editdistance
@@ -1299,7 +1299,7 @@ class RNNTDecodingConfig:
preserve_alignments: Optional[bool] = None
# confidence config
- confidence_cfg: ConfidenceConfig = ConfidenceConfig()
+ confidence_cfg: ConfidenceConfig = field(default_factory=lambda: ConfidenceConfig())
# RNNT Joint fused batch size
fused_batch_size: Optional[int] = None
@@ -1317,10 +1317,10 @@ class RNNTDecodingConfig:
rnnt_timestamp_type: str = "all" # can be char, word or all for both
# greedy decoding config
- greedy: greedy_decode.GreedyRNNTInferConfig = greedy_decode.GreedyRNNTInferConfig()
+ greedy: greedy_decode.GreedyRNNTInferConfig = field(default_factory=lambda: greedy_decode.GreedyRNNTInferConfig())
# beam decoding config
- beam: beam_decode.BeamRNNTInferConfig = beam_decode.BeamRNNTInferConfig(beam_size=4)
+ beam: beam_decode.BeamRNNTInferConfig = field(default_factory=lambda: beam_decode.BeamRNNTInferConfig(beam_size=4))
# can be used to change temperature for decoding
temperature: float = 1.0
diff --git a/nemo/collections/asr/metrics/wer.py b/nemo/collections/asr/metrics/wer.py
--- a/nemo/collections/asr/metrics/wer.py
+++ b/nemo/collections/asr/metrics/wer.py
@@ -14,7 +14,7 @@
import re
from abc import abstractmethod
-from dataclasses import dataclass, is_dataclass
+from dataclasses import dataclass, field, is_dataclass
from typing import Callable, Dict, List, Optional, Tuple, Union
import editdistance
@@ -1297,13 +1297,17 @@ class CTCDecodingConfig:
batch_dim_index: int = 0
# greedy decoding config
- greedy: ctc_greedy_decoding.GreedyCTCInferConfig = ctc_greedy_decoding.GreedyCTCInferConfig()
+ greedy: ctc_greedy_decoding.GreedyCTCInferConfig = field(
+ default_factory=lambda: ctc_greedy_decoding.GreedyCTCInferConfig()
+ )
# beam decoding config
- beam: ctc_beam_decoding.BeamCTCInferConfig = ctc_beam_decoding.BeamCTCInferConfig(beam_size=4)
+ beam: ctc_beam_decoding.BeamCTCInferConfig = field(
+ default_factory=lambda: ctc_beam_decoding.BeamCTCInferConfig(beam_size=4)
+ )
# confidence config
- confidence_cfg: ConfidenceConfig = ConfidenceConfig()
+ confidence_cfg: ConfidenceConfig = field(default_factory=lambda: ConfidenceConfig())
# can be used to change temperature for decoding
temperature: float = 1.0
diff --git a/nemo/collections/asr/models/configs/aligner_config.py b/nemo/collections/asr/models/configs/aligner_config.py
--- a/nemo/collections/asr/models/configs/aligner_config.py
+++ b/nemo/collections/asr/models/configs/aligner_config.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from nemo.collections.asr.parts.k2.classes import GraphModuleConfig
@@ -35,10 +35,10 @@ class AlignerWrapperModelConfig:
word_output: bool = True
cpu_decoding: bool = False
decode_batch_size: int = 0
- ctc_cfg: AlignerCTCConfig = AlignerCTCConfig()
- rnnt_cfg: AlignerRNNTConfig = AlignerRNNTConfig()
+ ctc_cfg: AlignerCTCConfig = field(default_factory=lambda: AlignerCTCConfig())
+ rnnt_cfg: AlignerRNNTConfig = field(default_factory=lambda: AlignerRNNTConfig())
@dataclass
class K2AlignerWrapperModelConfig(AlignerWrapperModelConfig):
- decoder_module_cfg: GraphModuleConfig = GraphModuleConfig()
+ decoder_module_cfg: GraphModuleConfig = field(default_factory=lambda: GraphModuleConfig())
diff --git a/nemo/collections/asr/models/configs/asr_models_config.py b/nemo/collections/asr/models/configs/asr_models_config.py
--- a/nemo/collections/asr/models/configs/asr_models_config.py
+++ b/nemo/collections/asr/models/configs/asr_models_config.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional
from omegaconf import MISSING
@@ -74,24 +74,32 @@ class EncDecCTCConfig(model_cfg.ModelConfig):
labels: List[str] = MISSING
# Dataset configs
- train_ds: ASRDatasetConfig = ASRDatasetConfig(manifest_filepath=None, shuffle=True)
- validation_ds: ASRDatasetConfig = ASRDatasetConfig(manifest_filepath=None, shuffle=False)
- test_ds: ASRDatasetConfig = ASRDatasetConfig(manifest_filepath=None, shuffle=False)
+ train_ds: ASRDatasetConfig = field(default_factory=lambda: ASRDatasetConfig(manifest_filepath=None, shuffle=True))
+ validation_ds: ASRDatasetConfig = field(
+ default_factory=lambda: ASRDatasetConfig(manifest_filepath=None, shuffle=False)
+ )
+ test_ds: ASRDatasetConfig = field(default_factory=lambda: ASRDatasetConfig(manifest_filepath=None, shuffle=False))
# Optimizer / Scheduler config
- optim: Optional[model_cfg.OptimConfig] = model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
+ optim: Optional[model_cfg.OptimConfig] = field(
+ default_factory=lambda: model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
+ )
# Model component configs
- preprocessor: AudioToMelSpectrogramPreprocessorConfig = AudioToMelSpectrogramPreprocessorConfig()
- spec_augment: Optional[SpectrogramAugmentationConfig] = SpectrogramAugmentationConfig()
- encoder: ConvASREncoderConfig = ConvASREncoderConfig()
- decoder: ConvASRDecoderConfig = ConvASRDecoderConfig()
- decoding: CTCDecodingConfig = CTCDecodingConfig()
+ preprocessor: AudioToMelSpectrogramPreprocessorConfig = field(
+ default_factory=lambda: AudioToMelSpectrogramPreprocessorConfig()
+ )
+ spec_augment: Optional[SpectrogramAugmentationConfig] = field(
+ default_factory=lambda: SpectrogramAugmentationConfig()
+ )
+ encoder: ConvASREncoderConfig = field(default_factory=lambda: ConvASREncoderConfig())
+ decoder: ConvASRDecoderConfig = field(default_factory=lambda: ConvASRDecoderConfig())
+ decoding: CTCDecodingConfig = field(default_factory=lambda: CTCDecodingConfig())
@dataclass
class EncDecCTCModelConfig(model_cfg.NemoConfig):
- model: EncDecCTCConfig = EncDecCTCConfig()
+ model: EncDecCTCConfig = field(default_factory=lambda: EncDecCTCConfig())
@dataclass
diff --git a/nemo/collections/asr/models/configs/classification_models_config.py b/nemo/collections/asr/models/configs/classification_models_config.py
--- a/nemo/collections/asr/models/configs/classification_models_config.py
+++ b/nemo/collections/asr/models/configs/classification_models_config.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional
from omegaconf import MISSING
@@ -72,30 +72,40 @@ class EncDecClassificationConfig(model_cfg.ModelConfig):
timesteps: int = MISSING
# Dataset configs
- train_ds: EncDecClassificationDatasetConfig = EncDecClassificationDatasetConfig(
- manifest_filepath=None, shuffle=True, trim_silence=False
+ train_ds: EncDecClassificationDatasetConfig = field(
+ default_factory=lambda: EncDecClassificationDatasetConfig(
+ manifest_filepath=None, shuffle=True, trim_silence=False
+ )
)
- validation_ds: EncDecClassificationDatasetConfig = EncDecClassificationDatasetConfig(
- manifest_filepath=None, shuffle=False
+ validation_ds: EncDecClassificationDatasetConfig = field(
+ default_factory=lambda: EncDecClassificationDatasetConfig(manifest_filepath=None, shuffle=False)
)
- test_ds: EncDecClassificationDatasetConfig = EncDecClassificationDatasetConfig(
- manifest_filepath=None, shuffle=False
+ test_ds: EncDecClassificationDatasetConfig = field(
+ default_factory=lambda: EncDecClassificationDatasetConfig(manifest_filepath=None, shuffle=False)
)
# Optimizer / Scheduler config
- optim: Optional[model_cfg.OptimConfig] = model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
+ optim: Optional[model_cfg.OptimConfig] = field(
+ default_factory=lambda: model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
+ )
# Model component configs
- preprocessor: AudioToMFCCPreprocessorConfig = AudioToMFCCPreprocessorConfig()
- spec_augment: Optional[SpectrogramAugmentationConfig] = SpectrogramAugmentationConfig()
- crop_or_pad_augment: Optional[CropOrPadSpectrogramAugmentationConfig] = CropOrPadSpectrogramAugmentationConfig(
- audio_length=timesteps
+ preprocessor: AudioToMFCCPreprocessorConfig = field(default_factory=lambda: AudioToMFCCPreprocessorConfig())
+ spec_augment: Optional[SpectrogramAugmentationConfig] = field(
+ default_factory=lambda: SpectrogramAugmentationConfig()
+ )
+ crop_or_pad_augment: Optional[CropOrPadSpectrogramAugmentationConfig] = field(
+ default_factory=lambda: CropOrPadSpectrogramAugmentationConfig(audio_length=-1)
)
- encoder: ConvASREncoderConfig = ConvASREncoderConfig()
- decoder: ConvASRDecoderClassificationConfig = ConvASRDecoderClassificationConfig()
+ encoder: ConvASREncoderConfig = field(default_factory=lambda: ConvASREncoderConfig())
+ decoder: ConvASRDecoderClassificationConfig = field(default_factory=lambda: ConvASRDecoderClassificationConfig())
+
+ def __post_init__(self):
+ if self.crop_or_pad_augment is not None:
+ self.crop_or_pad_augment.audio_length = self.timesteps
@dataclass
class EncDecClassificationModelConfig(model_cfg.NemoConfig):
- model: EncDecClassificationConfig = EncDecClassificationConfig()
+ model: EncDecClassificationConfig = field(default_factory=lambda: EncDecClassificationConfig())
diff --git a/nemo/collections/asr/models/configs/diarizer_config.py b/nemo/collections/asr/models/configs/diarizer_config.py
--- a/nemo/collections/asr/models/configs/diarizer_config.py
+++ b/nemo/collections/asr/models/configs/diarizer_config.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import asdict, dataclass
+from dataclasses import asdict, dataclass, field
from typing import Any, Dict, Optional, Tuple, Union
@@ -78,9 +78,9 @@ class ASRDiarizerParams(DiarizerComponentConfig):
@dataclass
class ASRDiarizerConfig(DiarizerComponentConfig):
model_path: Optional[str] = "stt_en_conformer_ctc_large"
- parameters: ASRDiarizerParams = ASRDiarizerParams()
- ctc_decoder_parameters: ASRDiarizerCTCDecoderParams = ASRDiarizerCTCDecoderParams()
- realigning_lm_parameters: ASRRealigningLMParams = ASRRealigningLMParams()
+ parameters: ASRDiarizerParams = field(default_factory=lambda: ASRDiarizerParams())
+ ctc_decoder_parameters: ASRDiarizerCTCDecoderParams = field(default_factory=lambda: ASRDiarizerCTCDecoderParams())
+ realigning_lm_parameters: ASRRealigningLMParams = field(default_factory=lambda: ASRRealigningLMParams())
@dataclass
@@ -102,7 +102,7 @@ class VADParams(DiarizerComponentConfig):
class VADConfig(DiarizerComponentConfig):
model_path: str = "vad_multilingual_marblenet" # .nemo local model path or pretrained VAD model name
external_vad_manifest: Optional[str] = None
- parameters: VADParams = VADParams()
+ parameters: VADParams = field(default_factory=lambda: VADParams())
@dataclass
@@ -121,7 +121,7 @@ class SpeakerEmbeddingsParams(DiarizerComponentConfig):
class SpeakerEmbeddingsConfig(DiarizerComponentConfig):
# .nemo local model path or pretrained model name (titanet_large, ecapa_tdnn or speakerverification_speakernet)
model_path: Optional[str] = None
- parameters: SpeakerEmbeddingsParams = SpeakerEmbeddingsParams()
+ parameters: SpeakerEmbeddingsParams = field(default_factory=lambda: SpeakerEmbeddingsParams())
@dataclass
@@ -142,7 +142,7 @@ class ClusteringParams(DiarizerComponentConfig):
@dataclass
class ClusteringConfig(DiarizerComponentConfig):
- parameters: ClusteringParams = ClusteringParams()
+ parameters: ClusteringParams = field(default_factory=lambda: ClusteringParams())
@dataclass
@@ -166,7 +166,7 @@ class MSDDParams(DiarizerComponentConfig):
@dataclass
class MSDDConfig(DiarizerComponentConfig):
model_path: Optional[str] = "diar_msdd_telephonic"
- parameters: MSDDParams = MSDDParams()
+ parameters: MSDDParams = field(default_factory=lambda: MSDDParams())
@dataclass
@@ -176,16 +176,16 @@ class DiarizerConfig(DiarizerComponentConfig):
oracle_vad: bool = False # If True, uses RTTM files provided in the manifest file to get VAD timestamps
collar: float = 0.25 # Collar value for scoring
ignore_overlap: bool = True # Consider or ignore overlap segments while scoring
- vad: VADConfig = VADConfig()
- speaker_embeddings: SpeakerEmbeddingsConfig = SpeakerEmbeddingsConfig()
- clustering: ClusteringConfig = ClusteringConfig()
- msdd_model: MSDDConfig = MSDDConfig()
- asr: ASRDiarizerConfig = ASRDiarizerConfig()
+ vad: VADConfig = field(default_factory=lambda: VADConfig())
+ speaker_embeddings: SpeakerEmbeddingsConfig = field(default_factory=lambda: SpeakerEmbeddingsConfig())
+ clustering: ClusteringConfig = field(default_factory=lambda: ClusteringConfig())
+ msdd_model: MSDDConfig = field(default_factory=lambda: MSDDConfig())
+ asr: ASRDiarizerConfig = field(default_factory=lambda: ASRDiarizerConfig())
@dataclass
class NeuralDiarizerInferenceConfig(DiarizerComponentConfig):
- diarizer: DiarizerConfig = DiarizerConfig()
+ diarizer: DiarizerConfig = field(default_factory=lambda: DiarizerConfig())
device: str = "cpu"
verbose: bool = False
batch_size: int = 64
diff --git a/nemo/collections/asr/models/configs/k2_sequence_models_config.py b/nemo/collections/asr/models/configs/k2_sequence_models_config.py
--- a/nemo/collections/asr/models/configs/k2_sequence_models_config.py
+++ b/nemo/collections/asr/models/configs/k2_sequence_models_config.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from nemo.collections.asr.models.configs.asr_models_config import EncDecCTCConfig
from nemo.collections.asr.parts.k2.classes import GraphModuleConfig as BackendConfig
@@ -26,14 +26,14 @@ class GraphModuleConfig:
split_batch_size: int = 0
dec_type: str = "topo"
transcribe_training: bool = True
- backend_cfg: BackendConfig = BackendConfig()
+ backend_cfg: BackendConfig = field(default_factory=lambda: BackendConfig())
@dataclass
class EncDecK2SeqConfig(EncDecCTCConfig):
- graph_module_cfg: GraphModuleConfig = GraphModuleConfig()
+ graph_module_cfg: GraphModuleConfig = field(default_factory=lambda: GraphModuleConfig())
@dataclass
class EncDecK2SeqModelConfig(NemoConfig):
- model: EncDecK2SeqConfig = EncDecK2SeqConfig()
+ model: EncDecK2SeqConfig = field(default_factory=lambda: EncDecK2SeqConfig())
diff --git a/nemo/collections/asr/models/configs/matchboxnet_config.py b/nemo/collections/asr/models/configs/matchboxnet_config.py
--- a/nemo/collections/asr/models/configs/matchboxnet_config.py
+++ b/nemo/collections/asr/models/configs/matchboxnet_config.py
@@ -107,30 +107,38 @@ class MatchboxNetModelConfig(clf_cfg.EncDecClassificationConfig):
labels: List[str] = MISSING
# Dataset configs
- train_ds: clf_cfg.EncDecClassificationDatasetConfig = clf_cfg.EncDecClassificationDatasetConfig(
- manifest_filepath=None, shuffle=True, trim_silence=False
+ train_ds: clf_cfg.EncDecClassificationDatasetConfig = field(
+ default_factory=lambda: clf_cfg.EncDecClassificationDatasetConfig(
+ manifest_filepath=None, shuffle=True, trim_silence=False
+ )
)
- validation_ds: clf_cfg.EncDecClassificationDatasetConfig = clf_cfg.EncDecClassificationDatasetConfig(
- manifest_filepath=None, shuffle=False
+ validation_ds: clf_cfg.EncDecClassificationDatasetConfig = field(
+ default_factory=lambda: clf_cfg.EncDecClassificationDatasetConfig(manifest_filepath=None, shuffle=False)
)
- test_ds: clf_cfg.EncDecClassificationDatasetConfig = clf_cfg.EncDecClassificationDatasetConfig(
- manifest_filepath=None, shuffle=False
+ test_ds: clf_cfg.EncDecClassificationDatasetConfig = field(
+ default_factory=lambda: clf_cfg.EncDecClassificationDatasetConfig(manifest_filepath=None, shuffle=False)
)
# Optimizer / Scheduler config
- optim: Optional[model_cfg.OptimConfig] = model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
+ optim: Optional[model_cfg.OptimConfig] = field(
+ default_factory=lambda: model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
+ )
# Model general component configs
- preprocessor: AudioToMFCCPreprocessorConfig = AudioToMFCCPreprocessorConfig(window_size=0.025)
- spec_augment: Optional[SpectrogramAugmentationConfig] = SpectrogramAugmentationConfig(
- freq_masks=2, time_masks=2, freq_width=15, time_width=25, rect_masks=5, rect_time=25, rect_freq=15
+ preprocessor: AudioToMFCCPreprocessorConfig = field(
+ default_factory=lambda: AudioToMFCCPreprocessorConfig(window_size=0.025)
+ )
+ spec_augment: Optional[SpectrogramAugmentationConfig] = field(
+ default_factory=lambda: SpectrogramAugmentationConfig(
+ freq_masks=2, time_masks=2, freq_width=15, time_width=25, rect_masks=5, rect_time=25, rect_freq=15
+ )
)
- crop_or_pad_augment: Optional[CropOrPadSpectrogramAugmentationConfig] = CropOrPadSpectrogramAugmentationConfig(
- audio_length=128
+ crop_or_pad_augment: Optional[CropOrPadSpectrogramAugmentationConfig] = field(
+ default_factory=lambda: CropOrPadSpectrogramAugmentationConfig(audio_length=128)
)
- encoder: ConvASREncoderConfig = ConvASREncoderConfig(activation="relu")
- decoder: ConvASRDecoderClassificationConfig = ConvASRDecoderClassificationConfig()
+ encoder: ConvASREncoderConfig = field(default_factory=lambda: ConvASREncoderConfig(activation="relu"))
+ decoder: ConvASRDecoderClassificationConfig = field(default_factory=lambda: ConvASRDecoderClassificationConfig())
@dataclass
diff --git a/nemo/collections/asr/models/configs/quartznet_config.py b/nemo/collections/asr/models/configs/quartznet_config.py
--- a/nemo/collections/asr/models/configs/quartznet_config.py
+++ b/nemo/collections/asr/models/configs/quartznet_config.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import Any, Callable, List, Optional
from omegaconf import MISSING
@@ -174,20 +174,30 @@ class JasperModelConfig(ctc_cfg.EncDecCTCConfig):
labels: List[str] = MISSING
# Dataset configs
- train_ds: ctc_cfg.ASRDatasetConfig = ctc_cfg.ASRDatasetConfig(
- manifest_filepath=None, shuffle=True, trim_silence=True
+ train_ds: ctc_cfg.ASRDatasetConfig = field(
+ default_factory=lambda: ctc_cfg.ASRDatasetConfig(manifest_filepath=None, shuffle=True, trim_silence=True)
+ )
+ validation_ds: ctc_cfg.ASRDatasetConfig = field(
+ default_factory=lambda: ctc_cfg.ASRDatasetConfig(manifest_filepath=None, shuffle=False)
+ )
+ test_ds: ctc_cfg.ASRDatasetConfig = field(
+ default_factory=lambda: ctc_cfg.ASRDatasetConfig(manifest_filepath=None, shuffle=False)
)
- validation_ds: ctc_cfg.ASRDatasetConfig = ctc_cfg.ASRDatasetConfig(manifest_filepath=None, shuffle=False)
- test_ds: ctc_cfg.ASRDatasetConfig = ctc_cfg.ASRDatasetConfig(manifest_filepath=None, shuffle=False)
# Optimizer / Scheduler config
- optim: Optional[model_cfg.OptimConfig] = model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
+ optim: Optional[model_cfg.OptimConfig] = field(
+ default_factory=lambda: model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
+ )
# Model general component configs
- preprocessor: AudioToMelSpectrogramPreprocessorConfig = AudioToMelSpectrogramPreprocessorConfig()
- spec_augment: Optional[SpectrogramAugmentationConfig] = SpectrogramAugmentationConfig()
- encoder: ConvASREncoderConfig = ConvASREncoderConfig(activation="relu")
- decoder: ConvASRDecoderConfig = ConvASRDecoderConfig()
+ preprocessor: AudioToMelSpectrogramPreprocessorConfig = field(
+ default_factory=lambda: AudioToMelSpectrogramPreprocessorConfig()
+ )
+ spec_augment: Optional[SpectrogramAugmentationConfig] = field(
+ default_factory=lambda: SpectrogramAugmentationConfig()
+ )
+ encoder: ConvASREncoderConfig = field(default_factory=lambda: ConvASREncoderConfig(activation="relu"))
+ decoder: ConvASRDecoderConfig = field(default_factory=lambda: ConvASRDecoderConfig())
@dataclass
diff --git a/nemo/collections/asr/modules/audio_preprocessing.py b/nemo/collections/asr/modules/audio_preprocessing.py
--- a/nemo/collections/asr/modules/audio_preprocessing.py
+++ b/nemo/collections/asr/modules/audio_preprocessing.py
@@ -634,6 +634,12 @@ def __init__(self, audio_length):
super(CropOrPadSpectrogramAugmentation, self).__init__()
self.audio_length = audio_length
+ if self.audio_length < 0:
+ raise ValueError(
+ 'audio_length must be non-negative. If using a dataclass with OmegaConf, '
+ 'please call OmegaConf.to_object(cfg) to call appropriate __post_init__ methods.'
+ )
+
@typecheck()
@torch.no_grad()
def forward(self, input_signal, length):
diff --git a/nemo/collections/asr/parts/k2/classes.py b/nemo/collections/asr/parts/k2/classes.py
--- a/nemo/collections/asr/parts/k2/classes.py
+++ b/nemo/collections/asr/parts/k2/classes.py
@@ -13,7 +13,7 @@
# limitations under the License.
from abc import ABC
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import Any, Optional, Tuple
import torch
@@ -43,7 +43,7 @@ class GraphModuleConfig:
topo_with_self_loops: bool = True
token_lm: Optional[Any] = None
intersect_pruned: bool = False
- intersect_conf: GraphIntersectDenseConfig = GraphIntersectDenseConfig()
+ intersect_conf: GraphIntersectDenseConfig = field(default_factory=lambda: GraphIntersectDenseConfig())
boost_coeff: float = 0.0
predictor_window_size: int = 0
predictor_step_size: int = 1
diff --git a/nemo/collections/asr/parts/submodules/adapters/multi_head_attention_adapter_module.py b/nemo/collections/asr/parts/submodules/adapters/multi_head_attention_adapter_module.py
--- a/nemo/collections/asr/parts/submodules/adapters/multi_head_attention_adapter_module.py
+++ b/nemo/collections/asr/parts/submodules/adapters/multi_head_attention_adapter_module.py
@@ -13,7 +13,7 @@
# limitations under the License.
import math
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import Any, Optional
import torch
@@ -183,7 +183,7 @@ class MultiHeadAttentionAdapterConfig:
n_feat: int
dropout_rate: float = 0.0
proj_dim: Optional[int] = None
- adapter_strategy: Optional[Any] = MHAResidualAddAdapterStrategyConfig()
+ adapter_strategy: Optional[Any] = field(default_factory=lambda: MHAResidualAddAdapterStrategyConfig())
_target_: str = "{0}.{1}".format(MultiHeadAttentionAdapter.__module__, MultiHeadAttentionAdapter.__name__)
@@ -287,7 +287,7 @@ class RelPositionMultiHeadAttentionAdapterConfig:
n_feat: int
dropout_rate: float = 0.0
proj_dim: Optional[int] = None
- adapter_strategy: Optional[Any] = MHAResidualAddAdapterStrategyConfig()
+ adapter_strategy: Optional[Any] = field(default_factory=lambda: MHAResidualAddAdapterStrategyConfig())
_target_: str = "{0}.{1}".format(
RelPositionMultiHeadAttentionAdapter.__module__, RelPositionMultiHeadAttentionAdapter.__name__
)
@@ -336,7 +336,9 @@ class PositionalEncodingAdapterConfig:
d_model: int
max_len: int = 5000
xscale: float = 1.0
- adapter_strategy: Optional[Any] = adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
+ adapter_strategy: Optional[Any] = field(
+ default_factory=lambda: adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
+ )
_target_: str = "{0}.{1}".format(PositionalEncodingAdapter.__module__, PositionalEncodingAdapter.__name__)
@@ -378,5 +380,7 @@ class RelPositionalEncodingAdapterConfig:
d_model: int
max_len: int = 5000
xscale: float = 1.0
- adapter_strategy: Optional[Any] = adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
+ adapter_strategy: Optional[Any] = field(
+ default_factory=lambda: adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
+ )
_target_: str = "{0}.{1}".format(RelPositionalEncodingAdapter.__module__, RelPositionalEncodingAdapter.__name__)
diff --git a/nemo/collections/asr/parts/submodules/ctc_beam_decoding.py b/nemo/collections/asr/parts/submodules/ctc_beam_decoding.py
--- a/nemo/collections/asr/parts/submodules/ctc_beam_decoding.py
+++ b/nemo/collections/asr/parts/submodules/ctc_beam_decoding.py
@@ -14,7 +14,7 @@
import math
import os
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import List, Optional, Tuple, Union
import torch
@@ -602,5 +602,5 @@ class BeamCTCInferConfig:
beam_beta: float = 0.0
kenlm_path: Optional[str] = None
- flashlight_cfg: Optional[FlashlightConfig] = FlashlightConfig()
- pyctcdecode_cfg: Optional[PyCTCDecodeConfig] = PyCTCDecodeConfig()
+ flashlight_cfg: Optional[FlashlightConfig] = field(default_factory=lambda: FlashlightConfig())
+ pyctcdecode_cfg: Optional[PyCTCDecodeConfig] = field(default_factory=lambda: PyCTCDecodeConfig())
diff --git a/nemo/collections/asr/parts/submodules/ctc_greedy_decoding.py b/nemo/collections/asr/parts/submodules/ctc_greedy_decoding.py
--- a/nemo/collections/asr/parts/submodules/ctc_greedy_decoding.py
+++ b/nemo/collections/asr/parts/submodules/ctc_greedy_decoding.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import List, Optional
import torch
@@ -253,7 +253,9 @@ class GreedyCTCInferConfig:
preserve_alignments: bool = False
compute_timestamps: bool = False
preserve_frame_confidence: bool = False
- confidence_measure_cfg: Optional[ConfidenceMeasureConfig] = ConfidenceMeasureConfig()
+ confidence_measure_cfg: Optional[ConfidenceMeasureConfig] = field(
+ default_factory=lambda: ConfidenceMeasureConfig()
+ )
confidence_method_cfg: str = "DEPRECATED"
def __post_init__(self):
diff --git a/nemo/collections/asr/parts/submodules/rnnt_greedy_decoding.py b/nemo/collections/asr/parts/submodules/rnnt_greedy_decoding.py
--- a/nemo/collections/asr/parts/submodules/rnnt_greedy_decoding.py
+++ b/nemo/collections/asr/parts/submodules/rnnt_greedy_decoding.py
@@ -26,7 +26,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import List, Optional, Tuple, Union
import numpy as np
@@ -2185,7 +2185,9 @@ class GreedyRNNTInferConfig:
max_symbols_per_step: Optional[int] = 10
preserve_alignments: bool = False
preserve_frame_confidence: bool = False
- confidence_measure_cfg: Optional[ConfidenceMeasureConfig] = ConfidenceMeasureConfig()
+ confidence_measure_cfg: Optional[ConfidenceMeasureConfig] = field(
+ default_factory=lambda: ConfidenceMeasureConfig()
+ )
confidence_method_cfg: str = "DEPRECATED"
def __post_init__(self):
@@ -2217,7 +2219,9 @@ class GreedyBatchedRNNTInferConfig:
max_symbols_per_step: Optional[int] = 10
preserve_alignments: bool = False
preserve_frame_confidence: bool = False
- confidence_measure_cfg: Optional[ConfidenceMeasureConfig] = ConfidenceMeasureConfig()
+ confidence_measure_cfg: Optional[ConfidenceMeasureConfig] = field(
+ default_factory=lambda: ConfidenceMeasureConfig()
+ )
confidence_method_cfg: str = "DEPRECATED"
def __post_init__(self):
diff --git a/nemo/collections/asr/parts/utils/asr_confidence_utils.py b/nemo/collections/asr/parts/utils/asr_confidence_utils.py
--- a/nemo/collections/asr/parts/utils/asr_confidence_utils.py
+++ b/nemo/collections/asr/parts/utils/asr_confidence_utils.py
@@ -14,7 +14,7 @@
import math
from abc import ABC, abstractmethod
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from functools import partial
from typing import List, Optional
@@ -181,7 +181,7 @@ class ConfidenceConfig:
preserve_word_confidence: bool = False
exclude_blank: bool = True
aggregation: str = "min"
- measure_cfg: ConfidenceMeasureConfig = ConfidenceMeasureConfig()
+ measure_cfg: ConfidenceMeasureConfig = field(default_factory=lambda: ConfidenceMeasureConfig())
method_cfg: str = "DEPRECATED"
def __post_init__(self):
diff --git a/nemo/collections/common/parts/adapter_modules.py b/nemo/collections/common/parts/adapter_modules.py
--- a/nemo/collections/common/parts/adapter_modules.py
+++ b/nemo/collections/common/parts/adapter_modules.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass, is_dataclass
+from dataclasses import dataclass, field, is_dataclass
from typing import Any, Optional
from hydra.utils import instantiate
@@ -160,5 +160,7 @@ class LinearAdapterConfig:
activation: str = 'swish'
norm_position: str = 'pre'
dropout: float = 0.0
- adapter_strategy: Optional[Any] = adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
+ adapter_strategy: Optional[Any] = field(
+ default_factory=lambda: adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
+ )
_target_: str = "{0}.{1}".format(LinearAdapter.__module__, LinearAdapter.__name__)
diff --git a/nemo/collections/common/tokenizers/en_ja_tokenizers.py b/nemo/collections/common/tokenizers/en_ja_tokenizers.py
--- a/nemo/collections/common/tokenizers/en_ja_tokenizers.py
+++ b/nemo/collections/common/tokenizers/en_ja_tokenizers.py
@@ -14,11 +14,19 @@
import re
from typing import List
-import ipadic
-import MeCab
from pangu import spacing
from sacremoses import MosesDetokenizer, MosesPunctNormalizer, MosesTokenizer
+try:
+ import ipadic
+ import MeCab
+
+ HAVE_MECAB = True
+ HAVE_IPADIC = True
+except (ImportError, ModuleNotFoundError):
+ HAVE_MECAB = False
+ HAVE_IPADIC = False
+
class EnJaProcessor:
"""
@@ -67,6 +75,9 @@ class JaMecabProcessor:
"""
def __init__(self):
+ if not HAVE_MECAB or not HAVE_IPADIC:
+ raise ImportError("Please ensure that you have installed `MeCab` and `ipadic` to use JaMecabProcessor")
+
self.mecab_tokenizer = MeCab.Tagger(ipadic.MECAB_ARGS + " -Owakati")
def detokenize(self, text: List[str]) -> str:
diff --git a/nemo/collections/nlp/models/machine_translation/mt_enc_dec_config.py b/nemo/collections/nlp/models/machine_translation/mt_enc_dec_config.py
--- a/nemo/collections/nlp/models/machine_translation/mt_enc_dec_config.py
+++ b/nemo/collections/nlp/models/machine_translation/mt_enc_dec_config.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import Any, Optional, Tuple
from omegaconf.omegaconf import MISSING
@@ -46,7 +46,7 @@ class MTOptimConfig(OptimConfig):
lr: float = 1e-3
betas: Tuple[float, float] = (0.9, 0.98)
weight_decay: float = 0.0
- sched: Optional[MTSchedConfig] = MTSchedConfig()
+ sched: Optional[MTSchedConfig] = field(default_factory=lambda: MTSchedConfig())
@dataclass
@@ -74,70 +74,80 @@ class MTEncDecModelConfig(EncDecNLPModelConfig):
decoder_tokenizer: Any = MISSING
decoder: Any = MISSING
- head: TokenClassifierConfig = TokenClassifierConfig(log_softmax=True)
+ head: TokenClassifierConfig = field(default_factory=lambda: TokenClassifierConfig(log_softmax=True))
# dataset configurations
- train_ds: Optional[TranslationDataConfig] = TranslationDataConfig(
- src_file_name=MISSING,
- tgt_file_name=MISSING,
- tokens_in_batch=512,
- clean=True,
- shuffle=True,
- cache_ids=False,
- use_cache=False,
+ train_ds: Optional[TranslationDataConfig] = field(
+ default_factory=lambda: TranslationDataConfig(
+ src_file_name=MISSING,
+ tgt_file_name=MISSING,
+ tokens_in_batch=512,
+ clean=True,
+ shuffle=True,
+ cache_ids=False,
+ use_cache=False,
+ )
)
- validation_ds: Optional[TranslationDataConfig] = TranslationDataConfig(
- src_file_name=MISSING,
- tgt_file_name=MISSING,
- tokens_in_batch=512,
- clean=False,
- shuffle=False,
- cache_ids=False,
- use_cache=False,
+ validation_ds: Optional[TranslationDataConfig] = field(
+ default_factory=lambda: TranslationDataConfig(
+ src_file_name=MISSING,
+ tgt_file_name=MISSING,
+ tokens_in_batch=512,
+ clean=False,
+ shuffle=False,
+ cache_ids=False,
+ use_cache=False,
+ )
)
- test_ds: Optional[TranslationDataConfig] = TranslationDataConfig(
- src_file_name=MISSING,
- tgt_file_name=MISSING,
- tokens_in_batch=512,
- clean=False,
- shuffle=False,
- cache_ids=False,
- use_cache=False,
+ test_ds: Optional[TranslationDataConfig] = field(
+ default_factory=lambda: TranslationDataConfig(
+ src_file_name=MISSING,
+ tgt_file_name=MISSING,
+ tokens_in_batch=512,
+ clean=False,
+ shuffle=False,
+ cache_ids=False,
+ use_cache=False,
+ )
)
- optim: Optional[OptimConfig] = MTOptimConfig()
+ optim: Optional[OptimConfig] = field(default_factory=lambda: MTOptimConfig())
@dataclass
class AAYNBaseConfig(MTEncDecModelConfig):
# Attention is All You Need Base Configuration
- encoder_tokenizer: TokenizerConfig = TokenizerConfig(library='yttm')
- decoder_tokenizer: TokenizerConfig = TokenizerConfig(library='yttm')
-
- encoder: NeMoTransformerEncoderConfig = NeMoTransformerEncoderConfig(
- library='nemo',
- model_name=None,
- pretrained=False,
- hidden_size=512,
- inner_size=2048,
- num_layers=6,
- num_attention_heads=8,
- ffn_dropout=0.1,
- attn_score_dropout=0.1,
- attn_layer_dropout=0.1,
+ encoder_tokenizer: TokenizerConfig = field(default_factory=lambda: TokenizerConfig(library='yttm'))
+ decoder_tokenizer: TokenizerConfig = field(default_factory=lambda: TokenizerConfig(library='yttm'))
+
+ encoder: NeMoTransformerEncoderConfig = field(
+ default_factory=lambda: NeMoTransformerEncoderConfig(
+ library='nemo',
+ model_name=None,
+ pretrained=False,
+ hidden_size=512,
+ inner_size=2048,
+ num_layers=6,
+ num_attention_heads=8,
+ ffn_dropout=0.1,
+ attn_score_dropout=0.1,
+ attn_layer_dropout=0.1,
+ )
)
- decoder: NeMoTransformerConfig = NeMoTransformerConfig(
- library='nemo',
- model_name=None,
- pretrained=False,
- hidden_size=512,
- inner_size=2048,
- num_layers=6,
- num_attention_heads=8,
- ffn_dropout=0.1,
- attn_score_dropout=0.1,
- attn_layer_dropout=0.1,
+ decoder: NeMoTransformerConfig = field(
+ default_factory=lambda: NeMoTransformerConfig(
+ library='nemo',
+ model_name=None,
+ pretrained=False,
+ hidden_size=512,
+ inner_size=2048,
+ num_layers=6,
+ num_attention_heads=8,
+ ffn_dropout=0.1,
+ attn_score_dropout=0.1,
+ attn_layer_dropout=0.1,
+ )
)
@@ -150,32 +160,36 @@ class MTBottleneckModelConfig(AAYNBaseConfig):
recon_per_token: bool = True
log_timing: bool = True
- encoder: NeMoTransformerBottleneckEncoderConfig = NeMoTransformerBottleneckEncoderConfig(
- library='nemo',
- model_name=None,
- pretrained=False,
- hidden_size=512,
- inner_size=2048,
- num_layers=6,
- num_attention_heads=8,
- ffn_dropout=0.1,
- attn_score_dropout=0.1,
- attn_layer_dropout=0.1,
- arch='seq2seq',
- hidden_steps=32,
- hidden_blocks=1,
- hidden_init_method='params',
+ encoder: NeMoTransformerBottleneckEncoderConfig = field(
+ default_factory=lambda: NeMoTransformerBottleneckEncoderConfig(
+ library='nemo',
+ model_name=None,
+ pretrained=False,
+ hidden_size=512,
+ inner_size=2048,
+ num_layers=6,
+ num_attention_heads=8,
+ ffn_dropout=0.1,
+ attn_score_dropout=0.1,
+ attn_layer_dropout=0.1,
+ arch='seq2seq',
+ hidden_steps=32,
+ hidden_blocks=1,
+ hidden_init_method='params',
+ )
)
- decoder: NeMoTransformerBottleneckDecoderConfig = NeMoTransformerBottleneckDecoderConfig(
- library='nemo',
- model_name=None,
- pretrained=False,
- inner_size=2048,
- num_layers=6,
- num_attention_heads=8,
- ffn_dropout=0.1,
- attn_score_dropout=0.1,
- attn_layer_dropout=0.1,
- arch='seq2seq',
+ decoder: NeMoTransformerBottleneckDecoderConfig = field(
+ default_factory=lambda: NeMoTransformerBottleneckDecoderConfig(
+ library='nemo',
+ model_name=None,
+ pretrained=False,
+ inner_size=2048,
+ num_layers=6,
+ num_attention_heads=8,
+ ffn_dropout=0.1,
+ attn_score_dropout=0.1,
+ attn_layer_dropout=0.1,
+ arch='seq2seq',
+ )
)
diff --git a/nemo/collections/nlp/models/token_classification/punctuation_capitalization_config.py b/nemo/collections/nlp/models/token_classification/punctuation_capitalization_config.py
--- a/nemo/collections/nlp/models/token_classification/punctuation_capitalization_config.py
+++ b/nemo/collections/nlp/models/token_classification/punctuation_capitalization_config.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import Any, Dict, Optional
from omegaconf.omegaconf import MISSING, DictConfig, OmegaConf, open_dict
@@ -215,13 +215,15 @@ class PunctuationCapitalizationModelConfig:
This config is a part of :class:`~PunctuationCapitalizationConfig`.
"""
- class_labels: ClassLabelsConfig = ClassLabelsConfig()
+ class_labels: ClassLabelsConfig = field(default_factory=lambda: ClassLabelsConfig())
"""A mandatory parameter containing a dictionary with names of label id files used in .nemo checkpoints.
These file names can also be used for passing label vocabularies to the model. If you wish to use ``class_labels``
for passing vocabularies, please provide path to vocabulary files in
``model.common_dataset_parameters.label_vocab_dir`` parameter."""
- common_dataset_parameters: Optional[CommonDatasetParametersConfig] = CommonDatasetParametersConfig()
+ common_dataset_parameters: Optional[CommonDatasetParametersConfig] = field(
+ default_factory=lambda: CommonDatasetParametersConfig()
+ )
"""Label ids and loss mask information information."""
train_ds: Optional[PunctuationCapitalizationTrainDataConfig] = None
@@ -233,16 +235,16 @@ class PunctuationCapitalizationModelConfig:
test_ds: Optional[PunctuationCapitalizationEvalDataConfig] = None
"""A configuration for creating test datasets and data loaders."""
- punct_head: HeadConfig = HeadConfig()
+ punct_head: HeadConfig = field(default_factory=lambda: HeadConfig())
"""A configuration for creating punctuation MLP head that is applied to a language model outputs."""
- capit_head: HeadConfig = HeadConfig()
+ capit_head: HeadConfig = field(default_factory=lambda: HeadConfig())
"""A configuration for creating capitalization MLP head that is applied to a language model outputs."""
- tokenizer: Any = TokenizerConfig()
+ tokenizer: Any = field(default_factory=lambda: TokenizerConfig())
"""A configuration for source text tokenizer."""
- language_model: LanguageModelConfig = LanguageModelConfig()
+ language_model: LanguageModelConfig = field(default_factory=lambda: LanguageModelConfig())
"""A configuration of a BERT-like language model which serves as a model body."""
optim: Optional[Any] = None
@@ -311,22 +313,30 @@ class PunctuationCapitalizationConfig(NemoConfig):
do_testing: bool = False
"""Whether ot perform testing of the model."""
- model: PunctuationCapitalizationModelConfig = PunctuationCapitalizationModelConfig()
+ model: PunctuationCapitalizationModelConfig = field(default_factory=lambda: PunctuationCapitalizationModelConfig())
"""A configuration for the
:class:`~nemo.collections.nlp.models.token_classification.punctuation_capitalization_model.PunctuationCapitalizationModel`
model."""
- trainer: Optional[TrainerConfig] = TrainerConfig()
+ trainer: Optional[TrainerConfig] = field(default_factory=lambda: TrainerConfig())
"""Contains ``Trainer`` Lightning class constructor parameters."""
- exp_manager: Optional[ExpManagerConfig] = ExpManagerConfig(name=name, files_to_copy=[])
+ exp_manager: Optional[ExpManagerConfig] = field(
+ default_factory=lambda: ExpManagerConfig(name=None, files_to_copy=[])
+ )
"""A configuration with various NeMo training options such as output directories, resuming from checkpoint,
tensorboard and W&B logging, and so on. For possible options see :ref:`exp-manager-label`."""
+ def __post_init__(self):
+ if self.exp_manager is not None:
+ self.exp_manager.name = self.name
+
@dataclass
class PunctuationCapitalizationLexicalAudioConfig(PunctuationCapitalizationConfig):
- model: PunctuationCapitalizationLexicalAudioModelConfig = PunctuationCapitalizationLexicalAudioModelConfig()
+ model: PunctuationCapitalizationLexicalAudioModelConfig = field(
+ default_factory=lambda: PunctuationCapitalizationLexicalAudioModelConfig()
+ )
def is_legacy_model_config(model_cfg: DictConfig) -> bool:
diff --git a/nemo/collections/nlp/modules/common/megatron/megatron_encoders.py b/nemo/collections/nlp/modules/common/megatron/megatron_encoders.py
--- a/nemo/collections/nlp/modules/common/megatron/megatron_encoders.py
+++ b/nemo/collections/nlp/modules/common/megatron/megatron_encoders.py
@@ -13,7 +13,6 @@
# limitations under the License.
"""Transformer based language model."""
-from MeCab import Model
from nemo.collections.nlp.modules.common.megatron.megatron_perceiver_encoders import MegatronPerceiverEncoderModule
from nemo.collections.nlp.modules.common.megatron.megatron_transformer_encoder import MegatronTransformerEncoderModule
from nemo.collections.nlp.modules.common.megatron.retrieval_transformer import (
@@ -25,6 +24,13 @@
scaled_init_method_normal,
)
+try:
+ from MeCab import Model
+
+ HAVE_MECAB = True
+except (ImportError, ModuleNotFoundError):
+ HAVE_MECAB = False
+
try:
from apex.transformer.enums import AttnMaskType, ModelType
diff --git a/nemo/collections/tts/models/fastpitch.py b/nemo/collections/tts/models/fastpitch.py
--- a/nemo/collections/tts/models/fastpitch.py
+++ b/nemo/collections/tts/models/fastpitch.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import contextlib
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from pathlib import Path
from typing import List, Optional
@@ -70,12 +70,12 @@ class TextTokenizer:
apostrophe: bool = True
pad_with_space: bool = True
add_blank_at: bool = True
- g2p: G2PConfig = G2PConfig()
+ g2p: G2PConfig = field(default_factory=lambda: G2PConfig())
@dataclass
class TextTokenizerConfig:
- text_tokenizer: TextTokenizer = TextTokenizer()
+ text_tokenizer: TextTokenizer = field(default_factory=lambda: TextTokenizer())
class FastPitchModel(SpectrogramGenerator, Exportable, FastPitchAdapterModelMixin):
diff --git a/nemo/collections/tts/models/tacotron2.py b/nemo/collections/tts/models/tacotron2.py
--- a/nemo/collections/tts/models/tacotron2.py
+++ b/nemo/collections/tts/models/tacotron2.py
@@ -13,7 +13,7 @@
# limitations under the License.
import contextlib
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional
import torch
@@ -53,7 +53,7 @@ class Preprocessor:
@dataclass
class Tacotron2Config:
- preprocessor: Preprocessor = Preprocessor()
+ preprocessor: Preprocessor = field(default_factory=lambda: Preprocessor())
encoder: Dict[Any, Any] = MISSING
decoder: Dict[Any, Any] = MISSING
postnet: Dict[Any, Any] = MISSING
diff --git a/nemo/core/config/modelPT.py b/nemo/core/config/modelPT.py
--- a/nemo/core/config/modelPT.py
+++ b/nemo/core/config/modelPT.py
@@ -58,11 +58,13 @@ class HydraConfig:
class NemoConfig:
name: str = MISSING
model: ModelConfig = MISSING
- trainer: config.TrainerConfig = config.TrainerConfig(
- strategy="ddp", enable_checkpointing=False, logger=False, log_every_n_steps=1, accelerator='gpu'
+ trainer: config.TrainerConfig = field(
+ default_factory=lambda: config.TrainerConfig(
+ strategy="ddp", enable_checkpointing=False, logger=False, log_every_n_steps=1, accelerator='gpu'
+ )
)
- exp_manager: Optional[Any] = exp_manager.ExpManagerConfig()
- hydra: HydraConfig = HydraConfig()
+ exp_manager: Optional[Any] = field(default_factory=lambda: exp_manager.ExpManagerConfig())
+ hydra: HydraConfig = field(default_factory=lambda: HydraConfig())
class ModelConfigBuilder:
diff --git a/nemo/utils/exp_manager.py b/nemo/utils/exp_manager.py
--- a/nemo/utils/exp_manager.py
+++ b/nemo/utils/exp_manager.py
@@ -18,7 +18,7 @@
import sys
import time
import warnings
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from datetime import timedelta
from pathlib import Path
from shutil import copy, move
@@ -146,28 +146,30 @@ class ExpManagerConfig:
create_wandb_logger: Optional[bool] = False
wandb_logger_kwargs: Optional[Dict[Any, Any]] = None
create_mlflow_logger: Optional[bool] = False
- mlflow_logger_kwargs: Optional[MLFlowParams] = MLFlowParams()
+ mlflow_logger_kwargs: Optional[MLFlowParams] = field(default_factory=lambda: MLFlowParams())
create_dllogger_logger: Optional[bool] = False
- dllogger_logger_kwargs: Optional[DLLoggerParams] = DLLoggerParams()
+ dllogger_logger_kwargs: Optional[DLLoggerParams] = field(default_factory=lambda: DLLoggerParams())
create_clearml_logger: Optional[bool] = False
- clearml_logger_kwargs: Optional[ClearMLParams] = ClearMLParams()
+ clearml_logger_kwargs: Optional[ClearMLParams] = field(default_factory=lambda: ClearMLParams())
# Checkpointing parameters
create_checkpoint_callback: Optional[bool] = True
- checkpoint_callback_params: Optional[CallbackParams] = CallbackParams()
+ checkpoint_callback_params: Optional[CallbackParams] = field(default_factory=lambda: CallbackParams())
create_early_stopping_callback: Optional[bool] = False
- early_stopping_callback_params: Optional[EarlyStoppingParams] = EarlyStoppingParams()
+ early_stopping_callback_params: Optional[EarlyStoppingParams] = field(
+ default_factory=lambda: EarlyStoppingParams()
+ )
create_preemption_callback: Optional[bool] = True
# Additional exp_manager arguments
files_to_copy: Optional[List[str]] = None
# logs timing of train/val/test steps
log_step_timing: Optional[bool] = True
- step_timing_kwargs: Optional[StepTimingParams] = StepTimingParams()
+ step_timing_kwargs: Optional[StepTimingParams] = field(default_factory=lambda: StepTimingParams())
# Configures creation of log files for different ranks
log_local_rank_0_only: Optional[bool] = False
log_global_rank_0_only: Optional[bool] = False
# disable initial validation when resuming from a checkpoint saved during validation
disable_validation_on_resume: Optional[bool] = True
- ema: Optional[EMAParams] = EMAParams()
+ ema: Optional[EMAParams] = field(default_factory=lambda: EMAParams())
# Wall clock time limit
max_time_per_run: Optional[str] = None
# time to sleep non 0 ranks during initialization
diff --git a/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram.py b/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram.py
--- a/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram.py
+++ b/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram.py
@@ -112,14 +112,14 @@ class EvalBeamSearchNGramConfig:
beam_beta: List[float] = field(default_factory=lambda: [0.0]) # The beta parameter or list of the betas for the beam search decoding
decoding_strategy: str = "beam"
- decoding: ctc_beam_decoding.BeamCTCInferConfig = ctc_beam_decoding.BeamCTCInferConfig(beam_size=128)
+ decoding: ctc_beam_decoding.BeamCTCInferConfig = field(default_factory=lambda: ctc_beam_decoding.BeamCTCInferConfig(beam_size=128))
- text_processing: Optional[TextProcessingConfig] = TextProcessingConfig(
+ text_processing: Optional[TextProcessingConfig] = field(default_factory=lambda: TextProcessingConfig(
punctuation_marks = ".,?",
separate_punctuation = False,
do_lowercase = False,
rm_punctuation = False,
- )
+ ))
# fmt: on
diff --git a/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram_transducer.py b/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram_transducer.py
--- a/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram_transducer.py
+++ b/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram_transducer.py
@@ -115,7 +115,7 @@ class EvalBeamSearchNGramConfig:
hat_subtract_ilm: bool = False
hat_ilm_weight: List[float] = field(default_factory=lambda: [0.0])
- decoding: rnnt_beam_decoding.BeamRNNTInferConfig = rnnt_beam_decoding.BeamRNNTInferConfig(beam_size=128)
+ decoding: rnnt_beam_decoding.BeamRNNTInferConfig = field(default_factory=lambda: rnnt_beam_decoding.BeamRNNTInferConfig(beam_size=128))
# fmt: on
diff --git a/scripts/confidence_ensembles/build_ensemble.py b/scripts/confidence_ensembles/build_ensemble.py
--- a/scripts/confidence_ensembles/build_ensemble.py
+++ b/scripts/confidence_ensembles/build_ensemble.py
@@ -75,7 +75,7 @@
import sys
import tempfile
from copy import deepcopy
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from pathlib import Path
from typing import Dict, List, Optional, Tuple
@@ -209,19 +209,23 @@ class BuildEnsembleConfig:
random_seed: int = 0 # for reproducibility
# default confidence, can override
- confidence: ConfidenceConfig = ConfidenceConfig(
- # we keep frame confidences and apply aggregation manually to get full-utterance confidence
- preserve_frame_confidence=True,
- exclude_blank=True,
- aggregation="mean",
- measure_cfg=ConfidenceMeasureConfig(name="entropy", entropy_type="renyi", alpha=0.25, entropy_norm="lin",),
+ confidence: ConfidenceConfig = field(
+ default_factory=lambda: ConfidenceConfig(
+ # we keep frame confidences and apply aggregation manually to get full-utterance confidence
+ preserve_frame_confidence=True,
+ exclude_blank=True,
+ aggregation="mean",
+ measure_cfg=ConfidenceMeasureConfig(name="entropy", entropy_type="renyi", alpha=0.25, entropy_norm="lin",),
+ )
)
temperature: float = 1.0
# this is optional, but can be used to change any aspect of the transcription
# config, such as batch size or amp usage. Note that model, data and confidence
# will be overriden by this script
- transcription: transcribe_speech.TranscriptionConfig = transcribe_speech.TranscriptionConfig()
+ transcription: transcribe_speech.TranscriptionConfig = field(
+ default_factory=lambda: transcribe_speech.TranscriptionConfig()
+ )
# set to True to tune the confidence.
# requires dev manifests to be specified for each model
@@ -229,12 +233,14 @@ class BuildEnsembleConfig:
# used to specify what to tune over. By default runs tuning over some
# reasonalbe grid, so that it does not take forever.
# Can be changed as needed
- tune_confidence_config: TuneConfidenceConfig = TuneConfidenceConfig()
+ tune_confidence_config: TuneConfidenceConfig = field(default_factory=lambda: TuneConfidenceConfig())
# very fast to tune and can be important in case of imbalanced datasets
# will automatically set to False if dev data is not available
tune_logistic_regression: bool = True
- tune_logistic_regression_config: TuneLogisticRegressionConfig = TuneLogisticRegressionConfig()
+ tune_logistic_regression_config: TuneLogisticRegressionConfig = field(
+ default_factory=lambda: TuneLogisticRegressionConfig()
+ )
def __post_init__(self):
"""Checking that if any dev data is provided, all are provided.
diff --git a/scripts/speech_recognition/confidence/benchmark_asr_confidence.py b/scripts/speech_recognition/confidence/benchmark_asr_confidence.py
--- a/scripts/speech_recognition/confidence/benchmark_asr_confidence.py
+++ b/scripts/speech_recognition/confidence/benchmark_asr_confidence.py
@@ -14,7 +14,7 @@
import json
import os
-from dataclasses import dataclass, is_dataclass
+from dataclasses import dataclass, field, is_dataclass
from pathlib import Path
from typing import Optional
@@ -124,7 +124,9 @@ class ConfidenceBenchmarkingConfig:
# Confidence configs
target_level: str = "auto" # Choices: "word", "token", "auto" (for both word- and token-level confidence)
- confidence_cfg: ConfidenceConfig = ConfidenceConfig(preserve_word_confidence=True, preserve_token_confidence=True)
+ confidence_cfg: ConfidenceConfig = field(
+ default_factory=lambda: ConfidenceConfig(preserve_word_confidence=True, preserve_token_confidence=True)
+ )
grid_params: Optional[str] = None # a dictionary with lists of parameters to iteratively benchmark on
diff --git a/scripts/speech_recognition/convert_to_tarred_audio_dataset.py b/scripts/speech_recognition/convert_to_tarred_audio_dataset.py
--- a/scripts/speech_recognition/convert_to_tarred_audio_dataset.py
+++ b/scripts/speech_recognition/convert_to_tarred_audio_dataset.py
@@ -202,7 +202,7 @@ class ASRTarredDatasetMetadata:
num_samples_per_shard: Optional[int] = None
is_concatenated_manifest: bool = False
- dataset_config: Optional[ASRTarredDatasetConfig] = ASRTarredDatasetConfig()
+ dataset_config: Optional[ASRTarredDatasetConfig] = field(default_factory=lambda: ASRTarredDatasetConfig())
history: Optional[List[Any]] = field(default_factory=lambda: [])
def __post_init__(self):
diff --git a/tools/nemo_forced_aligner/align.py b/tools/nemo_forced_aligner/align.py
--- a/tools/nemo_forced_aligner/align.py
+++ b/tools/nemo_forced_aligner/align.py
@@ -149,8 +149,8 @@ class AlignmentConfig:
# Output file configs
save_output_file_formats: List[str] = field(default_factory=lambda: ["ctm", "ass"])
- ctm_file_config: CTMFileConfig = CTMFileConfig()
- ass_file_config: ASSFileConfig = ASSFileConfig()
+ ctm_file_config: CTMFileConfig = field(default_factory=lambda: CTMFileConfig())
+ ass_file_config: ASSFileConfig = field(default_factory=lambda: ASSFileConfig())
@hydra_runner(config_name="AlignmentConfig", schema=AlignmentConfig)
</patch>
|
diff --git a/tests/collections/asr/test_text_to_text_dataset.py b/tests/collections/asr/test_text_to_text_dataset.py
--- a/tests/collections/asr/test_text_to_text_dataset.py
+++ b/tests/collections/asr/test_text_to_text_dataset.py
@@ -15,7 +15,7 @@
import json
import multiprocessing
import os
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from pathlib import Path
import pytest
@@ -110,7 +110,7 @@ class TextTokenizerCfg:
apostrophe: bool = True
pad_with_space: bool = True
add_blank_at: bool = True
- g2p: G2PConfig = G2PConfig()
+ g2p: G2PConfig = field(default_factory=lambda: G2PConfig())
config = OmegaConf.create(OmegaConf.to_yaml(TextTokenizerCfg()))
return instantiate(config)
version: 1.0

instance_id: NVIDIA__NeMo-7616
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Ubuntu 22.04 Python 3.11 [asr]: multiple errors `dataclasses ValueError: mutable default * for field * is not allowed: use default_factory`
**Describe the bug**
After installing the latest stable `1.19.1` from PyPI, or the latest current commit, with `[asr]` extras, I'm getting this error from `hydra-core==1.2.0` when trying to import `nemo.collections.asr`:
```
Traceback (most recent call last):
File "/home/alex/T7/src/speaker_verification/nemo_speaker/test.py", line 5, in <module>
import nemo.collections.asr as nemo_asr
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/__init__.py", line 15, in <module>
from nemo.collections.asr import data, losses, models, modules
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/losses/__init__.py", line 15, in <module>
from nemo.collections.asr.losses.angularloss import AngularSoftmaxLoss
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/losses/angularloss.py", line 18, in <module>
from nemo.core.classes import Loss, Typing, typecheck
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/core/__init__.py", line 16, in <module>
from nemo.core.classes import *
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/core/classes/__init__.py", line 16, in <module>
import hydra
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/__init__.py", line 5, in <module>
from hydra import utils
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/utils.py", line 8, in <module>
import hydra._internal.instantiate._instantiate2
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 12, in <module>
from hydra._internal.utils import _locate
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/_internal/utils.py", line 18, in <module>
from hydra.core.utils import get_valid_filename, validate_config_path
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/core/utils.py", line 20, in <module>
from hydra.core.hydra_config import HydraConfig
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/core/hydra_config.py", line 6, in <module>
from hydra.conf import HydraConf
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/conf/__init__.py", line 46, in <module>
class JobConf:
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/conf/__init__.py", line 75, in JobConf
@dataclass
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1230, in dataclass
return wrap(cls)
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1220, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 958, in _process_class
cls_fields.append(_get_field(cls, name, type, kw_only))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 815, in _get_field
raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'hydra.conf.JobConf.JobConfig.OverrideDirname'> for field override_dirname is not allowed: use default_factory
```
If I then manually upgrade `hydra-core` to the current latest (`1.3.2`), I get similar errors from `nemo-toolkit`:
```
Traceback (most recent call last):
File "/home/alex/T7/src/speaker_verification/nemo_speaker/test.py", line 5, in <module>
import nemo.collections.asr as nemo_asr
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/__init__.py", line 15, in <module>
from nemo.collections.asr import data, losses, models, modules
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/losses/__init__.py", line 16, in <module>
from nemo.collections.asr.losses.audio_losses import SDRLoss
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/losses/audio_losses.py", line 21, in <module>
from nemo.collections.asr.parts.preprocessing.features import make_seq_mask_like
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/__init__.py", line 16, in <module>
from nemo.collections.asr.parts.preprocessing.features import FeaturizerFactory, FilterbankFeatures, WaveformFeaturizer
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/features.py", line 44, in <module>
from nemo.collections.asr.parts.preprocessing.perturb import AudioAugmentor
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/perturb.py", line 50, in <module>
from nemo.collections.common.parts.preprocessing import collections, parsers
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/common/__init__.py", line 16, in <module>
from nemo.collections.common import data, losses, parts, tokenizers
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/common/parts/__init__.py", line 15, in <module>
from nemo.collections.common.parts.adapter_modules import LinearAdapter, LinearAdapterConfig
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/common/parts/adapter_modules.py", line 147, in <module>
@dataclass
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1230, in dataclass
return wrap(cls)
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1220, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 958, in _process_class
cls_fields.append(_get_field(cls, name, type, kw_only))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 815, in _get_field
raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'nemo.core.classes.mixins.adapter_mixin_strategies.ResidualAddAdapterStrategyConfig'> for field adapter_strategy is not allowed: use default_factory
```
It's easy to fix with a patch like this:
```
--- nemo/collections/common/parts/adapter_modules.py.orig 2023-08-04 13:55:53.464534800 +0200
+++ nemo/collections/common/parts/adapter_modules.py 2023-08-04 14:05:45.579537700 +0200
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass, is_dataclass
+from dataclasses import dataclass, is_dataclass, field
from typing import Any, Optional
from hydra.utils import instantiate
@@ -151,5 +151,5 @@ class LinearAdapterConfig:
activation: str = 'swish'
norm_position: str = 'pre'
dropout: float = 0.0
- adapter_strategy: Optional[Any] = adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
+ adapter_strategy: Optional[Any] = field(default_factory=adapter_mixin_strategies.ResidualAddAdapterStrategyConfig)
_target_: str = "{0}.{1}".format(LinearAdapter.__module__, LinearAdapter.__name__)
```
However then another error of the kind comes up:
```
Traceback (most recent call last):
File "/home/alex/T7/src/speaker_verification/nemo_speaker/test.py", line 5, in <module>
import nemo.collections.asr as nemo_asr
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/__init__.py", line 15, in <module>
from nemo.collections.asr import data, losses, models, modules
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/models/__init__.py", line 18, in <module>
from nemo.collections.asr.models.clustering_diarizer import ClusteringDiarizer
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/models/clustering_diarizer.py", line 29, in <module>
from nemo.collections.asr.metrics.der import score_labels
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/metrics/der.py", line 24, in <module>
from nemo.collections.asr.metrics.wer import word_error_rate
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/metrics/wer.py", line 27, in <module>
from nemo.collections.asr.parts.submodules import ctc_beam_decoding, ctc_greedy_decoding
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/parts/submodules/ctc_beam_decoding.py", line 593, in <module>
@dataclass
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1230, in dataclass
return wrap(cls)
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1220, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 958, in _process_class
cls_fields.append(_get_field(cls, name, type, kw_only))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 815, in _get_field
raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'nemo.collections.asr.parts.submodules.ctc_beam_decoding.FlashlightConfig'> for field flashlight_cfg is not allowed: use default_factory
```
This can also be fixed accordingly, but all in all, it appears these issues are pretty common in the code base.
Looks like NeMo isn't Python 3.11 ready, at least the `asr` collection.
**Steps/Code to reproduce bug**
With Python 3.11
1. Install `nemo-toolkit[asr]` either 1.19.1, or current HEAD from git (b498d438fc4c35ebf364a9a1c5cd3e29a2c0fe50)
2. Run `import nemo.collections.asr`
**Expected behavior**
Import `nemo.collections.asr` without errors
**Environment overview (please complete the following information)**
- Environment location: Bare-metal
- Method of NeMo install: `pip install nemo-toolkit==1.19.1` or from source b498d438fc4c35ebf364a9a1c5cd3e29a2c0fe50
**Environment details**
- Ubuntu 22.04
- PyTorch version 2.0.1
- Python version 3.11.4
</issue>
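The failure mode described in the issue can be reproduced in isolation. The following sketch (standalone illustration, not NeMo code; `Inner` and `Outer` are hypothetical names) shows why Python 3.11 rejects a dataclass instance as a field default and how `field(default_factory=...)` — the same fix applied throughout the patch — avoids it:

```python
from dataclasses import dataclass, field


@dataclass
class Inner:
    rate: float = 0.5


# On Python 3.11+, using a dataclass *instance* as a field default raises at
# class-creation time:
#     ValueError: mutable default <class '...Inner'> for field inner is not
#     allowed: use default_factory
# because a dataclass with eq=True has __hash__ set to None (unhashable), and
# 3.11 rejects any unhashable default. Wrapping it in default_factory defers
# construction to instantiation time, so each Outer() gets its own Inner:
@dataclass
class Outer:
    inner: Inner = field(default_factory=Inner)


a, b = Outer(), Outer()
assert a.inner == b.inner       # equal by value
assert a.inner is not b.inner   # but not shared between instances
```

This is also why the patch works on older interpreters: `default_factory` has behaved this way since dataclasses were introduced in Python 3.7, so the rewrite is backward compatible.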
<code>
[start of README.rst]
1
2 |status| |documentation| |codeql| |license| |pypi| |pyversion| |downloads| |black|
3
4 .. |status| image:: http://www.repostatus.org/badges/latest/active.svg
5 :target: http://www.repostatus.org/#active
6 :alt: Project Status: Active - The project has reached a stable, usable state and is being actively developed.
7
8 .. |documentation| image:: https://readthedocs.com/projects/nvidia-nemo/badge/?version=main
9 :alt: Documentation
10 :target: https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/
11
12 .. |license| image:: https://img.shields.io/badge/License-Apache%202.0-brightgreen.svg
13 :target: https://github.com/NVIDIA/NeMo/blob/master/LICENSE
14 :alt: NeMo core license and license for collections in this repo
15
16 .. |pypi| image:: https://badge.fury.io/py/nemo-toolkit.svg
17 :target: https://badge.fury.io/py/nemo-toolkit
18 :alt: Release version
19
20 .. |pyversion| image:: https://img.shields.io/pypi/pyversions/nemo-toolkit.svg
21 :target: https://badge.fury.io/py/nemo-toolkit
22 :alt: Python version
23
24 .. |downloads| image:: https://static.pepy.tech/personalized-badge/nemo-toolkit?period=total&units=international_system&left_color=grey&right_color=brightgreen&left_text=downloads
25 :target: https://pepy.tech/project/nemo-toolkit
26 :alt: PyPi total downloads
27
28 .. |codeql| image:: https://github.com/nvidia/nemo/actions/workflows/codeql.yml/badge.svg?branch=main&event=push
29 :target: https://github.com/nvidia/nemo/actions/workflows/codeql.yml
30 :alt: CodeQL
31
32 .. |black| image:: https://img.shields.io/badge/code%20style-black-000000.svg
33 :target: https://github.com/psf/black
34 :alt: Code style: black
35
36 .. _main-readme:
37
38 **NVIDIA NeMo**
39 ===============
40
41 Introduction
42 ------------
43
44 NVIDIA NeMo is a conversational AI toolkit built for researchers working on automatic speech recognition (ASR),
45 text-to-speech synthesis (TTS), large language models (LLMs), and
46 natural language processing (NLP).
47 The primary objective of NeMo is to help researchers from industry and academia to reuse prior work (code and pretrained models)
48 and make it easier to create new `conversational AI models <https://developer.nvidia.com/conversational-ai#started>`_.
49
50 All NeMo models are trained with `Lightning <https://github.com/Lightning-AI/lightning>`_ and
51 training is automatically scalable to 1000s of GPUs.
52 Additionally, NeMo Megatron LLM models can be trained up to 1 trillion parameters using tensor and pipeline model parallelism.
53 NeMo models can be optimized for inference and deployed for production use-cases with `NVIDIA Riva <https://developer.nvidia.com/riva>`_.
54
55 Getting started with NeMo is simple.
56 State of the Art pretrained NeMo models are freely available on `HuggingFace Hub <https://huggingface.co/models?library=nemo&sort=downloads&search=nvidia>`_ and
57 `NVIDIA NGC <https://catalog.ngc.nvidia.com/models?query=nemo&orderBy=weightPopularDESC>`_.
58 These models can be used to transcribe audio, synthesize speech, or translate text in just a few lines of code.
59
60 We have extensive `tutorials <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/starthere/tutorials.html>`_ that
61 can all be run on `Google Colab <https://colab.research.google.com>`_.
62
63 For advanced users that want to train NeMo models from scratch or finetune existing NeMo models
64 we have a full suite of `example scripts <https://github.com/NVIDIA/NeMo/tree/main/examples>`_ that support multi-GPU/multi-node training.
65
66 For scaling NeMo LLM training on Slurm clusters or public clouds, please see the `NVIDIA NeMo Megatron Launcher <https://github.com/NVIDIA/NeMo-Megatron-Launcher>`_.
67 The NM launcher has extensive recipes, scripts, utilities, and documentation for training NeMo LLMs and also has an `Autoconfigurator <https://github.com/NVIDIA/NeMo-Megatron-Launcher#53-using-autoconfigurator-to-find-the-optimal-configuration>`_
68 which can be used to find the optimal model parallel configuration for training on a specific cluster.
69
70 Also see the two introductory videos below for a high level overview of NeMo.
71
72 * Developing State-Of-The-Art Conversational AI Models in Three Lines of Code.
73 * NVIDIA NeMo: Toolkit for Conversational AI at PyData Yerevan 2022.
74
75 |three_lines| |pydata|
76
77 .. |pydata| image:: https://img.youtube.com/vi/J-P6Sczmas8/maxres3.jpg
78 :target: https://www.youtube.com/embed/J-P6Sczmas8?mute=0&start=14&autoplay=0
79 :width: 600
80 :alt: Develop Conversational AI Models in 3 Lines
81
82 .. |three_lines| image:: https://img.youtube.com/vi/wBgpMf_KQVw/maxresdefault.jpg
83 :target: https://www.youtube.com/embed/wBgpMf_KQVw?mute=0&start=0&autoplay=0
84 :width: 600
85 :alt: Introduction at PyData@Yerevan 2022
86
87 Key Features
88 ------------
89
90 * Speech processing
91 * `HuggingFace Space for Audio Transcription (File, Microphone and YouTube) <https://huggingface.co/spaces/smajumdar/nemo_multilingual_language_id>`_
92 * `Automatic Speech Recognition (ASR) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/intro.html>`_
93 * Supported ASR models: `<https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/asr/models.html>`_
94 * Jasper, QuartzNet, CitriNet, ContextNet
95 * Conformer-CTC, Conformer-Transducer, FastConformer-CTC, FastConformer-Transducer
96 * Squeezeformer-CTC and Squeezeformer-Transducer
97 * LSTM-Transducer (RNNT) and LSTM-CTC
98 * Supports the following decoders/losses:
99 * CTC
100 * Transducer/RNNT
101 * Hybrid Transducer/CTC
102 * NeMo Original `Multi-blank Transducers <https://arxiv.org/abs/2211.03541>`_ and `Token-and-Duration Transducers (TDT) <https://arxiv.org/abs/2304.06795>`_
103 * Streaming/Buffered ASR (CTC/Transducer) - `Chunked Inference Examples <https://github.com/NVIDIA/NeMo/tree/stable/examples/asr/asr_chunked_inference>`_
104 * Cache-aware Streaming Conformer with multiple lookaheads - `<https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/asr/models.html#cache-aware-streaming-conformer>`_
105 * Beam Search decoding
106 * `Language Modelling for ASR (CTC and RNNT) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/asr_language_modeling.html>`_: N-gram LM in fusion with Beam Search decoding, Neural Rescoring with Transformer
107 * `Support of long audios for Conformer with memory efficient local attention <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/results.html#inference-on-long-audio>`_
108 * `Speech Classification, Speech Command Recognition and Language Identification <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/speech_classification/intro.html>`_: MatchboxNet (Command Recognition), AmberNet (LangID)
109 * `Voice activity Detection (VAD) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/asr/speech_classification/models.html#marblenet-vad>`_: MarbleNet
110 * ASR with VAD Inference - `Example <https://github.com/NVIDIA/NeMo/tree/stable/examples/asr/asr_vad>`_
111 * `Speaker Recognition <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/speaker_recognition/intro.html>`_: TitaNet, ECAPA_TDNN, SpeakerNet
112 * `Speaker Diarization <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/speaker_diarization/intro.html>`_
113 * Clustering Diarizer: TitaNet, ECAPA_TDNN, SpeakerNet
114 * Neural Diarizer: MSDD (Multi-scale Diarization Decoder)
115 * `Speech Intent Detection and Slot Filling <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/speech_intent_slot/intro.html>`_: Conformer-Transformer
116 * `Pretrained models on different languages. <https://ngc.nvidia.com/catalog/collections/nvidia:nemo_asr>`_: English, Spanish, German, Russian, Chinese, French, Italian, Polish, ...
117 * `NGC collection of pre-trained speech processing models. <https://ngc.nvidia.com/catalog/collections/nvidia:nemo_asr>`_
118 * Natural Language Processing
119 * `NeMo Megatron pre-training of Large Language Models <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/nlp/nemo_megatron/intro.html>`_
120 * `Neural Machine Translation (NMT) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/machine_translation/machine_translation.html>`_
121 * `Punctuation and Capitalization <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/punctuation_and_capitalization.html>`_
122 * `Token classification (named entity recognition) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/token_classification.html>`_
123 * `Text classification <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/text_classification.html>`_
124 * `Joint Intent and Slot Classification <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/joint_intent_slot.html>`_
125 * `Question answering <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/question_answering.html>`_
126 * `GLUE benchmark <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/glue_benchmark.html>`_
127 * `Information retrieval <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/information_retrieval.html>`_
128 * `Entity Linking <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/entity_linking.html>`_
129 * `Dialogue State Tracking <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/sgd_qa.html>`_
130 * `Prompt Learning <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/nemo_megatron/prompt_learning.html>`_
131 * `NGC collection of pre-trained NLP models. <https://ngc.nvidia.com/catalog/collections/nvidia:nemo_nlp>`_
132 * `Synthetic Tabular Data Generation <https://developer.nvidia.com/blog/generating-synthetic-data-with-transformers-a-solution-for-enterprise-data-challenges/>`_
133 * Text-to-Speech Synthesis (TTS):
134 * `Documentation <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/tts/intro.html#>`_
135 * Mel-Spectrogram generators: FastPitch, SSL FastPitch, Mixer-TTS/Mixer-TTS-X, RAD-TTS, Tacotron2
136 * Vocoders: HiFiGAN, UnivNet, WaveGlow
137 * End-to-End Models: VITS
138 * `Pre-trained Model Checkpoints in NVIDIA GPU Cloud (NGC) <https://ngc.nvidia.com/catalog/collections/nvidia:nemo_tts>`_
139 * `Tools <https://github.com/NVIDIA/NeMo/tree/stable/tools>`_
140 * `Text Processing (text normalization and inverse text normalization) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/text_normalization/intro.html>`_
141 * `NeMo Forced Aligner <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/tools/nemo_forced_aligner.html>`_
142 * `CTC-Segmentation tool <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/tools/ctc_segmentation.html>`_
143 * `Speech Data Explorer <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/tools/speech_data_explorer.html>`_: a dash-based tool for interactive exploration of ASR/TTS datasets
144 * `Speech Data Processor <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/tools/speech_data_processor.html>`_
145
146
147 Built for speed, NeMo can utilize NVIDIA's Tensor Cores and scale out training to multiple GPUs and multiple nodes.
148
149 Requirements
150 ------------
151
152 1) Python 3.10 or above
153 2) Pytorch 1.13.1 or above
154 3) NVIDIA GPU for training
155
156 Documentation
157 -------------
158
159 .. |main| image:: https://readthedocs.com/projects/nvidia-nemo/badge/?version=main
160 :alt: Documentation Status
161 :scale: 100%
162 :target: https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/
163
164 .. |stable| image:: https://readthedocs.com/projects/nvidia-nemo/badge/?version=stable
165 :alt: Documentation Status
166 :scale: 100%
167 :target: https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/
168
169 +---------+-------------+------------------------------------------------------------------------------------------------------------------------------------------+
170 | Version | Status | Description |
171 +=========+=============+==========================================================================================================================================+
172 | Latest | |main| | `Documentation of the latest (i.e. main) branch. <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/>`_ |
173 +---------+-------------+------------------------------------------------------------------------------------------------------------------------------------------+
174 | Stable | |stable| | `Documentation of the stable (i.e. most recent release) branch. <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/>`_ |
175 +---------+-------------+------------------------------------------------------------------------------------------------------------------------------------------+
176
177 Tutorials
178 ---------
179 A great way to start with NeMo is by checking `one of our tutorials <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/starthere/tutorials.html>`_.
180
181 Getting help with NeMo
182 ----------------------
183 FAQ can be found on NeMo's `Discussions board <https://github.com/NVIDIA/NeMo/discussions>`_. You are welcome to ask questions or start discussions there.
184
185
186 Installation
187 ------------
188
189 Conda
190 ~~~~~
191
192 We recommend installing NeMo in a fresh Conda environment.
193
194 .. code-block:: bash
195
196 conda create --name nemo python==3.10.12
197 conda activate nemo
198
199 Install PyTorch using their `configurator <https://pytorch.org/get-started/locally/>`_.
200
201 .. code-block:: bash
202
203 conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia
204
205 The command used to install PyTorch may depend on your system. Please use the configurator linked above to find the right command for your system.
206
207 Pip
208 ~~~
209 Use this installation mode if you want the latest released version.
210
211 .. code-block:: bash
212
213 apt-get update && apt-get install -y libsndfile1 ffmpeg
214 pip install Cython
215 pip install nemo_toolkit['all']
216
217 Depending on the shell used, you may need to use ``"nemo_toolkit[all]"`` instead in the above command.
218
219 Pip from source
220 ~~~~~~~~~~~~~~~
221 Use this installation mode if you want the version from a particular GitHub branch (e.g main).
222
223 .. code-block:: bash
224
225 apt-get update && apt-get install -y libsndfile1 ffmpeg
226 pip install Cython
227 python -m pip install git+https://github.com/NVIDIA/NeMo.git@{BRANCH}#egg=nemo_toolkit[all]
228
229
230 From source
231 ~~~~~~~~~~~
232 Use this installation mode if you are contributing to NeMo.
233
234 .. code-block:: bash
235
236 apt-get update && apt-get install -y libsndfile1 ffmpeg
237 git clone https://github.com/NVIDIA/NeMo
238 cd NeMo
239 ./reinstall.sh
240
241 If you only want the toolkit without additional conda-based dependencies, you may replace ``reinstall.sh``
242 with ``pip install -e .`` when your PWD is the root of the NeMo repository.
243
244 RNNT
245 ~~~~
246 Note that RNNT requires numba to be installed from conda.
247
248 .. code-block:: bash
249
250 conda remove numba
251 pip uninstall numba
252 conda install -c conda-forge numba
253
254 NeMo Megatron
255 ~~~~~~~~~~~~~
256 NeMo Megatron training requires NVIDIA Apex to be installed.
257 Install it manually if not using the NVIDIA PyTorch container.
258
259 To install Apex, run
260
261 .. code-block:: bash
262
263 git clone https://github.com/NVIDIA/apex.git
264 cd apex
265 git checkout 52e18c894223800cb611682dce27d88050edf1de
266 pip install -v --no-build-isolation --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" --global-option="--fast_layer_norm" --global-option="--distributed_adam" --global-option="--deprecated_fused_adam" ./
267
268 It is highly recommended to use the NVIDIA PyTorch or NeMo container if you run into issues installing Apex or any other dependencies.
269
270 While installing Apex, it may raise an error if the CUDA version on your system does not match the CUDA version torch was compiled with.
271 This check can be bypassed by commenting out the raise here: https://github.com/NVIDIA/apex/blob/master/setup.py#L32
272
273 cuda-nvprof is needed to install Apex. The version should match the CUDA version that you are using:
274
275 .. code-block:: bash
276
277 conda install -c nvidia cuda-nvprof=11.8
278
279 packaging is also needed:
280
281 .. code-block:: bash
282
283 pip install packaging
284
285 With the latest versions of Apex, its `pyproject.toml` file may need to be deleted in order to install locally.
286
287
288 Transformer Engine
289 ~~~~~~~~~~~~~~~~~~
290 NeMo Megatron GPT has been integrated with `NVIDIA Transformer Engine <https://github.com/NVIDIA/TransformerEngine>`_
291 Transformer Engine enables FP8 training on NVIDIA Hopper GPUs.
292 `Install <https://docs.nvidia.com/deeplearning/transformer-engine/user-guide/installation.html>`_ it manually if not using the NVIDIA PyTorch container.
293
294 .. code-block:: bash
295
296 pip install --upgrade git+https://github.com/NVIDIA/TransformerEngine.git@stable
297
298 It is highly recommended to use the NVIDIA PyTorch or NeMo container if you run into issues installing Transformer Engine or any other dependencies.
299
300 Transformer Engine requires PyTorch to be built with CUDA 11.8.
301
302
303 Flash Attention
304 ~~~~~~~~~~~~~~~~~~~~
305 Transformer Engine already supports Flash Attention for GPT models. If you want to use Flash Attention for non-causal models, or with an attention bias (introduced by position encoding, e.g. Alibi), please install `flash-attn <https://github.com/HazyResearch/flash-attention>`_.
306
307 .. code-block:: bash
308
309 pip install flash-attn
310 pip install triton==2.0.0.dev20221202
311
312 NLP inference UI
313 ~~~~~~~~~~~~~~~~~~~~
314 To launch the inference web UI server, please install `gradio <https://gradio.app/>`_.
315
316 .. code-block:: bash
317
318 pip install gradio==3.34.0
319
320 NeMo Text Processing
321 ~~~~~~~~~~~~~~~~~~~~
322 NeMo Text Processing, specifically (Inverse) Text Normalization, is now a separate repository `https://github.com/NVIDIA/NeMo-text-processing <https://github.com/NVIDIA/NeMo-text-processing>`_.
323
324 Docker containers:
325 ~~~~~~~~~~~~~~~~~~
326 We release NeMo containers alongside NeMo releases. For example, NeMo ``r1.20.0`` comes with container ``nemo:23.06``, you may find more details about released containers in `releases page <https://github.com/NVIDIA/NeMo/releases>`_.
327
328 To use a built container, please run
329
330 .. code-block:: bash
331
332 docker pull nvcr.io/nvidia/nemo:23.06
333
334 To build a nemo container with Dockerfile from a branch, please run
335
336 .. code-block:: bash
337
338 DOCKER_BUILDKIT=1 docker build -f Dockerfile -t nemo:latest .
339
340
341 If you choose to work with the main branch, we recommend using NVIDIA's PyTorch container version 23.06-py3 and then installing from GitHub.
342
343 .. code-block:: bash
344
345 docker run --gpus all -it --rm -v <nemo_github_folder>:/NeMo --shm-size=8g \
346 -p 8888:8888 -p 6006:6006 --ulimit memlock=-1 --ulimit \
347 stack=67108864 --device=/dev/snd nvcr.io/nvidia/pytorch:23.06-py3
348
349 Examples
350 --------
351
352 Many examples can be found under the `"Examples" <https://github.com/NVIDIA/NeMo/tree/stable/examples>`_ folder.
353
354
355 Contributing
356 ------------
357
358 We welcome community contributions! Please refer to `CONTRIBUTING.md <https://github.com/NVIDIA/NeMo/blob/stable/CONTRIBUTING.md>`_ for the process.
359
360 Publications
361 ------------
362
363 We provide an ever-growing list of publications that utilize the NeMo framework. Please refer to `PUBLICATIONS.md <https://github.com/NVIDIA/NeMo/tree/stable/PUBLICATIONS.md>`_. We welcome the addition of your own articles to this list!
364
365 License
366 -------
367 NeMo is under `Apache 2.0 license <https://github.com/NVIDIA/NeMo/blob/stable/LICENSE>`_.
368
[end of README.rst]
[start of examples/asr/experimental/k2/align_speech_parallel.py]
1 # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """
16 # Based on examples/asr/transcribe_speech_parallel.py
17 # ASR alignment with multi-GPU/multi-node support for large datasets
18 # It supports both tarred and non-tarred datasets
19 # Arguments
20 # model: path to a nemo/PTL checkpoint file or name of a pretrained model
21 # predict_ds: config of the dataset/dataloader
22 # aligner_args: aligner config
23 # output_path: path to store the predictions
24 # model_stride: model downsampling factor, 8 for Citrinet models and 4 for Conformer models
25 #
26 # Results of each GPU/worker are written into a file named 'predictions_{rank}.json', and aggregated results of all workers are written into 'predictions_all.json'
27
28 Example for non-tarred datasets:
29
30 python align_speech_parallel.py \
31 model=stt_en_conformer_ctc_large \
32 predict_ds.manifest_filepath=/dataset/manifest_file.json \
33 predict_ds.batch_size=16 \
34 output_path=/tmp/
35
36 Example for tarred datasets:
37
38 python align_speech_parallel.py \
39 predict_ds.is_tarred=true \
40 predict_ds.manifest_filepath=/tarred_dataset/tarred_audio_manifest.json \
41 predict_ds.tarred_audio_filepaths=/tarred_dataset/audio__OP_0..127_CL_.tar \
42 ...
43
44 By default the trainer uses all available GPUs and the default precision is FP32.
45 By setting the trainer config you may control these settings. For example, to run the predictions with AMP on just two GPUs:
46
47 python align_speech_parallel.py \
48 trainer.precision=16 \
49 trainer.gpus=2 \
50 ...
51
52 You may control the dataloader's config by setting the predict_ds:
53
54 python align_speech_parallel.py \
55 predict_ds.num_workers=8 \
56 predict_ds.min_duration=2.0 \
57 predict_ds.sample_rate=16000 \
58 model=stt_en_conformer_ctc_small \
59 ...
60
61 You may control the aligner's config by setting the aligner_args:
62 aligner_args.alignment_type=argmax \
63 aligner_args.word_output=False \
64 aligner_args.cpu_decoding=True \
65 aligner_args.decode_batch_size=8 \
66 aligner_args.ctc_cfg.prob_suppress_index=-1 \
67 aligner_args.ctc_cfg.prob_suppress_value=0.5 \
68 aligner_args.rnnt_cfg.predictor_window_size=10 \
69 aligner_args.decoder_module_cfg.intersect_pruned=true \
70 aligner_args.decoder_module_cfg.intersect_conf.search_beam=40 \
71 ...
72
73 """
74
75
76 import os
77 from dataclasses import dataclass, is_dataclass
78 from typing import Optional
79
80 import pytorch_lightning as ptl
81 import torch
82 from omegaconf import MISSING, OmegaConf
83
84 from nemo.collections.asr.data.audio_to_ctm_dataset import ASRCTMPredictionWriter
85 from nemo.collections.asr.models import ASRModel
86 from nemo.collections.asr.models.configs.aligner_config import K2AlignerWrapperModelConfig
87 from nemo.collections.asr.models.configs.asr_models_config import ASRDatasetConfig
88 from nemo.collections.asr.models.k2_aligner_model import AlignerWrapperModel
89 from nemo.core.config import TrainerConfig, hydra_runner
90 from nemo.utils import logging
91 from nemo.utils.get_rank import is_global_rank_zero
92
93
94 @dataclass
95 class ParallelAlignmentConfig:
96 model: Optional[str] = None # name
97 predict_ds: ASRDatasetConfig = ASRDatasetConfig(return_sample_id=True, num_workers=4)
98 aligner_args: K2AlignerWrapperModelConfig = K2AlignerWrapperModelConfig()
99 output_path: str = MISSING
100 model_stride: int = 8
101
102 trainer: TrainerConfig = TrainerConfig(gpus=-1, accelerator="ddp")
103
104 # these arguments will be ignored
105 return_predictions: bool = False
106 use_cer: bool = False
107
108
109 def match_train_config(predict_ds, train_ds):
110 # Copies the important configurations from the model's train dataset into
111 # predict_ds so that predictions are performed with matching configurations.
112 if train_ds is None:
113 return
114
115 predict_ds.sample_rate = train_ds.get("sample_rate", 16000)
116 cfg_name_list = [
117 "int_values",
118 "use_start_end_token",
119 "blank_index",
120 "unk_index",
121 "normalize",
122 "parser",
123 "eos_id",
124 "bos_id",
125 "pad_id",
126 ]
127
128 if is_dataclass(predict_ds):
129 predict_ds = OmegaConf.structured(predict_ds)
130 for cfg_name in cfg_name_list:
131 if hasattr(train_ds, cfg_name):
132 setattr(predict_ds, cfg_name, getattr(train_ds, cfg_name))
133
134 return predict_ds
135
136
137 @hydra_runner(config_name="AlignmentConfig", schema=ParallelAlignmentConfig)
138 def main(cfg: ParallelAlignmentConfig):
139 if cfg.model.endswith(".nemo"):
140 logging.info("Attempting to initialize from .nemo file")
141 model = ASRModel.restore_from(restore_path=cfg.model, map_location="cpu")
142 elif cfg.model.endswith(".ckpt"):
143 logging.info("Attempting to initialize from .ckpt file")
144 model = ASRModel.load_from_checkpoint(checkpoint_path=cfg.model, map_location="cpu")
145 else:
146 logging.info(
147 "Attempting to initialize from a pretrained model as the model name does not have the extension of .nemo or .ckpt"
148 )
149 model = ASRModel.from_pretrained(model_name=cfg.model, map_location="cpu")
150
151 trainer = ptl.Trainer(**cfg.trainer)
152
153 cfg.predict_ds.return_sample_id = True
154 cfg.return_predictions = False
155 cfg.use_cer = False
156 cfg.predict_ds = match_train_config(predict_ds=cfg.predict_ds, train_ds=model._cfg.train_ds)
157 data_loader = model._setup_dataloader_from_config(cfg.predict_ds)
158
159 os.makedirs(cfg.output_path, exist_ok=True)
160 # trainer.global_rank is not valid before predict() is called. Need this hack to find the correct global_rank.
161 global_rank = trainer.node_rank * trainer.num_devices + int(os.environ.get("LOCAL_RANK", 0))
162 output_file = os.path.join(cfg.output_path, f"predictions_{global_rank}.json")
163 output_ctm_dir = os.path.join(cfg.output_path, "ctm")
164 predictor_writer = ASRCTMPredictionWriter(
165 dataset=data_loader.dataset,
166 output_file=output_file,
167 output_ctm_dir=output_ctm_dir,
168 time_per_frame=cfg.model_stride * model._cfg.preprocessor['window_stride'],
169 )
170 trainer.callbacks.extend([predictor_writer])
171
172 aligner_wrapper = AlignerWrapperModel(model=model, cfg=cfg.aligner_args)
173 trainer.predict(model=aligner_wrapper, dataloaders=data_loader, return_predictions=cfg.return_predictions)
174 samples_num = predictor_writer.close_output_file()
175
176 logging.info(
177 f"Prediction on rank {global_rank} is done for {samples_num} samples and results are stored in {output_file}."
178 )
179
180 if torch.distributed.is_initialized():
181 torch.distributed.barrier()
182
183 samples_num = 0
184 if is_global_rank_zero():
185 output_file = os.path.join(cfg.output_path, "predictions_all.json")
186 logging.info(f"Prediction files are being aggregated in {output_file}.")
187 with open(output_file, 'tw', encoding="utf-8") as outf:
188 for rank in range(trainer.world_size):
189 input_file = os.path.join(cfg.output_path, f"predictions_{rank}.json")
190 with open(input_file, 'r', encoding="utf-8") as inpf:
191 lines = inpf.readlines()
192 samples_num += len(lines)
193 outf.writelines(lines)
194 logging.info(
195 f"Prediction is done for {samples_num} samples in total on all workers and results are aggregated in {output_file}."
196 )
197
198
199 if __name__ == '__main__':
200 main()
201
[end of examples/asr/experimental/k2/align_speech_parallel.py]
[start of nemo/collections/asr/metrics/rnnt_wer.py]
1 # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import copy
16 import re
17 from abc import abstractmethod
18 from dataclasses import dataclass, is_dataclass
19 from typing import Callable, Dict, List, Optional, Tuple, Union
20
21 import editdistance
22 import numpy as np
23 import torch
24 from omegaconf import OmegaConf
25 from torchmetrics import Metric
26
27 from nemo.collections.asr.metrics.wer import move_dimension_to_the_front
28 from nemo.collections.asr.parts.submodules import rnnt_beam_decoding as beam_decode
29 from nemo.collections.asr.parts.submodules import rnnt_greedy_decoding as greedy_decode
30 from nemo.collections.asr.parts.utils.asr_confidence_utils import ConfidenceConfig, ConfidenceMixin
31 from nemo.collections.asr.parts.utils.rnnt_utils import Hypothesis, NBestHypotheses
32 from nemo.utils import logging
33
34 __all__ = ['RNNTDecoding', 'RNNTWER']
35
36
37 class AbstractRNNTDecoding(ConfidenceMixin):
38 """
39 Used for performing RNN-T auto-regressive decoding of the Decoder+Joint network given the encoder state.
40
41 Args:
42 decoding_cfg: A dict-like object which contains the following key-value pairs.
43 strategy: str value which represents the type of decoding that can occur.
44 Possible values are:
45 - greedy, greedy_batch (for greedy decoding).
46 - beam, tsd, alsd (for beam search decoding).
47
48 compute_hypothesis_token_set: A bool flag, which determines whether to compute a list of decoded
49 tokens as well as the decoded string. Default is False in order to avoid double decoding
50 unless required.
51
52 preserve_alignments: Bool flag which preserves the history of logprobs generated during
53 decoding (sample / batched). When set to true, the Hypothesis will contain
54 the non-null value for `alignments` in it. Here, `alignments` is a List of List of
55 Tuple(Tensor (of length V + 1), Tensor(scalar, label after argmax)).
56
57 In order to obtain this hypothesis, please utilize `rnnt_decoder_predictions_tensor` function
58 with the `return_hypotheses` flag set to True.
59
60 The length of the list corresponds to the Acoustic Length (T).
61 Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more targets from a vocabulary.
62 U is the number of target tokens for the current timestep Ti.
63
64 compute_timestamps: A bool flag, which determines whether to compute the character/subword, or
65 word based timestamp mapping the output log-probabilities to discrete intervals of timestamps.
66 The timestamps will be available in the returned Hypothesis.timestep as a dictionary.
67
68 rnnt_timestamp_type: A str value, which represents the types of timestamps that should be calculated.
69 Can take the following values - "char" for character/subword time stamps, "word" for word level
70 time stamps and "all" (default), for both character level and word level time stamps.
71
72 word_seperator: Str token representing the separator between words.
73
74 preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores
75 generated during decoding (sample / batched). When set to true, the Hypothesis will contain
76 the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of List of floats.
77
78 confidence_cfg: A dict-like object which contains the following key-value pairs related to confidence
79 scores. In order to obtain hypotheses with confidence scores, please utilize
80 `rnnt_decoder_predictions_tensor` function with the `preserve_frame_confidence` flag set to True.
81
82 preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores
83 generated during decoding (sample / batched). When set to true, the Hypothesis will contain
84 the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of List of floats.
85
86 The length of the list corresponds to the Acoustic Length (T).
87 Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more confidence scores.
88 U is the number of target tokens for the current timestep Ti.
89 preserve_token_confidence: Bool flag which preserves the history of per-token confidence scores
90 generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
91 the non-null value for `token_confidence` in it. Here, `token_confidence` is a List of floats.
92
93 The length of the list corresponds to the number of recognized tokens.
94 preserve_word_confidence: Bool flag which preserves the history of per-word confidence scores
95 generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
96 the non-null value for `word_confidence` in it. Here, `word_confidence` is a List of floats.
97
98 The length of the list corresponds to the number of recognized words.
99 exclude_blank: Bool flag indicating that blank token confidence scores are to be excluded
100 from the `token_confidence`.
101 aggregation: Which aggregation type to use for collapsing per-token confidence into per-word confidence.
102 Valid options are `mean`, `min`, `max`, `prod`.
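The aggregation options above can be sketched as follows (a hypothetical helper for illustration only, not the NeMo implementation):

```python
# Hypothetical helper (illustration only): collapse per-token confidence
# scores into a single per-word confidence using one of the four
# aggregation types listed above.
import math

def aggregate_word_confidence(token_conf, aggregation="min"):
    funcs = {
        "mean": lambda xs: sum(xs) / len(xs),
        "min": min,
        "max": max,
        "prod": math.prod,
    }
    return funcs[aggregation](token_conf)

aggregate_word_confidence([0.9, 0.6, 0.8], aggregation="min")  # 0.6
```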
103 method_cfg: A dict-like object which contains the method name and settings to compute per-frame
104 confidence scores.
105
106 name: The method name (str).
107 Supported values:
108 - 'max_prob' for using the maximum token probability as a confidence.
109 - 'entropy' for using a normalized entropy of a log-likelihood vector.
110
111 entropy_type: Which type of entropy to use (str).
112 Used if confidence_method_cfg.name is set to `entropy`.
113 Supported values:
114 - 'gibbs' for the (standard) Gibbs entropy. If the alpha (α) is provided,
115 the formula is the following: H_α = -sum_i((p^α_i)*log(p^α_i)).
116 Note that for this entropy, the alpha should comply with the following inequality:
117 (log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= α <= (1+log(V-1))/log(V-1)
118 where V is the model vocabulary size.
119 - 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
120 Tsallis entropy formula is the following: H_α = 1/(α-1)*(1-sum_i(p^α_i)),
121 where α is a parameter. When α == 1, it works like the Gibbs entropy.
122 More: https://en.wikipedia.org/wiki/Tsallis_entropy
123 - 'renyi' for the Rényi entropy.
124 Rényi entropy formula is the following: H_α = 1/(1-α)*log_2(sum_i(p^α_i)),
125 where α is a parameter. When α == 1, it works like the Gibbs entropy.
126 More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
127 
128 alpha: Power scale for logsoftmax (α for entropies). Here we restrict it to be > 0.
129 When the alpha equals one, scaling is not applied to 'max_prob',
130 and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
131
132 entropy_norm: A mapping of the entropy value to the interval [0,1].
133 Supported values:
134 - 'lin' for using the linear mapping.
135 - 'exp' for using exponential mapping with linear shift.
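As a minimal illustration of the 'gibbs' method combined with the 'lin' normalization (the helper name below is hypothetical, not part of the NeMo API):

```python
# Hypothetical helper (illustration only): Gibbs/Shannon entropy of a
# token probability vector, linearly mapped to [0, 1] so that 1.0
# corresponds to a fully confident prediction.
import math

def gibbs_confidence(probs):
    entropy = -sum(p * math.log(p) for p in probs if p > 0.0)
    max_entropy = math.log(len(probs))  # entropy of the uniform distribution
    return 1.0 - entropy / max_entropy  # 'lin' mapping to [0, 1]

gibbs_confidence([0.25, 0.25, 0.25, 0.25])  # ~0.0: maximally uncertain
gibbs_confidence([0.97, 0.01, 0.01, 0.01])  # well above 0.5: confident
```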
136
137 The config may further contain the following sub-dictionaries:
138 "greedy":
139 max_symbols: int, describing the maximum number of target tokens to decode per
140 timestep during greedy decoding. Setting to larger values allows longer sentences
141 to be decoded, at the cost of increased execution time.
142 preserve_frame_confidence: Same as above, overrides above value.
143 confidence_method_cfg: Same as above, overrides confidence_cfg.method_cfg.
144
145 "beam":
146 beam_size: int, defining the beam size for beam search. Must be >= 1.
147 If beam_size == 1, will perform cached greedy search. This might be slightly different
148 results compared to the greedy search above.
149
150 score_norm: optional bool, whether to normalize the returned beam score in the hypotheses.
151 Set to True by default.
152
153 return_best_hypothesis: optional bool, whether to return just the best hypothesis or all of the
154 hypotheses after beam search has concluded. This flag is set by default.
155
156 tsd_max_sym_exp: optional int, determines number of symmetric expansions of the target symbols
157 per timestep of the acoustic model. Larger values will allow longer sentences to be decoded,
158 at increased cost to execution time.
159
160 alsd_max_target_len: optional int or float, determines the potential maximum target sequence length.
161 If an integer is provided, it can decode sequences of that particular maximum length.
162 If a float is provided, it can decode sequences of int(alsd_max_target_len * seq_len),
163 where seq_len is the length of the acoustic model output (T).
164
165 NOTE:
166 If a float is provided, it can be greater than 1!
167 By default, a float of 2.0 is used so that a target sequence can be at most twice
168 as long as the acoustic model output length T.
169
170 maes_num_steps: Number of adaptive steps to take. From the paper, 2 steps is generally sufficient,
171 and can be reduced to 1 to improve decoding speed while sacrificing some accuracy. int > 0.
172
173 maes_prefix_alpha: Maximum prefix length in prefix search. Must be an integer; it is advised to keep this as 1
174 in order to reduce expensive beam search cost later. int >= 0.
175
176 maes_expansion_beta: Maximum number of prefix expansions allowed, in addition to the beam size.
177 Effectively, the number of hypothesis = beam_size + maes_expansion_beta. Must be an int >= 0,
178 and affects the speed of inference since large values will perform large beam search in the next step.
179
180 maes_expansion_gamma: Float pruning threshold used in the prune-by-value step when computing the expansions.
181 The default (2.3) is selected from the paper. It performs a comparison (max_log_prob - gamma <= log_prob[v])
182 where v is all vocabulary indices in the Vocab set and max_log_prob is the "most" likely token to be
183 predicted. Gamma therefore provides a margin of additional tokens which can be potential candidates for
184 expansion apart from the "most likely" candidate.
185 Lower values will reduce the number of expansions (by increasing pruning-by-value, thereby improving speed
186 but hurting accuracy). Higher values will increase the number of expansions (by reducing pruning-by-value,
187 thereby reducing speed but potentially improving accuracy). This is a hyper parameter to be experimentally
188 tuned on a validation set.
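The prune-by-value comparison described above can be sketched as (hypothetical helper, illustration only):

```python
# Hypothetical helper (illustration only): tokens whose log-probability
# lies within `gamma` of the best token survive as expansion candidates;
# all others are pruned by value.
def expansion_candidates(log_probs, gamma=2.3):
    max_log_prob = max(log_probs)
    return [v for v, lp in enumerate(log_probs) if max_log_prob - gamma <= lp]

expansion_candidates([-0.1, -1.5, -5.0, -2.3], gamma=2.3)  # [0, 1, 3]
```

A smaller `gamma` prunes more aggressively (faster, potentially less accurate); a larger `gamma` keeps more candidates for the next beam search step.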
189
190 softmax_temperature: Scales the logits of the joint prior to computing log_softmax.
191
192 decoder: The Decoder/Prediction network module.
193 joint: The Joint network module.
194 blank_id: The id of the RNNT blank token.
195 """
196
197 def __init__(self, decoding_cfg, decoder, joint, blank_id: int):
198 super(AbstractRNNTDecoding, self).__init__()
199
200 # Convert dataclass to config object
201 if is_dataclass(decoding_cfg):
202 decoding_cfg = OmegaConf.structured(decoding_cfg)
203
204 self.cfg = decoding_cfg
205 self.blank_id = blank_id
206 self.num_extra_outputs = joint.num_extra_outputs
207 self.big_blank_durations = self.cfg.get("big_blank_durations", None)
208 self.durations = self.cfg.get("durations", None)
209 self.compute_hypothesis_token_set = self.cfg.get("compute_hypothesis_token_set", False)
210 self.compute_langs = decoding_cfg.get('compute_langs', False)
211 self.preserve_alignments = self.cfg.get('preserve_alignments', None)
212 self.joint_fused_batch_size = self.cfg.get('fused_batch_size', None)
213 self.compute_timestamps = self.cfg.get('compute_timestamps', None)
214 self.word_seperator = self.cfg.get('word_seperator', ' ')
215
216 if self.durations is not None: # this means it's a TDT model.
217 if blank_id == 0:
218 raise ValueError("blank_id must equal len(non_blank_vocabs) for TDT models")
219 if self.big_blank_durations is not None:
220 raise ValueError("duration and big_blank_durations can't both be not None")
221 if self.cfg.strategy not in ['greedy', 'greedy_batch']:
222 raise ValueError("currently only greedy and greedy_batch inference is supported for TDT models")
223
224 if self.big_blank_durations is not None: # this means it's a multi-blank model.
225 if blank_id == 0:
226 raise ValueError("blank_id must equal len(vocabs) for multi-blank RNN-T models")
227 if self.cfg.strategy not in ['greedy', 'greedy_batch']:
228 raise ValueError(
229 "currently only greedy and greedy_batch inference is supported for multi-blank models"
230 )
231
232 possible_strategies = ['greedy', 'greedy_batch', 'beam', 'tsd', 'alsd', 'maes']
233 if self.cfg.strategy not in possible_strategies:
234 raise ValueError(f"Decoding strategy must be one of {possible_strategies}")
235
236 # Update preserve alignments
237 if self.preserve_alignments is None:
238 if self.cfg.strategy in ['greedy', 'greedy_batch']:
239 self.preserve_alignments = self.cfg.greedy.get('preserve_alignments', False)
240
241 elif self.cfg.strategy in ['beam', 'tsd', 'alsd', 'maes']:
242 self.preserve_alignments = self.cfg.beam.get('preserve_alignments', False)
243
244 # Update compute timestamps
245 if self.compute_timestamps is None:
246 if self.cfg.strategy in ['greedy', 'greedy_batch']:
247 self.compute_timestamps = self.cfg.greedy.get('compute_timestamps', False)
248
249 elif self.cfg.strategy in ['beam', 'tsd', 'alsd', 'maes']:
250 self.compute_timestamps = self.cfg.beam.get('compute_timestamps', False)
251
252 # Test if alignments are being preserved for RNNT
253 if self.compute_timestamps is True and self.preserve_alignments is False:
254 raise ValueError("If `compute_timestamps` flag is set, then `preserve_alignments` flag must also be set.")
255
256 # initialize confidence-related fields
257 self._init_confidence(self.cfg.get('confidence_cfg', None))
258
259 # Confidence estimation is not implemented for these strategies
260 if (
261 not self.preserve_frame_confidence
262 and self.cfg.strategy in ['beam', 'tsd', 'alsd', 'maes']
263 and self.cfg.beam.get('preserve_frame_confidence', False)
264 ):
265 raise NotImplementedError(f"Confidence calculation is not supported for strategy `{self.cfg.strategy}`")
266
267 if self.cfg.strategy == 'greedy':
268 if self.big_blank_durations is None:
269 if self.durations is None:
270 self.decoding = greedy_decode.GreedyRNNTInfer(
271 decoder_model=decoder,
272 joint_model=joint,
273 blank_index=self.blank_id,
274 max_symbols_per_step=(
275 self.cfg.greedy.get('max_symbols', None)
276 or self.cfg.greedy.get('max_symbols_per_step', None)
277 ),
278 preserve_alignments=self.preserve_alignments,
279 preserve_frame_confidence=self.preserve_frame_confidence,
280 confidence_method_cfg=self.confidence_method_cfg,
281 )
282 else:
283 self.decoding = greedy_decode.GreedyTDTInfer(
284 decoder_model=decoder,
285 joint_model=joint,
286 blank_index=self.blank_id,
287 durations=self.durations,
288 max_symbols_per_step=(
289 self.cfg.greedy.get('max_symbols', None)
290 or self.cfg.greedy.get('max_symbols_per_step', None)
291 ),
292 preserve_alignments=self.preserve_alignments,
293 preserve_frame_confidence=self.preserve_frame_confidence,
294 confidence_method_cfg=self.confidence_method_cfg,
295 )
296 else:
297 self.decoding = greedy_decode.GreedyMultiblankRNNTInfer(
298 decoder_model=decoder,
299 joint_model=joint,
300 blank_index=self.blank_id,
301 big_blank_durations=self.big_blank_durations,
302 max_symbols_per_step=(
303 self.cfg.greedy.get('max_symbols', None) or self.cfg.greedy.get('max_symbols_per_step', None)
304 ),
305 preserve_alignments=self.preserve_alignments,
306 preserve_frame_confidence=self.preserve_frame_confidence,
307 confidence_method_cfg=self.confidence_method_cfg,
308 )
309
310 elif self.cfg.strategy == 'greedy_batch':
311 if self.big_blank_durations is None:
312 if self.durations is None:
313 self.decoding = greedy_decode.GreedyBatchedRNNTInfer(
314 decoder_model=decoder,
315 joint_model=joint,
316 blank_index=self.blank_id,
317 max_symbols_per_step=(
318 self.cfg.greedy.get('max_symbols', None)
319 or self.cfg.greedy.get('max_symbols_per_step', None)
320 ),
321 preserve_alignments=self.preserve_alignments,
322 preserve_frame_confidence=self.preserve_frame_confidence,
323 confidence_method_cfg=self.confidence_method_cfg,
324 )
325 else:
326 self.decoding = greedy_decode.GreedyBatchedTDTInfer(
327 decoder_model=decoder,
328 joint_model=joint,
329 blank_index=self.blank_id,
330 durations=self.durations,
331 max_symbols_per_step=(
332 self.cfg.greedy.get('max_symbols', None)
333 or self.cfg.greedy.get('max_symbols_per_step', None)
334 ),
335 preserve_alignments=self.preserve_alignments,
336 preserve_frame_confidence=self.preserve_frame_confidence,
337 confidence_method_cfg=self.confidence_method_cfg,
338 )
339
340 else:
341 self.decoding = greedy_decode.GreedyBatchedMultiblankRNNTInfer(
342 decoder_model=decoder,
343 joint_model=joint,
344 blank_index=self.blank_id,
345 big_blank_durations=self.big_blank_durations,
346 max_symbols_per_step=(
347 self.cfg.greedy.get('max_symbols', None) or self.cfg.greedy.get('max_symbols_per_step', None)
348 ),
349 preserve_alignments=self.preserve_alignments,
350 preserve_frame_confidence=self.preserve_frame_confidence,
351 confidence_method_cfg=self.confidence_method_cfg,
352 )
353
354 elif self.cfg.strategy == 'beam':
355
356 self.decoding = beam_decode.BeamRNNTInfer(
357 decoder_model=decoder,
358 joint_model=joint,
359 beam_size=self.cfg.beam.beam_size,
360 return_best_hypothesis=decoding_cfg.beam.get('return_best_hypothesis', True),
361 search_type='default',
362 score_norm=self.cfg.beam.get('score_norm', True),
363 softmax_temperature=self.cfg.beam.get('softmax_temperature', 1.0),
364 preserve_alignments=self.preserve_alignments,
365 )
366
367 elif self.cfg.strategy == 'tsd':
368
369 self.decoding = beam_decode.BeamRNNTInfer(
370 decoder_model=decoder,
371 joint_model=joint,
372 beam_size=self.cfg.beam.beam_size,
373 return_best_hypothesis=decoding_cfg.beam.get('return_best_hypothesis', True),
374 search_type='tsd',
375 score_norm=self.cfg.beam.get('score_norm', True),
376 tsd_max_sym_exp_per_step=self.cfg.beam.get('tsd_max_sym_exp', 10),
377 softmax_temperature=self.cfg.beam.get('softmax_temperature', 1.0),
378 preserve_alignments=self.preserve_alignments,
379 )
380
381 elif self.cfg.strategy == 'alsd':
382
383 self.decoding = beam_decode.BeamRNNTInfer(
384 decoder_model=decoder,
385 joint_model=joint,
386 beam_size=self.cfg.beam.beam_size,
387 return_best_hypothesis=decoding_cfg.beam.get('return_best_hypothesis', True),
388 search_type='alsd',
389 score_norm=self.cfg.beam.get('score_norm', True),
390 alsd_max_target_len=self.cfg.beam.get('alsd_max_target_len', 2),
391 softmax_temperature=self.cfg.beam.get('softmax_temperature', 1.0),
392 preserve_alignments=self.preserve_alignments,
393 )
394
395 elif self.cfg.strategy == 'maes':
396
397 self.decoding = beam_decode.BeamRNNTInfer(
398 decoder_model=decoder,
399 joint_model=joint,
400 beam_size=self.cfg.beam.beam_size,
401 return_best_hypothesis=decoding_cfg.beam.get('return_best_hypothesis', True),
402 search_type='maes',
403 score_norm=self.cfg.beam.get('score_norm', True),
404 maes_num_steps=self.cfg.beam.get('maes_num_steps', 2),
405 maes_prefix_alpha=self.cfg.beam.get('maes_prefix_alpha', 1),
406 maes_expansion_gamma=self.cfg.beam.get('maes_expansion_gamma', 2.3),
407 maes_expansion_beta=self.cfg.beam.get('maes_expansion_beta', 2.0),
408 softmax_temperature=self.cfg.beam.get('softmax_temperature', 1.0),
409 preserve_alignments=self.preserve_alignments,
410 ngram_lm_model=self.cfg.beam.get('ngram_lm_model', None),
411 ngram_lm_alpha=self.cfg.beam.get('ngram_lm_alpha', 0.0),
412 hat_subtract_ilm=self.cfg.beam.get('hat_subtract_ilm', False),
413 hat_ilm_weight=self.cfg.beam.get('hat_ilm_weight', 0.0),
414 )
415
416 else:
417
418 raise ValueError(
419 f"Incorrect decoding strategy supplied. Must be one of {possible_strategies}\n"
420 f"but was provided {self.cfg.strategy}"
421 )
422
423 # Update the joint fused batch size or disable it entirely if needed.
424 self.update_joint_fused_batch_size()
425
426 def rnnt_decoder_predictions_tensor(
427 self,
428 encoder_output: torch.Tensor,
429 encoded_lengths: torch.Tensor,
430 return_hypotheses: bool = False,
431 partial_hypotheses: Optional[List[Hypothesis]] = None,
432 ) -> Tuple[List[str], Optional[List[List[str]]], Optional[Union[Hypothesis, NBestHypotheses]]]:
433 """
434 Decode an encoder output by autoregressive decoding of the Decoder+Joint networks.
435
436 Args:
437 encoder_output: torch.Tensor of shape [B, D, T].
438 encoded_lengths: torch.Tensor containing lengths of the padded encoder outputs. Shape [B].
439 return_hypotheses: bool. If set to True it will return list of Hypothesis or NBestHypotheses
440
441 Returns:
442 If `return_best_hypothesis` is set:
443 A tuple (hypotheses, None):
444 hypotheses - list of Hypothesis (best hypothesis per sample).
445 Look at rnnt_utils.Hypothesis for more information.
446
447 If `return_best_hypothesis` is not set:
448 A tuple(hypotheses, all_hypotheses)
449 hypotheses - list of Hypothesis (best hypothesis per sample).
450 Look at rnnt_utils.Hypothesis for more information.
451 all_hypotheses - list of NBestHypotheses. Each NBestHypotheses further contains a sorted
452 list of all the hypotheses of the model per sample.
453 Look at rnnt_utils.NBestHypotheses for more information.
454 """
455 # Compute hypotheses
456 with torch.inference_mode():
457 hypotheses_list = self.decoding(
458 encoder_output=encoder_output, encoded_lengths=encoded_lengths, partial_hypotheses=partial_hypotheses
459 ) # type: [List[Hypothesis]]
460
461 # extract the hypotheses
462 hypotheses_list = hypotheses_list[0] # type: List[Hypothesis]
463
464 prediction_list = hypotheses_list
465
466 if isinstance(prediction_list[0], NBestHypotheses):
467 hypotheses = []
468 all_hypotheses = []
469
470 for nbest_hyp in prediction_list: # type: NBestHypotheses
471 n_hyps = nbest_hyp.n_best_hypotheses # Extract all hypotheses for this sample
472 decoded_hyps = self.decode_hypothesis(n_hyps) # type: List[str]
473
474 # If computing timestamps
475 if self.compute_timestamps is True:
476 timestamp_type = self.cfg.get('rnnt_timestamp_type', 'all')
477 for hyp_idx in range(len(decoded_hyps)):
478 decoded_hyps[hyp_idx] = self.compute_rnnt_timestamps(decoded_hyps[hyp_idx], timestamp_type)
479
480 hypotheses.append(decoded_hyps[0]) # best hypothesis
481 all_hypotheses.append(decoded_hyps)
482
483 if return_hypotheses:
484 return hypotheses, all_hypotheses
485
486 best_hyp_text = [h.text for h in hypotheses]
487 all_hyp_text = [h.text for hh in all_hypotheses for h in hh]
488 return best_hyp_text, all_hyp_text
489
490 else:
491 hypotheses = self.decode_hypothesis(prediction_list) # type: List[str]
492
493 # If computing timestamps
494 if self.compute_timestamps is True:
495 timestamp_type = self.cfg.get('rnnt_timestamp_type', 'all')
496 for hyp_idx in range(len(hypotheses)):
497 hypotheses[hyp_idx] = self.compute_rnnt_timestamps(hypotheses[hyp_idx], timestamp_type)
498
499 if return_hypotheses:
500 # greedy decoding, can get high-level confidence scores
501 if self.preserve_frame_confidence and (
502 self.preserve_word_confidence or self.preserve_token_confidence
503 ):
504 hypotheses = self.compute_confidence(hypotheses)
505 return hypotheses, None
506
507 best_hyp_text = [h.text for h in hypotheses]
508 return best_hyp_text, None
509
510 def decode_hypothesis(self, hypotheses_list: List[Hypothesis]) -> List[Union[Hypothesis, NBestHypotheses]]:
511 """
512         Decode a list of hypotheses, decoding the text of each Hypothesis in place.
513 
514         Args:
515             hypotheses_list: List of Hypothesis.
516 
517         Returns:
518             The same list of hypotheses, where the `text` field of each Hypothesis holds the decoded string.
519 """
520 for ind in range(len(hypotheses_list)):
521 # Extract the integer encoded hypothesis
522 prediction = hypotheses_list[ind].y_sequence
523
524             if not isinstance(prediction, list):
525 prediction = prediction.tolist()
526
527 # RNN-T sample level is already preprocessed by implicit RNNT decoding
528 # Simply remove any blank and possibly big blank tokens
529 if self.big_blank_durations is not None: # multi-blank RNNT
530 num_extra_outputs = len(self.big_blank_durations)
531 prediction = [p for p in prediction if p < self.blank_id - num_extra_outputs]
532 elif self.durations is not None: # TDT model.
533 prediction = [p for p in prediction if p < self.blank_id]
534 else: # standard RNN-T
535 prediction = [p for p in prediction if p != self.blank_id]
536
537 # De-tokenize the integer tokens; if not computing timestamps
538 if self.compute_timestamps is True:
539 # keep the original predictions, wrap with the number of repetitions per token and alignments
540 # this is done so that `rnnt_decoder_predictions_tensor()` can process this hypothesis
541 # in order to compute exact time stamps.
542 alignments = copy.deepcopy(hypotheses_list[ind].alignments)
543 token_repetitions = [1] * len(alignments) # preserve number of repetitions per token
544 hypothesis = (prediction, alignments, token_repetitions)
545 else:
546 hypothesis = self.decode_tokens_to_str(prediction)
547
548 # TODO: remove
549 # collapse leading spaces before . , ? for PC models
550 hypothesis = re.sub(r'(\s+)([\.\,\?])', r'\2', hypothesis)
551
552 if self.compute_hypothesis_token_set:
553 hypotheses_list[ind].tokens = self.decode_ids_to_tokens(prediction)
554
555 # De-tokenize the integer tokens
556 hypotheses_list[ind].text = hypothesis
557
558 return hypotheses_list
559
560 def compute_confidence(self, hypotheses_list: List[Hypothesis]) -> List[Hypothesis]:
561 """
562 Computes high-level (per-token and/or per-word) confidence scores for a list of hypotheses.
563 Assumes that `frame_confidence` is present in the hypotheses.
564
565 Args:
566 hypotheses_list: List of Hypothesis.
567
568 Returns:
569 A list of hypotheses with high-level confidence scores.
570 """
571 if self.exclude_blank_from_confidence:
572 for hyp in hypotheses_list:
573 hyp.token_confidence = hyp.non_blank_frame_confidence
574 else:
575 for hyp in hypotheses_list:
576 offset = 0
577 token_confidence = []
578 if len(hyp.timestep) > 0:
579 for ts, te in zip(hyp.timestep, hyp.timestep[1:] + [len(hyp.frame_confidence)]):
580 if ts != te:
581 # <blank> tokens are considered to belong to the last non-blank token, if any.
582 token_confidence.append(
583 self._aggregate_confidence(
584 [hyp.frame_confidence[ts][offset]]
585 + [fc[0] for fc in hyp.frame_confidence[ts + 1 : te]]
586 )
587 )
588 offset = 0
589 else:
590 token_confidence.append(hyp.frame_confidence[ts][offset])
591 offset += 1
592 hyp.token_confidence = token_confidence
593 if self.preserve_word_confidence:
594 for hyp in hypotheses_list:
595 hyp.word_confidence = self._aggregate_token_confidence(hyp)
596 return hypotheses_list
597
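The word-level aggregation applied via `_aggregate_token_confidence` above collapses per-token scores using one of the aggregation types named in the confidence config: `mean`, `min`, `max` or `prod`. A minimal standalone sketch of such an aggregation — the helper name `aggregate` is hypothetical and not part of NeMo's API:

```python
from statistics import fmean

def aggregate(values, how="mean"):
    """Collapse a list of per-token confidence scores into one number.

    `how` mirrors the aggregation options named in the confidence config:
    mean, min, max, prod.
    """
    if how == "mean":
        return fmean(values)
    if how == "min":
        return min(values)
    if how == "max":
        return max(values)
    if how == "prod":
        prod = 1.0
        for v in values:
            prod *= v  # product penalizes any single uncertain token
        return prod
    raise ValueError(f"unknown aggregation: {how}")
```

For example, `aggregate([0.5, 0.5], "prod")` yields 0.25, a stricter word score than the mean (0.5) whenever any token is uncertain.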
598 @abstractmethod
599 def decode_tokens_to_str(self, tokens: List[int]) -> str:
600 """
601         Implemented by subclass in order to decode a token id list into a string.
602
603 Args:
604 tokens: List of int representing the token ids.
605
606 Returns:
607 A decoded string.
608 """
609 raise NotImplementedError()
610
611 @abstractmethod
612 def decode_ids_to_tokens(self, tokens: List[int]) -> List[str]:
613 """
614 Implemented by subclass in order to decode a token id list into a token list.
615 A token list is the string representation of each token id.
616
617 Args:
618 tokens: List of int representing the token ids.
619
620 Returns:
621 A list of decoded tokens.
622 """
623 raise NotImplementedError()
624
625 @abstractmethod
626 def decode_tokens_to_lang(self, tokens: List[int]) -> str:
627 """
628 Implemented by subclass in order to
629 compute the most likely language ID (LID) string given the tokens.
630
631 Args:
632 tokens: List of int representing the token ids.
633
634 Returns:
635 A decoded LID string.
636 """
637 raise NotImplementedError()
638
639 @abstractmethod
640 def decode_ids_to_langs(self, tokens: List[int]) -> List[str]:
641 """
642 Implemented by subclass in order to
643 decode a token id list into language ID (LID) list.
644
645 Args:
646 tokens: List of int representing the token ids.
647
648 Returns:
649 A list of decoded LIDS.
650 """
651 raise NotImplementedError()
652
653 def update_joint_fused_batch_size(self):
654 if self.joint_fused_batch_size is None:
655 # do nothing and let the Joint itself handle setting up of the fused batch
656 return
657
658 if not hasattr(self.decoding.joint, 'set_fused_batch_size'):
659 logging.warning(
660 "The joint module does not have `set_fused_batch_size(int)` as a setter function.\n"
661 "Ignoring update of joint fused batch size."
662 )
663 return
664
665 if not hasattr(self.decoding.joint, 'set_fuse_loss_wer'):
666 logging.warning(
667 "The joint module does not have `set_fuse_loss_wer(bool, RNNTLoss, RNNTWER)` "
668 "as a setter function.\n"
669 "Ignoring update of joint fused batch size."
670 )
671 return
672
673 if self.joint_fused_batch_size > 0:
674 self.decoding.joint.set_fused_batch_size(self.joint_fused_batch_size)
675 else:
676 logging.info("Joint fused batch size <= 0; Will temporarily disable fused batch step in the Joint.")
677 self.decoding.joint.set_fuse_loss_wer(False)
678
679 def compute_rnnt_timestamps(self, hypothesis: Hypothesis, timestamp_type: str = "all"):
680 assert timestamp_type in ['char', 'word', 'all']
681
682 # Unpack the temporary storage
683 decoded_prediction, alignments, token_repetitions = hypothesis.text
684
685 # Retrieve offsets
686 char_offsets = word_offsets = None
687 char_offsets = self._compute_offsets(hypothesis, token_repetitions, self.blank_id)
688
689 # finally, set the flattened decoded predictions to text field for later text decoding
690 hypothesis.text = decoded_prediction
691
692 # Assert number of offsets and hypothesis tokens are 1:1 match.
693 num_flattened_tokens = 0
694 for t in range(len(char_offsets)):
695 # Subtract one here for the extra RNNT BLANK token emitted to designate "End of timestep"
696 num_flattened_tokens += len(char_offsets[t]['char']) - 1
697
698 if num_flattened_tokens != len(hypothesis.text):
699 raise ValueError(
700 f"`char_offsets`: {char_offsets} and `processed_tokens`: {hypothesis.text}"
701 " have to be of the same length, but are: "
702 f"`len(offsets)`: {len(char_offsets)} and `len(processed_tokens)`:"
703 f" {len(hypothesis.text)}"
704 )
705
706 encoded_char_offsets = copy.deepcopy(char_offsets)
707
708 # Correctly process the token ids to chars/subwords.
709 for i, offsets in enumerate(char_offsets):
710 decoded_chars = []
711 for char in offsets['char'][:-1]: # ignore the RNNT Blank token at end of every timestep with -1 subset
712 decoded_chars.append(self.decode_tokens_to_str([int(char)]))
713 char_offsets[i]["char"] = decoded_chars
714
715 # detect char vs subword models
716 lens = []
717 for v in char_offsets:
718 tokens = v["char"]
719             # each token may consist of 1 unicode character or multiple unicode characters
720             # for character based models, only 1 character is used per token
721             # for subword models, more than one character can be used per token.
722 # Computing max, then summing up total lens is a test to check for char vs subword
723 # For char models, len(lens) == sum(lens)
724 # but this is violated for subword models.
725 max_len = max(len(c) for c in tokens)
726 lens.append(max_len)
727
728 # array of one or more chars implies subword based model with multiple char emitted per TxU step (via subword)
729 if sum(lens) > len(lens):
730 text_type = 'subword'
731 else:
732 # full array of ones implies character based model with 1 char emitted per TxU step
733 text_type = 'char'
734
735 # retrieve word offsets from character offsets
736 word_offsets = None
737 if timestamp_type in ['word', 'all']:
738 if text_type == 'char':
739 word_offsets = self._get_word_offsets_chars(char_offsets, word_delimiter_char=self.word_seperator)
740 else:
741 # utilize the copy of char offsets with the correct integer ids for tokens
742 # so as to avoid tokenize -> detokenize -> compare -> merge steps.
743 word_offsets = self._get_word_offsets_subwords_sentencepiece(
744 encoded_char_offsets,
745 hypothesis,
746 decode_ids_to_tokens=self.decode_ids_to_tokens,
747 decode_tokens_to_str=self.decode_tokens_to_str,
748 )
749
750 # attach results
751 if len(hypothesis.timestep) > 0:
752 timestep_info = hypothesis.timestep
753 else:
754 timestep_info = []
755
756 # Setup defaults
757 hypothesis.timestep = {"timestep": timestep_info}
758
759 # Add char / subword time stamps
760 if char_offsets is not None and timestamp_type in ['char', 'all']:
761 hypothesis.timestep['char'] = char_offsets
762
763 # Add word time stamps
764 if word_offsets is not None and timestamp_type in ['word', 'all']:
765 hypothesis.timestep['word'] = word_offsets
766
767 # Convert the flattened token indices to text
768 hypothesis.text = self.decode_tokens_to_str(hypothesis.text)
769
770 return hypothesis
771
772 @staticmethod
773 def _compute_offsets(
774 hypothesis: Hypothesis, token_repetitions: List[int], rnnt_token: int
775 ) -> List[Dict[str, Union[str, int]]]:
776 """
777         Utility method that calculates the individual time indices where a token starts and ends.
778
779 Args:
780 hypothesis: A Hypothesis object that contains `text` field that holds the character / subword token
781 emitted at every time step after rnnt collapse.
782 token_repetitions: A list of ints representing the number of repetitions of each emitted token.
783 rnnt_token: The integer of the rnnt blank token used during rnnt collapse.
784
785         Returns:
786             A list of dictionaries, each containing "char", "start_offset" and "end_offset" for one emitted token.
787 """
788 start_index = 0
789
790 # If the exact timestep information is available, utilize the 1st non-rnnt blank token timestep
791 # as the start index.
792 if hypothesis.timestep is not None and len(hypothesis.timestep) > 0:
793 start_index = max(0, hypothesis.timestep[0] - 1)
794
795 # Construct the start and end indices brackets
796 end_indices = np.asarray(token_repetitions).cumsum()
797 start_indices = np.concatenate(([start_index], end_indices[:-1]))
798
799 # Process the TxU dangling alignment tensor, containing pairs of (logits, label)
800 alignment_labels = [al_logits_labels for al_logits_labels in hypothesis.text[1]]
801 for t in range(len(alignment_labels)):
802 for u in range(len(alignment_labels[t])):
803 alignment_labels[t][u] = alignment_labels[t][u][1] # pick label from (logit, label) tuple
804
805 # Merge the results per token into a list of dictionaries
806 offsets = [
807 {"char": a, "start_offset": s, "end_offset": e}
808 for a, s, e in zip(alignment_labels, start_indices, end_indices)
809 ]
810
811 # Filter out RNNT token (blank at [t][0] position). This is because blank can only occur at end of a
812 # time step for RNNT, so if 0th token is blank, then that timestep is skipped.
813 offsets = list(filter(lambda offsets: offsets["char"][0] != rnnt_token, offsets))
814 return offsets
815
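The start/end bracketing computed above can be illustrated in isolation. A minimal sketch — `toy_offset_brackets` is a hypothetical helper, and `itertools.accumulate` stands in for the `numpy.cumsum` used in the actual method:

```python
from itertools import accumulate

def toy_offset_brackets(token_repetitions, start_index=0):
    # Mirrors the bracketing above: end indices are the running sum of
    # per-token repetition counts; each start index is the previous end
    # (with the very first start optionally shifted by a known timestep).
    end_indices = list(accumulate(token_repetitions))
    start_indices = [start_index] + end_indices[:-1]
    return list(zip(start_indices, end_indices))
```

For three tokens occupying 2, 1 and 3 time steps, `toy_offset_brackets([2, 1, 3])` yields `[(0, 2), (2, 3), (3, 6)]` — consecutive half-open time brackets, one per emitted token.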
816 @staticmethod
817 def _get_word_offsets_chars(
818 offsets: Dict[str, Union[str, float]], word_delimiter_char: str = " "
819 ) -> Dict[str, Union[str, float]]:
820 """
821 Utility method which constructs word time stamps out of character time stamps.
822
823 References:
824 This code is a port of the Hugging Face code for word time stamp construction.
825
826 Args:
827 offsets: A list of dictionaries, each containing "char", "start_offset" and "end_offset".
828 word_delimiter_char: Character token that represents the word delimiter. By default, " ".
829
830 Returns:
831 A list of dictionaries containing the word offsets. Each item contains "word", "start_offset" and
832 "end_offset".
833 """
834 word_offsets = []
835
836 last_state = "SPACE"
837 word = ""
838 start_offset = 0
839 end_offset = 0
840 for i, offset in enumerate(offsets):
841 chars = offset["char"]
842 for char in chars:
843 state = "SPACE" if char == word_delimiter_char else "WORD"
844
845 if state == last_state:
846 # If we are in the same state as before, we simply repeat what we've done before
847 end_offset = offset["end_offset"]
848 word += char
849 else:
850 # Switching state
851 if state == "SPACE":
852 # Finishing a word
853 word_offsets.append({"word": word, "start_offset": start_offset, "end_offset": end_offset})
854 else:
855 # Starting a new word
856 start_offset = offset["start_offset"]
857 end_offset = offset["end_offset"]
858 word = char
859
860 last_state = state
861
862 if last_state == "WORD":
863 word_offsets.append({"word": word, "start_offset": start_offset, "end_offset": end_offset})
864
865 return word_offsets
866
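The SPACE/WORD state machine above can be replayed on toy data. Below is a simplified, self-contained replica — `toy_word_offsets` is a hypothetical name, and each toy offset carries its characters as a plain string rather than a decoded list:

```python
def toy_word_offsets(offsets, delimiter=" "):
    # Characters accumulate into a word until a delimiter flips the state;
    # switching WORD -> SPACE commits the finished word with its time span.
    word_offsets = []
    last_state = "SPACE"
    word = ""
    start_offset = 0
    end_offset = 0
    for offset in offsets:
        for char in offset["char"]:
            state = "SPACE" if char == delimiter else "WORD"
            if state == last_state:
                # same state: keep extending the current span
                end_offset = offset["end_offset"]
                word += char
            elif state == "SPACE":
                # finishing a word
                word_offsets.append(
                    {"word": word, "start_offset": start_offset, "end_offset": end_offset}
                )
            else:
                # starting a new word
                start_offset = offset["start_offset"]
                end_offset = offset["end_offset"]
                word = char
            last_state = state
    if last_state == "WORD":
        word_offsets.append(
            {"word": word, "start_offset": start_offset, "end_offset": end_offset}
        )
    return word_offsets
```

Running it on `"hi"` (steps 0-2), `" "` (2-3) and `"yo"` (3-5) produces two word entries, `hi` spanning offsets 0-2 and `yo` spanning 3-5.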
867 @staticmethod
868 def _get_word_offsets_subwords_sentencepiece(
869 offsets: Dict[str, Union[str, float]],
870 hypothesis: Hypothesis,
871 decode_ids_to_tokens: Callable[[List[int]], str],
872 decode_tokens_to_str: Callable[[List[int]], str],
873 ) -> Dict[str, Union[str, float]]:
874 """
875 Utility method which constructs word time stamps out of sub-word time stamps.
876
877         **Note**: Only supports Sentencepiece-based tokenizers!
878
879 Args:
880 offsets: A list of dictionaries, each containing "char", "start_offset" and "end_offset".
881 hypothesis: Hypothesis object that contains `text` field, where each token is a sub-word id
882 after rnnt collapse.
883 decode_ids_to_tokens: A Callable function that accepts a list of integers and maps it to a sub-word.
884 decode_tokens_to_str: A Callable function that accepts a list of integers and maps it to text / str.
885
886 Returns:
887 A list of dictionaries containing the word offsets. Each item contains "word", "start_offset" and
888 "end_offset".
889 """
890 word_offsets = []
891 built_token = []
892 previous_token_index = 0
893 # For every offset token
894 for i, offset in enumerate(offsets):
895 # For every subword token in offset token list (ignoring the RNNT Blank token at the end)
896 for char in offset['char'][:-1]:
897 char = int(char)
898
899 # Compute the sub-word text representation, and the decoded text (stripped of sub-word markers).
900 token = decode_ids_to_tokens([char])[0]
901 token_text = decode_tokens_to_str([char])
902
903 # It is a sub-word token, or contains an identifier at the beginning such as _ or ## that was stripped
904 # after forcing partial text conversion of the token.
905 if token != token_text:
906 # If there are any partially or fully built sub-word token ids, construct to text.
907 # Note: This is "old" subword, that occurs *after* current sub-word has started.
908 if built_token:
909 word_offsets.append(
910 {
911 "word": decode_tokens_to_str(built_token),
912 "start_offset": offsets[previous_token_index]["start_offset"],
913 "end_offset": offsets[i]["start_offset"],
914 }
915 )
916
917 # Prepare list of new sub-word ids
918 built_token.clear()
919 built_token.append(char)
920 previous_token_index = i
921 else:
922 # If the token does not contain any sub-word start mark, then the sub-word has not completed yet
923 # Append to current sub-word list.
924 built_token.append(char)
925
926 # Inject the start offset of the first token to word offsets
927         # This is because we always delay the injection of the first sub-word due to the loop
928         # condition and the check of whether the built token is ready or not.
929         # Therefore, without this forced injection, the start_offset appears off by 1.
930         # This should only be done when both arrays are non-empty.
931 if offsets and word_offsets:
932 word_offsets[0]["start_offset"] = offsets[0]["start_offset"]
933
934 # If there are any remaining tokens left, inject them all into the final word offset.
935 # The start offset of this token is the start time of the next token to process.
936 # The end offset of this token is the end time of the last token from offsets.
937 # Note that built_token is a flat list; but offsets contains a nested list which
938 # may have different dimensionality.
939 # As such, we can't rely on the length of the list of built_token to index offsets.
940 if built_token:
941 # start from the previous token index as this hasn't been committed to word_offsets yet
942 # if we still have content in built_token
943 start_offset = offsets[previous_token_index]["start_offset"]
944 word_offsets.append(
945 {
946 "word": decode_tokens_to_str(built_token),
947 "start_offset": start_offset,
948 "end_offset": offsets[-1]["end_offset"],
949 }
950 )
951 built_token.clear()
952
953 return word_offsets
954
955
956 class RNNTDecoding(AbstractRNNTDecoding):
957 """
958 Used for performing RNN-T auto-regressive decoding of the Decoder+Joint network given the encoder state.
959
960 Args:
961 decoding_cfg: A dict-like object which contains the following key-value pairs.
962 strategy: str value which represents the type of decoding that can occur.
963 Possible values are :
964 - greedy, greedy_batch (for greedy decoding).
965 - beam, tsd, alsd (for beam search decoding).
966
967 compute_hypothesis_token_set: A bool flag, which determines whether to compute a list of decoded
968 tokens as well as the decoded string. Default is False in order to avoid double decoding
969 unless required.
970
971         preserve_alignments: Bool flag which preserves the history of alignments generated during
972             decoding (sample / batched). When set to true, the Hypothesis will contain
973             the non-null value for `alignments` in it. Here, `alignments` is a List of List of
974 Tuple(Tensor (of length V + 1), Tensor(scalar, label after argmax)).
975
976 In order to obtain this hypothesis, please utilize `rnnt_decoder_predictions_tensor` function
977 with the `return_hypotheses` flag set to True.
978
979 The length of the list corresponds to the Acoustic Length (T).
980 Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more targets from a vocabulary.
981 U is the number of target tokens for the current timestep Ti.
982
983 confidence_cfg: A dict-like object which contains the following key-value pairs related to confidence
984 scores. In order to obtain hypotheses with confidence scores, please utilize
985 `rnnt_decoder_predictions_tensor` function with the `preserve_frame_confidence` flag set to True.
986
987 preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores
988 generated during decoding (sample / batched). When set to true, the Hypothesis will contain
989                 the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of List of floats.
990
991 The length of the list corresponds to the Acoustic Length (T).
992 Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more confidence scores.
993 U is the number of target tokens for the current timestep Ti.
994 preserve_token_confidence: Bool flag which preserves the history of per-token confidence scores
995 generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
996 the non-null value for `token_confidence` in it. Here, `token_confidence` is a List of floats.
997
998 The length of the list corresponds to the number of recognized tokens.
999 preserve_word_confidence: Bool flag which preserves the history of per-word confidence scores
1000 generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
1001 the non-null value for `word_confidence` in it. Here, `word_confidence` is a List of floats.
1002
1003 The length of the list corresponds to the number of recognized words.
1004 exclude_blank: Bool flag indicating that blank token confidence scores are to be excluded
1005 from the `token_confidence`.
1006 aggregation: Which aggregation type to use for collapsing per-token confidence into per-word confidence.
1007 Valid options are `mean`, `min`, `max`, `prod`.
1008 method_cfg: A dict-like object which contains the method name and settings to compute per-frame
1009 confidence scores.
1010
1011 name: The method name (str).
1012 Supported values:
1013 - 'max_prob' for using the maximum token probability as a confidence.
1014 - 'entropy' for using a normalized entropy of a log-likelihood vector.
1015
1016 entropy_type: Which type of entropy to use (str).
1017 Used if confidence_method_cfg.name is set to `entropy`.
1018 Supported values:
1019 - 'gibbs' for the (standard) Gibbs entropy. If the alpha (ฮฑ) is provided,
1020 the formula is the following: H_ฮฑ = -sum_i((p^ฮฑ_i)*log(p^ฮฑ_i)).
1021 Note that for this entropy, the alpha should comply the following inequality:
1022 (log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= ฮฑ <= (1+log(V-1))/log(V-1)
1023 where V is the model vocabulary size.
1024 - 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
1025 Tsallis entropy formula is the following: H_ฮฑ = 1/(ฮฑ-1)*(1-sum_i(p^ฮฑ_i)),
1026 where ฮฑ is a parameter. When ฮฑ == 1, it works like the Gibbs entropy.
1027 More: https://en.wikipedia.org/wiki/Tsallis_entropy
1028 - 'renyi' for the Rรฉnyi entropy.
1029 Rรฉnyi entropy formula is the following: H_ฮฑ = 1/(1-ฮฑ)*log_2(sum_i(p^ฮฑ_i)),
1030 where ฮฑ is a parameter. When ฮฑ == 1, it works like the Gibbs entropy.
1031 More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
1032
1033 alpha: Power scale for logsoftmax (ฮฑ for entropies). Here we restrict it to be > 0.
1034 When the alpha equals one, scaling is not applied to 'max_prob',
1035 and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
1036
1037 entropy_norm: A mapping of the entropy value to the interval [0,1].
1038 Supported values:
1039 - 'lin' for using the linear mapping.
1040 - 'exp' for using exponential mapping with linear shift.
1041
1042 The config may further contain the following sub-dictionaries:
1043 "greedy":
1044 max_symbols: int, describing the maximum number of target tokens to decode per
1045 timestep during greedy decoding. Setting to larger values allows longer sentences
1046 to be decoded, at the cost of increased execution time.
1047
1048 preserve_frame_confidence: Same as above, overrides above value.
1049
1050 confidence_method_cfg: Same as above, overrides confidence_cfg.method_cfg.
1051
1052 "beam":
1053 beam_size: int, defining the beam size for beam search. Must be >= 1.
1054                 If beam_size == 1, will perform cached greedy search. This might give slightly different
1055                 results compared to the greedy search above.
1056
1057 score_norm: optional bool, whether to normalize the returned beam score in the hypotheses.
1058 Set to True by default.
1059
1060 return_best_hypothesis: optional bool, whether to return just the best hypothesis or all of the
1061 hypotheses after beam search has concluded. This flag is set by default.
1062
1063 tsd_max_sym_exp: optional int, determines number of symmetric expansions of the target symbols
1064 per timestep of the acoustic model. Larger values will allow longer sentences to be decoded,
1065 at increased cost to execution time.
1066
1067 alsd_max_target_len: optional int or float, determines the potential maximum target sequence length.
1068 If an integer is provided, it can decode sequences of that particular maximum length.
1069 If a float is provided, it can decode sequences of int(alsd_max_target_len * seq_len),
1070 where seq_len is the length of the acoustic model output (T).
1071
1072 NOTE:
1073 If a float is provided, it can be greater than 1!
1074 By default, a float of 2.0 is used so that a target sequence can be at most twice
1075 as long as the acoustic model output length T.
1076
1077 maes_num_steps: Number of adaptive steps to take. From the paper, 2 steps is generally sufficient,
1078 and can be reduced to 1 to improve decoding speed while sacrificing some accuracy. int > 0.
1079
1080             maes_prefix_alpha: Maximum prefix length in prefix search. Must be an integer; it is advised to keep this at 1
1081                 in order to reduce the cost of the expensive beam search later. int >= 0.
1082
1083 maes_expansion_beta: Maximum number of prefix expansions allowed, in addition to the beam size.
1084 Effectively, the number of hypothesis = beam_size + maes_expansion_beta. Must be an int >= 0,
1085 and affects the speed of inference since large values will perform large beam search in the next step.
1086
1087 maes_expansion_gamma: Float pruning threshold used in the prune-by-value step when computing the expansions.
1088 The default (2.3) is selected from the paper. It performs a comparison (max_log_prob - gamma <= log_prob[v])
1089 where v is all vocabulary indices in the Vocab set and max_log_prob is the "most" likely token to be
1090 predicted. Gamma therefore provides a margin of additional tokens which can be potential candidates for
1091 expansion apart from the "most likely" candidate.
1092 Lower values will reduce the number of expansions (by increasing pruning-by-value, thereby improving speed
1093 but hurting accuracy). Higher values will increase the number of expansions (by reducing pruning-by-value,
1094                 thereby reducing speed but potentially improving accuracy). This is a hyperparameter to be experimentally
1095 tuned on a validation set.
1096
1097 softmax_temperature: Scales the logits of the joint prior to computing log_softmax.
1098
1099 decoder: The Decoder/Prediction network module.
1100 joint: The Joint network module.
1101 vocabulary: The vocabulary (excluding the RNNT blank token) which will be used for decoding.
1102 """
1103
1104 def __init__(
1105 self, decoding_cfg, decoder, joint, vocabulary,
1106 ):
1107 # we need to ensure blank is the last token in the vocab for the case of RNNT and Multi-blank RNNT.
1108 blank_id = len(vocabulary) + joint.num_extra_outputs
1109
1110 if hasattr(decoding_cfg, 'model_type') and decoding_cfg.model_type == 'tdt':
1111 blank_id = len(vocabulary)
1112
1113         self.labels_map = {i: vocabulary[i] for i in range(len(vocabulary))}
1114
1115 super(RNNTDecoding, self).__init__(
1116 decoding_cfg=decoding_cfg, decoder=decoder, joint=joint, blank_id=blank_id,
1117 )
1118
1119 if isinstance(self.decoding, beam_decode.BeamRNNTInfer):
1120 self.decoding.set_decoding_type('char')
1121
1122 def _aggregate_token_confidence(self, hypothesis: Hypothesis) -> List[float]:
1123 """
1124 Implemented by subclass in order to aggregate token confidence to a word-level confidence.
1125
1126 Args:
1127 hypothesis: Hypothesis
1128
1129 Returns:
1130 A list of word-level confidence scores.
1131 """
1132 return self._aggregate_token_confidence_chars(hypothesis.words, hypothesis.token_confidence)
1133
1134 def decode_tokens_to_str(self, tokens: List[int]) -> str:
1135 """
1136         Implemented by subclass in order to decode a token list into a string.
1137
1138 Args:
1139 tokens: List of int representing the token ids.
1140
1141 Returns:
1142 A decoded string.
1143 """
1144 hypothesis = ''.join(self.decode_ids_to_tokens(tokens))
1145 return hypothesis
1146
1147 def decode_ids_to_tokens(self, tokens: List[int]) -> List[str]:
1148 """
1149 Implemented by subclass in order to decode a token id list into a token list.
1150 A token list is the string representation of each token id.
1151
1152 Args:
1153 tokens: List of int representing the token ids.
1154
1155 Returns:
1156 A list of decoded tokens.
1157 """
1158 token_list = [self.labels_map[c] for c in tokens if c < self.blank_id - self.num_extra_outputs]
1159 return token_list
1160
1161 def decode_tokens_to_lang(self, tokens: List[int]) -> str:
1162 """
1163 Compute the most likely language ID (LID) string given the tokens.
1164
1165 Args:
1166 tokens: List of int representing the token ids.
1167
1168 Returns:
1169 A decoded LID string.
1170 """
1171 lang = self.tokenizer.ids_to_lang(tokens)
1172 return lang
1173
1174 def decode_ids_to_langs(self, tokens: List[int]) -> List[str]:
1175 """
1176 Decode a token id list into language ID (LID) list.
1177
1178 Args:
1179 tokens: List of int representing the token ids.
1180
1181 Returns:
1182 A list of decoded LIDS.
1183 """
1184 lang_list = self.tokenizer.ids_to_text_and_langs(tokens)
1185 return lang_list
1186
1187
1188 class RNNTWER(Metric):
1189 """
1190 This metric computes numerator and denominator for Overall Word Error Rate (WER) between prediction and reference texts.
1191 When doing distributed training/evaluation the result of res=WER(predictions, targets, target_lengths) calls
1192 will be all-reduced between all workers using SUM operations.
1193     Here, res contains two numbers: res=[wer_numerator, wer_denominator]. WER=wer_numerator/wer_denominator.
1194
1195     If used with a PytorchLightning LightningModule, include wer_numerator and wer_denominator inside validation_step results.
1196     Then aggregate (sum) them at the end of the validation epoch to correctly compute the validation WER.
1197
1198 Example:
1199 def validation_step(self, batch, batch_idx):
1200 ...
1201 wer_num, wer_denom = self.__wer(predictions, transcript, transcript_len)
1202 self.val_outputs = {'val_loss': loss_value, 'val_wer_num': wer_num, 'val_wer_denom': wer_denom}
1203 return self.val_outputs
1204
1205 def on_validation_epoch_end(self):
1206 ...
1207 wer_num = torch.stack([x['val_wer_num'] for x in self.val_outputs]).sum()
1208 wer_denom = torch.stack([x['val_wer_denom'] for x in self.val_outputs]).sum()
1209 tensorboard_logs = {'validation_loss': val_loss_mean, 'validation_avg_wer': wer_num / wer_denom}
1210 self.val_outputs.clear() # free memory
1211 return {'val_loss': val_loss_mean, 'log': tensorboard_logs}
1212
1213 Args:
1214 decoding: RNNTDecoding object that will perform autoregressive decoding of the RNNT model.
1215 batch_dim_index: Index of the batch dimension.
1216         use_cer: Whether to use Character Error Rate instead of Word Error Rate.
1217 log_prediction: Whether to log a single decoded sample per call.
1218
1219 Returns:
1220         res: a tuple of 3 zero-dimensional float32 ``torch.Tensor`` objects: a WER score, the sum of Levenshtein
1221             distances for all prediction-reference pairs, and the total number of words in all references.
1222 """
1223
1224 full_state_update = True
1225
1226 def __init__(
1227 self, decoding: RNNTDecoding, batch_dim_index=0, use_cer=False, log_prediction=True, dist_sync_on_step=False
1228 ):
1229 super(RNNTWER, self).__init__(dist_sync_on_step=dist_sync_on_step)
1230 self.decoding = decoding
1231 self.batch_dim_index = batch_dim_index
1232 self.use_cer = use_cer
1233 self.log_prediction = log_prediction
1234 self.blank_id = self.decoding.blank_id
1235 self.labels_map = self.decoding.labels_map
1236
1237 self.add_state("scores", default=torch.tensor(0), dist_reduce_fx='sum', persistent=False)
1238 self.add_state("words", default=torch.tensor(0), dist_reduce_fx='sum', persistent=False)
1239
1240 def update(
1241 self,
1242 encoder_output: torch.Tensor,
1243 encoded_lengths: torch.Tensor,
1244 targets: torch.Tensor,
1245 target_lengths: torch.Tensor,
1246 ) -> torch.Tensor:
1247 words = 0
1248 scores = 0
1249 references = []
1250 with torch.no_grad():
1251 # prediction_cpu_tensor = tensors[0].long().cpu()
1252 targets_cpu_tensor = targets.long().cpu()
1253 targets_cpu_tensor = move_dimension_to_the_front(targets_cpu_tensor, self.batch_dim_index)
1254 tgt_lenths_cpu_tensor = target_lengths.long().cpu()
1255
1256 # iterate over batch
1257 for ind in range(targets_cpu_tensor.shape[0]):
1258 tgt_len = tgt_lenths_cpu_tensor[ind].item()
1259 target = targets_cpu_tensor[ind][:tgt_len].numpy().tolist()
1260
1261 reference = self.decoding.decode_tokens_to_str(target)
1262 references.append(reference)
1263
1264 hypotheses, _ = self.decoding.rnnt_decoder_predictions_tensor(encoder_output, encoded_lengths)
1265
1266 if self.log_prediction:
1267             logging.info("\n")
1268 logging.info(f"reference :{references[0]}")
1269 logging.info(f"predicted :{hypotheses[0]}")
1270
1271 for h, r in zip(hypotheses, references):
1272 if self.use_cer:
1273 h_list = list(h)
1274 r_list = list(r)
1275 else:
1276 h_list = h.split()
1277 r_list = r.split()
1278 words += len(r_list)
1279 # Compute Levenshtein's distance
1280 scores += editdistance.eval(h_list, r_list)
1281
1282 self.scores += torch.tensor(scores, device=self.scores.device, dtype=self.scores.dtype)
1283 self.words += torch.tensor(words, device=self.words.device, dtype=self.words.dtype)
1284 # return torch.tensor([scores, words]).to(predictions.device)
1285
1286 def compute(self):
1287 wer = self.scores.float() / self.words
1288 return wer, self.scores.detach(), self.words.detach()
1289
1290
1291 @dataclass
1292 class RNNTDecodingConfig:
1293 model_type: str = "rnnt" # one of "rnnt", "multiblank" or "tdt"
1294 strategy: str = "greedy_batch"
1295
1296 compute_hypothesis_token_set: bool = False
1297
1298 # preserve decoding alignments
1299 preserve_alignments: Optional[bool] = None
1300
1301 # confidence config
1302 confidence_cfg: ConfidenceConfig = ConfidenceConfig()
1303
1304 # RNNT Joint fused batch size
1305 fused_batch_size: Optional[int] = None
1306
1307 # compute RNNT time stamps
1308 compute_timestamps: Optional[bool] = None
1309
1310 # compute language IDs
1311 compute_langs: bool = False
1312
1313 # token representing word separator
1314 word_seperator: str = " "
1315
1316 # type of timestamps to calculate
1317 rnnt_timestamp_type: str = "all" # can be char, word or all for both
1318
1319 # greedy decoding config
1320 greedy: greedy_decode.GreedyRNNTInferConfig = greedy_decode.GreedyRNNTInferConfig()
1321
1322 # beam decoding config
1323 beam: beam_decode.BeamRNNTInferConfig = beam_decode.BeamRNNTInferConfig(beam_size=4)
1324
1325 # can be used to change temperature for decoding
1326 temperature: float = 1.0
1327
[end of nemo/collections/asr/metrics/rnnt_wer.py]
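The `RNNTWER` metric above accumulates a running sum of Levenshtein distances (`scores`) and reference word counts (`words`), then divides once in `compute()`. The same accumulation can be sketched in plain Python; this is an illustrative sketch only — `levenshtein` and `accumulate_wer` are hypothetical names, and a pure-Python dynamic-programming edit distance stands in for `editdistance.eval` so the snippet runs without the `editdistance` package.

```python
def levenshtein(a, b):
    # Classic DP edit distance (insert/delete/substitute each cost 1),
    # standing in for editdistance.eval used in the module above.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        cur = [i]
        for j, cb in enumerate(b, start=1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]


def accumulate_wer(hypotheses, references, use_cer=False):
    # Mirrors RNNTWER.update()/compute(): sum distances and reference
    # lengths over the batch, then divide once at the end.
    scores = words = 0
    for h, r in zip(hypotheses, references):
        h_list, r_list = (list(h), list(r)) if use_cer else (h.split(), r.split())
        words += len(r_list)
        scores += levenshtein(h_list, r_list)
    return scores / words if words else float('inf')
```

For example, `accumulate_wer(["a b c"], ["a x c"])` gives one substitution over three reference words, i.e. a WER of 1/3. Summing before dividing is what makes the metric correct under distributed reduction (`dist_reduce_fx='sum'`), as opposed to averaging per-utterance WERs.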
[start of nemo/collections/asr/metrics/wer.py]
1 # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import re
16 from abc import abstractmethod
17 from dataclasses import dataclass, is_dataclass
18 from typing import Callable, Dict, List, Optional, Tuple, Union
19
20 import editdistance
21 import jiwer
22 import numpy as np
23 import torch
24 from omegaconf import DictConfig, OmegaConf
25 from torchmetrics import Metric
26
27 from nemo.collections.asr.parts.submodules import ctc_beam_decoding, ctc_greedy_decoding
28 from nemo.collections.asr.parts.utils.asr_confidence_utils import ConfidenceConfig, ConfidenceMixin
29 from nemo.collections.asr.parts.utils.rnnt_utils import Hypothesis, NBestHypotheses
30 from nemo.utils import logging, logging_mode
31
32 __all__ = ['word_error_rate', 'word_error_rate_detail', 'WER', 'move_dimension_to_the_front']
33
34
35 def word_error_rate(hypotheses: List[str], references: List[str], use_cer=False) -> float:
36 """
37 Computes Average Word Error rate between two texts represented as
38 corresponding lists of string.
39
40 Hypotheses and references must have same length.
41
42 Args:
43 hypotheses (list): list of hypotheses
44 references(list) : list of references
45 use_cer (bool): set True to enable cer
46
47 Returns:
48 wer (float): average word error rate
49 """
50 scores = 0
51 words = 0
52 if len(hypotheses) != len(references):
53 raise ValueError(
54 "In word error rate calculation, hypotheses and reference"
55 " lists must have the same number of elements. But got: "
56 "{0} and {1}, respectively".format(len(hypotheses), len(references))
57 )
58 for h, r in zip(hypotheses, references):
59 if use_cer:
60 h_list = list(h)
61 r_list = list(r)
62 else:
63 h_list = h.split()
64 r_list = r.split()
65 words += len(r_list)
66 # May deprecate using editdistance in future release for here and rest of codebase
67 # once we confirm jiwer is reliable.
68 scores += editdistance.eval(h_list, r_list)
69 if words != 0:
70 wer = 1.0 * scores / words
71 else:
72 wer = float('inf')
73 return wer
74
75
76 def word_error_rate_detail(
77 hypotheses: List[str], references: List[str], use_cer=False
78 ) -> Tuple[float, int, float, float, float]:
79 """
80 Computes Average Word Error Rate with details (insertion rate, deletion rate, substitution rate)
81 between two texts represented as corresponding lists of string.
82
83 Hypotheses and references must have same length.
84
85 Args:
86 hypotheses (list): list of hypotheses
87 references(list) : list of references
88 use_cer (bool): set True to enable cer
89
90 Returns:
91 wer (float): average word error rate
92 words (int): Total number of words/characters of given reference texts
93 ins_rate (float): average insertion error rate
94 del_rate (float): average deletion error rate
95 sub_rate (float): average substitution error rate
96 """
97 scores = 0
98 words = 0
99 ops_count = {'substitutions': 0, 'insertions': 0, 'deletions': 0}
100
101 if len(hypotheses) != len(references):
102 raise ValueError(
103 "In word error rate calculation, hypotheses and reference"
104 " lists must have the same number of elements. But got: "
105 "{0} and {1}, respectively".format(len(hypotheses), len(references))
106 )
107
108 for h, r in zip(hypotheses, references):
109 if use_cer:
110 h_list = list(h)
111 r_list = list(r)
112 else:
113 h_list = h.split()
114 r_list = r.split()
115
116 # To get rid of the issue that jiwer does not allow empty string
117 if len(r_list) == 0:
118 if len(h_list) != 0:
119 errors = len(h_list)
120 ops_count['insertions'] += errors
121 else:
122 errors = 0
123 else:
124 if use_cer:
125 measures = jiwer.cer(r, h, return_dict=True)
126 else:
127 measures = jiwer.compute_measures(r, h)
128
129 errors = measures['insertions'] + measures['deletions'] + measures['substitutions']
130 ops_count['insertions'] += measures['insertions']
131 ops_count['deletions'] += measures['deletions']
132 ops_count['substitutions'] += measures['substitutions']
133
134 scores += errors
135 words += len(r_list)
136
137 if words != 0:
138 wer = 1.0 * scores / words
139 ins_rate = 1.0 * ops_count['insertions'] / words
140 del_rate = 1.0 * ops_count['deletions'] / words
141 sub_rate = 1.0 * ops_count['substitutions'] / words
142 else:
143 wer, ins_rate, del_rate, sub_rate = float('inf'), float('inf'), float('inf'), float('inf')
144
145 return wer, words, ins_rate, del_rate, sub_rate
146
147
148 def word_error_rate_per_utt(hypotheses: List[str], references: List[str], use_cer=False) -> Tuple[List[float], float]:
149 """
150 Computes Word Error Rate per utterance and the average WER
151 between two texts represented as corresponding lists of string.
152
153 Hypotheses and references must have same length.
154
155 Args:
156 hypotheses (list): list of hypotheses
157 references(list) : list of references
158 use_cer (bool): set True to enable cer
159
160 Returns:
161 wer_per_utt (List[float]): word error rate per utterance
162 avg_wer (float): average word error rate
163 """
164 scores = 0
165 words = 0
166 wer_per_utt = []
167
168 if len(hypotheses) != len(references):
169 raise ValueError(
170 "In word error rate calculation, hypotheses and reference"
171 " lists must have the same number of elements. But got: "
172 "{0} and {1}, respectively".format(len(hypotheses), len(references))
173 )
174
175 for h, r in zip(hypotheses, references):
176 if use_cer:
177 h_list = list(h)
178 r_list = list(r)
179 else:
180 h_list = h.split()
181 r_list = r.split()
182
183 # To get rid of the issue that jiwer does not allow empty string
184 if len(r_list) == 0:
185 if len(h_list) != 0:
186 errors = len(h_list)
187 wer_per_utt.append(float('inf'))
188 else:
189 if use_cer:
190 measures = jiwer.cer(r, h, return_dict=True)
191 er = measures['cer']
192 else:
193 measures = jiwer.compute_measures(r, h)
194 er = measures['wer']
195
196 errors = measures['insertions'] + measures['deletions'] + measures['substitutions']
197 wer_per_utt.append(er)
198
199 scores += errors
200 words += len(r_list)
201
202 if words != 0:
203 avg_wer = 1.0 * scores / words
204 else:
205 avg_wer = float('inf')
206
207 return wer_per_utt, avg_wer
208
209
210 def move_dimension_to_the_front(tensor, dim_index):
211 all_dims = list(range(tensor.ndim))
212 return tensor.permute(*([dim_index] + all_dims[:dim_index] + all_dims[dim_index + 1 :]))
213
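`move_dimension_to_the_front` builds a permutation order that pulls one axis to position 0 while keeping the relative order of the remaining axes, then hands it to `tensor.permute`. The index arithmetic can be checked in isolation without torch; `front_permutation` below is a hypothetical helper name for this sketch.

```python
def front_permutation(ndim, dim_index):
    # Same index arithmetic as move_dimension_to_the_front, minus the
    # actual tensor.permute call: target axis first, rest in order.
    all_dims = list(range(ndim))
    return [dim_index] + all_dims[:dim_index] + all_dims[dim_index + 1:]
```

For a 4-D tensor with `dim_index=2` this yields the order `[2, 0, 1, 3]`, and with `dim_index=0` it is the identity — which is why the call is a no-op for the default `batch_dim_index=0`.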
214
215 class AbstractCTCDecoding(ConfidenceMixin):
216 """
217 Used for performing CTC auto-regressive / non-auto-regressive decoding of the logprobs.
218
219 Args:
220 decoding_cfg: A dict-like object which contains the following key-value pairs.
221 strategy: str value which represents the type of decoding that can occur.
222 Possible values are :
223 - greedy (for greedy decoding).
224 - beam (for DeepSpeed KenLM based decoding).
225
226 compute_timestamps: A bool flag, which determines whether to compute the character/subword, or
227 word based timestamp mapping the output log-probabilities to discrete intervals of timestamps.
228 The timestamps will be available in the returned Hypothesis.timestep as a dictionary.
229
230 ctc_timestamp_type: A str value, which represents the types of timestamps that should be calculated.
231 Can take the following values - "char" for character/subword time stamps, "word" for word level
232 time stamps and "all" (default), for both character level and word level time stamps.
233
234 word_seperator: Str token representing the separator between words.
235
236 preserve_alignments: Bool flag which preserves the history of logprobs generated during
237 decoding (sample / batched). When set to true, the Hypothesis will contain
238 the non-null value for `logprobs` in it. Here, `logprobs` is a torch.Tensors.
239
240 confidence_cfg: A dict-like object which contains the following key-value pairs related to confidence
241 scores. In order to obtain hypotheses with confidence scores, please utilize
242 `ctc_decoder_predictions_tensor` function with the `preserve_frame_confidence` flag set to True.
243
244 preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores
245 generated during decoding. When set to true, the Hypothesis will contain
246 the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of floats.
247 preserve_token_confidence: Bool flag which preserves the history of per-token confidence scores
248 generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
249 the non-null value for `token_confidence` in it. Here, `token_confidence` is a List of floats.
250
251 The length of the list corresponds to the number of recognized tokens.
252 preserve_word_confidence: Bool flag which preserves the history of per-word confidence scores
253 generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
254 the non-null value for `word_confidence` in it. Here, `word_confidence` is a List of floats.
255
256 The length of the list corresponds to the number of recognized words.
257 exclude_blank: Bool flag indicating that blank token confidence scores are to be excluded
258 from the `token_confidence`.
259 aggregation: Which aggregation type to use for collapsing per-token confidence into per-word confidence.
260 Valid options are `mean`, `min`, `max`, `prod`.
261 method_cfg: A dict-like object which contains the method name and settings to compute per-frame
262 confidence scores.
263
264 name: The method name (str).
265 Supported values:
266 - 'max_prob' for using the maximum token probability as a confidence.
267 - 'entropy' for using a normalized entropy of a log-likelihood vector.
268
269 entropy_type: Which type of entropy to use (str).
270 Used if confidence_method_cfg.name is set to `entropy`.
271 Supported values:
272 - 'gibbs' for the (standard) Gibbs entropy. If the alpha (ฮฑ) is provided,
273 the formula is the following: H_ฮฑ = -sum_i((p^ฮฑ_i)*log(p^ฮฑ_i)).
274 Note that for this entropy, the alpha should satisfy the following inequality:
275 (log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= ฮฑ <= (1+log(V-1))/log(V-1)
276 where V is the model vocabulary size.
277 - 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
278 Tsallis entropy formula is the following: H_ฮฑ = 1/(ฮฑ-1)*(1-sum_i(p^ฮฑ_i)),
279 where ฮฑ is a parameter. When ฮฑ == 1, it works like the Gibbs entropy.
280 More: https://en.wikipedia.org/wiki/Tsallis_entropy
281 - 'renyi' for the Rรฉnyi entropy.
282 Rรฉnyi entropy formula is the following: H_ฮฑ = 1/(1-ฮฑ)*log_2(sum_i(p^ฮฑ_i)),
283 where ฮฑ is a parameter. When ฮฑ == 1, it works like the Gibbs entropy.
284 More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
285
286 alpha: Power scale for logsoftmax (ฮฑ for entropies). Here we restrict it to be > 0.
287 When the alpha equals one, scaling is not applied to 'max_prob',
288 and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
289
290 entropy_norm: A mapping of the entropy value to the interval [0,1].
291 Supported values:
292 - 'lin' for using the linear mapping.
293 - 'exp' for using exponential mapping with linear shift.
294
295 batch_dim_index: Index of the batch dimension of ``targets`` and ``predictions`` parameters of
296 ``ctc_decoder_predictions_tensor`` methods. Can be either 0 or 1.
297
298 The config may further contain the following sub-dictionaries:
299 "greedy":
300 preserve_alignments: Same as above, overrides above value.
301 compute_timestamps: Same as above, overrides above value.
302 preserve_frame_confidence: Same as above, overrides above value.
303 confidence_method_cfg: Same as above, overrides confidence_cfg.method_cfg.
304
305 "beam":
306 beam_size: int, defining the beam size for beam search. Must be >= 1.
307 If beam_size == 1, will perform cached greedy search. This might be slightly different
308 results compared to the greedy search above.
309
310 return_best_hypothesis: optional bool, whether to return just the best hypothesis or all of the
311 hypotheses after beam search has concluded. This flag is set by default.
312
313 beam_alpha: float, the strength of the Language model on the final score of a token.
314 final_score = acoustic_score + beam_alpha * lm_score + beam_beta * seq_length.
315
316 beam_beta: float, the strength of the sequence length penalty on the final score of a token.
317 final_score = acoustic_score + beam_alpha * lm_score + beam_beta * seq_length.
318
319 kenlm_path: str, path to a KenLM ARPA or .binary file (depending on the strategy chosen).
320 If the path is invalid (file is not found at path), will raise a deferred error at the moment
321 of calculation of beam search, so that users may update / change the decoding strategy
322 to point to the correct file.
323
324 blank_id: The id of the CTC blank token.
325 """
326
327 def __init__(self, decoding_cfg, blank_id: int):
328 super().__init__()
329
330 # Convert dataclass to config
331 if is_dataclass(decoding_cfg):
332 decoding_cfg = OmegaConf.structured(decoding_cfg)
333
334 if not isinstance(decoding_cfg, DictConfig):
335 decoding_cfg = OmegaConf.create(decoding_cfg)
336
337 OmegaConf.set_struct(decoding_cfg, False)
338
339 # update minimal config
340 minimal_cfg = ['greedy']
341 for item in minimal_cfg:
342 if item not in decoding_cfg:
343 decoding_cfg[item] = OmegaConf.create({})
344
345 self.cfg = decoding_cfg
346 self.blank_id = blank_id
347 self.preserve_alignments = self.cfg.get('preserve_alignments', None)
348 self.compute_timestamps = self.cfg.get('compute_timestamps', None)
349 self.batch_dim_index = self.cfg.get('batch_dim_index', 0)
350 self.word_seperator = self.cfg.get('word_seperator', ' ')
351
352 possible_strategies = ['greedy', 'beam', 'pyctcdecode', 'flashlight']
353 if self.cfg.strategy not in possible_strategies:
354 raise ValueError(f"Decoding strategy must be one of {possible_strategies}. Given {self.cfg.strategy}")
355
356 # Update preserve alignments
357 if self.preserve_alignments is None:
358 if self.cfg.strategy in ['greedy']:
359 self.preserve_alignments = self.cfg.greedy.get('preserve_alignments', False)
360 else:
361 self.preserve_alignments = self.cfg.beam.get('preserve_alignments', False)
362
363 # Update compute timestamps
364 if self.compute_timestamps is None:
365 if self.cfg.strategy in ['greedy']:
366 self.compute_timestamps = self.cfg.greedy.get('compute_timestamps', False)
367 elif self.cfg.strategy in ['beam']:
368 self.compute_timestamps = self.cfg.beam.get('compute_timestamps', False)
369
370 # initialize confidence-related fields
371 self._init_confidence(self.cfg.get('confidence_cfg', None))
372
373 # Confidence estimation is not implemented for strategies other than `greedy`
374 if (
375 not self.preserve_frame_confidence
376 and self.cfg.strategy != 'greedy'
377 and self.cfg.beam.get('preserve_frame_confidence', False)
378 ):
379 raise NotImplementedError(f"Confidence calculation is not supported for strategy `{self.cfg.strategy}`")
380
381 # we need timestamps to extract non-blank per-frame confidence
382 if self.compute_timestamps is not None:
383 self.compute_timestamps |= self.preserve_frame_confidence
384
385 if self.cfg.strategy == 'greedy':
386
387 self.decoding = ctc_greedy_decoding.GreedyCTCInfer(
388 blank_id=self.blank_id,
389 preserve_alignments=self.preserve_alignments,
390 compute_timestamps=self.compute_timestamps,
391 preserve_frame_confidence=self.preserve_frame_confidence,
392 confidence_method_cfg=self.confidence_method_cfg,
393 )
394
395 elif self.cfg.strategy == 'beam':
396
397 self.decoding = ctc_beam_decoding.BeamCTCInfer(
398 blank_id=blank_id,
399 beam_size=self.cfg.beam.get('beam_size', 1),
400 search_type='default',
401 return_best_hypothesis=self.cfg.beam.get('return_best_hypothesis', True),
402 preserve_alignments=self.preserve_alignments,
403 compute_timestamps=self.compute_timestamps,
404 beam_alpha=self.cfg.beam.get('beam_alpha', 1.0),
405 beam_beta=self.cfg.beam.get('beam_beta', 0.0),
406 kenlm_path=self.cfg.beam.get('kenlm_path', None),
407 )
408
409 self.decoding.override_fold_consecutive_value = False
410
411 elif self.cfg.strategy == 'pyctcdecode':
412
413 self.decoding = ctc_beam_decoding.BeamCTCInfer(
414 blank_id=blank_id,
415 beam_size=self.cfg.beam.get('beam_size', 1),
416 search_type='pyctcdecode',
417 return_best_hypothesis=self.cfg.beam.get('return_best_hypothesis', True),
418 preserve_alignments=self.preserve_alignments,
419 compute_timestamps=self.compute_timestamps,
420 beam_alpha=self.cfg.beam.get('beam_alpha', 1.0),
421 beam_beta=self.cfg.beam.get('beam_beta', 0.0),
422 kenlm_path=self.cfg.beam.get('kenlm_path', None),
423 pyctcdecode_cfg=self.cfg.beam.get('pyctcdecode_cfg', None),
424 )
425
426 self.decoding.override_fold_consecutive_value = False
427
428 elif self.cfg.strategy == 'flashlight':
429
430 self.decoding = ctc_beam_decoding.BeamCTCInfer(
431 blank_id=blank_id,
432 beam_size=self.cfg.beam.get('beam_size', 1),
433 search_type='flashlight',
434 return_best_hypothesis=self.cfg.beam.get('return_best_hypothesis', True),
435 preserve_alignments=self.preserve_alignments,
436 compute_timestamps=self.compute_timestamps,
437 beam_alpha=self.cfg.beam.get('beam_alpha', 1.0),
438 beam_beta=self.cfg.beam.get('beam_beta', 0.0),
439 kenlm_path=self.cfg.beam.get('kenlm_path', None),
440 flashlight_cfg=self.cfg.beam.get('flashlight_cfg', None),
441 )
442
443 self.decoding.override_fold_consecutive_value = False
444
445 else:
446 raise ValueError(
447 f"Incorrect decoding strategy supplied. Must be one of {possible_strategies}\n"
448 f"but was provided {self.cfg.strategy}"
449 )
450
451 def ctc_decoder_predictions_tensor(
452 self,
453 decoder_outputs: torch.Tensor,
454 decoder_lengths: torch.Tensor = None,
455 fold_consecutive: bool = True,
456 return_hypotheses: bool = False,
457 ) -> Tuple[List[str], Optional[List[List[str]]], Optional[Union[Hypothesis, NBestHypotheses]]]:
458 """
459 Decodes a sequence of labels to words
460
461 Args:
462 decoder_outputs: An integer torch.Tensor of shape [Batch, Time, {Vocabulary}] (if ``batch_index_dim == 0``) or [Time, Batch]
463 (if ``batch_index_dim == 1``) of integer indices that correspond to the index of some character in the
464 label set.
465 decoder_lengths: Optional tensor of length `Batch` which contains the integer lengths
466 of the sequence in the padded `predictions` tensor.
467 fold_consecutive: Bool, determine whether to perform "ctc collapse", folding consecutive tokens
468 into a single token.
469 return_hypotheses: Bool flag whether to return just the decoding predictions of the model
470 or a Hypothesis object that holds information such as the decoded `text`,
471 the `alignment` emitted by the CTC Model, and the `length` of the sequence (if available).
472 May also contain the log-probabilities of the decoder (if this method is called via
473 transcribe())
474
475 Returns:
476 Either a list of str which represent the CTC decoded strings per sample,
477 or a list of Hypothesis objects containing additional information.
478 """
479
480 if isinstance(decoder_outputs, torch.Tensor):
481 decoder_outputs = move_dimension_to_the_front(decoder_outputs, self.batch_dim_index)
482
483 if (
484 hasattr(self.decoding, 'override_fold_consecutive_value')
485 and self.decoding.override_fold_consecutive_value is not None
486 ):
487 logging.info(
488 f"Beam search requires that consecutive ctc tokens are not folded. \n"
489 f"Overriding provided value of `fold_consecutive` = {fold_consecutive} to "
490 f"{self.decoding.override_fold_consecutive_value}",
491 mode=logging_mode.ONCE,
492 )
493 fold_consecutive = self.decoding.override_fold_consecutive_value
494
495 with torch.inference_mode():
496 # Resolve the forward step of the decoding strategy
497 hypotheses_list = self.decoding(
498 decoder_output=decoder_outputs, decoder_lengths=decoder_lengths
499 ) # type: List[List[Hypothesis]]
500
501 # extract the hypotheses
502 hypotheses_list = hypotheses_list[0] # type: List[Hypothesis]
503
504 if isinstance(hypotheses_list[0], NBestHypotheses):
505 hypotheses = []
506 all_hypotheses = []
507
508 for nbest_hyp in hypotheses_list: # type: NBestHypotheses
509 n_hyps = nbest_hyp.n_best_hypotheses # Extract all hypotheses for this sample
510 decoded_hyps = self.decode_hypothesis(
511 n_hyps, fold_consecutive
512 ) # type: List[Union[Hypothesis, NBestHypotheses]]
513
514 # If computing timestamps
515 if self.compute_timestamps is True:
516 timestamp_type = self.cfg.get('ctc_timestamp_type', 'all')
517 for hyp_idx in range(len(decoded_hyps)):
518 decoded_hyps[hyp_idx] = self.compute_ctc_timestamps(decoded_hyps[hyp_idx], timestamp_type)
519
520 hypotheses.append(decoded_hyps[0]) # best hypothesis
521 all_hypotheses.append(decoded_hyps)
522
523 if return_hypotheses:
524 return hypotheses, all_hypotheses
525
526 best_hyp_text = [h.text for h in hypotheses]
527 all_hyp_text = [h.text for hh in all_hypotheses for h in hh]
528 return best_hyp_text, all_hyp_text
529
530 else:
531 hypotheses = self.decode_hypothesis(
532 hypotheses_list, fold_consecutive
533 ) # type: List[Union[Hypothesis, NBestHypotheses]]
534
535 # If computing timestamps
536 if self.compute_timestamps is True:
537 # greedy decoding, can get high-level confidence scores
538 if return_hypotheses and (self.preserve_word_confidence or self.preserve_token_confidence):
539 hypotheses = self.compute_confidence(hypotheses)
540 else:
541 # remove unused token_repetitions from Hypothesis.text
542 for hyp in hypotheses:
543 hyp.text = hyp.text[:2]
544 timestamp_type = self.cfg.get('ctc_timestamp_type', 'all')
545 for hyp_idx in range(len(hypotheses)):
546 hypotheses[hyp_idx] = self.compute_ctc_timestamps(hypotheses[hyp_idx], timestamp_type)
547
548 if return_hypotheses:
549 return hypotheses, None
550
551 best_hyp_text = [h.text for h in hypotheses]
552 return best_hyp_text, None
553
554 def decode_hypothesis(
555 self, hypotheses_list: List[Hypothesis], fold_consecutive: bool
556 ) -> List[Union[Hypothesis, NBestHypotheses]]:
557 """
558 Decode a list of hypotheses into a list of strings.
559
560 Args:
561 hypotheses_list: List of Hypothesis.
562 fold_consecutive: Whether to collapse the ctc blank tokens or not.
563
564 Returns:
565 A list of strings.
566 """
567 for ind in range(len(hypotheses_list)):
568 # Extract the integer encoded hypothesis
569 hyp = hypotheses_list[ind]
570 prediction = hyp.y_sequence
571 predictions_len = hyp.length if hyp.length > 0 else None
572
573 if fold_consecutive:
574 if type(prediction) != list:
575 prediction = prediction.numpy().tolist()
576
577 if predictions_len is not None:
578 prediction = prediction[:predictions_len]
579
580 # CTC decoding procedure
581 decoded_prediction = []
582 token_lengths = [] # preserve token lengths
583 token_repetitions = [] # preserve number of repetitions per token
584
585 previous = self.blank_id
586 last_length = 0
587 last_repetition = 1
588
589 for pidx, p in enumerate(prediction):
590 if (p != previous or previous == self.blank_id) and p != self.blank_id:
591 decoded_prediction.append(p)
592
593 token_lengths.append(pidx - last_length)
594 last_length = pidx
595 token_repetitions.append(last_repetition)
596 last_repetition = 1
597
598 if p == previous and previous != self.blank_id:
599 last_repetition += 1
600
601 previous = p
602
603 if len(token_repetitions) > 0:
604 token_repetitions = token_repetitions[1:] + [last_repetition]
605
606 else:
607 if predictions_len is not None:
608 prediction = prediction[:predictions_len]
609 decoded_prediction = prediction[prediction != self.blank_id].tolist()
610 token_lengths = [1] * len(decoded_prediction) # preserve token lengths (all 1 when not folding)
611 token_repetitions = [1] * len(decoded_prediction) # preserve number of repetitions per token
612
613 # De-tokenize the integer tokens only if not computing timestamps
614 if self.compute_timestamps is True:
615 # keep the original predictions, wrap with the number of repetitions per token
616 # this is done so that `ctc_decoder_predictions_tensor()` can process this hypothesis
617 # in order to compute exact time stamps.
618 hypothesis = (decoded_prediction, token_lengths, token_repetitions)
619 else:
620 hypothesis = self.decode_tokens_to_str(decoded_prediction)
621
622 # TODO: remove
623 # collapse leading spaces before . , ? for PC models (str hypotheses only; it is still a tuple when computing timestamps)
624 hypothesis = re.sub(r'(\s+)([\.\,\?])', r'\2', hypothesis) if isinstance(hypothesis, str) else hypothesis
625
626 # Preserve this wrapped hypothesis or decoded text tokens.
627 hypotheses_list[ind].text = hypothesis
628
629 return hypotheses_list
630
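The `fold_consecutive` branch of `decode_hypothesis` above implements standard CTC collapse: drop blank frames, and fold repeated tokens unless a blank separates them. A minimal sketch of just the collapse rule, without the token-length/repetition bookkeeping (`ctc_collapse` is a hypothetical name for this illustration):

```python
def ctc_collapse(prediction, blank_id):
    # Emit a token when it differs from the previous frame (or the previous
    # frame was blank) and it is not blank itself -- the same condition as
    # the loop in decode_hypothesis.
    decoded = []
    previous = blank_id
    for p in prediction:
        if (p != previous or previous == blank_id) and p != blank_id:
            decoded.append(p)
        previous = p
    return decoded
```

Note the role of the blank: `[1, 1, 0, 2, 2, 0, 1]` with `blank_id=0` collapses to `[1, 2, 1]`, while `[0, 3, 3, 0, 3]` keeps both 3s as separate emissions because the blank breaks the run.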
631 def compute_confidence(self, hypotheses_list: List[Hypothesis]) -> List[Hypothesis]:
632 """
633 Computes high-level (per-token and/or per-word) confidence scores for a list of hypotheses.
634 Assumes that `frame_confidence` is present in the hypotheses.
635
636 Args:
637 hypotheses_list: List of Hypothesis.
638
639 Returns:
640 A list of hypotheses with high-level confidence scores.
641 """
642 for hyp in hypotheses_list:
643 if not isinstance(hyp.text, tuple) or len(hyp.text) != 3:
644 # the method must have been called in the wrong place
645 raise ValueError(
646 """Wrong format of the `text` attribute of a hypothesis.\n
647 Expected: (decoded_prediction, token_lengths, token_repetitions)\n
648 The method invocation is expected between .decode_hypothesis() and .compute_ctc_timestamps()"""
649 )
650 token_repetitions = hyp.text[2]
651 hyp.text = hyp.text[:2]
652 token_confidence = []
653 if self.exclude_blank_from_confidence:
654 non_blank_frame_confidence = hyp.non_blank_frame_confidence
655 i = 0
656 for tr in token_repetitions:
657 # token repetition can be zero
658 j = i + tr
659 token_confidence.append(self._aggregate_confidence(non_blank_frame_confidence[i:j]))
660 i = j
661 else:
662 # <blank> tokens are considered to belong to the last non-blank token, if any.
663 token_lengths = hyp.text[1]
664 if len(token_lengths) > 0:
665 ts = token_lengths[0]
666 for tl in token_lengths[1:] + [len(hyp.frame_confidence)]:
667 token_confidence.append(self._aggregate_confidence(hyp.frame_confidence[ts : ts + tl]))
668 ts += tl
669 hyp.token_confidence = token_confidence
670 if self.preserve_word_confidence:
671 for hyp in hypotheses_list:
672 hyp.word_confidence = self._aggregate_token_confidence(hyp)
673 return hypotheses_list
674
675 @abstractmethod
676 def decode_tokens_to_str(self, tokens: List[int]) -> str:
677 """
678 Implemented by subclass in order to decode a token id list into a string.
679
680 Args:
681 tokens: List of int representing the token ids.
682
683 Returns:
684 A decoded string.
685 """
686 raise NotImplementedError()
687
688 @abstractmethod
689 def decode_ids_to_tokens(self, tokens: List[int]) -> List[str]:
690 """
691 Implemented by subclass in order to decode a token id list into a token list.
692 A token list is the string representation of each token id.
693
694 Args:
695 tokens: List of int representing the token ids.
696
697 Returns:
698 A list of decoded tokens.
699 """
700 raise NotImplementedError()
701
702 def compute_ctc_timestamps(self, hypothesis: Hypothesis, timestamp_type: str = "all"):
703 """
704 Method to compute time stamps at char/subword, and word level given some hypothesis.
705 Requires the input hypothesis to contain a `text` field that is a tuple. The tuple contains -
706 the ctc collapsed integer ids, and the number of repetitions of each token.
707
708 Args:
709 hypothesis: A Hypothesis object, with a wrapped `text` field.
710 The `text` field must contain a tuple with two values -
711 The ctc collapsed integer ids
712 A list of integers that represents the number of repetitions per token.
713 timestamp_type: A str value that represents the type of time stamp calculated.
714 Can be one of "char", "word" or "all"
715
716 Returns:
717 A Hypothesis object with a modified `timestep` value, which is now a dictionary containing
718 the time stamp information.
719 """
720 assert timestamp_type in ['char', 'word', 'all']
721
722 # Unpack the temporary storage, and set the decoded predictions
723 decoded_prediction, token_lengths = hypothesis.text
724 hypothesis.text = decoded_prediction
725
726 # Retrieve offsets
727 char_offsets = word_offsets = None
728 char_offsets = self._compute_offsets(hypothesis, token_lengths, self.blank_id)
729
730 # Assert number of offsets and hypothesis tokens are 1:1 match.
731 if len(char_offsets) != len(hypothesis.text):
732 raise ValueError(
733 f"`char_offsets`: {char_offsets} and `processed_tokens`: {hypothesis.text}"
734 " have to be of the same length, but are: "
735 f"`len(offsets)`: {len(char_offsets)} and `len(processed_tokens)`:"
736 f" {len(hypothesis.text)}"
737 )
738
739 # Correctly process the token ids to chars/subwords.
740 for i, char in enumerate(hypothesis.text):
741 char_offsets[i]["char"] = self.decode_tokens_to_str([char])
742
743 # detect char vs subword models
744 lens = [len(list(v["char"])) > 1 for v in char_offsets]
745 if any(lens):
746 text_type = 'subword'
747 else:
748 text_type = 'char'
749
750 # retrieve word offsets from character offsets
751 word_offsets = None
752 if timestamp_type in ['word', 'all']:
753 if text_type == 'char':
754 word_offsets = self._get_word_offsets_chars(char_offsets, word_delimiter_char=self.word_seperator)
755 else:
756 word_offsets = self._get_word_offsets_subwords_sentencepiece(
757 char_offsets,
758 hypothesis,
759 decode_ids_to_tokens=self.decode_ids_to_tokens,
760 decode_tokens_to_str=self.decode_tokens_to_str,
761 )
762
763 # attach results
764 if len(hypothesis.timestep) > 0:
765 timestep_info = hypothesis.timestep
766 else:
767 timestep_info = []
768
769 # Setup defaults
770 hypothesis.timestep = {"timestep": timestep_info}
771
772 # Add char / subword time stamps
773 if char_offsets is not None and timestamp_type in ['char', 'all']:
774 hypothesis.timestep['char'] = char_offsets
775
776 # Add word time stamps
777 if word_offsets is not None and timestamp_type in ['word', 'all']:
778 hypothesis.timestep['word'] = word_offsets
779
780 # Convert the token indices to text
781 hypothesis.text = self.decode_tokens_to_str(hypothesis.text)
782
783 return hypothesis
784
785 @staticmethod
786 def _compute_offsets(
787 hypothesis: Hypothesis, token_lengths: List[int], ctc_token: int
788 ) -> List[Dict[str, Union[str, int]]]:
789 """
790 Utility method that calculates the individual time indices where a token starts and ends.
791
792 Args:
793 hypothesis: A Hypothesis object that contains `text` field that holds the character / subword token
794 emitted at every time step after ctc collapse.
795 token_lengths: A list of ints representing the lengths of each emitted token.
796 ctc_token: The integer of the ctc blank token used during ctc collapse.
797
798 Returns:
799 A list of dictionaries, each containing "char", "start_offset" and "end_offset".
800 """
801 start_index = 0
802
803 # If the exact timestep information is available, utilize the 1st non-ctc blank token timestep
804 # as the start index.
805 if hypothesis.timestep is not None and len(hypothesis.timestep) > 0:
806 start_index = max(0, hypothesis.timestep[0] - 1)
807
808 # Construct the start and end indices brackets
809 end_indices = np.asarray(token_lengths).cumsum()
810 start_indices = np.concatenate(([start_index], end_indices[:-1]))
811
812 # Merge the results per token into a list of dictionaries
813 offsets = [
814 {"char": t, "start_offset": s, "end_offset": e}
815 for t, s, e in zip(hypothesis.text, start_indices, end_indices)
816 ]
817
818 # Filter out CTC token
819 offsets = list(filter(lambda token: token["char"] != ctc_token, offsets))
820 return offsets
821
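The cumsum bracket construction in `_compute_offsets` can be sketched without numpy; a minimal illustration where `token_lengths` holds hypothetical per-token repetition counts:

```python
from itertools import accumulate

# Hypothetical per-token repetition counts after CTC collapse.
token_lengths = [2, 1, 3]
start_index = 0  # first non-blank timestep (0 when unavailable)

# End offsets are the running sum of token lengths; each token's start
# offset is the previous token's end offset, mirroring the cumsum logic above.
end_indices = list(accumulate(token_lengths))
start_indices = [start_index] + end_indices[:-1]
print(start_indices, end_indices)  # [0, 2, 3] [2, 3, 6]
```

Zipping `start_indices`/`end_indices` with the collapsed tokens yields exactly the offset dictionaries built by the method.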
822 @staticmethod
823 def _get_word_offsets_chars(
824 offsets: Dict[str, Union[str, float]], word_delimiter_char: str = " "
825 ) -> Dict[str, Union[str, float]]:
826 """
827 Utility method which constructs word time stamps out of character time stamps.
828
829 References:
830 This code is a port of the Hugging Face code for word time stamp construction.
831
832 Args:
833 offsets: A list of dictionaries, each containing "char", "start_offset" and "end_offset".
834 word_delimiter_char: Character token that represents the word delimiter. By default, " ".
835
836 Returns:
837 A list of dictionaries containing the word offsets. Each item contains "word", "start_offset" and
838 "end_offset".
839 """
840 word_offsets = []
841
842 last_state = "SPACE"
843 word = ""
844 start_offset = 0
845 end_offset = 0
846 for i, offset in enumerate(offsets):
847 char = offset["char"]
848 state = "SPACE" if char == word_delimiter_char else "WORD"
849
850 if state == last_state:
851 # If we are in the same state as before, we simply repeat what we've done before
852 end_offset = offset["end_offset"]
853 word += char
854 else:
855 # Switching state
856 if state == "SPACE":
857 # Finishing a word
858 word_offsets.append({"word": word, "start_offset": start_offset, "end_offset": end_offset})
859 else:
860 # Starting a new word
861 start_offset = offset["start_offset"]
862 end_offset = offset["end_offset"]
863 word = char
864
865 last_state = state
866 if last_state == "WORD":
867 word_offsets.append({"word": word, "start_offset": start_offset, "end_offset": end_offset})
868
869 return word_offsets
870
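The two-state (SPACE/WORD) machine above can be exercised standalone; a self-contained sketch of the same grouping logic, using made-up character offsets:

```python
from typing import Dict, List, Union

def get_word_offsets_chars(
    offsets: List[Dict[str, Union[str, int]]], word_delimiter_char: str = " "
) -> List[Dict[str, Union[str, int]]]:
    # Walk per-character offsets, merging runs of non-delimiter characters
    # into word-level {word, start_offset, end_offset} entries.
    word_offsets = []
    last_state = "SPACE"
    word = ""
    start_offset = 0
    end_offset = 0
    for offset in offsets:
        char = offset["char"]
        state = "SPACE" if char == word_delimiter_char else "WORD"
        if state == last_state:
            # Same state: extend the current run.
            end_offset = offset["end_offset"]
            word += char
        elif state == "SPACE":
            # WORD -> SPACE transition: a word just finished.
            word_offsets.append({"word": word, "start_offset": start_offset, "end_offset": end_offset})
        else:
            # SPACE -> WORD transition: a new word starts.
            start_offset = offset["start_offset"]
            end_offset = offset["end_offset"]
            word = char
        last_state = state
    if last_state == "WORD":
        word_offsets.append({"word": word, "start_offset": start_offset, "end_offset": end_offset})
    return word_offsets

# Hypothetical char offsets for the transcript "hi yo".
chars = [
    {"char": "h", "start_offset": 0, "end_offset": 1},
    {"char": "i", "start_offset": 1, "end_offset": 2},
    {"char": " ", "start_offset": 2, "end_offset": 3},
    {"char": "y", "start_offset": 3, "end_offset": 4},
    {"char": "o", "start_offset": 4, "end_offset": 5},
]
print(get_word_offsets_chars(chars))
```

This yields `[{'word': 'hi', 'start_offset': 0, 'end_offset': 2}, {'word': 'yo', 'start_offset': 3, 'end_offset': 5}]`, matching the behavior described in the docstring.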
871 @staticmethod
872 def _get_word_offsets_subwords_sentencepiece(
873 offsets: Dict[str, Union[str, float]],
874 hypothesis: Hypothesis,
875 decode_ids_to_tokens: Callable[[List[int]], str],
876 decode_tokens_to_str: Callable[[List[int]], str],
877 ) -> Dict[str, Union[str, float]]:
878 """
879 Utility method which constructs word time stamps out of sub-word time stamps.
880
881 **Note**: Only supports Sentencepiece based tokenizers !
882
883 Args:
884 offsets: A list of dictionaries, each containing "char", "start_offset" and "end_offset".
885 hypothesis: Hypothesis object that contains `text` field, where each token is a sub-word id
886 after ctc collapse.
887 decode_ids_to_tokens: A Callable function that accepts a list of integers and maps it to a sub-word.
888 decode_tokens_to_str: A Callable function that accepts a list of integers and maps it to text / str.
889
890 Returns:
891 A list of dictionaries containing the word offsets. Each item contains "word", "start_offset" and
892 "end_offset".
893 """
894 word_offsets = []
895 built_token = []
896 previous_token_index = 0
897 # For every collapsed sub-word token
898 for i, char in enumerate(hypothesis.text):
899 # Compute the sub-word text representation, and the decoded text (stripped of sub-word markers).
900 token = decode_ids_to_tokens([char])[0]
901 token_text = decode_tokens_to_str([char])
902
903 # It is a sub-word token, or contains an identifier at the beginning such as _ or ## that was stripped
904 # after forcing partial text conversion of the token.
905 if token != token_text:
906 # If there are any partially or fully built sub-word token ids, construct to text.
907 # Note: This is the "old" sub-word, which is only flushed *after* the current sub-word has started.
908 if len(built_token) > 0:
909 word_offsets.append(
910 {
911 "word": decode_tokens_to_str(built_token),
912 "start_offset": offsets[previous_token_index]["start_offset"],
913 "end_offset": offsets[i]["start_offset"],
914 }
915 )
916
917 # Prepare list of new sub-word ids
918 built_token.clear()
919 built_token.append(char)
920 previous_token_index = i
921 else:
922 # If the token does not contain any sub-word start mark, then the sub-word has not completed yet
923 # Append to current sub-word list.
924 built_token.append(char)
925
926 # Inject the start offset of the first token to word offsets
927 # This is because we always delay the injection of the first sub-word due to the loop
928 # condition, which checks whether the built token is ready or not.
929 # Therefore without this forced injection, the start_offset appears as off by 1.
930 if len(word_offsets) == 0:
931 # alaptev: sometimes word_offsets can be empty
932 if len(built_token) > 0:
933 word_offsets.append(
934 {
935 "word": decode_tokens_to_str(built_token),
936 "start_offset": offsets[0]["start_offset"],
937 "end_offset": offsets[-1]["end_offset"],
938 }
939 )
940 built_token.clear()
941 else:
942 word_offsets[0]["start_offset"] = offsets[0]["start_offset"]
943
944 # If there are any remaining tokens left, inject them all into the final word offset.
945 # Note: The start offset of this token is the start time of the first token inside built_token.
946 # Note: The end offset of this token is the end time of the last token inside built_token.
947 if len(built_token) > 0:
948 word_offsets.append(
949 {
950 "word": decode_tokens_to_str(built_token),
951 "start_offset": offsets[-(len(built_token))]["start_offset"],
952 "end_offset": offsets[-1]["end_offset"],
953 }
954 )
955 built_token.clear()
956
957 return word_offsets
958
959 @property
960 def preserve_alignments(self):
961 return self._preserve_alignments
962
963 @preserve_alignments.setter
964 def preserve_alignments(self, value):
965 self._preserve_alignments = value
966
967 if hasattr(self, 'decoding'):
968 self.decoding.preserve_alignments = value
969
970 @property
971 def compute_timestamps(self):
972 return self._compute_timestamps
973
974 @compute_timestamps.setter
975 def compute_timestamps(self, value):
976 self._compute_timestamps = value
977
978 if hasattr(self, 'decoding'):
979 self.decoding.compute_timestamps = value
980
981 @property
982 def preserve_frame_confidence(self):
983 return self._preserve_frame_confidence
984
985 @preserve_frame_confidence.setter
986 def preserve_frame_confidence(self, value):
987 self._preserve_frame_confidence = value
988
989 if hasattr(self, 'decoding'):
990 self.decoding.preserve_frame_confidence = value
991
992
993 class CTCDecoding(AbstractCTCDecoding):
994 """
995 Used for performing CTC auto-regressive / non-auto-regressive decoding of the logprobs for character
996 based models.
997
998 Args:
999 decoding_cfg: A dict-like object which contains the following key-value pairs.
1000 strategy: str value which represents the type of decoding that can occur.
1001 Possible values are :
1002 - greedy (for greedy decoding).
1003 - beam (for DeepSpeed KenLM based decoding).
1004
1005 compute_timestamps: A bool flag, which determines whether to compute the character/subword, or
1006 word based timestamp mapping the output log-probabilities to discrete intervals of timestamps.
1007 The timestamps will be available in the returned Hypothesis.timestep as a dictionary.
1008
1009 ctc_timestamp_type: A str value, which represents the types of timestamps that should be calculated.
1010 Can take the following values - "char" for character/subword time stamps, "word" for word level
1011 time stamps and "all" (default), for both character level and word level time stamps.
1012
1013 word_seperator: Str token representing the separator between words.
1014
1015 preserve_alignments: Bool flag which preserves the history of logprobs generated during
1016 decoding (sample / batched). When set to true, the Hypothesis will contain
1017 the non-null value for `logprobs` in it. Here, `logprobs` is a torch.Tensors.
1018
1019 confidence_cfg: A dict-like object which contains the following key-value pairs related to confidence
1020 scores. In order to obtain hypotheses with confidence scores, please utilize
1021 `ctc_decoder_predictions_tensor` function with the `preserve_frame_confidence` flag set to True.
1022
1023 preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores
1024 generated during decoding. When set to true, the Hypothesis will contain
1025 the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of floats.
1026 preserve_token_confidence: Bool flag which preserves the history of per-token confidence scores
1027 generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
1028 the non-null value for `token_confidence` in it. Here, `token_confidence` is a List of floats.
1029
1030 The length of the list corresponds to the number of recognized tokens.
1031 preserve_word_confidence: Bool flag which preserves the history of per-word confidence scores
1032 generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
1033 the non-null value for `word_confidence` in it. Here, `word_confidence` is a List of floats.
1034
1035 The length of the list corresponds to the number of recognized words.
1036 exclude_blank: Bool flag indicating that blank token confidence scores are to be excluded
1037 from the `token_confidence`.
1038 aggregation: Which aggregation type to use for collapsing per-token confidence into per-word confidence.
1039 Valid options are `mean`, `min`, `max`, `prod`.
1040 method_cfg: A dict-like object which contains the method name and settings to compute per-frame
1041 confidence scores.
1042
1043 name: The method name (str).
1044 Supported values:
1045 - 'max_prob' for using the maximum token probability as a confidence.
1046 - 'entropy' for using a normalized entropy of a log-likelihood vector.
1047
1048 entropy_type: Which type of entropy to use (str).
1049 Used if confidence_method_cfg.name is set to `entropy`.
1050 Supported values:
1051 - 'gibbs' for the (standard) Gibbs entropy. If the alpha (α) is provided,
1052 the formula is the following: H_α = -sum_i((p^α_i)*log(p^α_i)).
1053 Note that for this entropy, the alpha should comply with the following inequality:
1054 (log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= α <= (1+log(V-1))/log(V-1)
1055 where V is the model vocabulary size.
1056 - 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
1057 Tsallis entropy formula is the following: H_α = 1/(α-1)*(1-sum_i(p^α_i)),
1058 where α is a parameter. When α == 1, it works like the Gibbs entropy.
1059 More: https://en.wikipedia.org/wiki/Tsallis_entropy
1060 - 'renyi' for the Rényi entropy.
1061 Rényi entropy formula is the following: H_α = 1/(1-α)*log_2(sum_i(p^α_i)),
1062 where α is a parameter. When α == 1, it works like the Gibbs entropy.
1063 More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
1064
1065 alpha: Power scale for logsoftmax (α for entropies). Here we restrict it to be > 0.
1066 When the alpha equals one, scaling is not applied to 'max_prob',
1067 and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
1068
1069 entropy_norm: A mapping of the entropy value to the interval [0,1].
1070 Supported values:
1071 - 'lin' for using the linear mapping.
1072 - 'exp' for using exponential mapping with linear shift.
1073
1074 batch_dim_index: Index of the batch dimension of ``targets`` and ``predictions`` parameters of
1075 ``ctc_decoder_predictions_tensor`` methods. Can be either 0 or 1.
1076
1077 The config may further contain the following sub-dictionaries:
1078 "greedy":
1079 preserve_alignments: Same as above, overrides above value.
1080 compute_timestamps: Same as above, overrides above value.
1081 preserve_frame_confidence: Same as above, overrides above value.
1082 confidence_method_cfg: Same as above, overrides confidence_cfg.method_cfg.
1083
1084 "beam":
1085 beam_size: int, defining the beam size for beam search. Must be >= 1.
1086 If beam_size == 1, will perform cached greedy search. This might give slightly different
1087 results compared to the greedy search above.
1088
1089 return_best_hypothesis: optional bool, whether to return just the best hypothesis or all of the
1090 hypotheses after beam search has concluded. This flag is set by default.
1091
1092 beam_alpha: float, the strength of the Language model on the final score of a token.
1093 final_score = acoustic_score + beam_alpha * lm_score + beam_beta * seq_length.
1094
1095 beam_beta: float, the strength of the sequence length penalty on the final score of a token.
1096 final_score = acoustic_score + beam_alpha * lm_score + beam_beta * seq_length.
1097
1098 kenlm_path: str, path to a KenLM ARPA or .binary file (depending on the strategy chosen).
1099 If the path is invalid (file is not found at path), will raise a deferred error at the moment
1100 of calculation of beam search, so that users may update / change the decoding strategy
1101 to point to the correct file.
1102
1103 blank_id: The id of the CTC blank token.
1104 """
1105
1106 def __init__(
1107 self, decoding_cfg, vocabulary,
1108 ):
1109 blank_id = len(vocabulary)
1110 self.vocabulary = vocabulary
1111 self.labels_map = {i: vocabulary[i] for i in range(len(vocabulary))}
1112
1113 super().__init__(decoding_cfg=decoding_cfg, blank_id=blank_id)
1114
1115 # Finalize Beam Search Decoding framework
1116 if isinstance(self.decoding, ctc_beam_decoding.AbstractBeamCTCInfer):
1117 self.decoding.set_vocabulary(self.vocabulary)
1118 self.decoding.set_decoding_type('char')
1119
1120 def _aggregate_token_confidence(self, hypothesis: Hypothesis) -> List[float]:
1121 """
1122 Implemented by subclass in order to aggregate token confidence to a word-level confidence.
1123
1124 Args:
1125 hypothesis: Hypothesis
1126
1127 Returns:
1128 A list of word-level confidence scores.
1129 """
1130 return self._aggregate_token_confidence_chars(
1131 self.decode_tokens_to_str(hypothesis.text[0]).split(), hypothesis.token_confidence
1132 )
1133
1134 def decode_tokens_to_str(self, tokens: List[int]) -> str:
1135 """
1136 Implemented by subclass in order to decode a token list into a string.
1137
1138 Args:
1139 tokens: List of int representing the token ids.
1140
1141 Returns:
1142 A decoded string.
1143 """
1144 hypothesis = ''.join(self.decode_ids_to_tokens(tokens))
1145 return hypothesis
1146
1147 def decode_ids_to_tokens(self, tokens: List[int]) -> List[str]:
1148 """
1149 Implemented by subclass in order to decode a token id list into a token list.
1150 A token list is the string representation of each token id.
1151
1152 Args:
1153 tokens: List of int representing the token ids.
1154
1155 Returns:
1156 A list of decoded tokens.
1157 """
1158 token_list = [self.labels_map[c] for c in tokens if c != self.blank_id]
1159 return token_list
1160
1161
1162 class WER(Metric):
1163 """
1164 This metric computes numerator and denominator for Overall Word Error Rate (WER) between prediction and reference
1165 texts. When doing distributed training/evaluation the result of ``res=WER(predictions, targets, target_lengths)``
1166 calls will be all-reduced between all workers using SUM operations. Here ``res`` contains three numbers
1167 ``res=[wer, total_levenshtein_distance, total_number_of_words]``.
1168
1169 If used with PytorchLightning LightningModule, include wer_numerator and wer_denominators inside validation_step
1170 results. Then aggregate (sum) them at the end of the validation epoch to correctly compute validation WER.
1171
1172 Example:
1173 def validation_step(self, batch, batch_idx):
1174 ...
1175 wer_num, wer_denom = self.__wer(predictions, transcript, transcript_len)
1176 self.val_outputs = {'val_loss': loss_value, 'val_wer_num': wer_num, 'val_wer_denom': wer_denom}
1177 return self.val_outputs
1178
1179 def on_validation_epoch_end(self):
1180 ...
1181 wer_num = torch.stack([x['val_wer_num'] for x in self.val_outputs]).sum()
1182 wer_denom = torch.stack([x['val_wer_denom'] for x in self.val_outputs]).sum()
1183 tensorboard_logs = {'validation_loss': val_loss_mean, 'validation_avg_wer': wer_num / wer_denom}
1184 self.val_outputs.clear() # free memory
1185 return {'val_loss': val_loss_mean, 'log': tensorboard_logs}
1186
1187 Args:
1188 decoding: An instance of CTCDecoding.
1189 use_cer: Whether to use Character Error Rate instead of Word Error Rate.
1190 log_prediction: Whether to log a single decoded sample per call.
1191 fold_consecutive: Whether repeated consecutive characters should be folded into one when decoding.
1192
1193 Returns:
1194 res: a tuple of 3 zero-dimensional float32 ``torch.Tensor`` objects: a WER score, a sum of Levenshtein
1195 distances for all prediction - reference pairs, total number of words in all references.
1196 """
1197
1198 full_state_update: bool = True
1199
1200 def __init__(
1201 self,
1202 decoding: CTCDecoding,
1203 use_cer=False,
1204 log_prediction=True,
1205 fold_consecutive=True,
1206 dist_sync_on_step=False,
1207 ):
1208 super().__init__(dist_sync_on_step=dist_sync_on_step)
1209
1210 self.decoding = decoding
1211 self.use_cer = use_cer
1212 self.log_prediction = log_prediction
1213 self.fold_consecutive = fold_consecutive
1214
1215 self.add_state("scores", default=torch.tensor(0), dist_reduce_fx='sum', persistent=False)
1216 self.add_state("words", default=torch.tensor(0), dist_reduce_fx='sum', persistent=False)
1217
1218 def update(
1219 self,
1220 predictions: torch.Tensor,
1221 targets: torch.Tensor,
1222 target_lengths: torch.Tensor,
1223 predictions_lengths: torch.Tensor = None,
1224 ):
1225 """
1226 Updates metric state.
1227 Args:
1228 predictions: an integer torch.Tensor of shape ``[Batch, Time, {Vocabulary}]`` (if ``batch_dim_index == 0``) or
1229 ``[Time, Batch]`` (if ``batch_dim_index == 1``)
1230 targets: an integer torch.Tensor of shape ``[Batch, Time]`` (if ``batch_dim_index == 0``) or
1231 ``[Time, Batch]`` (if ``batch_dim_index == 1``)
1232 target_lengths: an integer torch.Tensor of shape ``[Batch]``
1233 predictions_lengths: an integer torch.Tensor of shape ``[Batch]``
1234 """
1235 words = 0
1236 scores = 0
1237 references = []
1238 with torch.no_grad():
1239 # prediction_cpu_tensor = tensors[0].long().cpu()
1240 targets_cpu_tensor = targets.long().cpu()
1241 tgt_lenths_cpu_tensor = target_lengths.long().cpu()
1242
1243 # iterate over batch
1244 for ind in range(targets_cpu_tensor.shape[0]):
1245 tgt_len = tgt_lenths_cpu_tensor[ind].item()
1246 target = targets_cpu_tensor[ind][:tgt_len].numpy().tolist()
1247 reference = self.decoding.decode_tokens_to_str(target)
1248 references.append(reference)
1249
1250 hypotheses, _ = self.decoding.ctc_decoder_predictions_tensor(
1251 predictions, predictions_lengths, fold_consecutive=self.fold_consecutive
1252 )
1253
1254 if self.log_prediction:
1255 logging.info(f"\n")
1256 logging.info(f"reference:{references[0]}")
1257 logging.info(f"predicted:{hypotheses[0]}")
1258
1259 for h, r in zip(hypotheses, references):
1260 if self.use_cer:
1261 h_list = list(h)
1262 r_list = list(r)
1263 else:
1264 h_list = h.split()
1265 r_list = r.split()
1266 words += len(r_list)
1267 # Compute Levenshtein distance
1268 scores += editdistance.eval(h_list, r_list)
1269
1270 self.scores = torch.tensor(scores, device=self.scores.device, dtype=self.scores.dtype)
1271 self.words = torch.tensor(words, device=self.words.device, dtype=self.words.dtype)
1272 # return torch.tensor([scores, words]).to(predictions.device)
1273
1274 def compute(self):
1275 scores = self.scores.detach().float()
1276 words = self.words.detach().float()
1277 return scores / words, scores, words
1278
1279
1280 @dataclass
1281 class CTCDecodingConfig:
1282 strategy: str = "greedy"
1283
1284 # preserve decoding alignments
1285 preserve_alignments: Optional[bool] = None
1286
1287 # compute ctc time stamps
1288 compute_timestamps: Optional[bool] = None
1289
1290 # token representing word seperator
1291 word_seperator: str = " "
1292
1293 # type of timestamps to calculate
1294 ctc_timestamp_type: str = "all" # can be char, word or all for both
1295
1296 # batch dimension
1297 batch_dim_index: int = 0
1298
1299 # greedy decoding config
1300 greedy: ctc_greedy_decoding.GreedyCTCInferConfig = ctc_greedy_decoding.GreedyCTCInferConfig()
1301
1302 # beam decoding config
1303 beam: ctc_beam_decoding.BeamCTCInferConfig = ctc_beam_decoding.BeamCTCInferConfig(beam_size=4)
1304
1305 # confidence config
1306 confidence_cfg: ConfidenceConfig = ConfidenceConfig()
1307
1308 # can be used to change temperature for decoding
1309 temperature: float = 1.0
1310
[end of nemo/collections/asr/metrics/wer.py]
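The entropy-based confidence formulas documented in `CTCDecoding` (Tsallis and Rényi, before the `[0,1]` normalization step) can be checked numerically. A minimal sketch with a hypothetical 3-token probability vector; the function names are illustrative, not NeMo API:

```python
import math

def tsallis_entropy(p, alpha):
    # H_alpha = 1/(alpha-1) * (1 - sum_i p_i^alpha); Gibbs/Shannon limit as alpha -> 1.
    return (1.0 - sum(pi ** alpha for pi in p)) / (alpha - 1.0)

def renyi_entropy(p, alpha):
    # H_alpha = 1/(1-alpha) * log_2(sum_i p_i^alpha); Gibbs/Shannon limit as alpha -> 1.
    return math.log2(sum(pi ** alpha for pi in p)) / (1.0 - alpha)

p = [0.7, 0.2, 0.1]           # hypothetical per-frame token distribution
print(tsallis_entropy(p, 2.0))  # (1 - 0.54) / 1 ≈ 0.46
print(renyi_entropy(p, 2.0))    # -log2(0.54) ≈ 0.889
```

Both collapse to the Shannon entropy H = -sum_i(p_i*log(p_i)) in the α → 1 limit, which is why the docstring treats α == 1 as the Gibbs case.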
[start of nemo/collections/asr/models/configs/aligner_config.py]
1 # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from dataclasses import dataclass
16
17 from nemo.collections.asr.parts.k2.classes import GraphModuleConfig
18
19
20 @dataclass
21 class AlignerCTCConfig:
22 prob_suppress_index: int = -1
23 prob_suppress_value: float = 1.0
24
25
26 @dataclass
27 class AlignerRNNTConfig:
28 predictor_window_size: int = 0
29 predictor_step_size: int = 1
30
31
32 @dataclass
33 class AlignerWrapperModelConfig:
34 alignment_type: str = "forced"
35 word_output: bool = True
36 cpu_decoding: bool = False
37 decode_batch_size: int = 0
38 ctc_cfg: AlignerCTCConfig = AlignerCTCConfig()
39 rnnt_cfg: AlignerRNNTConfig = AlignerRNNTConfig()
40
41
42 @dataclass
43 class K2AlignerWrapperModelConfig(AlignerWrapperModelConfig):
44 decoder_module_cfg: GraphModuleConfig = GraphModuleConfig()
45
[end of nemo/collections/asr/models/configs/aligner_config.py]
[start of nemo/collections/asr/models/configs/asr_models_config.py]
1 # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from dataclasses import dataclass
16 from typing import Any, Dict, List, Optional
17
18 from omegaconf import MISSING
19
20 import nemo.core.classes.dataset
21 from nemo.collections.asr.metrics.wer import CTCDecodingConfig
22 from nemo.collections.asr.modules.audio_preprocessing import (
23 AudioToMelSpectrogramPreprocessorConfig,
24 SpectrogramAugmentationConfig,
25 )
26 from nemo.collections.asr.modules.conv_asr import ConvASRDecoderConfig, ConvASREncoderConfig
27 from nemo.core.config import modelPT as model_cfg
28
29
30 @dataclass
31 class ASRDatasetConfig(nemo.core.classes.dataset.DatasetConfig):
32 manifest_filepath: Optional[Any] = None
33 sample_rate: int = MISSING
34 labels: List[str] = MISSING
35 trim_silence: bool = False
36
37 # Tarred dataset support
38 is_tarred: bool = False
39 tarred_audio_filepaths: Optional[Any] = None
40 tarred_shard_strategy: str = "scatter"
41 shard_manifests: bool = False
42 shuffle_n: int = 0
43
44 # Optional
45 int_values: Optional[int] = None
46 augmentor: Optional[Dict[str, Any]] = None
47 max_duration: Optional[float] = None
48 min_duration: Optional[float] = None
49 max_utts: int = 0
50 blank_index: int = -1
51 unk_index: int = -1
52 normalize: bool = False
53 trim: bool = True
54 parser: Optional[str] = 'en'
55 eos_id: Optional[int] = None
56 bos_id: Optional[int] = None
57 pad_id: int = 0
58 use_start_end_token: bool = False
59 return_sample_id: Optional[bool] = False
60
61 # bucketing params
62 bucketing_strategy: str = "synced_randomized"
63 bucketing_batch_size: Optional[Any] = None
64 bucketing_weights: Optional[List[int]] = None
65
66
67 @dataclass
68 class EncDecCTCConfig(model_cfg.ModelConfig):
69 # Model global arguments
70 sample_rate: int = 16000
71 repeat: int = 1
72 dropout: float = 0.0
73 separable: bool = False
74 labels: List[str] = MISSING
75
76 # Dataset configs
77 train_ds: ASRDatasetConfig = ASRDatasetConfig(manifest_filepath=None, shuffle=True)
78 validation_ds: ASRDatasetConfig = ASRDatasetConfig(manifest_filepath=None, shuffle=False)
79 test_ds: ASRDatasetConfig = ASRDatasetConfig(manifest_filepath=None, shuffle=False)
80
81 # Optimizer / Scheduler config
82 optim: Optional[model_cfg.OptimConfig] = model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
83
84 # Model component configs
85 preprocessor: AudioToMelSpectrogramPreprocessorConfig = AudioToMelSpectrogramPreprocessorConfig()
86 spec_augment: Optional[SpectrogramAugmentationConfig] = SpectrogramAugmentationConfig()
87 encoder: ConvASREncoderConfig = ConvASREncoderConfig()
88 decoder: ConvASRDecoderConfig = ConvASRDecoderConfig()
89 decoding: CTCDecodingConfig = CTCDecodingConfig()
90
91
92 @dataclass
93 class EncDecCTCModelConfig(model_cfg.NemoConfig):
94 model: EncDecCTCConfig = EncDecCTCConfig()
95
96
97 @dataclass
98 class CacheAwareStreamingConfig:
99 chunk_size: int = 0 # the size of each chunk at each step, it can be a list of two integers to specify different chunk sizes for the first step and others
100 shift_size: int = 0 # the size of the shift in each step, it can be a list of two integers to specify different shift sizes for the first step and others
101
102 cache_drop_size: int = 0 # the number of steps to drop from the cache
103 last_channel_cache_size: int = 0 # the size of the needed cache for last channel layers
104
105 valid_out_len: int = 0 # the number of the steps in the final output which are valid (have the same value as in the offline mode)
106
107 pre_encode_cache_size: int = 0 # the size of the needed cache for the pre-encoding part of the model to avoid caching inside the pre-encoding layers
108 drop_extra_pre_encoded: int = 0 # the number of steps to get dropped after the pre-encoding layer
109
110 last_channel_num: int = 0 # number of the last channel layers (like MHA layers) which need caching in the model
111 last_time_num: int = 0 # number of the last time layers (like convolutions) which need caching in the model
112
[end of nemo/collections/asr/models/configs/asr_models_config.py]
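The config dataclasses above assign dataclass instances directly as field defaults (e.g. `train_ds: ASRDatasetConfig = ASRDatasetConfig(...)`). Python 3.11+ rejects such unhashable class-level defaults, so the portable spelling is `field(default_factory=...)`, which also guarantees each instance gets its own sub-config. A minimal sketch with hypothetical stand-in configs (the names mirror `model_cfg.OptimConfig`/`SchedConfig` but the field values are invented):

```python
from dataclasses import dataclass, field

@dataclass
class SchedConfig:
    name: str = "CosineAnnealing"  # hypothetical default

@dataclass
class OptimConfig:
    lr: float = 1e-3  # hypothetical default
    # default_factory builds a fresh SchedConfig per instance, so two
    # OptimConfig objects never share one mutable sub-config.
    sched: SchedConfig = field(default_factory=SchedConfig)

a, b = OptimConfig(), OptimConfig()
a.sched.name = "Noam"
print(b.sched.name)  # prints "CosineAnnealing" — b is unaffected by mutating a
```

With a shared instance default, mutating `a.sched` would silently leak into `b`; `default_factory` avoids that class of bug entirely.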
[start of nemo/collections/asr/models/configs/classification_models_config.py]
1 # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from dataclasses import dataclass
16 from typing import Any, Dict, List, Optional
17
18 from omegaconf import MISSING
19
20 import nemo.core.classes.dataset
21 from nemo.collections.asr.modules.audio_preprocessing import (
22 AudioToMFCCPreprocessorConfig,
23 CropOrPadSpectrogramAugmentationConfig,
24 SpectrogramAugmentationConfig,
25 )
26 from nemo.collections.asr.modules.conv_asr import ConvASRDecoderClassificationConfig, ConvASREncoderConfig
27 from nemo.core.config import modelPT as model_cfg
28
29
30 @dataclass
31 class EncDecClassificationDatasetConfig(nemo.core.classes.dataset.DatasetConfig):
32 manifest_filepath: Optional[str] = None
33 sample_rate: int = MISSING
34 labels: List[str] = MISSING
35 trim_silence: bool = False
36
37 # Tarred dataset support
38 is_tarred: bool = False
39 tarred_audio_filepaths: Optional[str] = None
40 tarred_shard_strategy: str = "scatter"
41 shuffle_n: int = 0
42
43 # Optional
44 int_values: Optional[int] = None
45 augmentor: Optional[Dict[str, Any]] = None
46 max_duration: Optional[float] = None
47 min_duration: Optional[float] = None
48 cal_labels_occurrence: Optional[bool] = False
49
50 # VAD Optional
51 vad_stream: Optional[bool] = None
52 window_length_in_sec: float = 0.31
53 shift_length_in_sec: float = 0.01
54 normalize_audio: bool = False
55 is_regression_task: bool = False
56
57 # bucketing params
58 bucketing_strategy: str = "synced_randomized"
59 bucketing_batch_size: Optional[Any] = None
60 bucketing_weights: Optional[List[int]] = None
61
62
63 @dataclass
64 class EncDecClassificationConfig(model_cfg.ModelConfig):
65 # Model global arguments
66 sample_rate: int = 16000
67 repeat: int = 1
68 dropout: float = 0.0
69 separable: bool = True
70 kernel_size_factor: float = 1.0
71 labels: List[str] = MISSING
72 timesteps: int = MISSING
73
74 # Dataset configs
75 train_ds: EncDecClassificationDatasetConfig = EncDecClassificationDatasetConfig(
76 manifest_filepath=None, shuffle=True, trim_silence=False
77 )
78 validation_ds: EncDecClassificationDatasetConfig = EncDecClassificationDatasetConfig(
79 manifest_filepath=None, shuffle=False
80 )
81 test_ds: EncDecClassificationDatasetConfig = EncDecClassificationDatasetConfig(
82 manifest_filepath=None, shuffle=False
83 )
84
85 # Optimizer / Scheduler config
86 optim: Optional[model_cfg.OptimConfig] = model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
87
88 # Model component configs
89 preprocessor: AudioToMFCCPreprocessorConfig = AudioToMFCCPreprocessorConfig()
90 spec_augment: Optional[SpectrogramAugmentationConfig] = SpectrogramAugmentationConfig()
91 crop_or_pad_augment: Optional[CropOrPadSpectrogramAugmentationConfig] = CropOrPadSpectrogramAugmentationConfig(
92 audio_length=timesteps
93 )
94
95 encoder: ConvASREncoderConfig = ConvASREncoderConfig()
96 decoder: ConvASRDecoderClassificationConfig = ConvASRDecoderClassificationConfig()
97
98
99 @dataclass
100 class EncDecClassificationModelConfig(model_cfg.NemoConfig):
101 model: EncDecClassificationConfig = EncDecClassificationConfig()
102
[end of nemo/collections/asr/models/configs/classification_models_config.py]
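The config classes above use dataclass *instances* as field defaults (e.g. `train_ds: EncDecClassificationDatasetConfig = EncDecClassificationDatasetConfig(...)`), a pattern OmegaConf structured configs accept. Plain Python (3.11+) rejects mutable dataclass defaults, so the portable stdlib idiom is `field(default_factory=...)`. A minimal sketch with hypothetical stand-in configs (`DatasetConfig`, `ModelConfig` are not NeMo classes):

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class DatasetConfig:
    # Hypothetical stand-in for EncDecClassificationDatasetConfig.
    manifest_filepath: Optional[str] = None
    shuffle: bool = False


@dataclass
class ModelConfig:
    # default_factory builds a fresh nested config per instance,
    # so instances never share mutable state (required on Python 3.11+).
    train_ds: DatasetConfig = field(default_factory=lambda: DatasetConfig(shuffle=True))
    validation_ds: DatasetConfig = field(default_factory=DatasetConfig)


a = ModelConfig()
b = ModelConfig()
a.train_ds.manifest_filepath = "train.json"
print(b.train_ds.manifest_filepath)  # None: b got its own nested DatasetConfig
```

The same separation holds for every nested field, which is why mutating one model config's dataset settings never leaks into another's.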
[start of nemo/collections/asr/models/configs/diarizer_config.py]
1 # Copyright (c) 2023, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from dataclasses import asdict, dataclass
16 from typing import Any, Dict, Optional, Tuple, Union
17
18
19 @dataclass
20 class DiarizerComponentConfig:
21 """Dataclass to imitate HydraConfig dict when accessing parameters."""
22
23 def get(self, name: str, default: Optional[Any] = None):
24 return getattr(self, name, default)
25
26 def __iter__(self):
27 for key in asdict(self):
28 yield key
29
30 def dict(self) -> Dict:
31 return asdict(self)
32
33
34 @dataclass
35 class ASRDiarizerCTCDecoderParams:
36 pretrained_language_model: Optional[str] = None # KenLM model file: .arpa model file or .bin binary file.
37 beam_width: int = 32
38 alpha: float = 0.5
39 beta: float = 2.5
40
41
42 @dataclass
43 class ASRRealigningLMParams:
44 # Provide a KenLM language model in .arpa format.
45 arpa_language_model: Optional[str] = None
46 # Min number of words for the left context.
47 min_number_of_words: int = 3
48 # Max number of words for the right context.
49 max_number_of_words: int = 10
50 # The threshold for the difference between two log probability values from two hypotheses.
51 logprob_diff_threshold: float = 1.2
52
53
54 @dataclass
55 class ASRDiarizerParams(DiarizerComponentConfig):
56 # If True, speech segmentation for diarization is based on word timestamps from ASR inference.
57 asr_based_vad: bool = False
58 # Threshold (in sec) that caps the gap between two words when generating VAD timestamps using ASR based VAD.
59 asr_based_vad_threshold: float = 1.0
60 # Batch size can be dependent on each ASR model. Default batch sizes are applied if set to null.
61 asr_batch_size: Optional[int] = None
62 # Native decoder delay. Use null to apply the default value for each ASR model.
63 decoder_delay_in_sec: Optional[float] = None
64 # Offset to set a reference point from the start of the word. Recommended range of values is [-0.05, 0.2].
65 word_ts_anchor_offset: Optional[float] = None
66 # Select which part of the word timestamp we want to use. The options are: 'start', 'end', 'mid'.
67 word_ts_anchor_pos: str = "start"
68 # Fix the word timestamp using VAD output. You must provide a VAD model to use this feature.
69 fix_word_ts_with_VAD: bool = False
70 # If True, use colored text to distinguish speakers in the output transcript.
71 colored_text: bool = False
72 # If True, the start and end time of each speaker turn is printed in the output transcript.
73 print_time: bool = True
74 # If True, the output transcript breaks lines to keep the line width fixed (default is 90 chars)
75 break_lines: bool = False
76
77
78 @dataclass
79 class ASRDiarizerConfig(DiarizerComponentConfig):
80 model_path: Optional[str] = "stt_en_conformer_ctc_large"
81 parameters: ASRDiarizerParams = ASRDiarizerParams()
82 ctc_decoder_parameters: ASRDiarizerCTCDecoderParams = ASRDiarizerCTCDecoderParams()
83 realigning_lm_parameters: ASRRealigningLMParams = ASRRealigningLMParams()
84
85
86 @dataclass
87 class VADParams(DiarizerComponentConfig):
88 window_length_in_sec: float = 0.15 # Window length in sec for VAD context input
90 shift_length_in_sec: float = 0.01  # Shift length in sec for generating frame-level VAD predictions
90 smoothing: Union[str, bool] = "median" # False or type of smoothing method (eg: median)
91 overlap: float = 0.5 # Overlap ratio for overlapped mean/median smoothing filter
92 onset: float = 0.1  # Onset threshold for detecting the beginning and end of a speech segment
93 offset: float = 0.1  # Offset threshold for detecting the end of a speech segment
94 pad_onset: float = 0.1  # Duration added before each speech segment
95 pad_offset: float = 0  # Duration added after each speech segment
96 min_duration_on: float = 0 # Threshold for small non_speech deletion
97 min_duration_off: float = 0.2 # Threshold for short speech segment deletion
98 filter_speech_first: bool = True
99
100
101 @dataclass
102 class VADConfig(DiarizerComponentConfig):
103 model_path: str = "vad_multilingual_marblenet" # .nemo local model path or pretrained VAD model name
104 external_vad_manifest: Optional[str] = None
105 parameters: VADParams = VADParams()
106
107
108 @dataclass
109 class SpeakerEmbeddingsParams(DiarizerComponentConfig):
110 # Window length(s) in sec (floating-point number). Either a number or a list, e.g. 1.5 or [1.5, 1.0, 0.5]
111 window_length_in_sec: Tuple[float] = (1.5, 1.25, 1.0, 0.75, 0.5)
112 # Shift length(s) in sec (floating-point number). Either a number or a list, e.g. 0.75 or [0.75, 0.5, 0.25]
113 shift_length_in_sec: Tuple[float] = (0.75, 0.625, 0.5, 0.375, 0.25)
114 # Weight for each scale. None (for single scale) or a list with one weight per window/shift scale, e.g. [0.33, 0.33, 0.33]
115 multiscale_weights: Tuple[float] = (1, 1, 1, 1, 1)
116 # Save speaker embeddings in pickle format. Set True if the clustering result is used by other models, such as MSDD.
117 save_embeddings: bool = True
118
119
120 @dataclass
121 class SpeakerEmbeddingsConfig(DiarizerComponentConfig):
122 # .nemo local model path or pretrained model name (titanet_large, ecapa_tdnn or speakerverification_speakernet)
123 model_path: Optional[str] = None
124 parameters: SpeakerEmbeddingsParams = SpeakerEmbeddingsParams()
125
126
127 @dataclass
128 class ClusteringParams(DiarizerComponentConfig):
129 # If True, use the number of speakers provided in the manifest file.
130 oracle_num_speakers: bool = False
131 # Max number of speakers for each recording. If an oracle number of speakers is passed, this value is ignored.
132 max_num_speakers: int = 8
133 # If the number of segments is lower than this number, enhanced speaker counting is activated.
134 enhanced_count_thres: int = 80
135 # Determines the range of p-value search: 0 < p <= max_rp_threshold.
136 max_rp_threshold: float = 0.25
138 # The higher the number, the more values are examined, at the cost of more computation time.
138 sparse_search_volume: int = 30
139 # If True, take a majority vote on multiple p-values to estimate the number of speakers.
140 maj_vote_spk_count: bool = False
141
142
143 @dataclass
144 class ClusteringConfig(DiarizerComponentConfig):
145 parameters: ClusteringParams = ClusteringParams()
146
147
148 @dataclass
149 class MSDDParams(DiarizerComponentConfig):
150 # If True, use the speaker embedding model from the checkpoint; otherwise, use the speaker embedding model provided in the config.
151 use_speaker_model_from_ckpt: bool = True
152 # Batch size for MSDD inference.
153 infer_batch_size: int = 25
154 # Sigmoid threshold for generating binarized speaker labels. The smaller the value, the more generously overlaps are detected.
155 sigmoid_threshold: Tuple[float] = (0.7,)
156 # If True, use oracle number of speaker and evaluate F1 score for the given speaker sequences. Default is False.
157 seq_eval_mode: bool = False
158 # If True, break the input audio clip into short sequences and calculate cluster-average embeddings for inference.
159 split_infer: bool = True
160 # The length of split short sequence when split_infer is True.
161 diar_window_length: int = 50
162 # If the estimated number of speakers is larger than this number, overlap speech is not estimated.
163 overlap_infer_spk_limit: int = 5
164
165
166 @dataclass
167 class MSDDConfig(DiarizerComponentConfig):
168 model_path: Optional[str] = "diar_msdd_telephonic"
169 parameters: MSDDParams = MSDDParams()
170
171
172 @dataclass
173 class DiarizerConfig(DiarizerComponentConfig):
174 manifest_filepath: Optional[str] = None
175 out_dir: Optional[str] = None
176 oracle_vad: bool = False # If True, uses RTTM files provided in the manifest file to get VAD timestamps
177 collar: float = 0.25 # Collar value for scoring
178 ignore_overlap: bool = True # Consider or ignore overlap segments while scoring
179 vad: VADConfig = VADConfig()
180 speaker_embeddings: SpeakerEmbeddingsConfig = SpeakerEmbeddingsConfig()
181 clustering: ClusteringConfig = ClusteringConfig()
182 msdd_model: MSDDConfig = MSDDConfig()
183 asr: ASRDiarizerConfig = ASRDiarizerConfig()
184
185
186 @dataclass
187 class NeuralDiarizerInferenceConfig(DiarizerComponentConfig):
188 diarizer: DiarizerConfig = DiarizerConfig()
189 device: str = "cpu"
190 verbose: bool = False
191 batch_size: int = 64
192 num_workers: int = 1
193 sample_rate: int = 16000
194 name: str = ""
195
196 @classmethod
197 def init_config(cls, diar_model_path: str, vad_model_path: str, map_location: str, verbose: bool):
198 return NeuralDiarizerInferenceConfig(
199 DiarizerConfig(
200 vad=VADConfig(model_path=vad_model_path), msdd_model=MSDDConfig(model_path=diar_model_path),
201 ),
202 device=map_location,
203 verbose=verbose,
204 )
205
[end of nemo/collections/asr/models/configs/diarizer_config.py]
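`DiarizerComponentConfig` gives every diarizer config dataclass dict-like access (`get`, iteration over keys, `dict()`), so code written against a Hydra `DictConfig` also works on these dataclasses. A self-contained sketch of the same three helpers, using a hypothetical `VadParams` stand-in rather than the real `VADParams`:

```python
from dataclasses import asdict, dataclass
from typing import Any, Optional


@dataclass
class ComponentConfig:
    """Stdlib re-creation of the dict-like helpers on DiarizerComponentConfig."""

    def get(self, name: str, default: Optional[Any] = None):
        # Attribute lookup with a fallback, mirroring dict.get.
        return getattr(self, name, default)

    def __iter__(self):
        # Iterating a config yields its field names, like dict keys.
        for key in asdict(self):
            yield key

    def dict(self):
        return asdict(self)


@dataclass
class VadParams(ComponentConfig):  # hypothetical stand-in for VADParams
    onset: float = 0.1
    offset: float = 0.1


params = VadParams()
print(params.get("onset"))        # 0.1
print(params.get("missing", -1))  # -1, falls back like dict.get
print(list(params))               # ['onset', 'offset']
print(params.dict())              # {'onset': 0.1, 'offset': 0.1}
```

This is why downstream code can call `cfg.vad.parameters.get("onset")` without caring whether it received a Hydra dict or one of these dataclasses.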
[start of nemo/collections/asr/models/configs/k2_sequence_models_config.py]
1 # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from dataclasses import dataclass
16
17 from nemo.collections.asr.models.configs.asr_models_config import EncDecCTCConfig
18 from nemo.collections.asr.parts.k2.classes import GraphModuleConfig as BackendConfig
19 from nemo.core.config.modelPT import NemoConfig
20
21
22 @dataclass
23 class GraphModuleConfig:
24 criterion_type: str = "ml"
25 loss_type: str = "ctc"
26 split_batch_size: int = 0
27 dec_type: str = "topo"
28 transcribe_training: bool = True
29 backend_cfg: BackendConfig = BackendConfig()
30
31
32 @dataclass
33 class EncDecK2SeqConfig(EncDecCTCConfig):
34 graph_module_cfg: GraphModuleConfig = GraphModuleConfig()
35
36
37 @dataclass
38 class EncDecK2SeqModelConfig(NemoConfig):
39 model: EncDecK2SeqConfig = EncDecK2SeqConfig()
40
[end of nemo/collections/asr/models/configs/k2_sequence_models_config.py]
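`EncDecK2SeqConfig` extends `EncDecCTCConfig` by inheritance, adding only the `graph_module_cfg` field while keeping every inherited default. A minimal sketch of that extension pattern with hypothetical stand-ins (`BaseCfg`, `GraphCfg`, `K2Cfg` are illustrative, not NeMo classes):

```python
from dataclasses import dataclass, field


@dataclass
class BaseCfg:
    # Hypothetical stand-in for EncDecCTCConfig.
    sample_rate: int = 16000


@dataclass
class GraphCfg:
    # Hypothetical stand-in for GraphModuleConfig.
    criterion_type: str = "ml"
    loss_type: str = "ctc"


@dataclass
class K2Cfg(BaseCfg):
    # The subclass adds one nested config field; all BaseCfg fields
    # and defaults are inherited unchanged.
    graph_module_cfg: GraphCfg = field(default_factory=GraphCfg)


cfg = K2Cfg()
print(cfg.sample_rate)                 # 16000, inherited default
print(cfg.graph_module_cfg.loss_type)  # ctc
```

Since dataclass inheritance appends new fields after the base class's, the subclass stays a drop-in replacement wherever the base config is expected.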
[start of nemo/collections/asr/models/configs/matchboxnet_config.py]
1 # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from dataclasses import dataclass, field
16 from typing import Any, Callable, List, Optional
17
18 from omegaconf import MISSING
19
20 from nemo.collections.asr.models.configs import classification_models_config as clf_cfg
21 from nemo.collections.asr.modules.audio_preprocessing import (
22 AudioToMFCCPreprocessorConfig,
23 CropOrPadSpectrogramAugmentationConfig,
24 SpectrogramAugmentationConfig,
25 )
26 from nemo.collections.asr.modules.conv_asr import (
27 ConvASRDecoderClassificationConfig,
28 ConvASREncoderConfig,
29 JasperEncoderConfig,
30 )
31 from nemo.core.config import modelPT as model_cfg
32
33
34 # fmt: off
35 def matchboxnet_3x1x64():
36 config = [
37 JasperEncoderConfig(filters=128, repeat=1, kernel=[11], stride=[1], dilation=[1], dropout=0.0,
38 residual=False, groups=1, separable=True, heads=-1, residual_mode='add',
39 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
40 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
41 JasperEncoderConfig(filters=64, repeat=1, kernel=[13], stride=[1], dilation=[1], dropout=0.0,
42 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
43 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
44 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
45 JasperEncoderConfig(filters=64, repeat=1, kernel=[15], stride=[1], dilation=[1], dropout=0.0,
46 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
47 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
48 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
49 JasperEncoderConfig(filters=64, repeat=1, kernel=[17], stride=[1], dilation=[1], dropout=0.0,
50 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
51 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
52 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
53 JasperEncoderConfig(filters=128, repeat=1, kernel=[29], stride=[1], dilation=[2], dropout=0.0,
54 residual=False, groups=1, separable=True, heads=-1, residual_mode='add',
55 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
56 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
57 JasperEncoderConfig(filters=128, repeat=1, kernel=[1], stride=[1], dilation=[1], dropout=0.0,
58 residual=False, groups=1, separable=False, heads=-1, residual_mode='add',
59 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
60 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False)
61 ]
62 return config
63
64
65 def matchboxnet_3x1x64_vad():
66 config = [
67 JasperEncoderConfig(filters=128, repeat=1, kernel=[11], stride=[1], dilation=[1], dropout=0.0,
68 residual=False, groups=1, separable=True, heads=-1, residual_mode='add',
69 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
70 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
71 JasperEncoderConfig(filters=64, repeat=1, kernel=[13], stride=[1], dilation=[1], dropout=0.0,
72 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
73 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
74 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
75 JasperEncoderConfig(filters=64, repeat=1, kernel=[15], stride=[1], dilation=[1], dropout=0.0,
76 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
77 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
78 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
79 JasperEncoderConfig(filters=64, repeat=1, kernel=[17], stride=[1], dilation=[1], dropout=0.0,
80 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
81 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
82 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
83 JasperEncoderConfig(filters=128, repeat=1, kernel=[29], stride=[1], dilation=[2], dropout=0.0,
84 residual=False, groups=1, separable=True, heads=-1, residual_mode='add',
85 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
86 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
87 JasperEncoderConfig(filters=128, repeat=1, kernel=[1], stride=[1], dilation=[1], dropout=0.0,
88 residual=False, groups=1, separable=False, heads=-1, residual_mode='add',
89 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
90 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False)
91 ]
92 return config
93
94
95 # fmt: on
96
97
98 @dataclass
99 class MatchboxNetModelConfig(clf_cfg.EncDecClassificationConfig):
100 # Model global arguments
101 sample_rate: int = 16000
102 repeat: int = 1
103 dropout: float = 0.0
104 separable: bool = True
105 kernel_size_factor: float = 1.0
106 timesteps: int = 128
107 labels: List[str] = MISSING
108
109 # Dataset configs
110 train_ds: clf_cfg.EncDecClassificationDatasetConfig = clf_cfg.EncDecClassificationDatasetConfig(
111 manifest_filepath=None, shuffle=True, trim_silence=False
112 )
113 validation_ds: clf_cfg.EncDecClassificationDatasetConfig = clf_cfg.EncDecClassificationDatasetConfig(
114 manifest_filepath=None, shuffle=False
115 )
116 test_ds: clf_cfg.EncDecClassificationDatasetConfig = clf_cfg.EncDecClassificationDatasetConfig(
117 manifest_filepath=None, shuffle=False
118 )
119
120 # Optimizer / Scheduler config
121 optim: Optional[model_cfg.OptimConfig] = model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
122
123 # Model general component configs
124 preprocessor: AudioToMFCCPreprocessorConfig = AudioToMFCCPreprocessorConfig(window_size=0.025)
125 spec_augment: Optional[SpectrogramAugmentationConfig] = SpectrogramAugmentationConfig(
126 freq_masks=2, time_masks=2, freq_width=15, time_width=25, rect_masks=5, rect_time=25, rect_freq=15
127 )
128 crop_or_pad_augment: Optional[CropOrPadSpectrogramAugmentationConfig] = CropOrPadSpectrogramAugmentationConfig(
129 audio_length=128
130 )
131
132 encoder: ConvASREncoderConfig = ConvASREncoderConfig(activation="relu")
133 decoder: ConvASRDecoderClassificationConfig = ConvASRDecoderClassificationConfig()
134
135
136 @dataclass
137 class MatchboxNetVADModelConfig(MatchboxNetModelConfig):
138 timesteps: int = 64
139 labels: List[str] = field(default_factory=lambda: ['background', 'speech'])
140
141 crop_or_pad_augment: Optional[CropOrPadSpectrogramAugmentationConfig] = None
142
143
144 class EncDecClassificationModelConfigBuilder(model_cfg.ModelConfigBuilder):
145 VALID_CONFIGS = ['matchboxnet_3x1x64', 'matchboxnet_3x1x64_vad']
146
147 def __init__(self, name: str = 'matchboxnet_3x1x64', encoder_cfg_func: Optional[Callable[[], List[Any]]] = None):
148 if name not in EncDecClassificationModelConfigBuilder.VALID_CONFIGS:
149 raise ValueError("`name` must be one of : \n" f"{EncDecClassificationModelConfigBuilder.VALID_CONFIGS}")
150
151 self.name = name
152
153 if 'matchboxnet_3x1x64_vad' in name:
154 if encoder_cfg_func is None:
155 encoder_cfg_func = matchboxnet_3x1x64_vad
156
157 model_cfg = MatchboxNetVADModelConfig(
158 repeat=1,
159 separable=True,
160 encoder=ConvASREncoderConfig(jasper=encoder_cfg_func(), activation="relu"),
161 decoder=ConvASRDecoderClassificationConfig(),
162 )
163
164 elif 'matchboxnet_3x1x64' in name:
165 if encoder_cfg_func is None:
166 encoder_cfg_func = matchboxnet_3x1x64
167
168 model_cfg = MatchboxNetModelConfig(
169 repeat=1,
170 separable=False,
171 spec_augment=SpectrogramAugmentationConfig(rect_masks=5, rect_freq=50, rect_time=120),
172 encoder=ConvASREncoderConfig(jasper=encoder_cfg_func(), activation="relu"),
173 decoder=ConvASRDecoderClassificationConfig(),
174 )
175
176 else:
177 raise ValueError(f"Invalid config name submitted to {self.__class__.__name__}")
178
179 super(EncDecClassificationModelConfigBuilder, self).__init__(model_cfg)
180 self.model_cfg: clf_cfg.EncDecClassificationConfig = model_cfg # enable type hinting
181
182 def set_labels(self, labels: List[str]):
183 self.model_cfg.labels = labels
184
185 def set_separable(self, separable: bool):
186 self.model_cfg.separable = separable
187
188 def set_repeat(self, repeat: int):
189 self.model_cfg.repeat = repeat
190
191 def set_sample_rate(self, sample_rate: int):
192 self.model_cfg.sample_rate = sample_rate
193
194 def set_dropout(self, dropout: float = 0.0):
195 self.model_cfg.dropout = dropout
196
197 def set_timesteps(self, timesteps: int):
198 self.model_cfg.timesteps = timesteps
199
200 def set_is_regression_task(self, is_regression_task: bool):
201 self.model_cfg.is_regression_task = is_regression_task
202
203 # Note: Autocomplete for users won't work without these overrides
204 # But practically it is not needed since Python will infer at runtime
205
206 # def set_train_ds(self, cfg: Optional[clf_cfg.EncDecClassificationDatasetConfig] = None):
207 # super().set_train_ds(cfg)
208 #
209 # def set_validation_ds(self, cfg: Optional[clf_cfg.EncDecClassificationDatasetConfig] = None):
210 # super().set_validation_ds(cfg)
211 #
212 # def set_test_ds(self, cfg: Optional[clf_cfg.EncDecClassificationDatasetConfig] = None):
213 # super().set_test_ds(cfg)
214
215 def _finalize_cfg(self):
216 # propagate labels
217 self.model_cfg.train_ds.labels = self.model_cfg.labels
218 self.model_cfg.validation_ds.labels = self.model_cfg.labels
219 self.model_cfg.test_ds.labels = self.model_cfg.labels
220 self.model_cfg.decoder.vocabulary = self.model_cfg.labels
221
222 # propagate num classes
223 self.model_cfg.decoder.num_classes = len(self.model_cfg.labels)
224
225 # propagate sample rate
226 self.model_cfg.sample_rate = self.model_cfg.sample_rate
227 self.model_cfg.preprocessor.sample_rate = self.model_cfg.sample_rate
228 self.model_cfg.train_ds.sample_rate = self.model_cfg.sample_rate
229 self.model_cfg.validation_ds.sample_rate = self.model_cfg.sample_rate
230 self.model_cfg.test_ds.sample_rate = self.model_cfg.sample_rate
231
232 # propagate filters
233 self.model_cfg.encoder.feat_in = self.model_cfg.preprocessor.features
234 self.model_cfg.decoder.feat_in = self.model_cfg.encoder.jasper[-1].filters
235
236 # propagate timesteps
237 if self.model_cfg.crop_or_pad_augment is not None:
238 self.model_cfg.crop_or_pad_augment.audio_length = self.model_cfg.timesteps
239
240 # propagate separable
241 for layer in self.model_cfg.encoder.jasper[:-1]: # type: JasperEncoderConfig
242 layer.separable = self.model_cfg.separable
243
244 # propagate repeat
245 for layer in self.model_cfg.encoder.jasper[1:-2]: # type: JasperEncoderConfig
246 layer.repeat = self.model_cfg.repeat
247
248 # propagate dropout
249 for layer in self.model_cfg.encoder.jasper: # type: JasperEncoderConfig
250 layer.dropout = self.model_cfg.dropout
251
252 def build(self) -> clf_cfg.EncDecClassificationConfig:
253 return super().build()
254
[end of nemo/collections/asr/models/configs/matchboxnet_config.py]
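The builder above collects setter calls and then, in `_finalize_cfg`, propagates global values (labels, dropout, sample rate) into the nested sub-configs before `build()` returns. A minimal sketch of that propagation step, with hypothetical mini-configs (`LayerCfg`, `MiniModelCfg`, `MiniBuilder` are illustrative only):

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class LayerCfg:
    # Hypothetical stand-in for JasperEncoderConfig.
    filters: int
    dropout: float = 0.0


@dataclass
class MiniModelCfg:
    labels: List[str] = field(default_factory=list)
    dropout: float = 0.0
    num_classes: int = 0
    layers: List[LayerCfg] = field(default_factory=list)


class MiniBuilder:
    def __init__(self):
        self.cfg = MiniModelCfg(layers=[LayerCfg(128), LayerCfg(64)])

    def set_labels(self, labels):
        self.cfg.labels = labels

    def set_dropout(self, p):
        self.cfg.dropout = p

    def build(self):
        # Mirrors _finalize_cfg: derive num_classes from labels and
        # broadcast the global dropout to every encoder layer.
        self.cfg.num_classes = len(self.cfg.labels)
        for layer in self.cfg.layers:
            layer.dropout = self.cfg.dropout
        return self.cfg


builder = MiniBuilder()
builder.set_labels(["background", "speech"])
builder.set_dropout(0.2)
cfg = builder.build()
print(cfg.num_classes)                   # 2
print([l.dropout for l in cfg.layers])   # [0.2, 0.2]
```

Deferring propagation to `build()` means setters can be called in any order; the derived fields are computed once, from the final values.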
[start of nemo/collections/asr/models/configs/quartznet_config.py]
1 # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from dataclasses import dataclass
16 from typing import Any, Callable, List, Optional
17
18 from omegaconf import MISSING
19
20 from nemo.collections.asr.models.configs import asr_models_config as ctc_cfg
21 from nemo.collections.asr.modules.audio_preprocessing import (
22 AudioToMelSpectrogramPreprocessorConfig,
23 SpectrogramAugmentationConfig,
24 )
25 from nemo.collections.asr.modules.conv_asr import ConvASRDecoderConfig, ConvASREncoderConfig, JasperEncoderConfig
26 from nemo.core.config import modelPT as model_cfg
27
28
29 # fmt: off
30 def qn_15x5():
31 config = [
32 JasperEncoderConfig(filters=256, repeat=1, kernel=[33], stride=[2], dilation=[1], dropout=0.0,
33 residual=False, groups=1, separable=True, heads=-1, residual_mode='add',
34 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
35 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
36 JasperEncoderConfig(filters=256, repeat=5, kernel=[33], stride=[1], dilation=[1], dropout=0.0,
37 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
38 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
39 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
40 JasperEncoderConfig(filters=256, repeat=5, kernel=[33], stride=[1], dilation=[1], dropout=0.0,
41 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
42 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
43 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
44 JasperEncoderConfig(filters=256, repeat=5, kernel=[33], stride=[1], dilation=[1], dropout=0.0,
45 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
46 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
47 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
48 JasperEncoderConfig(filters=256, repeat=5, kernel=[39], stride=[1], dilation=[1], dropout=0.0,
49 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
50 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
51 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
52 JasperEncoderConfig(filters=256, repeat=5, kernel=[39], stride=[1], dilation=[1], dropout=0.0,
53 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
54 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
55 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
56 JasperEncoderConfig(filters=256, repeat=5, kernel=[39], stride=[1], dilation=[1], dropout=0.0,
57 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
58 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
59 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
60 JasperEncoderConfig(filters=512, repeat=5, kernel=[51], stride=[1], dilation=[1], dropout=0.0,
61 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
62 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
63 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
64 JasperEncoderConfig(filters=512, repeat=5, kernel=[51], stride=[1], dilation=[1], dropout=0.0,
65 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
66 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
67 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
68 JasperEncoderConfig(filters=512, repeat=5, kernel=[51], stride=[1], dilation=[1], dropout=0.0,
69 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
70 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
71 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
72 JasperEncoderConfig(filters=512, repeat=5, kernel=[63], stride=[1], dilation=[1], dropout=0.0,
73 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
74 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
75 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
76 JasperEncoderConfig(filters=512, repeat=5, kernel=[63], stride=[1], dilation=[1], dropout=0.0,
77 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
78 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
79 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
80 JasperEncoderConfig(filters=512, repeat=5, kernel=[63], stride=[1], dilation=[1], dropout=0.0,
81 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
82 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
83 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
84 JasperEncoderConfig(filters=512, repeat=5, kernel=[75], stride=[1], dilation=[1], dropout=0.0,
85 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
86 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
87 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
88 JasperEncoderConfig(filters=512, repeat=5, kernel=[75], stride=[1], dilation=[1], dropout=0.0,
89 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
90 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
91 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
92 JasperEncoderConfig(filters=512, repeat=5, kernel=[75], stride=[1], dilation=[1], dropout=0.0,
93 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
94 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
95 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
96 JasperEncoderConfig(filters=512, repeat=1, kernel=[87], stride=[1], dilation=[2], dropout=0.0,
97 residual=False, groups=1, separable=True, heads=-1, residual_mode='add',
98 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
99 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
100 JasperEncoderConfig(filters=1024, repeat=1, kernel=[1], stride=[1], dilation=[1], dropout=0.0,
101 residual=False, groups=1, separable=False, heads=-1, residual_mode='add',
102 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
103 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False)
104 ]
105 return config
106
107
108 def jasper_10x5_dr():
109 config = [
110 JasperEncoderConfig(filters=256, repeat=1, kernel=[11], stride=[2], dilation=[1], dropout=0.2,
111 residual=False, groups=1, separable=False, heads=-1, residual_mode='add',
112 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
113 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
114 JasperEncoderConfig(filters=256, repeat=5, kernel=[11], stride=[1], dilation=[1], dropout=0.2,
115 residual=True, groups=1, separable=False, heads=-1, residual_mode='add',
116 residual_dense=True, se=False, se_reduction_ratio=8, se_context_size=-1,
117 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
118 JasperEncoderConfig(filters=256, repeat=5, kernel=[11], stride=[1], dilation=[1], dropout=0.2,
119 residual=True, groups=1, separable=False, heads=-1, residual_mode='add',
120 residual_dense=True, se=False, se_reduction_ratio=8, se_context_size=-1,
121 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
122 JasperEncoderConfig(filters=384, repeat=5, kernel=[13], stride=[1], dilation=[1], dropout=0.2,
123 residual=True, groups=1, separable=False, heads=-1, residual_mode='add',
124 residual_dense=True, se=False, se_reduction_ratio=8, se_context_size=-1,
125 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
126 JasperEncoderConfig(filters=384, repeat=5, kernel=[13], stride=[1], dilation=[1], dropout=0.2,
127 residual=True, groups=1, separable=False, heads=-1, residual_mode='add',
128 residual_dense=True, se=False, se_reduction_ratio=8, se_context_size=-1,
129 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
130 JasperEncoderConfig(filters=512, repeat=5, kernel=[17], stride=[1], dilation=[1], dropout=0.2,
131 residual=True, groups=1, separable=False, heads=-1, residual_mode='add',
132 residual_dense=True, se=False, se_reduction_ratio=8, se_context_size=-1,
133 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
134 JasperEncoderConfig(filters=512, repeat=5, kernel=[17], stride=[1], dilation=[1], dropout=0.2,
135 residual=True, groups=1, separable=False, heads=-1, residual_mode='add',
136 residual_dense=True, se=False, se_reduction_ratio=8, se_context_size=-1,
137 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
138 JasperEncoderConfig(filters=640, repeat=5, kernel=[21], stride=[1], dilation=[1], dropout=0.3,
139 residual=True, groups=1, separable=False, heads=-1, residual_mode='add',
140 residual_dense=True, se=False, se_reduction_ratio=8, se_context_size=-1,
141 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
142 JasperEncoderConfig(filters=640, repeat=5, kernel=[21], stride=[1], dilation=[1], dropout=0.3,
143 residual=True, groups=1, separable=False, heads=-1, residual_mode='add',
144 residual_dense=True, se=False, se_reduction_ratio=8, se_context_size=-1,
145 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
146 JasperEncoderConfig(filters=768, repeat=5, kernel=[25], stride=[1], dilation=[1], dropout=0.3,
147 residual=True, groups=1, separable=False, heads=-1, residual_mode='add',
148 residual_dense=True, se=False, se_reduction_ratio=8, se_context_size=-1,
149 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
150 JasperEncoderConfig(filters=768, repeat=5, kernel=[25], stride=[1], dilation=[1], dropout=0.3,
151 residual=True, groups=1, separable=False, heads=-1, residual_mode='add',
152 residual_dense=True, se=False, se_reduction_ratio=8, se_context_size=-1,
153 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
154 JasperEncoderConfig(filters=896, repeat=1, kernel=[29], stride=[1], dilation=[2], dropout=0.4,
155 residual=False, groups=1, separable=False, heads=-1, residual_mode='add',
156 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
157 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
158 JasperEncoderConfig(filters=1024, repeat=1, kernel=[1], stride=[1], dilation=[1], dropout=0.4,
159 residual=False, groups=1, separable=False, heads=-1, residual_mode='add',
160 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
161 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False)
162 ]
163 return config
164 # fmt: on
165
166
167 @dataclass
168 class JasperModelConfig(ctc_cfg.EncDecCTCConfig):
169 # Model global arguments
170 sample_rate: int = 16000
171 repeat: int = 1
172 dropout: float = 0.0
173 separable: bool = False
174 labels: List[str] = MISSING
175
176 # Dataset configs
177 train_ds: ctc_cfg.ASRDatasetConfig = ctc_cfg.ASRDatasetConfig(
178 manifest_filepath=None, shuffle=True, trim_silence=True
179 )
180 validation_ds: ctc_cfg.ASRDatasetConfig = ctc_cfg.ASRDatasetConfig(manifest_filepath=None, shuffle=False)
181 test_ds: ctc_cfg.ASRDatasetConfig = ctc_cfg.ASRDatasetConfig(manifest_filepath=None, shuffle=False)
182
183 # Optimizer / Scheduler config
184 optim: Optional[model_cfg.OptimConfig] = model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
185
186 # Model general component configs
187 preprocessor: AudioToMelSpectrogramPreprocessorConfig = AudioToMelSpectrogramPreprocessorConfig()
188 spec_augment: Optional[SpectrogramAugmentationConfig] = SpectrogramAugmentationConfig()
189 encoder: ConvASREncoderConfig = ConvASREncoderConfig(activation="relu")
190 decoder: ConvASRDecoderConfig = ConvASRDecoderConfig()
191
192
193 @dataclass
194 class QuartzNetModelConfig(JasperModelConfig):
195 separable: bool = True
196
197
198 class EncDecCTCModelConfigBuilder(model_cfg.ModelConfigBuilder):
199 VALID_CONFIGS = ['quartznet_15x5', 'quartznet_15x5_zh', 'jasper_10x5dr']
200
201 def __init__(self, name: str = 'quartznet_15x5', encoder_cfg_func: Optional[Callable[[], List[Any]]] = None):
202 if name not in EncDecCTCModelConfigBuilder.VALID_CONFIGS:
203 raise ValueError("`name` must be one of : \n" f"{EncDecCTCModelConfigBuilder.VALID_CONFIGS}")
204
205 self.name = name
206
207 if 'quartznet_15x5' in name:
208 if encoder_cfg_func is None:
209 encoder_cfg_func = qn_15x5
210
211 model_cfg = QuartzNetModelConfig(
212 repeat=5,
213 separable=True,
214 spec_augment=SpectrogramAugmentationConfig(rect_masks=5, rect_freq=50, rect_time=120),
215 encoder=ConvASREncoderConfig(jasper=encoder_cfg_func(), activation="relu"),
216 decoder=ConvASRDecoderConfig(),
217 )
218
219 elif 'jasper_10x5' in name:
220 if encoder_cfg_func is None:
221 encoder_cfg_func = jasper_10x5_dr
222
223 model_cfg = JasperModelConfig(
224 repeat=5,
225 separable=False,
226 spec_augment=SpectrogramAugmentationConfig(rect_masks=5, rect_freq=50, rect_time=120),
227 encoder=ConvASREncoderConfig(jasper=encoder_cfg_func(), activation="relu"),
228 decoder=ConvASRDecoderConfig(),
229 )
230
231 else:
232 raise ValueError(f"Invalid config name submitted to {self.__class__.__name__}")
233
234 super(EncDecCTCModelConfigBuilder, self).__init__(model_cfg)
235 self.model_cfg: ctc_cfg.EncDecCTCConfig = model_cfg # enable type hinting
236
237 if 'zh' in name:
238 self.set_dataset_normalize(normalize=False)
239
240 def set_labels(self, labels: List[str]):
241 self.model_cfg.labels = labels
242
243 def set_separable(self, separable: bool):
244 self.model_cfg.separable = separable
245
246 def set_repeat(self, repeat: int):
247 self.model_cfg.repeat = repeat
248
249 def set_sample_rate(self, sample_rate: int):
250 self.model_cfg.sample_rate = sample_rate
251
252 def set_dropout(self, dropout: float = 0.0):
253 self.model_cfg.dropout = dropout
254
255 def set_dataset_normalize(self, normalize: bool):
256 self.model_cfg.train_ds.normalize = normalize
257 self.model_cfg.validation_ds.normalize = normalize
258 self.model_cfg.test_ds.normalize = normalize
259
260     # Note: Autocomplete for users won't work without these overrides
261 # But practically it is not needed since python will infer at runtime
262
263 # def set_train_ds(self, cfg: Optional[ctc_cfg.ASRDatasetConfig] = None):
264 # super().set_train_ds(cfg)
265 #
266 # def set_validation_ds(self, cfg: Optional[ctc_cfg.ASRDatasetConfig] = None):
267 # super().set_validation_ds(cfg)
268 #
269 # def set_test_ds(self, cfg: Optional[ctc_cfg.ASRDatasetConfig] = None):
270 # super().set_test_ds(cfg)
271
272 def _finalize_cfg(self):
273 # propagate labels
274 self.model_cfg.train_ds.labels = self.model_cfg.labels
275 self.model_cfg.validation_ds.labels = self.model_cfg.labels
276 self.model_cfg.test_ds.labels = self.model_cfg.labels
277 self.model_cfg.decoder.vocabulary = self.model_cfg.labels
278
279 # propagate num classes
280 self.model_cfg.decoder.num_classes = len(self.model_cfg.labels)
281
282 # propagate sample rate
283 self.model_cfg.sample_rate = self.model_cfg.sample_rate
284 self.model_cfg.preprocessor.sample_rate = self.model_cfg.sample_rate
285 self.model_cfg.train_ds.sample_rate = self.model_cfg.sample_rate
286 self.model_cfg.validation_ds.sample_rate = self.model_cfg.sample_rate
287 self.model_cfg.test_ds.sample_rate = self.model_cfg.sample_rate
288
289 # propagate filters
290 self.model_cfg.encoder.feat_in = self.model_cfg.preprocessor.features
291 self.model_cfg.decoder.feat_in = self.model_cfg.encoder.jasper[-1].filters
292
293 # propagate separable
294 for layer in self.model_cfg.encoder.jasper[:-1]: # type: JasperEncoderConfig
295 layer.separable = self.model_cfg.separable
296
297 # propagate repeat
298 for layer in self.model_cfg.encoder.jasper[1:-2]: # type: JasperEncoderConfig
299 layer.repeat = self.model_cfg.repeat
300
301 # propagate dropout
302 for layer in self.model_cfg.encoder.jasper: # type: JasperEncoderConfig
303 layer.dropout = self.model_cfg.dropout
304
305 def build(self) -> ctc_cfg.EncDecCTCConfig:
306 return super().build()
307
[end of nemo/collections/asr/models/configs/quartznet_config.py]
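The `_finalize_cfg` method above propagates top-level settings (labels, sample rate, dropout) down into the nested dataclass configs. A minimal stdlib sketch of that propagation pattern, using hypothetical simplified stand-ins for the NeMo config classes:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical, simplified stand-ins for the nested NeMo configs.
@dataclass
class LayerCfg:
    filters: int
    dropout: float = 0.0

@dataclass
class ModelCfg:
    labels: List[str] = field(default_factory=list)
    dropout: float = 0.0
    layers: List[LayerCfg] = field(default_factory=list)
    num_classes: int = 0

def finalize(cfg: ModelCfg) -> ModelCfg:
    # Propagate the vocabulary size into the decoder-like field.
    cfg.num_classes = len(cfg.labels)
    # Propagate the global dropout into every encoder layer.
    for layer in cfg.layers:
        layer.dropout = cfg.dropout
    return cfg

cfg = finalize(ModelCfg(labels=["a", "b", "c"], dropout=0.2,
                        layers=[LayerCfg(256), LayerCfg(512)]))
print(cfg.num_classes)        # 3
print(cfg.layers[0].dropout)  # 0.2
```

The real builder does the same thing across several nested configs (datasets, preprocessor, encoder, decoder) in `_finalize_cfg`.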
[start of nemo/collections/asr/modules/audio_preprocessing.py]
1 # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import math
16 import random
17 from abc import ABC, abstractmethod
18 from dataclasses import dataclass
19 from typing import Any, Dict, Optional, Tuple
20
21 import torch
22 from packaging import version
23
24 from nemo.collections.asr.parts.numba.spec_augment import SpecAugmentNumba, spec_augment_launch_heuristics
25 from nemo.collections.asr.parts.preprocessing.features import (
26 FilterbankFeatures,
27 FilterbankFeaturesTA,
28 make_seq_mask_like,
29 )
30 from nemo.collections.asr.parts.submodules.spectr_augment import SpecAugment, SpecCutout
31 from nemo.core.classes import Exportable, NeuralModule, typecheck
32 from nemo.core.neural_types import (
33 AudioSignal,
34 LengthsType,
35 MelSpectrogramType,
36 MFCCSpectrogramType,
37 NeuralType,
38 SpectrogramType,
39 )
40 from nemo.core.utils import numba_utils
41 from nemo.core.utils.numba_utils import __NUMBA_MINIMUM_VERSION__
42 from nemo.utils import logging
43
44 try:
45 import torchaudio
46 import torchaudio.functional
47 import torchaudio.transforms
48
49 TORCHAUDIO_VERSION = version.parse(torchaudio.__version__)
50 TORCHAUDIO_VERSION_MIN = version.parse('0.5')
51
52 HAVE_TORCHAUDIO = True
53 except ModuleNotFoundError:
54 HAVE_TORCHAUDIO = False
55
56 __all__ = [
57 'AudioToMelSpectrogramPreprocessor',
58 'AudioToSpectrogram',
59 'SpectrogramToAudio',
60 'AudioToMFCCPreprocessor',
61 'SpectrogramAugmentation',
62 'MaskedPatchAugmentation',
63 'CropOrPadSpectrogramAugmentation',
64 ]
65
66
67 class AudioPreprocessor(NeuralModule, ABC):
68 """
69 An interface for Neural Modules that performs audio pre-processing,
70 transforming the wav files to features.
71 """
72
73 def __init__(self, win_length, hop_length):
74 super().__init__()
75
76 self.win_length = win_length
77 self.hop_length = hop_length
78
79 self.torch_windows = {
80 'hann': torch.hann_window,
81 'hamming': torch.hamming_window,
82 'blackman': torch.blackman_window,
83 'bartlett': torch.bartlett_window,
84 'ones': torch.ones,
85 None: torch.ones,
86 }
87
88 @typecheck()
89 @torch.no_grad()
90 def forward(self, input_signal, length):
91 processed_signal, processed_length = self.get_features(input_signal, length)
92
93 return processed_signal, processed_length
94
95 @abstractmethod
96 def get_features(self, input_signal, length):
97 # Called by forward(). Subclasses should implement this.
98 pass
99
100
101 class AudioToMelSpectrogramPreprocessor(AudioPreprocessor, Exportable):
102 """Featurizer module that converts wavs to mel spectrograms.
103
104 Args:
105 sample_rate (int): Sample rate of the input audio data.
106 Defaults to 16000
107 window_size (float): Size of window for fft in seconds
108 Defaults to 0.02
109 window_stride (float): Stride of window for fft in seconds
110 Defaults to 0.01
111 n_window_size (int): Size of window for fft in samples
112 Defaults to None. Use one of window_size or n_window_size.
113 n_window_stride (int): Stride of window for fft in samples
114 Defaults to None. Use one of window_stride or n_window_stride.
115 window (str): Windowing function for fft. can be one of ['hann',
116 'hamming', 'blackman', 'bartlett']
117 Defaults to "hann"
118 normalize (str): Can be one of ['per_feature', 'all_features']; all
119 other options disable feature normalization. 'all_features'
120 normalizes the entire spectrogram to be mean 0 with std 1.
121             'per_feature' normalizes per channel / freq instead.
122 Defaults to "per_feature"
123 n_fft (int): Length of FT window. If None, it uses the smallest power
124 of 2 that is larger than n_window_size.
125 Defaults to None
126 preemph (float): Amount of pre emphasis to add to audio. Can be
127 disabled by passing None.
128 Defaults to 0.97
129 features (int): Number of mel spectrogram freq bins to output.
130 Defaults to 64
131 lowfreq (int): Lower bound on mel basis in Hz.
132 Defaults to 0
133         highfreq (int): Upper bound on mel basis in Hz.
134 Defaults to None
135 log (bool): Log features.
136 Defaults to True
137 log_zero_guard_type(str): Need to avoid taking the log of zero. There
138 are two options: "add" or "clamp".
139 Defaults to "add".
140 log_zero_guard_value(float, or str): Add or clamp requires the number
141 to add with or clamp to. log_zero_guard_value can either be a float
142 or "tiny" or "eps". torch.finfo is used if "tiny" or "eps" is
143 passed.
144 Defaults to 2**-24.
145 dither (float): Amount of white-noise dithering.
146 Defaults to 1e-5
147 pad_to (int): Ensures that the output size of the time dimension is
148 a multiple of pad_to.
149 Defaults to 16
150 frame_splicing (int): Defaults to 1
151 exact_pad (bool): If True, sets stft center to False and adds padding, such that num_frames = audio_length
152 // hop_length. Defaults to False.
153 pad_value (float): The value that shorter mels are padded with.
154 Defaults to 0
155 mag_power (float): The power that the linear spectrogram is raised to
156 prior to multiplication with mel basis.
157 Defaults to 2 for a power spec
158 rng : Random number generator
159 nb_augmentation_prob (float) : Probability with which narrowband augmentation would be applied to
160 samples in the batch.
161 Defaults to 0.0
162 nb_max_freq (int) : Frequency above which all frequencies will be masked for narrowband augmentation.
163 Defaults to 4000
164 use_torchaudio: Whether to use the `torchaudio` implementation.
165 mel_norm: Normalization used for mel filterbank weights.
166 Defaults to 'slaney' (area normalization)
167 stft_exact_pad: Deprecated argument, kept for compatibility with older checkpoints.
168 stft_conv: Deprecated argument, kept for compatibility with older checkpoints.
169 """
170
171 def save_to(self, save_path: str):
172 pass
173
174 @classmethod
175 def restore_from(cls, restore_path: str):
176 pass
177
178 @property
179 def input_types(self):
180 """Returns definitions of module input ports.
181 """
182 return {
183 "input_signal": NeuralType(('B', 'T'), AudioSignal(freq=self._sample_rate)),
184 "length": NeuralType(
185 tuple('B'), LengthsType()
186 ), # Please note that length should be in samples not seconds.
187 }
188
189 @property
190 def output_types(self):
191 """Returns definitions of module output ports.
192
193 processed_signal:
194 0: AxisType(BatchTag)
195 1: AxisType(MelSpectrogramSignalTag)
196 2: AxisType(ProcessedTimeTag)
197 processed_length:
198 0: AxisType(BatchTag)
199 """
200 return {
201 "processed_signal": NeuralType(('B', 'D', 'T'), MelSpectrogramType()),
202 "processed_length": NeuralType(tuple('B'), LengthsType()),
203 }
204
205 def __init__(
206 self,
207 sample_rate=16000,
208 window_size=0.02,
209 window_stride=0.01,
210 n_window_size=None,
211 n_window_stride=None,
212 window="hann",
213 normalize="per_feature",
214 n_fft=None,
215 preemph=0.97,
216 features=64,
217 lowfreq=0,
218 highfreq=None,
219 log=True,
220 log_zero_guard_type="add",
221 log_zero_guard_value=2 ** -24,
222 dither=1e-5,
223 pad_to=16,
224 frame_splicing=1,
225 exact_pad=False,
226 pad_value=0,
227 mag_power=2.0,
228 rng=None,
229 nb_augmentation_prob=0.0,
230 nb_max_freq=4000,
231 use_torchaudio: bool = False,
232 mel_norm="slaney",
233 stft_exact_pad=False, # Deprecated arguments; kept for config compatibility
234 stft_conv=False, # Deprecated arguments; kept for config compatibility
235 ):
236 super().__init__(n_window_size, n_window_stride)
237
238 self._sample_rate = sample_rate
239 if window_size and n_window_size:
240 raise ValueError(f"{self} received both window_size and " f"n_window_size. Only one should be specified.")
241 if window_stride and n_window_stride:
242 raise ValueError(
243 f"{self} received both window_stride and " f"n_window_stride. Only one should be specified."
244 )
245 if window_size:
246 n_window_size = int(window_size * self._sample_rate)
247 if window_stride:
248 n_window_stride = int(window_stride * self._sample_rate)
249
250 # Given the long and similar argument list, point to the class and instantiate it by reference
251 if not use_torchaudio:
252 featurizer_class = FilterbankFeatures
253 else:
254 featurizer_class = FilterbankFeaturesTA
255 self.featurizer = featurizer_class(
256 sample_rate=self._sample_rate,
257 n_window_size=n_window_size,
258 n_window_stride=n_window_stride,
259 window=window,
260 normalize=normalize,
261 n_fft=n_fft,
262 preemph=preemph,
263 nfilt=features,
264 lowfreq=lowfreq,
265 highfreq=highfreq,
266 log=log,
267 log_zero_guard_type=log_zero_guard_type,
268 log_zero_guard_value=log_zero_guard_value,
269 dither=dither,
270 pad_to=pad_to,
271 frame_splicing=frame_splicing,
272 exact_pad=exact_pad,
273 pad_value=pad_value,
274 mag_power=mag_power,
275 rng=rng,
276 nb_augmentation_prob=nb_augmentation_prob,
277 nb_max_freq=nb_max_freq,
278 mel_norm=mel_norm,
279 stft_exact_pad=stft_exact_pad, # Deprecated arguments; kept for config compatibility
280 stft_conv=stft_conv, # Deprecated arguments; kept for config compatibility
281 )
282
283 def input_example(self, max_batch: int = 8, max_dim: int = 32000, min_length: int = 200):
284 batch_size = torch.randint(low=1, high=max_batch, size=[1]).item()
285 max_length = torch.randint(low=min_length, high=max_dim, size=[1]).item()
286 signals = torch.rand(size=[batch_size, max_length]) * 2 - 1
287 lengths = torch.randint(low=min_length, high=max_dim, size=[batch_size])
288 lengths[0] = max_length
289 return signals, lengths
290
291 def get_features(self, input_signal, length):
292 return self.featurizer(input_signal, length)
293
294 @property
295 def filter_banks(self):
296 return self.featurizer.filter_banks
297
298
299 class AudioToMFCCPreprocessor(AudioPreprocessor):
300 """Preprocessor that converts wavs to MFCCs.
301 Uses torchaudio.transforms.MFCC.
302
303 Args:
304 sample_rate: The sample rate of the audio.
305 Defaults to 16000.
306 window_size: Size of window for fft in seconds. Used to calculate the
307 win_length arg for mel spectrogram.
308 Defaults to 0.02
309         window_stride: Stride of window for fft in seconds. Used to calculate
310 the hop_length arg for mel spect.
311 Defaults to 0.01
312 n_window_size: Size of window for fft in samples
313 Defaults to None. Use one of window_size or n_window_size.
314 n_window_stride: Stride of window for fft in samples
315 Defaults to None. Use one of window_stride or n_window_stride.
316 window: Windowing function for fft. can be one of ['hann',
317 'hamming', 'blackman', 'bartlett', 'none', 'null'].
318 Defaults to 'hann'
319 n_fft: Length of FT window. If None, it uses the smallest power of 2
320 that is larger than n_window_size.
321 Defaults to None
322 lowfreq (int): Lower bound on mel basis in Hz.
323 Defaults to 0
324         highfreq (int): Upper bound on mel basis in Hz.
325 Defaults to None
326 n_mels: Number of mel filterbanks.
327 Defaults to 64
328 n_mfcc: Number of coefficients to retain
329 Defaults to 64
330 dct_type: Type of discrete cosine transform to use
331 norm: Type of norm to use
332 log: Whether to use log-mel spectrograms instead of db-scaled.
333 Defaults to True.
334 """
335
336 @property
337 def input_types(self):
338 """Returns definitions of module input ports.
339 """
340 return {
341 "input_signal": NeuralType(('B', 'T'), AudioSignal(freq=self._sample_rate)),
342 "length": NeuralType(tuple('B'), LengthsType()),
343 }
344
345 @property
346 def output_types(self):
347 """Returns definitions of module output ports.
348 """
349 return {
350 "processed_signal": NeuralType(('B', 'D', 'T'), MFCCSpectrogramType()),
351 "processed_length": NeuralType(tuple('B'), LengthsType()),
352 }
353
354 def save_to(self, save_path: str):
355 pass
356
357 @classmethod
358 def restore_from(cls, restore_path: str):
359 pass
360
361 def __init__(
362 self,
363 sample_rate=16000,
364 window_size=0.02,
365 window_stride=0.01,
366 n_window_size=None,
367 n_window_stride=None,
368 window='hann',
369 n_fft=None,
370 lowfreq=0.0,
371 highfreq=None,
372 n_mels=64,
373 n_mfcc=64,
374 dct_type=2,
375 norm='ortho',
376 log=True,
377 ):
378 self._sample_rate = sample_rate
379 if not HAVE_TORCHAUDIO:
380 logging.error('Could not import torchaudio. Some features might not work.')
381
382 raise ModuleNotFoundError(
383 "torchaudio is not installed but is necessary for "
384 "AudioToMFCCPreprocessor. We recommend you try "
385 "building it from source for the PyTorch version you have."
386 )
387 if window_size and n_window_size:
388 raise ValueError(f"{self} received both window_size and " f"n_window_size. Only one should be specified.")
389 if window_stride and n_window_stride:
390 raise ValueError(
391 f"{self} received both window_stride and " f"n_window_stride. Only one should be specified."
392 )
393 # Get win_length (n_window_size) and hop_length (n_window_stride)
394 if window_size:
395 n_window_size = int(window_size * self._sample_rate)
396 if window_stride:
397 n_window_stride = int(window_stride * self._sample_rate)
398
399 super().__init__(n_window_size, n_window_stride)
400
401 mel_kwargs = {}
402
403 mel_kwargs['f_min'] = lowfreq
404 mel_kwargs['f_max'] = highfreq
405 mel_kwargs['n_mels'] = n_mels
406
407 mel_kwargs['n_fft'] = n_fft or 2 ** math.ceil(math.log2(n_window_size))
408
409 mel_kwargs['win_length'] = n_window_size
410 mel_kwargs['hop_length'] = n_window_stride
411
412 # Set window_fn. None defaults to torch.ones.
413 window_fn = self.torch_windows.get(window, None)
414 if window_fn is None:
415 raise ValueError(
416 f"Window argument for AudioProcessor is invalid: {window}."
417 f"For no window function, use 'ones' or None."
418 )
419 mel_kwargs['window_fn'] = window_fn
420
421 # Use torchaudio's implementation of MFCCs as featurizer
422 self.featurizer = torchaudio.transforms.MFCC(
423 sample_rate=self._sample_rate,
424 n_mfcc=n_mfcc,
425 dct_type=dct_type,
426 norm=norm,
427 log_mels=log,
428 melkwargs=mel_kwargs,
429 )
430
431 def get_features(self, input_signal, length):
432 features = self.featurizer(input_signal)
433 seq_len = torch.ceil(length.to(torch.float32) / self.hop_length).to(dtype=torch.long)
434 return features, seq_len
435
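The `get_features` method above converts sample counts to frame counts with `ceil(length / hop_length)`, and `__init__` picks `n_fft` as the smallest power of two not below the window size. A quick stdlib check of both formulas, with illustrative numbers only (16 kHz audio, 20 ms window, 10 ms hop):

```python
import math

sample_rate = 16000
n_window_size = int(0.02 * sample_rate)    # 20 ms window -> 320 samples
n_window_stride = int(0.01 * sample_rate)  # 10 ms hop    -> 160 samples

# n_fft: smallest power of 2 that is >= the window size.
n_fft = 2 ** math.ceil(math.log2(n_window_size))

# Number of output frames for one second of audio.
seq_len = math.ceil(sample_rate / n_window_stride)

print(n_window_size, n_window_stride, n_fft, seq_len)  # 320 160 512 100
```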
436
437 class SpectrogramAugmentation(NeuralModule):
438 """
439 Performs time and freq cuts in one of two ways.
440 SpecAugment zeroes out vertical and horizontal sections as described in
441 SpecAugment (https://arxiv.org/abs/1904.08779). Arguments for use with
442 SpecAugment are `freq_masks`, `time_masks`, `freq_width`, and `time_width`.
443 SpecCutout zeroes out rectangulars as described in Cutout
444 (https://arxiv.org/abs/1708.04552). Arguments for use with Cutout are
445 `rect_masks`, `rect_freq`, and `rect_time`.
446
447 Args:
448 freq_masks (int): how many frequency segments should be cut.
449 Defaults to 0.
450 time_masks (int): how many time segments should be cut
451 Defaults to 0.
452 freq_width (int): maximum number of frequencies to be cut in one
453 segment.
454 Defaults to 10.
455 time_width (int): maximum number of time steps to be cut in one
456 segment
457 Defaults to 10.
458 rect_masks (int): how many rectangular masks should be cut
459 Defaults to 0.
460         rect_freq (int): maximum size of cut rectangles along the frequency
461             dimension
462             Defaults to 20.
463         rect_time (int): maximum size of cut rectangles along the time
464             dimension
465             Defaults to 5.
466 """
467
468 @property
469 def input_types(self):
470 """Returns definitions of module input types
471 """
472 return {
473 "input_spec": NeuralType(('B', 'D', 'T'), SpectrogramType()),
474 "length": NeuralType(tuple('B'), LengthsType()),
475 }
476
477 @property
478 def output_types(self):
479 """Returns definitions of module output types
480 """
481 return {"augmented_spec": NeuralType(('B', 'D', 'T'), SpectrogramType())}
482
483 def __init__(
484 self,
485 freq_masks=0,
486 time_masks=0,
487 freq_width=10,
488 time_width=10,
489 rect_masks=0,
490 rect_time=5,
491 rect_freq=20,
492 rng=None,
493 mask_value=0.0,
494 use_numba_spec_augment: bool = True,
495 ):
496 super().__init__()
497
498 if rect_masks > 0:
499 self.spec_cutout = SpecCutout(rect_masks=rect_masks, rect_time=rect_time, rect_freq=rect_freq, rng=rng,)
500 # self.spec_cutout.to(self._device)
501 else:
502 self.spec_cutout = lambda input_spec: input_spec
503 if freq_masks + time_masks > 0:
504 self.spec_augment = SpecAugment(
505 freq_masks=freq_masks,
506 time_masks=time_masks,
507 freq_width=freq_width,
508 time_width=time_width,
509 rng=rng,
510 mask_value=mask_value,
511 )
512 else:
513 self.spec_augment = lambda input_spec, length: input_spec
514
515 # Check if numba is supported, and use a Numba kernel if it is
516 if use_numba_spec_augment and numba_utils.numba_cuda_is_supported(__NUMBA_MINIMUM_VERSION__):
517 logging.info('Numba CUDA SpecAugment kernel is being used')
518 self.spec_augment_numba = SpecAugmentNumba(
519 freq_masks=freq_masks,
520 time_masks=time_masks,
521 freq_width=freq_width,
522 time_width=time_width,
523 rng=rng,
524 mask_value=mask_value,
525 )
526 else:
527 self.spec_augment_numba = None
528
529 @typecheck()
530 def forward(self, input_spec, length):
531 augmented_spec = self.spec_cutout(input_spec=input_spec)
532
533 # To run the Numba kernel, correct numba version is required as well as
534 # tensor must be on GPU and length must be provided
535 if self.spec_augment_numba is not None and spec_augment_launch_heuristics(augmented_spec, length):
536 augmented_spec = self.spec_augment_numba(input_spec=augmented_spec, length=length)
537 else:
538 augmented_spec = self.spec_augment(input_spec=augmented_spec, length=length)
539 return augmented_spec
540
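SpecAugment-style masking as applied above zeroes out contiguous bands along the time (or frequency) axis, with the same mask applied across all channels of a sample. A dependency-free sketch of a single time mask on a `[freq][time]` list of lists (`time_mask` is a hypothetical helper, seeded for reproducibility):

```python
import random

def time_mask(spec, max_width, rng):
    """Zero out one contiguous run of time steps in a [freq][time] list."""
    n_time = len(spec[0])
    width = rng.randint(0, max_width)           # mask width in [0, max_width]
    start = rng.randint(0, max(0, n_time - width))
    for row in spec:
        for t in range(start, start + width):
            row[t] = 0.0
    return spec

rng = random.Random(0)
spec = [[1.0] * 10 for _ in range(4)]
masked = time_mask(spec, max_width=3, rng=rng)

# Every frequency row shares the same zeroed columns.
zero_cols = [t for t in range(10) if masked[0][t] == 0.0]
print(zero_cols)
```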
541
542 class MaskedPatchAugmentation(NeuralModule):
543 """
544 Zeroes out fixed size time patches of the spectrogram.
545 All samples in batch are guaranteed to have the same amount of masked time steps.
546 Optionally also performs frequency masking in the same way as SpecAugment.
547 Args:
548 patch_size (int): up to how many time steps does one patch consist of.
549 Defaults to 48.
550 mask_patches (float): how many patches should be masked in each sample.
551 if >= 1., interpreted as number of patches (after converting to int)
552 if <1., interpreted as fraction of total tokens to be masked (number of patches is rounded up)
553 Defaults to 10.
554 freq_masks (int): how many frequency segments should be cut.
555 Defaults to 0.
556 freq_width (int): maximum number of frequencies to be cut in a segment.
557 Defaults to 0.
558 """
559
560 @property
561 def input_types(self):
562 """Returns definitions of module input types
563 """
564 return {
565 "input_spec": NeuralType(('B', 'D', 'T'), SpectrogramType()),
566 "length": NeuralType(tuple('B'), LengthsType()),
567 }
568
569 @property
570 def output_types(self):
571 """Returns definitions of module output types
572 """
573 return {"augmented_spec": NeuralType(('B', 'D', 'T'), SpectrogramType())}
574
575 def __init__(
576 self, patch_size: int = 48, mask_patches: float = 10.0, freq_masks: int = 0, freq_width: int = 0,
577 ):
578 super().__init__()
579 self.patch_size = patch_size
580 if mask_patches >= 1:
581 self.mask_patches = int(mask_patches)
582 elif mask_patches >= 0:
583 self._mask_fraction = mask_patches
584 self.mask_patches = None
585 else:
586 raise ValueError('mask_patches cannot be negative')
587
588 if freq_masks > 0:
589 self.spec_augment = SpecAugment(freq_masks=freq_masks, time_masks=0, freq_width=freq_width, time_width=0,)
590 else:
591 self.spec_augment = None
592
593 @typecheck()
594 def forward(self, input_spec, length):
595 augmented_spec = input_spec
596
597 min_len = torch.min(length)
598
599 if self.mask_patches is None:
600 # masking specified as fraction
601 len_fraction = int(min_len * self._mask_fraction)
602 mask_patches = len_fraction // self.patch_size + int(len_fraction % self.patch_size != 0)
603 else:
604 mask_patches = self.mask_patches
605
606 if min_len < self.patch_size * mask_patches:
607             mask_patches = int(min_len // self.patch_size)  # min_len is a 0-dim tensor
608
609 for idx in range(input_spec.shape[0]):
610 cur_len = length[idx]
611 patches = range(cur_len // self.patch_size)
612 masked_patches = random.sample(patches, mask_patches)
613
614 for mp in masked_patches:
615 augmented_spec[idx, :, mp * self.patch_size : (mp + 1) * self.patch_size] = 0.0
616
617 if self.spec_augment is not None:
618 augmented_spec = self.spec_augment(input_spec=augmented_spec, length=length)
619
620 return augmented_spec
621
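When `mask_patches` is given as a fraction, the `forward` above rounds the masked-token budget up to a whole number of patches, then clamps it so it fits in the shortest sample. The same arithmetic, checked standalone with illustrative numbers:

```python
patch_size = 48
mask_fraction = 0.3
min_len = 500  # shortest spectrogram in the batch, in time steps

len_fraction = int(min_len * mask_fraction)  # 150 time steps to mask
# Round up to a whole number of patches: 3 full patches (144) < 150, so 4.
mask_patches = len_fraction // patch_size + int(len_fraction % patch_size != 0)
print(mask_patches)  # 4

# Never mask more patches than fit in the shortest sample.
if min_len < patch_size * mask_patches:
    mask_patches = min_len // patch_size
print(mask_patches)  # still 4, since 4 * 48 = 192 <= 500
```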
622
623 class CropOrPadSpectrogramAugmentation(NeuralModule):
624 """
625 Pad or Crop the incoming Spectrogram to a certain shape.
626
627 Args:
628 audio_length (int): the final number of timesteps that is required.
629 The signal will be either padded or cropped temporally to this
630 size.
631 """
632
633 def __init__(self, audio_length):
634 super(CropOrPadSpectrogramAugmentation, self).__init__()
635 self.audio_length = audio_length
636
637 @typecheck()
638 @torch.no_grad()
639 def forward(self, input_signal, length):
640 image = input_signal
641 num_images = image.shape[0]
642
643 audio_length = self.audio_length
644 image_len = image.shape[-1]
645
646 # Crop long signal
647 if image_len > audio_length: # randomly slice
648 cutout_images = []
649             offsets = torch.randint(low=0, high=image_len - audio_length + 1, size=[num_images])
650
651             for idx, offset in enumerate(offsets):
652                 cutout_images.append(image[idx : idx + 1, :, offset : offset + audio_length])
653
654 image = torch.cat(cutout_images, dim=0)
655 del cutout_images
656
657 else: # symmetrically pad short signal with zeros
658 pad_left = (audio_length - image_len) // 2
659 pad_right = (audio_length - image_len) // 2
660
661 if (audio_length - image_len) % 2 == 1:
662 pad_right += 1
663
664 image = torch.nn.functional.pad(image, [pad_left, pad_right], mode="constant", value=0)
665
666 # Replace dynamic length sequences with static number of timesteps
667 length = (length * 0) + audio_length
668
669 return image, length
670
671 @property
672 def input_types(self):
673 """Returns definitions of module output ports.
674         """Returns definitions of module input ports.
675 return {
676 "input_signal": NeuralType(('B', 'D', 'T'), SpectrogramType()),
677 "length": NeuralType(tuple('B'), LengthsType()),
678 }
679
680 @property
681 def output_types(self):
682 """Returns definitions of module output ports.
683 """
684 return {
685 "processed_signal": NeuralType(('B', 'D', 'T'), SpectrogramType()),
686 "processed_length": NeuralType(tuple('B'), LengthsType()),
687 }
688
689 def save_to(self, save_path: str):
690 pass
691
692 @classmethod
693 def restore_from(cls, restore_path: str):
694 pass
695
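The padding branch in `CropOrPadSpectrogramAugmentation.forward` splits the length deficit symmetrically and gives the extra time step to the right side when the deficit is odd. The split, checked in isolation (`sym_pad` is a hypothetical helper mirroring that code):

```python
def sym_pad(image_len, audio_length):
    """Return (pad_left, pad_right) so the padded length equals audio_length."""
    pad_left = (audio_length - image_len) // 2
    pad_right = (audio_length - image_len) // 2
    if (audio_length - image_len) % 2 == 1:
        pad_right += 1  # odd deficit: the right side gets the extra frame
    return pad_left, pad_right

print(sym_pad(10, 16))  # (3, 3)
print(sym_pad(9, 16))   # (3, 4)
```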
696
697 class AudioToSpectrogram(NeuralModule):
698 """Transform a batch of input multi-channel signals into a batch of
699 STFT-based spectrograms.
700
701 Args:
702 fft_length: length of FFT
703 hop_length: length of hops/shifts of the sliding window
704 power: exponent for magnitude spectrogram. Default `None` will
705 return a complex-valued spectrogram
706 """
707
708 def __init__(self, fft_length: int, hop_length: int, power: Optional[float] = None):
709 if not HAVE_TORCHAUDIO:
710 logging.error('Could not import torchaudio. Some features might not work.')
711
712 raise ModuleNotFoundError(
713                 f"torchaudio is not installed but is necessary to instantiate a {self.__class__.__name__}"
714 )
715
716 super().__init__()
717
718 # For now, assume FFT length is divisible by two
719 if fft_length % 2 != 0:
720 raise ValueError(f'fft_length = {fft_length} must be divisible by 2')
721
722 self.stft = torchaudio.transforms.Spectrogram(
723 n_fft=fft_length, hop_length=hop_length, power=power, pad_mode='constant'
724 )
725
726 # number of subbands
727 self.F = fft_length // 2 + 1
728
729 @property
730 def num_subbands(self) -> int:
731 return self.F
732
733 @property
734 def input_types(self) -> Dict[str, NeuralType]:
735         """Returns definitions of module input ports.
736 """
737 return {
738 "input": NeuralType(('B', 'C', 'T'), AudioSignal()),
739 "input_length": NeuralType(('B',), LengthsType(), optional=True),
740 }
741
742 @property
743 def output_types(self) -> Dict[str, NeuralType]:
744 """Returns definitions of module output ports.
745 """
746 return {
747 "output": NeuralType(('B', 'C', 'D', 'T'), SpectrogramType()),
748 "output_length": NeuralType(('B',), LengthsType()),
749 }
750
751 @typecheck()
752 def forward(
753 self, input: torch.Tensor, input_length: Optional[torch.Tensor] = None
754 ) -> Tuple[torch.Tensor, torch.Tensor]:
755 """Convert a batch of C-channel input signals
756 into a batch of complex-valued spectrograms.
757
758 Args:
759 input: Time-domain input signal with C channels, shape (B, C, T)
760 input_length: Length of valid entries along the time dimension, shape (B,)
761
762 Returns:
763 Output spectrogram with F subbands and N time frames, shape (B, C, F, N)
764 and output length with shape (B,).
765 """
766 B, T = input.size(0), input.size(-1)
767 input = input.view(B, -1, T)
768
769 # STFT output (B, C, F, N)
770 with torch.cuda.amp.autocast(enabled=False):
771 output = self.stft(input.float())
772
773 if input_length is not None:
774 # Mask padded frames
775 output_length = self.get_output_length(input_length=input_length)
776
777 length_mask: torch.Tensor = make_seq_mask_like(
778 lengths=output_length, like=output, time_dim=-1, valid_ones=False
779 )
780 output = output.masked_fill(length_mask, 0.0)
781 else:
782 # Assume all frames are valid for all examples in the batch
783 output_length = output.size(-1) * torch.ones(B, device=output.device).long()
784
785 return output, output_length
786
787 def get_output_length(self, input_length: torch.Tensor) -> torch.Tensor:
788 """Get length of valid frames for the output.
789
790 Args:
791 input_length: number of valid samples, shape (B,)
792
793 Returns:
794 Number of valid frames, shape (B,)
795 """
796 output_length = input_length.div(self.stft.hop_length, rounding_mode='floor').add(1).long()
797 return output_length
798
799
800 class SpectrogramToAudio(NeuralModule):
801 """Transform a batch of input multi-channel spectrograms into a batch of
802 time-domain multi-channel signals.
803
804 Args:
805 fft_length: length of FFT
806 hop_length: length of hops/shifts of the sliding window
809 """
810
811 def __init__(self, fft_length: int, hop_length: int):
812 if not HAVE_TORCHAUDIO:
813 logging.error('Could not import torchaudio. Some features might not work.')
814
815 raise ModuleNotFoundError(
816                 f"torchaudio is not installed but is necessary to instantiate a {self.__class__.__name__}"
817 )
818
819 super().__init__()
820
821 # For now, assume FFT length is divisible by two
822 if fft_length % 2 != 0:
823 raise ValueError(f'fft_length = {fft_length} must be divisible by 2')
824
825 self.istft = torchaudio.transforms.InverseSpectrogram(
826 n_fft=fft_length, hop_length=hop_length, pad_mode='constant'
827 )
828
829 self.F = fft_length // 2 + 1
830
831 @property
832 def num_subbands(self) -> int:
833 return self.F
834
835 @property
836 def input_types(self) -> Dict[str, NeuralType]:
837         """Returns definitions of module input ports.
838 """
839 return {
840 "input": NeuralType(('B', 'C', 'D', 'T'), SpectrogramType()),
841 "input_length": NeuralType(('B',), LengthsType(), optional=True),
842 }
843
844 @property
845 def output_types(self) -> Dict[str, NeuralType]:
846 """Returns definitions of module output ports.
847 """
848 return {
849 "output": NeuralType(('B', 'C', 'T'), AudioSignal()),
850 "output_length": NeuralType(('B',), LengthsType()),
851 }
852
853 @typecheck()
854 def forward(self, input: torch.Tensor, input_length: Optional[torch.Tensor] = None) -> torch.Tensor:
855 """Convert input complex-valued spectrogram to a time-domain
856 signal. Multi-channel IO is supported.
857
858 Args:
859 input: Input spectrogram for C channels, shape (B, C, F, N)
860 input_length: Length of valid entries along the time dimension, shape (B,)
861
862 Returns:
863 Time-domain signal with T time-domain samples and C channels, (B, C, T)
864 and output length with shape (B,).
865 """
866 B, F, N = input.size(0), input.size(-2), input.size(-1)
867 assert F == self.F, f'Number of subbands F={F} not matching self.F={self.F}'
868 input = input.view(B, -1, F, N)
869
870 # iSTFT output (B, C, T)
871 with torch.cuda.amp.autocast(enabled=False):
872 output = self.istft(input.cfloat())
873
874 if input_length is not None:
875 # Mask padded samples
876 output_length = self.get_output_length(input_length=input_length)
877
878 length_mask: torch.Tensor = make_seq_mask_like(
879 lengths=output_length, like=output, time_dim=-1, valid_ones=False
880 )
881 output = output.masked_fill(length_mask, 0.0)
882 else:
883                 # Assume all samples are valid for all examples in the batch
884 output_length = output.size(-1) * torch.ones(B, device=output.device).long()
885
886 return output, output_length
887
888 def get_output_length(self, input_length: torch.Tensor) -> torch.Tensor:
889 """Get length of valid samples for the output.
890
891 Args:
892 input_length: number of valid frames, shape (B,)
893
894 Returns:
895 Number of valid samples, shape (B,)
896 """
897 output_length = input_length.sub(1).mul(self.istft.hop_length).long()
898 return output_length
899
900
901 @dataclass
902 class AudioToMelSpectrogramPreprocessorConfig:
903 _target_: str = "nemo.collections.asr.modules.AudioToMelSpectrogramPreprocessor"
904 sample_rate: int = 16000
905 window_size: float = 0.02
906 window_stride: float = 0.01
907 n_window_size: Optional[int] = None
908 n_window_stride: Optional[int] = None
909 window: str = "hann"
910 normalize: str = "per_feature"
911 n_fft: Optional[int] = None
912 preemph: float = 0.97
913 features: int = 64
914 lowfreq: int = 0
915 highfreq: Optional[int] = None
916 log: bool = True
917 log_zero_guard_type: str = "add"
918 log_zero_guard_value: float = 2 ** -24
919 dither: float = 1e-5
920 pad_to: int = 16
921 frame_splicing: int = 1
922 exact_pad: bool = False
923 pad_value: int = 0
924 mag_power: float = 2.0
925 rng: Optional[str] = None
926 nb_augmentation_prob: float = 0.0
927 nb_max_freq: int = 4000
928 use_torchaudio: bool = False
929 mel_norm: str = "slaney"
930 stft_exact_pad: bool = False # Deprecated argument, kept for compatibility with older checkpoints.
931 stft_conv: bool = False # Deprecated argument, kept for compatibility with older checkpoints.
932
933
934 @dataclass
935 class AudioToMFCCPreprocessorConfig:
936 _target_: str = 'nemo.collections.asr.modules.AudioToMFCCPreprocessor'
937 sample_rate: int = 16000
938 window_size: float = 0.02
939 window_stride: float = 0.01
940 n_window_size: Optional[int] = None
941 n_window_stride: Optional[int] = None
942 window: str = 'hann'
943 n_fft: Optional[int] = None
944 lowfreq: Optional[float] = 0.0
945 highfreq: Optional[float] = None
946 n_mels: int = 64
947 n_mfcc: int = 64
948 dct_type: int = 2
949 norm: str = 'ortho'
950 log: bool = True
951
952
953 @dataclass
954 class SpectrogramAugmentationConfig:
955 _target_: str = "nemo.collections.asr.modules.SpectrogramAugmentation"
956 freq_masks: int = 0
957 time_masks: int = 0
958 freq_width: int = 0
959 time_width: Optional[Any] = 0
960 rect_masks: int = 0
961 rect_time: int = 0
962 rect_freq: int = 0
963 mask_value: float = 0
964 rng: Optional[Any] = None # random.Random() type
965 use_numba_spec_augment: bool = True
966
967
968 @dataclass
969 class CropOrPadSpectrogramAugmentationConfig:
970 audio_length: int
971 _target_: str = "nemo.collections.asr.modules.CropOrPadSpectrogramAugmentation"
972
973
974 @dataclass
975 class MaskedPatchAugmentationConfig:
976 patch_size: int = 48
977 mask_patches: float = 10.0
978 freq_masks: int = 0
979 freq_width: int = 0
980 _target_: str = "nemo.collections.asr.modules.MaskedPatchAugmentation"
981
[end of nemo/collections/asr/modules/audio_preprocessing.py]
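The length bookkeeping in `AudioToSpectrogram.get_output_length` and `SpectrogramToAudio.get_output_length` above can be sanity-checked without torch. A minimal sketch of the frame/sample round trip for a centered STFT (helper names here are hypothetical, not part of NeMo):

```python
def stft_num_frames(num_samples: int, hop_length: int) -> int:
    # Mirrors AudioToSpectrogram.get_output_length: floor(T / hop) + 1
    return num_samples // hop_length + 1


def istft_num_samples(num_frames: int, hop_length: int) -> int:
    # Mirrors SpectrogramToAudio.get_output_length: (N - 1) * hop
    return (num_frames - 1) * hop_length


hop = 160
for n in (16000, 16001, 12345):
    frames = stft_num_frames(n, hop)
    recovered = istft_num_samples(frames, hop)
    # The round trip recovers the valid length up to hop-size granularity
    assert recovered <= n < recovered + hop
```

Note that the round trip only truncates to a multiple of the hop length, which is one reason both modules also mask padded positions explicitly.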
[start of nemo/collections/asr/parts/k2/classes.py]
1 # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from abc import ABC
16 from dataclasses import dataclass
17 from typing import Any, Optional, Tuple
18
19 import torch
20 from omegaconf import DictConfig
21
22 from nemo.utils import logging
23
24
25 @dataclass
26 class GraphIntersectDenseConfig:
27 """Graph dense intersection config.
28 """
29
30 search_beam: float = 20.0
31 output_beam: float = 10.0
32 min_active_states: int = 30
33 max_active_states: int = 10000
34
35
36 @dataclass
37 class GraphModuleConfig:
38 """Config for graph modules.
39 Typically used with graph losses and decoders.
40 """
41
42 topo_type: str = "default"
43 topo_with_self_loops: bool = True
44 token_lm: Optional[Any] = None
45 intersect_pruned: bool = False
46 intersect_conf: GraphIntersectDenseConfig = GraphIntersectDenseConfig()
47 boost_coeff: float = 0.0
48 predictor_window_size: int = 0
49 predictor_step_size: int = 1
50
51
52 class ASRK2Mixin(ABC):
53 """k2 Mixin class that simplifies the construction of various models with k2-based losses.
54
55 It does the following:
56 - Sets up the graph loss and decoder (methods _init_k2 and update_k2_modules).
57 - Registers external graphs, if needed.
58 - Augments forward(...) with optional graph decoding to get accurate predictions.
59 """
60
61 def _init_k2(self):
62 """
63 k2-related initialization implementation.
64
65         This method is expected to run after __init__, which sets self._cfg.
66         self._cfg is expected to have the attribute graph_module_cfg.
67 """
68 if not hasattr(self, "_cfg"):
69 raise ValueError("self._cfg must be set before calling _init_k2().")
70 if not hasattr(self._cfg, "graph_module_cfg") or self._cfg.graph_module_cfg is None:
71 raise ValueError("self._cfg.graph_module_cfg must be set and cannot be None.")
72 self.graph_module_cfg = self._cfg.graph_module_cfg
73
74 # register token_lm for MAPLoss
75 criterion_type = self.graph_module_cfg.get("criterion_type", "ml")
76 self.use_graph_lm = criterion_type == "map"
77 if self.use_graph_lm:
78 token_lm_path = self.graph_module_cfg.backend_cfg.get("token_lm", None)
79 if token_lm_path is None:
80 raise ValueError(
81 f"graph_module_cfg.backend_cfg.token_lm is empty. It must be set for criterion_type == `{criterion_type}`"
82 )
83 token_lm_path = self.register_artifact('graph_module_cfg.backend_cfg.token_lm', token_lm_path)
84 self.graph_module_cfg.backend_cfg["token_lm"] = token_lm_path
85
86 self.update_k2_modules(self.graph_module_cfg)
87
88 def update_k2_modules(self, input_cfg: DictConfig):
89 """
90 Helper function to initialize or update k2 loss and transcribe_decoder.
91
92 Args:
93 input_cfg: DictConfig to take new parameters from. Schema is expected as in
94 nemo.collections.asr.models.configs.k2_sequence_models_config.GraphModuleConfig
95 """
96 del self.loss
97 if hasattr(self, "transcribe_decoder"):
98 del self.transcribe_decoder
99
100 if hasattr(self, "joint"):
101 # RNNT
102 num_classes = self.joint.num_classes_with_blank - 1
103 else:
104 # CTC, MMI, ...
105 num_classes = self.decoder.num_classes_with_blank - 1
106 remove_consecutive = input_cfg.backend_cfg.get("topo_with_self_loops", True) and input_cfg.backend_cfg.get(
107 "topo_type", "default"
108 ) not in ["forced_blank", "identity",]
109 self._wer.remove_consecutive = remove_consecutive
110
111 from nemo.collections.asr.losses.lattice_losses import LatticeLoss
112
113 self.loss = LatticeLoss(
114 num_classes=num_classes,
115 reduction=self._cfg.get("ctc_reduction", "mean_batch"),
116 backend="k2",
117 criterion_type=input_cfg.get("criterion_type", "ml"),
118 loss_type=input_cfg.get("loss_type", "ctc"),
119 split_batch_size=input_cfg.get("split_batch_size", 0),
120 graph_module_cfg=input_cfg.backend_cfg,
121 )
122
123 criterion_type = self.loss.criterion_type
124 self.use_graph_lm = criterion_type == "map"
125 transcribe_training = input_cfg.get("transcribe_training", False)
126 if transcribe_training and criterion_type == "ml":
127 logging.warning(
128 f"""You do not need to use transcribe_training=`{transcribe_training}`
129 with criterion_type=`{criterion_type}`. transcribe_training will be set to False."""
130 )
131 transcribe_training = False
132 self.transcribe_training = transcribe_training
133 if self.use_graph_lm:
134 from nemo.collections.asr.modules.graph_decoder import ViterbiDecoderWithGraph
135
136 self.transcribe_decoder = ViterbiDecoderWithGraph(
137 num_classes=num_classes,
138 backend="k2",
139 dec_type="token_lm",
140 return_type="1best",
141 return_ilabels=True,
142 output_aligned=True,
143 split_batch_size=input_cfg.get("split_batch_size", 0),
144 graph_module_cfg=input_cfg.backend_cfg,
145 )
146
147 def _forward_k2_post_processing(
148 self, log_probs: torch.Tensor, encoded_length: torch.Tensor, greedy_predictions: torch.Tensor
149 ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
150 """
151         k2-related post-processing part of .forward()
152
153 Args:
154 log_probs: The log probabilities tensor of shape [B, T, D].
155 encoded_length: The lengths of the acoustic sequence after propagation through the encoder, of shape [B].
156 greedy_predictions: The greedy token predictions of the model of shape [B, T]
157
158 Returns:
159 A tuple of 3 elements -
160 1) The log probabilities tensor of shape [B, T, D].
161 2) The lengths of the acoustic sequence after propagation through the encoder, of shape [B].
162 3) The greedy token predictions of the model of shape [B, T] (via argmax)
163 """
164 # greedy_predictions from .forward() are incorrect for criterion_type=`map`
165 # getting correct greedy_predictions, if needed
166 if self.use_graph_lm and (not self.training or self.transcribe_training):
167 greedy_predictions, encoded_length, _ = self.transcribe_decoder.forward(
168 log_probs=log_probs, log_probs_length=encoded_length
169 )
170 return log_probs, encoded_length, greedy_predictions
171
[end of nemo/collections/asr/parts/k2/classes.py]
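The `transcribe_training` gating in `update_k2_modules` can be summarized as a small pure function (a sketch with a hypothetical helper name; the real method additionally logs a warning and constructs the Viterbi decoder):

```python
def resolve_transcribe_training(criterion_type: str, transcribe_training: bool) -> bool:
    # transcribe_training only makes sense for MAP training, where greedy
    # predictions from .forward() are inaccurate; force it off for "ml".
    if transcribe_training and criterion_type == "ml":
        return False
    return transcribe_training


assert resolve_transcribe_training("ml", True) is False
assert resolve_transcribe_training("map", True) is True
```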
[start of nemo/collections/asr/parts/submodules/adapters/multi_head_attention_adapter_module.py]
1 # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import math
16 from dataclasses import dataclass
17 from typing import Any, Optional
18
19 import torch
20 from torch import nn as nn
21
22 from nemo.collections.asr.parts.submodules import multi_head_attention as mha
23 from nemo.collections.common.parts import adapter_modules
24 from nemo.core.classes.mixins import adapter_mixin_strategies
25
26
27 class MHAResidualAddAdapterStrategy(adapter_mixin_strategies.ResidualAddAdapterStrategy):
28 """
29 An implementation of residual addition of an adapter module with its input for the MHA Adapters.
30 """
31
32 def forward(self, input: torch.Tensor, adapter: torch.nn.Module, *, module: 'AdapterModuleMixin'):
33 """
34         A basic strategy, comprising a residual connection over the input after the forward pass of
35 the underlying adapter. Additional work is done to pack and unpack the dictionary of inputs and outputs.
36
37 Note: The `value` tensor is added to the output of the attention adapter as the residual connection.
38
39 Args:
40 input: A dictionary of multiple input arguments for the adapter module.
41 `query`, `key`, `value`: Original output tensor of the module, or the output of the
42                     previous adapter (if more than one adapter is enabled).
43 `mask`: Attention mask.
44 `pos_emb`: Optional positional embedding for relative encoding.
45 adapter: The adapter module that is currently required to perform the forward pass.
46 module: The calling module, in its entirety. It is a module that implements `AdapterModuleMixin`,
47 therefore the strategy can access all other adapters in this module via `module.adapter_layer`.
48
49 Returns:
50 The result tensor, after one of the active adapters has finished its forward passes.
51 """
52 out = self.compute_output(input, adapter, module=module)
53
54 # If not in training mode, or probability of stochastic depth is 0, skip step.
55 p = self.stochastic_depth
56 if not module.training or p == 0.0:
57 pass
58 else:
59 out = self.apply_stochastic_depth(out, input['value'], adapter, module=module)
60
61 # Return the residual connection output = input + adapter(input)
62 result = input['value'] + out
63
64 # If l2_lambda is activated, register the loss value
65 self.compute_auxiliary_losses(result, input['value'], adapter, module=module)
66
67 return result
68
69 def compute_output(
70 self, input: torch.Tensor, adapter: torch.nn.Module, *, module: 'AdapterModuleMixin'
71 ) -> torch.Tensor:
72 """
73 Compute the output of a single adapter to some input.
74
75 Args:
76             input: Original output tensor of the module, or the output of the previous adapter (if more
77                 than one adapter is enabled).
78 adapter: The adapter module that is currently required to perform the forward pass.
79 module: The calling module, in its entirety. It is a module that implements `AdapterModuleMixin`,
80 therefore the strategy can access all other adapters in this module via `module.adapter_layer`.
81
82 Returns:
83 The result tensor, after one of the active adapters has finished its forward passes.
84 """
85 if isinstance(input, (list, tuple)):
86 out = adapter(*input)
87 elif isinstance(input, dict):
88 out = adapter(**input)
89 else:
90 out = adapter(input)
91 return out
92
93
94 @dataclass
95 class MHAResidualAddAdapterStrategyConfig(adapter_mixin_strategies.ResidualAddAdapterStrategyConfig):
96 _target_: str = "{0}.{1}".format(
97 MHAResidualAddAdapterStrategy.__module__, MHAResidualAddAdapterStrategy.__name__
98 ) # mandatory field
99
100
101 class MultiHeadAttentionAdapter(mha.MultiHeadAttention, adapter_modules.AdapterModuleUtil):
102 """Multi-Head Attention layer of Transformer.
103 Args:
104 n_head (int): number of heads
105 n_feat (int): size of the features
106 dropout_rate (float): dropout rate
107 proj_dim (int, optional): Optional integer value for projection before computing attention.
108 If None, then there is no projection (equivalent to proj_dim = n_feat).
109 If > 0, then will project the n_feat to proj_dim before calculating attention.
110             If < 1, then will be set to n_head, so that each head has a projected dimension of 1.
111 adapter_strategy: By default, MHAResidualAddAdapterStrategyConfig. An adapter composition function object.
112 """
113
114 def __init__(
115 self,
116 n_head: int,
117 n_feat: int,
118 dropout_rate: float,
119 proj_dim: Optional[int] = None,
120 adapter_strategy: MHAResidualAddAdapterStrategy = None,
121 ):
122 super().__init__(n_head=n_head, n_feat=n_feat, dropout_rate=dropout_rate, max_cache_len=0)
123
124 self.pre_norm = nn.LayerNorm(n_feat)
125
126 # Set the projection dim to number of heads automatically
127 if proj_dim is not None and proj_dim < 1:
128 proj_dim = n_head
129
130 self.proj_dim = proj_dim
131
132 # Recompute weights for projection dim
133 if self.proj_dim is not None:
134 if self.proj_dim % n_head != 0:
135 raise ValueError(f"proj_dim ({proj_dim}) is not divisible by n_head ({n_head})")
136
137 self.d_k = self.proj_dim // n_head
138 self.s_d_k = math.sqrt(self.d_k)
139 self.linear_q = nn.Linear(n_feat, self.proj_dim)
140 self.linear_k = nn.Linear(n_feat, self.proj_dim)
141 self.linear_v = nn.Linear(n_feat, self.proj_dim)
142 self.linear_out = nn.Linear(self.proj_dim, n_feat)
143
144 # Setup adapter strategy
145 self.setup_adapter_strategy(adapter_strategy)
146
147 # reset parameters for Q to be identity operation
148 self.reset_parameters()
149
150 def forward(self, query, key, value, mask, pos_emb=None, cache=None):
151 """Compute 'Scaled Dot Product Attention'.
152 Args:
153 query (torch.Tensor): (batch, time1, size)
154 key (torch.Tensor): (batch, time2, size)
155 value(torch.Tensor): (batch, time2, size)
156 mask (torch.Tensor): (batch, time1, time2)
157 cache (torch.Tensor) : (batch, time_cache, size)
158
159 returns:
160 output (torch.Tensor): transformed `value` (batch, time1, d_model) weighted by the query dot key attention
161 cache (torch.Tensor) : (batch, time_cache_next, size)
162 """
163 # Need to perform duplicate computations as at this point the tensors have been
164 # separated by the adapter forward
165 query = self.pre_norm(query)
166 key = self.pre_norm(key)
167 value = self.pre_norm(value)
168
169 return super().forward(query, key, value, mask, pos_emb, cache=cache)
170
171 def reset_parameters(self):
172 with torch.no_grad():
173 nn.init.zeros_(self.linear_out.weight)
174 nn.init.zeros_(self.linear_out.bias)
175
176 def get_default_strategy_config(self) -> 'dataclass':
177 return MHAResidualAddAdapterStrategyConfig()
178
179
180 @dataclass
181 class MultiHeadAttentionAdapterConfig:
182 n_head: int
183 n_feat: int
184 dropout_rate: float = 0.0
185 proj_dim: Optional[int] = None
186 adapter_strategy: Optional[Any] = MHAResidualAddAdapterStrategyConfig()
187 _target_: str = "{0}.{1}".format(MultiHeadAttentionAdapter.__module__, MultiHeadAttentionAdapter.__name__)
188
189
190 class RelPositionMultiHeadAttentionAdapter(mha.RelPositionMultiHeadAttention, adapter_modules.AdapterModuleUtil):
191 """Multi-Head Attention layer of Transformer-XL with support of relative positional encoding.
192 Paper: https://arxiv.org/abs/1901.02860
193 Args:
194 n_head (int): number of heads
195 n_feat (int): size of the features
196 dropout_rate (float): dropout rate
197 proj_dim (int, optional): Optional integer value for projection before computing attention.
198 If None, then there is no projection (equivalent to proj_dim = n_feat).
199 If > 0, then will project the n_feat to proj_dim before calculating attention.
200             If < 1, then will be set to n_head, so that each head has a projected dimension of 1.
201 adapter_strategy: By default, MHAResidualAddAdapterStrategyConfig. An adapter composition function object.
202 """
203
204 def __init__(
205 self,
206 n_head: int,
207 n_feat: int,
208 dropout_rate: float,
209 proj_dim: Optional[int] = None,
210 adapter_strategy: MHAResidualAddAdapterStrategyConfig = None,
211 ):
212 super().__init__(
213 n_head=n_head, n_feat=n_feat, dropout_rate=dropout_rate, pos_bias_u=None, pos_bias_v=None, max_cache_len=0
214 )
215
216 self.pre_norm = nn.LayerNorm(n_feat)
217
218 # Set the projection dim to number of heads automatically
219 if proj_dim is not None and proj_dim < 1:
220 proj_dim = n_head
221
222 self.proj_dim = proj_dim
223
224 # Recompute weights for projection dim
225 if self.proj_dim is not None:
226 if self.proj_dim % n_head != 0:
227 raise ValueError(f"proj_dim ({proj_dim}) is not divisible by n_head ({n_head})")
228
229 self.d_k = self.proj_dim // n_head
230 self.s_d_k = math.sqrt(self.d_k)
231 self.linear_q = nn.Linear(n_feat, self.proj_dim)
232 self.linear_k = nn.Linear(n_feat, self.proj_dim)
233 self.linear_v = nn.Linear(n_feat, self.proj_dim)
234 self.linear_out = nn.Linear(self.proj_dim, n_feat)
235 self.linear_pos = nn.Linear(n_feat, self.proj_dim, bias=False)
236 self.pos_bias_u = nn.Parameter(torch.FloatTensor(self.h, self.d_k))
237 self.pos_bias_v = nn.Parameter(torch.FloatTensor(self.h, self.d_k))
238
239 # Setup adapter strategy
240 self.setup_adapter_strategy(adapter_strategy)
241
242 # reset parameters for Q to be identity operation
243 self.reset_parameters()
244
245 def forward(self, query, key, value, mask, pos_emb, cache=None):
246 """Compute 'Scaled Dot Product Attention' with rel. positional encoding.
247 Args:
248 query (torch.Tensor): (batch, time1, size)
249 key (torch.Tensor): (batch, time2, size)
250 value(torch.Tensor): (batch, time2, size)
251 mask (torch.Tensor): (batch, time1, time2)
252 pos_emb (torch.Tensor) : (batch, time1, size)
253 cache (torch.Tensor) : (batch, time_cache, size)
254 Returns:
255 output (torch.Tensor): transformed `value` (batch, time1, d_model) weighted by the query dot key attention
256 cache_next (torch.Tensor) : (batch, time_cache_next, size)
257 """
258 # Need to perform duplicate computations as at this point the tensors have been
259 # separated by the adapter forward
260 query = self.pre_norm(query)
261 key = self.pre_norm(key)
262 value = self.pre_norm(value)
263
264 return super().forward(query, key, value, mask, pos_emb, cache=cache)
265
266 def reset_parameters(self):
267 with torch.no_grad():
268 nn.init.zeros_(self.linear_out.weight)
269 nn.init.zeros_(self.linear_out.bias)
270
271         # NOTE: This exact procedure is apparently highly important.
272 # Above operation is safe to do as self.linear_out.weight *= 0.0 (similar for bias)
273 # However:
274 # DO NOT REPLACE BELOW WITH self.pos_bias_u *= 0.0 OR self.pos_bias_v *= 0.0
275 # For some reason at init sometimes it will cause the value of the tensor to become NaN
276 # All operations to compute matrix_ac and matrix_bd will then fail.
277 nn.init.zeros_(self.pos_bias_u)
278 nn.init.zeros_(self.pos_bias_v)
279
280 def get_default_strategy_config(self) -> 'dataclass':
281 return MHAResidualAddAdapterStrategyConfig()
282
283
284 @dataclass
285 class RelPositionMultiHeadAttentionAdapterConfig:
286 n_head: int
287 n_feat: int
288 dropout_rate: float = 0.0
289 proj_dim: Optional[int] = None
290 adapter_strategy: Optional[Any] = MHAResidualAddAdapterStrategyConfig()
291 _target_: str = "{0}.{1}".format(
292 RelPositionMultiHeadAttentionAdapter.__module__, RelPositionMultiHeadAttentionAdapter.__name__
293 )
294
295
296 class PositionalEncodingAdapter(mha.PositionalEncoding, adapter_modules.AdapterModuleUtil):
297
298 """
299 Absolute positional embedding adapter.
300
301 .. note::
302
303 Absolute positional embedding value is added to the input tensor *without residual connection* !
304         Therefore, the input is changed; if you only require the positional embedding, drop the returned `x` !
305
306 Args:
307 d_model (int): The input dimension of x.
308 max_len (int): The max sequence length.
309 xscale (float): The input scaling factor. Defaults to 1.0.
310 adapter_strategy (AbstractAdapterStrategy): By default, ReturnResultAdapterStrategyConfig.
311 An adapter composition function object.
312 NOTE: Since this is a positional encoding, it will not add a residual !
313 """
314
315 def __init__(
316 self,
317 d_model: int,
318 max_len: int = 5000,
319 xscale=1.0,
320 adapter_strategy: adapter_mixin_strategies.ReturnResultAdapterStrategyConfig = None,
321 ):
322
323 super().__init__(
324 d_model=d_model, dropout_rate=0.0, max_len=max_len, xscale=xscale, dropout_rate_emb=0.0,
325 )
326
327 # Setup adapter strategy
328 self.setup_adapter_strategy(adapter_strategy)
329
330 def get_default_strategy_config(self) -> 'dataclass':
331 return adapter_mixin_strategies.ReturnResultAdapterStrategyConfig()
332
333
334 @dataclass
335 class PositionalEncodingAdapterConfig:
336 d_model: int
337 max_len: int = 5000
338 xscale: float = 1.0
339 adapter_strategy: Optional[Any] = adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
340 _target_: str = "{0}.{1}".format(PositionalEncodingAdapter.__module__, PositionalEncodingAdapter.__name__)
341
342
343 class RelPositionalEncodingAdapter(mha.RelPositionalEncoding, adapter_modules.AdapterModuleUtil):
344 """
345 Relative positional encoding for TransformerXL's layers
346 See : Appendix B in https://arxiv.org/abs/1901.02860
347
348 .. note::
349
350 Relative positional embedding value is **not** added to the input tensor !
351         Therefore, the input is not changed; if you only require the positional embedding, drop the returned `x` !
352
353 Args:
354 d_model (int): embedding dim
355 max_len (int): maximum input length
356 xscale (bool): whether to scale the input by sqrt(d_model)
357 adapter_strategy: By default, ReturnResultAdapterStrategyConfig. An adapter composition function object.
358 """
359
360 def __init__(
361 self,
362 d_model: int,
363 max_len: int = 5000,
364 xscale=1.0,
365 adapter_strategy: adapter_mixin_strategies.ReturnResultAdapterStrategyConfig = None,
366 ):
367 super().__init__(d_model=d_model, dropout_rate=0.0, max_len=max_len, xscale=xscale, dropout_rate_emb=0.0)
368
369 # Setup adapter strategy
370 self.setup_adapter_strategy(adapter_strategy)
371
372 def get_default_strategy_config(self) -> 'dataclass':
373 return adapter_mixin_strategies.ReturnResultAdapterStrategyConfig()
374
375
376 @dataclass
377 class RelPositionalEncodingAdapterConfig:
378 d_model: int
379 max_len: int = 5000
380 xscale: float = 1.0
381 adapter_strategy: Optional[Any] = adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
382 _target_: str = "{0}.{1}".format(RelPositionalEncodingAdapter.__module__, RelPositionalEncodingAdapter.__name__)
383
[end of nemo/collections/asr/parts/submodules/adapters/multi_head_attention_adapter_module.py]
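The adapters above zero-initialize `linear_out` (and the positional biases) so that, combined with the residual addition in `MHAResidualAddAdapterStrategy`, a freshly added adapter is an identity map. A torch-free sketch of that invariant (toy scalar "projection", hypothetical names):

```python
def adapter_projection(values, weight, bias):
    # Stand-in for the adapter's final linear_out: w * v + b per element
    return [weight * v + bias for v in values]


def residual_add(values, weight, bias):
    # Mirrors the strategy: result = value + adapter(value)
    return [v + o for v, o in zip(values, adapter_projection(values, weight, bias))]


values = [0.5, -1.25, 3.0]
# Zero-initialized projection: the adapter contributes nothing at init
assert residual_add(values, weight=0.0, bias=0.0) == values
```

This is why an adapter can be dropped into a pretrained attention layer without perturbing its outputs before fine-tuning begins.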
[start of nemo/collections/asr/parts/submodules/ctc_beam_decoding.py]
1 # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import math
16 import os
17 from dataclasses import dataclass
18 from typing import List, Optional, Tuple, Union
19
20 import torch
21
22 from nemo.collections.asr.parts.utils import rnnt_utils
23 from nemo.collections.common.tokenizers.tokenizer_spec import TokenizerSpec
24 from nemo.core.classes import Typing, typecheck
25 from nemo.core.neural_types import HypothesisType, LengthsType, LogprobsType, NeuralType
26 from nemo.utils import logging
27
28 DEFAULT_TOKEN_OFFSET = 100
29
30
31 def pack_hypotheses(
32 hypotheses: List[rnnt_utils.NBestHypotheses], logitlen: torch.Tensor,
33 ) -> List[rnnt_utils.NBestHypotheses]:
34
35 if logitlen is not None:
36 if hasattr(logitlen, 'cpu'):
37 logitlen_cpu = logitlen.to('cpu')
38 else:
39 logitlen_cpu = logitlen
40
41 for idx, hyp in enumerate(hypotheses): # type: rnnt_utils.NBestHypotheses
42 for candidate_idx, cand in enumerate(hyp.n_best_hypotheses):
43 cand.y_sequence = torch.tensor(cand.y_sequence, dtype=torch.long)
44
45 if logitlen is not None:
46 cand.length = logitlen_cpu[idx]
47
48 if cand.dec_state is not None:
49 cand.dec_state = _states_to_device(cand.dec_state)
50
51 return hypotheses
52
53
54 def _states_to_device(dec_state, device='cpu'):
55 if torch.is_tensor(dec_state):
56 dec_state = dec_state.to(device)
57
58 elif isinstance(dec_state, (list, tuple)):
59 dec_state = tuple(_states_to_device(dec_i, device) for dec_i in dec_state)
60
61 return dec_state
62
63
64 class AbstractBeamCTCInfer(Typing):
65 """A beam CTC decoder.
66
67 Provides a common abstraction for sample level beam decoding.
68
69 Args:
70 blank_id: int, index of the blank token. Can be 0 or len(vocabulary).
71 beam_size: int, size of the beam used in the underlying beam search engine.
72
73 """
74
75 @property
76 def input_types(self):
77 """Returns definitions of module input ports.
78 """
79 return {
80 "decoder_output": NeuralType(('B', 'T', 'D'), LogprobsType()),
81 "decoder_lengths": NeuralType(tuple('B'), LengthsType()),
82 }
83
84 @property
85 def output_types(self):
86 """Returns definitions of module output ports.
87 """
88 return {"predictions": [NeuralType(elements_type=HypothesisType())]}
89
90 def __init__(self, blank_id: int, beam_size: int):
91 self.blank_id = blank_id
92
93 if beam_size < 1:
94 raise ValueError("Beam search size cannot be less than 1!")
95
96 self.beam_size = beam_size
97
98 # Variables set by corresponding setter methods
99 self.vocab = None
100 self.decoding_type = None
101 self.tokenizer = None
102
103 # Utility maps for vocabulary
104 self.vocab_index_map = None
105 self.index_vocab_map = None
106
107 # Internal variable, used to prevent double reduction of consecutive tokens (ctc collapse)
108 self.override_fold_consecutive_value = None
109
110 def set_vocabulary(self, vocab: List[str]):
111 """
112 Set the vocabulary of the decoding framework.
113
114 Args:
115 vocab: List of str. Each token corresponds to its location in the vocabulary emitted by the model.
116 Note that this vocabulary must NOT contain the "BLANK" token.
117 """
118 self.vocab = vocab
119 self.vocab_index_map = {v: i for i, v in enumerate(vocab)}
120 self.index_vocab_map = {i: v for i, v in enumerate(vocab)}
121
122 def set_decoding_type(self, decoding_type: str):
123 """
124 Sets the decoding type of the framework. Can support either char or subword models.
125
126 Args:
127 decoding_type: Str corresponding to decoding type. Only supports "char" and "subword".
128 """
129 decoding_type = decoding_type.lower()
130 supported_types = ['char', 'subword']
131
132 if decoding_type not in supported_types:
133 raise ValueError(
134 f"Unsupported decoding type. Supported types = {supported_types}.\n" f"Given = {decoding_type}"
135 )
136
137 self.decoding_type = decoding_type
138
139 def set_tokenizer(self, tokenizer: TokenizerSpec):
140 """
141 Set the tokenizer of the decoding framework.
142
143 Args:
144 tokenizer: NeMo tokenizer object, which inherits from TokenizerSpec.
145 """
146 self.tokenizer = tokenizer
147
148 @typecheck()
149 def forward(
150 self, decoder_output: torch.Tensor, decoder_lengths: torch.Tensor,
151 ) -> Tuple[List[Union[rnnt_utils.Hypothesis, rnnt_utils.NBestHypotheses]]]:
152 """Returns a list of hypotheses given an input batch of the encoder hidden embedding.
153         Output token is generated auto-regressively.
154
155 Args:
156 decoder_output: A tensor of size (batch, timesteps, features) or (batch, timesteps) (each timestep is a label).
157 decoder_lengths: list of int representing the length of each sequence
158 output sequence.
159
160 Returns:
161 packed list containing batch number of sentences (Hypotheses).
162 """
163 raise NotImplementedError()
164
165 def __call__(self, *args, **kwargs):
166 return self.forward(*args, **kwargs)
167
168
169 class BeamCTCInfer(AbstractBeamCTCInfer):
170 """A greedy CTC decoder.
171
172 Provides a common abstraction for sample level and batch level greedy decoding.
173
174 Args:
175         blank_id: int, index of the blank token. Can be 0 or len(vocabulary).
176 preserve_alignments: Bool flag which preserves the history of logprobs generated during
177 decoding (sample / batched). When set to true, the Hypothesis will contain
178             the non-null value for `logprobs` in it. Here, `logprobs` is a torch.Tensor.
179 compute_timestamps: A bool flag, which determines whether to compute the character/subword, or
180             word-based timestamp mapping the output log-probabilities to discrete intervals of timestamps.
181 The timestamps will be available in the returned Hypothesis.timestep as a dictionary.
182
183 """
184
185 def __init__(
186 self,
187 blank_id: int,
188 beam_size: int,
189 search_type: str = "default",
190 return_best_hypothesis: bool = True,
191 preserve_alignments: bool = False,
192 compute_timestamps: bool = False,
193 beam_alpha: float = 1.0,
194 beam_beta: float = 0.0,
195 kenlm_path: str = None,
196 flashlight_cfg: Optional['FlashlightConfig'] = None,
197 pyctcdecode_cfg: Optional['PyCTCDecodeConfig'] = None,
198 ):
199 super().__init__(blank_id=blank_id, beam_size=beam_size)
200
201 self.search_type = search_type
202 self.return_best_hypothesis = return_best_hypothesis
203 self.preserve_alignments = preserve_alignments
204 self.compute_timestamps = compute_timestamps
205
206 if self.compute_timestamps:
207 raise ValueError(f"Currently this flag is not supported for beam search algorithms.")
208
209 self.vocab = None # This must be set by specific method by user before calling forward() !
210
211 if search_type == "default" or search_type == "nemo":
212 self.search_algorithm = self.default_beam_search
213 elif search_type == "pyctcdecode":
214 self.search_algorithm = self._pyctcdecode_beam_search
215 elif search_type == "flashlight":
216 self.search_algorithm = self.flashlight_beam_search
217 else:
218 raise NotImplementedError(
219 f"The search type ({search_type}) supplied is not supported!\n"
220 f"Please use one of : (default, nemo, pyctcdecode)"
221 )
222
223 # Log the beam search algorithm
224 logging.info(f"Beam search algorithm: {search_type}")
225
226 self.beam_alpha = beam_alpha
227 self.beam_beta = beam_beta
228
229 # Default beam search args
230 self.kenlm_path = kenlm_path
231
232 # PyCTCDecode params
233 if pyctcdecode_cfg is None:
234 pyctcdecode_cfg = PyCTCDecodeConfig()
235 self.pyctcdecode_cfg = pyctcdecode_cfg # type: PyCTCDecodeConfig
236
237 if flashlight_cfg is None:
238 flashlight_cfg = FlashlightConfig()
239 self.flashlight_cfg = flashlight_cfg
240
241 # Default beam search scorer functions
242 self.default_beam_scorer = None
243 self.pyctcdecode_beam_scorer = None
244 self.flashlight_beam_scorer = None
245 self.token_offset = 0
246
247 @typecheck()
248 def forward(
249 self, decoder_output: torch.Tensor, decoder_lengths: torch.Tensor,
250 ) -> Tuple[List[Union[rnnt_utils.Hypothesis, rnnt_utils.NBestHypotheses]]]:
251 """Returns a list of hypotheses given an input batch of the encoder hidden embedding.
252         Output token is generated auto-regressively.
253
254 Args:
255 decoder_output: A tensor of size (batch, timesteps, features).
256 decoder_lengths: list of int representing the length of each sequence
257 output sequence.
258
259 Returns:
260 packed list containing batch number of sentences (Hypotheses).
261 """
262 if self.vocab is None:
263 raise RuntimeError("Please set the vocabulary with `set_vocabulary()` before calling this function.")
264
265 if self.decoding_type is None:
266 raise ValueError("Please set the decoding type with `set_decoding_type()` before calling this function.")
267
268 with torch.no_grad(), torch.inference_mode():
269 # Process each sequence independently
270 prediction_tensor = decoder_output
271
272 if prediction_tensor.ndim != 3:
273 raise ValueError(
274 f"`decoder_output` must be a tensor of shape [B, T, V] (log probs, float). "
275 f"Provided shape = {prediction_tensor.shape}"
276 )
277
278 # determine type of input - logprobs or labels
279 out_len = decoder_lengths if decoder_lengths is not None else None
280 hypotheses = self.search_algorithm(prediction_tensor, out_len)
281
282 # Pack results into Hypotheses
283 packed_result = pack_hypotheses(hypotheses, decoder_lengths)
284
285 # Pack the result
286 if self.return_best_hypothesis and isinstance(packed_result[0], rnnt_utils.NBestHypotheses):
287 packed_result = [res.n_best_hypotheses[0] for res in packed_result] # type: Hypothesis
288
289 return (packed_result,)
290
291 @torch.no_grad()
292 def default_beam_search(
293 self, x: torch.Tensor, out_len: torch.Tensor
294 ) -> List[Union[rnnt_utils.Hypothesis, rnnt_utils.NBestHypotheses]]:
295 """
296 Open Seq2Seq Beam Search Algorithm (DeepSpeed)
297
298 Args:
299 x: Tensor of shape [B, T, V+1], where B is the batch size, T is the maximum sequence length,
300 and V is the vocabulary size. The tensor contains log-probabilities.
301 out_len: Tensor of shape [B], contains lengths of each sequence in the batch.
302
303 Returns:
304 A list of NBestHypotheses objects, one for each sequence in the batch.
305 """
306 if self.compute_timestamps:
307 raise ValueError(
308 f"Beam Search with strategy `{self.search_type}` does not support time stamp calculation!"
309 )
310
311 if self.default_beam_scorer is None:
312 # Check for filepath
313 if self.kenlm_path is None or not os.path.exists(self.kenlm_path):
314 raise FileNotFoundError(
315 f"KenLM binary file not found at : {self.kenlm_path}. "
316 f"Please set a valid path in the decoding config."
317 )
318
319 # perform token offset for subword models
320 if self.decoding_type == 'subword':
321 vocab = [chr(idx + self.token_offset) for idx in range(len(self.vocab))]
322 else:
323 # char models
324 vocab = self.vocab
325
326 # Must import at runtime to avoid circular dependency due to module level import.
327 from nemo.collections.asr.modules.beam_search_decoder import BeamSearchDecoderWithLM
328
329 self.default_beam_scorer = BeamSearchDecoderWithLM(
330 vocab=vocab,
331 lm_path=self.kenlm_path,
332 beam_width=self.beam_size,
333 alpha=self.beam_alpha,
334 beta=self.beam_beta,
335 num_cpus=max(1, os.cpu_count()),
336 input_tensor=False,
337 )
338
339 x = x.to('cpu')
340
341 with typecheck.disable_checks():
342 data = [x[sample_id, : out_len[sample_id], :].softmax(dim=-1) for sample_id in range(len(x))]
343 beams_batch = self.default_beam_scorer.forward(log_probs=data, log_probs_length=None)
344
345 # For each sample in the batch
346 nbest_hypotheses = []
347 for beams_idx, beams in enumerate(beams_batch):
348 # For each beam candidate / hypothesis in each sample
349 hypotheses = []
350 for candidate_idx, candidate in enumerate(beams):
351 hypothesis = rnnt_utils.Hypothesis(
352 score=0.0, y_sequence=[], dec_state=None, timestep=[], last_token=None
353 )
354
355 # For subword encoding, NeMo will double encode the subword (multiple tokens) into a
356 # singular unicode id. In doing so, we preserve the semantic of the unicode token, and
357 # compress the size of the final KenLM ARPA / Binary file.
358 # In order to do double encoding, we shift the subword by some token offset.
359 # This step is ignored for character based models.
360 if self.decoding_type == 'subword':
361 pred_token_ids = [ord(c) - self.token_offset for c in candidate[1]]
362 else:
363 # Char models
364 pred_token_ids = [self.vocab_index_map[c] for c in candidate[1]]
365
366 # We preserve the token ids and the score for this hypothesis
367 hypothesis.y_sequence = pred_token_ids
368 hypothesis.score = candidate[0]
369
370 # If alignment must be preserved, we preserve a view of the output logprobs.
371 # Note this view is shared amongst all beams within the sample, be sure to clone it if you
372 # require specific processing for each sample in the beam.
373 # This is done to preserve memory.
374 if self.preserve_alignments:
375 hypothesis.alignments = x[beams_idx][: out_len[beams_idx]]
376
377 hypotheses.append(hypothesis)
378
379 # Wrap the result in NBestHypothesis.
380 hypotheses = rnnt_utils.NBestHypotheses(hypotheses)
381 nbest_hypotheses.append(hypotheses)
382
383 return nbest_hypotheses
384
385 @torch.no_grad()
386 def _pyctcdecode_beam_search(
387 self, x: torch.Tensor, out_len: torch.Tensor
388 ) -> List[Union[rnnt_utils.Hypothesis, rnnt_utils.NBestHypotheses]]:
389 """
390 PyCTCDecode Beam Search Algorithm. Should support Char and Subword models.
391
392 Args:
393 x: Tensor of shape [B, T, V+1], where B is the batch size, T is the maximum sequence length,
394 and V is the vocabulary size. The tensor contains log-probabilities.
395 out_len: Tensor of shape [B], contains lengths of each sequence in the batch.
396
397 Returns:
398 A list of NBestHypotheses objects, one for each sequence in the batch.
399 """
400 if self.compute_timestamps:
401 raise ValueError(
402 f"Beam Search with strategy `{self.search_type}` does not support time stamp calculation!"
403 )
404
405 try:
406 import pyctcdecode
407 except (ImportError, ModuleNotFoundError):
408 raise ImportError(
409 f"Could not load `pyctcdecode` library. Please install it from pip using :\n"
410 f"pip install --upgrade pyctcdecode"
411 )
412
413 if self.pyctcdecode_beam_scorer is None:
414 self.pyctcdecode_beam_scorer = pyctcdecode.build_ctcdecoder(
415 labels=self.vocab, kenlm_model_path=self.kenlm_path, alpha=self.beam_alpha, beta=self.beam_beta
416 ) # type: pyctcdecode.BeamSearchDecoderCTC
417
418 x = x.to('cpu').numpy()
419
420 with typecheck.disable_checks():
421 beams_batch = []
422 for sample_id in range(len(x)):
423 logprobs = x[sample_id, : out_len[sample_id], :]
424 result = self.pyctcdecode_beam_scorer.decode_beams(
425 logprobs,
426 beam_width=self.beam_size,
427 beam_prune_logp=self.pyctcdecode_cfg.beam_prune_logp,
428 token_min_logp=self.pyctcdecode_cfg.token_min_logp,
429 prune_history=self.pyctcdecode_cfg.prune_history,
430 hotwords=self.pyctcdecode_cfg.hotwords,
431 hotword_weight=self.pyctcdecode_cfg.hotword_weight,
432 lm_start_state=None,
433 ) # Output format: text, last_lm_state, text_frames, logit_score, lm_score
434 beams_batch.append(result)
435
436 nbest_hypotheses = []
437 for beams_idx, beams in enumerate(beams_batch):
438 hypotheses = []
439 for candidate_idx, candidate in enumerate(beams):
440 # Candidate = (text, last_lm_state, text_frames, logit_score, lm_score)
441 hypothesis = rnnt_utils.Hypothesis(
442 score=0.0, y_sequence=[], dec_state=None, timestep=[], last_token=None
443 )
444
445 # TODO: Requires token ids to be returned rather than text.
446 if self.decoding_type == 'subword':
447 if self.tokenizer is None:
448 raise ValueError("Tokenizer must be provided for subword decoding. Use set_tokenizer().")
449
450 pred_token_ids = self.tokenizer.text_to_ids(candidate[0])
451 else:
452 if self.vocab is None:
453 raise ValueError("Vocab must be provided for character decoding. Use set_vocab().")
454
455 chars = list(candidate[0])
456 pred_token_ids = [self.vocab_index_map[c] for c in chars]
457
458 hypothesis.y_sequence = pred_token_ids
459 hypothesis.text = candidate[0] # text
460                 hypothesis.score = candidate[4]  # lm_score
461
462 # Inject word level timestamps
463 hypothesis.timestep = candidate[2] # text_frames
464
465 if self.preserve_alignments:
466 hypothesis.alignments = torch.from_numpy(x[beams_idx][: out_len[beams_idx]])
467
468 hypotheses.append(hypothesis)
469
470 hypotheses = rnnt_utils.NBestHypotheses(hypotheses)
471 nbest_hypotheses.append(hypotheses)
472
473 return nbest_hypotheses
474
475 @torch.no_grad()
476 def flashlight_beam_search(
477 self, x: torch.Tensor, out_len: torch.Tensor
478 ) -> List[Union[rnnt_utils.Hypothesis, rnnt_utils.NBestHypotheses]]:
479 """
480 Flashlight Beam Search Algorithm. Should support Char and Subword models.
481
482 Args:
483 x: Tensor of shape [B, T, V+1], where B is the batch size, T is the maximum sequence length,
484 and V is the vocabulary size. The tensor contains log-probabilities.
485 out_len: Tensor of shape [B], contains lengths of each sequence in the batch.
486
487 Returns:
488 A list of NBestHypotheses objects, one for each sequence in the batch.
489 """
490 if self.compute_timestamps:
491 raise ValueError(
492 f"Beam Search with strategy `{self.search_type}` does not support time stamp calculation!"
493 )
494
495 if self.flashlight_beam_scorer is None:
496 # Check for filepath
497 if self.kenlm_path is None or not os.path.exists(self.kenlm_path):
498 raise FileNotFoundError(
499 f"KenLM binary file not found at : {self.kenlm_path}. "
500 f"Please set a valid path in the decoding config."
501 )
502
503 # perform token offset for subword models
504 # if self.decoding_type == 'subword':
505 # vocab = [chr(idx + self.token_offset) for idx in range(len(self.vocab))]
506 # else:
507 # # char models
508 # vocab = self.vocab
509
510 # Must import at runtime to avoid circular dependency due to module level import.
511 from nemo.collections.asr.modules.flashlight_decoder import FlashLightKenLMBeamSearchDecoder
512
513 self.flashlight_beam_scorer = FlashLightKenLMBeamSearchDecoder(
514 lm_path=self.kenlm_path,
515 vocabulary=self.vocab,
516 tokenizer=self.tokenizer,
517 lexicon_path=self.flashlight_cfg.lexicon_path,
518 boost_path=self.flashlight_cfg.boost_path,
519 beam_size=self.beam_size,
520 beam_size_token=self.flashlight_cfg.beam_size_token,
521 beam_threshold=self.flashlight_cfg.beam_threshold,
522 lm_weight=self.beam_alpha,
523 word_score=self.beam_beta,
524 unk_weight=self.flashlight_cfg.unk_weight,
525 sil_weight=self.flashlight_cfg.sil_weight,
526 )
527
528 x = x.to('cpu')
529
530 with typecheck.disable_checks():
531 beams_batch = self.flashlight_beam_scorer.forward(log_probs=x)
532
533 # For each sample in the batch
534 nbest_hypotheses = []
535 for beams_idx, beams in enumerate(beams_batch):
536 # For each beam candidate / hypothesis in each sample
537 hypotheses = []
538 for candidate_idx, candidate in enumerate(beams):
539 hypothesis = rnnt_utils.Hypothesis(
540 score=0.0, y_sequence=[], dec_state=None, timestep=[], last_token=None
541 )
542
543 # We preserve the token ids and the score for this hypothesis
544 hypothesis.y_sequence = candidate['tokens'].tolist()
545 hypothesis.score = candidate['score']
546
547 # If alignment must be preserved, we preserve a view of the output logprobs.
548 # Note this view is shared amongst all beams within the sample, be sure to clone it if you
549 # require specific processing for each sample in the beam.
550 # This is done to preserve memory.
551 if self.preserve_alignments:
552 hypothesis.alignments = x[beams_idx][: out_len[beams_idx]]
553
554 hypotheses.append(hypothesis)
555
556 # Wrap the result in NBestHypothesis.
557 hypotheses = rnnt_utils.NBestHypotheses(hypotheses)
558 nbest_hypotheses.append(hypotheses)
559
560 return nbest_hypotheses
561
562 def set_decoding_type(self, decoding_type: str):
563 super().set_decoding_type(decoding_type)
564
565 # Please check train_kenlm.py in scripts/asr_language_modeling/ to find out why we need
566 # TOKEN_OFFSET for BPE-based models
567 if self.decoding_type == 'subword':
568 self.token_offset = DEFAULT_TOKEN_OFFSET
569
570
571 @dataclass
572 class PyCTCDecodeConfig:
573 # These arguments cannot be imported from pyctcdecode (optional dependency)
574 # Therefore we copy the values explicitly
575 # Taken from pyctcdecode.constant
576 beam_prune_logp: float = -10.0
577 token_min_logp: float = -5.0
578 prune_history: bool = False
579 hotwords: Optional[List[str]] = None
580 hotword_weight: float = 10.0
581
582
583 @dataclass
584 class FlashlightConfig:
585 lexicon_path: Optional[str] = None
586 boost_path: Optional[str] = None
587 beam_size_token: int = 16
588 beam_threshold: float = 20.0
589 unk_weight: float = -math.inf
590 sil_weight: float = 0.0
591
592
593 @dataclass
594 class BeamCTCInferConfig:
595 beam_size: int
596 search_type: str = 'default'
597 preserve_alignments: bool = False
598 compute_timestamps: bool = False
599 return_best_hypothesis: bool = True
600
601 beam_alpha: float = 1.0
602 beam_beta: float = 0.0
603 kenlm_path: Optional[str] = None
604
605 flashlight_cfg: Optional[FlashlightConfig] = FlashlightConfig()
606 pyctcdecode_cfg: Optional[PyCTCDecodeConfig] = PyCTCDecodeConfig()
607
[end of nemo/collections/asr/parts/submodules/ctc_beam_decoding.py]
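The subword "double encoding" used by `BeamCTCInfer.default_beam_search` above (shifting each token id by `DEFAULT_TOKEN_OFFSET` into a single unicode character before KenLM scoring, then recovering ids with `ord`) can be sketched standalone. This is a minimal illustration under those assumptions, not NeMo API code; the helper names `encode_vocab` and `decode_candidate` are hypothetical.

```python
# Minimal sketch of the token-offset double encoding used for subword
# models (standalone illustration; the real logic lives in
# ctc_beam_decoding.py above).
DEFAULT_TOKEN_OFFSET = 100  # same constant as in the module above


def encode_vocab(vocab):
    """Map each subword index to one unicode char, shifted by the offset."""
    return [chr(idx + DEFAULT_TOKEN_OFFSET) for idx in range(len(vocab))]


def decode_candidate(candidate_text):
    """Recover subword token ids from the shifted unicode characters."""
    return [ord(c) - DEFAULT_TOKEN_OFFSET for c in candidate_text]


if __name__ == "__main__":
    chars = encode_vocab(["_the", "_cat", "s"])
    print(decode_candidate("".join(chars)))  # [0, 1, 2]
```

This round-trip is what lets the ARPA/binary LM treat each multi-character subword as a single symbol, keeping the LM file compact.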
[start of nemo/collections/asr/parts/submodules/ctc_greedy_decoding.py]
1 # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from dataclasses import dataclass
16 from typing import List, Optional
17
18 import torch
19 from omegaconf import DictConfig, OmegaConf
20
21 from nemo.collections.asr.parts.utils import rnnt_utils
22 from nemo.collections.asr.parts.utils.asr_confidence_utils import ConfidenceMethodConfig, ConfidenceMethodMixin
23 from nemo.core.classes import Typing, typecheck
24 from nemo.core.neural_types import HypothesisType, LengthsType, LogprobsType, NeuralType
25 from nemo.utils import logging
26
27
28 def pack_hypotheses(hypotheses: List[rnnt_utils.Hypothesis], logitlen: torch.Tensor,) -> List[rnnt_utils.Hypothesis]:
29
30 if logitlen is not None:
31 if hasattr(logitlen, 'cpu'):
32 logitlen_cpu = logitlen.to('cpu')
33 else:
34 logitlen_cpu = logitlen
35
36 for idx, hyp in enumerate(hypotheses): # type: rnnt_utils.Hypothesis
37 hyp.y_sequence = torch.tensor(hyp.y_sequence, dtype=torch.long)
38
39 if logitlen is not None:
40 hyp.length = logitlen_cpu[idx]
41
42 if hyp.dec_state is not None:
43 hyp.dec_state = _states_to_device(hyp.dec_state)
44
45 return hypotheses
46
47
48 def _states_to_device(dec_state, device='cpu'):
49 if torch.is_tensor(dec_state):
50 dec_state = dec_state.to(device)
51
52 elif isinstance(dec_state, (list, tuple)):
53 dec_state = tuple(_states_to_device(dec_i, device) for dec_i in dec_state)
54
55 return dec_state
56
57
58 class GreedyCTCInfer(Typing, ConfidenceMethodMixin):
59 """A greedy CTC decoder.
60
61 Provides a common abstraction for sample level and batch level greedy decoding.
62
63 Args:
64         blank_id: int, index of the blank token. Can be 0 or len(vocabulary).
65 preserve_alignments: Bool flag which preserves the history of logprobs generated during
66 decoding (sample / batched). When set to true, the Hypothesis will contain
67             the non-null value for `logprobs` in it. Here, `logprobs` is a torch.Tensor.
68 compute_timestamps: A bool flag, which determines whether to compute the character/subword, or
69             word-based timestamp mapping the output log-probabilities to discrete intervals of timestamps.
70 The timestamps will be available in the returned Hypothesis.timestep as a dictionary.
71 preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores
72 generated during decoding. When set to true, the Hypothesis will contain
73 the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of floats.
74 confidence_method_cfg: A dict-like object which contains the method name and settings to compute per-frame
75 confidence scores.
76
77 name: The method name (str).
78 Supported values:
79 - 'max_prob' for using the maximum token probability as a confidence.
80 - 'entropy' for using a normalized entropy of a log-likelihood vector.
81
82 entropy_type: Which type of entropy to use (str). Used if confidence_method_cfg.name is set to `entropy`.
83 Supported values:
84                 - 'gibbs' for the (standard) Gibbs entropy. If the alpha (α) is provided,
85                     the formula is the following: H_α = -sum_i((p^α_i)*log(p^α_i)).
86                     Note that for this entropy, the alpha should comply with the following inequality:
87                     (log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= α <= (1+log(V-1))/log(V-1)
88                     where V is the model vocabulary size.
89                 - 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
90                     Tsallis entropy formula is the following: H_α = 1/(α-1)*(1-sum_i(p^α_i)),
91                     where α is a parameter. When α == 1, it works like the Gibbs entropy.
92                     More: https://en.wikipedia.org/wiki/Tsallis_entropy
93                 - 'renyi' for the Rényi entropy.
94                     Rényi entropy formula is the following: H_α = 1/(1-α)*log_2(sum_i(p^α_i)),
95                     where α is a parameter. When α == 1, it works like the Gibbs entropy.
96                     More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
97
98             alpha: Power scale for logsoftmax (α for entropies). Here we restrict it to be > 0.
99 When the alpha equals one, scaling is not applied to 'max_prob',
100 and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
101
102 entropy_norm: A mapping of the entropy value to the interval [0,1].
103 Supported values:
104 - 'lin' for using the linear mapping.
105 - 'exp' for using exponential mapping with linear shift.
106
107 """
108
109 @property
110 def input_types(self):
111 """Returns definitions of module input ports.
112 """
113         # Input can be of dimension -
114 # ('B', 'T', 'D') [Log probs] or ('B', 'T') [Labels]
115
116 return {
117 "decoder_output": NeuralType(None, LogprobsType()),
118 "decoder_lengths": NeuralType(tuple('B'), LengthsType()),
119 }
120
121 @property
122 def output_types(self):
123 """Returns definitions of module output ports.
124 """
125 return {"predictions": [NeuralType(elements_type=HypothesisType())]}
126
127 def __init__(
128 self,
129 blank_id: int,
130 preserve_alignments: bool = False,
131 compute_timestamps: bool = False,
132 preserve_frame_confidence: bool = False,
133 confidence_method_cfg: Optional[DictConfig] = None,
134 ):
135 super().__init__()
136
137 self.blank_id = blank_id
138 self.preserve_alignments = preserve_alignments
139 # we need timestamps to extract non-blank per-frame confidence
140 self.compute_timestamps = compute_timestamps | preserve_frame_confidence
141 self.preserve_frame_confidence = preserve_frame_confidence
142
143 # set confidence calculation method
144 self._init_confidence_method(confidence_method_cfg)
145
146 @typecheck()
147 def forward(
148 self, decoder_output: torch.Tensor, decoder_lengths: torch.Tensor,
149 ):
150 """Returns a list of hypotheses given an input batch of the encoder hidden embedding.
151         Output token is generated auto-regressively.
152
153 Args:
154 decoder_output: A tensor of size (batch, timesteps, features) or (batch, timesteps) (each timestep is a label).
155 decoder_lengths: list of int representing the length of each sequence
156 output sequence.
157
158 Returns:
159 packed list containing batch number of sentences (Hypotheses).
160 """
161 with torch.inference_mode():
162 hypotheses = []
163 # Process each sequence independently
164 prediction_cpu_tensor = decoder_output.cpu()
165
166 if prediction_cpu_tensor.ndim < 2 or prediction_cpu_tensor.ndim > 3:
167 raise ValueError(
168 f"`decoder_output` must be a tensor of shape [B, T] (labels, int) or "
169 f"[B, T, V] (log probs, float). Provided shape = {prediction_cpu_tensor.shape}"
170 )
171
172 # determine type of input - logprobs or labels
173 if prediction_cpu_tensor.ndim == 2: # labels
174 greedy_decode = self._greedy_decode_labels
175 else:
176 greedy_decode = self._greedy_decode_logprobs
177
178 for ind in range(prediction_cpu_tensor.shape[0]):
179 out_len = decoder_lengths[ind] if decoder_lengths is not None else None
180 hypothesis = greedy_decode(prediction_cpu_tensor[ind], out_len)
181 hypotheses.append(hypothesis)
182
183 # Pack results into Hypotheses
184 packed_result = pack_hypotheses(hypotheses, decoder_lengths)
185
186 return (packed_result,)
187
188 @torch.no_grad()
189 def _greedy_decode_logprobs(self, x: torch.Tensor, out_len: torch.Tensor):
190 # x: [T, D]
191 # out_len: [seq_len]
192
193 # Initialize blank state and empty label set in Hypothesis
194 hypothesis = rnnt_utils.Hypothesis(score=0.0, y_sequence=[], dec_state=None, timestep=[], last_token=None)
195 prediction = x.detach().cpu()
196
197 if out_len is not None:
198 prediction = prediction[:out_len]
199
200 prediction_logprobs, prediction_labels = prediction.max(dim=-1)
201
202 non_blank_ids = prediction_labels != self.blank_id
203 hypothesis.y_sequence = prediction_labels.numpy().tolist()
204 hypothesis.score = (prediction_logprobs[non_blank_ids]).sum()
205
206 if self.preserve_alignments:
207 # Preserve the logprobs, as well as labels after argmax
208 hypothesis.alignments = (prediction.clone(), prediction_labels.clone())
209
210 if self.compute_timestamps:
211 hypothesis.timestep = torch.nonzero(non_blank_ids, as_tuple=False)[:, 0].numpy().tolist()
212
213 if self.preserve_frame_confidence:
214 hypothesis.frame_confidence = self._get_confidence(prediction)
215
216 return hypothesis
217
218 @torch.no_grad()
219 def _greedy_decode_labels(self, x: torch.Tensor, out_len: torch.Tensor):
220 # x: [T]
221 # out_len: [seq_len]
222
223 # Initialize blank state and empty label set in Hypothesis
224 hypothesis = rnnt_utils.Hypothesis(score=0.0, y_sequence=[], dec_state=None, timestep=[], last_token=None)
225 prediction_labels = x.detach().cpu()
226
227 if out_len is not None:
228 prediction_labels = prediction_labels[:out_len]
229
230 non_blank_ids = prediction_labels != self.blank_id
231 hypothesis.y_sequence = prediction_labels.numpy().tolist()
232 hypothesis.score = -1.0
233
234 if self.preserve_alignments:
235 raise ValueError("Requested for alignments, but predictions provided were labels, not log probabilities.")
236
237 if self.compute_timestamps:
238 hypothesis.timestep = torch.nonzero(non_blank_ids, as_tuple=False)[:, 0].numpy().tolist()
239
240 if self.preserve_frame_confidence:
241 raise ValueError(
242 "Requested for per-frame confidence, but predictions provided were labels, not log probabilities."
243 )
244
245 return hypothesis
246
247 def __call__(self, *args, **kwargs):
248 return self.forward(*args, **kwargs)
249
250
251 @dataclass
252 class GreedyCTCInferConfig:
253 preserve_alignments: bool = False
254 compute_timestamps: bool = False
255 preserve_frame_confidence: bool = False
256 confidence_method_cfg: Optional[ConfidenceMethodConfig] = ConfidenceMethodConfig()
257
258 def __post_init__(self):
259 # OmegaConf.structured ensures that post_init check is always executed
260 self.confidence_method_cfg = OmegaConf.structured(
261 self.confidence_method_cfg
262 if isinstance(self.confidence_method_cfg, ConfidenceMethodConfig)
263 else ConfidenceMethodConfig(**self.confidence_method_cfg)
264 )
265
[end of nemo/collections/asr/parts/submodules/ctc_greedy_decoding.py]
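The greedy decoders above keep the full per-frame label sequence in `Hypothesis.y_sequence`; the standard CTC reduction (merge consecutive repeats, then drop blanks) that such frame-level outputs are ultimately collapsed with elsewhere in the decoding pipeline can be sketched as follows. This is a standalone illustration of the CTC collapse rule, not NeMo code, and the function name `ctc_collapse` is hypothetical.

```python
def ctc_collapse(labels, blank_id):
    """Standard CTC rule: merge consecutive repeats, then drop blanks."""
    out = []
    prev = None
    for label in labels:
        # Emit a label only when it differs from the previous frame
        # and is not the blank token.
        if label != prev and label != blank_id:
            out.append(label)
        prev = label
    return out


if __name__ == "__main__":
    # blank_id = 3; repeats merge, while a blank separates genuine repeats
    print(ctc_collapse([0, 0, 3, 1, 1, 3, 1, 2], blank_id=3))  # [0, 1, 1, 2]
```

Note how the blank between the two runs of `1` preserves the repeated label, which is exactly why CTC models emit blanks between identical consecutive tokens.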
[start of nemo/collections/asr/parts/submodules/rnnt_greedy_decoding.py]
1 # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 # Copyright 2017 Johns Hopkins University (Shinji Watanabe)
16 #
17 # Licensed under the Apache License, Version 2.0 (the "License");
18 # you may not use this file except in compliance with the License.
19 # You may obtain a copy of the License at
20 #
21 # http://www.apache.org/licenses/LICENSE-2.0
22 #
23 # Unless required by applicable law or agreed to in writing, software
24 # distributed under the License is distributed on an "AS IS" BASIS,
25 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
26 # See the License for the specific language governing permissions and
27 # limitations under the License.
28
29 from dataclasses import dataclass
30 from typing import List, Optional, Tuple, Union
31
32 import numpy as np
33 import torch
34 from omegaconf import DictConfig, OmegaConf
35
36 from nemo.collections.asr.modules import rnnt_abstract
37 from nemo.collections.asr.parts.utils import rnnt_utils
38 from nemo.collections.asr.parts.utils.asr_confidence_utils import ConfidenceMethodConfig, ConfidenceMethodMixin
39 from nemo.collections.common.parts.rnn import label_collate
40 from nemo.core.classes import Typing, typecheck
41 from nemo.core.neural_types import AcousticEncodedRepresentation, ElementType, HypothesisType, LengthsType, NeuralType
42 from nemo.utils import logging
43
44
45 def pack_hypotheses(hypotheses: List[rnnt_utils.Hypothesis], logitlen: torch.Tensor,) -> List[rnnt_utils.Hypothesis]:
46
47 if hasattr(logitlen, 'cpu'):
48 logitlen_cpu = logitlen.to('cpu')
49 else:
50 logitlen_cpu = logitlen
51
52 for idx, hyp in enumerate(hypotheses): # type: rnnt_utils.Hypothesis
53 hyp.y_sequence = torch.tensor(hyp.y_sequence, dtype=torch.long)
54 hyp.length = logitlen_cpu[idx]
55
56 if hyp.dec_state is not None:
57 hyp.dec_state = _states_to_device(hyp.dec_state)
58
59 return hypotheses
60
61
62 def _states_to_device(dec_state, device='cpu'):
63 if torch.is_tensor(dec_state):
64 dec_state = dec_state.to(device)
65
66 elif isinstance(dec_state, (list, tuple)):
67 dec_state = tuple(_states_to_device(dec_i, device) for dec_i in dec_state)
68
69 return dec_state
70
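The recursive traversal in `_states_to_device` above (apply an operation to every tensor inside an arbitrarily nested tuple/list of decoder states) can be sketched without torch; `map_nested` and the tagging lambda are illustrative stand-ins for the tensor `.to(device)` call:

```python
def map_nested(fn, state):
    """Apply ``fn`` to every leaf of an arbitrarily nested list/tuple."""
    if isinstance(state, (list, tuple)):
        # Recurse into containers, rebuilding them as tuples
        return tuple(map_nested(fn, s) for s in state)
    return fn(state)


# Tag every leaf with a target device name, standing in for tensor.to("cpu")
moved = map_nested(lambda s: (s, "cpu"), (1, (2, 3)))
```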
71
72 class _GreedyRNNTInfer(Typing, ConfidenceMethodMixin):
73 """A greedy transducer decoder.
74
75 Provides a common abstraction for sample level and batch level greedy decoding.
76
77 Args:
78 decoder_model: rnnt_utils.AbstractRNNTDecoder implementation.
79 joint_model: rnnt_utils.AbstractRNNTJoint implementation.
80 blank_index: int index of the blank token. Can be 0 or len(vocabulary).
81 max_symbols_per_step: Optional int. The maximum number of symbols that can be added
82 to a sequence in a single time step; if set to None then there is
83 no limit.
84 preserve_alignments: Bool flag which preserves the history of alignments generated during
85 greedy decoding (sample / batched). When set to true, the Hypothesis will contain
86 the non-null value for `alignments` in it. Here, `alignments` is a List of List of
87 Tuple(Tensor (of length V + 1), Tensor(scalar, label after argmax)).
88
89 The length of the list corresponds to the Acoustic Length (T).
90 Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more targets from a vocabulary.
91 U is the number of target tokens for the current timestep Ti.
92 preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores generated
93 during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
94 the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of List of floats.
95
96 The length of the list corresponds to the Acoustic Length (T).
97 Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more confidence scores.
98 U is the number of target tokens for the current timestep Ti.
99 confidence_method_cfg: A dict-like object which contains the method name and settings to compute per-frame
100 confidence scores.
101
102 name: The method name (str).
103 Supported values:
104 - 'max_prob' for using the maximum token probability as a confidence.
105 - 'entropy' for using a normalized entropy of a log-likelihood vector.
106
107 entropy_type: Which type of entropy to use (str). Used if confidence_method_cfg.name is set to `entropy`.
108 Supported values:
109 - 'gibbs' for the (standard) Gibbs entropy. If the alpha (α) is provided,
110 the formula is the following: H_α = -sum_i((p^α_i)*log(p^α_i)).
111 Note that for this entropy, the alpha should comply with the following inequality:
112 (log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= α <= (1+log(V-1))/log(V-1)
113 where V is the model vocabulary size.
114 - 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
115 Tsallis entropy formula is the following: H_α = 1/(α-1)*(1-sum_i(p^α_i)),
116 where α is a parameter. When α == 1, it works like the Gibbs entropy.
117 More: https://en.wikipedia.org/wiki/Tsallis_entropy
118 - 'renyi' for the Rényi entropy.
119 Rényi entropy formula is the following: H_α = 1/(1-α)*log_2(sum_i(p^α_i)),
120 where α is a parameter. When α == 1, it works like the Gibbs entropy.
121 More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
122
123 alpha: Power scale for logsoftmax (α for entropies). Here we restrict it to be > 0.
124 When the alpha equals one, scaling is not applied to 'max_prob',
125 and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
126
127 entropy_norm: A mapping of the entropy value to the interval [0,1].
128 Supported values:
129 - 'lin' for using the linear mapping.
130 - 'exp' for using exponential mapping with linear shift.
131 """
132
133 @property
134 def input_types(self):
135 """Returns definitions of module input ports.
136 """
137 return {
138 "encoder_output": NeuralType(('B', 'D', 'T'), AcousticEncodedRepresentation()),
139 "encoded_lengths": NeuralType(tuple('B'), LengthsType()),
140 "partial_hypotheses": [NeuralType(elements_type=HypothesisType(), optional=True)], # must always be last
141 }
142
143 @property
144 def output_types(self):
145 """Returns definitions of module output ports.
146 """
147 return {"predictions": [NeuralType(elements_type=HypothesisType())]}
148
149 def __init__(
150 self,
151 decoder_model: rnnt_abstract.AbstractRNNTDecoder,
152 joint_model: rnnt_abstract.AbstractRNNTJoint,
153 blank_index: int,
154 max_symbols_per_step: Optional[int] = None,
155 preserve_alignments: bool = False,
156 preserve_frame_confidence: bool = False,
157 confidence_method_cfg: Optional[DictConfig] = None,
158 ):
159 super().__init__()
160 self.decoder = decoder_model
161 self.joint = joint_model
162
163 self._blank_index = blank_index
164 self._SOS = blank_index # "Start-of-Signal" token index
165 self.max_symbols = max_symbols_per_step
166 self.preserve_alignments = preserve_alignments
167 self.preserve_frame_confidence = preserve_frame_confidence
168
169 # set confidence calculation method
170 self._init_confidence_method(confidence_method_cfg)
171
172 def __call__(self, *args, **kwargs):
173 return self.forward(*args, **kwargs)
174
175 @torch.no_grad()
176 def _pred_step(
177 self,
178 label: Union[torch.Tensor, int],
179 hidden: Optional[torch.Tensor],
180 add_sos: bool = False,
181 batch_size: Optional[int] = None,
182 ) -> Tuple[torch.Tensor, torch.Tensor]:
183 """
184 Common prediction step based on the AbstractRNNTDecoder implementation.
185
186 Args:
187 label: (int/torch.Tensor): Label or "Start-of-Signal" token.
188 hidden: (Optional torch.Tensor): RNN State vector
189 add_sos (bool): Whether to add a zero vector at the beginning as the "start of sentence" token.
190 batch_size: Batch size of the output tensor.
191
192 Returns:
193 g: (B, U, H) if add_sos is false, else (B, U + 1, H)
194 hid: (h, c) where h is the final sequence hidden state and c is
195 the final cell state:
196 h (tensor), shape (L, B, H)
197 c (tensor), shape (L, B, H)
198 """
199 if isinstance(label, torch.Tensor):
200 # label: [batch, 1]
201 if label.dtype != torch.long:
202 label = label.long()
203
204 else:
205 # Label is an integer
206 if label == self._SOS:
207 return self.decoder.predict(None, hidden, add_sos=add_sos, batch_size=batch_size)
208
209 label = label_collate([[label]])
210
211 # output: [B, 1, K]
212 return self.decoder.predict(label, hidden, add_sos=add_sos, batch_size=batch_size)
213
214 def _joint_step(self, enc, pred, log_normalize: Optional[bool] = None):
215 """
216 Common joint step based on AbstractRNNTJoint implementation.
217
218 Args:
219 enc: Output of the Encoder model. A torch.Tensor of shape [B, 1, H1]
220 pred: Output of the Decoder model. A torch.Tensor of shape [B, 1, H2]
221 log_normalize: Whether to log normalize or not. None will log normalize only for CPU.
222
223 Returns:
224 logits of shape (B, T=1, U=1, V + 1)
225 """
226 with torch.no_grad():
227 logits = self.joint.joint(enc, pred)
228
229 if log_normalize is None:
230 if not logits.is_cuda: # Use log softmax only if on CPU
231 logits = logits.log_softmax(dim=len(logits.shape) - 1)
232 else:
233 if log_normalize:
234 logits = logits.log_softmax(dim=len(logits.shape) - 1)
235
236 return logits
237
238
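`_joint_step` above leaves CUDA logits unnormalized by default and applies `log_softmax` only on CPU (or when `log_normalize=True`). The normalization itself is the standard numerically stable log-softmax; a minimal pure-Python sketch (`log_softmax` here is an illustration, not the torch op):

```python
import math


def log_softmax(xs):
    # Numerically stable log-softmax: x_i - (m + log(sum_j exp(x_j - m))),
    # where subtracting the max m avoids overflow in exp().
    m = max(xs)
    lse = m + math.log(sum(math.exp(x - m) for x in xs))
    return [x - lse for x in xs]


logp = log_softmax([1.0, 2.0, 3.0])
probs = [math.exp(v) for v in logp]  # exponentiating recovers a distribution
```

Because the shift by `m` cancels in the subtraction, the result is identical to the naive formula but safe for large logits.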
239 class GreedyRNNTInfer(_GreedyRNNTInfer):
240 """A greedy transducer decoder.
241
242 Sequence level greedy decoding, performed auto-regressively.
243
244 Args:
245 decoder_model: rnnt_utils.AbstractRNNTDecoder implementation.
246 joint_model: rnnt_utils.AbstractRNNTJoint implementation.
247 blank_index: int index of the blank token. Can be 0 or len(vocabulary).
248 max_symbols_per_step: Optional int. The maximum number of symbols that can be added
249 to a sequence in a single time step; if set to None then there is
250 no limit.
251 preserve_alignments: Bool flag which preserves the history of alignments generated during
252 greedy decoding (sample / batched). When set to true, the Hypothesis will contain
253 the non-null value for `alignments` in it. Here, `alignments` is a List of List of
254 Tuple(Tensor (of length V + 1), Tensor(scalar, label after argmax)).
255
256 The length of the list corresponds to the Acoustic Length (T).
257 Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more targets from a vocabulary.
258 U is the number of target tokens for the current timestep Ti.
259 preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores generated
260 during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
261 the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of List of floats.
262
263 The length of the list corresponds to the Acoustic Length (T).
264 Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more confidence scores.
265 U is the number of target tokens for the current timestep Ti.
266 confidence_method_cfg: A dict-like object which contains the method name and settings to compute per-frame
267 confidence scores.
268
269 name: The method name (str).
270 Supported values:
271 - 'max_prob' for using the maximum token probability as a confidence.
272 - 'entropy' for using a normalized entropy of a log-likelihood vector.
273
274 entropy_type: Which type of entropy to use (str). Used if confidence_method_cfg.name is set to `entropy`.
275 Supported values:
276 - 'gibbs' for the (standard) Gibbs entropy. If the alpha (α) is provided,
277 the formula is the following: H_α = -sum_i((p^α_i)*log(p^α_i)).
278 Note that for this entropy, the alpha should comply with the following inequality:
279 (log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= α <= (1+log(V-1))/log(V-1)
280 where V is the model vocabulary size.
281 - 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
282 Tsallis entropy formula is the following: H_α = 1/(α-1)*(1-sum_i(p^α_i)),
283 where α is a parameter. When α == 1, it works like the Gibbs entropy.
284 More: https://en.wikipedia.org/wiki/Tsallis_entropy
285 - 'renyi' for the Rényi entropy.
286 Rényi entropy formula is the following: H_α = 1/(1-α)*log_2(sum_i(p^α_i)),
287 where α is a parameter. When α == 1, it works like the Gibbs entropy.
288 More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
289
290 alpha: Power scale for logsoftmax (α for entropies). Here we restrict it to be > 0.
291 When the alpha equals one, scaling is not applied to 'max_prob',
292 and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
293
294 entropy_norm: A mapping of the entropy value to the interval [0,1].
295 Supported values:
296 - 'lin' for using the linear mapping.
297 - 'exp' for using exponential mapping with linear shift.
298 """
299
300 def __init__(
301 self,
302 decoder_model: rnnt_abstract.AbstractRNNTDecoder,
303 joint_model: rnnt_abstract.AbstractRNNTJoint,
304 blank_index: int,
305 max_symbols_per_step: Optional[int] = None,
306 preserve_alignments: bool = False,
307 preserve_frame_confidence: bool = False,
308 confidence_method_cfg: Optional[DictConfig] = None,
309 ):
310 super().__init__(
311 decoder_model=decoder_model,
312 joint_model=joint_model,
313 blank_index=blank_index,
314 max_symbols_per_step=max_symbols_per_step,
315 preserve_alignments=preserve_alignments,
316 preserve_frame_confidence=preserve_frame_confidence,
317 confidence_method_cfg=confidence_method_cfg,
318 )
319
320 @typecheck()
321 def forward(
322 self,
323 encoder_output: torch.Tensor,
324 encoded_lengths: torch.Tensor,
325 partial_hypotheses: Optional[List[rnnt_utils.Hypothesis]] = None,
326 ):
327 """Returns a list of hypotheses given an input batch of the encoder hidden embedding.
328 Output token is generated auto-regressively.
329
330 Args:
331 encoder_output: A tensor of size (batch, features, timesteps).
332 encoded_lengths: list of int representing the length of each sequence
333 output sequence.
334
335 Returns:
336 packed list containing batch number of sentences (Hypotheses).
337 """
338 # Preserve decoder and joint training state
339 decoder_training_state = self.decoder.training
340 joint_training_state = self.joint.training
341
342 with torch.inference_mode():
343 # Apply optional preprocessing
344 encoder_output = encoder_output.transpose(1, 2) # (B, T, D)
345
346 self.decoder.eval()
347 self.joint.eval()
348
349 hypotheses = []
350 # Process each sequence independently
351 with self.decoder.as_frozen(), self.joint.as_frozen():
352 for batch_idx in range(encoder_output.size(0)):
353 inseq = encoder_output[batch_idx, :, :].unsqueeze(1) # [T, 1, D]
354 logitlen = encoded_lengths[batch_idx]
355
356 partial_hypothesis = partial_hypotheses[batch_idx] if partial_hypotheses is not None else None
357 hypothesis = self._greedy_decode(inseq, logitlen, partial_hypotheses=partial_hypothesis)
358 hypotheses.append(hypothesis)
359
360 # Pack results into Hypotheses
361 packed_result = pack_hypotheses(hypotheses, encoded_lengths)
362
363 self.decoder.train(decoder_training_state)
364 self.joint.train(joint_training_state)
365
366 return (packed_result,)
367
368 @torch.no_grad()
369 def _greedy_decode(
370 self, x: torch.Tensor, out_len: torch.Tensor, partial_hypotheses: Optional[rnnt_utils.Hypothesis] = None
371 ):
372 # x: [T, 1, D]
373 # out_len: [seq_len]
374
375 # Initialize blank state and empty label set in Hypothesis
376 hypothesis = rnnt_utils.Hypothesis(score=0.0, y_sequence=[], dec_state=None, timestep=[], last_token=None)
377
378 if partial_hypotheses is not None:
379 hypothesis.last_token = partial_hypotheses.last_token
380 hypothesis.y_sequence = (
381 partial_hypotheses.y_sequence.cpu().tolist()
382 if isinstance(partial_hypotheses.y_sequence, torch.Tensor)
383 else partial_hypotheses.y_sequence
384 )
385 if partial_hypotheses.dec_state is not None:
386 hypothesis.dec_state = self.decoder.batch_concat_states([partial_hypotheses.dec_state])
387 hypothesis.dec_state = _states_to_device(hypothesis.dec_state, x.device)
388
389 if self.preserve_alignments:
390 # Alignments is a 2-dimensional dangling list representing T x U
391 hypothesis.alignments = [[]]
392
393 if self.preserve_frame_confidence:
394 hypothesis.frame_confidence = [[]]
395
396 # For timestep t in X_t
397 for time_idx in range(out_len):
398 # Extract encoder embedding at timestep t
399 # f = x[time_idx, :, :].unsqueeze(0) # [1, 1, D]
400 f = x.narrow(dim=0, start=time_idx, length=1)
401
402 # Setup exit flags and counter
403 not_blank = True
404 symbols_added = 0
405 # While blank is not predicted and we haven't run out of max symbols per timestep
406 while not_blank and (self.max_symbols is None or symbols_added < self.max_symbols):
407 # In the first timestep, we initialize the network with RNNT Blank
408 # In later timesteps, we provide previous predicted label as input.
409 if hypothesis.last_token is None and hypothesis.dec_state is None:
410 last_label = self._SOS
411 else:
412 last_label = label_collate([[hypothesis.last_token]])
413
414 # Perform prediction network and joint network steps.
415 g, hidden_prime = self._pred_step(last_label, hypothesis.dec_state)
416 # If preserving per-frame confidence, log_normalize must be true
417 logp = self._joint_step(f, g, log_normalize=True if self.preserve_frame_confidence else None)[
418 0, 0, 0, :
419 ]
420
421 del g
422
423 # torch.max(0) op doesn't exist for FP16.
424 if logp.dtype != torch.float32:
425 logp = logp.float()
426
427 # get index k, of max prob
428 v, k = logp.max(0)
429 k = k.item() # K is the label at timestep t_s in inner loop, s >= 0.
430
431 if self.preserve_alignments:
432 # insert logprobs into last timestep
433 hypothesis.alignments[-1].append((logp.to('cpu'), torch.tensor(k, dtype=torch.int32)))
434
435 if self.preserve_frame_confidence:
436 # insert confidence into last timestep
437 hypothesis.frame_confidence[-1].append(self._get_confidence(logp))
438
439 del logp
440
441 # If blank token is predicted, exit inner loop, move onto next timestep t
442 if k == self._blank_index:
443 not_blank = False
444 else:
445 # Append token to label set, update RNN state.
446 hypothesis.y_sequence.append(k)
447 hypothesis.score += float(v)
448 hypothesis.timestep.append(time_idx)
449 hypothesis.dec_state = hidden_prime
450 hypothesis.last_token = k
451
452 # Increment token counter.
453 symbols_added += 1
454
455 if self.preserve_alignments:
456 # convert Ti-th logits into a torch array
457 hypothesis.alignments.append([]) # blank buffer for next timestep
458
459 if self.preserve_frame_confidence:
460 hypothesis.frame_confidence.append([]) # blank buffer for next timestep
461
462 # Remove trailing empty list of Alignments
463 if self.preserve_alignments:
464 if len(hypothesis.alignments[-1]) == 0:
465 del hypothesis.alignments[-1]
466
467 # Remove trailing empty list of per-frame confidence
468 if self.preserve_frame_confidence:
469 if len(hypothesis.frame_confidence[-1]) == 0:
470 del hypothesis.frame_confidence[-1]
471
472 # Unpack the hidden states
473 hypothesis.dec_state = self.decoder.batch_select_state(hypothesis.dec_state, 0)
474
475 return hypothesis
476
477
478 class GreedyBatchedRNNTInfer(_GreedyRNNTInfer):
479 """A batch level greedy transducer decoder.
480
481 Batch level greedy decoding, performed auto-regressively.
482
483 Args:
484 decoder_model: rnnt_utils.AbstractRNNTDecoder implementation.
485 joint_model: rnnt_utils.AbstractRNNTJoint implementation.
486 blank_index: int index of the blank token. Can be 0 or len(vocabulary).
487 max_symbols_per_step: Optional int. The maximum number of symbols that can be added
488 to a sequence in a single time step; if set to None then there is
489 no limit.
490 preserve_alignments: Bool flag which preserves the history of alignments generated during
491 greedy decoding (sample / batched). When set to true, the Hypothesis will contain
492 the non-null value for `alignments` in it. Here, `alignments` is a List of List of
493 Tuple(Tensor (of length V + 1), Tensor(scalar, label after argmax)).
494
495 The length of the list corresponds to the Acoustic Length (T).
496 Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more targets from a vocabulary.
497 U is the number of target tokens for the current timestep Ti.
498 preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores generated
499 during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
500 the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of List of floats.
501
502 The length of the list corresponds to the Acoustic Length (T).
503 Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more confidence scores.
504 U is the number of target tokens for the current timestep Ti.
505 confidence_method_cfg: A dict-like object which contains the method name and settings to compute per-frame
506 confidence scores.
507
508 name: The method name (str).
509 Supported values:
510 - 'max_prob' for using the maximum token probability as a confidence.
511 - 'entropy' for using a normalized entropy of a log-likelihood vector.
512
513 entropy_type: Which type of entropy to use (str). Used if confidence_method_cfg.name is set to `entropy`.
514 Supported values:
515 - 'gibbs' for the (standard) Gibbs entropy. If the alpha (α) is provided,
516 the formula is the following: H_α = -sum_i((p^α_i)*log(p^α_i)).
517 Note that for this entropy, the alpha should comply with the following inequality:
518 (log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= α <= (1+log(V-1))/log(V-1)
519 where V is the model vocabulary size.
520 - 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
521 Tsallis entropy formula is the following: H_α = 1/(α-1)*(1-sum_i(p^α_i)),
522 where α is a parameter. When α == 1, it works like the Gibbs entropy.
523 More: https://en.wikipedia.org/wiki/Tsallis_entropy
524 - 'renyi' for the Rényi entropy.
525 Rényi entropy formula is the following: H_α = 1/(1-α)*log_2(sum_i(p^α_i)),
526 where α is a parameter. When α == 1, it works like the Gibbs entropy.
527 More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
528
529 alpha: Power scale for logsoftmax (α for entropies). Here we restrict it to be > 0.
530 When the alpha equals one, scaling is not applied to 'max_prob',
531 and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
532
533 entropy_norm: A mapping of the entropy value to the interval [0,1].
534 Supported values:
535 - 'lin' for using the linear mapping.
536 - 'exp' for using exponential mapping with linear shift.
537 """
538
539 def __init__(
540 self,
541 decoder_model: rnnt_abstract.AbstractRNNTDecoder,
542 joint_model: rnnt_abstract.AbstractRNNTJoint,
543 blank_index: int,
544 max_symbols_per_step: Optional[int] = None,
545 preserve_alignments: bool = False,
546 preserve_frame_confidence: bool = False,
547 confidence_method_cfg: Optional[DictConfig] = None,
548 ):
549 super().__init__(
550 decoder_model=decoder_model,
551 joint_model=joint_model,
552 blank_index=blank_index,
553 max_symbols_per_step=max_symbols_per_step,
554 preserve_alignments=preserve_alignments,
555 preserve_frame_confidence=preserve_frame_confidence,
556 confidence_method_cfg=confidence_method_cfg,
557 )
558
559 # Depending on availability of `blank_as_pad` support
560 # switch between more efficient batch decoding technique
561 if self.decoder.blank_as_pad:
562 self._greedy_decode = self._greedy_decode_blank_as_pad
563 else:
564 self._greedy_decode = self._greedy_decode_masked
565
566 @typecheck()
567 def forward(
568 self,
569 encoder_output: torch.Tensor,
570 encoded_lengths: torch.Tensor,
571 partial_hypotheses: Optional[List[rnnt_utils.Hypothesis]] = None,
572 ):
573 """Returns a list of hypotheses given an input batch of the encoder hidden embedding.
574 Output token is generated auto-regressively.
575
576 Args:
577 encoder_output: A tensor of size (batch, features, timesteps).
578 encoded_lengths: list of int representing the length of each sequence
579 output sequence.
580
581 Returns:
582 packed list containing batch number of sentences (Hypotheses).
583 """
584 # Preserve decoder and joint training state
585 decoder_training_state = self.decoder.training
586 joint_training_state = self.joint.training
587
588 with torch.inference_mode():
589 # Apply optional preprocessing
590 encoder_output = encoder_output.transpose(1, 2) # (B, T, D)
591 logitlen = encoded_lengths
592
593 self.decoder.eval()
594 self.joint.eval()
595
596 with self.decoder.as_frozen(), self.joint.as_frozen():
597 inseq = encoder_output # [B, T, D]
598 hypotheses = self._greedy_decode(
599 inseq, logitlen, device=inseq.device, partial_hypotheses=partial_hypotheses
600 )
601
602 # Pack the hypotheses results
603 packed_result = pack_hypotheses(hypotheses, logitlen)
604
605 self.decoder.train(decoder_training_state)
606 self.joint.train(joint_training_state)
607
608 return (packed_result,)
609
610 def _greedy_decode_blank_as_pad(
611 self,
612 x: torch.Tensor,
613 out_len: torch.Tensor,
614 device: torch.device,
615 partial_hypotheses: Optional[List[rnnt_utils.Hypothesis]] = None,
616 ):
617 if partial_hypotheses is not None:
618 raise NotImplementedError("`partial_hypotheses` support is not implemented")
619
620 with torch.inference_mode():
621 # x: [B, T, D]
622 # out_len: [B]
623 # device: torch.device
624
625 # Initialize list of Hypothesis
626 batchsize = x.shape[0]
627 hypotheses = [
628 rnnt_utils.Hypothesis(score=0.0, y_sequence=[], timestep=[], dec_state=None) for _ in range(batchsize)
629 ]
630
631 # Initialize Hidden state matrix (shared by entire batch)
632 hidden = None
633
634 # If alignments need to be preserved, register a dangling list to hold the values
635 if self.preserve_alignments:
636 # alignments is a 3-dimensional dangling list representing B x T x U
637 for hyp in hypotheses:
638 hyp.alignments = [[]]
639
640 # If confidence scores need to be preserved, register a dangling list to hold the values
641 if self.preserve_frame_confidence:
642 # frame_confidence is a 3-dimensional dangling list representing B x T x U
643 for hyp in hypotheses:
644 hyp.frame_confidence = [[]]
645
646 # Last Label buffer + Last Label without blank buffer
647 # batch level equivalent of the last_label
648 last_label = torch.full([batchsize, 1], fill_value=self._blank_index, dtype=torch.long, device=device)
649
650 # Mask buffers
651 blank_mask = torch.full([batchsize], fill_value=0, dtype=torch.bool, device=device)
652
653 # Get max sequence length
654 max_out_len = out_len.max()
655 for time_idx in range(max_out_len):
656 f = x.narrow(dim=1, start=time_idx, length=1) # [B, 1, D]
657
658 # Prepare t timestamp batch variables
659 not_blank = True
660 symbols_added = 0
661
662 # Reset blank mask
663 blank_mask.mul_(False)
664
665 # Update blank mask with time mask
666 # Batch: [B, T, D], but Bi may have seq len < max(seq_lens_in_batch)
667 # Forcibly mask with "blank" tokens for all samples where the current time step T >= seq_len
668 blank_mask = time_idx >= out_len
669 # Start inner loop
670 while not_blank and (self.max_symbols is None or symbols_added < self.max_symbols):
671 # Batch prediction and joint network steps
672 # If very first prediction step, submit SOS tag (blank) to pred_step.
673 # This feeds a zero tensor as input to AbstractRNNTDecoder to prime the state
674 if time_idx == 0 and symbols_added == 0 and hidden is None:
675 g, hidden_prime = self._pred_step(self._SOS, hidden, batch_size=batchsize)
676 else:
677 # Perform batch step prediction of decoder, getting new states and scores ("g")
678 g, hidden_prime = self._pred_step(last_label, hidden, batch_size=batchsize)
679
680 # Batched joint step - Output = [B, V + 1]
681 # If preserving per-frame confidence, log_normalize must be true
682 logp = self._joint_step(f, g, log_normalize=True if self.preserve_frame_confidence else None)[
683 :, 0, 0, :
684 ]
685
686 if logp.dtype != torch.float32:
687 logp = logp.float()
688
689 # Get index k, of max prob for batch
690 v, k = logp.max(1)
691 del g
692
693 # Update blank mask with current predicted blanks
694 # This is accumulating blanks over all time steps T and all target steps min(max_symbols, U)
695 k_is_blank = k == self._blank_index
696 blank_mask.bitwise_or_(k_is_blank)
697 all_blanks = torch.all(blank_mask)
698
699 del k_is_blank
700
701 # If preserving alignments, check if sequence length of sample has been reached
702 # before adding alignment
703 if self.preserve_alignments:
704 # Insert logprobs into last timestep per sample
705 logp_vals = logp.to('cpu')
706 logp_ids = logp_vals.max(1)[1]
707 for batch_idx, is_blank in enumerate(blank_mask):
708 # we only want to update non-blanks, unless we are at the last step in the loop where
709 # all elements produced blanks, otherwise there will be duplicate predictions
710 # saved in alignments
711 if time_idx < out_len[batch_idx] and (all_blanks or not is_blank):
712 hypotheses[batch_idx].alignments[-1].append(
713 (logp_vals[batch_idx], logp_ids[batch_idx])
714 )
715 del logp_vals
716
717 # If preserving per-frame confidence, check if sequence length of sample has been reached
718 # before adding confidence scores
719 if self.preserve_frame_confidence:
720 # Insert probabilities into last timestep per sample
721 confidence = self._get_confidence(logp)
722 for batch_idx, is_blank in enumerate(blank_mask):
723 if time_idx < out_len[batch_idx] and (all_blanks or not is_blank):
724 hypotheses[batch_idx].frame_confidence[-1].append(confidence[batch_idx])
725 del logp
726
727 # If all samples predict / have predicted prior blanks, exit loop early
728 # This is equivalent to if single sample predicted k
729 if all_blanks:
730 not_blank = False
731 else:
732 # Collect batch indices where blanks occurred now/past
733 blank_indices = (blank_mask == 1).nonzero(as_tuple=False)
734
735 # Recover prior state for all samples which predicted blank now/past
736 if hidden is not None:
737 # LSTM has 2 states
738 hidden_prime = self.decoder.batch_copy_states(hidden_prime, hidden, blank_indices)
739
740 elif len(blank_indices) > 0 and hidden is None:
741 # Reset state if there were some blank and other non-blank predictions in batch
742 # Original state is filled with zeros so we just multiply
743 # LSTM has 2 states
744 hidden_prime = self.decoder.batch_copy_states(hidden_prime, None, blank_indices, value=0.0)
745
746 # Recover prior predicted label for all samples which predicted blank now/past
747 k[blank_indices] = last_label[blank_indices, 0]
748
749 # Update new label and hidden state for next iteration
750 last_label = k.clone().view(-1, 1)
751 hidden = hidden_prime
752
753 # Update predicted labels, accounting for time mask
754 # If blank was predicted even once, now or in the past,
755 # Force the current predicted label to also be blank
756 # This ensures that blanks propagate across all timesteps
757 # once they have occurred (normally the stopping condition of the sample level loop).
758 for kidx, ki in enumerate(k):
759 if blank_mask[kidx] == 0:
760 hypotheses[kidx].y_sequence.append(ki)
761 hypotheses[kidx].timestep.append(time_idx)
762 hypotheses[kidx].score += float(v[kidx])
763 symbols_added += 1
764
765 # If preserving alignments, convert the current Uj alignments into a torch.Tensor
766 # Then preserve U at current timestep Ti
767 # Finally, forward the timestep history to Ti+1 for that sample
768 # All of this should only be done iff the current time index <= sample-level AM length.
769 # Otherwise ignore and move to next sample / next timestep.
770 if self.preserve_alignments:
771
772 # convert Ti-th logits into a torch array
773 for batch_idx in range(batchsize):
774
775 # this checks if current timestep <= sample-level AM length
776 # If current timestep > sample-level AM length, no alignments will be added
777 # Therefore the list of Uj alignments is empty here.
778 if len(hypotheses[batch_idx].alignments[-1]) > 0:
779 hypotheses[batch_idx].alignments.append([]) # blank buffer for next timestep
780
781 # Do the same if preserving per-frame confidence
782 if self.preserve_frame_confidence:
783
784 for batch_idx in range(batchsize):
785 if len(hypotheses[batch_idx].frame_confidence[-1]) > 0:
786 hypotheses[batch_idx].frame_confidence.append([]) # blank buffer for next timestep
787
788 # Remove trailing empty list of alignments at T_{am-len} x Uj
789 if self.preserve_alignments:
790 for batch_idx in range(batchsize):
791 if len(hypotheses[batch_idx].alignments[-1]) == 0:
792 del hypotheses[batch_idx].alignments[-1]
793
794 # Remove trailing empty list of confidence scores at T_{am-len} x Uj
795 if self.preserve_frame_confidence:
796 for batch_idx in range(batchsize):
797 if len(hypotheses[batch_idx].frame_confidence[-1]) == 0:
798 del hypotheses[batch_idx].frame_confidence[-1]
799
800 # Preserve states
801 for batch_idx in range(batchsize):
802 hypotheses[batch_idx].dec_state = self.decoder.batch_select_state(hidden, batch_idx)
803
804 return hypotheses
805
    def _greedy_decode_masked(
        self,
        x: torch.Tensor,
        out_len: torch.Tensor,
        device: torch.device,
        partial_hypotheses: Optional[List[rnnt_utils.Hypothesis]] = None,
    ):
        if partial_hypotheses is not None:
            raise NotImplementedError("`partial_hypotheses` support is not implemented")

        # x: [B, T, D]
        # out_len: [B]
        # device: torch.device

        # Initialize state
        batchsize = x.shape[0]
        hypotheses = [
            rnnt_utils.Hypothesis(score=0.0, y_sequence=[], timestep=[], dec_state=None) for _ in range(batchsize)
        ]

        # Initialize Hidden state matrix (shared by entire batch)
        hidden = None

        # If alignments need to be preserved, register a dangling list to hold the values
        if self.preserve_alignments:
            # alignments is a 3-dimensional dangling list representing B x T x U
            for hyp in hypotheses:
                hyp.alignments = [[]]

        # If confidence scores need to be preserved, register a dangling list to hold the values
        if self.preserve_frame_confidence:
            # frame_confidence is a 3-dimensional dangling list representing B x T x U
            for hyp in hypotheses:
                hyp.frame_confidence = [[]]

        # Last Label buffer + Last Label without blank buffer
        # batch level equivalent of the last_label
        last_label = torch.full([batchsize, 1], fill_value=self._blank_index, dtype=torch.long, device=device)
        last_label_without_blank = last_label.clone()

        # Mask buffers
        blank_mask = torch.full([batchsize], fill_value=0, dtype=torch.bool, device=device)

        # Get max sequence length
        max_out_len = out_len.max()

        with torch.inference_mode():
            for time_idx in range(max_out_len):
                f = x.narrow(dim=1, start=time_idx, length=1)  # [B, 1, D]

                # Prepare t timestamp batch variables
                not_blank = True
                symbols_added = 0

                # Reset blank mask
                blank_mask.mul_(False)

                # Update blank mask with time mask
                # Batch: [B, T, D], but Bi may have seq len < max(seq_lens_in_batch)
                # Forcibly mask with "blank" tokens for all samples where current time step T > seq_len
                blank_mask = time_idx >= out_len

                # Start inner loop
                while not_blank and (self.max_symbols is None or symbols_added < self.max_symbols):
                    # Batch prediction and joint network steps
                    # If very first prediction step, submit SOS tag (blank) to pred_step.
                    # This feeds a zero tensor as input to AbstractRNNTDecoder to prime the state
                    if time_idx == 0 and symbols_added == 0 and hidden is None:
                        g, hidden_prime = self._pred_step(self._SOS, hidden, batch_size=batchsize)
                    else:
                        # Set a dummy label for the blank value
                        # This value will be overwritten by "blank" again in the last-label update below
                        # This is done as the vocabulary of the prediction network does not contain the "blank" token of RNNT
                        last_label_without_blank_mask = last_label == self._blank_index
                        last_label_without_blank[last_label_without_blank_mask] = 0  # temp change of label
                        last_label_without_blank[~last_label_without_blank_mask] = last_label[
                            ~last_label_without_blank_mask
                        ]

                        # Perform batch step prediction of decoder, getting new states and scores ("g")
                        g, hidden_prime = self._pred_step(last_label_without_blank, hidden, batch_size=batchsize)

                    # Batched joint step - Output = [B, V + 1]
                    # If preserving per-frame confidence, log_normalize must be true
                    logp = self._joint_step(f, g, log_normalize=True if self.preserve_frame_confidence else None)[
                        :, 0, 0, :
                    ]

                    if logp.dtype != torch.float32:
                        logp = logp.float()

                    # Get index k, of max prob for batch
                    v, k = logp.max(1)
                    del g

                    # Update blank mask with current predicted blanks
                    # This is accumulating blanks over all time steps T and all target steps min(max_symbols, U)
                    k_is_blank = k == self._blank_index
                    blank_mask.bitwise_or_(k_is_blank)
                    all_blanks = torch.all(blank_mask)

                    # If preserving alignments, check if sequence length of sample has been reached
                    # before adding alignment
                    if self.preserve_alignments:
                        # Insert logprobs into last timestep per sample
                        logp_vals = logp.to('cpu')
                        logp_ids = logp_vals.max(1)[1]
                        for batch_idx, is_blank in enumerate(blank_mask):
                            # we only want to update non-blanks, unless we are at the last step in the loop where
                            # all elements produced blanks, otherwise there will be duplicate predictions
                            # saved in alignments
                            if time_idx < out_len[batch_idx] and (all_blanks or not is_blank):
                                hypotheses[batch_idx].alignments[-1].append(
                                    (logp_vals[batch_idx], logp_ids[batch_idx])
                                )

                        del logp_vals

                    # If preserving per-frame confidence, check if sequence length of sample has been reached
                    # before adding confidence scores
                    if self.preserve_frame_confidence:
                        # Insert probabilities into last timestep per sample
                        confidence = self._get_confidence(logp)
                        for batch_idx, is_blank in enumerate(blank_mask):
                            if time_idx < out_len[batch_idx] and (all_blanks or not is_blank):
                                hypotheses[batch_idx].frame_confidence[-1].append(confidence[batch_idx])
                    del logp

                    # If all samples predict / have predicted prior blanks, exit loop early
                    # This is equivalent to if single sample predicted k
                    if blank_mask.all():
                        not_blank = False
                    else:
                        # Collect batch indices where blanks occurred now/past
                        blank_indices = (blank_mask == 1).nonzero(as_tuple=False)

                        # Recover prior state for all samples which predicted blank now/past
                        if hidden is not None:
                            # LSTM has 2 states
                            hidden_prime = self.decoder.batch_copy_states(hidden_prime, hidden, blank_indices)

                        elif len(blank_indices) > 0 and hidden is None:
                            # Reset state if there were some blank and other non-blank predictions in batch
                            # Original state is filled with zeros so we just multiply
                            # LSTM has 2 states
                            hidden_prime = self.decoder.batch_copy_states(hidden_prime, None, blank_indices, value=0.0)

                        # Recover prior predicted label for all samples which predicted blank now/past
                        k[blank_indices] = last_label[blank_indices, 0]

                        # Update new label and hidden state for next iteration
                        last_label = k.view(-1, 1)
                        hidden = hidden_prime

                        # Update predicted labels, accounting for time mask
                        # If blank was predicted even once, now or in the past,
                        # force the current predicted label to also be blank.
                        # This ensures that blanks propagate across all timesteps
                        # once they have occurred (normally the stopping condition of the sample-level loop).
                        for kidx, ki in enumerate(k):
                            if blank_mask[kidx] == 0:
                                hypotheses[kidx].y_sequence.append(ki)
                                hypotheses[kidx].timestep.append(time_idx)
                                hypotheses[kidx].score += float(v[kidx])

                    symbols_added += 1

                # If preserving alignments, convert the current Uj alignments into a torch.Tensor
                # Then preserve U at current timestep Ti
                # Finally, forward the timestep history to Ti+1 for that sample
                # All of this should only be done if the current time index <= sample-level AM length.
                # Otherwise ignore and move to the next sample / next timestep.
                if self.preserve_alignments:

                    # convert Ti-th logits into a torch array
                    for batch_idx in range(batchsize):

                        # this checks if current timestep <= sample-level AM length
                        # If current timestep > sample-level AM length, no alignments will be added
                        # Therefore the list of Uj alignments is empty here.
                        if len(hypotheses[batch_idx].alignments[-1]) > 0:
                            hypotheses[batch_idx].alignments.append([])  # blank buffer for next timestep

                # Do the same if preserving per-frame confidence
                if self.preserve_frame_confidence:

                    for batch_idx in range(batchsize):
                        if len(hypotheses[batch_idx].frame_confidence[-1]) > 0:
                            hypotheses[batch_idx].frame_confidence.append([])  # blank buffer for next timestep

            # Remove trailing empty list of alignments at T_{am-len} x Uj
            if self.preserve_alignments:
                for batch_idx in range(batchsize):
                    if len(hypotheses[batch_idx].alignments[-1]) == 0:
                        del hypotheses[batch_idx].alignments[-1]

            # Remove trailing empty list of confidence scores at T_{am-len} x Uj
            if self.preserve_frame_confidence:
                for batch_idx in range(batchsize):
                    if len(hypotheses[batch_idx].frame_confidence[-1]) == 0:
                        del hypotheses[batch_idx].frame_confidence[-1]

            # Preserve states
            for batch_idx in range(batchsize):
                hypotheses[batch_idx].dec_state = self.decoder.batch_select_state(hidden, batch_idx)

        return hypotheses


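The blank-mask bookkeeping used by the batched greedy loops above (accumulate blank predictions across the inner loop, and let masked samples keep their previous label) can be illustrated in isolation. This is a minimal numpy sketch with toy values, not the NeMo API:

```python
import numpy as np

# Toy batch of 2 samples; all values below are hypothetical, for illustration only.
blank_index = 28
out_len = np.array([3, 5])        # per-sample encoder lengths
time_idx = 3                      # current timestep

# Time mask: sample 0 has run out of frames, so it is forced to "blank"
blank_mask = time_idx >= out_len  # -> [True, False]

# Suppose the joint network argmax predicted these labels this step
k = np.array([7, 2])
last_label = np.array([[4], [5]])  # labels carried over from the previous step

# Accumulate blanks predicted now or in the past
blank_mask = blank_mask | (k == blank_index)

# Samples that are (or were) blank keep their previous label
blank_indices = blank_mask.nonzero()[0]
k[blank_indices] = last_label[blank_indices, 0]

print(blank_mask.tolist(), k.tolist())  # -> [True, False] [4, 2]
```

Sample 0 is past its sequence length, so its prediction is discarded and its previous label `4` is restored; sample 1 keeps its fresh prediction `2`.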
class ExportedModelGreedyBatchedRNNTInfer:
    def __init__(self, encoder_model: str, decoder_joint_model: str, max_symbols_per_step: Optional[int] = None):
        self.encoder_model_path = encoder_model
        self.decoder_joint_model_path = decoder_joint_model
        self.max_symbols_per_step = max_symbols_per_step

        # Will be populated at runtime
        self._blank_index = None

    def __call__(self, audio_signal: torch.Tensor, length: torch.Tensor):
        """Returns a list of hypotheses given an input batch of the encoder hidden embedding.
        Output tokens are generated auto-regressively.

        Args:
            audio_signal: A tensor of size (batch, features, timesteps).
            length: list of int representing the length of each sequence in the batch.

        Returns:
            packed list containing batch number of sentences (Hypotheses).
        """
        with torch.no_grad():
            # Apply optional preprocessing
            encoder_output, encoded_lengths = self.run_encoder(audio_signal=audio_signal, length=length)

            if torch.is_tensor(encoder_output):
                encoder_output = encoder_output.transpose(1, 2)
            else:
                encoder_output = encoder_output.transpose([0, 2, 1])  # (B, T, D)
            logitlen = encoded_lengths

            inseq = encoder_output  # [B, T, D]
            hypotheses, timestamps = self._greedy_decode(inseq, logitlen)

            # Pack the hypotheses results
            packed_result = [rnnt_utils.Hypothesis(score=-1.0, y_sequence=[]) for _ in range(len(hypotheses))]
            for i in range(len(packed_result)):
                packed_result[i].y_sequence = torch.tensor(hypotheses[i], dtype=torch.long)
                packed_result[i].length = timestamps[i]

            del hypotheses

        return packed_result

    def _greedy_decode(self, x, out_len):
        # x: [B, T, D]
        # out_len: [B]

        # Initialize state
        batchsize = x.shape[0]
        hidden = self._get_initial_states(batchsize)
        target_lengths = torch.ones(batchsize, dtype=torch.int32)

        # Output string buffer
        label = [[] for _ in range(batchsize)]
        timesteps = [[] for _ in range(batchsize)]

        # Last Label buffer + Last Label without blank buffer
        # batch level equivalent of the last_label
        last_label = torch.full([batchsize, 1], fill_value=self._blank_index, dtype=torch.long).numpy()
        if torch.is_tensor(x):
            last_label = torch.from_numpy(last_label).to(self.device)

        # Mask buffers
        blank_mask = torch.full([batchsize], fill_value=0, dtype=torch.bool).numpy()

        # Get max sequence length
        max_out_len = out_len.max()
        for time_idx in range(max_out_len):
            f = x[:, time_idx : time_idx + 1, :]  # [B, 1, D]

            if torch.is_tensor(f):
                f = f.transpose(1, 2)
            else:
                f = f.transpose([0, 2, 1])

            # Prepare t timestamp batch variables
            not_blank = True
            symbols_added = 0

            # Reset blank mask
            blank_mask *= False

            # Update blank mask with time mask
            # Batch: [B, T, D], but Bi may have seq len < max(seq_lens_in_batch)
            # Forcibly mask with "blank" tokens for all samples where current time step T > seq_len
            blank_mask = time_idx >= out_len
            # Start inner loop
            while not_blank and (self.max_symbols_per_step is None or symbols_added < self.max_symbols_per_step):

                # Batch prediction and joint network steps
                # If very first prediction step, submit SOS tag (blank) to pred_step.
                # This feeds a zero tensor as input to AbstractRNNTDecoder to prime the state
                if time_idx == 0 and symbols_added == 0:
                    g = torch.tensor([self._blank_index] * batchsize, dtype=torch.int32).view(-1, 1)
                else:
                    if torch.is_tensor(last_label):
                        g = last_label.type(torch.int32)
                    else:
                        g = last_label.astype(np.int32)

                # Batched joint step - Output = [B, V + 1]
                joint_out, hidden_prime = self.run_decoder_joint(f, g, target_lengths, *hidden)
                logp, pred_lengths = joint_out
                logp = logp[:, 0, 0, :]

                # Get index k, of max prob for batch
                if torch.is_tensor(logp):
                    v, k = logp.max(1)
                else:
                    k = np.argmax(logp, axis=1).astype(np.int32)

                # Update blank mask with current predicted blanks
                # This is accumulating blanks over all time steps T and all target steps min(max_symbols, U)
                k_is_blank = k == self._blank_index
                blank_mask |= k_is_blank

                del k_is_blank
                del logp

                # If all samples predict / have predicted prior blanks, exit loop early
                # This is equivalent to if single sample predicted k
                if blank_mask.all():
                    not_blank = False

                else:
                    # Collect batch indices where blanks occurred now/past
                    if torch.is_tensor(blank_mask):
                        blank_indices = (blank_mask == 1).nonzero(as_tuple=False)
                    else:
                        blank_indices = blank_mask.astype(np.int32).nonzero()

                    if type(blank_indices) in (list, tuple):
                        blank_indices = blank_indices[0]

                    # Recover prior state for all samples which predicted blank now/past
                    if hidden is not None:
                        # LSTM has 2 states
                        for state_id in range(len(hidden)):
                            hidden_prime[state_id][:, blank_indices, :] = hidden[state_id][:, blank_indices, :]

                    elif len(blank_indices) > 0 and hidden is None:
                        # Reset state if there were some blank and other non-blank predictions in batch
                        # Original state is filled with zeros so we just multiply
                        # LSTM has 2 states
                        for state_id in range(len(hidden_prime)):
                            hidden_prime[state_id][:, blank_indices, :] *= 0.0

                    # Recover prior predicted label for all samples which predicted blank now/past
                    k[blank_indices] = last_label[blank_indices, 0]

                    # Update new label and hidden state for next iteration
                    if torch.is_tensor(k):
                        last_label = k.clone().reshape(-1, 1)
                    else:
                        last_label = k.copy().reshape(-1, 1)
                    hidden = hidden_prime

                    # Update predicted labels, accounting for time mask
                    # If blank was predicted even once, now or in the past,
                    # force the current predicted label to also be blank.
                    # This ensures that blanks propagate across all timesteps
                    # once they have occurred (normally the stopping condition of the sample-level loop).
                    for kidx, ki in enumerate(k):
                        if blank_mask[kidx] == 0:
                            label[kidx].append(ki)
                            timesteps[kidx].append(time_idx)

                symbols_added += 1

        return label, timesteps

    def _setup_blank_index(self):
        raise NotImplementedError()

    def run_encoder(self, audio_signal, length):
        raise NotImplementedError()

    def run_decoder_joint(self, enc_logits, targets, target_length, *states):
        raise NotImplementedError()

    def _get_initial_states(self, batchsize):
        raise NotImplementedError()


class ONNXGreedyBatchedRNNTInfer(ExportedModelGreedyBatchedRNNTInfer):
    def __init__(self, encoder_model: str, decoder_joint_model: str, max_symbols_per_step: Optional[int] = 10):
        super().__init__(
            encoder_model=encoder_model,
            decoder_joint_model=decoder_joint_model,
            max_symbols_per_step=max_symbols_per_step,
        )

        try:
            import onnx
            import onnxruntime
        except (ModuleNotFoundError, ImportError):
            raise ImportError("`onnx` or `onnxruntime` could not be imported, please install the libraries.")

        if torch.cuda.is_available():
            # Try to use onnxruntime-gpu
            providers = ['TensorrtExecutionProvider', 'CUDAExecutionProvider']
        else:
            # Fall back to CPU and onnxruntime-cpu
            providers = ['CPUExecutionProvider']

        onnx_session_opt = onnxruntime.SessionOptions()
        onnx_session_opt.graph_optimization_level = onnxruntime.GraphOptimizationLevel.ORT_ENABLE_ALL

        onnx_model = onnx.load(self.encoder_model_path)
        onnx.checker.check_model(onnx_model, full_check=True)
        self.encoder_model = onnx_model
        self.encoder = onnxruntime.InferenceSession(
            onnx_model.SerializeToString(), sess_options=onnx_session_opt, providers=providers
        )

        onnx_model = onnx.load(self.decoder_joint_model_path)
        onnx.checker.check_model(onnx_model, full_check=True)
        self.decoder_joint_model = onnx_model
        self.decoder_joint = onnxruntime.InferenceSession(
            onnx_model.SerializeToString(), sess_options=onnx_session_opt, providers=providers
        )

        logging.info("Successfully loaded encoder, decoder and joint onnx models!")

        # Will be populated at runtime
        self._blank_index = None
        self.max_symbols_per_step = max_symbols_per_step

        self._setup_encoder_input_output_keys()
        self._setup_decoder_joint_input_output_keys()
        self._setup_blank_index()

    def _setup_encoder_input_output_keys(self):
        self.encoder_inputs = list(self.encoder_model.graph.input)
        self.encoder_outputs = list(self.encoder_model.graph.output)

    def _setup_decoder_joint_input_output_keys(self):
        self.decoder_joint_inputs = list(self.decoder_joint_model.graph.input)
        self.decoder_joint_outputs = list(self.decoder_joint_model.graph.output)

    def _setup_blank_index(self):
        # ASSUME: Single input with no time length information
        dynamic_dim = 257
        shapes = self.encoder_inputs[0].type.tensor_type.shape.dim
        ip_shape = []
        for shape in shapes:
            if hasattr(shape, 'dim_param') and 'dynamic' in shape.dim_param:
                ip_shape.append(dynamic_dim)  # replace dynamic axes with constant
            else:
                ip_shape.append(int(shape.dim_value))

        enc_logits, encoded_length = self.run_encoder(
            audio_signal=torch.randn(*ip_shape), length=torch.randint(0, 1, size=(dynamic_dim,))
        )

        # prepare states
        states = self._get_initial_states(batchsize=dynamic_dim)

        # run decoder 1 step
        joint_out, states = self.run_decoder_joint(enc_logits, None, None, *states)
        log_probs, lengths = joint_out

        self._blank_index = log_probs.shape[-1] - 1  # last token of vocab size is blank token
        logging.info(
            f"Enc-Dec-Joint step was evaluated, blank token id = {self._blank_index}; vocab size = {log_probs.shape[-1]}"
        )

    def run_encoder(self, audio_signal, length):
        if hasattr(audio_signal, 'cpu'):
            audio_signal = audio_signal.cpu().numpy()

        if hasattr(length, 'cpu'):
            length = length.cpu().numpy()

        ip = {
            self.encoder_inputs[0].name: audio_signal,
            self.encoder_inputs[1].name: length,
        }
        enc_out = self.encoder.run(None, ip)
        enc_out, encoded_length = enc_out  # ASSUME: single output
        return enc_out, encoded_length

    def run_decoder_joint(self, enc_logits, targets, target_length, *states):
        # ASSUME: Decoder is RNN Transducer
        if targets is None:
            targets = torch.zeros(enc_logits.shape[0], 1, dtype=torch.int32)
            target_length = torch.ones(enc_logits.shape[0], dtype=torch.int32)

        if hasattr(targets, 'cpu'):
            targets = targets.cpu().numpy()

        if hasattr(target_length, 'cpu'):
            target_length = target_length.cpu().numpy()

        ip = {
            self.decoder_joint_inputs[0].name: enc_logits,
            self.decoder_joint_inputs[1].name: targets,
            self.decoder_joint_inputs[2].name: target_length,
        }

        num_states = 0
        if states is not None and len(states) > 0:
            num_states = len(states)
            for idx, state in enumerate(states):
                if hasattr(state, 'cpu'):
                    state = state.cpu().numpy()

                ip[self.decoder_joint_inputs[len(ip)].name] = state

        dec_out = self.decoder_joint.run(None, ip)

        # unpack dec output
        if num_states > 0:
            new_states = dec_out[-num_states:]
            dec_out = dec_out[:-num_states]
        else:
            new_states = None

        return dec_out, new_states

    def _get_initial_states(self, batchsize):
        # ASSUME: LSTM STATES of shape (layers, batchsize, dim)
        input_state_nodes = [ip for ip in self.decoder_joint_inputs if 'state' in ip.name]
        num_states = len(input_state_nodes)
        if num_states == 0:
            return

        input_states = []
        for state_id in range(num_states):
            node = input_state_nodes[state_id]
            ip_shape = []
            for shape_idx, shape in enumerate(node.type.tensor_type.shape.dim):
                if hasattr(shape, 'dim_param') and 'dynamic' in shape.dim_param:
                    ip_shape.append(batchsize)  # replace dynamic axes with constant
                else:
                    ip_shape.append(int(shape.dim_value))

            input_states.append(torch.zeros(*ip_shape))

        return input_states

class TorchscriptGreedyBatchedRNNTInfer(ExportedModelGreedyBatchedRNNTInfer):
    def __init__(
        self,
        encoder_model: str,
        decoder_joint_model: str,
        cfg: DictConfig,
        device: str,
        max_symbols_per_step: Optional[int] = 10,
    ):
        super().__init__(
            encoder_model=encoder_model,
            decoder_joint_model=decoder_joint_model,
            max_symbols_per_step=max_symbols_per_step,
        )

        self.cfg = cfg
        self.device = device

        self.encoder = torch.jit.load(self.encoder_model_path, map_location=self.device)
        self.decoder_joint = torch.jit.load(self.decoder_joint_model_path, map_location=self.device)

        logging.info("Successfully loaded encoder, decoder and joint torchscript models!")

        # Will be populated at runtime
        self._blank_index = None
        self.max_symbols_per_step = max_symbols_per_step

        self._setup_encoder_input_keys()
        self._setup_decoder_joint_input_keys()
        self._setup_blank_index()

    def _setup_encoder_input_keys(self):
        arguments = self.encoder.forward.schema.arguments[1:]
        self.encoder_inputs = [arg for arg in arguments]

    def _setup_decoder_joint_input_keys(self):
        arguments = self.decoder_joint.forward.schema.arguments[1:]
        self.decoder_joint_inputs = [arg for arg in arguments]

    def _setup_blank_index(self):
        self._blank_index = len(self.cfg.joint.vocabulary)

        logging.info(f"Blank token id = {self._blank_index}; vocab size = {len(self.cfg.joint.vocabulary) + 1}")

    def run_encoder(self, audio_signal, length):
        enc_out = self.encoder(audio_signal, length)
        enc_out, encoded_length = enc_out  # ASSUME: single output
        return enc_out, encoded_length

    def run_decoder_joint(self, enc_logits, targets, target_length, *states):
        # ASSUME: Decoder is RNN Transducer
        if targets is None:
            targets = torch.zeros(enc_logits.shape[0], 1, dtype=torch.int32, device=enc_logits.device)
            target_length = torch.ones(enc_logits.shape[0], dtype=torch.int32, device=enc_logits.device)

        num_states = 0
        if states is not None and len(states) > 0:
            num_states = len(states)

        dec_out = self.decoder_joint(enc_logits, targets, target_length, *states)

        # unpack dec output
        if num_states > 0:
            new_states = dec_out[-num_states:]
            dec_out = dec_out[:-num_states]
        else:
            new_states = None

        return dec_out, new_states

    def _get_initial_states(self, batchsize):
        # ASSUME: LSTM STATES of shape (layers, batchsize, dim)
        input_state_nodes = [ip for ip in self.decoder_joint_inputs if 'state' in ip.name]
        num_states = len(input_state_nodes)
        if num_states == 0:
            return

        input_states = []
        for state_id in range(num_states):
            # Hardcode shape size for LSTM (1 is for num layers in LSTM, which is flattened for export)
            ip_shape = [1, batchsize, self.cfg.model_defaults.pred_hidden]
            input_states.append(torch.zeros(*ip_shape, device=self.device))

        return input_states
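

The confidence docstrings in this module describe entropy-based per-frame confidence. As a standalone illustration of the core idea (map a log-probability vector to a [0, 1] confidence: 1 for a peaked distribution, 0 for a uniform one), here is a simplified sketch using plain Shannon entropy with linear normalization; the actual methods additionally support α-scaling and Tsallis/Rényi variants:

```python
import numpy as np

def entropy_confidence(logp):
    """Map a log-prob vector to [0, 1]: ~1 means peaked (confident), ~0 means uniform.

    Simplified illustration only: Shannon entropy, linearly normalized by log(V).
    """
    p = np.exp(logp)
    v = p.size
    h = -np.sum(p * np.log(p + 1e-10))  # Shannon entropy; epsilon for numerical stability
    return 1.0 - h / np.log(v)          # linear map: H in [0, log V] -> confidence in [1, 0]

uniform = np.log(np.full(4, 0.25))
peaked = np.log(np.array([0.97, 0.01, 0.01, 0.01]))
print(round(entropy_confidence(uniform), 3), round(entropy_confidence(peaked), 3))  # -> 0.0 0.879
```

The uniform distribution reaches the maximum entropy log(V), giving zero confidence, while the peaked distribution comes out close to 1.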


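The multi-blank decoder below orders the vocabulary as regular tokens first, then big blanks, then the standard blank last, and looks up a frame-skip duration when a big blank is emitted. That lookup can be demonstrated in isolation with hypothetical sizes (10 regular tokens, durations `[2, 4]`):

```python
# Hypothetical setup: 10 regular tokens, big blanks with durations [2, 4],
# standard blank last, so blank_index = 10 + len(durations) = 12.
big_blank_durations = [2, 4]
blank_index = 12

def frames_to_skip(k):
    """Mirror of the big-blank duration lookup in the multi-blank greedy loop."""
    if blank_index - len(big_blank_durations) <= k < blank_index:
        return big_blank_durations[blank_index - k - 1]
    return 1  # regular token or standard blank: advance one frame at a time

print([frames_to_skip(k) for k in (5, 10, 11, 12)])  # -> [1, 4, 2, 1]
```

Token ids 10 and 11 are the big blanks: the one closest to the standard blank maps to the first duration in the list, matching the `blank_index - k - 1` indexing used in the decode loop.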
class GreedyMultiblankRNNTInfer(GreedyRNNTInfer):
    """A greedy transducer decoder for multi-blank RNN-T.

    Sequence level greedy decoding, performed auto-regressively.

    Args:
        decoder_model: rnnt_utils.AbstractRNNTDecoder implementation.
        joint_model: rnnt_utils.AbstractRNNTJoint implementation.
        blank_index: int index of the blank token. Must be len(vocabulary) for multi-blank RNNTs.
        big_blank_durations: a list containing durations for big blanks the model supports.
        max_symbols_per_step: Optional int. The maximum number of symbols that can be added
            to a sequence in a single time step; if set to None then there is
            no limit.
        preserve_alignments: Bool flag which preserves the history of alignments generated during
            greedy decoding (sample / batched). When set to true, the Hypothesis will contain
            the non-null value for `alignments` in it. Here, `alignments` is a List of List of
            Tuple(Tensor (of length V + 1 + num-big-blanks), Tensor(scalar, label after argmax)).
            The length of the list corresponds to the Acoustic Length (T).
            Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more targets from a vocabulary.
            U is the number of target tokens for the current timestep Ti.
        preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores generated
            during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
            the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of List of floats.
            The length of the list corresponds to the Acoustic Length (T).
            Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more confidence scores.
            U is the number of target tokens for the current timestep Ti.
        confidence_method_cfg: A dict-like object which contains the method name and settings to compute per-frame
            confidence scores.

            name: The method name (str).
                Supported values:
                    - 'max_prob' for using the maximum token probability as a confidence.
                    - 'entropy' for using a normalized entropy of a log-likelihood vector.

            entropy_type: Which type of entropy to use (str). Used if confidence_method_cfg.name is set to `entropy`.
                Supported values:
                    - 'gibbs' for the (standard) Gibbs entropy. If the alpha (α) is provided,
                        the formula is the following: H_α = -sum_i((p^α_i)*log(p^α_i)).
                        Note that for this entropy, the alpha should comply with the following inequality:
                        (log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= α <= (1+log(V-1))/log(V-1)
                        where V is the model vocabulary size.
                    - 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
                        Tsallis entropy formula is the following: H_α = 1/(α-1)*(1-sum_i(p^α_i)),
                        where α is a parameter. When α == 1, it works like the Gibbs entropy.
                        More: https://en.wikipedia.org/wiki/Tsallis_entropy
                    - 'renyi' for the Rényi entropy.
                        Rényi entropy formula is the following: H_α = 1/(1-α)*log_2(sum_i(p^α_i)),
                        where α is a parameter. When α == 1, it works like the Gibbs entropy.
                        More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy

            alpha: Power scale for logsoftmax (α for entropies). Here we restrict it to be > 0.
                When the alpha equals one, scaling is not applied to 'max_prob',
                and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))

            entropy_norm: A mapping of the entropy value to the interval [0,1].
                Supported values:
                    - 'lin' for using the linear mapping.
                    - 'exp' for using exponential mapping with linear shift.
    """

1506 def __init__(
1507 self,
1508 decoder_model: rnnt_abstract.AbstractRNNTDecoder,
1509 joint_model: rnnt_abstract.AbstractRNNTJoint,
1510 blank_index: int,
1511 big_blank_durations: list,
1512 max_symbols_per_step: Optional[int] = None,
1513 preserve_alignments: bool = False,
1514 preserve_frame_confidence: bool = False,
1515 confidence_method_cfg: Optional[DictConfig] = None,
1516 ):
1517 super().__init__(
1518 decoder_model=decoder_model,
1519 joint_model=joint_model,
1520 blank_index=blank_index,
1521 max_symbols_per_step=max_symbols_per_step,
1522 preserve_alignments=preserve_alignments,
1523 preserve_frame_confidence=preserve_frame_confidence,
1524 confidence_method_cfg=confidence_method_cfg,
1525 )
1526 self.big_blank_durations = big_blank_durations
1527 self._SOS = blank_index - len(big_blank_durations)
1528
1529 @torch.no_grad()
1530 def _greedy_decode(
1531 self, x: torch.Tensor, out_len: torch.Tensor, partial_hypotheses: Optional[rnnt_utils.Hypothesis] = None
1532 ):
1533 # x: [T, 1, D]
1534 # out_len: [seq_len]
1535
1536 # Initialize blank state and empty label set in Hypothesis
1537 hypothesis = rnnt_utils.Hypothesis(score=0.0, y_sequence=[], dec_state=None, timestep=[], last_token=None)
1538
1539 if partial_hypotheses is not None:
1540 hypothesis.last_token = partial_hypotheses.last_token
1541 hypothesis.y_sequence = (
1542 partial_hypotheses.y_sequence.cpu().tolist()
1543 if isinstance(partial_hypotheses.y_sequence, torch.Tensor)
1544 else partial_hypotheses.y_sequence
1545 )
1546 if partial_hypotheses.dec_state is not None:
1547 hypothesis.dec_state = self.decoder.batch_concat_states([partial_hypotheses.dec_state])
1548 hypothesis.dec_state = _states_to_device(hypothesis.dec_state, x.device)
1549
1550 if self.preserve_alignments:
1551 # Alignments is a 2-dimensional dangling list representing T x U
1552 hypothesis.alignments = [[]]
1553
1554 if self.preserve_frame_confidence:
1555 hypothesis.frame_confidence = [[]]
1556
1557 # if this variable is > 1, it means the last emission was a big-blank and we need to skip frames.
1558 big_blank_duration = 1
1559
1560 # For timestep t in X_t
1561 for time_idx in range(out_len):
1562 if big_blank_duration > 1:
1563 # skip frames until big_blank_duration == 1.
1564 big_blank_duration -= 1
1565 continue
1566 # Extract encoder embedding at timestep t
1567 # f = x[time_idx, :, :].unsqueeze(0) # [1, 1, D]
1568 f = x.narrow(dim=0, start=time_idx, length=1)
1569
1570 # Setup exit flags and counter
1571 not_blank = True
1572 symbols_added = 0
1573
1574 # While blank is not predicted, or we dont run out of max symbols per timestep
1575 while not_blank and (self.max_symbols is None or symbols_added < self.max_symbols):
1576 # In the first timestep, we initialize the network with RNNT Blank
1577 # In later timesteps, we provide previous predicted label as input.
1578 if hypothesis.last_token is None and hypothesis.dec_state is None:
1579 last_label = self._SOS
1580 else:
1581 last_label = label_collate([[hypothesis.last_token]])
1582
1583 # Perform prediction network and joint network steps.
1584 g, hidden_prime = self._pred_step(last_label, hypothesis.dec_state)
1585 # If preserving per-frame confidence, log_normalize must be true
1586 logp = self._joint_step(f, g, log_normalize=True if self.preserve_frame_confidence else None)[
1587 0, 0, 0, :
1588 ]
1589
1590 del g
1591
1592 # torch.max(0) op doesnt exist for FP 16.
1593 if logp.dtype != torch.float32:
1594 logp = logp.float()
1595
1596 # get index k, of max prob
1597 v, k = logp.max(0)
1598 k = k.item() # K is the label at timestep t_s in inner loop, s >= 0.
1599
1600                # Note, we have non-blanks in the vocab first, followed by big blanks, and the standard blank last.
1601 # here we check if it's a big blank and if yes, set the duration variable.
1602 if k >= self._blank_index - len(self.big_blank_durations) and k < self._blank_index:
1603 big_blank_duration = self.big_blank_durations[self._blank_index - k - 1]
1604
1605 if self.preserve_alignments:
1606 # insert logprobs into last timestep
1607 hypothesis.alignments[-1].append((logp.to('cpu'), torch.tensor(k, dtype=torch.int32)))
1608
1609 if self.preserve_frame_confidence:
1610 # insert confidence into last timestep
1611 hypothesis.frame_confidence[-1].append(self._get_confidence(logp))
1612
1613 del logp
1614
1615 # If any type of blank token is predicted, exit inner loop, move onto next timestep t
1616 if k >= self._blank_index - len(self.big_blank_durations):
1617 not_blank = False
1618 else:
1619 # Append token to label set, update RNN state.
1620 hypothesis.y_sequence.append(k)
1621 hypothesis.score += float(v)
1622 hypothesis.timestep.append(time_idx)
1623 hypothesis.dec_state = hidden_prime
1624 hypothesis.last_token = k
1625
1626 # Increment token counter.
1627 symbols_added += 1
1628
1629 if self.preserve_alignments:
1630 # convert Ti-th logits into a torch array
1631 hypothesis.alignments.append([]) # blank buffer for next timestep
1632
1633 if self.preserve_frame_confidence:
1634 hypothesis.frame_confidence.append([]) # blank buffer for next timestep
1635
1636 # Remove trailing empty list of Alignments
1637 if self.preserve_alignments:
1638 if len(hypothesis.alignments[-1]) == 0:
1639 del hypothesis.alignments[-1]
1640
1641 # Remove trailing empty list of per-frame confidence
1642 if self.preserve_frame_confidence:
1643 if len(hypothesis.frame_confidence[-1]) == 0:
1644 del hypothesis.frame_confidence[-1]
1645
1646 # Unpack the hidden states
1647 hypothesis.dec_state = self.decoder.batch_select_state(hypothesis.dec_state, 0)
1648
1649 return hypothesis
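The frame-skipping driven by `big_blank_duration` in the loop above can be isolated into a small sketch (a toy model; the `emissions` map from frame index to emitted blank duration is a made-up stand-in for the model's predictions, not NeMo's API):

```python
# Toy sketch of multi-blank frame skipping: emitting a big blank of duration d
# at frame t makes the decoder skip frames t+1 .. t+d-1 entirely.
def frames_visited(num_frames, emissions):
    """`emissions` maps frame index -> blank duration emitted there (1 = standard blank)."""
    visited = []
    big_blank_duration = 1
    for time_idx in range(num_frames):
        if big_blank_duration > 1:
            # still inside a previously emitted big blank: skip this frame
            big_blank_duration -= 1
            continue
        visited.append(time_idx)
        big_blank_duration = emissions.get(time_idx, 1)
    return visited
```

With `emissions = {1: 3}`, frames 2 and 3 are skipped, so only frames 0, 1, 4, 5 of a 6-frame input are decoded.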
1650
1651
1652 class GreedyBatchedMultiblankRNNTInfer(GreedyBatchedRNNTInfer):
1653 """A batch level greedy transducer decoder.
1654 Batch level greedy decoding, performed auto-regressively.
1655 Args:
1656 decoder_model: rnnt_utils.AbstractRNNTDecoder implementation.
1657 joint_model: rnnt_utils.AbstractRNNTJoint implementation.
1658 blank_index: int index of the blank token. Must be len(vocabulary) for multi-blank RNNTs.
1659 big_blank_durations: a list containing durations for big blanks the model supports.
1660 max_symbols_per_step: Optional int. The maximum number of symbols that can be added
1661 to a sequence in a single time step; if set to None then there is
1662 no limit.
1663 preserve_alignments: Bool flag which preserves the history of alignments generated during
1664 greedy decoding (sample / batched). When set to true, the Hypothesis will contain
1665 the non-null value for `alignments` in it. Here, `alignments` is a List of List of
1666 Tuple(Tensor (of length V + 1 + num-big-blanks), Tensor(scalar, label after argmax)).
1667 The length of the list corresponds to the Acoustic Length (T).
1668 Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more targets from a vocabulary.
1669 U is the number of target tokens for the current timestep Ti.
1670 preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores generated
1671 during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
1672 the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of List of floats.
1673 The length of the list corresponds to the Acoustic Length (T).
1674 Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more confidence scores.
1675 U is the number of target tokens for the current timestep Ti.
1676 confidence_method_cfg: A dict-like object which contains the method name and settings to compute per-frame
1677 confidence scores.
1678
1679 name: The method name (str).
1680 Supported values:
1681 - 'max_prob' for using the maximum token probability as a confidence.
1682 - 'entropy' for using a normalized entropy of a log-likelihood vector.
1683
1684 entropy_type: Which type of entropy to use (str). Used if confidence_method_cfg.name is set to `entropy`.
1685 Supported values:
1686                    - 'gibbs' for the (standard) Gibbs entropy. If the alpha (α) is provided,
1687                        the formula is the following: H_α = -sum_i((p^α_i)*log(p^α_i)).
1688                        Note that for this entropy, the alpha should comply with the following inequality:
1689                        (log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= α <= (1+log(V-1))/log(V-1)
1690                        where V is the model vocabulary size.
1691                    - 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
1692                        Tsallis entropy formula is the following: H_α = 1/(α-1)*(1-sum_i(p^α_i)),
1693                        where α is a parameter. When α == 1, it works like the Gibbs entropy.
1694                        More: https://en.wikipedia.org/wiki/Tsallis_entropy
1695                    - 'renyi' for the Rényi entropy.
1696                        Rényi entropy formula is the following: H_α = 1/(1-α)*log_2(sum_i(p^α_i)),
1697                        where α is a parameter. When α == 1, it works like the Gibbs entropy.
1698 More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
1699
1700                alpha: Power scale for logsoftmax (α for entropies). Here we restrict it to be > 0.
1701 When the alpha equals one, scaling is not applied to 'max_prob',
1702 and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
1703
1704 entropy_norm: A mapping of the entropy value to the interval [0,1].
1705 Supported values:
1706 - 'lin' for using the linear mapping.
1707 - 'exp' for using exponential mapping with linear shift.
1708 """
1709
1710 def __init__(
1711 self,
1712 decoder_model: rnnt_abstract.AbstractRNNTDecoder,
1713 joint_model: rnnt_abstract.AbstractRNNTJoint,
1714 blank_index: int,
1715 big_blank_durations: List[int],
1716 max_symbols_per_step: Optional[int] = None,
1717 preserve_alignments: bool = False,
1718 preserve_frame_confidence: bool = False,
1719 confidence_method_cfg: Optional[DictConfig] = None,
1720 ):
1721 super().__init__(
1722 decoder_model=decoder_model,
1723 joint_model=joint_model,
1724 blank_index=blank_index,
1725 max_symbols_per_step=max_symbols_per_step,
1726 preserve_alignments=preserve_alignments,
1727 preserve_frame_confidence=preserve_frame_confidence,
1728 confidence_method_cfg=confidence_method_cfg,
1729 )
1730 self.big_blank_durations = big_blank_durations
1731
1732 # Depending on availability of `blank_as_pad` support
1733 # switch between more efficient batch decoding technique
1734 if self.decoder.blank_as_pad:
1735 self._greedy_decode = self._greedy_decode_blank_as_pad
1736 else:
1737 self._greedy_decode = self._greedy_decode_masked
1738 self._SOS = blank_index - len(big_blank_durations)
1739
1740 def _greedy_decode_blank_as_pad(
1741 self,
1742 x: torch.Tensor,
1743 out_len: torch.Tensor,
1744 device: torch.device,
1745 partial_hypotheses: Optional[List[rnnt_utils.Hypothesis]] = None,
1746 ):
1747 if partial_hypotheses is not None:
1748            raise NotImplementedError("`partial_hypotheses` support is not implemented")
1749
1750 with torch.inference_mode():
1751 # x: [B, T, D]
1752 # out_len: [B]
1753 # device: torch.device
1754
1755 # Initialize list of Hypothesis
1756 batchsize = x.shape[0]
1757 hypotheses = [
1758 rnnt_utils.Hypothesis(score=0.0, y_sequence=[], timestep=[], dec_state=None) for _ in range(batchsize)
1759 ]
1760
1761 # Initialize Hidden state matrix (shared by entire batch)
1762 hidden = None
1763
1764            # If alignments need to be preserved, register a dangling list to hold the values
1765 if self.preserve_alignments:
1766 # alignments is a 3-dimensional dangling list representing B x T x U
1767 for hyp in hypotheses:
1768 hyp.alignments = [[]]
1769
1770            # If confidence scores need to be preserved, register a dangling list to hold the values
1771 if self.preserve_frame_confidence:
1772 # frame_confidence is a 3-dimensional dangling list representing B x T x U
1773 for hyp in hypotheses:
1774 hyp.frame_confidence = [[]]
1775
1776 # Last Label buffer + Last Label without blank buffer
1777 # batch level equivalent of the last_label
1778 last_label = torch.full([batchsize, 1], fill_value=self._SOS, dtype=torch.long, device=device)
1779
1780            # this mask is true if the emission is *any type* of blank.
1781 blank_mask = torch.full([batchsize], fill_value=0, dtype=torch.bool, device=device)
1782
1783 # Get max sequence length
1784 max_out_len = out_len.max()
1785
1786            # We have a mask for each big blank. A "true" entry means the previous emission was a big blank
1787            # with the corresponding duration, or one with a larger duration. E.g., the big_blank_mask for
1788            # duration 2 is set true if the previous emission was a big blank with duration 4, 3 or 2, but
1789            # false if the previous emission was a standard blank (with duration = 1).
1790            # A comprehension so each mask is a distinct tensor (list multiplication would alias them).
1791            big_blank_masks = [torch.full([batchsize], fill_value=0, dtype=torch.bool, device=device)
1792                               for _ in self.big_blank_durations]
1793
1794 # if this variable > 1, it means the previous emission was big-blank and we need to skip frames.
1795 big_blank_duration = 1
1796
1797 for time_idx in range(max_out_len):
1798 if big_blank_duration > 1:
1799 # skip frames until big_blank_duration == 1
1800 big_blank_duration -= 1
1801 continue
1802 f = x.narrow(dim=1, start=time_idx, length=1) # [B, 1, D]
1803
1804                # Prepare batch variables for timestep t
1805 not_blank = True
1806 symbols_added = 0
1807
1808 # Reset all blank masks
1809 blank_mask.mul_(False)
1810 for i in range(len(big_blank_masks)):
1811 big_blank_masks[i].mul_(False)
1812
1813 # Update blank mask with time mask
1814 # Batch: [B, T, D], but Bi may have seq len < max(seq_lens_in_batch)
1815                # Forcibly mask with "blank" tokens, for all samples where current time step T > seq_len
1816 blank_mask = time_idx >= out_len
1817 for i in range(len(big_blank_masks)):
1818 big_blank_masks[i] = time_idx >= out_len
1819
1820 # Start inner loop
1821 while not_blank and (self.max_symbols is None or symbols_added < self.max_symbols):
1822 # Batch prediction and joint network steps
1823 # If very first prediction step, submit SOS tag (blank) to pred_step.
1824 # This feeds a zero tensor as input to AbstractRNNTDecoder to prime the state
1825 if time_idx == 0 and symbols_added == 0 and hidden is None:
1826 g, hidden_prime = self._pred_step(self._SOS, hidden, batch_size=batchsize)
1827 else:
1828 # Perform batch step prediction of decoder, getting new states and scores ("g")
1829 g, hidden_prime = self._pred_step(last_label, hidden, batch_size=batchsize)
1830
1831 # Batched joint step - Output = [B, V + 1 + num-big-blanks]
1832 # If preserving per-frame confidence, log_normalize must be true
1833 logp = self._joint_step(f, g, log_normalize=True if self.preserve_frame_confidence else None)[
1834 :, 0, 0, :
1835 ]
1836
1837 if logp.dtype != torch.float32:
1838 logp = logp.float()
1839
1840 # Get index k, of max prob for batch
1841 v, k = logp.max(1)
1842 del g
1843
1844 # Update blank mask with current predicted blanks
1845 # This is accumulating blanks over all time steps T and all target steps min(max_symbols, U)
1846 k_is_blank = k >= self._blank_index - len(self.big_blank_durations)
1847 blank_mask.bitwise_or_(k_is_blank)
1848
1849 for i in range(len(big_blank_masks)):
1850 # using <= since as we mentioned before, the mask doesn't store exact matches.
1851 # instead, it is True when the predicted blank's duration is >= the duration that the
1852 # mask corresponds to.
1853 k_is_big_blank = k <= self._blank_index - 1 - i
1854
1855 # need to do a bitwise_and since it could also be a non-blank.
1856 k_is_big_blank.bitwise_and_(k_is_blank)
1857 big_blank_masks[i].bitwise_or_(k_is_big_blank)
1858
1859 del k_is_blank
1860
1861 # If preserving alignments, check if sequence length of sample has been reached
1862 # before adding alignment
1863 if self.preserve_alignments:
1864 # Insert logprobs into last timestep per sample
1865 logp_vals = logp.to('cpu')
1866 logp_ids = logp_vals.max(1)[1]
1867 for batch_idx in range(batchsize):
1868 if time_idx < out_len[batch_idx]:
1869 hypotheses[batch_idx].alignments[-1].append(
1870 (logp_vals[batch_idx], logp_ids[batch_idx])
1871 )
1872 del logp_vals
1873
1874 # If preserving per-frame confidence, check if sequence length of sample has been reached
1875 # before adding confidence scores
1876 if self.preserve_frame_confidence:
1877 # Insert probabilities into last timestep per sample
1878 confidence = self._get_confidence(logp)
1879 for batch_idx in range(batchsize):
1880 if time_idx < out_len[batch_idx]:
1881 hypotheses[batch_idx].frame_confidence[-1].append(confidence[batch_idx])
1882 del logp
1883
1884 # If all samples predict / have predicted prior blanks, exit loop early
1885 # This is equivalent to if single sample predicted k
1886 if blank_mask.all():
1887 not_blank = False
1888 else:
1889 # Collect batch indices where blanks occurred now/past
1890 blank_indices = (blank_mask == 1).nonzero(as_tuple=False)
1891
1892 # Recover prior state for all samples which predicted blank now/past
1893 if hidden is not None:
1894 # LSTM has 2 states
1895 hidden_prime = self.decoder.batch_copy_states(hidden_prime, hidden, blank_indices)
1896
1897 elif len(blank_indices) > 0 and hidden is None:
1898 # Reset state if there were some blank and other non-blank predictions in batch
1899 # Original state is filled with zeros so we just multiply
1900 # LSTM has 2 states
1901 hidden_prime = self.decoder.batch_copy_states(hidden_prime, None, blank_indices, value=0.0)
1902
1903 # Recover prior predicted label for all samples which predicted blank now/past
1904 k[blank_indices] = last_label[blank_indices, 0]
1905
1906 # Update new label and hidden state for next iteration
1907 last_label = k.clone().view(-1, 1)
1908 hidden = hidden_prime
1909
1910 # Update predicted labels, accounting for time mask
1911 # If blank was predicted even once, now or in the past,
1912 # Force the current predicted label to also be blank
1913                        # This ensures that blanks propagate across all timesteps
1914                        # once they have occurred (normally the stopping condition of the sample-level loop).
1915 for kidx, ki in enumerate(k):
1916 if blank_mask[kidx] == 0:
1917 hypotheses[kidx].y_sequence.append(ki)
1918 hypotheses[kidx].timestep.append(time_idx)
1919 hypotheses[kidx].score += float(v[kidx])
1920
1921 symbols_added += 1
1922
1923 for i in range(len(big_blank_masks) + 1):
1924                    # The task here is to find the shortest blank duration emitted across the batch,
1925                    # so we start from the shortest blank duration and go up,
1926                    # and stop once we find a duration whose corresponding mask isn't all True.
1927 if i == len(big_blank_masks) or not big_blank_masks[i].all():
1928 big_blank_duration = self.big_blank_durations[i - 1] if i > 0 else 1
1929 break
1930
1931 # If preserving alignments, convert the current Uj alignments into a torch.Tensor
1932 # Then preserve U at current timestep Ti
1933 # Finally, forward the timestep history to Ti+1 for that sample
1934 # All of this should only be done iff the current time index <= sample-level AM length.
1935 # Otherwise ignore and move to next sample / next timestep.
1936 if self.preserve_alignments:
1937
1938 # convert Ti-th logits into a torch array
1939 for batch_idx in range(batchsize):
1940
1941 # this checks if current timestep <= sample-level AM length
1942 # If current timestep > sample-level AM length, no alignments will be added
1943 # Therefore the list of Uj alignments is empty here.
1944 if len(hypotheses[batch_idx].alignments[-1]) > 0:
1945 hypotheses[batch_idx].alignments.append([]) # blank buffer for next timestep
1946
1947 # Do the same if preserving per-frame confidence
1948 if self.preserve_frame_confidence:
1949
1950 for batch_idx in range(batchsize):
1951 if len(hypotheses[batch_idx].frame_confidence[-1]) > 0:
1952 hypotheses[batch_idx].frame_confidence.append([]) # blank buffer for next timestep
1953
1954 # Remove trailing empty list of alignments at T_{am-len} x Uj
1955 if self.preserve_alignments:
1956 for batch_idx in range(batchsize):
1957 if len(hypotheses[batch_idx].alignments[-1]) == 0:
1958 del hypotheses[batch_idx].alignments[-1]
1959
1960 # Remove trailing empty list of confidence scores at T_{am-len} x Uj
1961 if self.preserve_frame_confidence:
1962 for batch_idx in range(batchsize):
1963 if len(hypotheses[batch_idx].frame_confidence[-1]) == 0:
1964 del hypotheses[batch_idx].frame_confidence[-1]
1965
1966 # Preserve states
1967 for batch_idx in range(batchsize):
1968 hypotheses[batch_idx].dec_state = self.decoder.batch_select_state(hidden, batch_idx)
1969
1970 return hypotheses
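The end-of-timestep duration selection above (the scan over `big_blank_masks`) can be sketched with plain Python lists standing in for the boolean mask tensors (toy values; built-in `all()` plays the role of `Tensor.all()`):

```python
# Toy sketch of duration selection: big_blank_masks[i] is True for a batch
# element whose last blank had duration >= big_blank_durations[i]. The whole
# batch may only skip as far as the SHORTEST duration emitted by any element,
# so we scan from the shortest duration upward and stop at the first mask
# that is not all True.
def batch_skip_duration(big_blank_durations, big_blank_masks):
    for i in range(len(big_blank_masks) + 1):
        if i == len(big_blank_masks) or not all(big_blank_masks[i]):
            return big_blank_durations[i - 1] if i > 0 else 1
```

If any element emitted a standard blank (first mask not all True), the batch advances by a single frame; only when every element emitted at least the i-th duration can that many frames be skipped.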
1971
1972 def _greedy_decode_masked(
1973 self,
1974 x: torch.Tensor,
1975 out_len: torch.Tensor,
1976 device: torch.device,
1977 partial_hypotheses: Optional[List[rnnt_utils.Hypothesis]] = None,
1978 ):
1979 if partial_hypotheses is not None:
1980            raise NotImplementedError("`partial_hypotheses` support is not implemented")
1981
1982 if self.big_blank_durations != [1] * len(self.big_blank_durations):
1983 raise NotImplementedError(
1984 "Efficient frame-skipping version for multi-blank masked decoding is not supported."
1985 )
1986
1987 # x: [B, T, D]
1988 # out_len: [B]
1989 # device: torch.device
1990
1991 # Initialize state
1992 batchsize = x.shape[0]
1993 hypotheses = [
1994 rnnt_utils.Hypothesis(score=0.0, y_sequence=[], timestep=[], dec_state=None) for _ in range(batchsize)
1995 ]
1996
1997 # Initialize Hidden state matrix (shared by entire batch)
1998 hidden = None
1999
2000        # If alignments need to be preserved, register a dangling list to hold the values
2001 if self.preserve_alignments:
2002 # alignments is a 3-dimensional dangling list representing B x T x U
2003 for hyp in hypotheses:
2004 hyp.alignments = [[]]
2005        # Note: when alignments are not preserved, Hypothesis.alignments
2006        # already defaults to None, so no else branch is needed here.
2007
2008        # If confidence scores need to be preserved, register a dangling list to hold the values
2009 if self.preserve_frame_confidence:
2010 # frame_confidence is a 3-dimensional dangling list representing B x T x U
2011 for hyp in hypotheses:
2012 hyp.frame_confidence = [[]]
2013
2014 # Last Label buffer + Last Label without blank buffer
2015 # batch level equivalent of the last_label
2016 last_label = torch.full([batchsize, 1], fill_value=self._blank_index, dtype=torch.long, device=device)
2017 last_label_without_blank = last_label.clone()
2018
2019 # Mask buffers
2020 blank_mask = torch.full([batchsize], fill_value=0, dtype=torch.bool, device=device)
2021
2022 # Get max sequence length
2023 max_out_len = out_len.max()
2024
2025 with torch.inference_mode():
2026 for time_idx in range(max_out_len):
2027 f = x.narrow(dim=1, start=time_idx, length=1) # [B, 1, D]
2028
2029                # Prepare batch variables for timestep t
2030 not_blank = True
2031 symbols_added = 0
2032
2033 # Reset blank mask
2034 blank_mask.mul_(False)
2035
2036 # Update blank mask with time mask
2037 # Batch: [B, T, D], but Bi may have seq len < max(seq_lens_in_batch)
2038                # Forcibly mask with "blank" tokens, for all samples where current time step T > seq_len
2039 blank_mask = time_idx >= out_len
2040
2041 # Start inner loop
2042 while not_blank and (self.max_symbols is None or symbols_added < self.max_symbols):
2043 # Batch prediction and joint network steps
2044 # If very first prediction step, submit SOS tag (blank) to pred_step.
2045 # This feeds a zero tensor as input to AbstractRNNTDecoder to prime the state
2046 if time_idx == 0 and symbols_added == 0 and hidden is None:
2047 g, hidden_prime = self._pred_step(self._SOS, hidden, batch_size=batchsize)
2048 else:
2049 # Set a dummy label for the blank value
2050                        # This value will be overwritten by "blank" again in the last label update below
2051 # This is done as vocabulary of prediction network does not contain "blank" token of RNNT
2052 last_label_without_blank_mask = last_label >= self._blank_index
2053 last_label_without_blank[last_label_without_blank_mask] = 0 # temp change of label
2054 last_label_without_blank[~last_label_without_blank_mask] = last_label[
2055 ~last_label_without_blank_mask
2056 ]
2057
2058 # Perform batch step prediction of decoder, getting new states and scores ("g")
2059 g, hidden_prime = self._pred_step(last_label_without_blank, hidden, batch_size=batchsize)
2060
2061 # Batched joint step - Output = [B, V + 1 + num-big-blanks]
2062 # If preserving per-frame confidence, log_normalize must be true
2063 logp = self._joint_step(f, g, log_normalize=True if self.preserve_frame_confidence else None)[
2064 :, 0, 0, :
2065 ]
2066
2067 if logp.dtype != torch.float32:
2068 logp = logp.float()
2069
2070 # Get index k, of max prob for batch
2071 v, k = logp.max(1)
2072 del g
2073
2074 # Update blank mask with current predicted blanks
2075 # This is accumulating blanks over all time steps T and all target steps min(max_symbols, U)
2076 k_is_blank = k == self._blank_index
2077 blank_mask.bitwise_or_(k_is_blank)
2078
2079 # If preserving alignments, check if sequence length of sample has been reached
2080 # before adding alignment
2081 if self.preserve_alignments:
2082 # Insert logprobs into last timestep per sample
2083 logp_vals = logp.to('cpu')
2084 logp_ids = logp_vals.max(1)[1]
2085 for batch_idx in range(batchsize):
2086 if time_idx < out_len[batch_idx]:
2087 hypotheses[batch_idx].alignments[-1].append(
2088 (logp_vals[batch_idx], logp_ids[batch_idx])
2089 )
2090 del logp_vals
2091
2092 # If preserving per-frame confidence, check if sequence length of sample has been reached
2093 # before adding confidence scores
2094 if self.preserve_frame_confidence:
2095 # Insert probabilities into last timestep per sample
2096 confidence = self._get_confidence(logp)
2097 for batch_idx in range(batchsize):
2098 if time_idx < out_len[batch_idx]:
2099 hypotheses[batch_idx].frame_confidence[-1].append(confidence[batch_idx])
2100 del logp
2101
2102 # If all samples predict / have predicted prior blanks, exit loop early
2103 # This is equivalent to if single sample predicted k
2104 if blank_mask.all():
2105 not_blank = False
2106 else:
2107 # Collect batch indices where blanks occurred now/past
2108 blank_indices = (blank_mask == 1).nonzero(as_tuple=False)
2109
2110 # Recover prior state for all samples which predicted blank now/past
2111 if hidden is not None:
2112 # LSTM has 2 states
2113 hidden_prime = self.decoder.batch_copy_states(hidden_prime, hidden, blank_indices)
2114
2115 elif len(blank_indices) > 0 and hidden is None:
2116 # Reset state if there were some blank and other non-blank predictions in batch
2117 # Original state is filled with zeros so we just multiply
2118 # LSTM has 2 states
2119 hidden_prime = self.decoder.batch_copy_states(hidden_prime, None, blank_indices, value=0.0)
2120
2121 # Recover prior predicted label for all samples which predicted blank now/past
2122 k[blank_indices] = last_label[blank_indices, 0]
2123
2124 # Update new label and hidden state for next iteration
2125 last_label = k.view(-1, 1)
2126 hidden = hidden_prime
2127
2128 # Update predicted labels, accounting for time mask
2129 # If blank was predicted even once, now or in the past,
2130 # Force the current predicted label to also be blank
2131                        # This ensures that blanks propagate across all timesteps
2132                        # once they have occurred (normally the stopping condition of the sample-level loop).
2133 for kidx, ki in enumerate(k):
2134 if blank_mask[kidx] == 0:
2135 hypotheses[kidx].y_sequence.append(ki)
2136 hypotheses[kidx].timestep.append(time_idx)
2137 hypotheses[kidx].score += float(v[kidx])
2138
2139 symbols_added += 1
2140
2141 # If preserving alignments, convert the current Uj alignments into a torch.Tensor
2142 # Then preserve U at current timestep Ti
2143 # Finally, forward the timestep history to Ti+1 for that sample
2144 # All of this should only be done iff the current time index <= sample-level AM length.
2145 # Otherwise ignore and move to next sample / next timestep.
2146 if self.preserve_alignments:
2147
2148 # convert Ti-th logits into a torch array
2149 for batch_idx in range(batchsize):
2150
2151 # this checks if current timestep <= sample-level AM length
2152 # If current timestep > sample-level AM length, no alignments will be added
2153 # Therefore the list of Uj alignments is empty here.
2154 if len(hypotheses[batch_idx].alignments[-1]) > 0:
2155 hypotheses[batch_idx].alignments.append([]) # blank buffer for next timestep
2156
2157 # Do the same if preserving per-frame confidence
2158 if self.preserve_frame_confidence:
2159
2160 for batch_idx in range(batchsize):
2161 if len(hypotheses[batch_idx].frame_confidence[-1]) > 0:
2162 hypotheses[batch_idx].frame_confidence.append([]) # blank buffer for next timestep
2163
2164 # Remove trailing empty list of alignments at T_{am-len} x Uj
2165 if self.preserve_alignments:
2166 for batch_idx in range(batchsize):
2167 if len(hypotheses[batch_idx].alignments[-1]) == 0:
2168 del hypotheses[batch_idx].alignments[-1]
2169
2170 # Remove trailing empty list of confidence scores at T_{am-len} x Uj
2171 if self.preserve_frame_confidence:
2172 for batch_idx in range(batchsize):
2173 if len(hypotheses[batch_idx].frame_confidence[-1]) == 0:
2174 del hypotheses[batch_idx].frame_confidence[-1]
2175
2176 # Preserve states
2177 for batch_idx in range(batchsize):
2178 hypotheses[batch_idx].dec_state = self.decoder.batch_select_state(hidden, batch_idx)
2179
2180 return hypotheses
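The dummy-label trick in the method above (the prediction network's vocabulary contains no blank token, so blank last-labels are temporarily mapped to index 0 before the pred step) can be sketched with plain ints instead of tensors (a toy sketch, not the masked tensor code):

```python
# Toy sketch of the "dummy label" trick: any batch element whose last label is
# a blank (label >= blank_index) is temporarily fed the dummy index 0, since
# the prediction network has no embedding for blank. The real label is
# restored by the "blank propagation" step of the decoding loop.
def labels_without_blank(last_labels, blank_index):
    return [0 if label >= blank_index else label for label in last_labels]
```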
2181
2182
2183 @dataclass
2184 class GreedyRNNTInferConfig:
2185 max_symbols_per_step: Optional[int] = 10
2186 preserve_alignments: bool = False
2187 preserve_frame_confidence: bool = False
2188 confidence_method_cfg: Optional[ConfidenceMethodConfig] = ConfidenceMethodConfig()
2189
2190 def __post_init__(self):
2191 # OmegaConf.structured ensures that post_init check is always executed
2192 self.confidence_method_cfg = OmegaConf.structured(
2193 self.confidence_method_cfg
2194 if isinstance(self.confidence_method_cfg, ConfidenceMethodConfig)
2195 else ConfidenceMethodConfig(**self.confidence_method_cfg)
2196 )
2197
2198
2199 @dataclass
2200 class GreedyBatchedRNNTInferConfig:
2201 max_symbols_per_step: Optional[int] = 10
2202 preserve_alignments: bool = False
2203 preserve_frame_confidence: bool = False
2204 confidence_method_cfg: Optional[ConfidenceMethodConfig] = ConfidenceMethodConfig()
2205
2206 def __post_init__(self):
2207 # OmegaConf.structured ensures that post_init check is always executed
2208 self.confidence_method_cfg = OmegaConf.structured(
2209 self.confidence_method_cfg
2210 if isinstance(self.confidence_method_cfg, ConfidenceMethodConfig)
2211 else ConfidenceMethodConfig(**self.confidence_method_cfg)
2212 )
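As a rough illustration of the entropy-based confidence described in the docstrings above, here is a minimal sketch of the Gibbs (Shannon) entropy with the linear `entropy_norm` ('lin') mapping, assuming `probs` is a normalized probability vector (an illustrative sketch under those assumptions, not NeMo's actual confidence implementation):

```python
import math

# Minimal sketch of an entropy-based per-frame confidence: Gibbs entropy with
# alpha == 1 (i.e. Shannon entropy H = -sum_i(p_i*log(p_i))) mapped linearly
# onto [0, 1]. Confidence is 1 for a one-hot distribution (zero entropy) and
# 0 for a uniform one (maximum entropy log(V)).
def entropy_confidence(probs):
    v = len(probs)
    entropy = -sum(p * math.log(p) for p in probs if p > 0.0)
    return 1.0 - entropy / math.log(v)  # 'lin' mapping of H onto [0, 1]
```

A sharply peaked distribution yields a confidence near 1, while a flat one yields a confidence near 0, matching the intent of the `max_prob`/`entropy` options documented above.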
2213
2214
2215 class GreedyTDTInfer(_GreedyRNNTInfer):
2216 """A greedy TDT decoder.
2217
2218 Sequence level greedy decoding, performed auto-regressively.
2219
2220 Args:
2221 decoder_model: rnnt_utils.AbstractRNNTDecoder implementation.
2222 joint_model: rnnt_utils.AbstractRNNTJoint implementation.
2223 blank_index: int index of the blank token. Must be len(vocabulary) for TDT models.
2224 durations: a list containing durations for TDT.
2225 max_symbols_per_step: Optional int. The maximum number of symbols that can be added
2226 to a sequence in a single time step; if set to None then there is
2227 no limit.
2228 preserve_alignments: Bool flag which preserves the history of alignments generated during
2229 greedy decoding (sample / batched). When set to true, the Hypothesis will contain
2230 the non-null value for `alignments` in it. Here, `alignments` is a List of List of
2231 Tuple(Tensor (of length V + 1 + num-big-blanks), Tensor(scalar, label after argmax)).
2232 The length of the list corresponds to the Acoustic Length (T).
2233 Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more targets from a vocabulary.
2234 U is the number of target tokens for the current timestep Ti.
2235 preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores generated
2236 during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
2237 the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of List of floats.
2238 The length of the list corresponds to the Acoustic Length (T).
2239 Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more confidence scores.
2240 U is the number of target tokens for the current timestep Ti.
2241 confidence_method_cfg: A dict-like object which contains the method name and settings to compute per-frame
2242 confidence scores.
2243
2244 name: The method name (str).
2245 Supported values:
2246 - 'max_prob' for using the maximum token probability as a confidence.
2247 - 'entropy' for using a normalized entropy of a log-likelihood vector.
2248
2249 entropy_type: Which type of entropy to use (str). Used if confidence_method_cfg.name is set to `entropy`.
2250 Supported values:
2251                    - 'gibbs' for the (standard) Gibbs entropy. If the alpha (α) is provided,
2252                        the formula is the following: H_α = -sum_i((p^α_i)*log(p^α_i)).
2253                        Note that for this entropy, the alpha should comply with the following inequality:
2254                        (log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= α <= (1+log(V-1))/log(V-1)
2255                        where V is the model vocabulary size.
2256                    - 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
2257                        Tsallis entropy formula is the following: H_α = 1/(α-1)*(1-sum_i(p^α_i)),
2258                        where α is a parameter. When α == 1, it works like the Gibbs entropy.
2259                        More: https://en.wikipedia.org/wiki/Tsallis_entropy
2260                    - 'renyi' for the Rényi entropy.
2261                        Rényi entropy formula is the following: H_α = 1/(1-α)*log_2(sum_i(p^α_i)),
2262                        where α is a parameter. When α == 1, it works like the Gibbs entropy.
2263 More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
2264
2265                alpha: Power scale for logsoftmax (α for entropies). Here we restrict it to be > 0.
2266 When the alpha equals one, scaling is not applied to 'max_prob',
2267 and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
2268
2269 entropy_norm: A mapping of the entropy value to the interval [0,1].
2270 Supported values:
2271 - 'lin' for using the linear mapping.
2272 - 'exp' for using exponential mapping with linear shift.
2273 """
2274
2275 def __init__(
2276 self,
2277 decoder_model: rnnt_abstract.AbstractRNNTDecoder,
2278 joint_model: rnnt_abstract.AbstractRNNTJoint,
2279 blank_index: int,
2280 durations: list,
2281 max_symbols_per_step: Optional[int] = None,
2282 preserve_alignments: bool = False,
2283 preserve_frame_confidence: bool = False,
2284 confidence_method_cfg: Optional[DictConfig] = None,
2285 ):
2286 super().__init__(
2287 decoder_model=decoder_model,
2288 joint_model=joint_model,
2289 blank_index=blank_index,
2290 max_symbols_per_step=max_symbols_per_step,
2291 preserve_alignments=preserve_alignments,
2292 preserve_frame_confidence=preserve_frame_confidence,
2293 confidence_method_cfg=confidence_method_cfg,
2294 )
2295 self.durations = durations
2296
2297 @typecheck()
2298 def forward(
2299 self,
2300 encoder_output: torch.Tensor,
2301 encoded_lengths: torch.Tensor,
2302 partial_hypotheses: Optional[List[rnnt_utils.Hypothesis]] = None,
2303 ):
2304 """Returns a list of hypotheses given an input batch of the encoder hidden embedding.
2305 Output token is generated auto-regressively.
2306 Args:
2307 encoder_output: A tensor of size (batch, features, timesteps).
2308            encoded_lengths: list of int representing the length of each
2309                sequence in the batch.
2310 Returns:
2311 packed list containing batch number of sentences (Hypotheses).
2312 """
2313 # Preserve decoder and joint training state
2314 decoder_training_state = self.decoder.training
2315 joint_training_state = self.joint.training
2316
2317 with torch.inference_mode():
2318 # Apply optional preprocessing
2319 encoder_output = encoder_output.transpose(1, 2) # (B, T, D)
2320
2321 self.decoder.eval()
2322 self.joint.eval()
2323
2324 hypotheses = []
2325 # Process each sequence independently
2326 with self.decoder.as_frozen(), self.joint.as_frozen():
2327 for batch_idx in range(encoder_output.size(0)):
2328 inseq = encoder_output[batch_idx, :, :].unsqueeze(1) # [T, 1, D]
2329 logitlen = encoded_lengths[batch_idx]
2330
2331 partial_hypothesis = partial_hypotheses[batch_idx] if partial_hypotheses is not None else None
2332 hypothesis = self._greedy_decode(inseq, logitlen, partial_hypotheses=partial_hypothesis)
2333 hypotheses.append(hypothesis)
2334
2335 # Pack results into Hypotheses
2336 packed_result = pack_hypotheses(hypotheses, encoded_lengths)
2337
2338 self.decoder.train(decoder_training_state)
2339 self.joint.train(joint_training_state)
2340
2341 return (packed_result,)
2342
2343 @torch.no_grad()
2344 def _greedy_decode(
2345 self, x: torch.Tensor, out_len: torch.Tensor, partial_hypotheses: Optional[rnnt_utils.Hypothesis] = None
2346 ):
2347 # x: [T, 1, D]
2348 # out_len: [seq_len]
2349
2350 # Initialize blank state and empty label set in Hypothesis
2351 hypothesis = rnnt_utils.Hypothesis(score=0.0, y_sequence=[], dec_state=None, timestep=[], last_token=None)
2352
2353 if partial_hypotheses is not None:
2354 hypothesis.last_token = partial_hypotheses.last_token
2355 hypothesis.y_sequence = (
2356 partial_hypotheses.y_sequence.cpu().tolist()
2357 if isinstance(partial_hypotheses.y_sequence, torch.Tensor)
2358 else partial_hypotheses.y_sequence
2359 )
2360 if partial_hypotheses.dec_state is not None:
2361 hypothesis.dec_state = self.decoder.batch_concat_states([partial_hypotheses.dec_state])
2362 hypothesis.dec_state = _states_to_device(hypothesis.dec_state, x.device)
2363
2364 if self.preserve_alignments:
2365 # Alignments is a 2-dimensional dangling list representing T x U
2366 hypothesis.alignments = [[]]
2367
2368 if self.preserve_frame_confidence:
2369 hypothesis.frame_confidence = [[]]
2370
2371 time_idx = 0
2372 while time_idx < out_len:
2373 # Extract encoder embedding at timestep t
2374 # f = x[time_idx, :, :].unsqueeze(0) # [1, 1, D]
2375 f = x.narrow(dim=0, start=time_idx, length=1)
2376
2377 # Setup exit flags and counter
2378 not_blank = True
2379 symbols_added = 0
2380
2381 need_loop = True
2382 # While blank is not predicted, or we don't run out of max symbols per timestep
2383 while need_loop and (self.max_symbols is None or symbols_added < self.max_symbols):
2384 # In the first timestep, we initialize the network with RNNT Blank
2385 # In later timesteps, we provide previous predicted label as input.
2386 if hypothesis.last_token is None and hypothesis.dec_state is None:
2387 last_label = self._SOS
2388 else:
2389 last_label = label_collate([[hypothesis.last_token]])
2390
2391 # Perform prediction network and joint network steps.
2392 g, hidden_prime = self._pred_step(last_label, hypothesis.dec_state)
2393 # log_normalize is False here; when preserving per-frame confidence, the token sub-vector is log-normalized separately below
2394 logits = self._joint_step(f, g, log_normalize=False)
2395 logp = logits[0, 0, 0, : -len(self.durations)]
2396 if self.preserve_frame_confidence:
2397 logp = torch.log_softmax(logp, -1)
2398
2399 duration_logp = torch.log_softmax(logits[0, 0, 0, -len(self.durations) :], dim=-1)
2400 del g
2401
2402 # torch.max(0) op doesn't exist for FP16.
2403 if logp.dtype != torch.float32:
2404 logp = logp.float()
2405
2406 # get index k, of max prob
2407 v, k = logp.max(0)
2408 k = k.item() # K is the label at timestep t_s in inner loop, s >= 0.
2409
2410 d_v, d_k = duration_logp.max(0)
2411 d_k = d_k.item()
2412
2413 skip = self.durations[d_k]
2414
2415 if self.preserve_alignments:
2416 # insert logprobs into last timestep
2417 hypothesis.alignments[-1].append((logp.to('cpu'), torch.tensor(k, dtype=torch.int32)))
2418
2419 if self.preserve_frame_confidence:
2420 # insert confidence into last timestep
2421 hypothesis.frame_confidence[-1].append(self._get_confidence(logp))
2422
2423 del logp
2424
2425 # If blank token is predicted, exit inner loop, move onto next timestep t
2426 if k == self._blank_index:
2427 not_blank = False
2428 else:
2429 # Append token to label set, update RNN state.
2430 hypothesis.y_sequence.append(k)
2431 hypothesis.score += float(v)
2432 hypothesis.timestep.append(time_idx)
2433 hypothesis.dec_state = hidden_prime
2434 hypothesis.last_token = k
2435
2436 # Increment token counter.
2437 symbols_added += 1
2438 time_idx += skip
2439 need_loop = skip == 0
2440
2441 # this rarely happens, but we manually set `skip` to 1
2442 # if blank is emitted and duration=0 is predicted. This prevents possible
2443 # infinite loops.
2444 if skip == 0:
2445 skip = 1
2446
2447 if self.preserve_alignments:
2448 # convert Ti-th logits into a torch array
2449 hypothesis.alignments.append([]) # blank buffer for next timestep
2450
2451 if self.preserve_frame_confidence:
2452 hypothesis.frame_confidence.append([]) # blank buffer for next timestep
2453
2454 if symbols_added == self.max_symbols:
2455 time_idx += 1
2456
2457 # Remove trailing empty list of Alignments
2458 if self.preserve_alignments:
2459 if len(hypothesis.alignments[-1]) == 0:
2460 del hypothesis.alignments[-1]
2461
2462 # Remove trailing empty list of per-frame confidence
2463 if self.preserve_frame_confidence:
2464 if len(hypothesis.frame_confidence[-1]) == 0:
2465 del hypothesis.frame_confidence[-1]
2466
2467 # Unpack the hidden states
2468 hypothesis.dec_state = self.decoder.batch_select_state(hypothesis.dec_state, 0)
2469
2470 return hypothesis
2471
2472
2473 class GreedyBatchedTDTInfer(_GreedyRNNTInfer):
2474 """A batch level greedy TDT decoder.
2475 Batch level greedy decoding, performed auto-regressively.
2476 Args:
2477 decoder_model: rnnt_utils.AbstractRNNTDecoder implementation.
2478 joint_model: rnnt_utils.AbstractRNNTJoint implementation.
2479 blank_index: int index of the blank token. Must be len(vocabulary) for TDT models.
2480 durations: a list containing durations.
2481 max_symbols_per_step: Optional int. The maximum number of symbols that can be added
2482 to a sequence in a single time step; if set to None then there is
2483 no limit.
2484 preserve_alignments: Bool flag which preserves the history of alignments generated during
2485 greedy decoding (sample / batched). When set to true, the Hypothesis will contain
2486 the non-null value for `alignments` in it. Here, `alignments` is a List of List of
2487 Tuple(Tensor (of length V + 1 + num-durations), Tensor(scalar, label after argmax)).
2488 The length of the list corresponds to the Acoustic Length (T).
2489 Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more targets from a vocabulary.
2490 U is the number of target tokens for the current timestep Ti.
2491 preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores generated
2492 during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
2493 the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of List of floats.
2494 The length of the list corresponds to the Acoustic Length (T).
2495 Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more confidence scores.
2496 U is the number of target tokens for the current timestep Ti.
2497 confidence_method_cfg: A dict-like object which contains the method name and settings to compute per-frame
2498 confidence scores.
2499
2500 name: The method name (str).
2501 Supported values:
2502 - 'max_prob' for using the maximum token probability as a confidence.
2503 - 'entropy' for using a normalized entropy of a log-likelihood vector.
2504
2505 entropy_type: Which type of entropy to use (str). Used if confidence_method_cfg.name is set to `entropy`.
2506 Supported values:
2507 - 'gibbs' for the (standard) Gibbs entropy. If the alpha (α) is provided,
2508 the formula is the following: H_α = -sum_i((p^α_i)*log(p^α_i)).
2509 Note that for this entropy, the alpha should comply with the following inequality:
2510 (log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= α <= (1+log(V-1))/log(V-1)
2511 where V is the model vocabulary size.
2512 - 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
2513 Tsallis entropy formula is the following: H_α = 1/(α-1)*(1-sum_i(p^α_i)),
2514 where α is a parameter. When α == 1, it works like the Gibbs entropy.
2515 More: https://en.wikipedia.org/wiki/Tsallis_entropy
2516 - 'renyi' for the Rényi entropy.
2517 Rényi entropy formula is the following: H_α = 1/(1-α)*log_2(sum_i(p^α_i)),
2518 where α is a parameter. When α == 1, it works like the Gibbs entropy.
2519 More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
2520 
2521 alpha: Power scale for logsoftmax (α for entropies). Here we restrict it to be > 0.
2522 When the alpha equals one, scaling is not applied to 'max_prob',
2523 and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
2524
2525 entropy_norm: A mapping of the entropy value to the interval [0,1].
2526 Supported values:
2527 - 'lin' for using the linear mapping.
2528 - 'exp' for using exponential mapping with linear shift.
2529 """
2530
2531 def __init__(
2532 self,
2533 decoder_model: rnnt_abstract.AbstractRNNTDecoder,
2534 joint_model: rnnt_abstract.AbstractRNNTJoint,
2535 blank_index: int,
2536 durations: List[int],
2537 max_symbols_per_step: Optional[int] = None,
2538 preserve_alignments: bool = False,
2539 preserve_frame_confidence: bool = False,
2540 confidence_method_cfg: Optional[DictConfig] = None,
2541 ):
2542 super().__init__(
2543 decoder_model=decoder_model,
2544 joint_model=joint_model,
2545 blank_index=blank_index,
2546 max_symbols_per_step=max_symbols_per_step,
2547 preserve_alignments=preserve_alignments,
2548 preserve_frame_confidence=preserve_frame_confidence,
2549 confidence_method_cfg=confidence_method_cfg,
2550 )
2551 self.durations = durations
2552
2553 # Depending on availability of `blank_as_pad` support,
2554 # switch to the more efficient batch decoding technique
2555 if self.decoder.blank_as_pad:
2556 self._greedy_decode = self._greedy_decode_blank_as_pad
2557 else:
2558 self._greedy_decode = self._greedy_decode_masked
2559
2560 @typecheck()
2561 def forward(
2562 self,
2563 encoder_output: torch.Tensor,
2564 encoded_lengths: torch.Tensor,
2565 partial_hypotheses: Optional[List[rnnt_utils.Hypothesis]] = None,
2566 ):
2567 """Returns a list of hypotheses given an input batch of the encoder hidden embedding.
2568 Output token is generated auto-regressively.
2569 Args:
2570 encoder_output: A tensor of size (batch, features, timesteps).
2571 encoded_lengths: list of int representing the length of each
2572 output sequence.
2573 Returns:
2574 packed list containing batch number of sentences (Hypotheses).
2575 """
2576 # Preserve decoder and joint training state
2577 decoder_training_state = self.decoder.training
2578 joint_training_state = self.joint.training
2579
2580 with torch.inference_mode():
2581 # Apply optional preprocessing
2582 encoder_output = encoder_output.transpose(1, 2) # (B, T, D)
2583 logitlen = encoded_lengths
2584
2585 self.decoder.eval()
2586 self.joint.eval()
2587
2588 with self.decoder.as_frozen(), self.joint.as_frozen():
2589 inseq = encoder_output # [B, T, D]
2590 hypotheses = self._greedy_decode(
2591 inseq, logitlen, device=inseq.device, partial_hypotheses=partial_hypotheses
2592 )
2593
2594 # Pack the hypotheses results
2595 packed_result = pack_hypotheses(hypotheses, logitlen)
2596
2597 self.decoder.train(decoder_training_state)
2598 self.joint.train(joint_training_state)
2599
2600 return (packed_result,)
2601
2602 def _greedy_decode_blank_as_pad(
2603 self,
2604 x: torch.Tensor,
2605 out_len: torch.Tensor,
2606 device: torch.device,
2607 partial_hypotheses: Optional[List[rnnt_utils.Hypothesis]] = None,
2608 ):
2609 if partial_hypotheses is not None:
2610 raise NotImplementedError("`partial_hypotheses` support is not implemented for this decoder")
2611
2612 with torch.inference_mode():
2613 # x: [B, T, D]
2614 # out_len: [B]
2615 # device: torch.device
2616
2617 # Initialize list of Hypothesis
2618 batchsize = x.shape[0]
2619 hypotheses = [
2620 rnnt_utils.Hypothesis(score=0.0, y_sequence=[], timestep=[], dec_state=None) for _ in range(batchsize)
2621 ]
2622
2623 # Initialize Hidden state matrix (shared by entire batch)
2624 hidden = None
2625
2626 # If alignments need to be preserved, register a dangling list to hold the values
2627 if self.preserve_alignments:
2628 # alignments is a 3-dimensional dangling list representing B x T x U
2629 for hyp in hypotheses:
2630 hyp.alignments = [[]]
2631
2632 # If confidence scores need to be preserved, register a dangling list to hold the values
2633 if self.preserve_frame_confidence:
2634 # frame_confidence is a 3-dimensional dangling list representing B x T x U
2635 for hyp in hypotheses:
2636 hyp.frame_confidence = [[]]
2637
2638 # Last Label buffer + Last Label without blank buffer
2639 # batch level equivalent of the last_label
2640 last_label = torch.full([batchsize, 1], fill_value=self._blank_index, dtype=torch.long, device=device)
2641
2642 # Mask buffers
2643 blank_mask = torch.full([batchsize], fill_value=0, dtype=torch.bool, device=device)
2644
2645 # Get max sequence length
2646 max_out_len = out_len.max()
2647
2648 # skip is the number of frames the next decoding step should advance by. When skip == 1
2649 # it means the next decoding step will just use the next input frame.
2650 skip = 1
2651 for time_idx in range(max_out_len):
2652 if skip > 1: # if skip > 1 at the current step, we decrement it and skip the current frame.
2653 skip -= 1
2654 continue
2655 f = x.narrow(dim=1, start=time_idx, length=1) # [B, 1, D]
2656
2657 # need_to_stay is a boolean that indicates whether the next decoding step should remain in the same frame.
2658 need_to_stay = True
2659 symbols_added = 0
2660
2661 # Reset blank mask
2662 blank_mask.mul_(False)
2663
2664 # Update blank mask with time mask
2665 # Batch: [B, T, D], but Bi may have seq len < max(seq_lens_in_batch)
2666 # Forcibly mask with "blank" tokens, for all samples where current time step T >= seq_len
2667 blank_mask = time_idx >= out_len
2668
2669 # Start inner loop
2670 while need_to_stay and (self.max_symbols is None or symbols_added < self.max_symbols):
2671 # Batch prediction and joint network steps
2672 # If very first prediction step, submit SOS tag (blank) to pred_step.
2673 # This feeds a zero tensor as input to AbstractRNNTDecoder to prime the state
2674 if time_idx == 0 and symbols_added == 0 and hidden is None:
2675 g, hidden_prime = self._pred_step(self._SOS, hidden, batch_size=batchsize)
2676 else:
2677 # Perform batch step prediction of decoder, getting new states and scores ("g")
2678 g, hidden_prime = self._pred_step(last_label, hidden, batch_size=batchsize)
2679
2680 # Batched joint step - Output = [B, V + 1 + num-durations]
2681 # Note: log_normalize must not be True here since the joiner output is a concatenation of both token logits and duration logits,
2682 # and they need to be normalized independently.
2683 joined = self._joint_step(f, g, log_normalize=None)
2684 logp = joined[:, 0, 0, : -len(self.durations)]
2685 duration_logp = joined[:, 0, 0, -len(self.durations) :]
2686
2687 if logp.dtype != torch.float32:
2688 logp = logp.float()
2689 duration_logp = duration_logp.float()
2690
2691 # get the max for both token and duration predictions.
2692 v, k = logp.max(1)
2693 dv, dk = duration_logp.max(1)
2694
2695 # here we set the skip value to be the minimum of all predicted durations, hence the "torch.min(dk)" call there.
2696 # Please refer to Section 5.2 of our paper https://arxiv.org/pdf/2304.06795.pdf for explanation of this.
2697 skip = self.durations[int(torch.min(dk))]
2698
2699 # this is a special case: if all samples in the batch emit blanks, we require that skip be at least 1
2700 # so we don't loop forever at the current frame.
2701 if blank_mask.all():
2702 if skip == 0:
2703 skip = 1
2704
2705 need_to_stay = skip == 0
2706 del g
2707
2708 # Update blank mask with current predicted blanks
2709 # This is accumulating blanks over all time steps T and all target steps min(max_symbols, U)
2710 k_is_blank = k == self._blank_index
2711 blank_mask.bitwise_or_(k_is_blank)
2712
2713 del k_is_blank
2714 del logp, duration_logp
2715
2716 # Unless every sample has predicted (now or previously) a blank,
2717 # update states and labels for the samples that are still active
2718 if not blank_mask.all():
2719 # Collect batch indices where blanks occurred now/past
2720 blank_indices = (blank_mask == 1).nonzero(as_tuple=False)
2721
2722 # Recover prior state for all samples which predicted blank now/past
2723 if hidden is not None:
2724 hidden_prime = self.decoder.batch_copy_states(hidden_prime, hidden, blank_indices)
2725
2726 elif len(blank_indices) > 0 and hidden is None:
2727 # Reset state if there were some blank and other non-blank predictions in batch
2728 # Original state is filled with zeros so we just multiply
2729 # LSTM has 2 states
2730 hidden_prime = self.decoder.batch_copy_states(hidden_prime, None, blank_indices, value=0.0)
2731
2732 # Recover prior predicted label for all samples which predicted blank now/past
2733 k[blank_indices] = last_label[blank_indices, 0]
2734
2735 # Update new label and hidden state for next iteration
2736 last_label = k.clone().view(-1, 1)
2737 hidden = hidden_prime
2738
2739 # Update predicted labels, accounting for time mask
2740 # If blank was predicted even once, now or in the past,
2741 # Force the current predicted label to also be blank
2742 # This ensures that blanks propagate across all timesteps
2743 # once they have occurred (normally stopping condition of sample level loop).
2744 for kidx, ki in enumerate(k):
2745 if blank_mask[kidx] == 0:
2746 hypotheses[kidx].y_sequence.append(ki)
2747 hypotheses[kidx].timestep.append(time_idx)
2748 hypotheses[kidx].score += float(v[kidx])
2749
2750 symbols_added += 1
2751
2752 # Remove trailing empty list of alignments at T_{am-len} x Uj
2753 if self.preserve_alignments:
2754 for batch_idx in range(batchsize):
2755 if len(hypotheses[batch_idx].alignments[-1]) == 0:
2756 del hypotheses[batch_idx].alignments[-1]
2757
2758 # Remove trailing empty list of confidence scores at T_{am-len} x Uj
2759 if self.preserve_frame_confidence:
2760 for batch_idx in range(batchsize):
2761 if len(hypotheses[batch_idx].frame_confidence[-1]) == 0:
2762 del hypotheses[batch_idx].frame_confidence[-1]
2763
2764 # Preserve states
2765 for batch_idx in range(batchsize):
2766 hypotheses[batch_idx].dec_state = self.decoder.batch_select_state(hidden, batch_idx)
2767
2768 return hypotheses
2769
2770 def _greedy_decode_masked(
2771 self,
2772 x: torch.Tensor,
2773 out_len: torch.Tensor,
2774 device: torch.device,
2775 partial_hypotheses: Optional[List[rnnt_utils.Hypothesis]] = None,
2776 ):
2777 raise NotImplementedError("masked greedy-batched decode is not supported for TDT models.")
2778
[end of nemo/collections/asr/parts/submodules/rnnt_greedy_decoding.py]
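The duration-conditioned frame skipping used by `_greedy_decode` above can be sketched in isolation. This is a minimal illustration, not NeMo's API: the function name, the toy duration set, and the hand-picked logits are assumptions. It only shows how the token argmax and duration argmax jointly choose the next frame, including the guard that forces a skip of 1 when blank is emitted together with duration 0.

```python
def argmax(xs):
    """Index of the largest value (ties resolve to the first)."""
    return max(range(len(xs)), key=lambda i: xs[i])

def tdt_greedy_skip_demo(joint_out, durations, blank_index):
    """Given one frame's joint output (token logits followed by duration
    logits, as in the TDT joint above), pick the token and the number of
    frames to advance, mirroring the greedy-decoding guard for duration 0."""
    token_logits = joint_out[: -len(durations)]
    duration_logits = joint_out[-len(durations):]
    k = argmax(token_logits)                    # predicted token (blank == blank_index)
    skip = durations[argmax(duration_logits)]   # predicted duration -> frames to advance
    # Guard: a blank emitted with duration 0 must still advance one frame,
    # otherwise the decoding loop would stall on the current frame forever.
    if k == blank_index and skip == 0:
        skip = 1
    return k, skip

durations = [0, 1, 2]      # assumed toy duration set
blank_index = 3            # vocab of size 3 -> blank at index len(vocab)
out = [0.1, 0.2, 0.3, 5.0, 4.0, 0.0, 0.0]  # favors blank and duration 0
print(tdt_greedy_skip_demo(out, durations, blank_index))  # -> (3, 1)
```

A non-blank prediction with a positive duration (e.g. logits favoring token 0 and duration 2) returns that skip unchanged, which is exactly how the decoder jumps over acoustic frames.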
[start of nemo/collections/asr/parts/utils/asr_confidence_utils.py]
1 # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import math
16 from abc import ABC, abstractmethod
17 from dataclasses import dataclass
18 from functools import partial
19 from typing import List, Optional
20
21 import torch
22 from omegaconf import DictConfig, OmegaConf
23
24 from nemo.collections.asr.parts.utils.rnnt_utils import Hypothesis
25 from nemo.utils import logging
26
27
28 class ConfidenceMethodConstants:
29 NAMES = ("max_prob", "entropy")
30 ENTROPY_TYPES = ("gibbs", "tsallis", "renyi")
31 ENTROPY_NORMS = ("lin", "exp")
32
33 @classmethod
34 def print(cls):
35 return (
36 cls.__name__
37 + ": "
38 + str({"NAMES": cls.NAMES, "ENTROPY_TYPES": cls.ENTROPY_TYPES, "ENTROPY_NORMS": cls.ENTROPY_NORMS})
39 )
40
41
42 class ConfidenceConstants:
43 AGGREGATIONS = ("mean", "min", "max", "prod")
44
45 @classmethod
46 def print(cls):
47 return cls.__name__ + ": " + str({"AGGREGATIONS": cls.AGGREGATIONS})
48
49
50 @dataclass
51 class ConfidenceMethodConfig:
52 """A Config which contains the method name and settings to compute per-frame confidence scores.
53
54 Args:
55 name: The method name (str).
56 Supported values:
57 - 'max_prob' for using the maximum token probability as a confidence.
58 - 'entropy' for using a normalized entropy of a log-likelihood vector.
59
60 entropy_type: Which type of entropy to use (str).
61 Used if confidence_method_cfg.name is set to `entropy`.
62 Supported values:
63 - 'gibbs' for the (standard) Gibbs entropy. If the alpha (α) is provided,
64 the formula is the following: H_α = -sum_i((p^α_i)*log(p^α_i)).
65 Note that for this entropy, the alpha should comply with the following inequality:
66 (log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= α <= (1+log(V-1))/log(V-1)
67 where V is the model vocabulary size.
68 - 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
69 Tsallis entropy formula is the following: H_α = 1/(α-1)*(1-sum_i(p^α_i)),
70 where α is a parameter. When α == 1, it works like the Gibbs entropy.
71 More: https://en.wikipedia.org/wiki/Tsallis_entropy
72 - 'renyi' for the Rényi entropy.
73 Rényi entropy formula is the following: H_α = 1/(1-α)*log_2(sum_i(p^α_i)),
74 where α is a parameter. When α == 1, it works like the Gibbs entropy.
75 More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
76 
77 alpha: Power scale for logsoftmax (α for entropies). Here we restrict it to be > 0.
78 When the alpha equals one, scaling is not applied to 'max_prob',
79 and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
80
81 entropy_norm: A mapping of the entropy value to the interval [0,1].
82 Supported values:
83 - 'lin' for using the linear mapping.
84 - 'exp' for using exponential mapping with linear shift.
85 """
86
87 name: str = "entropy"
88 entropy_type: str = "tsallis"
89 alpha: float = 0.33
90 entropy_norm: str = "exp"
91 temperature: str = "DEPRECATED"
92
93 def __post_init__(self):
94 if self.temperature != "DEPRECATED":
95 # self.temperature has type str
96 self.alpha = float(self.temperature)
97 self.temperature = "DEPRECATED"
98 if self.name not in ConfidenceMethodConstants.NAMES:
99 raise ValueError(
100 f"`name` must be one of the following: "
101 f"{'`' + '`, `'.join(ConfidenceMethodConstants.NAMES) + '`'}. Provided: `{self.name}`"
102 )
103 if self.entropy_type not in ConfidenceMethodConstants.ENTROPY_TYPES:
104 raise ValueError(
105 f"`entropy_type` must be one of the following: "
106 f"{'`' + '`, `'.join(ConfidenceMethodConstants.ENTROPY_TYPES) + '`'}. Provided: `{self.entropy_type}`"
107 )
108 if self.alpha <= 0.0:
109 raise ValueError(f"`alpha` must be > 0. Provided: {self.alpha}")
110 if self.entropy_norm not in ConfidenceMethodConstants.ENTROPY_NORMS:
111 raise ValueError(
112 f"`entropy_norm` must be one of the following: "
113 f"{'`' + '`, `'.join(ConfidenceMethodConstants.ENTROPY_NORMS) + '`'}. Provided: `{self.entropy_norm}`"
114 )
115
116
117 @dataclass
118 class ConfidenceConfig:
119 """A config which contains the following key-value pairs related to confidence scores.
120
121 Args:
122 preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores
123 generated during decoding. When set to true, the Hypothesis will contain
124 the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of floats.
125 preserve_token_confidence: Bool flag which preserves the history of per-token confidence scores
126 generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
127 the non-null value for `token_confidence` in it. Here, `token_confidence` is a List of floats.
128
129 The length of the list corresponds to the number of recognized tokens.
130 preserve_word_confidence: Bool flag which preserves the history of per-word confidence scores
131 generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
132 the non-null value for `word_confidence` in it. Here, `word_confidence` is a List of floats.
133
134 The length of the list corresponds to the number of recognized words.
135 exclude_blank: Bool flag indicating that blank token confidence scores are to be excluded
136 from the `token_confidence`.
137 aggregation: Which aggregation type to use for collapsing per-token confidence into per-word confidence.
138 Valid options are `mean`, `min`, `max`, `prod`.
139 method_cfg: A dict-like object which contains the method name and settings to compute per-frame
140 confidence scores.
141
142 name: The method name (str).
143 Supported values:
144 - 'max_prob' for using the maximum token probability as a confidence.
145 - 'entropy' for using a normalized entropy of a log-likelihood vector.
146
147 entropy_type: Which type of entropy to use (str). Used if confidence_method_cfg.name is set to `entropy`.
148 Supported values:
149 - 'gibbs' for the (standard) Gibbs entropy. If the alpha (α) is provided,
150 the formula is the following: H_α = -sum_i((p^α_i)*log(p^α_i)).
151 Note that for this entropy, the alpha should comply with the following inequality:
152 (log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= α <= (1+log(V-1))/log(V-1)
153 where V is the model vocabulary size.
154 - 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
155 Tsallis entropy formula is the following: H_α = 1/(α-1)*(1-sum_i(p^α_i)),
156 where α is a parameter. When α == 1, it works like the Gibbs entropy.
157 More: https://en.wikipedia.org/wiki/Tsallis_entropy
158 - 'renyi' for the Rényi entropy.
159 Rényi entropy formula is the following: H_α = 1/(1-α)*log_2(sum_i(p^α_i)),
160 where α is a parameter. When α == 1, it works like the Gibbs entropy.
161 More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
162 
163 alpha: Power scale for logsoftmax (α for entropies). Here we restrict it to be > 0.
164 When the alpha equals one, scaling is not applied to 'max_prob',
165 and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
166
167 entropy_norm: A mapping of the entropy value to the interval [0,1].
168 Supported values:
169 - 'lin' for using the linear mapping.
170 - 'exp' for using exponential mapping with linear shift.
171 """
172
173 preserve_frame_confidence: bool = False
174 preserve_token_confidence: bool = False
175 preserve_word_confidence: bool = False
176 exclude_blank: bool = True
177 aggregation: str = "min"
178 method_cfg: ConfidenceMethodConfig = ConfidenceMethodConfig()
179
180 def __post_init__(self):
181 # OmegaConf.structured ensures that post_init check is always executed
182 self.method_cfg = OmegaConf.structured(
183 self.method_cfg
184 if isinstance(self.method_cfg, ConfidenceMethodConfig)
185 else ConfidenceMethodConfig(**self.method_cfg)
186 )
187 if self.aggregation not in ConfidenceConstants.AGGREGATIONS:
188 raise ValueError(
189 f"`aggregation` has to be one of the following: "
190 f"{'`' + '`, `'.join(ConfidenceConstants.AGGREGATIONS) + '`'}. Provided: `{self.aggregation}`"
191 )
192
193
194 def get_confidence_measure_bank():
195 """Generate a dictionary with confidence measure functionals.
196
197 Supported confidence measures:
198 max_prob: normalized maximum probability
199 entropy_gibbs_lin: Gibbs entropy with linear normalization
200 entropy_gibbs_exp: Gibbs entropy with exponential normalization
201 entropy_tsallis_lin: Tsallis entropy with linear normalization
202 entropy_tsallis_exp: Tsallis entropy with exponential normalization
203 entropy_renyi_lin: Rényi entropy with linear normalization
204 entropy_renyi_exp: Rényi entropy with exponential normalization
205
206 Returns:
207 dictionary with lambda functions.
208 """
209 # helper functions
210 # Gibbs entropy is implemented without alpha
211 neg_entropy_gibbs = lambda x: (x.exp() * x).sum(-1)
212 neg_entropy_alpha = lambda x, t: (x * t).exp().sum(-1)
213 neg_entropy_alpha_gibbs = lambda x, t: ((x * t).exp() * x).sum(-1)
214 # too big for a lambda
215 def entropy_tsallis_exp(x, v, t):
216 exp_neg_max_ent = math.exp((1 - math.pow(v, 1 - t)) / (1 - t))
217 return (((1 - neg_entropy_alpha(x, t)) / (1 - t)).exp() - exp_neg_max_ent) / (1 - exp_neg_max_ent)
218
219 def entropy_gibbs_exp(x, v, t):
220 exp_neg_max_ent = math.pow(v, -t * math.pow(v, 1 - t))
221 return ((neg_entropy_alpha_gibbs(x, t) * t).exp() - exp_neg_max_ent) / (1 - exp_neg_max_ent)
222
223 # use Gibbs entropies for Tsallis and Rényi with t == 1.0
224 entropy_gibbs_lin_baseline = lambda x, v: 1 + neg_entropy_gibbs(x) / math.log(v)
225 entropy_gibbs_exp_baseline = lambda x, v: (neg_entropy_gibbs(x).exp() * v - 1) / (v - 1)
226 # fill the measure bank
227 confidence_measure_bank = {}
228 # Maximum probability measure is implemented without alpha
229 confidence_measure_bank["max_prob"] = (
230 lambda x, v, t: (x.max(dim=-1)[0].exp() * v - 1) / (v - 1)
231 if t == 1.0
232 else ((x.max(dim=-1)[0] * t).exp() * math.pow(v, t) - 1) / (math.pow(v, t) - 1)
233 )
234 confidence_measure_bank["entropy_gibbs_lin"] = (
235 lambda x, v, t: entropy_gibbs_lin_baseline(x, v)
236 if t == 1.0
237 else 1 + neg_entropy_alpha_gibbs(x, t) / math.log(v) / math.pow(v, 1 - t)
238 )
239 confidence_measure_bank["entropy_gibbs_exp"] = (
240 lambda x, v, t: entropy_gibbs_exp_baseline(x, v) if t == 1.0 else entropy_gibbs_exp(x, v, t)
241 )
242 confidence_measure_bank["entropy_tsallis_lin"] = (
243 lambda x, v, t: entropy_gibbs_lin_baseline(x, v)
244 if t == 1.0
245 else 1 + (1 - neg_entropy_alpha(x, t)) / (math.pow(v, 1 - t) - 1)
246 )
247 confidence_measure_bank["entropy_tsallis_exp"] = (
248 lambda x, v, t: entropy_gibbs_exp_baseline(x, v) if t == 1.0 else entropy_tsallis_exp(x, v, t)
249 )
250 confidence_measure_bank["entropy_renyi_lin"] = (
251 lambda x, v, t: entropy_gibbs_lin_baseline(x, v)
252 if t == 1.0
253 else 1 + neg_entropy_alpha(x, t).log2() / (t - 1) / math.log(v, 2)
254 )
255 confidence_measure_bank["entropy_renyi_exp"] = (
256 lambda x, v, t: entropy_gibbs_exp_baseline(x, v)
257 if t == 1.0
258 else (neg_entropy_alpha(x, t).pow(1 / (t - 1)) * v - 1) / (v - 1)
259 )
260 return confidence_measure_bank
261
262
263 def get_confidence_aggregation_bank():
264 """Generate a dictionary with confidence aggregation functions.
265
266 Supported confidence aggregation functions:
267 min: minimum
268 max: maximum
269 mean: arithmetic mean
270 prod: product
271
272 Returns:
273 dictionary with functions.
274 """
275 confidence_aggregation_bank = {"mean": lambda x: sum(x) / len(x), "min": min, "max": max}
276 # python 3.7 and earlier do not have math.prod
277 if hasattr(math, "prod"):
278 confidence_aggregation_bank["prod"] = math.prod
279 else:
280 import operator
281 from functools import reduce
282
283 confidence_aggregation_bank["prod"] = lambda x: reduce(operator.mul, x, 1)
284 return confidence_aggregation_bank
285
286
287 class ConfidenceMethodMixin(ABC):
288 """Confidence Method Mixin class.
289
290 It initializes per-frame confidence method.
291 """
292
293 def _init_confidence_method(self, confidence_method_cfg: Optional[DictConfig] = None):
294 """Initialize per-frame confidence method from config.
295 """
296 # OmegaConf.structured ensures that post_init check is always executed
297 confidence_method_cfg = OmegaConf.structured(
298 ConfidenceMethodConfig()
299 if confidence_method_cfg is None
300 else ConfidenceMethodConfig(**confidence_method_cfg)
301 )
302
303 # set confidence calculation method
304 # we suppose that self.blank_id == len(vocabulary)
305 self.num_tokens = (self.blank_id if hasattr(self, "blank_id") else self._blank_index) + 1
306 self.alpha = confidence_method_cfg.alpha
307
308 # init confidence measure bank
309 self.confidence_measure_bank = get_confidence_measure_bank()
310
311 measure = None
312 # construct measure_name
313 measure_name = ""
314 if confidence_method_cfg.name == "max_prob":
315 measure_name = "max_prob"
316 elif confidence_method_cfg.name == "entropy":
317 measure_name = '_'.join(
318 [confidence_method_cfg.name, confidence_method_cfg.entropy_type, confidence_method_cfg.entropy_norm]
319 )
320 else:
321 raise ValueError(f"Unsupported `confidence_method_cfg.name`: `{confidence_method_cfg.name}`")
322 if measure_name not in self.confidence_measure_bank:
323 raise ValueError(f"Unsupported measure setup: `{measure_name}`")
324 measure = partial(self.confidence_measure_bank[measure_name], v=self.num_tokens, t=self.alpha)
325 self._get_confidence = lambda x: measure(torch.nan_to_num(x)).tolist()
326
327
328 class ConfidenceMixin(ABC):
329 """Confidence Mixin class.
330
331 It is responsible for confidence estimation method initialization and high-level confidence score calculation.
332 """
333
334 def _init_confidence(self, confidence_cfg: Optional[DictConfig] = None):
335 """Initialize confidence-related fields and confidence aggregation function from config.
336 """
337 # OmegaConf.structured ensures that post_init check is always executed
338 confidence_cfg = OmegaConf.structured(
339 ConfidenceConfig() if confidence_cfg is None else ConfidenceConfig(**confidence_cfg)
340 )
341 self.confidence_method_cfg = confidence_cfg.method_cfg
342
343 # extract the config
344 self.preserve_word_confidence = confidence_cfg.get('preserve_word_confidence', False)
345 # set preserve_frame_confidence and preserve_token_confidence to True
346 # if preserve_word_confidence is True
347 self.preserve_token_confidence = (
348 confidence_cfg.get('preserve_token_confidence', False) | self.preserve_word_confidence
349 )
350 # set preserve_frame_confidence to True if preserve_token_confidence is True
351 self.preserve_frame_confidence = (
352 confidence_cfg.get('preserve_frame_confidence', False) | self.preserve_token_confidence
353 )
354 self.exclude_blank_from_confidence = confidence_cfg.get('exclude_blank', True)
355 self.word_confidence_aggregation = confidence_cfg.get('aggregation', "min")
356
357 # define aggregation functions
358 self.confidence_aggregation_bank = get_confidence_aggregation_bank()
359 self._aggregate_confidence = self.confidence_aggregation_bank[self.word_confidence_aggregation]
360
361 # Update preserve frame confidence
362 if self.preserve_frame_confidence is False:
363 if self.cfg.strategy in ['greedy', 'greedy_batch']:
364 self.preserve_frame_confidence = self.cfg.greedy.get('preserve_frame_confidence', False)
365 # OmegaConf.structured ensures that post_init check is always executed
366 confidence_method_cfg = OmegaConf.structured(self.cfg.greedy).get('confidence_method_cfg', None)
367 self.confidence_method_cfg = (
368 OmegaConf.structured(ConfidenceMethodConfig())
369 if confidence_method_cfg is None
370 else OmegaConf.structured(ConfidenceMethodConfig(**confidence_method_cfg))
371 )
372
373 @abstractmethod
374 def compute_confidence(self, hypotheses_list: List[Hypothesis]) -> List[Hypothesis]:
375 """Computes high-level (per-token and/or per-word) confidence scores for a list of hypotheses.
376 Assumes that `frame_confidence` is present in the hypotheses.
377
378 Args:
379 hypotheses_list: List of Hypothesis.
380
381 Returns:
382 A list of hypotheses with high-level confidence scores.
383 """
384 raise NotImplementedError()
385
386 @abstractmethod
387 def _aggregate_token_confidence(self, hypothesis: Hypothesis) -> List[float]:
388 """Implemented by subclass in order to aggregate token confidence to a word-level confidence.
389
390 Args:
391 hypothesis: Hypothesis
392
393 Returns:
394 A list of word-level confidence scores.
395 """
396 raise NotImplementedError()
397
398 def _aggregate_token_confidence_chars(self, words: List[str], token_confidence: List[float]) -> List[float]:
399 """Implementation of token confidence aggregation for character-based models.
400
401 Args:
402 words: List of words of a hypothesis.
403 token_confidence: List of token-level confidence scores of a hypothesis.
404
405 Returns:
406 A list of word-level confidence scores.
407 """
408 word_confidence = []
409 i = 0
410 for word in words:
411 word_len = len(word)
412 word_confidence.append(self._aggregate_confidence(token_confidence[i : i + word_len]))
413 # we assume that there is exactly one space token between words and exclude it from word confidence
414 i += word_len + 1
415 return word_confidence
416
417 def _aggregate_token_confidence_subwords_sentencepiece(
418 self, words: List[str], token_confidence: List[float], token_ids: List[int]
419 ) -> List[float]:
420 """Implementation of token confidence aggregation for subword-based models.
421
422 **Note**: Only supports Sentencepiece based tokenizers !
423
424 Args:
425 words: List of words of a hypothesis.
426 token_confidence: List of token-level confidence scores of a hypothesis.
427 token_ids: List of token ids of a hypothesis.
428
429 Returns:
430 A list of word-level confidence scores.
431 """
432 word_confidence = []
433 # run only if there are final words
434 if len(words) > 0:
435 j = 0
436 prev_unk = False
437 prev_underline = False
438 for i, token_id in enumerate(token_ids):
439 token = self.decode_ids_to_tokens([int(token_id)])[0]
440 token_text = self.decode_tokens_to_str([int(token_id)])
441 # treat `<unk>` as a separate word regardless of the next token
442 # to match the result of `tokenizer.ids_to_text`
443 if (token != token_text or prev_unk) and i > j:
444                 # do not add confidence for `▁` if the current token starts with `▁`
445 # to match the result of `tokenizer.ids_to_text`
446 if not prev_underline:
447 word_confidence.append(self._aggregate_confidence(token_confidence[j:i]))
448 j = i
449 prev_unk = token == '<unk>'
450                 prev_underline = token == '▁'
451 if not prev_underline:
452 word_confidence.append(self._aggregate_confidence(token_confidence[j : len(token_ids)]))
453 if len(words) != len(word_confidence):
454 raise RuntimeError(
455 f"""Something went wrong with word-level confidence aggregation.\n
456 Please check these values for debugging:\n
457 len(words): {len(words)},\n
458 len(word_confidence): {len(word_confidence)},\n
459 recognized text: `{' '.join(words)}`"""
460 )
461 return word_confidence
462
[end of nemo/collections/asr/parts/utils/asr_confidence_utils.py]
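The aggregation bank in the file above uses only the standard library, so its behavior can be sketched standalone. This is a minimal reproduction of `get_confidence_aggregation_bank` (not an import of the NeMo module), under the assumption that token confidences arrive as a plain list of floats:

```python
import math
import operator
from functools import reduce


def build_aggregation_bank():
    # Mirrors get_confidence_aggregation_bank(): mean, min, max, prod.
    bank = {"mean": lambda x: sum(x) / len(x), "min": min, "max": max}
    # python 3.7 and earlier do not have math.prod
    if hasattr(math, "prod"):
        bank["prod"] = math.prod
    else:
        bank["prod"] = lambda x: reduce(operator.mul, x, 1)
    return bank


bank = build_aggregation_bank()
token_confidence = [0.5, 0.8, 0.2]
print(bank["mean"](token_confidence))  # 0.5
print(bank["min"](token_confidence))   # 0.2
print(bank["max"](token_confidence))   # 0.8
```

Note that `min` is the default word-confidence aggregation in `ConfidenceMixin`, so a single low-confidence token pulls down the confidence of the whole word.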
[start of nemo/collections/common/parts/adapter_modules.py]
1 # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from dataclasses import dataclass, is_dataclass
16 from typing import Any, Optional
17
18 from hydra.utils import instantiate
19 from omegaconf import OmegaConf
20 from torch import nn as nn
21
22 from nemo.collections.common.parts.utils import activation_registry
23 from nemo.core.classes.mixins import access_mixins, adapter_mixin_strategies
24
25
26 class AdapterModuleUtil(access_mixins.AccessMixin):
27 """
28 Base class of Adapter Modules, providing common functionality to all Adapter Modules.
29 """
30
31 def setup_adapter_strategy(self, adapter_strategy: Optional[adapter_mixin_strategies.AbstractAdapterStrategy]):
32 """
33 Setup adapter strategy of this class, enabling dynamic change in the way the adapter output is
34 merged with the input.
35
36 When called successfully, will assign the variable `adapter_strategy` to the module.
37
38 Args:
39 adapter_strategy: Can be a None or an implementation of AbstractAdapterStrategy.
40 """
41 # set default adapter strategy
42 if adapter_strategy is None:
43 adapter_strategy = self.get_default_strategy_config()
44
45 if is_dataclass(adapter_strategy):
46 adapter_strategy = OmegaConf.structured(adapter_strategy)
47 OmegaConf.set_struct(adapter_strategy, False)
48
49 # The config must have the `_target_` field pointing to the actual adapter strategy class
50 # which will load that strategy dynamically to this module.
51 if isinstance(adapter_strategy, dict) or OmegaConf.is_config(adapter_strategy):
52 self.adapter_strategy = instantiate(adapter_strategy)
53 elif isinstance(adapter_strategy, adapter_mixin_strategies.AbstractAdapterStrategy):
54 self.adapter_strategy = adapter_strategy
55 else:
56 raise AttributeError(f'`adapter_strategy` provided is invalid : {adapter_strategy}')
57
58 def get_default_strategy_config(self) -> 'dataclass':
59 """
60 Returns a default adapter module strategy.
61 """
62 return adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
63
64 def adapter_unfreeze(self,):
65 """
66 Sets the requires grad for all parameters in the adapter to True.
67 This method should be overridden for any custom unfreeze behavior that is required.
68 For example, if not all params of the adapter should be unfrozen.
69 """
70 for param in self.parameters():
71 param.requires_grad_(True)
72
73
74 class LinearAdapter(nn.Module, AdapterModuleUtil):
75
76 """
77     Simple Linear Feedforward Adapter module with LayerNorm and a single hidden layer with activation function.
78 Note: The adapter explicitly initializes its final layer with all zeros in order to avoid affecting the
79 original model when all adapters are disabled.
80
81 Args:
82 in_features: Input dimension of the module. Note that for adapters, input_dim == output_dim.
83 dim: Hidden dimension of the feed forward network.
84 activation: Str name for an activation function.
85 norm_position: Str, can be `pre` or `post`. Defaults to `pre`. Determines whether the normalization
86 will occur in the first layer or the last layer. Certain architectures may prefer one over the other.
87         dropout: Float value; the dropout probability applied to the output of the last layer of the adapter.
88 adapter_strategy: By default, ResidualAddAdapterStrategyConfig. An adapter composition function object.
89 """
90
91 def __init__(
92 self,
93 in_features: int,
94 dim: int,
95 activation: str = 'swish',
96 norm_position: str = 'pre',
97 dropout: float = 0.0,
98 adapter_strategy: adapter_mixin_strategies.ResidualAddAdapterStrategyConfig = None,
99 ):
100 super().__init__()
101
102 activation = activation_registry[activation]()
103 # If the activation can be executed in place, do so.
104 if hasattr(activation, 'inplace'):
105 activation.inplace = True
106
107 assert norm_position in ['pre', 'post']
108 self.norm_position = norm_position
109
110 if norm_position == 'pre':
111 self.module = nn.Sequential(
112 nn.LayerNorm(in_features),
113 nn.Linear(in_features, dim, bias=False),
114 activation,
115 nn.Linear(dim, in_features, bias=False),
116 )
117
118 elif norm_position == 'post':
119 self.module = nn.Sequential(
120 nn.Linear(in_features, dim, bias=False),
121 activation,
122 nn.Linear(dim, in_features, bias=False),
123 nn.LayerNorm(in_features),
124 )
125
126 if dropout > 0.0:
127 self.dropout = nn.Dropout(dropout)
128 else:
129 self.dropout = None
130
131 # Setup adapter strategy
132 self.setup_adapter_strategy(adapter_strategy)
133
134 # reset parameters
135 self.reset_parameters()
136
137 def reset_parameters(self):
138 # Final layer initializations must be 0
139 if self.norm_position == 'pre':
140 self.module[-1].weight.data *= 0
141
142 elif self.norm_position == 'post':
143 self.module[-1].weight.data *= 0
144 self.module[-1].bias.data *= 0
145
146 def forward(self, x):
147 x = self.module(x)
148
149 # Add dropout if available
150 if self.dropout is not None:
151 x = self.dropout(x)
152
153 return x
154
155
156 @dataclass
157 class LinearAdapterConfig:
158 in_features: int
159 dim: int
160 activation: str = 'swish'
161 norm_position: str = 'pre'
162 dropout: float = 0.0
163 adapter_strategy: Optional[Any] = adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
164 _target_: str = "{0}.{1}".format(LinearAdapter.__module__, LinearAdapter.__name__)
165
[end of nemo/collections/common/parts/adapter_modules.py]
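The zero-initialized final layer in `LinearAdapter.reset_parameters`, combined with a residual-add strategy, means a freshly constructed adapter is an identity map. A dependency-free sketch of that idea, with plain Python lists in place of `torch` tensors and a hypothetical `zero_linear` standing in for the zeroed final `nn.Linear`:

```python
def zero_linear(x):
    # Stand-in for the adapter's final nn.Linear after reset_parameters():
    # all weights are zero, so the output is the zero vector.
    return [0.0] * len(x)


def residual_add(x, adapter):
    # Stand-in for ResidualAddAdapterStrategy: output = input + adapter(input).
    return [xi + yi for xi, yi in zip(x, adapter(x))]


hidden = [0.3, -1.2, 0.7]
print(residual_add(hidden, zero_linear))  # [0.3, -1.2, 0.7]
```

This is why enabling an untrained adapter does not change the original model's outputs until the adapter is trained.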
[start of nemo/collections/common/tokenizers/en_ja_tokenizers.py]
1 # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 import re
15 from typing import List
16
17 import ipadic
18 import MeCab
19 from pangu import spacing
20 from sacremoses import MosesDetokenizer, MosesPunctNormalizer, MosesTokenizer
21
22
23 class EnJaProcessor:
24 """
25 Tokenizer, Detokenizer and Normalizer utilities for Japanese & English
26 Args:
27 lang_id: One of ['en', 'ja'].
28 """
29
30 def __init__(self, lang_id: str):
31 self.lang_id = lang_id
32 self.moses_tokenizer = MosesTokenizer(lang=lang_id)
33 self.moses_detokenizer = MosesDetokenizer(lang=lang_id)
34 self.normalizer = MosesPunctNormalizer(
35 lang=lang_id, pre_replace_unicode_punct=True, post_remove_control_chars=True
36 )
37
38 def detokenize(self, tokens: List[str]) -> str:
39 """
40 Detokenizes a list of tokens
41 Args:
42 tokens: list of strings as tokens
43 Returns:
44 detokenized Japanese or English string
45 """
46 return self.moses_detokenizer.detokenize(tokens)
47
48 def tokenize(self, text) -> str:
49 """
50 Tokenizes text using Moses. Returns a string of tokens.
51 """
52 tokens = self.moses_tokenizer.tokenize(text)
53 return ' '.join(tokens)
54
55 def normalize(self, text) -> str:
56         # Normalization doesn't handle Japanese periods correctly;
57         # '。' becomes '.'.
58 if self.lang_id == 'en':
59 return self.normalizer.normalize(text)
60 else:
61 return text
62
63
64 class JaMecabProcessor:
65 """
66     Tokenizer, Detokenizer and Normalizer utilities for Japanese, based on MeCab
67 """
68
69 def __init__(self):
70 self.mecab_tokenizer = MeCab.Tagger(ipadic.MECAB_ARGS + " -Owakati")
71
72 def detokenize(self, text: List[str]) -> str:
73 RE_WS_IN_FW = re.compile(
74 r'([\u2018\u2019\u201c\u201d\u2e80-\u312f\u3200-\u32ff\u3400-\u4dbf\u4e00-\u9fff\uf900-\ufaff\uff00-\uffef])\s+(?=[\u2018\u2019\u201c\u201d\u2e80-\u312f\u3200-\u32ff\u3400-\u4dbf\u4e00-\u9fff\uf900-\ufaff\uff00-\uffef])'
75 )
76
77 detokenize = lambda s: spacing(RE_WS_IN_FW.sub(r'\1', s)).strip()
78 return detokenize(' '.join(text))
79
80 def tokenize(self, text) -> str:
81 """
82         Tokenizes text using MeCab. Returns a string of tokens.
83 """
84 return self.mecab_tokenizer.parse(text).strip()
85
86 def normalize(self, text) -> str:
87 return text
88
[end of nemo/collections/common/tokenizers/en_ja_tokenizers.py]
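`JaMecabProcessor.detokenize` strips the whitespace MeCab inserts between fullwidth characters via `RE_WS_IN_FW`. A sketch of just that regex step, with the character class trimmed to the CJK Unified Ideographs block for brevity and the `pangu.spacing` pass omitted:

```python
import re

# Trimmed version of RE_WS_IN_FW: match a CJK ideograph, whitespace, and a
# lookahead CJK ideograph; substituting r'\1' deletes the whitespace.
RE_WS_IN_FW = re.compile(r'([\u4e00-\u9fff])\s+(?=[\u4e00-\u9fff])')

tokens = ['日', '本', '語']
print(RE_WS_IN_FW.sub(r'\1', ' '.join(tokens)))  # 日本語
```

The lookahead keeps the second ideograph unconsumed, so runs of more than two ideographs are rejoined in a single `sub` call.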
[start of nemo/collections/nlp/models/machine_translation/mt_enc_dec_config.py]
1 # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from dataclasses import dataclass
16 from typing import Any, Optional, Tuple
17
18 from omegaconf.omegaconf import MISSING
19
20 from nemo.collections.nlp.data.machine_translation.machine_translation_dataset import TranslationDataConfig
21 from nemo.collections.nlp.models.enc_dec_nlp_model import EncDecNLPModelConfig
22 from nemo.collections.nlp.modules.common.token_classifier import TokenClassifierConfig
23 from nemo.collections.nlp.modules.common.tokenizer_utils import TokenizerConfig
24 from nemo.collections.nlp.modules.common.transformer.transformer import (
25 NeMoTransformerConfig,
26 NeMoTransformerEncoderConfig,
27 )
28 from nemo.collections.nlp.modules.common.transformer.transformer_bottleneck import (
29 NeMoTransformerBottleneckDecoderConfig,
30 NeMoTransformerBottleneckEncoderConfig,
31 )
32 from nemo.core.config.modelPT import OptimConfig, SchedConfig
33
34
35 @dataclass
36 class MTSchedConfig(SchedConfig):
37 name: str = 'InverseSquareRootAnnealing'
38 warmup_ratio: Optional[float] = None
39 last_epoch: int = -1
40
41
42 # TODO: Refactor this dataclass to support more optimizers (it pins the optimizer to Adam-like optimizers).
43 @dataclass
44 class MTOptimConfig(OptimConfig):
45 name: str = 'adam'
46 lr: float = 1e-3
47 betas: Tuple[float, float] = (0.9, 0.98)
48 weight_decay: float = 0.0
49 sched: Optional[MTSchedConfig] = MTSchedConfig()
50
51
52 @dataclass
53 class MTEncDecModelConfig(EncDecNLPModelConfig):
54 # machine translation configurations
55 num_val_examples: int = 3
56 num_test_examples: int = 3
57 max_generation_delta: int = 10
58 label_smoothing: Optional[float] = 0.0
59 beam_size: int = 4
60 len_pen: float = 0.0
61 src_language: Any = 'en' # Any = str or List[str]
62 tgt_language: Any = 'en' # Any = str or List[str]
63 find_unused_parameters: Optional[bool] = True
64 shared_tokenizer: Optional[bool] = True
65 multilingual: Optional[bool] = False
66 preproc_out_dir: Optional[str] = None
67 validate_input_ids: Optional[bool] = True
68 shared_embeddings: bool = False
69
70 # network architecture configuration
71 encoder_tokenizer: Any = MISSING
72 encoder: Any = MISSING
73
74 decoder_tokenizer: Any = MISSING
75 decoder: Any = MISSING
76
77 head: TokenClassifierConfig = TokenClassifierConfig(log_softmax=True)
78
79 # dataset configurations
80 train_ds: Optional[TranslationDataConfig] = TranslationDataConfig(
81 src_file_name=MISSING,
82 tgt_file_name=MISSING,
83 tokens_in_batch=512,
84 clean=True,
85 shuffle=True,
86 cache_ids=False,
87 use_cache=False,
88 )
89 validation_ds: Optional[TranslationDataConfig] = TranslationDataConfig(
90 src_file_name=MISSING,
91 tgt_file_name=MISSING,
92 tokens_in_batch=512,
93 clean=False,
94 shuffle=False,
95 cache_ids=False,
96 use_cache=False,
97 )
98 test_ds: Optional[TranslationDataConfig] = TranslationDataConfig(
99 src_file_name=MISSING,
100 tgt_file_name=MISSING,
101 tokens_in_batch=512,
102 clean=False,
103 shuffle=False,
104 cache_ids=False,
105 use_cache=False,
106 )
107 optim: Optional[OptimConfig] = MTOptimConfig()
108
109
110 @dataclass
111 class AAYNBaseConfig(MTEncDecModelConfig):
112
113 # Attention is All You Need Base Configuration
114 encoder_tokenizer: TokenizerConfig = TokenizerConfig(library='yttm')
115 decoder_tokenizer: TokenizerConfig = TokenizerConfig(library='yttm')
116
117 encoder: NeMoTransformerEncoderConfig = NeMoTransformerEncoderConfig(
118 library='nemo',
119 model_name=None,
120 pretrained=False,
121 hidden_size=512,
122 inner_size=2048,
123 num_layers=6,
124 num_attention_heads=8,
125 ffn_dropout=0.1,
126 attn_score_dropout=0.1,
127 attn_layer_dropout=0.1,
128 )
129
130 decoder: NeMoTransformerConfig = NeMoTransformerConfig(
131 library='nemo',
132 model_name=None,
133 pretrained=False,
134 hidden_size=512,
135 inner_size=2048,
136 num_layers=6,
137 num_attention_heads=8,
138 ffn_dropout=0.1,
139 attn_score_dropout=0.1,
140 attn_layer_dropout=0.1,
141 )
142
143
144 @dataclass
145 class MTBottleneckModelConfig(AAYNBaseConfig):
146 model_type: str = 'nll'
147 min_logv: float = -6
148 latent_size: int = -1 # -1 will take value of encoder hidden
149 non_recon_warmup_batches: int = 200000
150 recon_per_token: bool = True
151 log_timing: bool = True
152
153 encoder: NeMoTransformerBottleneckEncoderConfig = NeMoTransformerBottleneckEncoderConfig(
154 library='nemo',
155 model_name=None,
156 pretrained=False,
157 hidden_size=512,
158 inner_size=2048,
159 num_layers=6,
160 num_attention_heads=8,
161 ffn_dropout=0.1,
162 attn_score_dropout=0.1,
163 attn_layer_dropout=0.1,
164 arch='seq2seq',
165 hidden_steps=32,
166 hidden_blocks=1,
167 hidden_init_method='params',
168 )
169
170 decoder: NeMoTransformerBottleneckDecoderConfig = NeMoTransformerBottleneckDecoderConfig(
171 library='nemo',
172 model_name=None,
173 pretrained=False,
174 inner_size=2048,
175 num_layers=6,
176 num_attention_heads=8,
177 ffn_dropout=0.1,
178 attn_score_dropout=0.1,
179 attn_layer_dropout=0.1,
180 arch='seq2seq',
181 )
182
[end of nemo/collections/nlp/models/machine_translation/mt_enc_dec_config.py]
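The config file above composes nested dataclasses with defaults. A minimal standalone sketch of the same pattern (names shortened; `default_factory` is used for the nested default here, since plain dataclasses on recent Python versions reject a mutable dataclass instance as a field default, whereas OmegaConf structured configs accept the style used in the file):

```python
from dataclasses import dataclass, field
from typing import Optional, Tuple


@dataclass
class SchedConfig:
    name: str = 'InverseSquareRootAnnealing'
    warmup_ratio: Optional[float] = None
    last_epoch: int = -1


@dataclass
class OptimConfig:
    name: str = 'adam'
    lr: float = 1e-3
    betas: Tuple[float, float] = (0.9, 0.98)
    weight_decay: float = 0.0
    # default_factory avoids the mutable-default restriction of dataclasses
    sched: Optional[SchedConfig] = field(default_factory=SchedConfig)


cfg = OptimConfig(lr=5e-4)
print(cfg.lr)          # 0.0005
print(cfg.sched.name)  # InverseSquareRootAnnealing
```

Overriding a single field at construction time leaves every other default, including the nested scheduler config, untouched.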
[start of nemo/collections/nlp/models/token_classification/punctuation_capitalization_config.py]
1 # Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from dataclasses import dataclass
16 from typing import Any, Dict, Optional
17
18 from omegaconf.omegaconf import MISSING, DictConfig, OmegaConf, open_dict
19
20 from nemo.collections.common.parts.adapter_modules import LinearAdapterConfig
21 from nemo.collections.nlp.data.token_classification.punctuation_capitalization_dataset import (
22 PunctuationCapitalizationEvalDataConfig,
23 PunctuationCapitalizationTrainDataConfig,
24 legacy_data_config_to_new_data_config,
25 )
26 from nemo.core.config import TrainerConfig
27 from nemo.core.config.modelPT import NemoConfig
28 from nemo.utils.exp_manager import ExpManagerConfig
29
30
31 @dataclass
32 class FreezeConfig:
33 is_enabled: bool = False
34 """Freeze audio encoder weight and add Conformer Layers on top of it"""
35 d_model: Optional[int] = 256
36 """`d_model` parameter of ``ConformerLayer``"""
37 d_ff: Optional[int] = 1024
38 """``d_ff`` parameter of ``ConformerLayer``"""
39 num_layers: Optional[int] = 8
40 """``num_layers`` number of ``ConformerLayer`` modules to add on top of audio encoder"""
41
42
43 @dataclass
44 class AdapterConfig:
45 config: Optional[LinearAdapterConfig] = None
46 """Linear adapter config see ``collections.common.parts.LinearAdapterConfig``"""
47 enable: bool = False
48 """Use adapters for audio encoder"""
49
50
51 @dataclass
52 class FusionConfig:
53 num_layers: Optional[int] = 4
54 """"Number of layers to use in fusion"""
55 num_attention_heads: Optional[int] = 4
56 """Number of attention heads to use in fusion"""
57 inner_size: Optional[int] = 2048
58 """Fusion inner size"""
59
60
61 @dataclass
62 class AudioEncoderConfig:
63 pretrained_model: str = MISSING
64 """A configuration for restoring pretrained audio encoder"""
65 freeze: Optional[FreezeConfig] = None
66 adapter: Optional[AdapterConfig] = None
67 fusion: Optional[FusionConfig] = None
68
69
70 @dataclass
71 class TokenizerConfig:
72 """A structure and default values of source text tokenizer."""
73
74 vocab_file: Optional[str] = None
75 """A path to vocabulary file which is used in ``'word'``, ``'char'``, and HuggingFace tokenizers"""
76
77 tokenizer_name: str = MISSING
78 """A name of the tokenizer used for tokenization of source sequences. Possible options are ``'sentencepiece'``,
79 ``'word'``, ``'char'``, HuggingFace tokenizers (e.g. ``'bert-base-uncased'``). For more options see function
80 ``nemo.collections.nlp.modules.common.get_tokenizer``. The tokenizer must have properties ``cls_id``, ``pad_id``,
81 ``sep_id``, ``unk_id``."""
82
83 special_tokens: Optional[Dict[str, str]] = None
84 """A dictionary with special tokens passed to constructors of ``'char'``, ``'word'``, ``'sentencepiece'``, and
85 various HuggingFace tokenizers."""
86
87 tokenizer_model: Optional[str] = None
88 """A path to a tokenizer model required for ``'sentencepiece'`` tokenizer."""
89
90
91 @dataclass
92 class LanguageModelConfig:
93 """
94 A structure and default values of language model configuration of punctuation and capitalization model. BERT like
95 HuggingFace models are supported. Provide a valid ``pretrained_model_name`` and, optionally, you may
96 reinitialize model via ``config_file`` or ``config``.
97
98 Alternatively you can initialize the language model using ``lm_checkpoint``.
99
100 This config is a part of :class:`PunctuationCapitalizationModelConfig` config.
101 """
102
103 pretrained_model_name: str = MISSING
104 """A mandatory parameter containing name of HuggingFace pretrained model. For example, ``'bert-base-uncased'``."""
105
106 config_file: Optional[str] = None
107 """A path to a file with HuggingFace model config which is used to reinitialize language model."""
108
109 config: Optional[Dict] = None
110 """A HuggingFace config which is used to reinitialize language model."""
111
112 lm_checkpoint: Optional[str] = None
113 """A path to a ``torch`` checkpoint of a language model."""
114
115
116 @dataclass
117 class HeadConfig:
118 """
119 A structure and default values of configuration of capitalization or punctuation model head. This config defines a
120 multilayer perceptron which is applied to output of a language model. Number of units in the hidden layer is equal
121 to the dimension of the language model.
122
123 This config is a part of :class:`PunctuationCapitalizationModelConfig` config.
124 """
125
126 num_fc_layers: int = 1
127 """A number of hidden layers in a multilayer perceptron."""
128
129 fc_dropout: float = 0.1
130 """A dropout used in an MLP."""
131
132 activation: str = 'relu'
133 """An activation used in hidden layers."""
134
135 use_transformer_init: bool = True
136 """Whether to initialize the weights of the classifier head with the approach that was used for language model
137 initialization."""
138
139
140 @dataclass
141 class ClassLabelsConfig:
142 """
143     A structure and default values of a mandatory part of the config which contains the names of the label files saved in a
144     .nemo checkpoint. These files can also be used for passing label vocabularies to the model. To use them as label
145     vocabularies you will need to provide the path to these files in the parameter
146     ``model.common_dataset_parameters.label_vocab_dir``. Each line in a labels file
147     contains one label. The labels are sorted so that ``<line number>==<label id>``, starting from ``0``. The label with
148     id ``0`` must be the neutral label, which must be equal to ``model.common_dataset_parameters.pad_label``.
149
150 This config is a part of :class:`~CommonDatasetParametersConfig`.
151 """
152
153 punct_labels_file: str = MISSING
154 """A name of punctuation labels file."""
155
156 capit_labels_file: str = MISSING
157 """A name of capitalization labels file."""
158
159
160 @dataclass
161 class CommonDatasetParametersConfig:
162 """
163 A structure and default values of common dataset parameters config which includes label and loss mask information.
164 If you omit parameters ``punct_label_ids``, ``capit_label_ids``, ``label_vocab_dir``, then labels will be inferred
165 from a training dataset or loaded from a checkpoint.
166
167 Parameters ``ignore_extra_tokens`` and ``ignore_start_end`` are responsible for forming loss mask. A loss mask
168 defines on which tokens loss is computed.
169
170 This parameter is a part of config :class:`~PunctuationCapitalizationModelConfig`.
171 """
172
173 pad_label: str = MISSING
174 """A mandatory parameter which should contain label used for punctuation and capitalization label padding. It
175 also serves as a neutral label for both punctuation and capitalization. If any of ``punct_label_ids``,
176 ``capit_label_ids`` parameters is provided, then ``pad_label`` must have ``0`` id in them. In addition, if ``label_vocab_dir``
177 is provided, then ``pad_label`` must be on the first lines in files ``class_labels.punct_labels_file`` and
178 ``class_labels.capit_labels_file``."""
179
180 ignore_extra_tokens: bool = False
181 """Whether to compute loss on not first tokens in words. If this parameter is ``True``, then loss mask is ``False``
182 for all tokens in a word except the first."""
183
184 ignore_start_end: bool = True
185 """If ``False``, then loss is computed on [CLS] and [SEP] tokens."""
186
187 punct_label_ids: Optional[Dict[str, int]] = None
188 """A dictionary with punctuation label ids. ``pad_label`` must have ``0`` id in this dictionary. You can omit this
189 parameter and pass label ids through ``class_labels.punct_labels_file`` or let the model to infer label ids from
190 dataset or load them from checkpoint."""
191
192 capit_label_ids: Optional[Dict[str, int]] = None
193 """A dictionary with capitalization label ids. ``pad_label`` must have ``0`` id in this dictionary. You can omit
194 this parameter and pass label ids through ``class_labels.capit_labels_file`` or let model to infer label ids from
195 dataset or load them from checkpoint."""
196
197 label_vocab_dir: Optional[str] = None
198 """A path to directory which contains class labels files. See :class:`ClassLabelsConfig`. If this parameter is
199 provided, then labels will be loaded from files which are located in ``label_vocab_dir`` and have names specified
200 in ``model.class_labels`` configuration section. A label specified in ``pad_label`` has to be on the first lines
201 of ``model.class_labels`` files."""
202
203
204 @dataclass
205 class PunctuationCapitalizationModelConfig:
206 """
207 A configuration of
208 :class:`~nemo.collections.nlp.models.token_classification.punctuation_capitalization_model.PunctuationCapitalizationModel`
209 model.
210
211 See an example of model config in
212 `nemo/examples/nlp/token_classification/conf/punctuation_capitalization_config.yaml
213 <https://github.com/NVIDIA/NeMo/blob/main/examples/nlp/token_classification/conf/punctuation_capitalization_config.yaml>`_
214
215 This config is a part of :class:`~PunctuationCapitalizationConfig`.
216 """
217
218 class_labels: ClassLabelsConfig = ClassLabelsConfig()
219 """A mandatory parameter containing a dictionary with names of label id files used in .nemo checkpoints.
220 These file names can also be used for passing label vocabularies to the model. If you wish to use ``class_labels``
221 for passing vocabularies, please provide path to vocabulary files in
222 ``model.common_dataset_parameters.label_vocab_dir`` parameter."""
223
224 common_dataset_parameters: Optional[CommonDatasetParametersConfig] = CommonDatasetParametersConfig()
225 """Label ids and loss mask information information."""
226
227 train_ds: Optional[PunctuationCapitalizationTrainDataConfig] = None
228 """A configuration for creating training dataset and data loader."""
229
230 validation_ds: Optional[PunctuationCapitalizationEvalDataConfig] = None
231 """A configuration for creating validation datasets and data loaders."""
232
233 test_ds: Optional[PunctuationCapitalizationEvalDataConfig] = None
234 """A configuration for creating test datasets and data loaders."""
235
236 punct_head: HeadConfig = HeadConfig()
237 """A configuration for creating punctuation MLP head that is applied to a language model outputs."""
238
239 capit_head: HeadConfig = HeadConfig()
240 """A configuration for creating capitalization MLP head that is applied to a language model outputs."""
241
242 tokenizer: Any = TokenizerConfig()
243 """A configuration for source text tokenizer."""
244
245 language_model: LanguageModelConfig = LanguageModelConfig()
246 """A configuration of a BERT-like language model which serves as a model body."""
247
248 optim: Optional[Any] = None
249 """A configuration of optimizer and learning rate scheduler. There is much variability in such config. For
250 description see `Optimizers
251 <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/core/core.html#optimizers>`_ section in
252 documentation and `primer <https://github.com/NVIDIA/NeMo/blob/main/tutorials/00_NeMo_Primer.ipynb>_ tutorial."""
253
254
255 @dataclass
256 class PunctuationCapitalizationLexicalAudioModelConfig(PunctuationCapitalizationModelConfig):
257 """
258 A configuration of
259 :class:`~nemo.collections.nlp.models.token_classification.punctuation_lexical_audio_capitalization_model.PunctuationCapitalizationLexicalAudioModel`
260 model.
261
262 See an example of model config in
263 `nemo/examples/nlp/token_classification/conf/punctuation_capitalization_lexical_audio_config.yaml
264 <https://github.com/NVIDIA/NeMo/blob/main/examples/nlp/token_classification/conf/punctuation_capitalization_lexical_audio_config.yaml>`_
265
266 The audio encoder can be frozen during training with the ``freeze_audio_encoder`` parameter.
267 An adapter can be added to the audio encoder with the ``use_adapters`` and ``adapter_config`` parameters.
268 More conformer layers can be added on top of the pretrained audio encoder with the ``frozen_conf_d_model``, ``frozen_conf_d_ff`` and ``frozen_conf_num_layers`` parameters.
269 """
270
271 train_ds: Optional[PunctuationCapitalizationTrainDataConfig] = None
272 """A configuration for creating training dataset and data loader."""
273
274 validation_ds: Optional[PunctuationCapitalizationEvalDataConfig] = None
275 """A configuration for creating validation datasets and data loaders."""
276
277 test_ds: Optional[PunctuationCapitalizationEvalDataConfig] = None
278 """A configuration for creating test datasets and data loaders."""
279
280 audio_encoder: Optional[AudioEncoderConfig] = None
281
282 restore_lexical_encoder_from: Optional[str] = None
283 """"Path to .nemo checkpoint to load weights from""" # add more comments
284
285 use_weighted_loss: Optional[bool] = False
286 """If set to ``True`` CrossEntropyLoss will be weighted"""
287
288
289 @dataclass
290 class PunctuationCapitalizationConfig(NemoConfig):
291 """
292 A config for punctuation model training and testing.
293
294 See an example of full config in
295 `nemo/examples/nlp/token_classification/conf/punctuation_capitalization_config.yaml
296 <https://github.com/NVIDIA/NeMo/blob/main/examples/nlp/token_classification/conf/punctuation_capitalization_config.yaml>`_
297 """
298
299 pretrained_model: Optional[str] = None
300 """Can be an NVIDIA's NGC cloud model or a path to a .nemo checkpoint. You can get list of possible cloud options
301 by calling method
302 :func:`~nemo.collections.nlp.models.token_classification.punctuation_capitalization_model.PunctuationCapitalizationModel.list_available_models`.
303 """
304
305 name: Optional[str] = 'Punctuation_and_Capitalization'
306 """A name of the model. Used for naming output directories and ``.nemo`` checkpoints."""
307
308 do_training: bool = True
309 """Whether to perform training of the model."""
310
311 do_testing: bool = False
312 """Whether ot perform testing of the model."""
313
314 model: PunctuationCapitalizationModelConfig = PunctuationCapitalizationModelConfig()
315 """A configuration for the
316 :class:`~nemo.collections.nlp.models.token_classification.punctuation_capitalization_model.PunctuationCapitalizationModel`
317 model."""
318
319 trainer: Optional[TrainerConfig] = TrainerConfig()
320 """Contains ``Trainer`` Lightning class constructor parameters."""
321
322 exp_manager: Optional[ExpManagerConfig] = ExpManagerConfig(name=name, files_to_copy=[])
323 """A configuration with various NeMo training options such as output directories, resuming from checkpoint,
324 tensorboard and W&B logging, and so on. For possible options see :ref:`exp-manager-label`."""
325
326
327 @dataclass
328 class PunctuationCapitalizationLexicalAudioConfig(PunctuationCapitalizationConfig):
329 model: PunctuationCapitalizationLexicalAudioModelConfig = PunctuationCapitalizationLexicalAudioModelConfig()
330
331
332 def is_legacy_model_config(model_cfg: DictConfig) -> bool:
333 """
334 Test if model config is old style config. Old style configs are configs which were used before
335 ``common_dataset_parameters`` item was added. Old style configs use ``dataset`` instead of
336 ``common_dataset_parameters``, ``batch_size`` instead of ``tokens_in_batch``. Old style configs do not support
337 tarred datasets.
338
339 Args:
340 model_cfg: model configuration
341
342 Returns:
343 whether ``model_config`` is legacy
344 """
345 return 'common_dataset_parameters' not in model_cfg
346
347
348 def legacy_model_config_to_new_model_config(model_cfg: DictConfig) -> DictConfig:
349 """
350 Transform old style config into
351 :class:`~nemo.collections.nlp.models.token_classification.punctuation_capitalization_config.PunctuationCapitalizationModelConfig`.
352 Old style configs are configs which were used before ``common_dataset_parameters`` item was added. Old style
353 configs use ``dataset`` instead of ``common_dataset_parameters`` and ``batch_size`` instead of ``tokens_in_batch``.
354 Old style configs do not support tarred datasets.
355
356 Args:
357 model_cfg: old style config
358
359 Returns:
360 model config which follows dataclass
361 :class:`~nemo.collections.nlp.models.token_classification.punctuation_capitalization_config.PunctuationCapitalizationModelConfig`
362 """
363 train_ds = model_cfg.get('train_ds')
364 validation_ds = model_cfg.get('validation_ds')
365 test_ds = model_cfg.get('test_ds')
366 dataset = model_cfg.dataset
367 punct_head_config = model_cfg.get('punct_head', {})
368 capit_head_config = model_cfg.get('capit_head', {})
369 omega_conf = OmegaConf.structured(
370 PunctuationCapitalizationModelConfig(
371 class_labels=model_cfg.class_labels,
372 common_dataset_parameters=CommonDatasetParametersConfig(
373 pad_label=dataset.pad_label,
374 ignore_extra_tokens=dataset.get(
375 'ignore_extra_tokens', CommonDatasetParametersConfig.ignore_extra_tokens
376 ),
377 ignore_start_end=dataset.get('ignore_start_end', CommonDatasetParametersConfig.ignore_start_end),
378 punct_label_ids=model_cfg.punct_label_ids,
379 capit_label_ids=model_cfg.capit_label_ids,
380 ),
381 train_ds=None
382 if train_ds is None
383 else legacy_data_config_to_new_data_config(train_ds, dataset, train=True),
384 validation_ds=None
385 if validation_ds is None
386 else legacy_data_config_to_new_data_config(validation_ds, dataset, train=False),
387 test_ds=None if test_ds is None else legacy_data_config_to_new_data_config(test_ds, dataset, train=False),
388 punct_head=HeadConfig(
389 num_fc_layers=punct_head_config.get('punct_num_fc_layers', HeadConfig.num_fc_layers),
390 fc_dropout=punct_head_config.get('fc_dropout', HeadConfig.fc_dropout),
391 activation=punct_head_config.get('activation', HeadConfig.activation),
392 use_transformer_init=punct_head_config.get('use_transformer_init', HeadConfig.use_transformer_init),
393 ),
394 capit_head=HeadConfig(
395 num_fc_layers=capit_head_config.get('capit_num_fc_layers', HeadConfig.num_fc_layers),
396 fc_dropout=capit_head_config.get('fc_dropout', HeadConfig.fc_dropout),
397 activation=capit_head_config.get('activation', HeadConfig.activation),
398 use_transformer_init=capit_head_config.get('use_transformer_init', HeadConfig.use_transformer_init),
399 ),
400 tokenizer=model_cfg.tokenizer,
401 language_model=model_cfg.language_model,
402 optim=model_cfg.optim,
403 )
404 )
405 with open_dict(omega_conf):
406 retain_during_legacy_conversion = model_cfg.get('retain_during_legacy_conversion', {})
407 for key in retain_during_legacy_conversion.keys():
408 omega_conf[key] = retain_during_legacy_conversion[key]
409 return omega_conf
410
[end of nemo/collections/nlp/models/token_classification/punctuation_capitalization_config.py]
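The legacy-config detection in `is_legacy_model_config` boils down to a single key check: a config is "old style" iff it lacks `common_dataset_parameters`. A minimal, dependency-free sketch of that rule (the real helper operates on an OmegaConf `DictConfig`; plain dicts are used here purely for illustration):

```python
# Hypothetical sketch mirroring is_legacy_model_config, using plain dicts
# instead of OmegaConf DictConfig objects.

def is_legacy_model_config(model_cfg: dict) -> bool:
    """Return True if ``model_cfg`` predates the ``common_dataset_parameters`` section."""
    return 'common_dataset_parameters' not in model_cfg

# Old style configs carry a ``dataset`` section and per-task label id maps.
legacy_cfg = {'dataset': {'pad_label': 'O'}, 'punct_label_ids': {'O': 0}}
# New style configs group those options under ``common_dataset_parameters``.
new_cfg = {'common_dataset_parameters': {'pad_label': 'O'}}

print(is_legacy_model_config(legacy_cfg))  # True
print(is_legacy_model_config(new_cfg))     # False
```

A config detected as legacy would then be passed through `legacy_model_config_to_new_model_config` to obtain a structured `PunctuationCapitalizationModelConfig`.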
[start of nemo/collections/nlp/modules/common/megatron/megatron_encoders.py]
1 # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Transformer based language model."""
16
17 from nemo.collections.nlp.modules.common.megatron.megatron_perceiver_encoders import MegatronPerceiverEncoderModule
18 from nemo.collections.nlp.modules.common.megatron.megatron_transformer_encoder import MegatronTransformerEncoderModule
19 from nemo.collections.nlp.modules.common.megatron.retrieval_transformer import (
20 MegatronRetrievalTransformerEncoderModule,
21 )
22 from nemo.collections.nlp.modules.common.megatron.utils import (
23 ApexGuardDefaults,
24 init_method_normal,
25 scaled_init_method_normal,
26 )
27
28 try:
29 from apex.transformer.enums import AttnMaskType, ModelType
30
31 HAVE_APEX = True
32 except (ImportError, ModuleNotFoundError):
33 HAVE_APEX = False
34 # fake missing classes with None attributes
35 AttnMaskType = ApexGuardDefaults()
36 ModelType = ApexGuardDefaults()
37
38 try:
39 from megatron.core import ModelParallelConfig
40
41 HAVE_MEGATRON_CORE = True
42
43 except (ImportError, ModuleNotFoundError):
44
45 ModelParallelConfig = ApexGuardDefaults
46
47 HAVE_MEGATRON_CORE = False
48
49 __all__ = []
50
51 AVAILABLE_ENCODERS = ["transformer", "perceiver", "retro"]
52
53
54 def get_encoder_model(
55 config: ModelParallelConfig,
56 arch,
57 hidden_size,
58 ffn_hidden_size,
59 num_layers,
60 num_attention_heads,
61 apply_query_key_layer_scaling=False,
62 kv_channels=None,
63 init_method=None,
64 scaled_init_method=None,
65 encoder_attn_mask_type=AttnMaskType.padding,
66 pre_process=True,
67 post_process=True,
68 init_method_std=0.02,
69 megatron_amp_O2=False,
70 hidden_dropout=0.1,
71 attention_dropout=0.1,
72 ffn_dropout=0.0,
73 precision=16,
74 fp32_residual_connection=False,
75 activations_checkpoint_method=None,
76 activations_checkpoint_num_layers=1,
77 activations_checkpoint_granularity=None,
78 layernorm_epsilon=1e-5,
79 bias_activation_fusion=True,
80 bias_dropout_add_fusion=True,
81 masked_softmax_fusion=True,
82 persist_layer_norm=False,
83 openai_gelu=False,
84 activation="gelu",
85 onnx_safe=False,
86 bias=True,
87 normalization="layernorm",
88 headscale=False,
89 transformer_block_type="pre_ln",
90 hidden_steps=32,
91 parent_model_type=ModelType.encoder_or_decoder,
92 layer_type=None,
93 chunk_size=64,
94 num_self_attention_per_cross_attention=1,
95 layer_number_offset=0, # this is used only for attention norm_factor scaling
96 megatron_legacy=False,
97 normalize_attention_scores=True,
98 sequence_parallel=False,
99 num_moe_experts=1,
100 moe_frequency=1,
101 moe_dropout=0.0,
102 turn_off_rop=False, # turn off the RoPE positional embedding
103 version=1, # model version
104 position_embedding_type='learned_absolute',
105 use_flash_attention=False,
106 ):
107 """Build language model and return along with the key to save."""
108
109 if kv_channels is None:
110 assert (
111 hidden_size % num_attention_heads == 0
112 ), 'hidden_size must be divisible by num_attention_heads if kv_channels is None'
113 kv_channels = hidden_size // num_attention_heads
114
115 if init_method is None:
116 init_method = init_method_normal(init_method_std)
117
118 if scaled_init_method is None:
119 scaled_init_method = scaled_init_method_normal(init_method_std, num_layers)
120
121 if arch == "transformer":
122 # Language encoder.
123 encoder = MegatronTransformerEncoderModule(
124 config=config,
125 init_method=init_method,
126 output_layer_init_method=scaled_init_method,
127 hidden_size=hidden_size,
128 num_layers=num_layers,
129 num_attention_heads=num_attention_heads,
130 apply_query_key_layer_scaling=apply_query_key_layer_scaling,
131 kv_channels=kv_channels,
132 ffn_hidden_size=ffn_hidden_size,
133 encoder_attn_mask_type=encoder_attn_mask_type,
134 pre_process=pre_process,
135 post_process=post_process,
136 megatron_amp_O2=megatron_amp_O2,
137 hidden_dropout=hidden_dropout,
138 attention_dropout=attention_dropout,
139 ffn_dropout=ffn_dropout,
140 precision=precision,
141 fp32_residual_connection=fp32_residual_connection,
142 activations_checkpoint_method=activations_checkpoint_method,
143 activations_checkpoint_num_layers=activations_checkpoint_num_layers,
144 activations_checkpoint_granularity=activations_checkpoint_granularity,
145 layernorm_epsilon=layernorm_epsilon,
146 bias_activation_fusion=bias_activation_fusion,
147 bias_dropout_add_fusion=bias_dropout_add_fusion,
148 masked_softmax_fusion=masked_softmax_fusion,
149 persist_layer_norm=persist_layer_norm,
150 openai_gelu=openai_gelu,
151 onnx_safe=onnx_safe,
152 activation=activation,
153 bias=bias,
154 normalization=normalization,
155 transformer_block_type=transformer_block_type,
156 headscale=headscale,
157 parent_model_type=parent_model_type,
158 megatron_legacy=megatron_legacy,
159 normalize_attention_scores=normalize_attention_scores,
160 num_moe_experts=num_moe_experts,
161 moe_frequency=moe_frequency,
162 moe_dropout=moe_dropout,
163 position_embedding_type=position_embedding_type,
164 use_flash_attention=use_flash_attention,
165 )
166 elif arch == "retro":
167 encoder = MegatronRetrievalTransformerEncoderModule(
168 config=config,
169 init_method=init_method,
170 output_layer_init_method=scaled_init_method,
171 hidden_size=hidden_size,
172 num_layers=num_layers,
173 num_attention_heads=num_attention_heads,
174 apply_query_key_layer_scaling=apply_query_key_layer_scaling,
175 kv_channels=kv_channels,
176 layer_type=layer_type,
177 ffn_hidden_size=ffn_hidden_size,
178 pre_process=pre_process,
179 post_process=post_process,
180 megatron_amp_O2=megatron_amp_O2,
181 hidden_dropout=hidden_dropout,
182 attention_dropout=attention_dropout,
183 precision=precision,
184 fp32_residual_connection=fp32_residual_connection,
185 activations_checkpoint_method=activations_checkpoint_method,
186 activations_checkpoint_num_layers=activations_checkpoint_num_layers,
187 activations_checkpoint_granularity=activations_checkpoint_granularity,
188 layernorm_epsilon=layernorm_epsilon,
189 bias_activation_fusion=bias_activation_fusion,
190 bias_dropout_add_fusion=bias_dropout_add_fusion,
191 masked_softmax_fusion=masked_softmax_fusion,
192 persist_layer_norm=persist_layer_norm,
193 openai_gelu=openai_gelu,
194 onnx_safe=onnx_safe,
195 activation=activation,
196 bias=bias,
197 normalization=normalization,
198 transformer_block_type=transformer_block_type,
199 parent_model_type=parent_model_type,
200 chunk_size=chunk_size,
201 layer_number_offset=layer_number_offset,
202 megatron_legacy=megatron_legacy,
203 normalize_attention_scores=normalize_attention_scores,
204 turn_off_rop=turn_off_rop,
205 version=version,
206 )
207 elif arch == "perceiver":
208 encoder = MegatronPerceiverEncoderModule(
209 config=config,
210 init_method=init_method,
211 output_layer_init_method=scaled_init_method,
212 hidden_size=hidden_size,
213 num_layers=num_layers,
214 num_attention_heads=num_attention_heads,
215 apply_query_key_layer_scaling=apply_query_key_layer_scaling,
216 kv_channels=kv_channels,
217 ffn_hidden_size=ffn_hidden_size,
218 encoder_attn_mask_type=encoder_attn_mask_type,
219 pre_process=pre_process,
220 post_process=post_process,
221 megatron_amp_O2=megatron_amp_O2,
222 hidden_dropout=hidden_dropout,
223 attention_dropout=attention_dropout,
224 ffn_dropout=ffn_dropout,
225 precision=precision,
226 fp32_residual_connection=fp32_residual_connection,
227 activations_checkpoint_method=activations_checkpoint_method,
228 activations_checkpoint_num_layers=activations_checkpoint_num_layers,
229 activations_checkpoint_granularity=activations_checkpoint_granularity,
230 layernorm_epsilon=layernorm_epsilon,
231 bias_activation_fusion=bias_activation_fusion,
232 bias_dropout_add_fusion=bias_dropout_add_fusion,
233 masked_softmax_fusion=masked_softmax_fusion,
234 persist_layer_norm=persist_layer_norm,
235 openai_gelu=openai_gelu,
236 onnx_safe=onnx_safe,
237 activation=activation,
238 bias=bias,
239 normalization=normalization,
240 transformer_block_type=transformer_block_type,
241 headscale=headscale,
242 parent_model_type=parent_model_type,
243 hidden_steps=hidden_steps,
244 num_self_attention_per_cross_attention=num_self_attention_per_cross_attention,
245 megatron_legacy=megatron_legacy,
246 normalize_attention_scores=normalize_attention_scores,
247 )
248 else:
249 raise ValueError(f"Unknown encoder arch = {arch}. Available encoder arch = {AVAILABLE_ENCODERS}")
250
251 return encoder
252
[end of nemo/collections/nlp/modules/common/megatron/megatron_encoders.py]
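When `kv_channels` is not supplied, `get_encoder_model` derives it from `hidden_size` and `num_attention_heads`, asserting divisibility first. A small standalone sketch of just that derivation (a hypothetical helper written for illustration, not part of the module):

```python
# Hypothetical helper mirroring the kv_channels default logic in get_encoder_model.

def infer_kv_channels(hidden_size: int, num_attention_heads: int, kv_channels: int = None) -> int:
    """Return kv_channels, deriving it from hidden_size when not given."""
    if kv_channels is None:
        # The per-head projection size must divide evenly, as asserted in the module.
        assert hidden_size % num_attention_heads == 0, (
            'hidden_size must be divisible by num_attention_heads if kv_channels is None'
        )
        kv_channels = hidden_size // num_attention_heads
    return kv_channels

print(infer_kv_channels(1024, 16))      # 64: hidden_size // num_attention_heads
print(infer_kv_channels(1024, 16, 32))  # 32: an explicit value is kept as-is
```

The same divisibility constraint applies to all three encoder architectures (`transformer`, `perceiver`, `retro`), since the check runs before the `arch` dispatch.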
[start of nemo/collections/tts/models/fastpitch.py]
1 # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 import contextlib
15 from dataclasses import dataclass
16 from pathlib import Path
17 from typing import List, Optional
18
19 import torch
20 from hydra.utils import instantiate
21 from omegaconf import DictConfig, OmegaConf, open_dict
22 from pytorch_lightning import Trainer
23 from pytorch_lightning.loggers import TensorBoardLogger
24
25 from nemo.collections.common.parts.preprocessing import parsers
26 from nemo.collections.tts.losses.aligner_loss import BinLoss, ForwardSumLoss
27 from nemo.collections.tts.losses.fastpitchloss import DurationLoss, EnergyLoss, MelLoss, PitchLoss
28 from nemo.collections.tts.models.base import SpectrogramGenerator
29 from nemo.collections.tts.modules.fastpitch import FastPitchModule
30 from nemo.collections.tts.parts.mixins import FastPitchAdapterModelMixin
31 from nemo.collections.tts.parts.utils.callbacks import LoggingCallback
32 from nemo.collections.tts.parts.utils.helpers import (
33 batch_from_ragged,
34 g2p_backward_compatible_support,
35 plot_alignment_to_numpy,
36 plot_spectrogram_to_numpy,
37 process_batch,
38 sample_tts_input,
39 )
40 from nemo.core.classes import Exportable
41 from nemo.core.classes.common import PretrainedModelInfo, typecheck
42 from nemo.core.neural_types.elements import (
43 Index,
44 LengthsType,
45 MelSpectrogramType,
46 ProbsType,
47 RegressionValuesType,
48 TokenDurationType,
49 TokenIndex,
50 TokenLogDurationType,
51 )
52 from nemo.core.neural_types.neural_type import NeuralType
53 from nemo.utils import logging, model_utils
54
55
56 @dataclass
57 class G2PConfig:
58 _target_: str = "nemo.collections.tts.g2p.models.en_us_arpabet.EnglishG2p"
59 phoneme_dict: str = "scripts/tts_dataset_files/cmudict-0.7b_nv22.10"
60 heteronyms: str = "scripts/tts_dataset_files/heteronyms-052722"
61 phoneme_probability: float = 0.5
62
63
64 @dataclass
65 class TextTokenizer:
66 _target_: str = "nemo.collections.common.tokenizers.text_to_speech.tts_tokenizers.EnglishPhonemesTokenizer"
67 punct: bool = True
68 stresses: bool = True
69 chars: bool = True
70 apostrophe: bool = True
71 pad_with_space: bool = True
72 add_blank_at: bool = True
73 g2p: G2PConfig = G2PConfig()
74
75
76 @dataclass
77 class TextTokenizerConfig:
78 text_tokenizer: TextTokenizer = TextTokenizer()
79
80
81 class FastPitchModel(SpectrogramGenerator, Exportable, FastPitchAdapterModelMixin):
82 """FastPitch model (https://arxiv.org/abs/2006.06873) that is used to generate mel spectrogram from text."""
83
84 def __init__(self, cfg: DictConfig, trainer: Trainer = None):
85 # Convert to Hydra 1.0 compatible DictConfig
86 cfg = model_utils.convert_model_config_to_dict_config(cfg)
87 cfg = model_utils.maybe_update_config_version(cfg)
88
89 # Setup normalizer
90 self.normalizer = None
91 self.text_normalizer_call = None
92 self.text_normalizer_call_kwargs = {}
93 self._setup_normalizer(cfg)
94
95 self.learn_alignment = cfg.get("learn_alignment", False)
96
97 # Setup vocabulary (=tokenizer) and input_fft_kwargs (supported only with self.learn_alignment=True)
98 input_fft_kwargs = {}
99 if self.learn_alignment:
100 self.vocab = None
101
102 self.ds_class = cfg.train_ds.dataset._target_
103 self.ds_class_name = self.ds_class.split(".")[-1]
104 if self.ds_class not in [
105 "nemo.collections.tts.data.dataset.TTSDataset",
106 "nemo.collections.tts.data.text_to_speech_dataset.TextToSpeechDataset",
107 "nemo.collections.tts.torch.data.TTSDataset",
108 ]:
109 raise ValueError(f"Unknown dataset class: {self.ds_class}.")
110
111 self._setup_tokenizer(cfg)
112 assert self.vocab is not None
113 input_fft_kwargs["n_embed"] = len(self.vocab.tokens)
114 input_fft_kwargs["padding_idx"] = self.vocab.pad
115
116 self._parser = None
117 self._tb_logger = None
118 super().__init__(cfg=cfg, trainer=trainer)
119
120 self.bin_loss_warmup_epochs = cfg.get("bin_loss_warmup_epochs", 100)
121 self.log_images = cfg.get("log_images", False)
122 self.log_train_images = False
123
124 default_prosody_loss_scale = 0.1 if self.learn_alignment else 1.0
125 dur_loss_scale = cfg.get("dur_loss_scale", default_prosody_loss_scale)
126 pitch_loss_scale = cfg.get("pitch_loss_scale", default_prosody_loss_scale)
127 energy_loss_scale = cfg.get("energy_loss_scale", default_prosody_loss_scale)
128
129 self.mel_loss_fn = MelLoss()
130 self.pitch_loss_fn = PitchLoss(loss_scale=pitch_loss_scale)
131 self.duration_loss_fn = DurationLoss(loss_scale=dur_loss_scale)
132 self.energy_loss_fn = EnergyLoss(loss_scale=energy_loss_scale)
133
134 self.aligner = None
135 if self.learn_alignment:
136 aligner_loss_scale = cfg.get("aligner_loss_scale", 1.0)
137 self.aligner = instantiate(self._cfg.alignment_module)
138 self.forward_sum_loss_fn = ForwardSumLoss(loss_scale=aligner_loss_scale)
139 self.bin_loss_fn = BinLoss(loss_scale=aligner_loss_scale)
140
141 self.preprocessor = instantiate(self._cfg.preprocessor)
142 input_fft = instantiate(self._cfg.input_fft, **input_fft_kwargs)
143 output_fft = instantiate(self._cfg.output_fft)
144 duration_predictor = instantiate(self._cfg.duration_predictor)
145 pitch_predictor = instantiate(self._cfg.pitch_predictor)
146 speaker_encoder = instantiate(self._cfg.get("speaker_encoder", None))
147 energy_embedding_kernel_size = cfg.get("energy_embedding_kernel_size", 0)
148 energy_predictor = instantiate(self._cfg.get("energy_predictor", None))
149
150 # [TODO] may remove if we change the pre-trained config
151 # cfg: condition_types = [ "add" ]
152 n_speakers = cfg.get("n_speakers", 0)
153 speaker_emb_condition_prosody = cfg.get("speaker_emb_condition_prosody", False)
154 speaker_emb_condition_decoder = cfg.get("speaker_emb_condition_decoder", False)
155 speaker_emb_condition_aligner = cfg.get("speaker_emb_condition_aligner", False)
156 min_token_duration = cfg.get("min_token_duration", 0)
157 use_log_energy = cfg.get("use_log_energy", True)
158 if n_speakers > 1 and "add" not in input_fft.cond_input.condition_types:
159 input_fft.cond_input.condition_types.append("add")
160 if speaker_emb_condition_prosody:
161 duration_predictor.cond_input.condition_types.append("add")
162 pitch_predictor.cond_input.condition_types.append("add")
163 if speaker_emb_condition_decoder:
164 output_fft.cond_input.condition_types.append("add")
165 if speaker_emb_condition_aligner and self.aligner is not None:
166 self.aligner.cond_input.condition_types.append("add")
167
168 self.fastpitch = FastPitchModule(
169 input_fft,
170 output_fft,
171 duration_predictor,
172 pitch_predictor,
173 energy_predictor,
174 self.aligner,
175 speaker_encoder,
176 n_speakers,
177 cfg.symbols_embedding_dim,
178 cfg.pitch_embedding_kernel_size,
179 energy_embedding_kernel_size,
180 cfg.n_mel_channels,
181 min_token_duration,
182 cfg.max_token_duration,
183 use_log_energy,
184 )
185 self._input_types = self._output_types = None
186 self.export_config = {
187 "emb_range": (0, self.fastpitch.encoder.word_emb.num_embeddings),
188 "enable_volume": False,
189 "enable_ragged_batches": False,
190 }
191 if self.fastpitch.speaker_emb is not None:
192 self.export_config["num_speakers"] = cfg.n_speakers
193
194 self.log_config = cfg.get("log_config", None)
195
196 # Adapter modules setup (from FastPitchAdapterModelMixin)
197 self.setup_adapters()
198
199 def _get_default_text_tokenizer_conf(self):
200 text_tokenizer: TextTokenizerConfig = TextTokenizerConfig()
201 return OmegaConf.create(OmegaConf.to_yaml(text_tokenizer))
202
203 def _setup_normalizer(self, cfg):
204 if "text_normalizer" in cfg:
205 normalizer_kwargs = {}
206
207 if "whitelist" in cfg.text_normalizer:
208 normalizer_kwargs["whitelist"] = self.register_artifact(
209 'text_normalizer.whitelist', cfg.text_normalizer.whitelist
210 )
211 try:
212 import nemo_text_processing
213
214 self.normalizer = instantiate(cfg.text_normalizer, **normalizer_kwargs)
215 except Exception as e:
216 logging.error(e)
217 raise ImportError(
218 "`nemo_text_processing` not installed, see https://github.com/NVIDIA/NeMo-text-processing for more details"
219 )
220
221 self.text_normalizer_call = self.normalizer.normalize
222 if "text_normalizer_call_kwargs" in cfg:
223 self.text_normalizer_call_kwargs = cfg.text_normalizer_call_kwargs
224
225 def _setup_tokenizer(self, cfg):
226 text_tokenizer_kwargs = {}
227
228 if "g2p" in cfg.text_tokenizer:
229 # for backward compatibility
230 if (
231 self._is_model_being_restored()
232 and (cfg.text_tokenizer.g2p.get('_target_', None) is not None)
233 and cfg.text_tokenizer.g2p["_target_"].startswith("nemo_text_processing.g2p")
234 ):
235 cfg.text_tokenizer.g2p["_target_"] = g2p_backward_compatible_support(
236 cfg.text_tokenizer.g2p["_target_"]
237 )
238
239 g2p_kwargs = {}
240
241 if "phoneme_dict" in cfg.text_tokenizer.g2p:
242 g2p_kwargs["phoneme_dict"] = self.register_artifact(
243 'text_tokenizer.g2p.phoneme_dict', cfg.text_tokenizer.g2p.phoneme_dict,
244 )
245
246 if "heteronyms" in cfg.text_tokenizer.g2p:
247 g2p_kwargs["heteronyms"] = self.register_artifact(
248 'text_tokenizer.g2p.heteronyms', cfg.text_tokenizer.g2p.heteronyms,
249 )
250
251 # for backward compatibility
252 text_tokenizer_kwargs["g2p"] = instantiate(cfg.text_tokenizer.g2p, **g2p_kwargs)
253
254 # TODO @xueyang: rename the instance of tokenizer because vocab is misleading.
255 self.vocab = instantiate(cfg.text_tokenizer, **text_tokenizer_kwargs)
256
257 @property
258 def tb_logger(self):
259 if self._tb_logger is None:
260 if self.logger is None or self.logger.experiment is None:
261 return None
262 tb_logger = self.logger.experiment
263 for logger in self.trainer.loggers:
264 if isinstance(logger, TensorBoardLogger):
265 tb_logger = logger.experiment
266 break
267 self._tb_logger = tb_logger
268 return self._tb_logger
269
270 @property
271 def parser(self):
272 if self._parser is not None:
273 return self._parser
274
275 if self.learn_alignment:
276 self._parser = self.vocab.encode
277 else:
278 self._parser = parsers.make_parser(
279 labels=self._cfg.labels,
280 name='en',
281 unk_id=-1,
282 blank_id=-1,
283 do_normalize=True,
284 abbreviation_version="fastpitch",
285 make_table=False,
286 )
287 return self._parser
288
289 def parse(self, str_input: str, normalize=True) -> torch.tensor:
290 if self.training:
291 logging.warning("parse() is meant to be called in eval mode.")
292
293 if normalize and self.text_normalizer_call is not None:
294 str_input = self.text_normalizer_call(str_input, **self.text_normalizer_call_kwargs)
295
296 if self.learn_alignment:
297 eval_phon_mode = contextlib.nullcontext()
298 if hasattr(self.vocab, "set_phone_prob"):
299 eval_phon_mode = self.vocab.set_phone_prob(prob=1.0)
300
301 # Disable mixed g2p representation if necessary
302 with eval_phon_mode:
303 tokens = self.parser(str_input)
304 else:
305 tokens = self.parser(str_input)
306
307 x = torch.tensor(tokens).unsqueeze_(0).long().to(self.device)
308 return x
309
310 @typecheck(
311 input_types={
312 "text": NeuralType(('B', 'T_text'), TokenIndex()),
313 "durs": NeuralType(('B', 'T_text'), TokenDurationType()),
314 "pitch": NeuralType(('B', 'T_audio'), RegressionValuesType()),
315 "energy": NeuralType(('B', 'T_audio'), RegressionValuesType(), optional=True),
316 "speaker": NeuralType(('B'), Index(), optional=True),
317 "pace": NeuralType(optional=True),
318 "spec": NeuralType(('B', 'D', 'T_spec'), MelSpectrogramType(), optional=True),
319 "attn_prior": NeuralType(('B', 'T_spec', 'T_text'), ProbsType(), optional=True),
320 "mel_lens": NeuralType(('B'), LengthsType(), optional=True),
321 "input_lens": NeuralType(('B'), LengthsType(), optional=True),
322 # reference_* data is used for multi-speaker FastPitch training
323 "reference_spec": NeuralType(('B', 'D', 'T_spec'), MelSpectrogramType(), optional=True),
324 "reference_spec_lens": NeuralType(('B'), LengthsType(), optional=True),
325 }
326 )
327 def forward(
328 self,
329 *,
330 text,
331 durs=None,
332 pitch=None,
333 energy=None,
334 speaker=None,
335 pace=1.0,
336 spec=None,
337 attn_prior=None,
338 mel_lens=None,
339 input_lens=None,
340 reference_spec=None,
341 reference_spec_lens=None,
342 ):
343 return self.fastpitch(
344 text=text,
345 durs=durs,
346 pitch=pitch,
347 energy=energy,
348 speaker=speaker,
349 pace=pace,
350 spec=spec,
351 attn_prior=attn_prior,
352 mel_lens=mel_lens,
353 input_lens=input_lens,
354 reference_spec=reference_spec,
355 reference_spec_lens=reference_spec_lens,
356 )
357
358 @typecheck(output_types={"spect": NeuralType(('B', 'D', 'T_spec'), MelSpectrogramType())})
359 def generate_spectrogram(
360 self,
361 tokens: 'torch.tensor',
362 speaker: Optional[int] = None,
363 pace: float = 1.0,
364 reference_spec: Optional['torch.tensor'] = None,
365 reference_spec_lens: Optional['torch.tensor'] = None,
366 ) -> torch.tensor:
367 if self.training:
368 logging.warning("generate_spectrogram() is meant to be called in eval mode.")
369 if isinstance(speaker, int):
370 speaker = torch.tensor([speaker]).to(self.device)
371 spect, *_ = self(
372 text=tokens,
373 durs=None,
374 pitch=None,
375 speaker=speaker,
376 pace=pace,
377 reference_spec=reference_spec,
378 reference_spec_lens=reference_spec_lens,
379 )
380 return spect
381
382 def training_step(self, batch, batch_idx):
383 attn_prior, durs, speaker, energy, reference_audio, reference_audio_len = (
384 None,
385 None,
386 None,
387 None,
388 None,
389 None,
390 )
391 if self.learn_alignment:
392 if self.ds_class == "nemo.collections.tts.data.text_to_speech_dataset.TextToSpeechDataset":
393 batch_dict = batch
394 else:
395 batch_dict = process_batch(batch, self._train_dl.dataset.sup_data_types_set)
396 audio = batch_dict.get("audio")
397 audio_lens = batch_dict.get("audio_lens")
398 text = batch_dict.get("text")
399 text_lens = batch_dict.get("text_lens")
400 attn_prior = batch_dict.get("align_prior_matrix", None)
401 pitch = batch_dict.get("pitch", None)
402 energy = batch_dict.get("energy", None)
403 speaker = batch_dict.get("speaker_id", None)
404 reference_audio = batch_dict.get("reference_audio", None)
405 reference_audio_len = batch_dict.get("reference_audio_lens", None)
406 else:
407 audio, audio_lens, text, text_lens, durs, pitch, speaker = batch
408
409 mels, spec_len = self.preprocessor(input_signal=audio, length=audio_lens)
410 reference_spec, reference_spec_len = None, None
411 if reference_audio is not None:
412 reference_spec, reference_spec_len = self.preprocessor(
413 input_signal=reference_audio, length=reference_audio_len
414 )
415
416 (
417 mels_pred,
418 _,
419 _,
420 log_durs_pred,
421 pitch_pred,
422 attn_soft,
423 attn_logprob,
424 attn_hard,
425 attn_hard_dur,
426 pitch,
427 energy_pred,
428 energy_tgt,
429 ) = self(
430 text=text,
431 durs=durs,
432 pitch=pitch,
433 energy=energy,
434 speaker=speaker,
435 pace=1.0,
436 spec=mels if self.learn_alignment else None,
437 reference_spec=reference_spec,
438 reference_spec_lens=reference_spec_len,
439 attn_prior=attn_prior,
440 mel_lens=spec_len,
441 input_lens=text_lens,
442 )
443 if durs is None:
444 durs = attn_hard_dur
445
446 mel_loss = self.mel_loss_fn(spect_predicted=mels_pred, spect_tgt=mels)
447 dur_loss = self.duration_loss_fn(log_durs_predicted=log_durs_pred, durs_tgt=durs, len=text_lens)
448 loss = mel_loss + dur_loss
449 if self.learn_alignment:
450 ctc_loss = self.forward_sum_loss_fn(attn_logprob=attn_logprob, in_lens=text_lens, out_lens=spec_len)
451 bin_loss_weight = min(self.current_epoch / self.bin_loss_warmup_epochs, 1.0) * 1.0
452 bin_loss = self.bin_loss_fn(hard_attention=attn_hard, soft_attention=attn_soft) * bin_loss_weight
453 loss += ctc_loss + bin_loss
454
455 pitch_loss = self.pitch_loss_fn(pitch_predicted=pitch_pred, pitch_tgt=pitch, len=text_lens)
456 energy_loss = self.energy_loss_fn(energy_predicted=energy_pred, energy_tgt=energy_tgt, length=text_lens)
457 loss += pitch_loss + energy_loss
458
459 self.log("t_loss", loss)
460 self.log("t_mel_loss", mel_loss)
461 self.log("t_dur_loss", dur_loss)
462 self.log("t_pitch_loss", pitch_loss)
463 if energy_tgt is not None:
464 self.log("t_energy_loss", energy_loss)
465 if self.learn_alignment:
466 self.log("t_ctc_loss", ctc_loss)
467 self.log("t_bin_loss", bin_loss)
468
469 # Log images to tensorboard
470 if self.log_images and self.log_train_images and isinstance(self.logger, TensorBoardLogger):
471 self.log_train_images = False
472
473 self.tb_logger.add_image(
474 "train_mel_target",
475 plot_spectrogram_to_numpy(mels[0].data.cpu().float().numpy()),
476 self.global_step,
477 dataformats="HWC",
478 )
479 spec_predict = mels_pred[0].data.cpu().float().numpy()
480 self.tb_logger.add_image(
481 "train_mel_predicted", plot_spectrogram_to_numpy(spec_predict), self.global_step, dataformats="HWC",
482 )
483 if self.learn_alignment:
484 attn = attn_hard[0].data.cpu().float().numpy().squeeze()
485 self.tb_logger.add_image(
486 "train_attn", plot_alignment_to_numpy(attn.T), self.global_step, dataformats="HWC",
487 )
488 soft_attn = attn_soft[0].data.cpu().float().numpy().squeeze()
489 self.tb_logger.add_image(
490 "train_soft_attn", plot_alignment_to_numpy(soft_attn.T), self.global_step, dataformats="HWC",
491 )
492
493 return loss
494
495 def validation_step(self, batch, batch_idx):
496 attn_prior, durs, speaker, energy, reference_audio, reference_audio_len = (
497 None,
498 None,
499 None,
500 None,
501 None,
502 None,
503 )
504 if self.learn_alignment:
505 if self.ds_class == "nemo.collections.tts.data.text_to_speech_dataset.TextToSpeechDataset":
506 batch_dict = batch
507 else:
508 batch_dict = process_batch(batch, self._train_dl.dataset.sup_data_types_set)
509 audio = batch_dict.get("audio")
510 audio_lens = batch_dict.get("audio_lens")
511 text = batch_dict.get("text")
512 text_lens = batch_dict.get("text_lens")
513 attn_prior = batch_dict.get("align_prior_matrix", None)
514 pitch = batch_dict.get("pitch", None)
515 energy = batch_dict.get("energy", None)
516 speaker = batch_dict.get("speaker_id", None)
517 reference_audio = batch_dict.get("reference_audio", None)
518 reference_audio_len = batch_dict.get("reference_audio_lens", None)
519 else:
520 audio, audio_lens, text, text_lens, durs, pitch, speaker = batch
521
522 mels, mel_lens = self.preprocessor(input_signal=audio, length=audio_lens)
523 reference_spec, reference_spec_len = None, None
524 if reference_audio is not None:
525 reference_spec, reference_spec_len = self.preprocessor(
526 input_signal=reference_audio, length=reference_audio_len
527 )
528
529 # Calculate val loss on ground truth durations to better align L2 loss in time
530 (mels_pred, _, _, log_durs_pred, pitch_pred, _, _, _, attn_hard_dur, pitch, energy_pred, energy_tgt,) = self(
531 text=text,
532 durs=durs,
533 pitch=pitch,
534 energy=energy,
535 speaker=speaker,
536 pace=1.0,
537 spec=mels if self.learn_alignment else None,
538 reference_spec=reference_spec,
539 reference_spec_lens=reference_spec_len,
540 attn_prior=attn_prior,
541 mel_lens=mel_lens,
542 input_lens=text_lens,
543 )
544 if durs is None:
545 durs = attn_hard_dur
546
547 mel_loss = self.mel_loss_fn(spect_predicted=mels_pred, spect_tgt=mels)
548 dur_loss = self.duration_loss_fn(log_durs_predicted=log_durs_pred, durs_tgt=durs, len=text_lens)
549 pitch_loss = self.pitch_loss_fn(pitch_predicted=pitch_pred, pitch_tgt=pitch, len=text_lens)
550 energy_loss = self.energy_loss_fn(energy_predicted=energy_pred, energy_tgt=energy_tgt, length=text_lens)
551 loss = mel_loss + dur_loss + pitch_loss + energy_loss
552
553 val_outputs = {
554 "val_loss": loss,
555 "mel_loss": mel_loss,
556 "dur_loss": dur_loss,
557 "pitch_loss": pitch_loss,
558 "energy_loss": energy_loss if energy_tgt is not None else None,
559 "mel_target": mels if batch_idx == 0 else None,
560 "mel_pred": mels_pred if batch_idx == 0 else None,
561 }
562 self.validation_step_outputs.append(val_outputs)
563 return val_outputs
564
565 def on_validation_epoch_end(self):
566 collect = lambda key: torch.stack([x[key] for x in self.validation_step_outputs]).mean()
567 val_loss = collect("val_loss")
568 mel_loss = collect("mel_loss")
569 dur_loss = collect("dur_loss")
570 pitch_loss = collect("pitch_loss")
571 self.log("val_loss", val_loss, sync_dist=True)
572 self.log("val_mel_loss", mel_loss, sync_dist=True)
573 self.log("val_dur_loss", dur_loss, sync_dist=True)
574 self.log("val_pitch_loss", pitch_loss, sync_dist=True)
575 if self.validation_step_outputs[0]["energy_loss"] is not None:
576 energy_loss = collect("energy_loss")
577 self.log("val_energy_loss", energy_loss, sync_dist=True)
578
579 _, _, _, _, _, spec_target, spec_predict = self.validation_step_outputs[0].values()
580
581 if self.log_images and isinstance(self.logger, TensorBoardLogger):
582 self.tb_logger.add_image(
583 "val_mel_target",
584 plot_spectrogram_to_numpy(spec_target[0].data.cpu().float().numpy()),
585 self.global_step,
586 dataformats="HWC",
587 )
588 spec_predict = spec_predict[0].data.cpu().float().numpy()
589 self.tb_logger.add_image(
590 "val_mel_predicted", plot_spectrogram_to_numpy(spec_predict), self.global_step, dataformats="HWC",
591 )
592 self.log_train_images = True
593 self.validation_step_outputs.clear() # free memory
594
595 def _setup_train_dataloader(self, cfg):
596 phon_mode = contextlib.nullcontext()
597 if hasattr(self.vocab, "set_phone_prob"):
598 phon_mode = self.vocab.set_phone_prob(self.vocab.phoneme_probability)
599
600 with phon_mode:
601 dataset = instantiate(cfg.dataset, text_tokenizer=self.vocab,)
602
603 sampler = dataset.get_sampler(cfg.dataloader_params.batch_size)
604 return torch.utils.data.DataLoader(
605 dataset, collate_fn=dataset.collate_fn, sampler=sampler, **cfg.dataloader_params
606 )
607
608 def _setup_test_dataloader(self, cfg):
609 phon_mode = contextlib.nullcontext()
610 if hasattr(self.vocab, "set_phone_prob"):
611 phon_mode = self.vocab.set_phone_prob(0.0)
612
613 with phon_mode:
614 dataset = instantiate(cfg.dataset, text_tokenizer=self.vocab,)
615
616 return torch.utils.data.DataLoader(dataset, collate_fn=dataset.collate_fn, **cfg.dataloader_params)
617
618 def __setup_dataloader_from_config(self, cfg, shuffle_should_be: bool = True, name: str = "train"):
619 if "dataset" not in cfg or not isinstance(cfg.dataset, DictConfig):
620 raise ValueError(f"No dataset for {name}")
621 if "dataloader_params" not in cfg or not isinstance(cfg.dataloader_params, DictConfig):
622 raise ValueError(f"No dataloader_params for {name}")
623 if shuffle_should_be:
624 if 'shuffle' not in cfg.dataloader_params:
625 logging.warning(
626 f"Shuffle should be set to True for {self}'s {name} dataloader but was not found in its "
627 "config. Manually setting to True"
628 )
629 with open_dict(cfg.dataloader_params):
630 cfg.dataloader_params.shuffle = True
631 elif not cfg.dataloader_params.shuffle:
632 logging.error(f"The {name} dataloader for {self} has shuffle set to False!!!")
633 elif cfg.dataloader_params.shuffle:
634 logging.error(f"The {name} dataloader for {self} has shuffle set to True!!!")
635
636 if self.ds_class == "nemo.collections.tts.data.dataset.TTSDataset":
637 phon_mode = contextlib.nullcontext()
638 if hasattr(self.vocab, "set_phone_prob"):
639 phon_mode = self.vocab.set_phone_prob(prob=None if name == "val" else self.vocab.phoneme_probability)
640
641 with phon_mode:
642 dataset = instantiate(
643 cfg.dataset,
644 text_normalizer=self.normalizer,
645 text_normalizer_call_kwargs=self.text_normalizer_call_kwargs,
646 text_tokenizer=self.vocab,
647 )
648 else:
649 dataset = instantiate(cfg.dataset)
650
651 return torch.utils.data.DataLoader(dataset, collate_fn=dataset.collate_fn, **cfg.dataloader_params)
652
653 def setup_training_data(self, cfg):
654 if self.ds_class == "nemo.collections.tts.data.text_to_speech_dataset.TextToSpeechDataset":
655 self._train_dl = self._setup_train_dataloader(cfg)
656 else:
657 self._train_dl = self.__setup_dataloader_from_config(cfg)
658
659 def setup_validation_data(self, cfg):
660 if self.ds_class == "nemo.collections.tts.data.text_to_speech_dataset.TextToSpeechDataset":
661 self._validation_dl = self._setup_test_dataloader(cfg)
662 else:
663 self._validation_dl = self.__setup_dataloader_from_config(cfg, shuffle_should_be=False, name="val")
664
665 def setup_test_data(self, cfg):
666 """Omitted."""
667 pass
668
669 def configure_callbacks(self):
670 if not self.log_config:
671 return []
672
673 sample_ds_class = self.log_config.dataset._target_
674 if sample_ds_class != "nemo.collections.tts.data.text_to_speech_dataset.TextToSpeechDataset":
675 raise ValueError(f"Logging callback only supported for TextToSpeechDataset, got {sample_ds_class}")
676
677 data_loader = self._setup_test_dataloader(self.log_config)
678
679 generators = instantiate(self.log_config.generators)
680 log_dir = Path(self.log_config.log_dir) if self.log_config.log_dir else None
681 log_callback = LoggingCallback(
682 generators=generators,
683 data_loader=data_loader,
684 log_epochs=self.log_config.log_epochs,
685 epoch_frequency=self.log_config.epoch_frequency,
686 output_dir=log_dir,
687 loggers=self.trainer.loggers,
688 log_tensorboard=self.log_config.log_tensorboard,
689 log_wandb=self.log_config.log_wandb,
690 )
691
692 return [log_callback]
693
694 @classmethod
695 def list_available_models(cls) -> 'List[PretrainedModelInfo]':
696 """
697 This method returns a list of pre-trained model which can be instantiated directly from NVIDIA's NGC cloud.
698 Returns:
699 List of available pre-trained models.
700 """
701 list_of_models = []
702
703 # en-US, single speaker, 22050Hz, LJSpeech (ARPABET).
704 model = PretrainedModelInfo(
705 pretrained_model_name="tts_en_fastpitch",
706 location="https://api.ngc.nvidia.com/v2/models/nvidia/nemo/tts_en_fastpitch/versions/1.8.1/files/tts_en_fastpitch_align.nemo",
707 description="This model is trained on LJSpeech sampled at 22050Hz and can be used to generate female English voices with an American accent. It is ARPABET-based.",
708 class_=cls,
709 )
710 list_of_models.append(model)
711
712 # en-US, single speaker, 22050Hz, LJSpeech (IPA).
713 model = PretrainedModelInfo(
714 pretrained_model_name="tts_en_fastpitch_ipa",
715 location="https://api.ngc.nvidia.com/v2/models/nvidia/nemo/tts_en_fastpitch/versions/IPA_1.13.0/files/tts_en_fastpitch_align_ipa.nemo",
716 description="This model is trained on LJSpeech sampled at 22050Hz and can be used to generate female English voices with an American accent. It is IPA-based.",
717 class_=cls,
718 )
719 list_of_models.append(model)
720
721 # en-US, multi-speaker, 44100Hz, HiFiTTS.
722 model = PretrainedModelInfo(
723 pretrained_model_name="tts_en_fastpitch_multispeaker",
724 location="https://api.ngc.nvidia.com/v2/models/nvidia/nemo/tts_en_multispeaker_fastpitchhifigan/versions/1.10.0/files/tts_en_fastpitch_multispeaker.nemo",
725 description="This model is trained on HiFiTTS sampled at 44100Hz and can be used to generate male and female English voices with an American accent.",
726 class_=cls,
727 )
728 list_of_models.append(model)
729
730 # de-DE, single male speaker, grapheme-based tokenizer, 22050 Hz, Thorsten Müller's German Neutral-TTS Dataset, 21.02
731 model = PretrainedModelInfo(
732 pretrained_model_name="tts_de_fastpitch_singleSpeaker_thorstenNeutral_2102",
733 location="https://api.ngc.nvidia.com/v2/models/nvidia/nemo/tts_de_fastpitchhifigan/versions/1.15.0/files/tts_de_fastpitch_thorstens2102.nemo",
734 description="This model is trained on single male speaker data in Thorsten Müller's German Neutral 21.02 Dataset sampled at 22050Hz and can be used to generate male German voices.",
735 class_=cls,
736 )
737 list_of_models.append(model)
738
739 # de-DE, single male speaker, grapheme-based tokenizer, 22050 Hz, Thorsten Müller's German Neutral-TTS Dataset, 22.10
740 model = PretrainedModelInfo(
741 pretrained_model_name="tts_de_fastpitch_singleSpeaker_thorstenNeutral_2210",
742 location="https://api.ngc.nvidia.com/v2/models/nvidia/nemo/tts_de_fastpitchhifigan/versions/1.15.0/files/tts_de_fastpitch_thorstens2210.nemo",
743 description="This model is trained on single male speaker data in Thorsten Müller's German Neutral 22.10 Dataset sampled at 22050Hz and can be used to generate male German voices.",
744 class_=cls,
745 )
746 list_of_models.append(model)
747
748 # de-DE, multi-speaker, 5 speakers, 44100 Hz, HUI-Audio-Corpus-German Clean.
749 model = PretrainedModelInfo(
750 pretrained_model_name="tts_de_fastpitch_multispeaker_5",
751 location="https://api.ngc.nvidia.com/v2/models/nvidia/nemo/tts_de_fastpitch_multispeaker_5/versions/1.11.0/files/tts_de_fastpitch_multispeaker_5.nemo",
752 description="This model is trained on 5 speakers in the HUI-Audio-Corpus-German clean subset sampled at 44100Hz and can be used to generate male and female German voices.",
753 class_=cls,
754 )
755 list_of_models.append(model)
756
757 # es, 174 speakers, 44100Hz, OpenSLR (IPA)
758 model = PretrainedModelInfo(
759 pretrained_model_name="tts_es_fastpitch_multispeaker",
760 location="https://api.ngc.nvidia.com/v2/models/nvidia/nemo/tts_es_multispeaker_fastpitchhifigan/versions/1.15.0/files/tts_es_fastpitch_multispeaker.nemo",
761 description="This model is trained on 174 speakers in 6 crowdsourced Latin American Spanish OpenSLR datasets sampled at 44100Hz and can be used to generate male and female Spanish voices with Latin American accents.",
762 class_=cls,
763 )
764 list_of_models.append(model)
765
766 # zh, single female speaker, 22050Hz, SFSpeech Bilingual Chinese/English dataset, improved model using richer
767 # dict and jieba word segmenter for polyphone disambiguation.
768 model = PretrainedModelInfo(
769 pretrained_model_name="tts_zh_fastpitch_sfspeech",
770 location="https://api.ngc.nvidia.com/v2/models/nvidia/nemo/tts_zh_fastpitch_hifigan_sfspeech/versions/1.15.0/files/tts_zh_fastpitch_sfspeech.nemo",
771 description="This model is trained on a single female speaker in SFSpeech Bilingual Chinese/English dataset"
772 " sampled at 22050Hz and can be used to generate female Mandarin Chinese voices. It is improved"
773 " using richer dict and jieba word segmenter for polyphone disambiguation.",
774 class_=cls,
775 )
776 list_of_models.append(model)
777
778 # en, multi speaker, LibriTTS, 16000 Hz
779 # stft 25ms 10ms matching ASR params
780 # for use during English ASR training/adaptation
781 model = PretrainedModelInfo(
782 pretrained_model_name="tts_en_fastpitch_for_asr_finetuning",
783 location="https://api.ngc.nvidia.com/v2/models/nvidia/nemo/tts_en_fastpitch_spectrogram_enhancer_for_asr_finetuning/versions/1.20.0/files/tts_en_fastpitch_for_asr_finetuning.nemo",
784 description="This model is trained on LibriSpeech, train-960 subset."
785 " STFT parameters follow those commonly used in ASR: 25 ms window, 10 ms hop."
786 " This model is supposed to be used with its companion SpectrogramEnhancer for "
787 " ASR fine-tuning. Usage for regular TTS tasks is not advised.",
788 class_=cls,
789 )
790 list_of_models.append(model)
791
792 return list_of_models
793
794 # Methods for model exportability
795 def _prepare_for_export(self, **kwargs):
796 super()._prepare_for_export(**kwargs)
797
798 tensor_shape = ('T') if self.export_config["enable_ragged_batches"] else ('B', 'T')
799
800 # Define input_types and output_types as required by export()
801 self._input_types = {
802 "text": NeuralType(tensor_shape, TokenIndex()),
803 "pitch": NeuralType(tensor_shape, RegressionValuesType()),
804 "pace": NeuralType(tensor_shape),
805 "volume": NeuralType(tensor_shape, optional=True),
806 "batch_lengths": NeuralType(('B'), optional=True),
807 "speaker": NeuralType(('B'), Index(), optional=True),
808 }
809 self._output_types = {
810 "spect": NeuralType(('B', 'D', 'T'), MelSpectrogramType()),
811 "num_frames": NeuralType(('B'), TokenDurationType()),
812 "durs_predicted": NeuralType(('B', 'T'), TokenDurationType()),
813 "log_durs_predicted": NeuralType(('B', 'T'), TokenLogDurationType()),
814 "pitch_predicted": NeuralType(('B', 'T'), RegressionValuesType()),
815 }
816 if self.export_config["enable_volume"]:
817 self._output_types["volume_aligned"] = NeuralType(('B', 'T'), RegressionValuesType())
818
819 def _export_teardown(self):
820 self._input_types = self._output_types = None
821
822 @property
823 def disabled_deployment_input_names(self):
824 """Implement this method to return a set of input names disabled for export"""
825 disabled_inputs = set()
826 if self.fastpitch.speaker_emb is None:
827 disabled_inputs.add("speaker")
828 if not self.export_config["enable_ragged_batches"]:
829 disabled_inputs.add("batch_lengths")
830 if not self.export_config["enable_volume"]:
831 disabled_inputs.add("volume")
832 return disabled_inputs
833
834 @property
835 def input_types(self):
836 return self._input_types
837
838 @property
839 def output_types(self):
840 return self._output_types
841
842 def input_example(self, max_batch=1, max_dim=44):
843 """
844 Generates input examples for tracing etc.
845 Returns:
846 A tuple of input examples.
847 """
848 par = next(self.fastpitch.parameters())
849 inputs = sample_tts_input(self.export_config, par.device, max_batch=max_batch, max_dim=max_dim)
850 if 'enable_ragged_batches' not in self.export_config:
851 inputs.pop('batch_lengths', None)
852 return (inputs,)
853
854 def forward_for_export(self, text, pitch, pace, volume=None, batch_lengths=None, speaker=None):
855 if self.export_config["enable_ragged_batches"]:
856 text, pitch, pace, volume_tensor, lens = batch_from_ragged(
857 text, pitch, pace, batch_lengths, padding_idx=self.fastpitch.encoder.padding_idx, volume=volume
858 )
859 if volume is not None:
860 volume = volume_tensor
861 return self.fastpitch.infer(text=text, pitch=pitch, pace=pace, volume=volume, speaker=speaker)
862
863 def interpolate_speaker(
864 self, original_speaker_1, original_speaker_2, weight_speaker_1, weight_speaker_2, new_speaker_id
865 ):
866 """
867 This method performs speaker interpolation between two original speakers the model is trained on.
868
869 Inputs:
870 original_speaker_1: Integer speaker ID of first existing speaker in the model
871 original_speaker_2: Integer speaker ID of second existing speaker in the model
872 weight_speaker_1: Floating point weight assigned to the first speaker during weight combination
873 weight_speaker_2: Floating point weight assigned to the second speaker during weight combination
874 new_speaker_id: Integer speaker ID of new interpolated speaker in the model
875 """
876 if self.fastpitch.speaker_emb is None:
877 raise Exception(
878 "Current FastPitch model is not a multi-speaker FastPitch model. Speaker interpolation can only \
879 be performed with a multi-speaker model"
880 )
881 n_speakers = self.fastpitch.speaker_emb.weight.data.size()[0]
882 if original_speaker_1 >= n_speakers or original_speaker_2 >= n_speakers or new_speaker_id >= n_speakers:
883 raise Exception(
884 f"Parameters original_speaker_1, original_speaker_2, new_speaker_id should be less than the \
885 total number of speakers FastPitch was trained on (n_speakers = {n_speakers})."
886 )
887 speaker_emb_1 = (
888 self.fastpitch.speaker_emb(torch.tensor(original_speaker_1, dtype=torch.int32).cuda()).clone().detach()
889 )
890 speaker_emb_2 = (
891 self.fastpitch.speaker_emb(torch.tensor(original_speaker_2, dtype=torch.int32).cuda()).clone().detach()
892 )
893 new_speaker_emb = weight_speaker_1 * speaker_emb_1 + weight_speaker_2 * speaker_emb_2
894 self.fastpitch.speaker_emb.weight.data[new_speaker_id] = new_speaker_emb
895
[end of nemo/collections/tts/models/fastpitch.py]
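The `interpolate_speaker` method above boils down to a convex combination of two speaker-embedding rows: `new_emb = w1 * emb1 + w2 * emb2`. A minimal, dependency-free sketch of that math (plain Python lists stand in for rows of the `speaker_emb` weight matrix; the names here are illustrative, not NeMo API):

```python
def interpolate_embeddings(emb_1, emb_2, w_1, w_2):
    """Element-wise weighted combination of two speaker embeddings."""
    return [w_1 * a + w_2 * b for a, b in zip(emb_1, emb_2)]

# Two toy 4-dimensional speaker embeddings.
speaker_emb_1 = [1.0, 0.0, 2.0, -1.0]
speaker_emb_2 = [0.0, 2.0, 0.0, 1.0]

# Equal-weight blend, mirroring weight_speaker_1 / weight_speaker_2
# in FastPitchModel.interpolate_speaker.
new_emb = interpolate_embeddings(speaker_emb_1, speaker_emb_2, 0.5, 0.5)
# new_emb == [0.5, 1.0, 1.0, 0.0]
```

In the model itself the result is written back into `self.fastpitch.speaker_emb.weight.data[new_speaker_id]`, so the blended voice is addressable by an ordinary speaker ID at inference time.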
[start of nemo/collections/tts/models/tacotron2.py]
1 # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import contextlib
16 from dataclasses import dataclass
17 from typing import Any, Dict, List, Optional
18
19 import torch
20 from hydra.utils import instantiate
21 from omegaconf import MISSING, DictConfig, OmegaConf, open_dict
22 from omegaconf.errors import ConfigAttributeError
23 from pytorch_lightning.loggers import TensorBoardLogger, WandbLogger
24 from torch import nn
25
26 from nemo.collections.common.parts.preprocessing import parsers
27 from nemo.collections.tts.losses.tacotron2loss import Tacotron2Loss
28 from nemo.collections.tts.models.base import SpectrogramGenerator
29 from nemo.collections.tts.parts.utils.helpers import (
30 g2p_backward_compatible_support,
31 get_mask_from_lengths,
32 tacotron2_log_to_tb_func,
33 tacotron2_log_to_wandb_func,
34 )
35 from nemo.core.classes.common import PretrainedModelInfo, typecheck
36 from nemo.core.neural_types.elements import (
37 AudioSignal,
38 EmbeddedTextType,
39 LengthsType,
40 LogitsType,
41 MelSpectrogramType,
42 SequenceToSequenceAlignmentType,
43 )
44 from nemo.core.neural_types.neural_type import NeuralType
45 from nemo.utils import logging, model_utils
46
47
48 @dataclass
49 class Preprocessor:
50 _target_: str = MISSING
51 pad_value: float = MISSING
52
53
54 @dataclass
55 class Tacotron2Config:
56 preprocessor: Preprocessor = Preprocessor()
57 encoder: Dict[Any, Any] = MISSING
58 decoder: Dict[Any, Any] = MISSING
59 postnet: Dict[Any, Any] = MISSING
60 labels: List = MISSING
61 train_ds: Optional[Dict[Any, Any]] = None
62 validation_ds: Optional[Dict[Any, Any]] = None
63
64
65 class Tacotron2Model(SpectrogramGenerator):
66 """Tacotron 2 Model that is used to generate mel spectrograms from text"""
67
68 def __init__(self, cfg: DictConfig, trainer: 'Trainer' = None):
69 # Convert to Hydra 1.0 compatible DictConfig
70 cfg = model_utils.convert_model_config_to_dict_config(cfg)
71 cfg = model_utils.maybe_update_config_version(cfg)
72
73 # setup normalizer
74 self.normalizer = None
75 self.text_normalizer_call = None
76 self.text_normalizer_call_kwargs = {}
77 self._setup_normalizer(cfg)
78
79 # setup tokenizer
80 self.tokenizer = None
81 if hasattr(cfg, 'text_tokenizer'):
82 self._setup_tokenizer(cfg)
83
84 self.num_tokens = len(self.tokenizer.tokens)
85 self.tokenizer_pad = self.tokenizer.pad
86 self.tokenizer_unk = self.tokenizer.oov
87 # assert self.tokenizer is not None
88 else:
89 self.num_tokens = len(cfg.labels) + 3
90
91 super().__init__(cfg=cfg, trainer=trainer)
92
93 schema = OmegaConf.structured(Tacotron2Config)
94 # ModelPT ensures that cfg is a DictConfig, but do this second check in case ModelPT changes
95 if isinstance(cfg, dict):
96 cfg = OmegaConf.create(cfg)
97 elif not isinstance(cfg, DictConfig):
98 raise ValueError(f"cfg was type: {type(cfg)}. Expected either a dict or a DictConfig")
99 # Ensure passed cfg is compliant with schema
100 try:
101 OmegaConf.merge(cfg, schema)
102 self.pad_value = cfg.preprocessor.pad_value
103 except ConfigAttributeError:
104 self.pad_value = cfg.preprocessor.params.pad_value
105 logging.warning(
106 "Your config is using an old NeMo yaml configuration. Please ensure that the yaml matches the "
107 "current version in the main branch for future compatibility."
108 )
109
110 self._parser = None
111 self.audio_to_melspec_precessor = instantiate(cfg.preprocessor)
112 self.text_embedding = nn.Embedding(self.num_tokens, 512)
113 self.encoder = instantiate(self._cfg.encoder)
114 self.decoder = instantiate(self._cfg.decoder)
115 self.postnet = instantiate(self._cfg.postnet)
116 self.loss = Tacotron2Loss()
117 self.calculate_loss = True
118
119 @property
120 def parser(self):
121 if self._parser is not None:
122 return self._parser
123
124 ds_class_name = self._cfg.train_ds.dataset._target_.split(".")[-1]
125 if ds_class_name == "TTSDataset":
126 self._parser = None
127 elif hasattr(self._cfg, "labels"):
128 self._parser = parsers.make_parser(
129 labels=self._cfg.labels,
130 name='en',
131 unk_id=-1,
132 blank_id=-1,
133 do_normalize=True,
134 abbreviation_version="fastpitch",
135 make_table=False,
136 )
137 else:
138 raise ValueError("Wanted to setup parser, but model does not have necessary parameters")
139
140 return self._parser
141
142 def parse(self, text: str, normalize=True) -> torch.Tensor:
143 if self.training:
144 logging.warning("parse() is meant to be called in eval mode.")
145 if normalize and self.text_normalizer_call is not None:
146 text = self.text_normalizer_call(text, **self.text_normalizer_call_kwargs)
147
148 eval_phon_mode = contextlib.nullcontext()
149 if hasattr(self.tokenizer, "set_phone_prob"):
150 eval_phon_mode = self.tokenizer.set_phone_prob(prob=1.0)
151
152 with eval_phon_mode:
153 if self.tokenizer is not None:
154 tokens = self.tokenizer.encode(text)
155 else:
156 tokens = self.parser(text)
157 # Old parser doesn't add bos and eos ids, so manually add them
158 tokens = [len(self._cfg.labels)] + tokens + [len(self._cfg.labels) + 1]
159 tokens_tensor = torch.tensor(tokens).unsqueeze_(0).to(self.device)
160 return tokens_tensor
161
162 @property
163 def input_types(self):
164 if self.training:
165 return {
166 "tokens": NeuralType(('B', 'T'), EmbeddedTextType()),
167 "token_len": NeuralType(('B'), LengthsType()),
168 "audio": NeuralType(('B', 'T'), AudioSignal()),
169 "audio_len": NeuralType(('B'), LengthsType()),
170 }
171 else:
172 return {
173 "tokens": NeuralType(('B', 'T'), EmbeddedTextType()),
174 "token_len": NeuralType(('B'), LengthsType()),
175 "audio": NeuralType(('B', 'T'), AudioSignal(), optional=True),
176 "audio_len": NeuralType(('B'), LengthsType(), optional=True),
177 }
178
179 @property
180 def output_types(self):
181 if not self.calculate_loss and not self.training:
182 return {
183 "spec_pred_dec": NeuralType(('B', 'D', 'T'), MelSpectrogramType()),
184 "spec_pred_postnet": NeuralType(('B', 'D', 'T'), MelSpectrogramType()),
185 "gate_pred": NeuralType(('B', 'T'), LogitsType()),
186 "alignments": NeuralType(('B', 'T', 'T'), SequenceToSequenceAlignmentType()),
187 "pred_length": NeuralType(('B'), LengthsType()),
188 }
189 return {
190 "spec_pred_dec": NeuralType(('B', 'D', 'T'), MelSpectrogramType()),
191 "spec_pred_postnet": NeuralType(('B', 'D', 'T'), MelSpectrogramType()),
192 "gate_pred": NeuralType(('B', 'T'), LogitsType()),
193 "spec_target": NeuralType(('B', 'D', 'T'), MelSpectrogramType()),
194 "spec_target_len": NeuralType(('B'), LengthsType()),
195 "alignments": NeuralType(('B', 'T', 'T'), SequenceToSequenceAlignmentType()),
196 }
197
198 @typecheck()
199 def forward(self, *, tokens, token_len, audio=None, audio_len=None):
200 if audio is not None and audio_len is not None:
201 spec_target, spec_target_len = self.audio_to_melspec_precessor(audio, audio_len)
202 else:
203 if self.training or self.calculate_loss:
204 raise ValueError(
205 "'audio' and 'audio_len' cannot be None when either 'self.training' or 'self.calculate_loss' is True."
206 )
207
208 token_embedding = self.text_embedding(tokens).transpose(1, 2)
209 encoder_embedding = self.encoder(token_embedding=token_embedding, token_len=token_len)
210
211 if self.training:
212 spec_pred_dec, gate_pred, alignments = self.decoder(
213 memory=encoder_embedding, decoder_inputs=spec_target, memory_lengths=token_len
214 )
215 else:
216 spec_pred_dec, gate_pred, alignments, pred_length = self.decoder(
217 memory=encoder_embedding, memory_lengths=token_len
218 )
219
220 spec_pred_postnet = self.postnet(mel_spec=spec_pred_dec)
221
222 if not self.calculate_loss and not self.training:
223 return spec_pred_dec, spec_pred_postnet, gate_pred, alignments, pred_length
224
225 return spec_pred_dec, spec_pred_postnet, gate_pred, spec_target, spec_target_len, alignments
226
227 @typecheck(
228 input_types={"tokens": NeuralType(('B', 'T'), EmbeddedTextType())},
229 output_types={"spec": NeuralType(('B', 'D', 'T'), MelSpectrogramType())},
230 )
231 def generate_spectrogram(self, *, tokens):
232 self.eval()
233 self.calculate_loss = False
234 token_len = torch.tensor([len(i) for i in tokens]).to(self.device)
235 tensors = self(tokens=tokens, token_len=token_len)
236 spectrogram_pred = tensors[1]
237
238 if spectrogram_pred.shape[0] > 1:
239 # Silence all frames past the predicted end
240 mask = ~get_mask_from_lengths(tensors[-1])
241 mask = mask.expand(spectrogram_pred.shape[1], mask.size(0), mask.size(1))
242 mask = mask.permute(1, 0, 2)
243 spectrogram_pred.data.masked_fill_(mask, self.pad_value)
244
245 return spectrogram_pred
246
247 def training_step(self, batch, batch_idx):
248 audio, audio_len, tokens, token_len = batch
249 spec_pred_dec, spec_pred_postnet, gate_pred, spec_target, spec_target_len, _ = self.forward(
250 audio=audio, audio_len=audio_len, tokens=tokens, token_len=token_len
251 )
252
253 loss, _ = self.loss(
254 spec_pred_dec=spec_pred_dec,
255 spec_pred_postnet=spec_pred_postnet,
256 gate_pred=gate_pred,
257 spec_target=spec_target,
258 spec_target_len=spec_target_len,
259 pad_value=self.pad_value,
260 )
261
262 output = {
263 'loss': loss,
264 'progress_bar': {'training_loss': loss},
265 'log': {'loss': loss},
266 }
267 return output
268
269 def validation_step(self, batch, batch_idx):
270 audio, audio_len, tokens, token_len = batch
271 spec_pred_dec, spec_pred_postnet, gate_pred, spec_target, spec_target_len, alignments = self.forward(
272 audio=audio, audio_len=audio_len, tokens=tokens, token_len=token_len
273 )
274
275 loss, gate_target = self.loss(
276 spec_pred_dec=spec_pred_dec,
277 spec_pred_postnet=spec_pred_postnet,
278 gate_pred=gate_pred,
279 spec_target=spec_target,
280 spec_target_len=spec_target_len,
281 pad_value=self.pad_value,
282 )
283 loss = {
284 "val_loss": loss,
285 "mel_target": spec_target,
286 "mel_postnet": spec_pred_postnet,
287 "gate": gate_pred,
288 "gate_target": gate_target,
289 "alignments": alignments,
290 }
291 self.validation_step_outputs.append(loss)
292 return loss
293
294 def on_validation_epoch_end(self):
295         if self.logger is not None and self.logger.experiment is not None:
296             logger = self.logger  # keep the logger object (not its experiment) so the isinstance checks below work
297             for trainer_logger in self.trainer.loggers:
298                 if isinstance(trainer_logger, TensorBoardLogger):
299                     logger = trainer_logger
300                     break
301             if isinstance(logger, TensorBoardLogger):
302                 tacotron2_log_to_tb_func(
303                     logger.experiment,
304 self.validation_step_outputs[0].values(),
305 self.global_step,
306 tag="val",
307 log_images=True,
308 add_audio=False,
309 )
310 elif isinstance(logger, WandbLogger):
311 tacotron2_log_to_wandb_func(
312 logger,
313 self.validation_step_outputs[0].values(),
314 self.global_step,
315 tag="val",
316 log_images=True,
317 add_audio=False,
318 )
319 avg_loss = torch.stack(
320 [x['val_loss'] for x in self.validation_step_outputs]
321 ).mean() # This reduces across batches, not workers!
322 self.log('val_loss', avg_loss)
323 self.validation_step_outputs.clear() # free memory
324
325 def _setup_normalizer(self, cfg):
326 if "text_normalizer" in cfg:
327 normalizer_kwargs = {}
328
329 if "whitelist" in cfg.text_normalizer:
330 normalizer_kwargs["whitelist"] = self.register_artifact(
331 'text_normalizer.whitelist', cfg.text_normalizer.whitelist
332 )
333
334 try:
335 import nemo_text_processing
336
337 self.normalizer = instantiate(cfg.text_normalizer, **normalizer_kwargs)
338 except Exception as e:
339 logging.error(e)
340 raise ImportError(
341 "`nemo_text_processing` not installed, see https://github.com/NVIDIA/NeMo-text-processing for more details"
342 )
343
344 self.text_normalizer_call = self.normalizer.normalize
345 if "text_normalizer_call_kwargs" in cfg:
346 self.text_normalizer_call_kwargs = cfg.text_normalizer_call_kwargs
347
348 def _setup_tokenizer(self, cfg):
349 text_tokenizer_kwargs = {}
350 if "g2p" in cfg.text_tokenizer and cfg.text_tokenizer.g2p is not None:
351 # for backward compatibility
352 if (
353 self._is_model_being_restored()
354 and (cfg.text_tokenizer.g2p.get('_target_', None) is not None)
355 and cfg.text_tokenizer.g2p["_target_"].startswith("nemo_text_processing.g2p")
356 ):
357 cfg.text_tokenizer.g2p["_target_"] = g2p_backward_compatible_support(
358 cfg.text_tokenizer.g2p["_target_"]
359 )
360
361 g2p_kwargs = {}
362
363 if "phoneme_dict" in cfg.text_tokenizer.g2p:
364 g2p_kwargs["phoneme_dict"] = self.register_artifact(
365 'text_tokenizer.g2p.phoneme_dict', cfg.text_tokenizer.g2p.phoneme_dict,
366 )
367
368 if "heteronyms" in cfg.text_tokenizer.g2p:
369 g2p_kwargs["heteronyms"] = self.register_artifact(
370 'text_tokenizer.g2p.heteronyms', cfg.text_tokenizer.g2p.heteronyms,
371 )
372
373 text_tokenizer_kwargs["g2p"] = instantiate(cfg.text_tokenizer.g2p, **g2p_kwargs)
374
375 self.tokenizer = instantiate(cfg.text_tokenizer, **text_tokenizer_kwargs)
376
377 def __setup_dataloader_from_config(self, cfg, shuffle_should_be: bool = True, name: str = "train"):
378 if "dataset" not in cfg or not isinstance(cfg.dataset, DictConfig):
379 raise ValueError(f"No dataset for {name}")
380 if "dataloader_params" not in cfg or not isinstance(cfg.dataloader_params, DictConfig):
381             raise ValueError(f"No dataloader_params for {name}")
382 if shuffle_should_be:
383 if 'shuffle' not in cfg.dataloader_params:
384 logging.warning(
385 f"Shuffle should be set to True for {self}'s {name} dataloader but was not found in its "
386 "config. Manually setting to True"
387 )
388 with open_dict(cfg.dataloader_params):
389 cfg.dataloader_params.shuffle = True
390 elif not cfg.dataloader_params.shuffle:
391 logging.error(f"The {name} dataloader for {self} has shuffle set to False!!!")
392 elif not shuffle_should_be and cfg.dataloader_params.shuffle:
393 logging.error(f"The {name} dataloader for {self} has shuffle set to True!!!")
394
395 dataset = instantiate(
396 cfg.dataset,
397 text_normalizer=self.normalizer,
398 text_normalizer_call_kwargs=self.text_normalizer_call_kwargs,
399 text_tokenizer=self.tokenizer,
400 )
401
402 return torch.utils.data.DataLoader(dataset, collate_fn=dataset.collate_fn, **cfg.dataloader_params)
403
404 def setup_training_data(self, cfg):
405 self._train_dl = self.__setup_dataloader_from_config(cfg)
406
407 def setup_validation_data(self, cfg):
408 self._validation_dl = self.__setup_dataloader_from_config(cfg, shuffle_should_be=False, name="validation")
409
410 @classmethod
411 def list_available_models(cls) -> 'List[PretrainedModelInfo]':
412 """
413         This method returns a list of pre-trained models which can be instantiated directly from NVIDIA's NGC cloud.
414 Returns:
415 List of available pre-trained models.
416 """
417 list_of_models = []
418 model = PretrainedModelInfo(
419 pretrained_model_name="tts_en_tacotron2",
420 location="https://api.ngc.nvidia.com/v2/models/nvidia/nemo/tts_en_tacotron2/versions/1.10.0/files/tts_en_tacotron2.nemo",
421 description="This model is trained on LJSpeech sampled at 22050Hz, and can be used to generate female English voices with an American accent.",
422 class_=cls,
423 aliases=["Tacotron2-22050Hz"],
424 )
425 list_of_models.append(model)
426 return list_of_models
427
[end of nemo/collections/tts/models/tacotron2.py]
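The frame-masking step in `generate_spectrogram` above (silencing all frames past each predicted length) can be sketched in pure Python. This is a minimal illustration of the idea only: it assumes the same contract as NeMo's `get_mask_from_lengths` (True marks valid timesteps), `silence_past_length` is a hypothetical helper name, and the real code does this with tensor operations (`masked_fill_`) rather than list comprehensions.

```python
def get_mask_from_lengths(lengths, max_len=None):
    # True for valid timesteps, False for padding (mirrors the helper's contract).
    max_len = max_len if max_len is not None else max(lengths)
    return [[t < length for t in range(max_len)] for length in lengths]


def silence_past_length(frames, lengths, pad_value=0.0):
    # Replace every frame at or beyond its sequence length with pad_value,
    # analogous to `spectrogram_pred.data.masked_fill_(mask, self.pad_value)`.
    mask = get_mask_from_lengths(lengths, max_len=len(frames[0]))
    return [
        [value if valid else pad_value for value, valid in zip(row, row_mask)]
        for row, row_mask in zip(frames, mask)
    ]
```

For a batch of two sequences with predicted lengths 2 and 3, only the trailing frame of the first row is padded out.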
[start of nemo/core/config/modelPT.py]
1 # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from dataclasses import dataclass, field
16 from typing import Any, Dict, Optional
17
18 from omegaconf import MISSING
19
20 from nemo.core import config
21 from nemo.core.classes.dataset import DatasetConfig
22 from nemo.utils import exp_manager
23
24
25 @dataclass
26 class SchedConfig:
27 name: str = MISSING
28 min_lr: float = 0.0
29 last_epoch: int = -1
30
31
32 @dataclass
33 class OptimConfig:
34 name: str = MISSING
35 sched: Optional[SchedConfig] = None
36
37
38 @dataclass
39 class ModelConfig:
40 """
41 Model component inside ModelPT
42 """
43
44 # ...
45 train_ds: Optional[DatasetConfig] = None
46 validation_ds: Optional[DatasetConfig] = None
47 test_ds: Optional[DatasetConfig] = None
48 optim: Optional[OptimConfig] = None
49
50
51 @dataclass
52 class HydraConfig:
53 run: Dict[str, Any] = field(default_factory=lambda: {"dir": "."})
54 job_logging: Dict[str, Any] = field(default_factory=lambda: {"root": {"handlers": None}})
55
56
57 @dataclass
58 class NemoConfig:
59 name: str = MISSING
60 model: ModelConfig = MISSING
61 trainer: config.TrainerConfig = config.TrainerConfig(
62 strategy="ddp", enable_checkpointing=False, logger=False, log_every_n_steps=1, accelerator='gpu'
63 )
64 exp_manager: Optional[Any] = exp_manager.ExpManagerConfig()
65 hydra: HydraConfig = HydraConfig()
66
67
68 class ModelConfigBuilder:
69 def __init__(self, model_cfg: ModelConfig):
70 """
71 Base class for any Model Config Builder.
72
73 A Model Config Builder is a utility class that accepts a ModelConfig dataclass,
74 and via a set of utility methods (that are implemented by the subclassed ModelConfigBuilder),
75 builds a finalized ModelConfig that can be supplied to a NemoModel dataclass as
76 the `model` component.
77
78 Subclasses *must* implement the private method `_finalize_cfg`.
79 Inside this method, they must update `self.model_cfg` with all interdependent config
80 options that need to be set (either updated by user explicitly or with their default value).
81
82 The updated model config must then be preserved in `self.model_cfg`.
83
84 Example:
85 # Create the config builder
86 config_builder = <subclass>ModelConfigBuilder()
87
88 # Update the components of the config that are modifiable
89 config_builder.set_X(X)
90 config_builder.set_Y(Y)
91
92 # Create a "finalized" config dataclass that will contain all the updates
93 # that were specified by the builder
94 model_config = config_builder.build()
95
96 # Use model config as is (or further update values), then create a new Model
97 model = nemo.<domain>.models.<ModelName>Model(cfg=model_config, trainer=Trainer())
98
99 Supported build methods:
100 - set_train_ds: All model configs can accept a subclass of `DatasetConfig` as their
101 training config. Subclasses can override this method to enable auto-complete
102 by replacing `Optional[DatasetConfig]` with `Optional[<subclass of DatasetConfig>]`.
103
104 - set_validation_ds: All model configs can accept a subclass of `DatasetConfig` as their
105 validation config. Subclasses can override this method to enable auto-complete
106 by replacing `Optional[DatasetConfig]` with `Optional[<subclass of DatasetConfig>]`.
107
108 - set_test_ds: All model configs can accept a subclass of `DatasetConfig` as their
109 test config. Subclasses can override this method to enable auto-complete
110 by replacing `Optional[DatasetConfig]` with `Optional[<subclass of DatasetConfig>]`.
111
112 - set_optim: A build method that supports changes to the Optimizer (and optionally,
113 the Scheduler) used for training the model. The function accepts two inputs -
114
115 `cfg`: A subclass of `OptimizerParams` - any OptimizerParams subclass can be used,
116 in order to select an appropriate Optimizer. Examples: AdamParams.
117
118 `sched_cfg`: A subclass of `SchedulerParams` - any SchedulerParams subclass can be used,
119 in order to select an appropriate Scheduler. Examples: CosineAnnealingParams.
120 Note that this argument is optional.
121
122 - build(): The method which should return a "finalized" ModelConfig dataclass.
123 Subclasses *should* always override this method, and update the signature
124 of this method with the return type of the Dataclass, so that it enables
125 autocomplete for the user.
126
127 Example:
128 def build(self) -> EncDecCTCConfig:
129 return super().build()
130
131 Any additional build methods must be added by subclasses of ModelConfigBuilder.
132
133 Args:
134 model_cfg:
135 """
136 self.model_cfg = model_cfg
137 self.train_ds_cfg = None
138 self.validation_ds_cfg = None
139 self.test_ds_cfg = None
140 self.optim_cfg = None
141
142 def set_train_ds(self, cfg: Optional[DatasetConfig] = None):
143 self.model_cfg.train_ds = cfg
144
145 def set_validation_ds(self, cfg: Optional[DatasetConfig] = None):
146 self.model_cfg.validation_ds = cfg
147
148 def set_test_ds(self, cfg: Optional[DatasetConfig] = None):
149 self.model_cfg.test_ds = cfg
150
151 def set_optim(self, cfg: config.OptimizerParams, sched_cfg: Optional[config.SchedulerParams] = None):
152 @dataclass
153 class WrappedOptimConfig(OptimConfig, cfg.__class__):
154 pass
155
156 # Setup optim
157 optim_name = cfg.__class__.__name__.replace("Params", "").lower()
158 wrapped_cfg = WrappedOptimConfig(name=optim_name, sched=None, **vars(cfg))
159
160 if sched_cfg is not None:
161
162 @dataclass
163 class WrappedSchedConfig(SchedConfig, sched_cfg.__class__):
164 pass
165
166 # Setup scheduler
167 sched_name = sched_cfg.__class__.__name__.replace("Params", "")
168 wrapped_sched_cfg = WrappedSchedConfig(name=sched_name, **vars(sched_cfg))
169
170 wrapped_cfg.sched = wrapped_sched_cfg
171
172 self.model_cfg.optim = wrapped_cfg
173
174 def _finalize_cfg(self):
175 raise NotImplementedError()
176
177 def build(self) -> ModelConfig:
178 # validate config
179 self._finalize_cfg()
180
181 return self.model_cfg
182
[end of nemo/core/config/modelPT.py]
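The build flow described in the `ModelConfigBuilder` docstring above can be sketched with simplified stand-in dataclasses. The `Toy*` names are illustrative only, not NeMo classes, and a real subclass would implement `_finalize_cfg` to reconcile interdependent options before returning the finalized config.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ToyDatasetConfig:
    # Stand-in for nemo.core.classes.dataset.DatasetConfig.
    batch_size: int = 32


@dataclass
class ToyModelConfig:
    # Stand-in for the ModelConfig dataclass above.
    train_ds: Optional[ToyDatasetConfig] = None
    validation_ds: Optional[ToyDatasetConfig] = None


class ToyModelConfigBuilder:
    def __init__(self, model_cfg: ToyModelConfig):
        self.model_cfg = model_cfg

    def set_train_ds(self, cfg: Optional[ToyDatasetConfig] = None):
        self.model_cfg.train_ds = cfg

    def set_validation_ds(self, cfg: Optional[ToyDatasetConfig] = None):
        self.model_cfg.validation_ds = cfg

    def _finalize_cfg(self):
        # A real subclass would update interdependent config options here.
        pass

    def build(self) -> ToyModelConfig:
        self._finalize_cfg()
        return self.model_cfg


# Update the modifiable components, then produce the finalized config.
builder = ToyModelConfigBuilder(ToyModelConfig())
builder.set_train_ds(ToyDatasetConfig(batch_size=16))
cfg = builder.build()
```

The finalized `cfg` would then be supplied as the `model` component of a `NemoConfig`-style dataclass.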
[start of nemo/utils/exp_manager.py]
1 # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import glob
16 import os
17 import subprocess
18 import sys
19 import time
20 import warnings
21 from dataclasses import dataclass
22 from datetime import timedelta
23 from pathlib import Path
24 from shutil import copy, move
25 from typing import Any, Dict, List, Optional, Tuple, Union
26
27 import pytorch_lightning
28 import torch
29 from hydra.core.hydra_config import HydraConfig
30 from hydra.utils import get_original_cwd
31 from omegaconf import DictConfig, OmegaConf, open_dict
32 from pytorch_lightning.callbacks import Callback, ModelCheckpoint
33 from pytorch_lightning.callbacks.early_stopping import EarlyStopping
34 from pytorch_lightning.callbacks.timer import Interval, Timer
35 from pytorch_lightning.loggers import MLFlowLogger, TensorBoardLogger, WandbLogger
36 from pytorch_lightning.loops import _TrainingEpochLoop
37 from pytorch_lightning.strategies.ddp import DDPStrategy
38
39 from nemo.collections.common.callbacks import EMA
40 from nemo.constants import NEMO_ENV_VARNAME_TESTING, NEMO_ENV_VARNAME_VERSION
41 from nemo.utils import logging, timers
42 from nemo.utils.app_state import AppState
43 from nemo.utils.callbacks import NeMoModelCheckpoint, PreemptionCallback
44 from nemo.utils.env_var_parsing import get_envbool
45 from nemo.utils.exceptions import NeMoBaseException
46 from nemo.utils.get_rank import is_global_rank_zero
47 from nemo.utils.lightning_logger_patch import add_filehandlers_to_pl_logger
48 from nemo.utils.loggers import ClearMLLogger, ClearMLParams, DLLogger, DLLoggerParams, MLFlowParams
49 from nemo.utils.model_utils import uninject_model_parallel_rank
50
51
52 class NotFoundError(NeMoBaseException):
53 """ Raised when a file or folder is not found"""
54
55
56 class LoggerMisconfigurationError(NeMoBaseException):
57 """ Raised when a mismatch between trainer.logger and exp_manager occurs"""
58
59 def __init__(self, message):
60 message = (
61 message
62             + " You can disable lightning's trainer from creating a logger by passing logger=False to its constructor."
63 )
64 super().__init__(message)
65
66
67 class CheckpointMisconfigurationError(NeMoBaseException):
68 """ Raised when a mismatch between trainer.callbacks and exp_manager occurs"""
69
70
71 @dataclass
72 class EarlyStoppingParams:
73 monitor: str = "val_loss" # The metric that early stopping should consider.
74     mode: str = "min"  # whether early stopping should look for an increase or a decrease in the monitored metric.
75 min_delta: float = 0.001 # smallest change to consider as improvement.
76     patience: int = 10  # how many (consecutive) validation cycles to wait with no improvement before stopping training.
77 verbose: bool = True
78 strict: bool = True
79 check_finite: bool = True
80 stopping_threshold: Optional[float] = None
81 divergence_threshold: Optional[float] = None
82 check_on_train_epoch_end: Optional[bool] = None
83 log_rank_zero_only: bool = False
84
85
86 @dataclass
87 class CallbackParams:
88 filepath: Optional[str] = None # Deprecated
89 dirpath: Optional[str] = None # If None, exp_manager will attempt to handle the filepath
90 filename: Optional[str] = None # If None, exp_manager will attempt to handle the filepath
91 monitor: Optional[str] = "val_loss"
92 verbose: Optional[bool] = True
93 save_last: Optional[bool] = True
94 save_top_k: Optional[int] = 3
95 save_weights_only: Optional[bool] = False
96 mode: Optional[str] = "min"
97 auto_insert_metric_name: bool = True
98 every_n_epochs: Optional[int] = 1
99 every_n_train_steps: Optional[int] = None
100 train_time_interval: Optional[str] = None
101 prefix: Optional[str] = None # If None, exp_manager will attempt to handle the filepath
102 postfix: str = ".nemo"
103 save_best_model: bool = False
104 always_save_nemo: bool = False
105     save_nemo_on_train_end: Optional[bool] = True  # Whether to automatically save the .nemo file during the on_train_end hook
106 model_parallel_size: Optional[int] = None # tensor parallel size * pipeline parallel size
107 save_on_train_epoch_end: Optional[bool] = False # Save after training, not after validation
108
109
110 @dataclass
111 class StepTimingParams:
112 reduction: Optional[str] = "mean"
113 # if True torch.cuda.synchronize() is called on start/stop
114 sync_cuda: Optional[bool] = False
115 # if positive, defines the size of a sliding window for computing mean
116 buffer_size: Optional[int] = 1
117
118
119 @dataclass
120 class EMAParams:
121 enable: Optional[bool] = False
122 decay: Optional[float] = 0.999
123 cpu_offload: Optional[bool] = False
124 validate_original_weights: Optional[bool] = False
125 every_n_steps: int = 1
126
127
128 @dataclass
129 class ExpManagerConfig:
130 """Experiment Manager config for validation of passed arguments.
131 """
132
133 # Log dir creation parameters
134 explicit_log_dir: Optional[str] = None
135 exp_dir: Optional[str] = None
136 name: Optional[str] = None
137 version: Optional[str] = None
138 use_datetime_version: Optional[bool] = True
139 resume_if_exists: Optional[bool] = False
140 resume_past_end: Optional[bool] = False
141 resume_ignore_no_checkpoint: Optional[bool] = False
142 resume_from_checkpoint: Optional[str] = None
143 # Logging parameters
144 create_tensorboard_logger: Optional[bool] = True
145 summary_writer_kwargs: Optional[Dict[Any, Any]] = None
146 create_wandb_logger: Optional[bool] = False
147 wandb_logger_kwargs: Optional[Dict[Any, Any]] = None
148 create_mlflow_logger: Optional[bool] = False
149 mlflow_logger_kwargs: Optional[MLFlowParams] = MLFlowParams()
150 create_dllogger_logger: Optional[bool] = False
151 dllogger_logger_kwargs: Optional[DLLoggerParams] = DLLoggerParams()
152 create_clearml_logger: Optional[bool] = False
153 clearml_logger_kwargs: Optional[ClearMLParams] = ClearMLParams()
154 # Checkpointing parameters
155 create_checkpoint_callback: Optional[bool] = True
156 checkpoint_callback_params: Optional[CallbackParams] = CallbackParams()
157 create_early_stopping_callback: Optional[bool] = False
158 early_stopping_callback_params: Optional[EarlyStoppingParams] = EarlyStoppingParams()
159 create_preemption_callback: Optional[bool] = True
160 # Additional exp_manager arguments
161 files_to_copy: Optional[List[str]] = None
162 # logs timing of train/val/test steps
163 log_step_timing: Optional[bool] = True
164 step_timing_kwargs: Optional[StepTimingParams] = StepTimingParams()
165 # Configures creation of log files for different ranks
166 log_local_rank_0_only: Optional[bool] = False
167 log_global_rank_0_only: Optional[bool] = False
168 # disable initial validation when resuming from a checkpoint saved during validation
169 disable_validation_on_resume: Optional[bool] = True
170 ema: Optional[EMAParams] = EMAParams()
171 # Wall clock time limit
172 max_time_per_run: Optional[str] = None
173     # time (in seconds) to sleep non-zero ranks during initialization
174 seconds_to_sleep: float = 5
175
176
177 class TimingCallback(Callback):
178 """
179 Logs execution time of train/val/test steps
180 """
181
182     def __init__(self, timer_kwargs=None):
183         self.timer = timers.NamedTimer(**(timer_kwargs or {}))
184
185 def _on_batch_start(self, name):
186 # reset only if we do not return mean of a sliding window
187 if self.timer.buffer_size <= 0:
188 self.timer.reset(name)
189
190 self.timer.start(name)
191
192 def _on_batch_end(self, name, pl_module):
193 self.timer.stop(name)
194         # Set `batch_size=1` as a workaround (WAR) for `dataloader_iter`; the batch size is not used for any metric
195 pl_module.log(
196 name + ' in s',
197 self.timer[name],
198 on_step=True,
199 on_epoch=False,
200 batch_size=1,
201 prog_bar=(name == "train_step_timing"),
202 )
203
204 def on_train_batch_start(self, trainer, pl_module, batch, batch_idx):
205 self._on_batch_start("train_step_timing")
206
207 def on_train_batch_end(self, trainer, pl_module, outputs, batch, batch_idx):
208 self._on_batch_end("train_step_timing", pl_module)
209
210 def on_validation_batch_start(self, trainer, pl_module, batch, batch_idx, dataloader_idx=0):
211 self._on_batch_start("validation_step_timing")
212
213 def on_validation_batch_end(self, trainer, pl_module, outputs, batch, batch_idx, dataloader_idx=0):
214 self._on_batch_end("validation_step_timing", pl_module)
215
216 def on_test_batch_start(self, trainer, pl_module, batch, batch_idx, dataloader_idx=0):
217 self._on_batch_start("test_step_timing")
218
219 def on_test_batch_end(self, trainer, pl_module, outputs, batch, batch_idx, dataloader_idx=0):
220 self._on_batch_end("test_step_timing", pl_module)
221
222 def on_before_backward(self, trainer, pl_module, loss):
223 self._on_batch_start("train_backward_timing")
224
225 def on_after_backward(self, trainer, pl_module):
226 self._on_batch_end("train_backward_timing", pl_module)
227
228
229 def exp_manager(trainer: 'pytorch_lightning.Trainer', cfg: Optional[Union[DictConfig, Dict]] = None) -> Optional[Path]:
230 """
231 exp_manager is a helper function used to manage folders for experiments. It follows the pytorch lightning paradigm
232 of exp_dir/model_or_experiment_name/version. If the lightning trainer has a logger, exp_manager will get exp_dir,
233 name, and version from the logger. Otherwise it will use the exp_dir and name arguments to create the logging
234 directory. exp_manager also allows for explicit folder creation via explicit_log_dir.
235
236     The version can be a datetime string or an integer. Datetime versioning can be disabled if use_datetime_version is set
237 to False. It optionally creates TensorBoardLogger, WandBLogger, DLLogger, MLFlowLogger, ClearMLLogger,
238 ModelCheckpoint objects from pytorch lightning.
239 It copies sys.argv, and git information if available to the logging directory. It creates a log file for each
240 process to log their output into.
241
242     exp_manager additionally has a resume feature (resume_if_exists) which can be used to continue training from
243     the constructed log_dir. When you need to continue training repeatedly (such as on a cluster where you need
244     multiple consecutive jobs), you need to avoid creating the version folders. Therefore, from v1.0.0, when
245     resume_if_exists is set to True, creating the version folders is skipped.
246
247 Args:
248 trainer (pytorch_lightning.Trainer): The lightning trainer.
249 cfg (DictConfig, dict): Can have the following keys:
250
251 - explicit_log_dir (str, Path): Can be used to override exp_dir/name/version folder creation. Defaults to
252 None, which will use exp_dir, name, and version to construct the logging directory.
253 - exp_dir (str, Path): The base directory to create the logging directory. Defaults to None, which logs to
254 ./nemo_experiments.
255 - name (str): The name of the experiment. Defaults to None which turns into "default" via name = name or
256 "default".
257 - version (str): The version of the experiment. Defaults to None which uses either a datetime string or
258 lightning's TensorboardLogger system of using version_{int}.
259 - use_datetime_version (bool): Whether to use a datetime string for version. Defaults to True.
260 - resume_if_exists (bool): Whether this experiment is resuming from a previous run. If True, it sets
261 trainer._checkpoint_connector._ckpt_path so that the trainer should auto-resume. exp_manager will move files
262 under log_dir to log_dir/run_{int}. Defaults to False. From v1.0.0, when resume_if_exists is True,
263 we would not create version folders to make it easier to find the log folder for next runs.
264 - resume_past_end (bool): exp_manager errors out if resume_if_exists is True and a checkpoint matching
265           ``*end.ckpt`` exists, indicating that a previous training run fully completed. Setting resume_past_end to
266           True disables this check and loads the ``*end.ckpt`` instead. Defaults to False.
267 - resume_ignore_no_checkpoint (bool): exp_manager errors out if resume_if_exists is True and no checkpoint
268 could be found. This behaviour can be disabled, in which case exp_manager will print a message and
269 continue without restoring, by setting resume_ignore_no_checkpoint to True. Defaults to False.
270 - resume_from_checkpoint (str): Can be used to specify a path to a specific checkpoint file to load from. This will
271 override any checkpoint found when resume_if_exists is True. Defaults to None.
272 - create_tensorboard_logger (bool): Whether to create a tensorboard logger and attach it to the pytorch
273 lightning trainer. Defaults to True.
274 - summary_writer_kwargs (dict): A dictionary of kwargs that can be passed to lightning's TensorboardLogger
275 class. Note that log_dir is passed by exp_manager and cannot exist in this dict. Defaults to None.
276         - create_wandb_logger (bool): Whether to create a Weights and Biases logger and attach it to the pytorch
277 lightning trainer. Defaults to False.
278 - wandb_logger_kwargs (dict): A dictionary of kwargs that can be passed to lightning's WandBLogger
279 class. Note that name and project are required parameters if create_wandb_logger is True.
280 Defaults to None.
281 - create_mlflow_logger (bool): Whether to create an MLFlow logger and attach it to the pytorch lightning
282           trainer. Defaults to False.
283 - mlflow_logger_kwargs (dict): optional parameters for the MLFlow logger
284         - create_dllogger_logger (bool): Whether to create a DLLogger logger and attach it to the pytorch lightning
285           trainer. Defaults to False.
286 - dllogger_logger_kwargs (dict): optional parameters for the DLLogger logger
287         - create_clearml_logger (bool): Whether to create a ClearML logger and attach it to the pytorch lightning
288           trainer. Defaults to False.
289 - clearml_logger_kwargs (dict): optional parameters for the ClearML logger
290 - create_checkpoint_callback (bool): Whether to create a ModelCheckpoint callback and attach it to the
291 pytorch lightning trainer. The ModelCheckpoint saves the top 3 models with the best "val_loss", the most
292 recent checkpoint under ``*last.ckpt``, and the final checkpoint after training completes under ``*end.ckpt``.
293 Defaults to True.
294 - create_early_stopping_callback (bool): Flag to decide if early stopping should be used to stop training. Default is False.
295 See EarlyStoppingParams dataclass above.
296 - create_preemption_callback (bool): Flag to decide whether to enable preemption callback to save checkpoints and exit training
297 immediately upon preemption. Default is True.
298 - files_to_copy (list): A list of files to copy to the experiment logging directory. Defaults to None which
299 copies no files.
300 - log_local_rank_0_only (bool): Whether to only create log files for local rank 0. Defaults to False.
301 Set this to True if you are using DDP with many GPUs and do not want many log files in your exp dir.
302 - log_global_rank_0_only (bool): Whether to only create log files for global rank 0. Defaults to False.
303 Set this to True if you are using DDP with many GPUs and do not want many log files in your exp dir.
304         - max_time_per_run (str): The maximum wall clock time *per run*. This is intended to be used on clusters where you want
305 a checkpoint to be saved after this specified time and be able to resume from that checkpoint. Defaults to None.
306         - seconds_to_sleep (float): seconds for which to sleep non-rank-0 processes. Used to give rank 0 enough time to initialize
307
308 returns:
309 log_dir (Path): The final logging directory where logging files are saved. Usually the concatenation of
310 exp_dir, name, and version.
311 """
312 # Add rank information to logger
313 # Note: trainer.global_rank and trainer.is_global_zero are not set until trainer.fit, so have to hack around it
314 local_rank = int(os.environ.get("LOCAL_RANK", 0))
315 global_rank = trainer.node_rank * trainer.num_devices + local_rank
316 logging.rank = global_rank
317
318 if cfg is None:
319 logging.error("exp_manager did not receive a cfg argument. It will be disabled.")
320 return
321 if trainer.fast_dev_run:
322 logging.info("Trainer was called with fast_dev_run. exp_manager will return without any functionality.")
323 return
324
325 # Ensure passed cfg is compliant with ExpManagerConfig
326 schema = OmegaConf.structured(ExpManagerConfig)
327 if isinstance(cfg, dict):
328 cfg = OmegaConf.create(cfg)
329 elif not isinstance(cfg, DictConfig):
330 raise ValueError(f"cfg was type: {type(cfg)}. Expected either a dict or a DictConfig")
331 cfg = OmegaConf.create(OmegaConf.to_container(cfg, resolve=True))
332 cfg = OmegaConf.merge(schema, cfg)
333
334 error_checks(trainer, cfg) # Ensures that trainer options are compliant with NeMo and exp_manager arguments
335
336 log_dir, exp_dir, name, version = get_log_dir(
337 trainer=trainer,
338 exp_dir=cfg.exp_dir,
339 name=cfg.name,
340 version=cfg.version,
341 explicit_log_dir=cfg.explicit_log_dir,
342 use_datetime_version=cfg.use_datetime_version,
343 resume_if_exists=cfg.resume_if_exists,
344 )
345
346 check_resume(
347 trainer,
348 log_dir,
349 cfg.resume_if_exists,
350 cfg.resume_past_end,
351 cfg.resume_ignore_no_checkpoint,
352 cfg.checkpoint_callback_params.dirpath,
353 cfg.resume_from_checkpoint,
354 )
355
356 checkpoint_name = name
357 # If name returned from get_log_dir is "", use cfg.name for checkpointing
358 if checkpoint_name is None or checkpoint_name == '':
359 checkpoint_name = cfg.name or "default"
360
361 # Set mlflow name if it's not set, before the main name is erased
362 if cfg.create_mlflow_logger and (not cfg.mlflow_logger_kwargs.get("experiment_name", None)):
363 cfg.mlflow_logger_kwargs.experiment_name = cfg.name
364 logging.warning(
365 'mlflow logger specified but no experiment name set. Using the same as Tensorboard: %s',
366 cfg.mlflow_logger_kwargs.experiment_name,
367 )
368
369 cfg.name = name # Used for configure_loggers so that the log_dir is properly set even if name is ""
370 cfg.version = version
371
372 # update app_state with log_dir, exp_dir, etc
373 app_state = AppState()
374 app_state.log_dir = log_dir
375 app_state.exp_dir = exp_dir
376 app_state.name = name
377 app_state.version = version
378 app_state.checkpoint_name = checkpoint_name
379 app_state.create_checkpoint_callback = cfg.create_checkpoint_callback
380 app_state.checkpoint_callback_params = cfg.checkpoint_callback_params
381
382 # Create the logging directory if it does not exist
383 os.makedirs(log_dir, exist_ok=True) # Cannot limit creation to global zero as all ranks write to own log file
384 logging.info(f'Experiments will be logged at {log_dir}')
385 trainer._default_root_dir = log_dir
386
387 if cfg.log_local_rank_0_only is True and cfg.log_global_rank_0_only is True:
388 raise ValueError(
389             "Cannot set both log_local_rank_0_only and log_global_rank_0_only to True. Please set either one or neither."
390 )
391
392 # This is set if the env var NEMO_TESTING is set to True.
393 nemo_testing = get_envbool(NEMO_ENV_VARNAME_TESTING, False)
394
395 # Handle logging to file
396 log_file = log_dir / f'nemo_log_globalrank-{global_rank}_localrank-{local_rank}.txt'
397 if cfg.log_local_rank_0_only is True and not nemo_testing:
398 if local_rank == 0:
399 logging.add_file_handler(log_file)
400 elif cfg.log_global_rank_0_only is True and not nemo_testing:
401 if global_rank == 0:
402 logging.add_file_handler(log_file)
403 else:
404 # Logs on all ranks.
405 logging.add_file_handler(log_file)
406
407 # For some reason, LearningRateLogger requires trainer to have a logger. Safer to create logger on all ranks
408 # not just global rank 0.
409 if (
410 cfg.create_tensorboard_logger
411 or cfg.create_wandb_logger
412 or cfg.create_mlflow_logger
413 or cfg.create_dllogger_logger
414 or cfg.create_clearml_logger
415 ):
416 configure_loggers(
417 trainer,
418 exp_dir,
419 log_dir,
420 cfg.name,
421 cfg.version,
422 cfg.checkpoint_callback_params,
423 cfg.create_tensorboard_logger,
424 cfg.summary_writer_kwargs,
425 cfg.create_wandb_logger,
426 cfg.wandb_logger_kwargs,
427 cfg.create_mlflow_logger,
428 cfg.mlflow_logger_kwargs,
429 cfg.create_dllogger_logger,
430 cfg.dllogger_logger_kwargs,
431 cfg.create_clearml_logger,
432 cfg.clearml_logger_kwargs,
433 )
434
435 # add loggers timing callbacks
436 if cfg.log_step_timing:
437 timing_callback = TimingCallback(timer_kwargs=cfg.step_timing_kwargs or {})
438 trainer.callbacks.insert(0, timing_callback)
439
440 if cfg.ema.enable:
441 ema_callback = EMA(
442 decay=cfg.ema.decay,
443 validate_original_weights=cfg.ema.validate_original_weights,
444 cpu_offload=cfg.ema.cpu_offload,
445 every_n_steps=cfg.ema.every_n_steps,
446 )
447 trainer.callbacks.append(ema_callback)
448
449 if cfg.create_early_stopping_callback:
450 early_stop_callback = EarlyStopping(**cfg.early_stopping_callback_params)
451 trainer.callbacks.append(early_stop_callback)
452
453 if cfg.create_checkpoint_callback:
454 configure_checkpointing(
455 trainer,
456 log_dir,
457 checkpoint_name,
458 cfg.resume_if_exists,
459 cfg.checkpoint_callback_params,
460 cfg.create_preemption_callback,
461 )
462
463 if cfg.disable_validation_on_resume:
464 # extend training loop to skip initial validation when resuming from checkpoint
465 configure_no_restart_validation_training_loop(trainer)
466 # Setup a stateless timer for use on clusters.
467 if cfg.max_time_per_run is not None:
468 found_ptl_timer = False
469 for idx, callback in enumerate(trainer.callbacks):
470 if isinstance(callback, Timer):
471 # NOTE: PTL does not expose a `trainer.max_time`. By the time we are in this function, PTL has already setup a timer if the user specifies `trainer.max_time` so best we can do is replace that.
472 # Working: If only `trainer.max_time` is set - it behaves as a normal PTL timer. If only `exp_manager.max_time_per_run` is set - it behaves as a StateLessTimer. If both are set, it also behaves as a StateLessTimer.
473 logging.warning(
474 f'Found a PTL Timer callback, replacing with a StatelessTimer callback. This will happen if you set trainer.max_time as well as exp_manager.max_time_per_run.'
475 )
476 trainer.callbacks[idx] = StatelessTimer(cfg.max_time_per_run)
477 found_ptl_timer = True
478 break
479
480 if not found_ptl_timer:
481 trainer.max_time = cfg.max_time_per_run
482 trainer.callbacks.append(StatelessTimer(cfg.max_time_per_run))
483
484 if is_global_rank_zero():
485 # Move files_to_copy to folder and add git information if present
486 if cfg.files_to_copy:
487 for _file in cfg.files_to_copy:
488 copy(Path(_file), log_dir)
489
490 # Create files for cmd args and git info
491 with open(log_dir / 'cmd-args.log', 'w', encoding='utf-8') as _file:
492 _file.write(" ".join(sys.argv))
493
494 # Try to get git hash
495 git_repo, git_hash = get_git_hash()
496 if git_repo:
497 with open(log_dir / 'git-info.log', 'w', encoding='utf-8') as _file:
498 _file.write(f'commit hash: {git_hash}')
499 _file.write(get_git_diff())
500
501 # Add err_file logging to global_rank zero
502 logging.add_err_file_handler(log_dir / 'nemo_error_log.txt')
503
504 # Add lightning file logging to global_rank zero
505 add_filehandlers_to_pl_logger(log_dir / 'lightning_logs.txt', log_dir / 'nemo_error_log.txt')
506
507 elif trainer.num_nodes * trainer.num_devices > 1:
508 # sleep other ranks so rank 0 can finish
509 # doing the initialization such as moving files
510 time.sleep(cfg.seconds_to_sleep)
511
512 return log_dir
513
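Editorial note: the rank-gating branches above (`log_local_rank_0_only`, `log_global_rank_0_only`, or logging on all ranks) can be sketched as a pure decision function. `should_add_file_handler` is an illustrative name, not part of NeMo:

```python
# Hypothetical helper mirroring the rank-gating branches above; not NeMo API.

def should_add_file_handler(
    global_rank: int,
    local_rank: int,
    log_local_rank_0_only: bool = False,
    log_global_rank_0_only: bool = False,
    nemo_testing: bool = False,
) -> bool:
    # NEMO_TESTING forces logging on all ranks, matching the
    # `and not nemo_testing` guards in the original branches.
    if nemo_testing:
        return True
    if log_local_rank_0_only:
        return local_rank == 0
    if log_global_rank_0_only:
        return global_rank == 0
    # Default: every rank writes its own log file.
    return True
```

In the real code the both-flags-True case is rejected earlier with a ValueError, so the sketch does not need to handle it.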
514
515 def error_checks(trainer: 'pytorch_lightning.Trainer', cfg: Optional[Union[DictConfig, Dict]] = None):
516 """
517 Checks that the passed trainer is compliant with NeMo and exp_manager's passed configuration. Checks that:
518 - Throws error when hydra has changed the working directory. This causes issues with lightning's DDP
519     - Throws error when trainer has loggers defined but create_tensorboard_logger or create_wandb_logger

520 or create_mlflow_logger or create_dllogger_logger is True
521 - Prints error messages when 1) run on multi-node and not Slurm, and 2) run on multi-gpu without DDP
522 """
523 if HydraConfig.initialized() and get_original_cwd() != os.getcwd():
524 raise ValueError(
525 "Hydra changed the working directory. This interferes with ExpManger's functionality. Please pass "
526 "hydra.run.dir=. to your python script."
527 )
528 if trainer.logger is not None and (
529 cfg.create_tensorboard_logger or cfg.create_wandb_logger or cfg.create_mlflow_logger
530 ):
531 raise LoggerMisconfigurationError(
532 "The pytorch lightning trainer that was passed to exp_manager contained a logger, and either "
533 f"create_tensorboard_logger: {cfg.create_tensorboard_logger} or create_wandb_logger: "
534             f"{cfg.create_wandb_logger} or create_mlflow_logger: {cfg.create_mlflow_logger} "
535             f"or create_dllogger_logger: {cfg.create_dllogger_logger} was set to True. "
536 "These can only be used if trainer does not already have a logger."
537 )
538 if trainer.num_nodes > 1 and not check_slurm(trainer):
539 logging.error(
540 "You are running multi-node training without SLURM handling the processes."
541 " Please note that this is not tested in NeMo and could result in errors."
542 )
543 if trainer.num_devices > 1 and not isinstance(trainer.strategy, DDPStrategy):
544 logging.error(
545         "You are running multi-gpu without ddp. Please note that this is not tested in NeMo and could result in "
546 "errors."
547 )
548
549
550 def check_resume(
551 trainer: 'pytorch_lightning.Trainer',
552 log_dir: str,
553 resume_if_exists: bool = False,
554 resume_past_end: bool = False,
555 resume_ignore_no_checkpoint: bool = False,
556 dirpath: str = None,
557 resume_from_checkpoint: str = None,
558 ):
559     """Checks that resume=True was used correctly with the arguments passed to exp_manager. Sets
560     trainer.ckpt_path as necessary. Returns nothing; the trainer is mutated in place:
561
562         - trainer.ckpt_path is set to the checkpoint to resume from, if one is found.
563         - On global rank zero, any files at the top level of log_dir are moved into a new
564           run_<n> subfolder so the resumed run starts from a clean log_dir.
565
566     Raises:
567         ValueError: If log_dir is not set, if more than one checkpoint matches *end.ckpt or
568             *last.ckpt, or if an *end.ckpt checkpoint exists and resume_past_end is False.
569         NotFoundError: If resume is True, resume_ignore_no_checkpoint is False, and no
570             checkpoints could be found.
571     """
572
573 if not log_dir:
574 raise ValueError(f"Resuming requires the log_dir {log_dir} to be passed to exp_manager")
575
576 checkpoint = None
577 if resume_from_checkpoint:
578 checkpoint = resume_from_checkpoint
579 if resume_if_exists:
580 # Use <log_dir>/checkpoints/ unless `dirpath` is set
581 checkpoint_dir = Path(dirpath) if dirpath else Path(Path(log_dir) / "checkpoints")
582
583 # when using distributed checkpointing, checkpoint_dir is a directory of directories
584 # we check for this here
585 dist_checkpoints = [d for d in list(checkpoint_dir.glob("*")) if d.is_dir()]
586 end_dist_checkpoints = [d for d in dist_checkpoints if d.match("*end")]
587 last_dist_checkpoints = [d for d in dist_checkpoints if d.match("*last")]
588
589 end_checkpoints = end_dist_checkpoints if end_dist_checkpoints else list(checkpoint_dir.rglob("*end.ckpt"))
590 last_checkpoints = last_dist_checkpoints if last_dist_checkpoints else list(checkpoint_dir.rglob("*last.ckpt"))
591
592 if not checkpoint_dir.exists() or (not len(end_checkpoints) > 0 and not len(last_checkpoints) > 0):
593 if resume_ignore_no_checkpoint:
594                 warn = f"There were no checkpoints found in checkpoint_dir or no checkpoint folder at checkpoint_dir: {checkpoint_dir}. "
595 if checkpoint is None:
596 warn += "Training from scratch."
597 elif checkpoint == resume_from_checkpoint:
598 warn += f"Training from {resume_from_checkpoint}."
599 logging.warning(warn)
600 else:
601 raise NotFoundError(
602                     f"There were no checkpoints found in checkpoint_dir or no checkpoint folder at checkpoint_dir: {checkpoint_dir}. Cannot resume."
603 )
604 elif len(end_checkpoints) > 0:
605 if resume_past_end:
606 if len(end_checkpoints) > 1:
607 if 'mp_rank' in str(end_checkpoints[0]):
608 checkpoint = end_checkpoints[0]
609 else:
610                         raise ValueError(f"Multiple checkpoints {end_checkpoints} match *end.ckpt.")
611 else:
612 raise ValueError(
613 f"Found {end_checkpoints[0]} indicating that the last training run has already completed."
614 )
615 elif len(last_checkpoints) > 1:
616 if 'mp_rank' in str(last_checkpoints[0]) or 'tp_rank' in str(last_checkpoints[0]):
617 checkpoint = last_checkpoints[0]
618 checkpoint = uninject_model_parallel_rank(checkpoint)
619 else:
620                 raise ValueError(f"Multiple checkpoints {last_checkpoints} match *last.ckpt.")
621 else:
622 checkpoint = last_checkpoints[0]
623
624 # PTL 2.0 supports ckpt_path instead of resume_from_checkpoint as the trainer flag
625 if checkpoint is not None:
626 trainer.ckpt_path = str(checkpoint)
627 logging.info(f'Resuming training from checkpoint: {trainer.ckpt_path}')
628
629 if is_global_rank_zero():
630 # Check to see if any files exist that need to be moved
631 files_to_move = []
632 if Path(log_dir).exists():
633 for child in Path(log_dir).iterdir():
634 if child.is_file():
635 files_to_move.append(child)
636
637 if len(files_to_move) > 0:
638 # Move old files to a new folder
639 other_run_dirs = Path(log_dir).glob("run_*")
640 run_count = 0
641 for fold in other_run_dirs:
642 if fold.is_dir():
643 run_count += 1
644 new_run_dir = Path(Path(log_dir) / f"run_{run_count}")
645 new_run_dir.mkdir()
646 for _file in files_to_move:
647 move(str(_file), str(new_run_dir))
648
649
650 def check_explicit_log_dir(
651 trainer: 'pytorch_lightning.Trainer', explicit_log_dir: Union[Path, str], exp_dir: str, name: str, version: str
652 ) -> Tuple[Path, str, str, str]:
653 """ Checks that the passed arguments are compatible with explicit_log_dir.
654
655 Returns:
656 log_dir (Path): the log_dir
657 exp_dir (str): the base exp_dir without name nor version
658 name (str): The name of the experiment
659 version (str): The version of the experiment
660
661 Raise:
662 LoggerMisconfigurationError
663 """
664 if trainer.logger is not None:
665 raise LoggerMisconfigurationError(
666 "The pytorch lightning trainer that was passed to exp_manager contained a logger and explicit_log_dir: "
667             f"{explicit_log_dir} was passed to exp_manager. Please remove the logger from the lightning trainer."
668 )
669 # Checking only (explicit_log_dir) vs (exp_dir and version).
670 # The `name` will be used as the actual name of checkpoint/archive.
671 if exp_dir or version:
672 logging.error(
673 f"exp_manager received explicit_log_dir: {explicit_log_dir} and at least one of exp_dir: {exp_dir}, "
674 f"or version: {version}. Please note that exp_dir, name, and version will be ignored."
675 )
676 if is_global_rank_zero() and Path(explicit_log_dir).exists():
677 logging.warning(f"Exp_manager is logging to {explicit_log_dir}, but it already exists.")
678 return Path(explicit_log_dir), str(explicit_log_dir), "", ""
679
680
681 def get_log_dir(
682 trainer: 'pytorch_lightning.Trainer',
683 exp_dir: str = None,
684 name: str = None,
685 version: str = None,
686 explicit_log_dir: str = None,
687 use_datetime_version: bool = True,
688 resume_if_exists: bool = False,
689 ) -> Tuple[Path, str, str, str]:
690     """
691     Obtains the log_dir used for exp_manager.
692
693     Args:
694         explicit_log_dir (str): The explicit path to the log folder. Defaults to None.
695         use_datetime_version (bool): Uses date and time as the version of the log folder. Defaults to True.
696         resume_if_exists (bool): whether resume_if_exists is enabled in the exp_manager config. When enabled, the
697             version folders would not get created.
698
699     Returns:
700         log_dir (Path): the log_dir
701         exp_dir (str): the base exp_dir without name nor version
702         name (str): The name of the experiment
703         version (str): The version of the experiment
704
705     Raises:
706         LoggerMisconfigurationError: If the trainer already has a logger and an incompatible explicit_log_dir, exp_dir, or name was also passed to exp_manager.
707     """
708 if explicit_log_dir: # If explicit log_dir was passed, short circuit
709 return check_explicit_log_dir(trainer, explicit_log_dir, exp_dir, name, version)
710
711 # Default exp_dir to ./nemo_experiments if None was passed
712 _exp_dir = exp_dir
713 if exp_dir is None:
714 _exp_dir = str(Path.cwd() / 'nemo_experiments')
715
716 # If the user has already defined a logger for the trainer, use the logger defaults for logging directory
717 if trainer.logger is not None:
718 if trainer.logger.save_dir:
719 if exp_dir:
720 raise LoggerMisconfigurationError(
721 "The pytorch lightning trainer that was passed to exp_manager contained a logger, the logger's "
722 f"save_dir was not None, and exp_dir ({exp_dir}) was not None. If trainer.logger.save_dir "
723 "exists, exp_manager will use trainer.logger.save_dir as the logging directory and exp_dir "
724 "must be None."
725 )
726 _exp_dir = trainer.logger.save_dir
727 if name:
728 raise LoggerMisconfigurationError(
729 "The pytorch lightning trainer that was passed to exp_manager contained a logger, and name: "
730 f"{name} was also passed to exp_manager. If the trainer contains a "
731 "logger, exp_manager will use trainer.logger.name, and name passed to exp_manager must be None."
732 )
733 name = trainer.logger.name
734 version = f"version_{trainer.logger.version}"
735 # Use user-defined exp_dir, project_name, exp_name, and versioning options
736 else:
737 name = name or "default"
738 version = version or os.environ.get(NEMO_ENV_VARNAME_VERSION, None)
739
740 if not version:
741 if resume_if_exists:
742 logging.warning(
743 "No version folders would be created under the log folder as 'resume_if_exists' is enabled."
744 )
745 version = None
746 elif is_global_rank_zero():
747 if use_datetime_version:
748 version = time.strftime('%Y-%m-%d_%H-%M-%S')
749 else:
750 tensorboard_logger = TensorBoardLogger(save_dir=Path(_exp_dir), name=name, version=version)
751 version = f"version_{tensorboard_logger.version}"
752 os.environ[NEMO_ENV_VARNAME_VERSION] = "" if version is None else version
753
754 log_dir = Path(_exp_dir) / Path(str(name)) / Path("" if version is None else str(version))
755 return log_dir, str(_exp_dir), name, version
756
757
758 def get_git_hash():
759 """
760 Helper function that tries to get the commit hash if running inside a git folder
761
762 returns:
763 Bool: Whether the git subprocess ran without error
764 str: git subprocess output or error message
765 """
766 try:
767 return (
768 True,
769 subprocess.check_output(['git', 'rev-parse', 'HEAD'], stderr=subprocess.STDOUT).decode(),
770 )
771 except subprocess.CalledProcessError as err:
772 return False, "{}\n".format(err.output.decode("utf-8"))
773
774
775 def get_git_diff():
776 """
777 Helper function that tries to get the git diff if running inside a git folder
778
779 returns:
780         str: the git diff output if the subprocess ran without error,
781             otherwise the error message
782 """
783 try:
784 return subprocess.check_output(['git', 'diff'], stderr=subprocess.STDOUT).decode()
785 except subprocess.CalledProcessError as err:
786 return "{}\n".format(err.output.decode("utf-8"))
787
788
789 def configure_loggers(
790 trainer: 'pytorch_lightning.Trainer',
791 exp_dir: [Path, str],
792 log_dir: [Path, str],
793 name: str,
794 version: str,
795 checkpoint_callback_params: dict,
796 create_tensorboard_logger: bool,
797 summary_writer_kwargs: dict,
798 create_wandb_logger: bool,
799 wandb_kwargs: dict,
800 create_mlflow_logger: bool,
801 mlflow_kwargs: dict,
802 create_dllogger_logger: bool,
803 dllogger_kwargs: dict,
804 create_clearml_logger: bool,
805 clearml_kwargs: dict,
806 ):
807 """
808 Creates TensorboardLogger and/or WandBLogger / MLFlowLogger / DLlogger / ClearMLLogger and attach them to trainer.
809 Raises ValueError if summary_writer_kwargs or wandb_kwargs are misconfigured.
810 """
811 # Potentially create tensorboard logger and/or WandBLogger / MLFlowLogger / DLLogger
812 logger_list = []
813 if create_tensorboard_logger:
814 if summary_writer_kwargs is None:
815 summary_writer_kwargs = {}
816 elif "log_dir" in summary_writer_kwargs:
817 raise ValueError(
818 "You cannot pass `log_dir` as part of `summary_writer_kwargs`. `log_dir` is handled by lightning's "
819 "TensorBoardLogger logger."
820 )
821 tensorboard_logger = TensorBoardLogger(save_dir=exp_dir, name=name, version=version, **summary_writer_kwargs)
822 logger_list.append(tensorboard_logger)
823 logging.info("TensorboardLogger has been set up")
824
825 if create_wandb_logger:
826 if wandb_kwargs is None:
827 wandb_kwargs = {}
828 if "name" not in wandb_kwargs and "project" not in wandb_kwargs:
829 raise ValueError("name and project are required for wandb_logger")
830
831 # Update the wandb save_dir
832 if wandb_kwargs.get('save_dir', None) is None:
833 wandb_kwargs['save_dir'] = exp_dir
834 os.makedirs(wandb_kwargs['save_dir'], exist_ok=True)
835 wandb_logger = WandbLogger(version=version, **wandb_kwargs)
836
837 logger_list.append(wandb_logger)
838 logging.info("WandBLogger has been set up")
839
840 if create_mlflow_logger:
841 mlflow_logger = MLFlowLogger(run_name=version, **mlflow_kwargs)
842
843 logger_list.append(mlflow_logger)
844 logging.info("MLFlowLogger has been set up")
845
846 if create_dllogger_logger:
847 dllogger_logger = DLLogger(**dllogger_kwargs)
848
849 logger_list.append(dllogger_logger)
850 logging.info("DLLogger has been set up")
851
852 if create_clearml_logger:
853 clearml_logger = ClearMLLogger(
854 clearml_cfg=clearml_kwargs,
855 log_dir=log_dir,
856 prefix=name,
857 save_best_model=checkpoint_callback_params.save_best_model,
858 )
859
860 logger_list.append(clearml_logger)
861 logging.info("ClearMLLogger has been set up")
862
863 trainer._logger_connector.configure_logger(logger_list)
864
865
866 def configure_checkpointing(
867 trainer: 'pytorch_lightning.Trainer',
868 log_dir: Path,
869 name: str,
870 resume: bool,
871 params: 'DictConfig',
872 create_preemption_callback: bool,
873 ):
874 """ Adds ModelCheckpoint to trainer. Raises CheckpointMisconfigurationError if trainer already has a ModelCheckpoint
875 callback
876 """
877 for callback in trainer.callbacks:
878 if isinstance(callback, ModelCheckpoint):
879 raise CheckpointMisconfigurationError(
880 "The pytorch lightning trainer that was passed to exp_manager contained a ModelCheckpoint "
881 "and create_checkpoint_callback was set to True. Please either set create_checkpoint_callback "
882 "to False, or remove ModelCheckpoint from the lightning trainer"
883 )
884 # Create the callback and attach it to trainer
885 if "filepath" in params:
886 if params.filepath is not None:
887 logging.warning("filepath is deprecated. Please switch to dirpath and filename instead")
888 if params.dirpath is None:
889 params.dirpath = Path(params.filepath).parent
890 if params.filename is None:
891 params.filename = Path(params.filepath).name
892 with open_dict(params):
893 del params["filepath"]
894 if params.dirpath is None:
895 params.dirpath = Path(log_dir / 'checkpoints')
896 if params.filename is None:
897 params.filename = f'{name}--{{{params.monitor}:.4f}}-{{epoch}}'
898 if params.prefix is None:
899 params.prefix = name
900 NeMoModelCheckpoint.CHECKPOINT_NAME_LAST = params.filename + '-last'
901
902 logging.debug(params.dirpath)
903 logging.debug(params.filename)
904 logging.debug(params.prefix)
905
906 if "val" in params.monitor:
907 if (
908 trainer.max_epochs is not None
909 and trainer.max_epochs != -1
910 and trainer.max_epochs < trainer.check_val_every_n_epoch
911 ):
912 logging.error(
913 "The checkpoint callback was told to monitor a validation value but trainer.max_epochs("
914 f"{trainer.max_epochs}) was less than trainer.check_val_every_n_epoch({trainer.check_val_every_n_epoch}"
915 f"). It is very likely this run will fail with ModelCheckpoint(monitor='{params.monitor}') not found "
916 "in the returned metrics. Please ensure that validation is run within trainer.max_epochs."
917 )
918 elif trainer.max_steps is not None and trainer.max_steps != -1:
919 logging.warning(
920 "The checkpoint callback was told to monitor a validation value and trainer's max_steps was set to "
921 f"{trainer.max_steps}. Please ensure that max_steps will run for at least "
922 f"{trainer.check_val_every_n_epoch} epochs to ensure that checkpointing will not error out."
923 )
924
925 checkpoint_callback = NeMoModelCheckpoint(n_resume=resume, **params)
926 checkpoint_callback.last_model_path = trainer.ckpt_path or ""
927 if 'mp_rank' in checkpoint_callback.last_model_path or 'tp_rank' in checkpoint_callback.last_model_path:
928 checkpoint_callback.last_model_path = uninject_model_parallel_rank(checkpoint_callback.last_model_path)
929 trainer.callbacks.append(checkpoint_callback)
930 if create_preemption_callback:
931         # Check if cuda is available as preemption is supported only on GPUs
932 if torch.cuda.is_available():
933 ## By default PreemptionCallback handles SIGTERM. To handle other signals pass the signal in the call as below:
934 ## PreemptionCallback(checkpoint_callback, signal.SIGCHLD)
935 preemption_callback = PreemptionCallback(checkpoint_callback)
936 trainer.callbacks.append(preemption_callback)
937 else:
938 logging.info("Preemption is supported only on GPUs, disabling preemption")
939
940
941 def check_slurm(trainer):
942 try:
943 return trainer.accelerator_connector.is_slurm_managing_tasks
944 except AttributeError:
945 return False
946
947
948 class StatelessTimer(Timer):
949 """Extension of PTL timers to be per run."""
950
951 def __init__(self, duration: timedelta = None, interval: str = Interval.step, verbose: bool = True,) -> None:
952 super().__init__(duration, interval, verbose)
953
954 # Override PTL Timer's state dict to not store elapsed time information so that we can restore and continue training.
955 def state_dict(self) -> Dict[str, Any]:
956 return {}
957
958 def load_state_dict(self, state_dict: Dict[str, Any]) -> None:
959 return
960
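Editorial note: the point of `StatelessTimer` is that an empty `state_dict` keeps elapsed time out of checkpoints, so each resumed run gets a fresh per-run time budget. A toy illustration of the same idea, independent of PTL (`ToyTimer` and `StatelessToyTimer` are hypothetical names):

```python
import time

class ToyTimer:
    """Toy stand-in for a run-duration timer; not PTL code."""

    def __init__(self, duration_s: float):
        self.duration_s = duration_s
        self.start = time.monotonic()

    def time_is_up(self) -> bool:
        return time.monotonic() - self.start >= self.duration_s

    def state_dict(self) -> dict:
        # A stateful timer would checkpoint its elapsed time...
        return {"elapsed": time.monotonic() - self.start}

class StatelessToyTimer(ToyTimer):
    # ...whereas a stateless one stores nothing and ignores restored state,
    # so every resumed run starts its clock at zero.
    def state_dict(self) -> dict:
        return {}

    def load_state_dict(self, state_dict: dict) -> None:
        return
```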
961
962 def configure_no_restart_validation_training_loop(trainer: pytorch_lightning.Trainer) -> None:
963 if type(trainer.fit_loop.epoch_loop) != _TrainingEpochLoop:
964 warnings.warn("Detected custom epoch loop. Skipping no validation on restart support.", UserWarning)
965 return
966 ## Pass trainer object to avoid trainer getting overwritten as None
967 loop = SkipResumeTrainingValidationLoop(trainer, trainer.min_steps, trainer.max_steps)
968 trainer.fit_loop.epoch_loop = loop
969
970
971 class SkipResumeTrainingValidationLoop(_TrainingEpochLoop):
972 """
973 Extend the PTL Epoch loop to skip validating when resuming.
974 This happens when resuming a checkpoint that has already run validation, but loading restores
975 the training state before validation has run.
976 """
977
978 def _should_check_val_fx(self) -> bool:
979 if self.restarting and self.global_step % self.trainer.val_check_batch == 0:
980 return False
981 return super()._should_check_val_fx()
982
983
984 def clean_exp_ckpt(exp_log_dir: Union[str, Path], remove_ckpt: bool = True, remove_nemo: bool = False):
985 """
986 Helper method that removes Pytorch Lightning .ckpt files or NeMo .nemo files from the checkpoint directory
987
988 Args:
989 exp_log_dir: str path to the root directory of the current experiment.
990 remove_ckpt: bool, whether to remove all *.ckpt files in the checkpoints directory.
991 remove_nemo: bool, whether to remove all *.nemo files in the checkpoints directory.
992 """
993 exp_log_dir = str(exp_log_dir)
994
995 if remove_ckpt:
996 logging.info("Deleting *.ckpt files ...")
997 ckpt_files = glob.glob(os.path.join(exp_log_dir, "checkpoints", "*.ckpt"))
998 for filepath in ckpt_files:
999 os.remove(filepath)
1000 logging.info(f"Deleted file : {filepath}")
1001
1002 if remove_nemo:
1003 logging.info("Deleting *.nemo files ...")
1004 nemo_files = glob.glob(os.path.join(exp_log_dir, "checkpoints", "*.nemo"))
1005 for filepath in nemo_files:
1006 os.remove(filepath)
1007 logging.info(f"Deleted file : {filepath}")
1008
[end of nemo/utils/exp_manager.py]
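Editorial note: exp_manager and configure_checkpointing above both call `uninject_model_parallel_rank` to turn a rank-specific checkpoint path (e.g. `.../mp_rank_00/model--last.ckpt`) into a rank-agnostic one before resuming. A simplified sketch of that idea, assuming the only rank markers are `mp_rank_NN` and `tp_rank_NN_pp_rank_NNN` path components (the real NeMo helper handles more cases):

```python
import re
from pathlib import PurePosixPath

# Assumed rank-directory patterns; illustrative, not the exact NeMo regex.
RANK_DIR = re.compile(r"mp_rank_\d+|tp_rank_\d+_pp_rank_\d+")

def uninject_rank_sketch(filepath: str) -> str:
    # Drop path components such as mp_rank_00 or tp_rank_00_pp_rank_000.
    parts = [p for p in PurePosixPath(filepath).parts if not RANK_DIR.fullmatch(p)]
    return str(PurePosixPath(*parts))
```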
[start of scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram.py]
1 # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 #
15
16 """
17 # This script would evaluate an N-gram language model trained with KenLM library (https://github.com/kpu/kenlm) in
18 # fusion with beam search decoders on top of a trained ASR model with CTC decoder. To evaluate a model with
19 # Transducer (RNN-T) decoder use another script 'scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram_transducer.py'.
20 # NeMo's beam search decoders are capable of using the KenLM's N-gram models
21 # to find the best candidates. This script supports both character level and BPE level
22 # encodings and models, which are detected automatically from the type of the model.
23 # You may train the LM model with 'scripts/asr_language_modeling/ngram_lm/train_kenlm.py'.
24
25 # Config Help
26
27 To discover all arguments of the script, please run :
28 python eval_beamsearch_ngram.py --help
29 python eval_beamsearch_ngram.py --cfg job
30
31 # USAGE
32
33 python eval_beamsearch_ngram.py nemo_model_file=<path to the .nemo file of the model> \
34 input_manifest=<path to the evaluation JSON manifest file> \
35 kenlm_model_file=<path to the binary KenLM model> \
36 beam_width=[<list of the beam widths, separated with commas>] \
37 beam_alpha=[<list of the beam alphas, separated with commas>] \
38 beam_beta=[<list of the beam betas, separated with commas>] \
39 preds_output_folder=<optional folder to store the predictions> \
40 probs_cache_file=null \
41 decoding_mode=beamsearch_ngram
42 ...
43
44
45 # Grid Search for Hyper parameters
46
47 For grid search, you can provide a list of arguments as follows -
48
49 beam_width=[4,8,16,....] \
50 beam_alpha=[-2.0,-1.0,...,1.0,2.0] \
51 beam_beta=[-1.0,-0.5,0.0,...,1.0] \
52
53 # You may find more info on how to use this script at:
54 # https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/asr_language_modeling.html
55
56 """
57
58
59 import contextlib
60 import json
61 import os
62 import pickle
63 from dataclasses import dataclass, field, is_dataclass
64 from pathlib import Path
65 from typing import List, Optional
66
67 import editdistance
68 import numpy as np
69 import torch
70 from omegaconf import MISSING, OmegaConf
71 from sklearn.model_selection import ParameterGrid
72 from tqdm.auto import tqdm
73
74 import nemo.collections.asr as nemo_asr
75 from nemo.collections.asr.models import EncDecHybridRNNTCTCModel
76 from nemo.collections.asr.parts.submodules import ctc_beam_decoding
77 from nemo.collections.asr.parts.utils.transcribe_utils import PunctuationCapitalization, TextProcessingConfig
78 from nemo.core.config import hydra_runner
79 from nemo.utils import logging
80
81 # fmt: off
82
83
84 @dataclass
85 class EvalBeamSearchNGramConfig:
86 """
87 Evaluate an ASR model with beam search decoding and n-gram KenLM language model.
88 """
89     # The path of the '.nemo' file of the ASR model or the name of a pretrained model (ngc / huggingface)
90 nemo_model_file: str = MISSING
91
92 # File paths
93 input_manifest: str = MISSING # The manifest file of the evaluation set
94 kenlm_model_file: Optional[str] = None # The path of the KenLM binary model file
95 preds_output_folder: Optional[str] = None # The optional folder where the predictions are stored
96 probs_cache_file: Optional[str] = None # The cache file for storing the logprobs of the model
97
98 # Parameters for inference
99 acoustic_batch_size: int = 16 # The batch size to calculate log probabilities
100 beam_batch_size: int = 128 # The batch size to be used for beam search decoding
101 device: str = "cuda" # The device to load the model onto to calculate log probabilities
102 use_amp: bool = False # Whether to use AMP if available to calculate log probabilities
103
104 # Beam Search hyperparameters
105
106 # The decoding scheme to be used for evaluation.
107 # Can be one of ["greedy", "beamsearch", "beamsearch_ngram"]
108 decoding_mode: str = "beamsearch_ngram"
109
110 beam_width: List[int] = field(default_factory=lambda: [128]) # The width or list of the widths for the beam search decoding
111 beam_alpha: List[float] = field(default_factory=lambda: [1.0]) # The alpha parameter or list of the alphas for the beam search decoding
112 beam_beta: List[float] = field(default_factory=lambda: [0.0]) # The beta parameter or list of the betas for the beam search decoding
113
114 decoding_strategy: str = "beam"
115     decoding: ctc_beam_decoding.BeamCTCInferConfig = field(default_factory=lambda: ctc_beam_decoding.BeamCTCInferConfig(beam_size=128))
116
117     text_processing: Optional[TextProcessingConfig] = field(default_factory=lambda: TextProcessingConfig(
118         punctuation_marks = ".,?",
119         separate_punctuation = False,
120         do_lowercase = False,
121         rm_punctuation = False,
122     ))
123 # fmt: on
124
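Editorial note: the script sweeps every combination of `beam_width`, `beam_alpha`, and `beam_beta` using sklearn's `ParameterGrid`; the same Cartesian sweep can be sketched with only the standard library (`hp_grid` is an illustrative name):

```python
import itertools
from typing import Dict, Iterable, Iterator

def hp_grid(
    beam_widths: Iterable[int],
    beam_alphas: Iterable[float],
    beam_betas: Iterable[float],
) -> Iterator[Dict]:
    # Every (width, alpha, beta) combination is yielded exactly once,
    # matching ParameterGrid's behavior modulo iteration order.
    for width, alpha, beta in itertools.product(beam_widths, beam_alphas, beam_betas):
        yield {"beam_width": width, "beam_alpha": alpha, "beam_beta": beta}
```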
125
126 def beam_search_eval(
127 model: nemo_asr.models.ASRModel,
128 cfg: EvalBeamSearchNGramConfig,
129 all_probs: List[torch.Tensor],
130 target_transcripts: List[str],
131 preds_output_file: str = None,
132 lm_path: str = None,
133 beam_alpha: float = 1.0,
134 beam_beta: float = 0.0,
135 beam_width: int = 128,
136 beam_batch_size: int = 128,
137 progress_bar: bool = True,
138 punctuation_capitalization: PunctuationCapitalization = None,
139 ):
140 level = logging.getEffectiveLevel()
141 logging.setLevel(logging.CRITICAL)
142 # Reset config
143 model.change_decoding_strategy(None)
144
145 # Override the beam search config with current search candidate configuration
146 cfg.decoding.beam_size = beam_width
147 cfg.decoding.beam_alpha = beam_alpha
148 cfg.decoding.beam_beta = beam_beta
149 cfg.decoding.return_best_hypothesis = False
150 cfg.decoding.kenlm_path = cfg.kenlm_model_file
151
152 # Update model's decoding strategy config
153 model.cfg.decoding.strategy = cfg.decoding_strategy
154 model.cfg.decoding.beam = cfg.decoding
155
156 # Update model's decoding strategy
157 if isinstance(model, EncDecHybridRNNTCTCModel):
158 model.change_decoding_strategy(model.cfg.decoding, decoder_type='ctc')
159 decoding = model.ctc_decoding
160 else:
161 model.change_decoding_strategy(model.cfg.decoding)
162 decoding = model.decoding
163 logging.setLevel(level)
164
165 wer_dist_first = cer_dist_first = 0
166 wer_dist_best = cer_dist_best = 0
167 words_count = 0
168 chars_count = 0
169 sample_idx = 0
170 if preds_output_file:
171 out_file = open(preds_output_file, 'w', encoding='utf_8', newline='\n')
172
173 if progress_bar:
174 it = tqdm(
175 range(int(np.ceil(len(all_probs) / beam_batch_size))),
176 desc=f"Beam search decoding with width={beam_width}, alpha={beam_alpha}, beta={beam_beta}",
177 ncols=120,
178 )
179 else:
180 it = range(int(np.ceil(len(all_probs) / beam_batch_size)))
181 for batch_idx in it:
182 # disabling type checking
183 probs_batch = all_probs[batch_idx * beam_batch_size : (batch_idx + 1) * beam_batch_size]
184 probs_lens = torch.tensor([prob.shape[0] for prob in probs_batch])
185 with torch.no_grad():
186 packed_batch = torch.zeros(len(probs_batch), max(probs_lens), probs_batch[0].shape[-1], device='cpu')
187
188 for prob_index in range(len(probs_batch)):
189 packed_batch[prob_index, : probs_lens[prob_index], :] = torch.tensor(
190 probs_batch[prob_index], device=packed_batch.device, dtype=packed_batch.dtype
191 )
192
193 _, beams_batch = decoding.ctc_decoder_predictions_tensor(
194 packed_batch, decoder_lengths=probs_lens, return_hypotheses=True,
195 )
196
197 for beams_idx, beams in enumerate(beams_batch):
198 target = target_transcripts[sample_idx + beams_idx]
199 target_split_w = target.split()
200 target_split_c = list(target)
201 words_count += len(target_split_w)
202 chars_count += len(target_split_c)
203 wer_dist_min = cer_dist_min = 10000
204 for candidate_idx, candidate in enumerate(beams): # type: (int, ctc_beam_decoding.rnnt_utils.Hypothesis)
205 pred_text = candidate.text
206 if cfg.text_processing.do_lowercase:
207 pred_text = punctuation_capitalization.do_lowercase([pred_text])[0]
208 if cfg.text_processing.rm_punctuation:
209 pred_text = punctuation_capitalization.rm_punctuation([pred_text])[0]
210 if cfg.text_processing.separate_punctuation:
211 pred_text = punctuation_capitalization.separate_punctuation([pred_text])[0]
212 pred_split_w = pred_text.split()
213 wer_dist = editdistance.eval(target_split_w, pred_split_w)
214 pred_split_c = list(pred_text)
215 cer_dist = editdistance.eval(target_split_c, pred_split_c)
216
217 wer_dist_min = min(wer_dist_min, wer_dist)
218 cer_dist_min = min(cer_dist_min, cer_dist)
219
220 if candidate_idx == 0:
221 # first candidate
222 wer_dist_first += wer_dist
223 cer_dist_first += cer_dist
224
225 score = candidate.score
226 if preds_output_file:
227 out_file.write('{}\t{}\n'.format(pred_text, score))
228 wer_dist_best += wer_dist_min
229 cer_dist_best += cer_dist_min
230 sample_idx += len(probs_batch)
231
232 if preds_output_file:
233 out_file.close()
234 logging.info(f"Stored the predictions of beam search decoding at '{preds_output_file}'.")
235
236 if lm_path:
237 logging.info(
238 'WER/CER with beam search decoding and N-gram model = {:.2%}/{:.2%}'.format(
239 wer_dist_first / words_count, cer_dist_first / chars_count
240 )
241 )
242 else:
243 logging.info(
244 'WER/CER with beam search decoding = {:.2%}/{:.2%}'.format(
245 wer_dist_first / words_count, cer_dist_first / chars_count
246 )
247 )
248 logging.info(
249         'Oracle WER/CER in candidates with perfect LM = {:.2%}/{:.2%}'.format(
250 wer_dist_best / words_count, cer_dist_best / chars_count
251 )
252 )
253 logging.info(f"=================================================================================")
254
255 return wer_dist_first / words_count, cer_dist_first / chars_count
256
257
258 @hydra_runner(config_path=None, config_name='EvalBeamSearchNGramConfig', schema=EvalBeamSearchNGramConfig)
259 def main(cfg: EvalBeamSearchNGramConfig):
260 logging.warning("This file will be renamed to eval_beamsearch_ngram_ctc.py in the future NeMo (1.21) release.")
261 if is_dataclass(cfg):
262 cfg = OmegaConf.structured(cfg) # type: EvalBeamSearchNGramConfig
263
264 valid_decoding_modes = ["greedy", "beamsearch", "beamsearch_ngram"]
265 if cfg.decoding_mode not in valid_decoding_modes:
266 raise ValueError(
267 f"Given decoding_mode={cfg.decoding_mode} is invalid. Available options are :\n" f"{valid_decoding_modes}"
268 )
269
270 if cfg.nemo_model_file.endswith('.nemo'):
271 asr_model = nemo_asr.models.ASRModel.restore_from(cfg.nemo_model_file, map_location=torch.device(cfg.device))
272 else:
273 logging.warning(
274 "nemo_model_file does not end with .nemo, therefore trying to load a pretrained model with this name."
275 )
276 asr_model = nemo_asr.models.ASRModel.from_pretrained(
277 cfg.nemo_model_file, map_location=torch.device(cfg.device)
278 )
279
280 target_transcripts = []
281 manifest_dir = Path(cfg.input_manifest).parent
282 with open(cfg.input_manifest, 'r', encoding='utf_8') as manifest_file:
283 audio_file_paths = []
284 for line in tqdm(manifest_file, desc=f"Reading Manifest {cfg.input_manifest} ...", ncols=120):
285 data = json.loads(line)
286 audio_file = Path(data['audio_filepath'])
287 if not audio_file.is_file() and not audio_file.is_absolute():
288 audio_file = manifest_dir / audio_file
289 target_transcripts.append(data['text'])
290 audio_file_paths.append(str(audio_file.absolute()))
291
292 punctuation_capitalization = PunctuationCapitalization(cfg.text_processing.punctuation_marks)
293 if cfg.text_processing.do_lowercase:
294 target_transcripts = punctuation_capitalization.do_lowercase(target_transcripts)
295 if cfg.text_processing.rm_punctuation:
296 target_transcripts = punctuation_capitalization.rm_punctuation(target_transcripts)
297 if cfg.text_processing.separate_punctuation:
298 target_transcripts = punctuation_capitalization.separate_punctuation(target_transcripts)
299
300 if cfg.probs_cache_file and os.path.exists(cfg.probs_cache_file):
301 logging.info(f"Found a pickle file of probabilities at '{cfg.probs_cache_file}'.")
302 logging.info(f"Loading the cached pickle file of probabilities from '{cfg.probs_cache_file}' ...")
303 with open(cfg.probs_cache_file, 'rb') as probs_file:
304 all_probs = pickle.load(probs_file)
305
306 if len(all_probs) != len(audio_file_paths):
307 raise ValueError(
308 f"The number of samples in the probabilities file '{cfg.probs_cache_file}' does not "
309 f"match the manifest file. You may need to delete the probabilities cached file."
310 )
311 else:
312
313 @contextlib.contextmanager
314 def default_autocast():
315 yield
316
317 if cfg.use_amp:
318 if torch.cuda.is_available() and hasattr(torch.cuda, 'amp') and hasattr(torch.cuda.amp, 'autocast'):
319 logging.info("AMP is enabled!\n")
320 autocast = torch.cuda.amp.autocast
321
322 else:
323 autocast = default_autocast
324 else:
325
326 autocast = default_autocast
327
328 with autocast():
329 with torch.no_grad():
330 if isinstance(asr_model, EncDecHybridRNNTCTCModel):
331 asr_model.cur_decoder = 'ctc'
332 all_logits = asr_model.transcribe(audio_file_paths, batch_size=cfg.acoustic_batch_size, logprobs=True)
333
334 all_probs = all_logits
335 if cfg.probs_cache_file:
336 logging.info(f"Writing pickle files of probabilities at '{cfg.probs_cache_file}'...")
337 with open(cfg.probs_cache_file, 'wb') as f_dump:
338 pickle.dump(all_probs, f_dump)
339
340 wer_dist_greedy = 0
341 cer_dist_greedy = 0
342 words_count = 0
343 chars_count = 0
344 for batch_idx, probs in enumerate(all_probs):
345 preds = np.argmax(probs, axis=1)
346 preds_tensor = torch.tensor(preds, device='cpu').unsqueeze(0)
347 if isinstance(asr_model, EncDecHybridRNNTCTCModel):
348 pred_text = asr_model.ctc_decoding.ctc_decoder_predictions_tensor(preds_tensor)[0][0]
349 else:
350 pred_text = asr_model._wer.decoding.ctc_decoder_predictions_tensor(preds_tensor)[0][0]
351
352 if cfg.text_processing.do_lowercase:
353 pred_text = punctuation_capitalization.do_lowercase([pred_text])[0]
354 if cfg.text_processing.rm_punctuation:
355 pred_text = punctuation_capitalization.rm_punctuation([pred_text])[0]
356 if cfg.text_processing.separate_punctuation:
357 pred_text = punctuation_capitalization.separate_punctuation([pred_text])[0]
358
359 pred_split_w = pred_text.split()
360 target_split_w = target_transcripts[batch_idx].split()
361 pred_split_c = list(pred_text)
362 target_split_c = list(target_transcripts[batch_idx])
363
364 wer_dist = editdistance.eval(target_split_w, pred_split_w)
365 cer_dist = editdistance.eval(target_split_c, pred_split_c)
366
367 wer_dist_greedy += wer_dist
368 cer_dist_greedy += cer_dist
369 words_count += len(target_split_w)
370 chars_count += len(target_split_c)
371
372 logging.info('Greedy WER/CER = {:.2%}/{:.2%}'.format(wer_dist_greedy / words_count, cer_dist_greedy / chars_count))
373
374 asr_model = asr_model.to('cpu')
375
376 if cfg.decoding_mode == "beamsearch_ngram":
377 if not os.path.exists(cfg.kenlm_model_file):
378 raise FileNotFoundError(f"Could not find the KenLM model file '{cfg.kenlm_model_file}'.")
379 lm_path = cfg.kenlm_model_file
380 else:
381 lm_path = None
382
383 # 'greedy' decoding_mode would skip the beam search decoding
384 if cfg.decoding_mode in ["beamsearch_ngram", "beamsearch"]:
385 if cfg.beam_width is None or cfg.beam_alpha is None or cfg.beam_beta is None:
386 raise ValueError("beam_width, beam_alpha and beam_beta are needed to perform beam search decoding.")
387 params = {'beam_width': cfg.beam_width, 'beam_alpha': cfg.beam_alpha, 'beam_beta': cfg.beam_beta}
388 hp_grid = ParameterGrid(params)
389 hp_grid = list(hp_grid)
390
391 best_wer_beam_size, best_cer_beam_size = None, None
392 best_wer_alpha, best_cer_alpha = None, None
393 best_wer_beta, best_cer_beta = None, None
394 best_wer, best_cer = 1e6, 1e6
395
396 logging.info(f"==============================Starting the beam search decoding===============================")
397 logging.info(f"Grid search size: {len(hp_grid)}")
398 logging.info(f"It may take some time...")
399 logging.info(f"==============================================================================================")
400
401 if cfg.preds_output_folder and not os.path.exists(cfg.preds_output_folder):
402 os.mkdir(cfg.preds_output_folder)
403 for hp in hp_grid:
404 if cfg.preds_output_folder:
405 preds_output_file = os.path.join(
406 cfg.preds_output_folder,
407 f"preds_out_width{hp['beam_width']}_alpha{hp['beam_alpha']}_beta{hp['beam_beta']}.tsv",
408 )
409 else:
410 preds_output_file = None
411
412 candidate_wer, candidate_cer = beam_search_eval(
413 asr_model,
414 cfg,
415 all_probs=all_probs,
416 target_transcripts=target_transcripts,
417 preds_output_file=preds_output_file,
418 lm_path=lm_path,
419 beam_width=hp["beam_width"],
420 beam_alpha=hp["beam_alpha"],
421 beam_beta=hp["beam_beta"],
422 beam_batch_size=cfg.beam_batch_size,
423 progress_bar=True,
424 punctuation_capitalization=punctuation_capitalization,
425 )
426
427 if candidate_cer < best_cer:
428 best_cer_beam_size = hp["beam_width"]
429 best_cer_alpha = hp["beam_alpha"]
430 best_cer_beta = hp["beam_beta"]
431 best_cer = candidate_cer
432
433 if candidate_wer < best_wer:
434 best_wer_beam_size = hp["beam_width"]
435 best_wer_alpha = hp["beam_alpha"]
436 best_wer_beta = hp["beam_beta"]
437 best_wer = candidate_wer
438
439 logging.info(
440 f'Best WER Candidate = {best_wer:.2%} :: Beam size = {best_wer_beam_size}, '
441 f'Beam alpha = {best_wer_alpha}, Beam beta = {best_wer_beta}'
442 )
443
444 logging.info(
445 f'Best CER Candidate = {best_cer:.2%} :: Beam size = {best_cer_beam_size}, '
446 f'Beam alpha = {best_cer_alpha}, Beam beta = {best_cer_beta}'
447 )
448 logging.info(f"=================================================================================")
449
450
451 if __name__ == '__main__':
452 main()
453
[end of scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram.py]
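The per-utterance accounting in `beam_search_eval` reduces to word- and character-level edit distances accumulated over the manifest. A minimal dependency-free sketch of that computation, with a hand-rolled Levenshtein distance standing in for the `editdistance` package the script actually uses:

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two sequences."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]


def wer_cer(target: str, prediction: str):
    """Word and character error rates for a single utterance pair."""
    target_words, pred_words = target.split(), prediction.split()
    wer = levenshtein(target_words, pred_words) / max(len(target_words), 1)
    cer = levenshtein(target, prediction) / max(len(target), 1)
    return wer, cer
```

Note that the script divides the summed distances by the total word/character counts of the whole evaluation set, rather than averaging per-utterance rates as this single-pair helper does.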
[start of scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram_transducer.py]
1 # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 #
15
16 """
17 # This script evaluates an N-gram language model trained with the KenLM library (https://github.com/kpu/kenlm) in
18 # fusion with beam search decoders on top of a trained ASR Transducer model. NeMo's beam search decoders are capable of using
19 # KenLM's N-gram models to find the best candidates. This script supports both character-level and BPE-level
20 # encodings and models, which are detected automatically from the type of the model.
21 # You may train the LM model with 'scripts/asr_language_modeling/ngram_lm/train_kenlm.py'.
22
23 # Config Help
24
25 To discover all arguments of the script, please run:
26 python eval_beamsearch_ngram_transducer.py --help
27 python eval_beamsearch_ngram_transducer.py --cfg job
28
29 # USAGE
30
31 python eval_beamsearch_ngram_transducer.py nemo_model_file=<path to the .nemo file of the model> \
32     input_manifest=<path to the evaluation JSON manifest file> \
33 kenlm_model_file=<path to the binary KenLM model> \
34 beam_width=[<list of the beam widths, separated with commas>] \
35 beam_alpha=[<list of the beam alphas, separated with commas>] \
36 preds_output_folder=<optional folder to store the predictions> \
37 probs_cache_file=null \
38     decoding_strategy=<greedy_batch or maes decoding> \
39 maes_prefix_alpha=[<list of the maes prefix alphas, separated with commas>] \
40 maes_expansion_gamma=[<list of the maes expansion gammas, separated with commas>] \
41 hat_subtract_ilm=<in case of HAT model: subtract internal LM or not> \
42 hat_ilm_weight=[<in case of HAT model: list of the HAT internal LM weights, separated with commas>] \
43 ...
44
45
46 # Grid Search for Hyper parameters
47
48 For grid search, you can provide a list of arguments as follows -
49
50 beam_width=[4,8,16,....] \
51 beam_alpha=[-2.0,-1.0,...,1.0,2.0] \
52
53 # You may find more info on how to use this script at:
54 # https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/asr_language_modeling.html
55
56 """
57
58
59 import contextlib
60 import json
61 import os
62 import pickle
63 import tempfile
64 from dataclasses import dataclass, field, is_dataclass
65 from pathlib import Path
66 from typing import List, Optional
67
68 import editdistance
69 import numpy as np
70 import torch
71 from omegaconf import MISSING, OmegaConf
72 from sklearn.model_selection import ParameterGrid
73 from tqdm.auto import tqdm
74
75 import nemo.collections.asr as nemo_asr
76 from nemo.collections.asr.parts.submodules import rnnt_beam_decoding
77 from nemo.core.config import hydra_runner
78 from nemo.utils import logging
79
80 # fmt: off
81
82
83 @dataclass
84 class EvalBeamSearchNGramConfig:
85 """
86 Evaluate an ASR model with beam search decoding and n-gram KenLM language model.
87 """
88     # The path of the '.nemo' file of the ASR model or the name of a pretrained model (ngc / huggingface)
89 nemo_model_file: str = MISSING
90
91 # File paths
92 input_manifest: str = MISSING # The manifest file of the evaluation set
93 kenlm_model_file: Optional[str] = None # The path of the KenLM binary model file
94 preds_output_folder: Optional[str] = None # The optional folder where the predictions are stored
95 probs_cache_file: Optional[str] = None # The cache file for storing the logprobs of the model
96
97 # Parameters for inference
98 acoustic_batch_size: int = 128 # The batch size to calculate log probabilities
99 beam_batch_size: int = 128 # The batch size to be used for beam search decoding
100 device: str = "cuda" # The device to load the model onto to calculate log probabilities
101 use_amp: bool = False # Whether to use AMP if available to calculate log probabilities
102 num_workers: int = 1 # Number of workers for DataLoader
103
104 # The decoding scheme to be used for evaluation
105 decoding_strategy: str = "greedy_batch" # ["greedy_batch", "beam", "tsd", "alsd", "maes"]
106
107 # Beam Search hyperparameters
108 beam_width: List[int] = field(default_factory=lambda: [8]) # The width or list of the widths for the beam search decoding
109 beam_alpha: List[float] = field(default_factory=lambda: [0.2]) # The alpha parameter or list of the alphas for the beam search decoding
110
111 maes_prefix_alpha: List[int] = field(default_factory=lambda: [2]) # The maes_prefix_alpha or list of the maes_prefix_alpha for the maes decoding
112 maes_expansion_gamma: List[float] = field(default_factory=lambda: [2.3]) # The maes_expansion_gamma or list of the maes_expansion_gamma for the maes decoding
113
114 # HAT related parameters (only for internal lm subtraction)
115 hat_subtract_ilm: bool = False
116 hat_ilm_weight: List[float] = field(default_factory=lambda: [0.0])
117
118 decoding: rnnt_beam_decoding.BeamRNNTInferConfig = rnnt_beam_decoding.BeamRNNTInferConfig(beam_size=128)
119
120
121 # fmt: on
122
123
124 def decoding_step(
125 model: nemo_asr.models.ASRModel,
126 cfg: EvalBeamSearchNGramConfig,
127 all_probs: List[torch.Tensor],
128 target_transcripts: List[str],
129 preds_output_file: str = None,
130 beam_batch_size: int = 128,
131 progress_bar: bool = True,
132 ):
133 level = logging.getEffectiveLevel()
134 logging.setLevel(logging.CRITICAL)
135 # Reset config
136 model.change_decoding_strategy(None)
137
138 cfg.decoding.hat_ilm_weight = cfg.decoding.hat_ilm_weight * cfg.hat_subtract_ilm
139 # Override the beam search config with current search candidate configuration
140 cfg.decoding.return_best_hypothesis = False
141 cfg.decoding.ngram_lm_model = cfg.kenlm_model_file
142 cfg.decoding.hat_subtract_ilm = cfg.hat_subtract_ilm
143
144 # Update model's decoding strategy config
145 model.cfg.decoding.strategy = cfg.decoding_strategy
146 model.cfg.decoding.beam = cfg.decoding
147
148 # Update model's decoding strategy
149 model.change_decoding_strategy(model.cfg.decoding)
150 logging.setLevel(level)
151
152 wer_dist_first = cer_dist_first = 0
153 wer_dist_best = cer_dist_best = 0
154 words_count = 0
155 chars_count = 0
156 sample_idx = 0
157 if preds_output_file:
158 out_file = open(preds_output_file, 'w', encoding='utf_8', newline='\n')
159
160 if progress_bar:
161 if cfg.decoding_strategy == "greedy_batch":
162 description = "Greedy_batch decoding.."
163 else:
164 description = f"{cfg.decoding_strategy} decoding with bw={cfg.decoding.beam_size}, ba={cfg.decoding.ngram_lm_alpha}, ma={cfg.decoding.maes_prefix_alpha}, mg={cfg.decoding.maes_expansion_gamma}, hat_ilmw={cfg.decoding.hat_ilm_weight}"
165 it = tqdm(range(int(np.ceil(len(all_probs) / beam_batch_size))), desc=description, ncols=120)
166 else:
167 it = range(int(np.ceil(len(all_probs) / beam_batch_size)))
168 for batch_idx in it:
169 # disabling type checking
170 probs_batch = all_probs[batch_idx * beam_batch_size : (batch_idx + 1) * beam_batch_size]
171 probs_lens = torch.tensor([prob.shape[-1] for prob in probs_batch])
172 with torch.no_grad():
173 packed_batch = torch.zeros(len(probs_batch), probs_batch[0].shape[0], max(probs_lens), device='cpu')
174
175 for prob_index in range(len(probs_batch)):
176 packed_batch[prob_index, :, : probs_lens[prob_index]] = torch.tensor(
177 probs_batch[prob_index].unsqueeze(0), device=packed_batch.device, dtype=packed_batch.dtype
178 )
179 best_hyp_batch, beams_batch = model.decoding.rnnt_decoder_predictions_tensor(
180 packed_batch, probs_lens, return_hypotheses=True,
181 )
182 if cfg.decoding_strategy == "greedy_batch":
183 beams_batch = [[x] for x in best_hyp_batch]
184
185 for beams_idx, beams in enumerate(beams_batch):
186 target = target_transcripts[sample_idx + beams_idx]
187 target_split_w = target.split()
188 target_split_c = list(target)
189 words_count += len(target_split_w)
190 chars_count += len(target_split_c)
191 wer_dist_min = cer_dist_min = 10000
192 for candidate_idx, candidate in enumerate(beams): # type: (int, rnnt_beam_decoding.rnnt_utils.Hypothesis)
193 pred_text = candidate.text
194 pred_split_w = pred_text.split()
195 wer_dist = editdistance.eval(target_split_w, pred_split_w)
196 pred_split_c = list(pred_text)
197 cer_dist = editdistance.eval(target_split_c, pred_split_c)
198
199 wer_dist_min = min(wer_dist_min, wer_dist)
200 cer_dist_min = min(cer_dist_min, cer_dist)
201
202 if candidate_idx == 0:
203 # first candidate
204 wer_dist_first += wer_dist
205 cer_dist_first += cer_dist
206
207 score = candidate.score
208 if preds_output_file:
209 out_file.write('{}\t{}\n'.format(pred_text, score))
210 wer_dist_best += wer_dist_min
211 cer_dist_best += cer_dist_min
212 sample_idx += len(probs_batch)
213
214 if cfg.decoding_strategy == "greedy_batch":
215 return wer_dist_first / words_count, cer_dist_first / chars_count
216
217 if preds_output_file:
218 out_file.close()
219 logging.info(f"Stored the predictions of {cfg.decoding_strategy} decoding at '{preds_output_file}'.")
220
221 if cfg.decoding.ngram_lm_model:
222 logging.info(
223 f"WER/CER with {cfg.decoding_strategy} decoding and N-gram model = {wer_dist_first / words_count:.2%}/{cer_dist_first / chars_count:.2%}"
224 )
225 else:
226 logging.info(
227 f"WER/CER with {cfg.decoding_strategy} decoding = {wer_dist_first / words_count:.2%}/{cer_dist_first / chars_count:.2%}"
228 )
229 logging.info(
230 f"Oracle WER/CER in candidates with perfect LM= {wer_dist_best / words_count:.2%}/{cer_dist_best / chars_count:.2%}"
231 )
232 logging.info(f"=================================================================================")
233
234 return wer_dist_first / words_count, cer_dist_first / chars_count
235
236
237 @hydra_runner(config_path=None, config_name='EvalBeamSearchNGramConfig', schema=EvalBeamSearchNGramConfig)
238 def main(cfg: EvalBeamSearchNGramConfig):
239 if is_dataclass(cfg):
240 cfg = OmegaConf.structured(cfg) # type: EvalBeamSearchNGramConfig
241
242     valid_decoding_strategies = ["greedy_batch", "beam", "tsd", "alsd", "maes"]
243     if cfg.decoding_strategy not in valid_decoding_strategies:
244         raise ValueError(
245             f"Given decoding_strategy={cfg.decoding_strategy} is invalid. Available options are:\n"
246             f"{valid_decoding_strategies}"
247         )
248
249 if cfg.nemo_model_file.endswith('.nemo'):
250 asr_model = nemo_asr.models.ASRModel.restore_from(cfg.nemo_model_file, map_location=torch.device(cfg.device))
251 else:
252 logging.warning(
253 "nemo_model_file does not end with .nemo, therefore trying to load a pretrained model with this name."
254 )
255 asr_model = nemo_asr.models.ASRModel.from_pretrained(
256 cfg.nemo_model_file, map_location=torch.device(cfg.device)
257 )
258
259 if cfg.kenlm_model_file:
260 if not os.path.exists(cfg.kenlm_model_file):
261 raise FileNotFoundError(f"Could not find the KenLM model file '{cfg.kenlm_model_file}'.")
262 if cfg.decoding_strategy != "maes":
263             raise ValueError("Decoding with a KenLM model is supported only for the maes decoding algorithm.")
264 lm_path = cfg.kenlm_model_file
265 else:
266 lm_path = None
267 cfg.beam_alpha = [0.0]
268 if cfg.hat_subtract_ilm:
269 assert lm_path, "kenlm must be set for hat internal lm subtraction"
270
271 if cfg.decoding_strategy != "maes":
272 cfg.maes_prefix_alpha, cfg.maes_expansion_gamma, cfg.hat_ilm_weight = [0], [0], [0]
273
274 target_transcripts = []
275 manifest_dir = Path(cfg.input_manifest).parent
276 with open(cfg.input_manifest, 'r', encoding='utf_8') as manifest_file:
277 audio_file_paths = []
278 for line in tqdm(manifest_file, desc=f"Reading Manifest {cfg.input_manifest} ...", ncols=120):
279 data = json.loads(line)
280 audio_file = Path(data['audio_filepath'])
281 if not audio_file.is_file() and not audio_file.is_absolute():
282 audio_file = manifest_dir / audio_file
283 target_transcripts.append(data['text'])
284 audio_file_paths.append(str(audio_file.absolute()))
285
286 if cfg.probs_cache_file and os.path.exists(cfg.probs_cache_file):
287 logging.info(f"Found a pickle file of probabilities at '{cfg.probs_cache_file}'.")
288 logging.info(f"Loading the cached pickle file of probabilities from '{cfg.probs_cache_file}' ...")
289 with open(cfg.probs_cache_file, 'rb') as probs_file:
290 all_probs = pickle.load(probs_file)
291
292 if len(all_probs) != len(audio_file_paths):
293 raise ValueError(
294 f"The number of samples in the probabilities file '{cfg.probs_cache_file}' does not "
295 f"match the manifest file. You may need to delete the probabilities cached file."
296 )
297 else:
298
299 @contextlib.contextmanager
300 def default_autocast():
301 yield
302
303 if cfg.use_amp:
304 if torch.cuda.is_available() and hasattr(torch.cuda, 'amp') and hasattr(torch.cuda.amp, 'autocast'):
305 logging.info("AMP is enabled!\n")
306 autocast = torch.cuda.amp.autocast
307
308 else:
309 autocast = default_autocast
310 else:
311
312 autocast = default_autocast
313
314 # manual calculation of encoder_embeddings
315 with autocast():
316 with torch.no_grad():
317 asr_model.eval()
318 asr_model.encoder.freeze()
319 device = next(asr_model.parameters()).device
320 all_probs = []
321 with tempfile.TemporaryDirectory() as tmpdir:
322 with open(os.path.join(tmpdir, 'manifest.json'), 'w', encoding='utf-8') as fp:
323 for audio_file in audio_file_paths:
324 entry = {'audio_filepath': audio_file, 'duration': 100000, 'text': ''}
325 fp.write(json.dumps(entry) + '\n')
326 config = {
327 'paths2audio_files': audio_file_paths,
328 'batch_size': cfg.acoustic_batch_size,
329 'temp_dir': tmpdir,
330 'num_workers': cfg.num_workers,
331 'channel_selector': None,
332 'augmentor': None,
333 }
334 temporary_datalayer = asr_model._setup_transcribe_dataloader(config)
335 for test_batch in tqdm(temporary_datalayer, desc="Transcribing", disable=True):
336 encoded, encoded_len = asr_model.forward(
337 input_signal=test_batch[0].to(device), input_signal_length=test_batch[1].to(device)
338 )
339 # dump encoder embeddings per file
340 for idx in range(encoded.shape[0]):
341 encoded_no_pad = encoded[idx, :, : encoded_len[idx]]
342 all_probs.append(encoded_no_pad)
343
344 if cfg.probs_cache_file:
345 logging.info(f"Writing pickle files of probabilities at '{cfg.probs_cache_file}'...")
346 with open(cfg.probs_cache_file, 'wb') as f_dump:
347 pickle.dump(all_probs, f_dump)
348
349 if cfg.decoding_strategy == "greedy_batch":
350 asr_model = asr_model.to('cpu')
351 candidate_wer, candidate_cer = decoding_step(
352 asr_model,
353 cfg,
354 all_probs=all_probs,
355 target_transcripts=target_transcripts,
356 beam_batch_size=cfg.beam_batch_size,
357 progress_bar=True,
358 )
359 logging.info(f"Greedy batch WER/CER = {candidate_wer:.2%}/{candidate_cer:.2%}")
360
361 asr_model = asr_model.to('cpu')
362
363 # 'greedy_batch' decoding_strategy would skip the beam search decoding
364 if cfg.decoding_strategy in ["beam", "tsd", "alsd", "maes"]:
365 if cfg.beam_width is None or cfg.beam_alpha is None:
366 raise ValueError("beam_width and beam_alpha are needed to perform beam search decoding.")
367 params = {
368 'beam_width': cfg.beam_width,
369 'beam_alpha': cfg.beam_alpha,
370 'maes_prefix_alpha': cfg.maes_prefix_alpha,
371 'maes_expansion_gamma': cfg.maes_expansion_gamma,
372 'hat_ilm_weight': cfg.hat_ilm_weight,
373 }
374 hp_grid = ParameterGrid(params)
375 hp_grid = list(hp_grid)
376
377 best_wer_beam_size, best_cer_beam_size = None, None
378 best_wer_alpha, best_cer_alpha = None, None
379 best_wer, best_cer = 1e6, 1e6
380
381 logging.info(
382 f"==============================Starting the {cfg.decoding_strategy} decoding==============================="
383 )
384 logging.info(f"Grid search size: {len(hp_grid)}")
385 logging.info(f"It may take some time...")
386 logging.info(f"==============================================================================================")
387
388 if cfg.preds_output_folder and not os.path.exists(cfg.preds_output_folder):
389 os.mkdir(cfg.preds_output_folder)
390 for hp in hp_grid:
391 if cfg.preds_output_folder:
392 results_file = f"preds_out_{cfg.decoding_strategy}_bw{hp['beam_width']}"
393 if cfg.decoding_strategy == "maes":
394 results_file = f"{results_file}_ma{hp['maes_prefix_alpha']}_mg{hp['maes_expansion_gamma']}"
395 if cfg.kenlm_model_file:
396 results_file = f"{results_file}_ba{hp['beam_alpha']}"
397 if cfg.hat_subtract_ilm:
398 results_file = f"{results_file}_hat_ilmw{hp['hat_ilm_weight']}"
399 preds_output_file = os.path.join(cfg.preds_output_folder, f"{results_file}.tsv")
400 else:
401 preds_output_file = None
402
403 cfg.decoding.beam_size = hp["beam_width"]
404 cfg.decoding.ngram_lm_alpha = hp["beam_alpha"]
405 cfg.decoding.maes_prefix_alpha = hp["maes_prefix_alpha"]
406 cfg.decoding.maes_expansion_gamma = hp["maes_expansion_gamma"]
407 cfg.decoding.hat_ilm_weight = hp["hat_ilm_weight"]
408
409 candidate_wer, candidate_cer = decoding_step(
410 asr_model,
411 cfg,
412 all_probs=all_probs,
413 target_transcripts=target_transcripts,
414 preds_output_file=preds_output_file,
415 beam_batch_size=cfg.beam_batch_size,
416 progress_bar=True,
417 )
418
419 if candidate_cer < best_cer:
420 best_cer_beam_size = hp["beam_width"]
421 best_cer_alpha = hp["beam_alpha"]
422 best_cer_ma = hp["maes_prefix_alpha"]
423 best_cer_mg = hp["maes_expansion_gamma"]
424 best_cer_hat_ilm_weight = hp["hat_ilm_weight"]
425 best_cer = candidate_cer
426
427 if candidate_wer < best_wer:
428 best_wer_beam_size = hp["beam_width"]
429 best_wer_alpha = hp["beam_alpha"]
430 best_wer_ma = hp["maes_prefix_alpha"]
431 best_wer_ga = hp["maes_expansion_gamma"]
432 best_wer_hat_ilm_weight = hp["hat_ilm_weight"]
433 best_wer = candidate_wer
434
435 wer_hat_parameter = ""
436 if cfg.hat_subtract_ilm:
437 wer_hat_parameter = f"HAT ilm weight = {best_wer_hat_ilm_weight}, "
438 logging.info(
439 f'Best WER Candidate = {best_wer:.2%} :: Beam size = {best_wer_beam_size}, '
440 f'Beam alpha = {best_wer_alpha}, {wer_hat_parameter}'
441 f'maes_prefix_alpha = {best_wer_ma}, maes_expansion_gamma = {best_wer_ga} '
442 )
443
444 cer_hat_parameter = ""
445 if cfg.hat_subtract_ilm:
446 cer_hat_parameter = f"HAT ilm weight = {best_cer_hat_ilm_weight}"
447 logging.info(
448 f'Best CER Candidate = {best_cer:.2%} :: Beam size = {best_cer_beam_size}, '
449 f'Beam alpha = {best_cer_alpha}, {cer_hat_parameter} '
450 f'maes_prefix_alpha = {best_cer_ma}, maes_expansion_gamma = {best_cer_mg}'
451 )
452 logging.info(f"=================================================================================")
453
454
455 if __name__ == '__main__':
456 main()
457
[end of scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram_transducer.py]
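Both evaluation scripts expand their list-valued hyper-parameters (`beam_width`, `beam_alpha`, `maes_prefix_alpha`, …) into a Cartesian grid via `sklearn.model_selection.ParameterGrid` and then run one decoding pass per combination. A stdlib-only sketch of that expansion, using `itertools.product` instead of scikit-learn:

```python
from itertools import product


def parameter_grid(params: dict):
    """Yield one dict per combination of the listed parameter values,
    mirroring how sklearn.model_selection.ParameterGrid enumerates a grid."""
    keys = sorted(params)
    for values in product(*(params[k] for k in keys)):
        yield dict(zip(keys, values))


# 2 x 2 = 4 candidate configurations, each of which the script would pass
# to a separate decoding run before keeping the best WER/CER candidate.
grid = list(parameter_grid({'beam_width': [4, 8], 'beam_alpha': [0.5, 1.0]}))
```

The grid-search cost is the product of all list lengths, which is why the scripts log the grid size up front and warn that the search "may take some time".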
[start of scripts/confidence_ensembles/build_ensemble.py]
1 # Copyright (c) 2023, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """
16 This script provides a functionality to create confidence-based ensembles
17 from a collection of pretrained models.
18
19 For more details see the paper https://arxiv.org/abs/2306.15824
20 or tutorial in tutorials/asr/Confidence_Ensembles.ipynb
21
22 You would typically use this script by providing a yaml config file or overriding
23 default options from command line.
24
25 Usage examples:
26
27 1. Building an ensemble of two monolingual models with default settings (no confidence tuning).
28
29 python build_ensemble.py --config-path=. --config-name=ensemble_config.yaml
30 ensemble.0.model=stt_it_conformer_ctc_large
31 ensemble.0.training_manifest=<path to the Italian data of 100+ utterances (no transcription required)>
32 ensemble.1.model=stt_es_conformer_ctc_large
33 ensemble.1.training_manifest=<path to the Spanish data of 100+ utterances (no transcription required)>
34 output_path=<path to the desired location of the .nemo checkpoint>
35
36 You can have more than 2 models and can control transcription settings (e.g., batch size)
37 with ``transcription.<any argument of examples/asr/transcribe_speech.py>`` parameters.
38
39 2. If you want to get improved results, you can enable tuning of the confidence and logistic regression (LR) parameters.
40 E.g.
41
42 python build_ensemble.py
43 <all arguments like in the previous example>
44 ensemble.0.dev_manifest=<path to the dev data that's required for tuning>
45 ...
46 # IMPORTANT: see the note below if you use > 2 models!
47 ensemble.N.dev_manifest=<path to the dev data that's required for tuning>
48 tune_confidence=True # to allow confidence tuning. LR is tuned by default
49
50 As with any tuning, it is recommended to have a reasonably large validation set for each model,
51 otherwise you might overfit to the validation data.
52
53 Note that if you add additional models (> 2) you will need to modify ensemble_config.yaml
54 or create a new one with added models in there. While it's theoretically possible to
55 fully override such parameters from the command line, hydra is very unfriendly for such
56 use-cases, so it's strongly recommended to create new configs.
57
58 3. If you want to precisely control tuning grid search, you can do that with
59
60 python build_ensemble.py
61 <all arguments as in the previous examples>
62 tune_confidence_config.confidence_type='[entropy_renyi_exp,entropy_tsallis_exp]' # only tune over this set
63 tune_confidence_config.alpha='[0.1,0.5,1.0]' # only tune over this set
64
65 You can check the dataclasses in this file for the full list of supported
66 arguments and their default values.
67 """
68
69 import atexit
70
71 # using default logging to be able to silence unnecessary messages from nemo
72 import logging
73 import os
74 import random
75 import sys
76 import tempfile
77 from copy import deepcopy
78 from dataclasses import dataclass
79 from pathlib import Path
80 from typing import Dict, List, Optional, Tuple
81
82 import joblib
83 import numpy as np
84 import pytorch_lightning as pl
85 from omegaconf import MISSING, DictConfig, OmegaConf
86 from sklearn.linear_model import LogisticRegression
87 from sklearn.metrics import confusion_matrix
88 from sklearn.pipeline import Pipeline, make_pipeline
89 from sklearn.preprocessing import StandardScaler
90 from tqdm import tqdm
91
92 from nemo.collections.asr.models.confidence_ensemble import (
93 ConfidenceEnsembleModel,
94 ConfidenceSpec,
95 compute_confidence,
96 get_filtered_logprobs,
97 )
98 from nemo.collections.asr.parts.utils.asr_confidence_utils import (
99 ConfidenceConfig,
100 ConfidenceMethodConfig,
101 get_confidence_aggregation_bank,
102 get_confidence_measure_bank,
103 )
104 from nemo.collections.asr.parts.utils.rnnt_utils import Hypothesis
105 from nemo.core.config import hydra_runner
106
107 LOG = logging.getLogger(__file__)
108
109 # adding examples/asr to the Python path; if the import fails, ask the user to get the file
110 try:
111 sys.path.append(str(Path(__file__).parents[2] / "examples" / "asr"))
112 import transcribe_speech
113 except ImportError:
114 # if users run script normally from nemo repo, this shouldn't be triggered as
115 # we modify the path above. But if they downloaded the build_ensemble.py as
116 # an isolated script, we'd ask them to also download corresponding version
117 # of the transcribe_speech.py
118 print(
119 "Current script depends on 'examples/asr/transcribe_speech.py', but can't find it. "
"If it's not present, download it from the NeMo GitHub manually and put it inside this folder."
121 )
122
123
124 @dataclass
125 class EnsembleConfig:
126 # .nemo path or pretrained name
127 model: str = MISSING
128 # path to the training data manifest (non-tarred)
129 training_manifest: str = MISSING
130 # specify to limit the number of training samples
131 # 100 is most likely enough, but setting higher default just in case
132 max_training_samples: int = 1000
133 # specify to provide dev data manifest for HP tuning
134 dev_manifest: Optional[str] = None
135
136
137 @dataclass
138 class TuneConfidenceConfig:
139 # important parameter, so should always be tuned
140 exclude_blank: Tuple[bool] = (True, False)
141 # prod is pretty much always worse, so not including by default
142 aggregation: Tuple[str] = ("mean", "min", "max")
143 # not including max prob, as there is always an entropy-based metric
144 # that's better but otherwise including everything
145 confidence_type: Tuple[str] = (
146 "entropy_renyi_exp",
147 "entropy_renyi_lin",
148 "entropy_tsallis_exp",
149 "entropy_tsallis_lin",
150 "entropy_gibbs_lin",
151 "entropy_gibbs_exp",
152 )
153
154 # TODO: currently it's not possible to efficiently tune temperature, as we always
155 # apply log-softmax in the decoder, so to try different values it will be required
156 # to rerun the decoding, which is very slow. To support this for one-off experiments
157 # it's possible to modify the code of CTC decoder / Transducer joint to
158 # remove log-softmax and then apply it directly in this script with the temperature
159 #
160 # Alternatively, one can run this script multiple times with different values of
161 # temperature and pick the best performing ensemble. Note that this will increase
162 # tuning time by the number of temperature values tried. On the other hand,
163 # the above approach is a lot more efficient and will only slightly increase
164 # the total tuning runtime.
165
166 # very important to tune for max prob, but for entropy metrics 1.0 is almost always best
167 # temperature: Tuple[float] = (1.0,)
168
169 # not that important, but can sometimes make a small difference
170 alpha: Tuple[float] = (0.25, 0.33, 0.5, 1.0)
171
172 def get_grid_size(self) -> int:
173 """Returns the total number of points in the search space."""
174 if "max_prob" in self.confidence_type:
175 return (
176 len(self.exclude_blank)
177 * len(self.aggregation)
178 * ((len(self.confidence_type) - 1) * len(self.alpha) + 1)
179 )
180 return len(self.exclude_blank) * len(self.aggregation) * len(self.confidence_type) * len(self.alpha)
181
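With the default `TuneConfidenceConfig` values above there is no `"max_prob"` entry, so `get_grid_size()` is a plain product of the option counts. A hedged, standalone sketch of that arithmetic (values copied from the defaults above):

```python
# Illustrative only: reproduces the get_grid_size() arithmetic for the default
# TuneConfidenceConfig (no "max_prob", so no special-casing of alpha).
exclude_blank = (True, False)                      # 2 options
aggregation = ("mean", "min", "max")               # 3 options
confidence_type = ("entropy_renyi_exp", "entropy_renyi_lin", "entropy_tsallis_exp",
                   "entropy_tsallis_lin", "entropy_gibbs_lin", "entropy_gibbs_exp")  # 6 options
alpha = (0.25, 0.33, 0.5, 1.0)                     # 4 options

grid_size = len(exclude_blank) * len(aggregation) * len(confidence_type) * len(alpha)
print(grid_size)  # 144
```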
182
183 @dataclass
184 class TuneLogisticRegressionConfig:
185 # will have log-uniform grid over this range with that many points
186 # note that a value of 10000.0 (no regularization) is always added
187 C_num_points: int = 10
188 C_min: float = 0.0001
189 C_max: float = 10.0
190
191 # not too important
192 multi_class: Tuple[str] = ("ovr", "multinomial")
193
194 # should try to include weights directly if the data is too imbalanced
195 class_weight: Tuple = (None, "balanced")
196
197 # increase if getting many warnings that algorithm didn't converge
198 max_iter: int = 1000
199
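A minimal sketch of the log-uniform C grid that `train_model_selection` later builds from this config (same expression as in the script; the extra 10000.0 point is the "no regularization" value mentioned above):

```python
# Sketch of the C grid construction: C_num_points values spaced log-uniformly
# between C_min and C_max, plus an appended 10000.0 (effectively no regularization).
import numpy as np

C_min, C_max, C_num_points = 0.0001, 10.0, 10  # defaults from TuneLogisticRegressionConfig
C_pms = np.append(
    np.exp(np.linspace(np.log(C_min), np.log(C_max), C_num_points)),
    10000.0,
)
print(len(C_pms))  # 11 (10 grid points + the no-regularization point)
```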
200
201 @dataclass
202 class BuildEnsembleConfig:
203 # where to save the resulting ensemble model
204 output_path: str = MISSING
205
206 # each model specification
207 ensemble: List[EnsembleConfig] = MISSING
208
209 random_seed: int = 0 # for reproducibility
210
211 # default confidence, can override
212 confidence: ConfidenceConfig = ConfidenceConfig(
213 # we keep frame confidences and apply aggregation manually to get full-utterance confidence
214 preserve_frame_confidence=True,
215 exclude_blank=True,
216 aggregation="mean",
217 method_cfg=ConfidenceMethodConfig(name="entropy", entropy_type="renyi", alpha=0.25, entropy_norm="lin",),
218 )
219 temperature: float = 1.0
220
221 # this is optional, but can be used to change any aspect of the transcription
222 # config, such as batch size or amp usage. Note that model, data and confidence
223 # will be overridden by this script
224 transcription: transcribe_speech.TranscriptionConfig = transcribe_speech.TranscriptionConfig()
225
226 # set to True to tune the confidence.
227 # requires dev manifests to be specified for each model
228 tune_confidence: bool = False
229 # used to specify what to tune over. By default runs tuning over some
230 # reasonable grid, so that it does not take forever.
231 # Can be changed as needed
232 tune_confidence_config: TuneConfidenceConfig = TuneConfidenceConfig()
233
234 # very fast to tune and can be important in case of imbalanced datasets
235 # will automatically set to False if dev data is not available
236 tune_logistic_regression: bool = True
237 tune_logistic_regression_config: TuneLogisticRegressionConfig = TuneLogisticRegressionConfig()
238
239 def __post_init__(self):
240 """Checking that if any dev data is provided, all are provided.
241
242 Will also auto-set tune_logistic_regression to False if no dev data
243 is available.
244
245 If tune_confidence is set to True (user choice) and no dev data is
246 provided, will raise an error.
247 """
248 num_dev_data = 0
249 for ensemble_cfg in self.ensemble:
250 num_dev_data += ensemble_cfg.dev_manifest is not None
251 if num_dev_data == 0:
252 if self.tune_confidence:
253 raise ValueError("tune_confidence is set to True, but no dev data is provided")
254 LOG.info("Setting tune_logistic_regression = False since no dev data is provided")
255 self.tune_logistic_regression = False
256 return
257
258 if num_dev_data < len(self.ensemble):
259 raise ValueError(
260 "Some ensemble configs specify dev data, but some don't. Either all have to specify it or none!"
261 )
262
263
264 def calculate_score(features: np.ndarray, labels: np.ndarray, pipe: Pipeline) -> Tuple[float, np.ndarray]:
265 """Score is always calculated as mean of the per-class scores.
266
267 This is done to account for possible class imbalances.
268
269 Args:
270 features: numpy array of features of shape [N x D], where N is the
271 number of objects (typically a total number of utterances in
272 all datasets) and D is the total number of confidence scores
273 used to train the model (typically = number of models).
274 labels: numpy array of shape [N] containing ground-truth model indices.
275 pipe: classification pipeline (currently, standardization + logistic
276 regression).
277
278 Returns:
279 tuple: score value in [0, 1] and full classification confusion matrix.
280 """
281 predictions = pipe.predict(features)
282 conf_m = confusion_matrix(labels, predictions)
283 score = np.diag(conf_m).sum() / conf_m.sum()
284 return score, conf_m
285
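As a hedged numeric illustration, the scoring expression in the function above reduces to the trace of the confusion matrix over its total count; the 2x2 matrix here is made up:

```python
# Sanity check of the scoring rule: np.diag(conf_m).sum() / conf_m.sum().
import numpy as np

conf_m = np.array([[8, 2],
                   [1, 9]])  # rows: true model index, cols: predicted model index
score = np.diag(conf_m).sum() / conf_m.sum()
print(score)  # 0.85 (17 correct picks out of 20)
```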
286
287 def train_model_selection(
288 training_features: np.ndarray,
289 training_labels: np.ndarray,
290 dev_features: Optional[np.ndarray] = None,
291 dev_labels: Optional[np.ndarray] = None,
292 tune_lr: bool = False,
293 tune_lr_cfg: Optional[TuneLogisticRegressionConfig] = None,
294 verbose: bool = False,
295 ) -> Tuple[Pipeline, float]:
296 """Trains model selection block with an (optional) tuning of the parameters.
297
298 Returns a pipeline consisting of feature standardization and logistic
299 regression. If tune_lr is set to True, dev features/labels will be used
300 to tune the hyperparameters of the logistic regression with the grid
301 search that's defined via ``tune_lr_cfg``.
302
303 If no tuning is requested, uses the following parameters::
304
305 best_pipe = make_pipeline(
306 StandardScaler(),
307 LogisticRegression(
308 multi_class="multinomial",
309 C=10000.0,
310 max_iter=1000,
311 class_weight="balanced",
312 ),
313 )
314
315 Args:
316 training_features: numpy array of features of shape [N x D], where N is
317 the number of objects (typically a total number of utterances in
318 all training datasets) and D is the total number of confidence
319 scores used to train the model (typically = number of models).
320 training_labels: numpy array of shape [N] containing ground-truth
321 model indices.
322 dev_features: same as training, but for the validation subset.
323 dev_labels: same as training, but for the validation subset.
324 tune_lr: controls whether tuning of LR hyperparameters is performed.
325 If set to True, it's required to also provide dev features/labels.
326 tune_lr_cfg: specifies what values of LR hyperparameters to try.
327 verbose: if True, will output final training/dev scores.
328
329 Returns:
330 tuple: trained model selection pipeline, best score (or -1 if no tuning
331 was done).
332 """
333 if not tune_lr:
334 # default parameters: C=10000.0 disables regularization
335 best_pipe = make_pipeline(
336 StandardScaler(),
337 LogisticRegression(multi_class="multinomial", C=10000.0, max_iter=1000, class_weight="balanced"),
338 )
339 max_score = -1
340 else:
341 C_pms = np.append(
342 np.exp(np.linspace(np.log(tune_lr_cfg.C_min), np.log(tune_lr_cfg.C_max), tune_lr_cfg.C_num_points)),
343 10000.0,
344 )
345 max_score = 0
346 best_pipe = None
347 for class_weight in tune_lr_cfg.class_weight:
348 for multi_class in tune_lr_cfg.multi_class:
349 for C in C_pms:
350 pipe = make_pipeline(
351 StandardScaler(),
352 LogisticRegression(
353 multi_class=multi_class, C=C, max_iter=tune_lr_cfg.max_iter, class_weight=class_weight
354 ),
355 )
356 pipe.fit(training_features, training_labels)
357 score, confusion = calculate_score(dev_features, dev_labels, pipe)
358 if score > max_score:
359 max_score = score
360 best_pipe = pipe
361
362 best_pipe.fit(training_features, training_labels)
363 if verbose:
364 accuracy, confusion = calculate_score(training_features, training_labels, best_pipe)
365 LOG.info("Training fit accuracy: %.4f", accuracy * 100.0)
366 LOG.info("Training confusion matrix:\n%s", str(confusion))
367 if dev_features is not None and verbose:
368 accuracy, confusion = calculate_score(dev_features, dev_labels, best_pipe)
369 LOG.info("Dev fit accuracy: %.4f", accuracy * 100.0)
370 LOG.info("Dev confusion matrix:\n%s", str(confusion))
371
372 return best_pipe, max_score
373
374
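A synthetic sanity check of the untuned fallback pipeline above (StandardScaler followed by LogisticRegression with `C=10000.0`). The data is made up and `multi_class` is omitted here since newer scikit-learn versions deprecate it; only the pipeline shape mirrors the script:

```python
# Toy two-class data: two well-separated Gaussian blobs, so the default
# pipeline should fit them almost perfectly.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
features = np.vstack([rng.normal(0.0, 1.0, (20, 2)), rng.normal(4.0, 1.0, (20, 2))])
labels = np.array([0] * 20 + [1] * 20)

pipe = make_pipeline(
    StandardScaler(),
    LogisticRegression(C=10000.0, max_iter=1000, class_weight="balanced"),
)
pipe.fit(features, labels)
accuracy = pipe.score(features, labels)
print(accuracy >= 0.95)  # True: the synthetic classes are well separated
```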
375 def subsample_manifest(manifest_file: str, max_samples: int) -> str:
376 """Will save a subsampled version of the manifest to the same folder.
377
378 Have to save to the same folder to support relative paths.
379
380 Args:
381 manifest_file: path to the manifest file that needs subsampling.
382 max_samples: how many samples to retain. Will randomly select that
383 many lines from the manifest.
384
385 Returns:
386 str: the path to the subsampled manifest file.
387 """
388 with open(manifest_file, "rt", encoding="utf-8") as fin:
389 lines = fin.readlines()
390 if max_samples < len(lines):
391 lines = random.sample(lines, max_samples)
392 output_file = manifest_file + "-subsampled"
393 with open(output_file, "wt", encoding="utf-8") as fout:
394 fout.write("".join(lines))
395 return output_file
396
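A throwaway demonstration of the subsampling convention above: pick `max_samples` random lines and save them next to the original with a `-subsampled` suffix. All file names here are hypothetical:

```python
# Mirrors the core logic of subsample_manifest on a temporary 10-line manifest.
import os
import random
import tempfile

random.seed(0)
tmpdir = tempfile.mkdtemp()
manifest_file = os.path.join(tmpdir, "train_manifest.json")
with open(manifest_file, "wt", encoding="utf-8") as fout:
    fout.write("".join('{"audio_filepath": "utt%d.wav"}\n' % i for i in range(10)))

with open(manifest_file, "rt", encoding="utf-8") as fin:
    lines = fin.readlines()
max_samples = 3
if max_samples < len(lines):
    lines = random.sample(lines, max_samples)
output_file = manifest_file + "-subsampled"  # saved alongside to keep relative paths valid
with open(output_file, "wt", encoding="utf-8") as fout:
    fout.write("".join(lines))

with open(output_file, "rt", encoding="utf-8") as fin:
    kept = fin.readlines()
print(len(kept))  # 3
```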
397
398 def cleanup_subsampled_manifests(subsampled_manifests: List[str]):
399 """Removes all generated subsampled manifests."""
400 for manifest in subsampled_manifests:
401 os.remove(manifest)
402
403
404 def compute_all_confidences(
405 hypothesis: Hypothesis, tune_confidence_cfg: TuneConfidenceConfig
406 ) -> Dict[ConfidenceSpec, float]:
407 """Computes a set of confidence scores from a given hypothesis.
408
409 Works with the output of both CTC and Transducer decoding.
410
411 Args:
412 hypothesis: generated hypothesis as returned from the transcribe
413 method of the ASR model.
414 tune_confidence_cfg: config specifying what confidence scores to
415 compute.
416
417 Returns:
418 dict: dictionary with confidence spec -> confidence score mapping.
419 """
420 conf_values = {}
421
422 for exclude_blank in tune_confidence_cfg.exclude_blank:
423 filtered_logprobs = get_filtered_logprobs(hypothesis, exclude_blank)
424 vocab_size = filtered_logprobs.shape[1]
425 for aggregation in tune_confidence_cfg.aggregation:
426 aggr_func = get_confidence_aggregation_bank()[aggregation]
427 for conf_type in tune_confidence_cfg.confidence_type:
428 conf_func = get_confidence_measure_bank()[conf_type]
429 if conf_type == "max_prob": # skipping alpha in this case
430 conf_value = aggr_func(conf_func(filtered_logprobs, v=vocab_size, t=1.0)).cpu().item()
431 conf_values[ConfidenceSpec(exclude_blank, aggregation, conf_type, 1.0)] = conf_value
432 else:
433 for alpha in tune_confidence_cfg.alpha:
434 conf_value = aggr_func(conf_func(filtered_logprobs, v=vocab_size, t=alpha)).cpu().item()
435 conf_values[ConfidenceSpec(exclude_blank, aggregation, conf_type, alpha)] = conf_value
436
437 return conf_values
438
439
440 def find_best_confidence(
441 train_confidences: List[List[Dict[ConfidenceSpec, float]]],
442 train_labels: List[int],
443 dev_confidences: List[List[Dict[ConfidenceSpec, float]]],
444 dev_labels: List[int],
445 tune_lr: bool,
446 tune_lr_config: TuneLogisticRegressionConfig,
447 ) -> Tuple[ConfidenceConfig, Pipeline]:
448 """Finds the best confidence configuration for model selection.
449
450 Will loop over all values in the confidence dictionary and fit the LR
451 model (optionally tuning its HPs). The best performing confidence (on the
452 dev set) will be used for the final LR model.
453
454 Args:
455 train_confidences: this is an object of type
456 ``List[List[Dict[ConfidenceSpec, float]]]``. The shape of this
457 object is [M, N, S], where
458 M: number of models
459 N: number of utterances in all training sets
460 S: number of confidence scores to try
461
462 This argument will be used to construct np.array objects for each
463 of the confidence scores with the shape [M, N]
464
465 train_labels: ground-truth labels of the correct model for each data
466 points. This is a list of size [N]
467 dev_confidences: same as training, but for the validation subset.
468 dev_labels: same as training, but for the validation subset.
469 tune_lr: controls whether tuning of LR hyperparameters is performed.
470 tune_lr_config: specifies what values of LR hyperparameters to try.
471
472 Returns:
473 tuple: best confidence config, best model selection pipeline
474 """
475 max_score = 0
476 best_pipe = None
477 best_conf_spec = None
478 LOG.info("Evaluating all confidences. Total grid size: %d", len(train_confidences[0][0].keys()))
479 for conf_spec in tqdm(train_confidences[0][0].keys()):
480 cur_train_confidences = []
481 for model_confs in train_confidences:
482 cur_train_confidences.append([])
483 for model_conf in model_confs:
484 cur_train_confidences[-1].append(model_conf[conf_spec])
485 cur_dev_confidences = []
486 for model_confs in dev_confidences:
487 cur_dev_confidences.append([])
488 for model_conf in model_confs:
489 cur_dev_confidences[-1].append(model_conf[conf_spec])
490 # transposing with zip(*list)
491 training_features = np.array(list(zip(*cur_train_confidences)))
492 training_labels = np.array(train_labels)
493 dev_features = np.array(list(zip(*cur_dev_confidences)))
494 dev_labels = np.array(dev_labels)
495 pipe, score = train_model_selection(
496 training_features, training_labels, dev_features, dev_labels, tune_lr, tune_lr_config,
497 )
498 if max_score < score:
499 max_score = score
500 best_pipe = pipe
501 best_conf_spec = conf_spec
502 LOG.info("Found better parameters: %s. New score: %.4f", str(conf_spec), max_score)
503
504 return best_conf_spec.to_confidence_config(), best_pipe
505
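A stdlib-only illustration of the "transposing with zip(*list)" idiom used inside the function above: per-model confidence rows of shape [M, N] become per-utterance feature tuples of shape [N, M]:

```python
# Two models' confidences on the same three utterances, transposed into
# per-utterance feature rows for the logistic-regression pipeline.
cur_train_confidences = [
    [0.9, 0.2, 0.4],  # model 0's confidence on 3 utterances
    [0.1, 0.8, 0.3],  # model 1's confidence on the same utterances
]
training_features = list(zip(*cur_train_confidences))
print(training_features)  # [(0.9, 0.1), (0.2, 0.8), (0.4, 0.3)]
```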
506
507 @hydra_runner(config_name="BuildEnsembleConfig", schema=BuildEnsembleConfig)
508 def main(cfg: BuildEnsembleConfig):
509 # silencing all messages from nemo/ptl to avoid dumping tons of configs to the stdout
510 logging.getLogger('pytorch_lightning').setLevel(logging.CRITICAL)
511 logging.getLogger('nemo_logger').setLevel(logging.CRITICAL)
512 LOG.info(f'Build ensemble config:\n{OmegaConf.to_yaml(cfg)}')
513
514 # to ensure post init is called
515 cfg = BuildEnsembleConfig(**cfg)
516
517 pl.seed_everything(cfg.random_seed)
518 cfg.transcription.random_seed = None # seed is already applied
519 cfg.transcription.return_transcriptions = True
520 cfg.transcription.preserve_alignment = True
521 cfg.transcription.ctc_decoding.temperature = cfg.temperature
522 cfg.transcription.rnnt_decoding.temperature = cfg.temperature
523 # this ensures that generated output is after log-softmax for consistency with CTC
524
525 train_confidences = []
526 dev_confidences = []
527 train_labels = []
528 dev_labels = []
529
530 # registering clean-up function that will hold on to this list and
531 # should clean up even if there is partial error in some of the transcribe
532 # calls
533 subsampled_manifests = []
534 atexit.register(cleanup_subsampled_manifests, subsampled_manifests)
535
536 # note that we loop over the same config.
537 # This is intentional, as we need to run all models on all datasets
538 # this loop will do the following things:
539 # 1. Goes through each model X each training dataset
540 # 2. Computes predictions by directly calling transcribe_speech.main
541 # 3. Converts transcription to the confidence score(s) as specified in the config
542 # 4. If dev sets are provided, computes the same for them
543 # 5. Creates a list of ground-truth model indices by mapping each model
544 # to its own training dataset as specified in the config.
545 # 6. After the loop, we either run tuning over all confidence scores or
546 # directly use a single score to fit logistic regression and save the
547 # final ensemble model.
548 for model_idx, model_cfg in enumerate(cfg.ensemble):
549 train_model_confidences = []
550 dev_model_confidences = []
551 for data_idx, data_cfg in enumerate(cfg.ensemble):
552 if model_idx == 0: # generating subsampled manifests only one time
553 subsampled_manifests.append(
554 subsample_manifest(data_cfg.training_manifest, data_cfg.max_training_samples)
555 )
556 subsampled_manifest = subsampled_manifests[data_idx]
557
558 if model_cfg.model.endswith(".nemo"):
559 cfg.transcription.model_path = model_cfg.model
560 else: # assuming pretrained model
561 cfg.transcription.pretrained_name = model_cfg.model
562
563 cfg.transcription.dataset_manifest = subsampled_manifest
564
565 # training
566 with tempfile.NamedTemporaryFile() as output_file:
567 cfg.transcription.output_filename = output_file.name
568 LOG.info("Transcribing training dataset %d with model %d", data_idx, model_idx)
569 transcriptions = transcribe_speech.main(deepcopy(cfg.transcription))
570 LOG.info("Generating confidence scores")
571 # TODO: parallelize this loop?
572 for transcription in tqdm(transcriptions):
573 if cfg.tune_confidence:
574 train_model_confidences.append(
575 compute_all_confidences(transcription, cfg.tune_confidence_config)
576 )
577 else:
578 train_model_confidences.append(compute_confidence(transcription, cfg.confidence))
579 if model_idx == 0: # labels are the same for all models
580 train_labels.append(data_idx)
581
582 # optional dev
583 if data_cfg.dev_manifest is not None:
584 cfg.transcription.dataset_manifest = data_cfg.dev_manifest
585 with tempfile.NamedTemporaryFile() as output_file:
586 cfg.transcription.output_filename = output_file.name
587 LOG.info("Transcribing dev dataset %d with model %d", data_idx, model_idx)
588 transcriptions = transcribe_speech.main(deepcopy(cfg.transcription))
589 LOG.info("Generating confidence scores")
590 for transcription in tqdm(transcriptions):
591 if cfg.tune_confidence:
592 dev_model_confidences.append(
593 compute_all_confidences(transcription, cfg.tune_confidence_config)
594 )
595 else:
596 dev_model_confidences.append(compute_confidence(transcription, cfg.confidence))
597 if model_idx == 0: # labels are the same for all models
598 dev_labels.append(data_idx)
599
600 train_confidences.append(train_model_confidences)
601 if dev_model_confidences:
602 dev_confidences.append(dev_model_confidences)
603
604 if cfg.tune_confidence:
605 best_confidence, model_selection_block = find_best_confidence(
606 train_confidences,
607 train_labels,
608 dev_confidences,
609 dev_labels,
610 cfg.tune_logistic_regression,
611 cfg.tune_logistic_regression_config,
612 )
613 else:
614 best_confidence = cfg.confidence
615 # transposing with zip(*list)
616 training_features = np.array(list(zip(*train_confidences)))
617 training_labels = np.array(train_labels)
618 if dev_confidences:
619 dev_features = np.array(list(zip(*dev_confidences)))
620 dev_labels = np.array(dev_labels)
621 else:
622 dev_features = None
623 dev_labels = None
624 model_selection_block, _ = train_model_selection(
625 training_features,
626 training_labels,
627 dev_features,
628 dev_labels,
629 cfg.tune_logistic_regression,
630 cfg.tune_logistic_regression_config,
631 verbose=True,
632 )
633
634 with tempfile.TemporaryDirectory() as tmpdir:
635 model_selection_block_path = os.path.join(tmpdir, 'model_selection_block.pkl')
636 joblib.dump(model_selection_block, model_selection_block_path)
637
638 # creating ensemble checkpoint
639 ensemble_model = ConfidenceEnsembleModel(
640 cfg=DictConfig(
641 {
642 'model_selection_block': model_selection_block_path,
643 'confidence': best_confidence,
644 'temperature': cfg.temperature,
645 'load_models': [model_cfg.model for model_cfg in cfg.ensemble],
646 }
647 ),
648 trainer=None,
649 )
650 ensemble_model.save_to(cfg.output_path)
651
652
653 if __name__ == '__main__':
654 main()
655
[end of scripts/confidence_ensembles/build_ensemble.py]
[start of scripts/speech_recognition/confidence/benchmark_asr_confidence.py]
1 # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import json
16 import os
17 from dataclasses import dataclass, is_dataclass
18 from pathlib import Path
19 from typing import Optional
20
21 import pytorch_lightning as pl
22 import torch
23 from omegaconf import MISSING, OmegaConf
24 from sklearn.model_selection import ParameterGrid
25
26 from nemo.collections.asr.metrics.rnnt_wer import RNNTDecodingConfig
27 from nemo.collections.asr.metrics.wer import CTCDecodingConfig
28 from nemo.collections.asr.models import ASRModel, EncDecRNNTModel
29 from nemo.collections.asr.parts.utils.asr_confidence_benchmarking_utils import (
30 apply_confidence_parameters,
31 run_confidence_benchmark,
32 )
33 from nemo.collections.asr.parts.utils.asr_confidence_utils import ConfidenceConfig
34 from nemo.core.config import hydra_runner
35 from nemo.utils import logging, model_utils
36
37 """
38 Get confidence metrics and curve plots for a given model, dataset, and confidence parameters.
39
40 # Arguments
41 model_path: Path to .nemo ASR checkpoint
42 pretrained_name: Name of pretrained ASR model (from NGC registry)
43 dataset_manifest: Path to dataset JSON manifest file (in NeMo format)
44 output_dir: Output directory to store a report and curve plot directories
45
46 batch_size: batch size during inference
47 num_workers: number of workers during inference
48
49 cuda: Optional int to enable or disable execution of model on certain CUDA device
50 amp: Bool to decide if Automatic Mixed Precision should be used during inference
51 audio_type: Str filetype of the audio. Supported = wav, flac, mp3
52
53 target_level: Word- or token-level confidence. Supported = word, token, auto (for computing both word and token)
54 confidence_cfg: Config with confidence parameters
55 grid_params: Dictionary with lists of parameters to iteratively benchmark on
56
57 # Usage
58 ASR model can be specified by either "model_path" or "pretrained_name".
59 Data for transcription are defined with "dataset_manifest".
60 Results are returned as a benchmark report and curve plots.
61
62 python benchmark_asr_confidence.py \
63 model_path=null \
64 pretrained_name=null \
65 dataset_manifest="" \
66 output_dir="" \
67 batch_size=64 \
68 num_workers=8 \
69 cuda=0 \
70 amp=True \
71 target_level="word" \
72 confidence_cfg.exclude_blank=False \
73 'grid_params="{\"aggregation\": [\"min\", \"prod\"], \"alpha\": [0.33, 0.5]}"'
74 """
75
76
77 def get_experiment_params(cfg):
78 """Get experiment parameters from a confidence config and generate the experiment name.
79
80 Returns:
81 List of experiment parameters.
82 String with the experiment name.
83 """
84 blank = "no_blank" if cfg.exclude_blank else "blank"
85 aggregation = cfg.aggregation
86 method_name = cfg.method_cfg.name
87 alpha = cfg.method_cfg.alpha
88 if method_name == "entropy":
89 entropy_type = cfg.method_cfg.entropy_type
90 entropy_norm = cfg.method_cfg.entropy_norm
91 experiment_param_list = [
92 aggregation,
93 str(cfg.exclude_blank),
94 method_name,
95 entropy_type,
96 entropy_norm,
97 str(alpha),
98 ]
99 experiment_str = "-".join([aggregation, blank, method_name, entropy_type, entropy_norm, str(alpha)])
100 else:
101 experiment_param_list = [aggregation, str(cfg.exclude_blank), method_name, "-", "-", str(alpha)]
102 experiment_str = "-".join([aggregation, blank, method_name, str(alpha)])
103 return experiment_param_list, experiment_str
104
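A hypothetical re-creation of the entropy-branch naming scheme in `get_experiment_params` above, with `SimpleNamespace` standing in for the real `ConfidenceConfig`:

```python
# Stand-in config object; attribute names match those read by the function above.
from types import SimpleNamespace

cfg = SimpleNamespace(
    exclude_blank=True,
    aggregation="mean",
    method_cfg=SimpleNamespace(name="entropy", alpha=0.33, entropy_type="renyi", entropy_norm="lin"),
)
blank = "no_blank" if cfg.exclude_blank else "blank"
experiment_str = "-".join(
    [cfg.aggregation, blank, cfg.method_cfg.name,
     cfg.method_cfg.entropy_type, cfg.method_cfg.entropy_norm, str(cfg.method_cfg.alpha)]
)
print(experiment_str)  # mean-no_blank-entropy-renyi-lin-0.33
```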
105
106 @dataclass
107 class ConfidenceBenchmarkingConfig:
108 # Required configs
109 model_path: Optional[str] = None # Path to a .nemo file
110 pretrained_name: Optional[str] = None # Name of a pretrained model
111 dataset_manifest: str = MISSING
112 output_dir: str = MISSING
113
114 # General configs
115 batch_size: int = 32
116 num_workers: int = 4
117
118 # Set `cuda` to int to define CUDA device. If 'None', will look for CUDA
119 # device anyway, and do inference on CPU only if CUDA device is not found.
120 # If `cuda` is a negative number, inference will be on CPU only.
121 cuda: Optional[int] = None
122 amp: bool = False
123 audio_type: str = "wav"
124
125 # Confidence configs
126 target_level: str = "auto" # Choices: "word", "token", "auto" (for both word- and token-level confidence)
127 confidence_cfg: ConfidenceConfig = ConfidenceConfig(preserve_word_confidence=True, preserve_token_confidence=True)
128 grid_params: Optional[str] = None # a dictionary with lists of parameters to iteratively benchmark on
129
130
131 @hydra_runner(config_name="ConfidenceBenchmarkingConfig", schema=ConfidenceBenchmarkingConfig)
132 def main(cfg: ConfidenceBenchmarkingConfig):
133 torch.set_grad_enabled(False)
134
135 logging.info(f'Hydra config: {OmegaConf.to_yaml(cfg)}')
136
137 if is_dataclass(cfg):
138 cfg = OmegaConf.structured(cfg)
139
140 if cfg.model_path is None and cfg.pretrained_name is None:
141 raise ValueError("cfg.model_path and cfg.pretrained_name cannot both be None!")
142
143 # setup GPU
144 if cfg.cuda is None:
145 if torch.cuda.is_available():
146 device = [0] # use 0th CUDA device
147 accelerator = 'gpu'
148 else:
149 device = 1
150 accelerator = 'cpu'
151 else:
152 device = [cfg.cuda]
153 accelerator = 'gpu'
154
155 map_location = torch.device('cuda:{}'.format(device[0]) if accelerator == 'gpu' else 'cpu')
156
157 # setup model
158 if cfg.model_path is not None:
159 # restore model from .nemo file path
160 model_cfg = ASRModel.restore_from(restore_path=cfg.model_path, return_config=True)
161 classpath = model_cfg.target # original class path
162 imported_class = model_utils.import_class_by_path(classpath) # type: ASRModel
163 logging.info(f"Restoring model : {imported_class.__name__}")
164 asr_model = imported_class.restore_from(
165 restore_path=cfg.model_path, map_location=map_location
166 ) # type: ASRModel
167 else:
168 # restore model by name
169 asr_model = ASRModel.from_pretrained(
170 model_name=cfg.pretrained_name, map_location=map_location
171 ) # type: ASRModel
172
173 trainer = pl.Trainer(devices=device, accelerator=accelerator)
174 asr_model.set_trainer(trainer)
175 asr_model = asr_model.eval()
176
177 # Check if ctc or rnnt model
178 is_rnnt = isinstance(asr_model, EncDecRNNTModel)
179
180 # Check that the model has the `change_decoding_strategy` method
181 if not hasattr(asr_model, 'change_decoding_strategy'):
182 raise RuntimeError("The asr_model you are using must have the `change_decoding_strategy` method.")
183
184 # get filenames and reference texts from manifest
185 filepaths = []
186 reference_texts = []
187 if os.stat(cfg.dataset_manifest).st_size == 0:
188 logging.error(f"The input dataset_manifest {cfg.dataset_manifest} is empty. Exiting!")
189 return None
190 manifest_dir = Path(cfg.dataset_manifest).parent
191 with open(cfg.dataset_manifest, 'r') as f:
192 for line in f:
193 item = json.loads(line)
194 audio_file = Path(item['audio_filepath'])
195 if not audio_file.is_file() and not audio_file.is_absolute():
196 audio_file = manifest_dir / audio_file
197 filepaths.append(str(audio_file.absolute()))
198 reference_texts.append(item['text'])
199
200 # setup AMP (optional)
201 autocast = None
202 if cfg.amp and torch.cuda.is_available() and hasattr(torch.cuda, 'amp') and hasattr(torch.cuda.amp, 'autocast'):
203 logging.info("AMP enabled!\n")
204 autocast = torch.cuda.amp.autocast
205
206 # do grid-based benchmarking if grid_params is provided, otherwise a regular one
207 work_dir = Path(cfg.output_dir)
208 os.makedirs(work_dir, exist_ok=True)
209 report_legend = (
210 ",".join(
211 [
212 "model_type",
213 "aggregation",
214 "blank",
215 "method_name",
216 "entropy_type",
217 "entropy_norm",
218 "alpha",
219 "target_level",
220 "auc_roc",
221 "auc_pr",
222 "auc_nt",
223 "nce",
224 "ece",
225 "auc_yc",
226 "std_yc",
227 "max_yc",
228 ]
229 )
230 + "\n"
231 )
232 model_typename = "RNNT" if is_rnnt else "CTC"
233 report_file = work_dir / Path("report.csv")
234 if cfg.grid_params:
235 asr_model.change_decoding_strategy(
236 RNNTDecodingConfig(fused_batch_size=-1, strategy="greedy_batch", confidence_cfg=cfg.confidence_cfg)
237 if is_rnnt
238 else CTCDecodingConfig(confidence_cfg=cfg.confidence_cfg)
239 )
240 params = json.loads(cfg.grid_params)
241 hp_grid = ParameterGrid(params)
242 hp_grid = list(hp_grid)
243
244 logging.info(f"==============================Running benchmarking with grid search===========================")
245 logging.info(f"Grid search size: {len(hp_grid)}")
246 logging.info(f"Results will be written to:\nreport file `{report_file}`\nand plot directories near the file")
247 logging.info(f"==============================================================================================")
248
249 with open(report_file, "tw", encoding="utf-8") as f:
250 f.write(report_legend)
251 f.flush()
252 for i, hp in enumerate(hp_grid):
253 logging.info(f"Run # {i + 1}, grid: `{hp}`")
254 asr_model.change_decoding_strategy(apply_confidence_parameters(asr_model.cfg.decoding, hp))
255 param_list, experiment_name = get_experiment_params(asr_model.cfg.decoding.confidence_cfg)
256 plot_dir = work_dir / Path(experiment_name)
257 results = run_confidence_benchmark(
258 asr_model,
259 cfg.target_level,
260 filepaths,
261 reference_texts,
262 cfg.batch_size,
263 cfg.num_workers,
264 plot_dir,
265 autocast,
266 )
267 for level, result in results.items():
268 f.write(f"{model_typename},{','.join(param_list)},{level},{','.join([str(r) for r in result])}\n")
269 f.flush()
270 else:
271 asr_model.change_decoding_strategy(
272 RNNTDecodingConfig(fused_batch_size=-1, strategy="greedy_batch", confidence_cfg=cfg.confidence_cfg)
273 if is_rnnt
274 else CTCDecodingConfig(confidence_cfg=cfg.confidence_cfg)
275 )
276 param_list, experiment_name = get_experiment_params(asr_model.cfg.decoding.confidence_cfg)
277 plot_dir = work_dir / Path(experiment_name)
278
279 logging.info(f"==============================Running a single benchmark======================================")
280 logging.info(f"Results will be written to:\nreport file `{report_file}`\nand plot directory `{plot_dir}`")
281
282 with open(report_file, "tw", encoding="utf-8") as f:
283 f.write(report_legend)
284 f.flush()
285 results = run_confidence_benchmark(
286 asr_model,
287 cfg.target_level,
288 filepaths,
289 reference_texts,
290 cfg.batch_size,
291 cfg.num_workers,
292 plot_dir,
293 autocast,
294 )
295 for level, result in results.items():
296 f.write(f"{model_typename},{','.join(param_list)},{level},{','.join([str(r) for r in result])}\n")
297 logging.info(f"===========================================Done===============================================")
298
299
300 if __name__ == '__main__':
301 main()
302
[end of scripts/speech_recognition/confidence/benchmark_asr_confidence.py]
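The grid search in `benchmark_asr_confidence.py` above expands the JSON string `cfg.grid_params` into every combination of the listed values via scikit-learn's `ParameterGrid`. A minimal stdlib sketch of that expansion (the parameter names below are illustrative, not taken from the script's actual confidence config):

```python
import itertools
import json


def expand_grid(grid_params: str):
    """Expand a JSON dict of parameter lists into the list of all
    combinations, mirroring sklearn.model_selection.ParameterGrid."""
    params = json.loads(grid_params)
    keys = sorted(params)  # fix an order so the output is deterministic
    return [
        dict(zip(keys, values))
        for values in itertools.product(*(params[k] for k in keys))
    ]


# Hypothetical grid over two confidence settings
grid = expand_grid('{"alpha": [0.25, 0.5, 1.0], "entropy_type": ["gibbs", "tsallis"]}')
print(len(grid))  # 6 combinations
print(grid[0])    # {'alpha': 0.25, 'entropy_type': 'gibbs'}
```

Each resulting dict corresponds to one `hp` that the script passes to `apply_confidence_parameters` for a single benchmarking run.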
[start of scripts/speech_recognition/convert_to_tarred_audio_dataset.py]
1 # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """
15 # This script converts an existing audio dataset with a manifest to
16 # a tarred and sharded audio dataset that can be read by the
17 # TarredAudioToTextDataLayer.
18
19 # Please make sure your audio_filepath DOES NOT CONTAIN '-sub'!
20 # This suffix is reserved for handling files which have duplicate filenames but different offsets
21 # (see the create_shard function for details)
22
23
24 # Bucketing can help to improve the training speed. You may use --buckets_num to specify the number of buckets.
25 # It creates multiple tarred datasets, one per bucket, based on the audio durations.
26 # The range of [min_duration, max_duration) is split into equal sized buckets.
27 # It is recommended to use --sort_in_shards to speed up training by reducing the padding in the batches
28 # More info on how to use bucketing feature: https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/datasets.html
29
30 # If valid NVIDIA DALI version is installed, will also generate the corresponding DALI index files that need to be
31 # supplied to the config in order to utilize webdataset for efficient large dataset handling.
32 # NOTE: DALI + Webdataset is NOT compatible with Bucketing support !
33
34 # Usage:
35 1) Creating a new tarfile dataset
36
37 python convert_to_tarred_audio_dataset.py \
38 --manifest_path=<path to the manifest file> \
39 --target_dir=<path to output directory> \
40 --num_shards=<number of tarfiles that will contain the audio> \
41 --max_duration=<float representing maximum duration of audio samples> \
42 --min_duration=<float representing minimum duration of audio samples> \
43 --shuffle --shuffle_seed=1 \
44 --sort_in_shards \
45 --workers=-1
46
47
48 2) Concatenating more tarfiles to a pre-existing tarred dataset
49
50 python convert_to_tarred_audio_dataset.py \
51 --manifest_path=<path to the tarred manifest file> \
52 --metadata_path=<path to the metadata.yaml (or metadata_version_{X}.yaml) file> \
53 --target_dir=<path to output directory where the original tarfiles are contained> \
54 --max_duration=<float representing maximum duration of audio samples> \
55 --min_duration=<float representing minimum duration of audio samples> \
56 --shuffle --shuffle_seed=1 \
57 --sort_in_shards \
58 --workers=-1 \
59 --concat_manifest_paths \
60 <space separated paths to 1 or more manifest files to concatenate into the original tarred dataset>
61
62 3) Writing an empty metadata file
63
64 python convert_to_tarred_audio_dataset.py \
65 --target_dir=<path to output directory> \
66 # any other optional argument
67 --num_shards=8 \
68 --max_duration=16.7 \
69 --min_duration=0.01 \
70 --shuffle \
71 --workers=-1 \
72 --sort_in_shards \
73 --shuffle_seed=1 \
74 --write_metadata
75
76 """
77 import argparse
78 import copy
79 import json
80 import os
81 import random
82 import tarfile
83 from collections import defaultdict
84 from dataclasses import dataclass, field
85 from datetime import datetime
86 from typing import Any, List, Optional
87
88 from joblib import Parallel, delayed
89 from omegaconf import DictConfig, OmegaConf, open_dict
90
91 try:
92 import create_dali_tarred_dataset_index as dali_index
93
94 DALI_INDEX_SCRIPT_AVAILABLE = True
95 except (ImportError, ModuleNotFoundError, FileNotFoundError):
96 DALI_INDEX_SCRIPT_AVAILABLE = False
97
98 parser = argparse.ArgumentParser(
99 description="Convert an existing ASR dataset to tarballs compatible with TarredAudioToTextDataLayer."
100 )
101 parser.add_argument(
102 "--manifest_path", default=None, type=str, required=False, help="Path to the existing dataset's manifest."
103 )
104
105 parser.add_argument(
106 '--concat_manifest_paths',
107 nargs='+',
108 default=None,
109 type=str,
110 required=False,
111 help="Path to the additional dataset's manifests that will be concatenated with base dataset.",
112 )
113
114 # Optional arguments
115 parser.add_argument(
116 "--target_dir",
117 default='./tarred',
118 type=str,
119 help="Target directory for resulting tarballs and manifest. Defaults to `./tarred`. Creates the path if necessary.",
120 )
121
122 parser.add_argument(
123 "--metadata_path", required=False, default=None, type=str, help="Path to metadata file for the dataset.",
124 )
125
126 parser.add_argument(
127 "--num_shards",
128 default=-1,
129 type=int,
130 help="Number of shards (tarballs) to create. Used for partitioning data among workers.",
131 )
132 parser.add_argument(
133 '--max_duration',
134 default=None,
135 required=True,
136 type=float,
137 help='Maximum duration of audio clip in the dataset. By default, it is None and is required to be set.',
138 )
139 parser.add_argument(
140 '--min_duration',
141 default=None,
142 type=float,
143 help='Minimum duration of audio clip in the dataset. By default, it is None and will not filter files.',
144 )
145 parser.add_argument(
146 "--shuffle",
147 action='store_true',
148 help="Whether or not to randomly shuffle the samples in the manifest before tarring/sharding.",
149 )
150
151 parser.add_argument(
152 "--keep_files_together",
153 action='store_true',
154 help="Whether or not to keep entries from the same file (but different offsets) together when sorting before tarring/sharding.",
155 )
156
157 parser.add_argument(
158 "--sort_in_shards",
159 action='store_true',
160 help="Whether or not to sort samples inside the shards based on their duration.",
161 )
162
163 parser.add_argument(
164 "--buckets_num", type=int, default=1, help="Number of buckets to create based on duration.",
165 )
166
167 parser.add_argument("--shuffle_seed", type=int, default=None, help="Random seed for use if shuffling is enabled.")
168 parser.add_argument(
169 '--write_metadata',
170 action='store_true',
171 help=(
172 "Flag to write a blank metadata with the current call config. "
173 "Note that the metadata will not contain the number of shards, "
174 "and it must be filled out by the user."
175 ),
176 )
177 parser.add_argument(
178 "--no_shard_manifests",
179 action='store_true',
180 help="Do not write sharded manifests along with the aggregated manifest.",
181 )
182 parser.add_argument('--workers', type=int, default=1, help='Number of worker processes (-1 to use all available cores)')
183 args = parser.parse_args()
184
185
186 @dataclass
187 class ASRTarredDatasetConfig:
188 num_shards: int = -1
189 shuffle: bool = False
190 max_duration: Optional[float] = None
191 min_duration: Optional[float] = None
192 shuffle_seed: Optional[int] = None
193 sort_in_shards: bool = True
194 shard_manifests: bool = True
195 keep_files_together: bool = False
196
197
198 @dataclass
199 class ASRTarredDatasetMetadata:
200 created_datetime: Optional[str] = None
201 version: int = 0
202 num_samples_per_shard: Optional[int] = None
203 is_concatenated_manifest: bool = False
204
205 dataset_config: Optional[ASRTarredDatasetConfig] = ASRTarredDatasetConfig()
206 history: Optional[List[Any]] = field(default_factory=lambda: [])
207
208 def __post_init__(self):
209 self.created_datetime = self.get_current_datetime()
210
211 def get_current_datetime(self):
212 return datetime.now().strftime("%m-%d-%Y %H-%M-%S")
213
214 @classmethod
215 def from_config(cls, config: DictConfig):
216 obj = cls()
217 obj.__dict__.update(**config)
218 return obj
219
220 @classmethod
221 def from_file(cls, filepath: str):
222 config = OmegaConf.load(filepath)
223 return ASRTarredDatasetMetadata.from_config(config=config)
224
225
226 class ASRTarredDatasetBuilder:
227 """
228 Helper class that constructs a tarred dataset from scratch, or concatenates tarred datasets
229 together and constructs manifests for them.
230 """
231
232 def __init__(self):
233 self.config = None
234
235 def configure(self, config: ASRTarredDatasetConfig):
236 """
237 Sets the config generated from command line overrides.
238
239 Args:
240 config: ASRTarredDatasetConfig dataclass object.
241 """
242 self.config = config # type: ASRTarredDatasetConfig
243
244 if self.config.num_shards <= 0:
245 raise ValueError("`num_shards` must be > 0. Please fill in the metadata information correctly.")
246
247 def create_new_dataset(self, manifest_path: str, target_dir: str = "./tarred/", num_workers: int = 1):
248 """
249 Creates a new tarred dataset from a given manifest file.
250
251 Args:
252 manifest_path: Path to the original ASR manifest.
253 target_dir: Output directory.
254 num_workers: Integer denoting the number of parallel worker processes which will write tarfiles.
255 Defaults to 1, which denotes a sequential worker process.
256
257 Output:
258 Writes tarfiles, along with the tarred dataset compatible manifest file.
259 Also preserves a record of the metadata used to construct this tarred dataset.
260 """
261 if self.config is None:
262 raise ValueError("Config has not been set. Please call `configure(config: ASRTarredDatasetConfig)`")
263
264 if manifest_path is None:
265 raise FileNotFoundError("Manifest filepath cannot be None !")
266
267 config = self.config # type: ASRTarredDatasetConfig
268
269 if not os.path.exists(target_dir):
270 os.makedirs(target_dir)
271
272 # Read the existing manifest
273 entries, total_duration, filtered_entries, filtered_duration = self._read_manifest(manifest_path, config)
274
275 if len(filtered_entries) > 0:
276 print(f"Filtered {len(filtered_entries)} files which amounts to {filtered_duration} seconds of audio.")
277 print(
278 f"After filtering, manifest has {len(entries)} files which amounts to {total_duration} seconds of audio."
279 )
280
281 if len(entries) == 0:
282 print("No tarred dataset was created as there were 0 valid samples after filtering!")
283 return
284 if config.shuffle:
285 random.seed(config.shuffle_seed)
286 print("Shuffling...")
287 if config.keep_files_together:
288 filename_entries = defaultdict(list)
289 for ent in entries:
290 filename_entries[ent["audio_filepath"]].append(ent)
291 filenames = list(filename_entries.keys())
292 random.shuffle(filenames)
293 shuffled_entries = []
294 for filename in filenames:
295 shuffled_entries += filename_entries[filename]
296 entries = shuffled_entries
297 else:
298 random.shuffle(entries)
299
300 # Create shards and updated manifest entries
301 print(f"Number of samples added : {len(entries)}")
302 print(f"Remainder: {len(entries) % config.num_shards}")
303
304 start_indices = []
305 end_indices = []
306 # Build indices
307 for i in range(config.num_shards):
308 start_idx = (len(entries) // config.num_shards) * i
309 end_idx = start_idx + (len(entries) // config.num_shards)
310 print(f"Shard {i} has entries {start_idx} ~ {end_idx}")
311 files = set()
312 for ent_id in range(start_idx, end_idx):
313 files.add(entries[ent_id]["audio_filepath"])
314 print(f"Shard {i} contains {len(files)} files")
315 if i == config.num_shards - 1:
316 # We discard in order to have the same number of entries per shard.
317 print(f"Have {len(entries) - end_idx} entries left over that will be discarded.")
318
319 start_indices.append(start_idx)
320 end_indices.append(end_idx)
321
322 manifest_folder, _ = os.path.split(manifest_path)
323
324 with Parallel(n_jobs=num_workers, verbose=config.num_shards) as parallel:
325 # Call parallel tarfile construction
326 new_entries_list = parallel(
327 delayed(self._create_shard)(entries[start_idx:end_idx], target_dir, i, manifest_folder)
328 for i, (start_idx, end_idx) in enumerate(zip(start_indices, end_indices))
329 )
330
331 if config.shard_manifests:
332 sharded_manifests_dir = target_dir + '/sharded_manifests'
333 if not os.path.exists(sharded_manifests_dir):
334 os.makedirs(sharded_manifests_dir)
335
336 for manifest in new_entries_list:
337 shard_id = manifest[0]['shard_id']
338 new_manifest_shard_path = os.path.join(sharded_manifests_dir, f'manifest_{shard_id}.json')
339 with open(new_manifest_shard_path, 'w', encoding='utf-8') as m2:
340 for entry in manifest:
341 json.dump(entry, m2)
342 m2.write('\n')
343
344 # Flatten the list of lists of entries into a single list of entries
345 new_entries = [sample for manifest in new_entries_list for sample in manifest]
346 del new_entries_list
347
348 print("Total number of entries in manifest :", len(new_entries))
349
350 # Write manifest
351 new_manifest_path = os.path.join(target_dir, 'tarred_audio_manifest.json')
352 with open(new_manifest_path, 'w', encoding='utf-8') as m2:
353 for entry in new_entries:
354 json.dump(entry, m2)
355 m2.write('\n')
356
357 # Write metadata (default metadata for new datasets)
358 new_metadata_path = os.path.join(target_dir, 'metadata.yaml')
359 metadata = ASRTarredDatasetMetadata()
360
361 # Update metadata
362 metadata.dataset_config = config
363 metadata.num_samples_per_shard = len(new_entries) // config.num_shards
364
365 # Write metadata
366 metadata_yaml = OmegaConf.structured(metadata)
367 OmegaConf.save(metadata_yaml, new_metadata_path, resolve=True)
368
369 def create_concatenated_dataset(
370 self,
371 base_manifest_path: str,
372 manifest_paths: List[str],
373 metadata: ASRTarredDatasetMetadata,
374 target_dir: str = "./tarred_concatenated/",
375 num_workers: int = 1,
376 ):
377 """
378 Creates new tarfiles in order to create a concatenated dataset, whose manifest contains the data for
379 both the original dataset as well as the new data submitted in manifest paths.
380
381 Args:
382 base_manifest_path: Path to the manifest file which contains the information for the original
383 tarred dataset (with flattened paths).
384 manifest_paths: List of one or more paths to manifest files that will be concatenated with above
385 base tarred dataset.
386 metadata: ASRTarredDatasetMetadata dataclass instance with overrides from command line.
387 target_dir: Output directory
388
389 Output:
390 Writes tarfiles with indices mapping to a "concatenated" tarred dataset,
391 along with the tarred dataset compatible manifest file which includes information
392 about all the datasets that comprise the concatenated dataset.
393
394 Also preserves a record of the metadata used to construct this tarred dataset.
395 """
396 if not os.path.exists(target_dir):
397 os.makedirs(target_dir)
398
399 if base_manifest_path is None:
400 raise FileNotFoundError("Base manifest filepath cannot be None !")
401
402 if manifest_paths is None or len(manifest_paths) == 0:
403 raise FileNotFoundError("List of additional manifest filepaths cannot be None !")
404
405 config = ASRTarredDatasetConfig(**(metadata.dataset_config))
406
407 # Read the existing manifest (no filtering here)
408 base_entries, _, _, _ = self._read_manifest(base_manifest_path, config)
409 print(f"Read base manifest containing {len(base_entries)} samples.")
410
411 # Precompute number of samples per shard
412 if metadata.num_samples_per_shard is None:
413 num_samples_per_shard = len(base_entries) // config.num_shards
414 else:
415 num_samples_per_shard = metadata.num_samples_per_shard
416
417 print("Number of samples per shard :", num_samples_per_shard)
418
419 # Compute min and max duration and update config (if no metadata passed)
420 print(f"Selected max duration : {config.max_duration}")
421 print(f"Selected min duration : {config.min_duration}")
422
423 entries = []
424 for new_manifest_idx in range(len(manifest_paths)):
425 new_entries, total_duration, filtered_new_entries, filtered_duration = self._read_manifest(
426 manifest_paths[new_manifest_idx], config
427 )
428
429 if len(filtered_new_entries) > 0:
430 print(
431 f"Filtered {len(filtered_new_entries)} files which amounts to {filtered_duration:0.2f}"
432 f" seconds of audio from manifest {manifest_paths[new_manifest_idx]}."
433 )
434 print(
435 f"After filtering, manifest has {len(entries)} files which amounts to {total_duration} seconds of audio."
436 )
437
438 entries.extend(new_entries)
439
440 if len(entries) == 0:
441 print("No tarred dataset was created as there were 0 valid samples after filtering!")
442 return
443
444 if config.shuffle:
445 random.seed(config.shuffle_seed)
446 print("Shuffling...")
447 random.shuffle(entries)
448
449 # Drop last section of samples that cannot be added onto a chunk
450 drop_count = len(entries) % num_samples_per_shard
451 total_new_entries = len(entries)
452 entries = entries[: len(entries) - drop_count]  # avoids entries[:-0], which would drop everything
453
454 print(
455 f"Dropping {drop_count} samples from total new samples {total_new_entries} since they cannot "
456 f"be added into a uniformly sized chunk."
457 )
458
459 # Create shards and updated manifest entries
460 num_added_shards = len(entries) // num_samples_per_shard
461
462 print(f"Number of samples in base dataset : {len(base_entries)}")
463 print(f"Number of samples in additional datasets : {len(entries)}")
464 print(f"Number of added shards : {num_added_shards}")
465 print(f"Remainder: {len(entries) % num_samples_per_shard}")
466
467 start_indices = []
468 end_indices = []
469 shard_indices = []
470 for i in range(num_added_shards):
471 start_idx = (len(entries) // num_added_shards) * i
472 end_idx = start_idx + (len(entries) // num_added_shards)
473 shard_idx = i + config.num_shards
474 print(f"Shard {shard_idx} has entries {start_idx + len(base_entries)} ~ {end_idx + len(base_entries)}")
475
476 start_indices.append(start_idx)
477 end_indices.append(end_idx)
478 shard_indices.append(shard_idx)
479
480 manifest_folder, _ = os.path.split(base_manifest_path)
481
482 with Parallel(n_jobs=num_workers, verbose=num_added_shards) as parallel:
483 # Call parallel tarfile construction
484 new_entries_list = parallel(
485 delayed(self._create_shard)(entries[start_idx:end_idx], target_dir, shard_idx, manifest_folder)
486 for i, (start_idx, end_idx, shard_idx) in enumerate(zip(start_indices, end_indices, shard_indices))
487 )
488
489 if config.shard_manifests:
490 sharded_manifests_dir = target_dir + '/sharded_manifests'
491 if not os.path.exists(sharded_manifests_dir):
492 os.makedirs(sharded_manifests_dir)
493
494 for manifest in new_entries_list:
495 shard_id = manifest[0]['shard_id']
496 new_manifest_shard_path = os.path.join(sharded_manifests_dir, f'manifest_{shard_id}.json')
497 with open(new_manifest_shard_path, 'w', encoding='utf-8') as m2:
498 for entry in manifest:
499 json.dump(entry, m2)
500 m2.write('\n')
501
502 # Flatten the list of lists of entries into a single list of entries
503 new_entries = [sample for manifest in new_entries_list for sample in manifest]
504 del new_entries_list
505
506 # Write manifest
507 if metadata is None:
508 new_version = 1 # start with `1`, where `0` indicates the base manifest + dataset
509 else:
510 new_version = metadata.version + 1
511
512 print("Total number of entries in manifest :", len(base_entries) + len(new_entries))
513
514 new_manifest_path = os.path.join(target_dir, f'tarred_audio_manifest_version_{new_version}.json')
515 with open(new_manifest_path, 'w', encoding='utf-8') as m2:
516 # First write all the entries of base manifest
517 for entry in base_entries:
518 json.dump(entry, m2)
519 m2.write('\n')
520
521 # Finally write the new entries
522 for entry in new_entries:
523 json.dump(entry, m2)
524 m2.write('\n')
525
526 # Preserve historical metadata
527 base_metadata = metadata
528
529 # Write metadata (updated metadata for concatenated datasets)
530 new_metadata_path = os.path.join(target_dir, f'metadata_version_{new_version}.yaml')
531 metadata = ASRTarredDatasetMetadata()
532
533 # Update config
534 config.num_shards = config.num_shards + num_added_shards
535
536 # Update metadata
537 metadata.version = new_version
538 metadata.dataset_config = config
539 metadata.num_samples_per_shard = num_samples_per_shard
540 metadata.is_concatenated_manifest = True
541 metadata.created_datetime = metadata.get_current_datetime()
542
543 # Attach history
544 current_metadata = OmegaConf.structured(base_metadata.history)
545 metadata.history = current_metadata
546
547 # Write metadata
548 metadata_yaml = OmegaConf.structured(metadata)
549 OmegaConf.save(metadata_yaml, new_metadata_path, resolve=True)
550
551 def _read_manifest(self, manifest_path: str, config: ASRTarredDatasetConfig):
552 """Read and filters data from the manifest"""
553 # Read the existing manifest
554 entries = []
555 total_duration = 0.0
556 filtered_entries = []
557 filtered_duration = 0.0
558 with open(manifest_path, 'r', encoding='utf-8') as m:
559 for line in m:
560 entry = json.loads(line)
561 if (config.max_duration is None or entry['duration'] < config.max_duration) and (
562 config.min_duration is None or entry['duration'] >= config.min_duration
563 ):
564 entries.append(entry)
565 total_duration += entry["duration"]
566 else:
567 filtered_entries.append(entry)
568 filtered_duration += entry['duration']
569
570 return entries, total_duration, filtered_entries, filtered_duration
571
572 def _create_shard(self, entries, target_dir, shard_id, manifest_folder):
573 """Creates a tarball containing the audio files from `entries`.
574 """
575 if self.config.sort_in_shards:
576 entries.sort(key=lambda x: x["duration"], reverse=False)
577
578 new_entries = []
579 tar = tarfile.open(os.path.join(target_dir, f'audio_{shard_id}.tar'), mode='w', dereference=True)
580
581 count = dict()
582 for entry in entries:
583 # We squash the filename since we do not preserve directory structure of audio files in the tarball.
584 if os.path.exists(entry["audio_filepath"]):
585 audio_filepath = entry["audio_filepath"]
586 else:
587 audio_filepath = os.path.join(manifest_folder, entry["audio_filepath"])
588 if not os.path.exists(audio_filepath):
589 raise FileNotFoundError(f"Could not find {entry['audio_filepath']}!")
590
591 base, ext = os.path.splitext(audio_filepath)
592 base = base.replace('/', '_')
593 # Need the following replacement as long as WebDataset splits on first period
594 base = base.replace('.', '_')
595 squashed_filename = f'{base}{ext}'
596 if squashed_filename not in count:
597 tar.add(audio_filepath, arcname=squashed_filename)
598 to_write = squashed_filename
599 count[squashed_filename] = 1
600 else:
601 to_write = base + "-sub" + str(count[squashed_filename]) + ext
602 count[squashed_filename] += 1
603
604 new_entry = {
605 'audio_filepath': to_write,
606 'duration': entry['duration'],
607 'shard_id': shard_id, # Keep shard ID for recordkeeping
608 }
609
610 if 'label' in entry:
611 new_entry['label'] = entry['label']
612
613 if 'text' in entry:
614 new_entry['text'] = entry['text']
615
616 if 'offset' in entry:
617 new_entry['offset'] = entry['offset']
618
619 if 'lang' in entry:
620 new_entry['lang'] = entry['lang']
621
622 new_entries.append(new_entry)
623
624 tar.close()
625 return new_entries
626
627 @classmethod
628 def setup_history(cls, base_metadata: ASRTarredDatasetMetadata, history: List[Any]):
629 if 'history' in base_metadata.keys():
630 for history_val in base_metadata.history:
631 cls.setup_history(history_val, history)
632
633 if base_metadata is not None:
634 metadata_copy = copy.deepcopy(base_metadata)
635 with open_dict(metadata_copy):
636 metadata_copy.pop('history', None)
637 history.append(metadata_copy)
638
639
640 def main():
641 if args.buckets_num > 1:
642 bucket_length = (args.max_duration - args.min_duration) / float(args.buckets_num)
643 for i in range(args.buckets_num):
644 min_duration = args.min_duration + i * bucket_length
645 max_duration = min_duration + bucket_length
646 if i == args.buckets_num - 1:
647 # add a small number to cover the samples with exactly duration of max_duration in the last bucket.
648 max_duration += 1e-5
649 target_dir = os.path.join(args.target_dir, f"bucket{i+1}")
650 print(f"Creating bucket {i+1} with min_duration={min_duration} and max_duration={max_duration} ...")
651 print(f"Results are being saved at: {target_dir}.")
652 create_tar_datasets(min_duration=min_duration, max_duration=max_duration, target_dir=target_dir)
653 print(f"Bucket {i+1} is created.")
654 else:
655 create_tar_datasets(min_duration=args.min_duration, max_duration=args.max_duration, target_dir=args.target_dir)
656
657
658 def create_tar_datasets(min_duration: float, max_duration: float, target_dir: str):
659 builder = ASRTarredDatasetBuilder()
660
661 shard_manifests = False if args.no_shard_manifests else True
662
663 if args.write_metadata:
664 metadata = ASRTarredDatasetMetadata()
665 dataset_cfg = ASRTarredDatasetConfig(
666 num_shards=args.num_shards,
667 shuffle=args.shuffle,
668 max_duration=max_duration,
669 min_duration=min_duration,
670 shuffle_seed=args.shuffle_seed,
671 sort_in_shards=args.sort_in_shards,
672 shard_manifests=shard_manifests,
673 keep_files_together=args.keep_files_together,
674 )
675 metadata.dataset_config = dataset_cfg
676
677 output_path = os.path.join(target_dir, 'default_metadata.yaml')
678 OmegaConf.save(metadata, output_path, resolve=True)
679 print(f"Default metadata written to {output_path}")
680 exit(0)
681
682 if args.concat_manifest_paths is None or len(args.concat_manifest_paths) == 0:
683 print("Creating new tarred dataset ...")
684
685 # Create a tarred dataset from scratch
686 config = ASRTarredDatasetConfig(
687 num_shards=args.num_shards,
688 shuffle=args.shuffle,
689 max_duration=max_duration,
690 min_duration=min_duration,
691 shuffle_seed=args.shuffle_seed,
692 sort_in_shards=args.sort_in_shards,
693 shard_manifests=shard_manifests,
694 keep_files_together=args.keep_files_together,
695 )
696 builder.configure(config)
697 builder.create_new_dataset(manifest_path=args.manifest_path, target_dir=target_dir, num_workers=args.workers)
698
699 else:
700 if args.buckets_num > 1:
701 raise ValueError("Concatenation feature does not support buckets_num > 1.")
702 print("Concatenating multiple tarred datasets ...")
703
704 # Implicitly update config from base details
705 if args.metadata_path is not None:
706 metadata = ASRTarredDatasetMetadata.from_file(args.metadata_path)
707 else:
708 raise ValueError("`metadata` yaml file path must be provided!")
709
710 # Preserve history
711 history = []
712 builder.setup_history(OmegaConf.structured(metadata), history)
713 metadata.history = history
714
715 # Add command line overrides (everything other than num_shards)
716 metadata.dataset_config.max_duration = max_duration
717 metadata.dataset_config.min_duration = min_duration
718 metadata.dataset_config.shuffle = args.shuffle
719 metadata.dataset_config.shuffle_seed = args.shuffle_seed
720 metadata.dataset_config.sort_in_shards = args.sort_in_shards
721 metadata.dataset_config.shard_manifests = shard_manifests
722
723 builder.configure(metadata.dataset_config)
724
725 # Concatenate a tarred dataset onto a previous one
726 builder.create_concatenated_dataset(
727 base_manifest_path=args.manifest_path,
728 manifest_paths=args.concat_manifest_paths,
729 metadata=metadata,
730 target_dir=target_dir,
731 num_workers=args.workers,
732 )
733
734 if DALI_INDEX_SCRIPT_AVAILABLE and dali_index.INDEX_CREATOR_AVAILABLE:
735 print("Constructing DALI Tarfile Index - ", target_dir)
736 index_config = dali_index.DALITarredIndexConfig(tar_dir=target_dir, workers=args.workers)
737 dali_index.main(index_config)
738
739
740 if __name__ == "__main__":
741 main()
742
[end of scripts/speech_recognition/convert_to_tarred_audio_dataset.py]
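When `--buckets_num > 1`, `main()` above splits the `[min_duration, max_duration)` range into equally sized buckets and widens the last one by `1e-5` so clips of exactly `max_duration` still land in a bucket. A small sketch of just that boundary computation (the duration values below are illustrative):

```python
def bucket_boundaries(min_duration: float, max_duration: float, buckets_num: int):
    """Compute the per-bucket (min, max) duration ranges used by the
    script's main() when --buckets_num > 1."""
    bucket_length = (max_duration - min_duration) / float(buckets_num)
    boundaries = []
    for i in range(buckets_num):
        lo = min_duration + i * bucket_length
        hi = lo + bucket_length
        if i == buckets_num - 1:
            # widen the last bucket slightly so samples of exactly
            # max_duration are not dropped
            hi += 1e-5
        boundaries.append((lo, hi))
    return boundaries


print(bucket_boundaries(0.1, 16.1, 4))
# four buckets of ~4 s each; the last upper bound is max_duration + 1e-5
```

Each `(lo, hi)` pair is then passed to `create_tar_datasets` as the `min_duration`/`max_duration` of one bucket's tarred dataset.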
[start of tools/nemo_forced_aligner/align.py]
1 # Copyright (c) 2023, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import copy
16 import math
17 import os
18 from dataclasses import dataclass, field, is_dataclass
19 from pathlib import Path
20 from typing import List, Optional
21
22 import torch
23 from omegaconf import OmegaConf
24 from utils.data_prep import (
25 add_t_start_end_to_utt_obj,
26 get_batch_starts_ends,
27 get_batch_variables,
28 get_manifest_lines_batch,
29 is_entry_in_all_lines,
30 is_entry_in_any_lines,
31 )
32 from utils.make_ass_files import make_ass_files
33 from utils.make_ctm_files import make_ctm_files
34 from utils.make_output_manifest import write_manifest_out_line
35 from utils.viterbi_decoding import viterbi_decoding
36
37 from nemo.collections.asr.models.ctc_models import EncDecCTCModel
38 from nemo.collections.asr.models.hybrid_rnnt_ctc_models import EncDecHybridRNNTCTCModel
39 from nemo.collections.asr.parts.utils.streaming_utils import FrameBatchASR
40 from nemo.collections.asr.parts.utils.transcribe_utils import setup_model
41 from nemo.core.config import hydra_runner
42 from nemo.utils import logging
43
44 """
45 Align the utterances in manifest_filepath.
46 Results are saved in ctm files in output_dir.
47
48 Arguments:
49 pretrained_name: string specifying the name of a CTC NeMo ASR model which will be automatically downloaded
50 from NGC and used for generating the log-probs which we will use to do alignment.
51 Note: NFA can only use CTC models (not Transducer models) at the moment.
52 model_path: string specifying the local filepath to a CTC NeMo ASR model which will be used to generate the
53 log-probs which we will use to do alignment.
54 Note: NFA can only use CTC models (not Transducer models) at the moment.
55 Note: if a model_path is provided, it will override the pretrained_name.
56 manifest_filepath: filepath to the manifest of the data you want to align,
57 containing 'audio_filepath' and 'text' fields.
58 output_dir: the folder where output CTM files and new JSON manifest will be saved.
59 align_using_pred_text: if True, will transcribe the audio using the specified model and then use that transcription
60 as the reference text for the forced alignment.
61 transcribe_device: None, or a string specifying the device that will be used for generating log-probs (i.e. "transcribing").
62 The string needs to be in a format recognized by torch.device(). If None, NFA will set it to 'cuda' if it is available
63 (otherwise will set it to 'cpu').
64 viterbi_device: None, or string specifying the device that will be used for doing Viterbi decoding.
65 The string needs to be in a format recognized by torch.device(). If None, NFA will set it to 'cuda' if it is available
66 (otherwise will set it to 'cpu').
67 batch_size: int specifying batch size that will be used for generating log-probs and doing Viterbi decoding.
68 use_local_attention: boolean flag specifying whether to try to use local attention for the ASR Model (will only
69 work if the ASR Model is a Conformer model). If local attention is used, we will set the local attention context
70 size to [64,64].
71 additional_segment_grouping_separator: an optional string used to separate the text into smaller segments.
72 If this is not specified, then the whole text will be treated as a single segment.
73 remove_blank_tokens_from_ctm: a boolean denoting whether to remove <blank> tokens from token-level output CTMs.
74 audio_filepath_parts_in_utt_id: int specifying how many of the 'parts' of the audio_filepath
75 we will use (starting from the final part of the audio_filepath) to determine the
76 utt_id that will be used in the CTM files. Note also that any spaces that are present in the audio_filepath
77 will be replaced with dashes, so as not to change the number of space-separated elements in the
78 CTM files.
79 e.g. if audio_filepath is "/a/b/c/d/e 1.wav" and audio_filepath_parts_in_utt_id is 1 => utt_id will be "e1"
80 e.g. if audio_filepath is "/a/b/c/d/e 1.wav" and audio_filepath_parts_in_utt_id is 2 => utt_id will be "d_e1"
81 e.g. if audio_filepath is "/a/b/c/d/e 1.wav" and audio_filepath_parts_in_utt_id is 3 => utt_id will be "c_d_e1"
82 use_buffered_infer: False, if set True, using streaming to do get the logits for alignment
83 This flag is useful when aligning large audio file.
84 However, currently the chunk streaming inference does not support batch inference,
85                         which means even if you set batch_size > 1, it will only infer one by one instead of doing
86 the whole batch inference together.
87 chunk_len_in_secs: float chunk length in seconds
88 total_buffer_in_secs: float Length of buffer (chunk + left and right padding) in seconds
89 chunk_batch_size: int batch size for buffered chunk inference,
90 which will cut one audio into segments and do inference on chunk_batch_size segments at a time
91
92 simulate_cache_aware_streaming: False, if set True, using cache aware streaming to do get the logits for alignment
93
94 save_output_file_formats: List of strings specifying what type of output files to save (default: ["ctm", "ass"])
95 ctm_file_config: CTMFileConfig to specify the configuration of the output CTM files
96 ass_file_config: ASSFileConfig to specify the configuration of the output ASS files
97 """
98
99
100 @dataclass
101 class CTMFileConfig:
102 remove_blank_tokens: bool = False
103     # minimum duration (in seconds) for timestamps in the CTM. If any line in the CTM has a
104 # duration lower than this, it will be enlarged from the middle outwards until it
105 # meets the minimum_timestamp_duration, or reaches the beginning or end of the audio file.
106 # Note that this may cause timestamps to overlap.
107 minimum_timestamp_duration: float = 0
108
109
110 @dataclass
111 class ASSFileConfig:
112 fontsize: int = 20
113 vertical_alignment: str = "center"
114 # if resegment_text_to_fill_space is True, the ASS files will use new segments
115 # such that each segment will not take up more than (approximately) max_lines_per_segment
116 # when the ASS file is applied to a video
117 resegment_text_to_fill_space: bool = False
118 max_lines_per_segment: int = 2
119 text_already_spoken_rgb: List[int] = field(default_factory=lambda: [49, 46, 61]) # dark gray
120 text_being_spoken_rgb: List[int] = field(default_factory=lambda: [57, 171, 9]) # dark green
121 text_not_yet_spoken_rgb: List[int] = field(default_factory=lambda: [194, 193, 199]) # light gray
122
123
124 @dataclass
125 class AlignmentConfig:
126 # Required configs
127 pretrained_name: Optional[str] = None
128 model_path: Optional[str] = None
129 manifest_filepath: Optional[str] = None
130 output_dir: Optional[str] = None
131
132 # General configs
133 align_using_pred_text: bool = False
134 transcribe_device: Optional[str] = None
135 viterbi_device: Optional[str] = None
136 batch_size: int = 1
137 use_local_attention: bool = True
138 additional_segment_grouping_separator: Optional[str] = None
139 audio_filepath_parts_in_utt_id: int = 1
140
141 # Buffered chunked streaming configs
142 use_buffered_chunked_streaming: bool = False
143 chunk_len_in_secs: float = 1.6
144 total_buffer_in_secs: float = 4.0
145 chunk_batch_size: int = 32
146
147 # Cache aware streaming configs
148 simulate_cache_aware_streaming: Optional[bool] = False
149
150 # Output file configs
151 save_output_file_formats: List[str] = field(default_factory=lambda: ["ctm", "ass"])
152 ctm_file_config: CTMFileConfig = CTMFileConfig()
153 ass_file_config: ASSFileConfig = ASSFileConfig()
154
155
156 @hydra_runner(config_name="AlignmentConfig", schema=AlignmentConfig)
157 def main(cfg: AlignmentConfig):
158
159 logging.info(f'Hydra config: {OmegaConf.to_yaml(cfg)}')
160
161 if is_dataclass(cfg):
162 cfg = OmegaConf.structured(cfg)
163
164 # Validate config
165 if cfg.model_path is None and cfg.pretrained_name is None:
166 raise ValueError("Both cfg.model_path and cfg.pretrained_name cannot be None")
167
168 if cfg.model_path is not None and cfg.pretrained_name is not None:
169 raise ValueError("One of cfg.model_path and cfg.pretrained_name must be None")
170
171 if cfg.manifest_filepath is None:
172 raise ValueError("cfg.manifest_filepath must be specified")
173
174 if cfg.output_dir is None:
175 raise ValueError("cfg.output_dir must be specified")
176
177 if cfg.batch_size < 1:
178 raise ValueError("cfg.batch_size cannot be zero or a negative number")
179
180 if cfg.additional_segment_grouping_separator == "" or cfg.additional_segment_grouping_separator == " ":
181 raise ValueError("cfg.additional_grouping_separator cannot be empty string or space character")
182
183 if cfg.ctm_file_config.minimum_timestamp_duration < 0:
184 raise ValueError("cfg.minimum_timestamp_duration cannot be a negative number")
185
186 if cfg.ass_file_config.vertical_alignment not in ["top", "center", "bottom"]:
187 raise ValueError("cfg.ass_file_config.vertical_alignment must be one of 'top', 'center' or 'bottom'")
188
189 for rgb_list in [
190 cfg.ass_file_config.text_already_spoken_rgb,
191         cfg.ass_file_config.text_being_spoken_rgb,
192         cfg.ass_file_config.text_not_yet_spoken_rgb,
193 ]:
194 if len(rgb_list) != 3:
195 raise ValueError(
196 "cfg.ass_file_config.text_already_spoken_rgb,"
197 " cfg.ass_file_config.text_being_spoken_rgb,"
198             " and cfg.ass_file_config.text_not_yet_spoken_rgb all need to contain"
199 " exactly 3 elements."
200 )
201
202 # Validate manifest contents
203 if not is_entry_in_all_lines(cfg.manifest_filepath, "audio_filepath"):
204 raise RuntimeError(
205 "At least one line in cfg.manifest_filepath does not contain an 'audio_filepath' entry. "
206 "All lines must contain an 'audio_filepath' entry."
207 )
208
209 if cfg.align_using_pred_text:
210 if is_entry_in_any_lines(cfg.manifest_filepath, "pred_text"):
211 raise RuntimeError(
212 "Cannot specify cfg.align_using_pred_text=True when the manifest at cfg.manifest_filepath "
213 "contains 'pred_text' entries. This is because the audio will be transcribed and may produce "
214 "a different 'pred_text'. This may cause confusion."
215 )
216 else:
217 if not is_entry_in_all_lines(cfg.manifest_filepath, "text"):
218 raise RuntimeError(
219 "At least one line in cfg.manifest_filepath does not contain a 'text' entry. "
220 "NFA requires all lines to contain a 'text' entry when cfg.align_using_pred_text=False."
221 )
222
223 # init devices
224 if cfg.transcribe_device is None:
225 transcribe_device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
226 else:
227 transcribe_device = torch.device(cfg.transcribe_device)
228 logging.info(f"Device to be used for transcription step (`transcribe_device`) is {transcribe_device}")
229
230 if cfg.viterbi_device is None:
231 viterbi_device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
232 else:
233 viterbi_device = torch.device(cfg.viterbi_device)
234 logging.info(f"Device to be used for viterbi step (`viterbi_device`) is {viterbi_device}")
235
236 if transcribe_device.type == 'cuda' or viterbi_device.type == 'cuda':
237 logging.warning(
238 'One or both of transcribe_device and viterbi_device are GPUs. If you run into OOM errors '
239 'it may help to change both devices to be the CPU.'
240 )
241
242 # load model
243 model, _ = setup_model(cfg, transcribe_device)
244 model.eval()
245
246 if isinstance(model, EncDecHybridRNNTCTCModel):
247 model.change_decoding_strategy(decoder_type="ctc")
248
249 if cfg.use_local_attention:
250 logging.info(
251 "Flag use_local_attention is set to True => will try to use local attention for model if it allows it"
252 )
253 model.change_attention_model(self_attention_model="rel_pos_local_attn", att_context_size=[64, 64])
254
255 if not (isinstance(model, EncDecCTCModel) or isinstance(model, EncDecHybridRNNTCTCModel)):
256 raise NotImplementedError(
257             f"Model is not an instance of NeMo EncDecCTCModel or EncDecHybridRNNTCTCModel."
258 " Currently only instances of these models are supported"
259 )
260
261 if cfg.ctm_file_config.minimum_timestamp_duration > 0:
262 logging.warning(
263 f"cfg.ctm_file_config.minimum_timestamp_duration has been set to {cfg.ctm_file_config.minimum_timestamp_duration} seconds. "
264 "This may cause the alignments for some tokens/words/additional segments to be overlapping."
265 )
266
267 buffered_chunk_params = {}
268 if cfg.use_buffered_chunked_streaming:
269 model_cfg = copy.deepcopy(model._cfg)
270
271 OmegaConf.set_struct(model_cfg.preprocessor, False)
272 # some changes for streaming scenario
273 model_cfg.preprocessor.dither = 0.0
274 model_cfg.preprocessor.pad_to = 0
275
276 if model_cfg.preprocessor.normalize != "per_feature":
277 logging.error(
278 "Only EncDecCTCModelBPE models trained with per_feature normalization are supported currently"
279 )
280 # Disable config overwriting
281 OmegaConf.set_struct(model_cfg.preprocessor, True)
282
283 feature_stride = model_cfg.preprocessor['window_stride']
284 model_stride_in_secs = feature_stride * cfg.model_downsample_factor
285 total_buffer = cfg.total_buffer_in_secs
286 chunk_len = float(cfg.chunk_len_in_secs)
287 tokens_per_chunk = math.ceil(chunk_len / model_stride_in_secs)
288 mid_delay = math.ceil((chunk_len + (total_buffer - chunk_len) / 2) / model_stride_in_secs)
289 logging.info(f"tokens_per_chunk is {tokens_per_chunk}, mid_delay is {mid_delay}")
290
291 model = FrameBatchASR(
292 asr_model=model,
293 frame_len=chunk_len,
294 total_buffer=cfg.total_buffer_in_secs,
295 batch_size=cfg.chunk_batch_size,
296 )
297 buffered_chunk_params = {
298 "delay": mid_delay,
299 "model_stride_in_secs": model_stride_in_secs,
300 "tokens_per_chunk": tokens_per_chunk,
301 }
302 # get start and end line IDs of batches
303 starts, ends = get_batch_starts_ends(cfg.manifest_filepath, cfg.batch_size)
304
305 # init output_timestep_duration = None and we will calculate and update it during the first batch
306 output_timestep_duration = None
307
308 # init f_manifest_out
309 os.makedirs(cfg.output_dir, exist_ok=True)
310 tgt_manifest_name = str(Path(cfg.manifest_filepath).stem) + "_with_output_file_paths.json"
311 tgt_manifest_filepath = str(Path(cfg.output_dir) / tgt_manifest_name)
312 f_manifest_out = open(tgt_manifest_filepath, 'w')
313
314 # get alignment and save in CTM batch-by-batch
315 for start, end in zip(starts, ends):
316 manifest_lines_batch = get_manifest_lines_batch(cfg.manifest_filepath, start, end)
317
318 (log_probs_batch, y_batch, T_batch, U_batch, utt_obj_batch, output_timestep_duration,) = get_batch_variables(
319 manifest_lines_batch,
320 model,
321 cfg.additional_segment_grouping_separator,
322 cfg.align_using_pred_text,
323 cfg.audio_filepath_parts_in_utt_id,
324 output_timestep_duration,
325 cfg.simulate_cache_aware_streaming,
326 cfg.use_buffered_chunked_streaming,
327 buffered_chunk_params,
328 )
329
330 alignments_batch = viterbi_decoding(log_probs_batch, y_batch, T_batch, U_batch, viterbi_device)
331
332 for utt_obj, alignment_utt in zip(utt_obj_batch, alignments_batch):
333
334 utt_obj = add_t_start_end_to_utt_obj(utt_obj, alignment_utt, output_timestep_duration)
335
336 if "ctm" in cfg.save_output_file_formats:
337 utt_obj = make_ctm_files(utt_obj, cfg.output_dir, cfg.ctm_file_config,)
338
339 if "ass" in cfg.save_output_file_formats:
340 utt_obj = make_ass_files(utt_obj, cfg.output_dir, cfg.ass_file_config)
341
342 write_manifest_out_line(
343 f_manifest_out, utt_obj,
344 )
345
346 f_manifest_out.close()
347
348 return None
349
350
351 if __name__ == "__main__":
352 main()
353
[end of tools/nemo_forced_aligner/align.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
NVIDIA/NeMo
|
15db83ec4a65e649d83b61d7a4a58d911586e853
|
Ubuntu 22.04 Python 3.11 [asr]: multiple errors `dataclasses ValueError: mutable default * for field * is not allowed: use default_factory`
**Describe the bug**
After installing the latest stable `1.19.1` from PyPI, or the current HEAD with `[asr]` extras, I'm getting this error from `hydra-core==1.2.0` when trying to import `nemo.collections.asr`:
```
Traceback (most recent call last):
File "/home/alex/T7/src/speaker_verification/nemo_speaker/test.py", line 5, in <module>
import nemo.collections.asr as nemo_asr
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/__init__.py", line 15, in <module>
from nemo.collections.asr import data, losses, models, modules
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/losses/__init__.py", line 15, in <module>
from nemo.collections.asr.losses.angularloss import AngularSoftmaxLoss
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/losses/angularloss.py", line 18, in <module>
from nemo.core.classes import Loss, Typing, typecheck
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/core/__init__.py", line 16, in <module>
from nemo.core.classes import *
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/core/classes/__init__.py", line 16, in <module>
import hydra
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/__init__.py", line 5, in <module>
from hydra import utils
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/utils.py", line 8, in <module>
import hydra._internal.instantiate._instantiate2
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 12, in <module>
from hydra._internal.utils import _locate
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/_internal/utils.py", line 18, in <module>
from hydra.core.utils import get_valid_filename, validate_config_path
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/core/utils.py", line 20, in <module>
from hydra.core.hydra_config import HydraConfig
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/core/hydra_config.py", line 6, in <module>
from hydra.conf import HydraConf
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/conf/__init__.py", line 46, in <module>
class JobConf:
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/conf/__init__.py", line 75, in JobConf
@dataclass
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1230, in dataclass
return wrap(cls)
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1220, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 958, in _process_class
cls_fields.append(_get_field(cls, name, type, kw_only))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 815, in _get_field
raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'hydra.conf.JobConf.JobConfig.OverrideDirname'> for field override_dirname is not allowed: use default_factory
```
If I then manually upgrade `hydra-core` to the current latest (`1.3.2`), I get similar errors from `nemo-toolkit`:
```
Traceback (most recent call last):
File "/home/alex/T7/src/speaker_verification/nemo_speaker/test.py", line 5, in <module>
import nemo.collections.asr as nemo_asr
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/__init__.py", line 15, in <module>
from nemo.collections.asr import data, losses, models, modules
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/losses/__init__.py", line 16, in <module>
from nemo.collections.asr.losses.audio_losses import SDRLoss
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/losses/audio_losses.py", line 21, in <module>
from nemo.collections.asr.parts.preprocessing.features import make_seq_mask_like
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/__init__.py", line 16, in <module>
from nemo.collections.asr.parts.preprocessing.features import FeaturizerFactory, FilterbankFeatures, WaveformFeaturizer
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/features.py", line 44, in <module>
from nemo.collections.asr.parts.preprocessing.perturb import AudioAugmentor
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/perturb.py", line 50, in <module>
from nemo.collections.common.parts.preprocessing import collections, parsers
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/common/__init__.py", line 16, in <module>
from nemo.collections.common import data, losses, parts, tokenizers
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/common/parts/__init__.py", line 15, in <module>
from nemo.collections.common.parts.adapter_modules import LinearAdapter, LinearAdapterConfig
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/common/parts/adapter_modules.py", line 147, in <module>
@dataclass
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1230, in dataclass
return wrap(cls)
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1220, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 958, in _process_class
cls_fields.append(_get_field(cls, name, type, kw_only))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 815, in _get_field
raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'nemo.core.classes.mixins.adapter_mixin_strategies.ResidualAddAdapterStrategyConfig'> for field adapter_strategy is not allowed: use default_factory
```
It's easy to fix with a patch like this:
```
--- nemo/collections/common/parts/adapter_modules.py.orig 2023-08-04 13:55:53.464534800 +0200
+++ nemo/collections/common/parts/adapter_modules.py 2023-08-04 14:05:45.579537700 +0200
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass, is_dataclass
+from dataclasses import dataclass, is_dataclass, field
from typing import Any, Optional
from hydra.utils import instantiate
@@ -151,5 +151,5 @@ class LinearAdapterConfig:
activation: str = 'swish'
norm_position: str = 'pre'
dropout: float = 0.0
- adapter_strategy: Optional[Any] = adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
+ adapter_strategy: Optional[Any] = field(default_factory=adapter_mixin_strategies.ResidualAddAdapterStrategyConfig)
_target_: str = "{0}.{1}".format(LinearAdapter.__module__, LinearAdapter.__name__)
```
However then another error of the kind comes up:
```
Traceback (most recent call last):
File "/home/alex/T7/src/speaker_verification/nemo_speaker/test.py", line 5, in <module>
import nemo.collections.asr as nemo_asr
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/__init__.py", line 15, in <module>
from nemo.collections.asr import data, losses, models, modules
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/models/__init__.py", line 18, in <module>
from nemo.collections.asr.models.clustering_diarizer import ClusteringDiarizer
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/models/clustering_diarizer.py", line 29, in <module>
from nemo.collections.asr.metrics.der import score_labels
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/metrics/der.py", line 24, in <module>
from nemo.collections.asr.metrics.wer import word_error_rate
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/metrics/wer.py", line 27, in <module>
from nemo.collections.asr.parts.submodules import ctc_beam_decoding, ctc_greedy_decoding
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/parts/submodules/ctc_beam_decoding.py", line 593, in <module>
@dataclass
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1230, in dataclass
return wrap(cls)
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1220, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 958, in _process_class
cls_fields.append(_get_field(cls, name, type, kw_only))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 815, in _get_field
raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'nemo.collections.asr.parts.submodules.ctc_beam_decoding.FlashlightConfig'> for field flashlight_cfg is not allowed: use default_factory
```
This can also be fixed accordingly, but all in all, it appears these issues are pretty common in the code base.
Looks like NeMo isn't Python 3.11 ready, at least the `asr` collection.
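A minimal sketch of the recurring fix pattern, using hypothetical stand-in classes (`StrategyConfig` and `AdapterConfig` are illustrative names, not NeMo's actual configs):

```python
from dataclasses import dataclass, field


@dataclass
class StrategyConfig:  # hypothetical stand-in for e.g. ResidualAddAdapterStrategyConfig
    scale: float = 1.0


@dataclass
class AdapterConfig:
    # `strategy: StrategyConfig = StrategyConfig()` raises ValueError on
    # Python 3.11+ (the default is unhashable, hence treated as mutable);
    # default_factory is the fix, and it also gives every instance its own
    # sub-config rather than one shared object.
    strategy: StrategyConfig = field(default_factory=StrategyConfig)


a, b = AdapterConfig(), AdapterConfig()
a.strategy.scale = 2.0
print(b.strategy.scale)  # -> 1.0: instances no longer share one default object
```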
**Steps/Code to reproduce bug**
With Python 3.11
1. Install `nemo-toolkit[asr]` either 1.19.1, or current HEAD from git (b498d438fc4c35ebf364a9a1c5cd3e29a2c0fe50)
2. Run `import nemo.collections.asr`
**Expected behavior**
Import `nemo.collections.asr` without errors
**Environment overview (please complete the following information)**
- Environment location: Bare-metal
- Method of NeMo install: `pip install nemo-toolkit==1.19.1` or from source b498d438fc4c35ebf364a9a1c5cd3e29a2c0fe50
**Environment details**
- Ubuntu 22.04
- PyTorch version 2.0.1
- Python version 3.11.4
|
Seems to be similar to #7002
Interesting. The fix is easy but needs to be applied to basically every single place that has this constructor for our adapter configs. Let me see if I can update it. But no guarantees on how soon fixes will come in main.
Looking forward to it @titu1994! Thanks!
@titu1994 I was looking to use NeMo speaker diarization with Python 3.11 and hit this dataclass issue. I patched everything involved in the specific code paths I needed: https://github.com/lmnt-com/NeMo/commit/d89acf9f0152e43dee29d7d1c4667ee34c26ffd7
I was using the neural diarizer as described in https://github.com/NVIDIA/NeMo/tree/main/examples/speaker_tasks/diarization
I'd be happy to upstream this if it's helpful.
I haven't checked whether this is backwards compatible for earlier python/dataclass versions, do you know?
For reference, what led me to this issue, though it's duplicative to the above discussion:
[Similar error](https://github.com/huggingface/datasets/issues/5230)
[StackOverflow solution](https://stackoverflow.com/questions/53632152/why-cant-dataclasses-have-mutable-defaults-in-their-class-attributes-declaratio)
@shaper Thanks for sharing. For brevity, you don't really need a `lambda` when you don't pass any init parameters, like this:
```
field(default_factory=lambda: ConfidenceConfig())
```
You can just do
```
field(default_factory=ConfidenceConfig)
```
It's only needed when you do pass parameter(s), like
```
field(default_factory=lambda: beam_decode.BeamRNNTInferConfig(beam_size=4))
```
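Both forms behave identically for zero-argument construction; a small sketch (class names here are hypothetical stand-ins, not NeMo's):

```python
from dataclasses import dataclass, field


@dataclass
class BeamConfig:  # hypothetical stand-in for BeamRNNTInferConfig
    beam_size: int = 1


@dataclass
class DecodingConfig:
    # no constructor arguments: the class itself is a valid factory
    greedy: BeamConfig = field(default_factory=BeamConfig)
    # constructor arguments: wrap the call in a lambda
    beam: BeamConfig = field(default_factory=lambda: BeamConfig(beam_size=4))


cfg = DecodingConfig()
print(cfg.greedy.beam_size, cfg.beam.beam_size)  # -> 1 4
```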
This issue is stale because it has been open for 30 days with no activity. Remove stale label or comment or this will be closed in 7 days.
I have the same issue. @tango4j suggested using one of the models from https://huggingface.co/spaces/hf-audio/open_asr_leaderboard, but I cannot import nemo.collections.asr:
```
Traceback (most recent call last):
File "/opt/pycharm-2022.3.3/plugins/python/helpers/pycharm/docrunner.py", line 138, in __run
exec(compile(example.source, filename, "single",
File "<doctest NeMoASR[2]>", line 1, in <module>
NeMoASR().apply_asr(file)
^^^^^^^^^
File "/home/cbj/python/cbj/cbj/transcribe/pretrained.py", line 504, in __init__
import nemo.collections.asr as nemo_asr
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/__init__.py", line 15, in <module>
from nemo.collections.asr import data, losses, models, modules
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/losses/__init__.py", line 16, in <module>
from nemo.collections.asr.losses.audio_losses import SDRLoss
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/losses/audio_losses.py", line 21, in <module>
from nemo.collections.asr.parts.preprocessing.features import make_seq_mask_like
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/__init__.py", line 16, in <module>
from nemo.collections.asr.parts.preprocessing.features import FeaturizerFactory, FilterbankFeatures, WaveformFeaturizer
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/features.py", line 44, in <module>
from nemo.collections.asr.parts.preprocessing.perturb import AudioAugmentor
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/perturb.py", line 50, in <module>
from nemo.collections.common.parts.preprocessing import collections, parsers
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/common/__init__.py", line 16, in <module>
from nemo.collections.common import data, losses, parts, tokenizers
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/common/parts/__init__.py", line 15, in <module>
from nemo.collections.common.parts.adapter_modules import LinearAdapter, LinearAdapterConfig
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/common/parts/adapter_modules.py", line 147, in <module>
@dataclass
^^^^^^^^^
File "/opt/py/2023/lib/python3.11/dataclasses.py", line 1230, in dataclass
return wrap(cls)
^^^^^^^^^
File "/opt/py/2023/lib/python3.11/dataclasses.py", line 1220, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/py/2023/lib/python3.11/dataclasses.py", line 958, in _process_class
cls_fields.append(_get_field(cls, name, type, kw_only))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/py/2023/lib/python3.11/dataclasses.py", line 815, in _get_field
raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'nemo.core.classes.mixins.adapter_mixin_strategies.ResidualAddAdapterStrategyConfig'> for field adapter_strategy is not allowed: use default_factory
```
For documentation (I had to dig through the linked threads to piece this together):
Mutable defaults were never allowed in dataclasses, but Python 3.11 tightened the check: instead of rejecting only a few known-mutable types (`list`, `dict`, `set`), it now treats any unhashable default as mutable, which is why these config instances are rejected.
An alternative to `default_factory` would be frozen dataclasses, but I don't know whether this code base mutates the config objects after construction.
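The behavior change and the `default_factory` workaround can be sketched in a few lines (the class names here are illustrative, not from NeMo):

```python
from dataclasses import dataclass, field

@dataclass
class Inner:
    x: int = 0

# On Python 3.11+ the direct form raises ValueError, because Inner()
# is a regular dataclass instance (eq=True, not frozen) and therefore
# unhashable -- unhashability is the new indicator for "mutable":
#
# @dataclass
# class Broken:
#     inner: Inner = Inner()   # ValueError: mutable default ...

@dataclass
class Fixed:
    # default_factory builds a fresh Inner per instance, so instances
    # never share default state and the 3.11 check passes
    inner: Inner = field(default_factory=Inner)

a, b = Fixed(), Fixed()
a.inner.x = 5
print(a.inner.x, b.inner.x)  # 5 0 -- each instance has its own Inner
```

This is also why the old form was a hazard even before 3.11: a single shared default instance would have been mutated by every config object at once.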
You need to update to NeMo 1.20; omegaconf shipped a fix that should resolve this.
I have NeMo 1.20.0.
I installed NeMo yesterday with `pip install nemo_toolkit` and `pip install pytorch_lightning`,
so it should be the newest PyPI version.
```
$ pip show nemo_toolkit
Name: nemo-toolkit
Version: 1.20.0
Summary: NeMo - a toolkit for Conversational AI
Home-page: https://github.com/nvidia/nemo
Author: NVIDIA
Author-email: nemo-toolkit@nvidia.com
License: Apache2
Location: /opt/py/2023/lib/python3.11/site-packages
Requires: huggingface-hub, numba, numpy, onnx, python-dateutil, ruamel.yaml, scikit-learn, setuptools, tensorboard, text-unidecode, torch, tqdm, wget, wrapt
Required-by:
$ pip show omegaconf
Name: omegaconf
Version: 2.3.0
Summary: A flexible configuration library
Home-page: https://github.com/omry/omegaconf
Author: Omry Yadan
Author-email: omry@yadan.net
License:
Location: /home/cbj/.local/lib/python3.11/site-packages
Requires: antlr4-python3-runtime, PyYAML
Required-by: hydra-core
$ python -c "import nemo.collections.asr as nemo_asr"
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/__init__.py", line 15, in <module>
from nemo.collections.asr import data, losses, models, modules
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/losses/__init__.py", line 16, in <module>
from nemo.collections.asr.losses.audio_losses import SDRLoss
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/losses/audio_losses.py", line 21, in <module>
from nemo.collections.asr.parts.preprocessing.features import make_seq_mask_like
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/__init__.py", line 16, in <module>
from nemo.collections.asr.parts.preprocessing.features import FeaturizerFactory, FilterbankFeatures, WaveformFeaturizer
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/features.py", line 44, in <module>
from nemo.collections.asr.parts.preprocessing.perturb import AudioAugmentor
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/perturb.py", line 50, in <module>
from nemo.collections.common.parts.preprocessing import collections, parsers
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/common/__init__.py", line 16, in <module>
from nemo.collections.common import data, losses, parts, tokenizers
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/common/parts/__init__.py", line 15, in <module>
from nemo.collections.common.parts.adapter_modules import LinearAdapter, LinearAdapterConfig
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/common/parts/adapter_modules.py", line 147, in <module>
@dataclass
^^^^^^^^^
File "/opt/py/2023/lib/python3.11/dataclasses.py", line 1230, in dataclass
return wrap(cls)
^^^^^^^^^
File "/opt/py/2023/lib/python3.11/dataclasses.py", line 1220, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/py/2023/lib/python3.11/dataclasses.py", line 958, in _process_class
cls_fields.append(_get_field(cls, name, type, kw_only))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/py/2023/lib/python3.11/dataclasses.py", line 815, in _get_field
raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'nemo.core.classes.mixins.adapter_mixin_strategies.ResidualAddAdapterStrategyConfig'> for field adapter_strategy is not allowed: use default_factory
```
Hmm ok I'll take a look
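For reference, the frozen-dataclass alternative mentioned earlier would also satisfy the 3.11 check, since a frozen instance is hashable and so counts as immutable. But it makes the shared default read-only, which conflicts with configs that are mutated after construction (e.g. a `__post_init__` adjusting a nested field), so wrapping each default in `field(default_factory=...)` is the safer fix. A minimal sketch with hypothetical names:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FrozenStrategy:
    scale: float = 1.0

@dataclass
class AdapterConfig:
    # A frozen dataclass instance is hashable, so Python 3.11 accepts
    # it directly as a default -- no default_factory needed...
    strategy: FrozenStrategy = FrozenStrategy()
    # ...but every AdapterConfig shares this one read-only instance;
    # any attempt to set cfg.strategy.scale raises FrozenInstanceError.
    dim: int = 64

cfg = AdapterConfig()
print(cfg.strategy.scale)  # 1.0
```

Because the shared instance is immutable, the sharing is harmless here; the trade-off is that post-construction tweaks to nested defaults become impossible, which is why a per-instance `default_factory` is usually preferred for mutable config trees.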
|
2023-10-03T19:14:38Z
|
<patch>
diff --git a/examples/asr/experimental/k2/align_speech_parallel.py b/examples/asr/experimental/k2/align_speech_parallel.py
--- a/examples/asr/experimental/k2/align_speech_parallel.py
+++ b/examples/asr/experimental/k2/align_speech_parallel.py
@@ -74,7 +74,7 @@
import os
-from dataclasses import dataclass, is_dataclass
+from dataclasses import dataclass, field, is_dataclass
from typing import Optional
import pytorch_lightning as ptl
@@ -94,12 +94,14 @@
@dataclass
class ParallelAlignmentConfig:
model: Optional[str] = None # name
- predict_ds: ASRDatasetConfig = ASRDatasetConfig(return_sample_id=True, num_workers=4)
- aligner_args: K2AlignerWrapperModelConfig = K2AlignerWrapperModelConfig()
+ predict_ds: ASRDatasetConfig = field(
+ default_factory=lambda: ASRDatasetConfig(return_sample_id=True, num_workers=4)
+ )
+ aligner_args: K2AlignerWrapperModelConfig = field(default_factory=lambda: K2AlignerWrapperModelConfig())
output_path: str = MISSING
model_stride: int = 8
- trainer: TrainerConfig = TrainerConfig(gpus=-1, accelerator="ddp")
+ trainer: TrainerConfig = field(default_factory=lambda: TrainerConfig(gpus=-1, accelerator="ddp"))
# there arguments will be ignored
return_predictions: bool = False
diff --git a/nemo/collections/asr/metrics/rnnt_wer.py b/nemo/collections/asr/metrics/rnnt_wer.py
--- a/nemo/collections/asr/metrics/rnnt_wer.py
+++ b/nemo/collections/asr/metrics/rnnt_wer.py
@@ -15,7 +15,7 @@
import copy
import re
from abc import abstractmethod
-from dataclasses import dataclass, is_dataclass
+from dataclasses import dataclass, field, is_dataclass
from typing import Callable, Dict, List, Optional, Tuple, Union
import editdistance
@@ -1299,7 +1299,7 @@ class RNNTDecodingConfig:
preserve_alignments: Optional[bool] = None
# confidence config
- confidence_cfg: ConfidenceConfig = ConfidenceConfig()
+ confidence_cfg: ConfidenceConfig = field(default_factory=lambda: ConfidenceConfig())
# RNNT Joint fused batch size
fused_batch_size: Optional[int] = None
@@ -1317,10 +1317,10 @@ class RNNTDecodingConfig:
rnnt_timestamp_type: str = "all" # can be char, word or all for both
# greedy decoding config
- greedy: greedy_decode.GreedyRNNTInferConfig = greedy_decode.GreedyRNNTInferConfig()
+ greedy: greedy_decode.GreedyRNNTInferConfig = field(default_factory=lambda: greedy_decode.GreedyRNNTInferConfig())
# beam decoding config
- beam: beam_decode.BeamRNNTInferConfig = beam_decode.BeamRNNTInferConfig(beam_size=4)
+ beam: beam_decode.BeamRNNTInferConfig = field(default_factory=lambda: beam_decode.BeamRNNTInferConfig(beam_size=4))
# can be used to change temperature for decoding
temperature: float = 1.0
diff --git a/nemo/collections/asr/metrics/wer.py b/nemo/collections/asr/metrics/wer.py
--- a/nemo/collections/asr/metrics/wer.py
+++ b/nemo/collections/asr/metrics/wer.py
@@ -14,7 +14,7 @@
import re
from abc import abstractmethod
-from dataclasses import dataclass, is_dataclass
+from dataclasses import dataclass, field, is_dataclass
from typing import Callable, Dict, List, Optional, Tuple, Union
import editdistance
@@ -1297,13 +1297,17 @@ class CTCDecodingConfig:
batch_dim_index: int = 0
# greedy decoding config
- greedy: ctc_greedy_decoding.GreedyCTCInferConfig = ctc_greedy_decoding.GreedyCTCInferConfig()
+ greedy: ctc_greedy_decoding.GreedyCTCInferConfig = field(
+ default_factory=lambda: ctc_greedy_decoding.GreedyCTCInferConfig()
+ )
# beam decoding config
- beam: ctc_beam_decoding.BeamCTCInferConfig = ctc_beam_decoding.BeamCTCInferConfig(beam_size=4)
+ beam: ctc_beam_decoding.BeamCTCInferConfig = field(
+ default_factory=lambda: ctc_beam_decoding.BeamCTCInferConfig(beam_size=4)
+ )
# confidence config
- confidence_cfg: ConfidenceConfig = ConfidenceConfig()
+ confidence_cfg: ConfidenceConfig = field(default_factory=lambda: ConfidenceConfig())
# can be used to change temperature for decoding
temperature: float = 1.0
diff --git a/nemo/collections/asr/models/configs/aligner_config.py b/nemo/collections/asr/models/configs/aligner_config.py
--- a/nemo/collections/asr/models/configs/aligner_config.py
+++ b/nemo/collections/asr/models/configs/aligner_config.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from nemo.collections.asr.parts.k2.classes import GraphModuleConfig
@@ -35,10 +35,10 @@ class AlignerWrapperModelConfig:
word_output: bool = True
cpu_decoding: bool = False
decode_batch_size: int = 0
- ctc_cfg: AlignerCTCConfig = AlignerCTCConfig()
- rnnt_cfg: AlignerRNNTConfig = AlignerRNNTConfig()
+ ctc_cfg: AlignerCTCConfig = field(default_factory=lambda: AlignerCTCConfig())
+ rnnt_cfg: AlignerRNNTConfig = field(default_factory=lambda: AlignerRNNTConfig())
@dataclass
class K2AlignerWrapperModelConfig(AlignerWrapperModelConfig):
- decoder_module_cfg: GraphModuleConfig = GraphModuleConfig()
+ decoder_module_cfg: GraphModuleConfig = field(default_factory=lambda: GraphModuleConfig())
diff --git a/nemo/collections/asr/models/configs/asr_models_config.py b/nemo/collections/asr/models/configs/asr_models_config.py
--- a/nemo/collections/asr/models/configs/asr_models_config.py
+++ b/nemo/collections/asr/models/configs/asr_models_config.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional
from omegaconf import MISSING
@@ -74,24 +74,32 @@ class EncDecCTCConfig(model_cfg.ModelConfig):
labels: List[str] = MISSING
# Dataset configs
- train_ds: ASRDatasetConfig = ASRDatasetConfig(manifest_filepath=None, shuffle=True)
- validation_ds: ASRDatasetConfig = ASRDatasetConfig(manifest_filepath=None, shuffle=False)
- test_ds: ASRDatasetConfig = ASRDatasetConfig(manifest_filepath=None, shuffle=False)
+ train_ds: ASRDatasetConfig = field(default_factory=lambda: ASRDatasetConfig(manifest_filepath=None, shuffle=True))
+ validation_ds: ASRDatasetConfig = field(
+ default_factory=lambda: ASRDatasetConfig(manifest_filepath=None, shuffle=False)
+ )
+ test_ds: ASRDatasetConfig = field(default_factory=lambda: ASRDatasetConfig(manifest_filepath=None, shuffle=False))
# Optimizer / Scheduler config
- optim: Optional[model_cfg.OptimConfig] = model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
+ optim: Optional[model_cfg.OptimConfig] = field(
+ default_factory=lambda: model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
+ )
# Model component configs
- preprocessor: AudioToMelSpectrogramPreprocessorConfig = AudioToMelSpectrogramPreprocessorConfig()
- spec_augment: Optional[SpectrogramAugmentationConfig] = SpectrogramAugmentationConfig()
- encoder: ConvASREncoderConfig = ConvASREncoderConfig()
- decoder: ConvASRDecoderConfig = ConvASRDecoderConfig()
- decoding: CTCDecodingConfig = CTCDecodingConfig()
+ preprocessor: AudioToMelSpectrogramPreprocessorConfig = field(
+ default_factory=lambda: AudioToMelSpectrogramPreprocessorConfig()
+ )
+ spec_augment: Optional[SpectrogramAugmentationConfig] = field(
+ default_factory=lambda: SpectrogramAugmentationConfig()
+ )
+ encoder: ConvASREncoderConfig = field(default_factory=lambda: ConvASREncoderConfig())
+ decoder: ConvASRDecoderConfig = field(default_factory=lambda: ConvASRDecoderConfig())
+ decoding: CTCDecodingConfig = field(default_factory=lambda: CTCDecodingConfig())
@dataclass
class EncDecCTCModelConfig(model_cfg.NemoConfig):
- model: EncDecCTCConfig = EncDecCTCConfig()
+ model: EncDecCTCConfig = field(default_factory=lambda: EncDecCTCConfig())
@dataclass
diff --git a/nemo/collections/asr/models/configs/classification_models_config.py b/nemo/collections/asr/models/configs/classification_models_config.py
--- a/nemo/collections/asr/models/configs/classification_models_config.py
+++ b/nemo/collections/asr/models/configs/classification_models_config.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional
from omegaconf import MISSING
@@ -72,30 +72,40 @@ class EncDecClassificationConfig(model_cfg.ModelConfig):
timesteps: int = MISSING
# Dataset configs
- train_ds: EncDecClassificationDatasetConfig = EncDecClassificationDatasetConfig(
- manifest_filepath=None, shuffle=True, trim_silence=False
+ train_ds: EncDecClassificationDatasetConfig = field(
+ default_factory=lambda: EncDecClassificationDatasetConfig(
+ manifest_filepath=None, shuffle=True, trim_silence=False
+ )
)
- validation_ds: EncDecClassificationDatasetConfig = EncDecClassificationDatasetConfig(
- manifest_filepath=None, shuffle=False
+ validation_ds: EncDecClassificationDatasetConfig = field(
+ default_factory=lambda: EncDecClassificationDatasetConfig(manifest_filepath=None, shuffle=False)
)
- test_ds: EncDecClassificationDatasetConfig = EncDecClassificationDatasetConfig(
- manifest_filepath=None, shuffle=False
+ test_ds: EncDecClassificationDatasetConfig = field(
+ default_factory=lambda: EncDecClassificationDatasetConfig(manifest_filepath=None, shuffle=False)
)
# Optimizer / Scheduler config
- optim: Optional[model_cfg.OptimConfig] = model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
+ optim: Optional[model_cfg.OptimConfig] = field(
+ default_factory=lambda: model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
+ )
# Model component configs
- preprocessor: AudioToMFCCPreprocessorConfig = AudioToMFCCPreprocessorConfig()
- spec_augment: Optional[SpectrogramAugmentationConfig] = SpectrogramAugmentationConfig()
- crop_or_pad_augment: Optional[CropOrPadSpectrogramAugmentationConfig] = CropOrPadSpectrogramAugmentationConfig(
- audio_length=timesteps
+ preprocessor: AudioToMFCCPreprocessorConfig = field(default_factory=lambda: AudioToMFCCPreprocessorConfig())
+ spec_augment: Optional[SpectrogramAugmentationConfig] = field(
+ default_factory=lambda: SpectrogramAugmentationConfig()
+ )
+ crop_or_pad_augment: Optional[CropOrPadSpectrogramAugmentationConfig] = field(
+ default_factory=lambda: CropOrPadSpectrogramAugmentationConfig(audio_length=-1)
)
- encoder: ConvASREncoderConfig = ConvASREncoderConfig()
- decoder: ConvASRDecoderClassificationConfig = ConvASRDecoderClassificationConfig()
+ encoder: ConvASREncoderConfig = field(default_factory=lambda: ConvASREncoderConfig())
+ decoder: ConvASRDecoderClassificationConfig = field(default_factory=lambda: ConvASRDecoderClassificationConfig())
+
+ def __post_init__(self):
+ if self.crop_or_pad_augment is not None:
+ self.crop_or_pad_augment.audio_length = self.timesteps
@dataclass
class EncDecClassificationModelConfig(model_cfg.NemoConfig):
- model: EncDecClassificationConfig = EncDecClassificationConfig()
+ model: EncDecClassificationConfig = field(default_factory=lambda: EncDecClassificationConfig())
diff --git a/nemo/collections/asr/models/configs/diarizer_config.py b/nemo/collections/asr/models/configs/diarizer_config.py
--- a/nemo/collections/asr/models/configs/diarizer_config.py
+++ b/nemo/collections/asr/models/configs/diarizer_config.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import asdict, dataclass
+from dataclasses import asdict, dataclass, field
from typing import Any, Dict, Optional, Tuple, Union
@@ -78,9 +78,9 @@ class ASRDiarizerParams(DiarizerComponentConfig):
@dataclass
class ASRDiarizerConfig(DiarizerComponentConfig):
model_path: Optional[str] = "stt_en_conformer_ctc_large"
- parameters: ASRDiarizerParams = ASRDiarizerParams()
- ctc_decoder_parameters: ASRDiarizerCTCDecoderParams = ASRDiarizerCTCDecoderParams()
- realigning_lm_parameters: ASRRealigningLMParams = ASRRealigningLMParams()
+ parameters: ASRDiarizerParams = field(default_factory=lambda: ASRDiarizerParams())
+ ctc_decoder_parameters: ASRDiarizerCTCDecoderParams = field(default_factory=lambda: ASRDiarizerCTCDecoderParams())
+ realigning_lm_parameters: ASRRealigningLMParams = field(default_factory=lambda: ASRRealigningLMParams())
@dataclass
@@ -102,7 +102,7 @@ class VADParams(DiarizerComponentConfig):
class VADConfig(DiarizerComponentConfig):
model_path: str = "vad_multilingual_marblenet" # .nemo local model path or pretrained VAD model name
external_vad_manifest: Optional[str] = None
- parameters: VADParams = VADParams()
+ parameters: VADParams = field(default_factory=lambda: VADParams())
@dataclass
@@ -121,7 +121,7 @@ class SpeakerEmbeddingsParams(DiarizerComponentConfig):
class SpeakerEmbeddingsConfig(DiarizerComponentConfig):
# .nemo local model path or pretrained model name (titanet_large, ecapa_tdnn or speakerverification_speakernet)
model_path: Optional[str] = None
- parameters: SpeakerEmbeddingsParams = SpeakerEmbeddingsParams()
+ parameters: SpeakerEmbeddingsParams = field(default_factory=lambda: SpeakerEmbeddingsParams())
@dataclass
@@ -142,7 +142,7 @@ class ClusteringParams(DiarizerComponentConfig):
@dataclass
class ClusteringConfig(DiarizerComponentConfig):
- parameters: ClusteringParams = ClusteringParams()
+ parameters: ClusteringParams = field(default_factory=lambda: ClusteringParams())
@dataclass
@@ -166,7 +166,7 @@ class MSDDParams(DiarizerComponentConfig):
@dataclass
class MSDDConfig(DiarizerComponentConfig):
model_path: Optional[str] = "diar_msdd_telephonic"
- parameters: MSDDParams = MSDDParams()
+ parameters: MSDDParams = field(default_factory=lambda: MSDDParams())
@dataclass
@@ -176,16 +176,16 @@ class DiarizerConfig(DiarizerComponentConfig):
oracle_vad: bool = False # If True, uses RTTM files provided in the manifest file to get VAD timestamps
collar: float = 0.25 # Collar value for scoring
ignore_overlap: bool = True # Consider or ignore overlap segments while scoring
- vad: VADConfig = VADConfig()
- speaker_embeddings: SpeakerEmbeddingsConfig = SpeakerEmbeddingsConfig()
- clustering: ClusteringConfig = ClusteringConfig()
- msdd_model: MSDDConfig = MSDDConfig()
- asr: ASRDiarizerConfig = ASRDiarizerConfig()
+ vad: VADConfig = field(default_factory=lambda: VADConfig())
+ speaker_embeddings: SpeakerEmbeddingsConfig = field(default_factory=lambda: SpeakerEmbeddingsConfig())
+ clustering: ClusteringConfig = field(default_factory=lambda: ClusteringConfig())
+ msdd_model: MSDDConfig = field(default_factory=lambda: MSDDConfig())
+ asr: ASRDiarizerConfig = field(default_factory=lambda: ASRDiarizerConfig())
@dataclass
class NeuralDiarizerInferenceConfig(DiarizerComponentConfig):
- diarizer: DiarizerConfig = DiarizerConfig()
+ diarizer: DiarizerConfig = field(default_factory=lambda: DiarizerConfig())
device: str = "cpu"
verbose: bool = False
batch_size: int = 64
diff --git a/nemo/collections/asr/models/configs/k2_sequence_models_config.py b/nemo/collections/asr/models/configs/k2_sequence_models_config.py
--- a/nemo/collections/asr/models/configs/k2_sequence_models_config.py
+++ b/nemo/collections/asr/models/configs/k2_sequence_models_config.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from nemo.collections.asr.models.configs.asr_models_config import EncDecCTCConfig
from nemo.collections.asr.parts.k2.classes import GraphModuleConfig as BackendConfig
@@ -26,14 +26,14 @@ class GraphModuleConfig:
split_batch_size: int = 0
dec_type: str = "topo"
transcribe_training: bool = True
- backend_cfg: BackendConfig = BackendConfig()
+ backend_cfg: BackendConfig = field(default_factory=lambda: BackendConfig())
@dataclass
class EncDecK2SeqConfig(EncDecCTCConfig):
- graph_module_cfg: GraphModuleConfig = GraphModuleConfig()
+ graph_module_cfg: GraphModuleConfig = field(default_factory=lambda: GraphModuleConfig())
@dataclass
class EncDecK2SeqModelConfig(NemoConfig):
- model: EncDecK2SeqConfig = EncDecK2SeqConfig()
+ model: EncDecK2SeqConfig = field(default_factory=lambda: EncDecK2SeqConfig())
diff --git a/nemo/collections/asr/models/configs/matchboxnet_config.py b/nemo/collections/asr/models/configs/matchboxnet_config.py
--- a/nemo/collections/asr/models/configs/matchboxnet_config.py
+++ b/nemo/collections/asr/models/configs/matchboxnet_config.py
@@ -107,30 +107,38 @@ class MatchboxNetModelConfig(clf_cfg.EncDecClassificationConfig):
labels: List[str] = MISSING
# Dataset configs
- train_ds: clf_cfg.EncDecClassificationDatasetConfig = clf_cfg.EncDecClassificationDatasetConfig(
- manifest_filepath=None, shuffle=True, trim_silence=False
+ train_ds: clf_cfg.EncDecClassificationDatasetConfig = field(
+ default_factory=lambda: clf_cfg.EncDecClassificationDatasetConfig(
+ manifest_filepath=None, shuffle=True, trim_silence=False
+ )
)
- validation_ds: clf_cfg.EncDecClassificationDatasetConfig = clf_cfg.EncDecClassificationDatasetConfig(
- manifest_filepath=None, shuffle=False
+ validation_ds: clf_cfg.EncDecClassificationDatasetConfig = field(
+ default_factory=lambda: clf_cfg.EncDecClassificationDatasetConfig(manifest_filepath=None, shuffle=False)
)
- test_ds: clf_cfg.EncDecClassificationDatasetConfig = clf_cfg.EncDecClassificationDatasetConfig(
- manifest_filepath=None, shuffle=False
+ test_ds: clf_cfg.EncDecClassificationDatasetConfig = field(
+ default_factory=lambda: clf_cfg.EncDecClassificationDatasetConfig(manifest_filepath=None, shuffle=False)
)
# Optimizer / Scheduler config
- optim: Optional[model_cfg.OptimConfig] = model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
+ optim: Optional[model_cfg.OptimConfig] = field(
+ default_factory=lambda: model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
+ )
# Model general component configs
- preprocessor: AudioToMFCCPreprocessorConfig = AudioToMFCCPreprocessorConfig(window_size=0.025)
- spec_augment: Optional[SpectrogramAugmentationConfig] = SpectrogramAugmentationConfig(
- freq_masks=2, time_masks=2, freq_width=15, time_width=25, rect_masks=5, rect_time=25, rect_freq=15
+ preprocessor: AudioToMFCCPreprocessorConfig = field(
+ default_factory=lambda: AudioToMFCCPreprocessorConfig(window_size=0.025)
+ )
+ spec_augment: Optional[SpectrogramAugmentationConfig] = field(
+ default_factory=lambda: SpectrogramAugmentationConfig(
+ freq_masks=2, time_masks=2, freq_width=15, time_width=25, rect_masks=5, rect_time=25, rect_freq=15
+ )
)
- crop_or_pad_augment: Optional[CropOrPadSpectrogramAugmentationConfig] = CropOrPadSpectrogramAugmentationConfig(
- audio_length=128
+ crop_or_pad_augment: Optional[CropOrPadSpectrogramAugmentationConfig] = field(
+ default_factory=lambda: CropOrPadSpectrogramAugmentationConfig(audio_length=128)
)
- encoder: ConvASREncoderConfig = ConvASREncoderConfig(activation="relu")
- decoder: ConvASRDecoderClassificationConfig = ConvASRDecoderClassificationConfig()
+ encoder: ConvASREncoderConfig = field(default_factory=lambda: ConvASREncoderConfig(activation="relu"))
+ decoder: ConvASRDecoderClassificationConfig = field(default_factory=lambda: ConvASRDecoderClassificationConfig())
@dataclass
diff --git a/nemo/collections/asr/models/configs/quartznet_config.py b/nemo/collections/asr/models/configs/quartznet_config.py
--- a/nemo/collections/asr/models/configs/quartznet_config.py
+++ b/nemo/collections/asr/models/configs/quartznet_config.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import Any, Callable, List, Optional
from omegaconf import MISSING
@@ -174,20 +174,30 @@ class JasperModelConfig(ctc_cfg.EncDecCTCConfig):
labels: List[str] = MISSING
# Dataset configs
- train_ds: ctc_cfg.ASRDatasetConfig = ctc_cfg.ASRDatasetConfig(
- manifest_filepath=None, shuffle=True, trim_silence=True
+ train_ds: ctc_cfg.ASRDatasetConfig = field(
+ default_factory=lambda: ctc_cfg.ASRDatasetConfig(manifest_filepath=None, shuffle=True, trim_silence=True)
+ )
+ validation_ds: ctc_cfg.ASRDatasetConfig = field(
+ default_factory=lambda: ctc_cfg.ASRDatasetConfig(manifest_filepath=None, shuffle=False)
+ )
+ test_ds: ctc_cfg.ASRDatasetConfig = field(
+ default_factory=lambda: ctc_cfg.ASRDatasetConfig(manifest_filepath=None, shuffle=False)
)
- validation_ds: ctc_cfg.ASRDatasetConfig = ctc_cfg.ASRDatasetConfig(manifest_filepath=None, shuffle=False)
- test_ds: ctc_cfg.ASRDatasetConfig = ctc_cfg.ASRDatasetConfig(manifest_filepath=None, shuffle=False)
# Optimizer / Scheduler config
- optim: Optional[model_cfg.OptimConfig] = model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
+ optim: Optional[model_cfg.OptimConfig] = field(
+ default_factory=lambda: model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
+ )
# Model general component configs
- preprocessor: AudioToMelSpectrogramPreprocessorConfig = AudioToMelSpectrogramPreprocessorConfig()
- spec_augment: Optional[SpectrogramAugmentationConfig] = SpectrogramAugmentationConfig()
- encoder: ConvASREncoderConfig = ConvASREncoderConfig(activation="relu")
- decoder: ConvASRDecoderConfig = ConvASRDecoderConfig()
+ preprocessor: AudioToMelSpectrogramPreprocessorConfig = field(
+ default_factory=lambda: AudioToMelSpectrogramPreprocessorConfig()
+ )
+ spec_augment: Optional[SpectrogramAugmentationConfig] = field(
+ default_factory=lambda: SpectrogramAugmentationConfig()
+ )
+ encoder: ConvASREncoderConfig = field(default_factory=lambda: ConvASREncoderConfig(activation="relu"))
+ decoder: ConvASRDecoderConfig = field(default_factory=lambda: ConvASRDecoderConfig())
@dataclass
diff --git a/nemo/collections/asr/modules/audio_preprocessing.py b/nemo/collections/asr/modules/audio_preprocessing.py
--- a/nemo/collections/asr/modules/audio_preprocessing.py
+++ b/nemo/collections/asr/modules/audio_preprocessing.py
@@ -634,6 +634,12 @@ def __init__(self, audio_length):
super(CropOrPadSpectrogramAugmentation, self).__init__()
self.audio_length = audio_length
+ if self.audio_length < 0:
+ raise ValueError(
+ 'audio_length must be non-negative. If using a dataclass with OmegaConf, '
+ 'please call OmegaConf.to_object(cfg) to call appropriate __post_init__ methods.'
+ )
+
@typecheck()
@torch.no_grad()
def forward(self, input_signal, length):
diff --git a/nemo/collections/asr/parts/k2/classes.py b/nemo/collections/asr/parts/k2/classes.py
--- a/nemo/collections/asr/parts/k2/classes.py
+++ b/nemo/collections/asr/parts/k2/classes.py
@@ -13,7 +13,7 @@
# limitations under the License.
from abc import ABC
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import Any, Optional, Tuple
import torch
@@ -43,7 +43,7 @@ class GraphModuleConfig:
topo_with_self_loops: bool = True
token_lm: Optional[Any] = None
intersect_pruned: bool = False
- intersect_conf: GraphIntersectDenseConfig = GraphIntersectDenseConfig()
+ intersect_conf: GraphIntersectDenseConfig = field(default_factory=lambda: GraphIntersectDenseConfig())
boost_coeff: float = 0.0
predictor_window_size: int = 0
predictor_step_size: int = 1
diff --git a/nemo/collections/asr/parts/submodules/adapters/multi_head_attention_adapter_module.py b/nemo/collections/asr/parts/submodules/adapters/multi_head_attention_adapter_module.py
--- a/nemo/collections/asr/parts/submodules/adapters/multi_head_attention_adapter_module.py
+++ b/nemo/collections/asr/parts/submodules/adapters/multi_head_attention_adapter_module.py
@@ -13,7 +13,7 @@
# limitations under the License.
import math
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import Any, Optional
import torch
@@ -183,7 +183,7 @@ class MultiHeadAttentionAdapterConfig:
n_feat: int
dropout_rate: float = 0.0
proj_dim: Optional[int] = None
- adapter_strategy: Optional[Any] = MHAResidualAddAdapterStrategyConfig()
+ adapter_strategy: Optional[Any] = field(default_factory=lambda: MHAResidualAddAdapterStrategyConfig())
_target_: str = "{0}.{1}".format(MultiHeadAttentionAdapter.__module__, MultiHeadAttentionAdapter.__name__)
@@ -287,7 +287,7 @@ class RelPositionMultiHeadAttentionAdapterConfig:
n_feat: int
dropout_rate: float = 0.0
proj_dim: Optional[int] = None
- adapter_strategy: Optional[Any] = MHAResidualAddAdapterStrategyConfig()
+ adapter_strategy: Optional[Any] = field(default_factory=lambda: MHAResidualAddAdapterStrategyConfig())
_target_: str = "{0}.{1}".format(
RelPositionMultiHeadAttentionAdapter.__module__, RelPositionMultiHeadAttentionAdapter.__name__
)
@@ -336,7 +336,9 @@ class PositionalEncodingAdapterConfig:
d_model: int
max_len: int = 5000
xscale: float = 1.0
- adapter_strategy: Optional[Any] = adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
+ adapter_strategy: Optional[Any] = field(
+ default_factory=lambda: adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
+ )
_target_: str = "{0}.{1}".format(PositionalEncodingAdapter.__module__, PositionalEncodingAdapter.__name__)
@@ -378,5 +380,7 @@ class RelPositionalEncodingAdapterConfig:
d_model: int
max_len: int = 5000
xscale: float = 1.0
- adapter_strategy: Optional[Any] = adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
+ adapter_strategy: Optional[Any] = field(
+ default_factory=lambda: adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
+ )
_target_: str = "{0}.{1}".format(RelPositionalEncodingAdapter.__module__, RelPositionalEncodingAdapter.__name__)
diff --git a/nemo/collections/asr/parts/submodules/ctc_beam_decoding.py b/nemo/collections/asr/parts/submodules/ctc_beam_decoding.py
--- a/nemo/collections/asr/parts/submodules/ctc_beam_decoding.py
+++ b/nemo/collections/asr/parts/submodules/ctc_beam_decoding.py
@@ -14,7 +14,7 @@
import math
import os
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import List, Optional, Tuple, Union
import torch
@@ -602,5 +602,5 @@ class BeamCTCInferConfig:
beam_beta: float = 0.0
kenlm_path: Optional[str] = None
- flashlight_cfg: Optional[FlashlightConfig] = FlashlightConfig()
- pyctcdecode_cfg: Optional[PyCTCDecodeConfig] = PyCTCDecodeConfig()
+ flashlight_cfg: Optional[FlashlightConfig] = field(default_factory=lambda: FlashlightConfig())
+ pyctcdecode_cfg: Optional[PyCTCDecodeConfig] = field(default_factory=lambda: PyCTCDecodeConfig())
diff --git a/nemo/collections/asr/parts/submodules/ctc_greedy_decoding.py b/nemo/collections/asr/parts/submodules/ctc_greedy_decoding.py
--- a/nemo/collections/asr/parts/submodules/ctc_greedy_decoding.py
+++ b/nemo/collections/asr/parts/submodules/ctc_greedy_decoding.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import List, Optional
import torch
@@ -253,7 +253,7 @@ class GreedyCTCInferConfig:
preserve_alignments: bool = False
compute_timestamps: bool = False
preserve_frame_confidence: bool = False
- confidence_method_cfg: Optional[ConfidenceMethodConfig] = ConfidenceMethodConfig()
+ confidence_method_cfg: Optional[ConfidenceMethodConfig] = field(default_factory=lambda: ConfidenceMethodConfig())
def __post_init__(self):
# OmegaConf.structured ensures that post_init check is always executed
diff --git a/nemo/collections/asr/parts/submodules/rnnt_greedy_decoding.py b/nemo/collections/asr/parts/submodules/rnnt_greedy_decoding.py
--- a/nemo/collections/asr/parts/submodules/rnnt_greedy_decoding.py
+++ b/nemo/collections/asr/parts/submodules/rnnt_greedy_decoding.py
@@ -26,7 +26,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import List, Optional, Tuple, Union
import numpy as np
@@ -2185,7 +2185,7 @@ class GreedyRNNTInferConfig:
max_symbols_per_step: Optional[int] = 10
preserve_alignments: bool = False
preserve_frame_confidence: bool = False
- confidence_method_cfg: Optional[ConfidenceMethodConfig] = ConfidenceMethodConfig()
+ confidence_method_cfg: Optional[ConfidenceMethodConfig] = field(default_factory=lambda: ConfidenceMethodConfig())
def __post_init__(self):
# OmegaConf.structured ensures that post_init check is always executed
@@ -2201,7 +2201,7 @@ class GreedyBatchedRNNTInferConfig:
max_symbols_per_step: Optional[int] = 10
preserve_alignments: bool = False
preserve_frame_confidence: bool = False
- confidence_method_cfg: Optional[ConfidenceMethodConfig] = ConfidenceMethodConfig()
+ confidence_method_cfg: Optional[ConfidenceMethodConfig] = field(default_factory=lambda: ConfidenceMethodConfig())
def __post_init__(self):
# OmegaConf.structured ensures that post_init check is always executed
diff --git a/nemo/collections/asr/parts/utils/asr_confidence_utils.py b/nemo/collections/asr/parts/utils/asr_confidence_utils.py
--- a/nemo/collections/asr/parts/utils/asr_confidence_utils.py
+++ b/nemo/collections/asr/parts/utils/asr_confidence_utils.py
@@ -14,7 +14,7 @@
import math
from abc import ABC, abstractmethod
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from functools import partial
from typing import List, Optional
@@ -175,7 +175,7 @@ class ConfidenceConfig:
preserve_word_confidence: bool = False
exclude_blank: bool = True
aggregation: str = "min"
- method_cfg: ConfidenceMethodConfig = ConfidenceMethodConfig()
+ method_cfg: ConfidenceMethodConfig = field(default_factory=lambda: ConfidenceMethodConfig())
def __post_init__(self):
# OmegaConf.structured ensures that post_init check is always executed
diff --git a/nemo/collections/common/parts/adapter_modules.py b/nemo/collections/common/parts/adapter_modules.py
--- a/nemo/collections/common/parts/adapter_modules.py
+++ b/nemo/collections/common/parts/adapter_modules.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass, is_dataclass
+from dataclasses import dataclass, field, is_dataclass
from typing import Any, Optional
from hydra.utils import instantiate
@@ -160,5 +160,7 @@ class LinearAdapterConfig:
activation: str = 'swish'
norm_position: str = 'pre'
dropout: float = 0.0
- adapter_strategy: Optional[Any] = adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
+ adapter_strategy: Optional[Any] = field(
+ default_factory=lambda: adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
+ )
_target_: str = "{0}.{1}".format(LinearAdapter.__module__, LinearAdapter.__name__)
diff --git a/nemo/collections/common/tokenizers/en_ja_tokenizers.py b/nemo/collections/common/tokenizers/en_ja_tokenizers.py
--- a/nemo/collections/common/tokenizers/en_ja_tokenizers.py
+++ b/nemo/collections/common/tokenizers/en_ja_tokenizers.py
@@ -14,11 +14,19 @@
import re
from typing import List
-import ipadic
-import MeCab
from pangu import spacing
from sacremoses import MosesDetokenizer, MosesPunctNormalizer, MosesTokenizer
+try:
+ import ipadic
+ import MeCab
+
+ HAVE_MECAB = True
+ HAVE_IPADIC = True
+except (ImportError, ModuleNotFoundError):
+ HAVE_MECAB = False
+ HAVE_IPADIC = False
+
class EnJaProcessor:
"""
@@ -67,6 +75,9 @@ class JaMecabProcessor:
"""
def __init__(self):
+ if not HAVE_MECAB or not HAVE_IPADIC:
+ raise ImportError("Please ensure that you have installed `MeCab` and `ipadic` to use JaMecabProcessor")
+
self.mecab_tokenizer = MeCab.Tagger(ipadic.MECAB_ARGS + " -Owakati")
def detokenize(self, text: List[str]) -> str:
diff --git a/nemo/collections/nlp/models/machine_translation/mt_enc_dec_config.py b/nemo/collections/nlp/models/machine_translation/mt_enc_dec_config.py
--- a/nemo/collections/nlp/models/machine_translation/mt_enc_dec_config.py
+++ b/nemo/collections/nlp/models/machine_translation/mt_enc_dec_config.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import Any, Optional, Tuple
from omegaconf.omegaconf import MISSING
@@ -46,7 +46,7 @@ class MTOptimConfig(OptimConfig):
lr: float = 1e-3
betas: Tuple[float, float] = (0.9, 0.98)
weight_decay: float = 0.0
- sched: Optional[MTSchedConfig] = MTSchedConfig()
+ sched: Optional[MTSchedConfig] = field(default_factory=lambda: MTSchedConfig())
@dataclass
@@ -74,70 +74,80 @@ class MTEncDecModelConfig(EncDecNLPModelConfig):
decoder_tokenizer: Any = MISSING
decoder: Any = MISSING
- head: TokenClassifierConfig = TokenClassifierConfig(log_softmax=True)
+ head: TokenClassifierConfig = field(default_factory=lambda: TokenClassifierConfig(log_softmax=True))
# dataset configurations
- train_ds: Optional[TranslationDataConfig] = TranslationDataConfig(
- src_file_name=MISSING,
- tgt_file_name=MISSING,
- tokens_in_batch=512,
- clean=True,
- shuffle=True,
- cache_ids=False,
- use_cache=False,
+ train_ds: Optional[TranslationDataConfig] = field(
+ default_factory=lambda: TranslationDataConfig(
+ src_file_name=MISSING,
+ tgt_file_name=MISSING,
+ tokens_in_batch=512,
+ clean=True,
+ shuffle=True,
+ cache_ids=False,
+ use_cache=False,
+ )
)
- validation_ds: Optional[TranslationDataConfig] = TranslationDataConfig(
- src_file_name=MISSING,
- tgt_file_name=MISSING,
- tokens_in_batch=512,
- clean=False,
- shuffle=False,
- cache_ids=False,
- use_cache=False,
+ validation_ds: Optional[TranslationDataConfig] = field(
+ default_factory=lambda: TranslationDataConfig(
+ src_file_name=MISSING,
+ tgt_file_name=MISSING,
+ tokens_in_batch=512,
+ clean=False,
+ shuffle=False,
+ cache_ids=False,
+ use_cache=False,
+ )
)
- test_ds: Optional[TranslationDataConfig] = TranslationDataConfig(
- src_file_name=MISSING,
- tgt_file_name=MISSING,
- tokens_in_batch=512,
- clean=False,
- shuffle=False,
- cache_ids=False,
- use_cache=False,
+ test_ds: Optional[TranslationDataConfig] = field(
+ default_factory=lambda: TranslationDataConfig(
+ src_file_name=MISSING,
+ tgt_file_name=MISSING,
+ tokens_in_batch=512,
+ clean=False,
+ shuffle=False,
+ cache_ids=False,
+ use_cache=False,
+ )
)
- optim: Optional[OptimConfig] = MTOptimConfig()
+ optim: Optional[OptimConfig] = field(default_factory=lambda: MTOptimConfig())
@dataclass
class AAYNBaseConfig(MTEncDecModelConfig):
# Attention is All You Need Base Configuration
- encoder_tokenizer: TokenizerConfig = TokenizerConfig(library='yttm')
- decoder_tokenizer: TokenizerConfig = TokenizerConfig(library='yttm')
-
- encoder: NeMoTransformerEncoderConfig = NeMoTransformerEncoderConfig(
- library='nemo',
- model_name=None,
- pretrained=False,
- hidden_size=512,
- inner_size=2048,
- num_layers=6,
- num_attention_heads=8,
- ffn_dropout=0.1,
- attn_score_dropout=0.1,
- attn_layer_dropout=0.1,
+ encoder_tokenizer: TokenizerConfig = field(default_factory=lambda: TokenizerConfig(library='yttm'))
+ decoder_tokenizer: TokenizerConfig = field(default_factory=lambda: TokenizerConfig(library='yttm'))
+
+ encoder: NeMoTransformerEncoderConfig = field(
+ default_factory=lambda: NeMoTransformerEncoderConfig(
+ library='nemo',
+ model_name=None,
+ pretrained=False,
+ hidden_size=512,
+ inner_size=2048,
+ num_layers=6,
+ num_attention_heads=8,
+ ffn_dropout=0.1,
+ attn_score_dropout=0.1,
+ attn_layer_dropout=0.1,
+ )
)
- decoder: NeMoTransformerConfig = NeMoTransformerConfig(
- library='nemo',
- model_name=None,
- pretrained=False,
- hidden_size=512,
- inner_size=2048,
- num_layers=6,
- num_attention_heads=8,
- ffn_dropout=0.1,
- attn_score_dropout=0.1,
- attn_layer_dropout=0.1,
+ decoder: NeMoTransformerConfig = field(
+ default_factory=lambda: NeMoTransformerConfig(
+ library='nemo',
+ model_name=None,
+ pretrained=False,
+ hidden_size=512,
+ inner_size=2048,
+ num_layers=6,
+ num_attention_heads=8,
+ ffn_dropout=0.1,
+ attn_score_dropout=0.1,
+ attn_layer_dropout=0.1,
+ )
)
@@ -150,32 +160,36 @@ class MTBottleneckModelConfig(AAYNBaseConfig):
recon_per_token: bool = True
log_timing: bool = True
- encoder: NeMoTransformerBottleneckEncoderConfig = NeMoTransformerBottleneckEncoderConfig(
- library='nemo',
- model_name=None,
- pretrained=False,
- hidden_size=512,
- inner_size=2048,
- num_layers=6,
- num_attention_heads=8,
- ffn_dropout=0.1,
- attn_score_dropout=0.1,
- attn_layer_dropout=0.1,
- arch='seq2seq',
- hidden_steps=32,
- hidden_blocks=1,
- hidden_init_method='params',
+ encoder: NeMoTransformerBottleneckEncoderConfig = field(
+ default_factory=lambda: NeMoTransformerBottleneckEncoderConfig(
+ library='nemo',
+ model_name=None,
+ pretrained=False,
+ hidden_size=512,
+ inner_size=2048,
+ num_layers=6,
+ num_attention_heads=8,
+ ffn_dropout=0.1,
+ attn_score_dropout=0.1,
+ attn_layer_dropout=0.1,
+ arch='seq2seq',
+ hidden_steps=32,
+ hidden_blocks=1,
+ hidden_init_method='params',
+ )
)
- decoder: NeMoTransformerBottleneckDecoderConfig = NeMoTransformerBottleneckDecoderConfig(
- library='nemo',
- model_name=None,
- pretrained=False,
- inner_size=2048,
- num_layers=6,
- num_attention_heads=8,
- ffn_dropout=0.1,
- attn_score_dropout=0.1,
- attn_layer_dropout=0.1,
- arch='seq2seq',
+ decoder: NeMoTransformerBottleneckDecoderConfig = field(
+ default_factory=lambda: NeMoTransformerBottleneckDecoderConfig(
+ library='nemo',
+ model_name=None,
+ pretrained=False,
+ inner_size=2048,
+ num_layers=6,
+ num_attention_heads=8,
+ ffn_dropout=0.1,
+ attn_score_dropout=0.1,
+ attn_layer_dropout=0.1,
+ arch='seq2seq',
+ )
)
diff --git a/nemo/collections/nlp/models/token_classification/punctuation_capitalization_config.py b/nemo/collections/nlp/models/token_classification/punctuation_capitalization_config.py
--- a/nemo/collections/nlp/models/token_classification/punctuation_capitalization_config.py
+++ b/nemo/collections/nlp/models/token_classification/punctuation_capitalization_config.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import Any, Dict, Optional
from omegaconf.omegaconf import MISSING, DictConfig, OmegaConf, open_dict
@@ -215,13 +215,15 @@ class PunctuationCapitalizationModelConfig:
This config is a part of :class:`~PunctuationCapitalizationConfig`.
"""
- class_labels: ClassLabelsConfig = ClassLabelsConfig()
+ class_labels: ClassLabelsConfig = field(default_factory=lambda: ClassLabelsConfig())
"""A mandatory parameter containing a dictionary with names of label id files used in .nemo checkpoints.
These file names can also be used for passing label vocabularies to the model. If you wish to use ``class_labels``
for passing vocabularies, please provide path to vocabulary files in
``model.common_dataset_parameters.label_vocab_dir`` parameter."""
- common_dataset_parameters: Optional[CommonDatasetParametersConfig] = CommonDatasetParametersConfig()
+ common_dataset_parameters: Optional[CommonDatasetParametersConfig] = field(
+ default_factory=lambda: CommonDatasetParametersConfig()
+ )
"""Label ids and loss mask information information."""
train_ds: Optional[PunctuationCapitalizationTrainDataConfig] = None
@@ -233,16 +235,16 @@ class PunctuationCapitalizationModelConfig:
test_ds: Optional[PunctuationCapitalizationEvalDataConfig] = None
"""A configuration for creating test datasets and data loaders."""
- punct_head: HeadConfig = HeadConfig()
+ punct_head: HeadConfig = field(default_factory=lambda: HeadConfig())
"""A configuration for creating punctuation MLP head that is applied to a language model outputs."""
- capit_head: HeadConfig = HeadConfig()
+ capit_head: HeadConfig = field(default_factory=lambda: HeadConfig())
"""A configuration for creating capitalization MLP head that is applied to a language model outputs."""
- tokenizer: Any = TokenizerConfig()
+ tokenizer: Any = field(default_factory=lambda: TokenizerConfig())
"""A configuration for source text tokenizer."""
- language_model: LanguageModelConfig = LanguageModelConfig()
+ language_model: LanguageModelConfig = field(default_factory=lambda: LanguageModelConfig())
"""A configuration of a BERT-like language model which serves as a model body."""
optim: Optional[Any] = None
@@ -311,22 +313,30 @@ class PunctuationCapitalizationConfig(NemoConfig):
do_testing: bool = False
"""Whether ot perform testing of the model."""
- model: PunctuationCapitalizationModelConfig = PunctuationCapitalizationModelConfig()
+ model: PunctuationCapitalizationModelConfig = field(default_factory=lambda: PunctuationCapitalizationModelConfig())
"""A configuration for the
:class:`~nemo.collections.nlp.models.token_classification.punctuation_capitalization_model.PunctuationCapitalizationModel`
model."""
- trainer: Optional[TrainerConfig] = TrainerConfig()
+ trainer: Optional[TrainerConfig] = field(default_factory=lambda: TrainerConfig())
"""Contains ``Trainer`` Lightning class constructor parameters."""
- exp_manager: Optional[ExpManagerConfig] = ExpManagerConfig(name=name, files_to_copy=[])
+ exp_manager: Optional[ExpManagerConfig] = field(
+ default_factory=lambda: ExpManagerConfig(name=None, files_to_copy=[])
+ )
"""A configuration with various NeMo training options such as output directories, resuming from checkpoint,
tensorboard and W&B logging, and so on. For possible options see :ref:`exp-manager-label`."""
+ def __post_init__(self):
+ if self.exp_manager is not None:
+ self.exp_manager.name = self.name
+
@dataclass
class PunctuationCapitalizationLexicalAudioConfig(PunctuationCapitalizationConfig):
- model: PunctuationCapitalizationLexicalAudioModelConfig = PunctuationCapitalizationLexicalAudioModelConfig()
+ model: PunctuationCapitalizationLexicalAudioModelConfig = field(
+ default_factory=lambda: PunctuationCapitalizationLexicalAudioModelConfig()
+ )
def is_legacy_model_config(model_cfg: DictConfig) -> bool:
diff --git a/nemo/collections/nlp/modules/common/megatron/megatron_encoders.py b/nemo/collections/nlp/modules/common/megatron/megatron_encoders.py
--- a/nemo/collections/nlp/modules/common/megatron/megatron_encoders.py
+++ b/nemo/collections/nlp/modules/common/megatron/megatron_encoders.py
@@ -13,7 +13,6 @@
# limitations under the License.
"""Transformer based language model."""
-from MeCab import Model
from nemo.collections.nlp.modules.common.megatron.megatron_perceiver_encoders import MegatronPerceiverEncoderModule
from nemo.collections.nlp.modules.common.megatron.megatron_transformer_encoder import MegatronTransformerEncoderModule
from nemo.collections.nlp.modules.common.megatron.retrieval_transformer import (
@@ -25,6 +24,13 @@
scaled_init_method_normal,
)
+try:
+ from MeCab import Model
+
+ HAVE_MECAB = True
+except (ImportError, ModuleNotFoundError):
+ HAVE_MECAB = False
+
try:
from apex.transformer.enums import AttnMaskType, ModelType
diff --git a/nemo/collections/tts/models/fastpitch.py b/nemo/collections/tts/models/fastpitch.py
--- a/nemo/collections/tts/models/fastpitch.py
+++ b/nemo/collections/tts/models/fastpitch.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import contextlib
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from pathlib import Path
from typing import List, Optional
@@ -70,12 +70,12 @@ class TextTokenizer:
apostrophe: bool = True
pad_with_space: bool = True
add_blank_at: bool = True
- g2p: G2PConfig = G2PConfig()
+ g2p: G2PConfig = field(default_factory=lambda: G2PConfig())
@dataclass
class TextTokenizerConfig:
- text_tokenizer: TextTokenizer = TextTokenizer()
+ text_tokenizer: TextTokenizer = field(default_factory=lambda: TextTokenizer())
class FastPitchModel(SpectrogramGenerator, Exportable, FastPitchAdapterModelMixin):
diff --git a/nemo/collections/tts/models/tacotron2.py b/nemo/collections/tts/models/tacotron2.py
--- a/nemo/collections/tts/models/tacotron2.py
+++ b/nemo/collections/tts/models/tacotron2.py
@@ -13,7 +13,7 @@
# limitations under the License.
import contextlib
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional
import torch
@@ -53,7 +53,7 @@ class Preprocessor:
@dataclass
class Tacotron2Config:
- preprocessor: Preprocessor = Preprocessor()
+ preprocessor: Preprocessor = field(default_factory=lambda: Preprocessor())
encoder: Dict[Any, Any] = MISSING
decoder: Dict[Any, Any] = MISSING
postnet: Dict[Any, Any] = MISSING
diff --git a/nemo/core/config/modelPT.py b/nemo/core/config/modelPT.py
--- a/nemo/core/config/modelPT.py
+++ b/nemo/core/config/modelPT.py
@@ -58,11 +58,13 @@ class HydraConfig:
class NemoConfig:
name: str = MISSING
model: ModelConfig = MISSING
- trainer: config.TrainerConfig = config.TrainerConfig(
- strategy="ddp", enable_checkpointing=False, logger=False, log_every_n_steps=1, accelerator='gpu'
+ trainer: config.TrainerConfig = field(
+ default_factory=lambda: config.TrainerConfig(
+ strategy="ddp", enable_checkpointing=False, logger=False, log_every_n_steps=1, accelerator='gpu'
+ )
)
- exp_manager: Optional[Any] = exp_manager.ExpManagerConfig()
- hydra: HydraConfig = HydraConfig()
+ exp_manager: Optional[Any] = field(default_factory=lambda: exp_manager.ExpManagerConfig())
+ hydra: HydraConfig = field(default_factory=lambda: HydraConfig())
class ModelConfigBuilder:
diff --git a/nemo/utils/exp_manager.py b/nemo/utils/exp_manager.py
--- a/nemo/utils/exp_manager.py
+++ b/nemo/utils/exp_manager.py
@@ -18,7 +18,7 @@
import sys
import time
import warnings
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from datetime import timedelta
from pathlib import Path
from shutil import copy, move
@@ -146,28 +146,30 @@ class ExpManagerConfig:
create_wandb_logger: Optional[bool] = False
wandb_logger_kwargs: Optional[Dict[Any, Any]] = None
create_mlflow_logger: Optional[bool] = False
- mlflow_logger_kwargs: Optional[MLFlowParams] = MLFlowParams()
+ mlflow_logger_kwargs: Optional[MLFlowParams] = field(default_factory=lambda: MLFlowParams())
create_dllogger_logger: Optional[bool] = False
- dllogger_logger_kwargs: Optional[DLLoggerParams] = DLLoggerParams()
+ dllogger_logger_kwargs: Optional[DLLoggerParams] = field(default_factory=lambda: DLLoggerParams())
create_clearml_logger: Optional[bool] = False
- clearml_logger_kwargs: Optional[ClearMLParams] = ClearMLParams()
+ clearml_logger_kwargs: Optional[ClearMLParams] = field(default_factory=lambda: ClearMLParams())
# Checkpointing parameters
create_checkpoint_callback: Optional[bool] = True
- checkpoint_callback_params: Optional[CallbackParams] = CallbackParams()
+ checkpoint_callback_params: Optional[CallbackParams] = field(default_factory=lambda: CallbackParams())
create_early_stopping_callback: Optional[bool] = False
- early_stopping_callback_params: Optional[EarlyStoppingParams] = EarlyStoppingParams()
+ early_stopping_callback_params: Optional[EarlyStoppingParams] = field(
+ default_factory=lambda: EarlyStoppingParams()
+ )
create_preemption_callback: Optional[bool] = True
# Additional exp_manager arguments
files_to_copy: Optional[List[str]] = None
# logs timing of train/val/test steps
log_step_timing: Optional[bool] = True
- step_timing_kwargs: Optional[StepTimingParams] = StepTimingParams()
+ step_timing_kwargs: Optional[StepTimingParams] = field(default_factory=lambda: StepTimingParams())
# Configures creation of log files for different ranks
log_local_rank_0_only: Optional[bool] = False
log_global_rank_0_only: Optional[bool] = False
# disable initial validation when resuming from a checkpoint saved during validation
disable_validation_on_resume: Optional[bool] = True
- ema: Optional[EMAParams] = EMAParams()
+ ema: Optional[EMAParams] = field(default_factory=lambda: EMAParams())
# Wall clock time limit
max_time_per_run: Optional[str] = None
# time to sleep non 0 ranks during initialization
diff --git a/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram.py b/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram.py
--- a/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram.py
+++ b/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram.py
@@ -112,14 +112,14 @@ class EvalBeamSearchNGramConfig:
beam_beta: List[float] = field(default_factory=lambda: [0.0]) # The beta parameter or list of the betas for the beam search decoding
decoding_strategy: str = "beam"
- decoding: ctc_beam_decoding.BeamCTCInferConfig = ctc_beam_decoding.BeamCTCInferConfig(beam_size=128)
+ decoding: ctc_beam_decoding.BeamCTCInferConfig = field(default_factory=lambda: ctc_beam_decoding.BeamCTCInferConfig(beam_size=128))
- text_processing: Optional[TextProcessingConfig] = TextProcessingConfig(
+ text_processing: Optional[TextProcessingConfig] = field(default_factory=lambda: TextProcessingConfig(
punctuation_marks = ".,?",
separate_punctuation = False,
do_lowercase = False,
rm_punctuation = False,
- )
+ ))
# fmt: on
diff --git a/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram_transducer.py b/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram_transducer.py
--- a/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram_transducer.py
+++ b/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram_transducer.py
@@ -115,7 +115,7 @@ class EvalBeamSearchNGramConfig:
hat_subtract_ilm: bool = False
hat_ilm_weight: List[float] = field(default_factory=lambda: [0.0])
- decoding: rnnt_beam_decoding.BeamRNNTInferConfig = rnnt_beam_decoding.BeamRNNTInferConfig(beam_size=128)
+ decoding: rnnt_beam_decoding.BeamRNNTInferConfig = field(default_factory=lambda: rnnt_beam_decoding.BeamRNNTInferConfig(beam_size=128))
# fmt: on
diff --git a/scripts/confidence_ensembles/build_ensemble.py b/scripts/confidence_ensembles/build_ensemble.py
--- a/scripts/confidence_ensembles/build_ensemble.py
+++ b/scripts/confidence_ensembles/build_ensemble.py
@@ -75,7 +75,7 @@
import sys
import tempfile
from copy import deepcopy
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from pathlib import Path
from typing import Dict, List, Optional, Tuple
@@ -209,19 +209,23 @@ class BuildEnsembleConfig:
random_seed: int = 0 # for reproducibility
# default confidence, can override
- confidence: ConfidenceConfig = ConfidenceConfig(
- # we keep frame confidences and apply aggregation manually to get full-utterance confidence
- preserve_frame_confidence=True,
- exclude_blank=True,
- aggregation="mean",
- method_cfg=ConfidenceMethodConfig(name="entropy", entropy_type="renyi", alpha=0.25, entropy_norm="lin",),
+ confidence: ConfidenceConfig = field(
+ default_factory=lambda: ConfidenceConfig(
+ # we keep frame confidences and apply aggregation manually to get full-utterance confidence
+ preserve_frame_confidence=True,
+ exclude_blank=True,
+ aggregation="mean",
+ measure_cfg=ConfidenceMethodConfig(name="entropy", entropy_type="renyi", alpha=0.25, entropy_norm="lin",),
+ )
)
temperature: float = 1.0
# this is optional, but can be used to change any aspect of the transcription
# config, such as batch size or amp usage. Note that model, data and confidence
# will be overriden by this script
- transcription: transcribe_speech.TranscriptionConfig = transcribe_speech.TranscriptionConfig()
+ transcription: transcribe_speech.TranscriptionConfig = field(
+ default_factory=lambda: transcribe_speech.TranscriptionConfig()
+ )
# set to True to tune the confidence.
# requires dev manifests to be specified for each model
@@ -229,12 +233,14 @@ class BuildEnsembleConfig:
# used to specify what to tune over. By default runs tuning over some
# reasonalbe grid, so that it does not take forever.
# Can be changed as needed
- tune_confidence_config: TuneConfidenceConfig = TuneConfidenceConfig()
+ tune_confidence_config: TuneConfidenceConfig = field(default_factory=lambda: TuneConfidenceConfig())
# very fast to tune and can be important in case of imbalanced datasets
# will automatically set to False if dev data is not available
tune_logistic_regression: bool = True
- tune_logistic_regression_config: TuneLogisticRegressionConfig = TuneLogisticRegressionConfig()
+ tune_logistic_regression_config: TuneLogisticRegressionConfig = field(
+ default_factory=lambda: TuneLogisticRegressionConfig()
+ )
def __post_init__(self):
"""Checking that if any dev data is provided, all are provided.
diff --git a/scripts/speech_recognition/confidence/benchmark_asr_confidence.py b/scripts/speech_recognition/confidence/benchmark_asr_confidence.py
--- a/scripts/speech_recognition/confidence/benchmark_asr_confidence.py
+++ b/scripts/speech_recognition/confidence/benchmark_asr_confidence.py
@@ -14,7 +14,7 @@
import json
import os
-from dataclasses import dataclass, is_dataclass
+from dataclasses import dataclass, field, is_dataclass
from pathlib import Path
from typing import Optional
@@ -124,7 +124,9 @@ class ConfidenceBenchmarkingConfig:
# Confidence configs
target_level: str = "auto" # Choices: "word", "token", "auto" (for both word- and token-level confidence)
- confidence_cfg: ConfidenceConfig = ConfidenceConfig(preserve_word_confidence=True, preserve_token_confidence=True)
+ confidence_cfg: ConfidenceConfig = field(
+ default_factory=lambda: ConfidenceConfig(preserve_word_confidence=True, preserve_token_confidence=True)
+ )
grid_params: Optional[str] = None # a dictionary with lists of parameters to iteratively benchmark on
diff --git a/scripts/speech_recognition/convert_to_tarred_audio_dataset.py b/scripts/speech_recognition/convert_to_tarred_audio_dataset.py
--- a/scripts/speech_recognition/convert_to_tarred_audio_dataset.py
+++ b/scripts/speech_recognition/convert_to_tarred_audio_dataset.py
@@ -202,7 +202,7 @@ class ASRTarredDatasetMetadata:
num_samples_per_shard: Optional[int] = None
is_concatenated_manifest: bool = False
- dataset_config: Optional[ASRTarredDatasetConfig] = ASRTarredDatasetConfig()
+ dataset_config: Optional[ASRTarredDatasetConfig] = field(default_factory=lambda: ASRTarredDatasetConfig())
history: Optional[List[Any]] = field(default_factory=lambda: [])
def __post_init__(self):
diff --git a/tools/nemo_forced_aligner/align.py b/tools/nemo_forced_aligner/align.py
--- a/tools/nemo_forced_aligner/align.py
+++ b/tools/nemo_forced_aligner/align.py
@@ -149,8 +149,8 @@ class AlignmentConfig:
# Output file configs
save_output_file_formats: List[str] = field(default_factory=lambda: ["ctm", "ass"])
- ctm_file_config: CTMFileConfig = CTMFileConfig()
- ass_file_config: ASSFileConfig = ASSFileConfig()
+ ctm_file_config: CTMFileConfig = field(default_factory=lambda: CTMFileConfig())
+ ass_file_config: ASSFileConfig = field(default_factory=lambda: ASSFileConfig())
@hydra_runner(config_name="AlignmentConfig", schema=AlignmentConfig)
</patch>
|
diff --git a/tests/collections/asr/test_text_to_text_dataset.py b/tests/collections/asr/test_text_to_text_dataset.py
--- a/tests/collections/asr/test_text_to_text_dataset.py
+++ b/tests/collections/asr/test_text_to_text_dataset.py
@@ -15,7 +15,7 @@
import json
import multiprocessing
import os
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from pathlib import Path
import pytest
@@ -118,7 +118,7 @@ class TextTokenizerCfg:
apostrophe: bool = True
pad_with_space: bool = True
add_blank_at: bool = True
- g2p: G2PConfig = G2PConfig()
+ g2p: G2PConfig = field(default_factory=lambda: G2PConfig())
config = OmegaConf.create(OmegaConf.to_yaml(TextTokenizerCfg()))
return instantiate(config)
|
1.0
| |||
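The patch above applies one pattern throughout: replacing mutable dataclass instance defaults with `field(default_factory=...)`. Python 3.11+ rejects unhashable class instances as dataclass field defaults at class-definition time, and even on older versions a bare instance default is a single object shared by every instance of the dataclass. A minimal standalone sketch of the pitfall and the fix (the class names here are illustrative, not from NeMo):

```python
from dataclasses import dataclass, field

@dataclass
class Inner:
    value: int = 0

# On Python 3.11+ the following raises at class-definition time:
#   ValueError: mutable default <class 'Inner'> for field inner is not
#   allowed: use default_factory
#
# @dataclass
# class BadOuter:
#     inner: Inner = Inner()
#
# On older Pythons it is accepted, but the single Inner() is shared by
# every BadOuter instance, so mutating one leaks into all the others.

@dataclass
class Outer:
    # default_factory builds a fresh Inner for each Outer instance
    inner: Inner = field(default_factory=lambda: Inner())

a, b = Outer(), Outer()
a.inner.value = 42
assert a.inner is not b.inner
assert b.inner.value == 0  # b has its own Inner, unaffected by a
```

This is why the patch wraps every config-object default (`ConfidenceMethodConfig()`, `ExpManagerConfig()`, and so on) in `field(default_factory=lambda: ...)` rather than assigning the instance directly.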
slackapi__python-slack-events-api-71
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Passing Flask app proxy as server
Hi Guys,
I have an app factory in my setup, and the app object is usually invoked as:
`from flask import current_app as app`
However, slackeventsapi complains about the app object:
`TypeError("Server must be an instance of Flask")`
I have fixed adding the following to server.py :
`from werkzeug.local import LocalProxy # Importing the localproxy class`
Line 25
Changed from :
` if isinstance(server, Flask):`
to :
`if isinstance(server, Flask) or isinstance(server, LocalProxy):`
Basically, if a Flask app proxy is passed, the API will carry on without complaining, since it has the same methods as the Flask app object.
I hope this helps other people and that it is considered as a solution; if more information is needed, I am happy to provide it.
Thanks for the good work with the API.
### What type of issue is this? (place an `x` in one of the `[ ]`)
- [X] bug ?
- [X] enhancement (feature request)
- [ ] question
- [ ] documentation related
- [ ] testing related
- [ ] discussion
### Requirements
* [X] I've read and understood the [Contributing guidelines](https://github.com/slackapi/python-slack-events-api/blob/master/.github/contributing.md) and have done my best effort to follow them.
* [X] I've read and agree to the [Code of Conduct](https://slackhq.github.io/code-of-conduct).
* [X] I've searched for any related issues and avoided creating a duplicate issue.
#### Reproducible in:
slackeventsapi version: slackeventsapi==2.1.0
python version: Python 3.7.3
OS version(s):
</issue>
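The fix the issue describes can be sketched without installing Flask or Werkzeug. A `LocalProxy` (such as `flask.current_app`) forwards attribute access to the real application object, so widening the `isinstance` check lets a proxied app pass validation. The `Flask` and `LocalProxy` classes below are minimal stand-ins for the real ones, and `check_server` is a hypothetical name for the check in `server.py`:

```python
# Stand-in classes so this sketch runs without Flask or Werkzeug installed;
# the real objects are flask.Flask and werkzeug.local.LocalProxy.
class Flask:
    pass

class LocalProxy:
    """Minimal mimic of werkzeug.local.LocalProxy: forwards to a target."""
    def __init__(self, get_current):
        self._get_current = get_current

    def _get_current_object(self):
        return self._get_current()

    def __getattr__(self, name):
        # Forward any other attribute lookup to the wrapped object
        return getattr(self._get_current(), name)

def check_server(server):
    # The relaxed check proposed in the issue (server.py, line 25):
    # accept either a real Flask app or a proxy standing in for one.
    if isinstance(server, Flask) or isinstance(server, LocalProxy):
        return True
    raise TypeError("Server must be an instance of Flask")

app = Flask()
current_app = LocalProxy(lambda: app)  # behaves like flask.current_app

assert check_server(app) is True
assert check_server(current_app) is True
```

An alternative to widening the type check is to unwrap the proxy first via `server._get_current_object()` and validate the underlying object; either approach lets app-factory setups pass `current_app` directly.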
<code>
[start of README.rst]
1 Slack Events API adapter for Python
2 ===================================
3
4 .. image:: https://badge.fury.io/py/slackeventsapi.svg
5 :target: https://pypi.org/project/slackeventsapi/
6 .. image:: https://travis-ci.org/slackapi/python-slack-events-api.svg?branch=master
7 :target: https://travis-ci.org/slackapi/python-slack-events-api
8 .. image:: https://codecov.io/gh/slackapi/python-slack-events-api/branch/master/graph/badge.svg
9 :target: https://codecov.io/gh/slackapi/python-slack-events-api
10
11
12 The Slack Events Adapter is a Python-based solution to receive and parse events
13 from Slackโs Events API. This library uses an event emitter framework to allow
14 you to easily process Slack events by simply attaching functions
15 to event listeners.
16
17 This adapter enhances and simplifies Slack's Events API by incorporating useful best practices, patterns, and opportunities to abstract out common tasks.
18
19 💡 We wrote a `blog post which explains how`_ the Events API can help you, why we built these tools, and how you can use them to build production-ready Slack apps.
20
21 .. _blog post which explains how: https://medium.com/@SlackAPI/enhancing-slacks-events-api-7535827829ab
22
23
24 🤖 Installation
25 ----------------
26
27 .. code:: shell
28
29 pip install slackeventsapi
30
31 🤖 App Setup
32 --------------------
33
34 Before you can use the `Events API`_ you must
35 `create a Slack App`_, and turn on
36 `Event Subscriptions`_.
37
38 💡 When you add the Request URL to your app's Event Subscription settings,
39 Slack will send a request containing a `challenge` code to verify that your
40 server is alive. This package handles that URL Verification event for you, so
41 all you need to do is start the example app, start ngrok and configure your
42 URL accordingly.
43
44 ✅ Once you have your `Request URL` verified, your app is ready to start
45 receiving Team Events.
46
47 🎉 Your server will begin receiving Events from Slack's Events API as soon as a
48 user has authorized your app.
49
50 🤖 Development workflow:
51 ===========================
52
53 (1) Create a Slack app on https://api.slack.com/apps
54 (2) Add a `bot user` for your app
55 (3) Start the example app on your **Request URL** endpoint
56 (4) Start ngrok and copy the **HTTPS** URL
57 (5) Add your **Request URL** and subscribe your app to events
58 (6) Go to your ngrok URL (e.g. https://myapp12.ngrok.com/) and auth your app
59
60 **🎉 Once your app has been authorized, you will begin receiving Slack Events**
61
62 ⚠️ Ngrok is a great tool for developing Slack apps, but we don't recommend using ngrok
63 for production apps.
64
65 🤖 Usage
66 ----------
67 **⚠️ Keep your app's credentials safe!**
68
69 - For development, keep them in virtualenv variables.
70
71 - For production, use a secure data store.
72
73 - Never post your app's credentials to github.
74
75 .. code:: python
76
77 SLACK_SIGNING_SECRET = os.environ["SLACK_SIGNING_SECRET"]
78
79 Create a Slack Event Adapter for receiving actions via the Events API
80 -----------------------------------------------------------------------
81 **Using the built-in Flask server:**
82
83 .. code:: python
84
85 from slackeventsapi import SlackEventAdapter
86
87
88 slack_events_adapter = SlackEventAdapter(SLACK_SIGNING_SECRET, endpoint="/slack/events")
89
90
91 # Create an event listener for "reaction_added" events and print the emoji name
92 @slack_events_adapter.on("reaction_added")
93 def reaction_added(event_data):
94 emoji = event_data["event"]["reaction"]
95 print(emoji)
96
97
98 # Start the server on port 3000
99 slack_events_adapter.start(port=3000)
100
101
102 **Using your existing Flask instance:**
103
104
105 .. code:: python
106
107 from flask import Flask
108 from slackeventsapi import SlackEventAdapter
109
110
111 # This `app` represents your existing Flask app
112 app = Flask(__name__)
113
114
115 # An example of one of your Flask app's routes
116 @app.route("/")
117 def hello():
118 return "Hello there!"
119
120
121 # Bind the Events API route to your existing Flask app by passing the server
122 # instance as the last param, or with `server=app`.
123 slack_events_adapter = SlackEventAdapter(SLACK_SIGNING_SECRET, "/slack/events", app)
124
125
126 # Create an event listener for "reaction_added" events and print the emoji name
127 @slack_events_adapter.on("reaction_added")
128 def reaction_added(event_data):
129 emoji = event_data["event"]["reaction"]
130 print(emoji)
131
132
133 # Start the server on port 3000
134 if __name__ == "__main__":
135 app.run(port=3000)
136
137 For a comprehensive list of available Slack `Events` and more information on
138 `Scopes`, see https://api.slack.com/events-api
139
140 🤖 Example event listeners
141 -----------------------------
142
143 See `example.py`_ for usage examples. This example also utilizes the
144 SlackClient Web API client.
145
146 .. _example.py: /example/
147
148 🤖 Support
149 -----------
150
151 Need help? Join `Slack Community`_ and talk to us in `#slack-api`_.
152
153 You can also `create an Issue`_ right here on GitHub.
154
155 .. _Events API: https://api.slack.com/events-api
156 .. _create a Slack App: https://api.slack.com/apps/new
157 .. _Event Subscriptions: https://api.slack.com/events-api#subscriptions
158 .. _Slack Community: http://slackcommunity.com/
159 .. _#slack-api: https://dev4slack.slack.com/messages/slack-api/
160 .. _create an Issue: https://github.com/slackapi/python-slack-events-api/issues/new
161
[end of README.rst]
[start of /dev/null]
1
[end of /dev/null]
[start of slackeventsapi/server.py]
1 from flask import Flask, request, make_response, Blueprint
2 import json
3 import platform
4 import sys
5 import hmac
6 import hashlib
7 from time import time
8 from .version import __version__
9
10
11 class SlackServer(Flask):
12 def __init__(self, signing_secret, endpoint, emitter, server):
13 self.signing_secret = signing_secret
14 self.emitter = emitter
15 self.endpoint = endpoint
16 self.package_info = self.get_package_info()
17
18 # If a server is passed in, bind the event handler routes to it,
19 # otherwise create a new Flask instance.
20 if server:
21 if isinstance(server, Flask) or isinstance(server, Blueprint):
22 self.bind_route(server)
23 else:
24 raise TypeError("Server must be an instance of Flask or Blueprint")
25 else:
26 Flask.__init__(self, __name__)
27 self.bind_route(self)
28
29 def get_package_info(self):
30 client_name = __name__.split('.')[0]
31 client_version = __version__ # Version is returned from version.py
32
33 # Collect the package info, Python version and OS version.
34 package_info = {
35 "client": "{0}/{1}".format(client_name, client_version),
36 "python": "Python/{v.major}.{v.minor}.{v.micro}".format(v=sys.version_info),
37 "system": "{0}/{1}".format(platform.system(), platform.release())
38 }
39
40 # Concatenate and format the user-agent string to be passed into request headers
41 ua_string = []
42 for key, val in package_info.items():
43 ua_string.append(val)
44
45 return " ".join(ua_string)
46
47 def verify_signature(self, timestamp, signature):
48 # Verify the request signature of the request sent from Slack
49 # Generate a new hash using the app's signing secret and request data
50
51 # Compare the generated hash and incoming request signature
52 # Python 2.7.6 doesn't support compare_digest
53 # It's recommended to use Python 2.7.7+
54 # noqa See https://docs.python.org/2/whatsnew/2.7.html#pep-466-network-security-enhancements-for-python-2-7
55 req = str.encode('v0:' + str(timestamp) + ':') + request.get_data()
56 request_hash = 'v0=' + hmac.new(
57 str.encode(self.signing_secret),
58 req, hashlib.sha256
59 ).hexdigest()
60
61 if hasattr(hmac, "compare_digest"):
62 # Compare byte strings for Python 2
63 if (sys.version_info[0] == 2):
64 return hmac.compare_digest(bytes(request_hash), bytes(signature))
65 else:
66 return hmac.compare_digest(request_hash, signature)
67 else:
68 if len(request_hash) != len(signature):
69 return False
70 result = 0
71 if isinstance(request_hash, bytes) and isinstance(signature, bytes):
72 for x, y in zip(request_hash, signature):
73 result |= x ^ y
74 else:
75 for x, y in zip(request_hash, signature):
76 result |= ord(x) ^ ord(y)
77 return result == 0
78
79 def bind_route(self, server):
80 @server.route(self.endpoint, methods=['GET', 'POST'])
81 def event():
82 # If a GET request is made, return 404.
83 if request.method == 'GET':
84 return make_response("These are not the slackbots you're looking for.", 404)
85
86 # Each request comes with request timestamp and request signature
87 # emit an error if the timestamp is out of range
88 req_timestamp = request.headers.get('X-Slack-Request-Timestamp')
89 if abs(time() - int(req_timestamp)) > 60 * 5:
90 slack_exception = SlackEventAdapterException('Invalid request timestamp')
91 self.emitter.emit('error', slack_exception)
92 return make_response("", 403)
93
94 # Verify the request signature using the app's signing secret
95 # emit an error if the signature can't be verified
96 req_signature = request.headers.get('X-Slack-Signature')
97 if not self.verify_signature(req_timestamp, req_signature):
98 slack_exception = SlackEventAdapterException('Invalid request signature')
99 self.emitter.emit('error', slack_exception)
100 return make_response("", 403)
101
102 # Parse the request payload into JSON
103 event_data = json.loads(request.data.decode('utf-8'))
104
105 # Echo the URL verification challenge code back to Slack
106 if "challenge" in event_data:
107 return make_response(
108 event_data.get("challenge"), 200, {"content_type": "application/json"}
109 )
110
111 # Parse the Event payload and emit the event to the event listener
112 if "event" in event_data:
113 event_type = event_data["event"]["type"]
114 self.emitter.emit(event_type, event_data)
115 response = make_response("", 200)
116 response.headers['X-Slack-Powered-By'] = self.package_info
117 return response
118
119
120 class SlackEventAdapterException(Exception):
121 """
122 Base exception for all errors raised by the SlackClient library
123 """
124
125 def __init__(self, msg=None):
126 if msg is None:
127 # default error message
128 msg = "An error occurred in the SlackEventsApiAdapter library"
129 super(SlackEventAdapterException, self).__init__(msg)
130
[end of slackeventsapi/server.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
slackapi/python-slack-events-api
|
0c0ce604b502508622fb14c278a0d64841fa32e3
|
Passing Flask app proxy as server
Hi guys,
I have an app factory in my setup, and the app object is usually invoked as:
`from flask import current_app as app`
However, slackeventsapi complains about the app object:
`TypeError("Server must be an instance of Flask")`
I fixed it by adding the following to server.py:
`from werkzeug.local import LocalProxy  # Importing the LocalProxy class`
Line 25
Changed from:
` if isinstance(server, Flask):`
to:
`if isinstance(server, Flask) or isinstance(server, LocalProxy):`
Basically, if a Flask app proxy is passed, the API will carry on without complaining, since it has the same methods as the Flask app object.
I hope this helps other people and that it can be considered as a solution; if more information is needed, I am happy to provide it.
Thanks for the good work on the API.
### What type of issue is this? (place an `x` in one of the `[ ]`)
- [X] bug ?
- [X] enhancement (feature request)
- [ ] question
- [ ] documentation related
- [ ] testing related
- [ ] discussion
### Requirements
* [X] I've read and understood the [Contributing guidelines](https://github.com/slackapi/python-slack-events-api/blob/master/.github/contributing.md) and have done my best effort to follow them.
* [X] I've read and agree to the [Code of Conduct](https://slackhq.github.io/code-of-conduct).
* [X] I've searched for any related issues and avoided creating a duplicate issue.
#### Reproducible in:
slackeventsapi version: slackeventsapi==2.1.0
python version: Python 3.7.3
OS version(s):
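The widened type check described above can be sketched without Flask or Werkzeug installed. The two classes below are hypothetical stand-ins for `flask.Flask` and `werkzeug.local.LocalProxy`, and the helper name `check_server` is illustrative, not part of the adapter:

```python
class Flask:
    """Stand-in for flask.Flask (illustration only)."""

class LocalProxy:
    """Stand-in for werkzeug.local.LocalProxy (illustration only)."""

def check_server(server):
    # isinstance accepts a tuple of classes, so the two checks from the
    # issue can be collapsed into a single expression.
    if not isinstance(server, (Flask, LocalProxy)):
        raise TypeError("Server must be an instance of Flask or LocalProxy")
    return server
```

A `LocalProxy` wrapping a Flask app is not itself a `Flask` instance, which is why the original check rejects it; matching on the proxy type (or duck-typing on the attributes actually used, such as `route`) are the two usual ways around that.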
|
2020-06-12T06:58:10Z
|
<patch>
diff --git a/example/current_app/main.py b/example/current_app/main.py
new file mode 100644
--- /dev/null
+++ b/example/current_app/main.py
@@ -0,0 +1,49 @@
+# ------------------
+# Only for running this script here
+import sys
+from os.path import dirname
+sys.path.insert(1, f"{dirname(__file__)}/../..")
+# ------------------
+
+import os
+from slack import WebClient
+import logging
+logging.basicConfig(level=logging.DEBUG)
+
+from flask import Flask
+
+app = Flask(__name__)
+
+with app.app_context():
+ from test_module.slack_app import slack_events_adapter
+
+ slack_bot_token = os.environ["SLACK_BOT_TOKEN"]
+ slack_client = WebClient(slack_bot_token)
+
+
+ @slack_events_adapter.on("message")
+ def handle_message(event_data):
+ message = event_data["event"]
+ if message.get("subtype") is None and "hi" in message.get('text'):
+ channel = message["channel"]
+ message = "Hi <@%s>! :tada:" % message["user"]
+ slack_client.chat_postMessage(channel=channel, text=message)
+
+
+ @slack_events_adapter.on("error")
+ def error_handler(err):
+ print("ERROR: " + str(err))
+
+# (Terminal A)
+# source env/bin/activate
+# (env) $ export SLACK_BOT_TOKEN=xoxb-***
+# (env) $ export SLACK_SIGNING_SECRET=**
+# (env) $ cd example/current_app
+# (env) $ FLASK_APP=main.py FLASK_ENV=development flask run --port 3000
+
+# (Terminal B)
+# ngrok http 3000
+
+# in Slack
+# /invite @{your app's bot user}
+# post a message "hi" in the channel
diff --git a/slackeventsapi/server.py b/slackeventsapi/server.py
--- a/slackeventsapi/server.py
+++ b/slackeventsapi/server.py
@@ -1,10 +1,13 @@
-from flask import Flask, request, make_response, Blueprint
+import hashlib
+import hmac
import json
import platform
import sys
-import hmac
-import hashlib
from time import time
+
+from flask import Flask, request, make_response, Blueprint
+from werkzeug.local import LocalProxy
+
from .version import __version__
@@ -18,10 +21,10 @@ def __init__(self, signing_secret, endpoint, emitter, server):
# If a server is passed in, bind the event handler routes to it,
# otherwise create a new Flask instance.
if server:
- if isinstance(server, Flask) or isinstance(server, Blueprint):
+ if isinstance(server, (Flask, Blueprint, LocalProxy)):
self.bind_route(server)
else:
- raise TypeError("Server must be an instance of Flask or Blueprint")
+ raise TypeError("Server must be an instance of Flask, Blueprint, or LocalProxy")
else:
Flask.__init__(self, __name__)
self.bind_route(self)
</patch>
|
diff --git a/example/current_app/test_module/__init__.py b/example/current_app/test_module/__init__.py
new file mode 100644
diff --git a/example/current_app/test_module/slack_app.py b/example/current_app/test_module/slack_app.py
new file mode 100644
--- /dev/null
+++ b/example/current_app/test_module/slack_app.py
@@ -0,0 +1,16 @@
+# ------------------
+# Only for running this script here
+import logging
+import sys
+from os.path import dirname
+
+sys.path.insert(1, f"{dirname(__file__)}/../../..")
+logging.basicConfig(level=logging.DEBUG)
+# ------------------
+
+from flask import current_app as app
+from slackeventsapi import SlackEventAdapter
+import os
+
+slack_signing_secret = os.environ["SLACK_SIGNING_SECRET"]
+slack_events_adapter = SlackEventAdapter(slack_signing_secret, "/slack/events", app)
diff --git a/tests/test_server.py b/tests/test_server.py
--- a/tests/test_server.py
+++ b/tests/test_server.py
@@ -18,7 +18,7 @@ def test_server_not_flask():
with pytest.raises(TypeError) as e:
invalid_flask = "I am not a Flask"
SlackEventAdapter("SIGNING_SECRET", "/slack/events", invalid_flask)
- assert e.value.args[0] == 'Server must be an instance of Flask or Blueprint'
+ assert e.value.args[0] == 'Server must be an instance of Flask, Blueprint, or LocalProxy'
def test_blueprint_server():
|
1.0
| ||||
celery__celery-2598
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
CELERY_RESULT_SERIALIZER = 'json' breaks Exception marshaling
Setting `CELERY_RESULT_SERIALIZER = json` and raising an exception in the worker leads to this:
```
/path/to/lib/python2.7/site-packages/celery/result.py in get(self, timeout, propagate, interval, no_ack, follow_parents, EXCEPTION_STATES, PROPAGATE_STATES)
173 status = meta['status']
174 if status in PROPAGATE_STATES and propagate:
--> 175 raise meta['result']
176 return meta['result']
177 wait = get # deprecated alias to :meth:`get`.
TypeError: exceptions must be old-style classes or derived from BaseException, not dict
```
where the contents of `meta['result']` are (in my case):
```
{u'exc_message': u'unknown keys: nam', u'exc_type': u'ValueError'}
```
so it _looks_ like celery could convert the dict to a real exception before raising, but it does not currently. Changing back to `pickle` works as expected.
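A minimal sketch of that conversion — the helper name and the builtin-only lookup (via Python 3's `builtins` module) are assumptions for illustration, not celery's actual code:

```python
import builtins

def exception_from_dict(result):
    # result is the JSON-decoded payload, e.g.
    # {'exc_type': 'ValueError', 'exc_message': 'go away'}
    exc_type = getattr(builtins, result.get('exc_type', ''), None)
    # Only accept real exception classes; anything else degrades to Exception.
    if not (isinstance(exc_type, type) and issubclass(exc_type, BaseException)):
        exc_type = Exception
    return exc_type(result.get('exc_message', ''))
```

With such a helper, `raise meta['result']` could become something like `raise exception_from_dict(meta['result'])` for dict payloads; a real fix would also need to handle non-builtin exception types.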
bug can be reproduced with the following:
``` python
# jsonresults.py
from celery.app.base import Celery
CELERY_RESULT_SERIALIZER = 'json'
CELERY_RESULT_BACKEND = 'amqp'
app = Celery(config_source=__name__)
@app.task
def hello():
raise ValueError('go away')
```
worker:
```
# python -m celery --app=jsonresults:app worker
```
caller:
``` python
import jsonresults
jsonresults.hello.delay().get()
```
</issue>
<code>
[start of README.rst]
1 =================================
2 celery - Distributed Task Queue
3 =================================
4
5 .. image:: http://cloud.github.com/downloads/celery/celery/celery_128.png
6
7 |build-status| |coverage-status|
8
9 :Version: 3.2.0a1 (Cipater)
10 :Web: http://celeryproject.org/
11 :Download: http://pypi.python.org/pypi/celery/
12 :Source: http://github.com/celery/celery/
13 :Keywords: task queue, job queue, asynchronous, async, rabbitmq, amqp, redis,
14 python, webhooks, queue, distributed
15
16 --
17
18 What is a Task Queue?
19 =====================
20
21 Task queues are used as a mechanism to distribute work across threads or
22 machines.
23
24 A task queue's input is a unit of work, called a task, dedicated worker
25 processes then constantly monitor the queue for new work to perform.
26
27 Celery communicates via messages, usually using a broker
28 to mediate between clients and workers. To initiate a task a client puts a
29 message on the queue, the broker then delivers the message to a worker.
30
31 A Celery system can consist of multiple workers and brokers, giving way
32 to high availability and horizontal scaling.
33
34 Celery is a library written in Python, but the protocol can be implemented in
35 any language. So far there's RCelery_ for the Ruby programming language, and a
36 `PHP client`_, but language interoperability can also be achieved
37 by `using webhooks`_.
38
39 .. _RCelery: https://github.com/leapfrogonline/rcelery
40 .. _`PHP client`: https://github.com/gjedeer/celery-php
41 .. _`using webhooks`:
42 http://docs.celeryproject.org/en/latest/userguide/remote-tasks.html
43
44 What do I need?
45 ===============
46
47 Celery version 3.0 runs on,
48
49 - Python (2.6, 2.7, 3.3, 3.4)
50 - PyPy (1.8, 1.9)
51 - Jython (2.5, 2.7).
52
53 This is the last version to support Python 2.5,
54 and from Celery 3.1, Python 2.6 or later is required.
55 The last version to support Python 2.4 was Celery series 2.2.
56
57 *Celery* is usually used with a message broker to send and receive messages.
58 The RabbitMQ, Redis transports are feature complete,
59 but there's also experimental support for a myriad of other solutions, including
60 using SQLite for local development.
61
62 *Celery* can run on a single machine, on multiple machines, or even
63 across datacenters.
64
65 Get Started
66 ===========
67
68 If this is the first time you're trying to use Celery, or you are
69 new to Celery 3.0 coming from previous versions then you should read our
70 getting started tutorials:
71
72 - `First steps with Celery`_
73
74 Tutorial teaching you the bare minimum needed to get started with Celery.
75
76 - `Next steps`_
77
78 A more complete overview, showing more features.
79
80 .. _`First steps with Celery`:
81 http://docs.celeryproject.org/en/latest/getting-started/first-steps-with-celery.html
82
83 .. _`Next steps`:
84 http://docs.celeryproject.org/en/latest/getting-started/next-steps.html
85
86 Celery is...
87 ============
88
89 - **Simple**
90
91 Celery is easy to use and maintain, and does *not need configuration files*.
92
93 It has an active, friendly community you can talk to for support,
94 including a `mailing-list`_ and an IRC channel.
95
96 Here's one of the simplest applications you can make::
97
98 from celery import Celery
99
100 app = Celery('hello', broker='amqp://guest@localhost//')
101
102 @app.task
103 def hello():
104 return 'hello world'
105
106 - **Highly Available**
107
108 Workers and clients will automatically retry in the event
109 of connection loss or failure, and some brokers support
110 HA in the way of *Master/Master* or *Master/Slave* replication.
111
112 - **Fast**
113
114 A single Celery process can process millions of tasks a minute,
115 with sub-millisecond round-trip latency (using RabbitMQ,
116 py-librabbitmq, and optimized settings).
117
118 - **Flexible**
119
120 Almost every part of *Celery* can be extended or used on its own,
121 Custom pool implementations, serializers, compression schemes, logging,
122 schedulers, consumers, producers, autoscalers, broker transports and much more.
123
124 It supports...
125 ==============
126
127 - **Message Transports**
128
129 - RabbitMQ_, Redis_,
130 - MongoDB_ (experimental), Amazon SQS (experimental),
131 - CouchDB_ (experimental), SQLAlchemy_ (experimental),
132 - Django ORM (experimental), `IronMQ`_
133 - and more...
134
135 - **Concurrency**
136
137 - Prefork, Eventlet_, gevent_, threads/single threaded
138
139 - **Result Stores**
140
141 - AMQP, Redis
142 - memcached, MongoDB
143 - SQLAlchemy, Django ORM
144 - Apache Cassandra, IronCache
145
146 - **Serialization**
147
148 - *pickle*, *json*, *yaml*, *msgpack*.
149 - *zlib*, *bzip2* compression.
150 - Cryptographic message signing.
151
152 .. _`Eventlet`: http://eventlet.net/
153 .. _`gevent`: http://gevent.org/
154
155 .. _RabbitMQ: http://rabbitmq.com
156 .. _Redis: http://redis.io
157 .. _MongoDB: http://mongodb.org
158 .. _Beanstalk: http://kr.github.com/beanstalkd
159 .. _CouchDB: http://couchdb.apache.org
160 .. _SQLAlchemy: http://sqlalchemy.org
161 .. _`IronMQ`: http://iron.io
162
163 Framework Integration
164 =====================
165
166 Celery is easy to integrate with web frameworks, some of which even have
167 integration packages:
168
169 +--------------------+------------------------+
170 | `Django`_ | not needed |
171 +--------------------+------------------------+
172 | `Pyramid`_ | `pyramid_celery`_ |
173 +--------------------+------------------------+
174 | `Pylons`_ | `celery-pylons`_ |
175 +--------------------+------------------------+
176 | `Flask`_ | not needed |
177 +--------------------+------------------------+
178 | `web2py`_ | `web2py-celery`_ |
179 +--------------------+------------------------+
180 | `Tornado`_ | `tornado-celery`_ |
181 +--------------------+------------------------+
182
183 The integration packages are not strictly necessary, but they can make
184 development easier, and sometimes they add important hooks like closing
185 database connections at ``fork``.
186
187 .. _`Django`: http://djangoproject.com/
188 .. _`Pylons`: http://www.pylonsproject.org/
189 .. _`Flask`: http://flask.pocoo.org/
190 .. _`web2py`: http://web2py.com/
191 .. _`Bottle`: http://bottlepy.org/
192 .. _`Pyramid`: http://docs.pylonsproject.org/en/latest/docs/pyramid.html
193 .. _`pyramid_celery`: http://pypi.python.org/pypi/pyramid_celery/
194 .. _`django-celery`: http://pypi.python.org/pypi/django-celery
195 .. _`celery-pylons`: http://pypi.python.org/pypi/celery-pylons
196 .. _`web2py-celery`: http://code.google.com/p/web2py-celery/
197 .. _`Tornado`: http://www.tornadoweb.org/
198 .. _`tornado-celery`: http://github.com/mher/tornado-celery/
199
200 .. _celery-documentation:
201
202 Documentation
203 =============
204
205 The `latest documentation`_ with user guides, tutorials and API reference
206 is hosted at Read The Docs.
207
208 .. _`latest documentation`: http://docs.celeryproject.org/en/latest/
209
210 .. _celery-installation:
211
212 Installation
213 ============
214
215 You can install Celery either via the Python Package Index (PyPI)
216 or from source.
217
218 To install using `pip`,::
219
220 $ pip install -U Celery
221
222 To install using `easy_install`,::
223
224 $ easy_install -U Celery
225
226 .. _bundles:
227
228 Bundles
229 -------
230
231 Celery also defines a group of bundles that can be used
232 to install Celery and the dependencies for a given feature.
233
234 You can specify these in your requirements or on the ``pip`` command-line
235 by using brackets. Multiple bundles can be specified by separating them by
236 commas.
237 ::
238
239 $ pip install "celery[librabbitmq]"
240
241 $ pip install "celery[librabbitmq,redis,auth,msgpack]"
242
243 The following bundles are available:
244
245 Serializers
246 ~~~~~~~~~~~
247
248 :celery[auth]:
249 for using the auth serializer.
250
251 :celery[msgpack]:
252 for using the msgpack serializer.
253
254 :celery[yaml]:
255 for using the yaml serializer.
256
257 Concurrency
258 ~~~~~~~~~~~
259
260 :celery[eventlet]:
261 for using the eventlet pool.
262
263 :celery[gevent]:
264 for using the gevent pool.
265
266 :celery[threads]:
267 for using the thread pool.
268
269 Transports and Backends
270 ~~~~~~~~~~~~~~~~~~~~~~~
271
272 :celery[librabbitmq]:
273 for using the librabbitmq C library.
274
275 :celery[redis]:
276 for using Redis as a message transport or as a result backend.
277
278 :celery[mongodb]:
279 for using MongoDB as a message transport (*experimental*),
280 or as a result backend (*supported*).
281
282 :celery[sqs]:
283 for using Amazon SQS as a message transport (*experimental*).
284
285 :celery[memcache]:
286 for using memcached as a result backend.
287
288 :celery[cassandra]:
289 for using Apache Cassandra as a result backend.
290
291 :celery[couchdb]:
292 for using CouchDB as a message transport (*experimental*).
293
294 :celery[couchbase]:
295 for using CouchBase as a result backend.
296
297 :celery[beanstalk]:
298 for using Beanstalk as a message transport (*experimental*).
299
300 :celery[zookeeper]:
301 for using Zookeeper as a message transport.
302
303 :celery[zeromq]:
304 for using ZeroMQ as a message transport (*experimental*).
305
306 :celery[sqlalchemy]:
307 for using SQLAlchemy as a message transport (*experimental*),
308 or as a result backend (*supported*).
309
310 :celery[pyro]:
311 for using the Pyro4 message transport (*experimental*).
312
313 :celery[slmq]:
314 for using the SoftLayer Message Queue transport (*experimental*).
315
316 .. _celery-installing-from-source:
317
318 Downloading and installing from source
319 --------------------------------------
320
321 Download the latest version of Celery from
322 http://pypi.python.org/pypi/celery/
323
324 You can install it by doing the following,::
325
326 $ tar xvfz celery-0.0.0.tar.gz
327 $ cd celery-0.0.0
328 $ python setup.py build
329 # python setup.py install
330
331 The last command must be executed as a privileged user if
332 you are not currently using a virtualenv.
333
334 .. _celery-installing-from-git:
335
336 Using the development version
337 -----------------------------
338
339 With pip
340 ~~~~~~~~
341
342 The Celery development version also requires the development
343 versions of ``kombu``, ``amqp`` and ``billiard``.
344
345 You can install the latest snapshot of these using the following
346 pip commands::
347
348 $ pip install https://github.com/celery/celery/zipball/master#egg=celery
349 $ pip install https://github.com/celery/billiard/zipball/master#egg=billiard
350 $ pip install https://github.com/celery/py-amqp/zipball/master#egg=amqp
351 $ pip install https://github.com/celery/kombu/zipball/master#egg=kombu
352
353 With git
354 ~~~~~~~~
355
356 Please see the Contributing section.
357
358 .. _getting-help:
359
360 Getting Help
361 ============
362
363 .. _mailing-list:
364
365 Mailing list
366 ------------
367
368 For discussions about the usage, development, and future of celery,
369 please join the `celery-users`_ mailing list.
370
371 .. _`celery-users`: http://groups.google.com/group/celery-users/
372
373 .. _irc-channel:
374
375 IRC
376 ---
377
378 Come chat with us on IRC. The **#celery** channel is located at the `Freenode`_
379 network.
380
381 .. _`Freenode`: http://freenode.net
382
383 .. _bug-tracker:
384
385 Bug tracker
386 ===========
387
388 If you have any suggestions, bug reports or annoyances please report them
389 to our issue tracker at http://github.com/celery/celery/issues/
390
391 .. _wiki:
392
393 Wiki
394 ====
395
396 http://wiki.github.com/celery/celery/
397
398 .. _contributing-short:
399
400 Contributing
401 ============
402
403 Development of `celery` happens at Github: http://github.com/celery/celery
404
405 You are highly encouraged to participate in the development
406 of `celery`. If you don't like Github (for some reason) you're welcome
407 to send regular patches.
408
409 Be sure to also read the `Contributing to Celery`_ section in the
410 documentation.
411
412 .. _`Contributing to Celery`:
413 http://docs.celeryproject.org/en/master/contributing.html
414
415 .. _license:
416
417 License
418 =======
419
420 This software is licensed under the `New BSD License`. See the ``LICENSE``
421 file in the top distribution directory for the full license text.
422
423 .. # vim: syntax=rst expandtab tabstop=4 shiftwidth=4 shiftround
424
425
426 .. image:: https://d2weczhvl823v0.cloudfront.net/celery/celery/trend.png
427 :alt: Bitdeli badge
428 :target: https://bitdeli.com/free
429
430 .. |build-status| image:: https://travis-ci.org/celery/celery.svg?branch=master
431 :target: https://travis-ci.org/celery/celery
432 .. |coverage-status| image:: https://coveralls.io/repos/celery/celery/badge.svg
433 :target: https://coveralls.io/r/celery/celery
434
[end of README.rst]
[start of celery/backends/amqp.py]
1 # -*- coding: utf-8 -*-
2 """
3 celery.backends.amqp
4 ~~~~~~~~~~~~~~~~~~~~
5
6 The AMQP result backend.
7
8 This backend publishes results as messages.
9
10 """
11 from __future__ import absolute_import
12
13 import socket
14
15 from collections import deque
16 from operator import itemgetter
17
18 from kombu import Exchange, Queue, Producer, Consumer
19
20 from celery import states
21 from celery.exceptions import TimeoutError
22 from celery.five import range, monotonic
23 from celery.utils.functional import dictfilter
24 from celery.utils.log import get_logger
25 from celery.utils.timeutils import maybe_s_to_ms
26
27 from .base import BaseBackend
28
29 __all__ = ['BacklogLimitExceeded', 'AMQPBackend']
30
31 logger = get_logger(__name__)
32
33
34 class BacklogLimitExceeded(Exception):
35 """Too much state history to fast-forward."""
36
37
38 def repair_uuid(s):
39 # Historically the dashes in UUIDS are removed from AMQ entity names,
40 # but there is no known reason to. Hopefully we'll be able to fix
41 # this in v4.0.
42 return '%s-%s-%s-%s-%s' % (s[:8], s[8:12], s[12:16], s[16:20], s[20:])
43
44
45 class NoCacheQueue(Queue):
46 can_cache_declaration = False
47
48
49 class AMQPBackend(BaseBackend):
50 """Publishes results by sending messages."""
51 Exchange = Exchange
52 Queue = NoCacheQueue
53 Consumer = Consumer
54 Producer = Producer
55
56 BacklogLimitExceeded = BacklogLimitExceeded
57
58 persistent = True
59 supports_autoexpire = True
60 supports_native_join = True
61
62 retry_policy = {
63 'max_retries': 20,
64 'interval_start': 0,
65 'interval_step': 1,
66 'interval_max': 1,
67 }
68
69 def __init__(self, app, connection=None, exchange=None, exchange_type=None,
70 persistent=None, serializer=None, auto_delete=True, **kwargs):
71 super(AMQPBackend, self).__init__(app, **kwargs)
72 conf = self.app.conf
73 self._connection = connection
74 self.persistent = self.prepare_persistent(persistent)
75 self.delivery_mode = 2 if self.persistent else 1
76 exchange = exchange or conf.CELERY_RESULT_EXCHANGE
77 exchange_type = exchange_type or conf.CELERY_RESULT_EXCHANGE_TYPE
78 self.exchange = self._create_exchange(
79 exchange, exchange_type, self.delivery_mode,
80 )
81 self.serializer = serializer or conf.CELERY_RESULT_SERIALIZER
82 self.auto_delete = auto_delete
83 self.queue_arguments = dictfilter({
84 'x-expires': maybe_s_to_ms(self.expires),
85 })
86
87 def _create_exchange(self, name, type='direct', delivery_mode=2):
88 return self.Exchange(name=name,
89 type=type,
90 delivery_mode=delivery_mode,
91 durable=self.persistent,
92 auto_delete=False)
93
94 def _create_binding(self, task_id):
95 name = self.rkey(task_id)
96 return self.Queue(name=name,
97 exchange=self.exchange,
98 routing_key=name,
99 durable=self.persistent,
100 auto_delete=self.auto_delete,
101 queue_arguments=self.queue_arguments)
102
103 def revive(self, channel):
104 pass
105
106 def rkey(self, task_id):
107 return task_id.replace('-', '')
108
109 def destination_for(self, task_id, request):
110 if request:
111 return self.rkey(task_id), request.correlation_id or task_id
112 return self.rkey(task_id), task_id
113
114 def store_result(self, task_id, result, status,
115 traceback=None, request=None, **kwargs):
116 """Send task return value and status."""
117 routing_key, correlation_id = self.destination_for(task_id, request)
118 if not routing_key:
119 return
120 with self.app.amqp.producer_pool.acquire(block=True) as producer:
121 producer.publish(
122 {'task_id': task_id, 'status': status,
123 'result': self.encode_result(result, status),
124 'traceback': traceback,
125 'children': self.current_task_children(request)},
126 exchange=self.exchange,
127 routing_key=routing_key,
128 correlation_id=correlation_id,
129 serializer=self.serializer,
130 retry=True, retry_policy=self.retry_policy,
131 declare=self.on_reply_declare(task_id),
132 delivery_mode=self.delivery_mode,
133 )
134 return result
135
136 def on_reply_declare(self, task_id):
137 return [self._create_binding(task_id)]
138
139 def wait_for(self, task_id, timeout=None, cache=True,
140 no_ack=True, on_interval=None,
141 READY_STATES=states.READY_STATES,
142 PROPAGATE_STATES=states.PROPAGATE_STATES,
143 **kwargs):
144 cached_meta = self._cache.get(task_id)
145 if cache and cached_meta and \
146 cached_meta['status'] in READY_STATES:
147 return cached_meta
148 else:
149 try:
150 return self.consume(task_id, timeout=timeout, no_ack=no_ack,
151 on_interval=on_interval)
152 except socket.timeout:
153 raise TimeoutError('The operation timed out.')
154
155 def get_task_meta(self, task_id, backlog_limit=1000):
156 # Polling and using basic_get
157 with self.app.pool.acquire_channel(block=True) as (_, channel):
158 binding = self._create_binding(task_id)(channel)
159 binding.declare()
160
161 prev = latest = acc = None
162 for i in range(backlog_limit): # spool ffwd
163 acc = binding.get(
164 accept=self.accept, no_ack=False,
165 )
166 if not acc: # no more messages
167 break
168 if acc.payload['task_id'] == task_id:
169 prev, latest = latest, acc
170 if prev:
171 # backends are not expected to keep history,
172 # so we delete everything except the most recent state.
173 prev.ack()
174 prev = None
175 else:
176 raise self.BacklogLimitExceeded(task_id)
177
178 if latest:
179 payload = self._cache[task_id] = latest.payload
180 latest.requeue()
181 return payload
182 else:
183 # no new state, use previous
184 try:
185 return self._cache[task_id]
186 except KeyError:
187 # result probably pending.
188 return {'status': states.PENDING, 'result': None}
189 poll = get_task_meta # XXX compat
190
191 def drain_events(self, connection, consumer,
192 timeout=None, on_interval=None, now=monotonic, wait=None):
193 wait = wait or connection.drain_events
194 results = {}
195
196 def callback(meta, message):
197 if meta['status'] in states.READY_STATES:
198 results[meta['task_id']] = meta
199
200 consumer.callbacks[:] = [callback]
201 time_start = now()
202
203 while 1:
204 # Total time spent may exceed a single call to wait()
205 if timeout and now() - time_start >= timeout:
206 raise socket.timeout()
207 try:
208 wait(timeout=1)
209 except socket.timeout:
210 pass
211 if on_interval:
212 on_interval()
213 if results: # got event on the wanted channel.
214 break
215 self._cache.update(results)
216 return results
217
218 def consume(self, task_id, timeout=None, no_ack=True, on_interval=None):
219 wait = self.drain_events
220 with self.app.pool.acquire_channel(block=True) as (conn, channel):
221 binding = self._create_binding(task_id)
222 with self.Consumer(channel, binding,
223 no_ack=no_ack, accept=self.accept) as consumer:
224 while 1:
225 try:
226 return wait(
227 conn, consumer, timeout, on_interval)[task_id]
228 except KeyError:
229 continue
230
231 def _many_bindings(self, ids):
232 return [self._create_binding(task_id) for task_id in ids]
233
234 def get_many(self, task_ids, timeout=None, no_ack=True, on_message=None,
235 now=monotonic, getfields=itemgetter('status', 'task_id'),
236 READY_STATES=states.READY_STATES,
237 PROPAGATE_STATES=states.PROPAGATE_STATES, **kwargs):
238 with self.app.pool.acquire_channel(block=True) as (conn, channel):
239 ids = set(task_ids)
240 cached_ids = set()
241 mark_cached = cached_ids.add
242 for task_id in ids:
243 try:
244 cached = self._cache[task_id]
245 except KeyError:
246 pass
247 else:
248 if cached['status'] in READY_STATES:
249 yield task_id, cached
250 mark_cached(task_id)
251 ids.difference_update(cached_ids)
252 results = deque()
253 push_result = results.append
254 push_cache = self._cache.__setitem__
255 decode_result = self.meta_from_decoded
256
257 def _on_message(message):
258 body = decode_result(message.decode())
259 if on_message is not None:
260 on_message(body)
261 state, uid = getfields(body)
262 if state in READY_STATES:
263 push_result(body) \
264 if uid in task_ids else push_cache(uid, body)
265
266 bindings = self._many_bindings(task_ids)
267 with self.Consumer(channel, bindings, on_message=_on_message,
268 accept=self.accept, no_ack=no_ack):
269 wait = conn.drain_events
270 popleft = results.popleft
271 while ids:
272 wait(timeout=timeout)
273 while results:
274 state = popleft()
275 task_id = state['task_id']
276 ids.discard(task_id)
277 push_cache(task_id, state)
278 yield task_id, state
279
280 def reload_task_result(self, task_id):
281 raise NotImplementedError(
282 'reload_task_result is not supported by this backend.')
283
284 def reload_group_result(self, task_id):
285 """Reload group result, even if it has been previously fetched."""
286 raise NotImplementedError(
287 'reload_group_result is not supported by this backend.')
288
289 def save_group(self, group_id, result):
290 raise NotImplementedError(
291 'save_group is not supported by this backend.')
292
293 def restore_group(self, group_id, cache=True):
294 raise NotImplementedError(
295 'restore_group is not supported by this backend.')
296
297 def delete_group(self, group_id):
298 raise NotImplementedError(
299 'delete_group is not supported by this backend.')
300
301 def __reduce__(self, args=(), kwargs={}):
302 kwargs.update(
303 connection=self._connection,
304 exchange=self.exchange.name,
305 exchange_type=self.exchange.type,
306 persistent=self.persistent,
307 serializer=self.serializer,
308 auto_delete=self.auto_delete,
309 expires=self.expires,
310 )
311 return super(AMQPBackend, self).__reduce__(args, kwargs)
312
[end of celery/backends/amqp.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
|
celery/celery
|
6592ff64b6b024a4b68abcc53b151888fdf0dee3
|
CELERY_RESULT_SERIALIZER = 'json' breaks Exception marshaling
Setting `CELERY_RESULT_SERIALIZER = json` and raising an exception in the worker leads to this:
```
/path/to/lib/python2.7/site-packages/celery/result.py in get(self, timeout, propagate, interval, no_ack, follow_parents, EXCEPTION_STATES, PROPAGATE_STATES)
173 status = meta['status']
174 if status in PROPAGATE_STATES and propagate:
--> 175 raise meta['result']
176 return meta['result']
177 wait = get # deprecated alias to :meth:`get`.
TypeError: exceptions must be old-style classes or derived from BaseException, not dict
```
where the contents of `meta['result']` are (in my case):
```
{u'exc_message': u'unknown keys: nam', u'exc_type': u'ValueError'}
```
so it _looks_ like celery could convert the dict to a real exception before raising, but it does not currently. Changing back to `pickle` works as expected.
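For illustration, the conversion the reporter has in mind could look roughly like this (a hypothetical helper, not part of celery's API; the names `exc_type` and `exc_message` are simply the keys seen in the payload above):

```python
import builtins

def exception_from_meta(result):
    """Rebuild a raisable exception from the JSON-decoded result dict.

    Hypothetical sketch: looks the type name up among the builtin
    exceptions and falls back to plain Exception when it is unknown.
    """
    exc_cls = getattr(builtins, result['exc_type'], Exception)
    if not (isinstance(exc_cls, type) and issubclass(exc_cls, BaseException)):
        exc_cls = Exception
    return exc_cls(result['exc_message'])

exc = exception_from_meta({'exc_type': 'ValueError',
                           'exc_message': 'go away'})
assert isinstance(exc, ValueError)
assert str(exc) == 'go away'
```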
bug can be reproduced with the following:
``` python
# jsonresults.py
from celery.app.base import Celery
CELERY_RESULT_SERIALIZER = 'json'
CELERY_RESULT_BACKEND = 'amqp'
app = Celery(config_source=__name__)
@app.task
def hello():
raise ValueError('go away')
```
worker:
```
# python -m celery --app=jsonresults:app worker
```
caller:
``` python
import jsonresults
jsonresults.hello.delay().get()
```
|
This is biting me as well. Any news?
|
2015-04-29T14:52:17Z
|
<patch>
diff --git a/celery/backends/amqp.py b/celery/backends/amqp.py
--- a/celery/backends/amqp.py
+++ b/celery/backends/amqp.py
@@ -195,7 +195,7 @@ def drain_events(self, connection, consumer,
 
         def callback(meta, message):
             if meta['status'] in states.READY_STATES:
-                results[meta['task_id']] = meta
+                results[meta['task_id']] = self.meta_from_decoded(meta)
 
         consumer.callbacks[:] = [callback]
         time_start = now()
</patch>
|
diff --git a/celery/tests/backends/test_amqp.py b/celery/tests/backends/test_amqp.py
--- a/celery/tests/backends/test_amqp.py
+++ b/celery/tests/backends/test_amqp.py
@@ -13,6 +13,7 @@
from celery.backends.amqp import AMQPBackend
from celery.exceptions import TimeoutError
from celery.five import Empty, Queue, range
+from celery.result import AsyncResult
from celery.utils import uuid
from celery.tests.case import (
@@ -246,10 +247,20 @@ def test_wait_for(self):
         with self.assertRaises(TimeoutError):
             b.wait_for(tid, timeout=0.01, cache=False)
 
-    def test_drain_events_remaining_timeouts(self):
+    def test_drain_events_decodes_exceptions_in_meta(self):
+        tid = uuid()
+        b = self.create_backend(serializer="json")
+        b.store_result(tid, RuntimeError("aap"), states.FAILURE)
+        result = AsyncResult(tid, backend=b)
 
-        class Connection(object):
+        with self.assertRaises(Exception) as cm:
+            result.get()
+        self.assertEqual(cm.exception.__class__.__name__, "RuntimeError")
+        self.assertEqual(str(cm.exception), "aap")
+
+    def test_drain_events_remaining_timeouts(self):
+        class Connection(object):
 
             def drain_events(self, timeout=None):
                 pass
 
|
1.0
| |||
celery__celery-2840
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Message being acknowledged on WorkerLostError when CELERY_ACKS_LATE=True
When using celery v3.0.24, with `CELERY_ACKS_LATE = True` , if the OOM killer kills the celery worker, then the worker acknowledges the message.
As per [this](https://github.com/celery/celery/commit/e810420c) commit. The `exc_info.internal` comes in as `false`, which means it is not a internal error, due to which the message is acknowledged.
The desirable behaviour, in such a case would be to not acknowledge the message (and be able to know, whether its a OOM error), so that some other worker can pick it up.
As a workaround, I've commented out the [code](https://github.com/siddharth96/celery/commit/427695d1b23034dadda85fd7a48f7367831be4fa), where celery acknowledges the message, because in such a case, message will be lost.
</issue>
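The behaviour the issue asks for can be modelled with a minimal stand-alone sketch (toy classes only, not celery's actual consumer code; in celery the real `WorkerLostError` lives in `billiard.exceptions` and acking happens in the worker's request layer):

```python
class WorkerLostError(Exception):
    """Stand-in for billiard.exceptions.WorkerLostError."""

class Message(object):
    """Toy broker message that records ack/requeue calls."""
    def __init__(self):
        self.acked = False
        self.requeued = False

    def ack(self):
        self.acked = True

    def requeue(self):
        self.requeued = True

def on_task_failure(message, exc, acks_late=True):
    # Desired behaviour from the issue: with CELERY_ACKS_LATE, a task
    # whose worker was killed (e.g. by the OOM killer, surfacing as
    # WorkerLostError) must NOT be acknowledged, so the broker can
    # redeliver the message to another worker.
    if acks_late and isinstance(exc, WorkerLostError):
        message.requeue()
    else:
        message.ack()

msg = Message()
on_task_failure(msg, WorkerLostError('Worker exited prematurely'))
assert msg.requeued and not msg.acked
```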
<code>
[start of README.rst]
1 =================================
2 celery - Distributed Task Queue
3 =================================
4
5 .. image:: http://cloud.github.com/downloads/celery/celery/celery_128.png
6
7 |build-status| |coverage-status|
8
9 :Version: 3.2.0a1 (Cipater)
10 :Web: http://celeryproject.org/
11 :Download: http://pypi.python.org/pypi/celery/
12 :Source: http://github.com/celery/celery/
13 :Keywords: task queue, job queue, asynchronous, async, rabbitmq, amqp, redis,
14 python, webhooks, queue, distributed
15
16 --
17
18 What is a Task Queue?
19 =====================
20
21 Task queues are used as a mechanism to distribute work across threads or
22 machines.
23
24 A task queue's input is a unit of work, called a task, dedicated worker
25 processes then constantly monitor the queue for new work to perform.
26
27 Celery communicates via messages, usually using a broker
28 to mediate between clients and workers. To initiate a task a client puts a
29 message on the queue, the broker then delivers the message to a worker.
30
31 A Celery system can consist of multiple workers and brokers, giving way
32 to high availability and horizontal scaling.
33
34 Celery is a library written in Python, but the protocol can be implemented in
35 any language. So far there's RCelery_ for the Ruby programming language, and a
36 `PHP client`_, but language interoperability can also be achieved
37 by `using webhooks`_.
38
39 .. _RCelery: https://github.com/leapfrogonline/rcelery
40 .. _`PHP client`: https://github.com/gjedeer/celery-php
41 .. _`using webhooks`:
42 http://docs.celeryproject.org/en/latest/userguide/remote-tasks.html
43
44 What do I need?
45 ===============
46
47 Celery version 3.0 runs on,
48
49 - Python (2.6, 2.7, 3.3, 3.4)
50 - PyPy (1.8, 1.9)
51 - Jython (2.5, 2.7).
52
53 This is the last version to support Python 2.5,
54 and from Celery 3.1, Python 2.6 or later is required.
55 The last version to support Python 2.4 was Celery series 2.2.
56
57 *Celery* is usually used with a message broker to send and receive messages.
58 The RabbitMQ and Redis transports are feature complete,
59 but there's also experimental support for a myriad of other solutions, including
60 using SQLite for local development.
61
62 *Celery* can run on a single machine, on multiple machines, or even
63 across datacenters.
64
65 Get Started
66 ===========
67
68 If this is the first time you're trying to use Celery, or you are
69 new to Celery 3.0 coming from previous versions then you should read our
70 getting started tutorials:
71
72 - `First steps with Celery`_
73
74 Tutorial teaching you the bare minimum needed to get started with Celery.
75
76 - `Next steps`_
77
78 A more complete overview, showing more features.
79
80 .. _`First steps with Celery`:
81 http://docs.celeryproject.org/en/latest/getting-started/first-steps-with-celery.html
82
83 .. _`Next steps`:
84 http://docs.celeryproject.org/en/latest/getting-started/next-steps.html
85
86 Celery is...
87 ============
88
89 - **Simple**
90
91 Celery is easy to use and maintain, and does *not need configuration files*.
92
93 It has an active, friendly community you can talk to for support,
94 including a `mailing-list`_ and an IRC channel.
95
96 Here's one of the simplest applications you can make::
97
98 from celery import Celery
99
100 app = Celery('hello', broker='amqp://guest@localhost//')
101
102 @app.task
103 def hello():
104 return 'hello world'
105
106 - **Highly Available**
107
108 Workers and clients will automatically retry in the event
109 of connection loss or failure, and some brokers support
110 HA by way of *Master/Master* or *Master/Slave* replication.
111
112 - **Fast**
113
114 A single Celery process can process millions of tasks a minute,
115 with sub-millisecond round-trip latency (using RabbitMQ,
116 py-librabbitmq, and optimized settings).
117
118 - **Flexible**
119
120 Almost every part of *Celery* can be extended or used on its own:
121 custom pool implementations, serializers, compression schemes, logging,
122 schedulers, consumers, producers, autoscalers, broker transports and much more.
123
124 It supports...
125 ==============
126
127 - **Message Transports**
128
129 - RabbitMQ_, Redis_,
130 - MongoDB_ (experimental), Amazon SQS (experimental),
131 - CouchDB_ (experimental), SQLAlchemy_ (experimental),
132 - Django ORM (experimental), `IronMQ`_
133 - and more...
134
135 - **Concurrency**
136
137 - Prefork, Eventlet_, gevent_, threads/single threaded
138
139 - **Result Stores**
140
141 - AMQP, Redis
142 - memcached, MongoDB
143 - SQLAlchemy, Django ORM
144 - Apache Cassandra, IronCache
145
146 - **Serialization**
147
148 - *pickle*, *json*, *yaml*, *msgpack*.
149 - *zlib*, *bzip2* compression.
150 - Cryptographic message signing.
151
152 .. _`Eventlet`: http://eventlet.net/
153 .. _`gevent`: http://gevent.org/
154
155 .. _RabbitMQ: http://rabbitmq.com
156 .. _Redis: http://redis.io
157 .. _MongoDB: http://mongodb.org
158 .. _Beanstalk: http://kr.github.com/beanstalkd
159 .. _CouchDB: http://couchdb.apache.org
160 .. _SQLAlchemy: http://sqlalchemy.org
161 .. _`IronMQ`: http://iron.io
162
163 Framework Integration
164 =====================
165
166 Celery is easy to integrate with web frameworks, some of which even have
167 integration packages:
168
169 +--------------------+----------------------------------------------------+
170 | `Django`_ | not needed |
171 +--------------------+----------------------------------------------------+
172 | `Pyramid`_ | `pyramid_celery`_ |
173 +--------------------+----------------------------------------------------+
174 | `Pylons`_ | `celery-pylons`_ |
175 +--------------------+----------------------------------------------------+
176 | `Flask`_ | not needed |
177 +--------------------+----------------------------------------------------+
178 | `web2py`_ | `web2py-celery`_ |
179 +--------------------+----------------------------------------------------+
180 | `Tornado`_ | `tornado-celery`_ | `another tornado-celery`_ |
181 +--------------------+----------------------------------------------------+
182
183 The integration packages are not strictly necessary, but they can make
184 development easier, and sometimes they add important hooks like closing
185 database connections at ``fork``.
186
187 .. _`Django`: http://djangoproject.com/
188 .. _`Pylons`: http://www.pylonsproject.org/
189 .. _`Flask`: http://flask.pocoo.org/
190 .. _`web2py`: http://web2py.com/
191 .. _`Bottle`: http://bottlepy.org/
192 .. _`Pyramid`: http://docs.pylonsproject.org/en/latest/docs/pyramid.html
193 .. _`pyramid_celery`: http://pypi.python.org/pypi/pyramid_celery/
194 .. _`django-celery`: http://pypi.python.org/pypi/django-celery
195 .. _`celery-pylons`: http://pypi.python.org/pypi/celery-pylons
196 .. _`web2py-celery`: http://code.google.com/p/web2py-celery/
197 .. _`Tornado`: http://www.tornadoweb.org/
198 .. _`tornado-celery`: http://github.com/mher/tornado-celery/
199 .. _`another tornado-celery`: https://github.com/mayflaver/tornado-celery
200
201 .. _celery-documentation:
202
203 Documentation
204 =============
205
206 The `latest documentation`_ with user guides, tutorials and API reference
207 is hosted at Read The Docs.
208
209 .. _`latest documentation`: http://docs.celeryproject.org/en/latest/
210
211 .. _celery-installation:
212
213 Installation
214 ============
215
216 You can install Celery either via the Python Package Index (PyPI)
217 or from source.
218
219 To install using `pip`,::
220
221 $ pip install -U Celery
222
223 To install using `easy_install`,::
224
225 $ easy_install -U Celery
226
227 .. _bundles:
228
229 Bundles
230 -------
231
232 Celery also defines a group of bundles that can be used
233 to install Celery and the dependencies for a given feature.
234
235 You can specify these in your requirements or on the ``pip`` command-line
236 by using brackets. Multiple bundles can be specified by separating them by
237 commas.
238 ::
239
240 $ pip install "celery[librabbitmq]"
241
242 $ pip install "celery[librabbitmq,redis,auth,msgpack]"
243
244 The following bundles are available:
245
246 Serializers
247 ~~~~~~~~~~~
248
249 :celery[auth]:
250 for using the auth serializer.
251
252 :celery[msgpack]:
253 for using the msgpack serializer.
254
255 :celery[yaml]:
256 for using the yaml serializer.
257
258 Concurrency
259 ~~~~~~~~~~~
260
261 :celery[eventlet]:
262 for using the eventlet pool.
263
264 :celery[gevent]:
265 for using the gevent pool.
266
267 :celery[threads]:
268 for using the thread pool.
269
270 Transports and Backends
271 ~~~~~~~~~~~~~~~~~~~~~~~
272
273 :celery[librabbitmq]:
274 for using the librabbitmq C library.
275
276 :celery[redis]:
277 for using Redis as a message transport or as a result backend.
278
279 :celery[mongodb]:
280 for using MongoDB as a message transport (*experimental*),
281 or as a result backend (*supported*).
282
283 :celery[sqs]:
284 for using Amazon SQS as a message transport (*experimental*).
285
286 :celery[memcache]:
287 for using memcached as a result backend.
288
289 :celery[cassandra]:
290 for using Apache Cassandra as a result backend.
291
292 :celery[couchdb]:
293 for using CouchDB as a message transport (*experimental*).
294
295 :celery[couchbase]:
296 for using CouchBase as a result backend.
297
298 :celery[beanstalk]:
299 for using Beanstalk as a message transport (*experimental*).
300
301 :celery[zookeeper]:
302 for using Zookeeper as a message transport.
303
304 :celery[zeromq]:
305 for using ZeroMQ as a message transport (*experimental*).
306
307 :celery[sqlalchemy]:
308 for using SQLAlchemy as a message transport (*experimental*),
309 or as a result backend (*supported*).
310
311 :celery[pyro]:
312 for using the Pyro4 message transport (*experimental*).
313
314 :celery[slmq]:
315 for using the SoftLayer Message Queue transport (*experimental*).
316
317 .. _celery-installing-from-source:
318
319 Downloading and installing from source
320 --------------------------------------
321
322 Download the latest version of Celery from
323 http://pypi.python.org/pypi/celery/
324
325 You can install it by doing the following,::
326
327 $ tar xvfz celery-0.0.0.tar.gz
328 $ cd celery-0.0.0
329 $ python setup.py build
330 # python setup.py install
331
332 The last command must be executed as a privileged user if
333 you are not currently using a virtualenv.
334
335 .. _celery-installing-from-git:
336
337 Using the development version
338 -----------------------------
339
340 With pip
341 ~~~~~~~~
342
343 The Celery development version also requires the development
344 versions of ``kombu``, ``amqp`` and ``billiard``.
345
346 You can install the latest snapshot of these using the following
347 pip commands::
348
349 $ pip install https://github.com/celery/celery/zipball/master#egg=celery
350 $ pip install https://github.com/celery/billiard/zipball/master#egg=billiard
351 $ pip install https://github.com/celery/py-amqp/zipball/master#egg=amqp
352 $ pip install https://github.com/celery/kombu/zipball/master#egg=kombu
353
354 With git
355 ~~~~~~~~
356
357 Please see the Contributing section.
358
359 .. _getting-help:
360
361 Getting Help
362 ============
363
364 .. _mailing-list:
365
366 Mailing list
367 ------------
368
369 For discussions about the usage, development, and future of celery,
370 please join the `celery-users`_ mailing list.
371
372 .. _`celery-users`: http://groups.google.com/group/celery-users/
373
374 .. _irc-channel:
375
376 IRC
377 ---
378
379 Come chat with us on IRC. The **#celery** channel is located at the `Freenode`_
380 network.
381
382 .. _`Freenode`: http://freenode.net
383
384 .. _bug-tracker:
385
386 Bug tracker
387 ===========
388
389 If you have any suggestions, bug reports or annoyances please report them
390 to our issue tracker at http://github.com/celery/celery/issues/
391
392 .. _wiki:
393
394 Wiki
395 ====
396
397 http://wiki.github.com/celery/celery/
398
399
400 .. _maintainers:
401
402 Maintainers
403 ===========
404
405 - `@ask`_ (primary maintainer)
406 - `@thedrow`_
407 - `@chrisgogreen`_
408 - `@PMickael`_
409 - `@malinoff`_
410 - And you? We really need more: https://github.com/celery/celery/issues/2534
411
412 .. _`@ask`: http://github.com/ask
413 .. _`@thedrow`: http://github.com/thedrow
414 .. _`@chrisgogreen`: http://github.com/chrisgogreen
415 .. _`@PMickael`: http://github.com/PMickael
416 .. _`@malinoff`: http://github.com/malinoff
417
418
419 .. _contributing-short:
420
421 Contributing
422 ============
423
424 Development of `celery` happens at Github: http://github.com/celery/celery
425
426 You are highly encouraged to participate in the development
427 of `celery`. If you don't like Github (for some reason) you're welcome
428 to send regular patches.
429
430 Be sure to also read the `Contributing to Celery`_ section in the
431 documentation.
432
433 .. _`Contributing to Celery`:
434 http://docs.celeryproject.org/en/master/contributing.html
435
436 .. _license:
437
438 License
439 =======
440
441 This software is licensed under the `New BSD License`. See the ``LICENSE``
442 file in the top distribution directory for the full license text.
443
444 .. # vim: syntax=rst expandtab tabstop=4 shiftwidth=4 shiftround
445
446
447 .. image:: https://d2weczhvl823v0.cloudfront.net/celery/celery/trend.png
448 :alt: Bitdeli badge
449 :target: https://bitdeli.com/free
450
451 .. |build-status| image:: https://travis-ci.org/celery/celery.svg?branch=master
452 :target: https://travis-ci.org/celery/celery
453 .. |coverage-status| image:: https://coveralls.io/repos/celery/celery/badge.svg
454 :target: https://coveralls.io/r/celery/celery
455
[end of README.rst]
[start of celery/app/defaults.py]
1 # -*- coding: utf-8 -*-
2 """
3 celery.app.defaults
4 ~~~~~~~~~~~~~~~~~~~
5
6 Configuration introspection and defaults.
7
8 """
9 from __future__ import absolute_import
10
11 import sys
12
13 from collections import deque, namedtuple
14 from datetime import timedelta
15
16 from celery.five import items
17 from celery.utils import strtobool
18 from celery.utils.functional import memoize
19
20 __all__ = ['Option', 'NAMESPACES', 'flatten', 'find']
21
22 is_jython = sys.platform.startswith('java')
23 is_pypy = hasattr(sys, 'pypy_version_info')
24
25 DEFAULT_POOL = 'prefork'
26 if is_jython:
27 DEFAULT_POOL = 'threads'
28 elif is_pypy:
29 if sys.pypy_version_info[0:3] < (1, 5, 0):
30 DEFAULT_POOL = 'solo'
31 else:
32 DEFAULT_POOL = 'prefork'
33
34 DEFAULT_ACCEPT_CONTENT = ['json', 'pickle', 'msgpack', 'yaml']
35 DEFAULT_PROCESS_LOG_FMT = """
36 [%(asctime)s: %(levelname)s/%(processName)s] %(message)s
37 """.strip()
38 DEFAULT_LOG_FMT = '[%(asctime)s: %(levelname)s] %(message)s'
39 DEFAULT_TASK_LOG_FMT = """[%(asctime)s: %(levelname)s/%(processName)s] \
40 %(task_name)s[%(task_id)s]: %(message)s"""
41
42 _BROKER_OLD = {'deprecate_by': '2.5', 'remove_by': '4.0',
43 'alt': 'BROKER_URL setting'}
44 _REDIS_OLD = {'deprecate_by': '2.5', 'remove_by': '4.0',
45 'alt': 'URL form of CELERY_RESULT_BACKEND'}
46
47 searchresult = namedtuple('searchresult', ('namespace', 'key', 'type'))
48
49
50 class Option(object):
51 alt = None
52 deprecate_by = None
53 remove_by = None
54 typemap = dict(string=str, int=int, float=float, any=lambda v: v,
55 bool=strtobool, dict=dict, tuple=tuple)
56
57 def __init__(self, default=None, *args, **kwargs):
58 self.default = default
59 self.type = kwargs.get('type') or 'string'
60 for attr, value in items(kwargs):
61 setattr(self, attr, value)
62
63 def to_python(self, value):
64 return self.typemap[self.type](value)
65
66 def __repr__(self):
67 return '<Option: type->{0} default->{1!r}>'.format(self.type,
68 self.default)
69
70 NAMESPACES = {
71 'BROKER': {
72 'URL': Option(None, type='string'),
73 'CONNECTION_TIMEOUT': Option(4, type='float'),
74 'CONNECTION_RETRY': Option(True, type='bool'),
75 'CONNECTION_MAX_RETRIES': Option(100, type='int'),
76 'FAILOVER_STRATEGY': Option(None, type='string'),
77 'HEARTBEAT': Option(None, type='int'),
78 'HEARTBEAT_CHECKRATE': Option(3.0, type='int'),
79 'LOGIN_METHOD': Option(None, type='string'),
80 'POOL_LIMIT': Option(10, type='int'),
81 'USE_SSL': Option(False, type='bool'),
82 'TRANSPORT': Option(type='string'),
83 'TRANSPORT_OPTIONS': Option({}, type='dict'),
84 'HOST': Option(type='string', **_BROKER_OLD),
85 'PORT': Option(type='int', **_BROKER_OLD),
86 'USER': Option(type='string', **_BROKER_OLD),
87 'PASSWORD': Option(type='string', **_BROKER_OLD),
88 'VHOST': Option(type='string', **_BROKER_OLD),
89 },
90 'CASSANDRA': {
91 'COLUMN_FAMILY': Option(type='string'),
92 'DETAILED_MODE': Option(False, type='bool'),
93 'KEYSPACE': Option(type='string'),
94 'READ_CONSISTENCY': Option(type='string'),
95 'SERVERS': Option(type='list'),
96 'WRITE_CONSISTENCY': Option(type='string'),
97 },
98 'CELERY': {
99 'ACCEPT_CONTENT': Option(DEFAULT_ACCEPT_CONTENT, type='list'),
100 'ACKS_LATE': Option(False, type='bool'),
101 'ALWAYS_EAGER': Option(False, type='bool'),
102 'ANNOTATIONS': Option(type='any'),
103 'BROADCAST_QUEUE': Option('celeryctl'),
104 'BROADCAST_EXCHANGE': Option('celeryctl'),
105 'BROADCAST_EXCHANGE_TYPE': Option('fanout'),
106 'CACHE_BACKEND': Option(),
107 'CACHE_BACKEND_OPTIONS': Option({}, type='dict'),
108 'CHORD_PROPAGATES': Option(True, type='bool'),
109 'COUCHBASE_BACKEND_SETTINGS': Option(None, type='dict'),
110 'CREATE_MISSING_QUEUES': Option(True, type='bool'),
111 'DEFAULT_RATE_LIMIT': Option(type='string'),
112 'DISABLE_RATE_LIMITS': Option(False, type='bool'),
113 'DEFAULT_ROUTING_KEY': Option('celery'),
114 'DEFAULT_QUEUE': Option('celery'),
115 'DEFAULT_EXCHANGE': Option('celery'),
116 'DEFAULT_EXCHANGE_TYPE': Option('direct'),
117 'DEFAULT_DELIVERY_MODE': Option(2, type='string'),
118 'EAGER_PROPAGATES_EXCEPTIONS': Option(False, type='bool'),
119 'ENABLE_UTC': Option(True, type='bool'),
120 'ENABLE_REMOTE_CONTROL': Option(True, type='bool'),
121 'EVENT_SERIALIZER': Option('json'),
122 'EVENT_QUEUE_EXPIRES': Option(60.0, type='float'),
123 'EVENT_QUEUE_TTL': Option(5.0, type='float'),
124 'IMPORTS': Option((), type='tuple'),
125 'INCLUDE': Option((), type='tuple'),
126 'IGNORE_RESULT': Option(False, type='bool'),
127 'MAX_CACHED_RESULTS': Option(100, type='int'),
128 'MESSAGE_COMPRESSION': Option(type='string'),
129 'MONGODB_BACKEND_SETTINGS': Option(type='dict'),
130 'REDIS_HOST': Option(type='string', **_REDIS_OLD),
131 'REDIS_PORT': Option(type='int', **_REDIS_OLD),
132 'REDIS_DB': Option(type='int', **_REDIS_OLD),
133 'REDIS_PASSWORD': Option(type='string', **_REDIS_OLD),
134 'REDIS_MAX_CONNECTIONS': Option(type='int'),
135 'RESULT_BACKEND': Option(type='string'),
136 'RESULT_DB_SHORT_LIVED_SESSIONS': Option(False, type='bool'),
137 'RESULT_DB_TABLENAMES': Option(type='dict'),
138 'RESULT_DBURI': Option(),
139 'RESULT_ENGINE_OPTIONS': Option(type='dict'),
140 'RESULT_EXCHANGE': Option('celeryresults'),
141 'RESULT_EXCHANGE_TYPE': Option('direct'),
142 'RESULT_SERIALIZER': Option('json'),
143 'RESULT_PERSISTENT': Option(None, type='bool'),
144 'RIAK_BACKEND_SETTINGS': Option(type='dict'),
145 'ROUTES': Option(type='any'),
146 'SEND_EVENTS': Option(False, type='bool'),
147 'SEND_TASK_ERROR_EMAILS': Option(False, type='bool'),
148 'SEND_TASK_SENT_EVENT': Option(False, type='bool'),
149 'STORE_ERRORS_EVEN_IF_IGNORED': Option(False, type='bool'),
150 'TASK_PROTOCOL': Option(1, type='int'),
151 'TASK_PUBLISH_RETRY': Option(True, type='bool'),
152 'TASK_PUBLISH_RETRY_POLICY': Option({
153 'max_retries': 3,
154 'interval_start': 0,
155 'interval_max': 1,
156 'interval_step': 0.2}, type='dict'),
157 'TASK_RESULT_EXPIRES': Option(timedelta(days=1), type='float'),
158 'TASK_SERIALIZER': Option('json'),
159 'TIMEZONE': Option(type='string'),
160 'TRACK_STARTED': Option(False, type='bool'),
161 'REDIRECT_STDOUTS': Option(True, type='bool'),
162 'REDIRECT_STDOUTS_LEVEL': Option('WARNING'),
163 'QUEUES': Option(type='dict'),
164 'QUEUE_HA_POLICY': Option(None, type='string'),
165 'SECURITY_KEY': Option(type='string'),
166 'SECURITY_CERTIFICATE': Option(type='string'),
167 'SECURITY_CERT_STORE': Option(type='string'),
168 'WORKER_DIRECT': Option(False, type='bool'),
169 },
170 'CELERYD': {
171 'AGENT': Option(None, type='string'),
172 'AUTOSCALER': Option('celery.worker.autoscale:Autoscaler'),
173 'AUTORELOADER': Option('celery.worker.autoreload:Autoreloader'),
174 'CONCURRENCY': Option(0, type='int'),
175 'TIMER': Option(type='string'),
176 'TIMER_PRECISION': Option(1.0, type='float'),
177 'FORCE_EXECV': Option(False, type='bool'),
178 'HIJACK_ROOT_LOGGER': Option(True, type='bool'),
179 'CONSUMER': Option('celery.worker.consumer:Consumer', type='string'),
180 'LOG_FORMAT': Option(DEFAULT_PROCESS_LOG_FMT),
181 'LOG_COLOR': Option(type='bool'),
182 'LOG_LEVEL': Option('WARN', deprecate_by='2.4', remove_by='4.0',
183 alt='--loglevel argument'),
184 'LOG_FILE': Option(deprecate_by='2.4', remove_by='4.0',
185 alt='--logfile argument'),
186 'MAX_TASKS_PER_CHILD': Option(type='int'),
187 'POOL': Option(DEFAULT_POOL),
188 'POOL_PUTLOCKS': Option(True, type='bool'),
189 'POOL_RESTARTS': Option(False, type='bool'),
190 'PREFETCH_MULTIPLIER': Option(4, type='int'),
191 'STATE_DB': Option(),
192 'TASK_LOG_FORMAT': Option(DEFAULT_TASK_LOG_FMT),
193 'TASK_SOFT_TIME_LIMIT': Option(type='float'),
194 'TASK_TIME_LIMIT': Option(type='float'),
195 'WORKER_LOST_WAIT': Option(10.0, type='float')
196 },
197 'CELERYBEAT': {
198 'SCHEDULE': Option({}, type='dict'),
199 'SCHEDULER': Option('celery.beat:PersistentScheduler'),
200 'SCHEDULE_FILENAME': Option('celerybeat-schedule'),
201 'SYNC_EVERY': Option(0, type='int'),
202 'MAX_LOOP_INTERVAL': Option(0, type='float'),
203 'LOG_LEVEL': Option('INFO', deprecate_by='2.4', remove_by='4.0',
204 alt='--loglevel argument'),
205 'LOG_FILE': Option(deprecate_by='2.4', remove_by='4.0',
206 alt='--logfile argument'),
207 },
208 'CELERYMON': {
209 'LOG_LEVEL': Option('INFO', deprecate_by='2.4', remove_by='4.0',
210 alt='--loglevel argument'),
211 'LOG_FILE': Option(deprecate_by='2.4', remove_by='4.0',
212 alt='--logfile argument'),
213 'LOG_FORMAT': Option(DEFAULT_LOG_FMT),
214 },
215 'EMAIL': {
216 'HOST': Option('localhost'),
217 'PORT': Option(25, type='int'),
218 'HOST_USER': Option(),
219 'HOST_PASSWORD': Option(),
220 'TIMEOUT': Option(2, type='float'),
221 'USE_SSL': Option(False, type='bool'),
222 'USE_TLS': Option(False, type='bool'),
223 'CHARSET': Option('us-ascii'),
224 },
225 'SERVER_EMAIL': Option('celery@localhost'),
226 'ADMINS': Option((), type='tuple'),
227 }
228
229
230 def flatten(d, ns=''):
231 stack = deque([(ns, d)])
232 while stack:
233 name, space = stack.popleft()
234 for key, value in items(space):
235 if isinstance(value, dict):
236 stack.append((name + key + '_', value))
237 else:
238 yield name + key, value
239 DEFAULTS = {key: value.default for key, value in flatten(NAMESPACES)}
240
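`flatten` above walks the nested `NAMESPACES` mapping breadth-first, joining namespace and key with an underscore to produce the flat `CELERY_*`-style names that `DEFAULTS` is built from. A standalone sketch of that traversal with a toy mapping (using plain `dict.items` in place of `celery.five.items`):

```python
from collections import deque

def flatten(d, ns=''):
    # Breadth-first walk: dict values are pushed back onto the queue with
    # their prefix extended by '_', leaf values are yielded as flat keys.
    stack = deque([(ns, d)])
    while stack:
        name, space = stack.popleft()
        for key, value in space.items():
            if isinstance(value, dict):
                stack.append((name + key + '_', value))
            else:
                yield name + key, value

namespaces = {'CELERY': {'ACKS_LATE': False, 'TASK': {'SERIALIZER': 'json'}}}
print(dict(flatten(namespaces)))
# {'CELERY_ACKS_LATE': False, 'CELERY_TASK_SERIALIZER': 'json'}
```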
241
242 def find_deprecated_settings(source):
243 from celery.utils import warn_deprecated
244 for name, opt in flatten(NAMESPACES):
245 if (opt.deprecate_by or opt.remove_by) and getattr(source, name, None):
246 warn_deprecated(description='The {0!r} setting'.format(name),
247 deprecation=opt.deprecate_by,
248 removal=opt.remove_by,
249 alternative='Use the {0.alt} instead'.format(opt))
250 return source
251
252
253 @memoize(maxsize=None)
254 def find(name, namespace='celery'):
255 # - Try specified namespace first.
256 namespace = namespace.upper()
257 try:
258 return searchresult(
259 namespace, name.upper(), NAMESPACES[namespace][name.upper()],
260 )
261 except KeyError:
262 # - Try all the other namespaces.
263 for ns, keys in items(NAMESPACES):
264 if ns.upper() == name.upper():
265 return searchresult(None, ns, keys)
266 elif isinstance(keys, dict):
267 try:
268 return searchresult(ns, name.upper(), keys[name.upper()])
269 except KeyError:
270 pass
271 # - See if name is a qualname last.
272 return searchresult(None, name.upper(), DEFAULTS[name.upper()])
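`find` resolves a bare setting name in three steps: the requested namespace first, then every other namespace (where the name may also be a namespace itself), and finally the flattened qualified name in `DEFAULTS`. A simplified standalone sketch of that lookup order, with toy stand-ins for `NAMESPACES` and `DEFAULTS`:

```python
from collections import namedtuple

searchresult = namedtuple('searchresult', ('namespace', 'key', 'type'))

# Toy stand-ins for the real NAMESPACES / DEFAULTS mappings.
NAMESPACES = {
    'CELERY': {'ACKS_LATE': 'opt-acks-late'},
    'CELERYD': {'CONCURRENCY': 'opt-concurrency'},
}
DEFAULTS = {'CELERY_ACKS_LATE': 'opt-acks-late'}

def find(name, namespace='celery'):
    namespace = namespace.upper()
    try:
        # 1) exact key in the requested namespace.
        return searchresult(
            namespace, name.upper(), NAMESPACES[namespace][name.upper()])
    except KeyError:
        # 2) the name may itself be a namespace, or live in another one.
        for ns, keys in NAMESPACES.items():
            if ns.upper() == name.upper():
                return searchresult(None, ns, keys)
            try:
                return searchresult(ns, name.upper(), keys[name.upper()])
            except KeyError:
                pass
    # 3) fall back to the flattened qualified name.
    return searchresult(None, name.upper(), DEFAULTS[name.upper()])

print(find('acks_late'))    # found in the default 'CELERY' namespace
print(find('concurrency'))  # found by scanning the other namespaces
```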
273
[end of celery/app/defaults.py]
[start of celery/app/task.py]
1 # -*- coding: utf-8 -*-
2 """
3 celery.app.task
4 ~~~~~~~~~~~~~~~
5
6 Task Implementation: Task request context, and the base task class.
7
8 """
9 from __future__ import absolute_import
10
11 import sys
12
13 from billiard.einfo import ExceptionInfo
14
15 from celery import current_app, group
16 from celery import states
17 from celery._state import _task_stack
18 from celery.canvas import signature
19 from celery.exceptions import Ignore, MaxRetriesExceededError, Reject, Retry
20 from celery.five import class_property, items
21 from celery.result import EagerResult
22 from celery.utils import abstract
23 from celery.utils import uuid, maybe_reraise
24 from celery.utils.functional import mattrgetter, maybe_list
25 from celery.utils.imports import instantiate
26 from celery.utils.mail import ErrorMail
27
28 from .annotations import resolve_all as resolve_all_annotations
29 from .registry import _unpickle_task_v2
30 from .utils import appstr
31
32 __all__ = ['Context', 'Task']
33
34 #: extracts attributes related to publishing a message from an object.
35 extract_exec_options = mattrgetter(
36 'queue', 'routing_key', 'exchange', 'priority', 'expires',
37 'serializer', 'delivery_mode', 'compression', 'time_limit',
38 'soft_time_limit', 'immediate', 'mandatory', # imm+man is deprecated
39 )
40
41 # We take __repr__ very seriously around here ;)
42 R_BOUND_TASK = '<class {0.__name__} of {app}{flags}>'
43 R_UNBOUND_TASK = '<unbound {0.__name__}{flags}>'
44 R_SELF_TASK = '<@task {0.name} bound to other {0.__self__}>'
45 R_INSTANCE = '<@task: {0.name} of {app}{flags}>'
46
47 #: Here for backwards compatibility as tasks no longer use a custom metaclass.
48 TaskType = type
49
50
51 def _strflags(flags, default=''):
52 if flags:
53 return ' ({0})'.format(', '.join(flags))
54 return default
55
56
57 def _reprtask(task, fmt=None, flags=None):
58 flags = list(flags) if flags is not None else []
59 flags.append('v2 compatible') if task.__v2_compat__ else None
60 if not fmt:
61 fmt = R_BOUND_TASK if task._app else R_UNBOUND_TASK
62 return fmt.format(
63 task, flags=_strflags(flags),
64 app=appstr(task._app) if task._app else None,
65 )
66
67
68 class Context(object):
69 # Default context
70 logfile = None
71 loglevel = None
72 hostname = None
73 id = None
74 args = None
75 kwargs = None
76 retries = 0
77 eta = None
78 expires = None
79 is_eager = False
80 headers = None
81 delivery_info = None
82 reply_to = None
83 root_id = None
84 parent_id = None
85 correlation_id = None
86 taskset = None # compat alias to group
87 group = None
88 chord = None
89 utc = None
90 called_directly = True
91 callbacks = None
92 errbacks = None
93 timelimit = None
94 _children = None # see property
95 _protected = 0
96
97 def __init__(self, *args, **kwargs):
98 self.update(*args, **kwargs)
99
100 def update(self, *args, **kwargs):
101 return self.__dict__.update(*args, **kwargs)
102
103 def clear(self):
104 return self.__dict__.clear()
105
106 def get(self, key, default=None):
107 return getattr(self, key, default)
108
109 def __repr__(self):
110 return '<Context: {0!r}>'.format(vars(self))
111
112 @property
113 def children(self):
114         # children must be an empty list for every thread
115 if self._children is None:
116 self._children = []
117 return self._children
118
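`Context` is a thin attribute bag: `update`, `clear` and `get` delegate to the instance `__dict__`, so keys that were never set fall back to the class-level defaults. A minimal standalone sketch reproducing just the relevant methods:

```python
class Context(object):
    # Class-level defaults; update() writes into the instance __dict__,
    # so unset keys fall through to these via getattr().
    retries = 0
    called_directly = True

    def __init__(self, *args, **kwargs):
        self.update(*args, **kwargs)

    def update(self, *args, **kwargs):
        return self.__dict__.update(*args, **kwargs)

    def get(self, key, default=None):
        return getattr(self, key, default)

ctx = Context(id='42', retries=3)
print(ctx.get('retries'))         # 3: instance value shadows the class default
print(Context().get('retries'))   # 0: class-level default
print(ctx.get('missing', 'n/a'))  # n/a: absent attributes use the fallback
```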
119
120 class Task(object):
121 """Task base class.
122
123 When called tasks apply the :meth:`run` method. This method must
124 be defined by all tasks (that is unless the :meth:`__call__` method
125 is overridden).
126
127 """
128 __trace__ = None
129 __v2_compat__ = False # set by old base in celery.task.base
130
131 ErrorMail = ErrorMail
132 MaxRetriesExceededError = MaxRetriesExceededError
133
134 #: Execution strategy used, or the qualified name of one.
135 Strategy = 'celery.worker.strategy:default'
136
137 #: This is the instance bound to if the task is a method of a class.
138 __self__ = None
139
140 #: The application instance associated with this task class.
141 _app = None
142
143 #: Name of the task.
144 name = None
145
146 #: If :const:`True` the task is an abstract base class.
147 abstract = True
148
149 #: Maximum number of retries before giving up. If set to :const:`None`,
150 #: it will **never** stop retrying.
151 max_retries = 3
152
153 #: Default time in seconds before a retry of the task should be
154 #: executed. 3 minutes by default.
155 default_retry_delay = 3 * 60
156
157 #: Rate limit for this task type. Examples: :const:`None` (no rate
158 #: limit), `'100/s'` (hundred tasks a second), `'100/m'` (hundred tasks
159     #: a minute), `'100/h'` (hundred tasks an hour)
160 rate_limit = None
161
162 #: If enabled the worker will not store task state and return values
163 #: for this task. Defaults to the :setting:`CELERY_IGNORE_RESULT`
164 #: setting.
165 ignore_result = None
166
167 #: If enabled the request will keep track of subtasks started by
168 #: this task, and this information will be sent with the result
169 #: (``result.children``).
170 trail = True
171
172 #: When enabled errors will be stored even if the task is otherwise
173 #: configured to ignore results.
174 store_errors_even_if_ignored = None
175
176 #: If enabled an email will be sent to :setting:`ADMINS` whenever a task
177 #: of this type fails.
178 send_error_emails = None
179
180     #: The name of a serializer that is registered with
181 #: :mod:`kombu.serialization.registry`. Default is `'pickle'`.
182 serializer = None
183
184 #: Hard time limit.
185 #: Defaults to the :setting:`CELERYD_TASK_TIME_LIMIT` setting.
186 time_limit = None
187
188 #: Soft time limit.
189 #: Defaults to the :setting:`CELERYD_TASK_SOFT_TIME_LIMIT` setting.
190 soft_time_limit = None
191
192 #: The result store backend used for this task.
193 backend = None
194
195 #: If disabled this task won't be registered automatically.
196 autoregister = True
197
198 #: If enabled the task will report its status as 'started' when the task
199 #: is executed by a worker. Disabled by default as the normal behaviour
200 #: is to not report that level of granularity. Tasks are either pending,
201 #: finished, or waiting to be retried.
202 #:
203 #: Having a 'started' status can be useful for when there are long
204 #: running tasks and there is a need to report which task is currently
205 #: running.
206 #:
207 #: The application default can be overridden using the
208 #: :setting:`CELERY_TRACK_STARTED` setting.
209 track_started = None
210
211 #: When enabled messages for this task will be acknowledged **after**
212 #: the task has been executed, and not *just before* which is the
213 #: default behavior.
214 #:
215 #: Please note that this means the task may be executed twice if the
216 #: worker crashes mid execution (which may be acceptable for some
217 #: applications).
218 #:
219 #: The application default can be overridden with the
220 #: :setting:`CELERY_ACKS_LATE` setting.
221 acks_late = None
222
223 #: Tuple of expected exceptions.
224 #:
225 #: These are errors that are expected in normal operation
226 #: and that should not be regarded as a real error by the worker.
227 #: Currently this means that the state will be updated to an error
228 #: state, but the worker will not log the event as an error.
229 throws = ()
230
231 #: Default task expiry time.
232 expires = None
233
234 #: Task request stack, the current request will be the topmost.
235 request_stack = None
236
237 #: Some may expect a request to exist even if the task has not been
238 #: called. This should probably be deprecated.
239 _default_request = None
240
241 _exec_options = None
242
243 __bound__ = False
244
245 from_config = (
246 ('send_error_emails', 'CELERY_SEND_TASK_ERROR_EMAILS'),
247 ('serializer', 'CELERY_TASK_SERIALIZER'),
248 ('rate_limit', 'CELERY_DEFAULT_RATE_LIMIT'),
249 ('track_started', 'CELERY_TRACK_STARTED'),
250 ('acks_late', 'CELERY_ACKS_LATE'),
251 ('ignore_result', 'CELERY_IGNORE_RESULT'),
252 ('store_errors_even_if_ignored',
253 'CELERY_STORE_ERRORS_EVEN_IF_IGNORED'),
254 )
255
256 #: ignored
257 accept_magic_kwargs = False
258
259 _backend = None # set by backend property.
260
261 __bound__ = False
262
263 # - Tasks are lazily bound, so that configuration is not set
264 # - until the task is actually used
265
266 @classmethod
267 def bind(self, app):
268 was_bound, self.__bound__ = self.__bound__, True
269 self._app = app
270 conf = app.conf
271 self._exec_options = None # clear option cache
272
273 for attr_name, config_name in self.from_config:
274 if getattr(self, attr_name, None) is None:
275 setattr(self, attr_name, conf[config_name])
276
277 # decorate with annotations from config.
278 if not was_bound:
279 self.annotate()
280
281 from celery.utils.threads import LocalStack
282 self.request_stack = LocalStack()
283
284 # PeriodicTask uses this to add itself to the PeriodicTask schedule.
285 self.on_bound(app)
286
287 return app
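`bind` lazily fills in any task attribute that is still `None` from the app configuration, using the `(attribute, setting)` pairs in `from_config` — this is how `CELERY_ACKS_LATE` becomes `Task.acks_late`. A reduced sketch of that fallback, with a plain dict standing in for `app.conf`:

```python
class Task(object):
    # (attribute, setting-name) pairs; None means "take it from the config".
    from_config = (
        ('acks_late', 'CELERY_ACKS_LATE'),
        ('serializer', 'CELERY_TASK_SERIALIZER'),
    )
    acks_late = None
    serializer = 'pickle'  # explicitly set, so bind() must not override it

    @classmethod
    def bind(cls, conf):
        for attr_name, config_name in cls.from_config:
            if getattr(cls, attr_name, None) is None:
                setattr(cls, attr_name, conf[config_name])

Task.bind({'CELERY_ACKS_LATE': True, 'CELERY_TASK_SERIALIZER': 'json'})
print(Task.acks_late, Task.serializer)  # True pickle
```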
288
289 @classmethod
290 def on_bound(self, app):
291 """This method can be defined to do additional actions when the
292 task class is bound to an app."""
293 pass
294
295 @classmethod
296 def _get_app(self):
297 if self._app is None:
298 self._app = current_app
299 if not self.__bound__:
300 # The app property's __set__ method is not called
301 # if Task.app is set (on the class), so must bind on use.
302 self.bind(self._app)
303 return self._app
304 app = class_property(_get_app, bind)
305
306 @classmethod
307 def annotate(self):
308 for d in resolve_all_annotations(self.app.annotations, self):
309 for key, value in items(d):
310 if key.startswith('@'):
311 self.add_around(key[1:], value)
312 else:
313 setattr(self, key, value)
314
315 @classmethod
316 def add_around(self, attr, around):
317 orig = getattr(self, attr)
318 if getattr(orig, '__wrapped__', None):
319 orig = orig.__wrapped__
320 meth = around(orig)
321 meth.__wrapped__ = orig
322 setattr(self, attr, meth)
323
324 def __call__(self, *args, **kwargs):
325 _task_stack.push(self)
326 self.push_request(args=args, kwargs=kwargs)
327 try:
328 # add self if this is a bound task
329 if self.__self__ is not None:
330 return self.run(self.__self__, *args, **kwargs)
331 return self.run(*args, **kwargs)
332 finally:
333 self.pop_request()
334 _task_stack.pop()
335
336 def __reduce__(self):
337         # - tasks are pickled into the name of the task only, and the receiver
338 # - simply grabs it from the local registry.
339 # - in later versions the module of the task is also included,
340 # - and the receiving side tries to import that module so that
341 # - it will work even if the task has not been registered.
342 mod = type(self).__module__
343 mod = mod if mod and mod in sys.modules else None
344 return (_unpickle_task_v2, (self.name, mod), None)
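The `__reduce__` above means a pickled task carries only its name (and, in later versions, its module), and unpickling resolves it through the registry rather than copying instance state. A standalone sketch of the same pickle-by-name pattern, with a hypothetical `REGISTRY` dict standing in for the task registry:

```python
import pickle

REGISTRY = {}

def _unpickle_by_name(name):
    # The receiving side simply grabs the instance from the registry.
    return REGISTRY[name]

class ByNamePicklable(object):
    # Like Task.__reduce__ above: serialize only the name.
    name = None

    def __reduce__(self):
        return (_unpickle_by_name, (self.name,), None)

obj = ByNamePicklable()
obj.name = 'tasks.add'
REGISTRY['tasks.add'] = obj

restored = pickle.loads(pickle.dumps(obj))
print(restored is obj)  # True: same registry instance, not a copy
```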
345
346 def run(self, *args, **kwargs):
347 """The body of the task executed by workers."""
348 raise NotImplementedError('Tasks must define the run method.')
349
350 def start_strategy(self, app, consumer, **kwargs):
351 return instantiate(self.Strategy, self, app, consumer, **kwargs)
352
353 def delay(self, *args, **kwargs):
354 """Star argument version of :meth:`apply_async`.
355
356 Does not support the extra options enabled by :meth:`apply_async`.
357
358 :param \*args: positional arguments passed on to the task.
359 :param \*\*kwargs: keyword arguments passed on to the task.
360
361 :returns :class:`celery.result.AsyncResult`:
362
363 """
364 return self.apply_async(args, kwargs)
365
366 def apply_async(self, args=None, kwargs=None, task_id=None, producer=None,
367 link=None, link_error=None, shadow=None, **options):
368 """Apply tasks asynchronously by sending a message.
369
370 :keyword args: The positional arguments to pass on to the
371 task (a :class:`list` or :class:`tuple`).
372
373 :keyword kwargs: The keyword arguments to pass on to the
374 task (a :class:`dict`)
375
376 :keyword countdown: Number of seconds into the future that the
377 task should execute. Defaults to immediate
378 execution.
379
380 :keyword eta: A :class:`~datetime.datetime` object describing
381 the absolute time and date of when the task should
382 be executed. May not be specified if `countdown`
383 is also supplied.
384
385 :keyword expires: Either a :class:`int`, describing the number of
386 seconds, or a :class:`~datetime.datetime` object
387 that describes the absolute time and date of when
388 the task should expire. The task will not be
389 executed after the expiration time.
390
391 :keyword shadow: Override task name used in logs/monitoring
392 (default from :meth:`shadow_name`).
393
394 :keyword connection: Re-use existing broker connection instead
395 of establishing a new one.
396
397 :keyword retry: If enabled sending of the task message will be retried
398 in the event of connection loss or failure. Default
399 is taken from the :setting:`CELERY_TASK_PUBLISH_RETRY`
400 setting. Note that you need to handle the
401 producer/connection manually for this to work.
402
403 :keyword retry_policy: Override the retry policy used. See the
404 :setting:`CELERY_TASK_PUBLISH_RETRY_POLICY`
405 setting.
406
407 :keyword routing_key: Custom routing key used to route the task to a
408 worker server. If in combination with a
409 ``queue`` argument only used to specify custom
410 routing keys to topic exchanges.
411
412 :keyword queue: The queue to route the task to. This must be a key
413 present in :setting:`CELERY_QUEUES`, or
414 :setting:`CELERY_CREATE_MISSING_QUEUES` must be
415 enabled. See :ref:`guide-routing` for more
416 information.
417
418 :keyword exchange: Named custom exchange to send the task to.
419 Usually not used in combination with the ``queue``
420 argument.
421
422 :keyword priority: The task priority, a number between 0 and 9.
423 Defaults to the :attr:`priority` attribute.
424
425 :keyword serializer: A string identifying the default
426 serialization method to use. Can be `pickle`,
427 `json`, `yaml`, `msgpack` or any custom
428 serialization method that has been registered
429 with :mod:`kombu.serialization.registry`.
430 Defaults to the :attr:`serializer` attribute.
431
432 :keyword compression: A string identifying the compression method
433 to use. Can be one of ``zlib``, ``bzip2``,
434 or any custom compression methods registered with
435 :func:`kombu.compression.register`. Defaults to
436 the :setting:`CELERY_MESSAGE_COMPRESSION`
437 setting.
438 :keyword link: A single, or a list of tasks to apply if the
439 task exits successfully.
440 :keyword link_error: A single, or a list of tasks to apply
441 if an error occurs while executing the task.
442
443 :keyword producer: :class:`kombu.Producer` instance to use.
444
445 :keyword add_to_parent: If set to True (default) and the task
446 is applied while executing another task, then the result
447 will be appended to the parent tasks ``request.children``
448 attribute. Trailing can also be disabled by default using the
449 :attr:`trail` attribute
450
451 :keyword publisher: Deprecated alias to ``producer``.
452
453 :keyword headers: Message headers to be sent in the
454 task (a :class:`dict`)
455
456 :rtype :class:`celery.result.AsyncResult`: if
457 :setting:`CELERY_ALWAYS_EAGER` is not set, otherwise
458 :class:`celery.result.EagerResult`:
459
460 Also supports all keyword arguments supported by
461 :meth:`kombu.Producer.publish`.
462
463 .. note::
464 If the :setting:`CELERY_ALWAYS_EAGER` setting is set, it will
465 be replaced by a local :func:`apply` call instead.
466
467 """
468 try:
469 check_arguments = self.__header__
470 except AttributeError:
471 pass
472 else:
473 check_arguments(*(args or ()), **(kwargs or {}))
474
475 app = self._get_app()
476 if app.conf.CELERY_ALWAYS_EAGER:
477 return self.apply(args, kwargs, task_id=task_id or uuid(),
478 link=link, link_error=link_error, **options)
479 # add 'self' if this is a "task_method".
480 if self.__self__ is not None:
481 args = args if isinstance(args, tuple) else tuple(args or ())
482 args = (self.__self__,) + args
483 shadow = shadow or self.shadow_name(args, kwargs, options)
484
485 preopts = self._get_exec_options()
486 options = dict(preopts, **options) if options else preopts
487 return app.send_task(
488 self.name, args, kwargs, task_id=task_id, producer=producer,
489 link=link, link_error=link_error, result_cls=self.AsyncResult,
490 shadow=shadow,
491 **options
492 )
493
494 def shadow_name(self, args, kwargs, options):
495 """Override for custom task name in worker logs/monitoring.
496
497 :param args: Task positional arguments.
498 :param kwargs: Task keyword arguments.
499 :param options: Task execution options.
500
501 **Example**:
502
503 .. code-block:: python
504
505 from celery.utils.imports import qualname
506
507 def shadow_name(task, args, kwargs, options):
508 return qualname(args[0])
509
510 @app.task(shadow_name=shadow_name, serializer='pickle')
511 def apply_function_async(fun, *args, **kwargs):
512 return fun(*args, **kwargs)
513
514 """
515 pass
516
517 def signature_from_request(self, request=None, args=None, kwargs=None,
518 queue=None, **extra_options):
519 request = self.request if request is None else request
520 args = request.args if args is None else args
521 kwargs = request.kwargs if kwargs is None else kwargs
522 limit_hard, limit_soft = request.timelimit or (None, None)
523 options = {
524 'task_id': request.id,
525 'link': request.callbacks,
526 'link_error': request.errbacks,
527 'group_id': request.group,
528 'chord': request.chord,
529 'soft_time_limit': limit_soft,
530 'time_limit': limit_hard,
531 'reply_to': request.reply_to,
532 'headers': request.headers,
533 }
534 options.update(
535 {'queue': queue} if queue else (request.delivery_info or {}),
536 )
537 return self.signature(
538 args, kwargs, options, type=self, **extra_options
539 )
540 subtask_from_request = signature_from_request
541
542 def retry(self, args=None, kwargs=None, exc=None, throw=True,
543 eta=None, countdown=None, max_retries=None, **options):
544 """Retry the task.
545
546 :param args: Positional arguments to retry with.
547 :param kwargs: Keyword arguments to retry with.
548 :keyword exc: Custom exception to report when the max restart
549 limit has been exceeded (default:
550 :exc:`~@MaxRetriesExceededError`).
551
552 If this argument is set and retry is called while
553 an exception was raised (``sys.exc_info()`` is set)
554 it will attempt to reraise the current exception.
555
556 If no exception was raised it will raise the ``exc``
557 argument provided.
558 :keyword countdown: Time in seconds to delay the retry for.
559 :keyword eta: Explicit time and date to run the retry at
560 (must be a :class:`~datetime.datetime` instance).
561 :keyword max_retries: If set, overrides the default retry limit.
562 A value of :const:`None`, means "use the default", so if you want
563 infinite retries you would have to set the :attr:`max_retries`
564 attribute of the task to :const:`None` first.
565 :keyword time_limit: If set, overrides the default time limit.
566 :keyword soft_time_limit: If set, overrides the default soft
567 time limit.
568 :keyword \*\*options: Any extra options to pass on to
569                              :meth:`apply_async`.
570 :keyword throw: If this is :const:`False`, do not raise the
571 :exc:`~@Retry` exception,
572 that tells the worker to mark the task as being
573 retried. Note that this means the task will be
574 marked as failed if the task raises an exception,
575 or successful if it returns.
576
577 :raises celery.exceptions.Retry: To tell the worker that
578 the task has been re-sent for retry. This always happens,
579 unless the `throw` keyword argument has been explicitly set
580 to :const:`False`, and is considered normal operation.
581
582 **Example**
583
584 .. code-block:: pycon
585
586 >>> from imaginary_twitter_lib import Twitter
587 >>> from proj.celery import app
588
589 >>> @app.task(bind=True)
590 ... def tweet(self, auth, message):
591 ... twitter = Twitter(oauth=auth)
592 ... try:
593 ... twitter.post_status_update(message)
594 ... except twitter.FailWhale as exc:
595 ... # Retry in 5 minutes.
596 ... raise self.retry(countdown=60 * 5, exc=exc)
597
598 Although the task will never return above as `retry` raises an
599 exception to notify the worker, we use `raise` in front of the retry
600 to convey that the rest of the block will not be executed.
601
602 """
603 request = self.request
604 retries = request.retries + 1
605 max_retries = self.max_retries if max_retries is None else max_retries
606
607 # Not in worker or emulated by (apply/always_eager),
608 # so just raise the original exception.
609 if request.called_directly:
610 maybe_reraise() # raise orig stack if PyErr_Occurred
611 raise exc or Retry('Task can be retried', None)
612
613 if not eta and countdown is None:
614 countdown = self.default_retry_delay
615
616 is_eager = request.is_eager
617 S = self.signature_from_request(
618 request, args, kwargs,
619 countdown=countdown, eta=eta, retries=retries,
620 **options
621 )
622
623 if max_retries is not None and retries > max_retries:
624 if exc:
625 # first try to reraise the original exception
626 maybe_reraise()
627 # or if not in an except block then raise the custom exc.
628 raise exc
629 raise self.MaxRetriesExceededError(
630 "Can't retry {0}[{1}] args:{2} kwargs:{3}".format(
631 self.name, request.id, S.args, S.kwargs))
632
633 ret = Retry(exc=exc, when=eta or countdown)
634
635 if is_eager:
636 # if task was executed eagerly using apply(),
637 # then the retry must also be executed eagerly.
638 S.apply().get()
639 if throw:
640 raise ret
641 return ret
642
643 try:
644 S.apply_async()
645 except Exception as exc:
646 raise Reject(exc, requeue=False)
647 if throw:
648 raise ret
649 return ret
650
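Two details of `retry` above are easy to miss: the attempt about to be scheduled counts as `request.retries + 1`, and `max_retries=None` disables the limit entirely. A tiny sketch of just that guard (a hypothetical helper, not part of the API):

```python
def retries_exhausted(prev_retries, max_retries):
    # Mirrors the guard in retry(): the upcoming attempt is prev_retries + 1,
    # and a max_retries of None means "retry forever".
    return max_retries is not None and prev_retries + 1 > max_retries

print(retries_exhausted(2, 3))      # False: attempt 3 of 3 is still allowed
print(retries_exhausted(3, 3))      # True: attempt 4 would exceed the limit
print(retries_exhausted(99, None))  # False: unlimited retries
```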
651 def apply(self, args=None, kwargs=None,
652 link=None, link_error=None, **options):
653 """Execute this task locally, by blocking until the task returns.
654
655 :param args: positional arguments passed on to the task.
656 :param kwargs: keyword arguments passed on to the task.
657 :keyword throw: Re-raise task exceptions. Defaults to
658 the :setting:`CELERY_EAGER_PROPAGATES_EXCEPTIONS`
659 setting.
660
661 :rtype :class:`celery.result.EagerResult`:
662
663 """
664 # trace imports Task, so need to import inline.
665 from celery.app.trace import build_tracer
666
667 app = self._get_app()
668 args = args or ()
669 # add 'self' if this is a bound method.
670 if self.__self__ is not None:
671 args = (self.__self__,) + tuple(args)
672 kwargs = kwargs or {}
673 task_id = options.get('task_id') or uuid()
674 retries = options.get('retries', 0)
675 throw = app.either('CELERY_EAGER_PROPAGATES_EXCEPTIONS',
676 options.pop('throw', None))
677
678 # Make sure we get the task instance, not class.
679 task = app._tasks[self.name]
680
681 request = {'id': task_id,
682 'retries': retries,
683 'is_eager': True,
684 'logfile': options.get('logfile'),
685 'loglevel': options.get('loglevel', 0),
686 'callbacks': maybe_list(link),
687 'errbacks': maybe_list(link_error),
688 'headers': options.get('headers'),
689 'delivery_info': {'is_eager': True}}
690 tb = None
691 tracer = build_tracer(
692 task.name, task, eager=True,
693 propagate=throw, app=self._get_app(),
694 )
695 ret = tracer(task_id, args, kwargs, request)
696 retval = ret.retval
697 if isinstance(retval, ExceptionInfo):
698 retval, tb = retval.exception, retval.traceback
699 state = states.SUCCESS if ret.info is None else ret.info.state
700 return EagerResult(task_id, retval, state, traceback=tb)
701
702 def AsyncResult(self, task_id, **kwargs):
703 """Get AsyncResult instance for this kind of task.
704
705 :param task_id: Task id to get result for.
706
707 """
708 return self._get_app().AsyncResult(task_id, backend=self.backend,
709 task_name=self.name, **kwargs)
710
711 def signature(self, args=None, *starargs, **starkwargs):
712 """Return :class:`~celery.signature` object for
713 this task, wrapping arguments and execution options
714 for a single task invocation."""
715 starkwargs.setdefault('app', self.app)
716 return signature(self, args, *starargs, **starkwargs)
717 subtask = signature
718
719 def s(self, *args, **kwargs):
720 """``.s(*a, **k) -> .signature(a, k)``"""
721 return self.signature(args, kwargs)
722
723 def si(self, *args, **kwargs):
724 """``.si(*a, **k) -> .signature(a, k, immutable=True)``"""
725 return self.signature(args, kwargs, immutable=True)
726
727 def chunks(self, it, n):
728 """Creates a :class:`~celery.canvas.chunks` task for this task."""
729 from celery import chunks
730 return chunks(self.s(), it, n, app=self.app)
731
732 def map(self, it):
733 """Creates a :class:`~celery.canvas.xmap` task from ``it``."""
734 from celery import xmap
735 return xmap(self.s(), it, app=self.app)
736
737 def starmap(self, it):
738 """Creates a :class:`~celery.canvas.xstarmap` task from ``it``."""
739 from celery import xstarmap
740 return xstarmap(self.s(), it, app=self.app)
741
742 def send_event(self, type_, **fields):
743 req = self.request
744 with self.app.events.default_dispatcher(hostname=req.hostname) as d:
745 return d.send(type_, uuid=req.id, **fields)
746
747 def replace(self, sig):
748 """Replace the current task, with a new task inheriting the
749 same task id.
750
751         :param sig: :class:`@signature` of the task to replace this one with.
752
753 Note: This will raise :exc:`~@Ignore`, so the best practice
754 is to always use ``raise self.replace(...)`` to convey
755 to the reader that the task will not continue after being replaced.
756
759 """
760 chord = self.request.chord
761 if isinstance(sig, group):
762 sig |= self.app.tasks['celery.accumulate'].s(index=0).set(
763 chord=chord,
764 )
765 chord = None
766 sig.freeze(self.request.id,
767 group_id=self.request.group,
768 chord=chord,
769 root_id=self.request.root_id)
770 sig.delay()
771 raise Ignore('Chord member replaced by new task')
772
773 def add_to_chord(self, sig, lazy=False):
774 """Add signature to the chord the current task is a member of.
775
776 :param sig: Signature to extend chord with.
777 :param lazy: If enabled the new task will not actually be called,
778 and ``sig.delay()`` must be called manually.
779
780 Currently only supported by the Redis result backend when
781 ``?new_join=1`` is enabled.
782
783 """
784 if not self.request.chord:
785 raise ValueError('Current task is not member of any chord')
786 result = sig.freeze(group_id=self.request.group,
787 chord=self.request.chord,
788 root_id=self.request.root_id)
789 self.backend.add_to_chord(self.request.group, result)
790 return sig.delay() if not lazy else sig
791
792 def update_state(self, task_id=None, state=None, meta=None):
793 """Update task state.
794
795 :keyword task_id: Id of the task to update, defaults to the
796 id of the current task
797 :keyword state: New state (:class:`str`).
798 :keyword meta: State metadata (:class:`dict`).
799
800
801
802 """
803 if task_id is None:
804 task_id = self.request.id
805 self.backend.store_result(task_id, meta, state)
806
807 def on_success(self, retval, task_id, args, kwargs):
808 """Success handler.
809
810 Run by the worker if the task executes successfully.
811
812 :param retval: The return value of the task.
813 :param task_id: Unique id of the executed task.
814 :param args: Original arguments for the executed task.
815 :param kwargs: Original keyword arguments for the executed task.
816
817 The return value of this handler is ignored.
818
819 """
820 pass
821
822 def on_retry(self, exc, task_id, args, kwargs, einfo):
823 """Retry handler.
824
825 This is run by the worker when the task is to be retried.
826
827 :param exc: The exception sent to :meth:`retry`.
828 :param task_id: Unique id of the retried task.
829 :param args: Original arguments for the retried task.
830 :param kwargs: Original keyword arguments for the retried task.
831
832 :keyword einfo: :class:`~billiard.einfo.ExceptionInfo`
833 instance, containing the traceback.
834
835 The return value of this handler is ignored.
836
837 """
838 pass
839
840 def on_failure(self, exc, task_id, args, kwargs, einfo):
841 """Error handler.
842
843 This is run by the worker when the task fails.
844
845 :param exc: The exception raised by the task.
846 :param task_id: Unique id of the failed task.
847 :param args: Original arguments for the task that failed.
848 :param kwargs: Original keyword arguments for the task
849 that failed.
850
851 :keyword einfo: :class:`~billiard.einfo.ExceptionInfo`
852 instance, containing the traceback.
853
854 The return value of this handler is ignored.
855
856 """
857 pass
858
859 def after_return(self, status, retval, task_id, args, kwargs, einfo):
860 """Handler called after the task returns.
861
862 :param status: Current task state.
863 :param retval: Task return value/exception.
864 :param task_id: Unique id of the task.
865 :param args: Original arguments for the task.
866 :param kwargs: Original keyword arguments for the task.
867
868 :keyword einfo: :class:`~billiard.einfo.ExceptionInfo`
869 instance, containing the traceback (if any).
870
871 The return value of this handler is ignored.
872
873 """
874 pass
875
876 def send_error_email(self, context, exc, **kwargs):
877 if self.send_error_emails and \
878 not getattr(self, 'disable_error_emails', None):
879 self.ErrorMail(self, **kwargs).send(context, exc)
880
881 def add_trail(self, result):
882 if self.trail:
883 self.request.children.append(result)
884 return result
885
886 def push_request(self, *args, **kwargs):
887 self.request_stack.push(Context(*args, **kwargs))
888
889 def pop_request(self):
890 self.request_stack.pop()
891
892 def __repr__(self):
893 """`repr(task)`"""
894 return _reprtask(self, R_SELF_TASK if self.__self__ else R_INSTANCE)
895
896 def _get_request(self):
897 """Get current request object."""
898 req = self.request_stack.top
899 if req is None:
900 # task was not called, but some may still expect a request
901 # to be there, perhaps that should be deprecated.
902 if self._default_request is None:
903 self._default_request = Context()
904 return self._default_request
905 return req
906 request = property(_get_request)
907
908 def _get_exec_options(self):
909 if self._exec_options is None:
910 self._exec_options = extract_exec_options(self)
911 return self._exec_options
912
913 @property
914 def backend(self):
915 backend = self._backend
916 if backend is None:
917 return self.app.backend
918 return backend
919
920 @backend.setter
921 def backend(self, value): # noqa
922 self._backend = value
923
924 @property
925 def __name__(self):
926 return self.__class__.__name__
927 abstract.CallableTask.register(Task)
928 BaseTask = Task # compat alias
929
[end of celery/app/task.py]
[start of celery/worker/request.py]
1 # -*- coding: utf-8 -*-
2 """
3 celery.worker.request
4 ~~~~~~~~~~~~~~~~~~~~~
5
6 This module defines the :class:`Request` class,
7 which specifies how tasks are executed.
8
9 """
10 from __future__ import absolute_import, unicode_literals
11
12 import logging
13 import socket
14 import sys
15
16 from datetime import datetime
17 from weakref import ref
18
19 from kombu.utils.encoding import safe_repr, safe_str
20
21 from celery import signals
22 from celery.app.trace import trace_task, trace_task_ret
23 from celery.exceptions import (
24 Ignore, TaskRevokedError, InvalidTaskError,
25 SoftTimeLimitExceeded, TimeLimitExceeded,
26 WorkerLostError, Terminated, Retry, Reject,
27 )
28 from celery.five import string
29 from celery.platforms import signals as _signals
30 from celery.utils.functional import noop
31 from celery.utils.log import get_logger
32 from celery.utils.timeutils import maybe_iso8601, timezone, maybe_make_aware
33 from celery.utils.serialization import get_pickled_exception
34
35 from . import state
36
37 __all__ = ['Request']
38
39 IS_PYPY = hasattr(sys, 'pypy_version_info')
40
41 logger = get_logger(__name__)
42 debug, info, warn, error = (logger.debug, logger.info,
43 logger.warning, logger.error)
44 _does_info = False
45 _does_debug = False
46
47
48 def __optimize__():
49 # this is also called by celery.app.trace.setup_worker_optimizations
50 global _does_debug
51 global _does_info
52 _does_debug = logger.isEnabledFor(logging.DEBUG)
53 _does_info = logger.isEnabledFor(logging.INFO)
54 __optimize__()
55
56 # Localize
57 tz_utc = timezone.utc
58 tz_or_local = timezone.tz_or_local
59 send_revoked = signals.task_revoked.send
60
61 task_accepted = state.task_accepted
62 task_ready = state.task_ready
63 revoked_tasks = state.revoked
64
65
66 class Request(object):
67 """A request for task execution."""
68 acknowledged = False
69 time_start = None
70 worker_pid = None
71 time_limits = (None, None)
72 _already_revoked = False
73 _terminate_on_ack = None
74 _apply_result = None
75 _tzlocal = None
76
77 if not IS_PYPY: # pragma: no cover
78 __slots__ = (
79 'app', 'type', 'name', 'id', 'on_ack', 'body',
80 'hostname', 'eventer', 'connection_errors', 'task', 'eta',
81 'expires', 'request_dict', 'on_reject', 'utc',
82 'content_type', 'content_encoding',
83 '__weakref__', '__dict__',
84 )
85
86 def __init__(self, message, on_ack=noop,
87 hostname=None, eventer=None, app=None,
88 connection_errors=None, request_dict=None,
89 task=None, on_reject=noop, body=None,
90 headers=None, decoded=False, utc=True,
91 maybe_make_aware=maybe_make_aware,
92 maybe_iso8601=maybe_iso8601, **opts):
93 if headers is None:
94 headers = message.headers
95 if body is None:
96 body = message.body
97 self.app = app
98 self.message = message
99 self.body = body
100 self.utc = utc
101 if decoded:
102 self.content_type = self.content_encoding = None
103 else:
104 self.content_type, self.content_encoding = (
105 message.content_type, message.content_encoding,
106 )
107
108 self.id = headers['id']
109 type = self.type = self.name = headers['task']
110 if 'shadow' in headers:
111 self.name = headers['shadow']
112 if 'timelimit' in headers:
113 self.time_limits = headers['timelimit']
114 self.on_ack = on_ack
115 self.on_reject = on_reject
116 self.hostname = hostname or socket.gethostname()
117 self.eventer = eventer
118 self.connection_errors = connection_errors or ()
119 self.task = task or self.app.tasks[type]
120
121 # timezone means the message is timezone-aware, and the only timezone
122 # supported at this point is UTC.
123 eta = headers.get('eta')
124 if eta is not None:
125 try:
126 eta = maybe_iso8601(eta)
127 except (AttributeError, ValueError, TypeError) as exc:
128 raise InvalidTaskError(
129 'invalid eta value {0!r}: {1}'.format(eta, exc))
130 self.eta = maybe_make_aware(eta, self.tzlocal)
131 else:
132 self.eta = None
133
134 expires = headers.get('expires')
135 if expires is not None:
136 try:
137 expires = maybe_iso8601(expires)
138 except (AttributeError, ValueError, TypeError) as exc:
139 raise InvalidTaskError(
140 'invalid expires value {0!r}: {1}'.format(expires, exc))
141 self.expires = maybe_make_aware(expires, self.tzlocal)
142 else:
143 self.expires = None
144
145 delivery_info = message.delivery_info or {}
146 properties = message.properties or {}
147 headers.update({
148 'reply_to': properties.get('reply_to'),
149 'correlation_id': properties.get('correlation_id'),
150 'delivery_info': {
151 'exchange': delivery_info.get('exchange'),
152 'routing_key': delivery_info.get('routing_key'),
153 'priority': delivery_info.get('priority'),
154 'redelivered': delivery_info.get('redelivered'),
155 }
156
157 })
158 self.request_dict = headers
159
160 @property
161 def delivery_info(self):
162 return self.request_dict['delivery_info']
163
164 def execute_using_pool(self, pool, **kwargs):
165 """Used by the worker to send this task to the pool.
166
167 :param pool: A :class:`celery.concurrency.base.TaskPool` instance.
168
169 :raises celery.exceptions.TaskRevokedError: if the task was revoked
170 and ignored.
171
172 """
173 task_id = self.id
174 task = self.task
175 if self.revoked():
176 raise TaskRevokedError(task_id)
177
178 time_limit, soft_time_limit = self.time_limits
179 time_limit = time_limit or task.time_limit
180 soft_time_limit = soft_time_limit or task.soft_time_limit
181 result = pool.apply_async(
182 trace_task_ret,
183 args=(self.type, task_id, self.request_dict, self.body,
184 self.content_type, self.content_encoding),
185 accept_callback=self.on_accepted,
186 timeout_callback=self.on_timeout,
187 callback=self.on_success,
188 error_callback=self.on_failure,
189 soft_timeout=soft_time_limit,
190 timeout=time_limit,
191 correlation_id=task_id,
192 )
193 # cannot create weakref to None
194 self._apply_result = ref(result) if result is not None else result
195 return result
196
197 def execute(self, loglevel=None, logfile=None):
198 """Execute the task in a :func:`~celery.app.trace.trace_task`.
199
200 :keyword loglevel: The loglevel used by the task.
201 :keyword logfile: The logfile used by the task.
202
203 """
204 if self.revoked():
205 return
206
207 # acknowledge task as being processed.
208 if not self.task.acks_late:
209 self.acknowledge()
210
211 request = self.request_dict
212 args, kwargs, embed = self.message.payload
213 request.update({'loglevel': loglevel, 'logfile': logfile,
214 'hostname': self.hostname, 'is_eager': False,
215 'args': args, 'kwargs': kwargs}, **embed or {})
216 retval = trace_task(self.task, self.id, args, kwargs, request,
217 hostname=self.hostname, loader=self.app.loader,
218 app=self.app)[0]
219 self.acknowledge()
220 return retval
221
222 def maybe_expire(self):
223 """If expired, mark the task as revoked."""
224 if self.expires:
225 now = datetime.now(self.expires.tzinfo)
226 if now > self.expires:
227 revoked_tasks.add(self.id)
228 return True
229
230 def terminate(self, pool, signal=None):
231 signal = _signals.signum(signal or 'TERM')
232 if self.time_start:
233 pool.terminate_job(self.worker_pid, signal)
234 self._announce_revoked('terminated', True, signal, False)
235 else:
236 self._terminate_on_ack = pool, signal
237 if self._apply_result is not None:
238 obj = self._apply_result() # is a weakref
239 if obj is not None:
240 obj.terminate(signal)
241
242 def _announce_revoked(self, reason, terminated, signum, expired):
243 task_ready(self)
244 self.send_event('task-revoked',
245 terminated=terminated, signum=signum, expired=expired)
246 if self.store_errors:
247 self.task.backend.mark_as_revoked(self.id, reason, request=self)
248 self.acknowledge()
249 self._already_revoked = True
250 send_revoked(self.task, request=self,
251 terminated=terminated, signum=signum, expired=expired)
252
253 def revoked(self):
254 """If revoked, skip task and mark state."""
255 expired = False
256 if self._already_revoked:
257 return True
258 if self.expires:
259 expired = self.maybe_expire()
260 if self.id in revoked_tasks:
261 info('Discarding revoked task: %s[%s]', self.name, self.id)
262 self._announce_revoked(
263 'expired' if expired else 'revoked', False, None, expired,
264 )
265 return True
266 return False
267
268 def send_event(self, type, **fields):
269 if self.eventer and self.eventer.enabled:
270 self.eventer.send(type, uuid=self.id, **fields)
271
272 def on_accepted(self, pid, time_accepted):
273 """Handler called when task is accepted by worker pool."""
274 self.worker_pid = pid
275 self.time_start = time_accepted
276 task_accepted(self)
277 if not self.task.acks_late:
278 self.acknowledge()
279 self.send_event('task-started')
280 if _does_debug:
281 debug('Task accepted: %s[%s] pid:%r', self.name, self.id, pid)
282 if self._terminate_on_ack is not None:
283 self.terminate(*self._terminate_on_ack)
284
285 def on_timeout(self, soft, timeout):
286 """Handler called if the task times out."""
287 task_ready(self)
288 if soft:
289 warn('Soft time limit (%ss) exceeded for %s[%s]',
290 soft, self.name, self.id)
291 exc = SoftTimeLimitExceeded(soft)
292 else:
293 error('Hard time limit (%ss) exceeded for %s[%s]',
294 timeout, self.name, self.id)
295 exc = TimeLimitExceeded(timeout)
296
297 if self.store_errors:
298 self.task.backend.mark_as_failure(self.id, exc, request=self)
299
300 if self.task.acks_late:
301 self.acknowledge()
302
303 def on_success(self, failed__retval__runtime, **kwargs):
304 """Handler called if the task was successfully processed."""
305 failed, retval, runtime = failed__retval__runtime
306 if failed:
307 if isinstance(retval.exception, (SystemExit, KeyboardInterrupt)):
308 raise retval.exception
309 return self.on_failure(retval, return_ok=True)
310 task_ready(self)
311
312 if self.task.acks_late:
313 self.acknowledge()
314
315 self.send_event('task-succeeded', result=retval, runtime=runtime)
316
317 def on_retry(self, exc_info):
318 """Handler called if the task should be retried."""
319 if self.task.acks_late:
320 self.acknowledge()
321
322 self.send_event('task-retried',
323 exception=safe_repr(exc_info.exception.exc),
324 traceback=safe_str(exc_info.traceback))
325
326 def on_failure(self, exc_info, send_failed_event=True, return_ok=False):
327 """Handler called if the task raised an exception."""
328 task_ready(self)
329
330 if isinstance(exc_info.exception, MemoryError):
331 raise MemoryError('Process got: %s' % (exc_info.exception,))
332 elif isinstance(exc_info.exception, Reject):
333 return self.reject(requeue=exc_info.exception.requeue)
334 elif isinstance(exc_info.exception, Ignore):
335 return self.acknowledge()
336
337 exc = exc_info.exception
338
339 if isinstance(exc, Retry):
340 return self.on_retry(exc_info)
341
342 # These are special cases where the process would not have had
343 # time to write the result.
344 if self.store_errors:
345 if isinstance(exc, Terminated):
346 self._announce_revoked(
347 'terminated', True, string(exc), False)
348 send_failed_event = False # already sent revoked event
349 elif isinstance(exc, WorkerLostError) or not return_ok:
350 self.task.backend.mark_as_failure(
351 self.id, exc, request=self,
352 )
353 # (acks_late) acknowledge after result stored.
354 if self.task.acks_late:
355 self.acknowledge()
356
357 if send_failed_event:
358 self.send_event(
359 'task-failed',
360 exception=safe_repr(get_pickled_exception(exc_info.exception)),
361 traceback=exc_info.traceback,
362 )
363
364 if not return_ok:
365 error('Task handler raised error: %r', exc,
366 exc_info=exc_info.exc_info)
367
368 def acknowledge(self):
369 """Acknowledge task."""
370 if not self.acknowledged:
371 self.on_ack(logger, self.connection_errors)
372 self.acknowledged = True
373
374 def reject(self, requeue=False):
375 if not self.acknowledged:
376 self.on_reject(logger, self.connection_errors, requeue)
377 self.acknowledged = True
378
379 def info(self, safe=False):
380 return {'id': self.id,
381 'name': self.name,
382 'type': self.type,
383 'body': self.body,
384 'hostname': self.hostname,
385 'time_start': self.time_start,
386 'acknowledged': self.acknowledged,
387 'delivery_info': self.delivery_info,
388 'worker_pid': self.worker_pid}
389
390 def __str__(self):
391 return ' '.join([
392 self.humaninfo(),
393 ' eta:[{0}]'.format(self.eta) if self.eta else '',
394 ' expires:[{0}]'.format(self.expires) if self.expires else '',
395 ])
396 shortinfo = __str__
397
398 def humaninfo(self):
399 return '{0.name}[{0.id}]'.format(self)
400
401 def __repr__(self):
402 return '<{0}: {1}>'.format(type(self).__name__, self.humaninfo())
403
404 @property
405 def tzlocal(self):
406 if self._tzlocal is None:
407 self._tzlocal = self.app.conf.CELERY_TIMEZONE
408 return self._tzlocal
409
410 @property
411 def store_errors(self):
412 return (not self.task.ignore_result or
413 self.task.store_errors_even_if_ignored)
414
415 @property
416 def task_id(self):
417 # XXX compat
418 return self.id
419
420 @task_id.setter # noqa
421 def task_id(self, value):
422 self.id = value
423
424 @property
425 def task_name(self):
426 # XXX compat
427 return self.name
428
429 @task_name.setter # noqa
430 def task_name(self, value):
431 self.name = value
432
433 @property
434 def reply_to(self):
435 # used by rpc backend when failures reported by parent process
436 return self.request_dict['reply_to']
437
438 @property
439 def correlation_id(self):
440 # used similarly to reply_to
441 return self.request_dict['correlation_id']
442
443
444 def create_request_cls(base, task, pool, hostname, eventer,
445 ref=ref, revoked_tasks=revoked_tasks,
446 task_ready=task_ready):
447 from celery.app.trace import trace_task_ret as trace
448 default_time_limit = task.time_limit
449 default_soft_time_limit = task.soft_time_limit
450 apply_async = pool.apply_async
451 acks_late = task.acks_late
452 events = eventer and eventer.enabled
453
454 class Request(base):
455
456 def execute_using_pool(self, pool, **kwargs):
457 task_id = self.id
458 if (self.expires or task_id in revoked_tasks) and self.revoked():
459 raise TaskRevokedError(task_id)
460
461 time_limit, soft_time_limit = self.time_limits
462 time_limit = time_limit or default_time_limit
463 soft_time_limit = soft_time_limit or default_soft_time_limit
464 result = apply_async(
465 trace,
466 args=(self.type, task_id, self.request_dict, self.body,
467 self.content_type, self.content_encoding),
468 accept_callback=self.on_accepted,
469 timeout_callback=self.on_timeout,
470 callback=self.on_success,
471 error_callback=self.on_failure,
472 soft_timeout=soft_time_limit,
473 timeout=time_limit,
474 correlation_id=task_id,
475 )
476 # cannot create weakref to None
477 self._apply_result = ref(result) if result is not None else result
478 return result
479
480 def on_success(self, failed__retval__runtime, **kwargs):
481 failed, retval, runtime = failed__retval__runtime
482 if failed:
483 if isinstance(retval.exception, (
484 SystemExit, KeyboardInterrupt)):
485 raise retval.exception
486 return self.on_failure(retval, return_ok=True)
487 task_ready(self)
488
489 if acks_late:
490 self.acknowledge()
491
492 if events:
493 self.send_event(
494 'task-succeeded', result=retval, runtime=runtime,
495 )
496
497 return Request
498
[end of celery/worker/request.py]
</code>
Here is an example of a patch file. It consists of changes to the code base. It specifies the file names, the line numbers of each change, and the removed and added lines. A single patch file can contain changes to multiple files.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
I need you to solve the provided issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the format shown above.
Respond below:
|
celery/celery
|
045b52f1450d6d5cc500e0057a4b498250dc5692
|
Message being acknowledged on WorkerLostError when CELERY_ACKS_LATE=True
When using celery v3.0.24, with `CELERY_ACKS_LATE = True` , if the OOM killer kills the celery worker, then the worker acknowledges the message.
As per [this](https://github.com/celery/celery/commit/e810420c) commit. The `exc_info.internal` comes in as `false`, which means it is not a internal error, due to which the message is acknowledged.
The desirable behaviour, in such a case would be to not acknowledge the message (and be able to know, whether its a OOM error), so that some other worker can pick it up.
As a workaround, I've commented out the [code](https://github.com/siddharth96/celery/commit/427695d1b23034dadda85fd7a48f7367831be4fa), where celery acknowledges the message, because in such a case, message will be lost.
|
This is deliberate as if a task is killed it may mean that the next invocation will also cause the same to happen. If the task is redelivered it may cause a loop where the same conditions occur again and again. Also, sadly you cannot distinguish processes killed by OOM from processes killed by other means, and if an administrator kills -9 a task going amok, you usually don't want that task to be called again.
There could be a configuration option for not acking terminated tasks, but I'm not sure how useful that would be.
A better solution could be to use `basic_reject(requeue=False)` instead of `basic_ack`, that way you can configure
a dead letter queue so that the killed tasks will be sent to a queue for manual inspection.
I must say, regardless of the status of this feature request, the documentation is misleading. Specifically, [this FAQ makes it seem that process failures would NOT acknowledge messages](http://celery.readthedocs.org/en/latest/faq.html#faq-acks-late-vs-retry). And [this FAQ boldface states](http://celery.readthedocs.org/en/latest/faq.html#id54) that in the event of a kill signal (9), that acks_late will allow the task to re-run (which again, is patently wrong based on this poorly documented behavior). Nowhere in the docs have I found that if the process _dies_, the message will be acknowledged, regardless of acks_late or not. (for instance, I have a set of 10k+ tasks, and some 1% of tasks wind up acknowledged but incomplete when a WorkerLostError is thrown in connection with the worker, although there are no other errors of any kind in any of my logs related to that task).
TL;DR at the least, appropriately document the current state when describing the functionality and limitations of acks_late. A work-around would be helpful -- I'm not sure I understand the solution of using `basic_reject`, although I'll keep looking into it.
The docs are referring to killing the worker process with KILL, not the child processes. The term worker will always refer to the worker instance, not the pool processes. The section within about acks_late is probably not very helpful and should be removed
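To make the dead-letter-queue suggestion in the comments above concrete, here is a minimal sketch. The exchange and routing-key names (`tasks_dlx`, `tasks.dead`) are hypothetical, not from the thread; the `x-dead-letter-*` keys themselves are standard RabbitMQ queue arguments, so a message rejected with `requeue=False` on a queue declared with them is re-published by the broker to the named exchange for manual inspection instead of being lost.

```python
# Hypothetical helper: builds the RabbitMQ queue arguments that enable
# broker-side dead-lettering for messages rejected with requeue=False.
# The 'x-dead-letter-*' keys are standard RabbitMQ queue arguments.
def dead_letter_arguments(dlx_name, routing_key):
    return {
        'x-dead-letter-exchange': dlx_name,
        'x-dead-letter-routing-key': routing_key,
    }

# Example: tasks rejected on the main task queue get routed through the
# 'tasks_dlx' exchange under the 'tasks.dead' key for later inspection.
TASK_QUEUE_ARGUMENTS = dead_letter_arguments('tasks_dlx', 'tasks.dead')
```

When declaring the task queue (e.g. via a kombu `Queue` or a raw `queue_declare`), this dict would be passed as the queue arguments.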
|
2015-10-06T05:34:34Z
|
<patch>
diff --git a/celery/app/defaults.py b/celery/app/defaults.py
--- a/celery/app/defaults.py
+++ b/celery/app/defaults.py
@@ -132,6 +132,7 @@ def __repr__(self):
'REDIS_DB': Option(type='int', **_REDIS_OLD),
'REDIS_PASSWORD': Option(type='string', **_REDIS_OLD),
'REDIS_MAX_CONNECTIONS': Option(type='int'),
+ 'REJECT_ON_WORKER_LOST': Option(type='bool'),
'RESULT_BACKEND': Option(type='string'),
'RESULT_DB_SHORT_LIVED_SESSIONS': Option(False, type='bool'),
'RESULT_DB_TABLENAMES': Option(type='dict'),
diff --git a/celery/app/task.py b/celery/app/task.py
--- a/celery/app/task.py
+++ b/celery/app/task.py
@@ -220,6 +220,12 @@ class Task(object):
#: :setting:`CELERY_ACKS_LATE` setting.
acks_late = None
+ #: When CELERY_ACKS_LATE is set to True, the default behavior to
+ #: handle worker crash is to acknowledge the message. Setting
+ #: this to true allows the message to be rejected and requeued so
+ #: it will be executed again by another worker.
+ reject_on_worker_lost = None
+
#: Tuple of expected exceptions.
#:
#: These are errors that are expected in normal operation
@@ -248,6 +254,7 @@ class Task(object):
('rate_limit', 'CELERY_DEFAULT_RATE_LIMIT'),
('track_started', 'CELERY_TRACK_STARTED'),
('acks_late', 'CELERY_ACKS_LATE'),
+ ('reject_on_worker_lost', 'CELERY_REJECT_ON_WORKER_LOST'),
('ignore_result', 'CELERY_IGNORE_RESULT'),
('store_errors_even_if_ignored',
'CELERY_STORE_ERRORS_EVEN_IF_IGNORED'),
diff --git a/celery/worker/request.py b/celery/worker/request.py
--- a/celery/worker/request.py
+++ b/celery/worker/request.py
@@ -326,7 +326,6 @@ def on_retry(self, exc_info):
def on_failure(self, exc_info, send_failed_event=True, return_ok=False):
"""Handler called if the task raised an exception."""
task_ready(self)
-
if isinstance(exc_info.exception, MemoryError):
raise MemoryError('Process got: %s' % (exc_info.exception,))
elif isinstance(exc_info.exception, Reject):
@@ -352,7 +351,13 @@ def on_failure(self, exc_info, send_failed_event=True, return_ok=False):
)
# (acks_late) acknowledge after result stored.
if self.task.acks_late:
- self.acknowledge()
+ reject_and_requeue = (self.task.reject_on_worker_lost and
+ isinstance(exc, WorkerLostError) and
+ self.delivery_info.get('redelivered', False) is False)
+ if reject_and_requeue:
+ self.reject(requeue=True)
+ else:
+ self.acknowledge()
if send_failed_event:
self.send_event(
</patch>
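The acknowledgement decision introduced by the patch above can be sketched as a standalone predicate. `should_requeue_on_failure` is a hypothetical name used here for illustration only; in the actual patch this logic is inlined in `Request.on_failure`, keyed off the new `reject_on_worker_lost` task attribute (backed by the `CELERY_REJECT_ON_WORKER_LOST` setting).

```python
def should_requeue_on_failure(acks_late, reject_on_worker_lost,
                              is_worker_lost, redelivered):
    """Mirror of the patched on_failure decision: reject-and-requeue
    only when acks_late is enabled, the new reject_on_worker_lost flag
    is set, the failure is a WorkerLostError, and the message has not
    already been redelivered (the redelivered check guards against an
    endless requeue loop if the task keeps killing its worker).
    Returns True to reject(requeue=True), False to acknowledge."""
    if not acks_late:
        return False
    return bool(reject_on_worker_lost and is_worker_lost
                and redelivered is False)
```

In every other branch (flag unset, a different exception, or a message already redelivered once) the request falls back to the pre-patch behaviour of acknowledging the message.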
|
diff --git a/celery/tests/worker/test_request.py b/celery/tests/worker/test_request.py
--- a/celery/tests/worker/test_request.py
+++ b/celery/tests/worker/test_request.py
@@ -325,6 +325,20 @@ def test_on_failure_Reject_rejects_with_requeue(self):
req_logger, req.connection_errors, True,
)
+ def test_on_failure_WrokerLostError_rejects_with_requeue(self):
+ einfo = None
+ try:
+ raise WorkerLostError()
+ except:
+ einfo = ExceptionInfo(internal=True)
+ req = self.get_request(self.add.s(2, 2))
+ req.task.acks_late = True
+ req.task.reject_on_worker_lost = True
+ req.delivery_info['redelivered'] = False
+ req.on_failure(einfo)
+ req.on_reject.assert_called_with(req_logger,
+ req.connection_errors, True)
+
def test_tzlocal_is_cached(self):
req = self.get_request(self.add.s(2, 2))
req._tzlocal = 'foo'
|
1.0
| |||
NVIDIA__NeMo-473
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Jasper Encoder Export failed
The export of Jasper Encoder is failing. I am using the core API [deployment_export](https://nvidia.github.io/NeMo/api-docs/nemo.html#nemo.core.neural_factory.NeuralModuleFactory.deployment_export) like in the script: https://github.com/NVIDIA/NeMo/blob/403238f82d26879ba5fca53fbf75b3cdc70fb49b/scripts/export_jasper_to_onnx.py#L92
I believe the issue (as shown below) is that the` input_example` provided does not match the `output_example`.
```
/opt/conda/lib/python3.6/site-packages/torch/jit/__init__.py:1023: TracerWarning: Output nr 1. of the traced function does not match the corresponding output of the Python function. Detailed error:
Not within tolerance rtol=1e-05 atol=1e-05 at input[0, 870, 67] (0.6547648906707764 vs. 0.6546438932418823) and 812 other locations (0.00%)
check_tolerance, _force_outplace, True, _module_class)
[NeMo E 2020-02-23 19:10:07 actions:1023] module export failed for JasperEncoder with exception number of output names provided (2) exceeded number of outputs (1)
```
**What is the correct `input_example` and `output_example` to export JasperEncoder?**
The full output can be seen here:
```
adrianaf@2a520c7abb1e:/tmp/NeMo$ ! python /tmp/NeMo/scripts/export_jasper_to_onnx.py --config /raid/datasets/asr/data/config_files/WSJ-test_acoustic_quartznet15x5.yaml --nn_encoder /home/adrianaf/projects/nemo_asr_app/models/quartznet15x5/JasperEncoder-STEP-247400.pt --nn_decoder /home/adrianaf/projects/nemo_asr_app/models/quartznet15x5/JasperDecoderForCTC-STEP-247400.pt --onnx_encoder /raid/datasets/asr/data/models/ONNX/pre-trained_encoder.onnx --onnx_decoder /raid/datasets/asr/data/models/ONNX/pre-trained_decoder.onnx
/opt/conda/lib/python3.6/site-packages/torchvision/io/_video_opt.py:17: UserWarning: video reader based on ffmpeg c++ ops not available
warnings.warn("video reader based on ffmpeg c++ ops not available")
/tmp/NeMo/nemo/collections/asr/audio_preprocessing.py:48: UserWarning: Could not import torchaudio. Some features might not work.
warnings.warn('Could not import torchaudio. Some features might not work.')
[NeMo I 2020-02-23 19:09:42 export_jasper_to_onnx:48] Loading config file...
[NeMo I 2020-02-23 19:09:42 export_jasper_to_onnx:52] Determining model shape...
[NeMo I 2020-02-23 19:09:42 export_jasper_to_onnx:60] Num encoder input features: 64
[NeMo I 2020-02-23 19:09:42 export_jasper_to_onnx:61] Num decoder input features: 1024
[NeMo W 2020-02-23 19:09:42 deprecated:68] Function ``_get_trainer`` is deprecated. It is going to be removed in the future version.
[NeMo I 2020-02-23 19:09:42 export_jasper_to_onnx:65] Initializing models...
[NeMo I 2020-02-23 19:09:45 export_jasper_to_onnx:76] Loading checkpoints...
[NeMo I 2020-02-23 19:09:45 export_jasper_to_onnx:91] Exporting encoder...
[NeMo W 2020-02-23 19:09:45 neural_factory:627] Turned off 170 masked convolutions
[NeMo I 2020-02-23 19:09:45 actions:937] Module is JasperEncoder. We are removing input and output length ports since they are not needed for deployment
[NeMo W 2020-02-23 19:09:46 deprecated:68] Function ``local_parameters`` is deprecated. It is going to be removed in the 0.11 version.
/opt/conda/lib/python3.6/site-packages/torch/jit/__init__.py:1023: TracerWarning: Output nr 1. of the traced function does not match the corresponding output of the Python function. Detailed error:
Not within tolerance rtol=1e-05 atol=1e-05 at input[0, 870, 67] (0.6547648906707764 vs. 0.6546438932418823) and 812 other locations (0.00%)
check_tolerance, _force_outplace, True, _module_class)
[NeMo E 2020-02-23 19:10:07 actions:1023] module export failed for JasperEncoder with exception number of output names provided (2) exceeded number of outputs (1)
[NeMo I 2020-02-23 19:10:07 export_jasper_to_onnx:98] Exporting decoder...
graph(%encoder_output : Float(1, 1024, 128),
%1 : Float(29, 1024, 1),
%2 : Float(29)):
%3 : Float(1, 29, 128) = onnx::Conv[dilations=[1], group=1, kernel_shape=[1], pads=[0, 0], strides=[1]](%encoder_output, %1, %2), scope: JasperDecoderForCTC/Sequential[decoder_layers]/Conv1d[0] # /opt/conda/lib/python3.6/site-packages/torch/nn/modules/conv.py:202:0
%4 : Float(1, 128, 29) = onnx::Transpose[perm=[0, 2, 1]](%3), scope: JasperDecoderForCTC # /tmp/NeMo/nemo/collections/asr/jasper.py:235:0
%output : Float(1, 128, 29) = onnx::LogSoftmax[axis=2](%4), scope: JasperDecoderForCTC # /opt/conda/lib/python3.6/site-packages/torch/nn/functional.py:1317:0
return (%output)
/opt/conda/lib/python3.6/site-packages/torch/onnx/utils.py:774: UserWarning: No names were found for specified dynamic axes of provided input.Automatically generated names will be applied to each dynamic axes of input encoder_output
'Automatically generated names will be applied to each dynamic axes of input {}'.format(key))
/opt/conda/lib/python3.6/site-packages/torch/onnx/utils.py:774: UserWarning: No names were found for specified dynamic axes of provided input.Automatically generated names will be applied to each dynamic axes of input output
'Automatically generated names will be applied to each dynamic axes of input {}'.format(key))
[NeMo I 2020-02-23 19:10:07 export_jasper_to_onnx:105] Export completed successfully.
```
</issue>
<code>
[start of README.rst]
1 .. image:: http://www.repostatus.org/badges/latest/active.svg
2 :target: http://www.repostatus.org/#active
 3      :alt: Project Status: Active – The project has reached a stable, usable state and is being actively developed.
4
5 .. image:: https://img.shields.io/badge/documentation-github.io-blue.svg
6 :target: https://nvidia.github.io/NeMo/
7 :alt: NeMo documentation on GitHub pages
8
9 .. image:: https://img.shields.io/badge/License-Apache%202.0-brightgreen.svg
10 :target: https://github.com/NVIDIA/NeMo/blob/master/LICENSE
11 :alt: NeMo core license and license for collections in this repo
12
13 .. image:: https://img.shields.io/lgtm/grade/python/g/NVIDIA/NeMo.svg?logo=lgtm&logoWidth=18
14 :target: https://lgtm.com/projects/g/NVIDIA/NeMo/context:python
15 :alt: Language grade: Python
16
17 .. image:: https://img.shields.io/lgtm/alerts/g/NVIDIA/NeMo.svg?logo=lgtm&logoWidth=18
18 :target: https://lgtm.com/projects/g/NVIDIA/NeMo/alerts/
19 :alt: Total alerts
20
21 .. image:: https://img.shields.io/badge/code%20style-black-000000.svg
22 :target: https://github.com/psf/black
23 :alt: Code style: black
24
25
26
27 NVIDIA Neural Modules: NeMo
28 ===========================
29
30 NeMo is a toolkit for defining and building `Conversational AI <https://developer.nvidia.com/conversational-ai#started>`_ applications.
31
 32 The goal of the NeMo toolkit is to make it possible for researchers to easily compose complex neural network architectures for conversational AI using reusable components. Built for speed, NeMo can utilize NVIDIA's Tensor Cores and scale out training to multiple GPUs and multiple nodes.
33
34 **Neural Modules** are conceptual blocks of neural networks that take *typed* inputs and produce *typed* outputs. Such modules typically represent data layers, encoders, decoders, language models, loss functions, or methods of combining activations.
35
 36 The toolkit comes with extendable collections of pre-built modules for automatic speech recognition (ASR), natural language processing (NLP) and speech synthesis (TTS).
37
38 **Introduction**
39
40 * Watch `this video <https://nvidia.github.io/NeMo/>`_ for a quick walk-through.
41
42 * Documentation (latest released version): https://nvidia.github.io/NeMo/
43
44 * Read NVIDIA `Developer Blog for example applications <https://devblogs.nvidia.com/how-to-build-domain-specific-automatic-speech-recognition-models-on-gpus/>`_
45
46 * Read NVIDIA `Developer Blog for Quartznet ASR model <https://devblogs.nvidia.com/develop-smaller-speech-recognition-models-with-nvidias-nemo-framework/>`_
47
48 * Recommended version to install is **0.9.0** via ``pip install nemo-toolkit``
49
50 * Recommended NVIDIA `NGC NeMo Toolkit container <https://ngc.nvidia.com/catalog/containers/nvidia:nemo>`_
51
52 * Pretrained models are available on NVIDIA `NGC Model repository <https://ngc.nvidia.com/catalog/models?orderBy=modifiedDESC&query=nemo&quickFilter=models&filters=>`_
53
54
55 Getting started
56 ~~~~~~~~~~~~~~~
57
 58 The latest stable version of NeMo is **0.9.0** (available via pip).
59
60 **Requirements**
61
62 1) Python 3.6 or 3.7
63 2) PyTorch 1.4.* with GPU support
64 3) (optional, for best performance) NVIDIA APEX. Install from here: https://github.com/NVIDIA/apex
65
66 **NeMo Docker Container**
67 NVIDIA `NGC NeMo Toolkit container <https://ngc.nvidia.com/catalog/containers/nvidia:nemo>`_ is now available.
68
69 * Pull the docker: ``docker pull nvcr.io/nvidia/nemo:v0.9``
70 * Run: ``docker run --runtime=nvidia -it --rm -v <nemo_github_folder>:/NeMo --shm-size=8g -p 8888:8888 -p 6006:6006 --ulimit memlock=-1 --ulimit stack=67108864 nvcr.io/nvidia/nemo:v0.9``
71
72 If you are using the NVIDIA `NGC PyTorch container <https://ngc.nvidia.com/catalog/containers/nvidia:pytorch>`_ follow these instructions
73
74 * Pull the docker: ``docker pull nvcr.io/nvidia/pytorch:20.01-py3``
75 * Run: ``docker run --runtime=nvidia -it --rm -v <nemo_github_folder>:/NeMo --shm-size=8g -p 8888:8888 -p 6006:6006 --ulimit memlock=-1 --ulimit stack=67108864 nvcr.io/nvidia/pytorch:20.01-py3``
76 * ``apt-get update && apt-get install -y libsndfile1``
77 * ``pip install nemo_toolkit`` NeMo core
78 * ``pip install nemo_asr`` NeMo ASR (Speech Recognition) collection
79 * ``pip install nemo_nlp`` NeMo NLP (Natural Language Processing) collection
80 * ``pip install nemo_tts`` NeMo TTS (Speech Synthesis) collection
81
 82 See ``examples/start_here`` to get started with the simplest example. The ``examples`` folder contains several more examples covering various NLP and ASR tasks.
83
84 **Tutorials**
85
86 * `Speech recognition <https://nvidia.github.io/NeMo/asr/intro.html>`_
87 * `Natural language processing <https://nvidia.github.io/NeMo/nlp/intro.html>`_
88 * `Speech Synthesis <https://nvidia.github.io/NeMo/tts/intro.html>`_
89
90
91 DEVELOPMENT
92 ~~~~~~~~~~~
 93 If you'd like to use the master branch and/or develop NeMo, you can run the ``reinstall.sh`` script.
94
95 `Documentation (master branch) <http://nemo-master-docs.s3-website.us-east-2.amazonaws.com/>`_.
96
97 **Installing From Github**
98
 99 If you prefer to use NeMo's latest development version (from GitHub), follow the steps below:
100
101 1) Clone the repository ``git clone https://github.com/NVIDIA/NeMo.git``
102 2) Go to NeMo folder and re-install the toolkit with collections:
103
104 .. code-block:: bash
105
106 ./reinstall.sh
107
108 **Style tests**
109
110 .. code-block:: bash
111
 112 python setup.py style # Checks overall project code style and outputs issues with diff.
 113 python setup.py style --fix # Tries to fix errors in-place.
 114 python setup.py style --scope=tests # Operates within a certain scope (dir or file).
115
116 **Unittests**
117
 118 These commands run the unittests:
119
120 .. code-block:: bash
121
122 ./reinstall.sh
 123 python -m pytest tests
124
125
126 Citation
127 ~~~~~~~~
128
129 If you are using NeMo please cite the following publication
130
131 .. code-block:: tex
132
133 @misc{nemo2019,
134 title={NeMo: a toolkit for building AI applications using Neural Modules},
135 author={Oleksii Kuchaiev and Jason Li and Huyen Nguyen and Oleksii Hrinchuk and Ryan Leary and Boris Ginsburg and Samuel Kriman and Stanislav Beliaev and Vitaly Lavrukhin and Jack Cook and Patrice Castonguay and Mariya Popova and Jocelyn Huang and Jonathan M. Cohen},
136 year={2019},
137 eprint={1909.09577},
138 archivePrefix={arXiv},
139 primaryClass={cs.LG}
140 }
141
142
[end of README.rst]
[start of nemo/backends/pytorch/actions.py]
1 # Copyright (c) 2019 NVIDIA Corporation
2 import copy
3 import importlib
4 import itertools
5 import json
6 import os
7 from collections import defaultdict
8 from contextlib import ExitStack
9 from pathlib import Path
10 from typing import List, Optional
11
12 import torch
13 import torch.distributed as dist
14 import torch.nn as nn
15 import torch.optim as optim
16 from torch.nn.parallel import DistributedDataParallel as DDP
17
18 from nemo import logging
19 from nemo.backends.pytorch.module_wrapper import TrainableNeuralModuleWrapper
20 from nemo.backends.pytorch.nm import DataLayerNM, TrainableNM
21 from nemo.backends.pytorch.optimizers import AdamW, Novograd, master_params
22 from nemo.core import DeploymentFormat, DeviceType, NeuralModule, NmTensor
23 from nemo.core.callbacks import ActionCallback, EvaluatorCallback, SimpleLossLoggerCallback
24 from nemo.core.neural_factory import Actions, ModelMode, Optimization
25 from nemo.core.neural_types import *
26 from nemo.utils.helpers import get_checkpoint_from_dir
27
28 # these imports will happen on as-needed basis
29 amp = None
30 # convert_syncbn = None
31 # create_syncbn_process_group = None
32 LARC = None
33 FusedLAMB = None
34 FusedAdam = None
35 FusedNovoGrad = None
36
37 AmpOptimizations = {
38 Optimization.mxprO0: "O0",
39 Optimization.mxprO1: "O1",
40 Optimization.mxprO2: "O2",
41 Optimization.mxprO3: "O3",
42 }
43
44 _float_2_half_req = {
45 Optimization.mxprO1,
46 Optimization.mxprO2,
47 Optimization.mxprO3,
48 }
49
50
51 class PtActions(Actions):
52 def __init__(
53 self, local_rank=None, global_rank=None, tb_writer=None, optimization_level=Optimization.mxprO0,
54 ):
55 need_apex = local_rank is not None or optimization_level != Optimization.mxprO0
56 if need_apex:
57 try:
58 apex = importlib.import_module('apex')
59 if optimization_level != Optimization.mxprO0:
60 global amp
61 amp = importlib.import_module('apex.amp')
62 if local_rank is not None:
63 # global convert_syncbn
64 # global create_syncbn_process_group
65 global LARC
66 global FusedLAMB
67 global FusedAdam
68 global FusedNovoGrad
69 parallel = importlib.import_module('apex.parallel')
70 apex_optimizer = importlib.import_module('apex.optimizers')
71 # convert_syncbn = parallel.convert_syncbn_model
72 # create_syncbn_process_group = parallel.create_syncbn_process_group
73 LARC = parallel.LARC
74 FusedLAMB = apex_optimizer.FusedLAMB
75 FusedAdam = apex_optimizer.FusedAdam
76 FusedNovoGrad = apex_optimizer.FusedNovoGrad
77
78 except ImportError:
79 raise ImportError(
 80                     "NVIDIA Apex is necessary for distributed training and "
 81                     "mixed precision training. It only works on GPUs. "
82 "Please install Apex from "
83 "https://www.github.com/nvidia/apex"
84 )
85
86 super(PtActions, self).__init__(
87 local_rank=local_rank, global_rank=global_rank, optimization_level=optimization_level,
88 )
89
90 # will be [unique_instance_id -> (NMModule, PTModule)]
91 self.module_reference_table = {}
92 self.step = 0
93 self.epoch_num = 0
94 self.optimizers = []
95 self.tb_writer = tb_writer
96 self._modules = set()
97 self.cache = None
98 self.amp_initialized = False
99
100 @property
101 def modules(self):
102 return self._modules
103
104 def __get_top_sorted_modules_and_dataloader(self, hook):
105 """
106 Constructs DAG leading to hook and creates its topological order.
107 It also populates self.module_reference_table.
108 Args:
109 hook: an NmTensor or a list of NmTensors representing leaf nodes
110 in DAG
111
112 Returns:
113 list of modules with their call arguments and outputs, and dataset
114 """
115
116 def create_node(producer, producer_args):
117 if producer_args is None:
118 return tuple((producer, ()))
119 else:
120 return tuple((producer, tuple([(k, v) for k, v in producer_args.items()]),))
121
122 def is_in_degree_zero(node, processed_nodes):
 123             """Returns True if the node has an in-degree of zero, i.e. all of its inputs have already been processed."""
124 if node[1] == ():
125 return True
126 for portname, nmtensor in node[1]:
127 nd = create_node(nmtensor.producer, nmtensor.producer_args)
128 if nd not in processed_nodes:
129 return False
130 return True
131
132 hooks = hook if isinstance(hook, list) else [hook]
133
134 # ensures that no tensors are processed twice
135 processed_nmtensors = set()
136
137 indices_to_remove = []
138 # Check for duplicates in hook
139 for i, nmtensor in enumerate(hook):
140 if nmtensor in processed_nmtensors:
141 indices_to_remove.append(i)
142 else:
143 processed_nmtensors.add(nmtensor)
144
145 for i in reversed(indices_to_remove):
146 hook.pop(i)
147
148 _top_sorted_modules = []
149 all_nodes = {}
150
151 # extract all nodes to all_nodes set
152 hooks_lst = list(hooks)
153 while len(hooks_lst) > 0:
154 # take nmtensor from the end of the list
155 nmtensor = hooks_lst.pop()
156 node = create_node(nmtensor.producer, nmtensor.producer_args)
157 # Store nmtensor as an output of its producer
158 # first make sure all keys are present per output port
159 # and nm is inside all_nodes
160 if node not in all_nodes:
161 all_nodes[node] = {k: None for k in nmtensor.producer.output_ports}
162 # second, populate output port with current nmtensor
163 # where applicable
164 all_nodes[node][nmtensor.name] = nmtensor
165 processed_nmtensors.add(nmtensor)
166 if nmtensor.producer_args is not None and nmtensor.producer_args != {}:
167 for _, new_nmtensor in nmtensor.producer_args.items():
168 if new_nmtensor not in processed_nmtensors:
169 # put in the start of list
170 hooks_lst.insert(0, new_nmtensor)
171
172 all_node_with_output = []
173 # Iterate over all_nodes to create new nodes that include its output
174 # now all nodes have (module, input tensors, output tensors)
175 for node in all_nodes:
176 all_node_with_output.append(tuple((node[0], node[1], all_nodes[node])))
177
178 processed_nodes = []
179 while len(all_node_with_output) > 0:
180 for node in all_node_with_output.copy():
181 # if node's in_degree is zero it can be added to
182 # _top_sorted_modules
183 # this will also reduce in_degree of its children
184 if is_in_degree_zero(node, processed_nodes):
185 _top_sorted_modules.append(node)
186 processed_nodes.append((node[0], node[1]))
187 all_node_with_output.remove(node)
188
189 # Create top_sorted_modules aka callchain
190 top_sorted_modules = []
191 for i, m in enumerate(_top_sorted_modules):
192 top_sorted_modules.append((m[0], dict(m[1]), m[2]))
193 # Ensure that there is only one dataset in callchain
194 if i > 0 and isinstance(m[0], DataLayerNM):
 195                 raise ValueError("There was more than one DataLayer NeuralModule inside your DAG.")
196
197 if not isinstance(top_sorted_modules[0][0], DataLayerNM):
198 raise ValueError("The first module in your DAG was not a DataLayer NeuralModule.")
199
200 tdataset = top_sorted_modules[0][0].dataset
201
202 # populate self.module_reference_table
203 for m in top_sorted_modules:
204 if m[0].factory is None and self._local_rank is not None:
205 raise ValueError(
 206                     "Neural module {0} was created without "
 207                     "NeuralModuleFactory, but you are trying to "
 208                     "run in distributed mode. Please instantiate "
 209                     "NeuralModuleFactory first and pass its "
 210                     "instance as `factory` parameter to all your "
 211                     "Neural Module objects."
 212                     "".format(str(m[0]))
213 )
214 key = m[0].unique_instance_id
215 if key not in self.module_reference_table:
216 if isinstance(m[0], TrainableNeuralModuleWrapper):
217 self.module_reference_table[key] = (m[0], m[0]._pt_module)
218 else:
219 self.module_reference_table[key] = (m[0], m[0])
220
221 return top_sorted_modules, tdataset
222
223 def create_optimizer(self, optimizer, things_to_optimize, optimizer_params=None):
224 """
225 Wrapper function around __setup_optimizer()
226
227 Args:
 228             optimizer : An instantiated PyTorch optimizer or string. For
229 currently supported strings, see __setup_optimizer().
230 things_to_optimize (list): Must be a list of Neural Modules and/or
231 parameters. If a Neural Module is passed, all trainable
232 parameters are extracted and passed to the optimizer.
233 optimizer_params (dict): Optional parameters dictionary.
234
235 Returns:
236 Optimizer
237 """
238
239 optimizer_instance = None
240 optimizer_class = None
241 if isinstance(optimizer, str):
242 optimizer_class = optimizer
243 elif isinstance(optimizer, torch.optim.Optimizer):
244 optimizer_instance = optimizer
245 else:
246 raise ValueError("`optimizer` must be a string or an instance of torch.optim.Optimizer")
247
248 modules_to_optimize = []
249 tensors_to_optimize = []
250 if not isinstance(things_to_optimize, list):
251 things_to_optimize = [things_to_optimize]
252 for thing in things_to_optimize:
253 if isinstance(thing, NeuralModule):
254 modules_to_optimize.append(thing)
255 elif isinstance(thing, NmTensor):
256 tensors_to_optimize.append(thing)
257 else:
258 raise ValueError(
 259                     "{} passed to create_optimizer() was neither a neural module nor a neural module tensor".format(thing)
 260                 )
261
262 if tensors_to_optimize:
263 call_chain, _ = self.__get_top_sorted_modules_and_dataloader(tensors_to_optimize)
264
265 for module in call_chain:
266 if module[0] not in modules_to_optimize:
267 modules_to_optimize.append(module[0])
268
269 # Extract trainable weights which will be optimized
270 params_list = [p.parameters() for p in modules_to_optimize if isinstance(p, TrainableNM) or p.is_trainable()]
271 params_to_optimize = itertools.chain(*params_list)
272
273 if optimizer_params is None:
274 optimizer_params = {}
275 # Init amp
276 optimizer = self.__setup_optimizer(
277 optimizer_instance=optimizer_instance,
278 optimizer_class=optimizer_class,
279 optimization_params=optimizer_params,
280 params_to_optimize=params_to_optimize,
281 )
282
283 self.optimizers.append(optimizer)
284 return optimizer
285
286 @staticmethod
287 def __setup_optimizer(
288 optimizer_instance, optimizer_class, optimization_params, params_to_optimize,
289 ):
290
291 if optimizer_instance is None:
292 # Setup optimizer instance, by default it is SGD
293 lr = optimization_params["lr"]
294 if optimizer_class.lower() == "sgd":
295 optimizer = optim.SGD(
296 params_to_optimize,
297 lr=lr,
298 momentum=optimization_params.get("momentum", 0.9),
299 weight_decay=optimization_params.get("weight_decay", 0.0),
300 )
301 elif optimizer_class.lower() == "adam":
302 optimizer = optim.Adam(
303 params=params_to_optimize, lr=lr, betas=optimization_params.get("betas", (0.9, 0.999)),
304 )
305 elif optimizer_class.lower() == "fused_adam":
306 optimizer = FusedAdam(params=params_to_optimize, lr=lr)
307 elif optimizer_class.lower() == "adam_w":
308 optimizer = AdamW(
309 params=params_to_optimize,
310 lr=lr,
311 weight_decay=optimization_params.get("weight_decay", 0.0),
312 betas=optimization_params.get("betas", (0.9, 0.999)),
313 )
314 elif optimizer_class.lower() == "novograd":
315 optimizer = Novograd(
316 params_to_optimize,
317 lr=lr,
318 weight_decay=optimization_params.get("weight_decay", 0.0),
319 luc=optimization_params.get("luc", False),
320 luc_trust=optimization_params.get("luc_eta", 1e-3),
321 betas=optimization_params.get("betas", (0.95, 0.25)),
322 )
323 elif optimizer_class.lower() == "fused_novograd":
324 optimizer = FusedNovoGrad(
325 params_to_optimize,
326 lr=lr,
327 weight_decay=optimization_params.get("weight_decay", 0.0),
328 reg_inside_moment=True,
329 grad_averaging=False,
330 betas=optimization_params.get("betas", (0.95, 0.25)),
331 )
332 elif optimizer_class.lower() == "fused_lamb":
333 optimizer = FusedLAMB(params_to_optimize, lr=lr,)
334 else:
335 raise ValueError("Unknown optimizer class: {0}".format(optimizer_class))
336
337 if optimization_params.get("larc", False):
338 logging.info("Enabling larc")
339 optimizer = LARC(optimizer, trust_coefficient=optimization_params.get("larc_eta", 2e-2),)
340 else:
 341             logging.info("Optimizer instance: {0} is provided.".format(optimizer_instance))
342 if optimizer_class is not None and optimizer_class != "":
343 logging.warning("Ignoring `optimizer_class` parameter because `optimizer_instance` is provided")
344 if optimization_params is not None and optimization_params != {}:
345 logging.warning(
346 "Ignoring `optimization_params` parameter for "
347 "optimizer because `optimizer_instance` is provided"
348 )
349 optimizer = optimizer_instance
350 return optimizer
351
352 def __initialize_amp(
353 self, optimizer, optim_level, amp_max_loss_scale=2.0 ** 24, amp_min_loss_scale=1.0,
354 ):
355 if optim_level not in AmpOptimizations:
356 raise ValueError(f"__initialize_amp() was called with unknown optim_level={optim_level}")
357 # in this case, nothing to do here
358 if optim_level == Optimization.mxprO0:
359 return optimizer
360
361 if len(self.modules) < 1:
362 raise ValueError("There were no modules to initialize")
363 pt_modules = []
364 for module in self.modules:
365 if isinstance(module, nn.Module):
366 pt_modules.append(module)
367 elif isinstance(module, TrainableNeuralModuleWrapper):
368 pt_modules.append(module._pt_module)
369
370 _, optimizer = amp.initialize(
371 max_loss_scale=amp_max_loss_scale,
372 min_loss_scale=amp_min_loss_scale,
373 models=pt_modules,
374 optimizers=optimizer,
375 opt_level=AmpOptimizations[optim_level],
376 )
377 self.amp_initialized = True
378 return optimizer
379
380 def __nm_graph_forward_pass(
381 self, call_chain, registered_tensors, mode=ModelMode.train, use_cache=False,
382 ):
383 for ind in range(1, len(call_chain)):
384 if use_cache:
385 in_cache = True
386 for tensor in call_chain[ind][2].values():
387 if tensor is None:
388 # NM has an output tensor that is not used in the
389 # current call chain, so we don't care if it's not in
390 # cache
391 continue
392 if tensor.unique_name not in registered_tensors:
393 in_cache = False
394 if in_cache:
395 continue
396 call_args = call_chain[ind][1]
397 # module = call_chain[ind][0]
398 m_id = call_chain[ind][0].unique_instance_id
399 pmodule = self.module_reference_table[m_id][1]
400
401 # if self._local_rank is not None:
402 # if isinstance(pmodule, DDP):
403 # if disable_allreduce:
404 # pmodule.disable_allreduce()
405 # else:
406 # pmodule.enable_allreduce()
407
408 if mode == ModelMode.train:
409 # if module.is_trainable():
410 if isinstance(pmodule, nn.Module):
411 pmodule.train()
412 elif mode == ModelMode.eval:
413 # if module.is_trainable():
414 if isinstance(pmodule, nn.Module):
415 pmodule.eval()
416 else:
417 raise ValueError("Unknown ModelMode")
418 # prepare call signature for `module`
419 call_set = {}
420 for tensor_name, nmtensor in call_args.items():
421 # _add_uuid_2_name(nmtensor.name, nmtensor.producer._uuid)
422 key = nmtensor.unique_name
423 call_set[tensor_name] = registered_tensors[key]
424 # actual PyTorch module call with signature
425 if isinstance(self.module_reference_table[m_id][0], TrainableNeuralModuleWrapper,):
426 new_tensors = pmodule(**call_set)
427 else:
428 new_tensors = pmodule(force_pt=True, **call_set)
429
430 if not isinstance(new_tensors, List):
431 if not isinstance(new_tensors, tuple):
432 new_tensors = [new_tensors]
433 else:
434 new_tensors = list(new_tensors)
435 for t_tensor, nm_tensor in zip(new_tensors, call_chain[ind][2].values()):
436 if nm_tensor is None:
437 continue
438 t_name = nm_tensor.unique_name
439 if t_name not in registered_tensors:
440 registered_tensors[t_name] = t_tensor
441 else:
 442                     raise ValueError(f"An NMTensor was produced twice in the same DAG: {t_name}")
443
444 @staticmethod
445 def pad_tensor(t: torch.Tensor, target_size: torch.Size):
446 padded_shape = target_size.cpu().data.numpy().tolist()
447 padded_t = torch.zeros(padded_shape).cuda().type_as(t)
448 t_size = t.size()
449 if len(t_size) == 0:
450 padded_t = t
451 elif len(t_size) == 1:
452 padded_t[: t_size[0]] = t
453 elif len(t_size) == 2:
454 padded_t[: t_size[0], : t_size[1]] = t
455 elif len(t_size) == 3:
456 padded_t[: t_size[0], : t_size[1], : t_size[2]] = t
457 elif len(t_size) == 4:
458 padded_t[: t_size[0], : t_size[1], : t_size[2], : t_size[3]] = t
459 else:
460 raise NotImplementedError
461 return padded_t
462
463 @staticmethod
464 def depad_tensor(t: torch.Tensor, original_size: torch.Size):
465 t_size = original_size
466 if len(t_size) == 0:
467 depadded_t = t
468 elif len(t_size) == 1:
469 depadded_t = t[: t_size[0]]
470 elif len(t_size) == 2:
471 depadded_t = t[: t_size[0], : t_size[1]]
472 elif len(t_size) == 3:
473 depadded_t = t[: t_size[0], : t_size[1], : t_size[2]]
474 elif len(t_size) == 4:
475 depadded_t = t[: t_size[0], : t_size[1], : t_size[2], : t_size[3]]
476 else:
477 raise NotImplementedError
478 return depadded_t
479
480 def _eval(self, tensors_2_evaluate, callback, step, verbose=False):
481 """
482 Evaluation process.
 483         WARNING: this function assumes that all tensors_2_evaluate are based
 484         on a single datalayer.
485 Args:
486 tensors_2_evaluate: list of NmTensors to evaluate
487 callback: instance of EvaluatorCallback
488 step: current training step, used for logging
489
490 Returns:
491 None
492 """
493 with torch.no_grad():
494 # each call chain corresponds to a tensor in tensors_2_evaluate
495 call_chain, _ = self.__get_top_sorted_modules_and_dataloader(hook=tensors_2_evaluate)
496 # "Retrieve" data layer from call chain.
497 dl_nm = call_chain[0][0]
498
499 # Prepare eval_dataloader
500 # For distributed training it should have disjoint subsets of
501 # all data on every worker
502 is_distributed = False
503 world_size = None
504 if dl_nm.placement == DeviceType.AllGpu:
505 assert dist.is_initialized()
506 is_distributed = True
507 world_size = torch.distributed.get_world_size()
508 # logging.info(
509 # "Doing distributed evaluation. Rank {0} of {1}".format(
510 # self.local_rank, world_size
511 # )
512 # )
513 if dl_nm.dataset is not None:
514 sampler = torch.utils.data.distributed.DistributedSampler(
515 dataset=dl_nm.dataset, shuffle=dl_nm.shuffle
516 )
517 eval_dataloader = torch.utils.data.DataLoader(
518 dataset=dl_nm.dataset,
519 sampler=sampler,
520 num_workers=dl_nm.num_workers,
521 batch_size=dl_nm.batch_size,
522 shuffle=False,
523 )
524 else:
525 eval_dataloader = dl_nm.data_iterator
526
527 if hasattr(eval_dataloader, 'sampler'):
528 eval_dataloader.sampler.set_epoch(0)
529 else: # Not distributed
530 if dl_nm.dataset is not None:
531 # Todo: remove local_parameters
532 eval_dataloader = torch.utils.data.DataLoader(
533 dataset=dl_nm.dataset,
534 sampler=None, # not distributed sampler
535 num_workers=dl_nm.num_workers,
536 batch_size=dl_nm.batch_size,
537 shuffle=dl_nm.shuffle,
538 )
539 else:
540 eval_dataloader = dl_nm.data_iterator
541 # after this eval_dataloader is ready to be used
542 # reset global_var_dict - results of evaluation will be stored
543 # there
544
545 callback.clear_global_var_dict()
546 dl_device = dl_nm._device
547
548 # Evaluation mini-batch for loop
549 num_batches = None
550 if hasattr(eval_dataloader, "__len__"):
551 num_batches = len(eval_dataloader)
552 for epoch_i, data in enumerate(eval_dataloader, 0):
553 if (
554 verbose
555 and num_batches is not None
556 and (num_batches < 10 or (epoch_i % int(num_batches / 10) == 0))
557 ):
558 logging.info(f"Evaluating batch {epoch_i} out of {num_batches}")
559 tensors = []
560 if isinstance(data, torch.Tensor):
561 data = (data,)
562 for d in data:
563 if isinstance(d, torch.Tensor):
564 tensors.append(d.to(dl_device))
565 else:
566 tensors.append(d)
567
568 registered_e_tensors = {
569 t.unique_name: d for t, d in zip(call_chain[0][2].values(), tensors) if t is not None
570 }
571 self.__nm_graph_forward_pass(
572 call_chain=call_chain, registered_tensors=registered_e_tensors, mode=ModelMode.eval,
573 )
574
575 if not is_distributed or self.global_rank == 0:
576 values_dict = {}
577 # If distributed. For the outer loop, we need to ensure that
578 # all processes loop through the elements in the same order
579 for t2e in tensors_2_evaluate:
580 key = t2e.unique_name
581 if key not in registered_e_tensors.keys():
 582                             logging.warning("Tensor {} was not found during eval".format(key))
583 continue
584 if is_distributed:
585 # where we will all_gather results from all workers
586 tensors_list = []
587 # where we will all_gather tensor sizes
588 tensor_on_worker = registered_e_tensors[key]
589 if tensor_on_worker.shape != torch.Size([]):
590 tensor_on_worker_size_as_tensor = torch.tensor(tensor_on_worker.shape).cuda()
591 sizes = []
592 for ind in range(world_size):
593 sizes.append(torch.empty_like(tensor_on_worker_size_as_tensor))
594 dist.all_gather(sizes, tensor_on_worker_size_as_tensor)
595 mx_dim, _ = torch.max(torch.stack(sizes), dim=0)
596 else: # this is a singleton. For example, loss value
597 sizes = [torch.Size([])] * world_size
598 mx_dim = None
599 for ind in range(world_size):
600 # we have to use max shape for all_gather
601 if mx_dim is None: # singletons
602 tensors_list.append(torch.tensor(2).cuda().type_as(tensor_on_worker))
603 else: # non-singletons
604 tensors_list.append(
605 torch.empty(mx_dim.cpu().data.numpy().tolist()).cuda().type_as(tensor_on_worker)
606 )
607
608 if mx_dim is not None:
609 t_to_send = self.pad_tensor(tensor_on_worker, mx_dim)
610 else:
611 t_to_send = tensor_on_worker
612 dist.all_gather(tensors_list, t_to_send)
613 tensors_list = [self.depad_tensor(t, size) for t, size in zip(tensors_list, sizes)]
614 if self.global_rank == 0:
615 values_dict["IS_FROM_DIST_EVAL"] = True
616 values_dict[key] = tensors_list
617 else: # NON-DISTRIBUTED TRAINING
618 values_dict["IS_FROM_DIST_EVAL"] = False
619 values_dict[key] = [registered_e_tensors[key]]
620 if callback.user_iter_callback and (self.global_rank is None or self.global_rank == 0):
621 # values_dict will contain results from all workers
622 callback.user_iter_callback(values_dict, callback._global_var_dict)
623
624 # final aggregation (over minibatches) and logging of results
 625             # should happen on only one worker
626 if callback.user_done_callback and (self.global_rank is None or self.global_rank == 0):
627 vals_to_log = callback.user_done_callback(callback._global_var_dict)
628 # log results to Tensorboard or Weights & Biases
629 if vals_to_log is not None:
630 if hasattr(callback, 'swriter') and callback.swriter is not None:
631 if hasattr(callback, 'tb_writer_func') and callback.tb_writer_func is not None:
632 callback.tb_writer_func(callback.swriter, vals_to_log, step)
633 else:
634 for key, val in vals_to_log.items():
635 callback.swriter.add_scalar(key, val, step)
636 if hasattr(callback, 'wandb_log'):
637 callback.wandb_log(vals_to_log)
638
639 def _infer(
640 self, tensors_to_return, verbose=False, cache=False, use_cache=False, offload_to_cpu=True,
641 ):
642 """
643 Does the same as _eval() just with tensors instead of eval callback.
644 """
645 # Checking that cache is used properly
646 if cache and use_cache:
647 raise ValueError(
648 "cache and use_cache were both set. However cache must first be created prior to using it."
649 )
650 if cache:
651 if self.cache is not None:
652 raise ValueError("cache was set but was not empty")
653 self.cache = []
654 if use_cache:
655 if not self.cache:
656 raise ValueError("use_cache was set, but cache was empty")
657
658 with torch.no_grad():
659 # each call chain corresponds to a tensor in tensors_2_evaluate
660 dl_nm = None
661 call_chain, _ = self.__get_top_sorted_modules_and_dataloader(hook=tensors_to_return)
662 dl_nm = call_chain[0][0]
663
664 # Prepare eval_dataloader
665 # For distributed training it should have disjoint subsets of
666 # all data on every worker
667 is_distributed = False
668 world_size = None
669 if dl_nm.placement == DeviceType.AllGpu:
670 if self.cache or use_cache:
671 raise NotImplementedError("Caching is not available for distributed training.")
672 assert dist.is_initialized()
673 is_distributed = True
674 world_size = torch.distributed.get_world_size()
675 # logging.info(
676 # "Doing distributed evaluation. Rank {0} of {1}".format(
677 # self.local_rank, world_size
678 # )
679 # )
680 if dl_nm.dataset is not None:
681 sampler = torch.utils.data.distributed.DistributedSampler(
682 dataset=dl_nm.dataset, shuffle=dl_nm.shuffle
683 )
684 eval_dataloader = torch.utils.data.DataLoader(
685 dataset=dl_nm.dataset,
686 sampler=sampler,
687 num_workers=dl_nm.num_workers,
688 batch_size=dl_nm.batch_size,
689 shuffle=False,
690 )
691 else:
692 eval_dataloader = dl_nm.data_iterator
693 eval_dataloader.sampler.set_epoch(0)
694 elif not use_cache: # Not distributed and not using cache
695 # Dataloaders are only used if use_cache is False
696 # When caching, the DAG must cache all outputs from dataloader
697 if dl_nm.dataset is not None:
698 # Todo: remove local_parameters
699 eval_dataloader = torch.utils.data.DataLoader(
700 dataset=dl_nm.dataset,
701 sampler=None, # not distributed sampler
702 num_workers=dl_nm.num_workers,
703 batch_size=dl_nm.batch_size,
704 shuffle=dl_nm.shuffle,
705 )
706 else:
707 eval_dataloader = dl_nm.data_iterator
708 # after this eval_dataloader is ready to be used
709 # reset global_var_dict - results of evaluation will be stored
710 # there
711
712 if not is_distributed or self.global_rank == 0:
713 values_dict = {}
714 for t in tensors_to_return:
715 values_dict[t.unique_name] = []
716 dl_device = dl_nm._device
717
718 # Evaluation mini-batch for loop
719 if use_cache:
720 num_batches = len(self.cache)
721 loop_iterator = self.cache
722 else:
723 num_batches = len(eval_dataloader)
724 loop_iterator = eval_dataloader
725
726 for epoch_i, data in enumerate(loop_iterator, 0):
727 logging.debug(torch.cuda.memory_allocated())
728 if verbose and (num_batches < 10 or (epoch_i % int(num_batches / 10) == 0)):
729 logging.info(f"Evaluating batch {epoch_i} out of {num_batches}")
730 tensors = []
731 if use_cache:
732 registered_e_tensors = data
733 # delete tensors_to_return
734 for t in tensors_to_return:
735 if t.unique_name in registered_e_tensors:
736 del registered_e_tensors[t.unique_name]
737 # Need to check for device type mismatch
738 for t in registered_e_tensors:
739                             registered_e_tensors[t] = registered_e_tensors[t].to(dl_device)
740 else:
741 if isinstance(data, torch.Tensor):
742 data = (data,)
743 for d in data:
744 if isinstance(d, torch.Tensor):
745 tensors.append(d.to(dl_device))
746 else:
747 tensors.append(d)
748
749 registered_e_tensors = {
750 t.unique_name: d for t, d in zip(call_chain[0][2].values(), tensors) if t is not None
751 }
752 self.__nm_graph_forward_pass(
753 call_chain=call_chain,
754 registered_tensors=registered_e_tensors,
755 mode=ModelMode.eval,
756 use_cache=use_cache,
757 )
758
759 # if offload_to_cpu:
760 # # Take all cuda tensors and save them to value_dict as
761 # # cpu tensors to save GPU memory
762 # for name, tensor in registered_e_tensors.items():
763 # if isinstance(tensor, torch.Tensor):
764 # registered_e_tensors[name] = tensor.cpu()
765 if cache:
766 self.append_to_cache(registered_e_tensors, offload_to_cpu)
767
768 # If distributed. For the outer loop, we need to ensure that
769 # all processes loop through the elements in the same order
770 for t2e in tensors_to_return:
771 key = t2e.unique_name
772 if key not in registered_e_tensors.keys():
773                             logging.warning("Tensor {} was not found during eval".format(key))
774 continue
775 if is_distributed:
776 # where we will all_gather results from all workers
777 tensors_list = []
778 # where we will all_gather tensor sizes
779 tensor_on_worker = registered_e_tensors[key]
780 if tensor_on_worker.shape != torch.Size([]):
781 tensor_on_worker_size_as_tensor = torch.tensor(tensor_on_worker.shape).cuda()
782 sizes = []
783 for ind in range(world_size):
784 sizes.append(torch.empty_like(tensor_on_worker_size_as_tensor))
785 dist.all_gather(sizes, tensor_on_worker_size_as_tensor)
786 mx_dim, _ = torch.max(torch.stack(sizes), dim=0)
787 else: # this is a singleton. For example, loss value
788 sizes = [torch.Size([])] * world_size
789 mx_dim = None
790 for ind in range(world_size):
791 # we have to use max shape for all_gather
792 if mx_dim is None: # singletons
793 tensors_list.append(torch.tensor(2).cuda().type_as(tensor_on_worker))
794 else: # non-singletons
795 tensors_list.append(
796 torch.empty(mx_dim.cpu().data.numpy().tolist()).cuda().type_as(tensor_on_worker)
797 )
798
799 if mx_dim is not None:
800 t_to_send = self.pad_tensor(tensor_on_worker, mx_dim)
801 else:
802 t_to_send = tensor_on_worker
803 dist.all_gather(tensors_list, t_to_send)
804 tensors_list = [self.depad_tensor(t, size) for t, size in zip(tensors_list, sizes)]
805 if offload_to_cpu:
806 tensors_list = [t.cpu() for t in tensors_list]
807 if self.global_rank == 0:
808 values_dict[key] += tensors_list
809 else: # NON-DISTRIBUTED TRAINING
810 tensor = registered_e_tensors[key]
811 if offload_to_cpu and isinstance(tensor, torch.Tensor):
812 tensor = tensor.cpu()
813 values_dict[key] += [tensor]
814
815 if not is_distributed or self.global_rank == 0:
816 inferred_tensors = []
817 for t in tensors_to_return:
818 inferred_tensors.append(values_dict[t.unique_name])
819 return inferred_tensors
820
821 # For all other ranks
822 return None
823
824 def append_to_cache(self, registered_tensors: dict, offload_to_cpu):
825         """Simple helper function to add the results of
826         __nm_graph_forward_pass to the current cache.
827         """
828 if offload_to_cpu:
829 for t in registered_tensors:
830 registered_tensors[t] = registered_tensors[t].cpu()
831 self.cache.append(registered_tensors)
832
833 def clear_cache(self):
834         """Simple helper function to clear the cache by setting self.cache
835         to None.
836         """
837 self.cache = None
838
839 def save_state_to(self, path: str):
840 """
841 Saves current state such as step, epoch and optimizer parameters
842 Args:
843             path: path to the file where the state will be saved
844
845 Returns:
846
847 """
848 state = {
849 "step": self.step,
850 "epoch_num": self.epoch_num,
851 "optimizer_state": [opt.state_dict() for opt in self.optimizers],
852 }
853 torch.save(state, path)
854
855 def restore_state_from(self, path: str):
856 """
857 Restores state such as step, epoch and optimizer parameters
858 Args:
859             path: path to the file from which the state will be restored
860
861 Returns:
862
863 """
864 if os.path.isfile(path):
865 # map_location could be cuda:<device_id> but cpu seems to be more
866 # general since we are also saving step and epoch_num
867 # load_state_dict should move the variables to the relevant device
868 checkpoint = torch.load(path, map_location="cpu")
869 self.step = checkpoint["step"]
870 self.epoch_num = checkpoint["epoch_num"]
871 if checkpoint["optimizer_state"]:
872 for opt, opt_chkpt in zip(self.optimizers, checkpoint["optimizer_state"]):
873 opt.load_state_dict(opt_chkpt)
874 else:
875 raise FileNotFoundError("Could not find checkpoint file: {0}".format(path))
876
877 @staticmethod
878 def _check_all_tensors(list_of_tensors):
879         """Method that checks whether the passed list contains only NmTensors.
880         """
881 if not isinstance(list_of_tensors, list):
882 return False
883 for tensor in list_of_tensors:
884 if not isinstance(tensor, NmTensor):
885 return False
886 return True
887
888 @staticmethod
889 def _check_tuples(list_of_tuples):
890         """Method that checks that each tuple in the passed list contains an
891         optimizer as its first element and a list of NmTensors as its second.
892         """
893 for tup in list_of_tuples:
894 if not (isinstance(tup[0], torch.optim.Optimizer) and PtActions._check_all_tensors(tup[1])):
895 return False
896 return True
897
898 def _get_all_modules(self, training_loop, callbacks, logging_callchain=None):
899 """Gets all neural modules that will be used by train() and eval() via
900 EvaluatorCallbacks. Saves all modules to self.modules
901 """
902 # If there is a SimpleLossLoggerCallback, create an logger_callchain
903 # with all callchains from training_loop and
904 # SimpleLossLoggerCallback.tensors
905 if logging_callchain:
906 for module in logging_callchain:
907 self.modules.add(module[0])
908
909 # Else grab all callchains from training_loop
910 else:
911 for step in training_loop:
912 for module in step[2]:
913 self.modules.add(module[0])
914
915 # Lastly, grab all eval modules
916 if callbacks is not None:
917 for callback in callbacks:
918 if isinstance(callback, EvaluatorCallback):
919 (callchain, _,) = self.__get_top_sorted_modules_and_dataloader(hook=callback.eval_tensors)
920 for module in callchain:
921 self.modules.add(module[0])
922
923 @staticmethod
924 def __module_export(module, output, d_format: DeploymentFormat, input_example=None, output_example=None):
925 # Check if output already exists
926 destination = Path(output)
927 if destination.exists():
928             raise FileExistsError(f"Destination {output} already exists. Aborting export.")
929
930 input_names = list(module.input_ports.keys())
931 output_names = list(module.output_ports.keys())
932 dynamic_axes = defaultdict(list)
933
934 def __extract_dynamic_axes(port_name: str, ntype: NeuralType, dynamic_axes: defaultdict):
935 if ntype.axes:
936 for ind, axis in enumerate(ntype.axes):
937 if axis.kind == AxisKind.Batch or axis.kind == AxisKind.Time:
938 dynamic_axes[port_name].append(ind)
939
940         # This is a hack for Jasper-to-Jarvis export -- needs a re-design
941 inputs_to_drop = set()
942 outputs_to_drop = set()
943 if type(module).__name__ == "JasperEncoder":
944 logging.info(
945 "Module is JasperEncoder. We are removing input and output length ports since they are not needed for "
946 "deployment"
947 )
948 inputs_to_drop.add("length")
949 outputs_to_drop.add("encoded_lengths")
950
951 # for input_ports
952 for port_name, ntype in module.input_ports.items():
953 if port_name in inputs_to_drop:
954 input_names.remove(port_name)
955 continue
956 __extract_dynamic_axes(port_name, ntype, dynamic_axes)
957 # for output_ports
958 for port_name, ntype in module.output_ports.items():
959 if port_name in outputs_to_drop:
960 output_names.remove(port_name)
961 continue
962 __extract_dynamic_axes(port_name, ntype, dynamic_axes)
963
964 if len(dynamic_axes) == 0:
965 dynamic_axes = None
966
967 # Make a deep copy of init parameters.
968 init_params_copy = copy.deepcopy(module._init_params)
969
970 # Remove NeMo-related things from the module
971 # We need to change __call__ method. Note that this will change the
972 # whole class, not just this object! Which is why we need to repair it
973 # in the finally block
974 type(module).__call__ = torch.nn.Module.__call__
975
976 # Reset standard instance field - making the file (probably) lighter.
977 module._init_params = None
978 module._placement = None
979 module._factory = None
980 module._device = None
981
982 module.eval()
983 try:
984 if d_format == DeploymentFormat.TORCHSCRIPT:
985 if input_example is None:
986 # Route 1 - via torch.jit.script
987 traced_m = torch.jit.script(module)
988 traced_m.save(output)
989 else:
990 # Route 2 - via tracing
991 traced_m = torch.jit.trace(module, input_example)
992 traced_m.save(output)
993 elif d_format == DeploymentFormat.ONNX or d_format == DeploymentFormat.TRTONNX:
994 if input_example is None:
995                     raise ValueError('Example input is None, but ONNX tracing was attempted')
996 if output_example is None:
997 if isinstance(input_example, tuple):
998 output_example = module.forward(*input_example)
999 else:
1000 output_example = module.forward(input_example)
1001 with torch.jit.optimized_execution(True):
1002 jitted_model = torch.jit.trace(module, input_example)
1003
1004 torch.onnx.export(
1005 jitted_model,
1006 input_example,
1007 output,
1008 input_names=input_names,
1009 output_names=output_names,
1010 verbose=False,
1011 export_params=True,
1012 do_constant_folding=True,
1013 dynamic_axes=dynamic_axes,
1014 opset_version=11,
1015 example_outputs=output_example,
1016 )
1017 # fn = output + ".readable"
1018 # with open(fn, 'w') as f:
1019 # tempModel = onnx.load(output)
1020 # onnx.save(tempModel, output + ".copy")
1021 # onnx.checker.check_model(tempModel)
1022 # pgraph = onnx.helper.printable_graph(tempModel.graph)
1023 # f.write(pgraph)
1024
1025 elif d_format == DeploymentFormat.PYTORCH:
1026 torch.save(module.state_dict(), output)
1027 with open(output + ".json", 'w') as outfile:
1028 json.dump(init_params_copy, outfile)
1029
1030 else:
1031 raise NotImplementedError(f"Not supported deployment format: {d_format}")
1032 except Exception as e: # nopep8
1033             logging.error(f'module export failed for {module} with exception {e}')
1034 finally:
1035
1036 def __old_call__(self, force_pt=False, *input, **kwargs):
1037 pt_call = len(input) > 0 or force_pt
1038 if pt_call:
1039 return nn.Module.__call__(self, *input, **kwargs)
1040 else:
1041 return NeuralModule.__call__(self, **kwargs)
1042
1043 type(module).__call__ = __old_call__
1044
1045 @staticmethod
1046 def deployment_export(module, output: str, d_format: DeploymentFormat, input_example=None, output_example=None):
1047 """Exports Neural Module instance for deployment.
1048
1049 Args:
1050 module: neural module to export
1051 output (str): where export results should be saved
1052 d_format (DeploymentFormat): which deployment format to use
1053 input_example: sometimes tracing will require input examples
1054 output_example: Should match inference on input_example
1057 """
1058
1059 with torch.no_grad():
1060 PtActions.__module_export(
1061 module=module,
1062 output=output,
1063 d_format=d_format,
1064 input_example=input_example,
1065 output_example=output_example,
1066 )
1067
1068 def train(
1069 self,
1070 tensors_to_optimize,
1071 optimizer=None,
1072 optimization_params=None,
1073 callbacks: Optional[List[ActionCallback]] = None,
1074 lr_policy=None,
1075 batches_per_step=None,
1076 stop_on_nan_loss=False,
1077 synced_batchnorm=False,
1078 synced_batchnorm_groupsize=0,
1079 gradient_predivide=False,
1080 amp_max_loss_scale=2.0 ** 24,
1081 ):
1082 if gradient_predivide:
1083 logging.error(
1084 "gradient_predivide is currently disabled, and is under consideration for removal in future versions. "
1085 "If this functionality is needed, please raise a github issue."
1086 )
1087 if not optimization_params:
1088 optimization_params = {}
1089 num_epochs = optimization_params.get("num_epochs", None)
1090 max_steps = optimization_params.get("max_steps", None)
1091 if num_epochs is None and max_steps is None:
1092 raise ValueError("You must specify either max_steps or num_epochs")
1093 grad_norm_clip = optimization_params.get('grad_norm_clip', None)
1094
1095 if batches_per_step is None:
1096 batches_per_step = 1
1097 # this is necessary because we average gradients over batch
1098 bps_scale = torch.FloatTensor([1.0 / batches_per_step]).squeeze()
1099
1100 if tensors_to_optimize is None:
1101 # This is Evaluation Mode
1102 self._init_callbacks(callbacks)
1103 # Do action start callbacks
1104 self._perform_on_action_end(callbacks=callbacks)
1105 return
1106 # Check if tensors_to_optimize is just a list of NmTensors
1107 elif tensors_to_optimize is not None and (
1108 isinstance(tensors_to_optimize[0], NmTensor) and PtActions._check_all_tensors(tensors_to_optimize)
1109 ):
1110 # Parse graph into a topologically sorted sequence of neural
1111 # modules' calls
1112 (opt_call_chain, t_dataset,) = self.__get_top_sorted_modules_and_dataloader(hook=tensors_to_optimize)
1113 # Extract trainable weights which will be optimized
1114 params_list = [
1115 p[0].parameters() for p in opt_call_chain if isinstance(p[0], TrainableNM) or p[0].is_trainable()
1116 ]
1117 params_to_optimize = itertools.chain(*params_list)
1118
1119 # Setup optimizer instance. By default it is SGD
1120 optimizer_instance = None
1121 optimizer_class = None
1122 if isinstance(optimizer, str):
1123 optimizer_class = optimizer
1124 elif isinstance(optimizer, torch.optim.Optimizer):
1125 optimizer_instance = optimizer
1126 else:
1127 raise ValueError("optimizer was not understood")
1128 optimizer = self.__setup_optimizer(
1129 optimizer_instance=optimizer_instance,
1130 optimizer_class=optimizer_class,
1131 optimization_params=optimization_params,
1132 params_to_optimize=params_to_optimize,
1133 )
1134
1135 training_loop = [(optimizer, tensors_to_optimize, opt_call_chain)]
1136
1137 self.optimizers.append(optimizer)
1138 assert (
1139 len(self.optimizers) == 1
1140 ), "There was more than one optimizer, was create_optimizer() called before train()?"
1141
1142 elif PtActions._check_tuples(tensors_to_optimize):
1143 if batches_per_step != 1:
1144                 raise ValueError("Gradient accumulation with multiple optimizers is not supported")
1145 datasets = []
1146 training_loop = []
1147 for step in tensors_to_optimize:
1148 (step_call_chain, dataset,) = self.__get_top_sorted_modules_and_dataloader(hook=step[1])
1149 datasets.append(dataset)
1150 training_loop.append((step[0], step[1], step_call_chain))
1151
1152 t_dataset = datasets[0]
1153 for dataset in datasets:
1154 if type(dataset) is not type(t_dataset):
1155                     raise ValueError("There were two different training datasets; we only support one.")
1156 else:
1157 raise ValueError("tensors_to_optimize was not understood")
1158
1159 logging_callchain = None
1160 # callbacks setup
1161 if callbacks is not None:
1162 for callback in callbacks:
1163 if not isinstance(callback, ActionCallback):
1164 raise ValueError("A callback was received that was not a child of ActionCallback")
1165 elif isinstance(callback, SimpleLossLoggerCallback):
1166 if logging_callchain:
1167 raise ValueError("We only support one logger callback but more than one were found")
1168 logger_step_freq = callback._step_freq
1169 logging_tensors = callback.tensors
1170 all_tensors = logging_tensors
1171 for step in training_loop:
1172 all_tensors = all_tensors + step[1]
1173 (logging_callchain, _,) = self.__get_top_sorted_modules_and_dataloader(hook=all_tensors)
1174
1175 self._get_all_modules(training_loop, callbacks, logging_callchain)
1176
1177         # Initialize Amp if needed
1178 if self._optim_level in AmpOptimizations:
1179 # Store mapping of self.optimizers to optimizer in callchain
1180 training_loop_opts = []
1181 for opt in training_loop:
1182 training_loop_opts.append(self.optimizers.index(opt[0]))
1183 self.optimizers = self.__initialize_amp(
1184 optimizer=self.optimizers,
1185 optim_level=self._optim_level,
1186 amp_max_loss_scale=amp_max_loss_scale,
1187 amp_min_loss_scale=optimization_params.get('amp_min_loss_scale', 1.0),
1188 )
1189 # Use stored mapping to map amp_init opts to training loop
1190 for i, step in enumerate(training_loop):
1191 training_loop[i] = (
1192 self.optimizers[training_loop_opts[i]],
1193 step[1],
1194 step[2],
1195 )
1196
1197 dataNM = training_loop[0][2][0][0]
1198 if dataNM.placement == DeviceType.AllGpu:
1199 # if len(training_loop) > 1:
1200 # raise NotImplementedError(
1201             #         "Distributed training does not work with multiple "
1202 # "optimizers")
1203 logging.info("Doing distributed training")
1204 if t_dataset is not None:
1205 train_sampler = torch.utils.data.distributed.DistributedSampler(
1206 dataset=t_dataset, shuffle=dataNM.shuffle
1207 )
1208 train_dataloader = torch.utils.data.DataLoader(
1209 dataset=t_dataset,
1210 sampler=train_sampler,
1211 num_workers=dataNM.num_workers,
1212 batch_size=dataNM.batch_size,
1213 shuffle=False,
1214 )
1215 else:
1216 train_dataloader = dataNM.data_iterator
1217 if hasattr(train_dataloader, 'sampler'):
1218 train_sampler = train_dataloader.sampler
1219 else:
1220 train_sampler = None
1221
1222 for train_iter in training_loop:
1223 call_chain = train_iter[2]
1224 for i in range(1, len(call_chain) - 1):
1225 key = call_chain[i][0].unique_instance_id
1226 pmodule = self.module_reference_table[key][1]
1227 if not isinstance(pmodule, DDP) and isinstance(pmodule, torch.nn.Module):
1228 # gpf = 1
1229 # if gradient_predivide:
1230 # gpf = dist.get_world_size()
1231 # pmodule = DDP(pmodule, gradient_predivide_factor=gpf) # Old Apex Method
1232
1233 # Per pytorch docs, convert sync bn prior to DDP
1234 if synced_batchnorm:
1235 world_size = dist.get_world_size()
1236 sync_batchnorm_group = None
1237 if synced_batchnorm_groupsize > 0:
1238 if world_size % synced_batchnorm_groupsize != 0:
1239 raise ValueError(
1240 f"Synchronized batch norm group size ({synced_batchnorm_groupsize}) must be 0"
1241 f" or divide total number of GPUs ({world_size})."
1242 )
1243 # Find ranks of other nodes in the same batchnorm group
1244 rank = torch.distributed.get_rank()
1245 group = rank // synced_batchnorm_groupsize
1246 group_rank_ids = range(
1247 group * synced_batchnorm_groupsize, (group + 1) * synced_batchnorm_groupsize
1248 )
1249 sync_batchnorm_group = torch.distributed.new_group(group_rank_ids)
1250
1251 pmodule = nn.SyncBatchNorm.convert_sync_batchnorm(
1252 pmodule, process_group=sync_batchnorm_group
1253 )
1254
1255 # By default, disable broadcast_buffers. This disables batch norm synchronization on forward
1256 # pass
1257 pmodule = DDP(
1258 pmodule, device_ids=[self.local_rank], broadcast_buffers=False, find_unused_parameters=True
1259 )
1260
1261 # # Convert batchnorm modules to synced if applicable
1262 # if synced_batchnorm and isinstance(pmodule, torch.nn.Module):
1263 # world_size = dist.get_world_size()
1264 # if synced_batchnorm_groupsize > 0 and world_size % synced_batchnorm_groupsize != 0:
1265 # raise ValueError(
1266 # f"Synchronized batch norm group size"
1267 # f" ({synced_batchnorm_groupsize}) must be 0"
1268 # f" or divide total number of GPUs"
1269 # f" ({world_size})."
1270 # )
1271 # process_group = create_syncbn_process_group(synced_batchnorm_groupsize)
1272 # pmodule = convert_syncbn(pmodule, process_group=process_group)
1273
1274 self.module_reference_table[key] = (
1275 self.module_reference_table[key][0],
1276 pmodule,
1277 )
1278 # single GPU/CPU training
1279 else:
1280 if t_dataset is not None:
1281 train_sampler = None
1282 train_dataloader = torch.utils.data.DataLoader(
1283 dataset=t_dataset,
1284 sampler=None,
1285 num_workers=dataNM.num_workers,
1286 batch_size=dataNM.batch_size,
1287 shuffle=dataNM.shuffle,
1288 )
1289 else:
1290 train_dataloader = dataNM.data_iterator
1291 train_sampler = None
1292
1293 self._init_callbacks(callbacks)
1294 # Do action start callbacks
1295 self._perform_on_action_start(callbacks=callbacks)
1296
1297 # MAIN TRAINING LOOP
1298 # iteration over epochs
1299 while num_epochs is None or self.epoch_num < num_epochs:
1300 if train_sampler is not None:
1301 train_sampler.set_epoch(self.epoch_num)
1302 if max_steps is not None and self.step >= max_steps:
1303 break
1304
1305 # Register epochs start with callbacks
1306 self._perform_on_epoch_start(callbacks=callbacks)
1307
1308 # iteration over batches in epoch
1309 batch_counter = 0
1310 for _, data in enumerate(train_dataloader, 0):
1311 if max_steps is not None and self.step >= max_steps:
1312 break
1313
1314 if batch_counter == 0:
1315 # Started step, zero gradients
1316 curr_optimizer = training_loop[self.step % len(training_loop)][0]
1317 curr_optimizer.zero_grad()
1318 # Register iteration start with callbacks
1319 self._perform_on_iteration_start(callbacks=callbacks)
1320
1321 # set learning rate policy
1322 if lr_policy is not None:
1323 adjusted_lr = lr_policy(optimization_params["lr"], self.step, self.epoch_num)
1324 for param_group in curr_optimizer.param_groups:
1325 param_group["lr"] = adjusted_lr
1326 if self.tb_writer is not None:
1327 value = curr_optimizer.param_groups[0]['lr']
1328 self.tb_writer.add_scalar('param/lr', value, self.step)
1329 if callbacks is not None:
1330 for callback in callbacks:
1331 callback.learning_rate = curr_optimizer.param_groups[0]['lr']
1332
1333 # registered_tensors will contain created tensors
1334 # named by output port and uuid of module which created them
1335 # Get and properly name tensors returned by data layer
1336 curr_call_chain = training_loop[self.step % len(training_loop)][2]
1337 dl_device = curr_call_chain[0][0]._device
1338 if logging_callchain and self.step % logger_step_freq == 0:
1339 curr_call_chain = logging_callchain
1340 tensors = []
1341 if isinstance(data, torch.Tensor):
1342 data = (data,)
1343 for d in data:
1344 if isinstance(d, torch.Tensor):
1345 tensors.append(d.to(dl_device))
1346 else:
1347 tensors.append(d)
1348
1349 registered_tensors = {
1350 t.unique_name: d for t, d in zip(curr_call_chain[0][2].values(), tensors) if t is not None
1351 }
1352 disable_allreduce = batch_counter < (batches_per_step - 1)
1353 self.__nm_graph_forward_pass(
1354 call_chain=curr_call_chain, registered_tensors=registered_tensors,
1355 )
1356
1357 curr_tensors_to_optimize = training_loop[self.step % len(training_loop)][1]
1358 final_loss = 0
1359 nan = False
1360 for tensor in curr_tensors_to_optimize:
1361 if (
1362 torch.isnan(registered_tensors[tensor.unique_name]).any()
1363 or torch.isinf(registered_tensors[tensor.unique_name]).any()
1364 ):
1365 if stop_on_nan_loss:
1366 raise ValueError('Loss is NaN or inf - exiting')
1367 logging.warning('Loss is NaN or inf')
1368 curr_optimizer.zero_grad()
1369 nan = True
1370 break
1371 final_loss += registered_tensors[tensor.unique_name]
1372 if nan:
1373 continue
1374 if self._optim_level in AmpOptimizations and self._optim_level != Optimization.mxprO0:
1375 with amp.scale_loss(final_loss, curr_optimizer, delay_unscale=disable_allreduce) as scaled_loss:
1376 if torch.isnan(scaled_loss).any() or torch.isinf(scaled_loss).any():
1377 if stop_on_nan_loss:
1378                                 raise ValueError('Loss is NaN or inf - exiting')
1379 logging.warning('WARNING: Loss is NaN or inf')
1380 curr_optimizer.zero_grad()
1381 continue
1382 if disable_allreduce:
1383 with ExitStack() as stack:
1384 for mod in self.get_DDP_modules(curr_call_chain):
1385 stack.enter_context(mod.no_sync())
1386 scaled_loss.backward(bps_scale.to(scaled_loss.get_device()))
1387 else:
1388 scaled_loss.backward(bps_scale.to(scaled_loss.get_device()))
1389 # no AMP optimizations needed
1390 else:
1391 # multi-GPU, float32
1392 if self._local_rank is not None:
1393 if disable_allreduce:
1394 with ExitStack() as stack:
1395 for mod in self.get_DDP_modules(curr_call_chain):
1396 stack.enter_context(mod.no_sync())
1397 final_loss.backward(bps_scale.to(final_loss.get_device()))
1398 else:
1399 final_loss.backward(bps_scale.to(final_loss.get_device()))
1400 # single device (CPU or GPU)
1401 else:
1402                         # Fix (workaround?) to enable backpropagating gradients on CPUs.
1403 if final_loss.get_device() < 0:
1404 final_loss.backward(bps_scale)
1405 else:
1406 final_loss.backward(bps_scale.to(final_loss.get_device()))
1407
1408 batch_counter += 1
1409
1410 if batch_counter == batches_per_step:
1411 # Ended step. Do optimizer update
1412 if grad_norm_clip is not None:
1413 torch.nn.utils.clip_grad_norm_(master_params(curr_optimizer), grad_norm_clip)
1414 curr_optimizer.step()
1415 batch_counter = 0
1416 # Register iteration end with callbacks
1417 self._update_callbacks(
1418 callbacks=callbacks, registered_tensors=registered_tensors,
1419 )
1420 self._perform_on_iteration_end(callbacks=callbacks)
1421 self.step += 1
1422 # End of epoch for loop
1423 # Register epochs end with callbacks
1424 self._perform_on_epoch_end(callbacks=callbacks)
1425 self.epoch_num += 1
1426 self._perform_on_action_end(callbacks=callbacks)
1427
1428 def infer(
1429 self,
1430 tensors,
1431 checkpoint_dir=None,
1432 ckpt_pattern='',
1433 verbose=True,
1434 cache=False,
1435 use_cache=False,
1436 offload_to_cpu=True,
1437 modules_to_restore=None,
1438 ):
1439 """See NeuralModuleFactory.infer()
1440 """
1441
1442 call_chain, _ = self.__get_top_sorted_modules_and_dataloader(hook=tensors)
1443 if checkpoint_dir:
1444 # Find all modules that need to be restored
1445 if modules_to_restore is None:
1446 modules_to_restore = []
1447 modules_to_restore_name = []
1448 for op in call_chain:
1449 if op[0].num_weights > 0:
1450 modules_to_restore.append(op[0])
1451
1452 if not isinstance(modules_to_restore, list):
1453 modules_to_restore = [modules_to_restore]
1454 modules_to_restore_name = []
1455 for mod in modules_to_restore:
1456 if not isinstance(mod, NeuralModule):
1457 raise ValueError("Found something that was not a Neural Module inside modules_to_restore")
1458 elif mod.num_weights == 0:
1459 raise ValueError("Found a Neural Module with 0 weights inside modules_to_restore")
1460 modules_to_restore_name.append(str(mod))
1461
1462 module_checkpoints = get_checkpoint_from_dir(modules_to_restore_name, checkpoint_dir, ckpt_pattern)
1463
1464 for mod, checkpoint in zip(modules_to_restore, module_checkpoints):
1465 logging.info(f"Restoring {mod} from {checkpoint}")
1466 mod.restore_from(checkpoint, self._local_rank)
1467
1468 # Init Amp
1469 if (
1470 self._optim_level in AmpOptimizations
1471 and self._optim_level != Optimization.mxprO0
1472 and not self.amp_initialized
1473 ):
1474 pt_modules = []
1475 for i in range(len(call_chain)):
1476 if isinstance(call_chain[i][0], nn.Module):
1477 pt_modules.append(call_chain[i][0])
1478 elif isinstance(call_chain[i][0], TrainableNeuralModuleWrapper):
1479 pt_modules.append(call_chain[i][0]._pt_module)
1480
1481 amp.initialize(
1482 min_loss_scale=1.0, models=pt_modules, optimizers=None, opt_level=AmpOptimizations[self._optim_level],
1483 )
1484 self.amp_initialized = True
1485
1486 # Run infer
1487 return self._infer(
1488 tensors_to_return=tensors,
1489 verbose=verbose,
1490 cache=cache,
1491 use_cache=use_cache,
1492 offload_to_cpu=offload_to_cpu,
1493 )
1494
1495 def get_DDP_modules(self, call_chain):
1496 modules = []
1497 for ind in range(1, len(call_chain)):
1498 m_id = call_chain[ind][0].unique_instance_id
1499 module = self.module_reference_table[m_id][1]
1500 if isinstance(module, DDP):
1501 modules.append(module)
1502
1503 return modules
1504
[end of nemo/backends/pytorch/actions.py]
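In the distributed branch of `_infer` above, every worker pads its result tensor to the elementwise maximum shape before `dist.all_gather` (which requires equal-sized buffers on all ranks) and then slices each gathered tensor back to its true size. Below is a minimal single-process sketch of that pad/depad idea; the `pad_tensor` and `depad_tensor` helpers here are illustrative stand-ins, not NeMo's actual implementations.

```python
import torch
import torch.nn.functional as F


def pad_tensor(t: torch.Tensor, target_shape) -> torch.Tensor:
    """Zero-pad `t` at the end of every dimension up to `target_shape`."""
    pad = []
    # F.pad expects (before, after) pairs starting from the *last* dimension.
    for dim in reversed(range(t.dim())):
        pad.extend([0, int(target_shape[dim]) - t.shape[dim]])
    return F.pad(t, pad)


def depad_tensor(t: torch.Tensor, original_size) -> torch.Tensor:
    """Slice a padded tensor back down to its original size."""
    return t[tuple(slice(0, s) for s in original_size)]


# Pretend these tensors came from two workers; all_gather needs equal shapes.
a, b = torch.ones(2, 3), torch.ones(4, 2)
sizes = [a.shape, b.shape]
max_shape, _ = torch.max(torch.tensor(sizes), dim=0)
gathered = [pad_tensor(a, max_shape), pad_tensor(b, max_shape)]
restored = [depad_tensor(t, s) for t, s in zip(gathered, sizes)]
assert all(t.shape == torch.Size([4, 3]) for t in gathered)
assert restored[0].shape == (2, 3) and restored[1].shape == (4, 2)
```

This mirrors the `sizes` / `mx_dim` bookkeeping in `_infer`: the true shapes are exchanged first, so every rank can depad the equal-sized buffers it receives back to each worker's original tensor.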
[start of nemo/collections/asr/jasper.py]
1 # Copyright (c) 2019 NVIDIA Corporation
2 from typing import Optional
3
4 import torch
5 import torch.nn as nn
6 import torch.nn.functional as F
7
8 import nemo
9 from .parts.jasper import JasperBlock, init_weights, jasper_activations
10 from nemo.backends.pytorch.nm import TrainableNM
11 from nemo.core.neural_types import *
12 from nemo.utils.decorators import add_port_docs
13
14 logging = nemo.logging
15
16
17 class JasperEncoder(TrainableNM):
18 """
19 Jasper Encoder creates the pre-processing (prologue), Jasper convolution
20 block, and the first 3 post-processing (epilogue) layers as described in
21 Jasper (https://arxiv.org/abs/1904.03288)
22
23 Args:
24 jasper (list): A list of dictionaries. Each element in the list
25 represents the configuration of one Jasper Block. Each element
26 should contain::
27
28 {
29 # Required parameters
30 'filters' (int) # Number of output channels,
31 'repeat' (int) # Number of sub-blocks,
32 'kernel' (int) # Size of conv kernel,
33 'stride' (int) # Conv stride
34 'dilation' (int) # Conv dilation
35 'dropout' (float) # Dropout probability
36 'residual' (bool) # Whether to use residual or not.
37 # Optional parameters
38 'residual_dense' (bool) # Whether to use Dense Residuals
39 # or not. 'residual' must be True for 'residual_dense'
40 # to be enabled.
41 # Defaults to False.
42 'separable' (bool) # Whether to use separable convolutions.
43 # Defaults to False
44 'groups' (int) # Number of groups in each conv layer.
45 # Defaults to 1
46 'heads' (int) # Sharing of separable filters
47 # Defaults to -1
48 'tied' (bool) # Whether to use the same weights for all
49 # sub-blocks.
50 # Defaults to False
51 'se' (bool) # Whether to add Squeeze and Excitation
52 # sub-blocks.
53 # Defaults to False
54 'se_reduction_ratio' (int) # The reduction ratio of the Squeeze
55 # sub-module.
56 # Must be an integer > 1.
57 # Defaults to 16
58 'kernel_size_factor' (float) # Conv kernel size multiplier
59 # Can be either an int or float
60 # Kernel size is recomputed as below:
61             # new_kernel_size = int(max(1, (kernel_size * kernel_size_factor)))
62             # to prevent kernel sizes smaller than 1.
63 # Note: If rescaled kernel size is an even integer,
64 # adds 1 to the rescaled kernel size to allow "same"
65 # padding.
66 }
67
68 activation (str): Activation function used for each sub-blocks. Can be
69 one of ["hardtanh", "relu", "selu"].
70 feat_in (int): Number of channels being input to this module
71 normalization_mode (str): Normalization to be used in each sub-block.
72 Can be one of ["batch", "layer", "instance", "group"]
73 Defaults to "batch".
74 residual_mode (str): Type of residual connection.
75 Can be "add" or "max".
76 Defaults to "add".
77 norm_groups (int): Number of groups for "group" normalization type.
78 If set to -1, number of channels is used.
79 Defaults to -1.
80 conv_mask (bool): Controls the use of sequence length masking prior
81 to convolutions.
82 Defaults to True.
83 frame_splicing (int): Defaults to 1.
84 init_mode (str): Describes how neural network parameters are
85 initialized. Options are ['xavier_uniform', 'xavier_normal',
86 'kaiming_uniform','kaiming_normal'].
87 Defaults to "xavier_uniform".
88 """
89
90 length: Optional[torch.Tensor]
91
92 @property
93 @add_port_docs()
94 def input_ports(self):
95 """Returns definitions of module input ports.
96 """
97 return {
98 # "audio_signal": NeuralType(
99 # {0: AxisType(BatchTag), 1: AxisType(SpectrogramSignalTag), 2: AxisType(ProcessedTimeTag),}
100 # ),
101 # "length": NeuralType({0: AxisType(BatchTag)}),
102 "audio_signal": NeuralType(('B', 'D', 'T'), SpectrogramType()),
103 "length": NeuralType(tuple('B'), LengthsType()),
104 }
105
106 @property
107 @add_port_docs()
108 def output_ports(self):
109 """Returns definitions of module output ports.
110 """
111 return {
112 # "outputs": NeuralType(
113 # {0: AxisType(BatchTag), 1: AxisType(EncodedRepresentationTag), 2: AxisType(ProcessedTimeTag),}
114 # ),
115 # "encoded_lengths": NeuralType({0: AxisType(BatchTag)}),
116 "outputs": NeuralType(('B', 'D', 'T'), AcousticEncodedRepresentation()),
117 "encoded_lengths": NeuralType(tuple('B'), LengthsType()),
118 }
119
120 @property
121 def disabled_deployment_input_ports(self):
122 return set(["length"])
123
124 @property
125 def disabled_deployment_output_ports(self):
126 return set(["encoded_lengths"])
127
128 def prepare_for_deployment(self):
129 m_count = 0
130 for m in self.modules():
131 if type(m).__name__ == "MaskedConv1d":
132 m.use_mask = False
133 m_count += 1
134 logging.warning(f"Turned off {m_count} masked convolutions")
135
136 def __init__(
137 self,
138 jasper,
139 activation,
140 feat_in,
141 normalization_mode="batch",
142 residual_mode="add",
143 norm_groups=-1,
144 conv_mask=True,
145 frame_splicing=1,
146 init_mode='xavier_uniform',
147 ):
148 super().__init__()
149
150 activation = jasper_activations[activation]()
151 feat_in = feat_in * frame_splicing
152
153 residual_panes = []
154 encoder_layers = []
155 self.dense_residual = False
156 for lcfg in jasper:
157 dense_res = []
158 if lcfg.get('residual_dense', False):
159 residual_panes.append(feat_in)
160 dense_res = residual_panes
161 self.dense_residual = True
162 groups = lcfg.get('groups', 1)
163 separable = lcfg.get('separable', False)
164 heads = lcfg.get('heads', -1)
165 se = lcfg.get('se', False)
166 se_reduction_ratio = lcfg.get('se_reduction_ratio', 16)
167 kernel_size_factor = lcfg.get('kernel_size_factor', 1.0)
168 encoder_layers.append(
169 JasperBlock(
170 feat_in,
171 lcfg['filters'],
172 repeat=lcfg['repeat'],
173 kernel_size=lcfg['kernel'],
174 stride=lcfg['stride'],
175 dilation=lcfg['dilation'],
176 dropout=lcfg['dropout'],
177 residual=lcfg['residual'],
178 groups=groups,
179 separable=separable,
180 heads=heads,
181 residual_mode=residual_mode,
182 normalization=normalization_mode,
183 norm_groups=norm_groups,
184 activation=activation,
185 residual_panes=dense_res,
186 conv_mask=conv_mask,
187 se=se,
188 se_reduction_ratio=se_reduction_ratio,
189 kernel_size_factor=kernel_size_factor,
190 )
191 )
192 feat_in = lcfg['filters']
193
194 self.encoder = nn.Sequential(*encoder_layers)
195 self.apply(lambda x: init_weights(x, mode=init_mode))
196 self.to(self._device)
197
198 def forward(self, audio_signal, length=None):
199 # type: (Tensor, Optional[Tensor]) -> Tensor, Optional[Tensor]
200
201 s_input, length = self.encoder(([audio_signal], length))
202 if length is None:
203 return s_input[-1]
204 return s_input[-1], length
205
206
207 class JasperDecoderForCTC(TrainableNM):
208 """
209 Jasper Decoder creates the final layer in Jasper that maps from the outputs
210 of Jasper Encoder to the vocabulary of interest.
211
212 Args:
213 feat_in (int): Number of channels being input to this module
214 num_classes (int): Number of characters in ASR model's vocab/labels.
215 This count should not include the CTC blank symbol.
216 init_mode (str): Describes how neural network parameters are
217 initialized. Options are ['xavier_uniform', 'xavier_normal',
218 'kaiming_uniform','kaiming_normal'].
219 Defaults to "xavier_uniform".
220 """
221
222 @property
223 @add_port_docs()
224 def input_ports(self):
225 """Returns definitions of module input ports.
226 """
227 return {
228 # "encoder_output": NeuralType(
229 # {0: AxisType(BatchTag), 1: AxisType(EncodedRepresentationTag), 2: AxisType(ProcessedTimeTag),}
230 # )
231 "encoder_output": NeuralType(('B', 'D', 'T'), AcousticEncodedRepresentation())
232 }
233
234 @property
235 @add_port_docs()
236 def output_ports(self):
237 """Returns definitions of module output ports.
238 """
239 # return {"output": NeuralType({0: AxisType(BatchTag), 1: AxisType(TimeTag), 2: AxisType(ChannelTag),})}
240 return {"output": NeuralType(('B', 'T', 'D'), LogprobsType())}
241
242 def __init__(self, feat_in, num_classes, init_mode="xavier_uniform"):
243 super().__init__()
244
245 self._feat_in = feat_in
246 # Add 1 for blank char
247 self._num_classes = num_classes + 1
248
249 self.decoder_layers = nn.Sequential(nn.Conv1d(self._feat_in, self._num_classes, kernel_size=1, bias=True))
250 self.apply(lambda x: init_weights(x, mode=init_mode))
251 self.to(self._device)
252
253 def forward(self, encoder_output):
254 return F.log_softmax(self.decoder_layers(encoder_output).transpose(1, 2), dim=-1)
255
256
257 class JasperDecoderForClassification(TrainableNM):
258 """
259 Jasper Decoder creates the final layer in Jasper that maps from the outputs
260 of Jasper Encoder to one class label.
261
262 Args:
263 feat_in (int): Number of channels being input to this module
264 num_classes (int): Number of characters in ASR model's vocab/labels.
265 This count should not include the CTC blank symbol.
266 init_mode (str): Describes how neural network parameters are
267 initialized. Options are ['xavier_uniform', 'xavier_normal',
268 'kaiming_uniform','kaiming_normal'].
269 Defaults to "xavier_uniform".
270 """
271
272 @property
273 def input_ports(self):
274 """Returns definitions of module input ports.
275 """
276 return {
277 # "encoder_output": NeuralType(
278 # {0: AxisType(BatchTag), 1: AxisType(EncodedRepresentationTag), 2: AxisType(ProcessedTimeTag)}
279 # )
280 "encoder_output": NeuralType(('B', 'D', 'T'), AcousticEncodedRepresentation())
281 }
282
283 @property
284 def output_ports(self):
285 """Returns definitions of module output ports.
286 """
287 # return {"logits": NeuralType({0: AxisType(BatchTag), 1: AxisType(ChannelTag)})}
288 return {"logits": NeuralType(('B', 'D'), LogitsType())}
289
290 def __init__(
291 self, *, feat_in, num_classes, init_mode="xavier_uniform", return_logits=True, pooling_type='avg', **kwargs
292 ):
293 TrainableNM.__init__(self, **kwargs)
294
295 self._feat_in = feat_in
296 self._return_logits = return_logits
297 self._num_classes = num_classes
298
299 if pooling_type == 'avg':
300 self.pooling = nn.AdaptiveAvgPool1d(1)
301 elif pooling_type == 'max':
302 self.pooling = nn.AdaptiveMaxPool1d(1)
303 else:
304 raise ValueError('Pooling type chosen is not valid. Must be either `avg` or `max`')
305
306 self.decoder_layers = nn.Sequential(nn.Linear(self._feat_in, self._num_classes, bias=True))
307 self.apply(lambda x: init_weights(x, mode=init_mode))
308 self.to(self._device)
309
310 def forward(self, encoder_output):
311 batch, in_channels, timesteps = encoder_output.size()
312
313 encoder_output = self.pooling(encoder_output).view(batch, in_channels) # [B, C]
314 logits = self.decoder_layers(encoder_output) # [B, num_classes]
315
316 if self._return_logits:
317 return logits
318
319 return F.softmax(logits, dim=-1)
320
[end of nemo/collections/asr/jasper.py]
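The `kernel_size_factor` rule described in the JasperEncoder docstring (rescale, floor at 1, bump even sizes to odd so "same" padding stays symmetric) can be sketched as a standalone helper. The function name here is illustrative only, not part of the NeMo API:

```python
def rescale_kernel_size(kernel_size: int, kernel_size_factor: float) -> int:
    # Hypothetical helper mirroring the docstring's rule; not the NeMo API.
    # Rescale and floor at 1 to prevent kernel sizes smaller than 1.
    new_kernel_size = int(max(1, kernel_size * kernel_size_factor))
    # An even kernel cannot be padded symmetrically for "same" output length,
    # so per the docstring, 1 is added to make the rescaled size odd.
    if new_kernel_size % 2 == 0:
        new_kernel_size += 1
    return new_kernel_size
```

For example, a kernel of 11 with factor 0.5 rescales to 5 (already odd), while a kernel of 4 with factor 1.0 becomes 5 after the even-size adjustment.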
[start of nemo/core/neural_factory.py]
1 # ! /usr/bin/python
2 # -*- coding: utf-8 -*-
3
4 # Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved.
5 #
6 # Licensed under the Apache License, Version 2.0 (the "License");
7 # you may not use this file except in compliance with the License.
8 # You may obtain a copy of the License at
9 #
10 # http://www.apache.org/licenses/LICENSE-2.0
11 #
12 # Unless required by applicable law or agreed to in writing, software
13 # distributed under the License is distributed on an "AS IS" BASIS,
14 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
15 # See the License for the specific language governing permissions and
16 # limitations under the License.
17
18 __all__ = [
19 'Backend',
20 'ModelMode',
21 'Optimization',
22 'DeviceType',
23 'Actions',
24 'NeuralModuleFactory',
25 'DeploymentFormat',
26 ]
27
28 import random
29 from abc import ABC, abstractmethod
30 from enum import Enum
31 from typing import List, Optional
32
33 import numpy as np
34
35 import nemo
36 from ..utils import ExpManager
37 from .callbacks import ActionCallback, EvaluatorCallback
38 from .neural_types import *
39 from nemo.utils.decorators import deprecated
40
41 logging = nemo.logging
42
43
44 class DeploymentFormat(Enum):
45 """Which format to use when exporting a Neural Module for deployment"""
46
47 AUTO = 0
48 PYTORCH = 1
49 TORCHSCRIPT = 2
50 ONNX = 3
51 TRTONNX = 4
52
53
54 class Backend(Enum):
55 """Supported backends. For now, it is only PyTorch."""
56
57 PyTorch = 1
58 NotSupported = 2
59
60
61 class ModelMode(Enum):
62 """Training Mode or Evaluation/Inference"""
63
64 train = 0
65 eval = 1
66
67
68 class Optimization(Enum):
69 """Various levels of Apex/amp Optimization.
70 WARNING: This might have effect on model accuracy."""
71
72 mxprO0 = 0
73 mxprO1 = 1
74 mxprO2 = 2
75 mxprO3 = 3
76
77
78 class DeviceType(Enum):
79 """Device types where Neural Modules can be placed."""
80
81 GPU = 1
82 CPU = 2
83 AllGpu = 3
84
85
86 class Actions(ABC):
87 """Basic actions allowed on graphs of Neural Modules"""
88
89 def __init__(self, local_rank, global_rank, optimization_level=Optimization.mxprO0):
90 self._local_rank = local_rank
91 self._global_rank = global_rank
92 self._optim_level = optimization_level
93 self.step = None
94 self.epoch_num = None
95
96 @property
97 def local_rank(self):
98 """Local rank during distributed execution. None if single GPU/CPU
99
100 Returns:
101             (int) rank of worker or None if not in distributed mode
102 """
103 return self._local_rank
104
105 @property
106 def global_rank(self):
107 """Global rank during distributed execution. None if single GPU/CPU
108
109 Returns:
110 (int) rank or worker or None if not in distributed model
111 """
112 return self._global_rank
113
114 @abstractmethod
115 def train(
116 self,
117 tensors_to_optimize: List[NmTensor],
118 callbacks: Optional[List[ActionCallback]],
119 lr_policy=None,
120 batches_per_step=None,
121 stop_on_nan_loss=False,
122 ):
123 """This action executes training and (optionally) evaluation.
124
125 Args:
126 tensors_to_optimize: which tensors to optimize. Typically this is
127                 single loss tensor.
128 callbacks: list of callback objects
129 lr_policy: function which should take (initial_lr, step, epoch) and
130 return learning rate
131 batches_per_step: number of mini-batches to process before one
132 optimizer step. (default: None, same as 1). Use this
133 to simulate larger batch sizes on hardware which could not fit
134 larger batch in memory otherwise. Effectively, this will make
135 "algorithmic" batch size per GPU/worker = batches_per_step*
136 batch_size
137 stop_on_nan_loss: (default: False) If set to True, the training
138 will stop if loss=nan. If set to False, the training will
139 continue, but the gradients will be zeroed before next
140 mini-batch.
141
142 Returns:
143 None
144 """
145 pass
146
147 @abstractmethod
148 def infer(self, tensors: List[NmTensor]):
149 """This action executes inference. Nothing is optimized.
150 Args:
151 tensors: which tensors to evaluate.
152
153 Returns:
154 None
155 """
156 pass
157
158 @abstractmethod
159 def save_state_to(self, path: str):
160 """
161 Saves current state such as step, epoch and optimizer parameters
162 Args:
163 path:
164
165 Returns:
166
167 """
168 pass
169
170 @abstractmethod
171 def restore_state_from(self, path: str):
172 """
173 Restores state such as step, epoch and optimizer parameters
174 Args:
175 path:
176
177 Returns:
178
179 """
180 pass
181
182 @abstractmethod
183 def create_optimizer(self, optimizer, things_to_optimize, optimizer_params):
184 """
185 Creates an optimizer object to be use in the train() method.
186
187 Args:
188 optimizer: Specifies which optimizer to use.
189 things_to_optimize: A list of neural modules or tensors to be
190 optimized.
191 optimizer_params: Specifies the parameters of the optimizer
192
193 Returns:
194 Optimizer
195 """
196 pass
197
198 def _perform_on_iteration_start(self, callbacks):
199 # TODO: Most of these checks can be relaxed since we enforce callbacks
200 # to be a list of ActionCallback objects
201 if callbacks is not None and isinstance(callbacks, List) and len(callbacks) > 0:
202 for callback in callbacks:
203 callback.on_iteration_start()
204
205 def _perform_on_iteration_end(self, callbacks):
206 if callbacks is not None and isinstance(callbacks, List) and len(callbacks) > 0:
207 for callback in callbacks:
208 callback.on_iteration_end()
209
210 def _perform_on_action_start(self, callbacks):
211 if callbacks is not None and isinstance(callbacks, List) and len(callbacks) > 0:
212 for callback in callbacks:
213 callback.on_action_start()
214
215 def _perform_on_action_end(self, callbacks):
216 if callbacks is not None and isinstance(callbacks, List) and len(callbacks) > 0:
217 for callback in callbacks:
218 callback.on_action_end()
219
220 def _perform_on_epoch_start(self, callbacks):
221 if callbacks is not None and isinstance(callbacks, List) and len(callbacks) > 0:
222 for callback in callbacks:
223 callback.on_epoch_start()
224
225 def _perform_on_epoch_end(self, callbacks):
226 if callbacks is not None and isinstance(callbacks, List) and len(callbacks) > 0:
227 for callback in callbacks:
228 callback.on_epoch_end()
229
230 def _init_callbacks(self, callbacks):
231 if callbacks is not None and isinstance(callbacks, List) and len(callbacks) > 0:
232 for callback in callbacks:
233 callback.action = self
234
235 def _update_callbacks(
236 self, callbacks=None, registered_tensors=None,
237 ):
238 # if self.local_rank is None or self.local_rank == 0:
239 if callbacks is not None and isinstance(callbacks, List) and len(callbacks) > 0:
240 for callback in callbacks:
241 callback._registered_tensors = registered_tensors
242
243
244 def _str_to_opt_level(opt_str: str) -> Optimization:
245 number = int(opt_str[1:])
246 if number not in Optimization._value2member_map_:
247 raise ValueError(f"Unknown optimization value {opt_str}")
248 return Optimization(number)
249
250
251 class NeuralModuleFactory(object):
252 _DEFAULT = None
253
254 """
255 Neural Module Factory instance is used to create neural modules and
256 trainers
257
258 Args:
259 backend (Backend): Currently only Backend.PyTorch is supported
260 local_rank (int): Process rank. Should be set by distributed runner
261 optimization_level (Optimization): Level of optimization to use. Will
262 be passed to neural modules and actions created by this factory.
263         placement (DeviceType): where to place NeuralModule instances by default
264 cudnn_benchmark (bool): (default False) If set to True it will use
265 cudnnFind method to find the best kernels instead of using
266 heuristics. If the shapes of your inputs are constant this
267 should help, for various shapes it can slow things down. Give it
268 few iterations to warmup if set to True. Currently only supported
269 by PyTorch backend.
270 random_seed (int): (default None) Sets random seed to control for
271 randomness. This should be used for debugging purposes as it might
272 have negative impact on performance. Can't be used when
273 `cudnn_benchmark=True`.
274 master_process (bool): (default True) Flag for master process
275 indication
276 set_default (bool): (default True) True if should set this instance as
277 default factory for modules instantiating.
278 """
279
280 def __init__(
281 self,
282 backend=Backend.PyTorch,
283 local_rank=None,
284 optimization_level=Optimization.mxprO0,
285 placement=None,
286 cudnn_benchmark=False,
287 random_seed=None,
288 set_default=True,
289 log_dir=None,
290 checkpoint_dir=None,
291 tensorboard_dir=None,
292 create_tb_writer=False,
293 files_to_copy=None,
294 add_time_to_log_dir=False,
295 ):
296 self._local_rank = local_rank
297 self._global_rank = None
298
299 if isinstance(optimization_level, str):
300 optimization_level = _str_to_opt_level(optimization_level)
301 self._optim_level = optimization_level
302
303 if placement is None:
304 if local_rank is not None:
305 device = DeviceType.AllGpu
306 else:
307 device = DeviceType.GPU
308
309 self._placement = device
310 else:
311 self._placement = placement
312
313 self._backend = backend
314 self._world_size = 1
315 broadcast_func = None
316 if backend == Backend.PyTorch:
317 # TODO: Move all framework specific code from this file
318 import torch
319
320 if self._placement != DeviceType.CPU:
321 if not torch.cuda.is_available():
322 raise ValueError(
323 "You requested to use GPUs but CUDA is "
324 "not installed. You can try running using"
325 " CPU-only. To do this, instantiate your"
326 " factory with placement=DeviceType.CPU"
327 "\n"
328 "Note that this is slow and is not "
329 "well supported."
330 )
331
332 torch.backends.cudnn.benchmark = cudnn_benchmark
333 if random_seed is not None and cudnn_benchmark:
334 raise ValueError("cudnn_benchmark can not be set to True when random_seed is not None.")
335 if random_seed is not None:
336 torch.backends.cudnn.deterministic = True
337 torch.backends.cudnn.benchmark = False
338 torch.manual_seed(random_seed)
339 np.random.seed(random_seed)
340 random.seed(random_seed)
341
342 if self._local_rank is not None:
343 torch.distributed.init_process_group(backend="nccl", init_method="env://")
344
345 cuda_set = True
346 # Try to set cuda device. This should fail if self._local_rank
347 # is greater than the number of available GPUs
348 try:
349 torch.cuda.set_device(self._local_rank)
350 except RuntimeError:
351 # Note in this case, all tensors are now sent to GPU 0
352 # who could crash because of OOM. Thus init_process_group()
353 # must be done before any cuda tensors are allocated
354 cuda_set = False
355 cuda_set_t = torch.cuda.IntTensor([cuda_set])
356
357 # Do an all_reduce to ensure all workers obtained a GPU
358 # For the strangest reason, BAND doesn't work so I am resorting
359 # to MIN.
360 torch.distributed.all_reduce(cuda_set_t, op=torch.distributed.ReduceOp.MIN)
361 if cuda_set_t.item() == 0:
362 raise RuntimeError(
363 "There was an error initializing distributed training."
364 " Perhaps you specified more gpus than you have "
365 "available"
366 )
367
368 del cuda_set_t
369 torch.cuda.empty_cache()
370 # Remove test tensor from memory
371
372 self._world_size = torch.distributed.get_world_size()
373 self._global_rank = torch.distributed.get_rank()
374
375 def torch_broadcast_wrapper(str_len=None, string=None, src=0):
376 """Wrapper function to broadcast string values across all
377 workers
378 """
379 # Create byte cuda torch tensor
380 if string is not None:
381 string_tensor = torch.tensor(list(string.encode()), dtype=torch.uint8).cuda()
382 else:
383 string_tensor = torch.tensor([0] * str_len, dtype=torch.uint8).cuda()
384 # Run broadcast
385 torch.distributed.broadcast(string_tensor, src)
386 # turn byte tensor back to string
387 return_string = string_tensor.cpu().numpy()
388 return_string = b''.join(return_string).decode()
389 return return_string
390
391 broadcast_func = torch_broadcast_wrapper
392 else:
393 raise NotImplementedError("Only Pytorch backend is currently supported.")
394
395 # Create ExpManager
396 # if log_dir is None, only create logger
397 self._exp_manager = ExpManager(
398 work_dir=log_dir,
399 ckpt_dir=checkpoint_dir,
400 use_tb=create_tb_writer,
401 tb_dir=tensorboard_dir,
402 local_rank=local_rank,
403 global_rank=self._global_rank,
404 files_to_copy=files_to_copy,
405 add_time=add_time_to_log_dir,
406 exist_ok=True,
407 broadcast_func=broadcast_func,
408 )
409 self._tb_writer = self._exp_manager.tb_writer
410
411 # Create trainer
412 self._trainer = self._get_trainer(tb_writer=self._tb_writer)
413
414 if set_default:
415 NeuralModuleFactory.set_default_factory(self)
416
417 @classmethod
418 def get_default_factory(cls):
419 return cls._DEFAULT
420
421 @classmethod
422 def set_default_factory(cls, factory):
423 cls._DEFAULT = factory
424
425 @classmethod
426 def reset_default_factory(cls):
427 cls._DEFAULT = None
428
429 @staticmethod
430 def __name_import(name):
431 components = name.split(".")
432 mod = __import__(components[0])
433 for comp in components[1:]:
434 mod = getattr(mod, comp)
435 return mod
436
437 @deprecated(version=0.11)
438 def __get_pytorch_module(self, name, collection, params, pretrained):
439 # TK: "factory" is not passed as parameter anymore.
440 # params["factory"] = self
441
442 if collection == "toys" or collection == "tutorials" or collection == "other":
443 constructor = NeuralModuleFactory.__name_import("nemo.backends.pytorch.tutorials." + name)
444 elif collection == "nemo_nlp":
445 constructor = NeuralModuleFactory.__name_import("nemo_nlp." + name)
446 if name == "BERT" and pretrained is True:
447 params["pretrained"] = True
448 elif collection == "nemo_asr":
449 constructor = NeuralModuleFactory.__name_import("nemo_asr." + name)
450 elif collection == "nemo_lpr":
451 constructor = NeuralModuleFactory.__name_import("nemo_lpr." + name)
452 elif collection == 'common':
453 constructor = NeuralModuleFactory.__name_import('nemo.backends.pytorch.common.' + name)
454 elif collection == "torchvision":
455 import torchvision.models as tv_models
456 import nemo.backends.pytorch.module_wrapper as mw
457 import torch.nn as nn
458
459 if name == "ImageFolderDataLayer":
460 constructor = NeuralModuleFactory.__name_import("nemo.backends.pytorch.torchvision.data." + name)
461 instance = constructor(**params)
462 return instance
463 else:
464 _nm_name = name.lower()
465 if _nm_name == "resnet18":
466 input_ports = {
467 "x": NeuralType(
468 {
469 0: AxisType(BatchTag),
470 1: AxisType(ChannelTag),
471 2: AxisType(HeightTag, 224),
472 3: AxisType(WidthTag, 224),
473 }
474 )
475 }
476 output_ports = {"output": NeuralType({0: AxisType(BatchTag), 1: AxisType(ChannelTag)})}
477
478 pt_model = tv_models.resnet18(pretrained=pretrained)
479 num_classes = params.get("num_classes", None)
480 if num_classes is not None:
481 pt_model.fc = nn.Linear(512, params["num_classes"])
482 return mw.TrainableNeuralModuleWrapper(
483 pt_nn_module=pt_model, input_ports_dict=input_ports, output_ports_dict=output_ports,
484 )
485 elif _nm_name == "resnet50":
486 input_ports = {
487 "x": NeuralType(
488 {
489 0: AxisType(BatchTag),
490 1: AxisType(ChannelTag),
491 2: AxisType(HeightTag, 224),
492 3: AxisType(WidthTag, 224),
493 }
494 )
495 }
496 output_ports = {"output": NeuralType({0: AxisType(BatchTag), 1: AxisType(ChannelTag)})}
497
498 pt_model = tv_models.resnet50(pretrained=pretrained)
499 num_classes = params.get("num_classes", None)
500 if num_classes is not None:
501 pt_model.fc = nn.Linear(2048, params["num_classes"])
502 return mw.TrainableNeuralModuleWrapper(
503 pt_nn_module=pt_model, input_ports_dict=input_ports, output_ports_dict=output_ports,
504 )
505 else:
506 collection_path = "nemo.collections." + collection + "." + name
507 constructor = NeuralModuleFactory.__name_import(collection_path)
508 if name == "BERT" and pretrained is True:
509 params["pretrained"] = True
510
511 # TK: "placement" is not passed as parameter anymore.
512 # if "placement" not in params:
513 # params["placement"] = self._placement
514 instance = constructor(**params)
515 return instance
516
517 @deprecated(version=0.11)
518 def get_module(self, name, collection, params, pretrained=False):
519 """
520 Creates NeuralModule instance
521
522 Args:
523 name (str): name of NeuralModule which instance should be returned.
524 params (dict): local parameters which should be passed to
525 NeuralModule's constructor.
526 collection (str): in which collection to look for
527 `neural_module_name`
528 pretrained (bool): return pre-trained instance or randomly
529 initialized (default)
530
531 Returns:
532 NeuralModule instance
533 """
534
535 # TK: "optimization_level" is not passed as parameter anymore.
536 # if params is not None and "optimization_level" in params:
537 # if params["optimization_level"] != self._optim_level:
538 # logging.warning(
539 # "Module's {0} requested optimization level {1} is"
540 # "different from the one specified by factory - {2}."
541 # "Using: {3} for this module".format(
542 # name, params["optimization_level"], self._optim_level, params["optimization_level"],
543 # )
544 # )
545 # else:
546 # if params is None:
547 # params = {}
548 # params["optimization_level"] = self._optim_level
549
550 if self._backend == Backend.PyTorch:
551 return self.__get_pytorch_module(name=name, collection=collection, params=params, pretrained=pretrained,)
552 else:
553 return None
554
555 def create_optimizer(self, optimizer, things_to_optimize, optimizer_params):
556 return self._trainer.create_optimizer(
557 optimizer=optimizer, things_to_optimize=things_to_optimize, optimizer_params=optimizer_params,
558 )
559
560 def train(
561 self,
562 tensors_to_optimize,
563 optimizer=None,
564 optimization_params=None,
565 callbacks: Optional[List[ActionCallback]] = None,
566 lr_policy=None,
567 batches_per_step=None,
568 stop_on_nan_loss=False,
569 synced_batchnorm=False,
570 synced_batchnorm_groupsize=0,
571 gradient_predivide=False,
572 amp_max_loss_scale=2.0 ** 24,
573 reset=False,
574 ):
575 if reset:
576 self.reset_trainer()
577 return self._trainer.train(
578 tensors_to_optimize=tensors_to_optimize,
579 optimizer=optimizer,
580 optimization_params=optimization_params,
581 callbacks=callbacks,
582 lr_policy=lr_policy,
583 batches_per_step=batches_per_step,
584 stop_on_nan_loss=stop_on_nan_loss,
585 synced_batchnorm=synced_batchnorm,
586 synced_batchnorm_groupsize=synced_batchnorm_groupsize,
587 gradient_predivide=gradient_predivide,
588 amp_max_loss_scale=amp_max_loss_scale,
589 )
590
591 def eval(self, callbacks: List[EvaluatorCallback]):
592 if callbacks is None or len(callbacks) == 0:
593 raise ValueError(f"You need to provide at lease one evaluation" f"callback to eval")
594 for callback in callbacks:
595 if not isinstance(callback, EvaluatorCallback):
596 raise TypeError(f"All callbacks passed to the eval action must" f"be inherited from EvaluatorCallback")
597 self.train(
598 tensors_to_optimize=None, optimizer='sgd', callbacks=callbacks, optimization_params={'num_epochs': 1},
599 )
600
601 def deployment_export(
602 self, module, output: str, d_format: DeploymentFormat, input_example=None, output_example=None
603 ):
604 """Exports Neural Module instance for deployment.
605
606 Args:
607 module: neural module to export
608 output (str): where export results should be saved
609 d_format (DeploymentFormat): which deployment format to use
610 input_example: sometimes tracing will require input examples
611 output_example: Should match inference on input_example
612 """
613 module.prepare_for_deployment()
614
615 return self._trainer.deployment_export(
616 module=module,
617 output=output,
618 d_format=d_format,
619 input_example=input_example,
620 output_example=output_example,
621 )
622
623 def infer(
624 self,
625 tensors: List[NmTensor],
626 checkpoint_dir=None,
627 ckpt_pattern='',
628 verbose=True,
629 cache=False,
630 use_cache=False,
631 offload_to_cpu=True,
632 modules_to_restore=None,
633 ):
634 """Runs inference to obtain values for tensors
635
636 Args:
637 tensors (list[NmTensor]): List of NeMo tensors that we want to get
638 values of.
639 checkpoint_dir (str): Path to checkpoint directory. Default is None
640 which does not load checkpoints.
641 ckpt_pattern (str): Pattern used to check for checkpoints inside
642 checkpoint_dir. Default is '' which matches any checkpoints
643 inside checkpoint_dir.
644 verbose (bool): Controls printing. Defaults to True.
645 cache (bool): If True, cache all `tensors` and intermediate tensors
646 so that future calls that have use_cache set will avoid
647 computation. Defaults to False.
648             use_cache (bool): Values from `tensors` will always be re-computed.
649 It will re-use intermediate tensors from the DAG leading to
650 `tensors`. If you want something to be re-computed, put it into
651 `tensors` list. Defaults to False.
652 offload_to_cpu (bool): If True, all evaluated tensors are moved to
653 cpu memory after each inference batch. Defaults to True.
654 modules_to_restore (list): Defaults to None, in which case all
655 NMs inside callchain with weights will be restored. If
656 specified only the modules inside this list will be restored.
657
658 Returns:
659 List of evaluated tensors. Each element in the list is also a list
660 where each element is now a batch of tensor values.
661 """
662 return self._trainer.infer(
663 tensors=tensors,
664 checkpoint_dir=checkpoint_dir,
665 ckpt_pattern=ckpt_pattern,
666 verbose=verbose,
667 cache=cache,
668 use_cache=use_cache,
669 offload_to_cpu=offload_to_cpu,
670 modules_to_restore=modules_to_restore,
671 )
672
673 def clear_cache(self):
674 """Helper function to clean inference cache."""
675 self._trainer.clear_cache()
676
677 @deprecated(version="future")
678 def _get_trainer(self, tb_writer=None):
679 if self._backend == Backend.PyTorch:
680 constructor = NeuralModuleFactory.__name_import("nemo.backends.pytorch.PtActions")
681 instance = constructor(
682 local_rank=self._local_rank,
683 global_rank=self._global_rank,
684 tb_writer=tb_writer,
685 optimization_level=self._optim_level,
686 )
687 return instance
688 else:
689 raise ValueError("Only PyTorch backend is currently supported.")
690
691 @deprecated(
692 version="future",
693 explanation="Please use .train(...), .eval(...), .infer(...) and "
694 f".create_optimizer(...) of the NeuralModuleFactory instance directly.",
695 )
696 def get_trainer(self, tb_writer=None):
697 if self._trainer:
698 logging.warning(
699 "The trainer instance was created during initialization of "
700 "Neural factory, using the already created instance."
701 )
702 return self._trainer
703 return self._get_trainer(tb_writer)
704
705 def reset_trainer(self):
706 del self._trainer
707 self._trainer = self._get_trainer(tb_writer=self._tb_writer)
708
709 def sync_all_processes(self, status=True):
710 """ Helper function for testing that allows proccess 0 to inform all
711 other processes of failures. Does nothing if not using distributed
712 training. Usage example can be seen in examples/asr/jasper_an4.py
713
714 Args:
715             status (bool): Defaults to True. If any process passes False, it
716 will trigger a graceful exit on all other processes. It is
717 assumed that the process that passed False will print an error
718 message on its own and exit
719 """
720 if self._world_size == 1:
721 logging.info("sync_all_processes does nothing if there is one process")
722 return
723 if self._backend == Backend.PyTorch:
724 import torch
725
726 status_tensor = torch.cuda.IntTensor([status])
727 torch.distributed.all_reduce(status_tensor, op=torch.distributed.ReduceOp.MIN)
728 if status_tensor.item() == 0:
729 logging.error("At least one process had a failure")
730 if status:
731 raise ValueError(
732 f"Process with global rank {self._global_rank} entered"
733 " sync_all_processes with a passing status, but "
734 "another process indicated a failure"
735 )
736
737 @property
738 def world_size(self):
739 return self._world_size
740
741 @property
742 def tb_writer(self):
743 return self._tb_writer
744
745 @property
746 def placement(self):
747 return self._placement
748
749 @property
750 def optim_level(self):
751 return self._optim_level
752
753 @property
754 @deprecated(version=0.11, explanation="Please use ``nemo.logging instead``")
755 def logger(self):
756 return nemo.logging
757
758 @property
759 def checkpoint_dir(self):
760 return self._exp_manager.ckpt_dir
761
762 @property
763 def work_dir(self):
764 return self._exp_manager.work_dir
765
766 @property
767 def global_rank(self):
768 return self._global_rank
769
[end of nemo/core/neural_factory.py]
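The `sync_all_processes` helper in `neural_factory.py` above hinges on a MIN all-reduce: every rank contributes 1 (pass) or 0 (fail), so the reduced value is 0 exactly when at least one rank reported a failure. A minimal sketch of that reduction logic, with plain Python standing in for `torch.distributed.all_reduce` (the rank statuses below are hypothetical):

```python
def reduce_status(statuses):
    """MIN all-reduce over per-rank pass/fail flags: 0 if any rank failed."""
    return min(int(bool(s)) for s in statuses)

# Every rank passed -> reduced status is 1, no error is raised.
assert reduce_status([True, True, True]) == 1
# One rank failed -> reduced status is 0, mirroring the error branch above.
assert reduce_status([True, False, True]) == 0
```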
[start of nemo/core/neural_modules.py]
1 # ! /usr/bin/python
2 # -*- coding: utf-8 -*-
3
4 # Copyright (c) 2019-, NVIDIA CORPORATION. All rights reserved.
5 #
6 # Licensed under the Apache License, Version 2.0 (the "License");
7 # you may not use this file except in compliance with the License.
8 # You may obtain a copy of the License at
9 #
10 # http://www.apache.org/licenses/LICENSE-2.0
11 #
12 # Unless required by applicable law or agreed to in writing, software
13 # distributed under the License is distributed on an "AS IS" BASIS,
14 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
15 # See the License for the specific language governing permissions and
16 # limitations under the License.
17
18 """This file contains NeuralModule and NmTensor classes."""
19 __all__ = ['WeightShareTransform', 'NeuralModule']
20
21 import collections
22 import uuid
23 from abc import ABC, abstractmethod
24 from collections import namedtuple
25 from enum import Enum
26 from inspect import getargvalues, getfullargspec, stack
27 from os import path
28 from typing import Dict, List, Optional, Set, Tuple
29
30 from ruamel.yaml import YAML
31
32 from .neural_types import (
33 CanNotInferResultNeuralType,
34 NeuralPortNameMismatchError,
35 NeuralPortNmTensorMismatchError,
36 NeuralType,
37 NeuralTypeComparisonResult,
38 NmTensor,
39 )
40 from nemo import logging
41 from nemo.core import NeuralModuleFactory
42 from nemo.package_info import __version__ as nemo_version
43 from nemo.utils.decorators.deprecated import deprecated
44
45 YAML = YAML(typ='safe')
46
47
48 class WeightShareTransform(Enum):
49 """When sharing parameters, what kind of transform to apply."""
50
51 SAME = 0
52 TRANSPOSE = 1
53
54
55 PretrainedModelInfo = namedtuple(
56 "PretrainedModleInfo", ("pretrained_model_name", "description", "parameters", "location"),
57 )
58
59
60 class NeuralModule(ABC):
61 """Abstract class that every Neural Module must inherit from.
62 """
63
64 def __init__(self):
65
66 # Get default factory.
67 self._factory = NeuralModuleFactory.get_default_factory()
68
69 # Set module properties from factory else use defaults
70 self._placement = self._factory.placement
71 # If one needs to change that should override it manually.
72
73 # Optimization level.
74 self._opt_level = self._factory.optim_level
75
76 # Get object UUID.
77 self._uuid = str(uuid.uuid4())
78
79 # Retrieve dictionary of parameters (keys, values) passed to init.
80 self._init_params = self.__extract_init_params()
81
82 # Print the types of the values.
83 # for key, value in self._init_params.items():
84 # print("{}: {} ({})".format(key, value, type(value)))
85
86 # Validate the parameters.
87 # self._validate_params(self._init_params)
88
89 @property
90 def init_params(self) -> Optional[Dict]:
91 """
92 Property returning parameters used to instantiate the module.
93
94 Returns:
95 Dictionary containing parameters used to instantiate the module.
96 """
97 return self._init_params
98
99 def __extract_init_params(self):
100 """
101 Retrieves the dictionary of parameters (keys, values) passed to constructor of a class derived
102 (also indirectly) from the Neural Module class.
103
104 Returns:
105 Dictionary containing parameters passed to init().
106 """
107 # Get names of arguments of the original module init method.
108 init_keys = getfullargspec(type(self).__init__).args
109
110 # Remove self.
111 if "self" in init_keys:
112 init_keys.remove("self")
113
114 # Create list of params.
115 init_params = {}.fromkeys(init_keys)
116
117 # Retrieve values of those params from the call list.
118 for frame in stack()[1:]:
119 localvars = getargvalues(frame[0]).locals
120 # print("localvars: ", localvars)
121 for key in init_keys:
122 # Found the variable!
123 if key in localvars.keys():
124 # Save the value.
125 init_params[key] = localvars[key]
126
127 # Return parameters.
128 return init_params
129
130 def __validate_params(self, params):
131 """
132 Checks whether dictionary contains parameters being primitive types (string, int, float etc.)
133 or (lists of)+ primitive types.
134
135 Args:
136 params: dictionary of parameters.
137
138 Returns:
139 True if all parameters were ok, False otherwise.
140 """
141 ok = True
142
143 # Iterate over parameters and check them one by one.
144 for key, variable in params.items():
145 if not self.__is_of_allowed_type(variable):
146 logging.warning(
147 "Parameter '{}' contains a variable '{}' of type '{}' which is not allowed.".format(
148 key, variable, type(variable)
149 )
150 )
151 ok = False
152
153 # Return the result.
154 return ok
155
156 def __is_of_allowed_type(self, var):
157 """
158 A recursive function that checks if a given variable is of allowed type.
159
160 Args:
161 pretrained_model_name (str): name of pretrained model to use in order.
162
163 Returns:
164 True if all parameters were ok, False otherwise.
165 """
166 # Special case: None is also allowed.
167 if var is None:
168 return True
169
170 var_type = type(var)
171
172 # If this is list - check its elements.
173 if var_type == list:
174 for list_var in var:
175 if not self.__is_of_allowed_type(list_var):
176 return False
177
178 # If this is dict - check its elements.
179 elif var_type == dict:
180 for _, dict_var in var.items():
181 if not self.__is_of_allowed_type(dict_var):
182 return False
183
184 elif var_type not in (str, int, float, bool):
185 return False
186
187 # Well, seems that everything is ok.
188 return True
189
190 def _create_config_header(self):
191 """ A protected method that create a header stored later in the configuration file. """
192
193 # Get module "full specification".
194 module_full_spec = str(self.__module__) + "." + str(self.__class__.__qualname__)
195 module_class_name = type(self).__name__
196 # print(module_full_spec)
197
198 # Check whether module belongs to a collection.
199 spec_list = module_full_spec.split(".")
200
201 # Do not check Neural Modules from unit tests.
202 if spec_list[0] == "tests":
203 # Set collection variables.
204 collection_type = "tests"
205 collection_version = None
206 else:
207 # Check if component belongs to any collection
208 if len(spec_list) < 3 or (spec_list[0] != "nemo" and spec_list[1] != "collection"):
209 logging.warning(
210 "Module `{}` does not belong to any collection. This won't be allowed in the next release.".format(
211 module_class_name
212 )
213 )
214 collection_type = "unknown"
215 collection_version = None
216 else:
217 # Ok, set collection.
218 collection_type = spec_list[2]
219 collection_version = None
220 # TODO: to be SET!
221 # print(getattr("nemo.collections.nlp", __version__))
222
223 # Create a "header" with module "specification".
224 header = {
225 "nemo_core_version": nemo_version,
226 "collection_type": collection_type,
227 "collection_version": collection_version,
228 # "class": module_class_name, # Operating only on full_spec now.
229 "full_spec": module_full_spec,
230 }
231 return header
232
233 def export_to_config(self, config_file):
234 """
235 A function that exports module "configuration" (i.e. init parameters) to a YAML file.
236 Raises a ValueError exception in case the parameters couldn't be exported.
237
238 Args:
239 config_file: path (absolute or relative) and name of the config file (YML)
240 """
241 # Check if generic export will work.
242 if not self.__validate_params(self._init_params):
243 raise ValueError(
244 "Generic configuration export enables to use of parameters of primitive types (string, int, float) "
245 F"or (lists of/dicts of) primitive types. Please implement your own custom `export_to_config()` and "
246 F"`import_from_config()` methods for your custom Module class."
247 )
248
249 # Create an absolute path.
250 abs_path_file = path.expanduser(config_file)
251
252 # Create the dictionary to be exported.
253 to_export = {}
254
255 # Add "header" with module "specification".
256 to_export["header"] = self._create_config_header()
257
258 # Add init parameters.
259 to_export["init_params"] = self._init_params
260 # print(to_export)
261
262 # All parameters are ok, let's export.
263 with open(abs_path_file, 'w') as outfile:
264 YAML.dump(to_export, outfile)
265
266 logging.info(
267 "Configuration of module {} ({}) exported to {}".format(self._uuid, type(self).__name__, abs_path_file)
268 )
269
270 @classmethod
271 def _validate_config_file(cls, config_file, section_name=None):
272 """
273 Class method validating whether the config file has proper content (sections, specification etc.).
274 Raises an ImportError exception when config file is invalid or
275 incompatible (when called from a particular class).
276
277 Args:
278 config_file: path (absolute or relative) and name of the config file (YML)
279
280 section_name: section in the configuration file storing module configuration (optional, DEFAULT: None)
281
282 Returns:
283 A loaded configuration file (dictionary).
284 """
285 # Create an absolute path.
286 abs_path_file = path.expanduser(config_file)
287
288 # Open the config file.
289 with open(abs_path_file, 'r') as stream:
290 loaded_config = YAML.load(stream)
291
292 # Check section.
293 if section_name is not None:
294 if section_name not in loaded_config:
295 raise ImportError(
296 "The loaded config `{}` doesn't contain the indicated `{}` section".format(
297 config_file, section_name
298 )
299 )
300 # Section exists - use only it for configuration.
301 loaded_config = loaded_config[section_name]
302
303 # Make sure that the config is valid.
304 if "header" not in loaded_config:
305 raise ImportError("The loaded config `{}` doesn't contain the `header` section".format(config_file))
306
307 if "init_params" not in loaded_config:
308 raise ImportError("The loaded config `{}` doesn't contain the `init_params` section".format(config_file))
309
310 # Parse the "full specification".
311 spec_list = loaded_config["header"]["full_spec"].split(".")
312
313 # Check if config contains data of a compatible class.
314 if cls.__name__ != "NeuralModule" and spec_list[-1] != cls.__name__:
315 txt = "The loaded file `{}` contains configuration of ".format(config_file)
316 txt = txt + "`{}` thus cannot be used for instantiation of an object of type `{}`".format(
317 spec_list[-1], cls.__name__
318 )
319 raise ImportError(txt)
320
321 # Success - return configuration.
322 return loaded_config
323
324 @classmethod
325 def import_from_config(cls, config_file, section_name=None, overwrite_params={}):
326 """
327 Class method importing the configuration file.
328 Raises an ImportError exception when config file is invalid or
329 incompatible (when called from a particular class).
330
331 Args:
332 config_file: path (absolute or relative) and name of the config file (YML)
333
334 section_name: section in the configuration file storing module configuration (optional, DEFAULT: None)
335
336 overwrite_params: Dictionary containing parameters that will be added to or overwrite (!) the default
337 parameters loaded from the configuration file
338
339 Returns:
340 Instance of the created NeuralModule object.
341 """
342 # Validate the content of the configuration file (its header).
343 loaded_config = cls._validate_config_file(config_file, section_name)
344
345 # Parse the "full specification".
346 spec_list = loaded_config["header"]["full_spec"].split(".")
347
348 # Get object class from "full specification".
349 mod_obj = __import__(spec_list[0])
350 for spec in spec_list[1:]:
351 mod_obj = getattr(mod_obj, spec)
352 # print(mod_obj)
353
354 # Get init parameters.
355 init_params = loaded_config["init_params"]
356 # Update parameters with additional ones.
357 init_params.update(overwrite_params)
358
359 # Create and return the object.
360 obj = mod_obj(**init_params)
361 logging.info(
362 "Instantiated a new Neural Module of type `{}` using configuration loaded from the `{}` file".format(
363 spec_list[-1], config_file
364 )
365 )
366 return obj
367
368 @deprecated(version=0.11)
369 @staticmethod
370 def create_ports(**kwargs):
371 """ Deprecated method, to be remoted in the next release."""
372 raise Exception(
373 'Deprecated method. Please implement ``inputs`` and ``outputs`` \
374 properties to define module ports instead'
375 )
376
377 @property
378 @abstractmethod
379 def input_ports(self) -> Optional[Dict[str, NeuralType]]:
380 """Returns definitions of module input ports
381
382 Returns:
383 A (dict) of module's input ports names to NeuralTypes mapping
384 """
385
386 @property
387 @abstractmethod
388 def output_ports(self) -> Optional[Dict[str, NeuralType]]:
389 """Returns definitions of module output ports
390
391 Returns:
392 A (dict) of module's output ports names to NeuralTypes mapping
393 """
394
395 @property
396 def disabled_deployment_input_ports(self) -> Optional[Set[str]]:
397 """Returns names of input ports that will not be included in an export
398
399 Returns:
400 A (set) of module's input port names that are not exportable
401 """
402 return set([])
403
404 @property
405 def disabled_deployment_output_ports(self) -> Optional[Set[str]]:
406 """Returns names of output ports that will not be included in an export
407
408 Returns:
409 A (set) of module's output port names that are not exportable
410 """
411 return set([])
412
413 def prepare_for_deployment(self) -> None:
414 """Patch the module if required to prepare for deployment
415
416 """
417 return
418
419 @staticmethod
420 def pretrained_storage():
421 return ''
422
423 def __call__(self, **kwargs):
424 """This method allows objects to be called with their port names
425
426 Args:
427 kwargs: Input ports and their values. For example:
428 ...
429 mymodule1 = Subclass1_of_NeuralModule(...)
430 mymodule2 = Subclass2_of_NeuralModule(...)
431 ...
432 out_port1, out_port2 = mymodule1(input_port1=value1,
433 input_port2=value2,
434 input_port3=value3)
435 out_port11 = mymodule2(input_port1=out_port2)
436 ...
437
438 Returns:
439 NmTensor object or tuple of NmTensor objects
440 """
441 # Get input and output ports definitions.
442 input_port_defs = self.input_ports
443 output_port_defs = self.output_ports
444
445 first_input_nmtensor_type = None
446 input_nmtensors_are_of_same_type = True
447 for port_name, tgv in kwargs.items():
448 # make sure that passed arguments correspond to input port names
449 if port_name not in input_port_defs.keys():
450 raise NeuralPortNameMismatchError("Wrong input port name: {0}".format(port_name))
451
452 input_port = input_port_defs[port_name]
453 type_compatibility = input_port.compare(tgv)
454 if (
455 type_compatibility != NeuralTypeComparisonResult.SAME
456 and type_compatibility != NeuralTypeComparisonResult.GREATER
457 ):
458 raise NeuralPortNmTensorMismatchError(
459 "\n\nIn {0}. \n"
460 "Port: {1} and a NmTensor it was fed are \n"
461 "of incompatible neural types:\n\n{2} \n\n and \n\n{3}"
462 "\n\nType comparison result: {4}".format(
463 self.__class__.__name__, port_name, input_port_defs[port_name], tgv, type_compatibility,
464 )
465 )
466
467 # if first_input_nmtensor_type is None:
468 # first_input_nmtensor_type = NeuralType(tgv._axis2type)
469 # else:
470 # if first_input_nmtensor_type._axis2type is None:
471 # input_nmtensors_are_of_same_type = True
472 # else:
473 # input_nmtensors_are_of_same_type = first_input_nmtensor_type.compare(
474 # tgv
475 # ) == NeuralTypeComparisonResult.SAME and len(first_input_nmtensor_type._axis2type)
476 # if not (
477 # type_comatibility == NeuralTypeComparisonResult.SAME
478 # or type_comatibility == NeuralTypeComparisonResult.GREATER
479 # ):
480 # raise NeuralPortNmTensorMismatchError(
481 # "\n\nIn {0}. \n"
482 # "Port: {1} and a NmTensor it was fed are \n"
483 # "of incompatible neural types:\n\n{2} \n\n and \n\n{3}"
484 # "\n\nType comparison result: {4}".format(
485 # self.__class__.__name__, port_name, input_port_defs[port_name], tgv, type_comatibility,
486 # )
487 # )
488 # if type_comatibility == NeuralTypeComparisonResult.LESS:
489 # print('Types were raised')
490
491 if len(output_port_defs) == 1:
492 out_name = list(output_port_defs)[0]
493 out_type = output_port_defs[out_name]
494 if out_type is None:
495 if input_nmtensors_are_of_same_type:
496 out_type = first_input_nmtensor_type
497 else:
498 raise CanNotInferResultNeuralType(
499 "Can't infer output neural type. Likely your inputs are of different type."
500 )
501 return NmTensor(producer=self, producer_args=kwargs, name=out_name, ntype=out_type,)
502 else:
503 result = []
504 for out_port, n_type in output_port_defs.items():
505 out_type = n_type
506 if out_type is None:
507 if input_nmtensors_are_of_same_type:
508 out_type = first_input_nmtensor_type
509 else:
510 raise CanNotInferResultNeuralType(
511 "Can't infer output neural type. Likely your inputs are of different type."
512 )
513 result.append(NmTensor(producer=self, producer_args=kwargs, name=out_port, ntype=out_type,))
514
515 # Creating ad-hoc class for returning from module's forward pass.
516 output_class_name = f'{self.__class__.__name__}Output'
517 field_names = list(output_port_defs)
518 result_type = collections.namedtuple(typename=output_class_name, field_names=field_names,)
519
520 # Tie tuple of output tensors with corresponding names.
521 result = result_type(*result)
522
523 return result
524
525 def __str__(self):
526 return self.__class__.__name__
527
528 @abstractmethod
529 def get_weights(self) -> Optional[Dict[(str, bool)]]:
530 """Returns NeuralModule's weights copy.
531
532 Returns:
533 Dictionary of name -> (weights, trainable)"""
534 pass
535
536 @abstractmethod
537 def set_weights(
538 self,
539 name2weight: Dict[(str, Tuple[str, bool])],
540 name2name_and_transform: Dict[(str, Tuple[str, WeightShareTransform])] = None,
541 ):
542 """Sets weight from given values. For every named weight in
543 name2weight,
544 if weight with the same name is found in the model, it will be set to
545 found value.
546
547 WARNING: This will NOT tie weights. It will copy values.
548
549 If ``name2name_and_transform`` is provided then it will set weights
550 using
551 name mapping and transform. For example, suppose ``object1.X = 3x5
552 weight``.
553 Then, if ``name2name_and_transform['X']=('Y',
554 WeightShareTransform.TRANSPOSE)``
555 and ``Y`` is 5x3 weight and ``name2weight['Y']=Y``. Then:
556 ``object1.set_weights(name2weight, name2name_and_transform)`` will
557 set object1.X=transpose(Y).
558
559 Args:
560 name2weight (dict): dictionary of name to (weight, trainable).
561 Typically this is output of get_weights method.
562 name2name_and_transform: mapping from name -> (name, transform)
563 """
564 pass
565
566 @staticmethod
567 def list_pretrained_models() -> Optional[List[PretrainedModelInfo]]:
568 """List all available pre-trained models (e.g. weights) for this NM.
569
570 Returns:
571 A list of PretrainedModelInfo tuples.
572 The pretrained_model_name field of the tuple can be used to
573 retrieve pre-trained model's weights (pass it as
574 pretrained_model_name argument to the module's constructor)
575 """
576 return None
577
578 def get_config_dict_and_checkpoint(self, pretrained_model_name):
579 """WARNING: This part is work in progress"""
580 return None
581
582 @abstractmethod
583 def tie_weights_with(
584 self,
585 module,
586 weight_names=List[str],
587 name2name_and_transform: Dict[(str, Tuple[str, WeightShareTransform])] = None,
588 ):
589 """Ties weights between self and module. For every weight name in
590 weight_names, if weight with the same name is found in self, it will
591 be tied
592 with a same weight from ``module``.
593
594 WARNING: Once weights are tied, updates to one module's weights
595 will affect
596 other module's weights.
597
598
599 If ``name2name_and_transform`` is provided then it will set weights
600 using
601 name mapping and transform. For example, suppose ``objec1.X = 3x5
602 weights``
603 and ``object2.Y = 5x3 weights``. Then these weights can be tied like
604 this:
605
606 .. code-block:: python
607
608 object1.tie_weights_with(object2, weight_names=['X'],
609 name2name_and_transform =
610 { 'X': ('Y', WeightShareTransform.TRANSPOSE)})
611
612
613 Args:
614 module: with which module to tie weights
615 weight_names (List[str]): list of self weights' names
616 name2name_and_transform: mapping from name -> (name, transform)
617 """
618 pass
619
620 def is_trainable(self) -> bool:
621 """
622 Checks if NeuralModule is trainable.
623 A NeuralModule is trainable IFF it contains at least one trainable
624 weight
625
626 Returns:
627 True if module has trainable weights, False otherwise
628 """
629 weights = self.get_weights()
630 if weights is None:
631 return False
632 for name, w in weights.items():
633 if w[1]:
634 return True
635 return False
636
637 @abstractmethod
638 def save_to(self, path: str):
639 """Save module state to file.
640
641 Args:
642 path (string): path to file where to save.
643 """
644 pass
645
646 @abstractmethod
647 def restore_from(self, path: str):
648 """Restore module's state from file.
649
650 Args:
651 path (string): path to where to restore from.
652 """
653 pass
654
655 @abstractmethod
656 def freeze(self, weights: Set[str] = None):
657 """Freeze weights
658
659 Args:
660 weights (set): set of weight names to freeze
661 If None, all weights are frozen.
662 """
663 pass
664
665 @abstractmethod
666 def unfreeze(self, weights: Set[str] = None):
667 """Unfreeze weights
668
669 Args:
670 weights (set): set of weight names to unfreeze
671 If None, all weights are unfrozen.
672 """
673 pass
674
675 @property
676 def placement(self):
677 """Module's placement. Currently CPU or GPU.
678 DataParallel and ModelParallel will come later.
679
680 Returns:
681 (DeviceType) Device where NM's weights are located
682 """
683 return self._placement
684
685 @property
686 @deprecated(version=0.11)
687 def local_parameters(self) -> Optional[Dict]:
688 """Get module's parameters
689
690 Returns:
691 module's parameters
692 """
693 return self._init_params
694 # return self._local_parameters
695
696 @property
697 def unique_instance_id(self):
698 """A unique instance id for this object
699
700 Returns:
701 A unique uuid which can be used to identify this object
702 """
703 return self._uuid
704
705 @property
706 def factory(self):
707 """ Neural module factory which created this module
708 Returns: NeuralModuleFactory instance or None
709 """
710 return self._factory
711
712 @property
713 @abstractmethod
714 def num_weights(self):
715 """Number of module's weights
716 """
717 pass
718
[end of nemo/core/neural_modules.py]
</code>
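The generic `export_to_config` in `neural_modules.py` above serializes only init parameters that are primitive types or (nested) lists/dicts of them. A standalone sketch of that recursive check, mirroring the private `__is_of_allowed_type` method (the function name here is chosen for illustration):

```python
def is_of_allowed_type(var):
    """Recursive check: None, str, int, float, bool, and (nested)
    lists/dicts of those types are exportable; anything else is not."""
    if var is None:
        return True
    if isinstance(var, list):
        return all(is_of_allowed_type(v) for v in var)
    if isinstance(var, dict):
        return all(is_of_allowed_type(v) for v in var.values())
    return isinstance(var, (str, int, float, bool))

# Nested primitives pass; arbitrary objects are rejected.
assert is_of_allowed_type({"lr": 0.01, "layers": [64, 64], "name": None})
assert not is_of_allowed_type({"model": object()})
```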
Here is an example of a patch file. It consists of changes to the code base. It specifies the file names, the line numbers of each change, and the removed and added lines. A single patch file can contain changes to multiple files.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
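Each hunk in a patch like the example above begins with a header of the form `@@ -old_start,old_count +new_start,new_count @@`. As an illustrative aside (not part of the required answer format), a small parser for such headers; note that real unified diffs may omit `,count` when it equals 1, which this sketch does not handle:

```python
import re

HUNK_RE = re.compile(r"^@@ -(\d+),(\d+) \+(\d+),(\d+) @@")

def parse_hunk_header(line):
    """Return (old_start, old_count, new_start, new_count) for a hunk header."""
    m = HUNK_RE.match(line)
    if not m:
        raise ValueError(f"not a hunk header: {line!r}")
    return tuple(int(g) for g in m.groups())

# Header taken from the example patch above.
assert parse_hunk_header("@@ -1,27 +1,35 @@") == (1, 27, 1, 35)
```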
I need you to solve the provided issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the format shown above.
Respond below:
|
NVIDIA/NeMo
|
ba4616f1f011d599de87f0cb3315605e715d402a
|
Jasper Encoder Export failed
The export of Jasper Encoder is failing. I am using the core API [deployment_export](https://nvidia.github.io/NeMo/api-docs/nemo.html#nemo.core.neural_factory.NeuralModuleFactory.deployment_export) like in the script: https://github.com/NVIDIA/NeMo/blob/403238f82d26879ba5fca53fbf75b3cdc70fb49b/scripts/export_jasper_to_onnx.py#L92
I believe the issue (as shown below) is that the `input_example` provided does not match the `output_example`.
```
/opt/conda/lib/python3.6/site-packages/torch/jit/__init__.py:1023: TracerWarning: Output nr 1. of the traced function does not match the corresponding output of the Python function. Detailed error:
Not within tolerance rtol=1e-05 atol=1e-05 at input[0, 870, 67] (0.6547648906707764 vs. 0.6546438932418823) and 812 other locations (0.00%)
check_tolerance, _force_outplace, True, _module_class)
[NeMo E 2020-02-23 19:10:07 actions:1023] module export failed for JasperEncoder with exception number of output names provided (2) exceeded number of outputs (1)
```
**What is the correct `input_example` and `output_example` to export JasperEncoder?**
The full output can be seen here:
```
adrianaf@2a520c7abb1e:/tmp/NeMo$ ! python /tmp/NeMo/scripts/export_jasper_to_onnx.py --config /raid/datasets/asr/data/config_files/WSJ-test_acoustic_quartznet15x5.yaml --nn_encoder /home/adrianaf/projects/nemo_asr_app/models/quartznet15x5/JasperEncoder-STEP-247400.pt --nn_decoder /home/adrianaf/projects/nemo_asr_app/models/quartznet15x5/JasperDecoderForCTC-STEP-247400.pt --onnx_encoder /raid/datasets/asr/data/models/ONNX/pre-trained_encoder.onnx --onnx_decoder /raid/datasets/asr/data/models/ONNX/pre-trained_decoder.onnx
/opt/conda/lib/python3.6/site-packages/torchvision/io/_video_opt.py:17: UserWarning: video reader based on ffmpeg c++ ops not available
warnings.warn("video reader based on ffmpeg c++ ops not available")
/tmp/NeMo/nemo/collections/asr/audio_preprocessing.py:48: UserWarning: Could not import torchaudio. Some features might not work.
warnings.warn('Could not import torchaudio. Some features might not work.')
[NeMo I 2020-02-23 19:09:42 export_jasper_to_onnx:48] Loading config file...
[NeMo I 2020-02-23 19:09:42 export_jasper_to_onnx:52] Determining model shape...
[NeMo I 2020-02-23 19:09:42 export_jasper_to_onnx:60] Num encoder input features: 64
[NeMo I 2020-02-23 19:09:42 export_jasper_to_onnx:61] Num decoder input features: 1024
[NeMo W 2020-02-23 19:09:42 deprecated:68] Function ``_get_trainer`` is deprecated. It is going to be removed in the future version.
[NeMo I 2020-02-23 19:09:42 export_jasper_to_onnx:65] Initializing models...
[NeMo I 2020-02-23 19:09:45 export_jasper_to_onnx:76] Loading checkpoints...
[NeMo I 2020-02-23 19:09:45 export_jasper_to_onnx:91] Exporting encoder...
[NeMo W 2020-02-23 19:09:45 neural_factory:627] Turned off 170 masked convolutions
[NeMo I 2020-02-23 19:09:45 actions:937] Module is JasperEncoder. We are removing input and output length ports since they are not needed for deployment
[NeMo W 2020-02-23 19:09:46 deprecated:68] Function ``local_parameters`` is deprecated. It is going to be removed in the 0.11 version.
/opt/conda/lib/python3.6/site-packages/torch/jit/__init__.py:1023: TracerWarning: Output nr 1. of the traced function does not match the corresponding output of the Python function. Detailed error:
Not within tolerance rtol=1e-05 atol=1e-05 at input[0, 870, 67] (0.6547648906707764 vs. 0.6546438932418823) and 812 other locations (0.00%)
check_tolerance, _force_outplace, True, _module_class)
[NeMo E 2020-02-23 19:10:07 actions:1023] module export failed for JasperEncoder with exception number of output names provided (2) exceeded number of outputs (1)
[NeMo I 2020-02-23 19:10:07 export_jasper_to_onnx:98] Exporting decoder...
graph(%encoder_output : Float(1, 1024, 128),
%1 : Float(29, 1024, 1),
%2 : Float(29)):
%3 : Float(1, 29, 128) = onnx::Conv[dilations=[1], group=1, kernel_shape=[1], pads=[0, 0], strides=[1]](%encoder_output, %1, %2), scope: JasperDecoderForCTC/Sequential[decoder_layers]/Conv1d[0] # /opt/conda/lib/python3.6/site-packages/torch/nn/modules/conv.py:202:0
%4 : Float(1, 128, 29) = onnx::Transpose[perm=[0, 2, 1]](%3), scope: JasperDecoderForCTC # /tmp/NeMo/nemo/collections/asr/jasper.py:235:0
%output : Float(1, 128, 29) = onnx::LogSoftmax[axis=2](%4), scope: JasperDecoderForCTC # /opt/conda/lib/python3.6/site-packages/torch/nn/functional.py:1317:0
return (%output)
/opt/conda/lib/python3.6/site-packages/torch/onnx/utils.py:774: UserWarning: No names were found for specified dynamic axes of provided input.Automatically generated names will be applied to each dynamic axes of input encoder_output
'Automatically generated names will be applied to each dynamic axes of input {}'.format(key))
/opt/conda/lib/python3.6/site-packages/torch/onnx/utils.py:774: UserWarning: No names were found for specified dynamic axes of provided input.Automatically generated names will be applied to each dynamic axes of input output
'Automatically generated names will be applied to each dynamic axes of input {}'.format(key))
[NeMo I 2020-02-23 19:10:07 export_jasper_to_onnx:105] Export completed successfully.
```
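The `number of output names provided (2) exceeded number of outputs (1)` error indicates the exporter received two output names while the traced graph produced a single tensor. One plausible fix direction, sketched below, is to drop disabled deployment ports from the name lists before export; the port names used here are assumptions based on the log, not the actual NeMo API:

```python
def filter_port_names(port_names, disabled_ports):
    """Drop ports excluded from deployment before handing names to the exporter."""
    return [name for name in port_names if name not in disabled_ports]

# Hypothetical JasperEncoder output ports: only "outputs" survives for ONNX export.
assert filter_port_names(["outputs", "encoded_lengths"], {"encoded_lengths"}) == ["outputs"]
```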
|
2020-03-10T03:03:23Z
|
<patch>
<patch>
diff --git a/nemo/backends/pytorch/actions.py b/nemo/backends/pytorch/actions.py
--- a/nemo/backends/pytorch/actions.py
+++ b/nemo/backends/pytorch/actions.py
@@ -937,26 +937,16 @@ def __extract_dynamic_axes(port_name: str, ntype: NeuralType, dynamic_axes: defa
if axis.kind == AxisKind.Batch or axis.kind == AxisKind.Time:
dynamic_axes[port_name].append(ind)
- # This is a hack for Jasper to Jarvis export -- need re-design for this
- inputs_to_drop = set()
- outputs_to_drop = set()
- if type(module).__name__ == "JasperEncoder":
- logging.info(
- "Module is JasperEncoder. We are removing input and output length ports since they are not needed for "
- "deployment"
- )
- inputs_to_drop.add("length")
- outputs_to_drop.add("encoded_lengths")
-
+ # extract dynamic axes and remove unnecessary inputs/outputs
# for input_ports
for port_name, ntype in module.input_ports.items():
- if port_name in inputs_to_drop:
+ if port_name in module._disabled_deployment_input_ports:
input_names.remove(port_name)
continue
__extract_dynamic_axes(port_name, ntype, dynamic_axes)
# for output_ports
for port_name, ntype in module.output_ports.items():
- if port_name in outputs_to_drop:
+ if port_name in module._disabled_deployment_output_ports:
output_names.remove(port_name)
continue
__extract_dynamic_axes(port_name, ntype, dynamic_axes)
diff --git a/nemo/collections/asr/jasper.py b/nemo/collections/asr/jasper.py
--- a/nemo/collections/asr/jasper.py
+++ b/nemo/collections/asr/jasper.py
@@ -118,14 +118,14 @@ def output_ports(self):
}
@property
- def disabled_deployment_input_ports(self):
+ def _disabled_deployment_input_ports(self):
return set(["length"])
@property
- def disabled_deployment_output_ports(self):
+ def _disabled_deployment_output_ports(self):
return set(["encoded_lengths"])
- def prepare_for_deployment(self):
+ def _prepare_for_deployment(self):
m_count = 0
for m in self.modules():
if type(m).__name__ == "MaskedConv1d":
diff --git a/nemo/core/neural_factory.py b/nemo/core/neural_factory.py
--- a/nemo/core/neural_factory.py
+++ b/nemo/core/neural_factory.py
@@ -610,7 +610,7 @@ def deployment_export(
input_example: sometimes tracing will require input examples
output_example: Should match inference on input_example
"""
- module.prepare_for_deployment()
+ module._prepare_for_deployment()
return self._trainer.deployment_export(
module=module,
diff --git a/nemo/core/neural_modules.py b/nemo/core/neural_modules.py
--- a/nemo/core/neural_modules.py
+++ b/nemo/core/neural_modules.py
@@ -393,7 +393,7 @@ def output_ports(self) -> Optional[Dict[str, NeuralType]]:
"""
@property
- def disabled_deployment_input_ports(self) -> Optional[Set[str]]:
+ def _disabled_deployment_input_ports(self) -> Optional[Set[str]]:
"""Returns names of input ports that will not be included in an export
Returns:
@@ -402,7 +402,7 @@ def disabled_deployment_input_ports(self) -> Optional[Set[str]]:
return set([])
@property
- def disabled_deployment_output_ports(self) -> Optional[Set[str]]:
+ def _disabled_deployment_output_ports(self) -> Optional[Set[str]]:
"""Returns names of output ports that will not be included in an export
Returns:
@@ -410,7 +410,7 @@ def disabled_deployment_output_ports(self) -> Optional[Set[str]]:
"""
return set([])
- def prepare_for_deployment(self) -> None:
+ def _prepare_for_deployment(self) -> None:
"""Patch the module if required to prepare for deployment
"""
</patch>
|
diff --git a/tests/unit/core/test_deploy_export.py b/tests/unit/core/test_deploy_export.py
--- a/tests/unit/core/test_deploy_export.py
+++ b/tests/unit/core/test_deploy_export.py
@@ -46,9 +46,11 @@
import nemo.collections.nlp.nm.trainables.common.token_classification_nm
from nemo import logging
+TRT_ONNX_DISABLED = False
+
# Check if the required libraries and runtimes are installed.
+# Only initialize GPU after this runner is activated.
try:
- # Only initialize GPU after this runner is activated.
import pycuda.autoinit
# This import causes pycuda to automatically manage CUDA context creation and cleanup.
@@ -63,16 +65,17 @@
)
from .tensorrt_runner import TensorRTRunnerV2
except:
- # Skip tests.
- pytestmark = pytest.mark.skip
+ TRT_ONNX_DISABLED = True
@pytest.mark.usefixtures("neural_factory")
class TestDeployExport(TestCase):
- def setUp(self):
- logging.setLevel(logging.WARNING)
- device = nemo.core.DeviceType.GPU
- self.nf = nemo.core.NeuralModuleFactory(backend=nemo.core.Backend.PyTorch, placement=device)
+ # def setUp(self):
+ # super().setUp()
+
+ # logging.setLevel(logging.WARNING)
+ # device = nemo.core.DeviceType.GPU
+ # self.nf = nemo.core.NeuralModuleFactory(backend=nemo.core.Backend.PyTorch, placement=device)
def __test_export_route(self, module, out_name, mode, input_example=None):
out = Path(out_name)
@@ -112,7 +115,13 @@ def __test_export_route(self, module, out_name, mode, input_example=None):
loader_cache = DataLoaderCache(data_loader)
profile_shapes = OrderedDict()
names = list(module.input_ports) + list(module.output_ports)
-
+ names = list(
+ filter(
+ lambda x: x
+ not in (module._disabled_deployment_input_ports | module._disabled_deployment_output_ports),
+ names,
+ )
+ )
if isinstance(input_example, tuple):
si = [tuple(input_example[i].shape) for i in range(len(input_example))]
elif isinstance(input_example, OrderedDict):
@@ -152,7 +161,7 @@ def __test_export_route(self, module, out_name, mode, input_example=None):
input_names = list(input_metadata.keys())
for i in range(len(input_names)):
input_name = input_names[i]
- if input_name in module.disabled_deployment_input_ports:
+ if input_name in module._disabled_deployment_input_ports:
continue
inputs[input_name] = (
input_example[input_name].cpu().numpy()
@@ -209,8 +218,8 @@ def __test_export_route(self, module, out_name, mode, input_example=None):
ort_inputs = ort_session.get_inputs()
for i in range(len(input_names)):
input_name = input_names[i]
- if input_name in module.disabled_deployment_input_ports:
- input_name = ort_inputs[i].name
+ if input_name in module._disabled_deployment_input_ports:
+ continue
inputs[input_name] = (
input_example[input_name].cpu().numpy()
if isinstance(input_example, OrderedDict)
@@ -263,9 +272,10 @@ def __test_export_route(self, module, out_name, mode, input_example=None):
def __test_export_route_all(self, module, out_name, input_example=None):
if input_example is not None:
- self.__test_export_route(
- module, out_name + '.trt.onnx', nemo.core.DeploymentFormat.TRTONNX, input_example=input_example
- )
+ if not TRT_ONNX_DISABLED:
+ self.__test_export_route(
+ module, out_name + '.trt.onnx', nemo.core.DeploymentFormat.TRTONNX, input_example=input_example
+ )
self.__test_export_route(module, out_name + '.onnx', nemo.core.DeploymentFormat.ONNX, input_example)
self.__test_export_route(module, out_name + '.pt', nemo.core.DeploymentFormat.PYTORCH, input_example)
self.__test_export_route(module, out_name + '.ts', nemo.core.DeploymentFormat.TORCHSCRIPT, input_example)
@@ -323,9 +333,7 @@ def test_jasper_encoder(self):
)
self.__test_export_route_all(
- module=jasper_encoder,
- out_name="jasper_encoder",
- input_example=(torch.randn(16, 64, 256).cuda(), torch.randn(256).cuda()),
+ module=jasper_encoder, out_name="jasper_encoder", input_example=torch.randn(16, 64, 256).cuda(),
)
@pytest.mark.unit
@@ -343,7 +351,5 @@ def test_quartz_encoder(self):
)
self.__test_export_route_all(
- module=jasper_encoder,
- out_name="quartz_encoder",
- input_example=(torch.randn(16, 64, 256).cuda(), torch.randint(20, (16,)).cuda()),
+ module=jasper_encoder, out_name="quartz_encoder", input_example=torch.randn(16, 64, 256).cuda(),
)
|
1.0
| ||||
NVIDIA__NeMo-3632
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
./reinstall.sh crashes due to not being able to uninstall llvmlite
Starting off of `nemo:1.5.1` container, cloning the NeMo repo to a folder inside of it and calling `./reinstall.sh` fails with
```
ERROR: Cannot uninstall 'llvmlite'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall.
```
`pip install -e` on the other hand succeeds installing `nemo:1.7.0rc0` and `numpy:1.22.2`, the rest of the packages remain untouched.
It seems that `./reinstall.sh` which used to work fine, a week or so ago when following the same procedure to upgrade to `nemo:1.6.0rc` redeveloped issue #841. The solution remains the same, first call
```
pip install --ignore-installed llvmlite
```
followed by `./reinstall.sh`. In this case, apart `llvml`, the following packages are updated
```
ftfy-6.0.3 nemo-toolkit-1.7.0rc0 numba-0.55.1 pytorch-lightning-1.5.9 sacrebleu-2.0.0 setuptools-59.5.0
```
Interestingly `numpy` in this case is left at `1.21.5`.
</issue>
<code>
[start of README.rst]
1
2 |status| |documentation| |license| |lgtm_grade| |lgtm_alerts| |black|
3
4 .. |status| image:: http://www.repostatus.org/badges/latest/active.svg
5 :target: http://www.repostatus.org/#active
 6 :alt: Project Status: Active – The project has reached a stable, usable state and is being actively developed.
7
8 .. |documentation| image:: https://readthedocs.com/projects/nvidia-nemo/badge/?version=main
9 :alt: Documentation
10 :target: https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/
11
12 .. |license| image:: https://img.shields.io/badge/License-Apache%202.0-brightgreen.svg
13 :target: https://github.com/NVIDIA/NeMo/blob/master/LICENSE
14 :alt: NeMo core license and license for collections in this repo
15
16 .. |lgtm_grade| image:: https://img.shields.io/lgtm/grade/python/g/NVIDIA/NeMo.svg?logo=lgtm&logoWidth=18
17 :target: https://lgtm.com/projects/g/NVIDIA/NeMo/context:python
18 :alt: Language grade: Python
19
20 .. |lgtm_alerts| image:: https://img.shields.io/lgtm/alerts/g/NVIDIA/NeMo.svg?logo=lgtm&logoWidth=18
21 :target: https://lgtm.com/projects/g/NVIDIA/NeMo/alerts/
22 :alt: Total alerts
23
24 .. |black| image:: https://img.shields.io/badge/code%20style-black-000000.svg
25 :target: https://github.com/psf/black
26 :alt: Code style: black
27
28 .. _main-readme:
29
30 **NVIDIA NeMo**
31 ===============
32
33 Introduction
34 ------------
35
36 NVIDIA NeMo is a conversational AI toolkit built for researchers working on automatic speech recognition (ASR), natural language processing (NLP), and text-to-speech synthesis (TTS).
37 The primary objective of NeMo is to help researchers from industry and academia to reuse prior work (code and pretrained models) and make it easier to create new `conversational AI models <https://developer.nvidia.com/conversational-ai#started>`_.
38
39 `Pre-trained NeMo models. <https://catalog.ngc.nvidia.com/models?query=nemo&orderBy=weightPopularDESC>`_
40
41 `Introductory video. <https://www.youtube.com/embed/wBgpMf_KQVw>`_
42
43 Key Features
44 ------------
45
46 * Speech processing
47 * `Automatic Speech Recognition (ASR) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/intro.html>`_
48 * Supported models: Jasper, QuartzNet, CitriNet, Conformer-CTC, Conformer-Transducer, ContextNet, ...
49 * Supports CTC and Transducer/RNNT losses/decoders
50 * Beam Search decoding
51 * `Language Modelling for ASR <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/asr_language_modeling.html>`_: N-gram LM in fusion with Beam Search decoding, Neural Rescoring with Transformer
 52 * Streaming and Buffered ASR (CTC/Transducer) - `Chunked Inference Examples <https://github.com/NVIDIA/NeMo/tree/main/examples/asr/asr_chunked_inference>`_
53 * `Speech Classification and Speech Command Recognition <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/speech_classification/intro.html>`_: MatchboxNet (Command Recognition)
54 * `Voice activity Detection (VAD) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/asr/speech_classification/models.html#marblenet-vad>`_: MarbleNet
55 * `Speaker Recognition <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/speaker_recognition/intro.html>`_: TitaNet, ECAPA_TDNN, SpeakerNet
56 * `Speaker Diarization <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/speaker_diarization/intro.html>`_: TitaNet, ECAPA_TDNN, SpeakerNet
57 * `Pretrained models on different languages. <https://ngc.nvidia.com/catalog/collections/nvidia:nemo_asr>`_: English, Spanish, German, Russian, Chinese, French, Italian, Polish, ...
58 * `NGC collection of pre-trained speech processing models. <https://ngc.nvidia.com/catalog/collections/nvidia:nemo_asr>`_
59 * Natural Language Processing
60 * `Compatible with Hugging Face Transformers and NVIDIA Megatron <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/megatron_finetuning.html>`_
61 * `Neural Machine Translation (NMT) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/machine_translation.html>`_
62 * `Punctuation and Capitalization <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/punctuation_and_capitalization.html>`_
63 * `Token classification (named entity recognition) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/token_classification.html>`_
64 * `Text classification <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/text_classification.html>`_
65 * `Joint Intent and Slot Classification <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/joint_intent_slot.html>`_
66 * `BERT pre-training <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/bert_pretraining.html>`_
67 * `Question answering <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/question_answering.html>`_
68 * `GLUE benchmark <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/glue_benchmark.html>`_
69 * `Information retrieval <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/information_retrieval.html>`_
70 * `Entity Linking <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/entity_linking.html>`_
71 * `Dialogue State Tracking <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/sgd_qa.html>`_
72 * `Neural Duplex Text Normalization <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/text_normalization.html>`_
73 * `Prompt Tuning <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/megatron_finetuning.html#prompt-tuning>`_
74 * `NGC collection of pre-trained NLP models. <https://ngc.nvidia.com/catalog/collections/nvidia:nemo_nlp>`_
75 * `Speech synthesis (TTS) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/tts/intro.html#>`_
76 * Spectrogram generation: Tacotron2, GlowTTS, TalkNet, FastPitch, FastSpeech2, Mixer-TTS, Mixer-TTS-X
77 * Vocoders: WaveGlow, SqueezeWave, UniGlow, MelGAN, HiFiGAN, UnivNet
78 * End-to-end speech generation: FastPitch_HifiGan_E2E, FastSpeech2_HifiGan_E2E
79 * `NGC collection of pre-trained TTS models. <https://ngc.nvidia.com/catalog/collections/nvidia:nemo_tts>`_
80 * `Tools <https://github.com/NVIDIA/NeMo/tree/main/tools>`_
81 * `Text Processing (text normalization and inverse text normalization) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/tools/text_processing_deployment.html>`_
82 * `CTC-Segmentation tool <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/tools/ctc_segmentation.html>`_
83 * `Speech Data Explorer <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/tools/speech_data_explorer.html>`_: a dash-based tool for interactive exploration of ASR/TTS datasets
84
85
86 Built for speed, NeMo can utilize NVIDIA's Tensor Cores and scale out training to multiple GPUs and multiple nodes.
87
88 Requirements
89 ------------
90
91 1) Python 3.6, 3.7 or 3.8
92 2) Pytorch 1.10.0 or above
93 3) NVIDIA GPU for training
94
95 Documentation
96 -------------
97
98 .. |main| image:: https://readthedocs.com/projects/nvidia-nemo/badge/?version=main
99 :alt: Documentation Status
100 :scale: 100%
101 :target: https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/
102
103 .. |stable| image:: https://readthedocs.com/projects/nvidia-nemo/badge/?version=stable
104 :alt: Documentation Status
105 :scale: 100%
106 :target: https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/
107
108 +---------+-------------+------------------------------------------------------------------------------------------------------------------------------------------+
109 | Version | Status | Description |
110 +=========+=============+==========================================================================================================================================+
111 | Latest | |main| | `Documentation of the latest (i.e. main) branch. <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/>`_ |
112 +---------+-------------+------------------------------------------------------------------------------------------------------------------------------------------+
113 | Stable | |stable| | `Documentation of the stable (i.e. most recent release) branch. <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/>`_ |
114 +---------+-------------+------------------------------------------------------------------------------------------------------------------------------------------+
115
116 Tutorials
117 ---------
118 A great way to start with NeMo is by checking `one of our tutorials <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/starthere/tutorials.html>`_.
119
120 Getting help with NeMo
121 ----------------------
122 FAQ can be found on NeMo's `Discussions board <https://github.com/NVIDIA/NeMo/discussions>`_. You are welcome to ask questions or start discussions there.
123
124
125 Installation
126 ------------
127
128 Pip
129 ~~~
130 Use this installation mode if you want the latest released version.
131
132 .. code-block:: bash
133
134 apt-get update && apt-get install -y libsndfile1 ffmpeg
135 pip install Cython
136 pip install nemo_toolkit['all']
137
138 .. note::
139
140 Depending on the shell used, you may need to use ``"nemo_toolkit[all]"`` instead in the above command.
141
142 Pip from source
143 ~~~~~~~~~~~~~~~
 144 Use this installation mode if you want a version from a particular GitHub branch (e.g. main).
145
146 .. code-block:: bash
147
148 apt-get update && apt-get install -y libsndfile1 ffmpeg
149 pip install Cython
150 python -m pip install git+https://github.com/NVIDIA/NeMo.git@{BRANCH}#egg=nemo_toolkit[all]
151
152
153 From source
154 ~~~~~~~~~~~
155 Use this installation mode if you are contributing to NeMo.
156
157 .. code-block:: bash
158
159 apt-get update && apt-get install -y libsndfile1 ffmpeg
160 git clone https://github.com/NVIDIA/NeMo
161 cd NeMo
162 ./reinstall.sh
163
164 .. note::
165
166 If you only want the toolkit without additional conda-based dependencies, you may replace ``reinstall.sh``
167 with ``pip install -e .`` when your PWD is the root of the NeMo repository.
168
169 RNNT
170 ~~~~
171 Note that RNNT requires numba to be installed from conda.
172
173 .. code-block:: bash
174
175 conda remove numba
176 pip uninstall numba
177 conda install -c conda-forge numba
178
179 Megatron GPT
180 ~~~~~~~~~~~~
181 Megatron GPT training requires NVIDIA Apex to be installed.
182
183 .. code-block:: bash
184
185 git clone https://github.com/NVIDIA/apex
186 cd apex
187 git checkout c8bcc98176ad8c3a0717082600c70c907891f9cb
188 pip install -v --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" --global-option="--fast_layer_norm" ./
189
190 Docker containers:
191 ~~~~~~~~~~~~~~~~~~
192 To build a nemo container with Dockerfile from a branch, please run
193
194 .. code-block:: bash
195
196 DOCKER_BUILDKIT=1 docker build -f Dockerfile -t nemo:latest .
197
198
 199 If you choose to work with the main branch, we recommend using NVIDIA's PyTorch container version 22.01-py3 and then installing from GitHub.
200
201 .. code-block:: bash
202
203 docker run --gpus all -it --rm -v <nemo_github_folder>:/NeMo --shm-size=8g \
204 -p 8888:8888 -p 6006:6006 --ulimit memlock=-1 --ulimit \
205 stack=67108864 --device=/dev/snd nvcr.io/nvidia/pytorch:22.01-py3
206
207 Examples
208 --------
209
210 Many examples can be found under `"Examples" <https://github.com/NVIDIA/NeMo/tree/stable/examples>`_ folder.
211
212
213 Contributing
214 ------------
215
 216 We welcome community contributions! Please refer to `CONTRIBUTING.md <https://github.com/NVIDIA/NeMo/blob/stable/CONTRIBUTING.md>`_ for the process.
217
218 Publications
219 ------------
220
 221 We provide an ever-growing list of publications that utilize the NeMo framework. Please refer to `PUBLICATIONS.md <https://github.com/NVIDIA/NeMo/blob/main/PUBLICATIONS.md>`_. We welcome the addition of your own articles to this list!
222
223 Citation
224 --------
225
226 .. code-block:: bash
227
228 @article{kuchaiev2019nemo,
229 title={Nemo: a toolkit for building ai applications using neural modules},
230 author={Kuchaiev, Oleksii and Li, Jason and Nguyen, Huyen and Hrinchuk, Oleksii and Leary, Ryan and Ginsburg, Boris and Kriman, Samuel and Beliaev, Stanislav and Lavrukhin, Vitaly and Cook, Jack and others},
231 journal={arXiv preprint arXiv:1909.09577},
232 year={2019}
233 }
234
235 License
236 -------
237 NeMo is under `Apache 2.0 license <https://github.com/NVIDIA/NeMo/blob/stable/LICENSE>`_.
238
[end of README.rst]
[start of /dev/null]
1
[end of /dev/null]
[start of nemo_text_processing/text_normalization/__init__.py]
1 # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from nemo.utils import logging
16
17 try:
18 import pynini
19
20 PYNINI_AVAILABLE = True
21 except (ModuleNotFoundError, ImportError):
22 logging.warning(
23 "`pynini` is not installed ! \n"
24 "Please run the `nemo_text_processing/setup.sh` script"
25 "prior to usage of this toolkit."
26 )
27
28 PYNINI_AVAILABLE = False
29
[end of nemo_text_processing/text_normalization/__init__.py]
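The availability-flag pattern above (attempt the import inside ``try``, set a module-level boolean, warn on failure) recurs throughout ``nemo_text_processing``. A minimal stdlib sketch of the same idea — ``probe_optional`` is a hypothetical helper name, not NeMo API:

```python
def probe_optional(module_name):
    """Return True if `module_name` imports cleanly, False otherwise,
    mirroring the PYNINI_AVAILABLE guard above."""
    try:
        __import__(module_name)
        return True
    except (ModuleNotFoundError, ImportError):
        return False

# A stdlib module always imports; pynini is only present after
# nemo_text_processing/setup.sh has been run.
UNICODEDATA_AVAILABLE = probe_optional("unicodedata")
PYNINI_AVAILABLE = probe_optional("pynini")
```

Callers can then branch on the flag instead of letting an ``ImportError`` propagate at call time.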
[start of nemo_text_processing/text_normalization/en/graph_utils.py]
1 # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
2 # Copyright 2015 and onwards Google, Inc.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15
16 import os
17 import string
18 from pathlib import Path
19 from typing import Dict
20
21 from nemo_text_processing.text_normalization.en.utils import get_abs_path
22
23 try:
24 import pynini
25 from pynini import Far
26 from pynini.export import export
27 from pynini.examples import plurals
28 from pynini.lib import byte, pynutil, utf8
29
30 NEMO_CHAR = utf8.VALID_UTF8_CHAR
31
32 NEMO_DIGIT = byte.DIGIT
33 NEMO_LOWER = pynini.union(*string.ascii_lowercase).optimize()
34 NEMO_UPPER = pynini.union(*string.ascii_uppercase).optimize()
35 NEMO_ALPHA = pynini.union(NEMO_LOWER, NEMO_UPPER).optimize()
36 NEMO_ALNUM = pynini.union(NEMO_DIGIT, NEMO_ALPHA).optimize()
37 NEMO_HEX = pynini.union(*string.hexdigits).optimize()
38 NEMO_NON_BREAKING_SPACE = u"\u00A0"
39 NEMO_SPACE = " "
40 NEMO_WHITE_SPACE = pynini.union(" ", "\t", "\n", "\r", u"\u00A0").optimize()
41 NEMO_NOT_SPACE = pynini.difference(NEMO_CHAR, NEMO_WHITE_SPACE).optimize()
42 NEMO_NOT_QUOTE = pynini.difference(NEMO_CHAR, r'"').optimize()
43
44 NEMO_PUNCT = pynini.union(*map(pynini.escape, string.punctuation)).optimize()
45 NEMO_GRAPH = pynini.union(NEMO_ALNUM, NEMO_PUNCT).optimize()
46
47 NEMO_SIGMA = pynini.closure(NEMO_CHAR)
48
49 delete_space = pynutil.delete(pynini.closure(NEMO_WHITE_SPACE))
50 insert_space = pynutil.insert(" ")
51 delete_extra_space = pynini.cross(pynini.closure(NEMO_WHITE_SPACE, 1), " ")
52 delete_preserve_order = pynini.closure(
53 pynutil.delete(" preserve_order: true")
54 | (pynutil.delete(" field_order: \"") + NEMO_NOT_QUOTE + pynutil.delete("\""))
55 )
56
57 suppletive = pynini.string_file(get_abs_path("data/suppletive.tsv"))
58 # _v = pynini.union("a", "e", "i", "o", "u")
59 _c = pynini.union(
60 "b", "c", "d", "f", "g", "h", "j", "k", "l", "m", "n", "p", "q", "r", "s", "t", "v", "w", "x", "y", "z"
61 )
62 _ies = NEMO_SIGMA + _c + pynini.cross("y", "ies")
63 _es = NEMO_SIGMA + pynini.union("s", "sh", "ch", "x", "z") + pynutil.insert("es")
64 _s = NEMO_SIGMA + pynutil.insert("s")
65
66 graph_plural = plurals._priority_union(
67 suppletive, plurals._priority_union(_ies, plurals._priority_union(_es, _s, NEMO_SIGMA), NEMO_SIGMA), NEMO_SIGMA
68 ).optimize()
69
70 SINGULAR_TO_PLURAL = graph_plural
71 PLURAL_TO_SINGULAR = pynini.invert(graph_plural)
72 TO_LOWER = pynini.union(*[pynini.cross(x, y) for x, y in zip(string.ascii_uppercase, string.ascii_lowercase)])
73 TO_UPPER = pynini.invert(TO_LOWER)
74
75 PYNINI_AVAILABLE = True
76 except (ModuleNotFoundError, ImportError):
77 # Create placeholders
78 NEMO_CHAR = None
79
80 NEMO_DIGIT = None
81 NEMO_LOWER = None
82 NEMO_UPPER = None
83 NEMO_ALPHA = None
84 NEMO_ALNUM = None
85 NEMO_HEX = None
86 NEMO_NON_BREAKING_SPACE = u"\u00A0"
87 NEMO_SPACE = " "
88 NEMO_WHITE_SPACE = None
89 NEMO_NOT_SPACE = None
90 NEMO_NOT_QUOTE = None
91
92 NEMO_PUNCT = None
93 NEMO_GRAPH = None
94
95 NEMO_SIGMA = None
96
97 delete_space = None
98 insert_space = None
99 delete_extra_space = None
100 delete_preserve_order = None
101
102 suppletive = None
103 # _v = pynini.union("a", "e", "i", "o", "u")
104 _c = None
105 _ies = None
106 _es = None
107 _s = None
108
109 graph_plural = None
110
111 SINGULAR_TO_PLURAL = None
112 PLURAL_TO_SINGULAR = None
113 TO_LOWER = None
114 TO_UPPER = None
115
116 PYNINI_AVAILABLE = False
117
118
119 def generator_main(file_name: str, graphs: Dict[str, 'pynini.FstLike']):
120 """
121 Exports graph as OpenFst finite state archive (FAR) file with given file name and rule name.
122
123 Args:
124 file_name: exported file name
125 graphs: Mapping of a rule name and Pynini WFST graph to be exported
126 """
127 exporter = export.Exporter(file_name)
128 for rule, graph in graphs.items():
129 exporter[rule] = graph.optimize()
130 exporter.close()
131 print(f'Created {file_name}')
132
133
134 def get_plurals(fst):
135 """
136 Given singular returns plurals
137
138 Args:
139 fst: Fst
140
 141     Returns the plural forms of the given singular inputs
142 """
143 return SINGULAR_TO_PLURAL @ fst
144
145
146 def get_singulars(fst):
147 """
148 Given plural returns singulars
149
150 Args:
151 fst: Fst
152
153 Returns singulars to given plural forms
154 """
155 return PLURAL_TO_SINGULAR @ fst
156
157
158 def convert_space(fst) -> 'pynini.FstLike':
159 """
160 Converts space to nonbreaking space.
161 Used only in tagger grammars for transducing token values within quotes, e.g. name: "hello kitty"
 162     This makes the transducer significantly slower, so use it only when there could be spaces within quotes; otherwise leave it out.
163
164 Args:
165 fst: input fst
166
167 Returns output fst where breaking spaces are converted to non breaking spaces
168 """
169 return fst @ pynini.cdrewrite(pynini.cross(NEMO_SPACE, NEMO_NON_BREAKING_SPACE), "", "", NEMO_SIGMA)
170
171
172 class GraphFst:
173 """
174 Base class for all grammar fsts.
175
176 Args:
177 name: name of grammar class
178 kind: either 'classify' or 'verbalize'
 179         deterministic: if True, provides a single transduction option;
 180             if False, multiple transductions are generated (used for audio-based normalization)
181 """
182
183 def __init__(self, name: str, kind: str, deterministic: bool = True):
184 self.name = name
 185         self.kind = kind
186 self._fst = None
187 self.deterministic = deterministic
188
189 self.far_path = Path(os.path.dirname(__file__) + '/grammars/' + kind + '/' + name + '.far')
190 if self.far_exist():
191 self._fst = Far(self.far_path, mode="r", arc_type="standard", far_type="default").get_fst()
192
193 def far_exist(self) -> bool:
194 """
195 Returns true if FAR can be loaded
196 """
197 return self.far_path.exists()
198
199 @property
200 def fst(self) -> 'pynini.FstLike':
201 return self._fst
202
203 @fst.setter
204 def fst(self, fst):
205 self._fst = fst
206
207 def add_tokens(self, fst) -> 'pynini.FstLike':
208 """
209 Wraps class name around to given fst
210
211 Args:
212 fst: input fst
213
214 Returns:
215 Fst: fst
216 """
217 return pynutil.insert(f"{self.name} {{ ") + fst + pynutil.insert(" }")
218
219 def delete_tokens(self, fst) -> 'pynini.FstLike':
220 """
221 Deletes class name wrap around output of given fst
222
223 Args:
224 fst: input fst
225
226 Returns:
227 Fst: fst
228 """
229 res = (
230 pynutil.delete(f"{self.name}")
231 + delete_space
232 + pynutil.delete("{")
233 + delete_space
234 + fst
235 + delete_space
236 + pynutil.delete("}")
237 )
238 return res @ pynini.cdrewrite(pynini.cross(u"\u00A0", " "), "", "", NEMO_SIGMA)
239
[end of nemo_text_processing/text_normalization/en/graph_utils.py]
[start of nemo_text_processing/text_normalization/en/taggers/punctuation.py]
1 # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
2 # Copyright 2015 and onwards Google, Inc.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15
16 import sys
17 from unicodedata import category
18
19 from nemo_text_processing.text_normalization.en.graph_utils import GraphFst
20
21 try:
22 import pynini
23 from pynini.lib import pynutil
24
 25     PYNINI_AVAILABLE = True
26 except (ModuleNotFoundError, ImportError):
27 PYNINI_AVAILABLE = False
28
29
30 class PunctuationFst(GraphFst):
31 """
32 Finite state transducer for classifying punctuation
33 e.g. a, -> tokens { name: "a" } tokens { name: "," }
34
35 Args:
 36         deterministic: if True, provides a single transduction option;
 37             if False, multiple transductions are generated (used for audio-based normalization)
38
39 """
40
41 def __init__(self, deterministic: bool = True):
42 super().__init__(name="punctuation", kind="classify", deterministic=deterministic)
43
44 s = "!#%&\'()*+,-./:;<=>?@^_`{|}~\""
45
46 punct_unicode = [chr(i) for i in range(sys.maxunicode) if category(chr(i)).startswith("P")]
47 punct_unicode.remove('[')
48 punct_unicode.remove(']')
49 punct = pynini.union(*s) | pynini.union(*punct_unicode)
50
51 self.graph = punct
52 self.fst = (pynutil.insert("name: \"") + self.graph + pynutil.insert("\"")).optimize()
53
[end of nemo_text_processing/text_normalization/en/taggers/punctuation.py]
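``PunctuationFst`` builds its symbol inventory by scanning Unicode general categories: every codepoint whose category starts with ``"P"`` counts as punctuation, with the square brackets removed because the tagger reserves them for its own markup. The same scan can be reproduced with the stdlib alone (restricted to the Basic Multilingual Plane here to keep the sketch fast):

```python
from unicodedata import category

# Same selection as the tagger: Unicode categories Pc, Pd, Pe, Pf, Pi, Po, Ps.
punct_unicode = [chr(i) for i in range(0x10000) if category(chr(i)).startswith("P")]

# Brackets are excluded because the tagger uses them in its own markup.
punct_unicode.remove('[')
punct_unicode.remove(']')
```

This is why non-ASCII marks such as inverted question marks are classified alongside the explicit ASCII set ``s`` in the constructor.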
[start of nemo_text_processing/text_normalization/en/verbalizers/whitelist.py]
1 # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
2 # Copyright 2015 and onwards Google, Inc.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15
16
17 from nemo_text_processing.text_normalization.en.graph_utils import NEMO_CHAR, NEMO_SIGMA, GraphFst, delete_space
18
19 try:
20 import pynini
21 from pynini.lib import pynutil
22
23 PYNINI_AVAILABLE = True
24 except (ModuleNotFoundError, ImportError):
25 PYNINI_AVAILABLE = False
26
27
28 class WhiteListFst(GraphFst):
29 """
30 Finite state transducer for verbalizing whitelist
31 e.g. tokens { name: "misses" } } -> misses
32
33 Args:
 34         deterministic: if True, provides a single transduction option;
 35             if False, multiple transductions are generated (used for audio-based normalization)
36 """
37
38 def __init__(self, deterministic: bool = True):
39 super().__init__(name="whitelist", kind="verbalize", deterministic=deterministic)
40 graph = (
41 pynutil.delete("name:")
42 + delete_space
43 + pynutil.delete("\"")
44 + pynini.closure(NEMO_CHAR - " ", 1)
45 + pynutil.delete("\"")
46 )
47 graph = graph @ pynini.cdrewrite(pynini.cross(u"\u00A0", " "), "", "", NEMO_SIGMA)
48 self.fst = graph.optimize()
49
[end of nemo_text_processing/text_normalization/en/verbalizers/whitelist.py]
[start of nemo_text_processing/text_normalization/en/verbalizers/word.py]
1 # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
2 # Copyright 2015 and onwards Google, Inc.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15
16 from nemo_text_processing.text_normalization.en.graph_utils import NEMO_CHAR, NEMO_SIGMA, GraphFst, delete_space
17
18 try:
19 import pynini
20 from pynini.lib import pynutil
21
22 PYNINI_AVAILABLE = True
23 except (ModuleNotFoundError, ImportError):
24 PYNINI_AVAILABLE = False
25
26
27 class WordFst(GraphFst):
28 """
29 Finite state transducer for verbalizing word
30 e.g. tokens { name: "sleep" } -> sleep
31
32 Args:
33         deterministic: if True, provides a single transduction option;
34             if False, multiple transduction options are generated (used for audio-based normalization)
35 """
36
37 def __init__(self, deterministic: bool = True):
38 super().__init__(name="word", kind="verbalize", deterministic=deterministic)
39 chars = pynini.closure(NEMO_CHAR - " ", 1)
40 char = pynutil.delete("name:") + delete_space + pynutil.delete("\"") + chars + pynutil.delete("\"")
41 graph = char @ pynini.cdrewrite(pynini.cross(u"\u00A0", " "), "", "", NEMO_SIGMA)
42
43 self.fst = graph.optimize()
44
[end of nemo_text_processing/text_normalization/en/verbalizers/word.py]
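Both `WordFst` and `WhiteListFst` above perform the same string-level edit: strip the `name: "..."` wrapper and quotes, then rewrite non-breaking spaces to regular spaces. A pynini-free sketch of that behavior for illustration (the regex-based `verbalize_word` helper is hypothetical and not part of the codebase):

```python
import re

def verbalize_word(token: str) -> str:
    """Simulate WordFst/WhiteListFst at the string level: drop the
    `name:` wrapper and quotes, then map non-breaking spaces to spaces."""
    match = re.fullmatch(r'name:\s*"([^"]+)"', token.strip())
    if match is None:
        raise ValueError(f"not a word token: {token!r}")
    # mirrors pynini.cdrewrite(pynini.cross(u"\u00A0", " "), "", "", NEMO_SIGMA)
    return match.group(1).replace("\u00A0", " ")

print(verbalize_word('name: "sleep"'))        # -> sleep
print(verbalize_word('name: "mrs.\u00A0x"'))  # -> mrs. x
```

Note that the regex is looser than the FSTs: the grammars accept `NEMO_CHAR - " "` inside the quotes (no plain spaces), while this sketch accepts any non-quote character.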
[start of nemo_text_processing/text_normalization/normalize.py]
1 # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import itertools
16 import os
17 import re
18 from argparse import ArgumentParser
19 from collections import OrderedDict
20 from math import factorial
21 from typing import Dict, List, Union
22
23 from nemo_text_processing.text_normalization.data_loader_utils import get_installation_msg, pre_process
24 from nemo_text_processing.text_normalization.token_parser import PRESERVE_ORDER_KEY, TokenParser
25 from tqdm import tqdm
26
27 try:
28 import pynini
29
30 PYNINI_AVAILABLE = True
31
32 except (ModuleNotFoundError, ImportError):
33 PYNINI_AVAILABLE = False
34
35 try:
36 from nemo.collections.common.tokenizers.moses_tokenizers import MosesProcessor
37 from nemo.collections.nlp.data.text_normalization.utils import post_process_punct
38
39 NLP_AVAILABLE = True
40 except (ModuleNotFoundError, ImportError):
41 NLP_AVAILABLE = False
42
43
44 SPACE_DUP = re.compile(' {2,}')
45
46
47 class Normalizer:
48 """
49 Normalizer class that converts text from written to spoken form.
50 Useful for TTS preprocessing.
51
52 Args:
53 input_case: expected input capitalization
54 lang: language specifying the TN rules, by default: English
55 cache_dir: path to a dir with .far grammar file. Set to None to avoid using cache.
56 overwrite_cache: set to True to overwrite .far files
57 whitelist: path to a file with whitelist replacements
58 """
59
60 def __init__(
61 self,
62 input_case: str,
63 lang: str = 'en',
64 deterministic: bool = True,
65 cache_dir: str = None,
66 overwrite_cache: bool = False,
67 whitelist: str = None,
68 ):
69 assert input_case in ["lower_cased", "cased"]
70
71 if not PYNINI_AVAILABLE:
72 raise ImportError(get_installation_msg())
73
74 if lang == 'en' and deterministic:
75 from nemo_text_processing.text_normalization.en.taggers.tokenize_and_classify import ClassifyFst
76 from nemo_text_processing.text_normalization.en.verbalizers.verbalize_final import VerbalizeFinalFst
77 elif lang == 'en' and not deterministic:
78 from nemo_text_processing.text_normalization.en.taggers.tokenize_and_classify_with_audio import ClassifyFst
79 from nemo_text_processing.text_normalization.en.verbalizers.verbalize_final import VerbalizeFinalFst
80 elif lang == 'ru':
81             # Ru TN only supports non-deterministic cases and produces multiple normalization options
82 # use normalize_with_audio.py
83 from nemo_text_processing.text_normalization.ru.taggers.tokenize_and_classify import ClassifyFst
84 from nemo_text_processing.text_normalization.ru.verbalizers.verbalize_final import VerbalizeFinalFst
85 elif lang == 'de':
86             # De TN only supports non-deterministic cases and produces multiple normalization options
87 # use normalize_with_audio.py
88 from nemo_text_processing.text_normalization.de.taggers.tokenize_and_classify import ClassifyFst
89 from nemo_text_processing.text_normalization.de.verbalizers.verbalize_final import VerbalizeFinalFst
90 self.tagger = ClassifyFst(
91 input_case=input_case,
92 deterministic=deterministic,
93 cache_dir=cache_dir,
94 overwrite_cache=overwrite_cache,
95 whitelist=whitelist,
96 )
97 self.verbalizer = VerbalizeFinalFst(deterministic=deterministic)
98 self.parser = TokenParser()
99 self.lang = lang
100
101 if NLP_AVAILABLE:
102 self.processor = MosesProcessor(lang_id=lang)
103 else:
104 self.processor = None
105 print("NeMo NLP is not available. Moses de-tokenization will be skipped.")
106
107 def normalize_list(self, texts: List[str], verbose=False, punct_post_process: bool = False) -> List[str]:
108 """
109 NeMo text normalizer
110
111 Args:
112 texts: list of input strings
113 verbose: whether to print intermediate meta information
114
115         Returns: converted list of input strings
116 """
117 res = []
118         for input_text in tqdm(texts):
119             try:
120                 text = self.normalize(input_text, verbose=verbose, punct_post_process=punct_post_process)
121             except Exception:
122                 print(input_text)
123                 raise
124 res.append(text)
125 return res
126
127 def _estimate_number_of_permutations_in_nested_dict(
128 self, token_group: Dict[str, Union[OrderedDict, str, bool]]
129 ) -> int:
130 num_perms = 1
131         for inner in token_group.values():
132 if isinstance(inner, dict):
133 num_perms *= self._estimate_number_of_permutations_in_nested_dict(inner)
134 num_perms *= factorial(len(token_group))
135 return num_perms
136
137 def _split_tokens_to_reduce_number_of_permutations(
138 self, tokens: List[dict], max_number_of_permutations_per_split: int = 729
139 ) -> List[List[dict]]:
140 """
141 Splits a sequence of tokens in a smaller sequences of tokens in a way that maximum number of composite
142 tokens permutations does not exceed ``max_number_of_permutations_per_split``.
143
144 For example,
145
146 .. code-block:: python
147 tokens = [
148 {"tokens": {"date": {"year": "twenty eighteen", "month": "december", "day": "thirty one"}}},
149 {"tokens": {"date": {"year": "twenty eighteen", "month": "january", "day": "eight"}}},
150 ]
151 split = normalizer._split_tokens_to_reduce_number_of_permutations(
152 tokens, max_number_of_permutations_per_split=6
153 )
154 assert split == [
155 [{"tokens": {"date": {"year": "twenty eighteen", "month": "december", "day": "thirty one"}}}],
156 [{"tokens": {"date": {"year": "twenty eighteen", "month": "january", "day": "eight"}}}],
157 ]
158
159 Date tokens contain 3 items each which gives 6 permutations for every date. Since there are 2 dates, total
160 number of permutations would be ``6 * 6 == 36``. Parameter ``max_number_of_permutations_per_split`` equals 6,
161 so input sequence of tokens is split into 2 smaller sequences.
162
163 Args:
164 tokens (:obj:`List[dict]`): a list of dictionaries, possibly nested.
165             max_number_of_permutations_per_split (:obj:`int`, `optional`, defaults to :obj:`729`): a maximum number
166 of permutations which can be generated from input sequence of tokens.
167
168 Returns:
169 :obj:`List[List[dict]]`: a list of smaller sequences of tokens resulting from ``tokens`` split.
170 """
171 splits = []
172 prev_end_of_split = 0
173 current_number_of_permutations = 1
174 for i, token_group in enumerate(tokens):
175 n = self._estimate_number_of_permutations_in_nested_dict(token_group)
176 if n * current_number_of_permutations > max_number_of_permutations_per_split:
177 splits.append(tokens[prev_end_of_split:i])
178 prev_end_of_split = i
179 current_number_of_permutations = 1
180 if n > max_number_of_permutations_per_split:
181 raise ValueError(
182 f"Could not split token list with respect to condition that every split can generate number of "
183 f"permutations less or equal to "
184 f"`max_number_of_permutations_per_split={max_number_of_permutations_per_split}`. "
185 f"There is an unsplittable token group that generates more than "
186 f"{max_number_of_permutations_per_split} permutations. Try to increase "
187 f"`max_number_of_permutations_per_split` parameter."
188 )
189 current_number_of_permutations *= n
190 splits.append(tokens[prev_end_of_split:])
191 assert sum([len(s) for s in splits]) == len(tokens)
192 return splits
193
194 def normalize(
195 self, text: str, verbose: bool = False, punct_pre_process: bool = False, punct_post_process: bool = False
196 ) -> str:
197 """
198 Main function. Normalizes tokens from written to spoken form
199 e.g. 12 kg -> twelve kilograms
200
201 Args:
202 text: string that may include semiotic classes
203 verbose: whether to print intermediate meta information
204 punct_pre_process: whether to perform punctuation pre-processing, for example, [25] -> [ 25 ]
205 punct_post_process: whether to normalize punctuation
206
207 Returns: spoken form
208 """
209 original_text = text
210 if punct_pre_process:
211 text = pre_process(text)
212 text = text.strip()
213 if not text:
214 if verbose:
215 print(text)
216 return text
217 text = pynini.escape(text)
218 tagged_lattice = self.find_tags(text)
219 tagged_text = self.select_tag(tagged_lattice)
220 if verbose:
221 print(tagged_text)
222 self.parser(tagged_text)
223 tokens = self.parser.parse()
224 split_tokens = self._split_tokens_to_reduce_number_of_permutations(tokens)
225 output = ""
226 for s in split_tokens:
227 tags_reordered = self.generate_permutations(s)
228 verbalizer_lattice = None
229 for tagged_text in tags_reordered:
230 tagged_text = pynini.escape(tagged_text)
231
232 verbalizer_lattice = self.find_verbalizer(tagged_text)
233 if verbalizer_lattice.num_states() != 0:
234 break
235 if verbalizer_lattice is None:
236 raise ValueError(f"No permutations were generated from tokens {s}")
237 output += ' ' + self.select_verbalizer(verbalizer_lattice)
238 output = SPACE_DUP.sub(' ', output[1:])
239 if punct_post_process:
240 # do post-processing based on Moses detokenizer
241 if self.processor:
242 output = self.processor.moses_detokenizer.detokenize([output], unescape=False)
243 output = post_process_punct(input=original_text, normalized_text=output)
244 else:
245 print("NEMO_NLP collection is not available: skipping punctuation post_processing")
246 return output
247
248 def _permute(self, d: OrderedDict) -> List[str]:
249 """
250 Creates reorderings of dictionary elements and serializes as strings
251
252 Args:
253 d: (nested) dictionary of key value pairs
254
255 Return permutations of different string serializations of key value pairs
256 """
257 l = []
258 if PRESERVE_ORDER_KEY in d.keys():
259 d_permutations = [d.items()]
260 else:
261 d_permutations = itertools.permutations(d.items())
262 for perm in d_permutations:
263 subl = [""]
264 for k, v in perm:
265 if isinstance(v, str):
266 subl = ["".join(x) for x in itertools.product(subl, [f"{k}: \"{v}\" "])]
267 elif isinstance(v, OrderedDict):
268 rec = self._permute(v)
269 subl = ["".join(x) for x in itertools.product(subl, [f" {k} {{ "], rec, [f" }} "])]
270 elif isinstance(v, bool):
271 subl = ["".join(x) for x in itertools.product(subl, [f"{k}: true "])]
272 else:
273                     raise ValueError(f"Unsupported token value type: {type(v)}")
274 l.extend(subl)
275 return l
276
277 def generate_permutations(self, tokens: List[dict]):
278 """
279 Generates permutations of string serializations of list of dictionaries
280
281 Args:
282 tokens: list of dictionaries
283
284 Returns string serialization of list of dictionaries
285 """
286
287 def _helper(prefix: str, tokens: List[dict], idx: int):
288 """
289 Generates permutations of string serializations of given dictionary
290
291 Args:
292 tokens: list of dictionaries
293 prefix: prefix string
294 idx: index of next dictionary
295
296 Returns string serialization of dictionary
297 """
298 if idx == len(tokens):
299 yield prefix
300 return
301 token_options = self._permute(tokens[idx])
302 for token_option in token_options:
303 yield from _helper(prefix + token_option, tokens, idx + 1)
304
305 return _helper("", tokens, 0)
306
307 def find_tags(self, text: str) -> 'pynini.FstLike':
308 """
309 Given text use tagger Fst to tag text
310
311 Args:
312 text: sentence
313
314 Returns: tagged lattice
315 """
316 lattice = text @ self.tagger.fst
317 return lattice
318
319 def select_tag(self, lattice: 'pynini.FstLike') -> str:
320 """
321 Given tagged lattice return shortest path
322
323 Args:
324             lattice: tagged lattice
325
326 Returns: shortest path
327 """
328 tagged_text = pynini.shortestpath(lattice, nshortest=1, unique=True).string()
329 return tagged_text
330
331 def find_verbalizer(self, tagged_text: str) -> 'pynini.FstLike':
332 """
333 Given tagged text creates verbalization lattice
334 This is context-independent.
335
336 Args:
337 tagged_text: input text
338
339 Returns: verbalized lattice
340 """
341 lattice = tagged_text @ self.verbalizer.fst
342 return lattice
343
344 def select_verbalizer(self, lattice: 'pynini.FstLike') -> str:
345 """
346 Given verbalized lattice return shortest path
347
348 Args:
349 lattice: verbalization lattice
350
351 Returns: shortest path
352 """
353 output = pynini.shortestpath(lattice, nshortest=1, unique=True).string()
354 return output
355
356
357 def parse_args():
358 parser = ArgumentParser()
359 parser.add_argument("input_string", help="input string", type=str)
360 parser.add_argument("--language", help="language", choices=["en", "de"], default="en", type=str)
361 parser.add_argument(
362 "--input_case", help="input capitalization", choices=["lower_cased", "cased"], default="cased", type=str
363 )
364 parser.add_argument("--verbose", help="print info for debugging", action='store_true')
365 parser.add_argument(
366 "--punct_post_process", help="set to True to enable punctuation post processing", action="store_true"
367 )
368 parser.add_argument(
369 "--punct_pre_process", help="set to True to enable punctuation pre processing", action="store_true"
370 )
371 parser.add_argument("--overwrite_cache", help="set to True to re-create .far grammar files", action="store_true")
372     parser.add_argument("--whitelist", help="path to a file with whitelist replacements", default=None, type=str)
373 parser.add_argument(
374 "--cache_dir",
375 help="path to a dir with .far grammar file. Set to None to avoid using cache",
376 default=None,
377 type=str,
378 )
379 return parser.parse_args()
380
381
382 if __name__ == "__main__":
383 args = parse_args()
384 whitelist = os.path.abspath(args.whitelist) if args.whitelist else None
385 normalizer = Normalizer(
386 input_case=args.input_case,
387 cache_dir=args.cache_dir,
388 overwrite_cache=args.overwrite_cache,
389 whitelist=whitelist,
390 lang=args.language,
391 )
392 print(
393 normalizer.normalize(
394 args.input_string,
395 verbose=args.verbose,
396 punct_pre_process=args.punct_pre_process,
397 punct_post_process=args.punct_post_process,
398 )
399 )
400
[end of nemo_text_processing/text_normalization/normalize.py]
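The permutation-splitting step in `Normalizer` above can be exercised standalone. A minimal, standard-library-only sketch of `_estimate_number_of_permutations_in_nested_dict` and `_split_tokens_to_reduce_number_of_permutations` (the shortened function names here are assumptions for illustration):

```python
from math import factorial
from typing import Dict, List

def estimate_permutations(token_group: Dict) -> int:
    # mirrors Normalizer._estimate_number_of_permutations_in_nested_dict:
    # nested dicts multiply, and every dict level contributes len! orderings
    num_perms = 1
    for inner in token_group.values():
        if isinstance(inner, dict):
            num_perms *= estimate_permutations(inner)
    return num_perms * factorial(len(token_group))

def split_tokens(tokens: List[dict], max_perms: int = 729) -> List[List[dict]]:
    # mirrors Normalizer._split_tokens_to_reduce_number_of_permutations:
    # greedily accumulate token groups until the permutation budget is exceeded
    splits, start, current = [], 0, 1
    for i, group in enumerate(tokens):
        n = estimate_permutations(group)
        if n * current > max_perms:
            splits.append(tokens[start:i])
            start, current = i, 1
            if n > max_perms:
                raise ValueError("unsplittable token group exceeds max_perms")
        current *= n
    splits.append(tokens[start:])
    return splits

dates = [
    {"tokens": {"date": {"year": "twenty eighteen", "month": "december", "day": "thirty one"}}},
    {"tokens": {"date": {"year": "twenty eighteen", "month": "january", "day": "eight"}}},
]
print(estimate_permutations(dates[0]))   # -> 6
print(split_tokens(dates, max_perms=6))  # two splits of one token group each
```

Each date dict has three inner items (3! = 6 orderings), so two dates together would generate 36 permutations; with a budget of 6 per split, the sequence is cut into two singleton splits, matching the docstring example.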
[start of nemo_text_processing/text_normalization/normalize_with_audio.py]
1 # Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import json
16 import os
17 import time
18 from argparse import ArgumentParser
19 from glob import glob
20 from typing import List, Tuple
21
22 from joblib import Parallel, delayed
23 from nemo_text_processing.text_normalization.normalize import Normalizer
24 from tqdm import tqdm
25
26 try:
27 from nemo.collections.asr.metrics.wer import word_error_rate
28 from nemo.collections.asr.models import ASRModel
29
30 ASR_AVAILABLE = True
31 except (ModuleNotFoundError, ImportError):
32 ASR_AVAILABLE = False
33
34 try:
35 import pynini
36 from pynini.lib import rewrite
37
38 PYNINI_AVAILABLE = True
39 except (ModuleNotFoundError, ImportError):
40 PYNINI_AVAILABLE = False
41
42 try:
43 from nemo.collections.nlp.data.text_normalization.utils import post_process_punct
44 from nemo_text_processing.text_normalization.data_loader_utils import pre_process
45
46 NLP_AVAILABLE = True
47 except (ModuleNotFoundError, ImportError):
48 NLP_AVAILABLE = False
49
50 """
51 The script provides multiple normalization options and chooses the best one that minimizes CER of the ASR output
52 (most of the semiotic classes use deterministic=False flag).
53
54 To run this script with a .json manifest file, the manifest file should contain the following fields:
55 "audio_data" - path to the audio file
56 "text" - raw text
57 "pred_text" - ASR model prediction
58
59 See https://github.com/NVIDIA/NeMo/blob/main/examples/asr/transcribe_speech.py on how to add ASR predictions
60
61 When the manifest is ready, run:
62 python normalize_with_audio.py \
63 --audio_data PATH/TO/MANIFEST.JSON \
64 --language en
65
66
67 To run with a single audio file, specify path to audio and text with:
68 python normalize_with_audio.py \
69 --audio_data PATH/TO/AUDIO.WAV \
70 --language en \
71 --text raw text OR PATH/TO/.TXT/FILE
72 --model QuartzNet15x5Base-En \
73 --verbose
74
75 To see possible normalization options for a text input without an audio file (could be used for debugging), run:
76     python normalize_with_audio.py --text "RAW TEXT"
77
78 Specify `--cache_dir` to generate .far grammars once and re-use them for faster inference
79 """
80
81
82 class NormalizerWithAudio(Normalizer):
83 """
84 Normalizer class that converts text from written to spoken form.
85 Useful for TTS preprocessing.
86
87 Args:
88 input_case: expected input capitalization
89 lang: language
90 cache_dir: path to a dir with .far grammar file. Set to None to avoid using cache.
91 overwrite_cache: set to True to overwrite .far files
92 whitelist: path to a file with whitelist replacements
93 """
94
95 def __init__(
96 self,
97 input_case: str,
98 lang: str = 'en',
99 cache_dir: str = None,
100 overwrite_cache: bool = False,
101 whitelist: str = None,
102 ):
103
104 super().__init__(
105 input_case=input_case,
106 lang=lang,
107 deterministic=False,
108 cache_dir=cache_dir,
109 overwrite_cache=overwrite_cache,
110 whitelist=whitelist,
111 )
112
113     def normalize(self, text: str, n_tagged: int, punct_post_process: bool = True, verbose: bool = False) -> set:
114 """
115 Main function. Normalizes tokens from written to spoken form
116 e.g. 12 kg -> twelve kilograms
117
118 Args:
119 text: string that may include semiotic classes
120 n_tagged: number of tagged options to consider, -1 - to get all possible tagged options
121 punct_post_process: whether to normalize punctuation
122 verbose: whether to print intermediate meta information
123
124 Returns:
125 normalized text options (usually there are multiple ways of normalizing a given semiotic class)
126 """
127 original_text = text
128
129 if self.lang == "en":
130 text = pre_process(text)
131 text = text.strip()
132 if not text:
133 if verbose:
134 print(text)
135 return text
136 text = pynini.escape(text)
137
138 if n_tagged == -1:
139 if self.lang == "en":
140 try:
141 tagged_texts = rewrite.rewrites(text, self.tagger.fst_no_digits)
142 except pynini.lib.rewrite.Error:
143 tagged_texts = rewrite.rewrites(text, self.tagger.fst)
144 else:
145 tagged_texts = rewrite.rewrites(text, self.tagger.fst)
146 else:
147 if self.lang == "en":
148 try:
149 tagged_texts = rewrite.top_rewrites(text, self.tagger.fst_no_digits, nshortest=n_tagged)
150 except pynini.lib.rewrite.Error:
151 tagged_texts = rewrite.top_rewrites(text, self.tagger.fst, nshortest=n_tagged)
152 else:
153 tagged_texts = rewrite.top_rewrites(text, self.tagger.fst, nshortest=n_tagged)
154
155 # non-deterministic Eng normalization uses tagger composed with verbalizer, no permutation in between
156 if self.lang == "en":
157 normalized_texts = tagged_texts
158 else:
159 normalized_texts = []
160 for tagged_text in tagged_texts:
161 self._verbalize(tagged_text, normalized_texts, verbose=verbose)
162
163 if len(normalized_texts) == 0:
164             raise ValueError(f"No normalization options were generated for: {text}")
165
166 if punct_post_process:
167 # do post-processing based on Moses detokenizer
168 if self.processor:
169 normalized_texts = [self.processor.detokenize([t]) for t in normalized_texts]
170 normalized_texts = [
171 post_process_punct(input=original_text, normalized_text=t) for t in normalized_texts
172 ]
173
174 normalized_texts = set(normalized_texts)
175 return normalized_texts
176
177 def _verbalize(self, tagged_text: str, normalized_texts: List[str], verbose: bool = False):
178 """
179 Verbalizes tagged text
180
181 Args:
182 tagged_text: text with tags
183 normalized_texts: list of possible normalization options
184 verbose: if true prints intermediate classification results
185 """
186
187 def get_verbalized_text(tagged_text):
188 return rewrite.rewrites(tagged_text, self.verbalizer.fst)
189
190 self.parser(tagged_text)
191 tokens = self.parser.parse()
192 tags_reordered = self.generate_permutations(tokens)
193 for tagged_text_reordered in tags_reordered:
194 try:
195 tagged_text_reordered = pynini.escape(tagged_text_reordered)
196 normalized_texts.extend(get_verbalized_text(tagged_text_reordered))
197 if verbose:
198 print(tagged_text_reordered)
199
200 except pynini.lib.rewrite.Error:
201 continue
202
203 def select_best_match(
204 self,
205 normalized_texts: List[str],
206 input_text: str,
207 pred_text: str,
208 verbose: bool = False,
209 remove_punct: bool = False,
210 ):
211 """
212 Selects the best normalization option based on the lowest CER
213
214 Args:
215 normalized_texts: normalized text options
216 input_text: input text
217 pred_text: ASR model transcript of the audio file corresponding to the normalized text
218 verbose: whether to print intermediate meta information
219 remove_punct: whether to remove punctuation before calculating CER
220
221 Returns:
222 normalized text with the lowest CER and CER value
223 """
224 if pred_text == "":
225 return input_text, 1000
226
227 normalized_texts_cer = calculate_cer(normalized_texts, pred_text, remove_punct)
228 normalized_texts_cer = sorted(normalized_texts_cer, key=lambda x: x[1])
229 normalized_text, cer = normalized_texts_cer[0]
230
231 if verbose:
232 print('-' * 30)
233 for option in normalized_texts:
234 print(option)
235 print('-' * 30)
236 return normalized_text, cer
237
238
239 def calculate_cer(normalized_texts: List[str], pred_text: str, remove_punct=False) -> List[Tuple[str, float]]:
240 """
241 Calculates character error rate (CER)
242
243 Args:
244 normalized_texts: normalized text options
245 pred_text: ASR model output
246
247 Returns: normalized options with corresponding CER
248 """
249 normalized_options = []
250 for text in normalized_texts:
251 text_clean = text.replace('-', ' ').lower()
252 if remove_punct:
253 for punct in "!?:;,.-()*+-/<=>@^_":
254 text_clean = text_clean.replace(punct, "")
255 cer = round(word_error_rate([pred_text], [text_clean], use_cer=True) * 100, 2)
256 normalized_options.append((text, cer))
257 return normalized_options
258
259
260 def get_asr_model(asr_model):
261 """
262 Returns ASR Model
263
264 Args:
265         asr_model: pre-trained NeMo ASR model name or path to a model checkpoint
266     """
267     if os.path.exists(asr_model):
268         asr_model = ASRModel.restore_from(asr_model)
269     elif asr_model in ASRModel.get_available_model_names():
270 asr_model = ASRModel.from_pretrained(asr_model)
271 else:
272 raise ValueError(
273 f'Provide path to the pretrained checkpoint or choose from {ASRModel.get_available_model_names()}'
274 )
275 return asr_model
276
277
278 def parse_args():
279 parser = ArgumentParser()
280 parser.add_argument("--text", help="input string or path to a .txt file", default=None, type=str)
281 parser.add_argument(
282 "--input_case", help="input capitalization", choices=["lower_cased", "cased"], default="cased", type=str
283 )
284 parser.add_argument(
285 "--language", help="Select target language", choices=["en", "ru", "de"], default="en", type=str
286 )
287 parser.add_argument("--audio_data", default=None, help="path to an audio file or .json manifest")
288 parser.add_argument(
289 '--model', type=str, default='QuartzNet15x5Base-En', help='Pre-trained model name or path to model checkpoint'
290 )
291 parser.add_argument(
292 "--n_tagged",
293 type=int,
294 default=30,
295 help="number of tagged options to consider, -1 - return all possible tagged options",
296 )
297 parser.add_argument("--verbose", help="print info for debugging", action="store_true")
298 parser.add_argument(
299 "--no_remove_punct_for_cer",
300 help="Set to True to NOT remove punctuation before calculating CER",
301 action="store_true",
302 )
303 parser.add_argument(
304 "--no_punct_post_process", help="set to True to disable punctuation post processing", action="store_true"
305 )
306 parser.add_argument("--overwrite_cache", help="set to True to re-create .far grammar files", action="store_true")
307     parser.add_argument("--whitelist", help="path to a file with whitelist replacements", default=None, type=str)
308 parser.add_argument(
309 "--cache_dir",
310 help="path to a dir with .far grammar file. Set to None to avoid using cache",
311 default=None,
312 type=str,
313 )
314 parser.add_argument("--n_jobs", default=-2, type=int, help="The maximum number of concurrently running jobs")
315 parser.add_argument("--batch_size", default=200, type=int, help="Number of examples for each process")
316 return parser.parse_args()
317
318
319 def _normalize_line(normalizer: NormalizerWithAudio, n_tagged, verbose, line: str, remove_punct, punct_post_process):
320 line = json.loads(line)
321 pred_text = line["pred_text"]
322
323 normalized_texts = normalizer.normalize(
324 text=line["text"], verbose=verbose, n_tagged=n_tagged, punct_post_process=punct_post_process,
325 )
326
327 normalized_text, cer = normalizer.select_best_match(
328 normalized_texts=normalized_texts,
329 input_text=line["text"],
330 pred_text=pred_text,
331 verbose=verbose,
332 remove_punct=remove_punct,
333 )
334 line["nemo_normalized"] = normalized_text
335 line["CER_nemo_normalized"] = cer
336 return line
337
338
339 def normalize_manifest(
340 normalizer,
341 audio_data: str,
342 n_jobs: int,
343 n_tagged: int,
344 remove_punct: bool,
345 punct_post_process: bool,
346 batch_size: int,
347 ):
348 """
349 Args:
350         audio_data: path to .json manifest file.
351 """
352
353 def __process_batch(batch_idx, batch, dir_name):
354 normalized_lines = [
355 _normalize_line(
356 normalizer,
357 n_tagged,
358 verbose=False,
359 line=line,
360 remove_punct=remove_punct,
361 punct_post_process=punct_post_process,
362 )
363 for line in tqdm(batch)
364 ]
365
366 with open(f"{dir_name}/{batch_idx}.json", "w") as f_out:
367 for line in normalized_lines:
368 f_out.write(json.dumps(line, ensure_ascii=False) + '\n')
369
370 print(f"Batch -- {batch_idx} -- is complete")
371 return normalized_lines
372
373 manifest_out = audio_data.replace('.json', '_normalized.json')
374 with open(audio_data, 'r') as f:
375 lines = f.readlines()
376
377 print(f'Normalizing {len(lines)} lines of {audio_data}...')
378
379 # to save intermediate results to a file
380 batch = min(len(lines), batch_size)
381
382 tmp_dir = manifest_out.replace(".json", "_parts")
383 os.makedirs(tmp_dir, exist_ok=True)
384
385 Parallel(n_jobs=n_jobs)(
386 delayed(__process_batch)(idx, lines[i : i + batch], tmp_dir)
387 for idx, i in enumerate(range(0, len(lines), batch))
388 )
389
390 # aggregate all intermediate files
391 with open(manifest_out, "w") as f_out:
392 for batch_f in sorted(glob(f"{tmp_dir}/*.json")):
393 with open(batch_f, "r") as f_in:
394 lines = f_in.read()
395 f_out.write(lines)
396
397 print(f'Normalized version saved at {manifest_out}')
398
399
400 if __name__ == "__main__":
401 args = parse_args()
402
403 if not ASR_AVAILABLE and args.audio_data:
404 raise ValueError("NeMo ASR collection is not installed.")
405 start = time.time()
406 args.whitelist = os.path.abspath(args.whitelist) if args.whitelist else None
407 if args.text is not None:
408 normalizer = NormalizerWithAudio(
409 input_case=args.input_case,
410 lang=args.language,
411 cache_dir=args.cache_dir,
412 overwrite_cache=args.overwrite_cache,
413 whitelist=args.whitelist,
414 )
415
416 if os.path.exists(args.text):
417 with open(args.text, 'r') as f:
418 args.text = f.read().strip()
419 normalized_texts = normalizer.normalize(
420 text=args.text,
421 verbose=args.verbose,
422 n_tagged=args.n_tagged,
423 punct_post_process=not args.no_punct_post_process,
424 )
425
426 if args.audio_data:
427 asr_model = get_asr_model(args.model)
428 pred_text = asr_model.transcribe([args.audio_data])[0]
429 normalized_text, cer = normalizer.select_best_match(
430 normalized_texts=normalized_texts,
431 pred_text=pred_text,
432 input_text=args.text,
433 verbose=args.verbose,
434 remove_punct=not args.no_remove_punct_for_cer,
435 )
436 print(f"Transcript: {pred_text}")
437 print(f"Normalized: {normalized_text}")
438 else:
439 print("Normalization options:")
440 for norm_text in normalized_texts:
441 print(norm_text)
442 elif not os.path.exists(args.audio_data):
443 raise ValueError(f"{args.audio_data} not found.")
444 elif args.audio_data.endswith('.json'):
445 normalizer = NormalizerWithAudio(
446 input_case=args.input_case,
447 lang=args.language,
448 cache_dir=args.cache_dir,
449 overwrite_cache=args.overwrite_cache,
450 whitelist=args.whitelist,
451 )
452 normalize_manifest(
453 normalizer=normalizer,
454 audio_data=args.audio_data,
455 n_jobs=args.n_jobs,
456 n_tagged=args.n_tagged,
457 remove_punct=not args.no_remove_punct_for_cer,
458 punct_post_process=not args.no_punct_post_process,
459 batch_size=args.batch_size,
460 )
461 else:
462 raise ValueError(
463             "Provide either path to .json manifest in '--audio_data' OR "
464             + "'--audio_data' path to an audio file and '--text' path to a text file OR "
465             + "'--text' string text (for debugging without audio)"
466 )
467 print(f'Execution time: {round((time.time() - start)/60, 2)} min.')
468
[end of nemo_text_processing/text_normalization/normalize_with_audio.py]
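`select_best_match` above ranks normalization options by character error rate against the ASR transcript. A self-contained sketch of that selection, with a plain edit-distance CER standing in for NeMo's `word_error_rate(..., use_cer=True)` (the helper names `char_error_rate` and `select_best` are hypothetical):

```python
from typing import List

def char_error_rate(ref: str, hyp: str) -> float:
    # classic Levenshtein DP over characters, normalized by reference length
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        curr = [i]
        for j, h in enumerate(hyp, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (r != h)))  # substitution
        prev = curr
    return prev[-1] / max(len(ref), 1)

def select_best(options: List[str], pred_text: str) -> str:
    # mirrors select_best_match / calculate_cer cleanup: '-' -> ' ', lowercase,
    # then pick the option with the lowest CER against the ASR transcript
    return min(options, key=lambda t: char_error_rate(t.replace('-', ' ').lower(), pred_text))

options = ["twelve kilograms", "one two kilograms"]
print(select_best(options, pred_text="twelve kilograms"))  # -> twelve kilograms
```

Unlike the original, this sketch omits punctuation stripping and the percentage scaling, but the ranking logic is the same: lower CER wins.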
[start of tools/text_processing_deployment/pynini_export.py]
1 # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
2 # Copyright 2015 and onwards Google, Inc.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15
16
17 import os
18 import time
19 from argparse import ArgumentParser
20
21 from nemo.utils import logging
22
23 try:
24 import pynini
25 from nemo_text_processing.text_normalization.en.graph_utils import generator_main
26
27 PYNINI_AVAILABLE = True
28 except (ModuleNotFoundError, ImportError):
29
30 logging.warning(
31         "`pynini` is not installed!\n"
32         "Please run the `nemo_text_processing/setup.sh` script "
33         "prior to usage of this toolkit."
34 )
35
36 PYNINI_AVAILABLE = False
37
38
39 # This script exports compiled grammars inside nemo_text_processing into OpenFst finite state archive files
40 # tokenize_and_classify.far and verbalize.far for production purposes
41
42
43 def itn_grammars(**kwargs):
44 d = {}
45 d['classify'] = {
46 'TOKENIZE_AND_CLASSIFY': ITNClassifyFst(
47 cache_dir=kwargs["cache_dir"], overwrite_cache=kwargs["overwrite_cache"]
48 ).fst
49 }
50 d['verbalize'] = {'ALL': ITNVerbalizeFst().fst, 'REDUP': pynini.accep("REDUP")}
51 return d
52
53
54 def tn_grammars(**kwargs):
55 d = {}
56 d['classify'] = {
57 'TOKENIZE_AND_CLASSIFY': TNClassifyFst(
58 input_case=kwargs["input_case"],
59 deterministic=True,
60 cache_dir=kwargs["cache_dir"],
61 overwrite_cache=kwargs["overwrite_cache"],
62 ).fst
63 }
64 d['verbalize'] = {'ALL': TNVerbalizeFst(deterministic=True).fst, 'REDUP': pynini.accep("REDUP")}
65 return d
66
67
68 def export_grammars(output_dir, grammars):
69 """
70     Exports tokenize_and_classify and verbalize Fsts as OpenFst finite state archive (FAR) files.
71
72 Args:
73 output_dir: directory to export FAR files to. Subdirectories will be created for tagger and verbalizer respectively.
74 grammars: grammars to be exported
75 """
76
77 for category, graphs in grammars.items():
78 out_dir = os.path.join(output_dir, category)
79 if not os.path.exists(out_dir):
80 os.makedirs(out_dir)
81 time.sleep(1)
82 if category == "classify":
83 category = "tokenize_and_classify"
84 generator_main(f"{out_dir}/{category}.far", graphs)
85
86
87 def parse_args():
88 parser = ArgumentParser()
89 parser.add_argument("--output_dir", help="output directory for grammars", required=True, type=str)
90 parser.add_argument(
91 "--language", help="language", choices=["en", "de", "es", "ru", 'fr', 'vi'], type=str, default='en'
92 )
93 parser.add_argument(
94 "--grammars", help="grammars to be exported", choices=["tn_grammars", "itn_grammars"], type=str, required=True
95 )
96 parser.add_argument(
97 "--input_case", help="input capitalization", choices=["lower_cased", "cased"], default="cased", type=str
98 )
99 parser.add_argument("--overwrite_cache", help="set to True to re-create .far grammar files", action="store_true")
100 parser.add_argument(
101 "--cache_dir",
102 help="path to a dir with .far grammar file. Set to None to avoid using cache",
103 default=None,
104 type=str,
105 )
106 return parser.parse_args()
107
108
109 if __name__ == '__main__':
110 args = parse_args()
111
112 if args.language in ['ru', 'fr', 'es', 'vi'] and args.grammars == 'tn_grammars':
113 raise ValueError('Only ITN grammars could be deployed in Sparrowhawk for the selected languages.')
114
115 if args.language == 'en':
116 from nemo_text_processing.inverse_text_normalization.en.taggers.tokenize_and_classify import (
117 ClassifyFst as ITNClassifyFst,
118 )
119 from nemo_text_processing.inverse_text_normalization.en.verbalizers.verbalize import (
120 VerbalizeFst as ITNVerbalizeFst,
121 )
122 from nemo_text_processing.text_normalization.en.taggers.tokenize_and_classify import (
123 ClassifyFst as TNClassifyFst,
124 )
125 from nemo_text_processing.text_normalization.en.verbalizers.verbalize import VerbalizeFst as TNVerbalizeFst
126 elif args.language == 'de':
127 from nemo_text_processing.inverse_text_normalization.de.taggers.tokenize_and_classify import (
128 ClassifyFst as ITNClassifyFst,
129 )
130 from nemo_text_processing.inverse_text_normalization.de.verbalizers.verbalize import (
131 VerbalizeFst as ITNVerbalizeFst,
132 )
133 from nemo_text_processing.text_normalization.de.taggers.tokenize_and_classify import (
134 ClassifyFst as TNClassifyFst,
135 )
136 from nemo_text_processing.text_normalization.de.verbalizers.verbalize import VerbalizeFst as TNVerbalizeFst
137 elif args.language == 'ru':
138 from nemo_text_processing.inverse_text_normalization.ru.taggers.tokenize_and_classify import (
139 ClassifyFst as ITNClassifyFst,
140 )
141 from nemo_text_processing.inverse_text_normalization.ru.verbalizers.verbalize import (
142 VerbalizeFst as ITNVerbalizeFst,
143 )
144 elif args.language == 'es':
145 from nemo_text_processing.inverse_text_normalization.es.taggers.tokenize_and_classify import (
146 ClassifyFst as ITNClassifyFst,
147 )
148 from nemo_text_processing.inverse_text_normalization.es.verbalizers.verbalize import (
149 VerbalizeFst as ITNVerbalizeFst,
150 )
151 elif args.language == 'fr':
152 from nemo_text_processing.inverse_text_normalization.fr.taggers.tokenize_and_classify import (
153 ClassifyFst as ITNClassifyFst,
154 )
155 from nemo_text_processing.inverse_text_normalization.fr.verbalizers.verbalize import (
156 VerbalizeFst as ITNVerbalizeFst,
157 )
158 elif args.language == 'vi':
159 from nemo_text_processing.inverse_text_normalization.vi.taggers.tokenize_and_classify import (
160 ClassifyFst as ITNClassifyFst,
161 )
162 from nemo_text_processing.inverse_text_normalization.vi.verbalizers.verbalize import (
163 VerbalizeFst as ITNVerbalizeFst,
164 )
165
166 output_dir = os.path.join(args.output_dir, args.language)
167 export_grammars(
168 output_dir=output_dir,
169 grammars=locals()[args.grammars](
170 input_case=args.input_case, cache_dir=args.cache_dir, overwrite_cache=args.overwrite_cache
171 ),
172 )
173
[end of tools/text_processing_deployment/pynini_export.py]
</code>
Here is an example of a patch file. It consists of changes to the code base. It specifies the file names, the line numbers of each change, and the removed and added lines. A single patch file can contain changes to multiple files.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
I need you to solve the provided issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the format shown above.
Respond below:
|
NVIDIA/NeMo
|
022f0292aecbc98d591d49423d5045235394f793
|
./reinstall.sh crashes due to not being able to uninstall llvmlite
Starting from the `nemo:1.5.1` container, cloning the NeMo repo into a folder inside of it and calling `./reinstall.sh` fails with
```
ERROR: Cannot uninstall 'llvmlite'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall.
```
`pip install -e`, on the other hand, succeeds in installing `nemo:1.7.0rc0` and `numpy:1.22.2`; the rest of the packages remain untouched.
It seems that `./reinstall.sh`, which worked fine a week or so ago when following the same procedure to upgrade to `nemo:1.6.0rc`, has redeveloped issue #841. The solution remains the same: first call
```
pip install --ignore-installed llvmlite
```
followed by `./reinstall.sh`. In this case, apart from `llvmlite`, the following packages are updated:
```
ftfy-6.0.3 nemo-toolkit-1.7.0rc0 numba-0.55.1 pytorch-lightning-1.5.9 sacrebleu-2.0.0 setuptools-59.5.0
```
Interestingly `numpy` in this case is left at `1.21.5`.
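Condensed into a runnable sequence, the workaround described above (both commands appear verbatim in this report; assumed to be run from the NeMo repo root) is:

```shell
# Workaround sketch: force pip to reinstall llvmlite so it no longer tries to
# uninstall the distutils-installed copy, then rerun the NeMo reinstall script.
pip install --ignore-installed llvmlite
./reinstall.sh
```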
|
2022-02-09T05:12:31Z
|
<patch>
diff --git a/nemo_text_processing/text_normalization/__init__.py b/nemo_text_processing/text_normalization/__init__.py
--- a/nemo_text_processing/text_normalization/__init__.py
+++ b/nemo_text_processing/text_normalization/__init__.py
@@ -21,7 +21,7 @@
except (ModuleNotFoundError, ImportError):
logging.warning(
"`pynini` is not installed ! \n"
- "Please run the `nemo_text_processing/setup.sh` script"
+ "Please run the `nemo_text_processing/setup.sh` script "
"prior to usage of this toolkit."
)
diff --git a/nemo_text_processing/text_normalization/en/graph_utils.py b/nemo_text_processing/text_normalization/en/graph_utils.py
--- a/nemo_text_processing/text_normalization/en/graph_utils.py
+++ b/nemo_text_processing/text_normalization/en/graph_utils.py
@@ -159,7 +159,7 @@ def convert_space(fst) -> 'pynini.FstLike':
"""
Converts space to nonbreaking space.
Used only in tagger grammars for transducing token values within quotes, e.g. name: "hello kitty"
- This is making transducer significantly slower, so only use when there could be potential spaces within quotes, otherwise leave it.
+ This is making transducer significantly slower, so only use when there could be potential spaces within quotes, otherwise leave it.
Args:
fst: input fst
@@ -208,9 +208,9 @@ def add_tokens(self, fst) -> 'pynini.FstLike':
"""
Wraps class name around to given fst
- Args:
+ Args:
fst: input fst
-
+
Returns:
Fst: fst
"""
diff --git a/nemo_text_processing/text_normalization/en/taggers/punctuation.py b/nemo_text_processing/text_normalization/en/taggers/punctuation.py
--- a/nemo_text_processing/text_normalization/en/taggers/punctuation.py
+++ b/nemo_text_processing/text_normalization/en/taggers/punctuation.py
@@ -22,7 +22,7 @@
import pynini
from pynini.lib import pynutil
- PYNINI_AVAILABLE = False
+ PYNINI_AVAILABLE = True
except (ModuleNotFoundError, ImportError):
PYNINI_AVAILABLE = False
diff --git a/nemo_text_processing/text_normalization/en/verbalizers/whitelist.py b/nemo_text_processing/text_normalization/en/verbalizers/whitelist.py
--- a/nemo_text_processing/text_normalization/en/verbalizers/whitelist.py
+++ b/nemo_text_processing/text_normalization/en/verbalizers/whitelist.py
@@ -12,8 +12,6 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
-
-
from nemo_text_processing.text_normalization.en.graph_utils import NEMO_CHAR, NEMO_SIGMA, GraphFst, delete_space
try:
@@ -21,6 +19,7 @@
from pynini.lib import pynutil
PYNINI_AVAILABLE = True
+
except (ModuleNotFoundError, ImportError):
PYNINI_AVAILABLE = False
diff --git a/nemo_text_processing/text_normalization/en/verbalizers/word.py b/nemo_text_processing/text_normalization/en/verbalizers/word.py
--- a/nemo_text_processing/text_normalization/en/verbalizers/word.py
+++ b/nemo_text_processing/text_normalization/en/verbalizers/word.py
@@ -12,7 +12,6 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
-
from nemo_text_processing.text_normalization.en.graph_utils import NEMO_CHAR, NEMO_SIGMA, GraphFst, delete_space
try:
@@ -20,6 +19,7 @@
from pynini.lib import pynutil
PYNINI_AVAILABLE = True
+
except (ModuleNotFoundError, ImportError):
PYNINI_AVAILABLE = False
diff --git a/nemo_text_processing/text_normalization/es/__init__.py b/nemo_text_processing/text_normalization/es/__init__.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/__init__.py
@@ -0,0 +1,15 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+LOCALIZATION = "eu" # Set to am for alternate formatting
diff --git a/nemo_text_processing/text_normalization/es/data/__init__.py b/nemo_text_processing/text_normalization/es/data/__init__.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/data/__init__.py
@@ -0,0 +1,13 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/nemo_text_processing/text_normalization/es/data/dates/__init__.py b/nemo_text_processing/text_normalization/es/data/dates/__init__.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/data/dates/__init__.py
@@ -0,0 +1,13 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/nemo_text_processing/text_normalization/es/data/electronic/__init__.py b/nemo_text_processing/text_normalization/es/data/electronic/__init__.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/data/electronic/__init__.py
@@ -0,0 +1,13 @@
+# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/nemo_text_processing/text_normalization/es/data/fractions/__init__.py b/nemo_text_processing/text_normalization/es/data/fractions/__init__.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/data/fractions/__init__.py
@@ -0,0 +1,13 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/nemo_text_processing/text_normalization/es/data/measures/__init__.py b/nemo_text_processing/text_normalization/es/data/measures/__init__.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/data/measures/__init__.py
@@ -0,0 +1,13 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/nemo_text_processing/text_normalization/es/data/money/__init__.py b/nemo_text_processing/text_normalization/es/data/money/__init__.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/data/money/__init__.py
@@ -0,0 +1,13 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/nemo_text_processing/text_normalization/es/data/numbers/__init__.py b/nemo_text_processing/text_normalization/es/data/numbers/__init__.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/data/numbers/__init__.py
@@ -0,0 +1,13 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/nemo_text_processing/text_normalization/es/data/ordinals/__init__.py b/nemo_text_processing/text_normalization/es/data/ordinals/__init__.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/data/ordinals/__init__.py
@@ -0,0 +1,13 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/nemo_text_processing/text_normalization/es/data/roman/__init__.py b/nemo_text_processing/text_normalization/es/data/roman/__init__.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/data/roman/__init__.py
@@ -0,0 +1,13 @@
+# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/nemo_text_processing/text_normalization/es/data/time/__init__.py b/nemo_text_processing/text_normalization/es/data/time/__init__.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/data/time/__init__.py
@@ -0,0 +1,13 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/nemo_text_processing/text_normalization/es/graph_utils.py b/nemo_text_processing/text_normalization/es/graph_utils.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/graph_utils.py
@@ -0,0 +1,179 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import NEMO_SIGMA, NEMO_SPACE
+from nemo_text_processing.text_normalization.es import LOCALIZATION
+from nemo_text_processing.text_normalization.es.utils import get_abs_path, load_labels
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ digits = pynini.project(pynini.string_file(get_abs_path("data/numbers/digit.tsv")), "input")
+ tens = pynini.project(pynini.string_file(get_abs_path("data/numbers/ties.tsv")), "input")
+ teens = pynini.project(pynini.string_file(get_abs_path("data/numbers/teen.tsv")), "input")
+ twenties = pynini.project(pynini.string_file(get_abs_path("data/numbers/twenties.tsv")), "input")
+ hundreds = pynini.project(pynini.string_file(get_abs_path("data/numbers/hundreds.tsv")), "input")
+
+ accents = pynini.string_map([("á", "a"), ("é", "e"), ("í", "i"), ("ó", "o"), ("ú", "u")])
+
+ if LOCALIZATION == "am": # Setting localization for central and northern america formatting
+ cardinal_separator = pynini.string_map([",", NEMO_SPACE])
+ decimal_separator = pynini.accep(".")
+ else:
+ cardinal_separator = pynini.string_map([".", NEMO_SPACE])
+ decimal_separator = pynini.accep(",")
+
+ ones = pynini.union("un", "ún")
+ fem_ones = pynini.union(pynini.cross("un", "una"), pynini.cross("ún", "una"), pynini.cross("uno", "una"))
+ one_to_one_hundred = pynini.union(digits, tens, teens, twenties, tens + pynini.accep(" y ") + digits)
+ fem_hundreds = hundreds @ pynini.cdrewrite(pynini.cross("ientos", "ientas"), "", "", NEMO_SIGMA)
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ digits = None
+ tens = None
+ teens = None
+ twenties = None
+ hundreds = None
+
+ accents = None
+
+ cardinal_separator = None
+ decimal_separator = None
+
+ ones = None
+ fem_ones = None
+ one_to_one_hundred = None
+ fem_hundreds = None
+
+ PYNINI_AVAILABLE = False
+
+
+def strip_accent(fst: 'pynini.FstLike') -> 'pynini.FstLike':
+ """
+ Converts all accented vowels to non-accented equivalents
+
+ Args:
+ fst: Any fst. Composes vowel conversion onto fst's output strings
+ """
+ return fst @ pynini.cdrewrite(accents, "", "", NEMO_SIGMA)
+
+
+def shift_cardinal_gender(fst: 'pynini.FstLike') -> 'pynini.FstLike':
+ """
+ Applies gender conversion rules to a cardinal string. These include: rendering all masculine forms of "uno" (including apocopated forms) as "una" and
+ converting all gendered numbers in the hundreds series (200,300,400...) to feminine equivalent (e.g. "doscientos" -> "doscientas"). Conversion only applies
+ to value place for <1000 and multiple of 1000. (e.g. "doscientos mil doscientos" -> "doscientas mil doscientas".) For place values greater than the thousands, there
+ is no gender shift as the higher powers of ten ("millones", "billones") are masculine nouns and any conversion would be formally
+ ungrammatical.
+ e.g.
+ "doscientos" -> "doscientas"
+ "doscientos mil" -> "doscientas mil"
+ "doscientos millones" -> "doscientos millones"
+ "doscientos mil millones" -> "doscientos mil millones"
+ "doscientos millones doscientos mil doscientos" -> "doscientos millones doscientas mil doscientas"
+
+ Args:
+ fst: Any fst. Composes conversion onto fst's output strings
+ """
+ before_mil = (
+ NEMO_SPACE
+ + (pynini.accep("mil") | pynini.accep("milésimo"))
+ + pynini.closure(NEMO_SPACE + hundreds, 0, 1)
+ + pynini.closure(NEMO_SPACE + one_to_one_hundred, 0, 1)
+ + pynini.union(pynini.accep("[EOS]"), pynini.accep("\""), decimal_separator)
+ )
+ before_double_digits = pynini.closure(NEMO_SPACE + one_to_one_hundred, 0, 1) + pynini.union(
+ pynini.accep("[EOS]"), pynini.accep("\"")
+ )
+
+ fem_allign = pynini.cdrewrite(fem_hundreds, "", before_mil, NEMO_SIGMA) # doscientas mil dosciento
+ fem_allign @= pynini.cdrewrite(fem_hundreds, "", before_double_digits, NEMO_SIGMA) # doscientas mil doscienta
+ fem_allign @= pynini.cdrewrite(
+ fem_ones, "", pynini.union("[EOS]", "\"", decimal_separator), NEMO_SIGMA
+ ) # If before a quote or EOS, we know it's the end of a string
+
+ return fst @ fem_allign
+
+
+def shift_number_gender(fst: 'pynini.FstLike') -> 'pynini.FstLike':
+ """
+ Performs gender conversion on all verbalized numbers in output. All values in the hundreds series (200,300,400) are changed to
+ feminine gender (e.g. "doscientos" -> "doscientas") and all forms of "uno" (including apocopated forms) are converted to "una".
+ This has no boundary restriction and will perform shift across all values in output string.
+ e.g.
+ "doscientos" -> "doscientas"
+ "doscientos millones" -> "doscientas millones"
+ "doscientos millones doscientos" -> "doscientas millones doscientas"
+
+ Args:
+ fst: Any fst. Composes conversion onto fst's output strings
+ """
+ fem_allign = pynini.cdrewrite(fem_hundreds, "", "", NEMO_SIGMA)
+ fem_allign @= pynini.cdrewrite(
+ fem_ones, "", pynini.union(NEMO_SPACE, pynini.accep("[EOS]"), pynini.accep("\"")), NEMO_SIGMA
+ ) # If before a quote or EOS, we know it's the end of a string
+
+ return fst @ fem_allign
+
+
+def strip_cardinal_apocope(fst: 'pynini.FstLike') -> 'pynini.FstLike':
+ """
+ Reverts apocope on cardinal strings in line with formation rules. e.g. "un" -> "uno". Due to cardinal formation rules, this in effect only
+ affects strings where the final value is a variation of "un".
+ e.g.
+ "un" -> "uno"
+ "veintiún" -> "veintiuno"
+
+ Args:
+ fst: Any fst. Composes conversion onto fst's output strings
+ """
+ # Since cardinals use apocope by default for large values (e.g. "millรณn"), this only needs to act on the last instance of one
+ strip = pynini.cross("un", "uno") | pynini.cross("ún", "uno")
+ strip = pynini.cdrewrite(strip, "", pynini.union("[EOS]", "\""), NEMO_SIGMA)
+ return fst @ strip
+
+
+def roman_to_int(fst: 'pynini.FstLike') -> 'pynini.FstLike':
+ """
+ Alters given fst to convert Roman integers (lower and upper cased) into Arabic numerals. Valid for values up to 1000.
+ e.g.
+ "V" -> "5"
+ "i" -> "1"
+
+ Args:
+ fst: Any fst. Composes fst onto Roman conversion outputs.
+ """
+
+ def _load_roman(file: str):
+ roman = load_labels(get_abs_path(file))
+ roman_numerals = [(x, y) for x, y in roman] + [(x.upper(), y) for x, y in roman]
+ return pynini.string_map(roman_numerals)
+
+ digit = _load_roman("data/roman/digit.tsv")
+ ties = _load_roman("data/roman/ties.tsv")
+ hundreds = _load_roman("data/roman/hundreds.tsv")
+
+ graph = (
+ digit
+ | ties + (digit | pynutil.add_weight(pynutil.insert("0"), 0.01))
+ | (
+ hundreds
+ + (ties | pynutil.add_weight(pynutil.insert("0"), 0.01))
+ + (digit | pynutil.add_weight(pynutil.insert("0"), 0.01))
+ )
+ ).optimize()
+
+ return graph @ fst
diff --git a/nemo_text_processing/text_normalization/es/taggers/__init__.py b/nemo_text_processing/text_normalization/es/taggers/__init__.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/taggers/__init__.py
@@ -0,0 +1,13 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/nemo_text_processing/text_normalization/es/taggers/cardinal.py b/nemo_text_processing/text_normalization/es/taggers/cardinal.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/taggers/cardinal.py
@@ -0,0 +1,190 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import (
+ NEMO_ALPHA,
+ NEMO_DIGIT,
+ NEMO_SIGMA,
+ NEMO_SPACE,
+ NEMO_WHITE_SPACE,
+ GraphFst,
+ delete_space,
+ insert_space,
+)
+from nemo_text_processing.text_normalization.es.graph_utils import cardinal_separator
+from nemo_text_processing.text_normalization.es.utils import get_abs_path
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ zero = pynini.invert(pynini.string_file(get_abs_path("data/numbers/zero.tsv")))
+ digit = pynini.invert(pynini.string_file(get_abs_path("data/numbers/digit.tsv")))
+ teen = pynini.invert(pynini.string_file(get_abs_path("data/numbers/teen.tsv")))
+ ties = pynini.invert(pynini.string_file(get_abs_path("data/numbers/ties.tsv")))
+ twenties = pynini.invert(pynini.string_file(get_abs_path("data/numbers/twenties.tsv")))
+ hundreds = pynini.invert(pynini.string_file(get_abs_path("data/numbers/hundreds.tsv")))
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ zero = None
+ digit = None
+ teen = None
+ ties = None
+ twenties = None
+ hundreds = None
+
+ PYNINI_AVAILABLE = False
+
+
+def filter_punctuation(fst: 'pynini.FstLike') -> 'pynini.FstLike':
+ """
+ Helper function for parsing number strings. Converts common cardinal strings (groups of three digits delineated by 'cardinal_separator' - see graph_utils)
+ into a string of digits:
+ "1 000" -> "1000"
+ "1.000.000" -> "1000000"
+ Args:
+ fst: Any pynini.FstLike object. Function composes fst onto string parser fst
+
+ Returns:
+ fst: A pynini.FstLike object
+ """
+ exactly_three_digits = NEMO_DIGIT ** 3 # for blocks of three
+ up_to_three_digits = pynini.closure(NEMO_DIGIT, 1, 3) # for start of string
+
+ cardinal_string = pynini.closure(
+ NEMO_DIGIT, 1
+ ) # For string w/o punctuation (used for page numbers, thousand series)
+
+ cardinal_string |= (
+ up_to_three_digits
+ + pynutil.delete(cardinal_separator)
+ + pynini.closure(exactly_three_digits + pynutil.delete(cardinal_separator))
+ + exactly_three_digits
+ )
+
+ return cardinal_string @ fst
+
+
+class CardinalFst(GraphFst):
+ """
+ Finite state transducer for classifying cardinals, e.g.
+ "1000" -> cardinal { integer: "mil" }
+ "2.000.000" -> cardinal { integer: "dos millones" }
+
+ Args:
+ deterministic: if True will provide a single transduction option,
+ for False multiple transduction are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, deterministic: bool = True):
+ super().__init__(name="cardinal", kind="classify", deterministic=deterministic)
+
+ # Any single digit
+ graph_digit = digit
+ digits_no_one = (NEMO_DIGIT - "1") @ graph_digit
+
+ # Any double digit
+ graph_tens = teen
+ graph_tens |= ties + (pynutil.delete('0') | (pynutil.insert(" y ") + graph_digit))
+ graph_tens |= twenties
+
+ self.tens = graph_tens.optimize()
+
+ self.two_digit_non_zero = pynini.union(
+ graph_digit, graph_tens, (pynini.cross("0", NEMO_SPACE) + graph_digit)
+ ).optimize()
+
+ # Three digit strings
+ graph_hundreds = hundreds + pynini.union(
+ pynutil.delete("00"), (insert_space + graph_tens), (pynini.cross("0", NEMO_SPACE) + graph_digit)
+ )
+ graph_hundreds |= pynini.cross("100", "cien")
+ graph_hundreds |= (
+ pynini.cross("1", "ciento") + insert_space + pynini.union(graph_tens, pynutil.delete("0") + graph_digit)
+ )
+
+ self.hundreds = graph_hundreds.optimize()
+
+ # For all three digit strings with leading zeroes (graph appends '0's to manage place in string)
+ graph_hundreds_component = pynini.union(graph_hundreds, pynutil.delete("0") + graph_tens)
+
+ graph_hundreds_component_at_least_one_none_zero_digit = graph_hundreds_component | (
+ pynutil.delete("00") + graph_digit
+ )
+ graph_hundreds_component_at_least_one_none_zero_digit_no_one = graph_hundreds_component | (
+ pynutil.delete("00") + digits_no_one
+ )
+
+ graph_thousands_component_at_least_one_none_zero_digit = pynini.union(
+ pynutil.delete("000") + graph_hundreds_component_at_least_one_none_zero_digit,
+ graph_hundreds_component_at_least_one_none_zero_digit_no_one
+ + pynutil.insert(" mil")
+ + ((insert_space + graph_hundreds_component_at_least_one_none_zero_digit) | pynutil.delete("000")),
+ pynini.cross("001", "mil")
+ + ((insert_space + graph_hundreds_component_at_least_one_none_zero_digit) | pynutil.delete("000")),
+ )
+
+ graph_thousands_component_at_least_one_none_zero_digit_no_one = pynini.union(
+ pynutil.delete("000") + graph_hundreds_component_at_least_one_none_zero_digit_no_one,
+ graph_hundreds_component_at_least_one_none_zero_digit_no_one
+ + pynutil.insert(" mil")
+ + ((insert_space + graph_hundreds_component_at_least_one_none_zero_digit) | pynutil.delete("000")),
+ pynini.cross("001", "mil")
+ + ((insert_space + graph_hundreds_component_at_least_one_none_zero_digit) | pynutil.delete("000")),
+ )
+
+ graph_million = pynutil.add_weight(pynini.cross("000001", "un millón"), -0.001)
+ graph_million |= graph_thousands_component_at_least_one_none_zero_digit_no_one + pynutil.insert(" millones")
+ graph_million |= pynutil.delete("000000")
+ graph_million += insert_space
+
+ graph_billion = pynutil.add_weight(pynini.cross("000001", "un billón"), -0.001)
+ graph_billion |= graph_thousands_component_at_least_one_none_zero_digit_no_one + pynutil.insert(" billones")
+ graph_billion |= pynutil.delete("000000")
+ graph_billion += insert_space
+
+ graph_trillion = pynutil.add_weight(pynini.cross("000001", "un trillón"), -0.001)
+ graph_trillion |= graph_thousands_component_at_least_one_none_zero_digit_no_one + pynutil.insert(" trillones")
+ graph_trillion |= pynutil.delete("000000")
+ graph_trillion += insert_space
+
+ graph = (
+ graph_trillion
+ + graph_billion
+ + graph_million
+ + (graph_thousands_component_at_least_one_none_zero_digit | pynutil.delete("000000"))
+ )
+
+ self.graph = (
+ ((NEMO_DIGIT - "0") + pynini.closure(NEMO_DIGIT, 0))
+ @ pynini.cdrewrite(pynini.closure(pynutil.insert("0")), "[BOS]", "", NEMO_SIGMA)
+ @ NEMO_DIGIT ** 24
+ @ graph
+ @ pynini.cdrewrite(delete_space, "[BOS]", "", NEMO_SIGMA)
+ @ pynini.cdrewrite(delete_space, "", "[EOS]", NEMO_SIGMA)
+ @ pynini.cdrewrite(
+ pynini.cross(pynini.closure(NEMO_WHITE_SPACE, 2), NEMO_SPACE), NEMO_ALPHA, NEMO_ALPHA, NEMO_SIGMA
+ )
+ )
+ self.graph |= zero
+
+ self.graph = filter_punctuation(self.graph).optimize()
+
+ optional_minus_graph = pynini.closure(pynutil.insert("negative: ") + pynini.cross("-", "\"true\" "), 0, 1)
+
+ final_graph = optional_minus_graph + pynutil.insert("integer: \"") + self.graph + pynutil.insert("\"")
+
+ final_graph = self.add_tokens(final_graph)
+ self.fst = final_graph.optimize()
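The `filter_punctuation` helper above accepts either a bare digit run or 1-3 leading digits followed by separator-delineated groups of exactly three digits. A minimal pure-Python sketch of the same acceptance logic, without pynini — the helper below is hypothetical and assumes '.' and ' ' as the cardinal separators:

```python
import re

def filter_punctuation(text: str) -> str:
    """Collapse a Spanish-style cardinal string to bare digits.

    Mirrors the FST: accept an unpunctuated digit run ("1000"), or
    1-3 leading digits followed by '.'/' '-separated groups of exactly
    three digits ("1.000.000"). Raise ValueError otherwise.
    """
    if re.fullmatch(r"\d+", text):
        return text
    if re.fullmatch(r"\d{1,3}([. ]\d{3})+", text):
        return re.sub(r"[. ]", "", text)
    raise ValueError(f"not a well-formed cardinal: {text!r}")
```

For example, `filter_punctuation("1.000.000")` yields `"1000000"`, while a malformed grouping such as `"10.00"` is rejected.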
diff --git a/nemo_text_processing/text_normalization/es/taggers/date.py b/nemo_text_processing/text_normalization/es/taggers/date.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/taggers/date.py
@@ -0,0 +1,107 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import NEMO_DIGIT, NEMO_SPACE, GraphFst, delete_extra_space
+from nemo_text_processing.text_normalization.es.utils import get_abs_path
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ articles = pynini.union("de", "del", "el", "del año")
+ delete_leading_zero = (pynutil.delete("0") | (NEMO_DIGIT - "0")) + NEMO_DIGIT
+ month_numbers = pynini.string_file(get_abs_path("data/dates/months.tsv"))
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ articles = None
+ delete_leading_zero = None
+ month_numbers = None
+
+ PYNINI_AVAILABLE = False
+
+
+class DateFst(GraphFst):
+ """
+ Finite state transducer for classifying date, e.g.
+ "01.04.2010" -> date { day: "un" month: "abril" year: "dos mil diez" preserve_order: true }
+ "marzo 4 2000" -> date { month: "marzo" day: "cuatro" year: "dos mil" }
+ "1990-20-01" -> date { year: "mil novecientos noventa" day: "veinte" month: "enero" }
+
+ Args:
+ cardinal: cardinal GraphFst
+ deterministic: if True will provide a single transduction option,
+ for False multiple transduction are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, cardinal: GraphFst, deterministic: bool):
+ super().__init__(name="date", kind="classify", deterministic=deterministic)
+
+ number_to_month = month_numbers.optimize()
+ month_graph = pynini.project(number_to_month, "output")
+
+ numbers = cardinal.graph
+ optional_leading_zero = delete_leading_zero | NEMO_DIGIT
+
+ # 01, 31, 1
+ digit_day = optional_leading_zero @ pynini.union(*[str(x) for x in range(1, 32)]) @ numbers
+ day = (pynutil.insert("day: \"") + digit_day + pynutil.insert("\"")).optimize()
+
+ digit_month = optional_leading_zero @ pynini.union(*[str(x) for x in range(1, 13)])
+ number_to_month = digit_month @ number_to_month
+
+ month_name = (pynutil.insert("month: \"") + month_graph + pynutil.insert("\"")).optimize()
+ month_number = (pynutil.insert("month: \"") + number_to_month + pynutil.insert("\"")).optimize()
+
+ # prefer cardinal over year
+ year = (NEMO_DIGIT - "0") + pynini.closure(NEMO_DIGIT, 1, 3) # 90, 990, 1990
+ year @= numbers
+ self.year = year
+
+ year_only = pynutil.insert("year: \"") + year + pynutil.insert("\"")
+ year_with_articles = (
+ pynutil.insert("year: \"") + pynini.closure(articles + NEMO_SPACE, 0, 1) + year + pynutil.insert("\"")
+ )
+
+ graph_dmy = (
+ day
+ + pynini.closure(pynutil.delete(" de"))
+ + NEMO_SPACE
+ + month_name
+ + pynini.closure(NEMO_SPACE + year_with_articles, 0, 1)
+ )
+
+ graph_mdy = ( # English influences on language
+ month_name + delete_extra_space + day + pynini.closure(NEMO_SPACE + year_with_articles, 0, 1)
+ )
+
+ separators = [".", "-", "/"]
+ for sep in separators:
+ year_optional = pynini.closure(pynini.cross(sep, NEMO_SPACE) + year_only, 0, 1)
+ new_graph = day + pynini.cross(sep, NEMO_SPACE) + month_number + year_optional
+ graph_dmy |= new_graph
+ if not deterministic:
+ new_graph = month_number + pynini.cross(sep, NEMO_SPACE) + day + year_optional
+ graph_mdy |= new_graph
+
+ dash = "-"
+ day_optional = pynini.closure(pynini.cross(dash, NEMO_SPACE) + day, 0, 1)
+ graph_ymd = NEMO_DIGIT ** 4 @ year_only + pynini.cross(dash, NEMO_SPACE) + month_number + day_optional
+
+ final_graph = graph_dmy + pynutil.insert(" preserve_order: true")
+ final_graph |= graph_ymd
+ final_graph |= graph_mdy
+
+ self.final_graph = final_graph.optimize()
+ self.fst = self.add_tokens(self.final_graph).optimize()
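The separator loop above admits '.', '-' and '/' between the day and month number, with an optional year using the same separator. A rough regex-based sketch of that DMY branch — `parse_dmy` and the month table are invented for illustration (the real grammar also verbalizes the numbers):

```python
import re

# Month names assumed to match the grammar's months.tsv mapping.
MONTHS = {1: "enero", 2: "febrero", 3: "marzo", 4: "abril", 5: "mayo",
          6: "junio", 7: "julio", 8: "agosto", 9: "septiembre",
          10: "octubre", 11: "noviembre", 12: "diciembre"}

def parse_dmy(text: str):
    """Parse day<sep>month[<sep>year] with '.', '-' or '/' as separator,
    mirroring the separator loop in DateFst. Returns (day, month_name,
    year or None), or None if the string is not a well-formed date."""
    m = re.fullmatch(r"(\d{1,2})([./-])(\d{1,2})(?:\2(\d{2,4}))?", text)
    if m is None:
        return None
    day, month = int(m.group(1)), int(m.group(3))
    if not (1 <= day <= 31 and 1 <= month <= 12):
        return None
    year = int(m.group(4)) if m.group(4) else None
    return day, MONTHS[month], year
```

The backreference `\2` enforces that both separators match, just as each FST branch is built per separator.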
diff --git a/nemo_text_processing/text_normalization/es/taggers/decimals.py b/nemo_text_processing/text_normalization/es/taggers/decimals.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/taggers/decimals.py
@@ -0,0 +1,138 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import (
+ NEMO_DIGIT,
+ NEMO_SIGMA,
+ NEMO_SPACE,
+ GraphFst,
+ delete_space,
+ insert_space,
+)
+from nemo_text_processing.text_normalization.es.graph_utils import (
+ cardinal_separator,
+ decimal_separator,
+ strip_cardinal_apocope,
+)
+from nemo_text_processing.text_normalization.es.utils import get_abs_path
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ quantities = pynini.string_file(get_abs_path("data/numbers/quantities.tsv"))
+ digit = pynini.invert(pynini.string_file(get_abs_path("data/numbers/digit.tsv")))
+ zero = pynini.invert(pynini.string_file(get_abs_path("data/numbers/zero.tsv")))
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ quantities = None
+ digit = None
+ zero = None
+
+ PYNINI_AVAILABLE = False
+
+
+def get_quantity(decimal_graph: 'pynini.FstLike', cardinal_graph: 'pynini.FstLike') -> 'pynini.FstLike':
+ """
+ Returns FST that transforms either a cardinal or decimal followed by a quantity into a numeral,
+ e.g. 2 millones -> integer_part: "dos" quantity: "millones"
+ e.g. 2,4 millones -> integer_part: "dos" fractional_part: "cuatro" quantity: "millones"
+ e.g. 2,400 millones -> integer_part: "dos mil cuatrocientos" quantity: "millones"
+
+ Args:
+ decimal_graph: DecimalFST
+ cardinal_graph: CardinalFST
+ """
+ numbers = pynini.closure(NEMO_DIGIT, 1, 6) @ cardinal_graph
+ numbers = pynini.cdrewrite(pynutil.delete(cardinal_separator), "", "", NEMO_SIGMA) @ numbers
+
+ res = (
+ pynutil.insert("integer_part: \"")
+ + numbers # The cardinal we're passing only produces 'un' for one, so gender agreement is safe (all quantities are masculine). Limit to 10^6 power.
+ + pynutil.insert("\"")
+ + NEMO_SPACE
+ + pynutil.insert("quantity: \"")
+ + quantities
+ + pynutil.insert("\"")
+ )
+ res |= decimal_graph + NEMO_SPACE + pynutil.insert("quantity: \"") + quantities + pynutil.insert("\"")
+ return res
+
+
+class DecimalFst(GraphFst):
+ """
+ Finite state transducer for classifying decimal, e.g.
+ -11,4006 billones -> decimal { negative: "true" integer_part: "once" fractional_part: "cuatro cero cero seis" quantity: "billones" preserve_order: true }
+ 1 billón -> decimal { integer_part: "un" quantity: "billón" preserve_order: true }
+ Args:
+ cardinal: CardinalFst
+ deterministic: if True will provide a single transduction option,
+ for False multiple transduction are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, cardinal: GraphFst, deterministic: bool = True):
+ super().__init__(name="decimal", kind="classify", deterministic=deterministic)
+ graph_digit = digit | zero
+
+ if not deterministic:
+ graph = pynini.union(graph_digit, cardinal.hundreds, cardinal.tens)
+ graph += pynini.closure(insert_space + graph)
+
+ else:
+ # General pattern seems to be 1-3 digits: map as cardinal, default to digits otherwise
+ graph = pynini.union(
+ graph_digit,
+ cardinal.tens,
+ cardinal.hundreds,
+ graph_digit + pynini.closure(insert_space + graph_digit, 3),
+ zero
+ + pynini.closure(insert_space + zero)
+ + pynini.closure(insert_space + graph_digit), # For cases such as "1,010"
+ )
+
+ # Need to strip apocope everywhere BUT end of string
+ reverse_apocope = pynini.string_map([("un", "uno"), ("รบn", "uno")])
+ apply_reverse_apocope = pynini.cdrewrite(reverse_apocope, "", NEMO_SPACE, NEMO_SIGMA)
+ graph @= apply_reverse_apocope
+
+ # Technically decimals should be space delineated groups of three, e.g. (1,333 333). This removes any possible spaces
+ strip_formatting = pynini.cdrewrite(delete_space, "", "", NEMO_SIGMA)
+ graph = strip_formatting @ graph
+
+ self.graph = graph.optimize()
+
+ graph_separator = pynutil.delete(decimal_separator)
+ optional_graph_negative = pynini.closure(pynutil.insert("negative: ") + pynini.cross("-", "\"true\" "), 0, 1)
+
+ self.graph_fractional = pynutil.insert("fractional_part: \"") + self.graph + pynutil.insert("\"")
+
+ # Integer graph maintains apocope except for ones place
+ graph_integer = (
+ strip_cardinal_apocope(cardinal.graph)
+ if deterministic
+ else pynini.union(cardinal.graph, strip_cardinal_apocope(cardinal.graph))
+ ) # Gives us forms w/ and w/o apocope
+ self.graph_integer = pynutil.insert("integer_part: \"") + graph_integer + pynutil.insert("\"")
+ final_graph_wo_sign = self.graph_integer + graph_separator + insert_space + self.graph_fractional
+
+ self.final_graph_wo_negative = (
+ final_graph_wo_sign | get_quantity(final_graph_wo_sign, cardinal.graph).optimize()
+ )
+ final_graph = optional_graph_negative + self.final_graph_wo_negative
+
+ final_graph += pynutil.insert(" preserve_order: true")
+ final_graph = self.add_tokens(final_graph)
+
+ self.fst = final_graph.optimize()
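The reverse-apocope rewrite above restores 'un'/'ún' to 'uno' wherever a space follows, so only the final word keeps its apocope. A string-level sketch of the same rule — a hypothetical helper, not part of NeMo:

```python
import re

def reverse_apocope(text: str) -> str:
    """Restore apocopated 'un'/'ún' to 'uno' wherever a space follows,
    mirroring the cdrewrite context ("", NEMO_SPACE) above; the final
    word keeps its apocope."""
    return re.sub(r"(?:un|ún)(?= )", "uno", text)
```

Like the substring-based `cdrewrite`, this also fires word-internally, e.g. a non-final "veintiún" becomes "veintiuno".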
diff --git a/nemo_text_processing/text_normalization/es/taggers/electronic.py b/nemo_text_processing/text_normalization/es/taggers/electronic.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/taggers/electronic.py
@@ -0,0 +1,84 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import NEMO_ALPHA, NEMO_DIGIT, GraphFst, insert_space
+from nemo_text_processing.text_normalization.es.utils import get_abs_path, load_labels
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ common_domains = [x[0] for x in load_labels(get_abs_path("data/electronic/domain.tsv"))]
+ symbols = [x[0] for x in load_labels(get_abs_path("data/electronic/symbols.tsv"))]
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ common_domains = None
+ symbols = None
+
+ PYNINI_AVAILABLE = False
+
+
+class ElectronicFst(GraphFst):
+ """
+ Finite state transducer for classifying electronic: email addresses
+ e.g. "abc@hotmail.com" -> electronic { username: "abc" domain: "hotmail.com" preserve_order: true }
+ e.g. "www.abc.com/123" -> electronic { protocol: "www." domain: "abc.com/123" preserve_order: true }
+
+ Args:
+ deterministic: if True will provide a single transduction option,
+ for False multiple transduction are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, deterministic: bool = True):
+ super().__init__(name="electronic", kind="classify", deterministic=deterministic)
+
+ dot = pynini.accep(".")
+ accepted_common_domains = pynini.union(*common_domains)
+ accepted_symbols = pynini.union(*symbols) - dot
+ accepted_characters = pynini.closure(NEMO_ALPHA | NEMO_DIGIT | accepted_symbols)
+ accepted_characters_with_dot = pynini.closure(NEMO_ALPHA | NEMO_DIGIT | accepted_symbols | dot)
+
+ # email
+ username = (
+ pynutil.insert("username: \"")
+ + accepted_characters_with_dot
+ + pynutil.insert("\"")
+ + pynini.cross('@', ' ')
+ )
+ domain_graph = accepted_characters + dot + accepted_characters
+ domain_graph = pynutil.insert("domain: \"") + domain_graph + pynutil.insert("\"")
+ domain_common_graph = (
+ pynutil.insert("domain: \"")
+ + accepted_characters
+ + accepted_common_domains
+ + pynini.closure((accepted_symbols | dot) + pynini.closure(accepted_characters, 1), 0, 1)
+ + pynutil.insert("\"")
+ )
+ graph = (username + domain_graph) | domain_common_graph
+
+ # url
+ protocol_start = pynini.accep("https://") | pynini.accep("http://")
+ protocol_end = (
+ pynini.accep("www.")
+ if deterministic
+ else pynini.accep("www.") | pynini.cross("www.", "doble ve doble ve doble ve.")
+ )
+ protocol = protocol_start | protocol_end | (protocol_start + protocol_end)
+ protocol = pynutil.insert("protocol: \"") + protocol + pynutil.insert("\"")
+ graph |= protocol + insert_space + (domain_graph | domain_common_graph)
+ self.graph = graph
+
+ final_graph = self.add_tokens(self.graph + pynutil.insert(" preserve_order: true"))
+ self.fst = final_graph.optimize()
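The email branch above splits on '@', allowing dots in the username and requiring at least one dot in the domain. A simplified regex sketch of that split — `classify_electronic` is an invented name, and the character classes only approximate the symbol inventory loaded from symbols.tsv:

```python
import re

def classify_electronic(text: str):
    """Split an email address into the username/domain fields that
    ElectronicFst emits (hypothetical helper, not the NeMo API)."""
    m = re.fullmatch(r"([A-Za-z0-9._%+-]+)@([A-Za-z0-9-]+\.[A-Za-z0-9./-]+)", text)
    if m is None:
        return None
    return {"username": m.group(1), "domain": m.group(2), "preserve_order": True}
```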
diff --git a/nemo_text_processing/text_normalization/es/taggers/fraction.py b/nemo_text_processing/text_normalization/es/taggers/fraction.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/taggers/fraction.py
@@ -0,0 +1,124 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import (
+ NEMO_CHAR,
+ NEMO_DIGIT,
+ NEMO_SIGMA,
+ NEMO_SPACE,
+ GraphFst,
+)
+from nemo_text_processing.text_normalization.es.utils import get_abs_path
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ ordinal_exceptions = pynini.string_file(get_abs_path("data/fractions/ordinal_exceptions.tsv"))
+ higher_powers_of_ten = pynini.string_file(get_abs_path("data/fractions/powers_of_ten.tsv"))
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ ordinal_exceptions = None
+ higher_powers_of_ten = None
+
+ PYNINI_AVAILABLE = False
+
+
+class FractionFst(GraphFst):
+ """
+ Finite state transducer for classifying fraction
+ "23 4/5" ->
+ tokens { fraction { integer: "veintitrés" numerator: "cuatro" denominator: "quinto" morphosyntactic_features: "ordinal" } }
+
+ Args:
+ cardinal: CardinalFst
+ ordinal: OrdinalFst
+ deterministic: if True will provide a single transduction option,
+ for False multiple transduction are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, cardinal: GraphFst, ordinal: GraphFst, deterministic: bool = True):
+ super().__init__(name="fraction", kind="classify", deterministic=deterministic)
+ cardinal_graph = cardinal.graph
+ ordinal_graph = ordinal.graph
+
+ # 2-10 are all ordinals
+ three_to_ten = pynini.string_map(["2", "3", "4", "5", "6", "7", "8", "9", "10",])
+ block_three_to_ten = pynutil.delete(three_to_ten) # To block cardinal productions
+ if not deterministic: # Multiples of tens are sometimes rendered as ordinals
+ three_to_ten |= pynini.string_map(["20", "30", "40", "50", "60", "70", "80", "90",])
+ graph_three_to_ten = three_to_ten @ ordinal_graph
+ graph_three_to_ten @= pynini.cdrewrite(ordinal_exceptions, "", "", NEMO_SIGMA)
+
+ # Higher powers of tens (and multiples) are converted to ordinals.
+ hundreds = pynini.string_map(["100", "200", "300", "400", "500", "600", "700", "800", "900",])
+ graph_hundreds = hundreds @ ordinal_graph
+
+ multiples_of_thousand = ordinal.multiples_of_thousand # So we can have X milésimos
+
+ graph_higher_powers_of_ten = (
+ pynini.closure(ordinal.one_to_one_thousand + NEMO_SPACE, 0, 1)
+ + pynini.closure("mil ", 0, 1)
+ + pynini.closure(ordinal.one_to_one_thousand + NEMO_SPACE, 0, 1)
+ ) # x millones / x mil millones / x mil z millones
+ graph_higher_powers_of_ten += higher_powers_of_ten
+ graph_higher_powers_of_ten = cardinal_graph @ graph_higher_powers_of_ten
+ graph_higher_powers_of_ten @= pynini.cdrewrite(
+ pynutil.delete("un "), pynini.accep("[BOS]"), pynini.project(higher_powers_of_ten, "output"), NEMO_SIGMA
+ ) # we drop 'un' from these ordinals (millionths, not one-millionths)
+
+ graph_higher_powers_of_ten = multiples_of_thousand | graph_hundreds | graph_higher_powers_of_ten
+ block_higher_powers_of_ten = pynutil.delete(
+ pynini.project(graph_higher_powers_of_ten, "input")
+ ) # For cardinal graph
+
+ graph_fractions_ordinals = graph_higher_powers_of_ten | graph_three_to_ten
+ graph_fractions_ordinals += pynutil.insert(
+ "\" morphosyntactic_features: \"ordinal\""
+ ) # We note the root for processing later
+
+ # Blocking the digits and hundreds from Cardinal graph
+ graph_fractions_cardinals = pynini.cdrewrite(
+ block_three_to_ten | block_higher_powers_of_ten, pynini.accep("[BOS]"), pynini.accep("[EOS]"), NEMO_SIGMA
+ )
+ graph_fractions_cardinals @= NEMO_CHAR.plus @ pynini.cdrewrite(
+ pynutil.delete("0"), pynini.accep("[BOS]"), pynini.accep("[EOS]"), NEMO_SIGMA
+ ) # Empty characters become '0' for NEMO_CHAR fst, so need to block
+ graph_fractions_cardinals @= cardinal_graph
+ graph_fractions_cardinals += pynutil.insert(
+ "\" morphosyntactic_features: \"add_root\""
+ ) # blocking these entries to reduce erroneous possibilities in debugging
+
+ if deterministic:
+ graph_fractions_cardinals = (
+ pynini.closure(NEMO_DIGIT, 1, 2) @ graph_fractions_cardinals
+ ) # Past hundreds the conventional scheme can be hard to read. For determinism we stop here
+
+ graph_denominator = pynini.union(
+ graph_fractions_ordinals,
+ graph_fractions_cardinals,
+ pynutil.add_weight(cardinal_graph + pynutil.insert("\""), 0.001),
+ ) # Last form is simply recording the cardinal. Weighting so last resort
+
+ integer = pynutil.insert("integer_part: \"") + cardinal_graph + pynutil.insert("\"") + NEMO_SPACE
+ numerator = (
+ pynutil.insert("numerator: \"") + cardinal_graph + (pynini.cross("/", "\" ") | pynini.cross(" / ", "\" "))
+ )
+ denominator = pynutil.insert("denominator: \"") + graph_denominator
+
+ self.graph = pynini.closure(integer, 0, 1) + numerator + denominator
+
+ final_graph = self.add_tokens(self.graph)
+ self.fst = final_graph.optimize()
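Denominators 2-10 are routed through the ordinal graph with a couple of lexical exceptions applied on top ('medio', 'tercio'). A table-driven sketch of that small-denominator branch — the forms below are standard Spanish, but the helper name and table are invented here:

```python
# Ordinal-derived denominator forms for 2-10, including the two
# exceptions the grammar rewrites over the ordinal graph.
SMALL_DENOMINATORS = {
    2: "medio", 3: "tercio", 4: "cuarto", 5: "quinto", 6: "sexto",
    7: "séptimo", 8: "octavo", 9: "noveno", 10: "décimo",
}

def fraction_denominator(n: int) -> str:
    """Verbalize a small fraction denominator the way FractionFst does."""
    if n not in SMALL_DENOMINATORS:
        raise ValueError("sketch covers denominators 2-10 only")
    return SMALL_DENOMINATORS[n]
```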
diff --git a/nemo_text_processing/text_normalization/es/taggers/measure.py b/nemo_text_processing/text_normalization/es/taggers/measure.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/taggers/measure.py
@@ -0,0 +1,184 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import (
+ NEMO_ALPHA,
+ NEMO_DIGIT,
+ NEMO_NON_BREAKING_SPACE,
+ NEMO_SPACE,
+ GraphFst,
+ convert_space,
+ delete_space,
+ insert_space,
+)
+from nemo_text_processing.text_normalization.es.graph_utils import strip_cardinal_apocope
+from nemo_text_processing.text_normalization.es.utils import get_abs_path
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ unit = pynini.string_file(get_abs_path("data/measures/measurements.tsv"))
+ unit_plural_fem = pynini.string_file(get_abs_path("data/measures/measurements_plural_fem.tsv"))
+ unit_plural_masc = pynini.string_file(get_abs_path("data/measures/measurements_plural_masc.tsv"))
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ unit = None
+ unit_plural_fem = None
+ unit_plural_masc = None
+
+ PYNINI_AVAILABLE = False
+
+
+class MeasureFst(GraphFst):
+ """
+ Finite state transducer for classifying measure, e.g.
+ "2,4 g" -> measure { cardinal { integer_part: "dos" fractional_part: "cuatro" units: "gramos" preserve_order: true } }
+ "1 g" -> measure { cardinal { integer: "un" units: "gramo" preserve_order: true } }
+ "1 millón g" -> measure { cardinal { integer: "un" quantity: "millón" units: "gramos" preserve_order: true } }
+ This class also converts words containing numbers and letters
+ e.g. "a-8" -> "a ocho"
+ e.g. "1,2-a" -> "uno coma dos a"
+
+
+ Args:
+ cardinal: CardinalFst
+ decimal: DecimalFst
+ fraction: FractionFst
+ deterministic: if True will provide a single transduction option,
+ for False multiple transduction are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, cardinal: GraphFst, decimal: GraphFst, fraction: GraphFst, deterministic: bool = True):
+ super().__init__(name="measure", kind="classify", deterministic=deterministic)
+ cardinal_graph = cardinal.graph
+
+ unit_singular = unit
+ unit_plural = unit_singular @ (unit_plural_fem | unit_plural_masc)
+
+ graph_unit_singular = convert_space(unit_singular)
+ graph_unit_plural = convert_space(unit_plural)
+
+ optional_graph_negative = pynini.closure("-", 0, 1)
+
+ graph_unit_denominator = (
+ pynini.cross("/", "por") + pynutil.insert(NEMO_NON_BREAKING_SPACE) + graph_unit_singular
+ )
+
+ optional_unit_denominator = pynini.closure(
+ pynutil.insert(NEMO_NON_BREAKING_SPACE) + graph_unit_denominator, 0, 1,
+ )
+
+ unit_plural = (
+ pynutil.insert("units: \"")
+ + ((graph_unit_plural + optional_unit_denominator) | graph_unit_denominator)
+ + pynutil.insert("\"")
+ )
+
+ unit_singular_graph = (
+ pynutil.insert("units: \"")
+ + ((graph_unit_singular + optional_unit_denominator) | graph_unit_denominator)
+ + pynutil.insert("\"")
+ )
+
+ subgraph_decimal = decimal.fst + insert_space + pynini.closure(NEMO_SPACE, 0, 1) + unit_plural
+
+ subgraph_cardinal = (
+ (optional_graph_negative + (pynini.closure(NEMO_DIGIT) - "1")) @ cardinal.fst
+ + insert_space
+ + pynini.closure(delete_space, 0, 1)
+ + unit_plural
+ )
+
+ subgraph_cardinal |= (
+ (optional_graph_negative + pynini.accep("1")) @ cardinal.fst
+ + insert_space
+ + pynini.closure(delete_space, 0, 1)
+ + unit_singular_graph
+ )
+
+ subgraph_fraction = fraction.fst + insert_space + pynini.closure(delete_space, 0, 1) + unit_plural
+
+ decimal_times = (
+ pynutil.insert("decimal { ")
+ + decimal.final_graph_wo_negative
+ + pynutil.insert(" } units: \"")
+ + pynini.union('x', 'X')
+ + pynutil.insert("\"")
+ )
+
+ cardinal_times = (
+ pynutil.insert("cardinal { integer: \"")
+ + strip_cardinal_apocope(cardinal_graph)
+ + pynutil.insert("\" } units: \"")
+ + pynini.union('x', 'X')
+ + pynutil.insert("\"")
+ )
+
+ cardinal_dash_alpha = (
+ pynutil.insert("cardinal { integer: \"")
+ + strip_cardinal_apocope(cardinal_graph)
+ + pynutil.delete('-')
+ + pynutil.insert("\" } units: \"")
+ + pynini.closure(NEMO_ALPHA, 1)
+ + pynutil.insert("\"")
+ )
+
+ decimal_dash_alpha = (
+ pynutil.insert("decimal { ")
+ + decimal.final_graph_wo_negative
+ + pynutil.delete('-')
+ + pynutil.insert(" } units: \"")
+ + pynini.closure(NEMO_ALPHA, 1)
+ + pynutil.insert("\"")
+ )
+
+ alpha_dash_cardinal = (
+ pynutil.insert("units: \"")
+ + pynini.closure(NEMO_ALPHA, 1)
+ + pynutil.delete('-')
+ + pynutil.insert("\"")
+ + pynutil.insert(" cardinal { integer: \"")
+ + cardinal_graph
+ + pynutil.insert("\" } preserve_order: true")
+ )
+
+ alpha_dash_decimal = (
+ pynutil.insert("units: \"")
+ + pynini.closure(NEMO_ALPHA, 1)
+ + pynutil.delete('-')
+ + pynutil.insert("\"")
+ + pynutil.insert(" decimal { ")
+ + decimal.final_graph_wo_negative
+ + pynutil.insert(" } preserve_order: true")
+ )
+
+ final_graph = (
+ subgraph_decimal
+ | subgraph_cardinal
+ | subgraph_fraction
+ | cardinal_dash_alpha
+ | alpha_dash_cardinal
+ | decimal_dash_alpha
+ | decimal_times
+ | cardinal_times
+ | alpha_dash_decimal
+ )
+ final_graph += pynutil.insert(" preserve_order: true")
+ final_graph = self.add_tokens(final_graph)
+
+ self.fst = final_graph.optimize()
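`subgraph_cardinal` above routes the exact cardinal "1" (optionally negative) to the singular unit and every other amount to the plural. That agreement rule in isolation, as a hypothetical helper:

```python
def pick_unit(amount: str, singular: str, plural: str) -> str:
    """Choose unit number the way subgraph_cardinal does: only the
    bare cardinal '1' (with an optional leading '-') is singular."""
    return singular if amount.lstrip("-") == "1" else plural
```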
diff --git a/nemo_text_processing/text_normalization/es/taggers/money.py b/nemo_text_processing/text_normalization/es/taggers/money.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/taggers/money.py
@@ -0,0 +1,194 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import (
+ NEMO_ALPHA,
+ NEMO_DIGIT,
+ NEMO_SIGMA,
+ NEMO_SPACE,
+ GraphFst,
+ delete_space,
+ insert_space,
+)
+from nemo_text_processing.text_normalization.es.graph_utils import decimal_separator
+from nemo_text_processing.text_normalization.es.utils import get_abs_path, load_labels
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ maj_singular_labels = load_labels(get_abs_path("data/money/currency_major.tsv"))
+ maj_singular = pynini.string_file((get_abs_path("data/money/currency_major.tsv")))
+ min_singular = pynini.string_file(get_abs_path("data/money/currency_minor.tsv"))
+ fem_plural = pynini.string_file((get_abs_path("data/money/currency_plural_fem.tsv")))
+ masc_plural = pynini.string_file((get_abs_path("data/money/currency_plural_masc.tsv")))
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ maj_singular_labels = None
+ min_singular = None
+ maj_singular = None
+ fem_plural = None
+ masc_plural = None
+
+ PYNINI_AVAILABLE = False
+
+
+class MoneyFst(GraphFst):
+ """
+ Finite state transducer for classifying money, e.g.
+ "โฌ1" -> money { currency_maj: "euro" integer_part: "un"}
+ "โฌ1,000" -> money { currency_maj: "euro" integer_part: "un" }
+ "โฌ1,001" -> money { currency_maj: "euro" integer_part: "un" fractional_part: "cero cero un" }
+ "ยฃ1,4" -> money { integer_part: "una" currency_maj: "libra" fractional_part: "cuarenta" preserve_order: true }
+ -> money { integer_part: "una" currency_maj: "libra" fractional_part: "cuarenta" currency_min: "penique" preserve_order: true }
+ "0,01 ยฃ" -> money { fractional_part: "un" currency_min: "penique" preserve_order: true }
+ "0,02 ยฃ" -> money { fractional_part: "dos" currency_min: "peniques" preserve_order: true }
+ "ยฃ0,01 million" -> money { currency_maj: "libra" integer_part: "cero" fractional_part: "cero un" quantity: "million" }
+
+ Args:
+ cardinal: CardinalFst
+ decimal: DecimalFst
+ deterministic: if True will provide a single transduction option,
+            for False multiple transductions are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, cardinal: GraphFst, decimal: GraphFst, deterministic: bool = True):
+ super().__init__(name="money", kind="classify", deterministic=deterministic)
+ cardinal_graph = cardinal.graph
+ graph_decimal_final = decimal.final_graph_wo_negative
+
+ maj_singular_graph = maj_singular
+ min_singular_graph = min_singular
+ maj_plural_graph = maj_singular @ (fem_plural | masc_plural)
+ min_plural_graph = min_singular @ (fem_plural | masc_plural)
+
+ graph_maj_singular = pynutil.insert("currency_maj: \"") + maj_singular_graph + pynutil.insert("\"")
+ graph_maj_plural = pynutil.insert("currency_maj: \"") + maj_plural_graph + pynutil.insert("\"")
+
+ graph_integer_one = pynutil.insert("integer_part: \"") + pynini.cross("1", "un") + pynutil.insert("\"")
+
+ decimal_with_quantity = (NEMO_SIGMA + NEMO_ALPHA) @ graph_decimal_final
+
+ graph_decimal_plural = pynini.union(
+ graph_maj_plural + pynini.closure(delete_space, 0, 1) + insert_space + graph_decimal_final, # $1,05
+ graph_decimal_final + pynini.closure(delete_space, 0, 1) + insert_space + graph_maj_plural, # 1,05 $
+ )
+ graph_decimal_plural = (
+ (NEMO_SIGMA - "1") + decimal_separator + NEMO_SIGMA
+ ) @ graph_decimal_plural # Can't have "un euros"
+
+ graph_decimal_singular = pynini.union(
+ graph_maj_singular + pynini.closure(delete_space, 0, 1) + insert_space + graph_decimal_final, # $1,05
+ graph_decimal_final + pynini.closure(delete_space, 0, 1) + insert_space + graph_maj_singular, # 1,05 $
+ )
+ graph_decimal_singular = (pynini.accep("1") + decimal_separator + NEMO_SIGMA) @ graph_decimal_singular
+
+ graph_decimal = pynini.union(
+ graph_decimal_singular,
+ graph_decimal_plural,
+ graph_maj_plural + pynini.closure(delete_space, 0, 1) + insert_space + decimal_with_quantity,
+ )
+
+ graph_integer = (
+ pynutil.insert("integer_part: \"") + ((NEMO_SIGMA - "1") @ cardinal_graph) + pynutil.insert("\"")
+ )
+
+ graph_integer_only = pynini.union(
+ graph_maj_singular + pynini.closure(delete_space, 0, 1) + insert_space + graph_integer_one,
+ graph_integer_one + pynini.closure(delete_space, 0, 1) + insert_space + graph_maj_singular,
+ )
+ graph_integer_only |= pynini.union(
+ graph_maj_plural + pynini.closure(delete_space, 0, 1) + insert_space + graph_integer,
+ graph_integer + pynini.closure(delete_space, 0, 1) + insert_space + graph_maj_plural,
+ )
+
+ graph = graph_integer_only | graph_decimal
+
+ # remove trailing zeros of non zero number in the first 2 digits and fill up to 2 digits
+ # e.g. 2000 -> 20, 0200->02, 01 -> 01, 10 -> 10
+ # not accepted: 002, 00, 0,
+ two_digits_fractional_part = (
+ pynini.closure(NEMO_DIGIT) + (NEMO_DIGIT - "0") + pynini.closure(pynutil.delete("0"))
+ ) @ (
+ (pynutil.delete("0") + (NEMO_DIGIT - "0"))
+ | ((NEMO_DIGIT - "0") + pynutil.insert("0"))
+ | ((NEMO_DIGIT - "0") + NEMO_DIGIT)
+ )
+
+ graph_min_singular = pynutil.insert("currency_min: \"") + min_singular_graph + pynutil.insert("\"")
+ graph_min_plural = pynutil.insert("currency_min: \"") + min_plural_graph + pynutil.insert("\"")
+
+ # format ** euro ** cent
+ decimal_graph_with_minor = None
+ for curr_symbol, _ in maj_singular_labels:
+ preserve_order = pynutil.insert(" preserve_order: true")
+
+ integer_plus_maj = pynini.union(
+ graph_integer + insert_space + pynutil.insert(curr_symbol) @ graph_maj_plural,
+ graph_integer_one + insert_space + pynutil.insert(curr_symbol) @ graph_maj_singular,
+ )
+ # non zero integer part
+ integer_plus_maj = (pynini.closure(NEMO_DIGIT) - "0") @ integer_plus_maj
+
+ graph_fractional_one = (
+ pynutil.insert("fractional_part: \"")
+ + two_digits_fractional_part @ pynini.cross("1", "un")
+ + pynutil.insert("\"")
+ )
+
+ graph_fractional = (
+ two_digits_fractional_part @ (pynini.closure(NEMO_DIGIT, 1, 2) - "1") @ cardinal.two_digit_non_zero
+ )
+ graph_fractional = pynutil.insert("fractional_part: \"") + graph_fractional + pynutil.insert("\"")
+
+ fractional_plus_min = pynini.union(
+ graph_fractional + insert_space + pynutil.insert(curr_symbol) @ graph_min_plural,
+ graph_fractional_one + insert_space + pynutil.insert(curr_symbol) @ graph_min_singular,
+ )
+
+ decimal_graph_with_minor_curr = (
+ integer_plus_maj + pynini.cross(decimal_separator, NEMO_SPACE) + fractional_plus_min
+ )
+ decimal_graph_with_minor_curr |= pynutil.add_weight(
+ integer_plus_maj
+ + pynini.cross(decimal_separator, NEMO_SPACE)
+ + pynutil.insert("fractional_part: \"")
+ + two_digits_fractional_part @ cardinal.two_digit_non_zero
+ + pynutil.insert("\""),
+ weight=0.0001,
+ )
+
+ decimal_graph_with_minor_curr |= pynutil.delete("0,") + fractional_plus_min
+ decimal_graph_with_minor_curr = pynini.union(
+ pynutil.delete(curr_symbol)
+ + pynini.closure(delete_space, 0, 1)
+ + decimal_graph_with_minor_curr
+ + preserve_order,
+ decimal_graph_with_minor_curr
+ + preserve_order
+ + pynini.closure(delete_space, 0, 1)
+ + pynutil.delete(curr_symbol),
+ )
+
+ decimal_graph_with_minor = (
+ decimal_graph_with_minor_curr
+ if decimal_graph_with_minor is None
+ else pynini.union(decimal_graph_with_minor, decimal_graph_with_minor_curr)
+ )
+
+ final_graph = graph | pynutil.add_weight(decimal_graph_with_minor, -0.001)
+
+ final_graph = self.add_tokens(final_graph)
+ self.fst = final_graph.optimize()
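For readers without pynini installed, the net effect of the `two_digits_fractional_part` rule above (strip trailing zeros of the first two fractional digits, then pad back to two digits before reading them as cents) can be paraphrased in plain Python. `cents_value` is a hypothetical helper name used only for illustration; the FST itself emits a digit string that the cardinal grammar then verbalizes.

```python
def cents_value(fraction: str):
    """Interpret the digits after a decimal separator as a count of minor
    currency units, mirroring two_digits_fractional_part's net effect:
    "2000" -> 20, "01" -> 1, "4" -> 40, "10" -> 10.
    Inputs the grammar rejects ("002", "00", "0") yield None."""
    two = fraction.ljust(2, "0")  # right-pad: "4" -> "40", "01" -> "01"
    if not fraction.isdigit() or two[:2] == "00" or set(two[2:]) - {"0"}:
        return None  # rejected by the grammar
    return int(two[:2])
```

This is a sketch of the rule's arithmetic, not the FST's composition; only the first two fractional digits carry value, and anything beyond them must be zero.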
diff --git a/nemo_text_processing/text_normalization/es/taggers/ordinal.py b/nemo_text_processing/text_normalization/es/taggers/ordinal.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/taggers/ordinal.py
@@ -0,0 +1,186 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import (
+ NEMO_CHAR,
+ NEMO_SIGMA,
+ NEMO_SPACE,
+ GraphFst,
+ delete_space,
+)
+from nemo_text_processing.text_normalization.es.graph_utils import roman_to_int, strip_accent
+from nemo_text_processing.text_normalization.es.utils import get_abs_path
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ digit = pynini.invert(pynini.string_file(get_abs_path("data/ordinals/digit.tsv")))
+ teens = pynini.invert(pynini.string_file(get_abs_path("data/ordinals/teen.tsv")))
+ twenties = pynini.invert(pynini.string_file(get_abs_path("data/ordinals/twenties.tsv")))
+ ties = pynini.invert(pynini.string_file(get_abs_path("data/ordinals/ties.tsv")))
+ hundreds = pynini.invert(pynini.string_file(get_abs_path("data/ordinals/hundreds.tsv")))
+
+ PYNINI_AVAILABLE = True
+
+except (ImportError, ModuleNotFoundError):
+ digit = None
+ teens = None
+ twenties = None
+ ties = None
+ hundreds = None
+
+ PYNINI_AVAILABLE = False
+
+
+def get_one_to_one_thousand(cardinal: 'pynini.FstLike') -> 'pynini.FstLike':
+ """
+ Produces an acceptor for verbalizations of all numbers from 1 to 1000. Needed for ordinals and fractions.
+
+ Args:
+ cardinal: CardinalFst
+
+ Returns:
+ fst: A pynini.FstLike object
+ """
+ numbers = pynini.string_map([str(_) for _ in range(1, 1000)]) @ cardinal
+ return pynini.project(numbers, "output").optimize()
+
+
+class OrdinalFst(GraphFst):
+ """
+ Finite state transducer for classifying ordinal
+ "21.ยบ" -> ordinal { integer: "vigรฉsimo primero" morphosyntactic_features: "gender_masc" }
+ This class converts ordinal up to the millionth (millonรฉsimo) order (exclusive).
+
+ This FST also records the ending of the ordinal (called "morphosyntactic_features"):
+ either as gender_masc, gender_fem, or apocope. Also introduces plural feature for non-deterministic graphs.
+
+ Args:
+ cardinal: CardinalFst
+ deterministic: if True will provide a single transduction option,
+            for False multiple transductions are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, cardinal: GraphFst, deterministic: bool = True):
+ super().__init__(name="ordinal", kind="classify")
+ cardinal_graph = cardinal.graph
+
+ graph_digit = digit.optimize()
+ graph_teens = teens.optimize()
+ graph_ties = ties.optimize()
+ graph_twenties = twenties.optimize()
+ graph_hundreds = hundreds.optimize()
+
+ if not deterministic:
+ # Some alternative derivations
+ graph_ties = graph_ties | pynini.cross("sesenta", "setuagรฉsimo")
+
+ graph_teens = graph_teens | pynini.cross("once", "decimoprimero")
+ graph_teens |= pynini.cross("doce", "decimosegundo")
+
+ graph_digit = graph_digit | pynini.cross("nueve", "nono")
+ graph_digit |= pynini.cross("siete", "sรฉtimo")
+
+ graph_tens_component = (
+ graph_teens
+ | (graph_ties + pynini.closure(pynini.cross(" y ", NEMO_SPACE) + graph_digit, 0, 1))
+ | graph_twenties
+ )
+
+ graph_hundred_component = pynini.union(
+ graph_hundreds + pynini.closure(NEMO_SPACE + pynini.union(graph_tens_component, graph_digit), 0, 1),
+ graph_tens_component,
+ graph_digit,
+ )
+
+ # Need to go up to thousands for fractions
+ self.one_to_one_thousand = get_one_to_one_thousand(cardinal_graph)
+
+ thousands = pynini.cross("mil", "milรฉsimo")
+
+ graph_thousands = (
+ strip_accent(self.one_to_one_thousand) + NEMO_SPACE + thousands
+        ) # Cardinals become a prefix for the thousands series. Since stress falls on the power of ten, we strip accents from the leading words
+        graph_thousands @= pynini.cdrewrite(delete_space, "", "milésimo", NEMO_SIGMA) # merge as a prefix
+ graph_thousands |= thousands
+
+ self.multiples_of_thousand = (cardinal_graph @ graph_thousands).optimize()
+
+ if (
+ not deterministic
+ ): # Formally the words preceding the power of ten should be a prefix, but some maintain word boundaries.
+ graph_thousands |= (self.one_to_one_thousand @ graph_hundred_component) + NEMO_SPACE + thousands
+
+ graph_thousands += pynini.closure(NEMO_SPACE + graph_hundred_component, 0, 1)
+
+ ordinal_graph = graph_thousands | graph_hundred_component
+ ordinal_graph = cardinal_graph @ ordinal_graph
+
+ if not deterministic:
+ # The 10's and 20's series can also be two words
+ split_words = pynini.cross("decimo", "dรฉcimo ") | pynini.cross("vigesimo", "vigรฉsimo ")
+ split_words = pynini.cdrewrite(split_words, "", NEMO_CHAR, NEMO_SIGMA)
+ ordinal_graph |= ordinal_graph @ split_words
+
+ # If "octavo" is preceeded by the "o" within string, it needs deletion
+ ordinal_graph @= pynini.cdrewrite(pynutil.delete("o"), "", "octavo", NEMO_SIGMA)
+
+ self.graph = ordinal_graph.optimize()
+
+ masc = pynini.accep("gender_masc")
+ fem = pynini.accep("gender_fem")
+ apocope = pynini.accep("apocope")
+
+        delete_period = pynini.closure(pynutil.delete("."), 0, 1) # Sometimes the period is omitted
+
+        accept_masc = delete_period + pynini.cross("º", masc)
+        accep_fem = delete_period + pynini.cross("ª", fem)
+        accep_apocope = delete_period + pynini.cross("ᵉʳ", apocope)
+
+ # Managing Romanization
+ graph_roman = pynutil.insert("integer: \"") + roman_to_int(ordinal_graph) + pynutil.insert("\"")
+ if not deterministic:
+ # Introduce plural
+ plural = pynini.closure(pynutil.insert("/plural"), 0, 1)
+ accept_masc += plural
+ accep_fem += plural
+
+ # Romanizations have no morphology marker, so in non-deterministic case we provide option for all
+ insert_morphology = pynutil.insert(pynini.union(masc, fem)) + plural
+ insert_morphology |= pynutil.insert(apocope)
+ insert_morphology = (
+ pynutil.insert(" morphosyntactic_features: \"") + insert_morphology + pynutil.insert("\"")
+ )
+
+ graph_roman += insert_morphology
+
+ else:
+ # We assume masculine gender as default
+ graph_roman += pynutil.insert(" morphosyntactic_features: \"gender_masc\"")
+
+ # Rest of graph
+ convert_abbreviation = accept_masc | accep_fem | accep_apocope
+
+ graph = (
+ pynutil.insert("integer: \"")
+ + ordinal_graph
+ + pynutil.insert("\"")
+ + pynutil.insert(" morphosyntactic_features: \"")
+ + convert_abbreviation
+ + pynutil.insert("\"")
+ )
+ graph = pynini.union(graph, graph_roman)
+
+ final_graph = self.add_tokens(graph)
+ self.fst = final_graph.optimize()
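The tagger above composes `roman_to_int` (imported from `graph_utils`) with the ordinal graph so that Romanized ordinals such as "XXI" are read out. As a rough, illustrative sketch of what that mapping does numerically (not the FST implementation), the standard subtractive Roman-numeral conversion in plain Python:

```python
ROMAN_VALUES = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}

def roman_to_int(numeral: str) -> int:
    """Convert a Roman numeral using the subtractive rule: a symbol
    smaller than its right neighbour is subtracted (IV -> 4)."""
    total = 0
    for ch, nxt in zip(numeral, numeral[1:] + " "):
        value = ROMAN_VALUES[ch]
        total += -value if ROMAN_VALUES.get(nxt, 0) > value else value
    return total
```

So "XXI" maps to 21, which the ordinal graph would then verbalize as "vigésimo primero" with an inserted morphology feature.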
diff --git a/nemo_text_processing/text_normalization/es/taggers/telephone.py b/nemo_text_processing/text_normalization/es/taggers/telephone.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/taggers/telephone.py
@@ -0,0 +1,156 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import NEMO_SIGMA, GraphFst, insert_space
+from nemo_text_processing.text_normalization.es.graph_utils import ones
+from nemo_text_processing.text_normalization.es.utils import get_abs_path
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ graph_digit = pynini.string_file(get_abs_path("data/numbers/digit.tsv"))
+ graph_ties = pynini.string_file(get_abs_path("data/numbers/ties.tsv"))
+ graph_teen = pynini.string_file(get_abs_path("data/numbers/teen.tsv"))
+ graph_twenties = pynini.string_file(get_abs_path("data/numbers/twenties.tsv"))
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ graph_digit = None
+ graph_ties = None
+ graph_teen = None
+ graph_twenties = None
+
+ PYNINI_AVAILABLE = False
+
+
+class TelephoneFst(GraphFst):
+ """
+ Finite state transducer for classifying telephone numbers, e.g.
+ 123-123-5678 -> { number_part: "uno dos tres uno dos tres cinco seis siete ocho" }.
+ In Spanish, digits are generally read individually, or as 2-digit numbers,
+ eg. "123" = "uno dos tres",
+ "1234" = "doce treinta y cuatro".
+ This will verbalize sequences of 10 (3+3+4 e.g. 123-456-7890).
+ 9 (3+3+3 e.g. 123-456-789) and 8 (4+4 e.g. 1234-5678) digits.
+
+ (we ignore more complicated cases such as "doscientos y dos" or "tres nueves").
+
+ Args:
+ deterministic: if True will provide a single transduction option,
+            for False multiple transductions are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, deterministic: bool = True):
+ super().__init__(name="telephone", kind="classify")
+
+ # create `single_digits` and `double_digits` graphs as these will be
+ # the building blocks of possible telephone numbers
+ single_digits = pynini.invert(graph_digit).optimize() | pynini.cross("0", "cero")
+
+ double_digits = pynini.union(
+ graph_twenties,
+ graph_teen,
+ (graph_ties + pynutil.delete("0")),
+ (graph_ties + insert_space + pynutil.insert("y") + insert_space + graph_digit),
+ )
+ double_digits = pynini.invert(double_digits)
+
+ # define `ten_digit_graph`, `nine_digit_graph`, `eight_digit_graph`
+ # which produces telephone numbers spoken (1) only with single digits,
+ # or (2) spoken with double digits (and sometimes single digits)
+
+ # 10-digit option (1): all single digits
+ ten_digit_graph = (
+ pynini.closure(single_digits + insert_space, 3, 3)
+ + pynutil.delete("-")
+ + pynini.closure(single_digits + insert_space, 3, 3)
+ + pynutil.delete("-")
+ + pynini.closure(single_digits + insert_space, 3, 3)
+ + single_digits
+ )
+
+ # 9-digit option (1): all single digits
+ nine_digit_graph = (
+ pynini.closure(single_digits + insert_space, 3, 3)
+ + pynutil.delete("-")
+ + pynini.closure(single_digits + insert_space, 3, 3)
+ + pynutil.delete("-")
+ + pynini.closure(single_digits + insert_space, 2, 2)
+ + single_digits
+ )
+
+ # 8-digit option (1): all single digits
+ eight_digit_graph = (
+ pynini.closure(single_digits + insert_space, 4, 4)
+ + pynutil.delete("-")
+ + pynini.closure(single_digits + insert_space, 3, 3)
+ + single_digits
+ )
+
+ if not deterministic:
+ # 10-digit option (2): (1+2) + (1+2) + (2+2) digits
+ ten_digit_graph |= (
+ single_digits
+ + insert_space
+ + double_digits
+ + insert_space
+ + pynutil.delete("-")
+ + single_digits
+ + insert_space
+ + double_digits
+ + insert_space
+ + pynutil.delete("-")
+ + double_digits
+ + insert_space
+ + double_digits
+ )
+
+ # 9-digit option (2): (1+2) + (1+2) + (1+2) digits
+ nine_digit_graph |= (
+ single_digits
+ + insert_space
+ + double_digits
+ + insert_space
+ + pynutil.delete("-")
+ + single_digits
+ + insert_space
+ + double_digits
+ + insert_space
+ + pynutil.delete("-")
+ + single_digits
+ + insert_space
+ + double_digits
+ )
+
+ # 8-digit option (2): (2+2) + (2+2) digits
+ eight_digit_graph |= (
+ double_digits
+ + insert_space
+ + double_digits
+ + insert_space
+ + pynutil.delete("-")
+ + double_digits
+ + insert_space
+ + double_digits
+ )
+
+ number_part = pynini.union(ten_digit_graph, nine_digit_graph, eight_digit_graph)
+ number_part @= pynini.cdrewrite(pynini.cross(ones, "uno"), "", "", NEMO_SIGMA)
+
+ number_part = pynutil.insert("number_part: \"") + number_part + pynutil.insert("\"")
+
+ graph = number_part
+ final_graph = self.add_tokens(graph)
+ self.fst = final_graph.optimize()
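Option (1) above (all single digits) can be paraphrased without pynini. The digit names and the expected output come from the class docstring; `verbalize_phone` is a hypothetical helper name used only for illustration:

```python
SPANISH_DIGITS = {
    "0": "cero", "1": "uno", "2": "dos", "3": "tres", "4": "cuatro",
    "5": "cinco", "6": "seis", "7": "siete", "8": "ocho", "9": "nueve",
}

def verbalize_phone(number: str) -> str:
    """Read a hyphen-delimited telephone number digit by digit,
    dropping the separators (the single-digit option in the tagger)."""
    return " ".join(SPANISH_DIGITS[d] for d in number if d != "-")
```

The double-digit option (2) would instead group pairs like "34" into "treinta y cuatro"; only the digit-by-digit path is sketched here.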
diff --git a/nemo_text_processing/text_normalization/es/taggers/time.py b/nemo_text_processing/text_normalization/es/taggers/time.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/taggers/time.py
@@ -0,0 +1,218 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import (
+ NEMO_DIGIT,
+ NEMO_SIGMA,
+ GraphFst,
+ delete_space,
+ insert_space,
+)
+from nemo_text_processing.text_normalization.es.utils import get_abs_path
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ time_zone_graph = pynini.string_file(get_abs_path("data/time/time_zone.tsv"))
+ suffix = pynini.string_file(get_abs_path("data/time/time_suffix.tsv"))
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ time_zone_graph = None
+ suffix = None
+
+ PYNINI_AVAILABLE = False
+
+
+class TimeFst(GraphFst):
+ """
+ Finite state transducer for classifying time, e.g.
+ "02:15 est" -> time { hours: "dos" minutes: "quince" zone: "e s t"}
+ "2 h" -> time { hours: "dos" }
+ "9 h" -> time { hours: "nueve" }
+ "02:15:10 h" -> time { hours: "dos" minutes: "quince" seconds: "diez"}
+
+ Args:
+ cardinal: CardinalFst
+ deterministic: if True will provide a single transduction option,
+            for False multiple transductions are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, cardinal: GraphFst, deterministic: bool = True):
+ super().__init__(name="time", kind="classify", deterministic=deterministic)
+
+ delete_time_delimiter = pynutil.delete(pynini.union(".", ":"))
+
+        one = pynini.string_map([("un", "una"), ("ún", "una")])
+ change_one = pynini.cdrewrite(one, "", "", NEMO_SIGMA)
+ cardinal_graph = cardinal.graph @ change_one
+
+ day_suffix = pynutil.insert("suffix: \"") + suffix + pynutil.insert("\"")
+ day_suffix = delete_space + insert_space + day_suffix
+
+ delete_hora_suffix = delete_space + insert_space + pynutil.delete("h")
+ delete_minute_suffix = delete_space + insert_space + pynutil.delete("min")
+ delete_second_suffix = delete_space + insert_space + pynutil.delete("s")
+
+ labels_hour_24 = [
+ str(x) for x in range(0, 25)
+        ] # Both systems appear; the twelve-hour clock requires a. m./p. m. for ambiguity resolution
+ labels_hour_12 = [str(x) for x in range(1, 13)]
+ labels_minute_single = [str(x) for x in range(1, 10)]
+ labels_minute_double = [str(x) for x in range(10, 60)]
+
+ delete_leading_zero_to_double_digit = (
+ pynini.closure(pynutil.delete("0") | (NEMO_DIGIT - "0"), 0, 1) + NEMO_DIGIT
+ )
+
+ graph_24 = (
+ pynini.closure(NEMO_DIGIT, 1, 2) @ delete_leading_zero_to_double_digit @ pynini.union(*labels_hour_24)
+ )
+ graph_12 = (
+ pynini.closure(NEMO_DIGIT, 1, 2) @ delete_leading_zero_to_double_digit @ pynini.union(*labels_hour_12)
+ )
+
+ graph_hour_24 = graph_24 @ cardinal_graph
+ graph_hour_12 = graph_12 @ cardinal_graph
+
+ graph_minute_single = delete_leading_zero_to_double_digit @ pynini.union(*labels_minute_single)
+ graph_minute_double = pynini.union(*labels_minute_double)
+
+ graph_minute = pynini.union(graph_minute_single, graph_minute_double) @ cardinal_graph
+
+ final_graph_hour_only_24 = (
+ pynutil.insert("hours: \"") + graph_hour_24 + pynutil.insert("\"") + delete_hora_suffix
+ )
+ final_graph_hour_only_12 = pynutil.insert("hours: \"") + graph_hour_12 + pynutil.insert("\"") + day_suffix
+
+ final_graph_hour_24 = pynutil.insert("hours: \"") + graph_hour_24 + pynutil.insert("\"")
+ final_graph_hour_12 = pynutil.insert("hours: \"") + graph_hour_12 + pynutil.insert("\"")
+
+ final_graph_minute = pynutil.insert("minutes: \"") + graph_minute + pynutil.insert("\"")
+ final_graph_second = pynutil.insert("seconds: \"") + graph_minute + pynutil.insert("\"")
+ final_time_zone_optional = pynini.closure(
+ delete_space + insert_space + pynutil.insert("zone: \"") + time_zone_graph + pynutil.insert("\""), 0, 1,
+ )
+
+ # 02.30 h
+ graph_hm = (
+ final_graph_hour_24
+ + delete_time_delimiter
+ + (pynutil.delete("00") | (insert_space + final_graph_minute))
+ + pynini.closure(
+ delete_time_delimiter + (pynini.cross("00", " seconds: \"0\"") | (insert_space + final_graph_second)),
+ 0,
+ 1,
+ ) # For seconds 2.30.35 h
+ + pynini.closure(delete_hora_suffix, 0, 1) # 2.30 is valid if unambiguous
+ + final_time_zone_optional
+ )
+
+ # 2 h 30 min
+ graph_hm |= (
+ final_graph_hour_24
+ + delete_hora_suffix
+ + delete_space
+ + (pynutil.delete("00") | (insert_space + final_graph_minute))
+ + delete_minute_suffix
+ + pynini.closure(
+ delete_space
+ + (pynini.cross("00", " seconds: \"0\"") | (insert_space + final_graph_second))
+ + delete_second_suffix,
+ 0,
+ 1,
+ ) # For seconds
+ + final_time_zone_optional
+ )
+
+ # 2.30 a. m. (Only for 12 hour clock)
+ graph_hm |= (
+ final_graph_hour_12
+ + delete_time_delimiter
+ + (pynutil.delete("00") | (insert_space + final_graph_minute))
+ + pynini.closure(
+ delete_time_delimiter + (pynini.cross("00", " seconds: \"0\"") | (insert_space + final_graph_second)),
+ 0,
+ 1,
+ ) # For seconds 2.30.35 a. m.
+ + day_suffix
+ + final_time_zone_optional
+ )
+
+ graph_h = (
+ pynini.union(final_graph_hour_only_24, final_graph_hour_only_12) + final_time_zone_optional
+ ) # Should always have a time indicator, else we'll pass to cardinals
+
+ if not deterministic:
+            # This includes alternate vocalizations (hour menos min, min para hour); here we shift the times and add a `style` tag
+ hour_shift_24 = pynini.invert(pynini.string_file(get_abs_path("data/time/hour_to_24.tsv")))
+ hour_shift_12 = pynini.invert(pynini.string_file(get_abs_path("data/time/hour_to_12.tsv")))
+ minute_shift = pynini.string_file(get_abs_path("data/time/minute_to.tsv"))
+
+ graph_hour_to_24 = graph_24 @ hour_shift_24 @ cardinal_graph
+ graph_hour_to_12 = graph_12 @ hour_shift_12 @ cardinal_graph
+
+ graph_minute_to = pynini.union(graph_minute_single, graph_minute_double) @ minute_shift @ cardinal_graph
+
+ final_graph_hour_to_24 = pynutil.insert("hours: \"") + graph_hour_to_24 + pynutil.insert("\"")
+ final_graph_hour_to_12 = pynutil.insert("hours: \"") + graph_hour_to_12 + pynutil.insert("\"")
+
+ final_graph_minute_to = pynutil.insert("minutes: \"") + graph_minute_to + pynutil.insert("\"")
+
+ graph_menos = pynutil.insert(" style: \"1\"")
+ graph_para = pynutil.insert(" style: \"2\"")
+
+ final_graph_style = graph_menos | graph_para
+
+ # 02.30 h (omitting seconds since a bit awkward)
+ graph_hm |= (
+ final_graph_hour_to_24
+ + delete_time_delimiter
+ + insert_space
+ + final_graph_minute_to
+ + pynini.closure(delete_hora_suffix, 0, 1) # 2.30 is valid if unambiguous
+ + final_time_zone_optional
+ + final_graph_style
+ )
+
+ # 2 h 30 min
+ graph_hm |= (
+ final_graph_hour_to_24
+ + delete_hora_suffix
+ + delete_space
+ + insert_space
+ + final_graph_minute_to
+ + delete_minute_suffix
+ + final_time_zone_optional
+ + final_graph_style
+ )
+
+ # 2.30 a. m. (Only for 12 hour clock)
+ graph_hm |= (
+ final_graph_hour_to_12
+ + delete_time_delimiter
+ + insert_space
+ + final_graph_minute_to
+ + day_suffix
+ + final_time_zone_optional
+ + final_graph_style
+ )
+
+ final_graph = graph_hm | graph_h
+ if deterministic:
+ final_graph = final_graph + pynutil.insert(" preserve_order: true")
+ final_graph = final_graph.optimize()
+ final_graph = self.add_tokens(final_graph)
+ self.fst = final_graph.optimize()
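The hour handling above deletes a leading zero and then checks membership in the 24-hour labels (0-24, per `range(0, 25)`) or the 12-hour labels (1-12, which need an a. m./p. m. suffix for disambiguation). A pure-Python paraphrase, with `normalize_hour` as a hypothetical helper name for illustration only:

```python
def normalize_hour(hour: str, twelve_hour: bool = False):
    """Mirror delete_leading_zero_to_double_digit plus the label check:
    "02" -> "2"; hours outside the label set yield None
    (the grammar simply fails to match them)."""
    if not hour.isdigit():
        return None
    value = int(hour)  # int() drops the leading zero
    labels = range(1, 13) if twelve_hour else range(0, 25)
    return str(value) if value in labels else None
```

The returned digit string would then pass through the cardinal graph (with "un"/"ún" rewritten to "una" for hours, as in `change_one`).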
diff --git a/nemo_text_processing/text_normalization/es/taggers/tokenize_and_classify.py b/nemo_text_processing/text_normalization/es/taggers/tokenize_and_classify.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/taggers/tokenize_and_classify.py
@@ -0,0 +1,157 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+
+from nemo_text_processing.text_normalization.en.graph_utils import (
+ NEMO_WHITE_SPACE,
+ GraphFst,
+ delete_extra_space,
+ delete_space,
+ generator_main,
+)
+from nemo_text_processing.text_normalization.en.taggers.punctuation import PunctuationFst
+from nemo_text_processing.text_normalization.es.taggers.cardinal import CardinalFst
+from nemo_text_processing.text_normalization.es.taggers.date import DateFst
+from nemo_text_processing.text_normalization.es.taggers.decimals import DecimalFst
+from nemo_text_processing.text_normalization.es.taggers.electronic import ElectronicFst
+from nemo_text_processing.text_normalization.es.taggers.fraction import FractionFst
+from nemo_text_processing.text_normalization.es.taggers.measure import MeasureFst
+from nemo_text_processing.text_normalization.es.taggers.money import MoneyFst
+from nemo_text_processing.text_normalization.es.taggers.ordinal import OrdinalFst
+from nemo_text_processing.text_normalization.es.taggers.telephone import TelephoneFst
+from nemo_text_processing.text_normalization.es.taggers.time import TimeFst
+from nemo_text_processing.text_normalization.es.taggers.whitelist import WhiteListFst
+from nemo_text_processing.text_normalization.es.taggers.word import WordFst
+
+from nemo.utils import logging
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ PYNINI_AVAILABLE = False
+
+
+class ClassifyFst(GraphFst):
+ """
+    Final class that composes all other classification grammars. This class can process an entire sentence that is lower-cased.
+    For deployment, this grammar will be compiled and exported to an OpenFst Finite State Archive (FAR) file.
+    More details on deployment at NeMo/tools/text_processing_deployment.
+
+ Args:
+ input_case: accepting either "lower_cased" or "cased" input.
+ deterministic: if True will provide a single transduction option,
+ for False multiple options (used for audio-based normalization)
+ cache_dir: path to a dir with .far grammar file. Set to None to avoid using cache.
+ overwrite_cache: set to True to overwrite .far files
+ whitelist: path to a file with whitelist replacements
+ """
+
+ def __init__(
+ self,
+ input_case: str,
+ deterministic: bool = False,
+ cache_dir: str = None,
+ overwrite_cache: bool = False,
+ whitelist: str = None,
+ ):
+ super().__init__(name="tokenize_and_classify", kind="classify", deterministic=deterministic)
+ far_file = None
+ if cache_dir is not None and cache_dir != "None":
+ os.makedirs(cache_dir, exist_ok=True)
+ whitelist_file = os.path.basename(whitelist) if whitelist else ""
+ far_file = os.path.join(
+ cache_dir, f"_{input_case}_es_tn_{deterministic}_deterministic{whitelist_file}.far"
+ )
+ if not overwrite_cache and far_file and os.path.exists(far_file):
+ self.fst = pynini.Far(far_file, mode="r")["tokenize_and_classify"]
+ logging.info(f"ClassifyFst.fst was restored from {far_file}.")
+ else:
+ logging.info(f"Creating ClassifyFst grammars. This might take some time...")
+
+ self.cardinal = CardinalFst(deterministic=deterministic)
+ cardinal_graph = self.cardinal.fst
+
+ self.ordinal = OrdinalFst(cardinal=self.cardinal, deterministic=deterministic)
+ ordinal_graph = self.ordinal.fst
+
+ self.decimal = DecimalFst(cardinal=self.cardinal, deterministic=deterministic)
+ decimal_graph = self.decimal.fst
+
+ self.fraction = FractionFst(cardinal=self.cardinal, ordinal=self.ordinal, deterministic=deterministic)
+ fraction_graph = self.fraction.fst
+ self.measure = MeasureFst(
+ cardinal=self.cardinal, decimal=self.decimal, fraction=self.fraction, deterministic=deterministic
+ )
+ measure_graph = self.measure.fst
+ self.date = DateFst(cardinal=self.cardinal, deterministic=deterministic)
+ date_graph = self.date.fst
+ word_graph = WordFst(deterministic=deterministic).fst
+ self.time = TimeFst(self.cardinal, deterministic=deterministic)
+ time_graph = self.time.fst
+ self.telephone = TelephoneFst(deterministic=deterministic)
+ telephone_graph = self.telephone.fst
+ self.electronic = ElectronicFst(deterministic=deterministic)
+ electronic_graph = self.electronic.fst
+ self.money = MoneyFst(cardinal=self.cardinal, decimal=self.decimal, deterministic=deterministic)
+ money_graph = self.money.fst
+ self.whitelist = WhiteListFst(input_case=input_case, deterministic=deterministic, input_file=whitelist)
+ whitelist_graph = self.whitelist.fst
+ punct_graph = PunctuationFst(deterministic=deterministic).fst
+
+ classify = (
+ pynutil.add_weight(whitelist_graph, 1.01)
+ | pynutil.add_weight(time_graph, 1.09)
+ | pynutil.add_weight(measure_graph, 1.08)
+ | pynutil.add_weight(cardinal_graph, 1.1)
+ | pynutil.add_weight(fraction_graph, 1.09)
+ | pynutil.add_weight(date_graph, 1.1)
+ | pynutil.add_weight(ordinal_graph, 1.1)
+ | pynutil.add_weight(decimal_graph, 1.1)
+ | pynutil.add_weight(money_graph, 1.1)
+ | pynutil.add_weight(telephone_graph, 1.1)
+ | pynutil.add_weight(electronic_graph, 1.1)
+ | pynutil.add_weight(word_graph, 200)
+ )
+ punct = pynutil.insert("tokens { ") + pynutil.add_weight(punct_graph, weight=2.1) + pynutil.insert(" }")
+ punct = pynini.closure(
+ pynini.compose(pynini.closure(NEMO_WHITE_SPACE, 1), delete_extra_space)
+ | (pynutil.insert(" ") + punct),
+ 1,
+ )
+ token = pynutil.insert("tokens { ") + classify + pynutil.insert(" }")
+ token_plus_punct = (
+ pynini.closure(punct + pynutil.insert(" ")) + token + pynini.closure(pynutil.insert(" ") + punct)
+ )
+
+ graph = token_plus_punct + pynini.closure(
+ (
+ pynini.compose(pynini.closure(NEMO_WHITE_SPACE, 1), delete_extra_space)
+ | (pynutil.insert(" ") + punct + pynutil.insert(" "))
+ )
+ + token_plus_punct
+ )
+
+ graph = delete_space + graph + delete_space
+ graph |= punct
+
+ self.fst = graph.optimize()
+
+ if far_file:
+ generator_main(far_file, {"tokenize_and_classify": self.fst})
+ logging.info(f"ClassifyFst grammars are saved to {far_file}.")
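The classify union above picks among competing analyses purely by weight, with the plain word grammar (weight 200) as a catch-all. A minimal pure-Python sketch of that selection rule; the tags and weights here are illustrative stand-ins, not the real FST arcs:

```python
# Sketch of how the weighted classify union resolves ambiguity: every
# semiotic-class grammar proposes a tagged parse with a cost, and the
# lowest-cost candidate wins. Tags and weights are illustrative.
def classify(token, candidates):
    """candidates: list of (tag, weight) parses proposed for `token`."""
    # The word grammar always matches, but its large weight (200) makes
    # it a fallback that only wins when no semiotic class accepts.
    candidates = candidates + [("name", 200.0)]
    tag, _ = min(candidates, key=lambda c: c[1])
    return tag

# "2022" could be a date or a cardinal; the lower-cost parse is chosen.
assert classify("2022", [("date", 1.09), ("cardinal", 1.1)]) == "date"
assert classify("hola", []) == "name"  # falls through to the word grammar
```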
diff --git a/nemo_text_processing/text_normalization/es/taggers/whitelist.py b/nemo_text_processing/text_normalization/es/taggers/whitelist.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/taggers/whitelist.py
@@ -0,0 +1,69 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import GraphFst, convert_space
+from nemo_text_processing.text_normalization.es.utils import get_abs_path, load_labels
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ PYNINI_AVAILABLE = False
+
+
+class WhiteListFst(GraphFst):
+ """
+ Finite state transducer for classifying whitelist, e.g.
+ "sr." -> tokens { name: "seรฑor" }
+ This class has highest priority among all classifier grammars. Whitelisted tokens are defined and loaded from "data/whitelist.tsv".
+
+ Args:
+ input_case: accepting either "lower_cased" or "cased" input.
+ deterministic: if True will provide a single transduction option,
+            if False, multiple options are provided (used for audio-based normalization)
+ input_file: path to a file with whitelist replacements
+ """
+
+ def __init__(self, input_case: str, deterministic: bool = True, input_file: str = None):
+ super().__init__(name="whitelist", kind="classify", deterministic=deterministic)
+
+ def _get_whitelist_graph(input_case, file):
+ whitelist = load_labels(file)
+ if input_case == "lower_cased":
+ whitelist = [[x[0].lower()] + x[1:] for x in whitelist]
+ graph = pynini.string_map(whitelist)
+ return graph
+
+ graph = _get_whitelist_graph(input_case, get_abs_path("data/whitelist.tsv"))
+ if not deterministic and input_case != "lower_cased":
+ graph |= pynutil.add_weight(
+ _get_whitelist_graph("lower_cased", get_abs_path("data/whitelist.tsv")), weight=0.0001
+ )
+
+ if input_file:
+ whitelist_provided = _get_whitelist_graph(input_case, input_file)
+ if not deterministic:
+ graph |= whitelist_provided
+ else:
+ graph = whitelist_provided
+
+ if not deterministic:
+ units_graph = _get_whitelist_graph(input_case, file=get_abs_path("data/measures/measurements.tsv"))
+ graph |= units_graph
+
+ self.graph = graph
+ self.final_graph = convert_space(self.graph).optimize()
+ self.fst = (pynutil.insert("name: \"") + self.final_graph + pynutil.insert("\"")).optimize()
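Functionally, the whitelist is a highest-priority substitution table loaded from a TSV. A rough pure-Python analogue, with a hypothetical mapping standing in for data/whitelist.tsv (the real grammar compiles the TSV into a pynini string_map):

```python
# Hypothetical stand-in for data/whitelist.tsv entries.
WHITELIST = {"sr.": "señor", "sra.": "señora", "dr.": "doctor"}

def apply_whitelist(token, input_case="cased"):
    # For "lower_cased" input the real grammar lower-cases the TSV keys;
    # lower-casing the lookup key here approximates the same matching.
    key = token.lower() if input_case == "lower_cased" else token
    # None means: fall through to the other classifier grammars.
    return WHITELIST.get(key)

assert apply_whitelist("sr.") == "señor"
assert apply_whitelist("hola") is None
```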
diff --git a/nemo_text_processing/text_normalization/es/taggers/word.py b/nemo_text_processing/text_normalization/es/taggers/word.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/taggers/word.py
@@ -0,0 +1,39 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import NEMO_NOT_SPACE, GraphFst
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ PYNINI_AVAILABLE = False
+
+
+class WordFst(GraphFst):
+ """
+ Finite state transducer for classifying word.
+ e.g. dormir -> tokens { name: "dormir" }
+
+ Args:
+ deterministic: if True will provide a single transduction option,
+            if False, multiple transduction options are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, deterministic: bool = True):
+ super().__init__(name="word", kind="classify")
+ word = pynutil.insert("name: \"") + pynini.closure(NEMO_NOT_SPACE, 1) + pynutil.insert("\"")
+ self.fst = word.optimize()
diff --git a/nemo_text_processing/text_normalization/es/utils.py b/nemo_text_processing/text_normalization/es/utils.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/utils.py
@@ -0,0 +1,42 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import csv
+import os
+
+
+def get_abs_path(rel_path):
+ """
+ Get absolute path
+
+ Args:
+ rel_path: relative path to this file
+
+ Returns absolute path
+ """
+ return os.path.dirname(os.path.abspath(__file__)) + '/' + rel_path
+
+
+def load_labels(abs_path):
+ """
+    loads a tab-separated label file as a list of mappings
+
+    Args:
+        abs_path: absolute path to the label file
+
+    Returns a list of label mappings
+    """
+    with open(abs_path) as label_tsv:
+        labels = list(csv.reader(label_tsv, delimiter="\t"))
+    return labels
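For reference, load_labels returns a list of rows rather than a dict; a quick self-contained check of that shape using only the stdlib and a temporary TSV:

```python
import csv
import tempfile

# Write a tiny two-column TSV and read it back the same way load_labels does.
with tempfile.NamedTemporaryFile("w", suffix=".tsv", delete=False) as f:
    f.write("sr.\tseñor\nsra.\tseñora\n")
    path = f.name

with open(path) as label_tsv:
    labels = list(csv.reader(label_tsv, delimiter="\t"))

# Each row is a list of fields, one row per line of the TSV.
assert labels == [["sr.", "señor"], ["sra.", "señora"]]
```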
diff --git a/nemo_text_processing/text_normalization/es/verbalizers/__init__.py b/nemo_text_processing/text_normalization/es/verbalizers/__init__.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/verbalizers/__init__.py
@@ -0,0 +1,13 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
diff --git a/nemo_text_processing/text_normalization/es/verbalizers/cardinal.py b/nemo_text_processing/text_normalization/es/verbalizers/cardinal.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/verbalizers/cardinal.py
@@ -0,0 +1,57 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import NEMO_NOT_QUOTE, GraphFst
+from nemo_text_processing.text_normalization.es.graph_utils import shift_cardinal_gender, strip_cardinal_apocope
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ PYNINI_AVAILABLE = False
+
+
+class CardinalFst(GraphFst):
+ """
+ Finite state transducer for verbalizing cardinals
+ e.g. cardinal { integer: "dos" } -> "dos"
+
+ Args:
+ deterministic: if True will provide a single transduction option,
+            if False, multiple transduction options are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, deterministic: bool = True):
+ super().__init__(name="cardinal", kind="verbalize", deterministic=deterministic)
+ optional_sign = pynini.closure(pynini.cross("negative: \"true\" ", "menos "), 0, 1)
+ self.optional_sign = optional_sign
+
+ integer = pynini.closure(NEMO_NOT_QUOTE, 1)
+ self.integer = pynutil.delete(" \"") + integer + pynutil.delete("\"")
+
+ integer = pynutil.delete("integer:") + self.integer
+ self.numbers = integer
+ graph = optional_sign + self.numbers
+
+ if not deterministic:
+ # For alternate renderings
+ no_adjust = graph
+ fem_adjust = shift_cardinal_gender(graph)
+ apocope_adjust = strip_cardinal_apocope(graph)
+ graph = no_adjust | fem_adjust | apocope_adjust
+
+ delete_tokens = self.delete_tokens(graph)
+ self.fst = delete_tokens.optimize()
diff --git a/nemo_text_processing/text_normalization/es/verbalizers/date.py b/nemo_text_processing/text_normalization/es/verbalizers/date.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/verbalizers/date.py
@@ -0,0 +1,86 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import (
+ NEMO_NOT_QUOTE,
+ NEMO_SIGMA,
+ NEMO_SPACE,
+ GraphFst,
+ delete_preserve_order,
+)
+from nemo_text_processing.text_normalization.es.graph_utils import strip_cardinal_apocope
+from nemo_text_processing.text_normalization.es.taggers.date import articles
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ PYNINI_AVAILABLE = False
+
+
+class DateFst(GraphFst):
+ """
+ Finite state transducer for verbalizing date, e.g.
+ date { day: "treinta y uno" month: "marzo" year: "dos mil" } -> "treinta y uno de marzo de dos mil"
+ date { day: "uno" month: "mayo" year: "del mil novecientos noventa" } -> "primero de mayo del mil novecientos noventa"
+
+ Args:
+ deterministic: if True will provide a single transduction option,
+            if False, multiple transduction options are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, deterministic: bool = True):
+ super().__init__(name="date", kind="verbalize", deterministic=deterministic)
+
+ day_cardinal = pynutil.delete("day: \"") + pynini.closure(NEMO_NOT_QUOTE, 1) + pynutil.delete("\"")
+ day = strip_cardinal_apocope(day_cardinal)
+
+ primero = pynini.cdrewrite(pynini.cross("uno", "primero"), "[BOS]", "[EOS]", NEMO_SIGMA)
+ day = (
+ (day @ primero) if deterministic else pynini.union(day, day @ primero)
+ ) # Primero for first day is traditional, but will vary depending on region
+
+ month = pynutil.delete("month: \"") + pynini.closure(NEMO_NOT_QUOTE, 1) + pynutil.delete("\"")
+
+ year = (
+ pynutil.delete("year: \"")
+ + articles
+ + NEMO_SPACE
+ + pynini.closure(NEMO_NOT_QUOTE, 1)
+ + pynutil.delete("\"")
+ )
+
+        # Insert the preposition if it wasn't originally with the year (i.e. a space was present)
+ year = pynutil.add_weight(year, -0.001)
+ year |= (
+ pynutil.delete("year: \"")
+ + pynutil.insert("de ")
+ + pynini.closure(NEMO_NOT_QUOTE, 1)
+ + pynutil.delete("\"")
+ )
+
+ # day month year
+ graph_dmy = day + pynini.cross(NEMO_SPACE, " de ") + month + pynini.closure(pynini.accep(" ") + year, 0, 1)
+
+ graph_mdy = month + NEMO_SPACE + day + pynini.closure(NEMO_SPACE + year, 0, 1)
+ if deterministic:
+            graph_mdy += pynutil.delete(" preserve_order: true")  # Only accepted if explicitly passed
+
+ self.graph = graph_dmy | graph_mdy
+ final_graph = self.graph + delete_preserve_order
+
+ delete_tokens = self.delete_tokens(final_graph)
+ self.fst = delete_tokens.optimize()
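The `primero` cdrewrite above applies only when "uno" spans the whole day string (BOS to EOS), so compound days like "veintiuno" are untouched. In plain Python terms, as a sketch:

```python
import re

# Mirror of the primero rule: only a day that is exactly "uno" becomes
# "primero"; compound days keep their cardinal form.
def day_word(day):
    return re.sub(r"^uno$", "primero", day)

assert day_word("uno") == "primero"
assert day_word("veintiuno") == "veintiuno"
```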
diff --git a/nemo_text_processing/text_normalization/es/verbalizers/decimals.py b/nemo_text_processing/text_normalization/es/verbalizers/decimals.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/verbalizers/decimals.py
@@ -0,0 +1,87 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import (
+ NEMO_NOT_QUOTE,
+ GraphFst,
+ delete_preserve_order,
+ delete_space,
+ insert_space,
+)
+from nemo_text_processing.text_normalization.es import LOCALIZATION
+from nemo_text_processing.text_normalization.es.graph_utils import (
+ shift_cardinal_gender,
+ shift_number_gender,
+ strip_cardinal_apocope,
+)
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ PYNINI_AVAILABLE = False
+
+
+class DecimalFst(GraphFst):
+ """
+    Finite state transducer for verbalizing decimal, e.g.
+    decimal { negative: "true" integer_part: "dos" fractional_part: "cuatro cero" quantity: "billones" } -> menos dos coma cuatro cero billones
+    decimal { integer_part: "un" quantity: "billón" } -> un billón
+
+ Args:
+ deterministic: if True will provide a single transduction option,
+            if False, multiple transduction options are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, deterministic: bool = True):
+ super().__init__(name="decimal", kind="classify", deterministic=deterministic)
+
+ self.optional_sign = pynini.closure(pynini.cross("negative: \"true\"", "menos ") + delete_space, 0, 1)
+ self.integer = pynutil.delete("integer_part: \"") + pynini.closure(NEMO_NOT_QUOTE, 1) + pynutil.delete("\"")
+ self.fractional_default = (
+ pynutil.delete("fractional_part: \"") + pynini.closure(NEMO_NOT_QUOTE, 1) + pynutil.delete("\"")
+ )
+
+ conjunction = pynutil.insert(" punto ") if LOCALIZATION == "am" else pynutil.insert(" coma ")
+ if not deterministic:
+ conjunction |= pynutil.insert(pynini.union(" con ", " y "))
+ self.fractional_default |= strip_cardinal_apocope(self.fractional_default)
+ self.fractional = conjunction + self.fractional_default
+
+ self.quantity = (
+ delete_space
+ + insert_space
+ + pynutil.delete("quantity: \"")
+ + pynini.closure(NEMO_NOT_QUOTE, 1)
+ + pynutil.delete("\"")
+ )
+ self.optional_quantity = pynini.closure(self.quantity, 0, 1)
+
+ graph = self.optional_sign + pynini.union(
+ (self.integer + self.quantity), (self.integer + delete_space + self.fractional + self.optional_quantity)
+ )
+
+ self.numbers = graph.optimize()
+ self.numbers_no_quantity = self.integer + delete_space + self.fractional + self.optional_quantity
+
+ if not deterministic:
+ graph |= self.optional_sign + (
+ shift_cardinal_gender(self.integer + delete_space) + shift_number_gender(self.fractional)
+ )
+
+ graph += delete_preserve_order
+ delete_tokens = self.delete_tokens(graph)
+ self.fst = delete_tokens.optimize()
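The separator word is chosen by LOCALIZATION: "punto" for the American-Spanish setting, otherwise "coma" (with "con"/"y" as extra non-deterministic options). A tiny hedged sketch of the deterministic choice:

```python
# Hypothetical mirror of the LOCALIZATION switch for the decimal separator.
def decimal_words(integer_part, fractional_part, localization="es"):
    sep = "punto" if localization == "am" else "coma"
    return f"{integer_part} {sep} {fractional_part}"

assert decimal_words("dos", "cuatro cero") == "dos coma cuatro cero"
assert decimal_words("dos", "cuatro cero", localization="am") == "dos punto cuatro cero"
```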
diff --git a/nemo_text_processing/text_normalization/es/verbalizers/electronic.py b/nemo_text_processing/text_normalization/es/verbalizers/electronic.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/verbalizers/electronic.py
@@ -0,0 +1,91 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import (
+ NEMO_NOT_QUOTE,
+ NEMO_SIGMA,
+ GraphFst,
+ delete_preserve_order,
+ insert_space,
+)
+from nemo_text_processing.text_normalization.es.utils import get_abs_path
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ digit_no_zero = pynini.invert(pynini.string_file(get_abs_path("data/numbers/digit.tsv")))
+ zero = pynini.invert(pynini.string_file(get_abs_path("data/numbers/zero.tsv")))
+
+ graph_symbols = pynini.string_file(get_abs_path("data/electronic/symbols.tsv"))
+ server_common = pynini.string_file(get_abs_path("data/electronic/server_name.tsv"))
+ domain_common = pynini.string_file(get_abs_path("data/electronic/domain.tsv"))
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ digit_no_zero = None
+ zero = None
+
+ graph_symbols = None
+ server_common = None
+ domain_common = None
+
+ PYNINI_AVAILABLE = False
+
+
+class ElectronicFst(GraphFst):
+ """
+ Finite state transducer for verbalizing electronic
+ e.g. electronic { username: "abc" domain: "hotmail.com" } -> "a b c arroba hotmail punto com"
+ -> "a b c arroba h o t m a i l punto c o m"
+ -> "a b c arroba hotmail punto c o m"
+ -> "a b c at h o t m a i l punto com"
+ Args:
+ deterministic: if True will provide a single transduction option,
+            if False, multiple transduction options are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, deterministic: bool = True):
+ super().__init__(name="electronic", kind="verbalize", deterministic=deterministic)
+
+ graph_digit_no_zero = (
+ digit_no_zero @ pynini.cdrewrite(pynini.cross("un", "uno"), "", "", NEMO_SIGMA).optimize()
+ )
+ graph_digit = graph_digit_no_zero | zero
+
+ def add_space_after_char():
+ return pynini.closure(NEMO_NOT_QUOTE - pynini.accep(" ") + insert_space) + (
+ NEMO_NOT_QUOTE - pynini.accep(" ")
+ )
+
+ verbalize_characters = pynini.cdrewrite(graph_symbols | graph_digit, "", "", NEMO_SIGMA)
+
+ user_name = pynutil.delete("username: \"") + add_space_after_char() + pynutil.delete("\"")
+ user_name @= verbalize_characters
+
+ convert_defaults = pynutil.add_weight(NEMO_NOT_QUOTE, weight=0.0001) | domain_common | server_common
+ domain = convert_defaults + pynini.closure(insert_space + convert_defaults)
+ domain @= verbalize_characters
+
+ domain = pynutil.delete("domain: \"") + domain + pynutil.delete("\"")
+ protocol = (
+ pynutil.delete("protocol: \"")
+ + add_space_after_char() @ pynini.cdrewrite(graph_symbols, "", "", NEMO_SIGMA)
+ + pynutil.delete("\"")
+ )
+ self.graph = (pynini.closure(protocol + pynini.accep(" "), 0, 1) + domain) | (
+ user_name + pynini.accep(" ") + pynutil.insert("arroba ") + domain
+ )
+ delete_tokens = self.delete_tokens(self.graph + delete_preserve_order)
+ self.fst = delete_tokens.optimize()
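The email path spells out the username, inserts "arroba", and reads the dot in the domain as "punto", matching the first docstring example. A rough pure-Python analogue of that single path (the real verbalizer also produces per-character and mixed variants):

```python
# Rough analogue of the electronic verbalizer's default email rendering.
def spell(chars):
    return " ".join(chars)  # "abc" -> "a b c"

def verbalize_email(username, domain):
    return f"{spell(username)} arroba {domain.replace('.', ' punto ')}"

assert verbalize_email("abc", "hotmail.com") == "a b c arroba hotmail punto com"
```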
diff --git a/nemo_text_processing/text_normalization/es/verbalizers/fraction.py b/nemo_text_processing/text_normalization/es/verbalizers/fraction.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/verbalizers/fraction.py
@@ -0,0 +1,184 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import (
+ NEMO_CHAR,
+ NEMO_NOT_QUOTE,
+ NEMO_NOT_SPACE,
+ NEMO_SIGMA,
+ NEMO_SPACE,
+ GraphFst,
+ delete_space,
+ insert_space,
+)
+from nemo_text_processing.text_normalization.es.graph_utils import (
+ accents,
+ shift_cardinal_gender,
+ strip_cardinal_apocope,
+)
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ PYNINI_AVAILABLE = False
+
+
+class FractionFst(GraphFst):
+ """
+ Finite state transducer for verbalizing fraction
+    e.g. tokens { fraction { integer_part: "treinta y tres" numerator: "cuatro" denominator: "quinto" } } ->
+ treinta y tres y cuatro quintos
+
+
+ Args:
+ deterministic: if True will provide a single transduction option,
+            if False, multiple transduction options are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, deterministic: bool = True):
+ super().__init__(name="fraction", kind="verbalize", deterministic=deterministic)
+
+ # Derivational strings append 'avo' as a suffix. Adding space for processing aid
+ fraction_stem = pynutil.insert(" avo")
+ plural = pynutil.insert("s")
+
+ integer = (
+ pynutil.delete("integer_part: \"")
+ + strip_cardinal_apocope(pynini.closure(NEMO_NOT_QUOTE))
+ + pynutil.delete("\"")
+ )
+
+ numerator_one = pynutil.delete("numerator: \"") + pynini.accep("un") + pynutil.delete("\" ")
+ numerator = (
+ pynutil.delete("numerator: \"")
+ + pynini.difference(pynini.closure(NEMO_NOT_QUOTE), "un")
+ + pynutil.delete("\" ")
+ )
+
+ denominator_add_stem = pynutil.delete("denominator: \"") + (
+ pynini.closure(NEMO_NOT_QUOTE)
+ + fraction_stem
+ + pynutil.delete("\" morphosyntactic_features: \"add_root\"")
+ )
+ denominator_ordinal = pynutil.delete("denominator: \"") + (
+ pynini.closure(NEMO_NOT_QUOTE) + pynutil.delete("\" morphosyntactic_features: \"ordinal\"")
+ )
+ denominator_cardinal = pynutil.delete("denominator: \"") + (
+ pynini.closure(NEMO_NOT_QUOTE) + pynutil.delete("\"")
+ )
+
+ denominator_singular = pynini.union(denominator_add_stem, denominator_ordinal)
+ denominator_plural = denominator_singular + plural
+
+ if not deterministic:
+ # Occasional exceptions
+ denominator_singular |= denominator_add_stem @ pynini.string_map(
+ [("once avo", "undรฉcimo"), ("doce avo", "duodรฉcimo")]
+ )
+
+ # Merging operations
+ merge = pynini.cdrewrite(
+ pynini.cross(" y ", "i"), "", "", NEMO_SIGMA
+ ) # The denominator must be a single word, with the conjunction "y" replaced by i
+ merge @= pynini.cdrewrite(delete_space, "", pynini.difference(NEMO_CHAR, "parte"), NEMO_SIGMA)
+
+ # The merger can produce duplicate vowels. This is not allowed in orthography
+        delete_duplicates = pynini.string_map([("aa", "a"), ("oo", "o")])  # Collapse duplicate vowels
+ delete_duplicates = pynini.cdrewrite(delete_duplicates, "", "", NEMO_SIGMA)
+
+ remove_accents = pynini.cdrewrite(
+ accents,
+ pynini.union(NEMO_SPACE, pynini.accep("[BOS]")) + pynini.closure(NEMO_NOT_SPACE),
+            pynini.closure(NEMO_NOT_SPACE) + pynini.union("avo", "ava", "ésimo", "ésima"),
+ NEMO_SIGMA,
+ )
+ merge_into_single_word = merge @ remove_accents @ delete_duplicates
+
+ fraction_default = numerator + delete_space + insert_space + (denominator_plural @ merge_into_single_word)
+ fraction_with_one = (
+ numerator_one + delete_space + insert_space + (denominator_singular @ merge_into_single_word)
+ )
+
+ fraction_with_cardinal = strip_cardinal_apocope(numerator | numerator_one)
+ fraction_with_cardinal += (
+ delete_space + pynutil.insert(" sobre ") + strip_cardinal_apocope(denominator_cardinal)
+ )
+
+ conjunction = pynutil.insert(" y ")
+
+ if not deterministic:
+ # There is an alternative rendering where ordinals act as adjectives for 'parte'. This requires use of the feminine
+ # Other rules will manage use of "un" at end, so just worry about endings
+ exceptions = pynini.string_map([("tercia", "tercera")])
+ apply_exceptions = pynini.cdrewrite(exceptions, "", "", NEMO_SIGMA)
+ vowel_change = pynini.cdrewrite(pynini.cross("o", "a"), "", pynini.accep("[EOS]"), NEMO_SIGMA)
+
+ denominator_singular_fem = shift_cardinal_gender(denominator_singular) @ vowel_change @ apply_exceptions
+ denominator_plural_fem = denominator_singular_fem + plural
+
+ numerator_one_fem = shift_cardinal_gender(numerator_one)
+ numerator_fem = shift_cardinal_gender(numerator)
+
+ fraction_with_cardinal |= (
+ (numerator_one_fem | numerator_fem)
+ + delete_space
+ + pynutil.insert(" sobre ")
+ + shift_cardinal_gender(denominator_cardinal)
+ )
+
+ # Still need to manage stems
+ merge_stem = pynini.cdrewrite(
+ delete_space, "", pynini.union("avo", "ava", "avos", "avas"), NEMO_SIGMA
+ ) # For managing alternative spacing
+ merge_stem @= remove_accents @ delete_duplicates
+
+ fraction_with_one_fem = numerator_one_fem + delete_space + insert_space
+ fraction_with_one_fem += pynini.union(
+ denominator_singular_fem @ merge_stem, denominator_singular_fem @ merge_into_single_word
+            )  # Both forms exist
+ fraction_with_one_fem @= pynini.cdrewrite(pynini.cross("una media", "media"), "", "", NEMO_SIGMA)
+ fraction_with_one_fem += pynutil.insert(" parte")
+
+ fraction_default_fem = numerator_fem + delete_space + insert_space
+ fraction_default_fem += pynini.union(
+ denominator_plural_fem @ merge_stem, denominator_plural_fem @ merge_into_single_word
+ )
+ fraction_default_fem += pynutil.insert(" partes")
+
+ fraction_default |= (
+ numerator + delete_space + insert_space + denominator_plural @ merge_stem
+ ) # Case of no merger
+ fraction_default |= fraction_default_fem
+
+ fraction_with_one |= numerator_one + delete_space + insert_space + denominator_singular @ merge_stem
+ fraction_with_one |= fraction_with_one_fem
+
+ # Integers are influenced by dominant noun, need to allow feminine forms as well
+ integer |= shift_cardinal_gender(integer)
+
+ # Remove 'un medio'
+ fraction_with_one @= pynini.cdrewrite(pynini.cross("un medio", "medio"), "", "", NEMO_SIGMA)
+
+ integer = pynini.closure(integer + delete_space + conjunction, 0, 1)
+
+ fraction = fraction_with_one | fraction_default | fraction_with_cardinal
+
+ graph = integer + fraction
+
+ self.graph = graph
+ delete_tokens = self.delete_tokens(self.graph)
+ self.fst = delete_tokens.optimize()
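The merge / remove_accents / delete_duplicates cascade above can be mimicked with plain string operations. A hedged sketch in which the regexes approximate the cdrewrite contexts (it keeps the space before "parte" as the grammar does, and strips all accents rather than only those in the grammar's narrower context):

```python
import re
import unicodedata

def merge_denominator(text):
    """Approximate merge @ remove_accents @ delete_duplicates:
    ' y ' -> 'i', spaces removed (except before 'parte'), accents
    stripped, and duplicate vowels from the merger collapsed."""
    text = text.replace(" y ", "i")
    text = re.sub(r" (?!parte)", "", text)  # join into a single word
    text = "".join(c for c in unicodedata.normalize("NFD", text)
                   if unicodedata.category(c) != "Mn")  # drop accents
    text = text.replace("aa", "a").replace("oo", "o")  # duplicate vowels
    return text

assert merge_denominator("treinta y dos avos") == "treintaidosavos"
assert merge_denominator("cuarenta avos") == "cuarentavos"
assert merge_denominator("dieciséis avos") == "dieciseisavos"
```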
diff --git a/nemo_text_processing/text_normalization/es/verbalizers/measure.py b/nemo_text_processing/text_normalization/es/verbalizers/measure.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/verbalizers/measure.py
@@ -0,0 +1,110 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import (
+ NEMO_NOT_QUOTE,
+ NEMO_SIGMA,
+ NEMO_SPACE,
+ NEMO_WHITE_SPACE,
+ GraphFst,
+ delete_extra_space,
+ delete_preserve_order,
+)
+from nemo_text_processing.text_normalization.es.graph_utils import ones, shift_cardinal_gender
+from nemo_text_processing.text_normalization.es.utils import get_abs_path
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ unit_plural_fem = pynini.string_file(get_abs_path("data/measures/measurements_plural_fem.tsv"))
+ unit_plural_masc = pynini.string_file(get_abs_path("data/measures/measurements_plural_masc.tsv"))
+
+ unit_singular_fem = pynini.project(unit_plural_fem, "input")
+ unit_singular_masc = pynini.project(unit_plural_masc, "input")
+
+ unit_plural_fem = pynini.project(unit_plural_fem, "output")
+ unit_plural_masc = pynini.project(unit_plural_masc, "output")
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ unit_plural_fem = None
+ unit_plural_masc = None
+
+ unit_singular_fem = None
+ unit_singular_masc = None
+
+ PYNINI_AVAILABLE = False
+
+
+class MeasureFst(GraphFst):
+ """
+ Finite state transducer for verbalizing measure, e.g.
+ measure { cardinal { integer: "dos" units: "gramos" } } -> "dos gramos"
+ measure { cardinal { integer_part: "dos" quantity: "millones" units: "gramos" } } -> "dos millones de gramos"
+
+ Args:
+ decimal: DecimalFst
+ cardinal: CardinalFst
+ fraction: FractionFst
+ deterministic: if True will provide a single transduction option,
+            if False, multiple transduction options are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, decimal: GraphFst, cardinal: GraphFst, fraction: GraphFst, deterministic: bool):
+ super().__init__(name="measure", kind="verbalize", deterministic=deterministic)
+
+ graph_decimal = decimal.fst
+ graph_cardinal = cardinal.fst
+ graph_fraction = fraction.fst
+
+ unit_masc = (unit_plural_masc | unit_singular_masc) + pynini.closure(
+ NEMO_WHITE_SPACE + "por" + pynini.closure(NEMO_NOT_QUOTE, 1), 0, 1
+ )
+ unit_masc |= "por" + pynini.closure(NEMO_NOT_QUOTE, 1)
+ unit_masc = pynutil.delete("units: \"") + (pynini.closure(NEMO_NOT_QUOTE) @ unit_masc) + pynutil.delete("\"")
+
+ unit_fem = (unit_plural_fem | unit_singular_fem) + pynini.closure(
+ NEMO_WHITE_SPACE + "por" + pynini.closure(NEMO_NOT_QUOTE, 1), 0, 1
+ )
+ unit_fem = pynutil.delete("units: \"") + (pynini.closure(NEMO_NOT_QUOTE) @ unit_fem) + pynutil.delete("\"")
+
+ graph_masc = (graph_cardinal | graph_decimal | graph_fraction) + NEMO_WHITE_SPACE + unit_masc
+ graph_fem = (
+ shift_cardinal_gender(graph_cardinal | graph_decimal | graph_fraction) + NEMO_WHITE_SPACE + unit_fem
+ )
+ graph = graph_masc | graph_fem
+
+ graph = (
+ pynini.cdrewrite(
+ pynutil.insert(" de"), "quantity: \"" + pynini.closure(NEMO_NOT_QUOTE, 1), "\"", NEMO_SIGMA
+ )
+ @ graph
+ ) # billones de xyz
+
+ graph @= pynini.cdrewrite(pynini.cross(ones, "uno"), "", NEMO_WHITE_SPACE + "por", NEMO_SIGMA)
+
+        # To manage alphanumeric combinations ("a-8, 5x"), we let them use a weighted default path.
+ alpha_num_unit = pynutil.delete("units: \"") + pynini.closure(NEMO_NOT_QUOTE) + pynutil.delete("\"")
+ graph_alpha_num = pynini.union(
+ (graph_cardinal | graph_decimal) + NEMO_SPACE + alpha_num_unit,
+ alpha_num_unit + delete_extra_space + (graph_cardinal | graph_decimal),
+ )
+
+ graph |= pynutil.add_weight(graph_alpha_num, 0.01)
+
+ graph += delete_preserve_order
+
+ delete_tokens = self.delete_tokens(graph)
+ self.fst = delete_tokens.optimize()
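
The `pynini.cdrewrite(pynutil.insert(" de"), ...)` composition above inserts "de" between a quantity and the following unit, so "dos millones gramos" verbalizes as "dos millones de gramos". A rough, pynini-free sketch of that rewrite; the hard-coded quantity list here is an illustrative assumption (the real grammar reads the tagger's `quantity` field), not the grammar's data files:

```python
import re

# Hypothetical quantity words for illustration only.
QUANTITIES = ("millones", "millón", "billones", "billón", "trillones")

def insert_de(text: str) -> str:
    """Insert "de" between a quantity word and a following unit,
    mirroring the cdrewrite `pynutil.insert(" de")` step in MeasureFst."""
    pattern = r"\b(" + "|".join(QUANTITIES) + r")\b(?! de\b)(?= \w)"
    return re.sub(pattern, r"\1 de", text)

print(insert_de("dos millones gramos"))    # -> dos millones de gramos
print(insert_de("dos millones de gramos")) # already correct, unchanged
```

The negative lookahead keeps the rewrite idempotent, which matches the context-dependent nature of `cdrewrite`: "de" is only inserted where it is missing.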
diff --git a/nemo_text_processing/text_normalization/es/verbalizers/money.py b/nemo_text_processing/text_normalization/es/verbalizers/money.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/verbalizers/money.py
@@ -0,0 +1,195 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import (
+ NEMO_NOT_QUOTE,
+ NEMO_SIGMA,
+ NEMO_SPACE,
+ GraphFst,
+ delete_preserve_order,
+)
+from nemo_text_processing.text_normalization.es.graph_utils import (
+ shift_cardinal_gender,
+ shift_number_gender,
+ strip_cardinal_apocope,
+)
+from nemo_text_processing.text_normalization.es.utils import get_abs_path
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ fem = pynini.string_file((get_abs_path("data/money/currency_plural_fem.tsv")))
+ masc = pynini.string_file((get_abs_path("data/money/currency_plural_masc.tsv")))
+
+ fem_singular = pynini.project(fem, "input")
+ masc_singular = pynini.project(masc, "input")
+
+ fem_plural = pynini.project(fem, "output")
+ masc_plural = pynini.project(masc, "output")
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ fem_plural = None
+ masc_plural = None
+
+ fem_singular = None
+ masc_singular = None
+
+ PYNINI_AVAILABLE = False
+
+
+class MoneyFst(GraphFst):
+ """
+ Finite state transducer for verbalizing money, e.g.
+ money { currency_maj: "euro" integer_part: "un"} -> "un euro"
+ money { currency_maj: "euro" integer_part: "un" fractional_part: "cero cero un"} -> "uno coma cero cero uno euros"
+ money { integer_part: "un" currency_maj: "libra" fractional_part: "cuarenta" preserve_order: true} -> "una libra cuarenta"
+ money { integer_part: "un" currency_maj: "libra" fractional_part: "cuarenta" currency_min: "peniques" preserve_order: true} -> "una libra con cuarenta peniques"
+ money { fractional_part: "un" currency_min: "penique" preserve_order: true} -> "un penique"
+
+ Args:
+ decimal: GraphFst
+ deterministic: if True will provide a single transduction option,
+            for False multiple transductions are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, decimal: GraphFst, deterministic: bool = True):
+ super().__init__(name="money", kind="verbalize", deterministic=deterministic)
+
+ maj_singular_masc = (
+ pynutil.delete("currency_maj: \"")
+ + (pynini.closure(NEMO_NOT_QUOTE, 1) @ masc_singular)
+ + pynutil.delete("\"")
+ )
+ maj_singular_fem = (
+ pynutil.delete("currency_maj: \"")
+ + (pynini.closure(NEMO_NOT_QUOTE, 1) @ fem_singular)
+ + pynutil.delete("\"")
+ )
+
+ maj_plural_masc = (
+ pynutil.delete("currency_maj: \"")
+ + (pynini.closure(NEMO_NOT_QUOTE, 1) @ masc_plural)
+ + pynutil.delete("\"")
+ )
+ maj_plural_fem = (
+ pynutil.delete("currency_maj: \"")
+ + (pynini.closure(NEMO_NOT_QUOTE, 1) @ fem_plural)
+ + pynutil.delete("\"")
+ )
+
+ maj_masc = maj_plural_masc | maj_singular_masc # Tagger kept quantity resolution stable
+ maj_fem = maj_plural_fem | maj_singular_fem
+
+ min_singular_masc = (
+ pynutil.delete("currency_min: \"")
+ + (pynini.closure(NEMO_NOT_QUOTE, 1) @ masc_singular)
+ + pynutil.delete("\"")
+ )
+ min_singular_fem = (
+ pynutil.delete("currency_min: \"")
+ + (pynini.closure(NEMO_NOT_QUOTE, 1) @ fem_singular)
+ + pynutil.delete("\"")
+ )
+
+ min_plural_masc = (
+ pynutil.delete("currency_min: \"")
+ + (pynini.closure(NEMO_NOT_QUOTE, 1) @ masc_plural)
+ + pynutil.delete("\"")
+ )
+ min_plural_fem = (
+ pynutil.delete("currency_min: \"")
+ + (pynini.closure(NEMO_NOT_QUOTE, 1) @ fem_plural)
+ + pynutil.delete("\"")
+ )
+
+ min_masc = min_plural_masc | min_singular_masc
+ min_fem = min_plural_fem | min_singular_fem
+
+ fractional_part = (
+ pynutil.delete("fractional_part: \"") + pynini.closure(NEMO_NOT_QUOTE, 1) + pynutil.delete("\"")
+ )
+
+ integer_part = pynutil.delete("integer_part: \"") + pynini.closure(NEMO_NOT_QUOTE, 1) + pynutil.delete("\"")
+ optional_add_and = pynini.closure(pynutil.insert(pynini.union("con ", "y ")), 0, 1)
+
+ # *** currency_maj
+ graph_integer_masc = integer_part + NEMO_SPACE + maj_masc
+ graph_integer_fem = shift_cardinal_gender(integer_part) + NEMO_SPACE + maj_fem
+ graph_integer = graph_integer_fem | graph_integer_masc
+
+ # *** currency_maj + (***) | ((con) *** current_min)
+ graph_integer_with_minor_masc = (
+ integer_part
+ + NEMO_SPACE
+ + maj_masc
+ + NEMO_SPACE
+ + pynini.union(
+ optional_add_and + strip_cardinal_apocope(fractional_part),
+ (optional_add_and + fractional_part + NEMO_SPACE + min_masc),
+ (optional_add_and + shift_cardinal_gender(fractional_part) + NEMO_SPACE + min_fem),
+        )  # The minor currency may be of a different gender
+ + delete_preserve_order
+ )
+
+ graph_integer_with_minor_fem = (
+ shift_cardinal_gender(integer_part)
+ + NEMO_SPACE
+ + maj_fem
+ + NEMO_SPACE
+ + pynini.union(
+ optional_add_and + shift_cardinal_gender(fractional_part),
+ (optional_add_and + fractional_part + NEMO_SPACE + min_masc),
+ (optional_add_and + shift_cardinal_gender(fractional_part) + NEMO_SPACE + min_fem),
+        )  # The minor currency may be of a different gender
+ + delete_preserve_order
+ )
+
+ graph_integer_with_minor = graph_integer_with_minor_fem | graph_integer_with_minor_masc
+
+ # *** coma *** currency_maj
+ graph_decimal_masc = decimal.numbers + NEMO_SPACE + maj_masc
+
+        # Need to adjust the gender of the inner parts, so don't reuse decimal.numbers here (note: quantities are covered by the masculine path)
+ graph_decimal_fem = (
+ pynini.accep("integer_part: \"")
+ + shift_cardinal_gender(pynini.closure(NEMO_NOT_QUOTE, 1))
+ + pynini.accep("\"")
+ + NEMO_SPACE
+ + pynini.accep("fractional_part: \"")
+ + shift_number_gender(pynini.closure(NEMO_NOT_QUOTE, 1))
+ + pynini.accep("\"")
+ + NEMO_SIGMA
+ )
+ graph_decimal_fem @= decimal.numbers_no_quantity
+ graph_decimal_fem += NEMO_SPACE + maj_fem
+
+ graph_decimal = graph_decimal_fem | graph_decimal_masc
+ graph_decimal = (
+ pynini.cdrewrite(
+ pynutil.insert(" de"), "quantity: \"" + pynini.closure(NEMO_NOT_QUOTE, 1), "\"", NEMO_SIGMA
+ )
+ @ graph_decimal
+ ) # formally it's millones/billones de ***
+
+ # *** current_min
+ graph_minor_masc = fractional_part + NEMO_SPACE + min_masc + delete_preserve_order
+ graph_minor_fem = shift_cardinal_gender(fractional_part) + NEMO_SPACE + min_fem + delete_preserve_order
+ graph_minor = graph_minor_fem | graph_minor_masc
+
+ graph = graph_integer | graph_integer_with_minor | graph_decimal | graph_minor
+
+ delete_tokens = self.delete_tokens(graph)
+ self.fst = delete_tokens.optimize()
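
The feminine branches above use `shift_cardinal_gender` so the amount agrees with feminine currencies such as "libra" ("un libra" -> "una libra"). A minimal string-level sketch of that agreement, covering only the two productive endings; the real helper is an FST built from data files, so this is an assumption-laden illustration, not its actual behavior:

```python
import re

def shift_feminine(amount: str) -> str:
    """Rough analogue of shift_cardinal_gender for feminine currencies:
    "un" -> "una" and hundreds in "-cientos" -> "-cientas"."""
    amount = re.sub(r"\bun\b", "una", amount)         # un libra -> una libra
    amount = re.sub(r"cientos\b", "cientas", amount)  # doscientos -> doscientas
    return amount

print(shift_feminine("un"))                  # -> una
print(shift_feminine("doscientos treinta"))  # -> doscientas treinta
```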
diff --git a/nemo_text_processing/text_normalization/es/verbalizers/ordinal.py b/nemo_text_processing/text_normalization/es/verbalizers/ordinal.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/verbalizers/ordinal.py
@@ -0,0 +1,76 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import NEMO_NOT_QUOTE, NEMO_SIGMA, NEMO_SPACE, GraphFst
+from nemo_text_processing.text_normalization.es.graph_utils import shift_number_gender
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ PYNINI_AVAILABLE = False
+
+
+class OrdinalFst(GraphFst):
+ """
+ Finite state transducer for verbalizing ordinals
+    e.g. ordinal { integer: "tercer" } -> "tercero"
+ -> "tercera"
+ -> "tercer"
+
+ Args:
+ deterministic: if True will provide a single transduction option,
+            for False multiple transductions are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, deterministic: bool = True):
+ super().__init__(name="ordinal", kind="verbalize", deterministic=deterministic)
+
+ graph = pynutil.delete("integer: \"") + pynini.closure(NEMO_NOT_QUOTE, 1) + pynutil.delete("\"")
+
+        # masculine gender is left as is
+ graph_masc = graph + pynutil.delete(" morphosyntactic_features: \"gender_masc")
+
+ # shift gender
+ graph_fem_ending = graph @ pynini.cdrewrite(
+ pynini.cross("o", "a"), "", NEMO_SPACE | pynini.accep("[EOS]"), NEMO_SIGMA
+ )
+ graph_fem = shift_number_gender(graph_fem_ending) + pynutil.delete(" morphosyntactic_features: \"gender_fem")
+
+ # Apocope just changes tercero and primero. May occur if someone wrote 11.er (uncommon)
+ graph_apocope = (
+ pynini.cross("tercero", "tercer")
+ | pynini.cross("primero", "primer")
+            | pynini.cross("undécimo", "decimoprimer")
+ ) # In case someone wrote 11.er with deterministic
+ graph_apocope = (graph @ pynini.cdrewrite(graph_apocope, "", "", NEMO_SIGMA)) + pynutil.delete(
+ " morphosyntactic_features: \"apocope"
+ )
+
+ graph = graph_apocope | graph_masc | graph_fem
+
+ if not deterministic:
+ # Plural graph
+ graph_plural = pynini.cdrewrite(
+ pynutil.insert("s"), pynini.union("o", "a"), NEMO_SPACE | pynini.accep("[EOS]"), NEMO_SIGMA
+ )
+
+ graph |= (graph @ graph_plural) + pynutil.delete("/plural")
+
+ self.graph = graph + pynutil.delete("\"")
+
+ delete_tokens = self.delete_tokens(self.graph)
+ self.fst = delete_tokens.optimize()
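
The gender shift and apocope above boil down to two string rewrites: final "-o" becomes "-a" before a space or end of string, and "tercero"/"primero" shorten before masculine nouns. A pynini-free sketch covering only the cases named in the comments (the real `shift_number_gender` is data-driven):

```python
import re

# Partial apocope table; the grammar also handles "undécimo".
APOCOPE = {"tercero": "tercer", "primero": "primer"}

def feminine(ordinal: str) -> str:
    """Mirror the cdrewrite pynini.cross("o", "a") at word ends:
    "vigésimo cuarto" -> "vigésima cuarta"."""
    return re.sub(r"o\b", "a", ordinal)

def apocope(ordinal: str) -> str:
    """Shortened forms used before masculine nouns."""
    return " ".join(APOCOPE.get(word, word) for word in ordinal.split())

print(feminine("vigésimo cuarto"))  # -> vigésima cuarta
print(apocope("tercero"))           # -> tercer
```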
diff --git a/nemo_text_processing/text_normalization/es/verbalizers/telephone.py b/nemo_text_processing/text_normalization/es/verbalizers/telephone.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/verbalizers/telephone.py
@@ -0,0 +1,42 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import NEMO_NOT_QUOTE, GraphFst
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ PYNINI_AVAILABLE = False
+
+
+class TelephoneFst(GraphFst):
+ """
+ Finite state transducer for verbalizing telephone, e.g.
+ telephone { number_part: "uno dos tres uno dos tres cinco seis siete ocho" }
+ -> uno dos tres uno dos tres cinco seis siete ocho
+
+ Args:
+ deterministic: if True will provide a single transduction option,
+            for False multiple transductions are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, deterministic: bool = True):
+        super().__init__(name="telephone", kind="verbalize", deterministic=deterministic)
+
+ number_part = pynutil.delete("number_part: \"") + pynini.closure(NEMO_NOT_QUOTE, 1) + pynutil.delete("\"")
+ delete_tokens = self.delete_tokens(number_part)
+ self.fst = delete_tokens.optimize()
diff --git a/nemo_text_processing/text_normalization/es/verbalizers/time.py b/nemo_text_processing/text_normalization/es/verbalizers/time.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/verbalizers/time.py
@@ -0,0 +1,269 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import (
+ NEMO_NOT_QUOTE,
+ NEMO_SIGMA,
+ GraphFst,
+ delete_preserve_order,
+ delete_space,
+ insert_space,
+)
+from nemo_text_processing.text_normalization.es.utils import get_abs_path
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ alt_minutes = pynini.string_file(get_abs_path("data/time/alt_minutes.tsv"))
+
+ morning_times = pynini.string_file(get_abs_path("data/time/morning_times.tsv"))
+ afternoon_times = pynini.string_file(get_abs_path("data/time/afternoon_times.tsv"))
+ evening_times = pynini.string_file(get_abs_path("data/time/evening_times.tsv"))
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ alt_minutes = None
+
+ morning_times = None
+ afternoon_times = None
+ evening_times = None
+
+ PYNINI_AVAILABLE = False
+
+
+class TimeFst(GraphFst):
+ """
+ Finite state transducer for verbalizing time, e.g.
+ time { hours: "doce" minutes: "media" suffix: "a m" } -> doce y media de la noche
+        time { hours: "doce" } -> doce
+
+ Args:
+ deterministic: if True will provide a single transduction option,
+            for False multiple transductions are generated (used for audio-based normalization)
+ """
+
+ def __init__(self, deterministic: bool = True):
+ super().__init__(name="time", kind="verbalize", deterministic=deterministic)
+
+ change_minutes = pynini.cdrewrite(alt_minutes, pynini.accep("[BOS]"), pynini.accep("[EOS]"), NEMO_SIGMA)
+
+        morning_phrases = pynini.cross("am", "de la mañana")
+ afternoon_phrases = pynini.cross("pm", "de la tarde")
+ evening_phrases = pynini.cross("pm", "de la noche")
+
+ # For the 12's
+ mid_times = pynini.accep("doce")
+ mid_phrases = (
+            pynini.string_map([("pm", "del mediodía"), ("am", "de la noche")])
+ if deterministic
+ else pynini.string_map(
+ [
+                    ("pm", "de la mañana"),
+                    ("pm", "del día"),
+                    ("pm", "del mediodía"),
+ ("am", "de la noche"),
+ ("am", "de la medianoche"),
+ ]
+ )
+ )
+
+ hour = (
+ pynutil.delete("hours:")
+ + delete_space
+ + pynutil.delete("\"")
+ + pynini.closure(NEMO_NOT_QUOTE, 1)
+ + pynutil.delete("\"")
+ )
+ minute = (
+ pynutil.delete("minutes:")
+ + delete_space
+ + pynutil.delete("\"")
+ + pynini.closure(NEMO_NOT_QUOTE, 1)
+ + pynutil.delete("\"")
+ )
+ minute = (minute @ change_minutes) if deterministic else pynini.union(minute, minute @ change_minutes)
+
+ suffix = (
+ pynutil.delete("suffix:")
+ + delete_space
+ + pynutil.delete("\"")
+ + pynini.closure(NEMO_NOT_QUOTE, 1)
+ + pynutil.delete("\"")
+ )
+ zone = (
+ pynutil.delete("zone:")
+ + delete_space
+ + pynutil.delete("\"")
+ + pynini.closure(NEMO_NOT_QUOTE, 1)
+ + pynutil.delete("\"")
+ )
+ optional_zone = pynini.closure(delete_space + insert_space + zone, 0, 1)
+ second = (
+ pynutil.delete("seconds:")
+ + delete_space
+ + pynutil.delete("\"")
+ + pynini.closure(NEMO_NOT_QUOTE, 1)
+ + pynutil.delete("\"")
+ )
+
+ graph_hms = (
+ hour
+ + pynutil.insert(" horas ")
+ + delete_space
+ + minute
+ + pynutil.insert(" minutos y ")
+ + delete_space
+ + second
+ + pynutil.insert(" segundos")
+ )
+
+ graph_hm = hour + delete_space + pynutil.insert(" y ") + minute
+ graph_hm |= pynini.union(
+ (hour @ morning_times)
+ + delete_space
+ + pynutil.insert(" y ")
+ + minute
+ + delete_space
+ + insert_space
+ + (suffix @ morning_phrases),
+ (hour @ afternoon_times)
+ + delete_space
+ + pynutil.insert(" y ")
+ + minute
+ + delete_space
+ + insert_space
+ + (suffix @ afternoon_phrases),
+ (hour @ evening_times)
+ + delete_space
+ + pynutil.insert(" y ")
+ + minute
+ + delete_space
+ + insert_space
+ + (suffix @ evening_phrases),
+ (hour @ mid_times)
+ + delete_space
+ + pynutil.insert(" y ")
+ + minute
+ + delete_space
+ + insert_space
+ + (suffix @ mid_phrases),
+ )
+
+ graph_h = pynini.union(
+ hour,
+ (hour @ morning_times) + delete_space + insert_space + (suffix @ morning_phrases),
+ (hour @ afternoon_times) + delete_space + insert_space + (suffix @ afternoon_phrases),
+ (hour @ evening_times) + delete_space + insert_space + (suffix @ evening_phrases),
+ (hour @ mid_times) + delete_space + insert_space + (suffix @ mid_phrases),
+ )
+
+ graph = (graph_hms | graph_hm | graph_h) + optional_zone
+
+ if not deterministic:
+ graph_style_1 = pynutil.delete(" style: \"1\"")
+ graph_style_2 = pynutil.delete(" style: \"2\"")
+
+ graph_menos = hour + delete_space + pynutil.insert(" menos ") + minute + graph_style_1
+ graph_menos |= (
+ (hour @ morning_times)
+ + delete_space
+ + pynutil.insert(" menos ")
+ + minute
+ + delete_space
+ + insert_space
+ + (suffix @ morning_phrases)
+ + graph_style_1
+ )
+ graph_menos |= (
+ (hour @ afternoon_times)
+ + delete_space
+ + pynutil.insert(" menos ")
+ + minute
+ + delete_space
+ + insert_space
+ + (suffix @ afternoon_phrases)
+ + graph_style_1
+ )
+ graph_menos |= (
+ (hour @ evening_times)
+ + delete_space
+ + pynutil.insert(" menos ")
+ + minute
+ + delete_space
+ + insert_space
+ + (suffix @ evening_phrases)
+ + graph_style_1
+ )
+ graph_menos |= (
+ (hour @ mid_times)
+ + delete_space
+ + pynutil.insert(" menos ")
+ + minute
+ + delete_space
+ + insert_space
+ + (suffix @ mid_phrases)
+ + graph_style_1
+ )
+ graph_menos += optional_zone
+
+ graph_para = minute + pynutil.insert(" para las ") + delete_space + hour + graph_style_2
+ graph_para |= (
+ minute
+ + pynutil.insert(" para las ")
+ + delete_space
+ + (hour @ morning_times)
+ + delete_space
+ + insert_space
+ + (suffix @ morning_phrases)
+ + graph_style_2
+ )
+ graph_para |= (
+ minute
+ + pynutil.insert(" para las ")
+ + delete_space
+ + (hour @ afternoon_times)
+ + delete_space
+ + insert_space
+ + (suffix @ afternoon_phrases)
+ + graph_style_2
+ )
+ graph_para |= (
+ minute
+ + pynutil.insert(" para las ")
+ + delete_space
+ + (hour @ evening_times)
+ + delete_space
+ + insert_space
+ + (suffix @ evening_phrases)
+ + graph_style_2
+ )
+ graph_para |= (
+ minute
+ + pynutil.insert(" para las ")
+ + delete_space
+ + (hour @ mid_times)
+ + delete_space
+ + insert_space
+ + (suffix @ mid_phrases)
+ + graph_style_2
+ )
+ graph_para += optional_zone
+ graph_para @= pynini.cdrewrite(
+ pynini.cross(" las ", " la "), "para", "una", NEMO_SIGMA
+ ) # Need agreement with one
+
+ graph |= graph_menos | graph_para
+ delete_tokens = self.delete_tokens(graph + delete_preserve_order)
+ self.fst = delete_tokens.optimize()
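
The non-deterministic `graph_menos` and `graph_para` branches above generate the two colloquial readings for minutes past the half hour, plus the "para la una" agreement fix from the final `cdrewrite`. A simplified sketch of the pairing, with a hypothetical hour/minute lexicon standing in for the grammar's data files:

```python
# Illustrative lexicon only; the grammar reads these from .tsv data files.
HOURS = ["doce", "una", "dos", "tres", "cuatro", "cinco", "seis",
         "siete", "ocho", "nueve", "diez", "once"]
MINUTES = {40: "veinte", 45: "cuarto", 50: "diez"}  # spelled-out 60 - m

def alternative_readings(hour: int, minute: int):
    """Sketch of the "menos"/"para" styles TimeFst adds when
    deterministic=False, e.g. 7:40 -> "ocho menos veinte" and
    "veinte para las ocho"."""
    remainder = MINUTES[minute]
    next_hour = HOURS[(hour + 1) % 12]
    # Agreement with "una", as in the cdrewrite cross(" las ", " la ").
    article = "la" if next_hour == "una" else "las"
    return [f"{next_hour} menos {remainder}",
            f"{remainder} para {article} {next_hour}"]

print(alternative_readings(7, 40))
# -> ['ocho menos veinte', 'veinte para las ocho']
```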
diff --git a/nemo_text_processing/text_normalization/es/verbalizers/verbalize.py b/nemo_text_processing/text_normalization/es/verbalizers/verbalize.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/verbalizers/verbalize.py
@@ -0,0 +1,73 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import GraphFst
+from nemo_text_processing.text_normalization.en.verbalizers.whitelist import WhiteListFst
+from nemo_text_processing.text_normalization.es.verbalizers.cardinal import CardinalFst
+from nemo_text_processing.text_normalization.es.verbalizers.date import DateFst
+from nemo_text_processing.text_normalization.es.verbalizers.decimals import DecimalFst
+from nemo_text_processing.text_normalization.es.verbalizers.electronic import ElectronicFst
+from nemo_text_processing.text_normalization.es.verbalizers.fraction import FractionFst
+from nemo_text_processing.text_normalization.es.verbalizers.measure import MeasureFst
+from nemo_text_processing.text_normalization.es.verbalizers.money import MoneyFst
+from nemo_text_processing.text_normalization.es.verbalizers.ordinal import OrdinalFst
+from nemo_text_processing.text_normalization.es.verbalizers.telephone import TelephoneFst
+from nemo_text_processing.text_normalization.es.verbalizers.time import TimeFst
+
+
+class VerbalizeFst(GraphFst):
+ """
+ Composes other verbalizer grammars.
+    For deployment, this grammar will be compiled and exported to an OpenFst Finite State Archive (FAR) file.
+    More details on deployment can be found at NeMo/tools/text_processing_deployment.
+
+ Args:
+ deterministic: if True will provide a single transduction option,
+ for False multiple options (used for audio-based normalization)
+ """
+
+ def __init__(self, deterministic: bool = True):
+ super().__init__(name="verbalize", kind="verbalize", deterministic=deterministic)
+ cardinal = CardinalFst(deterministic=deterministic)
+ cardinal_graph = cardinal.fst
+ ordinal = OrdinalFst(deterministic=deterministic)
+ ordinal_graph = ordinal.fst
+ decimal = DecimalFst(deterministic=deterministic)
+ decimal_graph = decimal.fst
+ fraction = FractionFst(deterministic=deterministic)
+ fraction_graph = fraction.fst
+ date = DateFst(deterministic=deterministic)
+ date_graph = date.fst
+ measure = MeasureFst(cardinal=cardinal, decimal=decimal, fraction=fraction, deterministic=deterministic)
+ measure_graph = measure.fst
+ electronic = ElectronicFst(deterministic=deterministic)
+ electronic_graph = electronic.fst
+ whitelist_graph = WhiteListFst(deterministic=deterministic).fst
+ money_graph = MoneyFst(decimal=decimal, deterministic=deterministic).fst
+ telephone_graph = TelephoneFst(deterministic=deterministic).fst
+ time_graph = TimeFst(deterministic=deterministic).fst
+
+ graph = (
+ cardinal_graph
+ | measure_graph
+ | decimal_graph
+ | ordinal_graph
+ | date_graph
+ | electronic_graph
+ | money_graph
+ | fraction_graph
+ | whitelist_graph
+ | telephone_graph
+ | time_graph
+ )
+ self.fst = graph
diff --git a/nemo_text_processing/text_normalization/es/verbalizers/verbalize_final.py b/nemo_text_processing/text_normalization/es/verbalizers/verbalize_final.py
new file mode 100644
--- /dev/null
+++ b/nemo_text_processing/text_normalization/es/verbalizers/verbalize_final.py
@@ -0,0 +1,52 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+from nemo_text_processing.text_normalization.en.graph_utils import GraphFst, delete_extra_space, delete_space
+from nemo_text_processing.text_normalization.en.verbalizers.word import WordFst
+from nemo_text_processing.text_normalization.es.verbalizers.verbalize import VerbalizeFst
+
+try:
+ import pynini
+ from pynini.lib import pynutil
+
+ PYNINI_AVAILABLE = True
+
+except (ModuleNotFoundError, ImportError):
+ PYNINI_AVAILABLE = False
+
+
+class VerbalizeFinalFst(GraphFst):
+ """
+ Finite state transducer that verbalizes an entire sentence
+
+ Args:
+ deterministic: if True will provide a single transduction option,
+ for False multiple options (used for audio-based normalization)
+ """
+
+ def __init__(self, deterministic: bool = True):
+ super().__init__(name="verbalize_final", kind="verbalize", deterministic=deterministic)
+ verbalize = VerbalizeFst(deterministic=deterministic).fst
+ word = WordFst(deterministic=deterministic).fst
+ types = verbalize | word
+ graph = (
+ pynutil.delete("tokens")
+ + delete_space
+ + pynutil.delete("{")
+ + delete_space
+ + types
+ + delete_space
+ + pynutil.delete("}")
+ )
+ graph = delete_space + pynini.closure(graph + delete_extra_space) + graph + delete_space
+ self.fst = graph
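
`VerbalizeFinalFst` deletes the `tokens { ... }` wrappers around each verbalized token and joins the results with single spaces. A rough string-level analogue, assuming (as an illustration) that the inner grammars have already reduced each token to plain words:

```python
import re

def strip_tokens(serialized: str) -> str:
    """Drop "tokens { ... }" wrappers and join the contents,
    mimicking pynutil.delete("tokens") + delete("{") + delete("}")
    with delete_extra_space between tokens."""
    inner = re.findall(r"tokens\s*\{\s*(.*?)\s*\}", serialized)
    return " ".join(inner)

print(strip_tokens('tokens { doce y media } tokens { de la noche }'))
# -> doce y media de la noche
```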
diff --git a/nemo_text_processing/text_normalization/normalize.py b/nemo_text_processing/text_normalization/normalize.py
--- a/nemo_text_processing/text_normalization/normalize.py
+++ b/nemo_text_processing/text_normalization/normalize.py
@@ -46,8 +46,8 @@
class Normalizer:
"""
- Normalizer class that converts text from written to spoken form.
- Useful for TTS preprocessing.
+ Normalizer class that converts text from written to spoken form.
+ Useful for TTS preprocessing.
Args:
input_case: expected input capitalization
@@ -83,10 +83,11 @@ def __init__(
from nemo_text_processing.text_normalization.ru.taggers.tokenize_and_classify import ClassifyFst
from nemo_text_processing.text_normalization.ru.verbalizers.verbalize_final import VerbalizeFinalFst
elif lang == 'de':
- # Ru TN only support non-deterministic cases and produces multiple normalization options
- # use normalize_with_audio.py
from nemo_text_processing.text_normalization.de.taggers.tokenize_and_classify import ClassifyFst
from nemo_text_processing.text_normalization.de.verbalizers.verbalize_final import VerbalizeFinalFst
+ elif lang == 'es':
+ from nemo_text_processing.text_normalization.es.taggers.tokenize_and_classify import ClassifyFst
+ from nemo_text_processing.text_normalization.es.verbalizers.verbalize_final import VerbalizeFinalFst
self.tagger = ClassifyFst(
input_case=input_case,
deterministic=deterministic,
@@ -106,7 +107,7 @@ def __init__(
def normalize_list(self, texts: List[str], verbose=False, punct_post_process: bool = False) -> List[str]:
"""
- NeMo text normalizer
+ NeMo text normalizer
Args:
texts: list of input strings
@@ -357,7 +358,7 @@ def select_verbalizer(self, lattice: 'pynini.FstLike') -> str:
def parse_args():
parser = ArgumentParser()
parser.add_argument("input_string", help="input string", type=str)
- parser.add_argument("--language", help="language", choices=["en", "de"], default="en", type=str)
+ parser.add_argument("--language", help="language", choices=["en", "de", "es"], default="en", type=str)
parser.add_argument(
"--input_case", help="input capitalization", choices=["lower_cased", "cased"], default="cased", type=str
)
diff --git a/nemo_text_processing/text_normalization/normalize_with_audio.py b/nemo_text_processing/text_normalization/normalize_with_audio.py
--- a/nemo_text_processing/text_normalization/normalize_with_audio.py
+++ b/nemo_text_processing/text_normalization/normalize_with_audio.py
@@ -55,15 +55,15 @@
"audio_data" - path to the audio file
"text" - raw text
"pred_text" - ASR model prediction
-
+
See https://github.com/NVIDIA/NeMo/blob/main/examples/asr/transcribe_speech.py on how to add ASR predictions
-
+
When the manifest is ready, run:
python normalize_with_audio.py \
--audio_data PATH/TO/MANIFEST.JSON \
- --language en
-
-
+ --language en
+
+
To run with a single audio file, specify path to audio and text with:
python normalize_with_audio.py \
--audio_data PATH/TO/AUDIO.WAV \
@@ -71,18 +71,18 @@
--text raw text OR PATH/TO/.TXT/FILE
--model QuartzNet15x5Base-En \
--verbose
-
+
To see possible normalization options for a text input without an audio file (could be used for debugging), run:
python python normalize_with_audio.py --text "RAW TEXT"
-
+
Specify `--cache_dir` to generate .far grammars once and re-used them for faster inference
"""
class NormalizerWithAudio(Normalizer):
"""
- Normalizer class that converts text from written to spoken form.
- Useful for TTS preprocessing.
+ Normalizer class that converts text from written to spoken form.
+ Useful for TTS preprocessing.
Args:
input_case: expected input capitalization
@@ -282,7 +282,7 @@ def parse_args():
"--input_case", help="input capitalization", choices=["lower_cased", "cased"], default="cased", type=str
)
parser.add_argument(
- "--language", help="Select target language", choices=["en", "ru", "de"], default="en", type=str
+ "--language", help="Select target language", choices=["en", "ru", "de", "es"], default="en", type=str
)
parser.add_argument("--audio_data", default=None, help="path to an audio file or .json manifest")
parser.add_argument(
diff --git a/tools/text_processing_deployment/pynini_export.py b/tools/text_processing_deployment/pynini_export.py
--- a/tools/text_processing_deployment/pynini_export.py
+++ b/tools/text_processing_deployment/pynini_export.py
@@ -67,7 +67,7 @@ def tn_grammars(**kwargs):
def export_grammars(output_dir, grammars):
"""
- Exports tokenizer_and_classify and verbalize Fsts as OpenFst finite state archive (FAR) files.
+ Exports tokenizer_and_classify and verbalize Fsts as OpenFst finite state archive (FAR) files.
Args:
output_dir: directory to export FAR files to. Subdirectories will be created for tagger and verbalizer respectively.
@@ -109,7 +109,7 @@ def parse_args():
if __name__ == '__main__':
args = parse_args()
- if args.language in ['ru', 'fr', 'es', 'vi'] and args.grammars == 'tn_grammars':
+ if args.language in ['ru', 'fr', 'vi'] and args.grammars == 'tn_grammars':
raise ValueError('Only ITN grammars could be deployed in Sparrowhawk for the selected languages.')
if args.language == 'en':
@@ -148,6 +148,10 @@ def parse_args():
from nemo_text_processing.inverse_text_normalization.es.verbalizers.verbalize import (
VerbalizeFst as ITNVerbalizeFst,
)
+ from nemo_text_processing.text_normalization.es.taggers.tokenize_and_classify import (
+ ClassifyFst as TNClassifyFst,
+ )
+ from nemo_text_processing.text_normalization.es.verbalizers.verbalize import VerbalizeFst as TNVerbalizeFst
elif args.language == 'fr':
from nemo_text_processing.inverse_text_normalization.fr.taggers.tokenize_and_classify import (
ClassifyFst as ITNClassifyFst,
</patch>
diff --git a/tests/nemo_text_processing/es/data_text_normalization/test_cases_cardinal.txt b/tests/nemo_text_processing/es/data_text_normalization/test_cases_cardinal.txt
new file mode 100644
--- /dev/null
+++ b/tests/nemo_text_processing/es/data_text_normalization/test_cases_cardinal.txt
@@ -0,0 +1,86 @@
+1~un
+2~dos
+3~tres
+4~cuatro
+5~cinco
+6~seis
+7~siete
+8~ocho
+9~nueve
+10~diez
+11~once
+12~doce
+13~trece
+14~catorce
+15~quince
+16~dieciséis
+17~diecisiete
+18~dieciocho
+19~diecinueve
+20~veinte
+21~veintiún
+22~veintidós
+23~veintitrés
+24~veinticuatro
+25~veinticinco
+26~veintiséis
+27~veintisiete
+28~veintiocho
+29~veintinueve
+30~treinta
+31~treinta y un
+40~cuarenta
+41~cuarenta y un
+50~cincuenta
+51~cincuenta y un
+60~sesenta
+70~setenta
+80~ochenta
+90~noventa
+100~cien
+101~ciento un
+120~ciento veinte
+121~ciento veintiún
+130~ciento treinta
+131~ciento treinta y un
+200~doscientos
+201~doscientos un
+300~trescientos
+301~trescientos un
+1000~mil
+1 000~mil
+1.000~mil
+1001~mil un
+1010~mil diez
+1020~mil veinte
+1021~mil veintiún
+1100~mil cien
+1101~mil ciento un
+1110~mil ciento diez
+1111~mil ciento once
+1234~mil doscientos treinta y cuatro
+2000~dos mil
+2001~dos mil un
+2010~dos mil diez
+2020~dos mil veinte
+2100~dos mil cien
+2101~dos mil ciento un
+2110~dos mil ciento diez
+2111~dos mil ciento once
+2222~dos mil doscientos veintidós
+10000~diez mil
+10 000~diez mil
+10.000~diez mil
+100000~cien mil
+100 000~cien mil
+100.000~cien mil
+1 000 000~un millón
+1.000.000~un millón
+1 234 568~un millón doscientos treinta y cuatro mil quinientos sesenta y ocho
+2.000.000~dos millones
+1.000.000.000~mil millones
+2.000.000.000~dos mil millones
+3 000 000 000 000~tres billones
+3.000.000.000.000~tres billones
+100 000 000 000 000 000 000 000~cien mil trillones
+100 000 000 000 000 000 000 001~cien mil trillones un
diff --git a/tests/nemo_text_processing/es/data_text_normalization/test_cases_date.txt b/tests/nemo_text_processing/es/data_text_normalization/test_cases_date.txt
new file mode 100644
--- /dev/null
+++ b/tests/nemo_text_processing/es/data_text_normalization/test_cases_date.txt
@@ -0,0 +1,13 @@
+1 enero~primero de enero
+5 febrero~cinco de febrero
+20 de marzo~veinte de marzo
+abril 30~treinta de abril
+31 marzo~treinta y uno de marzo
+10 mayo 1990~diez de mayo de mil novecientos noventa
+junio 11 2000~once de junio de dos mil
+30 julio del 2020~treinta de julio del dos mil veinte
+30-2-1990~treinta de febrero de mil novecientos noventa
+30/2/1990~treinta de febrero de mil novecientos noventa
+30.2.1990~treinta de febrero de mil novecientos noventa
+1990-2-30~treinta de febrero de mil novecientos noventa
+1990-02-30~treinta de febrero de mil novecientos noventa
\ No newline at end of file
diff --git a/tests/nemo_text_processing/es/data_text_normalization/test_cases_decimal.txt b/tests/nemo_text_processing/es/data_text_normalization/test_cases_decimal.txt
new file mode 100644
--- /dev/null
+++ b/tests/nemo_text_processing/es/data_text_normalization/test_cases_decimal.txt
@@ -0,0 +1,27 @@
+0,1~cero coma un
+0,01~cero coma cero un
+0,010~cero coma cero uno cero
+1,0101~uno coma cero uno cero un
+0,0~cero coma cero
+1,0~uno coma cero
+1,00~uno coma cero cero
+1,1~uno coma un
+233,32~doscientos treinta y tres coma treinta y dos
+32,22 millones~treinta y dos coma veintidós millones
+320 320,22 millones~trescientos veinte mil trescientos veinte coma veintidós millones
+5.002,232~cinco mil dos coma doscientos treinta y dos
+3,2 trillones~tres coma dos trillones
+3 millones~tres millones
+3 000 millones~tres mil millones
+3000 millones~tres mil millones
+3.000 millones~tres mil millones
+3.001 millones~tres mil un millones
+1 millón~un millón
+1 000 millones~mil millones
+1000 millones~mil millones
+1.000 millones~mil millones
+2,33302 millones~dos coma tres tres tres cero dos millones
+1,5332 millón~uno coma cinco tres tres dos millón
+1,53322 millón~uno coma cinco tres tres dos dos millón
+1,53321 millón~uno coma cinco tres tres dos un millón
+101,010101 millones~ciento uno coma cero uno cero uno cero un millones
\ No newline at end of file
diff --git a/tests/nemo_text_processing/es/data_text_normalization/test_cases_electronic.txt b/tests/nemo_text_processing/es/data_text_normalization/test_cases_electronic.txt
new file mode 100644
--- /dev/null
+++ b/tests/nemo_text_processing/es/data_text_normalization/test_cases_electronic.txt
@@ -0,0 +1,12 @@
+a.bc@gmail.com~a punto b c arroba gmail punto com
+cdf@abc.edu~c d f arroba a b c punto e d u
+abc@gmail.abc~a b c arroba gmail punto a b c
+abc@abc.com~a b c arroba a b c punto com
+asdf123@abc.com~a s d f uno dos tres arroba a b c punto com
+a1b2@abc.com~a uno b dos arroba a b c punto com
+ab3.sdd.3@gmail.com~a b tres punto s d d punto tres arroba gmail punto com
+https://www.nvidia.com~h t t p s dos puntos barra barra w w w punto nvidia punto com
+www.nvidia.com~w w w punto nvidia punto com
+www.abc.es/efg~w w w punto a b c punto es barra e f g
+www.abc.es~w w w punto a b c punto es
+http://www.ourdailynews.com.sm~h t t p dos puntos barra barra w w w punto o u r d a i l y n e w s punto com punto s m
\ No newline at end of file
diff --git a/tests/nemo_text_processing/es/data_text_normalization/test_cases_fraction.txt b/tests/nemo_text_processing/es/data_text_normalization/test_cases_fraction.txt
new file mode 100644
--- /dev/null
+++ b/tests/nemo_text_processing/es/data_text_normalization/test_cases_fraction.txt
@@ -0,0 +1,76 @@
+1/2~medio
+1 1/2~uno y medio
+3/2~tres medios
+1 3/2~uno y tres medios
+1/3~un tercio
+2/3~dos tercios
+1/4~un cuarto
+2/4~dos cuartos
+1/5~un quinto
+2/5~dos quintos
+1/6~un sexto
+2/6~dos sextos
+1/7~un séptimo
+2/7~dos séptimos
+1/8~un octavo
+2/8~dos octavos
+1/9~un noveno
+2/9~dos novenos
+1/10~un décimo
+2/10~dos décimos
+1/11~un onceavo
+1/12~un doceavo
+1/13~un treceavo
+1/14~un catorceavo
+1/15~un quinceavo
+1/16~un dieciseisavo
+1/17~un diecisieteavo
+1/18~un dieciochoavo
+1/19~un diecinueveavo
+1/20~un veinteavo
+1/21~un veintiunavo
+1/22~un veintidosavo
+1/30~un treintavo
+1/31~un treintaiunavo
+1/40~un cuarentavo
+1/41~un cuarentaiunavo
+1/50~un cincuentavo
+1/60~un sesentavo
+1/70~un setentavo
+1/80~un ochentavo
+1/90~un noventavo
+1/100~un centésimo
+2/100~dos centésimos
+1 2/100~uno y dos centésimos
+1/101~uno sobre ciento uno
+1/110~uno sobre ciento diez
+1/111~uno sobre ciento once
+1/112~uno sobre ciento doce
+1/123~uno sobre ciento veintitrés
+1/134~uno sobre ciento treinta y cuatro
+1/200~un ducentésimo
+1/201~uno sobre doscientos uno
+1/234~uno sobre doscientos treinta y cuatro
+1/300~un tricentésimo
+1/345~uno sobre trescientos cuarenta y cinco
+1/400~un cuadringentésimo
+1/456~uno sobre cuatrocientos cincuenta y seis
+1/500~un quingentésimo
+1/600~un sexcentésimo
+1/700~un septingentésimo
+1/800~un octingentésimo
+1/900~un noningentésimo
+1/1000~un milésimo
+2/1000~dos milésimos
+1 2/1000~uno y dos milésimos
+1/1001~uno sobre mil uno
+1/1100~uno sobre mil cien
+1/1200~uno sobre mil doscientos
+1/1234~uno sobre mil doscientos treinta y cuatro
+1/2000~un dosmilésimo
+1/5000~un cincomilésimo
+1/10000~un diezmilésimo
+1/100.000~un cienmilésimo
+1/1.000.000~un millonésimo
+1/100.000.000~un cienmillonésimo
+1/1.200.000.000~un mildoscientosmillonésimo
\ No newline at end of file
diff --git a/tests/nemo_text_processing/es/data_text_normalization/test_cases_measure.txt b/tests/nemo_text_processing/es/data_text_normalization/test_cases_measure.txt
new file mode 100644
--- /dev/null
+++ b/tests/nemo_text_processing/es/data_text_normalization/test_cases_measure.txt
@@ -0,0 +1,17 @@
+1,2-a~uno coma dos a
+a-5~a cinco
+200 m~doscientos metros
+3 h~tres horas
+1 h~una hora
+245 mph~doscientas cuarenta y cinco millas por hora
+2 kg~dos kilogramos
+60,2400 kg~sesenta coma dos cuatro cero cero kilogramos
+-60,2400 kg~menos sesenta coma dos cuatro cero cero kilogramos
+8,52 %~ocho coma cincuenta y dos por ciento
+-8,52 %~menos ocho coma cincuenta y dos por ciento
+1 %~uno por ciento
+3 cm~tres centímetros
+4 s~cuatro segundos
+5 l~cinco litros
+4,51/s~cuatro coma cincuenta y uno por segundo
+0,0101 s~cero coma cero uno cero un segundos
\ No newline at end of file
diff --git a/tests/nemo_text_processing/es/data_text_normalization/test_cases_money.txt b/tests/nemo_text_processing/es/data_text_normalization/test_cases_money.txt
new file mode 100644
--- /dev/null
+++ b/tests/nemo_text_processing/es/data_text_normalization/test_cases_money.txt
@@ -0,0 +1,24 @@
+$1~un dólar
+1 $~un dólar
+$1,50~un dólar cincuenta centavos
+1,50 $~un dólar cincuenta centavos
+£200.000.001~doscientos millones una libras
+200.000.001 £~doscientos millones una libras
+2 billones de euros~dos billones de euros
+€2 billones~dos billones de euros
+€ 2 billones~dos billones de euros
+€ 2,3 billones~dos coma tres billones de euros
+2,3 billones de euros~dos coma tres billones de euros
+€5,50~cinco euros cincuenta céntimos
+5,50 €~cinco euros cincuenta céntimos
+5,01 €~cinco euros un céntimo
+5,01 £~cinco libras un penique
+21 czk~veintiuna coronas checas
+czk21~veintiuna coronas checas
+czk21,1 millones~veintiuna coma una millones de coronas checas
+czk 5,50 billones~cinco coma cincuenta billones de coronas checas
+rs 5,50 billones~cinco coma cincuenta billones de rupias
+czk5,50 billones~cinco coma cincuenta billones de coronas checas
+0,55 $~cincuenta y cinco centavos
+1,01 $~un dólar un centavo
+¥12,05~doce yenes cinco centavos
\ No newline at end of file
diff --git a/tests/nemo_text_processing/es/data_text_normalization/test_cases_normalize_with_audio.txt b/tests/nemo_text_processing/es/data_text_normalization/test_cases_normalize_with_audio.txt
new file mode 100644
--- /dev/null
+++ b/tests/nemo_text_processing/es/data_text_normalization/test_cases_normalize_with_audio.txt
@@ -0,0 +1,120 @@
+~121
+ciento veintiún
+ciento veintiuno
+ciento veintiuna
+121
+~200
+doscientos
+doscientas
+200
+~201
+doscientos un
+doscientos uno
+doscientas una
+201
+~1
+un
+uno
+una
+1
+~550.000.001
+quinientos cincuenta millones un
+quinientos cincuenta millones una
+quinientos cincuenta millones uno
+550.000.001
+~500.501
+quinientos mil quinientos un
+quinientos mil quinientos uno
+quinientas mil quinientas una
+500.501
+~500.001.º
+quinientosmilésimo primero
+quingentésimo milésimo primero
+quinientosmilésimos primeros
+quingentésimos milésimos primeros
+500.001.º
+~500.001.ª
+quinientasmilésima primera
+quingentésima milésima primera
+quinientasmilésimas primeras
+quingentésimas milésimas primeras
+500.001.ª
+~11.ª
+décima primera
+decimoprimera
+décimas primeras
+decimoprimeras
+undécima
+undécimas
+11.ª
+~11.º
+décimo primero
+decimoprimero
+décimos primeros
+decimoprimeros
+undécimo
+undécimos
+11.º
+~12.º
+décimo segundo
+decimosegundo
+décimos segundos
+decimosegundos
+duodécimo
+duodécimos
+12.º
+~200,0101
+doscientos coma cero uno cero un
+doscientos coma cero uno cero uno
+doscientas coma cero una cero una
+200,0101
+~1.000.200,21
+un millón doscientos coma veintiún
+un millón doscientos coma veintiuno
+un millón doscientas coma veintiuna
+un millón doscientos coma dos un
+un millón doscientos coma dos uno
+un millón doscientas coma dos una
+1.000.200,21
+~1/12
+un doceavo
+una doceava parte
+un duodécimo
+una duodécima parte
+uno sobre doce
+1/12
+~5/200
+cinco ducentésimos
+cinco ducentésimas partes
+cinco sobre doscientos
+5/200
+~1 5/3
+uno y cinco tercios
+una y cinco terceras partes
+uno y cinco sobre tres
+una y cinco sobre tres
+~1/5/2020
+primero de mayo de dos mil veinte
+uno de mayo de dos mil veinte
+cinco de enero de dos mil veinte
+~$5,50
+cinco dólares con cincuenta
+cinco dólares y cincuenta
+cinco dólares cincuenta
+cinco dólares con cincuenta centavos
+cinco dólares y cincuenta centavos
+cinco dólares cincuenta centavos
+~2.30 h
+dos y treinta
+dos y media
+tres menos treinta
+tres menos media
+treinta para las tres
+~12.30 a.m.
+doce y treinta de la medianoche
+doce y treinta de la noche
+doce y media de la medianoche
+doce y media de la noche
+una menos treinta de la mañana
+una menos media de la mañana
+treinta para la una de la mañana
\ No newline at end of file
diff --git a/tests/nemo_text_processing/es/data_text_normalization/test_cases_ordinal.txt b/tests/nemo_text_processing/es/data_text_normalization/test_cases_ordinal.txt
new file mode 100644
--- /dev/null
+++ b/tests/nemo_text_processing/es/data_text_normalization/test_cases_ordinal.txt
@@ -0,0 +1,137 @@
+1.ᵉʳ~primer
+1.º~primero
+1.ª~primera
+2.º~segundo
+2.ª~segunda
+ii~segundo
+II~segundo
+3.ᵉʳ~tercer
+3.º~tercero
+3.ª~tercera
+4.º~cuarto
+4.ª~cuarta
+5.º~quinto
+5.ª~quinta
+6.º~sexto
+6.ª~sexta
+7.º~séptimo
+7.ª~séptima
+8.º~octavo
+8.ª~octava
+9.º~noveno
+9.ª~novena
+10.º~décimo
+10.ª~décima
+11.ᵉʳ~decimoprimer
+11.º~undécimo
+11.ª~undécima
+12.º~duodécimo
+12.ª~duodécima
+13.ᵉʳ~decimotercer
+13.º~decimotercero
+13.ª~decimotercera
+14.º~decimocuarto
+14.ª~decimocuarta
+15.º~decimoquinto
+15.ª~decimoquinta
+16.º~decimosexto
+16.ª~decimosexta
+17.º~decimoséptimo
+17.ª~decimoséptima
+18.º~decimoctavo
+18.ª~decimoctava
+19.º~decimonoveno
+19.ª~decimonovena
+20.º~vigésimo
+20.ª~vigésima
+21.ᵉʳ~vigesimoprimer
+21.º~vigesimoprimero
+21.ª~vigesimoprimera
+30.º~trigésimo
+30.ª~trigésima
+31.ᵉʳ~trigésimo primer
+31.º~trigésimo primero
+31.ª~trigésima primera
+40.º~cuadragésimo
+40.ª~cuadragésima
+41.ᵉʳ~cuadragésimo primer
+41.º~cuadragésimo primero
+41.ª~cuadragésima primera
+50.º~quincuagésimo
+50.ª~quincuagésima
+51.ᵉʳ~quincuagésimo primer
+51.º~quincuagésimo primero
+51.ª~quincuagésima primera
+60.º~sexagésimo
+60.ª~sexagésima
+70.º~septuagésimo
+70.ª~septuagésima
+80.º~octogésimo
+80.ª~octogésima
+90.º~nonagésimo
+90.ª~nonagésima
+100.º~centésimo
+100.ª~centésima
+101.ᵉʳ~centésimo primer
+101.º~centésimo primero
+101.ª~centésima primera
+134.º~centésimo trigésimo cuarto
+134.ª~centésima trigésima cuarta
+200.º~ducentésimo
+200.ª~ducentésima
+300.º~tricentésimo
+300.ª~tricentésima
+400.º~cuadringentésimo
+400.ª~cuadringentésima
+500.º~quingentésimo
+500.ª~quingentésima
+600.º~sexcentésimo
+600.ª~sexcentésima
+700.º~septingentésimo
+700.ª~septingentésima
+800.º~octingentésimo
+800.ª~octingentésima
+900.º~noningentésimo
+900.ª~noningentésima
+1000.º~milésimo
+1000.ª~milésima
+1001.ᵉʳ~milésimo primer
+1 000.º~milésimo
+1 000.ª~milésima
+1 001.ᵉʳ~milésimo primer
+1.000.º~milésimo
+1.000.ª~milésima
+1.001.ᵉʳ~milésimo primer
+1248.º~milésimo ducentésimo cuadragésimo octavo
+1248.ª~milésima ducentésima cuadragésima octava
+2000.º~dosmilésimo
+100 000.º~cienmilésimo
+i~primero
+I~primero
+ii~segundo
+II~segundo
+iii~tercero
+III~tercero
+iv~cuarto
+IV~cuarto
+V~quinto
+VI~sexto
+VII~séptimo
+VIII~octavo
+IX~noveno
+X~décimo
+XI~undécimo
+XII~duodécimo
+XIII~decimotercero
+XX~vigésimo
+XXI~vigesimoprimero
+XXX~trigésimo
+XL~cuadragésimo
+L~quincuagésimo
+XC~nonagésimo
+C~centésimo
+CD~cuadringentésimo
+D~quingentésimo
+CM~noningentésimo
+999.º~noningentésimo nonagésimo noveno
+cmxcix~noningentésimo nonagésimo noveno
\ No newline at end of file
diff --git a/tests/nemo_text_processing/es/data_text_normalization/test_cases_telephone.txt b/tests/nemo_text_processing/es/data_text_normalization/test_cases_telephone.txt
new file mode 100644
--- /dev/null
+++ b/tests/nemo_text_processing/es/data_text_normalization/test_cases_telephone.txt
@@ -0,0 +1,3 @@
+123-123-5678~uno dos tres uno dos tres cinco seis siete ocho
+123-456-789~uno dos tres cuatro cinco seis siete ocho nueve
+1234-5678~uno dos tres cuatro cinco seis siete ocho
\ No newline at end of file
diff --git a/tests/nemo_text_processing/es/data_text_normalization/test_cases_time.txt b/tests/nemo_text_processing/es/data_text_normalization/test_cases_time.txt
new file mode 100644
--- /dev/null
+++ b/tests/nemo_text_processing/es/data_text_normalization/test_cases_time.txt
@@ -0,0 +1,26 @@
+1.00~una
+1:00~una
+01:00~una
+01 h~una
+3 h~tres horas
+1 h~una hora
+1.05 h~una y cinco
+01.05 h~una y cinco
+1.00 h~una
+1.00 a.m.~una de la mañana
+1.00 a.m~una de la mañana
+1.00 p.m.~una de la tarde
+1.00 p.m est~una de la tarde e s t
+1.00 est~una e s t
+5:02 est~cinco y dos e s t
+5:02 p.m pst~cinco y dos de la noche p s t
+5:02 p.m.~cinco y dos de la noche
+12.15~doce y cuarto
+12.15 a.m.~doce y cuarto de la noche
+12.15 p.m.~doce y cuarto del mediodía
+13.30~trece y media
+14.05~catorce y cinco
+24:50~veinticuatro y cincuenta
+3:02:32 pst~tres horas dos minutos y treinta y dos segundos p s t
+00:52~cero y cincuenta y dos
+0:52~cero y cincuenta y dos
diff --git a/tests/nemo_text_processing/es/data_text_normalization/test_cases_whitelist.txt b/tests/nemo_text_processing/es/data_text_normalization/test_cases_whitelist.txt
new file mode 100644
--- /dev/null
+++ b/tests/nemo_text_processing/es/data_text_normalization/test_cases_whitelist.txt
@@ -0,0 +1,3 @@
+el dr.~el doctor
+sr. rodriguez~señor rodriguez
+182 esq. toledo~ciento ochenta y dos esquina toledo
\ No newline at end of file
diff --git a/tests/nemo_text_processing/es/data_text_normalization/test_cases_word.txt b/tests/nemo_text_processing/es/data_text_normalization/test_cases_word.txt
new file mode 100644
--- /dev/null
+++ b/tests/nemo_text_processing/es/data_text_normalization/test_cases_word.txt
@@ -0,0 +1,48 @@
+~
+yahoo!~yahoo!
+veinte!~veinte!
+โ~โ
+aaa~aaa
+aabach~aabach
+aabenraa~aabenraa
+aabye~aabye
+aaccessed~aaccessed
+aach~aach
+aachen's~aachen's
+aadri~aadri
+aafia~aafia
+aagaard~aagaard
+aagadu~aagadu
+aagard~aagard
+aagathadi~aagathadi
+aaghart's~aaghart's
+aagnes~aagnes
+aagomoni~aagomoni
+aagon~aagon
+aagoo~aagoo
+aagot~aagot
+aahar~aahar
+aahh~aahh
+aahperd~aahperd
+aaibinterstate~aaibinterstate
+aajab~aajab
+aakasa~aakasa
+aakervik~aakervik
+aakirkeby~aakirkeby
+aalam~aalam
+aalbaek~aalbaek
+aaldiu~aaldiu
+aalem~aalem
+a'ali~a'ali
+aalilaassamthey~aalilaassamthey
+aalin~aalin
+aaliyan~aaliyan
+aaliyan's~aaliyan's
+aamadu~aamadu
+aamara~aamara
+aambala~aambala
+aamera~aamera
+aamer's~aamer's
+aamina~aamina
+aaminah~aaminah
+aamjiwnaang~aamjiwnaang
diff --git a/tests/nemo_text_processing/es/test_cardinal.py b/tests/nemo_text_processing/es/test_cardinal.py
--- a/tests/nemo_text_processing/es/test_cardinal.py
+++ b/tests/nemo_text_processing/es/test_cardinal.py
@@ -22,7 +22,8 @@
class TestCardinal:
- inverse_normalizer_es = (
+
+ inverse_normalizer = (
InverseNormalizer(lang='es', cache_dir=CACHE_DIR, overwrite_cache=False) if PYNINI_AVAILABLE else None
)
@@ -32,6 +33,34 @@ class TestCardinal:
)
@pytest.mark.run_only_on('CPU')
@pytest.mark.unit
- def test_denorm_es(self, test_input, expected):
- pred = self.inverse_normalizer_es.inverse_normalize(test_input, verbose=False)
+ def test_denorm(self, test_input, expected):
+ pred = self.inverse_normalizer.inverse_normalize(test_input, verbose=False)
assert pred == expected
+
+ normalizer = (
+ Normalizer(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE
+ else None
+ )
+
+ normalizer_with_audio = (
+ NormalizerWithAudio(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE and CACHE_DIR
+ else None
+ )
+
+ @parameterized.expand(parse_test_case_file('es/data_text_normalization/test_cases_cardinal.txt'))
+ @pytest.mark.skipif(
+ not PYNINI_AVAILABLE, reason="`pynini` not installed, please install via nemo_text_processing/setup.sh"
+ )
+ @pytest.mark.run_only_on('CPU')
+ @pytest.mark.unit
+ def test_norm(self, test_input, expected):
+ pred = self.normalizer.normalize(test_input, verbose=False)
+ assert pred == expected
+
+ if self.normalizer_with_audio:
+ pred_non_deterministic = self.normalizer_with_audio.normalize(
+ test_input, n_tagged=1000, punct_post_process=False
+ )
+ assert expected in pred_non_deterministic
diff --git a/tests/nemo_text_processing/es/test_date.py b/tests/nemo_text_processing/es/test_date.py
--- a/tests/nemo_text_processing/es/test_date.py
+++ b/tests/nemo_text_processing/es/test_date.py
@@ -22,7 +22,7 @@
class TestDate:
- inverse_normalizer_es = (
+ inverse_normalizer = (
InverseNormalizer(lang='es', cache_dir=CACHE_DIR, overwrite_cache=False) if PYNINI_AVAILABLE else None
)
@@ -32,6 +32,34 @@ class TestDate:
)
@pytest.mark.run_only_on('CPU')
@pytest.mark.unit
- def test_denorm_es(self, test_input, expected):
- pred = self.inverse_normalizer_es.inverse_normalize(test_input, verbose=False)
+ def test_denorm(self, test_input, expected):
+ pred = self.inverse_normalizer.inverse_normalize(test_input, verbose=False)
assert pred == expected
+
+ normalizer = (
+ Normalizer(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE
+ else None
+ )
+
+ normalizer_with_audio = (
+ NormalizerWithAudio(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE and CACHE_DIR
+ else None
+ )
+
+ @parameterized.expand(parse_test_case_file('es/data_text_normalization/test_cases_date.txt'))
+ @pytest.mark.skipif(
+ not PYNINI_AVAILABLE, reason="`pynini` not installed, please install via nemo_text_processing/setup.sh"
+ )
+ @pytest.mark.run_only_on('CPU')
+ @pytest.mark.unit
+ def test_norm(self, test_input, expected):
+ pred = self.normalizer.normalize(test_input, verbose=False)
+ assert pred == expected
+
+ if self.normalizer_with_audio:
+ pred_non_deterministic = self.normalizer_with_audio.normalize(
+ test_input, n_tagged=1000, punct_post_process=False
+ )
+ assert expected in pred_non_deterministic
diff --git a/tests/nemo_text_processing/es/test_decimal.py b/tests/nemo_text_processing/es/test_decimal.py
--- a/tests/nemo_text_processing/es/test_decimal.py
+++ b/tests/nemo_text_processing/es/test_decimal.py
@@ -22,7 +22,7 @@
class TestDecimal:
- inverse_normalizer_es = (
+ inverse_normalizer = (
InverseNormalizer(lang='es', cache_dir=CACHE_DIR, overwrite_cache=False) if PYNINI_AVAILABLE else None
)
@@ -32,6 +32,34 @@ class TestDecimal:
)
@pytest.mark.run_only_on('CPU')
@pytest.mark.unit
- def test_denorm_es(self, test_input, expected):
- pred = self.inverse_normalizer_es.inverse_normalize(test_input, verbose=False)
+ def test_denorm(self, test_input, expected):
+ pred = self.inverse_normalizer.inverse_normalize(test_input, verbose=False)
assert pred == expected
+
+ normalizer = (
+ Normalizer(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE
+ else None
+ )
+
+ normalizer_with_audio = (
+ NormalizerWithAudio(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE and CACHE_DIR
+ else None
+ )
+
+ @parameterized.expand(parse_test_case_file('es/data_text_normalization/test_cases_decimal.txt'))
+ @pytest.mark.skipif(
+ not PYNINI_AVAILABLE, reason="`pynini` not installed, please install via nemo_text_processing/setup.sh"
+ )
+ @pytest.mark.run_only_on('CPU')
+ @pytest.mark.unit
+ def test_norm(self, test_input, expected):
+ pred = self.normalizer.normalize(test_input, verbose=False)
+ assert pred == expected
+
+ if self.normalizer_with_audio:
+ pred_non_deterministic = self.normalizer_with_audio.normalize(
+ test_input, n_tagged=1000, punct_post_process=False
+ )
+ assert expected in pred_non_deterministic
diff --git a/tests/nemo_text_processing/es/test_electronic.py b/tests/nemo_text_processing/es/test_electronic.py
--- a/tests/nemo_text_processing/es/test_electronic.py
+++ b/tests/nemo_text_processing/es/test_electronic.py
@@ -35,3 +35,31 @@ class TestElectronic:
def test_denorm_es(self, test_input, expected):
pred = self.inverse_normalizer_es.inverse_normalize(test_input, verbose=False)
assert pred == expected
+
+ normalizer = (
+ Normalizer(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE
+ else None
+ )
+
+ normalizer_with_audio = (
+ NormalizerWithAudio(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE and CACHE_DIR
+ else None
+ )
+
+ @parameterized.expand(parse_test_case_file('es/data_text_normalization/test_cases_electronic.txt'))
+ @pytest.mark.skipif(
+ not PYNINI_AVAILABLE, reason="`pynini` not installed, please install via nemo_text_processing/setup.sh"
+ )
+ @pytest.mark.run_only_on('CPU')
+ @pytest.mark.unit
+ def test_norm(self, test_input, expected):
+ pred = self.normalizer.normalize(test_input, verbose=False)
+ assert pred == expected
+
+ if self.normalizer_with_audio:
+ pred_non_deterministic = self.normalizer_with_audio.normalize(
+ test_input, n_tagged=1000, punct_post_process=False
+ )
+ assert expected in pred_non_deterministic
diff --git a/tests/nemo_text_processing/es/test_fraction.py b/tests/nemo_text_processing/es/test_fraction.py
new file mode 100644
--- /dev/null
+++ b/tests/nemo_text_processing/es/test_fraction.py
@@ -0,0 +1,51 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+import pytest
+from nemo_text_processing.text_normalization.normalize import Normalizer
+from nemo_text_processing.text_normalization.normalize_with_audio import NormalizerWithAudio
+from parameterized import parameterized
+
+from ..utils import CACHE_DIR, PYNINI_AVAILABLE, parse_test_case_file
+
+
+class TestFraction:
+ normalizer = (
+ Normalizer(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE
+ else None
+ )
+
+ normalizer_with_audio = (
+ NormalizerWithAudio(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE and CACHE_DIR
+ else None
+ )
+
+ @parameterized.expand(parse_test_case_file('es/data_text_normalization/test_cases_fraction.txt'))
+ @pytest.mark.skipif(
+ not PYNINI_AVAILABLE, reason="`pynini` not installed, please install via nemo_text_processing/setup.sh"
+ )
+ @pytest.mark.run_only_on('CPU')
+ @pytest.mark.unit
+ def test_norm(self, test_input, expected):
+ pred = self.normalizer.normalize(test_input, verbose=False)
+ assert pred == expected
+
+ if self.normalizer_with_audio:
+ pred_non_deterministic = self.normalizer_with_audio.normalize(
+ test_input, n_tagged=1000, punct_post_process=False
+ )
+ assert expected in pred_non_deterministic
diff --git a/tests/nemo_text_processing/es/test_measure.py b/tests/nemo_text_processing/es/test_measure.py
--- a/tests/nemo_text_processing/es/test_measure.py
+++ b/tests/nemo_text_processing/es/test_measure.py
@@ -36,3 +36,31 @@ class TestMeasure:
def test_denorm_es(self, test_input, expected):
pred = self.inverse_normalizer_es.inverse_normalize(test_input, verbose=False)
assert pred == expected
+
+ normalizer = (
+ Normalizer(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE
+ else None
+ )
+
+ normalizer_with_audio = (
+ NormalizerWithAudio(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE and CACHE_DIR
+ else None
+ )
+
+ @parameterized.expand(parse_test_case_file('es/data_text_normalization/test_cases_measure.txt'))
+ @pytest.mark.skipif(
+ not PYNINI_AVAILABLE, reason="`pynini` not installed, please install via nemo_text_processing/setup.sh"
+ )
+ @pytest.mark.run_only_on('CPU')
+ @pytest.mark.unit
+ def test_norm(self, test_input, expected):
+ pred = self.normalizer.normalize(test_input, verbose=False)
+ assert pred == expected
+
+ if self.normalizer_with_audio:
+ pred_non_deterministic = self.normalizer_with_audio.normalize(
+ test_input, n_tagged=1000, punct_post_process=False
+ )
+ assert expected in pred_non_deterministic
diff --git a/tests/nemo_text_processing/es/test_money.py b/tests/nemo_text_processing/es/test_money.py
--- a/tests/nemo_text_processing/es/test_money.py
+++ b/tests/nemo_text_processing/es/test_money.py
@@ -23,7 +23,7 @@
class TestMoney:
- inverse_normalizer_es = (
+ inverse_normalizer = (
InverseNormalizer(lang='es', cache_dir=CACHE_DIR, overwrite_cache=False) if PYNINI_AVAILABLE else None
)
@@ -33,6 +33,34 @@ class TestMoney:
)
@pytest.mark.run_only_on('CPU')
@pytest.mark.unit
- def test_denorm_es(self, test_input, expected):
- pred = self.inverse_normalizer_es.inverse_normalize(test_input, verbose=False)
+ def test_denorm(self, test_input, expected):
+ pred = self.inverse_normalizer.inverse_normalize(test_input, verbose=False)
assert pred == expected
+
+ normalizer = (
+ Normalizer(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE
+ else None
+ )
+
+ normalizer_with_audio = (
+ NormalizerWithAudio(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE and CACHE_DIR
+ else None
+ )
+
+ @parameterized.expand(parse_test_case_file('es/data_text_normalization/test_cases_money.txt'))
+ @pytest.mark.skipif(
+ not PYNINI_AVAILABLE, reason="`pynini` not installed, please install via nemo_text_processing/setup.sh"
+ )
+ @pytest.mark.run_only_on('CPU')
+ @pytest.mark.unit
+ def test_norm(self, test_input, expected):
+ pred = self.normalizer.normalize(test_input, verbose=False)
+ assert pred == expected
+
+ if self.normalizer_with_audio:
+ pred_non_deterministic = self.normalizer_with_audio.normalize(
+ test_input, n_tagged=1000, punct_post_process=False
+ )
+ assert expected in pred_non_deterministic
diff --git a/tests/nemo_text_processing/es/test_normalization_with_audio.py b/tests/nemo_text_processing/es/test_normalization_with_audio.py
new file mode 100644
--- /dev/null
+++ b/tests/nemo_text_processing/es/test_normalization_with_audio.py
@@ -0,0 +1,43 @@
+# Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import pytest
+from nemo_text_processing.text_normalization.normalize_with_audio import NormalizerWithAudio
+from parameterized import parameterized
+
+from ..utils import CACHE_DIR, PYNINI_AVAILABLE, get_test_cases_multiple
+
+
+class TestNormalizeWithAudio:
+
+ normalizer_es = (
+ NormalizerWithAudio(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE
+ else None
+ )
+
+ @parameterized.expand(get_test_cases_multiple('es/data_text_normalization/test_cases_normalize_with_audio.txt'))
+ @pytest.mark.skipif(
+ not PYNINI_AVAILABLE, reason="`pynini` not installed, please install via nemo_text_processing/setup.sh"
+ )
+ @pytest.mark.run_only_on('CPU')
+ @pytest.mark.unit
+ def test_norm(self, test_input, expected):
+ pred = self.normalizer_es.normalize(test_input, n_tagged=1000, punct_post_process=False)
+ print(expected)
+ print("pred")
+ print(pred)
+ assert len(set(pred).intersection(set(expected))) == len(
+ expected
+ ), f'missing: {set(expected).difference(set(pred))}'
diff --git a/tests/nemo_text_processing/es/test_ordinal.py b/tests/nemo_text_processing/es/test_ordinal.py
--- a/tests/nemo_text_processing/es/test_ordinal.py
+++ b/tests/nemo_text_processing/es/test_ordinal.py
@@ -23,7 +23,7 @@
class TestOrdinal:
- inverse_normalizer_es = (
+ inverse_normalizer = (
InverseNormalizer(lang='es', cache_dir=CACHE_DIR, overwrite_cache=False) if PYNINI_AVAILABLE else None
)
@@ -33,6 +33,33 @@ class TestOrdinal:
)
@pytest.mark.run_only_on('CPU')
@pytest.mark.unit
- def test_denorm_es(self, test_input, expected):
- pred = self.inverse_normalizer_es.inverse_normalize(test_input, verbose=False)
+ def test_denorm(self, test_input, expected):
+ pred = self.inverse_normalizer.inverse_normalize(test_input, verbose=False)
assert pred == expected
+
+ normalizer = (
+ Normalizer(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE
+ else None
+ )
+ normalizer_with_audio = (
+ NormalizerWithAudio(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE and CACHE_DIR
+ else None
+ )
+
+ @parameterized.expand(parse_test_case_file('es/data_text_normalization/test_cases_ordinal.txt'))
+ @pytest.mark.skipif(
+ not PYNINI_AVAILABLE, reason="`pynini` not installed, please install via nemo_text_processing/setup.sh"
+ )
+ @pytest.mark.run_only_on('CPU')
+ @pytest.mark.unit
+ def test_norm(self, test_input, expected):
+ pred = self.normalizer.normalize(test_input, verbose=False)
+ assert pred == expected
+
+ if self.normalizer_with_audio:
+ pred_non_deterministic = self.normalizer_with_audio.normalize(
+ test_input, n_tagged=30, punct_post_process=False,
+ )
+ assert expected in pred_non_deterministic
diff --git a/tests/nemo_text_processing/es/test_sparrowhawk_normalization.sh b/tests/nemo_text_processing/es/test_sparrowhawk_normalization.sh
new file mode 100644
--- /dev/null
+++ b/tests/nemo_text_processing/es/test_sparrowhawk_normalization.sh
@@ -0,0 +1,84 @@
+#! /bin/sh
+
+PROJECT_DIR=/workspace/tests
+
+runtest () {
+ input=$1
+ cd /workspace/sparrowhawk/documentation/grammars
+
+ # read test file
+ while read testcase; do
+ IFS='~' read written spoken <<< $testcase
+ denorm_pred=$(echo $written | normalizer_main --config=sparrowhawk_configuration.ascii_proto 2>&1 | tail -n 1)
+
+ # trim white space
+ spoken="$(echo -e "${spoken}" | sed -e 's/^[[:space:]]*//' -e 's/[[:space:]]*$//')"
+ denorm_pred="$(echo -e "${denorm_pred}" | sed -e 's/^[[:space:]]*//' -e 's/[[:space:]]*$//')"
+
+ # input expected actual
+ assertEquals "$written" "$spoken" "$denorm_pred"
+ done < "$input"
+}
+
+testTNCardinal() {
+ input=$PROJECT_DIR/es/data_text_normalization/test_cases_cardinal.txt
+ runtest $input
+}
+
+testTNDate() {
+ input=$PROJECT_DIR/es/data_text_normalization/test_cases_date.txt
+ runtest $input
+}
+
+testTNDecimal() {
+ input=$PROJECT_DIR/es/data_text_normalization/test_cases_decimal.txt
+ runtest $input
+}
+
+testTNElectronic() {
+ input=$PROJECT_DIR/es/data_text_normalization/test_cases_electronic.txt
+ runtest $input
+}
+
+testTNFraction() {
+ input=$PROJECT_DIR/es/data_text_normalization/test_cases_fraction.txt
+ runtest $input
+}
+
+testTNMoney() {
+ input=$PROJECT_DIR/es/data_text_normalization/test_cases_money.txt
+ runtest $input
+}
+
+testTNOrdinal() {
+ input=$PROJECT_DIR/es/data_text_normalization/test_cases_ordinal.txt
+ runtest $input
+}
+
+testTNTelephone() {
+  input=$PROJECT_DIR/es/data_text_normalization/test_cases_telephone.txt
+ runtest $input
+}
+
+testTNTime() {
+ input=$PROJECT_DIR/es/data_text_normalization/test_cases_time.txt
+ runtest $input
+}
+
+testTNMeasure() {
+ input=$PROJECT_DIR/es/data_text_normalization/test_cases_measure.txt
+ runtest $input
+}
+
+testTNWhitelist() {
+ input=$PROJECT_DIR/es/data_text_normalization/test_cases_whitelist.txt
+ runtest $input
+}
+
+testTNWord() {
+ input=$PROJECT_DIR/es/data_text_normalization/test_cases_word.txt
+ runtest $input
+}
+
+# Load shUnit2
+. $PROJECT_DIR/../shunit2/shunit2
diff --git a/tests/nemo_text_processing/es/test_telephone.py b/tests/nemo_text_processing/es/test_telephone.py
--- a/tests/nemo_text_processing/es/test_telephone.py
+++ b/tests/nemo_text_processing/es/test_telephone.py
@@ -36,3 +36,31 @@ class TestTelephone:
def test_denorm_es(self, test_input, expected):
pred = self.inverse_normalizer_es.inverse_normalize(test_input, verbose=False)
assert pred == expected
+
+ normalizer = (
+ Normalizer(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE
+ else None
+ )
+
+ normalizer_with_audio = (
+ NormalizerWithAudio(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE and CACHE_DIR
+ else None
+ )
+
+ @parameterized.expand(parse_test_case_file('es/data_text_normalization/test_cases_telephone.txt'))
+ @pytest.mark.skipif(
+ not PYNINI_AVAILABLE, reason="`pynini` not installed, please install via nemo_text_processing/setup.sh"
+ )
+ @pytest.mark.run_only_on('CPU')
+ @pytest.mark.unit
+ def test_norm(self, test_input, expected):
+ pred = self.normalizer.normalize(test_input, verbose=False)
+ assert pred == expected
+
+ if self.normalizer_with_audio:
+ pred_non_deterministic = self.normalizer_with_audio.normalize(
+ test_input, n_tagged=1000, punct_post_process=False
+ )
+ assert expected in pred_non_deterministic
diff --git a/tests/nemo_text_processing/es/test_time.py b/tests/nemo_text_processing/es/test_time.py
--- a/tests/nemo_text_processing/es/test_time.py
+++ b/tests/nemo_text_processing/es/test_time.py
@@ -35,3 +35,31 @@ class TestTime:
def test_denorm_es(self, test_input, expected):
pred = self.inverse_normalizer_es.inverse_normalize(test_input, verbose=False)
assert pred == expected
+
+ normalizer = (
+ Normalizer(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE
+ else None
+ )
+
+ normalizer_with_audio = (
+ NormalizerWithAudio(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE and CACHE_DIR
+ else None
+ )
+
+ @parameterized.expand(parse_test_case_file('es/data_text_normalization/test_cases_time.txt'))
+ @pytest.mark.skipif(
+ not PYNINI_AVAILABLE, reason="`pynini` not installed, please install via nemo_text_processing/setup.sh"
+ )
+ @pytest.mark.run_only_on('CPU')
+ @pytest.mark.unit
+ def test_norm(self, test_input, expected):
+ pred = self.normalizer.normalize(test_input, verbose=False)
+ assert pred == expected
+
+ if self.normalizer_with_audio:
+ pred_non_deterministic = self.normalizer_with_audio.normalize(
+ test_input, n_tagged=1000, punct_post_process=False
+ )
+ assert expected in pred_non_deterministic
diff --git a/tests/nemo_text_processing/es/test_whitelist.py b/tests/nemo_text_processing/es/test_whitelist.py
--- a/tests/nemo_text_processing/es/test_whitelist.py
+++ b/tests/nemo_text_processing/es/test_whitelist.py
@@ -35,3 +35,30 @@ class TestWhitelist:
def test_denorm_es(self, test_input, expected):
pred = self.inverse_normalizer_es.inverse_normalize(test_input, verbose=False)
assert pred == expected
+
+ normalizer = (
+ Normalizer(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE
+ else None
+ )
+ normalizer_with_audio = (
+ NormalizerWithAudio(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE and CACHE_DIR
+ else None
+ )
+
+ @parameterized.expand(parse_test_case_file('es/data_text_normalization/test_cases_whitelist.txt'))
+ @pytest.mark.skipif(
+ not PYNINI_AVAILABLE, reason="`pynini` not installed, please install via nemo_text_processing/setup.sh"
+ )
+ @pytest.mark.run_only_on('CPU')
+ @pytest.mark.unit
+ def test_norm(self, test_input, expected):
+ pred = self.normalizer.normalize(test_input, verbose=False)
+ assert pred == expected
+
+ if self.normalizer_with_audio:
+ pred_non_deterministic = self.normalizer_with_audio.normalize(
+ test_input, n_tagged=10, punct_post_process=False
+ )
+ assert expected in pred_non_deterministic
diff --git a/tests/nemo_text_processing/es/test_word.py b/tests/nemo_text_processing/es/test_word.py
--- a/tests/nemo_text_processing/es/test_word.py
+++ b/tests/nemo_text_processing/es/test_word.py
@@ -35,3 +35,30 @@ class TestWord:
def test_denorm_es(self, test_input, expected):
pred = self.inverse_normalizer_es.inverse_normalize(test_input, verbose=False)
assert pred == expected
+
+ normalizer_es = (
+ Normalizer(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE
+ else None
+ )
+ normalizer_with_audio_es = (
+ NormalizerWithAudio(input_case='cased', lang='es', cache_dir=CACHE_DIR, overwrite_cache=False)
+ if PYNINI_AVAILABLE and CACHE_DIR
+ else None
+ )
+
+ @parameterized.expand(parse_test_case_file('es/data_text_normalization/test_cases_word.txt'))
+ @pytest.mark.skipif(
+ not PYNINI_AVAILABLE, reason="`pynini` not installed, please install via nemo_text_processing/setup.sh"
+ )
+ @pytest.mark.run_only_on('CPU')
+ @pytest.mark.unit
+ def test_norm(self, test_input, expected):
+ pred = self.normalizer_es.normalize(test_input, verbose=False)
+ assert pred == expected, f"input: {test_input}"
+
+ if self.normalizer_with_audio_es:
+ pred_non_deterministic = self.normalizer_with_audio_es.normalize(
+ test_input, n_tagged=150, punct_post_process=False
+ )
+ assert expected in pred_non_deterministic, f"input: {test_input}"
|
1.0
| ||||
NVIDIA__NeMo-7582
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Ubuntu 22.04 Python 3.11 [asr]: multiple errors `dataclasses ValueError: mutable default * for field * is not allowed: use default_factory`
**Describe the bug**
After installing the latest stable `1.19.1` from PyPI, or the latest commit with `[asr]` extras, I'm getting this error from `hydra-core==1.2.0` when trying to import `nemo.collections.asr`:
```
Traceback (most recent call last):
File "/home/alex/T7/src/speaker_verification/nemo_speaker/test.py", line 5, in <module>
import nemo.collections.asr as nemo_asr
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/__init__.py", line 15, in <module>
from nemo.collections.asr import data, losses, models, modules
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/losses/__init__.py", line 15, in <module>
from nemo.collections.asr.losses.angularloss import AngularSoftmaxLoss
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/losses/angularloss.py", line 18, in <module>
from nemo.core.classes import Loss, Typing, typecheck
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/core/__init__.py", line 16, in <module>
from nemo.core.classes import *
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/core/classes/__init__.py", line 16, in <module>
import hydra
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/__init__.py", line 5, in <module>
from hydra import utils
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/utils.py", line 8, in <module>
import hydra._internal.instantiate._instantiate2
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 12, in <module>
from hydra._internal.utils import _locate
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/_internal/utils.py", line 18, in <module>
from hydra.core.utils import get_valid_filename, validate_config_path
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/core/utils.py", line 20, in <module>
from hydra.core.hydra_config import HydraConfig
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/core/hydra_config.py", line 6, in <module>
from hydra.conf import HydraConf
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/conf/__init__.py", line 46, in <module>
class JobConf:
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/conf/__init__.py", line 75, in JobConf
@dataclass
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1230, in dataclass
return wrap(cls)
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1220, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 958, in _process_class
cls_fields.append(_get_field(cls, name, type, kw_only))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 815, in _get_field
raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'hydra.conf.JobConf.JobConfig.OverrideDirname'> for field override_dirname is not allowed: use default_factory
```
If I then manually upgrade `hydra-core` to the current latest (`1.3.2`), I get similar errors from `nemo-toolkit`:
```
Traceback (most recent call last):
File "/home/alex/T7/src/speaker_verification/nemo_speaker/test.py", line 5, in <module>
import nemo.collections.asr as nemo_asr
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/__init__.py", line 15, in <module>
from nemo.collections.asr import data, losses, models, modules
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/losses/__init__.py", line 16, in <module>
from nemo.collections.asr.losses.audio_losses import SDRLoss
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/losses/audio_losses.py", line 21, in <module>
from nemo.collections.asr.parts.preprocessing.features import make_seq_mask_like
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/__init__.py", line 16, in <module>
from nemo.collections.asr.parts.preprocessing.features import FeaturizerFactory, FilterbankFeatures, WaveformFeaturizer
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/features.py", line 44, in <module>
from nemo.collections.asr.parts.preprocessing.perturb import AudioAugmentor
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/perturb.py", line 50, in <module>
from nemo.collections.common.parts.preprocessing import collections, parsers
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/common/__init__.py", line 16, in <module>
from nemo.collections.common import data, losses, parts, tokenizers
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/common/parts/__init__.py", line 15, in <module>
from nemo.collections.common.parts.adapter_modules import LinearAdapter, LinearAdapterConfig
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/common/parts/adapter_modules.py", line 147, in <module>
@dataclass
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1230, in dataclass
return wrap(cls)
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1220, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 958, in _process_class
cls_fields.append(_get_field(cls, name, type, kw_only))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 815, in _get_field
raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'nemo.core.classes.mixins.adapter_mixin_strategies.ResidualAddAdapterStrategyConfig'> for field adapter_strategy is not allowed: use default_factory
```
It's easy to fix with a patch like this:
```
--- nemo/collections/common/parts/adapter_modules.py.orig 2023-08-04 13:55:53.464534800 +0200
+++ nemo/collections/common/parts/adapter_modules.py 2023-08-04 14:05:45.579537700 +0200
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass, is_dataclass
+from dataclasses import dataclass, is_dataclass, field
from typing import Any, Optional
from hydra.utils import instantiate
@@ -151,5 +151,5 @@ class LinearAdapterConfig:
activation: str = 'swish'
norm_position: str = 'pre'
dropout: float = 0.0
- adapter_strategy: Optional[Any] = adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
+ adapter_strategy: Optional[Any] = field(default_factory=adapter_mixin_strategies.ResidualAddAdapterStrategyConfig)
_target_: str = "{0}.{1}".format(LinearAdapter.__module__, LinearAdapter.__name__)
```
However, another error of the same kind then comes up:
```
Traceback (most recent call last):
File "/home/alex/T7/src/speaker_verification/nemo_speaker/test.py", line 5, in <module>
import nemo.collections.asr as nemo_asr
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/__init__.py", line 15, in <module>
from nemo.collections.asr import data, losses, models, modules
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/models/__init__.py", line 18, in <module>
from nemo.collections.asr.models.clustering_diarizer import ClusteringDiarizer
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/models/clustering_diarizer.py", line 29, in <module>
from nemo.collections.asr.metrics.der import score_labels
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/metrics/der.py", line 24, in <module>
from nemo.collections.asr.metrics.wer import word_error_rate
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/metrics/wer.py", line 27, in <module>
from nemo.collections.asr.parts.submodules import ctc_beam_decoding, ctc_greedy_decoding
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/parts/submodules/ctc_beam_decoding.py", line 593, in <module>
@dataclass
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1230, in dataclass
return wrap(cls)
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1220, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 958, in _process_class
cls_fields.append(_get_field(cls, name, type, kw_only))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 815, in _get_field
raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'nemo.collections.asr.parts.submodules.ctc_beam_decoding.FlashlightConfig'> for field flashlight_cfg is not allowed: use default_factory
```
This can also be fixed the same way, but all in all, such issues appear to be pretty common in the code base.
Looks like NeMo isn't Python 3.11 ready, at least the `asr` collection.
**Steps/Code to reproduce bug**
With Python 3.11
1. Install `nemo-toolkit[asr]` either 1.19.1, or current HEAD from git (b498d438fc4c35ebf364a9a1c5cd3e29a2c0fe50)
2. Run `import nemo.collections.asr`
**Expected behavior**
Import `nemo.collections.asr` without errors
**Environment overview (please complete the following information)**
- Environment location: Bare-metal
- Method of NeMo install: `pip install nemo-toolkit==1.19.1` or from source b498d438fc4c35ebf364a9a1c5cd3e29a2c0fe50
**Environment details**
- Ubuntu 22.04
- PyTorch version 2.0.1
- Python version 3.11.4
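
The fix in the patch above generalizes. As a minimal sketch (the `Strategy`/`AdapterConfig` names below are hypothetical, not from NeMo): Python 3.11 tightened the dataclass check so that any unhashable default value raises `ValueError` at class-creation time, and dataclass instances with the default `eq=True` are unhashable because `__hash__` is set to `None`. Using `field(default_factory=...)` builds a fresh value per instance instead:

```python
from dataclasses import dataclass, field


@dataclass
class Strategy:
    alpha: float = 0.5


# On Python 3.11+, writing `strategy: Strategy = Strategy()` here would
# raise "ValueError: mutable default ... use default_factory" when the
# @dataclass decorator runs. default_factory avoids the shared mutable
# default by constructing a new Strategy for every AdapterConfig:
@dataclass
class AdapterConfig:
    dim: int = 128
    strategy: Strategy = field(default_factory=Strategy)


a = AdapterConfig()
b = AdapterConfig()
a.strategy.alpha = 0.9  # mutating one instance must not affect the other
print(a.strategy.alpha, b.strategy.alpha)
```

This also works on Python 3.10 and earlier, so it is a safe backward-compatible change.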
</issue>
<code>
[start of README.rst]
1
2 |status| |documentation| |codeql| |license| |pypi| |pyversion| |downloads| |black|
3
4 .. |status| image:: http://www.repostatus.org/badges/latest/active.svg
5 :target: http://www.repostatus.org/#active
6 :alt: Project Status: Active โ The project has reached a stable, usable state and is being actively developed.
7
8 .. |documentation| image:: https://readthedocs.com/projects/nvidia-nemo/badge/?version=main
9 :alt: Documentation
10 :target: https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/
11
12 .. |license| image:: https://img.shields.io/badge/License-Apache%202.0-brightgreen.svg
13 :target: https://github.com/NVIDIA/NeMo/blob/master/LICENSE
14 :alt: NeMo core license and license for collections in this repo
15
16 .. |pypi| image:: https://badge.fury.io/py/nemo-toolkit.svg
17 :target: https://badge.fury.io/py/nemo-toolkit
18 :alt: Release version
19
20 .. |pyversion| image:: https://img.shields.io/pypi/pyversions/nemo-toolkit.svg
21 :target: https://badge.fury.io/py/nemo-toolkit
22 :alt: Python version
23
24 .. |downloads| image:: https://static.pepy.tech/personalized-badge/nemo-toolkit?period=total&units=international_system&left_color=grey&right_color=brightgreen&left_text=downloads
25 :target: https://pepy.tech/project/nemo-toolkit
26 :alt: PyPi total downloads
27
28 .. |codeql| image:: https://github.com/nvidia/nemo/actions/workflows/codeql.yml/badge.svg?branch=main&event=push
29 :target: https://github.com/nvidia/nemo/actions/workflows/codeql.yml
30 :alt: CodeQL
31
32 .. |black| image:: https://img.shields.io/badge/code%20style-black-000000.svg
33 :target: https://github.com/psf/black
34 :alt: Code style: black
35
36 .. _main-readme:
37
38 **NVIDIA NeMo**
39 ===============
40
41 Introduction
42 ------------
43
44 NVIDIA NeMo is a conversational AI toolkit built for researchers working on automatic speech recognition (ASR),
45 text-to-speech synthesis (TTS), large language models (LLMs), and
46 natural language processing (NLP).
47 The primary objective of NeMo is to help researchers from industry and academia to reuse prior work (code and pretrained models)
48 and make it easier to create new `conversational AI models <https://developer.nvidia.com/conversational-ai#started>`_.
49
50 All NeMo models are trained with `Lightning <https://github.com/Lightning-AI/lightning>`_ and
51 training is automatically scalable to 1000s of GPUs.
52 Additionally, NeMo Megatron LLM models can be trained up to 1 trillion parameters using tensor and pipeline model parallelism.
53 NeMo models can be optimized for inference and deployed for production use-cases with `NVIDIA Riva <https://developer.nvidia.com/riva>`_.
54
55 Getting started with NeMo is simple.
56 State of the Art pretrained NeMo models are freely available on `HuggingFace Hub <https://huggingface.co/models?library=nemo&sort=downloads&search=nvidia>`_ and
57 `NVIDIA NGC <https://catalog.ngc.nvidia.com/models?query=nemo&orderBy=weightPopularDESC>`_.
58 These models can be used to transcribe audio, synthesize speech, or translate text in just a few lines of code.
59
60 We have extensive `tutorials <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/starthere/tutorials.html>`_ that
61 can all be run on `Google Colab <https://colab.research.google.com>`_.
62
63 For advanced users that want to train NeMo models from scratch or finetune existing NeMo models
64 we have a full suite of `example scripts <https://github.com/NVIDIA/NeMo/tree/main/examples>`_ that support multi-GPU/multi-node training.
65
66 For scaling NeMo LLM training on Slurm clusters or public clouds, please see the `NVIDIA NeMo Megatron Launcher <https://github.com/NVIDIA/NeMo-Megatron-Launcher>`_.
67 The NM launcher has extensive recipes, scripts, utilities, and documentation for training NeMo LLMs and also has an `Autoconfigurator <https://github.com/NVIDIA/NeMo-Megatron-Launcher#53-using-autoconfigurator-to-find-the-optimal-configuration>`_
68 which can be used to find the optimal model parallel configuration for training on a specific cluster.
69
70 Also see our `introductory video <https://www.youtube.com/embed/wBgpMf_KQVw>`_ for a high level overview of NeMo.
71
72 Key Features
73 ------------
74
75 * Speech processing
76 * `HuggingFace Space for Audio Transcription (File, Microphone and YouTube) <https://huggingface.co/spaces/smajumdar/nemo_multilingual_language_id>`_
77 * `Automatic Speech Recognition (ASR) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/intro.html>`_
78 * Supported ASR models: `<https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/asr/models.html>`_
79 * Jasper, QuartzNet, CitriNet, ContextNet
80 * Conformer-CTC, Conformer-Transducer, FastConformer-CTC, FastConformer-Transducer
81 * Squeezeformer-CTC and Squeezeformer-Transducer
82 * LSTM-Transducer (RNNT) and LSTM-CTC
83 * Supports the following decoders/losses:
84 * CTC
85 * Transducer/RNNT
86 * Hybrid Transducer/CTC
87 * NeMo Original `Multi-blank Transducers <https://arxiv.org/abs/2211.03541>`_ and `Token-and-Duration Transducers (TDT) <https://arxiv.org/abs/2304.06795>`_
88 * Streaming/Buffered ASR (CTC/Transducer) - `Chunked Inference Examples <https://github.com/NVIDIA/NeMo/tree/stable/examples/asr/asr_chunked_inference>`_
89 * Cache-aware Streaming Conformer with multiple lookaheads - `<https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/asr/models.html#cache-aware-streaming-conformer>`_
90 * Beam Search decoding
91 * `Language Modelling for ASR (CTC and RNNT) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/asr_language_modeling.html>`_: N-gram LM in fusion with Beam Search decoding, Neural Rescoring with Transformer
92 * `Support of long audios for Conformer with memory efficient local attention <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/results.html#inference-on-long-audio>`_
93 * `Speech Classification, Speech Command Recognition and Language Identification <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/speech_classification/intro.html>`_: MatchboxNet (Command Recognition), AmberNet (LangID)
94 * `Voice activity Detection (VAD) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/asr/speech_classification/models.html#marblenet-vad>`_: MarbleNet
95 * ASR with VAD Inference - `Example <https://github.com/NVIDIA/NeMo/tree/stable/examples/asr/asr_vad>`_
96 * `Speaker Recognition <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/speaker_recognition/intro.html>`_: TitaNet, ECAPA_TDNN, SpeakerNet
97 * `Speaker Diarization <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/speaker_diarization/intro.html>`_
98 * Clustering Diarizer: TitaNet, ECAPA_TDNN, SpeakerNet
99 * Neural Diarizer: MSDD (Multi-scale Diarization Decoder)
100 * `Speech Intent Detection and Slot Filling <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/speech_intent_slot/intro.html>`_: Conformer-Transformer
101 * `Pretrained models on different languages. <https://ngc.nvidia.com/catalog/collections/nvidia:nemo_asr>`_: English, Spanish, German, Russian, Chinese, French, Italian, Polish, ...
102 * `NGC collection of pre-trained speech processing models. <https://ngc.nvidia.com/catalog/collections/nvidia:nemo_asr>`_
103 * Natural Language Processing
104 * `NeMo Megatron pre-training of Large Language Models <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/nlp/nemo_megatron/intro.html>`_
105 * `Neural Machine Translation (NMT) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/machine_translation/machine_translation.html>`_
106 * `Punctuation and Capitalization <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/punctuation_and_capitalization.html>`_
107 * `Token classification (named entity recognition) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/token_classification.html>`_
108 * `Text classification <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/text_classification.html>`_
109 * `Joint Intent and Slot Classification <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/joint_intent_slot.html>`_
110 * `Question answering <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/question_answering.html>`_
111 * `GLUE benchmark <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/glue_benchmark.html>`_
112 * `Information retrieval <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/information_retrieval.html>`_
113 * `Entity Linking <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/entity_linking.html>`_
114 * `Dialogue State Tracking <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/sgd_qa.html>`_
115 * `Prompt Learning <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/nemo_megatron/prompt_learning.html>`_
116 * `NGC collection of pre-trained NLP models. <https://ngc.nvidia.com/catalog/collections/nvidia:nemo_nlp>`_
117 * `Synthetic Tabular Data Generation <https://developer.nvidia.com/blog/generating-synthetic-data-with-transformers-a-solution-for-enterprise-data-challenges/>`_
118 * Text-to-Speech Synthesis (TTS):
119 * `Documentation <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/tts/intro.html#>`_
120 * Mel-Spectrogram generators: FastPitch, SSL FastPitch, Mixer-TTS/Mixer-TTS-X, RAD-TTS, Tacotron2
121 * Vocoders: HiFiGAN, UnivNet, WaveGlow
122 * End-to-End Models: VITS
123 * `Pre-trained Model Checkpoints in NVIDIA GPU Cloud (NGC) <https://ngc.nvidia.com/catalog/collections/nvidia:nemo_tts>`_
124 * `Tools <https://github.com/NVIDIA/NeMo/tree/stable/tools>`_
125 * `Text Processing (text normalization and inverse text normalization) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/text_normalization/intro.html>`_
126 * `NeMo Forced Aligner <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/tools/nemo_forced_aligner.html>`_
127 * `CTC-Segmentation tool <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/tools/ctc_segmentation.html>`_
128 * `Speech Data Explorer <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/tools/speech_data_explorer.html>`_: a dash-based tool for interactive exploration of ASR/TTS datasets
129 * `Speech Data Processor <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/tools/speech_data_processor.html>`_
130
131
132 Built for speed, NeMo can utilize NVIDIA's Tensor Cores and scale out training to multiple GPUs and multiple nodes.
133
134 Requirements
135 ------------
136
137 1) Python 3.10 or above
138 2) Pytorch 1.13.1 or above
139 3) NVIDIA GPU for training
140
141 Documentation
142 -------------
143
144 .. |main| image:: https://readthedocs.com/projects/nvidia-nemo/badge/?version=main
145 :alt: Documentation Status
146 :scale: 100%
147 :target: https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/
148
149 .. |stable| image:: https://readthedocs.com/projects/nvidia-nemo/badge/?version=stable
150 :alt: Documentation Status
151 :scale: 100%
152 :target: https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/
153
154 +---------+-------------+------------------------------------------------------------------------------------------------------------------------------------------+
155 | Version | Status | Description |
156 +=========+=============+==========================================================================================================================================+
157 | Latest | |main| | `Documentation of the latest (i.e. main) branch. <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/>`_ |
158 +---------+-------------+------------------------------------------------------------------------------------------------------------------------------------------+
159 | Stable | |stable| | `Documentation of the stable (i.e. most recent release) branch. <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/>`_ |
160 +---------+-------------+------------------------------------------------------------------------------------------------------------------------------------------+
161
162 Tutorials
163 ---------
164 A great way to start with NeMo is by checking `one of our tutorials <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/starthere/tutorials.html>`_.
165
166 Getting help with NeMo
167 ----------------------
168 The FAQ can be found on NeMo's `Discussions board <https://github.com/NVIDIA/NeMo/discussions>`_. You are welcome to ask questions or start discussions there.
169
170
171 Installation
172 ------------
173
174 Conda
175 ~~~~~
176
177 We recommend installing NeMo in a fresh Conda environment.
178
179 .. code-block:: bash
180
181 conda create --name nemo python==3.10.12
182 conda activate nemo
183
184 Install PyTorch using their `configurator <https://pytorch.org/get-started/locally/>`_.
185
186 .. code-block:: bash
187
188 conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia
189
190 The command used to install PyTorch may depend on your system; please use the configurator linked above to find the right one.
191
192 Pip
193 ~~~
194 Use this installation mode if you want the latest released version.
195
196 .. code-block:: bash
197
198 apt-get update && apt-get install -y libsndfile1 ffmpeg
199 pip install Cython
200 pip install nemo_toolkit['all']
201
202 Depending on the shell used, you may need to use ``"nemo_toolkit[all]"`` instead in the above command.
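
For example, under ``zsh`` the unquoted square brackets are treated as a glob pattern, so the extras specifier has to be quoted:

.. code-block:: bash

    pip install "nemo_toolkit[all]"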
203
204 Pip from source
205 ~~~~~~~~~~~~~~~
206 Use this installation mode if you want the version from a particular GitHub branch (e.g. main).
207
208 .. code-block:: bash
209
210 apt-get update && apt-get install -y libsndfile1 ffmpeg
211 pip install Cython
212 python -m pip install git+https://github.com/NVIDIA/NeMo.git@{BRANCH}#egg=nemo_toolkit[all]
213
214
215 From source
216 ~~~~~~~~~~~
217 Use this installation mode if you are contributing to NeMo.
218
219 .. code-block:: bash
220
221 apt-get update && apt-get install -y libsndfile1 ffmpeg
222 git clone https://github.com/NVIDIA/NeMo
223 cd NeMo
224 ./reinstall.sh
225
226 If you only want the toolkit without additional conda-based dependencies, you may replace ``reinstall.sh``
227 with ``pip install -e .`` when your PWD is the root of the NeMo repository.
228
229 RNNT
230 ~~~~
231 Note that RNNT requires numba to be installed from conda.
232
233 .. code-block:: bash
234
235 conda remove numba
236 pip uninstall numba
237 conda install -c conda-forge numba
238
239 NeMo Megatron
240 ~~~~~~~~~~~~~
241 NeMo Megatron training requires NVIDIA Apex to be installed.
242 Install it manually if not using the NVIDIA PyTorch container.
243
244 To install Apex, run
245
246 .. code-block:: bash
247
248 git clone https://github.com/NVIDIA/apex.git
249 cd apex
250 git checkout 52e18c894223800cb611682dce27d88050edf1de
251 pip install -v --no-build-isolation --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" --global-option="--fast_layer_norm" --global-option="--distributed_adam" --global-option="--deprecated_fused_adam" ./
252
253 If you run into issues installing Apex or any other dependencies, it is highly recommended to use the NVIDIA PyTorch or NeMo containers.
254
255 Installing Apex may raise an error if the CUDA version on your system does not match the CUDA version PyTorch was compiled with.
256 You can skip this check by commenting out the raise statement here: https://github.com/NVIDIA/apex/blob/master/setup.py#L32
257
258 ``cuda-nvprof`` is needed to install Apex; its version should match the CUDA version that you are using:
259
260 .. code-block:: bash
261
262 conda install -c nvidia cuda-nvprof=11.8
263
264 The ``packaging`` package is also needed:
265
266 .. code-block:: bash
267
268 pip install packaging
269
270 With the latest versions of Apex, the ``pyproject.toml`` file in Apex may need to be deleted in order to install locally.
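
For example, assuming you are inside the cloned ``apex`` directory from the step above, one way to do this is:

.. code-block:: bash

    rm -f pyproject.toml   # fall back to the setup.py-based install
    pip install -v --no-build-isolation --disable-pip-version-check ./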
271
272
273 Transformer Engine
274 ~~~~~~~~~~~~~~~~~~
275 NeMo Megatron GPT has been integrated with `NVIDIA Transformer Engine <https://github.com/NVIDIA/TransformerEngine>`_.
276 Transformer Engine enables FP8 training on NVIDIA Hopper GPUs.
277 `Install <https://docs.nvidia.com/deeplearning/transformer-engine/user-guide/installation.html>`_ it manually if not using the NVIDIA PyTorch container.
278
279 .. code-block:: bash
280
281 pip install --upgrade git+https://github.com/NVIDIA/TransformerEngine.git@stable
282
283 If you run into issues installing Transformer Engine or any other dependencies, it is highly recommended to use the NVIDIA PyTorch or NeMo containers.
284
285 Transformer Engine requires PyTorch to be built with CUDA 11.8.
286
287
288 Flash Attention
289 ~~~~~~~~~~~~~~~~~~~~
290 Transformer Engine already supports Flash Attention for GPT models. If you want to use Flash Attention for non-causal models, or to use it with an attention bias (introduced from position encoding, e.g. Alibi), please install `flash-attn <https://github.com/HazyResearch/flash-attention>`_.
291
292 .. code-block:: bash
293
294 pip install flash-attn
295 pip install triton==2.0.0.dev20221202
296
297 NLP inference UI
298 ~~~~~~~~~~~~~~~~~~~~
299 To launch the inference web UI server, please install `gradio <https://gradio.app/>`_.
300
301 .. code-block:: bash
302
303 pip install gradio==3.34.0
304
305 NeMo Text Processing
306 ~~~~~~~~~~~~~~~~~~~~
307 NeMo Text Processing, specifically (Inverse) Text Normalization, is now a separate repository `https://github.com/NVIDIA/NeMo-text-processing <https://github.com/NVIDIA/NeMo-text-processing>`_.
308
310 Docker containers
311 ~~~~~~~~~~~~~~~~~~
311 We release NeMo containers alongside NeMo releases. For example, NeMo ``r1.20.0`` comes with the ``nemo:23.06`` container; you may find more details about released containers on the `releases page <https://github.com/NVIDIA/NeMo/releases>`_.
312
313 To use a pre-built container, please run
314
315 .. code-block:: bash
316
317 docker pull nvcr.io/nvidia/nemo:23.06
318
319 To build a NeMo container with the Dockerfile from a branch, please run
320
321 .. code-block:: bash
322
323 DOCKER_BUILDKIT=1 docker build -f Dockerfile -t nemo:latest .
324
325
326 If you choose to work with the main branch, we recommend using NVIDIA's PyTorch container version 23.06-py3 and then installing from GitHub.
327
328 .. code-block:: bash
329
330 docker run --gpus all -it --rm -v <nemo_github_folder>:/NeMo --shm-size=8g \
331 -p 8888:8888 -p 6006:6006 --ulimit memlock=-1 --ulimit \
332 stack=67108864 --device=/dev/snd nvcr.io/nvidia/pytorch:23.06-py3
333
334 Examples
335 --------
336
337 Many examples can be found under the `"Examples" <https://github.com/NVIDIA/NeMo/tree/stable/examples>`_ folder.
338
339
340 Contributing
341 ------------
342
343 We welcome community contributions! Please refer to `CONTRIBUTING.md <https://github.com/NVIDIA/NeMo/blob/stable/CONTRIBUTING.md>`_ for the process.
344
345 Publications
346 ------------
347
348 We provide an ever-growing list of publications that utilize the NeMo framework. Please refer to `PUBLICATIONS.md <https://github.com/NVIDIA/NeMo/tree/stable/PUBLICATIONS.md>`_. We welcome the addition of your own articles to this list!
349
350 License
351 -------
352 NeMo is under `Apache 2.0 license <https://github.com/NVIDIA/NeMo/blob/stable/LICENSE>`_.
353
[end of README.rst]
[start of examples/asr/experimental/k2/align_speech_parallel.py]
1 # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """
16 # Based on examples/asr/transcribe_speech_parallel.py
17 # ASR alignment with multi-GPU/multi-node support for large datasets
18 # It supports both tarred and non-tarred datasets
19 # Arguments
20 # model: path to a nemo/PTL checkpoint file or name of a pretrained model
21 # predict_ds: config of the dataset/dataloader
22 # aligner_args: aligner config
23 # output_path: path to store the predictions
24 # model_stride: model downsampling factor, 8 for Citrinet models and 4 for Conformer models
25 #
26 # Results of each GPU/worker are written into a file named 'predictions_{rank}.json', and aggregated results of all workers are written into 'predictions_all.json'
27
28 Example for non-tarred datasets:
29
30 python align_speech_parallel.py \
31 model=stt_en_conformer_ctc_large \
32 predict_ds.manifest_filepath=/dataset/manifest_file.json \
33 predict_ds.batch_size=16 \
34 output_path=/tmp/
35
36 Example for tarred datasets:
37
38 python align_speech_parallel.py \
39 predict_ds.is_tarred=true \
40 predict_ds.manifest_filepath=/tarred_dataset/tarred_audio_manifest.json \
41 predict_ds.tarred_audio_filepaths=/tarred_dataset/audio__OP_0..127_CL_.tar \
42 ...
43
44 By default the trainer uses all available GPUs and the default precision is FP32.
45 You may control these by setting the trainer config. For example, to run the predictions with AMP on just two GPUs:
46
47 python align_speech_parallel.py \
48 trainer.precision=16 \
49 trainer.gpus=2 \
50 ...
51
52 You may control the dataloader's config by setting the predict_ds:
53
54 python align_speech_parallel.py \
55 predict_ds.num_workers=8 \
56 predict_ds.min_duration=2.0 \
57 predict_ds.sample_rate=16000 \
58 model=stt_en_conformer_ctc_small \
59 ...
60
61 You may control the aligner's config by setting the aligner_args:
62 aligner_args.alignment_type=argmax \
63 aligner_args.word_output=False \
64 aligner_args.cpu_decoding=True \
65 aligner_args.decode_batch_size=8 \
66 aligner_args.ctc_cfg.prob_suppress_index=-1 \
67 aligner_args.ctc_cfg.prob_suppress_value=0.5 \
68 aligner_args.rnnt_cfg.predictor_window_size=10 \
69 aligner_args.decoder_module_cfg.intersect_pruned=true \
70 aligner_args.decoder_module_cfg.intersect_conf.search_beam=40 \
71 ...
72
73 """
74
75
76 import os
77 from dataclasses import dataclass, is_dataclass
78 from typing import Optional
79
80 import pytorch_lightning as ptl
81 import torch
82 from omegaconf import MISSING, OmegaConf
83
84 from nemo.collections.asr.data.audio_to_ctm_dataset import ASRCTMPredictionWriter
85 from nemo.collections.asr.models import ASRModel
86 from nemo.collections.asr.models.configs.aligner_config import K2AlignerWrapperModelConfig
87 from nemo.collections.asr.models.configs.asr_models_config import ASRDatasetConfig
88 from nemo.collections.asr.models.k2_aligner_model import AlignerWrapperModel
89 from nemo.core.config import TrainerConfig, hydra_runner
90 from nemo.utils import logging
91 from nemo.utils.get_rank import is_global_rank_zero
92
93
94 @dataclass
95 class ParallelAlignmentConfig:
96 model: Optional[str] = None # name
97 predict_ds: ASRDatasetConfig = ASRDatasetConfig(return_sample_id=True, num_workers=4)
98 aligner_args: K2AlignerWrapperModelConfig = K2AlignerWrapperModelConfig()
99 output_path: str = MISSING
100 model_stride: int = 8
101
102 trainer: TrainerConfig = TrainerConfig(gpus=-1, accelerator="ddp")
103
104 # these arguments will be ignored
105 return_predictions: bool = False
106 use_cer: bool = False
107
108
109 def match_train_config(predict_ds, train_ds):
110 # It copies the important configurations from the train dataset of the model
111 # into the predict_ds to be used for prediction. It is needed to match the training configurations.
112 if train_ds is None:
113 return
114
115 predict_ds.sample_rate = train_ds.get("sample_rate", 16000)
116 cfg_name_list = [
117 "int_values",
118 "use_start_end_token",
119 "blank_index",
120 "unk_index",
121 "normalize",
122 "parser",
123 "eos_id",
124 "bos_id",
125 "pad_id",
126 ]
127
128 if is_dataclass(predict_ds):
129 predict_ds = OmegaConf.structured(predict_ds)
130 for cfg_name in cfg_name_list:
131 if hasattr(train_ds, cfg_name):
132 setattr(predict_ds, cfg_name, getattr(train_ds, cfg_name))
133
134 return predict_ds
135
136
137 @hydra_runner(config_name="AlignmentConfig", schema=ParallelAlignmentConfig)
138 def main(cfg: ParallelAlignmentConfig):
139 if cfg.model.endswith(".nemo"):
140 logging.info("Attempting to initialize from .nemo file")
141 model = ASRModel.restore_from(restore_path=cfg.model, map_location="cpu")
142 elif cfg.model.endswith(".ckpt"):
143 logging.info("Attempting to initialize from .ckpt file")
144 model = ASRModel.load_from_checkpoint(checkpoint_path=cfg.model, map_location="cpu")
145 else:
146 logging.info(
147 "Attempting to initialize from a pretrained model as the model name does not have the extension of .nemo or .ckpt"
148 )
149 model = ASRModel.from_pretrained(model_name=cfg.model, map_location="cpu")
150
151 trainer = ptl.Trainer(**cfg.trainer)
152
153 cfg.predict_ds.return_sample_id = True
154 cfg.return_predictions = False
155 cfg.use_cer = False
156 cfg.predict_ds = match_train_config(predict_ds=cfg.predict_ds, train_ds=model._cfg.train_ds)
157 data_loader = model._setup_dataloader_from_config(cfg.predict_ds)
158
159 os.makedirs(cfg.output_path, exist_ok=True)
160 # trainer.global_rank is not valid before predict() is called. Need this hack to find the correct global_rank.
161 global_rank = trainer.node_rank * trainer.num_devices + int(os.environ.get("LOCAL_RANK", 0))
162 output_file = os.path.join(cfg.output_path, f"predictions_{global_rank}.json")
163 output_ctm_dir = os.path.join(cfg.output_path, "ctm")
164 predictor_writer = ASRCTMPredictionWriter(
165 dataset=data_loader.dataset,
166 output_file=output_file,
167 output_ctm_dir=output_ctm_dir,
168 time_per_frame=cfg.model_stride * model._cfg.preprocessor['window_stride'],
169 )
170 trainer.callbacks.extend([predictor_writer])
171
172 aligner_wrapper = AlignerWrapperModel(model=model, cfg=cfg.aligner_args)
173 trainer.predict(model=aligner_wrapper, dataloaders=data_loader, return_predictions=cfg.return_predictions)
174 samples_num = predictor_writer.close_output_file()
175
176 logging.info(
177 f"Prediction on rank {global_rank} is done for {samples_num} samples and results are stored in {output_file}."
178 )
179
180 if torch.distributed.is_initialized():
181 torch.distributed.barrier()
182
183 samples_num = 0
184 if is_global_rank_zero():
185 output_file = os.path.join(cfg.output_path, "predictions_all.json")
186 logging.info(f"Prediction files are being aggregated in {output_file}.")
187 with open(output_file, 'tw', encoding="utf-8") as outf:
188 for rank in range(trainer.world_size):
189 input_file = os.path.join(cfg.output_path, f"predictions_{rank}.json")
190 with open(input_file, 'r', encoding="utf-8") as inpf:
191 lines = inpf.readlines()
192 samples_num += len(lines)
193 outf.writelines(lines)
194 logging.info(
195 f"Prediction is done for {samples_num} samples in total on all workers and results are aggregated in {output_file}."
196 )
197
198
199 if __name__ == '__main__':
200 main()
201
[end of examples/asr/experimental/k2/align_speech_parallel.py]
[start of nemo/collections/asr/metrics/rnnt_wer.py]
1 # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import copy
16 import re
17 from abc import abstractmethod
18 from dataclasses import dataclass, is_dataclass
19 from typing import Callable, Dict, List, Optional, Tuple, Union
20
21 import editdistance
22 import numpy as np
23 import torch
24 from omegaconf import OmegaConf
25 from torchmetrics import Metric
26
27 from nemo.collections.asr.metrics.wer import move_dimension_to_the_front
28 from nemo.collections.asr.parts.submodules import rnnt_beam_decoding as beam_decode
29 from nemo.collections.asr.parts.submodules import rnnt_greedy_decoding as greedy_decode
30 from nemo.collections.asr.parts.utils.asr_confidence_utils import ConfidenceConfig, ConfidenceMixin
31 from nemo.collections.asr.parts.utils.rnnt_utils import Hypothesis, NBestHypotheses
32 from nemo.utils import logging
33
34 __all__ = ['RNNTDecoding', 'RNNTWER']
35
36
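# The entropy-based confidence measures documented in AbstractRNNTDecoding
# below can be sketched with the standard library alone. This helper is an
# illustrative sketch only: its name, signature, and the choice of
# normalizing by the entropy of the uniform distribution are assumptions
# made for the demo, not the production implementation (which lives in
# nemo.collections.asr.parts.utils.asr_confidence_utils).
def _entropy_confidence_sketch(probs, alpha, entropy_type='gibbs'):
    """Map a probability vector to a [0, 1] confidence score ('lin' norm)."""
    import math

    def _entropy(dist):
        p_alpha = [p ** alpha for p in dist]  # power-scaled probabilities
        if entropy_type == 'gibbs':
            # H_alpha = -sum_i((p^alpha_i) * log(p^alpha_i))
            return -sum(q * math.log(q) for q in p_alpha if q > 0.0)
        if entropy_type == 'tsallis':
            # H_alpha = 1/(alpha-1) * (1 - sum_i(p^alpha_i)); alpha == 1 is the Gibbs limit
            return (1.0 - sum(p_alpha)) / (alpha - 1.0)
        if entropy_type == 'renyi':
            # H_alpha = 1/(1-alpha) * log_2(sum_i(p^alpha_i)); alpha == 1 is the Gibbs limit
            return math.log2(sum(p_alpha)) / (1.0 - alpha)
        raise ValueError(f"unknown entropy_type: {entropy_type}")

    # Normalize by the entropy of the uniform distribution (the maximum under
    # the alpha constraints documented below), then flip so 1.0 means confident.
    vocab_size = len(probs)
    h_max = _entropy([1.0 / vocab_size] * vocab_size)
    return 1.0 - _entropy(probs) / h_max

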
37 class AbstractRNNTDecoding(ConfidenceMixin):
38 """
39 Used for performing RNN-T auto-regressive decoding of the Decoder+Joint network given the encoder state.
40
41 Args:
42 decoding_cfg: A dict-like object which contains the following key-value pairs.
43 strategy: str value which represents the type of decoding that can occur.
44 Possible values are :
45 - greedy, greedy_batch (for greedy decoding).
46 - beam, tsd, alsd (for beam search decoding).
47
48 compute_hypothesis_token_set: A bool flag, which determines whether to compute a list of decoded
49 tokens as well as the decoded string. Default is False in order to avoid double decoding
50 unless required.
51
52 preserve_alignments: Bool flag which preserves the history of logprobs generated during
53 decoding (sample / batched). When set to true, the Hypothesis will contain
54 the non-null value for `alignments` in it. Here, `alignments` is a List of List of
55 Tuple(Tensor (of length V + 1), Tensor(scalar, label after argmax)).
56
57 In order to obtain this hypothesis, please utilize `rnnt_decoder_predictions_tensor` function
58 with the `return_hypotheses` flag set to True.
59
60 The length of the list corresponds to the Acoustic Length (T).
61 Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more targets from a vocabulary.
62 U is the number of target tokens for the current timestep Ti.
63
64 compute_timestamps: A bool flag, which determines whether to compute the character/subword, or
65 word based timestamp mapping the output log-probabilities to discrete intervals of timestamps.
66 The timestamps will be available in the returned Hypothesis.timestep as a dictionary.
67
68 rnnt_timestamp_type: A str value, which represents the types of timestamps that should be calculated.
69 Can take the following values - "char" for character/subword time stamps, "word" for word level
70 time stamps and "all" (default), for both character level and word level time stamps.
71
72 word_seperator: Str token representing the separator between words.
73
74 preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores
75 generated during decoding (sample / batched). When set to true, the Hypothesis will contain
76 the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of List of floats.
77
78 confidence_cfg: A dict-like object which contains the following key-value pairs related to confidence
79 scores. In order to obtain hypotheses with confidence scores, please utilize
80 `rnnt_decoder_predictions_tensor` function with the `preserve_frame_confidence` flag set to True.
81
82 preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores
83 generated during decoding (sample / batched). When set to true, the Hypothesis will contain
84 the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of List of floats.
85
86 The length of the list corresponds to the Acoustic Length (T).
87 Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more confidence scores.
88 U is the number of target tokens for the current timestep Ti.
89 preserve_token_confidence: Bool flag which preserves the history of per-token confidence scores
90 generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
91 the non-null value for `token_confidence` in it. Here, `token_confidence` is a List of floats.
92
93 The length of the list corresponds to the number of recognized tokens.
94 preserve_word_confidence: Bool flag which preserves the history of per-word confidence scores
95 generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
96 the non-null value for `word_confidence` in it. Here, `word_confidence` is a List of floats.
97
98 The length of the list corresponds to the number of recognized words.
99 exclude_blank: Bool flag indicating that blank token confidence scores are to be excluded
100 from the `token_confidence`.
101 aggregation: Which aggregation type to use for collapsing per-token confidence into per-word confidence.
102 Valid options are `mean`, `min`, `max`, `prod`.
103 measure_cfg: A dict-like object which contains the measure name and settings to compute per-frame
104 confidence scores.
105
106 name: The measure name (str).
107 Supported values:
108 - 'max_prob' for using the maximum token probability as a confidence.
109 - 'entropy' for using a normalized entropy of a log-likelihood vector.
110
111 entropy_type: Which type of entropy to use (str).
112 Used if confidence_measure_cfg.name is set to `entropy`.
113 Supported values:
114 - 'gibbs' for the (standard) Gibbs entropy. If the alpha (ฮฑ) is provided,
115 the formula is the following: H_ฮฑ = -sum_i((p^ฮฑ_i)*log(p^ฮฑ_i)).
116 Note that for this entropy, the alpha should comply the following inequality:
117 (log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= ฮฑ <= (1+log(V-1))/log(V-1)
118 where V is the model vocabulary size.
119 - 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
120 Tsallis entropy formula is the following: H_ฮฑ = 1/(ฮฑ-1)*(1-sum_i(p^ฮฑ_i)),
121 where ฮฑ is a parameter. When ฮฑ == 1, it works like the Gibbs entropy.
122 More: https://en.wikipedia.org/wiki/Tsallis_entropy
123 - 'renyi' for the Rรฉnyi entropy.
124 Rรฉnyi entropy formula is the following: H_ฮฑ = 1/(1-ฮฑ)*log_2(sum_i(p^ฮฑ_i)),
125 where ฮฑ is a parameter. When ฮฑ == 1, it works like the Gibbs entropy.
126 More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
127
128 alpha: Power scale for logsoftmax (ฮฑ for entropies). Here we restrict it to be > 0.
129 When the alpha equals one, scaling is not applied to 'max_prob',
130 and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
131
132 entropy_norm: A mapping of the entropy value to the interval [0,1].
133 Supported values:
134 - 'lin' for using the linear mapping.
135 - 'exp' for using exponential mapping with linear shift.
136
137 The config may further contain the following sub-dictionaries:
138 "greedy":
139 max_symbols: int, describing the maximum number of target tokens to decode per
140 timestep during greedy decoding. Setting to larger values allows longer sentences
141 to be decoded, at the cost of increased execution time.
142 preserve_frame_confidence: Same as above, overrides above value.
143 confidence_measure_cfg: Same as above, overrides confidence_cfg.measure_cfg.
144
145 "beam":
146 beam_size: int, defining the beam size for beam search. Must be >= 1.
147 If beam_size == 1, will perform cached greedy search. This might give slightly different
148 results compared to the greedy search above.
149
150 score_norm: optional bool, whether to normalize the returned beam score in the hypotheses.
151 Set to True by default.
152
153 return_best_hypothesis: optional bool, whether to return just the best hypothesis or all of the
154 hypotheses after beam search has concluded. This flag is set by default.
155
156 tsd_max_sym_exp: optional int, determines number of symmetric expansions of the target symbols
157 per timestep of the acoustic model. Larger values will allow longer sentences to be decoded,
158 at increased cost to execution time.
159
160 alsd_max_target_len: optional int or float, determines the potential maximum target sequence length.
161 If an integer is provided, it can decode sequences of that particular maximum length.
162 If a float is provided, it can decode sequences of int(alsd_max_target_len * seq_len),
163 where seq_len is the length of the acoustic model output (T).
164
165 NOTE:
166 If a float is provided, it can be greater than 1!
167 By default, a float of 2.0 is used so that a target sequence can be at most twice
168 as long as the acoustic model output length T.
169
170 maes_num_steps: Number of adaptive steps to take. From the paper, 2 steps is generally sufficient,
171 and can be reduced to 1 to improve decoding speed while sacrificing some accuracy. int > 0.
172
173 maes_prefix_alpha: Maximum prefix length in prefix search. Must be an integer, and it is advised to keep this as 1
174 in order to reduce expensive beam search cost later. int >= 0.
175
176 maes_expansion_beta: Maximum number of prefix expansions allowed, in addition to the beam size.
177 Effectively, the number of hypothesis = beam_size + maes_expansion_beta. Must be an int >= 0,
178 and affects the speed of inference since large values will perform large beam search in the next step.
179
180 maes_expansion_gamma: Float pruning threshold used in the prune-by-value step when computing the expansions.
181 The default (2.3) is selected from the paper. It performs a comparison (max_log_prob - gamma <= log_prob[v])
182 where v is all vocabulary indices in the Vocab set and max_log_prob is the "most" likely token to be
183 predicted. Gamma therefore provides a margin of additional tokens which can be potential candidates for
184 expansion apart from the "most likely" candidate.
185 Lower values will reduce the number of expansions (by increasing pruning-by-value, thereby improving speed
186 but hurting accuracy). Higher values will increase the number of expansions (by reducing pruning-by-value,
187 thereby reducing speed but potentially improving accuracy). This is a hyper parameter to be experimentally
188 tuned on a validation set.
189
190 softmax_temperature: Scales the logits of the joint prior to computing log_softmax.
191
192 decoder: The Decoder/Prediction network module.
193 joint: The Joint network module.
194 blank_id: The id of the RNNT blank token.
195 """
196
197 def __init__(self, decoding_cfg, decoder, joint, blank_id: int):
198 super(AbstractRNNTDecoding, self).__init__()
199
200 # Convert dataclass to config object
201 if is_dataclass(decoding_cfg):
202 decoding_cfg = OmegaConf.structured(decoding_cfg)
203
204 self.cfg = decoding_cfg
205 self.blank_id = blank_id
206 self.num_extra_outputs = joint.num_extra_outputs
207 self.big_blank_durations = self.cfg.get("big_blank_durations", None)
208 self.durations = self.cfg.get("durations", None)
209 self.compute_hypothesis_token_set = self.cfg.get("compute_hypothesis_token_set", False)
210 self.compute_langs = decoding_cfg.get('compute_langs', False)
211 self.preserve_alignments = self.cfg.get('preserve_alignments', None)
212 self.joint_fused_batch_size = self.cfg.get('fused_batch_size', None)
213 self.compute_timestamps = self.cfg.get('compute_timestamps', None)
214 self.word_seperator = self.cfg.get('word_seperator', ' ')
215
216 if self.durations is not None: # this means it's a TDT model.
217 if blank_id == 0:
218 raise ValueError("blank_id must equal len(non_blank_vocabs) for TDT models")
219 if self.big_blank_durations is not None:
220 raise ValueError("duration and big_blank_durations can't both be not None")
221 if self.cfg.strategy not in ['greedy', 'greedy_batch']:
222 raise ValueError("currently only greedy and greedy_batch inference is supported for TDT models")
223
224 if self.big_blank_durations is not None: # this means it's a multi-blank model.
225 if blank_id == 0:
226 raise ValueError("blank_id must equal len(vocabs) for multi-blank RNN-T models")
227 if self.cfg.strategy not in ['greedy', 'greedy_batch']:
228 raise ValueError(
229 "currently only greedy and greedy_batch inference is supported for multi-blank models"
230 )
231
232 possible_strategies = ['greedy', 'greedy_batch', 'beam', 'tsd', 'alsd', 'maes']
233 if self.cfg.strategy not in possible_strategies:
234 raise ValueError(f"Decoding strategy must be one of {possible_strategies}")
235
236 # Update preserve alignments
237 if self.preserve_alignments is None:
238 if self.cfg.strategy in ['greedy', 'greedy_batch']:
239 self.preserve_alignments = self.cfg.greedy.get('preserve_alignments', False)
240
241 elif self.cfg.strategy in ['beam', 'tsd', 'alsd', 'maes']:
242 self.preserve_alignments = self.cfg.beam.get('preserve_alignments', False)
243
244 # Update compute timestamps
245 if self.compute_timestamps is None:
246 if self.cfg.strategy in ['greedy', 'greedy_batch']:
247 self.compute_timestamps = self.cfg.greedy.get('compute_timestamps', False)
248
249 elif self.cfg.strategy in ['beam', 'tsd', 'alsd', 'maes']:
250 self.compute_timestamps = self.cfg.beam.get('compute_timestamps', False)
251
252 # Test if alignments are being preserved for RNNT
253 if self.compute_timestamps is True and self.preserve_alignments is False:
254 raise ValueError("If `compute_timestamps` flag is set, then `preserve_alignments` flag must also be set.")
255
256 # initialize confidence-related fields
257 self._init_confidence(self.cfg.get('confidence_cfg', None))
258
259 # Confidence estimation is not implemented for these strategies
260 if (
261 not self.preserve_frame_confidence
262 and self.cfg.strategy in ['beam', 'tsd', 'alsd', 'maes']
263 and self.cfg.beam.get('preserve_frame_confidence', False)
264 ):
265 raise NotImplementedError(f"Confidence calculation is not supported for strategy `{self.cfg.strategy}`")
266
267 if self.cfg.strategy == 'greedy':
268 if self.big_blank_durations is None:
269 if self.durations is None:
270 self.decoding = greedy_decode.GreedyRNNTInfer(
271 decoder_model=decoder,
272 joint_model=joint,
273 blank_index=self.blank_id,
274 max_symbols_per_step=(
275 self.cfg.greedy.get('max_symbols', None)
276 or self.cfg.greedy.get('max_symbols_per_step', None)
277 ),
278 preserve_alignments=self.preserve_alignments,
279 preserve_frame_confidence=self.preserve_frame_confidence,
280 confidence_measure_cfg=self.confidence_measure_cfg,
281 )
282 else:
283 self.decoding = greedy_decode.GreedyTDTInfer(
284 decoder_model=decoder,
285 joint_model=joint,
286 blank_index=self.blank_id,
287 durations=self.durations,
288 max_symbols_per_step=(
289 self.cfg.greedy.get('max_symbols', None)
290 or self.cfg.greedy.get('max_symbols_per_step', None)
291 ),
292 preserve_alignments=self.preserve_alignments,
293 preserve_frame_confidence=self.preserve_frame_confidence,
294 confidence_measure_cfg=self.confidence_measure_cfg,
295 )
296 else:
297 self.decoding = greedy_decode.GreedyMultiblankRNNTInfer(
298 decoder_model=decoder,
299 joint_model=joint,
300 blank_index=self.blank_id,
301 big_blank_durations=self.big_blank_durations,
302 max_symbols_per_step=(
303 self.cfg.greedy.get('max_symbols', None) or self.cfg.greedy.get('max_symbols_per_step', None)
304 ),
305 preserve_alignments=self.preserve_alignments,
306 preserve_frame_confidence=self.preserve_frame_confidence,
307 confidence_measure_cfg=self.confidence_measure_cfg,
308 )
309
310 elif self.cfg.strategy == 'greedy_batch':
311 if self.big_blank_durations is None:
312 if self.durations is None:
313 self.decoding = greedy_decode.GreedyBatchedRNNTInfer(
314 decoder_model=decoder,
315 joint_model=joint,
316 blank_index=self.blank_id,
317 max_symbols_per_step=(
318 self.cfg.greedy.get('max_symbols', None)
319 or self.cfg.greedy.get('max_symbols_per_step', None)
320 ),
321 preserve_alignments=self.preserve_alignments,
322 preserve_frame_confidence=self.preserve_frame_confidence,
323 confidence_measure_cfg=self.confidence_measure_cfg,
324 )
325 else:
326 self.decoding = greedy_decode.GreedyBatchedTDTInfer(
327 decoder_model=decoder,
328 joint_model=joint,
329 blank_index=self.blank_id,
330 durations=self.durations,
331 max_symbols_per_step=(
332 self.cfg.greedy.get('max_symbols', None)
333 or self.cfg.greedy.get('max_symbols_per_step', None)
334 ),
335 preserve_alignments=self.preserve_alignments,
336 preserve_frame_confidence=self.preserve_frame_confidence,
337 confidence_measure_cfg=self.confidence_measure_cfg,
338 )
339
340 else:
341 self.decoding = greedy_decode.GreedyBatchedMultiblankRNNTInfer(
342 decoder_model=decoder,
343 joint_model=joint,
344 blank_index=self.blank_id,
345 big_blank_durations=self.big_blank_durations,
346 max_symbols_per_step=(
347 self.cfg.greedy.get('max_symbols', None) or self.cfg.greedy.get('max_symbols_per_step', None)
348 ),
349 preserve_alignments=self.preserve_alignments,
350 preserve_frame_confidence=self.preserve_frame_confidence,
351 confidence_measure_cfg=self.confidence_measure_cfg,
352 )
353
354 elif self.cfg.strategy == 'beam':
355
356 self.decoding = beam_decode.BeamRNNTInfer(
357 decoder_model=decoder,
358 joint_model=joint,
359 beam_size=self.cfg.beam.beam_size,
360 return_best_hypothesis=decoding_cfg.beam.get('return_best_hypothesis', True),
361 search_type='default',
362 score_norm=self.cfg.beam.get('score_norm', True),
363 softmax_temperature=self.cfg.beam.get('softmax_temperature', 1.0),
364 preserve_alignments=self.preserve_alignments,
365 )
366
367 elif self.cfg.strategy == 'tsd':
368
369 self.decoding = beam_decode.BeamRNNTInfer(
370 decoder_model=decoder,
371 joint_model=joint,
372 beam_size=self.cfg.beam.beam_size,
373 return_best_hypothesis=decoding_cfg.beam.get('return_best_hypothesis', True),
374 search_type='tsd',
375 score_norm=self.cfg.beam.get('score_norm', True),
376 tsd_max_sym_exp_per_step=self.cfg.beam.get('tsd_max_sym_exp', 10),
377 softmax_temperature=self.cfg.beam.get('softmax_temperature', 1.0),
378 preserve_alignments=self.preserve_alignments,
379 )
380
381 elif self.cfg.strategy == 'alsd':
382
383 self.decoding = beam_decode.BeamRNNTInfer(
384 decoder_model=decoder,
385 joint_model=joint,
386 beam_size=self.cfg.beam.beam_size,
387 return_best_hypothesis=decoding_cfg.beam.get('return_best_hypothesis', True),
388 search_type='alsd',
389 score_norm=self.cfg.beam.get('score_norm', True),
390 alsd_max_target_len=self.cfg.beam.get('alsd_max_target_len', 2),
391 softmax_temperature=self.cfg.beam.get('softmax_temperature', 1.0),
392 preserve_alignments=self.preserve_alignments,
393 )
394
395 elif self.cfg.strategy == 'maes':
396
397 self.decoding = beam_decode.BeamRNNTInfer(
398 decoder_model=decoder,
399 joint_model=joint,
400 beam_size=self.cfg.beam.beam_size,
401 return_best_hypothesis=decoding_cfg.beam.get('return_best_hypothesis', True),
402 search_type='maes',
403 score_norm=self.cfg.beam.get('score_norm', True),
404 maes_num_steps=self.cfg.beam.get('maes_num_steps', 2),
405 maes_prefix_alpha=self.cfg.beam.get('maes_prefix_alpha', 1),
406 maes_expansion_gamma=self.cfg.beam.get('maes_expansion_gamma', 2.3),
407 maes_expansion_beta=self.cfg.beam.get('maes_expansion_beta', 2.0),
408 softmax_temperature=self.cfg.beam.get('softmax_temperature', 1.0),
409 preserve_alignments=self.preserve_alignments,
410 ngram_lm_model=self.cfg.beam.get('ngram_lm_model', None),
411 ngram_lm_alpha=self.cfg.beam.get('ngram_lm_alpha', 0.0),
412 hat_subtract_ilm=self.cfg.beam.get('hat_subtract_ilm', False),
413 hat_ilm_weight=self.cfg.beam.get('hat_ilm_weight', 0.0),
414 )
415
416 else:
417
418 raise ValueError(
419 f"Incorrect decoding strategy supplied. Must be one of {possible_strategies}\n"
420 f"but was provided {self.cfg.strategy}"
421 )
422
423 # Update the joint fused batch size or disable it entirely if needed.
424 self.update_joint_fused_batch_size()
425
426 def rnnt_decoder_predictions_tensor(
427 self,
428 encoder_output: torch.Tensor,
429 encoded_lengths: torch.Tensor,
430 return_hypotheses: bool = False,
431 partial_hypotheses: Optional[List[Hypothesis]] = None,
432 ) -> Tuple[List[str], Optional[List[List[str]]], Optional[Union[Hypothesis, NBestHypotheses]]]:
433 """
434 Decode an encoder output by autoregressive decoding of the Decoder+Joint networks.
435
436 Args:
437 encoder_output: torch.Tensor of shape [B, D, T].
438 encoded_lengths: torch.Tensor containing lengths of the padded encoder outputs. Shape [B].
439             return_hypotheses: bool. If set to True, it will return a list of Hypothesis or NBestHypotheses.
440
441 Returns:
442 If `return_best_hypothesis` is set:
443 A tuple (hypotheses, None):
444 hypotheses - list of Hypothesis (best hypothesis per sample).
445 Look at rnnt_utils.Hypothesis for more information.
446
447 If `return_best_hypothesis` is not set:
448 A tuple(hypotheses, all_hypotheses)
449 hypotheses - list of Hypothesis (best hypothesis per sample).
450 Look at rnnt_utils.Hypothesis for more information.
451 all_hypotheses - list of NBestHypotheses. Each NBestHypotheses further contains a sorted
452 list of all the hypotheses of the model per sample.
453 Look at rnnt_utils.NBestHypotheses for more information.
454 """
455 # Compute hypotheses
456 with torch.inference_mode():
457 hypotheses_list = self.decoding(
458 encoder_output=encoder_output, encoded_lengths=encoded_lengths, partial_hypotheses=partial_hypotheses
459             )  # type: Tuple[List[Hypothesis]]
460
461 # extract the hypotheses
462 hypotheses_list = hypotheses_list[0] # type: List[Hypothesis]
463
464 prediction_list = hypotheses_list
465
466 if isinstance(prediction_list[0], NBestHypotheses):
467 hypotheses = []
468 all_hypotheses = []
469
470 for nbest_hyp in prediction_list: # type: NBestHypotheses
471 n_hyps = nbest_hyp.n_best_hypotheses # Extract all hypotheses for this sample
472 decoded_hyps = self.decode_hypothesis(n_hyps) # type: List[str]
473
474 # If computing timestamps
475 if self.compute_timestamps is True:
476 timestamp_type = self.cfg.get('rnnt_timestamp_type', 'all')
477 for hyp_idx in range(len(decoded_hyps)):
478 decoded_hyps[hyp_idx] = self.compute_rnnt_timestamps(decoded_hyps[hyp_idx], timestamp_type)
479
480 hypotheses.append(decoded_hyps[0]) # best hypothesis
481 all_hypotheses.append(decoded_hyps)
482
483 if return_hypotheses:
484 return hypotheses, all_hypotheses
485
486 best_hyp_text = [h.text for h in hypotheses]
487 all_hyp_text = [h.text for hh in all_hypotheses for h in hh]
488 return best_hyp_text, all_hyp_text
489
490 else:
491 hypotheses = self.decode_hypothesis(prediction_list) # type: List[str]
492
493 # If computing timestamps
494 if self.compute_timestamps is True:
495 timestamp_type = self.cfg.get('rnnt_timestamp_type', 'all')
496 for hyp_idx in range(len(hypotheses)):
497 hypotheses[hyp_idx] = self.compute_rnnt_timestamps(hypotheses[hyp_idx], timestamp_type)
498
499 if return_hypotheses:
500 # greedy decoding, can get high-level confidence scores
501 if self.preserve_frame_confidence and (
502 self.preserve_word_confidence or self.preserve_token_confidence
503 ):
504 hypotheses = self.compute_confidence(hypotheses)
505 return hypotheses, None
506
507 best_hyp_text = [h.text for h in hypotheses]
508 return best_hyp_text, None
509
510 def decode_hypothesis(self, hypotheses_list: List[Hypothesis]) -> List[Union[Hypothesis, NBestHypotheses]]:
511 """
512         Decode a list of hypotheses, writing the decoded string into each Hypothesis.
513
514         Args:
515             hypotheses_list: List of Hypothesis.
516
517         Returns:
518             The same list of hypotheses, with the `text` field of each populated with its decoded string.
519 """
520 for ind in range(len(hypotheses_list)):
521 # Extract the integer encoded hypothesis
522 prediction = hypotheses_list[ind].y_sequence
523
524             if not isinstance(prediction, list):
525 prediction = prediction.tolist()
526
527 # RNN-T sample level is already preprocessed by implicit RNNT decoding
528 # Simply remove any blank and possibly big blank tokens
529 if self.big_blank_durations is not None: # multi-blank RNNT
530 num_extra_outputs = len(self.big_blank_durations)
531 prediction = [p for p in prediction if p < self.blank_id - num_extra_outputs]
532 elif self.durations is not None: # TDT model.
533 prediction = [p for p in prediction if p < self.blank_id]
534 else: # standard RNN-T
535 prediction = [p for p in prediction if p != self.blank_id]
536
537 # De-tokenize the integer tokens; if not computing timestamps
538 if self.compute_timestamps is True:
539 # keep the original predictions, wrap with the number of repetitions per token and alignments
540 # this is done so that `rnnt_decoder_predictions_tensor()` can process this hypothesis
541 # in order to compute exact time stamps.
542 alignments = copy.deepcopy(hypotheses_list[ind].alignments)
543 token_repetitions = [1] * len(alignments) # preserve number of repetitions per token
544 hypothesis = (prediction, alignments, token_repetitions)
545 else:
546 hypothesis = self.decode_tokens_to_str(prediction)
547
548 # TODO: remove
549 # collapse leading spaces before . , ? for PC models
550 hypothesis = re.sub(r'(\s+)([\.\,\?])', r'\2', hypothesis)
551
552 if self.compute_hypothesis_token_set:
553 hypotheses_list[ind].tokens = self.decode_ids_to_tokens(prediction)
554
555 # De-tokenize the integer tokens
556 hypotheses_list[ind].text = hypothesis
557
558 return hypotheses_list
559
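The three blank-filtering branches in `decode_hypothesis` above can be sketched as a standalone helper. This is a minimal re-implementation for illustration only; the helper name `filter_blanks` is hypothetical and not part of this class:

```python
# Minimal sketch of the blank filtering performed per-branch above.
# Assumes the same id layout as the class: for multi-blank RNNT the big blanks
# occupy the ids just below `blank_id`, and for TDT the blank is the largest id.
def filter_blanks(prediction, blank_id, big_blank_durations=None, durations=None):
    if big_blank_durations is not None:
        # multi-blank RNNT: drop the standard blank and all big blank tokens
        cutoff = blank_id - len(big_blank_durations)
        return [p for p in prediction if p < cutoff]
    if durations is not None:
        # TDT: every id strictly below the blank id is a real token
        return [p for p in prediction if p < blank_id]
    # standard RNN-T: drop only the blank id itself
    return [p for p in prediction if p != blank_id]
```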
560 def compute_confidence(self, hypotheses_list: List[Hypothesis]) -> List[Hypothesis]:
561 """
562 Computes high-level (per-token and/or per-word) confidence scores for a list of hypotheses.
563 Assumes that `frame_confidence` is present in the hypotheses.
564
565 Args:
566 hypotheses_list: List of Hypothesis.
567
568 Returns:
569 A list of hypotheses with high-level confidence scores.
570 """
571 if self.exclude_blank_from_confidence:
572 for hyp in hypotheses_list:
573 hyp.token_confidence = hyp.non_blank_frame_confidence
574 else:
575 for hyp in hypotheses_list:
576 offset = 0
577 token_confidence = []
578 if len(hyp.timestep) > 0:
579 for ts, te in zip(hyp.timestep, hyp.timestep[1:] + [len(hyp.frame_confidence)]):
580 if ts != te:
581 # <blank> tokens are considered to belong to the last non-blank token, if any.
582 token_confidence.append(
583 self._aggregate_confidence(
584 [hyp.frame_confidence[ts][offset]]
585 + [fc[0] for fc in hyp.frame_confidence[ts + 1 : te]]
586 )
587 )
588 offset = 0
589 else:
590 token_confidence.append(hyp.frame_confidence[ts][offset])
591 offset += 1
592 hyp.token_confidence = token_confidence
593 if self.preserve_word_confidence:
594 for hyp in hypotheses_list:
595 hyp.word_confidence = self._aggregate_token_confidence(hyp)
596 return hypotheses_list
597
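The blank-absorbing aggregation in `compute_confidence` above can be illustrated standalone. This sketch assumes mean aggregation and the same data layout (`frame_confidence` holds, per time frame, the confidence of each symbol emitted in that frame); the helper name is hypothetical:

```python
def aggregate_token_confidence(timesteps, frame_confidence):
    """Fold each <blank> frame's confidence into the preceding non-blank token (mean)."""
    token_confidence = []
    offset = 0  # position of the current token within its frame
    for ts, te in zip(timesteps, timesteps[1:] + [len(frame_confidence)]):
        if ts != te:
            # token's own score plus the leading (blank) score of every frame until the next token
            scores = [frame_confidence[ts][offset]] + [fc[0] for fc in frame_confidence[ts + 1 : te]]
            token_confidence.append(sum(scores) / len(scores))
            offset = 0
        else:
            # next token is emitted in the same frame: advance within the frame
            token_confidence.append(frame_confidence[ts][offset])
            offset += 1
    return token_confidence
```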
598 @abstractmethod
599 def decode_tokens_to_str(self, tokens: List[int]) -> str:
600 """
601         Implemented by subclass in order to decode a token id list into a string.
602
603 Args:
604 tokens: List of int representing the token ids.
605
606 Returns:
607 A decoded string.
608 """
609 raise NotImplementedError()
610
611 @abstractmethod
612 def decode_ids_to_tokens(self, tokens: List[int]) -> List[str]:
613 """
614 Implemented by subclass in order to decode a token id list into a token list.
615 A token list is the string representation of each token id.
616
617 Args:
618 tokens: List of int representing the token ids.
619
620 Returns:
621 A list of decoded tokens.
622 """
623 raise NotImplementedError()
624
625 @abstractmethod
626 def decode_tokens_to_lang(self, tokens: List[int]) -> str:
627 """
628 Implemented by subclass in order to
629 compute the most likely language ID (LID) string given the tokens.
630
631 Args:
632 tokens: List of int representing the token ids.
633
634 Returns:
635 A decoded LID string.
636 """
637 raise NotImplementedError()
638
639 @abstractmethod
640 def decode_ids_to_langs(self, tokens: List[int]) -> List[str]:
641 """
642 Implemented by subclass in order to
643 decode a token id list into language ID (LID) list.
644
645 Args:
646 tokens: List of int representing the token ids.
647
648 Returns:
649             A list of decoded LIDs.
650 """
651 raise NotImplementedError()
652
653 def update_joint_fused_batch_size(self):
654 if self.joint_fused_batch_size is None:
655 # do nothing and let the Joint itself handle setting up of the fused batch
656 return
657
658 if not hasattr(self.decoding.joint, 'set_fused_batch_size'):
659 logging.warning(
660 "The joint module does not have `set_fused_batch_size(int)` as a setter function.\n"
661 "Ignoring update of joint fused batch size."
662 )
663 return
664
665 if not hasattr(self.decoding.joint, 'set_fuse_loss_wer'):
666 logging.warning(
667 "The joint module does not have `set_fuse_loss_wer(bool, RNNTLoss, RNNTWER)` "
668 "as a setter function.\n"
669 "Ignoring update of joint fused batch size."
670 )
671 return
672
673 if self.joint_fused_batch_size > 0:
674 self.decoding.joint.set_fused_batch_size(self.joint_fused_batch_size)
675 else:
676 logging.info("Joint fused batch size <= 0; Will temporarily disable fused batch step in the Joint.")
677 self.decoding.joint.set_fuse_loss_wer(False)
678
679 def compute_rnnt_timestamps(self, hypothesis: Hypothesis, timestamp_type: str = "all"):
680 assert timestamp_type in ['char', 'word', 'all']
681
682 # Unpack the temporary storage
683 decoded_prediction, alignments, token_repetitions = hypothesis.text
684
685 # Retrieve offsets
686 char_offsets = word_offsets = None
687 char_offsets = self._compute_offsets(hypothesis, token_repetitions, self.blank_id)
688
689 # finally, set the flattened decoded predictions to text field for later text decoding
690 hypothesis.text = decoded_prediction
691
692 # Assert number of offsets and hypothesis tokens are 1:1 match.
693 num_flattened_tokens = 0
694 for t in range(len(char_offsets)):
695 # Subtract one here for the extra RNNT BLANK token emitted to designate "End of timestep"
696 num_flattened_tokens += len(char_offsets[t]['char']) - 1
697
698 if num_flattened_tokens != len(hypothesis.text):
699 raise ValueError(
700 f"`char_offsets`: {char_offsets} and `processed_tokens`: {hypothesis.text}"
701 " have to be of the same length, but are: "
702 f"`len(offsets)`: {len(char_offsets)} and `len(processed_tokens)`:"
703 f" {len(hypothesis.text)}"
704 )
705
706 encoded_char_offsets = copy.deepcopy(char_offsets)
707
708 # Correctly process the token ids to chars/subwords.
709 for i, offsets in enumerate(char_offsets):
710 decoded_chars = []
711 for char in offsets['char'][:-1]: # ignore the RNNT Blank token at end of every timestep with -1 subset
712 decoded_chars.append(self.decode_tokens_to_str([int(char)]))
713 char_offsets[i]["char"] = decoded_chars
714
715 # detect char vs subword models
716 lens = []
717 for v in char_offsets:
718 tokens = v["char"]
719             # each token may be either a single unicode character or multiple unicode characters
720 # for character based models, only 1 token is used
721 # for subword, more than one token can be used.
722 # Computing max, then summing up total lens is a test to check for char vs subword
723 # For char models, len(lens) == sum(lens)
724 # but this is violated for subword models.
725 max_len = max(len(c) for c in tokens)
726 lens.append(max_len)
727
728 # array of one or more chars implies subword based model with multiple char emitted per TxU step (via subword)
729 if sum(lens) > len(lens):
730 text_type = 'subword'
731 else:
732 # full array of ones implies character based model with 1 char emitted per TxU step
733 text_type = 'char'
734
735 # retrieve word offsets from character offsets
736 word_offsets = None
737 if timestamp_type in ['word', 'all']:
738 if text_type == 'char':
739 word_offsets = self._get_word_offsets_chars(char_offsets, word_delimiter_char=self.word_seperator)
740 else:
741 # utilize the copy of char offsets with the correct integer ids for tokens
742 # so as to avoid tokenize -> detokenize -> compare -> merge steps.
743 word_offsets = self._get_word_offsets_subwords_sentencepiece(
744 encoded_char_offsets,
745 hypothesis,
746 decode_ids_to_tokens=self.decode_ids_to_tokens,
747 decode_tokens_to_str=self.decode_tokens_to_str,
748 )
749
750 # attach results
751 if len(hypothesis.timestep) > 0:
752 timestep_info = hypothesis.timestep
753 else:
754 timestep_info = []
755
756 # Setup defaults
757 hypothesis.timestep = {"timestep": timestep_info}
758
759 # Add char / subword time stamps
760 if char_offsets is not None and timestamp_type in ['char', 'all']:
761 hypothesis.timestep['char'] = char_offsets
762
763 # Add word time stamps
764 if word_offsets is not None and timestamp_type in ['word', 'all']:
765 hypothesis.timestep['word'] = word_offsets
766
767 # Convert the flattened token indices to text
768 hypothesis.text = self.decode_tokens_to_str(hypothesis.text)
769
770 return hypothesis
771
772 @staticmethod
773 def _compute_offsets(
774 hypothesis: Hypothesis, token_repetitions: List[int], rnnt_token: int
775 ) -> List[Dict[str, Union[str, int]]]:
776 """
777         Utility method that calculates the individual time indices where a token starts and ends.
778
779 Args:
780 hypothesis: A Hypothesis object that contains `text` field that holds the character / subword token
781 emitted at every time step after rnnt collapse.
782 token_repetitions: A list of ints representing the number of repetitions of each emitted token.
783 rnnt_token: The integer of the rnnt blank token used during rnnt collapse.
784
785         Returns:
786             A list of dictionaries, each containing "char" (the aligned labels), "start_offset" and "end_offset".
787 """
788 start_index = 0
789
790 # If the exact timestep information is available, utilize the 1st non-rnnt blank token timestep
791 # as the start index.
792 if hypothesis.timestep is not None and len(hypothesis.timestep) > 0:
793 start_index = max(0, hypothesis.timestep[0] - 1)
794
795 # Construct the start and end indices brackets
796 end_indices = np.asarray(token_repetitions).cumsum()
797 start_indices = np.concatenate(([start_index], end_indices[:-1]))
798
799 # Process the TxU dangling alignment tensor, containing pairs of (logits, label)
800 alignment_labels = [al_logits_labels for al_logits_labels in hypothesis.text[1]]
801 for t in range(len(alignment_labels)):
802 for u in range(len(alignment_labels[t])):
803 alignment_labels[t][u] = alignment_labels[t][u][1] # pick label from (logit, label) tuple
804
805 # Merge the results per token into a list of dictionaries
806 offsets = [
807 {"char": a, "start_offset": s, "end_offset": e}
808 for a, s, e in zip(alignment_labels, start_indices, end_indices)
809 ]
810
811 # Filter out RNNT token (blank at [t][0] position). This is because blank can only occur at end of a
812 # time step for RNNT, so if 0th token is blank, then that timestep is skipped.
813 offsets = list(filter(lambda offsets: offsets["char"][0] != rnnt_token, offsets))
814 return offsets
815
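The start/end bracket computation in `_compute_offsets` above reduces to a cumulative sum over the repetition counts. A stdlib-only sketch (the helper name `offset_brackets` is hypothetical, for illustration only):

```python
from itertools import accumulate

def offset_brackets(token_repetitions, start_index=0):
    # cumulative repetition counts give each token's end index;
    # each token's start index is the previous token's end index
    end_indices = list(accumulate(token_repetitions))
    start_indices = [start_index] + end_indices[:-1]
    return list(zip(start_indices, end_indices))
```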
816 @staticmethod
817     def _get_word_offsets_chars(
818         offsets: List[Dict[str, Union[str, float]]], word_delimiter_char: str = " "
819     ) -> List[Dict[str, Union[str, float]]]:
820 """
821 Utility method which constructs word time stamps out of character time stamps.
822
823 References:
824 This code is a port of the Hugging Face code for word time stamp construction.
825
826 Args:
827 offsets: A list of dictionaries, each containing "char", "start_offset" and "end_offset".
828 word_delimiter_char: Character token that represents the word delimiter. By default, " ".
829
830 Returns:
831 A list of dictionaries containing the word offsets. Each item contains "word", "start_offset" and
832 "end_offset".
833 """
834 word_offsets = []
835
836 last_state = "SPACE"
837 word = ""
838 start_offset = 0
839 end_offset = 0
840 for i, offset in enumerate(offsets):
841 chars = offset["char"]
842 for char in chars:
843 state = "SPACE" if char == word_delimiter_char else "WORD"
844
845 if state == last_state:
846 # If we are in the same state as before, we simply repeat what we've done before
847 end_offset = offset["end_offset"]
848 word += char
849 else:
850 # Switching state
851 if state == "SPACE":
852 # Finishing a word
853 word_offsets.append({"word": word, "start_offset": start_offset, "end_offset": end_offset})
854 else:
855 # Starting a new word
856 start_offset = offset["start_offset"]
857 end_offset = offset["end_offset"]
858 word = char
859
860 last_state = state
861
862 if last_state == "WORD":
863 word_offsets.append({"word": word, "start_offset": start_offset, "end_offset": end_offset})
864
865 return word_offsets
866
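The SPACE/WORD state machine in `_get_word_offsets_chars` above, re-implemented standalone for illustration (the helper name `merge_char_offsets` is hypothetical; the real method operates on the same offset dictionaries):

```python
# Standalone sketch of the char -> word offset merge performed above:
# contiguous WORD-state characters accumulate into one word, and a switch
# to the delimiter (SPACE state) commits the accumulated word with its offsets.
def merge_char_offsets(offsets, word_delimiter_char=" "):
    word_offsets = []
    last_state = "SPACE"
    word = ""
    start_offset = 0
    end_offset = 0
    for offset in offsets:
        for char in offset["char"]:
            state = "SPACE" if char == word_delimiter_char else "WORD"
            if state == last_state:
                # same state as before: extend the current run
                end_offset = offset["end_offset"]
                word += char
            elif state == "SPACE":
                # finishing a word
                word_offsets.append({"word": word, "start_offset": start_offset, "end_offset": end_offset})
            else:
                # starting a new word
                start_offset = offset["start_offset"]
                end_offset = offset["end_offset"]
                word = char
            last_state = state
    if last_state == "WORD":
        word_offsets.append({"word": word, "start_offset": start_offset, "end_offset": end_offset})
    return word_offsets
```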
867 @staticmethod
868     def _get_word_offsets_subwords_sentencepiece(
869         offsets: List[Dict[str, Union[str, float]]],
870         hypothesis: Hypothesis,
871         decode_ids_to_tokens: Callable[[List[int]], List[str]],
872         decode_tokens_to_str: Callable[[List[int]], str],
873     ) -> List[Dict[str, Union[str, float]]]:
874 """
875 Utility method which constructs word time stamps out of sub-word time stamps.
876
877         **Note**: Only supports Sentencepiece-based tokenizers!
878
879 Args:
880 offsets: A list of dictionaries, each containing "char", "start_offset" and "end_offset".
881 hypothesis: Hypothesis object that contains `text` field, where each token is a sub-word id
882 after rnnt collapse.
883 decode_ids_to_tokens: A Callable function that accepts a list of integers and maps it to a sub-word.
884 decode_tokens_to_str: A Callable function that accepts a list of integers and maps it to text / str.
885
886 Returns:
887 A list of dictionaries containing the word offsets. Each item contains "word", "start_offset" and
888 "end_offset".
889 """
890 word_offsets = []
891 built_token = []
892 previous_token_index = 0
893 # For every offset token
894 for i, offset in enumerate(offsets):
895 # For every subword token in offset token list (ignoring the RNNT Blank token at the end)
896 for char in offset['char'][:-1]:
897 char = int(char)
898
899 # Compute the sub-word text representation, and the decoded text (stripped of sub-word markers).
900 token = decode_ids_to_tokens([char])[0]
901 token_text = decode_tokens_to_str([char])
902
903 # It is a sub-word token, or contains an identifier at the beginning such as _ or ## that was stripped
904 # after forcing partial text conversion of the token.
905 if token != token_text:
906 # If there are any partially or fully built sub-word token ids, construct to text.
907 # Note: This is "old" subword, that occurs *after* current sub-word has started.
908 if built_token:
909 word_offsets.append(
910 {
911 "word": decode_tokens_to_str(built_token),
912 "start_offset": offsets[previous_token_index]["start_offset"],
913 "end_offset": offsets[i]["start_offset"],
914 }
915 )
916
917 # Prepare list of new sub-word ids
918 built_token.clear()
919 built_token.append(char)
920 previous_token_index = i
921 else:
922 # If the token does not contain any sub-word start mark, then the sub-word has not completed yet
923 # Append to current sub-word list.
924 built_token.append(char)
925
926 # Inject the start offset of the first token to word offsets
927         # This is because we always delay the injection of the first sub-word due to the loop
928 # condition and check whether built token is ready or not.
929 # Therefore without this forced injection, the start_offset appears as off by 1.
930 # This should only be done when these arrays contain more than one element.
931 if offsets and word_offsets:
932 word_offsets[0]["start_offset"] = offsets[0]["start_offset"]
933
934 # If there are any remaining tokens left, inject them all into the final word offset.
935 # The start offset of this token is the start time of the next token to process.
936 # The end offset of this token is the end time of the last token from offsets.
937 # Note that built_token is a flat list; but offsets contains a nested list which
938 # may have different dimensionality.
939 # As such, we can't rely on the length of the list of built_token to index offsets.
940 if built_token:
941 # start from the previous token index as this hasn't been committed to word_offsets yet
942 # if we still have content in built_token
943 start_offset = offsets[previous_token_index]["start_offset"]
944 word_offsets.append(
945 {
946 "word": decode_tokens_to_str(built_token),
947 "start_offset": start_offset,
948 "end_offset": offsets[-1]["end_offset"],
949 }
950 )
951 built_token.clear()
952
953 return word_offsets
954
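As a rough sketch of the normalized-entropy confidence measure described in the confidence config documentation of this module, assuming Gibbs entropy with alpha == 1 and the linear ('lin') entropy normalization (the helper name `entropy_confidence` is hypothetical; the actual measure is selected via `confidence_measure_cfg`):

```python
import math

def entropy_confidence(probs):
    # Shannon/Gibbs entropy of a token probability distribution (alpha == 1 case)
    h = -sum(p * math.log(p) for p in probs if p > 0.0)
    # 'lin' normalization: map [0, log(V)] linearly onto [0, 1] and flip,
    # so a peaked distribution yields confidence near 1 and a uniform one near 0
    return 1.0 - h / math.log(len(probs))
```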
955
956 class RNNTDecoding(AbstractRNNTDecoding):
957 """
958 Used for performing RNN-T auto-regressive decoding of the Decoder+Joint network given the encoder state.
959
960 Args:
961 decoding_cfg: A dict-like object which contains the following key-value pairs.
962 strategy: str value which represents the type of decoding that can occur.
963                 Possible values are:
964                 -   greedy, greedy_batch (for greedy decoding).
965                 -   beam, tsd, alsd, maes (for beam search decoding).
966
967 compute_hypothesis_token_set: A bool flag, which determines whether to compute a list of decoded
968 tokens as well as the decoded string. Default is False in order to avoid double decoding
969 unless required.
970
971             preserve_alignments: Bool flag which preserves the history of alignments generated during
972                 decoding (sample / batched). When set to true, the Hypothesis will contain
973                 the non-null value for `alignments` in it. Here, `alignments` is a List of List of
974 Tuple(Tensor (of length V + 1), Tensor(scalar, label after argmax)).
975
976 In order to obtain this hypothesis, please utilize `rnnt_decoder_predictions_tensor` function
977 with the `return_hypotheses` flag set to True.
978
979 The length of the list corresponds to the Acoustic Length (T).
980 Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more targets from a vocabulary.
981 U is the number of target tokens for the current timestep Ti.
982
983 confidence_cfg: A dict-like object which contains the following key-value pairs related to confidence
984 scores. In order to obtain hypotheses with confidence scores, please utilize
985 `rnnt_decoder_predictions_tensor` function with the `preserve_frame_confidence` flag set to True.
986
987 preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores
988 generated during decoding (sample / batched). When set to true, the Hypothesis will contain
989                 the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of List of floats.
990
991 The length of the list corresponds to the Acoustic Length (T).
992 Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more confidence scores.
993 U is the number of target tokens for the current timestep Ti.
994 preserve_token_confidence: Bool flag which preserves the history of per-token confidence scores
995 generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
996 the non-null value for `token_confidence` in it. Here, `token_confidence` is a List of floats.
997
998 The length of the list corresponds to the number of recognized tokens.
999 preserve_word_confidence: Bool flag which preserves the history of per-word confidence scores
1000 generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
1001 the non-null value for `word_confidence` in it. Here, `word_confidence` is a List of floats.
1002
1003 The length of the list corresponds to the number of recognized words.
1004 exclude_blank: Bool flag indicating that blank token confidence scores are to be excluded
1005 from the `token_confidence`.
1006 aggregation: Which aggregation type to use for collapsing per-token confidence into per-word confidence.
1007 Valid options are `mean`, `min`, `max`, `prod`.
1008 measure_cfg: A dict-like object which contains the measure name and settings to compute per-frame
1009 confidence scores.
1010
1011 name: The measure name (str).
1012 Supported values:
1013 - 'max_prob' for using the maximum token probability as a confidence.
1014 - 'entropy' for using a normalized entropy of a log-likelihood vector.
1015
1016 entropy_type: Which type of entropy to use (str).
1017 Used if confidence_measure_cfg.name is set to `entropy`.
1018 Supported values:
1019                        - 'gibbs' for the (standard) Gibbs entropy. If the alpha (α) is provided,
1020                            the formula is the following: H_α = -sum_i((p^α_i)*log(p^α_i)).
1021                            Note that for this entropy, the alpha should comply with the following inequality:
1022                            (log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= α <= (1+log(V-1))/log(V-1)
1023                            where V is the model vocabulary size.
1024                        - 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
1025                            Tsallis entropy formula is the following: H_α = 1/(α-1)*(1-sum_i(p^α_i)),
1026                            where α is a parameter. When α == 1, it works like the Gibbs entropy.
1027                            More: https://en.wikipedia.org/wiki/Tsallis_entropy
1028                        - 'renyi' for the Rényi entropy.
1029                            Rényi entropy formula is the following: H_α = 1/(1-α)*log_2(sum_i(p^α_i)),
1030                            where α is a parameter. When α == 1, it works like the Gibbs entropy.
1031                            More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
1032
1033                alpha: Power scale for logsoftmax (α for entropies). Here we restrict it to be > 0.
1034                    When the alpha equals one, scaling is not applied to 'max_prob',
1035                    and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
1036
1037 entropy_norm: A mapping of the entropy value to the interval [0,1].
1038 Supported values:
1039 - 'lin' for using the linear mapping.
1040 - 'exp' for using exponential mapping with linear shift.
1041
1042 The config may further contain the following sub-dictionaries:
1043 "greedy":
1044 max_symbols: int, describing the maximum number of target tokens to decode per
1045 timestep during greedy decoding. Setting to larger values allows longer sentences
1046 to be decoded, at the cost of increased execution time.
1047
1048 preserve_frame_confidence: Same as above, overrides above value.
1049
1050 confidence_measure_cfg: Same as above, overrides confidence_cfg.measure_cfg.
1051
1052 "beam":
1053 beam_size: int, defining the beam size for beam search. Must be >= 1.
1054                    If beam_size == 1, will perform cached greedy search. This might give slightly
1055                    different results compared to the greedy search above.
1056
1057 score_norm: optional bool, whether to normalize the returned beam score in the hypotheses.
1058 Set to True by default.
1059
1060 return_best_hypothesis: optional bool, whether to return just the best hypothesis or all of the
1061 hypotheses after beam search has concluded. This flag is set by default.
1062
1063 tsd_max_sym_exp: optional int, determines number of symmetric expansions of the target symbols
1064 per timestep of the acoustic model. Larger values will allow longer sentences to be decoded,
1065 at increased cost to execution time.
1066
1067 alsd_max_target_len: optional int or float, determines the potential maximum target sequence length.
1068 If an integer is provided, it can decode sequences of that particular maximum length.
1069 If a float is provided, it can decode sequences of int(alsd_max_target_len * seq_len),
1070 where seq_len is the length of the acoustic model output (T).
1071
1072 NOTE:
1073 If a float is provided, it can be greater than 1!
1074 By default, a float of 2.0 is used so that a target sequence can be at most twice
1075 as long as the acoustic model output length T.
1076
1077 maes_num_steps: Number of adaptive steps to take. From the paper, 2 steps is generally sufficient,
1078 and can be reduced to 1 to improve decoding speed while sacrificing some accuracy. int > 0.
1079
1080            maes_prefix_alpha: Maximum prefix length in prefix search. Must be an integer, and it is advised
1081                to keep this as 1 in order to reduce the cost of the expensive beam search later. int >= 0.
1082
1083 maes_expansion_beta: Maximum number of prefix expansions allowed, in addition to the beam size.
1084 Effectively, the number of hypothesis = beam_size + maes_expansion_beta. Must be an int >= 0,
1085 and affects the speed of inference since large values will perform large beam search in the next step.
1086
1087 maes_expansion_gamma: Float pruning threshold used in the prune-by-value step when computing the expansions.
1088 The default (2.3) is selected from the paper. It performs a comparison (max_log_prob - gamma <= log_prob[v])
1089 where v is all vocabulary indices in the Vocab set and max_log_prob is the "most" likely token to be
1090 predicted. Gamma therefore provides a margin of additional tokens which can be potential candidates for
1091 expansion apart from the "most likely" candidate.
1092 Lower values will reduce the number of expansions (by increasing pruning-by-value, thereby improving speed
1093 but hurting accuracy). Higher values will increase the number of expansions (by reducing pruning-by-value,
1094 thereby reducing speed but potentially improving accuracy). This is a hyperparameter to be experimentally
1095 tuned on a validation set.
1096
1097 softmax_temperature: Scales the logits of the joint prior to computing log_softmax.
1098
1099 decoder: The Decoder/Prediction network module.
1100 joint: The Joint network module.
1101 vocabulary: The vocabulary (excluding the RNNT blank token) which will be used for decoding.
1102 """
1103
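The prune-by-value comparison described for `maes_expansion_gamma` above can be sketched in a few standalone lines (`prune_by_value` is a hypothetical helper for illustration, not part of this class):

```python
import math


def prune_by_value(log_probs, gamma):
    # keep vocabulary indices v satisfying max_log_prob - gamma <= log_probs[v]
    best = max(log_probs)
    return [v for v, lp in enumerate(log_probs) if best - gamma <= lp]


# with the default gamma of 2.3, only tokens within 2.3 nats of the best survive:
# prune_by_value([log(0.7), log(0.2), log(0.001)], 2.3) -> [0, 1]
```

Lowering `gamma` tightens the threshold and prunes more candidates, which is the speed/accuracy trade-off the docstring describes.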
1104 def __init__(
1105 self, decoding_cfg, decoder, joint, vocabulary,
1106 ):
1107 # we need to ensure blank is the last token in the vocab for the case of RNNT and Multi-blank RNNT.
1108 blank_id = len(vocabulary) + joint.num_extra_outputs
1109
1110 if hasattr(decoding_cfg, 'model_type') and decoding_cfg.model_type == 'tdt':
1111 blank_id = len(vocabulary)
1112
1113 self.labels_map = dict([(i, vocabulary[i]) for i in range(len(vocabulary))])
1114
1115 super(RNNTDecoding, self).__init__(
1116 decoding_cfg=decoding_cfg, decoder=decoder, joint=joint, blank_id=blank_id,
1117 )
1118
1119 if isinstance(self.decoding, beam_decode.BeamRNNTInfer):
1120 self.decoding.set_decoding_type('char')
1121
1122 def _aggregate_token_confidence(self, hypothesis: Hypothesis) -> List[float]:
1123 """
1124 Implemented by subclass in order to aggregate token confidence to a word-level confidence.
1125
1126 Args:
1127 hypothesis: Hypothesis
1128
1129 Returns:
1130 A list of word-level confidence scores.
1131 """
1132 return self._aggregate_token_confidence_chars(hypothesis.words, hypothesis.token_confidence)
1133
1134 def decode_tokens_to_str(self, tokens: List[int]) -> str:
1135 """
1136 Implemented by subclass in order to decode a token list into a string.
1137
1138 Args:
1139 tokens: List of int representing the token ids.
1140
1141 Returns:
1142 A decoded string.
1143 """
1144 hypothesis = ''.join(self.decode_ids_to_tokens(tokens))
1145 return hypothesis
1146
1147 def decode_ids_to_tokens(self, tokens: List[int]) -> List[str]:
1148 """
1149 Implemented by subclass in order to decode a token id list into a token list.
1150 A token list is the string representation of each token id.
1151
1152 Args:
1153 tokens: List of int representing the token ids.
1154
1155 Returns:
1156 A list of decoded tokens.
1157 """
1158 token_list = [self.labels_map[c] for c in tokens if c < self.blank_id - self.num_extra_outputs]
1159 return token_list
1160
1161 def decode_tokens_to_lang(self, tokens: List[int]) -> str:
1162 """
1163 Compute the most likely language ID (LID) string given the tokens.
1164
1165 Args:
1166 tokens: List of int representing the token ids.
1167
1168 Returns:
1169 A decoded LID string.
1170 """
1171 lang = self.tokenizer.ids_to_lang(tokens)
1172 return lang
1173
1174 def decode_ids_to_langs(self, tokens: List[int]) -> List[str]:
1175 """
1176 Decode a token id list into language ID (LID) list.
1177
1178 Args:
1179 tokens: List of int representing the token ids.
1180
1181 Returns:
1182 A list of decoded LIDS.
1183 """
1184 lang_list = self.tokenizer.ids_to_text_and_langs(tokens)
1185 return lang_list
1186
1187
1188 class RNNTWER(Metric):
1189 """
1190 This metric computes numerator and denominator for Overall Word Error Rate (WER) between prediction and reference texts.
1191 When doing distributed training/evaluation the result of res=WER(predictions, targets, target_lengths) calls
1192 will be all-reduced between all workers using SUM operations.
1193 The result contains two numbers, res=[wer_numerator, wer_denominator], and WER=wer_numerator/wer_denominator.
1194
1195 If used with PytorchLightning LightningModule, include wer_numerator and wer_denominators inside validation_step results.
1196 Then aggregate (sum) them at the end of the validation epoch to correctly compute the validation WER.
1197
1198 Example:
1199 def validation_step(self, batch, batch_idx):
1200 ...
1201 wer_num, wer_denom = self.__wer(predictions, transcript, transcript_len)
1202 self.val_outputs = {'val_loss': loss_value, 'val_wer_num': wer_num, 'val_wer_denom': wer_denom}
1203 return self.val_outputs
1204
1205 def on_validation_epoch_end(self):
1206 ...
1207 wer_num = torch.stack([x['val_wer_num'] for x in self.val_outputs]).sum()
1208 wer_denom = torch.stack([x['val_wer_denom'] for x in self.val_outputs]).sum()
1209 tensorboard_logs = {'validation_loss': val_loss_mean, 'validation_avg_wer': wer_num / wer_denom}
1210 self.val_outputs.clear() # free memory
1211 return {'val_loss': val_loss_mean, 'log': tensorboard_logs}
1212
1213 Args:
1214 decoding: RNNTDecoding object that will perform autoregressive decoding of the RNNT model.
1215 batch_dim_index: Index of the batch dimension.
1216 use_cer: Whether to use Character Error Rate instead of Word Error Rate.
1217 log_prediction: Whether to log a single decoded sample per call.
1218
1219 Returns:
1220 res: a tuple of 3 zero-dimensional float32 ``torch.Tensor`` objects: a WER score, a sum of Levenshtein
1221 distances for all prediction-reference pairs, and the total number of words in all references.
1222 """
1223
1224 full_state_update = True
1225
1226 def __init__(
1227 self, decoding: RNNTDecoding, batch_dim_index=0, use_cer=False, log_prediction=True, dist_sync_on_step=False
1228 ):
1229 super(RNNTWER, self).__init__(dist_sync_on_step=dist_sync_on_step)
1230 self.decoding = decoding
1231 self.batch_dim_index = batch_dim_index
1232 self.use_cer = use_cer
1233 self.log_prediction = log_prediction
1234 self.blank_id = self.decoding.blank_id
1235 self.labels_map = self.decoding.labels_map
1236
1237 self.add_state("scores", default=torch.tensor(0), dist_reduce_fx='sum', persistent=False)
1238 self.add_state("words", default=torch.tensor(0), dist_reduce_fx='sum', persistent=False)
1239
1240 def update(
1241 self,
1242 encoder_output: torch.Tensor,
1243 encoded_lengths: torch.Tensor,
1244 targets: torch.Tensor,
1245 target_lengths: torch.Tensor,
1246 ) -> torch.Tensor:
1247 words = 0
1248 scores = 0
1249 references = []
1250 with torch.no_grad():
1251 # prediction_cpu_tensor = tensors[0].long().cpu()
1252 targets_cpu_tensor = targets.long().cpu()
1253 targets_cpu_tensor = move_dimension_to_the_front(targets_cpu_tensor, self.batch_dim_index)
1254 tgt_lenths_cpu_tensor = target_lengths.long().cpu()
1255
1256 # iterate over batch
1257 for ind in range(targets_cpu_tensor.shape[0]):
1258 tgt_len = tgt_lenths_cpu_tensor[ind].item()
1259 target = targets_cpu_tensor[ind][:tgt_len].numpy().tolist()
1260
1261 reference = self.decoding.decode_tokens_to_str(target)
1262 references.append(reference)
1263
1264 hypotheses, _ = self.decoding.rnnt_decoder_predictions_tensor(encoder_output, encoded_lengths)
1265
1266 if self.log_prediction:
1267 logging.info(f"\n")
1268 logging.info(f"reference :{references[0]}")
1269 logging.info(f"predicted :{hypotheses[0]}")
1270
1271 for h, r in zip(hypotheses, references):
1272 if self.use_cer:
1273 h_list = list(h)
1274 r_list = list(r)
1275 else:
1276 h_list = h.split()
1277 r_list = r.split()
1278 words += len(r_list)
1279 # Compute Levenshtein's distance
1280 scores += editdistance.eval(h_list, r_list)
1281
1282 self.scores += torch.tensor(scores, device=self.scores.device, dtype=self.scores.dtype)
1283 self.words += torch.tensor(words, device=self.words.device, dtype=self.words.dtype)
1284 # return torch.tensor([scores, words]).to(predictions.device)
1285
1286 def compute(self):
1287 wer = self.scores.float() / self.words
1288 return wer, self.scores.detach(), self.words.detach()
1289
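The `dist_reduce_fx='sum'` states above mean WER is aggregated across steps (and workers) by summing numerators and denominators, then dividing once. Without torch the idea reduces to the following sketch (`aggregate_wer` is a hypothetical helper, not part of this module):

```python
def aggregate_wer(batches):
    # batches: iterable of (wer_num, wer_denom) pairs collected per validation step;
    # summing before dividing mirrors the dist_reduce_fx='sum' reduction above
    num = sum(n for n, _ in batches)
    denom = sum(d for _, d in batches)
    return num / denom if denom else float('inf')


# two batches with 1/10 and 3/10 word errors -> overall WER 4/20 = 0.2
```

Averaging per-batch WER values instead would weight short batches too heavily, which is why the metric keeps the two sums separate.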
1290
1291 @dataclass
1292 class RNNTDecodingConfig:
1293 model_type: str = "rnnt" # one of "rnnt", "multiblank" or "tdt"
1294 strategy: str = "greedy_batch"
1295
1296 compute_hypothesis_token_set: bool = False
1297
1298 # preserve decoding alignments
1299 preserve_alignments: Optional[bool] = None
1300
1301 # confidence config
1302 confidence_cfg: ConfidenceConfig = ConfidenceConfig()
1303
1304 # RNNT Joint fused batch size
1305 fused_batch_size: Optional[int] = None
1306
1307 # compute RNNT time stamps
1308 compute_timestamps: Optional[bool] = None
1309
1310 # compute language IDs
1311 compute_langs: bool = False
1312
1313 # token representing the word separator
1314 word_seperator: str = " "
1315
1316 # type of timestamps to calculate
1317 rnnt_timestamp_type: str = "all" # can be char, word or all for both
1318
1319 # greedy decoding config
1320 greedy: greedy_decode.GreedyRNNTInferConfig = greedy_decode.GreedyRNNTInferConfig()
1321
1322 # beam decoding config
1323 beam: beam_decode.BeamRNNTInferConfig = beam_decode.BeamRNNTInferConfig(beam_size=4)
1324
1325 # can be used to change temperature for decoding
1326 temperature: float = 1.0
1327
[end of nemo/collections/asr/metrics/rnnt_wer.py]
[start of nemo/collections/asr/metrics/wer.py]
1 # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import re
16 from abc import abstractmethod
17 from dataclasses import dataclass, is_dataclass
18 from typing import Callable, Dict, List, Optional, Tuple, Union
19
20 import editdistance
21 import jiwer
22 import numpy as np
23 import torch
24 from omegaconf import DictConfig, OmegaConf
25 from torchmetrics import Metric
26
27 from nemo.collections.asr.parts.submodules import ctc_beam_decoding, ctc_greedy_decoding
28 from nemo.collections.asr.parts.utils.asr_confidence_utils import ConfidenceConfig, ConfidenceMixin
29 from nemo.collections.asr.parts.utils.rnnt_utils import Hypothesis, NBestHypotheses
30 from nemo.utils import logging, logging_mode
31
32 __all__ = ['word_error_rate', 'word_error_rate_detail', 'WER', 'move_dimension_to_the_front']
33
34
35 def word_error_rate(hypotheses: List[str], references: List[str], use_cer=False) -> float:
36 """
37 Computes Average Word Error rate between two texts represented as
38 corresponding lists of strings.
39
40 Hypotheses and references must have same length.
41
42 Args:
43 hypotheses (list): list of hypotheses
44 references(list) : list of references
45 use_cer (bool): set True to enable cer
46
47 Returns:
48 wer (float): average word error rate
49 """
50 scores = 0
51 words = 0
52 if len(hypotheses) != len(references):
53 raise ValueError(
54 "In word error rate calculation, hypotheses and reference"
55 " lists must have the same number of elements. But I got:"
56 "{0} and {1} correspondingly".format(len(hypotheses), len(references))
57 )
58 for h, r in zip(hypotheses, references):
59 if use_cer:
60 h_list = list(h)
61 r_list = list(r)
62 else:
63 h_list = h.split()
64 r_list = r.split()
65 words += len(r_list)
66 # May deprecate using editdistance in future release for here and rest of codebase
67 # once we confirm jiwer is reliable.
68 scores += editdistance.eval(h_list, r_list)
69 if words != 0:
70 wer = 1.0 * scores / words
71 else:
72 wer = float('inf')
73 return wer
74
75
76 def word_error_rate_detail(
77 hypotheses: List[str], references: List[str], use_cer=False
78 ) -> Tuple[float, int, float, float, float]:
79 """
80 Computes Average Word Error Rate with details (insertion rate, deletion rate, substitution rate)
81 between two texts represented as corresponding lists of strings.
82
83 Hypotheses and references must have same length.
84
85 Args:
86 hypotheses (list): list of hypotheses
87 references(list) : list of references
88 use_cer (bool): set True to enable cer
89
90 Returns:
91 wer (float): average word error rate
92 words (int): Total number of words/characters of the given reference texts
93 ins_rate (float): average insertion error rate
94 del_rate (float): average deletion error rate
95 sub_rate (float): average substitution error rate
96 """
97 scores = 0
98 words = 0
99 ops_count = {'substitutions': 0, 'insertions': 0, 'deletions': 0}
100
101 if len(hypotheses) != len(references):
102 raise ValueError(
103 "In word error rate calculation, hypotheses and reference"
104 " lists must have the same number of elements. But I got:"
105 "{0} and {1} correspondingly".format(len(hypotheses), len(references))
106 )
107
108 for h, r in zip(hypotheses, references):
109 if use_cer:
110 h_list = list(h)
111 r_list = list(r)
112 else:
113 h_list = h.split()
114 r_list = r.split()
115
116 # Work around the fact that jiwer does not allow empty strings
117 if len(r_list) == 0:
118 if len(h_list) != 0:
119 errors = len(h_list)
120 ops_count['insertions'] += errors
121 else:
122 errors = 0
123 else:
124 if use_cer:
125 measures = jiwer.cer(r, h, return_dict=True)
126 else:
127 measures = jiwer.compute_measures(r, h)
128
129 errors = measures['insertions'] + measures['deletions'] + measures['substitutions']
130 ops_count['insertions'] += measures['insertions']
131 ops_count['deletions'] += measures['deletions']
132 ops_count['substitutions'] += measures['substitutions']
133
134 scores += errors
135 words += len(r_list)
136
137 if words != 0:
138 wer = 1.0 * scores / words
139 ins_rate = 1.0 * ops_count['insertions'] / words
140 del_rate = 1.0 * ops_count['deletions'] / words
141 sub_rate = 1.0 * ops_count['substitutions'] / words
142 else:
143 wer, ins_rate, del_rate, sub_rate = float('inf'), float('inf'), float('inf'), float('inf')
144
145 return wer, words, ins_rate, del_rate, sub_rate
146
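The per-operation accounting above can be sketched with a small DP that tracks substitution/insertion/deletion counts alongside the distance; `edit_ops` is a hypothetical stand-in for the jiwer measures consumed above (its tie-breaking may differ from jiwer's):

```python
def edit_ops(ref, hyp):
    # each DP cell is a tuple (distance, substitutions, insertions, deletions) for ref -> hyp
    n = len(hyp)
    row = [(j, 0, j, 0) for j in range(n + 1)]           # empty ref prefix: j insertions
    for i in range(1, len(ref) + 1):
        prev, row = row, [(i, 0, 0, i)]                  # empty hyp prefix: i deletions
        for j in range(1, n + 1):
            diff = ref[i - 1] != hyp[j - 1]
            sub = (prev[j - 1][0] + diff, prev[j - 1][1] + diff, prev[j - 1][2], prev[j - 1][3])
            ins = (row[j - 1][0] + 1, row[j - 1][1], row[j - 1][2] + 1, row[j - 1][3])
            del_ = (prev[j][0] + 1, prev[j][1], prev[j][2], prev[j][3] + 1)
            # lexicographic min: smallest distance first, then fewest substitutions
            row.append(min(sub, ins, del_))
    _, subs, ins, dels = row[n]
    return subs, ins, dels


# edit_ops("a b d".split(), "a b c x".split()) -> (1, 1, 0)
```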
147
148 def word_error_rate_per_utt(hypotheses: List[str], references: List[str], use_cer=False) -> Tuple[List[float], float]:
149 """
150 Computes Word Error Rate per utterance and the average WER
151 between two texts represented as corresponding lists of strings.
152
153 Hypotheses and references must have same length.
154
155 Args:
156 hypotheses (list): list of hypotheses
157 references(list) : list of references
158 use_cer (bool): set True to enable cer
159
160 Returns:
161 wer_per_utt (List[float]): word error rate per utterance
162 avg_wer (float): average word error rate
163 """
164 scores = 0
165 words = 0
166 wer_per_utt = []
167
168 if len(hypotheses) != len(references):
169 raise ValueError(
170 "In word error rate calculation, hypotheses and reference"
171 " lists must have the same number of elements. But I got:"
172 "{0} and {1} correspondingly".format(len(hypotheses), len(references))
173 )
174
175 for h, r in zip(hypotheses, references):
176 if use_cer:
177 h_list = list(h)
178 r_list = list(r)
179 else:
180 h_list = h.split()
181 r_list = r.split()
182
183 # Work around the fact that jiwer does not allow empty strings
184 if len(r_list) == 0:
185 errors = len(h_list)
186 # empty reference: per-utterance WER is infinite if any hypothesis words exist, else zero
187 wer_per_utt.append(float('inf') if len(h_list) != 0 else 0.0)
188 else:
189 if use_cer:
190 measures = jiwer.cer(r, h, return_dict=True)
191 er = measures['cer']
192 else:
193 measures = jiwer.compute_measures(r, h)
194 er = measures['wer']
195
196 errors = measures['insertions'] + measures['deletions'] + measures['substitutions']
197 wer_per_utt.append(er)
198
199 scores += errors
200 words += len(r_list)
201
202 if words != 0:
203 avg_wer = 1.0 * scores / words
204 else:
205 avg_wer = float('inf')
206
207 return wer_per_utt, avg_wer
208
209
210 def move_dimension_to_the_front(tensor, dim_index):
211 all_dims = list(range(tensor.ndim))
212 return tensor.permute(*([dim_index] + all_dims[:dim_index] + all_dims[dim_index + 1 :]))
213
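`move_dimension_to_the_front` builds a permutation that brings one axis to position 0 while keeping the relative order of the rest; the index arithmetic alone can be exercised without torch (`front_permutation` is a hypothetical helper for illustration):

```python
def front_permutation(ndim, dim_index):
    # the axis order that moves `dim_index` to the front; all other axes keep their relative order
    all_dims = list(range(ndim))
    return [dim_index] + all_dims[:dim_index] + all_dims[dim_index + 1:]


# front_permutation(3, 1) -> [1, 0, 2], so a [T, B, V] tensor is permuted to [B, T, V]
```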
214
215 class AbstractCTCDecoding(ConfidenceMixin):
216 """
217 Used for performing CTC auto-regressive / non-auto-regressive decoding of the logprobs.
218
219 Args:
220 decoding_cfg: A dict-like object which contains the following key-value pairs.
221 strategy: str value which represents the type of decoding that can occur.
222 Possible values are :
223 - greedy (for greedy decoding).
224 - beam (for DeepSpeed KenLM based decoding).
225
226 compute_timestamps: A bool flag, which determines whether to compute the character/subword, or
228 word based timestamps, mapping the output log-probabilities to discrete intervals of timestamps.
228 The timestamps will be available in the returned Hypothesis.timestep as a dictionary.
229
230 ctc_timestamp_type: A str value, which represents the types of timestamps that should be calculated.
231 Can take the following values - "char" for character/subword time stamps, "word" for word level
232 time stamps and "all" (default), for both character level and word level time stamps.
233
234 word_seperator: Str token representing the separator between words.
235
236 preserve_alignments: Bool flag which preserves the history of logprobs generated during
237 decoding (sample / batched). When set to true, the Hypothesis will contain
238 the non-null value for `logprobs` in it. Here, `logprobs` is a torch.Tensor.
239
240 confidence_cfg: A dict-like object which contains the following key-value pairs related to confidence
241 scores. In order to obtain hypotheses with confidence scores, please utilize
242 `ctc_decoder_predictions_tensor` function with the `preserve_frame_confidence` flag set to True.
243
244 preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores
245 generated during decoding. When set to true, the Hypothesis will contain
246 the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of floats.
247 preserve_token_confidence: Bool flag which preserves the history of per-token confidence scores
248 generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
249 the non-null value for `token_confidence` in it. Here, `token_confidence` is a List of floats.
250
251 The length of the list corresponds to the number of recognized tokens.
252 preserve_word_confidence: Bool flag which preserves the history of per-word confidence scores
253 generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
254 the non-null value for `word_confidence` in it. Here, `word_confidence` is a List of floats.
255
256 The length of the list corresponds to the number of recognized words.
257 exclude_blank: Bool flag indicating that blank token confidence scores are to be excluded
258 from the `token_confidence`.
259 aggregation: Which aggregation type to use for collapsing per-token confidence into per-word confidence.
260 Valid options are `mean`, `min`, `max`, `prod`.
261 measure_cfg: A dict-like object which contains the measure name and settings to compute per-frame
262 confidence scores.
263
264 name: The measure name (str).
265 Supported values:
266 - 'max_prob' for using the maximum token probability as a confidence.
267 - 'entropy' for using a normalized entropy of a log-likelihood vector.
268
269 entropy_type: Which type of entropy to use (str).
270 Used if confidence_measure_cfg.name is set to `entropy`.
271 Supported values:
272 - 'gibbs' for the (standard) Gibbs entropy. If the alpha (α) is provided,
273 the formula is the following: H_α = -sum_i((p^α_i)*log(p^α_i)).
274 Note that for this entropy, the alpha should comply with the following inequality:
275 (log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= α <= (1+log(V-1))/log(V-1)
276 where V is the model vocabulary size.
277 - 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
278 Tsallis entropy formula is the following: H_α = 1/(α-1)*(1-sum_i(p^α_i)),
279 where α is a parameter. When α == 1, it works like the Gibbs entropy.
280 More: https://en.wikipedia.org/wiki/Tsallis_entropy
281 - 'renyi' for the Rényi entropy.
282 Rényi entropy formula is the following: H_α = 1/(1-α)*log_2(sum_i(p^α_i)),
283 where α is a parameter. When α == 1, it works like the Gibbs entropy.
284 More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
285
286 alpha: Power scale for logsoftmax (α for entropies). Here we restrict it to be > 0.
287 When the alpha equals one, scaling is not applied to 'max_prob',
288 and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
289
290 entropy_norm: A mapping of the entropy value to the interval [0,1].
291 Supported values:
292 - 'lin' for using the linear mapping.
293 - 'exp' for using exponential mapping with linear shift.
294
295 batch_dim_index: Index of the batch dimension of ``targets`` and ``predictions`` parameters of
296 ``ctc_decoder_predictions_tensor`` methods. Can be either 0 or 1.
297
298 The config may further contain the following sub-dictionaries:
299 "greedy":
300 preserve_alignments: Same as above, overrides above value.
301 compute_timestamps: Same as above, overrides above value.
302 preserve_frame_confidence: Same as above, overrides above value.
303 confidence_measure_cfg: Same as above, overrides confidence_cfg.measure_cfg.
304
305 "beam":
306 beam_size: int, defining the beam size for beam search. Must be >= 1.
307 If beam_size == 1, will perform cached greedy search. This might produce slightly different
308 results compared to the greedy search above.
309
310 return_best_hypothesis: optional bool, whether to return just the best hypothesis or all of the
311 hypotheses after beam search has concluded. This flag is set by default.
312
313 beam_alpha: float, the strength of the Language model on the final score of a token.
314 final_score = acoustic_score + beam_alpha * lm_score + beam_beta * seq_length.
315
316 beam_beta: float, the strength of the sequence length penalty on the final score of a token.
317 final_score = acoustic_score + beam_alpha * lm_score + beam_beta * seq_length.
318
319 kenlm_path: str, path to a KenLM ARPA or .binary file (depending on the strategy chosen).
320 If the path is invalid (file is not found at path), will raise a deferred error at the moment
321 of calculation of beam search, so that users may update / change the decoding strategy
322 to point to the correct file.
323
324 blank_id: The id of the CTC blank token.
325 """
326
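As a concrete illustration of the `entropy` measure with the linear (`lin`) normalization described above, here is a sketch for the α == 1 case, where the maximum Gibbs entropy of a V-way distribution is log(V) (`entropy_confidence` is a hypothetical helper, not this class's API):

```python
import math


def entropy_confidence(probs):
    # Shannon/Gibbs entropy (alpha == 1), linearly mapped to [0, 1]:
    # 1 for a one-hot distribution, ~0 for a uniform one
    v = len(probs)
    h = -sum(p * math.log(p) for p in probs if p > 0)
    return 1.0 - h / math.log(v)


# entropy_confidence([1.0, 0.0, 0.0, 0.0]) -> 1.0 (fully confident frame)
```

Other entropy types and alpha values change the formula for `h` and its normalizer, but the same "high entropy means low confidence" mapping applies.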
327 def __init__(self, decoding_cfg, blank_id: int):
328 super().__init__()
329
330 # Convert dataclass to config
331 if is_dataclass(decoding_cfg):
332 decoding_cfg = OmegaConf.structured(decoding_cfg)
333
334 if not isinstance(decoding_cfg, DictConfig):
335 decoding_cfg = OmegaConf.create(decoding_cfg)
336
337 OmegaConf.set_struct(decoding_cfg, False)
338
339 # update minimal config
340 minimal_cfg = ['greedy']
341 for item in minimal_cfg:
342 if item not in decoding_cfg:
343 decoding_cfg[item] = OmegaConf.create({})
344
345 self.cfg = decoding_cfg
346 self.blank_id = blank_id
347 self.preserve_alignments = self.cfg.get('preserve_alignments', None)
348 self.compute_timestamps = self.cfg.get('compute_timestamps', None)
349 self.batch_dim_index = self.cfg.get('batch_dim_index', 0)
350 self.word_seperator = self.cfg.get('word_seperator', ' ')
351
352 possible_strategies = ['greedy', 'beam', 'pyctcdecode', 'flashlight']
353 if self.cfg.strategy not in possible_strategies:
354 raise ValueError(f"Decoding strategy must be one of {possible_strategies}. Given {self.cfg.strategy}")
355
356 # Update preserve alignments
357 if self.preserve_alignments is None:
358 if self.cfg.strategy in ['greedy']:
359 self.preserve_alignments = self.cfg.greedy.get('preserve_alignments', False)
360 else:
361 self.preserve_alignments = self.cfg.beam.get('preserve_alignments', False)
362
363 # Update compute timestamps
364 if self.compute_timestamps is None:
365 if self.cfg.strategy in ['greedy']:
366 self.compute_timestamps = self.cfg.greedy.get('compute_timestamps', False)
367 elif self.cfg.strategy in ['beam']:
368 self.compute_timestamps = self.cfg.beam.get('compute_timestamps', False)
369
370 # initialize confidence-related fields
371 self._init_confidence(self.cfg.get('confidence_cfg', None))
372
373 # Confidence estimation is not implemented for strategies other than `greedy`
374 if (
375 not self.preserve_frame_confidence
376 and self.cfg.strategy != 'greedy'
377 and self.cfg.beam.get('preserve_frame_confidence', False)
378 ):
379 raise NotImplementedError(f"Confidence calculation is not supported for strategy `{self.cfg.strategy}`")
380
381 # we need timestamps to extract non-blank per-frame confidence
382 if self.compute_timestamps is not None:
383 self.compute_timestamps |= self.preserve_frame_confidence
384
385 if self.cfg.strategy == 'greedy':
386
387 self.decoding = ctc_greedy_decoding.GreedyCTCInfer(
388 blank_id=self.blank_id,
389 preserve_alignments=self.preserve_alignments,
390 compute_timestamps=self.compute_timestamps,
391 preserve_frame_confidence=self.preserve_frame_confidence,
392 confidence_measure_cfg=self.confidence_measure_cfg,
393 )
394
395 elif self.cfg.strategy == 'beam':
396
397 self.decoding = ctc_beam_decoding.BeamCTCInfer(
398 blank_id=blank_id,
399 beam_size=self.cfg.beam.get('beam_size', 1),
400 search_type='default',
401 return_best_hypothesis=self.cfg.beam.get('return_best_hypothesis', True),
402 preserve_alignments=self.preserve_alignments,
403 compute_timestamps=self.compute_timestamps,
404 beam_alpha=self.cfg.beam.get('beam_alpha', 1.0),
405 beam_beta=self.cfg.beam.get('beam_beta', 0.0),
406 kenlm_path=self.cfg.beam.get('kenlm_path', None),
407 )
408
409 self.decoding.override_fold_consecutive_value = False
410
411 elif self.cfg.strategy == 'pyctcdecode':
412
413 self.decoding = ctc_beam_decoding.BeamCTCInfer(
414 blank_id=blank_id,
415 beam_size=self.cfg.beam.get('beam_size', 1),
416 search_type='pyctcdecode',
417 return_best_hypothesis=self.cfg.beam.get('return_best_hypothesis', True),
418 preserve_alignments=self.preserve_alignments,
419 compute_timestamps=self.compute_timestamps,
420 beam_alpha=self.cfg.beam.get('beam_alpha', 1.0),
421 beam_beta=self.cfg.beam.get('beam_beta', 0.0),
422 kenlm_path=self.cfg.beam.get('kenlm_path', None),
423 pyctcdecode_cfg=self.cfg.beam.get('pyctcdecode_cfg', None),
424 )
425
426 self.decoding.override_fold_consecutive_value = False
427
428 elif self.cfg.strategy == 'flashlight':
429
430 self.decoding = ctc_beam_decoding.BeamCTCInfer(
431 blank_id=blank_id,
432 beam_size=self.cfg.beam.get('beam_size', 1),
433 search_type='flashlight',
434 return_best_hypothesis=self.cfg.beam.get('return_best_hypothesis', True),
435 preserve_alignments=self.preserve_alignments,
436 compute_timestamps=self.compute_timestamps,
437 beam_alpha=self.cfg.beam.get('beam_alpha', 1.0),
438 beam_beta=self.cfg.beam.get('beam_beta', 0.0),
439 kenlm_path=self.cfg.beam.get('kenlm_path', None),
440 flashlight_cfg=self.cfg.beam.get('flashlight_cfg', None),
441 )
442
443 self.decoding.override_fold_consecutive_value = False
444
445 else:
446 raise ValueError(
447 f"Incorrect decoding strategy supplied. Must be one of {possible_strategies}\n"
448 f"but was provided {self.cfg.strategy}"
449 )
450
451 def ctc_decoder_predictions_tensor(
452 self,
453 decoder_outputs: torch.Tensor,
454 decoder_lengths: torch.Tensor = None,
455 fold_consecutive: bool = True,
456 return_hypotheses: bool = False,
457 ) -> Tuple[List[str], Optional[List[List[str]]], Optional[Union[Hypothesis, NBestHypotheses]]]:
458 """
459 Decodes a sequence of labels to words
460
461 Args:
462 decoder_outputs: A torch.Tensor of shape [Batch, Time, {Vocabulary}] (if ``batch_dim_index == 0``) or [Time, Batch]
463 (if ``batch_dim_index == 1``), containing either integer label indices or log-probabilities
464 over the label set.
465 decoder_lengths: Optional tensor of length `Batch` which contains the integer lengths
466 of the sequence in the padded `predictions` tensor.
467 fold_consecutive: Bool, determine whether to perform "ctc collapse", folding consecutive tokens
468 into a single token.
469 return_hypotheses: Bool flag whether to return just the decoding predictions of the model
470 or a Hypothesis object that holds information such as the decoded `text`,
471 the `alignment` emitted by the CTC Model, and the `length` of the sequence (if available).
472 May also contain the log-probabilities of the decoder (if this method is called via
473 transcribe())
474
475 Returns:
476 Either a list of str which represent the CTC decoded strings per sample,
477 or a list of Hypothesis objects containing additional information.
478 """
479
480 if isinstance(decoder_outputs, torch.Tensor):
481 decoder_outputs = move_dimension_to_the_front(decoder_outputs, self.batch_dim_index)
482
483 if (
484 hasattr(self.decoding, 'override_fold_consecutive_value')
485 and self.decoding.override_fold_consecutive_value is not None
486 ):
487 logging.info(
488 f"Beam search requires that consecutive ctc tokens are not folded. \n"
489 f"Overriding provided value of `fold_consecutive` = {fold_consecutive} to "
490 f"{self.decoding.override_fold_consecutive_value}",
491 mode=logging_mode.ONCE,
492 )
493 fold_consecutive = self.decoding.override_fold_consecutive_value
494
495 with torch.inference_mode():
496 # Resolve the forward step of the decoding strategy
497 hypotheses_list = self.decoding(
498 decoder_output=decoder_outputs, decoder_lengths=decoder_lengths
499 ) # type: List[List[Hypothesis]]
500
501 # extract the hypotheses
502 hypotheses_list = hypotheses_list[0] # type: List[Hypothesis]
503
504 if isinstance(hypotheses_list[0], NBestHypotheses):
505 hypotheses = []
506 all_hypotheses = []
507
508 for nbest_hyp in hypotheses_list: # type: NBestHypotheses
509 n_hyps = nbest_hyp.n_best_hypotheses # Extract all hypotheses for this sample
510 decoded_hyps = self.decode_hypothesis(
511 n_hyps, fold_consecutive
512 ) # type: List[Union[Hypothesis, NBestHypotheses]]
513
514 # If computing timestamps
515 if self.compute_timestamps is True:
516 timestamp_type = self.cfg.get('ctc_timestamp_type', 'all')
517 for hyp_idx in range(len(decoded_hyps)):
518 decoded_hyps[hyp_idx] = self.compute_ctc_timestamps(decoded_hyps[hyp_idx], timestamp_type)
519
520 hypotheses.append(decoded_hyps[0]) # best hypothesis
521 all_hypotheses.append(decoded_hyps)
522
523 if return_hypotheses:
524 return hypotheses, all_hypotheses
525
526 best_hyp_text = [h.text for h in hypotheses]
527 all_hyp_text = [h.text for hh in all_hypotheses for h in hh]
528 return best_hyp_text, all_hyp_text
529
530 else:
531 hypotheses = self.decode_hypothesis(
532 hypotheses_list, fold_consecutive
533 ) # type: List[Union[Hypothesis, NBestHypotheses]]
534
535 # If computing timestamps
536 if self.compute_timestamps is True:
537 # greedy decoding, can get high-level confidence scores
538 if return_hypotheses and (self.preserve_word_confidence or self.preserve_token_confidence):
539 hypotheses = self.compute_confidence(hypotheses)
540 else:
541 # remove unused token_repetitions from Hypothesis.text
542 for hyp in hypotheses:
543 hyp.text = hyp.text[:2]
544 timestamp_type = self.cfg.get('ctc_timestamp_type', 'all')
545 for hyp_idx in range(len(hypotheses)):
546 hypotheses[hyp_idx] = self.compute_ctc_timestamps(hypotheses[hyp_idx], timestamp_type)
547
548 if return_hypotheses:
549 return hypotheses, None
550
551 best_hyp_text = [h.text for h in hypotheses]
552 return best_hyp_text, None
553
554 def decode_hypothesis(
555 self, hypotheses_list: List[Hypothesis], fold_consecutive: bool
556 ) -> List[Union[Hypothesis, NBestHypotheses]]:
557 """
558 Decode a list of hypotheses into a list of strings.
559
560 Args:
561 hypotheses_list: List of Hypothesis.
562 fold_consecutive: Whether to collapse the ctc blank tokens or not.
563
564 Returns:
565 A list of strings.
566 """
567 for ind in range(len(hypotheses_list)):
568 # Extract the integer encoded hypothesis
569 hyp = hypotheses_list[ind]
570 prediction = hyp.y_sequence
571 predictions_len = hyp.length if hyp.length > 0 else None
572
573 if fold_consecutive:
574                 if not isinstance(prediction, list):
575 prediction = prediction.numpy().tolist()
576
577 if predictions_len is not None:
578 prediction = prediction[:predictions_len]
579
580 # CTC decoding procedure
581 decoded_prediction = []
582 token_lengths = [] # preserve token lengths
583 token_repetitions = [] # preserve number of repetitions per token
584
585 previous = self.blank_id
586 last_length = 0
587 last_repetition = 1
588
589 for pidx, p in enumerate(prediction):
590 if (p != previous or previous == self.blank_id) and p != self.blank_id:
591 decoded_prediction.append(p)
592
593 token_lengths.append(pidx - last_length)
594 last_length = pidx
595 token_repetitions.append(last_repetition)
596 last_repetition = 1
597
598 if p == previous and previous != self.blank_id:
599 last_repetition += 1
600
601 previous = p
602
603 if len(token_repetitions) > 0:
604 token_repetitions = token_repetitions[1:] + [last_repetition]
605
606 else:
607 if predictions_len is not None:
608 prediction = prediction[:predictions_len]
609 decoded_prediction = prediction[prediction != self.blank_id].tolist()
610 token_lengths = [1] * len(decoded_prediction) # preserve number of repetitions per token
611 token_repetitions = [1] * len(decoded_prediction) # preserve number of repetitions per token
612
613             # De-tokenize the integer tokens (unless timestamps are being computed)
614 if self.compute_timestamps is True:
615 # keep the original predictions, wrap with the number of repetitions per token
616 # this is done so that `ctc_decoder_predictions_tensor()` can process this hypothesis
617 # in order to compute exact time stamps.
618 hypothesis = (decoded_prediction, token_lengths, token_repetitions)
619 else:
620 hypothesis = self.decode_tokens_to_str(decoded_prediction)
621
622 # TODO: remove
623             # collapse leading spaces before . , ? for PC models (timestamp branches keep a tuple here, so guard for str)
624             hypothesis = re.sub(r'(\s+)([\.\,\?])', r'\2', hypothesis) if isinstance(hypothesis, str) else hypothesis
625
626 # Preserve this wrapped hypothesis or decoded text tokens.
627 hypotheses_list[ind].text = hypothesis
628
629 return hypotheses_list
630
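The collapse loop in `decode_hypothesis` above can be illustrated in isolation. The following is a minimal, self-contained sketch (names are illustrative, not NeMo API) of the same CTC collapse bookkeeping: it drops blanks and repeated frames while recording each surviving token's frame-length span and repetition count.

```python
def ctc_collapse(prediction, blank_id):
    """Collapse a frame-level CTC prediction, mirroring the loop in decode_hypothesis."""
    decoded, token_lengths, token_repetitions = [], [], []
    previous, last_length, last_repetition = blank_id, 0, 1
    for pidx, p in enumerate(prediction):
        # emit a token when it differs from the previous frame (or follows a blank)
        if (p != previous or previous == blank_id) and p != blank_id:
            decoded.append(p)
            token_lengths.append(pidx - last_length)
            last_length = pidx
            token_repetitions.append(last_repetition)
            last_repetition = 1
        # count consecutive repeats of the same non-blank token
        if p == previous and previous != blank_id:
            last_repetition += 1
        previous = p
    # repetition counts are recorded one token late; rotate them into place
    if token_repetitions:
        token_repetitions = token_repetitions[1:] + [last_repetition]
    return decoded, token_lengths, token_repetitions
```

With blank id 3, the frame sequence `[3, 1, 1, 3, 2, 2, 3]` collapses to tokens `[1, 2]`, each repeated twice.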
631 def compute_confidence(self, hypotheses_list: List[Hypothesis]) -> List[Hypothesis]:
632 """
633 Computes high-level (per-token and/or per-word) confidence scores for a list of hypotheses.
634 Assumes that `frame_confidence` is present in the hypotheses.
635
636 Args:
637 hypotheses_list: List of Hypothesis.
638
639 Returns:
640 A list of hypotheses with high-level confidence scores.
641 """
642 for hyp in hypotheses_list:
643 if not isinstance(hyp.text, tuple) or len(hyp.text) != 3:
644 # the method must have been called in the wrong place
645 raise ValueError(
646 """Wrong format of the `text` attribute of a hypothesis.\n
647                     Expected: (decoded_prediction, token_lengths, token_repetitions)\n
648 The method invocation is expected between .decode_hypothesis() and .compute_ctc_timestamps()"""
649 )
650 token_repetitions = hyp.text[2]
651 hyp.text = hyp.text[:2]
652 token_confidence = []
653 if self.exclude_blank_from_confidence:
654 non_blank_frame_confidence = hyp.non_blank_frame_confidence
655 i = 0
656 for tr in token_repetitions:
657 # token repetition can be zero
658 j = i + tr
659 token_confidence.append(self._aggregate_confidence(non_blank_frame_confidence[i:j]))
660 i = j
661 else:
662 # <blank> tokens are considered to belong to the last non-blank token, if any.
663 token_lengths = hyp.text[1]
664 if len(token_lengths) > 0:
665 ts = token_lengths[0]
666 for tl in token_lengths[1:] + [len(hyp.frame_confidence)]:
667 token_confidence.append(self._aggregate_confidence(hyp.frame_confidence[ts : ts + tl]))
668 ts += tl
669 hyp.token_confidence = token_confidence
670 if self.preserve_word_confidence:
671 for hyp in hypotheses_list:
672 hyp.word_confidence = self._aggregate_token_confidence(hyp)
673 return hypotheses_list
674
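The per-token loop in the `exclude_blank_from_confidence` branch of `compute_confidence` can be sketched standalone. Here a plain mean stands in for the configurable `_aggregate_confidence` function (an assumption for illustration only):

```python
def aggregate_token_confidence(non_blank_frame_confidence, token_repetitions):
    """Mean-pool per-frame confidence into per-token confidence.

    Mirrors the exclude_blank_from_confidence branch of compute_confidence;
    the mean is a stand-in for the configurable aggregation function.
    """
    token_confidence = []
    i = 0
    for tr in token_repetitions:
        j = i + tr  # token repetition can be zero, yielding an empty slice
        chunk = non_blank_frame_confidence[i:j]
        token_confidence.append(sum(chunk) / len(chunk) if chunk else 0.0)
        i = j
    return token_confidence
```

Each token consumes as many frame scores as it has repetitions, so the output list has one score per recognized token.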
675 @abstractmethod
676 def decode_tokens_to_str(self, tokens: List[int]) -> str:
677 """
678         Implemented by subclass in order to decode a token id list into a string.
679
680 Args:
681 tokens: List of int representing the token ids.
682
683 Returns:
684 A decoded string.
685 """
686 raise NotImplementedError()
687
688 @abstractmethod
689 def decode_ids_to_tokens(self, tokens: List[int]) -> List[str]:
690 """
691 Implemented by subclass in order to decode a token id list into a token list.
692 A token list is the string representation of each token id.
693
694 Args:
695 tokens: List of int representing the token ids.
696
697 Returns:
698 A list of decoded tokens.
699 """
700 raise NotImplementedError()
701
702 def compute_ctc_timestamps(self, hypothesis: Hypothesis, timestamp_type: str = "all"):
703 """
704 Method to compute time stamps at char/subword, and word level given some hypothesis.
705         Requires the input hypothesis to contain a `text` field that is a tuple. The tuple contains -
706         the ctc collapsed integer ids, and the length (in frames) of each token.
707
708 Args:
709 hypothesis: A Hypothesis object, with a wrapped `text` field.
710             The `text` field must contain a tuple with two values -
711                 The ctc collapsed integer ids
712                 A list of integers representing the length (number of frames) of each token.
713 timestamp_type: A str value that represents the type of time stamp calculated.
714 Can be one of "char", "word" or "all"
715
716 Returns:
717 A Hypothesis object with a modified `timestep` value, which is now a dictionary containing
718 the time stamp information.
719 """
720 assert timestamp_type in ['char', 'word', 'all']
721
722 # Unpack the temporary storage, and set the decoded predictions
723         decoded_prediction, token_lengths = hypothesis.text[:2]  # tolerate a trailing token_repetitions entry
724 hypothesis.text = decoded_prediction
725
726 # Retrieve offsets
727 char_offsets = word_offsets = None
728 char_offsets = self._compute_offsets(hypothesis, token_lengths, self.blank_id)
729
730 # Assert number of offsets and hypothesis tokens are 1:1 match.
731 if len(char_offsets) != len(hypothesis.text):
732 raise ValueError(
733 f"`char_offsets`: {char_offsets} and `processed_tokens`: {hypothesis.text}"
734 " have to be of the same length, but are: "
735 f"`len(offsets)`: {len(char_offsets)} and `len(processed_tokens)`:"
736 f" {len(hypothesis.text)}"
737 )
738
739 # Correctly process the token ids to chars/subwords.
740 for i, char in enumerate(hypothesis.text):
741 char_offsets[i]["char"] = self.decode_tokens_to_str([char])
742
743 # detect char vs subword models
744 lens = [len(list(v["char"])) > 1 for v in char_offsets]
745 if any(lens):
746 text_type = 'subword'
747 else:
748 text_type = 'char'
749
750 # retrieve word offsets from character offsets
751 word_offsets = None
752 if timestamp_type in ['word', 'all']:
753 if text_type == 'char':
754 word_offsets = self._get_word_offsets_chars(char_offsets, word_delimiter_char=self.word_seperator)
755 else:
756 word_offsets = self._get_word_offsets_subwords_sentencepiece(
757 char_offsets,
758 hypothesis,
759 decode_ids_to_tokens=self.decode_ids_to_tokens,
760 decode_tokens_to_str=self.decode_tokens_to_str,
761 )
762
763 # attach results
764 if len(hypothesis.timestep) > 0:
765 timestep_info = hypothesis.timestep
766 else:
767 timestep_info = []
768
769 # Setup defaults
770 hypothesis.timestep = {"timestep": timestep_info}
771
772 # Add char / subword time stamps
773 if char_offsets is not None and timestamp_type in ['char', 'all']:
774 hypothesis.timestep['char'] = char_offsets
775
776 # Add word time stamps
777 if word_offsets is not None and timestamp_type in ['word', 'all']:
778 hypothesis.timestep['word'] = word_offsets
779
780 # Convert the token indices to text
781 hypothesis.text = self.decode_tokens_to_str(hypothesis.text)
782
783 return hypothesis
784
785 @staticmethod
786 def _compute_offsets(
787 hypothesis: Hypothesis, token_lengths: List[int], ctc_token: int
788 ) -> List[Dict[str, Union[str, int]]]:
789 """
790         Utility method that calculates the individual time indices where a token starts and ends.
791
792 Args:
793 hypothesis: A Hypothesis object that contains `text` field that holds the character / subword token
794 emitted at every time step after ctc collapse.
795 token_lengths: A list of ints representing the lengths of each emitted token.
796 ctc_token: The integer of the ctc blank token used during ctc collapse.
797
798         Returns:
799             A list of dicts, one per non-blank token, each with "char", "start_offset" and "end_offset" keys.
800         """
801 start_index = 0
802
803 # If the exact timestep information is available, utilize the 1st non-ctc blank token timestep
804 # as the start index.
805 if hypothesis.timestep is not None and len(hypothesis.timestep) > 0:
806 start_index = max(0, hypothesis.timestep[0] - 1)
807
808 # Construct the start and end indices brackets
809 end_indices = np.asarray(token_lengths).cumsum()
810 start_indices = np.concatenate(([start_index], end_indices[:-1]))
811
812 # Merge the results per token into a list of dictionaries
813 offsets = [
814 {"char": t, "start_offset": s, "end_offset": e}
815 for t, s, e in zip(hypothesis.text, start_indices, end_indices)
816 ]
817
818 # Filter out CTC token
819         offsets = list(filter(lambda offset: offset["char"] != ctc_token, offsets))
820 return offsets
821
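The bracketing in `_compute_offsets` is a cumulative sum over token lengths: the end index of each token is the running total, and each start index is the previous end. A stdlib-only sketch (illustrative names; the blank filtering and timestep-based start index are omitted for brevity):

```python
from itertools import accumulate

def compute_offsets(tokens, token_lengths, start_index=0):
    """Bracket each token with [start_offset, end_offset) frame indices.

    Stdlib stand-in for the numpy cumsum logic in _compute_offsets.
    """
    end_indices = list(accumulate(token_lengths))
    start_indices = [start_index] + end_indices[:-1]
    return [
        {"char": t, "start_offset": s, "end_offset": e}
        for t, s, e in zip(tokens, start_indices, end_indices)
    ]
```

For tokens `[1, 2]` with lengths `[1, 3]`, token 1 occupies frames [0, 1) and token 2 occupies frames [1, 4).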
822 @staticmethod
823 def _get_word_offsets_chars(
824         offsets: List[Dict[str, Union[str, float]]], word_delimiter_char: str = " "
825     ) -> List[Dict[str, Union[str, float]]]:
826 """
827 Utility method which constructs word time stamps out of character time stamps.
828
829 References:
830 This code is a port of the Hugging Face code for word time stamp construction.
831
832 Args:
833 offsets: A list of dictionaries, each containing "char", "start_offset" and "end_offset".
834 word_delimiter_char: Character token that represents the word delimiter. By default, " ".
835
836 Returns:
837 A list of dictionaries containing the word offsets. Each item contains "word", "start_offset" and
838 "end_offset".
839 """
840 word_offsets = []
841
842 last_state = "SPACE"
843 word = ""
844 start_offset = 0
845 end_offset = 0
846 for i, offset in enumerate(offsets):
847 char = offset["char"]
848 state = "SPACE" if char == word_delimiter_char else "WORD"
849
850 if state == last_state:
851 # If we are in the same state as before, we simply repeat what we've done before
852 end_offset = offset["end_offset"]
853 word += char
854 else:
855 # Switching state
856 if state == "SPACE":
857 # Finishing a word
858 word_offsets.append({"word": word, "start_offset": start_offset, "end_offset": end_offset})
859 else:
860 # Starting a new word
861 start_offset = offset["start_offset"]
862 end_offset = offset["end_offset"]
863 word = char
864
865 last_state = state
866 if last_state == "WORD":
867 word_offsets.append({"word": word, "start_offset": start_offset, "end_offset": end_offset})
868
869 return word_offsets
870
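The SPACE/WORD state machine in `_get_word_offsets_chars` can be demonstrated with a small self-contained replica (illustrative function name, same transition logic):

```python
def word_offsets_from_chars(offsets, word_delimiter_char=" "):
    """Group character offsets into word offsets with a SPACE/WORD state machine,
    mirroring _get_word_offsets_chars."""
    word_offsets = []
    last_state, word, start_offset, end_offset = "SPACE", "", 0, 0
    for offset in offsets:
        state = "SPACE" if offset["char"] == word_delimiter_char else "WORD"
        if state == last_state:
            # same state: extend the current run
            end_offset = offset["end_offset"]
            word += offset["char"]
        elif state == "SPACE":
            # WORD -> SPACE transition: a word just finished
            word_offsets.append({"word": word, "start_offset": start_offset, "end_offset": end_offset})
        else:
            # SPACE -> WORD transition: a new word starts
            start_offset, end_offset, word = offset["start_offset"], offset["end_offset"], offset["char"]
        last_state = state
    if last_state == "WORD":
        word_offsets.append({"word": word, "start_offset": start_offset, "end_offset": end_offset})
    return word_offsets
```

Feeding it per-character offsets for the string "hi yo" yields two word entries whose offsets span the characters of each word.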
871 @staticmethod
872 def _get_word_offsets_subwords_sentencepiece(
873         offsets: List[Dict[str, Union[str, float]]],
874         hypothesis: Hypothesis,
875         decode_ids_to_tokens: Callable[[List[int]], List[str]],
876         decode_tokens_to_str: Callable[[List[int]], str],
877     ) -> List[Dict[str, Union[str, float]]]:
878 """
879 Utility method which constructs word time stamps out of sub-word time stamps.
880
881 **Note**: Only supports Sentencepiece based tokenizers !
882
883 Args:
884 offsets: A list of dictionaries, each containing "char", "start_offset" and "end_offset".
885 hypothesis: Hypothesis object that contains `text` field, where each token is a sub-word id
886 after ctc collapse.
887 decode_ids_to_tokens: A Callable function that accepts a list of integers and maps it to a sub-word.
888 decode_tokens_to_str: A Callable function that accepts a list of integers and maps it to text / str.
889
890 Returns:
891 A list of dictionaries containing the word offsets. Each item contains "word", "start_offset" and
892 "end_offset".
893 """
894 word_offsets = []
895 built_token = []
896 previous_token_index = 0
897 # For every collapsed sub-word token
898 for i, char in enumerate(hypothesis.text):
899 # Compute the sub-word text representation, and the decoded text (stripped of sub-word markers).
900 token = decode_ids_to_tokens([char])[0]
901 token_text = decode_tokens_to_str([char])
902
903 # It is a sub-word token, or contains an identifier at the beginning such as _ or ## that was stripped
904 # after forcing partial text conversion of the token.
905 if token != token_text:
906 # If there are any partially or fully built sub-word token ids, construct to text.
907 # Note: This is "old" subword, that occurs *after* current sub-word has started.
908 if len(built_token) > 0:
909 word_offsets.append(
910 {
911 "word": decode_tokens_to_str(built_token),
912 "start_offset": offsets[previous_token_index]["start_offset"],
913 "end_offset": offsets[i]["start_offset"],
914 }
915 )
916
917 # Prepare list of new sub-word ids
918 built_token.clear()
919 built_token.append(char)
920 previous_token_index = i
921 else:
922 # If the token does not contain any sub-word start mark, then the sub-word has not completed yet
923 # Append to current sub-word list.
924 built_token.append(char)
925
926 # Inject the start offset of the first token to word offsets
927         # This is because we always delay the injection of the first sub-word due to the loop
928 # condition and check whether built token is ready or not.
929 # Therefore without this forced injection, the start_offset appears as off by 1.
930 if len(word_offsets) == 0:
931 # alaptev: sometimes word_offsets can be empty
932 if len(built_token) > 0:
933 word_offsets.append(
934 {
935 "word": decode_tokens_to_str(built_token),
936 "start_offset": offsets[0]["start_offset"],
937 "end_offset": offsets[-1]["end_offset"],
938 }
939 )
940 built_token.clear()
941 else:
942 word_offsets[0]["start_offset"] = offsets[0]["start_offset"]
943
944 # If there are any remaining tokens left, inject them all into the final word offset.
945 # Note: The start offset of this token is the start time of the first token inside build_token.
946 # Note: The end offset of this token is the end time of the last token inside build_token
947 if len(built_token) > 0:
948 word_offsets.append(
949 {
950 "word": decode_tokens_to_str(built_token),
951 "start_offset": offsets[-(len(built_token))]["start_offset"],
952 "end_offset": offsets[-1]["end_offset"],
953 }
954 )
955 built_token.clear()
956
957 return word_offsets
958
959 @property
960 def preserve_alignments(self):
961 return self._preserve_alignments
962
963 @preserve_alignments.setter
964 def preserve_alignments(self, value):
965 self._preserve_alignments = value
966
967 if hasattr(self, 'decoding'):
968 self.decoding.preserve_alignments = value
969
970 @property
971 def compute_timestamps(self):
972 return self._compute_timestamps
973
974 @compute_timestamps.setter
975 def compute_timestamps(self, value):
976 self._compute_timestamps = value
977
978 if hasattr(self, 'decoding'):
979 self.decoding.compute_timestamps = value
980
981 @property
982 def preserve_frame_confidence(self):
983 return self._preserve_frame_confidence
984
985 @preserve_frame_confidence.setter
986 def preserve_frame_confidence(self, value):
987 self._preserve_frame_confidence = value
988
989 if hasattr(self, 'decoding'):
990 self.decoding.preserve_frame_confidence = value
991
992
993 class CTCDecoding(AbstractCTCDecoding):
994 """
995 Used for performing CTC auto-regressive / non-auto-regressive decoding of the logprobs for character
996 based models.
997
998 Args:
999 decoding_cfg: A dict-like object which contains the following key-value pairs.
1000 strategy: str value which represents the type of decoding that can occur.
1001 Possible values are :
1002 - greedy (for greedy decoding).
1003 - beam (for DeepSpeed KenLM based decoding).
1004
1005 compute_timestamps: A bool flag, which determines whether to compute the character/subword, or
1006             word based timestamp mapping the output log-probabilities to discrete intervals of timestamps.
1007 The timestamps will be available in the returned Hypothesis.timestep as a dictionary.
1008
1009 ctc_timestamp_type: A str value, which represents the types of timestamps that should be calculated.
1010 Can take the following values - "char" for character/subword time stamps, "word" for word level
1011 time stamps and "all" (default), for both character level and word level time stamps.
1012
1013             word_seperator: Str token representing the separator between words.
1014
1015 preserve_alignments: Bool flag which preserves the history of logprobs generated during
1016 decoding (sample / batched). When set to true, the Hypothesis will contain
1017 the non-null value for `logprobs` in it. Here, `logprobs` is a torch.Tensors.
1018
1019 confidence_cfg: A dict-like object which contains the following key-value pairs related to confidence
1020 scores. In order to obtain hypotheses with confidence scores, please utilize
1021 `ctc_decoder_predictions_tensor` function with the `preserve_frame_confidence` flag set to True.
1022
1023 preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores
1024 generated during decoding. When set to true, the Hypothesis will contain
1025 the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of floats.
1026 preserve_token_confidence: Bool flag which preserves the history of per-token confidence scores
1027 generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
1028 the non-null value for `token_confidence` in it. Here, `token_confidence` is a List of floats.
1029
1030 The length of the list corresponds to the number of recognized tokens.
1031 preserve_word_confidence: Bool flag which preserves the history of per-word confidence scores
1032 generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
1033 the non-null value for `word_confidence` in it. Here, `word_confidence` is a List of floats.
1034
1035 The length of the list corresponds to the number of recognized words.
1036 exclude_blank: Bool flag indicating that blank token confidence scores are to be excluded
1037 from the `token_confidence`.
1038 aggregation: Which aggregation type to use for collapsing per-token confidence into per-word confidence.
1039 Valid options are `mean`, `min`, `max`, `prod`.
1040 measure_cfg: A dict-like object which contains the measure name and settings to compute per-frame
1041 confidence scores.
1042
1043 name: The measure name (str).
1044 Supported values:
1045 - 'max_prob' for using the maximum token probability as a confidence.
1046 - 'entropy' for using a normalized entropy of a log-likelihood vector.
1047
1048 entropy_type: Which type of entropy to use (str).
1049 Used if confidence_measure_cfg.name is set to `entropy`.
1050 Supported values:
1051 - 'gibbs' for the (standard) Gibbs entropy. If the alpha (ฮฑ) is provided,
1052 the formula is the following: H_ฮฑ = -sum_i((p^ฮฑ_i)*log(p^ฮฑ_i)).
1053 Note that for this entropy, the alpha should comply the following inequality:
1054 (log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= ฮฑ <= (1+log(V-1))/log(V-1)
1055 where V is the model vocabulary size.
1056 - 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
1057 Tsallis entropy formula is the following: H_ฮฑ = 1/(ฮฑ-1)*(1-sum_i(p^ฮฑ_i)),
1058 where ฮฑ is a parameter. When ฮฑ == 1, it works like the Gibbs entropy.
1059 More: https://en.wikipedia.org/wiki/Tsallis_entropy
1060 - 'renyi' for the Rรฉnyi entropy.
1061 Rรฉnyi entropy formula is the following: H_ฮฑ = 1/(1-ฮฑ)*log_2(sum_i(p^ฮฑ_i)),
1062 where ฮฑ is a parameter. When ฮฑ == 1, it works like the Gibbs entropy.
1063 More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
1064
1065 alpha: Power scale for logsoftmax (ฮฑ for entropies). Here we restrict it to be > 0.
1066 When the alpha equals one, scaling is not applied to 'max_prob',
1067 and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
1068
1069 entropy_norm: A mapping of the entropy value to the interval [0,1].
1070 Supported values:
1071 - 'lin' for using the linear mapping.
1072 - 'exp' for using exponential mapping with linear shift.
1073
1074 batch_dim_index: Index of the batch dimension of ``targets`` and ``predictions`` parameters of
1075 ``ctc_decoder_predictions_tensor`` methods. Can be either 0 or 1.
1076
1077 The config may further contain the following sub-dictionaries:
1078 "greedy":
1079 preserve_alignments: Same as above, overrides above value.
1080 compute_timestamps: Same as above, overrides above value.
1081 preserve_frame_confidence: Same as above, overrides above value.
1082 confidence_measure_cfg: Same as above, overrides confidence_cfg.measure_cfg.
1083
1084 "beam":
1085 beam_size: int, defining the beam size for beam search. Must be >= 1.
1086 If beam_size == 1, will perform cached greedy search. This might be slightly different
1087 results compared to the greedy search above.
1088
1089 return_best_hypothesis: optional bool, whether to return just the best hypothesis or all of the
1090 hypotheses after beam search has concluded. This flag is set by default.
1091
1092 beam_alpha: float, the strength of the Language model on the final score of a token.
1093 final_score = acoustic_score + beam_alpha * lm_score + beam_beta * seq_length.
1094
1095 beam_beta: float, the strength of the sequence length penalty on the final score of a token.
1096 final_score = acoustic_score + beam_alpha * lm_score + beam_beta * seq_length.
1097
1098 kenlm_path: str, path to a KenLM ARPA or .binary file (depending on the strategy chosen).
1099 If the path is invalid (file is not found at path), will raise a deferred error at the moment
1100 of calculation of beam search, so that users may update / change the decoding strategy
1101 to point to the correct file.
1102
1103         blank_id: The id of the CTC blank token.
1104 """
1105
1106 def __init__(
1107 self, decoding_cfg, vocabulary,
1108 ):
1109 blank_id = len(vocabulary)
1110 self.vocabulary = vocabulary
1111 self.labels_map = dict([(i, vocabulary[i]) for i in range(len(vocabulary))])
1112
1113 super().__init__(decoding_cfg=decoding_cfg, blank_id=blank_id)
1114
1115 # Finalize Beam Search Decoding framework
1116 if isinstance(self.decoding, ctc_beam_decoding.AbstractBeamCTCInfer):
1117 self.decoding.set_vocabulary(self.vocabulary)
1118 self.decoding.set_decoding_type('char')
1119
1120 def _aggregate_token_confidence(self, hypothesis: Hypothesis) -> List[float]:
1121 """
1122 Implemented by subclass in order to aggregate token confidence to a word-level confidence.
1123
1124 Args:
1125 hypothesis: Hypothesis
1126
1127 Returns:
1128 A list of word-level confidence scores.
1129 """
1130 return self._aggregate_token_confidence_chars(
1131 self.decode_tokens_to_str(hypothesis.text[0]).split(), hypothesis.token_confidence
1132 )
1133
1134 def decode_tokens_to_str(self, tokens: List[int]) -> str:
1135 """
1136         Implemented by subclass in order to decode a token id list into a string.
1137
1138 Args:
1139 tokens: List of int representing the token ids.
1140
1141 Returns:
1142 A decoded string.
1143 """
1144 hypothesis = ''.join(self.decode_ids_to_tokens(tokens))
1145 return hypothesis
1146
1147 def decode_ids_to_tokens(self, tokens: List[int]) -> List[str]:
1148 """
1149 Implemented by subclass in order to decode a token id list into a token list.
1150 A token list is the string representation of each token id.
1151
1152 Args:
1153 tokens: List of int representing the token ids.
1154
1155 Returns:
1156 A list of decoded tokens.
1157 """
1158 token_list = [self.labels_map[c] for c in tokens if c != self.blank_id]
1159 return token_list
1160
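For character models, `decode_tokens_to_str` plus `decode_ids_to_tokens` reduce to a label-map lookup that skips the blank id (which `CTCDecoding.__init__` places at `len(vocabulary)`). A standalone sketch of that lookup, with an illustrative name:

```python
def decode_char_ids(tokens, vocabulary):
    """Map token ids to characters and join, skipping the blank id
    (which CTCDecoding places at len(vocabulary))."""
    blank_id = len(vocabulary)
    labels_map = {i: vocabulary[i] for i in range(len(vocabulary))}
    return ''.join(labels_map[c] for c in tokens if c != blank_id)
```

With vocabulary `['c', 'a', 't']`, the blank id is 3 and the sequence `[0, 1, 3, 2]` decodes to `'cat'`.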
1161
1162 class WER(Metric):
1163 """
1164 This metric computes numerator and denominator for Overall Word Error Rate (WER) between prediction and reference
1165 texts. When doing distributed training/evaluation the result of ``res=WER(predictions, targets, target_lengths)``
1166 calls will be all-reduced between all workers using SUM operations. Here ``res`` contains three numbers
1167     ``res=[wer, total_levenshtein_distance, total_number_of_words]``.
1168
1169     If used with a PyTorch Lightning LightningModule, include wer_numerator and wer_denominator inside validation_step
1170     results. Then aggregate (sum) them at the end of the validation epoch to correctly compute validation WER.
1171
1172 Example:
1173 def validation_step(self, batch, batch_idx):
1174 ...
1175 wer_num, wer_denom = self.__wer(predictions, transcript, transcript_len)
1176 self.val_outputs = {'val_loss': loss_value, 'val_wer_num': wer_num, 'val_wer_denom': wer_denom}
1177 return self.val_outputs
1178
1179 def on_validation_epoch_end(self):
1180 ...
1181 wer_num = torch.stack([x['val_wer_num'] for x in self.val_outputs]).sum()
1182 wer_denom = torch.stack([x['val_wer_denom'] for x in self.val_outputs]).sum()
1183 tensorboard_logs = {'validation_loss': val_loss_mean, 'validation_avg_wer': wer_num / wer_denom}
1184 self.val_outputs.clear() # free memory
1185 return {'val_loss': val_loss_mean, 'log': tensorboard_logs}
1186
1187 Args:
1188 decoding: An instance of CTCDecoding.
1189 use_cer: Whether to use Character Error Rate instead of Word Error Rate.
1190 log_prediction: Whether to log a single decoded sample per call.
1191 fold_consecutive: Whether repeated consecutive characters should be folded into one when decoding.
1192
1193 Returns:
1194         res: a tuple of 3 zero dimensional float32 ``torch.Tensor`` objects: a WER score, a sum of Levenshtein
1195             distances for all prediction - reference pairs, total number of words in all references.
1196 """
1197
1198 full_state_update: bool = True
1199
1200 def __init__(
1201 self,
1202 decoding: CTCDecoding,
1203 use_cer=False,
1204 log_prediction=True,
1205 fold_consecutive=True,
1206 dist_sync_on_step=False,
1207 ):
1208 super().__init__(dist_sync_on_step=dist_sync_on_step)
1209
1210 self.decoding = decoding
1211 self.use_cer = use_cer
1212 self.log_prediction = log_prediction
1213 self.fold_consecutive = fold_consecutive
1214
1215 self.add_state("scores", default=torch.tensor(0), dist_reduce_fx='sum', persistent=False)
1216 self.add_state("words", default=torch.tensor(0), dist_reduce_fx='sum', persistent=False)
1217
1218 def update(
1219 self,
1220 predictions: torch.Tensor,
1221 targets: torch.Tensor,
1222 target_lengths: torch.Tensor,
1223 predictions_lengths: torch.Tensor = None,
1224 ):
1225 """
1226 Updates metric state.
1227 Args:
1228 predictions: an integer torch.Tensor of shape ``[Batch, Time, {Vocabulary}]`` (if ``batch_dim_index == 0``) or
1229 ``[Time, Batch]`` (if ``batch_dim_index == 1``)
1230 targets: an integer torch.Tensor of shape ``[Batch, Time]`` (if ``batch_dim_index == 0``) or
1231 ``[Time, Batch]`` (if ``batch_dim_index == 1``)
1232 target_lengths: an integer torch.Tensor of shape ``[Batch]``
1233 predictions_lengths: an integer torch.Tensor of shape ``[Batch]``
1234 """
1235 words = 0
1236 scores = 0
1237 references = []
1238 with torch.no_grad():
1239 # prediction_cpu_tensor = tensors[0].long().cpu()
1240 targets_cpu_tensor = targets.long().cpu()
1241 tgt_lenths_cpu_tensor = target_lengths.long().cpu()
1242
1243 # iterate over batch
1244 for ind in range(targets_cpu_tensor.shape[0]):
1245 tgt_len = tgt_lenths_cpu_tensor[ind].item()
1246 target = targets_cpu_tensor[ind][:tgt_len].numpy().tolist()
1247 reference = self.decoding.decode_tokens_to_str(target)
1248 references.append(reference)
1249
1250 hypotheses, _ = self.decoding.ctc_decoder_predictions_tensor(
1251 predictions, predictions_lengths, fold_consecutive=self.fold_consecutive
1252 )
1253
1254 if self.log_prediction:
1255             logging.info("\n")
1256 logging.info(f"reference:{references[0]}")
1257 logging.info(f"predicted:{hypotheses[0]}")
1258
1259 for h, r in zip(hypotheses, references):
1260 if self.use_cer:
1261 h_list = list(h)
1262 r_list = list(r)
1263 else:
1264 h_list = h.split()
1265 r_list = r.split()
1266 words += len(r_list)
1267                 # Compute Levenshtein distance
1268 scores += editdistance.eval(h_list, r_list)
1269
1270 self.scores = torch.tensor(scores, device=self.scores.device, dtype=self.scores.dtype)
1271 self.words = torch.tensor(words, device=self.words.device, dtype=self.words.dtype)
1272 # return torch.tensor([scores, words]).to(predictions.device)
1273
1274 def compute(self):
1275 scores = self.scores.detach().float()
1276 words = self.words.detach().float()
1277 return scores / words, scores, words
1278
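The accumulation in `WER.update` / `WER.compute` boils down to: edit distance over word (or character) lists in the numerator, reference word count in the denominator. A pure-Python sketch, with a small Levenshtein implementation standing in for `editdistance.eval` (illustrative names, no torch):

```python
def levenshtein(a, b):
    """Edit distance between two sequences (pure-Python stand-in for editdistance.eval)."""
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            # cost of deletion, insertion, substitution/match
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (x != y))
    return dp[-1]

def word_error_rate(hypotheses, references, use_cer=False):
    """Accumulate scores/words the same way WER.update does, then divide."""
    scores = words = 0
    for h, r in zip(hypotheses, references):
        h_list, r_list = (list(h), list(r)) if use_cer else (h.split(), r.split())
        words += len(r_list)
        scores += levenshtein(h_list, r_list)
    return scores / words, scores, words
```

For hypothesis "the cat sat" against reference "the cat sat down", one word is missing out of four, so the WER is 0.25.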
1279
1280 @dataclass
1281 class CTCDecodingConfig:
1282 strategy: str = "greedy"
1283
1284 # preserve decoding alignments
1285 preserve_alignments: Optional[bool] = None
1286
1287 # compute ctc time stamps
1288 compute_timestamps: Optional[bool] = None
1289
1290     # token representing word separator
1291 word_seperator: str = " "
1292
1293 # type of timestamps to calculate
1294 ctc_timestamp_type: str = "all" # can be char, word or all for both
1295
1296 # batch dimension
1297 batch_dim_index: int = 0
1298
1299 # greedy decoding config
1300 greedy: ctc_greedy_decoding.GreedyCTCInferConfig = ctc_greedy_decoding.GreedyCTCInferConfig()
1301
1302 # beam decoding config
1303 beam: ctc_beam_decoding.BeamCTCInferConfig = ctc_beam_decoding.BeamCTCInferConfig(beam_size=4)
1304
1305 # confidence config
1306 confidence_cfg: ConfidenceConfig = ConfidenceConfig()
1307
1308 # can be used to change temperature for decoding
1309 temperature: float = 1.0
1310
[end of nemo/collections/asr/metrics/wer.py]
[start of nemo/collections/asr/models/configs/aligner_config.py]
1 # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from dataclasses import dataclass
16
17 from nemo.collections.asr.parts.k2.classes import GraphModuleConfig
18
19
20 @dataclass
21 class AlignerCTCConfig:
22 prob_suppress_index: int = -1
23 prob_suppress_value: float = 1.0
24
25
26 @dataclass
27 class AlignerRNNTConfig:
28 predictor_window_size: int = 0
29 predictor_step_size: int = 1
30
31
32 @dataclass
33 class AlignerWrapperModelConfig:
34 alignment_type: str = "forced"
35 word_output: bool = True
36 cpu_decoding: bool = False
37 decode_batch_size: int = 0
38 ctc_cfg: AlignerCTCConfig = AlignerCTCConfig()
39 rnnt_cfg: AlignerRNNTConfig = AlignerRNNTConfig()
40
41
42 @dataclass
43 class K2AlignerWrapperModelConfig(AlignerWrapperModelConfig):
44 decoder_module_cfg: GraphModuleConfig = GraphModuleConfig()
45
[end of nemo/collections/asr/models/configs/aligner_config.py]
[start of nemo/collections/asr/models/configs/asr_models_config.py]
1 # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from dataclasses import dataclass
16 from typing import Any, Dict, List, Optional
17
18 from omegaconf import MISSING
19
20 import nemo.core.classes.dataset
21 from nemo.collections.asr.metrics.wer import CTCDecodingConfig
22 from nemo.collections.asr.modules.audio_preprocessing import (
23 AudioToMelSpectrogramPreprocessorConfig,
24 SpectrogramAugmentationConfig,
25 )
26 from nemo.collections.asr.modules.conv_asr import ConvASRDecoderConfig, ConvASREncoderConfig
27 from nemo.core.config import modelPT as model_cfg
28
29
30 @dataclass
31 class ASRDatasetConfig(nemo.core.classes.dataset.DatasetConfig):
32 manifest_filepath: Optional[Any] = None
33 sample_rate: int = MISSING
34 labels: List[str] = MISSING
35 trim_silence: bool = False
36
37 # Tarred dataset support
38 is_tarred: bool = False
39 tarred_audio_filepaths: Optional[Any] = None
40 tarred_shard_strategy: str = "scatter"
41 shard_manifests: bool = False
42 shuffle_n: int = 0
43
44 # Optional
45 int_values: Optional[int] = None
46 augmentor: Optional[Dict[str, Any]] = None
47 max_duration: Optional[float] = None
48 min_duration: Optional[float] = None
49 max_utts: int = 0
50 blank_index: int = -1
51 unk_index: int = -1
52 normalize: bool = False
53 trim: bool = True
54 parser: Optional[str] = 'en'
55 eos_id: Optional[int] = None
56 bos_id: Optional[int] = None
57 pad_id: int = 0
58 use_start_end_token: bool = False
59 return_sample_id: Optional[bool] = False
60
61 # bucketing params
62 bucketing_strategy: str = "synced_randomized"
63 bucketing_batch_size: Optional[Any] = None
64 bucketing_weights: Optional[List[int]] = None
65
66
67 @dataclass
68 class EncDecCTCConfig(model_cfg.ModelConfig):
69 # Model global arguments
70 sample_rate: int = 16000
71 repeat: int = 1
72 dropout: float = 0.0
73 separable: bool = False
74 labels: List[str] = MISSING
75
76 # Dataset configs
77 train_ds: ASRDatasetConfig = ASRDatasetConfig(manifest_filepath=None, shuffle=True)
78 validation_ds: ASRDatasetConfig = ASRDatasetConfig(manifest_filepath=None, shuffle=False)
79 test_ds: ASRDatasetConfig = ASRDatasetConfig(manifest_filepath=None, shuffle=False)
80
81 # Optimizer / Scheduler config
82 optim: Optional[model_cfg.OptimConfig] = model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
83
84 # Model component configs
85 preprocessor: AudioToMelSpectrogramPreprocessorConfig = AudioToMelSpectrogramPreprocessorConfig()
86 spec_augment: Optional[SpectrogramAugmentationConfig] = SpectrogramAugmentationConfig()
87 encoder: ConvASREncoderConfig = ConvASREncoderConfig()
88 decoder: ConvASRDecoderConfig = ConvASRDecoderConfig()
89 decoding: CTCDecodingConfig = CTCDecodingConfig()
90
91
92 @dataclass
93 class EncDecCTCModelConfig(model_cfg.NemoConfig):
94 model: EncDecCTCConfig = EncDecCTCConfig()
95
96
97 @dataclass
98 class CacheAwareStreamingConfig:
99 chunk_size: int = 0 # the size of each chunk at each step, it can be a list of two integers to specify different chunk sizes for the first step and others
100 shift_size: int = 0 # the size of the shift in each step, it can be a list of two integers to specify different shift sizes for the first step and others
101
102 cache_drop_size: int = 0 # the number of steps to drop from the cache
103 last_channel_cache_size: int = 0 # the size of the needed cache for last channel layers
104
105 valid_out_len: int = 0 # the number of the steps in the final output which are valid (have the same value as in the offline mode)
106
107 pre_encode_cache_size: int = 0 # the size of the needed cache for the pre-encoding part of the model to avoid caching inside the pre-encoding layers
108 drop_extra_pre_encoded: int = 0 # the number of steps to get dropped after the pre-encoding layer
109
110 last_channel_num: int = 0 # number of the last channel layers (like MHA layers) which need caching in the model
111 last_time_num: int = 0 # number of the last time layers (like convolutions) which need caching in the model
112
[end of nemo/collections/asr/models/configs/asr_models_config.py]
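`CacheAwareStreamingConfig` notes that `chunk_size` and `shift_size` may be a single int or a two-element list distinguishing the first step from the rest. A small sketch of how that convention can be resolved per step — the helper `size_at_step` is hypothetical, not part of the NeMo API:

```python
from dataclasses import dataclass
from typing import List, Union


@dataclass
class CacheAwareStreamingConfig:
    # an int applies to every step; a [first, rest] pair distinguishes step 0
    chunk_size: Union[int, List[int]] = 0
    shift_size: Union[int, List[int]] = 0


def size_at_step(size: Union[int, List[int]], step: int) -> int:
    """Resolve the int-or-[first, rest] convention for a given step index."""
    if isinstance(size, int):
        return size
    return size[0] if step == 0 else size[1]


cfg = CacheAwareStreamingConfig(chunk_size=[40, 16], shift_size=16)
print(size_at_step(cfg.chunk_size, 0))  # 40 (first step)
print(size_at_step(cfg.chunk_size, 3))  # 16 (all later steps)
print(size_at_step(cfg.shift_size, 3))  # 16 (scalar applies everywhere)
```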
[start of nemo/collections/asr/models/configs/classification_models_config.py]
1 # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from dataclasses import dataclass
16 from typing import Any, Dict, List, Optional
17
18 from omegaconf import MISSING
19
20 import nemo.core.classes.dataset
21 from nemo.collections.asr.modules.audio_preprocessing import (
22 AudioToMFCCPreprocessorConfig,
23 CropOrPadSpectrogramAugmentationConfig,
24 SpectrogramAugmentationConfig,
25 )
26 from nemo.collections.asr.modules.conv_asr import ConvASRDecoderClassificationConfig, ConvASREncoderConfig
27 from nemo.core.config import modelPT as model_cfg
28
29
30 @dataclass
31 class EncDecClassificationDatasetConfig(nemo.core.classes.dataset.DatasetConfig):
32 manifest_filepath: Optional[str] = None
33 sample_rate: int = MISSING
34 labels: List[str] = MISSING
35 trim_silence: bool = False
36
37 # Tarred dataset support
38 is_tarred: bool = False
39 tarred_audio_filepaths: Optional[str] = None
40 tarred_shard_strategy: str = "scatter"
41 shuffle_n: int = 0
42
43 # Optional
44 int_values: Optional[int] = None
45 augmentor: Optional[Dict[str, Any]] = None
46 max_duration: Optional[float] = None
47 min_duration: Optional[float] = None
48 cal_labels_occurrence: Optional[bool] = False
49
50 # VAD Optional
51 vad_stream: Optional[bool] = None
52 window_length_in_sec: float = 0.31
53 shift_length_in_sec: float = 0.01
54 normalize_audio: bool = False
55 is_regression_task: bool = False
56
57 # bucketing params
58 bucketing_strategy: str = "synced_randomized"
59 bucketing_batch_size: Optional[Any] = None
60 bucketing_weights: Optional[List[int]] = None
61
62
63 @dataclass
64 class EncDecClassificationConfig(model_cfg.ModelConfig):
65 # Model global arguments
66 sample_rate: int = 16000
67 repeat: int = 1
68 dropout: float = 0.0
69 separable: bool = True
70 kernel_size_factor: float = 1.0
71 labels: List[str] = MISSING
72 timesteps: int = MISSING
73
74 # Dataset configs
75 train_ds: EncDecClassificationDatasetConfig = EncDecClassificationDatasetConfig(
76 manifest_filepath=None, shuffle=True, trim_silence=False
77 )
78 validation_ds: EncDecClassificationDatasetConfig = EncDecClassificationDatasetConfig(
79 manifest_filepath=None, shuffle=False
80 )
81 test_ds: EncDecClassificationDatasetConfig = EncDecClassificationDatasetConfig(
82 manifest_filepath=None, shuffle=False
83 )
84
85 # Optimizer / Scheduler config
86 optim: Optional[model_cfg.OptimConfig] = model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
87
88 # Model component configs
89 preprocessor: AudioToMFCCPreprocessorConfig = AudioToMFCCPreprocessorConfig()
90 spec_augment: Optional[SpectrogramAugmentationConfig] = SpectrogramAugmentationConfig()
91 crop_or_pad_augment: Optional[CropOrPadSpectrogramAugmentationConfig] = CropOrPadSpectrogramAugmentationConfig(
92 audio_length=timesteps
93 )
94
95 encoder: ConvASREncoderConfig = ConvASREncoderConfig()
96 decoder: ConvASRDecoderClassificationConfig = ConvASRDecoderClassificationConfig()
97
98
99 @dataclass
100 class EncDecClassificationModelConfig(model_cfg.NemoConfig):
101 model: EncDecClassificationConfig = EncDecClassificationConfig()
102
[end of nemo/collections/asr/models/configs/classification_models_config.py]
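Fields such as `sample_rate: int = MISSING` use OmegaConf's `MISSING` sentinel: the value must be supplied before the config is used. A self-contained sketch of that idea with a stand-in sentinel (the `missing_fields` helper and the string sentinel are illustrative assumptions, not OmegaConf itself, which raises on access instead):

```python
from dataclasses import dataclass, fields
from typing import Any, List

MISSING: Any = "???"  # stand-in for omegaconf.MISSING


@dataclass
class EncDecClassificationDatasetConfig:
    sample_rate: Any = MISSING
    labels: Any = MISSING
    trim_silence: bool = False


def missing_fields(cfg) -> List[str]:
    # Report which mandatory fields were never filled in.
    return [f.name for f in fields(cfg) if getattr(cfg, f.name) == MISSING]


cfg = EncDecClassificationDatasetConfig(sample_rate=16000)
print(missing_fields(cfg))  # ['labels']
```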
[start of nemo/collections/asr/models/configs/diarizer_config.py]
1 # Copyright (c) 2023, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from dataclasses import asdict, dataclass
16 from typing import Any, Dict, Optional, Tuple, Union
17
18
19 @dataclass
20 class DiarizerComponentConfig:
21 """Dataclass to imitate HydraConfig dict when accessing parameters."""
22
23 def get(self, name: str, default: Optional[Any] = None):
24 return getattr(self, name, default)
25
26 def __iter__(self):
27 for key in asdict(self):
28 yield key
29
30 def dict(self) -> Dict:
31 return asdict(self)
32
33
34 @dataclass
35 class ASRDiarizerCTCDecoderParams:
36 pretrained_language_model: Optional[str] = None # KenLM model file: .arpa model file or .bin binary file.
37 beam_width: int = 32
38 alpha: float = 0.5
39 beta: float = 2.5
40
41
42 @dataclass
43 class ASRRealigningLMParams:
44 # Provide a KenLM language model in .arpa format.
45 arpa_language_model: Optional[str] = None
46 # Min number of words for the left context.
47 min_number_of_words: int = 3
48 # Max number of words for the right context.
49 max_number_of_words: int = 10
50 # The threshold for the difference between two log probability values from two hypotheses.
51 logprob_diff_threshold: float = 1.2
52
53
54 @dataclass
55 class ASRDiarizerParams(DiarizerComponentConfig):
56 # If True, speech segmentation for diarization is based on word-timestamps from ASR inference.
57 asr_based_vad: bool = False
58 # Threshold (in sec) that caps the gap between two words when generating VAD timestamps using ASR based VAD.
59 asr_based_vad_threshold: float = 1.0
60 # Batch size can be dependent on each ASR model. Default batch sizes are applied if set to null.
61 asr_batch_size: Optional[int] = None
62 # Native decoder delay. null is recommended to use the default values for each ASR model.
63 decoder_delay_in_sec: Optional[float] = None
64 # Offset to set a reference point from the start of the word. Recommended range of values is [-0.05 0.2].
65 word_ts_anchor_offset: Optional[float] = None
66 # Select which part of the word timestamp we want to use. The options are: 'start', 'end', 'mid'.
67 word_ts_anchor_pos: str = "start"
68 # Fix the word timestamp using VAD output. You must provide a VAD model to use this feature.
69 fix_word_ts_with_VAD: bool = False
70 # If True, use colored text to distinguish speakers in the output transcript.
71 colored_text: bool = False
72 # If True, the start and end time of each speaker turn is printed in the output transcript.
73 print_time: bool = True
74 # If True, the output transcript breaks the line to fix the line width (default is 90 chars)
75 break_lines: bool = False
76
77
78 @dataclass
79 class ASRDiarizerConfig(DiarizerComponentConfig):
80 model_path: Optional[str] = "stt_en_conformer_ctc_large"
81 parameters: ASRDiarizerParams = ASRDiarizerParams()
82 ctc_decoder_parameters: ASRDiarizerCTCDecoderParams = ASRDiarizerCTCDecoderParams()
83 realigning_lm_parameters: ASRRealigningLMParams = ASRRealigningLMParams()
84
85
86 @dataclass
87 class VADParams(DiarizerComponentConfig):
88 window_length_in_sec: float = 0.15 # Window length in sec for VAD context input
89 shift_length_in_sec: float = 0.01 # Shift length in sec for generate frame level VAD prediction
90 smoothing: Union[str, bool] = "median" # False or type of smoothing method (eg: median)
91 overlap: float = 0.5 # Overlap ratio for overlapped mean/median smoothing filter
92 onset: float = 0.1 # Onset threshold for detecting the beginning of speech
93 offset: float = 0.1 # Offset threshold for detecting the end of speech
94 pad_onset: float = 0.1 # Duration (in sec) added before each speech segment
95 pad_offset: float = 0 # Duration (in sec) added after each speech segment
96 min_duration_on: float = 0 # Threshold for small non_speech deletion
97 min_duration_off: float = 0.2 # Threshold for short speech segment deletion
98 filter_speech_first: bool = True
99
100
101 @dataclass
102 class VADConfig(DiarizerComponentConfig):
103 model_path: str = "vad_multilingual_marblenet" # .nemo local model path or pretrained VAD model name
104 external_vad_manifest: Optional[str] = None
105 parameters: VADParams = VADParams()
106
107
108 @dataclass
109 class SpeakerEmbeddingsParams(DiarizerComponentConfig):
110 # Window length(s) in sec (floating-point number). either a number or a list. ex) 1.5 or [1.5,1.0,0.5]
111 window_length_in_sec: Tuple[float] = (1.5, 1.25, 1.0, 0.75, 0.5)
112 # Shift length(s) in sec (floating-point number). either a number or a list. ex) 0.75 or [0.75,0.5,0.25]
113 shift_length_in_sec: Tuple[float] = (0.75, 0.625, 0.5, 0.375, 0.25)
114 # Weight for each scale. None (for single scale) or list with window/shift scale count. ex) [0.33,0.33,0.33]
115 multiscale_weights: Tuple[float] = (1, 1, 1, 1, 1)
116 # save speaker embeddings in pickle format. True if clustering result is used for other models, such as MSDD.
117 save_embeddings: bool = True
118
119
120 @dataclass
121 class SpeakerEmbeddingsConfig(DiarizerComponentConfig):
122 # .nemo local model path or pretrained model name (titanet_large, ecapa_tdnn or speakerverification_speakernet)
123 model_path: Optional[str] = None
124 parameters: SpeakerEmbeddingsParams = SpeakerEmbeddingsParams()
125
126
127 @dataclass
128 class ClusteringParams(DiarizerComponentConfig):
129 # If True, use num of speakers value provided in manifest file.
130 oracle_num_speakers: bool = False
131 # Max number of speakers for each recording. If an oracle number of speakers is passed, this value is ignored.
132 max_num_speakers: int = 8
133 # If the number of segments is lower than this number, enhanced speaker counting is activated.
134 enhanced_count_thres: int = 80
135 # Determines the range of p-value search: 0 < p <= max_rp_threshold.
136 max_rp_threshold: float = 0.25
137 # The higher the number, the more p-values are examined, at the cost of longer runtime.
138 sparse_search_volume: int = 30
139 # If True, take a majority vote on multiple p-values to estimate the number of speakers.
140 maj_vote_spk_count: bool = False
141
142
143 @dataclass
144 class ClusteringConfig(DiarizerComponentConfig):
145 parameters: ClusteringParams = ClusteringParams()
146
147
148 @dataclass
149 class MSDDParams(DiarizerComponentConfig):
150 # If True, use the speaker embedding model from the checkpoint; otherwise, use the one provided in the config.
151 use_speaker_model_from_ckpt: bool = True
152 # Batch size for MSDD inference.
153 infer_batch_size: int = 25
154 # Sigmoid threshold for generating binarized speaker labels. The smaller the value, the more generous the overlap detection.
155 sigmoid_threshold: Tuple[float] = (0.7,)
156 # If True, use oracle number of speaker and evaluate F1 score for the given speaker sequences. Default is False.
157 seq_eval_mode: bool = False
158 # If True, break the input audio clip into short sequences and calculate cluster average embeddings for inference.
159 split_infer: bool = True
160 # The length of split short sequence when split_infer is True.
161 diar_window_length: int = 50
162 # If the estimated number of speakers is larger than this number, overlap speech is not estimated.
163 overlap_infer_spk_limit: int = 5
164
165
166 @dataclass
167 class MSDDConfig(DiarizerComponentConfig):
168 model_path: Optional[str] = "diar_msdd_telephonic"
169 parameters: MSDDParams = MSDDParams()
170
171
172 @dataclass
173 class DiarizerConfig(DiarizerComponentConfig):
174 manifest_filepath: Optional[str] = None
175 out_dir: Optional[str] = None
176 oracle_vad: bool = False # If True, uses RTTM files provided in the manifest file to get VAD timestamps
177 collar: float = 0.25 # Collar value for scoring
178 ignore_overlap: bool = True # Consider or ignore overlap segments while scoring
179 vad: VADConfig = VADConfig()
180 speaker_embeddings: SpeakerEmbeddingsConfig = SpeakerEmbeddingsConfig()
181 clustering: ClusteringConfig = ClusteringConfig()
182 msdd_model: MSDDConfig = MSDDConfig()
183 asr: ASRDiarizerConfig = ASRDiarizerConfig()
184
185
186 @dataclass
187 class NeuralDiarizerInferenceConfig(DiarizerComponentConfig):
188 diarizer: DiarizerConfig = DiarizerConfig()
189 device: str = "cpu"
190 verbose: bool = False
191 batch_size: int = 64
192 num_workers: int = 1
193 sample_rate: int = 16000
194 name: str = ""
195
196 @classmethod
197 def init_config(cls, diar_model_path: str, vad_model_path: str, map_location: str, verbose: bool):
198 return NeuralDiarizerInferenceConfig(
199 DiarizerConfig(
200 vad=VADConfig(model_path=vad_model_path), msdd_model=MSDDConfig(model_path=diar_model_path),
201 ),
202 device=map_location,
203 verbose=verbose,
204 )
205
[end of nemo/collections/asr/models/configs/diarizer_config.py]
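`DiarizerComponentConfig` gives every diarizer sub-config dict-like access (`get`, iteration, `dict()`) so code written against Hydra `DictConfig` objects keeps working on plain dataclasses. The mechanism can be exercised in isolation like this (same methods as the source; `VADParams` trimmed to two fields for brevity):

```python
from dataclasses import asdict, dataclass
from typing import Any, Dict, Optional


@dataclass
class DiarizerComponentConfig:
    """Imitate dict-style access on a dataclass."""

    def get(self, name: str, default: Optional[Any] = None):
        return getattr(self, name, default)

    def __iter__(self):
        for key in asdict(self):
            yield key

    def dict(self) -> Dict:
        return asdict(self)


@dataclass
class VADParams(DiarizerComponentConfig):
    onset: float = 0.1
    offset: float = 0.1


p = VADParams()
print(p.get("onset"))        # 0.1
print(p.get("missing", 42))  # 42 — falls back to the supplied default
print(list(p))               # ['onset', 'offset'] — iterates field names
```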
[start of nemo/collections/asr/models/configs/k2_sequence_models_config.py]
1 # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from dataclasses import dataclass
16
17 from nemo.collections.asr.models.configs.asr_models_config import EncDecCTCConfig
18 from nemo.collections.asr.parts.k2.classes import GraphModuleConfig as BackendConfig
19 from nemo.core.config.modelPT import NemoConfig
20
21
22 @dataclass
23 class GraphModuleConfig:
24 criterion_type: str = "ml"
25 loss_type: str = "ctc"
26 split_batch_size: int = 0
27 dec_type: str = "topo"
28 transcribe_training: bool = True
29 backend_cfg: BackendConfig = BackendConfig()
30
31
32 @dataclass
33 class EncDecK2SeqConfig(EncDecCTCConfig):
34 graph_module_cfg: GraphModuleConfig = GraphModuleConfig()
35
36
37 @dataclass
38 class EncDecK2SeqModelConfig(NemoConfig):
39 model: EncDecK2SeqConfig = EncDecK2SeqConfig()
40
[end of nemo/collections/asr/models/configs/k2_sequence_models_config.py]
[start of nemo/collections/asr/models/configs/matchboxnet_config.py]
1 # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from dataclasses import dataclass, field
16 from typing import Any, Callable, List, Optional
17
18 from omegaconf import MISSING
19
20 from nemo.collections.asr.models.configs import classification_models_config as clf_cfg
21 from nemo.collections.asr.modules.audio_preprocessing import (
22 AudioToMFCCPreprocessorConfig,
23 CropOrPadSpectrogramAugmentationConfig,
24 SpectrogramAugmentationConfig,
25 )
26 from nemo.collections.asr.modules.conv_asr import (
27 ConvASRDecoderClassificationConfig,
28 ConvASREncoderConfig,
29 JasperEncoderConfig,
30 )
31 from nemo.core.config import modelPT as model_cfg
32
33
34 # fmt: off
35 def matchboxnet_3x1x64():
36 config = [
37 JasperEncoderConfig(filters=128, repeat=1, kernel=[11], stride=[1], dilation=[1], dropout=0.0,
38 residual=False, groups=1, separable=True, heads=-1, residual_mode='add',
39 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
40 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
41 JasperEncoderConfig(filters=64, repeat=1, kernel=[13], stride=[1], dilation=[1], dropout=0.0,
42 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
43 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
44 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
45 JasperEncoderConfig(filters=64, repeat=1, kernel=[15], stride=[1], dilation=[1], dropout=0.0,
46 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
47 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
48 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
49 JasperEncoderConfig(filters=64, repeat=1, kernel=[17], stride=[1], dilation=[1], dropout=0.0,
50 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
51 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
52 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
53 JasperEncoderConfig(filters=128, repeat=1, kernel=[29], stride=[1], dilation=[2], dropout=0.0,
54 residual=False, groups=1, separable=True, heads=-1, residual_mode='add',
55 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
56 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
57 JasperEncoderConfig(filters=128, repeat=1, kernel=[1], stride=[1], dilation=[1], dropout=0.0,
58 residual=False, groups=1, separable=False, heads=-1, residual_mode='add',
59 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
60 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False)
61 ]
62 return config
63
64
65 def matchboxnet_3x1x64_vad():
66 config = [
67 JasperEncoderConfig(filters=128, repeat=1, kernel=[11], stride=[1], dilation=[1], dropout=0.0,
68 residual=False, groups=1, separable=True, heads=-1, residual_mode='add',
69 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
70 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
71 JasperEncoderConfig(filters=64, repeat=1, kernel=[13], stride=[1], dilation=[1], dropout=0.0,
72 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
73 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
74 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
75 JasperEncoderConfig(filters=64, repeat=1, kernel=[15], stride=[1], dilation=[1], dropout=0.0,
76 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
77 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
78 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
79 JasperEncoderConfig(filters=64, repeat=1, kernel=[17], stride=[1], dilation=[1], dropout=0.0,
80 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
81 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
82 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
83 JasperEncoderConfig(filters=128, repeat=1, kernel=[29], stride=[1], dilation=[2], dropout=0.0,
84 residual=False, groups=1, separable=True, heads=-1, residual_mode='add',
85 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
86 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
87 JasperEncoderConfig(filters=128, repeat=1, kernel=[1], stride=[1], dilation=[1], dropout=0.0,
88 residual=False, groups=1, separable=False, heads=-1, residual_mode='add',
89 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
90 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False)
91 ]
92 return config
93
94
95 # fmt: on
96
97
98 @dataclass
99 class MatchboxNetModelConfig(clf_cfg.EncDecClassificationConfig):
100 # Model global arguments
101 sample_rate: int = 16000
102 repeat: int = 1
103 dropout: float = 0.0
104 separable: bool = True
105 kernel_size_factor: float = 1.0
106 timesteps: int = 128
107 labels: List[str] = MISSING
108
109 # Dataset configs
110 train_ds: clf_cfg.EncDecClassificationDatasetConfig = clf_cfg.EncDecClassificationDatasetConfig(
111 manifest_filepath=None, shuffle=True, trim_silence=False
112 )
113 validation_ds: clf_cfg.EncDecClassificationDatasetConfig = clf_cfg.EncDecClassificationDatasetConfig(
114 manifest_filepath=None, shuffle=False
115 )
116 test_ds: clf_cfg.EncDecClassificationDatasetConfig = clf_cfg.EncDecClassificationDatasetConfig(
117 manifest_filepath=None, shuffle=False
118 )
119
120 # Optimizer / Scheduler config
121 optim: Optional[model_cfg.OptimConfig] = model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
122
123 # Model general component configs
124 preprocessor: AudioToMFCCPreprocessorConfig = AudioToMFCCPreprocessorConfig(window_size=0.025)
125 spec_augment: Optional[SpectrogramAugmentationConfig] = SpectrogramAugmentationConfig(
126 freq_masks=2, time_masks=2, freq_width=15, time_width=25, rect_masks=5, rect_time=25, rect_freq=15
127 )
128 crop_or_pad_augment: Optional[CropOrPadSpectrogramAugmentationConfig] = CropOrPadSpectrogramAugmentationConfig(
129 audio_length=128
130 )
131
132 encoder: ConvASREncoderConfig = ConvASREncoderConfig(activation="relu")
133 decoder: ConvASRDecoderClassificationConfig = ConvASRDecoderClassificationConfig()
134
135
136 @dataclass
137 class MatchboxNetVADModelConfig(MatchboxNetModelConfig):
138 timesteps: int = 64
139 labels: List[str] = field(default_factory=lambda: ['background', 'speech'])
140
141 crop_or_pad_augment: Optional[CropOrPadSpectrogramAugmentationConfig] = None
142
143
144 class EncDecClassificationModelConfigBuilder(model_cfg.ModelConfigBuilder):
145 VALID_CONFIGS = ['matchboxnet_3x1x64', 'matchboxnet_3x1x64_vad']
146
147 def __init__(self, name: str = 'matchboxnet_3x1x64', encoder_cfg_func: Optional[Callable[[], List[Any]]] = None):
148 if name not in EncDecClassificationModelConfigBuilder.VALID_CONFIGS:
149 raise ValueError("`name` must be one of : \n" f"{EncDecClassificationModelConfigBuilder.VALID_CONFIGS}")
150
151 self.name = name
152
153 if 'matchboxnet_3x1x64_vad' in name:
154 if encoder_cfg_func is None:
155 encoder_cfg_func = matchboxnet_3x1x64_vad
156
157 model_cfg = MatchboxNetVADModelConfig(
158 repeat=1,
159 separable=True,
160 encoder=ConvASREncoderConfig(jasper=encoder_cfg_func(), activation="relu"),
161 decoder=ConvASRDecoderClassificationConfig(),
162 )
163
164 elif 'matchboxnet_3x1x64' in name:
165 if encoder_cfg_func is None:
166 encoder_cfg_func = matchboxnet_3x1x64
167
168 model_cfg = MatchboxNetModelConfig(
169 repeat=1,
170 separable=False,
171 spec_augment=SpectrogramAugmentationConfig(rect_masks=5, rect_freq=50, rect_time=120),
172 encoder=ConvASREncoderConfig(jasper=encoder_cfg_func(), activation="relu"),
173 decoder=ConvASRDecoderClassificationConfig(),
174 )
175
176 else:
177 raise ValueError(f"Invalid config name submitted to {self.__class__.__name__}")
178
179 super(EncDecClassificationModelConfigBuilder, self).__init__(model_cfg)
180 self.model_cfg: clf_cfg.EncDecClassificationConfig = model_cfg # enable type hinting
181
182 def set_labels(self, labels: List[str]):
183 self.model_cfg.labels = labels
184
185 def set_separable(self, separable: bool):
186 self.model_cfg.separable = separable
187
188 def set_repeat(self, repeat: int):
189 self.model_cfg.repeat = repeat
190
191 def set_sample_rate(self, sample_rate: int):
192 self.model_cfg.sample_rate = sample_rate
193
194 def set_dropout(self, dropout: float = 0.0):
195 self.model_cfg.dropout = dropout
196
197 def set_timesteps(self, timesteps: int):
198 self.model_cfg.timesteps = timesteps
199
200 def set_is_regression_task(self, is_regression_task: bool):
201 self.model_cfg.is_regression_task = is_regression_task
202
203 # Note: Autocomplete for users won't work without these overrides
204 # But practically it is not needed since Python will infer at runtime
205
206 # def set_train_ds(self, cfg: Optional[clf_cfg.EncDecClassificationDatasetConfig] = None):
207 # super().set_train_ds(cfg)
208 #
209 # def set_validation_ds(self, cfg: Optional[clf_cfg.EncDecClassificationDatasetConfig] = None):
210 # super().set_validation_ds(cfg)
211 #
212 # def set_test_ds(self, cfg: Optional[clf_cfg.EncDecClassificationDatasetConfig] = None):
213 # super().set_test_ds(cfg)
214
215 def _finalize_cfg(self):
216 # propagate labels
217 self.model_cfg.train_ds.labels = self.model_cfg.labels
218 self.model_cfg.validation_ds.labels = self.model_cfg.labels
219 self.model_cfg.test_ds.labels = self.model_cfg.labels
220 self.model_cfg.decoder.vocabulary = self.model_cfg.labels
221
222 # propagate num classes
223 self.model_cfg.decoder.num_classes = len(self.model_cfg.labels)
224
225 # propagate sample rate
226 self.model_cfg.sample_rate = self.model_cfg.sample_rate
227 self.model_cfg.preprocessor.sample_rate = self.model_cfg.sample_rate
228 self.model_cfg.train_ds.sample_rate = self.model_cfg.sample_rate
229 self.model_cfg.validation_ds.sample_rate = self.model_cfg.sample_rate
230 self.model_cfg.test_ds.sample_rate = self.model_cfg.sample_rate
231
232 # propagate filters
233 self.model_cfg.encoder.feat_in = self.model_cfg.preprocessor.features
234 self.model_cfg.decoder.feat_in = self.model_cfg.encoder.jasper[-1].filters
235
236 # propagate timesteps
237 if self.model_cfg.crop_or_pad_augment is not None:
238 self.model_cfg.crop_or_pad_augment.audio_length = self.model_cfg.timesteps
239
240 # propagate separable
241 for layer in self.model_cfg.encoder.jasper[:-1]: # type: JasperEncoderConfig
242 layer.separable = self.model_cfg.separable
243
244 # propagate repeat
245 for layer in self.model_cfg.encoder.jasper[1:-2]: # type: JasperEncoderConfig
246 layer.repeat = self.model_cfg.repeat
247
248 # propagate dropout
249 for layer in self.model_cfg.encoder.jasper: # type: JasperEncoderConfig
250 layer.dropout = self.model_cfg.dropout
251
252 def build(self) -> clf_cfg.EncDecClassificationConfig:
253 return super().build()
254
[end of nemo/collections/asr/models/configs/matchboxnet_config.py]
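The builder's `_finalize_cfg` propagates top-level values (labels, sample rate) into the nested dataset and decoder configs so they never drift apart. A minimal sketch of that propagation step, with simplified stand-in configs (the `finalize` helper is illustrative, not NeMo API):

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class DatasetCfg:
    labels: List[str] = field(default_factory=list)
    sample_rate: int = 0


@dataclass
class DecoderCfg:
    vocabulary: List[str] = field(default_factory=list)
    num_classes: int = 0


@dataclass
class ModelCfg:
    labels: List[str] = field(default_factory=list)
    sample_rate: int = 16000
    train_ds: DatasetCfg = field(default_factory=DatasetCfg)
    decoder: DecoderCfg = field(default_factory=DecoderCfg)


def finalize(cfg: ModelCfg) -> ModelCfg:
    # Push shared top-level values into the nested sub-configs,
    # mirroring what the builder's _finalize_cfg does.
    cfg.train_ds.labels = cfg.labels
    cfg.train_ds.sample_rate = cfg.sample_rate
    cfg.decoder.vocabulary = cfg.labels
    cfg.decoder.num_classes = len(cfg.labels)
    return cfg


cfg = finalize(ModelCfg(labels=["background", "speech"]))
print(cfg.decoder.num_classes)  # 2
```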
[start of nemo/collections/asr/models/configs/quartznet_config.py]
1 # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from dataclasses import dataclass
16 from typing import Any, Callable, List, Optional
17
18 from omegaconf import MISSING
19
20 from nemo.collections.asr.models.configs import asr_models_config as ctc_cfg
21 from nemo.collections.asr.modules.audio_preprocessing import (
22 AudioToMelSpectrogramPreprocessorConfig,
23 SpectrogramAugmentationConfig,
24 )
25 from nemo.collections.asr.modules.conv_asr import ConvASRDecoderConfig, ConvASREncoderConfig, JasperEncoderConfig
26 from nemo.core.config import modelPT as model_cfg
27
28
29 # fmt: off
30 def qn_15x5():
31 config = [
32 JasperEncoderConfig(filters=256, repeat=1, kernel=[33], stride=[2], dilation=[1], dropout=0.0,
33 residual=False, groups=1, separable=True, heads=-1, residual_mode='add',
34 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
35 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
36 JasperEncoderConfig(filters=256, repeat=5, kernel=[33], stride=[1], dilation=[1], dropout=0.0,
37 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
38 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
39 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
40 JasperEncoderConfig(filters=256, repeat=5, kernel=[33], stride=[1], dilation=[1], dropout=0.0,
41 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
42 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
43 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
44 JasperEncoderConfig(filters=256, repeat=5, kernel=[33], stride=[1], dilation=[1], dropout=0.0,
45 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
46 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
47 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
48 JasperEncoderConfig(filters=256, repeat=5, kernel=[39], stride=[1], dilation=[1], dropout=0.0,
49 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
50 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
51 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
52 JasperEncoderConfig(filters=256, repeat=5, kernel=[39], stride=[1], dilation=[1], dropout=0.0,
53 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
54 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
55 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
56 JasperEncoderConfig(filters=256, repeat=5, kernel=[39], stride=[1], dilation=[1], dropout=0.0,
57 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
58 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
59 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
60 JasperEncoderConfig(filters=512, repeat=5, kernel=[51], stride=[1], dilation=[1], dropout=0.0,
61 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
62 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
63 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
64 JasperEncoderConfig(filters=512, repeat=5, kernel=[51], stride=[1], dilation=[1], dropout=0.0,
65 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
66 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
67 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
68 JasperEncoderConfig(filters=512, repeat=5, kernel=[51], stride=[1], dilation=[1], dropout=0.0,
69 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
70 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
71 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
72 JasperEncoderConfig(filters=512, repeat=5, kernel=[63], stride=[1], dilation=[1], dropout=0.0,
73 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
74 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
75 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
76 JasperEncoderConfig(filters=512, repeat=5, kernel=[63], stride=[1], dilation=[1], dropout=0.0,
77 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
78 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
79 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
80 JasperEncoderConfig(filters=512, repeat=5, kernel=[63], stride=[1], dilation=[1], dropout=0.0,
81 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
82 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
83 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
84 JasperEncoderConfig(filters=512, repeat=5, kernel=[75], stride=[1], dilation=[1], dropout=0.0,
85 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
86 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
87 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
88 JasperEncoderConfig(filters=512, repeat=5, kernel=[75], stride=[1], dilation=[1], dropout=0.0,
89 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
90 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
91 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
92 JasperEncoderConfig(filters=512, repeat=5, kernel=[75], stride=[1], dilation=[1], dropout=0.0,
93 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
94 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
95 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
96 JasperEncoderConfig(filters=512, repeat=1, kernel=[87], stride=[1], dilation=[2], dropout=0.0,
97 residual=False, groups=1, separable=True, heads=-1, residual_mode='add',
98 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
99 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
100 JasperEncoderConfig(filters=1024, repeat=1, kernel=[1], stride=[1], dilation=[1], dropout=0.0,
101 residual=False, groups=1, separable=False, heads=-1, residual_mode='add',
102 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
103 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False)
104 ]
105 return config
106
107
108 def jasper_10x5_dr():
109 config = [
110 JasperEncoderConfig(filters=256, repeat=1, kernel=[11], stride=[2], dilation=[1], dropout=0.2,
111 residual=False, groups=1, separable=False, heads=-1, residual_mode='add',
112 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
113 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
114 JasperEncoderConfig(filters=256, repeat=5, kernel=[11], stride=[1], dilation=[1], dropout=0.2,
115 residual=True, groups=1, separable=False, heads=-1, residual_mode='add',
116 residual_dense=True, se=False, se_reduction_ratio=8, se_context_size=-1,
117 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
118 JasperEncoderConfig(filters=256, repeat=5, kernel=[11], stride=[1], dilation=[1], dropout=0.2,
119 residual=True, groups=1, separable=False, heads=-1, residual_mode='add',
120 residual_dense=True, se=False, se_reduction_ratio=8, se_context_size=-1,
121 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
122 JasperEncoderConfig(filters=384, repeat=5, kernel=[13], stride=[1], dilation=[1], dropout=0.2,
123 residual=True, groups=1, separable=False, heads=-1, residual_mode='add',
124 residual_dense=True, se=False, se_reduction_ratio=8, se_context_size=-1,
125 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
126 JasperEncoderConfig(filters=384, repeat=5, kernel=[13], stride=[1], dilation=[1], dropout=0.2,
127 residual=True, groups=1, separable=False, heads=-1, residual_mode='add',
128 residual_dense=True, se=False, se_reduction_ratio=8, se_context_size=-1,
129 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
130 JasperEncoderConfig(filters=512, repeat=5, kernel=[17], stride=[1], dilation=[1], dropout=0.2,
131 residual=True, groups=1, separable=False, heads=-1, residual_mode='add',
132 residual_dense=True, se=False, se_reduction_ratio=8, se_context_size=-1,
133 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
134 JasperEncoderConfig(filters=512, repeat=5, kernel=[17], stride=[1], dilation=[1], dropout=0.2,
135 residual=True, groups=1, separable=False, heads=-1, residual_mode='add',
136 residual_dense=True, se=False, se_reduction_ratio=8, se_context_size=-1,
137 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
138 JasperEncoderConfig(filters=640, repeat=5, kernel=[21], stride=[1], dilation=[1], dropout=0.3,
139 residual=True, groups=1, separable=False, heads=-1, residual_mode='add',
140 residual_dense=True, se=False, se_reduction_ratio=8, se_context_size=-1,
141 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
142 JasperEncoderConfig(filters=640, repeat=5, kernel=[21], stride=[1], dilation=[1], dropout=0.3,
143 residual=True, groups=1, separable=False, heads=-1, residual_mode='add',
144 residual_dense=True, se=False, se_reduction_ratio=8, se_context_size=-1,
145 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
146 JasperEncoderConfig(filters=768, repeat=5, kernel=[25], stride=[1], dilation=[1], dropout=0.3,
147 residual=True, groups=1, separable=False, heads=-1, residual_mode='add',
148 residual_dense=True, se=False, se_reduction_ratio=8, se_context_size=-1,
149 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
150 JasperEncoderConfig(filters=768, repeat=5, kernel=[25], stride=[1], dilation=[1], dropout=0.3,
151 residual=True, groups=1, separable=False, heads=-1, residual_mode='add',
152 residual_dense=True, se=False, se_reduction_ratio=8, se_context_size=-1,
153 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
154 JasperEncoderConfig(filters=896, repeat=1, kernel=[29], stride=[1], dilation=[2], dropout=0.4,
155 residual=False, groups=1, separable=False, heads=-1, residual_mode='add',
156 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
157 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
158 JasperEncoderConfig(filters=1024, repeat=1, kernel=[1], stride=[1], dilation=[1], dropout=0.4,
159 residual=False, groups=1, separable=False, heads=-1, residual_mode='add',
160 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
161 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False)
162 ]
163 return config
164 # fmt: on
165
166
167 @dataclass
168 class JasperModelConfig(ctc_cfg.EncDecCTCConfig):
169 # Model global arguments
170 sample_rate: int = 16000
171 repeat: int = 1
172 dropout: float = 0.0
173 separable: bool = False
174 labels: List[str] = MISSING
175
176 # Dataset configs
177 train_ds: ctc_cfg.ASRDatasetConfig = ctc_cfg.ASRDatasetConfig(
178 manifest_filepath=None, shuffle=True, trim_silence=True
179 )
180 validation_ds: ctc_cfg.ASRDatasetConfig = ctc_cfg.ASRDatasetConfig(manifest_filepath=None, shuffle=False)
181 test_ds: ctc_cfg.ASRDatasetConfig = ctc_cfg.ASRDatasetConfig(manifest_filepath=None, shuffle=False)
182
183 # Optimizer / Scheduler config
184 optim: Optional[model_cfg.OptimConfig] = model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
185
186 # Model general component configs
187 preprocessor: AudioToMelSpectrogramPreprocessorConfig = AudioToMelSpectrogramPreprocessorConfig()
188 spec_augment: Optional[SpectrogramAugmentationConfig] = SpectrogramAugmentationConfig()
189 encoder: ConvASREncoderConfig = ConvASREncoderConfig(activation="relu")
190 decoder: ConvASRDecoderConfig = ConvASRDecoderConfig()
191
192
193 @dataclass
194 class QuartzNetModelConfig(JasperModelConfig):
195 separable: bool = True
196
197
198 class EncDecCTCModelConfigBuilder(model_cfg.ModelConfigBuilder):
199 VALID_CONFIGS = ['quartznet_15x5', 'quartznet_15x5_zh', 'jasper_10x5dr']
200
201 def __init__(self, name: str = 'quartznet_15x5', encoder_cfg_func: Optional[Callable[[], List[Any]]] = None):
202 if name not in EncDecCTCModelConfigBuilder.VALID_CONFIGS:
203 raise ValueError("`name` must be one of:\n" f"{EncDecCTCModelConfigBuilder.VALID_CONFIGS}")
204
205 self.name = name
206
207 if 'quartznet_15x5' in name:
208 if encoder_cfg_func is None:
209 encoder_cfg_func = qn_15x5
210
211 model_cfg = QuartzNetModelConfig(
212 repeat=5,
213 separable=True,
214 spec_augment=SpectrogramAugmentationConfig(rect_masks=5, rect_freq=50, rect_time=120),
215 encoder=ConvASREncoderConfig(jasper=encoder_cfg_func(), activation="relu"),
216 decoder=ConvASRDecoderConfig(),
217 )
218
219 elif 'jasper_10x5' in name:
220 if encoder_cfg_func is None:
221 encoder_cfg_func = jasper_10x5_dr
222
223 model_cfg = JasperModelConfig(
224 repeat=5,
225 separable=False,
226 spec_augment=SpectrogramAugmentationConfig(rect_masks=5, rect_freq=50, rect_time=120),
227 encoder=ConvASREncoderConfig(jasper=encoder_cfg_func(), activation="relu"),
228 decoder=ConvASRDecoderConfig(),
229 )
230
231 else:
232 raise ValueError(f"Invalid config name submitted to {self.__class__.__name__}")
233
234 super(EncDecCTCModelConfigBuilder, self).__init__(model_cfg)
235 self.model_cfg: ctc_cfg.EncDecCTCConfig = model_cfg # enable type hinting
236
237 if 'zh' in name:
238 self.set_dataset_normalize(normalize=False)
239
240 def set_labels(self, labels: List[str]):
241 self.model_cfg.labels = labels
242
243 def set_separable(self, separable: bool):
244 self.model_cfg.separable = separable
245
246 def set_repeat(self, repeat: int):
247 self.model_cfg.repeat = repeat
248
249 def set_sample_rate(self, sample_rate: int):
250 self.model_cfg.sample_rate = sample_rate
251
252 def set_dropout(self, dropout: float = 0.0):
253 self.model_cfg.dropout = dropout
254
255 def set_dataset_normalize(self, normalize: bool):
256 self.model_cfg.train_ds.normalize = normalize
257 self.model_cfg.validation_ds.normalize = normalize
258 self.model_cfg.test_ds.normalize = normalize
259
260 # Note: Autocomplete for users won't work without these overrides
261 # But practically it is not needed since python will infer at runtime
262
263 # def set_train_ds(self, cfg: Optional[ctc_cfg.ASRDatasetConfig] = None):
264 # super().set_train_ds(cfg)
265 #
266 # def set_validation_ds(self, cfg: Optional[ctc_cfg.ASRDatasetConfig] = None):
267 # super().set_validation_ds(cfg)
268 #
269 # def set_test_ds(self, cfg: Optional[ctc_cfg.ASRDatasetConfig] = None):
270 # super().set_test_ds(cfg)
271
272 def _finalize_cfg(self):
273 # propagate labels
274 self.model_cfg.train_ds.labels = self.model_cfg.labels
275 self.model_cfg.validation_ds.labels = self.model_cfg.labels
276 self.model_cfg.test_ds.labels = self.model_cfg.labels
277 self.model_cfg.decoder.vocabulary = self.model_cfg.labels
278
279 # propagate num classes
280 self.model_cfg.decoder.num_classes = len(self.model_cfg.labels)
281
282 # propagate sample rate
283 self.model_cfg.sample_rate = self.model_cfg.sample_rate
284 self.model_cfg.preprocessor.sample_rate = self.model_cfg.sample_rate
285 self.model_cfg.train_ds.sample_rate = self.model_cfg.sample_rate
286 self.model_cfg.validation_ds.sample_rate = self.model_cfg.sample_rate
287 self.model_cfg.test_ds.sample_rate = self.model_cfg.sample_rate
288
289 # propagate filters
290 self.model_cfg.encoder.feat_in = self.model_cfg.preprocessor.features
291 self.model_cfg.decoder.feat_in = self.model_cfg.encoder.jasper[-1].filters
292
293 # propagate separable
294 for layer in self.model_cfg.encoder.jasper[:-1]: # type: JasperEncoderConfig
295 layer.separable = self.model_cfg.separable
296
297 # propagate repeat
298 for layer in self.model_cfg.encoder.jasper[1:-2]: # type: JasperEncoderConfig
299 layer.repeat = self.model_cfg.repeat
300
301 # propagate dropout
302 for layer in self.model_cfg.encoder.jasper: # type: JasperEncoderConfig
303 layer.dropout = self.model_cfg.dropout
304
305 def build(self) -> ctc_cfg.EncDecCTCConfig:
306 return super().build()
307
[end of nemo/collections/asr/models/configs/quartznet_config.py]
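The builder's `_finalize_cfg` step above propagates top-level settings (labels, repeat, dropout) down into the nested encoder and decoder configs, with `repeat` applied only to the middle encoder blocks (`jasper[1:-2]`). A minimal self-contained sketch of that propagation pattern, using hypothetical stand-in dataclasses rather than the real NeMo config classes:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical stand-ins for JasperEncoderConfig / ConvASRDecoderConfig,
# reduced to just the fields the propagation step touches.
@dataclass
class Layer:
    filters: int
    repeat: int = 1
    dropout: float = 0.0

@dataclass
class Decoder:
    num_classes: int = -1
    vocabulary: List[str] = field(default_factory=list)

def finalize(labels, layers, decoder, repeat, dropout):
    # Propagate labels and derive num_classes, as in _finalize_cfg.
    decoder.vocabulary = labels
    decoder.num_classes = len(labels)
    # Repeat is applied only to the middle blocks (mirrors jasper[1:-2]).
    for layer in layers[1:-2]:
        layer.repeat = repeat
    # Dropout is applied to every block (mirrors the loop over jasper).
    for layer in layers:
        layer.dropout = dropout
    return decoder, layers

labels = ["a", "b", "c"]
layers = [Layer(256), Layer(256), Layer(512), Layer(512), Layer(1024)]
decoder, layers = finalize(labels, layers, Decoder(), repeat=5, dropout=0.2)
print(decoder.num_classes)          # 3
print([l.repeat for l in layers])   # [1, 5, 5, 1, 1]
```

Note how the first block and the last two blocks keep `repeat=1`: in QuartzNet-style configs those are the prologue and epilogue layers, which are not repeated.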
[start of nemo/collections/asr/modules/audio_preprocessing.py]
1 # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import math
16 import random
17 from abc import ABC, abstractmethod
18 from dataclasses import dataclass
19 from typing import Any, Dict, Optional, Tuple
20
21 import torch
22 from packaging import version
23
24 from nemo.collections.asr.parts.numba.spec_augment import SpecAugmentNumba, spec_augment_launch_heuristics
25 from nemo.collections.asr.parts.preprocessing.features import (
26 FilterbankFeatures,
27 FilterbankFeaturesTA,
28 make_seq_mask_like,
29 )
30 from nemo.collections.asr.parts.submodules.spectr_augment import SpecAugment, SpecCutout
31 from nemo.core.classes import Exportable, NeuralModule, typecheck
32 from nemo.core.neural_types import (
33 AudioSignal,
34 LengthsType,
35 MelSpectrogramType,
36 MFCCSpectrogramType,
37 NeuralType,
38 SpectrogramType,
39 )
40 from nemo.core.utils import numba_utils
41 from nemo.core.utils.numba_utils import __NUMBA_MINIMUM_VERSION__
42 from nemo.utils import logging
43
44 try:
45 import torchaudio
46 import torchaudio.functional
47 import torchaudio.transforms
48
49 TORCHAUDIO_VERSION = version.parse(torchaudio.__version__)
50 TORCHAUDIO_VERSION_MIN = version.parse('0.5')
51
52 HAVE_TORCHAUDIO = True
53 except ModuleNotFoundError:
54 HAVE_TORCHAUDIO = False
55
56 __all__ = [
57 'AudioToMelSpectrogramPreprocessor',
58 'AudioToSpectrogram',
59 'SpectrogramToAudio',
60 'AudioToMFCCPreprocessor',
61 'SpectrogramAugmentation',
62 'MaskedPatchAugmentation',
63 'CropOrPadSpectrogramAugmentation',
64 ]
65
66
67 class AudioPreprocessor(NeuralModule, ABC):
68 """
69 An interface for Neural Modules that perform audio pre-processing,
70 transforming wav files to features.
71 """
72
73 def __init__(self, win_length, hop_length):
74 super().__init__()
75
76 self.win_length = win_length
77 self.hop_length = hop_length
78
79 self.torch_windows = {
80 'hann': torch.hann_window,
81 'hamming': torch.hamming_window,
82 'blackman': torch.blackman_window,
83 'bartlett': torch.bartlett_window,
84 'ones': torch.ones,
85 None: torch.ones,
86 }
87
88 @typecheck()
89 @torch.no_grad()
90 def forward(self, input_signal, length):
91 processed_signal, processed_length = self.get_features(input_signal, length)
92
93 return processed_signal, processed_length
94
95 @abstractmethod
96 def get_features(self, input_signal, length):
97 # Called by forward(). Subclasses should implement this.
98 pass
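The `torch_windows` mapping above lets a config name select a window function, with `None` falling back to a rectangular (all-ones) window; `AudioToMFCCPreprocessor` later validates the name with `.get(window, None)` and raises on unknown names. A torch-free sketch of the same lookup-and-validate pattern, with the real torch window factories replaced by simple stand-in functions:

```python
import math

# Stand-in window functions; the real mapping points at torch.hann_window etc.
def ones(n):
    return [1.0] * n

def hann(n):
    # Symmetric Hann window (assumes n >= 2).
    return [0.5 - 0.5 * math.cos(2 * math.pi * k / (n - 1)) for k in range(n)]

windows = {
    'hann': hann,
    'ones': ones,
    None: ones,   # None also maps to the rectangular window
}

def get_window_fn(window):
    # Mirrors the validation in AudioToMFCCPreprocessor.__init__
    window_fn = windows.get(window, None)
    if window_fn is None:
        raise ValueError(
            f"Window argument is invalid: {window}. "
            f"For no window function, use 'ones' or None."
        )
    return window_fn

print(get_window_fn(None)(4))  # [1.0, 1.0, 1.0, 1.0]
```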
99
100
101 class AudioToMelSpectrogramPreprocessor(AudioPreprocessor, Exportable):
102 """Featurizer module that converts wavs to mel spectrograms.
103
104 Args:
105 sample_rate (int): Sample rate of the input audio data.
106 Defaults to 16000
107 window_size (float): Size of window for fft in seconds
108 Defaults to 0.02
109 window_stride (float): Stride of window for fft in seconds
110 Defaults to 0.01
111 n_window_size (int): Size of window for fft in samples
112 Defaults to None. Use one of window_size or n_window_size.
113 n_window_stride (int): Stride of window for fft in samples
114 Defaults to None. Use one of window_stride or n_window_stride.
115 window (str): Windowing function for fft. can be one of ['hann',
116 'hamming', 'blackman', 'bartlett']
117 Defaults to "hann"
118 normalize (str): Can be one of ['per_feature', 'all_features']; all
119 other options disable feature normalization. 'all_features'
120 normalizes the entire spectrogram to be mean 0 with std 1.
121 'per_feature' normalizes per channel / freq instead.
122 Defaults to "per_feature"
123 n_fft (int): Length of FT window. If None, it uses the smallest power
124 of 2 that is larger than n_window_size.
125 Defaults to None
126 preemph (float): Amount of pre emphasis to add to audio. Can be
127 disabled by passing None.
128 Defaults to 0.97
129 features (int): Number of mel spectrogram freq bins to output.
130 Defaults to 64
131 lowfreq (int): Lower bound on mel basis in Hz.
132 Defaults to 0
133 highfreq (int): Upper bound on mel basis in Hz.
134 Defaults to None
135 log (bool): Log features.
136 Defaults to True
137 log_zero_guard_type(str): Need to avoid taking the log of zero. There
138 are two options: "add" or "clamp".
139 Defaults to "add".
140 log_zero_guard_value(float, or str): Add or clamp requires the number
141 to add with or clamp to. log_zero_guard_value can either be a float
142 or "tiny" or "eps". torch.finfo is used if "tiny" or "eps" is
143 passed.
144 Defaults to 2**-24.
145 dither (float): Amount of white-noise dithering.
146 Defaults to 1e-5
147 pad_to (int): Ensures that the output size of the time dimension is
148 a multiple of pad_to.
149 Defaults to 16
150 frame_splicing (int): Defaults to 1
151 exact_pad (bool): If True, sets stft center to False and adds padding, such that num_frames = audio_length
152 // hop_length. Defaults to False.
153 pad_value (float): The value that shorter mels are padded with.
154 Defaults to 0
155 mag_power (float): The power that the linear spectrogram is raised to
156 prior to multiplication with mel basis.
157 Defaults to 2 for a power spec
158 rng : Random number generator
159 nb_augmentation_prob (float) : Probability with which narrowband augmentation would be applied to
160 samples in the batch.
161 Defaults to 0.0
162 nb_max_freq (int) : Frequency above which all frequencies will be masked for narrowband augmentation.
163 Defaults to 4000
164 use_torchaudio: Whether to use the `torchaudio` implementation.
165 mel_norm: Normalization used for mel filterbank weights.
166 Defaults to 'slaney' (area normalization)
167 stft_exact_pad: Deprecated argument, kept for compatibility with older checkpoints.
168 stft_conv: Deprecated argument, kept for compatibility with older checkpoints.
169 """
170
171 def save_to(self, save_path: str):
172 pass
173
174 @classmethod
175 def restore_from(cls, restore_path: str):
176 pass
177
178 @property
179 def input_types(self):
180 """Returns definitions of module input ports.
181 """
182 return {
183 "input_signal": NeuralType(('B', 'T'), AudioSignal(freq=self._sample_rate)),
184 "length": NeuralType(
185 tuple('B'), LengthsType()
186 ), # Please note that length should be in samples not seconds.
187 }
188
189 @property
190 def output_types(self):
191 """Returns definitions of module output ports.
192
193 processed_signal:
194 0: AxisType(BatchTag)
195 1: AxisType(MelSpectrogramSignalTag)
196 2: AxisType(ProcessedTimeTag)
197 processed_length:
198 0: AxisType(BatchTag)
199 """
200 return {
201 "processed_signal": NeuralType(('B', 'D', 'T'), MelSpectrogramType()),
202 "processed_length": NeuralType(tuple('B'), LengthsType()),
203 }
204
205 def __init__(
206 self,
207 sample_rate=16000,
208 window_size=0.02,
209 window_stride=0.01,
210 n_window_size=None,
211 n_window_stride=None,
212 window="hann",
213 normalize="per_feature",
214 n_fft=None,
215 preemph=0.97,
216 features=64,
217 lowfreq=0,
218 highfreq=None,
219 log=True,
220 log_zero_guard_type="add",
221 log_zero_guard_value=2 ** -24,
222 dither=1e-5,
223 pad_to=16,
224 frame_splicing=1,
225 exact_pad=False,
226 pad_value=0,
227 mag_power=2.0,
228 rng=None,
229 nb_augmentation_prob=0.0,
230 nb_max_freq=4000,
231 use_torchaudio: bool = False,
232 mel_norm="slaney",
233 stft_exact_pad=False, # Deprecated arguments; kept for config compatibility
234 stft_conv=False, # Deprecated arguments; kept for config compatibility
235 ):
236 super().__init__(n_window_size, n_window_stride)
237
238 self._sample_rate = sample_rate
239 if window_size and n_window_size:
240 raise ValueError(f"{self} received both window_size and " f"n_window_size. Only one should be specified.")
241 if window_stride and n_window_stride:
242 raise ValueError(
243 f"{self} received both window_stride and " f"n_window_stride. Only one should be specified."
244 )
245 if window_size:
246 n_window_size = int(window_size * self._sample_rate)
247 if window_stride:
248 n_window_stride = int(window_stride * self._sample_rate)
249
250 # Given the long and similar argument list, point to the class and instantiate it by reference
251 if not use_torchaudio:
252 featurizer_class = FilterbankFeatures
253 else:
254 featurizer_class = FilterbankFeaturesTA
255 self.featurizer = featurizer_class(
256 sample_rate=self._sample_rate,
257 n_window_size=n_window_size,
258 n_window_stride=n_window_stride,
259 window=window,
260 normalize=normalize,
261 n_fft=n_fft,
262 preemph=preemph,
263 nfilt=features,
264 lowfreq=lowfreq,
265 highfreq=highfreq,
266 log=log,
267 log_zero_guard_type=log_zero_guard_type,
268 log_zero_guard_value=log_zero_guard_value,
269 dither=dither,
270 pad_to=pad_to,
271 frame_splicing=frame_splicing,
272 exact_pad=exact_pad,
273 pad_value=pad_value,
274 mag_power=mag_power,
275 rng=rng,
276 nb_augmentation_prob=nb_augmentation_prob,
277 nb_max_freq=nb_max_freq,
278 mel_norm=mel_norm,
279 stft_exact_pad=stft_exact_pad, # Deprecated arguments; kept for config compatibility
280 stft_conv=stft_conv, # Deprecated arguments; kept for config compatibility
281 )
282
283 def input_example(self, max_batch: int = 8, max_dim: int = 32000, min_length: int = 200):
284 batch_size = torch.randint(low=1, high=max_batch, size=[1]).item()
285 max_length = torch.randint(low=min_length, high=max_dim, size=[1]).item()
286 signals = torch.rand(size=[batch_size, max_length]) * 2 - 1
287 lengths = torch.randint(low=min_length, high=max_dim, size=[batch_size])
288 lengths[0] = max_length
289 return signals, lengths
290
291 def get_features(self, input_signal, length):
292 return self.featurizer(input_signal, length)
293
294 @property
295 def filter_banks(self):
296 return self.featurizer.filter_banks
297
298
299 class AudioToMFCCPreprocessor(AudioPreprocessor):
300 """Preprocessor that converts wavs to MFCCs.
301 Uses torchaudio.transforms.MFCC.
302
303 Args:
304 sample_rate: The sample rate of the audio.
305 Defaults to 16000.
306 window_size: Size of window for fft in seconds. Used to calculate the
307 win_length arg for mel spectrogram.
308 Defaults to 0.02
309 window_stride: Stride of window for fft in seconds. Used to calculate
310 the hop_length arg for the mel spectrogram.
311 Defaults to 0.01
312 n_window_size: Size of window for fft in samples
313 Defaults to None. Use one of window_size or n_window_size.
314 n_window_stride: Stride of window for fft in samples
315 Defaults to None. Use one of window_stride or n_window_stride.
316 window: Windowing function for fft. can be one of ['hann',
317 'hamming', 'blackman', 'bartlett', 'none', 'null'].
318 Defaults to 'hann'
319 n_fft: Length of FT window. If None, it uses the smallest power of 2
320 that is larger than n_window_size.
321 Defaults to None
322 lowfreq (int): Lower bound on mel basis in Hz.
323 Defaults to 0
324 highfreq (int): Upper bound on mel basis in Hz.
325 Defaults to None
326 n_mels: Number of mel filterbanks.
327 Defaults to 64
328 n_mfcc: Number of coefficients to retain
329 Defaults to 64
330 dct_type: Type of discrete cosine transform to use
331 norm: Type of norm to use
332 log: Whether to use log-mel spectrograms instead of db-scaled.
333 Defaults to True.
334 """
335
336 @property
337 def input_types(self):
338 """Returns definitions of module input ports.
339 """
340 return {
341 "input_signal": NeuralType(('B', 'T'), AudioSignal(freq=self._sample_rate)),
342 "length": NeuralType(tuple('B'), LengthsType()),
343 }
344
345 @property
346 def output_types(self):
347 """Returns definitions of module output ports.
348 """
349 return {
350 "processed_signal": NeuralType(('B', 'D', 'T'), MFCCSpectrogramType()),
351 "processed_length": NeuralType(tuple('B'), LengthsType()),
352 }
353
354 def save_to(self, save_path: str):
355 pass
356
357 @classmethod
358 def restore_from(cls, restore_path: str):
359 pass
360
361 def __init__(
362 self,
363 sample_rate=16000,
364 window_size=0.02,
365 window_stride=0.01,
366 n_window_size=None,
367 n_window_stride=None,
368 window='hann',
369 n_fft=None,
370 lowfreq=0.0,
371 highfreq=None,
372 n_mels=64,
373 n_mfcc=64,
374 dct_type=2,
375 norm='ortho',
376 log=True,
377 ):
378 self._sample_rate = sample_rate
379 if not HAVE_TORCHAUDIO:
380 logging.error('Could not import torchaudio. Some features might not work.')
381
382 raise ModuleNotFoundError(
383 "torchaudio is not installed but is necessary for "
384 "AudioToMFCCPreprocessor. We recommend you try "
385 "building it from source for the PyTorch version you have."
386 )
387 if window_size and n_window_size:
388 raise ValueError(f"{self} received both window_size and " f"n_window_size. Only one should be specified.")
389 if window_stride and n_window_stride:
390 raise ValueError(
391 f"{self} received both window_stride and " f"n_window_stride. Only one should be specified."
392 )
393 # Get win_length (n_window_size) and hop_length (n_window_stride)
394 if window_size:
395 n_window_size = int(window_size * self._sample_rate)
396 if window_stride:
397 n_window_stride = int(window_stride * self._sample_rate)
398
399 super().__init__(n_window_size, n_window_stride)
400
401 mel_kwargs = {}
402
403 mel_kwargs['f_min'] = lowfreq
404 mel_kwargs['f_max'] = highfreq
405 mel_kwargs['n_mels'] = n_mels
406
407 mel_kwargs['n_fft'] = n_fft or 2 ** math.ceil(math.log2(n_window_size))
408
409 mel_kwargs['win_length'] = n_window_size
410 mel_kwargs['hop_length'] = n_window_stride
411
412 # Set window_fn. None defaults to torch.ones.
413 window_fn = self.torch_windows.get(window, None)
414 if window_fn is None:
415 raise ValueError(
416 f"Window argument for AudioProcessor is invalid: {window}. "
417 f"For no window function, use 'ones' or None."
418 )
419 mel_kwargs['window_fn'] = window_fn
420
421 # Use torchaudio's implementation of MFCCs as featurizer
422 self.featurizer = torchaudio.transforms.MFCC(
423 sample_rate=self._sample_rate,
424 n_mfcc=n_mfcc,
425 dct_type=dct_type,
426 norm=norm,
427 log_mels=log,
428 melkwargs=mel_kwargs,
429 )
430
431 def get_features(self, input_signal, length):
432 features = self.featurizer(input_signal)
433 seq_len = torch.ceil(length.to(torch.float32) / self.hop_length).to(dtype=torch.long)
434 return features, seq_len
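`get_features` derives the output sequence length from the hop length by ceiling division: `seq_len = ceil(length / hop_length)`. The same computation in plain Python, torch-free, with assumed example values (hop of 160 samples, i.e. a 0.01 s stride at 16 kHz):

```python
import math

hop_length = 160                # n_window_stride for 0.01 s at 16 kHz
lengths = [16000, 15999, 161]   # example input lengths, in samples

# seq_len = ceil(length / hop_length), as in get_features
seq_lens = [math.ceil(n / hop_length) for n in lengths]
print(seq_lens)  # [100, 100, 2]
```

Even a single sample past a frame boundary (161 with hop 160) yields an extra output frame, which is why the ceiling is used rather than floor division.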
435
436
437 class SpectrogramAugmentation(NeuralModule):
438 """
439 Performs time and freq cuts in one of two ways.
440 SpecAugment zeroes out vertical and horizontal sections as described in
441 SpecAugment (https://arxiv.org/abs/1904.08779). Arguments for use with
442 SpecAugment are `freq_masks`, `time_masks`, `freq_width`, and `time_width`.
443 SpecCutout zeroes out rectangular regions as described in Cutout
444 (https://arxiv.org/abs/1708.04552). Arguments for use with Cutout are
445 `rect_masks`, `rect_freq`, and `rect_time`.
446
447 Args:
448 freq_masks (int): how many frequency segments should be cut.
449 Defaults to 0.
450 time_masks (int): how many time segments should be cut
451 Defaults to 0.
452 freq_width (int): maximum number of frequencies to be cut in one
453 segment.
454 Defaults to 10.
455 time_width (int): maximum number of time steps to be cut in one
456 segment
457 Defaults to 10.
458 rect_masks (int): how many rectangular masks should be cut
459 Defaults to 0.
460 rect_freq (int): maximum size of cut rectangles along the frequency
461 dimension
462 Defaults to 5.
463 rect_time (int): maximum size of cut rectangles along the time
464 dimension
465 Defaults to 25.
466 """
467
468 @property
469 def input_types(self):
470 """Returns definitions of module input types
471 """
472 return {
473 "input_spec": NeuralType(('B', 'D', 'T'), SpectrogramType()),
474 "length": NeuralType(tuple('B'), LengthsType()),
475 }
476
477 @property
478 def output_types(self):
479 """Returns definitions of module output types
480 """
481 return {"augmented_spec": NeuralType(('B', 'D', 'T'), SpectrogramType())}
482
483 def __init__(
484 self,
485 freq_masks=0,
486 time_masks=0,
487 freq_width=10,
488 time_width=10,
489 rect_masks=0,
490 rect_time=5,
491 rect_freq=20,
492 rng=None,
493 mask_value=0.0,
494 use_numba_spec_augment: bool = True,
495 ):
496 super().__init__()
497
498 if rect_masks > 0:
499 self.spec_cutout = SpecCutout(rect_masks=rect_masks, rect_time=rect_time, rect_freq=rect_freq, rng=rng,)
500 # self.spec_cutout.to(self._device)
501 else:
502 self.spec_cutout = lambda input_spec: input_spec
503 if freq_masks + time_masks > 0:
504 self.spec_augment = SpecAugment(
505 freq_masks=freq_masks,
506 time_masks=time_masks,
507 freq_width=freq_width,
508 time_width=time_width,
509 rng=rng,
510 mask_value=mask_value,
511 )
512 else:
513 self.spec_augment = lambda input_spec, length: input_spec
514
515 # Check if numba is supported, and use a Numba kernel if it is
516 if use_numba_spec_augment and numba_utils.numba_cuda_is_supported(__NUMBA_MINIMUM_VERSION__):
517 logging.info('Numba CUDA SpecAugment kernel is being used')
518 self.spec_augment_numba = SpecAugmentNumba(
519 freq_masks=freq_masks,
520 time_masks=time_masks,
521 freq_width=freq_width,
522 time_width=time_width,
523 rng=rng,
524 mask_value=mask_value,
525 )
526 else:
527 self.spec_augment_numba = None
528
529 @typecheck()
530 def forward(self, input_spec, length):
531 augmented_spec = self.spec_cutout(input_spec=input_spec)
532
533 # To run the Numba kernel, a supported numba version is required, the
534 # tensor must be on GPU, and length must be provided
535 if self.spec_augment_numba is not None and spec_augment_launch_heuristics(augmented_spec, length):
536 augmented_spec = self.spec_augment_numba(input_spec=augmented_spec, length=length)
537 else:
538 augmented_spec = self.spec_augment(input_spec=augmented_spec, length=length)
539 return augmented_spec
540
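For intuition, the band-zeroing that SpecAugment performs can be sketched in a few lines of plain Python. This is a toy, list-based re-implementation for illustration only; `time_mask` and its arguments are hypothetical names, not the NeMo API:

```python
import random

def time_mask(spec, time_masks=1, time_width=2, rng=None):
    # spec: nested list indexed as [freq][time]; zero out `time_masks`
    # vertical bands of up to `time_width` consecutive time steps
    rng = rng or random.Random(0)
    n_time = len(spec[0])
    for _ in range(time_masks):
        w = rng.randint(0, time_width)
        start = rng.randint(0, max(0, n_time - w))
        for row in spec:
            for t in range(start, start + w):
                row[t] = 0.0
    return spec
```

The real module operates on batched `(B, D, T)` tensors, additionally supports frequency masks, and can dispatch to a Numba CUDA kernel.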
541
542 class MaskedPatchAugmentation(NeuralModule):
543 """
544 Zeroes out fixed size time patches of the spectrogram.
545 All samples in batch are guaranteed to have the same amount of masked time steps.
546 Optionally also performs frequency masking in the same way as SpecAugment.
547 Args:
548 patch_size (int): maximum number of time steps in one patch.
549 Defaults to 48.
550 mask_patches (float): how many patches should be masked in each sample.
551 if >= 1., interpreted as number of patches (after converting to int)
552 if <1., interpreted as fraction of total tokens to be masked (number of patches is rounded up)
553 Defaults to 10.
554 freq_masks (int): how many frequency segments should be cut.
555 Defaults to 0.
556 freq_width (int): maximum number of frequencies to be cut in a segment.
557 Defaults to 0.
558 """
559
560 @property
561 def input_types(self):
562 """Returns definitions of module input types
563 """
564 return {
565 "input_spec": NeuralType(('B', 'D', 'T'), SpectrogramType()),
566 "length": NeuralType(tuple('B'), LengthsType()),
567 }
568
569 @property
570 def output_types(self):
571 """Returns definitions of module output types
572 """
573 return {"augmented_spec": NeuralType(('B', 'D', 'T'), SpectrogramType())}
574
575 def __init__(
576 self, patch_size: int = 48, mask_patches: float = 10.0, freq_masks: int = 0, freq_width: int = 0,
577 ):
578 super().__init__()
579 self.patch_size = patch_size
580 if mask_patches >= 1:
581 self.mask_patches = int(mask_patches)
582 elif mask_patches >= 0:
583 self._mask_fraction = mask_patches
584 self.mask_patches = None
585 else:
586 raise ValueError('mask_patches cannot be negative')
587
588 if freq_masks > 0:
589 self.spec_augment = SpecAugment(freq_masks=freq_masks, time_masks=0, freq_width=freq_width, time_width=0,)
590 else:
591 self.spec_augment = None
592
593 @typecheck()
594 def forward(self, input_spec, length):
595 augmented_spec = input_spec
596
597 min_len = torch.min(length)
598
599 if self.mask_patches is None:
600 # masking specified as fraction
601 len_fraction = int(min_len * self._mask_fraction)
602 mask_patches = len_fraction // self.patch_size + int(len_fraction % self.patch_size != 0)
603 else:
604 mask_patches = self.mask_patches
605
606 if min_len < self.patch_size * mask_patches:
607 mask_patches = int(min_len // self.patch_size)  # min_len is a 0-dim tensor; random.sample needs an int count
608
609 for idx in range(input_spec.shape[0]):
610 cur_len = length[idx]
611 patches = range(cur_len // self.patch_size)
612 masked_patches = random.sample(patches, mask_patches)
613
614 for mp in masked_patches:
615 augmented_spec[idx, :, mp * self.patch_size : (mp + 1) * self.patch_size] = 0.0
616
617 if self.spec_augment is not None:
618 augmented_spec = self.spec_augment(input_spec=augmented_spec, length=length)
619
620 return augmented_spec
621
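The fraction-to-patch-count conversion in `forward` above is a ceiling division; the same arithmetic as a standalone sketch (the helper name is illustrative, not part of NeMo):

```python
def num_mask_patches(min_len, mask_fraction, patch_size):
    # number of whole patches needed to cover `mask_fraction` of
    # `min_len` time steps, rounded up (mirrors the forward() math)
    len_fraction = int(min_len * mask_fraction)
    return len_fraction // patch_size + int(len_fraction % patch_size != 0)
```

For example, masking 10% of 1000 frames with 48-frame patches needs 3 patches, since 100 frames do not fit evenly into 2 patches.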
622
623 class CropOrPadSpectrogramAugmentation(NeuralModule):
624 """
625 Pad or Crop the incoming Spectrogram to a certain shape.
626
627 Args:
628 audio_length (int): the final number of timesteps that is required.
629 The signal will be either padded or cropped temporally to this
630 size.
631 """
632
633 def __init__(self, audio_length):
634 super(CropOrPadSpectrogramAugmentation, self).__init__()
635 self.audio_length = audio_length
636
637 @typecheck()
638 @torch.no_grad()
639 def forward(self, input_signal, length):
640 image = input_signal
641 num_images = image.shape[0]
642
643 audio_length = self.audio_length
644 image_len = image.shape[-1]
645
646 # Crop long signal
647 if image_len > audio_length: # randomly slice
648 cutout_images = []
649 offsets = torch.randint(low=0, high=image_len - audio_length + 1, size=[num_images])
650
651 for idx, offset in enumerate(offsets):
652 cutout_images.append(image[idx : idx + 1, :, offset : offset + audio_length])
653
654 image = torch.cat(cutout_images, dim=0)
655 del cutout_images
656
657 else: # symmetrically pad short signal with zeros
658 pad_left = (audio_length - image_len) // 2
659 pad_right = (audio_length - image_len) // 2
660
661 if (audio_length - image_len) % 2 == 1:
662 pad_right += 1
663
664 image = torch.nn.functional.pad(image, [pad_left, pad_right], mode="constant", value=0)
665
666 # Replace dynamic length sequences with static number of timesteps
667 length = (length * 0) + audio_length
668
669 return image, length
670
671 @property
672 def input_types(self):
673 """Returns definitions of module output ports.
674 """
675 return {
676 "input_signal": NeuralType(('B', 'D', 'T'), SpectrogramType()),
677 "length": NeuralType(tuple('B'), LengthsType()),
678 }
679
680 @property
681 def output_types(self):
682 """Returns definitions of module output ports.
683 """
684 return {
685 "processed_signal": NeuralType(('B', 'D', 'T'), SpectrogramType()),
686 "processed_length": NeuralType(tuple('B'), LengthsType()),
687 }
688
689 def save_to(self, save_path: str):
690 pass
691
692 @classmethod
693 def restore_from(cls, restore_path: str):
694 pass
695
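The symmetric padding in `CropOrPadSpectrogramAugmentation.forward` splits the deficit evenly and gives the odd leftover frame to the right side; the arithmetic in isolation (a hypothetical helper mirroring the code above):

```python
def symmetric_pad(audio_length, image_len):
    # split the deficit evenly; an odd remainder goes to the right side,
    # matching CropOrPadSpectrogramAugmentation.forward
    pad_left = (audio_length - image_len) // 2
    pad_right = (audio_length - image_len) // 2
    if (audio_length - image_len) % 2 == 1:
        pad_right += 1
    return pad_left, pad_right
```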
696
697 class AudioToSpectrogram(NeuralModule):
698 """Transform a batch of input multi-channel signals into a batch of
699 STFT-based spectrograms.
700
701 Args:
702 fft_length: length of FFT
703 hop_length: length of hops/shifts of the sliding window
704 power: exponent for magnitude spectrogram. Default `None` will
705 return a complex-valued spectrogram
706 """
707
708 def __init__(self, fft_length: int, hop_length: int, power: Optional[float] = None):
709 if not HAVE_TORCHAUDIO:
710 logging.error('Could not import torchaudio. Some features might not work.')
711
712 raise ModuleNotFoundError(
713 "torchaudio is not installed but is necessary to instantiate a {self.__class__.__name__}"
714 )
715
716 super().__init__()
717
718 # For now, assume FFT length is divisible by two
719 if fft_length % 2 != 0:
720 raise ValueError(f'fft_length = {fft_length} must be divisible by 2')
721
722 self.stft = torchaudio.transforms.Spectrogram(
723 n_fft=fft_length, hop_length=hop_length, power=power, pad_mode='constant'
724 )
725
726 # number of subbands
727 self.F = fft_length // 2 + 1
728
729 @property
730 def num_subbands(self) -> int:
731 return self.F
732
733 @property
734 def input_types(self) -> Dict[str, NeuralType]:
735 """Returns definitions of module output ports.
736 """
737 return {
738 "input": NeuralType(('B', 'C', 'T'), AudioSignal()),
739 "input_length": NeuralType(('B',), LengthsType(), optional=True),
740 }
741
742 @property
743 def output_types(self) -> Dict[str, NeuralType]:
744 """Returns definitions of module output ports.
745 """
746 return {
747 "output": NeuralType(('B', 'C', 'D', 'T'), SpectrogramType()),
748 "output_length": NeuralType(('B',), LengthsType()),
749 }
750
751 @typecheck()
752 def forward(
753 self, input: torch.Tensor, input_length: Optional[torch.Tensor] = None
754 ) -> Tuple[torch.Tensor, torch.Tensor]:
755 """Convert a batch of C-channel input signals
756 into a batch of complex-valued spectrograms.
757
758 Args:
759 input: Time-domain input signal with C channels, shape (B, C, T)
760 input_length: Length of valid entries along the time dimension, shape (B,)
761
762 Returns:
763 Output spectrogram with F subbands and N time frames, shape (B, C, F, N)
764 and output length with shape (B,).
765 """
766 B, T = input.size(0), input.size(-1)
767 input = input.view(B, -1, T)
768
769 # STFT output (B, C, F, N)
770 with torch.cuda.amp.autocast(enabled=False):
771 output = self.stft(input.float())
772
773 if input_length is not None:
774 # Mask padded frames
775 output_length = self.get_output_length(input_length=input_length)
776
777 length_mask: torch.Tensor = make_seq_mask_like(
778 lengths=output_length, like=output, time_dim=-1, valid_ones=False
779 )
780 output = output.masked_fill(length_mask, 0.0)
781 else:
782 # Assume all frames are valid for all examples in the batch
783 output_length = output.size(-1) * torch.ones(B, device=output.device).long()
784
785 return output, output_length
786
787 def get_output_length(self, input_length: torch.Tensor) -> torch.Tensor:
788 """Get length of valid frames for the output.
789
790 Args:
791 input_length: number of valid samples, shape (B,)
792
793 Returns:
794 Number of valid frames, shape (B,)
795 """
796 output_length = input_length.div(self.stft.hop_length, rounding_mode='floor').add(1).long()
797 return output_length
798
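`get_output_length` above uses the standard frame count for a centered STFT, `floor(T / hop) + 1`; in plain Python (no torch, purely illustrative):

```python
def stft_num_frames(num_samples, hop_length):
    # centered STFT: one frame per hop, plus the frame at t = 0
    return num_samples // hop_length + 1
```

With a 160-sample hop, one second of 16 kHz audio yields 101 frames.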
799
800 class SpectrogramToAudio(NeuralModule):
801 """Transform a batch of input multi-channel spectrograms into a batch of
802 time-domain multi-channel signals.
803
804 Args:
805 fft_length: length of FFT
806 hop_length: length of hops/shifts of the sliding window
807 """
810
811 def __init__(self, fft_length: int, hop_length: int):
812 if not HAVE_TORCHAUDIO:
813 logging.error('Could not import torchaudio. Some features might not work.')
814
815 raise ModuleNotFoundError(
816 "torchaudio is not installed but is necessary to instantiate a {self.__class__.__name__}"
817 )
818
819 super().__init__()
820
821 # For now, assume FFT length is divisible by two
822 if fft_length % 2 != 0:
823 raise ValueError(f'fft_length = {fft_length} must be divisible by 2')
824
825 self.istft = torchaudio.transforms.InverseSpectrogram(
826 n_fft=fft_length, hop_length=hop_length, pad_mode='constant'
827 )
828
829 self.F = fft_length // 2 + 1
830
831 @property
832 def num_subbands(self) -> int:
833 return self.F
834
835 @property
836 def input_types(self) -> Dict[str, NeuralType]:
837 """Returns definitions of module output ports.
838 """
839 return {
840 "input": NeuralType(('B', 'C', 'D', 'T'), SpectrogramType()),
841 "input_length": NeuralType(('B',), LengthsType(), optional=True),
842 }
843
844 @property
845 def output_types(self) -> Dict[str, NeuralType]:
846 """Returns definitions of module output ports.
847 """
848 return {
849 "output": NeuralType(('B', 'C', 'T'), AudioSignal()),
850 "output_length": NeuralType(('B',), LengthsType()),
851 }
852
853 @typecheck()
854 def forward(self, input: torch.Tensor, input_length: Optional[torch.Tensor] = None) -> torch.Tensor:
855 """Convert input complex-valued spectrogram to a time-domain
856 signal. Multi-channel IO is supported.
857
858 Args:
859 input: Input spectrogram for C channels, shape (B, C, F, N)
860 input_length: Length of valid entries along the time dimension, shape (B,)
861
862 Returns:
863 Time-domain signal with T time-domain samples and C channels, (B, C, T)
864 and output length with shape (B,).
865 """
866 B, F, N = input.size(0), input.size(-2), input.size(-1)
867 assert F == self.F, f'Number of subbands F={F} not matching self.F={self.F}'
868 input = input.view(B, -1, F, N)
869
870 # iSTFT output (B, C, T)
871 with torch.cuda.amp.autocast(enabled=False):
872 output = self.istft(input.cfloat())
873
874 if input_length is not None:
875 # Mask padded samples
876 output_length = self.get_output_length(input_length=input_length)
877
878 length_mask: torch.Tensor = make_seq_mask_like(
879 lengths=output_length, like=output, time_dim=-1, valid_ones=False
880 )
881 output = output.masked_fill(length_mask, 0.0)
882 else:
883 # Assume all samples are valid for all examples in the batch
884 output_length = output.size(-1) * torch.ones(B, device=output.device).long()
885
886 return output, output_length
887
888 def get_output_length(self, input_length: torch.Tensor) -> torch.Tensor:
889 """Get length of valid samples for the output.
890
891 Args:
892 input_length: number of valid frames, shape (B,)
893
894 Returns:
895 Number of valid samples, shape (B,)
896 """
897 output_length = input_length.sub(1).mul(self.istft.hop_length).long()
898 return output_length
899
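`get_output_length` here is the inverse of the analysis-side frame count: `N` frames correspond to `(N - 1) * hop` reconstructed samples. A plain-Python sketch with a round-trip check (illustrative helper name):

```python
def istft_num_samples(num_frames, hop_length):
    # inverse of the centered-STFT frame count: frames = samples // hop + 1
    return (num_frames - 1) * hop_length
```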
900
901 @dataclass
902 class AudioToMelSpectrogramPreprocessorConfig:
903 _target_: str = "nemo.collections.asr.modules.AudioToMelSpectrogramPreprocessor"
904 sample_rate: int = 16000
905 window_size: float = 0.02
906 window_stride: float = 0.01
907 n_window_size: Optional[int] = None
908 n_window_stride: Optional[int] = None
909 window: str = "hann"
910 normalize: str = "per_feature"
911 n_fft: Optional[int] = None
912 preemph: float = 0.97
913 features: int = 64
914 lowfreq: int = 0
915 highfreq: Optional[int] = None
916 log: bool = True
917 log_zero_guard_type: str = "add"
918 log_zero_guard_value: float = 2 ** -24
919 dither: float = 1e-5
920 pad_to: int = 16
921 frame_splicing: int = 1
922 exact_pad: bool = False
923 pad_value: int = 0
924 mag_power: float = 2.0
925 rng: Optional[str] = None
926 nb_augmentation_prob: float = 0.0
927 nb_max_freq: int = 4000
928 use_torchaudio: bool = False
929 mel_norm: str = "slaney"
930 stft_exact_pad: bool = False # Deprecated argument, kept for compatibility with older checkpoints.
931 stft_conv: bool = False # Deprecated argument, kept for compatibility with older checkpoints.
932
933
934 @dataclass
935 class AudioToMFCCPreprocessorConfig:
936 _target_: str = 'nemo.collections.asr.modules.AudioToMFCCPreprocessor'
937 sample_rate: int = 16000
938 window_size: float = 0.02
939 window_stride: float = 0.01
940 n_window_size: Optional[int] = None
941 n_window_stride: Optional[int] = None
942 window: str = 'hann'
943 n_fft: Optional[int] = None
944 lowfreq: Optional[float] = 0.0
945 highfreq: Optional[float] = None
946 n_mels: int = 64
947 n_mfcc: int = 64
948 dct_type: int = 2
949 norm: str = 'ortho'
950 log: bool = True
951
952
953 @dataclass
954 class SpectrogramAugmentationConfig:
955 _target_: str = "nemo.collections.asr.modules.SpectrogramAugmentation"
956 freq_masks: int = 0
957 time_masks: int = 0
958 freq_width: int = 0
959 time_width: Optional[Any] = 0
960 rect_masks: int = 0
961 rect_time: int = 0
962 rect_freq: int = 0
963 mask_value: float = 0
964 rng: Optional[Any] = None # random.Random() type
965 use_numba_spec_augment: bool = True
966
967
968 @dataclass
969 class CropOrPadSpectrogramAugmentationConfig:
970 audio_length: int
971 _target_: str = "nemo.collections.asr.modules.CropOrPadSpectrogramAugmentation"
972
973
974 @dataclass
975 class MaskedPatchAugmentationConfig:
976 patch_size: int = 48
977 mask_patches: float = 10.0
978 freq_masks: int = 0
979 freq_width: int = 0
980 _target_: str = "nemo.collections.asr.modules.MaskedPatchAugmentation"
981
[end of nemo/collections/asr/modules/audio_preprocessing.py]
[start of nemo/collections/asr/parts/k2/classes.py]
1 # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from abc import ABC
16 from dataclasses import dataclass
17 from typing import Any, Optional, Tuple
18
19 import torch
20 from omegaconf import DictConfig
21
22 from nemo.utils import logging
23
24
25 @dataclass
26 class GraphIntersectDenseConfig:
27 """Graph dense intersection config.
28 """
29
30 search_beam: float = 20.0
31 output_beam: float = 10.0
32 min_active_states: int = 30
33 max_active_states: int = 10000
34
35
36 @dataclass
37 class GraphModuleConfig:
38 """Config for graph modules.
39 Typically used with graph losses and decoders.
40 """
41
42 topo_type: str = "default"
43 topo_with_self_loops: bool = True
44 token_lm: Optional[Any] = None
45 intersect_pruned: bool = False
46 intersect_conf: GraphIntersectDenseConfig = GraphIntersectDenseConfig()
47 boost_coeff: float = 0.0
48 predictor_window_size: int = 0
49 predictor_step_size: int = 1
50
51
52 class ASRK2Mixin(ABC):
53 """k2 Mixin class that simplifies the construction of various models with k2-based losses.
54
55 It does the following:
56 - Sets up the graph loss and decoder (methods _init_k2 and update_k2_modules).
57 - Registers external graphs, if needed.
58 - Augments forward(...) with optional graph decoding to get accurate predictions.
59 """
60
61 def _init_k2(self):
62 """
63 k2-related initialization implementation.
64
65 This method is expected to run after __init__, which sets self._cfg.
66 self._cfg is expected to have the attribute graph_module_cfg.
67 """
68 if not hasattr(self, "_cfg"):
69 raise ValueError("self._cfg must be set before calling _init_k2().")
70 if not hasattr(self._cfg, "graph_module_cfg") or self._cfg.graph_module_cfg is None:
71 raise ValueError("self._cfg.graph_module_cfg must be set and cannot be None.")
72 self.graph_module_cfg = self._cfg.graph_module_cfg
73
74 # register token_lm for MAPLoss
75 criterion_type = self.graph_module_cfg.get("criterion_type", "ml")
76 self.use_graph_lm = criterion_type == "map"
77 if self.use_graph_lm:
78 token_lm_path = self.graph_module_cfg.backend_cfg.get("token_lm", None)
79 if token_lm_path is None:
80 raise ValueError(
81 f"graph_module_cfg.backend_cfg.token_lm is empty. It must be set for criterion_type == `{criterion_type}`"
82 )
83 token_lm_path = self.register_artifact('graph_module_cfg.backend_cfg.token_lm', token_lm_path)
84 self.graph_module_cfg.backend_cfg["token_lm"] = token_lm_path
85
86 self.update_k2_modules(self.graph_module_cfg)
87
88 def update_k2_modules(self, input_cfg: DictConfig):
89 """
90 Helper function to initialize or update k2 loss and transcribe_decoder.
91
92 Args:
93 input_cfg: DictConfig to take new parameters from. Schema is expected as in
94 nemo.collections.asr.models.configs.k2_sequence_models_config.GraphModuleConfig
95 """
96 del self.loss
97 if hasattr(self, "transcribe_decoder"):
98 del self.transcribe_decoder
99
100 if hasattr(self, "joint"):
101 # RNNT
102 num_classes = self.joint.num_classes_with_blank - 1
103 else:
104 # CTC, MMI, ...
105 num_classes = self.decoder.num_classes_with_blank - 1
106 remove_consecutive = input_cfg.backend_cfg.get("topo_with_self_loops", True) and input_cfg.backend_cfg.get(
107 "topo_type", "default"
108 ) not in ["forced_blank", "identity",]
109 self._wer.remove_consecutive = remove_consecutive
110
111 from nemo.collections.asr.losses.lattice_losses import LatticeLoss
112
113 self.loss = LatticeLoss(
114 num_classes=num_classes,
115 reduction=self._cfg.get("ctc_reduction", "mean_batch"),
116 backend="k2",
117 criterion_type=input_cfg.get("criterion_type", "ml"),
118 loss_type=input_cfg.get("loss_type", "ctc"),
119 split_batch_size=input_cfg.get("split_batch_size", 0),
120 graph_module_cfg=input_cfg.backend_cfg,
121 )
122
123 criterion_type = self.loss.criterion_type
124 self.use_graph_lm = criterion_type == "map"
125 transcribe_training = input_cfg.get("transcribe_training", False)
126 if transcribe_training and criterion_type == "ml":
127 logging.warning(
128 f"""You do not need to use transcribe_training=`{transcribe_training}`
129 with criterion_type=`{criterion_type}`. transcribe_training will be set to False."""
130 )
131 transcribe_training = False
132 self.transcribe_training = transcribe_training
133 if self.use_graph_lm:
134 from nemo.collections.asr.modules.graph_decoder import ViterbiDecoderWithGraph
135
136 self.transcribe_decoder = ViterbiDecoderWithGraph(
137 num_classes=num_classes,
138 backend="k2",
139 dec_type="token_lm",
140 return_type="1best",
141 return_ilabels=True,
142 output_aligned=True,
143 split_batch_size=input_cfg.get("split_batch_size", 0),
144 graph_module_cfg=input_cfg.backend_cfg,
145 )
146
147 def _forward_k2_post_processing(
148 self, log_probs: torch.Tensor, encoded_length: torch.Tensor, greedy_predictions: torch.Tensor
149 ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
150 """
151 k2-related post-processing part of .forward()
152
153 Args:
154 log_probs: The log probabilities tensor of shape [B, T, D].
155 encoded_length: The lengths of the acoustic sequence after propagation through the encoder, of shape [B].
156 greedy_predictions: The greedy token predictions of the model of shape [B, T]
157
158 Returns:
159 A tuple of 3 elements -
160 1) The log probabilities tensor of shape [B, T, D].
161 2) The lengths of the acoustic sequence after propagation through the encoder, of shape [B].
162 3) The greedy token predictions of the model of shape [B, T] (via argmax)
163 """
164 # greedy_predictions from .forward() are incorrect for criterion_type=`map`
165 # getting correct greedy_predictions, if needed
166 if self.use_graph_lm and (not self.training or self.transcribe_training):
167 greedy_predictions, encoded_length, _ = self.transcribe_decoder.forward(
168 log_probs=log_probs, log_probs_length=encoded_length
169 )
170 return log_probs, encoded_length, greedy_predictions
171
[end of nemo/collections/asr/parts/k2/classes.py]
[start of nemo/collections/asr/parts/submodules/adapters/multi_head_attention_adapter_module.py]
1 # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import math
16 from dataclasses import dataclass
17 from typing import Any, Optional
18
19 import torch
20 from torch import nn as nn
21
22 from nemo.collections.asr.parts.submodules import multi_head_attention as mha
23 from nemo.collections.common.parts import adapter_modules
24 from nemo.core.classes.mixins import adapter_mixin_strategies
25
26
27 class MHAResidualAddAdapterStrategy(adapter_mixin_strategies.ResidualAddAdapterStrategy):
28 """
29 An implementation of residual addition of an adapter module with its input for the MHA Adapters.
30 """
31
32 def forward(self, input: dict, adapter: torch.nn.Module, *, module: 'AdapterModuleMixin'):
33 """
34 A basic strategy, comprising of a residual connection over the input, after forward pass by
35 the underlying adapter. Additional work is done to pack and unpack the dictionary of inputs and outputs.
36
37 Note: The `value` tensor is added to the output of the attention adapter as the residual connection.
38
39 Args:
40 input: A dictionary of multiple input arguments for the adapter module.
41 `query`, `key`, `value`: Original output tensor of the module, or the output of the
42 previous adapter (if more than one adapters are enabled).
43 `mask`: Attention mask.
44 `pos_emb`: Optional positional embedding for relative encoding.
45 adapter: The adapter module that is currently required to perform the forward pass.
46 module: The calling module, in its entirety. It is a module that implements `AdapterModuleMixin`,
47 therefore the strategy can access all other adapters in this module via `module.adapter_layer`.
48
49 Returns:
50 The result tensor, after one of the active adapters has finished its forward passes.
51 """
52 out = self.compute_output(input, adapter, module=module)
53
54 # If not in training mode, or probability of stochastic depth is 0, skip step.
55 p = self.stochastic_depth
56 if not module.training or p == 0.0:
57 pass
58 else:
59 out = self.apply_stochastic_depth(out, input['value'], adapter, module=module)
60
61 # Return the residual connection output = input + adapter(input)
62 result = input['value'] + out
63
64 # If l2_lambda is activated, register the loss value
65 self.compute_auxiliary_losses(result, input['value'], adapter, module=module)
66
67 return result
68
69 def compute_output(
70 self, input: torch.Tensor, adapter: torch.nn.Module, *, module: 'AdapterModuleMixin'
71 ) -> torch.Tensor:
72 """
73 Compute the output of a single adapter to some input.
74
75 Args:
76 input: Original output tensor of the module, or the output of the previous adapter (if more than
77 one adapters are enabled).
78 adapter: The adapter module that is currently required to perform the forward pass.
79 module: The calling module, in its entirety. It is a module that implements `AdapterModuleMixin`,
80 therefore the strategy can access all other adapters in this module via `module.adapter_layer`.
81
82 Returns:
83 The result tensor, after one of the active adapters has finished its forward passes.
84 """
85 if isinstance(input, (list, tuple)):
86 out = adapter(*input)
87 elif isinstance(input, dict):
88 out = adapter(**input)
89 else:
90 out = adapter(input)
91 return out
92
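The residual pattern above (`output = value + adapter(...)`), combined with the zero-initialized `linear_out` in the adapter modules below, makes a freshly added adapter an identity map at initialization. A minimal sketch of that property (toy functions, not the NeMo classes):

```python
def residual_add(value, adapter_fn):
    # MHAResidualAddAdapterStrategy: output = value + adapter(value)
    return [v + a for v, a in zip(value, adapter_fn(value))]

def zero_init_adapter(value):
    # with the output projection zero-initialized, the adapter emits zeros
    return [0.0 for _ in value]
```

Because the adapter's contribution starts at zero, inserting it into a trained model does not perturb the model's outputs until the adapter is trained.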
93
94 @dataclass
95 class MHAResidualAddAdapterStrategyConfig(adapter_mixin_strategies.ResidualAddAdapterStrategyConfig):
96 _target_: str = "{0}.{1}".format(
97 MHAResidualAddAdapterStrategy.__module__, MHAResidualAddAdapterStrategy.__name__
98 ) # mandatory field
99
100
101 class MultiHeadAttentionAdapter(mha.MultiHeadAttention, adapter_modules.AdapterModuleUtil):
102 """Multi-Head Attention layer of Transformer.
103 Args:
104 n_head (int): number of heads
105 n_feat (int): size of the features
106 dropout_rate (float): dropout rate
107 proj_dim (int, optional): Optional integer value for projection before computing attention.
108 If None, then there is no projection (equivalent to proj_dim = n_feat).
109 If > 0, then will project the n_feat to proj_dim before calculating attention.
110 If < 1, then it will equal n_head, so that each head has a projected dimension of 1.
111 adapter_strategy: By default, MHAResidualAddAdapterStrategyConfig. An adapter composition function object.
112 """
113
114 def __init__(
115 self,
116 n_head: int,
117 n_feat: int,
118 dropout_rate: float,
119 proj_dim: Optional[int] = None,
120 adapter_strategy: MHAResidualAddAdapterStrategy = None,
121 ):
122 super().__init__(n_head=n_head, n_feat=n_feat, dropout_rate=dropout_rate, max_cache_len=0)
123
124 self.pre_norm = nn.LayerNorm(n_feat)
125
126 # Set the projection dim to number of heads automatically
127 if proj_dim is not None and proj_dim < 1:
128 proj_dim = n_head
129
130 self.proj_dim = proj_dim
131
132 # Recompute weights for projection dim
133 if self.proj_dim is not None:
134 if self.proj_dim % n_head != 0:
135 raise ValueError(f"proj_dim ({proj_dim}) is not divisible by n_head ({n_head})")
136
137 self.d_k = self.proj_dim // n_head
138 self.s_d_k = math.sqrt(self.d_k)
139 self.linear_q = nn.Linear(n_feat, self.proj_dim)
140 self.linear_k = nn.Linear(n_feat, self.proj_dim)
141 self.linear_v = nn.Linear(n_feat, self.proj_dim)
142 self.linear_out = nn.Linear(self.proj_dim, n_feat)
143
144 # Setup adapter strategy
145 self.setup_adapter_strategy(adapter_strategy)
146
147 # reset parameters for Q to be identity operation
148 self.reset_parameters()
149
150 def forward(self, query, key, value, mask, pos_emb=None, cache=None):
151 """Compute 'Scaled Dot Product Attention'.
152 Args:
153 query (torch.Tensor): (batch, time1, size)
154 key (torch.Tensor): (batch, time2, size)
155 value (torch.Tensor): (batch, time2, size)
156 mask (torch.Tensor): (batch, time1, time2)
157 cache (torch.Tensor) : (batch, time_cache, size)
158
159 Returns:
160 output (torch.Tensor): transformed `value` (batch, time1, d_model) weighted by the query dot key attention
161 cache (torch.Tensor) : (batch, time_cache_next, size)
162 """
163 # Need to perform duplicate computations as at this point the tensors have been
164 # separated by the adapter forward
165 query = self.pre_norm(query)
166 key = self.pre_norm(key)
167 value = self.pre_norm(value)
168
169 return super().forward(query, key, value, mask, pos_emb, cache=cache)
170
171 def reset_parameters(self):
172 with torch.no_grad():
173 nn.init.zeros_(self.linear_out.weight)
174 nn.init.zeros_(self.linear_out.bias)
175
176 def get_default_strategy_config(self) -> 'dataclass':
177 return MHAResidualAddAdapterStrategyConfig()
178
179
180 @dataclass
181 class MultiHeadAttentionAdapterConfig:
182 n_head: int
183 n_feat: int
184 dropout_rate: float = 0.0
185 proj_dim: Optional[int] = None
186 adapter_strategy: Optional[Any] = MHAResidualAddAdapterStrategyConfig()
187 _target_: str = "{0}.{1}".format(MultiHeadAttentionAdapter.__module__, MultiHeadAttentionAdapter.__name__)
188
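The `proj_dim` handling shared by both adapter classes (values below 1 collapse to `n_head`, so each head gets a projected dimension of 1, and the result must divide evenly across heads) can be sketched standalone (hypothetical helper names):

```python
def resolve_proj_dim(proj_dim, n_head):
    # proj_dim < 1 means "one dimension per head"; None means no projection
    if proj_dim is not None and proj_dim < 1:
        proj_dim = n_head
    if proj_dim is not None and proj_dim % n_head != 0:
        raise ValueError(f"proj_dim ({proj_dim}) is not divisible by n_head ({n_head})")
    return proj_dim

def head_dim(proj_dim, n_head):
    # per-head dimension d_k after projection
    return resolve_proj_dim(proj_dim, n_head) // n_head
```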
189
190 class RelPositionMultiHeadAttentionAdapter(mha.RelPositionMultiHeadAttention, adapter_modules.AdapterModuleUtil):
191 """Multi-Head Attention layer of Transformer-XL with support of relative positional encoding.
192 Paper: https://arxiv.org/abs/1901.02860
193 Args:
194 n_head (int): number of heads
195 n_feat (int): size of the features
196 dropout_rate (float): dropout rate
197 proj_dim (int, optional): Optional integer value for projection before computing attention.
198 If None, then there is no projection (equivalent to proj_dim = n_feat).
199 If > 0, then will project the n_feat to proj_dim before calculating attention.
200 If < 1, then it will equal n_head, so that each head has a projected dimension of 1.
201 adapter_strategy: By default, MHAResidualAddAdapterStrategyConfig. An adapter composition function object.
202 """
203
204 def __init__(
205 self,
206 n_head: int,
207 n_feat: int,
208 dropout_rate: float,
209 proj_dim: Optional[int] = None,
210 adapter_strategy: MHAResidualAddAdapterStrategyConfig = None,
211 ):
212 super().__init__(
213 n_head=n_head, n_feat=n_feat, dropout_rate=dropout_rate, pos_bias_u=None, pos_bias_v=None, max_cache_len=0
214 )
215
216 self.pre_norm = nn.LayerNorm(n_feat)
217
218 # Set the projection dim to number of heads automatically
219 if proj_dim is not None and proj_dim < 1:
220 proj_dim = n_head
221
222 self.proj_dim = proj_dim
223
224 # Recompute weights for projection dim
225 if self.proj_dim is not None:
226 if self.proj_dim % n_head != 0:
227 raise ValueError(f"proj_dim ({proj_dim}) is not divisible by n_head ({n_head})")
228
229 self.d_k = self.proj_dim // n_head
230 self.s_d_k = math.sqrt(self.d_k)
231 self.linear_q = nn.Linear(n_feat, self.proj_dim)
232 self.linear_k = nn.Linear(n_feat, self.proj_dim)
233 self.linear_v = nn.Linear(n_feat, self.proj_dim)
234 self.linear_out = nn.Linear(self.proj_dim, n_feat)
235 self.linear_pos = nn.Linear(n_feat, self.proj_dim, bias=False)
236 self.pos_bias_u = nn.Parameter(torch.FloatTensor(self.h, self.d_k))
237 self.pos_bias_v = nn.Parameter(torch.FloatTensor(self.h, self.d_k))
238
239 # Setup adapter strategy
240 self.setup_adapter_strategy(adapter_strategy)
241
242 # reset parameters for Q to be identity operation
243 self.reset_parameters()
244
245 def forward(self, query, key, value, mask, pos_emb, cache=None):
246 """Compute 'Scaled Dot Product Attention' with rel. positional encoding.
247 Args:
248 query (torch.Tensor): (batch, time1, size)
249 key (torch.Tensor): (batch, time2, size)
250 value(torch.Tensor): (batch, time2, size)
251 mask (torch.Tensor): (batch, time1, time2)
252 pos_emb (torch.Tensor) : (batch, time1, size)
253 cache (torch.Tensor) : (batch, time_cache, size)
254 Returns:
255 output (torch.Tensor): transformed `value` (batch, time1, d_model) weighted by the query dot key attention
256 cache_next (torch.Tensor) : (batch, time_cache_next, size)
257 """
258 # Need to perform duplicate computations as at this point the tensors have been
259 # separated by the adapter forward
260 query = self.pre_norm(query)
261 key = self.pre_norm(key)
262 value = self.pre_norm(value)
263
264 return super().forward(query, key, value, mask, pos_emb, cache=cache)
265
266 def reset_parameters(self):
267 with torch.no_grad():
268 nn.init.zeros_(self.linear_out.weight)
269 nn.init.zeros_(self.linear_out.bias)
270
271 # NOTE: This exact procedure is apparently highly important.
272 # Above operation is safe to do as self.linear_out.weight *= 0.0 (similar for bias)
273 # However:
274 # DO NOT REPLACE BELOW WITH self.pos_bias_u *= 0.0 OR self.pos_bias_v *= 0.0
275 # For some reason at init sometimes it will cause the value of the tensor to become NaN
276 # All operations to compute matrix_ac and matrix_bd will then fail.
277 nn.init.zeros_(self.pos_bias_u)
278 nn.init.zeros_(self.pos_bias_v)
279
280 def get_default_strategy_config(self) -> 'dataclass':
281 return MHAResidualAddAdapterStrategyConfig()
282
283
284 @dataclass
285 class RelPositionMultiHeadAttentionAdapterConfig:
286 n_head: int
287 n_feat: int
288 dropout_rate: float = 0.0
289 proj_dim: Optional[int] = None
290 adapter_strategy: Optional[Any] = MHAResidualAddAdapterStrategyConfig()
291 _target_: str = "{0}.{1}".format(
292 RelPositionMultiHeadAttentionAdapter.__module__, RelPositionMultiHeadAttentionAdapter.__name__
293 )
294
295
296 class PositionalEncodingAdapter(mha.PositionalEncoding, adapter_modules.AdapterModuleUtil):
297
298 """
299 Absolute positional embedding adapter.
300
301 .. note::
302
303 Absolute positional embedding value is added to the input tensor *without residual connection* !
304 Therefore, the input is changed; if you only require the positional embedding, drop the returned `x`!
305
306 Args:
307 d_model (int): The input dimension of x.
308 max_len (int): The max sequence length.
309 xscale (float): The input scaling factor. Defaults to 1.0.
310 adapter_strategy (AbstractAdapterStrategy): By default, ReturnResultAdapterStrategyConfig.
311 An adapter composition function object.
312 NOTE: Since this is a positional encoding, it will not add a residual !
313 """
314
315 def __init__(
316 self,
317 d_model: int,
318 max_len: int = 5000,
319 xscale=1.0,
320 adapter_strategy: adapter_mixin_strategies.ReturnResultAdapterStrategyConfig = None,
321 ):
322
323 super().__init__(
324 d_model=d_model, dropout_rate=0.0, max_len=max_len, xscale=xscale, dropout_rate_emb=0.0,
325 )
326
327 # Setup adapter strategy
328 self.setup_adapter_strategy(adapter_strategy)
329
330 def get_default_strategy_config(self) -> 'dataclass':
331 return adapter_mixin_strategies.ReturnResultAdapterStrategyConfig()
332
333
334 @dataclass
335 class PositionalEncodingAdapterConfig:
336 d_model: int
337 max_len: int = 5000
338 xscale: float = 1.0
339 adapter_strategy: Optional[Any] = adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
340 _target_: str = "{0}.{1}".format(PositionalEncodingAdapter.__module__, PositionalEncodingAdapter.__name__)
341
342
343 class RelPositionalEncodingAdapter(mha.RelPositionalEncoding, adapter_modules.AdapterModuleUtil):
344 """
345 Relative positional encoding for TransformerXL's layers
346 See : Appendix B in https://arxiv.org/abs/1901.02860
347
348 .. note::
349
350 Relative positional embedding value is **not** added to the input tensor !
351 Therefore, if you only require the positional embedding, drop the returned `x`!
352
353 Args:
354 d_model (int): embedding dim
355 max_len (int): maximum input length
356 xscale (bool): whether to scale the input by sqrt(d_model)
357 adapter_strategy: By default, ReturnResultAdapterStrategyConfig. An adapter composition function object.
358 """
359
360 def __init__(
361 self,
362 d_model: int,
363 max_len: int = 5000,
364 xscale=1.0,
365 adapter_strategy: adapter_mixin_strategies.ReturnResultAdapterStrategyConfig = None,
366 ):
367 super().__init__(d_model=d_model, dropout_rate=0.0, max_len=max_len, xscale=xscale, dropout_rate_emb=0.0)
368
369 # Setup adapter strategy
370 self.setup_adapter_strategy(adapter_strategy)
371
372 def get_default_strategy_config(self) -> 'dataclass':
373 return adapter_mixin_strategies.ReturnResultAdapterStrategyConfig()
374
375
376 @dataclass
377 class RelPositionalEncodingAdapterConfig:
378 d_model: int
379 max_len: int = 5000
380 xscale: float = 1.0
381 adapter_strategy: Optional[Any] = adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
382 _target_: str = "{0}.{1}".format(RelPositionalEncodingAdapter.__module__, RelPositionalEncodingAdapter.__name__)
383
[end of nemo/collections/asr/parts/submodules/adapters/multi_head_attention_adapter_module.py]
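The projection-dimension rule documented in `RelPositionMultiHeadAttentionAdapter` above (None keeps `n_feat`, a value below 1 collapses to `n_head`, and the result must be divisible by `n_head`) can be sketched as a standalone helper. This is a minimal illustration; the function name and demo values are hypothetical, while the branching mirrors the adapter's `__init__`:

```python
from typing import Optional, Tuple

def resolve_projection_dim(n_head: int, n_feat: int, proj_dim: Optional[int]) -> Tuple[int, int]:
    """Hypothetical helper mirroring the adapter's proj_dim handling.

    Returns the effective projection dim and the per-head dim d_k."""
    if proj_dim is not None and proj_dim < 1:
        proj_dim = n_head  # any value below 1 collapses to one projected dim per head
    if proj_dim is None:
        proj_dim = n_feat  # no projection: keep the full feature size
    if proj_dim % n_head != 0:
        raise ValueError(f"proj_dim ({proj_dim}) is not divisible by n_head ({n_head})")
    return proj_dim, proj_dim // n_head

print(resolve_projection_dim(4, 256, None))  # (256, 64)
print(resolve_projection_dim(4, 256, -1))    # (4, 1)
print(resolve_projection_dim(4, 256, 64))    # (64, 16)
```

With `proj_dim=-1` each head attends in a single projected dimension, which is the cheapest configuration the adapter supports.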
[start of nemo/collections/asr/parts/submodules/ctc_beam_decoding.py]
1 # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import math
16 import os
17 from dataclasses import dataclass
18 from typing import List, Optional, Tuple, Union
19
20 import torch
21
22 from nemo.collections.asr.parts.utils import rnnt_utils
23 from nemo.collections.common.tokenizers.tokenizer_spec import TokenizerSpec
24 from nemo.core.classes import Typing, typecheck
25 from nemo.core.neural_types import HypothesisType, LengthsType, LogprobsType, NeuralType
26 from nemo.utils import logging
27
28 DEFAULT_TOKEN_OFFSET = 100
29
30
31 def pack_hypotheses(
32 hypotheses: List[rnnt_utils.NBestHypotheses], logitlen: torch.Tensor,
33 ) -> List[rnnt_utils.NBestHypotheses]:
34
35 if logitlen is not None:
36 if hasattr(logitlen, 'cpu'):
37 logitlen_cpu = logitlen.to('cpu')
38 else:
39 logitlen_cpu = logitlen
40
41 for idx, hyp in enumerate(hypotheses): # type: rnnt_utils.NBestHypotheses
42 for candidate_idx, cand in enumerate(hyp.n_best_hypotheses):
43 cand.y_sequence = torch.tensor(cand.y_sequence, dtype=torch.long)
44
45 if logitlen is not None:
46 cand.length = logitlen_cpu[idx]
47
48 if cand.dec_state is not None:
49 cand.dec_state = _states_to_device(cand.dec_state)
50
51 return hypotheses
52
53
54 def _states_to_device(dec_state, device='cpu'):
55 if torch.is_tensor(dec_state):
56 dec_state = dec_state.to(device)
57
58 elif isinstance(dec_state, (list, tuple)):
59 dec_state = tuple(_states_to_device(dec_i, device) for dec_i in dec_state)
60
61 return dec_state
62
63
64 class AbstractBeamCTCInfer(Typing):
65 """A beam CTC decoder.
66
67 Provides a common abstraction for sample level beam decoding.
68
69 Args:
70 blank_id: int, index of the blank token. Can be 0 or len(vocabulary).
71 beam_size: int, size of the beam used in the underlying beam search engine.
72
73 """
74
75 @property
76 def input_types(self):
77 """Returns definitions of module input ports.
78 """
79 return {
80 "decoder_output": NeuralType(('B', 'T', 'D'), LogprobsType()),
81 "decoder_lengths": NeuralType(tuple('B'), LengthsType()),
82 }
83
84 @property
85 def output_types(self):
86 """Returns definitions of module output ports.
87 """
88 return {"predictions": [NeuralType(elements_type=HypothesisType())]}
89
90 def __init__(self, blank_id: int, beam_size: int):
91 self.blank_id = blank_id
92
93 if beam_size < 1:
94 raise ValueError("Beam search size cannot be less than 1!")
95
96 self.beam_size = beam_size
97
98 # Variables set by corresponding setter methods
99 self.vocab = None
100 self.decoding_type = None
101 self.tokenizer = None
102
103 # Utility maps for vocabulary
104 self.vocab_index_map = None
105 self.index_vocab_map = None
106
107 # Internal variable, used to prevent double reduction of consecutive tokens (ctc collapse)
108 self.override_fold_consecutive_value = None
109
110 def set_vocabulary(self, vocab: List[str]):
111 """
112 Set the vocabulary of the decoding framework.
113
114 Args:
115 vocab: List of str. Each token corresponds to its location in the vocabulary emitted by the model.
116 Note that this vocabulary must NOT contain the "BLANK" token.
117 """
118 self.vocab = vocab
119 self.vocab_index_map = {v: i for i, v in enumerate(vocab)}
120 self.index_vocab_map = {i: v for i, v in enumerate(vocab)}
121
122 def set_decoding_type(self, decoding_type: str):
123 """
124 Sets the decoding type of the framework. Can support either char or subword models.
125
126 Args:
127 decoding_type: Str corresponding to decoding type. Only supports "char" and "subword".
128 """
129 decoding_type = decoding_type.lower()
130 supported_types = ['char', 'subword']
131
132 if decoding_type not in supported_types:
133 raise ValueError(
134 f"Unsupported decoding type. Supported types = {supported_types}.\n" f"Given = {decoding_type}"
135 )
136
137 self.decoding_type = decoding_type
138
139 def set_tokenizer(self, tokenizer: TokenizerSpec):
140 """
141 Set the tokenizer of the decoding framework.
142
143 Args:
144 tokenizer: NeMo tokenizer object, which inherits from TokenizerSpec.
145 """
146 self.tokenizer = tokenizer
147
148 @typecheck()
149 def forward(
150 self, decoder_output: torch.Tensor, decoder_lengths: torch.Tensor,
151 ) -> Tuple[List[Union[rnnt_utils.Hypothesis, rnnt_utils.NBestHypotheses]]]:
152 """Returns a list of hypotheses given an input batch of the encoder hidden embedding.
153 Output token is generated auto-regressively.
154
155 Args:
156 decoder_output: A tensor of size (batch, timesteps, features) or (batch, timesteps) (each timestep is a label).
157 decoder_lengths: list of int representing the length of each
158 output sequence.
159
160 Returns:
161 packed list containing batch number of sentences (Hypotheses).
162 """
163 raise NotImplementedError()
164
165 def __call__(self, *args, **kwargs):
166 return self.forward(*args, **kwargs)
167
168
169 class BeamCTCInfer(AbstractBeamCTCInfer):
170 """A beam CTC decoder.
171 
172 Provides a common abstraction for sample level beam decoding.
173 
174 Args:
175 blank_id: int, index of the blank token. Can be 0 or len(vocabulary).
176 preserve_alignments: Bool flag which preserves the history of logprobs generated during
177 decoding (sample / batched). When set to true, the Hypothesis will contain
178 the non-null value for `logprobs` in it. Here, `logprobs` is a torch.Tensor.
179 compute_timestamps: A bool flag, which determines whether to compute the character/subword, or
180 word-based timestamp mapping the output log-probabilities to discrete intervals of timestamps.
181 The timestamps will be available in the returned Hypothesis.timestep as a dictionary.
182
183 """
184
185 def __init__(
186 self,
187 blank_id: int,
188 beam_size: int,
189 search_type: str = "default",
190 return_best_hypothesis: bool = True,
191 preserve_alignments: bool = False,
192 compute_timestamps: bool = False,
193 beam_alpha: float = 1.0,
194 beam_beta: float = 0.0,
195 kenlm_path: str = None,
196 flashlight_cfg: Optional['FlashlightConfig'] = None,
197 pyctcdecode_cfg: Optional['PyCTCDecodeConfig'] = None,
198 ):
199 super().__init__(blank_id=blank_id, beam_size=beam_size)
200
201 self.search_type = search_type
202 self.return_best_hypothesis = return_best_hypothesis
203 self.preserve_alignments = preserve_alignments
204 self.compute_timestamps = compute_timestamps
205
206 if self.compute_timestamps:
207 raise ValueError("Currently the `compute_timestamps` flag is not supported for beam search algorithms.")
208
209 self.vocab = None # This must be set by specific method by user before calling forward() !
210
211 if search_type == "default" or search_type == "nemo":
212 self.search_algorithm = self.default_beam_search
213 elif search_type == "pyctcdecode":
214 self.search_algorithm = self._pyctcdecode_beam_search
215 elif search_type == "flashlight":
216 self.search_algorithm = self.flashlight_beam_search
217 else:
218 raise NotImplementedError(
219 f"The search type ({search_type}) supplied is not supported!\n"
220 f"Please use one of : (default, nemo, pyctcdecode, flashlight)"
221 )
222
223 # Log the beam search algorithm
224 logging.info(f"Beam search algorithm: {search_type}")
225
226 self.beam_alpha = beam_alpha
227 self.beam_beta = beam_beta
228
229 # Default beam search args
230 self.kenlm_path = kenlm_path
231
232 # PyCTCDecode params
233 if pyctcdecode_cfg is None:
234 pyctcdecode_cfg = PyCTCDecodeConfig()
235 self.pyctcdecode_cfg = pyctcdecode_cfg # type: PyCTCDecodeConfig
236
237 if flashlight_cfg is None:
238 flashlight_cfg = FlashlightConfig()
239 self.flashlight_cfg = flashlight_cfg
240
241 # Default beam search scorer functions
242 self.default_beam_scorer = None
243 self.pyctcdecode_beam_scorer = None
244 self.flashlight_beam_scorer = None
245 self.token_offset = 0
246
247 @typecheck()
248 def forward(
249 self, decoder_output: torch.Tensor, decoder_lengths: torch.Tensor,
250 ) -> Tuple[List[Union[rnnt_utils.Hypothesis, rnnt_utils.NBestHypotheses]]]:
251 """Returns a list of hypotheses given an input batch of the encoder hidden embedding.
252 Output token is generated auto-regressively.
253
254 Args:
255 decoder_output: A tensor of size (batch, timesteps, features).
256 decoder_lengths: list of int representing the length of each
257 output sequence.
258
259 Returns:
260 packed list containing batch number of sentences (Hypotheses).
261 """
262 if self.vocab is None:
263 raise RuntimeError("Please set the vocabulary with `set_vocabulary()` before calling this function.")
264
265 if self.decoding_type is None:
266 raise ValueError("Please set the decoding type with `set_decoding_type()` before calling this function.")
267
268 with torch.no_grad(), torch.inference_mode():
269 # Process each sequence independently
270 prediction_tensor = decoder_output
271
272 if prediction_tensor.ndim != 3:
273 raise ValueError(
274 f"`decoder_output` must be a tensor of shape [B, T, V] (log probs, float). "
275 f"Provided shape = {prediction_tensor.shape}"
276 )
277
278 # determine type of input - logprobs or labels
279 out_len = decoder_lengths if decoder_lengths is not None else None
280 hypotheses = self.search_algorithm(prediction_tensor, out_len)
281
282 # Pack results into Hypotheses
283 packed_result = pack_hypotheses(hypotheses, decoder_lengths)
284
285 # Pack the result
286 if self.return_best_hypothesis and isinstance(packed_result[0], rnnt_utils.NBestHypotheses):
287 packed_result = [res.n_best_hypotheses[0] for res in packed_result] # type: Hypothesis
288
289 return (packed_result,)
290
291 @torch.no_grad()
292 def default_beam_search(
293 self, x: torch.Tensor, out_len: torch.Tensor
294 ) -> List[Union[rnnt_utils.Hypothesis, rnnt_utils.NBestHypotheses]]:
295 """
296 Open Seq2Seq Beam Search Algorithm (DeepSpeech based)
297
298 Args:
299 x: Tensor of shape [B, T, V+1], where B is the batch size, T is the maximum sequence length,
300 and V is the vocabulary size. The tensor contains log-probabilities.
301 out_len: Tensor of shape [B], contains lengths of each sequence in the batch.
302
303 Returns:
304 A list of NBestHypotheses objects, one for each sequence in the batch.
305 """
306 if self.compute_timestamps:
307 raise ValueError(
308 f"Beam Search with strategy `{self.search_type}` does not support time stamp calculation!"
309 )
310
311 if self.default_beam_scorer is None:
312 # Check for filepath
313 if self.kenlm_path is None or not os.path.exists(self.kenlm_path):
314 raise FileNotFoundError(
315 f"KenLM binary file not found at : {self.kenlm_path}. "
316 f"Please set a valid path in the decoding config."
317 )
318
319 # perform token offset for subword models
320 if self.decoding_type == 'subword':
321 vocab = [chr(idx + self.token_offset) for idx in range(len(self.vocab))]
322 else:
323 # char models
324 vocab = self.vocab
325
326 # Must import at runtime to avoid circular dependency due to module level import.
327 from nemo.collections.asr.modules.beam_search_decoder import BeamSearchDecoderWithLM
328
329 self.default_beam_scorer = BeamSearchDecoderWithLM(
330 vocab=vocab,
331 lm_path=self.kenlm_path,
332 beam_width=self.beam_size,
333 alpha=self.beam_alpha,
334 beta=self.beam_beta,
335 num_cpus=max(1, os.cpu_count()),
336 input_tensor=False,
337 )
338
339 x = x.to('cpu')
340
341 with typecheck.disable_checks():
342 data = [x[sample_id, : out_len[sample_id], :].softmax(dim=-1) for sample_id in range(len(x))]
343 beams_batch = self.default_beam_scorer.forward(log_probs=data, log_probs_length=None)
344
345 # For each sample in the batch
346 nbest_hypotheses = []
347 for beams_idx, beams in enumerate(beams_batch):
348 # For each beam candidate / hypothesis in each sample
349 hypotheses = []
350 for candidate_idx, candidate in enumerate(beams):
351 hypothesis = rnnt_utils.Hypothesis(
352 score=0.0, y_sequence=[], dec_state=None, timestep=[], last_token=None
353 )
354
355 # For subword encoding, NeMo will double encode the subword (multiple tokens) into a
356 # singular unicode id. In doing so, we preserve the semantic of the unicode token, and
357 # compress the size of the final KenLM ARPA / Binary file.
358 # In order to do double encoding, we shift the subword by some token offset.
359 # This step is ignored for character based models.
360 if self.decoding_type == 'subword':
361 pred_token_ids = [ord(c) - self.token_offset for c in candidate[1]]
362 else:
363 # Char models
364 pred_token_ids = [self.vocab_index_map[c] for c in candidate[1]]
365
366 # We preserve the token ids and the score for this hypothesis
367 hypothesis.y_sequence = pred_token_ids
368 hypothesis.score = candidate[0]
369
370 # If alignment must be preserved, we preserve a view of the output logprobs.
371 # Note this view is shared amongst all beams within the sample, be sure to clone it if you
372 # require specific processing for each sample in the beam.
373 # This is done to preserve memory.
374 if self.preserve_alignments:
375 hypothesis.alignments = x[beams_idx][: out_len[beams_idx]]
376
377 hypotheses.append(hypothesis)
378
379 # Wrap the result in NBestHypothesis.
380 hypotheses = rnnt_utils.NBestHypotheses(hypotheses)
381 nbest_hypotheses.append(hypotheses)
382
383 return nbest_hypotheses
384
385 @torch.no_grad()
386 def _pyctcdecode_beam_search(
387 self, x: torch.Tensor, out_len: torch.Tensor
388 ) -> List[Union[rnnt_utils.Hypothesis, rnnt_utils.NBestHypotheses]]:
389 """
390 PyCTCDecode Beam Search Algorithm. Should support Char and Subword models.
391
392 Args:
393 x: Tensor of shape [B, T, V+1], where B is the batch size, T is the maximum sequence length,
394 and V is the vocabulary size. The tensor contains log-probabilities.
395 out_len: Tensor of shape [B], contains lengths of each sequence in the batch.
396
397 Returns:
398 A list of NBestHypotheses objects, one for each sequence in the batch.
399 """
400 if self.compute_timestamps:
401 raise ValueError(
402 f"Beam Search with strategy `{self.search_type}` does not support time stamp calculation!"
403 )
404
405 try:
406 import pyctcdecode
407 except (ImportError, ModuleNotFoundError):
408 raise ImportError(
409 f"Could not load `pyctcdecode` library. Please install it from pip using :\n"
410 f"pip install --upgrade pyctcdecode"
411 )
412
413 if self.pyctcdecode_beam_scorer is None:
414 self.pyctcdecode_beam_scorer = pyctcdecode.build_ctcdecoder(
415 labels=self.vocab, kenlm_model_path=self.kenlm_path, alpha=self.beam_alpha, beta=self.beam_beta
416 ) # type: pyctcdecode.BeamSearchDecoderCTC
417
418 x = x.to('cpu').numpy()
419
420 with typecheck.disable_checks():
421 beams_batch = []
422 for sample_id in range(len(x)):
423 logprobs = x[sample_id, : out_len[sample_id], :]
424 result = self.pyctcdecode_beam_scorer.decode_beams(
425 logprobs,
426 beam_width=self.beam_size,
427 beam_prune_logp=self.pyctcdecode_cfg.beam_prune_logp,
428 token_min_logp=self.pyctcdecode_cfg.token_min_logp,
429 prune_history=self.pyctcdecode_cfg.prune_history,
430 hotwords=self.pyctcdecode_cfg.hotwords,
431 hotword_weight=self.pyctcdecode_cfg.hotword_weight,
432 lm_start_state=None,
433 ) # Output format: text, last_lm_state, text_frames, logit_score, lm_score
434 beams_batch.append(result)
435
436 nbest_hypotheses = []
437 for beams_idx, beams in enumerate(beams_batch):
438 hypotheses = []
439 for candidate_idx, candidate in enumerate(beams):
440 # Candidate = (text, last_lm_state, text_frames, logit_score, lm_score)
441 hypothesis = rnnt_utils.Hypothesis(
442 score=0.0, y_sequence=[], dec_state=None, timestep=[], last_token=None
443 )
444
445 # TODO: Requires token ids to be returned rather than text.
446 if self.decoding_type == 'subword':
447 if self.tokenizer is None:
448 raise ValueError("Tokenizer must be provided for subword decoding. Use set_tokenizer().")
449
450 pred_token_ids = self.tokenizer.text_to_ids(candidate[0])
451 else:
452 if self.vocab is None:
453 raise ValueError("Vocab must be provided for character decoding. Use set_vocabulary().")
454
455 chars = list(candidate[0])
456 pred_token_ids = [self.vocab_index_map[c] for c in chars]
457
458 hypothesis.y_sequence = pred_token_ids
459 hypothesis.text = candidate[0] # text
460 hypothesis.score = candidate[4] # score
461
462 # Inject word level timestamps
463 hypothesis.timestep = candidate[2] # text_frames
464
465 if self.preserve_alignments:
466 hypothesis.alignments = torch.from_numpy(x[beams_idx][: out_len[beams_idx]])
467
468 hypotheses.append(hypothesis)
469
470 hypotheses = rnnt_utils.NBestHypotheses(hypotheses)
471 nbest_hypotheses.append(hypotheses)
472
473 return nbest_hypotheses
474
475 @torch.no_grad()
476 def flashlight_beam_search(
477 self, x: torch.Tensor, out_len: torch.Tensor
478 ) -> List[Union[rnnt_utils.Hypothesis, rnnt_utils.NBestHypotheses]]:
479 """
480 Flashlight Beam Search Algorithm. Should support Char and Subword models.
481
482 Args:
483 x: Tensor of shape [B, T, V+1], where B is the batch size, T is the maximum sequence length,
484 and V is the vocabulary size. The tensor contains log-probabilities.
485 out_len: Tensor of shape [B], contains lengths of each sequence in the batch.
486
487 Returns:
488 A list of NBestHypotheses objects, one for each sequence in the batch.
489 """
490 if self.compute_timestamps:
491 raise ValueError(
492 f"Beam Search with strategy `{self.search_type}` does not support time stamp calculation!"
493 )
494
495 if self.flashlight_beam_scorer is None:
496 # Check for filepath
497 if self.kenlm_path is None or not os.path.exists(self.kenlm_path):
498 raise FileNotFoundError(
499 f"KenLM binary file not found at : {self.kenlm_path}. "
500 f"Please set a valid path in the decoding config."
501 )
502
503 # perform token offset for subword models
504 # if self.decoding_type == 'subword':
505 # vocab = [chr(idx + self.token_offset) for idx in range(len(self.vocab))]
506 # else:
507 # # char models
508 # vocab = self.vocab
509
510 # Must import at runtime to avoid circular dependency due to module level import.
511 from nemo.collections.asr.modules.flashlight_decoder import FlashLightKenLMBeamSearchDecoder
512
513 self.flashlight_beam_scorer = FlashLightKenLMBeamSearchDecoder(
514 lm_path=self.kenlm_path,
515 vocabulary=self.vocab,
516 tokenizer=self.tokenizer,
517 lexicon_path=self.flashlight_cfg.lexicon_path,
518 boost_path=self.flashlight_cfg.boost_path,
519 beam_size=self.beam_size,
520 beam_size_token=self.flashlight_cfg.beam_size_token,
521 beam_threshold=self.flashlight_cfg.beam_threshold,
522 lm_weight=self.beam_alpha,
523 word_score=self.beam_beta,
524 unk_weight=self.flashlight_cfg.unk_weight,
525 sil_weight=self.flashlight_cfg.sil_weight,
526 )
527
528 x = x.to('cpu')
529
530 with typecheck.disable_checks():
531 beams_batch = self.flashlight_beam_scorer.forward(log_probs=x)
532
533 # For each sample in the batch
534 nbest_hypotheses = []
535 for beams_idx, beams in enumerate(beams_batch):
536 # For each beam candidate / hypothesis in each sample
537 hypotheses = []
538 for candidate_idx, candidate in enumerate(beams):
539 hypothesis = rnnt_utils.Hypothesis(
540 score=0.0, y_sequence=[], dec_state=None, timestep=[], last_token=None
541 )
542
543 # We preserve the token ids and the score for this hypothesis
544 hypothesis.y_sequence = candidate['tokens'].tolist()
545 hypothesis.score = candidate['score']
546
547 # If alignment must be preserved, we preserve a view of the output logprobs.
548 # Note this view is shared amongst all beams within the sample, be sure to clone it if you
549 # require specific processing for each sample in the beam.
550 # This is done to preserve memory.
551 if self.preserve_alignments:
552 hypothesis.alignments = x[beams_idx][: out_len[beams_idx]]
553
554 hypotheses.append(hypothesis)
555
556 # Wrap the result in NBestHypothesis.
557 hypotheses = rnnt_utils.NBestHypotheses(hypotheses)
558 nbest_hypotheses.append(hypotheses)
559
560 return nbest_hypotheses
561
562 def set_decoding_type(self, decoding_type: str):
563 super().set_decoding_type(decoding_type)
564
565 # Please check train_kenlm.py in scripts/asr_language_modeling/ to find out why we need
566 # TOKEN_OFFSET for BPE-based models
567 if self.decoding_type == 'subword':
568 self.token_offset = DEFAULT_TOKEN_OFFSET
569
570
571 @dataclass
572 class PyCTCDecodeConfig:
573 # These arguments cannot be imported from pyctcdecode (optional dependency)
574 # Therefore we copy the values explicitly
575 # Taken from pyctcdecode.constant
576 beam_prune_logp: float = -10.0
577 token_min_logp: float = -5.0
578 prune_history: bool = False
579 hotwords: Optional[List[str]] = None
580 hotword_weight: float = 10.0
581
582
583 @dataclass
584 class FlashlightConfig:
585 lexicon_path: Optional[str] = None
586 boost_path: Optional[str] = None
587 beam_size_token: int = 16
588 beam_threshold: float = 20.0
589 unk_weight: float = -math.inf
590 sil_weight: float = 0.0
591
592
593 @dataclass
594 class BeamCTCInferConfig:
595 beam_size: int
596 search_type: str = 'default'
597 preserve_alignments: bool = False
598 compute_timestamps: bool = False
599 return_best_hypothesis: bool = True
600
601 beam_alpha: float = 1.0
602 beam_beta: float = 0.0
603 kenlm_path: Optional[str] = None
604
605 flashlight_cfg: Optional[FlashlightConfig] = FlashlightConfig()
606 pyctcdecode_cfg: Optional[PyCTCDecodeConfig] = PyCTCDecodeConfig()
607
[end of nemo/collections/asr/parts/submodules/ctc_beam_decoding.py]
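The `DEFAULT_TOKEN_OFFSET` trick used by `default_beam_search` above — shifting each subword id into a single unicode character with `chr(idx + token_offset)` so the KenLM model sees one symbol per token, then recovering ids with `ord(c) - token_offset` — round-trips as follows. This is a minimal sketch; the helper names are hypothetical, only the constant and the two mappings come from the source:

```python
# DEFAULT_TOKEN_OFFSET mirrors the module-level constant in ctc_beam_decoding.py;
# the helper names below are hypothetical, for illustration only.
DEFAULT_TOKEN_OFFSET = 100

def encode_ids(token_ids, token_offset=DEFAULT_TOKEN_OFFSET):
    # Shift each subword id into a single unicode character, as done when
    # building the vocabulary handed to the KenLM beam scorer.
    return ''.join(chr(i + token_offset) for i in token_ids)

def decode_text(text, token_offset=DEFAULT_TOKEN_OFFSET):
    # Inverse mapping, as used when unpacking beam candidates back into ids:
    # pred_token_ids = [ord(c) - self.token_offset for c in candidate[1]]
    return [ord(c) - token_offset for c in text]

ids = [5, 17, 42]
assert decode_text(encode_ids(ids)) == ids
```

The offset keeps the encoded characters out of the low control-character range, which is why this step applies only to subword (`'subword'`) decoding and is skipped for character models.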
[start of nemo/collections/asr/parts/submodules/ctc_greedy_decoding.py]
1 # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from dataclasses import dataclass
16 from typing import List, Optional
17
18 import torch
19 from omegaconf import DictConfig, OmegaConf
20
21 from nemo.collections.asr.parts.utils import rnnt_utils
22 from nemo.collections.asr.parts.utils.asr_confidence_utils import ConfidenceMeasureConfig, ConfidenceMeasureMixin
23 from nemo.core.classes import Typing, typecheck
24 from nemo.core.neural_types import HypothesisType, LengthsType, LogprobsType, NeuralType
25 from nemo.utils import logging
26
27
28 def pack_hypotheses(hypotheses: List[rnnt_utils.Hypothesis], logitlen: torch.Tensor,) -> List[rnnt_utils.Hypothesis]:
29
30 if logitlen is not None:
31 if hasattr(logitlen, 'cpu'):
32 logitlen_cpu = logitlen.to('cpu')
33 else:
34 logitlen_cpu = logitlen
35
36 for idx, hyp in enumerate(hypotheses): # type: rnnt_utils.Hypothesis
37 hyp.y_sequence = torch.tensor(hyp.y_sequence, dtype=torch.long)
38
39 if logitlen is not None:
40 hyp.length = logitlen_cpu[idx]
41
42 if hyp.dec_state is not None:
43 hyp.dec_state = _states_to_device(hyp.dec_state)
44
45 return hypotheses
46
47
48 def _states_to_device(dec_state, device='cpu'):
49 if torch.is_tensor(dec_state):
50 dec_state = dec_state.to(device)
51
52 elif isinstance(dec_state, (list, tuple)):
53 dec_state = tuple(_states_to_device(dec_i, device) for dec_i in dec_state)
54
55 return dec_state
56
57
58 class GreedyCTCInfer(Typing, ConfidenceMeasureMixin):
59 """A greedy CTC decoder.
60
61 Provides a common abstraction for sample level and batch level greedy decoding.
62
63 Args:
64 blank_index: int index of the blank token. Can be 0 or len(vocabulary).
65 preserve_alignments: Bool flag which preserves the history of logprobs generated during
66 decoding (sample / batched). When set to true, the Hypothesis will contain
67 the non-null value for `logprobs` in it. Here, `logprobs` is a torch.Tensor.
68 compute_timestamps: A bool flag, which determines whether to compute the character/subword, or
69 word-based timestamp mapping the output log-probabilities to discrete intervals of timestamps.
70 The timestamps will be available in the returned Hypothesis.timestep as a dictionary.
71 preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores
72 generated during decoding. When set to true, the Hypothesis will contain
73 the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of floats.
74 confidence_measure_cfg: A dict-like object which contains the measure name and settings to compute per-frame
75 confidence scores.
76
77 name: The measure name (str).
78 Supported values:
79 - 'max_prob' for using the maximum token probability as a confidence.
80 - 'entropy' for using a normalized entropy of a log-likelihood vector.
81
82 entropy_type: Which type of entropy to use (str). Used if confidence_measure_cfg.name is set to `entropy`.
83 Supported values:
84 - 'gibbs' for the (standard) Gibbs entropy. If the alpha (α) is provided,
85 the formula is the following: H_α = -sum_i((p^α_i)*log(p^α_i)).
86 Note that for this entropy, the alpha should comply with the following inequality:
87 (log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= α <= (1+log(V-1))/log(V-1)
88 where V is the model vocabulary size.
89 - 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
90 Tsallis entropy formula is the following: H_α = 1/(α-1)*(1-sum_i(p^α_i)),
91 where α is a parameter. When α == 1, it works like the Gibbs entropy.
92 More: https://en.wikipedia.org/wiki/Tsallis_entropy
93 - 'renyi' for the Rényi entropy.
94 Rényi entropy formula is the following: H_α = 1/(1-α)*log_2(sum_i(p^α_i)),
95 where α is a parameter. When α == 1, it works like the Gibbs entropy.
96 More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
97 
98 alpha: Power scale for logsoftmax (α for entropies). Here we restrict it to be > 0.
99 When the alpha equals one, scaling is not applied to 'max_prob',
100 and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
101
102 entropy_norm: A mapping of the entropy value to the interval [0,1].
103 Supported values:
104 - 'lin' for using the linear mapping.
105 - 'exp' for using exponential mapping with linear shift.
106
107 """
108
109 @property
110 def input_types(self):
111 """Returns definitions of module input ports.
112 """
113 # Input can be of dimension -
114 # ('B', 'T', 'D') [Log probs] or ('B', 'T') [Labels]
115
116 return {
117 "decoder_output": NeuralType(None, LogprobsType()),
118 "decoder_lengths": NeuralType(tuple('B'), LengthsType()),
119 }
120
121 @property
122 def output_types(self):
123 """Returns definitions of module output ports.
124 """
125 return {"predictions": [NeuralType(elements_type=HypothesisType())]}
126
127 def __init__(
128 self,
129 blank_id: int,
130 preserve_alignments: bool = False,
131 compute_timestamps: bool = False,
132 preserve_frame_confidence: bool = False,
133 confidence_measure_cfg: Optional[DictConfig] = None,
134 ):
135 super().__init__()
136
137 self.blank_id = blank_id
138 self.preserve_alignments = preserve_alignments
139 # we need timestamps to extract non-blank per-frame confidence
140 self.compute_timestamps = compute_timestamps | preserve_frame_confidence
141 self.preserve_frame_confidence = preserve_frame_confidence
142
143 # set confidence calculation measure
144 self._init_confidence_measure(confidence_measure_cfg)
145
146 @typecheck()
147 def forward(
148 self, decoder_output: torch.Tensor, decoder_lengths: torch.Tensor,
149 ):
150 """Returns a list of hypotheses given an input batch of the encoder hidden embedding.
151 Output token is generated auto-repressively.
152
153 Args:
154 decoder_output: A tensor of size (batch, timesteps, features) or (batch, timesteps) (each timestep is a label).
155 decoder_lengths: list of int representing the length of each output
156 sequence in the batch.
157
158 Returns:
159 packed list containing batch number of sentences (Hypotheses).
160 """
161 with torch.inference_mode():
162 hypotheses = []
163 # Process each sequence independently
164 prediction_cpu_tensor = decoder_output.cpu()
165
166 if prediction_cpu_tensor.ndim < 2 or prediction_cpu_tensor.ndim > 3:
167 raise ValueError(
168 f"`decoder_output` must be a tensor of shape [B, T] (labels, int) or "
169 f"[B, T, V] (log probs, float). Provided shape = {prediction_cpu_tensor.shape}"
170 )
171
172 # determine type of input - logprobs or labels
173 if prediction_cpu_tensor.ndim == 2: # labels
174 greedy_decode = self._greedy_decode_labels
175 else:
176 greedy_decode = self._greedy_decode_logprobs
177
178 for ind in range(prediction_cpu_tensor.shape[0]):
179 out_len = decoder_lengths[ind] if decoder_lengths is not None else None
180 hypothesis = greedy_decode(prediction_cpu_tensor[ind], out_len)
181 hypotheses.append(hypothesis)
182
183 # Pack results into Hypotheses
184 packed_result = pack_hypotheses(hypotheses, decoder_lengths)
185
186 return (packed_result,)
187
188 @torch.no_grad()
189 def _greedy_decode_logprobs(self, x: torch.Tensor, out_len: torch.Tensor):
190 # x: [T, D]
191 # out_len: [seq_len]
192
193 # Initialize blank state and empty label set in Hypothesis
194 hypothesis = rnnt_utils.Hypothesis(score=0.0, y_sequence=[], dec_state=None, timestep=[], last_token=None)
195 prediction = x.detach().cpu()
196
197 if out_len is not None:
198 prediction = prediction[:out_len]
199
200 prediction_logprobs, prediction_labels = prediction.max(dim=-1)
201
202 non_blank_ids = prediction_labels != self.blank_id
203 hypothesis.y_sequence = prediction_labels.numpy().tolist()
204 hypothesis.score = (prediction_logprobs[non_blank_ids]).sum()
205
206 if self.preserve_alignments:
207 # Preserve the logprobs, as well as labels after argmax
208 hypothesis.alignments = (prediction.clone(), prediction_labels.clone())
209
210 if self.compute_timestamps:
211 hypothesis.timestep = torch.nonzero(non_blank_ids, as_tuple=False)[:, 0].numpy().tolist()
212
213 if self.preserve_frame_confidence:
214 hypothesis.frame_confidence = self._get_confidence(prediction)
215
216 return hypothesis
217
218 @torch.no_grad()
219 def _greedy_decode_labels(self, x: torch.Tensor, out_len: torch.Tensor):
220 # x: [T]
221 # out_len: [seq_len]
222
223 # Initialize blank state and empty label set in Hypothesis
224 hypothesis = rnnt_utils.Hypothesis(score=0.0, y_sequence=[], dec_state=None, timestep=[], last_token=None)
225 prediction_labels = x.detach().cpu()
226
227 if out_len is not None:
228 prediction_labels = prediction_labels[:out_len]
229
230 non_blank_ids = prediction_labels != self.blank_id
231 hypothesis.y_sequence = prediction_labels.numpy().tolist()
232 hypothesis.score = -1.0
233
234 if self.preserve_alignments:
235 raise ValueError("Requested for alignments, but predictions provided were labels, not log probabilities.")
236
237 if self.compute_timestamps:
238 hypothesis.timestep = torch.nonzero(non_blank_ids, as_tuple=False)[:, 0].numpy().tolist()
239
240 if self.preserve_frame_confidence:
241 raise ValueError(
242 "Requested for per-frame confidence, but predictions provided were labels, not log probabilities."
243 )
244
245 return hypothesis
246
247 def __call__(self, *args, **kwargs):
248 return self.forward(*args, **kwargs)
249
250
251 @dataclass
252 class GreedyCTCInferConfig:
253 preserve_alignments: bool = False
254 compute_timestamps: bool = False
255 preserve_frame_confidence: bool = False
256 confidence_measure_cfg: Optional[ConfidenceMeasureConfig] = ConfidenceMeasureConfig()
257 confidence_method_cfg: str = "DEPRECATED"
258
259 def __post_init__(self):
260 # OmegaConf.structured ensures that post_init check is always executed
261 self.confidence_measure_cfg = OmegaConf.structured(
262 self.confidence_measure_cfg
263 if isinstance(self.confidence_measure_cfg, ConfidenceMeasureConfig)
264 else ConfidenceMeasureConfig(**self.confidence_measure_cfg)
265 )
266 if self.confidence_method_cfg != "DEPRECATED":
267 logging.warning(
268 "`confidence_method_cfg` is deprecated and will be removed in the future. "
269 "Please use `confidence_measure_cfg` instead."
270 )
271
272 # TODO (alaptev): delete the following two lines sometime in the future
273 logging.warning("Re-writing `confidence_measure_cfg` with the value of `confidence_method_cfg`.")
274 # OmegaConf.structured ensures that post_init check is always executed
275 self.confidence_measure_cfg = OmegaConf.structured(
276 self.confidence_method_cfg
277 if isinstance(self.confidence_method_cfg, ConfidenceMeasureConfig)
278 else ConfidenceMeasureConfig(**self.confidence_method_cfg)
279 )
280
[end of nemo/collections/asr/parts/submodules/ctc_greedy_decoding.py]
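The per-frame greedy selection performed by `_greedy_decode_logprobs` in the file above can be sketched in plain Python (a minimal, hypothetical illustration of the argmax/score/timestep logic, not the NeMo API; the function name here is invented):

```python
def greedy_decode_logprobs(logprobs, blank_id):
    """Per-frame greedy CTC-style decoding.

    logprobs: list of per-frame log-probability vectors, shape [T][V].
    Returns (labels, score, timesteps):
      - labels: the argmax label for every frame (blanks included),
      - score: sum of the winning log-probs over non-blank frames only,
      - timesteps: indices of the non-blank frames.
    """
    labels, score, timesteps = [], 0.0, []
    for t, frame in enumerate(logprobs):
        best = max(range(len(frame)), key=frame.__getitem__)  # argmax over vocab
        labels.append(best)
        if best != blank_id:
            score += frame[best]
            timesteps.append(t)
    return labels, score, timesteps
```

This mirrors how the class keeps the full label sequence but accumulates the score and timestamps only over non-blank frames.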
[start of nemo/collections/asr/parts/submodules/rnnt_greedy_decoding.py]
1 # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 # Copyright 2017 Johns Hopkins University (Shinji Watanabe)
16 #
17 # Licensed under the Apache License, Version 2.0 (the "License");
18 # you may not use this file except in compliance with the License.
19 # You may obtain a copy of the License at
20 #
21 # http://www.apache.org/licenses/LICENSE-2.0
22 #
23 # Unless required by applicable law or agreed to in writing, software
24 # distributed under the License is distributed on an "AS IS" BASIS,
25 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
26 # See the License for the specific language governing permissions and
27 # limitations under the License.
28
29 from dataclasses import dataclass
30 from typing import List, Optional, Tuple, Union
31
32 import numpy as np
33 import torch
34 from omegaconf import DictConfig, OmegaConf
35
36 from nemo.collections.asr.modules import rnnt_abstract
37 from nemo.collections.asr.parts.utils import rnnt_utils
38 from nemo.collections.asr.parts.utils.asr_confidence_utils import ConfidenceMeasureConfig, ConfidenceMeasureMixin
39 from nemo.collections.common.parts.rnn import label_collate
40 from nemo.core.classes import Typing, typecheck
41 from nemo.core.neural_types import AcousticEncodedRepresentation, ElementType, HypothesisType, LengthsType, NeuralType
42 from nemo.utils import logging
43
44
45 def pack_hypotheses(hypotheses: List[rnnt_utils.Hypothesis], logitlen: torch.Tensor,) -> List[rnnt_utils.Hypothesis]:
46
47 if hasattr(logitlen, 'cpu'):
48 logitlen_cpu = logitlen.to('cpu')
49 else:
50 logitlen_cpu = logitlen
51
52 for idx, hyp in enumerate(hypotheses): # type: rnnt_utils.Hypothesis
53 hyp.y_sequence = torch.tensor(hyp.y_sequence, dtype=torch.long)
54 hyp.length = logitlen_cpu[idx]
55
56 if hyp.dec_state is not None:
57 hyp.dec_state = _states_to_device(hyp.dec_state)
58
59 return hypotheses
60
61
62 def _states_to_device(dec_state, device='cpu'):
63 if torch.is_tensor(dec_state):
64 dec_state = dec_state.to(device)
65
66 elif isinstance(dec_state, (list, tuple)):
67 dec_state = tuple(_states_to_device(dec_i, device) for dec_i in dec_state)
68
69 return dec_state
70
71
72 class _GreedyRNNTInfer(Typing, ConfidenceMeasureMixin):
73 """A greedy transducer decoder.
74
75 Provides a common abstraction for sample level and batch level greedy decoding.
76
77 Args:
78 decoder_model: rnnt_utils.AbstractRNNTDecoder implementation.
79 joint_model: rnnt_utils.AbstractRNNTJoint implementation.
80 blank_index: int index of the blank token. Can be 0 or len(vocabulary).
81 max_symbols_per_step: Optional int. The maximum number of symbols that can be added
82 to a sequence in a single time step; if set to None then there is
83 no limit.
84 preserve_alignments: Bool flag which preserves the history of alignments generated during
85 greedy decoding (sample / batched). When set to true, the Hypothesis will contain
86 the non-null value for `alignments` in it. Here, `alignments` is a List of List of
87 Tuple(Tensor (of length V + 1), Tensor(scalar, label after argmax)).
88
89 The length of the list corresponds to the Acoustic Length (T).
90 Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more targets from a vocabulary.
91 U is the number of target tokens for the current timestep Ti.
92 preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores generated
93 during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
94 the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of List of floats.
95
96 The length of the list corresponds to the Acoustic Length (T).
97 Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more confidence scores.
98 U is the number of target tokens for the current timestep Ti.
99 confidence_measure_cfg: A dict-like object which contains the measure name and settings to compute per-frame
100 confidence scores.
101
102 name: The measure name (str).
103 Supported values:
104 - 'max_prob' for using the maximum token probability as a confidence.
105 - 'entropy' for using a normalized entropy of a log-likelihood vector.
106
107 entropy_type: Which type of entropy to use (str). Used if confidence_measure_cfg.name is set to `entropy`.
108 Supported values:
109 - 'gibbs' for the (standard) Gibbs entropy. If the alpha (α) is provided,
110 the formula is the following: H_α = -sum_i((p^α_i)*log(p^α_i)).
111 Note that for this entropy, the alpha should comply with the following inequality:
112 (log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= α <= (1+log(V-1))/log(V-1)
113 where V is the model vocabulary size.
114 - 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
115 Tsallis entropy formula is the following: H_α = 1/(α-1)*(1-sum_i(p^α_i)),
116 where α is a parameter. When α == 1, it works like the Gibbs entropy.
117 More: https://en.wikipedia.org/wiki/Tsallis_entropy
118 - 'renyi' for the Rényi entropy.
119 Rényi entropy formula is the following: H_α = 1/(1-α)*log_2(sum_i(p^α_i)),
120 where α is a parameter. When α == 1, it works like the Gibbs entropy.
121 More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
122
123 alpha: Power scale for logsoftmax (α for entropies). Here we restrict it to be > 0.
124 When the alpha equals one, scaling is not applied to 'max_prob',
125 and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
126
127 entropy_norm: A mapping of the entropy value to the interval [0,1].
128 Supported values:
129 - 'lin' for using the linear mapping.
130 - 'exp' for using exponential mapping with linear shift.
131 """
132
133 @property
134 def input_types(self):
135 """Returns definitions of module input ports.
136 """
137 return {
138 "encoder_output": NeuralType(('B', 'D', 'T'), AcousticEncodedRepresentation()),
139 "encoded_lengths": NeuralType(tuple('B'), LengthsType()),
140 "partial_hypotheses": [NeuralType(elements_type=HypothesisType(), optional=True)], # must always be last
141 }
142
143 @property
144 def output_types(self):
145 """Returns definitions of module output ports.
146 """
147 return {"predictions": [NeuralType(elements_type=HypothesisType())]}
148
149 def __init__(
150 self,
151 decoder_model: rnnt_abstract.AbstractRNNTDecoder,
152 joint_model: rnnt_abstract.AbstractRNNTJoint,
153 blank_index: int,
154 max_symbols_per_step: Optional[int] = None,
155 preserve_alignments: bool = False,
156 preserve_frame_confidence: bool = False,
157 confidence_measure_cfg: Optional[DictConfig] = None,
158 ):
159 super().__init__()
160 self.decoder = decoder_model
161 self.joint = joint_model
162
163 self._blank_index = blank_index
164 self._SOS = blank_index # Start of single index
165 self.max_symbols = max_symbols_per_step
166 self.preserve_alignments = preserve_alignments
167 self.preserve_frame_confidence = preserve_frame_confidence
168
169 # set confidence calculation measure
170 self._init_confidence_measure(confidence_measure_cfg)
171
172 def __call__(self, *args, **kwargs):
173 return self.forward(*args, **kwargs)
174
175 @torch.no_grad()
176 def _pred_step(
177 self,
178 label: Union[torch.Tensor, int],
179 hidden: Optional[torch.Tensor],
180 add_sos: bool = False,
181 batch_size: Optional[int] = None,
182 ) -> Tuple[torch.Tensor, torch.Tensor]:
183 """
184 Common prediction step based on the AbstractRNNTDecoder implementation.
185
186 Args:
187 label: (int/torch.Tensor): Label or "Start-of-Signal" token.
188 hidden: (Optional torch.Tensor): RNN State vector
189 add_sos (bool): Whether to add a zero vector at the beginning as "start of sentence" token.
190 batch_size: Batch size of the output tensor.
191
192 Returns:
193 g: (B, U, H) if add_sos is false, else (B, U + 1, H)
194 hid: (h, c) where h is the final sequence hidden state and c is
195 the final cell state:
196 h (tensor), shape (L, B, H)
197 c (tensor), shape (L, B, H)
198 """
199 if isinstance(label, torch.Tensor):
200 # label: [batch, 1]
201 if label.dtype != torch.long:
202 label = label.long()
203
204 else:
205 # Label is an integer
206 if label == self._SOS:
207 return self.decoder.predict(None, hidden, add_sos=add_sos, batch_size=batch_size)
208
209 label = label_collate([[label]])
210
211 # output: [B, 1, K]
212 return self.decoder.predict(label, hidden, add_sos=add_sos, batch_size=batch_size)
213
214 def _joint_step(self, enc, pred, log_normalize: Optional[bool] = None):
215 """
216 Common joint step based on AbstractRNNTJoint implementation.
217
218 Args:
219 enc: Output of the Encoder model. A torch.Tensor of shape [B, 1, H1]
220 pred: Output of the Decoder model. A torch.Tensor of shape [B, 1, H2]
221 log_normalize: Whether to log normalize or not. None will log normalize only for CPU.
222
223 Returns:
224 logits of shape (B, T=1, U=1, V + 1)
225 """
226 with torch.no_grad():
227 logits = self.joint.joint(enc, pred)
228
229 if log_normalize is None:
230 if not logits.is_cuda: # Use log softmax only if on CPU
231 logits = logits.log_softmax(dim=len(logits.shape) - 1)
232 else:
233 if log_normalize:
234 logits = logits.log_softmax(dim=len(logits.shape) - 1)
235
236 return logits
237
238
239 class GreedyRNNTInfer(_GreedyRNNTInfer):
240 """A greedy transducer decoder.
241
242 Sequence level greedy decoding, performed auto-regressively.
243
244 Args:
245 decoder_model: rnnt_utils.AbstractRNNTDecoder implementation.
246 joint_model: rnnt_utils.AbstractRNNTJoint implementation.
247 blank_index: int index of the blank token. Can be 0 or len(vocabulary).
248 max_symbols_per_step: Optional int. The maximum number of symbols that can be added
249 to a sequence in a single time step; if set to None then there is
250 no limit.
251 preserve_alignments: Bool flag which preserves the history of alignments generated during
252 greedy decoding (sample / batched). When set to true, the Hypothesis will contain
253 the non-null value for `alignments` in it. Here, `alignments` is a List of List of
254 Tuple(Tensor (of length V + 1), Tensor(scalar, label after argmax)).
255
256 The length of the list corresponds to the Acoustic Length (T).
257 Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more targets from a vocabulary.
258 U is the number of target tokens for the current timestep Ti.
259 preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores generated
260 during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
261 the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of List of floats.
262
263 The length of the list corresponds to the Acoustic Length (T).
264 Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more confidence scores.
265 U is the number of target tokens for the current timestep Ti.
266 confidence_measure_cfg: A dict-like object which contains the measure name and settings to compute per-frame
267 confidence scores.
268
269 name: The measure name (str).
270 Supported values:
271 - 'max_prob' for using the maximum token probability as a confidence.
272 - 'entropy' for using a normalized entropy of a log-likelihood vector.
273
274 entropy_type: Which type of entropy to use (str). Used if confidence_measure_cfg.name is set to `entropy`.
275 Supported values:
276 - 'gibbs' for the (standard) Gibbs entropy. If the alpha (α) is provided,
277 the formula is the following: H_α = -sum_i((p^α_i)*log(p^α_i)).
278 Note that for this entropy, the alpha should comply with the following inequality:
279 (log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= α <= (1+log(V-1))/log(V-1)
280 where V is the model vocabulary size.
281 - 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
282 Tsallis entropy formula is the following: H_α = 1/(α-1)*(1-sum_i(p^α_i)),
283 where α is a parameter. When α == 1, it works like the Gibbs entropy.
284 More: https://en.wikipedia.org/wiki/Tsallis_entropy
285 - 'renyi' for the Rényi entropy.
286 Rényi entropy formula is the following: H_α = 1/(1-α)*log_2(sum_i(p^α_i)),
287 where α is a parameter. When α == 1, it works like the Gibbs entropy.
288 More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
289
290 alpha: Power scale for logsoftmax (α for entropies). Here we restrict it to be > 0.
291 When the alpha equals one, scaling is not applied to 'max_prob',
292 and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
293
294 entropy_norm: A mapping of the entropy value to the interval [0,1].
295 Supported values:
296 - 'lin' for using the linear mapping.
297 - 'exp' for using exponential mapping with linear shift.
298 """
299
300 def __init__(
301 self,
302 decoder_model: rnnt_abstract.AbstractRNNTDecoder,
303 joint_model: rnnt_abstract.AbstractRNNTJoint,
304 blank_index: int,
305 max_symbols_per_step: Optional[int] = None,
306 preserve_alignments: bool = False,
307 preserve_frame_confidence: bool = False,
308 confidence_measure_cfg: Optional[DictConfig] = None,
309 ):
310 super().__init__(
311 decoder_model=decoder_model,
312 joint_model=joint_model,
313 blank_index=blank_index,
314 max_symbols_per_step=max_symbols_per_step,
315 preserve_alignments=preserve_alignments,
316 preserve_frame_confidence=preserve_frame_confidence,
317 confidence_measure_cfg=confidence_measure_cfg,
318 )
319
320 @typecheck()
321 def forward(
322 self,
323 encoder_output: torch.Tensor,
324 encoded_lengths: torch.Tensor,
325 partial_hypotheses: Optional[List[rnnt_utils.Hypothesis]] = None,
326 ):
327 """Returns a list of hypotheses given an input batch of the encoder hidden embedding.
328 Output token is generated auto-regressively.
329
330 Args:
331 encoder_output: A tensor of size (batch, features, timesteps).
332 encoded_lengths: list of int representing the length of each encoded
333 sequence in the batch.
334
335 Returns:
336 packed list containing batch number of sentences (Hypotheses).
337 """
338 # Preserve decoder and joint training state
339 decoder_training_state = self.decoder.training
340 joint_training_state = self.joint.training
341
342 with torch.inference_mode():
343 # Apply optional preprocessing
344 encoder_output = encoder_output.transpose(1, 2) # (B, T, D)
345
346 self.decoder.eval()
347 self.joint.eval()
348
349 hypotheses = []
350 # Process each sequence independently
351 with self.decoder.as_frozen(), self.joint.as_frozen():
352 for batch_idx in range(encoder_output.size(0)):
353 inseq = encoder_output[batch_idx, :, :].unsqueeze(1) # [T, 1, D]
354 logitlen = encoded_lengths[batch_idx]
355
356 partial_hypothesis = partial_hypotheses[batch_idx] if partial_hypotheses is not None else None
357 hypothesis = self._greedy_decode(inseq, logitlen, partial_hypotheses=partial_hypothesis)
358 hypotheses.append(hypothesis)
359
360 # Pack results into Hypotheses
361 packed_result = pack_hypotheses(hypotheses, encoded_lengths)
362
363 self.decoder.train(decoder_training_state)
364 self.joint.train(joint_training_state)
365
366 return (packed_result,)
367
368 @torch.no_grad()
369 def _greedy_decode(
370 self, x: torch.Tensor, out_len: torch.Tensor, partial_hypotheses: Optional[rnnt_utils.Hypothesis] = None
371 ):
372 # x: [T, 1, D]
373 # out_len: [seq_len]
374
375 # Initialize blank state and empty label set in Hypothesis
376 hypothesis = rnnt_utils.Hypothesis(score=0.0, y_sequence=[], dec_state=None, timestep=[], last_token=None)
377
378 if partial_hypotheses is not None:
379 hypothesis.last_token = partial_hypotheses.last_token
380 hypothesis.y_sequence = (
381 partial_hypotheses.y_sequence.cpu().tolist()
382 if isinstance(partial_hypotheses.y_sequence, torch.Tensor)
383 else partial_hypotheses.y_sequence
384 )
385 if partial_hypotheses.dec_state is not None:
386 hypothesis.dec_state = self.decoder.batch_concat_states([partial_hypotheses.dec_state])
387 hypothesis.dec_state = _states_to_device(hypothesis.dec_state, x.device)
388
389 if self.preserve_alignments:
390 # Alignments is a 2-dimensional dangling list representing T x U
391 hypothesis.alignments = [[]]
392
393 if self.preserve_frame_confidence:
394 hypothesis.frame_confidence = [[]]
395
396 # For timestep t in X_t
397 for time_idx in range(out_len):
398 # Extract encoder embedding at timestep t
399 # f = x[time_idx, :, :].unsqueeze(0) # [1, 1, D]
400 f = x.narrow(dim=0, start=time_idx, length=1)
401
402 # Setup exit flags and counter
403 not_blank = True
404 symbols_added = 0
405 # While blank is not predicted and we don't run out of max symbols per timestep
406 while not_blank and (self.max_symbols is None or symbols_added < self.max_symbols):
407 # In the first timestep, we initialize the network with RNNT Blank
408 # In later timesteps, we provide previous predicted label as input.
409 if hypothesis.last_token is None and hypothesis.dec_state is None:
410 last_label = self._SOS
411 else:
412 last_label = label_collate([[hypothesis.last_token]])
413
414 # Perform prediction network and joint network steps.
415 g, hidden_prime = self._pred_step(last_label, hypothesis.dec_state)
416 # If preserving per-frame confidence, log_normalize must be true
417 logp = self._joint_step(f, g, log_normalize=True if self.preserve_frame_confidence else None)[
418 0, 0, 0, :
419 ]
420
421 del g
422
423 # torch.max(0) op doesn't exist for FP16.
424 if logp.dtype != torch.float32:
425 logp = logp.float()
426
427 # get index k, of max prob
428 v, k = logp.max(0)
429 k = k.item() # K is the label at timestep t_s in inner loop, s >= 0.
430
431 if self.preserve_alignments:
432 # insert logprobs into last timestep
433 hypothesis.alignments[-1].append((logp.to('cpu'), torch.tensor(k, dtype=torch.int32)))
434
435 if self.preserve_frame_confidence:
436 # insert confidence into last timestep
437 hypothesis.frame_confidence[-1].append(self._get_confidence(logp))
438
439 del logp
440
441 # If blank token is predicted, exit inner loop, move onto next timestep t
442 if k == self._blank_index:
443 not_blank = False
444 else:
445 # Append token to label set, update RNN state.
446 hypothesis.y_sequence.append(k)
447 hypothesis.score += float(v)
448 hypothesis.timestep.append(time_idx)
449 hypothesis.dec_state = hidden_prime
450 hypothesis.last_token = k
451
452 # Increment token counter.
453 symbols_added += 1
454
455 if self.preserve_alignments:
456 # convert Ti-th logits into a torch array
457 hypothesis.alignments.append([]) # blank buffer for next timestep
458
459 if self.preserve_frame_confidence:
460 hypothesis.frame_confidence.append([]) # blank buffer for next timestep
461
462 # Remove trailing empty list of Alignments
463 if self.preserve_alignments:
464 if len(hypothesis.alignments[-1]) == 0:
465 del hypothesis.alignments[-1]
466
467 # Remove trailing empty list of per-frame confidence
468 if self.preserve_frame_confidence:
469 if len(hypothesis.frame_confidence[-1]) == 0:
470 del hypothesis.frame_confidence[-1]
471
472 # Unpack the hidden states
473 hypothesis.dec_state = self.decoder.batch_select_state(hypothesis.dec_state, 0)
474
475 return hypothesis
476
477
478 class GreedyBatchedRNNTInfer(_GreedyRNNTInfer):
479 """A batch level greedy transducer decoder.
480
481 Batch level greedy decoding, performed auto-regressively.
482
483 Args:
484 decoder_model: rnnt_utils.AbstractRNNTDecoder implementation.
485 joint_model: rnnt_utils.AbstractRNNTJoint implementation.
486 blank_index: int index of the blank token. Can be 0 or len(vocabulary).
487 max_symbols_per_step: Optional int. The maximum number of symbols that can be added
488 to a sequence in a single time step; if set to None then there is
489 no limit.
490 preserve_alignments: Bool flag which preserves the history of alignments generated during
491 greedy decoding (sample / batched). When set to true, the Hypothesis will contain
492 the non-null value for `alignments` in it. Here, `alignments` is a List of List of
493 Tuple(Tensor (of length V + 1), Tensor(scalar, label after argmax)).
494
495 The length of the list corresponds to the Acoustic Length (T).
496 Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more targets from a vocabulary.
497 U is the number of target tokens for the current timestep Ti.
498 preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores generated
499 during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
500 the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of List of floats.
501
502 The length of the list corresponds to the Acoustic Length (T).
503 Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more confidence scores.
504 U is the number of target tokens for the current timestep Ti.
505 confidence_measure_cfg: A dict-like object which contains the measure name and settings to compute per-frame
506 confidence scores.
507
508 name: The measure name (str).
509 Supported values:
510 - 'max_prob' for using the maximum token probability as a confidence.
511 - 'entropy' for using a normalized entropy of a log-likelihood vector.
512
513 entropy_type: Which type of entropy to use (str). Used if confidence_measure_cfg.name is set to `entropy`.
514 Supported values:
515 - 'gibbs' for the (standard) Gibbs entropy. If the alpha (α) is provided,
516 the formula is the following: H_α = -sum_i((p^α_i)*log(p^α_i)).
517 Note that for this entropy, the alpha should comply with the following inequality:
518 (log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= α <= (1+log(V-1))/log(V-1)
519 where V is the model vocabulary size.
520 - 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
521 Tsallis entropy formula is the following: H_α = 1/(α-1)*(1-sum_i(p^α_i)),
522 where α is a parameter. When α == 1, it works like the Gibbs entropy.
523 More: https://en.wikipedia.org/wiki/Tsallis_entropy
524 - 'renyi' for the Rényi entropy.
525 Rényi entropy formula is the following: H_α = 1/(1-α)*log_2(sum_i(p^α_i)),
526 where α is a parameter. When α == 1, it works like the Gibbs entropy.
527 More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
528
529 alpha: Power scale for logsoftmax (α for entropies). Here we restrict it to be > 0.
530 When the alpha equals one, scaling is not applied to 'max_prob',
531 and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
532
533 entropy_norm: A mapping of the entropy value to the interval [0,1].
534 Supported values:
535 - 'lin' for using the linear mapping.
536 - 'exp' for using exponential mapping with linear shift.
537 """
538
539 def __init__(
540 self,
541 decoder_model: rnnt_abstract.AbstractRNNTDecoder,
542 joint_model: rnnt_abstract.AbstractRNNTJoint,
543 blank_index: int,
544 max_symbols_per_step: Optional[int] = None,
545 preserve_alignments: bool = False,
546 preserve_frame_confidence: bool = False,
547 confidence_measure_cfg: Optional[DictConfig] = None,
548 ):
549 super().__init__(
550 decoder_model=decoder_model,
551 joint_model=joint_model,
552 blank_index=blank_index,
553 max_symbols_per_step=max_symbols_per_step,
554 preserve_alignments=preserve_alignments,
555 preserve_frame_confidence=preserve_frame_confidence,
556 confidence_measure_cfg=confidence_measure_cfg,
557 )
558
559 # Depending on availability of `blank_as_pad` support
560 # switch between more efficient batch decoding technique
561 if self.decoder.blank_as_pad:
562 self._greedy_decode = self._greedy_decode_blank_as_pad
563 else:
564 self._greedy_decode = self._greedy_decode_masked
565
566 @typecheck()
567 def forward(
568 self,
569 encoder_output: torch.Tensor,
570 encoded_lengths: torch.Tensor,
571 partial_hypotheses: Optional[List[rnnt_utils.Hypothesis]] = None,
572 ):
573 """Returns a list of hypotheses given an input batch of the encoder hidden embedding.
574 Output token is generated auto-regressively.
575
576 Args:
577 encoder_output: A tensor of size (batch, features, timesteps).
578 encoded_lengths: list of int representing the length of each encoded
579 sequence in the batch.
580
581 Returns:
582 packed list containing batch number of sentences (Hypotheses).
583 """
584 # Preserve decoder and joint training state
585 decoder_training_state = self.decoder.training
586 joint_training_state = self.joint.training
587
588 with torch.inference_mode():
589 # Apply optional preprocessing
590 encoder_output = encoder_output.transpose(1, 2) # (B, T, D)
591 logitlen = encoded_lengths
592
593 self.decoder.eval()
594 self.joint.eval()
595
596 with self.decoder.as_frozen(), self.joint.as_frozen():
597 inseq = encoder_output # [B, T, D]
598 hypotheses = self._greedy_decode(
599 inseq, logitlen, device=inseq.device, partial_hypotheses=partial_hypotheses
600 )
601
602 # Pack the hypotheses results
603 packed_result = pack_hypotheses(hypotheses, logitlen)
604
605 self.decoder.train(decoder_training_state)
606 self.joint.train(joint_training_state)
607
608 return (packed_result,)
609
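The control flow of `forward()` and the per-frame emission loop can be sketched in isolation. The following is a minimal, self-contained single-sample sketch, not the class API above: `BLANK` and `toy_greedy_decode` are illustrative names, and the precomputed score rows stand in for joint-network outputs that the real code produces on the fly from the prediction and joint networks.

```python
# Minimal single-sample sketch of the greedy RNN-T emission loop.
# `frame_logps` stands in for joint-network outputs: one list of score rows
# per encoder frame (the real code computes each row auto-regressively).
BLANK = 3  # toy blank index; toy vocabulary is {0, 1, 2} + blank

def toy_greedy_decode(frame_logps, max_symbols=4):
    """Emit (token, frame) pairs until blank or max_symbols per frame."""
    hypothesis = []
    for time_idx, rows in enumerate(frame_logps):
        symbols_added = 0
        for row in rows:
            if max_symbols is not None and symbols_added >= max_symbols:
                break  # cap on symbols emitted at a single frame
            k = max(range(len(row)), key=row.__getitem__)  # argmax over vocab + blank
            if k == BLANK:
                break  # blank advances to the next encoder frame
            hypothesis.append((k, time_idx))
            symbols_added += 1
    return hypothesis
```

For example, `toy_greedy_decode([[[0, 5, 0, 1], [0, 0, 0, 9]]])` emits token 1 at frame 0, then stops on the blank prediction.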
610 def _greedy_decode_blank_as_pad(
611 self,
612 x: torch.Tensor,
613 out_len: torch.Tensor,
614 device: torch.device,
615 partial_hypotheses: Optional[List[rnnt_utils.Hypothesis]] = None,
616 ):
617 if partial_hypotheses is not None:
618             raise NotImplementedError("`partial_hypotheses` support is not implemented")
619
620 with torch.inference_mode():
621 # x: [B, T, D]
622 # out_len: [B]
623 # device: torch.device
624
625 # Initialize list of Hypothesis
626 batchsize = x.shape[0]
627 hypotheses = [
628 rnnt_utils.Hypothesis(score=0.0, y_sequence=[], timestep=[], dec_state=None) for _ in range(batchsize)
629 ]
630
631 # Initialize Hidden state matrix (shared by entire batch)
632 hidden = None
633
634 # If alignments need to be preserved, register a dangling list to hold the values
635 if self.preserve_alignments:
636 # alignments is a 3-dimensional dangling list representing B x T x U
637 for hyp in hypotheses:
638 hyp.alignments = [[]]
639
640 # If confidence scores need to be preserved, register a dangling list to hold the values
641 if self.preserve_frame_confidence:
642 # frame_confidence is a 3-dimensional dangling list representing B x T x U
643 for hyp in hypotheses:
644 hyp.frame_confidence = [[]]
645
646 # Last Label buffer + Last Label without blank buffer
647 # batch level equivalent of the last_label
648 last_label = torch.full([batchsize, 1], fill_value=self._blank_index, dtype=torch.long, device=device)
649
650 # Mask buffers
651 blank_mask = torch.full([batchsize], fill_value=0, dtype=torch.bool, device=device)
652
653 # Get max sequence length
654 max_out_len = out_len.max()
655 for time_idx in range(max_out_len):
656 f = x.narrow(dim=1, start=time_idx, length=1) # [B, 1, D]
657
658 # Prepare t timestamp batch variables
659 not_blank = True
660 symbols_added = 0
661
662 # Reset blank mask
663 blank_mask.mul_(False)
664
665 # Update blank mask with time mask
666 # Batch: [B, T, D], but Bi may have seq len < max(seq_lens_in_batch)
667                 # Forcibly mask with "blank" tokens, for all samples where current time step T > seq_len
668 blank_mask = time_idx >= out_len
669 # Start inner loop
670 while not_blank and (self.max_symbols is None or symbols_added < self.max_symbols):
671 # Batch prediction and joint network steps
672 # If very first prediction step, submit SOS tag (blank) to pred_step.
673 # This feeds a zero tensor as input to AbstractRNNTDecoder to prime the state
674 if time_idx == 0 and symbols_added == 0 and hidden is None:
675 g, hidden_prime = self._pred_step(self._SOS, hidden, batch_size=batchsize)
676 else:
677 # Perform batch step prediction of decoder, getting new states and scores ("g")
678 g, hidden_prime = self._pred_step(last_label, hidden, batch_size=batchsize)
679
680 # Batched joint step - Output = [B, V + 1]
681 # If preserving per-frame confidence, log_normalize must be true
682 logp = self._joint_step(f, g, log_normalize=True if self.preserve_frame_confidence else None)[
683 :, 0, 0, :
684 ]
685
686 if logp.dtype != torch.float32:
687 logp = logp.float()
688
689 # Get index k, of max prob for batch
690 v, k = logp.max(1)
691 del g
692
693 # Update blank mask with current predicted blanks
694 # This is accumulating blanks over all time steps T and all target steps min(max_symbols, U)
695 k_is_blank = k == self._blank_index
696 blank_mask.bitwise_or_(k_is_blank)
697 all_blanks = torch.all(blank_mask)
698
699 del k_is_blank
700
701 # If preserving alignments, check if sequence length of sample has been reached
702 # before adding alignment
703 if self.preserve_alignments:
704 # Insert logprobs into last timestep per sample
705 logp_vals = logp.to('cpu')
706 logp_ids = logp_vals.max(1)[1]
707 for batch_idx, is_blank in enumerate(blank_mask):
708 # we only want to update non-blanks, unless we are at the last step in the loop where
709 # all elements produced blanks, otherwise there will be duplicate predictions
710 # saved in alignments
711 if time_idx < out_len[batch_idx] and (all_blanks or not is_blank):
712 hypotheses[batch_idx].alignments[-1].append(
713 (logp_vals[batch_idx], logp_ids[batch_idx])
714 )
715 del logp_vals
716
717 # If preserving per-frame confidence, check if sequence length of sample has been reached
718 # before adding confidence scores
719 if self.preserve_frame_confidence:
720 # Insert probabilities into last timestep per sample
721 confidence = self._get_confidence(logp)
722 for batch_idx, is_blank in enumerate(blank_mask):
723 if time_idx < out_len[batch_idx] and (all_blanks or not is_blank):
724 hypotheses[batch_idx].frame_confidence[-1].append(confidence[batch_idx])
725 del logp
726
727 # If all samples predict / have predicted prior blanks, exit loop early
728 # This is equivalent to if single sample predicted k
729 if all_blanks:
730 not_blank = False
731 else:
732 # Collect batch indices where blanks occurred now/past
733 blank_indices = (blank_mask == 1).nonzero(as_tuple=False)
734
735 # Recover prior state for all samples which predicted blank now/past
736 if hidden is not None:
737 # LSTM has 2 states
738 hidden_prime = self.decoder.batch_copy_states(hidden_prime, hidden, blank_indices)
739
740 elif len(blank_indices) > 0 and hidden is None:
741 # Reset state if there were some blank and other non-blank predictions in batch
742 # Original state is filled with zeros so we just multiply
743 # LSTM has 2 states
744 hidden_prime = self.decoder.batch_copy_states(hidden_prime, None, blank_indices, value=0.0)
745
746 # Recover prior predicted label for all samples which predicted blank now/past
747 k[blank_indices] = last_label[blank_indices, 0]
748
749 # Update new label and hidden state for next iteration
750 last_label = k.clone().view(-1, 1)
751 hidden = hidden_prime
752
753 # Update predicted labels, accounting for time mask
754 # If blank was predicted even once, now or in the past,
755 # Force the current predicted label to also be blank
756                         # This ensures that blanks propagate across all timesteps
757                         # once they have occurred (normally stopping condition of sample level loop).
758 for kidx, ki in enumerate(k):
759 if blank_mask[kidx] == 0:
760 hypotheses[kidx].y_sequence.append(ki)
761 hypotheses[kidx].timestep.append(time_idx)
762 hypotheses[kidx].score += float(v[kidx])
763 symbols_added += 1
764
765 # If preserving alignments, convert the current Uj alignments into a torch.Tensor
766 # Then preserve U at current timestep Ti
767 # Finally, forward the timestep history to Ti+1 for that sample
768 # All of this should only be done iff the current time index <= sample-level AM length.
769 # Otherwise ignore and move to next sample / next timestep.
770 if self.preserve_alignments:
771
772 # convert Ti-th logits into a torch array
773 for batch_idx in range(batchsize):
774
775 # this checks if current timestep <= sample-level AM length
776 # If current timestep > sample-level AM length, no alignments will be added
777 # Therefore the list of Uj alignments is empty here.
778 if len(hypotheses[batch_idx].alignments[-1]) > 0:
779 hypotheses[batch_idx].alignments.append([]) # blank buffer for next timestep
780
781 # Do the same if preserving per-frame confidence
782 if self.preserve_frame_confidence:
783
784 for batch_idx in range(batchsize):
785 if len(hypotheses[batch_idx].frame_confidence[-1]) > 0:
786 hypotheses[batch_idx].frame_confidence.append([]) # blank buffer for next timestep
787
788 # Remove trailing empty list of alignments at T_{am-len} x Uj
789 if self.preserve_alignments:
790 for batch_idx in range(batchsize):
791 if len(hypotheses[batch_idx].alignments[-1]) == 0:
792 del hypotheses[batch_idx].alignments[-1]
793
794 # Remove trailing empty list of confidence scores at T_{am-len} x Uj
795 if self.preserve_frame_confidence:
796 for batch_idx in range(batchsize):
797 if len(hypotheses[batch_idx].frame_confidence[-1]) == 0:
798 del hypotheses[batch_idx].frame_confidence[-1]
799
800 # Preserve states
801 for batch_idx in range(batchsize):
802 hypotheses[batch_idx].dec_state = self.decoder.batch_select_state(hidden, batch_idx)
803
804 return hypotheses
805
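The batched loops above rely on a "sticky" blank mask: once a sample predicts blank at the current frame, or its encoder sequence is already exhausted, it stays masked for the remaining emission steps of that frame. A plain-Python sketch of that bookkeeping (the real code uses torch bool tensors and in-place `bitwise_or_`; `init_blank_mask` and `step_blank_mask` are illustrative names):

```python
def init_blank_mask(time_idx, out_len):
    """Mask samples whose encoder sequence is shorter than the current frame."""
    return [time_idx >= length for length in out_len]

def step_blank_mask(blank_mask, predictions, blank_index):
    """OR the current per-sample blank predictions into the running mask."""
    # Once True, an entry can never revert to False within this frame.
    return [m or (k == blank_index) for m, k in zip(blank_mask, predictions)]
```

The inner `while` loop terminates for the whole batch exactly when every entry of the mask is True.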
806 def _greedy_decode_masked(
807 self,
808 x: torch.Tensor,
809 out_len: torch.Tensor,
810 device: torch.device,
811 partial_hypotheses: Optional[List[rnnt_utils.Hypothesis]] = None,
812 ):
813 if partial_hypotheses is not None:
814             raise NotImplementedError("`partial_hypotheses` support is not implemented")
815
816 # x: [B, T, D]
817 # out_len: [B]
818 # device: torch.device
819
820 # Initialize state
821 batchsize = x.shape[0]
822 hypotheses = [
823 rnnt_utils.Hypothesis(score=0.0, y_sequence=[], timestep=[], dec_state=None) for _ in range(batchsize)
824 ]
825
826 # Initialize Hidden state matrix (shared by entire batch)
827 hidden = None
828
829         # If alignments need to be preserved, register a dangling list to hold the values
830 if self.preserve_alignments:
831 # alignments is a 3-dimensional dangling list representing B x T x U
832 for hyp in hypotheses:
833 hyp.alignments = [[]]
836
837         # If confidence scores need to be preserved, register a dangling list to hold the values
838 if self.preserve_frame_confidence:
839 # frame_confidence is a 3-dimensional dangling list representing B x T x U
840 for hyp in hypotheses:
841 hyp.frame_confidence = [[]]
842
843 # Last Label buffer + Last Label without blank buffer
844 # batch level equivalent of the last_label
845 last_label = torch.full([batchsize, 1], fill_value=self._blank_index, dtype=torch.long, device=device)
846 last_label_without_blank = last_label.clone()
847
848 # Mask buffers
849 blank_mask = torch.full([batchsize], fill_value=0, dtype=torch.bool, device=device)
850
851 # Get max sequence length
852 max_out_len = out_len.max()
853
854 with torch.inference_mode():
855 for time_idx in range(max_out_len):
856 f = x.narrow(dim=1, start=time_idx, length=1) # [B, 1, D]
857
858 # Prepare t timestamp batch variables
859 not_blank = True
860 symbols_added = 0
861
862 # Reset blank mask
863 blank_mask.mul_(False)
864
865 # Update blank mask with time mask
866 # Batch: [B, T, D], but Bi may have seq len < max(seq_lens_in_batch)
867                 # Forcibly mask with "blank" tokens, for all samples where current time step T > seq_len
868 blank_mask = time_idx >= out_len
869
870 # Start inner loop
871 while not_blank and (self.max_symbols is None or symbols_added < self.max_symbols):
872 # Batch prediction and joint network steps
873 # If very first prediction step, submit SOS tag (blank) to pred_step.
874 # This feeds a zero tensor as input to AbstractRNNTDecoder to prime the state
875 if time_idx == 0 and symbols_added == 0 and hidden is None:
876 g, hidden_prime = self._pred_step(self._SOS, hidden, batch_size=batchsize)
877 else:
878 # Set a dummy label for the blank value
879 # This value will be overwritten by "blank" again the last label update below
880 # This is done as vocabulary of prediction network does not contain "blank" token of RNNT
881 last_label_without_blank_mask = last_label == self._blank_index
882 last_label_without_blank[last_label_without_blank_mask] = 0 # temp change of label
883 last_label_without_blank[~last_label_without_blank_mask] = last_label[
884 ~last_label_without_blank_mask
885 ]
886
887 # Perform batch step prediction of decoder, getting new states and scores ("g")
888 g, hidden_prime = self._pred_step(last_label_without_blank, hidden, batch_size=batchsize)
889
890 # Batched joint step - Output = [B, V + 1]
891 # If preserving per-frame confidence, log_normalize must be true
892 logp = self._joint_step(f, g, log_normalize=True if self.preserve_frame_confidence else None)[
893 :, 0, 0, :
894 ]
895
896 if logp.dtype != torch.float32:
897 logp = logp.float()
898
899 # Get index k, of max prob for batch
900 v, k = logp.max(1)
901 del g
902
903 # Update blank mask with current predicted blanks
904 # This is accumulating blanks over all time steps T and all target steps min(max_symbols, U)
905 k_is_blank = k == self._blank_index
906 blank_mask.bitwise_or_(k_is_blank)
907 all_blanks = torch.all(blank_mask)
908
909 # If preserving alignments, check if sequence length of sample has been reached
910 # before adding alignment
911 if self.preserve_alignments:
912 # Insert logprobs into last timestep per sample
913 logp_vals = logp.to('cpu')
914 logp_ids = logp_vals.max(1)[1]
915 for batch_idx, is_blank in enumerate(blank_mask):
916 # we only want to update non-blanks, unless we are at the last step in the loop where
917 # all elements produced blanks, otherwise there will be duplicate predictions
918 # saved in alignments
919 if time_idx < out_len[batch_idx] and (all_blanks or not is_blank):
920 hypotheses[batch_idx].alignments[-1].append(
921 (logp_vals[batch_idx], logp_ids[batch_idx])
922 )
923
924 del logp_vals
925
926 # If preserving per-frame confidence, check if sequence length of sample has been reached
927 # before adding confidence scores
928 if self.preserve_frame_confidence:
929 # Insert probabilities into last timestep per sample
930 confidence = self._get_confidence(logp)
931 for batch_idx, is_blank in enumerate(blank_mask):
932 if time_idx < out_len[batch_idx] and (all_blanks or not is_blank):
933 hypotheses[batch_idx].frame_confidence[-1].append(confidence[batch_idx])
934 del logp
935
936 # If all samples predict / have predicted prior blanks, exit loop early
937 # This is equivalent to if single sample predicted k
938                     if all_blanks:
939 not_blank = False
940 else:
941 # Collect batch indices where blanks occurred now/past
942 blank_indices = (blank_mask == 1).nonzero(as_tuple=False)
943
944 # Recover prior state for all samples which predicted blank now/past
945 if hidden is not None:
946 # LSTM has 2 states
947 hidden_prime = self.decoder.batch_copy_states(hidden_prime, hidden, blank_indices)
948
949 elif len(blank_indices) > 0 and hidden is None:
950 # Reset state if there were some blank and other non-blank predictions in batch
951 # Original state is filled with zeros so we just multiply
952 # LSTM has 2 states
953 hidden_prime = self.decoder.batch_copy_states(hidden_prime, None, blank_indices, value=0.0)
954
955 # Recover prior predicted label for all samples which predicted blank now/past
956 k[blank_indices] = last_label[blank_indices, 0]
957
958 # Update new label and hidden state for next iteration
959 last_label = k.view(-1, 1)
960 hidden = hidden_prime
961
962 # Update predicted labels, accounting for time mask
963 # If blank was predicted even once, now or in the past,
964 # Force the current predicted label to also be blank
965                         # This ensures that blanks propagate across all timesteps
966                         # once they have occurred (normally stopping condition of sample level loop).
967 for kidx, ki in enumerate(k):
968 if blank_mask[kidx] == 0:
969 hypotheses[kidx].y_sequence.append(ki)
970 hypotheses[kidx].timestep.append(time_idx)
971 hypotheses[kidx].score += float(v[kidx])
972
973 symbols_added += 1
974
975 # If preserving alignments, convert the current Uj alignments into a torch.Tensor
976 # Then preserve U at current timestep Ti
977 # Finally, forward the timestep history to Ti+1 for that sample
978 # All of this should only be done iff the current time index <= sample-level AM length.
979 # Otherwise ignore and move to next sample / next timestep.
980 if self.preserve_alignments:
981
982 # convert Ti-th logits into a torch array
983 for batch_idx in range(batchsize):
984
985 # this checks if current timestep <= sample-level AM length
986 # If current timestep > sample-level AM length, no alignments will be added
987 # Therefore the list of Uj alignments is empty here.
988 if len(hypotheses[batch_idx].alignments[-1]) > 0:
989 hypotheses[batch_idx].alignments.append([]) # blank buffer for next timestep
990
991 # Do the same if preserving per-frame confidence
992 if self.preserve_frame_confidence:
993
994 for batch_idx in range(batchsize):
995 if len(hypotheses[batch_idx].frame_confidence[-1]) > 0:
996 hypotheses[batch_idx].frame_confidence.append([]) # blank buffer for next timestep
997
998 # Remove trailing empty list of alignments at T_{am-len} x Uj
999 if self.preserve_alignments:
1000 for batch_idx in range(batchsize):
1001 if len(hypotheses[batch_idx].alignments[-1]) == 0:
1002 del hypotheses[batch_idx].alignments[-1]
1003
1004 # Remove trailing empty list of confidence scores at T_{am-len} x Uj
1005 if self.preserve_frame_confidence:
1006 for batch_idx in range(batchsize):
1007 if len(hypotheses[batch_idx].frame_confidence[-1]) == 0:
1008 del hypotheses[batch_idx].frame_confidence[-1]
1009
1010 # Preserve states
1011 for batch_idx in range(batchsize):
1012 hypotheses[batch_idx].dec_state = self.decoder.batch_select_state(hidden, batch_idx)
1013
1014 return hypotheses
1015
1016
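`_greedy_decode_masked()` exists because, without `blank_as_pad` support, the prediction network's embedding table has no row for the blank id, so blank labels must be swapped for a dummy in-vocabulary id before `_pred_step`. A plain-Python sketch of that substitution (`mask_blank_labels` is an illustrative name; the real code does this with boolean tensor indexing on `last_label_without_blank`):

```python
def mask_blank_labels(last_label, blank_index):
    # Replace blank ids with a dummy in-vocabulary id (0) before feeding the
    # prediction network; the dummy is overwritten by blank again in the next
    # last-label update, so it never leaks into the hypothesis.
    return [0 if k == blank_index else k for k in last_label]
```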
1017 class ExportedModelGreedyBatchedRNNTInfer:
1018 def __init__(self, encoder_model: str, decoder_joint_model: str, max_symbols_per_step: Optional[int] = None):
1019 self.encoder_model_path = encoder_model
1020 self.decoder_joint_model_path = decoder_joint_model
1021 self.max_symbols_per_step = max_symbols_per_step
1022
1023 # Will be populated at runtime
1024 self._blank_index = None
1025
1026 def __call__(self, audio_signal: torch.Tensor, length: torch.Tensor):
1027 """Returns a list of hypotheses given an input batch of the encoder hidden embedding.
1028 Output token is generated auto-regressively.
1029
1030         Args:
1031             audio_signal: A tensor of size (batch, features, timesteps).
1032             length: A tensor of int representing the length of each input
1033                 sequence.
1034
1035 Returns:
1036 packed list containing batch number of sentences (Hypotheses).
1037 """
1038 with torch.no_grad():
1039 # Apply optional preprocessing
1040 encoder_output, encoded_lengths = self.run_encoder(audio_signal=audio_signal, length=length)
1041
1042 if torch.is_tensor(encoder_output):
1043 encoder_output = encoder_output.transpose(1, 2)
1044 else:
1045 encoder_output = encoder_output.transpose([0, 2, 1]) # (B, T, D)
1046 logitlen = encoded_lengths
1047
1048 inseq = encoder_output # [B, T, D]
1049 hypotheses, timestamps = self._greedy_decode(inseq, logitlen)
1050
1051 # Pack the hypotheses results
1052 packed_result = [rnnt_utils.Hypothesis(score=-1.0, y_sequence=[]) for _ in range(len(hypotheses))]
1053 for i in range(len(packed_result)):
1054 packed_result[i].y_sequence = torch.tensor(hypotheses[i], dtype=torch.long)
1055 packed_result[i].length = timestamps[i]
1056
1057 del hypotheses
1058
1059 return packed_result
1060
1061 def _greedy_decode(self, x, out_len):
1062 # x: [B, T, D]
1063 # out_len: [B]
1064
1065 # Initialize state
1066 batchsize = x.shape[0]
1067 hidden = self._get_initial_states(batchsize)
1068 target_lengths = torch.ones(batchsize, dtype=torch.int32)
1069
1070 # Output string buffer
1071 label = [[] for _ in range(batchsize)]
1072 timesteps = [[] for _ in range(batchsize)]
1073
1074 # Last Label buffer + Last Label without blank buffer
1075 # batch level equivalent of the last_label
1076 last_label = torch.full([batchsize, 1], fill_value=self._blank_index, dtype=torch.long).numpy()
1077 if torch.is_tensor(x):
1078 last_label = torch.from_numpy(last_label).to(self.device)
1079
1080 # Mask buffers
1081 blank_mask = torch.full([batchsize], fill_value=0, dtype=torch.bool).numpy()
1082
1083 # Get max sequence length
1084 max_out_len = out_len.max()
1085 for time_idx in range(max_out_len):
1086 f = x[:, time_idx : time_idx + 1, :] # [B, 1, D]
1087
1088 if torch.is_tensor(f):
1089 f = f.transpose(1, 2)
1090 else:
1091 f = f.transpose([0, 2, 1])
1092
1093 # Prepare t timestamp batch variables
1094 not_blank = True
1095 symbols_added = 0
1096
1097 # Reset blank mask
1098 blank_mask *= False
1099
1100 # Update blank mask with time mask
1101 # Batch: [B, T, D], but Bi may have seq len < max(seq_lens_in_batch)
1102             # Forcibly mask with "blank" tokens, for all samples where current time step T > seq_len
1103 blank_mask = time_idx >= out_len
1104 # Start inner loop
1105 while not_blank and (self.max_symbols_per_step is None or symbols_added < self.max_symbols_per_step):
1106
1107 # Batch prediction and joint network steps
1108 # If very first prediction step, submit SOS tag (blank) to pred_step.
1109 # This feeds a zero tensor as input to AbstractRNNTDecoder to prime the state
1110 if time_idx == 0 and symbols_added == 0:
1111 g = torch.tensor([self._blank_index] * batchsize, dtype=torch.int32).view(-1, 1)
1112 else:
1113 if torch.is_tensor(last_label):
1114 g = last_label.type(torch.int32)
1115 else:
1116 g = last_label.astype(np.int32)
1117
1118 # Batched joint step - Output = [B, V + 1]
1119 joint_out, hidden_prime = self.run_decoder_joint(f, g, target_lengths, *hidden)
1120 logp, pred_lengths = joint_out
1121 logp = logp[:, 0, 0, :]
1122
1123 # Get index k, of max prob for batch
1124 if torch.is_tensor(logp):
1125 v, k = logp.max(1)
1126 else:
1127 k = np.argmax(logp, axis=1).astype(np.int32)
1128
1129 # Update blank mask with current predicted blanks
1130 # This is accumulating blanks over all time steps T and all target steps min(max_symbols, U)
1131 k_is_blank = k == self._blank_index
1132 blank_mask |= k_is_blank
1133
1134 del k_is_blank
1135 del logp
1136
1137 # If all samples predict / have predicted prior blanks, exit loop early
1138 # This is equivalent to if single sample predicted k
1139 if blank_mask.all():
1140 not_blank = False
1141
1142 else:
1143 # Collect batch indices where blanks occurred now/past
1144 if torch.is_tensor(blank_mask):
1145 blank_indices = (blank_mask == 1).nonzero(as_tuple=False)
1146 else:
1147 blank_indices = blank_mask.astype(np.int32).nonzero()
1148
1149 if type(blank_indices) in (list, tuple):
1150 blank_indices = blank_indices[0]
1151
1152 # Recover prior state for all samples which predicted blank now/past
1153 if hidden is not None:
1154 # LSTM has 2 states
1155 for state_id in range(len(hidden)):
1156 hidden_prime[state_id][:, blank_indices, :] = hidden[state_id][:, blank_indices, :]
1157
1158 elif len(blank_indices) > 0 and hidden is None:
1159 # Reset state if there were some blank and other non-blank predictions in batch
1160 # Original state is filled with zeros so we just multiply
1161 # LSTM has 2 states
1162 for state_id in range(len(hidden_prime)):
1163 hidden_prime[state_id][:, blank_indices, :] *= 0.0
1164
1165 # Recover prior predicted label for all samples which predicted blank now/past
1166 k[blank_indices] = last_label[blank_indices, 0]
1167
1168 # Update new label and hidden state for next iteration
1169 if torch.is_tensor(k):
1170 last_label = k.clone().reshape(-1, 1)
1171 else:
1172 last_label = k.copy().reshape(-1, 1)
1173 hidden = hidden_prime
1174
1175 # Update predicted labels, accounting for time mask
1176 # If blank was predicted even once, now or in the past,
1177 # Force the current predicted label to also be blank
1178                     # This ensures that blanks propagate across all timesteps
1179                     # once they have occurred (normally stopping condition of sample level loop).
1180 for kidx, ki in enumerate(k):
1181 if blank_mask[kidx] == 0:
1182 label[kidx].append(ki)
1183 timesteps[kidx].append(time_idx)
1184
1185 symbols_added += 1
1186
1187 return label, timesteps
1188
1189 def _setup_blank_index(self):
1190 raise NotImplementedError()
1191
1192 def run_encoder(self, audio_signal, length):
1193 raise NotImplementedError()
1194
1195 def run_decoder_joint(self, enc_logits, targets, target_length, *states):
1196 raise NotImplementedError()
1197
1198 def _get_initial_states(self, batchsize):
1199 raise NotImplementedError()
1200
1201
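The exported-model subclasses return a flat output list from the decoder+joint graph and peel the trailing recurrent states off it. A small sketch of that unpacking (`split_outputs_and_states` is an illustrative name; it mirrors the `num_states` logic in `run_decoder_joint`):

```python
def split_outputs_and_states(outputs, num_states):
    # Exported decoder+joint graphs return [logits..., state_0, ..., state_{n-1}];
    # the last `num_states` entries are the new recurrent states.
    if num_states > 0:
        return outputs[:-num_states], outputs[-num_states:]
    return outputs, None
```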
1202 class ONNXGreedyBatchedRNNTInfer(ExportedModelGreedyBatchedRNNTInfer):
1203 def __init__(self, encoder_model: str, decoder_joint_model: str, max_symbols_per_step: Optional[int] = 10):
1204 super().__init__(
1205 encoder_model=encoder_model,
1206 decoder_joint_model=decoder_joint_model,
1207 max_symbols_per_step=max_symbols_per_step,
1208 )
1209
1210 try:
1211 import onnx
1212 import onnxruntime
1213 except (ModuleNotFoundError, ImportError):
1214             raise ImportError("`onnx` or `onnxruntime` could not be imported, please install the libraries.")
1215
1216 if torch.cuda.is_available():
1217 # Try to use onnxruntime-gpu
1218 providers = ['TensorrtExecutionProvider', 'CUDAExecutionProvider']
1219 else:
1220 # Fall back to CPU and onnxruntime-cpu
1221 providers = ['CPUExecutionProvider']
1222
1223 onnx_session_opt = onnxruntime.SessionOptions()
1224 onnx_session_opt.graph_optimization_level = onnxruntime.GraphOptimizationLevel.ORT_ENABLE_ALL
1225
1226 onnx_model = onnx.load(self.encoder_model_path)
1227 onnx.checker.check_model(onnx_model, full_check=True)
1228 self.encoder_model = onnx_model
1229         self.encoder = onnxruntime.InferenceSession(
1230             onnx_model.SerializeToString(), sess_options=onnx_session_opt, providers=providers
1231         )
1232
1233 onnx_model = onnx.load(self.decoder_joint_model_path)
1234 onnx.checker.check_model(onnx_model, full_check=True)
1235 self.decoder_joint_model = onnx_model
1236         self.decoder_joint = onnxruntime.InferenceSession(
1237             onnx_model.SerializeToString(), sess_options=onnx_session_opt, providers=providers
1238         )
1239
1240         logging.info("Successfully loaded encoder, decoder and joint ONNX models!")
1241
1242 # Will be populated at runtime
1243 self._blank_index = None
1244 self.max_symbols_per_step = max_symbols_per_step
1245
1246 self._setup_encoder_input_output_keys()
1247 self._setup_decoder_joint_input_output_keys()
1248 self._setup_blank_index()
1249
1250 def _setup_encoder_input_output_keys(self):
1251 self.encoder_inputs = list(self.encoder_model.graph.input)
1252 self.encoder_outputs = list(self.encoder_model.graph.output)
1253
1254 def _setup_decoder_joint_input_output_keys(self):
1255 self.decoder_joint_inputs = list(self.decoder_joint_model.graph.input)
1256 self.decoder_joint_outputs = list(self.decoder_joint_model.graph.output)
1257
1258 def _setup_blank_index(self):
1259 # ASSUME: Single input with no time length information
1260 dynamic_dim = 257
1261 shapes = self.encoder_inputs[0].type.tensor_type.shape.dim
1262 ip_shape = []
1263 for shape in shapes:
1264 if hasattr(shape, 'dim_param') and 'dynamic' in shape.dim_param:
1265 ip_shape.append(dynamic_dim) # replace dynamic axes with constant
1266 else:
1267 ip_shape.append(int(shape.dim_value))
1268
1269 enc_logits, encoded_length = self.run_encoder(
1270 audio_signal=torch.randn(*ip_shape), length=torch.randint(0, 1, size=(dynamic_dim,))
1271 )
1272
1273 # prepare states
1274 states = self._get_initial_states(batchsize=dynamic_dim)
1275
1276 # run decoder 1 step
1277 joint_out, states = self.run_decoder_joint(enc_logits, None, None, *states)
1278 log_probs, lengths = joint_out
1279
1280 self._blank_index = log_probs.shape[-1] - 1 # last token of vocab size is blank token
1281 logging.info(
1282 f"Enc-Dec-Joint step was evaluated, blank token id = {self._blank_index}; vocab size = {log_probs.shape[-1]}"
1283 )
1284
1285 def run_encoder(self, audio_signal, length):
1286 if hasattr(audio_signal, 'cpu'):
1287 audio_signal = audio_signal.cpu().numpy()
1288
1289 if hasattr(length, 'cpu'):
1290 length = length.cpu().numpy()
1291
1292 ip = {
1293 self.encoder_inputs[0].name: audio_signal,
1294 self.encoder_inputs[1].name: length,
1295 }
1296 enc_out = self.encoder.run(None, ip)
1297 enc_out, encoded_length = enc_out # ASSUME: single output
1298 return enc_out, encoded_length
1299
1300 def run_decoder_joint(self, enc_logits, targets, target_length, *states):
1301 # ASSUME: Decoder is RNN Transducer
1302 if targets is None:
1303 targets = torch.zeros(enc_logits.shape[0], 1, dtype=torch.int32)
1304 target_length = torch.ones(enc_logits.shape[0], dtype=torch.int32)
1305
1306 if hasattr(targets, 'cpu'):
1307 targets = targets.cpu().numpy()
1308
1309 if hasattr(target_length, 'cpu'):
1310 target_length = target_length.cpu().numpy()
1311
1312 ip = {
1313 self.decoder_joint_inputs[0].name: enc_logits,
1314 self.decoder_joint_inputs[1].name: targets,
1315 self.decoder_joint_inputs[2].name: target_length,
1316 }
1317
1318 num_states = 0
1319 if states is not None and len(states) > 0:
1320 num_states = len(states)
1321 for idx, state in enumerate(states):
1322 if hasattr(state, 'cpu'):
1323 state = state.cpu().numpy()
1324
1325 ip[self.decoder_joint_inputs[len(ip)].name] = state
1326
1327 dec_out = self.decoder_joint.run(None, ip)
1328
1329 # unpack dec output
1330 if num_states > 0:
1331 new_states = dec_out[-num_states:]
1332 dec_out = dec_out[:-num_states]
1333 else:
1334 new_states = None
1335
1336 return dec_out, new_states
1337
1338 def _get_initial_states(self, batchsize):
1339 # ASSUME: LSTM STATES of shape (layers, batchsize, dim)
1340 input_state_nodes = [ip for ip in self.decoder_joint_inputs if 'state' in ip.name]
1341 num_states = len(input_state_nodes)
1342 if num_states == 0:
1343 return
1344
1345 input_states = []
1346 for state_id in range(num_states):
1347 node = input_state_nodes[state_id]
1348 ip_shape = []
1349 for shape_idx, shape in enumerate(node.type.tensor_type.shape.dim):
1350 if hasattr(shape, 'dim_param') and 'dynamic' in shape.dim_param:
1351 ip_shape.append(batchsize) # replace dynamic axes with constant
1352 else:
1353 ip_shape.append(int(shape.dim_value))
1354
1355 input_states.append(torch.zeros(*ip_shape))
1356
1357 return input_states
1358
1359
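`_setup_blank_index()` and `_get_initial_states()` both walk the ONNX graph's shape protos and substitute a concrete value for any axis marked dynamic. A hedged plain-Python sketch of that substitution, with (`kind`, `value`) pairs standing in for `TensorShapeProto.Dimension` objects (`concretize_shape` is an illustrative name):

```python
def concretize_shape(dim_specs, fill_value):
    # Any dim marked dynamic (dim_param set on the proto) is replaced by a
    # concrete value such as the batch size; fixed dims keep their dim_value.
    shape = []
    for kind, value in dim_specs:
        shape.append(fill_value if kind == 'dynamic' else int(value))
    return shape
```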
1360 class TorchscriptGreedyBatchedRNNTInfer(ExportedModelGreedyBatchedRNNTInfer):
1361 def __init__(
1362 self,
1363 encoder_model: str,
1364 decoder_joint_model: str,
1365 cfg: DictConfig,
1366 device: str,
1367 max_symbols_per_step: Optional[int] = 10,
1368 ):
1369 super().__init__(
1370 encoder_model=encoder_model,
1371 decoder_joint_model=decoder_joint_model,
1372 max_symbols_per_step=max_symbols_per_step,
1373 )
1374
1375 self.cfg = cfg
1376 self.device = device
1377
1378 self.encoder = torch.jit.load(self.encoder_model_path, map_location=self.device)
1379 self.decoder_joint = torch.jit.load(self.decoder_joint_model_path, map_location=self.device)
1380
1381         logging.info("Successfully loaded encoder, decoder and joint TorchScript models!")
1382
1383 # Will be populated at runtime
1384 self._blank_index = None
1385 self.max_symbols_per_step = max_symbols_per_step
1386
1387 self._setup_encoder_input_keys()
1388 self._setup_decoder_joint_input_keys()
1389 self._setup_blank_index()
1390
1391 def _setup_encoder_input_keys(self):
1392 arguments = self.encoder.forward.schema.arguments[1:]
1393 self.encoder_inputs = [arg for arg in arguments]
1394
1395 def _setup_decoder_joint_input_keys(self):
1396 arguments = self.decoder_joint.forward.schema.arguments[1:]
1397 self.decoder_joint_inputs = [arg for arg in arguments]
1398
1399 def _setup_blank_index(self):
1400 self._blank_index = len(self.cfg.joint.vocabulary)
1401
1402 logging.info(f"Blank token id = {self._blank_index}; vocab size = {len(self.cfg.joint.vocabulary) + 1}")
1403
1404 def run_encoder(self, audio_signal, length):
1405 enc_out = self.encoder(audio_signal, length)
1406 enc_out, encoded_length = enc_out # ASSUME: single output
1407 return enc_out, encoded_length
1408
1409 def run_decoder_joint(self, enc_logits, targets, target_length, *states):
1410 # ASSUME: Decoder is RNN Transducer
1411 if targets is None:
1412 targets = torch.zeros(enc_logits.shape[0], 1, dtype=torch.int32, device=enc_logits.device)
1413 target_length = torch.ones(enc_logits.shape[0], dtype=torch.int32, device=enc_logits.device)
1414
1415 num_states = 0
1416 if states is not None and len(states) > 0:
1417 num_states = len(states)
1418
1419 dec_out = self.decoder_joint(enc_logits, targets, target_length, *states)
1420
1421 # unpack dec output
1422 if num_states > 0:
1423 new_states = dec_out[-num_states:]
1424 dec_out = dec_out[:-num_states]
1425 else:
1426 new_states = None
1427
1428 return dec_out, new_states
1429
1430 def _get_initial_states(self, batchsize):
1431 # ASSUME: LSTM STATES of shape (layers, batchsize, dim)
1432 input_state_nodes = [ip for ip in self.decoder_joint_inputs if 'state' in ip.name]
1433 num_states = len(input_state_nodes)
1434 if num_states == 0:
1435 return
1436
1437 input_states = []
1438 for state_id in range(num_states):
1439 # Hardcode shape size for LSTM (1 is for num layers in LSTM, which is flattened for export)
1440 ip_shape = [1, batchsize, self.cfg.model_defaults.pred_hidden]
1441 input_states.append(torch.zeros(*ip_shape, device=self.device))
1442
1443 return input_states
1444
1445
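The state unpacking performed in `run_decoder_joint` above can be sketched in isolation. This is a hypothetical standalone helper (the name `split_outputs_and_states` is not part of the class): the scripted decoder-joint returns a flat tuple whose trailing entries are the recurrent states.

```python
def split_outputs_and_states(outputs, num_states):
    # Mirrors run_decoder_joint's unpacking: the trailing num_states entries of
    # the flat output tuple are the recurrent (e.g. LSTM) states; the rest is
    # the decoder/joint output itself.
    if num_states > 0:
        return outputs[:-num_states], list(outputs[-num_states:])
    return outputs, None

dec_out, states = split_outputs_and_states(("logits", "h0", "c0"), num_states=2)
print(dec_out, states)  # -> ('logits',) ['h0', 'c0']
```

When `num_states` is 0 (a stateless decoder), the outputs pass through unchanged and the states are `None`, matching the `new_states = None` branch above.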
1446 class GreedyMultiblankRNNTInfer(GreedyRNNTInfer):
1447 """A greedy transducer decoder for multi-blank RNN-T.
1448
1449 Sequence level greedy decoding, performed auto-regressively.
1450
1451 Args:
1452 decoder_model: rnnt_utils.AbstractRNNTDecoder implementation.
1453 joint_model: rnnt_utils.AbstractRNNTJoint implementation.
1454 blank_index: int index of the blank token. Must be len(vocabulary) for multi-blank RNNTs.
1455 big_blank_durations: a list containing durations for big blanks the model supports.
1456 max_symbols_per_step: Optional int. The maximum number of symbols that can be added
1457 to a sequence in a single time step; if set to None then there is
1458 no limit.
1459 preserve_alignments: Bool flag which preserves the history of alignments generated during
1460 greedy decoding (sample / batched). When set to true, the Hypothesis will contain
1461 the non-null value for `alignments` in it. Here, `alignments` is a List of List of
1462 Tuple(Tensor (of length V + 1 + num-big-blanks), Tensor(scalar, label after argmax)).
1463 The length of the list corresponds to the Acoustic Length (T).
1464 Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more targets from a vocabulary.
1465 U is the number of target tokens for the current timestep Ti.
1466 preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores generated
1467 during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
1468 the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of List of floats.
1469 The length of the list corresponds to the Acoustic Length (T).
1470 Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more confidence scores.
1471 U is the number of target tokens for the current timestep Ti.
1472 confidence_measure_cfg: A dict-like object which contains the measure name and settings to compute per-frame
1473 confidence scores.
1474
1475 name: The measure name (str).
1476 Supported values:
1477 - 'max_prob' for using the maximum token probability as a confidence.
1478 - 'entropy' for using a normalized entropy of a log-likelihood vector.
1479
1480 entropy_type: Which type of entropy to use (str). Used if confidence_measure_cfg.name is set to `entropy`.
1481 Supported values:
1482 - 'gibbs' for the (standard) Gibbs entropy. If the alpha (ฮฑ) is provided,
1483 the formula is the following: H_ฮฑ = -sum_i((p^ฮฑ_i)*log(p^ฮฑ_i)).
1484 Note that for this entropy, the alpha should comply the following inequality:
1485 (log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= ฮฑ <= (1+log(V-1))/log(V-1)
1486 where V is the model vocabulary size.
1487 - 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
1488 Tsallis entropy formula is the following: H_ฮฑ = 1/(ฮฑ-1)*(1-sum_i(p^ฮฑ_i)),
1489 where ฮฑ is a parameter. When ฮฑ == 1, it works like the Gibbs entropy.
1490 More: https://en.wikipedia.org/wiki/Tsallis_entropy
1491 - 'renyi' for the Rรฉnyi entropy.
1492 Rรฉnyi entropy formula is the following: H_ฮฑ = 1/(1-ฮฑ)*log_2(sum_i(p^ฮฑ_i)),
1493 where ฮฑ is a parameter. When ฮฑ == 1, it works like the Gibbs entropy.
1494 More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
1495
1496 alpha: Power scale for logsoftmax (ฮฑ for entropies). Here we restrict it to be > 0.
1497 When the alpha equals one, scaling is not applied to 'max_prob',
1498 and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
1499
1500 entropy_norm: A mapping of the entropy value to the interval [0,1].
1501 Supported values:
1502 - 'lin' for using the linear mapping.
1503 - 'exp' for using exponential mapping with linear shift.
1504 """
1505
1506 def __init__(
1507 self,
1508 decoder_model: rnnt_abstract.AbstractRNNTDecoder,
1509 joint_model: rnnt_abstract.AbstractRNNTJoint,
1510 blank_index: int,
1511 big_blank_durations: list,
1512 max_symbols_per_step: Optional[int] = None,
1513 preserve_alignments: bool = False,
1514 preserve_frame_confidence: bool = False,
1515 confidence_measure_cfg: Optional[DictConfig] = None,
1516 ):
1517 super().__init__(
1518 decoder_model=decoder_model,
1519 joint_model=joint_model,
1520 blank_index=blank_index,
1521 max_symbols_per_step=max_symbols_per_step,
1522 preserve_alignments=preserve_alignments,
1523 preserve_frame_confidence=preserve_frame_confidence,
1524 confidence_measure_cfg=confidence_measure_cfg,
1525 )
1526 self.big_blank_durations = big_blank_durations
1527 self._SOS = blank_index - len(big_blank_durations)
1528
1529 @torch.no_grad()
1530 def _greedy_decode(
1531 self, x: torch.Tensor, out_len: torch.Tensor, partial_hypotheses: Optional[rnnt_utils.Hypothesis] = None
1532 ):
1533 # x: [T, 1, D]
1534 # out_len: [seq_len]
1535
1536 # Initialize blank state and empty label set in Hypothesis
1537 hypothesis = rnnt_utils.Hypothesis(score=0.0, y_sequence=[], dec_state=None, timestep=[], last_token=None)
1538
1539 if partial_hypotheses is not None:
1540 hypothesis.last_token = partial_hypotheses.last_token
1541 hypothesis.y_sequence = (
1542 partial_hypotheses.y_sequence.cpu().tolist()
1543 if isinstance(partial_hypotheses.y_sequence, torch.Tensor)
1544 else partial_hypotheses.y_sequence
1545 )
1546 if partial_hypotheses.dec_state is not None:
1547 hypothesis.dec_state = self.decoder.batch_concat_states([partial_hypotheses.dec_state])
1548 hypothesis.dec_state = _states_to_device(hypothesis.dec_state, x.device)
1549
1550 if self.preserve_alignments:
1551 # Alignments is a 2-dimensional dangling list representing T x U
1552 hypothesis.alignments = [[]]
1553
1554 if self.preserve_frame_confidence:
1555 hypothesis.frame_confidence = [[]]
1556
1557 # if this variable is > 1, it means the last emission was a big-blank and we need to skip frames.
1558 big_blank_duration = 1
1559
1560 # For timestep t in X_t
1561 for time_idx in range(out_len):
1562 if big_blank_duration > 1:
1563 # skip frames until big_blank_duration == 1.
1564 big_blank_duration -= 1
1565 continue
1566 # Extract encoder embedding at timestep t
1567 # f = x[time_idx, :, :].unsqueeze(0) # [1, 1, D]
1568 f = x.narrow(dim=0, start=time_idx, length=1)
1569
1570 # Setup exit flags and counter
1571 not_blank = True
1572 symbols_added = 0
1573
1574            # While blank is not predicted and we don't run out of max symbols per timestep
1575 while not_blank and (self.max_symbols is None or symbols_added < self.max_symbols):
1576 # In the first timestep, we initialize the network with RNNT Blank
1577 # In later timesteps, we provide previous predicted label as input.
1578 if hypothesis.last_token is None and hypothesis.dec_state is None:
1579 last_label = self._SOS
1580 else:
1581 last_label = label_collate([[hypothesis.last_token]])
1582
1583 # Perform prediction network and joint network steps.
1584 g, hidden_prime = self._pred_step(last_label, hypothesis.dec_state)
1585 # If preserving per-frame confidence, log_normalize must be true
1586 logp = self._joint_step(f, g, log_normalize=True if self.preserve_frame_confidence else None)[
1587 0, 0, 0, :
1588 ]
1589
1590 del g
1591
1592                # torch.max(0) op doesn't exist for FP16.
1593 if logp.dtype != torch.float32:
1594 logp = logp.float()
1595
1596 # get index k, of max prob
1597 v, k = logp.max(0)
1598                k = k.item()  # k is the label at timestep t_s in the inner loop, s >= 0.
1599
1600 # Note, we have non-blanks in the vocab first, followed by big blanks, and standard blank at last.
1601 # here we check if it's a big blank and if yes, set the duration variable.
1602 if k >= self._blank_index - len(self.big_blank_durations) and k < self._blank_index:
1603 big_blank_duration = self.big_blank_durations[self._blank_index - k - 1]
1604
1605 if self.preserve_alignments:
1606 # insert logprobs into last timestep
1607 hypothesis.alignments[-1].append((logp.to('cpu'), torch.tensor(k, dtype=torch.int32)))
1608
1609 if self.preserve_frame_confidence:
1610 # insert confidence into last timestep
1611 hypothesis.frame_confidence[-1].append(self._get_confidence(logp))
1612
1613 del logp
1614
1615 # If any type of blank token is predicted, exit inner loop, move onto next timestep t
1616 if k >= self._blank_index - len(self.big_blank_durations):
1617 not_blank = False
1618 else:
1619 # Append token to label set, update RNN state.
1620 hypothesis.y_sequence.append(k)
1621 hypothesis.score += float(v)
1622 hypothesis.timestep.append(time_idx)
1623 hypothesis.dec_state = hidden_prime
1624 hypothesis.last_token = k
1625
1626 # Increment token counter.
1627 symbols_added += 1
1628
1629 if self.preserve_alignments:
1630                    # finished all emissions for this timestep
1631 hypothesis.alignments.append([]) # blank buffer for next timestep
1632
1633 if self.preserve_frame_confidence:
1634 hypothesis.frame_confidence.append([]) # blank buffer for next timestep
1635
1636 # Remove trailing empty list of Alignments
1637 if self.preserve_alignments:
1638 if len(hypothesis.alignments[-1]) == 0:
1639 del hypothesis.alignments[-1]
1640
1641 # Remove trailing empty list of per-frame confidence
1642 if self.preserve_frame_confidence:
1643 if len(hypothesis.frame_confidence[-1]) == 0:
1644 del hypothesis.frame_confidence[-1]
1645
1646 # Unpack the hidden states
1647 hypothesis.dec_state = self.decoder.batch_select_state(hypothesis.dec_state, 0)
1648
1649 return hypothesis
1650
1651
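As a concrete reading of the confidence settings documented in the docstring above, here is a minimal sketch of the Gibbs-entropy confidence with the 'lin' normalization, fixing alpha to 1 and assuming the linear mapping is `conf = 1 - H / H_max` (the helper name is hypothetical):

```python
import math

def gibbs_confidence(log_probs):
    # Gibbs entropy H = -sum_i p_i * log(p_i) over a log-probability vector,
    # linearly mapped to [0, 1]: ~1 for a one-hot (confident) distribution,
    # 0 for a uniform (maximally uncertain) one. (alpha fixed to 1 here.)
    probs = [math.exp(lp) for lp in log_probs]
    h = -sum(p * math.log(p) for p in probs if p > 0.0)
    h_max = math.log(len(probs))  # entropy of the uniform distribution
    return 1.0 - h / h_max

uniform = [math.log(0.25)] * 4
print(gibbs_confidence(uniform))  # -> 0.0 (up to float rounding)
```

The Tsallis and Rényi variants documented above differ only in how `h` is computed from `p_i ** alpha`; the normalization step is analogous.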
1652 class GreedyBatchedMultiblankRNNTInfer(GreedyBatchedRNNTInfer):
1653 """A batch level greedy transducer decoder.
1654 Batch level greedy decoding, performed auto-regressively.
1655 Args:
1656 decoder_model: rnnt_utils.AbstractRNNTDecoder implementation.
1657 joint_model: rnnt_utils.AbstractRNNTJoint implementation.
1658 blank_index: int index of the blank token. Must be len(vocabulary) for multi-blank RNNTs.
1659 big_blank_durations: a list containing durations for big blanks the model supports.
1660 max_symbols_per_step: Optional int. The maximum number of symbols that can be added
1661 to a sequence in a single time step; if set to None then there is
1662 no limit.
1663 preserve_alignments: Bool flag which preserves the history of alignments generated during
1664 greedy decoding (sample / batched). When set to true, the Hypothesis will contain
1665 the non-null value for `alignments` in it. Here, `alignments` is a List of List of
1666 Tuple(Tensor (of length V + 1 + num-big-blanks), Tensor(scalar, label after argmax)).
1667 The length of the list corresponds to the Acoustic Length (T).
1668 Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more targets from a vocabulary.
1669 U is the number of target tokens for the current timestep Ti.
1670 preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores generated
1671 during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
1672 the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of List of floats.
1673 The length of the list corresponds to the Acoustic Length (T).
1674 Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more confidence scores.
1675 U is the number of target tokens for the current timestep Ti.
1676 confidence_measure_cfg: A dict-like object which contains the measure name and settings to compute per-frame
1677 confidence scores.
1678
1679 name: The measure name (str).
1680 Supported values:
1681 - 'max_prob' for using the maximum token probability as a confidence.
1682 - 'entropy' for using a normalized entropy of a log-likelihood vector.
1683
1684 entropy_type: Which type of entropy to use (str). Used if confidence_measure_cfg.name is set to `entropy`.
1685 Supported values:
1686 - 'gibbs' for the (standard) Gibbs entropy. If the alpha (ฮฑ) is provided,
1687 the formula is the following: H_ฮฑ = -sum_i((p^ฮฑ_i)*log(p^ฮฑ_i)).
1688 Note that for this entropy, the alpha should comply the following inequality:
1689 (log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= ฮฑ <= (1+log(V-1))/log(V-1)
1690 where V is the model vocabulary size.
1691 - 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
1692 Tsallis entropy formula is the following: H_ฮฑ = 1/(ฮฑ-1)*(1-sum_i(p^ฮฑ_i)),
1693 where ฮฑ is a parameter. When ฮฑ == 1, it works like the Gibbs entropy.
1694 More: https://en.wikipedia.org/wiki/Tsallis_entropy
1695 - 'renyi' for the Rรฉnyi entropy.
1696 Rรฉnyi entropy formula is the following: H_ฮฑ = 1/(1-ฮฑ)*log_2(sum_i(p^ฮฑ_i)),
1697 where ฮฑ is a parameter. When ฮฑ == 1, it works like the Gibbs entropy.
1698 More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
1699
1700 alpha: Power scale for logsoftmax (ฮฑ for entropies). Here we restrict it to be > 0.
1701 When the alpha equals one, scaling is not applied to 'max_prob',
1702 and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
1703
1704 entropy_norm: A mapping of the entropy value to the interval [0,1].
1705 Supported values:
1706 - 'lin' for using the linear mapping.
1707 - 'exp' for using exponential mapping with linear shift.
1708 """
1709
1710 def __init__(
1711 self,
1712 decoder_model: rnnt_abstract.AbstractRNNTDecoder,
1713 joint_model: rnnt_abstract.AbstractRNNTJoint,
1714 blank_index: int,
1715 big_blank_durations: List[int],
1716 max_symbols_per_step: Optional[int] = None,
1717 preserve_alignments: bool = False,
1718 preserve_frame_confidence: bool = False,
1719 confidence_measure_cfg: Optional[DictConfig] = None,
1720 ):
1721 super().__init__(
1722 decoder_model=decoder_model,
1723 joint_model=joint_model,
1724 blank_index=blank_index,
1725 max_symbols_per_step=max_symbols_per_step,
1726 preserve_alignments=preserve_alignments,
1727 preserve_frame_confidence=preserve_frame_confidence,
1728 confidence_measure_cfg=confidence_measure_cfg,
1729 )
1730 self.big_blank_durations = big_blank_durations
1731
1732 # Depending on availability of `blank_as_pad` support
1733 # switch between more efficient batch decoding technique
1734 if self.decoder.blank_as_pad:
1735 self._greedy_decode = self._greedy_decode_blank_as_pad
1736 else:
1737 self._greedy_decode = self._greedy_decode_masked
1738 self._SOS = blank_index - len(big_blank_durations)
1739
1740 def _greedy_decode_blank_as_pad(
1741 self,
1742 x: torch.Tensor,
1743 out_len: torch.Tensor,
1744 device: torch.device,
1745 partial_hypotheses: Optional[List[rnnt_utils.Hypothesis]] = None,
1746 ):
1747 if partial_hypotheses is not None:
1748            raise NotImplementedError("`partial_hypotheses` is not supported")
1749
1750 with torch.inference_mode():
1751 # x: [B, T, D]
1752 # out_len: [B]
1753 # device: torch.device
1754
1755 # Initialize list of Hypothesis
1756 batchsize = x.shape[0]
1757 hypotheses = [
1758 rnnt_utils.Hypothesis(score=0.0, y_sequence=[], timestep=[], dec_state=None) for _ in range(batchsize)
1759 ]
1760
1761 # Initialize Hidden state matrix (shared by entire batch)
1762 hidden = None
1763
1764            # If alignments need to be preserved, register a dangling list to hold the values
1765 if self.preserve_alignments:
1766 # alignments is a 3-dimensional dangling list representing B x T x U
1767 for hyp in hypotheses:
1768 hyp.alignments = [[]]
1769
1770            # If confidence scores need to be preserved, register a dangling list to hold the values
1771 if self.preserve_frame_confidence:
1772 # frame_confidence is a 3-dimensional dangling list representing B x T x U
1773 for hyp in hypotheses:
1774 hyp.frame_confidence = [[]]
1775
1776 # Last Label buffer + Last Label without blank buffer
1777 # batch level equivalent of the last_label
1778 last_label = torch.full([batchsize, 1], fill_value=self._SOS, dtype=torch.long, device=device)
1779
1780 # this mask is true for if the emission is *any type* of blank.
1781 blank_mask = torch.full([batchsize], fill_value=0, dtype=torch.bool, device=device)
1782
1783 # Get max sequence length
1784 max_out_len = out_len.max()
1785
1786            # We have a mask for each big blank. A mask being True means: the previous emission was
1787            # exactly the big blank with the corresponding duration, or one with a larger duration.
1788            # E.g., the big_blank_mask for duration 2 is set True if the previous emission was a big
1789            # blank with duration 4, 3 or 2, but False if the previous emission was a standard blank (with duration = 1).
1790 big_blank_masks = [torch.full([batchsize], fill_value=0, dtype=torch.bool, device=device)] * len(
1791 self.big_blank_durations
1792 )
1793
1794 # if this variable > 1, it means the previous emission was big-blank and we need to skip frames.
1795 big_blank_duration = 1
1796
1797 for time_idx in range(max_out_len):
1798 if big_blank_duration > 1:
1799 # skip frames until big_blank_duration == 1
1800 big_blank_duration -= 1
1801 continue
1802 f = x.narrow(dim=1, start=time_idx, length=1) # [B, 1, D]
1803
1804                # Prepare the timestep-t batch variables
1805 not_blank = True
1806 symbols_added = 0
1807
1808 # Reset all blank masks
1809 blank_mask.mul_(False)
1810 for i in range(len(big_blank_masks)):
1811 big_blank_masks[i].mul_(False)
1812
1813 # Update blank mask with time mask
1814 # Batch: [B, T, D], but Bi may have seq len < max(seq_lens_in_batch)
1815                # Forcibly mask with "blank" tokens, for all samples where current time step >= seq_len
1816 blank_mask = time_idx >= out_len
1817 for i in range(len(big_blank_masks)):
1818 big_blank_masks[i] = time_idx >= out_len
1819
1820 # Start inner loop
1821 while not_blank and (self.max_symbols is None or symbols_added < self.max_symbols):
1822 # Batch prediction and joint network steps
1823 # If very first prediction step, submit SOS tag (blank) to pred_step.
1824 # This feeds a zero tensor as input to AbstractRNNTDecoder to prime the state
1825 if time_idx == 0 and symbols_added == 0 and hidden is None:
1826 g, hidden_prime = self._pred_step(self._SOS, hidden, batch_size=batchsize)
1827 else:
1828 # Perform batch step prediction of decoder, getting new states and scores ("g")
1829 g, hidden_prime = self._pred_step(last_label, hidden, batch_size=batchsize)
1830
1831 # Batched joint step - Output = [B, V + 1 + num-big-blanks]
1832 # If preserving per-frame confidence, log_normalize must be true
1833 logp = self._joint_step(f, g, log_normalize=True if self.preserve_frame_confidence else None)[
1834 :, 0, 0, :
1835 ]
1836
1837 if logp.dtype != torch.float32:
1838 logp = logp.float()
1839
1840 # Get index k, of max prob for batch
1841 v, k = logp.max(1)
1842 del g
1843
1844 # Update blank mask with current predicted blanks
1845 # This is accumulating blanks over all time steps T and all target steps min(max_symbols, U)
1846 k_is_blank = k >= self._blank_index - len(self.big_blank_durations)
1847 blank_mask.bitwise_or_(k_is_blank)
1848
1849 for i in range(len(big_blank_masks)):
1850 # using <= since as we mentioned before, the mask doesn't store exact matches.
1851 # instead, it is True when the predicted blank's duration is >= the duration that the
1852 # mask corresponds to.
1853 k_is_big_blank = k <= self._blank_index - 1 - i
1854
1855 # need to do a bitwise_and since it could also be a non-blank.
1856 k_is_big_blank.bitwise_and_(k_is_blank)
1857 big_blank_masks[i].bitwise_or_(k_is_big_blank)
1858
1859 del k_is_blank
1860
1861 # If preserving alignments, check if sequence length of sample has been reached
1862 # before adding alignment
1863 if self.preserve_alignments:
1864 # Insert logprobs into last timestep per sample
1865 logp_vals = logp.to('cpu')
1866 logp_ids = logp_vals.max(1)[1]
1867 for batch_idx in range(batchsize):
1868 if time_idx < out_len[batch_idx]:
1869 hypotheses[batch_idx].alignments[-1].append(
1870 (logp_vals[batch_idx], logp_ids[batch_idx])
1871 )
1872 del logp_vals
1873
1874 # If preserving per-frame confidence, check if sequence length of sample has been reached
1875 # before adding confidence scores
1876 if self.preserve_frame_confidence:
1877 # Insert probabilities into last timestep per sample
1878 confidence = self._get_confidence(logp)
1879 for batch_idx in range(batchsize):
1880 if time_idx < out_len[batch_idx]:
1881 hypotheses[batch_idx].frame_confidence[-1].append(confidence[batch_idx])
1882 del logp
1883
1884 # If all samples predict / have predicted prior blanks, exit loop early
1885 # This is equivalent to if single sample predicted k
1886 if blank_mask.all():
1887 not_blank = False
1888 else:
1889 # Collect batch indices where blanks occurred now/past
1890 blank_indices = (blank_mask == 1).nonzero(as_tuple=False)
1891
1892 # Recover prior state for all samples which predicted blank now/past
1893 if hidden is not None:
1894 # LSTM has 2 states
1895 hidden_prime = self.decoder.batch_copy_states(hidden_prime, hidden, blank_indices)
1896
1897 elif len(blank_indices) > 0 and hidden is None:
1898 # Reset state if there were some blank and other non-blank predictions in batch
1899 # Original state is filled with zeros so we just multiply
1900 # LSTM has 2 states
1901 hidden_prime = self.decoder.batch_copy_states(hidden_prime, None, blank_indices, value=0.0)
1902
1903 # Recover prior predicted label for all samples which predicted blank now/past
1904 k[blank_indices] = last_label[blank_indices, 0]
1905
1906 # Update new label and hidden state for next iteration
1907 last_label = k.clone().view(-1, 1)
1908 hidden = hidden_prime
1909
1910 # Update predicted labels, accounting for time mask
1911 # If blank was predicted even once, now or in the past,
1912 # Force the current predicted label to also be blank
1913                    # This ensures that blanks propagate across all timesteps
1914                    # once they have occurred (normally the stopping condition of the sample-level loop).
1915 for kidx, ki in enumerate(k):
1916 if blank_mask[kidx] == 0:
1917 hypotheses[kidx].y_sequence.append(ki)
1918 hypotheses[kidx].timestep.append(time_idx)
1919 hypotheses[kidx].score += float(v[kidx])
1920
1921 symbols_added += 1
1922
1923 for i in range(len(big_blank_masks) + 1):
1924                    # The task here is to find the shortest blank duration across the batch.
1925 # so we start from the shortest blank duration and go up,
1926 # and stop once we found the duration whose corresponding mask isn't all True.
1927 if i == len(big_blank_masks) or not big_blank_masks[i].all():
1928 big_blank_duration = self.big_blank_durations[i - 1] if i > 0 else 1
1929 break
1930
1931 # If preserving alignments, convert the current Uj alignments into a torch.Tensor
1932 # Then preserve U at current timestep Ti
1933 # Finally, forward the timestep history to Ti+1 for that sample
1934 # All of this should only be done iff the current time index <= sample-level AM length.
1935 # Otherwise ignore and move to next sample / next timestep.
1936 if self.preserve_alignments:
1937
1938                # for each sample, open a blank alignment buffer for the next timestep
1939 for batch_idx in range(batchsize):
1940
1941 # this checks if current timestep <= sample-level AM length
1942 # If current timestep > sample-level AM length, no alignments will be added
1943 # Therefore the list of Uj alignments is empty here.
1944 if len(hypotheses[batch_idx].alignments[-1]) > 0:
1945 hypotheses[batch_idx].alignments.append([]) # blank buffer for next timestep
1946
1947 # Do the same if preserving per-frame confidence
1948 if self.preserve_frame_confidence:
1949
1950 for batch_idx in range(batchsize):
1951 if len(hypotheses[batch_idx].frame_confidence[-1]) > 0:
1952 hypotheses[batch_idx].frame_confidence.append([]) # blank buffer for next timestep
1953
1954 # Remove trailing empty list of alignments at T_{am-len} x Uj
1955 if self.preserve_alignments:
1956 for batch_idx in range(batchsize):
1957 if len(hypotheses[batch_idx].alignments[-1]) == 0:
1958 del hypotheses[batch_idx].alignments[-1]
1959
1960 # Remove trailing empty list of confidence scores at T_{am-len} x Uj
1961 if self.preserve_frame_confidence:
1962 for batch_idx in range(batchsize):
1963 if len(hypotheses[batch_idx].frame_confidence[-1]) == 0:
1964 del hypotheses[batch_idx].frame_confidence[-1]
1965
1966 # Preserve states
1967 for batch_idx in range(batchsize):
1968 hypotheses[batch_idx].dec_state = self.decoder.batch_select_state(hidden, batch_idx)
1969
1970 return hypotheses
1971
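The duration-selection loop at the end of the decoding step above (find the shortest blank duration emitted by any batch element) can be sketched with plain lists. The helper name is hypothetical, and the duration list is assumed sorted ascending, matching how the masks are ordered:

```python
def frames_to_skip(big_blank_masks, big_blank_durations):
    # big_blank_masks[i][b] is True when batch element b emitted a blank whose
    # duration is >= big_blank_durations[i] (durations sorted ascending).
    # Walk from the shortest duration up and stop at the first mask that is
    # not all-True: that batch element constrains how far we may skip.
    for i in range(len(big_blank_masks) + 1):
        if i == len(big_blank_masks) or not all(big_blank_masks[i]):
            return big_blank_durations[i - 1] if i > 0 else 1

# one sample emitted a duration-4 big blank, the other only a duration-2 one
print(frames_to_skip([[True, True], [True, False]], [2, 4]))  # -> 2
```

If any element emitted a standard blank (or a non-blank), the first mask already fails and no frames are skipped; if every element emitted the longest big blank, the loop falls through to the final duration.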
1972 def _greedy_decode_masked(
1973 self,
1974 x: torch.Tensor,
1975 out_len: torch.Tensor,
1976 device: torch.device,
1977 partial_hypotheses: Optional[List[rnnt_utils.Hypothesis]] = None,
1978 ):
1979 if partial_hypotheses is not None:
1980            raise NotImplementedError("`partial_hypotheses` is not supported")
1981
1982 if self.big_blank_durations != [1] * len(self.big_blank_durations):
1983 raise NotImplementedError(
1984 "Efficient frame-skipping version for multi-blank masked decoding is not supported."
1985 )
1986
1987 # x: [B, T, D]
1988 # out_len: [B]
1989 # device: torch.device
1990
1991 # Initialize state
1992 batchsize = x.shape[0]
1993 hypotheses = [
1994 rnnt_utils.Hypothesis(score=0.0, y_sequence=[], timestep=[], dec_state=None) for _ in range(batchsize)
1995 ]
1996
1997 # Initialize Hidden state matrix (shared by entire batch)
1998 hidden = None
1999
2000        # If alignments need to be preserved, register a dangling list to hold the values
2001        if self.preserve_alignments:
2002            # alignments is a 3-dimensional dangling list representing B x T x U
2003            for hyp in hypotheses:
2004                hyp.alignments = [[]]
2007
2008        # If confidence scores need to be preserved, register a dangling list to hold the values
2009 if self.preserve_frame_confidence:
2010 # frame_confidence is a 3-dimensional dangling list representing B x T x U
2011 for hyp in hypotheses:
2012 hyp.frame_confidence = [[]]
2013
2014 # Last Label buffer + Last Label without blank buffer
2015 # batch level equivalent of the last_label
2016 last_label = torch.full([batchsize, 1], fill_value=self._blank_index, dtype=torch.long, device=device)
2017 last_label_without_blank = last_label.clone()
2018
2019 # Mask buffers
2020 blank_mask = torch.full([batchsize], fill_value=0, dtype=torch.bool, device=device)
2021
2022 # Get max sequence length
2023 max_out_len = out_len.max()
2024
2025 with torch.inference_mode():
2026 for time_idx in range(max_out_len):
2027 f = x.narrow(dim=1, start=time_idx, length=1) # [B, 1, D]
2028
2029                # Prepare the timestep-t batch variables
2030 not_blank = True
2031 symbols_added = 0
2032
2033 # Reset blank mask
2034 blank_mask.mul_(False)
2035
2036 # Update blank mask with time mask
2037 # Batch: [B, T, D], but Bi may have seq len < max(seq_lens_in_batch)
2038                    # Forcibly mask with "blank" tokens, for all samples where current time step >= seq_len
2039 blank_mask = time_idx >= out_len
2040
2041 # Start inner loop
2042 while not_blank and (self.max_symbols is None or symbols_added < self.max_symbols):
2043 # Batch prediction and joint network steps
2044 # If very first prediction step, submit SOS tag (blank) to pred_step.
2045 # This feeds a zero tensor as input to AbstractRNNTDecoder to prime the state
2046 if time_idx == 0 and symbols_added == 0 and hidden is None:
2047 g, hidden_prime = self._pred_step(self._SOS, hidden, batch_size=batchsize)
2048 else:
2049 # Set a dummy label for the blank value
2050                        # This value will be overwritten by "blank" again in the last-label update below
2051                        # This is done because the prediction network's vocabulary does not contain the RNNT "blank" token
2052 last_label_without_blank_mask = last_label >= self._blank_index
2053 last_label_without_blank[last_label_without_blank_mask] = 0 # temp change of label
2054 last_label_without_blank[~last_label_without_blank_mask] = last_label[
2055 ~last_label_without_blank_mask
2056 ]
2057
2058 # Perform batch step prediction of decoder, getting new states and scores ("g")
2059 g, hidden_prime = self._pred_step(last_label_without_blank, hidden, batch_size=batchsize)
2060
2061 # Batched joint step - Output = [B, V + 1 + num-big-blanks]
2062 # If preserving per-frame confidence, log_normalize must be true
2063 logp = self._joint_step(f, g, log_normalize=True if self.preserve_frame_confidence else None)[
2064 :, 0, 0, :
2065 ]
2066
2067 if logp.dtype != torch.float32:
2068 logp = logp.float()
2069
2070 # Get index k, of max prob for batch
2071 v, k = logp.max(1)
2072 del g
2073
2074 # Update blank mask with current predicted blanks
2075 # This is accumulating blanks over all time steps T and all target steps min(max_symbols, U)
2076 k_is_blank = k == self._blank_index
2077 blank_mask.bitwise_or_(k_is_blank)
2078
2079 # If preserving alignments, check if sequence length of sample has been reached
2080 # before adding alignment
2081 if self.preserve_alignments:
2082 # Insert logprobs into last timestep per sample
2083 logp_vals = logp.to('cpu')
2084 logp_ids = logp_vals.max(1)[1]
2085 for batch_idx in range(batchsize):
2086 if time_idx < out_len[batch_idx]:
2087 hypotheses[batch_idx].alignments[-1].append(
2088 (logp_vals[batch_idx], logp_ids[batch_idx])
2089 )
2090 del logp_vals
2091
2092 # If preserving per-frame confidence, check if sequence length of sample has been reached
2093 # before adding confidence scores
2094 if self.preserve_frame_confidence:
2095 # Insert probabilities into last timestep per sample
2096 confidence = self._get_confidence(logp)
2097 for batch_idx in range(batchsize):
2098 if time_idx < out_len[batch_idx]:
2099 hypotheses[batch_idx].frame_confidence[-1].append(confidence[batch_idx])
2100 del logp
2101
2102 # If all samples predict / have predicted prior blanks, exit loop early
2103 # This is equivalent to if single sample predicted k
2104 if blank_mask.all():
2105 not_blank = False
2106 else:
2107 # Collect batch indices where blanks occurred now/past
2108 blank_indices = (blank_mask == 1).nonzero(as_tuple=False)
2109
2110 # Recover prior state for all samples which predicted blank now/past
2111 if hidden is not None:
2112 # LSTM has 2 states
2113 hidden_prime = self.decoder.batch_copy_states(hidden_prime, hidden, blank_indices)
2114
2115 elif len(blank_indices) > 0 and hidden is None:
2116 # Reset state if there were some blank and other non-blank predictions in batch
2117 # Original state is filled with zeros so we just multiply
2118 # LSTM has 2 states
2119 hidden_prime = self.decoder.batch_copy_states(hidden_prime, None, blank_indices, value=0.0)
2120
2121 # Recover prior predicted label for all samples which predicted blank now/past
2122 k[blank_indices] = last_label[blank_indices, 0]
2123
2124 # Update new label and hidden state for next iteration
2125 last_label = k.view(-1, 1)
2126 hidden = hidden_prime
2127
2128 # Update predicted labels, accounting for time mask
2129 # If blank was predicted even once, now or in the past,
2130 # Force the current predicted label to also be blank
2131                    # This ensures that blanks propagate across all timesteps
2132                    # once they have occurred (normally the stopping condition of the sample-level loop).
2133 for kidx, ki in enumerate(k):
2134 if blank_mask[kidx] == 0:
2135 hypotheses[kidx].y_sequence.append(ki)
2136 hypotheses[kidx].timestep.append(time_idx)
2137 hypotheses[kidx].score += float(v[kidx])
2138
2139 symbols_added += 1
2140
2141 # If preserving alignments, convert the current Uj alignments into a torch.Tensor
2142 # Then preserve U at current timestep Ti
2143 # Finally, forward the timestep history to Ti+1 for that sample
2144 # All of this should only be done iff the current time index <= sample-level AM length.
2145 # Otherwise ignore and move to next sample / next timestep.
2146 if self.preserve_alignments:
2147
2148 # convert Ti-th logits into a torch array
2149 for batch_idx in range(batchsize):
2150
2151 # this checks if current timestep <= sample-level AM length
2152 # If current timestep > sample-level AM length, no alignments will be added
2153 # Therefore the list of Uj alignments is empty here.
2154 if len(hypotheses[batch_idx].alignments[-1]) > 0:
2155 hypotheses[batch_idx].alignments.append([]) # blank buffer for next timestep
2156
2157 # Do the same if preserving per-frame confidence
2158 if self.preserve_frame_confidence:
2159
2160 for batch_idx in range(batchsize):
2161 if len(hypotheses[batch_idx].frame_confidence[-1]) > 0:
2162 hypotheses[batch_idx].frame_confidence.append([]) # blank buffer for next timestep
2163
2164 # Remove trailing empty list of alignments at T_{am-len} x Uj
2165 if self.preserve_alignments:
2166 for batch_idx in range(batchsize):
2167 if len(hypotheses[batch_idx].alignments[-1]) == 0:
2168 del hypotheses[batch_idx].alignments[-1]
2169
2170 # Remove trailing empty list of confidence scores at T_{am-len} x Uj
2171 if self.preserve_frame_confidence:
2172 for batch_idx in range(batchsize):
2173 if len(hypotheses[batch_idx].frame_confidence[-1]) == 0:
2174 del hypotheses[batch_idx].frame_confidence[-1]
2175
2176 # Preserve states
2177 for batch_idx in range(batchsize):
2178 hypotheses[batch_idx].dec_state = self.decoder.batch_select_state(hidden, batch_idx)
2179
2180 return hypotheses
2181
2182
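The blank-mask bookkeeping in the batched loop above can be illustrated with a plain-Python sketch (lists in place of torch tensors; `recover_blank_samples` is a hypothetical helper, not part of NeMo): samples that have already emitted blank keep their previous label, so a blank never advances a sample's prediction.

```python
# Toy sketch of the "recover prior predicted label" step above, using plain
# Python lists instead of torch tensors. Samples whose blank_mask entry is
# True keep their previous label; the others take the new prediction k.
def recover_blank_samples(k, last_label, blank_mask):
    return [last_label[i] if blank_mask[i] else k[i] for i in range(len(k))]
```

For example, `recover_blank_samples([5, 7, 3], [1, 2, 9], [True, False, True])` keeps the prior labels 1 and 9 and accepts only the new label 7.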
2183 @dataclass
2184 class GreedyRNNTInferConfig:
2185 max_symbols_per_step: Optional[int] = 10
2186 preserve_alignments: bool = False
2187 preserve_frame_confidence: bool = False
2188 confidence_measure_cfg: Optional[ConfidenceMeasureConfig] = ConfidenceMeasureConfig()
2189 confidence_method_cfg: str = "DEPRECATED"
2190
2191 def __post_init__(self):
2192 # OmegaConf.structured ensures that post_init check is always executed
2193 self.confidence_measure_cfg = OmegaConf.structured(
2194 self.confidence_measure_cfg
2195 if isinstance(self.confidence_measure_cfg, ConfidenceMeasureConfig)
2196 else ConfidenceMeasureConfig(**self.confidence_measure_cfg)
2197 )
2198 if self.confidence_method_cfg != "DEPRECATED":
2199 logging.warning(
2200 "`confidence_method_cfg` is deprecated and will be removed in the future. "
2201 "Please use `confidence_measure_cfg` instead."
2202 )
2203
2204 # TODO (alaptev): delete the following two lines sometime in the future
2205 logging.warning("Re-writing `confidence_measure_cfg` with the value of `confidence_method_cfg`.")
2206 # OmegaConf.structured ensures that post_init check is always executed
2207 self.confidence_measure_cfg = OmegaConf.structured(
2208 self.confidence_method_cfg
2209 if isinstance(self.confidence_method_cfg, ConfidenceMeasureConfig)
2210 else ConfidenceMeasureConfig(**self.confidence_method_cfg)
2211 )
2212 self.confidence_method_cfg = "DEPRECATED"
2213
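The `__post_init__` deprecation handling above follows a general shim pattern: if the deprecated field was set, warn, copy its value into the replacement field, and reset the sentinel. A minimal sketch of that pattern, using a hypothetical `Config` dataclass and the standard library instead of OmegaConf:

```python
import warnings
from dataclasses import dataclass

# Hypothetical config (not a NeMo class) showing the deprecation shim:
# a sentinel string marks "unset"; any other value is migrated to the
# new field and the old field is reset to the sentinel.
@dataclass
class Config:
    new_field: float = 0.33
    old_field: str = "DEPRECATED"

    def __post_init__(self):
        if self.old_field != "DEPRECATED":
            warnings.warn("`old_field` is deprecated; use `new_field` instead.")
            self.new_field = float(self.old_field)
            self.old_field = "DEPRECATED"
```

Constructing `Config(old_field="0.5")` thus yields `new_field == 0.5` while leaving the default path untouched.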
2214
2215 @dataclass
2216 class GreedyBatchedRNNTInferConfig:
2217 max_symbols_per_step: Optional[int] = 10
2218 preserve_alignments: bool = False
2219 preserve_frame_confidence: bool = False
2220 confidence_measure_cfg: Optional[ConfidenceMeasureConfig] = ConfidenceMeasureConfig()
2221 confidence_method_cfg: str = "DEPRECATED"
2222
2223 def __post_init__(self):
2224 # OmegaConf.structured ensures that post_init check is always executed
2225 self.confidence_measure_cfg = OmegaConf.structured(
2226 self.confidence_measure_cfg
2227 if isinstance(self.confidence_measure_cfg, ConfidenceMeasureConfig)
2228 else ConfidenceMeasureConfig(**self.confidence_measure_cfg)
2229 )
2230 if self.confidence_method_cfg != "DEPRECATED":
2231 logging.warning(
2232 "`confidence_method_cfg` is deprecated and will be removed in the future. "
2233 "Please use `confidence_measure_cfg` instead."
2234 )
2235
2236 # TODO (alaptev): delete the following two lines sometime in the future
2237 logging.warning("Re-writing `confidence_measure_cfg` with the value of `confidence_method_cfg`.")
2238 # OmegaConf.structured ensures that post_init check is always executed
2239 self.confidence_measure_cfg = OmegaConf.structured(
2240 self.confidence_method_cfg
2241 if isinstance(self.confidence_method_cfg, ConfidenceMeasureConfig)
2242 else ConfidenceMeasureConfig(**self.confidence_method_cfg)
2243 )
2244 self.confidence_method_cfg = "DEPRECATED"
2245
2246
2247 class GreedyTDTInfer(_GreedyRNNTInfer):
2248 """A greedy TDT decoder.
2249
2250 Sequence level greedy decoding, performed auto-regressively.
2251
2252 Args:
2253 decoder_model: rnnt_utils.AbstractRNNTDecoder implementation.
2254 joint_model: rnnt_utils.AbstractRNNTJoint implementation.
2255 blank_index: int index of the blank token. Must be len(vocabulary) for TDT models.
2256 durations: a list containing durations for TDT.
2257 max_symbols_per_step: Optional int. The maximum number of symbols that can be added
2258 to a sequence in a single time step; if set to None then there is
2259 no limit.
2260 preserve_alignments: Bool flag which preserves the history of alignments generated during
2261 greedy decoding (sample / batched). When set to true, the Hypothesis will contain
2262 the non-null value for `alignments` in it. Here, `alignments` is a List of List of
2263 Tuple(Tensor (of length V + 1 + num-big-blanks), Tensor(scalar, label after argmax)).
2264 The length of the list corresponds to the Acoustic Length (T).
2265 Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more targets from a vocabulary.
2266 U is the number of target tokens for the current timestep Ti.
2267 preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores generated
2268 during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
2269 the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of List of floats.
2270 The length of the list corresponds to the Acoustic Length (T).
2271 Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more confidence scores.
2272 U is the number of target tokens for the current timestep Ti.
2273 confidence_measure_cfg: A dict-like object which contains the measure name and settings to compute per-frame
2274 confidence scores.
2275
2276 name: The measure name (str).
2277 Supported values:
2278 - 'max_prob' for using the maximum token probability as a confidence.
2279 - 'entropy' for using a normalized entropy of a log-likelihood vector.
2280
2281 entropy_type: Which type of entropy to use (str). Used if confidence_measure_cfg.name is set to `entropy`.
2282 Supported values:
2283 - 'gibbs' for the (standard) Gibbs entropy. If the alpha (α) is provided,
2284 the formula is the following: H_α = -sum_i((p^α_i)*log(p^α_i)).
2285 Note that for this entropy, the alpha should comply with the following inequality:
2286 (log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= α <= (1+log(V-1))/log(V-1)
2287 where V is the model vocabulary size.
2288 - 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
2289 Tsallis entropy formula is the following: H_α = 1/(α-1)*(1-sum_i(p^α_i)),
2290 where α is a parameter. When α == 1, it works like the Gibbs entropy.
2291 More: https://en.wikipedia.org/wiki/Tsallis_entropy
2292 - 'renyi' for the Rényi entropy.
2293 Rényi entropy formula is the following: H_α = 1/(1-α)*log_2(sum_i(p^α_i)),
2294 where α is a parameter. When α == 1, it works like the Gibbs entropy.
2295 More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
2296
2297 alpha: Power scale for logsoftmax (α for entropies). Here we restrict it to be > 0.
2298 When the alpha equals one, scaling is not applied to 'max_prob',
2299 and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
2300
2301 entropy_norm: A mapping of the entropy value to the interval [0,1].
2302 Supported values:
2303 - 'lin' for using the linear mapping.
2304 - 'exp' for using exponential mapping with linear shift.
2305 """
2306
2307 def __init__(
2308 self,
2309 decoder_model: rnnt_abstract.AbstractRNNTDecoder,
2310 joint_model: rnnt_abstract.AbstractRNNTJoint,
2311 blank_index: int,
2312 durations: list,
2313 max_symbols_per_step: Optional[int] = None,
2314 preserve_alignments: bool = False,
2315 preserve_frame_confidence: bool = False,
2316 confidence_measure_cfg: Optional[DictConfig] = None,
2317 ):
2318 super().__init__(
2319 decoder_model=decoder_model,
2320 joint_model=joint_model,
2321 blank_index=blank_index,
2322 max_symbols_per_step=max_symbols_per_step,
2323 preserve_alignments=preserve_alignments,
2324 preserve_frame_confidence=preserve_frame_confidence,
2325 confidence_measure_cfg=confidence_measure_cfg,
2326 )
2327 self.durations = durations
2328
2329 @typecheck()
2330 def forward(
2331 self,
2332 encoder_output: torch.Tensor,
2333 encoded_lengths: torch.Tensor,
2334 partial_hypotheses: Optional[List[rnnt_utils.Hypothesis]] = None,
2335 ):
2336 """Returns a list of hypotheses given an input batch of the encoder hidden embedding.
2337 Output token is generated auto-regressively.
2338 Args:
2339 encoder_output: A tensor of size (batch, features, timesteps).
2340 encoded_lengths: list of int representing the length of each sequence
2341 output sequence.
2342 Returns:
2343 packed list containing batch number of sentences (Hypotheses).
2344 """
2345 # Preserve decoder and joint training state
2346 decoder_training_state = self.decoder.training
2347 joint_training_state = self.joint.training
2348
2349 with torch.inference_mode():
2350 # Apply optional preprocessing
2351 encoder_output = encoder_output.transpose(1, 2) # (B, T, D)
2352
2353 self.decoder.eval()
2354 self.joint.eval()
2355
2356 hypotheses = []
2357 # Process each sequence independently
2358 with self.decoder.as_frozen(), self.joint.as_frozen():
2359 for batch_idx in range(encoder_output.size(0)):
2360 inseq = encoder_output[batch_idx, :, :].unsqueeze(1) # [T, 1, D]
2361 logitlen = encoded_lengths[batch_idx]
2362
2363 partial_hypothesis = partial_hypotheses[batch_idx] if partial_hypotheses is not None else None
2364 hypothesis = self._greedy_decode(inseq, logitlen, partial_hypotheses=partial_hypothesis)
2365 hypotheses.append(hypothesis)
2366
2367 # Pack results into Hypotheses
2368 packed_result = pack_hypotheses(hypotheses, encoded_lengths)
2369
2370 self.decoder.train(decoder_training_state)
2371 self.joint.train(joint_training_state)
2372
2373 return (packed_result,)
2374
2375 @torch.no_grad()
2376 def _greedy_decode(
2377 self, x: torch.Tensor, out_len: torch.Tensor, partial_hypotheses: Optional[rnnt_utils.Hypothesis] = None
2378 ):
2379 # x: [T, 1, D]
2380 # out_len: [seq_len]
2381
2382 # Initialize blank state and empty label set in Hypothesis
2383 hypothesis = rnnt_utils.Hypothesis(score=0.0, y_sequence=[], dec_state=None, timestep=[], last_token=None)
2384
2385 if partial_hypotheses is not None:
2386 hypothesis.last_token = partial_hypotheses.last_token
2387 hypothesis.y_sequence = (
2388 partial_hypotheses.y_sequence.cpu().tolist()
2389 if isinstance(partial_hypotheses.y_sequence, torch.Tensor)
2390 else partial_hypotheses.y_sequence
2391 )
2392 if partial_hypotheses.dec_state is not None:
2393 hypothesis.dec_state = self.decoder.batch_concat_states([partial_hypotheses.dec_state])
2394 hypothesis.dec_state = _states_to_device(hypothesis.dec_state, x.device)
2395
2396 if self.preserve_alignments:
2397 # Alignments is a 2-dimensional dangling list representing T x U
2398 hypothesis.alignments = [[]]
2399
2400 if self.preserve_frame_confidence:
2401 hypothesis.frame_confidence = [[]]
2402
2403 time_idx = 0
2404 while time_idx < out_len:
2405 # Extract encoder embedding at timestep t
2406 # f = x[time_idx, :, :].unsqueeze(0) # [1, 1, D]
2407 f = x.narrow(dim=0, start=time_idx, length=1)
2408
2409 # Setup exit flags and counter
2410 not_blank = True
2411 symbols_added = 0
2412
2413 need_loop = True
2414 # While blank is not predicted and we don't run out of max symbols per timestep
2415 while need_loop and (self.max_symbols is None or symbols_added < self.max_symbols):
2416 # In the first timestep, we initialize the network with RNNT Blank
2417 # In later timesteps, we provide previous predicted label as input.
2418 if hypothesis.last_token is None and hypothesis.dec_state is None:
2419 last_label = self._SOS
2420 else:
2421 last_label = label_collate([[hypothesis.last_token]])
2422
2423 # Perform prediction network and joint network steps.
2424 g, hidden_prime = self._pred_step(last_label, hypothesis.dec_state)
2425 # log_normalize is False here: token and duration logits are sliced out and normalized separately below
2426 logits = self._joint_step(f, g, log_normalize=False)
2427 logp = logits[0, 0, 0, : -len(self.durations)]
2428 if self.preserve_frame_confidence:
2429 logp = torch.log_softmax(logp, -1)
2430
2431 duration_logp = torch.log_softmax(logits[0, 0, 0, -len(self.durations) :], dim=-1)
2432 del g
2433
2434 # torch.max(0) op doesn't exist for FP16.
2435 if logp.dtype != torch.float32:
2436 logp = logp.float()
2437
2438 # get index k, of max prob
2439 v, k = logp.max(0)
2440 k = k.item() # K is the label at timestep t_s in inner loop, s >= 0.
2441
2442 d_v, d_k = duration_logp.max(0)
2443 d_k = d_k.item()
2444
2445 skip = self.durations[d_k]
2446
2447 if self.preserve_alignments:
2448 # insert logprobs into last timestep
2449 hypothesis.alignments[-1].append((logp.to('cpu'), torch.tensor(k, dtype=torch.int32)))
2450
2451 if self.preserve_frame_confidence:
2452 # insert confidence into last timestep
2453 hypothesis.frame_confidence[-1].append(self._get_confidence(logp))
2454
2455 del logp
2456
2457 # If blank token is predicted, exit inner loop, move onto next timestep t
2458 if k == self._blank_index:
2459 not_blank = False
2460 else:
2461 # Append token to label set, update RNN state.
2462 hypothesis.y_sequence.append(k)
2463 hypothesis.score += float(v)
2464 hypothesis.timestep.append(time_idx)
2465 hypothesis.dec_state = hidden_prime
2466 hypothesis.last_token = k
2467
2468 # Increment token counter.
2469 symbols_added += 1
2470 time_idx += skip
2471 need_loop = skip == 0
2472
2473 # this rarely happens, but we manually set `skip` to 1 if blank is emitted
2474 # together with a predicted duration of 0. This prevents possible
2475 # infinite loops.
2476 if skip == 0:
2477 skip = 1
2478
2479 if self.preserve_alignments:
2480 # convert Ti-th logits into a torch array
2481 hypothesis.alignments.append([]) # blank buffer for next timestep
2482
2483 if self.preserve_frame_confidence:
2484 hypothesis.frame_confidence.append([]) # blank buffer for next timestep
2485
2486 if symbols_added == self.max_symbols:
2487 time_idx += 1
2488
2489 # Remove trailing empty list of Alignments
2490 if self.preserve_alignments:
2491 if len(hypothesis.alignments[-1]) == 0:
2492 del hypothesis.alignments[-1]
2493
2494 # Remove trailing empty list of per-frame confidence
2495 if self.preserve_frame_confidence:
2496 if len(hypothesis.frame_confidence[-1]) == 0:
2497 del hypothesis.frame_confidence[-1]
2498
2499 # Unpack the hidden states
2500 hypothesis.dec_state = self.decoder.batch_select_state(hypothesis.dec_state, 0)
2501
2502 return hypothesis
2503
2504
2505 class GreedyBatchedTDTInfer(_GreedyRNNTInfer):
2506 """A batch level greedy TDT decoder.
2507 Batch level greedy decoding, performed auto-regressively.
2508 Args:
2509 decoder_model: rnnt_utils.AbstractRNNTDecoder implementation.
2510 joint_model: rnnt_utils.AbstractRNNTJoint implementation.
2511 blank_index: int index of the blank token. Must be len(vocabulary) for TDT models.
2512 durations: a list containing durations.
2513 max_symbols_per_step: Optional int. The maximum number of symbols that can be added
2514 to a sequence in a single time step; if set to None then there is
2515 no limit.
2516 preserve_alignments: Bool flag which preserves the history of alignments generated during
2517 greedy decoding (sample / batched). When set to true, the Hypothesis will contain
2518 the non-null value for `alignments` in it. Here, `alignments` is a List of List of
2519 Tuple(Tensor (of length V + 1 + num-big-blanks), Tensor(scalar, label after argmax)).
2520 The length of the list corresponds to the Acoustic Length (T).
2521 Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more targets from a vocabulary.
2522 U is the number of target tokens for the current timestep Ti.
2523 preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores generated
2524 during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
2525 the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of List of floats.
2526 The length of the list corresponds to the Acoustic Length (T).
2527 Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more confidence scores.
2528 U is the number of target tokens for the current timestep Ti.
2529 confidence_measure_cfg: A dict-like object which contains the measure name and settings to compute per-frame
2530 confidence scores.
2531
2532 name: The measure name (str).
2533 Supported values:
2534 - 'max_prob' for using the maximum token probability as a confidence.
2535 - 'entropy' for using a normalized entropy of a log-likelihood vector.
2536
2537 entropy_type: Which type of entropy to use (str). Used if confidence_measure_cfg.name is set to `entropy`.
2538 Supported values:
2539 - 'gibbs' for the (standard) Gibbs entropy. If the alpha (α) is provided,
2540 the formula is the following: H_α = -sum_i((p^α_i)*log(p^α_i)).
2541 Note that for this entropy, the alpha should comply with the following inequality:
2542 (log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= α <= (1+log(V-1))/log(V-1)
2543 where V is the model vocabulary size.
2544 - 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
2545 Tsallis entropy formula is the following: H_α = 1/(α-1)*(1-sum_i(p^α_i)),
2546 where α is a parameter. When α == 1, it works like the Gibbs entropy.
2547 More: https://en.wikipedia.org/wiki/Tsallis_entropy
2548 - 'renyi' for the Rényi entropy.
2549 Rényi entropy formula is the following: H_α = 1/(1-α)*log_2(sum_i(p^α_i)),
2550 where α is a parameter. When α == 1, it works like the Gibbs entropy.
2551 More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
2552
2553 alpha: Power scale for logsoftmax (α for entropies). Here we restrict it to be > 0.
2554 When the alpha equals one, scaling is not applied to 'max_prob',
2555 and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
2556
2557 entropy_norm: A mapping of the entropy value to the interval [0,1].
2558 Supported values:
2559 - 'lin' for using the linear mapping.
2560 - 'exp' for using exponential mapping with linear shift.
2561 """
2562
2563 def __init__(
2564 self,
2565 decoder_model: rnnt_abstract.AbstractRNNTDecoder,
2566 joint_model: rnnt_abstract.AbstractRNNTJoint,
2567 blank_index: int,
2568 durations: List[int],
2569 max_symbols_per_step: Optional[int] = None,
2570 preserve_alignments: bool = False,
2571 preserve_frame_confidence: bool = False,
2572 confidence_measure_cfg: Optional[DictConfig] = None,
2573 ):
2574 super().__init__(
2575 decoder_model=decoder_model,
2576 joint_model=joint_model,
2577 blank_index=blank_index,
2578 max_symbols_per_step=max_symbols_per_step,
2579 preserve_alignments=preserve_alignments,
2580 preserve_frame_confidence=preserve_frame_confidence,
2581 confidence_measure_cfg=confidence_measure_cfg,
2582 )
2583 self.durations = durations
2584
2585 # Depending on availability of `blank_as_pad` support
2586 # switch between more efficient batch decoding technique
2587 if self.decoder.blank_as_pad:
2588 self._greedy_decode = self._greedy_decode_blank_as_pad
2589 else:
2590 self._greedy_decode = self._greedy_decode_masked
2591
2592 @typecheck()
2593 def forward(
2594 self,
2595 encoder_output: torch.Tensor,
2596 encoded_lengths: torch.Tensor,
2597 partial_hypotheses: Optional[List[rnnt_utils.Hypothesis]] = None,
2598 ):
2599 """Returns a list of hypotheses given an input batch of the encoder hidden embedding.
2600 Output token is generated auto-regressively.
2601 Args:
2602 encoder_output: A tensor of size (batch, features, timesteps).
2603 encoded_lengths: list of int representing the length of each sequence
2604 output sequence.
2605 Returns:
2606 packed list containing batch number of sentences (Hypotheses).
2607 """
2608 # Preserve decoder and joint training state
2609 decoder_training_state = self.decoder.training
2610 joint_training_state = self.joint.training
2611
2612 with torch.inference_mode():
2613 # Apply optional preprocessing
2614 encoder_output = encoder_output.transpose(1, 2) # (B, T, D)
2615 logitlen = encoded_lengths
2616
2617 self.decoder.eval()
2618 self.joint.eval()
2619
2620 with self.decoder.as_frozen(), self.joint.as_frozen():
2621 inseq = encoder_output # [B, T, D]
2622 hypotheses = self._greedy_decode(
2623 inseq, logitlen, device=inseq.device, partial_hypotheses=partial_hypotheses
2624 )
2625
2626 # Pack the hypotheses results
2627 packed_result = pack_hypotheses(hypotheses, logitlen)
2628
2629 self.decoder.train(decoder_training_state)
2630 self.joint.train(joint_training_state)
2631
2632 return (packed_result,)
2633
2634 def _greedy_decode_blank_as_pad(
2635 self,
2636 x: torch.Tensor,
2637 out_len: torch.Tensor,
2638 device: torch.device,
2639 partial_hypotheses: Optional[List[rnnt_utils.Hypothesis]] = None,
2640 ):
2641 if partial_hypotheses is not None:
2642 raise NotImplementedError("`partial_hypotheses` support is not implemented")
2643
2644 with torch.inference_mode():
2645 # x: [B, T, D]
2646 # out_len: [B]
2647 # device: torch.device
2648
2649 # Initialize list of Hypothesis
2650 batchsize = x.shape[0]
2651 hypotheses = [
2652 rnnt_utils.Hypothesis(score=0.0, y_sequence=[], timestep=[], dec_state=None) for _ in range(batchsize)
2653 ]
2654
2655 # Initialize Hidden state matrix (shared by entire batch)
2656 hidden = None
2657
2658 # If alignments need to be preserved, register a dangling list to hold the values
2659 if self.preserve_alignments:
2660 # alignments is a 3-dimensional dangling list representing B x T x U
2661 for hyp in hypotheses:
2662 hyp.alignments = [[]]
2663
2664 # If confidence scores need to be preserved, register a dangling list to hold the values
2665 if self.preserve_frame_confidence:
2666 # frame_confidence is a 3-dimensional dangling list representing B x T x U
2667 for hyp in hypotheses:
2668 hyp.frame_confidence = [[]]
2669
2670 # Last Label buffer + Last Label without blank buffer
2671 # batch level equivalent of the last_label
2672 last_label = torch.full([batchsize, 1], fill_value=self._blank_index, dtype=torch.long, device=device)
2673
2674 # Mask buffers
2675 blank_mask = torch.full([batchsize], fill_value=0, dtype=torch.bool, device=device)
2676
2677 # Get max sequence length
2678 max_out_len = out_len.max()
2679
2680 # `skip` is the number of frames the next decoding step should advance by. When skip == 1,
2681 # the next decoding step simply uses the next input frame.
2682 skip = 1
2683 for time_idx in range(max_out_len):
2684 if skip > 1: # if skip > 1 at the current step, we decrement it and skip the current frame.
2685 skip -= 1
2686 continue
2687 f = x.narrow(dim=1, start=time_idx, length=1) # [B, 1, D]
2688
2689 # need_to_stay is a boolean indicating whether the next decoding step should remain in the same frame.
2690 need_to_stay = True
2691 symbols_added = 0
2692
2693 # Reset blank mask
2694 blank_mask.mul_(False)
2695
2696 # Update blank mask with time mask
2697 # Batch: [B, T, D], but Bi may have seq len < max(seq_lens_in_batch)
2698 # Forcibly mask with "blank" tokens, for all samples where current time step T > seq_len
2699 blank_mask = time_idx >= out_len
2700
2701 # Start inner loop
2702 while need_to_stay and (self.max_symbols is None or symbols_added < self.max_symbols):
2703 # Batch prediction and joint network steps
2704 # If very first prediction step, submit SOS tag (blank) to pred_step.
2705 # This feeds a zero tensor as input to AbstractRNNTDecoder to prime the state
2706 if time_idx == 0 and symbols_added == 0 and hidden is None:
2707 g, hidden_prime = self._pred_step(self._SOS, hidden, batch_size=batchsize)
2708 else:
2709 # Perform batch step prediction of decoder, getting new states and scores ("g")
2710 g, hidden_prime = self._pred_step(last_label, hidden, batch_size=batchsize)
2711
2712 # Batched joint step - Output = [B, V + 1 + num-big-blanks]
2713 # Note: log_normalize must not be True here since the joiner output is a concatenation of both token logits and duration logits,
2714 # and they need to be normalized independently.
2715 joined = self._joint_step(f, g, log_normalize=None)
2716 logp = joined[:, 0, 0, : -len(self.durations)]
2717 duration_logp = joined[:, 0, 0, -len(self.durations) :]
2718
2719 if logp.dtype != torch.float32:
2720 logp = logp.float()
2721 duration_logp = duration_logp.float()
2722
2723 # get the max for both token and duration predictions.
2724 v, k = logp.max(1)
2725 dv, dk = duration_logp.max(1)
2726
2727 # here we set the skip value to be the minimum of all predicted durations, hence the "torch.min(dk)" call.
2728 # Please refer to Section 5.2 of our paper https://arxiv.org/pdf/2304.06795.pdf for explanation of this.
2729 skip = self.durations[int(torch.min(dk))]
2730
2731 # this is a special case: if all samples in the batch emit blanks, we require that skip be at least 1
2732 # so we don't loop forever at the current frame.
2733 if blank_mask.all():
2734 if skip == 0:
2735 skip = 1
2736
2737 need_to_stay = skip == 0
2738 del g
2739
2740 # Update blank mask with current predicted blanks
2741 # This is accumulating blanks over all time steps T and all target steps min(max_symbols, U)
2742 k_is_blank = k == self._blank_index
2743 blank_mask.bitwise_or_(k_is_blank)
2744
2745 del k_is_blank
2746 del logp, duration_logp
2747
2748 # If all samples have predicted blank (now or in the past), exit the loop early
2749 # This is the batched equivalent of a single sample predicting blank
2750 if not blank_mask.all():
2751 # Collect batch indices where blanks occurred now/past
2752 blank_indices = (blank_mask == 1).nonzero(as_tuple=False)
2753
2754 # Recover prior state for all samples which predicted blank now/past
2755 if hidden is not None:
2756 hidden_prime = self.decoder.batch_copy_states(hidden_prime, hidden, blank_indices)
2757
2758 elif len(blank_indices) > 0 and hidden is None:
2759 # Reset state if there were some blank and other non-blank predictions in batch
2760 # Original state is filled with zeros so we just multiply
2761 # LSTM has 2 states
2762 hidden_prime = self.decoder.batch_copy_states(hidden_prime, None, blank_indices, value=0.0)
2763
2764 # Recover prior predicted label for all samples which predicted blank now/past
2765 k[blank_indices] = last_label[blank_indices, 0]
2766
2767 # Update new label and hidden state for next iteration
2768 last_label = k.clone().view(-1, 1)
2769 hidden = hidden_prime
2770
2771 # Update predicted labels, accounting for time mask
2772 # If blank was predicted even once, now or in the past,
2773 # Force the current predicted label to also be blank
2774 # This ensures that blanks propagate across all timesteps
2775 # once they have occurred (normally the stopping condition of the sample-level loop).
2776 for kidx, ki in enumerate(k):
2777 if blank_mask[kidx] == 0:
2778 hypotheses[kidx].y_sequence.append(ki)
2779 hypotheses[kidx].timestep.append(time_idx)
2780 hypotheses[kidx].score += float(v[kidx])
2781
2782 symbols_added += 1
2783
2784 # Remove trailing empty list of alignments at T_{am-len} x Uj
2785 if self.preserve_alignments:
2786 for batch_idx in range(batchsize):
2787 if len(hypotheses[batch_idx].alignments[-1]) == 0:
2788 del hypotheses[batch_idx].alignments[-1]
2789
2790 # Remove trailing empty list of confidence scores at T_{am-len} x Uj
2791 if self.preserve_frame_confidence:
2792 for batch_idx in range(batchsize):
2793 if len(hypotheses[batch_idx].frame_confidence[-1]) == 0:
2794 del hypotheses[batch_idx].frame_confidence[-1]
2795
2796 # Preserve states
2797 for batch_idx in range(batchsize):
2798 hypotheses[batch_idx].dec_state = self.decoder.batch_select_state(hidden, batch_idx)
2799
2800 return hypotheses
2801
2802 def _greedy_decode_masked(
2803 self,
2804 x: torch.Tensor,
2805 out_len: torch.Tensor,
2806 device: torch.device,
2807 partial_hypotheses: Optional[List[rnnt_utils.Hypothesis]] = None,
2808 ):
2809 raise NotImplementedError("masked greedy-batched decode is not supported for TDT models.")
2810
[end of nemo/collections/asr/parts/submodules/rnnt_greedy_decoding.py]
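The frame-advancement rule used by the batched TDT decoder above can be sketched in isolation. This is a toy function, not NeMo API: the batch advances by the minimum predicted duration across samples, and a one-frame advance is forced when every sample emitted blank with duration 0.

```python
# Toy sketch of the batched TDT skip rule described in the comments of
# GreedyBatchedTDTInfer._greedy_decode_blank_as_pad: the whole batch advances
# by the minimum predicted duration; if all samples emitted blank and that
# minimum is 0, skip is forced to 1 so decoding always makes progress.
def next_skip(predicted_durations, all_blank):
    skip = min(predicted_durations)
    if all_blank and skip == 0:
        skip = 1  # avoid looping forever on the current frame
    return skip
```

A `skip` of 0 with at least one non-blank emission keeps the decoder on the current frame (the `need_to_stay` case), while any positive minimum duration jumps that many frames ahead.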
[start of nemo/collections/asr/parts/utils/asr_confidence_utils.py]
1 # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import math
16 from abc import ABC, abstractmethod
17 from dataclasses import dataclass
18 from functools import partial
19 from typing import List, Optional
20
21 import torch
22 from omegaconf import DictConfig, OmegaConf
23
24 from nemo.collections.asr.parts.utils.rnnt_utils import Hypothesis
25 from nemo.utils import logging
26
27
28 class ConfidenceMeasureConstants:
29 NAMES = ("max_prob", "entropy")
30 ENTROPY_TYPES = ("gibbs", "tsallis", "renyi")
31 ENTROPY_NORMS = ("lin", "exp")
32
33 @classmethod
34 def print(cls):
35 return (
36 cls.__name__
37 + ": "
38 + str({"NAMES": cls.NAMES, "ENTROPY_TYPES": cls.ENTROPY_TYPES, "ENTROPY_NORMS": cls.ENTROPY_NORMS})
39 )
40
41
42 class ConfidenceConstants:
43 AGGREGATIONS = ("mean", "min", "max", "prod")
44
45 @classmethod
46 def print(cls):
47 return cls.__name__ + ": " + str({"AGGREGATIONS": cls.AGGREGATIONS})
48
49
50 @dataclass
51 class ConfidenceMeasureConfig:
52 """A Config which contains the measure name and settings to compute per-frame confidence scores.
53
54 Args:
55 name: The measure name (str).
56 Supported values:
57 - 'max_prob' for using the maximum token probability as a confidence.
58 - 'entropy' for using a normalized entropy of a log-likelihood vector.
59
60 entropy_type: Which type of entropy to use (str).
61 Used if confidence_measure_cfg.name is set to `entropy`.
62 Supported values:
63 - 'gibbs' for the (standard) Gibbs entropy. If the alpha (α) is provided,
64 the formula is the following: H_α = -sum_i((p^α_i)*log(p^α_i)).
65 Note that for this entropy, the alpha should comply with the following inequality:
66 (log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= α <= (1+log(V-1))/log(V-1)
67 where V is the model vocabulary size.
68 - 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
69 Tsallis entropy formula is the following: H_α = 1/(α-1)*(1-sum_i(p^α_i)),
70 where α is a parameter. When α == 1, it works like the Gibbs entropy.
71 More: https://en.wikipedia.org/wiki/Tsallis_entropy
72 - 'renyi' for the Rényi entropy.
73 Rényi entropy formula is the following: H_α = 1/(1-α)*log_2(sum_i(p^α_i)),
74 where α is a parameter. When α == 1, it works like the Gibbs entropy.
75 More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
76
77 alpha: Power scale for logsoftmax (α for entropies). Here we restrict it to be > 0.
78 When the alpha equals one, scaling is not applied to 'max_prob',
79 and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
80
81 entropy_norm: A mapping of the entropy value to the interval [0,1].
82 Supported values:
83 - 'lin' for using the linear mapping.
84 - 'exp' for using exponential mapping with linear shift.
85 """
86
87 name: str = "entropy"
88 entropy_type: str = "tsallis"
89 alpha: float = 0.33
90 entropy_norm: str = "exp"
91 temperature: str = "DEPRECATED"
92
93 def __post_init__(self):
94 if self.temperature != "DEPRECATED":
95 logging.warning(
96 "`temperature` is deprecated and will be removed in the future. Please use `alpha` instead."
97 )
98
99 # TODO (alaptev): delete the following two lines sometime in the future
100 logging.warning("Re-writing `alpha` with the value of `temperature`.")
101 # self.temperature has type str
102 self.alpha = float(self.temperature)
103 self.temperature = "DEPRECATED"
104 if self.name not in ConfidenceMeasureConstants.NAMES:
105 raise ValueError(
106 f"`name` must be one of the following: "
107 f"{'`' + '`, `'.join(ConfidenceMeasureConstants.NAMES) + '`'}. Provided: `{self.name}`"
108 )
109 if self.entropy_type not in ConfidenceMeasureConstants.ENTROPY_TYPES:
110 raise ValueError(
111 f"`entropy_type` must be one of the following: "
112 f"{'`' + '`, `'.join(ConfidenceMeasureConstants.ENTROPY_TYPES) + '`'}. Provided: `{self.entropy_type}`"
113 )
114 if self.alpha <= 0.0:
115 raise ValueError(f"`alpha` must be > 0. Provided: {self.alpha}")
116 if self.entropy_norm not in ConfidenceMeasureConstants.ENTROPY_NORMS:
117 raise ValueError(
118 f"`entropy_norm` must be one of the following: "
119 f"{'`' + '`, `'.join(ConfidenceMeasureConstants.ENTROPY_NORMS) + '`'}. Provided: `{self.entropy_norm}`"
120 )
121
122
123 @dataclass
124 class ConfidenceConfig:
125 """A config which contains the following key-value pairs related to confidence scores.
126
127 Args:
128 preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores
129 generated during decoding. When set to true, the Hypothesis will contain
130 the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of floats.
131 preserve_token_confidence: Bool flag which preserves the history of per-token confidence scores
132 generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
133 the non-null value for `token_confidence` in it. Here, `token_confidence` is a List of floats.
134
135 The length of the list corresponds to the number of recognized tokens.
136 preserve_word_confidence: Bool flag which preserves the history of per-word confidence scores
137 generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
138 the non-null value for `word_confidence` in it. Here, `word_confidence` is a List of floats.
139
140 The length of the list corresponds to the number of recognized words.
141 exclude_blank: Bool flag indicating that blank token confidence scores are to be excluded
142 from the `token_confidence`.
143 aggregation: Which aggregation type to use for collapsing per-token confidence into per-word confidence.
144 Valid options are `mean`, `min`, `max`, `prod`.
145 measure_cfg: A dict-like object which contains the measure name and settings to compute per-frame
146 confidence scores.
147
148 name: The measure name (str).
149 Supported values:
150 - 'max_prob' for using the maximum token probability as a confidence.
151 - 'entropy' for using a normalized entropy of a log-likelihood vector.
152
153 entropy_type: Which type of entropy to use (str). Used if confidence_measure_cfg.name is set to `entropy`.
154 Supported values:
155 - 'gibbs' for the (standard) Gibbs entropy. If the alpha (α) is provided,
156 the formula is the following: H_α = -sum_i((p^α_i)*log(p^α_i)).
157 Note that for this entropy, the alpha should comply with the following inequality:
158 (log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= α <= (1+log(V-1))/log(V-1)
159 where V is the model vocabulary size.
160 - 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
161 Tsallis entropy formula is the following: H_α = 1/(α-1)*(1-sum_i(p^α_i)),
162 where α is a parameter. When α == 1, it works like the Gibbs entropy.
163 More: https://en.wikipedia.org/wiki/Tsallis_entropy
164 - 'renyi' for the Rényi entropy.
165 Rényi entropy formula is the following: H_α = 1/(1-α)*log_2(sum_i(p^α_i)),
166 where α is a parameter. When α == 1, it works like the Gibbs entropy.
167 More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
168
169 alpha: Power scale for logsoftmax (α for entropies). Here we restrict it to be > 0.
170 When the alpha equals one, scaling is not applied to 'max_prob',
171 and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
172
173 entropy_norm: A mapping of the entropy value to the interval [0,1].
174 Supported values:
175 - 'lin' for using the linear mapping.
176 - 'exp' for using exponential mapping with linear shift.
177 """
178
179 preserve_frame_confidence: bool = False
180 preserve_token_confidence: bool = False
181 preserve_word_confidence: bool = False
182 exclude_blank: bool = True
183 aggregation: str = "min"
184 measure_cfg: ConfidenceMeasureConfig = ConfidenceMeasureConfig()
185 method_cfg: str = "DEPRECATED"
186
187 def __post_init__(self):
188 # OmegaConf.structured ensures that post_init check is always executed
189 self.measure_cfg = OmegaConf.structured(
190 self.measure_cfg
191 if isinstance(self.measure_cfg, ConfidenceMeasureConfig)
192 else ConfidenceMeasureConfig(**self.measure_cfg)
193 )
194 if self.method_cfg != "DEPRECATED":
195 logging.warning(
196 "`method_cfg` is deprecated and will be removed in the future. Please use `measure_cfg` instead."
197 )
198
199 # TODO (alaptev): delete the following two lines sometime in the future
200 logging.warning("Re-writing `measure_cfg` with the value of `method_cfg`.")
201 # OmegaConf.structured ensures that post_init check is always executed
202 self.measure_cfg = OmegaConf.structured(
203 self.method_cfg
204 if isinstance(self.method_cfg, ConfidenceMeasureConfig)
205 else ConfidenceMeasureConfig(**self.method_cfg)
206 )
207 self.method_cfg = "DEPRECATED"
208 if self.aggregation not in ConfidenceConstants.AGGREGATIONS:
209 raise ValueError(
210 f"`aggregation` has to be one of the following: "
211 f"{'`' + '`, `'.join(ConfidenceConstants.AGGREGATIONS) + '`'}. Provided: `{self.aggregation}`"
212 )
213
214
215 def get_confidence_measure_bank():
216 """Generate a dictionary with confidence measure functionals.
217
218 Supported confidence measures:
219 max_prob: normalized maximum probability
220 entropy_gibbs_lin: Gibbs entropy with linear normalization
221 entropy_gibbs_exp: Gibbs entropy with exponential normalization
222 entropy_tsallis_lin: Tsallis entropy with linear normalization
223 entropy_tsallis_exp: Tsallis entropy with exponential normalization
224 entropy_renyi_lin: Rényi entropy with linear normalization
225 entropy_renyi_exp: Rényi entropy with exponential normalization
226
227 Returns:
228 dictionary with lambda functions.
229 """
230 # helper functions
231 # Gibbs entropy is implemented without alpha
232 neg_entropy_gibbs = lambda x: (x.exp() * x).sum(-1)
233 neg_entropy_alpha = lambda x, t: (x * t).exp().sum(-1)
234 neg_entropy_alpha_gibbs = lambda x, t: ((x * t).exp() * x).sum(-1)
235 # too big for a lambda
236 def entropy_tsallis_exp(x, v, t):
237 exp_neg_max_ent = math.exp((1 - math.pow(v, 1 - t)) / (1 - t))
238 return (((1 - neg_entropy_alpha(x, t)) / (1 - t)).exp() - exp_neg_max_ent) / (1 - exp_neg_max_ent)
239
240 def entropy_gibbs_exp(x, v, t):
241 exp_neg_max_ent = math.pow(v, -t * math.pow(v, 1 - t))
242 return ((neg_entropy_alpha_gibbs(x, t) * t).exp() - exp_neg_max_ent) / (1 - exp_neg_max_ent)
243
244 # use Gibbs entropies for Tsallis and Rényi with t == 1.0
245 entropy_gibbs_lin_baseline = lambda x, v: 1 + neg_entropy_gibbs(x) / math.log(v)
246 entropy_gibbs_exp_baseline = lambda x, v: (neg_entropy_gibbs(x).exp() * v - 1) / (v - 1)
247 # fill the measure bank
248 confidence_measure_bank = {}
249 # Maximum probability measure is implemented without alpha
250 confidence_measure_bank["max_prob"] = (
251 lambda x, v, t: (x.max(dim=-1)[0].exp() * v - 1) / (v - 1)
252 if t == 1.0
253 else ((x.max(dim=-1)[0] * t).exp() * math.pow(v, t) - 1) / (math.pow(v, t) - 1)
254 )
255 confidence_measure_bank["entropy_gibbs_lin"] = (
256 lambda x, v, t: entropy_gibbs_lin_baseline(x, v)
257 if t == 1.0
258 else 1 + neg_entropy_alpha_gibbs(x, t) / math.log(v) / math.pow(v, 1 - t)
259 )
260 confidence_measure_bank["entropy_gibbs_exp"] = (
261 lambda x, v, t: entropy_gibbs_exp_baseline(x, v) if t == 1.0 else entropy_gibbs_exp(x, v, t)
262 )
263 confidence_measure_bank["entropy_tsallis_lin"] = (
264 lambda x, v, t: entropy_gibbs_lin_baseline(x, v)
265 if t == 1.0
266 else 1 + (1 - neg_entropy_alpha(x, t)) / (math.pow(v, 1 - t) - 1)
267 )
268 confidence_measure_bank["entropy_tsallis_exp"] = (
269 lambda x, v, t: entropy_gibbs_exp_baseline(x, v) if t == 1.0 else entropy_tsallis_exp(x, v, t)
270 )
271 confidence_measure_bank["entropy_renyi_lin"] = (
272 lambda x, v, t: entropy_gibbs_lin_baseline(x, v)
273 if t == 1.0
274 else 1 + neg_entropy_alpha(x, t).log2() / (t - 1) / math.log(v, 2)
275 )
276 confidence_measure_bank["entropy_renyi_exp"] = (
277 lambda x, v, t: entropy_gibbs_exp_baseline(x, v)
278 if t == 1.0
279 else (neg_entropy_alpha(x, t).pow(1 / (t - 1)) * v - 1) / (v - 1)
280 )
281 return confidence_measure_bank
282
283
284 def get_confidence_aggregation_bank():
285 """Generate a dictionary with confidence aggregation functions.
286
287 Supported confidence measures:
288 min: minimum
289 max: maximum
290 mean: arithmetic mean
291 prod: product
292
293 Returns:
294 dictionary with functions.
295 """
296 confidence_aggregation_bank = {"mean": lambda x: sum(x) / len(x), "min": min, "max": max}
297 # python 3.7 and earlier do not have math.prod
298 if hasattr(math, "prod"):
299 confidence_aggregation_bank["prod"] = math.prod
300 else:
301 import operator
302 from functools import reduce
303
304 confidence_aggregation_bank["prod"] = lambda x: reduce(operator.mul, x, 1)
305 return confidence_aggregation_bank
306
307
308 class ConfidenceMeasureMixin(ABC):
309 """Confidence Measure Mixin class.
310
311 It initializes per-frame confidence measure.
312 """
313
314 def _init_confidence_measure(self, confidence_measure_cfg: Optional[DictConfig] = None):
315 """Initialize per-frame confidence measure from config.
316 """
317 # OmegaConf.structured ensures that post_init check is always executed
318 confidence_measure_cfg = OmegaConf.structured(
319 ConfidenceMeasureConfig()
320 if confidence_measure_cfg is None
321 else ConfidenceMeasureConfig(**confidence_measure_cfg)
322 )
323
324 # set confidence calculation measure
325 # we suppose that self.blank_id == len(vocabulary)
326 self.num_tokens = (self.blank_id if hasattr(self, "blank_id") else self._blank_index) + 1
327 self.alpha = confidence_measure_cfg.alpha
328
329 # init confidence measure bank
330 self.confidence_measure_bank = get_confidence_measure_bank()
331
332 measure = None
333 # construct measure_name
334 measure_name = ""
335 if confidence_measure_cfg.name == "max_prob":
336 measure_name = "max_prob"
337 elif confidence_measure_cfg.name == "entropy":
338 measure_name = '_'.join(
339 [confidence_measure_cfg.name, confidence_measure_cfg.entropy_type, confidence_measure_cfg.entropy_norm]
340 )
341 else:
342 raise ValueError(f"Unsupported `confidence_measure_cfg.name`: `{confidence_measure_cfg.name}`")
343 if measure_name not in self.confidence_measure_bank:
344 raise ValueError(f"Unsupported measure setup: `{measure_name}`")
345 measure = partial(self.confidence_measure_bank[measure_name], v=self.num_tokens, t=self.alpha)
346 self._get_confidence = lambda x: measure(torch.nan_to_num(x)).tolist()
347
348
349 class ConfidenceMixin(ABC):
350 """Confidence Mixin class.
351
352 It is responsible for confidence estimation method initialization and high-level confidence score calculation.
353 """
354
355 def _init_confidence(self, confidence_cfg: Optional[DictConfig] = None):
356 """Initialize confidence-related fields and confidence aggregation function from config.
357 """
358 # OmegaConf.structured ensures that post_init check is always executed
359 confidence_cfg = OmegaConf.structured(
360 ConfidenceConfig() if confidence_cfg is None else ConfidenceConfig(**confidence_cfg)
361 )
362 self.confidence_measure_cfg = confidence_cfg.measure_cfg
363
364 # extract the config
365 self.preserve_word_confidence = confidence_cfg.get('preserve_word_confidence', False)
366 # set preserve_frame_confidence and preserve_token_confidence to True
367 # if preserve_word_confidence is True
368 self.preserve_token_confidence = (
369 confidence_cfg.get('preserve_token_confidence', False) | self.preserve_word_confidence
370 )
371 # set preserve_frame_confidence to True if preserve_token_confidence is True
372 self.preserve_frame_confidence = (
373 confidence_cfg.get('preserve_frame_confidence', False) | self.preserve_token_confidence
374 )
375 self.exclude_blank_from_confidence = confidence_cfg.get('exclude_blank', True)
376 self.word_confidence_aggregation = confidence_cfg.get('aggregation', "min")
377
378 # define aggregation functions
379 self.confidence_aggregation_bank = get_confidence_aggregation_bank()
380 self._aggregate_confidence = self.confidence_aggregation_bank[self.word_confidence_aggregation]
381
382 # Update preserve frame confidence
383 if self.preserve_frame_confidence is False:
384 if self.cfg.strategy in ['greedy', 'greedy_batch']:
385 self.preserve_frame_confidence = self.cfg.greedy.get('preserve_frame_confidence', False)
386 # OmegaConf.structured ensures that post_init check is always executed
387 confidence_measure_cfg = OmegaConf.structured(self.cfg.greedy).get('confidence_measure_cfg', None)
388 self.confidence_measure_cfg = (
389 OmegaConf.structured(ConfidenceMeasureConfig())
390 if confidence_measure_cfg is None
391 else OmegaConf.structured(ConfidenceMeasureConfig(**confidence_measure_cfg))
392 )
393
394 @abstractmethod
395 def compute_confidence(self, hypotheses_list: List[Hypothesis]) -> List[Hypothesis]:
396 """Computes high-level (per-token and/or per-word) confidence scores for a list of hypotheses.
397 Assumes that `frame_confidence` is present in the hypotheses.
398
399 Args:
400 hypotheses_list: List of Hypothesis.
401
402 Returns:
403 A list of hypotheses with high-level confidence scores.
404 """
405 raise NotImplementedError()
406
407 @abstractmethod
408 def _aggregate_token_confidence(self, hypothesis: Hypothesis) -> List[float]:
409 """Implemented by subclass in order to aggregate token confidence to a word-level confidence.
410
411 Args:
412 hypothesis: Hypothesis
413
414 Returns:
415 A list of word-level confidence scores.
416 """
417 raise NotImplementedError()
418
419 def _aggregate_token_confidence_chars(self, words: List[str], token_confidence: List[float]) -> List[float]:
420 """Implementation of token confidence aggregation for character-based models.
421
422 Args:
423 words: List of words of a hypothesis.
424 token_confidence: List of token-level confidence scores of a hypothesis.
425
426 Returns:
427 A list of word-level confidence scores.
428 """
429 word_confidence = []
430 i = 0
431 for word in words:
432 word_len = len(word)
433 word_confidence.append(self._aggregate_confidence(token_confidence[i : i + word_len]))
434 # we assume that there is exactly one space token between words and exclude it from word confidence
435 i += word_len + 1
436 return word_confidence
437
438 def _aggregate_token_confidence_subwords_sentencepiece(
439 self, words: List[str], token_confidence: List[float], token_ids: List[int]
440 ) -> List[float]:
441 """Implementation of token confidence aggregation for subword-based models.
442
443 **Note**: Only supports Sentencepiece based tokenizers !
444
445 Args:
446 words: List of words of a hypothesis.
447 token_confidence: List of token-level confidence scores of a hypothesis.
448 token_ids: List of token ids of a hypothesis.
449
450 Returns:
451 A list of word-level confidence scores.
452 """
453 word_confidence = []
454 # run only if there are final words
455 if len(words) > 0:
456 j = 0
457 prev_unk = False
458 prev_underline = False
459 for i, token_id in enumerate(token_ids):
460 token = self.decode_ids_to_tokens([int(token_id)])[0]
461 token_text = self.decode_tokens_to_str([int(token_id)])
462 # treat `<unk>` as a separate word regardless of the next token
463 # to match the result of `tokenizer.ids_to_text`
464 if (token != token_text or prev_unk) and i > j:
465 # do not add confidence for `▁` if the current token starts with `▁`
466 # to match the result of `tokenizer.ids_to_text`
467 if not prev_underline:
468 word_confidence.append(self._aggregate_confidence(token_confidence[j:i]))
469 j = i
470 prev_unk = token == '<unk>'
471 prev_underline = token == '▁'
472 if not prev_underline:
473 word_confidence.append(self._aggregate_confidence(token_confidence[j : len(token_ids)]))
474 if len(words) != len(word_confidence):
475 raise RuntimeError(
476 f"""Something went wrong with word-level confidence aggregation.\n
477 Please check these values for debugging:\n
478 len(words): {len(words)},\n
479 len(word_confidence): {len(word_confidence)},\n
480 recognized text: `{' '.join(words)}`"""
481 )
482 return word_confidence
483
[end of nemo/collections/asr/parts/utils/asr_confidence_utils.py]
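To make the exponential-normalized Tsallis measure above concrete, here is a dependency-free sketch of the same formula applied to a single log-probability vector (plain `math`, no torch; the helper name `tsallis_exp_confidence` is ours):

```python
import math

def tsallis_exp_confidence(log_probs, alpha=0.33):
    """Pure-Python sketch of the `entropy_tsallis_exp` measure:
    maps one log-probability vector to a confidence score in [0, 1]."""
    v = len(log_probs)  # vocabulary size
    # exp(-max_entropy): reached by the uniform distribution, used as the lower anchor
    exp_neg_max_ent = math.exp((1 - math.pow(v, 1 - alpha)) / (1 - alpha))
    # sum_i p_i^alpha, computed from log-probabilities
    neg_entropy_alpha = sum(math.exp(lp * alpha) for lp in log_probs)
    ent = math.exp((1 - neg_entropy_alpha) / (1 - alpha))
    return (ent - exp_neg_max_ent) / (1 - exp_neg_max_ent)

# a peaked distribution scores higher than a uniform one
peaked = [math.log(0.97)] + [math.log(0.01)] * 3
uniform = [math.log(0.25)] * 4
```

The uniform vector maps to exactly 0 and a one-hot vector to 1, which is the point of the exponential normalization with linear shift.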
[start of nemo/collections/common/parts/adapter_modules.py]
1 # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from dataclasses import dataclass, is_dataclass
16 from typing import Any, Optional
17
18 from hydra.utils import instantiate
19 from omegaconf import OmegaConf
20 from torch import nn as nn
21
22 from nemo.collections.common.parts.utils import activation_registry
23 from nemo.core.classes.mixins import access_mixins, adapter_mixin_strategies
24
25
26 class AdapterModuleUtil(access_mixins.AccessMixin):
27 """
28 Base class of Adapter Modules, providing common functionality to all Adapter Modules.
29 """
30
31 def setup_adapter_strategy(self, adapter_strategy: Optional[adapter_mixin_strategies.AbstractAdapterStrategy]):
32 """
33 Setup adapter strategy of this class, enabling dynamic change in the way the adapter output is
34 merged with the input.
35
36 When called successfully, will assign the variable `adapter_strategy` to the module.
37
38 Args:
39 adapter_strategy: Can be a None or an implementation of AbstractAdapterStrategy.
40 """
41 # set default adapter strategy
42 if adapter_strategy is None:
43 adapter_strategy = self.get_default_strategy_config()
44
45 if is_dataclass(adapter_strategy):
46 adapter_strategy = OmegaConf.structured(adapter_strategy)
47 OmegaConf.set_struct(adapter_strategy, False)
48
49 # The config must have the `_target_` field pointing to the actual adapter strategy class
50 # which will load that strategy dynamically to this module.
51 if isinstance(adapter_strategy, dict) or OmegaConf.is_config(adapter_strategy):
52 self.adapter_strategy = instantiate(adapter_strategy)
53 elif isinstance(adapter_strategy, adapter_mixin_strategies.AbstractAdapterStrategy):
54 self.adapter_strategy = adapter_strategy
55 else:
56 raise AttributeError(f'`adapter_strategy` provided is invalid : {adapter_strategy}')
57
58 def get_default_strategy_config(self) -> 'dataclass':
59 """
60 Returns a default adapter module strategy.
61 """
62 return adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
63
64 def adapter_unfreeze(self,):
65 """
66 Sets the requires grad for all parameters in the adapter to True.
67 This method should be overridden for any custom unfreeze behavior that is required.
68 For example, if not all params of the adapter should be unfrozen.
69 """
70 for param in self.parameters():
71 param.requires_grad_(True)
72
73
74 class LinearAdapter(nn.Module, AdapterModuleUtil):
75
76 """
77 Simple Linear Feedforward Adapter module with LayerNorm and a single hidden layer with an activation function.
78 Note: The adapter explicitly initializes its final layer with all zeros in order to avoid affecting the
79 original model when all adapters are disabled.
80
81 Args:
82 in_features: Input dimension of the module. Note that for adapters, input_dim == output_dim.
83 dim: Hidden dimension of the feed forward network.
84 activation: Str name for an activation function.
85 norm_position: Str, can be `pre` or `post`. Defaults to `pre`. Determines whether the normalization
86 will occur in the first layer or the last layer. Certain architectures may prefer one over the other.
87 dropout: float value, whether to perform dropout on the output of the last layer of the adapter.
88 adapter_strategy: By default, ResidualAddAdapterStrategyConfig. An adapter composition function object.
89 """
90
91 def __init__(
92 self,
93 in_features: int,
94 dim: int,
95 activation: str = 'swish',
96 norm_position: str = 'pre',
97 dropout: float = 0.0,
98 adapter_strategy: adapter_mixin_strategies.ResidualAddAdapterStrategyConfig = None,
99 ):
100 super().__init__()
101
102 activation = activation_registry[activation]()
103 # If the activation can be executed in place, do so.
104 if hasattr(activation, 'inplace'):
105 activation.inplace = True
106
107 assert norm_position in ['pre', 'post']
108 self.norm_position = norm_position
109
110 if norm_position == 'pre':
111 self.module = nn.Sequential(
112 nn.LayerNorm(in_features),
113 nn.Linear(in_features, dim, bias=False),
114 activation,
115 nn.Linear(dim, in_features, bias=False),
116 )
117
118 elif norm_position == 'post':
119 self.module = nn.Sequential(
120 nn.Linear(in_features, dim, bias=False),
121 activation,
122 nn.Linear(dim, in_features, bias=False),
123 nn.LayerNorm(in_features),
124 )
125
126 if dropout > 0.0:
127 self.dropout = nn.Dropout(dropout)
128 else:
129 self.dropout = None
130
131 # Setup adapter strategy
132 self.setup_adapter_strategy(adapter_strategy)
133
134 # reset parameters
135 self.reset_parameters()
136
137 def reset_parameters(self):
138 # Final layer initializations must be 0
139 if self.norm_position == 'pre':
140 self.module[-1].weight.data *= 0
141
142 elif self.norm_position == 'post':
143 self.module[-1].weight.data *= 0
144 self.module[-1].bias.data *= 0
145
146 def forward(self, x):
147 x = self.module(x)
148
149 # Add dropout if available
150 if self.dropout is not None:
151 x = self.dropout(x)
152
153 return x
154
155
156 @dataclass
157 class LinearAdapterConfig:
158 in_features: int
159 dim: int
160 activation: str = 'swish'
161 norm_position: str = 'pre'
162 dropout: float = 0.0
163 adapter_strategy: Optional[Any] = adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
164 _target_: str = "{0}.{1}".format(LinearAdapter.__module__, LinearAdapter.__name__)
165
[end of nemo/collections/common/parts/adapter_modules.py]
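The zero-initialized final layer in `reset_parameters` is what makes a freshly attached adapter a no-op under the residual-add strategy. A minimal dependency-free sketch of that property (plain lists instead of tensors; all names here are illustrative):

```python
def matvec(w, x):
    # matrix-vector product over plain lists
    return [sum(wij * xj for wij, xj in zip(row, x)) for row in w]

def adapter_forward(x, w_in, w_out):
    # two linear layers with a ReLU stand-in for the activation
    hidden = [max(0.0, h) for h in matvec(w_in, x)]
    return matvec(w_out, hidden)

def residual_add(x, w_in, w_out):
    # the ResidualAddAdapterStrategy composition: input + adapter(input)
    return [xi + ai for xi, ai in zip(x, adapter_forward(x, w_in, w_out))]

x = [1.0, -2.0]
w_in = [[0.5, 0.1], [0.3, -0.2]]
w_zero = [[0.0, 0.0], [0.0, 0.0]]  # final layer as zeroed by reset_parameters
```

With `w_zero`, the adapter branch emits zeros and the residual path returns the input unchanged, so enabling an untrained adapter cannot perturb the base model.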
[start of nemo/collections/common/tokenizers/en_ja_tokenizers.py]
1 # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 import re
15 from typing import List
16
17 import ipadic
18 import MeCab
19 from pangu import spacing
20 from sacremoses import MosesDetokenizer, MosesPunctNormalizer, MosesTokenizer
21
22
23 class EnJaProcessor:
24 """
25 Tokenizer, Detokenizer and Normalizer utilities for Japanese & English
26 Args:
27 lang_id: One of ['en', 'ja'].
28 """
29
30 def __init__(self, lang_id: str):
31 self.lang_id = lang_id
32 self.moses_tokenizer = MosesTokenizer(lang=lang_id)
33 self.moses_detokenizer = MosesDetokenizer(lang=lang_id)
34 self.normalizer = MosesPunctNormalizer(
35 lang=lang_id, pre_replace_unicode_punct=True, post_remove_control_chars=True
36 )
37
38 def detokenize(self, tokens: List[str]) -> str:
39 """
40 Detokenizes a list of tokens
41 Args:
42 tokens: list of strings as tokens
43 Returns:
44 detokenized Japanese or English string
45 """
46 return self.moses_detokenizer.detokenize(tokens)
47
48 def tokenize(self, text) -> str:
49 """
50 Tokenizes text using Moses. Returns a string of tokens.
51 """
52 tokens = self.moses_tokenizer.tokenize(text)
53 return ' '.join(tokens)
54
55 def normalize(self, text) -> str:
56 # Normalization doesn't handle Japanese periods correctly;
57 # '。' becomes '.'.
58 if self.lang_id == 'en':
59 return self.normalizer.normalize(text)
60 else:
61 return text
62
63
64 class JaMecabProcessor:
65 """
66 Tokenizer, Detokenizer and Normalizer utilities for Japanese MeCab & English
67 """
68
69 def __init__(self):
70 self.mecab_tokenizer = MeCab.Tagger(ipadic.MECAB_ARGS + " -Owakati")
71
72 def detokenize(self, text: List[str]) -> str:
73 RE_WS_IN_FW = re.compile(
74 r'([\u2018\u2019\u201c\u201d\u2e80-\u312f\u3200-\u32ff\u3400-\u4dbf\u4e00-\u9fff\uf900-\ufaff\uff00-\uffef])\s+(?=[\u2018\u2019\u201c\u201d\u2e80-\u312f\u3200-\u32ff\u3400-\u4dbf\u4e00-\u9fff\uf900-\ufaff\uff00-\uffef])'
75 )
76
77 detokenize = lambda s: spacing(RE_WS_IN_FW.sub(r'\1', s)).strip()
78 return detokenize(' '.join(text))
79
80 def tokenize(self, text) -> str:
81 """
82 Tokenizes text using MeCab. Returns a string of tokens.
83 """
84 return self.mecab_tokenizer.parse(text).strip()
85
86 def normalize(self, text) -> str:
87 return text
88
[end of nemo/collections/common/tokenizers/en_ja_tokenizers.py]
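The `detokenize` step above removes the spaces that word-level tokenization inserts between full-width characters. A trimmed sketch of that regex step (restricted here to the kana and CJK Unified Ideographs ranges; `join_ja_tokens` is our name, and the `pangu.spacing` pass is omitted):

```python
import re

# drop whitespace between two full-width characters, as RE_WS_IN_FW does
RE_WS_IN_FW = re.compile(
    r'([\u3040-\u30ff\u4e00-\u9fff])\s+(?=[\u3040-\u30ff\u4e00-\u9fff])'
)

def join_ja_tokens(tokens):
    # join tokens with spaces, then collapse the spaces between CJK characters
    return RE_WS_IN_FW.sub(r'\1', ' '.join(tokens)).strip()
```

The lookahead keeps the match position on the second character, so runs of more than two CJK tokens collapse in a single pass, while spaces next to ASCII text survive.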
[start of nemo/collections/nlp/models/machine_translation/mt_enc_dec_config.py]
1 # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from dataclasses import dataclass
16 from typing import Any, Optional, Tuple
17
18 from omegaconf.omegaconf import MISSING
19
20 from nemo.collections.nlp.data.machine_translation.machine_translation_dataset import TranslationDataConfig
21 from nemo.collections.nlp.models.enc_dec_nlp_model import EncDecNLPModelConfig
22 from nemo.collections.nlp.modules.common.token_classifier import TokenClassifierConfig
23 from nemo.collections.nlp.modules.common.tokenizer_utils import TokenizerConfig
24 from nemo.collections.nlp.modules.common.transformer.transformer import (
25 NeMoTransformerConfig,
26 NeMoTransformerEncoderConfig,
27 )
28 from nemo.collections.nlp.modules.common.transformer.transformer_bottleneck import (
29 NeMoTransformerBottleneckDecoderConfig,
30 NeMoTransformerBottleneckEncoderConfig,
31 )
32 from nemo.core.config.modelPT import OptimConfig, SchedConfig
33
34
35 @dataclass
36 class MTSchedConfig(SchedConfig):
37 name: str = 'InverseSquareRootAnnealing'
38 warmup_ratio: Optional[float] = None
39 last_epoch: int = -1
40
41
42 # TODO: Refactor this dataclass to support more optimizers (it pins the optimizer to Adam-like optimizers).
43 @dataclass
44 class MTOptimConfig(OptimConfig):
45 name: str = 'adam'
46 lr: float = 1e-3
47 betas: Tuple[float, float] = (0.9, 0.98)
48 weight_decay: float = 0.0
49 sched: Optional[MTSchedConfig] = MTSchedConfig()
50
51
52 @dataclass
53 class MTEncDecModelConfig(EncDecNLPModelConfig):
54 # machine translation configurations
55 num_val_examples: int = 3
56 num_test_examples: int = 3
57 max_generation_delta: int = 10
58 label_smoothing: Optional[float] = 0.0
59 beam_size: int = 4
60 len_pen: float = 0.0
61 src_language: Any = 'en' # Any = str or List[str]
62 tgt_language: Any = 'en' # Any = str or List[str]
63 find_unused_parameters: Optional[bool] = True
64 shared_tokenizer: Optional[bool] = True
65 multilingual: Optional[bool] = False
66 preproc_out_dir: Optional[str] = None
67 validate_input_ids: Optional[bool] = True
68 shared_embeddings: bool = False
69
70 # network architecture configuration
71 encoder_tokenizer: Any = MISSING
72 encoder: Any = MISSING
73
74 decoder_tokenizer: Any = MISSING
75 decoder: Any = MISSING
76
77 head: TokenClassifierConfig = TokenClassifierConfig(log_softmax=True)
78
79 # dataset configurations
80 train_ds: Optional[TranslationDataConfig] = TranslationDataConfig(
81 src_file_name=MISSING,
82 tgt_file_name=MISSING,
83 tokens_in_batch=512,
84 clean=True,
85 shuffle=True,
86 cache_ids=False,
87 use_cache=False,
88 )
89 validation_ds: Optional[TranslationDataConfig] = TranslationDataConfig(
90 src_file_name=MISSING,
91 tgt_file_name=MISSING,
92 tokens_in_batch=512,
93 clean=False,
94 shuffle=False,
95 cache_ids=False,
96 use_cache=False,
97 )
98 test_ds: Optional[TranslationDataConfig] = TranslationDataConfig(
99 src_file_name=MISSING,
100 tgt_file_name=MISSING,
101 tokens_in_batch=512,
102 clean=False,
103 shuffle=False,
104 cache_ids=False,
105 use_cache=False,
106 )
107 optim: Optional[OptimConfig] = MTOptimConfig()
108
109
110 @dataclass
111 class AAYNBaseConfig(MTEncDecModelConfig):
112
113 # Attention is All You Need Base Configuration
114 encoder_tokenizer: TokenizerConfig = TokenizerConfig(library='yttm')
115 decoder_tokenizer: TokenizerConfig = TokenizerConfig(library='yttm')
116
117 encoder: NeMoTransformerEncoderConfig = NeMoTransformerEncoderConfig(
118 library='nemo',
119 model_name=None,
120 pretrained=False,
121 hidden_size=512,
122 inner_size=2048,
123 num_layers=6,
124 num_attention_heads=8,
125 ffn_dropout=0.1,
126 attn_score_dropout=0.1,
127 attn_layer_dropout=0.1,
128 )
129
130 decoder: NeMoTransformerConfig = NeMoTransformerConfig(
131 library='nemo',
132 model_name=None,
133 pretrained=False,
134 hidden_size=512,
135 inner_size=2048,
136 num_layers=6,
137 num_attention_heads=8,
138 ffn_dropout=0.1,
139 attn_score_dropout=0.1,
140 attn_layer_dropout=0.1,
141 )
142
143
144 @dataclass
145 class MTBottleneckModelConfig(AAYNBaseConfig):
146 model_type: str = 'nll'
147 min_logv: float = -6
148 latent_size: int = -1 # -1 will take value of encoder hidden
149 non_recon_warmup_batches: int = 200000
150 recon_per_token: bool = True
151 log_timing: bool = True
152
153 encoder: NeMoTransformerBottleneckEncoderConfig = NeMoTransformerBottleneckEncoderConfig(
154 library='nemo',
155 model_name=None,
156 pretrained=False,
157 hidden_size=512,
158 inner_size=2048,
159 num_layers=6,
160 num_attention_heads=8,
161 ffn_dropout=0.1,
162 attn_score_dropout=0.1,
163 attn_layer_dropout=0.1,
164 arch='seq2seq',
165 hidden_steps=32,
166 hidden_blocks=1,
167 hidden_init_method='params',
168 )
169
170 decoder: NeMoTransformerBottleneckDecoderConfig = NeMoTransformerBottleneckDecoderConfig(
171 library='nemo',
172 model_name=None,
173 pretrained=False,
174 inner_size=2048,
175 num_layers=6,
176 num_attention_heads=8,
177 ffn_dropout=0.1,
178 attn_score_dropout=0.1,
179 attn_layer_dropout=0.1,
180 arch='seq2seq',
181 )
182
[end of nemo/collections/nlp/models/machine_translation/mt_enc_dec_config.py]
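The MT configs above are plain Python dataclasses that OmegaConf later promotes to structured configs. The pattern can be sketched with stdlib-only stand-ins (the class names below are hypothetical, not the real NeMo classes); note that outside of OmegaConf, nested dataclass defaults are idiomatically expressed with `default_factory` so instances do not share mutable state:

```python
from dataclasses import dataclass, field, replace
from typing import Optional


@dataclass
class TokenizerCfg:
    # Mirrors the shape of TokenizerConfig(library='yttm') used above
    library: str = 'yttm'
    tokenizer_model: Optional[str] = None


@dataclass
class EncDecCfg:
    hidden_size: int = 512
    # default_factory gives each instance its own nested config object
    encoder_tokenizer: TokenizerCfg = field(default_factory=TokenizerCfg)


base = EncDecCfg()
# Deriving a variant config by overriding a field, similar in spirit to
# how AAYNBaseConfig specializes MTEncDecModelConfig defaults
large = replace(base, hidden_size=1024)
print(base.hidden_size, large.hidden_size)  # 512 1024
```

With `OmegaConf.structured(...)` the direct `TokenizerConfig(...)` defaults used in the NeMo file are safe, since OmegaConf copies them into each config tree; the `default_factory` form is only needed for plain dataclass usage.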
[start of nemo/collections/nlp/models/token_classification/punctuation_capitalization_config.py]
1 # Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from dataclasses import dataclass
16 from typing import Any, Dict, Optional
17
18 from omegaconf.omegaconf import MISSING, DictConfig, OmegaConf, open_dict
19
20 from nemo.collections.common.parts.adapter_modules import LinearAdapterConfig
21 from nemo.collections.nlp.data.token_classification.punctuation_capitalization_dataset import (
22 PunctuationCapitalizationEvalDataConfig,
23 PunctuationCapitalizationTrainDataConfig,
24 legacy_data_config_to_new_data_config,
25 )
26 from nemo.core.config import TrainerConfig
27 from nemo.core.config.modelPT import NemoConfig
28 from nemo.utils.exp_manager import ExpManagerConfig
29
30
31 @dataclass
32 class FreezeConfig:
33 is_enabled: bool = False
34 """Freeze audio encoder weight and add Conformer Layers on top of it"""
35 d_model: Optional[int] = 256
36 """`d_model` parameter of ``ConformerLayer``"""
37 d_ff: Optional[int] = 1024
38 """``d_ff`` parameter of ``ConformerLayer``"""
39 num_layers: Optional[int] = 8
40 """``num_layers`` number of ``ConformerLayer`` modules to add on top of audio encoder"""
41
42
43 @dataclass
44 class AdapterConfig:
45 config: Optional[LinearAdapterConfig] = None
46 """Linear adapter config see ``collections.common.parts.LinearAdapterConfig``"""
47 enable: bool = False
48 """Use adapters for audio encoder"""
49
50
51 @dataclass
52 class FusionConfig:
53 num_layers: Optional[int] = 4
54 """"Number of layers to use in fusion"""
55 num_attention_heads: Optional[int] = 4
56 """Number of attention heads to use in fusion"""
57 inner_size: Optional[int] = 2048
58 """Fusion inner size"""
59
60
61 @dataclass
62 class AudioEncoderConfig:
63 pretrained_model: str = MISSING
64 """A configuration for restoring pretrained audio encoder"""
65 freeze: Optional[FreezeConfig] = None
66 adapter: Optional[AdapterConfig] = None
67 fusion: Optional[FusionConfig] = None
68
69
70 @dataclass
71 class TokenizerConfig:
72 """A structure and default values of source text tokenizer."""
73
74 vocab_file: Optional[str] = None
75 """A path to vocabulary file which is used in ``'word'``, ``'char'``, and HuggingFace tokenizers"""
76
77 tokenizer_name: str = MISSING
78 """A name of the tokenizer used for tokenization of source sequences. Possible options are ``'sentencepiece'``,
79 ``'word'``, ``'char'``, HuggingFace tokenizers (e.g. ``'bert-base-uncased'``). For more options see function
80 ``nemo.collections.nlp.modules.common.get_tokenizer``. The tokenizer must have properties ``cls_id``, ``pad_id``,
81 ``sep_id``, ``unk_id``."""
82
83 special_tokens: Optional[Dict[str, str]] = None
84 """A dictionary with special tokens passed to constructors of ``'char'``, ``'word'``, ``'sentencepiece'``, and
85 various HuggingFace tokenizers."""
86
87 tokenizer_model: Optional[str] = None
88 """A path to a tokenizer model required for ``'sentencepiece'`` tokenizer."""
89
90
91 @dataclass
92 class LanguageModelConfig:
93 """
94 A structure and default values of language model configuration of punctuation and capitalization model. BERT like
95 HuggingFace models are supported. Provide a valid ``pretrained_model_name`` and, optionally, you may
96 reinitialize the model via ``config_file`` or ``config``.
97
98 Alternatively you can initialize the language model using ``lm_checkpoint``.
99
100 This config is a part of :class:`PunctuationCapitalizationModelConfig` config.
101 """
102
103 pretrained_model_name: str = MISSING
104 """A mandatory parameter containing name of HuggingFace pretrained model. For example, ``'bert-base-uncased'``."""
105
106 config_file: Optional[str] = None
107 """A path to a file with HuggingFace model config which is used to reinitialize language model."""
108
109 config: Optional[Dict] = None
110 """A HuggingFace config which is used to reinitialize language model."""
111
112 lm_checkpoint: Optional[str] = None
113 """A path to a ``torch`` checkpoint of a language model."""
114
115
116 @dataclass
117 class HeadConfig:
118 """
119 A structure and default values of configuration of capitalization or punctuation model head. This config defines a
120 multilayer perceptron which is applied to output of a language model. Number of units in the hidden layer is equal
121 to the dimension of the language model.
122
123 This config is a part of :class:`PunctuationCapitalizationModelConfig` config.
124 """
125
126 num_fc_layers: int = 1
127 """A number of hidden layers in a multilayer perceptron."""
128
129 fc_dropout: float = 0.1
130 """A dropout used in an MLP."""
131
132 activation: str = 'relu'
133 """An activation used in hidden layers."""
134
135 use_transformer_init: bool = True
136 """Whether to initialize the weights of the classifier head with the approach that was used for language model
137 initialization."""
138
139
140 @dataclass
141 class ClassLabelsConfig:
142 """
143 A structure and default values of a mandatory part of config which contains names of files which are saved in .nemo
144 checkpoint. These files can also be used for passing label vocabulary to the model. For using them as label
145 vocabularies, you will need to provide paths to these files in the parameter
146 ``model.common_dataset_parameters.label_vocab_dir``. Each line in labels files
147 contains 1 label. The values are sorted, ``<line number>==<label id>``, starting from ``0``. The label with id ``0``
148 must be the neutral label, which must be equal to ``model.common_dataset_parameters.pad_label``.
149
150 This config is a part of :class:`~CommonDatasetParametersConfig`.
151 """
152
153 punct_labels_file: str = MISSING
154 """A name of punctuation labels file."""
155
156 capit_labels_file: str = MISSING
157 """A name of capitalization labels file."""
158
159
160 @dataclass
161 class CommonDatasetParametersConfig:
162 """
163 A structure and default values of common dataset parameters config which includes label and loss mask information.
164 If you omit parameters ``punct_label_ids``, ``capit_label_ids``, ``label_vocab_dir``, then labels will be inferred
165 from a training dataset or loaded from a checkpoint.
166
167 Parameters ``ignore_extra_tokens`` and ``ignore_start_end`` are responsible for forming loss mask. A loss mask
168 defines on which tokens loss is computed.
169
170 This parameter is a part of config :class:`~PunctuationCapitalizationModelConfig`.
171 """
172
173 pad_label: str = MISSING
174 """A mandatory parameter which should contain label used for punctuation and capitalization label padding. It
175 also serves as a neutral label for both punctuation and capitalization. If any of ``punct_label_ids``,
176 ``capit_label_ids`` parameters is provided, then ``pad_label`` must have ``0`` id in them. In addition, if ``label_vocab_dir``
177 is provided, then ``pad_label`` must be on the first lines in files ``class_labels.punct_labels_file`` and
178 ``class_labels.capit_labels_file``."""
179
180 ignore_extra_tokens: bool = False
181 """Whether to compute loss on not first tokens in words. If this parameter is ``True``, then loss mask is ``False``
182 for all tokens in a word except the first."""
183
184 ignore_start_end: bool = True
185 """If ``False``, then loss is computed on [CLS] and [SEP] tokens."""
186
187 punct_label_ids: Optional[Dict[str, int]] = None
188 """A dictionary with punctuation label ids. ``pad_label`` must have ``0`` id in this dictionary. You can omit this
189 parameter and pass label ids through ``class_labels.punct_labels_file``, or let the model infer label ids from the
190 dataset or load them from a checkpoint."""
191
192 capit_label_ids: Optional[Dict[str, int]] = None
193 """A dictionary with capitalization label ids. ``pad_label`` must have ``0`` id in this dictionary. You can omit
194 this parameter and pass label ids through ``class_labels.capit_labels_file``, or let the model infer label ids from
195 the dataset or load them from a checkpoint."""
196
197 label_vocab_dir: Optional[str] = None
198 """A path to directory which contains class labels files. See :class:`ClassLabelsConfig`. If this parameter is
199 provided, then labels will be loaded from files which are located in ``label_vocab_dir`` and have names specified
200 in the ``model.class_labels`` configuration section. The label specified in ``pad_label`` has to be on the first line
201 of the ``model.class_labels`` files."""
202
203
204 @dataclass
205 class PunctuationCapitalizationModelConfig:
206 """
207 A configuration of
208 :class:`~nemo.collections.nlp.models.token_classification.punctuation_capitalization_model.PunctuationCapitalizationModel`
209 model.
210
211 See an example of model config in
212 `nemo/examples/nlp/token_classification/conf/punctuation_capitalization_config.yaml
213 <https://github.com/NVIDIA/NeMo/blob/main/examples/nlp/token_classification/conf/punctuation_capitalization_config.yaml>`_
214
215 This config is a part of :class:`~PunctuationCapitalizationConfig`.
216 """
217
218 class_labels: ClassLabelsConfig = ClassLabelsConfig()
219 """A mandatory parameter containing a dictionary with names of label id files used in .nemo checkpoints.
220 These file names can also be used for passing label vocabularies to the model. If you wish to use ``class_labels``
221 for passing vocabularies, please provide path to vocabulary files in
222 ``model.common_dataset_parameters.label_vocab_dir`` parameter."""
223
224 common_dataset_parameters: Optional[CommonDatasetParametersConfig] = CommonDatasetParametersConfig()
225 """Label ids and loss mask information information."""
226
227 train_ds: Optional[PunctuationCapitalizationTrainDataConfig] = None
228 """A configuration for creating training dataset and data loader."""
229
230 validation_ds: Optional[PunctuationCapitalizationEvalDataConfig] = None
231 """A configuration for creating validation datasets and data loaders."""
232
233 test_ds: Optional[PunctuationCapitalizationEvalDataConfig] = None
234 """A configuration for creating test datasets and data loaders."""
235
236 punct_head: HeadConfig = HeadConfig()
237 """A configuration for creating punctuation MLP head that is applied to a language model outputs."""
238
239 capit_head: HeadConfig = HeadConfig()
240 """A configuration for creating capitalization MLP head that is applied to a language model outputs."""
241
242 tokenizer: Any = TokenizerConfig()
243 """A configuration for source text tokenizer."""
244
245 language_model: LanguageModelConfig = LanguageModelConfig()
246 """A configuration of a BERT-like language model which serves as a model body."""
247
248 optim: Optional[Any] = None
249 """A configuration of optimizer and learning rate scheduler. There is much variability in such config. For
250 description see `Optimizers
251 <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/core/core.html#optimizers>`_ section in
252 documentation and the `primer <https://github.com/NVIDIA/NeMo/blob/main/tutorials/00_NeMo_Primer.ipynb>`_ tutorial."""
253
254
255 @dataclass
256 class PunctuationCapitalizationLexicalAudioModelConfig(PunctuationCapitalizationModelConfig):
257 """
258 A configuration of
259 :class:`~nemo.collections.nlp.models.token_classification.punctuation_lexical_audio_capitalization_model.PunctuationCapitalizationLexicalAudioModel`
260 model.
261
262 See an example of model config in
263 `nemo/examples/nlp/token_classification/conf/punctuation_capitalization_config.yaml
264 <https://github.com/NVIDIA/NeMo/blob/main/examples/nlp/token_classification/conf/punctuation_capitalization_lexical_audio_config.yaml>`_
265
266 Audio encoder can be frozen during training with ``freeze_audio_encoder`` parameter.
267 Adapter can be added to audio encoder with ``use_adapters`` and ``adapter_config`` parameters.
268 More conformer layers can be added on top of pretrained audio encoder with ``frozen_conf_d_model``, ``frozen_conf_d_ff`` and ``frozen_conf_num_layers`` parameters.
269 """
270
271 train_ds: Optional[PunctuationCapitalizationTrainDataConfig] = None
272 """A configuration for creating training dataset and data loader."""
273
274 validation_ds: Optional[PunctuationCapitalizationEvalDataConfig] = None
275 """A configuration for creating validation datasets and data loaders."""
276
277 test_ds: Optional[PunctuationCapitalizationEvalDataConfig] = None
278 """A configuration for creating test datasets and data loaders."""
279
280 audio_encoder: Optional[AudioEncoderConfig] = None
281
282 restore_lexical_encoder_from: Optional[str] = None
283 """"Path to .nemo checkpoint to load weights from""" # add more comments
284
285 use_weighted_loss: Optional[bool] = False
286 """If set to ``True`` CrossEntropyLoss will be weighted"""
287
288
289 @dataclass
290 class PunctuationCapitalizationConfig(NemoConfig):
291 """
292 A config for punctuation model training and testing.
293
294 See an example of full config in
295 `nemo/examples/nlp/token_classification/conf/punctuation_capitalization_config.yaml
296 <https://github.com/NVIDIA/NeMo/blob/main/examples/nlp/token_classification/conf/punctuation_capitalization_config.yaml>`_
297 """
298
299 pretrained_model: Optional[str] = None
300 """Can be an NVIDIA's NGC cloud model or a path to a .nemo checkpoint. You can get list of possible cloud options
301 by calling method
302 :func:`~nemo.collections.nlp.models.token_classification.punctuation_capitalization_model.PunctuationCapitalizationModel.list_available_models`.
303 """
304
305 name: Optional[str] = 'Punctuation_and_Capitalization'
306 """A name of the model. Used for naming output directories and ``.nemo`` checkpoints."""
307
308 do_training: bool = True
309 """Whether to perform training of the model."""
310
311 do_testing: bool = False
312 """Whether ot perform testing of the model."""
313
314 model: PunctuationCapitalizationModelConfig = PunctuationCapitalizationModelConfig()
315 """A configuration for the
316 :class:`~nemo.collections.nlp.models.token_classification.punctuation_capitalization_model.PunctuationCapitalizationModel`
317 model."""
318
319 trainer: Optional[TrainerConfig] = TrainerConfig()
320 """Contains ``Trainer`` Lightning class constructor parameters."""
321
322 exp_manager: Optional[ExpManagerConfig] = ExpManagerConfig(name=name, files_to_copy=[])
323 """A configuration with various NeMo training options such as output directories, resuming from checkpoint,
324 tensorboard and W&B logging, and so on. For possible options see :ref:`exp-manager-label`."""
325
326
327 @dataclass
328 class PunctuationCapitalizationLexicalAudioConfig(PunctuationCapitalizationConfig):
329 model: PunctuationCapitalizationLexicalAudioModelConfig = PunctuationCapitalizationLexicalAudioModelConfig()
330
331
332 def is_legacy_model_config(model_cfg: DictConfig) -> bool:
333 """
334 Test if model config is old style config. Old style configs are configs which were used before
335 ``common_dataset_parameters`` item was added. Old style datasets use ``dataset`` instead of
336 ``common_dataset_parameters``, ``batch_size`` instead of ``tokens_in_batch``. Old style configs do not support
337 tarred datasets.
338
339 Args:
340 model_cfg: model configuration
341
342 Returns:
343 whether ``model_config`` is legacy
344 """
345 return 'common_dataset_parameters' not in model_cfg
346
347
348 def legacy_model_config_to_new_model_config(model_cfg: DictConfig) -> DictConfig:
349 """
350 Transform old style config into
351 :class:`~nemo.collections.nlp.models.token_classification.punctuation_capitalization_config.PunctuationCapitalizationModelConfig`.
352 Old style configs are configs which were used before ``common_dataset_parameters`` item was added. Old style
353 datasets use ``dataset`` instead of ``common_dataset_parameters``, ``batch_size`` instead of ``tokens_in_batch``.
354 Old style configs do not support tarred datasets.
355
356 Args:
357 model_cfg: old style config
358
359 Returns:
360 model config which follows dataclass
361 :class:`~nemo.collections.nlp.models.token_classification.punctuation_capitalization_config.PunctuationCapitalizationModelConfig`
362 """
363 train_ds = model_cfg.get('train_ds')
364 validation_ds = model_cfg.get('validation_ds')
365 test_ds = model_cfg.get('test_ds')
366 dataset = model_cfg.dataset
367 punct_head_config = model_cfg.get('punct_head', {})
368 capit_head_config = model_cfg.get('capit_head', {})
369 omega_conf = OmegaConf.structured(
370 PunctuationCapitalizationModelConfig(
371 class_labels=model_cfg.class_labels,
372 common_dataset_parameters=CommonDatasetParametersConfig(
373 pad_label=dataset.pad_label,
374 ignore_extra_tokens=dataset.get(
375 'ignore_extra_tokens', CommonDatasetParametersConfig.ignore_extra_tokens
376 ),
377 ignore_start_end=dataset.get('ignore_start_end', CommonDatasetParametersConfig.ignore_start_end),
378 punct_label_ids=model_cfg.punct_label_ids,
379 capit_label_ids=model_cfg.capit_label_ids,
380 ),
381 train_ds=None
382 if train_ds is None
383 else legacy_data_config_to_new_data_config(train_ds, dataset, train=True),
384 validation_ds=None
385 if validation_ds is None
386 else legacy_data_config_to_new_data_config(validation_ds, dataset, train=False),
387 test_ds=None if test_ds is None else legacy_data_config_to_new_data_config(test_ds, dataset, train=False),
388 punct_head=HeadConfig(
389 num_fc_layers=punct_head_config.get('punct_num_fc_layers', HeadConfig.num_fc_layers),
390 fc_dropout=punct_head_config.get('fc_dropout', HeadConfig.fc_dropout),
391 activation=punct_head_config.get('activation', HeadConfig.activation),
392 use_transformer_init=punct_head_config.get('use_transformer_init', HeadConfig.use_transformer_init),
393 ),
394 capit_head=HeadConfig(
395 num_fc_layers=capit_head_config.get('capit_num_fc_layers', HeadConfig.num_fc_layers),
396 fc_dropout=capit_head_config.get('fc_dropout', HeadConfig.fc_dropout),
397 activation=capit_head_config.get('activation', HeadConfig.activation),
398 use_transformer_init=capit_head_config.get('use_transformer_init', HeadConfig.use_transformer_init),
399 ),
400 tokenizer=model_cfg.tokenizer,
401 language_model=model_cfg.language_model,
402 optim=model_cfg.optim,
403 )
404 )
405 with open_dict(omega_conf):
406 retain_during_legacy_conversion = model_cfg.get('retain_during_legacy_conversion', {})
407 for key in retain_during_legacy_conversion.keys():
408 omega_conf[key] = retain_during_legacy_conversion[key]
409 return omega_conf
410
[end of nemo/collections/nlp/models/token_classification/punctuation_capitalization_config.py]
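The legacy-config detection above reduces to a single key check: old-style configs predate the ``common_dataset_parameters`` section. A minimal sketch of that check, using plain dicts in place of ``omegaconf.DictConfig`` (an assumption for illustration; the real function receives a DictConfig):

```python
def is_legacy_model_config(model_cfg: dict) -> bool:
    # Old-style configs use 'dataset'/'batch_size' and lack the
    # 'common_dataset_parameters' section introduced later
    return 'common_dataset_parameters' not in model_cfg


# Hypothetical example configs for illustration only
legacy_cfg = {'dataset': {'pad_label': 'O', 'batch_size': 32}}
new_cfg = {
    'common_dataset_parameters': {'pad_label': 'O'},
    'train_ds': {'tokens_in_batch': 512},
}

print(is_legacy_model_config(legacy_cfg))  # True
print(is_legacy_model_config(new_cfg))     # False
```

A legacy config that passes this check is then rebuilt field by field into ``PunctuationCapitalizationModelConfig`` by ``legacy_model_config_to_new_model_config``, mapping ``batch_size`` to ``tokens_in_batch`` and ``dataset`` to ``common_dataset_parameters``.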
[start of nemo/collections/nlp/modules/common/megatron/megatron_encoders.py]
1 # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Transformer based language model."""
16
17 from nemo.collections.nlp.modules.common.megatron.megatron_perceiver_encoders import MegatronPerceiverEncoderModule
18 from nemo.collections.nlp.modules.common.megatron.megatron_transformer_encoder import MegatronTransformerEncoderModule
19 from nemo.collections.nlp.modules.common.megatron.retrieval_transformer import (
20 MegatronRetrievalTransformerEncoderModule,
21 )
22 from nemo.collections.nlp.modules.common.megatron.utils import (
23 ApexGuardDefaults,
24 init_method_normal,
25 scaled_init_method_normal,
26 )
27
28 try:
29 from apex.transformer.enums import AttnMaskType, ModelType
30
31 HAVE_APEX = True
32 except (ImportError, ModuleNotFoundError):
33 HAVE_APEX = False
34 # fake missing classes with None attributes
35 AttnMaskType = ApexGuardDefaults()
36 ModelType = ApexGuardDefaults()
37
38 try:
39 from megatron.core import ModelParallelConfig
40
41 HAVE_MEGATRON_CORE = True
42
43 except (ImportError, ModuleNotFoundError):
44
45 ModelParallelConfig = ApexGuardDefaults
46
47 HAVE_MEGATRON_CORE = False
48
49 __all__ = []
50
51 AVAILABLE_ENCODERS = ["transformer", "perceiver", "retro"]
52
53
54 def get_encoder_model(
55 config: ModelParallelConfig,
56 arch,
57 hidden_size,
58 ffn_hidden_size,
59 num_layers,
60 num_attention_heads,
61 apply_query_key_layer_scaling=False,
62 kv_channels=None,
63 init_method=None,
64 scaled_init_method=None,
65 encoder_attn_mask_type=AttnMaskType.padding,
66 pre_process=True,
67 post_process=True,
68 init_method_std=0.02,
69 megatron_amp_O2=False,
70 hidden_dropout=0.1,
71 attention_dropout=0.1,
72 ffn_dropout=0.0,
73 precision=16,
74 fp32_residual_connection=False,
75 activations_checkpoint_method=None,
76 activations_checkpoint_num_layers=1,
77 activations_checkpoint_granularity=None,
78 layernorm_epsilon=1e-5,
79 bias_activation_fusion=True,
80 bias_dropout_add_fusion=True,
81 masked_softmax_fusion=True,
82 persist_layer_norm=False,
83 openai_gelu=False,
84 activation="gelu",
85 onnx_safe=False,
86 bias=True,
87 normalization="layernorm",
88 headscale=False,
89 transformer_block_type="pre_ln",
90 hidden_steps=32,
91 parent_model_type=ModelType.encoder_or_decoder,
92 layer_type=None,
93 chunk_size=64,
94 num_self_attention_per_cross_attention=1,
95 layer_number_offset=0, # this is use only for attention norm_factor scaling
96 megatron_legacy=False,
97 normalize_attention_scores=True,
98 sequence_parallel=False,
99 num_moe_experts=1,
100 moe_frequency=1,
101 moe_dropout=0.0,
102 turn_off_rop=False, # turn off the RoP positional embedding
103 version=1, # model version
104 position_embedding_type='learned_absolute',
105 use_flash_attention=False,
106 ):
107 """Build language model and return along with the key to save."""
108
109 if kv_channels is None:
110 assert (
111 hidden_size % num_attention_heads == 0
112 ), 'hidden_size must be divisible by num_attention_heads if kv_channels is None'
113 kv_channels = hidden_size // num_attention_heads
114
115 if init_method is None:
116 init_method = init_method_normal(init_method_std)
117
118 if scaled_init_method is None:
119 scaled_init_method = scaled_init_method_normal(init_method_std, num_layers)
120
121 if arch == "transformer":
122 # Language encoder.
123 encoder = MegatronTransformerEncoderModule(
124 config=config,
125 init_method=init_method,
126 output_layer_init_method=scaled_init_method,
127 hidden_size=hidden_size,
128 num_layers=num_layers,
129 num_attention_heads=num_attention_heads,
130 apply_query_key_layer_scaling=apply_query_key_layer_scaling,
131 kv_channels=kv_channels,
132 ffn_hidden_size=ffn_hidden_size,
133 encoder_attn_mask_type=encoder_attn_mask_type,
134 pre_process=pre_process,
135 post_process=post_process,
136 megatron_amp_O2=megatron_amp_O2,
137 hidden_dropout=hidden_dropout,
138 attention_dropout=attention_dropout,
139 ffn_dropout=ffn_dropout,
140 precision=precision,
141 fp32_residual_connection=fp32_residual_connection,
142 activations_checkpoint_method=activations_checkpoint_method,
143 activations_checkpoint_num_layers=activations_checkpoint_num_layers,
144 activations_checkpoint_granularity=activations_checkpoint_granularity,
145 layernorm_epsilon=layernorm_epsilon,
146 bias_activation_fusion=bias_activation_fusion,
147 bias_dropout_add_fusion=bias_dropout_add_fusion,
148 masked_softmax_fusion=masked_softmax_fusion,
149 persist_layer_norm=persist_layer_norm,
150 openai_gelu=openai_gelu,
151 onnx_safe=onnx_safe,
152 activation=activation,
153 bias=bias,
154 normalization=normalization,
155 transformer_block_type=transformer_block_type,
156 headscale=headscale,
157 parent_model_type=parent_model_type,
158 megatron_legacy=megatron_legacy,
159 normalize_attention_scores=normalize_attention_scores,
160 num_moe_experts=num_moe_experts,
161 moe_frequency=moe_frequency,
162 moe_dropout=moe_dropout,
163 position_embedding_type=position_embedding_type,
164 use_flash_attention=use_flash_attention,
165 )
166 elif arch == "retro":
167 encoder = MegatronRetrievalTransformerEncoderModule(
168 config=config,
169 init_method=init_method,
170 output_layer_init_method=scaled_init_method,
171 hidden_size=hidden_size,
172 num_layers=num_layers,
173 num_attention_heads=num_attention_heads,
174 apply_query_key_layer_scaling=apply_query_key_layer_scaling,
175 kv_channels=kv_channels,
176 layer_type=layer_type,
177 ffn_hidden_size=ffn_hidden_size,
178 pre_process=pre_process,
179 post_process=post_process,
180 megatron_amp_O2=megatron_amp_O2,
181 hidden_dropout=hidden_dropout,
182 attention_dropout=attention_dropout,
183 precision=precision,
184 fp32_residual_connection=fp32_residual_connection,
185 activations_checkpoint_method=activations_checkpoint_method,
186 activations_checkpoint_num_layers=activations_checkpoint_num_layers,
187 activations_checkpoint_granularity=activations_checkpoint_granularity,
188 layernorm_epsilon=layernorm_epsilon,
189 bias_activation_fusion=bias_activation_fusion,
190 bias_dropout_add_fusion=bias_dropout_add_fusion,
191 masked_softmax_fusion=masked_softmax_fusion,
192 persist_layer_norm=persist_layer_norm,
193 openai_gelu=openai_gelu,
194 onnx_safe=onnx_safe,
195 activation=activation,
196 bias=bias,
197 normalization=normalization,
198 transformer_block_type=transformer_block_type,
199 parent_model_type=parent_model_type,
200 chunk_size=chunk_size,
201 layer_number_offset=layer_number_offset,
202 megatron_legacy=megatron_legacy,
203 normalize_attention_scores=normalize_attention_scores,
204 turn_off_rop=turn_off_rop,
205 version=version,
206 )
207 elif arch == "perceiver":
208 encoder = MegatronPerceiverEncoderModule(
209 config=config,
210 init_method=init_method,
211 output_layer_init_method=scaled_init_method,
212 hidden_size=hidden_size,
213 num_layers=num_layers,
214 num_attention_heads=num_attention_heads,
215 apply_query_key_layer_scaling=apply_query_key_layer_scaling,
216 kv_channels=kv_channels,
217 ffn_hidden_size=ffn_hidden_size,
218 encoder_attn_mask_type=encoder_attn_mask_type,
219 pre_process=pre_process,
220 post_process=post_process,
221 megatron_amp_O2=megatron_amp_O2,
222 hidden_dropout=hidden_dropout,
223 attention_dropout=attention_dropout,
224 ffn_dropout=ffn_dropout,
225 precision=precision,
226 fp32_residual_connection=fp32_residual_connection,
227 activations_checkpoint_method=activations_checkpoint_method,
228 activations_checkpoint_num_layers=activations_checkpoint_num_layers,
229 activations_checkpoint_granularity=activations_checkpoint_granularity,
230 layernorm_epsilon=layernorm_epsilon,
231 bias_activation_fusion=bias_activation_fusion,
232 bias_dropout_add_fusion=bias_dropout_add_fusion,
233 masked_softmax_fusion=masked_softmax_fusion,
234 persist_layer_norm=persist_layer_norm,
235 openai_gelu=openai_gelu,
236 onnx_safe=onnx_safe,
237 activation=activation,
238 bias=bias,
239 normalization=normalization,
240 transformer_block_type=transformer_block_type,
241 headscale=headscale,
242 parent_model_type=parent_model_type,
243 hidden_steps=hidden_steps,
244 num_self_attention_per_cross_attention=num_self_attention_per_cross_attention,
245 megatron_legacy=megatron_legacy,
246 normalize_attention_scores=normalize_attention_scores,
247 )
248 else:
249 raise ValueError(f"Unknown encoder arch = {arch}. Available encoder arch = {AVAILABLE_ENCODERS}")
250
251 return encoder
252
[end of nemo/collections/nlp/modules/common/megatron/megatron_encoders.py]
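One small but important default in ``get_encoder_model`` is the derivation of ``kv_channels``: when not given, it is the per-head key/value width ``hidden_size // num_attention_heads``, guarded by a divisibility check. A standalone sketch of just that logic (the helper name is hypothetical):

```python
def derive_kv_channels(hidden_size: int, num_attention_heads: int, kv_channels=None) -> int:
    # Mirrors the default applied in get_encoder_model above
    if kv_channels is None:
        assert hidden_size % num_attention_heads == 0, (
            'hidden_size must be divisible by num_attention_heads if kv_channels is None'
        )
        kv_channels = hidden_size // num_attention_heads
    return kv_channels


print(derive_kv_channels(512, 8))                   # 64 (the AAYN-base shape)
print(derive_kv_channels(512, 8, kv_channels=128))  # explicit value wins: 128
```

An explicitly passed ``kv_channels`` bypasses the check entirely, which is how head dimensions different from ``hidden_size / num_attention_heads`` are configured.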
[start of nemo/collections/tts/models/fastpitch.py]
1 # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 import contextlib
15 from dataclasses import dataclass
16 from pathlib import Path
17 from typing import List, Optional
18
19 import torch
20 from hydra.utils import instantiate
21 from omegaconf import DictConfig, OmegaConf, open_dict
22 from pytorch_lightning import Trainer
23 from pytorch_lightning.loggers import TensorBoardLogger
24
25 from nemo.collections.common.parts.preprocessing import parsers
26 from nemo.collections.tts.losses.aligner_loss import BinLoss, ForwardSumLoss
27 from nemo.collections.tts.losses.fastpitchloss import DurationLoss, EnergyLoss, MelLoss, PitchLoss
28 from nemo.collections.tts.models.base import SpectrogramGenerator
29 from nemo.collections.tts.modules.fastpitch import FastPitchModule
30 from nemo.collections.tts.parts.mixins import FastPitchAdapterModelMixin
31 from nemo.collections.tts.parts.utils.callbacks import LoggingCallback
32 from nemo.collections.tts.parts.utils.helpers import (
33 batch_from_ragged,
34 g2p_backward_compatible_support,
35 plot_alignment_to_numpy,
36 plot_spectrogram_to_numpy,
37 process_batch,
38 sample_tts_input,
39 )
40 from nemo.core.classes import Exportable
41 from nemo.core.classes.common import PretrainedModelInfo, typecheck
42 from nemo.core.neural_types.elements import (
43 Index,
44 LengthsType,
45 MelSpectrogramType,
46 ProbsType,
47 RegressionValuesType,
48 TokenDurationType,
49 TokenIndex,
50 TokenLogDurationType,
51 )
52 from nemo.core.neural_types.neural_type import NeuralType
53 from nemo.utils import logging, model_utils
54
55
56 @dataclass
57 class G2PConfig:
58 _target_: str = "nemo.collections.tts.g2p.models.en_us_arpabet.EnglishG2p"
59 phoneme_dict: str = "scripts/tts_dataset_files/cmudict-0.7b_nv22.10"
60 heteronyms: str = "scripts/tts_dataset_files/heteronyms-052722"
61 phoneme_probability: float = 0.5
62
63
64 @dataclass
65 class TextTokenizer:
66 _target_: str = "nemo.collections.common.tokenizers.text_to_speech.tts_tokenizers.EnglishPhonemesTokenizer"
67 punct: bool = True
68 stresses: bool = True
69 chars: bool = True
70 apostrophe: bool = True
71 pad_with_space: bool = True
72 add_blank_at: bool = True
73 g2p: G2PConfig = G2PConfig()
74
75
76 @dataclass
77 class TextTokenizerConfig:
78 text_tokenizer: TextTokenizer = TextTokenizer()
79
80
81 class FastPitchModel(SpectrogramGenerator, Exportable, FastPitchAdapterModelMixin):
82 """FastPitch model (https://arxiv.org/abs/2006.06873) that is used to generate mel spectrogram from text."""
83
84 def __init__(self, cfg: DictConfig, trainer: Trainer = None):
85 # Convert to Hydra 1.0 compatible DictConfig
86 cfg = model_utils.convert_model_config_to_dict_config(cfg)
87 cfg = model_utils.maybe_update_config_version(cfg)
88
89 # Setup normalizer
90 self.normalizer = None
91 self.text_normalizer_call = None
92 self.text_normalizer_call_kwargs = {}
93 self._setup_normalizer(cfg)
94
95 self.learn_alignment = cfg.get("learn_alignment", False)
96
97 # Setup vocabulary (=tokenizer) and input_fft_kwargs (supported only with self.learn_alignment=True)
98 input_fft_kwargs = {}
99 if self.learn_alignment:
100 self.vocab = None
101
102 self.ds_class = cfg.train_ds.dataset._target_
103 self.ds_class_name = self.ds_class.split(".")[-1]
104 if self.ds_class not in [
105 "nemo.collections.tts.data.dataset.TTSDataset",
106 "nemo.collections.tts.data.text_to_speech_dataset.TextToSpeechDataset",
107 "nemo.collections.tts.torch.data.TTSDataset",
108 ]:
109 raise ValueError(f"Unknown dataset class: {self.ds_class}.")
110
111 self._setup_tokenizer(cfg)
112 assert self.vocab is not None
113 input_fft_kwargs["n_embed"] = len(self.vocab.tokens)
114 input_fft_kwargs["padding_idx"] = self.vocab.pad
115
116 self._parser = None
117 self._tb_logger = None
118 super().__init__(cfg=cfg, trainer=trainer)
119
120 self.bin_loss_warmup_epochs = cfg.get("bin_loss_warmup_epochs", 100)
121 self.log_images = cfg.get("log_images", False)
122 self.log_train_images = False
123
124 default_prosody_loss_scale = 0.1 if self.learn_alignment else 1.0
125 dur_loss_scale = cfg.get("dur_loss_scale", default_prosody_loss_scale)
126 pitch_loss_scale = cfg.get("pitch_loss_scale", default_prosody_loss_scale)
127 energy_loss_scale = cfg.get("energy_loss_scale", default_prosody_loss_scale)
128
129 self.mel_loss_fn = MelLoss()
130 self.pitch_loss_fn = PitchLoss(loss_scale=pitch_loss_scale)
131 self.duration_loss_fn = DurationLoss(loss_scale=dur_loss_scale)
132 self.energy_loss_fn = EnergyLoss(loss_scale=energy_loss_scale)
133
134 self.aligner = None
135 if self.learn_alignment:
136 aligner_loss_scale = cfg.get("aligner_loss_scale", 1.0)
137 self.aligner = instantiate(self._cfg.alignment_module)
138 self.forward_sum_loss_fn = ForwardSumLoss(loss_scale=aligner_loss_scale)
139 self.bin_loss_fn = BinLoss(loss_scale=aligner_loss_scale)
140
141 self.preprocessor = instantiate(self._cfg.preprocessor)
142 input_fft = instantiate(self._cfg.input_fft, **input_fft_kwargs)
143 output_fft = instantiate(self._cfg.output_fft)
144 duration_predictor = instantiate(self._cfg.duration_predictor)
145 pitch_predictor = instantiate(self._cfg.pitch_predictor)
146 speaker_encoder = instantiate(self._cfg.get("speaker_encoder", None))
147 energy_embedding_kernel_size = cfg.get("energy_embedding_kernel_size", 0)
148 energy_predictor = instantiate(self._cfg.get("energy_predictor", None))
149
150 # [TODO] may remove if we change the pre-trained config
151 # cfg: condition_types = [ "add" ]
152 n_speakers = cfg.get("n_speakers", 0)
153 speaker_emb_condition_prosody = cfg.get("speaker_emb_condition_prosody", False)
154 speaker_emb_condition_decoder = cfg.get("speaker_emb_condition_decoder", False)
155 speaker_emb_condition_aligner = cfg.get("speaker_emb_condition_aligner", False)
156 min_token_duration = cfg.get("min_token_duration", 0)
157 use_log_energy = cfg.get("use_log_energy", True)
158 if n_speakers > 1 and "add" not in input_fft.cond_input.condition_types:
159 input_fft.cond_input.condition_types.append("add")
160 if speaker_emb_condition_prosody:
161 duration_predictor.cond_input.condition_types.append("add")
162 pitch_predictor.cond_input.condition_types.append("add")
163 if speaker_emb_condition_decoder:
164 output_fft.cond_input.condition_types.append("add")
165 if speaker_emb_condition_aligner and self.aligner is not None:
166 self.aligner.cond_input.condition_types.append("add")
167
168 self.fastpitch = FastPitchModule(
169 input_fft,
170 output_fft,
171 duration_predictor,
172 pitch_predictor,
173 energy_predictor,
174 self.aligner,
175 speaker_encoder,
176 n_speakers,
177 cfg.symbols_embedding_dim,
178 cfg.pitch_embedding_kernel_size,
179 energy_embedding_kernel_size,
180 cfg.n_mel_channels,
181 min_token_duration,
182 cfg.max_token_duration,
183 use_log_energy,
184 )
185 self._input_types = self._output_types = None
186 self.export_config = {
187 "emb_range": (0, self.fastpitch.encoder.word_emb.num_embeddings),
188 "enable_volume": False,
189 "enable_ragged_batches": False,
190 }
191 if self.fastpitch.speaker_emb is not None:
192 self.export_config["num_speakers"] = cfg.n_speakers
193
194 self.log_config = cfg.get("log_config", None)
195
196 # Adapter modules setup (from FastPitchAdapterModelMixin)
197 self.setup_adapters()
198
199 def _get_default_text_tokenizer_conf(self):
200 text_tokenizer: TextTokenizerConfig = TextTokenizerConfig()
201 return OmegaConf.create(OmegaConf.to_yaml(text_tokenizer))
202
203 def _setup_normalizer(self, cfg):
204 if "text_normalizer" in cfg:
205 normalizer_kwargs = {}
206
207 if "whitelist" in cfg.text_normalizer:
208 normalizer_kwargs["whitelist"] = self.register_artifact(
209 'text_normalizer.whitelist', cfg.text_normalizer.whitelist
210 )
211 try:
212 import nemo_text_processing
213
214 self.normalizer = instantiate(cfg.text_normalizer, **normalizer_kwargs)
215 except Exception as e:
216 logging.error(e)
217 raise ImportError(
218 "`nemo_text_processing` not installed, see https://github.com/NVIDIA/NeMo-text-processing for more details"
219 )
220
221 self.text_normalizer_call = self.normalizer.normalize
222 if "text_normalizer_call_kwargs" in cfg:
223 self.text_normalizer_call_kwargs = cfg.text_normalizer_call_kwargs
224
225 def _setup_tokenizer(self, cfg):
226 text_tokenizer_kwargs = {}
227
228 if "g2p" in cfg.text_tokenizer:
229 # for backward compatibility
230 if (
231 self._is_model_being_restored()
232 and (cfg.text_tokenizer.g2p.get('_target_', None) is not None)
233 and cfg.text_tokenizer.g2p["_target_"].startswith("nemo_text_processing.g2p")
234 ):
235 cfg.text_tokenizer.g2p["_target_"] = g2p_backward_compatible_support(
236 cfg.text_tokenizer.g2p["_target_"]
237 )
238
239 g2p_kwargs = {}
240
241 if "phoneme_dict" in cfg.text_tokenizer.g2p:
242 g2p_kwargs["phoneme_dict"] = self.register_artifact(
243 'text_tokenizer.g2p.phoneme_dict', cfg.text_tokenizer.g2p.phoneme_dict,
244 )
245
246 if "heteronyms" in cfg.text_tokenizer.g2p:
247 g2p_kwargs["heteronyms"] = self.register_artifact(
248 'text_tokenizer.g2p.heteronyms', cfg.text_tokenizer.g2p.heteronyms,
249 )
250
251 # for backward compatibility
252 text_tokenizer_kwargs["g2p"] = instantiate(cfg.text_tokenizer.g2p, **g2p_kwargs)
253
254 # TODO @xueyang: rename the instance of tokenizer because vocab is misleading.
255 self.vocab = instantiate(cfg.text_tokenizer, **text_tokenizer_kwargs)
256
257 @property
258 def tb_logger(self):
259 if self._tb_logger is None:
260 if self.logger is None or self.logger.experiment is None:
261 return None
262 tb_logger = self.logger.experiment
263 for logger in self.trainer.loggers:
264 if isinstance(logger, TensorBoardLogger):
265 tb_logger = logger.experiment
266 break
267 self._tb_logger = tb_logger
268 return self._tb_logger
269
270 @property
271 def parser(self):
272 if self._parser is not None:
273 return self._parser
274
275 if self.learn_alignment:
276 self._parser = self.vocab.encode
277 else:
278 self._parser = parsers.make_parser(
279 labels=self._cfg.labels,
280 name='en',
281 unk_id=-1,
282 blank_id=-1,
283 do_normalize=True,
284 abbreviation_version="fastpitch",
285 make_table=False,
286 )
287 return self._parser
288
289 def parse(self, str_input: str, normalize=True) -> torch.tensor:
290 if self.training:
291 logging.warning("parse() is meant to be called in eval mode.")
292
293 if normalize and self.text_normalizer_call is not None:
294 str_input = self.text_normalizer_call(str_input, **self.text_normalizer_call_kwargs)
295
296 if self.learn_alignment:
297 eval_phon_mode = contextlib.nullcontext()
298 if hasattr(self.vocab, "set_phone_prob"):
299 eval_phon_mode = self.vocab.set_phone_prob(prob=1.0)
300
301 # Disable mixed g2p representation if necessary
302 with eval_phon_mode:
303 tokens = self.parser(str_input)
304 else:
305 tokens = self.parser(str_input)
306
307 x = torch.tensor(tokens).unsqueeze_(0).long().to(self.device)
308 return x
309
310 @typecheck(
311 input_types={
312 "text": NeuralType(('B', 'T_text'), TokenIndex()),
313 "durs": NeuralType(('B', 'T_text'), TokenDurationType()),
314 "pitch": NeuralType(('B', 'T_audio'), RegressionValuesType()),
315 "energy": NeuralType(('B', 'T_audio'), RegressionValuesType(), optional=True),
316 "speaker": NeuralType(('B'), Index(), optional=True),
317 "pace": NeuralType(optional=True),
318 "spec": NeuralType(('B', 'D', 'T_spec'), MelSpectrogramType(), optional=True),
319 "attn_prior": NeuralType(('B', 'T_spec', 'T_text'), ProbsType(), optional=True),
320 "mel_lens": NeuralType(('B'), LengthsType(), optional=True),
321 "input_lens": NeuralType(('B'), LengthsType(), optional=True),
322 # reference_* data is used for multi-speaker FastPitch training
323 "reference_spec": NeuralType(('B', 'D', 'T_spec'), MelSpectrogramType(), optional=True),
324 "reference_spec_lens": NeuralType(('B'), LengthsType(), optional=True),
325 }
326 )
327 def forward(
328 self,
329 *,
330 text,
331 durs=None,
332 pitch=None,
333 energy=None,
334 speaker=None,
335 pace=1.0,
336 spec=None,
337 attn_prior=None,
338 mel_lens=None,
339 input_lens=None,
340 reference_spec=None,
341 reference_spec_lens=None,
342 ):
343 return self.fastpitch(
344 text=text,
345 durs=durs,
346 pitch=pitch,
347 energy=energy,
348 speaker=speaker,
349 pace=pace,
350 spec=spec,
351 attn_prior=attn_prior,
352 mel_lens=mel_lens,
353 input_lens=input_lens,
354 reference_spec=reference_spec,
355 reference_spec_lens=reference_spec_lens,
356 )
357
358 @typecheck(output_types={"spect": NeuralType(('B', 'D', 'T_spec'), MelSpectrogramType())})
359 def generate_spectrogram(
360 self,
361 tokens: 'torch.tensor',
362 speaker: Optional[int] = None,
363 pace: float = 1.0,
364 reference_spec: Optional['torch.tensor'] = None,
365 reference_spec_lens: Optional['torch.tensor'] = None,
366 ) -> torch.tensor:
367 if self.training:
368 logging.warning("generate_spectrogram() is meant to be called in eval mode.")
369 if isinstance(speaker, int):
370 speaker = torch.tensor([speaker]).to(self.device)
371 spect, *_ = self(
372 text=tokens,
373 durs=None,
374 pitch=None,
375 speaker=speaker,
376 pace=pace,
377 reference_spec=reference_spec,
378 reference_spec_lens=reference_spec_lens,
379 )
380 return spect
381
382 def training_step(self, batch, batch_idx):
383 attn_prior, durs, speaker, energy, reference_audio, reference_audio_len = (
384 None,
385 None,
386 None,
387 None,
388 None,
389 None,
390 )
391 if self.learn_alignment:
392 if self.ds_class == "nemo.collections.tts.data.text_to_speech_dataset.TextToSpeechDataset":
393 batch_dict = batch
394 else:
395 batch_dict = process_batch(batch, self._train_dl.dataset.sup_data_types_set)
396 audio = batch_dict.get("audio")
397 audio_lens = batch_dict.get("audio_lens")
398 text = batch_dict.get("text")
399 text_lens = batch_dict.get("text_lens")
400 attn_prior = batch_dict.get("align_prior_matrix", None)
401 pitch = batch_dict.get("pitch", None)
402 energy = batch_dict.get("energy", None)
403 speaker = batch_dict.get("speaker_id", None)
404 reference_audio = batch_dict.get("reference_audio", None)
405 reference_audio_len = batch_dict.get("reference_audio_lens", None)
406 else:
407 audio, audio_lens, text, text_lens, durs, pitch, speaker = batch
408
409 mels, spec_len = self.preprocessor(input_signal=audio, length=audio_lens)
410 reference_spec, reference_spec_len = None, None
411 if reference_audio is not None:
412 reference_spec, reference_spec_len = self.preprocessor(
413 input_signal=reference_audio, length=reference_audio_len
414 )
415
416 (
417 mels_pred,
418 _,
419 _,
420 log_durs_pred,
421 pitch_pred,
422 attn_soft,
423 attn_logprob,
424 attn_hard,
425 attn_hard_dur,
426 pitch,
427 energy_pred,
428 energy_tgt,
429 ) = self(
430 text=text,
431 durs=durs,
432 pitch=pitch,
433 energy=energy,
434 speaker=speaker,
435 pace=1.0,
436 spec=mels if self.learn_alignment else None,
437 reference_spec=reference_spec,
438 reference_spec_lens=reference_spec_len,
439 attn_prior=attn_prior,
440 mel_lens=spec_len,
441 input_lens=text_lens,
442 )
443 if durs is None:
444 durs = attn_hard_dur
445
446 mel_loss = self.mel_loss_fn(spect_predicted=mels_pred, spect_tgt=mels)
447 dur_loss = self.duration_loss_fn(log_durs_predicted=log_durs_pred, durs_tgt=durs, len=text_lens)
448 loss = mel_loss + dur_loss
449 if self.learn_alignment:
450 ctc_loss = self.forward_sum_loss_fn(attn_logprob=attn_logprob, in_lens=text_lens, out_lens=spec_len)
451 bin_loss_weight = min(self.current_epoch / self.bin_loss_warmup_epochs, 1.0)
452 bin_loss = self.bin_loss_fn(hard_attention=attn_hard, soft_attention=attn_soft) * bin_loss_weight
453 loss += ctc_loss + bin_loss
454
455 pitch_loss = self.pitch_loss_fn(pitch_predicted=pitch_pred, pitch_tgt=pitch, len=text_lens)
456 energy_loss = self.energy_loss_fn(energy_predicted=energy_pred, energy_tgt=energy_tgt, length=text_lens)
457 loss += pitch_loss + energy_loss
458
459 self.log("t_loss", loss)
460 self.log("t_mel_loss", mel_loss)
461 self.log("t_dur_loss", dur_loss)
462 self.log("t_pitch_loss", pitch_loss)
463 if energy_tgt is not None:
464 self.log("t_energy_loss", energy_loss)
465 if self.learn_alignment:
466 self.log("t_ctc_loss", ctc_loss)
467 self.log("t_bin_loss", bin_loss)
468
469 # Log images to tensorboard
470 if self.log_images and self.log_train_images and isinstance(self.logger, TensorBoardLogger):
471 self.log_train_images = False
472
473 self.tb_logger.add_image(
474 "train_mel_target",
475 plot_spectrogram_to_numpy(mels[0].data.cpu().float().numpy()),
476 self.global_step,
477 dataformats="HWC",
478 )
479 spec_predict = mels_pred[0].data.cpu().float().numpy()
480 self.tb_logger.add_image(
481 "train_mel_predicted", plot_spectrogram_to_numpy(spec_predict), self.global_step, dataformats="HWC",
482 )
483 if self.learn_alignment:
484 attn = attn_hard[0].data.cpu().float().numpy().squeeze()
485 self.tb_logger.add_image(
486 "train_attn", plot_alignment_to_numpy(attn.T), self.global_step, dataformats="HWC",
487 )
488 soft_attn = attn_soft[0].data.cpu().float().numpy().squeeze()
489 self.tb_logger.add_image(
490 "train_soft_attn", plot_alignment_to_numpy(soft_attn.T), self.global_step, dataformats="HWC",
491 )
492
493 return loss
494
495 def validation_step(self, batch, batch_idx):
496 attn_prior, durs, speaker, energy, reference_audio, reference_audio_len = (
497 None,
498 None,
499 None,
500 None,
501 None,
502 None,
503 )
504 if self.learn_alignment:
505 if self.ds_class == "nemo.collections.tts.data.text_to_speech_dataset.TextToSpeechDataset":
506 batch_dict = batch
507 else:
508 batch_dict = process_batch(batch, self._train_dl.dataset.sup_data_types_set)
509 audio = batch_dict.get("audio")
510 audio_lens = batch_dict.get("audio_lens")
511 text = batch_dict.get("text")
512 text_lens = batch_dict.get("text_lens")
513 attn_prior = batch_dict.get("align_prior_matrix", None)
514 pitch = batch_dict.get("pitch", None)
515 energy = batch_dict.get("energy", None)
516 speaker = batch_dict.get("speaker_id", None)
517 reference_audio = batch_dict.get("reference_audio", None)
518 reference_audio_len = batch_dict.get("reference_audio_lens", None)
519 else:
520 audio, audio_lens, text, text_lens, durs, pitch, speaker = batch
521
522 mels, mel_lens = self.preprocessor(input_signal=audio, length=audio_lens)
523 reference_spec, reference_spec_len = None, None
524 if reference_audio is not None:
525 reference_spec, reference_spec_len = self.preprocessor(
526 input_signal=reference_audio, length=reference_audio_len
527 )
528
529 # Calculate val loss on ground truth durations to better align L2 loss in time
530 (mels_pred, _, _, log_durs_pred, pitch_pred, _, _, _, attn_hard_dur, pitch, energy_pred, energy_tgt,) = self(
531 text=text,
532 durs=durs,
533 pitch=pitch,
534 energy=energy,
535 speaker=speaker,
536 pace=1.0,
537 spec=mels if self.learn_alignment else None,
538 reference_spec=reference_spec,
539 reference_spec_lens=reference_spec_len,
540 attn_prior=attn_prior,
541 mel_lens=mel_lens,
542 input_lens=text_lens,
543 )
544 if durs is None:
545 durs = attn_hard_dur
546
547 mel_loss = self.mel_loss_fn(spect_predicted=mels_pred, spect_tgt=mels)
548 dur_loss = self.duration_loss_fn(log_durs_predicted=log_durs_pred, durs_tgt=durs, len=text_lens)
549 pitch_loss = self.pitch_loss_fn(pitch_predicted=pitch_pred, pitch_tgt=pitch, len=text_lens)
550 energy_loss = self.energy_loss_fn(energy_predicted=energy_pred, energy_tgt=energy_tgt, length=text_lens)
551 loss = mel_loss + dur_loss + pitch_loss + energy_loss
552
553 val_outputs = {
554 "val_loss": loss,
555 "mel_loss": mel_loss,
556 "dur_loss": dur_loss,
557 "pitch_loss": pitch_loss,
558 "energy_loss": energy_loss if energy_tgt is not None else None,
559 "mel_target": mels if batch_idx == 0 else None,
560 "mel_pred": mels_pred if batch_idx == 0 else None,
561 }
562 self.validation_step_outputs.append(val_outputs)
563 return val_outputs
564
565 def on_validation_epoch_end(self):
566 collect = lambda key: torch.stack([x[key] for x in self.validation_step_outputs]).mean()
567 val_loss = collect("val_loss")
568 mel_loss = collect("mel_loss")
569 dur_loss = collect("dur_loss")
570 pitch_loss = collect("pitch_loss")
571 self.log("val_loss", val_loss, sync_dist=True)
572 self.log("val_mel_loss", mel_loss, sync_dist=True)
573 self.log("val_dur_loss", dur_loss, sync_dist=True)
574 self.log("val_pitch_loss", pitch_loss, sync_dist=True)
575 if self.validation_step_outputs[0]["energy_loss"] is not None:
576 energy_loss = collect("energy_loss")
577 self.log("val_energy_loss", energy_loss, sync_dist=True)
578
579 _, _, _, _, _, spec_target, spec_predict = self.validation_step_outputs[0].values()
580
581 if self.log_images and isinstance(self.logger, TensorBoardLogger):
582 self.tb_logger.add_image(
583 "val_mel_target",
584 plot_spectrogram_to_numpy(spec_target[0].data.cpu().float().numpy()),
585 self.global_step,
586 dataformats="HWC",
587 )
588 spec_predict = spec_predict[0].data.cpu().float().numpy()
589 self.tb_logger.add_image(
590 "val_mel_predicted", plot_spectrogram_to_numpy(spec_predict), self.global_step, dataformats="HWC",
591 )
592 self.log_train_images = True
593 self.validation_step_outputs.clear() # free memory
594
595 def _setup_train_dataloader(self, cfg):
596 phon_mode = contextlib.nullcontext()
597 if hasattr(self.vocab, "set_phone_prob"):
598 phon_mode = self.vocab.set_phone_prob(self.vocab.phoneme_probability)
599
600 with phon_mode:
601 dataset = instantiate(cfg.dataset, text_tokenizer=self.vocab,)
602
603 sampler = dataset.get_sampler(cfg.dataloader_params.batch_size)
604 return torch.utils.data.DataLoader(
605 dataset, collate_fn=dataset.collate_fn, sampler=sampler, **cfg.dataloader_params
606 )
607
608 def _setup_test_dataloader(self, cfg):
609 phon_mode = contextlib.nullcontext()
610 if hasattr(self.vocab, "set_phone_prob"):
611 phon_mode = self.vocab.set_phone_prob(0.0)
612
613 with phon_mode:
614 dataset = instantiate(cfg.dataset, text_tokenizer=self.vocab,)
615
616 return torch.utils.data.DataLoader(dataset, collate_fn=dataset.collate_fn, **cfg.dataloader_params)
617
618 def __setup_dataloader_from_config(self, cfg, shuffle_should_be: bool = True, name: str = "train"):
619 if "dataset" not in cfg or not isinstance(cfg.dataset, DictConfig):
620 raise ValueError(f"No dataset for {name}")
621 if "dataloader_params" not in cfg or not isinstance(cfg.dataloader_params, DictConfig):
622 raise ValueError(f"No dataloader_params for {name}")
623 if shuffle_should_be:
624 if 'shuffle' not in cfg.dataloader_params:
625 logging.warning(
626 f"Shuffle should be set to True for {self}'s {name} dataloader but was not found in its "
627 "config. Manually setting to True"
628 )
629 with open_dict(cfg.dataloader_params):
630 cfg.dataloader_params.shuffle = True
631 elif not cfg.dataloader_params.shuffle:
632 logging.error(f"The {name} dataloader for {self} has shuffle set to False!!!")
633 elif cfg.dataloader_params.shuffle:
634 logging.error(f"The {name} dataloader for {self} has shuffle set to True!!!")
635
636 if self.ds_class == "nemo.collections.tts.data.dataset.TTSDataset":
637 phon_mode = contextlib.nullcontext()
638 if hasattr(self.vocab, "set_phone_prob"):
639 phon_mode = self.vocab.set_phone_prob(prob=None if name == "val" else self.vocab.phoneme_probability)
640
641 with phon_mode:
642 dataset = instantiate(
643 cfg.dataset,
644 text_normalizer=self.normalizer,
645 text_normalizer_call_kwargs=self.text_normalizer_call_kwargs,
646 text_tokenizer=self.vocab,
647 )
648 else:
649 dataset = instantiate(cfg.dataset)
650
651 return torch.utils.data.DataLoader(dataset, collate_fn=dataset.collate_fn, **cfg.dataloader_params)
652
653 def setup_training_data(self, cfg):
654 if self.ds_class == "nemo.collections.tts.data.text_to_speech_dataset.TextToSpeechDataset":
655 self._train_dl = self._setup_train_dataloader(cfg)
656 else:
657 self._train_dl = self.__setup_dataloader_from_config(cfg)
658
659 def setup_validation_data(self, cfg):
660 if self.ds_class == "nemo.collections.tts.data.text_to_speech_dataset.TextToSpeechDataset":
661 self._validation_dl = self._setup_test_dataloader(cfg)
662 else:
663 self._validation_dl = self.__setup_dataloader_from_config(cfg, shuffle_should_be=False, name="val")
664
665 def setup_test_data(self, cfg):
666 """Omitted."""
667 pass
668
669 def configure_callbacks(self):
670 if not self.log_config:
671 return []
672
673 sample_ds_class = self.log_config.dataset._target_
674 if sample_ds_class != "nemo.collections.tts.data.text_to_speech_dataset.TextToSpeechDataset":
675 raise ValueError(f"Logging callback only supported for TextToSpeechDataset, got {sample_ds_class}")
676
677 data_loader = self._setup_test_dataloader(self.log_config)
678
679 generators = instantiate(self.log_config.generators)
680 log_dir = Path(self.log_config.log_dir) if self.log_config.log_dir else None
681 log_callback = LoggingCallback(
682 generators=generators,
683 data_loader=data_loader,
684 log_epochs=self.log_config.log_epochs,
685 epoch_frequency=self.log_config.epoch_frequency,
686 output_dir=log_dir,
687 loggers=self.trainer.loggers,
688 log_tensorboard=self.log_config.log_tensorboard,
689 log_wandb=self.log_config.log_wandb,
690 )
691
692 return [log_callback]
693
694 @classmethod
695 def list_available_models(cls) -> 'List[PretrainedModelInfo]':
696 """
697 This method returns a list of pre-trained model which can be instantiated directly from NVIDIA's NGC cloud.
698 Returns:
699 List of available pre-trained models.
700 """
701 list_of_models = []
702
703 # en-US, single speaker, 22050Hz, LJSpeech (ARPABET).
704 model = PretrainedModelInfo(
705 pretrained_model_name="tts_en_fastpitch",
706 location="https://api.ngc.nvidia.com/v2/models/nvidia/nemo/tts_en_fastpitch/versions/1.8.1/files/tts_en_fastpitch_align.nemo",
707 description="This model is trained on LJSpeech sampled at 22050Hz and can be used to generate female English voices with an American accent. It is ARPABET-based.",
708 class_=cls,
709 )
710 list_of_models.append(model)
711
712 # en-US, single speaker, 22050Hz, LJSpeech (IPA).
713 model = PretrainedModelInfo(
714 pretrained_model_name="tts_en_fastpitch_ipa",
715 location="https://api.ngc.nvidia.com/v2/models/nvidia/nemo/tts_en_fastpitch/versions/IPA_1.13.0/files/tts_en_fastpitch_align_ipa.nemo",
716 description="This model is trained on LJSpeech sampled at 22050Hz and can be used to generate female English voices with an American accent. It is IPA-based.",
717 class_=cls,
718 )
719 list_of_models.append(model)
720
721 # en-US, multi-speaker, 44100Hz, HiFiTTS.
722 model = PretrainedModelInfo(
723 pretrained_model_name="tts_en_fastpitch_multispeaker",
724 location="https://api.ngc.nvidia.com/v2/models/nvidia/nemo/tts_en_multispeaker_fastpitchhifigan/versions/1.10.0/files/tts_en_fastpitch_multispeaker.nemo",
725 description="This model is trained on HiFiTTS sampled at 44100Hz and can be used to generate male and female English voices with an American accent.",
726 class_=cls,
727 )
728 list_of_models.append(model)
729
730 # de-DE, single male speaker, grapheme-based tokenizer, 22050 Hz, Thorsten Müller's German Neutral-TTS Dataset, 21.02
731 model = PretrainedModelInfo(
732 pretrained_model_name="tts_de_fastpitch_singleSpeaker_thorstenNeutral_2102",
733 location="https://api.ngc.nvidia.com/v2/models/nvidia/nemo/tts_de_fastpitchhifigan/versions/1.15.0/files/tts_de_fastpitch_thorstens2102.nemo",
734 description="This model is trained on data from a single male speaker in Thorsten Müller's German Neutral 21.02 Dataset sampled at 22050Hz and can be used to generate male German voices.",
735 class_=cls,
736 )
737 list_of_models.append(model)
738
739 # de-DE, single male speaker, grapheme-based tokenizer, 22050 Hz, Thorsten Müller's German Neutral-TTS Dataset, 22.10
740 model = PretrainedModelInfo(
741 pretrained_model_name="tts_de_fastpitch_singleSpeaker_thorstenNeutral_2210",
742 location="https://api.ngc.nvidia.com/v2/models/nvidia/nemo/tts_de_fastpitchhifigan/versions/1.15.0/files/tts_de_fastpitch_thorstens2210.nemo",
743 description="This model is trained on data from a single male speaker in Thorsten Müller's German Neutral 22.10 Dataset sampled at 22050Hz and can be used to generate male German voices.",
744 class_=cls,
745 )
746 list_of_models.append(model)
747
748 # de-DE, multi-speaker, 5 speakers, 44100 Hz, HUI-Audio-Corpus-German Clean.
749 model = PretrainedModelInfo(
750 pretrained_model_name="tts_de_fastpitch_multispeaker_5",
751 location="https://api.ngc.nvidia.com/v2/models/nvidia/nemo/tts_de_fastpitch_multispeaker_5/versions/1.11.0/files/tts_de_fastpitch_multispeaker_5.nemo",
752 description="This model is trained on 5 speakers in HUI-Audio-Corpus-German clean subset sampled at 44100Hz and can be used to generate male and female German voices.",
753 class_=cls,
754 )
755 list_of_models.append(model)
756
757 # es, 174 speakers, 44100Hz, OpenSLR (IPA)
758 model = PretrainedModelInfo(
759 pretrained_model_name="tts_es_fastpitch_multispeaker",
760 location="https://api.ngc.nvidia.com/v2/models/nvidia/nemo/tts_es_multispeaker_fastpitchhifigan/versions/1.15.0/files/tts_es_fastpitch_multispeaker.nemo",
761 description="This model is trained on 174 speakers in 6 crowdsourced Latin American Spanish OpenSLR datasets sampled at 44100Hz and can be used to generate male and female Spanish voices with Latin American accents.",
762 class_=cls,
763 )
764 list_of_models.append(model)
765
766 # zh, single female speaker, 22050Hz, SFSpeech Bilingual Chinese/English dataset, improved model using richer
767 # dict and jieba word segmenter for polyphone disambiguation.
768 model = PretrainedModelInfo(
769 pretrained_model_name="tts_zh_fastpitch_sfspeech",
770 location="https://api.ngc.nvidia.com/v2/models/nvidia/nemo/tts_zh_fastpitch_hifigan_sfspeech/versions/1.15.0/files/tts_zh_fastpitch_sfspeech.nemo",
771 description="This model is trained on a single female speaker in SFSpeech Bilingual Chinese/English dataset"
772 " sampled at 22050Hz and can be used to generate female Mandarin Chinese voices. It is improved"
773 " using richer dict and jieba word segmenter for polyphone disambiguation.",
774 class_=cls,
775 )
776 list_of_models.append(model)
777
778 # en, multi speaker, LibriTTS, 16000 Hz
779 # stft 25ms 10ms matching ASR params
780 # for use during English ASR training/adaptation
781 model = PretrainedModelInfo(
782 pretrained_model_name="tts_en_fastpitch_for_asr_finetuning",
783 location="https://api.ngc.nvidia.com/v2/models/nvidia/nemo/tts_en_fastpitch_spectrogram_enhancer_for_asr_finetuning/versions/1.20.0/files/tts_en_fastpitch_for_asr_finetuning.nemo",
784 description="This model is trained on LibriSpeech, train-960 subset."
785 " STFT parameters follow those commonly used in ASR: 25 ms window, 10 ms hop."
786 " This model is supposed to be used with its companion SpectrogramEnhancer for"
787 " ASR fine-tuning. Usage for regular TTS tasks is not advised.",
788 class_=cls,
789 )
790 list_of_models.append(model)
791
792 return list_of_models
793
794 # Methods for model exportability
795 def _prepare_for_export(self, **kwargs):
796 super()._prepare_for_export(**kwargs)
797
798 tensor_shape = ('T') if self.export_config["enable_ragged_batches"] else ('B', 'T')
799
800 # Define input_types and output_types as required by export()
801 self._input_types = {
802 "text": NeuralType(tensor_shape, TokenIndex()),
803 "pitch": NeuralType(tensor_shape, RegressionValuesType()),
804 "pace": NeuralType(tensor_shape),
805 "volume": NeuralType(tensor_shape, optional=True),
806 "batch_lengths": NeuralType(('B'), optional=True),
807 "speaker": NeuralType(('B'), Index(), optional=True),
808 }
809 self._output_types = {
810 "spect": NeuralType(('B', 'D', 'T'), MelSpectrogramType()),
811 "num_frames": NeuralType(('B'), TokenDurationType()),
812 "durs_predicted": NeuralType(('B', 'T'), TokenDurationType()),
813 "log_durs_predicted": NeuralType(('B', 'T'), TokenLogDurationType()),
814 "pitch_predicted": NeuralType(('B', 'T'), RegressionValuesType()),
815 }
816 if self.export_config["enable_volume"]:
817 self._output_types["volume_aligned"] = NeuralType(('B', 'T'), RegressionValuesType())
818
819 def _export_teardown(self):
820 self._input_types = self._output_types = None
821
822 @property
823 def disabled_deployment_input_names(self):
824 """Implement this method to return a set of input names disabled for export"""
825 disabled_inputs = set()
826 if self.fastpitch.speaker_emb is None:
827 disabled_inputs.add("speaker")
828 if not self.export_config["enable_ragged_batches"]:
829 disabled_inputs.add("batch_lengths")
830 if not self.export_config["enable_volume"]:
831 disabled_inputs.add("volume")
832 return disabled_inputs
833
834 @property
835 def input_types(self):
836 return self._input_types
837
838 @property
839 def output_types(self):
840 return self._output_types
841
842 def input_example(self, max_batch=1, max_dim=44):
843 """
844 Generates input examples for tracing etc.
845 Returns:
846 A tuple of input examples.
847 """
848 par = next(self.fastpitch.parameters())
849 inputs = sample_tts_input(self.export_config, par.device, max_batch=max_batch, max_dim=max_dim)
850 if 'enable_ragged_batches' not in self.export_config:
851 inputs.pop('batch_lengths', None)
852 return (inputs,)
853
854 def forward_for_export(self, text, pitch, pace, volume=None, batch_lengths=None, speaker=None):
855 if self.export_config["enable_ragged_batches"]:
856 text, pitch, pace, volume_tensor, lens = batch_from_ragged(
857 text, pitch, pace, batch_lengths, padding_idx=self.fastpitch.encoder.padding_idx, volume=volume
858 )
859 if volume is not None:
860 volume = volume_tensor
861 return self.fastpitch.infer(text=text, pitch=pitch, pace=pace, volume=volume, speaker=speaker)
862
863 def interpolate_speaker(
864 self, original_speaker_1, original_speaker_2, weight_speaker_1, weight_speaker_2, new_speaker_id
865 ):
866 """
867 This method performs speaker interpolation between two original speakers the model is trained on.
868
869 Inputs:
870 original_speaker_1: Integer speaker ID of first existing speaker in the model
871 original_speaker_2: Integer speaker ID of second existing speaker in the model
872 weight_speaker_1: Floating point weight associated with the first speaker during weight combination
873 weight_speaker_2: Floating point weight associated with the second speaker during weight combination
874 new_speaker_id: Integer speaker ID of new interpolated speaker in the model
875 """
876 if self.fastpitch.speaker_emb is None:
877 raise Exception(
878 "Current FastPitch model is not a multi-speaker FastPitch model. Speaker interpolation can only"
879 " be performed with a multi-speaker model."
880 )
881 n_speakers = self.fastpitch.speaker_emb.weight.data.size()[0]
882 if original_speaker_1 >= n_speakers or original_speaker_2 >= n_speakers or new_speaker_id >= n_speakers:
883 raise Exception(
884 f"Parameters original_speaker_1, original_speaker_2, new_speaker_id should be less than the total"
885 f" number of speakers FastPitch was trained on (n_speakers = {n_speakers})."
886 )
887 speaker_emb_1 = (
888 self.fastpitch.speaker_emb(torch.tensor(original_speaker_1, dtype=torch.int32).cuda()).clone().detach()
889 )
890 speaker_emb_2 = (
891 self.fastpitch.speaker_emb(torch.tensor(original_speaker_2, dtype=torch.int32).cuda()).clone().detach()
892 )
893 new_speaker_emb = weight_speaker_1 * speaker_emb_1 + weight_speaker_2 * speaker_emb_2
894 self.fastpitch.speaker_emb.weight.data[new_speaker_id] = new_speaker_emb
895
[end of nemo/collections/tts/models/fastpitch.py]
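The embedding arithmetic in `FastPitchModel.interpolate_speaker` above reduces to a weighted sum of two embedding rows written into a third slot of the same table. A minimal pure-Python sketch of that update, using nested lists in place of torch tensors (the `interpolate` helper and its names are illustrative, not part of the NeMo API):

```python
def interpolate(emb_table, spk1, spk2, w1, w2, new_id):
    """Write w1 * emb_table[spk1] + w2 * emb_table[spk2] into emb_table[new_id]."""
    n_speakers = len(emb_table)
    if spk1 >= n_speakers or spk2 >= n_speakers or new_id >= n_speakers:
        # Mirrors the bounds check against the model's speaker embedding table.
        raise ValueError("speaker ids must be < number of trained speakers")
    emb_table[new_id] = [
        w1 * a + w2 * b for a, b in zip(emb_table[spk1], emb_table[spk2])
    ]
    return emb_table[new_id]

# Two orthogonal 2-d "speaker embeddings" blended 50/50 into slot 2:
table = [[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]]
blended = interpolate(table, 0, 1, 0.5, 0.5, 2)
```

Note that, as in the model, the interpolated vector overwrites an existing slot, so `new_speaker_id` must already lie within the trained speaker range.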
[start of nemo/collections/tts/models/tacotron2.py]
1 # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import contextlib
16 from dataclasses import dataclass
17 from typing import Any, Dict, List, Optional
18
19 import torch
20 from hydra.utils import instantiate
21 from omegaconf import MISSING, DictConfig, OmegaConf, open_dict
22 from omegaconf.errors import ConfigAttributeError
23 from pytorch_lightning.loggers import TensorBoardLogger, WandbLogger
24 from torch import nn
25
26 from nemo.collections.common.parts.preprocessing import parsers
27 from nemo.collections.tts.losses.tacotron2loss import Tacotron2Loss
28 from nemo.collections.tts.models.base import SpectrogramGenerator
29 from nemo.collections.tts.parts.utils.helpers import (
30 g2p_backward_compatible_support,
31 get_mask_from_lengths,
32 tacotron2_log_to_tb_func,
33 tacotron2_log_to_wandb_func,
34 )
35 from nemo.core.classes.common import PretrainedModelInfo, typecheck
36 from nemo.core.neural_types.elements import (
37 AudioSignal,
38 EmbeddedTextType,
39 LengthsType,
40 LogitsType,
41 MelSpectrogramType,
42 SequenceToSequenceAlignmentType,
43 )
44 from nemo.core.neural_types.neural_type import NeuralType
45 from nemo.utils import logging, model_utils
46
47
48 @dataclass
49 class Preprocessor:
50 _target_: str = MISSING
51 pad_value: float = MISSING
52
53
54 @dataclass
55 class Tacotron2Config:
56 preprocessor: Preprocessor = Preprocessor()
57 encoder: Dict[Any, Any] = MISSING
58 decoder: Dict[Any, Any] = MISSING
59 postnet: Dict[Any, Any] = MISSING
60 labels: List = MISSING
61 train_ds: Optional[Dict[Any, Any]] = None
62 validation_ds: Optional[Dict[Any, Any]] = None
63
64
65 class Tacotron2Model(SpectrogramGenerator):
66 """Tacotron 2 Model that is used to generate mel spectrograms from text"""
67
68 def __init__(self, cfg: DictConfig, trainer: 'Trainer' = None):
69 # Convert to Hydra 1.0 compatible DictConfig
70 cfg = model_utils.convert_model_config_to_dict_config(cfg)
71 cfg = model_utils.maybe_update_config_version(cfg)
72
73 # setup normalizer
74 self.normalizer = None
75 self.text_normalizer_call = None
76 self.text_normalizer_call_kwargs = {}
77 self._setup_normalizer(cfg)
78
79 # setup tokenizer
80 self.tokenizer = None
81 if hasattr(cfg, 'text_tokenizer'):
82 self._setup_tokenizer(cfg)
83
84 self.num_tokens = len(self.tokenizer.tokens)
85 self.tokenizer_pad = self.tokenizer.pad
86 self.tokenizer_unk = self.tokenizer.oov
87 # assert self.tokenizer is not None
88 else:
89 self.num_tokens = len(cfg.labels) + 3
90
91 super().__init__(cfg=cfg, trainer=trainer)
92
93 schema = OmegaConf.structured(Tacotron2Config)
94 # ModelPT ensures that cfg is a DictConfig, but do this second check in case ModelPT changes
95 if isinstance(cfg, dict):
96 cfg = OmegaConf.create(cfg)
97 elif not isinstance(cfg, DictConfig):
98 raise ValueError(f"cfg was type: {type(cfg)}. Expected either a dict or a DictConfig")
99 # Ensure passed cfg is compliant with schema
100 try:
101 OmegaConf.merge(cfg, schema)
102 self.pad_value = cfg.preprocessor.pad_value
103 except ConfigAttributeError:
104 self.pad_value = cfg.preprocessor.params.pad_value
105 logging.warning(
106 "Your config is using an old NeMo yaml configuration. Please ensure that the yaml matches the "
107 "current version in the main branch for future compatibility."
108 )
109
110 self._parser = None
111 self.audio_to_melspec_precessor = instantiate(cfg.preprocessor)
112 self.text_embedding = nn.Embedding(self.num_tokens, 512)
113 self.encoder = instantiate(self._cfg.encoder)
114 self.decoder = instantiate(self._cfg.decoder)
115 self.postnet = instantiate(self._cfg.postnet)
116 self.loss = Tacotron2Loss()
117 self.calculate_loss = True
118
119 @property
120 def parser(self):
121 if self._parser is not None:
122 return self._parser
123
124 ds_class_name = self._cfg.train_ds.dataset._target_.split(".")[-1]
125 if ds_class_name == "TTSDataset":
126 self._parser = None
127 elif hasattr(self._cfg, "labels"):
128 self._parser = parsers.make_parser(
129 labels=self._cfg.labels,
130 name='en',
131 unk_id=-1,
132 blank_id=-1,
133 do_normalize=True,
134 abbreviation_version="fastpitch",
135 make_table=False,
136 )
137 else:
138             raise ValueError("Wanted to set up the parser, but the model does not have the necessary parameters")
139
140 return self._parser
141
142 def parse(self, text: str, normalize=True) -> torch.Tensor:
143 if self.training:
144 logging.warning("parse() is meant to be called in eval mode.")
145 if normalize and self.text_normalizer_call is not None:
146 text = self.text_normalizer_call(text, **self.text_normalizer_call_kwargs)
147
148 eval_phon_mode = contextlib.nullcontext()
149 if hasattr(self.tokenizer, "set_phone_prob"):
150 eval_phon_mode = self.tokenizer.set_phone_prob(prob=1.0)
151
152 with eval_phon_mode:
153 if self.tokenizer is not None:
154 tokens = self.tokenizer.encode(text)
155 else:
156 tokens = self.parser(text)
157                 # Old parser doesn't add bos and eos ids, so manually add them
158 tokens = [len(self._cfg.labels)] + tokens + [len(self._cfg.labels) + 1]
159 tokens_tensor = torch.tensor(tokens).unsqueeze_(0).to(self.device)
160 return tokens_tensor
161
162 @property
163 def input_types(self):
164 if self.training:
165 return {
166 "tokens": NeuralType(('B', 'T'), EmbeddedTextType()),
167 "token_len": NeuralType(('B'), LengthsType()),
168 "audio": NeuralType(('B', 'T'), AudioSignal()),
169 "audio_len": NeuralType(('B'), LengthsType()),
170 }
171 else:
172 return {
173 "tokens": NeuralType(('B', 'T'), EmbeddedTextType()),
174 "token_len": NeuralType(('B'), LengthsType()),
175 "audio": NeuralType(('B', 'T'), AudioSignal(), optional=True),
176 "audio_len": NeuralType(('B'), LengthsType(), optional=True),
177 }
178
179 @property
180 def output_types(self):
181 if not self.calculate_loss and not self.training:
182 return {
183 "spec_pred_dec": NeuralType(('B', 'D', 'T'), MelSpectrogramType()),
184 "spec_pred_postnet": NeuralType(('B', 'D', 'T'), MelSpectrogramType()),
185 "gate_pred": NeuralType(('B', 'T'), LogitsType()),
186 "alignments": NeuralType(('B', 'T', 'T'), SequenceToSequenceAlignmentType()),
187 "pred_length": NeuralType(('B'), LengthsType()),
188 }
189 return {
190 "spec_pred_dec": NeuralType(('B', 'D', 'T'), MelSpectrogramType()),
191 "spec_pred_postnet": NeuralType(('B', 'D', 'T'), MelSpectrogramType()),
192 "gate_pred": NeuralType(('B', 'T'), LogitsType()),
193 "spec_target": NeuralType(('B', 'D', 'T'), MelSpectrogramType()),
194 "spec_target_len": NeuralType(('B'), LengthsType()),
195 "alignments": NeuralType(('B', 'T', 'T'), SequenceToSequenceAlignmentType()),
196 }
197
198 @typecheck()
199 def forward(self, *, tokens, token_len, audio=None, audio_len=None):
200 if audio is not None and audio_len is not None:
201 spec_target, spec_target_len = self.audio_to_melspec_precessor(audio, audio_len)
202 else:
203 if self.training or self.calculate_loss:
204 raise ValueError(
205                     "'audio' and 'audio_len' cannot be None when either 'self.training' or 'self.calculate_loss' is True."
206 )
207
208 token_embedding = self.text_embedding(tokens).transpose(1, 2)
209 encoder_embedding = self.encoder(token_embedding=token_embedding, token_len=token_len)
210
211 if self.training:
212 spec_pred_dec, gate_pred, alignments = self.decoder(
213 memory=encoder_embedding, decoder_inputs=spec_target, memory_lengths=token_len
214 )
215 else:
216 spec_pred_dec, gate_pred, alignments, pred_length = self.decoder(
217 memory=encoder_embedding, memory_lengths=token_len
218 )
219
220 spec_pred_postnet = self.postnet(mel_spec=spec_pred_dec)
221
222 if not self.calculate_loss and not self.training:
223 return spec_pred_dec, spec_pred_postnet, gate_pred, alignments, pred_length
224
225 return spec_pred_dec, spec_pred_postnet, gate_pred, spec_target, spec_target_len, alignments
226
227 @typecheck(
228 input_types={"tokens": NeuralType(('B', 'T'), EmbeddedTextType())},
229 output_types={"spec": NeuralType(('B', 'D', 'T'), MelSpectrogramType())},
230 )
231 def generate_spectrogram(self, *, tokens):
232 self.eval()
233 self.calculate_loss = False
234 token_len = torch.tensor([len(i) for i in tokens]).to(self.device)
235 tensors = self(tokens=tokens, token_len=token_len)
236 spectrogram_pred = tensors[1]
237
238 if spectrogram_pred.shape[0] > 1:
239 # Silence all frames past the predicted end
240 mask = ~get_mask_from_lengths(tensors[-1])
241 mask = mask.expand(spectrogram_pred.shape[1], mask.size(0), mask.size(1))
242 mask = mask.permute(1, 0, 2)
243 spectrogram_pred.data.masked_fill_(mask, self.pad_value)
244
245 return spectrogram_pred
246
247 def training_step(self, batch, batch_idx):
248 audio, audio_len, tokens, token_len = batch
249 spec_pred_dec, spec_pred_postnet, gate_pred, spec_target, spec_target_len, _ = self.forward(
250 audio=audio, audio_len=audio_len, tokens=tokens, token_len=token_len
251 )
252
253 loss, _ = self.loss(
254 spec_pred_dec=spec_pred_dec,
255 spec_pred_postnet=spec_pred_postnet,
256 gate_pred=gate_pred,
257 spec_target=spec_target,
258 spec_target_len=spec_target_len,
259 pad_value=self.pad_value,
260 )
261
262 output = {
263 'loss': loss,
264 'progress_bar': {'training_loss': loss},
265 'log': {'loss': loss},
266 }
267 return output
268
269 def validation_step(self, batch, batch_idx):
270 audio, audio_len, tokens, token_len = batch
271 spec_pred_dec, spec_pred_postnet, gate_pred, spec_target, spec_target_len, alignments = self.forward(
272 audio=audio, audio_len=audio_len, tokens=tokens, token_len=token_len
273 )
274
275 loss, gate_target = self.loss(
276 spec_pred_dec=spec_pred_dec,
277 spec_pred_postnet=spec_pred_postnet,
278 gate_pred=gate_pred,
279 spec_target=spec_target,
280 spec_target_len=spec_target_len,
281 pad_value=self.pad_value,
282 )
283 loss = {
284 "val_loss": loss,
285 "mel_target": spec_target,
286 "mel_postnet": spec_pred_postnet,
287 "gate": gate_pred,
288 "gate_target": gate_target,
289 "alignments": alignments,
290 }
291 self.validation_step_outputs.append(loss)
292 return loss
293
294 def on_validation_epoch_end(self):
295 if self.logger is not None and self.logger.experiment is not None:
296 logger = self.logger.experiment
297 for logger in self.trainer.loggers:
298 if isinstance(logger, TensorBoardLogger):
299 logger = logger.experiment
300 break
301 if isinstance(logger, TensorBoardLogger):
302 tacotron2_log_to_tb_func(
303 logger,
304 self.validation_step_outputs[0].values(),
305 self.global_step,
306 tag="val",
307 log_images=True,
308 add_audio=False,
309 )
310 elif isinstance(logger, WandbLogger):
311 tacotron2_log_to_wandb_func(
312 logger,
313 self.validation_step_outputs[0].values(),
314 self.global_step,
315 tag="val",
316 log_images=True,
317 add_audio=False,
318 )
319 avg_loss = torch.stack(
320 [x['val_loss'] for x in self.validation_step_outputs]
321 ).mean() # This reduces across batches, not workers!
322 self.log('val_loss', avg_loss)
323 self.validation_step_outputs.clear() # free memory
324
325 def _setup_normalizer(self, cfg):
326 if "text_normalizer" in cfg:
327 normalizer_kwargs = {}
328
329 if "whitelist" in cfg.text_normalizer:
330 normalizer_kwargs["whitelist"] = self.register_artifact(
331 'text_normalizer.whitelist', cfg.text_normalizer.whitelist
332 )
333
334 try:
335 import nemo_text_processing
336
337 self.normalizer = instantiate(cfg.text_normalizer, **normalizer_kwargs)
338 except Exception as e:
339 logging.error(e)
340 raise ImportError(
341 "`nemo_text_processing` not installed, see https://github.com/NVIDIA/NeMo-text-processing for more details"
342 )
343
344 self.text_normalizer_call = self.normalizer.normalize
345 if "text_normalizer_call_kwargs" in cfg:
346 self.text_normalizer_call_kwargs = cfg.text_normalizer_call_kwargs
347
348 def _setup_tokenizer(self, cfg):
349 text_tokenizer_kwargs = {}
350 if "g2p" in cfg.text_tokenizer and cfg.text_tokenizer.g2p is not None:
351 # for backward compatibility
352 if (
353 self._is_model_being_restored()
354 and (cfg.text_tokenizer.g2p.get('_target_', None) is not None)
355 and cfg.text_tokenizer.g2p["_target_"].startswith("nemo_text_processing.g2p")
356 ):
357 cfg.text_tokenizer.g2p["_target_"] = g2p_backward_compatible_support(
358 cfg.text_tokenizer.g2p["_target_"]
359 )
360
361 g2p_kwargs = {}
362
363 if "phoneme_dict" in cfg.text_tokenizer.g2p:
364 g2p_kwargs["phoneme_dict"] = self.register_artifact(
365 'text_tokenizer.g2p.phoneme_dict', cfg.text_tokenizer.g2p.phoneme_dict,
366 )
367
368 if "heteronyms" in cfg.text_tokenizer.g2p:
369 g2p_kwargs["heteronyms"] = self.register_artifact(
370 'text_tokenizer.g2p.heteronyms', cfg.text_tokenizer.g2p.heteronyms,
371 )
372
373 text_tokenizer_kwargs["g2p"] = instantiate(cfg.text_tokenizer.g2p, **g2p_kwargs)
374
375 self.tokenizer = instantiate(cfg.text_tokenizer, **text_tokenizer_kwargs)
376
377 def __setup_dataloader_from_config(self, cfg, shuffle_should_be: bool = True, name: str = "train"):
378 if "dataset" not in cfg or not isinstance(cfg.dataset, DictConfig):
379 raise ValueError(f"No dataset for {name}")
380 if "dataloader_params" not in cfg or not isinstance(cfg.dataloader_params, DictConfig):
381 raise ValueError(f"No dataloder_params for {name}")
382 if shuffle_should_be:
383 if 'shuffle' not in cfg.dataloader_params:
384 logging.warning(
385 f"Shuffle should be set to True for {self}'s {name} dataloader but was not found in its "
386 "config. Manually setting to True"
387 )
388 with open_dict(cfg.dataloader_params):
389 cfg.dataloader_params.shuffle = True
390 elif not cfg.dataloader_params.shuffle:
391 logging.error(f"The {name} dataloader for {self} has shuffle set to False!!!")
392 elif not shuffle_should_be and cfg.dataloader_params.shuffle:
393 logging.error(f"The {name} dataloader for {self} has shuffle set to True!!!")
394
395 dataset = instantiate(
396 cfg.dataset,
397 text_normalizer=self.normalizer,
398 text_normalizer_call_kwargs=self.text_normalizer_call_kwargs,
399 text_tokenizer=self.tokenizer,
400 )
401
402 return torch.utils.data.DataLoader(dataset, collate_fn=dataset.collate_fn, **cfg.dataloader_params)
403
404 def setup_training_data(self, cfg):
405 self._train_dl = self.__setup_dataloader_from_config(cfg)
406
407 def setup_validation_data(self, cfg):
408 self._validation_dl = self.__setup_dataloader_from_config(cfg, shuffle_should_be=False, name="validation")
409
410 @classmethod
411 def list_available_models(cls) -> 'List[PretrainedModelInfo]':
412 """
413         This method returns a list of pre-trained models which can be instantiated directly from NVIDIA's NGC cloud.
414 Returns:
415 List of available pre-trained models.
416 """
417 list_of_models = []
418 model = PretrainedModelInfo(
419 pretrained_model_name="tts_en_tacotron2",
420 location="https://api.ngc.nvidia.com/v2/models/nvidia/nemo/tts_en_tacotron2/versions/1.10.0/files/tts_en_tacotron2.nemo",
421 description="This model is trained on LJSpeech sampled at 22050Hz, and can be used to generate female English voices with an American accent.",
422 class_=cls,
423 aliases=["Tacotron2-22050Hz"],
424 )
425 list_of_models.append(model)
426 return list_of_models
427
[end of nemo/collections/tts/models/tacotron2.py]
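In `Tacotron2Model.parse` above, the legacy-parser branch wraps the raw token ids with bos/eos ids derived from the label count, which also lines up with the `len(cfg.labels) + 3` vocabulary size in `__init__` (labels plus bos, eos, and one extra id). A tiny sketch of that wrapping (the helper name is illustrative, not a NeMo function):

```python
def add_bos_eos(tokens, num_labels):
    # bos id = num_labels, eos id = num_labels + 1, mirroring
    # "[len(self._cfg.labels)] + tokens + [len(self._cfg.labels) + 1]".
    return [num_labels] + tokens + [num_labels + 1]

# With 10 labels, token ids 0..9 are characters, 10 is bos, 11 is eos:
wrapped = add_bos_eos([3, 1, 4], 10)
```

The tokenizer path does not need this step because `self.tokenizer.encode` already emits bos/eos ids itself.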
[start of nemo/core/config/modelPT.py]
1 # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from dataclasses import dataclass, field
16 from typing import Any, Dict, Optional
17
18 from omegaconf import MISSING
19
20 from nemo.core import config
21 from nemo.core.classes.dataset import DatasetConfig
22 from nemo.utils import exp_manager
23
24
25 @dataclass
26 class SchedConfig:
27 name: str = MISSING
28 min_lr: float = 0.0
29 last_epoch: int = -1
30
31
32 @dataclass
33 class OptimConfig:
34 name: str = MISSING
35 sched: Optional[SchedConfig] = None
36
37
38 @dataclass
39 class ModelConfig:
40 """
41 Model component inside ModelPT
42 """
43
44 # ...
45 train_ds: Optional[DatasetConfig] = None
46 validation_ds: Optional[DatasetConfig] = None
47 test_ds: Optional[DatasetConfig] = None
48 optim: Optional[OptimConfig] = None
49
50
51 @dataclass
52 class HydraConfig:
53 run: Dict[str, Any] = field(default_factory=lambda: {"dir": "."})
54 job_logging: Dict[str, Any] = field(default_factory=lambda: {"root": {"handlers": None}})
55
56
57 @dataclass
58 class NemoConfig:
59 name: str = MISSING
60 model: ModelConfig = MISSING
61 trainer: config.TrainerConfig = config.TrainerConfig(
62 strategy="ddp", enable_checkpointing=False, logger=False, log_every_n_steps=1, accelerator='gpu'
63 )
64 exp_manager: Optional[Any] = exp_manager.ExpManagerConfig()
65 hydra: HydraConfig = HydraConfig()
66
67
68 class ModelConfigBuilder:
69 def __init__(self, model_cfg: ModelConfig):
70 """
71 Base class for any Model Config Builder.
72
73 A Model Config Builder is a utility class that accepts a ModelConfig dataclass,
74 and via a set of utility methods (that are implemented by the subclassed ModelConfigBuilder),
75 builds a finalized ModelConfig that can be supplied to a NemoModel dataclass as
76 the `model` component.
77
78 Subclasses *must* implement the private method `_finalize_cfg`.
79 Inside this method, they must update `self.model_cfg` with all interdependent config
80 options that need to be set (either updated by user explicitly or with their default value).
81
82 The updated model config must then be preserved in `self.model_cfg`.
83
84 Example:
85 # Create the config builder
86 config_builder = <subclass>ModelConfigBuilder()
87
88 # Update the components of the config that are modifiable
89 config_builder.set_X(X)
90 config_builder.set_Y(Y)
91
92 # Create a "finalized" config dataclass that will contain all the updates
93 # that were specified by the builder
94 model_config = config_builder.build()
95
96 # Use model config as is (or further update values), then create a new Model
97 model = nemo.<domain>.models.<ModelName>Model(cfg=model_config, trainer=Trainer())
98
99 Supported build methods:
100 - set_train_ds: All model configs can accept a subclass of `DatasetConfig` as their
101 training config. Subclasses can override this method to enable auto-complete
102 by replacing `Optional[DatasetConfig]` with `Optional[<subclass of DatasetConfig>]`.
103
104 - set_validation_ds: All model configs can accept a subclass of `DatasetConfig` as their
105 validation config. Subclasses can override this method to enable auto-complete
106 by replacing `Optional[DatasetConfig]` with `Optional[<subclass of DatasetConfig>]`.
107
108 - set_test_ds: All model configs can accept a subclass of `DatasetConfig` as their
109 test config. Subclasses can override this method to enable auto-complete
110 by replacing `Optional[DatasetConfig]` with `Optional[<subclass of DatasetConfig>]`.
111
112 - set_optim: A build method that supports changes to the Optimizer (and optionally,
113 the Scheduler) used for training the model. The function accepts two inputs -
114
115 `cfg`: A subclass of `OptimizerParams` - any OptimizerParams subclass can be used,
116 in order to select an appropriate Optimizer. Examples: AdamParams.
117
118 `sched_cfg`: A subclass of `SchedulerParams` - any SchedulerParams subclass can be used,
119 in order to select an appropriate Scheduler. Examples: CosineAnnealingParams.
120 Note that this argument is optional.
121
122 - build(): The method which should return a "finalized" ModelConfig dataclass.
123 Subclasses *should* always override this method, and update the signature
124 of this method with the return type of the Dataclass, so that it enables
125 autocomplete for the user.
126
127 Example:
128 def build(self) -> EncDecCTCConfig:
129 return super().build()
130
131 Any additional build methods must be added by subclasses of ModelConfigBuilder.
132
133 Args:
134 model_cfg:
135 """
136 self.model_cfg = model_cfg
137 self.train_ds_cfg = None
138 self.validation_ds_cfg = None
139 self.test_ds_cfg = None
140 self.optim_cfg = None
141
142 def set_train_ds(self, cfg: Optional[DatasetConfig] = None):
143 self.model_cfg.train_ds = cfg
144
145 def set_validation_ds(self, cfg: Optional[DatasetConfig] = None):
146 self.model_cfg.validation_ds = cfg
147
148 def set_test_ds(self, cfg: Optional[DatasetConfig] = None):
149 self.model_cfg.test_ds = cfg
150
151 def set_optim(self, cfg: config.OptimizerParams, sched_cfg: Optional[config.SchedulerParams] = None):
152 @dataclass
153 class WrappedOptimConfig(OptimConfig, cfg.__class__):
154 pass
155
156 # Setup optim
157 optim_name = cfg.__class__.__name__.replace("Params", "").lower()
158 wrapped_cfg = WrappedOptimConfig(name=optim_name, sched=None, **vars(cfg))
159
160 if sched_cfg is not None:
161
162 @dataclass
163 class WrappedSchedConfig(SchedConfig, sched_cfg.__class__):
164 pass
165
166 # Setup scheduler
167 sched_name = sched_cfg.__class__.__name__.replace("Params", "")
168 wrapped_sched_cfg = WrappedSchedConfig(name=sched_name, **vars(sched_cfg))
169
170 wrapped_cfg.sched = wrapped_sched_cfg
171
172 self.model_cfg.optim = wrapped_cfg
173
174 def _finalize_cfg(self):
175 raise NotImplementedError()
176
177 def build(self) -> ModelConfig:
178 # validate config
179 self._finalize_cfg()
180
181 return self.model_cfg
182
[end of nemo/core/config/modelPT.py]
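`ModelConfigBuilder.set_optim` above merges the generic `OptimConfig` shape with an optimizer-specific params dataclass by defining a wrapper dataclass that inherits from both, then unpacking `vars(cfg)` into it. A self-contained sketch of that pattern with stand-in dataclasses (`OptimShape` and `AdamLikeParams` are hypothetical placeholders, not NeMo classes):

```python
from dataclasses import dataclass


@dataclass
class OptimShape:  # stand-in for OptimConfig: a name plus an optional scheduler
    name: str = "?"
    sched: object = None


@dataclass
class AdamLikeParams:  # stand-in for an OptimizerParams subclass
    lr: float = 1e-3
    weight_decay: float = 0.0


@dataclass
class WrappedOptim(OptimShape, AdamLikeParams):
    # Dataclass fields are collected across the MRO, so the wrapper exposes
    # name/sched plus lr/weight_decay in a single flat config object.
    pass


params = AdamLikeParams(lr=3e-4)
# Derive the optimizer name from the params class name, as set_optim does:
optim_name = AdamLikeParams.__name__.replace("LikeParams", "").lower()
wrapped = WrappedOptim(name=optim_name, sched=None, **vars(params))
```

The same trick is repeated for the scheduler via `WrappedSchedConfig`, which is why both wrapper classes are defined locally inside `set_optim`.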
[start of nemo/utils/exp_manager.py]
1 # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import glob
16 import os
17 import subprocess
18 import sys
19 import time
20 import warnings
21 from dataclasses import dataclass
22 from datetime import timedelta
23 from pathlib import Path
24 from shutil import copy, move
25 from typing import Any, Dict, List, Optional, Tuple, Union
26
27 import pytorch_lightning
28 import torch
29 from hydra.core.hydra_config import HydraConfig
30 from hydra.utils import get_original_cwd
31 from omegaconf import DictConfig, OmegaConf, open_dict
32 from pytorch_lightning.callbacks import Callback, ModelCheckpoint
33 from pytorch_lightning.callbacks.early_stopping import EarlyStopping
34 from pytorch_lightning.callbacks.timer import Interval, Timer
35 from pytorch_lightning.loggers import MLFlowLogger, TensorBoardLogger, WandbLogger
36 from pytorch_lightning.loops import _TrainingEpochLoop
37 from pytorch_lightning.strategies.ddp import DDPStrategy
38
39 from nemo.collections.common.callbacks import EMA
40 from nemo.constants import NEMO_ENV_VARNAME_TESTING, NEMO_ENV_VARNAME_VERSION
41 from nemo.utils import logging, timers
42 from nemo.utils.app_state import AppState
43 from nemo.utils.callbacks import NeMoModelCheckpoint, PreemptionCallback
44 from nemo.utils.env_var_parsing import get_envbool
45 from nemo.utils.exceptions import NeMoBaseException
46 from nemo.utils.get_rank import is_global_rank_zero
47 from nemo.utils.lightning_logger_patch import add_filehandlers_to_pl_logger
48 from nemo.utils.loggers import ClearMLLogger, ClearMLParams, DLLogger, DLLoggerParams, MLFlowParams
49 from nemo.utils.model_utils import uninject_model_parallel_rank
50
51
52 class NotFoundError(NeMoBaseException):
53 """ Raised when a file or folder is not found"""
54
55
56 class LoggerMisconfigurationError(NeMoBaseException):
57 """ Raised when a mismatch between trainer.logger and exp_manager occurs"""
58
59 def __init__(self, message):
60 message = (
61 message
62             + " You can disable lightning's trainer from creating a logger by passing logger=False to its constructor."
63 )
64 super().__init__(message)
65
66
67 class CheckpointMisconfigurationError(NeMoBaseException):
68 """ Raised when a mismatch between trainer.callbacks and exp_manager occurs"""
69
70
71 @dataclass
72 class EarlyStoppingParams:
73 monitor: str = "val_loss" # The metric that early stopping should consider.
74     mode: str = "min"  # inform early stopping whether to look for an increase or a decrease in the monitored metric.
75     min_delta: float = 0.001  # smallest change to consider as an improvement.
76     patience: int = 10  # how many consecutive validation cycles with no improvement to wait before stopping training.
77 verbose: bool = True
78 strict: bool = True
79 check_finite: bool = True
80 stopping_threshold: Optional[float] = None
81 divergence_threshold: Optional[float] = None
82 check_on_train_epoch_end: Optional[bool] = None
83 log_rank_zero_only: bool = False
84
85
86 @dataclass
87 class CallbackParams:
88 filepath: Optional[str] = None # Deprecated
89 dirpath: Optional[str] = None # If None, exp_manager will attempt to handle the filepath
90 filename: Optional[str] = None # If None, exp_manager will attempt to handle the filepath
91 monitor: Optional[str] = "val_loss"
92 verbose: Optional[bool] = True
93 save_last: Optional[bool] = True
94 save_top_k: Optional[int] = 3
95 save_weights_only: Optional[bool] = False
96 mode: Optional[str] = "min"
97 auto_insert_metric_name: bool = True
98 every_n_epochs: Optional[int] = 1
99 every_n_train_steps: Optional[int] = None
100 train_time_interval: Optional[str] = None
101 prefix: Optional[str] = None # If None, exp_manager will attempt to handle the filepath
102 postfix: str = ".nemo"
103 save_best_model: bool = False
104 always_save_nemo: bool = False
105     save_nemo_on_train_end: Optional[bool] = True  # Whether to automatically save the .nemo file during the on_train_end hook
106 model_parallel_size: Optional[int] = None # tensor parallel size * pipeline parallel size
107 save_on_train_epoch_end: Optional[bool] = False # Save after training, not after validation
108
109
110 @dataclass
111 class StepTimingParams:
112 reduction: Optional[str] = "mean"
113 # if True torch.cuda.synchronize() is called on start/stop
114 sync_cuda: Optional[bool] = False
115 # if positive, defines the size of a sliding window for computing mean
116 buffer_size: Optional[int] = 1
117
118
119 @dataclass
120 class EMAParams:
121 enable: Optional[bool] = False
122 decay: Optional[float] = 0.999
123 cpu_offload: Optional[bool] = False
124 validate_original_weights: Optional[bool] = False
125 every_n_steps: int = 1
126
127
128 @dataclass
129 class ExpManagerConfig:
130 """Experiment Manager config for validation of passed arguments.
131 """
132
133 # Log dir creation parameters
134 explicit_log_dir: Optional[str] = None
135 exp_dir: Optional[str] = None
136 name: Optional[str] = None
137 version: Optional[str] = None
138 use_datetime_version: Optional[bool] = True
139 resume_if_exists: Optional[bool] = False
140 resume_past_end: Optional[bool] = False
141 resume_ignore_no_checkpoint: Optional[bool] = False
142 resume_from_checkpoint: Optional[str] = None
143 # Logging parameters
144 create_tensorboard_logger: Optional[bool] = True
145 summary_writer_kwargs: Optional[Dict[Any, Any]] = None
146 create_wandb_logger: Optional[bool] = False
147 wandb_logger_kwargs: Optional[Dict[Any, Any]] = None
148 create_mlflow_logger: Optional[bool] = False
149 mlflow_logger_kwargs: Optional[MLFlowParams] = MLFlowParams()
150 create_dllogger_logger: Optional[bool] = False
151 dllogger_logger_kwargs: Optional[DLLoggerParams] = DLLoggerParams()
152 create_clearml_logger: Optional[bool] = False
153 clearml_logger_kwargs: Optional[ClearMLParams] = ClearMLParams()
154 # Checkpointing parameters
155 create_checkpoint_callback: Optional[bool] = True
156 checkpoint_callback_params: Optional[CallbackParams] = CallbackParams()
157 create_early_stopping_callback: Optional[bool] = False
158 early_stopping_callback_params: Optional[EarlyStoppingParams] = EarlyStoppingParams()
159 create_preemption_callback: Optional[bool] = True
160 # Additional exp_manager arguments
161 files_to_copy: Optional[List[str]] = None
162 # logs timing of train/val/test steps
163 log_step_timing: Optional[bool] = True
164 step_timing_kwargs: Optional[StepTimingParams] = StepTimingParams()
165 # Configures creation of log files for different ranks
166 log_local_rank_0_only: Optional[bool] = False
167 log_global_rank_0_only: Optional[bool] = False
168 # disable initial validation when resuming from a checkpoint saved during validation
169 disable_validation_on_resume: Optional[bool] = True
170 ema: Optional[EMAParams] = EMAParams()
171 # Wall clock time limit
172 max_time_per_run: Optional[str] = None
173 # time to sleep non 0 ranks during initialization
174 seconds_to_sleep: float = 5
175
176
177 class TimingCallback(Callback):
178 """
179 Logs execution time of train/val/test steps
180 """
181
182     def __init__(self, timer_kwargs=None):
183         self.timer = timers.NamedTimer(**(timer_kwargs or {}))  # avoid a mutable default argument
184
185 def _on_batch_start(self, name):
186 # reset only if we do not return mean of a sliding window
187 if self.timer.buffer_size <= 0:
188 self.timer.reset(name)
189
190 self.timer.start(name)
191
192 def _on_batch_end(self, name, pl_module):
193 self.timer.stop(name)
194         # Set `batch_size=1` as a workaround for `dataloader_iter`; this value is not used for any metric
195 pl_module.log(
196 name + ' in s',
197 self.timer[name],
198 on_step=True,
199 on_epoch=False,
200 batch_size=1,
201 prog_bar=(name == "train_step_timing"),
202 )
203
204 def on_train_batch_start(self, trainer, pl_module, batch, batch_idx):
205 self._on_batch_start("train_step_timing")
206
207 def on_train_batch_end(self, trainer, pl_module, outputs, batch, batch_idx):
208 self._on_batch_end("train_step_timing", pl_module)
209
210 def on_validation_batch_start(self, trainer, pl_module, batch, batch_idx, dataloader_idx=0):
211 self._on_batch_start("validation_step_timing")
212
213 def on_validation_batch_end(self, trainer, pl_module, outputs, batch, batch_idx, dataloader_idx=0):
214 self._on_batch_end("validation_step_timing", pl_module)
215
216 def on_test_batch_start(self, trainer, pl_module, batch, batch_idx, dataloader_idx=0):
217 self._on_batch_start("test_step_timing")
218
219 def on_test_batch_end(self, trainer, pl_module, outputs, batch, batch_idx, dataloader_idx=0):
220 self._on_batch_end("test_step_timing", pl_module)
221
222 def on_before_backward(self, trainer, pl_module, loss):
223 self._on_batch_start("train_backward_timing")
224
225 def on_after_backward(self, trainer, pl_module):
226 self._on_batch_end("train_backward_timing", pl_module)
227
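The callback above reduces to a named start/stop timing pattern. Below is a minimal sketch of that pattern, using a hypothetical `SimpleNamedTimer` stand-in (the real `nemo.utils.timers.NamedTimer` has more features, such as a sliding-window buffer):

```python
import time


class SimpleNamedTimer:
    """Hypothetical stand-in for nemo.utils.timers.NamedTimer (illustration only)."""

    def __init__(self):
        self._starts = {}
        self._elapsed = {}

    def start(self, name):
        self._starts[name] = time.perf_counter()

    def stop(self, name):
        # Record elapsed seconds for `name`, mirroring timer.stop(name) above
        self._elapsed[name] = time.perf_counter() - self._starts.pop(name)

    def __getitem__(self, name):
        return self._elapsed[name]


timer = SimpleNamedTimer()
timer.start("train_step_timing")
time.sleep(0.01)  # stands in for the actual training step
timer.stop("train_step_timing")
elapsed = timer["train_step_timing"]  # value that would be logged as "train_step_timing in s"
```

The real callback additionally resets the timer per step when no sliding-window buffer is configured, as shown in `_on_batch_start` above.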
228
229 def exp_manager(trainer: 'pytorch_lightning.Trainer', cfg: Optional[Union[DictConfig, Dict]] = None) -> Optional[Path]:
230 """
231 exp_manager is a helper function used to manage folders for experiments. It follows the pytorch lightning paradigm
232 of exp_dir/model_or_experiment_name/version. If the lightning trainer has a logger, exp_manager will get exp_dir,
233 name, and version from the logger. Otherwise it will use the exp_dir and name arguments to create the logging
234 directory. exp_manager also allows for explicit folder creation via explicit_log_dir.
235
236     The version can be a datetime string or an integer. The datetime version can be disabled if use_datetime_version is set
237 to False. It optionally creates TensorBoardLogger, WandBLogger, DLLogger, MLFlowLogger, ClearMLLogger,
238 ModelCheckpoint objects from pytorch lightning.
239 It copies sys.argv, and git information if available to the logging directory. It creates a log file for each
240 process to log their output into.
241
242     exp_manager additionally has a resume feature (resume_if_exists) which can be used to continue training from
243     the constructed log_dir. When you need to continue training repeatedly (e.g. on a cluster where you need
244     multiple consecutive jobs), you need to avoid creating version folders. Therefore, from v1.0.0, when
245     resume_if_exists is set to True, creating the version folders is skipped.
246
247 Args:
248 trainer (pytorch_lightning.Trainer): The lightning trainer.
249 cfg (DictConfig, dict): Can have the following keys:
250
251 - explicit_log_dir (str, Path): Can be used to override exp_dir/name/version folder creation. Defaults to
252 None, which will use exp_dir, name, and version to construct the logging directory.
253 - exp_dir (str, Path): The base directory to create the logging directory. Defaults to None, which logs to
254 ./nemo_experiments.
255 - name (str): The name of the experiment. Defaults to None which turns into "default" via name = name or
256 "default".
257 - version (str): The version of the experiment. Defaults to None which uses either a datetime string or
258 lightning's TensorboardLogger system of using version_{int}.
259 - use_datetime_version (bool): Whether to use a datetime string for version. Defaults to True.
260         - resume_if_exists (bool): Whether this experiment is resuming from a previous run. If True, it sets
261           trainer.ckpt_path so that the trainer auto-resumes. exp_manager will move files
262           under log_dir to log_dir/run_{int}. Defaults to False. From v1.0.0, when resume_if_exists is True,
263           we do not create version folders, to make it easier to find the log folder for subsequent runs.
264 - resume_past_end (bool): exp_manager errors out if resume_if_exists is True and a checkpoint matching
265 ``*end.ckpt`` indicating a previous training run fully completed. This behaviour can be disabled, in which
266 case the ``*end.ckpt`` will be loaded by setting resume_past_end to True. Defaults to False.
267 - resume_ignore_no_checkpoint (bool): exp_manager errors out if resume_if_exists is True and no checkpoint
268 could be found. This behaviour can be disabled, in which case exp_manager will print a message and
269 continue without restoring, by setting resume_ignore_no_checkpoint to True. Defaults to False.
270 - resume_from_checkpoint (str): Can be used to specify a path to a specific checkpoint file to load from. This will
271 override any checkpoint found when resume_if_exists is True. Defaults to None.
272 - create_tensorboard_logger (bool): Whether to create a tensorboard logger and attach it to the pytorch
273 lightning trainer. Defaults to True.
274 - summary_writer_kwargs (dict): A dictionary of kwargs that can be passed to lightning's TensorboardLogger
275 class. Note that log_dir is passed by exp_manager and cannot exist in this dict. Defaults to None.
276         - create_wandb_logger (bool): Whether to create a Weights and Biases logger and attach it to the pytorch
277 lightning trainer. Defaults to False.
278 - wandb_logger_kwargs (dict): A dictionary of kwargs that can be passed to lightning's WandBLogger
279 class. Note that name and project are required parameters if create_wandb_logger is True.
280 Defaults to None.
281         - create_mlflow_logger (bool): Whether to create an MLFlow logger and attach it to the pytorch lightning
282           trainer. Defaults to False.
283 - mlflow_logger_kwargs (dict): optional parameters for the MLFlow logger
284         - create_dllogger_logger (bool): Whether to create a DLLogger logger and attach it to the pytorch lightning
285           trainer. Defaults to False.
286 - dllogger_logger_kwargs (dict): optional parameters for the DLLogger logger
287         - create_clearml_logger (bool): Whether to create a ClearML logger and attach it to the pytorch lightning
288           trainer. Defaults to False.
289 - clearml_logger_kwargs (dict): optional parameters for the ClearML logger
290 - create_checkpoint_callback (bool): Whether to create a ModelCheckpoint callback and attach it to the
291 pytorch lightning trainer. The ModelCheckpoint saves the top 3 models with the best "val_loss", the most
292 recent checkpoint under ``*last.ckpt``, and the final checkpoint after training completes under ``*end.ckpt``.
293 Defaults to True.
294 - create_early_stopping_callback (bool): Flag to decide if early stopping should be used to stop training. Default is False.
295 See EarlyStoppingParams dataclass above.
296 - create_preemption_callback (bool): Flag to decide whether to enable preemption callback to save checkpoints and exit training
297 immediately upon preemption. Default is True.
298 - files_to_copy (list): A list of files to copy to the experiment logging directory. Defaults to None which
299 copies no files.
300 - log_local_rank_0_only (bool): Whether to only create log files for local rank 0. Defaults to False.
301 Set this to True if you are using DDP with many GPUs and do not want many log files in your exp dir.
302 - log_global_rank_0_only (bool): Whether to only create log files for global rank 0. Defaults to False.
303 Set this to True if you are using DDP with many GPUs and do not want many log files in your exp dir.
304 - max_time (str): The maximum wall clock time *per run*. This is intended to be used on clusters where you want
305 a checkpoint to be saved after this specified time and be able to resume from that checkpoint. Defaults to None.
306         - seconds_to_sleep (float): Seconds to sleep non-zero-rank processes for. Used to give rank 0 enough time to initialize.
307
308 returns:
309 log_dir (Path): The final logging directory where logging files are saved. Usually the concatenation of
310 exp_dir, name, and version.
311 """
312 # Add rank information to logger
313 # Note: trainer.global_rank and trainer.is_global_zero are not set until trainer.fit, so have to hack around it
314 local_rank = int(os.environ.get("LOCAL_RANK", 0))
315 global_rank = trainer.node_rank * trainer.num_devices + local_rank
316 logging.rank = global_rank
317
318 if cfg is None:
319 logging.error("exp_manager did not receive a cfg argument. It will be disabled.")
320 return
321 if trainer.fast_dev_run:
322 logging.info("Trainer was called with fast_dev_run. exp_manager will return without any functionality.")
323 return
324
325 # Ensure passed cfg is compliant with ExpManagerConfig
326 schema = OmegaConf.structured(ExpManagerConfig)
327 if isinstance(cfg, dict):
328 cfg = OmegaConf.create(cfg)
329 elif not isinstance(cfg, DictConfig):
330 raise ValueError(f"cfg was type: {type(cfg)}. Expected either a dict or a DictConfig")
331 cfg = OmegaConf.create(OmegaConf.to_container(cfg, resolve=True))
332 cfg = OmegaConf.merge(schema, cfg)
333
334 error_checks(trainer, cfg) # Ensures that trainer options are compliant with NeMo and exp_manager arguments
335
336 log_dir, exp_dir, name, version = get_log_dir(
337 trainer=trainer,
338 exp_dir=cfg.exp_dir,
339 name=cfg.name,
340 version=cfg.version,
341 explicit_log_dir=cfg.explicit_log_dir,
342 use_datetime_version=cfg.use_datetime_version,
343 resume_if_exists=cfg.resume_if_exists,
344 )
345
346 check_resume(
347 trainer,
348 log_dir,
349 cfg.resume_if_exists,
350 cfg.resume_past_end,
351 cfg.resume_ignore_no_checkpoint,
352 cfg.checkpoint_callback_params.dirpath,
353 cfg.resume_from_checkpoint,
354 )
355
356 checkpoint_name = name
357 # If name returned from get_log_dir is "", use cfg.name for checkpointing
358 if checkpoint_name is None or checkpoint_name == '':
359 checkpoint_name = cfg.name or "default"
360
361 # Set mlflow name if it's not set, before the main name is erased
362 if cfg.create_mlflow_logger and (not cfg.mlflow_logger_kwargs.get("experiment_name", None)):
363 cfg.mlflow_logger_kwargs.experiment_name = cfg.name
364 logging.warning(
365 'mlflow logger specified but no experiment name set. Using the same as Tensorboard: %s',
366 cfg.mlflow_logger_kwargs.experiment_name,
367 )
368
369 cfg.name = name # Used for configure_loggers so that the log_dir is properly set even if name is ""
370 cfg.version = version
371
372 # update app_state with log_dir, exp_dir, etc
373 app_state = AppState()
374 app_state.log_dir = log_dir
375 app_state.exp_dir = exp_dir
376 app_state.name = name
377 app_state.version = version
378 app_state.checkpoint_name = checkpoint_name
379 app_state.create_checkpoint_callback = cfg.create_checkpoint_callback
380 app_state.checkpoint_callback_params = cfg.checkpoint_callback_params
381
382 # Create the logging directory if it does not exist
383 os.makedirs(log_dir, exist_ok=True) # Cannot limit creation to global zero as all ranks write to own log file
384 logging.info(f'Experiments will be logged at {log_dir}')
385 trainer._default_root_dir = log_dir
386
387 if cfg.log_local_rank_0_only is True and cfg.log_global_rank_0_only is True:
388 raise ValueError(
389             "Cannot set both log_local_rank_0_only and log_global_rank_0_only to True. Please set either one or neither."
390 )
391
392 # This is set if the env var NEMO_TESTING is set to True.
393 nemo_testing = get_envbool(NEMO_ENV_VARNAME_TESTING, False)
394
395 # Handle logging to file
396 log_file = log_dir / f'nemo_log_globalrank-{global_rank}_localrank-{local_rank}.txt'
397 if cfg.log_local_rank_0_only is True and not nemo_testing:
398 if local_rank == 0:
399 logging.add_file_handler(log_file)
400 elif cfg.log_global_rank_0_only is True and not nemo_testing:
401 if global_rank == 0:
402 logging.add_file_handler(log_file)
403 else:
404 # Logs on all ranks.
405 logging.add_file_handler(log_file)
406
407 # For some reason, LearningRateLogger requires trainer to have a logger. Safer to create logger on all ranks
408 # not just global rank 0.
409 if (
410 cfg.create_tensorboard_logger
411 or cfg.create_wandb_logger
412 or cfg.create_mlflow_logger
413 or cfg.create_dllogger_logger
414 or cfg.create_clearml_logger
415 ):
416 configure_loggers(
417 trainer,
418 exp_dir,
419 log_dir,
420 cfg.name,
421 cfg.version,
422 cfg.checkpoint_callback_params,
423 cfg.create_tensorboard_logger,
424 cfg.summary_writer_kwargs,
425 cfg.create_wandb_logger,
426 cfg.wandb_logger_kwargs,
427 cfg.create_mlflow_logger,
428 cfg.mlflow_logger_kwargs,
429 cfg.create_dllogger_logger,
430 cfg.dllogger_logger_kwargs,
431 cfg.create_clearml_logger,
432 cfg.clearml_logger_kwargs,
433 )
434
435 # add loggers timing callbacks
436 if cfg.log_step_timing:
437 timing_callback = TimingCallback(timer_kwargs=cfg.step_timing_kwargs or {})
438 trainer.callbacks.insert(0, timing_callback)
439
440 if cfg.ema.enable:
441 ema_callback = EMA(
442 decay=cfg.ema.decay,
443 validate_original_weights=cfg.ema.validate_original_weights,
444 cpu_offload=cfg.ema.cpu_offload,
445 every_n_steps=cfg.ema.every_n_steps,
446 )
447 trainer.callbacks.append(ema_callback)
448
449 if cfg.create_early_stopping_callback:
450 early_stop_callback = EarlyStopping(**cfg.early_stopping_callback_params)
451 trainer.callbacks.append(early_stop_callback)
452
453 if cfg.create_checkpoint_callback:
454 configure_checkpointing(
455 trainer,
456 log_dir,
457 checkpoint_name,
458 cfg.resume_if_exists,
459 cfg.checkpoint_callback_params,
460 cfg.create_preemption_callback,
461 )
462
463 if cfg.disable_validation_on_resume:
464 # extend training loop to skip initial validation when resuming from checkpoint
465 configure_no_restart_validation_training_loop(trainer)
466 # Setup a stateless timer for use on clusters.
467 if cfg.max_time_per_run is not None:
468 found_ptl_timer = False
469 for idx, callback in enumerate(trainer.callbacks):
470 if isinstance(callback, Timer):
471 # NOTE: PTL does not expose a `trainer.max_time`. By the time we are in this function, PTL has already setup a timer if the user specifies `trainer.max_time` so best we can do is replace that.
472 # Working: If only `trainer.max_time` is set - it behaves as a normal PTL timer. If only `exp_manager.max_time_per_run` is set - it behaves as a StateLessTimer. If both are set, it also behaves as a StateLessTimer.
473 logging.warning(
474 f'Found a PTL Timer callback, replacing with a StatelessTimer callback. This will happen if you set trainer.max_time as well as exp_manager.max_time_per_run.'
475 )
476 trainer.callbacks[idx] = StatelessTimer(cfg.max_time_per_run)
477 found_ptl_timer = True
478 break
479
480 if not found_ptl_timer:
481 trainer.max_time = cfg.max_time_per_run
482 trainer.callbacks.append(StatelessTimer(cfg.max_time_per_run))
483
484 if is_global_rank_zero():
485 # Move files_to_copy to folder and add git information if present
486 if cfg.files_to_copy:
487 for _file in cfg.files_to_copy:
488 copy(Path(_file), log_dir)
489
490 # Create files for cmd args and git info
491 with open(log_dir / 'cmd-args.log', 'w', encoding='utf-8') as _file:
492 _file.write(" ".join(sys.argv))
493
494 # Try to get git hash
495 git_repo, git_hash = get_git_hash()
496 if git_repo:
497 with open(log_dir / 'git-info.log', 'w', encoding='utf-8') as _file:
498 _file.write(f'commit hash: {git_hash}')
499 _file.write(get_git_diff())
500
501 # Add err_file logging to global_rank zero
502 logging.add_err_file_handler(log_dir / 'nemo_error_log.txt')
503
504 # Add lightning file logging to global_rank zero
505 add_filehandlers_to_pl_logger(log_dir / 'lightning_logs.txt', log_dir / 'nemo_error_log.txt')
506
507 elif trainer.num_nodes * trainer.num_devices > 1:
508 # sleep other ranks so rank 0 can finish
509 # doing the initialization such as moving files
510 time.sleep(cfg.seconds_to_sleep)
511
512 return log_dir
513
514
515 def error_checks(trainer: 'pytorch_lightning.Trainer', cfg: Optional[Union[DictConfig, Dict]] = None):
516 """
517 Checks that the passed trainer is compliant with NeMo and exp_manager's passed configuration. Checks that:
518 - Throws error when hydra has changed the working directory. This causes issues with lightning's DDP
519 - Throws error when trainer has loggers defined but create_tensorboard_logger or create_wandB_logger
520 or create_mlflow_logger or create_dllogger_logger is True
521 - Prints error messages when 1) run on multi-node and not Slurm, and 2) run on multi-gpu without DDP
522 """
523 if HydraConfig.initialized() and get_original_cwd() != os.getcwd():
524 raise ValueError(
525             "Hydra changed the working directory. This interferes with ExpManager's functionality. Please pass "
526 "hydra.run.dir=. to your python script."
527 )
528 if trainer.logger is not None and (
529 cfg.create_tensorboard_logger or cfg.create_wandb_logger or cfg.create_mlflow_logger
530 ):
531 raise LoggerMisconfigurationError(
532 "The pytorch lightning trainer that was passed to exp_manager contained a logger, and either "
533 f"create_tensorboard_logger: {cfg.create_tensorboard_logger} or create_wandb_logger: "
534             f"{cfg.create_wandb_logger} or create_mlflow_logger: {cfg.create_mlflow_logger} "
535             f"or create_dllogger_logger: {cfg.create_dllogger_logger} was set to True. "
536 "These can only be used if trainer does not already have a logger."
537 )
538 if trainer.num_nodes > 1 and not check_slurm(trainer):
539 logging.error(
540 "You are running multi-node training without SLURM handling the processes."
541 " Please note that this is not tested in NeMo and could result in errors."
542 )
543 if trainer.num_devices > 1 and not isinstance(trainer.strategy, DDPStrategy):
544 logging.error(
545             "You are running multi-gpu without DDP. Please note that this is not tested in NeMo and could result in "
546             "errors."
547 )
548
549
550 def check_resume(
551 trainer: 'pytorch_lightning.Trainer',
552 log_dir: str,
553 resume_if_exists: bool = False,
554 resume_past_end: bool = False,
555 resume_ignore_no_checkpoint: bool = False,
556 dirpath: str = None,
557 resume_from_checkpoint: str = None,
558 ):
559     """Checks that resume=True was used correctly with the arguments passed to exp_manager. Sets
560     trainer.ckpt_path as necessary.
561
562     Side effects:
563         Sets trainer.ckpt_path to the checkpoint to resume from, if one is found.
564         On global rank zero, moves any files already present in log_dir into log_dir/run_{int}.
565
566     Raises:
567         ValueError: If log_dir was not passed, or if resume is True and more than one checkpoint
568             matching *end.ckpt or *last.ckpt was found.
569         NotFoundError: If resume is True, resume_ignore_no_checkpoint is False, and checkpoints
570             could not be found.
571     """
572
573 if not log_dir:
574 raise ValueError(f"Resuming requires the log_dir {log_dir} to be passed to exp_manager")
575
576 checkpoint = None
577 if resume_from_checkpoint:
578 checkpoint = resume_from_checkpoint
579 if resume_if_exists:
580 # Use <log_dir>/checkpoints/ unless `dirpath` is set
581 checkpoint_dir = Path(dirpath) if dirpath else Path(Path(log_dir) / "checkpoints")
582
583 # when using distributed checkpointing, checkpoint_dir is a directory of directories
584 # we check for this here
585 dist_checkpoints = [d for d in list(checkpoint_dir.glob("*")) if d.is_dir()]
586 end_dist_checkpoints = [d for d in dist_checkpoints if d.match("*end")]
587 last_dist_checkpoints = [d for d in dist_checkpoints if d.match("*last")]
588
589 end_checkpoints = end_dist_checkpoints if end_dist_checkpoints else list(checkpoint_dir.rglob("*end.ckpt"))
590 last_checkpoints = last_dist_checkpoints if last_dist_checkpoints else list(checkpoint_dir.rglob("*last.ckpt"))
591
592 if not checkpoint_dir.exists() or (not len(end_checkpoints) > 0 and not len(last_checkpoints) > 0):
593 if resume_ignore_no_checkpoint:
594 warn = f"There were no checkpoints found in checkpoint_dir or no checkpoint folder at checkpoint_dir :{checkpoint_dir}. "
595 if checkpoint is None:
596 warn += "Training from scratch."
597 elif checkpoint == resume_from_checkpoint:
598 warn += f"Training from {resume_from_checkpoint}."
599 logging.warning(warn)
600 else:
601 raise NotFoundError(
602 f"There were no checkpoints found in checkpoint_dir or no checkpoint folder at checkpoint_dir :{checkpoint_dir}. Cannot resume."
603 )
604 elif len(end_checkpoints) > 0:
605 if resume_past_end:
606 if len(end_checkpoints) > 1:
607 if 'mp_rank' in str(end_checkpoints[0]):
608 checkpoint = end_checkpoints[0]
609 else:
610 raise ValueError(f"Multiple checkpoints {end_checkpoints} that matches *end.ckpt.")
611 else:
612 raise ValueError(
613 f"Found {end_checkpoints[0]} indicating that the last training run has already completed."
614 )
615 elif len(last_checkpoints) > 1:
616 if 'mp_rank' in str(last_checkpoints[0]) or 'tp_rank' in str(last_checkpoints[0]):
617 checkpoint = last_checkpoints[0]
618 checkpoint = uninject_model_parallel_rank(checkpoint)
619 else:
620 raise ValueError(f"Multiple checkpoints {last_checkpoints} that matches *last.ckpt.")
621 else:
622 checkpoint = last_checkpoints[0]
623
624 # PTL 2.0 supports ckpt_path instead of resume_from_checkpoint as the trainer flag
625 if checkpoint is not None:
626 trainer.ckpt_path = str(checkpoint)
627 logging.info(f'Resuming training from checkpoint: {trainer.ckpt_path}')
628
629 if is_global_rank_zero():
630 # Check to see if any files exist that need to be moved
631 files_to_move = []
632 if Path(log_dir).exists():
633 for child in Path(log_dir).iterdir():
634 if child.is_file():
635 files_to_move.append(child)
636
637 if len(files_to_move) > 0:
638 # Move old files to a new folder
639 other_run_dirs = Path(log_dir).glob("run_*")
640 run_count = 0
641 for fold in other_run_dirs:
642 if fold.is_dir():
643 run_count += 1
644 new_run_dir = Path(Path(log_dir) / f"run_{run_count}")
645 new_run_dir.mkdir()
646 for _file in files_to_move:
647 move(str(_file), str(new_run_dir))
648
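The checkpoint-selection precedence implemented above can be distilled as follows. `select_resume_checkpoint` is a hypothetical helper, not part of exp_manager; it models only the `*end.ckpt` / `*last.ckpt` precedence, not the distributed-checkpoint or `mp_rank`/`tp_rank` special cases:

```python
def select_resume_checkpoint(end_ckpts, last_ckpts, resume_past_end=False,
                             resume_ignore_no_checkpoint=False):
    # *end.ckpt means a previous run fully completed; *last.ckpt is the usual resume point.
    if not end_ckpts and not last_ckpts:
        if resume_ignore_no_checkpoint:
            return None  # warn and train from scratch
        raise FileNotFoundError("There were no checkpoints found. Cannot resume.")
    if end_ckpts:
        if not resume_past_end:
            raise ValueError("The last training run has already completed.")
        return end_ckpts[0]
    return last_ckpts[0]


ckpt = select_resume_checkpoint([], ["megatron--last.ckpt"])  # -> "megatron--last.ckpt"
```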
649
650 def check_explicit_log_dir(
651 trainer: 'pytorch_lightning.Trainer', explicit_log_dir: Union[Path, str], exp_dir: str, name: str, version: str
652 ) -> Tuple[Path, str, str, str]:
653 """ Checks that the passed arguments are compatible with explicit_log_dir.
654
655 Returns:
656 log_dir (Path): the log_dir
657 exp_dir (str): the base exp_dir without name nor version
658 name (str): The name of the experiment
659 version (str): The version of the experiment
660
661 Raise:
662 LoggerMisconfigurationError
663 """
664 if trainer.logger is not None:
665 raise LoggerMisconfigurationError(
666 "The pytorch lightning trainer that was passed to exp_manager contained a logger and explicit_log_dir: "
667             f"{explicit_log_dir} was passed to exp_manager. Please remove the logger from the lightning trainer."
668 )
669 # Checking only (explicit_log_dir) vs (exp_dir and version).
670 # The `name` will be used as the actual name of checkpoint/archive.
671 if exp_dir or version:
672 logging.error(
673 f"exp_manager received explicit_log_dir: {explicit_log_dir} and at least one of exp_dir: {exp_dir}, "
674 f"or version: {version}. Please note that exp_dir, name, and version will be ignored."
675 )
676 if is_global_rank_zero() and Path(explicit_log_dir).exists():
677 logging.warning(f"Exp_manager is logging to {explicit_log_dir}, but it already exists.")
678 return Path(explicit_log_dir), str(explicit_log_dir), "", ""
679
680
681 def get_log_dir(
682 trainer: 'pytorch_lightning.Trainer',
683 exp_dir: str = None,
684 name: str = None,
685 version: str = None,
686 explicit_log_dir: str = None,
687 use_datetime_version: bool = True,
688 resume_if_exists: bool = False,
689 ) -> Tuple[Path, str, str, str]:
690 """
691 Obtains the log_dir used for exp_manager.
692
693     Args:
694         explicit_log_dir (str): The explicit path to the log folder. Defaults to None.
695         use_datetime_version (bool): Uses date and time as the version of the log folder. Defaults to True.
696         resume_if_exists (bool): Whether resume_if_exists is enabled in the exp_manager config. When enabled,
697             version folders are not created.
698
699     Returns:
700         log_dir (Path), exp_dir (str), name (str), version (str): the log_dir, the base exp_dir
701             (without name nor version), the experiment name, and the experiment version.
702
703 Raise:
704 LoggerMisconfigurationError: If trainer is incompatible with arguments
705 NotFoundError: If resume is True, resume_ignore_no_checkpoint is False, and checkpoints could not be found.
706         ValueError: If resume is True and more than one checkpoint was found.
707 """
708 if explicit_log_dir: # If explicit log_dir was passed, short circuit
709 return check_explicit_log_dir(trainer, explicit_log_dir, exp_dir, name, version)
710
711 # Default exp_dir to ./nemo_experiments if None was passed
712 _exp_dir = exp_dir
713 if exp_dir is None:
714 _exp_dir = str(Path.cwd() / 'nemo_experiments')
715
716 # If the user has already defined a logger for the trainer, use the logger defaults for logging directory
717 if trainer.logger is not None:
718 if trainer.logger.save_dir:
719 if exp_dir:
720 raise LoggerMisconfigurationError(
721 "The pytorch lightning trainer that was passed to exp_manager contained a logger, the logger's "
722 f"save_dir was not None, and exp_dir ({exp_dir}) was not None. If trainer.logger.save_dir "
723 "exists, exp_manager will use trainer.logger.save_dir as the logging directory and exp_dir "
724 "must be None."
725 )
726 _exp_dir = trainer.logger.save_dir
727 if name:
728 raise LoggerMisconfigurationError(
729 "The pytorch lightning trainer that was passed to exp_manager contained a logger, and name: "
730 f"{name} was also passed to exp_manager. If the trainer contains a "
731 "logger, exp_manager will use trainer.logger.name, and name passed to exp_manager must be None."
732 )
733 name = trainer.logger.name
734 version = f"version_{trainer.logger.version}"
735 # Use user-defined exp_dir, project_name, exp_name, and versioning options
736 else:
737 name = name or "default"
738 version = version or os.environ.get(NEMO_ENV_VARNAME_VERSION, None)
739
740 if not version:
741 if resume_if_exists:
742 logging.warning(
743 "No version folders would be created under the log folder as 'resume_if_exists' is enabled."
744 )
745 version = None
746 elif is_global_rank_zero():
747 if use_datetime_version:
748 version = time.strftime('%Y-%m-%d_%H-%M-%S')
749 else:
750 tensorboard_logger = TensorBoardLogger(save_dir=Path(_exp_dir), name=name, version=version)
751 version = f"version_{tensorboard_logger.version}"
752 os.environ[NEMO_ENV_VARNAME_VERSION] = "" if version is None else version
753
754 log_dir = Path(_exp_dir) / Path(str(name)) / Path("" if version is None else str(version))
755 return log_dir, str(_exp_dir), name, version
756
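The final path assembly above follows the `<exp_dir>/<name>/<version>` convention. Here is a self-contained sketch of it, with `build_log_dir` as a hypothetical helper that ignores the trainer-logger and environment-variable branches:

```python
from pathlib import Path


def build_log_dir(exp_dir=None, name=None, version=None):
    # Mirrors the defaults used above: exp_dir -> ./nemo_experiments, name -> "default",
    # version -> "" (i.e. no version folder, as when resume_if_exists is enabled).
    _exp_dir = Path(exp_dir) if exp_dir else Path.cwd() / "nemo_experiments"
    return _exp_dir / (name or "default") / (version or "")


d = build_log_dir(exp_dir="/results", name="megatron_gpt", version="2024-01-01_00-00-00")
# str(d) == "/results/megatron_gpt/2024-01-01_00-00-00"
```

Note that `pathlib` silently drops the empty-string component, so an unset version yields `<exp_dir>/<name>` with no version folder.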
757
758 def get_git_hash():
759 """
760 Helper function that tries to get the commit hash if running inside a git folder
761
762 returns:
763 Bool: Whether the git subprocess ran without error
764 str: git subprocess output or error message
765 """
766 try:
767 return (
768 True,
769 subprocess.check_output(['git', 'rev-parse', 'HEAD'], stderr=subprocess.STDOUT).decode(),
770 )
771 except subprocess.CalledProcessError as err:
772 return False, "{}\n".format(err.output.decode("utf-8"))
773
774
775 def get_git_diff():
776 """
777 Helper function that tries to get the git diff if running inside a git folder
778
779 returns:
780 Bool: Whether the git subprocess ran without error
781 str: git subprocess output or error message
782 """
783 try:
784 return subprocess.check_output(['git', 'diff'], stderr=subprocess.STDOUT).decode()
785 except subprocess.CalledProcessError as err:
786 return "{}\n".format(err.output.decode("utf-8"))
787
788
789 def configure_loggers(
790 trainer: 'pytorch_lightning.Trainer',
791     exp_dir: Union[Path, str],
792     log_dir: Union[Path, str],
793 name: str,
794 version: str,
795 checkpoint_callback_params: dict,
796 create_tensorboard_logger: bool,
797 summary_writer_kwargs: dict,
798 create_wandb_logger: bool,
799 wandb_kwargs: dict,
800 create_mlflow_logger: bool,
801 mlflow_kwargs: dict,
802 create_dllogger_logger: bool,
803 dllogger_kwargs: dict,
804 create_clearml_logger: bool,
805 clearml_kwargs: dict,
806 ):
807 """
808 Creates TensorboardLogger and/or WandBLogger / MLFlowLogger / DLlogger / ClearMLLogger and attach them to trainer.
809 Raises ValueError if summary_writer_kwargs or wandb_kwargs are misconfigured.
810 """
811 # Potentially create tensorboard logger and/or WandBLogger / MLFlowLogger / DLLogger
812 logger_list = []
813 if create_tensorboard_logger:
814 if summary_writer_kwargs is None:
815 summary_writer_kwargs = {}
816 elif "log_dir" in summary_writer_kwargs:
817 raise ValueError(
818 "You cannot pass `log_dir` as part of `summary_writer_kwargs`. `log_dir` is handled by lightning's "
819 "TensorBoardLogger logger."
820 )
821 tensorboard_logger = TensorBoardLogger(save_dir=exp_dir, name=name, version=version, **summary_writer_kwargs)
822 logger_list.append(tensorboard_logger)
823 logging.info("TensorboardLogger has been set up")
824
825 if create_wandb_logger:
826 if wandb_kwargs is None:
827 wandb_kwargs = {}
828 if "name" not in wandb_kwargs and "project" not in wandb_kwargs:
829 raise ValueError("name and project are required for wandb_logger")
830
831 # Update the wandb save_dir
832 if wandb_kwargs.get('save_dir', None) is None:
833 wandb_kwargs['save_dir'] = exp_dir
834 os.makedirs(wandb_kwargs['save_dir'], exist_ok=True)
835 wandb_logger = WandbLogger(version=version, **wandb_kwargs)
836
837 logger_list.append(wandb_logger)
838 logging.info("WandBLogger has been set up")
839
840 if create_mlflow_logger:
841 mlflow_logger = MLFlowLogger(run_name=version, **mlflow_kwargs)
842
843 logger_list.append(mlflow_logger)
844 logging.info("MLFlowLogger has been set up")
845
846 if create_dllogger_logger:
847 dllogger_logger = DLLogger(**dllogger_kwargs)
848
849 logger_list.append(dllogger_logger)
850 logging.info("DLLogger has been set up")
851
852 if create_clearml_logger:
853 clearml_logger = ClearMLLogger(
854 clearml_cfg=clearml_kwargs,
855 log_dir=log_dir,
856 prefix=name,
857 save_best_model=checkpoint_callback_params.save_best_model,
858 )
859
860 logger_list.append(clearml_logger)
861 logging.info("ClearMLLogger has been set up")
862
863 trainer._logger_connector.configure_logger(logger_list)
864
865
866 def configure_checkpointing(
867 trainer: 'pytorch_lightning.Trainer',
868 log_dir: Path,
869 name: str,
870 resume: bool,
871 params: 'DictConfig',
872 create_preemption_callback: bool,
873 ):
874 """ Adds ModelCheckpoint to trainer. Raises CheckpointMisconfigurationError if trainer already has a ModelCheckpoint
875 callback
876 """
877 for callback in trainer.callbacks:
878 if isinstance(callback, ModelCheckpoint):
879 raise CheckpointMisconfigurationError(
880 "The pytorch lightning trainer that was passed to exp_manager contained a ModelCheckpoint "
881 "and create_checkpoint_callback was set to True. Please either set create_checkpoint_callback "
882 "to False, or remove ModelCheckpoint from the lightning trainer"
883 )
884 # Create the callback and attach it to trainer
885 if "filepath" in params:
886 if params.filepath is not None:
887 logging.warning("filepath is deprecated. Please switch to dirpath and filename instead")
888 if params.dirpath is None:
889 params.dirpath = Path(params.filepath).parent
890 if params.filename is None:
891 params.filename = Path(params.filepath).name
892 with open_dict(params):
893 del params["filepath"]
894 if params.dirpath is None:
895 params.dirpath = Path(log_dir / 'checkpoints')
896 if params.filename is None:
897 params.filename = f'{name}--{{{params.monitor}:.4f}}-{{epoch}}'
898 if params.prefix is None:
899 params.prefix = name
900 NeMoModelCheckpoint.CHECKPOINT_NAME_LAST = params.filename + '-last'
901
902 logging.debug(params.dirpath)
903 logging.debug(params.filename)
904 logging.debug(params.prefix)
905
906 if "val" in params.monitor:
907 if (
908 trainer.max_epochs is not None
909 and trainer.max_epochs != -1
910 and trainer.max_epochs < trainer.check_val_every_n_epoch
911 ):
912 logging.error(
913 "The checkpoint callback was told to monitor a validation value but trainer.max_epochs("
914 f"{trainer.max_epochs}) was less than trainer.check_val_every_n_epoch({trainer.check_val_every_n_epoch}"
915 f"). It is very likely this run will fail with ModelCheckpoint(monitor='{params.monitor}') not found "
916 "in the returned metrics. Please ensure that validation is run within trainer.max_epochs."
917 )
918 elif trainer.max_steps is not None and trainer.max_steps != -1:
919 logging.warning(
920 "The checkpoint callback was told to monitor a validation value and trainer's max_steps was set to "
921 f"{trainer.max_steps}. Please ensure that max_steps will run for at least "
922 f"{trainer.check_val_every_n_epoch} epochs to ensure that checkpointing will not error out."
923 )
924
925 checkpoint_callback = NeMoModelCheckpoint(n_resume=resume, **params)
926 checkpoint_callback.last_model_path = trainer.ckpt_path or ""
927 if 'mp_rank' in checkpoint_callback.last_model_path or 'tp_rank' in checkpoint_callback.last_model_path:
928 checkpoint_callback.last_model_path = uninject_model_parallel_rank(checkpoint_callback.last_model_path)
929 trainer.callbacks.append(checkpoint_callback)
930 if create_preemption_callback:
931         # Check if cuda is available as preemption is supported only on GPUs
932 if torch.cuda.is_available():
933 ## By default PreemptionCallback handles SIGTERM. To handle other signals pass the signal in the call as below:
934 ## PreemptionCallback(checkpoint_callback, signal.SIGCHLD)
935 preemption_callback = PreemptionCallback(checkpoint_callback)
936 trainer.callbacks.append(preemption_callback)
937 else:
938 logging.info("Preemption is supported only on GPUs, disabling preemption")
939
940
941 def check_slurm(trainer):
942 try:
943 return trainer.accelerator_connector.is_slurm_managing_tasks
944 except AttributeError:
945 return False
946
947
948 class StatelessTimer(Timer):
949 """Extension of PTL timers to be per run."""
950
951 def __init__(self, duration: timedelta = None, interval: str = Interval.step, verbose: bool = True,) -> None:
952 super().__init__(duration, interval, verbose)
953
954 # Override PTL Timer's state dict to not store elapsed time information so that we can restore and continue training.
955 def state_dict(self) -> Dict[str, Any]:
956 return {}
957
958 def load_state_dict(self, state_dict: Dict[str, Any]) -> None:
959 return
960
961
962 def configure_no_restart_validation_training_loop(trainer: pytorch_lightning.Trainer) -> None:
963 if type(trainer.fit_loop.epoch_loop) != _TrainingEpochLoop:
964 warnings.warn("Detected custom epoch loop. Skipping no validation on restart support.", UserWarning)
965 return
966 ## Pass trainer object to avoid trainer getting overwritten as None
967 loop = SkipResumeTrainingValidationLoop(trainer, trainer.min_steps, trainer.max_steps)
968 trainer.fit_loop.epoch_loop = loop
969
970
971 class SkipResumeTrainingValidationLoop(_TrainingEpochLoop):
972 """
973 Extend the PTL Epoch loop to skip validating when resuming.
974 This happens when resuming a checkpoint that has already run validation, but loading restores
975 the training state before validation has run.
976 """
977
978 def _should_check_val_fx(self) -> bool:
979 if self.restarting and self.global_step % self.trainer.val_check_batch == 0:
980 return False
981 return super()._should_check_val_fx()
982
983
984 def clean_exp_ckpt(exp_log_dir: Union[str, Path], remove_ckpt: bool = True, remove_nemo: bool = False):
985 """
986     Helper method that removes PyTorch Lightning .ckpt files or NeMo .nemo files from the checkpoint directory
987
988 Args:
989 exp_log_dir: str path to the root directory of the current experiment.
990 remove_ckpt: bool, whether to remove all *.ckpt files in the checkpoints directory.
991 remove_nemo: bool, whether to remove all *.nemo files in the checkpoints directory.
992 """
993 exp_log_dir = str(exp_log_dir)
994
995 if remove_ckpt:
996 logging.info("Deleting *.ckpt files ...")
997 ckpt_files = glob.glob(os.path.join(exp_log_dir, "checkpoints", "*.ckpt"))
998 for filepath in ckpt_files:
999 os.remove(filepath)
1000 logging.info(f"Deleted file : {filepath}")
1001
1002 if remove_nemo:
1003 logging.info("Deleting *.nemo files ...")
1004 nemo_files = glob.glob(os.path.join(exp_log_dir, "checkpoints", "*.nemo"))
1005 for filepath in nemo_files:
1006 os.remove(filepath)
1007 logging.info(f"Deleted file : {filepath}")
1008
[end of nemo/utils/exp_manager.py]
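The checkpoint logic in `exp_manager.py` builds a filename template such as `f'{name}--{{{params.monitor}:.4f}}-{{epoch}}'`, whose doubled braces survive the f-string so that PyTorch Lightning can substitute metrics later. A minimal sketch of that two-stage formatting; the helper names `build_filename_template` and `resolve` are illustrative, not part of NeMo's API:

```python
# Sketch of the two-stage filename templating used above (assumed helpers,
# not NeMo functions). Stage 1: the f-string keeps {monitor:.4f} and {epoch}
# as literal placeholders. Stage 2: they are filled in once metrics exist.

def build_filename_template(name: str, monitor: str) -> str:
    # Doubled braces escape the f-string, leaving placeholders intact.
    return f'{name}--{{{monitor}:.4f}}-{{epoch}}'

def resolve(template: str, metrics: dict) -> str:
    # PyTorch Lightning's ModelCheckpoint performs a similar substitution.
    return template.format(**metrics)

template = build_filename_template('megatron_gpt', 'val_loss')
# template == 'megatron_gpt--{val_loss:.4f}-{epoch}'
print(resolve(template, {'val_loss': 1.23456, 'epoch': 7}))  # → megatron_gpt--1.2346-7
```

This is why `NeMoModelCheckpoint.CHECKPOINT_NAME_LAST` can simply append `-last` to the template string: the placeholders remain unformatted until checkpoint time.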
[start of scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram.py]
1 # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 #
15
16 """
17 # This script evaluates an N-gram language model trained with the KenLM library (https://github.com/kpu/kenlm) in
18 # fusion with beam search decoders on top of a trained ASR model with a CTC decoder. To evaluate a model with a
19 # Transducer (RNN-T) decoder use another script 'scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram_transducer.py'.
20 # NeMo's beam search decoders are capable of using KenLM's N-gram models
21 # to find the best candidates. This script supports both character-level and BPE-level
22 # encodings and models; the encoding type is detected automatically from the model.
23 # You may train the LM model with 'scripts/asr_language_modeling/ngram_lm/train_kenlm.py'.
24
25 # Config Help
26
27 To discover all arguments of the script, please run:
28 python eval_beamsearch_ngram.py --help
29 python eval_beamsearch_ngram.py --cfg job
30
31 # USAGE
32
33 python eval_beamsearch_ngram.py nemo_model_file=<path to the .nemo file of the model> \
34 input_manifest=<path to the evaluation JSON manifest file> \
35 kenlm_model_file=<path to the binary KenLM model> \
36 beam_width=[<list of the beam widths, separated with commas>] \
37 beam_alpha=[<list of the beam alphas, separated with commas>] \
38 beam_beta=[<list of the beam betas, separated with commas>] \
39 preds_output_folder=<optional folder to store the predictions> \
40 probs_cache_file=null \
41 decoding_mode=beamsearch_ngram
42 ...
43
44
45 # Grid Search for Hyper parameters
46
47 For grid search, you can provide a list of arguments as follows -
48
49 beam_width=[4,8,16,....] \
50 beam_alpha=[-2.0,-1.0,...,1.0,2.0] \
51 beam_beta=[-1.0,-0.5,0.0,...,1.0] \
52
53 # You may find more info on how to use this script at:
54 # https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/asr_language_modeling.html
55
56 """
57
58
59 import contextlib
60 import json
61 import os
62 import pickle
63 from dataclasses import dataclass, field, is_dataclass
64 from pathlib import Path
65 from typing import List, Optional
66
67 import editdistance
68 import numpy as np
69 import torch
70 from omegaconf import MISSING, OmegaConf
71 from sklearn.model_selection import ParameterGrid
72 from tqdm.auto import tqdm
73
74 import nemo.collections.asr as nemo_asr
75 from nemo.collections.asr.models import EncDecHybridRNNTCTCModel
76 from nemo.collections.asr.parts.submodules import ctc_beam_decoding
77 from nemo.collections.asr.parts.utils.transcribe_utils import PunctuationCapitalization, TextProcessingConfig
78 from nemo.core.config import hydra_runner
79 from nemo.utils import logging
80
81 # fmt: off
82
83
84 @dataclass
85 class EvalBeamSearchNGramConfig:
86 """
87 Evaluate an ASR model with beam search decoding and n-gram KenLM language model.
88 """
89     # The path of the '.nemo' file of the ASR model or the name of a pretrained model (ngc / huggingface)
90 nemo_model_file: str = MISSING
91
92 # File paths
93 input_manifest: str = MISSING # The manifest file of the evaluation set
94 kenlm_model_file: Optional[str] = None # The path of the KenLM binary model file
95 preds_output_folder: Optional[str] = None # The optional folder where the predictions are stored
96 probs_cache_file: Optional[str] = None # The cache file for storing the logprobs of the model
97
98 # Parameters for inference
99 acoustic_batch_size: int = 16 # The batch size to calculate log probabilities
100 beam_batch_size: int = 128 # The batch size to be used for beam search decoding
101 device: str = "cuda" # The device to load the model onto to calculate log probabilities
102 use_amp: bool = False # Whether to use AMP if available to calculate log probabilities
103
104 # Beam Search hyperparameters
105
106 # The decoding scheme to be used for evaluation.
107 # Can be one of ["greedy", "beamsearch", "beamsearch_ngram"]
108 decoding_mode: str = "beamsearch_ngram"
109
110 beam_width: List[int] = field(default_factory=lambda: [128]) # The width or list of the widths for the beam search decoding
111 beam_alpha: List[float] = field(default_factory=lambda: [1.0]) # The alpha parameter or list of the alphas for the beam search decoding
112 beam_beta: List[float] = field(default_factory=lambda: [0.0]) # The beta parameter or list of the betas for the beam search decoding
113
114 decoding_strategy: str = "beam"
115 decoding: ctc_beam_decoding.BeamCTCInferConfig = ctc_beam_decoding.BeamCTCInferConfig(beam_size=128)
116
117 text_processing: Optional[TextProcessingConfig] = TextProcessingConfig(
118 punctuation_marks = ".,?",
119 separate_punctuation = False,
120 do_lowercase = False,
121 rm_punctuation = False,
122 )
123 # fmt: on
124
125
126 def beam_search_eval(
127 model: nemo_asr.models.ASRModel,
128 cfg: EvalBeamSearchNGramConfig,
129 all_probs: List[torch.Tensor],
130 target_transcripts: List[str],
131 preds_output_file: str = None,
132 lm_path: str = None,
133 beam_alpha: float = 1.0,
134 beam_beta: float = 0.0,
135 beam_width: int = 128,
136 beam_batch_size: int = 128,
137 progress_bar: bool = True,
138 punctuation_capitalization: PunctuationCapitalization = None,
139 ):
140 level = logging.getEffectiveLevel()
141 logging.setLevel(logging.CRITICAL)
142 # Reset config
143 model.change_decoding_strategy(None)
144
145 # Override the beam search config with current search candidate configuration
146 cfg.decoding.beam_size = beam_width
147 cfg.decoding.beam_alpha = beam_alpha
148 cfg.decoding.beam_beta = beam_beta
149 cfg.decoding.return_best_hypothesis = False
150 cfg.decoding.kenlm_path = cfg.kenlm_model_file
151
152 # Update model's decoding strategy config
153 model.cfg.decoding.strategy = cfg.decoding_strategy
154 model.cfg.decoding.beam = cfg.decoding
155
156 # Update model's decoding strategy
157 if isinstance(model, EncDecHybridRNNTCTCModel):
158 model.change_decoding_strategy(model.cfg.decoding, decoder_type='ctc')
159 decoding = model.ctc_decoding
160 else:
161 model.change_decoding_strategy(model.cfg.decoding)
162 decoding = model.decoding
163 logging.setLevel(level)
164
165 wer_dist_first = cer_dist_first = 0
166 wer_dist_best = cer_dist_best = 0
167 words_count = 0
168 chars_count = 0
169 sample_idx = 0
170 if preds_output_file:
171 out_file = open(preds_output_file, 'w', encoding='utf_8', newline='\n')
172
173 if progress_bar:
174 it = tqdm(
175 range(int(np.ceil(len(all_probs) / beam_batch_size))),
176 desc=f"Beam search decoding with width={beam_width}, alpha={beam_alpha}, beta={beam_beta}",
177 ncols=120,
178 )
179 else:
180 it = range(int(np.ceil(len(all_probs) / beam_batch_size)))
181 for batch_idx in it:
182 # disabling type checking
183 probs_batch = all_probs[batch_idx * beam_batch_size : (batch_idx + 1) * beam_batch_size]
184 probs_lens = torch.tensor([prob.shape[0] for prob in probs_batch])
185 with torch.no_grad():
186 packed_batch = torch.zeros(len(probs_batch), max(probs_lens), probs_batch[0].shape[-1], device='cpu')
187
188 for prob_index in range(len(probs_batch)):
189 packed_batch[prob_index, : probs_lens[prob_index], :] = torch.tensor(
190 probs_batch[prob_index], device=packed_batch.device, dtype=packed_batch.dtype
191 )
192
193 _, beams_batch = decoding.ctc_decoder_predictions_tensor(
194 packed_batch, decoder_lengths=probs_lens, return_hypotheses=True,
195 )
196
197 for beams_idx, beams in enumerate(beams_batch):
198 target = target_transcripts[sample_idx + beams_idx]
199 target_split_w = target.split()
200 target_split_c = list(target)
201 words_count += len(target_split_w)
202 chars_count += len(target_split_c)
203 wer_dist_min = cer_dist_min = 10000
204 for candidate_idx, candidate in enumerate(beams): # type: (int, ctc_beam_decoding.rnnt_utils.Hypothesis)
205 pred_text = candidate.text
206 if cfg.text_processing.do_lowercase:
207 pred_text = punctuation_capitalization.do_lowercase([pred_text])[0]
208 if cfg.text_processing.rm_punctuation:
209 pred_text = punctuation_capitalization.rm_punctuation([pred_text])[0]
210 if cfg.text_processing.separate_punctuation:
211 pred_text = punctuation_capitalization.separate_punctuation([pred_text])[0]
212 pred_split_w = pred_text.split()
213 wer_dist = editdistance.eval(target_split_w, pred_split_w)
214 pred_split_c = list(pred_text)
215 cer_dist = editdistance.eval(target_split_c, pred_split_c)
216
217 wer_dist_min = min(wer_dist_min, wer_dist)
218 cer_dist_min = min(cer_dist_min, cer_dist)
219
220 if candidate_idx == 0:
221 # first candidate
222 wer_dist_first += wer_dist
223 cer_dist_first += cer_dist
224
225 score = candidate.score
226 if preds_output_file:
227 out_file.write('{}\t{}\n'.format(pred_text, score))
228 wer_dist_best += wer_dist_min
229 cer_dist_best += cer_dist_min
230 sample_idx += len(probs_batch)
231
232 if preds_output_file:
233 out_file.close()
234 logging.info(f"Stored the predictions of beam search decoding at '{preds_output_file}'.")
235
236 if lm_path:
237 logging.info(
238 'WER/CER with beam search decoding and N-gram model = {:.2%}/{:.2%}'.format(
239 wer_dist_first / words_count, cer_dist_first / chars_count
240 )
241 )
242 else:
243 logging.info(
244 'WER/CER with beam search decoding = {:.2%}/{:.2%}'.format(
245 wer_dist_first / words_count, cer_dist_first / chars_count
246 )
247 )
248 logging.info(
249 'Oracle WER/CER in candidates with perfect LM= {:.2%}/{:.2%}'.format(
250 wer_dist_best / words_count, cer_dist_best / chars_count
251 )
252 )
253 logging.info(f"=================================================================================")
254
255 return wer_dist_first / words_count, cer_dist_first / chars_count
256
257
258 @hydra_runner(config_path=None, config_name='EvalBeamSearchNGramConfig', schema=EvalBeamSearchNGramConfig)
259 def main(cfg: EvalBeamSearchNGramConfig):
260 logging.warning("This file will be renamed to eval_beamsearch_ngram_ctc.py in the future NeMo (1.21) release.")
261 if is_dataclass(cfg):
262 cfg = OmegaConf.structured(cfg) # type: EvalBeamSearchNGramConfig
263
264 valid_decoding_modes = ["greedy", "beamsearch", "beamsearch_ngram"]
265 if cfg.decoding_mode not in valid_decoding_modes:
266 raise ValueError(
267 f"Given decoding_mode={cfg.decoding_mode} is invalid. Available options are :\n" f"{valid_decoding_modes}"
268 )
269
270 if cfg.nemo_model_file.endswith('.nemo'):
271 asr_model = nemo_asr.models.ASRModel.restore_from(cfg.nemo_model_file, map_location=torch.device(cfg.device))
272 else:
273 logging.warning(
274 "nemo_model_file does not end with .nemo, therefore trying to load a pretrained model with this name."
275 )
276 asr_model = nemo_asr.models.ASRModel.from_pretrained(
277 cfg.nemo_model_file, map_location=torch.device(cfg.device)
278 )
279
280 target_transcripts = []
281 manifest_dir = Path(cfg.input_manifest).parent
282 with open(cfg.input_manifest, 'r', encoding='utf_8') as manifest_file:
283 audio_file_paths = []
284 for line in tqdm(manifest_file, desc=f"Reading Manifest {cfg.input_manifest} ...", ncols=120):
285 data = json.loads(line)
286 audio_file = Path(data['audio_filepath'])
287 if not audio_file.is_file() and not audio_file.is_absolute():
288 audio_file = manifest_dir / audio_file
289 target_transcripts.append(data['text'])
290 audio_file_paths.append(str(audio_file.absolute()))
291
292 punctuation_capitalization = PunctuationCapitalization(cfg.text_processing.punctuation_marks)
293 if cfg.text_processing.do_lowercase:
294 target_transcripts = punctuation_capitalization.do_lowercase(target_transcripts)
295 if cfg.text_processing.rm_punctuation:
296 target_transcripts = punctuation_capitalization.rm_punctuation(target_transcripts)
297 if cfg.text_processing.separate_punctuation:
298 target_transcripts = punctuation_capitalization.separate_punctuation(target_transcripts)
299
300 if cfg.probs_cache_file and os.path.exists(cfg.probs_cache_file):
301 logging.info(f"Found a pickle file of probabilities at '{cfg.probs_cache_file}'.")
302 logging.info(f"Loading the cached pickle file of probabilities from '{cfg.probs_cache_file}' ...")
303 with open(cfg.probs_cache_file, 'rb') as probs_file:
304 all_probs = pickle.load(probs_file)
305
306 if len(all_probs) != len(audio_file_paths):
307 raise ValueError(
308 f"The number of samples in the probabilities file '{cfg.probs_cache_file}' does not "
309 f"match the manifest file. You may need to delete the probabilities cached file."
310 )
311 else:
312
313 @contextlib.contextmanager
314 def default_autocast():
315 yield
316
317 if cfg.use_amp:
318 if torch.cuda.is_available() and hasattr(torch.cuda, 'amp') and hasattr(torch.cuda.amp, 'autocast'):
319 logging.info("AMP is enabled!\n")
320 autocast = torch.cuda.amp.autocast
321
322 else:
323 autocast = default_autocast
324 else:
325
326 autocast = default_autocast
327
328 with autocast():
329 with torch.no_grad():
330 if isinstance(asr_model, EncDecHybridRNNTCTCModel):
331 asr_model.cur_decoder = 'ctc'
332 all_logits = asr_model.transcribe(audio_file_paths, batch_size=cfg.acoustic_batch_size, logprobs=True)
333
334 all_probs = all_logits
335 if cfg.probs_cache_file:
336 logging.info(f"Writing pickle files of probabilities at '{cfg.probs_cache_file}'...")
337 with open(cfg.probs_cache_file, 'wb') as f_dump:
338 pickle.dump(all_probs, f_dump)
339
340 wer_dist_greedy = 0
341 cer_dist_greedy = 0
342 words_count = 0
343 chars_count = 0
344 for batch_idx, probs in enumerate(all_probs):
345 preds = np.argmax(probs, axis=1)
346 preds_tensor = torch.tensor(preds, device='cpu').unsqueeze(0)
347 if isinstance(asr_model, EncDecHybridRNNTCTCModel):
348 pred_text = asr_model.ctc_decoding.ctc_decoder_predictions_tensor(preds_tensor)[0][0]
349 else:
350 pred_text = asr_model._wer.decoding.ctc_decoder_predictions_tensor(preds_tensor)[0][0]
351
352 if cfg.text_processing.do_lowercase:
353 pred_text = punctuation_capitalization.do_lowercase([pred_text])[0]
354 if cfg.text_processing.rm_punctuation:
355 pred_text = punctuation_capitalization.rm_punctuation([pred_text])[0]
356 if cfg.text_processing.separate_punctuation:
357 pred_text = punctuation_capitalization.separate_punctuation([pred_text])[0]
358
359 pred_split_w = pred_text.split()
360 target_split_w = target_transcripts[batch_idx].split()
361 pred_split_c = list(pred_text)
362 target_split_c = list(target_transcripts[batch_idx])
363
364 wer_dist = editdistance.eval(target_split_w, pred_split_w)
365 cer_dist = editdistance.eval(target_split_c, pred_split_c)
366
367 wer_dist_greedy += wer_dist
368 cer_dist_greedy += cer_dist
369 words_count += len(target_split_w)
370 chars_count += len(target_split_c)
371
372 logging.info('Greedy WER/CER = {:.2%}/{:.2%}'.format(wer_dist_greedy / words_count, cer_dist_greedy / chars_count))
373
374 asr_model = asr_model.to('cpu')
375
376 if cfg.decoding_mode == "beamsearch_ngram":
377 if not os.path.exists(cfg.kenlm_model_file):
378 raise FileNotFoundError(f"Could not find the KenLM model file '{cfg.kenlm_model_file}'.")
379 lm_path = cfg.kenlm_model_file
380 else:
381 lm_path = None
382
383 # 'greedy' decoding_mode would skip the beam search decoding
384 if cfg.decoding_mode in ["beamsearch_ngram", "beamsearch"]:
385 if cfg.beam_width is None or cfg.beam_alpha is None or cfg.beam_beta is None:
386 raise ValueError("beam_width, beam_alpha and beam_beta are needed to perform beam search decoding.")
387 params = {'beam_width': cfg.beam_width, 'beam_alpha': cfg.beam_alpha, 'beam_beta': cfg.beam_beta}
388 hp_grid = ParameterGrid(params)
389 hp_grid = list(hp_grid)
390
391 best_wer_beam_size, best_cer_beam_size = None, None
392 best_wer_alpha, best_cer_alpha = None, None
393 best_wer_beta, best_cer_beta = None, None
394 best_wer, best_cer = 1e6, 1e6
395
396 logging.info(f"==============================Starting the beam search decoding===============================")
397 logging.info(f"Grid search size: {len(hp_grid)}")
398 logging.info(f"It may take some time...")
399 logging.info(f"==============================================================================================")
400
401 if cfg.preds_output_folder and not os.path.exists(cfg.preds_output_folder):
402 os.mkdir(cfg.preds_output_folder)
403 for hp in hp_grid:
404 if cfg.preds_output_folder:
405 preds_output_file = os.path.join(
406 cfg.preds_output_folder,
407 f"preds_out_width{hp['beam_width']}_alpha{hp['beam_alpha']}_beta{hp['beam_beta']}.tsv",
408 )
409 else:
410 preds_output_file = None
411
412 candidate_wer, candidate_cer = beam_search_eval(
413 asr_model,
414 cfg,
415 all_probs=all_probs,
416 target_transcripts=target_transcripts,
417 preds_output_file=preds_output_file,
418 lm_path=lm_path,
419 beam_width=hp["beam_width"],
420 beam_alpha=hp["beam_alpha"],
421 beam_beta=hp["beam_beta"],
422 beam_batch_size=cfg.beam_batch_size,
423 progress_bar=True,
424 punctuation_capitalization=punctuation_capitalization,
425 )
426
427 if candidate_cer < best_cer:
428 best_cer_beam_size = hp["beam_width"]
429 best_cer_alpha = hp["beam_alpha"]
430 best_cer_beta = hp["beam_beta"]
431 best_cer = candidate_cer
432
433 if candidate_wer < best_wer:
434 best_wer_beam_size = hp["beam_width"]
435 best_wer_alpha = hp["beam_alpha"]
436 best_wer_beta = hp["beam_beta"]
437 best_wer = candidate_wer
438
439 logging.info(
440 f'Best WER Candidate = {best_wer:.2%} :: Beam size = {best_wer_beam_size}, '
441 f'Beam alpha = {best_wer_alpha}, Beam beta = {best_wer_beta}'
442 )
443
444 logging.info(
445 f'Best CER Candidate = {best_cer:.2%} :: Beam size = {best_cer_beam_size}, '
446 f'Beam alpha = {best_cer_alpha}, Beam beta = {best_cer_beta}'
447 )
448 logging.info(f"=================================================================================")
449
450
451 if __name__ == '__main__':
452 main()
453
[end of scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram.py]
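The greedy and beam-search evaluation loops above both accumulate word- and character-level edit distances and divide by the total reference token counts. A dependency-free sketch of that bookkeeping, with a pure-Python Levenshtein distance standing in for the `editdistance` package the script imports (the helper names `levenshtein` and `wer_cer` are illustrative):

```python
# Sketch of the WER/CER computation performed by the evaluation loops.
# `levenshtein` replicates editdistance.eval with classic dynamic programming.

def levenshtein(ref, hyp):
    # Edit distance over token sequences (words or characters).
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        curr = [i]
        for j, h in enumerate(hyp, start=1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (r != h)))  # substitution
        prev = curr
    return prev[-1]

def wer_cer(target: str, pred: str):
    # WER tokenizes on whitespace, CER on characters, as in the script.
    wer = levenshtein(target.split(), pred.split()) / len(target.split())
    cer = levenshtein(list(target), list(pred)) / len(list(target))
    return wer, cer

wer, cer = wer_cer('the cat sat', 'the cat sit')  # one word wrong, one char wrong
```

The script sums these distances over all beam candidates to also report an "oracle" WER/CER, i.e. the best candidate a perfect LM could have selected.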
[start of scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram_transducer.py]
1 # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 #
15
16 """
17 # This script evaluates an N-gram language model trained with the KenLM library (https://github.com/kpu/kenlm) in
18 # fusion with beam search decoders on top of a trained ASR Transducer model. NeMo's beam search decoders are capable of using
19 # KenLM's N-gram models to find the best candidates. This script supports both character-level and BPE-level
20 # encodings and models; the encoding type is detected automatically from the model.
21 # You may train the LM model with 'scripts/asr_language_modeling/ngram_lm/train_kenlm.py'.
22
23 # Config Help
24
25 To discover all arguments of the script, please run:
26 python eval_beamsearch_ngram.py --help
27 python eval_beamsearch_ngram.py --cfg job
28
29 # USAGE
30
31 python eval_beamsearch_ngram_transducer.py nemo_model_file=<path to the .nemo file of the model> \
32     input_manifest=<path to the evaluation JSON manifest file> \
33 kenlm_model_file=<path to the binary KenLM model> \
34 beam_width=[<list of the beam widths, separated with commas>] \
35 beam_alpha=[<list of the beam alphas, separated with commas>] \
36 preds_output_folder=<optional folder to store the predictions> \
37 probs_cache_file=null \
38 decoding_strategy=<greedy_batch or maes decoding>
39 maes_prefix_alpha=[<list of the maes prefix alphas, separated with commas>] \
40 maes_expansion_gamma=[<list of the maes expansion gammas, separated with commas>] \
41 hat_subtract_ilm=<in case of HAT model: subtract internal LM or not> \
42 hat_ilm_weight=[<in case of HAT model: list of the HAT internal LM weights, separated with commas>] \
43 ...
44
45
46 # Grid Search for Hyper parameters
47
48 For grid search, you can provide a list of arguments as follows -
49
50 beam_width=[4,8,16,....] \
51 beam_alpha=[-2.0,-1.0,...,1.0,2.0] \
52
53 # You may find more info on how to use this script at:
54 # https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/asr_language_modeling.html
55
56 """
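The "Grid Search for Hyper parameters" section above says a list may be passed for each hyper-parameter; the script then evaluates every combination by expanding them with sklearn's `ParameterGrid`. A minimal dependency-free sketch of that expansion using `itertools.product` (the function name `parameter_grid` is illustrative, and the iteration order here is not guaranteed to match sklearn's):

```python
# Sketch of the hyper-parameter grid expansion the script delegates to
# sklearn.model_selection.ParameterGrid, rebuilt with itertools.product.
import itertools

def parameter_grid(params: dict):
    # Cartesian product over the value lists, yielding one dict per combination.
    keys = sorted(params)
    for values in itertools.product(*(params[k] for k in keys)):
        yield dict(zip(keys, values))

grid = list(parameter_grid({'beam_width': [4, 8], 'beam_alpha': [0.5, 1.0]}))
# 2 x 2 = 4 candidate configurations, each passed to one decoding run.
```

Each yielded dict corresponds to one `decoding_step`/`beam_search_eval` invocation, which is why the grid size grows multiplicatively with every list-valued argument.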
57
58
59 import contextlib
60 import json
61 import os
62 import pickle
63 import tempfile
64 from dataclasses import dataclass, field, is_dataclass
65 from pathlib import Path
66 from typing import List, Optional
67
68 import editdistance
69 import numpy as np
70 import torch
71 from omegaconf import MISSING, OmegaConf
72 from sklearn.model_selection import ParameterGrid
73 from tqdm.auto import tqdm
74
75 import nemo.collections.asr as nemo_asr
76 from nemo.collections.asr.parts.submodules import rnnt_beam_decoding
77 from nemo.core.config import hydra_runner
78 from nemo.utils import logging
79
80 # fmt: off
81
82
83 @dataclass
84 class EvalBeamSearchNGramConfig:
85 """
86 Evaluate an ASR model with beam search decoding and n-gram KenLM language model.
87 """
88     # The path of the '.nemo' file of the ASR model or the name of a pretrained model (ngc / huggingface)
89 nemo_model_file: str = MISSING
90
91 # File paths
92 input_manifest: str = MISSING # The manifest file of the evaluation set
93 kenlm_model_file: Optional[str] = None # The path of the KenLM binary model file
94 preds_output_folder: Optional[str] = None # The optional folder where the predictions are stored
95 probs_cache_file: Optional[str] = None # The cache file for storing the logprobs of the model
96
97 # Parameters for inference
98 acoustic_batch_size: int = 128 # The batch size to calculate log probabilities
99 beam_batch_size: int = 128 # The batch size to be used for beam search decoding
100 device: str = "cuda" # The device to load the model onto to calculate log probabilities
101 use_amp: bool = False # Whether to use AMP if available to calculate log probabilities
102 num_workers: int = 1 # Number of workers for DataLoader
103
104 # The decoding scheme to be used for evaluation
105 decoding_strategy: str = "greedy_batch" # ["greedy_batch", "beam", "tsd", "alsd", "maes"]
106
107 # Beam Search hyperparameters
108 beam_width: List[int] = field(default_factory=lambda: [8]) # The width or list of the widths for the beam search decoding
109 beam_alpha: List[float] = field(default_factory=lambda: [0.2]) # The alpha parameter or list of the alphas for the beam search decoding
110
111 maes_prefix_alpha: List[int] = field(default_factory=lambda: [2]) # The maes_prefix_alpha or list of the maes_prefix_alpha for the maes decoding
112 maes_expansion_gamma: List[float] = field(default_factory=lambda: [2.3]) # The maes_expansion_gamma or list of the maes_expansion_gamma for the maes decoding
113
114 # HAT related parameters (only for internal lm subtraction)
115 hat_subtract_ilm: bool = False
116 hat_ilm_weight: List[float] = field(default_factory=lambda: [0.0])
117
118 decoding: rnnt_beam_decoding.BeamRNNTInferConfig = rnnt_beam_decoding.BeamRNNTInferConfig(beam_size=128)
119
120
121 # fmt: on
122
123
124 def decoding_step(
125 model: nemo_asr.models.ASRModel,
126 cfg: EvalBeamSearchNGramConfig,
127 all_probs: List[torch.Tensor],
128 target_transcripts: List[str],
129 preds_output_file: str = None,
130 beam_batch_size: int = 128,
131 progress_bar: bool = True,
132 ):
133 level = logging.getEffectiveLevel()
134 logging.setLevel(logging.CRITICAL)
135 # Reset config
136 model.change_decoding_strategy(None)
137
138 cfg.decoding.hat_ilm_weight = cfg.decoding.hat_ilm_weight * cfg.hat_subtract_ilm
139 # Override the beam search config with current search candidate configuration
140 cfg.decoding.return_best_hypothesis = False
141 cfg.decoding.ngram_lm_model = cfg.kenlm_model_file
142 cfg.decoding.hat_subtract_ilm = cfg.hat_subtract_ilm
143
144 # Update model's decoding strategy config
145 model.cfg.decoding.strategy = cfg.decoding_strategy
146 model.cfg.decoding.beam = cfg.decoding
147
148 # Update model's decoding strategy
149 model.change_decoding_strategy(model.cfg.decoding)
150 logging.setLevel(level)
151
152 wer_dist_first = cer_dist_first = 0
153 wer_dist_best = cer_dist_best = 0
154 words_count = 0
155 chars_count = 0
156 sample_idx = 0
157 if preds_output_file:
158 out_file = open(preds_output_file, 'w', encoding='utf_8', newline='\n')
159
160 if progress_bar:
161 if cfg.decoding_strategy == "greedy_batch":
162             description = "Greedy_batch decoding..."
163 else:
164 description = f"{cfg.decoding_strategy} decoding with bw={cfg.decoding.beam_size}, ba={cfg.decoding.ngram_lm_alpha}, ma={cfg.decoding.maes_prefix_alpha}, mg={cfg.decoding.maes_expansion_gamma}, hat_ilmw={cfg.decoding.hat_ilm_weight}"
165 it = tqdm(range(int(np.ceil(len(all_probs) / beam_batch_size))), desc=description, ncols=120)
166 else:
167 it = range(int(np.ceil(len(all_probs) / beam_batch_size)))
168 for batch_idx in it:
169 # disabling type checking
170 probs_batch = all_probs[batch_idx * beam_batch_size : (batch_idx + 1) * beam_batch_size]
171 probs_lens = torch.tensor([prob.shape[-1] for prob in probs_batch])
172 with torch.no_grad():
173 packed_batch = torch.zeros(len(probs_batch), probs_batch[0].shape[0], max(probs_lens), device='cpu')
174
175 for prob_index in range(len(probs_batch)):
176 packed_batch[prob_index, :, : probs_lens[prob_index]] = torch.tensor(
177 probs_batch[prob_index].unsqueeze(0), device=packed_batch.device, dtype=packed_batch.dtype
178 )
179 best_hyp_batch, beams_batch = model.decoding.rnnt_decoder_predictions_tensor(
180 packed_batch, probs_lens, return_hypotheses=True,
181 )
182 if cfg.decoding_strategy == "greedy_batch":
183 beams_batch = [[x] for x in best_hyp_batch]
184
185 for beams_idx, beams in enumerate(beams_batch):
186 target = target_transcripts[sample_idx + beams_idx]
187 target_split_w = target.split()
188 target_split_c = list(target)
189 words_count += len(target_split_w)
190 chars_count += len(target_split_c)
191 wer_dist_min = cer_dist_min = 10000
192 for candidate_idx, candidate in enumerate(beams): # type: (int, rnnt_beam_decoding.rnnt_utils.Hypothesis)
193 pred_text = candidate.text
194 pred_split_w = pred_text.split()
195 wer_dist = editdistance.eval(target_split_w, pred_split_w)
196 pred_split_c = list(pred_text)
197 cer_dist = editdistance.eval(target_split_c, pred_split_c)
198
199 wer_dist_min = min(wer_dist_min, wer_dist)
200 cer_dist_min = min(cer_dist_min, cer_dist)
201
202 if candidate_idx == 0:
203 # first candidate
204 wer_dist_first += wer_dist
205 cer_dist_first += cer_dist
206
207 score = candidate.score
208 if preds_output_file:
209 out_file.write('{}\t{}\n'.format(pred_text, score))
210 wer_dist_best += wer_dist_min
211 cer_dist_best += cer_dist_min
212 sample_idx += len(probs_batch)
213
214 if cfg.decoding_strategy == "greedy_batch":
215 return wer_dist_first / words_count, cer_dist_first / chars_count
216
217 if preds_output_file:
218 out_file.close()
219 logging.info(f"Stored the predictions of {cfg.decoding_strategy} decoding at '{preds_output_file}'.")
220
221 if cfg.decoding.ngram_lm_model:
222 logging.info(
223 f"WER/CER with {cfg.decoding_strategy} decoding and N-gram model = {wer_dist_first / words_count:.2%}/{cer_dist_first / chars_count:.2%}"
224 )
225 else:
226 logging.info(
227 f"WER/CER with {cfg.decoding_strategy} decoding = {wer_dist_first / words_count:.2%}/{cer_dist_first / chars_count:.2%}"
228 )
229 logging.info(
230 f"Oracle WER/CER in candidates with perfect LM= {wer_dist_best / words_count:.2%}/{cer_dist_best / chars_count:.2%}"
231 )
232 logging.info(f"=================================================================================")
233
234 return wer_dist_first / words_count, cer_dist_first / chars_count
235
236
237 @hydra_runner(config_path=None, config_name='EvalBeamSearchNGramConfig', schema=EvalBeamSearchNGramConfig)
238 def main(cfg: EvalBeamSearchNGramConfig):
239 if is_dataclass(cfg):
240 cfg = OmegaConf.structured(cfg) # type: EvalBeamSearchNGramConfig
241
242     valid_decoding_strategies = ["greedy_batch", "beam", "tsd", "alsd", "maes"]
243     if cfg.decoding_strategy not in valid_decoding_strategies:
244         raise ValueError(
245             f"Given decoding_strategy={cfg.decoding_strategy} is invalid. Available options are:\n"
246             f"{valid_decoding_strategies}"
247         )
248
249 if cfg.nemo_model_file.endswith('.nemo'):
250 asr_model = nemo_asr.models.ASRModel.restore_from(cfg.nemo_model_file, map_location=torch.device(cfg.device))
251 else:
252 logging.warning(
253 "nemo_model_file does not end with .nemo, therefore trying to load a pretrained model with this name."
254 )
255 asr_model = nemo_asr.models.ASRModel.from_pretrained(
256 cfg.nemo_model_file, map_location=torch.device(cfg.device)
257 )
258
259 if cfg.kenlm_model_file:
260 if not os.path.exists(cfg.kenlm_model_file):
261 raise FileNotFoundError(f"Could not find the KenLM model file '{cfg.kenlm_model_file}'.")
262 if cfg.decoding_strategy != "maes":
263             raise ValueError("Decoding with a KenLM model is only supported for the maes decoding algorithm.")
264 lm_path = cfg.kenlm_model_file
265 else:
266 lm_path = None
267 cfg.beam_alpha = [0.0]
268 if cfg.hat_subtract_ilm:
269 assert lm_path, "kenlm must be set for hat internal lm subtraction"
270
271 if cfg.decoding_strategy != "maes":
272 cfg.maes_prefix_alpha, cfg.maes_expansion_gamma, cfg.hat_ilm_weight = [0], [0], [0]
273
274 target_transcripts = []
275 manifest_dir = Path(cfg.input_manifest).parent
276 with open(cfg.input_manifest, 'r', encoding='utf_8') as manifest_file:
277 audio_file_paths = []
278 for line in tqdm(manifest_file, desc=f"Reading Manifest {cfg.input_manifest} ...", ncols=120):
279 data = json.loads(line)
280 audio_file = Path(data['audio_filepath'])
281 if not audio_file.is_file() and not audio_file.is_absolute():
282 audio_file = manifest_dir / audio_file
283 target_transcripts.append(data['text'])
284 audio_file_paths.append(str(audio_file.absolute()))
285
286 if cfg.probs_cache_file and os.path.exists(cfg.probs_cache_file):
287 logging.info(f"Found a pickle file of probabilities at '{cfg.probs_cache_file}'.")
288 logging.info(f"Loading the cached pickle file of probabilities from '{cfg.probs_cache_file}' ...")
289 with open(cfg.probs_cache_file, 'rb') as probs_file:
290 all_probs = pickle.load(probs_file)
291
292 if len(all_probs) != len(audio_file_paths):
293 raise ValueError(
294 f"The number of samples in the probabilities file '{cfg.probs_cache_file}' does not "
295 f"match the manifest file. You may need to delete the probabilities cached file."
296 )
297 else:
298
299 @contextlib.contextmanager
300 def default_autocast():
301 yield
302
303 if cfg.use_amp:
304 if torch.cuda.is_available() and hasattr(torch.cuda, 'amp') and hasattr(torch.cuda.amp, 'autocast'):
305 logging.info("AMP is enabled!\n")
306 autocast = torch.cuda.amp.autocast
307
308 else:
309 autocast = default_autocast
310 else:
311
312 autocast = default_autocast
313
314 # manual calculation of encoder_embeddings
315 with autocast():
316 with torch.no_grad():
317 asr_model.eval()
318 asr_model.encoder.freeze()
319 device = next(asr_model.parameters()).device
320 all_probs = []
321 with tempfile.TemporaryDirectory() as tmpdir:
322 with open(os.path.join(tmpdir, 'manifest.json'), 'w', encoding='utf-8') as fp:
323 for audio_file in audio_file_paths:
324 entry = {'audio_filepath': audio_file, 'duration': 100000, 'text': ''}
325 fp.write(json.dumps(entry) + '\n')
326 config = {
327 'paths2audio_files': audio_file_paths,
328 'batch_size': cfg.acoustic_batch_size,
329 'temp_dir': tmpdir,
330 'num_workers': cfg.num_workers,
331 'channel_selector': None,
332 'augmentor': None,
333 }
334 temporary_datalayer = asr_model._setup_transcribe_dataloader(config)
335 for test_batch in tqdm(temporary_datalayer, desc="Transcribing", disable=True):
336 encoded, encoded_len = asr_model.forward(
337 input_signal=test_batch[0].to(device), input_signal_length=test_batch[1].to(device)
338 )
339 # dump encoder embeddings per file
340 for idx in range(encoded.shape[0]):
341 encoded_no_pad = encoded[idx, :, : encoded_len[idx]]
342 all_probs.append(encoded_no_pad)
343
344 if cfg.probs_cache_file:
345 logging.info(f"Writing pickle files of probabilities at '{cfg.probs_cache_file}'...")
346 with open(cfg.probs_cache_file, 'wb') as f_dump:
347 pickle.dump(all_probs, f_dump)
348
349 if cfg.decoding_strategy == "greedy_batch":
350 asr_model = asr_model.to('cpu')
351 candidate_wer, candidate_cer = decoding_step(
352 asr_model,
353 cfg,
354 all_probs=all_probs,
355 target_transcripts=target_transcripts,
356 beam_batch_size=cfg.beam_batch_size,
357 progress_bar=True,
358 )
359 logging.info(f"Greedy batch WER/CER = {candidate_wer:.2%}/{candidate_cer:.2%}")
360
361 asr_model = asr_model.to('cpu')
362
363 # 'greedy_batch' decoding_strategy would skip the beam search decoding
364 if cfg.decoding_strategy in ["beam", "tsd", "alsd", "maes"]:
365 if cfg.beam_width is None or cfg.beam_alpha is None:
366 raise ValueError("beam_width and beam_alpha are needed to perform beam search decoding.")
367 params = {
368 'beam_width': cfg.beam_width,
369 'beam_alpha': cfg.beam_alpha,
370 'maes_prefix_alpha': cfg.maes_prefix_alpha,
371 'maes_expansion_gamma': cfg.maes_expansion_gamma,
372 'hat_ilm_weight': cfg.hat_ilm_weight,
373 }
374 hp_grid = ParameterGrid(params)
375 hp_grid = list(hp_grid)
376
377 best_wer_beam_size, best_cer_beam_size = None, None
378 best_wer_alpha, best_cer_alpha = None, None
379 best_wer, best_cer = 1e6, 1e6
380
381 logging.info(
382 f"==============================Starting the {cfg.decoding_strategy} decoding==============================="
383 )
384 logging.info(f"Grid search size: {len(hp_grid)}")
385 logging.info(f"It may take some time...")
386 logging.info(f"==============================================================================================")
387
388 if cfg.preds_output_folder and not os.path.exists(cfg.preds_output_folder):
389 os.mkdir(cfg.preds_output_folder)
390 for hp in hp_grid:
391 if cfg.preds_output_folder:
392 results_file = f"preds_out_{cfg.decoding_strategy}_bw{hp['beam_width']}"
393 if cfg.decoding_strategy == "maes":
394 results_file = f"{results_file}_ma{hp['maes_prefix_alpha']}_mg{hp['maes_expansion_gamma']}"
395 if cfg.kenlm_model_file:
396 results_file = f"{results_file}_ba{hp['beam_alpha']}"
397 if cfg.hat_subtract_ilm:
398 results_file = f"{results_file}_hat_ilmw{hp['hat_ilm_weight']}"
399 preds_output_file = os.path.join(cfg.preds_output_folder, f"{results_file}.tsv")
400 else:
401 preds_output_file = None
402
403 cfg.decoding.beam_size = hp["beam_width"]
404 cfg.decoding.ngram_lm_alpha = hp["beam_alpha"]
405 cfg.decoding.maes_prefix_alpha = hp["maes_prefix_alpha"]
406 cfg.decoding.maes_expansion_gamma = hp["maes_expansion_gamma"]
407 cfg.decoding.hat_ilm_weight = hp["hat_ilm_weight"]
408
409 candidate_wer, candidate_cer = decoding_step(
410 asr_model,
411 cfg,
412 all_probs=all_probs,
413 target_transcripts=target_transcripts,
414 preds_output_file=preds_output_file,
415 beam_batch_size=cfg.beam_batch_size,
416 progress_bar=True,
417 )
418
419 if candidate_cer < best_cer:
420 best_cer_beam_size = hp["beam_width"]
421 best_cer_alpha = hp["beam_alpha"]
422 best_cer_ma = hp["maes_prefix_alpha"]
423 best_cer_mg = hp["maes_expansion_gamma"]
424 best_cer_hat_ilm_weight = hp["hat_ilm_weight"]
425 best_cer = candidate_cer
426
427 if candidate_wer < best_wer:
428 best_wer_beam_size = hp["beam_width"]
429 best_wer_alpha = hp["beam_alpha"]
430 best_wer_ma = hp["maes_prefix_alpha"]
431 best_wer_ga = hp["maes_expansion_gamma"]
432 best_wer_hat_ilm_weight = hp["hat_ilm_weight"]
433 best_wer = candidate_wer
434
435 wer_hat_parameter = ""
436 if cfg.hat_subtract_ilm:
437 wer_hat_parameter = f"HAT ilm weight = {best_wer_hat_ilm_weight}, "
438 logging.info(
439 f'Best WER Candidate = {best_wer:.2%} :: Beam size = {best_wer_beam_size}, '
440 f'Beam alpha = {best_wer_alpha}, {wer_hat_parameter}'
441 f'maes_prefix_alpha = {best_wer_ma}, maes_expansion_gamma = {best_wer_ga} '
442 )
443
444 cer_hat_parameter = ""
445 if cfg.hat_subtract_ilm:
446 cer_hat_parameter = f"HAT ilm weight = {best_cer_hat_ilm_weight}"
447 logging.info(
448 f'Best CER Candidate = {best_cer:.2%} :: Beam size = {best_cer_beam_size}, '
449 f'Beam alpha = {best_cer_alpha}, {cer_hat_parameter} '
450 f'maes_prefix_alpha = {best_cer_ma}, maes_expansion_gamma = {best_cer_mg}'
451 )
452 logging.info(f"=================================================================================")
453
454
455 if __name__ == '__main__':
456 main()
457
[end of scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram_transducer.py]
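Before the next file, a brief illustration of the oracle-WER bookkeeping used in the evaluation loop above: for each utterance, every beam candidate is scored against the reference and only the minimum edit distance is kept, which gives the "perfect LM" upper bound that the script logs. A minimal, stdlib-only sketch of that computation (the helper names and sample strings below are illustrative, not part of NeMo, which uses the `editdistance` package instead):

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two token sequences (one-row DP)."""
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, dp[0] = dp[0], i
        for j, h in enumerate(hyp, 1):
            prev, dp[j] = dp[j], min(
                dp[j] + 1,          # deletion
                dp[j - 1] + 1,      # insertion
                prev + (r != h),    # substitution (0 if tokens match)
            )
    return dp[-1]


def oracle_wer(references, beam_candidates):
    """WER of the best beam candidate per utterance ('perfect LM' bound)."""
    errors = words = 0
    for ref, candidates in zip(references, beam_candidates):
        ref_words = ref.split()
        words += len(ref_words)
        # keep only the candidate closest to the reference
        errors += min(edit_distance(ref_words, c.split()) for c in candidates)
    return errors / words
```

The first-candidate WER reported by the script is the same computation restricted to `candidates[0]`.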
[start of scripts/confidence_ensembles/build_ensemble.py]
1 # Copyright (c) 2023, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """
16 This script provides a functionality to create confidence-based ensembles
17 from a collection of pretrained models.
18
19 For more details see the paper https://arxiv.org/abs/2306.15824
20 or tutorial in tutorials/asr/Confidence_Ensembles.ipynb
21
22 You would typically use this script by providing a yaml config file or overriding
23 default options from command line.
24
25 Usage examples:
26
27 1. Building an ensemble of two monolingual models with default settings (no confidence tuning).
28
29 python build_ensemble.py --config-path=. --config-name=ensemble_config.yaml
30 ensemble.0.model=stt_it_conformer_ctc_large
31 ensemble.0.training_manifest=<path to the Italian data of 100+ utterances (no transcription required)>
32 ensemble.1.model=stt_es_conformer_ctc_large
33 ensemble.1.training_manifest=<path to the Spanish data of 100+ utterances (no transcription required)>
34 output_path=<path to the desired location of the .nemo checkpoint>
35
36 You can have more than 2 models and can control transcription settings (e.g., batch size)
37 with ``transcription.<any argument of examples/asr/transcribe_speech.py>`` parameters.
38
39 2. If you want to get improved results, you can enable tuning of the confidence and logistic regression (LR) parameters.
40 E.g.
41
42 python build_ensemble.py
43 <all arguments like in the previous example>
44 ensemble.0.dev_manifest=<path to the dev data that's required for tuning>
45 ...
46 # IMPORTANT: see the note below if you use > 2 models!
47 ensemble.N.dev_manifest=<path to the dev data that's required for tuning>
48 tune_confidence=True # to allow confidence tuning. LR is tuned by default
49
50 As with any tuning, it is recommended to have reasonably large validation set for each model,
51 otherwise you might overfit to the validation data.
52
53 Note that if you add additional models (> 2) you will need to modify ensemble_config.yaml
54 or create a new one with added models in there. While it's theoretically possible to
55 fully override such parameters from the command line, hydra is very unfriendly for such
56 use-cases, so it's strongly recommended to create new configs instead.
57
58 3. If you want to precisely control tuning grid search, you can do that with
59
60 python build_ensemble.py
61 <all arguments as in the previous examples>
62 tune_confidence_config.confidence_type='[entropy_renyi_exp,entropy_tsallis_exp]' # only tune over this set
63 tune_confidence_config.alpha='[0.1,0.5,1.0]' # only tune over this set
64
65 You can check the dataclasses in this file for the full list of supported
66 arguments and their default values.
67 """
68
69 import atexit
70
71 # using default logging to be able to silence unnecessary messages from nemo
72 import logging
73 import os
74 import random
75 import sys
76 import tempfile
77 from copy import deepcopy
78 from dataclasses import dataclass
79 from pathlib import Path
80 from typing import Dict, List, Optional, Tuple
81
82 import joblib
83 import numpy as np
84 import pytorch_lightning as pl
85 from omegaconf import MISSING, DictConfig, OmegaConf
86 from sklearn.linear_model import LogisticRegression
87 from sklearn.metrics import confusion_matrix
88 from sklearn.pipeline import Pipeline, make_pipeline
89 from sklearn.preprocessing import StandardScaler
90 from tqdm import tqdm
91
92 from nemo.collections.asr.models.confidence_ensemble import (
93 ConfidenceEnsembleModel,
94 ConfidenceSpec,
95 compute_confidence,
96 get_filtered_logprobs,
97 )
98 from nemo.collections.asr.parts.utils.asr_confidence_utils import (
99 ConfidenceConfig,
100 ConfidenceMeasureConfig,
101 get_confidence_aggregation_bank,
102 get_confidence_measure_bank,
103 )
104 from nemo.collections.asr.parts.utils.rnnt_utils import Hypothesis
105 from nemo.core.config import hydra_runner
106
107 LOG = logging.getLogger(__file__)
108
109 # add examples/asr to the Python path; if the import fails, ask the user to fetch the file
110 try:
111 sys.path.append(str(Path(__file__).parents[2] / "examples" / "asr"))
112 import transcribe_speech
113 except ImportError:
114 # if users run script normally from nemo repo, this shouldn't be triggered as
115 # we modify the path above. But if they downloaded the build_ensemble.py as
116 # an isolated script, we'd ask them to also download corresponding version
117 # of the transcribe_speech.py
118 print(
119 "Current script depends on 'examples/asr/transcribe_speech.py', but can't find it. "
120         "If it's not present, download it from the NeMo GitHub manually and put it inside this folder."
121 )
122
123
124 @dataclass
125 class EnsembleConfig:
126 # .nemo path or pretrained name
127 model: str = MISSING
128 # path to the training data manifest (non-tarred)
129 training_manifest: str = MISSING
130 # specify to limit the number of training samples
131 # 100 is most likely enough, but setting higher default just in case
132 max_training_samples: int = 1000
133 # specify to provide dev data manifest for HP tuning
134 dev_manifest: Optional[str] = None
135
136
137 @dataclass
138 class TuneConfidenceConfig:
139 # important parameter, so should always be tuned
140 exclude_blank: Tuple[bool] = (True, False)
141 # prod is pretty much always worse, so not including by default
142 aggregation: Tuple[str] = ("mean", "min", "max")
143 # not including max prob, as there is always an entropy-based metric
144 # that's better but otherwise including everything
145 confidence_type: Tuple[str] = (
146 "entropy_renyi_exp",
147 "entropy_renyi_lin",
148 "entropy_tsallis_exp",
149 "entropy_tsallis_lin",
150 "entropy_gibbs_lin",
151 "entropy_gibbs_exp",
152 )
153
154 # TODO: currently it's not possible to efficiently tune temperature, as we always
155 # apply log-softmax in the decoder, so to try different values it will be required
156 # to rerun the decoding, which is very slow. To support this for one-off experiments
157 # it's possible to modify the code of CTC decoder / Transducer joint to
158 # remove log-softmax and then apply it directly in this script with the temperature
159 #
160 # Alternatively, one can run this script multiple times with different values of
161 # temperature and pick the best performing ensemble. Note that this will increase
162 # tuning time by the number of temperature values tried. On the other hand,
163 # the above approach is a lot more efficient and will only slightly increase
164 # the total tuning runtime.
165
166 # very important to tune for max prob, but for entropy metrics 1.0 is almost always best
167 # temperature: Tuple[float] = (1.0,)
168
169 # not that important, but can sometimes make a small difference
170 alpha: Tuple[float] = (0.25, 0.33, 0.5, 1.0)
171
172 def get_grid_size(self) -> int:
173 """Returns the total number of points in the search space."""
174 if "max_prob" in self.confidence_type:
175 return (
176 len(self.exclude_blank)
177 * len(self.aggregation)
178 * ((len(self.confidence_type) - 1) * len(self.alpha) + 1)
179 )
180 return len(self.exclude_blank) * len(self.aggregation) * len(self.confidence_type) * len(self.alpha)
181
182
183 @dataclass
184 class TuneLogisticRegressionConfig:
185 # will have log-uniform grid over this range with that many points
186     # note that a value of 10000.0 (no regularization) is always added
187 C_num_points: int = 10
188 C_min: float = 0.0001
189 C_max: float = 10.0
190
191 # not too important
192 multi_class: Tuple[str] = ("ovr", "multinomial")
193
194 # should try to include weights directly if the data is too imbalanced
195 class_weight: Tuple = (None, "balanced")
196
197 # increase if getting many warnings that algorithm didn't converge
198 max_iter: int = 1000
199
200
201 @dataclass
202 class BuildEnsembleConfig:
203 # where to save the resulting ensemble model
204 output_path: str = MISSING
205
206 # each model specification
207 ensemble: List[EnsembleConfig] = MISSING
208
209 random_seed: int = 0 # for reproducibility
210
211 # default confidence, can override
212 confidence: ConfidenceConfig = ConfidenceConfig(
213 # we keep frame confidences and apply aggregation manually to get full-utterance confidence
214 preserve_frame_confidence=True,
215 exclude_blank=True,
216 aggregation="mean",
217 measure_cfg=ConfidenceMeasureConfig(name="entropy", entropy_type="renyi", alpha=0.25, entropy_norm="lin",),
218 )
219 temperature: float = 1.0
220
221 # this is optional, but can be used to change any aspect of the transcription
222 # config, such as batch size or amp usage. Note that model, data and confidence
223     # will be overridden by this script
224 transcription: transcribe_speech.TranscriptionConfig = transcribe_speech.TranscriptionConfig()
225
226 # set to True to tune the confidence.
227 # requires dev manifests to be specified for each model
228 tune_confidence: bool = False
229 # used to specify what to tune over. By default runs tuning over some
230     # reasonable grid, so that it does not take forever.
231 # Can be changed as needed
232 tune_confidence_config: TuneConfidenceConfig = TuneConfidenceConfig()
233
234 # very fast to tune and can be important in case of imbalanced datasets
235 # will automatically set to False if dev data is not available
236 tune_logistic_regression: bool = True
237 tune_logistic_regression_config: TuneLogisticRegressionConfig = TuneLogisticRegressionConfig()
238
239 def __post_init__(self):
240 """Checking that if any dev data is provided, all are provided.
241
242 Will also auto-set tune_logistic_regression to False if no dev data
243 is available.
244
245 If tune_confidence is set to True (user choice) and no dev data is
246 provided, will raise an error.
247 """
248 num_dev_data = 0
249 for ensemble_cfg in self.ensemble:
250 num_dev_data += ensemble_cfg.dev_manifest is not None
251 if num_dev_data == 0:
252 if self.tune_confidence:
253 raise ValueError("tune_confidence is set to True, but no dev data is provided")
254 LOG.info("Setting tune_logistic_regression = False since no dev data is provided")
255 self.tune_logistic_regression = False
256 return
257
258 if num_dev_data < len(self.ensemble):
259 raise ValueError(
260 "Some ensemble configs specify dev data, but some don't. Either all have to specify it or none!"
261 )
262
263
264 def calculate_score(features: np.ndarray, labels: np.ndarray, pipe: Pipeline) -> Tuple[float, np.ndarray]:
265 """Score is always calculated as mean of the per-class scores.
266
267 This is done to account for possible class imbalances.
268
269 Args:
270 features: numpy array of features of shape [N x D], where N is the
271 number of objects (typically a total number of utterances in
272 all datasets) and D is the total number of confidence scores
273 used to train the model (typically = number of models).
274         labels: numpy array of shape [N] containing ground-truth model indices.
275 pipe: classification pipeline (currently, standardization + logistic
276 regression).
277
278 Returns:
279 tuple: score value in [0, 1] and full classification confusion matrix.
280 """
281 predictions = pipe.predict(features)
282 conf_m = confusion_matrix(labels, predictions)
283 score = np.diag(conf_m).sum() / conf_m.sum()
284 return score, conf_m
285
286
287 def train_model_selection(
288 training_features: np.ndarray,
289 training_labels: np.ndarray,
290 dev_features: Optional[np.ndarray] = None,
291 dev_labels: Optional[np.ndarray] = None,
292 tune_lr: bool = False,
293 tune_lr_cfg: Optional[TuneLogisticRegressionConfig] = None,
294 verbose: bool = False,
295 ) -> Tuple[Pipeline, float]:
296 """Trains model selection block with an (optional) tuning of the parameters.
297
298 Returns a pipeline consisting of feature standardization and logistic
299 regression. If tune_lr is set to True, dev features/labels will be used
300 to tune the hyperparameters of the logistic regression with the grid
301 search that's defined via ``tune_lr_cfg``.
302
303 If no tuning is requested, uses the following parameters::
304
305 best_pipe = make_pipeline(
306 StandardScaler(),
307 LogisticRegression(
308 multi_class="multinomial",
309 C=10000.0,
310 max_iter=1000,
311 class_weight="balanced",
312 ),
313 )
314
315 Args:
316 training_features: numpy array of features of shape [N x D], where N is
317 the number of objects (typically a total number of utterances in
318 all training datasets) and D is the total number of confidence
319 scores used to train the model (typically = number of models).
320         training_labels: numpy array of shape [N] containing ground-truth
321 model indices.
322 dev_features: same as training, but for the validation subset.
323 dev_labels: same as training, but for the validation subset.
324 tune_lr: controls whether tuning of LR hyperparameters is performed.
325 If set to True, it's required to also provide dev features/labels.
326 tune_lr_cfg: specifies what values of LR hyperparameters to try.
327 verbose: if True, will output final training/dev scores.
328
329 Returns:
330 tuple: trained model selection pipeline, best score (or -1 if no tuning
331 was done).
332 """
333 if not tune_lr:
334 # default parameters: C=10000.0 disables regularization
335 best_pipe = make_pipeline(
336 StandardScaler(),
337 LogisticRegression(multi_class="multinomial", C=10000.0, max_iter=1000, class_weight="balanced"),
338 )
339 max_score = -1
340 else:
341 C_pms = np.append(
342 np.exp(np.linspace(np.log(tune_lr_cfg.C_min), np.log(tune_lr_cfg.C_max), tune_lr_cfg.C_num_points)),
343 10000.0,
344 )
345 max_score = 0
346 best_pipe = None
347 for class_weight in tune_lr_cfg.class_weight:
348 for multi_class in tune_lr_cfg.multi_class:
349 for C in C_pms:
350 pipe = make_pipeline(
351 StandardScaler(),
352 LogisticRegression(
353 multi_class=multi_class, C=C, max_iter=tune_lr_cfg.max_iter, class_weight=class_weight
354 ),
355 )
356 pipe.fit(training_features, training_labels)
357 score, confusion = calculate_score(dev_features, dev_labels, pipe)
358 if score > max_score:
359 max_score = score
360 best_pipe = pipe
361
362 best_pipe.fit(training_features, training_labels)
363 if verbose:
364 accuracy, confusion = calculate_score(training_features, training_labels, best_pipe)
365 LOG.info("Training fit accuracy: %.4f", accuracy * 100.0)
366 LOG.info("Training confusion matrix:\n%s", str(confusion))
367 if dev_features is not None and verbose:
368 accuracy, confusion = calculate_score(dev_features, dev_labels, best_pipe)
369 LOG.info("Dev fit accuracy: %.4f", accuracy * 100.0)
370 LOG.info("Dev confusion matrix:\n%s", str(confusion))
371
372 return best_pipe, max_score
373
374
375 def subsample_manifest(manifest_file: str, max_samples: int) -> str:
376 """Will save a subsampled version of the manifest to the same folder.
377
378 Have to save to the same folder to support relative paths.
379
380 Args:
381 manifest_file: path to the manifest file that needs subsampling.
382 max_samples: how many samples to retain. Will randomly select that
383 many lines from the manifest.
384
385 Returns:
386 str: the path to the subsampled manifest file.
387 """
388 with open(manifest_file, "rt", encoding="utf-8") as fin:
389 lines = fin.readlines()
390 if max_samples < len(lines):
391 lines = random.sample(lines, max_samples)
392 output_file = manifest_file + "-subsampled"
393 with open(output_file, "wt", encoding="utf-8") as fout:
394 fout.write("".join(lines))
395 return output_file
396
397
398 def cleanup_subsampled_manifests(subsampled_manifests: List[str]):
399 """Removes all generated subsamples manifests."""
400 for manifest in subsampled_manifests:
401 os.remove(manifest)
402
403
404 def compute_all_confidences(
405 hypothesis: Hypothesis, tune_confidence_cfg: TuneConfidenceConfig
406 ) -> Dict[ConfidenceSpec, float]:
407 """Computes a set of confidence scores from a given hypothesis.
408
409 Works with the output of both CTC and Transducer decoding.
410
411 Args:
412 hypothesis: generated hypothesis as returned from the transcribe
413 method of the ASR model.
414 tune_confidence_cfg: config specifying what confidence scores to
415 compute.
416
417 Returns:
418         dict: dictionary with confidence spec -> confidence score mapping.
419 """
420 conf_values = {}
421
422 for exclude_blank in tune_confidence_cfg.exclude_blank:
423 filtered_logprobs = get_filtered_logprobs(hypothesis, exclude_blank)
424 vocab_size = filtered_logprobs.shape[1]
425 for aggregation in tune_confidence_cfg.aggregation:
426 aggr_func = get_confidence_aggregation_bank()[aggregation]
427 for conf_type in tune_confidence_cfg.confidence_type:
428 conf_func = get_confidence_measure_bank()[conf_type]
429 if conf_type == "max_prob": # skipping alpha in this case
430 conf_value = aggr_func(conf_func(filtered_logprobs, v=vocab_size, t=1.0)).cpu().item()
431 conf_values[ConfidenceSpec(exclude_blank, aggregation, conf_type, 1.0)] = conf_value
432 else:
433 for alpha in tune_confidence_cfg.alpha:
434 conf_value = aggr_func(conf_func(filtered_logprobs, v=vocab_size, t=alpha)).cpu().item()
435 conf_values[ConfidenceSpec(exclude_blank, aggregation, conf_type, alpha)] = conf_value
436
437 return conf_values
438
439
440 def find_best_confidence(
441 train_confidences: List[List[Dict[ConfidenceSpec, float]]],
442 train_labels: List[int],
443 dev_confidences: List[List[Dict[ConfidenceSpec, float]]],
444 dev_labels: List[int],
445 tune_lr: bool,
446     tune_lr_config: TuneLogisticRegressionConfig,
447 ) -> Tuple[ConfidenceConfig, Pipeline]:
448 """Finds the best confidence configuration for model selection.
449
450 Will loop over all values in the confidence dictionary and fit the LR
451 model (optionally tuning its HPs). The best performing confidence (on the
452 dev set) will be used for the final LR model.
453
454 Args:
455 train_confidences: this is an object of type
456 ``List[List[Dict[ConfidenceSpec, float]]]``. The shape of this
457 object is [M, N, S], where
458 M: number of models
459 N: number of utterances in all training sets
460 S: number of confidence scores to try
461
462 This argument will be used to construct np.array objects for each
463 of the confidence scores with the shape [M, N]
464
465         train_labels: ground-truth labels of the correct model for each data
466             point. This is a list of size [N].
467 dev_confidences: same as training, but for the validation subset.
468 dev_labels: same as training, but for the validation subset.
469 tune_lr: controls whether tuning of LR hyperparameters is performed.
470         tune_lr_config: specifies what values of LR hyperparameters to try.
471
472 Returns:
473 tuple: best confidence config, best model selection pipeline
474 """
475 max_score = 0
476 best_pipe = None
477 best_conf_spec = None
478     LOG.info("Evaluating all confidences. Total grid size: %d", len(train_confidences[0][0].keys()))
479 for conf_spec in tqdm(train_confidences[0][0].keys()):
480 cur_train_confidences = []
481 for model_confs in train_confidences:
482 cur_train_confidences.append([])
483 for model_conf in model_confs:
484 cur_train_confidences[-1].append(model_conf[conf_spec])
485 cur_dev_confidences = []
486 for model_confs in dev_confidences:
487 cur_dev_confidences.append([])
488 for model_conf in model_confs:
489 cur_dev_confidences[-1].append(model_conf[conf_spec])
490 # transposing with zip(*list)
491 training_features = np.array(list(zip(*cur_train_confidences)))
492 training_labels = np.array(train_labels)
493 dev_features = np.array(list(zip(*cur_dev_confidences)))
494 dev_labels = np.array(dev_labels)
495 pipe, score = train_model_selection(
496 training_features, training_labels, dev_features, dev_labels, tune_lr, tune_lr_config,
497 )
498 if max_score < score:
499 max_score = score
500 best_pipe = pipe
501 best_conf_spec = conf_spec
502 LOG.info("Found better parameters: %s. New score: %.4f", str(conf_spec), max_score)
503
504 return best_conf_spec.to_confidence_config(), best_pipe
505
506
507 @hydra_runner(config_name="BuildEnsembleConfig", schema=BuildEnsembleConfig)
508 def main(cfg: BuildEnsembleConfig):
509 # silencing all messages from nemo/ptl to avoid dumping tons of configs to the stdout
510 logging.getLogger('pytorch_lightning').setLevel(logging.CRITICAL)
511 logging.getLogger('nemo_logger').setLevel(logging.CRITICAL)
512 LOG.info(f'Build ensemble config:\n{OmegaConf.to_yaml(cfg)}')
513
514 # to ensure post init is called
515 cfg = BuildEnsembleConfig(**cfg)
516
517 pl.seed_everything(cfg.random_seed)
518 cfg.transcription.random_seed = None # seed is already applied
519 cfg.transcription.return_transcriptions = True
520 cfg.transcription.preserve_alignment = True
521 cfg.transcription.ctc_decoding.temperature = cfg.temperature
522 cfg.transcription.rnnt_decoding.temperature = cfg.temperature
523 # this ensures that generated output is after log-softmax for consistency with CTC
524
525 train_confidences = []
526 dev_confidences = []
527 train_labels = []
528 dev_labels = []
529
530 # registering clean-up function that will hold on to this list and
531 # should clean up even if there is partial error in some of the transcribe
532 # calls
533 subsampled_manifests = []
534 atexit.register(cleanup_subsampled_manifests, subsampled_manifests)
535
536 # note that we loop over the same config.
537 # This is intentional, as we need to run all models on all datasets
538 # this loop will do the following things:
539 # 1. Goes through each model X each training dataset
540 # 2. Computes predictions by directly calling transcribe_speech.main
541 # 3. Converts transcription to the confidence score(s) as specified in the config
542 # 4. If dev sets are provided, computes the same for them
543 # 5. Creates a list of ground-truth model indices by mapping each model
544 # to its own training dataset as specified in the config.
545 # 6. After the loop, we either run tuning over all confidence scores or
546 # directly use a single score to fit logistic regression and save the
547 # final ensemble model.
548 for model_idx, model_cfg in enumerate(cfg.ensemble):
549 train_model_confidences = []
550 dev_model_confidences = []
551 for data_idx, data_cfg in enumerate(cfg.ensemble):
552 if model_idx == 0: # generating subsampled manifests only one time
553 subsampled_manifests.append(
554 subsample_manifest(data_cfg.training_manifest, data_cfg.max_training_samples)
555 )
556 subsampled_manifest = subsampled_manifests[data_idx]
557
558 if model_cfg.model.endswith(".nemo"):
559 cfg.transcription.model_path = model_cfg.model
560 else: # assuming pretrained model
561 cfg.transcription.pretrained_name = model_cfg.model
562
563 cfg.transcription.dataset_manifest = subsampled_manifest
564
565 # training
566 with tempfile.NamedTemporaryFile() as output_file:
567 cfg.transcription.output_filename = output_file.name
568 LOG.info("Transcribing training dataset %d with model %d", data_idx, model_idx)
569 transcriptions = transcribe_speech.main(deepcopy(cfg.transcription))
570 LOG.info("Generating confidence scores")
571 # TODO: parallelize this loop?
572 for transcription in tqdm(transcriptions):
573 if cfg.tune_confidence:
574 train_model_confidences.append(
575 compute_all_confidences(transcription, cfg.tune_confidence_config)
576 )
577 else:
578 train_model_confidences.append(compute_confidence(transcription, cfg.confidence))
579 if model_idx == 0: # labels are the same for all models
580 train_labels.append(data_idx)
581
582 # optional dev
583 if data_cfg.dev_manifest is not None:
584 cfg.transcription.dataset_manifest = data_cfg.dev_manifest
585 with tempfile.NamedTemporaryFile() as output_file:
586 cfg.transcription.output_filename = output_file.name
587 LOG.info("Transcribing dev dataset %d with model %d", data_idx, model_idx)
588 transcriptions = transcribe_speech.main(deepcopy(cfg.transcription))
589 LOG.info("Generating confidence scores")
590 for transcription in tqdm(transcriptions):
591 if cfg.tune_confidence:
592 dev_model_confidences.append(
593 compute_all_confidences(transcription, cfg.tune_confidence_config)
594 )
595 else:
596 dev_model_confidences.append(compute_confidence(transcription, cfg.confidence))
597 if model_idx == 0: # labels are the same for all models
598 dev_labels.append(data_idx)
599
600 train_confidences.append(train_model_confidences)
601 if dev_model_confidences:
602 dev_confidences.append(dev_model_confidences)
603
604 if cfg.tune_confidence:
605 best_confidence, model_selection_block = find_best_confidence(
606 train_confidences,
607 train_labels,
608 dev_confidences,
609 dev_labels,
610 cfg.tune_logistic_regression,
611 cfg.tune_logistic_regression_config,
612 )
613 else:
614 best_confidence = cfg.confidence
615 # transposing with zip(*list)
616 training_features = np.array(list(zip(*train_confidences)))
617 training_labels = np.array(train_labels)
618 if dev_confidences:
619 dev_features = np.array(list(zip(*dev_confidences)))
620 dev_labels = np.array(dev_labels)
621 else:
622 dev_features = None
623 dev_labels = None
624 model_selection_block, _ = train_model_selection(
625 training_features,
626 training_labels,
627 dev_features,
628 dev_labels,
629 cfg.tune_logistic_regression,
630 cfg.tune_logistic_regression_config,
631 verbose=True,
632 )
633
634 with tempfile.TemporaryDirectory() as tmpdir:
635 model_selection_block_path = os.path.join(tmpdir, 'model_selection_block.pkl')
636 joblib.dump(model_selection_block, model_selection_block_path)
637
638 # creating ensemble checkpoint
639 ensemble_model = ConfidenceEnsembleModel(
640 cfg=DictConfig(
641 {
642 'model_selection_block': model_selection_block_path,
643 'confidence': best_confidence,
644 'temperature': cfg.temperature,
645 'load_models': [model_cfg.model for model_cfg in cfg.ensemble],
646 }
647 ),
648 trainer=None,
649 )
650 ensemble_model.save_to(cfg.output_path)
651
652
653 if __name__ == '__main__':
654 main()
655
[end of scripts/confidence_ensembles/build_ensemble.py]
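The feature-matrix construction in the final block of `build_ensemble.py` (`zip(*train_confidences)` followed by `np.array`) can be sketched in isolation. The data below is hypothetical, just to show the shape transformation:

```python
import numpy as np

# Hypothetical toy data: one inner list per model, with one confidence score
# per training utterance (this mirrors what build_ensemble.py accumulates).
train_confidences = [
    [0.9, 0.2, 0.8],  # model 0's confidence on utterances 0..2
    [0.3, 0.7, 0.4],  # model 1's confidence on the same utterances
]
train_labels = [0, 1, 0]  # index of the model whose training set each utterance came from

# "transposing with zip(*list)": rows become utterances, columns become models,
# giving the (num_utterances, num_models) feature matrix that is fed to the
# logistic-regression model-selection block.
training_features = np.array(list(zip(*train_confidences)))
training_labels = np.array(train_labels)

print(training_features.shape)  # (3, 2)
```

Each row of `training_features` is then one training example for the model-selection classifier, whose target is the matching entry of `training_labels`.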
[start of scripts/speech_recognition/confidence/benchmark_asr_confidence.py]
1 # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import json
16 import os
17 from dataclasses import dataclass, is_dataclass
18 from pathlib import Path
19 from typing import Optional
20
21 import pytorch_lightning as pl
22 import torch
23 from omegaconf import MISSING, OmegaConf
24 from sklearn.model_selection import ParameterGrid
25
26 from nemo.collections.asr.metrics.rnnt_wer import RNNTDecodingConfig
27 from nemo.collections.asr.metrics.wer import CTCDecodingConfig
28 from nemo.collections.asr.models import ASRModel, EncDecRNNTModel
29 from nemo.collections.asr.parts.utils.asr_confidence_benchmarking_utils import (
30 apply_confidence_parameters,
31 run_confidence_benchmark,
32 )
33 from nemo.collections.asr.parts.utils.asr_confidence_utils import ConfidenceConfig
34 from nemo.core.config import hydra_runner
35 from nemo.utils import logging, model_utils
36
37 """
38 Get confidence metrics and curve plots for a given model, dataset, and confidence parameters.
39
40 # Arguments
41 model_path: Path to .nemo ASR checkpoint
42 pretrained_name: Name of pretrained ASR model (from NGC registry)
43 dataset_manifest: Path to dataset JSON manifest file (in NeMo format)
44 output_dir: Output directory to store a report and curve plot directories
45
46 batch_size: batch size during inference
47 num_workers: number of workers during inference
48
49 cuda: Optional int to enable or disable execution of model on certain CUDA device
50 amp: Bool to decide if Automatic Mixed Precision should be used during inference
51 audio_type: Str filetype of the audio. Supported = wav, flac, mp3
52
53 target_level: Word- or token-level confidence. Supported = word, token, auto (for computing both word and token)
54 confidence_cfg: Config with confidence parameters
55 grid_params: Dictionary with lists of parameters to iteratively benchmark on
56
57 # Usage
58 ASR model can be specified by either "model_path" or "pretrained_name".
59 Data for transcription are defined with "dataset_manifest".
60 Results are returned as a benchmark report and curve plots.
61
62 python benchmark_asr_confidence.py \
63 model_path=null \
64 pretrained_name=null \
65 dataset_manifest="" \
66 output_dir="" \
67 batch_size=64 \
68 num_workers=8 \
69 cuda=0 \
70 amp=True \
71 target_level="word" \
72 confidence_cfg.exclude_blank=False \
73 'grid_params="{\"aggregation\": [\"min\", \"prod\"], \"alpha\": [0.33, 0.5]}"'
74 """
75
76
77 def get_experiment_params(cfg):
78 """Get experiment parameters from a confidence config and generate the experiment name.
79
80 Returns:
81 List of experiment parameters.
82 String with the experiment name.
83 """
84 blank = "no_blank" if cfg.exclude_blank else "blank"
85 aggregation = cfg.aggregation
86 method_name = cfg.measure_cfg.name
87 alpha = cfg.measure_cfg.alpha
88 if method_name == "entropy":
89 entropy_type = cfg.measure_cfg.entropy_type
90 entropy_norm = cfg.measure_cfg.entropy_norm
91 experiment_param_list = [
92 aggregation,
93 str(cfg.exclude_blank),
94 method_name,
95 entropy_type,
96 entropy_norm,
97 str(alpha),
98 ]
99 experiment_str = "-".join([aggregation, blank, method_name, entropy_type, entropy_norm, str(alpha)])
100 else:
101 experiment_param_list = [aggregation, str(cfg.exclude_blank), method_name, "-", "-", str(alpha)]
102 experiment_str = "-".join([aggregation, blank, method_name, str(alpha)])
103 return experiment_param_list, experiment_str
104
105
106 @dataclass
107 class ConfidenceBenchmarkingConfig:
108 # Required configs
109 model_path: Optional[str] = None # Path to a .nemo file
110 pretrained_name: Optional[str] = None # Name of a pretrained model
111 dataset_manifest: str = MISSING
112 output_dir: str = MISSING
113
114 # General configs
115 batch_size: int = 32
116 num_workers: int = 4
117
118 # Set `cuda` to int to define CUDA device. If 'None', will look for CUDA
119 # device anyway, and do inference on CPU only if CUDA device is not found.
120 # If `cuda` is a negative number, inference will be on CPU only.
121 cuda: Optional[int] = None
122 amp: bool = False
123 audio_type: str = "wav"
124
125 # Confidence configs
126 target_level: str = "auto" # Choices: "word", "token", "auto" (for both word- and token-level confidence)
127 confidence_cfg: ConfidenceConfig = ConfidenceConfig(preserve_word_confidence=True, preserve_token_confidence=True)
128 grid_params: Optional[str] = None # a dictionary with lists of parameters to iteratively benchmark on
129
130
131 @hydra_runner(config_name="ConfidenceBenchmarkingConfig", schema=ConfidenceBenchmarkingConfig)
132 def main(cfg: ConfidenceBenchmarkingConfig):
133 torch.set_grad_enabled(False)
134
135 logging.info(f'Hydra config: {OmegaConf.to_yaml(cfg)}')
136
137 if is_dataclass(cfg):
138 cfg = OmegaConf.structured(cfg)
139
140 if cfg.model_path is None and cfg.pretrained_name is None:
141 raise ValueError("Both cfg.model_path and cfg.pretrained_name cannot be None!")
142
143 # setup GPU
144 if cfg.cuda is None:
145 if torch.cuda.is_available():
146 device = [0] # use 0th CUDA device
147 accelerator = 'gpu'
148 else:
149 device = 1
150 accelerator = 'cpu'
151 else:
152 device = [cfg.cuda]
153 accelerator = 'gpu'
154
155 map_location = torch.device('cuda:{}'.format(device[0]) if accelerator == 'gpu' else 'cpu')
156
157 # setup model
158 if cfg.model_path is not None:
159 # restore model from .nemo file path
160 model_cfg = ASRModel.restore_from(restore_path=cfg.model_path, return_config=True)
161 classpath = model_cfg.target # original class path
162 imported_class = model_utils.import_class_by_path(classpath) # type: ASRModel
163 logging.info(f"Restoring model : {imported_class.__name__}")
164 asr_model = imported_class.restore_from(
165 restore_path=cfg.model_path, map_location=map_location
166 ) # type: ASRModel
167 else:
168 # restore model by name
169 asr_model = ASRModel.from_pretrained(
170 model_name=cfg.pretrained_name, map_location=map_location
171 ) # type: ASRModel
172
173 trainer = pl.Trainer(devices=device, accelerator=accelerator)
174 asr_model.set_trainer(trainer)
175 asr_model = asr_model.eval()
176
177 # Check if ctc or rnnt model
178 is_rnnt = isinstance(asr_model, EncDecRNNTModel)
179
180 # Check that the model has the `change_decoding_strategy` method
181 if not hasattr(asr_model, 'change_decoding_strategy'):
182 raise RuntimeError("The asr_model you are using must have the `change_decoding_strategy` method.")
183
184 # get filenames and reference texts from manifest
185 filepaths = []
186 reference_texts = []
187 if os.stat(cfg.dataset_manifest).st_size == 0:
188 logging.error(f"The input dataset_manifest {cfg.dataset_manifest} is empty. Exiting!")
189 return None
190 manifest_dir = Path(cfg.dataset_manifest).parent
191 with open(cfg.dataset_manifest, 'r') as f:
192 for line in f:
193 item = json.loads(line)
194 audio_file = Path(item['audio_filepath'])
195 if not audio_file.is_file() and not audio_file.is_absolute():
196 audio_file = manifest_dir / audio_file
197 filepaths.append(str(audio_file.absolute()))
198 reference_texts.append(item['text'])
199
200 # setup AMP (optional)
201 autocast = None
202 if cfg.amp and torch.cuda.is_available() and hasattr(torch.cuda, 'amp') and hasattr(torch.cuda.amp, 'autocast'):
203 logging.info("AMP enabled!\n")
204 autocast = torch.cuda.amp.autocast
205
206 # do grid-based benchmarking if grid_params is provided, otherwise a regular one
207 work_dir = Path(cfg.output_dir)
208 os.makedirs(work_dir, exist_ok=True)
209 report_legend = (
210 ",".join(
211 [
212 "model_type",
213 "aggregation",
214 "blank",
215 "method_name",
216 "entropy_type",
217 "entropy_norm",
218 "alpha",
219 "target_level",
220 "auc_roc",
221 "auc_pr",
222 "auc_nt",
223 "nce",
224 "ece",
225 "auc_yc",
226 "std_yc",
227 "max_yc",
228 ]
229 )
230 + "\n"
231 )
232 model_typename = "RNNT" if is_rnnt else "CTC"
233 report_file = work_dir / Path("report.csv")
234 if cfg.grid_params:
235 asr_model.change_decoding_strategy(
236 RNNTDecodingConfig(fused_batch_size=-1, strategy="greedy_batch", confidence_cfg=cfg.confidence_cfg)
237 if is_rnnt
238 else CTCDecodingConfig(confidence_cfg=cfg.confidence_cfg)
239 )
240 params = json.loads(cfg.grid_params)
241 hp_grid = ParameterGrid(params)
242 hp_grid = list(hp_grid)
243
244 logging.info(f"==============================Running a benchmarking with grid search=========================")
245 logging.info(f"Grid search size: {len(hp_grid)}")
246 logging.info(f"Results will be written to:\nreport file `{report_file}`\nand plot directories near the file")
247 logging.info(f"==============================================================================================")
248
249 with open(report_file, "tw", encoding="utf-8") as f:
250 f.write(report_legend)
251 f.flush()
252 for i, hp in enumerate(hp_grid):
253 logging.info(f"Run # {i + 1}, grid: `{hp}`")
254 asr_model.change_decoding_strategy(apply_confidence_parameters(asr_model.cfg.decoding, hp))
255 param_list, experiment_name = get_experiment_params(asr_model.cfg.decoding.confidence_cfg)
256 plot_dir = work_dir / Path(experiment_name)
257 results = run_confidence_benchmark(
258 asr_model,
259 cfg.target_level,
260 filepaths,
261 reference_texts,
262 cfg.batch_size,
263 cfg.num_workers,
264 plot_dir,
265 autocast,
266 )
267 for level, result in results.items():
268 f.write(f"{model_typename},{','.join(param_list)},{level},{','.join([str(r) for r in result])}\n")
269 f.flush()
270 else:
271 asr_model.change_decoding_strategy(
272 RNNTDecodingConfig(fused_batch_size=-1, strategy="greedy_batch", confidence_cfg=cfg.confidence_cfg)
273 if is_rnnt
274 else CTCDecodingConfig(confidence_cfg=cfg.confidence_cfg)
275 )
276 param_list, experiment_name = get_experiment_params(asr_model.cfg.decoding.confidence_cfg)
277 plot_dir = work_dir / Path(experiment_name)
278
279 logging.info(f"==============================Running a single benchmarking===================================")
280 logging.info(f"Results will be written to:\nreport file `{report_file}`\nand plot directory `{plot_dir}`")
281
282 with open(report_file, "tw", encoding="utf-8") as f:
283 f.write(report_legend)
284 f.flush()
285             results = run_confidence_benchmark(
286                 asr_model,
287                 cfg.target_level,
288                 filepaths,
289                 reference_texts,
290                 cfg.batch_size,
291                 cfg.num_workers,
292                 plot_dir,
293                 autocast,
294             )
295 for level, result in results.items():
296 f.write(f"{model_typename},{','.join(param_list)},{level},{','.join([str(r) for r in result])}\n")
297 logging.info(f"===========================================Done===============================================")
298
299
300 if __name__ == '__main__':
301 main()
302
[end of scripts/speech_recognition/confidence/benchmark_asr_confidence.py]
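When `grid_params` is set, `benchmark_asr_confidence.py` expands the JSON dictionary of lists into the cross product of settings via `sklearn.model_selection.ParameterGrid`. A dependency-free sketch of the equivalent expansion (the grid values are hypothetical):

```python
import itertools
import json

# Hypothetical grid in the same JSON shape the script accepts via `grid_params`.
grid_params = '{"aggregation": ["min", "prod"], "alpha": [0.33, 0.5]}'
params = json.loads(grid_params)

# Equivalent of list(ParameterGrid(params)): one dict per combination of settings.
keys = sorted(params)
hp_grid = [
    dict(zip(keys, values))
    for values in itertools.product(*(params[k] for k in keys))
]

print(len(hp_grid))  # 4 combinations: 2 aggregations x 2 alphas
```

Each resulting dict plays the role of one `hp` in the benchmarking loop, applied via `apply_confidence_parameters` before running `run_confidence_benchmark`.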
[start of scripts/speech_recognition/convert_to_tarred_audio_dataset.py]
1 # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """
15 # This script converts an existing audio dataset with a manifest to
16 # a tarred and sharded audio dataset that can be read by the
17 # TarredAudioToTextDataLayer.
18
19 # Please make sure your audio_filepath DOES NOT CONTAIN '-sub'!
20 # Because we will use it to handle files which have duplicate filenames but with different offsets
21 # (see function create_shard for details)
22
23
24 # Bucketing can help to improve the training speed. You may use --buckets_num to specify the number of buckets.
25 # It creates multiple tarred datasets, one per bucket, based on the audio durations.
26 # The range of [min_duration, max_duration) is split into equal sized buckets.
27 # It is recommended to use --sort_in_shards to speed up training by reducing padding in the batches
28 # More info on how to use the bucketing feature: https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/datasets.html
29
30 # If a valid NVIDIA DALI version is installed, this script will also generate the corresponding DALI index files that need to be
31 # supplied to the config in order to utilize webdataset for efficient large dataset handling.
32 # NOTE: DALI + Webdataset is NOT compatible with Bucketing support !
33
34 # Usage:
35 1) Creating a new tarfile dataset
36
37 python convert_to_tarred_audio_dataset.py \
38 --manifest_path=<path to the manifest file> \
39 --target_dir=<path to output directory> \
40 --num_shards=<number of tarfiles that will contain the audio> \
41 --max_duration=<float representing maximum duration of audio samples> \
42 --min_duration=<float representing minimum duration of audio samples> \
43 --shuffle --shuffle_seed=1 \
44 --sort_in_shards \
45 --workers=-1
46
47
48 2) Concatenating more tarfiles to a pre-existing tarred dataset
49
50 python convert_to_tarred_audio_dataset.py \
51 --manifest_path=<path to the tarred manifest file> \
52 --metadata_path=<path to the metadata.yaml (or metadata_version_{X}.yaml) file> \
53 --target_dir=<path to output directory where the original tarfiles are contained> \
54 --max_duration=<float representing maximum duration of audio samples> \
55 --min_duration=<float representing minimum duration of audio samples> \
56 --shuffle --shuffle_seed=1 \
57 --sort_in_shards \
58 --workers=-1 \
59 --concat_manifest_paths \
60 <space separated paths to 1 or more manifest files to concatenate into the original tarred dataset>
61
62 3) Writing an empty metadata file
63
64 python convert_to_tarred_audio_dataset.py \
65 --target_dir=<path to output directory> \
66 # any other optional argument
67 --num_shards=8 \
68 --max_duration=16.7 \
69 --min_duration=0.01 \
70 --shuffle \
71 --workers=-1 \
72 --sort_in_shards \
73 --shuffle_seed=1 \
74 --write_metadata
75
76 """
77 import argparse
78 import copy
79 import json
80 import os
81 import random
82 import tarfile
83 from collections import defaultdict
84 from dataclasses import dataclass, field
85 from datetime import datetime
86 from typing import Any, List, Optional
87
88 from joblib import Parallel, delayed
89 from omegaconf import DictConfig, OmegaConf, open_dict
90
91 try:
92 import create_dali_tarred_dataset_index as dali_index
93
94 DALI_INDEX_SCRIPT_AVAILABLE = True
95 except (ImportError, ModuleNotFoundError, FileNotFoundError):
96 DALI_INDEX_SCRIPT_AVAILABLE = False
97
98 parser = argparse.ArgumentParser(
99 description="Convert an existing ASR dataset to tarballs compatible with TarredAudioToTextDataLayer."
100 )
101 parser.add_argument(
102 "--manifest_path", default=None, type=str, required=False, help="Path to the existing dataset's manifest."
103 )
104
105 parser.add_argument(
106 '--concat_manifest_paths',
107 nargs='+',
108 default=None,
109 type=str,
110 required=False,
111 help="Path to the additional dataset's manifests that will be concatenated with base dataset.",
112 )
113
114 # Optional arguments
115 parser.add_argument(
116 "--target_dir",
117 default='./tarred',
118 type=str,
119 help="Target directory for resulting tarballs and manifest. Defaults to `./tarred`. Creates the path if necessary.",
120 )
121
122 parser.add_argument(
123 "--metadata_path", required=False, default=None, type=str, help="Path to metadata file for the dataset.",
124 )
125
126 parser.add_argument(
127 "--num_shards",
128 default=-1,
129 type=int,
130 help="Number of shards (tarballs) to create. Used for partitioning data among workers.",
131 )
132 parser.add_argument(
133 '--max_duration',
134 default=None,
135 required=True,
136 type=float,
137     help='Maximum duration of audio clips in the dataset. This argument is required.',
138 )
139 parser.add_argument(
140 '--min_duration',
141 default=None,
142 type=float,
143 help='Minimum duration of audio clip in the dataset. By default, it is None and will not filter files.',
144 )
145 parser.add_argument(
146 "--shuffle",
147 action='store_true',
148 help="Whether or not to randomly shuffle the samples in the manifest before tarring/sharding.",
149 )
150
151 parser.add_argument(
152 "--keep_files_together",
153 action='store_true',
154 help="Whether or not to keep entries from the same file (but different offsets) together when sorting before tarring/sharding.",
155 )
156
157 parser.add_argument(
158 "--sort_in_shards",
159 action='store_true',
160 help="Whether or not to sort samples inside the shards based on their duration.",
161 )
162
163 parser.add_argument(
164 "--buckets_num", type=int, default=1, help="Number of buckets to create based on duration.",
165 )
166
167 parser.add_argument("--shuffle_seed", type=int, default=None, help="Random seed for use if shuffling is enabled.")
168 parser.add_argument(
169 '--write_metadata',
170 action='store_true',
171 help=(
172 "Flag to write a blank metadata with the current call config. "
173 "Note that the metadata will not contain the number of shards, "
174 "and it must be filled out by the user."
175 ),
176 )
177 parser.add_argument(
178 "--no_shard_manifests",
179 action='store_true',
180 help="Do not write sharded manifests along with the aggregated manifest.",
181 )
182 parser.add_argument('--workers', type=int, default=1, help='Number of worker processes')
183 args = parser.parse_args()
184
185
186 @dataclass
187 class ASRTarredDatasetConfig:
188 num_shards: int = -1
189 shuffle: bool = False
190 max_duration: Optional[float] = None
191 min_duration: Optional[float] = None
192 shuffle_seed: Optional[int] = None
193 sort_in_shards: bool = True
194 shard_manifests: bool = True
195 keep_files_together: bool = False
196
197
198 @dataclass
199 class ASRTarredDatasetMetadata:
200 created_datetime: Optional[str] = None
201 version: int = 0
202 num_samples_per_shard: Optional[int] = None
203 is_concatenated_manifest: bool = False
204
205 dataset_config: Optional[ASRTarredDatasetConfig] = ASRTarredDatasetConfig()
206 history: Optional[List[Any]] = field(default_factory=lambda: [])
207
208 def __post_init__(self):
209 self.created_datetime = self.get_current_datetime()
210
211 def get_current_datetime(self):
212 return datetime.now().strftime("%m-%d-%Y %H-%M-%S")
213
214 @classmethod
215 def from_config(cls, config: DictConfig):
216 obj = cls()
217 obj.__dict__.update(**config)
218 return obj
219
220 @classmethod
221 def from_file(cls, filepath: str):
222 config = OmegaConf.load(filepath)
223 return ASRTarredDatasetMetadata.from_config(config=config)
224
225
226 class ASRTarredDatasetBuilder:
227 """
228 Helper class that constructs a tarred dataset from scratch, or concatenates tarred datasets
229 together and constructs manifests for them.
230 """
231
232 def __init__(self):
233 self.config = None
234
235 def configure(self, config: ASRTarredDatasetConfig):
236 """
237 Sets the config generated from command line overrides.
238
239 Args:
240 config: ASRTarredDatasetConfig dataclass object.
241 """
242 self.config = config # type: ASRTarredDatasetConfig
243
244         if self.config.num_shards <= 0:
245 raise ValueError("`num_shards` must be > 0. Please fill in the metadata information correctly.")
246
247 def create_new_dataset(self, manifest_path: str, target_dir: str = "./tarred/", num_workers: int = 0):
248 """
249 Creates a new tarred dataset from a given manifest file.
250
251 Args:
252 manifest_path: Path to the original ASR manifest.
253 target_dir: Output directory.
254 num_workers: Integer denoting number of parallel worker processes which will write tarfiles.
255                 Defaults to 0, which denotes a sequential worker process.
256
257 Output:
258 Writes tarfiles, along with the tarred dataset compatible manifest file.
259 Also preserves a record of the metadata used to construct this tarred dataset.
260 """
261 if self.config is None:
262 raise ValueError("Config has not been set. Please call `configure(config: ASRTarredDatasetConfig)`")
263
264 if manifest_path is None:
265             raise FileNotFoundError("Manifest filepath cannot be None!")
266
267 config = self.config # type: ASRTarredDatasetConfig
268
269 if not os.path.exists(target_dir):
270 os.makedirs(target_dir)
271
272 # Read the existing manifest
273 entries, total_duration, filtered_entries, filtered_duration = self._read_manifest(manifest_path, config)
274
275 if len(filtered_entries) > 0:
276 print(f"Filtered {len(filtered_entries)} files which amounts to {filtered_duration} seconds of audio.")
277 print(
278 f"After filtering, manifest has {len(entries)} files which amounts to {total_duration} seconds of audio."
279 )
280
281 if len(entries) == 0:
282 print("No tarred dataset was created as there were 0 valid samples after filtering!")
283 return
284 if config.shuffle:
285 random.seed(config.shuffle_seed)
286 print("Shuffling...")
287 if config.keep_files_together:
288 filename_entries = defaultdict(list)
289 for ent in entries:
290 filename_entries[ent["audio_filepath"]].append(ent)
291 filenames = list(filename_entries.keys())
292 random.shuffle(filenames)
293 shuffled_entries = []
294 for filename in filenames:
295 shuffled_entries += filename_entries[filename]
296 entries = shuffled_entries
297 else:
298 random.shuffle(entries)
299
300 # Create shards and updated manifest entries
301 print(f"Number of samples added : {len(entries)}")
302 print(f"Remainder: {len(entries) % config.num_shards}")
303
304 start_indices = []
305 end_indices = []
306 # Build indices
307 for i in range(config.num_shards):
308 start_idx = (len(entries) // config.num_shards) * i
309 end_idx = start_idx + (len(entries) // config.num_shards)
310 print(f"Shard {i} has entries {start_idx} ~ {end_idx}")
311 files = set()
312 for ent_id in range(start_idx, end_idx):
313 files.add(entries[ent_id]["audio_filepath"])
314 print(f"Shard {i} contains {len(files)} files")
315 if i == config.num_shards - 1:
316 # We discard in order to have the same number of entries per shard.
317 print(f"Have {len(entries) - end_idx} entries left over that will be discarded.")
318
319 start_indices.append(start_idx)
320 end_indices.append(end_idx)
321
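A quick sketch (with hypothetical sizes) of the shard partitioning computed by the loop above: each shard receives exactly `len(entries) // num_shards` entries, and any remainder at the end is discarded so all shards are uniformly sized.

```python
# Hypothetical sizes illustrating the shard index computation above.
num_entries = 10
num_shards = 3
per_shard = num_entries // num_shards  # 3 entries per shard
bounds = [(per_shard * i, per_shard * i + per_shard) for i in range(num_shards)]
discarded = num_entries - bounds[-1][1]  # left-over entries are dropped
print(bounds, discarded)
```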
322 manifest_folder, _ = os.path.split(manifest_path)
323
324 with Parallel(n_jobs=num_workers, verbose=config.num_shards) as parallel:
325 # Call parallel tarfile construction
326 new_entries_list = parallel(
327 delayed(self._create_shard)(entries[start_idx:end_idx], target_dir, i, manifest_folder)
328 for i, (start_idx, end_idx) in enumerate(zip(start_indices, end_indices))
329 )
330
331 if config.shard_manifests:
332 sharded_manifests_dir = target_dir + '/sharded_manifests'
333 if not os.path.exists(sharded_manifests_dir):
334 os.makedirs(sharded_manifests_dir)
335
336 for manifest in new_entries_list:
337 shard_id = manifest[0]['shard_id']
338 new_manifest_shard_path = os.path.join(sharded_manifests_dir, f'manifest_{shard_id}.json')
339 with open(new_manifest_shard_path, 'w', encoding='utf-8') as m2:
340 for entry in manifest:
341 json.dump(entry, m2)
342 m2.write('\n')
343
344 # Flatten the list of list of entries to a list of entries
345 new_entries = [sample for manifest in new_entries_list for sample in manifest]
346 del new_entries_list
347
348 print("Total number of entries in manifest :", len(new_entries))
349
350 # Write manifest
351 new_manifest_path = os.path.join(target_dir, 'tarred_audio_manifest.json')
352 with open(new_manifest_path, 'w', encoding='utf-8') as m2:
353 for entry in new_entries:
354 json.dump(entry, m2)
355 m2.write('\n')
356
357 # Write metadata (default metadata for new datasets)
358 new_metadata_path = os.path.join(target_dir, 'metadata.yaml')
359 metadata = ASRTarredDatasetMetadata()
360
361 # Update metadata
362 metadata.dataset_config = config
363 metadata.num_samples_per_shard = len(new_entries) // config.num_shards
364
365 # Write metadata
366 metadata_yaml = OmegaConf.structured(metadata)
367 OmegaConf.save(metadata_yaml, new_metadata_path, resolve=True)
368
369 def create_concatenated_dataset(
370 self,
371 base_manifest_path: str,
372 manifest_paths: List[str],
373 metadata: ASRTarredDatasetMetadata,
374 target_dir: str = "./tarred_concatenated/",
375 num_workers: int = 1,
376 ):
377 """
378 Creates new tarfiles in order to create a concatenated dataset, whose manifest contains the data for
379 both the original dataset as well as the new data submitted in manifest paths.
380
381 Args:
382 base_manifest_path: Path to the manifest file which contains the information for the original
383 tarred dataset (with flattened paths).
384 manifest_paths: List of one or more paths to manifest files that will be concatenated with above
385 base tarred dataset.
386 metadata: ASRTarredDatasetMetadata dataclass instance with overrides from command line.
387 target_dir: Output directory
388
389 Output:
390             Writes tarfiles with indices mapping to a "concatenated" tarred dataset,
391 along with the tarred dataset compatible manifest file which includes information
392 about all the datasets that comprise the concatenated dataset.
393
394 Also preserves a record of the metadata used to construct this tarred dataset.
395 """
396 if not os.path.exists(target_dir):
397 os.makedirs(target_dir)
398
399 if base_manifest_path is None:
400             raise FileNotFoundError("Base manifest filepath cannot be None!")
401
402 if manifest_paths is None or len(manifest_paths) == 0:
403             raise FileNotFoundError("List of additional manifest filepaths cannot be None!")
404
405 config = ASRTarredDatasetConfig(**(metadata.dataset_config))
406
407 # Read the existing manifest (no filtering here)
408 base_entries, _, _, _ = self._read_manifest(base_manifest_path, config)
409 print(f"Read base manifest containing {len(base_entries)} samples.")
410
411 # Precompute number of samples per shard
412 if metadata.num_samples_per_shard is None:
413 num_samples_per_shard = len(base_entries) // config.num_shards
414 else:
415 num_samples_per_shard = metadata.num_samples_per_shard
416
417 print("Number of samples per shard :", num_samples_per_shard)
418
419 # Compute min and max duration and update config (if no metadata passed)
420 print(f"Selected max duration : {config.max_duration}")
421 print(f"Selected min duration : {config.min_duration}")
422
423 entries = []
424 for new_manifest_idx in range(len(manifest_paths)):
425 new_entries, total_duration, filtered_new_entries, filtered_duration = self._read_manifest(
426 manifest_paths[new_manifest_idx], config
427 )
428
429 if len(filtered_new_entries) > 0:
430 print(
431 f"Filtered {len(filtered_new_entries)} files which amounts to {filtered_duration:0.2f}"
432 f" seconds of audio from manifest {manifest_paths[new_manifest_idx]}."
433 )
434 print(
435 f"After filtering, manifest has {len(entries)} files which amounts to {total_duration} seconds of audio."
436 )
437
438 entries.extend(new_entries)
439
440 if len(entries) == 0:
441 print("No tarred dataset was created as there were 0 valid samples after filtering!")
442 return
443
444 if config.shuffle:
445 random.seed(config.shuffle_seed)
446 print("Shuffling...")
447 random.shuffle(entries)
448
449 # Drop last section of samples that cannot be added onto a chunk
450 drop_count = len(entries) % num_samples_per_shard
451 total_new_entries = len(entries)
452         entries = entries[: len(entries) - drop_count]
453
454 print(
455 f"Dropping {drop_count} samples from total new samples {total_new_entries} since they cannot "
456 f"be added into a uniformly sized chunk."
457 )
458
459 # Create shards and updated manifest entries
460 num_added_shards = len(entries) // num_samples_per_shard
461
462 print(f"Number of samples in base dataset : {len(base_entries)}")
463 print(f"Number of samples in additional datasets : {len(entries)}")
464 print(f"Number of added shards : {num_added_shards}")
465 print(f"Remainder: {len(entries) % num_samples_per_shard}")
466
467 start_indices = []
468 end_indices = []
469 shard_indices = []
470 for i in range(num_added_shards):
471 start_idx = (len(entries) // num_added_shards) * i
472 end_idx = start_idx + (len(entries) // num_added_shards)
473 shard_idx = i + config.num_shards
474 print(f"Shard {shard_idx} has entries {start_idx + len(base_entries)} ~ {end_idx + len(base_entries)}")
475
476 start_indices.append(start_idx)
477 end_indices.append(end_idx)
478 shard_indices.append(shard_idx)
479
480 manifest_folder, _ = os.path.split(base_manifest_path)
481
482 with Parallel(n_jobs=num_workers, verbose=num_added_shards) as parallel:
483 # Call parallel tarfile construction
484 new_entries_list = parallel(
485 delayed(self._create_shard)(entries[start_idx:end_idx], target_dir, shard_idx, manifest_folder)
486 for i, (start_idx, end_idx, shard_idx) in enumerate(zip(start_indices, end_indices, shard_indices))
487 )
488
489 if config.shard_manifests:
490 sharded_manifests_dir = target_dir + '/sharded_manifests'
491 if not os.path.exists(sharded_manifests_dir):
492 os.makedirs(sharded_manifests_dir)
493
494 for manifest in new_entries_list:
495 shard_id = manifest[0]['shard_id']
496 new_manifest_shard_path = os.path.join(sharded_manifests_dir, f'manifest_{shard_id}.json')
497 with open(new_manifest_shard_path, 'w', encoding='utf-8') as m2:
498 for entry in manifest:
499 json.dump(entry, m2)
500 m2.write('\n')
501
502 # Flatten the list of list of entries to a list of entries
503 new_entries = [sample for manifest in new_entries_list for sample in manifest]
504 del new_entries_list
505
506 # Write manifest
507 if metadata is None:
508 new_version = 1 # start with `1`, where `0` indicates the base manifest + dataset
509 else:
510 new_version = metadata.version + 1
511
512 print("Total number of entries in manifest :", len(base_entries) + len(new_entries))
513
514 new_manifest_path = os.path.join(target_dir, f'tarred_audio_manifest_version_{new_version}.json')
515 with open(new_manifest_path, 'w', encoding='utf-8') as m2:
516 # First write all the entries of base manifest
517 for entry in base_entries:
518 json.dump(entry, m2)
519 m2.write('\n')
520
521 # Finally write the new entries
522 for entry in new_entries:
523 json.dump(entry, m2)
524 m2.write('\n')
525
526 # Preserve historical metadata
527 base_metadata = metadata
528
529 # Write metadata (updated metadata for concatenated datasets)
530 new_metadata_path = os.path.join(target_dir, f'metadata_version_{new_version}.yaml')
531 metadata = ASRTarredDatasetMetadata()
532
533 # Update config
534 config.num_shards = config.num_shards + num_added_shards
535
536 # Update metadata
537 metadata.version = new_version
538 metadata.dataset_config = config
539 metadata.num_samples_per_shard = num_samples_per_shard
540 metadata.is_concatenated_manifest = True
541 metadata.created_datetime = metadata.get_current_datetime()
542
543 # Attach history
544 current_metadata = OmegaConf.structured(base_metadata.history)
545 metadata.history = current_metadata
546
547 # Write metadata
548 metadata_yaml = OmegaConf.structured(metadata)
549 OmegaConf.save(metadata_yaml, new_metadata_path, resolve=True)
550
551 def _read_manifest(self, manifest_path: str, config: ASRTarredDatasetConfig):
552 """Read and filters data from the manifest"""
553 # Read the existing manifest
554 entries = []
555 total_duration = 0.0
556 filtered_entries = []
557 filtered_duration = 0.0
558 with open(manifest_path, 'r', encoding='utf-8') as m:
559 for line in m:
560 entry = json.loads(line)
561 if (config.max_duration is None or entry['duration'] < config.max_duration) and (
562 config.min_duration is None or entry['duration'] >= config.min_duration
563 ):
564 entries.append(entry)
565 total_duration += entry["duration"]
566 else:
567 filtered_entries.append(entry)
568 filtered_duration += entry['duration']
569
570 return entries, total_duration, filtered_entries, filtered_duration
571
572 def _create_shard(self, entries, target_dir, shard_id, manifest_folder):
573 """Creates a tarball containing the audio files from `entries`.
574 """
575 if self.config.sort_in_shards:
576 entries.sort(key=lambda x: x["duration"], reverse=False)
577
578 new_entries = []
579 tar = tarfile.open(os.path.join(target_dir, f'audio_{shard_id}.tar'), mode='w', dereference=True)
580
581 count = dict()
582 for entry in entries:
583 # We squash the filename since we do not preserve directory structure of audio files in the tarball.
584 if os.path.exists(entry["audio_filepath"]):
585 audio_filepath = entry["audio_filepath"]
586 else:
587 audio_filepath = os.path.join(manifest_folder, entry["audio_filepath"])
588 if not os.path.exists(audio_filepath):
589 raise FileNotFoundError(f"Could not find {entry['audio_filepath']}!")
590
591 base, ext = os.path.splitext(audio_filepath)
592 base = base.replace('/', '_')
593 # Need the following replacement as long as WebDataset splits on first period
594 base = base.replace('.', '_')
595 squashed_filename = f'{base}{ext}'
596 if squashed_filename not in count:
597 tar.add(audio_filepath, arcname=squashed_filename)
598 to_write = squashed_filename
599 count[squashed_filename] = 1
600 else:
601 to_write = base + "-sub" + str(count[squashed_filename]) + ext
602 count[squashed_filename] += 1
603
604 new_entry = {
605 'audio_filepath': to_write,
606 'duration': entry['duration'],
607 'shard_id': shard_id, # Keep shard ID for recordkeeping
608 }
609
610 if 'label' in entry:
611 new_entry['label'] = entry['label']
612
613 if 'text' in entry:
614 new_entry['text'] = entry['text']
615
616 if 'offset' in entry:
617 new_entry['offset'] = entry['offset']
618
619 if 'lang' in entry:
620 new_entry['lang'] = entry['lang']
621
622 new_entries.append(new_entry)
623
624 tar.close()
625 return new_entries
626
627 @classmethod
628 def setup_history(cls, base_metadata: ASRTarredDatasetMetadata, history: List[Any]):
629 if 'history' in base_metadata.keys():
630 for history_val in base_metadata.history:
631 cls.setup_history(history_val, history)
632
633 if base_metadata is not None:
634 metadata_copy = copy.deepcopy(base_metadata)
635 with open_dict(metadata_copy):
636 metadata_copy.pop('history', None)
637 history.append(metadata_copy)
638
639
640 def main():
641 if args.buckets_num > 1:
642 bucket_length = (args.max_duration - args.min_duration) / float(args.buckets_num)
643 for i in range(args.buckets_num):
644 min_duration = args.min_duration + i * bucket_length
645 max_duration = min_duration + bucket_length
646 if i == args.buckets_num - 1:
647 # add a small number to cover the samples with exactly duration of max_duration in the last bucket.
648 max_duration += 1e-5
649 target_dir = os.path.join(args.target_dir, f"bucket{i+1}")
650 print(f"Creating bucket {i+1} with min_duration={min_duration} and max_duration={max_duration} ...")
651 print(f"Results are being saved at: {target_dir}.")
652 create_tar_datasets(min_duration=min_duration, max_duration=max_duration, target_dir=target_dir)
653 print(f"Bucket {i+1} is created.")
654 else:
655 create_tar_datasets(min_duration=args.min_duration, max_duration=args.max_duration, target_dir=args.target_dir)
656
657
658 def create_tar_datasets(min_duration: float, max_duration: float, target_dir: str):
659 builder = ASRTarredDatasetBuilder()
660
661 shard_manifests = False if args.no_shard_manifests else True
662
663 if args.write_metadata:
664 metadata = ASRTarredDatasetMetadata()
665 dataset_cfg = ASRTarredDatasetConfig(
666 num_shards=args.num_shards,
667 shuffle=args.shuffle,
668 max_duration=max_duration,
669 min_duration=min_duration,
670 shuffle_seed=args.shuffle_seed,
671 sort_in_shards=args.sort_in_shards,
672 shard_manifests=shard_manifests,
673 keep_files_together=args.keep_files_together,
674 )
675 metadata.dataset_config = dataset_cfg
676
677 output_path = os.path.join(target_dir, 'default_metadata.yaml')
678 OmegaConf.save(metadata, output_path, resolve=True)
679 print(f"Default metadata written to {output_path}")
680 exit(0)
681
682 if args.concat_manifest_paths is None or len(args.concat_manifest_paths) == 0:
683 print("Creating new tarred dataset ...")
684
685 # Create a tarred dataset from scratch
686 config = ASRTarredDatasetConfig(
687 num_shards=args.num_shards,
688 shuffle=args.shuffle,
689 max_duration=max_duration,
690 min_duration=min_duration,
691 shuffle_seed=args.shuffle_seed,
692 sort_in_shards=args.sort_in_shards,
693 shard_manifests=shard_manifests,
694 keep_files_together=args.keep_files_together,
695 )
696 builder.configure(config)
697 builder.create_new_dataset(manifest_path=args.manifest_path, target_dir=target_dir, num_workers=args.workers)
698
699 else:
700 if args.buckets_num > 1:
701 raise ValueError("Concatenation feature does not support buckets_num > 1.")
702 print("Concatenating multiple tarred datasets ...")
703
704 # Implicitly update config from base details
705 if args.metadata_path is not None:
706 metadata = ASRTarredDatasetMetadata.from_file(args.metadata_path)
707 else:
708 raise ValueError("`metadata` yaml file path must be provided!")
709
710 # Preserve history
711 history = []
712 builder.setup_history(OmegaConf.structured(metadata), history)
713 metadata.history = history
714
715 # Add command line overrides (everything other than num_shards)
716 metadata.dataset_config.max_duration = max_duration
717 metadata.dataset_config.min_duration = min_duration
718 metadata.dataset_config.shuffle = args.shuffle
719 metadata.dataset_config.shuffle_seed = args.shuffle_seed
720 metadata.dataset_config.sort_in_shards = args.sort_in_shards
721 metadata.dataset_config.shard_manifests = shard_manifests
722
723 builder.configure(metadata.dataset_config)
724
725 # Concatenate a tarred dataset onto a previous one
726 builder.create_concatenated_dataset(
727 base_manifest_path=args.manifest_path,
728 manifest_paths=args.concat_manifest_paths,
729 metadata=metadata,
730 target_dir=target_dir,
731 num_workers=args.workers,
732 )
733
734 if DALI_INDEX_SCRIPT_AVAILABLE and dali_index.INDEX_CREATOR_AVAILABLE:
735 print("Constructing DALI Tarfile Index - ", target_dir)
736 index_config = dali_index.DALITarredIndexConfig(tar_dir=target_dir, workers=args.workers)
737 dali_index.main(index_config)
738
739
740 if __name__ == "__main__":
741 main()
742
[end of scripts/speech_recognition/convert_to_tarred_audio_dataset.py]
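The drop-and-shard arithmetic in `create_concatenated_dataset` above can be sketched in isolation. The sample counts below are hypothetical, chosen only to make the boundary math visible; the script derives the same quantities from the manifests:

```python
# Hypothetical counts standing in for the filtered manifest entries.
num_samples_per_shard = 100
entries = list(range(1042))  # 1042 new samples survive filtering

# Samples that cannot fill a uniform shard are dropped from the tail.
# Note: entries[:-0] would empty the list, so a zero drop_count must be guarded.
drop_count = len(entries) % num_samples_per_shard  # 42
if drop_count > 0:
    entries = entries[:-drop_count]

num_added_shards = len(entries) // num_samples_per_shard  # 10
samples_per_added_shard = len(entries) // num_added_shards

# Start/end index of each added shard, mirroring the loop over num_added_shards.
bounds = [
    (i * samples_per_added_shard, (i + 1) * samples_per_added_shard)
    for i in range(num_added_shards)
]
print(drop_count, num_added_shards, bounds[0], bounds[-1])
# -> 42 10 (0, 100) (900, 1000)
```

Because the drop keeps `len(entries)` an exact multiple of `num_samples_per_shard`, every added shard ends up uniformly sized.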
[start of tools/nemo_forced_aligner/align.py]
1 # Copyright (c) 2023, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import copy
16 import math
17 import os
18 from dataclasses import dataclass, field, is_dataclass
19 from pathlib import Path
20 from typing import List, Optional
21
22 import torch
23 from omegaconf import OmegaConf
24 from utils.data_prep import (
25 add_t_start_end_to_utt_obj,
26 get_batch_starts_ends,
27 get_batch_variables,
28 get_manifest_lines_batch,
29 is_entry_in_all_lines,
30 is_entry_in_any_lines,
31 )
32 from utils.make_ass_files import make_ass_files
33 from utils.make_ctm_files import make_ctm_files
34 from utils.make_output_manifest import write_manifest_out_line
35 from utils.viterbi_decoding import viterbi_decoding
36
37 from nemo.collections.asr.models.ctc_models import EncDecCTCModel
38 from nemo.collections.asr.models.hybrid_rnnt_ctc_models import EncDecHybridRNNTCTCModel
39 from nemo.collections.asr.parts.utils.streaming_utils import FrameBatchASR
40 from nemo.collections.asr.parts.utils.transcribe_utils import setup_model
41 from nemo.core.config import hydra_runner
42 from nemo.utils import logging
43
44 """
45 Align the utterances in manifest_filepath.
46 Results are saved in ctm files in output_dir.
47
48 Arguments:
49 pretrained_name: string specifying the name of a CTC NeMo ASR model which will be automatically downloaded
50 from NGC and used for generating the log-probs which we will use to do alignment.
51 Note: NFA can only use CTC models (not Transducer models) at the moment.
52 model_path: string specifying the local filepath to a CTC NeMo ASR model which will be used to generate the
53 log-probs which we will use to do alignment.
54 Note: NFA can only use CTC models (not Transducer models) at the moment.
55 Note: if a model_path is provided, it will override the pretrained_name.
56 manifest_filepath: filepath to the manifest of the data you want to align,
57 containing 'audio_filepath' and 'text' fields.
58 output_dir: the folder where output CTM files and new JSON manifest will be saved.
59 align_using_pred_text: if True, will transcribe the audio using the specified model and then use that transcription
60 as the reference text for the forced alignment.
61 transcribe_device: None, or a string specifying the device that will be used for generating log-probs (i.e. "transcribing").
62 The string needs to be in a format recognized by torch.device(). If None, NFA will set it to 'cuda' if it is available
63 (otherwise will set it to 'cpu').
64 viterbi_device: None, or string specifying the device that will be used for doing Viterbi decoding.
65 The string needs to be in a format recognized by torch.device(). If None, NFA will set it to 'cuda' if it is available
66 (otherwise will set it to 'cpu').
67 batch_size: int specifying batch size that will be used for generating log-probs and doing Viterbi decoding.
68 use_local_attention: boolean flag specifying whether to try to use local attention for the ASR Model (will only
69 work if the ASR Model is a Conformer model). If local attention is used, we will set the local attention context
70 size to [64,64].
71 additional_segment_grouping_separator: an optional string used to separate the text into smaller segments.
72 If this is not specified, then the whole text will be treated as a single segment.
73 remove_blank_tokens_from_ctm: a boolean denoting whether to remove <blank> tokens from token-level output CTMs.
74 audio_filepath_parts_in_utt_id: int specifying how many of the 'parts' of the audio_filepath
75 we will use (starting from the final part of the audio_filepath) to determine the
76 utt_id that will be used in the CTM files. Note also that any spaces that are present in the audio_filepath
77 will be replaced with dashes, so as not to change the number of space-separated elements in the
78 CTM files.
79 e.g. if audio_filepath is "/a/b/c/d/e 1.wav" and audio_filepath_parts_in_utt_id is 1 => utt_id will be "e1"
80 e.g. if audio_filepath is "/a/b/c/d/e 1.wav" and audio_filepath_parts_in_utt_id is 2 => utt_id will be "d_e1"
81 e.g. if audio_filepath is "/a/b/c/d/e 1.wav" and audio_filepath_parts_in_utt_id is 3 => utt_id will be "c_d_e1"
82 use_buffered_infer: False, if set True, using streaming to do get the logits for alignment
83 This flag is useful when aligning large audio file.
84 However, currently the chunk streaming inference does not support batch inference,
85 which means even you set batch_size > 1, it will only infer one by one instead of doing
86 the whole batch inference together.
87 chunk_len_in_secs: float chunk length in seconds
88 total_buffer_in_secs: float Length of buffer (chunk + left and right padding) in seconds
89 chunk_batch_size: int batch size for buffered chunk inference,
90 which will cut one audio into segments and do inference on chunk_batch_size segments at a time
91
92 simulate_cache_aware_streaming: False, if set True, using cache aware streaming to do get the logits for alignment
93
94 save_output_file_formats: List of strings specifying what type of output files to save (default: ["ctm", "ass"])
95 ctm_file_config: CTMFileConfig to specify the configuration of the output CTM files
96 ass_file_config: ASSFileConfig to specify the configuration of the output ASS files
97 """
98
99
100 @dataclass
101 class CTMFileConfig:
102 remove_blank_tokens: bool = False
103 # minimum duration (in seconds) for timestamps in the CTM. If any line in the CTM has a
104 # duration lower than this, it will be enlarged from the middle outwards until it
105 # meets the minimum_timestamp_duration, or reaches the beginning or end of the audio file.
106 # Note that this may cause timestamps to overlap.
107 minimum_timestamp_duration: float = 0
108
109
110 @dataclass
111 class ASSFileConfig:
112 fontsize: int = 20
113 vertical_alignment: str = "center"
114 # if resegment_text_to_fill_space is True, the ASS files will use new segments
115 # such that each segment will not take up more than (approximately) max_lines_per_segment
116 # when the ASS file is applied to a video
117 resegment_text_to_fill_space: bool = False
118 max_lines_per_segment: int = 2
119 text_already_spoken_rgb: List[int] = field(default_factory=lambda: [49, 46, 61]) # dark gray
120 text_being_spoken_rgb: List[int] = field(default_factory=lambda: [57, 171, 9]) # dark green
121 text_not_yet_spoken_rgb: List[int] = field(default_factory=lambda: [194, 193, 199]) # light gray
122
123
124 @dataclass
125 class AlignmentConfig:
126 # Required configs
127 pretrained_name: Optional[str] = None
128 model_path: Optional[str] = None
129 manifest_filepath: Optional[str] = None
130 output_dir: Optional[str] = None
131
132 # General configs
133 align_using_pred_text: bool = False
134 transcribe_device: Optional[str] = None
135 viterbi_device: Optional[str] = None
136 batch_size: int = 1
137 use_local_attention: bool = True
138 additional_segment_grouping_separator: Optional[str] = None
139 audio_filepath_parts_in_utt_id: int = 1
140
141 # Buffered chunked streaming configs
142 use_buffered_chunked_streaming: bool = False
143 chunk_len_in_secs: float = 1.6
144 total_buffer_in_secs: float = 4.0
145 chunk_batch_size: int = 32
146
147 # Cache aware streaming configs
148 simulate_cache_aware_streaming: Optional[bool] = False
149
150 # Output file configs
151 save_output_file_formats: List[str] = field(default_factory=lambda: ["ctm", "ass"])
152 ctm_file_config: CTMFileConfig = field(default_factory=CTMFileConfig)
153 ass_file_config: ASSFileConfig = field(default_factory=ASSFileConfig)
154
155
156 @hydra_runner(config_name="AlignmentConfig", schema=AlignmentConfig)
157 def main(cfg: AlignmentConfig):
158
159 logging.info(f'Hydra config: {OmegaConf.to_yaml(cfg)}')
160
161 if is_dataclass(cfg):
162 cfg = OmegaConf.structured(cfg)
163
164 # Validate config
165 if cfg.model_path is None and cfg.pretrained_name is None:
166 raise ValueError("Both cfg.model_path and cfg.pretrained_name cannot be None")
167
168 if cfg.model_path is not None and cfg.pretrained_name is not None:
169 raise ValueError("One of cfg.model_path and cfg.pretrained_name must be None")
170
171 if cfg.manifest_filepath is None:
172 raise ValueError("cfg.manifest_filepath must be specified")
173
174 if cfg.output_dir is None:
175 raise ValueError("cfg.output_dir must be specified")
176
177 if cfg.batch_size < 1:
178 raise ValueError("cfg.batch_size cannot be zero or a negative number")
179
180 if cfg.additional_segment_grouping_separator == "" or cfg.additional_segment_grouping_separator == " ":
181 raise ValueError("cfg.additional_segment_grouping_separator cannot be empty string or space character")
182
183 if cfg.ctm_file_config.minimum_timestamp_duration < 0:
184 raise ValueError("cfg.ctm_file_config.minimum_timestamp_duration cannot be a negative number")
185
186 if cfg.ass_file_config.vertical_alignment not in ["top", "center", "bottom"]:
187 raise ValueError("cfg.ass_file_config.vertical_alignment must be one of 'top', 'center' or 'bottom'")
188
189 for rgb_list in [
190 cfg.ass_file_config.text_already_spoken_rgb,
191 cfg.ass_file_config.text_being_spoken_rgb,
192 cfg.ass_file_config.text_not_yet_spoken_rgb,
193 ]:
194 if len(rgb_list) != 3:
195 raise ValueError(
196 "cfg.ass_file_config.text_already_spoken_rgb,"
197 " cfg.ass_file_config.text_being_spoken_rgb,"
198 " and cfg.ass_file_config.text_not_yet_spoken_rgb all need to contain"
199 " exactly 3 elements."
200 )
201
202 # Validate manifest contents
203 if not is_entry_in_all_lines(cfg.manifest_filepath, "audio_filepath"):
204 raise RuntimeError(
205 "At least one line in cfg.manifest_filepath does not contain an 'audio_filepath' entry. "
206 "All lines must contain an 'audio_filepath' entry."
207 )
208
209 if cfg.align_using_pred_text:
210 if is_entry_in_any_lines(cfg.manifest_filepath, "pred_text"):
211 raise RuntimeError(
212 "Cannot specify cfg.align_using_pred_text=True when the manifest at cfg.manifest_filepath "
213 "contains 'pred_text' entries. This is because the audio will be transcribed and may produce "
214 "a different 'pred_text'. This may cause confusion."
215 )
216 else:
217 if not is_entry_in_all_lines(cfg.manifest_filepath, "text"):
218 raise RuntimeError(
219 "At least one line in cfg.manifest_filepath does not contain a 'text' entry. "
220 "NFA requires all lines to contain a 'text' entry when cfg.align_using_pred_text=False."
221 )
222
223 # init devices
224 if cfg.transcribe_device is None:
225 transcribe_device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
226 else:
227 transcribe_device = torch.device(cfg.transcribe_device)
228 logging.info(f"Device to be used for transcription step (`transcribe_device`) is {transcribe_device}")
229
230 if cfg.viterbi_device is None:
231 viterbi_device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
232 else:
233 viterbi_device = torch.device(cfg.viterbi_device)
234 logging.info(f"Device to be used for viterbi step (`viterbi_device`) is {viterbi_device}")
235
236 if transcribe_device.type == 'cuda' or viterbi_device.type == 'cuda':
237 logging.warning(
238 'One or both of transcribe_device and viterbi_device are GPUs. If you run into OOM errors '
239 'it may help to change both devices to be the CPU.'
240 )
241
242 # load model
243 model, _ = setup_model(cfg, transcribe_device)
244 model.eval()
245
246 if isinstance(model, EncDecHybridRNNTCTCModel):
247 model.change_decoding_strategy(decoder_type="ctc")
248
249 if cfg.use_local_attention:
250 logging.info(
251 "Flag use_local_attention is set to True => will try to use local attention for model if it allows it"
252 )
253 model.change_attention_model(self_attention_model="rel_pos_local_attn", att_context_size=[64, 64])
254
255 if not (isinstance(model, EncDecCTCModel) or isinstance(model, EncDecHybridRNNTCTCModel)):
256 raise NotImplementedError(
257 "Model is not an instance of NeMo EncDecCTCModel or EncDecHybridRNNTCTCModel."
258 " Currently only instances of these models are supported"
259 )
260
261 if cfg.ctm_file_config.minimum_timestamp_duration > 0:
262 logging.warning(
263 f"cfg.ctm_file_config.minimum_timestamp_duration has been set to {cfg.ctm_file_config.minimum_timestamp_duration} seconds. "
264 "This may cause the alignments for some tokens/words/additional segments to be overlapping."
265 )
266
267 buffered_chunk_params = {}
268 if cfg.use_buffered_chunked_streaming:
269 model_cfg = copy.deepcopy(model._cfg)
270
271 OmegaConf.set_struct(model_cfg.preprocessor, False)
272 # some changes for streaming scenario
273 model_cfg.preprocessor.dither = 0.0
274 model_cfg.preprocessor.pad_to = 0
275
276 if model_cfg.preprocessor.normalize != "per_feature":
277 logging.error(
278 "Only EncDecCTCModelBPE models trained with per_feature normalization are supported currently"
279 )
280 # Disable config overwriting
281 OmegaConf.set_struct(model_cfg.preprocessor, True)
282
283 feature_stride = model_cfg.preprocessor['window_stride']
284 model_stride_in_secs = feature_stride * cfg.model_downsample_factor
285 total_buffer = cfg.total_buffer_in_secs
286 chunk_len = float(cfg.chunk_len_in_secs)
287 tokens_per_chunk = math.ceil(chunk_len / model_stride_in_secs)
288 mid_delay = math.ceil((chunk_len + (total_buffer - chunk_len) / 2) / model_stride_in_secs)
289 logging.info(f"tokens_per_chunk is {tokens_per_chunk}, mid_delay is {mid_delay}")
290
291 model = FrameBatchASR(
292 asr_model=model,
293 frame_len=chunk_len,
294 total_buffer=cfg.total_buffer_in_secs,
295 batch_size=cfg.chunk_batch_size,
296 )
297 buffered_chunk_params = {
298 "delay": mid_delay,
299 "model_stride_in_secs": model_stride_in_secs,
300 "tokens_per_chunk": tokens_per_chunk,
301 }
302 # get start and end line IDs of batches
303 starts, ends = get_batch_starts_ends(cfg.manifest_filepath, cfg.batch_size)
304
305 # init output_timestep_duration = None and we will calculate and update it during the first batch
306 output_timestep_duration = None
307
308 # init f_manifest_out
309 os.makedirs(cfg.output_dir, exist_ok=True)
310 tgt_manifest_name = str(Path(cfg.manifest_filepath).stem) + "_with_output_file_paths.json"
311 tgt_manifest_filepath = str(Path(cfg.output_dir) / tgt_manifest_name)
312 f_manifest_out = open(tgt_manifest_filepath, 'w')
313
314 # get alignment and save in CTM batch-by-batch
315 for start, end in zip(starts, ends):
316 manifest_lines_batch = get_manifest_lines_batch(cfg.manifest_filepath, start, end)
317
318 (log_probs_batch, y_batch, T_batch, U_batch, utt_obj_batch, output_timestep_duration,) = get_batch_variables(
319 manifest_lines_batch,
320 model,
321 cfg.additional_segment_grouping_separator,
322 cfg.align_using_pred_text,
323 cfg.audio_filepath_parts_in_utt_id,
324 output_timestep_duration,
325 cfg.simulate_cache_aware_streaming,
326 cfg.use_buffered_chunked_streaming,
327 buffered_chunk_params,
328 )
329
330 alignments_batch = viterbi_decoding(log_probs_batch, y_batch, T_batch, U_batch, viterbi_device)
331
332 for utt_obj, alignment_utt in zip(utt_obj_batch, alignments_batch):
333
334 utt_obj = add_t_start_end_to_utt_obj(utt_obj, alignment_utt, output_timestep_duration)
335
336 if "ctm" in cfg.save_output_file_formats:
337 utt_obj = make_ctm_files(utt_obj, cfg.output_dir, cfg.ctm_file_config,)
338
339 if "ass" in cfg.save_output_file_formats:
340 utt_obj = make_ass_files(utt_obj, cfg.output_dir, cfg.ass_file_config)
341
342 write_manifest_out_line(
343 f_manifest_out, utt_obj,
344 )
345
346 f_manifest_out.close()
347
348 return None
349
350
351 if __name__ == "__main__":
352 main()
353
[end of tools/nemo_forced_aligner/align.py]
</code>
Here is an example of a patch file. It consists of changes to the code base. It specifies the file names, the line numbers of each change, and the removed and added lines. A single patch file can contain changes to multiple files.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
I need you to solve the provided issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the format shown above.
Respond below:
|
NVIDIA/NeMo
|
8a892b86186dbdf61803d75570cb5c58471e9dda
|
Ubuntu 22.04 Python 3.11 [asr]: multiple errors `dataclasses ValueError: mutable default * for field * is not allowed: use default_factory`
**Describe the bug**
After installing the latest stable `1.19.1` from PyPI, or the latest current commit with `[asr]` extras, I'm getting this error from `hydra-core==1.2.0` when trying to import `nemo.collections.asr`:
```
Traceback (most recent call last):
File "/home/alex/T7/src/speaker_verification/nemo_speaker/test.py", line 5, in <module>
import nemo.collections.asr as nemo_asr
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/__init__.py", line 15, in <module>
from nemo.collections.asr import data, losses, models, modules
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/losses/__init__.py", line 15, in <module>
from nemo.collections.asr.losses.angularloss import AngularSoftmaxLoss
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/losses/angularloss.py", line 18, in <module>
from nemo.core.classes import Loss, Typing, typecheck
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/core/__init__.py", line 16, in <module>
from nemo.core.classes import *
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/core/classes/__init__.py", line 16, in <module>
import hydra
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/__init__.py", line 5, in <module>
from hydra import utils
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/utils.py", line 8, in <module>
import hydra._internal.instantiate._instantiate2
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 12, in <module>
from hydra._internal.utils import _locate
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/_internal/utils.py", line 18, in <module>
from hydra.core.utils import get_valid_filename, validate_config_path
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/core/utils.py", line 20, in <module>
from hydra.core.hydra_config import HydraConfig
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/core/hydra_config.py", line 6, in <module>
from hydra.conf import HydraConf
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/conf/__init__.py", line 46, in <module>
class JobConf:
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/conf/__init__.py", line 75, in JobConf
@dataclass
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1230, in dataclass
return wrap(cls)
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1220, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 958, in _process_class
cls_fields.append(_get_field(cls, name, type, kw_only))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 815, in _get_field
raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'hydra.conf.JobConf.JobConfig.OverrideDirname'> for field override_dirname is not allowed: use default_factory
```
If I then manually upgrade `hydra-core` to the current latest (`1.3.2`), I get similar errors from `nemo-toolkit`:
```
Traceback (most recent call last):
File "/home/alex/T7/src/speaker_verification/nemo_speaker/test.py", line 5, in <module>
import nemo.collections.asr as nemo_asr
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/__init__.py", line 15, in <module>
from nemo.collections.asr import data, losses, models, modules
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/losses/__init__.py", line 16, in <module>
from nemo.collections.asr.losses.audio_losses import SDRLoss
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/losses/audio_losses.py", line 21, in <module>
from nemo.collections.asr.parts.preprocessing.features import make_seq_mask_like
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/__init__.py", line 16, in <module>
from nemo.collections.asr.parts.preprocessing.features import FeaturizerFactory, FilterbankFeatures, WaveformFeaturizer
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/features.py", line 44, in <module>
from nemo.collections.asr.parts.preprocessing.perturb import AudioAugmentor
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/perturb.py", line 50, in <module>
from nemo.collections.common.parts.preprocessing import collections, parsers
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/common/__init__.py", line 16, in <module>
from nemo.collections.common import data, losses, parts, tokenizers
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/common/parts/__init__.py", line 15, in <module>
from nemo.collections.common.parts.adapter_modules import LinearAdapter, LinearAdapterConfig
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/common/parts/adapter_modules.py", line 147, in <module>
@dataclass
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1230, in dataclass
return wrap(cls)
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1220, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 958, in _process_class
cls_fields.append(_get_field(cls, name, type, kw_only))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 815, in _get_field
raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'nemo.core.classes.mixins.adapter_mixin_strategies.ResidualAddAdapterStrategyConfig'> for field adapter_strategy is not allowed: use default_factory
```
It's easy to fix with a patch like this:
```
--- nemo/collections/common/parts/adapter_modules.py.orig 2023-08-04 13:55:53.464534800 +0200
+++ nemo/collections/common/parts/adapter_modules.py 2023-08-04 14:05:45.579537700 +0200
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass, is_dataclass
+from dataclasses import dataclass, is_dataclass, field
from typing import Any, Optional
from hydra.utils import instantiate
@@ -151,5 +151,5 @@ class LinearAdapterConfig:
activation: str = 'swish'
norm_position: str = 'pre'
dropout: float = 0.0
- adapter_strategy: Optional[Any] = adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
+ adapter_strategy: Optional[Any] = field(default_factory=adapter_mixin_strategies.ResidualAddAdapterStrategyConfig)
_target_: str = "{0}.{1}".format(LinearAdapter.__module__, LinearAdapter.__name__)
```
However then another error of the kind comes up:
```
Traceback (most recent call last):
File "/home/alex/T7/src/speaker_verification/nemo_speaker/test.py", line 5, in <module>
import nemo.collections.asr as nemo_asr
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/__init__.py", line 15, in <module>
from nemo.collections.asr import data, losses, models, modules
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/models/__init__.py", line 18, in <module>
from nemo.collections.asr.models.clustering_diarizer import ClusteringDiarizer
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/models/clustering_diarizer.py", line 29, in <module>
from nemo.collections.asr.metrics.der import score_labels
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/metrics/der.py", line 24, in <module>
from nemo.collections.asr.metrics.wer import word_error_rate
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/metrics/wer.py", line 27, in <module>
from nemo.collections.asr.parts.submodules import ctc_beam_decoding, ctc_greedy_decoding
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/parts/submodules/ctc_beam_decoding.py", line 593, in <module>
@dataclass
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1230, in dataclass
return wrap(cls)
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1220, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 958, in _process_class
cls_fields.append(_get_field(cls, name, type, kw_only))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 815, in _get_field
raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'nemo.collections.asr.parts.submodules.ctc_beam_decoding.FlashlightConfig'> for field flashlight_cfg is not allowed: use default_factory
```
This can also be fixed accordingly, but all in all, it appears these issues are pretty common in the code base.
Looks like NeMo isn't Python 3.11 ready, at least the `asr` collection.
**Steps/Code to reproduce bug**
With Python 3.11
1. Install `nemo-toolkit[asr]` either 1.19.1, or current HEAD from git (b498d438fc4c35ebf364a9a1c5cd3e29a2c0fe50)
2. Run `import nemo.collections.asr`
**Expected behavior**
Import `nemo.collections.asr` without errors
**Environment overview (please complete the following information)**
- Environment location: Bare-metal
- Method of NeMo install: `pip install nemo-toolkit==1.19.1` or from source b498d438fc4c35ebf364a9a1c5cd3e29a2c0fe50
**Environment details**
- Ubuntu 22.04
- PyTorch version 2.0.1
- Python version 3.11.4
Seems to be similar to #7002
Interesting. The fix is easy but needs to be applied to basically every single place that has this constructor for our adapter configs. Let me see if I can update it. But no guarantees on how soon fixes will come in main.
Looking forward to it @titu1994! Thanks!
@titu1994 I was looking to use NeMo speaker diarization with Python 3.11 and hit this dataclass issue. I patched everything involved in the specific code paths I needed: https://github.com/lmnt-com/NeMo/commit/d89acf9f0152e43dee29d7d1c4667ee34c26ffd7
I was using the neural diarizer as described in https://github.com/NVIDIA/NeMo/tree/main/examples/speaker_tasks/diarization
I'd be happy to upstream this if it's helpful.
I haven't checked whether this is backwards compatible with earlier Python/dataclass versions; do you know?
For reference, here is what led me to this issue, though it's duplicative of the above discussion:
[Similar error](https://github.com/huggingface/datasets/issues/5230)
[StackOverflow solution](https://stackoverflow.com/questions/53632152/why-cant-dataclasses-have-mutable-defaults-in-their-class-attributes-declaratio)
@shaper Thanks for sharing. For brevity, you don't really need a `lambda` when you don't pass any init parameters, like this:
```
field(default_factory=lambda: ConfidenceConfig())
```
You can just do
```
field(default_factory=ConfidenceConfig)
```
It's only needed when you do pass parameter(s), like
```
field(default_factory=lambda: beam_decode.BeamRNNTInferConfig(beam_size=4))
```
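Putting the two forms together, here is a minimal self-contained sketch of the pattern; `SubConfig` and `DecodingConfig` are hypothetical stand-ins for NeMo's config classes:

```python
from dataclasses import dataclass, field


@dataclass
class SubConfig:
    # hypothetical stand-in for a NeMo config class
    threshold: float = 0.5


@dataclass
class DecodingConfig:
    # no constructor arguments: pass the class itself as the factory
    plain: SubConfig = field(default_factory=SubConfig)
    # constructor arguments: wrap the call in a lambda
    tuned: SubConfig = field(default_factory=lambda: SubConfig(threshold=0.9))


a, b = DecodingConfig(), DecodingConfig()
assert a.plain is not b.plain  # each instance gets its own sub-config object
print(a.tuned.threshold)       # 0.9
```

Besides satisfying the 3.11 check, `default_factory` also avoids the classic shared-state bug: with a plain default, every instance would reference the same sub-config object.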
This issue is stale because it has been open for 30 days with no activity. Remove stale label or comment or this will be closed in 7 days.
I have the same issue. @tango4j suggested using one of the models from https://huggingface.co/spaces/hf-audio/open_asr_leaderboard, but I cannot import nemo.collections.asr:
```
Traceback (most recent call last):
File "/opt/pycharm-2022.3.3/plugins/python/helpers/pycharm/docrunner.py", line 138, in __run
exec(compile(example.source, filename, "single",
File "<doctest NeMoASR[2]>", line 1, in <module>
NeMoASR().apply_asr(file)
^^^^^^^^^
File "/home/cbj/python/cbj/cbj/transcribe/pretrained.py", line 504, in __init__
import nemo.collections.asr as nemo_asr
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/__init__.py", line 15, in <module>
from nemo.collections.asr import data, losses, models, modules
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/losses/__init__.py", line 16, in <module>
from nemo.collections.asr.losses.audio_losses import SDRLoss
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/losses/audio_losses.py", line 21, in <module>
from nemo.collections.asr.parts.preprocessing.features import make_seq_mask_like
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/__init__.py", line 16, in <module>
from nemo.collections.asr.parts.preprocessing.features import FeaturizerFactory, FilterbankFeatures, WaveformFeaturizer
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/features.py", line 44, in <module>
from nemo.collections.asr.parts.preprocessing.perturb import AudioAugmentor
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/perturb.py", line 50, in <module>
from nemo.collections.common.parts.preprocessing import collections, parsers
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/common/__init__.py", line 16, in <module>
from nemo.collections.common import data, losses, parts, tokenizers
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/common/parts/__init__.py", line 15, in <module>
from nemo.collections.common.parts.adapter_modules import LinearAdapter, LinearAdapterConfig
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/common/parts/adapter_modules.py", line 147, in <module>
@dataclass
^^^^^^^^^
File "/opt/py/2023/lib/python3.11/dataclasses.py", line 1230, in dataclass
return wrap(cls)
^^^^^^^^^
File "/opt/py/2023/lib/python3.11/dataclasses.py", line 1220, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/py/2023/lib/python3.11/dataclasses.py", line 958, in _process_class
cls_fields.append(_get_field(cls, name, type, kw_only))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/py/2023/lib/python3.11/dataclasses.py", line 815, in _get_field
raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'nemo.core.classes.mixins.adapter_mixin_strategies.ResidualAddAdapterStrategyConfig'> for field adapter_strategy is not allowed: use default_factory
```
For documentation (I had to search in the provided links):
Mutable defaults were never allowed in dataclasses (by convention), but in Python 3.11 the check was improved: instead of checking only a few known types (dict, list, set), it now treats any unhashable default as mutable.
An alternative to default_factory would be to use frozen dataclasses, but I don't know whether the configs in this code base are used as mutable objects or not.
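To illustrate the hashability check, a small sketch (class names are made up): a frozen dataclass instance defines `__hash__`, so it is still accepted as a plain default on 3.11, while a regular dataclass instance (which has `__hash__` set to `None`) is rejected:

```python
import sys
from dataclasses import dataclass


@dataclass(frozen=True)
class FrozenSub:      # frozen=True -> instances are hashable
    x: int = 1


@dataclass
class UsesFrozen:
    sub: FrozenSub = FrozenSub()  # allowed even on 3.11: default is hashable


@dataclass
class MutableSub:     # eq=True, frozen=False -> __hash__ is None
    x: int = 1


if sys.version_info >= (3, 11):
    try:
        @dataclass
        class UsesMutable:
            sub: MutableSub = MutableSub()  # ValueError on 3.11+
    except ValueError as err:
        print("rejected:", err)

assert UsesFrozen().sub.x == 1
```

On Python 3.10 and earlier the `UsesMutable` definition goes through silently, which is why the breakage only surfaced on 3.11.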
You need to update to NeMo 1.20; omegaconf shipped a fix that should resolve this
I have NeMo 1.20.0.
I installed NeMo yesterday with `pip install nemo_toolkit` and `pip install pytorch_lightning`.
So it should be the newest PyPI version.
```
$ pip show nemo_toolkit
Name: nemo-toolkit
Version: 1.20.0
Summary: NeMo - a toolkit for Conversational AI
Home-page: https://github.com/nvidia/nemo
Author: NVIDIA
Author-email: nemo-toolkit@nvidia.com
License: Apache2
Location: /opt/py/2023/lib/python3.11/site-packages
Requires: huggingface-hub, numba, numpy, onnx, python-dateutil, ruamel.yaml, scikit-learn, setuptools, tensorboard, text-unidecode, torch, tqdm, wget, wrapt
Required-by:
$ pip show omegaconf
Name: omegaconf
Version: 2.3.0
Summary: A flexible configuration library
Home-page: https://github.com/omry/omegaconf
Author: Omry Yadan
Author-email: omry@yadan.net
License:
Location: /home/cbj/.local/lib/python3.11/site-packages
Requires: antlr4-python3-runtime, PyYAML
Required-by: hydra-core
$ python -c "import nemo.collections.asr as nemo_asr"
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/__init__.py", line 15, in <module>
from nemo.collections.asr import data, losses, models, modules
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/losses/__init__.py", line 16, in <module>
from nemo.collections.asr.losses.audio_losses import SDRLoss
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/losses/audio_losses.py", line 21, in <module>
from nemo.collections.asr.parts.preprocessing.features import make_seq_mask_like
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/__init__.py", line 16, in <module>
from nemo.collections.asr.parts.preprocessing.features import FeaturizerFactory, FilterbankFeatures, WaveformFeaturizer
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/features.py", line 44, in <module>
from nemo.collections.asr.parts.preprocessing.perturb import AudioAugmentor
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/perturb.py", line 50, in <module>
from nemo.collections.common.parts.preprocessing import collections, parsers
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/common/__init__.py", line 16, in <module>
from nemo.collections.common import data, losses, parts, tokenizers
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/common/parts/__init__.py", line 15, in <module>
from nemo.collections.common.parts.adapter_modules import LinearAdapter, LinearAdapterConfig
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/common/parts/adapter_modules.py", line 147, in <module>
@dataclass
^^^^^^^^^
File "/opt/py/2023/lib/python3.11/dataclasses.py", line 1230, in dataclass
return wrap(cls)
^^^^^^^^^
File "/opt/py/2023/lib/python3.11/dataclasses.py", line 1220, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/py/2023/lib/python3.11/dataclasses.py", line 958, in _process_class
cls_fields.append(_get_field(cls, name, type, kw_only))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/py/2023/lib/python3.11/dataclasses.py", line 815, in _get_field
raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'nemo.core.classes.mixins.adapter_mixin_strategies.ResidualAddAdapterStrategyConfig'> for field adapter_strategy is not allowed: use default_factory
```
Hmm ok I'll take a look
2023-09-30T01:26:50Z
<patch>
diff --git a/examples/asr/experimental/k2/align_speech_parallel.py b/examples/asr/experimental/k2/align_speech_parallel.py
--- a/examples/asr/experimental/k2/align_speech_parallel.py
+++ b/examples/asr/experimental/k2/align_speech_parallel.py
@@ -74,7 +74,7 @@
import os
-from dataclasses import dataclass, is_dataclass
+from dataclasses import dataclass, field, is_dataclass
from typing import Optional
import pytorch_lightning as ptl
@@ -94,12 +94,14 @@
@dataclass
class ParallelAlignmentConfig:
model: Optional[str] = None # name
- predict_ds: ASRDatasetConfig = ASRDatasetConfig(return_sample_id=True, num_workers=4)
- aligner_args: K2AlignerWrapperModelConfig = K2AlignerWrapperModelConfig()
+ predict_ds: ASRDatasetConfig = field(
+ default_factory=lambda: ASRDatasetConfig(return_sample_id=True, num_workers=4)
+ )
+ aligner_args: K2AlignerWrapperModelConfig = field(default_factory=lambda: K2AlignerWrapperModelConfig())
output_path: str = MISSING
model_stride: int = 8
- trainer: TrainerConfig = TrainerConfig(gpus=-1, accelerator="ddp")
+ trainer: TrainerConfig = field(default_factory=lambda: TrainerConfig(gpus=-1, accelerator="ddp"))
# there arguments will be ignored
return_predictions: bool = False
diff --git a/nemo/collections/asr/metrics/rnnt_wer.py b/nemo/collections/asr/metrics/rnnt_wer.py
--- a/nemo/collections/asr/metrics/rnnt_wer.py
+++ b/nemo/collections/asr/metrics/rnnt_wer.py
@@ -15,7 +15,7 @@
import copy
import re
from abc import abstractmethod
-from dataclasses import dataclass, is_dataclass
+from dataclasses import dataclass, field, is_dataclass
from typing import Callable, Dict, List, Optional, Tuple, Union
import editdistance
@@ -1299,7 +1299,7 @@ class RNNTDecodingConfig:
preserve_alignments: Optional[bool] = None
# confidence config
- confidence_cfg: ConfidenceConfig = ConfidenceConfig()
+ confidence_cfg: ConfidenceConfig = field(default_factory=lambda: ConfidenceConfig())
# RNNT Joint fused batch size
fused_batch_size: Optional[int] = None
@@ -1317,10 +1317,10 @@ class RNNTDecodingConfig:
rnnt_timestamp_type: str = "all" # can be char, word or all for both
# greedy decoding config
- greedy: greedy_decode.GreedyRNNTInferConfig = greedy_decode.GreedyRNNTInferConfig()
+ greedy: greedy_decode.GreedyRNNTInferConfig = field(default_factory=lambda: greedy_decode.GreedyRNNTInferConfig())
# beam decoding config
- beam: beam_decode.BeamRNNTInferConfig = beam_decode.BeamRNNTInferConfig(beam_size=4)
+ beam: beam_decode.BeamRNNTInferConfig = field(default_factory=lambda: beam_decode.BeamRNNTInferConfig(beam_size=4))
# can be used to change temperature for decoding
temperature: float = 1.0
diff --git a/nemo/collections/asr/metrics/wer.py b/nemo/collections/asr/metrics/wer.py
--- a/nemo/collections/asr/metrics/wer.py
+++ b/nemo/collections/asr/metrics/wer.py
@@ -14,7 +14,7 @@
import re
from abc import abstractmethod
-from dataclasses import dataclass, is_dataclass
+from dataclasses import dataclass, field, is_dataclass
from typing import Callable, Dict, List, Optional, Tuple, Union
import editdistance
@@ -1297,13 +1297,17 @@ class CTCDecodingConfig:
batch_dim_index: int = 0
# greedy decoding config
- greedy: ctc_greedy_decoding.GreedyCTCInferConfig = ctc_greedy_decoding.GreedyCTCInferConfig()
+ greedy: ctc_greedy_decoding.GreedyCTCInferConfig = field(
+ default_factory=lambda: ctc_greedy_decoding.GreedyCTCInferConfig()
+ )
# beam decoding config
- beam: ctc_beam_decoding.BeamCTCInferConfig = ctc_beam_decoding.BeamCTCInferConfig(beam_size=4)
+ beam: ctc_beam_decoding.BeamCTCInferConfig = field(
+ default_factory=lambda: ctc_beam_decoding.BeamCTCInferConfig(beam_size=4)
+ )
# confidence config
- confidence_cfg: ConfidenceConfig = ConfidenceConfig()
+ confidence_cfg: ConfidenceConfig = field(default_factory=lambda: ConfidenceConfig())
# can be used to change temperature for decoding
temperature: float = 1.0
diff --git a/nemo/collections/asr/models/configs/aligner_config.py b/nemo/collections/asr/models/configs/aligner_config.py
--- a/nemo/collections/asr/models/configs/aligner_config.py
+++ b/nemo/collections/asr/models/configs/aligner_config.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from nemo.collections.asr.parts.k2.classes import GraphModuleConfig
@@ -35,10 +35,10 @@ class AlignerWrapperModelConfig:
word_output: bool = True
cpu_decoding: bool = False
decode_batch_size: int = 0
- ctc_cfg: AlignerCTCConfig = AlignerCTCConfig()
- rnnt_cfg: AlignerRNNTConfig = AlignerRNNTConfig()
+ ctc_cfg: AlignerCTCConfig = field(default_factory=lambda: AlignerCTCConfig())
+ rnnt_cfg: AlignerRNNTConfig = field(default_factory=lambda: AlignerRNNTConfig())
@dataclass
class K2AlignerWrapperModelConfig(AlignerWrapperModelConfig):
- decoder_module_cfg: GraphModuleConfig = GraphModuleConfig()
+ decoder_module_cfg: GraphModuleConfig = field(default_factory=lambda: GraphModuleConfig())
diff --git a/nemo/collections/asr/models/configs/asr_models_config.py b/nemo/collections/asr/models/configs/asr_models_config.py
--- a/nemo/collections/asr/models/configs/asr_models_config.py
+++ b/nemo/collections/asr/models/configs/asr_models_config.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional
from omegaconf import MISSING
@@ -74,24 +74,32 @@ class EncDecCTCConfig(model_cfg.ModelConfig):
labels: List[str] = MISSING
# Dataset configs
- train_ds: ASRDatasetConfig = ASRDatasetConfig(manifest_filepath=None, shuffle=True)
- validation_ds: ASRDatasetConfig = ASRDatasetConfig(manifest_filepath=None, shuffle=False)
- test_ds: ASRDatasetConfig = ASRDatasetConfig(manifest_filepath=None, shuffle=False)
+ train_ds: ASRDatasetConfig = field(default_factory=lambda: ASRDatasetConfig(manifest_filepath=None, shuffle=True))
+ validation_ds: ASRDatasetConfig = field(
+ default_factory=lambda: ASRDatasetConfig(manifest_filepath=None, shuffle=False)
+ )
+ test_ds: ASRDatasetConfig = field(default_factory=lambda: ASRDatasetConfig(manifest_filepath=None, shuffle=False))
# Optimizer / Scheduler config
- optim: Optional[model_cfg.OptimConfig] = model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
+ optim: Optional[model_cfg.OptimConfig] = field(
+ default_factory=lambda: model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
+ )
# Model component configs
- preprocessor: AudioToMelSpectrogramPreprocessorConfig = AudioToMelSpectrogramPreprocessorConfig()
- spec_augment: Optional[SpectrogramAugmentationConfig] = SpectrogramAugmentationConfig()
- encoder: ConvASREncoderConfig = ConvASREncoderConfig()
- decoder: ConvASRDecoderConfig = ConvASRDecoderConfig()
- decoding: CTCDecodingConfig = CTCDecodingConfig()
+ preprocessor: AudioToMelSpectrogramPreprocessorConfig = field(
+ default_factory=lambda: AudioToMelSpectrogramPreprocessorConfig()
+ )
+ spec_augment: Optional[SpectrogramAugmentationConfig] = field(
+ default_factory=lambda: SpectrogramAugmentationConfig()
+ )
+ encoder: ConvASREncoderConfig = field(default_factory=lambda: ConvASREncoderConfig())
+ decoder: ConvASRDecoderConfig = field(default_factory=lambda: ConvASRDecoderConfig())
+ decoding: CTCDecodingConfig = field(default_factory=lambda: CTCDecodingConfig())
@dataclass
class EncDecCTCModelConfig(model_cfg.NemoConfig):
- model: EncDecCTCConfig = EncDecCTCConfig()
+ model: EncDecCTCConfig = field(default_factory=lambda: EncDecCTCConfig())
@dataclass
diff --git a/nemo/collections/asr/models/configs/classification_models_config.py b/nemo/collections/asr/models/configs/classification_models_config.py
--- a/nemo/collections/asr/models/configs/classification_models_config.py
+++ b/nemo/collections/asr/models/configs/classification_models_config.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional
from omegaconf import MISSING
@@ -72,30 +72,40 @@ class EncDecClassificationConfig(model_cfg.ModelConfig):
timesteps: int = MISSING
# Dataset configs
- train_ds: EncDecClassificationDatasetConfig = EncDecClassificationDatasetConfig(
- manifest_filepath=None, shuffle=True, trim_silence=False
+ train_ds: EncDecClassificationDatasetConfig = field(
+ default_factory=lambda: EncDecClassificationDatasetConfig(
+ manifest_filepath=None, shuffle=True, trim_silence=False
+ )
)
- validation_ds: EncDecClassificationDatasetConfig = EncDecClassificationDatasetConfig(
- manifest_filepath=None, shuffle=False
+ validation_ds: EncDecClassificationDatasetConfig = field(
+ default_factory=lambda: EncDecClassificationDatasetConfig(manifest_filepath=None, shuffle=False)
)
- test_ds: EncDecClassificationDatasetConfig = EncDecClassificationDatasetConfig(
- manifest_filepath=None, shuffle=False
+ test_ds: EncDecClassificationDatasetConfig = field(
+ default_factory=lambda: EncDecClassificationDatasetConfig(manifest_filepath=None, shuffle=False)
)
# Optimizer / Scheduler config
- optim: Optional[model_cfg.OptimConfig] = model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
+ optim: Optional[model_cfg.OptimConfig] = field(
+ default_factory=lambda: model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
+ )
# Model component configs
- preprocessor: AudioToMFCCPreprocessorConfig = AudioToMFCCPreprocessorConfig()
- spec_augment: Optional[SpectrogramAugmentationConfig] = SpectrogramAugmentationConfig()
- crop_or_pad_augment: Optional[CropOrPadSpectrogramAugmentationConfig] = CropOrPadSpectrogramAugmentationConfig(
- audio_length=timesteps
+ preprocessor: AudioToMFCCPreprocessorConfig = field(default_factory=lambda: AudioToMFCCPreprocessorConfig())
+ spec_augment: Optional[SpectrogramAugmentationConfig] = field(
+ default_factory=lambda: SpectrogramAugmentationConfig()
+ )
+ crop_or_pad_augment: Optional[CropOrPadSpectrogramAugmentationConfig] = field(
+ default_factory=lambda: CropOrPadSpectrogramAugmentationConfig(audio_length=-1)
)
- encoder: ConvASREncoderConfig = ConvASREncoderConfig()
- decoder: ConvASRDecoderClassificationConfig = ConvASRDecoderClassificationConfig()
+ encoder: ConvASREncoderConfig = field(default_factory=lambda: ConvASREncoderConfig())
+ decoder: ConvASRDecoderClassificationConfig = field(default_factory=lambda: ConvASRDecoderClassificationConfig())
+
+ def __post_init__(self):
+ if self.crop_or_pad_augment is not None:
+ self.crop_or_pad_augment.audio_length = self.timesteps
@dataclass
class EncDecClassificationModelConfig(model_cfg.NemoConfig):
- model: EncDecClassificationConfig = EncDecClassificationConfig()
+ model: EncDecClassificationConfig = field(default_factory=lambda: EncDecClassificationConfig())
diff --git a/nemo/collections/asr/models/configs/diarizer_config.py b/nemo/collections/asr/models/configs/diarizer_config.py
--- a/nemo/collections/asr/models/configs/diarizer_config.py
+++ b/nemo/collections/asr/models/configs/diarizer_config.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import asdict, dataclass
+from dataclasses import asdict, dataclass, field
from typing import Any, Dict, Optional, Tuple, Union
@@ -78,9 +78,9 @@ class ASRDiarizerParams(DiarizerComponentConfig):
@dataclass
class ASRDiarizerConfig(DiarizerComponentConfig):
model_path: Optional[str] = "stt_en_conformer_ctc_large"
- parameters: ASRDiarizerParams = ASRDiarizerParams()
- ctc_decoder_parameters: ASRDiarizerCTCDecoderParams = ASRDiarizerCTCDecoderParams()
- realigning_lm_parameters: ASRRealigningLMParams = ASRRealigningLMParams()
+ parameters: ASRDiarizerParams = field(default_factory=lambda: ASRDiarizerParams())
+ ctc_decoder_parameters: ASRDiarizerCTCDecoderParams = field(default_factory=lambda: ASRDiarizerCTCDecoderParams())
+ realigning_lm_parameters: ASRRealigningLMParams = field(default_factory=lambda: ASRRealigningLMParams())
@dataclass
@@ -102,7 +102,7 @@ class VADParams(DiarizerComponentConfig):
class VADConfig(DiarizerComponentConfig):
model_path: str = "vad_multilingual_marblenet" # .nemo local model path or pretrained VAD model name
external_vad_manifest: Optional[str] = None
- parameters: VADParams = VADParams()
+ parameters: VADParams = field(default_factory=lambda: VADParams())
@dataclass
@@ -121,7 +121,7 @@ class SpeakerEmbeddingsParams(DiarizerComponentConfig):
class SpeakerEmbeddingsConfig(DiarizerComponentConfig):
# .nemo local model path or pretrained model name (titanet_large, ecapa_tdnn or speakerverification_speakernet)
model_path: Optional[str] = None
- parameters: SpeakerEmbeddingsParams = SpeakerEmbeddingsParams()
+ parameters: SpeakerEmbeddingsParams = field(default_factory=lambda: SpeakerEmbeddingsParams())
@dataclass
@@ -142,7 +142,7 @@ class ClusteringParams(DiarizerComponentConfig):
@dataclass
class ClusteringConfig(DiarizerComponentConfig):
- parameters: ClusteringParams = ClusteringParams()
+ parameters: ClusteringParams = field(default_factory=lambda: ClusteringParams())
@dataclass
@@ -166,7 +166,7 @@ class MSDDParams(DiarizerComponentConfig):
@dataclass
class MSDDConfig(DiarizerComponentConfig):
model_path: Optional[str] = "diar_msdd_telephonic"
- parameters: MSDDParams = MSDDParams()
+ parameters: MSDDParams = field(default_factory=lambda: MSDDParams())
@dataclass
@@ -176,16 +176,16 @@ class DiarizerConfig(DiarizerComponentConfig):
oracle_vad: bool = False # If True, uses RTTM files provided in the manifest file to get VAD timestamps
collar: float = 0.25 # Collar value for scoring
ignore_overlap: bool = True # Consider or ignore overlap segments while scoring
- vad: VADConfig = VADConfig()
- speaker_embeddings: SpeakerEmbeddingsConfig = SpeakerEmbeddingsConfig()
- clustering: ClusteringConfig = ClusteringConfig()
- msdd_model: MSDDConfig = MSDDConfig()
- asr: ASRDiarizerConfig = ASRDiarizerConfig()
+ vad: VADConfig = field(default_factory=lambda: VADConfig())
+ speaker_embeddings: SpeakerEmbeddingsConfig = field(default_factory=lambda: SpeakerEmbeddingsConfig())
+ clustering: ClusteringConfig = field(default_factory=lambda: ClusteringConfig())
+ msdd_model: MSDDConfig = field(default_factory=lambda: MSDDConfig())
+ asr: ASRDiarizerConfig = field(default_factory=lambda: ASRDiarizerConfig())
@dataclass
class NeuralDiarizerInferenceConfig(DiarizerComponentConfig):
- diarizer: DiarizerConfig = DiarizerConfig()
+ diarizer: DiarizerConfig = field(default_factory=lambda: DiarizerConfig())
device: str = "cpu"
verbose: bool = False
batch_size: int = 64
diff --git a/nemo/collections/asr/models/configs/k2_sequence_models_config.py b/nemo/collections/asr/models/configs/k2_sequence_models_config.py
--- a/nemo/collections/asr/models/configs/k2_sequence_models_config.py
+++ b/nemo/collections/asr/models/configs/k2_sequence_models_config.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from nemo.collections.asr.models.configs.asr_models_config import EncDecCTCConfig
from nemo.collections.asr.parts.k2.classes import GraphModuleConfig as BackendConfig
@@ -26,14 +26,14 @@ class GraphModuleConfig:
split_batch_size: int = 0
dec_type: str = "topo"
transcribe_training: bool = True
- backend_cfg: BackendConfig = BackendConfig()
+ backend_cfg: BackendConfig = field(default_factory=lambda: BackendConfig())
@dataclass
class EncDecK2SeqConfig(EncDecCTCConfig):
- graph_module_cfg: GraphModuleConfig = GraphModuleConfig()
+ graph_module_cfg: GraphModuleConfig = field(default_factory=lambda: GraphModuleConfig())
@dataclass
class EncDecK2SeqModelConfig(NemoConfig):
- model: EncDecK2SeqConfig = EncDecK2SeqConfig()
+ model: EncDecK2SeqConfig = field(default_factory=lambda: EncDecK2SeqConfig())
diff --git a/nemo/collections/asr/models/configs/matchboxnet_config.py b/nemo/collections/asr/models/configs/matchboxnet_config.py
--- a/nemo/collections/asr/models/configs/matchboxnet_config.py
+++ b/nemo/collections/asr/models/configs/matchboxnet_config.py
@@ -107,30 +107,38 @@ class MatchboxNetModelConfig(clf_cfg.EncDecClassificationConfig):
labels: List[str] = MISSING
# Dataset configs
- train_ds: clf_cfg.EncDecClassificationDatasetConfig = clf_cfg.EncDecClassificationDatasetConfig(
- manifest_filepath=None, shuffle=True, trim_silence=False
+ train_ds: clf_cfg.EncDecClassificationDatasetConfig = field(
+ default_factory=lambda: clf_cfg.EncDecClassificationDatasetConfig(
+ manifest_filepath=None, shuffle=True, trim_silence=False
+ )
)
- validation_ds: clf_cfg.EncDecClassificationDatasetConfig = clf_cfg.EncDecClassificationDatasetConfig(
- manifest_filepath=None, shuffle=False
+ validation_ds: clf_cfg.EncDecClassificationDatasetConfig = field(
+ default_factory=lambda: clf_cfg.EncDecClassificationDatasetConfig(manifest_filepath=None, shuffle=False)
)
- test_ds: clf_cfg.EncDecClassificationDatasetConfig = clf_cfg.EncDecClassificationDatasetConfig(
- manifest_filepath=None, shuffle=False
+ test_ds: clf_cfg.EncDecClassificationDatasetConfig = field(
+ default_factory=lambda: clf_cfg.EncDecClassificationDatasetConfig(manifest_filepath=None, shuffle=False)
)
# Optimizer / Scheduler config
- optim: Optional[model_cfg.OptimConfig] = model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
+ optim: Optional[model_cfg.OptimConfig] = field(
+ default_factory=lambda: model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
+ )
# Model general component configs
- preprocessor: AudioToMFCCPreprocessorConfig = AudioToMFCCPreprocessorConfig(window_size=0.025)
- spec_augment: Optional[SpectrogramAugmentationConfig] = SpectrogramAugmentationConfig(
- freq_masks=2, time_masks=2, freq_width=15, time_width=25, rect_masks=5, rect_time=25, rect_freq=15
+ preprocessor: AudioToMFCCPreprocessorConfig = field(
+ default_factory=lambda: AudioToMFCCPreprocessorConfig(window_size=0.025)
+ )
+ spec_augment: Optional[SpectrogramAugmentationConfig] = field(
+ default_factory=lambda: SpectrogramAugmentationConfig(
+ freq_masks=2, time_masks=2, freq_width=15, time_width=25, rect_masks=5, rect_time=25, rect_freq=15
+ )
)
- crop_or_pad_augment: Optional[CropOrPadSpectrogramAugmentationConfig] = CropOrPadSpectrogramAugmentationConfig(
- audio_length=128
+ crop_or_pad_augment: Optional[CropOrPadSpectrogramAugmentationConfig] = field(
+ default_factory=lambda: CropOrPadSpectrogramAugmentationConfig(audio_length=128)
)
- encoder: ConvASREncoderConfig = ConvASREncoderConfig(activation="relu")
- decoder: ConvASRDecoderClassificationConfig = ConvASRDecoderClassificationConfig()
+ encoder: ConvASREncoderConfig = field(default_factory=lambda: ConvASREncoderConfig(activation="relu"))
+ decoder: ConvASRDecoderClassificationConfig = field(default_factory=lambda: ConvASRDecoderClassificationConfig())
@dataclass
diff --git a/nemo/collections/asr/models/configs/quartznet_config.py b/nemo/collections/asr/models/configs/quartznet_config.py
--- a/nemo/collections/asr/models/configs/quartznet_config.py
+++ b/nemo/collections/asr/models/configs/quartznet_config.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import Any, Callable, List, Optional
from omegaconf import MISSING
@@ -174,20 +174,30 @@ class JasperModelConfig(ctc_cfg.EncDecCTCConfig):
labels: List[str] = MISSING
# Dataset configs
- train_ds: ctc_cfg.ASRDatasetConfig = ctc_cfg.ASRDatasetConfig(
- manifest_filepath=None, shuffle=True, trim_silence=True
+ train_ds: ctc_cfg.ASRDatasetConfig = field(
+ default_factory=lambda: ctc_cfg.ASRDatasetConfig(manifest_filepath=None, shuffle=True, trim_silence=True)
+ )
+ validation_ds: ctc_cfg.ASRDatasetConfig = field(
+ default_factory=lambda: ctc_cfg.ASRDatasetConfig(manifest_filepath=None, shuffle=False)
+ )
+ test_ds: ctc_cfg.ASRDatasetConfig = field(
+ default_factory=lambda: ctc_cfg.ASRDatasetConfig(manifest_filepath=None, shuffle=False)
)
- validation_ds: ctc_cfg.ASRDatasetConfig = ctc_cfg.ASRDatasetConfig(manifest_filepath=None, shuffle=False)
- test_ds: ctc_cfg.ASRDatasetConfig = ctc_cfg.ASRDatasetConfig(manifest_filepath=None, shuffle=False)
# Optimizer / Scheduler config
- optim: Optional[model_cfg.OptimConfig] = model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
+ optim: Optional[model_cfg.OptimConfig] = field(
+ default_factory=lambda: model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
+ )
# Model general component configs
- preprocessor: AudioToMelSpectrogramPreprocessorConfig = AudioToMelSpectrogramPreprocessorConfig()
- spec_augment: Optional[SpectrogramAugmentationConfig] = SpectrogramAugmentationConfig()
- encoder: ConvASREncoderConfig = ConvASREncoderConfig(activation="relu")
- decoder: ConvASRDecoderConfig = ConvASRDecoderConfig()
+ preprocessor: AudioToMelSpectrogramPreprocessorConfig = field(
+ default_factory=lambda: AudioToMelSpectrogramPreprocessorConfig()
+ )
+ spec_augment: Optional[SpectrogramAugmentationConfig] = field(
+ default_factory=lambda: SpectrogramAugmentationConfig()
+ )
+ encoder: ConvASREncoderConfig = field(default_factory=lambda: ConvASREncoderConfig(activation="relu"))
+ decoder: ConvASRDecoderConfig = field(default_factory=lambda: ConvASRDecoderConfig())
@dataclass
diff --git a/nemo/collections/asr/modules/audio_preprocessing.py b/nemo/collections/asr/modules/audio_preprocessing.py
--- a/nemo/collections/asr/modules/audio_preprocessing.py
+++ b/nemo/collections/asr/modules/audio_preprocessing.py
@@ -634,6 +634,12 @@ def __init__(self, audio_length):
super(CropOrPadSpectrogramAugmentation, self).__init__()
self.audio_length = audio_length
+ if self.audio_length < 0:
+ raise ValueError(
+ 'audio_length must be non-negative. If using a dataclass with OmegaConf, '
+ 'please call OmegaConf.to_object(cfg) to call appropriate __post_init__ methods.'
+ )
+
@typecheck()
@torch.no_grad()
def forward(self, input_signal, length):
diff --git a/nemo/collections/asr/parts/k2/classes.py b/nemo/collections/asr/parts/k2/classes.py
--- a/nemo/collections/asr/parts/k2/classes.py
+++ b/nemo/collections/asr/parts/k2/classes.py
@@ -13,7 +13,7 @@
# limitations under the License.
from abc import ABC
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import Any, Optional, Tuple
import torch
@@ -43,7 +43,7 @@ class GraphModuleConfig:
topo_with_self_loops: bool = True
token_lm: Optional[Any] = None
intersect_pruned: bool = False
- intersect_conf: GraphIntersectDenseConfig = GraphIntersectDenseConfig()
+ intersect_conf: GraphIntersectDenseConfig = field(default_factory=lambda: GraphIntersectDenseConfig())
boost_coeff: float = 0.0
predictor_window_size: int = 0
predictor_step_size: int = 1
diff --git a/nemo/collections/asr/parts/submodules/adapters/multi_head_attention_adapter_module.py b/nemo/collections/asr/parts/submodules/adapters/multi_head_attention_adapter_module.py
--- a/nemo/collections/asr/parts/submodules/adapters/multi_head_attention_adapter_module.py
+++ b/nemo/collections/asr/parts/submodules/adapters/multi_head_attention_adapter_module.py
@@ -13,7 +13,7 @@
# limitations under the License.
import math
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import Any, Optional
import torch
@@ -183,7 +183,7 @@ class MultiHeadAttentionAdapterConfig:
n_feat: int
dropout_rate: float = 0.0
proj_dim: Optional[int] = None
- adapter_strategy: Optional[Any] = MHAResidualAddAdapterStrategyConfig()
+ adapter_strategy: Optional[Any] = field(default_factory=lambda: MHAResidualAddAdapterStrategyConfig())
_target_: str = "{0}.{1}".format(MultiHeadAttentionAdapter.__module__, MultiHeadAttentionAdapter.__name__)
@@ -287,7 +287,7 @@ class RelPositionMultiHeadAttentionAdapterConfig:
n_feat: int
dropout_rate: float = 0.0
proj_dim: Optional[int] = None
- adapter_strategy: Optional[Any] = MHAResidualAddAdapterStrategyConfig()
+ adapter_strategy: Optional[Any] = field(default_factory=lambda: MHAResidualAddAdapterStrategyConfig())
_target_: str = "{0}.{1}".format(
RelPositionMultiHeadAttentionAdapter.__module__, RelPositionMultiHeadAttentionAdapter.__name__
)
@@ -336,7 +336,9 @@ class PositionalEncodingAdapterConfig:
d_model: int
max_len: int = 5000
xscale: float = 1.0
- adapter_strategy: Optional[Any] = adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
+ adapter_strategy: Optional[Any] = field(
+ default_factory=lambda: adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
+ )
_target_: str = "{0}.{1}".format(PositionalEncodingAdapter.__module__, PositionalEncodingAdapter.__name__)
@@ -378,5 +380,7 @@ class RelPositionalEncodingAdapterConfig:
d_model: int
max_len: int = 5000
xscale: float = 1.0
- adapter_strategy: Optional[Any] = adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
+ adapter_strategy: Optional[Any] = field(
+ default_factory=lambda: adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
+ )
_target_: str = "{0}.{1}".format(RelPositionalEncodingAdapter.__module__, RelPositionalEncodingAdapter.__name__)
diff --git a/nemo/collections/asr/parts/submodules/ctc_beam_decoding.py b/nemo/collections/asr/parts/submodules/ctc_beam_decoding.py
--- a/nemo/collections/asr/parts/submodules/ctc_beam_decoding.py
+++ b/nemo/collections/asr/parts/submodules/ctc_beam_decoding.py
@@ -14,7 +14,7 @@
import math
import os
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import List, Optional, Tuple, Union
import torch
@@ -602,5 +602,5 @@ class BeamCTCInferConfig:
beam_beta: float = 0.0
kenlm_path: Optional[str] = None
- flashlight_cfg: Optional[FlashlightConfig] = FlashlightConfig()
- pyctcdecode_cfg: Optional[PyCTCDecodeConfig] = PyCTCDecodeConfig()
+ flashlight_cfg: Optional[FlashlightConfig] = field(default_factory=lambda: FlashlightConfig())
+ pyctcdecode_cfg: Optional[PyCTCDecodeConfig] = field(default_factory=lambda: PyCTCDecodeConfig())
diff --git a/nemo/collections/asr/parts/submodules/ctc_greedy_decoding.py b/nemo/collections/asr/parts/submodules/ctc_greedy_decoding.py
--- a/nemo/collections/asr/parts/submodules/ctc_greedy_decoding.py
+++ b/nemo/collections/asr/parts/submodules/ctc_greedy_decoding.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import List, Optional
import torch
@@ -253,7 +253,9 @@ class GreedyCTCInferConfig:
preserve_alignments: bool = False
compute_timestamps: bool = False
preserve_frame_confidence: bool = False
- confidence_measure_cfg: Optional[ConfidenceMeasureConfig] = ConfidenceMeasureConfig()
+ confidence_measure_cfg: Optional[ConfidenceMeasureConfig] = field(
+ default_factory=lambda: ConfidenceMeasureConfig()
+ )
confidence_method_cfg: str = "DEPRECATED"
def __post_init__(self):
diff --git a/nemo/collections/asr/parts/submodules/rnnt_greedy_decoding.py b/nemo/collections/asr/parts/submodules/rnnt_greedy_decoding.py
--- a/nemo/collections/asr/parts/submodules/rnnt_greedy_decoding.py
+++ b/nemo/collections/asr/parts/submodules/rnnt_greedy_decoding.py
@@ -26,7 +26,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import List, Optional, Tuple, Union
import numpy as np
@@ -2185,7 +2185,9 @@ class GreedyRNNTInferConfig:
max_symbols_per_step: Optional[int] = 10
preserve_alignments: bool = False
preserve_frame_confidence: bool = False
- confidence_measure_cfg: Optional[ConfidenceMeasureConfig] = ConfidenceMeasureConfig()
+ confidence_measure_cfg: Optional[ConfidenceMeasureConfig] = field(
+ default_factory=lambda: ConfidenceMeasureConfig()
+ )
confidence_method_cfg: str = "DEPRECATED"
def __post_init__(self):
@@ -2217,7 +2219,9 @@ class GreedyBatchedRNNTInferConfig:
max_symbols_per_step: Optional[int] = 10
preserve_alignments: bool = False
preserve_frame_confidence: bool = False
- confidence_measure_cfg: Optional[ConfidenceMeasureConfig] = ConfidenceMeasureConfig()
+ confidence_measure_cfg: Optional[ConfidenceMeasureConfig] = field(
+ default_factory=lambda: ConfidenceMeasureConfig()
+ )
confidence_method_cfg: str = "DEPRECATED"
def __post_init__(self):
diff --git a/nemo/collections/asr/parts/utils/asr_confidence_utils.py b/nemo/collections/asr/parts/utils/asr_confidence_utils.py
--- a/nemo/collections/asr/parts/utils/asr_confidence_utils.py
+++ b/nemo/collections/asr/parts/utils/asr_confidence_utils.py
@@ -14,7 +14,7 @@
import math
from abc import ABC, abstractmethod
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from functools import partial
from typing import List, Optional
@@ -181,7 +181,7 @@ class ConfidenceConfig:
preserve_word_confidence: bool = False
exclude_blank: bool = True
aggregation: str = "min"
- measure_cfg: ConfidenceMeasureConfig = ConfidenceMeasureConfig()
+ measure_cfg: ConfidenceMeasureConfig = field(default_factory=lambda: ConfidenceMeasureConfig())
method_cfg: str = "DEPRECATED"
def __post_init__(self):
diff --git a/nemo/collections/common/parts/adapter_modules.py b/nemo/collections/common/parts/adapter_modules.py
--- a/nemo/collections/common/parts/adapter_modules.py
+++ b/nemo/collections/common/parts/adapter_modules.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass, is_dataclass
+from dataclasses import dataclass, field, is_dataclass
from typing import Any, Optional
from hydra.utils import instantiate
@@ -160,5 +160,7 @@ class LinearAdapterConfig:
activation: str = 'swish'
norm_position: str = 'pre'
dropout: float = 0.0
- adapter_strategy: Optional[Any] = adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
+ adapter_strategy: Optional[Any] = field(
+ default_factory=lambda: adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
+ )
_target_: str = "{0}.{1}".format(LinearAdapter.__module__, LinearAdapter.__name__)
diff --git a/nemo/collections/common/tokenizers/en_ja_tokenizers.py b/nemo/collections/common/tokenizers/en_ja_tokenizers.py
--- a/nemo/collections/common/tokenizers/en_ja_tokenizers.py
+++ b/nemo/collections/common/tokenizers/en_ja_tokenizers.py
@@ -14,11 +14,19 @@
import re
from typing import List
-import ipadic
-import MeCab
from pangu import spacing
from sacremoses import MosesDetokenizer, MosesPunctNormalizer, MosesTokenizer
+try:
+ import ipadic
+ import MeCab
+
+ HAVE_MECAB = True
+ HAVE_IPADIC = True
+except (ImportError, ModuleNotFoundError):
+ HAVE_MECAB = False
+ HAVE_IPADIC = False
+
class EnJaProcessor:
"""
@@ -67,6 +75,9 @@ class JaMecabProcessor:
"""
def __init__(self):
+ if not HAVE_MECAB or not HAVE_IPADIC:
+ raise ImportError("Please ensure that you have installed `MeCab` and `ipadic` to use JaMecabProcessor")
+
self.mecab_tokenizer = MeCab.Tagger(ipadic.MECAB_ARGS + " -Owakati")
def detokenize(self, text: List[str]) -> str:
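The MeCab/ipadic change above follows a common optional-dependency pattern: probe for the package at import time, record availability in a module-level flag, and raise a clear `ImportError` only when the feature is actually used. A minimal sketch, with a hypothetical package name (`some_optional_pkg` is not a real dependency):

```python
# Guard the optional import so the module itself always imports cleanly.
try:
    import some_optional_pkg  # hypothetical extra dependency

    HAVE_PKG = True
except (ImportError, ModuleNotFoundError):
    HAVE_PKG = False


class NeedsPkg:
    """Feature class that only works when the extra dependency is present."""

    def __init__(self):
        if not HAVE_PKG:
            # Fail at use time with an actionable message, not at import time.
            raise ImportError("Please install `some_optional_pkg` to use NeedsPkg")
        self.backend = some_optional_pkg
```

This keeps `import`-ing the containing module cheap and safe for users who never touch the MeCab-backed processor, while users who do get an explicit installation hint instead of a bare `NameError`.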
diff --git a/nemo/collections/nlp/models/machine_translation/mt_enc_dec_config.py b/nemo/collections/nlp/models/machine_translation/mt_enc_dec_config.py
--- a/nemo/collections/nlp/models/machine_translation/mt_enc_dec_config.py
+++ b/nemo/collections/nlp/models/machine_translation/mt_enc_dec_config.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import Any, Optional, Tuple
from omegaconf.omegaconf import MISSING
@@ -46,7 +46,7 @@ class MTOptimConfig(OptimConfig):
lr: float = 1e-3
betas: Tuple[float, float] = (0.9, 0.98)
weight_decay: float = 0.0
- sched: Optional[MTSchedConfig] = MTSchedConfig()
+ sched: Optional[MTSchedConfig] = field(default_factory=lambda: MTSchedConfig())
@dataclass
@@ -74,70 +74,80 @@ class MTEncDecModelConfig(EncDecNLPModelConfig):
decoder_tokenizer: Any = MISSING
decoder: Any = MISSING
- head: TokenClassifierConfig = TokenClassifierConfig(log_softmax=True)
+ head: TokenClassifierConfig = field(default_factory=lambda: TokenClassifierConfig(log_softmax=True))
# dataset configurations
- train_ds: Optional[TranslationDataConfig] = TranslationDataConfig(
- src_file_name=MISSING,
- tgt_file_name=MISSING,
- tokens_in_batch=512,
- clean=True,
- shuffle=True,
- cache_ids=False,
- use_cache=False,
+ train_ds: Optional[TranslationDataConfig] = field(
+ default_factory=lambda: TranslationDataConfig(
+ src_file_name=MISSING,
+ tgt_file_name=MISSING,
+ tokens_in_batch=512,
+ clean=True,
+ shuffle=True,
+ cache_ids=False,
+ use_cache=False,
+ )
)
- validation_ds: Optional[TranslationDataConfig] = TranslationDataConfig(
- src_file_name=MISSING,
- tgt_file_name=MISSING,
- tokens_in_batch=512,
- clean=False,
- shuffle=False,
- cache_ids=False,
- use_cache=False,
+ validation_ds: Optional[TranslationDataConfig] = field(
+ default_factory=lambda: TranslationDataConfig(
+ src_file_name=MISSING,
+ tgt_file_name=MISSING,
+ tokens_in_batch=512,
+ clean=False,
+ shuffle=False,
+ cache_ids=False,
+ use_cache=False,
+ )
)
- test_ds: Optional[TranslationDataConfig] = TranslationDataConfig(
- src_file_name=MISSING,
- tgt_file_name=MISSING,
- tokens_in_batch=512,
- clean=False,
- shuffle=False,
- cache_ids=False,
- use_cache=False,
+ test_ds: Optional[TranslationDataConfig] = field(
+ default_factory=lambda: TranslationDataConfig(
+ src_file_name=MISSING,
+ tgt_file_name=MISSING,
+ tokens_in_batch=512,
+ clean=False,
+ shuffle=False,
+ cache_ids=False,
+ use_cache=False,
+ )
)
- optim: Optional[OptimConfig] = MTOptimConfig()
+ optim: Optional[OptimConfig] = field(default_factory=lambda: MTOptimConfig())
@dataclass
class AAYNBaseConfig(MTEncDecModelConfig):
# Attention is All You Need Base Configuration
- encoder_tokenizer: TokenizerConfig = TokenizerConfig(library='yttm')
- decoder_tokenizer: TokenizerConfig = TokenizerConfig(library='yttm')
-
- encoder: NeMoTransformerEncoderConfig = NeMoTransformerEncoderConfig(
- library='nemo',
- model_name=None,
- pretrained=False,
- hidden_size=512,
- inner_size=2048,
- num_layers=6,
- num_attention_heads=8,
- ffn_dropout=0.1,
- attn_score_dropout=0.1,
- attn_layer_dropout=0.1,
+ encoder_tokenizer: TokenizerConfig = field(default_factory=lambda: TokenizerConfig(library='yttm'))
+ decoder_tokenizer: TokenizerConfig = field(default_factory=lambda: TokenizerConfig(library='yttm'))
+
+ encoder: NeMoTransformerEncoderConfig = field(
+ default_factory=lambda: NeMoTransformerEncoderConfig(
+ library='nemo',
+ model_name=None,
+ pretrained=False,
+ hidden_size=512,
+ inner_size=2048,
+ num_layers=6,
+ num_attention_heads=8,
+ ffn_dropout=0.1,
+ attn_score_dropout=0.1,
+ attn_layer_dropout=0.1,
+ )
)
- decoder: NeMoTransformerConfig = NeMoTransformerConfig(
- library='nemo',
- model_name=None,
- pretrained=False,
- hidden_size=512,
- inner_size=2048,
- num_layers=6,
- num_attention_heads=8,
- ffn_dropout=0.1,
- attn_score_dropout=0.1,
- attn_layer_dropout=0.1,
+ decoder: NeMoTransformerConfig = field(
+ default_factory=lambda: NeMoTransformerConfig(
+ library='nemo',
+ model_name=None,
+ pretrained=False,
+ hidden_size=512,
+ inner_size=2048,
+ num_layers=6,
+ num_attention_heads=8,
+ ffn_dropout=0.1,
+ attn_score_dropout=0.1,
+ attn_layer_dropout=0.1,
+ )
)
@@ -150,32 +160,36 @@ class MTBottleneckModelConfig(AAYNBaseConfig):
recon_per_token: bool = True
log_timing: bool = True
- encoder: NeMoTransformerBottleneckEncoderConfig = NeMoTransformerBottleneckEncoderConfig(
- library='nemo',
- model_name=None,
- pretrained=False,
- hidden_size=512,
- inner_size=2048,
- num_layers=6,
- num_attention_heads=8,
- ffn_dropout=0.1,
- attn_score_dropout=0.1,
- attn_layer_dropout=0.1,
- arch='seq2seq',
- hidden_steps=32,
- hidden_blocks=1,
- hidden_init_method='params',
+ encoder: NeMoTransformerBottleneckEncoderConfig = field(
+ default_factory=lambda: NeMoTransformerBottleneckEncoderConfig(
+ library='nemo',
+ model_name=None,
+ pretrained=False,
+ hidden_size=512,
+ inner_size=2048,
+ num_layers=6,
+ num_attention_heads=8,
+ ffn_dropout=0.1,
+ attn_score_dropout=0.1,
+ attn_layer_dropout=0.1,
+ arch='seq2seq',
+ hidden_steps=32,
+ hidden_blocks=1,
+ hidden_init_method='params',
+ )
)
- decoder: NeMoTransformerBottleneckDecoderConfig = NeMoTransformerBottleneckDecoderConfig(
- library='nemo',
- model_name=None,
- pretrained=False,
- inner_size=2048,
- num_layers=6,
- num_attention_heads=8,
- ffn_dropout=0.1,
- attn_score_dropout=0.1,
- attn_layer_dropout=0.1,
- arch='seq2seq',
+ decoder: NeMoTransformerBottleneckDecoderConfig = field(
+ default_factory=lambda: NeMoTransformerBottleneckDecoderConfig(
+ library='nemo',
+ model_name=None,
+ pretrained=False,
+ inner_size=2048,
+ num_layers=6,
+ num_attention_heads=8,
+ ffn_dropout=0.1,
+ attn_score_dropout=0.1,
+ attn_layer_dropout=0.1,
+ arch='seq2seq',
+ )
)
diff --git a/nemo/collections/nlp/models/token_classification/punctuation_capitalization_config.py b/nemo/collections/nlp/models/token_classification/punctuation_capitalization_config.py
--- a/nemo/collections/nlp/models/token_classification/punctuation_capitalization_config.py
+++ b/nemo/collections/nlp/models/token_classification/punctuation_capitalization_config.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import Any, Dict, Optional
from omegaconf.omegaconf import MISSING, DictConfig, OmegaConf, open_dict
@@ -215,13 +215,15 @@ class PunctuationCapitalizationModelConfig:
This config is a part of :class:`~PunctuationCapitalizationConfig`.
"""
- class_labels: ClassLabelsConfig = ClassLabelsConfig()
+ class_labels: ClassLabelsConfig = field(default_factory=lambda: ClassLabelsConfig())
"""A mandatory parameter containing a dictionary with names of label id files used in .nemo checkpoints.
These file names can also be used for passing label vocabularies to the model. If you wish to use ``class_labels``
for passing vocabularies, please provide path to vocabulary files in
``model.common_dataset_parameters.label_vocab_dir`` parameter."""
- common_dataset_parameters: Optional[CommonDatasetParametersConfig] = CommonDatasetParametersConfig()
+ common_dataset_parameters: Optional[CommonDatasetParametersConfig] = field(
+ default_factory=lambda: CommonDatasetParametersConfig()
+ )
"""Label ids and loss mask information information."""
train_ds: Optional[PunctuationCapitalizationTrainDataConfig] = None
@@ -233,16 +235,16 @@ class PunctuationCapitalizationModelConfig:
test_ds: Optional[PunctuationCapitalizationEvalDataConfig] = None
"""A configuration for creating test datasets and data loaders."""
- punct_head: HeadConfig = HeadConfig()
+ punct_head: HeadConfig = field(default_factory=lambda: HeadConfig())
"""A configuration for creating punctuation MLP head that is applied to a language model outputs."""
- capit_head: HeadConfig = HeadConfig()
+ capit_head: HeadConfig = field(default_factory=lambda: HeadConfig())
"""A configuration for creating capitalization MLP head that is applied to a language model outputs."""
- tokenizer: Any = TokenizerConfig()
+ tokenizer: Any = field(default_factory=lambda: TokenizerConfig())
"""A configuration for source text tokenizer."""
- language_model: LanguageModelConfig = LanguageModelConfig()
+ language_model: LanguageModelConfig = field(default_factory=lambda: LanguageModelConfig())
"""A configuration of a BERT-like language model which serves as a model body."""
optim: Optional[Any] = None
@@ -311,22 +313,30 @@ class PunctuationCapitalizationConfig(NemoConfig):
do_testing: bool = False
"""Whether ot perform testing of the model."""
- model: PunctuationCapitalizationModelConfig = PunctuationCapitalizationModelConfig()
+ model: PunctuationCapitalizationModelConfig = field(default_factory=lambda: PunctuationCapitalizationModelConfig())
"""A configuration for the
:class:`~nemo.collections.nlp.models.token_classification.punctuation_capitalization_model.PunctuationCapitalizationModel`
model."""
- trainer: Optional[TrainerConfig] = TrainerConfig()
+ trainer: Optional[TrainerConfig] = field(default_factory=lambda: TrainerConfig())
"""Contains ``Trainer`` Lightning class constructor parameters."""
- exp_manager: Optional[ExpManagerConfig] = ExpManagerConfig(name=name, files_to_copy=[])
+ exp_manager: Optional[ExpManagerConfig] = field(
+ default_factory=lambda: ExpManagerConfig(name=None, files_to_copy=[])
+ )
"""A configuration with various NeMo training options such as output directories, resuming from checkpoint,
tensorboard and W&B logging, and so on. For possible options see :ref:`exp-manager-label`."""
+ def __post_init__(self):
+ if self.exp_manager is not None:
+ self.exp_manager.name = self.name
+
@dataclass
class PunctuationCapitalizationLexicalAudioConfig(PunctuationCapitalizationConfig):
- model: PunctuationCapitalizationLexicalAudioModelConfig = PunctuationCapitalizationLexicalAudioModelConfig()
+ model: PunctuationCapitalizationLexicalAudioModelConfig = field(
+ default_factory=lambda: PunctuationCapitalizationLexicalAudioModelConfig()
+ )
def is_legacy_model_config(model_cfg: DictConfig) -> bool:
diff --git a/nemo/collections/nlp/modules/common/megatron/megatron_encoders.py b/nemo/collections/nlp/modules/common/megatron/megatron_encoders.py
--- a/nemo/collections/nlp/modules/common/megatron/megatron_encoders.py
+++ b/nemo/collections/nlp/modules/common/megatron/megatron_encoders.py
@@ -13,7 +13,6 @@
# limitations under the License.
"""Transformer based language model."""
-from MeCab import Model
from nemo.collections.nlp.modules.common.megatron.megatron_perceiver_encoders import MegatronPerceiverEncoderModule
from nemo.collections.nlp.modules.common.megatron.megatron_transformer_encoder import MegatronTransformerEncoderModule
from nemo.collections.nlp.modules.common.megatron.retrieval_transformer import (
@@ -25,6 +24,13 @@
scaled_init_method_normal,
)
+try:
+ from MeCab import Model
+
+ HAVE_MECAB = True
+except (ImportError, ModuleNotFoundError):
+ HAVE_MECAB = False
+
try:
from apex.transformer.enums import AttnMaskType, ModelType
diff --git a/nemo/collections/tts/models/fastpitch.py b/nemo/collections/tts/models/fastpitch.py
--- a/nemo/collections/tts/models/fastpitch.py
+++ b/nemo/collections/tts/models/fastpitch.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import contextlib
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from pathlib import Path
from typing import List, Optional
@@ -70,12 +70,12 @@ class TextTokenizer:
apostrophe: bool = True
pad_with_space: bool = True
add_blank_at: bool = True
- g2p: G2PConfig = G2PConfig()
+ g2p: G2PConfig = field(default_factory=lambda: G2PConfig())
@dataclass
class TextTokenizerConfig:
- text_tokenizer: TextTokenizer = TextTokenizer()
+ text_tokenizer: TextTokenizer = field(default_factory=lambda: TextTokenizer())
class FastPitchModel(SpectrogramGenerator, Exportable, FastPitchAdapterModelMixin):
diff --git a/nemo/collections/tts/models/tacotron2.py b/nemo/collections/tts/models/tacotron2.py
--- a/nemo/collections/tts/models/tacotron2.py
+++ b/nemo/collections/tts/models/tacotron2.py
@@ -13,7 +13,7 @@
# limitations under the License.
import contextlib
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional
import torch
@@ -53,7 +53,7 @@ class Preprocessor:
@dataclass
class Tacotron2Config:
- preprocessor: Preprocessor = Preprocessor()
+ preprocessor: Preprocessor = field(default_factory=lambda: Preprocessor())
encoder: Dict[Any, Any] = MISSING
decoder: Dict[Any, Any] = MISSING
postnet: Dict[Any, Any] = MISSING
diff --git a/nemo/core/config/modelPT.py b/nemo/core/config/modelPT.py
--- a/nemo/core/config/modelPT.py
+++ b/nemo/core/config/modelPT.py
@@ -58,11 +58,13 @@ class HydraConfig:
class NemoConfig:
name: str = MISSING
model: ModelConfig = MISSING
- trainer: config.TrainerConfig = config.TrainerConfig(
- strategy="ddp", enable_checkpointing=False, logger=False, log_every_n_steps=1, accelerator='gpu'
+ trainer: config.TrainerConfig = field(
+ default_factory=lambda: config.TrainerConfig(
+ strategy="ddp", enable_checkpointing=False, logger=False, log_every_n_steps=1, accelerator='gpu'
+ )
)
- exp_manager: Optional[Any] = exp_manager.ExpManagerConfig()
- hydra: HydraConfig = HydraConfig()
+ exp_manager: Optional[Any] = field(default_factory=lambda: exp_manager.ExpManagerConfig())
+ hydra: HydraConfig = field(default_factory=lambda: HydraConfig())
class ModelConfigBuilder:
diff --git a/nemo/utils/exp_manager.py b/nemo/utils/exp_manager.py
--- a/nemo/utils/exp_manager.py
+++ b/nemo/utils/exp_manager.py
@@ -18,7 +18,7 @@
import sys
import time
import warnings
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from datetime import timedelta
from pathlib import Path
from shutil import copy, move
@@ -146,28 +146,30 @@ class ExpManagerConfig:
create_wandb_logger: Optional[bool] = False
wandb_logger_kwargs: Optional[Dict[Any, Any]] = None
create_mlflow_logger: Optional[bool] = False
- mlflow_logger_kwargs: Optional[MLFlowParams] = MLFlowParams()
+ mlflow_logger_kwargs: Optional[MLFlowParams] = field(default_factory=lambda: MLFlowParams())
create_dllogger_logger: Optional[bool] = False
- dllogger_logger_kwargs: Optional[DLLoggerParams] = DLLoggerParams()
+ dllogger_logger_kwargs: Optional[DLLoggerParams] = field(default_factory=lambda: DLLoggerParams())
create_clearml_logger: Optional[bool] = False
- clearml_logger_kwargs: Optional[ClearMLParams] = ClearMLParams()
+ clearml_logger_kwargs: Optional[ClearMLParams] = field(default_factory=lambda: ClearMLParams())
# Checkpointing parameters
create_checkpoint_callback: Optional[bool] = True
- checkpoint_callback_params: Optional[CallbackParams] = CallbackParams()
+ checkpoint_callback_params: Optional[CallbackParams] = field(default_factory=lambda: CallbackParams())
create_early_stopping_callback: Optional[bool] = False
- early_stopping_callback_params: Optional[EarlyStoppingParams] = EarlyStoppingParams()
+ early_stopping_callback_params: Optional[EarlyStoppingParams] = field(
+ default_factory=lambda: EarlyStoppingParams()
+ )
create_preemption_callback: Optional[bool] = True
# Additional exp_manager arguments
files_to_copy: Optional[List[str]] = None
# logs timing of train/val/test steps
log_step_timing: Optional[bool] = True
- step_timing_kwargs: Optional[StepTimingParams] = StepTimingParams()
+ step_timing_kwargs: Optional[StepTimingParams] = field(default_factory=lambda: StepTimingParams())
# Configures creation of log files for different ranks
log_local_rank_0_only: Optional[bool] = False
log_global_rank_0_only: Optional[bool] = False
# disable initial validation when resuming from a checkpoint saved during validation
disable_validation_on_resume: Optional[bool] = True
- ema: Optional[EMAParams] = EMAParams()
+ ema: Optional[EMAParams] = field(default_factory=lambda: EMAParams())
# Wall clock time limit
max_time_per_run: Optional[str] = None
# time to sleep non 0 ranks during initialization
diff --git a/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram.py b/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram.py
--- a/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram.py
+++ b/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram.py
@@ -112,14 +112,14 @@ class EvalBeamSearchNGramConfig:
beam_beta: List[float] = field(default_factory=lambda: [0.0]) # The beta parameter or list of the betas for the beam search decoding
decoding_strategy: str = "beam"
- decoding: ctc_beam_decoding.BeamCTCInferConfig = ctc_beam_decoding.BeamCTCInferConfig(beam_size=128)
+ decoding: ctc_beam_decoding.BeamCTCInferConfig = field(default_factory=lambda: ctc_beam_decoding.BeamCTCInferConfig(beam_size=128))
- text_processing: Optional[TextProcessingConfig] = TextProcessingConfig(
+ text_processing: Optional[TextProcessingConfig] = field(default_factory=lambda: TextProcessingConfig(
punctuation_marks = ".,?",
separate_punctuation = False,
do_lowercase = False,
rm_punctuation = False,
- )
+ ))
# fmt: on
diff --git a/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram_transducer.py b/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram_transducer.py
--- a/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram_transducer.py
+++ b/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram_transducer.py
@@ -115,7 +115,7 @@ class EvalBeamSearchNGramConfig:
hat_subtract_ilm: bool = False
hat_ilm_weight: List[float] = field(default_factory=lambda: [0.0])
- decoding: rnnt_beam_decoding.BeamRNNTInferConfig = rnnt_beam_decoding.BeamRNNTInferConfig(beam_size=128)
+ decoding: rnnt_beam_decoding.BeamRNNTInferConfig = field(default_factory=lambda: rnnt_beam_decoding.BeamRNNTInferConfig(beam_size=128))
# fmt: on
diff --git a/scripts/confidence_ensembles/build_ensemble.py b/scripts/confidence_ensembles/build_ensemble.py
--- a/scripts/confidence_ensembles/build_ensemble.py
+++ b/scripts/confidence_ensembles/build_ensemble.py
@@ -75,7 +75,7 @@
import sys
import tempfile
from copy import deepcopy
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from pathlib import Path
from typing import Dict, List, Optional, Tuple
@@ -209,19 +209,23 @@ class BuildEnsembleConfig:
random_seed: int = 0 # for reproducibility
# default confidence, can override
- confidence: ConfidenceConfig = ConfidenceConfig(
- # we keep frame confidences and apply aggregation manually to get full-utterance confidence
- preserve_frame_confidence=True,
- exclude_blank=True,
- aggregation="mean",
- measure_cfg=ConfidenceMeasureConfig(name="entropy", entropy_type="renyi", alpha=0.25, entropy_norm="lin",),
+ confidence: ConfidenceConfig = field(
+ default_factory=lambda: ConfidenceConfig(
+ # we keep frame confidences and apply aggregation manually to get full-utterance confidence
+ preserve_frame_confidence=True,
+ exclude_blank=True,
+ aggregation="mean",
+ measure_cfg=ConfidenceMeasureConfig(name="entropy", entropy_type="renyi", alpha=0.25, entropy_norm="lin",),
+ )
)
temperature: float = 1.0
# this is optional, but can be used to change any aspect of the transcription
# config, such as batch size or amp usage. Note that model, data and confidence
# will be overriden by this script
- transcription: transcribe_speech.TranscriptionConfig = transcribe_speech.TranscriptionConfig()
+ transcription: transcribe_speech.TranscriptionConfig = field(
+ default_factory=lambda: transcribe_speech.TranscriptionConfig()
+ )
# set to True to tune the confidence.
# requires dev manifests to be specified for each model
@@ -229,12 +233,14 @@ class BuildEnsembleConfig:
# used to specify what to tune over. By default runs tuning over some
# reasonalbe grid, so that it does not take forever.
# Can be changed as needed
- tune_confidence_config: TuneConfidenceConfig = TuneConfidenceConfig()
+ tune_confidence_config: TuneConfidenceConfig = field(default_factory=lambda: TuneConfidenceConfig())
# very fast to tune and can be important in case of imbalanced datasets
# will automatically set to False if dev data is not available
tune_logistic_regression: bool = True
- tune_logistic_regression_config: TuneLogisticRegressionConfig = TuneLogisticRegressionConfig()
+ tune_logistic_regression_config: TuneLogisticRegressionConfig = field(
+ default_factory=lambda: TuneLogisticRegressionConfig()
+ )
def __post_init__(self):
"""Checking that if any dev data is provided, all are provided.
diff --git a/scripts/speech_recognition/confidence/benchmark_asr_confidence.py b/scripts/speech_recognition/confidence/benchmark_asr_confidence.py
--- a/scripts/speech_recognition/confidence/benchmark_asr_confidence.py
+++ b/scripts/speech_recognition/confidence/benchmark_asr_confidence.py
@@ -14,7 +14,7 @@
import json
import os
-from dataclasses import dataclass, is_dataclass
+from dataclasses import dataclass, field, is_dataclass
from pathlib import Path
from typing import Optional
@@ -124,7 +124,9 @@ class ConfidenceBenchmarkingConfig:
# Confidence configs
target_level: str = "auto" # Choices: "word", "token", "auto" (for both word- and token-level confidence)
- confidence_cfg: ConfidenceConfig = ConfidenceConfig(preserve_word_confidence=True, preserve_token_confidence=True)
+ confidence_cfg: ConfidenceConfig = field(
+ default_factory=lambda: ConfidenceConfig(preserve_word_confidence=True, preserve_token_confidence=True)
+ )
grid_params: Optional[str] = None # a dictionary with lists of parameters to iteratively benchmark on
diff --git a/scripts/speech_recognition/convert_to_tarred_audio_dataset.py b/scripts/speech_recognition/convert_to_tarred_audio_dataset.py
--- a/scripts/speech_recognition/convert_to_tarred_audio_dataset.py
+++ b/scripts/speech_recognition/convert_to_tarred_audio_dataset.py
@@ -202,7 +202,7 @@ class ASRTarredDatasetMetadata:
num_samples_per_shard: Optional[int] = None
is_concatenated_manifest: bool = False
- dataset_config: Optional[ASRTarredDatasetConfig] = ASRTarredDatasetConfig()
+ dataset_config: Optional[ASRTarredDatasetConfig] = field(default_factory=lambda: ASRTarredDatasetConfig())
history: Optional[List[Any]] = field(default_factory=lambda: [])
def __post_init__(self):
diff --git a/tools/nemo_forced_aligner/align.py b/tools/nemo_forced_aligner/align.py
--- a/tools/nemo_forced_aligner/align.py
+++ b/tools/nemo_forced_aligner/align.py
@@ -149,8 +149,8 @@ class AlignmentConfig:
# Output file configs
save_output_file_formats: List[str] = field(default_factory=lambda: ["ctm", "ass"])
- ctm_file_config: CTMFileConfig = CTMFileConfig()
- ass_file_config: ASSFileConfig = ASSFileConfig()
+ ctm_file_config: CTMFileConfig = field(default_factory=lambda: CTMFileConfig())
+ ass_file_config: ASSFileConfig = field(default_factory=lambda: ASSFileConfig())
@hydra_runner(config_name="AlignmentConfig", schema=AlignmentConfig)
</patch>
|
diff --git a/tests/collections/asr/test_text_to_text_dataset.py b/tests/collections/asr/test_text_to_text_dataset.py
--- a/tests/collections/asr/test_text_to_text_dataset.py
+++ b/tests/collections/asr/test_text_to_text_dataset.py
@@ -15,7 +15,7 @@
import json
import multiprocessing
import os
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from pathlib import Path
import pytest
@@ -110,7 +110,7 @@ class TextTokenizerCfg:
apostrophe: bool = True
pad_with_space: bool = True
add_blank_at: bool = True
- g2p: G2PConfig = G2PConfig()
+ g2p: G2PConfig = field(default_factory=lambda: G2PConfig())
config = OmegaConf.create(OmegaConf.to_yaml(TextTokenizerCfg()))
return instantiate(config)
|
1.0
| |||
NVIDIA__NeMo-7616
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Ubuntu 22.04 Python 3.11 [asr]: multiple errors `dataclasses ValueError: mutable default * for field * is not allowed: use default_factory`
**Describe the bug**
After installing the latest stable `1.19.1` from PyPI, or the latest current commit with `[asr]` extras, I'm getting this error from `hydra-core==1.2.0` when trying to import `nemo.collections.asr`:
```
Traceback (most recent call last):
File "/home/alex/T7/src/speaker_verification/nemo_speaker/test.py", line 5, in <module>
import nemo.collections.asr as nemo_asr
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/__init__.py", line 15, in <module>
from nemo.collections.asr import data, losses, models, modules
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/losses/__init__.py", line 15, in <module>
from nemo.collections.asr.losses.angularloss import AngularSoftmaxLoss
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/losses/angularloss.py", line 18, in <module>
from nemo.core.classes import Loss, Typing, typecheck
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/core/__init__.py", line 16, in <module>
from nemo.core.classes import *
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/core/classes/__init__.py", line 16, in <module>
import hydra
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/__init__.py", line 5, in <module>
from hydra import utils
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/utils.py", line 8, in <module>
import hydra._internal.instantiate._instantiate2
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 12, in <module>
from hydra._internal.utils import _locate
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/_internal/utils.py", line 18, in <module>
from hydra.core.utils import get_valid_filename, validate_config_path
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/core/utils.py", line 20, in <module>
from hydra.core.hydra_config import HydraConfig
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/core/hydra_config.py", line 6, in <module>
from hydra.conf import HydraConf
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/conf/__init__.py", line 46, in <module>
class JobConf:
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/conf/__init__.py", line 75, in JobConf
@dataclass
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1230, in dataclass
return wrap(cls)
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1220, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 958, in _process_class
cls_fields.append(_get_field(cls, name, type, kw_only))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 815, in _get_field
raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'hydra.conf.JobConf.JobConfig.OverrideDirname'> for field override_dirname is not allowed: use default_factory
```
If I then manually upgrade `hydra-core` to the current latest (`1.3.2`), I get similar errors from `nemo-toolkit`:
```
Traceback (most recent call last):
File "/home/alex/T7/src/speaker_verification/nemo_speaker/test.py", line 5, in <module>
import nemo.collections.asr as nemo_asr
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/__init__.py", line 15, in <module>
from nemo.collections.asr import data, losses, models, modules
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/losses/__init__.py", line 16, in <module>
from nemo.collections.asr.losses.audio_losses import SDRLoss
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/losses/audio_losses.py", line 21, in <module>
from nemo.collections.asr.parts.preprocessing.features import make_seq_mask_like
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/__init__.py", line 16, in <module>
from nemo.collections.asr.parts.preprocessing.features import FeaturizerFactory, FilterbankFeatures, WaveformFeaturizer
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/features.py", line 44, in <module>
from nemo.collections.asr.parts.preprocessing.perturb import AudioAugmentor
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/perturb.py", line 50, in <module>
from nemo.collections.common.parts.preprocessing import collections, parsers
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/common/__init__.py", line 16, in <module>
from nemo.collections.common import data, losses, parts, tokenizers
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/common/parts/__init__.py", line 15, in <module>
from nemo.collections.common.parts.adapter_modules import LinearAdapter, LinearAdapterConfig
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/common/parts/adapter_modules.py", line 147, in <module>
@dataclass
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1230, in dataclass
return wrap(cls)
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1220, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 958, in _process_class
cls_fields.append(_get_field(cls, name, type, kw_only))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 815, in _get_field
raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'nemo.core.classes.mixins.adapter_mixin_strategies.ResidualAddAdapterStrategyConfig'> for field adapter_strategy is not allowed: use default_factory
```
It's easy to fix with a patch like this:
```
--- nemo/collections/common/parts/adapter_modules.py.orig 2023-08-04 13:55:53.464534800 +0200
+++ nemo/collections/common/parts/adapter_modules.py 2023-08-04 14:05:45.579537700 +0200
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass, is_dataclass
+from dataclasses import dataclass, is_dataclass, field
from typing import Any, Optional
from hydra.utils import instantiate
@@ -151,5 +151,5 @@ class LinearAdapterConfig:
activation: str = 'swish'
norm_position: str = 'pre'
dropout: float = 0.0
- adapter_strategy: Optional[Any] = adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
+ adapter_strategy: Optional[Any] = field(default_factory=adapter_mixin_strategies.ResidualAddAdapterStrategyConfig)
_target_: str = "{0}.{1}".format(LinearAdapter.__module__, LinearAdapter.__name__)
```
However then another error of the kind comes up:
```
Traceback (most recent call last):
File "/home/alex/T7/src/speaker_verification/nemo_speaker/test.py", line 5, in <module>
import nemo.collections.asr as nemo_asr
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/__init__.py", line 15, in <module>
from nemo.collections.asr import data, losses, models, modules
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/models/__init__.py", line 18, in <module>
from nemo.collections.asr.models.clustering_diarizer import ClusteringDiarizer
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/models/clustering_diarizer.py", line 29, in <module>
from nemo.collections.asr.metrics.der import score_labels
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/metrics/der.py", line 24, in <module>
from nemo.collections.asr.metrics.wer import word_error_rate
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/metrics/wer.py", line 27, in <module>
from nemo.collections.asr.parts.submodules import ctc_beam_decoding, ctc_greedy_decoding
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/parts/submodules/ctc_beam_decoding.py", line 593, in <module>
@dataclass
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1230, in dataclass
return wrap(cls)
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1220, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 958, in _process_class
cls_fields.append(_get_field(cls, name, type, kw_only))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 815, in _get_field
raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'nemo.collections.asr.parts.submodules.ctc_beam_decoding.FlashlightConfig'> for field flashlight_cfg is not allowed: use default_factory
```
This can also be fixed accordingly, but all in all, it appears these issues are pretty common in the code base.
Looks like NeMo isn't Python 3.11 ready, at least the `asr` collection.
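For reference, the failure and the fix can be reproduced outside NeMo with a minimal sketch (the `Inner`/`Outer` names here are illustrative, not NeMo classes). Python 3.11 rejects any unhashable field default at class-definition time, and `@dataclass` instances are unhashable by default, so a nested-config default such as `inner: Inner = Inner()` raises the `ValueError` above; wrapping the default in `field(default_factory=...)` defers construction to `__init__`:

```python
from dataclasses import dataclass, field

@dataclass
class Inner:
    beam_size: int = 128

# On Python 3.11+, a direct instance default would raise at class-definition time:
#   inner: Inner = Inner()
#   -> ValueError: mutable default ... for field inner is not allowed: use default_factory

@dataclass
class Outer:
    # no-argument default: pass the class itself as the factory
    inner: Inner = field(default_factory=Inner)
    # default that needs arguments: wrap it in a lambda, as the patches above do
    tuned: Inner = field(default_factory=lambda: Inner(beam_size=4))

a, b = Outer(), Outer()
assert a.inner == b.inner        # equal values...
assert a.inner is not b.inner    # ...but a fresh Inner per instance
assert a.tuned.beam_size == 4
```

Both spellings also work on Python 3.10 and earlier, so the change stays backward compatible.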
**Steps/Code to reproduce bug**
With Python 3.11
1. Install `nemo-toolkit[asr]`, either 1.19.1 or the current HEAD from git (b498d438fc4c35ebf364a9a1c5cd3e29a2c0fe50)
2. Run `import nemo.collections.asr`
**Expected behavior**
Import `nemo.collections.asr` without errors
**Environment overview (please complete the following information)**
- Environment location: Bare-metal
- Method of NeMo install: `pip install nemo-toolkit==1.19.1` or from source b498d438fc4c35ebf364a9a1c5cd3e29a2c0fe50
**Environment details**
- Ubuntu 22.04
- PyTorch version 2.0.1
- Python version 3.11.4
</issue>
<code>
[start of README.rst]
1
2 |status| |documentation| |codeql| |license| |pypi| |pyversion| |downloads| |black|
3
4 .. |status| image:: http://www.repostatus.org/badges/latest/active.svg
5 :target: http://www.repostatus.org/#active
6 :alt: Project Status: Active – The project has reached a stable, usable state and is being actively developed.
7
8 .. |documentation| image:: https://readthedocs.com/projects/nvidia-nemo/badge/?version=main
9 :alt: Documentation
10 :target: https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/
11
12 .. |license| image:: https://img.shields.io/badge/License-Apache%202.0-brightgreen.svg
13 :target: https://github.com/NVIDIA/NeMo/blob/master/LICENSE
14 :alt: NeMo core license and license for collections in this repo
15
16 .. |pypi| image:: https://badge.fury.io/py/nemo-toolkit.svg
17 :target: https://badge.fury.io/py/nemo-toolkit
18 :alt: Release version
19
20 .. |pyversion| image:: https://img.shields.io/pypi/pyversions/nemo-toolkit.svg
21 :target: https://badge.fury.io/py/nemo-toolkit
22 :alt: Python version
23
24 .. |downloads| image:: https://static.pepy.tech/personalized-badge/nemo-toolkit?period=total&units=international_system&left_color=grey&right_color=brightgreen&left_text=downloads
25 :target: https://pepy.tech/project/nemo-toolkit
26 :alt: PyPi total downloads
27
28 .. |codeql| image:: https://github.com/nvidia/nemo/actions/workflows/codeql.yml/badge.svg?branch=main&event=push
29 :target: https://github.com/nvidia/nemo/actions/workflows/codeql.yml
30 :alt: CodeQL
31
32 .. |black| image:: https://img.shields.io/badge/code%20style-black-000000.svg
33 :target: https://github.com/psf/black
34 :alt: Code style: black
35
36 .. _main-readme:
37
38 **NVIDIA NeMo**
39 ===============
40
41 Introduction
42 ------------
43
44 NVIDIA NeMo is a conversational AI toolkit built for researchers working on automatic speech recognition (ASR),
45 text-to-speech synthesis (TTS), large language models (LLMs), and
46 natural language processing (NLP).
47 The primary objective of NeMo is to help researchers from industry and academia to reuse prior work (code and pretrained models)
48 and make it easier to create new `conversational AI models <https://developer.nvidia.com/conversational-ai#started>`_.
49
50 All NeMo models are trained with `Lightning <https://github.com/Lightning-AI/lightning>`_ and
51 training is automatically scalable to 1000s of GPUs.
52 Additionally, NeMo Megatron LLM models can be trained up to 1 trillion parameters using tensor and pipeline model parallelism.
53 NeMo models can be optimized for inference and deployed for production use-cases with `NVIDIA Riva <https://developer.nvidia.com/riva>`_.
54
55 Getting started with NeMo is simple.
56 State of the Art pretrained NeMo models are freely available on `HuggingFace Hub <https://huggingface.co/models?library=nemo&sort=downloads&search=nvidia>`_ and
57 `NVIDIA NGC <https://catalog.ngc.nvidia.com/models?query=nemo&orderBy=weightPopularDESC>`_.
58 These models can be used to transcribe audio, synthesize speech, or translate text in just a few lines of code.
59
60 We have extensive `tutorials <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/starthere/tutorials.html>`_ that
61 can all be run on `Google Colab <https://colab.research.google.com>`_.
62
63 For advanced users that want to train NeMo models from scratch or finetune existing NeMo models
64 we have a full suite of `example scripts <https://github.com/NVIDIA/NeMo/tree/main/examples>`_ that support multi-GPU/multi-node training.
65
66 For scaling NeMo LLM training on Slurm clusters or public clouds, please see the `NVIDIA NeMo Megatron Launcher <https://github.com/NVIDIA/NeMo-Megatron-Launcher>`_.
67 The NM launcher has extensive recipes, scripts, utilities, and documentation for training NeMo LLMs and also has an `Autoconfigurator <https://github.com/NVIDIA/NeMo-Megatron-Launcher#53-using-autoconfigurator-to-find-the-optimal-configuration>`_
68 which can be used to find the optimal model parallel configuration for training on a specific cluster.
69
70 Also see the two introductory videos below for a high level overview of NeMo.
71
72 * Developing State-Of-The-Art Conversational AI Models in Three Lines of Code.
73 * NVIDIA NeMo: Toolkit for Conversational AI at PyData Yerevan 2022.
74
75 |three_lines| |pydata|
76
77 .. |pydata| image:: https://img.youtube.com/vi/J-P6Sczmas8/maxres3.jpg
78 :target: https://www.youtube.com/embed/J-P6Sczmas8?mute=0&start=14&autoplay=0
79 :width: 600
80 :alt: Develop Conversational AI Models in 3 Lines
81
82 .. |three_lines| image:: https://img.youtube.com/vi/wBgpMf_KQVw/maxresdefault.jpg
83 :target: https://www.youtube.com/embed/wBgpMf_KQVw?mute=0&start=0&autoplay=0
84 :width: 600
85 :alt: Introduction at PyData@Yerevan 2022
86
87 Key Features
88 ------------
89
90 * Speech processing
91 * `HuggingFace Space for Audio Transcription (File, Microphone and YouTube) <https://huggingface.co/spaces/smajumdar/nemo_multilingual_language_id>`_
92 * `Automatic Speech Recognition (ASR) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/intro.html>`_
93 * Supported ASR models: `<https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/asr/models.html>`_
94 * Jasper, QuartzNet, CitriNet, ContextNet
95 * Conformer-CTC, Conformer-Transducer, FastConformer-CTC, FastConformer-Transducer
96 * Squeezeformer-CTC and Squeezeformer-Transducer
97 * LSTM-Transducer (RNNT) and LSTM-CTC
98 * Supports the following decoders/losses:
99 * CTC
100 * Transducer/RNNT
101 * Hybrid Transducer/CTC
102 * NeMo Original `Multi-blank Transducers <https://arxiv.org/abs/2211.03541>`_ and `Token-and-Duration Transducers (TDT) <https://arxiv.org/abs/2304.06795>`_
103 * Streaming/Buffered ASR (CTC/Transducer) - `Chunked Inference Examples <https://github.com/NVIDIA/NeMo/tree/stable/examples/asr/asr_chunked_inference>`_
104 * Cache-aware Streaming Conformer with multiple lookaheads - `<https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/asr/models.html#cache-aware-streaming-conformer>`_
105 * Beam Search decoding
106 * `Language Modelling for ASR (CTC and RNNT) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/asr_language_modeling.html>`_: N-gram LM in fusion with Beam Search decoding, Neural Rescoring with Transformer
107 * `Support of long audios for Conformer with memory efficient local attention <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/results.html#inference-on-long-audio>`_
108 * `Speech Classification, Speech Command Recognition and Language Identification <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/speech_classification/intro.html>`_: MatchboxNet (Command Recognition), AmberNet (LangID)
109 * `Voice activity Detection (VAD) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/asr/speech_classification/models.html#marblenet-vad>`_: MarbleNet
110 * ASR with VAD Inference - `Example <https://github.com/NVIDIA/NeMo/tree/stable/examples/asr/asr_vad>`_
111 * `Speaker Recognition <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/speaker_recognition/intro.html>`_: TitaNet, ECAPA_TDNN, SpeakerNet
112 * `Speaker Diarization <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/speaker_diarization/intro.html>`_
113 * Clustering Diarizer: TitaNet, ECAPA_TDNN, SpeakerNet
114 * Neural Diarizer: MSDD (Multi-scale Diarization Decoder)
115 * `Speech Intent Detection and Slot Filling <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/speech_intent_slot/intro.html>`_: Conformer-Transformer
116 * `Pretrained models on different languages. <https://ngc.nvidia.com/catalog/collections/nvidia:nemo_asr>`_: English, Spanish, German, Russian, Chinese, French, Italian, Polish, ...
117 * `NGC collection of pre-trained speech processing models. <https://ngc.nvidia.com/catalog/collections/nvidia:nemo_asr>`_
118 * Natural Language Processing
119 * `NeMo Megatron pre-training of Large Language Models <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/nlp/nemo_megatron/intro.html>`_
120 * `Neural Machine Translation (NMT) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/machine_translation/machine_translation.html>`_
121 * `Punctuation and Capitalization <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/punctuation_and_capitalization.html>`_
122 * `Token classification (named entity recognition) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/token_classification.html>`_
123 * `Text classification <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/text_classification.html>`_
124 * `Joint Intent and Slot Classification <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/joint_intent_slot.html>`_
125 * `Question answering <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/question_answering.html>`_
126 * `GLUE benchmark <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/glue_benchmark.html>`_
127 * `Information retrieval <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/information_retrieval.html>`_
128 * `Entity Linking <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/entity_linking.html>`_
129 * `Dialogue State Tracking <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/sgd_qa.html>`_
130 * `Prompt Learning <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/nemo_megatron/prompt_learning.html>`_
131 * `NGC collection of pre-trained NLP models. <https://ngc.nvidia.com/catalog/collections/nvidia:nemo_nlp>`_
132 * `Synthetic Tabular Data Generation <https://developer.nvidia.com/blog/generating-synthetic-data-with-transformers-a-solution-for-enterprise-data-challenges/>`_
133 * Text-to-Speech Synthesis (TTS):
134 * `Documentation <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/tts/intro.html#>`_
135 * Mel-Spectrogram generators: FastPitch, SSL FastPitch, Mixer-TTS/Mixer-TTS-X, RAD-TTS, Tacotron2
136 * Vocoders: HiFiGAN, UnivNet, WaveGlow
137 * End-to-End Models: VITS
138 * `Pre-trained Model Checkpoints in NVIDIA GPU Cloud (NGC) <https://ngc.nvidia.com/catalog/collections/nvidia:nemo_tts>`_
139 * `Tools <https://github.com/NVIDIA/NeMo/tree/stable/tools>`_
140 * `Text Processing (text normalization and inverse text normalization) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/text_normalization/intro.html>`_
141 * `NeMo Forced Aligner <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/tools/nemo_forced_aligner.html>`_
142 * `CTC-Segmentation tool <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/tools/ctc_segmentation.html>`_
143 * `Speech Data Explorer <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/tools/speech_data_explorer.html>`_: a dash-based tool for interactive exploration of ASR/TTS datasets
144 * `Speech Data Processor <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/tools/speech_data_processor.html>`_
145
146
147 Built for speed, NeMo can utilize NVIDIA's Tensor Cores and scale out training to multiple GPUs and multiple nodes.
148
149 Requirements
150 ------------
151
152 1) Python 3.10 or above
153 2) Pytorch 1.13.1 or above
154 3) NVIDIA GPU for training
155
156 Documentation
157 -------------
158
159 .. |main| image:: https://readthedocs.com/projects/nvidia-nemo/badge/?version=main
160 :alt: Documentation Status
161 :scale: 100%
162 :target: https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/
163
164 .. |stable| image:: https://readthedocs.com/projects/nvidia-nemo/badge/?version=stable
165 :alt: Documentation Status
166 :scale: 100%
167 :target: https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/
168
169 +---------+-------------+------------------------------------------------------------------------------------------------------------------------------------------+
170 | Version | Status | Description |
171 +=========+=============+==========================================================================================================================================+
172 | Latest | |main| | `Documentation of the latest (i.e. main) branch. <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/>`_ |
173 +---------+-------------+------------------------------------------------------------------------------------------------------------------------------------------+
174 | Stable | |stable| | `Documentation of the stable (i.e. most recent release) branch. <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/>`_ |
175 +---------+-------------+------------------------------------------------------------------------------------------------------------------------------------------+
176
177 Tutorials
178 ---------
179 A great way to start with NeMo is by checking `one of our tutorials <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/starthere/tutorials.html>`_.
180
181 Getting help with NeMo
182 ----------------------
183 FAQ can be found on NeMo's `Discussions board <https://github.com/NVIDIA/NeMo/discussions>`_. You are welcome to ask questions or start discussions there.
184
185
186 Installation
187 ------------
188
189 Conda
190 ~~~~~
191
192 We recommend installing NeMo in a fresh Conda environment.
193
194 .. code-block:: bash
195
196 conda create --name nemo python==3.10.12
197 conda activate nemo
198
199 Install PyTorch using their `configurator <https://pytorch.org/get-started/locally/>`_.
200
201 .. code-block:: bash
202
203 conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia
204
205 The command used to install PyTorch may depend on your system. Please use the configurator linked above to find the right command for your system.
206
207 Pip
208 ~~~
209 Use this installation mode if you want the latest released version.
210
211 .. code-block:: bash
212
213 apt-get update && apt-get install -y libsndfile1 ffmpeg
214 pip install Cython
215 pip install nemo_toolkit['all']
216
217 Depending on the shell used, you may need to use ``"nemo_toolkit[all]"`` instead in the above command.
218
219 Pip from source
220 ~~~~~~~~~~~~~~~
221 Use this installation mode if you want the version from a particular GitHub branch (e.g main).
222
223 .. code-block:: bash
224
225 apt-get update && apt-get install -y libsndfile1 ffmpeg
226 pip install Cython
227 python -m pip install git+https://github.com/NVIDIA/NeMo.git@{BRANCH}#egg=nemo_toolkit[all]
228
229
230 From source
231 ~~~~~~~~~~~
232 Use this installation mode if you are contributing to NeMo.
233
234 .. code-block:: bash
235
236 apt-get update && apt-get install -y libsndfile1 ffmpeg
237 git clone https://github.com/NVIDIA/NeMo
238 cd NeMo
239 ./reinstall.sh
240
241 If you only want the toolkit without additional conda-based dependencies, you may replace ``reinstall.sh``
242 with ``pip install -e .`` when your PWD is the root of the NeMo repository.
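Concretely, the toolkit-only variant of the steps above looks like this:

.. code-block:: bash

    git clone https://github.com/NVIDIA/NeMo
    cd NeMo
    pip install -e .  # toolkit only, without the conda-based extras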
243
244 RNNT
245 ~~~~
246 Note that RNNT requires numba to be installed from conda.
247
248 .. code-block:: bash
249
250 conda remove numba
251 pip uninstall numba
252 conda install -c conda-forge numba
253
254 NeMo Megatron
255 ~~~~~~~~~~~~~
256 NeMo Megatron training requires NVIDIA Apex to be installed.
257 Install it manually if not using the NVIDIA PyTorch container.
258
259 To install Apex, run
260
261 .. code-block:: bash
262
263 git clone https://github.com/NVIDIA/apex.git
264 cd apex
265 git checkout 52e18c894223800cb611682dce27d88050edf1de
266 pip install -v --no-build-isolation --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" --global-option="--fast_layer_norm" --global-option="--distributed_adam" --global-option="--deprecated_fused_adam" ./
267
268 It is highly recommended to use the NVIDIA PyTorch or NeMo container if you have issues installing Apex or any other dependencies.
269
270 While installing Apex, an error may be raised if the CUDA version on your system does not match the CUDA version PyTorch was compiled with.
271 This check can be skipped by commenting it out here: https://github.com/NVIDIA/apex/blob/master/setup.py#L32
272
273 ``cuda-nvprof`` is needed to install Apex. The version should match the CUDA version that you are using:
274
275 .. code-block:: bash
276
277 conda install -c nvidia cuda-nvprof=11.8
278
279 The ``packaging`` package is also needed:
280
281 .. code-block:: bash
282
283 pip install packaging
284
285 With the latest versions of Apex, the ``pyproject.toml`` file in the Apex repository may need to be deleted in order to install it locally.
286
287
288 Transformer Engine
289 ~~~~~~~~~~~~~~~~~~
290 NeMo Megatron GPT has been integrated with `NVIDIA Transformer Engine <https://github.com/NVIDIA/TransformerEngine>`_.
291 Transformer Engine enables FP8 training on NVIDIA Hopper GPUs.
292 `Install <https://docs.nvidia.com/deeplearning/transformer-engine/user-guide/installation.html>`_ it manually if not using the NVIDIA PyTorch container.
293
294 .. code-block:: bash
295
296 pip install --upgrade git+https://github.com/NVIDIA/TransformerEngine.git@stable
297
298 It is highly recommended to use the NVIDIA PyTorch or NeMo container if you have issues installing Transformer Engine or any other dependencies.
299
300 Transformer Engine requires PyTorch to be built with CUDA 11.8.
301
302
303 Flash Attention
304 ~~~~~~~~~~~~~~~~~~~~
305 Transformer Engine already supports Flash Attention for GPT models. If you want to use Flash Attention for non-causal models or use it with an attention bias (introduced from position encoding, e.g. Alibi), please install `flash-attn <https://github.com/HazyResearch/flash-attention>`_.
306
307 .. code-block:: bash
308
309 pip install flash-attn
310 pip install triton==2.0.0.dev20221202
311
312 NLP inference UI
313 ~~~~~~~~~~~~~~~~~~~~
314 To launch the inference web UI server, please install `gradio <https://gradio.app/>`_.
315
316 .. code-block:: bash
317
318 pip install gradio==3.34.0
319
320 NeMo Text Processing
321 ~~~~~~~~~~~~~~~~~~~~
322 NeMo Text Processing, specifically (Inverse) Text Normalization, is now a separate repository `https://github.com/NVIDIA/NeMo-text-processing <https://github.com/NVIDIA/NeMo-text-processing>`_.
323
324 Docker containers
325 ~~~~~~~~~~~~~~~~~~
326 We release NeMo containers alongside NeMo releases. For example, NeMo ``r1.20.0`` comes with container ``nemo:23.06``; you may find more details about released containers on the `releases page <https://github.com/NVIDIA/NeMo/releases>`_.
327
328 To use a pre-built container, please run
329
330 .. code-block:: bash
331
332 docker pull nvcr.io/nvidia/nemo:23.06
333
334 To build a NeMo container with the Dockerfile from a branch, please run
335
336 .. code-block:: bash
337
338 DOCKER_BUILDKIT=1 docker build -f Dockerfile -t nemo:latest .
339
340
341 If you choose to work with the main branch, we recommend using NVIDIA's PyTorch container version 23.06-py3 and then installing from GitHub.
342
343 .. code-block:: bash
344
345 docker run --gpus all -it --rm -v <nemo_github_folder>:/NeMo --shm-size=8g \
346 -p 8888:8888 -p 6006:6006 --ulimit memlock=-1 --ulimit \
347 stack=67108864 --device=/dev/snd nvcr.io/nvidia/pytorch:23.06-py3
348
349 Examples
350 --------
351
352 Many examples can be found under the `"Examples" <https://github.com/NVIDIA/NeMo/tree/stable/examples>`_ folder.
353
354
355 Contributing
356 ------------
357
358 We welcome community contributions! Please refer to `CONTRIBUTING.md <https://github.com/NVIDIA/NeMo/blob/stable/CONTRIBUTING.md>`_ for the process.
359
360 Publications
361 ------------
362
363 We provide an ever-growing list of publications that utilize the NeMo framework. Please refer to `PUBLICATIONS.md <https://github.com/NVIDIA/NeMo/tree/stable/PUBLICATIONS.md>`_. We welcome the addition of your own articles to this list!
364
365 License
366 -------
367 NeMo is under `Apache 2.0 license <https://github.com/NVIDIA/NeMo/blob/stable/LICENSE>`_.
368
[end of README.rst]
[start of examples/asr/experimental/k2/align_speech_parallel.py]
1 # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """
16 # Based on examples/asr/transcribe_speech_parallel.py
17 # ASR alignment with multi-GPU/multi-node support for large datasets
18 # It supports both tarred and non-tarred datasets
19 # Arguments
20 # model: path to a nemo/PTL checkpoint file or name of a pretrained model
21 # predict_ds: config of the dataset/dataloader
22 # aligner_args: aligner config
23 # output_path: path to store the predictions
24 # model_stride: model downsampling factor, 8 for Citrinet models and 4 for Conformer models
25 #
26 # Results of each GPU/worker are written into a file named 'predictions_{rank}.json', and aggregated results of all workers are written into 'predictions_all.json'
27
28 Example for non-tarred datasets:
29
30 python align_speech_parallel.py \
31 model=stt_en_conformer_ctc_large \
32 predict_ds.manifest_filepath=/dataset/manifest_file.json \
33 predict_ds.batch_size=16 \
34 output_path=/tmp/
35
36 Example for tarred datasets:
37
38 python align_speech_parallel.py \
39 predict_ds.is_tarred=true \
40 predict_ds.manifest_filepath=/tarred_dataset/tarred_audio_manifest.json \
41 predict_ds.tarred_audio_filepaths=/tarred_dataset/audio__OP_0..127_CL_.tar \
42 ...
43
44 By default the trainer uses all the GPUs available and default precision is FP32.
45 By setting the trainer config you may control these configs. For example to do the predictions with AMP on just two GPUs:
46
47 python align_speech_parallel.py \
48 trainer.precision=16 \
49 trainer.gpus=2 \
50 ...
51
52 You may control the dataloader's config by setting the predict_ds:
53
54 python align_speech_parallel.py \
55 predict_ds.num_workers=8 \
56 predict_ds.min_duration=2.0 \
57 predict_ds.sample_rate=16000 \
58 model=stt_en_conformer_ctc_small \
59 ...
60
61 You may control the aligner's config by setting the aligner_args:
62 aligner_args.alignment_type=argmax \
63 aligner_args.word_output=False \
64 aligner_args.cpu_decoding=True \
65 aligner_args.decode_batch_size=8 \
66 aligner_args.ctc_cfg.prob_suppress_index=-1 \
67 aligner_args.ctc_cfg.prob_suppress_value=0.5 \
68 aligner_args.rnnt_cfg.predictor_window_size=10 \
69 aligner_args.decoder_module_cfg.intersect_pruned=true \
70 aligner_args.decoder_module_cfg.intersect_conf.search_beam=40 \
71 ...
72
73 """
74
75
76 import os
77 from dataclasses import dataclass, is_dataclass
78 from typing import Optional
79
80 import pytorch_lightning as ptl
81 import torch
82 from omegaconf import MISSING, OmegaConf
83
84 from nemo.collections.asr.data.audio_to_ctm_dataset import ASRCTMPredictionWriter
85 from nemo.collections.asr.models import ASRModel
86 from nemo.collections.asr.models.configs.aligner_config import K2AlignerWrapperModelConfig
87 from nemo.collections.asr.models.configs.asr_models_config import ASRDatasetConfig
88 from nemo.collections.asr.models.k2_aligner_model import AlignerWrapperModel
89 from nemo.core.config import TrainerConfig, hydra_runner
90 from nemo.utils import logging
91 from nemo.utils.get_rank import is_global_rank_zero
92
93
94 @dataclass
95 class ParallelAlignmentConfig:
96 model: Optional[str] = None  # path to a .nemo/.ckpt checkpoint file or name of a pretrained model
97 predict_ds: ASRDatasetConfig = ASRDatasetConfig(return_sample_id=True, num_workers=4)
98 aligner_args: K2AlignerWrapperModelConfig = K2AlignerWrapperModelConfig()
99 output_path: str = MISSING
100 model_stride: int = 8
101
102 trainer: TrainerConfig = TrainerConfig(gpus=-1, accelerator="ddp")
103
104 # these arguments will be ignored
105 return_predictions: bool = False
106 use_cer: bool = False
107
108
109 def match_train_config(predict_ds, train_ds):
110 # It copies the important configurations from the train dataset of the model
111 # into the predict_ds to be used for prediction. It is needed to match the training configurations.
112 if train_ds is None:
113 return
114
115 predict_ds.sample_rate = train_ds.get("sample_rate", 16000)
116 cfg_name_list = [
117 "int_values",
118 "use_start_end_token",
119 "blank_index",
120 "unk_index",
121 "normalize",
122 "parser",
123 "eos_id",
124 "bos_id",
125 "pad_id",
126 ]
127
128 if is_dataclass(predict_ds):
129 predict_ds = OmegaConf.structured(predict_ds)
130 for cfg_name in cfg_name_list:
131 if hasattr(train_ds, cfg_name):
132 setattr(predict_ds, cfg_name, getattr(train_ds, cfg_name))
133
134 return predict_ds
135
136
137 @hydra_runner(config_name="AlignmentConfig", schema=ParallelAlignmentConfig)
138 def main(cfg: ParallelAlignmentConfig):
139 if cfg.model.endswith(".nemo"):
140 logging.info("Attempting to initialize from .nemo file")
141 model = ASRModel.restore_from(restore_path=cfg.model, map_location="cpu")
142 elif cfg.model.endswith(".ckpt"):
143 logging.info("Attempting to initialize from .ckpt file")
144 model = ASRModel.load_from_checkpoint(checkpoint_path=cfg.model, map_location="cpu")
145 else:
146 logging.info(
147 "Attempting to initialize from a pretrained model as the model name does not have the extension of .nemo or .ckpt"
148 )
149 model = ASRModel.from_pretrained(model_name=cfg.model, map_location="cpu")
150
151 trainer = ptl.Trainer(**cfg.trainer)
152
153 cfg.predict_ds.return_sample_id = True
154 cfg.return_predictions = False
155 cfg.use_cer = False
156 cfg.predict_ds = match_train_config(predict_ds=cfg.predict_ds, train_ds=model._cfg.train_ds)
157 data_loader = model._setup_dataloader_from_config(cfg.predict_ds)
158
159 os.makedirs(cfg.output_path, exist_ok=True)
160 # trainer.global_rank is not valid before predict() is called. Need this hack to find the correct global_rank.
161 global_rank = trainer.node_rank * trainer.num_devices + int(os.environ.get("LOCAL_RANK", 0))
162 output_file = os.path.join(cfg.output_path, f"predictions_{global_rank}.json")
163 output_ctm_dir = os.path.join(cfg.output_path, "ctm")
164 predictor_writer = ASRCTMPredictionWriter(
165 dataset=data_loader.dataset,
166 output_file=output_file,
167 output_ctm_dir=output_ctm_dir,
168 time_per_frame=cfg.model_stride * model._cfg.preprocessor['window_stride'],
169 )
170 trainer.callbacks.extend([predictor_writer])
171
172 aligner_wrapper = AlignerWrapperModel(model=model, cfg=cfg.aligner_args)
173 trainer.predict(model=aligner_wrapper, dataloaders=data_loader, return_predictions=cfg.return_predictions)
174 samples_num = predictor_writer.close_output_file()
175
176 logging.info(
177 f"Prediction on rank {global_rank} is done for {samples_num} samples and results are stored in {output_file}."
178 )
179
180 if torch.distributed.is_initialized():
181 torch.distributed.barrier()
182
183 samples_num = 0
184 if is_global_rank_zero():
185 output_file = os.path.join(cfg.output_path, f"predictions_all.json")
186 logging.info(f"Prediction files are being aggregated in {output_file}.")
187 with open(output_file, 'tw', encoding="utf-8") as outf:
188 for rank in range(trainer.world_size):
189 input_file = os.path.join(cfg.output_path, f"predictions_{rank}.json")
190 with open(input_file, 'r', encoding="utf-8") as inpf:
191 lines = inpf.readlines()
192 samples_num += len(lines)
193 outf.writelines(lines)
194 logging.info(
195 f"Prediction is done for {samples_num} samples in total on all workers and results are aggregated in {output_file}."
196 )
197
198
199 if __name__ == '__main__':
200 main()
201
[end of examples/asr/experimental/k2/align_speech_parallel.py]
[start of nemo/collections/asr/metrics/rnnt_wer.py]
1 # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import copy
16 import re
17 from abc import abstractmethod
18 from dataclasses import dataclass, is_dataclass
19 from typing import Callable, Dict, List, Optional, Tuple, Union
20
21 import editdistance
22 import numpy as np
23 import torch
24 from omegaconf import OmegaConf
25 from torchmetrics import Metric
26
27 from nemo.collections.asr.metrics.wer import move_dimension_to_the_front
28 from nemo.collections.asr.parts.submodules import rnnt_beam_decoding as beam_decode
29 from nemo.collections.asr.parts.submodules import rnnt_greedy_decoding as greedy_decode
30 from nemo.collections.asr.parts.utils.asr_confidence_utils import ConfidenceConfig, ConfidenceMixin
31 from nemo.collections.asr.parts.utils.rnnt_utils import Hypothesis, NBestHypotheses
32 from nemo.utils import logging
33
34 __all__ = ['RNNTDecoding', 'RNNTWER']
35
36
37 class AbstractRNNTDecoding(ConfidenceMixin):
38 """
39 Used for performing RNN-T auto-regressive decoding of the Decoder+Joint network given the encoder state.
40
41 Args:
42 decoding_cfg: A dict-like object which contains the following key-value pairs.
43 strategy: str value which represents the type of decoding that can occur.
44 Possible values are :
45 - greedy, greedy_batch (for greedy decoding).
46 - beam, tsd, alsd (for beam search decoding).
47
48 compute_hypothesis_token_set: A bool flag, which determines whether to compute a list of decoded
49 tokens as well as the decoded string. Default is False in order to avoid double decoding
50 unless required.
51
52 preserve_alignments: Bool flag which preserves the history of logprobs generated during
53 decoding (sample / batched). When set to true, the Hypothesis will contain
54 the non-null value for `alignments` in it. Here, `alignments` is a List of List of
55 Tuple(Tensor (of length V + 1), Tensor(scalar, label after argmax)).
56
57 In order to obtain this hypothesis, please utilize `rnnt_decoder_predictions_tensor` function
58 with the `return_hypotheses` flag set to True.
59
60 The length of the list corresponds to the Acoustic Length (T).
61 Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more targets from a vocabulary.
62 U is the number of target tokens for the current timestep Ti.
63
64 compute_timestamps: A bool flag, which determines whether to compute the character/subword, or
65 word based timestamp mapping the output log-probabilities to discrete intervals of timestamps.
66 The timestamps will be available in the returned Hypothesis.timestep as a dictionary.
67
68 rnnt_timestamp_type: A str value, which represents the types of timestamps that should be calculated.
69 Can take the following values - "char" for character/subword time stamps, "word" for word level
70 time stamps and "all" (default), for both character level and word level time stamps.
71
72 word_seperator: Str token representing the separator between words.
73
74 preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores
75 generated during decoding (sample / batched). When set to true, the Hypothesis will contain
76 the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of List of floats.
77
78 confidence_cfg: A dict-like object which contains the following key-value pairs related to confidence
79 scores. In order to obtain hypotheses with confidence scores, please utilize
80 `rnnt_decoder_predictions_tensor` function with the `preserve_frame_confidence` flag set to True.
81
82 preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores
83 generated during decoding (sample / batched). When set to true, the Hypothesis will contain
84 the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of List of floats.
85
86 The length of the list corresponds to the Acoustic Length (T).
87 Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more confidence scores.
88 U is the number of target tokens for the current timestep Ti.
89 preserve_token_confidence: Bool flag which preserves the history of per-token confidence scores
90 generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
91 the non-null value for `token_confidence` in it. Here, `token_confidence` is a List of floats.
92
93 The length of the list corresponds to the number of recognized tokens.
94 preserve_word_confidence: Bool flag which preserves the history of per-word confidence scores
95 generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
96 the non-null value for `word_confidence` in it. Here, `word_confidence` is a List of floats.
97
98 The length of the list corresponds to the number of recognized words.
99 exclude_blank: Bool flag indicating that blank token confidence scores are to be excluded
100 from the `token_confidence`.
101 aggregation: Which aggregation type to use for collapsing per-token confidence into per-word confidence.
102 Valid options are `mean`, `min`, `max`, `prod`.
103 method_cfg: A dict-like object which contains the method name and settings to compute per-frame
104 confidence scores.
105
106 name: The method name (str).
107 Supported values:
108 - 'max_prob' for using the maximum token probability as a confidence.
109 - 'entropy' for using a normalized entropy of a log-likelihood vector.
110
111 entropy_type: Which type of entropy to use (str).
112 Used if confidence_method_cfg.name is set to `entropy`.
113 Supported values:
114 - 'gibbs' for the (standard) Gibbs entropy. If the alpha (α) is provided,
115 the formula is the following: H_α = -sum_i((p^α_i)*log(p^α_i)).
116 Note that for this entropy, the alpha should comply with the following inequality:
117 (log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= α <= (1+log(V-1))/log(V-1)
118 where V is the model vocabulary size.
119 - 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
120 Tsallis entropy formula is the following: H_α = 1/(α-1)*(1-sum_i(p^α_i)),
121 where α is a parameter. When α == 1, it works like the Gibbs entropy.
122 More: https://en.wikipedia.org/wiki/Tsallis_entropy
123 - 'renyi' for the Rényi entropy.
124 Rényi entropy formula is the following: H_α = 1/(1-α)*log_2(sum_i(p^α_i)),
125 where α is a parameter. When α == 1, it works like the Gibbs entropy.
126 More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
127 
128 alpha: Power scale for logsoftmax (α for entropies). Here we restrict it to be > 0.
129 When the alpha equals one, scaling is not applied to 'max_prob',
130 and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
131
132 entropy_norm: A mapping of the entropy value to the interval [0,1].
133 Supported values:
134 - 'lin' for using the linear mapping.
135 - 'exp' for using exponential mapping with linear shift.
136
137 The config may further contain the following sub-dictionaries:
138 "greedy":
139 max_symbols: int, describing the maximum number of target tokens to decode per
140 timestep during greedy decoding. Setting to larger values allows longer sentences
141 to be decoded, at the cost of increased execution time.
142 preserve_frame_confidence: Same as above, overrides above value.
143 confidence_method_cfg: Same as above, overrides confidence_cfg.method_cfg.
144
145 "beam":
146 beam_size: int, defining the beam size for beam search. Must be >= 1.
147 If beam_size == 1, will perform cached greedy search. This might give slightly different
148 results compared to the greedy search above.
149
150 score_norm: optional bool, whether to normalize the returned beam score in the hypotheses.
151 Set to True by default.
152
153 return_best_hypothesis: optional bool, whether to return just the best hypothesis or all of the
154 hypotheses after beam search has concluded. This flag is set by default.
155
156 tsd_max_sym_exp: optional int, determines number of symmetric expansions of the target symbols
157 per timestep of the acoustic model. Larger values will allow longer sentences to be decoded,
158 at increased cost to execution time.
159
160 alsd_max_target_len: optional int or float, determines the potential maximum target sequence length.
161 If an integer is provided, it can decode sequences of that particular maximum length.
162 If a float is provided, it can decode sequences of int(alsd_max_target_len * seq_len),
163 where seq_len is the length of the acoustic model output (T).
164
165 NOTE:
166 If a float is provided, it can be greater than 1!
167 By default, a float of 2.0 is used so that a target sequence can be at most twice
168 as long as the acoustic model output length T.
169
170 maes_num_steps: Number of adaptive steps to take. From the paper, 2 steps is generally sufficient,
171 and can be reduced to 1 to improve decoding speed while sacrificing some accuracy. int > 0.
172
173 maes_prefix_alpha: Maximum prefix length in prefix search. Must be an integer; it is advised to keep this as 1
174 in order to reduce expensive beam search cost later. int >= 0.
175
176 maes_expansion_beta: Maximum number of prefix expansions allowed, in addition to the beam size.
177 Effectively, the number of hypotheses = beam_size + maes_expansion_beta. Must be an int >= 0,
178 and affects the speed of inference since large values will perform large beam search in the next step.
179
180 maes_expansion_gamma: Float pruning threshold used in the prune-by-value step when computing the expansions.
181 The default (2.3) is selected from the paper. It performs a comparison (max_log_prob - gamma <= log_prob[v])
182 where v is all vocabulary indices in the Vocab set and max_log_prob is the "most" likely token to be
183 predicted. Gamma therefore provides a margin of additional tokens which can be potential candidates for
184 expansion apart from the "most likely" candidate.
185 Lower values will reduce the number of expansions (by increasing pruning-by-value, thereby improving speed
186 but hurting accuracy). Higher values will increase the number of expansions (by reducing pruning-by-value,
187 thereby reducing speed but potentially improving accuracy). This is a hyper parameter to be experimentally
188 tuned on a validation set.
189
190 softmax_temperature: Scales the logits of the joint prior to computing log_softmax.
191
192 decoder: The Decoder/Prediction network module.
193 joint: The Joint network module.
194 blank_id: The id of the RNNT blank token.
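As a rough, self-contained illustration of the entropy-based confidence described above (a sketch only; the scoring NeMo actually uses comes from ``ConfidenceMixin`` in ``nemo.collections.asr.parts.utils.asr_confidence_utils``, including the 'exp' normalization not shown here):

```python
import math

def entropy(probs, alpha=1.0, entropy_type="gibbs"):
    # Entropy formulas as written in the docstring above; with alpha == 1
    # every type reduces to the Shannon entropy H = -sum_i(p_i * log(p_i)).
    if alpha == 1.0:
        return -sum(p * math.log(p) for p in probs if p > 0)
    if entropy_type == "gibbs":
        return -sum((p ** alpha) * math.log(p ** alpha) for p in probs if p > 0)
    if entropy_type == "tsallis":
        return (1.0 - sum(p ** alpha for p in probs)) / (alpha - 1.0)
    if entropy_type == "renyi":
        return math.log2(sum(p ** alpha for p in probs)) / (1.0 - alpha)
    raise ValueError(f"unknown entropy type: {entropy_type}")

def lin_confidence(probs):
    # 'lin' maps the Shannon entropy onto [0, 1]: the uniform distribution
    # (maximum entropy, log V) gives confidence 0, a one-hot one gives 1.
    return 1.0 - entropy(probs) / math.log(len(probs))

peaked = [0.97, 0.01, 0.01, 0.01]   # model is fairly sure: high confidence
uniform = [0.25, 0.25, 0.25, 0.25]  # model is guessing: confidence near 0
```

Here ``lin_confidence(peaked)`` is about 0.88 while ``lin_confidence(uniform)`` is 0.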
195 """
196
197 def __init__(self, decoding_cfg, decoder, joint, blank_id: int):
198 super(AbstractRNNTDecoding, self).__init__()
199
200 # Convert dataclass to config object
201 if is_dataclass(decoding_cfg):
202 decoding_cfg = OmegaConf.structured(decoding_cfg)
203
204 self.cfg = decoding_cfg
205 self.blank_id = blank_id
206 self.num_extra_outputs = joint.num_extra_outputs
207 self.big_blank_durations = self.cfg.get("big_blank_durations", None)
208 self.durations = self.cfg.get("durations", None)
209 self.compute_hypothesis_token_set = self.cfg.get("compute_hypothesis_token_set", False)
210 self.compute_langs = decoding_cfg.get('compute_langs', False)
211 self.preserve_alignments = self.cfg.get('preserve_alignments', None)
212 self.joint_fused_batch_size = self.cfg.get('fused_batch_size', None)
213 self.compute_timestamps = self.cfg.get('compute_timestamps', None)
214 self.word_seperator = self.cfg.get('word_seperator', ' ')
215
216 if self.durations is not None: # this means it's a TDT model.
217 if blank_id == 0:
218 raise ValueError("blank_id must equal len(non_blank_vocabs) for TDT models")
219 if self.big_blank_durations is not None:
220 raise ValueError("duration and big_blank_durations can't both be not None")
221 if self.cfg.strategy not in ['greedy', 'greedy_batch']:
222 raise ValueError("currently only greedy and greedy_batch inference is supported for TDT models")
223
224 if self.big_blank_durations is not None: # this means it's a multi-blank model.
225 if blank_id == 0:
226 raise ValueError("blank_id must equal len(vocabs) for multi-blank RNN-T models")
227 if self.cfg.strategy not in ['greedy', 'greedy_batch']:
228 raise ValueError(
229 "currently only greedy and greedy_batch inference is supported for multi-blank models"
230 )
231
232 possible_strategies = ['greedy', 'greedy_batch', 'beam', 'tsd', 'alsd', 'maes']
233 if self.cfg.strategy not in possible_strategies:
234 raise ValueError(f"Decoding strategy must be one of {possible_strategies}")
235
236 # Update preserve alignments
237 if self.preserve_alignments is None:
238 if self.cfg.strategy in ['greedy', 'greedy_batch']:
239 self.preserve_alignments = self.cfg.greedy.get('preserve_alignments', False)
240
241 elif self.cfg.strategy in ['beam', 'tsd', 'alsd', 'maes']:
242 self.preserve_alignments = self.cfg.beam.get('preserve_alignments', False)
243
244 # Update compute timestamps
245 if self.compute_timestamps is None:
246 if self.cfg.strategy in ['greedy', 'greedy_batch']:
247 self.compute_timestamps = self.cfg.greedy.get('compute_timestamps', False)
248
249 elif self.cfg.strategy in ['beam', 'tsd', 'alsd', 'maes']:
250 self.compute_timestamps = self.cfg.beam.get('compute_timestamps', False)
251
252 # Test if alignments are being preserved for RNNT
253 if self.compute_timestamps is True and self.preserve_alignments is False:
254 raise ValueError("If `compute_timesteps` flag is set, then `preserve_alignments` flag must also be set.")
255
256 # initialize confidence-related fields
257 self._init_confidence(self.cfg.get('confidence_cfg', None))
258
259 # Confidence estimation is not implemented for these strategies
260 if (
261 not self.preserve_frame_confidence
262 and self.cfg.strategy in ['beam', 'tsd', 'alsd', 'maes']
263 and self.cfg.beam.get('preserve_frame_confidence', False)
264 ):
265 raise NotImplementedError(f"Confidence calculation is not supported for strategy `{self.cfg.strategy}`")
266
267 if self.cfg.strategy == 'greedy':
268 if self.big_blank_durations is None:
269 if self.durations is None:
270 self.decoding = greedy_decode.GreedyRNNTInfer(
271 decoder_model=decoder,
272 joint_model=joint,
273 blank_index=self.blank_id,
274 max_symbols_per_step=(
275 self.cfg.greedy.get('max_symbols', None)
276 or self.cfg.greedy.get('max_symbols_per_step', None)
277 ),
278 preserve_alignments=self.preserve_alignments,
279 preserve_frame_confidence=self.preserve_frame_confidence,
280 confidence_method_cfg=self.confidence_method_cfg,
281 )
282 else:
283 self.decoding = greedy_decode.GreedyTDTInfer(
284 decoder_model=decoder,
285 joint_model=joint,
286 blank_index=self.blank_id,
287 durations=self.durations,
288 max_symbols_per_step=(
289 self.cfg.greedy.get('max_symbols', None)
290 or self.cfg.greedy.get('max_symbols_per_step', None)
291 ),
292 preserve_alignments=self.preserve_alignments,
293 preserve_frame_confidence=self.preserve_frame_confidence,
294 confidence_method_cfg=self.confidence_method_cfg,
295 )
296 else:
297 self.decoding = greedy_decode.GreedyMultiblankRNNTInfer(
298 decoder_model=decoder,
299 joint_model=joint,
300 blank_index=self.blank_id,
301 big_blank_durations=self.big_blank_durations,
302 max_symbols_per_step=(
303 self.cfg.greedy.get('max_symbols', None) or self.cfg.greedy.get('max_symbols_per_step', None)
304 ),
305 preserve_alignments=self.preserve_alignments,
306 preserve_frame_confidence=self.preserve_frame_confidence,
307 confidence_method_cfg=self.confidence_method_cfg,
308 )
309
310 elif self.cfg.strategy == 'greedy_batch':
311 if self.big_blank_durations is None:
312 if self.durations is None:
313 self.decoding = greedy_decode.GreedyBatchedRNNTInfer(
314 decoder_model=decoder,
315 joint_model=joint,
316 blank_index=self.blank_id,
317 max_symbols_per_step=(
318 self.cfg.greedy.get('max_symbols', None)
319 or self.cfg.greedy.get('max_symbols_per_step', None)
320 ),
321 preserve_alignments=self.preserve_alignments,
322 preserve_frame_confidence=self.preserve_frame_confidence,
323 confidence_method_cfg=self.confidence_method_cfg,
324 )
325 else:
326 self.decoding = greedy_decode.GreedyBatchedTDTInfer(
327 decoder_model=decoder,
328 joint_model=joint,
329 blank_index=self.blank_id,
330 durations=self.durations,
331 max_symbols_per_step=(
332 self.cfg.greedy.get('max_symbols', None)
333 or self.cfg.greedy.get('max_symbols_per_step', None)
334 ),
335 preserve_alignments=self.preserve_alignments,
336 preserve_frame_confidence=self.preserve_frame_confidence,
337 confidence_method_cfg=self.confidence_method_cfg,
338 )
339
340 else:
341 self.decoding = greedy_decode.GreedyBatchedMultiblankRNNTInfer(
342 decoder_model=decoder,
343 joint_model=joint,
344 blank_index=self.blank_id,
345 big_blank_durations=self.big_blank_durations,
346 max_symbols_per_step=(
347 self.cfg.greedy.get('max_symbols', None) or self.cfg.greedy.get('max_symbols_per_step', None)
348 ),
349 preserve_alignments=self.preserve_alignments,
350 preserve_frame_confidence=self.preserve_frame_confidence,
351 confidence_method_cfg=self.confidence_method_cfg,
352 )
353
354 elif self.cfg.strategy == 'beam':
355
356 self.decoding = beam_decode.BeamRNNTInfer(
357 decoder_model=decoder,
358 joint_model=joint,
359 beam_size=self.cfg.beam.beam_size,
360 return_best_hypothesis=decoding_cfg.beam.get('return_best_hypothesis', True),
361 search_type='default',
362 score_norm=self.cfg.beam.get('score_norm', True),
363 softmax_temperature=self.cfg.beam.get('softmax_temperature', 1.0),
364 preserve_alignments=self.preserve_alignments,
365 )
366
367 elif self.cfg.strategy == 'tsd':
368
369 self.decoding = beam_decode.BeamRNNTInfer(
370 decoder_model=decoder,
371 joint_model=joint,
372 beam_size=self.cfg.beam.beam_size,
373 return_best_hypothesis=decoding_cfg.beam.get('return_best_hypothesis', True),
374 search_type='tsd',
375 score_norm=self.cfg.beam.get('score_norm', True),
376 tsd_max_sym_exp_per_step=self.cfg.beam.get('tsd_max_sym_exp', 10),
377 softmax_temperature=self.cfg.beam.get('softmax_temperature', 1.0),
378 preserve_alignments=self.preserve_alignments,
379 )
380
381 elif self.cfg.strategy == 'alsd':
382
383 self.decoding = beam_decode.BeamRNNTInfer(
384 decoder_model=decoder,
385 joint_model=joint,
386 beam_size=self.cfg.beam.beam_size,
387 return_best_hypothesis=decoding_cfg.beam.get('return_best_hypothesis', True),
388 search_type='alsd',
389 score_norm=self.cfg.beam.get('score_norm', True),
390 alsd_max_target_len=self.cfg.beam.get('alsd_max_target_len', 2),
391 softmax_temperature=self.cfg.beam.get('softmax_temperature', 1.0),
392 preserve_alignments=self.preserve_alignments,
393 )
394
395 elif self.cfg.strategy == 'maes':
396
397 self.decoding = beam_decode.BeamRNNTInfer(
398 decoder_model=decoder,
399 joint_model=joint,
400 beam_size=self.cfg.beam.beam_size,
401 return_best_hypothesis=decoding_cfg.beam.get('return_best_hypothesis', True),
402 search_type='maes',
403 score_norm=self.cfg.beam.get('score_norm', True),
404 maes_num_steps=self.cfg.beam.get('maes_num_steps', 2),
405 maes_prefix_alpha=self.cfg.beam.get('maes_prefix_alpha', 1),
406 maes_expansion_gamma=self.cfg.beam.get('maes_expansion_gamma', 2.3),
407 maes_expansion_beta=self.cfg.beam.get('maes_expansion_beta', 2.0),
408 softmax_temperature=self.cfg.beam.get('softmax_temperature', 1.0),
409 preserve_alignments=self.preserve_alignments,
410 ngram_lm_model=self.cfg.beam.get('ngram_lm_model', None),
411 ngram_lm_alpha=self.cfg.beam.get('ngram_lm_alpha', 0.0),
412 hat_subtract_ilm=self.cfg.beam.get('hat_subtract_ilm', False),
413 hat_ilm_weight=self.cfg.beam.get('hat_ilm_weight', 0.0),
414 )
415
416 else:
417
418 raise ValueError(
419 f"Incorrect decoding strategy supplied. Must be one of {possible_strategies}\n"
420 f"but was provided {self.cfg.strategy}"
421 )
422
423 # Update the joint fused batch size or disable it entirely if needed.
424 self.update_joint_fused_batch_size()
425
426 def rnnt_decoder_predictions_tensor(
427 self,
428 encoder_output: torch.Tensor,
429 encoded_lengths: torch.Tensor,
430 return_hypotheses: bool = False,
431 partial_hypotheses: Optional[List[Hypothesis]] = None,
432 ) -> Tuple[List[str], Optional[List[List[str]]], Optional[Union[Hypothesis, NBestHypotheses]]]:
433 """
434 Decode an encoder output by autoregressive decoding of the Decoder+Joint networks.
435
436 Args:
437 encoder_output: torch.Tensor of shape [B, D, T].
438 encoded_lengths: torch.Tensor containing lengths of the padded encoder outputs. Shape [B].
439             return_hypotheses: bool. If set to True, it will return a list of Hypothesis or NBestHypotheses
440
441 Returns:
442 If `return_best_hypothesis` is set:
443 A tuple (hypotheses, None):
444 hypotheses - list of Hypothesis (best hypothesis per sample).
445 Look at rnnt_utils.Hypothesis for more information.
446
447 If `return_best_hypothesis` is not set:
448 A tuple(hypotheses, all_hypotheses)
449 hypotheses - list of Hypothesis (best hypothesis per sample).
450 Look at rnnt_utils.Hypothesis for more information.
451 all_hypotheses - list of NBestHypotheses. Each NBestHypotheses further contains a sorted
452 list of all the hypotheses of the model per sample.
453 Look at rnnt_utils.NBestHypotheses for more information.
454 """
455 # Compute hypotheses
456 with torch.inference_mode():
457 hypotheses_list = self.decoding(
458 encoder_output=encoder_output, encoded_lengths=encoded_lengths, partial_hypotheses=partial_hypotheses
459             )  # type: Tuple[List[Hypothesis]]
460
461 # extract the hypotheses
462 hypotheses_list = hypotheses_list[0] # type: List[Hypothesis]
463
464 prediction_list = hypotheses_list
465
466 if isinstance(prediction_list[0], NBestHypotheses):
467 hypotheses = []
468 all_hypotheses = []
469
470 for nbest_hyp in prediction_list: # type: NBestHypotheses
471 n_hyps = nbest_hyp.n_best_hypotheses # Extract all hypotheses for this sample
472 decoded_hyps = self.decode_hypothesis(n_hyps) # type: List[str]
473
474 # If computing timestamps
475 if self.compute_timestamps is True:
476 timestamp_type = self.cfg.get('rnnt_timestamp_type', 'all')
477 for hyp_idx in range(len(decoded_hyps)):
478 decoded_hyps[hyp_idx] = self.compute_rnnt_timestamps(decoded_hyps[hyp_idx], timestamp_type)
479
480 hypotheses.append(decoded_hyps[0]) # best hypothesis
481 all_hypotheses.append(decoded_hyps)
482
483 if return_hypotheses:
484 return hypotheses, all_hypotheses
485
486 best_hyp_text = [h.text for h in hypotheses]
487 all_hyp_text = [h.text for hh in all_hypotheses for h in hh]
488 return best_hyp_text, all_hyp_text
489
490 else:
491 hypotheses = self.decode_hypothesis(prediction_list) # type: List[str]
492
493 # If computing timestamps
494 if self.compute_timestamps is True:
495 timestamp_type = self.cfg.get('rnnt_timestamp_type', 'all')
496 for hyp_idx in range(len(hypotheses)):
497 hypotheses[hyp_idx] = self.compute_rnnt_timestamps(hypotheses[hyp_idx], timestamp_type)
498
499 if return_hypotheses:
500 # greedy decoding, can get high-level confidence scores
501 if self.preserve_frame_confidence and (
502 self.preserve_word_confidence or self.preserve_token_confidence
503 ):
504 hypotheses = self.compute_confidence(hypotheses)
505 return hypotheses, None
506
507 best_hyp_text = [h.text for h in hypotheses]
508 return best_hyp_text, None
509
510 def decode_hypothesis(self, hypotheses_list: List[Hypothesis]) -> List[Union[Hypothesis, NBestHypotheses]]:
511 """
512         Decode a list of hypotheses, setting the decoded text (and optionally the token set) on each Hypothesis.
513
514 Args:
515 hypotheses_list: List of Hypothesis.
516
517 Returns:
518             The same list of hypotheses, with the `text` field of each Hypothesis set to the decoded string.
519 """
520 for ind in range(len(hypotheses_list)):
521 # Extract the integer encoded hypothesis
522 prediction = hypotheses_list[ind].y_sequence
523
524             if not isinstance(prediction, list):
525 prediction = prediction.tolist()
526
527 # RNN-T sample level is already preprocessed by implicit RNNT decoding
528 # Simply remove any blank and possibly big blank tokens
529 if self.big_blank_durations is not None: # multi-blank RNNT
530 num_extra_outputs = len(self.big_blank_durations)
531 prediction = [p for p in prediction if p < self.blank_id - num_extra_outputs]
532 elif self.durations is not None: # TDT model.
533 prediction = [p for p in prediction if p < self.blank_id]
534 else: # standard RNN-T
535 prediction = [p for p in prediction if p != self.blank_id]
536
537 # De-tokenize the integer tokens; if not computing timestamps
538 if self.compute_timestamps is True:
539 # keep the original predictions, wrap with the number of repetitions per token and alignments
540 # this is done so that `rnnt_decoder_predictions_tensor()` can process this hypothesis
541 # in order to compute exact time stamps.
542 alignments = copy.deepcopy(hypotheses_list[ind].alignments)
543 token_repetitions = [1] * len(alignments) # preserve number of repetitions per token
544 hypothesis = (prediction, alignments, token_repetitions)
545 else:
546 hypothesis = self.decode_tokens_to_str(prediction)
547
548 # TODO: remove
549 # collapse leading spaces before . , ? for PC models
550 hypothesis = re.sub(r'(\s+)([\.\,\?])', r'\2', hypothesis)
551
552 if self.compute_hypothesis_token_set:
553 hypotheses_list[ind].tokens = self.decode_ids_to_tokens(prediction)
554
555 # De-tokenize the integer tokens
556 hypotheses_list[ind].text = hypothesis
557
558 return hypotheses_list
559
560 def compute_confidence(self, hypotheses_list: List[Hypothesis]) -> List[Hypothesis]:
561 """
562 Computes high-level (per-token and/or per-word) confidence scores for a list of hypotheses.
563 Assumes that `frame_confidence` is present in the hypotheses.
564
565 Args:
566 hypotheses_list: List of Hypothesis.
567
568 Returns:
569 A list of hypotheses with high-level confidence scores.
570 """
571 if self.exclude_blank_from_confidence:
572 for hyp in hypotheses_list:
573 hyp.token_confidence = hyp.non_blank_frame_confidence
574 else:
575 for hyp in hypotheses_list:
576 offset = 0
577 token_confidence = []
578 if len(hyp.timestep) > 0:
579 for ts, te in zip(hyp.timestep, hyp.timestep[1:] + [len(hyp.frame_confidence)]):
580 if ts != te:
581 # <blank> tokens are considered to belong to the last non-blank token, if any.
582 token_confidence.append(
583 self._aggregate_confidence(
584 [hyp.frame_confidence[ts][offset]]
585 + [fc[0] for fc in hyp.frame_confidence[ts + 1 : te]]
586 )
587 )
588 offset = 0
589 else:
590 token_confidence.append(hyp.frame_confidence[ts][offset])
591 offset += 1
592 hyp.token_confidence = token_confidence
593 if self.preserve_word_confidence:
594 for hyp in hypotheses_list:
595 hyp.word_confidence = self._aggregate_token_confidence(hyp)
596 return hypotheses_list
597
598 @abstractmethod
599 def decode_tokens_to_str(self, tokens: List[int]) -> str:
600 """
601         Implemented by subclass in order to decode a token id list into a string.
602
603 Args:
604 tokens: List of int representing the token ids.
605
606 Returns:
607 A decoded string.
608 """
609 raise NotImplementedError()
610
611 @abstractmethod
612 def decode_ids_to_tokens(self, tokens: List[int]) -> List[str]:
613 """
614 Implemented by subclass in order to decode a token id list into a token list.
615 A token list is the string representation of each token id.
616
617 Args:
618 tokens: List of int representing the token ids.
619
620 Returns:
621 A list of decoded tokens.
622 """
623 raise NotImplementedError()
624
625 @abstractmethod
626 def decode_tokens_to_lang(self, tokens: List[int]) -> str:
627 """
628 Implemented by subclass in order to
629 compute the most likely language ID (LID) string given the tokens.
630
631 Args:
632 tokens: List of int representing the token ids.
633
634 Returns:
635 A decoded LID string.
636 """
637 raise NotImplementedError()
638
639 @abstractmethod
640 def decode_ids_to_langs(self, tokens: List[int]) -> List[str]:
641 """
642 Implemented by subclass in order to
643 decode a token id list into language ID (LID) list.
644
645 Args:
646 tokens: List of int representing the token ids.
647
648 Returns:
649 A list of decoded LIDS.
650 """
651 raise NotImplementedError()
652
653 def update_joint_fused_batch_size(self):
654 if self.joint_fused_batch_size is None:
655 # do nothing and let the Joint itself handle setting up of the fused batch
656 return
657
658 if not hasattr(self.decoding.joint, 'set_fused_batch_size'):
659 logging.warning(
660 "The joint module does not have `set_fused_batch_size(int)` as a setter function.\n"
661 "Ignoring update of joint fused batch size."
662 )
663 return
664
665 if not hasattr(self.decoding.joint, 'set_fuse_loss_wer'):
666 logging.warning(
667 "The joint module does not have `set_fuse_loss_wer(bool, RNNTLoss, RNNTWER)` "
668 "as a setter function.\n"
669 "Ignoring update of joint fused batch size."
670 )
671 return
672
673 if self.joint_fused_batch_size > 0:
674 self.decoding.joint.set_fused_batch_size(self.joint_fused_batch_size)
675 else:
676 logging.info("Joint fused batch size <= 0; Will temporarily disable fused batch step in the Joint.")
677 self.decoding.joint.set_fuse_loss_wer(False)
678
679 def compute_rnnt_timestamps(self, hypothesis: Hypothesis, timestamp_type: str = "all"):
680 assert timestamp_type in ['char', 'word', 'all']
681
682 # Unpack the temporary storage
683 decoded_prediction, alignments, token_repetitions = hypothesis.text
684
685 # Retrieve offsets
686 char_offsets = word_offsets = None
687 char_offsets = self._compute_offsets(hypothesis, token_repetitions, self.blank_id)
688
689 # finally, set the flattened decoded predictions to text field for later text decoding
690 hypothesis.text = decoded_prediction
691
692         # Assert that the number of offsets and hypothesis tokens are a 1:1 match.
693 num_flattened_tokens = 0
694 for t in range(len(char_offsets)):
695 # Subtract one here for the extra RNNT BLANK token emitted to designate "End of timestep"
696 num_flattened_tokens += len(char_offsets[t]['char']) - 1
697
698 if num_flattened_tokens != len(hypothesis.text):
699 raise ValueError(
700 f"`char_offsets`: {char_offsets} and `processed_tokens`: {hypothesis.text}"
701 " have to be of the same length, but are: "
702 f"`len(offsets)`: {len(char_offsets)} and `len(processed_tokens)`:"
703 f" {len(hypothesis.text)}"
704 )
705
706 encoded_char_offsets = copy.deepcopy(char_offsets)
707
708 # Correctly process the token ids to chars/subwords.
709 for i, offsets in enumerate(char_offsets):
710 decoded_chars = []
711             for char in offsets['char'][:-1]:  # the [:-1] slice ignores the RNNT Blank token at the end of every timestep
712 decoded_chars.append(self.decode_tokens_to_str([int(char)]))
713 char_offsets[i]["char"] = decoded_chars
714
715 # detect char vs subword models
716 lens = []
717 for v in char_offsets:
718 tokens = v["char"]
719             # each token may consist of either 1 unicode character or multiple unicode characters
720             # for character based models, only 1 character is used
721             # for subword models, more than one character can be used.
722 # Computing max, then summing up total lens is a test to check for char vs subword
723 # For char models, len(lens) == sum(lens)
724 # but this is violated for subword models.
725 max_len = max(len(c) for c in tokens)
726 lens.append(max_len)
727
728 # array of one or more chars implies subword based model with multiple char emitted per TxU step (via subword)
729 if sum(lens) > len(lens):
730 text_type = 'subword'
731 else:
732 # full array of ones implies character based model with 1 char emitted per TxU step
733 text_type = 'char'
734
735 # retrieve word offsets from character offsets
736 word_offsets = None
737 if timestamp_type in ['word', 'all']:
738 if text_type == 'char':
739 word_offsets = self._get_word_offsets_chars(char_offsets, word_delimiter_char=self.word_seperator)
740 else:
741 # utilize the copy of char offsets with the correct integer ids for tokens
742 # so as to avoid tokenize -> detokenize -> compare -> merge steps.
743 word_offsets = self._get_word_offsets_subwords_sentencepiece(
744 encoded_char_offsets,
745 hypothesis,
746 decode_ids_to_tokens=self.decode_ids_to_tokens,
747 decode_tokens_to_str=self.decode_tokens_to_str,
748 )
749
750 # attach results
751 if len(hypothesis.timestep) > 0:
752 timestep_info = hypothesis.timestep
753 else:
754 timestep_info = []
755
756 # Setup defaults
757 hypothesis.timestep = {"timestep": timestep_info}
758
759 # Add char / subword time stamps
760 if char_offsets is not None and timestamp_type in ['char', 'all']:
761 hypothesis.timestep['char'] = char_offsets
762
763 # Add word time stamps
764 if word_offsets is not None and timestamp_type in ['word', 'all']:
765 hypothesis.timestep['word'] = word_offsets
766
767 # Convert the flattened token indices to text
768 hypothesis.text = self.decode_tokens_to_str(hypothesis.text)
769
770 return hypothesis
771
772 @staticmethod
773 def _compute_offsets(
774 hypothesis: Hypothesis, token_repetitions: List[int], rnnt_token: int
775 ) -> List[Dict[str, Union[str, int]]]:
776 """
777         Utility method that calculates the individual time indices where a token starts and ends.
778
779 Args:
780 hypothesis: A Hypothesis object that contains `text` field that holds the character / subword token
781 emitted at every time step after rnnt collapse.
782 token_repetitions: A list of ints representing the number of repetitions of each emitted token.
783 rnnt_token: The integer of the rnnt blank token used during rnnt collapse.
784
785 Returns:
786             A list of dicts, each containing "char", "start_offset" and "end_offset" for one emitted token.
787 """
788 start_index = 0
789
790 # If the exact timestep information is available, utilize the 1st non-rnnt blank token timestep
791 # as the start index.
792 if hypothesis.timestep is not None and len(hypothesis.timestep) > 0:
793 start_index = max(0, hypothesis.timestep[0] - 1)
794
795 # Construct the start and end indices brackets
796 end_indices = np.asarray(token_repetitions).cumsum()
797 start_indices = np.concatenate(([start_index], end_indices[:-1]))
798
799 # Process the TxU dangling alignment tensor, containing pairs of (logits, label)
800 alignment_labels = [al_logits_labels for al_logits_labels in hypothesis.text[1]]
801 for t in range(len(alignment_labels)):
802 for u in range(len(alignment_labels[t])):
803 alignment_labels[t][u] = alignment_labels[t][u][1] # pick label from (logit, label) tuple
804
805 # Merge the results per token into a list of dictionaries
806 offsets = [
807 {"char": a, "start_offset": s, "end_offset": e}
808 for a, s, e in zip(alignment_labels, start_indices, end_indices)
809 ]
810
811 # Filter out RNNT token (blank at [t][0] position). This is because blank can only occur at end of a
812 # time step for RNNT, so if 0th token is blank, then that timestep is skipped.
813 offsets = list(filter(lambda offsets: offsets["char"][0] != rnnt_token, offsets))
814 return offsets
815
816 @staticmethod
817 def _get_word_offsets_chars(
818 offsets: Dict[str, Union[str, float]], word_delimiter_char: str = " "
819 ) -> Dict[str, Union[str, float]]:
820 """
821 Utility method which constructs word time stamps out of character time stamps.
822
823 References:
824 This code is a port of the Hugging Face code for word time stamp construction.
825
826 Args:
827 offsets: A list of dictionaries, each containing "char", "start_offset" and "end_offset".
828 word_delimiter_char: Character token that represents the word delimiter. By default, " ".
829
830 Returns:
831 A list of dictionaries containing the word offsets. Each item contains "word", "start_offset" and
832 "end_offset".
833 """
834 word_offsets = []
835
836 last_state = "SPACE"
837 word = ""
838 start_offset = 0
839 end_offset = 0
840 for i, offset in enumerate(offsets):
841 chars = offset["char"]
842 for char in chars:
843 state = "SPACE" if char == word_delimiter_char else "WORD"
844
845 if state == last_state:
846 # If we are in the same state as before, we simply repeat what we've done before
847 end_offset = offset["end_offset"]
848 word += char
849 else:
850 # Switching state
851 if state == "SPACE":
852 # Finishing a word
853 word_offsets.append({"word": word, "start_offset": start_offset, "end_offset": end_offset})
854 else:
855 # Starting a new word
856 start_offset = offset["start_offset"]
857 end_offset = offset["end_offset"]
858 word = char
859
860 last_state = state
861
862 if last_state == "WORD":
863 word_offsets.append({"word": word, "start_offset": start_offset, "end_offset": end_offset})
864
865 return word_offsets
866
867 @staticmethod
868 def _get_word_offsets_subwords_sentencepiece(
869 offsets: Dict[str, Union[str, float]],
870 hypothesis: Hypothesis,
871 decode_ids_to_tokens: Callable[[List[int]], str],
872 decode_tokens_to_str: Callable[[List[int]], str],
873 ) -> Dict[str, Union[str, float]]:
874 """
875 Utility method which constructs word time stamps out of sub-word time stamps.
876
877         **Note**: Only supports SentencePiece-based tokenizers!
878
879 Args:
880 offsets: A list of dictionaries, each containing "char", "start_offset" and "end_offset".
881 hypothesis: Hypothesis object that contains `text` field, where each token is a sub-word id
882 after rnnt collapse.
883 decode_ids_to_tokens: A Callable function that accepts a list of integers and maps it to a sub-word.
884 decode_tokens_to_str: A Callable function that accepts a list of integers and maps it to text / str.
885
886 Returns:
887 A list of dictionaries containing the word offsets. Each item contains "word", "start_offset" and
888 "end_offset".
889 """
890 word_offsets = []
891 built_token = []
892 previous_token_index = 0
893 # For every offset token
894 for i, offset in enumerate(offsets):
895 # For every subword token in offset token list (ignoring the RNNT Blank token at the end)
896 for char in offset['char'][:-1]:
897 char = int(char)
898
899 # Compute the sub-word text representation, and the decoded text (stripped of sub-word markers).
900 token = decode_ids_to_tokens([char])[0]
901 token_text = decode_tokens_to_str([char])
902
903 # It is a sub-word token, or contains an identifier at the beginning such as _ or ## that was stripped
904 # after forcing partial text conversion of the token.
905 if token != token_text:
906 # If there are any partially or fully built sub-word token ids, construct to text.
907 # Note: This is "old" subword, that occurs *after* current sub-word has started.
908 if built_token:
909 word_offsets.append(
910 {
911 "word": decode_tokens_to_str(built_token),
912 "start_offset": offsets[previous_token_index]["start_offset"],
913 "end_offset": offsets[i]["start_offset"],
914 }
915 )
916
917 # Prepare list of new sub-word ids
918 built_token.clear()
919 built_token.append(char)
920 previous_token_index = i
921 else:
922 # If the token does not contain any sub-word start mark, then the sub-word has not completed yet
923 # Append to current sub-word list.
924 built_token.append(char)
925
926         # Inject the start offset of the first token into the word offsets.
927         # This is because we always delay the injection of the first sub-word due to the loop
928         # condition and the check of whether the built token is ready or not.
929         # Therefore, without this forced injection, the start_offset would appear off by 1.
930 # This should only be done when these arrays contain more than one element.
931 if offsets and word_offsets:
932 word_offsets[0]["start_offset"] = offsets[0]["start_offset"]
933
934 # If there are any remaining tokens left, inject them all into the final word offset.
935 # The start offset of this token is the start time of the next token to process.
936 # The end offset of this token is the end time of the last token from offsets.
937 # Note that built_token is a flat list; but offsets contains a nested list which
938 # may have different dimensionality.
939 # As such, we can't rely on the length of the list of built_token to index offsets.
940 if built_token:
941 # start from the previous token index as this hasn't been committed to word_offsets yet
942 # if we still have content in built_token
943 start_offset = offsets[previous_token_index]["start_offset"]
944 word_offsets.append(
945 {
946 "word": decode_tokens_to_str(built_token),
947 "start_offset": start_offset,
948 "end_offset": offsets[-1]["end_offset"],
949 }
950 )
951 built_token.clear()
952
953 return word_offsets
954
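# ---------------------------------------------------------------------------------
# Illustrative sketch (hypothetical helper, NOT used by the classes in this module):
# `_get_word_offsets_chars` above merges character offsets into word offsets with a
# small two-state ("SPACE"/"WORD") machine. The standalone function below reproduces
# that merge for simple one-character offset dicts so the logic can be checked in
# isolation; names and input shapes here are assumptions for demonstration only.
# ---------------------------------------------------------------------------------
def _demo_char_offsets_to_words(offsets, word_delimiter_char=" "):
    """Merge per-character offsets into per-word offsets (illustrative only)."""
    word_offsets = []
    last_state = "SPACE"
    word = ""
    start_offset = 0
    end_offset = 0
    for offset in offsets:
        char = offset["char"]
        state = "SPACE" if char == word_delimiter_char else "WORD"
        if state == last_state:
            # Same state as before: extend the current word (or keep skipping spaces)
            end_offset = offset["end_offset"]
            word += char
        elif state == "SPACE":
            # WORD -> SPACE transition: the current word is finished
            word_offsets.append({"word": word, "start_offset": start_offset, "end_offset": end_offset})
        else:
            # SPACE -> WORD transition: a new word starts
            start_offset = offset["start_offset"]
            end_offset = offset["end_offset"]
            word = char
        last_state = state
    # Flush the trailing word, if the sequence did not end on a delimiter
    if last_state == "WORD":
        word_offsets.append({"word": word, "start_offset": start_offset, "end_offset": end_offset})
    return word_offsets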
955
956 class RNNTDecoding(AbstractRNNTDecoding):
957 """
958 Used for performing RNN-T auto-regressive decoding of the Decoder+Joint network given the encoder state.
959
960 Args:
961 decoding_cfg: A dict-like object which contains the following key-value pairs.
962 strategy: str value which represents the type of decoding that can occur.
963 Possible values are :
964 - greedy, greedy_batch (for greedy decoding).
965 - beam, tsd, alsd (for beam search decoding).
966
967 compute_hypothesis_token_set: A bool flag, which determines whether to compute a list of decoded
968 tokens as well as the decoded string. Default is False in order to avoid double decoding
969 unless required.
970
971 preserve_alignments: Bool flag which preserves the history of logprobs generated during
972 decoding (sample / batched). When set to true, the Hypothesis will contain
973                 the non-null value for `alignments` in it. Here, `alignments` is a List of List of
974 Tuple(Tensor (of length V + 1), Tensor(scalar, label after argmax)).
975
976 In order to obtain this hypothesis, please utilize `rnnt_decoder_predictions_tensor` function
977 with the `return_hypotheses` flag set to True.
978
979 The length of the list corresponds to the Acoustic Length (T).
980 Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more targets from a vocabulary.
981 U is the number of target tokens for the current timestep Ti.
982
983 confidence_cfg: A dict-like object which contains the following key-value pairs related to confidence
984 scores. In order to obtain hypotheses with confidence scores, please utilize
985 `rnnt_decoder_predictions_tensor` function with the `preserve_frame_confidence` flag set to True.
986
987 preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores
988 generated during decoding (sample / batched). When set to true, the Hypothesis will contain
989                 the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of List of floats.
990
991 The length of the list corresponds to the Acoustic Length (T).
992 Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more confidence scores.
993 U is the number of target tokens for the current timestep Ti.
994 preserve_token_confidence: Bool flag which preserves the history of per-token confidence scores
995 generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
996 the non-null value for `token_confidence` in it. Here, `token_confidence` is a List of floats.
997
998 The length of the list corresponds to the number of recognized tokens.
999 preserve_word_confidence: Bool flag which preserves the history of per-word confidence scores
1000 generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
1001 the non-null value for `word_confidence` in it. Here, `word_confidence` is a List of floats.
1002
1003 The length of the list corresponds to the number of recognized words.
1004 exclude_blank: Bool flag indicating that blank token confidence scores are to be excluded
1005 from the `token_confidence`.
1006 aggregation: Which aggregation type to use for collapsing per-token confidence into per-word confidence.
1007 Valid options are `mean`, `min`, `max`, `prod`.
1008 method_cfg: A dict-like object which contains the method name and settings to compute per-frame
1009 confidence scores.
1010
1011 name: The method name (str).
1012 Supported values:
1013 - 'max_prob' for using the maximum token probability as a confidence.
1014 - 'entropy' for using a normalized entropy of a log-likelihood vector.
1015
1016 entropy_type: Which type of entropy to use (str).
1017 Used if confidence_method_cfg.name is set to `entropy`.
1018 Supported values:
1019                        - 'gibbs' for the (standard) Gibbs entropy. If the alpha (α) is provided,
1020                            the formula is the following: H_α = -sum_i((p^α_i)*log(p^α_i)).
1021                            Note that for this entropy, the alpha should satisfy the following inequality:
1022                            (log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= α <= (1+log(V-1))/log(V-1)
1023                            where V is the model vocabulary size.
1024                        - 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
1025                            Tsallis entropy formula is the following: H_α = 1/(α-1)*(1-sum_i(p^α_i)),
1026                            where α is a parameter. When α == 1, it works like the Gibbs entropy.
1027                            More: https://en.wikipedia.org/wiki/Tsallis_entropy
1028                        - 'renyi' for the Rényi entropy.
1029                            Rényi entropy formula is the following: H_α = 1/(1-α)*log_2(sum_i(p^α_i)),
1030                            where α is a parameter. When α == 1, it works like the Gibbs entropy.
1031                            More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
1032
1033                    alpha: Power scale for logsoftmax (α for entropies). Here we restrict it to be > 0.
1034                        When the alpha equals one, scaling is not applied to 'max_prob',
1035                        and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
1036
1037 entropy_norm: A mapping of the entropy value to the interval [0,1].
1038 Supported values:
1039 - 'lin' for using the linear mapping.
1040 - 'exp' for using exponential mapping with linear shift.
1041
1042 The config may further contain the following sub-dictionaries:
1043 "greedy":
1044 max_symbols: int, describing the maximum number of target tokens to decode per
1045 timestep during greedy decoding. Setting to larger values allows longer sentences
1046 to be decoded, at the cost of increased execution time.
1047
1048 preserve_frame_confidence: Same as above, overrides above value.
1049
1050 confidence_method_cfg: Same as above, overrides confidence_cfg.method_cfg.
1051
1052 "beam":
1053 beam_size: int, defining the beam size for beam search. Must be >= 1.
1054                    If beam_size == 1, will perform cached greedy search. This might give slightly
1055                    different results compared to the greedy search above.
1056
1057 score_norm: optional bool, whether to normalize the returned beam score in the hypotheses.
1058 Set to True by default.
1059
1060 return_best_hypothesis: optional bool, whether to return just the best hypothesis or all of the
1061 hypotheses after beam search has concluded. This flag is set by default.
1062
1063 tsd_max_sym_exp: optional int, determines number of symmetric expansions of the target symbols
1064 per timestep of the acoustic model. Larger values will allow longer sentences to be decoded,
1065 at increased cost to execution time.
1066
1067 alsd_max_target_len: optional int or float, determines the potential maximum target sequence length.
1068 If an integer is provided, it can decode sequences of that particular maximum length.
1069 If a float is provided, it can decode sequences of int(alsd_max_target_len * seq_len),
1070 where seq_len is the length of the acoustic model output (T).
1071
1072 NOTE:
1073 If a float is provided, it can be greater than 1!
1074 By default, a float of 2.0 is used so that a target sequence can be at most twice
1075 as long as the acoustic model output length T.
1076
1077 maes_num_steps: Number of adaptive steps to take. From the paper, 2 steps is generally sufficient,
1078 and can be reduced to 1 to improve decoding speed while sacrificing some accuracy. int > 0.
1079
1080                maes_prefix_alpha: Maximum prefix length in prefix search. Must be an integer; it is advised to keep this as 1
1081 in order to reduce expensive beam search cost later. int >= 0.
1082
1083 maes_expansion_beta: Maximum number of prefix expansions allowed, in addition to the beam size.
1084 Effectively, the number of hypothesis = beam_size + maes_expansion_beta. Must be an int >= 0,
1085 and affects the speed of inference since large values will perform large beam search in the next step.
1086
1087 maes_expansion_gamma: Float pruning threshold used in the prune-by-value step when computing the expansions.
1088 The default (2.3) is selected from the paper. It performs a comparison (max_log_prob - gamma <= log_prob[v])
1089 where v is all vocabulary indices in the Vocab set and max_log_prob is the "most" likely token to be
1090 predicted. Gamma therefore provides a margin of additional tokens which can be potential candidates for
1091 expansion apart from the "most likely" candidate.
1092 Lower values will reduce the number of expansions (by increasing pruning-by-value, thereby improving speed
1093 but hurting accuracy). Higher values will increase the number of expansions (by reducing pruning-by-value,
1094 thereby reducing speed but potentially improving accuracy). This is a hyper parameter to be experimentally
1095 tuned on a validation set.
1096
1097 softmax_temperature: Scales the logits of the joint prior to computing log_softmax.
1098
1099 decoder: The Decoder/Prediction network module.
1100 joint: The Joint network module.
1101 vocabulary: The vocabulary (excluding the RNNT blank token) which will be used for decoding.
1102 """
1103
1104 def __init__(
1105 self, decoding_cfg, decoder, joint, vocabulary,
1106 ):
1107 # we need to ensure blank is the last token in the vocab for the case of RNNT and Multi-blank RNNT.
1108 blank_id = len(vocabulary) + joint.num_extra_outputs
1109
1110 if hasattr(decoding_cfg, 'model_type') and decoding_cfg.model_type == 'tdt':
1111 blank_id = len(vocabulary)
1112
1113 self.labels_map = dict([(i, vocabulary[i]) for i in range(len(vocabulary))])
1114
1115 super(RNNTDecoding, self).__init__(
1116 decoding_cfg=decoding_cfg, decoder=decoder, joint=joint, blank_id=blank_id,
1117 )
1118
1119 if isinstance(self.decoding, beam_decode.BeamRNNTInfer):
1120 self.decoding.set_decoding_type('char')
1121
1122 def _aggregate_token_confidence(self, hypothesis: Hypothesis) -> List[float]:
1123 """
1124 Implemented by subclass in order to aggregate token confidence to a word-level confidence.
1125
1126 Args:
1127 hypothesis: Hypothesis
1128
1129 Returns:
1130 A list of word-level confidence scores.
1131 """
1132 return self._aggregate_token_confidence_chars(hypothesis.words, hypothesis.token_confidence)
1133
1134 def decode_tokens_to_str(self, tokens: List[int]) -> str:
1135 """
1136 Implemented by subclass in order to decoder a token list into a string.
1137
1138 Args:
1139 tokens: List of int representing the token ids.
1140
1141 Returns:
1142 A decoded string.
1143 """
1144 hypothesis = ''.join(self.decode_ids_to_tokens(tokens))
1145 return hypothesis
1146
1147 def decode_ids_to_tokens(self, tokens: List[int]) -> List[str]:
1148 """
1149 Implemented by subclass in order to decode a token id list into a token list.
1150 A token list is the string representation of each token id.
1151
1152 Args:
1153 tokens: List of int representing the token ids.
1154
1155 Returns:
1156 A list of decoded tokens.
1157 """
1158 token_list = [self.labels_map[c] for c in tokens if c < self.blank_id - self.num_extra_outputs]
1159 return token_list
1160
1161 def decode_tokens_to_lang(self, tokens: List[int]) -> str:
1162 """
1163 Compute the most likely language ID (LID) string given the tokens.
1164
1165 Args:
1166 tokens: List of int representing the token ids.
1167
1168 Returns:
1169 A decoded LID string.
1170 """
1171 lang = self.tokenizer.ids_to_lang(tokens)
1172 return lang
1173
1174 def decode_ids_to_langs(self, tokens: List[int]) -> List[str]:
1175 """
1176 Decode a token id list into language ID (LID) list.
1177
1178 Args:
1179 tokens: List of int representing the token ids.
1180
1181 Returns:
1182 A list of decoded LIDS.
1183 """
1184 lang_list = self.tokenizer.ids_to_text_and_langs(tokens)
1185 return lang_list
1186
1187
1188 class RNNTWER(Metric):
1189 """
1190 This metric computes numerator and denominator for Overall Word Error Rate (WER) between prediction and reference texts.
1191 When doing distributed training/evaluation the result of res=WER(predictions, targets, target_lengths) calls
1192 will be all-reduced between all workers using SUM operations.
1193 Here contains two numbers res=[wer_numerator, wer_denominator]. WER=wer_numerator/wer_denominator.
1194
1195     If used with a PyTorch Lightning LightningModule, include wer_numerator and wer_denominator inside validation_step results.
1196     Then aggregate (sum) them at the end of the validation epoch to correctly compute the validation WER.
1197
1198 Example:
1199 def validation_step(self, batch, batch_idx):
1200 ...
1201 wer_num, wer_denom = self.__wer(predictions, transcript, transcript_len)
1202 self.val_outputs = {'val_loss': loss_value, 'val_wer_num': wer_num, 'val_wer_denom': wer_denom}
1203 return self.val_outputs
1204
1205 def on_validation_epoch_end(self):
1206 ...
1207 wer_num = torch.stack([x['val_wer_num'] for x in self.val_outputs]).sum()
1208 wer_denom = torch.stack([x['val_wer_denom'] for x in self.val_outputs]).sum()
1209 tensorboard_logs = {'validation_loss': val_loss_mean, 'validation_avg_wer': wer_num / wer_denom}
1210 self.val_outputs.clear() # free memory
1211 return {'val_loss': val_loss_mean, 'log': tensorboard_logs}
1212
1213 Args:
1214 decoding: RNNTDecoding object that will perform autoregressive decoding of the RNNT model.
1215 batch_dim_index: Index of the batch dimension.
1216         use_cer: Whether to use Character Error Rate instead of Word Error Rate.
1217 log_prediction: Whether to log a single decoded sample per call.
1218
1219 Returns:
1220         res: a tuple of 3 zero dimensional float32 ``torch.Tensor`` objects: a WER score, a sum of Levenshtein's
1221 distances for all prediction - reference pairs, total number of words in all references.
1222 """
1223
1224 full_state_update = True
1225
1226 def __init__(
1227 self, decoding: RNNTDecoding, batch_dim_index=0, use_cer=False, log_prediction=True, dist_sync_on_step=False
1228 ):
1229 super(RNNTWER, self).__init__(dist_sync_on_step=dist_sync_on_step)
1230 self.decoding = decoding
1231 self.batch_dim_index = batch_dim_index
1232 self.use_cer = use_cer
1233 self.log_prediction = log_prediction
1234 self.blank_id = self.decoding.blank_id
1235 self.labels_map = self.decoding.labels_map
1236
1237 self.add_state("scores", default=torch.tensor(0), dist_reduce_fx='sum', persistent=False)
1238 self.add_state("words", default=torch.tensor(0), dist_reduce_fx='sum', persistent=False)
1239
1240 def update(
1241 self,
1242 encoder_output: torch.Tensor,
1243 encoded_lengths: torch.Tensor,
1244 targets: torch.Tensor,
1245 target_lengths: torch.Tensor,
1246     ) -> None:
1247 words = 0
1248 scores = 0
1249 references = []
1250 with torch.no_grad():
1251 # prediction_cpu_tensor = tensors[0].long().cpu()
1252 targets_cpu_tensor = targets.long().cpu()
1253 targets_cpu_tensor = move_dimension_to_the_front(targets_cpu_tensor, self.batch_dim_index)
1254 tgt_lenths_cpu_tensor = target_lengths.long().cpu()
1255
1256 # iterate over batch
1257 for ind in range(targets_cpu_tensor.shape[0]):
1258 tgt_len = tgt_lenths_cpu_tensor[ind].item()
1259 target = targets_cpu_tensor[ind][:tgt_len].numpy().tolist()
1260
1261 reference = self.decoding.decode_tokens_to_str(target)
1262 references.append(reference)
1263
1264 hypotheses, _ = self.decoding.rnnt_decoder_predictions_tensor(encoder_output, encoded_lengths)
1265
1266 if self.log_prediction:
1267 logging.info(f"\n")
1268 logging.info(f"reference :{references[0]}")
1269 logging.info(f"predicted :{hypotheses[0]}")
1270
1271 for h, r in zip(hypotheses, references):
1272 if self.use_cer:
1273 h_list = list(h)
1274 r_list = list(r)
1275 else:
1276 h_list = h.split()
1277 r_list = r.split()
1278 words += len(r_list)
1279 # Compute Levenshtein's distance
1280 scores += editdistance.eval(h_list, r_list)
1281
1282 self.scores += torch.tensor(scores, device=self.scores.device, dtype=self.scores.dtype)
1283 self.words += torch.tensor(words, device=self.words.device, dtype=self.words.dtype)
1284 # return torch.tensor([scores, words]).to(predictions.device)
1285
1286 def compute(self):
1287 wer = self.scores.float() / self.words
1288 return wer, self.scores.detach(), self.words.detach()
1289
1290
1291 @dataclass
1292 class RNNTDecodingConfig:
1293 model_type: str = "rnnt" # one of "rnnt", "multiblank" or "tdt"
1294 strategy: str = "greedy_batch"
1295
1296 compute_hypothesis_token_set: bool = False
1297
1298 # preserve decoding alignments
1299 preserve_alignments: Optional[bool] = None
1300
1301 # confidence config
1302 confidence_cfg: ConfidenceConfig = ConfidenceConfig()
1303
1304 # RNNT Joint fused batch size
1305 fused_batch_size: Optional[int] = None
1306
1307 # compute RNNT time stamps
1308 compute_timestamps: Optional[bool] = None
1309
1310 # compute language IDs
1311 compute_langs: bool = False
1312
1313     # token representing word separator
1314 word_seperator: str = " "
1315
1316 # type of timestamps to calculate
1317 rnnt_timestamp_type: str = "all" # can be char, word or all for both
1318
1319 # greedy decoding config
1320 greedy: greedy_decode.GreedyRNNTInferConfig = greedy_decode.GreedyRNNTInferConfig()
1321
1322 # beam decoding config
1323 beam: beam_decode.BeamRNNTInferConfig = beam_decode.BeamRNNTInferConfig(beam_size=4)
1324
1325 # can be used to change temperature for decoding
1326 temperature: float = 1.0
1327
[end of nemo/collections/asr/metrics/rnnt_wer.py]
[start of nemo/collections/asr/metrics/wer.py]
1 # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import re
16 from abc import abstractmethod
17 from dataclasses import dataclass, is_dataclass
18 from typing import Callable, Dict, List, Optional, Tuple, Union
19
20 import editdistance
21 import jiwer
22 import numpy as np
23 import torch
24 from omegaconf import DictConfig, OmegaConf
25 from torchmetrics import Metric
26
27 from nemo.collections.asr.parts.submodules import ctc_beam_decoding, ctc_greedy_decoding
28 from nemo.collections.asr.parts.utils.asr_confidence_utils import ConfidenceConfig, ConfidenceMixin
29 from nemo.collections.asr.parts.utils.rnnt_utils import Hypothesis, NBestHypotheses
30 from nemo.utils import logging, logging_mode
31
32 __all__ = ['word_error_rate', 'word_error_rate_detail', 'WER', 'move_dimension_to_the_front']
33
34
35 def word_error_rate(hypotheses: List[str], references: List[str], use_cer=False) -> float:
36 """
37 Computes Average Word Error rate between two texts represented as
38 corresponding lists of string.
39
40 Hypotheses and references must have same length.
41
42 Args:
43 hypotheses (list): list of hypotheses
44 references(list) : list of references
45 use_cer (bool): set True to enable cer
46
47 Returns:
48 wer (float): average word error rate
49 """
50 scores = 0
51 words = 0
52 if len(hypotheses) != len(references):
53 raise ValueError(
54 "In word error rate calculation, hypotheses and reference"
55             " lists must have the same number of elements. But got "
56             "{0} and {1} respectively".format(len(hypotheses), len(references))
57 )
58 for h, r in zip(hypotheses, references):
59 if use_cer:
60 h_list = list(h)
61 r_list = list(r)
62 else:
63 h_list = h.split()
64 r_list = r.split()
65 words += len(r_list)
66 # May deprecate using editdistance in future release for here and rest of codebase
67 # once we confirm jiwer is reliable.
68 scores += editdistance.eval(h_list, r_list)
69 if words != 0:
70 wer = 1.0 * scores / words
71 else:
72 wer = float('inf')
73 return wer
74
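`word_error_rate` above reduces to the total edit distance over the total number of reference words. A self-contained sketch, with a pure-Python Levenshtein distance standing in for `editdistance` (the helper names are illustrative):

```python
def edit_distance(a, b):
    # row-by-row dynamic-programming Levenshtein distance between two token sequences
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, start=1):
        cur = [i]
        for j, y in enumerate(b, start=1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (x != y)))  # substitution / match
        prev = cur
    return prev[-1]


def simple_wer(hypotheses, references):
    # mirrors word_error_rate above: summed distances over summed reference lengths
    scores = sum(edit_distance(h.split(), r.split()) for h, r in zip(hypotheses, references))
    words = sum(len(r.split()) for r in references)
    return scores / words if words else float('inf')


# 3 missing words out of a 6-word reference
print(simple_wer(["the cat sat"], ["the cat sat on the mat"]))  # 0.5
```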
75
76 def word_error_rate_detail(
77 hypotheses: List[str], references: List[str], use_cer=False
78 ) -> Tuple[float, int, float, float, float]:
79 """
80 Computes Average Word Error Rate with details (insertion rate, deletion rate, substitution rate)
81 between two texts represented as corresponding lists of string.
82
83 Hypotheses and references must have same length.
84
85 Args:
86 hypotheses (list): list of hypotheses
87 references(list) : list of references
88 use_cer (bool): set True to enable cer
89
90 Returns:
91 wer (float): average word error rate
92         words (int): Total number of words/characters of given reference texts
93 ins_rate (float): average insertion error rate
94 del_rate (float): average deletion error rate
95 sub_rate (float): average substitution error rate
96 """
97 scores = 0
98 words = 0
99 ops_count = {'substitutions': 0, 'insertions': 0, 'deletions': 0}
100
101 if len(hypotheses) != len(references):
102 raise ValueError(
103 "In word error rate calculation, hypotheses and reference"
104             " lists must have the same number of elements. But got "
105             "{0} and {1} respectively".format(len(hypotheses), len(references))
106 )
107
108 for h, r in zip(hypotheses, references):
109 if use_cer:
110 h_list = list(h)
111 r_list = list(r)
112 else:
113 h_list = h.split()
114 r_list = r.split()
115
116 # To get rid of the issue that jiwer does not allow empty string
117 if len(r_list) == 0:
118 if len(h_list) != 0:
119 errors = len(h_list)
120 ops_count['insertions'] += errors
121 else:
122 errors = 0
123 else:
124 if use_cer:
125 measures = jiwer.cer(r, h, return_dict=True)
126 else:
127 measures = jiwer.compute_measures(r, h)
128
129 errors = measures['insertions'] + measures['deletions'] + measures['substitutions']
130 ops_count['insertions'] += measures['insertions']
131 ops_count['deletions'] += measures['deletions']
132 ops_count['substitutions'] += measures['substitutions']
133
134 scores += errors
135 words += len(r_list)
136
137 if words != 0:
138 wer = 1.0 * scores / words
139 ins_rate = 1.0 * ops_count['insertions'] / words
140 del_rate = 1.0 * ops_count['deletions'] / words
141 sub_rate = 1.0 * ops_count['substitutions'] / words
142 else:
143 wer, ins_rate, del_rate, sub_rate = float('inf'), float('inf'), float('inf'), float('inf')
144
145 return wer, words, ins_rate, del_rate, sub_rate
146
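`word_error_rate_detail` delegates the per-operation counts to `jiwer`. The bookkeeping can also be sketched directly with a dynamic program that carries `(errors, substitutions, deletions, insertions)` per cell; note that tie-breaking between equal-cost alignments may differ from `jiwer`, and the function name is illustrative:

```python
def error_counts(ref, hyp):
    # dp[i][j] = (errors, subs, dels, ins) for aligning ref[:i] with hyp[:j]
    R, H = len(ref), len(hyp)
    dp = [[(0, 0, 0, 0)] * (H + 1) for _ in range(R + 1)]
    for j in range(1, H + 1):
        dp[0][j] = (j, 0, 0, j)          # empty reference: all insertions
    for i in range(1, R + 1):
        dp[i][0] = (i, 0, i, 0)          # empty hypothesis: all deletions
        for j in range(1, H + 1):
            if ref[i - 1] == hyp[j - 1]:
                dp[i][j] = dp[i - 1][j - 1]  # match: carry counts over
                continue
            options = [
                (dp[i - 1][j - 1], (1, 0, 0)),  # substitution
                (dp[i - 1][j], (0, 1, 0)),      # deletion
                (dp[i][j - 1], (0, 0, 1)),      # insertion
            ]
            (e, s, d, n), (ds, dd, dn) = min(options, key=lambda o: o[0][0])
            dp[i][j] = (e + 1, s + ds, d + dd, n + dn)
    return dp[R][H]


# one substitution (b -> x) and one insertion (d): 2 errors total
print(error_counts("a b c".split(), "a x c d".split()))  # (2, 1, 0, 1)
```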
147
148 def word_error_rate_per_utt(hypotheses: List[str], references: List[str], use_cer=False) -> Tuple[List[float], float]:
149 """
150 Computes Word Error Rate per utterance and the average WER
151 between two texts represented as corresponding lists of string.
152
153 Hypotheses and references must have same length.
154
155 Args:
156 hypotheses (list): list of hypotheses
157 references(list) : list of references
158 use_cer (bool): set True to enable cer
159
160 Returns:
161 wer_per_utt (List[float]): word error rate per utterance
162 avg_wer (float): average word error rate
163 """
164 scores = 0
165 words = 0
166 wer_per_utt = []
167
168 if len(hypotheses) != len(references):
169 raise ValueError(
170 "In word error rate calculation, hypotheses and reference"
171             " lists must have the same number of elements. But got "
172             "{0} and {1} respectively".format(len(hypotheses), len(references))
173 )
174
175 for h, r in zip(hypotheses, references):
176 if use_cer:
177 h_list = list(h)
178 r_list = list(r)
179 else:
180 h_list = h.split()
181 r_list = r.split()
182
183 # To get rid of the issue that jiwer does not allow empty string
184 if len(r_list) == 0:
185             errors = len(h_list)
186             # a non-empty hypothesis against an empty reference is all insertions (infinite WER)
187             wer_per_utt.append(float('inf') if len(h_list) != 0 else 0.0)
188 else:
189 if use_cer:
190 measures = jiwer.cer(r, h, return_dict=True)
191 er = measures['cer']
192 else:
193 measures = jiwer.compute_measures(r, h)
194 er = measures['wer']
195
196 errors = measures['insertions'] + measures['deletions'] + measures['substitutions']
197 wer_per_utt.append(er)
198
199 scores += errors
200 words += len(r_list)
201
202 if words != 0:
203 avg_wer = 1.0 * scores / words
204 else:
205 avg_wer = float('inf')
206
207 return wer_per_utt, avg_wer
208
209
210 def move_dimension_to_the_front(tensor, dim_index):
211 all_dims = list(range(tensor.ndim))
212 return tensor.permute(*([dim_index] + all_dims[:dim_index] + all_dims[dim_index + 1 :]))
213
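`move_dimension_to_the_front` just builds a permutation that puts `dim_index` first. The same permutation applied to a plain shape tuple, so it runs without torch (the function name here is illustrative):

```python
def moved_shape(shape, dim_index):
    # the permutation computed by move_dimension_to_the_front, applied to a shape tuple
    dims = list(range(len(shape)))
    order = [dim_index] + dims[:dim_index] + dims[dim_index + 1:]
    return tuple(shape[d] for d in order)


# e.g. [Batch, Time, Vocab] with batch_dim_index == 1 becomes [Time, Batch, Vocab]
print(moved_shape((8, 100, 29), 1))  # (100, 8, 29)
```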
214
215 class AbstractCTCDecoding(ConfidenceMixin):
216 """
217 Used for performing CTC auto-regressive / non-auto-regressive decoding of the logprobs.
218
219 Args:
220 decoding_cfg: A dict-like object which contains the following key-value pairs.
221 strategy: str value which represents the type of decoding that can occur.
222 Possible values are :
223 - greedy (for greedy decoding).
224 - beam (for DeepSpeed KenLM based decoding).
225
226 compute_timestamps: A bool flag, which determines whether to compute the character/subword, or
227             word based timestamp mapping the output log-probabilities to discrete intervals of timestamps.
228 The timestamps will be available in the returned Hypothesis.timestep as a dictionary.
229
230 ctc_timestamp_type: A str value, which represents the types of timestamps that should be calculated.
231 Can take the following values - "char" for character/subword time stamps, "word" for word level
232 time stamps and "all" (default), for both character level and word level time stamps.
233
234             word_seperator: Str token representing the separator between words.
235
236 preserve_alignments: Bool flag which preserves the history of logprobs generated during
237 decoding (sample / batched). When set to true, the Hypothesis will contain
238                 the non-null value for `logprobs` in it. Here, `logprobs` is a torch.Tensor.
239
240 confidence_cfg: A dict-like object which contains the following key-value pairs related to confidence
241 scores. In order to obtain hypotheses with confidence scores, please utilize
242 `ctc_decoder_predictions_tensor` function with the `preserve_frame_confidence` flag set to True.
243
244 preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores
245 generated during decoding. When set to true, the Hypothesis will contain
246 the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of floats.
247 preserve_token_confidence: Bool flag which preserves the history of per-token confidence scores
248 generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
249 the non-null value for `token_confidence` in it. Here, `token_confidence` is a List of floats.
250
251 The length of the list corresponds to the number of recognized tokens.
252 preserve_word_confidence: Bool flag which preserves the history of per-word confidence scores
253 generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
254 the non-null value for `word_confidence` in it. Here, `word_confidence` is a List of floats.
255
256 The length of the list corresponds to the number of recognized words.
257 exclude_blank: Bool flag indicating that blank token confidence scores are to be excluded
258 from the `token_confidence`.
259 aggregation: Which aggregation type to use for collapsing per-token confidence into per-word confidence.
260 Valid options are `mean`, `min`, `max`, `prod`.
261 method_cfg: A dict-like object which contains the method name and settings to compute per-frame
262 confidence scores.
263
264 name: The method name (str).
265 Supported values:
266 - 'max_prob' for using the maximum token probability as a confidence.
267 - 'entropy' for using a normalized entropy of a log-likelihood vector.
268
269 entropy_type: Which type of entropy to use (str).
270 Used if confidence_method_cfg.name is set to `entropy`.
271 Supported values:
272 - 'gibbs' for the (standard) Gibbs entropy. If the alpha (ฮฑ) is provided,
273 the formula is the following: H_ฮฑ = -sum_i((p^ฮฑ_i)*log(p^ฮฑ_i)).
274                             Note that for this entropy, the alpha should satisfy the following inequality:
275 (log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= ฮฑ <= (1+log(V-1))/log(V-1)
276 where V is the model vocabulary size.
277 - 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
278 Tsallis entropy formula is the following: H_ฮฑ = 1/(ฮฑ-1)*(1-sum_i(p^ฮฑ_i)),
279 where ฮฑ is a parameter. When ฮฑ == 1, it works like the Gibbs entropy.
280 More: https://en.wikipedia.org/wiki/Tsallis_entropy
281 - 'renyi' for the Rรฉnyi entropy.
282 Rรฉnyi entropy formula is the following: H_ฮฑ = 1/(1-ฮฑ)*log_2(sum_i(p^ฮฑ_i)),
283 where ฮฑ is a parameter. When ฮฑ == 1, it works like the Gibbs entropy.
284 More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
285
286 alpha: Power scale for logsoftmax (ฮฑ for entropies). Here we restrict it to be > 0.
287 When the alpha equals one, scaling is not applied to 'max_prob',
288 and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
289
290 entropy_norm: A mapping of the entropy value to the interval [0,1].
291 Supported values:
292 - 'lin' for using the linear mapping.
293 - 'exp' for using exponential mapping with linear shift.
294
295 batch_dim_index: Index of the batch dimension of ``targets`` and ``predictions`` parameters of
296 ``ctc_decoder_predictions_tensor`` methods. Can be either 0 or 1.
297
298 The config may further contain the following sub-dictionaries:
299 "greedy":
300 preserve_alignments: Same as above, overrides above value.
301 compute_timestamps: Same as above, overrides above value.
302 preserve_frame_confidence: Same as above, overrides above value.
303 confidence_method_cfg: Same as above, overrides confidence_cfg.method_cfg.
304
305 "beam":
306 beam_size: int, defining the beam size for beam search. Must be >= 1.
307                 If beam_size == 1, will perform cached greedy search. This might produce slightly different
308 results compared to the greedy search above.
309
310 return_best_hypothesis: optional bool, whether to return just the best hypothesis or all of the
311 hypotheses after beam search has concluded. This flag is set by default.
312
313 beam_alpha: float, the strength of the Language model on the final score of a token.
314 final_score = acoustic_score + beam_alpha * lm_score + beam_beta * seq_length.
315
316 beam_beta: float, the strength of the sequence length penalty on the final score of a token.
317 final_score = acoustic_score + beam_alpha * lm_score + beam_beta * seq_length.
318
319 kenlm_path: str, path to a KenLM ARPA or .binary file (depending on the strategy chosen).
320 If the path is invalid (file is not found at path), will raise a deferred error at the moment
321 of calculation of beam search, so that users may update / change the decoding strategy
322 to point to the correct file.
323
324 blank_id: The id of the RNNT blank token.
325 """
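The entropy variants quoted in the docstring above can be written down directly. A sketch of the three formulas for a probability vector, assuming `alpha > 0` and `alpha != 1` (the function name is illustrative; the real implementation lives in the confidence utilities):

```python
import math


def entropies(probs, alpha=0.5):
    # Gibbs:   H_a = -sum_i (p_i^a) * log(p_i^a)
    # Tsallis: H_a = 1/(a-1) * (1 - sum_i p_i^a)
    # Renyi:   H_a = 1/(1-a) * log2(sum_i p_i^a)
    s = sum(p ** alpha for p in probs)
    gibbs = -sum((p ** alpha) * math.log(p ** alpha) for p in probs if p > 0)
    tsallis = (1.0 - s) / (alpha - 1.0)
    renyi = math.log2(s) / (1.0 - alpha)
    return gibbs, tsallis, renyi


# uniform distribution over 4 tokens, alpha = 0.5
g, t, r = entropies([0.25, 0.25, 0.25, 0.25])
print(round(t, 6), round(r, 6))  # 2.0 2.0
```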
326
327 def __init__(self, decoding_cfg, blank_id: int):
328 super().__init__()
329
330         # Convert dataclass to config
331 if is_dataclass(decoding_cfg):
332 decoding_cfg = OmegaConf.structured(decoding_cfg)
333
334 if not isinstance(decoding_cfg, DictConfig):
335 decoding_cfg = OmegaConf.create(decoding_cfg)
336
337 OmegaConf.set_struct(decoding_cfg, False)
338
339 # update minimal config
340 minimal_cfg = ['greedy']
341 for item in minimal_cfg:
342 if item not in decoding_cfg:
343 decoding_cfg[item] = OmegaConf.create({})
344
345 self.cfg = decoding_cfg
346 self.blank_id = blank_id
347 self.preserve_alignments = self.cfg.get('preserve_alignments', None)
348 self.compute_timestamps = self.cfg.get('compute_timestamps', None)
349 self.batch_dim_index = self.cfg.get('batch_dim_index', 0)
350 self.word_seperator = self.cfg.get('word_seperator', ' ')
351
352 possible_strategies = ['greedy', 'beam', 'pyctcdecode', 'flashlight']
353 if self.cfg.strategy not in possible_strategies:
354 raise ValueError(f"Decoding strategy must be one of {possible_strategies}. Given {self.cfg.strategy}")
355
356 # Update preserve alignments
357 if self.preserve_alignments is None:
358 if self.cfg.strategy in ['greedy']:
359 self.preserve_alignments = self.cfg.greedy.get('preserve_alignments', False)
360 else:
361 self.preserve_alignments = self.cfg.beam.get('preserve_alignments', False)
362
363 # Update compute timestamps
364 if self.compute_timestamps is None:
365 if self.cfg.strategy in ['greedy']:
366 self.compute_timestamps = self.cfg.greedy.get('compute_timestamps', False)
367 elif self.cfg.strategy in ['beam']:
368 self.compute_timestamps = self.cfg.beam.get('compute_timestamps', False)
369
370 # initialize confidence-related fields
371 self._init_confidence(self.cfg.get('confidence_cfg', None))
372
373 # Confidence estimation is not implemented for strategies other than `greedy`
374 if (
375 not self.preserve_frame_confidence
376 and self.cfg.strategy != 'greedy'
377 and self.cfg.beam.get('preserve_frame_confidence', False)
378 ):
379 raise NotImplementedError(f"Confidence calculation is not supported for strategy `{self.cfg.strategy}`")
380
381 # we need timestamps to extract non-blank per-frame confidence
382 if self.compute_timestamps is not None:
383 self.compute_timestamps |= self.preserve_frame_confidence
384
385 if self.cfg.strategy == 'greedy':
386
387 self.decoding = ctc_greedy_decoding.GreedyCTCInfer(
388 blank_id=self.blank_id,
389 preserve_alignments=self.preserve_alignments,
390 compute_timestamps=self.compute_timestamps,
391 preserve_frame_confidence=self.preserve_frame_confidence,
392 confidence_method_cfg=self.confidence_method_cfg,
393 )
394
395 elif self.cfg.strategy == 'beam':
396
397 self.decoding = ctc_beam_decoding.BeamCTCInfer(
398 blank_id=blank_id,
399 beam_size=self.cfg.beam.get('beam_size', 1),
400 search_type='default',
401 return_best_hypothesis=self.cfg.beam.get('return_best_hypothesis', True),
402 preserve_alignments=self.preserve_alignments,
403 compute_timestamps=self.compute_timestamps,
404 beam_alpha=self.cfg.beam.get('beam_alpha', 1.0),
405 beam_beta=self.cfg.beam.get('beam_beta', 0.0),
406 kenlm_path=self.cfg.beam.get('kenlm_path', None),
407 )
408
409 self.decoding.override_fold_consecutive_value = False
410
411 elif self.cfg.strategy == 'pyctcdecode':
412
413 self.decoding = ctc_beam_decoding.BeamCTCInfer(
414 blank_id=blank_id,
415 beam_size=self.cfg.beam.get('beam_size', 1),
416 search_type='pyctcdecode',
417 return_best_hypothesis=self.cfg.beam.get('return_best_hypothesis', True),
418 preserve_alignments=self.preserve_alignments,
419 compute_timestamps=self.compute_timestamps,
420 beam_alpha=self.cfg.beam.get('beam_alpha', 1.0),
421 beam_beta=self.cfg.beam.get('beam_beta', 0.0),
422 kenlm_path=self.cfg.beam.get('kenlm_path', None),
423 pyctcdecode_cfg=self.cfg.beam.get('pyctcdecode_cfg', None),
424 )
425
426 self.decoding.override_fold_consecutive_value = False
427
428 elif self.cfg.strategy == 'flashlight':
429
430 self.decoding = ctc_beam_decoding.BeamCTCInfer(
431 blank_id=blank_id,
432 beam_size=self.cfg.beam.get('beam_size', 1),
433 search_type='flashlight',
434 return_best_hypothesis=self.cfg.beam.get('return_best_hypothesis', True),
435 preserve_alignments=self.preserve_alignments,
436 compute_timestamps=self.compute_timestamps,
437 beam_alpha=self.cfg.beam.get('beam_alpha', 1.0),
438 beam_beta=self.cfg.beam.get('beam_beta', 0.0),
439 kenlm_path=self.cfg.beam.get('kenlm_path', None),
440 flashlight_cfg=self.cfg.beam.get('flashlight_cfg', None),
441 )
442
443 self.decoding.override_fold_consecutive_value = False
444
445 else:
446 raise ValueError(
447 f"Incorrect decoding strategy supplied. Must be one of {possible_strategies}\n"
448 f"but was provided {self.cfg.strategy}"
449 )
450
451 def ctc_decoder_predictions_tensor(
452 self,
453 decoder_outputs: torch.Tensor,
454 decoder_lengths: torch.Tensor = None,
455 fold_consecutive: bool = True,
456 return_hypotheses: bool = False,
457 ) -> Tuple[List[str], Optional[List[List[str]]], Optional[Union[Hypothesis, NBestHypotheses]]]:
458 """
459 Decodes a sequence of labels to words
460
461 Args:
462             decoder_outputs: An integer torch.Tensor of shape [Batch, Time, {Vocabulary}] (if ``batch_dim_index == 0``) or [Time, Batch]
463                 (if ``batch_dim_index == 1``) of integer indices that correspond to the index of some character in the
464 label set.
465 decoder_lengths: Optional tensor of length `Batch` which contains the integer lengths
466 of the sequence in the padded `predictions` tensor.
467 fold_consecutive: Bool, determine whether to perform "ctc collapse", folding consecutive tokens
468 into a single token.
469 return_hypotheses: Bool flag whether to return just the decoding predictions of the model
470 or a Hypothesis object that holds information such as the decoded `text`,
471                 the `alignment` emitted by the CTC Model, and the `length` of the sequence (if available).
472 May also contain the log-probabilities of the decoder (if this method is called via
473 transcribe())
474
475 Returns:
476 Either a list of str which represent the CTC decoded strings per sample,
477 or a list of Hypothesis objects containing additional information.
478 """
479
480 if isinstance(decoder_outputs, torch.Tensor):
481 decoder_outputs = move_dimension_to_the_front(decoder_outputs, self.batch_dim_index)
482
483 if (
484 hasattr(self.decoding, 'override_fold_consecutive_value')
485 and self.decoding.override_fold_consecutive_value is not None
486 ):
487 logging.info(
488 f"Beam search requires that consecutive ctc tokens are not folded. \n"
489 f"Overriding provided value of `fold_consecutive` = {fold_consecutive} to "
490 f"{self.decoding.override_fold_consecutive_value}",
491 mode=logging_mode.ONCE,
492 )
493 fold_consecutive = self.decoding.override_fold_consecutive_value
494
495 with torch.inference_mode():
496 # Resolve the forward step of the decoding strategy
497 hypotheses_list = self.decoding(
498 decoder_output=decoder_outputs, decoder_lengths=decoder_lengths
499 ) # type: List[List[Hypothesis]]
500
501 # extract the hypotheses
502 hypotheses_list = hypotheses_list[0] # type: List[Hypothesis]
503
504 if isinstance(hypotheses_list[0], NBestHypotheses):
505 hypotheses = []
506 all_hypotheses = []
507
508 for nbest_hyp in hypotheses_list: # type: NBestHypotheses
509 n_hyps = nbest_hyp.n_best_hypotheses # Extract all hypotheses for this sample
510 decoded_hyps = self.decode_hypothesis(
511 n_hyps, fold_consecutive
512 ) # type: List[Union[Hypothesis, NBestHypotheses]]
513
514 # If computing timestamps
515 if self.compute_timestamps is True:
516 timestamp_type = self.cfg.get('ctc_timestamp_type', 'all')
517 for hyp_idx in range(len(decoded_hyps)):
518 decoded_hyps[hyp_idx] = self.compute_ctc_timestamps(decoded_hyps[hyp_idx], timestamp_type)
519
520 hypotheses.append(decoded_hyps[0]) # best hypothesis
521 all_hypotheses.append(decoded_hyps)
522
523 if return_hypotheses:
524 return hypotheses, all_hypotheses
525
526 best_hyp_text = [h.text for h in hypotheses]
527 all_hyp_text = [h.text for hh in all_hypotheses for h in hh]
528 return best_hyp_text, all_hyp_text
529
530 else:
531 hypotheses = self.decode_hypothesis(
532 hypotheses_list, fold_consecutive
533 ) # type: List[Union[Hypothesis, NBestHypotheses]]
534
535 # If computing timestamps
536 if self.compute_timestamps is True:
537 # greedy decoding, can get high-level confidence scores
538 if return_hypotheses and (self.preserve_word_confidence or self.preserve_token_confidence):
539 hypotheses = self.compute_confidence(hypotheses)
540 else:
541 # remove unused token_repetitions from Hypothesis.text
542 for hyp in hypotheses:
543 hyp.text = hyp.text[:2]
544 timestamp_type = self.cfg.get('ctc_timestamp_type', 'all')
545 for hyp_idx in range(len(hypotheses)):
546 hypotheses[hyp_idx] = self.compute_ctc_timestamps(hypotheses[hyp_idx], timestamp_type)
547
548 if return_hypotheses:
549 return hypotheses, None
550
551 best_hyp_text = [h.text for h in hypotheses]
552 return best_hyp_text, None
553
554 def decode_hypothesis(
555 self, hypotheses_list: List[Hypothesis], fold_consecutive: bool
556 ) -> List[Union[Hypothesis, NBestHypotheses]]:
557 """
558 Decode a list of hypotheses into a list of strings.
559
560 Args:
561 hypotheses_list: List of Hypothesis.
562 fold_consecutive: Whether to collapse the ctc blank tokens or not.
563
564 Returns:
565 A list of strings.
566 """
567 for ind in range(len(hypotheses_list)):
568 # Extract the integer encoded hypothesis
569 hyp = hypotheses_list[ind]
570 prediction = hyp.y_sequence
571 predictions_len = hyp.length if hyp.length > 0 else None
572
573 if fold_consecutive:
574 if type(prediction) != list:
575 prediction = prediction.numpy().tolist()
576
577 if predictions_len is not None:
578 prediction = prediction[:predictions_len]
579
580 # CTC decoding procedure
581 decoded_prediction = []
582 token_lengths = [] # preserve token lengths
583 token_repetitions = [] # preserve number of repetitions per token
584
585 previous = self.blank_id
586 last_length = 0
587 last_repetition = 1
588
589 for pidx, p in enumerate(prediction):
590 if (p != previous or previous == self.blank_id) and p != self.blank_id:
591 decoded_prediction.append(p)
592
593 token_lengths.append(pidx - last_length)
594 last_length = pidx
595 token_repetitions.append(last_repetition)
596 last_repetition = 1
597
598 if p == previous and previous != self.blank_id:
599 last_repetition += 1
600
601 previous = p
602
603 if len(token_repetitions) > 0:
604 token_repetitions = token_repetitions[1:] + [last_repetition]
605
606 else:
607 if predictions_len is not None:
608 prediction = prediction[:predictions_len]
609 decoded_prediction = prediction[prediction != self.blank_id].tolist()
610                     token_lengths = [1] * len(decoded_prediction)  # preserve token lengths
611 token_repetitions = [1] * len(decoded_prediction) # preserve number of repetitions per token
612
613             # De-tokenize the integer tokens, unless computing timestamps
614 if self.compute_timestamps is True:
615 # keep the original predictions, wrap with the number of repetitions per token
616 # this is done so that `ctc_decoder_predictions_tensor()` can process this hypothesis
617 # in order to compute exact time stamps.
618 hypothesis = (decoded_prediction, token_lengths, token_repetitions)
619 else:
620 hypothesis = self.decode_tokens_to_str(decoded_prediction)
621
622 # TODO: remove
623 # collapse leading spaces before . , ? for PC models
624 if isinstance(hypothesis, str):  # skip when computing timestamps (hypothesis is a tuple)
625     hypothesis = re.sub(r'(\s+)([\.\,\?])', r'\2', hypothesis)
626 # Preserve this wrapped hypothesis or decoded text tokens.
627 hypotheses_list[ind].text = hypothesis
628
629 return hypotheses_list
630
631 def compute_confidence(self, hypotheses_list: List[Hypothesis]) -> List[Hypothesis]:
632 """
633 Computes high-level (per-token and/or per-word) confidence scores for a list of hypotheses.
634 Assumes that `frame_confidence` is present in the hypotheses.
635
636 Args:
637 hypotheses_list: List of Hypothesis.
638
639 Returns:
640 A list of hypotheses with high-level confidence scores.
641 """
642 for hyp in hypotheses_list:
643 if not isinstance(hyp.text, tuple) or len(hyp.text) != 3:
644 # the method must have been called in the wrong place
645 raise ValueError(
646 """Wrong format of the `text` attribute of a hypothesis.\n
647 Expected: (decoded_prediction, token_lengths, token_repetitions)\n
648 The method invocation is expected between .decode_hypothesis() and .compute_ctc_timestamps()"""
649 )
650 token_repetitions = hyp.text[2]
651 hyp.text = hyp.text[:2]
652 token_confidence = []
653 if self.exclude_blank_from_confidence:
654 non_blank_frame_confidence = hyp.non_blank_frame_confidence
655 i = 0
656 for tr in token_repetitions:
657 # token repetition can be zero
658 j = i + tr
659 token_confidence.append(self._aggregate_confidence(non_blank_frame_confidence[i:j]))
660 i = j
661 else:
662 # <blank> tokens are considered to belong to the last non-blank token, if any.
663 token_lengths = hyp.text[1]
664 if len(token_lengths) > 0:
665 ts = token_lengths[0]
666 for tl in token_lengths[1:] + [len(hyp.frame_confidence)]:
667 token_confidence.append(self._aggregate_confidence(hyp.frame_confidence[ts : ts + tl]))
668 ts += tl
669 hyp.token_confidence = token_confidence
670 if self.preserve_word_confidence:
671 for hyp in hypotheses_list:
672 hyp.word_confidence = self._aggregate_token_confidence(hyp)
673 return hypotheses_list
674
675 @abstractmethod
676 def decode_tokens_to_str(self, tokens: List[int]) -> str:
677 """
678 Implemented by subclass in order to decode a token id list into a string.
679
680 Args:
681 tokens: List of int representing the token ids.
682
683 Returns:
684 A decoded string.
685 """
686 raise NotImplementedError()
687
688 @abstractmethod
689 def decode_ids_to_tokens(self, tokens: List[int]) -> List[str]:
690 """
691 Implemented by subclass in order to decode a token id list into a token list.
692 A token list is the string representation of each token id.
693
694 Args:
695 tokens: List of int representing the token ids.
696
697 Returns:
698 A list of decoded tokens.
699 """
700 raise NotImplementedError()
701
702 def compute_ctc_timestamps(self, hypothesis: Hypothesis, timestamp_type: str = "all"):
703 """
704 Method to compute time stamps at char/subword, and word level given some hypothesis.
705 Requires the input hypothesis to contain a `text` field that is a tuple. The tuple contains
706 the ctc collapsed integer ids, and the length (in frames) of each token.
707
708 Args:
709 hypothesis: A Hypothesis object, with a wrapped `text` field.
710 The `text` field must contain a tuple with two values -
711 The ctc collapsed integer ids
712 A list of integers that represents the length (in frames) of each token.
713 timestamp_type: A str value that represents the type of time stamp calculated.
714 Can be one of "char", "word" or "all"
715
716 Returns:
717 A Hypothesis object with a modified `timestep` value, which is now a dictionary containing
718 the time stamp information.
719 """
720 assert timestamp_type in ['char', 'word', 'all']
721
722 # Unpack the temporary storage, and set the decoded predictions
723 decoded_prediction, token_lengths = hypothesis.text
724 hypothesis.text = decoded_prediction
725
726 # Retrieve offsets
727 char_offsets = word_offsets = None
728 char_offsets = self._compute_offsets(hypothesis, token_lengths, self.blank_id)
729
730 # Assert number of offsets and hypothesis tokens are 1:1 match.
731 if len(char_offsets) != len(hypothesis.text):
732 raise ValueError(
733 f"`char_offsets`: {char_offsets} and `processed_tokens`: {hypothesis.text}"
734 " have to be of the same length, but are: "
735 f"`len(offsets)`: {len(char_offsets)} and `len(processed_tokens)`:"
736 f" {len(hypothesis.text)}"
737 )
738
739 # Correctly process the token ids to chars/subwords.
740 for i, char in enumerate(hypothesis.text):
741 char_offsets[i]["char"] = self.decode_tokens_to_str([char])
742
743 # detect char vs subword models
744 lens = [len(v["char"]) > 1 for v in char_offsets]
745 if any(lens):
746 text_type = 'subword'
747 else:
748 text_type = 'char'
749
750 # retrieve word offsets from character offsets
751 word_offsets = None
752 if timestamp_type in ['word', 'all']:
753 if text_type == 'char':
754 word_offsets = self._get_word_offsets_chars(char_offsets, word_delimiter_char=self.word_seperator)
755 else:
756 word_offsets = self._get_word_offsets_subwords_sentencepiece(
757 char_offsets,
758 hypothesis,
759 decode_ids_to_tokens=self.decode_ids_to_tokens,
760 decode_tokens_to_str=self.decode_tokens_to_str,
761 )
762
763 # attach results
764 if len(hypothesis.timestep) > 0:
765 timestep_info = hypothesis.timestep
766 else:
767 timestep_info = []
768
769 # Setup defaults
770 hypothesis.timestep = {"timestep": timestep_info}
771
772 # Add char / subword time stamps
773 if char_offsets is not None and timestamp_type in ['char', 'all']:
774 hypothesis.timestep['char'] = char_offsets
775
776 # Add word time stamps
777 if word_offsets is not None and timestamp_type in ['word', 'all']:
778 hypothesis.timestep['word'] = word_offsets
779
780 # Convert the token indices to text
781 hypothesis.text = self.decode_tokens_to_str(hypothesis.text)
782
783 return hypothesis
784
785 @staticmethod
786 def _compute_offsets(
787 hypothesis: Hypothesis, token_lengths: List[int], ctc_token: int
788 ) -> List[Dict[str, Union[str, int]]]:
789 """
790 Utility method that calculates the individual time indices where a token starts and ends.
791
792 Args:
793 hypothesis: A Hypothesis object that contains `text` field that holds the character / subword token
794 emitted at every time step after ctc collapse.
795 token_lengths: A list of ints representing the lengths of each emitted token.
796 ctc_token: The integer of the ctc blank token used during ctc collapse.
797
798 Returns:
799 A list of dictionaries, one per non-blank token, each containing "char", "start_offset" and "end_offset".
800 """
801 start_index = 0
802
803 # If the exact timestep information is available, utilize the 1st non-ctc blank token timestep
804 # as the start index.
805 if hypothesis.timestep is not None and len(hypothesis.timestep) > 0:
806 start_index = max(0, hypothesis.timestep[0] - 1)
807
808 # Construct the start and end indices brackets
809 end_indices = np.asarray(token_lengths).cumsum()
810 start_indices = np.concatenate(([start_index], end_indices[:-1]))
811
812 # Merge the results per token into a list of dictionaries
813 offsets = [
814 {"char": t, "start_offset": s, "end_offset": e}
815 for t, s, e in zip(hypothesis.text, start_indices, end_indices)
816 ]
817
818 # Filter out CTC token
819 offsets = list(filter(lambda offset: offset["char"] != ctc_token, offsets))
820 return offsets
821
822 @staticmethod
823 def _get_word_offsets_chars(
824 offsets: List[Dict[str, Union[str, float]]], word_delimiter_char: str = " "
825 ) -> List[Dict[str, Union[str, float]]]:
826 """
827 Utility method which constructs word time stamps out of character time stamps.
828
829 References:
830 This code is a port of the Hugging Face code for word time stamp construction.
831
832 Args:
833 offsets: A list of dictionaries, each containing "char", "start_offset" and "end_offset".
834 word_delimiter_char: Character token that represents the word delimiter. By default, " ".
835
836 Returns:
837 A list of dictionaries containing the word offsets. Each item contains "word", "start_offset" and
838 "end_offset".
839 """
840 word_offsets = []
841
842 last_state = "SPACE"
843 word = ""
844 start_offset = 0
845 end_offset = 0
846 for i, offset in enumerate(offsets):
847 char = offset["char"]
848 state = "SPACE" if char == word_delimiter_char else "WORD"
849
850 if state == last_state:
851 # If we are in the same state as before, we simply repeat what we've done before
852 end_offset = offset["end_offset"]
853 word += char
854 else:
855 # Switching state
856 if state == "SPACE":
857 # Finishing a word
858 word_offsets.append({"word": word, "start_offset": start_offset, "end_offset": end_offset})
859 else:
860 # Starting a new word
861 start_offset = offset["start_offset"]
862 end_offset = offset["end_offset"]
863 word = char
864
865 last_state = state
866 if last_state == "WORD":
867 word_offsets.append({"word": word, "start_offset": start_offset, "end_offset": end_offset})
868
869 return word_offsets
870
871 @staticmethod
872 def _get_word_offsets_subwords_sentencepiece(
873 offsets: List[Dict[str, Union[str, float]]],
874 hypothesis: Hypothesis,
875 decode_ids_to_tokens: Callable[[List[int]], str],
876 decode_tokens_to_str: Callable[[List[int]], str],
877 ) -> List[Dict[str, Union[str, float]]]:
878 """
879 Utility method which constructs word time stamps out of sub-word time stamps.
880
881 **Note**: Only supports Sentencepiece based tokenizers !
882
883 Args:
884 offsets: A list of dictionaries, each containing "char", "start_offset" and "end_offset".
885 hypothesis: Hypothesis object that contains `text` field, where each token is a sub-word id
886 after ctc collapse.
887 decode_ids_to_tokens: A Callable function that accepts a list of integers and maps it to a sub-word.
888 decode_tokens_to_str: A Callable function that accepts a list of integers and maps it to text / str.
889
890 Returns:
891 A list of dictionaries containing the word offsets. Each item contains "word", "start_offset" and
892 "end_offset".
893 """
894 word_offsets = []
895 built_token = []
896 previous_token_index = 0
897 # For every collapsed sub-word token
898 for i, char in enumerate(hypothesis.text):
899 # Compute the sub-word text representation, and the decoded text (stripped of sub-word markers).
900 token = decode_ids_to_tokens([char])[0]
901 token_text = decode_tokens_to_str([char])
902
903 # It is a sub-word token, or contains an identifier at the beginning such as _ or ## that was stripped
904 # after forcing partial text conversion of the token.
905 if token != token_text:
906 # If there are any partially or fully built sub-word token ids, construct to text.
907 # Note: This is "old" subword, that occurs *after* current sub-word has started.
908 if len(built_token) > 0:
909 word_offsets.append(
910 {
911 "word": decode_tokens_to_str(built_token),
912 "start_offset": offsets[previous_token_index]["start_offset"],
913 "end_offset": offsets[i]["start_offset"],
914 }
915 )
916
917 # Prepare list of new sub-word ids
918 built_token.clear()
919 built_token.append(char)
920 previous_token_index = i
921 else:
922 # If the token does not contain any sub-word start mark, then the sub-word has not completed yet
923 # Append to current sub-word list.
924 built_token.append(char)
925
926 # Inject the start offset of the first token to word offsets
927 # This is because we always delay the injection of the first sub-word due to the loop
928 # condition that checks whether the built token is ready or not.
929 # Therefore without this forced injection, the start_offset appears as off by 1.
930 if len(word_offsets) == 0:
931 # alaptev: sometimes word_offsets can be empty
932 if len(built_token) > 0:
933 word_offsets.append(
934 {
935 "word": decode_tokens_to_str(built_token),
936 "start_offset": offsets[0]["start_offset"],
937 "end_offset": offsets[-1]["end_offset"],
938 }
939 )
940 built_token.clear()
941 else:
942 word_offsets[0]["start_offset"] = offsets[0]["start_offset"]
943
944 # If there are any remaining tokens left, inject them all into the final word offset.
945 # Note: The start offset of this token is the start time of the first token inside build_token.
946 # Note: The end offset of this token is the end time of the last token inside build_token
947 if len(built_token) > 0:
948 word_offsets.append(
949 {
950 "word": decode_tokens_to_str(built_token),
951 "start_offset": offsets[-(len(built_token))]["start_offset"],
952 "end_offset": offsets[-1]["end_offset"],
953 }
954 )
955 built_token.clear()
956
957 return word_offsets
958
959 @property
960 def preserve_alignments(self):
961 return self._preserve_alignments
962
963 @preserve_alignments.setter
964 def preserve_alignments(self, value):
965 self._preserve_alignments = value
966
967 if hasattr(self, 'decoding'):
968 self.decoding.preserve_alignments = value
969
970 @property
971 def compute_timestamps(self):
972 return self._compute_timestamps
973
974 @compute_timestamps.setter
975 def compute_timestamps(self, value):
976 self._compute_timestamps = value
977
978 if hasattr(self, 'decoding'):
979 self.decoding.compute_timestamps = value
980
981 @property
982 def preserve_frame_confidence(self):
983 return self._preserve_frame_confidence
984
985 @preserve_frame_confidence.setter
986 def preserve_frame_confidence(self, value):
987 self._preserve_frame_confidence = value
988
989 if hasattr(self, 'decoding'):
990 self.decoding.preserve_frame_confidence = value
991
992
993 class CTCDecoding(AbstractCTCDecoding):
994 """
995 Used for performing CTC auto-regressive / non-auto-regressive decoding of the logprobs for character
996 based models.
997
998 Args:
999 decoding_cfg: A dict-like object which contains the following key-value pairs.
1000 strategy: str value which represents the type of decoding that can occur.
1001 Possible values are :
1002 - greedy (for greedy decoding).
1003 - beam (for DeepSpeed KenLM based decoding).
1004
1005 compute_timestamps: A bool flag, which determines whether to compute the character/subword, or
1006 word based timestamp mapping the output log-probabilities to discrete intervals of timestamps.
1007 The timestamps will be available in the returned Hypothesis.timestep as a dictionary.
1008
1009 ctc_timestamp_type: A str value, which represents the types of timestamps that should be calculated.
1010 Can take the following values - "char" for character/subword time stamps, "word" for word level
1011 time stamps and "all" (default), for both character level and word level time stamps.
1012
1013 word_seperator: Str token representing the separator between words.
1014
1015 preserve_alignments: Bool flag which preserves the history of logprobs generated during
1016 decoding (sample / batched). When set to true, the Hypothesis will contain
1017 the non-null value for `logprobs` in it. Here, `logprobs` is a torch.Tensors.
1018
1019 confidence_cfg: A dict-like object which contains the following key-value pairs related to confidence
1020 scores. In order to obtain hypotheses with confidence scores, please utilize
1021 `ctc_decoder_predictions_tensor` function with the `preserve_frame_confidence` flag set to True.
1022
1023 preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores
1024 generated during decoding. When set to true, the Hypothesis will contain
1025 the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of floats.
1026 preserve_token_confidence: Bool flag which preserves the history of per-token confidence scores
1027 generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
1028 the non-null value for `token_confidence` in it. Here, `token_confidence` is a List of floats.
1029
1030 The length of the list corresponds to the number of recognized tokens.
1031 preserve_word_confidence: Bool flag which preserves the history of per-word confidence scores
1032 generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
1033 the non-null value for `word_confidence` in it. Here, `word_confidence` is a List of floats.
1034
1035 The length of the list corresponds to the number of recognized words.
1036 exclude_blank: Bool flag indicating that blank token confidence scores are to be excluded
1037 from the `token_confidence`.
1038 aggregation: Which aggregation type to use for collapsing per-token confidence into per-word confidence.
1039 Valid options are `mean`, `min`, `max`, `prod`.
1040 method_cfg: A dict-like object which contains the method name and settings to compute per-frame
1041 confidence scores.
1042
1043 name: The method name (str).
1044 Supported values:
1045 - 'max_prob' for using the maximum token probability as a confidence.
1046 - 'entropy' for using a normalized entropy of a log-likelihood vector.
1047
1048 entropy_type: Which type of entropy to use (str).
1049 Used if confidence_method_cfg.name is set to `entropy`.
1050 Supported values:
1051 - 'gibbs' for the (standard) Gibbs entropy. If the alpha (ฮฑ) is provided,
1052 the formula is the following: H_ฮฑ = -sum_i((p^ฮฑ_i)*log(p^ฮฑ_i)).
1053 Note that for this entropy, the alpha should comply the following inequality:
1054 (log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= ฮฑ <= (1+log(V-1))/log(V-1)
1055 where V is the model vocabulary size.
1056 - 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
1057 Tsallis entropy formula is the following: H_ฮฑ = 1/(ฮฑ-1)*(1-sum_i(p^ฮฑ_i)),
1058 where ฮฑ is a parameter. When ฮฑ == 1, it works like the Gibbs entropy.
1059 More: https://en.wikipedia.org/wiki/Tsallis_entropy
1060 - 'renyi' for the Rรฉnyi entropy.
1061 Rรฉnyi entropy formula is the following: H_ฮฑ = 1/(1-ฮฑ)*log_2(sum_i(p^ฮฑ_i)),
1062 where ฮฑ is a parameter. When ฮฑ == 1, it works like the Gibbs entropy.
1063 More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
1064
1065 alpha: Power scale for logsoftmax (ฮฑ for entropies). Here we restrict it to be > 0.
1066 When the alpha equals one, scaling is not applied to 'max_prob',
1067 and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
1068
1069 entropy_norm: A mapping of the entropy value to the interval [0,1].
1070 Supported values:
1071 - 'lin' for using the linear mapping.
1072 - 'exp' for using exponential mapping with linear shift.
1073
1074 batch_dim_index: Index of the batch dimension of ``targets`` and ``predictions`` parameters of
1075 ``ctc_decoder_predictions_tensor`` methods. Can be either 0 or 1.
1076
1077 The config may further contain the following sub-dictionaries:
1078 "greedy":
1079 preserve_alignments: Same as above, overrides above value.
1080 compute_timestamps: Same as above, overrides above value.
1081 preserve_frame_confidence: Same as above, overrides above value.
1082 confidence_method_cfg: Same as above, overrides confidence_cfg.method_cfg.
1083
1084 "beam":
1085 beam_size: int, defining the beam size for beam search. Must be >= 1.
1086 If beam_size == 1, will perform cached greedy search. This might give slightly different
1087 results compared to the greedy search above.
1088
1089 return_best_hypothesis: optional bool, whether to return just the best hypothesis or all of the
1090 hypotheses after beam search has concluded. This flag is set by default.
1091
1092 beam_alpha: float, the strength of the Language model on the final score of a token.
1093 final_score = acoustic_score + beam_alpha * lm_score + beam_beta * seq_length.
1094
1095 beam_beta: float, the strength of the sequence length penalty on the final score of a token.
1096 final_score = acoustic_score + beam_alpha * lm_score + beam_beta * seq_length.
1097
1098 kenlm_path: str, path to a KenLM ARPA or .binary file (depending on the strategy chosen).
1099 If the path is invalid (file is not found at path), will raise a deferred error at the moment
1100 of calculation of beam search, so that users may update / change the decoding strategy
1101 to point to the correct file.
1102
1103 blank_id: The id of the CTC blank token.
1104 """
1105
1106 def __init__(
1107 self, decoding_cfg, vocabulary,
1108 ):
1109 blank_id = len(vocabulary)
1110 self.vocabulary = vocabulary
1111 self.labels_map = dict([(i, vocabulary[i]) for i in range(len(vocabulary))])
1112
1113 super().__init__(decoding_cfg=decoding_cfg, blank_id=blank_id)
1114
1115 # Finalize Beam Search Decoding framework
1116 if isinstance(self.decoding, ctc_beam_decoding.AbstractBeamCTCInfer):
1117 self.decoding.set_vocabulary(self.vocabulary)
1118 self.decoding.set_decoding_type('char')
1119
1120 def _aggregate_token_confidence(self, hypothesis: Hypothesis) -> List[float]:
1121 """
1122 Implemented by subclass in order to aggregate token confidence to a word-level confidence.
1123
1124 Args:
1125 hypothesis: Hypothesis
1126
1127 Returns:
1128 A list of word-level confidence scores.
1129 """
1130 return self._aggregate_token_confidence_chars(
1131 self.decode_tokens_to_str(hypothesis.text[0]).split(), hypothesis.token_confidence
1132 )
1133
1134 def decode_tokens_to_str(self, tokens: List[int]) -> str:
1135 """
1136 Implemented by subclass in order to decode a token id list into a string.
1137
1138 Args:
1139 tokens: List of int representing the token ids.
1140
1141 Returns:
1142 A decoded string.
1143 """
1144 hypothesis = ''.join(self.decode_ids_to_tokens(tokens))
1145 return hypothesis
1146
1147 def decode_ids_to_tokens(self, tokens: List[int]) -> List[str]:
1148 """
1149 Implemented by subclass in order to decode a token id list into a token list.
1150 A token list is the string representation of each token id.
1151
1152 Args:
1153 tokens: List of int representing the token ids.
1154
1155 Returns:
1156 A list of decoded tokens.
1157 """
1158 token_list = [self.labels_map[c] for c in tokens if c != self.blank_id]
1159 return token_list
1160
1161
1162 class WER(Metric):
1163 """
1164 This metric computes numerator and denominator for Overall Word Error Rate (WER) between prediction and reference
1165 texts. When doing distributed training/evaluation the result of ``res=WER(predictions, targets, target_lengths)``
1166 calls will be all-reduced between all workers using SUM operations. Here ``res`` contains three numbers
1167 ``res=[wer, total_levenshtein_distance, total_number_of_words]``.
1168
1169 If used with PytorchLightning LightningModule, include wer_numerator and wer_denominators inside validation_step
1170 results. Then aggregate (sum) them at the end of the validation epoch to correctly compute validation WER.
1171
1172 Example:
1173 def validation_step(self, batch, batch_idx):
1174 ...
1175 wer_num, wer_denom = self.__wer(predictions, transcript, transcript_len)
1176 self.val_outputs = {'val_loss': loss_value, 'val_wer_num': wer_num, 'val_wer_denom': wer_denom}
1177 return self.val_outputs
1178
1179 def on_validation_epoch_end(self):
1180 ...
1181 wer_num = torch.stack([x['val_wer_num'] for x in self.val_outputs]).sum()
1182 wer_denom = torch.stack([x['val_wer_denom'] for x in self.val_outputs]).sum()
1183 tensorboard_logs = {'validation_loss': val_loss_mean, 'validation_avg_wer': wer_num / wer_denom}
1184 self.val_outputs.clear() # free memory
1185 return {'val_loss': val_loss_mean, 'log': tensorboard_logs}
1186
1187 Args:
1188 decoding: An instance of CTCDecoding.
1189 use_cer: Whether to use Character Error Rate instead of Word Error Rate.
1190 log_prediction: Whether to log a single decoded sample per call.
1191 fold_consecutive: Whether repeated consecutive characters should be folded into one when decoding.
1192
1193 Returns:
1194 res: a tuple of 3 zero dimensional float32 ``torch.Tensor`` objects: a WER score, a sum of Levenshtein
1195 distances for all prediction - reference pairs, total number of words in all references.
1196 """
1197
1198 full_state_update: bool = True
1199
1200 def __init__(
1201 self,
1202 decoding: CTCDecoding,
1203 use_cer=False,
1204 log_prediction=True,
1205 fold_consecutive=True,
1206 dist_sync_on_step=False,
1207 ):
1208 super().__init__(dist_sync_on_step=dist_sync_on_step)
1209
1210 self.decoding = decoding
1211 self.use_cer = use_cer
1212 self.log_prediction = log_prediction
1213 self.fold_consecutive = fold_consecutive
1214
1215 self.add_state("scores", default=torch.tensor(0), dist_reduce_fx='sum', persistent=False)
1216 self.add_state("words", default=torch.tensor(0), dist_reduce_fx='sum', persistent=False)
1217
1218 def update(
1219 self,
1220 predictions: torch.Tensor,
1221 targets: torch.Tensor,
1222 target_lengths: torch.Tensor,
1223 predictions_lengths: torch.Tensor = None,
1224 ):
1225 """
1226 Updates metric state.
1227 Args:
1228 predictions: an integer torch.Tensor of shape ``[Batch, Time, {Vocabulary}]`` (if ``batch_dim_index == 0``) or
1229 ``[Time, Batch]`` (if ``batch_dim_index == 1``)
1230 targets: an integer torch.Tensor of shape ``[Batch, Time]`` (if ``batch_dim_index == 0``) or
1231 ``[Time, Batch]`` (if ``batch_dim_index == 1``)
1232 target_lengths: an integer torch.Tensor of shape ``[Batch]``
1233 predictions_lengths: an integer torch.Tensor of shape ``[Batch]``
1234 """
1235 words = 0
1236 scores = 0
1237 references = []
1238 with torch.no_grad():
1239 # prediction_cpu_tensor = tensors[0].long().cpu()
1240 targets_cpu_tensor = targets.long().cpu()
1241 tgt_lengths_cpu_tensor = target_lengths.long().cpu()
1242
1243 # iterate over batch
1244 for ind in range(targets_cpu_tensor.shape[0]):
1245 tgt_len = tgt_lengths_cpu_tensor[ind].item()
1246 target = targets_cpu_tensor[ind][:tgt_len].numpy().tolist()
1247 reference = self.decoding.decode_tokens_to_str(target)
1248 references.append(reference)
1249
1250 hypotheses, _ = self.decoding.ctc_decoder_predictions_tensor(
1251 predictions, predictions_lengths, fold_consecutive=self.fold_consecutive
1252 )
1253
1254 if self.log_prediction:
1255 logging.info(f"\n")
1256 logging.info(f"reference:{references[0]}")
1257 logging.info(f"predicted:{hypotheses[0]}")
1258
1259 for h, r in zip(hypotheses, references):
1260 if self.use_cer:
1261 h_list = list(h)
1262 r_list = list(r)
1263 else:
1264 h_list = h.split()
1265 r_list = r.split()
1266 words += len(r_list)
1267 # Compute Levenshtein distance
1268 scores += editdistance.eval(h_list, r_list)
1269
1270 self.scores = torch.tensor(scores, device=self.scores.device, dtype=self.scores.dtype)
1271 self.words = torch.tensor(words, device=self.words.device, dtype=self.words.dtype)
1272 # return torch.tensor([scores, words]).to(predictions.device)
1273
1274 def compute(self):
1275 scores = self.scores.detach().float()
1276 words = self.words.detach().float()
1277 return scores / words, scores, words
1278
1279
1280 @dataclass
1281 class CTCDecodingConfig:
1282 strategy: str = "greedy"
1283
1284 # preserve decoding alignments
1285 preserve_alignments: Optional[bool] = None
1286
1287 # compute ctc time stamps
1288 compute_timestamps: Optional[bool] = None
1289
1290 # token representing word separator
1291 word_seperator: str = " "
1292
1293 # type of timestamps to calculate
1294 ctc_timestamp_type: str = "all" # can be char, word or all for both
1295
1296 # batch dimension
1297 batch_dim_index: int = 0
1298
1299 # greedy decoding config
1300 greedy: ctc_greedy_decoding.GreedyCTCInferConfig = ctc_greedy_decoding.GreedyCTCInferConfig()
1301
1302 # beam decoding config
1303 beam: ctc_beam_decoding.BeamCTCInferConfig = ctc_beam_decoding.BeamCTCInferConfig(beam_size=4)
1304
1305 # confidence config
1306 confidence_cfg: ConfidenceConfig = ConfidenceConfig()
1307
1308 # can be used to change temperature for decoding
1309 temperature: float = 1.0
1310
[end of nemo/collections/asr/metrics/wer.py]
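The greedy CTC collapse performed in `decode_hypothesis` above can be sketched as a standalone helper (a minimal sketch; `ctc_collapse` is a hypothetical name, not part of the NeMo API):

```python
def ctc_collapse(prediction, blank_id):
    """Collapse repeated tokens and drop CTC blanks, mirroring the
    greedy loop in AbstractCTCDecoding.decode_hypothesis."""
    decoded = []
    previous = blank_id
    for p in prediction:
        # Emit a token when it differs from the previous frame (or the
        # previous frame was blank) and is itself not the blank token.
        if (p != previous or previous == blank_id) and p != blank_id:
            decoded.append(p)
        previous = p
    return decoded
```

Note that repeated tokens separated by a blank are kept as two emissions, e.g. `[1, 3, 1]` with `blank_id=3` decodes to `[1, 1]`, while `[1, 1, 1]` collapses to `[1]`.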
[start of nemo/collections/asr/models/configs/aligner_config.py]
1 # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from dataclasses import dataclass
16
17 from nemo.collections.asr.parts.k2.classes import GraphModuleConfig
18
19
20 @dataclass
21 class AlignerCTCConfig:
22 prob_suppress_index: int = -1
23 prob_suppress_value: float = 1.0
24
25
26 @dataclass
27 class AlignerRNNTConfig:
28 predictor_window_size: int = 0
29 predictor_step_size: int = 1
30
31
32 @dataclass
33 class AlignerWrapperModelConfig:
34 alignment_type: str = "forced"
35 word_output: bool = True
36 cpu_decoding: bool = False
37 decode_batch_size: int = 0
38 ctc_cfg: AlignerCTCConfig = AlignerCTCConfig()
39 rnnt_cfg: AlignerRNNTConfig = AlignerRNNTConfig()
40
41
42 @dataclass
43 class K2AlignerWrapperModelConfig(AlignerWrapperModelConfig):
44 decoder_module_cfg: GraphModuleConfig = GraphModuleConfig()
45
[end of nemo/collections/asr/models/configs/aligner_config.py]
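The configs above use class-instance defaults (e.g. `ctc_cfg: AlignerCTCConfig = AlignerCTCConfig()`), which makes the default sub-config a single object shared by every instance that does not override it. A minimal self-contained sketch of the same nesting pattern using `default_factory` instead (class names here are hypothetical stand-ins, not the NeMo classes):

```python
from dataclasses import dataclass, field


@dataclass
class CTCConfig:
    prob_suppress_index: int = -1
    prob_suppress_value: float = 1.0


@dataclass
class WrapperConfig:
    alignment_type: str = "forced"
    # default_factory builds a fresh sub-config per instance, so mutating
    # one wrapper's ctc_cfg cannot leak into another wrapper's default.
    ctc_cfg: CTCConfig = field(default_factory=CTCConfig)
```

With the shared-instance style, mutating `a.ctc_cfg` would also be visible through `b.ctc_cfg`; the factory style keeps each instance independent.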
[start of nemo/collections/asr/models/configs/asr_models_config.py]
1 # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from dataclasses import dataclass
16 from typing import Any, Dict, List, Optional
17
18 from omegaconf import MISSING
19
20 import nemo.core.classes.dataset
21 from nemo.collections.asr.metrics.wer import CTCDecodingConfig
22 from nemo.collections.asr.modules.audio_preprocessing import (
23 AudioToMelSpectrogramPreprocessorConfig,
24 SpectrogramAugmentationConfig,
25 )
26 from nemo.collections.asr.modules.conv_asr import ConvASRDecoderConfig, ConvASREncoderConfig
27 from nemo.core.config import modelPT as model_cfg
28
29
30 @dataclass
31 class ASRDatasetConfig(nemo.core.classes.dataset.DatasetConfig):
32 manifest_filepath: Optional[Any] = None
33 sample_rate: int = MISSING
34 labels: List[str] = MISSING
35 trim_silence: bool = False
36
37 # Tarred dataset support
38 is_tarred: bool = False
39 tarred_audio_filepaths: Optional[Any] = None
40 tarred_shard_strategy: str = "scatter"
41 shard_manifests: bool = False
42 shuffle_n: int = 0
43
44 # Optional
45 int_values: Optional[int] = None
46 augmentor: Optional[Dict[str, Any]] = None
47 max_duration: Optional[float] = None
48 min_duration: Optional[float] = None
49 max_utts: int = 0
50 blank_index: int = -1
51 unk_index: int = -1
52 normalize: bool = False
53 trim: bool = True
54 parser: Optional[str] = 'en'
55 eos_id: Optional[int] = None
56 bos_id: Optional[int] = None
57 pad_id: int = 0
58 use_start_end_token: bool = False
59 return_sample_id: Optional[bool] = False
60
61 # bucketing params
62 bucketing_strategy: str = "synced_randomized"
63 bucketing_batch_size: Optional[Any] = None
64 bucketing_weights: Optional[List[int]] = None
65
66
67 @dataclass
68 class EncDecCTCConfig(model_cfg.ModelConfig):
69 # Model global arguments
70 sample_rate: int = 16000
71 repeat: int = 1
72 dropout: float = 0.0
73 separable: bool = False
74 labels: List[str] = MISSING
75
76 # Dataset configs
77 train_ds: ASRDatasetConfig = ASRDatasetConfig(manifest_filepath=None, shuffle=True)
78 validation_ds: ASRDatasetConfig = ASRDatasetConfig(manifest_filepath=None, shuffle=False)
79 test_ds: ASRDatasetConfig = ASRDatasetConfig(manifest_filepath=None, shuffle=False)
80
81 # Optimizer / Scheduler config
82 optim: Optional[model_cfg.OptimConfig] = model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
83
84 # Model component configs
85 preprocessor: AudioToMelSpectrogramPreprocessorConfig = AudioToMelSpectrogramPreprocessorConfig()
86 spec_augment: Optional[SpectrogramAugmentationConfig] = SpectrogramAugmentationConfig()
87 encoder: ConvASREncoderConfig = ConvASREncoderConfig()
88 decoder: ConvASRDecoderConfig = ConvASRDecoderConfig()
89 decoding: CTCDecodingConfig = CTCDecodingConfig()
90
91
92 @dataclass
93 class EncDecCTCModelConfig(model_cfg.NemoConfig):
94 model: EncDecCTCConfig = EncDecCTCConfig()
95
96
97 @dataclass
98 class CacheAwareStreamingConfig:
99 chunk_size: int = 0 # the size of each chunk at each step, it can be a list of two integers to specify different chunk sizes for the first step and others
100 shift_size: int = 0 # the size of the shift in each step, it can be a list of two integers to specify different shift sizes for the first step and others
101
102 cache_drop_size: int = 0 # the number of steps to drop from the cache
103 last_channel_cache_size: int = 0 # the size of the needed cache for last channel layers
104
105 valid_out_len: int = 0 # the number of the steps in the final output which are valid (have the same value as in the offline mode)
106
107 pre_encode_cache_size: int = 0 # the size of the needed cache for the pre-encoding part of the model to avoid caching inside the pre-encoding layers
108 drop_extra_pre_encoded: int = 0 # the number of steps to get dropped after the pre-encoding layer
109
110 last_channel_num: int = 0 # number of the last channel layers (like MHA layers) which need caching in the model
111 last_time_num: int = 0 # number of the last time layers (like convolutions) which need caching in the model
112
[end of nemo/collections/asr/models/configs/asr_models_config.py]
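The dataclasses above follow NeMo's structured-config pattern: fields set to `MISSING` are mandatory and must be supplied (e.g. via a merged YAML) before the config is used, while the remaining fields carry defaults. A stdlib-only sketch of that contract, using a string sentinel in place of `omegaconf.MISSING` (which the real code relies on), with illustrative names:

```python
from dataclasses import asdict, dataclass, replace

MISSING = "???"  # illustrative stand-in for omegaconf.MISSING

@dataclass
class DatasetConfig:
    manifest_filepath: str = MISSING  # mandatory, like ASRDatasetConfig.sample_rate
    sample_rate: int = MISSING        # mandatory
    shuffle: bool = False             # optional, has a default

def finalize(cfg: DatasetConfig, **overrides) -> DatasetConfig:
    """Apply user overrides, then reject any field still left MISSING."""
    merged = replace(cfg, **overrides)
    for name, value in asdict(merged).items():
        if value == MISSING:
            raise ValueError(f"mandatory field '{name}' was not set")
    return merged

cfg = finalize(DatasetConfig(), manifest_filepath="train.json", sample_rate=16000)
```

In the real code, OmegaConf performs this merge-and-validate step when the structured config is composed with user-provided YAML.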
[start of nemo/collections/asr/models/configs/classification_models_config.py]
1 # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from dataclasses import dataclass
16 from typing import Any, Dict, List, Optional
17
18 from omegaconf import MISSING
19
20 import nemo.core.classes.dataset
21 from nemo.collections.asr.modules.audio_preprocessing import (
22 AudioToMFCCPreprocessorConfig,
23 CropOrPadSpectrogramAugmentationConfig,
24 SpectrogramAugmentationConfig,
25 )
26 from nemo.collections.asr.modules.conv_asr import ConvASRDecoderClassificationConfig, ConvASREncoderConfig
27 from nemo.core.config import modelPT as model_cfg
28
29
30 @dataclass
31 class EncDecClassificationDatasetConfig(nemo.core.classes.dataset.DatasetConfig):
32 manifest_filepath: Optional[str] = None
33 sample_rate: int = MISSING
34 labels: List[str] = MISSING
35 trim_silence: bool = False
36
37 # Tarred dataset support
38 is_tarred: bool = False
39 tarred_audio_filepaths: Optional[str] = None
40 tarred_shard_strategy: str = "scatter"
41 shuffle_n: int = 0
42
43 # Optional
44 int_values: Optional[int] = None
45 augmentor: Optional[Dict[str, Any]] = None
46 max_duration: Optional[float] = None
47 min_duration: Optional[float] = None
48 cal_labels_occurrence: Optional[bool] = False
49
50 # VAD Optional
51 vad_stream: Optional[bool] = None
52 window_length_in_sec: float = 0.31
53 shift_length_in_sec: float = 0.01
54 normalize_audio: bool = False
55 is_regression_task: bool = False
56
57 # bucketing params
58 bucketing_strategy: str = "synced_randomized"
59 bucketing_batch_size: Optional[Any] = None
60 bucketing_weights: Optional[List[int]] = None
61
62
63 @dataclass
64 class EncDecClassificationConfig(model_cfg.ModelConfig):
65 # Model global arguments
66 sample_rate: int = 16000
67 repeat: int = 1
68 dropout: float = 0.0
69 separable: bool = True
70 kernel_size_factor: float = 1.0
71 labels: List[str] = MISSING
72 timesteps: int = MISSING
73
74 # Dataset configs
75 train_ds: EncDecClassificationDatasetConfig = EncDecClassificationDatasetConfig(
76 manifest_filepath=None, shuffle=True, trim_silence=False
77 )
78 validation_ds: EncDecClassificationDatasetConfig = EncDecClassificationDatasetConfig(
79 manifest_filepath=None, shuffle=False
80 )
81 test_ds: EncDecClassificationDatasetConfig = EncDecClassificationDatasetConfig(
82 manifest_filepath=None, shuffle=False
83 )
84
85 # Optimizer / Scheduler config
86 optim: Optional[model_cfg.OptimConfig] = model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
87
88 # Model component configs
89 preprocessor: AudioToMFCCPreprocessorConfig = AudioToMFCCPreprocessorConfig()
90 spec_augment: Optional[SpectrogramAugmentationConfig] = SpectrogramAugmentationConfig()
91 crop_or_pad_augment: Optional[CropOrPadSpectrogramAugmentationConfig] = CropOrPadSpectrogramAugmentationConfig(
92 audio_length=timesteps
93 )
94
95 encoder: ConvASREncoderConfig = ConvASREncoderConfig()
96 decoder: ConvASRDecoderClassificationConfig = ConvASRDecoderClassificationConfig()
97
98
99 @dataclass
100 class EncDecClassificationModelConfig(model_cfg.NemoConfig):
101 model: EncDecClassificationConfig = EncDecClassificationConfig()
102
[end of nemo/collections/asr/models/configs/classification_models_config.py]
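One subtlety worth noting in `EncDecClassificationConfig` above: the default `CropOrPadSpectrogramAugmentationConfig(audio_length=timesteps)` is evaluated once, while the class body executes, so `timesteps` there resolves to the class-level default (`MISSING`), not to any per-instance value; the field must be re-propagated later. A small stdlib demonstration of that Python behavior (class names are illustrative, not the NeMo API):

```python
from dataclasses import dataclass

MISSING = "???"  # stand-in for omegaconf.MISSING

@dataclass(frozen=True)
class Inner:
    audio_length: object = None

@dataclass
class Outer:
    timesteps: object = MISSING
    # Evaluated once, while the class body executes: `timesteps` here is
    # the class-body name bound to MISSING, not the instance attribute.
    inner: Inner = Inner(audio_length=timesteps)
```

Passing `timesteps=128` to `Outer` does not change `inner.audio_length`, which keeps the captured `MISSING` value.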
[start of nemo/collections/asr/models/configs/diarizer_config.py]
1 # Copyright (c) 2023, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from dataclasses import asdict, dataclass
16 from typing import Any, Dict, Optional, Tuple, Union
17
18
19 @dataclass
20 class DiarizerComponentConfig:
21 """Dataclass to imitate HydraConfig dict when accessing parameters."""
22
23 def get(self, name: str, default: Optional[Any] = None):
24 return getattr(self, name, default)
25
26 def __iter__(self):
27 for key in asdict(self):
28 yield key
29
30 def dict(self) -> Dict:
31 return asdict(self)
32
33
34 @dataclass
35 class ASRDiarizerCTCDecoderParams:
36 pretrained_language_model: Optional[str] = None # KenLM model file: .arpa model file or .bin binary file.
37 beam_width: int = 32
38 alpha: float = 0.5
39 beta: float = 2.5
40
41
42 @dataclass
43 class ASRRealigningLMParams:
44 # Provide a KenLM language model in .arpa format.
45 arpa_language_model: Optional[str] = None
46 # Min number of words for the left context.
47 min_number_of_words: int = 3
48 # Max number of words for the right context.
49 max_number_of_words: int = 10
50 # The threshold for the difference between two log probability values from two hypotheses.
51 logprob_diff_threshold: float = 1.2
52
53
54 @dataclass
55 class ASRDiarizerParams(DiarizerComponentConfig):
56     # If True, speech segmentation for diarization is based on word timestamps from ASR inference.
57 asr_based_vad: bool = False
58 # Threshold (in sec) that caps the gap between two words when generating VAD timestamps using ASR based VAD.
59 asr_based_vad_threshold: float = 1.0
60 # Batch size can be dependent on each ASR model. Default batch sizes are applied if set to null.
61 asr_batch_size: Optional[int] = None
62 # Native decoder delay. null is recommended to use the default values for each ASR model.
63 decoder_delay_in_sec: Optional[float] = None
64 # Offset to set a reference point from the start of the word. Recommended range of values is [-0.05 0.2].
65 word_ts_anchor_offset: Optional[float] = None
66 # Select which part of the word timestamp we want to use. The options are: 'start', 'end', 'mid'.
67 word_ts_anchor_pos: str = "start"
68 # Fix the word timestamp using VAD output. You must provide a VAD model to use this feature.
69 fix_word_ts_with_VAD: bool = False
70 # If True, use colored text to distinguish speakers in the output transcript.
71 colored_text: bool = False
72 # If True, the start and end time of each speaker turn is printed in the output transcript.
73 print_time: bool = True
74 # If True, the output transcript breaks the line to fix the line width (default is 90 chars)
75 break_lines: bool = False
76
77
78 @dataclass
79 class ASRDiarizerConfig(DiarizerComponentConfig):
80 model_path: Optional[str] = "stt_en_conformer_ctc_large"
81 parameters: ASRDiarizerParams = ASRDiarizerParams()
82 ctc_decoder_parameters: ASRDiarizerCTCDecoderParams = ASRDiarizerCTCDecoderParams()
83 realigning_lm_parameters: ASRRealigningLMParams = ASRRealigningLMParams()
84
85
86 @dataclass
87 class VADParams(DiarizerComponentConfig):
88 window_length_in_sec: float = 0.15 # Window length in sec for VAD context input
89 shift_length_in_sec: float = 0.01 # Shift length in sec for generate frame level VAD prediction
90 smoothing: Union[str, bool] = "median" # False or type of smoothing method (eg: median)
91 overlap: float = 0.5 # Overlap ratio for overlapped mean/median smoothing filter
92     onset: float = 0.1  # Onset threshold for detecting the beginning of a speech segment
93     offset: float = 0.1  # Offset threshold for detecting the end of a speech segment
94 pad_onset: float = 0.1 # Adding durations before each speech segment
95 pad_offset: float = 0 # Adding durations after each speech segment
96 min_duration_on: float = 0 # Threshold for small non_speech deletion
97 min_duration_off: float = 0.2 # Threshold for short speech segment deletion
98 filter_speech_first: bool = True
99
100
101 @dataclass
102 class VADConfig(DiarizerComponentConfig):
103 model_path: str = "vad_multilingual_marblenet" # .nemo local model path or pretrained VAD model name
104 external_vad_manifest: Optional[str] = None
105 parameters: VADParams = VADParams()
106
107
108 @dataclass
109 class SpeakerEmbeddingsParams(DiarizerComponentConfig):
110     # Window length(s) in sec (floating-point). Either a number or a list, e.g. 1.5 or [1.5, 1.0, 0.5].
111     window_length_in_sec: Tuple[float] = (1.5, 1.25, 1.0, 0.75, 0.5)
112     # Shift length(s) in sec (floating-point). Either a number or a list, e.g. 0.75 or [0.75, 0.5, 0.25].
113     shift_length_in_sec: Tuple[float] = (0.75, 0.625, 0.5, 0.375, 0.25)
114     # Weight for each scale. None (for single scale) or a list matching the window/shift scale count, e.g. [0.33, 0.33, 0.33].
115     multiscale_weights: Tuple[float] = (1, 1, 1, 1, 1)
116     # Save speaker embeddings in pickle format. Set True if the clustering result will be reused by other models, such as MSDD.
117 save_embeddings: bool = True
118
119
120 @dataclass
121 class SpeakerEmbeddingsConfig(DiarizerComponentConfig):
122 # .nemo local model path or pretrained model name (titanet_large, ecapa_tdnn or speakerverification_speakernet)
123 model_path: Optional[str] = None
124 parameters: SpeakerEmbeddingsParams = SpeakerEmbeddingsParams()
125
126
127 @dataclass
128 class ClusteringParams(DiarizerComponentConfig):
129     # If True, use the number of speakers provided in the manifest file.
130 oracle_num_speakers: bool = False
131 # Max number of speakers for each recording. If an oracle number of speakers is passed, this value is ignored.
132 max_num_speakers: int = 8
133 # If the number of segments is lower than this number, enhanced speaker counting is activated.
134 enhanced_count_thres: int = 80
135 # Determines the range of p-value search: 0 < p <= max_rp_threshold.
136 max_rp_threshold: float = 0.25
137     # The higher the number, the more p-values are examined, at the cost of longer runtime.
138 sparse_search_volume: int = 30
139 # If True, take a majority vote on multiple p-values to estimate the number of speakers.
140 maj_vote_spk_count: bool = False
141
142
143 @dataclass
144 class ClusteringConfig(DiarizerComponentConfig):
145 parameters: ClusteringParams = ClusteringParams()
146
147
148 @dataclass
149 class MSDDParams(DiarizerComponentConfig):
150     # If True, use the speaker embedding model stored in the checkpoint; otherwise, use the speaker embedding model provided in the config.
151 use_speaker_model_from_ckpt: bool = True
152 # Batch size for MSDD inference.
153 infer_batch_size: int = 25
154     # Sigmoid threshold for generating binarized speaker labels. Smaller values detect overlaps more aggressively.
155 sigmoid_threshold: Tuple[float] = (0.7,)
156 # If True, use oracle number of speaker and evaluate F1 score for the given speaker sequences. Default is False.
157 seq_eval_mode: bool = False
158     # If True, break the input audio clip into short sequences and calculate cluster-average embeddings for inference.
159     split_infer: bool = True
160     # The length of each short sequence when split_infer is True.
161 diar_window_length: int = 50
162     # If the estimated number of speakers is larger than this number, overlap speech is not estimated.
163 overlap_infer_spk_limit: int = 5
164
165
166 @dataclass
167 class MSDDConfig(DiarizerComponentConfig):
168 model_path: Optional[str] = "diar_msdd_telephonic"
169 parameters: MSDDParams = MSDDParams()
170
171
172 @dataclass
173 class DiarizerConfig(DiarizerComponentConfig):
174 manifest_filepath: Optional[str] = None
175 out_dir: Optional[str] = None
176 oracle_vad: bool = False # If True, uses RTTM files provided in the manifest file to get VAD timestamps
177 collar: float = 0.25 # Collar value for scoring
178 ignore_overlap: bool = True # Consider or ignore overlap segments while scoring
179 vad: VADConfig = VADConfig()
180 speaker_embeddings: SpeakerEmbeddingsConfig = SpeakerEmbeddingsConfig()
181 clustering: ClusteringConfig = ClusteringConfig()
182 msdd_model: MSDDConfig = MSDDConfig()
183 asr: ASRDiarizerConfig = ASRDiarizerConfig()
184
185
186 @dataclass
187 class NeuralDiarizerInferenceConfig(DiarizerComponentConfig):
188 diarizer: DiarizerConfig = DiarizerConfig()
189 device: str = "cpu"
190 verbose: bool = False
191 batch_size: int = 64
192 num_workers: int = 1
193 sample_rate: int = 16000
194 name: str = ""
195
196 @classmethod
197 def init_config(cls, diar_model_path: str, vad_model_path: str, map_location: str, verbose: bool):
198 return NeuralDiarizerInferenceConfig(
199 DiarizerConfig(
200 vad=VADConfig(model_path=vad_model_path), msdd_model=MSDDConfig(model_path=diar_model_path),
201 ),
202 device=map_location,
203 verbose=verbose,
204 )
205
[end of nemo/collections/asr/models/configs/diarizer_config.py]
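`DiarizerComponentConfig` above lets plain dataclass instances be read like Hydra/omegaconf dict configs: `cfg.get(name, default)`, iteration over field names, and `.dict()`. A self-contained replica of that base class, paired with a trimmed `VADParams`, to illustrate the access pattern:

```python
from dataclasses import asdict, dataclass
from typing import Any, Optional

@dataclass
class DiarizerComponentConfig:
    """Dataclass that imitates a Hydra dict config for read access."""

    def get(self, name: str, default: Optional[Any] = None):
        # Mirrors dict.get(): missing attributes return the default.
        return getattr(self, name, default)

    def __iter__(self):
        # Iterating yields field names, like iterating a dict yields keys.
        for key in asdict(self):
            yield key

    def dict(self):
        return asdict(self)

@dataclass
class VADParams(DiarizerComponentConfig):
    onset: float = 0.1
    offset: float = 0.1

p = VADParams(onset=0.3)
```

This lets downstream code that was written against dict-style configs keep working unchanged when handed a typed dataclass.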
[start of nemo/collections/asr/models/configs/k2_sequence_models_config.py]
1 # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from dataclasses import dataclass
16
17 from nemo.collections.asr.models.configs.asr_models_config import EncDecCTCConfig
18 from nemo.collections.asr.parts.k2.classes import GraphModuleConfig as BackendConfig
19 from nemo.core.config.modelPT import NemoConfig
20
21
22 @dataclass
23 class GraphModuleConfig:
24 criterion_type: str = "ml"
25 loss_type: str = "ctc"
26 split_batch_size: int = 0
27 dec_type: str = "topo"
28 transcribe_training: bool = True
29 backend_cfg: BackendConfig = BackendConfig()
30
31
32 @dataclass
33 class EncDecK2SeqConfig(EncDecCTCConfig):
34 graph_module_cfg: GraphModuleConfig = GraphModuleConfig()
35
36
37 @dataclass
38 class EncDecK2SeqModelConfig(NemoConfig):
39 model: EncDecK2SeqConfig = EncDecK2SeqConfig()
40
[end of nemo/collections/asr/models/configs/k2_sequence_models_config.py]
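`EncDecK2SeqConfig` extends `EncDecCTCConfig` with a single extra field; because these are plain dataclasses, subclassing inherits every parent default and appends the new fields after them. A minimal sketch of that extension pattern (class names are illustrative stand-ins):

```python
from dataclasses import dataclass

@dataclass
class BaseModelConfig:              # stands in for EncDecCTCConfig
    sample_rate: int = 16000
    dropout: float = 0.0

@dataclass
class K2ModelConfig(BaseModelConfig):  # stands in for EncDecK2SeqConfig
    loss_type: str = "ctc"             # the one added field

cfg = K2ModelConfig(dropout=0.1)
```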
[start of nemo/collections/asr/models/configs/matchboxnet_config.py]
1 # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from dataclasses import dataclass, field
16 from typing import Any, Callable, List, Optional
17
18 from omegaconf import MISSING
19
20 from nemo.collections.asr.models.configs import classification_models_config as clf_cfg
21 from nemo.collections.asr.modules.audio_preprocessing import (
22 AudioToMFCCPreprocessorConfig,
23 CropOrPadSpectrogramAugmentationConfig,
24 SpectrogramAugmentationConfig,
25 )
26 from nemo.collections.asr.modules.conv_asr import (
27 ConvASRDecoderClassificationConfig,
28 ConvASREncoderConfig,
29 JasperEncoderConfig,
30 )
31 from nemo.core.config import modelPT as model_cfg
32
33
34 # fmt: off
35 def matchboxnet_3x1x64():
36 config = [
37 JasperEncoderConfig(filters=128, repeat=1, kernel=[11], stride=[1], dilation=[1], dropout=0.0,
38 residual=False, groups=1, separable=True, heads=-1, residual_mode='add',
39 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
40 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
41 JasperEncoderConfig(filters=64, repeat=1, kernel=[13], stride=[1], dilation=[1], dropout=0.0,
42 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
43 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
44 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
45 JasperEncoderConfig(filters=64, repeat=1, kernel=[15], stride=[1], dilation=[1], dropout=0.0,
46 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
47 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
48 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
49 JasperEncoderConfig(filters=64, repeat=1, kernel=[17], stride=[1], dilation=[1], dropout=0.0,
50 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
51 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
52 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
53 JasperEncoderConfig(filters=128, repeat=1, kernel=[29], stride=[1], dilation=[2], dropout=0.0,
54 residual=False, groups=1, separable=True, heads=-1, residual_mode='add',
55 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
56 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
57 JasperEncoderConfig(filters=128, repeat=1, kernel=[1], stride=[1], dilation=[1], dropout=0.0,
58 residual=False, groups=1, separable=False, heads=-1, residual_mode='add',
59 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
60 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False)
61 ]
62 return config
63
64
65 def matchboxnet_3x1x64_vad():
66 config = [
67 JasperEncoderConfig(filters=128, repeat=1, kernel=[11], stride=[1], dilation=[1], dropout=0.0,
68 residual=False, groups=1, separable=True, heads=-1, residual_mode='add',
69 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
70 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
71 JasperEncoderConfig(filters=64, repeat=1, kernel=[13], stride=[1], dilation=[1], dropout=0.0,
72 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
73 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
74 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
75 JasperEncoderConfig(filters=64, repeat=1, kernel=[15], stride=[1], dilation=[1], dropout=0.0,
76 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
77 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
78 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
79 JasperEncoderConfig(filters=64, repeat=1, kernel=[17], stride=[1], dilation=[1], dropout=0.0,
80 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
81 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
82 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
83 JasperEncoderConfig(filters=128, repeat=1, kernel=[29], stride=[1], dilation=[2], dropout=0.0,
84 residual=False, groups=1, separable=True, heads=-1, residual_mode='add',
85 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
86 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
87 JasperEncoderConfig(filters=128, repeat=1, kernel=[1], stride=[1], dilation=[1], dropout=0.0,
88 residual=False, groups=1, separable=False, heads=-1, residual_mode='add',
89 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
90 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False)
91 ]
92 return config
93
94
95 # fmt: on
96
97
98 @dataclass
99 class MatchboxNetModelConfig(clf_cfg.EncDecClassificationConfig):
100 # Model global arguments
101 sample_rate: int = 16000
102 repeat: int = 1
103 dropout: float = 0.0
104 separable: bool = True
105 kernel_size_factor: float = 1.0
106 timesteps: int = 128
107 labels: List[str] = MISSING
108
109 # Dataset configs
110 train_ds: clf_cfg.EncDecClassificationDatasetConfig = clf_cfg.EncDecClassificationDatasetConfig(
111 manifest_filepath=None, shuffle=True, trim_silence=False
112 )
113 validation_ds: clf_cfg.EncDecClassificationDatasetConfig = clf_cfg.EncDecClassificationDatasetConfig(
114 manifest_filepath=None, shuffle=False
115 )
116 test_ds: clf_cfg.EncDecClassificationDatasetConfig = clf_cfg.EncDecClassificationDatasetConfig(
117 manifest_filepath=None, shuffle=False
118 )
119
120 # Optimizer / Scheduler config
121 optim: Optional[model_cfg.OptimConfig] = model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
122
123 # Model general component configs
124 preprocessor: AudioToMFCCPreprocessorConfig = AudioToMFCCPreprocessorConfig(window_size=0.025)
125 spec_augment: Optional[SpectrogramAugmentationConfig] = SpectrogramAugmentationConfig(
126 freq_masks=2, time_masks=2, freq_width=15, time_width=25, rect_masks=5, rect_time=25, rect_freq=15
127 )
128 crop_or_pad_augment: Optional[CropOrPadSpectrogramAugmentationConfig] = CropOrPadSpectrogramAugmentationConfig(
129 audio_length=128
130 )
131
132 encoder: ConvASREncoderConfig = ConvASREncoderConfig(activation="relu")
133 decoder: ConvASRDecoderClassificationConfig = ConvASRDecoderClassificationConfig()
134
135
136 @dataclass
137 class MatchboxNetVADModelConfig(MatchboxNetModelConfig):
138 timesteps: int = 64
139 labels: List[str] = field(default_factory=lambda: ['background', 'speech'])
140
141 crop_or_pad_augment: Optional[CropOrPadSpectrogramAugmentationConfig] = None
142
143
144 class EncDecClassificationModelConfigBuilder(model_cfg.ModelConfigBuilder):
145 VALID_CONFIGS = ['matchboxnet_3x1x64', 'matchboxnet_3x1x64_vad']
146
147 def __init__(self, name: str = 'matchboxnet_3x1x64', encoder_cfg_func: Optional[Callable[[], List[Any]]] = None):
148 if name not in EncDecClassificationModelConfigBuilder.VALID_CONFIGS:
149 raise ValueError("`name` must be one of : \n" f"{EncDecClassificationModelConfigBuilder.VALID_CONFIGS}")
150
151 self.name = name
152
153 if 'matchboxnet_3x1x64_vad' in name:
154 if encoder_cfg_func is None:
155 encoder_cfg_func = matchboxnet_3x1x64_vad
156
157 model_cfg = MatchboxNetVADModelConfig(
158 repeat=1,
159 separable=True,
160 encoder=ConvASREncoderConfig(jasper=encoder_cfg_func(), activation="relu"),
161 decoder=ConvASRDecoderClassificationConfig(),
162 )
163
164 elif 'matchboxnet_3x1x64' in name:
165 if encoder_cfg_func is None:
166 encoder_cfg_func = matchboxnet_3x1x64
167
168 model_cfg = MatchboxNetModelConfig(
169 repeat=1,
170 separable=False,
171 spec_augment=SpectrogramAugmentationConfig(rect_masks=5, rect_freq=50, rect_time=120),
172 encoder=ConvASREncoderConfig(jasper=encoder_cfg_func(), activation="relu"),
173 decoder=ConvASRDecoderClassificationConfig(),
174 )
175
176 else:
177 raise ValueError(f"Invalid config name submitted to {self.__class__.__name__}")
178
179 super(EncDecClassificationModelConfigBuilder, self).__init__(model_cfg)
180 self.model_cfg: clf_cfg.EncDecClassificationConfig = model_cfg # enable type hinting
181
182 def set_labels(self, labels: List[str]):
183 self.model_cfg.labels = labels
184
185 def set_separable(self, separable: bool):
186 self.model_cfg.separable = separable
187
188 def set_repeat(self, repeat: int):
189 self.model_cfg.repeat = repeat
190
191 def set_sample_rate(self, sample_rate: int):
192 self.model_cfg.sample_rate = sample_rate
193
194 def set_dropout(self, dropout: float = 0.0):
195 self.model_cfg.dropout = dropout
196
197 def set_timesteps(self, timesteps: int):
198 self.model_cfg.timesteps = timesteps
199
200 def set_is_regression_task(self, is_regression_task: bool):
201 self.model_cfg.is_regression_task = is_regression_task
202
203     # Note: Autocomplete for users won't work without these overrides,
204     # but in practice they are not needed since Python infers the types at runtime.
205
206 # def set_train_ds(self, cfg: Optional[clf_cfg.EncDecClassificationDatasetConfig] = None):
207 # super().set_train_ds(cfg)
208 #
209 # def set_validation_ds(self, cfg: Optional[clf_cfg.EncDecClassificationDatasetConfig] = None):
210 # super().set_validation_ds(cfg)
211 #
212 # def set_test_ds(self, cfg: Optional[clf_cfg.EncDecClassificationDatasetConfig] = None):
213 # super().set_test_ds(cfg)
214
215 def _finalize_cfg(self):
216 # propagate labels
217 self.model_cfg.train_ds.labels = self.model_cfg.labels
218 self.model_cfg.validation_ds.labels = self.model_cfg.labels
219 self.model_cfg.test_ds.labels = self.model_cfg.labels
220 self.model_cfg.decoder.vocabulary = self.model_cfg.labels
221
222 # propagate num classes
223 self.model_cfg.decoder.num_classes = len(self.model_cfg.labels)
224
225 # propagate sample rate
227 self.model_cfg.preprocessor.sample_rate = self.model_cfg.sample_rate
228 self.model_cfg.train_ds.sample_rate = self.model_cfg.sample_rate
229 self.model_cfg.validation_ds.sample_rate = self.model_cfg.sample_rate
230 self.model_cfg.test_ds.sample_rate = self.model_cfg.sample_rate
231
232 # propagate filters
233 self.model_cfg.encoder.feat_in = self.model_cfg.preprocessor.features
234 self.model_cfg.decoder.feat_in = self.model_cfg.encoder.jasper[-1].filters
235
236         # propagate timesteps
237 if self.model_cfg.crop_or_pad_augment is not None:
238 self.model_cfg.crop_or_pad_augment.audio_length = self.model_cfg.timesteps
239
240 # propagate separable
241 for layer in self.model_cfg.encoder.jasper[:-1]: # type: JasperEncoderConfig
242 layer.separable = self.model_cfg.separable
243
244 # propagate repeat
245 for layer in self.model_cfg.encoder.jasper[1:-2]: # type: JasperEncoderConfig
246 layer.repeat = self.model_cfg.repeat
247
248 # propagate dropout
249 for layer in self.model_cfg.encoder.jasper: # type: JasperEncoderConfig
250 layer.dropout = self.model_cfg.dropout
251
252 def build(self) -> clf_cfg.EncDecClassificationConfig:
253 return super().build()
254
[end of nemo/collections/asr/models/configs/matchboxnet_config.py]
[start of nemo/collections/asr/models/configs/quartznet_config.py]
1 # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from dataclasses import dataclass
16 from typing import Any, Callable, List, Optional
17
18 from omegaconf import MISSING
19
20 from nemo.collections.asr.models.configs import asr_models_config as ctc_cfg
21 from nemo.collections.asr.modules.audio_preprocessing import (
22 AudioToMelSpectrogramPreprocessorConfig,
23 SpectrogramAugmentationConfig,
24 )
25 from nemo.collections.asr.modules.conv_asr import ConvASRDecoderConfig, ConvASREncoderConfig, JasperEncoderConfig
26 from nemo.core.config import modelPT as model_cfg
27
28
29 # fmt: off
30 def qn_15x5():
31 config = [
32 JasperEncoderConfig(filters=256, repeat=1, kernel=[33], stride=[2], dilation=[1], dropout=0.0,
33 residual=False, groups=1, separable=True, heads=-1, residual_mode='add',
34 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
35 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
36 JasperEncoderConfig(filters=256, repeat=5, kernel=[33], stride=[1], dilation=[1], dropout=0.0,
37 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
38 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
39 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
40 JasperEncoderConfig(filters=256, repeat=5, kernel=[33], stride=[1], dilation=[1], dropout=0.0,
41 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
42 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
43 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
44 JasperEncoderConfig(filters=256, repeat=5, kernel=[33], stride=[1], dilation=[1], dropout=0.0,
45 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
46 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
47 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
48 JasperEncoderConfig(filters=256, repeat=5, kernel=[39], stride=[1], dilation=[1], dropout=0.0,
49 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
50 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
51 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
52 JasperEncoderConfig(filters=256, repeat=5, kernel=[39], stride=[1], dilation=[1], dropout=0.0,
53 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
54 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
55 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
56 JasperEncoderConfig(filters=256, repeat=5, kernel=[39], stride=[1], dilation=[1], dropout=0.0,
57 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
58 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
59 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
60 JasperEncoderConfig(filters=512, repeat=5, kernel=[51], stride=[1], dilation=[1], dropout=0.0,
61 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
62 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
63 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
64 JasperEncoderConfig(filters=512, repeat=5, kernel=[51], stride=[1], dilation=[1], dropout=0.0,
65 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
66 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
67 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
68 JasperEncoderConfig(filters=512, repeat=5, kernel=[51], stride=[1], dilation=[1], dropout=0.0,
69 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
70 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
71 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
72 JasperEncoderConfig(filters=512, repeat=5, kernel=[63], stride=[1], dilation=[1], dropout=0.0,
73 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
74 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
75 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
76 JasperEncoderConfig(filters=512, repeat=5, kernel=[63], stride=[1], dilation=[1], dropout=0.0,
77 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
78 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
79 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
80 JasperEncoderConfig(filters=512, repeat=5, kernel=[63], stride=[1], dilation=[1], dropout=0.0,
81 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
82 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
83 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
84 JasperEncoderConfig(filters=512, repeat=5, kernel=[75], stride=[1], dilation=[1], dropout=0.0,
85 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
86 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
87 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
88 JasperEncoderConfig(filters=512, repeat=5, kernel=[75], stride=[1], dilation=[1], dropout=0.0,
89 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
90 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
91 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
92 JasperEncoderConfig(filters=512, repeat=5, kernel=[75], stride=[1], dilation=[1], dropout=0.0,
93 residual=True, groups=1, separable=True, heads=-1, residual_mode='add',
94 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
95 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
96 JasperEncoderConfig(filters=512, repeat=1, kernel=[87], stride=[1], dilation=[2], dropout=0.0,
97 residual=False, groups=1, separable=True, heads=-1, residual_mode='add',
98 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
99 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
100 JasperEncoderConfig(filters=1024, repeat=1, kernel=[1], stride=[1], dilation=[1], dropout=0.0,
101 residual=False, groups=1, separable=False, heads=-1, residual_mode='add',
102 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
103 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False)
104 ]
105 return config
106
107
108 def jasper_10x5_dr():
109 config = [
110 JasperEncoderConfig(filters=256, repeat=1, kernel=[11], stride=[2], dilation=[1], dropout=0.2,
111 residual=False, groups=1, separable=False, heads=-1, residual_mode='add',
112 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
113 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
114 JasperEncoderConfig(filters=256, repeat=5, kernel=[11], stride=[1], dilation=[1], dropout=0.2,
115 residual=True, groups=1, separable=False, heads=-1, residual_mode='add',
116 residual_dense=True, se=False, se_reduction_ratio=8, se_context_size=-1,
117 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
118 JasperEncoderConfig(filters=256, repeat=5, kernel=[11], stride=[1], dilation=[1], dropout=0.2,
119 residual=True, groups=1, separable=False, heads=-1, residual_mode='add',
120 residual_dense=True, se=False, se_reduction_ratio=8, se_context_size=-1,
121 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
122 JasperEncoderConfig(filters=384, repeat=5, kernel=[13], stride=[1], dilation=[1], dropout=0.2,
123 residual=True, groups=1, separable=False, heads=-1, residual_mode='add',
124 residual_dense=True, se=False, se_reduction_ratio=8, se_context_size=-1,
125 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
126 JasperEncoderConfig(filters=384, repeat=5, kernel=[13], stride=[1], dilation=[1], dropout=0.2,
127 residual=True, groups=1, separable=False, heads=-1, residual_mode='add',
128 residual_dense=True, se=False, se_reduction_ratio=8, se_context_size=-1,
129 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
130 JasperEncoderConfig(filters=512, repeat=5, kernel=[17], stride=[1], dilation=[1], dropout=0.2,
131 residual=True, groups=1, separable=False, heads=-1, residual_mode='add',
132 residual_dense=True, se=False, se_reduction_ratio=8, se_context_size=-1,
133 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
134 JasperEncoderConfig(filters=512, repeat=5, kernel=[17], stride=[1], dilation=[1], dropout=0.2,
135 residual=True, groups=1, separable=False, heads=-1, residual_mode='add',
136 residual_dense=True, se=False, se_reduction_ratio=8, se_context_size=-1,
137 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
138 JasperEncoderConfig(filters=640, repeat=5, kernel=[21], stride=[1], dilation=[1], dropout=0.3,
139 residual=True, groups=1, separable=False, heads=-1, residual_mode='add',
140 residual_dense=True, se=False, se_reduction_ratio=8, se_context_size=-1,
141 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
142 JasperEncoderConfig(filters=640, repeat=5, kernel=[21], stride=[1], dilation=[1], dropout=0.3,
143 residual=True, groups=1, separable=False, heads=-1, residual_mode='add',
144 residual_dense=True, se=False, se_reduction_ratio=8, se_context_size=-1,
145 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
146 JasperEncoderConfig(filters=768, repeat=5, kernel=[25], stride=[1], dilation=[1], dropout=0.3,
147 residual=True, groups=1, separable=False, heads=-1, residual_mode='add',
148 residual_dense=True, se=False, se_reduction_ratio=8, se_context_size=-1,
149 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
150 JasperEncoderConfig(filters=768, repeat=5, kernel=[25], stride=[1], dilation=[1], dropout=0.3,
151 residual=True, groups=1, separable=False, heads=-1, residual_mode='add',
152 residual_dense=True, se=False, se_reduction_ratio=8, se_context_size=-1,
153 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
154 JasperEncoderConfig(filters=896, repeat=1, kernel=[29], stride=[1], dilation=[2], dropout=0.4,
155 residual=False, groups=1, separable=False, heads=-1, residual_mode='add',
156 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
157 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False),
158 JasperEncoderConfig(filters=1024, repeat=1, kernel=[1], stride=[1], dilation=[1], dropout=0.4,
159 residual=False, groups=1, separable=False, heads=-1, residual_mode='add',
160 residual_dense=False, se=False, se_reduction_ratio=8, se_context_size=-1,
161 se_interpolation_mode='nearest', kernel_size_factor=1.0, stride_last=False)
162 ]
163 return config
164 # fmt: on
165
166
167 @dataclass
168 class JasperModelConfig(ctc_cfg.EncDecCTCConfig):
169 # Model global arguments
170 sample_rate: int = 16000
171 repeat: int = 1
172 dropout: float = 0.0
173 separable: bool = False
174 labels: List[str] = MISSING
175
176 # Dataset configs
177 train_ds: ctc_cfg.ASRDatasetConfig = ctc_cfg.ASRDatasetConfig(
178 manifest_filepath=None, shuffle=True, trim_silence=True
179 )
180 validation_ds: ctc_cfg.ASRDatasetConfig = ctc_cfg.ASRDatasetConfig(manifest_filepath=None, shuffle=False)
181 test_ds: ctc_cfg.ASRDatasetConfig = ctc_cfg.ASRDatasetConfig(manifest_filepath=None, shuffle=False)
182
183 # Optimizer / Scheduler config
184 optim: Optional[model_cfg.OptimConfig] = model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
185
186 # Model general component configs
187 preprocessor: AudioToMelSpectrogramPreprocessorConfig = AudioToMelSpectrogramPreprocessorConfig()
188 spec_augment: Optional[SpectrogramAugmentationConfig] = SpectrogramAugmentationConfig()
189 encoder: ConvASREncoderConfig = ConvASREncoderConfig(activation="relu")
190 decoder: ConvASRDecoderConfig = ConvASRDecoderConfig()
191
192
193 @dataclass
194 class QuartzNetModelConfig(JasperModelConfig):
195 separable: bool = True
196
197
198 class EncDecCTCModelConfigBuilder(model_cfg.ModelConfigBuilder):
199 VALID_CONFIGS = ['quartznet_15x5', 'quartznet_15x5_zh', 'jasper_10x5dr']
200
201 def __init__(self, name: str = 'quartznet_15x5', encoder_cfg_func: Optional[Callable[[], List[Any]]] = None):
202 if name not in EncDecCTCModelConfigBuilder.VALID_CONFIGS:
 203             raise ValueError("`name` must be one of:\n" f"{EncDecCTCModelConfigBuilder.VALID_CONFIGS}")
204
205 self.name = name
206
207 if 'quartznet_15x5' in name:
208 if encoder_cfg_func is None:
209 encoder_cfg_func = qn_15x5
210
211 model_cfg = QuartzNetModelConfig(
212 repeat=5,
213 separable=True,
214 spec_augment=SpectrogramAugmentationConfig(rect_masks=5, rect_freq=50, rect_time=120),
215 encoder=ConvASREncoderConfig(jasper=encoder_cfg_func(), activation="relu"),
216 decoder=ConvASRDecoderConfig(),
217 )
218
219 elif 'jasper_10x5' in name:
220 if encoder_cfg_func is None:
221 encoder_cfg_func = jasper_10x5_dr
222
223 model_cfg = JasperModelConfig(
224 repeat=5,
225 separable=False,
226 spec_augment=SpectrogramAugmentationConfig(rect_masks=5, rect_freq=50, rect_time=120),
227 encoder=ConvASREncoderConfig(jasper=encoder_cfg_func(), activation="relu"),
228 decoder=ConvASRDecoderConfig(),
229 )
230
231 else:
232 raise ValueError(f"Invalid config name submitted to {self.__class__.__name__}")
233
234 super(EncDecCTCModelConfigBuilder, self).__init__(model_cfg)
235 self.model_cfg: ctc_cfg.EncDecCTCConfig = model_cfg # enable type hinting
236
237 if 'zh' in name:
238 self.set_dataset_normalize(normalize=False)
239
240 def set_labels(self, labels: List[str]):
241 self.model_cfg.labels = labels
242
243 def set_separable(self, separable: bool):
244 self.model_cfg.separable = separable
245
246 def set_repeat(self, repeat: int):
247 self.model_cfg.repeat = repeat
248
249 def set_sample_rate(self, sample_rate: int):
250 self.model_cfg.sample_rate = sample_rate
251
252 def set_dropout(self, dropout: float = 0.0):
253 self.model_cfg.dropout = dropout
254
255 def set_dataset_normalize(self, normalize: bool):
256 self.model_cfg.train_ds.normalize = normalize
257 self.model_cfg.validation_ds.normalize = normalize
258 self.model_cfg.test_ds.normalize = normalize
259
 260     # Note: Autocomplete for users won't work without these overrides
261 # But practically it is not needed since python will infer at runtime
262
263 # def set_train_ds(self, cfg: Optional[ctc_cfg.ASRDatasetConfig] = None):
264 # super().set_train_ds(cfg)
265 #
266 # def set_validation_ds(self, cfg: Optional[ctc_cfg.ASRDatasetConfig] = None):
267 # super().set_validation_ds(cfg)
268 #
269 # def set_test_ds(self, cfg: Optional[ctc_cfg.ASRDatasetConfig] = None):
270 # super().set_test_ds(cfg)
271
272 def _finalize_cfg(self):
273 # propagate labels
274 self.model_cfg.train_ds.labels = self.model_cfg.labels
275 self.model_cfg.validation_ds.labels = self.model_cfg.labels
276 self.model_cfg.test_ds.labels = self.model_cfg.labels
277 self.model_cfg.decoder.vocabulary = self.model_cfg.labels
278
279 # propagate num classes
280 self.model_cfg.decoder.num_classes = len(self.model_cfg.labels)
281
282 # propagate sample rate
283 self.model_cfg.sample_rate = self.model_cfg.sample_rate
284 self.model_cfg.preprocessor.sample_rate = self.model_cfg.sample_rate
285 self.model_cfg.train_ds.sample_rate = self.model_cfg.sample_rate
286 self.model_cfg.validation_ds.sample_rate = self.model_cfg.sample_rate
287 self.model_cfg.test_ds.sample_rate = self.model_cfg.sample_rate
288
289 # propagate filters
290 self.model_cfg.encoder.feat_in = self.model_cfg.preprocessor.features
291 self.model_cfg.decoder.feat_in = self.model_cfg.encoder.jasper[-1].filters
292
293 # propagate separable
294 for layer in self.model_cfg.encoder.jasper[:-1]: # type: JasperEncoderConfig
295 layer.separable = self.model_cfg.separable
296
297 # propagate repeat
298 for layer in self.model_cfg.encoder.jasper[1:-2]: # type: JasperEncoderConfig
299 layer.repeat = self.model_cfg.repeat
300
301 # propagate dropout
302 for layer in self.model_cfg.encoder.jasper: # type: JasperEncoderConfig
303 layer.dropout = self.model_cfg.dropout
304
305 def build(self) -> ctc_cfg.EncDecCTCConfig:
306 return super().build()
307
[end of nemo/collections/asr/models/configs/quartznet_config.py]
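The `_finalize_cfg` method above propagates top-level fields (labels, sample rate, dropout, separable, repeat) into the nested encoder/decoder/dataset configs by plain attribute assignment and loops over the layer list. A minimal self-contained sketch of that propagation pattern, using hypothetical stand-in dataclasses rather than the real NeMo config classes:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical stand-ins for the NeMo config dataclasses; the names
# LayerCfg/ModelCfg/finalize are illustrative, not part of NeMo.
@dataclass
class LayerCfg:
    filters: int
    dropout: float = 0.0

@dataclass
class ModelCfg:
    labels: List[str] = field(default_factory=list)
    dropout: float = 0.0
    layers: List[LayerCfg] = field(default_factory=list)

def finalize(cfg: ModelCfg) -> ModelCfg:
    # Mirrors the propagation loops in _finalize_cfg: push the
    # top-level dropout into every layer config.
    for layer in cfg.layers:
        layer.dropout = cfg.dropout
    return cfg

cfg = finalize(ModelCfg(labels=list("abc"), dropout=0.2,
                        layers=[LayerCfg(256), LayerCfg(512)]))
```

In the real builder, `build()` runs `_finalize_cfg()` before returning the finished `EncDecCTCConfig`, so callers only set the top-level fields via the `set_*` methods.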
[start of nemo/collections/asr/modules/audio_preprocessing.py]
1 # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import math
16 import random
17 from abc import ABC, abstractmethod
18 from dataclasses import dataclass
19 from typing import Any, Dict, Optional, Tuple
20
21 import torch
22 from packaging import version
23
24 from nemo.collections.asr.parts.numba.spec_augment import SpecAugmentNumba, spec_augment_launch_heuristics
25 from nemo.collections.asr.parts.preprocessing.features import (
26 FilterbankFeatures,
27 FilterbankFeaturesTA,
28 make_seq_mask_like,
29 )
30 from nemo.collections.asr.parts.submodules.spectr_augment import SpecAugment, SpecCutout
31 from nemo.core.classes import Exportable, NeuralModule, typecheck
32 from nemo.core.neural_types import (
33 AudioSignal,
34 LengthsType,
35 MelSpectrogramType,
36 MFCCSpectrogramType,
37 NeuralType,
38 SpectrogramType,
39 )
40 from nemo.core.utils import numba_utils
41 from nemo.core.utils.numba_utils import __NUMBA_MINIMUM_VERSION__
42 from nemo.utils import logging
43
44 try:
45 import torchaudio
46 import torchaudio.functional
47 import torchaudio.transforms
48
49 TORCHAUDIO_VERSION = version.parse(torchaudio.__version__)
50 TORCHAUDIO_VERSION_MIN = version.parse('0.5')
51
52 HAVE_TORCHAUDIO = True
53 except ModuleNotFoundError:
54 HAVE_TORCHAUDIO = False
55
56 __all__ = [
57 'AudioToMelSpectrogramPreprocessor',
58 'AudioToSpectrogram',
59 'SpectrogramToAudio',
60 'AudioToMFCCPreprocessor',
61 'SpectrogramAugmentation',
62 'MaskedPatchAugmentation',
63 'CropOrPadSpectrogramAugmentation',
64 ]
65
66
67 class AudioPreprocessor(NeuralModule, ABC):
68 """
  69     An interface for Neural Modules that perform audio pre-processing,
70 transforming the wav files to features.
71 """
72
73 def __init__(self, win_length, hop_length):
74 super().__init__()
75
76 self.win_length = win_length
77 self.hop_length = hop_length
78
79 self.torch_windows = {
80 'hann': torch.hann_window,
81 'hamming': torch.hamming_window,
82 'blackman': torch.blackman_window,
83 'bartlett': torch.bartlett_window,
84 'ones': torch.ones,
85 None: torch.ones,
86 }
87
88 @typecheck()
89 @torch.no_grad()
90 def forward(self, input_signal, length):
91 processed_signal, processed_length = self.get_features(input_signal, length)
92
93 return processed_signal, processed_length
94
95 @abstractmethod
96 def get_features(self, input_signal, length):
97 # Called by forward(). Subclasses should implement this.
98 pass
99
100
101 class AudioToMelSpectrogramPreprocessor(AudioPreprocessor, Exportable):
102 """Featurizer module that converts wavs to mel spectrograms.
103
104 Args:
105 sample_rate (int): Sample rate of the input audio data.
106 Defaults to 16000
107 window_size (float): Size of window for fft in seconds
108 Defaults to 0.02
109 window_stride (float): Stride of window for fft in seconds
110 Defaults to 0.01
111 n_window_size (int): Size of window for fft in samples
112 Defaults to None. Use one of window_size or n_window_size.
113 n_window_stride (int): Stride of window for fft in samples
114 Defaults to None. Use one of window_stride or n_window_stride.
 115         window (str): Windowing function for fft. Can be one of ['hann',
116 'hamming', 'blackman', 'bartlett']
117 Defaults to "hann"
118 normalize (str): Can be one of ['per_feature', 'all_features']; all
119 other options disable feature normalization. 'all_features'
120 normalizes the entire spectrogram to be mean 0 with std 1.
 121             'per_feature' normalizes per channel / freq instead.
122 Defaults to "per_feature"
123 n_fft (int): Length of FT window. If None, it uses the smallest power
124 of 2 that is larger than n_window_size.
125 Defaults to None
126 preemph (float): Amount of pre emphasis to add to audio. Can be
127 disabled by passing None.
128 Defaults to 0.97
129 features (int): Number of mel spectrogram freq bins to output.
130 Defaults to 64
131 lowfreq (int): Lower bound on mel basis in Hz.
132 Defaults to 0
 133         highfreq (int): Upper bound on mel basis in Hz.
134 Defaults to None
135 log (bool): Log features.
136 Defaults to True
137 log_zero_guard_type(str): Need to avoid taking the log of zero. There
138 are two options: "add" or "clamp".
139 Defaults to "add".
140 log_zero_guard_value(float, or str): Add or clamp requires the number
141 to add with or clamp to. log_zero_guard_value can either be a float
142 or "tiny" or "eps". torch.finfo is used if "tiny" or "eps" is
143 passed.
144 Defaults to 2**-24.
145 dither (float): Amount of white-noise dithering.
146 Defaults to 1e-5
147 pad_to (int): Ensures that the output size of the time dimension is
148 a multiple of pad_to.
149 Defaults to 16
150 frame_splicing (int): Defaults to 1
151 exact_pad (bool): If True, sets stft center to False and adds padding, such that num_frames = audio_length
152 // hop_length. Defaults to False.
153 pad_value (float): The value that shorter mels are padded with.
154 Defaults to 0
155 mag_power (float): The power that the linear spectrogram is raised to
156 prior to multiplication with mel basis.
157 Defaults to 2 for a power spec
 158         rng: Random number generator
159 nb_augmentation_prob (float) : Probability with which narrowband augmentation would be applied to
160 samples in the batch.
161 Defaults to 0.0
162 nb_max_freq (int) : Frequency above which all frequencies will be masked for narrowband augmentation.
163 Defaults to 4000
164 use_torchaudio: Whether to use the `torchaudio` implementation.
165 mel_norm: Normalization used for mel filterbank weights.
166 Defaults to 'slaney' (area normalization)
167 stft_exact_pad: Deprecated argument, kept for compatibility with older checkpoints.
168 stft_conv: Deprecated argument, kept for compatibility with older checkpoints.
169 """
170
171 def save_to(self, save_path: str):
172 pass
173
174 @classmethod
175 def restore_from(cls, restore_path: str):
176 pass
177
178 @property
179 def input_types(self):
180 """Returns definitions of module input ports.
181 """
182 return {
183 "input_signal": NeuralType(('B', 'T'), AudioSignal(freq=self._sample_rate)),
184 "length": NeuralType(
185 tuple('B'), LengthsType()
186 ), # Please note that length should be in samples not seconds.
187 }
188
189 @property
190 def output_types(self):
191 """Returns definitions of module output ports.
192
193 processed_signal:
194 0: AxisType(BatchTag)
195 1: AxisType(MelSpectrogramSignalTag)
196 2: AxisType(ProcessedTimeTag)
197 processed_length:
198 0: AxisType(BatchTag)
199 """
200 return {
201 "processed_signal": NeuralType(('B', 'D', 'T'), MelSpectrogramType()),
202 "processed_length": NeuralType(tuple('B'), LengthsType()),
203 }
204
205 def __init__(
206 self,
207 sample_rate=16000,
208 window_size=0.02,
209 window_stride=0.01,
210 n_window_size=None,
211 n_window_stride=None,
212 window="hann",
213 normalize="per_feature",
214 n_fft=None,
215 preemph=0.97,
216 features=64,
217 lowfreq=0,
218 highfreq=None,
219 log=True,
220 log_zero_guard_type="add",
221 log_zero_guard_value=2 ** -24,
222 dither=1e-5,
223 pad_to=16,
224 frame_splicing=1,
225 exact_pad=False,
226 pad_value=0,
227 mag_power=2.0,
228 rng=None,
229 nb_augmentation_prob=0.0,
230 nb_max_freq=4000,
231 use_torchaudio: bool = False,
232 mel_norm="slaney",
233 stft_exact_pad=False, # Deprecated arguments; kept for config compatibility
234 stft_conv=False, # Deprecated arguments; kept for config compatibility
235 ):
236 super().__init__(n_window_size, n_window_stride)
237
238 self._sample_rate = sample_rate
239 if window_size and n_window_size:
240 raise ValueError(f"{self} received both window_size and " f"n_window_size. Only one should be specified.")
241 if window_stride and n_window_stride:
242 raise ValueError(
243 f"{self} received both window_stride and " f"n_window_stride. Only one should be specified."
244 )
245 if window_size:
246 n_window_size = int(window_size * self._sample_rate)
247 if window_stride:
248 n_window_stride = int(window_stride * self._sample_rate)
249
250 # Given the long and similar argument list, point to the class and instantiate it by reference
251 if not use_torchaudio:
252 featurizer_class = FilterbankFeatures
253 else:
254 featurizer_class = FilterbankFeaturesTA
255 self.featurizer = featurizer_class(
256 sample_rate=self._sample_rate,
257 n_window_size=n_window_size,
258 n_window_stride=n_window_stride,
259 window=window,
260 normalize=normalize,
261 n_fft=n_fft,
262 preemph=preemph,
263 nfilt=features,
264 lowfreq=lowfreq,
265 highfreq=highfreq,
266 log=log,
267 log_zero_guard_type=log_zero_guard_type,
268 log_zero_guard_value=log_zero_guard_value,
269 dither=dither,
270 pad_to=pad_to,
271 frame_splicing=frame_splicing,
272 exact_pad=exact_pad,
273 pad_value=pad_value,
274 mag_power=mag_power,
275 rng=rng,
276 nb_augmentation_prob=nb_augmentation_prob,
277 nb_max_freq=nb_max_freq,
278 mel_norm=mel_norm,
279 stft_exact_pad=stft_exact_pad, # Deprecated arguments; kept for config compatibility
280 stft_conv=stft_conv, # Deprecated arguments; kept for config compatibility
281 )
282
283 def input_example(self, max_batch: int = 8, max_dim: int = 32000, min_length: int = 200):
284 batch_size = torch.randint(low=1, high=max_batch, size=[1]).item()
285 max_length = torch.randint(low=min_length, high=max_dim, size=[1]).item()
286 signals = torch.rand(size=[batch_size, max_length]) * 2 - 1
287 lengths = torch.randint(low=min_length, high=max_dim, size=[batch_size])
288 lengths[0] = max_length
289 return signals, lengths
290
291 def get_features(self, input_signal, length):
292 return self.featurizer(input_signal, length)
293
294 @property
295 def filter_banks(self):
296 return self.featurizer.filter_banks
297
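Both preprocessors accept the window geometry either in seconds (`window_size`/`window_stride`) or in samples (`n_window_size`/`n_window_stride`), but not both. A small sketch of the seconds-to-samples conversion performed in `__init__` (the helper name is illustrative, not part of NeMo):

```python
# Seconds-to-samples conversion as done in
# AudioToMelSpectrogramPreprocessor.__init__.
def window_params(sample_rate: int, window_size: float, window_stride: float):
    n_window_size = int(window_size * sample_rate)    # win_length in samples
    n_window_stride = int(window_stride * sample_rate)  # hop_length in samples
    return n_window_size, n_window_stride

win, hop = window_params(16000, 0.02, 0.01)  # the defaults above -> (320, 160)
```

With the default `n_fft=None`, the featurizer then picks the smallest power of two not less than `n_window_size` (512 for a 320-sample window).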
298
299 class AudioToMFCCPreprocessor(AudioPreprocessor):
300 """Preprocessor that converts wavs to MFCCs.
301 Uses torchaudio.transforms.MFCC.
302
303 Args:
304 sample_rate: The sample rate of the audio.
305 Defaults to 16000.
306 window_size: Size of window for fft in seconds. Used to calculate the
307 win_length arg for mel spectrogram.
308 Defaults to 0.02
 309         window_stride: Stride of window for fft in seconds. Used to calculate
 310             the hop_length arg for mel spectrogram.
311 Defaults to 0.01
312 n_window_size: Size of window for fft in samples
313 Defaults to None. Use one of window_size or n_window_size.
314 n_window_stride: Stride of window for fft in samples
315 Defaults to None. Use one of window_stride or n_window_stride.
 316         window: Windowing function for fft. Can be one of ['hann',
 317             'hamming', 'blackman', 'bartlett', 'ones', None].
318 Defaults to 'hann'
319 n_fft: Length of FT window. If None, it uses the smallest power of 2
320 that is larger than n_window_size.
321 Defaults to None
322 lowfreq (int): Lower bound on mel basis in Hz.
323 Defaults to 0
 324         highfreq (int): Upper bound on mel basis in Hz.
325 Defaults to None
326 n_mels: Number of mel filterbanks.
327 Defaults to 64
328 n_mfcc: Number of coefficients to retain
329 Defaults to 64
330 dct_type: Type of discrete cosine transform to use
331 norm: Type of norm to use
332 log: Whether to use log-mel spectrograms instead of db-scaled.
333 Defaults to True.
334 """
335
336 @property
337 def input_types(self):
338 """Returns definitions of module input ports.
339 """
340 return {
341 "input_signal": NeuralType(('B', 'T'), AudioSignal(freq=self._sample_rate)),
342 "length": NeuralType(tuple('B'), LengthsType()),
343 }
344
345 @property
346 def output_types(self):
347 """Returns definitions of module output ports.
348 """
349 return {
350 "processed_signal": NeuralType(('B', 'D', 'T'), MFCCSpectrogramType()),
351 "processed_length": NeuralType(tuple('B'), LengthsType()),
352 }
353
354 def save_to(self, save_path: str):
355 pass
356
357 @classmethod
358 def restore_from(cls, restore_path: str):
359 pass
360
361 def __init__(
362 self,
363 sample_rate=16000,
364 window_size=0.02,
365 window_stride=0.01,
366 n_window_size=None,
367 n_window_stride=None,
368 window='hann',
369 n_fft=None,
370 lowfreq=0.0,
371 highfreq=None,
372 n_mels=64,
373 n_mfcc=64,
374 dct_type=2,
375 norm='ortho',
376 log=True,
377 ):
378 self._sample_rate = sample_rate
379 if not HAVE_TORCHAUDIO:
380 logging.error('Could not import torchaudio. Some features might not work.')
381
382 raise ModuleNotFoundError(
383 "torchaudio is not installed but is necessary for "
384 "AudioToMFCCPreprocessor. We recommend you try "
385 "building it from source for the PyTorch version you have."
386 )
387 if window_size and n_window_size:
388 raise ValueError(f"{self} received both window_size and " f"n_window_size. Only one should be specified.")
389 if window_stride and n_window_stride:
390 raise ValueError(
391 f"{self} received both window_stride and " f"n_window_stride. Only one should be specified."
392 )
393 # Get win_length (n_window_size) and hop_length (n_window_stride)
394 if window_size:
395 n_window_size = int(window_size * self._sample_rate)
396 if window_stride:
397 n_window_stride = int(window_stride * self._sample_rate)
398
399 super().__init__(n_window_size, n_window_stride)
400
401 mel_kwargs = {}
402
403 mel_kwargs['f_min'] = lowfreq
404 mel_kwargs['f_max'] = highfreq
405 mel_kwargs['n_mels'] = n_mels
406
407 mel_kwargs['n_fft'] = n_fft or 2 ** math.ceil(math.log2(n_window_size))
408
409 mel_kwargs['win_length'] = n_window_size
410 mel_kwargs['hop_length'] = n_window_stride
411
412 # Set window_fn. None defaults to torch.ones.
413 window_fn = self.torch_windows.get(window, None)
414 if window_fn is None:
415 raise ValueError(
 416                 f"Window argument for AudioProcessor is invalid: {window}. "
417 f"For no window function, use 'ones' or None."
418 )
419 mel_kwargs['window_fn'] = window_fn
420
421 # Use torchaudio's implementation of MFCCs as featurizer
422 self.featurizer = torchaudio.transforms.MFCC(
423 sample_rate=self._sample_rate,
424 n_mfcc=n_mfcc,
425 dct_type=dct_type,
426 norm=norm,
427 log_mels=log,
428 melkwargs=mel_kwargs,
429 )
430
431 def get_features(self, input_signal, length):
432 features = self.featurizer(input_signal)
433 seq_len = torch.ceil(length.to(torch.float32) / self.hop_length).to(dtype=torch.long)
434 return features, seq_len
435
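`get_features` above derives the output sequence length from the raw sample count and the hop length via a ceiling division. A minimal sketch of that computation in plain Python (the function name is illustrative):

```python
import math

# Mirrors the sequence-length computation in
# AudioToMFCCPreprocessor.get_features: the number of output frames is
# ceil(num_samples / hop_length).
def mfcc_seq_len(num_samples: int, hop_length: int) -> int:
    return math.ceil(num_samples / hop_length)

mfcc_seq_len(16000, 160)  # 1 s of 16 kHz audio with a 10 ms hop -> 100 frames
```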
436
437 class SpectrogramAugmentation(NeuralModule):
438 """
439 Performs time and freq cuts in one of two ways.
440 SpecAugment zeroes out vertical and horizontal sections as described in
441 SpecAugment (https://arxiv.org/abs/1904.08779). Arguments for use with
442 SpecAugment are `freq_masks`, `time_masks`, `freq_width`, and `time_width`.
443 SpecCutout zeroes out rectangulars as described in Cutout
444 (https://arxiv.org/abs/1708.04552). Arguments for use with Cutout are
445 `rect_masks`, `rect_freq`, and `rect_time`.
446
447 Args:
448 freq_masks (int): how many frequency segments should be cut.
449 Defaults to 0.
450 time_masks (int): how many time segments should be cut
451 Defaults to 0.
452 freq_width (int): maximum number of frequencies to be cut in one
453 segment.
454 Defaults to 10.
455 time_width (int): maximum number of time steps to be cut in one
456 segment
457 Defaults to 10.
458 rect_masks (int): how many rectangular masks should be cut
459 Defaults to 0.
460 rect_freq (int): maximum size of cut rectangles along the frequency
461 dimension
 462             Defaults to 20.
463 rect_time (int): maximum size of cut rectangles along the time
464 dimension
 465             Defaults to 5.
466 """
467
468 @property
469 def input_types(self):
470 """Returns definitions of module input types
471 """
472 return {
473 "input_spec": NeuralType(('B', 'D', 'T'), SpectrogramType()),
474 "length": NeuralType(tuple('B'), LengthsType()),
475 }
476
477 @property
478 def output_types(self):
479 """Returns definitions of module output types
480 """
481 return {"augmented_spec": NeuralType(('B', 'D', 'T'), SpectrogramType())}
482
483 def __init__(
484 self,
485 freq_masks=0,
486 time_masks=0,
487 freq_width=10,
488 time_width=10,
489 rect_masks=0,
490 rect_time=5,
491 rect_freq=20,
492 rng=None,
493 mask_value=0.0,
494 use_numba_spec_augment: bool = True,
495 ):
496 super().__init__()
497
498 if rect_masks > 0:
499 self.spec_cutout = SpecCutout(rect_masks=rect_masks, rect_time=rect_time, rect_freq=rect_freq, rng=rng,)
500 # self.spec_cutout.to(self._device)
501 else:
502 self.spec_cutout = lambda input_spec: input_spec
503 if freq_masks + time_masks > 0:
504 self.spec_augment = SpecAugment(
505 freq_masks=freq_masks,
506 time_masks=time_masks,
507 freq_width=freq_width,
508 time_width=time_width,
509 rng=rng,
510 mask_value=mask_value,
511 )
512 else:
513 self.spec_augment = lambda input_spec, length: input_spec
514
515 # Check if numba is supported, and use a Numba kernel if it is
516 if use_numba_spec_augment and numba_utils.numba_cuda_is_supported(__NUMBA_MINIMUM_VERSION__):
517 logging.info('Numba CUDA SpecAugment kernel is being used')
518 self.spec_augment_numba = SpecAugmentNumba(
519 freq_masks=freq_masks,
520 time_masks=time_masks,
521 freq_width=freq_width,
522 time_width=time_width,
523 rng=rng,
524 mask_value=mask_value,
525 )
526 else:
527 self.spec_augment_numba = None
528
529 @typecheck()
530 def forward(self, input_spec, length):
531 augmented_spec = self.spec_cutout(input_spec=input_spec)
532
 533         # To run the Numba kernel, the correct Numba version is required, the
 534         # tensor must be on the GPU, and the length must be provided
535 if self.spec_augment_numba is not None and spec_augment_launch_heuristics(augmented_spec, length):
536 augmented_spec = self.spec_augment_numba(input_spec=augmented_spec, length=length)
537 else:
538 augmented_spec = self.spec_augment(input_spec=augmented_spec, length=length)
539 return augmented_spec
540
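The `SpecAugment`/`SpecAugmentNumba` modules used above zero out random time (and frequency) stripes of the spectrogram. A simplified, pure-Python sketch of the time-masking idea on a single (freq x time) spectrogram given as nested lists; the real modules operate on batched (and possibly GPU) tensors and also mask frequency bands:

```python
import random

# Illustrative SpecAugment-style time masking; not the NeMo implementation.
def time_mask(spec, num_masks, max_width, mask_value=0.0, rng=None):
    rng = rng or random.Random(0)
    n_time = len(spec[0])
    for _ in range(num_masks):
        width = rng.randint(0, max_width)            # stripe width in frames
        start = rng.randint(0, max(0, n_time - width))  # stripe start frame
        for row in spec:                             # zero the stripe in every
            for t in range(start, start + width):    # frequency bin
                row[t] = mask_value
    return spec

spec = [[1.0] * 20 for _ in range(4)]
masked = time_mask(spec, num_masks=2, max_width=5)
```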
541
542 class MaskedPatchAugmentation(NeuralModule):
543 """
544 Zeroes out fixed size time patches of the spectrogram.
545 All samples in batch are guaranteed to have the same amount of masked time steps.
546 Optionally also performs frequency masking in the same way as SpecAugment.
547 Args:
 548         patch_size (int): up to how many time steps one patch consists of.
549 Defaults to 48.
550 mask_patches (float): how many patches should be masked in each sample.
551 if >= 1., interpreted as number of patches (after converting to int)
552 if <1., interpreted as fraction of total tokens to be masked (number of patches is rounded up)
553 Defaults to 10.
554 freq_masks (int): how many frequency segments should be cut.
555 Defaults to 0.
556 freq_width (int): maximum number of frequencies to be cut in a segment.
557 Defaults to 0.
558 """
559
560 @property
561 def input_types(self):
562 """Returns definitions of module input types
563 """
564 return {
565 "input_spec": NeuralType(('B', 'D', 'T'), SpectrogramType()),
566 "length": NeuralType(tuple('B'), LengthsType()),
567 }
568
569 @property
570 def output_types(self):
571 """Returns definitions of module output types
572 """
573 return {"augmented_spec": NeuralType(('B', 'D', 'T'), SpectrogramType())}
574
575 def __init__(
576 self, patch_size: int = 48, mask_patches: float = 10.0, freq_masks: int = 0, freq_width: int = 0,
577 ):
578 super().__init__()
579 self.patch_size = patch_size
580 if mask_patches >= 1:
581 self.mask_patches = int(mask_patches)
582 elif mask_patches >= 0:
583 self._mask_fraction = mask_patches
584 self.mask_patches = None
585 else:
586 raise ValueError('mask_patches cannot be negative')
587
588 if freq_masks > 0:
589 self.spec_augment = SpecAugment(freq_masks=freq_masks, time_masks=0, freq_width=freq_width, time_width=0,)
590 else:
591 self.spec_augment = None
592
593 @typecheck()
594 def forward(self, input_spec, length):
595 augmented_spec = input_spec
596
597 min_len = torch.min(length)
598
599 if self.mask_patches is None:
600 # masking specified as fraction
601 len_fraction = int(min_len * self._mask_fraction)
602 mask_patches = len_fraction // self.patch_size + int(len_fraction % self.patch_size != 0)
603 else:
604 mask_patches = self.mask_patches
605
606 if min_len < self.patch_size * mask_patches:
607 mask_patches = min_len // self.patch_size
608
609 for idx in range(input_spec.shape[0]):
610 cur_len = length[idx]
611 patches = range(cur_len // self.patch_size)
612 masked_patches = random.sample(patches, mask_patches)
613
614 for mp in masked_patches:
615 augmented_spec[idx, :, mp * self.patch_size : (mp + 1) * self.patch_size] = 0.0
616
617 if self.spec_augment is not None:
618 augmented_spec = self.spec_augment(input_spec=augmented_spec, length=length)
619
620 return augmented_spec
621
622
623 class CropOrPadSpectrogramAugmentation(NeuralModule):
624 """
625 Pad or Crop the incoming Spectrogram to a certain shape.
626
627 Args:
628 audio_length (int): the final number of timesteps that is required.
629 The signal will be either padded or cropped temporally to this
630 size.
631 """
632
633 def __init__(self, audio_length):
634 super(CropOrPadSpectrogramAugmentation, self).__init__()
635 self.audio_length = audio_length
636
637 @typecheck()
638 @torch.no_grad()
639 def forward(self, input_signal, length):
640 image = input_signal
641 num_images = image.shape[0]
642
643 audio_length = self.audio_length
644 image_len = image.shape[-1]
645
646 # Crop long signal
647 if image_len > audio_length: # randomly slice
648 cutout_images = []
649 offsets = torch.randint(low=0, high=image_len - audio_length + 1, size=[num_images])
650
651 for idx, offset in enumerate(offsets):
652 cutout_images.append(image[idx : idx + 1, :, offset : offset + audio_length])
653
654 image = torch.cat(cutout_images, dim=0)
655 del cutout_images
656
657 else: # symmetrically pad short signal with zeros
658 pad_left = (audio_length - image_len) // 2
659 pad_right = (audio_length - image_len) // 2
660
661 if (audio_length - image_len) % 2 == 1:
662 pad_right += 1
663
664 image = torch.nn.functional.pad(image, [pad_left, pad_right], mode="constant", value=0)
665
666 # Replace dynamic length sequences with static number of timesteps
667 length = (length * 0) + audio_length
668
669 return image, length
670
671 @property
672 def input_types(self):
673 """Returns definitions of module output ports.
674 """
675 return {
676 "input_signal": NeuralType(('B', 'D', 'T'), SpectrogramType()),
677 "length": NeuralType(tuple('B'), LengthsType()),
678 }
679
680 @property
681 def output_types(self):
682 """Returns definitions of module output ports.
683 """
684 return {
685 "processed_signal": NeuralType(('B', 'D', 'T'), SpectrogramType()),
686 "processed_length": NeuralType(tuple('B'), LengthsType()),
687 }
688
689 def save_to(self, save_path: str):
690 pass
691
692 @classmethod
693 def restore_from(cls, restore_path: str):
694 pass
695
696
697 class AudioToSpectrogram(NeuralModule):
698 """Transform a batch of input multi-channel signals into a batch of
699 STFT-based spectrograms.
700
701 Args:
702 fft_length: length of FFT
703 hop_length: length of hops/shifts of the sliding window
704 power: exponent for magnitude spectrogram. Default `None` will
705 return a complex-valued spectrogram
706 """
707
708 def __init__(self, fft_length: int, hop_length: int, power: Optional[float] = None):
709 if not HAVE_TORCHAUDIO:
710 logging.error('Could not import torchaudio. Some features might not work.')
711
712 raise ModuleNotFoundError(
713 f"torchaudio is not installed but is necessary to instantiate a {self.__class__.__name__}"
714 )
715
716 super().__init__()
717
718 # For now, assume FFT length is divisible by two
719 if fft_length % 2 != 0:
720 raise ValueError(f'fft_length = {fft_length} must be divisible by 2')
721
722 self.stft = torchaudio.transforms.Spectrogram(
723 n_fft=fft_length, hop_length=hop_length, power=power, pad_mode='constant'
724 )
725
726 # number of subbands
727 self.F = fft_length // 2 + 1
728
729 @property
730 def num_subbands(self) -> int:
731 return self.F
732
733 @property
734 def input_types(self) -> Dict[str, NeuralType]:
735 """Returns definitions of module output ports.
736 """
737 return {
738 "input": NeuralType(('B', 'C', 'T'), AudioSignal()),
739 "input_length": NeuralType(('B',), LengthsType(), optional=True),
740 }
741
742 @property
743 def output_types(self) -> Dict[str, NeuralType]:
744 """Returns definitions of module output ports.
745 """
746 return {
747 "output": NeuralType(('B', 'C', 'D', 'T'), SpectrogramType()),
748 "output_length": NeuralType(('B',), LengthsType()),
749 }
750
751 @typecheck()
752 def forward(
753 self, input: torch.Tensor, input_length: Optional[torch.Tensor] = None
754 ) -> Tuple[torch.Tensor, torch.Tensor]:
755 """Convert a batch of C-channel input signals
756 into a batch of complex-valued spectrograms.
757
758 Args:
759 input: Time-domain input signal with C channels, shape (B, C, T)
760 input_length: Length of valid entries along the time dimension, shape (B,)
761
762 Returns:
763 Output spectrogram with F subbands and N time frames, shape (B, C, F, N)
764 and output length with shape (B,).
765 """
766 B, T = input.size(0), input.size(-1)
767 input = input.view(B, -1, T)
768
769 # STFT output (B, C, F, N)
770 with torch.cuda.amp.autocast(enabled=False):
771 output = self.stft(input.float())
772
773 if input_length is not None:
774 # Mask padded frames
775 output_length = self.get_output_length(input_length=input_length)
776
777 length_mask: torch.Tensor = make_seq_mask_like(
778 lengths=output_length, like=output, time_dim=-1, valid_ones=False
779 )
780 output = output.masked_fill(length_mask, 0.0)
781 else:
782 # Assume all frames are valid for all examples in the batch
783 output_length = output.size(-1) * torch.ones(B, device=output.device).long()
784
785 return output, output_length
786
787 def get_output_length(self, input_length: torch.Tensor) -> torch.Tensor:
788 """Get length of valid frames for the output.
789
790 Args:
791 input_length: number of valid samples, shape (B,)
792
793 Returns:
794 Number of valid frames, shape (B,)
795 """
796 output_length = input_length.div(self.stft.hop_length, rounding_mode='floor').add(1).long()
797 return output_length
798
799
800 class SpectrogramToAudio(NeuralModule):
801 """Transform a batch of input multi-channel spectrograms into a batch of
802 time-domain multi-channel signals.
803
804 Args:
805 fft_length: length of FFT
806 hop_length: length of hops/shifts of the sliding window
807 power: exponent for magnitude spectrogram. Default `None` will
808 return a complex-valued spectrogram
809 """
810
811 def __init__(self, fft_length: int, hop_length: int):
812 if not HAVE_TORCHAUDIO:
813 logging.error('Could not import torchaudio. Some features might not work.')
814
815 raise ModuleNotFoundError(
816 f"torchaudio is not installed but is necessary to instantiate a {self.__class__.__name__}"
817 )
818
819 super().__init__()
820
821 # For now, assume FFT length is divisible by two
822 if fft_length % 2 != 0:
823 raise ValueError(f'fft_length = {fft_length} must be divisible by 2')
824
825 self.istft = torchaudio.transforms.InverseSpectrogram(
826 n_fft=fft_length, hop_length=hop_length, pad_mode='constant'
827 )
828
829 self.F = fft_length // 2 + 1
830
831 @property
832 def num_subbands(self) -> int:
833 return self.F
834
835 @property
836 def input_types(self) -> Dict[str, NeuralType]:
837 """Returns definitions of module output ports.
838 """
839 return {
840 "input": NeuralType(('B', 'C', 'D', 'T'), SpectrogramType()),
841 "input_length": NeuralType(('B',), LengthsType(), optional=True),
842 }
843
844 @property
845 def output_types(self) -> Dict[str, NeuralType]:
846 """Returns definitions of module output ports.
847 """
848 return {
849 "output": NeuralType(('B', 'C', 'T'), AudioSignal()),
850 "output_length": NeuralType(('B',), LengthsType()),
851 }
852
853 @typecheck()
854 def forward(self, input: torch.Tensor, input_length: Optional[torch.Tensor] = None) -> torch.Tensor:
855 """Convert input complex-valued spectrogram to a time-domain
856 signal. Multi-channel IO is supported.
857
858 Args:
859 input: Input spectrogram for C channels, shape (B, C, F, N)
860 input_length: Length of valid entries along the time dimension, shape (B,)
861
862 Returns:
863 Time-domain signal with T time-domain samples and C channels, (B, C, T)
864 and output length with shape (B,).
865 """
866 B, F, N = input.size(0), input.size(-2), input.size(-1)
867 assert F == self.F, f'Number of subbands F={F} not matching self.F={self.F}'
868 input = input.view(B, -1, F, N)
869
870 # iSTFT output (B, C, T)
871 with torch.cuda.amp.autocast(enabled=False):
872 output = self.istft(input.cfloat())
873
874 if input_length is not None:
875 # Mask padded samples
876 output_length = self.get_output_length(input_length=input_length)
877
878 length_mask: torch.Tensor = make_seq_mask_like(
879 lengths=output_length, like=output, time_dim=-1, valid_ones=False
880 )
881 output = output.masked_fill(length_mask, 0.0)
882 else:
883 # Assume all frames are valid for all examples in the batch
884 output_length = output.size(-1) * torch.ones(B, device=output.device).long()
885
886 return output, output_length
887
888 def get_output_length(self, input_length: torch.Tensor) -> torch.Tensor:
889 """Get length of valid samples for the output.
890
891 Args:
892 input_length: number of valid frames, shape (B,)
893
894 Returns:
895 Number of valid samples, shape (B,)
896 """
897 output_length = input_length.sub(1).mul(self.istft.hop_length).long()
898 return output_length
899
900
901 @dataclass
902 class AudioToMelSpectrogramPreprocessorConfig:
903 _target_: str = "nemo.collections.asr.modules.AudioToMelSpectrogramPreprocessor"
904 sample_rate: int = 16000
905 window_size: float = 0.02
906 window_stride: float = 0.01
907 n_window_size: Optional[int] = None
908 n_window_stride: Optional[int] = None
909 window: str = "hann"
910 normalize: str = "per_feature"
911 n_fft: Optional[int] = None
912 preemph: float = 0.97
913 features: int = 64
914 lowfreq: int = 0
915 highfreq: Optional[int] = None
916 log: bool = True
917 log_zero_guard_type: str = "add"
918 log_zero_guard_value: float = 2 ** -24
919 dither: float = 1e-5
920 pad_to: int = 16
921 frame_splicing: int = 1
922 exact_pad: bool = False
923 pad_value: int = 0
924 mag_power: float = 2.0
925 rng: Optional[str] = None
926 nb_augmentation_prob: float = 0.0
927 nb_max_freq: int = 4000
928 use_torchaudio: bool = False
929 mel_norm: str = "slaney"
930 stft_exact_pad: bool = False # Deprecated argument, kept for compatibility with older checkpoints.
931 stft_conv: bool = False # Deprecated argument, kept for compatibility with older checkpoints.
932
933
934 @dataclass
935 class AudioToMFCCPreprocessorConfig:
936 _target_: str = 'nemo.collections.asr.modules.AudioToMFCCPreprocessor'
937 sample_rate: int = 16000
938 window_size: float = 0.02
939 window_stride: float = 0.01
940 n_window_size: Optional[int] = None
941 n_window_stride: Optional[int] = None
942 window: str = 'hann'
943 n_fft: Optional[int] = None
944 lowfreq: Optional[float] = 0.0
945 highfreq: Optional[float] = None
946 n_mels: int = 64
947 n_mfcc: int = 64
948 dct_type: int = 2
949 norm: str = 'ortho'
950 log: bool = True
951
952
953 @dataclass
954 class SpectrogramAugmentationConfig:
955 _target_: str = "nemo.collections.asr.modules.SpectrogramAugmentation"
956 freq_masks: int = 0
957 time_masks: int = 0
958 freq_width: int = 0
959 time_width: Optional[Any] = 0
960 rect_masks: int = 0
961 rect_time: int = 0
962 rect_freq: int = 0
963 mask_value: float = 0
964 rng: Optional[Any] = None # random.Random() type
965 use_numba_spec_augment: bool = True
966
967
968 @dataclass
969 class CropOrPadSpectrogramAugmentationConfig:
970 audio_length: int
971 _target_: str = "nemo.collections.asr.modules.CropOrPadSpectrogramAugmentation"
972
973
974 @dataclass
975 class MaskedPatchAugmentationConfig:
976 patch_size: int = 48
977 mask_patches: float = 10.0
978 freq_masks: int = 0
979 freq_width: int = 0
980 _target_: str = "nemo.collections.asr.modules.MaskedPatchAugmentation"
981
[end of nemo/collections/asr/modules/audio_preprocessing.py]
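The frame/sample arithmetic in `AudioToSpectrogram.get_output_length` and `SpectrogramToAudio.get_output_length` above reduces to two integer formulas: `frames = samples // hop + 1` for the STFT, and `samples = (frames - 1) * hop` for the iSTFT. A minimal pure-Python sketch of that math (helper names are ours; torch is deliberately omitted since only the integer arithmetic is illustrated):

```python
# Sketch of the length arithmetic used by AudioToSpectrogram.get_output_length
# and SpectrogramToAudio.get_output_length; operates on plain ints instead of
# torch tensors.

def stft_output_length(num_samples: int, hop_length: int) -> int:
    # Matches input_length.div(hop_length, rounding_mode='floor').add(1)
    return num_samples // hop_length + 1


def istft_output_length(num_frames: int, hop_length: int) -> int:
    # Matches input_length.sub(1).mul(hop_length)
    return (num_frames - 1) * hop_length


hop = 128
frames = stft_output_length(16000, hop)     # 16000 // 128 + 1 = 126
samples = istft_output_length(frames, hop)  # (126 - 1) * 128 = 16000
print(frames, samples)
```

Note the round trip is exact here only because 16000 is divisible by the hop length; trailing partial hops are floored away by the STFT formula.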
[start of nemo/collections/asr/parts/k2/classes.py]
1 # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from abc import ABC
16 from dataclasses import dataclass
17 from typing import Any, Optional, Tuple
18
19 import torch
20 from omegaconf import DictConfig
21
22 from nemo.utils import logging
23
24
25 @dataclass
26 class GraphIntersectDenseConfig:
27 """Graph dense intersection config.
28 """
29
30 search_beam: float = 20.0
31 output_beam: float = 10.0
32 min_active_states: int = 30
33 max_active_states: int = 10000
34
35
36 @dataclass
37 class GraphModuleConfig:
38 """Config for graph modules.
39 Typically used with graph losses and decoders.
40 """
41
42 topo_type: str = "default"
43 topo_with_self_loops: bool = True
44 token_lm: Optional[Any] = None
45 intersect_pruned: bool = False
46 intersect_conf: GraphIntersectDenseConfig = GraphIntersectDenseConfig()
47 boost_coeff: float = 0.0
48 predictor_window_size: int = 0
49 predictor_step_size: int = 1
50
51
52 class ASRK2Mixin(ABC):
53 """k2 Mixin class that simplifies the construction of various models with k2-based losses.
54
55 It does the following:
56 - Sets up the graph loss and decoder (methods _init_k2 and update_k2_modules).
57 - Registers external graphs, if needed.
58 - Augments forward(...) with optional graph decoding to get accurate predictions.
59 """
60
61 def _init_k2(self):
62 """
63 k2-related initialization implementation.
64
65 This method is expected to run after the __init__ which sets self._cfg
66 self._cfg is expected to have the attribute graph_module_cfg
67 """
68 if not hasattr(self, "_cfg"):
69 raise ValueError("self._cfg must be set before calling _init_k2().")
70 if not hasattr(self._cfg, "graph_module_cfg") or self._cfg.graph_module_cfg is None:
71 raise ValueError("self._cfg.graph_module_cfg must be set and cannot be None.")
72 self.graph_module_cfg = self._cfg.graph_module_cfg
73
74 # register token_lm for MAPLoss
75 criterion_type = self.graph_module_cfg.get("criterion_type", "ml")
76 self.use_graph_lm = criterion_type == "map"
77 if self.use_graph_lm:
78 token_lm_path = self.graph_module_cfg.backend_cfg.get("token_lm", None)
79 if token_lm_path is None:
80 raise ValueError(
81 f"graph_module_cfg.backend_cfg.token_lm is empty. It must be set for criterion_type == `{criterion_type}`"
82 )
83 token_lm_path = self.register_artifact('graph_module_cfg.backend_cfg.token_lm', token_lm_path)
84 self.graph_module_cfg.backend_cfg["token_lm"] = token_lm_path
85
86 self.update_k2_modules(self.graph_module_cfg)
87
88 def update_k2_modules(self, input_cfg: DictConfig):
89 """
90 Helper function to initialize or update k2 loss and transcribe_decoder.
91
92 Args:
93 input_cfg: DictConfig to take new parameters from. Schema is expected as in
94 nemo.collections.asr.models.configs.k2_sequence_models_config.GraphModuleConfig
95 """
96 del self.loss
97 if hasattr(self, "transcribe_decoder"):
98 del self.transcribe_decoder
99
100 if hasattr(self, "joint"):
101 # RNNT
102 num_classes = self.joint.num_classes_with_blank - 1
103 else:
104 # CTC, MMI, ...
105 num_classes = self.decoder.num_classes_with_blank - 1
106 remove_consecutive = input_cfg.backend_cfg.get("topo_with_self_loops", True) and input_cfg.backend_cfg.get(
107 "topo_type", "default"
108 ) not in ["forced_blank", "identity",]
109 self._wer.remove_consecutive = remove_consecutive
110
111 from nemo.collections.asr.losses.lattice_losses import LatticeLoss
112
113 self.loss = LatticeLoss(
114 num_classes=num_classes,
115 reduction=self._cfg.get("ctc_reduction", "mean_batch"),
116 backend="k2",
117 criterion_type=input_cfg.get("criterion_type", "ml"),
118 loss_type=input_cfg.get("loss_type", "ctc"),
119 split_batch_size=input_cfg.get("split_batch_size", 0),
120 graph_module_cfg=input_cfg.backend_cfg,
121 )
122
123 criterion_type = self.loss.criterion_type
124 self.use_graph_lm = criterion_type == "map"
125 transcribe_training = input_cfg.get("transcribe_training", False)
126 if transcribe_training and criterion_type == "ml":
127 logging.warning(
128 f"""You do not need to use transcribe_training=`{transcribe_training}`
129 with criterion_type=`{criterion_type}`. transcribe_training will be set to False."""
130 )
131 transcribe_training = False
132 self.transcribe_training = transcribe_training
133 if self.use_graph_lm:
134 from nemo.collections.asr.modules.graph_decoder import ViterbiDecoderWithGraph
135
136 self.transcribe_decoder = ViterbiDecoderWithGraph(
137 num_classes=num_classes,
138 backend="k2",
139 dec_type="token_lm",
140 return_type="1best",
141 return_ilabels=True,
142 output_aligned=True,
143 split_batch_size=input_cfg.get("split_batch_size", 0),
144 graph_module_cfg=input_cfg.backend_cfg,
145 )
146
147 def _forward_k2_post_processing(
148 self, log_probs: torch.Tensor, encoded_length: torch.Tensor, greedy_predictions: torch.Tensor
149 ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
150 """
151 k2-related post-processing part of .forward()
152
153 Args:
154 log_probs: The log probabilities tensor of shape [B, T, D].
155 encoded_length: The lengths of the acoustic sequence after propagation through the encoder, of shape [B].
156 greedy_predictions: The greedy token predictions of the model of shape [B, T]
157
158 Returns:
159 A tuple of 3 elements -
160 1) The log probabilities tensor of shape [B, T, D].
161 2) The lengths of the acoustic sequence after propagation through the encoder, of shape [B].
162 3) The greedy token predictions of the model of shape [B, T] (via argmax)
163 """
164 # greedy_predictions from .forward() are incorrect for criterion_type=`map`
165 # getting correct greedy_predictions, if needed
166 if self.use_graph_lm and (not self.training or self.transcribe_training):
167 greedy_predictions, encoded_length, _ = self.transcribe_decoder.forward(
168 log_probs=log_probs, log_probs_length=encoded_length
169 )
170 return log_probs, encoded_length, greedy_predictions
171
[end of nemo/collections/asr/parts/k2/classes.py]
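The `remove_consecutive` condition computed inside `ASRK2Mixin.update_k2_modules` is a small boolean rule: repeated tokens are collapsed only for self-loop topologies whose type is neither `forced_blank` nor `identity`. A standalone sketch of that rule (the function name is ours, not NeMo's):

```python
# Sketch of the remove_consecutive rule from ASRK2Mixin.update_k2_modules.

def remove_consecutive(topo_with_self_loops: bool = True, topo_type: str = "default") -> bool:
    # Consecutive repeats are merged only when the topology has self-loops
    # and is not one of the special topologies that keep repeats.
    return topo_with_self_loops and topo_type not in ("forced_blank", "identity")


print(remove_consecutive())                        # default topology with self-loops
print(remove_consecutive(topo_type="identity"))    # special topology keeps repeats
print(remove_consecutive(False))                   # no self-loops, no merging
```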
[start of nemo/collections/asr/parts/submodules/adapters/multi_head_attention_adapter_module.py]
1 # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import math
16 from dataclasses import dataclass
17 from typing import Any, Optional
18
19 import torch
20 from torch import nn as nn
21
22 from nemo.collections.asr.parts.submodules import multi_head_attention as mha
23 from nemo.collections.common.parts import adapter_modules
24 from nemo.core.classes.mixins import adapter_mixin_strategies
25
26
27 class MHAResidualAddAdapterStrategy(adapter_mixin_strategies.ResidualAddAdapterStrategy):
28 """
29 An implementation of residual addition of an adapter module with its input for the MHA Adapters.
30 """
31
32 def forward(self, input: torch.Tensor, adapter: torch.nn.Module, *, module: 'AdapterModuleMixin'):
33 """
34 A basic strategy, comprising of a residual connection over the input, after forward pass by
35 the underlying adapter. Additional work is done to pack and unpack the dictionary of inputs and outputs.
36
37 Note: The `value` tensor is added to the output of the attention adapter as the residual connection.
38
39 Args:
40 input: A dictionary of multiple input arguments for the adapter module.
41 `query`, `key`, `value`: Original output tensor of the module, or the output of the
42 previous adapter (if more than one adapters are enabled).
43 `mask`: Attention mask.
44 `pos_emb`: Optional positional embedding for relative encoding.
45 adapter: The adapter module that is currently required to perform the forward pass.
46 module: The calling module, in its entirety. It is a module that implements `AdapterModuleMixin`,
47 therefore the strategy can access all other adapters in this module via `module.adapter_layer`.
48
49 Returns:
50 The result tensor, after one of the active adapters has finished its forward passes.
51 """
52 out = self.compute_output(input, adapter, module=module)
53
54 # If not in training mode, or probability of stochastic depth is 0, skip step.
55 p = self.stochastic_depth
56 if not module.training or p == 0.0:
57 pass
58 else:
59 out = self.apply_stochastic_depth(out, input['value'], adapter, module=module)
60
61 # Return the residual connection output = input + adapter(input)
62 result = input['value'] + out
63
64 # If l2_lambda is activated, register the loss value
65 self.compute_auxiliary_losses(result, input['value'], adapter, module=module)
66
67 return result
68
69 def compute_output(
70 self, input: torch.Tensor, adapter: torch.nn.Module, *, module: 'AdapterModuleMixin'
71 ) -> torch.Tensor:
72 """
73 Compute the output of a single adapter to some input.
74
75 Args:
76 input: Original output tensor of the module, or the output of the previous adapter (if more than
77 one adapters are enabled).
78 adapter: The adapter module that is currently required to perform the forward pass.
79 module: The calling module, in its entirety. It is a module that implements `AdapterModuleMixin`,
80 therefore the strategy can access all other adapters in this module via `module.adapter_layer`.
81
82 Returns:
83 The result tensor, after one of the active adapters has finished its forward passes.
84 """
85 if isinstance(input, (list, tuple)):
86 out = adapter(*input)
87 elif isinstance(input, dict):
88 out = adapter(**input)
89 else:
90 out = adapter(input)
91 return out
92
93
94 @dataclass
95 class MHAResidualAddAdapterStrategyConfig(adapter_mixin_strategies.ResidualAddAdapterStrategyConfig):
96 _target_: str = "{0}.{1}".format(
97 MHAResidualAddAdapterStrategy.__module__, MHAResidualAddAdapterStrategy.__name__
98 ) # mandatory field
99
100
101 class MultiHeadAttentionAdapter(mha.MultiHeadAttention, adapter_modules.AdapterModuleUtil):
102 """Multi-Head Attention layer of Transformer.
103 Args:
104 n_head (int): number of heads
105 n_feat (int): size of the features
106 dropout_rate (float): dropout rate
107 proj_dim (int, optional): Optional integer value for projection before computing attention.
108 If None, then there is no projection (equivalent to proj_dim = n_feat).
109 If > 0, then will project the n_feat to proj_dim before calculating attention.
110 If < 1, then will equal n_head, so that each head has a projected dimension of 1.
111 adapter_strategy: By default, MHAResidualAddAdapterStrategyConfig. An adapter composition function object.
112 """
113
114 def __init__(
115 self,
116 n_head: int,
117 n_feat: int,
118 dropout_rate: float,
119 proj_dim: Optional[int] = None,
120 adapter_strategy: MHAResidualAddAdapterStrategy = None,
121 ):
122 super().__init__(n_head=n_head, n_feat=n_feat, dropout_rate=dropout_rate, max_cache_len=0)
123
124 self.pre_norm = nn.LayerNorm(n_feat)
125
126 # Set the projection dim to number of heads automatically
127 if proj_dim is not None and proj_dim < 1:
128 proj_dim = n_head
129
130 self.proj_dim = proj_dim
131
132 # Recompute weights for projection dim
133 if self.proj_dim is not None:
134 if self.proj_dim % n_head != 0:
135 raise ValueError(f"proj_dim ({proj_dim}) is not divisible by n_head ({n_head})")
136
137 self.d_k = self.proj_dim // n_head
138 self.s_d_k = math.sqrt(self.d_k)
139 self.linear_q = nn.Linear(n_feat, self.proj_dim)
140 self.linear_k = nn.Linear(n_feat, self.proj_dim)
141 self.linear_v = nn.Linear(n_feat, self.proj_dim)
142 self.linear_out = nn.Linear(self.proj_dim, n_feat)
143
144 # Setup adapter strategy
145 self.setup_adapter_strategy(adapter_strategy)
146
147 # reset parameters for Q to be identity operation
148 self.reset_parameters()
149
150 def forward(self, query, key, value, mask, pos_emb=None, cache=None):
151 """Compute 'Scaled Dot Product Attention'.
152 Args:
153 query (torch.Tensor): (batch, time1, size)
154 key (torch.Tensor): (batch, time2, size)
155 value(torch.Tensor): (batch, time2, size)
156 mask (torch.Tensor): (batch, time1, time2)
157 cache (torch.Tensor) : (batch, time_cache, size)
158
159 returns:
160 output (torch.Tensor): transformed `value` (batch, time1, d_model) weighted by the query dot key attention
161 cache (torch.Tensor) : (batch, time_cache_next, size)
162 """
163 # Need to perform duplicate computations as at this point the tensors have been
164 # separated by the adapter forward
165 query = self.pre_norm(query)
166 key = self.pre_norm(key)
167 value = self.pre_norm(value)
168
169 return super().forward(query, key, value, mask, pos_emb, cache=cache)
170
171 def reset_parameters(self):
172 with torch.no_grad():
173 nn.init.zeros_(self.linear_out.weight)
174 nn.init.zeros_(self.linear_out.bias)
175
176 def get_default_strategy_config(self) -> 'dataclass':
177 return MHAResidualAddAdapterStrategyConfig()
178
179
180 @dataclass
181 class MultiHeadAttentionAdapterConfig:
182 n_head: int
183 n_feat: int
184 dropout_rate: float = 0.0
185 proj_dim: Optional[int] = None
186 adapter_strategy: Optional[Any] = MHAResidualAddAdapterStrategyConfig()
187 _target_: str = "{0}.{1}".format(MultiHeadAttentionAdapter.__module__, MultiHeadAttentionAdapter.__name__)
188
189
190 class RelPositionMultiHeadAttentionAdapter(mha.RelPositionMultiHeadAttention, adapter_modules.AdapterModuleUtil):
191 """Multi-Head Attention layer of Transformer-XL with support of relative positional encoding.
192 Paper: https://arxiv.org/abs/1901.02860
193 Args:
194 n_head (int): number of heads
195 n_feat (int): size of the features
196 dropout_rate (float): dropout rate
197 proj_dim (int, optional): Optional integer value for projection before computing attention.
198 If None, then there is no projection (equivalent to proj_dim = n_feat).
199 If > 0, then will project the n_feat to proj_dim before calculating attention.
200 If < 1, then will equal n_head, so that each head has a projected dimension of 1.
201 adapter_strategy: By default, MHAResidualAddAdapterStrategyConfig. An adapter composition function object.
202 """
203
204 def __init__(
205 self,
206 n_head: int,
207 n_feat: int,
208 dropout_rate: float,
209 proj_dim: Optional[int] = None,
210 adapter_strategy: MHAResidualAddAdapterStrategyConfig = None,
211 ):
212 super().__init__(
213 n_head=n_head, n_feat=n_feat, dropout_rate=dropout_rate, pos_bias_u=None, pos_bias_v=None, max_cache_len=0
214 )
215
216 self.pre_norm = nn.LayerNorm(n_feat)
217
218 # Set the projection dim to number of heads automatically
219 if proj_dim is not None and proj_dim < 1:
220 proj_dim = n_head
221
222 self.proj_dim = proj_dim
223
224 # Recompute weights for projection dim
225 if self.proj_dim is not None:
226 if self.proj_dim % n_head != 0:
227 raise ValueError(f"proj_dim ({proj_dim}) is not divisible by n_head ({n_head})")
228
229 self.d_k = self.proj_dim // n_head
230 self.s_d_k = math.sqrt(self.d_k)
231 self.linear_q = nn.Linear(n_feat, self.proj_dim)
232 self.linear_k = nn.Linear(n_feat, self.proj_dim)
233 self.linear_v = nn.Linear(n_feat, self.proj_dim)
234 self.linear_out = nn.Linear(self.proj_dim, n_feat)
235 self.linear_pos = nn.Linear(n_feat, self.proj_dim, bias=False)
236 self.pos_bias_u = nn.Parameter(torch.FloatTensor(self.h, self.d_k))
237 self.pos_bias_v = nn.Parameter(torch.FloatTensor(self.h, self.d_k))
238
239 # Setup adapter strategy
240 self.setup_adapter_strategy(adapter_strategy)
241
242 # reset parameters for Q to be identity operation
243 self.reset_parameters()
244
245 def forward(self, query, key, value, mask, pos_emb, cache=None):
246 """Compute 'Scaled Dot Product Attention' with rel. positional encoding.
247 Args:
248 query (torch.Tensor): (batch, time1, size)
249 key (torch.Tensor): (batch, time2, size)
250 value(torch.Tensor): (batch, time2, size)
251 mask (torch.Tensor): (batch, time1, time2)
252 pos_emb (torch.Tensor) : (batch, time1, size)
253 cache (torch.Tensor) : (batch, time_cache, size)
254 Returns:
255 output (torch.Tensor): transformed `value` (batch, time1, d_model) weighted by the query dot key attention
256 cache_next (torch.Tensor) : (batch, time_cache_next, size)
257 """
258 # Need to perform duplicate computations as at this point the tensors have been
259 # separated by the adapter forward
260 query = self.pre_norm(query)
261 key = self.pre_norm(key)
262 value = self.pre_norm(value)
263
264 return super().forward(query, key, value, mask, pos_emb, cache=cache)
265
266 def reset_parameters(self):
267 with torch.no_grad():
268 nn.init.zeros_(self.linear_out.weight)
269 nn.init.zeros_(self.linear_out.bias)
270
271 # NOTE: This exact procedure is apparently highly important.
272 # Above operation is safe to do as self.linear_out.weight *= 0.0 (similar for bias)
273 # However:
274 # DO NOT REPLACE BELOW WITH self.pos_bias_u *= 0.0 OR self.pos_bias_v *= 0.0
275 # For some reason at init sometimes it will cause the value of the tensor to become NaN
276 # All operations to compute matrix_ac and matrix_bd will then fail.
277 nn.init.zeros_(self.pos_bias_u)
278 nn.init.zeros_(self.pos_bias_v)
279
280 def get_default_strategy_config(self) -> 'dataclass':
281 return MHAResidualAddAdapterStrategyConfig()
282
283
284 @dataclass
285 class RelPositionMultiHeadAttentionAdapterConfig:
286 n_head: int
287 n_feat: int
288 dropout_rate: float = 0.0
289 proj_dim: Optional[int] = None
290 adapter_strategy: Optional[Any] = MHAResidualAddAdapterStrategyConfig()
291 _target_: str = "{0}.{1}".format(
292 RelPositionMultiHeadAttentionAdapter.__module__, RelPositionMultiHeadAttentionAdapter.__name__
293 )
294
295
296 class PositionalEncodingAdapter(mha.PositionalEncoding, adapter_modules.AdapterModuleUtil):
297
298 """
299 Absolute positional embedding adapter.
300
301 .. note::
302
303 Absolute positional embedding value is added to the input tensor *without residual connection* !
304 Therefore, the input is changed; if you only require the positional embedding, drop the returned `x` !
305
306 Args:
307 d_model (int): The input dimension of x.
308 max_len (int): The max sequence length.
309 xscale (float): The input scaling factor. Defaults to 1.0.
310 adapter_strategy (AbstractAdapterStrategy): By default, ReturnResultAdapterStrategyConfig.
311 An adapter composition function object.
312 NOTE: Since this is a positional encoding, it will not add a residual !
313 """
314
315 def __init__(
316 self,
317 d_model: int,
318 max_len: int = 5000,
319 xscale=1.0,
320 adapter_strategy: adapter_mixin_strategies.ReturnResultAdapterStrategyConfig = None,
321 ):
322
323 super().__init__(
324 d_model=d_model, dropout_rate=0.0, max_len=max_len, xscale=xscale, dropout_rate_emb=0.0,
325 )
326
327 # Setup adapter strategy
328 self.setup_adapter_strategy(adapter_strategy)
329
330 def get_default_strategy_config(self) -> 'dataclass':
331 return adapter_mixin_strategies.ReturnResultAdapterStrategyConfig()
332
333
334 @dataclass
335 class PositionalEncodingAdapterConfig:
336 d_model: int
337 max_len: int = 5000
338 xscale: float = 1.0
339 adapter_strategy: Optional[Any] = adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
340 _target_: str = "{0}.{1}".format(PositionalEncodingAdapter.__module__, PositionalEncodingAdapter.__name__)
341
342
343 class RelPositionalEncodingAdapter(mha.RelPositionalEncoding, adapter_modules.AdapterModuleUtil):
344 """
345 Relative positional encoding for TransformerXL's layers
346 See : Appendix B in https://arxiv.org/abs/1901.02860
347
348 .. note::
349
350 Relative positional embedding value is **not** added to the input tensor !
351 Therefore, the input is returned unchanged; if you only require the positional embedding, drop the returned `x` !
352
353 Args:
354 d_model (int): embedding dim
355 max_len (int): maximum input length
356 xscale (bool): whether to scale the input by sqrt(d_model)
357 adapter_strategy: By default, ReturnResultAdapterStrategyConfig. An adapter composition function object.
358 """
359
360 def __init__(
361 self,
362 d_model: int,
363 max_len: int = 5000,
364 xscale=1.0,
365 adapter_strategy: adapter_mixin_strategies.ReturnResultAdapterStrategyConfig = None,
366 ):
367 super().__init__(d_model=d_model, dropout_rate=0.0, max_len=max_len, xscale=xscale, dropout_rate_emb=0.0)
368
369 # Setup adapter strategy
370 self.setup_adapter_strategy(adapter_strategy)
371
372 def get_default_strategy_config(self) -> 'dataclass':
373 return adapter_mixin_strategies.ReturnResultAdapterStrategyConfig()
374
375
376 @dataclass
377 class RelPositionalEncodingAdapterConfig:
378 d_model: int
379 max_len: int = 5000
380 xscale: float = 1.0
381 adapter_strategy: Optional[Any] = adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
382 _target_: str = "{0}.{1}".format(RelPositionalEncodingAdapter.__module__, RelPositionalEncodingAdapter.__name__)
383
[end of nemo/collections/asr/parts/submodules/adapters/multi_head_attention_adapter_module.py]
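The `_target_` fields in the adapter config dataclasses above are all built with the same `"{0}.{1}".format(cls.__module__, cls.__name__)` pattern, producing a dotted import path that a config framework (e.g. Hydra) can later resolve back into the class. A minimal standalone sketch of the pattern — `DummyAdapter` is a made-up illustrative class, not part of NeMo:

```python
# Illustrative only: DummyAdapter is hypothetical, not a NeMo class.
# The config dataclasses above store a dotted "module.ClassName" path in
# `_target_`, built directly from the class object, so a config-driven
# instantiator can import and construct the class by path later.
class DummyAdapter:
    pass


target = "{0}.{1}".format(DummyAdapter.__module__, DummyAdapter.__name__)

# The module part depends on where the class is defined (e.g. "__main__"
# when run as a script); the class-name part is always the last component.
assert target.endswith(".DummyAdapter")
```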
[start of nemo/collections/asr/parts/submodules/ctc_beam_decoding.py]
1 # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import math
16 import os
17 from dataclasses import dataclass
18 from typing import List, Optional, Tuple, Union
19
20 import torch
21
22 from nemo.collections.asr.parts.utils import rnnt_utils
23 from nemo.collections.common.tokenizers.tokenizer_spec import TokenizerSpec
24 from nemo.core.classes import Typing, typecheck
25 from nemo.core.neural_types import HypothesisType, LengthsType, LogprobsType, NeuralType
26 from nemo.utils import logging
27
28 DEFAULT_TOKEN_OFFSET = 100
29
30
31 def pack_hypotheses(
32 hypotheses: List[rnnt_utils.NBestHypotheses], logitlen: torch.Tensor,
33 ) -> List[rnnt_utils.NBestHypotheses]:
34
35 if logitlen is not None:
36 if hasattr(logitlen, 'cpu'):
37 logitlen_cpu = logitlen.to('cpu')
38 else:
39 logitlen_cpu = logitlen
40
41 for idx, hyp in enumerate(hypotheses): # type: rnnt_utils.NBestHypotheses
42 for candidate_idx, cand in enumerate(hyp.n_best_hypotheses):
43 cand.y_sequence = torch.tensor(cand.y_sequence, dtype=torch.long)
44
45 if logitlen is not None:
46 cand.length = logitlen_cpu[idx]
47
48 if cand.dec_state is not None:
49 cand.dec_state = _states_to_device(cand.dec_state)
50
51 return hypotheses
52
53
54 def _states_to_device(dec_state, device='cpu'):
55 if torch.is_tensor(dec_state):
56 dec_state = dec_state.to(device)
57
58 elif isinstance(dec_state, (list, tuple)):
59 dec_state = tuple(_states_to_device(dec_i, device) for dec_i in dec_state)
60
61 return dec_state
62
63
64 class AbstractBeamCTCInfer(Typing):
65 """A beam CTC decoder.
66
67 Provides a common abstraction for sample level beam decoding.
68
69 Args:
70 blank_id: int, index of the blank token. Can be 0 or len(vocabulary).
71 beam_size: int, size of the beam used in the underlying beam search engine.
72
73 """
74
75 @property
76 def input_types(self):
77 """Returns definitions of module input ports.
78 """
79 return {
80 "decoder_output": NeuralType(('B', 'T', 'D'), LogprobsType()),
81 "decoder_lengths": NeuralType(tuple('B'), LengthsType()),
82 }
83
84 @property
85 def output_types(self):
86 """Returns definitions of module output ports.
87 """
88 return {"predictions": [NeuralType(elements_type=HypothesisType())]}
89
90 def __init__(self, blank_id: int, beam_size: int):
91 self.blank_id = blank_id
92
93 if beam_size < 1:
94 raise ValueError("Beam search size cannot be less than 1!")
95
96 self.beam_size = beam_size
97
98 # Variables set by corresponding setter methods
99 self.vocab = None
100 self.decoding_type = None
101 self.tokenizer = None
102
103 # Utility maps for vocabulary
104 self.vocab_index_map = None
105 self.index_vocab_map = None
106
107 # Internal variable, used to prevent double reduction of consecutive tokens (ctc collapse)
108 self.override_fold_consecutive_value = None
109
110 def set_vocabulary(self, vocab: List[str]):
111 """
112 Set the vocabulary of the decoding framework.
113
114 Args:
115 vocab: List of str. Each token corresponds to its location in the vocabulary emitted by the model.
116 Note that this vocabulary must NOT contain the "BLANK" token.
117 """
118 self.vocab = vocab
119 self.vocab_index_map = {v: i for i, v in enumerate(vocab)}
120 self.index_vocab_map = {i: v for i, v in enumerate(vocab)}
121
122 def set_decoding_type(self, decoding_type: str):
123 """
124 Sets the decoding type of the framework. Can support either char or subword models.
125
126 Args:
127 decoding_type: Str corresponding to decoding type. Only supports "char" and "subword".
128 """
129 decoding_type = decoding_type.lower()
130 supported_types = ['char', 'subword']
131
132 if decoding_type not in supported_types:
133 raise ValueError(
134 f"Unsupported decoding type. Supported types = {supported_types}.\n" f"Given = {decoding_type}"
135 )
136
137 self.decoding_type = decoding_type
138
139 def set_tokenizer(self, tokenizer: TokenizerSpec):
140 """
141 Set the tokenizer of the decoding framework.
142
143 Args:
144 tokenizer: NeMo tokenizer object, which inherits from TokenizerSpec.
145 """
146 self.tokenizer = tokenizer
147
148 @typecheck()
149 def forward(
150 self, decoder_output: torch.Tensor, decoder_lengths: torch.Tensor,
151 ) -> Tuple[List[Union[rnnt_utils.Hypothesis, rnnt_utils.NBestHypotheses]]]:
152 """Returns a list of hypotheses given an input batch of the encoder hidden embedding.
153 Output token is generated auto-regressively.
154
155 Args:
156 decoder_output: A tensor of size (batch, timesteps, features).
157 decoder_lengths: list of int representing the length of each sequence
158 output sequence.
159
160 Returns:
161 packed list containing batch number of sentences (Hypotheses).
162 """
163 raise NotImplementedError()
164
165 def __call__(self, *args, **kwargs):
166 return self.forward(*args, **kwargs)
167
168
169 class BeamCTCInfer(AbstractBeamCTCInfer):
170 """A beam CTC decoder.
171 
172 Provides a common abstraction for sample level beam decoding.
173
174 Args:
175 blank_id: int, index of the blank token. Can be 0 or len(vocabulary).
176 preserve_alignments: Bool flag which preserves the history of logprobs generated during
177 decoding (sample / batched). When set to true, the Hypothesis will contain
178 the non-null value for `logprobs` in it. Here, `logprobs` is a torch.Tensor.
179 compute_timestamps: A bool flag, which determines whether to compute the character/subword, or
180 word based timestamp mapping the output log-probabilities to discrete intervals of timestamps.
181 The timestamps will be available in the returned Hypothesis.timestep as a dictionary.
182
183 """
184
185 def __init__(
186 self,
187 blank_id: int,
188 beam_size: int,
189 search_type: str = "default",
190 return_best_hypothesis: bool = True,
191 preserve_alignments: bool = False,
192 compute_timestamps: bool = False,
193 beam_alpha: float = 1.0,
194 beam_beta: float = 0.0,
195 kenlm_path: str = None,
196 flashlight_cfg: Optional['FlashlightConfig'] = None,
197 pyctcdecode_cfg: Optional['PyCTCDecodeConfig'] = None,
198 ):
199 super().__init__(blank_id=blank_id, beam_size=beam_size)
200
201 self.search_type = search_type
202 self.return_best_hypothesis = return_best_hypothesis
203 self.preserve_alignments = preserve_alignments
204 self.compute_timestamps = compute_timestamps
205
206 if self.compute_timestamps:
207 raise ValueError("Currently this flag is not supported for beam search algorithms.")
208
209 self.vocab = None # This must be set by specific method by user before calling forward() !
210
211 if search_type == "default" or search_type == "nemo":
212 self.search_algorithm = self.default_beam_search
213 elif search_type == "pyctcdecode":
214 self.search_algorithm = self._pyctcdecode_beam_search
215 elif search_type == "flashlight":
216 self.search_algorithm = self.flashlight_beam_search
217 else:
218 raise NotImplementedError(
219 f"The search type ({search_type}) supplied is not supported!\n"
220 f"Please use one of : (default, nemo, pyctcdecode, flashlight)"
221 )
222
223 # Log the beam search algorithm
224 logging.info(f"Beam search algorithm: {search_type}")
225
226 self.beam_alpha = beam_alpha
227 self.beam_beta = beam_beta
228
229 # Default beam search args
230 self.kenlm_path = kenlm_path
231
232 # PyCTCDecode params
233 if pyctcdecode_cfg is None:
234 pyctcdecode_cfg = PyCTCDecodeConfig()
235 self.pyctcdecode_cfg = pyctcdecode_cfg # type: PyCTCDecodeConfig
236
237 if flashlight_cfg is None:
238 flashlight_cfg = FlashlightConfig()
239 self.flashlight_cfg = flashlight_cfg
240
241 # Default beam search scorer functions
242 self.default_beam_scorer = None
243 self.pyctcdecode_beam_scorer = None
244 self.flashlight_beam_scorer = None
245 self.token_offset = 0
246
247 @typecheck()
248 def forward(
249 self, decoder_output: torch.Tensor, decoder_lengths: torch.Tensor,
250 ) -> Tuple[List[Union[rnnt_utils.Hypothesis, rnnt_utils.NBestHypotheses]]]:
251 """Returns a list of hypotheses given an input batch of the encoder hidden embedding.
252 Output token is generated auto-regressively.
253
254 Args:
255 decoder_output: A tensor of size (batch, timesteps, features).
256 decoder_lengths: list of int representing the length of each sequence
257 output sequence.
258
259 Returns:
260 packed list containing batch number of sentences (Hypotheses).
261 """
262 if self.vocab is None:
263 raise RuntimeError("Please set the vocabulary with `set_vocabulary()` before calling this function.")
264
265 if self.decoding_type is None:
266 raise ValueError("Please set the decoding type with `set_decoding_type()` before calling this function.")
267
268 with torch.no_grad(), torch.inference_mode():
269 # Process each sequence independently
270 prediction_tensor = decoder_output
271
272 if prediction_tensor.ndim != 3:
273 raise ValueError(
274 f"`decoder_output` must be a tensor of shape [B, T, V] (log probs, float). "
275 f"Provided shape = {prediction_tensor.shape}"
276 )
277
278 # determine type of input - logprobs or labels
279 out_len = decoder_lengths if decoder_lengths is not None else None
280 hypotheses = self.search_algorithm(prediction_tensor, out_len)
281
282 # Pack results into Hypotheses
283 packed_result = pack_hypotheses(hypotheses, decoder_lengths)
284
285 # Pack the result
286 if self.return_best_hypothesis and isinstance(packed_result[0], rnnt_utils.NBestHypotheses):
287 packed_result = [res.n_best_hypotheses[0] for res in packed_result] # type: Hypothesis
288
289 return (packed_result,)
290
291 @torch.no_grad()
292 def default_beam_search(
293 self, x: torch.Tensor, out_len: torch.Tensor
294 ) -> List[Union[rnnt_utils.Hypothesis, rnnt_utils.NBestHypotheses]]:
295 """
296 Open Seq2Seq Beam Search Algorithm (DeepSpeed)
297
298 Args:
299 x: Tensor of shape [B, T, V+1], where B is the batch size, T is the maximum sequence length,
300 and V is the vocabulary size. The tensor contains log-probabilities.
301 out_len: Tensor of shape [B], contains lengths of each sequence in the batch.
302
303 Returns:
304 A list of NBestHypotheses objects, one for each sequence in the batch.
305 """
306 if self.compute_timestamps:
307 raise ValueError(
308 f"Beam Search with strategy `{self.search_type}` does not support time stamp calculation!"
309 )
310
311 if self.default_beam_scorer is None:
312 # Check for filepath
313 if self.kenlm_path is None or not os.path.exists(self.kenlm_path):
314 raise FileNotFoundError(
315 f"KenLM binary file not found at : {self.kenlm_path}. "
316 f"Please set a valid path in the decoding config."
317 )
318
319 # perform token offset for subword models
320 if self.decoding_type == 'subword':
321 vocab = [chr(idx + self.token_offset) for idx in range(len(self.vocab))]
322 else:
323 # char models
324 vocab = self.vocab
325
326 # Must import at runtime to avoid circular dependency due to module level import.
327 from nemo.collections.asr.modules.beam_search_decoder import BeamSearchDecoderWithLM
328
329 self.default_beam_scorer = BeamSearchDecoderWithLM(
330 vocab=vocab,
331 lm_path=self.kenlm_path,
332 beam_width=self.beam_size,
333 alpha=self.beam_alpha,
334 beta=self.beam_beta,
335 num_cpus=max(1, os.cpu_count()),
336 input_tensor=False,
337 )
338
339 x = x.to('cpu')
340
341 with typecheck.disable_checks():
342 data = [x[sample_id, : out_len[sample_id], :].softmax(dim=-1) for sample_id in range(len(x))]
343 beams_batch = self.default_beam_scorer.forward(log_probs=data, log_probs_length=None)
344
345 # For each sample in the batch
346 nbest_hypotheses = []
347 for beams_idx, beams in enumerate(beams_batch):
348 # For each beam candidate / hypothesis in each sample
349 hypotheses = []
350 for candidate_idx, candidate in enumerate(beams):
351 hypothesis = rnnt_utils.Hypothesis(
352 score=0.0, y_sequence=[], dec_state=None, timestep=[], last_token=None
353 )
354
355 # For subword encoding, NeMo will double encode the subword (multiple tokens) into a
356 # singular unicode id. In doing so, we preserve the semantic of the unicode token, and
357 # compress the size of the final KenLM ARPA / Binary file.
358 # In order to do double encoding, we shift the subword by some token offset.
359 # This step is ignored for character based models.
360 if self.decoding_type == 'subword':
361 pred_token_ids = [ord(c) - self.token_offset for c in candidate[1]]
362 else:
363 # Char models
364 pred_token_ids = [self.vocab_index_map[c] for c in candidate[1]]
365
366 # We preserve the token ids and the score for this hypothesis
367 hypothesis.y_sequence = pred_token_ids
368 hypothesis.score = candidate[0]
369
370 # If alignment must be preserved, we preserve a view of the output logprobs.
371 # Note this view is shared amongst all beams within the sample, be sure to clone it if you
372 # require specific processing for each sample in the beam.
373 # This is done to preserve memory.
374 if self.preserve_alignments:
375 hypothesis.alignments = x[beams_idx][: out_len[beams_idx]]
376
377 hypotheses.append(hypothesis)
378
379 # Wrap the result in NBestHypothesis.
380 hypotheses = rnnt_utils.NBestHypotheses(hypotheses)
381 nbest_hypotheses.append(hypotheses)
382
383 return nbest_hypotheses
384
385 @torch.no_grad()
386 def _pyctcdecode_beam_search(
387 self, x: torch.Tensor, out_len: torch.Tensor
388 ) -> List[Union[rnnt_utils.Hypothesis, rnnt_utils.NBestHypotheses]]:
389 """
390 PyCTCDecode Beam Search Algorithm. Should support Char and Subword models.
391
392 Args:
393 x: Tensor of shape [B, T, V+1], where B is the batch size, T is the maximum sequence length,
394 and V is the vocabulary size. The tensor contains log-probabilities.
395 out_len: Tensor of shape [B], contains lengths of each sequence in the batch.
396
397 Returns:
398 A list of NBestHypotheses objects, one for each sequence in the batch.
399 """
400 if self.compute_timestamps:
401 raise ValueError(
402 f"Beam Search with strategy `{self.search_type}` does not support time stamp calculation!"
403 )
404
405 try:
406 import pyctcdecode
407 except (ImportError, ModuleNotFoundError):
408 raise ImportError(
409 f"Could not load `pyctcdecode` library. Please install it from pip using :\n"
410 f"pip install --upgrade pyctcdecode"
411 )
412
413 if self.pyctcdecode_beam_scorer is None:
414 self.pyctcdecode_beam_scorer = pyctcdecode.build_ctcdecoder(
415 labels=self.vocab, kenlm_model_path=self.kenlm_path, alpha=self.beam_alpha, beta=self.beam_beta
416 ) # type: pyctcdecode.BeamSearchDecoderCTC
417
418 x = x.to('cpu').numpy()
419
420 with typecheck.disable_checks():
421 beams_batch = []
422 for sample_id in range(len(x)):
423 logprobs = x[sample_id, : out_len[sample_id], :]
424 result = self.pyctcdecode_beam_scorer.decode_beams(
425 logprobs,
426 beam_width=self.beam_size,
427 beam_prune_logp=self.pyctcdecode_cfg.beam_prune_logp,
428 token_min_logp=self.pyctcdecode_cfg.token_min_logp,
429 prune_history=self.pyctcdecode_cfg.prune_history,
430 hotwords=self.pyctcdecode_cfg.hotwords,
431 hotword_weight=self.pyctcdecode_cfg.hotword_weight,
432 lm_start_state=None,
433 ) # Output format: text, last_lm_state, text_frames, logit_score, lm_score
434 beams_batch.append(result)
435
436 nbest_hypotheses = []
437 for beams_idx, beams in enumerate(beams_batch):
438 hypotheses = []
439 for candidate_idx, candidate in enumerate(beams):
440 # Candidate = (text, last_lm_state, text_frames, logit_score, lm_score)
441 hypothesis = rnnt_utils.Hypothesis(
442 score=0.0, y_sequence=[], dec_state=None, timestep=[], last_token=None
443 )
444
445 # TODO: Requires token ids to be returned rather than text.
446 if self.decoding_type == 'subword':
447 if self.tokenizer is None:
448 raise ValueError("Tokenizer must be provided for subword decoding. Use set_tokenizer().")
449
450 pred_token_ids = self.tokenizer.text_to_ids(candidate[0])
451 else:
452 if self.vocab is None:
453 raise ValueError("Vocab must be provided for character decoding. Use set_vocabulary().")
454
455 chars = list(candidate[0])
456 pred_token_ids = [self.vocab_index_map[c] for c in chars]
457
458 hypothesis.y_sequence = pred_token_ids
459 hypothesis.text = candidate[0] # text
460 hypothesis.score = candidate[4] # score
461
462 # Inject word level timestamps
463 hypothesis.timestep = candidate[2] # text_frames
464
465 if self.preserve_alignments:
466 hypothesis.alignments = torch.from_numpy(x[beams_idx][: out_len[beams_idx]])
467
468 hypotheses.append(hypothesis)
469
470 hypotheses = rnnt_utils.NBestHypotheses(hypotheses)
471 nbest_hypotheses.append(hypotheses)
472
473 return nbest_hypotheses
474
475 @torch.no_grad()
476 def flashlight_beam_search(
477 self, x: torch.Tensor, out_len: torch.Tensor
478 ) -> List[Union[rnnt_utils.Hypothesis, rnnt_utils.NBestHypotheses]]:
479 """
480 Flashlight Beam Search Algorithm. Should support Char and Subword models.
481
482 Args:
483 x: Tensor of shape [B, T, V+1], where B is the batch size, T is the maximum sequence length,
484 and V is the vocabulary size. The tensor contains log-probabilities.
485 out_len: Tensor of shape [B], contains lengths of each sequence in the batch.
486
487 Returns:
488 A list of NBestHypotheses objects, one for each sequence in the batch.
489 """
490 if self.compute_timestamps:
491 raise ValueError(
492 f"Beam Search with strategy `{self.search_type}` does not support time stamp calculation!"
493 )
494
495 if self.flashlight_beam_scorer is None:
496 # Check for filepath
497 if self.kenlm_path is None or not os.path.exists(self.kenlm_path):
498 raise FileNotFoundError(
499 f"KenLM binary file not found at : {self.kenlm_path}. "
500 f"Please set a valid path in the decoding config."
501 )
502
503 # perform token offset for subword models
504 # if self.decoding_type == 'subword':
505 # vocab = [chr(idx + self.token_offset) for idx in range(len(self.vocab))]
506 # else:
507 # # char models
508 # vocab = self.vocab
509
510 # Must import at runtime to avoid circular dependency due to module level import.
511 from nemo.collections.asr.modules.flashlight_decoder import FlashLightKenLMBeamSearchDecoder
512
513 self.flashlight_beam_scorer = FlashLightKenLMBeamSearchDecoder(
514 lm_path=self.kenlm_path,
515 vocabulary=self.vocab,
516 tokenizer=self.tokenizer,
517 lexicon_path=self.flashlight_cfg.lexicon_path,
518 boost_path=self.flashlight_cfg.boost_path,
519 beam_size=self.beam_size,
520 beam_size_token=self.flashlight_cfg.beam_size_token,
521 beam_threshold=self.flashlight_cfg.beam_threshold,
522 lm_weight=self.beam_alpha,
523 word_score=self.beam_beta,
524 unk_weight=self.flashlight_cfg.unk_weight,
525 sil_weight=self.flashlight_cfg.sil_weight,
526 )
527
528 x = x.to('cpu')
529
530 with typecheck.disable_checks():
531 beams_batch = self.flashlight_beam_scorer.forward(log_probs=x)
532
533 # For each sample in the batch
534 nbest_hypotheses = []
535 for beams_idx, beams in enumerate(beams_batch):
536 # For each beam candidate / hypothesis in each sample
537 hypotheses = []
538 for candidate_idx, candidate in enumerate(beams):
539 hypothesis = rnnt_utils.Hypothesis(
540 score=0.0, y_sequence=[], dec_state=None, timestep=[], last_token=None
541 )
542
543 # We preserve the token ids and the score for this hypothesis
544 hypothesis.y_sequence = candidate['tokens'].tolist()
545 hypothesis.score = candidate['score']
546
547 # If alignment must be preserved, we preserve a view of the output logprobs.
548 # Note this view is shared amongst all beams within the sample, be sure to clone it if you
549 # require specific processing for each sample in the beam.
550 # This is done to preserve memory.
551 if self.preserve_alignments:
552 hypothesis.alignments = x[beams_idx][: out_len[beams_idx]]
553
554 hypotheses.append(hypothesis)
555
556 # Wrap the result in NBestHypothesis.
557 hypotheses = rnnt_utils.NBestHypotheses(hypotheses)
558 nbest_hypotheses.append(hypotheses)
559
560 return nbest_hypotheses
561
562 def set_decoding_type(self, decoding_type: str):
563 super().set_decoding_type(decoding_type)
564
565 # Please check train_kenlm.py in scripts/asr_language_modeling/ to find out why we need
566 # TOKEN_OFFSET for BPE-based models
567 if self.decoding_type == 'subword':
568 self.token_offset = DEFAULT_TOKEN_OFFSET
569
570
571 @dataclass
572 class PyCTCDecodeConfig:
573 # These arguments cannot be imported from pyctcdecode (optional dependency)
574 # Therefore we copy the values explicitly
575 # Taken from pyctcdecode.constant
576 beam_prune_logp: float = -10.0
577 token_min_logp: float = -5.0
578 prune_history: bool = False
579 hotwords: Optional[List[str]] = None
580 hotword_weight: float = 10.0
581
582
583 @dataclass
584 class FlashlightConfig:
585 lexicon_path: Optional[str] = None
586 boost_path: Optional[str] = None
587 beam_size_token: int = 16
588 beam_threshold: float = 20.0
589 unk_weight: float = -math.inf
590 sil_weight: float = 0.0
591
592
593 @dataclass
594 class BeamCTCInferConfig:
595 beam_size: int
596 search_type: str = 'default'
597 preserve_alignments: bool = False
598 compute_timestamps: bool = False
599 return_best_hypothesis: bool = True
600
601 beam_alpha: float = 1.0
602 beam_beta: float = 0.0
603 kenlm_path: Optional[str] = None
604
605 flashlight_cfg: Optional[FlashlightConfig] = FlashlightConfig()
606 pyctcdecode_cfg: Optional[PyCTCDecodeConfig] = PyCTCDecodeConfig()
607
[end of nemo/collections/asr/parts/submodules/ctc_beam_decoding.py]
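`default_beam_search` above relies on a "double encoding" trick for subword models: each subword token id is shifted by `DEFAULT_TOKEN_OFFSET` and encoded as a single unicode character (compressing the KenLM vocabulary), then recovered with `ord()` after decoding. A minimal self-contained sketch of just that round trip — `encode_ids`/`decode_chars` are hypothetical helper names, not NeMo functions:

```python
# Standalone illustration of the subword token-offset "double encoding"
# used by default_beam_search. encode_ids / decode_chars are made-up
# names for this sketch; only the chr/ord shifting mirrors the file above.
DEFAULT_TOKEN_OFFSET = 100


def encode_ids(token_ids):
    # Map each subword token id to a single shifted unicode character.
    return ''.join(chr(idx + DEFAULT_TOKEN_OFFSET) for idx in token_ids)


def decode_chars(text):
    # Invert the mapping to recover the original token ids.
    return [ord(c) - DEFAULT_TOKEN_OFFSET for c in text]


ids = [0, 5, 42]
encoded = encode_ids(ids)          # one character per token id
assert decode_chars(encoded) == ids
```

The offset keeps the encoded characters in a printable unicode range while preserving a one-to-one mapping, so the beam search output string can be losslessly converted back to token ids.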
[start of nemo/collections/asr/parts/submodules/ctc_greedy_decoding.py]
1 # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from dataclasses import dataclass
16 from typing import List, Optional
17
18 import torch
19 from omegaconf import DictConfig, OmegaConf
20
21 from nemo.collections.asr.parts.utils import rnnt_utils
22 from nemo.collections.asr.parts.utils.asr_confidence_utils import ConfidenceMethodConfig, ConfidenceMethodMixin
23 from nemo.core.classes import Typing, typecheck
24 from nemo.core.neural_types import HypothesisType, LengthsType, LogprobsType, NeuralType
25 from nemo.utils import logging
26
27
28 def pack_hypotheses(hypotheses: List[rnnt_utils.Hypothesis], logitlen: torch.Tensor,) -> List[rnnt_utils.Hypothesis]:
29
30 if logitlen is not None:
31 if hasattr(logitlen, 'cpu'):
32 logitlen_cpu = logitlen.to('cpu')
33 else:
34 logitlen_cpu = logitlen
35
36 for idx, hyp in enumerate(hypotheses): # type: rnnt_utils.Hypothesis
37 hyp.y_sequence = torch.tensor(hyp.y_sequence, dtype=torch.long)
38
39 if logitlen is not None:
40 hyp.length = logitlen_cpu[idx]
41
42 if hyp.dec_state is not None:
43 hyp.dec_state = _states_to_device(hyp.dec_state)
44
45 return hypotheses
46
47
48 def _states_to_device(dec_state, device='cpu'):
49 if torch.is_tensor(dec_state):
50 dec_state = dec_state.to(device)
51
52 elif isinstance(dec_state, (list, tuple)):
53 dec_state = tuple(_states_to_device(dec_i, device) for dec_i in dec_state)
54
55 return dec_state
56
57
58 class GreedyCTCInfer(Typing, ConfidenceMethodMixin):
59 """A greedy CTC decoder.
60
61 Provides a common abstraction for sample level and batch level greedy decoding.
62
63 Args:
64 blank_id: int, index of the blank token. Can be 0 or len(vocabulary).
65 preserve_alignments: Bool flag which preserves the history of logprobs generated during
66 decoding (sample / batched). When set to true, the Hypothesis will contain
67 the non-null value for `logprobs` in it. Here, `logprobs` is a torch.Tensor.
68 compute_timestamps: A bool flag, which determines whether to compute the character/subword, or
69 word based timestamp mapping the output log-probabilities to discrete intervals of timestamps.
70 The timestamps will be available in the returned Hypothesis.timestep as a dictionary.
71 preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores
72 generated during decoding. When set to true, the Hypothesis will contain
73 the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of floats.
74 confidence_method_cfg: A dict-like object which contains the method name and settings to compute per-frame
75 confidence scores.
76
77 name: The method name (str).
78 Supported values:
79 - 'max_prob' for using the maximum token probability as a confidence.
80 - 'entropy' for using a normalized entropy of a log-likelihood vector.
81
82 entropy_type: Which type of entropy to use (str). Used if confidence_method_cfg.name is set to `entropy`.
83 Supported values:
84 - 'gibbs' for the (standard) Gibbs entropy. If the alpha (ฮฑ) is provided,
85 the formula is the following: H_ฮฑ = -sum_i((p^ฮฑ_i)*log(p^ฮฑ_i)).
86 Note that for this entropy, the alpha should comply the following inequality:
87 (log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= ฮฑ <= (1+log(V-1))/log(V-1)
88 where V is the model vocabulary size.
89 - 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
90 Tsallis entropy formula is the following: H_ฮฑ = 1/(ฮฑ-1)*(1-sum_i(p^ฮฑ_i)),
91 where ฮฑ is a parameter. When ฮฑ == 1, it works like the Gibbs entropy.
92 More: https://en.wikipedia.org/wiki/Tsallis_entropy
93 - 'renyi' for the Rรฉnyi entropy.
94 Rรฉnyi entropy formula is the following: H_ฮฑ = 1/(1-ฮฑ)*log_2(sum_i(p^ฮฑ_i)),
95 where ฮฑ is a parameter. When ฮฑ == 1, it works like the Gibbs entropy.
96 More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
97
98 alpha: Power scale for logsoftmax (ฮฑ for entropies). Here we restrict it to be > 0.
99 When the alpha equals one, scaling is not applied to 'max_prob',
100 and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
101
102 entropy_norm: A mapping of the entropy value to the interval [0,1].
103 Supported values:
104 - 'lin' for using the linear mapping.
105 - 'exp' for using exponential mapping with linear shift.
106
107 """
108
109 @property
110 def input_types(self):
111 """Returns definitions of module input ports.
112 """
113 # Input can be of dimension -
114 # ('B', 'T', 'D') [Log probs] or ('B', 'T') [Labels]
115
116 return {
117 "decoder_output": NeuralType(None, LogprobsType()),
118 "decoder_lengths": NeuralType(tuple('B'), LengthsType()),
119 }
120
121 @property
122 def output_types(self):
123 """Returns definitions of module output ports.
124 """
125 return {"predictions": [NeuralType(elements_type=HypothesisType())]}
126
127 def __init__(
128 self,
129 blank_id: int,
130 preserve_alignments: bool = False,
131 compute_timestamps: bool = False,
132 preserve_frame_confidence: bool = False,
133 confidence_method_cfg: Optional[DictConfig] = None,
134 ):
135 super().__init__()
136
137 self.blank_id = blank_id
138 self.preserve_alignments = preserve_alignments
139 # we need timestamps to extract non-blank per-frame confidence
140 self.compute_timestamps = compute_timestamps | preserve_frame_confidence
141 self.preserve_frame_confidence = preserve_frame_confidence
142
143 # set confidence calculation method
144 self._init_confidence_method(confidence_method_cfg)
145
146 @typecheck()
147 def forward(
148 self, decoder_output: torch.Tensor, decoder_lengths: torch.Tensor,
149 ):
150 """Returns a list of hypotheses given an input batch of the encoder hidden embedding.
151 Output token is generated auto-repressively.
152
153 Args:
154 decoder_output: A tensor of size (batch, timesteps, features) or (batch, timesteps) (each timestep is a label).
155 decoder_lengths: list of int representing the length of each
156 output sequence.
157
158 Returns:
159 packed list containing batch number of sentences (Hypotheses).
160 """
161 with torch.inference_mode():
162 hypotheses = []
163 # Process each sequence independently
164 prediction_cpu_tensor = decoder_output.cpu()
165
166 if prediction_cpu_tensor.ndim < 2 or prediction_cpu_tensor.ndim > 3:
167 raise ValueError(
168 f"`decoder_output` must be a tensor of shape [B, T] (labels, int) or "
169 f"[B, T, V] (log probs, float). Provided shape = {prediction_cpu_tensor.shape}"
170 )
171
172 # determine type of input - logprobs or labels
173 if prediction_cpu_tensor.ndim == 2: # labels
174 greedy_decode = self._greedy_decode_labels
175 else:
176 greedy_decode = self._greedy_decode_logprobs
177
178 for ind in range(prediction_cpu_tensor.shape[0]):
179 out_len = decoder_lengths[ind] if decoder_lengths is not None else None
180 hypothesis = greedy_decode(prediction_cpu_tensor[ind], out_len)
181 hypotheses.append(hypothesis)
182
183 # Pack results into Hypotheses
184 packed_result = pack_hypotheses(hypotheses, decoder_lengths)
185
186 return (packed_result,)
187
188 @torch.no_grad()
189 def _greedy_decode_logprobs(self, x: torch.Tensor, out_len: torch.Tensor):
190 # x: [T, D]
191 # out_len: [seq_len]
192
193 # Initialize blank state and empty label set in Hypothesis
194 hypothesis = rnnt_utils.Hypothesis(score=0.0, y_sequence=[], dec_state=None, timestep=[], last_token=None)
195 prediction = x.detach().cpu()
196
197 if out_len is not None:
198 prediction = prediction[:out_len]
199
200 prediction_logprobs, prediction_labels = prediction.max(dim=-1)
201
202 non_blank_ids = prediction_labels != self.blank_id
203 hypothesis.y_sequence = prediction_labels.numpy().tolist()
204 hypothesis.score = (prediction_logprobs[non_blank_ids]).sum()
205
206 if self.preserve_alignments:
207 # Preserve the logprobs, as well as labels after argmax
208 hypothesis.alignments = (prediction.clone(), prediction_labels.clone())
209
210 if self.compute_timestamps:
211 hypothesis.timestep = torch.nonzero(non_blank_ids, as_tuple=False)[:, 0].numpy().tolist()
212
213 if self.preserve_frame_confidence:
214 hypothesis.frame_confidence = self._get_confidence(prediction)
215
216 return hypothesis
217
218 @torch.no_grad()
219 def _greedy_decode_labels(self, x: torch.Tensor, out_len: torch.Tensor):
220 # x: [T]
221 # out_len: [seq_len]
222
223 # Initialize blank state and empty label set in Hypothesis
224 hypothesis = rnnt_utils.Hypothesis(score=0.0, y_sequence=[], dec_state=None, timestep=[], last_token=None)
225 prediction_labels = x.detach().cpu()
226
227 if out_len is not None:
228 prediction_labels = prediction_labels[:out_len]
229
230 non_blank_ids = prediction_labels != self.blank_id
231 hypothesis.y_sequence = prediction_labels.numpy().tolist()
232 hypothesis.score = -1.0
233
234 if self.preserve_alignments:
235 raise ValueError("Requested for alignments, but predictions provided were labels, not log probabilities.")
236
237 if self.compute_timestamps:
238 hypothesis.timestep = torch.nonzero(non_blank_ids, as_tuple=False)[:, 0].numpy().tolist()
239
240 if self.preserve_frame_confidence:
241 raise ValueError(
242 "Requested for per-frame confidence, but predictions provided were labels, not log probabilities."
243 )
244
245 return hypothesis
246
247 def __call__(self, *args, **kwargs):
248 return self.forward(*args, **kwargs)
249
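The greedy CTC hypotheses above keep the raw per-frame argmax labels in `y_sequence`; the classic CTC collapse (merge consecutive repeats, then drop blanks) is applied later in NeMo's decoding pipeline. As a hedged, stdlib-only sketch of that rule (the name `ctc_collapse` is illustrative, not part of the NeMo API):

```python
def ctc_collapse(frame_labels, blank_id):
    # Standard CTC rule: merge consecutive repeated labels, then drop blanks.
    collapsed, prev = [], None
    for label in frame_labels:
        if label != prev and label != blank_id:
            collapsed.append(label)
        prev = label
    return collapsed
```

For example, with `blank_id=0`, the frame sequence `[0, 0, 1, 1, 0, 1, 2, 2]` collapses to `[1, 1, 2]` — the blank between the two 1s keeps them as distinct emissions.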
250
251 @dataclass
252 class GreedyCTCInferConfig:
253 preserve_alignments: bool = False
254 compute_timestamps: bool = False
255 preserve_frame_confidence: bool = False
256 confidence_method_cfg: Optional[ConfidenceMethodConfig] = ConfidenceMethodConfig()
257
258 def __post_init__(self):
259 # OmegaConf.structured ensures that post_init check is always executed
260 self.confidence_method_cfg = OmegaConf.structured(
261 self.confidence_method_cfg
262 if isinstance(self.confidence_method_cfg, ConfidenceMethodConfig)
263 else ConfidenceMethodConfig(**self.confidence_method_cfg)
264 )
265
[end of nemo/collections/asr/parts/submodules/ctc_greedy_decoding.py]
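The entropy-based confidence options described in the docstrings above reduce to short closed-form expressions. A minimal stdlib-only sketch of the Gibbs case with linear (`'lin'`) normalization, assuming a plain probability list rather than a log-prob tensor (the function name is illustrative, not NeMo API):

```python
import math

def gibbs_confidence_lin(probs):
    # H = -sum_i(p_i * log(p_i)); 'lin' maps H onto [0, 1] by dividing by
    # log(V), and the confidence is one minus the normalized entropy.
    vocab_size = len(probs)
    entropy = -sum(p * math.log(p) for p in probs if p > 0.0)
    return 1.0 - entropy / math.log(vocab_size)
```

A uniform distribution gives confidence 0 (maximum uncertainty), while a one-hot distribution gives confidence 1.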
[start of nemo/collections/asr/parts/submodules/rnnt_greedy_decoding.py]
1 # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 # Copyright 2017 Johns Hopkins University (Shinji Watanabe)
16 #
17 # Licensed under the Apache License, Version 2.0 (the "License");
18 # you may not use this file except in compliance with the License.
19 # You may obtain a copy of the License at
20 #
21 # http://www.apache.org/licenses/LICENSE-2.0
22 #
23 # Unless required by applicable law or agreed to in writing, software
24 # distributed under the License is distributed on an "AS IS" BASIS,
25 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
26 # See the License for the specific language governing permissions and
27 # limitations under the License.
28
29 from dataclasses import dataclass
30 from typing import List, Optional, Tuple, Union
31
32 import numpy as np
33 import torch
34 from omegaconf import DictConfig, OmegaConf
35
36 from nemo.collections.asr.modules import rnnt_abstract
37 from nemo.collections.asr.parts.utils import rnnt_utils
38 from nemo.collections.asr.parts.utils.asr_confidence_utils import ConfidenceMethodConfig, ConfidenceMethodMixin
39 from nemo.collections.common.parts.rnn import label_collate
40 from nemo.core.classes import Typing, typecheck
41 from nemo.core.neural_types import AcousticEncodedRepresentation, ElementType, HypothesisType, LengthsType, NeuralType
42 from nemo.utils import logging
43
44
45 def pack_hypotheses(hypotheses: List[rnnt_utils.Hypothesis], logitlen: torch.Tensor,) -> List[rnnt_utils.Hypothesis]:
46
47 if hasattr(logitlen, 'cpu'):
48 logitlen_cpu = logitlen.to('cpu')
49 else:
50 logitlen_cpu = logitlen
51
52 for idx, hyp in enumerate(hypotheses): # type: rnnt_utils.Hypothesis
53 hyp.y_sequence = torch.tensor(hyp.y_sequence, dtype=torch.long)
54 hyp.length = logitlen_cpu[idx]
55
56 if hyp.dec_state is not None:
57 hyp.dec_state = _states_to_device(hyp.dec_state)
58
59 return hypotheses
60
61
62 def _states_to_device(dec_state, device='cpu'):
63 if torch.is_tensor(dec_state):
64 dec_state = dec_state.to(device)
65
66 elif isinstance(dec_state, (list, tuple)):
67 dec_state = tuple(_states_to_device(dec_i, device) for dec_i in dec_state)
68
69 return dec_state
70
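`_states_to_device` walks arbitrarily nested tuples/lists of decoder states and moves each tensor leaf. The same recursion can be sketched with a generic callable standing in for `.to(device)` (a toy mirror, not the NeMo function itself):

```python
def map_state_leaves(state, move):
    # Recurse through nested tuples/lists, apply `move` to every leaf,
    # and return tuples at each level, just like _states_to_device does.
    if isinstance(state, (list, tuple)):
        return tuple(map_state_leaves(s, move) for s in state)
    return move(state)
```

Note that lists are returned as tuples, matching the behavior of the real function above.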
71
72 class _GreedyRNNTInfer(Typing, ConfidenceMethodMixin):
73 """A greedy transducer decoder.
74
75 Provides a common abstraction for sample level and batch level greedy decoding.
76
77 Args:
78 decoder_model: rnnt_utils.AbstractRNNTDecoder implementation.
79 joint_model: rnnt_utils.AbstractRNNTJoint implementation.
80 blank_index: int index of the blank token. Can be 0 or len(vocabulary).
81 max_symbols_per_step: Optional int. The maximum number of symbols that can be added
82 to a sequence in a single time step; if set to None then there is
83 no limit.
84 preserve_alignments: Bool flag which preserves the history of alignments generated during
85 greedy decoding (sample / batched). When set to true, the Hypothesis will contain
86 the non-null value for `alignments` in it. Here, `alignments` is a List of List of
87 Tuple(Tensor (of length V + 1), Tensor(scalar, label after argmax)).
88
89 The length of the list corresponds to the Acoustic Length (T).
90 Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more targets from a vocabulary.
91 U is the number of target tokens for the current timestep Ti.
92 preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores generated
93 during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
94 the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of List of floats.
95
96 The length of the list corresponds to the Acoustic Length (T).
97 Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more confidence scores.
98 U is the number of target tokens for the current timestep Ti.
99 confidence_method_cfg: A dict-like object which contains the method name and settings to compute per-frame
100 confidence scores.
101
102 name: The method name (str).
103 Supported values:
104 - 'max_prob' for using the maximum token probability as a confidence.
105 - 'entropy' for using a normalized entropy of a log-likelihood vector.
106
107 entropy_type: Which type of entropy to use (str). Used if confidence_method_cfg.name is set to `entropy`.
108 Supported values:
109 - 'gibbs' for the (standard) Gibbs entropy. If the alpha (ฮฑ) is provided,
110 the formula is the following: H_ฮฑ = -sum_i((p^ฮฑ_i)*log(p^ฮฑ_i)).
111 Note that for this entropy, the alpha should satisfy the following inequality:
112 (log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= ฮฑ <= (1+log(V-1))/log(V-1)
113 where V is the model vocabulary size.
114 - 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
115 Tsallis entropy formula is the following: H_ฮฑ = 1/(ฮฑ-1)*(1-sum_i(p^ฮฑ_i)),
116 where ฮฑ is a parameter. When ฮฑ == 1, it works like the Gibbs entropy.
117 More: https://en.wikipedia.org/wiki/Tsallis_entropy
118 - 'renyi' for the Rรฉnyi entropy.
119 Rรฉnyi entropy formula is the following: H_ฮฑ = 1/(1-ฮฑ)*log_2(sum_i(p^ฮฑ_i)),
120 where ฮฑ is a parameter. When ฮฑ == 1, it works like the Gibbs entropy.
121 More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
122
123 alpha: Power scale for logsoftmax (ฮฑ for entropies). Here we restrict it to be > 0.
124 When the alpha equals one, scaling is not applied to 'max_prob',
125 and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
126
127 entropy_norm: A mapping of the entropy value to the interval [0,1].
128 Supported values:
129 - 'lin' for using the linear mapping.
130 - 'exp' for using exponential mapping with linear shift.
131 """
132
133 @property
134 def input_types(self):
135 """Returns definitions of module input ports.
136 """
137 return {
138 "encoder_output": NeuralType(('B', 'D', 'T'), AcousticEncodedRepresentation()),
139 "encoded_lengths": NeuralType(tuple('B'), LengthsType()),
140 "partial_hypotheses": [NeuralType(elements_type=HypothesisType(), optional=True)], # must always be last
141 }
142
143 @property
144 def output_types(self):
145 """Returns definitions of module output ports.
146 """
147 return {"predictions": [NeuralType(elements_type=HypothesisType())]}
148
149 def __init__(
150 self,
151 decoder_model: rnnt_abstract.AbstractRNNTDecoder,
152 joint_model: rnnt_abstract.AbstractRNNTJoint,
153 blank_index: int,
154 max_symbols_per_step: Optional[int] = None,
155 preserve_alignments: bool = False,
156 preserve_frame_confidence: bool = False,
157 confidence_method_cfg: Optional[DictConfig] = None,
158 ):
159 super().__init__()
160 self.decoder = decoder_model
161 self.joint = joint_model
162
163 self._blank_index = blank_index
164 self._SOS = blank_index # Start-of-Sequence token, represented by the blank index
165 self.max_symbols = max_symbols_per_step
166 self.preserve_alignments = preserve_alignments
167 self.preserve_frame_confidence = preserve_frame_confidence
168
169 # set confidence calculation method
170 self._init_confidence_method(confidence_method_cfg)
171
172 def __call__(self, *args, **kwargs):
173 return self.forward(*args, **kwargs)
174
175 @torch.no_grad()
176 def _pred_step(
177 self,
178 label: Union[torch.Tensor, int],
179 hidden: Optional[torch.Tensor],
180 add_sos: bool = False,
181 batch_size: Optional[int] = None,
182 ) -> Tuple[torch.Tensor, torch.Tensor]:
183 """
184 Common prediction step based on the AbstractRNNTDecoder implementation.
185
186 Args:
187 label: (int/torch.Tensor): Label or "Start-of-Signal" token.
188 hidden: (Optional torch.Tensor): RNN State vector
189 add_sos (bool): Whether to add a zero vector at the beginning as "start of sentence" token.
190 batch_size: Batch size of the output tensor.
191
192 Returns:
193 g: (B, U, H) if add_sos is false, else (B, U + 1, H)
194 hid: (h, c) where h is the final sequence hidden state and c is
195 the final cell state:
196 h (tensor), shape (L, B, H)
197 c (tensor), shape (L, B, H)
198 """
199 if isinstance(label, torch.Tensor):
200 # label: [batch, 1]
201 if label.dtype != torch.long:
202 label = label.long()
203
204 else:
205 # Label is an integer
206 if label == self._SOS:
207 return self.decoder.predict(None, hidden, add_sos=add_sos, batch_size=batch_size)
208
209 label = label_collate([[label]])
210
211 # output: [B, 1, K]
212 return self.decoder.predict(label, hidden, add_sos=add_sos, batch_size=batch_size)
213
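`_pred_step` wraps a bare integer label as `label_collate([[label]])` before handing it to the prediction network. Assuming `label_collate` pads ragged per-sample label lists into a rectangular batch (this toy version returns plain lists rather than a LongTensor, and the name is illustrative):

```python
def label_collate_sketch(batch, pad_value=0):
    # Pad each sample's label list up to the longest length in the batch.
    max_len = max(len(labels) for labels in batch)
    return [list(labels) + [pad_value] * (max_len - len(labels)) for labels in batch]
```

For the single-label case used in `_pred_step`, `[[label]]` is already rectangular and passes through unchanged.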
214 def _joint_step(self, enc, pred, log_normalize: Optional[bool] = None):
215 """
216 Common joint step based on AbstractRNNTJoint implementation.
217
218 Args:
219 enc: Output of the Encoder model. A torch.Tensor of shape [B, 1, H1]
220 pred: Output of the Decoder model. A torch.Tensor of shape [B, 1, H2]
221 log_normalize: Whether to log normalize or not. None will log normalize only for CPU.
222
223 Returns:
224 logits of shape (B, T=1, U=1, V + 1)
225 """
226 with torch.no_grad():
227 logits = self.joint.joint(enc, pred)
228
229 if log_normalize is None:
230 if not logits.is_cuda: # Use log softmax only if on CPU
231 logits = logits.log_softmax(dim=len(logits.shape) - 1)
232 else:
233 if log_normalize:
234 logits = logits.log_softmax(dim=len(logits.shape) - 1)
235
236 return logits
237
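`_joint_step` log-normalizes only on CPU by default: greedy argmax is invariant under `log_softmax` (a monotonic shift per timestep), so normalization can be skipped on GPU and is only required when the scores themselves matter, e.g. for per-frame confidence. A numerically stable, stdlib-only sketch of the normalization:

```python
import math

def log_softmax(logits):
    # Subtract the max before exponentiating for numerical stability,
    # then shift by the log-sum-exp so the outputs sum to 1 in prob space.
    m = max(logits)
    log_sum = m + math.log(sum(math.exp(x - m) for x in logits))
    return [x - log_sum for x in logits]
```

The index of the maximum value is the same before and after normalization, which is why the GPU path can skip it.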
238
239 class GreedyRNNTInfer(_GreedyRNNTInfer):
240 """A greedy transducer decoder.
241
242 Sequence level greedy decoding, performed auto-regressively.
243
244 Args:
245 decoder_model: rnnt_utils.AbstractRNNTDecoder implementation.
246 joint_model: rnnt_utils.AbstractRNNTJoint implementation.
247 blank_index: int index of the blank token. Can be 0 or len(vocabulary).
248 max_symbols_per_step: Optional int. The maximum number of symbols that can be added
249 to a sequence in a single time step; if set to None then there is
250 no limit.
251 preserve_alignments: Bool flag which preserves the history of alignments generated during
252 greedy decoding (sample / batched). When set to true, the Hypothesis will contain
253 the non-null value for `alignments` in it. Here, `alignments` is a List of List of
254 Tuple(Tensor (of length V + 1), Tensor(scalar, label after argmax)).
255
256 The length of the list corresponds to the Acoustic Length (T).
257 Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more targets from a vocabulary.
258 U is the number of target tokens for the current timestep Ti.
259 preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores generated
260 during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
261 the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of List of floats.
262
263 The length of the list corresponds to the Acoustic Length (T).
264 Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more confidence scores.
265 U is the number of target tokens for the current timestep Ti.
266 confidence_method_cfg: A dict-like object which contains the method name and settings to compute per-frame
267 confidence scores.
268
269 name: The method name (str).
270 Supported values:
271 - 'max_prob' for using the maximum token probability as a confidence.
272 - 'entropy' for using a normalized entropy of a log-likelihood vector.
273
274 entropy_type: Which type of entropy to use (str). Used if confidence_method_cfg.name is set to `entropy`.
275 Supported values:
276 - 'gibbs' for the (standard) Gibbs entropy. If the alpha (ฮฑ) is provided,
277 the formula is the following: H_ฮฑ = -sum_i((p^ฮฑ_i)*log(p^ฮฑ_i)).
278 Note that for this entropy, the alpha should satisfy the following inequality:
279 (log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= ฮฑ <= (1+log(V-1))/log(V-1)
280 where V is the model vocabulary size.
281 - 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
282 Tsallis entropy formula is the following: H_ฮฑ = 1/(ฮฑ-1)*(1-sum_i(p^ฮฑ_i)),
283 where ฮฑ is a parameter. When ฮฑ == 1, it works like the Gibbs entropy.
284 More: https://en.wikipedia.org/wiki/Tsallis_entropy
285 - 'renyi' for the Rรฉnyi entropy.
286 Rรฉnyi entropy formula is the following: H_ฮฑ = 1/(1-ฮฑ)*log_2(sum_i(p^ฮฑ_i)),
287 where ฮฑ is a parameter. When ฮฑ == 1, it works like the Gibbs entropy.
288 More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
289
290 alpha: Power scale for logsoftmax (ฮฑ for entropies). Here we restrict it to be > 0.
291 When the alpha equals one, scaling is not applied to 'max_prob',
292 and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
293
294 entropy_norm: A mapping of the entropy value to the interval [0,1].
295 Supported values:
296 - 'lin' for using the linear mapping.
297 - 'exp' for using exponential mapping with linear shift.
298 """
299
300 def __init__(
301 self,
302 decoder_model: rnnt_abstract.AbstractRNNTDecoder,
303 joint_model: rnnt_abstract.AbstractRNNTJoint,
304 blank_index: int,
305 max_symbols_per_step: Optional[int] = None,
306 preserve_alignments: bool = False,
307 preserve_frame_confidence: bool = False,
308 confidence_method_cfg: Optional[DictConfig] = None,
309 ):
310 super().__init__(
311 decoder_model=decoder_model,
312 joint_model=joint_model,
313 blank_index=blank_index,
314 max_symbols_per_step=max_symbols_per_step,
315 preserve_alignments=preserve_alignments,
316 preserve_frame_confidence=preserve_frame_confidence,
317 confidence_method_cfg=confidence_method_cfg,
318 )
319
320 @typecheck()
321 def forward(
322 self,
323 encoder_output: torch.Tensor,
324 encoded_lengths: torch.Tensor,
325 partial_hypotheses: Optional[List[rnnt_utils.Hypothesis]] = None,
326 ):
327 """Returns a list of hypotheses given an input batch of the encoder hidden embedding.
328 Output token is generated auto-regressively.
329
330 Args:
331 encoder_output: A tensor of size (batch, features, timesteps).
332 encoded_lengths: list of int representing the length of each
333 output sequence.
334
335 Returns:
336 packed list containing batch number of sentences (Hypotheses).
337 """
338 # Preserve decoder and joint training state
339 decoder_training_state = self.decoder.training
340 joint_training_state = self.joint.training
341
342 with torch.inference_mode():
343 # Apply optional preprocessing
344 encoder_output = encoder_output.transpose(1, 2) # (B, T, D)
345
346 self.decoder.eval()
347 self.joint.eval()
348
349 hypotheses = []
350 # Process each sequence independently
351 with self.decoder.as_frozen(), self.joint.as_frozen():
352 for batch_idx in range(encoder_output.size(0)):
353 inseq = encoder_output[batch_idx, :, :].unsqueeze(1) # [T, 1, D]
354 logitlen = encoded_lengths[batch_idx]
355
356 partial_hypothesis = partial_hypotheses[batch_idx] if partial_hypotheses is not None else None
357 hypothesis = self._greedy_decode(inseq, logitlen, partial_hypotheses=partial_hypothesis)
358 hypotheses.append(hypothesis)
359
360 # Pack results into Hypotheses
361 packed_result = pack_hypotheses(hypotheses, encoded_lengths)
362
363 self.decoder.train(decoder_training_state)
364 self.joint.train(joint_training_state)
365
366 return (packed_result,)
367
368 @torch.no_grad()
369 def _greedy_decode(
370 self, x: torch.Tensor, out_len: torch.Tensor, partial_hypotheses: Optional[rnnt_utils.Hypothesis] = None
371 ):
372 # x: [T, 1, D]
373 # out_len: [seq_len]
374
375 # Initialize blank state and empty label set in Hypothesis
376 hypothesis = rnnt_utils.Hypothesis(score=0.0, y_sequence=[], dec_state=None, timestep=[], last_token=None)
377
378 if partial_hypotheses is not None:
379 hypothesis.last_token = partial_hypotheses.last_token
380 hypothesis.y_sequence = (
381 partial_hypotheses.y_sequence.cpu().tolist()
382 if isinstance(partial_hypotheses.y_sequence, torch.Tensor)
383 else partial_hypotheses.y_sequence
384 )
385 if partial_hypotheses.dec_state is not None:
386 hypothesis.dec_state = self.decoder.batch_concat_states([partial_hypotheses.dec_state])
387 hypothesis.dec_state = _states_to_device(hypothesis.dec_state, x.device)
388
389 if self.preserve_alignments:
390 # Alignments is a 2-dimensional dangling list representing T x U
391 hypothesis.alignments = [[]]
392
393 if self.preserve_frame_confidence:
394 hypothesis.frame_confidence = [[]]
395
396 # For timestep t in X_t
397 for time_idx in range(out_len):
398 # Extract encoder embedding at timestep t
399 # f = x[time_idx, :, :].unsqueeze(0) # [1, 1, D]
400 f = x.narrow(dim=0, start=time_idx, length=1)
401
402 # Setup exit flags and counter
403 not_blank = True
404 symbols_added = 0
405 # While blank is not predicted and we haven't run out of max symbols per timestep
406 while not_blank and (self.max_symbols is None or symbols_added < self.max_symbols):
407 # In the first timestep, we initialize the network with RNNT Blank
408 # In later timesteps, we provide previous predicted label as input.
409 if hypothesis.last_token is None and hypothesis.dec_state is None:
410 last_label = self._SOS
411 else:
412 last_label = label_collate([[hypothesis.last_token]])
413
414 # Perform prediction network and joint network steps.
415 g, hidden_prime = self._pred_step(last_label, hypothesis.dec_state)
416 # If preserving per-frame confidence, log_normalize must be true
417 logp = self._joint_step(f, g, log_normalize=True if self.preserve_frame_confidence else None)[
418 0, 0, 0, :
419 ]
420
421 del g
422
423 # torch.max(0) op doesn't exist for FP16.
424 if logp.dtype != torch.float32:
425 logp = logp.float()
426
427 # get index k, of max prob
428 v, k = logp.max(0)
429 k = k.item() # K is the label at timestep t_s in inner loop, s >= 0.
430
431 if self.preserve_alignments:
432 # insert logprobs into last timestep
433 hypothesis.alignments[-1].append((logp.to('cpu'), torch.tensor(k, dtype=torch.int32)))
434
435 if self.preserve_frame_confidence:
436 # insert confidence into last timestep
437 hypothesis.frame_confidence[-1].append(self._get_confidence(logp))
438
439 del logp
440
441 # If blank token is predicted, exit inner loop, move onto next timestep t
442 if k == self._blank_index:
443 not_blank = False
444 else:
445 # Append token to label set, update RNN state.
446 hypothesis.y_sequence.append(k)
447 hypothesis.score += float(v)
448 hypothesis.timestep.append(time_idx)
449 hypothesis.dec_state = hidden_prime
450 hypothesis.last_token = k
451
452 # Increment token counter.
453 symbols_added += 1
454
455 if self.preserve_alignments:
456 # convert Ti-th logits into a torch array
457 hypothesis.alignments.append([]) # blank buffer for next timestep
458
459 if self.preserve_frame_confidence:
460 hypothesis.frame_confidence.append([]) # blank buffer for next timestep
461
462 # Remove trailing empty list of Alignments
463 if self.preserve_alignments:
464 if len(hypothesis.alignments[-1]) == 0:
465 del hypothesis.alignments[-1]
466
467 # Remove trailing empty list of per-frame confidence
468 if self.preserve_frame_confidence:
469 if len(hypothesis.frame_confidence[-1]) == 0:
470 del hypothesis.frame_confidence[-1]
471
472 # Unpack the hidden states
473 hypothesis.dec_state = self.decoder.batch_select_state(hypothesis.dec_state, 0)
474
475 return hypothesis
476
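The per-sample loop above has a simple skeleton: an outer loop over frames and an inner loop that keeps emitting symbols until blank is predicted or `max_symbols` is hit. A toy, stdlib-only rendering with a stand-in `joint` callable (the real step runs the prediction and joint networks; all names here are illustrative):

```python
def rnnt_greedy_sketch(joint, num_frames, blank_id, max_symbols=10):
    # joint(t, y_sequence, state) -> (label, new_state): a toy stand-in
    # for the prediction-network + joint-network step.
    y_sequence, state = [], None
    for t in range(num_frames):
        for _ in range(max_symbols):
            label, state = joint(t, y_sequence, state)
            if label == blank_id:  # blank ends emission for this frame
                break
            y_sequence.append(label)
    return y_sequence

def make_one_label_per_frame_joint(labels, blank_id):
    # Toy joint: emit one non-blank label per frame, then blank.
    def joint(t, y_sequence, state):
        if state == t:  # already emitted for frame t -> blank
            return blank_id, state
        return labels[t], t
    return joint
```

With the toy joint, each frame contributes exactly one label, so decoding `num_frames` frames reproduces the label list.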
477
478 class GreedyBatchedRNNTInfer(_GreedyRNNTInfer):
479 """A batch level greedy transducer decoder.
480
481 Batch level greedy decoding, performed auto-regressively.
482
483 Args:
484 decoder_model: rnnt_utils.AbstractRNNTDecoder implementation.
485 joint_model: rnnt_utils.AbstractRNNTJoint implementation.
486 blank_index: int index of the blank token. Can be 0 or len(vocabulary).
487 max_symbols_per_step: Optional int. The maximum number of symbols that can be added
488 to a sequence in a single time step; if set to None then there is
489 no limit.
490 preserve_alignments: Bool flag which preserves the history of alignments generated during
491 greedy decoding (sample / batched). When set to true, the Hypothesis will contain
492 the non-null value for `alignments` in it. Here, `alignments` is a List of List of
493 Tuple(Tensor (of length V + 1), Tensor(scalar, label after argmax)).
494
495 The length of the list corresponds to the Acoustic Length (T).
496 Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more targets from a vocabulary.
497 U is the number of target tokens for the current timestep Ti.
498 preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores generated
499 during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
500 the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of List of floats.
501
502 The length of the list corresponds to the Acoustic Length (T).
503 Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more confidence scores.
504 U is the number of target tokens for the current timestep Ti.
505 confidence_method_cfg: A dict-like object which contains the method name and settings to compute per-frame
506 confidence scores.
507
508 name: The method name (str).
509 Supported values:
510 - 'max_prob' for using the maximum token probability as a confidence.
511 - 'entropy' for using a normalized entropy of a log-likelihood vector.
512
513 entropy_type: Which type of entropy to use (str). Used if confidence_method_cfg.name is set to `entropy`.
514 Supported values:
515 - 'gibbs' for the (standard) Gibbs entropy. If the alpha (ฮฑ) is provided,
516 the formula is the following: H_ฮฑ = -sum_i((p^ฮฑ_i)*log(p^ฮฑ_i)).
517 Note that for this entropy, the alpha should satisfy the following inequality:
518 (log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= ฮฑ <= (1+log(V-1))/log(V-1)
519 where V is the model vocabulary size.
520 - 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
521 Tsallis entropy formula is the following: H_ฮฑ = 1/(ฮฑ-1)*(1-sum_i(p^ฮฑ_i)),
522 where ฮฑ is a parameter. When ฮฑ == 1, it works like the Gibbs entropy.
523 More: https://en.wikipedia.org/wiki/Tsallis_entropy
524 - 'renyi' for the Rรฉnyi entropy.
525 Rรฉnyi entropy formula is the following: H_ฮฑ = 1/(1-ฮฑ)*log_2(sum_i(p^ฮฑ_i)),
526 where ฮฑ is a parameter. When ฮฑ == 1, it works like the Gibbs entropy.
527 More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
528
529 alpha: Power scale for logsoftmax (ฮฑ for entropies). Here we restrict it to be > 0.
530 When the alpha equals one, scaling is not applied to 'max_prob',
531 and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
532
533 entropy_norm: A mapping of the entropy value to the interval [0,1].
534 Supported values:
535 - 'lin' for using the linear mapping.
536 - 'exp' for using exponential mapping with linear shift.
537 """
538
539 def __init__(
540 self,
541 decoder_model: rnnt_abstract.AbstractRNNTDecoder,
542 joint_model: rnnt_abstract.AbstractRNNTJoint,
543 blank_index: int,
544 max_symbols_per_step: Optional[int] = None,
545 preserve_alignments: bool = False,
546 preserve_frame_confidence: bool = False,
547 confidence_method_cfg: Optional[DictConfig] = None,
548 ):
549 super().__init__(
550 decoder_model=decoder_model,
551 joint_model=joint_model,
552 blank_index=blank_index,
553 max_symbols_per_step=max_symbols_per_step,
554 preserve_alignments=preserve_alignments,
555 preserve_frame_confidence=preserve_frame_confidence,
556 confidence_method_cfg=confidence_method_cfg,
557 )
558
559 # Depending on the availability of `blank_as_pad` support,
560 # switch to the more efficient batch decoding technique
561 if self.decoder.blank_as_pad:
562 self._greedy_decode = self._greedy_decode_blank_as_pad
563 else:
564 self._greedy_decode = self._greedy_decode_masked
565
566 @typecheck()
567 def forward(
568 self,
569 encoder_output: torch.Tensor,
570 encoded_lengths: torch.Tensor,
571 partial_hypotheses: Optional[List[rnnt_utils.Hypothesis]] = None,
572 ):
573 """Returns a list of hypotheses given an input batch of the encoder hidden embedding.
574 Output token is generated auto-regressively.
575
576 Args:
577 encoder_output: A tensor of size (batch, features, timesteps).
578 encoded_lengths: list of int representing the length of each
579 output sequence.
580
581 Returns:
582 packed list containing batch number of sentences (Hypotheses).
583 """
584 # Preserve decoder and joint training state
585 decoder_training_state = self.decoder.training
586 joint_training_state = self.joint.training
587
588 with torch.inference_mode():
589 # Apply optional preprocessing
590 encoder_output = encoder_output.transpose(1, 2) # (B, T, D)
591 logitlen = encoded_lengths
592
593 self.decoder.eval()
594 self.joint.eval()
595
596 with self.decoder.as_frozen(), self.joint.as_frozen():
597 inseq = encoder_output # [B, T, D]
598 hypotheses = self._greedy_decode(
599 inseq, logitlen, device=inseq.device, partial_hypotheses=partial_hypotheses
600 )
601
602 # Pack the hypotheses results
603 packed_result = pack_hypotheses(hypotheses, logitlen)
604
605 self.decoder.train(decoder_training_state)
606 self.joint.train(joint_training_state)
607
608 return (packed_result,)
609
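The batched decoder below replaces the per-sample inner loop with a boolean `blank_mask`: a sample stops emitting for the current frame once it is past its own sequence length or has predicted blank, and the mask is OR-accumulated across inner-loop iterations (`bitwise_or_`). A stdlib sketch of one accumulation step (illustrative names only):

```python
def update_blank_mask(blank_mask, predictions, time_idx, out_len, blank_id):
    # A sample is "done" for this frame if it was already done, is past
    # its own sequence length, or has just predicted blank.
    return [
        done or time_idx >= length or label == blank_id
        for done, length, label in zip(blank_mask, out_len, predictions)
    ]
```

The inner loop for a frame stops once every entry is True, which is what the `torch.all(blank_mask)` check below implements.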
610 def _greedy_decode_blank_as_pad(
611 self,
612 x: torch.Tensor,
613 out_len: torch.Tensor,
614 device: torch.device,
615 partial_hypotheses: Optional[List[rnnt_utils.Hypothesis]] = None,
616 ):
617 if partial_hypotheses is not None:
618 raise NotImplementedError("`partial_hypotheses` is not supported")
619
620 with torch.inference_mode():
621 # x: [B, T, D]
622 # out_len: [B]
623 # device: torch.device
624
625 # Initialize list of Hypothesis
626 batchsize = x.shape[0]
627 hypotheses = [
628 rnnt_utils.Hypothesis(score=0.0, y_sequence=[], timestep=[], dec_state=None) for _ in range(batchsize)
629 ]
630
631 # Initialize Hidden state matrix (shared by entire batch)
632 hidden = None
633
634 # If alignments need to be preserved, register a dangling list to hold the values
635 if self.preserve_alignments:
636 # alignments is a 3-dimensional dangling list representing B x T x U
637 for hyp in hypotheses:
638 hyp.alignments = [[]]
639
640 # If confidence scores need to be preserved, register a dangling list to hold the values
641 if self.preserve_frame_confidence:
642 # frame_confidence is a 3-dimensional dangling list representing B x T x U
643 for hyp in hypotheses:
644 hyp.frame_confidence = [[]]
645
646 # Last Label buffer + Last Label without blank buffer
647 # batch level equivalent of the last_label
648 last_label = torch.full([batchsize, 1], fill_value=self._blank_index, dtype=torch.long, device=device)
649
650 # Mask buffers
651 blank_mask = torch.full([batchsize], fill_value=0, dtype=torch.bool, device=device)
652
653 # Get max sequence length
654 max_out_len = out_len.max()
655 for time_idx in range(max_out_len):
656 f = x.narrow(dim=1, start=time_idx, length=1) # [B, 1, D]
657
658 # Prepare t timestamp batch variables
659 not_blank = True
660 symbols_added = 0
661
662 # Reset blank mask
663 blank_mask.mul_(False)
664
665 # Update blank mask with time mask
666 # Batch: [B, T, D], but Bi may have seq len < max(seq_lens_in_batch)
667 # Forcibly mask with "blank" tokens, for all samples where the current time step >= seq_len
668 blank_mask = time_idx >= out_len
669 # Start inner loop
670 while not_blank and (self.max_symbols is None or symbols_added < self.max_symbols):
671 # Batch prediction and joint network steps
672 # If very first prediction step, submit SOS tag (blank) to pred_step.
673 # This feeds a zero tensor as input to AbstractRNNTDecoder to prime the state
674 if time_idx == 0 and symbols_added == 0 and hidden is None:
675 g, hidden_prime = self._pred_step(self._SOS, hidden, batch_size=batchsize)
676 else:
677 # Perform batch step prediction of decoder, getting new states and scores ("g")
678 g, hidden_prime = self._pred_step(last_label, hidden, batch_size=batchsize)
679
680 # Batched joint step - Output = [B, V + 1]
681 # If preserving per-frame confidence, log_normalize must be true
682 logp = self._joint_step(f, g, log_normalize=True if self.preserve_frame_confidence else None)[
683 :, 0, 0, :
684 ]
685
686 if logp.dtype != torch.float32:
687 logp = logp.float()
688
689 # Get index k, of max prob for batch
690 v, k = logp.max(1)
691 del g
692
693 # Update blank mask with current predicted blanks
694 # This is accumulating blanks over all time steps T and all target steps min(max_symbols, U)
695 k_is_blank = k == self._blank_index
696 blank_mask.bitwise_or_(k_is_blank)
697 all_blanks = torch.all(blank_mask)
698
699 del k_is_blank
700
701 # If preserving alignments, check if sequence length of sample has been reached
702 # before adding alignment
703 if self.preserve_alignments:
704 # Insert logprobs into last timestep per sample
705 logp_vals = logp.to('cpu')
706 logp_ids = logp_vals.max(1)[1]
707 for batch_idx, is_blank in enumerate(blank_mask):
708 # we only want to update non-blanks, unless we are at the last step in the loop where
709 # all elements produced blanks, otherwise there will be duplicate predictions
710 # saved in alignments
711 if time_idx < out_len[batch_idx] and (all_blanks or not is_blank):
712 hypotheses[batch_idx].alignments[-1].append(
713 (logp_vals[batch_idx], logp_ids[batch_idx])
714 )
715 del logp_vals
716
717 # If preserving per-frame confidence, check if sequence length of sample has been reached
718 # before adding confidence scores
719 if self.preserve_frame_confidence:
720 # Insert probabilities into last timestep per sample
721 confidence = self._get_confidence(logp)
722 for batch_idx, is_blank in enumerate(blank_mask):
723 if time_idx < out_len[batch_idx] and (all_blanks or not is_blank):
724 hypotheses[batch_idx].frame_confidence[-1].append(confidence[batch_idx])
725 del logp
726
727 # If all samples predict / have predicted prior blanks, exit loop early
728 # This is equivalent to if single sample predicted k
729 if all_blanks:
730 not_blank = False
731 else:
732 # Collect batch indices where blanks occurred now/past
733 blank_indices = (blank_mask == 1).nonzero(as_tuple=False)
734
735 # Recover prior state for all samples which predicted blank now/past
736 if hidden is not None:
737 # LSTM has 2 states
738 hidden_prime = self.decoder.batch_copy_states(hidden_prime, hidden, blank_indices)
739
740 elif len(blank_indices) > 0 and hidden is None:
741 # Reset state if there were some blank and other non-blank predictions in batch
742 # Original state is filled with zeros so we just multiply
743 # LSTM has 2 states
744 hidden_prime = self.decoder.batch_copy_states(hidden_prime, None, blank_indices, value=0.0)
745
746 # Recover prior predicted label for all samples which predicted blank now/past
747 k[blank_indices] = last_label[blank_indices, 0]
748
749 # Update new label and hidden state for next iteration
750 last_label = k.clone().view(-1, 1)
751 hidden = hidden_prime
752
753 # Update predicted labels, accounting for time mask
754 # If blank was predicted even once, now or in the past,
755 # Force the current predicted label to also be blank
756 # This ensures that blanks propagate across all timesteps
757 # once they have occurred (normally stopping condition of sample level loop).
758 for kidx, ki in enumerate(k):
759 if blank_mask[kidx] == 0:
760 hypotheses[kidx].y_sequence.append(ki)
761 hypotheses[kidx].timestep.append(time_idx)
762 hypotheses[kidx].score += float(v[kidx])
763 symbols_added += 1
764
765 # If preserving alignments, convert the current Uj alignments into a torch.Tensor
766 # Then preserve U at current timestep Ti
767 # Finally, forward the timestep history to Ti+1 for that sample
768 # All of this should only be done iff the current time index <= sample-level AM length.
769 # Otherwise ignore and move to next sample / next timestep.
770 if self.preserve_alignments:
771
772 # convert Ti-th logits into a torch array
773 for batch_idx in range(batchsize):
774
775 # this checks if current timestep <= sample-level AM length
776 # If current timestep > sample-level AM length, no alignments will be added
777 # Therefore the list of Uj alignments is empty here.
778 if len(hypotheses[batch_idx].alignments[-1]) > 0:
779 hypotheses[batch_idx].alignments.append([]) # blank buffer for next timestep
780
781 # Do the same if preserving per-frame confidence
782 if self.preserve_frame_confidence:
783
784 for batch_idx in range(batchsize):
785 if len(hypotheses[batch_idx].frame_confidence[-1]) > 0:
786 hypotheses[batch_idx].frame_confidence.append([]) # blank buffer for next timestep
787
788 # Remove trailing empty list of alignments at T_{am-len} x Uj
789 if self.preserve_alignments:
790 for batch_idx in range(batchsize):
791 if len(hypotheses[batch_idx].alignments[-1]) == 0:
792 del hypotheses[batch_idx].alignments[-1]
793
794 # Remove trailing empty list of confidence scores at T_{am-len} x Uj
795 if self.preserve_frame_confidence:
796 for batch_idx in range(batchsize):
797 if len(hypotheses[batch_idx].frame_confidence[-1]) == 0:
798 del hypotheses[batch_idx].frame_confidence[-1]
799
800 # Preserve states
801 for batch_idx in range(batchsize):
802 hypotheses[batch_idx].dec_state = self.decoder.batch_select_state(hidden, batch_idx)
803
804 return hypotheses
805
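The blank-mask bookkeeping in the loop above can be sketched in isolation. This is an illustrative, plain-Python sketch (the helper name `step_blank_mask` is made up for this example, not part of the decoder): the mask starts as the time mask, then accumulates blank predictions with a logical OR, and the inner loop exits once every sample is blank.

```python
# Standalone sketch (illustrative only) of the blank-mask bookkeeping used by
# the batched greedy loop: samples whose encoder output is exhausted are
# forced blank, and any sample that predicts blank stays blank afterwards.

def step_blank_mask(time_idx, out_len, predictions, blank_id):
    """Return (blank_mask, all_blank) for one inner-loop step."""
    # Time mask: sample i is blank if its sequence has already ended.
    blank_mask = [time_idx >= length for length in out_len]
    # Accumulate blanks predicted at this step (bitwise_or_ in the tensor code).
    blank_mask = [m or (p == blank_id) for m, p in zip(blank_mask, predictions)]
    return blank_mask, all(blank_mask)

mask, done = step_blank_mask(time_idx=3, out_len=[5, 3], predictions=[7, 7], blank_id=0)
# sample 1 is past its length (3 >= 3), so it is masked even though it predicted 7
print(mask, done)  # [False, True] False
```

When `done` becomes true, the tensor code sets `not_blank = False`, which is the batched equivalent of a single sample emitting blank.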
806 def _greedy_decode_masked(
807 self,
808 x: torch.Tensor,
809 out_len: torch.Tensor,
810 device: torch.device,
811 partial_hypotheses: Optional[List[rnnt_utils.Hypothesis]] = None,
812 ):
813 if partial_hypotheses is not None:
814 raise NotImplementedError("`partial_hypotheses` support is not implemented")
815
816 # x: [B, T, D]
817 # out_len: [B]
818 # device: torch.device
819
820 # Initialize state
821 batchsize = x.shape[0]
822 hypotheses = [
823 rnnt_utils.Hypothesis(score=0.0, y_sequence=[], timestep=[], dec_state=None) for _ in range(batchsize)
824 ]
825
826 # Initialize Hidden state matrix (shared by entire batch)
827 hidden = None
828
829 # If alignments need to be preserved, register a dangling list to hold the values
830 if self.preserve_alignments:
831 # alignments is a 3-dimensional dangling list representing B x T x U
832 for hyp in hypotheses:
833 hyp.alignments = [[]]
836
837 # If confidence scores need to be preserved, register a dangling list to hold the values
838 if self.preserve_frame_confidence:
839 # frame_confidence is a 3-dimensional dangling list representing B x T x U
840 for hyp in hypotheses:
841 hyp.frame_confidence = [[]]
842
843 # Last Label buffer + Last Label without blank buffer
844 # batch level equivalent of the last_label
845 last_label = torch.full([batchsize, 1], fill_value=self._blank_index, dtype=torch.long, device=device)
846 last_label_without_blank = last_label.clone()
847
848 # Mask buffers
849 blank_mask = torch.full([batchsize], fill_value=0, dtype=torch.bool, device=device)
850
851 # Get max sequence length
852 max_out_len = out_len.max()
853
854 with torch.inference_mode():
855 for time_idx in range(max_out_len):
856 f = x.narrow(dim=1, start=time_idx, length=1) # [B, 1, D]
857
858 # Prepare t timestamp batch variables
859 not_blank = True
860 symbols_added = 0
861
862 # Reset blank mask
863 blank_mask.mul_(False)
864
865 # Update blank mask with time mask
866 # Batch: [B, T, D], but Bi may have seq len < max(seq_lens_in_batch)
867 # Forcibly mask with "blank" tokens for all samples where the current time step T >= seq_len
868 blank_mask = time_idx >= out_len
869
870 # Start inner loop
871 while not_blank and (self.max_symbols is None or symbols_added < self.max_symbols):
872 # Batch prediction and joint network steps
873 # If very first prediction step, submit SOS tag (blank) to pred_step.
874 # This feeds a zero tensor as input to AbstractRNNTDecoder to prime the state
875 if time_idx == 0 and symbols_added == 0 and hidden is None:
876 g, hidden_prime = self._pred_step(self._SOS, hidden, batch_size=batchsize)
877 else:
878 # Set a dummy label for the blank value
879 # This value will be overwritten by "blank" again in the last label update below
880 # This is done as vocabulary of prediction network does not contain "blank" token of RNNT
881 last_label_without_blank_mask = last_label == self._blank_index
882 last_label_without_blank[last_label_without_blank_mask] = 0 # temp change of label
883 last_label_without_blank[~last_label_without_blank_mask] = last_label[
884 ~last_label_without_blank_mask
885 ]
886
887 # Perform batch step prediction of decoder, getting new states and scores ("g")
888 g, hidden_prime = self._pred_step(last_label_without_blank, hidden, batch_size=batchsize)
889
890 # Batched joint step - Output = [B, V + 1]
891 # If preserving per-frame confidence, log_normalize must be true
892 logp = self._joint_step(f, g, log_normalize=True if self.preserve_frame_confidence else None)[
893 :, 0, 0, :
894 ]
895
896 if logp.dtype != torch.float32:
897 logp = logp.float()
898
899 # Get index k, of max prob for batch
900 v, k = logp.max(1)
901 del g
902
903 # Update blank mask with current predicted blanks
904 # This is accumulating blanks over all time steps T and all target steps min(max_symbols, U)
905 k_is_blank = k == self._blank_index
906 blank_mask.bitwise_or_(k_is_blank)
907 all_blanks = torch.all(blank_mask)
908
909 # If preserving alignments, check if sequence length of sample has been reached
910 # before adding alignment
911 if self.preserve_alignments:
912 # Insert logprobs into last timestep per sample
913 logp_vals = logp.to('cpu')
914 logp_ids = logp_vals.max(1)[1]
915 for batch_idx, is_blank in enumerate(blank_mask):
916 # we only want to update non-blanks, unless we are at the last step in the loop where
917 # all elements produced blanks, otherwise there will be duplicate predictions
918 # saved in alignments
919 if time_idx < out_len[batch_idx] and (all_blanks or not is_blank):
920 hypotheses[batch_idx].alignments[-1].append(
921 (logp_vals[batch_idx], logp_ids[batch_idx])
922 )
923
924 del logp_vals
925
926 # If preserving per-frame confidence, check if sequence length of sample has been reached
927 # before adding confidence scores
928 if self.preserve_frame_confidence:
929 # Insert probabilities into last timestep per sample
930 confidence = self._get_confidence(logp)
931 for batch_idx, is_blank in enumerate(blank_mask):
932 if time_idx < out_len[batch_idx] and (all_blanks or not is_blank):
933 hypotheses[batch_idx].frame_confidence[-1].append(confidence[batch_idx])
934 del logp
935
936 # If all samples predict / have predicted prior blanks, exit loop early
937 # This is equivalent to if single sample predicted k
938 if all_blanks:
939 not_blank = False
940 else:
941 # Collect batch indices where blanks occurred now/past
942 blank_indices = (blank_mask == 1).nonzero(as_tuple=False)
943
944 # Recover prior state for all samples which predicted blank now/past
945 if hidden is not None:
946 # LSTM has 2 states
947 hidden_prime = self.decoder.batch_copy_states(hidden_prime, hidden, blank_indices)
948
949 elif len(blank_indices) > 0 and hidden is None:
950 # Reset state if there were some blank and other non-blank predictions in batch
951 # Original state is filled with zeros so we just multiply
952 # LSTM has 2 states
953 hidden_prime = self.decoder.batch_copy_states(hidden_prime, None, blank_indices, value=0.0)
954
955 # Recover prior predicted label for all samples which predicted blank now/past
956 k[blank_indices] = last_label[blank_indices, 0]
957
958 # Update new label and hidden state for next iteration
959 last_label = k.view(-1, 1)
960 hidden = hidden_prime
961
962 # Update predicted labels, accounting for time mask
963 # If blank was predicted even once, now or in the past,
964 # Force the current predicted label to also be blank
965 # This ensures that blanks propagate across all timesteps
966 # once they have occurred (normally stopping condition of sample level loop).
967 for kidx, ki in enumerate(k):
968 if blank_mask[kidx] == 0:
969 hypotheses[kidx].y_sequence.append(ki)
970 hypotheses[kidx].timestep.append(time_idx)
971 hypotheses[kidx].score += float(v[kidx])
972
973 symbols_added += 1
974
975 # If preserving alignments, convert the current Uj alignments into a torch.Tensor
976 # Then preserve U at current timestep Ti
977 # Finally, forward the timestep history to Ti+1 for that sample
978 # All of this should only be done iff the current time index <= sample-level AM length.
979 # Otherwise ignore and move to next sample / next timestep.
980 if self.preserve_alignments:
981
982 # convert Ti-th logits into a torch array
983 for batch_idx in range(batchsize):
984
985 # this checks if current timestep <= sample-level AM length
986 # If current timestep > sample-level AM length, no alignments will be added
987 # Therefore the list of Uj alignments is empty here.
988 if len(hypotheses[batch_idx].alignments[-1]) > 0:
989 hypotheses[batch_idx].alignments.append([]) # blank buffer for next timestep
990
991 # Do the same if preserving per-frame confidence
992 if self.preserve_frame_confidence:
993
994 for batch_idx in range(batchsize):
995 if len(hypotheses[batch_idx].frame_confidence[-1]) > 0:
996 hypotheses[batch_idx].frame_confidence.append([]) # blank buffer for next timestep
997
998 # Remove trailing empty list of alignments at T_{am-len} x Uj
999 if self.preserve_alignments:
1000 for batch_idx in range(batchsize):
1001 if len(hypotheses[batch_idx].alignments[-1]) == 0:
1002 del hypotheses[batch_idx].alignments[-1]
1003
1004 # Remove trailing empty list of confidence scores at T_{am-len} x Uj
1005 if self.preserve_frame_confidence:
1006 for batch_idx in range(batchsize):
1007 if len(hypotheses[batch_idx].frame_confidence[-1]) == 0:
1008 del hypotheses[batch_idx].frame_confidence[-1]
1009
1010 # Preserve states
1011 for batch_idx in range(batchsize):
1012 hypotheses[batch_idx].dec_state = self.decoder.batch_select_state(hidden, batch_idx)
1013
1014 return hypotheses
1015
1016
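Two label-bookkeeping tricks from the masked decoder above can be sketched with plain lists (the helper names here are invented for illustration): the prediction network's vocabulary contains no blank token, so blank labels are temporarily mapped to a dummy index before the prediction step; and any sample that has ever predicted blank carries its previous label forward (`k[blank_indices] = last_label[blank_indices, 0]` in the tensor code).

```python
# Illustrative sketch (not decoder code) of the label handling in the
# masked batched greedy loop.

def labels_without_blank(last_label, blank_id):
    # Substitute a dummy token (0) wherever the last label was blank,
    # since the prediction network's vocabulary has no blank token.
    return [0 if label == blank_id else label for label in last_label]

def recover_blank_labels(k, last_label, blank_mask):
    # Samples that predicted blank keep their previous label unchanged.
    return [prev if blanked else new
            for new, prev, blanked in zip(k, last_label, blank_mask)]

blank_id = 28
print(labels_without_blank([28, 5, 28], blank_id))           # [0, 5, 0]
print(recover_blank_labels([28, 9], [4, 2], [True, False]))  # [4, 9]
```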
1017 class ExportedModelGreedyBatchedRNNTInfer:
1018 def __init__(self, encoder_model: str, decoder_joint_model: str, max_symbols_per_step: Optional[int] = None):
1019 self.encoder_model_path = encoder_model
1020 self.decoder_joint_model_path = decoder_joint_model
1021 self.max_symbols_per_step = max_symbols_per_step
1022
1023 # Will be populated at runtime
1024 self._blank_index = None
1025
1026 def __call__(self, audio_signal: torch.Tensor, length: torch.Tensor):
1027 """Returns a list of hypotheses given an input batch of the encoder hidden embedding.
1028 Output token is generated auto-regressively.
1029
1030 Args:
1031 audio_signal: A tensor of size (batch, features, timesteps).
1032 length: list of int representing the length of each input
1033 sequence.
1034
1035 Returns:
1036 packed list containing batch number of sentences (Hypotheses).
1037 """
1038 with torch.no_grad():
1039 # Apply optional preprocessing
1040 encoder_output, encoded_lengths = self.run_encoder(audio_signal=audio_signal, length=length)
1041
1042 if torch.is_tensor(encoder_output):
1043 encoder_output = encoder_output.transpose(1, 2)
1044 else:
1045 encoder_output = encoder_output.transpose([0, 2, 1]) # (B, T, D)
1046 logitlen = encoded_lengths
1047
1048 inseq = encoder_output # [B, T, D]
1049 hypotheses, timestamps = self._greedy_decode(inseq, logitlen)
1050
1051 # Pack the hypotheses results
1052 packed_result = [rnnt_utils.Hypothesis(score=-1.0, y_sequence=[]) for _ in range(len(hypotheses))]
1053 for i in range(len(packed_result)):
1054 packed_result[i].y_sequence = torch.tensor(hypotheses[i], dtype=torch.long)
1055 packed_result[i].length = timestamps[i]
1056
1057 del hypotheses
1058
1059 return packed_result
1060
1061 def _greedy_decode(self, x, out_len):
1062 # x: [B, T, D]
1063 # out_len: [B]
1064
1065 # Initialize state
1066 batchsize = x.shape[0]
1067 hidden = self._get_initial_states(batchsize)
1068 target_lengths = torch.ones(batchsize, dtype=torch.int32)
1069
1070 # Output string buffer
1071 label = [[] for _ in range(batchsize)]
1072 timesteps = [[] for _ in range(batchsize)]
1073
1074 # Last Label buffer + Last Label without blank buffer
1075 # batch level equivalent of the last_label
1076 last_label = torch.full([batchsize, 1], fill_value=self._blank_index, dtype=torch.long).numpy()
1077 if torch.is_tensor(x):
1078 last_label = torch.from_numpy(last_label).to(self.device)
1079
1080 # Mask buffers
1081 blank_mask = torch.full([batchsize], fill_value=0, dtype=torch.bool).numpy()
1082
1083 # Get max sequence length
1084 max_out_len = out_len.max()
1085 for time_idx in range(max_out_len):
1086 f = x[:, time_idx : time_idx + 1, :] # [B, 1, D]
1087
1088 if torch.is_tensor(f):
1089 f = f.transpose(1, 2)
1090 else:
1091 f = f.transpose([0, 2, 1])
1092
1093 # Prepare t timestamp batch variables
1094 not_blank = True
1095 symbols_added = 0
1096
1097 # Reset blank mask
1098 blank_mask *= False
1099
1100 # Update blank mask with time mask
1101 # Batch: [B, T, D], but Bi may have seq len < max(seq_lens_in_batch)
1102 # Forcibly mask with "blank" tokens for all samples where the current time step T >= seq_len
1103 blank_mask = time_idx >= out_len
1104 # Start inner loop
1105 while not_blank and (self.max_symbols_per_step is None or symbols_added < self.max_symbols_per_step):
1106
1107 # Batch prediction and joint network steps
1108 # If very first prediction step, submit SOS tag (blank) to pred_step.
1109 # This feeds a zero tensor as input to AbstractRNNTDecoder to prime the state
1110 if time_idx == 0 and symbols_added == 0:
1111 g = torch.tensor([self._blank_index] * batchsize, dtype=torch.int32).view(-1, 1)
1112 else:
1113 if torch.is_tensor(last_label):
1114 g = last_label.type(torch.int32)
1115 else:
1116 g = last_label.astype(np.int32)
1117
1118 # Batched joint step - Output = [B, V + 1]
1119 joint_out, hidden_prime = self.run_decoder_joint(f, g, target_lengths, *hidden)
1120 logp, pred_lengths = joint_out
1121 logp = logp[:, 0, 0, :]
1122
1123 # Get index k, of max prob for batch
1124 if torch.is_tensor(logp):
1125 v, k = logp.max(1)
1126 else:
1127 k = np.argmax(logp, axis=1).astype(np.int32)
1128
1129 # Update blank mask with current predicted blanks
1130 # This is accumulating blanks over all time steps T and all target steps min(max_symbols, U)
1131 k_is_blank = k == self._blank_index
1132 blank_mask |= k_is_blank
1133
1134 del k_is_blank
1135 del logp
1136
1137 # If all samples predict / have predicted prior blanks, exit loop early
1138 # This is equivalent to if single sample predicted k
1139 if blank_mask.all():
1140 not_blank = False
1141
1142 else:
1143 # Collect batch indices where blanks occurred now/past
1144 if torch.is_tensor(blank_mask):
1145 blank_indices = (blank_mask == 1).nonzero(as_tuple=False)
1146 else:
1147 blank_indices = blank_mask.astype(np.int32).nonzero()
1148
1149 if type(blank_indices) in (list, tuple):
1150 blank_indices = blank_indices[0]
1151
1152 # Recover prior state for all samples which predicted blank now/past
1153 if hidden is not None:
1154 # LSTM has 2 states
1155 for state_id in range(len(hidden)):
1156 hidden_prime[state_id][:, blank_indices, :] = hidden[state_id][:, blank_indices, :]
1157
1158 elif len(blank_indices) > 0 and hidden is None:
1159 # Reset state if there were some blank and other non-blank predictions in batch
1160 # Original state is filled with zeros so we just multiply
1161 # LSTM has 2 states
1162 for state_id in range(len(hidden_prime)):
1163 hidden_prime[state_id][:, blank_indices, :] *= 0.0
1164
1165 # Recover prior predicted label for all samples which predicted blank now/past
1166 k[blank_indices] = last_label[blank_indices, 0]
1167
1168 # Update new label and hidden state for next iteration
1169 if torch.is_tensor(k):
1170 last_label = k.clone().reshape(-1, 1)
1171 else:
1172 last_label = k.copy().reshape(-1, 1)
1173 hidden = hidden_prime
1174
1175 # Update predicted labels, accounting for time mask
1176 # If blank was predicted even once, now or in the past,
1177 # Force the current predicted label to also be blank
1178 # This ensures that blanks propagate across all timesteps
1179 # once they have occured (normally stopping condition of sample level loop).
1180 for kidx, ki in enumerate(k):
1181 if blank_mask[kidx] == 0:
1182 label[kidx].append(ki)
1183 timesteps[kidx].append(time_idx)
1184
1185 symbols_added += 1
1186
1187 return label, timesteps
1188
1189 def _setup_blank_index(self):
1190 raise NotImplementedError()
1191
1192 def run_encoder(self, audio_signal, length):
1193 raise NotImplementedError()
1194
1195 def run_decoder_joint(self, enc_logits, targets, target_length, *states):
1196 raise NotImplementedError()
1197
1198 def _get_initial_states(self, batchsize):
1199 raise NotImplementedError()
1200
1201
1202 class ONNXGreedyBatchedRNNTInfer(ExportedModelGreedyBatchedRNNTInfer):
1203 def __init__(self, encoder_model: str, decoder_joint_model: str, max_symbols_per_step: Optional[int] = 10):
1204 super().__init__(
1205 encoder_model=encoder_model,
1206 decoder_joint_model=decoder_joint_model,
1207 max_symbols_per_step=max_symbols_per_step,
1208 )
1209
1210 try:
1211 import onnx
1212 import onnxruntime
1213 except (ModuleNotFoundError, ImportError):
1214 raise ImportError("`onnx` or `onnxruntime` could not be imported, please install the libraries.")
1215
1216 if torch.cuda.is_available():
1217 # Try to use onnxruntime-gpu
1218 providers = ['TensorrtExecutionProvider', 'CUDAExecutionProvider']
1219 else:
1220 # Fall back to CPU and onnxruntime-cpu
1221 providers = ['CPUExecutionProvider']
1222
1223 onnx_session_opt = onnxruntime.SessionOptions()
1224 onnx_session_opt.graph_optimization_level = onnxruntime.GraphOptimizationLevel.ORT_ENABLE_ALL
1225
1226 onnx_model = onnx.load(self.encoder_model_path)
1227 onnx.checker.check_model(onnx_model, full_check=True)
1228 self.encoder_model = onnx_model
1229 self.encoder = onnxruntime.InferenceSession(
1230 onnx_model.SerializeToString(), providers=providers, provider_options=onnx_session_opt
1231 )
1232
1233 onnx_model = onnx.load(self.decoder_joint_model_path)
1234 onnx.checker.check_model(onnx_model, full_check=True)
1235 self.decoder_joint_model = onnx_model
1236 self.decoder_joint = onnxruntime.InferenceSession(
1237 onnx_model.SerializeToString(), providers=providers, provider_options=onnx_session_opt
1238 )
1239
1240 logging.info("Successfully loaded encoder, decoder and joint onnx models!")
1241
1242 # Will be populated at runtime
1243 self._blank_index = None
1244 self.max_symbols_per_step = max_symbols_per_step
1245
1246 self._setup_encoder_input_output_keys()
1247 self._setup_decoder_joint_input_output_keys()
1248 self._setup_blank_index()
1249
1250 def _setup_encoder_input_output_keys(self):
1251 self.encoder_inputs = list(self.encoder_model.graph.input)
1252 self.encoder_outputs = list(self.encoder_model.graph.output)
1253
1254 def _setup_decoder_joint_input_output_keys(self):
1255 self.decoder_joint_inputs = list(self.decoder_joint_model.graph.input)
1256 self.decoder_joint_outputs = list(self.decoder_joint_model.graph.output)
1257
1258 def _setup_blank_index(self):
1259 # ASSUME: Single input with no time length information
1260 dynamic_dim = 257
1261 shapes = self.encoder_inputs[0].type.tensor_type.shape.dim
1262 ip_shape = []
1263 for shape in shapes:
1264 if hasattr(shape, 'dim_param') and 'dynamic' in shape.dim_param:
1265 ip_shape.append(dynamic_dim) # replace dynamic axes with constant
1266 else:
1267 ip_shape.append(int(shape.dim_value))
1268
1269 enc_logits, encoded_length = self.run_encoder(
1270 audio_signal=torch.randn(*ip_shape), length=torch.randint(0, 1, size=(dynamic_dim,))
1271 )
1272
1273 # prepare states
1274 states = self._get_initial_states(batchsize=dynamic_dim)
1275
1276 # run decoder 1 step
1277 joint_out, states = self.run_decoder_joint(enc_logits, None, None, *states)
1278 log_probs, lengths = joint_out
1279
1280 self._blank_index = log_probs.shape[-1] - 1 # last token of vocab size is blank token
1281 logging.info(
1282 f"Enc-Dec-Joint step was evaluated, blank token id = {self._blank_index}; vocab size = {log_probs.shape[-1]}"
1283 )
1284
1285 def run_encoder(self, audio_signal, length):
1286 if hasattr(audio_signal, 'cpu'):
1287 audio_signal = audio_signal.cpu().numpy()
1288
1289 if hasattr(length, 'cpu'):
1290 length = length.cpu().numpy()
1291
1292 ip = {
1293 self.encoder_inputs[0].name: audio_signal,
1294 self.encoder_inputs[1].name: length,
1295 }
1296 enc_out = self.encoder.run(None, ip)
1297 enc_out, encoded_length = enc_out # ASSUME: single output
1298 return enc_out, encoded_length
1299
1300 def run_decoder_joint(self, enc_logits, targets, target_length, *states):
1301 # ASSUME: Decoder is RNN Transducer
1302 if targets is None:
1303 targets = torch.zeros(enc_logits.shape[0], 1, dtype=torch.int32)
1304 target_length = torch.ones(enc_logits.shape[0], dtype=torch.int32)
1305
1306 if hasattr(targets, 'cpu'):
1307 targets = targets.cpu().numpy()
1308
1309 if hasattr(target_length, 'cpu'):
1310 target_length = target_length.cpu().numpy()
1311
1312 ip = {
1313 self.decoder_joint_inputs[0].name: enc_logits,
1314 self.decoder_joint_inputs[1].name: targets,
1315 self.decoder_joint_inputs[2].name: target_length,
1316 }
1317
1318 num_states = 0
1319 if states is not None and len(states) > 0:
1320 num_states = len(states)
1321 for idx, state in enumerate(states):
1322 if hasattr(state, 'cpu'):
1323 state = state.cpu().numpy()
1324
1325 ip[self.decoder_joint_inputs[len(ip)].name] = state
1326
1327 dec_out = self.decoder_joint.run(None, ip)
1328
1329 # unpack dec output
1330 if num_states > 0:
1331 new_states = dec_out[-num_states:]
1332 dec_out = dec_out[:-num_states]
1333 else:
1334 new_states = None
1335
1336 return dec_out, new_states
1337
1338 def _get_initial_states(self, batchsize):
1339 # ASSUME: LSTM STATES of shape (layers, batchsize, dim)
1340 input_state_nodes = [ip for ip in self.decoder_joint_inputs if 'state' in ip.name]
1341 num_states = len(input_state_nodes)
1342 if num_states == 0:
1343 return
1344
1345 input_states = []
1346 for state_id in range(num_states):
1347 node = input_state_nodes[state_id]
1348 ip_shape = []
1349 for shape_idx, shape in enumerate(node.type.tensor_type.shape.dim):
1350 if hasattr(shape, 'dim_param') and 'dynamic' in shape.dim_param:
1351 ip_shape.append(batchsize) # replace dynamic axes with constant
1352 else:
1353 ip_shape.append(int(shape.dim_value))
1354
1355 input_states.append(torch.zeros(*ip_shape))
1356
1357 return input_states
1358
1359
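Both exported-model runners share the same output-unpacking convention: the session returns one flat list in which the trailing `num_states` entries are the new RNN states. A minimal sketch of that split (the function name is illustrative, not part of the class):

```python
# Illustrative sketch of the flat-output unpacking used by the exported-model
# decoder/joint runners: the last `num_states` entries are the new states.

def split_outputs(flat_outputs, num_states):
    """Split a flat session output into (decoder_outputs, new_states)."""
    if num_states > 0:
        return flat_outputs[:-num_states], flat_outputs[-num_states:]
    return flat_outputs, None

dec_out, states = split_outputs(['logits', 'lengths', 'h', 'c'], num_states=2)
print(dec_out)  # ['logits', 'lengths']
print(states)   # ['h', 'c']
```

Stateless models simply pass `num_states == 0` and get `None` back for the states.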
1360 class TorchscriptGreedyBatchedRNNTInfer(ExportedModelGreedyBatchedRNNTInfer):
1361 def __init__(
1362 self,
1363 encoder_model: str,
1364 decoder_joint_model: str,
1365 cfg: DictConfig,
1366 device: str,
1367 max_symbols_per_step: Optional[int] = 10,
1368 ):
1369 super().__init__(
1370 encoder_model=encoder_model,
1371 decoder_joint_model=decoder_joint_model,
1372 max_symbols_per_step=max_symbols_per_step,
1373 )
1374
1375 self.cfg = cfg
1376 self.device = device
1377
1378 self.encoder = torch.jit.load(self.encoder_model_path, map_location=self.device)
1379 self.decoder_joint = torch.jit.load(self.decoder_joint_model_path, map_location=self.device)
1380
1381 logging.info("Successfully loaded encoder, decoder and joint torchscript models!")
1382
1383 # Will be populated at runtime
1384 self._blank_index = None
1385 self.max_symbols_per_step = max_symbols_per_step
1386
1387 self._setup_encoder_input_keys()
1388 self._setup_decoder_joint_input_keys()
1389 self._setup_blank_index()
1390
1391 def _setup_encoder_input_keys(self):
1392 arguments = self.encoder.forward.schema.arguments[1:]
1393 self.encoder_inputs = [arg for arg in arguments]
1394
1395 def _setup_decoder_joint_input_keys(self):
1396 arguments = self.decoder_joint.forward.schema.arguments[1:]
1397 self.decoder_joint_inputs = [arg for arg in arguments]
1398
1399 def _setup_blank_index(self):
1400 self._blank_index = len(self.cfg.joint.vocabulary)
1401
1402 logging.info(f"Blank token id = {self._blank_index}; vocab size = {len(self.cfg.joint.vocabulary) + 1}")
1403
1404 def run_encoder(self, audio_signal, length):
1405 enc_out = self.encoder(audio_signal, length)
1406 enc_out, encoded_length = enc_out # ASSUME: single output
1407 return enc_out, encoded_length
1408
1409 def run_decoder_joint(self, enc_logits, targets, target_length, *states):
1410 # ASSUME: Decoder is RNN Transducer
1411 if targets is None:
1412 targets = torch.zeros(enc_logits.shape[0], 1, dtype=torch.int32, device=enc_logits.device)
1413 target_length = torch.ones(enc_logits.shape[0], dtype=torch.int32, device=enc_logits.device)
1414
1415 num_states = 0
1416 if states is not None and len(states) > 0:
1417 num_states = len(states)
1418
1419 dec_out = self.decoder_joint(enc_logits, targets, target_length, *states)
1420
1421 # unpack dec output
1422 if num_states > 0:
1423 new_states = dec_out[-num_states:]
1424 dec_out = dec_out[:-num_states]
1425 else:
1426 new_states = None
1427
1428 return dec_out, new_states
1429
1430 def _get_initial_states(self, batchsize):
1431 # ASSUME: LSTM STATES of shape (layers, batchsize, dim)
1432 input_state_nodes = [ip for ip in self.decoder_joint_inputs if 'state' in ip.name]
1433 num_states = len(input_state_nodes)
1434 if num_states == 0:
1435 return
1436
1437 input_states = []
1438 for state_id in range(num_states):
1439 # Hardcode shape size for LSTM (1 is for num layers in LSTM, which is flattened for export)
1440 ip_shape = [1, batchsize, self.cfg.model_defaults.pred_hidden]
1441 input_states.append(torch.zeros(*ip_shape, device=self.device))
1442
1443 return input_states
1444
1445
1446 class GreedyMultiblankRNNTInfer(GreedyRNNTInfer):
1447 """A greedy transducer decoder for multi-blank RNN-T.
1448
1449 Sequence level greedy decoding, performed auto-regressively.
1450
1451 Args:
1452 decoder_model: rnnt_utils.AbstractRNNTDecoder implementation.
1453 joint_model: rnnt_utils.AbstractRNNTJoint implementation.
1454 blank_index: int index of the blank token. Must be len(vocabulary) for multi-blank RNNTs.
1455 big_blank_durations: a list containing durations for big blanks the model supports.
1456 max_symbols_per_step: Optional int. The maximum number of symbols that can be added
1457 to a sequence in a single time step; if set to None then there is
1458 no limit.
1459 preserve_alignments: Bool flag which preserves the history of alignments generated during
1460 greedy decoding (sample / batched). When set to true, the Hypothesis will contain
1461 the non-null value for `alignments` in it. Here, `alignments` is a List of List of
1462 Tuple(Tensor (of length V + 1 + num-big-blanks), Tensor(scalar, label after argmax)).
1463 The length of the list corresponds to the Acoustic Length (T).
1464 Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more targets from a vocabulary.
1465 U is the number of target tokens for the current timestep Ti.
1466 preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores generated
1467 during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
1468 the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of List of floats.
1469 The length of the list corresponds to the Acoustic Length (T).
1470 Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more confidence scores.
1471 U is the number of target tokens for the current timestep Ti.
1472 confidence_method_cfg: A dict-like object which contains the method name and settings to compute per-frame
1473 confidence scores.
1474
1475 name: The method name (str).
1476 Supported values:
1477 - 'max_prob' for using the maximum token probability as a confidence.
1478 - 'entropy' for using a normalized entropy of a log-likelihood vector.
1479
1480 entropy_type: Which type of entropy to use (str). Used if confidence_method_cfg.name is set to `entropy`.
1481 Supported values:
1482                    - 'gibbs' for the (standard) Gibbs entropy. If the alpha (α) is provided,
1483                        the formula is the following: H_α = -sum_i((p^α_i)*log(p^α_i)).
1484                        Note that for this entropy, the alpha should comply with the following inequality:
1485                        (log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= α <= (1+log(V-1))/log(V-1)
1486                        where V is the model vocabulary size.
1487                    - 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
1488                        Tsallis entropy formula is the following: H_α = 1/(α-1)*(1-sum_i(p^α_i)),
1489                        where α is a parameter. When α == 1, it works like the Gibbs entropy.
1490                        More: https://en.wikipedia.org/wiki/Tsallis_entropy
1491                    - 'renyi' for the Rényi entropy.
1492                        Rényi entropy formula is the following: H_α = 1/(1-α)*log_2(sum_i(p^α_i)),
1493                        where α is a parameter. When α == 1, it works like the Gibbs entropy.
1494                        More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
1495
1496            alpha: Power scale for logsoftmax (α for entropies). Here we restrict it to be > 0.
1497                When the alpha equals one, scaling is not applied to 'max_prob',
1498                and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
1499
1500 entropy_norm: A mapping of the entropy value to the interval [0,1].
1501 Supported values:
1502 - 'lin' for using the linear mapping.
1503 - 'exp' for using exponential mapping with linear shift.
1504 """
1505
1506 def __init__(
1507 self,
1508 decoder_model: rnnt_abstract.AbstractRNNTDecoder,
1509 joint_model: rnnt_abstract.AbstractRNNTJoint,
1510 blank_index: int,
1511 big_blank_durations: list,
1512 max_symbols_per_step: Optional[int] = None,
1513 preserve_alignments: bool = False,
1514 preserve_frame_confidence: bool = False,
1515 confidence_method_cfg: Optional[DictConfig] = None,
1516 ):
1517 super().__init__(
1518 decoder_model=decoder_model,
1519 joint_model=joint_model,
1520 blank_index=blank_index,
1521 max_symbols_per_step=max_symbols_per_step,
1522 preserve_alignments=preserve_alignments,
1523 preserve_frame_confidence=preserve_frame_confidence,
1524 confidence_method_cfg=confidence_method_cfg,
1525 )
1526 self.big_blank_durations = big_blank_durations
1527 self._SOS = blank_index - len(big_blank_durations)
1528
1529 @torch.no_grad()
1530 def _greedy_decode(
1531 self, x: torch.Tensor, out_len: torch.Tensor, partial_hypotheses: Optional[rnnt_utils.Hypothesis] = None
1532 ):
1533 # x: [T, 1, D]
1534 # out_len: [seq_len]
1535
1536 # Initialize blank state and empty label set in Hypothesis
1537 hypothesis = rnnt_utils.Hypothesis(score=0.0, y_sequence=[], dec_state=None, timestep=[], last_token=None)
1538
1539 if partial_hypotheses is not None:
1540 hypothesis.last_token = partial_hypotheses.last_token
1541 hypothesis.y_sequence = (
1542 partial_hypotheses.y_sequence.cpu().tolist()
1543 if isinstance(partial_hypotheses.y_sequence, torch.Tensor)
1544 else partial_hypotheses.y_sequence
1545 )
1546 if partial_hypotheses.dec_state is not None:
1547 hypothesis.dec_state = self.decoder.batch_concat_states([partial_hypotheses.dec_state])
1548 hypothesis.dec_state = _states_to_device(hypothesis.dec_state, x.device)
1549
1550 if self.preserve_alignments:
1551 # Alignments is a 2-dimensional dangling list representing T x U
1552 hypothesis.alignments = [[]]
1553
1554 if self.preserve_frame_confidence:
1555 hypothesis.frame_confidence = [[]]
1556
1557 # if this variable is > 1, it means the last emission was a big-blank and we need to skip frames.
1558 big_blank_duration = 1
1559
1560 # For timestep t in X_t
1561 for time_idx in range(out_len):
1562 if big_blank_duration > 1:
1563 # skip frames until big_blank_duration == 1.
1564 big_blank_duration -= 1
1565 continue
1566 # Extract encoder embedding at timestep t
1567 # f = x[time_idx, :, :].unsqueeze(0) # [1, 1, D]
1568 f = x.narrow(dim=0, start=time_idx, length=1)
1569
1570 # Setup exit flags and counter
1571 not_blank = True
1572 symbols_added = 0
1573
1574            # While blank is not predicted and we don't run out of max symbols per timestep
1575 while not_blank and (self.max_symbols is None or symbols_added < self.max_symbols):
1576 # In the first timestep, we initialize the network with RNNT Blank
1577 # In later timesteps, we provide previous predicted label as input.
1578 if hypothesis.last_token is None and hypothesis.dec_state is None:
1579 last_label = self._SOS
1580 else:
1581 last_label = label_collate([[hypothesis.last_token]])
1582
1583 # Perform prediction network and joint network steps.
1584 g, hidden_prime = self._pred_step(last_label, hypothesis.dec_state)
1585 # If preserving per-frame confidence, log_normalize must be true
1586 logp = self._joint_step(f, g, log_normalize=True if self.preserve_frame_confidence else None)[
1587 0, 0, 0, :
1588 ]
1589
1590 del g
1591
1592                # torch.max(0) op doesn't exist for FP 16.
1593 if logp.dtype != torch.float32:
1594 logp = logp.float()
1595
1596 # get index k, of max prob
1597 v, k = logp.max(0)
1598 k = k.item() # K is the label at timestep t_s in inner loop, s >= 0.
1599
1600 # Note, we have non-blanks in the vocab first, followed by big blanks, and standard blank at last.
1601 # here we check if it's a big blank and if yes, set the duration variable.
1602 if k >= self._blank_index - len(self.big_blank_durations) and k < self._blank_index:
1603 big_blank_duration = self.big_blank_durations[self._blank_index - k - 1]
1604
1605 if self.preserve_alignments:
1606 # insert logprobs into last timestep
1607 hypothesis.alignments[-1].append((logp.to('cpu'), torch.tensor(k, dtype=torch.int32)))
1608
1609 if self.preserve_frame_confidence:
1610 # insert confidence into last timestep
1611 hypothesis.frame_confidence[-1].append(self._get_confidence(logp))
1612
1613 del logp
1614
1615 # If any type of blank token is predicted, exit inner loop, move onto next timestep t
1616 if k >= self._blank_index - len(self.big_blank_durations):
1617 not_blank = False
1618 else:
1619 # Append token to label set, update RNN state.
1620 hypothesis.y_sequence.append(k)
1621 hypothesis.score += float(v)
1622 hypothesis.timestep.append(time_idx)
1623 hypothesis.dec_state = hidden_prime
1624 hypothesis.last_token = k
1625
1626 # Increment token counter.
1627 symbols_added += 1
1628
1629 if self.preserve_alignments:
1630 # convert Ti-th logits into a torch array
1631 hypothesis.alignments.append([]) # blank buffer for next timestep
1632
1633 if self.preserve_frame_confidence:
1634 hypothesis.frame_confidence.append([]) # blank buffer for next timestep
1635
1636 # Remove trailing empty list of Alignments
1637 if self.preserve_alignments:
1638 if len(hypothesis.alignments[-1]) == 0:
1639 del hypothesis.alignments[-1]
1640
1641 # Remove trailing empty list of per-frame confidence
1642 if self.preserve_frame_confidence:
1643 if len(hypothesis.frame_confidence[-1]) == 0:
1644 del hypothesis.frame_confidence[-1]
1645
1646 # Unpack the hidden states
1647 hypothesis.dec_state = self.decoder.batch_select_state(hypothesis.dec_state, 0)
1648
1649 return hypothesis
1650
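# The index arithmetic in `_greedy_decode` above relies on the label layout
# "non-blank tokens first, then big blanks, then the standard blank last".
# A minimal stand-alone sketch of that mapping (values below are hypothetical,
# not taken from any real model config):

```python
def big_blank_duration_for_label(k: int, blank_index: int, durations: list) -> int:
    """Return the frame-skip duration implied by label k, mirroring the
    expression `self.big_blank_durations[self._blank_index - k - 1]` above."""
    num_big = len(durations)
    if k == blank_index:
        return 1  # standard blank: advance exactly one frame
    if blank_index - num_big <= k < blank_index:
        return durations[blank_index - k - 1]  # big blank: skip several frames
    return 0  # non-blank token: stay on the current frame

# e.g. with 3 big blanks of (hypothetical) durations [2, 4, 8] and blank_index == 10:
assert big_blank_duration_for_label(10, 10, [2, 4, 8]) == 1
assert big_blank_duration_for_label(9, 10, [2, 4, 8]) == 2
assert big_blank_duration_for_label(7, 10, [2, 4, 8]) == 8
```

# Note that durations[0] corresponds to the *largest* big-blank label index
# (blank_index - 1), which is why the lookup index is `blank_index - k - 1`.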
1651
1652 class GreedyBatchedMultiblankRNNTInfer(GreedyBatchedRNNTInfer):
1653 """A batch level greedy transducer decoder.
1654 Batch level greedy decoding, performed auto-regressively.
1655 Args:
1656 decoder_model: rnnt_utils.AbstractRNNTDecoder implementation.
1657 joint_model: rnnt_utils.AbstractRNNTJoint implementation.
1658 blank_index: int index of the blank token. Must be len(vocabulary) for multi-blank RNNTs.
1659 big_blank_durations: a list containing durations for big blanks the model supports.
1660 max_symbols_per_step: Optional int. The maximum number of symbols that can be added
1661 to a sequence in a single time step; if set to None then there is
1662 no limit.
1663 preserve_alignments: Bool flag which preserves the history of alignments generated during
1664 greedy decoding (sample / batched). When set to true, the Hypothesis will contain
1665 the non-null value for `alignments` in it. Here, `alignments` is a List of List of
1666 Tuple(Tensor (of length V + 1 + num-big-blanks), Tensor(scalar, label after argmax)).
1667 The length of the list corresponds to the Acoustic Length (T).
1668 Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more targets from a vocabulary.
1669 U is the number of target tokens for the current timestep Ti.
1670 preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores generated
1671 during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
1672 the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of List of floats.
1673 The length of the list corresponds to the Acoustic Length (T).
1674 Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more confidence scores.
1675 U is the number of target tokens for the current timestep Ti.
1676 confidence_method_cfg: A dict-like object which contains the method name and settings to compute per-frame
1677 confidence scores.
1678
1679 name: The method name (str).
1680 Supported values:
1681 - 'max_prob' for using the maximum token probability as a confidence.
1682 - 'entropy' for using a normalized entropy of a log-likelihood vector.
1683
1684 entropy_type: Which type of entropy to use (str). Used if confidence_method_cfg.name is set to `entropy`.
1685 Supported values:
1686                    - 'gibbs' for the (standard) Gibbs entropy. If the alpha (α) is provided,
1687                        the formula is the following: H_α = -sum_i((p^α_i)*log(p^α_i)).
1688                        Note that for this entropy, the alpha should comply with the following inequality:
1689                        (log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= α <= (1+log(V-1))/log(V-1)
1690                        where V is the model vocabulary size.
1691                    - 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
1692                        Tsallis entropy formula is the following: H_α = 1/(α-1)*(1-sum_i(p^α_i)),
1693                        where α is a parameter. When α == 1, it works like the Gibbs entropy.
1694                        More: https://en.wikipedia.org/wiki/Tsallis_entropy
1695                    - 'renyi' for the Rényi entropy.
1696                        Rényi entropy formula is the following: H_α = 1/(1-α)*log_2(sum_i(p^α_i)),
1697                        where α is a parameter. When α == 1, it works like the Gibbs entropy.
1698                        More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
1699
1700            alpha: Power scale for logsoftmax (α for entropies). Here we restrict it to be > 0.
1701                When the alpha equals one, scaling is not applied to 'max_prob',
1702                and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
1703
1704 entropy_norm: A mapping of the entropy value to the interval [0,1].
1705 Supported values:
1706 - 'lin' for using the linear mapping.
1707 - 'exp' for using exponential mapping with linear shift.
1708 """
1709
1710 def __init__(
1711 self,
1712 decoder_model: rnnt_abstract.AbstractRNNTDecoder,
1713 joint_model: rnnt_abstract.AbstractRNNTJoint,
1714 blank_index: int,
1715 big_blank_durations: List[int],
1716 max_symbols_per_step: Optional[int] = None,
1717 preserve_alignments: bool = False,
1718 preserve_frame_confidence: bool = False,
1719 confidence_method_cfg: Optional[DictConfig] = None,
1720 ):
1721 super().__init__(
1722 decoder_model=decoder_model,
1723 joint_model=joint_model,
1724 blank_index=blank_index,
1725 max_symbols_per_step=max_symbols_per_step,
1726 preserve_alignments=preserve_alignments,
1727 preserve_frame_confidence=preserve_frame_confidence,
1728 confidence_method_cfg=confidence_method_cfg,
1729 )
1730 self.big_blank_durations = big_blank_durations
1731
1732 # Depending on availability of `blank_as_pad` support
1733 # switch between more efficient batch decoding technique
1734 if self.decoder.blank_as_pad:
1735 self._greedy_decode = self._greedy_decode_blank_as_pad
1736 else:
1737 self._greedy_decode = self._greedy_decode_masked
1738 self._SOS = blank_index - len(big_blank_durations)
1739
1740 def _greedy_decode_blank_as_pad(
1741 self,
1742 x: torch.Tensor,
1743 out_len: torch.Tensor,
1744 device: torch.device,
1745 partial_hypotheses: Optional[List[rnnt_utils.Hypothesis]] = None,
1746 ):
1747 if partial_hypotheses is not None:
1748            raise NotImplementedError("`partial_hypotheses` support is not implemented")
1749
1750 with torch.inference_mode():
1751 # x: [B, T, D]
1752 # out_len: [B]
1753 # device: torch.device
1754
1755 # Initialize list of Hypothesis
1756 batchsize = x.shape[0]
1757 hypotheses = [
1758 rnnt_utils.Hypothesis(score=0.0, y_sequence=[], timestep=[], dec_state=None) for _ in range(batchsize)
1759 ]
1760
1761 # Initialize Hidden state matrix (shared by entire batch)
1762 hidden = None
1763
1764            # If alignments need to be preserved, register a dangling list to hold the values
1765 if self.preserve_alignments:
1766 # alignments is a 3-dimensional dangling list representing B x T x U
1767 for hyp in hypotheses:
1768 hyp.alignments = [[]]
1769
1770            # If confidence scores need to be preserved, register a dangling list to hold the values
1771 if self.preserve_frame_confidence:
1772 # frame_confidence is a 3-dimensional dangling list representing B x T x U
1773 for hyp in hypotheses:
1774 hyp.frame_confidence = [[]]
1775
1776 # Last Label buffer + Last Label without blank buffer
1777 # batch level equivalent of the last_label
1778 last_label = torch.full([batchsize, 1], fill_value=self._SOS, dtype=torch.long, device=device)
1779
1780 # this mask is true for if the emission is *any type* of blank.
1781 blank_mask = torch.full([batchsize], fill_value=0, dtype=torch.bool, device=device)
1782
1783 # Get max sequence length
1784 max_out_len = out_len.max()
1785
1786            # We have a mask for each big blank. A mask being "true" means the previous emission was the big blank
1787            # with the corresponding duration, or one with a larger duration. E.g., the big_blank_mask for duration 2
1788            # is set true if the previous emission was a big blank with duration 4, 3 or 2, but false if the previous
1789            # emission was a standard blank (with duration = 1).
1790 big_blank_masks = [torch.full([batchsize], fill_value=0, dtype=torch.bool, device=device)] * len(
1791 self.big_blank_durations
1792 )
1793
1794 # if this variable > 1, it means the previous emission was big-blank and we need to skip frames.
1795 big_blank_duration = 1
1796
1797 for time_idx in range(max_out_len):
1798 if big_blank_duration > 1:
1799 # skip frames until big_blank_duration == 1
1800 big_blank_duration -= 1
1801 continue
1802 f = x.narrow(dim=1, start=time_idx, length=1) # [B, 1, D]
1803
1804 # Prepare t timestamp batch variables
1805 not_blank = True
1806 symbols_added = 0
1807
1808 # Reset all blank masks
1809 blank_mask.mul_(False)
1810 for i in range(len(big_blank_masks)):
1811 big_blank_masks[i].mul_(False)
1812
1813 # Update blank mask with time mask
1814 # Batch: [B, T, D], but Bi may have seq len < max(seq_lens_in_batch)
1815                # Forcibly mask with "blank" tokens, for all samples where the current time step T > seq_len
1816 blank_mask = time_idx >= out_len
1817 for i in range(len(big_blank_masks)):
1818 big_blank_masks[i] = time_idx >= out_len
1819
1820 # Start inner loop
1821 while not_blank and (self.max_symbols is None or symbols_added < self.max_symbols):
1822 # Batch prediction and joint network steps
1823 # If very first prediction step, submit SOS tag (blank) to pred_step.
1824 # This feeds a zero tensor as input to AbstractRNNTDecoder to prime the state
1825 if time_idx == 0 and symbols_added == 0 and hidden is None:
1826 g, hidden_prime = self._pred_step(self._SOS, hidden, batch_size=batchsize)
1827 else:
1828 # Perform batch step prediction of decoder, getting new states and scores ("g")
1829 g, hidden_prime = self._pred_step(last_label, hidden, batch_size=batchsize)
1830
1831 # Batched joint step - Output = [B, V + 1 + num-big-blanks]
1832 # If preserving per-frame confidence, log_normalize must be true
1833 logp = self._joint_step(f, g, log_normalize=True if self.preserve_frame_confidence else None)[
1834 :, 0, 0, :
1835 ]
1836
1837 if logp.dtype != torch.float32:
1838 logp = logp.float()
1839
1840 # Get index k, of max prob for batch
1841 v, k = logp.max(1)
1842 del g
1843
1844 # Update blank mask with current predicted blanks
1845 # This is accumulating blanks over all time steps T and all target steps min(max_symbols, U)
1846 k_is_blank = k >= self._blank_index - len(self.big_blank_durations)
1847 blank_mask.bitwise_or_(k_is_blank)
1848
1849 for i in range(len(big_blank_masks)):
1850                        # using <= since, as mentioned before, the mask doesn't store exact matches.
1851 # instead, it is True when the predicted blank's duration is >= the duration that the
1852 # mask corresponds to.
1853 k_is_big_blank = k <= self._blank_index - 1 - i
1854
1855 # need to do a bitwise_and since it could also be a non-blank.
1856 k_is_big_blank.bitwise_and_(k_is_blank)
1857 big_blank_masks[i].bitwise_or_(k_is_big_blank)
1858
1859 del k_is_blank
1860
1861 # If preserving alignments, check if sequence length of sample has been reached
1862 # before adding alignment
1863 if self.preserve_alignments:
1864 # Insert logprobs into last timestep per sample
1865 logp_vals = logp.to('cpu')
1866 logp_ids = logp_vals.max(1)[1]
1867 for batch_idx in range(batchsize):
1868 if time_idx < out_len[batch_idx]:
1869 hypotheses[batch_idx].alignments[-1].append(
1870 (logp_vals[batch_idx], logp_ids[batch_idx])
1871 )
1872 del logp_vals
1873
1874 # If preserving per-frame confidence, check if sequence length of sample has been reached
1875 # before adding confidence scores
1876 if self.preserve_frame_confidence:
1877 # Insert probabilities into last timestep per sample
1878 confidence = self._get_confidence(logp)
1879 for batch_idx in range(batchsize):
1880 if time_idx < out_len[batch_idx]:
1881 hypotheses[batch_idx].frame_confidence[-1].append(confidence[batch_idx])
1882 del logp
1883
1884 # If all samples predict / have predicted prior blanks, exit loop early
1885 # This is equivalent to if single sample predicted k
1886 if blank_mask.all():
1887 not_blank = False
1888 else:
1889 # Collect batch indices where blanks occurred now/past
1890 blank_indices = (blank_mask == 1).nonzero(as_tuple=False)
1891
1892 # Recover prior state for all samples which predicted blank now/past
1893 if hidden is not None:
1894 # LSTM has 2 states
1895 hidden_prime = self.decoder.batch_copy_states(hidden_prime, hidden, blank_indices)
1896
1897 elif len(blank_indices) > 0 and hidden is None:
1898 # Reset state if there were some blank and other non-blank predictions in batch
1899 # Original state is filled with zeros so we just multiply
1900 # LSTM has 2 states
1901 hidden_prime = self.decoder.batch_copy_states(hidden_prime, None, blank_indices, value=0.0)
1902
1903 # Recover prior predicted label for all samples which predicted blank now/past
1904 k[blank_indices] = last_label[blank_indices, 0]
1905
1906 # Update new label and hidden state for next iteration
1907 last_label = k.clone().view(-1, 1)
1908 hidden = hidden_prime
1909
1910 # Update predicted labels, accounting for time mask
1911 # If blank was predicted even once, now or in the past,
1912 # Force the current predicted label to also be blank
1913                    # This ensures that blanks propagate across all timesteps
1914                    # once they have occurred (normally stopping condition of sample level loop).
1915 for kidx, ki in enumerate(k):
1916 if blank_mask[kidx] == 0:
1917 hypotheses[kidx].y_sequence.append(ki)
1918 hypotheses[kidx].timestep.append(time_idx)
1919 hypotheses[kidx].score += float(v[kidx])
1920
1921 symbols_added += 1
1922
1923 for i in range(len(big_blank_masks) + 1):
1924                    # The task here is to find the shortest blank duration across the batch,
1925                    # so we start from the shortest blank duration and go up,
1926                    # and stop once we find the duration whose corresponding mask isn't all True.
1927 if i == len(big_blank_masks) or not big_blank_masks[i].all():
1928 big_blank_duration = self.big_blank_durations[i - 1] if i > 0 else 1
1929 break
1930
1931 # If preserving alignments, convert the current Uj alignments into a torch.Tensor
1932 # Then preserve U at current timestep Ti
1933 # Finally, forward the timestep history to Ti+1 for that sample
1934 # All of this should only be done iff the current time index <= sample-level AM length.
1935 # Otherwise ignore and move to next sample / next timestep.
1936 if self.preserve_alignments:
1937
1938 # convert Ti-th logits into a torch array
1939 for batch_idx in range(batchsize):
1940
1941 # this checks if current timestep <= sample-level AM length
1942 # If current timestep > sample-level AM length, no alignments will be added
1943 # Therefore the list of Uj alignments is empty here.
1944 if len(hypotheses[batch_idx].alignments[-1]) > 0:
1945 hypotheses[batch_idx].alignments.append([]) # blank buffer for next timestep
1946
1947 # Do the same if preserving per-frame confidence
1948 if self.preserve_frame_confidence:
1949
1950 for batch_idx in range(batchsize):
1951 if len(hypotheses[batch_idx].frame_confidence[-1]) > 0:
1952 hypotheses[batch_idx].frame_confidence.append([]) # blank buffer for next timestep
1953
1954 # Remove trailing empty list of alignments at T_{am-len} x Uj
1955 if self.preserve_alignments:
1956 for batch_idx in range(batchsize):
1957 if len(hypotheses[batch_idx].alignments[-1]) == 0:
1958 del hypotheses[batch_idx].alignments[-1]
1959
1960 # Remove trailing empty list of confidence scores at T_{am-len} x Uj
1961 if self.preserve_frame_confidence:
1962 for batch_idx in range(batchsize):
1963 if len(hypotheses[batch_idx].frame_confidence[-1]) == 0:
1964 del hypotheses[batch_idx].frame_confidence[-1]
1965
1966 # Preserve states
1967 for batch_idx in range(batchsize):
1968 hypotheses[batch_idx].dec_state = self.decoder.batch_select_state(hidden, batch_idx)
1969
1970 return hypotheses
1971
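# The frame-skip computation at the end of `_greedy_decode_blank_as_pad`
# (find the largest duration whose mask is all-True) can be sketched with plain
# Python lists standing in for the boolean tensors; the masks and durations
# below are hypothetical examples:

```python
def shortest_batch_skip(big_blank_masks, durations):
    """big_blank_masks[i][b] is True iff sample b's last emission was a blank
    with duration >= durations[i] (durations sorted ascending). Return the
    number of frames the whole batch can safely skip together."""
    for i in range(len(big_blank_masks) + 1):
        # stop at the first duration whose mask is not all-True
        if i == len(big_blank_masks) or not all(big_blank_masks[i]):
            return durations[i - 1] if i > 0 else 1

# some sample emitted a standard blank -> no skipping beyond 1 frame
assert shortest_batch_skip([[False, True], [False, False]], [2, 4]) == 1
# every sample skipped >= 2 frames, but not all >= 4 -> skip 2 frames
assert shortest_batch_skip([[True, True], [True, False]], [2, 4]) == 2
# every sample emitted the longest big blank -> skip 4 frames
assert shortest_batch_skip([[True, True], [True, True]], [2, 4]) == 4
```

# Because decoding advances the whole batch in lockstep, only the minimum
# skip duration over all samples can be applied.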
1972 def _greedy_decode_masked(
1973 self,
1974 x: torch.Tensor,
1975 out_len: torch.Tensor,
1976 device: torch.device,
1977 partial_hypotheses: Optional[List[rnnt_utils.Hypothesis]] = None,
1978 ):
1979 if partial_hypotheses is not None:
1980            raise NotImplementedError("`partial_hypotheses` support is not implemented")
1981
1982 if self.big_blank_durations != [1] * len(self.big_blank_durations):
1983 raise NotImplementedError(
1984 "Efficient frame-skipping version for multi-blank masked decoding is not supported."
1985 )
1986
1987 # x: [B, T, D]
1988 # out_len: [B]
1989 # device: torch.device
1990
1991 # Initialize state
1992 batchsize = x.shape[0]
1993 hypotheses = [
1994 rnnt_utils.Hypothesis(score=0.0, y_sequence=[], timestep=[], dec_state=None) for _ in range(batchsize)
1995 ]
1996
1997 # Initialize Hidden state matrix (shared by entire batch)
1998 hidden = None
1999
2000        # If alignments need to be preserved, register a dangling list to hold the values
2001 if self.preserve_alignments:
2002 # alignments is a 3-dimensional dangling list representing B x T x U
2003 for hyp in hypotheses:
2004 hyp.alignments = [[]]
2005        else:
2006            for hyp in hypotheses: hyp.alignments = None
2007
2008        # If confidence scores need to be preserved, register a dangling list to hold the values
2009 if self.preserve_frame_confidence:
2010 # frame_confidence is a 3-dimensional dangling list representing B x T x U
2011 for hyp in hypotheses:
2012 hyp.frame_confidence = [[]]
2013
2014 # Last Label buffer + Last Label without blank buffer
2015 # batch level equivalent of the last_label
2016 last_label = torch.full([batchsize, 1], fill_value=self._blank_index, dtype=torch.long, device=device)
2017 last_label_without_blank = last_label.clone()
2018
2019 # Mask buffers
2020 blank_mask = torch.full([batchsize], fill_value=0, dtype=torch.bool, device=device)
2021
2022 # Get max sequence length
2023 max_out_len = out_len.max()
2024
2025 with torch.inference_mode():
2026 for time_idx in range(max_out_len):
2027 f = x.narrow(dim=1, start=time_idx, length=1) # [B, 1, D]
2028
2029 # Prepare t timestamp batch variables
2030 not_blank = True
2031 symbols_added = 0
2032
2033 # Reset blank mask
2034 blank_mask.mul_(False)
2035
2036 # Update blank mask with time mask
2037 # Batch: [B, T, D], but Bi may have seq len < max(seq_lens_in_batch)
2038                # Forcibly mask with "blank" tokens, for all samples where the current time step T > seq_len
2039 blank_mask = time_idx >= out_len
2040
2041 # Start inner loop
2042 while not_blank and (self.max_symbols is None or symbols_added < self.max_symbols):
2043 # Batch prediction and joint network steps
2044 # If very first prediction step, submit SOS tag (blank) to pred_step.
2045 # This feeds a zero tensor as input to AbstractRNNTDecoder to prime the state
2046 if time_idx == 0 and symbols_added == 0 and hidden is None:
2047 g, hidden_prime = self._pred_step(self._SOS, hidden, batch_size=batchsize)
2048 else:
2049 # Set a dummy label for the blank value
2050                        # This value will be overwritten by "blank" again in the last label update below
2051                        # This is done as the vocabulary of the prediction network does not contain the "blank" token of RNNT
2052 last_label_without_blank_mask = last_label >= self._blank_index
2053 last_label_without_blank[last_label_without_blank_mask] = 0 # temp change of label
2054 last_label_without_blank[~last_label_without_blank_mask] = last_label[
2055 ~last_label_without_blank_mask
2056 ]
2057
2058 # Perform batch step prediction of decoder, getting new states and scores ("g")
2059 g, hidden_prime = self._pred_step(last_label_without_blank, hidden, batch_size=batchsize)
2060
2061 # Batched joint step - Output = [B, V + 1 + num-big-blanks]
2062 # If preserving per-frame confidence, log_normalize must be true
2063 logp = self._joint_step(f, g, log_normalize=True if self.preserve_frame_confidence else None)[
2064 :, 0, 0, :
2065 ]
2066
2067 if logp.dtype != torch.float32:
2068 logp = logp.float()
2069
2070 # Get index k, of max prob for batch
2071 v, k = logp.max(1)
2072 del g
2073
2074 # Update blank mask with current predicted blanks
2075 # This is accumulating blanks over all time steps T and all target steps min(max_symbols, U)
2076 k_is_blank = k == self._blank_index
2077 blank_mask.bitwise_or_(k_is_blank)
2078
2079 # If preserving alignments, check if sequence length of sample has been reached
2080 # before adding alignment
2081 if self.preserve_alignments:
2082 # Insert logprobs into last timestep per sample
2083 logp_vals = logp.to('cpu')
2084 logp_ids = logp_vals.max(1)[1]
2085 for batch_idx in range(batchsize):
2086 if time_idx < out_len[batch_idx]:
2087 hypotheses[batch_idx].alignments[-1].append(
2088 (logp_vals[batch_idx], logp_ids[batch_idx])
2089 )
2090 del logp_vals
2091
2092 # If preserving per-frame confidence, check if sequence length of sample has been reached
2093 # before adding confidence scores
2094 if self.preserve_frame_confidence:
2095 # Insert probabilities into last timestep per sample
2096 confidence = self._get_confidence(logp)
2097 for batch_idx in range(batchsize):
2098 if time_idx < out_len[batch_idx]:
2099 hypotheses[batch_idx].frame_confidence[-1].append(confidence[batch_idx])
2100 del logp
2101
2102 # If all samples predict / have predicted prior blanks, exit loop early
2103 # This is equivalent to if single sample predicted k
2104 if blank_mask.all():
2105 not_blank = False
2106 else:
2107 # Collect batch indices where blanks occurred now/past
2108 blank_indices = (blank_mask == 1).nonzero(as_tuple=False)
2109
2110 # Recover prior state for all samples which predicted blank now/past
2111 if hidden is not None:
2112 # LSTM has 2 states
2113 hidden_prime = self.decoder.batch_copy_states(hidden_prime, hidden, blank_indices)
2114
2115 elif len(blank_indices) > 0 and hidden is None:
2116 # Reset state if there were some blank and other non-blank predictions in batch
2117 # Original state is filled with zeros so we just multiply
2118 # LSTM has 2 states
2119 hidden_prime = self.decoder.batch_copy_states(hidden_prime, None, blank_indices, value=0.0)
2120
2121 # Recover prior predicted label for all samples which predicted blank now/past
2122 k[blank_indices] = last_label[blank_indices, 0]
2123
2124 # Update new label and hidden state for next iteration
2125 last_label = k.view(-1, 1)
2126 hidden = hidden_prime
2127
2128 # Update predicted labels, accounting for time mask
2129 # If blank was predicted even once, now or in the past,
2130 # Force the current predicted label to also be blank
2131                    # This ensures that blanks propagate across all timesteps
2132                    # once they have occurred (normally stopping condition of sample level loop).
2133 for kidx, ki in enumerate(k):
2134 if blank_mask[kidx] == 0:
2135 hypotheses[kidx].y_sequence.append(ki)
2136 hypotheses[kidx].timestep.append(time_idx)
2137 hypotheses[kidx].score += float(v[kidx])
2138
2139 symbols_added += 1
2140
2141 # If preserving alignments, convert the current Uj alignments into a torch.Tensor
2142 # Then preserve U at current timestep Ti
2143 # Finally, forward the timestep history to Ti+1 for that sample
2144 # All of this should only be done iff the current time index <= sample-level AM length.
2145 # Otherwise ignore and move to next sample / next timestep.
2146 if self.preserve_alignments:
2147
2148 # convert Ti-th logits into a torch array
2149 for batch_idx in range(batchsize):
2150
2151 # this checks if current timestep <= sample-level AM length
2152 # If current timestep > sample-level AM length, no alignments will be added
2153 # Therefore the list of Uj alignments is empty here.
2154 if len(hypotheses[batch_idx].alignments[-1]) > 0:
2155 hypotheses[batch_idx].alignments.append([]) # blank buffer for next timestep
2156
2157 # Do the same if preserving per-frame confidence
2158 if self.preserve_frame_confidence:
2159
2160 for batch_idx in range(batchsize):
2161 if len(hypotheses[batch_idx].frame_confidence[-1]) > 0:
2162 hypotheses[batch_idx].frame_confidence.append([]) # blank buffer for next timestep
2163
2164 # Remove trailing empty list of alignments at T_{am-len} x Uj
2165 if self.preserve_alignments:
2166 for batch_idx in range(batchsize):
2167 if len(hypotheses[batch_idx].alignments[-1]) == 0:
2168 del hypotheses[batch_idx].alignments[-1]
2169
2170 # Remove trailing empty list of confidence scores at T_{am-len} x Uj
2171 if self.preserve_frame_confidence:
2172 for batch_idx in range(batchsize):
2173 if len(hypotheses[batch_idx].frame_confidence[-1]) == 0:
2174 del hypotheses[batch_idx].frame_confidence[-1]
2175
2176 # Preserve states
2177 for batch_idx in range(batchsize):
2178 hypotheses[batch_idx].dec_state = self.decoder.batch_select_state(hidden, batch_idx)
2179
2180 return hypotheses
2181
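# The "dummy label" masking in `_greedy_decode_masked` exists because the
# prediction network's embedding table has no entry for blank ids, so blanks
# are temporarily replaced before the decoder step. A tensor-free sketch of
# that masking (the label values and blank_index below are hypothetical):

```python
def strip_blank_labels(last_labels, blank_index):
    """Replace any blank id (>= blank_index) with a dummy token 0, mirroring
    the `last_label_without_blank` masking above; the dummy value is never kept,
    it only makes the labels valid inputs for the prediction network."""
    return [0 if lbl >= blank_index else lbl for lbl in last_labels]

# sample 1 last emitted a blank, so it is fed the dummy token instead:
assert strip_blank_labels([1, 4, 3], blank_index=4) == [1, 0, 3]
```

# After the decoder step, the blank samples have their previous state and
# label restored anyway, so the dummy token never leaks into a hypothesis.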
2182
2183 @dataclass
2184 class GreedyRNNTInferConfig:
2185 max_symbols_per_step: Optional[int] = 10
2186 preserve_alignments: bool = False
2187 preserve_frame_confidence: bool = False
2188 confidence_method_cfg: Optional[ConfidenceMethodConfig] = ConfidenceMethodConfig()
2189
2190 def __post_init__(self):
2191 # OmegaConf.structured ensures that post_init check is always executed
2192 self.confidence_method_cfg = OmegaConf.structured(
2193 self.confidence_method_cfg
2194 if isinstance(self.confidence_method_cfg, ConfidenceMethodConfig)
2195 else ConfidenceMethodConfig(**self.confidence_method_cfg)
2196 )
2197
2198
2199 @dataclass
2200 class GreedyBatchedRNNTInferConfig:
2201 max_symbols_per_step: Optional[int] = 10
2202 preserve_alignments: bool = False
2203 preserve_frame_confidence: bool = False
2204 confidence_method_cfg: Optional[ConfidenceMethodConfig] = ConfidenceMethodConfig()
2205
2206 def __post_init__(self):
2207 # OmegaConf.structured ensures that post_init check is always executed
2208 self.confidence_method_cfg = OmegaConf.structured(
2209 self.confidence_method_cfg
2210 if isinstance(self.confidence_method_cfg, ConfidenceMethodConfig)
2211 else ConfidenceMethodConfig(**self.confidence_method_cfg)
2212 )
2213
2214
2215 class GreedyTDTInfer(_GreedyRNNTInfer):
2216 """A greedy TDT decoder.
2217
2218 Sequence level greedy decoding, performed auto-regressively.
2219
2220 Args:
2221 decoder_model: rnnt_utils.AbstractRNNTDecoder implementation.
2222 joint_model: rnnt_utils.AbstractRNNTJoint implementation.
2223 blank_index: int index of the blank token. Must be len(vocabulary) for TDT models.
2224 durations: a list containing durations for TDT.
2225 max_symbols_per_step: Optional int. The maximum number of symbols that can be added
2226 to a sequence in a single time step; if set to None then there is
2227 no limit.
2228 preserve_alignments: Bool flag which preserves the history of alignments generated during
2229 greedy decoding (sample / batched). When set to true, the Hypothesis will contain
2230 the non-null value for `alignments` in it. Here, `alignments` is a List of List of
2231 Tuple(Tensor (of length V + 1 + num-big-blanks), Tensor(scalar, label after argmax)).
2232 The length of the list corresponds to the Acoustic Length (T).
2233 Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more targets from a vocabulary.
2234 U is the number of target tokens for the current timestep Ti.
2235 preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores generated
2236 during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
2237 the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of List of floats.
2238 The length of the list corresponds to the Acoustic Length (T).
2239 Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more confidence scores.
2240 U is the number of target tokens for the current timestep Ti.
2241 confidence_method_cfg: A dict-like object which contains the method name and settings to compute per-frame
2242 confidence scores.
2243
2244 name: The method name (str).
2245 Supported values:
2246 - 'max_prob' for using the maximum token probability as a confidence.
2247 - 'entropy' for using a normalized entropy of a log-likelihood vector.
2248
2249 entropy_type: Which type of entropy to use (str). Used if confidence_method_cfg.name is set to `entropy`.
2250 Supported values:
2251 - 'gibbs' for the (standard) Gibbs entropy. If the alpha (ฮฑ) is provided,
2252 the formula is the following: H_ฮฑ = -sum_i((p^ฮฑ_i)*log(p^ฮฑ_i)).
2253 Note that for this entropy, the alpha should comply with the following inequality:
2254 (log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= ฮฑ <= (1+log(V-1))/log(V-1)
2255 where V is the model vocabulary size.
2256 - 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
2257 Tsallis entropy formula is the following: H_ฮฑ = 1/(ฮฑ-1)*(1-sum_i(p^ฮฑ_i)),
2258 where ฮฑ is a parameter. When ฮฑ == 1, it works like the Gibbs entropy.
2259 More: https://en.wikipedia.org/wiki/Tsallis_entropy
2260 - 'renyi' for the Rรฉnyi entropy.
2261 Rรฉnyi entropy formula is the following: H_ฮฑ = 1/(1-ฮฑ)*log_2(sum_i(p^ฮฑ_i)),
2262 where ฮฑ is a parameter. When ฮฑ == 1, it works like the Gibbs entropy.
2263 More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
2264
2265 alpha: Power scale for logsoftmax (ฮฑ for entropies). Here we restrict it to be > 0.
2266 When the alpha equals one, scaling is not applied to 'max_prob',
2267 and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
2268
2269 entropy_norm: A mapping of the entropy value to the interval [0,1].
2270 Supported values:
2271 - 'lin' for using the linear mapping.
2272 - 'exp' for using exponential mapping with linear shift.
2273 """
2274
2275 def __init__(
2276 self,
2277 decoder_model: rnnt_abstract.AbstractRNNTDecoder,
2278 joint_model: rnnt_abstract.AbstractRNNTJoint,
2279 blank_index: int,
2280 durations: list,
2281 max_symbols_per_step: Optional[int] = None,
2282 preserve_alignments: bool = False,
2283 preserve_frame_confidence: bool = False,
2284 confidence_method_cfg: Optional[DictConfig] = None,
2285 ):
2286 super().__init__(
2287 decoder_model=decoder_model,
2288 joint_model=joint_model,
2289 blank_index=blank_index,
2290 max_symbols_per_step=max_symbols_per_step,
2291 preserve_alignments=preserve_alignments,
2292 preserve_frame_confidence=preserve_frame_confidence,
2293 confidence_method_cfg=confidence_method_cfg,
2294 )
2295 self.durations = durations
2296
2297 @typecheck()
2298 def forward(
2299 self,
2300 encoder_output: torch.Tensor,
2301 encoded_lengths: torch.Tensor,
2302 partial_hypotheses: Optional[List[rnnt_utils.Hypothesis]] = None,
2303 ):
2304 """Returns a list of hypotheses given an input batch of the encoder hidden embedding.
2305 Output token is generated auto-regressively.
2306 Args:
2307 encoder_output: A tensor of size (batch, features, timesteps).
2308 encoded_lengths: list of int representing the length of each sequence in the
2309 output batch.
2310 Returns:
2311 packed list containing batch number of sentences (Hypotheses).
2312 """
2313 # Preserve decoder and joint training state
2314 decoder_training_state = self.decoder.training
2315 joint_training_state = self.joint.training
2316
2317 with torch.inference_mode():
2318 # Apply optional preprocessing
2319 encoder_output = encoder_output.transpose(1, 2) # (B, T, D)
2320
2321 self.decoder.eval()
2322 self.joint.eval()
2323
2324 hypotheses = []
2325 # Process each sequence independently
2326 with self.decoder.as_frozen(), self.joint.as_frozen():
2327 for batch_idx in range(encoder_output.size(0)):
2328 inseq = encoder_output[batch_idx, :, :].unsqueeze(1) # [T, 1, D]
2329 logitlen = encoded_lengths[batch_idx]
2330
2331 partial_hypothesis = partial_hypotheses[batch_idx] if partial_hypotheses is not None else None
2332 hypothesis = self._greedy_decode(inseq, logitlen, partial_hypotheses=partial_hypothesis)
2333 hypotheses.append(hypothesis)
2334
2335 # Pack results into Hypotheses
2336 packed_result = pack_hypotheses(hypotheses, encoded_lengths)
2337
2338 self.decoder.train(decoder_training_state)
2339 self.joint.train(joint_training_state)
2340
2341 return (packed_result,)
2342
2343 @torch.no_grad()
2344 def _greedy_decode(
2345 self, x: torch.Tensor, out_len: torch.Tensor, partial_hypotheses: Optional[rnnt_utils.Hypothesis] = None
2346 ):
2347 # x: [T, 1, D]
2348 # out_len: [seq_len]
2349
2350 # Initialize blank state and empty label set in Hypothesis
2351 hypothesis = rnnt_utils.Hypothesis(score=0.0, y_sequence=[], dec_state=None, timestep=[], last_token=None)
2352
2353 if partial_hypotheses is not None:
2354 hypothesis.last_token = partial_hypotheses.last_token
2355 hypothesis.y_sequence = (
2356 partial_hypotheses.y_sequence.cpu().tolist()
2357 if isinstance(partial_hypotheses.y_sequence, torch.Tensor)
2358 else partial_hypotheses.y_sequence
2359 )
2360 if partial_hypotheses.dec_state is not None:
2361 hypothesis.dec_state = self.decoder.batch_concat_states([partial_hypotheses.dec_state])
2362 hypothesis.dec_state = _states_to_device(hypothesis.dec_state, x.device)
2363
2364 if self.preserve_alignments:
2365 # Alignments is a 2-dimensional dangling list representing T x U
2366 hypothesis.alignments = [[]]
2367
2368 if self.preserve_frame_confidence:
2369 hypothesis.frame_confidence = [[]]
2370
2371 time_idx = 0
2372 while time_idx < out_len:
2373 # Extract encoder embedding at timestep t
2374 # f = x[time_idx, :, :].unsqueeze(0) # [1, 1, D]
2375 f = x.narrow(dim=0, start=time_idx, length=1)
2376
2377 # Setup exit flags and counter
2378 not_blank = True
2379 symbols_added = 0
2380
2381 need_loop = True
2382 # Loop while the predicted duration is 0, and we don't run out of max symbols per timestep
2383 while need_loop and (self.max_symbols is None or symbols_added < self.max_symbols):
2384 # In the first timestep, we initialize the network with RNNT Blank
2385 # In later timesteps, we provide previous predicted label as input.
2386 if hypothesis.last_token is None and hypothesis.dec_state is None:
2387 last_label = self._SOS
2388 else:
2389 last_label = label_collate([[hypothesis.last_token]])
2390
2391 # Perform prediction network and joint network steps.
2392 g, hidden_prime = self._pred_step(last_label, hypothesis.dec_state)
2393 # log_normalize must be False here since the joint output concatenates token and duration logits; they are normalized separately below
2394 logits = self._joint_step(f, g, log_normalize=False)
2395 logp = logits[0, 0, 0, : -len(self.durations)]
2396 if self.preserve_frame_confidence:
2397 logp = torch.log_softmax(logp, -1)
2398
2399 duration_logp = torch.log_softmax(logits[0, 0, 0, -len(self.durations) :], dim=-1)
2400 del g
2401
2402 # torch.max(0) op doesn't exist for FP16.
2403 if logp.dtype != torch.float32:
2404 logp = logp.float()
2405
2406 # get index k, of max prob
2407 v, k = logp.max(0)
2408 k = k.item() # K is the label at timestep t_s in inner loop, s >= 0.
2409
2410 d_v, d_k = duration_logp.max(0)
2411 d_k = d_k.item()
2412
2413 skip = self.durations[d_k]
2414
2415 if self.preserve_alignments:
2416 # insert logprobs into last timestep
2417 hypothesis.alignments[-1].append((logp.to('cpu'), torch.tensor(k, dtype=torch.int32)))
2418
2419 if self.preserve_frame_confidence:
2420 # insert confidence into last timestep
2421 hypothesis.frame_confidence[-1].append(self._get_confidence(logp))
2422
2423 del logp
2424
2425 # If blank token is predicted, exit inner loop, move onto next timestep t
2426 if k == self._blank_index:
2427 not_blank = False
2428 else:
2429 # Append token to label set, update RNN state.
2430 hypothesis.y_sequence.append(k)
2431 hypothesis.score += float(v)
2432 hypothesis.timestep.append(time_idx)
2433 hypothesis.dec_state = hidden_prime
2434 hypothesis.last_token = k
2435
2436 # Increment token counter.
2437 symbols_added += 1
2438 time_idx += skip
2439 need_loop = skip == 0
2440
2441 # this rarely happens, but we manually set `skip` to 1
2442 # if blank is emitted and duration=0 is predicted. This prevents possible
2443 # infinite loops.
2444 if skip == 0:
2445 skip = 1
2446
2447 if self.preserve_alignments:
2448 # convert Ti-th logits into a torch array
2449 hypothesis.alignments.append([]) # blank buffer for next timestep
2450
2451 if self.preserve_frame_confidence:
2452 hypothesis.frame_confidence.append([]) # blank buffer for next timestep
2453
2454 if symbols_added == self.max_symbols:
2455 time_idx += 1
2456
2457 # Remove trailing empty list of Alignments
2458 if self.preserve_alignments:
2459 if len(hypothesis.alignments[-1]) == 0:
2460 del hypothesis.alignments[-1]
2461
2462 # Remove trailing empty list of per-frame confidence
2463 if self.preserve_frame_confidence:
2464 if len(hypothesis.frame_confidence[-1]) == 0:
2465 del hypothesis.frame_confidence[-1]
2466
2467 # Unpack the hidden states
2468 hypothesis.dec_state = self.decoder.batch_select_state(hypothesis.dec_state, 0)
2469
2470 return hypothesis
2471
2472
2473 class GreedyBatchedTDTInfer(_GreedyRNNTInfer):
2474 """A batch level greedy TDT decoder.
2475 Batch level greedy decoding, performed auto-regressively.
2476 Args:
2477 decoder_model: rnnt_utils.AbstractRNNTDecoder implementation.
2478 joint_model: rnnt_utils.AbstractRNNTJoint implementation.
2479 blank_index: int index of the blank token. Must be len(vocabulary) for TDT models.
2480 durations: a list containing durations.
2481 max_symbols_per_step: Optional int. The maximum number of symbols that can be added
2482 to a sequence in a single time step; if set to None then there is
2483 no limit.
2484 preserve_alignments: Bool flag which preserves the history of alignments generated during
2485 greedy decoding (sample / batched). When set to true, the Hypothesis will contain
2486 the non-null value for `alignments` in it. Here, `alignments` is a List of List of
2487 Tuple(Tensor (of length V + 1 + num-big-blanks), Tensor(scalar, label after argmax)).
2488 The length of the list corresponds to the Acoustic Length (T).
2489 Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more targets from a vocabulary.
2490 U is the number of target tokens for the current timestep Ti.
2491 preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores generated
2492 during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
2493 the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of List of floats.
2494 The length of the list corresponds to the Acoustic Length (T).
2495 Each value in the list (Ti) is a torch.Tensor (U), representing 1 or more confidence scores.
2496 U is the number of target tokens for the current timestep Ti.
2497 confidence_method_cfg: A dict-like object which contains the method name and settings to compute per-frame
2498 confidence scores.
2499
2500 name: The method name (str).
2501 Supported values:
2502 - 'max_prob' for using the maximum token probability as a confidence.
2503 - 'entropy' for using a normalized entropy of a log-likelihood vector.
2504
2505 entropy_type: Which type of entropy to use (str). Used if confidence_method_cfg.name is set to `entropy`.
2506 Supported values:
2507 - 'gibbs' for the (standard) Gibbs entropy. If the alpha (ฮฑ) is provided,
2508 the formula is the following: H_ฮฑ = -sum_i((p^ฮฑ_i)*log(p^ฮฑ_i)).
2509 Note that for this entropy, the alpha should comply with the following inequality:
2510 (log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= ฮฑ <= (1+log(V-1))/log(V-1)
2511 where V is the model vocabulary size.
2512 - 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
2513 Tsallis entropy formula is the following: H_ฮฑ = 1/(ฮฑ-1)*(1-sum_i(p^ฮฑ_i)),
2514 where ฮฑ is a parameter. When ฮฑ == 1, it works like the Gibbs entropy.
2515 More: https://en.wikipedia.org/wiki/Tsallis_entropy
2516 - 'renyi' for the Rรฉnyi entropy.
2517 Rรฉnyi entropy formula is the following: H_ฮฑ = 1/(1-ฮฑ)*log_2(sum_i(p^ฮฑ_i)),
2518 where ฮฑ is a parameter. When ฮฑ == 1, it works like the Gibbs entropy.
2519 More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
2520
2521 alpha: Power scale for logsoftmax (ฮฑ for entropies). Here we restrict it to be > 0.
2522 When the alpha equals one, scaling is not applied to 'max_prob',
2523 and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
2524
2525 entropy_norm: A mapping of the entropy value to the interval [0,1].
2526 Supported values:
2527 - 'lin' for using the linear mapping.
2528 - 'exp' for using exponential mapping with linear shift.
2529 """
2530
2531 def __init__(
2532 self,
2533 decoder_model: rnnt_abstract.AbstractRNNTDecoder,
2534 joint_model: rnnt_abstract.AbstractRNNTJoint,
2535 blank_index: int,
2536 durations: List[int],
2537 max_symbols_per_step: Optional[int] = None,
2538 preserve_alignments: bool = False,
2539 preserve_frame_confidence: bool = False,
2540 confidence_method_cfg: Optional[DictConfig] = None,
2541 ):
2542 super().__init__(
2543 decoder_model=decoder_model,
2544 joint_model=joint_model,
2545 blank_index=blank_index,
2546 max_symbols_per_step=max_symbols_per_step,
2547 preserve_alignments=preserve_alignments,
2548 preserve_frame_confidence=preserve_frame_confidence,
2549 confidence_method_cfg=confidence_method_cfg,
2550 )
2551 self.durations = durations
2552
2553 # Depending on the availability of `blank_as_pad` support,
2554 # switch to the more efficient batch decoding technique
2555 if self.decoder.blank_as_pad:
2556 self._greedy_decode = self._greedy_decode_blank_as_pad
2557 else:
2558 self._greedy_decode = self._greedy_decode_masked
2559
2560 @typecheck()
2561 def forward(
2562 self,
2563 encoder_output: torch.Tensor,
2564 encoded_lengths: torch.Tensor,
2565 partial_hypotheses: Optional[List[rnnt_utils.Hypothesis]] = None,
2566 ):
2567 """Returns a list of hypotheses given an input batch of the encoder hidden embedding.
2568 Output token is generated auto-regressively.
2569 Args:
2570 encoder_output: A tensor of size (batch, features, timesteps).
2571 encoded_lengths: list of int representing the length of each sequence in the
2572 output batch.
2573 Returns:
2574 packed list containing batch number of sentences (Hypotheses).
2575 """
2576 # Preserve decoder and joint training state
2577 decoder_training_state = self.decoder.training
2578 joint_training_state = self.joint.training
2579
2580 with torch.inference_mode():
2581 # Apply optional preprocessing
2582 encoder_output = encoder_output.transpose(1, 2) # (B, T, D)
2583 logitlen = encoded_lengths
2584
2585 self.decoder.eval()
2586 self.joint.eval()
2587
2588 with self.decoder.as_frozen(), self.joint.as_frozen():
2589 inseq = encoder_output # [B, T, D]
2590 hypotheses = self._greedy_decode(
2591 inseq, logitlen, device=inseq.device, partial_hypotheses=partial_hypotheses
2592 )
2593
2594 # Pack the hypotheses results
2595 packed_result = pack_hypotheses(hypotheses, logitlen)
2596
2597 self.decoder.train(decoder_training_state)
2598 self.joint.train(joint_training_state)
2599
2600 return (packed_result,)
2601
2602 def _greedy_decode_blank_as_pad(
2603 self,
2604 x: torch.Tensor,
2605 out_len: torch.Tensor,
2606 device: torch.device,
2607 partial_hypotheses: Optional[List[rnnt_utils.Hypothesis]] = None,
2608 ):
2609 if partial_hypotheses is not None:
2610 raise NotImplementedError("`partial_hypotheses` support is not implemented")
2611
2612 with torch.inference_mode():
2613 # x: [B, T, D]
2614 # out_len: [B]
2615 # device: torch.device
2616
2617 # Initialize list of Hypothesis
2618 batchsize = x.shape[0]
2619 hypotheses = [
2620 rnnt_utils.Hypothesis(score=0.0, y_sequence=[], timestep=[], dec_state=None) for _ in range(batchsize)
2621 ]
2622
2623 # Initialize Hidden state matrix (shared by entire batch)
2624 hidden = None
2625
2626 # If alignments need to be preserved, register a dangling list to hold the values
2627 if self.preserve_alignments:
2628 # alignments is a 3-dimensional dangling list representing B x T x U
2629 for hyp in hypotheses:
2630 hyp.alignments = [[]]
2631
2632 # If confidence scores need to be preserved, register a dangling list to hold the values
2633 if self.preserve_frame_confidence:
2634 # frame_confidence is a 3-dimensional dangling list representing B x T x U
2635 for hyp in hypotheses:
2636 hyp.frame_confidence = [[]]
2637
2638 # Last Label buffer + Last Label without blank buffer
2639 # batch level equivalent of the last_label
2640 last_label = torch.full([batchsize, 1], fill_value=self._blank_index, dtype=torch.long, device=device)
2641
2642 # Mask buffers
2643 blank_mask = torch.full([batchsize], fill_value=0, dtype=torch.bool, device=device)
2644
2645 # Get max sequence length
2646 max_out_len = out_len.max()
2647
2648 # `skip` is the number of frames the next decoding step should advance by. When skip == 1,
2649 # the next decoding step simply uses the next input frame.
2650 skip = 1
2651 for time_idx in range(max_out_len):
2652 if skip > 1: # if skip > 1 at the current step, we decrement it and skip the current frame.
2653 skip -= 1
2654 continue
2655 f = x.narrow(dim=1, start=time_idx, length=1) # [B, 1, D]
2656
2657 # need_to_stay is a boolean indicating whether the next decoding step should remain in the same frame.
2658 need_to_stay = True
2659 symbols_added = 0
2660
2661 # Reset blank mask
2662 blank_mask.mul_(False)
2663
2664 # Update blank mask with time mask
2665 # Batch: [B, T, D], but Bi may have seq len < max(seq_lens_in_batch)
2666 # Forcibly mask with "blank" tokens for all samples where the current time step t >= seq_len
2667 blank_mask = time_idx >= out_len
2668
2669 # Start inner loop
2670 while need_to_stay and (self.max_symbols is None or symbols_added < self.max_symbols):
2671 # Batch prediction and joint network steps
2672 # If very first prediction step, submit SOS tag (blank) to pred_step.
2673 # This feeds a zero tensor as input to AbstractRNNTDecoder to prime the state
2674 if time_idx == 0 and symbols_added == 0 and hidden is None:
2675 g, hidden_prime = self._pred_step(self._SOS, hidden, batch_size=batchsize)
2676 else:
2677 # Perform batch step prediction of decoder, getting new states and scores ("g")
2678 g, hidden_prime = self._pred_step(last_label, hidden, batch_size=batchsize)
2679
2680 # Batched joint step - Output = [B, V + 1 + num-big-blanks]
2681 # Note: log_normalize must not be True here since the joiner output is a concatenation of both token logits and duration logits,
2682 # and they need to be normalized independently.
2683 joined = self._joint_step(f, g, log_normalize=None)
2684 logp = joined[:, 0, 0, : -len(self.durations)]
2685 duration_logp = joined[:, 0, 0, -len(self.durations) :]
2686
2687 if logp.dtype != torch.float32:
2688 logp = logp.float()
2689 duration_logp = duration_logp.float()
2690
2691 # get the max for both token and duration predictions.
2692 v, k = logp.max(1)
2693 dv, dk = duration_logp.max(1)
2694
2695 # here we set the skip value to be the minimum of all predicted durations, hence the "torch.min(dk)" call there.
2696 # Please refer to Section 5.2 of our paper https://arxiv.org/pdf/2304.06795.pdf for explanation of this.
2697 skip = self.durations[int(torch.min(dk))]
2698
2699 # this is a special case: if all samples in the batch emit blanks, we require that skip be at least 1
2700 # so we don't loop forever at the current frame.
2701 if blank_mask.all():
2702 if skip == 0:
2703 skip = 1
2704
2705 need_to_stay = skip == 0
2706 del g
2707
2708 # Update blank mask with current predicted blanks
2709 # This is accumulating blanks over all time steps T and all target steps min(max_symbols, U)
2710 k_is_blank = k == self._blank_index
2711 blank_mask.bitwise_or_(k_is_blank)
2712
2713 del k_is_blank
2714 del logp, duration_logp
2715
2716 # If all samples predict / have predicted prior blanks, exit loop early
2717 # This is equivalent to if single sample predicted k
2718 if not blank_mask.all():
2719 # Collect batch indices where blanks occurred now/past
2720 blank_indices = (blank_mask == 1).nonzero(as_tuple=False)
2721
2722 # Recover prior state for all samples which predicted blank now/past
2723 if hidden is not None:
2724 hidden_prime = self.decoder.batch_copy_states(hidden_prime, hidden, blank_indices)
2725
2726 elif len(blank_indices) > 0 and hidden is None:
2727 # Reset state if there were some blank and other non-blank predictions in batch
2728 # Original state is filled with zeros so we just multiply
2729 # LSTM has 2 states
2730 hidden_prime = self.decoder.batch_copy_states(hidden_prime, None, blank_indices, value=0.0)
2731
2732 # Recover prior predicted label for all samples which predicted blank now/past
2733 k[blank_indices] = last_label[blank_indices, 0]
2734
2735 # Update new label and hidden state for next iteration
2736 last_label = k.clone().view(-1, 1)
2737 hidden = hidden_prime
2738
2739 # Update predicted labels, accounting for time mask
2740 # If blank was predicted even once, now or in the past,
2741 # Force the current predicted label to also be blank
2742 # This ensures that blanks propagate across all timesteps
2743 # once they have occurred (normally the stopping condition of the sample-level loop).
2744 for kidx, ki in enumerate(k):
2745 if blank_mask[kidx] == 0:
2746 hypotheses[kidx].y_sequence.append(ki)
2747 hypotheses[kidx].timestep.append(time_idx)
2748 hypotheses[kidx].score += float(v[kidx])
2749
2750 symbols_added += 1
2751
2752 # Remove trailing empty list of alignments at T_{am-len} x Uj
2753 if self.preserve_alignments:
2754 for batch_idx in range(batchsize):
2755 if len(hypotheses[batch_idx].alignments[-1]) == 0:
2756 del hypotheses[batch_idx].alignments[-1]
2757
2758 # Remove trailing empty list of confidence scores at T_{am-len} x Uj
2759 if self.preserve_frame_confidence:
2760 for batch_idx in range(batchsize):
2761 if len(hypotheses[batch_idx].frame_confidence[-1]) == 0:
2762 del hypotheses[batch_idx].frame_confidence[-1]
2763
2764 # Preserve states
2765 for batch_idx in range(batchsize):
2766 hypotheses[batch_idx].dec_state = self.decoder.batch_select_state(hidden, batch_idx)
2767
2768 return hypotheses
2769
2770 def _greedy_decode_masked(
2771 self,
2772 x: torch.Tensor,
2773 out_len: torch.Tensor,
2774 device: torch.device,
2775 partial_hypotheses: Optional[List[rnnt_utils.Hypothesis]] = None,
2776 ):
2777 raise NotImplementedError("masked greedy-batched decode is not supported for TDT models.")
2778
[end of nemo/collections/asr/parts/submodules/rnnt_greedy_decoding.py]
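The duration-conditioned time advancement used by the TDT greedy decoders in the file above can be sketched in plain Python. This is an illustrative toy, not the NeMo API: the `steps` oracle and the `BLANK` constant are hypothetical stand-ins for the prediction/joint networks and the blank index. It shows the two behaviors the comments describe: the loop stays on the current frame only while the predicted duration is 0, and the frame index always advances by at least one step so decoding cannot stall.

```python
# Toy sketch of TDT greedy time advancement (not the NeMo API).
# `steps(t, s)` stands in for the prediction+joint networks: it returns the
# (token, duration) argmax for frame t after s symbols emitted at that frame.

BLANK = -1  # hypothetical blank id for this sketch

def tdt_time_advance(steps, out_len, max_symbols=10):
    emitted = []
    time_idx = 0
    while time_idx < out_len:
        symbols_added = 0
        skip = 0
        # stay on the same frame only while the predicted duration is 0
        while skip == 0 and symbols_added < max_symbols:
            token, skip = steps(time_idx, symbols_added)
            if token != BLANK:
                emitted.append((token, time_idx))
            symbols_added += 1
        # never stall: advance at least one frame even if duration 0 persisted
        time_idx += max(skip, 1)
    return emitted

# usage: emit two tokens on frame 0 then jump 2 frames, predict only blanks
# with duration 0 at frame 2 (handled by the max_symbols cap), emit on frame 3
table = {(0, 0): (5, 0), (0, 1): (7, 2), (3, 0): (9, 1)}
steps = lambda t, s: table.get((t, s), (BLANK, 0))
print(tdt_time_advance(steps, out_len=4))  # [(5, 0), (7, 0), (9, 3)]
```

The `max(skip, 1)` at the end of each frame plays the role of the "skip == 0" safeguard commented in `_greedy_decode`: without it, a blank with duration 0 would pin the loop to the same frame forever.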
[start of nemo/collections/asr/parts/utils/asr_confidence_utils.py]
1 # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import math
16 from abc import ABC, abstractmethod
17 from dataclasses import dataclass
18 from functools import partial
19 from typing import List, Optional
20
21 import torch
22 from omegaconf import DictConfig, OmegaConf
23
24 from nemo.collections.asr.parts.utils.rnnt_utils import Hypothesis
25 from nemo.utils import logging
26
27
28 class ConfidenceMethodConstants:
29 NAMES = ("max_prob", "entropy")
30 ENTROPY_TYPES = ("gibbs", "tsallis", "renyi")
31 ENTROPY_NORMS = ("lin", "exp")
32
33 @classmethod
34 def print(cls):
35 return (
36 cls.__name__
37 + ": "
38 + str({"NAMES": cls.NAMES, "ENTROPY_TYPES": cls.ENTROPY_TYPES, "ENTROPY_NORMS": cls.ENTROPY_NORMS})
39 )
40
41
42 class ConfidenceConstants:
43 AGGREGATIONS = ("mean", "min", "max", "prod")
44
45 @classmethod
46 def print(cls):
47 return cls.__name__ + ": " + str({"AGGREGATIONS": cls.AGGREGATIONS})
48
49
50 @dataclass
51 class ConfidenceMethodConfig:
52 """A Config which contains the method name and settings to compute per-frame confidence scores.
53
54 Args:
55 name: The method name (str).
56 Supported values:
57 - 'max_prob' for using the maximum token probability as a confidence.
58 - 'entropy' for using a normalized entropy of a log-likelihood vector.
59
60 entropy_type: Which type of entropy to use (str).
61 Used if confidence_method_cfg.name is set to `entropy`.
62 Supported values:
63 - 'gibbs' for the (standard) Gibbs entropy. If the alpha (ฮฑ) is provided,
64 the formula is the following: H_ฮฑ = -sum_i((p^ฮฑ_i)*log(p^ฮฑ_i)).
65 Note that for this entropy, the alpha should comply with the following inequality:
66 (log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= ฮฑ <= (1+log(V-1))/log(V-1)
67 where V is the model vocabulary size.
68 - 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
69 Tsallis entropy formula is the following: H_ฮฑ = 1/(ฮฑ-1)*(1-sum_i(p^ฮฑ_i)),
70 where ฮฑ is a parameter. When ฮฑ == 1, it works like the Gibbs entropy.
71 More: https://en.wikipedia.org/wiki/Tsallis_entropy
72 - 'renyi' for the Rรฉnyi entropy.
73 Rรฉnyi entropy formula is the following: H_ฮฑ = 1/(1-ฮฑ)*log_2(sum_i(p^ฮฑ_i)),
74 where ฮฑ is a parameter. When ฮฑ == 1, it works like the Gibbs entropy.
75 More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
76
77 alpha: Power scale for logsoftmax (ฮฑ for entropies). Here we restrict it to be > 0.
78 When the alpha equals one, scaling is not applied to 'max_prob',
79 and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
80
81 entropy_norm: A mapping of the entropy value to the interval [0,1].
82 Supported values:
83 - 'lin' for using the linear mapping.
84 - 'exp' for using exponential mapping with linear shift.
85 """
86
87 name: str = "entropy"
88 entropy_type: str = "tsallis"
89 alpha: float = 0.33
90 entropy_norm: str = "exp"
91 temperature: str = "DEPRECATED"
92
93 def __post_init__(self):
94 if self.temperature != "DEPRECATED":
95 # self.temperature has type str
96 self.alpha = float(self.temperature)
97 self.temperature = "DEPRECATED"
98 if self.name not in ConfidenceMethodConstants.NAMES:
99 raise ValueError(
100 f"`name` must be one of the following: "
101 f"{'`' + '`, `'.join(ConfidenceMethodConstants.NAMES) + '`'}. Provided: `{self.name}`"
102 )
103 if self.entropy_type not in ConfidenceMethodConstants.ENTROPY_TYPES:
104 raise ValueError(
105 f"`entropy_type` must be one of the following: "
106 f"{'`' + '`, `'.join(ConfidenceMethodConstants.ENTROPY_TYPES) + '`'}. Provided: `{self.entropy_type}`"
107 )
108 if self.alpha <= 0.0:
109 raise ValueError(f"`alpha` must be > 0. Provided: {self.alpha}")
110 if self.entropy_norm not in ConfidenceMethodConstants.ENTROPY_NORMS:
111 raise ValueError(
112 f"`entropy_norm` must be one of the following: "
113 f"{'`' + '`, `'.join(ConfidenceMethodConstants.ENTROPY_NORMS) + '`'}. Provided: `{self.entropy_norm}`"
114 )
115
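The Gibbs, Tsallis, and Rényi formulas quoted in the docstring above can be checked numerically. Below is a minimal standalone sketch (plain-Python helpers written for this illustration, not the NeMo implementation) verifying the documented α → 1 behavior: both parameterized entropies reduce to the Shannon/Gibbs entropy, keeping in mind that the Rényi formula here uses log base 2.

```python
import math

def gibbs_entropy(p):
    # Shannon/Gibbs entropy: H = -sum_i(p_i * log(p_i))
    return -sum(x * math.log(x) for x in p if x > 0)

def tsallis_entropy(p, alpha):
    # H_a = 1/(a-1) * (1 - sum_i(p_i^a))
    return (1.0 - sum(x ** alpha for x in p)) / (alpha - 1.0)

def renyi_entropy(p, alpha):
    # H_a = 1/(1-a) * log2(sum_i(p_i^a))
    return math.log2(sum(x ** alpha for x in p)) / (1.0 - alpha)

p = [0.7, 0.2, 0.1]
shannon = gibbs_entropy(p)
# as alpha -> 1, both entropies recover the Shannon entropy
# (Renyi is defined with log2 here, so convert bits back to nats)
assert abs(tsallis_entropy(p, 1.0001) - shannon) < 1e-3
assert abs(renyi_entropy(p, 1.0001) * math.log(2) - shannon) < 1e-3
```

Since a uniform distribution maximizes all three entropies, normalizing by the maximum (as the `lin`/`exp` `entropy_norm` options do) maps them into [0, 1], with confidence lowest exactly when the token distribution is uniform.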
116
117 @dataclass
118 class ConfidenceConfig:
119 """A config which contains the following key-value pairs related to confidence scores.
120
121 Args:
122 preserve_frame_confidence: Bool flag which preserves the history of per-frame confidence scores
123 generated during decoding. When set to true, the Hypothesis will contain
124 the non-null value for `frame_confidence` in it. Here, `frame_confidence` is a List of floats.
125 preserve_token_confidence: Bool flag which preserves the history of per-token confidence scores
126 generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
127 the non-null value for `token_confidence` in it. Here, `token_confidence` is a List of floats.
128
129 The length of the list corresponds to the number of recognized tokens.
130 preserve_word_confidence: Bool flag which preserves the history of per-word confidence scores
131 generated during greedy decoding (sample / batched). When set to true, the Hypothesis will contain
132 the non-null value for `word_confidence` in it. Here, `word_confidence` is a List of floats.
133
134 The length of the list corresponds to the number of recognized words.
135 exclude_blank: Bool flag indicating that blank token confidence scores are to be excluded
136 from the `token_confidence`.
137 aggregation: Which aggregation type to use for collapsing per-token confidence into per-word confidence.
138 Valid options are `mean`, `min`, `max`, `prod`.
139 method_cfg: A dict-like object which contains the method name and settings to compute per-frame
140 confidence scores.
141
142 name: The method name (str).
143 Supported values:
144 - 'max_prob' for using the maximum token probability as a confidence.
145 - 'entropy' for using a normalized entropy of a log-likelihood vector.
146
147 entropy_type: Which type of entropy to use (str). Used if confidence_method_cfg.name is set to `entropy`.
148 Supported values:
149 - 'gibbs' for the (standard) Gibbs entropy. If the alpha (ฮฑ) is provided,
150 the formula is the following: H_ฮฑ = -sum_i((p^ฮฑ_i)*log(p^ฮฑ_i)).
151 Note that for this entropy, the alpha should comply with the following inequality:
152 (log(V)+2-sqrt(log^2(V)+4))/(2*log(V)) <= ฮฑ <= (1+log(V-1))/log(V-1)
153 where V is the model vocabulary size.
154 - 'tsallis' for the Tsallis entropy with the Boltzmann constant one.
155 Tsallis entropy formula is the following: H_ฮฑ = 1/(ฮฑ-1)*(1-sum_i(p^ฮฑ_i)),
156 where ฮฑ is a parameter. When ฮฑ == 1, it works like the Gibbs entropy.
157 More: https://en.wikipedia.org/wiki/Tsallis_entropy
158 - 'renyi' for the Rรฉnyi entropy.
159 Rรฉnyi entropy formula is the following: H_ฮฑ = 1/(1-ฮฑ)*log_2(sum_i(p^ฮฑ_i)),
160 where ฮฑ is a parameter. When ฮฑ == 1, it works like the Gibbs entropy.
161 More: https://en.wikipedia.org/wiki/R%C3%A9nyi_entropy
162
163 alpha: Power scale for logsoftmax (α for entropies). Here we restrict it to be > 0.
164 When the alpha equals one, scaling is not applied to 'max_prob',
165 and any entropy type behaves like the Shannon entropy: H = -sum_i(p_i*log(p_i))
166
167 entropy_norm: A mapping of the entropy value to the interval [0,1].
168 Supported values:
169 - 'lin' for using the linear mapping.
170 - 'exp' for using exponential mapping with linear shift.
171 """
172
173 preserve_frame_confidence: bool = False
174 preserve_token_confidence: bool = False
175 preserve_word_confidence: bool = False
176 exclude_blank: bool = True
177 aggregation: str = "min"
178 method_cfg: ConfidenceMethodConfig = ConfidenceMethodConfig()
179
180 def __post_init__(self):
181 # OmegaConf.structured ensures that post_init check is always executed
182 self.method_cfg = OmegaConf.structured(
183 self.method_cfg
184 if isinstance(self.method_cfg, ConfidenceMethodConfig)
185 else ConfidenceMethodConfig(**self.method_cfg)
186 )
187 if self.aggregation not in ConfidenceConstants.AGGREGATIONS:
188 raise ValueError(
189 f"`aggregation` has to be one of the following: "
190 f"{'`' + '`, `'.join(ConfidenceConstants.AGGREGATIONS) + '`'}. Provided: `{self.aggregation}`"
191 )
192
193
194 def get_confidence_measure_bank():
195 """Generate a dictionary with confidence measure functionals.
196
197 Supported confidence measures:
198 max_prob: normalized maximum probability
199 entropy_gibbs_lin: Gibbs entropy with linear normalization
200 entropy_gibbs_exp: Gibbs entropy with exponential normalization
201 entropy_tsallis_lin: Tsallis entropy with linear normalization
202 entropy_tsallis_exp: Tsallis entropy with exponential normalization
203 entropy_renyi_lin: Rényi entropy with linear normalization
204 entropy_renyi_exp: Rényi entropy with exponential normalization
205
206 Returns:
207 dictionary with lambda functions.
208 """
209 # helper functions
210 # Gibbs entropy is implemented without alpha
211 neg_entropy_gibbs = lambda x: (x.exp() * x).sum(-1)
212 neg_entropy_alpha = lambda x, t: (x * t).exp().sum(-1)
213 neg_entropy_alpha_gibbs = lambda x, t: ((x * t).exp() * x).sum(-1)
214 # too big for a lambda
215 def entropy_tsallis_exp(x, v, t):
216 exp_neg_max_ent = math.exp((1 - math.pow(v, 1 - t)) / (1 - t))
217 return (((1 - neg_entropy_alpha(x, t)) / (1 - t)).exp() - exp_neg_max_ent) / (1 - exp_neg_max_ent)
218
219 def entropy_gibbs_exp(x, v, t):
220 exp_neg_max_ent = math.pow(v, -t * math.pow(v, 1 - t))
221 return ((neg_entropy_alpha_gibbs(x, t) * t).exp() - exp_neg_max_ent) / (1 - exp_neg_max_ent)
222
223 # use Gibbs entropies for Tsallis and Rényi with t == 1.0
224 entropy_gibbs_lin_baseline = lambda x, v: 1 + neg_entropy_gibbs(x) / math.log(v)
225 entropy_gibbs_exp_baseline = lambda x, v: (neg_entropy_gibbs(x).exp() * v - 1) / (v - 1)
226 # fill the measure bank
227 confidence_measure_bank = {}
228 # Maximum probability measure is implemented without alpha
229 confidence_measure_bank["max_prob"] = (
230 lambda x, v, t: (x.max(dim=-1)[0].exp() * v - 1) / (v - 1)
231 if t == 1.0
232 else ((x.max(dim=-1)[0] * t).exp() * math.pow(v, t) - 1) / (math.pow(v, t) - 1)
233 )
234 confidence_measure_bank["entropy_gibbs_lin"] = (
235 lambda x, v, t: entropy_gibbs_lin_baseline(x, v)
236 if t == 1.0
237 else 1 + neg_entropy_alpha_gibbs(x, t) / math.log(v) / math.pow(v, 1 - t)
238 )
239 confidence_measure_bank["entropy_gibbs_exp"] = (
240 lambda x, v, t: entropy_gibbs_exp_baseline(x, v) if t == 1.0 else entropy_gibbs_exp(x, v, t)
241 )
242 confidence_measure_bank["entropy_tsallis_lin"] = (
243 lambda x, v, t: entropy_gibbs_lin_baseline(x, v)
244 if t == 1.0
245 else 1 + (1 - neg_entropy_alpha(x, t)) / (math.pow(v, 1 - t) - 1)
246 )
247 confidence_measure_bank["entropy_tsallis_exp"] = (
248 lambda x, v, t: entropy_gibbs_exp_baseline(x, v) if t == 1.0 else entropy_tsallis_exp(x, v, t)
249 )
250 confidence_measure_bank["entropy_renyi_lin"] = (
251 lambda x, v, t: entropy_gibbs_lin_baseline(x, v)
252 if t == 1.0
253 else 1 + neg_entropy_alpha(x, t).log2() / (t - 1) / math.log(v, 2)
254 )
255 confidence_measure_bank["entropy_renyi_exp"] = (
256 lambda x, v, t: entropy_gibbs_exp_baseline(x, v)
257 if t == 1.0
258 else (neg_entropy_alpha(x, t).pow(1 / (t - 1)) * v - 1) / (v - 1)
259 )
260 return confidence_measure_bank
261
262
263 def get_confidence_aggregation_bank():
264 """Generate a dictionary with confidence aggregation functions.
265
266 Supported confidence aggregation functions:
267 min: minimum
268 max: maximum
269 mean: arithmetic mean
270 prod: product
271
272 Returns:
273 dictionary with functions.
274 """
275 confidence_aggregation_bank = {"mean": lambda x: sum(x) / len(x), "min": min, "max": max}
276 # python 3.7 and earlier do not have math.prod
277 if hasattr(math, "prod"):
278 confidence_aggregation_bank["prod"] = math.prod
279 else:
280 import operator
281 from functools import reduce
282
283 confidence_aggregation_bank["prod"] = lambda x: reduce(operator.mul, x, 1)
284 return confidence_aggregation_bank
285
286
287 class ConfidenceMethodMixin(ABC):
288 """Confidence Method Mixin class.
289
290 It initializes per-frame confidence method.
291 """
292
293 def _init_confidence_method(self, confidence_method_cfg: Optional[DictConfig] = None):
294 """Initialize per-frame confidence method from config.
295 """
296 # OmegaConf.structured ensures that post_init check is always executed
297 confidence_method_cfg = OmegaConf.structured(
298 ConfidenceMethodConfig()
299 if confidence_method_cfg is None
300 else ConfidenceMethodConfig(**confidence_method_cfg)
301 )
302
303 # set confidence calculation method
304 # we suppose that self.blank_id == len(vocabulary)
305 self.num_tokens = (self.blank_id if hasattr(self, "blank_id") else self._blank_index) + 1
306 self.alpha = confidence_method_cfg.alpha
307
308 # init confidence measure bank
309 self.confidence_measure_bank = get_confidence_measure_bank()
310
311 measure = None
312 # construct measure_name
313 measure_name = ""
314 if confidence_method_cfg.name == "max_prob":
315 measure_name = "max_prob"
316 elif confidence_method_cfg.name == "entropy":
317 measure_name = '_'.join(
318 [confidence_method_cfg.name, confidence_method_cfg.entropy_type, confidence_method_cfg.entropy_norm]
319 )
320 else:
321 raise ValueError(f"Unsupported `confidence_method_cfg.name`: `{confidence_method_cfg.name}`")
322 if measure_name not in self.confidence_measure_bank:
323 raise ValueError(f"Unsupported measure setup: `{measure_name}`")
324 measure = partial(self.confidence_measure_bank[measure_name], v=self.num_tokens, t=self.alpha)
325 self._get_confidence = lambda x: measure(torch.nan_to_num(x)).tolist()
326
327
328 class ConfidenceMixin(ABC):
329 """Confidence Mixin class.
330
331 It is responsible for confidence estimation method initialization and high-level confidence score calculation.
332 """
333
334 def _init_confidence(self, confidence_cfg: Optional[DictConfig] = None):
335 """Initialize confidence-related fields and confidence aggregation function from config.
336 """
337 # OmegaConf.structured ensures that post_init check is always executed
338 confidence_cfg = OmegaConf.structured(
339 ConfidenceConfig() if confidence_cfg is None else ConfidenceConfig(**confidence_cfg)
340 )
341 self.confidence_method_cfg = confidence_cfg.method_cfg
342
343 # extract the config
344 self.preserve_word_confidence = confidence_cfg.get('preserve_word_confidence', False)
345 # set preserve_frame_confidence and preserve_token_confidence to True
346 # if preserve_word_confidence is True
347 self.preserve_token_confidence = (
348 confidence_cfg.get('preserve_token_confidence', False) | self.preserve_word_confidence
349 )
350 # set preserve_frame_confidence to True if preserve_token_confidence is True
351 self.preserve_frame_confidence = (
352 confidence_cfg.get('preserve_frame_confidence', False) | self.preserve_token_confidence
353 )
354 self.exclude_blank_from_confidence = confidence_cfg.get('exclude_blank', True)
355 self.word_confidence_aggregation = confidence_cfg.get('aggregation', "min")
356
357 # define aggregation functions
358 self.confidence_aggregation_bank = get_confidence_aggregation_bank()
359 self._aggregate_confidence = self.confidence_aggregation_bank[self.word_confidence_aggregation]
360
361 # Update preserve frame confidence
362 if self.preserve_frame_confidence is False:
363 if self.cfg.strategy in ['greedy', 'greedy_batch']:
364 self.preserve_frame_confidence = self.cfg.greedy.get('preserve_frame_confidence', False)
365 # OmegaConf.structured ensures that post_init check is always executed
366 confidence_method_cfg = OmegaConf.structured(self.cfg.greedy).get('confidence_method_cfg', None)
367 self.confidence_method_cfg = (
368 OmegaConf.structured(ConfidenceMethodConfig())
369 if confidence_method_cfg is None
370 else OmegaConf.structured(ConfidenceMethodConfig(**confidence_method_cfg))
371 )
372
373 @abstractmethod
374 def compute_confidence(self, hypotheses_list: List[Hypothesis]) -> List[Hypothesis]:
375 """Computes high-level (per-token and/or per-word) confidence scores for a list of hypotheses.
376 Assumes that `frame_confidence` is present in the hypotheses.
377
378 Args:
379 hypotheses_list: List of Hypothesis.
380
381 Returns:
382 A list of hypotheses with high-level confidence scores.
383 """
384 raise NotImplementedError()
385
386 @abstractmethod
387 def _aggregate_token_confidence(self, hypothesis: Hypothesis) -> List[float]:
388 """Implemented by subclass in order to aggregate token confidence to a word-level confidence.
389
390 Args:
391 hypothesis: Hypothesis
392
393 Returns:
394 A list of word-level confidence scores.
395 """
396 raise NotImplementedError()
397
398 def _aggregate_token_confidence_chars(self, words: List[str], token_confidence: List[float]) -> List[float]:
399 """Implementation of token confidence aggregation for character-based models.
400
401 Args:
402 words: List of words of a hypothesis.
403 token_confidence: List of token-level confidence scores of a hypothesis.
404
405 Returns:
406 A list of word-level confidence scores.
407 """
408 word_confidence = []
409 i = 0
410 for word in words:
411 word_len = len(word)
412 word_confidence.append(self._aggregate_confidence(token_confidence[i : i + word_len]))
413 # we assume that there is exactly one space token between words and exclude it from word confidence
414 i += word_len + 1
415 return word_confidence
416
417 def _aggregate_token_confidence_subwords_sentencepiece(
418 self, words: List[str], token_confidence: List[float], token_ids: List[int]
419 ) -> List[float]:
420 """Implementation of token confidence aggregation for subword-based models.
421
422 **Note**: Only supports Sentencepiece based tokenizers !
423
424 Args:
425 words: List of words of a hypothesis.
426 token_confidence: List of token-level confidence scores of a hypothesis.
427 token_ids: List of token ids of a hypothesis.
428
429 Returns:
430 A list of word-level confidence scores.
431 """
432 word_confidence = []
433 # run only if there are final words
434 if len(words) > 0:
435 j = 0
436 prev_unk = False
437 prev_underline = False
438 for i, token_id in enumerate(token_ids):
439 token = self.decode_ids_to_tokens([int(token_id)])[0]
440 token_text = self.decode_tokens_to_str([int(token_id)])
441 # treat `<unk>` as a separate word regardless of the next token
442 # to match the result of `tokenizer.ids_to_text`
443 if (token != token_text or prev_unk) and i > j:
444 # do not add confidence for `▁` if the current token starts with `▁`
445 # to match the result of `tokenizer.ids_to_text`
446 if not prev_underline:
447 word_confidence.append(self._aggregate_confidence(token_confidence[j:i]))
448 j = i
449 prev_unk = token == '<unk>'
450 prev_underline = token == '▁'
451 if not prev_underline:
452 word_confidence.append(self._aggregate_confidence(token_confidence[j : len(token_ids)]))
453 if len(words) != len(word_confidence):
454 raise RuntimeError(
455 f"""Something went wrong with word-level confidence aggregation.\n
456 Please check these values for debugging:\n
457 len(words): {len(words)},\n
458 len(word_confidence): {len(word_confidence)},\n
459 recognized text: `{' '.join(words)}`"""
460 )
461 return word_confidence
462
[end of nemo/collections/asr/parts/utils/asr_confidence_utils.py]
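For reference, the linear-normalized Gibbs entropy measure can be exercised in isolation. The sketch below is a hypothetical standalone reimplementation of the `entropy_gibbs_lin` baseline formula (operating on a plain list of log-probabilities instead of a tensor), not the NeMo API itself: a uniform distribution yields minimum confidence, while a peaked distribution yields confidence close to 1.

```python
import math

# Standalone sketch (assumption: mirrors entropy_gibbs_lin_baseline above):
# confidence = 1 - H / log(V), where H = -sum_i p_i * log(p_i).
def entropy_gibbs_lin_confidence(log_probs, vocab_size):
    neg_entropy = sum(math.exp(lp) * lp for lp in log_probs)  # sum_i p_i*log(p_i)
    return 1 + neg_entropy / math.log(vocab_size)

v = 4
uniform = [math.log(1.0 / v)] * v                    # maximum entropy -> ~0.0
peaked = [math.log(0.997)] + [math.log(0.001)] * 3   # low entropy -> close to 1.0
print(entropy_gibbs_lin_confidence(uniform, v))
print(entropy_gibbs_lin_confidence(peaked, v))
```

This is the same mapping of entropy onto [0, 1] that the bank builds for `t == 1.0`; the alpha-scaled variants only change how the log-probabilities are tempered before the sum.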
[start of nemo/collections/common/parts/adapter_modules.py]
1 # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from dataclasses import dataclass, is_dataclass
16 from typing import Any, Optional
17
18 from hydra.utils import instantiate
19 from omegaconf import OmegaConf
20 from torch import nn as nn
21
22 from nemo.collections.common.parts.utils import activation_registry
23 from nemo.core.classes.mixins import access_mixins, adapter_mixin_strategies
24
25
26 class AdapterModuleUtil(access_mixins.AccessMixin):
27 """
28 Base class of Adapter Modules, providing common functionality to all Adapter Modules.
29 """
30
31 def setup_adapter_strategy(self, adapter_strategy: Optional[adapter_mixin_strategies.AbstractAdapterStrategy]):
32 """
33 Setup adapter strategy of this class, enabling dynamic change in the way the adapter output is
34 merged with the input.
35
36 When called successfully, will assign the variable `adapter_strategy` to the module.
37
38 Args:
39 adapter_strategy: Can be a None or an implementation of AbstractAdapterStrategy.
40 """
41 # set default adapter strategy
42 if adapter_strategy is None:
43 adapter_strategy = self.get_default_strategy_config()
44
45 if is_dataclass(adapter_strategy):
46 adapter_strategy = OmegaConf.structured(adapter_strategy)
47 OmegaConf.set_struct(adapter_strategy, False)
48
49 # The config must have the `_target_` field pointing to the actual adapter strategy class
50 # which will load that strategy dynamically to this module.
51 if isinstance(adapter_strategy, dict) or OmegaConf.is_config(adapter_strategy):
52 self.adapter_strategy = instantiate(adapter_strategy)
53 elif isinstance(adapter_strategy, adapter_mixin_strategies.AbstractAdapterStrategy):
54 self.adapter_strategy = adapter_strategy
55 else:
56 raise AttributeError(f'`adapter_strategy` provided is invalid : {adapter_strategy}')
57
58 def get_default_strategy_config(self) -> 'dataclass':
59 """
60 Returns a default adapter module strategy.
61 """
62 return adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
63
64 def adapter_unfreeze(self,):
65 """
66 Sets the requires grad for all parameters in the adapter to True.
67 This method should be overridden for any custom unfreeze behavior that is required.
68 For example, if not all params of the adapter should be unfrozen.
69 """
70 for param in self.parameters():
71 param.requires_grad_(True)
72
73
74 class LinearAdapter(nn.Module, AdapterModuleUtil):
75
76 """
77 Simple Linear Feedforward Adapter module with LayerNorm and a single hidden layer with an activation function.
78 Note: The adapter explicitly initializes its final layer with all zeros in order to avoid affecting the
79 original model when all adapters are disabled.
80
81 Args:
82 in_features: Input dimension of the module. Note that for adapters, input_dim == output_dim.
83 dim: Hidden dimension of the feed forward network.
84 activation: Str name for an activation function.
85 norm_position: Str, can be `pre` or `post`. Defaults to `pre`. Determines whether the normalization
86 will occur in the first layer or the last layer. Certain architectures may prefer one over the other.
87 dropout: float value, whether to perform dropout on the output of the last layer of the adapter.
88 adapter_strategy: By default, ResidualAddAdapterStrategyConfig. An adapter composition function object.
89 """
90
91 def __init__(
92 self,
93 in_features: int,
94 dim: int,
95 activation: str = 'swish',
96 norm_position: str = 'pre',
97 dropout: float = 0.0,
98 adapter_strategy: adapter_mixin_strategies.ResidualAddAdapterStrategyConfig = None,
99 ):
100 super().__init__()
101
102 activation = activation_registry[activation]()
103 # If the activation can be executed in place, do so.
104 if hasattr(activation, 'inplace'):
105 activation.inplace = True
106
107 assert norm_position in ['pre', 'post']
108 self.norm_position = norm_position
109
110 if norm_position == 'pre':
111 self.module = nn.Sequential(
112 nn.LayerNorm(in_features),
113 nn.Linear(in_features, dim, bias=False),
114 activation,
115 nn.Linear(dim, in_features, bias=False),
116 )
117
118 elif norm_position == 'post':
119 self.module = nn.Sequential(
120 nn.Linear(in_features, dim, bias=False),
121 activation,
122 nn.Linear(dim, in_features, bias=False),
123 nn.LayerNorm(in_features),
124 )
125
126 if dropout > 0.0:
127 self.dropout = nn.Dropout(dropout)
128 else:
129 self.dropout = None
130
131 # Setup adapter strategy
132 self.setup_adapter_strategy(adapter_strategy)
133
134 # reset parameters
135 self.reset_parameters()
136
137 def reset_parameters(self):
138 # Final layer initializations must be 0
139 if self.norm_position == 'pre':
140 self.module[-1].weight.data *= 0
141
142 elif self.norm_position == 'post':
143 self.module[-1].weight.data *= 0
144 self.module[-1].bias.data *= 0
145
146 def forward(self, x):
147 x = self.module(x)
148
149 # Add dropout if available
150 if self.dropout is not None:
151 x = self.dropout(x)
152
153 return x
154
155
156 @dataclass
157 class LinearAdapterConfig:
158 in_features: int
159 dim: int
160 activation: str = 'swish'
161 norm_position: str = 'pre'
162 dropout: float = 0.0
163 adapter_strategy: Optional[Any] = adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
164 _target_: str = "{0}.{1}".format(LinearAdapter.__module__, LinearAdapter.__name__)
165
[end of nemo/collections/common/parts/adapter_modules.py]
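The zero initialization in `reset_parameters` is what makes a freshly added adapter a no-op: under the residual-add strategy the adapter's output is added back to its input, and a zero final layer contributes nothing. A minimal pure-Python sketch of that composition (hypothetical; a ReLU stands in for swish and LayerNorm is omitted) illustrates the identity-at-init property:

```python
# Hypothetical standalone sketch of residual adapter composition with a
# zero-initialized output projection, mirroring the idea in LinearAdapter.
def matvec(w, x):
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in w]

def adapter_forward(x, w_in, w_out):
    hidden = [max(0.0, h) for h in matvec(w_in, x)]  # ReLU stand-in for swish
    return matvec(w_out, hidden)

def residual_add(x, w_in, w_out):
    # out = x + adapter(x): the composition applied by the residual strategy
    return [xi + ai for xi, ai in zip(x, adapter_forward(x, w_in, w_out))]

x = [1.0, -2.0, 0.5]
w_in = [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]]           # 3 -> 2
w_out_zero = [[0.0, 0.0], [0.0, 0.0], [0.0, 0.0]]   # 2 -> 3, zero-initialized
print(residual_add(x, w_in, w_out_zero))  # [1.0, -2.0, 0.5] -- identity at init
```

Because the adapter starts as an identity, enabling it on a trained model does not perturb the original outputs until the adapter weights are updated.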
[start of nemo/collections/common/tokenizers/en_ja_tokenizers.py]
1 # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 import re
15 from typing import List
16
17 import ipadic
18 import MeCab
19 from pangu import spacing
20 from sacremoses import MosesDetokenizer, MosesPunctNormalizer, MosesTokenizer
21
22
23 class EnJaProcessor:
24 """
25 Tokenizer, Detokenizer and Normalizer utilities for Japanese & English
26 Args:
27 lang_id: One of ['en', 'ja'].
28 """
29
30 def __init__(self, lang_id: str):
31 self.lang_id = lang_id
32 self.moses_tokenizer = MosesTokenizer(lang=lang_id)
33 self.moses_detokenizer = MosesDetokenizer(lang=lang_id)
34 self.normalizer = MosesPunctNormalizer(
35 lang=lang_id, pre_replace_unicode_punct=True, post_remove_control_chars=True
36 )
37
38 def detokenize(self, tokens: List[str]) -> str:
39 """
40 Detokenizes a list of tokens
41 Args:
42 tokens: list of strings as tokens
43 Returns:
44 detokenized Japanese or English string
45 """
46 return self.moses_detokenizer.detokenize(tokens)
47
48 def tokenize(self, text) -> str:
49 """
50 Tokenizes text using Moses. Returns a string of tokens.
51 """
52 tokens = self.moses_tokenizer.tokenize(text)
53 return ' '.join(tokens)
54
55 def normalize(self, text) -> str:
56 # Normalization doesn't handle Japanese periods correctly;
57 # '。' becomes '.'.
58 if self.lang_id == 'en':
59 return self.normalizer.normalize(text)
60 else:
61 return text
62
63
64 class JaMecabProcessor:
65 """
66 Tokenizer, Detokenizer and Normalizer utilities for Japanese, based on MeCab
67 """
68
69 def __init__(self):
70 self.mecab_tokenizer = MeCab.Tagger(ipadic.MECAB_ARGS + " -Owakati")
71
72 def detokenize(self, text: List[str]) -> str:
73 RE_WS_IN_FW = re.compile(
74 r'([\u2018\u2019\u201c\u201d\u2e80-\u312f\u3200-\u32ff\u3400-\u4dbf\u4e00-\u9fff\uf900-\ufaff\uff00-\uffef])\s+(?=[\u2018\u2019\u201c\u201d\u2e80-\u312f\u3200-\u32ff\u3400-\u4dbf\u4e00-\u9fff\uf900-\ufaff\uff00-\uffef])'
75 )
76
77 detokenize = lambda s: spacing(RE_WS_IN_FW.sub(r'\1', s)).strip()
78 return detokenize(' '.join(text))
79
80 def tokenize(self, text) -> str:
81 """
82 Tokenizes text using MeCab. Returns a string of tokens.
83 """
84 return self.mecab_tokenizer.parse(text).strip()
85
86 def normalize(self, text) -> str:
87 return text
88
[end of nemo/collections/common/tokenizers/en_ja_tokenizers.py]
[start of nemo/collections/nlp/models/machine_translation/mt_enc_dec_config.py]
1 # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from dataclasses import dataclass
16 from typing import Any, Optional, Tuple
17
18 from omegaconf.omegaconf import MISSING
19
20 from nemo.collections.nlp.data.machine_translation.machine_translation_dataset import TranslationDataConfig
21 from nemo.collections.nlp.models.enc_dec_nlp_model import EncDecNLPModelConfig
22 from nemo.collections.nlp.modules.common.token_classifier import TokenClassifierConfig
23 from nemo.collections.nlp.modules.common.tokenizer_utils import TokenizerConfig
24 from nemo.collections.nlp.modules.common.transformer.transformer import (
25 NeMoTransformerConfig,
26 NeMoTransformerEncoderConfig,
27 )
28 from nemo.collections.nlp.modules.common.transformer.transformer_bottleneck import (
29 NeMoTransformerBottleneckDecoderConfig,
30 NeMoTransformerBottleneckEncoderConfig,
31 )
32 from nemo.core.config.modelPT import OptimConfig, SchedConfig
33
34
35 @dataclass
36 class MTSchedConfig(SchedConfig):
37 name: str = 'InverseSquareRootAnnealing'
38 warmup_ratio: Optional[float] = None
39 last_epoch: int = -1
40
41
42 # TODO: Refactor this dataclass to support more optimizers (it pins the optimizer to Adam-like optimizers).
43 @dataclass
44 class MTOptimConfig(OptimConfig):
45 name: str = 'adam'
46 lr: float = 1e-3
47 betas: Tuple[float, float] = (0.9, 0.98)
48 weight_decay: float = 0.0
49 sched: Optional[MTSchedConfig] = MTSchedConfig()
50
51
52 @dataclass
53 class MTEncDecModelConfig(EncDecNLPModelConfig):
54 # machine translation configurations
55 num_val_examples: int = 3
56 num_test_examples: int = 3
57 max_generation_delta: int = 10
58 label_smoothing: Optional[float] = 0.0
59 beam_size: int = 4
60 len_pen: float = 0.0
61 src_language: Any = 'en' # Any = str or List[str]
62 tgt_language: Any = 'en' # Any = str or List[str]
63 find_unused_parameters: Optional[bool] = True
64 shared_tokenizer: Optional[bool] = True
65 multilingual: Optional[bool] = False
66 preproc_out_dir: Optional[str] = None
67 validate_input_ids: Optional[bool] = True
68 shared_embeddings: bool = False
69
70 # network architecture configuration
71 encoder_tokenizer: Any = MISSING
72 encoder: Any = MISSING
73
74 decoder_tokenizer: Any = MISSING
75 decoder: Any = MISSING
76
77 head: TokenClassifierConfig = TokenClassifierConfig(log_softmax=True)
78
79 # dataset configurations
80 train_ds: Optional[TranslationDataConfig] = TranslationDataConfig(
81 src_file_name=MISSING,
82 tgt_file_name=MISSING,
83 tokens_in_batch=512,
84 clean=True,
85 shuffle=True,
86 cache_ids=False,
87 use_cache=False,
88 )
89 validation_ds: Optional[TranslationDataConfig] = TranslationDataConfig(
90 src_file_name=MISSING,
91 tgt_file_name=MISSING,
92 tokens_in_batch=512,
93 clean=False,
94 shuffle=False,
95 cache_ids=False,
96 use_cache=False,
97 )
98 test_ds: Optional[TranslationDataConfig] = TranslationDataConfig(
99 src_file_name=MISSING,
100 tgt_file_name=MISSING,
101 tokens_in_batch=512,
102 clean=False,
103 shuffle=False,
104 cache_ids=False,
105 use_cache=False,
106 )
107 optim: Optional[OptimConfig] = MTOptimConfig()
108
109
110 @dataclass
111 class AAYNBaseConfig(MTEncDecModelConfig):
112
113 # Attention is All You Need Base Configuration
114 encoder_tokenizer: TokenizerConfig = TokenizerConfig(library='yttm')
115 decoder_tokenizer: TokenizerConfig = TokenizerConfig(library='yttm')
116
117 encoder: NeMoTransformerEncoderConfig = NeMoTransformerEncoderConfig(
118 library='nemo',
119 model_name=None,
120 pretrained=False,
121 hidden_size=512,
122 inner_size=2048,
123 num_layers=6,
124 num_attention_heads=8,
125 ffn_dropout=0.1,
126 attn_score_dropout=0.1,
127 attn_layer_dropout=0.1,
128 )
129
130 decoder: NeMoTransformerConfig = NeMoTransformerConfig(
131 library='nemo',
132 model_name=None,
133 pretrained=False,
134 hidden_size=512,
135 inner_size=2048,
136 num_layers=6,
137 num_attention_heads=8,
138 ffn_dropout=0.1,
139 attn_score_dropout=0.1,
140 attn_layer_dropout=0.1,
141 )
142
143
144 @dataclass
145 class MTBottleneckModelConfig(AAYNBaseConfig):
146 model_type: str = 'nll'
147 min_logv: float = -6
148 latent_size: int = -1 # -1 will take value of encoder hidden
149 non_recon_warmup_batches: int = 200000
150 recon_per_token: bool = True
151 log_timing: bool = True
152
153 encoder: NeMoTransformerBottleneckEncoderConfig = NeMoTransformerBottleneckEncoderConfig(
154 library='nemo',
155 model_name=None,
156 pretrained=False,
157 hidden_size=512,
158 inner_size=2048,
159 num_layers=6,
160 num_attention_heads=8,
161 ffn_dropout=0.1,
162 attn_score_dropout=0.1,
163 attn_layer_dropout=0.1,
164 arch='seq2seq',
165 hidden_steps=32,
166 hidden_blocks=1,
167 hidden_init_method='params',
168 )
169
170 decoder: NeMoTransformerBottleneckDecoderConfig = NeMoTransformerBottleneckDecoderConfig(
171 library='nemo',
172 model_name=None,
173 pretrained=False,
174 inner_size=2048,
175 num_layers=6,
176 num_attention_heads=8,
177 ffn_dropout=0.1,
178 attn_score_dropout=0.1,
179 attn_layer_dropout=0.1,
180 arch='seq2seq',
181 )
182
[end of nemo/collections/nlp/models/machine_translation/mt_enc_dec_config.py]
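The configs above rely on OmegaConf structured configs, which tolerate dataclass-instance defaults such as `sched: Optional[MTSchedConfig] = MTSchedConfig()`. With plain stdlib dataclasses the equivalent nesting needs `field(default_factory=...)`, since recent Python versions reject mutable defaults. A minimal hypothetical mirror of the optimizer config pattern:

```python
from dataclasses import dataclass, field, asdict
from typing import Optional, Tuple

@dataclass
class SchedCfg:  # stand-in for MTSchedConfig
    name: str = 'InverseSquareRootAnnealing'
    warmup_ratio: Optional[float] = None
    last_epoch: int = -1

@dataclass
class OptimCfg:  # stand-in for MTOptimConfig
    name: str = 'adam'
    lr: float = 1e-3
    betas: Tuple[float, float] = (0.9, 0.98)
    weight_decay: float = 0.0
    sched: SchedCfg = field(default_factory=SchedCfg)  # fresh instance per config

cfg = OptimCfg(lr=2e-4)
print(asdict(cfg)['sched']['name'])  # InverseSquareRootAnnealing
```

`default_factory` also guarantees each `OptimCfg` gets its own `SchedCfg`, so mutating one config's schedule cannot leak into another — a hazard the shared-instance default pattern would otherwise have.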
[start of nemo/collections/nlp/models/token_classification/punctuation_capitalization_config.py]
1 # Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from dataclasses import dataclass
16 from typing import Any, Dict, Optional
17
18 from omegaconf.omegaconf import MISSING, DictConfig, OmegaConf, open_dict
19
20 from nemo.collections.common.parts.adapter_modules import LinearAdapterConfig
21 from nemo.collections.nlp.data.token_classification.punctuation_capitalization_dataset import (
22 PunctuationCapitalizationEvalDataConfig,
23 PunctuationCapitalizationTrainDataConfig,
24 legacy_data_config_to_new_data_config,
25 )
26 from nemo.core.config import TrainerConfig
27 from nemo.core.config.modelPT import NemoConfig
28 from nemo.utils.exp_manager import ExpManagerConfig
29
30
31 @dataclass
32 class FreezeConfig:
33 is_enabled: bool = False
34 """Freeze audio encoder weights and add Conformer layers on top of it"""
35 d_model: Optional[int] = 256
36 """`d_model` parameter of ``ConformerLayer``"""
37 d_ff: Optional[int] = 1024
38 """``d_ff`` parameter of ``ConformerLayer``"""
39 num_layers: Optional[int] = 8
40 """Number of ``ConformerLayer`` modules to add on top of the audio encoder"""
41
42
43 @dataclass
44 class AdapterConfig:
45 config: Optional[LinearAdapterConfig] = None
46 """Linear adapter config; see ``collections.common.parts.LinearAdapterConfig``"""
47 enable: bool = False
48 """Use adapters for audio encoder"""
49
50
51 @dataclass
52 class FusionConfig:
53 num_layers: Optional[int] = 4
54 """Number of layers to use in fusion"""
55 num_attention_heads: Optional[int] = 4
56 """Number of attention heads to use in fusion"""
57 inner_size: Optional[int] = 2048
58 """Fusion inner size"""
59
60
61 @dataclass
62 class AudioEncoderConfig:
63 pretrained_model: str = MISSING
64 """A configuration for restoring pretrained audio encoder"""
65 freeze: Optional[FreezeConfig] = None
66 adapter: Optional[AdapterConfig] = None
67 fusion: Optional[FusionConfig] = None
68
69
70 @dataclass
71 class TokenizerConfig:
72 """A structure and default values of source text tokenizer."""
73
74 vocab_file: Optional[str] = None
75 """A path to vocabulary file which is used in ``'word'``, ``'char'``, and HuggingFace tokenizers"""
76
77 tokenizer_name: str = MISSING
78 """A name of the tokenizer used for tokenization of source sequences. Possible options are ``'sentencepiece'``,
79 ``'word'``, ``'char'``, HuggingFace tokenizers (e.g. ``'bert-base-uncased'``). For more options see function
80 ``nemo.collections.nlp.modules.common.get_tokenizer``. The tokenizer must have properties ``cls_id``, ``pad_id``,
81 ``sep_id``, ``unk_id``."""
82
83 special_tokens: Optional[Dict[str, str]] = None
84 """A dictionary with special tokens passed to constructors of ``'char'``, ``'word'``, ``'sentencepiece'``, and
85 various HuggingFace tokenizers."""
86
87 tokenizer_model: Optional[str] = None
88 """A path to a tokenizer model required for ``'sentencepiece'`` tokenizer."""
89
90
91 @dataclass
92 class LanguageModelConfig:
93 """
94 A structure and default values of language model configuration of punctuation and capitalization model. BERT like
95 HuggingFace models are supported. Provide a valid ``pretrained_model_name`` and, optionally, you may
96 reinitialize model via ``config_file`` or ``config``.
97
98 Alternatively you can initialize the language model using ``lm_checkpoint``.
99
100 This config is a part of :class:`PunctuationCapitalizationModelConfig` config.
101 """
102
103 pretrained_model_name: str = MISSING
104 """A mandatory parameter containing name of HuggingFace pretrained model. For example, ``'bert-base-uncased'``."""
105
106 config_file: Optional[str] = None
107 """A path to a file with HuggingFace model config which is used to reinitialize language model."""
108
109 config: Optional[Dict] = None
110 """A HuggingFace config which is used to reinitialize language model."""
111
112 lm_checkpoint: Optional[str] = None
113 """A path to a ``torch`` checkpoint of a language model."""
114
115
116 @dataclass
117 class HeadConfig:
118 """
119 A structure and default values of configuration of capitalization or punctuation model head. This config defines a
120 multilayer perceptron which is applied to output of a language model. Number of units in the hidden layer is equal
121 to the dimension of the language model.
122
123 This config is a part of :class:`PunctuationCapitalizationModelConfig` config.
124 """
125
126 num_fc_layers: int = 1
127 """A number of hidden layers in a multilayer perceptron."""
128
129 fc_dropout: float = 0.1
130 """A dropout used in an MLP."""
131
132 activation: str = 'relu'
133 """An activation used in hidden layers."""
134
135 use_transformer_init: bool = True
136 """Whether to initialize the weights of the classifier head with the approach that was used for language model
137 initialization."""
138
139
140 @dataclass
141 class ClassLabelsConfig:
142 """
143 A structure and default values of a mandatory part of config which contains names of label files saved in a .nemo
144 checkpoint. These files can also be used for passing label vocabularies to the model. To use them as label
145 vocabularies, provide the path to these files in the parameter
146 ``model.common_dataset_parameters.label_vocab_dir``. Each line of a labels file
147 contains 1 label. Labels are sorted so that ``<line number>==<label id>``, starting from ``0``. The label with id
148 ``0`` must be the neutral label, which must be equal to ``model.common_dataset_parameters.pad_label``.
149
150 This config is a part of :class:`~CommonDatasetParametersConfig`.
151 """
152
153 punct_labels_file: str = MISSING
154 """A name of punctuation labels file."""
155
156 capit_labels_file: str = MISSING
157 """A name of capitalization labels file."""
158
159
160 @dataclass
161 class CommonDatasetParametersConfig:
162 """
163 A structure and default values of common dataset parameters config which includes label and loss mask information.
164 If you omit parameters ``punct_label_ids``, ``capit_label_ids``, ``label_vocab_dir``, then labels will be inferred
165 from a training dataset or loaded from a checkpoint.
166
167 Parameters ``ignore_extra_tokens`` and ``ignore_start_end`` are responsible for forming the loss mask. The loss
168 mask defines the tokens on which loss is computed.
169
170 This parameter is a part of config :class:`~PunctuationCapitalizationModelConfig`.
171 """
172
173 pad_label: str = MISSING
174 """A mandatory parameter which should contain the label used for punctuation and capitalization label padding. It
175 also serves as a neutral label for both punctuation and capitalization. If either of the ``punct_label_ids`` and
176 ``capit_label_ids`` parameters is provided, then ``pad_label`` must have id ``0`` in it. In addition, if ``label_vocab_dir``
177 is provided, then ``pad_label`` must be on the first line of the files ``class_labels.punct_labels_file`` and
178 ``class_labels.capit_labels_file``."""
179
180 ignore_extra_tokens: bool = False
181 """Whether to compute loss on not first tokens in words. If this parameter is ``True``, then loss mask is ``False``
182 for all tokens in a word except the first."""
183
184 ignore_start_end: bool = True
185 """If ``False``, then loss is computed on [CLS] and [SEP] tokens."""
186
187 punct_label_ids: Optional[Dict[str, int]] = None
188 """A dictionary with punctuation label ids. ``pad_label`` must have ``0`` id in this dictionary. You can omit this
189 parameter and pass label ids through ``class_labels.punct_labels_file`` or let the model to infer label ids from
190 dataset or load them from checkpoint."""
191
192 capit_label_ids: Optional[Dict[str, int]] = None
193 """A dictionary with capitalization label ids. ``pad_label`` must have ``0`` id in this dictionary. You can omit
194 this parameter and pass label ids through ``class_labels.capit_labels_file`` or let model to infer label ids from
195 dataset or load them from checkpoint."""
196
197 label_vocab_dir: Optional[str] = None
198 """A path to directory which contains class labels files. See :class:`ClassLabelsConfig`. If this parameter is
199 provided, then labels will be loaded from files which are located in ``label_vocab_dir`` and have names specified
200 in ``model.class_labels`` configuration section. A label specified in ``pad_label`` has to be on the first lines
201 of ``model.class_labels`` files."""
202
203
204 @dataclass
205 class PunctuationCapitalizationModelConfig:
206 """
207 A configuration of
208 :class:`~nemo.collections.nlp.models.token_classification.punctuation_capitalization_model.PunctuationCapitalizationModel`
209 model.
210
211 See an example of model config in
212 `nemo/examples/nlp/token_classification/conf/punctuation_capitalization_config.yaml
213 <https://github.com/NVIDIA/NeMo/blob/main/examples/nlp/token_classification/conf/punctuation_capitalization_config.yaml>`_
214
215 This config is a part of :class:`~PunctuationCapitalizationConfig`.
216 """
217
218 class_labels: ClassLabelsConfig = ClassLabelsConfig()
219 """A mandatory parameter containing a dictionary with names of label id files used in .nemo checkpoints.
220 These file names can also be used for passing label vocabularies to the model. If you wish to use ``class_labels``
221 for passing vocabularies, please provide the path to the vocabulary files in the
222 ``model.common_dataset_parameters.label_vocab_dir`` parameter."""
223
224 common_dataset_parameters: Optional[CommonDatasetParametersConfig] = CommonDatasetParametersConfig()
225 """Label ids and loss mask information information."""
226
227 train_ds: Optional[PunctuationCapitalizationTrainDataConfig] = None
228 """A configuration for creating training dataset and data loader."""
229
230 validation_ds: Optional[PunctuationCapitalizationEvalDataConfig] = None
231 """A configuration for creating validation datasets and data loaders."""
232
233 test_ds: Optional[PunctuationCapitalizationEvalDataConfig] = None
234 """A configuration for creating test datasets and data loaders."""
235
236 punct_head: HeadConfig = HeadConfig()
237 """A configuration for creating punctuation MLP head that is applied to a language model outputs."""
238
239 capit_head: HeadConfig = HeadConfig()
240 """A configuration for creating capitalization MLP head that is applied to a language model outputs."""
241
242 tokenizer: Any = TokenizerConfig()
243 """A configuration for source text tokenizer."""
244
245 language_model: LanguageModelConfig = LanguageModelConfig()
246 """A configuration of a BERT-like language model which serves as a model body."""
247
248 optim: Optional[Any] = None
249 """A configuration of optimizer and learning rate scheduler. There is much variability in such config. For
250 description see `Optimizers
251 <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/core/core.html#optimizers>`_ section in
252 documentation and `primer <https://github.com/NVIDIA/NeMo/blob/main/tutorials/00_NeMo_Primer.ipynb>_ tutorial."""
253
254
255 @dataclass
256 class PunctuationCapitalizationLexicalAudioModelConfig(PunctuationCapitalizationModelConfig):
257 """
258 A configuration of
259 :class:`~nemo.collections.nlp.models.token_classification.punctuation_lexical_audio_capitalization_model.PunctuationCapitalizationLexicalAudioModel`
260 model.
261
262 See an example of model config in
263 `nemo/examples/nlp/token_classification/conf/punctuation_capitalization_lexical_audio_config.yaml
264 <https://github.com/NVIDIA/NeMo/blob/main/examples/nlp/token_classification/conf/punctuation_capitalization_lexical_audio_config.yaml>`_
265
266 The audio encoder can be frozen during training with the ``freeze_audio_encoder`` parameter.
267 An adapter can be added to the audio encoder with the ``use_adapters`` and ``adapter_config`` parameters.
268 More conformer layers can be added on top of the pretrained audio encoder with the ``frozen_conf_d_model``, ``frozen_conf_d_ff``, and ``frozen_conf_num_layers`` parameters.
269 """
270
271 train_ds: Optional[PunctuationCapitalizationTrainDataConfig] = None
272 """A configuration for creating training dataset and data loader."""
273
274 validation_ds: Optional[PunctuationCapitalizationEvalDataConfig] = None
275 """A configuration for creating validation datasets and data loaders."""
276
277 test_ds: Optional[PunctuationCapitalizationEvalDataConfig] = None
278 """A configuration for creating test datasets and data loaders."""
279
280 audio_encoder: Optional[AudioEncoderConfig] = None
281
282 restore_lexical_encoder_from: Optional[str] = None
283 """"Path to .nemo checkpoint to load weights from""" # add more comments
284
285 use_weighted_loss: Optional[bool] = False
286 """If set to ``True`` CrossEntropyLoss will be weighted"""
287
288
289 @dataclass
290 class PunctuationCapitalizationConfig(NemoConfig):
291 """
292 A config for punctuation model training and testing.
293
294 See an example of full config in
295 `nemo/examples/nlp/token_classification/conf/punctuation_capitalization_config.yaml
296 <https://github.com/NVIDIA/NeMo/blob/main/examples/nlp/token_classification/conf/punctuation_capitalization_config.yaml>`_
297 """
298
299 pretrained_model: Optional[str] = None
300 """Can be an NVIDIA's NGC cloud model or a path to a .nemo checkpoint. You can get list of possible cloud options
301 by calling method
302 :func:`~nemo.collections.nlp.models.token_classification.punctuation_capitalization_model.PunctuationCapitalizationModel.list_available_models`.
303 """
304
305 name: Optional[str] = 'Punctuation_and_Capitalization'
306 """A name of the model. Used for naming output directories and ``.nemo`` checkpoints."""
307
308 do_training: bool = True
309 """Whether to perform training of the model."""
310
311 do_testing: bool = False
312 """Whether ot perform testing of the model."""
313
314 model: PunctuationCapitalizationModelConfig = PunctuationCapitalizationModelConfig()
315 """A configuration for the
316 :class:`~nemo.collections.nlp.models.token_classification.punctuation_capitalization_model.PunctuationCapitalizationModel`
317 model."""
318
319 trainer: Optional[TrainerConfig] = TrainerConfig()
320 """Contains ``Trainer`` Lightning class constructor parameters."""
321
322 exp_manager: Optional[ExpManagerConfig] = ExpManagerConfig(name=name, files_to_copy=[])
323 """A configuration with various NeMo training options such as output directories, resuming from checkpoint,
324 tensorboard and W&B logging, and so on. For possible options see :ref:`exp-manager-label`."""
325
326
327 @dataclass
328 class PunctuationCapitalizationLexicalAudioConfig(PunctuationCapitalizationConfig):
329 model: PunctuationCapitalizationLexicalAudioModelConfig = PunctuationCapitalizationLexicalAudioModelConfig()
330
331
332 def is_legacy_model_config(model_cfg: DictConfig) -> bool:
333 """
334 Test if model config is old style config. Old style configs are configs which were used before the
335 ``common_dataset_parameters`` item was added. Old style configs use ``dataset`` instead of
336 ``common_dataset_parameters`` and ``batch_size`` instead of ``tokens_in_batch``. Old style configs do not support
337 tarred datasets.
338
339 Args:
340 model_cfg: model configuration
341
342 Returns:
343 whether ``model_config`` is legacy
344 """
345 return 'common_dataset_parameters' not in model_cfg
346
347
348 def legacy_model_config_to_new_model_config(model_cfg: DictConfig) -> DictConfig:
349 """
350 Transform old style config into
351 :class:`~nemo.collections.nlp.models.token_classification.punctuation_capitalization_config.PunctuationCapitalizationModelConfig`.
352 Old style configs are configs which were used before the ``common_dataset_parameters`` item was added. Old style
353 configs use ``dataset`` instead of ``common_dataset_parameters`` and ``batch_size`` instead of ``tokens_in_batch``.
354 Old style configs do not support tarred datasets.
355
356 Args:
357 model_cfg: old style config
358
359 Returns:
360 model config which follows dataclass
361 :class:`~nemo.collections.nlp.models.token_classification.punctuation_capitalization_config.PunctuationCapitalizationModelConfig`
362 """
363 train_ds = model_cfg.get('train_ds')
364 validation_ds = model_cfg.get('validation_ds')
365 test_ds = model_cfg.get('test_ds')
366 dataset = model_cfg.dataset
367 punct_head_config = model_cfg.get('punct_head', {})
368 capit_head_config = model_cfg.get('capit_head', {})
369 omega_conf = OmegaConf.structured(
370 PunctuationCapitalizationModelConfig(
371 class_labels=model_cfg.class_labels,
372 common_dataset_parameters=CommonDatasetParametersConfig(
373 pad_label=dataset.pad_label,
374 ignore_extra_tokens=dataset.get(
375 'ignore_extra_tokens', CommonDatasetParametersConfig.ignore_extra_tokens
376 ),
377 ignore_start_end=dataset.get('ignore_start_end', CommonDatasetParametersConfig.ignore_start_end),
378 punct_label_ids=model_cfg.punct_label_ids,
379 capit_label_ids=model_cfg.capit_label_ids,
380 ),
381 train_ds=None
382 if train_ds is None
383 else legacy_data_config_to_new_data_config(train_ds, dataset, train=True),
384 validation_ds=None
385 if validation_ds is None
386 else legacy_data_config_to_new_data_config(validation_ds, dataset, train=False),
387 test_ds=None if test_ds is None else legacy_data_config_to_new_data_config(test_ds, dataset, train=False),
388 punct_head=HeadConfig(
389 num_fc_layers=punct_head_config.get('punct_num_fc_layers', HeadConfig.num_fc_layers),
390 fc_dropout=punct_head_config.get('fc_dropout', HeadConfig.fc_dropout),
391 activation=punct_head_config.get('activation', HeadConfig.activation),
392 use_transformer_init=punct_head_config.get('use_transformer_init', HeadConfig.use_transformer_init),
393 ),
394 capit_head=HeadConfig(
395 num_fc_layers=capit_head_config.get('capit_num_fc_layers', HeadConfig.num_fc_layers),
396 fc_dropout=capit_head_config.get('fc_dropout', HeadConfig.fc_dropout),
397 activation=capit_head_config.get('activation', HeadConfig.activation),
398 use_transformer_init=capit_head_config.get('use_transformer_init', HeadConfig.use_transformer_init),
399 ),
400 tokenizer=model_cfg.tokenizer,
401 language_model=model_cfg.language_model,
402 optim=model_cfg.optim,
403 )
404 )
405 with open_dict(omega_conf):
406 retain_during_legacy_conversion = model_cfg.get('retain_during_legacy_conversion', {})
407 for key in retain_during_legacy_conversion.keys():
408 omega_conf[key] = retain_during_legacy_conversion[key]
409 return omega_conf
410
[end of nemo/collections/nlp/models/token_classification/punctuation_capitalization_config.py]
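The label-vocabulary file convention described in ``ClassLabelsConfig`` and ``label_vocab_dir`` (one label per line, ``<line number>==<label id>``, with the neutral ``pad_label`` on the first line) can be sketched as follows. The file name, the labels, and the ``load_label_ids`` helper are hypothetical illustrations, not NeMo APIs:

```python
import tempfile
from pathlib import Path


def load_label_ids(labels_file: Path) -> dict:
    """Map each label to its id, i.e. its zero-based line number."""
    with labels_file.open() as f:
        return {line.strip(): i for i, line in enumerate(f)}


# Write a punctuation labels file: the neutral pad_label 'O' must be on
# the first line so that it receives id 0.
label_vocab_dir = Path(tempfile.mkdtemp())
punct_file = label_vocab_dir / "punct_labels.txt"
punct_file.write_text("O\n,\n.\n?\n")

punct_label_ids = load_label_ids(punct_file)
assert punct_label_ids == {"O": 0, ",": 1, ".": 2, "?": 3}
assert punct_label_ids["O"] == 0  # pad_label must have id 0
```

Loading the same files that are stored in a ``.nemo`` checkpoint keeps the label-to-id mapping consistent between training and restoration.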
[start of nemo/collections/nlp/modules/common/megatron/megatron_encoders.py]
1 # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Transformer based language model."""
17 from nemo.collections.nlp.modules.common.megatron.megatron_perceiver_encoders import MegatronPerceiverEncoderModule
18 from nemo.collections.nlp.modules.common.megatron.megatron_transformer_encoder import MegatronTransformerEncoderModule
19 from nemo.collections.nlp.modules.common.megatron.retrieval_transformer import (
20 MegatronRetrievalTransformerEncoderModule,
21 )
22 from nemo.collections.nlp.modules.common.megatron.utils import (
23 ApexGuardDefaults,
24 init_method_normal,
25 scaled_init_method_normal,
26 )
27
28 try:
29 from apex.transformer.enums import AttnMaskType, ModelType
30
31 HAVE_APEX = True
32 except (ImportError, ModuleNotFoundError):
33 HAVE_APEX = False
34 # fake missing classes with None attributes
35 AttnMaskType = ApexGuardDefaults()
36 ModelType = ApexGuardDefaults()
37
38 try:
39 from megatron.core import ModelParallelConfig
40
41 HAVE_MEGATRON_CORE = True
42
43 except (ImportError, ModuleNotFoundError):
44
45 ModelParallelConfig = ApexGuardDefaults
46
47 HAVE_MEGATRON_CORE = False
48
49 __all__ = []
50
51 AVAILABLE_ENCODERS = ["transformer", "perceiver", "retro"]
52
53
54 def get_encoder_model(
55 config: ModelParallelConfig,
56 arch,
57 hidden_size,
58 ffn_hidden_size,
59 num_layers,
60 num_attention_heads,
61 apply_query_key_layer_scaling=False,
62 kv_channels=None,
63 init_method=None,
64 scaled_init_method=None,
65 encoder_attn_mask_type=AttnMaskType.padding,
66 pre_process=True,
67 post_process=True,
68 init_method_std=0.02,
69 megatron_amp_O2=False,
70 hidden_dropout=0.1,
71 attention_dropout=0.1,
72 ffn_dropout=0.0,
73 precision=16,
74 fp32_residual_connection=False,
75 activations_checkpoint_method=None,
76 activations_checkpoint_num_layers=1,
77 activations_checkpoint_granularity=None,
78 layernorm_epsilon=1e-5,
79 bias_activation_fusion=True,
80 bias_dropout_add_fusion=True,
81 masked_softmax_fusion=True,
82 persist_layer_norm=False,
83 openai_gelu=False,
84 activation="gelu",
85 onnx_safe=False,
86 bias=True,
87 normalization="layernorm",
88 headscale=False,
89 transformer_block_type="pre_ln",
90 hidden_steps=32,
91 parent_model_type=ModelType.encoder_or_decoder,
92 layer_type=None,
93 chunk_size=64,
94 num_self_attention_per_cross_attention=1,
95 layer_number_offset=0, # this is used only for attention norm_factor scaling
96 megatron_legacy=False,
97 normalize_attention_scores=True,
98 sequence_parallel=False,
99 num_moe_experts=1,
100 moe_frequency=1,
101 moe_dropout=0.0,
102 turn_off_rop=False, # turn off the RoPE (rotary) positional embedding
103 version=1, # model version
104 position_embedding_type='learned_absolute',
105 use_flash_attention=False,
106 ):
107 """Build language model and return along with the key to save."""
108
109 if kv_channels is None:
110 assert (
111 hidden_size % num_attention_heads == 0
112 ), 'hidden_size must be divisible by num_attention_heads if kv_channels is None'
113 kv_channels = hidden_size // num_attention_heads
114
115 if init_method is None:
116 init_method = init_method_normal(init_method_std)
117
118 if scaled_init_method is None:
119 scaled_init_method = scaled_init_method_normal(init_method_std, num_layers)
120
121 if arch == "transformer":
122 # Language encoder.
123 encoder = MegatronTransformerEncoderModule(
124 config=config,
125 init_method=init_method,
126 output_layer_init_method=scaled_init_method,
127 hidden_size=hidden_size,
128 num_layers=num_layers,
129 num_attention_heads=num_attention_heads,
130 apply_query_key_layer_scaling=apply_query_key_layer_scaling,
131 kv_channels=kv_channels,
132 ffn_hidden_size=ffn_hidden_size,
133 encoder_attn_mask_type=encoder_attn_mask_type,
134 pre_process=pre_process,
135 post_process=post_process,
136 megatron_amp_O2=megatron_amp_O2,
137 hidden_dropout=hidden_dropout,
138 attention_dropout=attention_dropout,
139 ffn_dropout=ffn_dropout,
140 precision=precision,
141 fp32_residual_connection=fp32_residual_connection,
142 activations_checkpoint_method=activations_checkpoint_method,
143 activations_checkpoint_num_layers=activations_checkpoint_num_layers,
144 activations_checkpoint_granularity=activations_checkpoint_granularity,
145 layernorm_epsilon=layernorm_epsilon,
146 bias_activation_fusion=bias_activation_fusion,
147 bias_dropout_add_fusion=bias_dropout_add_fusion,
148 masked_softmax_fusion=masked_softmax_fusion,
149 persist_layer_norm=persist_layer_norm,
150 openai_gelu=openai_gelu,
151 onnx_safe=onnx_safe,
152 activation=activation,
153 bias=bias,
154 normalization=normalization,
155 transformer_block_type=transformer_block_type,
156 headscale=headscale,
157 parent_model_type=parent_model_type,
158 megatron_legacy=megatron_legacy,
159 normalize_attention_scores=normalize_attention_scores,
160 num_moe_experts=num_moe_experts,
161 moe_frequency=moe_frequency,
162 moe_dropout=moe_dropout,
163 position_embedding_type=position_embedding_type,
164 use_flash_attention=use_flash_attention,
165 )
166 elif arch == "retro":
167 encoder = MegatronRetrievalTransformerEncoderModule(
168 config=config,
169 init_method=init_method,
170 output_layer_init_method=scaled_init_method,
171 hidden_size=hidden_size,
172 num_layers=num_layers,
173 num_attention_heads=num_attention_heads,
174 apply_query_key_layer_scaling=apply_query_key_layer_scaling,
175 kv_channels=kv_channels,
176 layer_type=layer_type,
177 ffn_hidden_size=ffn_hidden_size,
178 pre_process=pre_process,
179 post_process=post_process,
180 megatron_amp_O2=megatron_amp_O2,
181 hidden_dropout=hidden_dropout,
182 attention_dropout=attention_dropout,
183 precision=precision,
184 fp32_residual_connection=fp32_residual_connection,
185 activations_checkpoint_method=activations_checkpoint_method,
186 activations_checkpoint_num_layers=activations_checkpoint_num_layers,
187 activations_checkpoint_granularity=activations_checkpoint_granularity,
188 layernorm_epsilon=layernorm_epsilon,
189 bias_activation_fusion=bias_activation_fusion,
190 bias_dropout_add_fusion=bias_dropout_add_fusion,
191 masked_softmax_fusion=masked_softmax_fusion,
192 persist_layer_norm=persist_layer_norm,
193 openai_gelu=openai_gelu,
194 onnx_safe=onnx_safe,
195 activation=activation,
196 bias=bias,
197 normalization=normalization,
198 transformer_block_type=transformer_block_type,
199 parent_model_type=parent_model_type,
200 chunk_size=chunk_size,
201 layer_number_offset=layer_number_offset,
202 megatron_legacy=megatron_legacy,
203 normalize_attention_scores=normalize_attention_scores,
204 turn_off_rop=turn_off_rop,
205 version=version,
206 )
207 elif arch == "perceiver":
208 encoder = MegatronPerceiverEncoderModule(
209 config=config,
210 init_method=init_method,
211 output_layer_init_method=scaled_init_method,
212 hidden_size=hidden_size,
213 num_layers=num_layers,
214 num_attention_heads=num_attention_heads,
215 apply_query_key_layer_scaling=apply_query_key_layer_scaling,
216 kv_channels=kv_channels,
217 ffn_hidden_size=ffn_hidden_size,
218 encoder_attn_mask_type=encoder_attn_mask_type,
219 pre_process=pre_process,
220 post_process=post_process,
221 megatron_amp_O2=megatron_amp_O2,
222 hidden_dropout=hidden_dropout,
223 attention_dropout=attention_dropout,
224 ffn_dropout=ffn_dropout,
225 precision=precision,
226 fp32_residual_connection=fp32_residual_connection,
227 activations_checkpoint_method=activations_checkpoint_method,
228 activations_checkpoint_num_layers=activations_checkpoint_num_layers,
229 activations_checkpoint_granularity=activations_checkpoint_granularity,
230 layernorm_epsilon=layernorm_epsilon,
231 bias_activation_fusion=bias_activation_fusion,
232 bias_dropout_add_fusion=bias_dropout_add_fusion,
233 masked_softmax_fusion=masked_softmax_fusion,
234 persist_layer_norm=persist_layer_norm,
235 openai_gelu=openai_gelu,
236 onnx_safe=onnx_safe,
237 activation=activation,
238 bias=bias,
239 normalization=normalization,
240 transformer_block_type=transformer_block_type,
241 headscale=headscale,
242 parent_model_type=parent_model_type,
243 hidden_steps=hidden_steps,
244 num_self_attention_per_cross_attention=num_self_attention_per_cross_attention,
245 megatron_legacy=megatron_legacy,
246 normalize_attention_scores=normalize_attention_scores,
247 )
248 else:
249 raise ValueError(f"Unknown encoder arch = {arch}. Available encoder arch = {AVAILABLE_ENCODERS}")
250
251 return encoder
252
[end of nemo/collections/nlp/modules/common/megatron/megatron_encoders.py]
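The dispatch in ``get_encoder_model`` (derive ``kv_channels`` when unset, then select a constructor by ``arch`` and raise ``ValueError`` otherwise) can be condensed into a small sketch. ``build_encoder`` and its dict return value are illustrative stand-ins for the real Megatron encoder modules:

```python
AVAILABLE_ENCODERS = ["transformer", "perceiver", "retro"]


def build_encoder(arch: str, hidden_size: int, num_attention_heads: int, kv_channels=None) -> dict:
    # Mirror get_encoder_model's default: one head's worth of hidden units.
    if kv_channels is None:
        assert (
            hidden_size % num_attention_heads == 0
        ), 'hidden_size must be divisible by num_attention_heads if kv_channels is None'
        kv_channels = hidden_size // num_attention_heads
    if arch not in AVAILABLE_ENCODERS:
        raise ValueError(f"Unknown encoder arch = {arch}. Available encoder arch = {AVAILABLE_ENCODERS}")
    # A real implementation would construct the matching Megatron module here.
    return {"arch": arch, "kv_channels": kv_channels}


enc = build_encoder("transformer", hidden_size=768, num_attention_heads=12)
assert enc["kv_channels"] == 64  # 768 // 12
```

Validating ``arch`` against a single ``AVAILABLE_ENCODERS`` list keeps the error message in sync with the set of supported constructors.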
[start of nemo/collections/tts/models/fastpitch.py]
1 # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 import contextlib
15 from dataclasses import dataclass
16 from pathlib import Path
17 from typing import List, Optional
18
19 import torch
20 from hydra.utils import instantiate
21 from omegaconf import DictConfig, OmegaConf, open_dict
22 from pytorch_lightning import Trainer
23 from pytorch_lightning.loggers import TensorBoardLogger
24
25 from nemo.collections.common.parts.preprocessing import parsers
26 from nemo.collections.tts.losses.aligner_loss import BinLoss, ForwardSumLoss
27 from nemo.collections.tts.losses.fastpitchloss import DurationLoss, EnergyLoss, MelLoss, PitchLoss
28 from nemo.collections.tts.models.base import SpectrogramGenerator
29 from nemo.collections.tts.modules.fastpitch import FastPitchModule
30 from nemo.collections.tts.parts.mixins import FastPitchAdapterModelMixin
31 from nemo.collections.tts.parts.utils.callbacks import LoggingCallback
32 from nemo.collections.tts.parts.utils.helpers import (
33 batch_from_ragged,
34 g2p_backward_compatible_support,
35 plot_alignment_to_numpy,
36 plot_spectrogram_to_numpy,
37 process_batch,
38 sample_tts_input,
39 )
40 from nemo.core.classes import Exportable
41 from nemo.core.classes.common import PretrainedModelInfo, typecheck
42 from nemo.core.neural_types.elements import (
43 Index,
44 LengthsType,
45 MelSpectrogramType,
46 ProbsType,
47 RegressionValuesType,
48 TokenDurationType,
49 TokenIndex,
50 TokenLogDurationType,
51 )
52 from nemo.core.neural_types.neural_type import NeuralType
53 from nemo.utils import logging, model_utils
54
55
56 @dataclass
57 class G2PConfig:
58 _target_: str = "nemo.collections.tts.g2p.models.en_us_arpabet.EnglishG2p"
59 phoneme_dict: str = "scripts/tts_dataset_files/cmudict-0.7b_nv22.10"
60 heteronyms: str = "scripts/tts_dataset_files/heteronyms-052722"
61 phoneme_probability: float = 0.5
62
63
64 @dataclass
65 class TextTokenizer:
66 _target_: str = "nemo.collections.common.tokenizers.text_to_speech.tts_tokenizers.EnglishPhonemesTokenizer"
67 punct: bool = True
68 stresses: bool = True
69 chars: bool = True
70 apostrophe: bool = True
71 pad_with_space: bool = True
72 add_blank_at: bool = True
73 g2p: G2PConfig = G2PConfig()
74
75
76 @dataclass
77 class TextTokenizerConfig:
78 text_tokenizer: TextTokenizer = TextTokenizer()
79
80
81 class FastPitchModel(SpectrogramGenerator, Exportable, FastPitchAdapterModelMixin):
82 """FastPitch model (https://arxiv.org/abs/2006.06873) that is used to generate mel spectrogram from text."""
83
84 def __init__(self, cfg: DictConfig, trainer: Trainer = None):
85 # Convert to Hydra 1.0 compatible DictConfig
86 cfg = model_utils.convert_model_config_to_dict_config(cfg)
87 cfg = model_utils.maybe_update_config_version(cfg)
88
89 # Setup normalizer
90 self.normalizer = None
91 self.text_normalizer_call = None
92 self.text_normalizer_call_kwargs = {}
93 self._setup_normalizer(cfg)
94
95 self.learn_alignment = cfg.get("learn_alignment", False)
96
97 # Setup vocabulary (=tokenizer) and input_fft_kwargs (supported only with self.learn_alignment=True)
98 input_fft_kwargs = {}
99 if self.learn_alignment:
100 self.vocab = None
101
102 self.ds_class = cfg.train_ds.dataset._target_
103 self.ds_class_name = self.ds_class.split(".")[-1]
104 if self.ds_class not in [
105 "nemo.collections.tts.data.dataset.TTSDataset",
106 "nemo.collections.tts.data.text_to_speech_dataset.TextToSpeechDataset",
107 "nemo.collections.tts.torch.data.TTSDataset",
108 ]:
109 raise ValueError(f"Unknown dataset class: {self.ds_class}.")
110
111 self._setup_tokenizer(cfg)
112 assert self.vocab is not None
113 input_fft_kwargs["n_embed"] = len(self.vocab.tokens)
114 input_fft_kwargs["padding_idx"] = self.vocab.pad
115
116 self._parser = None
117 self._tb_logger = None
118 super().__init__(cfg=cfg, trainer=trainer)
119
120 self.bin_loss_warmup_epochs = cfg.get("bin_loss_warmup_epochs", 100)
121 self.log_images = cfg.get("log_images", False)
122 self.log_train_images = False
123
124 default_prosody_loss_scale = 0.1 if self.learn_alignment else 1.0
125 dur_loss_scale = cfg.get("dur_loss_scale", default_prosody_loss_scale)
126 pitch_loss_scale = cfg.get("pitch_loss_scale", default_prosody_loss_scale)
127 energy_loss_scale = cfg.get("energy_loss_scale", default_prosody_loss_scale)
128
129 self.mel_loss_fn = MelLoss()
130 self.pitch_loss_fn = PitchLoss(loss_scale=pitch_loss_scale)
131 self.duration_loss_fn = DurationLoss(loss_scale=dur_loss_scale)
132 self.energy_loss_fn = EnergyLoss(loss_scale=energy_loss_scale)
133
134 self.aligner = None
135 if self.learn_alignment:
136 aligner_loss_scale = cfg.get("aligner_loss_scale", 1.0)
137 self.aligner = instantiate(self._cfg.alignment_module)
138 self.forward_sum_loss_fn = ForwardSumLoss(loss_scale=aligner_loss_scale)
139 self.bin_loss_fn = BinLoss(loss_scale=aligner_loss_scale)
140
141 self.preprocessor = instantiate(self._cfg.preprocessor)
142 input_fft = instantiate(self._cfg.input_fft, **input_fft_kwargs)
143 output_fft = instantiate(self._cfg.output_fft)
144 duration_predictor = instantiate(self._cfg.duration_predictor)
145 pitch_predictor = instantiate(self._cfg.pitch_predictor)
146 speaker_encoder = instantiate(self._cfg.get("speaker_encoder", None))
147 energy_embedding_kernel_size = cfg.get("energy_embedding_kernel_size", 0)
148 energy_predictor = instantiate(self._cfg.get("energy_predictor", None))
149
150 # [TODO] may remove if we change the pre-trained config
151 # cfg: condition_types = [ "add" ]
152 n_speakers = cfg.get("n_speakers", 0)
153 speaker_emb_condition_prosody = cfg.get("speaker_emb_condition_prosody", False)
154 speaker_emb_condition_decoder = cfg.get("speaker_emb_condition_decoder", False)
155 speaker_emb_condition_aligner = cfg.get("speaker_emb_condition_aligner", False)
156 min_token_duration = cfg.get("min_token_duration", 0)
157 use_log_energy = cfg.get("use_log_energy", True)
158 if n_speakers > 1 and "add" not in input_fft.cond_input.condition_types:
159 input_fft.cond_input.condition_types.append("add")
160 if speaker_emb_condition_prosody:
161 duration_predictor.cond_input.condition_types.append("add")
162 pitch_predictor.cond_input.condition_types.append("add")
163 if speaker_emb_condition_decoder:
164 output_fft.cond_input.condition_types.append("add")
165 if speaker_emb_condition_aligner and self.aligner is not None:
166 self.aligner.cond_input.condition_types.append("add")
167
168 self.fastpitch = FastPitchModule(
169 input_fft,
170 output_fft,
171 duration_predictor,
172 pitch_predictor,
173 energy_predictor,
174 self.aligner,
175 speaker_encoder,
176 n_speakers,
177 cfg.symbols_embedding_dim,
178 cfg.pitch_embedding_kernel_size,
179 energy_embedding_kernel_size,
180 cfg.n_mel_channels,
181 min_token_duration,
182 cfg.max_token_duration,
183 use_log_energy,
184 )
185 self._input_types = self._output_types = None
186 self.export_config = {
187 "emb_range": (0, self.fastpitch.encoder.word_emb.num_embeddings),
188 "enable_volume": False,
189 "enable_ragged_batches": False,
190 }
191 if self.fastpitch.speaker_emb is not None:
192 self.export_config["num_speakers"] = cfg.n_speakers
193
194 self.log_config = cfg.get("log_config", None)
195
196 # Adapter modules setup (from FastPitchAdapterModelMixin)
197 self.setup_adapters()
198
199 def _get_default_text_tokenizer_conf(self):
200 text_tokenizer: TextTokenizerConfig = TextTokenizerConfig()
201 return OmegaConf.create(OmegaConf.to_yaml(text_tokenizer))
202
203 def _setup_normalizer(self, cfg):
204 if "text_normalizer" in cfg:
205 normalizer_kwargs = {}
206
207 if "whitelist" in cfg.text_normalizer:
208 normalizer_kwargs["whitelist"] = self.register_artifact(
209 'text_normalizer.whitelist', cfg.text_normalizer.whitelist
210 )
211 try:
212 import nemo_text_processing
213
214 self.normalizer = instantiate(cfg.text_normalizer, **normalizer_kwargs)
215 except Exception as e:
216 logging.error(e)
217 raise ImportError(
218 "`nemo_text_processing` not installed, see https://github.com/NVIDIA/NeMo-text-processing for more details"
219 )
220
221 self.text_normalizer_call = self.normalizer.normalize
222 if "text_normalizer_call_kwargs" in cfg:
223 self.text_normalizer_call_kwargs = cfg.text_normalizer_call_kwargs
224
225 def _setup_tokenizer(self, cfg):
226 text_tokenizer_kwargs = {}
227
228 if "g2p" in cfg.text_tokenizer:
229 # for backward compatibility
230 if (
231 self._is_model_being_restored()
232 and (cfg.text_tokenizer.g2p.get('_target_', None) is not None)
233 and cfg.text_tokenizer.g2p["_target_"].startswith("nemo_text_processing.g2p")
234 ):
235 cfg.text_tokenizer.g2p["_target_"] = g2p_backward_compatible_support(
236 cfg.text_tokenizer.g2p["_target_"]
237 )
238
239 g2p_kwargs = {}
240
241 if "phoneme_dict" in cfg.text_tokenizer.g2p:
242 g2p_kwargs["phoneme_dict"] = self.register_artifact(
243 'text_tokenizer.g2p.phoneme_dict', cfg.text_tokenizer.g2p.phoneme_dict,
244 )
245
246 if "heteronyms" in cfg.text_tokenizer.g2p:
247 g2p_kwargs["heteronyms"] = self.register_artifact(
248 'text_tokenizer.g2p.heteronyms', cfg.text_tokenizer.g2p.heteronyms,
249 )
250
251                # for backward compatibility
252 text_tokenizer_kwargs["g2p"] = instantiate(cfg.text_tokenizer.g2p, **g2p_kwargs)
253
254 # TODO @xueyang: rename the instance of tokenizer because vocab is misleading.
255 self.vocab = instantiate(cfg.text_tokenizer, **text_tokenizer_kwargs)
256
257 @property
258 def tb_logger(self):
259 if self._tb_logger is None:
260            if self.logger is None or self.logger.experiment is None:
261 return None
262 tb_logger = self.logger.experiment
263 for logger in self.trainer.loggers:
264 if isinstance(logger, TensorBoardLogger):
265 tb_logger = logger.experiment
266 break
267 self._tb_logger = tb_logger
268 return self._tb_logger
269
270 @property
271 def parser(self):
272 if self._parser is not None:
273 return self._parser
274
275 if self.learn_alignment:
276 self._parser = self.vocab.encode
277 else:
278 self._parser = parsers.make_parser(
279 labels=self._cfg.labels,
280 name='en',
281 unk_id=-1,
282 blank_id=-1,
283 do_normalize=True,
284 abbreviation_version="fastpitch",
285 make_table=False,
286 )
287 return self._parser
288
289 def parse(self, str_input: str, normalize=True) -> torch.tensor:
290 if self.training:
291 logging.warning("parse() is meant to be called in eval mode.")
292
293 if normalize and self.text_normalizer_call is not None:
294 str_input = self.text_normalizer_call(str_input, **self.text_normalizer_call_kwargs)
295
296 if self.learn_alignment:
297 eval_phon_mode = contextlib.nullcontext()
298 if hasattr(self.vocab, "set_phone_prob"):
299 eval_phon_mode = self.vocab.set_phone_prob(prob=1.0)
300
301 # Disable mixed g2p representation if necessary
302 with eval_phon_mode:
303 tokens = self.parser(str_input)
304 else:
305 tokens = self.parser(str_input)
306
307 x = torch.tensor(tokens).unsqueeze_(0).long().to(self.device)
308 return x
309
310 @typecheck(
311 input_types={
312 "text": NeuralType(('B', 'T_text'), TokenIndex()),
313 "durs": NeuralType(('B', 'T_text'), TokenDurationType()),
314 "pitch": NeuralType(('B', 'T_audio'), RegressionValuesType()),
315 "energy": NeuralType(('B', 'T_audio'), RegressionValuesType(), optional=True),
316 "speaker": NeuralType(('B'), Index(), optional=True),
317 "pace": NeuralType(optional=True),
318 "spec": NeuralType(('B', 'D', 'T_spec'), MelSpectrogramType(), optional=True),
319 "attn_prior": NeuralType(('B', 'T_spec', 'T_text'), ProbsType(), optional=True),
320 "mel_lens": NeuralType(('B'), LengthsType(), optional=True),
321 "input_lens": NeuralType(('B'), LengthsType(), optional=True),
322 # reference_* data is used for multi-speaker FastPitch training
323 "reference_spec": NeuralType(('B', 'D', 'T_spec'), MelSpectrogramType(), optional=True),
324 "reference_spec_lens": NeuralType(('B'), LengthsType(), optional=True),
325 }
326 )
327 def forward(
328 self,
329 *,
330 text,
331 durs=None,
332 pitch=None,
333 energy=None,
334 speaker=None,
335 pace=1.0,
336 spec=None,
337 attn_prior=None,
338 mel_lens=None,
339 input_lens=None,
340 reference_spec=None,
341 reference_spec_lens=None,
342 ):
343 return self.fastpitch(
344 text=text,
345 durs=durs,
346 pitch=pitch,
347 energy=energy,
348 speaker=speaker,
349 pace=pace,
350 spec=spec,
351 attn_prior=attn_prior,
352 mel_lens=mel_lens,
353 input_lens=input_lens,
354 reference_spec=reference_spec,
355 reference_spec_lens=reference_spec_lens,
356 )
357
358 @typecheck(output_types={"spect": NeuralType(('B', 'D', 'T_spec'), MelSpectrogramType())})
359 def generate_spectrogram(
360 self,
361 tokens: 'torch.tensor',
362 speaker: Optional[int] = None,
363 pace: float = 1.0,
364 reference_spec: Optional['torch.tensor'] = None,
365 reference_spec_lens: Optional['torch.tensor'] = None,
366 ) -> torch.tensor:
367 if self.training:
368 logging.warning("generate_spectrogram() is meant to be called in eval mode.")
369 if isinstance(speaker, int):
370 speaker = torch.tensor([speaker]).to(self.device)
371 spect, *_ = self(
372 text=tokens,
373 durs=None,
374 pitch=None,
375 speaker=speaker,
376 pace=pace,
377 reference_spec=reference_spec,
378 reference_spec_lens=reference_spec_lens,
379 )
380 return spect
381
382 def training_step(self, batch, batch_idx):
383 attn_prior, durs, speaker, energy, reference_audio, reference_audio_len = (
384 None,
385 None,
386 None,
387 None,
388 None,
389 None,
390 )
391 if self.learn_alignment:
392 if self.ds_class == "nemo.collections.tts.data.text_to_speech_dataset.TextToSpeechDataset":
393 batch_dict = batch
394 else:
395 batch_dict = process_batch(batch, self._train_dl.dataset.sup_data_types_set)
396 audio = batch_dict.get("audio")
397 audio_lens = batch_dict.get("audio_lens")
398 text = batch_dict.get("text")
399 text_lens = batch_dict.get("text_lens")
400 attn_prior = batch_dict.get("align_prior_matrix", None)
401 pitch = batch_dict.get("pitch", None)
402 energy = batch_dict.get("energy", None)
403 speaker = batch_dict.get("speaker_id", None)
404 reference_audio = batch_dict.get("reference_audio", None)
405 reference_audio_len = batch_dict.get("reference_audio_lens", None)
406 else:
407 audio, audio_lens, text, text_lens, durs, pitch, speaker = batch
408
409 mels, spec_len = self.preprocessor(input_signal=audio, length=audio_lens)
410 reference_spec, reference_spec_len = None, None
411 if reference_audio is not None:
412 reference_spec, reference_spec_len = self.preprocessor(
413 input_signal=reference_audio, length=reference_audio_len
414 )
415
416 (
417 mels_pred,
418 _,
419 _,
420 log_durs_pred,
421 pitch_pred,
422 attn_soft,
423 attn_logprob,
424 attn_hard,
425 attn_hard_dur,
426 pitch,
427 energy_pred,
428 energy_tgt,
429 ) = self(
430 text=text,
431 durs=durs,
432 pitch=pitch,
433 energy=energy,
434 speaker=speaker,
435 pace=1.0,
436 spec=mels if self.learn_alignment else None,
437 reference_spec=reference_spec,
438 reference_spec_lens=reference_spec_len,
439 attn_prior=attn_prior,
440 mel_lens=spec_len,
441 input_lens=text_lens,
442 )
443 if durs is None:
444 durs = attn_hard_dur
445
446 mel_loss = self.mel_loss_fn(spect_predicted=mels_pred, spect_tgt=mels)
447 dur_loss = self.duration_loss_fn(log_durs_predicted=log_durs_pred, durs_tgt=durs, len=text_lens)
448 loss = mel_loss + dur_loss
449 if self.learn_alignment:
450 ctc_loss = self.forward_sum_loss_fn(attn_logprob=attn_logprob, in_lens=text_lens, out_lens=spec_len)
451 bin_loss_weight = min(self.current_epoch / self.bin_loss_warmup_epochs, 1.0) * 1.0
452 bin_loss = self.bin_loss_fn(hard_attention=attn_hard, soft_attention=attn_soft) * bin_loss_weight
453 loss += ctc_loss + bin_loss
454
455 pitch_loss = self.pitch_loss_fn(pitch_predicted=pitch_pred, pitch_tgt=pitch, len=text_lens)
456 energy_loss = self.energy_loss_fn(energy_predicted=energy_pred, energy_tgt=energy_tgt, length=text_lens)
457 loss += pitch_loss + energy_loss
458
459 self.log("t_loss", loss)
460 self.log("t_mel_loss", mel_loss)
461 self.log("t_dur_loss", dur_loss)
462 self.log("t_pitch_loss", pitch_loss)
463 if energy_tgt is not None:
464 self.log("t_energy_loss", energy_loss)
465 if self.learn_alignment:
466 self.log("t_ctc_loss", ctc_loss)
467 self.log("t_bin_loss", bin_loss)
468
469 # Log images to tensorboard
470 if self.log_images and self.log_train_images and isinstance(self.logger, TensorBoardLogger):
471 self.log_train_images = False
472
473 self.tb_logger.add_image(
474 "train_mel_target",
475 plot_spectrogram_to_numpy(mels[0].data.cpu().float().numpy()),
476 self.global_step,
477 dataformats="HWC",
478 )
479 spec_predict = mels_pred[0].data.cpu().float().numpy()
480 self.tb_logger.add_image(
481 "train_mel_predicted", plot_spectrogram_to_numpy(spec_predict), self.global_step, dataformats="HWC",
482 )
483 if self.learn_alignment:
484 attn = attn_hard[0].data.cpu().float().numpy().squeeze()
485 self.tb_logger.add_image(
486 "train_attn", plot_alignment_to_numpy(attn.T), self.global_step, dataformats="HWC",
487 )
488 soft_attn = attn_soft[0].data.cpu().float().numpy().squeeze()
489 self.tb_logger.add_image(
490 "train_soft_attn", plot_alignment_to_numpy(soft_attn.T), self.global_step, dataformats="HWC",
491 )
492
493 return loss
494
495 def validation_step(self, batch, batch_idx):
496 attn_prior, durs, speaker, energy, reference_audio, reference_audio_len = (
497 None,
498 None,
499 None,
500 None,
501 None,
502 None,
503 )
504 if self.learn_alignment:
505 if self.ds_class == "nemo.collections.tts.data.text_to_speech_dataset.TextToSpeechDataset":
506 batch_dict = batch
507 else:
508 batch_dict = process_batch(batch, self._train_dl.dataset.sup_data_types_set)
509 audio = batch_dict.get("audio")
510 audio_lens = batch_dict.get("audio_lens")
511 text = batch_dict.get("text")
512 text_lens = batch_dict.get("text_lens")
513 attn_prior = batch_dict.get("align_prior_matrix", None)
514 pitch = batch_dict.get("pitch", None)
515 energy = batch_dict.get("energy", None)
516 speaker = batch_dict.get("speaker_id", None)
517 reference_audio = batch_dict.get("reference_audio", None)
518 reference_audio_len = batch_dict.get("reference_audio_lens", None)
519 else:
520 audio, audio_lens, text, text_lens, durs, pitch, speaker = batch
521
522 mels, mel_lens = self.preprocessor(input_signal=audio, length=audio_lens)
523 reference_spec, reference_spec_len = None, None
524 if reference_audio is not None:
525 reference_spec, reference_spec_len = self.preprocessor(
526 input_signal=reference_audio, length=reference_audio_len
527 )
528
529 # Calculate val loss on ground truth durations to better align L2 loss in time
530 (mels_pred, _, _, log_durs_pred, pitch_pred, _, _, _, attn_hard_dur, pitch, energy_pred, energy_tgt,) = self(
531 text=text,
532 durs=durs,
533 pitch=pitch,
534 energy=energy,
535 speaker=speaker,
536 pace=1.0,
537 spec=mels if self.learn_alignment else None,
538 reference_spec=reference_spec,
539 reference_spec_lens=reference_spec_len,
540 attn_prior=attn_prior,
541 mel_lens=mel_lens,
542 input_lens=text_lens,
543 )
544 if durs is None:
545 durs = attn_hard_dur
546
547 mel_loss = self.mel_loss_fn(spect_predicted=mels_pred, spect_tgt=mels)
548 dur_loss = self.duration_loss_fn(log_durs_predicted=log_durs_pred, durs_tgt=durs, len=text_lens)
549 pitch_loss = self.pitch_loss_fn(pitch_predicted=pitch_pred, pitch_tgt=pitch, len=text_lens)
550 energy_loss = self.energy_loss_fn(energy_predicted=energy_pred, energy_tgt=energy_tgt, length=text_lens)
551 loss = mel_loss + dur_loss + pitch_loss + energy_loss
552
553 val_outputs = {
554 "val_loss": loss,
555 "mel_loss": mel_loss,
556 "dur_loss": dur_loss,
557 "pitch_loss": pitch_loss,
558 "energy_loss": energy_loss if energy_tgt is not None else None,
559 "mel_target": mels if batch_idx == 0 else None,
560 "mel_pred": mels_pred if batch_idx == 0 else None,
561 }
562 self.validation_step_outputs.append(val_outputs)
563 return val_outputs
564
565 def on_validation_epoch_end(self):
566 collect = lambda key: torch.stack([x[key] for x in self.validation_step_outputs]).mean()
567 val_loss = collect("val_loss")
568 mel_loss = collect("mel_loss")
569 dur_loss = collect("dur_loss")
570 pitch_loss = collect("pitch_loss")
571 self.log("val_loss", val_loss, sync_dist=True)
572 self.log("val_mel_loss", mel_loss, sync_dist=True)
573 self.log("val_dur_loss", dur_loss, sync_dist=True)
574 self.log("val_pitch_loss", pitch_loss, sync_dist=True)
575 if self.validation_step_outputs[0]["energy_loss"] is not None:
576 energy_loss = collect("energy_loss")
577 self.log("val_energy_loss", energy_loss, sync_dist=True)
578
579 _, _, _, _, _, spec_target, spec_predict = self.validation_step_outputs[0].values()
580
581 if self.log_images and isinstance(self.logger, TensorBoardLogger):
582 self.tb_logger.add_image(
583 "val_mel_target",
584 plot_spectrogram_to_numpy(spec_target[0].data.cpu().float().numpy()),
585 self.global_step,
586 dataformats="HWC",
587 )
588 spec_predict = spec_predict[0].data.cpu().float().numpy()
589 self.tb_logger.add_image(
590 "val_mel_predicted", plot_spectrogram_to_numpy(spec_predict), self.global_step, dataformats="HWC",
591 )
592 self.log_train_images = True
593        self.validation_step_outputs.clear()  # free memory
594
595 def _setup_train_dataloader(self, cfg):
596 phon_mode = contextlib.nullcontext()
597 if hasattr(self.vocab, "set_phone_prob"):
598 phon_mode = self.vocab.set_phone_prob(self.vocab.phoneme_probability)
599
600 with phon_mode:
601 dataset = instantiate(cfg.dataset, text_tokenizer=self.vocab,)
602
603 sampler = dataset.get_sampler(cfg.dataloader_params.batch_size)
604 return torch.utils.data.DataLoader(
605 dataset, collate_fn=dataset.collate_fn, sampler=sampler, **cfg.dataloader_params
606 )
607
608 def _setup_test_dataloader(self, cfg):
609 phon_mode = contextlib.nullcontext()
610 if hasattr(self.vocab, "set_phone_prob"):
611 phon_mode = self.vocab.set_phone_prob(0.0)
612
613 with phon_mode:
614 dataset = instantiate(cfg.dataset, text_tokenizer=self.vocab,)
615
616 return torch.utils.data.DataLoader(dataset, collate_fn=dataset.collate_fn, **cfg.dataloader_params)
617
618 def __setup_dataloader_from_config(self, cfg, shuffle_should_be: bool = True, name: str = "train"):
619 if "dataset" not in cfg or not isinstance(cfg.dataset, DictConfig):
620 raise ValueError(f"No dataset for {name}")
621 if "dataloader_params" not in cfg or not isinstance(cfg.dataloader_params, DictConfig):
622 raise ValueError(f"No dataloader_params for {name}")
623 if shuffle_should_be:
624 if 'shuffle' not in cfg.dataloader_params:
625 logging.warning(
626 f"Shuffle should be set to True for {self}'s {name} dataloader but was not found in its "
627 "config. Manually setting to True"
628 )
629 with open_dict(cfg.dataloader_params):
630 cfg.dataloader_params.shuffle = True
631 elif not cfg.dataloader_params.shuffle:
632 logging.error(f"The {name} dataloader for {self} has shuffle set to False!!!")
633 elif cfg.dataloader_params.shuffle:
634 logging.error(f"The {name} dataloader for {self} has shuffle set to True!!!")
635
636 if self.ds_class == "nemo.collections.tts.data.dataset.TTSDataset":
637 phon_mode = contextlib.nullcontext()
638 if hasattr(self.vocab, "set_phone_prob"):
639 phon_mode = self.vocab.set_phone_prob(prob=None if name == "val" else self.vocab.phoneme_probability)
640
641 with phon_mode:
642 dataset = instantiate(
643 cfg.dataset,
644 text_normalizer=self.normalizer,
645 text_normalizer_call_kwargs=self.text_normalizer_call_kwargs,
646 text_tokenizer=self.vocab,
647 )
648 else:
649 dataset = instantiate(cfg.dataset)
650
651 return torch.utils.data.DataLoader(dataset, collate_fn=dataset.collate_fn, **cfg.dataloader_params)
652
653 def setup_training_data(self, cfg):
654 if self.ds_class == "nemo.collections.tts.data.text_to_speech_dataset.TextToSpeechDataset":
655 self._train_dl = self._setup_train_dataloader(cfg)
656 else:
657 self._train_dl = self.__setup_dataloader_from_config(cfg)
658
659 def setup_validation_data(self, cfg):
660 if self.ds_class == "nemo.collections.tts.data.text_to_speech_dataset.TextToSpeechDataset":
661 self._validation_dl = self._setup_test_dataloader(cfg)
662 else:
663 self._validation_dl = self.__setup_dataloader_from_config(cfg, shuffle_should_be=False, name="val")
664
665 def setup_test_data(self, cfg):
666 """Omitted."""
667 pass
668
669 def configure_callbacks(self):
670 if not self.log_config:
671 return []
672
673 sample_ds_class = self.log_config.dataset._target_
674 if sample_ds_class != "nemo.collections.tts.data.text_to_speech_dataset.TextToSpeechDataset":
675 raise ValueError(f"Logging callback only supported for TextToSpeechDataset, got {sample_ds_class}")
676
677 data_loader = self._setup_test_dataloader(self.log_config)
678
679 generators = instantiate(self.log_config.generators)
680 log_dir = Path(self.log_config.log_dir) if self.log_config.log_dir else None
681 log_callback = LoggingCallback(
682 generators=generators,
683 data_loader=data_loader,
684 log_epochs=self.log_config.log_epochs,
685 epoch_frequency=self.log_config.epoch_frequency,
686 output_dir=log_dir,
687 loggers=self.trainer.loggers,
688 log_tensorboard=self.log_config.log_tensorboard,
689 log_wandb=self.log_config.log_wandb,
690 )
691
692 return [log_callback]
693
694 @classmethod
695 def list_available_models(cls) -> 'List[PretrainedModelInfo]':
696 """
697 This method returns a list of pre-trained model which can be instantiated directly from NVIDIA's NGC cloud.
698 Returns:
699 List of available pre-trained models.
700 """
701 list_of_models = []
702
703 # en-US, single speaker, 22050Hz, LJSpeech (ARPABET).
704 model = PretrainedModelInfo(
705 pretrained_model_name="tts_en_fastpitch",
706 location="https://api.ngc.nvidia.com/v2/models/nvidia/nemo/tts_en_fastpitch/versions/1.8.1/files/tts_en_fastpitch_align.nemo",
707            description="This model is trained on LJSpeech sampled at 22050Hz and can be used to generate female English voices with an American accent. It is ARPABET-based.",
708 class_=cls,
709 )
710 list_of_models.append(model)
711
712 # en-US, single speaker, 22050Hz, LJSpeech (IPA).
713 model = PretrainedModelInfo(
714 pretrained_model_name="tts_en_fastpitch_ipa",
715 location="https://api.ngc.nvidia.com/v2/models/nvidia/nemo/tts_en_fastpitch/versions/IPA_1.13.0/files/tts_en_fastpitch_align_ipa.nemo",
716            description="This model is trained on LJSpeech sampled at 22050Hz and can be used to generate female English voices with an American accent. It is IPA-based.",
717 class_=cls,
718 )
719 list_of_models.append(model)
720
721 # en-US, multi-speaker, 44100Hz, HiFiTTS.
722 model = PretrainedModelInfo(
723 pretrained_model_name="tts_en_fastpitch_multispeaker",
724 location="https://api.ngc.nvidia.com/v2/models/nvidia/nemo/tts_en_multispeaker_fastpitchhifigan/versions/1.10.0/files/tts_en_fastpitch_multispeaker.nemo",
725            description="This model is trained on HiFiTTS sampled at 44100Hz and can be used to generate male and female English voices with an American accent.",
726 class_=cls,
727 )
728 list_of_models.append(model)
729
730        # de-DE, single male speaker, grapheme-based tokenizer, 22050 Hz, Thorsten Müller's German Neutral-TTS Dataset, 21.02
731 model = PretrainedModelInfo(
732 pretrained_model_name="tts_de_fastpitch_singleSpeaker_thorstenNeutral_2102",
733 location="https://api.ngc.nvidia.com/v2/models/nvidia/nemo/tts_de_fastpitchhifigan/versions/1.15.0/files/tts_de_fastpitch_thorstens2102.nemo",
734            description="This model is trained on single male speaker data in Thorsten Müller's German Neutral 21.02 Dataset sampled at 22050Hz and can be used to generate male German voices.",
735 class_=cls,
736 )
737 list_of_models.append(model)
738
739        # de-DE, single male speaker, grapheme-based tokenizer, 22050 Hz, Thorsten Müller's German Neutral-TTS Dataset, 22.10
740 model = PretrainedModelInfo(
741 pretrained_model_name="tts_de_fastpitch_singleSpeaker_thorstenNeutral_2210",
742 location="https://api.ngc.nvidia.com/v2/models/nvidia/nemo/tts_de_fastpitchhifigan/versions/1.15.0/files/tts_de_fastpitch_thorstens2210.nemo",
743            description="This model is trained on single male speaker data in Thorsten Müller's German Neutral 22.10 Dataset sampled at 22050Hz and can be used to generate male German voices.",
744 class_=cls,
745 )
746 list_of_models.append(model)
747
748 # de-DE, multi-speaker, 5 speakers, 44100 Hz, HUI-Audio-Corpus-German Clean.
749 model = PretrainedModelInfo(
750 pretrained_model_name="tts_de_fastpitch_multispeaker_5",
751 location="https://api.ngc.nvidia.com/v2/models/nvidia/nemo/tts_de_fastpitch_multispeaker_5/versions/1.11.0/files/tts_de_fastpitch_multispeaker_5.nemo",
752            description="This model is trained on 5 speakers in HUI-Audio-Corpus-German clean subset sampled at 44100Hz and can be used to generate male and female German voices.",
753 class_=cls,
754 )
755 list_of_models.append(model)
756
757 # es, 174 speakers, 44100Hz, OpenSLR (IPA)
758 model = PretrainedModelInfo(
759 pretrained_model_name="tts_es_fastpitch_multispeaker",
760 location="https://api.ngc.nvidia.com/v2/models/nvidia/nemo/tts_es_multispeaker_fastpitchhifigan/versions/1.15.0/files/tts_es_fastpitch_multispeaker.nemo",
761 description="This model is trained on 174 speakers in 6 crowdsourced Latin American Spanish OpenSLR datasets sampled at 44100Hz and can be used to generate male and female Spanish voices with Latin American accents.",
762 class_=cls,
763 )
764 list_of_models.append(model)
765
766 # zh, single female speaker, 22050Hz, SFSpeech Bilingual Chinese/English dataset, improved model using richer
767 # dict and jieba word segmenter for polyphone disambiguation.
768 model = PretrainedModelInfo(
769 pretrained_model_name="tts_zh_fastpitch_sfspeech",
770 location="https://api.ngc.nvidia.com/v2/models/nvidia/nemo/tts_zh_fastpitch_hifigan_sfspeech/versions/1.15.0/files/tts_zh_fastpitch_sfspeech.nemo",
771 description="This model is trained on a single female speaker in SFSpeech Bilingual Chinese/English dataset"
772 " sampled at 22050Hz and can be used to generate female Mandarin Chinese voices. It is improved"
773 " using richer dict and jieba word segmenter for polyphone disambiguation.",
774 class_=cls,
775 )
776 list_of_models.append(model)
777
778 # en, multi speaker, LibriTTS, 16000 Hz
779 # stft 25ms 10ms matching ASR params
780        # for use during English ASR training/adaptation
781 model = PretrainedModelInfo(
782 pretrained_model_name="tts_en_fastpitch_for_asr_finetuning",
783 location="https://api.ngc.nvidia.com/v2/models/nvidia/nemo/tts_en_fastpitch_spectrogram_enhancer_for_asr_finetuning/versions/1.20.0/files/tts_en_fastpitch_for_asr_finetuning.nemo",
784 description="This model is trained on LibriSpeech, train-960 subset."
785 " STFT parameters follow those commonly used in ASR: 25 ms window, 10 ms hop."
786            " This model is supposed to be used with its companion SpectrogramEnhancer for"
787            " ASR fine-tuning. Usage for regular TTS tasks is not advised.",
788 class_=cls,
789 )
790 list_of_models.append(model)
791
792 return list_of_models
793
794 # Methods for model exportability
795 def _prepare_for_export(self, **kwargs):
796 super()._prepare_for_export(**kwargs)
797
798 tensor_shape = ('T') if self.export_config["enable_ragged_batches"] else ('B', 'T')
799
800 # Define input_types and output_types as required by export()
801 self._input_types = {
802 "text": NeuralType(tensor_shape, TokenIndex()),
803 "pitch": NeuralType(tensor_shape, RegressionValuesType()),
804 "pace": NeuralType(tensor_shape),
805 "volume": NeuralType(tensor_shape, optional=True),
806 "batch_lengths": NeuralType(('B'), optional=True),
807 "speaker": NeuralType(('B'), Index(), optional=True),
808 }
809 self._output_types = {
810 "spect": NeuralType(('B', 'D', 'T'), MelSpectrogramType()),
811 "num_frames": NeuralType(('B'), TokenDurationType()),
812 "durs_predicted": NeuralType(('B', 'T'), TokenDurationType()),
813 "log_durs_predicted": NeuralType(('B', 'T'), TokenLogDurationType()),
814 "pitch_predicted": NeuralType(('B', 'T'), RegressionValuesType()),
815 }
816 if self.export_config["enable_volume"]:
817 self._output_types["volume_aligned"] = NeuralType(('B', 'T'), RegressionValuesType())
818
819 def _export_teardown(self):
820 self._input_types = self._output_types = None
821
822 @property
823 def disabled_deployment_input_names(self):
824 """Implement this method to return a set of input names disabled for export"""
825 disabled_inputs = set()
826 if self.fastpitch.speaker_emb is None:
827 disabled_inputs.add("speaker")
828 if not self.export_config["enable_ragged_batches"]:
829 disabled_inputs.add("batch_lengths")
830 if not self.export_config["enable_volume"]:
831 disabled_inputs.add("volume")
832 return disabled_inputs
833
834 @property
835 def input_types(self):
836 return self._input_types
837
838 @property
839 def output_types(self):
840 return self._output_types
841
842 def input_example(self, max_batch=1, max_dim=44):
843 """
844 Generates input examples for tracing etc.
845 Returns:
846 A tuple of input examples.
847 """
848 par = next(self.fastpitch.parameters())
849 inputs = sample_tts_input(self.export_config, par.device, max_batch=max_batch, max_dim=max_dim)
850 if 'enable_ragged_batches' not in self.export_config:
851 inputs.pop('batch_lengths', None)
852 return (inputs,)
853
854 def forward_for_export(self, text, pitch, pace, volume=None, batch_lengths=None, speaker=None):
855 if self.export_config["enable_ragged_batches"]:
856 text, pitch, pace, volume_tensor, lens = batch_from_ragged(
857 text, pitch, pace, batch_lengths, padding_idx=self.fastpitch.encoder.padding_idx, volume=volume
858 )
859 if volume is not None:
860 volume = volume_tensor
861 return self.fastpitch.infer(text=text, pitch=pitch, pace=pace, volume=volume, speaker=speaker)
862
863 def interpolate_speaker(
864 self, original_speaker_1, original_speaker_2, weight_speaker_1, weight_speaker_2, new_speaker_id
865 ):
866 """
867 This method performs speaker interpolation between two original speakers the model is trained on.
868
869 Inputs:
870 original_speaker_1: Integer speaker ID of first existing speaker in the model
871 original_speaker_2: Integer speaker ID of second existing speaker in the model
872            weight_speaker_1: Floating point weight applied to the first speaker during weight combination
873            weight_speaker_2: Floating point weight applied to the second speaker during weight combination
874 new_speaker_id: Integer speaker ID of new interpolated speaker in the model
875 """
876 if self.fastpitch.speaker_emb is None:
877 raise Exception(
878 "Current FastPitch model is not a multi-speaker FastPitch model. Speaker interpolation can only \
879 be performed with a multi-speaker model"
880 )
881 n_speakers = self.fastpitch.speaker_emb.weight.data.size()[0]
882 if original_speaker_1 >= n_speakers or original_speaker_2 >= n_speakers or new_speaker_id >= n_speakers:
883 raise Exception(
884                f"Parameters original_speaker_1, original_speaker_2, new_speaker_id should be less than the \
885                total number of speakers FastPitch was trained on (n_speakers = {n_speakers})."
886 )
887 speaker_emb_1 = (
888 self.fastpitch.speaker_emb(torch.tensor(original_speaker_1, dtype=torch.int32).cuda()).clone().detach()
889 )
890 speaker_emb_2 = (
891 self.fastpitch.speaker_emb(torch.tensor(original_speaker_2, dtype=torch.int32).cuda()).clone().detach()
892 )
893 new_speaker_emb = weight_speaker_1 * speaker_emb_1 + weight_speaker_2 * speaker_emb_2
894 self.fastpitch.speaker_emb.weight.data[new_speaker_id] = new_speaker_emb
895
[end of nemo/collections/tts/models/fastpitch.py]
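The `interpolate_speaker()` method above replaces one row of the speaker-embedding table with an element-wise weighted sum of two existing rows. A minimal sketch of that combination in plain Python (the `interpolate_embeddings` helper and the list-based vectors are illustrative only, not part of NeMo):

```python
# Illustrative sketch, not NeMo code: interpolate_speaker() computes
# new_emb = weight_speaker_1 * emb_1 + weight_speaker_2 * emb_2, element-wise,
# and stores the result at index new_speaker_id in the embedding table.
def interpolate_embeddings(emb_1, emb_2, w_1, w_2):
    """Element-wise weighted combination of two embedding vectors."""
    return [w_1 * a + w_2 * b for a, b in zip(emb_1, emb_2)]

# Equal weights give the midpoint of the two speaker embeddings.
print(interpolate_embeddings([1.0, 2.0], [3.0, 4.0], 0.5, 0.5))  # -> [2.0, 3.0]
```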
[start of nemo/collections/tts/models/tacotron2.py]
1 # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import contextlib
16 from dataclasses import dataclass
17 from typing import Any, Dict, List, Optional
18
19 import torch
20 from hydra.utils import instantiate
21 from omegaconf import MISSING, DictConfig, OmegaConf, open_dict
22 from omegaconf.errors import ConfigAttributeError
23 from pytorch_lightning.loggers import TensorBoardLogger, WandbLogger
24 from torch import nn
25
26 from nemo.collections.common.parts.preprocessing import parsers
27 from nemo.collections.tts.losses.tacotron2loss import Tacotron2Loss
28 from nemo.collections.tts.models.base import SpectrogramGenerator
29 from nemo.collections.tts.parts.utils.helpers import (
30 g2p_backward_compatible_support,
31 get_mask_from_lengths,
32 tacotron2_log_to_tb_func,
33 tacotron2_log_to_wandb_func,
34 )
35 from nemo.core.classes.common import PretrainedModelInfo, typecheck
36 from nemo.core.neural_types.elements import (
37 AudioSignal,
38 EmbeddedTextType,
39 LengthsType,
40 LogitsType,
41 MelSpectrogramType,
42 SequenceToSequenceAlignmentType,
43 )
44 from nemo.core.neural_types.neural_type import NeuralType
45 from nemo.utils import logging, model_utils
46
47
48 @dataclass
49 class Preprocessor:
50 _target_: str = MISSING
51 pad_value: float = MISSING
52
53
54 @dataclass
55 class Tacotron2Config:
56 preprocessor: Preprocessor = Preprocessor()
57 encoder: Dict[Any, Any] = MISSING
58 decoder: Dict[Any, Any] = MISSING
59 postnet: Dict[Any, Any] = MISSING
60 labels: List = MISSING
61 train_ds: Optional[Dict[Any, Any]] = None
62 validation_ds: Optional[Dict[Any, Any]] = None
63
64
65 class Tacotron2Model(SpectrogramGenerator):
66 """Tacotron 2 Model that is used to generate mel spectrograms from text"""
67
68 def __init__(self, cfg: DictConfig, trainer: 'Trainer' = None):
69 # Convert to Hydra 1.0 compatible DictConfig
70 cfg = model_utils.convert_model_config_to_dict_config(cfg)
71 cfg = model_utils.maybe_update_config_version(cfg)
72
73 # setup normalizer
74 self.normalizer = None
75 self.text_normalizer_call = None
76 self.text_normalizer_call_kwargs = {}
77 self._setup_normalizer(cfg)
78
79 # setup tokenizer
80 self.tokenizer = None
81 if hasattr(cfg, 'text_tokenizer'):
82 self._setup_tokenizer(cfg)
83
84 self.num_tokens = len(self.tokenizer.tokens)
85 self.tokenizer_pad = self.tokenizer.pad
86 self.tokenizer_unk = self.tokenizer.oov
87 # assert self.tokenizer is not None
88 else:
89 self.num_tokens = len(cfg.labels) + 3
90
91 super().__init__(cfg=cfg, trainer=trainer)
92
93 schema = OmegaConf.structured(Tacotron2Config)
94 # ModelPT ensures that cfg is a DictConfig, but do this second check in case ModelPT changes
95 if isinstance(cfg, dict):
96 cfg = OmegaConf.create(cfg)
97 elif not isinstance(cfg, DictConfig):
98 raise ValueError(f"cfg was type: {type(cfg)}. Expected either a dict or a DictConfig")
99 # Ensure passed cfg is compliant with schema
100 try:
101 OmegaConf.merge(cfg, schema)
102 self.pad_value = cfg.preprocessor.pad_value
103 except ConfigAttributeError:
104 self.pad_value = cfg.preprocessor.params.pad_value
105 logging.warning(
106 "Your config is using an old NeMo yaml configuration. Please ensure that the yaml matches the "
107 "current version in the main branch for future compatibility."
108 )
109
110 self._parser = None
111 self.audio_to_melspec_precessor = instantiate(cfg.preprocessor)
112 self.text_embedding = nn.Embedding(self.num_tokens, 512)
113 self.encoder = instantiate(self._cfg.encoder)
114 self.decoder = instantiate(self._cfg.decoder)
115 self.postnet = instantiate(self._cfg.postnet)
116 self.loss = Tacotron2Loss()
117 self.calculate_loss = True
118
119 @property
120 def parser(self):
121 if self._parser is not None:
122 return self._parser
123
124 ds_class_name = self._cfg.train_ds.dataset._target_.split(".")[-1]
125 if ds_class_name == "TTSDataset":
126 self._parser = None
127 elif hasattr(self._cfg, "labels"):
128 self._parser = parsers.make_parser(
129 labels=self._cfg.labels,
130 name='en',
131 unk_id=-1,
132 blank_id=-1,
133 do_normalize=True,
134 abbreviation_version="fastpitch",
135 make_table=False,
136 )
137 else:
138             raise ValueError("Wanted to set up the parser, but the model does not have the necessary parameters")
139
140 return self._parser
141
142 def parse(self, text: str, normalize=True) -> torch.Tensor:
143 if self.training:
144 logging.warning("parse() is meant to be called in eval mode.")
145 if normalize and self.text_normalizer_call is not None:
146 text = self.text_normalizer_call(text, **self.text_normalizer_call_kwargs)
147
148 eval_phon_mode = contextlib.nullcontext()
149 if hasattr(self.tokenizer, "set_phone_prob"):
150 eval_phon_mode = self.tokenizer.set_phone_prob(prob=1.0)
151
152 with eval_phon_mode:
153 if self.tokenizer is not None:
154 tokens = self.tokenizer.encode(text)
155 else:
156 tokens = self.parser(text)
157                 # Old parser doesn't add bos and eos ids, so manually add them
158 tokens = [len(self._cfg.labels)] + tokens + [len(self._cfg.labels) + 1]
159 tokens_tensor = torch.tensor(tokens).unsqueeze_(0).to(self.device)
160 return tokens_tensor
161
162 @property
163 def input_types(self):
164 if self.training:
165 return {
166 "tokens": NeuralType(('B', 'T'), EmbeddedTextType()),
167 "token_len": NeuralType(('B'), LengthsType()),
168 "audio": NeuralType(('B', 'T'), AudioSignal()),
169 "audio_len": NeuralType(('B'), LengthsType()),
170 }
171 else:
172 return {
173 "tokens": NeuralType(('B', 'T'), EmbeddedTextType()),
174 "token_len": NeuralType(('B'), LengthsType()),
175 "audio": NeuralType(('B', 'T'), AudioSignal(), optional=True),
176 "audio_len": NeuralType(('B'), LengthsType(), optional=True),
177 }
178
179 @property
180 def output_types(self):
181 if not self.calculate_loss and not self.training:
182 return {
183 "spec_pred_dec": NeuralType(('B', 'D', 'T'), MelSpectrogramType()),
184 "spec_pred_postnet": NeuralType(('B', 'D', 'T'), MelSpectrogramType()),
185 "gate_pred": NeuralType(('B', 'T'), LogitsType()),
186 "alignments": NeuralType(('B', 'T', 'T'), SequenceToSequenceAlignmentType()),
187 "pred_length": NeuralType(('B'), LengthsType()),
188 }
189 return {
190 "spec_pred_dec": NeuralType(('B', 'D', 'T'), MelSpectrogramType()),
191 "spec_pred_postnet": NeuralType(('B', 'D', 'T'), MelSpectrogramType()),
192 "gate_pred": NeuralType(('B', 'T'), LogitsType()),
193 "spec_target": NeuralType(('B', 'D', 'T'), MelSpectrogramType()),
194 "spec_target_len": NeuralType(('B'), LengthsType()),
195 "alignments": NeuralType(('B', 'T', 'T'), SequenceToSequenceAlignmentType()),
196 }
197
198 @typecheck()
199 def forward(self, *, tokens, token_len, audio=None, audio_len=None):
200 if audio is not None and audio_len is not None:
201 spec_target, spec_target_len = self.audio_to_melspec_precessor(audio, audio_len)
202 else:
203 if self.training or self.calculate_loss:
204 raise ValueError(
205                     "'audio' and 'audio_len' cannot be None when either 'self.training' or 'self.calculate_loss' is True."
206 )
207
208 token_embedding = self.text_embedding(tokens).transpose(1, 2)
209 encoder_embedding = self.encoder(token_embedding=token_embedding, token_len=token_len)
210
211 if self.training:
212 spec_pred_dec, gate_pred, alignments = self.decoder(
213 memory=encoder_embedding, decoder_inputs=spec_target, memory_lengths=token_len
214 )
215 else:
216 spec_pred_dec, gate_pred, alignments, pred_length = self.decoder(
217 memory=encoder_embedding, memory_lengths=token_len
218 )
219
220 spec_pred_postnet = self.postnet(mel_spec=spec_pred_dec)
221
222 if not self.calculate_loss and not self.training:
223 return spec_pred_dec, spec_pred_postnet, gate_pred, alignments, pred_length
224
225 return spec_pred_dec, spec_pred_postnet, gate_pred, spec_target, spec_target_len, alignments
226
227 @typecheck(
228 input_types={"tokens": NeuralType(('B', 'T'), EmbeddedTextType())},
229 output_types={"spec": NeuralType(('B', 'D', 'T'), MelSpectrogramType())},
230 )
231 def generate_spectrogram(self, *, tokens):
232 self.eval()
233 self.calculate_loss = False
234 token_len = torch.tensor([len(i) for i in tokens]).to(self.device)
235 tensors = self(tokens=tokens, token_len=token_len)
236 spectrogram_pred = tensors[1]
237
238 if spectrogram_pred.shape[0] > 1:
239 # Silence all frames past the predicted end
240 mask = ~get_mask_from_lengths(tensors[-1])
241 mask = mask.expand(spectrogram_pred.shape[1], mask.size(0), mask.size(1))
242 mask = mask.permute(1, 0, 2)
243 spectrogram_pred.data.masked_fill_(mask, self.pad_value)
244
245 return spectrogram_pred
246
247 def training_step(self, batch, batch_idx):
248 audio, audio_len, tokens, token_len = batch
249 spec_pred_dec, spec_pred_postnet, gate_pred, spec_target, spec_target_len, _ = self.forward(
250 audio=audio, audio_len=audio_len, tokens=tokens, token_len=token_len
251 )
252
253 loss, _ = self.loss(
254 spec_pred_dec=spec_pred_dec,
255 spec_pred_postnet=spec_pred_postnet,
256 gate_pred=gate_pred,
257 spec_target=spec_target,
258 spec_target_len=spec_target_len,
259 pad_value=self.pad_value,
260 )
261
262 output = {
263 'loss': loss,
264 'progress_bar': {'training_loss': loss},
265 'log': {'loss': loss},
266 }
267 return output
268
269 def validation_step(self, batch, batch_idx):
270 audio, audio_len, tokens, token_len = batch
271 spec_pred_dec, spec_pred_postnet, gate_pred, spec_target, spec_target_len, alignments = self.forward(
272 audio=audio, audio_len=audio_len, tokens=tokens, token_len=token_len
273 )
274
275 loss, gate_target = self.loss(
276 spec_pred_dec=spec_pred_dec,
277 spec_pred_postnet=spec_pred_postnet,
278 gate_pred=gate_pred,
279 spec_target=spec_target,
280 spec_target_len=spec_target_len,
281 pad_value=self.pad_value,
282 )
283 loss = {
284 "val_loss": loss,
285 "mel_target": spec_target,
286 "mel_postnet": spec_pred_postnet,
287 "gate": gate_pred,
288 "gate_target": gate_target,
289 "alignments": alignments,
290 }
291 self.validation_step_outputs.append(loss)
292 return loss
293
294 def on_validation_epoch_end(self):
295         # Start from self.logger's experiment (may be None) so `logger` is always defined below
296         logger = self.logger.experiment if self.logger is not None else None
297 for logger in self.trainer.loggers:
298 if isinstance(logger, TensorBoardLogger):
299 logger = logger.experiment
300 break
301 if isinstance(logger, TensorBoardLogger):
302 tacotron2_log_to_tb_func(
303 logger,
304 self.validation_step_outputs[0].values(),
305 self.global_step,
306 tag="val",
307 log_images=True,
308 add_audio=False,
309 )
310 elif isinstance(logger, WandbLogger):
311 tacotron2_log_to_wandb_func(
312 logger,
313 self.validation_step_outputs[0].values(),
314 self.global_step,
315 tag="val",
316 log_images=True,
317 add_audio=False,
318 )
319 avg_loss = torch.stack(
320 [x['val_loss'] for x in self.validation_step_outputs]
321 ).mean() # This reduces across batches, not workers!
322 self.log('val_loss', avg_loss)
323 self.validation_step_outputs.clear() # free memory
324
325 def _setup_normalizer(self, cfg):
326 if "text_normalizer" in cfg:
327 normalizer_kwargs = {}
328
329 if "whitelist" in cfg.text_normalizer:
330 normalizer_kwargs["whitelist"] = self.register_artifact(
331 'text_normalizer.whitelist', cfg.text_normalizer.whitelist
332 )
333
334 try:
335 import nemo_text_processing
336
337 self.normalizer = instantiate(cfg.text_normalizer, **normalizer_kwargs)
338 except Exception as e:
339 logging.error(e)
340 raise ImportError(
341 "`nemo_text_processing` not installed, see https://github.com/NVIDIA/NeMo-text-processing for more details"
342 )
343
344 self.text_normalizer_call = self.normalizer.normalize
345 if "text_normalizer_call_kwargs" in cfg:
346 self.text_normalizer_call_kwargs = cfg.text_normalizer_call_kwargs
347
348 def _setup_tokenizer(self, cfg):
349 text_tokenizer_kwargs = {}
350 if "g2p" in cfg.text_tokenizer and cfg.text_tokenizer.g2p is not None:
351 # for backward compatibility
352 if (
353 self._is_model_being_restored()
354 and (cfg.text_tokenizer.g2p.get('_target_', None) is not None)
355 and cfg.text_tokenizer.g2p["_target_"].startswith("nemo_text_processing.g2p")
356 ):
357 cfg.text_tokenizer.g2p["_target_"] = g2p_backward_compatible_support(
358 cfg.text_tokenizer.g2p["_target_"]
359 )
360
361 g2p_kwargs = {}
362
363 if "phoneme_dict" in cfg.text_tokenizer.g2p:
364 g2p_kwargs["phoneme_dict"] = self.register_artifact(
365 'text_tokenizer.g2p.phoneme_dict', cfg.text_tokenizer.g2p.phoneme_dict,
366 )
367
368 if "heteronyms" in cfg.text_tokenizer.g2p:
369 g2p_kwargs["heteronyms"] = self.register_artifact(
370 'text_tokenizer.g2p.heteronyms', cfg.text_tokenizer.g2p.heteronyms,
371 )
372
373 text_tokenizer_kwargs["g2p"] = instantiate(cfg.text_tokenizer.g2p, **g2p_kwargs)
374
375 self.tokenizer = instantiate(cfg.text_tokenizer, **text_tokenizer_kwargs)
376
377 def __setup_dataloader_from_config(self, cfg, shuffle_should_be: bool = True, name: str = "train"):
378 if "dataset" not in cfg or not isinstance(cfg.dataset, DictConfig):
379 raise ValueError(f"No dataset for {name}")
380 if "dataloader_params" not in cfg or not isinstance(cfg.dataloader_params, DictConfig):
381             raise ValueError(f"No dataloader_params for {name}")
382 if shuffle_should_be:
383 if 'shuffle' not in cfg.dataloader_params:
384 logging.warning(
385 f"Shuffle should be set to True for {self}'s {name} dataloader but was not found in its "
386 "config. Manually setting to True"
387 )
388 with open_dict(cfg.dataloader_params):
389 cfg.dataloader_params.shuffle = True
390 elif not cfg.dataloader_params.shuffle:
391 logging.error(f"The {name} dataloader for {self} has shuffle set to False!!!")
392 elif not shuffle_should_be and cfg.dataloader_params.shuffle:
393 logging.error(f"The {name} dataloader for {self} has shuffle set to True!!!")
394
395 dataset = instantiate(
396 cfg.dataset,
397 text_normalizer=self.normalizer,
398 text_normalizer_call_kwargs=self.text_normalizer_call_kwargs,
399 text_tokenizer=self.tokenizer,
400 )
401
402 return torch.utils.data.DataLoader(dataset, collate_fn=dataset.collate_fn, **cfg.dataloader_params)
403
404 def setup_training_data(self, cfg):
405 self._train_dl = self.__setup_dataloader_from_config(cfg)
406
407 def setup_validation_data(self, cfg):
408 self._validation_dl = self.__setup_dataloader_from_config(cfg, shuffle_should_be=False, name="validation")
409
410 @classmethod
411 def list_available_models(cls) -> 'List[PretrainedModelInfo]':
412 """
413         This method returns a list of pre-trained models which can be instantiated directly from NVIDIA's NGC cloud.
414 Returns:
415 List of available pre-trained models.
416 """
417 list_of_models = []
418 model = PretrainedModelInfo(
419 pretrained_model_name="tts_en_tacotron2",
420 location="https://api.ngc.nvidia.com/v2/models/nvidia/nemo/tts_en_tacotron2/versions/1.10.0/files/tts_en_tacotron2.nemo",
421 description="This model is trained on LJSpeech sampled at 22050Hz, and can be used to generate female English voices with an American accent.",
422 class_=cls,
423 aliases=["Tacotron2-22050Hz"],
424 )
425 list_of_models.append(model)
426 return list_of_models
427
[end of nemo/collections/tts/models/tacotron2.py]
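The legacy-parser branch of `Tacotron2Model.parse()` above appends the bos/eos ids that the old parser omits, using `len(labels)` as bos and `len(labels) + 1` as eos. A minimal stand-alone sketch of that id arithmetic, with a toy character parser standing in for `parsers.make_parser` (the `toy_*` names are illustrative, not NeMo APIs):

```python
# Toy label set; in NeMo this would be self._cfg.labels
toy_labels = ["a", "b", "c"]


def toy_parser(text):
    # Map each known character to its label index, skipping unknowns,
    # mimicking a parser that emits ids without bos/eos
    return [toy_labels.index(ch) for ch in text if ch in toy_labels]


def parse_with_bos_eos(text):
    tokens = toy_parser(text)
    bos_id = len(toy_labels)      # id just past the label vocabulary
    eos_id = len(toy_labels) + 1  # next id after bos
    return [bos_id] + tokens + [eos_id]


print(parse_with_bos_eos("abc"))  # [3, 0, 1, 2, 4]
```

This also explains the `len(cfg.labels) + 3` token count in `__init__`: the vocabulary plus extra ids for bos, eos, and padding.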
[start of nemo/core/config/modelPT.py]
1 # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from dataclasses import dataclass, field
16 from typing import Any, Dict, Optional
17
18 from omegaconf import MISSING
19
20 from nemo.core import config
21 from nemo.core.classes.dataset import DatasetConfig
22 from nemo.utils import exp_manager
23
24
25 @dataclass
26 class SchedConfig:
27 name: str = MISSING
28 min_lr: float = 0.0
29 last_epoch: int = -1
30
31
32 @dataclass
33 class OptimConfig:
34 name: str = MISSING
35 sched: Optional[SchedConfig] = None
36
37
38 @dataclass
39 class ModelConfig:
40 """
41 Model component inside ModelPT
42 """
43
44 # ...
45 train_ds: Optional[DatasetConfig] = None
46 validation_ds: Optional[DatasetConfig] = None
47 test_ds: Optional[DatasetConfig] = None
48 optim: Optional[OptimConfig] = None
49
50
51 @dataclass
52 class HydraConfig:
53 run: Dict[str, Any] = field(default_factory=lambda: {"dir": "."})
54 job_logging: Dict[str, Any] = field(default_factory=lambda: {"root": {"handlers": None}})
55
56
57 @dataclass
58 class NemoConfig:
59 name: str = MISSING
60 model: ModelConfig = MISSING
61 trainer: config.TrainerConfig = config.TrainerConfig(
62 strategy="ddp", enable_checkpointing=False, logger=False, log_every_n_steps=1, accelerator='gpu'
63 )
64 exp_manager: Optional[Any] = exp_manager.ExpManagerConfig()
65 hydra: HydraConfig = HydraConfig()
66
67
68 class ModelConfigBuilder:
69 def __init__(self, model_cfg: ModelConfig):
70 """
71 Base class for any Model Config Builder.
72
73 A Model Config Builder is a utility class that accepts a ModelConfig dataclass,
74 and via a set of utility methods (that are implemented by the subclassed ModelConfigBuilder),
75 builds a finalized ModelConfig that can be supplied to a NemoModel dataclass as
76 the `model` component.
77
78 Subclasses *must* implement the private method `_finalize_cfg`.
79 Inside this method, they must update `self.model_cfg` with all interdependent config
80 options that need to be set (either updated by user explicitly or with their default value).
81
82 The updated model config must then be preserved in `self.model_cfg`.
83
84 Example:
85 # Create the config builder
86 config_builder = <subclass>ModelConfigBuilder()
87
88 # Update the components of the config that are modifiable
89 config_builder.set_X(X)
90 config_builder.set_Y(Y)
91
92 # Create a "finalized" config dataclass that will contain all the updates
93 # that were specified by the builder
94 model_config = config_builder.build()
95
96 # Use model config as is (or further update values), then create a new Model
97 model = nemo.<domain>.models.<ModelName>Model(cfg=model_config, trainer=Trainer())
98
99 Supported build methods:
100 - set_train_ds: All model configs can accept a subclass of `DatasetConfig` as their
101 training config. Subclasses can override this method to enable auto-complete
102 by replacing `Optional[DatasetConfig]` with `Optional[<subclass of DatasetConfig>]`.
103
104 - set_validation_ds: All model configs can accept a subclass of `DatasetConfig` as their
105 validation config. Subclasses can override this method to enable auto-complete
106 by replacing `Optional[DatasetConfig]` with `Optional[<subclass of DatasetConfig>]`.
107
108 - set_test_ds: All model configs can accept a subclass of `DatasetConfig` as their
109 test config. Subclasses can override this method to enable auto-complete
110 by replacing `Optional[DatasetConfig]` with `Optional[<subclass of DatasetConfig>]`.
111
112 - set_optim: A build method that supports changes to the Optimizer (and optionally,
113 the Scheduler) used for training the model. The function accepts two inputs -
114
115 `cfg`: A subclass of `OptimizerParams` - any OptimizerParams subclass can be used,
116 in order to select an appropriate Optimizer. Examples: AdamParams.
117
118 `sched_cfg`: A subclass of `SchedulerParams` - any SchedulerParams subclass can be used,
119 in order to select an appropriate Scheduler. Examples: CosineAnnealingParams.
120 Note that this argument is optional.
121
122 - build(): The method which should return a "finalized" ModelConfig dataclass.
123 Subclasses *should* always override this method, and update the signature
124 of this method with the return type of the Dataclass, so that it enables
125 autocomplete for the user.
126
127 Example:
128 def build(self) -> EncDecCTCConfig:
129 return super().build()
130
131 Any additional build methods must be added by subclasses of ModelConfigBuilder.
132
133 Args:
134 model_cfg:
135 """
136 self.model_cfg = model_cfg
137 self.train_ds_cfg = None
138 self.validation_ds_cfg = None
139 self.test_ds_cfg = None
140 self.optim_cfg = None
141
142 def set_train_ds(self, cfg: Optional[DatasetConfig] = None):
143 self.model_cfg.train_ds = cfg
144
145 def set_validation_ds(self, cfg: Optional[DatasetConfig] = None):
146 self.model_cfg.validation_ds = cfg
147
148 def set_test_ds(self, cfg: Optional[DatasetConfig] = None):
149 self.model_cfg.test_ds = cfg
150
151 def set_optim(self, cfg: config.OptimizerParams, sched_cfg: Optional[config.SchedulerParams] = None):
152 @dataclass
153 class WrappedOptimConfig(OptimConfig, cfg.__class__):
154 pass
155
156 # Setup optim
157 optim_name = cfg.__class__.__name__.replace("Params", "").lower()
158 wrapped_cfg = WrappedOptimConfig(name=optim_name, sched=None, **vars(cfg))
159
160 if sched_cfg is not None:
161
162 @dataclass
163 class WrappedSchedConfig(SchedConfig, sched_cfg.__class__):
164 pass
165
166 # Setup scheduler
167 sched_name = sched_cfg.__class__.__name__.replace("Params", "")
168 wrapped_sched_cfg = WrappedSchedConfig(name=sched_name, **vars(sched_cfg))
169
170 wrapped_cfg.sched = wrapped_sched_cfg
171
172 self.model_cfg.optim = wrapped_cfg
173
174 def _finalize_cfg(self):
175 raise NotImplementedError()
176
177 def build(self) -> ModelConfig:
178 # validate config
179 self._finalize_cfg()
180
181 return self.model_cfg
182
[end of nemo/core/config/modelPT.py]
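`ModelConfigBuilder.set_optim()` above combines an `OptimConfig`-style base with a user-supplied params dataclass by declaring a throwaway subclass of both and splatting the params instance's fields in with `vars()`. A self-contained sketch of that mixin trick using only stdlib dataclasses (`ToyAdamParams` is a hypothetical stand-in for NeMo's `config.OptimizerParams` subclasses):

```python
from dataclasses import dataclass
from typing import Optional


# Minimal stand-ins; in NeMo these are OptimConfig and an OptimizerParams subclass.
@dataclass
class OptimConfig:
    name: str = "?"
    sched: Optional[dict] = None


@dataclass
class ToyAdamParams:
    lr: float = 1e-3
    weight_decay: float = 0.0


cfg = ToyAdamParams(lr=3e-4)


# Throwaway subclass combining both field sets, as in set_optim().
# All fields have defaults, so the combined field ordering is valid.
@dataclass
class WrappedOptimConfig(OptimConfig, ToyAdamParams):
    pass


# Derive the optimizer name from the params class name, then splat the
# params instance's fields into the wrapped config with vars()
optim_name = cfg.__class__.__name__.replace("Params", "").lower()
wrapped = WrappedOptimConfig(name=optim_name, sched=None, **vars(cfg))
print(wrapped.name, wrapped.lr)  # toyadam 0.0003
```

The same shape is repeated for the scheduler via `WrappedSchedConfig`, so one finalized `optim` config carries both the optimizer and (optionally) scheduler parameters.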
[start of nemo/utils/exp_manager.py]
1 # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import glob
16 import os
17 import subprocess
18 import sys
19 import time
20 import warnings
21 from dataclasses import dataclass
22 from datetime import timedelta
23 from pathlib import Path
24 from shutil import copy, move
25 from typing import Any, Dict, List, Optional, Tuple, Union
26
27 import pytorch_lightning
28 import torch
29 from hydra.core.hydra_config import HydraConfig
30 from hydra.utils import get_original_cwd
31 from omegaconf import DictConfig, OmegaConf, open_dict
32 from pytorch_lightning.callbacks import Callback, ModelCheckpoint
33 from pytorch_lightning.callbacks.early_stopping import EarlyStopping
34 from pytorch_lightning.callbacks.timer import Interval, Timer
35 from pytorch_lightning.loggers import MLFlowLogger, TensorBoardLogger, WandbLogger
36 from pytorch_lightning.loops import _TrainingEpochLoop
37 from pytorch_lightning.strategies.ddp import DDPStrategy
38
39 from nemo.collections.common.callbacks import EMA
40 from nemo.constants import NEMO_ENV_VARNAME_TESTING, NEMO_ENV_VARNAME_VERSION
41 from nemo.utils import logging, timers
42 from nemo.utils.app_state import AppState
43 from nemo.utils.callbacks import NeMoModelCheckpoint, PreemptionCallback
44 from nemo.utils.env_var_parsing import get_envbool
45 from nemo.utils.exceptions import NeMoBaseException
46 from nemo.utils.get_rank import is_global_rank_zero
47 from nemo.utils.lightning_logger_patch import add_filehandlers_to_pl_logger
48 from nemo.utils.loggers import ClearMLLogger, ClearMLParams, DLLogger, DLLoggerParams, MLFlowParams
49 from nemo.utils.model_utils import uninject_model_parallel_rank
50
51
52 class NotFoundError(NeMoBaseException):
53 """ Raised when a file or folder is not found"""
54
55
56 class LoggerMisconfigurationError(NeMoBaseException):
57 """ Raised when a mismatch between trainer.logger and exp_manager occurs"""
58
59 def __init__(self, message):
60 message = (
61 message
62             + " You can disable lightning's trainer from creating a logger by passing logger=False to its constructor."
63 )
64 super().__init__(message)
65
66
67 class CheckpointMisconfigurationError(NeMoBaseException):
68 """ Raised when a mismatch between trainer.callbacks and exp_manager occurs"""
69
70
71 @dataclass
72 class EarlyStoppingParams:
73 monitor: str = "val_loss" # The metric that early stopping should consider.
74 mode: str = "min" # inform early stopping whether to look for increase or decrease in monitor.
75 min_delta: float = 0.001 # smallest change to consider as improvement.
76     patience: int = 10  # how many (continuous) validation cycles to wait with no improvement before stopping training.
77 verbose: bool = True
78 strict: bool = True
79 check_finite: bool = True
80 stopping_threshold: Optional[float] = None
81 divergence_threshold: Optional[float] = None
82 check_on_train_epoch_end: Optional[bool] = None
83 log_rank_zero_only: bool = False
84
85
86 @dataclass
87 class CallbackParams:
88 filepath: Optional[str] = None # Deprecated
89 dirpath: Optional[str] = None # If None, exp_manager will attempt to handle the filepath
90 filename: Optional[str] = None # If None, exp_manager will attempt to handle the filepath
91 monitor: Optional[str] = "val_loss"
92 verbose: Optional[bool] = True
93 save_last: Optional[bool] = True
94 save_top_k: Optional[int] = 3
95 save_weights_only: Optional[bool] = False
96 mode: Optional[str] = "min"
97 auto_insert_metric_name: bool = True
98 every_n_epochs: Optional[int] = 1
99 every_n_train_steps: Optional[int] = None
100 train_time_interval: Optional[str] = None
101 prefix: Optional[str] = None # If None, exp_manager will attempt to handle the filepath
102 postfix: str = ".nemo"
103 save_best_model: bool = False
104 always_save_nemo: bool = False
105     save_nemo_on_train_end: Optional[bool] = True  # Whether to automatically save the .nemo file during the on_train_end hook
106 model_parallel_size: Optional[int] = None # tensor parallel size * pipeline parallel size
107 save_on_train_epoch_end: Optional[bool] = False # Save after training, not after validation
108
109
110 @dataclass
111 class StepTimingParams:
112 reduction: Optional[str] = "mean"
113 # if True torch.cuda.synchronize() is called on start/stop
114 sync_cuda: Optional[bool] = False
115 # if positive, defines the size of a sliding window for computing mean
116 buffer_size: Optional[int] = 1
117
118
119 @dataclass
120 class EMAParams:
121 enable: Optional[bool] = False
122 decay: Optional[float] = 0.999
123 cpu_offload: Optional[bool] = False
124 validate_original_weights: Optional[bool] = False
125 every_n_steps: int = 1
126
127
128 @dataclass
129 class ExpManagerConfig:
130 """Experiment Manager config for validation of passed arguments.
131 """
132
133 # Log dir creation parameters
134 explicit_log_dir: Optional[str] = None
135 exp_dir: Optional[str] = None
136 name: Optional[str] = None
137 version: Optional[str] = None
138 use_datetime_version: Optional[bool] = True
139 resume_if_exists: Optional[bool] = False
140 resume_past_end: Optional[bool] = False
141 resume_ignore_no_checkpoint: Optional[bool] = False
142 resume_from_checkpoint: Optional[str] = None
143 # Logging parameters
144 create_tensorboard_logger: Optional[bool] = True
145 summary_writer_kwargs: Optional[Dict[Any, Any]] = None
146 create_wandb_logger: Optional[bool] = False
147 wandb_logger_kwargs: Optional[Dict[Any, Any]] = None
148 create_mlflow_logger: Optional[bool] = False
149 mlflow_logger_kwargs: Optional[MLFlowParams] = MLFlowParams()
150 create_dllogger_logger: Optional[bool] = False
151 dllogger_logger_kwargs: Optional[DLLoggerParams] = DLLoggerParams()
152 create_clearml_logger: Optional[bool] = False
153 clearml_logger_kwargs: Optional[ClearMLParams] = ClearMLParams()
154 # Checkpointing parameters
155 create_checkpoint_callback: Optional[bool] = True
156 checkpoint_callback_params: Optional[CallbackParams] = CallbackParams()
157 create_early_stopping_callback: Optional[bool] = False
158 early_stopping_callback_params: Optional[EarlyStoppingParams] = EarlyStoppingParams()
159 create_preemption_callback: Optional[bool] = True
160 # Additional exp_manager arguments
161 files_to_copy: Optional[List[str]] = None
162 # logs timing of train/val/test steps
163 log_step_timing: Optional[bool] = True
164 step_timing_kwargs: Optional[StepTimingParams] = StepTimingParams()
165 # Configures creation of log files for different ranks
166 log_local_rank_0_only: Optional[bool] = False
167 log_global_rank_0_only: Optional[bool] = False
168 # disable initial validation when resuming from a checkpoint saved during validation
169 disable_validation_on_resume: Optional[bool] = True
170 ema: Optional[EMAParams] = EMAParams()
171 # Wall clock time limit
172 max_time_per_run: Optional[str] = None
173     # time to sleep on non-zero ranks during initialization
174 seconds_to_sleep: float = 5
175
176
177 class TimingCallback(Callback):
178 """
179 Logs execution time of train/val/test steps
180 """
181
182 def __init__(self, timer_kwargs={}):
183 self.timer = timers.NamedTimer(**timer_kwargs)
184
185 def _on_batch_start(self, name):
186 # reset only if we do not return mean of a sliding window
187 if self.timer.buffer_size <= 0:
188 self.timer.reset(name)
189
190 self.timer.start(name)
191
192 def _on_batch_end(self, name, pl_module):
193 self.timer.stop(name)
194         # Set `batch_size=1` as a workaround for `dataloader_iter`; it is not used for any metric
195 pl_module.log(
196 name + ' in s',
197 self.timer[name],
198 on_step=True,
199 on_epoch=False,
200 batch_size=1,
201 prog_bar=(name == "train_step_timing"),
202 )
203
204 def on_train_batch_start(self, trainer, pl_module, batch, batch_idx):
205 self._on_batch_start("train_step_timing")
206
207 def on_train_batch_end(self, trainer, pl_module, outputs, batch, batch_idx):
208 self._on_batch_end("train_step_timing", pl_module)
209
210 def on_validation_batch_start(self, trainer, pl_module, batch, batch_idx, dataloader_idx=0):
211 self._on_batch_start("validation_step_timing")
212
213 def on_validation_batch_end(self, trainer, pl_module, outputs, batch, batch_idx, dataloader_idx=0):
214 self._on_batch_end("validation_step_timing", pl_module)
215
216 def on_test_batch_start(self, trainer, pl_module, batch, batch_idx, dataloader_idx=0):
217 self._on_batch_start("test_step_timing")
218
219 def on_test_batch_end(self, trainer, pl_module, outputs, batch, batch_idx, dataloader_idx=0):
220 self._on_batch_end("test_step_timing", pl_module)
221
222 def on_before_backward(self, trainer, pl_module, loss):
223 self._on_batch_start("train_backward_timing")
224
225 def on_after_backward(self, trainer, pl_module):
226 self._on_batch_end("train_backward_timing", pl_module)
227
228
229 def exp_manager(trainer: 'pytorch_lightning.Trainer', cfg: Optional[Union[DictConfig, Dict]] = None) -> Optional[Path]:
230 """
231 exp_manager is a helper function used to manage folders for experiments. It follows the pytorch lightning paradigm
232 of exp_dir/model_or_experiment_name/version. If the lightning trainer has a logger, exp_manager will get exp_dir,
233 name, and version from the logger. Otherwise it will use the exp_dir and name arguments to create the logging
234 directory. exp_manager also allows for explicit folder creation via explicit_log_dir.
235
236     The version can be a datetime string or an integer. Datetime versioning can be disabled if use_datetime_version is set
237 to False. It optionally creates TensorBoardLogger, WandBLogger, DLLogger, MLFlowLogger, ClearMLLogger,
238 ModelCheckpoint objects from pytorch lightning.
239 It copies sys.argv, and git information if available to the logging directory. It creates a log file for each
240 process to log their output into.
241
242     exp_manager additionally has a resume feature (resume_if_exists) which can be used to continue training from
243     the constructed log_dir. When you need to continue training repeatedly (such as on a cluster where you submit
244     multiple consecutive jobs), you need to avoid creating the version folders. Therefore, from v1.0.0, when
245     resume_if_exists is set to True, creation of the version folders is skipped.
246
247 Args:
248 trainer (pytorch_lightning.Trainer): The lightning trainer.
249 cfg (DictConfig, dict): Can have the following keys:
250
251 - explicit_log_dir (str, Path): Can be used to override exp_dir/name/version folder creation. Defaults to
252 None, which will use exp_dir, name, and version to construct the logging directory.
253 - exp_dir (str, Path): The base directory to create the logging directory. Defaults to None, which logs to
254 ./nemo_experiments.
255 - name (str): The name of the experiment. Defaults to None which turns into "default" via name = name or
256 "default".
257 - version (str): The version of the experiment. Defaults to None which uses either a datetime string or
258 lightning's TensorboardLogger system of using version_{int}.
259 - use_datetime_version (bool): Whether to use a datetime string for version. Defaults to True.
260 - resume_if_exists (bool): Whether this experiment is resuming from a previous run. If True, it sets
261 trainer._checkpoint_connector._ckpt_path so that the trainer should auto-resume. exp_manager will move files
262 under log_dir to log_dir/run_{int}. Defaults to False. From v1.0.0, when resume_if_exists is True,
263 we would not create version folders to make it easier to find the log folder for next runs.
264 - resume_past_end (bool): exp_manager errors out if resume_if_exists is True and a checkpoint matching
265 ``*end.ckpt`` indicating a previous training run fully completed. This behaviour can be disabled, in which
266 case the ``*end.ckpt`` will be loaded by setting resume_past_end to True. Defaults to False.
267 - resume_ignore_no_checkpoint (bool): exp_manager errors out if resume_if_exists is True and no checkpoint
268 could be found. This behaviour can be disabled, in which case exp_manager will print a message and
269 continue without restoring, by setting resume_ignore_no_checkpoint to True. Defaults to False.
270 - resume_from_checkpoint (str): Can be used to specify a path to a specific checkpoint file to load from. This will
271 override any checkpoint found when resume_if_exists is True. Defaults to None.
272 - create_tensorboard_logger (bool): Whether to create a tensorboard logger and attach it to the pytorch
273 lightning trainer. Defaults to True.
274 - summary_writer_kwargs (dict): A dictionary of kwargs that can be passed to lightning's TensorboardLogger
275 class. Note that log_dir is passed by exp_manager and cannot exist in this dict. Defaults to None.
276             - create_wandb_logger (bool): Whether to create a Weights and Biases logger and attach it to the pytorch
277 lightning trainer. Defaults to False.
278 - wandb_logger_kwargs (dict): A dictionary of kwargs that can be passed to lightning's WandBLogger
279 class. Note that name and project are required parameters if create_wandb_logger is True.
280 Defaults to None.
281             - create_mlflow_logger (bool): Whether to create an MLFlow logger and attach it to the pytorch lightning
282               trainer. Defaults to False.
283             - mlflow_logger_kwargs (dict): optional parameters for the MLFlow logger
284             - create_dllogger_logger (bool): Whether to create a DLLogger logger and attach it to the pytorch lightning
285               trainer. Defaults to False.
286             - dllogger_logger_kwargs (dict): optional parameters for the DLLogger logger
287             - create_clearml_logger (bool): Whether to create a ClearML logger and attach it to the pytorch lightning
288               trainer. Defaults to False.
289             - clearml_logger_kwargs (dict): optional parameters for the ClearML logger
290 - create_checkpoint_callback (bool): Whether to create a ModelCheckpoint callback and attach it to the
291 pytorch lightning trainer. The ModelCheckpoint saves the top 3 models with the best "val_loss", the most
292 recent checkpoint under ``*last.ckpt``, and the final checkpoint after training completes under ``*end.ckpt``.
293 Defaults to True.
294 - create_early_stopping_callback (bool): Flag to decide if early stopping should be used to stop training. Default is False.
295 See EarlyStoppingParams dataclass above.
296 - create_preemption_callback (bool): Flag to decide whether to enable preemption callback to save checkpoints and exit training
297 immediately upon preemption. Default is True.
298 - files_to_copy (list): A list of files to copy to the experiment logging directory. Defaults to None which
299 copies no files.
300 - log_local_rank_0_only (bool): Whether to only create log files for local rank 0. Defaults to False.
301 Set this to True if you are using DDP with many GPUs and do not want many log files in your exp dir.
302 - log_global_rank_0_only (bool): Whether to only create log files for global rank 0. Defaults to False.
303 Set this to True if you are using DDP with many GPUs and do not want many log files in your exp dir.
304 - max_time_per_run (str): The maximum wall clock time *per run*. This is intended to be used on clusters where you want
305 a checkpoint to be saved after this specified time and be able to resume from that checkpoint. Defaults to None.
306 - seconds_to_sleep (float): Seconds to sleep the non-rank-0 processes for, to give rank 0 enough time to finish initializing.
307
308 returns:
309 log_dir (Path): The final logging directory where logging files are saved. Usually the concatenation of
310 exp_dir, name, and version.
311 """
312 # Add rank information to logger
313 # Note: trainer.global_rank and trainer.is_global_zero are not set until trainer.fit, so have to hack around it
314 local_rank = int(os.environ.get("LOCAL_RANK", 0))
315 global_rank = trainer.node_rank * trainer.num_devices + local_rank
316 logging.rank = global_rank
317
318 if cfg is None:
319 logging.error("exp_manager did not receive a cfg argument. It will be disabled.")
320 return
321 if trainer.fast_dev_run:
322 logging.info("Trainer was called with fast_dev_run. exp_manager will return without any functionality.")
323 return
324
325 # Ensure passed cfg is compliant with ExpManagerConfig
326 schema = OmegaConf.structured(ExpManagerConfig)
327 if isinstance(cfg, dict):
328 cfg = OmegaConf.create(cfg)
329 elif not isinstance(cfg, DictConfig):
330 raise ValueError(f"cfg was type: {type(cfg)}. Expected either a dict or a DictConfig")
331 cfg = OmegaConf.create(OmegaConf.to_container(cfg, resolve=True))
332 cfg = OmegaConf.merge(schema, cfg)
333
334 error_checks(trainer, cfg) # Ensures that trainer options are compliant with NeMo and exp_manager arguments
335
336 log_dir, exp_dir, name, version = get_log_dir(
337 trainer=trainer,
338 exp_dir=cfg.exp_dir,
339 name=cfg.name,
340 version=cfg.version,
341 explicit_log_dir=cfg.explicit_log_dir,
342 use_datetime_version=cfg.use_datetime_version,
343 resume_if_exists=cfg.resume_if_exists,
344 )
345
346 check_resume(
347 trainer,
348 log_dir,
349 cfg.resume_if_exists,
350 cfg.resume_past_end,
351 cfg.resume_ignore_no_checkpoint,
352 cfg.checkpoint_callback_params.dirpath,
353 cfg.resume_from_checkpoint,
354 )
355
356 checkpoint_name = name
357 # If name returned from get_log_dir is "", use cfg.name for checkpointing
358 if checkpoint_name is None or checkpoint_name == '':
359 checkpoint_name = cfg.name or "default"
360
361 # Set mlflow name if it's not set, before the main name is erased
362 if cfg.create_mlflow_logger and (not cfg.mlflow_logger_kwargs.get("experiment_name", None)):
363 cfg.mlflow_logger_kwargs.experiment_name = cfg.name
364 logging.warning(
365 'mlflow logger specified but no experiment name set. Using the same as Tensorboard: %s',
366 cfg.mlflow_logger_kwargs.experiment_name,
367 )
368
369 cfg.name = name # Used for configure_loggers so that the log_dir is properly set even if name is ""
370 cfg.version = version
371
372 # update app_state with log_dir, exp_dir, etc
373 app_state = AppState()
374 app_state.log_dir = log_dir
375 app_state.exp_dir = exp_dir
376 app_state.name = name
377 app_state.version = version
378 app_state.checkpoint_name = checkpoint_name
379 app_state.create_checkpoint_callback = cfg.create_checkpoint_callback
380 app_state.checkpoint_callback_params = cfg.checkpoint_callback_params
381
382 # Create the logging directory if it does not exist
383 os.makedirs(log_dir, exist_ok=True) # Cannot limit creation to global zero as all ranks write to own log file
384 logging.info(f'Experiments will be logged at {log_dir}')
385 trainer._default_root_dir = log_dir
386
387 if cfg.log_local_rank_0_only is True and cfg.log_global_rank_0_only is True:
388 raise ValueError(
389 f"Cannot set both log_local_rank_0_only and log_global_rank_0_only to True. Please set either one or neither."
390 )
391
392 # This is set if the env var NEMO_TESTING is set to True.
393 nemo_testing = get_envbool(NEMO_ENV_VARNAME_TESTING, False)
394
395 # Handle logging to file
396 log_file = log_dir / f'nemo_log_globalrank-{global_rank}_localrank-{local_rank}.txt'
397 if cfg.log_local_rank_0_only is True and not nemo_testing:
398 if local_rank == 0:
399 logging.add_file_handler(log_file)
400 elif cfg.log_global_rank_0_only is True and not nemo_testing:
401 if global_rank == 0:
402 logging.add_file_handler(log_file)
403 else:
404 # Logs on all ranks.
405 logging.add_file_handler(log_file)
406
407 # For some reason, LearningRateLogger requires trainer to have a logger. Safer to create logger on all ranks
408 # not just global rank 0.
409 if (
410 cfg.create_tensorboard_logger
411 or cfg.create_wandb_logger
412 or cfg.create_mlflow_logger
413 or cfg.create_dllogger_logger
414 or cfg.create_clearml_logger
415 ):
416 configure_loggers(
417 trainer,
418 exp_dir,
419 log_dir,
420 cfg.name,
421 cfg.version,
422 cfg.checkpoint_callback_params,
423 cfg.create_tensorboard_logger,
424 cfg.summary_writer_kwargs,
425 cfg.create_wandb_logger,
426 cfg.wandb_logger_kwargs,
427 cfg.create_mlflow_logger,
428 cfg.mlflow_logger_kwargs,
429 cfg.create_dllogger_logger,
430 cfg.dllogger_logger_kwargs,
431 cfg.create_clearml_logger,
432 cfg.clearml_logger_kwargs,
433 )
434
435 # add loggers timing callbacks
436 if cfg.log_step_timing:
437 timing_callback = TimingCallback(timer_kwargs=cfg.step_timing_kwargs or {})
438 trainer.callbacks.insert(0, timing_callback)
439
440 if cfg.ema.enable:
441 ema_callback = EMA(
442 decay=cfg.ema.decay,
443 validate_original_weights=cfg.ema.validate_original_weights,
444 cpu_offload=cfg.ema.cpu_offload,
445 every_n_steps=cfg.ema.every_n_steps,
446 )
447 trainer.callbacks.append(ema_callback)
448
449 if cfg.create_early_stopping_callback:
450 early_stop_callback = EarlyStopping(**cfg.early_stopping_callback_params)
451 trainer.callbacks.append(early_stop_callback)
452
453 if cfg.create_checkpoint_callback:
454 configure_checkpointing(
455 trainer,
456 log_dir,
457 checkpoint_name,
458 cfg.resume_if_exists,
459 cfg.checkpoint_callback_params,
460 cfg.create_preemption_callback,
461 )
462
463 if cfg.disable_validation_on_resume:
464 # extend training loop to skip initial validation when resuming from checkpoint
465 configure_no_restart_validation_training_loop(trainer)
466 # Setup a stateless timer for use on clusters.
467 if cfg.max_time_per_run is not None:
468 found_ptl_timer = False
469 for idx, callback in enumerate(trainer.callbacks):
470 if isinstance(callback, Timer):
471 # NOTE: PTL does not expose a `trainer.max_time`. By the time we are in this function, PTL has already setup a timer if the user specifies `trainer.max_time` so best we can do is replace that.
472 # Working: If only `trainer.max_time` is set - it behaves as a normal PTL timer. If only `exp_manager.max_time_per_run` is set - it behaves as a StateLessTimer. If both are set, it also behaves as a StateLessTimer.
473 logging.warning(
474 f'Found a PTL Timer callback, replacing with a StatelessTimer callback. This will happen if you set trainer.max_time as well as exp_manager.max_time_per_run.'
475 )
476 trainer.callbacks[idx] = StatelessTimer(cfg.max_time_per_run)
477 found_ptl_timer = True
478 break
479
480 if not found_ptl_timer:
481 trainer.max_time = cfg.max_time_per_run
482 trainer.callbacks.append(StatelessTimer(cfg.max_time_per_run))
483
484 if is_global_rank_zero():
485 # Move files_to_copy to folder and add git information if present
486 if cfg.files_to_copy:
487 for _file in cfg.files_to_copy:
488 copy(Path(_file), log_dir)
489
490 # Create files for cmd args and git info
491 with open(log_dir / 'cmd-args.log', 'w', encoding='utf-8') as _file:
492 _file.write(" ".join(sys.argv))
493
494 # Try to get git hash
495 git_repo, git_hash = get_git_hash()
496 if git_repo:
497 with open(log_dir / 'git-info.log', 'w', encoding='utf-8') as _file:
498 _file.write(f'commit hash: {git_hash}')
499 _file.write(get_git_diff())
500
501 # Add err_file logging to global_rank zero
502 logging.add_err_file_handler(log_dir / 'nemo_error_log.txt')
503
504 # Add lightning file logging to global_rank zero
505 add_filehandlers_to_pl_logger(log_dir / 'lightning_logs.txt', log_dir / 'nemo_error_log.txt')
506
507 elif trainer.num_nodes * trainer.num_devices > 1:
508 # sleep other ranks so rank 0 can finish
509 # doing the initialization such as moving files
510 time.sleep(cfg.seconds_to_sleep)
511
512 return log_dir
513
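The rank bookkeeping that exp_manager performs before any logging is set up — `global_rank = node_rank * num_devices + local_rank`, plus the per-rank log file name — can be exercised in isolation. A minimal sketch; the helper names below are illustrative, not NeMo API:

```python
def compute_global_rank(node_rank: int, num_devices: int, local_rank: int) -> int:
    # Mirrors exp_manager: each node contributes num_devices ranks.
    return node_rank * num_devices + local_rank


def log_file_name(global_rank: int, local_rank: int) -> str:
    # Per-rank log file that exp_manager creates under log_dir.
    return f"nemo_log_globalrank-{global_rank}_localrank-{local_rank}.txt"


g = compute_global_rank(node_rank=1, num_devices=8, local_rank=3)
print(g)                    # 11
print(log_file_name(g, 3))  # nemo_log_globalrank-11_localrank-3.txt
```

This is why a multi-node run produces one `nemo_log_globalrank-*_localrank-*.txt` file per process unless `log_local_rank_0_only` or `log_global_rank_0_only` is set.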
514
515 def error_checks(trainer: 'pytorch_lightning.Trainer', cfg: Optional[Union[DictConfig, Dict]] = None):
516 """
517 Checks that the passed trainer is compliant with NeMo and exp_manager's passed configuration. Checks that:
518 - Throws error when hydra has changed the working directory. This causes issues with lightning's DDP
519 - Throws error when trainer has loggers defined but create_tensorboard_logger or create_wandb_logger
520 or create_mlflow_logger or create_dllogger_logger is True
521 - Prints error messages when 1) run on multi-node and not Slurm, and 2) run on multi-gpu without DDP
522 """
523 if HydraConfig.initialized() and get_original_cwd() != os.getcwd():
524 raise ValueError(
525 "Hydra changed the working directory. This interferes with ExpManger's functionality. Please pass "
526 "hydra.run.dir=. to your python script."
527 )
528 if trainer.logger is not None and (
529 cfg.create_tensorboard_logger or cfg.create_wandb_logger or cfg.create_mlflow_logger
530 ):
531 raise LoggerMisconfigurationError(
532 "The pytorch lightning trainer that was passed to exp_manager contained a logger, and either "
533 f"create_tensorboard_logger: {cfg.create_tensorboard_logger} or create_wandb_logger: "
534 f"{cfg.create_wandb_logger} or create_mlflow_logger: {cfg.create_mlflow_logger}"
535 f"or create_dllogger_logger: {cfg.create_mlflow_logger} was set to True. "
536 "These can only be used if trainer does not already have a logger."
537 )
538 if trainer.num_nodes > 1 and not check_slurm(trainer):
539 logging.error(
540 "You are running multi-node training without SLURM handling the processes."
541 " Please note that this is not tested in NeMo and could result in errors."
542 )
543 if trainer.num_devices > 1 and not isinstance(trainer.strategy, DDPStrategy):
544 logging.error(
545 "You are running multi-gpu without ddp.Please note that this is not tested in NeMo and could result in "
546 "errors."
547 )
548
549
550 def check_resume(
551 trainer: 'pytorch_lightning.Trainer',
552 log_dir: str,
553 resume_if_exists: bool = False,
554 resume_past_end: bool = False,
555 resume_ignore_no_checkpoint: bool = False,
556 dirpath: str = None,
557 resume_from_checkpoint: str = None,
558 ):
559 """Checks that resume=True was used correctly with the arguments pass to exp_manager. Sets
560 trainer._checkpoint_connector._ckpt_path as necessary.
561
562 Side effects:
563 Sets trainer.ckpt_path to the resolved checkpoint path when a checkpoint to resume from is found,
564 and, on global rank zero, moves stray files from a previous run into a new run_* subfolder of
565 log_dir.
567
568 Raises:
569 NotFoundError: If resume is True, resume_ignore_no_checkpoint is False, and checkpoints could not be found.
570 ValueError: If resume is True, and more than one matching checkpoint was found.
571 """
572
573 if not log_dir:
574 raise ValueError(f"Resuming requires the log_dir {log_dir} to be passed to exp_manager")
575
576 checkpoint = None
577 if resume_from_checkpoint:
578 checkpoint = resume_from_checkpoint
579 if resume_if_exists:
580 # Use <log_dir>/checkpoints/ unless `dirpath` is set
581 checkpoint_dir = Path(dirpath) if dirpath else Path(Path(log_dir) / "checkpoints")
582
583 # when using distributed checkpointing, checkpoint_dir is a directory of directories
584 # we check for this here
585 dist_checkpoints = [d for d in list(checkpoint_dir.glob("*")) if d.is_dir()]
586 end_dist_checkpoints = [d for d in dist_checkpoints if d.match("*end")]
587 last_dist_checkpoints = [d for d in dist_checkpoints if d.match("*last")]
588
589 end_checkpoints = end_dist_checkpoints if end_dist_checkpoints else list(checkpoint_dir.rglob("*end.ckpt"))
590 last_checkpoints = last_dist_checkpoints if last_dist_checkpoints else list(checkpoint_dir.rglob("*last.ckpt"))
591
592 if not checkpoint_dir.exists() or (not len(end_checkpoints) > 0 and not len(last_checkpoints) > 0):
593 if resume_ignore_no_checkpoint:
594 warn = f"There were no checkpoints found in checkpoint_dir or no checkpoint folder at checkpoint_dir :{checkpoint_dir}. "
595 if checkpoint is None:
596 warn += "Training from scratch."
597 elif checkpoint == resume_from_checkpoint:
598 warn += f"Training from {resume_from_checkpoint}."
599 logging.warning(warn)
600 else:
601 raise NotFoundError(
602 f"There were no checkpoints found in checkpoint_dir or no checkpoint folder at checkpoint_dir :{checkpoint_dir}. Cannot resume."
603 )
604 elif len(end_checkpoints) > 0:
605 if resume_past_end:
606 if len(end_checkpoints) > 1:
607 if 'mp_rank' in str(end_checkpoints[0]):
608 checkpoint = end_checkpoints[0]
609 else:
610 raise ValueError(f"Multiple checkpoints {end_checkpoints} that matches *end.ckpt.")
611 else:
612 raise ValueError(
613 f"Found {end_checkpoints[0]} indicating that the last training run has already completed."
614 )
615 elif len(last_checkpoints) > 1:
616 if 'mp_rank' in str(last_checkpoints[0]) or 'tp_rank' in str(last_checkpoints[0]):
617 checkpoint = last_checkpoints[0]
618 checkpoint = uninject_model_parallel_rank(checkpoint)
619 else:
620 raise ValueError(f"Multiple checkpoints {last_checkpoints} that matches *last.ckpt.")
621 else:
622 checkpoint = last_checkpoints[0]
623
624 # PTL 2.0 supports ckpt_path instead of resume_from_checkpoint as the trainer flag
625 if checkpoint is not None:
626 trainer.ckpt_path = str(checkpoint)
627 logging.info(f'Resuming training from checkpoint: {trainer.ckpt_path}')
628
629 if is_global_rank_zero():
630 # Check to see if any files exist that need to be moved
631 files_to_move = []
632 if Path(log_dir).exists():
633 for child in Path(log_dir).iterdir():
634 if child.is_file():
635 files_to_move.append(child)
636
637 if len(files_to_move) > 0:
638 # Move old files to a new folder
639 other_run_dirs = Path(log_dir).glob("run_*")
640 run_count = 0
641 for fold in other_run_dirs:
642 if fold.is_dir():
643 run_count += 1
644 new_run_dir = Path(Path(log_dir) / f"run_{run_count}")
645 new_run_dir.mkdir()
646 for _file in files_to_move:
647 move(str(_file), str(new_run_dir))
648
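The resume precedence implemented in check_resume — an explicit resume_from_checkpoint path, then *end.ckpt files that block resumption unless resume_past_end, otherwise a single *last.ckpt — can be condensed into a simplified pure-Python sketch. It ignores the distributed mp_rank/tp_rank special cases, and `pick_checkpoint` is a hypothetical name, not NeMo API:

```python
def pick_checkpoint(resume_from_checkpoint, end_ckpts, last_ckpts, resume_past_end=False):
    # Hypothetical helper condensing check_resume's precedence; the
    # mp_rank/tp_rank distributed-checkpoint handling is omitted.
    if resume_from_checkpoint:
        return resume_from_checkpoint  # explicit path wins
    if end_ckpts:
        if not resume_past_end:
            raise ValueError(f"Found {end_ckpts[0]}: the last run already completed.")
        if len(end_ckpts) > 1:
            raise ValueError(f"Multiple checkpoints match *end.ckpt: {end_ckpts}")
        return end_ckpts[0]
    if len(last_ckpts) > 1:
        raise ValueError(f"Multiple checkpoints match *last.ckpt: {last_ckpts}")
    return last_ckpts[0] if last_ckpts else None


print(pick_checkpoint(None, [], ["run--last.ckpt"]))  # run--last.ckpt
```

The ambiguity errors are deliberate: with plain (non-model-parallel) checkpoints there should be at most one `*last.ckpt` to resume from.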
649
650 def check_explicit_log_dir(
651 trainer: 'pytorch_lightning.Trainer', explicit_log_dir: Union[Path, str], exp_dir: str, name: str, version: str
652 ) -> Tuple[Path, str, str, str]:
653 """ Checks that the passed arguments are compatible with explicit_log_dir.
654
655 Returns:
656 log_dir (Path): the log_dir
657 exp_dir (str): the base exp_dir without name nor version
658 name (str): The name of the experiment
659 version (str): The version of the experiment
660
661 Raises:
662 LoggerMisconfigurationError
663 """
664 if trainer.logger is not None:
665 raise LoggerMisconfigurationError(
666 "The pytorch lightning trainer that was passed to exp_manager contained a logger and explicit_log_dir: "
667 f"{explicit_log_dir} was pass to exp_manager. Please remove the logger from the lightning trainer."
668 )
669 # Checking only (explicit_log_dir) vs (exp_dir and version).
670 # The `name` will be used as the actual name of checkpoint/archive.
671 if exp_dir or version:
672 logging.error(
673 f"exp_manager received explicit_log_dir: {explicit_log_dir} and at least one of exp_dir: {exp_dir}, "
674 f"or version: {version}. Please note that exp_dir, name, and version will be ignored."
675 )
676 if is_global_rank_zero() and Path(explicit_log_dir).exists():
677 logging.warning(f"Exp_manager is logging to {explicit_log_dir}, but it already exists.")
678 return Path(explicit_log_dir), str(explicit_log_dir), "", ""
679
680
681 def get_log_dir(
682 trainer: 'pytorch_lightning.Trainer',
683 exp_dir: str = None,
684 name: str = None,
685 version: str = None,
686 explicit_log_dir: str = None,
687 use_datetime_version: bool = True,
688 resume_if_exists: bool = False,
689 ) -> Tuple[Path, str, str, str]:
690 """
691 Obtains the log_dir used for exp_manager.
692
693 Args:
694 explicit_log_dir (str): The explicit path to the log folder. Defaults to None.
695 use_datetime_version (bool): Uses date and time as the version of the log folder. Defaults to True.
696 resume_if_exists (bool): Whether resume_if_exists is enabled in the exp_manager config. When enabled,
697 the version folders do not get created.
698 Returns:
699 log_dir (Path): the log_dir
700 exp_dir (str): the base exp_dir without name nor version
701 name (str): The name of the experiment
702 version (str): The version of the experiment
703 Raises:
704 LoggerMisconfigurationError: If trainer is incompatible with arguments
705 NotFoundError: If resume is True, resume_ignore_no_checkpoint is False, and checkpoints could not be found.
706 ValueError: If resume is True, and more than one matching checkpoint was found.
707 """
708 if explicit_log_dir: # If explicit log_dir was passed, short circuit
709 return check_explicit_log_dir(trainer, explicit_log_dir, exp_dir, name, version)
710
711 # Default exp_dir to ./nemo_experiments if None was passed
712 _exp_dir = exp_dir
713 if exp_dir is None:
714 _exp_dir = str(Path.cwd() / 'nemo_experiments')
715
716 # If the user has already defined a logger for the trainer, use the logger defaults for logging directory
717 if trainer.logger is not None:
718 if trainer.logger.save_dir:
719 if exp_dir:
720 raise LoggerMisconfigurationError(
721 "The pytorch lightning trainer that was passed to exp_manager contained a logger, the logger's "
722 f"save_dir was not None, and exp_dir ({exp_dir}) was not None. If trainer.logger.save_dir "
723 "exists, exp_manager will use trainer.logger.save_dir as the logging directory and exp_dir "
724 "must be None."
725 )
726 _exp_dir = trainer.logger.save_dir
727 if name:
728 raise LoggerMisconfigurationError(
729 "The pytorch lightning trainer that was passed to exp_manager contained a logger, and name: "
730 f"{name} was also passed to exp_manager. If the trainer contains a "
731 "logger, exp_manager will use trainer.logger.name, and name passed to exp_manager must be None."
732 )
733 name = trainer.logger.name
734 version = f"version_{trainer.logger.version}"
735 # Use user-defined exp_dir, project_name, exp_name, and versioning options
736 else:
737 name = name or "default"
738 version = version or os.environ.get(NEMO_ENV_VARNAME_VERSION, None)
739
740 if not version:
741 if resume_if_exists:
742 logging.warning(
743 "No version folders would be created under the log folder as 'resume_if_exists' is enabled."
744 )
745 version = None
746 elif is_global_rank_zero():
747 if use_datetime_version:
748 version = time.strftime('%Y-%m-%d_%H-%M-%S')
749 else:
750 tensorboard_logger = TensorBoardLogger(save_dir=Path(_exp_dir), name=name, version=version)
751 version = f"version_{tensorboard_logger.version}"
752 os.environ[NEMO_ENV_VARNAME_VERSION] = "" if version is None else version
753
754 log_dir = Path(_exp_dir) / Path(str(name)) / Path("" if version is None else str(version))
755 return log_dir, str(_exp_dir), name, version
756
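The final directory returned by get_log_dir is simply `exp_dir/name/version`, with a `None` version contributing an empty path segment that pathlib collapses away. A small sketch of that composition; `build_log_dir` is an illustrative helper, not NeMo API:

```python
from pathlib import Path
from typing import Optional


def build_log_dir(exp_dir: str, name: str, version: Optional[str]) -> Path:
    # Mirrors get_log_dir's final composition; a None version contributes an
    # empty segment, which pathlib normalizes away.
    return Path(exp_dir) / Path(str(name)) / Path("" if version is None else str(version))


print(build_log_dir("nemo_experiments", "default", "2024-01-01_00-00-00"))
print(build_log_dir("nemo_experiments", "default", None))
```

This is why enabling `resume_if_exists` (which forces `version = None`) makes all runs of an experiment share `exp_dir/name` directly, with no version subfolder.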
757
758 def get_git_hash():
759 """
760 Helper function that tries to get the commit hash if running inside a git folder
761
762 returns:
763 Bool: Whether the git subprocess ran without error
764 str: git subprocess output or error message
765 """
766 try:
767 return (
768 True,
769 subprocess.check_output(['git', 'rev-parse', 'HEAD'], stderr=subprocess.STDOUT).decode(),
770 )
771 except subprocess.CalledProcessError as err:
772 return False, "{}\n".format(err.output.decode("utf-8"))
773
774
775 def get_git_diff():
776 """
777 Helper function that tries to get the git diff if running inside a git folder
778
779 returns:
780 str: git subprocess output, or the error message on failure
782 """
783 try:
784 return subprocess.check_output(['git', 'diff'], stderr=subprocess.STDOUT).decode()
785 except subprocess.CalledProcessError as err:
786 return "{}\n".format(err.output.decode("utf-8"))
787
788
789 def configure_loggers(
790 trainer: 'pytorch_lightning.Trainer',
791 exp_dir: [Path, str],
792 log_dir: [Path, str],
793 name: str,
794 version: str,
795 checkpoint_callback_params: dict,
796 create_tensorboard_logger: bool,
797 summary_writer_kwargs: dict,
798 create_wandb_logger: bool,
799 wandb_kwargs: dict,
800 create_mlflow_logger: bool,
801 mlflow_kwargs: dict,
802 create_dllogger_logger: bool,
803 dllogger_kwargs: dict,
804 create_clearml_logger: bool,
805 clearml_kwargs: dict,
806 ):
807 """
808 Creates TensorboardLogger and/or WandBLogger / MLFlowLogger / DLlogger / ClearMLLogger and attach them to trainer.
809 Raises ValueError if summary_writer_kwargs or wandb_kwargs are misconfigured.
810 """
811 # Potentially create tensorboard logger and/or WandBLogger / MLFlowLogger / DLLogger
812 logger_list = []
813 if create_tensorboard_logger:
814 if summary_writer_kwargs is None:
815 summary_writer_kwargs = {}
816 elif "log_dir" in summary_writer_kwargs:
817 raise ValueError(
818 "You cannot pass `log_dir` as part of `summary_writer_kwargs`. `log_dir` is handled by lightning's "
819 "TensorBoardLogger logger."
820 )
821 tensorboard_logger = TensorBoardLogger(save_dir=exp_dir, name=name, version=version, **summary_writer_kwargs)
822 logger_list.append(tensorboard_logger)
823 logging.info("TensorboardLogger has been set up")
824
825 if create_wandb_logger:
826 if wandb_kwargs is None:
827 wandb_kwargs = {}
828 if "name" not in wandb_kwargs and "project" not in wandb_kwargs:
829 raise ValueError("name and project are required for wandb_logger")
830
831 # Update the wandb save_dir
832 if wandb_kwargs.get('save_dir', None) is None:
833 wandb_kwargs['save_dir'] = exp_dir
834 os.makedirs(wandb_kwargs['save_dir'], exist_ok=True)
835 wandb_logger = WandbLogger(version=version, **wandb_kwargs)
836
837 logger_list.append(wandb_logger)
838 logging.info("WandBLogger has been set up")
839
840 if create_mlflow_logger:
841 mlflow_logger = MLFlowLogger(run_name=version, **mlflow_kwargs)
842
843 logger_list.append(mlflow_logger)
844 logging.info("MLFlowLogger has been set up")
845
846 if create_dllogger_logger:
847 dllogger_logger = DLLogger(**dllogger_kwargs)
848
849 logger_list.append(dllogger_logger)
850 logging.info("DLLogger has been set up")
851
852 if create_clearml_logger:
853 clearml_logger = ClearMLLogger(
854 clearml_cfg=clearml_kwargs,
855 log_dir=log_dir,
856 prefix=name,
857 save_best_model=checkpoint_callback_params.save_best_model,
858 )
859
860 logger_list.append(clearml_logger)
861 logging.info("ClearMLLogger has been set up")
862
863 trainer._logger_connector.configure_logger(logger_list)
864
865
866 def configure_checkpointing(
867 trainer: 'pytorch_lightning.Trainer',
868 log_dir: Path,
869 name: str,
870 resume: bool,
871 params: 'DictConfig',
872 create_preemption_callback: bool,
873 ):
874 """ Adds ModelCheckpoint to trainer. Raises CheckpointMisconfigurationError if trainer already has a ModelCheckpoint
875 callback
876 """
877 for callback in trainer.callbacks:
878 if isinstance(callback, ModelCheckpoint):
879 raise CheckpointMisconfigurationError(
880 "The pytorch lightning trainer that was passed to exp_manager contained a ModelCheckpoint "
881 "and create_checkpoint_callback was set to True. Please either set create_checkpoint_callback "
882 "to False, or remove ModelCheckpoint from the lightning trainer"
883 )
884 # Create the callback and attach it to trainer
885 if "filepath" in params:
886 if params.filepath is not None:
887 logging.warning("filepath is deprecated. Please switch to dirpath and filename instead")
888 if params.dirpath is None:
889 params.dirpath = Path(params.filepath).parent
890 if params.filename is None:
891 params.filename = Path(params.filepath).name
892 with open_dict(params):
893 del params["filepath"]
894 if params.dirpath is None:
895 params.dirpath = Path(log_dir / 'checkpoints')
896 if params.filename is None:
897 params.filename = f'{name}--{{{params.monitor}:.4f}}-{{epoch}}'
898 if params.prefix is None:
899 params.prefix = name
900 NeMoModelCheckpoint.CHECKPOINT_NAME_LAST = params.filename + '-last'
901
902 logging.debug(params.dirpath)
903 logging.debug(params.filename)
904 logging.debug(params.prefix)
905
906 if "val" in params.monitor:
907 if (
908 trainer.max_epochs is not None
909 and trainer.max_epochs != -1
910 and trainer.max_epochs < trainer.check_val_every_n_epoch
911 ):
912 logging.error(
913 "The checkpoint callback was told to monitor a validation value but trainer.max_epochs("
914 f"{trainer.max_epochs}) was less than trainer.check_val_every_n_epoch({trainer.check_val_every_n_epoch}"
915 f"). It is very likely this run will fail with ModelCheckpoint(monitor='{params.monitor}') not found "
916 "in the returned metrics. Please ensure that validation is run within trainer.max_epochs."
917 )
918 elif trainer.max_steps is not None and trainer.max_steps != -1:
919 logging.warning(
920 "The checkpoint callback was told to monitor a validation value and trainer's max_steps was set to "
921 f"{trainer.max_steps}. Please ensure that max_steps will run for at least "
922 f"{trainer.check_val_every_n_epoch} epochs to ensure that checkpointing will not error out."
923 )
924
925 checkpoint_callback = NeMoModelCheckpoint(n_resume=resume, **params)
926 checkpoint_callback.last_model_path = trainer.ckpt_path or ""
927 if 'mp_rank' in checkpoint_callback.last_model_path or 'tp_rank' in checkpoint_callback.last_model_path:
928 checkpoint_callback.last_model_path = uninject_model_parallel_rank(checkpoint_callback.last_model_path)
929 trainer.callbacks.append(checkpoint_callback)
930 if create_preemption_callback:
931 # Check if cuda is available as preemption is supported only on GPUs
932 if torch.cuda.is_available():
933 ## By default PreemptionCallback handles SIGTERM. To handle other signals pass the signal in the call as below:
934 ## PreemptionCallback(checkpoint_callback, signal.SIGCHLD)
935 preemption_callback = PreemptionCallback(checkpoint_callback)
936 trainer.callbacks.append(preemption_callback)
937 else:
938 logging.info("Preemption is supported only on GPUs, disabling preemption")
939
940
941 def check_slurm(trainer):
942 try:
943 return trainer.accelerator_connector.is_slurm_managing_tasks
944 except AttributeError:
945 return False
946
947
948 class StatelessTimer(Timer):
949 """Extension of PTL timers to be per run."""
950
951 def __init__(self, duration: timedelta = None, interval: str = Interval.step, verbose: bool = True,) -> None:
952 super().__init__(duration, interval, verbose)
953
954 # Override PTL Timer's state dict to not store elapsed time information so that we can restore and continue training.
955 def state_dict(self) -> Dict[str, Any]:
956 return {}
957
958 def load_state_dict(self, state_dict: Dict[str, Any]) -> None:
959 return
960
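The effect of StatelessTimer's empty state_dict — a resumed run starts its wall-clock budget from zero instead of restoring the elapsed time — can be illustrated with a PTL-free mock. The class names below are illustrative stand-ins, not the real Lightning Timer:

```python
class TimerMock:
    # Stand-in for a stateful timer: elapsed time survives checkpointing.
    def __init__(self):
        self.elapsed = 0.0

    def state_dict(self):
        return {"elapsed": self.elapsed}

    def load_state_dict(self, state):
        self.elapsed = state.get("elapsed", 0.0)


class StatelessTimerMock(TimerMock):
    # Persist nothing, like NeMo's StatelessTimer: a resumed run
    # gets a full fresh max_time_per_run budget.
    def state_dict(self):
        return {}

    def load_state_dict(self, state):
        return


t = StatelessTimerMock()
t.elapsed = 3600.0
restored = StatelessTimerMock()
restored.load_state_dict(t.state_dict())
print(restored.elapsed)  # 0.0
```

This is exactly the property max_time_per_run relies on: each cluster job times out after its own budget rather than inheriting the elapsed time of the previous run.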
961
962 def configure_no_restart_validation_training_loop(trainer: pytorch_lightning.Trainer) -> None:
963 if type(trainer.fit_loop.epoch_loop) != _TrainingEpochLoop:
964 warnings.warn("Detected custom epoch loop. Skipping no validation on restart support.", UserWarning)
965 return
966 ## Pass trainer object to avoid trainer getting overwritten as None
967 loop = SkipResumeTrainingValidationLoop(trainer, trainer.min_steps, trainer.max_steps)
968 trainer.fit_loop.epoch_loop = loop
969
970
971 class SkipResumeTrainingValidationLoop(_TrainingEpochLoop):
972 """
973 Extend the PTL Epoch loop to skip validating when resuming.
974 This happens when resuming a checkpoint that has already run validation, but loading restores
975 the training state before validation has run.
976 """
977
978 def _should_check_val_fx(self) -> bool:
979 if self.restarting and self.global_step % self.trainer.val_check_batch == 0:
980 return False
981 return super()._should_check_val_fx()
982
983
984 def clean_exp_ckpt(exp_log_dir: Union[str, Path], remove_ckpt: bool = True, remove_nemo: bool = False):
985 """
986 Helper method that removes Pytorch Lightning .ckpt files or NeMo .nemo files from the checkpoint directory
987
988 Args:
989 exp_log_dir: str path to the root directory of the current experiment.
990 remove_ckpt: bool, whether to remove all *.ckpt files in the checkpoints directory.
991 remove_nemo: bool, whether to remove all *.nemo files in the checkpoints directory.
992 """
993 exp_log_dir = str(exp_log_dir)
994
995 if remove_ckpt:
996 logging.info("Deleting *.ckpt files ...")
997 ckpt_files = glob.glob(os.path.join(exp_log_dir, "checkpoints", "*.ckpt"))
998 for filepath in ckpt_files:
999 os.remove(filepath)
1000 logging.info(f"Deleted file : {filepath}")
1001
1002 if remove_nemo:
1003 logging.info("Deleting *.nemo files ...")
1004 nemo_files = glob.glob(os.path.join(exp_log_dir, "checkpoints", "*.nemo"))
1005 for filepath in nemo_files:
1006 os.remove(filepath)
1007 logging.info(f"Deleted file : {filepath}")
1008
[end of nemo/utils/exp_manager.py]
[start of scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram.py]
1 # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 #
15
16 """
17 # This script would evaluate an N-gram language model trained with KenLM library (https://github.com/kpu/kenlm) in
18 # fusion with beam search decoders on top of a trained ASR model with CTC decoder. To evaluate a model with
19 # Transducer (RNN-T) decoder use another script 'scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram_transducer.py'.
20 # NeMo's beam search decoders are capable of using the KenLM's N-gram models
21 # to find the best candidates. This script supports both character level and BPE level
22 # encodings and models which is detected automatically from the type of the model.
23 # You may train the LM model with 'scripts/asr_language_modeling/ngram_lm/train_kenlm.py'.
24
25 # Config Help
26
27 To discover all arguments of the script, please run :
28 python eval_beamsearch_ngram.py --help
29 python eval_beamsearch_ngram.py --cfg job
30
31 # USAGE
32
33 python eval_beamsearch_ngram.py nemo_model_file=<path to the .nemo file of the model> \
34 input_manifest=<path to the evaluation JSON manifest file> \
35 kenlm_model_file=<path to the binary KenLM model> \
36 beam_width=[<list of the beam widths, separated with commas>] \
37 beam_alpha=[<list of the beam alphas, separated with commas>] \
38 beam_beta=[<list of the beam betas, separated with commas>] \
39 preds_output_folder=<optional folder to store the predictions> \
40 probs_cache_file=null \
41 decoding_mode=beamsearch_ngram
42 ...
43
44
45 # Grid Search for Hyper parameters
46
47 For grid search, you can provide a list of arguments as follows -
48
49 beam_width=[4,8,16,....] \
50 beam_alpha=[-2.0,-1.0,...,1.0,2.0] \
51 beam_beta=[-1.0,-0.5,0.0,...,1.0] \
52
53 # You may find more info on how to use this script at:
54 # https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/asr_language_modeling.html
55
56 """
57
58
59 import contextlib
60 import json
61 import os
62 import pickle
63 from dataclasses import dataclass, field, is_dataclass
64 from pathlib import Path
65 from typing import List, Optional
66
67 import editdistance
68 import numpy as np
69 import torch
70 from omegaconf import MISSING, OmegaConf
71 from sklearn.model_selection import ParameterGrid
72 from tqdm.auto import tqdm
73
74 import nemo.collections.asr as nemo_asr
75 from nemo.collections.asr.models import EncDecHybridRNNTCTCModel
76 from nemo.collections.asr.parts.submodules import ctc_beam_decoding
77 from nemo.collections.asr.parts.utils.transcribe_utils import PunctuationCapitalization, TextProcessingConfig
78 from nemo.core.config import hydra_runner
79 from nemo.utils import logging
80
81 # fmt: off
82
83
84 @dataclass
85 class EvalBeamSearchNGramConfig:
86 """
87 Evaluate an ASR model with beam search decoding and n-gram KenLM language model.
88 """
89     # The path of the '.nemo' file of the ASR model or the name of a pretrained model (ngc / huggingface)
90 nemo_model_file: str = MISSING
91
92 # File paths
93 input_manifest: str = MISSING # The manifest file of the evaluation set
94 kenlm_model_file: Optional[str] = None # The path of the KenLM binary model file
95 preds_output_folder: Optional[str] = None # The optional folder where the predictions are stored
96 probs_cache_file: Optional[str] = None # The cache file for storing the logprobs of the model
97
98 # Parameters for inference
99 acoustic_batch_size: int = 16 # The batch size to calculate log probabilities
100 beam_batch_size: int = 128 # The batch size to be used for beam search decoding
101 device: str = "cuda" # The device to load the model onto to calculate log probabilities
102 use_amp: bool = False # Whether to use AMP if available to calculate log probabilities
103
104 # Beam Search hyperparameters
105
106 # The decoding scheme to be used for evaluation.
107 # Can be one of ["greedy", "beamsearch", "beamsearch_ngram"]
108 decoding_mode: str = "beamsearch_ngram"
109
110 beam_width: List[int] = field(default_factory=lambda: [128]) # The width or list of the widths for the beam search decoding
111 beam_alpha: List[float] = field(default_factory=lambda: [1.0]) # The alpha parameter or list of the alphas for the beam search decoding
112 beam_beta: List[float] = field(default_factory=lambda: [0.0]) # The beta parameter or list of the betas for the beam search decoding
113
114 decoding_strategy: str = "beam"
115 decoding: ctc_beam_decoding.BeamCTCInferConfig = ctc_beam_decoding.BeamCTCInferConfig(beam_size=128)
116
117 text_processing: Optional[TextProcessingConfig] = TextProcessingConfig(
118 punctuation_marks = ".,?",
119 separate_punctuation = False,
120 do_lowercase = False,
121 rm_punctuation = False,
122 )
123 # fmt: on
124
125
126 def beam_search_eval(
127 model: nemo_asr.models.ASRModel,
128 cfg: EvalBeamSearchNGramConfig,
129 all_probs: List[torch.Tensor],
130 target_transcripts: List[str],
131 preds_output_file: str = None,
132 lm_path: str = None,
133 beam_alpha: float = 1.0,
134 beam_beta: float = 0.0,
135 beam_width: int = 128,
136 beam_batch_size: int = 128,
137 progress_bar: bool = True,
138 punctuation_capitalization: PunctuationCapitalization = None,
139 ):
140 level = logging.getEffectiveLevel()
141 logging.setLevel(logging.CRITICAL)
142 # Reset config
143 model.change_decoding_strategy(None)
144
145 # Override the beam search config with current search candidate configuration
146 cfg.decoding.beam_size = beam_width
147 cfg.decoding.beam_alpha = beam_alpha
148 cfg.decoding.beam_beta = beam_beta
149 cfg.decoding.return_best_hypothesis = False
150 cfg.decoding.kenlm_path = cfg.kenlm_model_file
151
152 # Update model's decoding strategy config
153 model.cfg.decoding.strategy = cfg.decoding_strategy
154 model.cfg.decoding.beam = cfg.decoding
155
156 # Update model's decoding strategy
157 if isinstance(model, EncDecHybridRNNTCTCModel):
158 model.change_decoding_strategy(model.cfg.decoding, decoder_type='ctc')
159 decoding = model.ctc_decoding
160 else:
161 model.change_decoding_strategy(model.cfg.decoding)
162 decoding = model.decoding
163 logging.setLevel(level)
164
165 wer_dist_first = cer_dist_first = 0
166 wer_dist_best = cer_dist_best = 0
167 words_count = 0
168 chars_count = 0
169 sample_idx = 0
170 if preds_output_file:
171 out_file = open(preds_output_file, 'w', encoding='utf_8', newline='\n')
172
173 if progress_bar:
174 it = tqdm(
175 range(int(np.ceil(len(all_probs) / beam_batch_size))),
176 desc=f"Beam search decoding with width={beam_width}, alpha={beam_alpha}, beta={beam_beta}",
177 ncols=120,
178 )
179 else:
180 it = range(int(np.ceil(len(all_probs) / beam_batch_size)))
181 for batch_idx in it:
182 # disabling type checking
183 probs_batch = all_probs[batch_idx * beam_batch_size : (batch_idx + 1) * beam_batch_size]
184 probs_lens = torch.tensor([prob.shape[0] for prob in probs_batch])
185 with torch.no_grad():
186 packed_batch = torch.zeros(len(probs_batch), max(probs_lens), probs_batch[0].shape[-1], device='cpu')
187
188 for prob_index in range(len(probs_batch)):
189 packed_batch[prob_index, : probs_lens[prob_index], :] = torch.tensor(
190 probs_batch[prob_index], device=packed_batch.device, dtype=packed_batch.dtype
191 )
192
193 _, beams_batch = decoding.ctc_decoder_predictions_tensor(
194 packed_batch, decoder_lengths=probs_lens, return_hypotheses=True,
195 )
196
197 for beams_idx, beams in enumerate(beams_batch):
198 target = target_transcripts[sample_idx + beams_idx]
199 target_split_w = target.split()
200 target_split_c = list(target)
201 words_count += len(target_split_w)
202 chars_count += len(target_split_c)
203 wer_dist_min = cer_dist_min = 10000
204 for candidate_idx, candidate in enumerate(beams): # type: (int, ctc_beam_decoding.rnnt_utils.Hypothesis)
205 pred_text = candidate.text
206 if cfg.text_processing.do_lowercase:
207 pred_text = punctuation_capitalization.do_lowercase([pred_text])[0]
208 if cfg.text_processing.rm_punctuation:
209 pred_text = punctuation_capitalization.rm_punctuation([pred_text])[0]
210 if cfg.text_processing.separate_punctuation:
211 pred_text = punctuation_capitalization.separate_punctuation([pred_text])[0]
212 pred_split_w = pred_text.split()
213 wer_dist = editdistance.eval(target_split_w, pred_split_w)
214 pred_split_c = list(pred_text)
215 cer_dist = editdistance.eval(target_split_c, pred_split_c)
216
217 wer_dist_min = min(wer_dist_min, wer_dist)
218 cer_dist_min = min(cer_dist_min, cer_dist)
219
220 if candidate_idx == 0:
221 # first candidate
222 wer_dist_first += wer_dist
223 cer_dist_first += cer_dist
224
225 score = candidate.score
226 if preds_output_file:
227 out_file.write('{}\t{}\n'.format(pred_text, score))
228 wer_dist_best += wer_dist_min
229 cer_dist_best += cer_dist_min
230 sample_idx += len(probs_batch)
231
232 if preds_output_file:
233 out_file.close()
234 logging.info(f"Stored the predictions of beam search decoding at '{preds_output_file}'.")
235
236 if lm_path:
237 logging.info(
238 'WER/CER with beam search decoding and N-gram model = {:.2%}/{:.2%}'.format(
239 wer_dist_first / words_count, cer_dist_first / chars_count
240 )
241 )
242 else:
243 logging.info(
244 'WER/CER with beam search decoding = {:.2%}/{:.2%}'.format(
245 wer_dist_first / words_count, cer_dist_first / chars_count
246 )
247 )
248 logging.info(
249 'Oracle WER/CER in candidates with perfect LM= {:.2%}/{:.2%}'.format(
250 wer_dist_best / words_count, cer_dist_best / chars_count
251 )
252 )
253 logging.info(f"=================================================================================")
254
255 return wer_dist_first / words_count, cer_dist_first / chars_count
256
257
258 @hydra_runner(config_path=None, config_name='EvalBeamSearchNGramConfig', schema=EvalBeamSearchNGramConfig)
259 def main(cfg: EvalBeamSearchNGramConfig):
260 logging.warning("This file will be renamed to eval_beamsearch_ngram_ctc.py in the future NeMo (1.21) release.")
261 if is_dataclass(cfg):
262 cfg = OmegaConf.structured(cfg) # type: EvalBeamSearchNGramConfig
263
264 valid_decoding_modes = ["greedy", "beamsearch", "beamsearch_ngram"]
265 if cfg.decoding_mode not in valid_decoding_modes:
266 raise ValueError(
267             f"Given decoding_mode={cfg.decoding_mode} is invalid. Available options are:\n" f"{valid_decoding_modes}"
268 )
269
270 if cfg.nemo_model_file.endswith('.nemo'):
271 asr_model = nemo_asr.models.ASRModel.restore_from(cfg.nemo_model_file, map_location=torch.device(cfg.device))
272 else:
273 logging.warning(
274 "nemo_model_file does not end with .nemo, therefore trying to load a pretrained model with this name."
275 )
276 asr_model = nemo_asr.models.ASRModel.from_pretrained(
277 cfg.nemo_model_file, map_location=torch.device(cfg.device)
278 )
279
280 target_transcripts = []
281 manifest_dir = Path(cfg.input_manifest).parent
282 with open(cfg.input_manifest, 'r', encoding='utf_8') as manifest_file:
283 audio_file_paths = []
284 for line in tqdm(manifest_file, desc=f"Reading Manifest {cfg.input_manifest} ...", ncols=120):
285 data = json.loads(line)
286 audio_file = Path(data['audio_filepath'])
287 if not audio_file.is_file() and not audio_file.is_absolute():
288 audio_file = manifest_dir / audio_file
289 target_transcripts.append(data['text'])
290 audio_file_paths.append(str(audio_file.absolute()))
291
292 punctuation_capitalization = PunctuationCapitalization(cfg.text_processing.punctuation_marks)
293 if cfg.text_processing.do_lowercase:
294 target_transcripts = punctuation_capitalization.do_lowercase(target_transcripts)
295 if cfg.text_processing.rm_punctuation:
296 target_transcripts = punctuation_capitalization.rm_punctuation(target_transcripts)
297 if cfg.text_processing.separate_punctuation:
298 target_transcripts = punctuation_capitalization.separate_punctuation(target_transcripts)
299
300 if cfg.probs_cache_file and os.path.exists(cfg.probs_cache_file):
301 logging.info(f"Found a pickle file of probabilities at '{cfg.probs_cache_file}'.")
302 logging.info(f"Loading the cached pickle file of probabilities from '{cfg.probs_cache_file}' ...")
303 with open(cfg.probs_cache_file, 'rb') as probs_file:
304 all_probs = pickle.load(probs_file)
305
306 if len(all_probs) != len(audio_file_paths):
307 raise ValueError(
308 f"The number of samples in the probabilities file '{cfg.probs_cache_file}' does not "
309 f"match the manifest file. You may need to delete the probabilities cached file."
310 )
311 else:
312
313 @contextlib.contextmanager
314 def default_autocast():
315 yield
316
317 if cfg.use_amp:
318 if torch.cuda.is_available() and hasattr(torch.cuda, 'amp') and hasattr(torch.cuda.amp, 'autocast'):
319 logging.info("AMP is enabled!\n")
320 autocast = torch.cuda.amp.autocast
321
322 else:
323 autocast = default_autocast
324 else:
325
326 autocast = default_autocast
327
328 with autocast():
329 with torch.no_grad():
330 if isinstance(asr_model, EncDecHybridRNNTCTCModel):
331 asr_model.cur_decoder = 'ctc'
332 all_logits = asr_model.transcribe(audio_file_paths, batch_size=cfg.acoustic_batch_size, logprobs=True)
333
334 all_probs = all_logits
335 if cfg.probs_cache_file:
336 logging.info(f"Writing pickle files of probabilities at '{cfg.probs_cache_file}'...")
337 with open(cfg.probs_cache_file, 'wb') as f_dump:
338 pickle.dump(all_probs, f_dump)
339
340 wer_dist_greedy = 0
341 cer_dist_greedy = 0
342 words_count = 0
343 chars_count = 0
344 for batch_idx, probs in enumerate(all_probs):
345 preds = np.argmax(probs, axis=1)
346 preds_tensor = torch.tensor(preds, device='cpu').unsqueeze(0)
347 if isinstance(asr_model, EncDecHybridRNNTCTCModel):
348 pred_text = asr_model.ctc_decoding.ctc_decoder_predictions_tensor(preds_tensor)[0][0]
349 else:
350 pred_text = asr_model._wer.decoding.ctc_decoder_predictions_tensor(preds_tensor)[0][0]
351
352 if cfg.text_processing.do_lowercase:
353 pred_text = punctuation_capitalization.do_lowercase([pred_text])[0]
354 if cfg.text_processing.rm_punctuation:
355 pred_text = punctuation_capitalization.rm_punctuation([pred_text])[0]
356 if cfg.text_processing.separate_punctuation:
357 pred_text = punctuation_capitalization.separate_punctuation([pred_text])[0]
358
359 pred_split_w = pred_text.split()
360 target_split_w = target_transcripts[batch_idx].split()
361 pred_split_c = list(pred_text)
362 target_split_c = list(target_transcripts[batch_idx])
363
364 wer_dist = editdistance.eval(target_split_w, pred_split_w)
365 cer_dist = editdistance.eval(target_split_c, pred_split_c)
366
367 wer_dist_greedy += wer_dist
368 cer_dist_greedy += cer_dist
369 words_count += len(target_split_w)
370 chars_count += len(target_split_c)
371
372 logging.info('Greedy WER/CER = {:.2%}/{:.2%}'.format(wer_dist_greedy / words_count, cer_dist_greedy / chars_count))
373
374 asr_model = asr_model.to('cpu')
375
376 if cfg.decoding_mode == "beamsearch_ngram":
377 if not os.path.exists(cfg.kenlm_model_file):
378 raise FileNotFoundError(f"Could not find the KenLM model file '{cfg.kenlm_model_file}'.")
379 lm_path = cfg.kenlm_model_file
380 else:
381 lm_path = None
382
383 # 'greedy' decoding_mode would skip the beam search decoding
384 if cfg.decoding_mode in ["beamsearch_ngram", "beamsearch"]:
385 if cfg.beam_width is None or cfg.beam_alpha is None or cfg.beam_beta is None:
386 raise ValueError("beam_width, beam_alpha and beam_beta are needed to perform beam search decoding.")
387 params = {'beam_width': cfg.beam_width, 'beam_alpha': cfg.beam_alpha, 'beam_beta': cfg.beam_beta}
388 hp_grid = ParameterGrid(params)
389 hp_grid = list(hp_grid)
390
391 best_wer_beam_size, best_cer_beam_size = None, None
392 best_wer_alpha, best_cer_alpha = None, None
393 best_wer_beta, best_cer_beta = None, None
394 best_wer, best_cer = 1e6, 1e6
395
396 logging.info(f"==============================Starting the beam search decoding===============================")
397 logging.info(f"Grid search size: {len(hp_grid)}")
398 logging.info(f"It may take some time...")
399 logging.info(f"==============================================================================================")
400
401 if cfg.preds_output_folder and not os.path.exists(cfg.preds_output_folder):
402 os.mkdir(cfg.preds_output_folder)
403 for hp in hp_grid:
404 if cfg.preds_output_folder:
405 preds_output_file = os.path.join(
406 cfg.preds_output_folder,
407 f"preds_out_width{hp['beam_width']}_alpha{hp['beam_alpha']}_beta{hp['beam_beta']}.tsv",
408 )
409 else:
410 preds_output_file = None
411
412 candidate_wer, candidate_cer = beam_search_eval(
413 asr_model,
414 cfg,
415 all_probs=all_probs,
416 target_transcripts=target_transcripts,
417 preds_output_file=preds_output_file,
418 lm_path=lm_path,
419 beam_width=hp["beam_width"],
420 beam_alpha=hp["beam_alpha"],
421 beam_beta=hp["beam_beta"],
422 beam_batch_size=cfg.beam_batch_size,
423 progress_bar=True,
424 punctuation_capitalization=punctuation_capitalization,
425 )
426
427 if candidate_cer < best_cer:
428 best_cer_beam_size = hp["beam_width"]
429 best_cer_alpha = hp["beam_alpha"]
430 best_cer_beta = hp["beam_beta"]
431 best_cer = candidate_cer
432
433 if candidate_wer < best_wer:
434 best_wer_beam_size = hp["beam_width"]
435 best_wer_alpha = hp["beam_alpha"]
436 best_wer_beta = hp["beam_beta"]
437 best_wer = candidate_wer
438
439 logging.info(
440 f'Best WER Candidate = {best_wer:.2%} :: Beam size = {best_wer_beam_size}, '
441 f'Beam alpha = {best_wer_alpha}, Beam beta = {best_wer_beta}'
442 )
443
444 logging.info(
445 f'Best CER Candidate = {best_cer:.2%} :: Beam size = {best_cer_beam_size}, '
446 f'Beam alpha = {best_cer_alpha}, Beam beta = {best_cer_beta}'
447 )
448 logging.info(f"=================================================================================")
449
450
451 if __name__ == '__main__':
452 main()
453
[end of scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram.py]
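The grid search described in the docstring of `eval_beamsearch_ngram.py` is driven by `sklearn.model_selection.ParameterGrid`: each hyperparameter maps to a list of candidate values, and the script runs `beam_search_eval` once per combination in the cross product. A minimal sketch of how the candidates are enumerated (the value lists below are illustrative, not defaults):

```python
from sklearn.model_selection import ParameterGrid

# Each key maps to the list of values supplied on the command line,
# e.g. beam_width=[4,8] beam_alpha=[0.5,1.0] beam_beta=[0.0].
params = {
    'beam_width': [4, 8],
    'beam_alpha': [0.5, 1.0],
    'beam_beta': [0.0],
}
hp_grid = list(ParameterGrid(params))

# One decoding run per combination: 2 * 2 * 1 = 4 candidates.
print(len(hp_grid))
for hp in hp_grid:
    print(hp['beam_width'], hp['beam_alpha'], hp['beam_beta'])
```

Because the grid size is the product of the list lengths, wide sweeps (e.g. the `beam_alpha=[-2.0,...,2.0]` example in the docstring) multiply decoding time quickly, which is why the script caches log probabilities via `probs_cache_file`.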
[start of scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram_transducer.py]
1 # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 #
15
16 """
17 # This script would evaluate an N-gram language model trained with KenLM library (https://github.com/kpu/kenlm) in
18 # fusion with beam search decoders on top of a trained ASR Transducer model. NeMo's beam search decoders are capable of using the
19 # KenLM's N-gram models to find the best candidates. This script supports both character level and BPE level
20 # encodings and models which is detected automatically from the type of the model.
21 # You may train the LM model with 'scripts/asr_language_modeling/ngram_lm/train_kenlm.py'.
22
23 # Config Help
24
25 To discover all arguments of the script, please run:
26 python eval_beamsearch_ngram.py --help
27 python eval_beamsearch_ngram.py --cfg job
28
29 # USAGE
30
31 python eval_beamsearch_ngram_transducer.py nemo_model_file=<path to the .nemo file of the model> \
32      input_manifest=<path to the evaluation JSON manifest file> \
33 kenlm_model_file=<path to the binary KenLM model> \
34 beam_width=[<list of the beam widths, separated with commas>] \
35 beam_alpha=[<list of the beam alphas, separated with commas>] \
36 preds_output_folder=<optional folder to store the predictions> \
37 probs_cache_file=null \
38      decoding_strategy=<greedy_batch or maes decoding> \
39 maes_prefix_alpha=[<list of the maes prefix alphas, separated with commas>] \
40 maes_expansion_gamma=[<list of the maes expansion gammas, separated with commas>] \
41 hat_subtract_ilm=<in case of HAT model: subtract internal LM or not> \
42 hat_ilm_weight=[<in case of HAT model: list of the HAT internal LM weights, separated with commas>] \
43 ...
44
45
46 # Grid Search for Hyper parameters
47
48 For grid search, you can provide a list of arguments as follows -
49
50 beam_width=[4,8,16,....] \
51 beam_alpha=[-2.0,-1.0,...,1.0,2.0] \
52
53 # You may find more info on how to use this script at:
54 # https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/asr_language_modeling.html
55
56 """
57
58
59 import contextlib
60 import json
61 import os
62 import pickle
63 import tempfile
64 from dataclasses import dataclass, field, is_dataclass
65 from pathlib import Path
66 from typing import List, Optional
67
68 import editdistance
69 import numpy as np
70 import torch
71 from omegaconf import MISSING, OmegaConf
72 from sklearn.model_selection import ParameterGrid
73 from tqdm.auto import tqdm
74
75 import nemo.collections.asr as nemo_asr
76 from nemo.collections.asr.parts.submodules import rnnt_beam_decoding
77 from nemo.core.config import hydra_runner
78 from nemo.utils import logging
79
80 # fmt: off
81
82
83 @dataclass
84 class EvalBeamSearchNGramConfig:
85 """
86 Evaluate an ASR model with beam search decoding and n-gram KenLM language model.
87 """
88     # The path of the '.nemo' file of the ASR model or the name of a pretrained model (ngc / huggingface)
89 nemo_model_file: str = MISSING
90
91 # File paths
92 input_manifest: str = MISSING # The manifest file of the evaluation set
93 kenlm_model_file: Optional[str] = None # The path of the KenLM binary model file
94 preds_output_folder: Optional[str] = None # The optional folder where the predictions are stored
95 probs_cache_file: Optional[str] = None # The cache file for storing the logprobs of the model
96
97 # Parameters for inference
98 acoustic_batch_size: int = 128 # The batch size to calculate log probabilities
99 beam_batch_size: int = 128 # The batch size to be used for beam search decoding
100 device: str = "cuda" # The device to load the model onto to calculate log probabilities
101 use_amp: bool = False # Whether to use AMP if available to calculate log probabilities
102 num_workers: int = 1 # Number of workers for DataLoader
103
104 # The decoding scheme to be used for evaluation
105 decoding_strategy: str = "greedy_batch" # ["greedy_batch", "beam", "tsd", "alsd", "maes"]
106
107 # Beam Search hyperparameters
108 beam_width: List[int] = field(default_factory=lambda: [8]) # The width or list of the widths for the beam search decoding
109 beam_alpha: List[float] = field(default_factory=lambda: [0.2]) # The alpha parameter or list of the alphas for the beam search decoding
110
111 maes_prefix_alpha: List[int] = field(default_factory=lambda: [2]) # The maes_prefix_alpha or list of the maes_prefix_alpha for the maes decoding
112 maes_expansion_gamma: List[float] = field(default_factory=lambda: [2.3]) # The maes_expansion_gamma or list of the maes_expansion_gamma for the maes decoding
113
114 # HAT related parameters (only for internal lm subtraction)
115 hat_subtract_ilm: bool = False
116 hat_ilm_weight: List[float] = field(default_factory=lambda: [0.0])
117
118 decoding: rnnt_beam_decoding.BeamRNNTInferConfig = rnnt_beam_decoding.BeamRNNTInferConfig(beam_size=128)
119
120
121 # fmt: on
122
123
124 def decoding_step(
125 model: nemo_asr.models.ASRModel,
126 cfg: EvalBeamSearchNGramConfig,
127 all_probs: List[torch.Tensor],
128 target_transcripts: List[str],
129 preds_output_file: str = None,
130 beam_batch_size: int = 128,
131 progress_bar: bool = True,
132 ):
133 level = logging.getEffectiveLevel()
134 logging.setLevel(logging.CRITICAL)
135 # Reset config
136 model.change_decoding_strategy(None)
137
138 cfg.decoding.hat_ilm_weight = cfg.decoding.hat_ilm_weight * cfg.hat_subtract_ilm
139 # Override the beam search config with current search candidate configuration
140 cfg.decoding.return_best_hypothesis = False
141 cfg.decoding.ngram_lm_model = cfg.kenlm_model_file
142 cfg.decoding.hat_subtract_ilm = cfg.hat_subtract_ilm
143
144 # Update model's decoding strategy config
145 model.cfg.decoding.strategy = cfg.decoding_strategy
146 model.cfg.decoding.beam = cfg.decoding
147
148 # Update model's decoding strategy
149 model.change_decoding_strategy(model.cfg.decoding)
150 logging.setLevel(level)
151
152 wer_dist_first = cer_dist_first = 0
153 wer_dist_best = cer_dist_best = 0
154 words_count = 0
155 chars_count = 0
156 sample_idx = 0
157 if preds_output_file:
158 out_file = open(preds_output_file, 'w', encoding='utf_8', newline='\n')
159
160 if progress_bar:
161 if cfg.decoding_strategy == "greedy_batch":
162 description = "Greedy_batch decoding.."
163 else:
164 description = f"{cfg.decoding_strategy} decoding with bw={cfg.decoding.beam_size}, ba={cfg.decoding.ngram_lm_alpha}, ma={cfg.decoding.maes_prefix_alpha}, mg={cfg.decoding.maes_expansion_gamma}, hat_ilmw={cfg.decoding.hat_ilm_weight}"
165 it = tqdm(range(int(np.ceil(len(all_probs) / beam_batch_size))), desc=description, ncols=120)
166 else:
167 it = range(int(np.ceil(len(all_probs) / beam_batch_size)))
168 for batch_idx in it:
169 # disabling type checking
170 probs_batch = all_probs[batch_idx * beam_batch_size : (batch_idx + 1) * beam_batch_size]
171 probs_lens = torch.tensor([prob.shape[-1] for prob in probs_batch])
172 with torch.no_grad():
173 packed_batch = torch.zeros(len(probs_batch), probs_batch[0].shape[0], max(probs_lens), device='cpu')
174
175 for prob_index in range(len(probs_batch)):
176 packed_batch[prob_index, :, : probs_lens[prob_index]] = torch.tensor(
177 probs_batch[prob_index].unsqueeze(0), device=packed_batch.device, dtype=packed_batch.dtype
178 )
179 best_hyp_batch, beams_batch = model.decoding.rnnt_decoder_predictions_tensor(
180 packed_batch, probs_lens, return_hypotheses=True,
181 )
182 if cfg.decoding_strategy == "greedy_batch":
183 beams_batch = [[x] for x in best_hyp_batch]
184
185 for beams_idx, beams in enumerate(beams_batch):
186 target = target_transcripts[sample_idx + beams_idx]
187 target_split_w = target.split()
188 target_split_c = list(target)
189 words_count += len(target_split_w)
190 chars_count += len(target_split_c)
191 wer_dist_min = cer_dist_min = 10000
192 for candidate_idx, candidate in enumerate(beams): # type: (int, rnnt_beam_decoding.rnnt_utils.Hypothesis)
193 pred_text = candidate.text
194 pred_split_w = pred_text.split()
195 wer_dist = editdistance.eval(target_split_w, pred_split_w)
196 pred_split_c = list(pred_text)
197 cer_dist = editdistance.eval(target_split_c, pred_split_c)
198
199 wer_dist_min = min(wer_dist_min, wer_dist)
200 cer_dist_min = min(cer_dist_min, cer_dist)
201
202 if candidate_idx == 0:
203 # first candidate
204 wer_dist_first += wer_dist
205 cer_dist_first += cer_dist
206
207 score = candidate.score
208 if preds_output_file:
209 out_file.write('{}\t{}\n'.format(pred_text, score))
210 wer_dist_best += wer_dist_min
211 cer_dist_best += cer_dist_min
212 sample_idx += len(probs_batch)
213
214 if cfg.decoding_strategy == "greedy_batch":
215 return wer_dist_first / words_count, cer_dist_first / chars_count
216
217 if preds_output_file:
218 out_file.close()
219 logging.info(f"Stored the predictions of {cfg.decoding_strategy} decoding at '{preds_output_file}'.")
220
221 if cfg.decoding.ngram_lm_model:
222 logging.info(
223 f"WER/CER with {cfg.decoding_strategy} decoding and N-gram model = {wer_dist_first / words_count:.2%}/{cer_dist_first / chars_count:.2%}"
224 )
225 else:
226 logging.info(
227 f"WER/CER with {cfg.decoding_strategy} decoding = {wer_dist_first / words_count:.2%}/{cer_dist_first / chars_count:.2%}"
228 )
229 logging.info(
230 f"Oracle WER/CER in candidates with perfect LM= {wer_dist_best / words_count:.2%}/{cer_dist_best / chars_count:.2%}"
231 )
232 logging.info(f"=================================================================================")
233
234 return wer_dist_first / words_count, cer_dist_first / chars_count
235
236
237 @hydra_runner(config_path=None, config_name='EvalBeamSearchNGramConfig', schema=EvalBeamSearchNGramConfig)
238 def main(cfg: EvalBeamSearchNGramConfig):
239 if is_dataclass(cfg):
240 cfg = OmegaConf.structured(cfg) # type: EvalBeamSearchNGramConfig
241
242     valid_decoding_strategies = ["greedy_batch", "beam", "tsd", "alsd", "maes"]
243     if cfg.decoding_strategy not in valid_decoding_strategies:
244         raise ValueError(
245             f"Given decoding_strategy={cfg.decoding_strategy} is invalid. Available options are:\n"
246             f"{valid_decoding_strategies}"
247         )
248
249 if cfg.nemo_model_file.endswith('.nemo'):
250 asr_model = nemo_asr.models.ASRModel.restore_from(cfg.nemo_model_file, map_location=torch.device(cfg.device))
251 else:
252 logging.warning(
253 "nemo_model_file does not end with .nemo, therefore trying to load a pretrained model with this name."
254 )
255 asr_model = nemo_asr.models.ASRModel.from_pretrained(
256 cfg.nemo_model_file, map_location=torch.device(cfg.device)
257 )
258
259 if cfg.kenlm_model_file:
260 if not os.path.exists(cfg.kenlm_model_file):
261 raise FileNotFoundError(f"Could not find the KenLM model file '{cfg.kenlm_model_file}'.")
262 if cfg.decoding_strategy != "maes":
263             raise ValueError("Decoding with a KenLM model is supported only for the maes decoding algorithm.")
264 lm_path = cfg.kenlm_model_file
265 else:
266 lm_path = None
267 cfg.beam_alpha = [0.0]
268 if cfg.hat_subtract_ilm:
269 assert lm_path, "kenlm must be set for hat internal lm subtraction"
270
271 if cfg.decoding_strategy != "maes":
272 cfg.maes_prefix_alpha, cfg.maes_expansion_gamma, cfg.hat_ilm_weight = [0], [0], [0]
273
274 target_transcripts = []
275 manifest_dir = Path(cfg.input_manifest).parent
276 with open(cfg.input_manifest, 'r', encoding='utf_8') as manifest_file:
277 audio_file_paths = []
278 for line in tqdm(manifest_file, desc=f"Reading Manifest {cfg.input_manifest} ...", ncols=120):
279 data = json.loads(line)
280 audio_file = Path(data['audio_filepath'])
281 if not audio_file.is_file() and not audio_file.is_absolute():
282 audio_file = manifest_dir / audio_file
283 target_transcripts.append(data['text'])
284 audio_file_paths.append(str(audio_file.absolute()))
285
286 if cfg.probs_cache_file and os.path.exists(cfg.probs_cache_file):
287 logging.info(f"Found a pickle file of probabilities at '{cfg.probs_cache_file}'.")
288 logging.info(f"Loading the cached pickle file of probabilities from '{cfg.probs_cache_file}' ...")
289 with open(cfg.probs_cache_file, 'rb') as probs_file:
290 all_probs = pickle.load(probs_file)
291
292 if len(all_probs) != len(audio_file_paths):
293 raise ValueError(
294 f"The number of samples in the probabilities file '{cfg.probs_cache_file}' does not "
295 f"match the manifest file. You may need to delete the probabilities cached file."
296 )
297 else:
298
299 @contextlib.contextmanager
300 def default_autocast():
301 yield
302
303 if cfg.use_amp:
304 if torch.cuda.is_available() and hasattr(torch.cuda, 'amp') and hasattr(torch.cuda.amp, 'autocast'):
305 logging.info("AMP is enabled!\n")
306 autocast = torch.cuda.amp.autocast
307
308 else:
309 autocast = default_autocast
310 else:
311
312 autocast = default_autocast
313
314 # manual calculation of encoder_embeddings
315 with autocast():
316 with torch.no_grad():
317 asr_model.eval()
318 asr_model.encoder.freeze()
319 device = next(asr_model.parameters()).device
320 all_probs = []
321 with tempfile.TemporaryDirectory() as tmpdir:
322 with open(os.path.join(tmpdir, 'manifest.json'), 'w', encoding='utf-8') as fp:
323 for audio_file in audio_file_paths:
324 entry = {'audio_filepath': audio_file, 'duration': 100000, 'text': ''}
325 fp.write(json.dumps(entry) + '\n')
326 config = {
327 'paths2audio_files': audio_file_paths,
328 'batch_size': cfg.acoustic_batch_size,
329 'temp_dir': tmpdir,
330 'num_workers': cfg.num_workers,
331 'channel_selector': None,
332 'augmentor': None,
333 }
334 temporary_datalayer = asr_model._setup_transcribe_dataloader(config)
335 for test_batch in tqdm(temporary_datalayer, desc="Transcribing", disable=True):
336 encoded, encoded_len = asr_model.forward(
337 input_signal=test_batch[0].to(device), input_signal_length=test_batch[1].to(device)
338 )
339 # dump encoder embeddings per file
340 for idx in range(encoded.shape[0]):
341 encoded_no_pad = encoded[idx, :, : encoded_len[idx]]
342 all_probs.append(encoded_no_pad)
343
344 if cfg.probs_cache_file:
345 logging.info(f"Writing pickle files of probabilities at '{cfg.probs_cache_file}'...")
346 with open(cfg.probs_cache_file, 'wb') as f_dump:
347 pickle.dump(all_probs, f_dump)
348
349 if cfg.decoding_strategy == "greedy_batch":
350 asr_model = asr_model.to('cpu')
351 candidate_wer, candidate_cer = decoding_step(
352 asr_model,
353 cfg,
354 all_probs=all_probs,
355 target_transcripts=target_transcripts,
356 beam_batch_size=cfg.beam_batch_size,
357 progress_bar=True,
358 )
359 logging.info(f"Greedy batch WER/CER = {candidate_wer:.2%}/{candidate_cer:.2%}")
360
361 asr_model = asr_model.to('cpu')
362
363 # 'greedy_batch' decoding_strategy would skip the beam search decoding
364 if cfg.decoding_strategy in ["beam", "tsd", "alsd", "maes"]:
365 if cfg.beam_width is None or cfg.beam_alpha is None:
366 raise ValueError("beam_width and beam_alpha are needed to perform beam search decoding.")
367 params = {
368 'beam_width': cfg.beam_width,
369 'beam_alpha': cfg.beam_alpha,
370 'maes_prefix_alpha': cfg.maes_prefix_alpha,
371 'maes_expansion_gamma': cfg.maes_expansion_gamma,
372 'hat_ilm_weight': cfg.hat_ilm_weight,
373 }
374 hp_grid = ParameterGrid(params)
375 hp_grid = list(hp_grid)
376
377 best_wer_beam_size, best_cer_beam_size = None, None
378 best_wer_alpha, best_cer_alpha = None, None
379 best_wer, best_cer = 1e6, 1e6
380
381 logging.info(
382 f"==============================Starting the {cfg.decoding_strategy} decoding==============================="
383 )
384 logging.info(f"Grid search size: {len(hp_grid)}")
385 logging.info(f"It may take some time...")
386 logging.info(f"==============================================================================================")
387
388 if cfg.preds_output_folder and not os.path.exists(cfg.preds_output_folder):
389 os.mkdir(cfg.preds_output_folder)
390 for hp in hp_grid:
391 if cfg.preds_output_folder:
392 results_file = f"preds_out_{cfg.decoding_strategy}_bw{hp['beam_width']}"
393 if cfg.decoding_strategy == "maes":
394 results_file = f"{results_file}_ma{hp['maes_prefix_alpha']}_mg{hp['maes_expansion_gamma']}"
395 if cfg.kenlm_model_file:
396 results_file = f"{results_file}_ba{hp['beam_alpha']}"
397 if cfg.hat_subtract_ilm:
398 results_file = f"{results_file}_hat_ilmw{hp['hat_ilm_weight']}"
399 preds_output_file = os.path.join(cfg.preds_output_folder, f"{results_file}.tsv")
400 else:
401 preds_output_file = None
402
403 cfg.decoding.beam_size = hp["beam_width"]
404 cfg.decoding.ngram_lm_alpha = hp["beam_alpha"]
405 cfg.decoding.maes_prefix_alpha = hp["maes_prefix_alpha"]
406 cfg.decoding.maes_expansion_gamma = hp["maes_expansion_gamma"]
407 cfg.decoding.hat_ilm_weight = hp["hat_ilm_weight"]
408
409 candidate_wer, candidate_cer = decoding_step(
410 asr_model,
411 cfg,
412 all_probs=all_probs,
413 target_transcripts=target_transcripts,
414 preds_output_file=preds_output_file,
415 beam_batch_size=cfg.beam_batch_size,
416 progress_bar=True,
417 )
418
419 if candidate_cer < best_cer:
420 best_cer_beam_size = hp["beam_width"]
421 best_cer_alpha = hp["beam_alpha"]
422 best_cer_ma = hp["maes_prefix_alpha"]
423 best_cer_mg = hp["maes_expansion_gamma"]
424 best_cer_hat_ilm_weight = hp["hat_ilm_weight"]
425 best_cer = candidate_cer
426
427 if candidate_wer < best_wer:
428 best_wer_beam_size = hp["beam_width"]
429 best_wer_alpha = hp["beam_alpha"]
430 best_wer_ma = hp["maes_prefix_alpha"]
431 best_wer_ga = hp["maes_expansion_gamma"]
432 best_wer_hat_ilm_weight = hp["hat_ilm_weight"]
433 best_wer = candidate_wer
434
435 wer_hat_parameter = ""
436 if cfg.hat_subtract_ilm:
437 wer_hat_parameter = f"HAT ilm weight = {best_wer_hat_ilm_weight}, "
438 logging.info(
439 f'Best WER Candidate = {best_wer:.2%} :: Beam size = {best_wer_beam_size}, '
440 f'Beam alpha = {best_wer_alpha}, {wer_hat_parameter}'
441 f'maes_prefix_alpha = {best_wer_ma}, maes_expansion_gamma = {best_wer_ga} '
442 )
443
444 cer_hat_parameter = ""
445 if cfg.hat_subtract_ilm:
446 cer_hat_parameter = f"HAT ilm weight = {best_cer_hat_ilm_weight}"
447 logging.info(
448 f'Best CER Candidate = {best_cer:.2%} :: Beam size = {best_cer_beam_size}, '
449 f'Beam alpha = {best_cer_alpha}, {cer_hat_parameter} '
450 f'maes_prefix_alpha = {best_cer_ma}, maes_expansion_gamma = {best_cer_mg}'
451 )
452 logging.info(f"=================================================================================")
453
454
455 if __name__ == '__main__':
456 main()
457
[end of scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram_transducer.py]
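The hyperparameter sweep in the script above is driven by scikit-learn's `ParameterGrid`, which expands a dict of value lists into every combination. A minimal sketch of that pattern follows; the parameter values and the stand-in scoring function are illustrative only, not the script's defaults, and a real run would call the decoding/WER computation instead.

```python
from sklearn.model_selection import ParameterGrid

# Illustrative grid, mirroring the beam-search sweep above
# (values are placeholders, not the script's defaults).
params = {
    'beam_width': [4, 8],
    'beam_alpha': [0.5, 1.0, 2.0],
}
hp_grid = list(ParameterGrid(params))  # 2 * 3 = 6 combinations, each a dict

best_wer, best_hp = float('inf'), None
for hp in hp_grid:
    # stand-in for decoding_step(); a real sweep decodes and scores WER here
    candidate_wer = abs(hp['beam_alpha'] - 1.0) + 1.0 / hp['beam_width']
    if candidate_wer < best_wer:
        best_wer, best_hp = candidate_wer, hp

print(best_hp)
```

The same pattern extends to the transducer-specific parameters (`maes_prefix_alpha`, `maes_expansion_gamma`, `hat_ilm_weight`) simply by adding keys to `params`.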
[start of scripts/confidence_ensembles/build_ensemble.py]
1 # Copyright (c) 2023, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """
16 This script provides a functionality to create confidence-based ensembles
17 from a collection of pretrained models.
18
19 For more details see the paper https://arxiv.org/abs/2306.15824
20 or tutorial in tutorials/asr/Confidence_Ensembles.ipynb
21
22 You would typically use this script by providing a yaml config file or overriding
23 default options from command line.
24
25 Usage examples:
26
27 1. Building an ensemble of two monolingual models with default settings (no confidence tuning).
28
29 python build_ensemble.py --config-path=. --config-name=ensemble_config.yaml
30 ensemble.0.model=stt_it_conformer_ctc_large
31 ensemble.0.training_manifest=<path to the Italian data of 100+ utterances (no transcription required)>
32 ensemble.1.model=stt_es_conformer_ctc_large
33 ensemble.1.training_manifest=<path to the Spanish data of 100+ utterances (no transcription required)>
34 output_path=<path to the desired location of the .nemo checkpoint>
35
36 You can have more than 2 models and can control transcription settings (e.g., batch size)
37 with ``transcription.<any argument of examples/asr/transcribe_speech.py>`` parameters.
38
39 2. If you want to get improved results, you can enable tuning of the confidence and logistic regression (LR) parameters.
40 E.g.
41
42 python build_ensemble.py
43 <all arguments like in the previous example>
44 ensemble.0.dev_manifest=<path to the dev data that's required for tuning>
45 ...
46 # IMPORTANT: see the note below if you use > 2 models!
47 ensemble.N.dev_manifest=<path to the dev data that's required for tuning>
48 tune_confidence=True # to allow confidence tuning. LR is tuned by default
49
50 As with any tuning, it is recommended to have a reasonably large validation set for each model,
51 otherwise you might overfit to the validation data.
52
53 Note that if you add additional models (> 2) you will need to modify ensemble_config.yaml
54 or create a new one with added models in there. While it's theoretically possible to
55 fully override such parameters from commandline, hydra is very unfriendly for such
56 use-cases, so it's strongly recommended to create new configs.
57
58 3. If you want to precisely control tuning grid search, you can do that with
59
60 python build_ensemble.py
61 <all arguments as in the previous examples>
62 tune_confidence_config.confidence_type='[entropy_renyi_exp,entropy_tsallis_exp]' # only tune over this set
63 tune_confidence_config.alpha='[0.1,0.5,1.0]' # only tune over this set
64
65 You can check the dataclasses in this file for the full list of supported
66 arguments and their default values.
67 """
68
69 import atexit
70
71 # using default logging to be able to silence unnecessary messages from nemo
72 import logging
73 import os
74 import random
75 import sys
76 import tempfile
77 from copy import deepcopy
78 from dataclasses import dataclass
79 from pathlib import Path
80 from typing import Dict, List, Optional, Tuple
81
82 import joblib
83 import numpy as np
84 import pytorch_lightning as pl
85 from omegaconf import MISSING, DictConfig, OmegaConf
86 from sklearn.linear_model import LogisticRegression
87 from sklearn.metrics import confusion_matrix
88 from sklearn.pipeline import Pipeline, make_pipeline
89 from sklearn.preprocessing import StandardScaler
90 from tqdm import tqdm
91
92 from nemo.collections.asr.models.confidence_ensemble import (
93 ConfidenceEnsembleModel,
94 ConfidenceSpec,
95 compute_confidence,
96 get_filtered_logprobs,
97 )
98 from nemo.collections.asr.parts.utils.asr_confidence_utils import (
99 ConfidenceConfig,
100 ConfidenceMethodConfig,
101 get_confidence_aggregation_bank,
102 get_confidence_measure_bank,
103 )
104 from nemo.collections.asr.parts.utils.rnnt_utils import Hypothesis
105 from nemo.core.config import hydra_runner
106
107 LOG = logging.getLogger(__file__)
108
109 # adding to the Python path. If the import fails, we ask the user to get the file
110 try:
111 sys.path.append(str(Path(__file__).parents[2] / "examples" / "asr"))
112 import transcribe_speech
113 except ImportError:
114 # if users run script normally from nemo repo, this shouldn't be triggered as
115 # we modify the path above. But if they downloaded the build_ensemble.py as
116 # an isolated script, we'd ask them to also download corresponding version
117 # of the transcribe_speech.py
118 print(
119 "Current script depends on 'examples/asr/transcribe_speech.py', but can't find it. "
120 "If it's not present, download it from the NeMo github manually and put inside this folder."
121 )
122
123
124 @dataclass
125 class EnsembleConfig:
126 # .nemo path or pretrained name
127 model: str = MISSING
128 # path to the training data manifest (non-tarred)
129 training_manifest: str = MISSING
130 # specify to limit the number of training samples
131 # 100 is most likely enough, but setting higher default just in case
132 max_training_samples: int = 1000
133 # specify to provide dev data manifest for HP tuning
134 dev_manifest: Optional[str] = None
135
136
137 @dataclass
138 class TuneConfidenceConfig:
139 # important parameter, so should always be tuned
140 exclude_blank: Tuple[bool] = (True, False)
141 # prod is pretty much always worse, so not including by default
142 aggregation: Tuple[str] = ("mean", "min", "max")
143 # not including max prob, as there is always an entropy-based metric
144 # that's better but otherwise including everything
145 confidence_type: Tuple[str] = (
146 "entropy_renyi_exp",
147 "entropy_renyi_lin",
148 "entropy_tsallis_exp",
149 "entropy_tsallis_lin",
150 "entropy_gibbs_lin",
151 "entropy_gibbs_exp",
152 )
153
154 # TODO: currently it's not possible to efficiently tune temperature, as we always
155 # apply log-softmax in the decoder, so to try different values it will be required
156 # to rerun the decoding, which is very slow. To support this for one-off experiments
157 # it's possible to modify the code of CTC decoder / Transducer joint to
158 # remove log-softmax and then apply it directly in this script with the temperature
159 #
160 # Alternatively, one can run this script multiple times with different values of
161 # temperature and pick the best performing ensemble. Note that this will increase
162 # tuning time by the number of temperature values tried. On the other hand,
163 # the above approach is a lot more efficient and will only slightly increase
164 # the total tuning runtime.
165
166 # very important to tune for max prob, but for entropy metrics 1.0 is almost always best
167 # temperature: Tuple[float] = (1.0,)
168
169 # not that important, but can sometimes make a small difference
170 alpha: Tuple[float] = (0.25, 0.33, 0.5, 1.0)
171
172 def get_grid_size(self) -> int:
173 """Returns the total number of points in the search space."""
174 if "max_prob" in self.confidence_type:
175 return (
176 len(self.exclude_blank)
177 * len(self.aggregation)
178 * ((len(self.confidence_type) - 1) * len(self.alpha) + 1)
179 )
180 return len(self.exclude_blank) * len(self.aggregation) * len(self.confidence_type) * len(self.alpha)
181
182
183 @dataclass
184 class TuneLogisticRegressionConfig:
185 # will have log-uniform grid over this range with that many points
186     # note that a value of 10000.0 (no regularization) is always added
187 C_num_points: int = 10
188 C_min: float = 0.0001
189 C_max: float = 10.0
190
191 # not too important
192 multi_class: Tuple[str] = ("ovr", "multinomial")
193
194 # should try to include weights directly if the data is too imbalanced
195 class_weight: Tuple = (None, "balanced")
196
197 # increase if getting many warnings that algorithm didn't converge
198 max_iter: int = 1000
199
200
201 @dataclass
202 class BuildEnsembleConfig:
203 # where to save the resulting ensemble model
204 output_path: str = MISSING
205
206 # each model specification
207 ensemble: List[EnsembleConfig] = MISSING
208
209 random_seed: int = 0 # for reproducibility
210
211 # default confidence, can override
212 confidence: ConfidenceConfig = ConfidenceConfig(
213 # we keep frame confidences and apply aggregation manually to get full-utterance confidence
214 preserve_frame_confidence=True,
215 exclude_blank=True,
216 aggregation="mean",
217 method_cfg=ConfidenceMethodConfig(name="entropy", entropy_type="renyi", alpha=0.25, entropy_norm="lin",),
218 )
219 temperature: float = 1.0
220
221 # this is optional, but can be used to change any aspect of the transcription
222 # config, such as batch size or amp usage. Note that model, data and confidence
223     # will be overridden by this script
224 transcription: transcribe_speech.TranscriptionConfig = transcribe_speech.TranscriptionConfig()
225
226 # set to True to tune the confidence.
227 # requires dev manifests to be specified for each model
228 tune_confidence: bool = False
229 # used to specify what to tune over. By default runs tuning over some
230     # reasonable grid, so that it does not take forever.
231 # Can be changed as needed
232 tune_confidence_config: TuneConfidenceConfig = TuneConfidenceConfig()
233
234 # very fast to tune and can be important in case of imbalanced datasets
235 # will automatically set to False if dev data is not available
236 tune_logistic_regression: bool = True
237 tune_logistic_regression_config: TuneLogisticRegressionConfig = TuneLogisticRegressionConfig()
238
239 def __post_init__(self):
240 """Checking that if any dev data is provided, all are provided.
241
242 Will also auto-set tune_logistic_regression to False if no dev data
243 is available.
244
245 If tune_confidence is set to True (user choice) and no dev data is
246 provided, will raise an error.
247 """
248 num_dev_data = 0
249 for ensemble_cfg in self.ensemble:
250 num_dev_data += ensemble_cfg.dev_manifest is not None
251 if num_dev_data == 0:
252 if self.tune_confidence:
253 raise ValueError("tune_confidence is set to True, but no dev data is provided")
254 LOG.info("Setting tune_logistic_regression = False since no dev data is provided")
255 self.tune_logistic_regression = False
256 return
257
258 if num_dev_data < len(self.ensemble):
259 raise ValueError(
260 "Some ensemble configs specify dev data, but some don't. Either all have to specify it or none!"
261 )
262
263
264 def calculate_score(features: np.ndarray, labels: np.ndarray, pipe: Pipeline) -> Tuple[float, np.ndarray]:
265     """Score is calculated as overall accuracy.
266
267     Computed as the trace of the confusion matrix divided by its sum.
268
269 Args:
270 features: numpy array of features of shape [N x D], where N is the
271 number of objects (typically a total number of utterances in
272 all datasets) and D is the total number of confidence scores
273 used to train the model (typically = number of models).
274         labels: numpy array of shape [N] containing ground-truth model indices.
275 pipe: classification pipeline (currently, standardization + logistic
276 regression).
277
278 Returns:
279 tuple: score value in [0, 1] and full classification confusion matrix.
280 """
281 predictions = pipe.predict(features)
282 conf_m = confusion_matrix(labels, predictions)
283 score = np.diag(conf_m).sum() / conf_m.sum()
284 return score, conf_m
285
286
287 def train_model_selection(
288 training_features: np.ndarray,
289 training_labels: np.ndarray,
290 dev_features: Optional[np.ndarray] = None,
291 dev_labels: Optional[np.ndarray] = None,
292 tune_lr: bool = False,
293 tune_lr_cfg: Optional[TuneLogisticRegressionConfig] = None,
294 verbose: bool = False,
295 ) -> Tuple[Pipeline, float]:
296 """Trains model selection block with an (optional) tuning of the parameters.
297
298 Returns a pipeline consisting of feature standardization and logistic
299 regression. If tune_lr is set to True, dev features/labels will be used
300 to tune the hyperparameters of the logistic regression with the grid
301 search that's defined via ``tune_lr_cfg``.
302
303 If no tuning is requested, uses the following parameters::
304
305 best_pipe = make_pipeline(
306 StandardScaler(),
307 LogisticRegression(
308 multi_class="multinomial",
309 C=10000.0,
310 max_iter=1000,
311 class_weight="balanced",
312 ),
313 )
314
315 Args:
316 training_features: numpy array of features of shape [N x D], where N is
317 the number of objects (typically a total number of utterances in
318 all training datasets) and D is the total number of confidence
319 scores used to train the model (typically = number of models).
320         training_labels: numpy array of shape [N] containing ground-truth
321 model indices.
322 dev_features: same as training, but for the validation subset.
323 dev_labels: same as training, but for the validation subset.
324 tune_lr: controls whether tuning of LR hyperparameters is performed.
325 If set to True, it's required to also provide dev features/labels.
326 tune_lr_cfg: specifies what values of LR hyperparameters to try.
327 verbose: if True, will output final training/dev scores.
328
329 Returns:
330 tuple: trained model selection pipeline, best score (or -1 if no tuning
331 was done).
332 """
333 if not tune_lr:
334 # default parameters: C=10000.0 disables regularization
335 best_pipe = make_pipeline(
336 StandardScaler(),
337 LogisticRegression(multi_class="multinomial", C=10000.0, max_iter=1000, class_weight="balanced"),
338 )
339 max_score = -1
340 else:
341 C_pms = np.append(
342 np.exp(np.linspace(np.log(tune_lr_cfg.C_min), np.log(tune_lr_cfg.C_max), tune_lr_cfg.C_num_points)),
343 10000.0,
344 )
345 max_score = 0
346 best_pipe = None
347 for class_weight in tune_lr_cfg.class_weight:
348 for multi_class in tune_lr_cfg.multi_class:
349 for C in C_pms:
350 pipe = make_pipeline(
351 StandardScaler(),
352 LogisticRegression(
353 multi_class=multi_class, C=C, max_iter=tune_lr_cfg.max_iter, class_weight=class_weight
354 ),
355 )
356 pipe.fit(training_features, training_labels)
357 score, confusion = calculate_score(dev_features, dev_labels, pipe)
358 if score > max_score:
359 max_score = score
360 best_pipe = pipe
361
362 best_pipe.fit(training_features, training_labels)
363 if verbose:
364 accuracy, confusion = calculate_score(training_features, training_labels, best_pipe)
365 LOG.info("Training fit accuracy: %.4f", accuracy * 100.0)
366 LOG.info("Training confusion matrix:\n%s", str(confusion))
367 if dev_features is not None and verbose:
368 accuracy, confusion = calculate_score(dev_features, dev_labels, best_pipe)
369 LOG.info("Dev fit accuracy: %.4f", accuracy * 100.0)
370 LOG.info("Dev confusion matrix:\n%s", str(confusion))
371
372 return best_pipe, max_score
373
374
375 def subsample_manifest(manifest_file: str, max_samples: int) -> str:
376 """Will save a subsampled version of the manifest to the same folder.
377
378 Have to save to the same folder to support relative paths.
379
380 Args:
381 manifest_file: path to the manifest file that needs subsampling.
382 max_samples: how many samples to retain. Will randomly select that
383 many lines from the manifest.
384
385 Returns:
386 str: the path to the subsampled manifest file.
387 """
388 with open(manifest_file, "rt", encoding="utf-8") as fin:
389 lines = fin.readlines()
390 if max_samples < len(lines):
391 lines = random.sample(lines, max_samples)
392 output_file = manifest_file + "-subsampled"
393 with open(output_file, "wt", encoding="utf-8") as fout:
394 fout.write("".join(lines))
395 return output_file
396
397
398 def cleanup_subsampled_manifests(subsampled_manifests: List[str]):
399     """Removes all generated subsampled manifests."""
400 for manifest in subsampled_manifests:
401 os.remove(manifest)
402
403
404 def compute_all_confidences(
405 hypothesis: Hypothesis, tune_confidence_cfg: TuneConfidenceConfig
406 ) -> Dict[ConfidenceSpec, float]:
407 """Computes a set of confidence scores from a given hypothesis.
408
409 Works with the output of both CTC and Transducer decoding.
410
411 Args:
412 hypothesis: generated hypothesis as returned from the transcribe
413 method of the ASR model.
414 tune_confidence_cfg: config specifying what confidence scores to
415 compute.
416
417 Returns:
418         dict: dictionary with confidence spec -> confidence score mapping.
419 """
420 conf_values = {}
421
422 for exclude_blank in tune_confidence_cfg.exclude_blank:
423 filtered_logprobs = get_filtered_logprobs(hypothesis, exclude_blank)
424 vocab_size = filtered_logprobs.shape[1]
425 for aggregation in tune_confidence_cfg.aggregation:
426 aggr_func = get_confidence_aggregation_bank()[aggregation]
427 for conf_type in tune_confidence_cfg.confidence_type:
428 conf_func = get_confidence_measure_bank()[conf_type]
429 if conf_type == "max_prob": # skipping alpha in this case
430 conf_value = aggr_func(conf_func(filtered_logprobs, v=vocab_size, t=1.0)).cpu().item()
431 conf_values[ConfidenceSpec(exclude_blank, aggregation, conf_type, 1.0)] = conf_value
432 else:
433 for alpha in tune_confidence_cfg.alpha:
434 conf_value = aggr_func(conf_func(filtered_logprobs, v=vocab_size, t=alpha)).cpu().item()
435 conf_values[ConfidenceSpec(exclude_blank, aggregation, conf_type, alpha)] = conf_value
436
437 return conf_values
438
439
440 def find_best_confidence(
441 train_confidences: List[List[Dict[ConfidenceSpec, float]]],
442 train_labels: List[int],
443 dev_confidences: List[List[Dict[ConfidenceSpec, float]]],
444 dev_labels: List[int],
445 tune_lr: bool,
446     tune_lr_config: TuneLogisticRegressionConfig,
447 ) -> Tuple[ConfidenceConfig, Pipeline]:
448 """Finds the best confidence configuration for model selection.
449
450 Will loop over all values in the confidence dictionary and fit the LR
451 model (optionally tuning its HPs). The best performing confidence (on the
452 dev set) will be used for the final LR model.
453
454 Args:
455 train_confidences: this is an object of type
456 ``List[List[Dict[ConfidenceSpec, float]]]``. The shape of this
457 object is [M, N, S], where
458 M: number of models
459 N: number of utterances in all training sets
460 S: number of confidence scores to try
461
462 This argument will be used to construct np.array objects for each
463 of the confidence scores with the shape [M, N]
464
465 train_labels: ground-truth labels of the correct model for each data
466             point. This is a list of size [N].
467 dev_confidences: same as training, but for the validation subset.
468 dev_labels: same as training, but for the validation subset.
469 tune_lr: controls whether tuning of LR hyperparameters is performed.
470         tune_lr_config: specifies what values of LR hyperparameters to try.
471
472 Returns:
473 tuple: best confidence config, best model selection pipeline
474 """
475 max_score = 0
476 best_pipe = None
477 best_conf_spec = None
478     LOG.info("Evaluating all confidences. Total grid size: %d", len(train_confidences[0][0].keys()))
479 for conf_spec in tqdm(train_confidences[0][0].keys()):
480 cur_train_confidences = []
481 for model_confs in train_confidences:
482 cur_train_confidences.append([])
483 for model_conf in model_confs:
484 cur_train_confidences[-1].append(model_conf[conf_spec])
485 cur_dev_confidences = []
486 for model_confs in dev_confidences:
487 cur_dev_confidences.append([])
488 for model_conf in model_confs:
489 cur_dev_confidences[-1].append(model_conf[conf_spec])
490 # transposing with zip(*list)
491 training_features = np.array(list(zip(*cur_train_confidences)))
492 training_labels = np.array(train_labels)
493 dev_features = np.array(list(zip(*cur_dev_confidences)))
494 dev_labels = np.array(dev_labels)
495 pipe, score = train_model_selection(
496 training_features, training_labels, dev_features, dev_labels, tune_lr, tune_lr_config,
497 )
498 if max_score < score:
499 max_score = score
500 best_pipe = pipe
501 best_conf_spec = conf_spec
502 LOG.info("Found better parameters: %s. New score: %.4f", str(conf_spec), max_score)
503
504 return best_conf_spec.to_confidence_config(), best_pipe
505
506
507 @hydra_runner(config_name="BuildEnsembleConfig", schema=BuildEnsembleConfig)
508 def main(cfg: BuildEnsembleConfig):
509 # silencing all messages from nemo/ptl to avoid dumping tons of configs to the stdout
510 logging.getLogger('pytorch_lightning').setLevel(logging.CRITICAL)
511 logging.getLogger('nemo_logger').setLevel(logging.CRITICAL)
512 LOG.info(f'Build ensemble config:\n{OmegaConf.to_yaml(cfg)}')
513
514 # to ensure post init is called
515 cfg = BuildEnsembleConfig(**cfg)
516
517 pl.seed_everything(cfg.random_seed)
518 cfg.transcription.random_seed = None # seed is already applied
519 cfg.transcription.return_transcriptions = True
520 cfg.transcription.preserve_alignment = True
521 cfg.transcription.ctc_decoding.temperature = cfg.temperature
522 cfg.transcription.rnnt_decoding.temperature = cfg.temperature
523 # this ensures that generated output is after log-softmax for consistency with CTC
524
525 train_confidences = []
526 dev_confidences = []
527 train_labels = []
528 dev_labels = []
529
530 # registering clean-up function that will hold on to this list and
531 # should clean up even if there is partial error in some of the transcribe
532 # calls
533 subsampled_manifests = []
534 atexit.register(cleanup_subsampled_manifests, subsampled_manifests)
535
536 # note that we loop over the same config.
537 # This is intentional, as we need to run all models on all datasets
538 # this loop will do the following things:
539 # 1. Goes through each model X each training dataset
540 # 2. Computes predictions by directly calling transcribe_speech.main
541 # 3. Converts transcription to the confidence score(s) as specified in the config
542 # 4. If dev sets are provided, computes the same for them
543 # 5. Creates a list of ground-truth model indices by mapping each model
544 # to its own training dataset as specified in the config.
545 # 6. After the loop, we either run tuning over all confidence scores or
546 # directly use a single score to fit logistic regression and save the
547 # final ensemble model.
548 for model_idx, model_cfg in enumerate(cfg.ensemble):
549 train_model_confidences = []
550 dev_model_confidences = []
551 for data_idx, data_cfg in enumerate(cfg.ensemble):
552 if model_idx == 0: # generating subsampled manifests only one time
553 subsampled_manifests.append(
554 subsample_manifest(data_cfg.training_manifest, data_cfg.max_training_samples)
555 )
556 subsampled_manifest = subsampled_manifests[data_idx]
557
558 if model_cfg.model.endswith(".nemo"):
559 cfg.transcription.model_path = model_cfg.model
560 else: # assuming pretrained model
561 cfg.transcription.pretrained_name = model_cfg.model
562
563 cfg.transcription.dataset_manifest = subsampled_manifest
564
565 # training
566 with tempfile.NamedTemporaryFile() as output_file:
567 cfg.transcription.output_filename = output_file.name
568 LOG.info("Transcribing training dataset %d with model %d", data_idx, model_idx)
569 transcriptions = transcribe_speech.main(deepcopy(cfg.transcription))
570 LOG.info("Generating confidence scores")
571 # TODO: parallelize this loop?
572 for transcription in tqdm(transcriptions):
573 if cfg.tune_confidence:
574 train_model_confidences.append(
575 compute_all_confidences(transcription, cfg.tune_confidence_config)
576 )
577 else:
578 train_model_confidences.append(compute_confidence(transcription, cfg.confidence))
579 if model_idx == 0: # labels are the same for all models
580 train_labels.append(data_idx)
581
582 # optional dev
583 if data_cfg.dev_manifest is not None:
584 cfg.transcription.dataset_manifest = data_cfg.dev_manifest
585 with tempfile.NamedTemporaryFile() as output_file:
586 cfg.transcription.output_filename = output_file.name
587 LOG.info("Transcribing dev dataset %d with model %d", data_idx, model_idx)
588 transcriptions = transcribe_speech.main(deepcopy(cfg.transcription))
589 LOG.info("Generating confidence scores")
590 for transcription in tqdm(transcriptions):
591 if cfg.tune_confidence:
592 dev_model_confidences.append(
593 compute_all_confidences(transcription, cfg.tune_confidence_config)
594 )
595 else:
596 dev_model_confidences.append(compute_confidence(transcription, cfg.confidence))
597 if model_idx == 0: # labels are the same for all models
598 dev_labels.append(data_idx)
599
600 train_confidences.append(train_model_confidences)
601 if dev_model_confidences:
602 dev_confidences.append(dev_model_confidences)
603
604 if cfg.tune_confidence:
605 best_confidence, model_selection_block = find_best_confidence(
606 train_confidences,
607 train_labels,
608 dev_confidences,
609 dev_labels,
610 cfg.tune_logistic_regression,
611 cfg.tune_logistic_regression_config,
612 )
613 else:
614 best_confidence = cfg.confidence
615 # transposing with zip(*list)
616 training_features = np.array(list(zip(*train_confidences)))
617 training_labels = np.array(train_labels)
618 if dev_confidences:
619 dev_features = np.array(list(zip(*dev_confidences)))
620 dev_labels = np.array(dev_labels)
621 else:
622 dev_features = None
623 dev_labels = None
624 model_selection_block, _ = train_model_selection(
625 training_features,
626 training_labels,
627 dev_features,
628 dev_labels,
629 cfg.tune_logistic_regression,
630 cfg.tune_logistic_regression_config,
631 verbose=True,
632 )
633
634 with tempfile.TemporaryDirectory() as tmpdir:
635 model_selection_block_path = os.path.join(tmpdir, 'model_selection_block.pkl')
636 joblib.dump(model_selection_block, model_selection_block_path)
637
638 # creating ensemble checkpoint
639 ensemble_model = ConfidenceEnsembleModel(
640 cfg=DictConfig(
641 {
642 'model_selection_block': model_selection_block_path,
643 'confidence': best_confidence,
644 'temperature': cfg.temperature,
645 'load_models': [model_cfg.model for model_cfg in cfg.ensemble],
646 }
647 ),
648 trainer=None,
649 )
650 ensemble_model.save_to(cfg.output_path)
651
652
653 if __name__ == '__main__':
654 main()
655
[end of scripts/confidence_ensembles/build_ensemble.py]
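The model-selection block trained by `build_ensemble.py` is a `StandardScaler` + `LogisticRegression` pipeline fit on per-model confidence features, scored via the confusion matrix. A toy end-to-end sketch of that default (untuned) path follows; the synthetic features and cluster centres are made up for illustration, and `multi_class="multinomial"` from the script's defaults is omitted here since it is deprecated in newer scikit-learn and is the default behaviour anyway.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic confidence features: 2 models -> 2 columns, one row per utterance.
# Utterances matching model 0 get high confidence in column 0, and vice versa.
rng = np.random.default_rng(0)
features = np.vstack([
    rng.normal([0.9, 0.3], 0.05, size=(50, 2)),  # ground truth: model 0
    rng.normal([0.3, 0.9], 0.05, size=(50, 2)),  # ground truth: model 1
])
labels = np.array([0] * 50 + [1] * 50)

# Same default pipeline as train_model_selection() with tune_lr=False
# (C=10000.0 effectively disables regularization).
pipe = make_pipeline(
    StandardScaler(),
    LogisticRegression(C=10000.0, max_iter=1000, class_weight="balanced"),
)
pipe.fit(features, labels)

# Same scoring as calculate_score(): confusion-matrix trace over its sum.
conf_m = confusion_matrix(labels, pipe.predict(features))
score = np.diag(conf_m).sum() / conf_m.sum()
print(score)  # cleanly separable toy clusters -> 1.0
```

At inference time, the ensemble checkpoint applies the same fitted pipeline to the confidence vector of a new utterance and routes it to the model with the highest predicted class.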
[start of scripts/speech_recognition/confidence/benchmark_asr_confidence.py]
1 # Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import json
16 import os
17 from dataclasses import dataclass, is_dataclass
18 from pathlib import Path
19 from typing import Optional
20
21 import pytorch_lightning as pl
22 import torch
23 from omegaconf import MISSING, OmegaConf
24 from sklearn.model_selection import ParameterGrid
25
26 from nemo.collections.asr.metrics.rnnt_wer import RNNTDecodingConfig
27 from nemo.collections.asr.metrics.wer import CTCDecodingConfig
28 from nemo.collections.asr.models import ASRModel, EncDecRNNTModel
29 from nemo.collections.asr.parts.utils.asr_confidence_benchmarking_utils import (
30 apply_confidence_parameters,
31 run_confidence_benchmark,
32 )
33 from nemo.collections.asr.parts.utils.asr_confidence_utils import ConfidenceConfig
34 from nemo.core.config import hydra_runner
35 from nemo.utils import logging, model_utils
36
37 """
38 Get confidence metrics and curve plots for a given model, dataset, and confidence parameters.
39
40 # Arguments
41 model_path: Path to .nemo ASR checkpoint
42 pretrained_name: Name of pretrained ASR model (from NGC registry)
43 dataset_manifest: Path to dataset JSON manifest file (in NeMo format)
44 output_dir: Output directory to store a report and curve plot directories
45
46 batch_size: batch size during inference
47 num_workers: number of workers during inference
48
49 cuda: Optional int to enable or disable execution of model on certain CUDA device
50 amp: Bool to decide if Automatic Mixed Precision should be used during inference
51 audio_type: Str filetype of the audio. Supported = wav, flac, mp3
52
53 target_level: Word- or token-level confidence. Supported = word, token, auto (for computing both word and token)
54 confidence_cfg: Config with confidence parameters
55 grid_params: Dictionary with lists of parameters to iteratively benchmark on
56
57 # Usage
58 ASR model can be specified by either "model_path" or "pretrained_name".
59 Data for transcription are defined with "dataset_manifest".
60 Results are returned as a benchmark report and curve plots.
61
62 python benchmark_asr_confidence.py \
63 model_path=null \
64 pretrained_name=null \
65 dataset_manifest="" \
66 output_dir="" \
67 batch_size=64 \
68 num_workers=8 \
69 cuda=0 \
70 amp=True \
71 target_level="word" \
72 confidence_cfg.exclude_blank=False \
73 'grid_params="{\"aggregation\": [\"min\", \"prod\"], \"alpha\": [0.33, 0.5]}"'
74 """
75
76
77 def get_experiment_params(cfg):
78 """Get experiment parameters from a confidence config and generate the experiment name.
79
80 Returns:
81 List of experiment parameters.
82 String with the experiment name.
83 """
84 blank = "no_blank" if cfg.exclude_blank else "blank"
85 aggregation = cfg.aggregation
86 method_name = cfg.method_cfg.name
87 alpha = cfg.method_cfg.alpha
88 if method_name == "entropy":
89 entropy_type = cfg.method_cfg.entropy_type
90 entropy_norm = cfg.method_cfg.entropy_norm
91 experiment_param_list = [
92 aggregation,
93 str(cfg.exclude_blank),
94 method_name,
95 entropy_type,
96 entropy_norm,
97 str(alpha),
98 ]
99 experiment_str = "-".join([aggregation, blank, method_name, entropy_type, entropy_norm, str(alpha)])
100 else:
101 experiment_param_list = [aggregation, str(cfg.exclude_blank), method_name, "-", "-", str(alpha)]
102 experiment_str = "-".join([aggregation, blank, method_name, str(alpha)])
103 return experiment_param_list, experiment_str
104
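The naming scheme above can be exercised without loading a real confidence config. The following sketch mirrors `get_experiment_params`' string-building logic on a mock config built from `SimpleNamespace` (the helper `experiment_name` and the mock fields are illustrative, not part of the NeMo API):

```python
from types import SimpleNamespace

def experiment_name(cfg):
    # Mirrors get_experiment_params' naming scheme (illustration only):
    # entropy methods embed entropy_type/entropy_norm in the name,
    # other methods use aggregation-blank-method-alpha.
    blank = "no_blank" if cfg.exclude_blank else "blank"
    m = cfg.method_cfg
    if m.name == "entropy":
        return "-".join([cfg.aggregation, blank, m.name, m.entropy_type, m.entropy_norm, str(m.alpha)])
    return "-".join([cfg.aggregation, blank, m.name, str(m.alpha)])

cfg = SimpleNamespace(
    aggregation="prod",
    exclude_blank=True,
    method_cfg=SimpleNamespace(name="entropy", entropy_type="tsallis", entropy_norm="exp", alpha=0.5),
)
print(experiment_name(cfg))  # prod-no_blank-entropy-tsallis-exp-0.5
```

This name doubles as the plot directory for each grid-search run, so each parameter combination lands in its own folder.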
105
106 @dataclass
107 class ConfidenceBenchmarkingConfig:
108 # Required configs
109 model_path: Optional[str] = None # Path to a .nemo file
110 pretrained_name: Optional[str] = None # Name of a pretrained model
111 dataset_manifest: str = MISSING
112 output_dir: str = MISSING
113
114 # General configs
115 batch_size: int = 32
116 num_workers: int = 4
117
118 # Set `cuda` to int to define CUDA device. If 'None', will look for CUDA
119 # device anyway, and do inference on CPU only if CUDA device is not found.
120 # If `cuda` is a negative number, inference will be on CPU only.
121 cuda: Optional[int] = None
122 amp: bool = False
123 audio_type: str = "wav"
124
125 # Confidence configs
126 target_level: str = "auto" # Choices: "word", "token", "auto" (for both word- and token-level confidence)
127 confidence_cfg: ConfidenceConfig = ConfidenceConfig(preserve_word_confidence=True, preserve_token_confidence=True)
128 grid_params: Optional[str] = None # a dictionary with lists of parameters to iteratively benchmark on
129
130
131 @hydra_runner(config_name="ConfidenceBenchmarkingConfig", schema=ConfidenceBenchmarkingConfig)
132 def main(cfg: ConfidenceBenchmarkingConfig):
133 torch.set_grad_enabled(False)
134
135 logging.info(f'Hydra config: {OmegaConf.to_yaml(cfg)}')
136
137 if is_dataclass(cfg):
138 cfg = OmegaConf.structured(cfg)
139
140 if cfg.model_path is None and cfg.pretrained_name is None:
141 raise ValueError("Both cfg.model_path and cfg.pretrained_name cannot be None!")
142
143 # setup GPU
144 if cfg.cuda is None:
145 if torch.cuda.is_available():
146 device = [0] # use 0th CUDA device
147 accelerator = 'gpu'
148 else:
149 device = 1
150 accelerator = 'cpu'
151 else:
152 device = [cfg.cuda]
153 accelerator = 'gpu'
154
155 map_location = torch.device('cuda:{}'.format(device[0]) if accelerator == 'gpu' else 'cpu')
156
157 # setup model
158 if cfg.model_path is not None:
159 # restore model from .nemo file path
160 model_cfg = ASRModel.restore_from(restore_path=cfg.model_path, return_config=True)
161 classpath = model_cfg.target # original class path
162 imported_class = model_utils.import_class_by_path(classpath) # type: ASRModel
163 logging.info(f"Restoring model : {imported_class.__name__}")
164 asr_model = imported_class.restore_from(
165 restore_path=cfg.model_path, map_location=map_location
166 ) # type: ASRModel
167 else:
168 # restore model by name
169 asr_model = ASRModel.from_pretrained(
170 model_name=cfg.pretrained_name, map_location=map_location
171 ) # type: ASRModel
172
173 trainer = pl.Trainer(devices=device, accelerator=accelerator)
174 asr_model.set_trainer(trainer)
175 asr_model = asr_model.eval()
176
177 # Check if ctc or rnnt model
178 is_rnnt = isinstance(asr_model, EncDecRNNTModel)
179
180 # Check that the model has the `change_decoding_strategy` method
181 if not hasattr(asr_model, 'change_decoding_strategy'):
182 raise RuntimeError("The asr_model you are using must have the `change_decoding_strategy` method.")
183
184 # get filenames and reference texts from manifest
185 filepaths = []
186 reference_texts = []
187 if os.stat(cfg.dataset_manifest).st_size == 0:
188 logging.error(f"The input dataset_manifest {cfg.dataset_manifest} is empty. Exiting!")
189 return None
190 manifest_dir = Path(cfg.dataset_manifest).parent
191 with open(cfg.dataset_manifest, 'r') as f:
192 for line in f:
193 item = json.loads(line)
194 audio_file = Path(item['audio_filepath'])
195 if not audio_file.is_file() and not audio_file.is_absolute():
196 audio_file = manifest_dir / audio_file
197 filepaths.append(str(audio_file.absolute()))
198 reference_texts.append(item['text'])
199
200 # setup AMP (optional)
201 autocast = None
202 if cfg.amp and torch.cuda.is_available() and hasattr(torch.cuda, 'amp') and hasattr(torch.cuda.amp, 'autocast'):
203 logging.info("AMP enabled!\n")
204 autocast = torch.cuda.amp.autocast
205
206 # do grid-based benchmarking if grid_params is provided, otherwise a regular one
207 work_dir = Path(cfg.output_dir)
208 os.makedirs(work_dir, exist_ok=True)
209 report_legend = (
210 ",".join(
211 [
212 "model_type",
213 "aggregation",
214 "blank",
215 "method_name",
216 "entropy_type",
217 "entropy_norm",
218 "alpha",
219 "target_level",
220 "auc_roc",
221 "auc_pr",
222 "auc_nt",
223 "nce",
224 "ece",
225 "auc_yc",
226 "std_yc",
227 "max_yc",
228 ]
229 )
230 + "\n"
231 )
232 model_typename = "RNNT" if is_rnnt else "CTC"
233 report_file = work_dir / Path("report.csv")
234 if cfg.grid_params:
235 asr_model.change_decoding_strategy(
236 RNNTDecodingConfig(fused_batch_size=-1, strategy="greedy_batch", confidence_cfg=cfg.confidence_cfg)
237 if is_rnnt
238 else CTCDecodingConfig(confidence_cfg=cfg.confidence_cfg)
239 )
240 params = json.loads(cfg.grid_params)
241 hp_grid = ParameterGrid(params)
242 hp_grid = list(hp_grid)
243
244 logging.info(f"==============================Running a benchmarking with grid search=========================")
245 logging.info(f"Grid search size: {len(hp_grid)}")
246 logging.info(f"Results will be written to:\nreport file `{report_file}`\nand plot directories near the file")
247 logging.info(f"==============================================================================================")
248
249 with open(report_file, "tw", encoding="utf-8") as f:
250 f.write(report_legend)
251 f.flush()
252 for i, hp in enumerate(hp_grid):
253 logging.info(f"Run # {i + 1}, grid: `{hp}`")
254 asr_model.change_decoding_strategy(apply_confidence_parameters(asr_model.cfg.decoding, hp))
255 param_list, experiment_name = get_experiment_params(asr_model.cfg.decoding.confidence_cfg)
256 plot_dir = work_dir / Path(experiment_name)
257 results = run_confidence_benchmark(
258 asr_model,
259 cfg.target_level,
260 filepaths,
261 reference_texts,
262 cfg.batch_size,
263 cfg.num_workers,
264 plot_dir,
265 autocast,
266 )
267 for level, result in results.items():
268 f.write(f"{model_typename},{','.join(param_list)},{level},{','.join([str(r) for r in result])}\n")
269 f.flush()
270 else:
271 asr_model.change_decoding_strategy(
272 RNNTDecodingConfig(fused_batch_size=-1, strategy="greedy_batch", confidence_cfg=cfg.confidence_cfg)
273 if is_rnnt
274 else CTCDecodingConfig(confidence_cfg=cfg.confidence_cfg)
275 )
276 param_list, experiment_name = get_experiment_params(asr_model.cfg.decoding.confidence_cfg)
277 plot_dir = work_dir / Path(experiment_name)
278
279 logging.info(f"==============================Running a single benchmarking===================================")
280 logging.info(f"Results will be written to:\nreport file `{report_file}`\nand plot directory `{plot_dir}`")
281
282 with open(report_file, "tw", encoding="utf-8") as f:
283 f.write(report_legend)
284 f.flush()
285 results = run_confidence_benchmark(
286 asr_model,
287 cfg.target_level,
288 filepaths,
289 reference_texts,
290 cfg.batch_size,
291 cfg.num_workers,
292 plot_dir,
293 autocast,
294 )
295 for level, result in results.items():
296 f.write(f"{model_typename},{','.join(param_list)},{level},{','.join([str(r) for r in result])}\n")
297 logging.info(f"===========================================Done===============================================")
298
299
300 if __name__ == '__main__':
301 main()
302
[end of scripts/speech_recognition/confidence/benchmark_asr_confidence.py]
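The grid search in the script above feeds `json.loads(cfg.grid_params)` to sklearn's `ParameterGrid`, which expands the per-key value lists into every combination. A minimal stdlib stand-in (the `parameter_grid` generator is a sketch, not sklearn's implementation) shows the expansion for the example from the docstring:

```python
import itertools
import json

def parameter_grid(params):
    # Cartesian expansion over sorted keys, like sklearn's ParameterGrid.
    keys = sorted(params)
    for values in itertools.product(*(params[k] for k in keys)):
        yield dict(zip(keys, values))

grid_params = '{"aggregation": ["min", "prod"], "alpha": [0.33, 0.5]}'
grid = list(parameter_grid(json.loads(grid_params)))
print(len(grid))  # 4 combinations: 2 aggregations x 2 alphas
```

Each yielded dict is one benchmarking run, applied to the decoding config via `apply_confidence_parameters`.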
[start of scripts/speech_recognition/convert_to_tarred_audio_dataset.py]
1 # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """
15 # This script converts an existing audio dataset with a manifest to
16 # a tarred and sharded audio dataset that can be read by the
17 # TarredAudioToTextDataLayer.
18
19 # Please make sure your audio_filepath DOES NOT CONTAIN '-sub'!
20 # Because we will use it to handle files which have duplicate filenames but with different offsets
21 # (see function create_shard for details)
22
23
24 # Bucketing can help to improve the training speed. You may use --buckets_num to specify the number of buckets.
25 # It creates multiple tarred datasets, one per bucket, based on the audio durations.
26 # The range of [min_duration, max_duration) is split into equal sized buckets.
27 # Recommend to use --sort_in_shards to speedup the training by reducing the paddings in the batches
28 # More info on how to use bucketing feature: https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/datasets.html
29
30 # If valid NVIDIA DALI version is installed, will also generate the corresponding DALI index files that need to be
31 # supplied to the config in order to utilize webdataset for efficient large dataset handling.
32 # NOTE: DALI + Webdataset is NOT compatible with Bucketing support !
33
34 # Usage:
35 1) Creating a new tarfile dataset
36
37 python convert_to_tarred_audio_dataset.py \
38 --manifest_path=<path to the manifest file> \
39 --target_dir=<path to output directory> \
40 --num_shards=<number of tarfiles that will contain the audio> \
41 --max_duration=<float representing maximum duration of audio samples> \
42 --min_duration=<float representing minimum duration of audio samples> \
43 --shuffle --shuffle_seed=1 \
44 --sort_in_shards \
45 --workers=-1
46
47
48 2) Concatenating more tarfiles to a pre-existing tarred dataset
49
50 python convert_to_tarred_audio_dataset.py \
51 --manifest_path=<path to the tarred manifest file> \
52 --metadata_path=<path to the metadata.yaml (or metadata_version_{X}.yaml) file> \
53 --target_dir=<path to output directory where the original tarfiles are contained> \
54 --max_duration=<float representing maximum duration of audio samples> \
55 --min_duration=<float representing minimum duration of audio samples> \
56 --shuffle --shuffle_seed=1 \
57 --sort_in_shards \
58 --workers=-1 \
59 --concat_manifest_paths \
60 <space separated paths to 1 or more manifest files to concatenate into the original tarred dataset>
61
62 3) Writing an empty metadata file
63
64 python convert_to_tarred_audio_dataset.py \
65 --target_dir=<path to output directory> \
66 # any other optional argument
67 --num_shards=8 \
68 --max_duration=16.7 \
69 --min_duration=0.01 \
70 --shuffle \
71 --workers=-1 \
72 --sort_in_shards \
73 --shuffle_seed=1 \
74 --write_metadata
75
76 """
77 import argparse
78 import copy
79 import json
80 import os
81 import random
82 import tarfile
83 from collections import defaultdict
84 from dataclasses import dataclass, field
85 from datetime import datetime
86 from typing import Any, List, Optional
87
88 from joblib import Parallel, delayed
89 from omegaconf import DictConfig, OmegaConf, open_dict
90
91 try:
92 import create_dali_tarred_dataset_index as dali_index
93
94 DALI_INDEX_SCRIPT_AVAILABLE = True
95 except (ImportError, ModuleNotFoundError, FileNotFoundError):
96 DALI_INDEX_SCRIPT_AVAILABLE = False
97
98 parser = argparse.ArgumentParser(
99 description="Convert an existing ASR dataset to tarballs compatible with TarredAudioToTextDataLayer."
100 )
101 parser.add_argument(
102 "--manifest_path", default=None, type=str, required=False, help="Path to the existing dataset's manifest."
103 )
104
105 parser.add_argument(
106 '--concat_manifest_paths',
107 nargs='+',
108 default=None,
109 type=str,
110 required=False,
help="Paths to additional datasets' manifests that will be concatenated with the base dataset.",
112 )
113
114 # Optional arguments
115 parser.add_argument(
116 "--target_dir",
117 default='./tarred',
118 type=str,
119 help="Target directory for resulting tarballs and manifest. Defaults to `./tarred`. Creates the path if necessary.",
120 )
121
122 parser.add_argument(
123 "--metadata_path", required=False, default=None, type=str, help="Path to metadata file for the dataset.",
124 )
125
126 parser.add_argument(
127 "--num_shards",
128 default=-1,
129 type=int,
130 help="Number of shards (tarballs) to create. Used for partitioning data among workers.",
131 )
132 parser.add_argument(
133 '--max_duration',
134 default=None,
135 required=True,
136 type=float,
137 help='Maximum duration of audio clip in the dataset. By default, it is None and is required to be set.',
138 )
139 parser.add_argument(
140 '--min_duration',
141 default=None,
142 type=float,
143 help='Minimum duration of audio clip in the dataset. By default, it is None and will not filter files.',
144 )
145 parser.add_argument(
146 "--shuffle",
147 action='store_true',
148 help="Whether or not to randomly shuffle the samples in the manifest before tarring/sharding.",
149 )
150
151 parser.add_argument(
152 "--keep_files_together",
153 action='store_true',
154 help="Whether or not to keep entries from the same file (but different offsets) together when sorting before tarring/sharding.",
155 )
156
157 parser.add_argument(
158 "--sort_in_shards",
159 action='store_true',
160 help="Whether or not to sort samples inside the shards based on their duration.",
161 )
162
163 parser.add_argument(
164 "--buckets_num", type=int, default=1, help="Number of buckets to create based on duration.",
165 )
166
167 parser.add_argument("--shuffle_seed", type=int, default=None, help="Random seed for use if shuffling is enabled.")
168 parser.add_argument(
169 '--write_metadata',
170 action='store_true',
171 help=(
172 "Flag to write a blank metadata with the current call config. "
173 "Note that the metadata will not contain the number of shards, "
174 "and it must be filled out by the user."
175 ),
176 )
177 parser.add_argument(
178 "--no_shard_manifests",
179 action='store_true',
180 help="Do not write sharded manifests along with the aggregated manifest.",
181 )
182 parser.add_argument('--workers', type=int, default=1, help='Number of worker processes')
183 args = parser.parse_args()
184
185
186 @dataclass
187 class ASRTarredDatasetConfig:
188 num_shards: int = -1
189 shuffle: bool = False
190 max_duration: Optional[float] = None
191 min_duration: Optional[float] = None
192 shuffle_seed: Optional[int] = None
193 sort_in_shards: bool = True
194 shard_manifests: bool = True
195 keep_files_together: bool = False
196
197
198 @dataclass
199 class ASRTarredDatasetMetadata:
200 created_datetime: Optional[str] = None
201 version: int = 0
202 num_samples_per_shard: Optional[int] = None
203 is_concatenated_manifest: bool = False
204
205 dataset_config: Optional[ASRTarredDatasetConfig] = field(default_factory=ASRTarredDatasetConfig)
206 history: Optional[List[Any]] = field(default_factory=lambda: [])
207
208 def __post_init__(self):
209 self.created_datetime = self.get_current_datetime()
210
211 def get_current_datetime(self):
212 return datetime.now().strftime("%m-%d-%Y %H-%M-%S")
213
214 @classmethod
215 def from_config(cls, config: DictConfig):
216 obj = cls()
217 obj.__dict__.update(**config)
218 return obj
219
220 @classmethod
221 def from_file(cls, filepath: str):
222 config = OmegaConf.load(filepath)
223 return ASRTarredDatasetMetadata.from_config(config=config)
224
225
226 class ASRTarredDatasetBuilder:
227 """
228 Helper class that constructs a tarred dataset from scratch, or concatenates tarred datasets
229 together and constructs manifests for them.
230 """
231
232 def __init__(self):
233 self.config = None
234
235 def configure(self, config: ASRTarredDatasetConfig):
236 """
237 Sets the config generated from command line overrides.
238
239 Args:
240 config: ASRTarredDatasetConfig dataclass object.
241 """
242 self.config = config # type: ASRTarredDatasetConfig
243
244 if self.config.num_shards <= 0:
245 raise ValueError("`num_shards` must be > 0. Please fill in the metadata information correctly.")
246
247 def create_new_dataset(self, manifest_path: str, target_dir: str = "./tarred/", num_workers: int = 1):
248 """
249 Creates a new tarred dataset from a given manifest file.
250
251 Args:
252 manifest_path: Path to the original ASR manifest.
253 target_dir: Output directory.
254 num_workers: Integer denoting number of parallel worker processes which will write tarfiles.
255 Defaults to 1, which denotes a sequential worker process.
256
257 Output:
258 Writes tarfiles, along with the tarred dataset compatible manifest file.
259 Also preserves a record of the metadata used to construct this tarred dataset.
260 """
261 if self.config is None:
262 raise ValueError("Config has not been set. Please call `configure(config: ASRTarredDatasetConfig)`")
263
264 if manifest_path is None:
265 raise FileNotFoundError("Manifest filepath cannot be None !")
266
267 config = self.config # type: ASRTarredDatasetConfig
268
269 if not os.path.exists(target_dir):
270 os.makedirs(target_dir)
271
272 # Read the existing manifest
273 entries, total_duration, filtered_entries, filtered_duration = self._read_manifest(manifest_path, config)
274
275 if len(filtered_entries) > 0:
276 print(f"Filtered {len(filtered_entries)} files which amounts to {filtered_duration} seconds of audio.")
277 print(
278 f"After filtering, manifest has {len(entries)} files which amounts to {total_duration} seconds of audio."
279 )
280
281 if len(entries) == 0:
282 print("No tarred dataset was created as there were 0 valid samples after filtering!")
283 return
284 if config.shuffle:
285 random.seed(config.shuffle_seed)
286 print("Shuffling...")
287 if config.keep_files_together:
288 filename_entries = defaultdict(list)
289 for ent in entries:
290 filename_entries[ent["audio_filepath"]].append(ent)
291 filenames = list(filename_entries.keys())
292 random.shuffle(filenames)
293 shuffled_entries = []
294 for filename in filenames:
295 shuffled_entries += filename_entries[filename]
296 entries = shuffled_entries
297 else:
298 random.shuffle(entries)
299
300 # Create shards and updated manifest entries
301 print(f"Number of samples added : {len(entries)}")
302 print(f"Remainder: {len(entries) % config.num_shards}")
303
304 start_indices = []
305 end_indices = []
306 # Build indices
307 for i in range(config.num_shards):
308 start_idx = (len(entries) // config.num_shards) * i
309 end_idx = start_idx + (len(entries) // config.num_shards)
310 print(f"Shard {i} has entries {start_idx} ~ {end_idx}")
311 files = set()
312 for ent_id in range(start_idx, end_idx):
313 files.add(entries[ent_id]["audio_filepath"])
314 print(f"Shard {i} contains {len(files)} files")
315 if i == config.num_shards - 1:
316 # We discard in order to have the same number of entries per shard.
317 print(f"Have {len(entries) - end_idx} entries left over that will be discarded.")
318
319 start_indices.append(start_idx)
320 end_indices.append(end_idx)
321
322 manifest_folder, _ = os.path.split(manifest_path)
323
324 with Parallel(n_jobs=num_workers, verbose=config.num_shards) as parallel:
325 # Call parallel tarfile construction
326 new_entries_list = parallel(
327 delayed(self._create_shard)(entries[start_idx:end_idx], target_dir, i, manifest_folder)
328 for i, (start_idx, end_idx) in enumerate(zip(start_indices, end_indices))
329 )
330
331 if config.shard_manifests:
332 sharded_manifests_dir = target_dir + '/sharded_manifests'
333 if not os.path.exists(sharded_manifests_dir):
334 os.makedirs(sharded_manifests_dir)
335
336 for manifest in new_entries_list:
337 shard_id = manifest[0]['shard_id']
338 new_manifest_shard_path = os.path.join(sharded_manifests_dir, f'manifest_{shard_id}.json')
339 with open(new_manifest_shard_path, 'w', encoding='utf-8') as m2:
340 for entry in manifest:
341 json.dump(entry, m2)
342 m2.write('\n')
343
344 # Flatten the list of list of entries to a list of entries
345 new_entries = [sample for manifest in new_entries_list for sample in manifest]
346 del new_entries_list
347
348 print("Total number of entries in manifest :", len(new_entries))
349
350 # Write manifest
351 new_manifest_path = os.path.join(target_dir, 'tarred_audio_manifest.json')
352 with open(new_manifest_path, 'w', encoding='utf-8') as m2:
353 for entry in new_entries:
354 json.dump(entry, m2)
355 m2.write('\n')
356
357 # Write metadata (default metadata for new datasets)
358 new_metadata_path = os.path.join(target_dir, 'metadata.yaml')
359 metadata = ASRTarredDatasetMetadata()
360
361 # Update metadata
362 metadata.dataset_config = config
363 metadata.num_samples_per_shard = len(new_entries) // config.num_shards
364
365 # Write metadata
366 metadata_yaml = OmegaConf.structured(metadata)
367 OmegaConf.save(metadata_yaml, new_metadata_path, resolve=True)
368
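The shard index arithmetic in `create_new_dataset` gives every shard exactly `len(entries) // num_shards` consecutive entries and discards the remainder after the last shard, so all shards are uniformly sized. A small sketch of that computation (the `shard_bounds` helper is illustrative only):

```python
def shard_bounds(num_entries, num_shards):
    # Each shard covers [i * per_shard, (i + 1) * per_shard); the trailing
    # num_entries % num_shards entries are discarded, as in create_new_dataset.
    per_shard = num_entries // num_shards
    return [(i * per_shard, (i + 1) * per_shard) for i in range(num_shards)]

bounds = shard_bounds(10, 3)
print(bounds)              # [(0, 3), (3, 6), (6, 9)]
print(10 - bounds[-1][1])  # 1 entry discarded
```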
369 def create_concatenated_dataset(
370 self,
371 base_manifest_path: str,
372 manifest_paths: List[str],
373 metadata: ASRTarredDatasetMetadata,
374 target_dir: str = "./tarred_concatenated/",
375 num_workers: int = 1,
376 ):
377 """
378 Creates new tarfiles in order to create a concatenated dataset, whose manifest contains the data for
379 both the original dataset as well as the new data submitted in manifest paths.
380
381 Args:
382 base_manifest_path: Path to the manifest file which contains the information for the original
383 tarred dataset (with flattened paths).
384 manifest_paths: List of one or more paths to manifest files that will be concatenated with above
385 base tarred dataset.
386 metadata: ASRTarredDatasetMetadata dataclass instance with overrides from command line.
387 target_dir: Output directory
388
389 Output:
390 Writes tarfiles with indices mapping to a "concatenated" tarred dataset,
391 along with the tarred dataset compatible manifest file which includes information
392 about all the datasets that comprise the concatenated dataset.
393
394 Also preserves a record of the metadata used to construct this tarred dataset.
395 """
396 if not os.path.exists(target_dir):
397 os.makedirs(target_dir)
398
399 if base_manifest_path is None:
400 raise FileNotFoundError("Base manifest filepath cannot be None !")
401
402 if manifest_paths is None or len(manifest_paths) == 0:
403 raise FileNotFoundError("List of additional manifest filepaths cannot be None !")
404
405 config = ASRTarredDatasetConfig(**(metadata.dataset_config))
406
407 # Read the existing manifest (no filtering here)
408 base_entries, _, _, _ = self._read_manifest(base_manifest_path, config)
409 print(f"Read base manifest containing {len(base_entries)} samples.")
410
411 # Precompute number of samples per shard
412 if metadata.num_samples_per_shard is None:
413 num_samples_per_shard = len(base_entries) // config.num_shards
414 else:
415 num_samples_per_shard = metadata.num_samples_per_shard
416
417 print("Number of samples per shard :", num_samples_per_shard)
418
419 # Compute min and max duration and update config (if no metadata passed)
420 print(f"Selected max duration : {config.max_duration}")
421 print(f"Selected min duration : {config.min_duration}")
422
423 entries = []
424 for new_manifest_idx in range(len(manifest_paths)):
425 new_entries, total_duration, filtered_new_entries, filtered_duration = self._read_manifest(
426 manifest_paths[new_manifest_idx], config
427 )
428
429 if len(filtered_new_entries) > 0:
430 print(
431 f"Filtered {len(filtered_new_entries)} files which amounts to {filtered_duration:0.2f}"
432 f" seconds of audio from manifest {manifest_paths[new_manifest_idx]}."
433 )
434 print(
435 f"After filtering, manifest has {len(new_entries)} files which amounts to {total_duration} seconds of audio."
436 )
437
438 entries.extend(new_entries)
439
440 if len(entries) == 0:
441 print("No tarred dataset was created as there were 0 valid samples after filtering!")
442 return
443
444 if config.shuffle:
445 random.seed(config.shuffle_seed)
446 print("Shuffling...")
447 random.shuffle(entries)
448
449 # Drop last section of samples that cannot be added onto a chunk
450 drop_count = len(entries) % num_samples_per_shard
451 total_new_entries = len(entries)
452 entries = entries[: len(entries) - drop_count] # safe when drop_count == 0
453
454 print(
455 f"Dropping {drop_count} samples from total new samples {total_new_entries} since they cannot "
456 f"be added into a uniformly sized chunk."
457 )
458
459 # Create shards and updated manifest entries
460 num_added_shards = len(entries) // num_samples_per_shard
461
462 print(f"Number of samples in base dataset : {len(base_entries)}")
463 print(f"Number of samples in additional datasets : {len(entries)}")
464 print(f"Number of added shards : {num_added_shards}")
465 print(f"Remainder: {len(entries) % num_samples_per_shard}")
466
467 start_indices = []
468 end_indices = []
469 shard_indices = []
470 for i in range(num_added_shards):
471 start_idx = (len(entries) // num_added_shards) * i
472 end_idx = start_idx + (len(entries) // num_added_shards)
473 shard_idx = i + config.num_shards
474 print(f"Shard {shard_idx} has entries {start_idx + len(base_entries)} ~ {end_idx + len(base_entries)}")
475
476 start_indices.append(start_idx)
477 end_indices.append(end_idx)
478 shard_indices.append(shard_idx)
479
480 manifest_folder, _ = os.path.split(base_manifest_path)
481
482 with Parallel(n_jobs=num_workers, verbose=num_added_shards) as parallel:
483 # Call parallel tarfile construction
484 new_entries_list = parallel(
485 delayed(self._create_shard)(entries[start_idx:end_idx], target_dir, shard_idx, manifest_folder)
486 for i, (start_idx, end_idx, shard_idx) in enumerate(zip(start_indices, end_indices, shard_indices))
487 )
488
489 if config.shard_manifests:
490 sharded_manifests_dir = target_dir + '/sharded_manifests'
491 if not os.path.exists(sharded_manifests_dir):
492 os.makedirs(sharded_manifests_dir)
493
494 for manifest in new_entries_list:
495 shard_id = manifest[0]['shard_id']
496 new_manifest_shard_path = os.path.join(sharded_manifests_dir, f'manifest_{shard_id}.json')
497 with open(new_manifest_shard_path, 'w', encoding='utf-8') as m2:
498 for entry in manifest:
499 json.dump(entry, m2)
500 m2.write('\n')
501
502 # Flatten the list of list of entries to a list of entries
503 new_entries = [sample for manifest in new_entries_list for sample in manifest]
504 del new_entries_list
505
506 # Write manifest
507 if metadata is None:
508 new_version = 1 # start with `1`, where `0` indicates the base manifest + dataset
509 else:
510 new_version = metadata.version + 1
511
512 print("Total number of entries in manifest :", len(base_entries) + len(new_entries))
513
514 new_manifest_path = os.path.join(target_dir, f'tarred_audio_manifest_version_{new_version}.json')
515 with open(new_manifest_path, 'w', encoding='utf-8') as m2:
516 # First write all the entries of base manifest
517 for entry in base_entries:
518 json.dump(entry, m2)
519 m2.write('\n')
520
521 # Finally write the new entries
522 for entry in new_entries:
523 json.dump(entry, m2)
524 m2.write('\n')
525
526 # Preserve historical metadata
527 base_metadata = metadata
528
529 # Write metadata (updated metadata for concatenated datasets)
530 new_metadata_path = os.path.join(target_dir, f'metadata_version_{new_version}.yaml')
531 metadata = ASRTarredDatasetMetadata()
532
533 # Update config
534 config.num_shards = config.num_shards + num_added_shards
535
536 # Update metadata
537 metadata.version = new_version
538 metadata.dataset_config = config
539 metadata.num_samples_per_shard = num_samples_per_shard
540 metadata.is_concatenated_manifest = True
541 metadata.created_datetime = metadata.get_current_datetime()
542
543 # Attach history
544 current_metadata = OmegaConf.structured(base_metadata.history)
545 metadata.history = current_metadata
546
547 # Write metadata
548 metadata_yaml = OmegaConf.structured(metadata)
549 OmegaConf.save(metadata_yaml, new_metadata_path, resolve=True)
550
551 def _read_manifest(self, manifest_path: str, config: ASRTarredDatasetConfig):
552 """Read and filters data from the manifest"""
553 # Read the existing manifest
554 entries = []
555 total_duration = 0.0
556 filtered_entries = []
557 filtered_duration = 0.0
558 with open(manifest_path, 'r', encoding='utf-8') as m:
559 for line in m:
560 entry = json.loads(line)
561 if (config.max_duration is None or entry['duration'] < config.max_duration) and (
562 config.min_duration is None or entry['duration'] >= config.min_duration
563 ):
564 entries.append(entry)
565 total_duration += entry["duration"]
566 else:
567 filtered_entries.append(entry)
568 filtered_duration += entry['duration']
569
570 return entries, total_duration, filtered_entries, filtered_duration
571
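`_read_manifest` keeps an entry when `min_duration <= duration < max_duration` (the lower bound is inclusive, the upper bound exclusive; a `None` bound disables that side of the filter). The following self-contained sketch reproduces that filtering on an in-memory manifest (the `filter_manifest` helper is illustrative, not the NeMo API):

```python
import io
import json

def filter_manifest(lines, min_duration=None, max_duration=None):
    # Same predicate as _read_manifest: keep if duration in [min, max).
    kept, filtered = [], []
    for line in lines:
        entry = json.loads(line)
        ok = (max_duration is None or entry["duration"] < max_duration) and (
            min_duration is None or entry["duration"] >= min_duration
        )
        (kept if ok else filtered).append(entry)
    return kept, filtered

manifest = io.StringIO(
    '{"audio_filepath": "a.wav", "duration": 0.5}\n'
    '{"audio_filepath": "b.wav", "duration": 20.0}\n'
)
kept, filtered = filter_manifest(manifest, min_duration=0.1, max_duration=16.7)
print(len(kept), len(filtered))  # 1 1
```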
572 def _create_shard(self, entries, target_dir, shard_id, manifest_folder):
573 """Creates a tarball containing the audio files from `entries`.
574 """
575 if self.config.sort_in_shards:
576 entries.sort(key=lambda x: x["duration"], reverse=False)
577
578 new_entries = []
579 tar = tarfile.open(os.path.join(target_dir, f'audio_{shard_id}.tar'), mode='w', dereference=True)
580
581 count = dict()
582 for entry in entries:
583 # We squash the filename since we do not preserve directory structure of audio files in the tarball.
584 if os.path.exists(entry["audio_filepath"]):
585 audio_filepath = entry["audio_filepath"]
586 else:
587 audio_filepath = os.path.join(manifest_folder, entry["audio_filepath"])
588 if not os.path.exists(audio_filepath):
589 raise FileNotFoundError(f"Could not find {entry['audio_filepath']}!")
590
591 base, ext = os.path.splitext(audio_filepath)
592 base = base.replace('/', '_')
593 # Need the following replacement as long as WebDataset splits on first period
594 base = base.replace('.', '_')
595 squashed_filename = f'{base}{ext}'
596 if squashed_filename not in count:
597 tar.add(audio_filepath, arcname=squashed_filename)
598 to_write = squashed_filename
599 count[squashed_filename] = 1
600 else:
601 to_write = base + "-sub" + str(count[squashed_filename]) + ext
602 count[squashed_filename] += 1
603
604 new_entry = {
605 'audio_filepath': to_write,
606 'duration': entry['duration'],
607 'shard_id': shard_id, # Keep shard ID for recordkeeping
608 }
609
610 if 'label' in entry:
611 new_entry['label'] = entry['label']
612
613 if 'text' in entry:
614 new_entry['text'] = entry['text']
615
616 if 'offset' in entry:
617 new_entry['offset'] = entry['offset']
618
619 if 'lang' in entry:
620 new_entry['lang'] = entry['lang']
621
622 new_entries.append(new_entry)
623
624 tar.close()
625 return new_entries
626
627 @classmethod
628 def setup_history(cls, base_metadata: ASRTarredDatasetMetadata, history: List[Any]):
629 if 'history' in base_metadata.keys():
630 for history_val in base_metadata.history:
631 cls.setup_history(history_val, history)
632
633 if base_metadata is not None:
634 metadata_copy = copy.deepcopy(base_metadata)
635 with open_dict(metadata_copy):
636 metadata_copy.pop('history', None)
637 history.append(metadata_copy)
638
639
640 def main():
641 if args.buckets_num > 1:
642 bucket_length = (args.max_duration - args.min_duration) / float(args.buckets_num)
643 for i in range(args.buckets_num):
644 min_duration = args.min_duration + i * bucket_length
645 max_duration = min_duration + bucket_length
646 if i == args.buckets_num - 1:
647 # add a small number to cover the samples with exactly duration of max_duration in the last bucket.
648 max_duration += 1e-5
649 target_dir = os.path.join(args.target_dir, f"bucket{i+1}")
650 print(f"Creating bucket {i+1} with min_duration={min_duration} and max_duration={max_duration} ...")
651 print(f"Results are being saved at: {target_dir}.")
652 create_tar_datasets(min_duration=min_duration, max_duration=max_duration, target_dir=target_dir)
653 print(f"Bucket {i+1} is created.")
654 else:
655 create_tar_datasets(min_duration=args.min_duration, max_duration=args.max_duration, target_dir=args.target_dir)
656
657
658 def create_tar_datasets(min_duration: float, max_duration: float, target_dir: str):
659 builder = ASRTarredDatasetBuilder()
660
661 shard_manifests = False if args.no_shard_manifests else True
662
663 if args.write_metadata:
664 metadata = ASRTarredDatasetMetadata()
665 dataset_cfg = ASRTarredDatasetConfig(
666 num_shards=args.num_shards,
667 shuffle=args.shuffle,
668 max_duration=max_duration,
669 min_duration=min_duration,
670 shuffle_seed=args.shuffle_seed,
671 sort_in_shards=args.sort_in_shards,
672 shard_manifests=shard_manifests,
673 keep_files_together=args.keep_files_together,
674 )
675 metadata.dataset_config = dataset_cfg
676
677 output_path = os.path.join(target_dir, 'default_metadata.yaml')
678 OmegaConf.save(metadata, output_path, resolve=True)
679 print(f"Default metadata written to {output_path}")
680 exit(0)
681
682 if args.concat_manifest_paths is None or len(args.concat_manifest_paths) == 0:
683 print("Creating new tarred dataset ...")
684
685 # Create a tarred dataset from scratch
686 config = ASRTarredDatasetConfig(
687 num_shards=args.num_shards,
688 shuffle=args.shuffle,
689 max_duration=max_duration,
690 min_duration=min_duration,
691 shuffle_seed=args.shuffle_seed,
692 sort_in_shards=args.sort_in_shards,
693 shard_manifests=shard_manifests,
694 keep_files_together=args.keep_files_together,
695 )
696 builder.configure(config)
697 builder.create_new_dataset(manifest_path=args.manifest_path, target_dir=target_dir, num_workers=args.workers)
698
699 else:
700 if args.buckets_num > 1:
701 raise ValueError("Concatenation feature does not support buckets_num > 1.")
702 print("Concatenating multiple tarred datasets ...")
703
704 # Implicitly update config from base details
705 if args.metadata_path is not None:
706 metadata = ASRTarredDatasetMetadata.from_file(args.metadata_path)
707 else:
708 raise ValueError("`metadata` yaml file path must be provided!")
709
710 # Preserve history
711 history = []
712 builder.setup_history(OmegaConf.structured(metadata), history)
713 metadata.history = history
714
715 # Add command line overrides (everything other than num_shards)
716 metadata.dataset_config.max_duration = max_duration
717 metadata.dataset_config.min_duration = min_duration
718 metadata.dataset_config.shuffle = args.shuffle
719 metadata.dataset_config.shuffle_seed = args.shuffle_seed
720 metadata.dataset_config.sort_in_shards = args.sort_in_shards
721 metadata.dataset_config.shard_manifests = shard_manifests
722
723 builder.configure(metadata.dataset_config)
724
725 # Concatenate a tarred dataset onto a previous one
726 builder.create_concatenated_dataset(
727 base_manifest_path=args.manifest_path,
728 manifest_paths=args.concat_manifest_paths,
729 metadata=metadata,
730 target_dir=target_dir,
731 num_workers=args.workers,
732 )
733
734 if DALI_INDEX_SCRIPT_AVAILABLE and dali_index.INDEX_CREATOR_AVAILABLE:
735 print("Constructing DALI Tarfile Index - ", target_dir)
736 index_config = dali_index.DALITarredIndexConfig(tar_dir=target_dir, workers=args.workers)
737 dali_index.main(index_config)
738
739
740 if __name__ == "__main__":
741 main()
742
[end of scripts/speech_recognition/convert_to_tarred_audio_dataset.py]
[start of tools/nemo_forced_aligner/align.py]
1 # Copyright (c) 2023, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import copy
16 import math
17 import os
18 from dataclasses import dataclass, field, is_dataclass
19 from pathlib import Path
20 from typing import List, Optional
21
22 import torch
23 from omegaconf import OmegaConf
24 from utils.data_prep import (
25 add_t_start_end_to_utt_obj,
26 get_batch_starts_ends,
27 get_batch_variables,
28 get_manifest_lines_batch,
29 is_entry_in_all_lines,
30 is_entry_in_any_lines,
31 )
32 from utils.make_ass_files import make_ass_files
33 from utils.make_ctm_files import make_ctm_files
34 from utils.make_output_manifest import write_manifest_out_line
35 from utils.viterbi_decoding import viterbi_decoding
36
37 from nemo.collections.asr.models.ctc_models import EncDecCTCModel
38 from nemo.collections.asr.models.hybrid_rnnt_ctc_models import EncDecHybridRNNTCTCModel
39 from nemo.collections.asr.parts.utils.streaming_utils import FrameBatchASR
40 from nemo.collections.asr.parts.utils.transcribe_utils import setup_model
41 from nemo.core.config import hydra_runner
42 from nemo.utils import logging
43
44 """
45 Align the utterances in manifest_filepath.
46 Results are saved in ctm files in output_dir.
47
48 Arguments:
49 pretrained_name: string specifying the name of a CTC NeMo ASR model which will be automatically downloaded
50 from NGC and used for generating the log-probs which we will use to do alignment.
51 Note: NFA can only use CTC models (not Transducer models) at the moment.
52 model_path: string specifying the local filepath to a CTC NeMo ASR model which will be used to generate the
53 log-probs which we will use to do alignment.
54 Note: NFA can only use CTC models (not Transducer models) at the moment.
55 Note: if a model_path is provided, it will override the pretrained_name.
56 manifest_filepath: filepath to the manifest of the data you want to align,
57 containing 'audio_filepath' and 'text' fields.
58 output_dir: the folder where output CTM files and new JSON manifest will be saved.
59 align_using_pred_text: if True, will transcribe the audio using the specified model and then use that transcription
60 as the reference text for the forced alignment.
61 transcribe_device: None, or a string specifying the device that will be used for generating log-probs (i.e. "transcribing").
62 The string needs to be in a format recognized by torch.device(). If None, NFA will set it to 'cuda' if it is available
63 (otherwise will set it to 'cpu').
64 viterbi_device: None, or string specifying the device that will be used for doing Viterbi decoding.
65 The string needs to be in a format recognized by torch.device(). If None, NFA will set it to 'cuda' if it is available
66 (otherwise will set it to 'cpu').
67 batch_size: int specifying batch size that will be used for generating log-probs and doing Viterbi decoding.
68 use_local_attention: boolean flag specifying whether to try to use local attention for the ASR Model (will only
69 work if the ASR Model is a Conformer model). If local attention is used, we will set the local attention context
70 size to [64,64].
71 additional_segment_grouping_separator: an optional string used to separate the text into smaller segments.
72 If this is not specified, then the whole text will be treated as a single segment.
73 remove_blank_tokens_from_ctm: a boolean denoting whether to remove <blank> tokens from token-level output CTMs.
74 audio_filepath_parts_in_utt_id: int specifying how many of the 'parts' of the audio_filepath
75 we will use (starting from the final part of the audio_filepath) to determine the
76 utt_id that will be used in the CTM files. Note also that any spaces that are present in the audio_filepath
77 will be replaced with dashes, so as not to change the number of space-separated elements in the
78 CTM files.
79 e.g. if audio_filepath is "/a/b/c/d/e 1.wav" and audio_filepath_parts_in_utt_id is 1 => utt_id will be "e-1"
80 e.g. if audio_filepath is "/a/b/c/d/e 1.wav" and audio_filepath_parts_in_utt_id is 2 => utt_id will be "d_e-1"
81 e.g. if audio_filepath is "/a/b/c/d/e 1.wav" and audio_filepath_parts_in_utt_id is 3 => utt_id will be "c_d_e-1"
82 use_buffered_infer: False; if set True, use buffered streaming to get the logits for alignment
83 This flag is useful when aligning large audio file.
84 However, currently the chunk streaming inference does not support batch inference,
85 which means even if you set batch_size > 1, it will only infer one by one instead of doing
86 the whole batch inference together.
87 chunk_len_in_secs: float chunk length in seconds
88 total_buffer_in_secs: float Length of buffer (chunk + left and right padding) in seconds
89 chunk_batch_size: int batch size for buffered chunk inference,
90 which will cut one audio into segments and do inference on chunk_batch_size segments at a time
91
92 simulate_cache_aware_streaming: False; if set True, use cache aware streaming to get the logits for alignment
93
94 save_output_file_formats: List of strings specifying what type of output files to save (default: ["ctm", "ass"])
95 ctm_file_config: CTMFileConfig to specify the configuration of the output CTM files
96 ass_file_config: ASSFileConfig to specify the configuration of the output ASS files
97 """
98
99
100 @dataclass
101 class CTMFileConfig:
102 remove_blank_tokens: bool = False
103 # minimum duration (in seconds) for timestamps in the CTM. If any line in the CTM has a
104 # duration lower than this, it will be enlarged from the middle outwards until it
105 # meets the minimum_timestamp_duration, or reaches the beginning or end of the audio file.
106 # Note that this may cause timestamps to overlap.
107 minimum_timestamp_duration: float = 0
108
109
110 @dataclass
111 class ASSFileConfig:
112 fontsize: int = 20
113 vertical_alignment: str = "center"
114 # if resegment_text_to_fill_space is True, the ASS files will use new segments
115 # such that each segment will not take up more than (approximately) max_lines_per_segment
116 # when the ASS file is applied to a video
117 resegment_text_to_fill_space: bool = False
118 max_lines_per_segment: int = 2
119 text_already_spoken_rgb: List[int] = field(default_factory=lambda: [49, 46, 61]) # dark gray
120 text_being_spoken_rgb: List[int] = field(default_factory=lambda: [57, 171, 9]) # dark green
121 text_not_yet_spoken_rgb: List[int] = field(default_factory=lambda: [194, 193, 199]) # light gray
122
123
124 @dataclass
125 class AlignmentConfig:
126 # Required configs
127 pretrained_name: Optional[str] = None
128 model_path: Optional[str] = None
129 manifest_filepath: Optional[str] = None
130 output_dir: Optional[str] = None
131
132 # General configs
133 align_using_pred_text: bool = False
134 transcribe_device: Optional[str] = None
135 viterbi_device: Optional[str] = None
136 batch_size: int = 1
137 use_local_attention: bool = True
138 additional_segment_grouping_separator: Optional[str] = None
139 audio_filepath_parts_in_utt_id: int = 1
140
141 # Buffered chunked streaming configs
142 use_buffered_chunked_streaming: bool = False
143 chunk_len_in_secs: float = 1.6
144 total_buffer_in_secs: float = 4.0
145 chunk_batch_size: int = 32
146
147 # Cache aware streaming configs
148 simulate_cache_aware_streaming: Optional[bool] = False
149
150 # Output file configs
151 save_output_file_formats: List[str] = field(default_factory=lambda: ["ctm", "ass"])
152 ctm_file_config: CTMFileConfig = CTMFileConfig()
153 ass_file_config: ASSFileConfig = ASSFileConfig()
154
155
156 @hydra_runner(config_name="AlignmentConfig", schema=AlignmentConfig)
157 def main(cfg: AlignmentConfig):
158
159 logging.info(f'Hydra config: {OmegaConf.to_yaml(cfg)}')
160
161 if is_dataclass(cfg):
162 cfg = OmegaConf.structured(cfg)
163
164 # Validate config
165 if cfg.model_path is None and cfg.pretrained_name is None:
166 raise ValueError("Both cfg.model_path and cfg.pretrained_name cannot be None")
167
168 if cfg.model_path is not None and cfg.pretrained_name is not None:
169 raise ValueError("One of cfg.model_path and cfg.pretrained_name must be None")
170
171 if cfg.manifest_filepath is None:
172 raise ValueError("cfg.manifest_filepath must be specified")
173
174 if cfg.output_dir is None:
175 raise ValueError("cfg.output_dir must be specified")
176
177 if cfg.batch_size < 1:
178 raise ValueError("cfg.batch_size cannot be zero or a negative number")
179
180 if cfg.additional_segment_grouping_separator == "" or cfg.additional_segment_grouping_separator == " ":
181 raise ValueError("cfg.additional_grouping_separator cannot be empty string or space character")
182
183 if cfg.ctm_file_config.minimum_timestamp_duration < 0:
184 raise ValueError("cfg.minimum_timestamp_duration cannot be a negative number")
185
186 if cfg.ass_file_config.vertical_alignment not in ["top", "center", "bottom"]:
187 raise ValueError("cfg.ass_file_config.vertical_alignment must be one of 'top', 'center' or 'bottom'")
188
189 for rgb_list in [
190 cfg.ass_file_config.text_already_spoken_rgb,
191 cfg.ass_file_config.text_being_spoken_rgb,
192 cfg.ass_file_config.text_not_yet_spoken_rgb,
193 ]:
194 if len(rgb_list) != 3:
195 raise ValueError(
196 "cfg.ass_file_config.text_already_spoken_rgb,"
197 " cfg.ass_file_config.text_being_spoken_rgb,"
198 " and cfg.ass_file_config.text_not_yet_spoken_rgb all need to contain"
199 " exactly 3 elements."
200 )
201
202 # Validate manifest contents
203 if not is_entry_in_all_lines(cfg.manifest_filepath, "audio_filepath"):
204 raise RuntimeError(
205 "At least one line in cfg.manifest_filepath does not contain an 'audio_filepath' entry. "
206 "All lines must contain an 'audio_filepath' entry."
207 )
208
209 if cfg.align_using_pred_text:
210 if is_entry_in_any_lines(cfg.manifest_filepath, "pred_text"):
211 raise RuntimeError(
212 "Cannot specify cfg.align_using_pred_text=True when the manifest at cfg.manifest_filepath "
213 "contains 'pred_text' entries. This is because the audio will be transcribed and may produce "
214 "a different 'pred_text'. This may cause confusion."
215 )
216 else:
217 if not is_entry_in_all_lines(cfg.manifest_filepath, "text"):
218 raise RuntimeError(
219 "At least one line in cfg.manifest_filepath does not contain a 'text' entry. "
220 "NFA requires all lines to contain a 'text' entry when cfg.align_using_pred_text=False."
221 )
222
223 # init devices
224 if cfg.transcribe_device is None:
225 transcribe_device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
226 else:
227 transcribe_device = torch.device(cfg.transcribe_device)
228 logging.info(f"Device to be used for transcription step (`transcribe_device`) is {transcribe_device}")
229
230 if cfg.viterbi_device is None:
231 viterbi_device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
232 else:
233 viterbi_device = torch.device(cfg.viterbi_device)
234 logging.info(f"Device to be used for viterbi step (`viterbi_device`) is {viterbi_device}")
235
236 if transcribe_device.type == 'cuda' or viterbi_device.type == 'cuda':
237 logging.warning(
238 'One or both of transcribe_device and viterbi_device are GPUs. If you run into OOM errors '
239 'it may help to change both devices to be the CPU.'
240 )
241
242 # load model
243 model, _ = setup_model(cfg, transcribe_device)
244 model.eval()
245
246 if isinstance(model, EncDecHybridRNNTCTCModel):
247 model.change_decoding_strategy(decoder_type="ctc")
248
249 if cfg.use_local_attention:
250 logging.info(
251 "Flag use_local_attention is set to True => will try to use local attention for model if it allows it"
252 )
253 model.change_attention_model(self_attention_model="rel_pos_local_attn", att_context_size=[64, 64])
254
255 if not (isinstance(model, EncDecCTCModel) or isinstance(model, EncDecHybridRNNTCTCModel)):
256 raise NotImplementedError(
257 f"Model is not an instance of NeMo EncDecCTCModel or EncDecHybridRNNTCTCModel."
258 " Currently only instances of these models are supported"
259 )
260
261 if cfg.ctm_file_config.minimum_timestamp_duration > 0:
262 logging.warning(
263 f"cfg.ctm_file_config.minimum_timestamp_duration has been set to {cfg.ctm_file_config.minimum_timestamp_duration} seconds. "
264 "This may cause the alignments for some tokens/words/additional segments to be overlapping."
265 )
266
267 buffered_chunk_params = {}
268 if cfg.use_buffered_chunked_streaming:
269 model_cfg = copy.deepcopy(model._cfg)
270
271 OmegaConf.set_struct(model_cfg.preprocessor, False)
272 # some changes for streaming scenario
273 model_cfg.preprocessor.dither = 0.0
274 model_cfg.preprocessor.pad_to = 0
275
276 if model_cfg.preprocessor.normalize != "per_feature":
277 logging.error(
278 "Only EncDecCTCModelBPE models trained with per_feature normalization are supported currently"
279 )
280 # Disable config overwriting
281 OmegaConf.set_struct(model_cfg.preprocessor, True)
282
283 feature_stride = model_cfg.preprocessor['window_stride']
284 model_stride_in_secs = feature_stride * cfg.model_downsample_factor
285 total_buffer = cfg.total_buffer_in_secs
286 chunk_len = float(cfg.chunk_len_in_secs)
287 tokens_per_chunk = math.ceil(chunk_len / model_stride_in_secs)
288 mid_delay = math.ceil((chunk_len + (total_buffer - chunk_len) / 2) / model_stride_in_secs)
289 logging.info(f"tokens_per_chunk is {tokens_per_chunk}, mid_delay is {mid_delay}")
290
291 model = FrameBatchASR(
292 asr_model=model,
293 frame_len=chunk_len,
294 total_buffer=cfg.total_buffer_in_secs,
295 batch_size=cfg.chunk_batch_size,
296 )
297 buffered_chunk_params = {
298 "delay": mid_delay,
299 "model_stride_in_secs": model_stride_in_secs,
300 "tokens_per_chunk": tokens_per_chunk,
301 }
302 # get start and end line IDs of batches
303 starts, ends = get_batch_starts_ends(cfg.manifest_filepath, cfg.batch_size)
304
305 # init output_timestep_duration = None and we will calculate and update it during the first batch
306 output_timestep_duration = None
307
308 # init f_manifest_out
309 os.makedirs(cfg.output_dir, exist_ok=True)
310 tgt_manifest_name = str(Path(cfg.manifest_filepath).stem) + "_with_output_file_paths.json"
311 tgt_manifest_filepath = str(Path(cfg.output_dir) / tgt_manifest_name)
312 f_manifest_out = open(tgt_manifest_filepath, 'w')
313
314 # get alignment and save in CTM batch-by-batch
315 for start, end in zip(starts, ends):
316 manifest_lines_batch = get_manifest_lines_batch(cfg.manifest_filepath, start, end)
317
318 (log_probs_batch, y_batch, T_batch, U_batch, utt_obj_batch, output_timestep_duration,) = get_batch_variables(
319 manifest_lines_batch,
320 model,
321 cfg.additional_segment_grouping_separator,
322 cfg.align_using_pred_text,
323 cfg.audio_filepath_parts_in_utt_id,
324 output_timestep_duration,
325 cfg.simulate_cache_aware_streaming,
326 cfg.use_buffered_chunked_streaming,
327 buffered_chunk_params,
328 )
329
330 alignments_batch = viterbi_decoding(log_probs_batch, y_batch, T_batch, U_batch, viterbi_device)
331
332 for utt_obj, alignment_utt in zip(utt_obj_batch, alignments_batch):
333
334 utt_obj = add_t_start_end_to_utt_obj(utt_obj, alignment_utt, output_timestep_duration)
335
336 if "ctm" in cfg.save_output_file_formats:
337 utt_obj = make_ctm_files(utt_obj, cfg.output_dir, cfg.ctm_file_config,)
338
339 if "ass" in cfg.save_output_file_formats:
340 utt_obj = make_ass_files(utt_obj, cfg.output_dir, cfg.ass_file_config)
341
342 write_manifest_out_line(
343 f_manifest_out, utt_obj,
344 )
345
346 f_manifest_out.close()
347
348 return None
349
350
351 if __name__ == "__main__":
352 main()
353
[end of tools/nemo_forced_aligner/align.py]
</code>
Here is an example of a patch file. It consists of changes to the code base. It specifies the file names, the line numbers of each change, and the removed and added lines. A single patch file can contain changes to multiple files.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
I need you to solve the provided issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the format shown above.
Respond below:
|
NVIDIA/NeMo
|
15db83ec4a65e649d83b61d7a4a58d911586e853
|
Ubuntu 22.04 Python 3.11 [asr]: multiple errors `dataclasses ValueError: mutable default * for field * is not allowed: use default_factory`
**Describe the bug**
After installing latest stable `1.19.1` from PyPI, or the latest current commit with `[asr]` extras, I'm getting this error from `hydra-core==1.2.0` when trying to import `nemo.collections.asr`:
```
Traceback (most recent call last):
File "/home/alex/T7/src/speaker_verification/nemo_speaker/test.py", line 5, in <module>
import nemo.collections.asr as nemo_asr
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/__init__.py", line 15, in <module>
from nemo.collections.asr import data, losses, models, modules
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/losses/__init__.py", line 15, in <module>
from nemo.collections.asr.losses.angularloss import AngularSoftmaxLoss
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/losses/angularloss.py", line 18, in <module>
from nemo.core.classes import Loss, Typing, typecheck
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/core/__init__.py", line 16, in <module>
from nemo.core.classes import *
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/core/classes/__init__.py", line 16, in <module>
import hydra
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/__init__.py", line 5, in <module>
from hydra import utils
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/utils.py", line 8, in <module>
import hydra._internal.instantiate._instantiate2
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 12, in <module>
from hydra._internal.utils import _locate
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/_internal/utils.py", line 18, in <module>
from hydra.core.utils import get_valid_filename, validate_config_path
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/core/utils.py", line 20, in <module>
from hydra.core.hydra_config import HydraConfig
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/core/hydra_config.py", line 6, in <module>
from hydra.conf import HydraConf
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/conf/__init__.py", line 46, in <module>
class JobConf:
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/hydra/conf/__init__.py", line 75, in JobConf
@dataclass
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1230, in dataclass
return wrap(cls)
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1220, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 958, in _process_class
cls_fields.append(_get_field(cls, name, type, kw_only))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 815, in _get_field
raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'hydra.conf.JobConf.JobConfig.OverrideDirname'> for field override_dirname is not allowed: use default_factory
```
If I then manually upgrade `hydra-core` to the current latest (`1.3.2`), I get similar errors from `nemo-toolkit`:
```
Traceback (most recent call last):
File "/home/alex/T7/src/speaker_verification/nemo_speaker/test.py", line 5, in <module>
import nemo.collections.asr as nemo_asr
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/__init__.py", line 15, in <module>
from nemo.collections.asr import data, losses, models, modules
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/losses/__init__.py", line 16, in <module>
from nemo.collections.asr.losses.audio_losses import SDRLoss
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/losses/audio_losses.py", line 21, in <module>
from nemo.collections.asr.parts.preprocessing.features import make_seq_mask_like
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/__init__.py", line 16, in <module>
from nemo.collections.asr.parts.preprocessing.features import FeaturizerFactory, FilterbankFeatures, WaveformFeaturizer
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/features.py", line 44, in <module>
from nemo.collections.asr.parts.preprocessing.perturb import AudioAugmentor
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/perturb.py", line 50, in <module>
from nemo.collections.common.parts.preprocessing import collections, parsers
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/common/__init__.py", line 16, in <module>
from nemo.collections.common import data, losses, parts, tokenizers
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/common/parts/__init__.py", line 15, in <module>
from nemo.collections.common.parts.adapter_modules import LinearAdapter, LinearAdapterConfig
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/common/parts/adapter_modules.py", line 147, in <module>
@dataclass
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1230, in dataclass
return wrap(cls)
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1220, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 958, in _process_class
cls_fields.append(_get_field(cls, name, type, kw_only))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 815, in _get_field
raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'nemo.core.classes.mixins.adapter_mixin_strategies.ResidualAddAdapterStrategyConfig'> for field adapter_strategy is not allowed: use default_factory
```
It's easy to fix with a patch like this:
```
--- nemo/collections/common/parts/adapter_modules.py.orig 2023-08-04 13:55:53.464534800 +0200
+++ nemo/collections/common/parts/adapter_modules.py 2023-08-04 14:05:45.579537700 +0200
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass, is_dataclass
+from dataclasses import dataclass, is_dataclass, field
from typing import Any, Optional
from hydra.utils import instantiate
@@ -151,5 +151,5 @@ class LinearAdapterConfig:
activation: str = 'swish'
norm_position: str = 'pre'
dropout: float = 0.0
- adapter_strategy: Optional[Any] = adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
+ adapter_strategy: Optional[Any] = field(default_factory=adapter_mixin_strategies.ResidualAddAdapterStrategyConfig)
_target_: str = "{0}.{1}".format(LinearAdapter.__module__, LinearAdapter.__name__)
```
However then another error of the kind comes up:
```
Traceback (most recent call last):
File "/home/alex/T7/src/speaker_verification/nemo_speaker/test.py", line 5, in <module>
import nemo.collections.asr as nemo_asr
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/__init__.py", line 15, in <module>
from nemo.collections.asr import data, losses, models, modules
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/models/__init__.py", line 18, in <module>
from nemo.collections.asr.models.clustering_diarizer import ClusteringDiarizer
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/models/clustering_diarizer.py", line 29, in <module>
from nemo.collections.asr.metrics.der import score_labels
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/metrics/der.py", line 24, in <module>
from nemo.collections.asr.metrics.wer import word_error_rate
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/metrics/wer.py", line 27, in <module>
from nemo.collections.asr.parts.submodules import ctc_beam_decoding, ctc_greedy_decoding
File "/home/alex/T7/src/speaker_verification/nemo_speaker/venv11/lib/python3.11/site-packages/nemo/collections/asr/parts/submodules/ctc_beam_decoding.py", line 593, in <module>
@dataclass
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1230, in dataclass
return wrap(cls)
^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 1220, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 958, in _process_class
cls_fields.append(_get_field(cls, name, type, kw_only))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/dataclasses.py", line 815, in _get_field
raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'nemo.collections.asr.parts.submodules.ctc_beam_decoding.FlashlightConfig'> for field flashlight_cfg is not allowed: use default_factory
```
This can also be fixed the same way, but all in all, these mutable-default issues appear to be pretty common in the code base.
Looks like NeMo isn't Python 3.11 ready, at least the `asr` collection.
**Steps/Code to reproduce bug**
With Python 3.11
1. Install `nemo-toolkit[asr]` either 1.19.1, or current HEAD from git (b498d438fc4c35ebf364a9a1c5cd3e29a2c0fe50)
2. Run `import nemo.collections.asr`
**Expected behavior**
Import `nemo.collections.asr` without errors
**Environment overview (please complete the following information)**
- Environment location: Bare-metal
- Method of NeMo install: `pip install nemo-toolkit==1.19.1` or from source b498d438fc4c35ebf364a9a1c5cd3e29a2c0fe50
**Environment details**
- Ubuntu 22.04
- PyTorch version 2.0.1
- Python version 3.11.4
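For context, the behavior change can be reproduced outside NeMo with a minimal dataclass (the `Inner`/`Outer` names below are hypothetical, not NeMo classes). On Python 3.11+ the class decoration itself raises `ValueError`; earlier versions accept it:

```python
import sys
from dataclasses import dataclass


@dataclass
class Inner:
    x: int = 0  # a regular dataclass: eq=True, frozen=False -> unhashable


try:
    @dataclass
    class Outer:
        inner: Inner = Inner()  # mutable (unhashable) default
    raised = False
except ValueError:
    # Python 3.11+ rejects any unhashable default at class creation time
    raised = True
```
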
Seems to be similar to #7002
Interesting. The fix is easy but needs to be applied to basically every single place that has this constructor for our adapter configs. Let me see if I can update it. But no guarantees on how soon fixes will come in main.
Looking forward to it @titu1994! Thanks!
@titu1994 I was looking to use NeMo speaker diarization with Python 3.11 and hit this dataclass issue. I patched everything involved in the specific code paths I needed: https://github.com/lmnt-com/NeMo/commit/d89acf9f0152e43dee29d7d1c4667ee34c26ffd7
I was using the neural diarizer as described in https://github.com/NVIDIA/NeMo/tree/main/examples/speaker_tasks/diarization
I'd be happy to upstream this if it's helpful.
I haven't checked whether this is backwards compatible for earlier python/dataclass versions, do you know?
For reference, what led me to this issue, though it's duplicative to the above discussion:
[Similar error](https://github.com/huggingface/datasets/issues/5230)
[StackOverflow solution](https://stackoverflow.com/questions/53632152/why-cant-dataclasses-have-mutable-defaults-in-their-class-attributes-declaratio)
@shaper Thanks for sharing. For brevity, you don't really need a `lambda` when you don't pass any init parameters, like this:
```
field(default_factory=lambda: ConfidenceConfig())
```
You can just do
```
field(default_factory=ConfidenceConfig)
```
It's only needed when you do pass parameter(s), like
```
field(default_factory=lambda: beam_decode.BeamRNNTInferConfig(beam_size=4))
```
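Putting both patterns together, here is a minimal sketch (the `ConfidenceConfig`/`DecodingConfig` names are stand-ins, not NeMo's actual classes). Each instantiation calls the factory, so every config object gets its own fresh default:

```python
from dataclasses import dataclass, field


@dataclass
class ConfidenceConfig:
    threshold: float = 0.5


@dataclass
class DecodingConfig:
    # no init parameters -> the class itself can serve as the factory
    confidence: ConfidenceConfig = field(default_factory=ConfidenceConfig)
    # parameters needed -> wrap the call in a lambda
    tuned: ConfidenceConfig = field(
        default_factory=lambda: ConfidenceConfig(threshold=0.9)
    )


a, b = DecodingConfig(), DecodingConfig()
```
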
I have the same issue. @tango4j suggested using one of the models from https://huggingface.co/spaces/hf-audio/open_asr_leaderboard, but I cannot import nemo.collections.asr:
```
Traceback (most recent call last):
File "/opt/pycharm-2022.3.3/plugins/python/helpers/pycharm/docrunner.py", line 138, in __run
exec(compile(example.source, filename, "single",
File "<doctest NeMoASR[2]>", line 1, in <module>
NeMoASR().apply_asr(file)
^^^^^^^^^
File "/home/cbj/python/cbj/cbj/transcribe/pretrained.py", line 504, in __init__
import nemo.collections.asr as nemo_asr
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/__init__.py", line 15, in <module>
from nemo.collections.asr import data, losses, models, modules
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/losses/__init__.py", line 16, in <module>
from nemo.collections.asr.losses.audio_losses import SDRLoss
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/losses/audio_losses.py", line 21, in <module>
from nemo.collections.asr.parts.preprocessing.features import make_seq_mask_like
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/__init__.py", line 16, in <module>
from nemo.collections.asr.parts.preprocessing.features import FeaturizerFactory, FilterbankFeatures, WaveformFeaturizer
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/features.py", line 44, in <module>
from nemo.collections.asr.parts.preprocessing.perturb import AudioAugmentor
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/perturb.py", line 50, in <module>
from nemo.collections.common.parts.preprocessing import collections, parsers
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/common/__init__.py", line 16, in <module>
from nemo.collections.common import data, losses, parts, tokenizers
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/common/parts/__init__.py", line 15, in <module>
from nemo.collections.common.parts.adapter_modules import LinearAdapter, LinearAdapterConfig
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/common/parts/adapter_modules.py", line 147, in <module>
@dataclass
^^^^^^^^^
File "/opt/py/2023/lib/python3.11/dataclasses.py", line 1230, in dataclass
return wrap(cls)
^^^^^^^^^
File "/opt/py/2023/lib/python3.11/dataclasses.py", line 1220, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/py/2023/lib/python3.11/dataclasses.py", line 958, in _process_class
cls_fields.append(_get_field(cls, name, type, kw_only))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/py/2023/lib/python3.11/dataclasses.py", line 815, in _get_field
raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'nemo.core.classes.mixins.adapter_mixin_strategies.ResidualAddAdapterStrategyConfig'> for field adapter_strategy is not allowed: use default_factory
```
For documentation (I had to search in the provided links):
Mutable defaults were never allowed in dataclasses, but Python 3.11 tightened the check: instead of rejecting only a few known types (`dict`, `list`, `set`), it now treats any unhashable default as mutable and rejects it.
An alternative to `default_factory` would be to use frozen dataclasses, but I don't know whether the configs in this code base are used as mutable objects or not.
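A minimal sketch of that frozen alternative (the `FrozenStrategy`/`AdapterConfig` names are hypothetical): frozen instances are hashable, so they pass the 3.11 check, at the cost of sharing one immutable default across all instances.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class FrozenStrategy:
    scale: float = 1.0  # frozen=True + eq=True -> __hash__ is generated


@dataclass
class AdapterConfig:
    # hashable default -> allowed even on Python 3.11+,
    # but the single instance is shared by every AdapterConfig
    strategy: FrozenStrategy = FrozenStrategy()


x, y = AdapterConfig(), AdapterConfig()
```
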
You need to update to NeMo 1.20, omegaconf did a fix that should resolve this
I have NeMo 1.20.0.
I installed NeMo yesterday with `pip install nemo_toolkit` and `pip install pytorch_lightning`,
so it should be the newest PyPI version.
```
$ pip show nemo_toolkit
Name: nemo-toolkit
Version: 1.20.0
Summary: NeMo - a toolkit for Conversational AI
Home-page: https://github.com/nvidia/nemo
Author: NVIDIA
Author-email: nemo-toolkit@nvidia.com
License: Apache2
Location: /opt/py/2023/lib/python3.11/site-packages
Requires: huggingface-hub, numba, numpy, onnx, python-dateutil, ruamel.yaml, scikit-learn, setuptools, tensorboard, text-unidecode, torch, tqdm, wget, wrapt
Required-by:
$ pip show omegaconf
Name: omegaconf
Version: 2.3.0
Summary: A flexible configuration library
Home-page: https://github.com/omry/omegaconf
Author: Omry Yadan
Author-email: omry@yadan.net
License:
Location: /home/cbj/.local/lib/python3.11/site-packages
Requires: antlr4-python3-runtime, PyYAML
Required-by: hydra-core
$ python -c "import nemo.collections.asr as nemo_asr"
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/__init__.py", line 15, in <module>
from nemo.collections.asr import data, losses, models, modules
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/losses/__init__.py", line 16, in <module>
from nemo.collections.asr.losses.audio_losses import SDRLoss
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/losses/audio_losses.py", line 21, in <module>
from nemo.collections.asr.parts.preprocessing.features import make_seq_mask_like
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/__init__.py", line 16, in <module>
from nemo.collections.asr.parts.preprocessing.features import FeaturizerFactory, FilterbankFeatures, WaveformFeaturizer
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/features.py", line 44, in <module>
from nemo.collections.asr.parts.preprocessing.perturb import AudioAugmentor
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/asr/parts/preprocessing/perturb.py", line 50, in <module>
from nemo.collections.common.parts.preprocessing import collections, parsers
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/common/__init__.py", line 16, in <module>
from nemo.collections.common import data, losses, parts, tokenizers
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/common/parts/__init__.py", line 15, in <module>
from nemo.collections.common.parts.adapter_modules import LinearAdapter, LinearAdapterConfig
File "/opt/py/2023/lib/python3.11/site-packages/nemo/collections/common/parts/adapter_modules.py", line 147, in <module>
@dataclass
^^^^^^^^^
File "/opt/py/2023/lib/python3.11/dataclasses.py", line 1230, in dataclass
return wrap(cls)
^^^^^^^^^
File "/opt/py/2023/lib/python3.11/dataclasses.py", line 1220, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/py/2023/lib/python3.11/dataclasses.py", line 958, in _process_class
cls_fields.append(_get_field(cls, name, type, kw_only))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/py/2023/lib/python3.11/dataclasses.py", line 815, in _get_field
raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'nemo.core.classes.mixins.adapter_mixin_strategies.ResidualAddAdapterStrategyConfig'> for field adapter_strategy is not allowed: use default_factory
```
Hmm ok I'll take a look
2023-10-03T19:14:38Z
<patch>
diff --git a/examples/asr/experimental/k2/align_speech_parallel.py b/examples/asr/experimental/k2/align_speech_parallel.py
--- a/examples/asr/experimental/k2/align_speech_parallel.py
+++ b/examples/asr/experimental/k2/align_speech_parallel.py
@@ -74,7 +74,7 @@
import os
-from dataclasses import dataclass, is_dataclass
+from dataclasses import dataclass, field, is_dataclass
from typing import Optional
import pytorch_lightning as ptl
@@ -94,12 +94,14 @@
@dataclass
class ParallelAlignmentConfig:
model: Optional[str] = None # name
- predict_ds: ASRDatasetConfig = ASRDatasetConfig(return_sample_id=True, num_workers=4)
- aligner_args: K2AlignerWrapperModelConfig = K2AlignerWrapperModelConfig()
+ predict_ds: ASRDatasetConfig = field(
+ default_factory=lambda: ASRDatasetConfig(return_sample_id=True, num_workers=4)
+ )
+ aligner_args: K2AlignerWrapperModelConfig = field(default_factory=lambda: K2AlignerWrapperModelConfig())
output_path: str = MISSING
model_stride: int = 8
- trainer: TrainerConfig = TrainerConfig(gpus=-1, accelerator="ddp")
+ trainer: TrainerConfig = field(default_factory=lambda: TrainerConfig(gpus=-1, accelerator="ddp"))
# there arguments will be ignored
return_predictions: bool = False
diff --git a/nemo/collections/asr/metrics/rnnt_wer.py b/nemo/collections/asr/metrics/rnnt_wer.py
--- a/nemo/collections/asr/metrics/rnnt_wer.py
+++ b/nemo/collections/asr/metrics/rnnt_wer.py
@@ -15,7 +15,7 @@
import copy
import re
from abc import abstractmethod
-from dataclasses import dataclass, is_dataclass
+from dataclasses import dataclass, field, is_dataclass
from typing import Callable, Dict, List, Optional, Tuple, Union
import editdistance
@@ -1299,7 +1299,7 @@ class RNNTDecodingConfig:
preserve_alignments: Optional[bool] = None
# confidence config
- confidence_cfg: ConfidenceConfig = ConfidenceConfig()
+ confidence_cfg: ConfidenceConfig = field(default_factory=lambda: ConfidenceConfig())
# RNNT Joint fused batch size
fused_batch_size: Optional[int] = None
@@ -1317,10 +1317,10 @@ class RNNTDecodingConfig:
rnnt_timestamp_type: str = "all" # can be char, word or all for both
# greedy decoding config
- greedy: greedy_decode.GreedyRNNTInferConfig = greedy_decode.GreedyRNNTInferConfig()
+ greedy: greedy_decode.GreedyRNNTInferConfig = field(default_factory=lambda: greedy_decode.GreedyRNNTInferConfig())
# beam decoding config
- beam: beam_decode.BeamRNNTInferConfig = beam_decode.BeamRNNTInferConfig(beam_size=4)
+ beam: beam_decode.BeamRNNTInferConfig = field(default_factory=lambda: beam_decode.BeamRNNTInferConfig(beam_size=4))
# can be used to change temperature for decoding
temperature: float = 1.0
diff --git a/nemo/collections/asr/metrics/wer.py b/nemo/collections/asr/metrics/wer.py
--- a/nemo/collections/asr/metrics/wer.py
+++ b/nemo/collections/asr/metrics/wer.py
@@ -14,7 +14,7 @@
import re
from abc import abstractmethod
-from dataclasses import dataclass, is_dataclass
+from dataclasses import dataclass, field, is_dataclass
from typing import Callable, Dict, List, Optional, Tuple, Union
import editdistance
@@ -1297,13 +1297,17 @@ class CTCDecodingConfig:
batch_dim_index: int = 0
# greedy decoding config
- greedy: ctc_greedy_decoding.GreedyCTCInferConfig = ctc_greedy_decoding.GreedyCTCInferConfig()
+ greedy: ctc_greedy_decoding.GreedyCTCInferConfig = field(
+ default_factory=lambda: ctc_greedy_decoding.GreedyCTCInferConfig()
+ )
# beam decoding config
- beam: ctc_beam_decoding.BeamCTCInferConfig = ctc_beam_decoding.BeamCTCInferConfig(beam_size=4)
+ beam: ctc_beam_decoding.BeamCTCInferConfig = field(
+ default_factory=lambda: ctc_beam_decoding.BeamCTCInferConfig(beam_size=4)
+ )
# confidence config
- confidence_cfg: ConfidenceConfig = ConfidenceConfig()
+ confidence_cfg: ConfidenceConfig = field(default_factory=lambda: ConfidenceConfig())
# can be used to change temperature for decoding
temperature: float = 1.0
diff --git a/nemo/collections/asr/models/configs/aligner_config.py b/nemo/collections/asr/models/configs/aligner_config.py
--- a/nemo/collections/asr/models/configs/aligner_config.py
+++ b/nemo/collections/asr/models/configs/aligner_config.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from nemo.collections.asr.parts.k2.classes import GraphModuleConfig
@@ -35,10 +35,10 @@ class AlignerWrapperModelConfig:
word_output: bool = True
cpu_decoding: bool = False
decode_batch_size: int = 0
- ctc_cfg: AlignerCTCConfig = AlignerCTCConfig()
- rnnt_cfg: AlignerRNNTConfig = AlignerRNNTConfig()
+ ctc_cfg: AlignerCTCConfig = field(default_factory=lambda: AlignerCTCConfig())
+ rnnt_cfg: AlignerRNNTConfig = field(default_factory=lambda: AlignerRNNTConfig())
@dataclass
class K2AlignerWrapperModelConfig(AlignerWrapperModelConfig):
- decoder_module_cfg: GraphModuleConfig = GraphModuleConfig()
+ decoder_module_cfg: GraphModuleConfig = field(default_factory=lambda: GraphModuleConfig())
diff --git a/nemo/collections/asr/models/configs/asr_models_config.py b/nemo/collections/asr/models/configs/asr_models_config.py
--- a/nemo/collections/asr/models/configs/asr_models_config.py
+++ b/nemo/collections/asr/models/configs/asr_models_config.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional
from omegaconf import MISSING
@@ -74,24 +74,32 @@ class EncDecCTCConfig(model_cfg.ModelConfig):
labels: List[str] = MISSING
# Dataset configs
- train_ds: ASRDatasetConfig = ASRDatasetConfig(manifest_filepath=None, shuffle=True)
- validation_ds: ASRDatasetConfig = ASRDatasetConfig(manifest_filepath=None, shuffle=False)
- test_ds: ASRDatasetConfig = ASRDatasetConfig(manifest_filepath=None, shuffle=False)
+ train_ds: ASRDatasetConfig = field(default_factory=lambda: ASRDatasetConfig(manifest_filepath=None, shuffle=True))
+ validation_ds: ASRDatasetConfig = field(
+ default_factory=lambda: ASRDatasetConfig(manifest_filepath=None, shuffle=False)
+ )
+ test_ds: ASRDatasetConfig = field(default_factory=lambda: ASRDatasetConfig(manifest_filepath=None, shuffle=False))
# Optimizer / Scheduler config
- optim: Optional[model_cfg.OptimConfig] = model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
+ optim: Optional[model_cfg.OptimConfig] = field(
+ default_factory=lambda: model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
+ )
# Model component configs
- preprocessor: AudioToMelSpectrogramPreprocessorConfig = AudioToMelSpectrogramPreprocessorConfig()
- spec_augment: Optional[SpectrogramAugmentationConfig] = SpectrogramAugmentationConfig()
- encoder: ConvASREncoderConfig = ConvASREncoderConfig()
- decoder: ConvASRDecoderConfig = ConvASRDecoderConfig()
- decoding: CTCDecodingConfig = CTCDecodingConfig()
+ preprocessor: AudioToMelSpectrogramPreprocessorConfig = field(
+ default_factory=lambda: AudioToMelSpectrogramPreprocessorConfig()
+ )
+ spec_augment: Optional[SpectrogramAugmentationConfig] = field(
+ default_factory=lambda: SpectrogramAugmentationConfig()
+ )
+ encoder: ConvASREncoderConfig = field(default_factory=lambda: ConvASREncoderConfig())
+ decoder: ConvASRDecoderConfig = field(default_factory=lambda: ConvASRDecoderConfig())
+ decoding: CTCDecodingConfig = field(default_factory=lambda: CTCDecodingConfig())
@dataclass
class EncDecCTCModelConfig(model_cfg.NemoConfig):
- model: EncDecCTCConfig = EncDecCTCConfig()
+ model: EncDecCTCConfig = field(default_factory=lambda: EncDecCTCConfig())
@dataclass
diff --git a/nemo/collections/asr/models/configs/classification_models_config.py b/nemo/collections/asr/models/configs/classification_models_config.py
--- a/nemo/collections/asr/models/configs/classification_models_config.py
+++ b/nemo/collections/asr/models/configs/classification_models_config.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional
from omegaconf import MISSING
@@ -72,30 +72,40 @@ class EncDecClassificationConfig(model_cfg.ModelConfig):
timesteps: int = MISSING
# Dataset configs
- train_ds: EncDecClassificationDatasetConfig = EncDecClassificationDatasetConfig(
- manifest_filepath=None, shuffle=True, trim_silence=False
+ train_ds: EncDecClassificationDatasetConfig = field(
+ default_factory=lambda: EncDecClassificationDatasetConfig(
+ manifest_filepath=None, shuffle=True, trim_silence=False
+ )
)
- validation_ds: EncDecClassificationDatasetConfig = EncDecClassificationDatasetConfig(
- manifest_filepath=None, shuffle=False
+ validation_ds: EncDecClassificationDatasetConfig = field(
+ default_factory=lambda: EncDecClassificationDatasetConfig(manifest_filepath=None, shuffle=False)
)
- test_ds: EncDecClassificationDatasetConfig = EncDecClassificationDatasetConfig(
- manifest_filepath=None, shuffle=False
+ test_ds: EncDecClassificationDatasetConfig = field(
+ default_factory=lambda: EncDecClassificationDatasetConfig(manifest_filepath=None, shuffle=False)
)
# Optimizer / Scheduler config
- optim: Optional[model_cfg.OptimConfig] = model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
+ optim: Optional[model_cfg.OptimConfig] = field(
+ default_factory=lambda: model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
+ )
# Model component configs
- preprocessor: AudioToMFCCPreprocessorConfig = AudioToMFCCPreprocessorConfig()
- spec_augment: Optional[SpectrogramAugmentationConfig] = SpectrogramAugmentationConfig()
- crop_or_pad_augment: Optional[CropOrPadSpectrogramAugmentationConfig] = CropOrPadSpectrogramAugmentationConfig(
- audio_length=timesteps
+ preprocessor: AudioToMFCCPreprocessorConfig = field(default_factory=lambda: AudioToMFCCPreprocessorConfig())
+ spec_augment: Optional[SpectrogramAugmentationConfig] = field(
+ default_factory=lambda: SpectrogramAugmentationConfig()
+ )
+ crop_or_pad_augment: Optional[CropOrPadSpectrogramAugmentationConfig] = field(
+ default_factory=lambda: CropOrPadSpectrogramAugmentationConfig(audio_length=-1)
)
- encoder: ConvASREncoderConfig = ConvASREncoderConfig()
- decoder: ConvASRDecoderClassificationConfig = ConvASRDecoderClassificationConfig()
+ encoder: ConvASREncoderConfig = field(default_factory=lambda: ConvASREncoderConfig())
+ decoder: ConvASRDecoderClassificationConfig = field(default_factory=lambda: ConvASRDecoderClassificationConfig())
+
+ def __post_init__(self):
+ if self.crop_or_pad_augment is not None:
+ self.crop_or_pad_augment.audio_length = self.timesteps
@dataclass
class EncDecClassificationModelConfig(model_cfg.NemoConfig):
- model: EncDecClassificationConfig = EncDecClassificationConfig()
+ model: EncDecClassificationConfig = field(default_factory=lambda: EncDecClassificationConfig())
diff --git a/nemo/collections/asr/models/configs/diarizer_config.py b/nemo/collections/asr/models/configs/diarizer_config.py
--- a/nemo/collections/asr/models/configs/diarizer_config.py
+++ b/nemo/collections/asr/models/configs/diarizer_config.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import asdict, dataclass
+from dataclasses import asdict, dataclass, field
from typing import Any, Dict, Optional, Tuple, Union
@@ -78,9 +78,9 @@ class ASRDiarizerParams(DiarizerComponentConfig):
@dataclass
class ASRDiarizerConfig(DiarizerComponentConfig):
model_path: Optional[str] = "stt_en_conformer_ctc_large"
- parameters: ASRDiarizerParams = ASRDiarizerParams()
- ctc_decoder_parameters: ASRDiarizerCTCDecoderParams = ASRDiarizerCTCDecoderParams()
- realigning_lm_parameters: ASRRealigningLMParams = ASRRealigningLMParams()
+ parameters: ASRDiarizerParams = field(default_factory=lambda: ASRDiarizerParams())
+ ctc_decoder_parameters: ASRDiarizerCTCDecoderParams = field(default_factory=lambda: ASRDiarizerCTCDecoderParams())
+ realigning_lm_parameters: ASRRealigningLMParams = field(default_factory=lambda: ASRRealigningLMParams())
@dataclass
@@ -102,7 +102,7 @@ class VADParams(DiarizerComponentConfig):
class VADConfig(DiarizerComponentConfig):
model_path: str = "vad_multilingual_marblenet" # .nemo local model path or pretrained VAD model name
external_vad_manifest: Optional[str] = None
- parameters: VADParams = VADParams()
+ parameters: VADParams = field(default_factory=lambda: VADParams())
@dataclass
@@ -121,7 +121,7 @@ class SpeakerEmbeddingsParams(DiarizerComponentConfig):
class SpeakerEmbeddingsConfig(DiarizerComponentConfig):
# .nemo local model path or pretrained model name (titanet_large, ecapa_tdnn or speakerverification_speakernet)
model_path: Optional[str] = None
- parameters: SpeakerEmbeddingsParams = SpeakerEmbeddingsParams()
+ parameters: SpeakerEmbeddingsParams = field(default_factory=lambda: SpeakerEmbeddingsParams())
@dataclass
@@ -142,7 +142,7 @@ class ClusteringParams(DiarizerComponentConfig):
@dataclass
class ClusteringConfig(DiarizerComponentConfig):
- parameters: ClusteringParams = ClusteringParams()
+ parameters: ClusteringParams = field(default_factory=lambda: ClusteringParams())
@dataclass
@@ -166,7 +166,7 @@ class MSDDParams(DiarizerComponentConfig):
@dataclass
class MSDDConfig(DiarizerComponentConfig):
model_path: Optional[str] = "diar_msdd_telephonic"
- parameters: MSDDParams = MSDDParams()
+ parameters: MSDDParams = field(default_factory=lambda: MSDDParams())
@dataclass
@@ -176,16 +176,16 @@ class DiarizerConfig(DiarizerComponentConfig):
oracle_vad: bool = False # If True, uses RTTM files provided in the manifest file to get VAD timestamps
collar: float = 0.25 # Collar value for scoring
ignore_overlap: bool = True # Consider or ignore overlap segments while scoring
- vad: VADConfig = VADConfig()
- speaker_embeddings: SpeakerEmbeddingsConfig = SpeakerEmbeddingsConfig()
- clustering: ClusteringConfig = ClusteringConfig()
- msdd_model: MSDDConfig = MSDDConfig()
- asr: ASRDiarizerConfig = ASRDiarizerConfig()
+ vad: VADConfig = field(default_factory=lambda: VADConfig())
+ speaker_embeddings: SpeakerEmbeddingsConfig = field(default_factory=lambda: SpeakerEmbeddingsConfig())
+ clustering: ClusteringConfig = field(default_factory=lambda: ClusteringConfig())
+ msdd_model: MSDDConfig = field(default_factory=lambda: MSDDConfig())
+ asr: ASRDiarizerConfig = field(default_factory=lambda: ASRDiarizerConfig())
@dataclass
class NeuralDiarizerInferenceConfig(DiarizerComponentConfig):
- diarizer: DiarizerConfig = DiarizerConfig()
+ diarizer: DiarizerConfig = field(default_factory=lambda: DiarizerConfig())
device: str = "cpu"
verbose: bool = False
batch_size: int = 64
diff --git a/nemo/collections/asr/models/configs/k2_sequence_models_config.py b/nemo/collections/asr/models/configs/k2_sequence_models_config.py
--- a/nemo/collections/asr/models/configs/k2_sequence_models_config.py
+++ b/nemo/collections/asr/models/configs/k2_sequence_models_config.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from nemo.collections.asr.models.configs.asr_models_config import EncDecCTCConfig
from nemo.collections.asr.parts.k2.classes import GraphModuleConfig as BackendConfig
@@ -26,14 +26,14 @@ class GraphModuleConfig:
split_batch_size: int = 0
dec_type: str = "topo"
transcribe_training: bool = True
- backend_cfg: BackendConfig = BackendConfig()
+ backend_cfg: BackendConfig = field(default_factory=lambda: BackendConfig())
@dataclass
class EncDecK2SeqConfig(EncDecCTCConfig):
- graph_module_cfg: GraphModuleConfig = GraphModuleConfig()
+ graph_module_cfg: GraphModuleConfig = field(default_factory=lambda: GraphModuleConfig())
@dataclass
class EncDecK2SeqModelConfig(NemoConfig):
- model: EncDecK2SeqConfig = EncDecK2SeqConfig()
+ model: EncDecK2SeqConfig = field(default_factory=lambda: EncDecK2SeqConfig())
diff --git a/nemo/collections/asr/models/configs/matchboxnet_config.py b/nemo/collections/asr/models/configs/matchboxnet_config.py
--- a/nemo/collections/asr/models/configs/matchboxnet_config.py
+++ b/nemo/collections/asr/models/configs/matchboxnet_config.py
@@ -107,30 +107,38 @@ class MatchboxNetModelConfig(clf_cfg.EncDecClassificationConfig):
labels: List[str] = MISSING
# Dataset configs
- train_ds: clf_cfg.EncDecClassificationDatasetConfig = clf_cfg.EncDecClassificationDatasetConfig(
- manifest_filepath=None, shuffle=True, trim_silence=False
+ train_ds: clf_cfg.EncDecClassificationDatasetConfig = field(
+ default_factory=lambda: clf_cfg.EncDecClassificationDatasetConfig(
+ manifest_filepath=None, shuffle=True, trim_silence=False
+ )
)
- validation_ds: clf_cfg.EncDecClassificationDatasetConfig = clf_cfg.EncDecClassificationDatasetConfig(
- manifest_filepath=None, shuffle=False
+ validation_ds: clf_cfg.EncDecClassificationDatasetConfig = field(
+ default_factory=lambda: clf_cfg.EncDecClassificationDatasetConfig(manifest_filepath=None, shuffle=False)
)
- test_ds: clf_cfg.EncDecClassificationDatasetConfig = clf_cfg.EncDecClassificationDatasetConfig(
- manifest_filepath=None, shuffle=False
+ test_ds: clf_cfg.EncDecClassificationDatasetConfig = field(
+ default_factory=lambda: clf_cfg.EncDecClassificationDatasetConfig(manifest_filepath=None, shuffle=False)
)
# Optimizer / Scheduler config
- optim: Optional[model_cfg.OptimConfig] = model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
+ optim: Optional[model_cfg.OptimConfig] = field(
+ default_factory=lambda: model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
+ )
# Model general component configs
- preprocessor: AudioToMFCCPreprocessorConfig = AudioToMFCCPreprocessorConfig(window_size=0.025)
- spec_augment: Optional[SpectrogramAugmentationConfig] = SpectrogramAugmentationConfig(
- freq_masks=2, time_masks=2, freq_width=15, time_width=25, rect_masks=5, rect_time=25, rect_freq=15
+ preprocessor: AudioToMFCCPreprocessorConfig = field(
+ default_factory=lambda: AudioToMFCCPreprocessorConfig(window_size=0.025)
+ )
+ spec_augment: Optional[SpectrogramAugmentationConfig] = field(
+ default_factory=lambda: SpectrogramAugmentationConfig(
+ freq_masks=2, time_masks=2, freq_width=15, time_width=25, rect_masks=5, rect_time=25, rect_freq=15
+ )
)
- crop_or_pad_augment: Optional[CropOrPadSpectrogramAugmentationConfig] = CropOrPadSpectrogramAugmentationConfig(
- audio_length=128
+ crop_or_pad_augment: Optional[CropOrPadSpectrogramAugmentationConfig] = field(
+ default_factory=lambda: CropOrPadSpectrogramAugmentationConfig(audio_length=128)
)
- encoder: ConvASREncoderConfig = ConvASREncoderConfig(activation="relu")
- decoder: ConvASRDecoderClassificationConfig = ConvASRDecoderClassificationConfig()
+ encoder: ConvASREncoderConfig = field(default_factory=lambda: ConvASREncoderConfig(activation="relu"))
+ decoder: ConvASRDecoderClassificationConfig = field(default_factory=lambda: ConvASRDecoderClassificationConfig())
@dataclass
diff --git a/nemo/collections/asr/models/configs/quartznet_config.py b/nemo/collections/asr/models/configs/quartznet_config.py
--- a/nemo/collections/asr/models/configs/quartznet_config.py
+++ b/nemo/collections/asr/models/configs/quartznet_config.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import Any, Callable, List, Optional
from omegaconf import MISSING
@@ -174,20 +174,30 @@ class JasperModelConfig(ctc_cfg.EncDecCTCConfig):
labels: List[str] = MISSING
# Dataset configs
- train_ds: ctc_cfg.ASRDatasetConfig = ctc_cfg.ASRDatasetConfig(
- manifest_filepath=None, shuffle=True, trim_silence=True
+ train_ds: ctc_cfg.ASRDatasetConfig = field(
+ default_factory=lambda: ctc_cfg.ASRDatasetConfig(manifest_filepath=None, shuffle=True, trim_silence=True)
+ )
+ validation_ds: ctc_cfg.ASRDatasetConfig = field(
+ default_factory=lambda: ctc_cfg.ASRDatasetConfig(manifest_filepath=None, shuffle=False)
+ )
+ test_ds: ctc_cfg.ASRDatasetConfig = field(
+ default_factory=lambda: ctc_cfg.ASRDatasetConfig(manifest_filepath=None, shuffle=False)
)
- validation_ds: ctc_cfg.ASRDatasetConfig = ctc_cfg.ASRDatasetConfig(manifest_filepath=None, shuffle=False)
- test_ds: ctc_cfg.ASRDatasetConfig = ctc_cfg.ASRDatasetConfig(manifest_filepath=None, shuffle=False)
# Optimizer / Scheduler config
- optim: Optional[model_cfg.OptimConfig] = model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
+ optim: Optional[model_cfg.OptimConfig] = field(
+ default_factory=lambda: model_cfg.OptimConfig(sched=model_cfg.SchedConfig())
+ )
# Model general component configs
- preprocessor: AudioToMelSpectrogramPreprocessorConfig = AudioToMelSpectrogramPreprocessorConfig()
- spec_augment: Optional[SpectrogramAugmentationConfig] = SpectrogramAugmentationConfig()
- encoder: ConvASREncoderConfig = ConvASREncoderConfig(activation="relu")
- decoder: ConvASRDecoderConfig = ConvASRDecoderConfig()
+ preprocessor: AudioToMelSpectrogramPreprocessorConfig = field(
+ default_factory=lambda: AudioToMelSpectrogramPreprocessorConfig()
+ )
+ spec_augment: Optional[SpectrogramAugmentationConfig] = field(
+ default_factory=lambda: SpectrogramAugmentationConfig()
+ )
+ encoder: ConvASREncoderConfig = field(default_factory=lambda: ConvASREncoderConfig(activation="relu"))
+ decoder: ConvASRDecoderConfig = field(default_factory=lambda: ConvASRDecoderConfig())
@dataclass
diff --git a/nemo/collections/asr/modules/audio_preprocessing.py b/nemo/collections/asr/modules/audio_preprocessing.py
--- a/nemo/collections/asr/modules/audio_preprocessing.py
+++ b/nemo/collections/asr/modules/audio_preprocessing.py
@@ -634,6 +634,12 @@ def __init__(self, audio_length):
super(CropOrPadSpectrogramAugmentation, self).__init__()
self.audio_length = audio_length
+ if self.audio_length < 0:
+ raise ValueError(
+ 'audio_length must be non-negative. If using a dataclass with OmegaConf, '
+ 'please call OmegaConf.to_object(cfg) to call appropriate __post_init__ methods.'
+ )
+
@typecheck()
@torch.no_grad()
def forward(self, input_signal, length):
diff --git a/nemo/collections/asr/parts/k2/classes.py b/nemo/collections/asr/parts/k2/classes.py
--- a/nemo/collections/asr/parts/k2/classes.py
+++ b/nemo/collections/asr/parts/k2/classes.py
@@ -13,7 +13,7 @@
# limitations under the License.
from abc import ABC
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import Any, Optional, Tuple
import torch
@@ -43,7 +43,7 @@ class GraphModuleConfig:
topo_with_self_loops: bool = True
token_lm: Optional[Any] = None
intersect_pruned: bool = False
- intersect_conf: GraphIntersectDenseConfig = GraphIntersectDenseConfig()
+ intersect_conf: GraphIntersectDenseConfig = field(default_factory=lambda: GraphIntersectDenseConfig())
boost_coeff: float = 0.0
predictor_window_size: int = 0
predictor_step_size: int = 1
diff --git a/nemo/collections/asr/parts/submodules/adapters/multi_head_attention_adapter_module.py b/nemo/collections/asr/parts/submodules/adapters/multi_head_attention_adapter_module.py
--- a/nemo/collections/asr/parts/submodules/adapters/multi_head_attention_adapter_module.py
+++ b/nemo/collections/asr/parts/submodules/adapters/multi_head_attention_adapter_module.py
@@ -13,7 +13,7 @@
# limitations under the License.
import math
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import Any, Optional
import torch
@@ -183,7 +183,7 @@ class MultiHeadAttentionAdapterConfig:
n_feat: int
dropout_rate: float = 0.0
proj_dim: Optional[int] = None
- adapter_strategy: Optional[Any] = MHAResidualAddAdapterStrategyConfig()
+ adapter_strategy: Optional[Any] = field(default_factory=lambda: MHAResidualAddAdapterStrategyConfig())
_target_: str = "{0}.{1}".format(MultiHeadAttentionAdapter.__module__, MultiHeadAttentionAdapter.__name__)
@@ -287,7 +287,7 @@ class RelPositionMultiHeadAttentionAdapterConfig:
n_feat: int
dropout_rate: float = 0.0
proj_dim: Optional[int] = None
- adapter_strategy: Optional[Any] = MHAResidualAddAdapterStrategyConfig()
+ adapter_strategy: Optional[Any] = field(default_factory=lambda: MHAResidualAddAdapterStrategyConfig())
_target_: str = "{0}.{1}".format(
RelPositionMultiHeadAttentionAdapter.__module__, RelPositionMultiHeadAttentionAdapter.__name__
)
@@ -336,7 +336,9 @@ class PositionalEncodingAdapterConfig:
d_model: int
max_len: int = 5000
xscale: float = 1.0
- adapter_strategy: Optional[Any] = adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
+ adapter_strategy: Optional[Any] = field(
+ default_factory=lambda: adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
+ )
_target_: str = "{0}.{1}".format(PositionalEncodingAdapter.__module__, PositionalEncodingAdapter.__name__)
@@ -378,5 +380,7 @@ class RelPositionalEncodingAdapterConfig:
d_model: int
max_len: int = 5000
xscale: float = 1.0
- adapter_strategy: Optional[Any] = adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
+ adapter_strategy: Optional[Any] = field(
+ default_factory=lambda: adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
+ )
_target_: str = "{0}.{1}".format(RelPositionalEncodingAdapter.__module__, RelPositionalEncodingAdapter.__name__)
diff --git a/nemo/collections/asr/parts/submodules/ctc_beam_decoding.py b/nemo/collections/asr/parts/submodules/ctc_beam_decoding.py
--- a/nemo/collections/asr/parts/submodules/ctc_beam_decoding.py
+++ b/nemo/collections/asr/parts/submodules/ctc_beam_decoding.py
@@ -14,7 +14,7 @@
import math
import os
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import List, Optional, Tuple, Union
import torch
@@ -602,5 +602,5 @@ class BeamCTCInferConfig:
beam_beta: float = 0.0
kenlm_path: Optional[str] = None
- flashlight_cfg: Optional[FlashlightConfig] = FlashlightConfig()
- pyctcdecode_cfg: Optional[PyCTCDecodeConfig] = PyCTCDecodeConfig()
+ flashlight_cfg: Optional[FlashlightConfig] = field(default_factory=lambda: FlashlightConfig())
+ pyctcdecode_cfg: Optional[PyCTCDecodeConfig] = field(default_factory=lambda: PyCTCDecodeConfig())
diff --git a/nemo/collections/asr/parts/submodules/ctc_greedy_decoding.py b/nemo/collections/asr/parts/submodules/ctc_greedy_decoding.py
--- a/nemo/collections/asr/parts/submodules/ctc_greedy_decoding.py
+++ b/nemo/collections/asr/parts/submodules/ctc_greedy_decoding.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import List, Optional
import torch
@@ -253,7 +253,7 @@ class GreedyCTCInferConfig:
preserve_alignments: bool = False
compute_timestamps: bool = False
preserve_frame_confidence: bool = False
- confidence_method_cfg: Optional[ConfidenceMethodConfig] = ConfidenceMethodConfig()
+ confidence_method_cfg: Optional[ConfidenceMethodConfig] = field(default_factory=lambda: ConfidenceMethodConfig())
def __post_init__(self):
# OmegaConf.structured ensures that post_init check is always executed
diff --git a/nemo/collections/asr/parts/submodules/rnnt_greedy_decoding.py b/nemo/collections/asr/parts/submodules/rnnt_greedy_decoding.py
--- a/nemo/collections/asr/parts/submodules/rnnt_greedy_decoding.py
+++ b/nemo/collections/asr/parts/submodules/rnnt_greedy_decoding.py
@@ -26,7 +26,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import List, Optional, Tuple, Union
import numpy as np
@@ -2185,7 +2185,7 @@ class GreedyRNNTInferConfig:
max_symbols_per_step: Optional[int] = 10
preserve_alignments: bool = False
preserve_frame_confidence: bool = False
- confidence_method_cfg: Optional[ConfidenceMethodConfig] = ConfidenceMethodConfig()
+ confidence_method_cfg: Optional[ConfidenceMethodConfig] = field(default_factory=lambda: ConfidenceMethodConfig())
def __post_init__(self):
# OmegaConf.structured ensures that post_init check is always executed
@@ -2201,7 +2201,7 @@ class GreedyBatchedRNNTInferConfig:
max_symbols_per_step: Optional[int] = 10
preserve_alignments: bool = False
preserve_frame_confidence: bool = False
- confidence_method_cfg: Optional[ConfidenceMethodConfig] = ConfidenceMethodConfig()
+ confidence_method_cfg: Optional[ConfidenceMethodConfig] = field(default_factory=lambda: ConfidenceMethodConfig())
def __post_init__(self):
# OmegaConf.structured ensures that post_init check is always executed
diff --git a/nemo/collections/asr/parts/utils/asr_confidence_utils.py b/nemo/collections/asr/parts/utils/asr_confidence_utils.py
--- a/nemo/collections/asr/parts/utils/asr_confidence_utils.py
+++ b/nemo/collections/asr/parts/utils/asr_confidence_utils.py
@@ -14,7 +14,7 @@
import math
from abc import ABC, abstractmethod
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from functools import partial
from typing import List, Optional
@@ -175,7 +175,7 @@ class ConfidenceConfig:
preserve_word_confidence: bool = False
exclude_blank: bool = True
aggregation: str = "min"
- method_cfg: ConfidenceMethodConfig = ConfidenceMethodConfig()
+ method_cfg: ConfidenceMethodConfig = field(default_factory=lambda: ConfidenceMethodConfig())
def __post_init__(self):
# OmegaConf.structured ensures that post_init check is always executed
diff --git a/nemo/collections/common/parts/adapter_modules.py b/nemo/collections/common/parts/adapter_modules.py
--- a/nemo/collections/common/parts/adapter_modules.py
+++ b/nemo/collections/common/parts/adapter_modules.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass, is_dataclass
+from dataclasses import dataclass, field, is_dataclass
from typing import Any, Optional
from hydra.utils import instantiate
@@ -160,5 +160,7 @@ class LinearAdapterConfig:
activation: str = 'swish'
norm_position: str = 'pre'
dropout: float = 0.0
- adapter_strategy: Optional[Any] = adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
+ adapter_strategy: Optional[Any] = field(
+ default_factory=lambda: adapter_mixin_strategies.ResidualAddAdapterStrategyConfig()
+ )
_target_: str = "{0}.{1}".format(LinearAdapter.__module__, LinearAdapter.__name__)
diff --git a/nemo/collections/common/tokenizers/en_ja_tokenizers.py b/nemo/collections/common/tokenizers/en_ja_tokenizers.py
--- a/nemo/collections/common/tokenizers/en_ja_tokenizers.py
+++ b/nemo/collections/common/tokenizers/en_ja_tokenizers.py
@@ -14,11 +14,19 @@
import re
from typing import List
-import ipadic
-import MeCab
from pangu import spacing
from sacremoses import MosesDetokenizer, MosesPunctNormalizer, MosesTokenizer
+try:
+ import ipadic
+ import MeCab
+
+ HAVE_MECAB = True
+ HAVE_IPADIC = True
+except (ImportError, ModuleNotFoundError):
+ HAVE_MECAB = False
+ HAVE_IPADIC = False
+
class EnJaProcessor:
"""
@@ -67,6 +75,9 @@ class JaMecabProcessor:
"""
def __init__(self):
+ if not HAVE_MECAB or not HAVE_IPADIC:
+ raise ImportError("Please ensure that you have installed `MeCab` and `ipadic` to use JaMecabProcessor")
+
self.mecab_tokenizer = MeCab.Tagger(ipadic.MECAB_ARGS + " -Owakati")
def detokenize(self, text: List[str]) -> str:
diff --git a/nemo/collections/nlp/models/machine_translation/mt_enc_dec_config.py b/nemo/collections/nlp/models/machine_translation/mt_enc_dec_config.py
--- a/nemo/collections/nlp/models/machine_translation/mt_enc_dec_config.py
+++ b/nemo/collections/nlp/models/machine_translation/mt_enc_dec_config.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import Any, Optional, Tuple
from omegaconf.omegaconf import MISSING
@@ -46,7 +46,7 @@ class MTOptimConfig(OptimConfig):
lr: float = 1e-3
betas: Tuple[float, float] = (0.9, 0.98)
weight_decay: float = 0.0
- sched: Optional[MTSchedConfig] = MTSchedConfig()
+ sched: Optional[MTSchedConfig] = field(default_factory=lambda: MTSchedConfig())
@dataclass
@@ -74,70 +74,80 @@ class MTEncDecModelConfig(EncDecNLPModelConfig):
decoder_tokenizer: Any = MISSING
decoder: Any = MISSING
- head: TokenClassifierConfig = TokenClassifierConfig(log_softmax=True)
+ head: TokenClassifierConfig = field(default_factory=lambda: TokenClassifierConfig(log_softmax=True))
# dataset configurations
- train_ds: Optional[TranslationDataConfig] = TranslationDataConfig(
- src_file_name=MISSING,
- tgt_file_name=MISSING,
- tokens_in_batch=512,
- clean=True,
- shuffle=True,
- cache_ids=False,
- use_cache=False,
+ train_ds: Optional[TranslationDataConfig] = field(
+ default_factory=lambda: TranslationDataConfig(
+ src_file_name=MISSING,
+ tgt_file_name=MISSING,
+ tokens_in_batch=512,
+ clean=True,
+ shuffle=True,
+ cache_ids=False,
+ use_cache=False,
+ )
)
- validation_ds: Optional[TranslationDataConfig] = TranslationDataConfig(
- src_file_name=MISSING,
- tgt_file_name=MISSING,
- tokens_in_batch=512,
- clean=False,
- shuffle=False,
- cache_ids=False,
- use_cache=False,
+ validation_ds: Optional[TranslationDataConfig] = field(
+ default_factory=lambda: TranslationDataConfig(
+ src_file_name=MISSING,
+ tgt_file_name=MISSING,
+ tokens_in_batch=512,
+ clean=False,
+ shuffle=False,
+ cache_ids=False,
+ use_cache=False,
+ )
)
- test_ds: Optional[TranslationDataConfig] = TranslationDataConfig(
- src_file_name=MISSING,
- tgt_file_name=MISSING,
- tokens_in_batch=512,
- clean=False,
- shuffle=False,
- cache_ids=False,
- use_cache=False,
+ test_ds: Optional[TranslationDataConfig] = field(
+ default_factory=lambda: TranslationDataConfig(
+ src_file_name=MISSING,
+ tgt_file_name=MISSING,
+ tokens_in_batch=512,
+ clean=False,
+ shuffle=False,
+ cache_ids=False,
+ use_cache=False,
+ )
)
- optim: Optional[OptimConfig] = MTOptimConfig()
+ optim: Optional[OptimConfig] = field(default_factory=lambda: MTOptimConfig())
@dataclass
class AAYNBaseConfig(MTEncDecModelConfig):
# Attention is All You Need Base Configuration
- encoder_tokenizer: TokenizerConfig = TokenizerConfig(library='yttm')
- decoder_tokenizer: TokenizerConfig = TokenizerConfig(library='yttm')
-
- encoder: NeMoTransformerEncoderConfig = NeMoTransformerEncoderConfig(
- library='nemo',
- model_name=None,
- pretrained=False,
- hidden_size=512,
- inner_size=2048,
- num_layers=6,
- num_attention_heads=8,
- ffn_dropout=0.1,
- attn_score_dropout=0.1,
- attn_layer_dropout=0.1,
+ encoder_tokenizer: TokenizerConfig = field(default_factory=lambda: TokenizerConfig(library='yttm'))
+ decoder_tokenizer: TokenizerConfig = field(default_factory=lambda: TokenizerConfig(library='yttm'))
+
+ encoder: NeMoTransformerEncoderConfig = field(
+ default_factory=lambda: NeMoTransformerEncoderConfig(
+ library='nemo',
+ model_name=None,
+ pretrained=False,
+ hidden_size=512,
+ inner_size=2048,
+ num_layers=6,
+ num_attention_heads=8,
+ ffn_dropout=0.1,
+ attn_score_dropout=0.1,
+ attn_layer_dropout=0.1,
+ )
)
- decoder: NeMoTransformerConfig = NeMoTransformerConfig(
- library='nemo',
- model_name=None,
- pretrained=False,
- hidden_size=512,
- inner_size=2048,
- num_layers=6,
- num_attention_heads=8,
- ffn_dropout=0.1,
- attn_score_dropout=0.1,
- attn_layer_dropout=0.1,
+ decoder: NeMoTransformerConfig = field(
+ default_factory=lambda: NeMoTransformerConfig(
+ library='nemo',
+ model_name=None,
+ pretrained=False,
+ hidden_size=512,
+ inner_size=2048,
+ num_layers=6,
+ num_attention_heads=8,
+ ffn_dropout=0.1,
+ attn_score_dropout=0.1,
+ attn_layer_dropout=0.1,
+ )
)
@@ -150,32 +160,36 @@ class MTBottleneckModelConfig(AAYNBaseConfig):
recon_per_token: bool = True
log_timing: bool = True
- encoder: NeMoTransformerBottleneckEncoderConfig = NeMoTransformerBottleneckEncoderConfig(
- library='nemo',
- model_name=None,
- pretrained=False,
- hidden_size=512,
- inner_size=2048,
- num_layers=6,
- num_attention_heads=8,
- ffn_dropout=0.1,
- attn_score_dropout=0.1,
- attn_layer_dropout=0.1,
- arch='seq2seq',
- hidden_steps=32,
- hidden_blocks=1,
- hidden_init_method='params',
+ encoder: NeMoTransformerBottleneckEncoderConfig = field(
+ default_factory=lambda: NeMoTransformerBottleneckEncoderConfig(
+ library='nemo',
+ model_name=None,
+ pretrained=False,
+ hidden_size=512,
+ inner_size=2048,
+ num_layers=6,
+ num_attention_heads=8,
+ ffn_dropout=0.1,
+ attn_score_dropout=0.1,
+ attn_layer_dropout=0.1,
+ arch='seq2seq',
+ hidden_steps=32,
+ hidden_blocks=1,
+ hidden_init_method='params',
+ )
)
- decoder: NeMoTransformerBottleneckDecoderConfig = NeMoTransformerBottleneckDecoderConfig(
- library='nemo',
- model_name=None,
- pretrained=False,
- inner_size=2048,
- num_layers=6,
- num_attention_heads=8,
- ffn_dropout=0.1,
- attn_score_dropout=0.1,
- attn_layer_dropout=0.1,
- arch='seq2seq',
+ decoder: NeMoTransformerBottleneckDecoderConfig = field(
+ default_factory=lambda: NeMoTransformerBottleneckDecoderConfig(
+ library='nemo',
+ model_name=None,
+ pretrained=False,
+ inner_size=2048,
+ num_layers=6,
+ num_attention_heads=8,
+ ffn_dropout=0.1,
+ attn_score_dropout=0.1,
+ attn_layer_dropout=0.1,
+ arch='seq2seq',
+ )
)
diff --git a/nemo/collections/nlp/models/token_classification/punctuation_capitalization_config.py b/nemo/collections/nlp/models/token_classification/punctuation_capitalization_config.py
--- a/nemo/collections/nlp/models/token_classification/punctuation_capitalization_config.py
+++ b/nemo/collections/nlp/models/token_classification/punctuation_capitalization_config.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import Any, Dict, Optional
from omegaconf.omegaconf import MISSING, DictConfig, OmegaConf, open_dict
@@ -215,13 +215,15 @@ class PunctuationCapitalizationModelConfig:
This config is a part of :class:`~PunctuationCapitalizationConfig`.
"""
- class_labels: ClassLabelsConfig = ClassLabelsConfig()
+ class_labels: ClassLabelsConfig = field(default_factory=lambda: ClassLabelsConfig())
"""A mandatory parameter containing a dictionary with names of label id files used in .nemo checkpoints.
These file names can also be used for passing label vocabularies to the model. If you wish to use ``class_labels``
for passing vocabularies, please provide path to vocabulary files in
``model.common_dataset_parameters.label_vocab_dir`` parameter."""
- common_dataset_parameters: Optional[CommonDatasetParametersConfig] = CommonDatasetParametersConfig()
+ common_dataset_parameters: Optional[CommonDatasetParametersConfig] = field(
+ default_factory=lambda: CommonDatasetParametersConfig()
+ )
"""Label ids and loss mask information information."""
train_ds: Optional[PunctuationCapitalizationTrainDataConfig] = None
@@ -233,16 +235,16 @@ class PunctuationCapitalizationModelConfig:
test_ds: Optional[PunctuationCapitalizationEvalDataConfig] = None
"""A configuration for creating test datasets and data loaders."""
- punct_head: HeadConfig = HeadConfig()
+ punct_head: HeadConfig = field(default_factory=lambda: HeadConfig())
"""A configuration for creating punctuation MLP head that is applied to a language model outputs."""
- capit_head: HeadConfig = HeadConfig()
+ capit_head: HeadConfig = field(default_factory=lambda: HeadConfig())
"""A configuration for creating capitalization MLP head that is applied to a language model outputs."""
- tokenizer: Any = TokenizerConfig()
+ tokenizer: Any = field(default_factory=lambda: TokenizerConfig())
"""A configuration for source text tokenizer."""
- language_model: LanguageModelConfig = LanguageModelConfig()
+ language_model: LanguageModelConfig = field(default_factory=lambda: LanguageModelConfig())
"""A configuration of a BERT-like language model which serves as a model body."""
optim: Optional[Any] = None
@@ -311,22 +313,30 @@ class PunctuationCapitalizationConfig(NemoConfig):
do_testing: bool = False
"""Whether ot perform testing of the model."""
- model: PunctuationCapitalizationModelConfig = PunctuationCapitalizationModelConfig()
+ model: PunctuationCapitalizationModelConfig = field(default_factory=lambda: PunctuationCapitalizationModelConfig())
"""A configuration for the
:class:`~nemo.collections.nlp.models.token_classification.punctuation_capitalization_model.PunctuationCapitalizationModel`
model."""
- trainer: Optional[TrainerConfig] = TrainerConfig()
+ trainer: Optional[TrainerConfig] = field(default_factory=lambda: TrainerConfig())
"""Contains ``Trainer`` Lightning class constructor parameters."""
- exp_manager: Optional[ExpManagerConfig] = ExpManagerConfig(name=name, files_to_copy=[])
+ exp_manager: Optional[ExpManagerConfig] = field(
+ default_factory=lambda: ExpManagerConfig(name=None, files_to_copy=[])
+ )
"""A configuration with various NeMo training options such as output directories, resuming from checkpoint,
tensorboard and W&B logging, and so on. For possible options see :ref:`exp-manager-label`."""
+ def __post_init__(self):
+ if self.exp_manager is not None:
+ self.exp_manager.name = self.name
+
@dataclass
class PunctuationCapitalizationLexicalAudioConfig(PunctuationCapitalizationConfig):
- model: PunctuationCapitalizationLexicalAudioModelConfig = PunctuationCapitalizationLexicalAudioModelConfig()
+ model: PunctuationCapitalizationLexicalAudioModelConfig = field(
+ default_factory=lambda: PunctuationCapitalizationLexicalAudioModelConfig()
+ )
def is_legacy_model_config(model_cfg: DictConfig) -> bool:
diff --git a/nemo/collections/nlp/modules/common/megatron/megatron_encoders.py b/nemo/collections/nlp/modules/common/megatron/megatron_encoders.py
--- a/nemo/collections/nlp/modules/common/megatron/megatron_encoders.py
+++ b/nemo/collections/nlp/modules/common/megatron/megatron_encoders.py
@@ -13,7 +13,6 @@
# limitations under the License.
"""Transformer based language model."""
-from MeCab import Model
from nemo.collections.nlp.modules.common.megatron.megatron_perceiver_encoders import MegatronPerceiverEncoderModule
from nemo.collections.nlp.modules.common.megatron.megatron_transformer_encoder import MegatronTransformerEncoderModule
from nemo.collections.nlp.modules.common.megatron.retrieval_transformer import (
@@ -25,6 +24,13 @@
scaled_init_method_normal,
)
+try:
+ from MeCab import Model
+
+ HAVE_MECAB = True
+except (ImportError, ModuleNotFoundError):
+ HAVE_MECAB = False
+
try:
from apex.transformer.enums import AttnMaskType, ModelType
diff --git a/nemo/collections/tts/models/fastpitch.py b/nemo/collections/tts/models/fastpitch.py
--- a/nemo/collections/tts/models/fastpitch.py
+++ b/nemo/collections/tts/models/fastpitch.py
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import contextlib
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from pathlib import Path
from typing import List, Optional
@@ -70,12 +70,12 @@ class TextTokenizer:
apostrophe: bool = True
pad_with_space: bool = True
add_blank_at: bool = True
- g2p: G2PConfig = G2PConfig()
+ g2p: G2PConfig = field(default_factory=lambda: G2PConfig())
@dataclass
class TextTokenizerConfig:
- text_tokenizer: TextTokenizer = TextTokenizer()
+ text_tokenizer: TextTokenizer = field(default_factory=lambda: TextTokenizer())
class FastPitchModel(SpectrogramGenerator, Exportable, FastPitchAdapterModelMixin):
diff --git a/nemo/collections/tts/models/tacotron2.py b/nemo/collections/tts/models/tacotron2.py
--- a/nemo/collections/tts/models/tacotron2.py
+++ b/nemo/collections/tts/models/tacotron2.py
@@ -13,7 +13,7 @@
# limitations under the License.
import contextlib
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional
import torch
@@ -53,7 +53,7 @@ class Preprocessor:
@dataclass
class Tacotron2Config:
- preprocessor: Preprocessor = Preprocessor()
+ preprocessor: Preprocessor = field(default_factory=lambda: Preprocessor())
encoder: Dict[Any, Any] = MISSING
decoder: Dict[Any, Any] = MISSING
postnet: Dict[Any, Any] = MISSING
diff --git a/nemo/core/config/modelPT.py b/nemo/core/config/modelPT.py
--- a/nemo/core/config/modelPT.py
+++ b/nemo/core/config/modelPT.py
@@ -58,11 +58,13 @@ class HydraConfig:
class NemoConfig:
name: str = MISSING
model: ModelConfig = MISSING
- trainer: config.TrainerConfig = config.TrainerConfig(
- strategy="ddp", enable_checkpointing=False, logger=False, log_every_n_steps=1, accelerator='gpu'
+ trainer: config.TrainerConfig = field(
+ default_factory=lambda: config.TrainerConfig(
+ strategy="ddp", enable_checkpointing=False, logger=False, log_every_n_steps=1, accelerator='gpu'
+ )
)
- exp_manager: Optional[Any] = exp_manager.ExpManagerConfig()
- hydra: HydraConfig = HydraConfig()
+ exp_manager: Optional[Any] = field(default_factory=lambda: exp_manager.ExpManagerConfig())
+ hydra: HydraConfig = field(default_factory=lambda: HydraConfig())
class ModelConfigBuilder:
diff --git a/nemo/utils/exp_manager.py b/nemo/utils/exp_manager.py
--- a/nemo/utils/exp_manager.py
+++ b/nemo/utils/exp_manager.py
@@ -18,7 +18,7 @@
import sys
import time
import warnings
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from datetime import timedelta
from pathlib import Path
from shutil import copy, move
@@ -146,28 +146,30 @@ class ExpManagerConfig:
create_wandb_logger: Optional[bool] = False
wandb_logger_kwargs: Optional[Dict[Any, Any]] = None
create_mlflow_logger: Optional[bool] = False
- mlflow_logger_kwargs: Optional[MLFlowParams] = MLFlowParams()
+ mlflow_logger_kwargs: Optional[MLFlowParams] = field(default_factory=lambda: MLFlowParams())
create_dllogger_logger: Optional[bool] = False
- dllogger_logger_kwargs: Optional[DLLoggerParams] = DLLoggerParams()
+ dllogger_logger_kwargs: Optional[DLLoggerParams] = field(default_factory=lambda: DLLoggerParams())
create_clearml_logger: Optional[bool] = False
- clearml_logger_kwargs: Optional[ClearMLParams] = ClearMLParams()
+ clearml_logger_kwargs: Optional[ClearMLParams] = field(default_factory=lambda: ClearMLParams())
# Checkpointing parameters
create_checkpoint_callback: Optional[bool] = True
- checkpoint_callback_params: Optional[CallbackParams] = CallbackParams()
+ checkpoint_callback_params: Optional[CallbackParams] = field(default_factory=lambda: CallbackParams())
create_early_stopping_callback: Optional[bool] = False
- early_stopping_callback_params: Optional[EarlyStoppingParams] = EarlyStoppingParams()
+ early_stopping_callback_params: Optional[EarlyStoppingParams] = field(
+ default_factory=lambda: EarlyStoppingParams()
+ )
create_preemption_callback: Optional[bool] = True
# Additional exp_manager arguments
files_to_copy: Optional[List[str]] = None
# logs timing of train/val/test steps
log_step_timing: Optional[bool] = True
- step_timing_kwargs: Optional[StepTimingParams] = StepTimingParams()
+ step_timing_kwargs: Optional[StepTimingParams] = field(default_factory=lambda: StepTimingParams())
# Configures creation of log files for different ranks
log_local_rank_0_only: Optional[bool] = False
log_global_rank_0_only: Optional[bool] = False
# disable initial validation when resuming from a checkpoint saved during validation
disable_validation_on_resume: Optional[bool] = True
- ema: Optional[EMAParams] = EMAParams()
+ ema: Optional[EMAParams] = field(default_factory=lambda: EMAParams())
# Wall clock time limit
max_time_per_run: Optional[str] = None
# time to sleep non 0 ranks during initialization
diff --git a/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram.py b/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram.py
--- a/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram.py
+++ b/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram.py
@@ -112,14 +112,14 @@ class EvalBeamSearchNGramConfig:
beam_beta: List[float] = field(default_factory=lambda: [0.0]) # The beta parameter or list of the betas for the beam search decoding
decoding_strategy: str = "beam"
- decoding: ctc_beam_decoding.BeamCTCInferConfig = ctc_beam_decoding.BeamCTCInferConfig(beam_size=128)
+ decoding: ctc_beam_decoding.BeamCTCInferConfig = field(default_factory=lambda: ctc_beam_decoding.BeamCTCInferConfig(beam_size=128))
- text_processing: Optional[TextProcessingConfig] = TextProcessingConfig(
+ text_processing: Optional[TextProcessingConfig] = field(default_factory=lambda: TextProcessingConfig(
punctuation_marks = ".,?",
separate_punctuation = False,
do_lowercase = False,
rm_punctuation = False,
- )
+ ))
# fmt: on
diff --git a/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram_transducer.py b/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram_transducer.py
--- a/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram_transducer.py
+++ b/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram_transducer.py
@@ -115,7 +115,7 @@ class EvalBeamSearchNGramConfig:
hat_subtract_ilm: bool = False
hat_ilm_weight: List[float] = field(default_factory=lambda: [0.0])
- decoding: rnnt_beam_decoding.BeamRNNTInferConfig = rnnt_beam_decoding.BeamRNNTInferConfig(beam_size=128)
+ decoding: rnnt_beam_decoding.BeamRNNTInferConfig = field(default_factory=lambda: rnnt_beam_decoding.BeamRNNTInferConfig(beam_size=128))
# fmt: on
diff --git a/scripts/confidence_ensembles/build_ensemble.py b/scripts/confidence_ensembles/build_ensemble.py
--- a/scripts/confidence_ensembles/build_ensemble.py
+++ b/scripts/confidence_ensembles/build_ensemble.py
@@ -75,7 +75,7 @@
import sys
import tempfile
from copy import deepcopy
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from pathlib import Path
from typing import Dict, List, Optional, Tuple
@@ -209,19 +209,23 @@ class BuildEnsembleConfig:
random_seed: int = 0 # for reproducibility
# default confidence, can override
- confidence: ConfidenceConfig = ConfidenceConfig(
- # we keep frame confidences and apply aggregation manually to get full-utterance confidence
- preserve_frame_confidence=True,
- exclude_blank=True,
- aggregation="mean",
- method_cfg=ConfidenceMethodConfig(name="entropy", entropy_type="renyi", alpha=0.25, entropy_norm="lin",),
+ confidence: ConfidenceConfig = field(
+ default_factory=lambda: ConfidenceConfig(
+ # we keep frame confidences and apply aggregation manually to get full-utterance confidence
+ preserve_frame_confidence=True,
+ exclude_blank=True,
+ aggregation="mean",
+ measure_cfg=ConfidenceMethodConfig(name="entropy", entropy_type="renyi", alpha=0.25, entropy_norm="lin",),
+ )
)
temperature: float = 1.0
# this is optional, but can be used to change any aspect of the transcription
# config, such as batch size or amp usage. Note that model, data and confidence
# will be overriden by this script
- transcription: transcribe_speech.TranscriptionConfig = transcribe_speech.TranscriptionConfig()
+ transcription: transcribe_speech.TranscriptionConfig = field(
+ default_factory=lambda: transcribe_speech.TranscriptionConfig()
+ )
# set to True to tune the confidence.
# requires dev manifests to be specified for each model
@@ -229,12 +233,14 @@ class BuildEnsembleConfig:
# used to specify what to tune over. By default runs tuning over some
# reasonalbe grid, so that it does not take forever.
# Can be changed as needed
- tune_confidence_config: TuneConfidenceConfig = TuneConfidenceConfig()
+ tune_confidence_config: TuneConfidenceConfig = field(default_factory=lambda: TuneConfidenceConfig())
# very fast to tune and can be important in case of imbalanced datasets
# will automatically set to False if dev data is not available
tune_logistic_regression: bool = True
- tune_logistic_regression_config: TuneLogisticRegressionConfig = TuneLogisticRegressionConfig()
+ tune_logistic_regression_config: TuneLogisticRegressionConfig = field(
+ default_factory=lambda: TuneLogisticRegressionConfig()
+ )
def __post_init__(self):
"""Checking that if any dev data is provided, all are provided.
diff --git a/scripts/speech_recognition/confidence/benchmark_asr_confidence.py b/scripts/speech_recognition/confidence/benchmark_asr_confidence.py
--- a/scripts/speech_recognition/confidence/benchmark_asr_confidence.py
+++ b/scripts/speech_recognition/confidence/benchmark_asr_confidence.py
@@ -14,7 +14,7 @@
import json
import os
-from dataclasses import dataclass, is_dataclass
+from dataclasses import dataclass, field, is_dataclass
from pathlib import Path
from typing import Optional
@@ -124,7 +124,9 @@ class ConfidenceBenchmarkingConfig:
# Confidence configs
target_level: str = "auto" # Choices: "word", "token", "auto" (for both word- and token-level confidence)
- confidence_cfg: ConfidenceConfig = ConfidenceConfig(preserve_word_confidence=True, preserve_token_confidence=True)
+ confidence_cfg: ConfidenceConfig = field(
+ default_factory=lambda: ConfidenceConfig(preserve_word_confidence=True, preserve_token_confidence=True)
+ )
grid_params: Optional[str] = None # a dictionary with lists of parameters to iteratively benchmark on
diff --git a/scripts/speech_recognition/convert_to_tarred_audio_dataset.py b/scripts/speech_recognition/convert_to_tarred_audio_dataset.py
--- a/scripts/speech_recognition/convert_to_tarred_audio_dataset.py
+++ b/scripts/speech_recognition/convert_to_tarred_audio_dataset.py
@@ -202,7 +202,7 @@ class ASRTarredDatasetMetadata:
num_samples_per_shard: Optional[int] = None
is_concatenated_manifest: bool = False
- dataset_config: Optional[ASRTarredDatasetConfig] = ASRTarredDatasetConfig()
+ dataset_config: Optional[ASRTarredDatasetConfig] = field(default_factory=lambda: ASRTarredDatasetConfig())
history: Optional[List[Any]] = field(default_factory=lambda: [])
def __post_init__(self):
diff --git a/tools/nemo_forced_aligner/align.py b/tools/nemo_forced_aligner/align.py
--- a/tools/nemo_forced_aligner/align.py
+++ b/tools/nemo_forced_aligner/align.py
@@ -149,8 +149,8 @@ class AlignmentConfig:
# Output file configs
save_output_file_formats: List[str] = field(default_factory=lambda: ["ctm", "ass"])
- ctm_file_config: CTMFileConfig = CTMFileConfig()
- ass_file_config: ASSFileConfig = ASSFileConfig()
+ ctm_file_config: CTMFileConfig = field(default_factory=lambda: CTMFileConfig())
+ ass_file_config: ASSFileConfig = field(default_factory=lambda: ASSFileConfig())
@hydra_runner(config_name="AlignmentConfig", schema=AlignmentConfig)
</patch>
</patch>
|
diff --git a/tests/collections/asr/test_text_to_text_dataset.py b/tests/collections/asr/test_text_to_text_dataset.py
--- a/tests/collections/asr/test_text_to_text_dataset.py
+++ b/tests/collections/asr/test_text_to_text_dataset.py
@@ -15,7 +15,7 @@
import json
import multiprocessing
import os
-from dataclasses import dataclass
+from dataclasses import dataclass, field
from pathlib import Path
import pytest
@@ -118,7 +118,7 @@ class TextTokenizerCfg:
apostrophe: bool = True
pad_with_space: bool = True
add_blank_at: bool = True
- g2p: G2PConfig = G2PConfig()
+ g2p: G2PConfig = field(default_factory=lambda: G2PConfig())
config = OmegaConf.create(OmegaConf.to_yaml(TextTokenizerCfg()))
return instantiate(config)
|
1.0
| |||
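All of the diffs in this instance apply one pattern: a dataclass field whose default is itself a dataclass instance is moved to `field(default_factory=...)`. A minimal, self-contained sketch of the pattern (the config classes here are illustrative stand-ins, not NeMo's real ones):

```python
from dataclasses import dataclass, field


@dataclass
class G2PConfig:
    # illustrative stand-in for the real config class being patched above
    apostrophe: bool = True


@dataclass
class TextTokenizerCfg:
    pad_with_space: bool = True
    # default_factory builds a fresh G2PConfig per instance; a bare
    # `g2p: G2PConfig = G2PConfig()` would be one shared default object,
    # and Python 3.11+ rejects such mutable defaults with a ValueError
    # at class-definition time.
    g2p: G2PConfig = field(default_factory=G2PConfig)


a, b = TextTokenizerCfg(), TextTokenizerCfg()
print(a.g2p is b.g2p)  # False: each instance owns its own config object
```

The `field(default_factory=lambda: Cfg(...))` form in the diffs is the same idea, with the lambda capturing constructor arguments.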
slackapi__python-slack-events-api-71
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Passing Flask app proxy as server
Hi Guys,
I have an app factory on my setup and the app object usually it is invoked as :
`from flask import current_app as app`
However, the slackeventsapi complains about the app object :
`TypeError("Server must be an instance of Flask")`
I have fixed adding the following to server.py :
`from werkzeug.local import LocalProxy # Importing the localproxy class`
Line 25
Changed from :
` if isinstance(server, Flask):`
to :
`if isinstance(server, Flask) or isinstance(server, LocalProxy):`
Basically, if a Flask app proxy is passed the api will carry on without complaining since it has the same methods as the Flask app object.
I hope this help other people and it is considered as a solution if more information is needed I am help to provide.
Thanks for the good work with the API.
### What type of issue is this? (place an `x` in one of the `[ ]`)
- [X] bug ?
- [X] enhancement (feature request)
- [ ] question
- [ ] documentation related
- [ ] testing related
- [ ] discussion
### Requirements
* [X] I've read and understood the [Contributing guidelines](https://github.com/slackapi/python-slack-events-api/blob/master/.github/contributing.md) and have done my best effort to follow them.
* [X] I've read and agree to the [Code of Conduct](https://slackhq.github.io/code-of-conduct).
* [X] I've searched for any related issues and avoided creating a duplicate issue.
#### Reproducible in:
slackeventsapi version: slackeventsapi==2.1.0
python version: Python 3.7.3
OS version(s):
</issue>
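The type mismatch the issue describes can be demonstrated without Flask installed; the toy `Flask` and `LocalProxy` classes below are simplified stand-ins for the real ones (the real `werkzeug.local.LocalProxy` forwards nearly all special methods as well):

```python
class Flask:
    # toy stand-in for flask.Flask, so this sketch runs without flask/werkzeug
    pass


class LocalProxy:
    """Minimal sketch of werkzeug.local.LocalProxy: forwards attribute access
    to whatever object the callable currently resolves to."""

    def __init__(self, get_current):
        self._get_current = get_current

    def __getattr__(self, name):
        # only called for attributes not found on the proxy itself
        return getattr(self._get_current(), name)


app = Flask()
current_app = LocalProxy(lambda: app)  # roughly what flask.current_app is

print(isinstance(current_app, Flask))                # False: the old check raises TypeError
print(isinstance(current_app, (Flask, LocalProxy)))  # True: the check the issue proposes
```

The proxy behaves like the app for attribute access, but `isinstance` sees the proxy type, which is why the adapter's `isinstance(server, Flask)` check rejects `current_app`.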
<code>
[start of README.rst]
1 Slack Events API adapter for Python
2 ===================================
3
4 .. image:: https://badge.fury.io/py/slackeventsapi.svg
5 :target: https://pypi.org/project/slackeventsapi/
6 .. image:: https://travis-ci.org/slackapi/python-slack-events-api.svg?branch=master
7 :target: https://travis-ci.org/slackapi/python-slack-events-api
8 .. image:: https://codecov.io/gh/slackapi/python-slack-events-api/branch/master/graph/badge.svg
9 :target: https://codecov.io/gh/slackapi/python-slack-events-api
10
11
12 The Slack Events Adapter is a Python-based solution to receive and parse events
13 from Slack’s Events API. This library uses an event emitter framework to allow
14 you to easily process Slack events by simply attaching functions
15 to event listeners.
16
17 This adapter enhances and simplifies Slack's Events API by incorporating useful best practices, patterns, and opportunities to abstract out common tasks.
18
19 💡 We wrote a `blog post which explains how`_ the Events API can help you, why we built these tools, and how you can use them to build production-ready Slack apps.
20
21 .. _blog post which explains how: https://medium.com/@SlackAPI/enhancing-slacks-events-api-7535827829ab
22
23
24 🤖 Installation
25 ----------------
26
27 .. code:: shell
28
29 pip install slackeventsapi
30
31 🤖 App Setup
32 --------------------
33
34 Before you can use the `Events API`_ you must
35 `create a Slack App`_, and turn on
36 `Event Subscriptions`_.
37
38 💡 When you add the Request URL to your app's Event Subscription settings,
39 Slack will send a request containing a `challenge` code to verify that your
40 server is alive. This package handles that URL Verification event for you, so
41 all you need to do is start the example app, start ngrok and configure your
42 URL accordingly.
43
44 ✅ Once you have your `Request URL` verified, your app is ready to start
45 receiving Team Events.
46
47 👍 Your server will begin receiving Events from Slack's Events API as soon as a
48 user has authorized your app.
49
50 🤖 Development workflow:
51 ===========================
52
53 (1) Create a Slack app on https://api.slack.com/apps
54 (2) Add a `bot user` for your app
55 (3) Start the example app on your **Request URL** endpoint
56 (4) Start ngrok and copy the **HTTPS** URL
57 (5) Add your **Request URL** and subscribe your app to events
58 (6) Go to your ngrok URL (e.g. https://myapp12.ngrok.com/) and auth your app
59
60 **🎉 Once your app has been authorized, you will begin receiving Slack Events**
61
62 ⚠️ Ngrok is a great tool for developing Slack apps, but we don't recommend using ngrok
63 for production apps.
64
65 🤖 Usage
66 ----------
67 **⚠️ Keep your app's credentials safe!**
68
69 - For development, keep them in virtualenv variables.
70
71 - For production, use a secure data store.
72
73 - Never post your app's credentials to github.
74
75 .. code:: python
76
77 SLACK_SIGNING_SECRET = os.environ["SLACK_SIGNING_SECRET"]
78
79 Create a Slack Event Adapter for receiving actions via the Events API
80 -----------------------------------------------------------------------
81 **Using the built-in Flask server:**
82
83 .. code:: python
84
85 from slackeventsapi import SlackEventAdapter
86
87
88 slack_events_adapter = SlackEventAdapter(SLACK_SIGNING_SECRET, endpoint="/slack/events")
89
90
91 # Create an event listener for "reaction_added" events and print the emoji name
92 @slack_events_adapter.on("reaction_added")
93 def reaction_added(event_data):
94 emoji = event_data["event"]["reaction"]
95 print(emoji)
96
97
98 # Start the server on port 3000
99 slack_events_adapter.start(port=3000)
100
101
102 **Using your existing Flask instance:**
103
104
105 .. code:: python
106
107 from flask import Flask
108 from slackeventsapi import SlackEventAdapter
109
110
111 # This `app` represents your existing Flask app
112 app = Flask(__name__)
113
114
115 # An example of one of your Flask app's routes
116 @app.route("/")
117 def hello():
118 return "Hello there!"
119
120
121 # Bind the Events API route to your existing Flask app by passing the server
122 # instance as the last param, or with `server=app`.
123 slack_events_adapter = SlackEventAdapter(SLACK_SIGNING_SECRET, "/slack/events", app)
124
125
126 # Create an event listener for "reaction_added" events and print the emoji name
127 @slack_events_adapter.on("reaction_added")
128 def reaction_added(event_data):
129 emoji = event_data["event"]["reaction"]
130 print(emoji)
131
132
133 # Start the server on port 3000
134 if __name__ == "__main__":
135 app.run(port=3000)
136
137 For a comprehensive list of available Slack `Events` and more information on
138 `Scopes`, see https://api.slack.com/events-api
139
140 🤖 Example event listeners
141 -----------------------------
142
143 See `example.py`_ for usage examples. This example also utilizes the
144 SlackClient Web API client.
145
146 .. _example.py: /example/
147
148 🤖 Support
149 -----------
150
151 Need help? Join `Slack Community`_ and talk to us in `#slack-api`_.
152
153 You can also `create an Issue`_ right here on GitHub.
154
155 .. _Events API: https://api.slack.com/events-api
156 .. _create a Slack App: https://api.slack.com/apps/new
157 .. _Event Subscriptions: https://api.slack.com/events-api#subscriptions
158 .. _Slack Community: http://slackcommunity.com/
159 .. _#slack-api: https://dev4slack.slack.com/messages/slack-api/
160 .. _create an Issue: https://github.com/slackapi/python-slack-events-api/issues/new
161
[end of README.rst]
[start of /dev/null]
1
[end of /dev/null]
[start of slackeventsapi/server.py]
1 from flask import Flask, request, make_response, Blueprint
2 import json
3 import platform
4 import sys
5 import hmac
6 import hashlib
7 from time import time
8 from .version import __version__
9
10
11 class SlackServer(Flask):
12 def __init__(self, signing_secret, endpoint, emitter, server):
13 self.signing_secret = signing_secret
14 self.emitter = emitter
15 self.endpoint = endpoint
16 self.package_info = self.get_package_info()
17
18 # If a server is passed in, bind the event handler routes to it,
19 # otherwise create a new Flask instance.
20 if server:
21 if isinstance(server, Flask) or isinstance(server, Blueprint):
22 self.bind_route(server)
23 else:
24 raise TypeError("Server must be an instance of Flask or Blueprint")
25 else:
26 Flask.__init__(self, __name__)
27 self.bind_route(self)
28
29 def get_package_info(self):
30 client_name = __name__.split('.')[0]
31 client_version = __version__ # Version is returned from version.py
32
33 # Collect the package info, Python version and OS version.
34 package_info = {
35 "client": "{0}/{1}".format(client_name, client_version),
36 "python": "Python/{v.major}.{v.minor}.{v.micro}".format(v=sys.version_info),
37 "system": "{0}/{1}".format(platform.system(), platform.release())
38 }
39
40 # Concatenate and format the user-agent string to be passed into request headers
41 ua_string = []
42 for key, val in package_info.items():
43 ua_string.append(val)
44
45 return " ".join(ua_string)
46
47 def verify_signature(self, timestamp, signature):
48 # Verify the request signature of the request sent from Slack
49 # Generate a new hash using the app's signing secret and request data
50
51 # Compare the generated hash and incoming request signature
52 # Python 2.7.6 doesn't support compare_digest
53 # It's recommended to use Python 2.7.7+
54 # noqa See https://docs.python.org/2/whatsnew/2.7.html#pep-466-network-security-enhancements-for-python-2-7
55 req = str.encode('v0:' + str(timestamp) + ':') + request.get_data()
56 request_hash = 'v0=' + hmac.new(
57 str.encode(self.signing_secret),
58 req, hashlib.sha256
59 ).hexdigest()
60
61 if hasattr(hmac, "compare_digest"):
62 # Compare byte strings for Python 2
63 if (sys.version_info[0] == 2):
64 return hmac.compare_digest(bytes(request_hash), bytes(signature))
65 else:
66 return hmac.compare_digest(request_hash, signature)
67 else:
68 if len(request_hash) != len(signature):
69 return False
70 result = 0
71 if isinstance(request_hash, bytes) and isinstance(signature, bytes):
72 for x, y in zip(request_hash, signature):
73 result |= x ^ y
74 else:
75 for x, y in zip(request_hash, signature):
76 result |= ord(x) ^ ord(y)
77 return result == 0
78
79 def bind_route(self, server):
80 @server.route(self.endpoint, methods=['GET', 'POST'])
81 def event():
82 # If a GET request is made, return 404.
83 if request.method == 'GET':
84 return make_response("These are not the slackbots you're looking for.", 404)
85
86 # Each request comes with request timestamp and request signature
87 # emit an error if the timestamp is out of range
88 req_timestamp = request.headers.get('X-Slack-Request-Timestamp')
89 if abs(time() - int(req_timestamp)) > 60 * 5:
90 slack_exception = SlackEventAdapterException('Invalid request timestamp')
91 self.emitter.emit('error', slack_exception)
92 return make_response("", 403)
93
94 # Verify the request signature using the app's signing secret
95 # emit an error if the signature can't be verified
96 req_signature = request.headers.get('X-Slack-Signature')
97 if not self.verify_signature(req_timestamp, req_signature):
98 slack_exception = SlackEventAdapterException('Invalid request signature')
99 self.emitter.emit('error', slack_exception)
100 return make_response("", 403)
101
102 # Parse the request payload into JSON
103 event_data = json.loads(request.data.decode('utf-8'))
104
105 # Echo the URL verification challenge code back to Slack
106 if "challenge" in event_data:
107 return make_response(
108 event_data.get("challenge"), 200, {"content_type": "application/json"}
109 )
110
111 # Parse the Event payload and emit the event to the event listener
112 if "event" in event_data:
113 event_type = event_data["event"]["type"]
114 self.emitter.emit(event_type, event_data)
115 response = make_response("", 200)
116 response.headers['X-Slack-Powered-By'] = self.package_info
117 return response
118
119
120 class SlackEventAdapterException(Exception):
121 """
122 Base exception for all errors raised by the SlackClient library
123 """
124
125 def __init__(self, msg=None):
126 if msg is None:
127 # default error message
128 msg = "An error occurred in the SlackEventsApiAdapter library"
129 super(SlackEventAdapterException, self).__init__(msg)
130
[end of slackeventsapi/server.py]
</code>
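For reference, the request-signing scheme implemented by `verify_signature` in server.py above can be reproduced standalone on Python 3 (where `hmac.compare_digest` is always available); the secret, timestamp, and payload below are made up:

```python
import hashlib
import hmac


def verify_slack_signature(signing_secret, timestamp, body, signature):
    # Same scheme as server.py: HMAC-SHA256 over "v0:<timestamp>:<raw body>"
    basestring = b"v0:" + str(timestamp).encode() + b":" + body
    expected = "v0=" + hmac.new(
        signing_secret.encode(), basestring, hashlib.sha256
    ).hexdigest()
    # compare_digest is the constant-time comparison the manual byte loop
    # in server.py emulates for old interpreters
    return hmac.compare_digest(expected, signature)


secret = "8f742231b10e8888abcd99yyyzzz85a5"  # made-up secret for illustration
body = b'{"type": "event_callback"}'
ts = "1531420618"
good = "v0=" + hmac.new(
    secret.encode(), b"v0:" + ts.encode() + b":" + body, hashlib.sha256
).hexdigest()

print(verify_slack_signature(secret, ts, body, good))         # True
print(verify_slack_signature(secret, ts, b"tampered", good))  # False
```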
Here is an example of a patch file. It consists of changes to the code base. It specifies the file names, the line numbers of each change, and the removed and added lines. A single patch file can contain changes to multiple files.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
I need you to solve the provided issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the format shown above.
Respond below:
|
slackapi/python-slack-events-api
|
0c0ce604b502508622fb14c278a0d64841fa32e3
|
Passing Flask app proxy as server
Hi Guys,
I have an app factory on my setup and the app object usually it is invoked as :
`from flask import current_app as app`
However, the slackeventsapi complains about the app object :
`TypeError("Server must be an instance of Flask")`
I have fixed adding the following to server.py :
`from werkzeug.local import LocalProxy # Importing the localproxy class`
Line 25
Changed from :
` if isinstance(server, Flask):`
to :
`if isinstance(server, Flask) or isinstance(server, LocalProxy):`
Basically, if a Flask app proxy is passed the api will carry on without complaining since it has the same methods as the Flask app object.
I hope this help other people and it is considered as a solution if more information is needed I am help to provide.
Thanks for the good work with the API.
### What type of issue is this? (place an `x` in one of the `[ ]`)
- [X] bug ?
- [X] enhancement (feature request)
- [ ] question
- [ ] documentation related
- [ ] testing related
- [ ] discussion
### Requirements
* [X] I've read and understood the [Contributing guidelines](https://github.com/slackapi/python-slack-events-api/blob/master/.github/contributing.md) and have done my best effort to follow them.
* [X] I've read and agree to the [Code of Conduct](https://slackhq.github.io/code-of-conduct).
* [X] I've searched for any related issues and avoided creating a duplicate issue.
#### Reproducible in:
slackeventsapi version: slackeventsapi==2.1.0
python version: Python 3.7.3
OS version(s):
|
2020-06-12T06:58:10Z
|
<patch>
<patch>
diff --git a/example/current_app/main.py b/example/current_app/main.py
new file mode 100644
--- /dev/null
+++ b/example/current_app/main.py
@@ -0,0 +1,49 @@
+# ------------------
+# Only for running this script here
+import sys
+from os.path import dirname
+sys.path.insert(1, f"{dirname(__file__)}/../..")
+# ------------------
+
+import os
+from slack import WebClient
+import logging
+logging.basicConfig(level=logging.DEBUG)
+
+from flask import Flask
+
+app = Flask(__name__)
+
+with app.app_context():
+ from test_module.slack_app import slack_events_adapter
+
+ slack_bot_token = os.environ["SLACK_BOT_TOKEN"]
+ slack_client = WebClient(slack_bot_token)
+
+
+ @slack_events_adapter.on("message")
+ def handle_message(event_data):
+ message = event_data["event"]
+ if message.get("subtype") is None and "hi" in message.get('text'):
+ channel = message["channel"]
+ message = "Hi <@%s>! :tada:" % message["user"]
+ slack_client.chat_postMessage(channel=channel, text=message)
+
+
+ @slack_events_adapter.on("error")
+ def error_handler(err):
+ print("ERROR: " + str(err))
+
+# (Terminal A)
+# source env/bin/activate
+# (env) $ export SLACK_BOT_TOKEN=xoxb-***
+# (env) $ export SLACK_SIGNING_SECRET=**
+# (env) $ cd example/current_app
+# (env) $ FLASK_APP=main.py FLASK_ENV=development flask run --port 3000
+
+# (Terminal B)
+# ngrok http 3000
+
+# in Slack
+# /invite @{your app's bot user}
+# post a message "hi" in the channel
diff --git a/slackeventsapi/server.py b/slackeventsapi/server.py
--- a/slackeventsapi/server.py
+++ b/slackeventsapi/server.py
@@ -1,10 +1,13 @@
-from flask import Flask, request, make_response, Blueprint
+import hashlib
+import hmac
import json
import platform
import sys
-import hmac
-import hashlib
from time import time
+
+from flask import Flask, request, make_response, Blueprint
+from werkzeug.local import LocalProxy
+
from .version import __version__
@@ -18,10 +21,10 @@ def __init__(self, signing_secret, endpoint, emitter, server):
# If a server is passed in, bind the event handler routes to it,
# otherwise create a new Flask instance.
if server:
- if isinstance(server, Flask) or isinstance(server, Blueprint):
+ if isinstance(server, (Flask, Blueprint, LocalProxy)):
self.bind_route(server)
else:
- raise TypeError("Server must be an instance of Flask or Blueprint")
+ raise TypeError("Server must be an instance of Flask, Blueprint, or LocalProxy")
else:
Flask.__init__(self, __name__)
self.bind_route(self)
</patch>
</patch>
|
diff --git a/example/current_app/test_module/__init__.py b/example/current_app/test_module/__init__.py
new file mode 100644
diff --git a/example/current_app/test_module/slack_app.py b/example/current_app/test_module/slack_app.py
new file mode 100644
--- /dev/null
+++ b/example/current_app/test_module/slack_app.py
@@ -0,0 +1,16 @@
+# ------------------
+# Only for running this script here
+import logging
+import sys
+from os.path import dirname
+
+sys.path.insert(1, f"{dirname(__file__)}/../../..")
+logging.basicConfig(level=logging.DEBUG)
+# ------------------
+
+from flask import current_app as app
+from slackeventsapi import SlackEventAdapter
+import os
+
+slack_signing_secret = os.environ["SLACK_SIGNING_SECRET"]
+slack_events_adapter = SlackEventAdapter(slack_signing_secret, "/slack/events", app)
diff --git a/tests/test_server.py b/tests/test_server.py
--- a/tests/test_server.py
+++ b/tests/test_server.py
@@ -18,7 +18,7 @@ def test_server_not_flask():
with pytest.raises(TypeError) as e:
invalid_flask = "I am not a Flask"
SlackEventAdapter("SIGNING_SECRET", "/slack/events", invalid_flask)
- assert e.value.args[0] == 'Server must be an instance of Flask or Blueprint'
+ assert e.value.args[0] == 'Server must be an instance of Flask, Blueprint, or LocalProxy'
def test_blueprint_server():
|
1.0
| ||||
celery__celery-2598
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
CELERY_RESULT_SERIALIZER = 'json' breaks Exception marshaling
Setting `CELERY_RESULT_SERIALIZER = json` and raising an exception in the worker leads to this:
```
/path/to/lib/python2.7/site-packages/celery/result.py in get(self, timeout, propagate, interval, no_ack, follow_parents, EXCEPTION_STATES, PROPAGATE_STATES)
173 status = meta['status']
174 if status in PROPAGATE_STATES and propagate:
--> 175 raise meta['result']
176 return meta['result']
177 wait = get # deprecated alias to :meth:`get`.
TypeError: exceptions must be old-style classes or derived from BaseException, not dict
```
where the contents of `meta['result']` are (in my case):
```
{u'exc_message': u'unknown keys: nam', u'exc_type': u'ValueError'}
```
so it _looks_ like celery could convert the dict to a real exception before raising, but it does not currently. Changing back to `pickle` works as expected.
bug can be reproduced with the following:
``` python
# jsonresults.py
from celery.app.base import Celery
CELERY_RESULT_SERIALIZER = 'json'
CELERY_RESULT_BACKEND = 'amqp'
app = Celery(config_source=__name__)
@app.task
def hello():
raise ValueError('go away')
```
worker:
```
# python -m celery --app=jsonresults:app worker
```
caller:
``` python
import jsonresults
jsonresults.hello.delay().get()
```
</issue>
<code>
[start of README.rst]
1 =================================
2 celery - Distributed Task Queue
3 =================================
4
5 .. image:: http://cloud.github.com/downloads/celery/celery/celery_128.png
6
7 |build-status| |coverage-status|
8
9 :Version: 3.2.0a1 (Cipater)
10 :Web: http://celeryproject.org/
11 :Download: http://pypi.python.org/pypi/celery/
12 :Source: http://github.com/celery/celery/
13 :Keywords: task queue, job queue, asynchronous, async, rabbitmq, amqp, redis,
14 python, webhooks, queue, distributed
15
16 --
17
18 What is a Task Queue?
19 =====================
20
21 Task queues are used as a mechanism to distribute work across threads or
22 machines.
23
24 A task queue's input is a unit of work, called a task, dedicated worker
25 processes then constantly monitor the queue for new work to perform.
26
27 Celery communicates via messages, usually using a broker
28 to mediate between clients and workers. To initiate a task a client puts a
29 message on the queue, the broker then delivers the message to a worker.
30
31 A Celery system can consist of multiple workers and brokers, giving way
32 to high availability and horizontal scaling.
33
34 Celery is a library written in Python, but the protocol can be implemented in
35 any language. So far there's RCelery_ for the Ruby programming language, and a
36 `PHP client`_, but language interoperability can also be achieved
37 by `using webhooks`_.
38
39 .. _RCelery: https://github.com/leapfrogonline/rcelery
40 .. _`PHP client`: https://github.com/gjedeer/celery-php
41 .. _`using webhooks`:
42 http://docs.celeryproject.org/en/latest/userguide/remote-tasks.html
43
44 What do I need?
45 ===============
46
47 Celery version 3.0 runs on,
48
49 - Python (2.6, 2.7, 3.3, 3.4)
50 - PyPy (1.8, 1.9)
51 - Jython (2.5, 2.7).
52
53 This is the last version to support Python 2.5,
54 and from Celery 3.1, Python 2.6 or later is required.
55 The last version to support Python 2.4 was Celery series 2.2.
56
57 *Celery* is usually used with a message broker to send and receive messages.
58 The RabbitMQ, Redis transports are feature complete,
59 but there's also experimental support for a myriad of other solutions, including
60 using SQLite for local development.
61
62 *Celery* can run on a single machine, on multiple machines, or even
63 across datacenters.
64
65 Get Started
66 ===========
67
68 If this is the first time you're trying to use Celery, or you are
69 new to Celery 3.0 coming from previous versions then you should read our
70 getting started tutorials:
71
72 - `First steps with Celery`_
73
74 Tutorial teaching you the bare minimum needed to get started with Celery.
75
76 - `Next steps`_
77
78 A more complete overview, showing more features.
79
80 .. _`First steps with Celery`:
81 http://docs.celeryproject.org/en/latest/getting-started/first-steps-with-celery.html
82
83 .. _`Next steps`:
84 http://docs.celeryproject.org/en/latest/getting-started/next-steps.html
85
86 Celery is...
87 ==========
88
89 - **Simple**
90
91 Celery is easy to use and maintain, and does *not need configuration files*.
92
93 It has an active, friendly community you can talk to for support,
94 including a `mailing-list`_ and an IRC channel.
95
96 Here's one of the simplest applications you can make::
97
98 from celery import Celery
99
100 app = Celery('hello', broker='amqp://guest@localhost//')
101
102 @app.task
103 def hello():
104 return 'hello world'
105
106 - **Highly Available**
107
108 Workers and clients will automatically retry in the event
109 of connection loss or failure, and some brokers support
110 HA in way of *Master/Master* or *Master/Slave* replication.
111
112 - **Fast**
113
114 A single Celery process can process millions of tasks a minute,
115 with sub-millisecond round-trip latency (using RabbitMQ,
116 py-librabbitmq, and optimized settings).
117
118 - **Flexible**
119
120 Almost every part of *Celery* can be extended or used on its own,
121 Custom pool implementations, serializers, compression schemes, logging,
122 schedulers, consumers, producers, autoscalers, broker transports and much more.
123
124 It supports...
125 ============
126
127 - **Message Transports**
128
129 - RabbitMQ_, Redis_,
130 - MongoDB_ (experimental), Amazon SQS (experimental),
131 - CouchDB_ (experimental), SQLAlchemy_ (experimental),
132 - Django ORM (experimental), `IronMQ`_
133 - and more...
134
135 - **Concurrency**
136
137 - Prefork, Eventlet_, gevent_, threads/single threaded
138
139 - **Result Stores**
140
141 - AMQP, Redis
142 - memcached, MongoDB
143 - SQLAlchemy, Django ORM
144 - Apache Cassandra, IronCache
145
146 - **Serialization**
147
148 - *pickle*, *json*, *yaml*, *msgpack*.
149 - *zlib*, *bzip2* compression.
150 - Cryptographic message signing.
151
152 .. _`Eventlet`: http://eventlet.net/
153 .. _`gevent`: http://gevent.org/
154
155 .. _RabbitMQ: http://rabbitmq.com
156 .. _Redis: http://redis.io
157 .. _MongoDB: http://mongodb.org
158 .. _Beanstalk: http://kr.github.com/beanstalkd
159 .. _CouchDB: http://couchdb.apache.org
160 .. _SQLAlchemy: http://sqlalchemy.org
161 .. _`IronMQ`: http://iron.io
162
163 Framework Integration
164 =====================
165
166 Celery is easy to integrate with web frameworks, some of which even have
167 integration packages:
168
169 +--------------------+------------------------+
170 | `Django`_ | not needed |
171 +--------------------+------------------------+
172 | `Pyramid`_ | `pyramid_celery`_ |
173 +--------------------+------------------------+
174 | `Pylons`_ | `celery-pylons`_ |
175 +--------------------+------------------------+
176 | `Flask`_ | not needed |
177 +--------------------+------------------------+
178 | `web2py`_ | `web2py-celery`_ |
179 +--------------------+------------------------+
180 | `Tornado`_ | `tornado-celery`_ |
181 +--------------------+------------------------+
182
183 The integration packages are not strictly necessary, but they can make
184 development easier, and sometimes they add important hooks like closing
185 database connections at ``fork``.
186
187 .. _`Django`: http://djangoproject.com/
188 .. _`Pylons`: http://www.pylonsproject.org/
189 .. _`Flask`: http://flask.pocoo.org/
190 .. _`web2py`: http://web2py.com/
191 .. _`Bottle`: http://bottlepy.org/
192 .. _`Pyramid`: http://docs.pylonsproject.org/en/latest/docs/pyramid.html
193 .. _`pyramid_celery`: http://pypi.python.org/pypi/pyramid_celery/
194 .. _`django-celery`: http://pypi.python.org/pypi/django-celery
195 .. _`celery-pylons`: http://pypi.python.org/pypi/celery-pylons
196 .. _`web2py-celery`: http://code.google.com/p/web2py-celery/
197 .. _`Tornado`: http://www.tornadoweb.org/
198 .. _`tornado-celery`: http://github.com/mher/tornado-celery/
199
200 .. _celery-documentation:
201
202 Documentation
203 =============
204
205 The `latest documentation`_ with user guides, tutorials and API reference
206 is hosted at Read The Docs.
207
208 .. _`latest documentation`: http://docs.celeryproject.org/en/latest/
209
210 .. _celery-installation:
211
212 Installation
213 ============
214
215 You can install Celery either via the Python Package Index (PyPI)
216 or from source.
217
218 To install using `pip`,::
219
220 $ pip install -U Celery
221
222 To install using `easy_install`,::
223
224 $ easy_install -U Celery
225
226 .. _bundles:
227
228 Bundles
229 -------
230
231 Celery also defines a group of bundles that can be used
232 to install Celery and the dependencies for a given feature.
233
234 You can specify these in your requirements or on the ``pip`` command-line
235 by using brackets. Multiple bundles can be specified by separating them by
236 commas.
237 ::
238
239 $ pip install "celery[librabbitmq]"
240
241 $ pip install "celery[librabbitmq,redis,auth,msgpack]"
242
243 The following bundles are available:
244
245 Serializers
246 ~~~~~~~~~~~
247
248 :celery[auth]:
249 for using the auth serializer.
250
251 :celery[msgpack]:
252 for using the msgpack serializer.
253
254 :celery[yaml]:
255 for using the yaml serializer.
256
257 Concurrency
258 ~~~~~~~~~~~
259
260 :celery[eventlet]:
261 for using the eventlet pool.
262
263 :celery[gevent]:
264 for using the gevent pool.
265
266 :celery[threads]:
267 for using the thread pool.
268
269 Transports and Backends
270 ~~~~~~~~~~~~~~~~~~~~~~~
271
272 :celery[librabbitmq]:
273 for using the librabbitmq C library.
274
275 :celery[redis]:
276 for using Redis as a message transport or as a result backend.
277
278 :celery[mongodb]:
279 for using MongoDB as a message transport (*experimental*),
280 or as a result backend (*supported*).
281
282 :celery[sqs]:
283 for using Amazon SQS as a message transport (*experimental*).
284
285 :celery[memcache]:
286 for using memcached as a result backend.
287
288 :celery[cassandra]:
289 for using Apache Cassandra as a result backend.
290
291 :celery[couchdb]:
292 for using CouchDB as a message transport (*experimental*).
293
294 :celery[couchbase]:
295 for using CouchBase as a result backend.
296
297 :celery[beanstalk]:
298 for using Beanstalk as a message transport (*experimental*).
299
300 :celery[zookeeper]:
301 for using Zookeeper as a message transport.
302
303 :celery[zeromq]:
304 for using ZeroMQ as a message transport (*experimental*).
305
306 :celery[sqlalchemy]:
307 for using SQLAlchemy as a message transport (*experimental*),
308 or as a result backend (*supported*).
309
310 :celery[pyro]:
311 for using the Pyro4 message transport (*experimental*).
312
313 :celery[slmq]:
314 for using the SoftLayer Message Queue transport (*experimental*).
315
316 .. _celery-installing-from-source:
317
318 Downloading and installing from source
319 --------------------------------------
320
321 Download the latest version of Celery from
322 http://pypi.python.org/pypi/celery/
323
324 You can install it by doing the following::
325
326 $ tar xvfz celery-0.0.0.tar.gz
327 $ cd celery-0.0.0
328 $ python setup.py build
329 # python setup.py install
330
331 The last command must be executed as a privileged user if
332 you are not currently using a virtualenv.
333
334 .. _celery-installing-from-git:
335
336 Using the development version
337 -----------------------------
338
339 With pip
340 ~~~~~~~~
341
342 The Celery development version also requires the development
343 versions of ``kombu``, ``amqp`` and ``billiard``.
344
345 You can install the latest snapshot of these using the following
346 pip commands::
347
348 $ pip install https://github.com/celery/celery/zipball/master#egg=celery
349 $ pip install https://github.com/celery/billiard/zipball/master#egg=billiard
350 $ pip install https://github.com/celery/py-amqp/zipball/master#egg=amqp
351 $ pip install https://github.com/celery/kombu/zipball/master#egg=kombu
352
353 With git
354 ~~~~~~~~
355
356 Please see the Contributing section.
357
358 .. _getting-help:
359
360 Getting Help
361 ============
362
363 .. _mailing-list:
364
365 Mailing list
366 ------------
367
368 For discussions about the usage, development, and future of celery,
369 please join the `celery-users`_ mailing list.
370
371 .. _`celery-users`: http://groups.google.com/group/celery-users/
372
373 .. _irc-channel:
374
375 IRC
376 ---
377
378 Come chat with us on IRC. The **#celery** channel is located at the `Freenode`_
379 network.
380
381 .. _`Freenode`: http://freenode.net
382
383 .. _bug-tracker:
384
385 Bug tracker
386 ===========
387
388 If you have any suggestions, bug reports or annoyances please report them
389 to our issue tracker at http://github.com/celery/celery/issues/
390
391 .. _wiki:
392
393 Wiki
394 ====
395
396 http://wiki.github.com/celery/celery/
397
398 .. _contributing-short:
399
400 Contributing
401 ============
402
403 Development of `celery` happens at Github: http://github.com/celery/celery
404
405 You are highly encouraged to participate in the development
406 of `celery`. If you don't like Github (for some reason) you're welcome
407 to send regular patches.
408
409 Be sure to also read the `Contributing to Celery`_ section in the
410 documentation.
411
412 .. _`Contributing to Celery`:
413 http://docs.celeryproject.org/en/master/contributing.html
414
415 .. _license:
416
417 License
418 =======
419
420 This software is licensed under the `New BSD License`. See the ``LICENSE``
421 file in the top distribution directory for the full license text.
422
423 .. # vim: syntax=rst expandtab tabstop=4 shiftwidth=4 shiftround
424
425
426 .. image:: https://d2weczhvl823v0.cloudfront.net/celery/celery/trend.png
427 :alt: Bitdeli badge
428 :target: https://bitdeli.com/free
429
430 .. |build-status| image:: https://travis-ci.org/celery/celery.svg?branch=master
431 :target: https://travis-ci.org/celery/celery
432 .. |coverage-status| image:: https://coveralls.io/repos/celery/celery/badge.svg
433 :target: https://coveralls.io/r/celery/celery
434
[end of README.rst]
[start of celery/backends/amqp.py]
1 # -*- coding: utf-8 -*-
2 """
3 celery.backends.amqp
4 ~~~~~~~~~~~~~~~~~~~~
5
6 The AMQP result backend.
7
8 This backend publishes results as messages.
9
10 """
11 from __future__ import absolute_import
12
13 import socket
14
15 from collections import deque
16 from operator import itemgetter
17
18 from kombu import Exchange, Queue, Producer, Consumer
19
20 from celery import states
21 from celery.exceptions import TimeoutError
22 from celery.five import range, monotonic
23 from celery.utils.functional import dictfilter
24 from celery.utils.log import get_logger
25 from celery.utils.timeutils import maybe_s_to_ms
26
27 from .base import BaseBackend
28
29 __all__ = ['BacklogLimitExceeded', 'AMQPBackend']
30
31 logger = get_logger(__name__)
32
33
34 class BacklogLimitExceeded(Exception):
35 """Too much state history to fast-forward."""
36
37
38 def repair_uuid(s):
39     # Historically the dashes in UUIDs are removed from AMQ entity names,
40     # but there is no known reason to do so. Hopefully we'll be able to fix
41 # this in v4.0.
42 return '%s-%s-%s-%s-%s' % (s[:8], s[8:12], s[12:16], s[16:20], s[20:])
43
44
45 class NoCacheQueue(Queue):
46 can_cache_declaration = False
47
48
49 class AMQPBackend(BaseBackend):
50 """Publishes results by sending messages."""
51 Exchange = Exchange
52 Queue = NoCacheQueue
53 Consumer = Consumer
54 Producer = Producer
55
56 BacklogLimitExceeded = BacklogLimitExceeded
57
58 persistent = True
59 supports_autoexpire = True
60 supports_native_join = True
61
62 retry_policy = {
63 'max_retries': 20,
64 'interval_start': 0,
65 'interval_step': 1,
66 'interval_max': 1,
67 }
68
69 def __init__(self, app, connection=None, exchange=None, exchange_type=None,
70 persistent=None, serializer=None, auto_delete=True, **kwargs):
71 super(AMQPBackend, self).__init__(app, **kwargs)
72 conf = self.app.conf
73 self._connection = connection
74 self.persistent = self.prepare_persistent(persistent)
75 self.delivery_mode = 2 if self.persistent else 1
76 exchange = exchange or conf.CELERY_RESULT_EXCHANGE
77 exchange_type = exchange_type or conf.CELERY_RESULT_EXCHANGE_TYPE
78 self.exchange = self._create_exchange(
79 exchange, exchange_type, self.delivery_mode,
80 )
81 self.serializer = serializer or conf.CELERY_RESULT_SERIALIZER
82 self.auto_delete = auto_delete
83 self.queue_arguments = dictfilter({
84 'x-expires': maybe_s_to_ms(self.expires),
85 })
86
87 def _create_exchange(self, name, type='direct', delivery_mode=2):
88 return self.Exchange(name=name,
89 type=type,
90 delivery_mode=delivery_mode,
91 durable=self.persistent,
92 auto_delete=False)
93
94 def _create_binding(self, task_id):
95 name = self.rkey(task_id)
96 return self.Queue(name=name,
97 exchange=self.exchange,
98 routing_key=name,
99 durable=self.persistent,
100 auto_delete=self.auto_delete,
101 queue_arguments=self.queue_arguments)
102
103 def revive(self, channel):
104 pass
105
106 def rkey(self, task_id):
107 return task_id.replace('-', '')
108
109 def destination_for(self, task_id, request):
110 if request:
111 return self.rkey(task_id), request.correlation_id or task_id
112 return self.rkey(task_id), task_id
113
114 def store_result(self, task_id, result, status,
115 traceback=None, request=None, **kwargs):
116 """Send task return value and status."""
117 routing_key, correlation_id = self.destination_for(task_id, request)
118 if not routing_key:
119 return
120 with self.app.amqp.producer_pool.acquire(block=True) as producer:
121 producer.publish(
122 {'task_id': task_id, 'status': status,
123 'result': self.encode_result(result, status),
124 'traceback': traceback,
125 'children': self.current_task_children(request)},
126 exchange=self.exchange,
127 routing_key=routing_key,
128 correlation_id=correlation_id,
129 serializer=self.serializer,
130 retry=True, retry_policy=self.retry_policy,
131 declare=self.on_reply_declare(task_id),
132 delivery_mode=self.delivery_mode,
133 )
134 return result
135
136 def on_reply_declare(self, task_id):
137 return [self._create_binding(task_id)]
138
139 def wait_for(self, task_id, timeout=None, cache=True,
140 no_ack=True, on_interval=None,
141 READY_STATES=states.READY_STATES,
142 PROPAGATE_STATES=states.PROPAGATE_STATES,
143 **kwargs):
144 cached_meta = self._cache.get(task_id)
145 if cache and cached_meta and \
146 cached_meta['status'] in READY_STATES:
147 return cached_meta
148 else:
149 try:
150 return self.consume(task_id, timeout=timeout, no_ack=no_ack,
151 on_interval=on_interval)
152 except socket.timeout:
153 raise TimeoutError('The operation timed out.')
154
155 def get_task_meta(self, task_id, backlog_limit=1000):
156 # Polling and using basic_get
157 with self.app.pool.acquire_channel(block=True) as (_, channel):
158 binding = self._create_binding(task_id)(channel)
159 binding.declare()
160
161 prev = latest = acc = None
162 for i in range(backlog_limit): # spool ffwd
163 acc = binding.get(
164 accept=self.accept, no_ack=False,
165 )
166 if not acc: # no more messages
167 break
168 if acc.payload['task_id'] == task_id:
169 prev, latest = latest, acc
170 if prev:
171 # backends are not expected to keep history,
172 # so we delete everything except the most recent state.
173 prev.ack()
174 prev = None
175 else:
176 raise self.BacklogLimitExceeded(task_id)
177
178 if latest:
179 payload = self._cache[task_id] = latest.payload
180 latest.requeue()
181 return payload
182 else:
183 # no new state, use previous
184 try:
185 return self._cache[task_id]
186 except KeyError:
187 # result probably pending.
188 return {'status': states.PENDING, 'result': None}
189 poll = get_task_meta # XXX compat
190
191 def drain_events(self, connection, consumer,
192 timeout=None, on_interval=None, now=monotonic, wait=None):
193 wait = wait or connection.drain_events
194 results = {}
195
196 def callback(meta, message):
197 if meta['status'] in states.READY_STATES:
198 results[meta['task_id']] = meta
199
200 consumer.callbacks[:] = [callback]
201 time_start = now()
202
203 while 1:
204 # Total time spent may exceed a single call to wait()
205 if timeout and now() - time_start >= timeout:
206 raise socket.timeout()
207 try:
208 wait(timeout=1)
209 except socket.timeout:
210 pass
211 if on_interval:
212 on_interval()
213 if results: # got event on the wanted channel.
214 break
215 self._cache.update(results)
216 return results
217
218 def consume(self, task_id, timeout=None, no_ack=True, on_interval=None):
219 wait = self.drain_events
220 with self.app.pool.acquire_channel(block=True) as (conn, channel):
221 binding = self._create_binding(task_id)
222 with self.Consumer(channel, binding,
223 no_ack=no_ack, accept=self.accept) as consumer:
224 while 1:
225 try:
226 return wait(
227 conn, consumer, timeout, on_interval)[task_id]
228 except KeyError:
229 continue
230
231 def _many_bindings(self, ids):
232 return [self._create_binding(task_id) for task_id in ids]
233
234 def get_many(self, task_ids, timeout=None, no_ack=True, on_message=None,
235 now=monotonic, getfields=itemgetter('status', 'task_id'),
236 READY_STATES=states.READY_STATES,
237 PROPAGATE_STATES=states.PROPAGATE_STATES, **kwargs):
238 with self.app.pool.acquire_channel(block=True) as (conn, channel):
239 ids = set(task_ids)
240 cached_ids = set()
241 mark_cached = cached_ids.add
242 for task_id in ids:
243 try:
244 cached = self._cache[task_id]
245 except KeyError:
246 pass
247 else:
248 if cached['status'] in READY_STATES:
249 yield task_id, cached
250 mark_cached(task_id)
251 ids.difference_update(cached_ids)
252 results = deque()
253 push_result = results.append
254 push_cache = self._cache.__setitem__
255 decode_result = self.meta_from_decoded
256
257 def _on_message(message):
258 body = decode_result(message.decode())
259 if on_message is not None:
260 on_message(body)
261 state, uid = getfields(body)
262 if state in READY_STATES:
263 push_result(body) \
264 if uid in task_ids else push_cache(uid, body)
265
266 bindings = self._many_bindings(task_ids)
267 with self.Consumer(channel, bindings, on_message=_on_message,
268 accept=self.accept, no_ack=no_ack):
269 wait = conn.drain_events
270 popleft = results.popleft
271 while ids:
272 wait(timeout=timeout)
273 while results:
274 state = popleft()
275 task_id = state['task_id']
276 ids.discard(task_id)
277 push_cache(task_id, state)
278 yield task_id, state
279
280 def reload_task_result(self, task_id):
281 raise NotImplementedError(
282 'reload_task_result is not supported by this backend.')
283
284 def reload_group_result(self, task_id):
285 """Reload group result, even if it has been previously fetched."""
286 raise NotImplementedError(
287 'reload_group_result is not supported by this backend.')
288
289 def save_group(self, group_id, result):
290 raise NotImplementedError(
291 'save_group is not supported by this backend.')
292
293 def restore_group(self, group_id, cache=True):
294 raise NotImplementedError(
295 'restore_group is not supported by this backend.')
296
297 def delete_group(self, group_id):
298 raise NotImplementedError(
299 'delete_group is not supported by this backend.')
300
301 def __reduce__(self, args=(), kwargs={}):
302 kwargs.update(
303 connection=self._connection,
304 exchange=self.exchange.name,
305 exchange_type=self.exchange.type,
306 persistent=self.persistent,
307 serializer=self.serializer,
308 auto_delete=self.auto_delete,
309 expires=self.expires,
310 )
311 return super(AMQPBackend, self).__reduce__(args, kwargs)
312
[end of celery/backends/amqp.py]
</code>
Here is an example of a patch file. It consists of changes to the code base. It specifies the file names, the line numbers of each change, and the removed and added lines. A single patch file can contain changes to multiple files.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
I need you to solve the provided issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the format shown above.
Respond below:
|
celery/celery
|
6592ff64b6b024a4b68abcc53b151888fdf0dee3
|
CELERY_RESULT_SERIALIZER = 'json' breaks Exception marshaling
Setting `CELERY_RESULT_SERIALIZER = json` and raising an exception in the worker leads to this:
```
/path/to/lib/python2.7/site-packages/celery/result.py in get(self, timeout, propagate, interval, no_ack, follow_parents, EXCEPTION_STATES, PROPAGATE_STATES)
173 status = meta['status']
174 if status in PROPAGATE_STATES and propagate:
--> 175 raise meta['result']
176 return meta['result']
177 wait = get # deprecated alias to :meth:`get`.
TypeError: exceptions must be old-style classes or derived from BaseException, not dict
```
where the contents of `meta['result']` are (in my case):
```
{u'exc_message': u'unknown keys: nam', u'exc_type': u'ValueError'}
```
so it _looks_ like celery could convert the dict to a real exception before raising, but it does not currently. Changing back to `pickle` works as expected.
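
To illustrate the conversion being asked for, here is a minimal sketch of rebuilding a real exception from the decoded meta dict. The helper name `exception_from_meta` and the builtins-only type lookup are assumptions for illustration (a real fix would also need to handle non-builtin exception classes), not Celery's API:

```python
import builtins

def exception_from_meta(result):
    # result is the JSON-decoded meta, e.g.
    # {'exc_type': 'ValueError', 'exc_message': 'go away'}
    exc_type = getattr(builtins, result['exc_type'], Exception)
    # Guard against names that resolve to non-exception objects.
    if not (isinstance(exc_type, type) and issubclass(exc_type, BaseException)):
        exc_type = Exception
    return exc_type(result['exc_message'])
```

With a helper like this, the caller could raise a real `ValueError('go away')` instead of a plain dict.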
bug can be reproduced with the following:
``` python
# jsonresults.py
from celery.app.base import Celery
CELERY_RESULT_SERIALIZER = 'json'
CELERY_RESULT_BACKEND = 'amqp'
app = Celery(config_source=__name__)
@app.task
def hello():
raise ValueError('go away')
```
worker:
```
# python -m celery --app=jsonresults:app worker
```
caller:
``` python
import jsonresults
jsonresults.hello.delay().get()
```
|
This is biting me as well. Any news?
|
2015-04-29T14:52:17Z
|
<patch>
<patch>
diff --git a/celery/backends/amqp.py b/celery/backends/amqp.py
--- a/celery/backends/amqp.py
+++ b/celery/backends/amqp.py
@@ -195,7 +195,7 @@ def drain_events(self, connection, consumer,
def callback(meta, message):
if meta['status'] in states.READY_STATES:
- results[meta['task_id']] = meta
+ results[meta['task_id']] = self.meta_from_decoded(meta)
consumer.callbacks[:] = [callback]
time_start = now()
</patch>
</patch>
|
diff --git a/celery/tests/backends/test_amqp.py b/celery/tests/backends/test_amqp.py
--- a/celery/tests/backends/test_amqp.py
+++ b/celery/tests/backends/test_amqp.py
@@ -13,6 +13,7 @@
from celery.backends.amqp import AMQPBackend
from celery.exceptions import TimeoutError
from celery.five import Empty, Queue, range
+from celery.result import AsyncResult
from celery.utils import uuid
from celery.tests.case import (
@@ -246,10 +247,20 @@ def test_wait_for(self):
with self.assertRaises(TimeoutError):
b.wait_for(tid, timeout=0.01, cache=False)
- def test_drain_events_remaining_timeouts(self):
+ def test_drain_events_decodes_exceptions_in_meta(self):
+ tid = uuid()
+ b = self.create_backend(serializer="json")
+ b.store_result(tid, RuntimeError("aap"), states.FAILURE)
+ result = AsyncResult(tid, backend=b)
- class Connection(object):
+ with self.assertRaises(Exception) as cm:
+ result.get()
+ self.assertEqual(cm.exception.__class__.__name__, "RuntimeError")
+ self.assertEqual(str(cm.exception), "aap")
+
+ def test_drain_events_remaining_timeouts(self):
+ class Connection(object):
def drain_events(self, timeout=None):
pass
|
1.0
|