Samirbagda committed
Commit ff5ef4b · verified · 1 Parent(s): 9688a91

Create Openai whisper smalll.data

Files changed (1):
  1. Openai whisper smalll.data +471 -0

Openai whisper smalll.data ADDED
@@ -0,0 +1,471 @@
---
language:
- en
- zh
- de
- es
- ru
- ko
- fr
- ja
- pt
- tr
- pl
- ca
- nl
- ar
- sv
- it
- id
- hi
- fi
- vi
- he
- uk
- el
- ms
- cs
- ro
- da
- hu
- ta
- no
- th
- ur
- hr
- bg
- lt
- la
- mi
- ml
- cy
- sk
- te
- fa
- lv
- bn
- sr
- az
- sl
- kn
- et
- mk
- br
- eu
- is
- hy
- ne
- mn
- bs
- kk
- sq
- sw
- gl
- mr
- pa
- si
- km
- sn
- yo
- so
- af
- oc
- ka
- be
- tg
- sd
- gu
- am
- yi
- lo
- uz
- fo
- ht
- ps
- tk
- nn
- mt
- sa
- lb
- my
- bo
- tl
- mg
- as
- tt
- haw
- ln
- ha
- ba
- jw
- su
tags:
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
widget:
- example_title: Librispeech sample 1
  src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
  src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
model-index:
- name: whisper-small
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: LibriSpeech (clean)
      type: librispeech_asr
      config: clean
      split: test
      args:
        language: en
    metrics:
    - name: Test WER
      type: wer
      value: 3.432213777886737
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: LibriSpeech (other)
      type: librispeech_asr
      config: other
      split: test
      args:
        language: en
    metrics:
    - name: Test WER
      type: wer
      value: 7.628304527060248
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice 11.0
      type: mozilla-foundation/common_voice_11_0
      config: hi
      split: test
      args:
        language: hi
    metrics:
    - name: Test WER
      type: wer
      value: 87.3
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice 13.0
      type: mozilla-foundation/common_voice_13_0
      config: dv
      split: test
      args:
        language: dv
    metrics:
    - name: Wer
      type: wer
      value: 125.69809089960707
pipeline_tag: automatic-speech-recognition
license: apache-2.0
---
# Whisper

Whisper is a pre-trained model for automatic speech recognition (ASR) and speech translation. Trained on 680k hours
of labelled data, Whisper models demonstrate a strong ability to generalise to many datasets and domains **without** the need
for fine-tuning.

Whisper was proposed in the paper [Robust Speech Recognition via Large-Scale Weak Supervision](https://arxiv.org/abs/2212.04356)
by Alec Radford et al. from OpenAI. The original code repository can be found [here](https://github.com/openai/whisper).

**Disclaimer**: Content for this model card has partly been written by the Hugging Face team, and parts of it were
copied and pasted from the original model card.

## Model details

Whisper is a Transformer-based encoder-decoder model, also referred to as a _sequence-to-sequence_ model.
It was trained on 680k hours of labelled speech data annotated using large-scale weak supervision.

The models were trained on either English-only data or multilingual data. The English-only models were trained
on the task of speech recognition. The multilingual models were trained on both speech recognition and speech
translation. For speech recognition, the model predicts transcriptions in the *same* language as the audio.
For speech translation, the model predicts transcriptions in a *different* language from the audio.

Whisper checkpoints come in five configurations of varying model sizes.
The smallest four are trained on either English-only or multilingual data.
The largest checkpoints are multilingual only. All ten of the pre-trained checkpoints
are available on the [Hugging Face Hub](https://huggingface.co/models?search=openai/whisper). The
checkpoints are summarised in the following table with links to the models on the Hub:

| Size     | Parameters | English-only                                         | Multilingual                                        |
|----------|------------|------------------------------------------------------|-----------------------------------------------------|
| tiny     | 39 M       | [✓](https://huggingface.co/openai/whisper-tiny.en)   | [✓](https://huggingface.co/openai/whisper-tiny)     |
| base     | 74 M       | [✓](https://huggingface.co/openai/whisper-base.en)   | [✓](https://huggingface.co/openai/whisper-base)     |
| small    | 244 M      | [✓](https://huggingface.co/openai/whisper-small.en)  | [✓](https://huggingface.co/openai/whisper-small)    |
| medium   | 769 M      | [✓](https://huggingface.co/openai/whisper-medium.en) | [✓](https://huggingface.co/openai/whisper-medium)   |
| large    | 1550 M     | x                                                    | [✓](https://huggingface.co/openai/whisper-large)    |
| large-v2 | 1550 M     | x                                                    | [✓](https://huggingface.co/openai/whisper-large-v2) |

# Usage

To transcribe audio samples, the model has to be used alongside a [`WhisperProcessor`](https://huggingface.co/docs/transformers/model_doc/whisper#transformers.WhisperProcessor).

The `WhisperProcessor` is used to:
1. Pre-process the audio inputs (converting them to log-Mel spectrograms for the model)
2. Post-process the model outputs (converting them from tokens to text)

The model is informed of which task to perform (transcription or translation) by passing the appropriate "context tokens". These context tokens
are a sequence of tokens given to the decoder at the start of the decoding process, and they take the following order:
1. The transcription always starts with the `<|startoftranscript|>` token
2. The second token is the language token (e.g. `<|en|>` for English)
3. The third token is the "task token". It can take one of two values: `<|transcribe|>` for speech recognition or `<|translate|>` for speech translation
4. In addition, a `<|notimestamps|>` token is added if the model should not include timestamp prediction

Thus, a typical sequence of context tokens might look as follows:
```
<|startoftranscript|> <|en|> <|transcribe|> <|notimestamps|>
```
This tells the model to decode in English, under the task of speech recognition, and not to predict timestamps.

These tokens can either be forced or un-forced. If they are forced, the model is made to predict each token at
each position. This allows one to control the output language and task for the Whisper model. If they are un-forced,
the Whisper model will automatically predict the output language and task itself.

The context tokens can be set accordingly:

```python
model.config.forced_decoder_ids = processor.get_decoder_prompt_ids(language="english", task="transcribe")
```

This forces the model to predict in English under the task of speech recognition.

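To see these context tokens concretely, the prompt ids returned by `get_decoder_prompt_ids` can be decoded back to text. The following is a minimal sketch: the exact token ids depend on the checkpoint's tokenizer, but decoding them should recover the language, task and timestamp tokens in the order described above:

```python
>>> from transformers import WhisperProcessor
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-small")
>>> # decoder prompt ids for English transcription without timestamps,
>>> # returned as (position, token_id) pairs starting at position 1
>>> forced_decoder_ids = processor.get_decoder_prompt_ids(language="english", task="transcribe")
>>> [processor.tokenizer.decode([token_id]) for _, token_id in forced_decoder_ids]
['<|en|>', '<|transcribe|>', '<|notimestamps|>']
```
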
## Transcription

### English to English
In this example, the context tokens are 'unforced', meaning the model automatically predicts the output language
(English) and task (transcribe).

```python
>>> from transformers import WhisperProcessor, WhisperForConditionalGeneration
>>> from datasets import load_dataset
>>> # load model and processor
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-small")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
>>> model.config.forced_decoder_ids = None
>>> # load dummy dataset and read audio files
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> sample = ds[0]["audio"]
>>> input_features = processor(sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt").input_features
>>> # generate token ids
>>> predicted_ids = model.generate(input_features)
>>> # decode token ids to text
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=False)
['<|startoftranscript|><|en|><|transcribe|><|notimestamps|> Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.<|endoftext|>']
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
[' Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.']
```
The context tokens can be removed from the start of the transcription by setting `skip_special_tokens=True`.

### French to French
The following example demonstrates French to French transcription by setting the decoder ids appropriately.

```python
>>> from transformers import WhisperProcessor, WhisperForConditionalGeneration
>>> from datasets import Audio, load_dataset
>>> # load model and processor
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-small")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
>>> forced_decoder_ids = processor.get_decoder_prompt_ids(language="french", task="transcribe")
>>> # load streaming dataset and read first audio sample
>>> ds = load_dataset("common_voice", "fr", split="test", streaming=True)
>>> ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
>>> input_speech = next(iter(ds))["audio"]
>>> input_features = processor(input_speech["array"], sampling_rate=input_speech["sampling_rate"], return_tensors="pt").input_features
>>> # generate token ids
>>> predicted_ids = model.generate(input_features, forced_decoder_ids=forced_decoder_ids)
>>> # decode token ids to text
>>> transcription = processor.batch_decode(predicted_ids)
['<|startoftranscript|><|fr|><|transcribe|><|notimestamps|> Un vrai travail intéressant va enfin être mené sur ce sujet.<|endoftext|>']
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
[' Un vrai travail intéressant va enfin être mené sur ce sujet.']
```

## Translation
Setting the task to "translate" forces the Whisper model to perform speech translation.

### French to English

```python
>>> from transformers import WhisperProcessor, WhisperForConditionalGeneration
>>> from datasets import Audio, load_dataset
>>> # load model and processor
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-small")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
>>> forced_decoder_ids = processor.get_decoder_prompt_ids(language="french", task="translate")
>>> # load streaming dataset and read first audio sample
>>> ds = load_dataset("common_voice", "fr", split="test", streaming=True)
>>> ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
>>> input_speech = next(iter(ds))["audio"]
>>> input_features = processor(input_speech["array"], sampling_rate=input_speech["sampling_rate"], return_tensors="pt").input_features
>>> # generate token ids
>>> predicted_ids = model.generate(input_features, forced_decoder_ids=forced_decoder_ids)
>>> # decode token ids to text
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
[' A very interesting work, we will finally be given on this subject.']
```
374
+ ## Evaluation
375
+
376
+ This code snippet shows how to evaluate Whisper Small on [LibriSpeech test-clean](https://huggingface.co/datasets/librispeech_asr):
377
+
378
+ ```python
379
+ >>> from datasets import load_dataset
380
+ >>> from transformers import WhisperForConditionalGeneration, WhisperProcessor
381
+ >>> import torch
382
+ >>> from evaluate import load
383
+ >>> librispeech_test_clean = load_dataset("librispeech_asr", "clean", split="test")
384
+ >>> processor = WhisperProcessor.from_pretrained("openai/whisper-small")
385
+ >>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small").to("cuda")
386
+ >>> def map_to_pred(batch):
387
+ >>> audio = batch["audio"]
388
+ >>> input_features = processor(audio["array"], sampling_rate=audio["sampling_rate"], return_tensors="pt").input_features
389
+ >>> batch["reference"] = processor.tokenizer._normalize(batch['text'])
390
+ >>>
391
+ >>> with torch.no_grad():
392
+ >>> predicted_ids = model.generate(input_features.to("cuda"))[0]
393
+ >>> transcription = processor.decode(predicted_ids)
394
+ >>> batch["prediction"] = processor.tokenizer._normalize(transcription)
395
+ >>> return batch
396
+ >>> result = librispeech_test_clean.map(map_to_pred)
397
+ >>> wer = load("wer")
398
+ >>> print(100 * wer.compute(references=result["reference"], predictions=result["prediction"]))
399
+ 3.432213777886737
400
+ ```

## Long-Form Transcription

The Whisper model is intrinsically designed to work on audio samples of up to 30s in duration. However, by using a chunking
algorithm, it can be used to transcribe audio samples of arbitrary length. This is possible through the Transformers
[`pipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline)
method. Chunking is enabled by setting `chunk_length_s=30` when instantiating the pipeline. With chunking enabled, the pipeline
can be run with batched inference. It can also be extended to predict sequence-level timestamps by passing `return_timestamps=True`:

```python
>>> import torch
>>> from transformers import pipeline
>>> from datasets import load_dataset
>>> device = "cuda:0" if torch.cuda.is_available() else "cpu"
>>> pipe = pipeline(
>>>     "automatic-speech-recognition",
>>>     model="openai/whisper-small",
>>>     chunk_length_s=30,
>>>     device=device,
>>> )
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> sample = ds[0]["audio"]
>>> prediction = pipe(sample.copy(), batch_size=8)["text"]
" Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel."
>>> # we can also return timestamps for the predictions
>>> prediction = pipe(sample.copy(), batch_size=8, return_timestamps=True)["chunks"]
[{'text': ' Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.',
  'timestamp': (0.0, 5.44)}]
```

Refer to the blog post [ASR Chunking](https://huggingface.co/blog/asr-chunking) for more details on the chunking algorithm.

## Fine-Tuning

The pre-trained Whisper model demonstrates a strong ability to generalise to different datasets and domains. However,
its predictive capabilities can be improved further for certain languages and tasks through *fine-tuning*. The blog
post [Fine-Tune Whisper with 🤗 Transformers](https://huggingface.co/blog/fine-tune-whisper) provides a step-by-step
guide to fine-tuning the Whisper model with as little as 5 hours of labelled data.

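To give a flavour of what fine-tuning involves, the sketch below shows only the data-preparation step: each training example needs log-Mel input features for the encoder and a tokenised transcription as decoder labels. The dataset and column names used here (`mozilla-foundation/common_voice_11_0`, `"sentence"`) are purely illustrative assumptions; any corpus with paired audio and text works, and the full training loop (data collator, `Seq2SeqTrainer`, etc.) is covered in the blog post above.

```python
>>> from datasets import Audio, load_dataset
>>> from transformers import WhisperProcessor
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-small")
>>> # illustrative corpus: streaming Common Voice Hindi; any dataset with audio + transcription columns works
>>> ds = load_dataset("mozilla-foundation/common_voice_11_0", "hi", split="train", streaming=True)
>>> ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
>>> def prepare_example(example):
>>>     audio = example["audio"]
>>>     # log-Mel spectrogram features consumed by the encoder
>>>     example["input_features"] = processor(audio["array"], sampling_rate=audio["sampling_rate"]).input_features[0]
>>>     # tokenised transcription used as the decoder labels
>>>     example["labels"] = processor.tokenizer(example["sentence"]).input_ids
>>>     return example
>>> ds = ds.map(prepare_example)
```
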
### Evaluated Use

The primary intended users of these models are AI researchers studying robustness, generalization, capabilities, biases, and constraints of the current model. However, Whisper is also potentially quite useful as an ASR solution for developers, especially for English speech recognition. We recognize that once models are released, it is impossible to restrict access to only "intended" uses or to draw reasonable guidelines around what is or is not research.

The models are primarily trained and evaluated on ASR and speech translation to English tasks. They show strong ASR results in ~10 languages. They may exhibit additional capabilities, particularly if fine-tuned on certain tasks like voice activity detection, speaker classification, or speaker diarization, but have not been robustly evaluated in these areas. We strongly recommend that users perform robust evaluations of the models in a particular context and domain before deploying them.

In particular, we caution against using Whisper models to transcribe recordings of individuals taken without their consent, or purporting to use these models for any kind of subjective classification. We recommend against use in high-risk domains like decision-making contexts, where flaws in accuracy can lead to pronounced flaws in outcomes. The models are intended to transcribe and translate speech; use of the model for classification is not only unevaluated but also inappropriate, particularly to infer human attributes.


## Training Data

The models are trained on 680,000 hours of audio and the corresponding transcripts collected from the internet. 65% of this data (or 438,000 hours) represents English-language audio and matched English transcripts, roughly 18% (or 126,000 hours) represents non-English audio and English transcripts, while the final 17% (or 117,000 hours) represents non-English audio and the corresponding transcript. This non-English data represents 98 different languages.

As discussed in [the accompanying paper](https://cdn.openai.com/papers/whisper.pdf), we see that performance on transcription in a given language is directly correlated with the amount of training data we employ in that language.


## Performance and Limitations

Our studies show that, compared to many existing ASR systems, the models exhibit improved robustness to accents, background noise, and technical language, as well as zero-shot translation from multiple languages into English; and that accuracy on speech recognition and translation is near the state-of-the-art level.

However, because the models are trained in a weakly supervised manner using large-scale noisy data, the predictions may include texts that are not actually spoken in the audio input (i.e. hallucination). We hypothesize that this happens because, given their general knowledge of language, the models combine trying to predict the next word in audio with trying to transcribe the audio itself.

Our models perform unevenly across languages, and we observe lower accuracy on low-resource and/or low-discoverability languages or languages where we have less training data. The models also exhibit disparate performance on different accents and dialects of particular languages, which may include higher word error rate across speakers of different genders, races, ages, or other demographic criteria. Our full evaluation results are presented in [the paper accompanying this release](https://cdn.openai.com/papers/whisper.pdf).

In addition, the sequence-to-sequence architecture of the model makes it prone to generating repetitive texts, which can be mitigated to some degree by beam search and temperature scheduling but not perfectly. Further analysis of these limitations is provided in [the paper](https://cdn.openai.com/papers/whisper.pdf). It is likely that this behavior and hallucinations may be worse in lower-resource and/or lower-discoverability languages.


## Broader Implications

We anticipate that Whisper models' transcription capabilities may be used for improving accessibility tools. While Whisper models cannot be used for real-time transcription out of the box, their speed and size suggest that others may be able to build applications on top of them that allow for near-real-time speech recognition and translation. The real value of beneficial applications built on top of Whisper models suggests that the disparate performance of these models may have real economic implications.

There are also potential dual-use concerns that come with releasing Whisper. While we hope the technology will be used primarily for beneficial purposes, making ASR technology more accessible could enable more actors to build capable surveillance technologies or scale up existing surveillance efforts, as the speed and accuracy allow for affordable automatic transcription and translation of large volumes of audio communication. Moreover, these models may have some capabilities to recognize specific individuals out of the box, which in turn presents safety concerns related both to dual use and disparate performance. In prac