Title: A Neural Architecture for Acoustic Question Answering

URL Source: https://arxiv.org/html/2106.06147

Jérôme Abdelnour, Jean Rouat, and Giampiero Salvi. J. Abdelnour and J. Rouat are with NECOTIS, Dept. of Electrical and Computer Engineering, Sherbrooke University {Jerome.Abdelnour,Jean.Rouat}@usherbrooke.ca. G. Salvi is with the Department of Electronic Systems, Norwegian University of Science and Technology, giampiero.salvi@ntnu.no, and with KTH Royal Institute of Technology, Dept. of Electrical Engineering and Computer Science.

###### Abstract

The goal of the Acoustic Question Answering (AQA) task is to answer a free-form text question about the content of an acoustic scene. It was inspired by the Visual Question Answering (VQA) task. In this paper, based on the previously introduced CLEAR dataset, we propose a new benchmark for AQA, namely CLEAR2, that emphasizes the specific challenges of acoustic inputs. These include the handling of variable-duration scenes, and scenes built with elementary sounds that differ between training and test set. We also introduce NAAQA, a neural architecture that leverages specific properties of acoustic inputs. The use of 1D convolutions in time and frequency to process 2D spectro-temporal representations of acoustic content shows promising results and enables reductions in model complexity. We show that time coordinate maps augment temporal localization capabilities, which enhances the performance of the network by ~17 percentage points. On the other hand, frequency coordinate maps have little influence on this task. NAAQA achieves 79.5% accuracy on the AQA task with ~4 times fewer parameters than the previously explored VQA model. We evaluate the performance of NAAQA on an independent data set reconstructed from DAQA. We also test the addition of a MALiMo module in our model on both CLEAR2 and DAQA. We provide a detailed analysis of the results for the different question types. We release the code to produce CLEAR2 as well as NAAQA to foster research in this newly emerging machine learning task.

###### Index Terms:

Audio, Question Answering, Reasoning, Temporal Reasoning, CLEAR, Coordconv, Auditory scene analysis

1 Introduction
--------------

Question answering (QA) tasks are examples of constrained and limited scenarios for research in reasoning. The agent’s task in QA is to answer questions based on a context. Text-based QA uses text corpora as context [[1](https://arxiv.org/html/2106.06147v3#bib.bibx1), [2](https://arxiv.org/html/2106.06147v3#bib.bibx2), [3](https://arxiv.org/html/2106.06147v3#bib.bibx3), [4](https://arxiv.org/html/2106.06147v3#bib.bibx4), [5](https://arxiv.org/html/2106.06147v3#bib.bibx5), [6](https://arxiv.org/html/2106.06147v3#bib.bibx6)]. In visual question answering (VQA) the questions are related to a scene depicted in still images [[7](https://arxiv.org/html/2106.06147v3#bib.bibx7), [8](https://arxiv.org/html/2106.06147v3#bib.bibx8), [9](https://arxiv.org/html/2106.06147v3#bib.bibx9), [10](https://arxiv.org/html/2106.06147v3#bib.bibx10), [11](https://arxiv.org/html/2106.06147v3#bib.bibx11), [12](https://arxiv.org/html/2106.06147v3#bib.bibx12), [13](https://arxiv.org/html/2106.06147v3#bib.bibx13)]. Finally, video question answering attempts to use both the visual and acoustic information in video material as context [[14](https://arxiv.org/html/2106.06147v3#bib.bibx14), [15](https://arxiv.org/html/2106.06147v3#bib.bibx15), [16](https://arxiv.org/html/2106.06147v3#bib.bibx16), [17](https://arxiv.org/html/2106.06147v3#bib.bibx17), [18](https://arxiv.org/html/2106.06147v3#bib.bibx18), [19](https://arxiv.org/html/2106.06147v3#bib.bibx19)]. The use of the acoustic channel is usually limited to linguistic information that is expressed in text form, either with manual transcriptions (e.g. subtitles) or by automatic speech recognition [[20](https://arxiv.org/html/2106.06147v3#bib.bibx20)].

In most studies, reasoning is supported by spatial and symbolic representations in the visual domain [[21](https://arxiv.org/html/2106.06147v3#bib.bibx21), [22](https://arxiv.org/html/2106.06147v3#bib.bibx22)]. However, reasoning and logic relationships can also be studied via representations of sounds [[23](https://arxiv.org/html/2106.06147v3#bib.bibx23)]. Including the auditory modality in studies on reasoning is of particular interest for research in artificial intelligence [[24](https://arxiv.org/html/2106.06147v3#bib.bibx24)], but also has implications for real-world applications [[25](https://arxiv.org/html/2106.06147v3#bib.bibx25)]. In [[26](https://arxiv.org/html/2106.06147v3#bib.bibx26)], audio was used in combination with video and depth information to recognize human activities, and it was shown that sound can be more discriminative than the corresponding visual cues. As an example, imagine using an espresso machine: besides possibly a display, all information about the different phases of producing coffee, from grinding the beans to pressing the powder into the holder and brewing the coffee with high-pressure hot water, is conveyed by the sounds. The detection of abnormalities in machinery whose moving parts are hidden, or the detection of threatening or hazardous events, are other examples of the importance of audio information for cognitive systems.

The audio modality provides important information that can be leveraged in the context of QA reasoning. Audio allows QA systems to answer relevant questions more accurately, or even to answer questions that are not approachable from the visual domain alone. In [[27](https://arxiv.org/html/2106.06147v3#bib.bibx27)], we introduced the AQA task and proposed a new database (CLEAR) to promote research in AQA. The agent’s goal, in the proposed task, was to answer questions related to _acoustic scenes_ composed of a sequence of _elementary musical sounds_. The questions foster reasoning on the properties of the elementary sounds and their relative and absolute positions in the scene. To build CLEAR, we were inspired by the work of Johnson _et al._ [[7](https://arxiv.org/html/2106.06147v3#bib.bibx7)] for VQA. Similarly, we tested an architecture built for VQA and based on _FiLM layers_ [[28](https://arxiv.org/html/2106.06147v3#bib.bibx28)] on the newly proposed AQA task. The authors of [[29](https://arxiv.org/html/2106.06147v3#bib.bibx29)] later proposed to extend the questions to more acoustically realistic situations by developing a new database called DAQA. To evaluate the results, they proposed the MALiMo network, which relies on several FiLM layers.

The works cited above use neural network architectures that are largely inspired by image processing research. However, the structure of acoustic data is fundamentally different from that of visual data. This is illustrated, for example, in [[30](https://arxiv.org/html/2106.06147v3#bib.bibx30)], where two standard data sets in computer vision (MNIST) and speech technology (Google Speech Commands) are compared via t-SNE [[31](https://arxiv.org/html/2106.06147v3#bib.bibx31)]. A legitimate question is whether it is possible to obtain better results (in terms of accuracy and network complexity) by adapting the first layers of the architectures to take into account intrinsic characteristics of acoustic signals. Even within the AQA domain, the properties of acoustic data may vary significantly depending on the nature of the auditory scenes (e.g. CLEAR vs DAQA). It is therefore interesting to evaluate the impact of the dataset on system performance.

To answer the above questions, we present a study that evaluates the impact of audio pre-processing, acoustic feature extraction and dataset characteristics on the performance of neural architectures for AQA. When considering performance, we focus both on the accuracy and on the complexity of the models. We provide a detailed analysis of our results based on question type to improve interpretability. The main contributions can be summarized as follows:

* We introduce CLEAR2, a more challenging version of the CLEAR dataset, which comprises scenes of variable duration and different elementary sounds for the training and test sets.
* We propose a highly optimized FiLM-based architecture (NAAQA), inspired by VQA architectures, containing new feature extraction modules that are tailored to acoustic inputs.
* We study the effect of time and frequency coordinate maps for acoustic data at different levels in the architecture.
* We evaluate the generality of the methods by testing NAAQA on a regenerated version of the DAQA dataset (DAQA′) and by adding a MALiMo module (from [[29](https://arxiv.org/html/2106.06147v3#bib.bibx29)]) into our NAAQA architecture.
* We provide a detailed analysis of our experimental results that helps the interpretability of the model.

On the CLEAR2 dataset, NAAQA outperforms the VQA baseline (which is 4 times more complex in terms of number of parameters) by 17.2 percentage points in accuracy.

The rest of the paper is organized as follows: Section [2](https://arxiv.org/html/2106.06147v3#S2 "2 Related Work ‣ NAAQA: A Neural Architecture for Acoustic Question Answering") reports on recent related work, Section [3](https://arxiv.org/html/2106.06147v3#S3 "3 Data ‣ NAAQA: A Neural Architecture for Acoustic Question Answering") describes both our CLEAR2 dataset and the DAQA′ dataset, Section [4](https://arxiv.org/html/2106.06147v3#S4 "4 Method ‣ NAAQA: A Neural Architecture for Acoustic Question Answering") presents the QA models we have tested, Section [5](https://arxiv.org/html/2106.06147v3#S5 "5 Experiments ‣ NAAQA: A Neural Architecture for Acoustic Question Answering") gives details on the experimental settings, Section [6](https://arxiv.org/html/2106.06147v3#S6 "6 Results and Discussion ‣ NAAQA: A Neural Architecture for Acoustic Question Answering") presents and discusses the results and, finally, Section [7](https://arxiv.org/html/2106.06147v3#S7 "7 Conclusions ‣ NAAQA: A Neural Architecture for Acoustic Question Answering") concludes the paper. Some extra information is reported in the supplementary material.

2 Related Work
--------------

This section presents previous research in QA systems, including data generation and modeling.

### 2.1 Text-Based Question Answering

The question answering task was introduced as part of the Text Retrieval Conference [[1](https://arxiv.org/html/2106.06147v3#bib.bibx1)]. In text-based question answering, both the questions and the context are expressed in text form. Answering these questions can often be approached as a pattern matching problem, in the sense that the information can be retrieved almost verbatim from the text (e.g. [[3](https://arxiv.org/html/2106.06147v3#bib.bibx3), [4](https://arxiv.org/html/2106.06147v3#bib.bibx4), [5](https://arxiv.org/html/2106.06147v3#bib.bibx5), [6](https://arxiv.org/html/2106.06147v3#bib.bibx6)]).

### 2.2 Visual Question Answering (VQA)

Visual Question Answering aims to answer questions based on a visual scene. Several VQA datasets are available to the scientific community [[7](https://arxiv.org/html/2106.06147v3#bib.bibx7), [32](https://arxiv.org/html/2106.06147v3#bib.bibx32), [33](https://arxiv.org/html/2106.06147v3#bib.bibx33), [34](https://arxiv.org/html/2106.06147v3#bib.bibx34), [35](https://arxiv.org/html/2106.06147v3#bib.bibx35), [36](https://arxiv.org/html/2106.06147v3#bib.bibx36), [9](https://arxiv.org/html/2106.06147v3#bib.bibx9), [10](https://arxiv.org/html/2106.06147v3#bib.bibx10), [37](https://arxiv.org/html/2106.06147v3#bib.bibx37), [8](https://arxiv.org/html/2106.06147v3#bib.bibx8)]. However, designing an unbiased dataset is non-trivial. [[11](https://arxiv.org/html/2106.06147v3#bib.bibx11)] observed that the type of question has a strong impact on the results of neural-network-based systems, which motivated research to reduce the bias in VQA datasets [[38](https://arxiv.org/html/2106.06147v3#bib.bibx38), [39](https://arxiv.org/html/2106.06147v3#bib.bibx39), [40](https://arxiv.org/html/2106.06147v3#bib.bibx40), [12](https://arxiv.org/html/2106.06147v3#bib.bibx12), [13](https://arxiv.org/html/2106.06147v3#bib.bibx13), [7](https://arxiv.org/html/2106.06147v3#bib.bibx7)]. Gathering good labeled data is also non-trivial, which led [[12](https://arxiv.org/html/2106.06147v3#bib.bibx12)] and [[13](https://arxiv.org/html/2106.06147v3#bib.bibx13)] to constrain their work to yes/no questions. To alleviate this problem, [[7](https://arxiv.org/html/2106.06147v3#bib.bibx7)] proposed the use of synthetic data for both the questions and the visual scenes. The resulting CLEVR dataset has been extensively used to evaluate neural networks for VQA applications [[28](https://arxiv.org/html/2106.06147v3#bib.bibx28), [41](https://arxiv.org/html/2106.06147v3#bib.bibx41), [42](https://arxiv.org/html/2106.06147v3#bib.bibx42), [43](https://arxiv.org/html/2106.06147v3#bib.bibx43), [44](https://arxiv.org/html/2106.06147v3#bib.bibx44), [45](https://arxiv.org/html/2106.06147v3#bib.bibx45)], which helped foster research on VQA. To create the visual scenes, the authors automated a 3D modelling software. This allows for an unlimited supply of labeled data, eliminating the time and effort needed for manual annotations. For the questions, they first manually designed semantic representations for each type of question. These representations describe the reasoning steps needed to answer a question (i.e. “find all cubes || that are red || and metallic”). The semantic representations are then instantiated based on the visual scene composition, thus creating a question and an answer for a given scene. This gives complete control over the labelling process.

### 2.3 Databases for AQA

![Overview of the CLEAR dataset generation process](https://arxiv.org/html/2106.06147v3/images/generation_process_overview.png)

Figure 1: Overview of the CLEAR dataset generation process. Highlighted in red: 10 randomly sampled sounds from the elementary sound bank are assembled to create an acoustic scene. The attributes of each elementary sound are depicted in blue. The question template (orange) and the elementary sound attributes are combined to instantiate a question. The answer is generated by applying each step of the question’s functional program (purple) to the acoustic scene definition (blue). The impact of the reverberation can be seen in the changes of the signal envelopes.

As in VQA, using generated data in the design of AQA datasets has substantial advantages. Data can be automatically annotated, which saves time and complexity. The number of training examples that can be generated is only limited by the available computational resources. Controlling the generation process gives a complete understanding of the properties and relations of the objects in a scene. This understanding can be leveraged to reduce bias in the dataset and to generate complex questions and their corresponding answers. The CLEAR dataset [[27](https://arxiv.org/html/2106.06147v3#bib.bibx27)] was initially generated using semi-synthetic data. The elementary sounds were real recordings of musical notes played by various instruments and players. The auditory scenes were obtained by concatenating these elementary sounds in different combinations. The data set had two main limitations: scenes had a fixed duration, and the same elementary sounds were used to generate the test and training scenes (although test and training scenes were different). The DAQA dataset [[29](https://arxiv.org/html/2106.06147v3#bib.bibx29)] comprises more complex and less stationary elementary natural sounds coming, for example, from aircraft, cars, doors, human speech, bird song, dog barking, etc. Although more complex and varied than CLEAR, the evaluation also uses the same elementary sound recordings for training and testing.

In this paper, we propose a more challenging version of CLEAR which uses different elementary sound recordings for the training and test sets and generates variable-duration auditory scenes.

### 2.4 Convolutional Neural Networks on Audio

Convolutional neural networks (CNN) have dominated the visual domain in recent years. More recently, they have also been applied to a number of problems in the acoustic domain, such as acoustic scene classification [[46](https://arxiv.org/html/2106.06147v3#bib.bibx46), [47](https://arxiv.org/html/2106.06147v3#bib.bibx47), [48](https://arxiv.org/html/2106.06147v3#bib.bibx48), [49](https://arxiv.org/html/2106.06147v3#bib.bibx49)], music genre classification [[50](https://arxiv.org/html/2106.06147v3#bib.bibx50), [48](https://arxiv.org/html/2106.06147v3#bib.bibx48)], instrument classification [[51](https://arxiv.org/html/2106.06147v3#bib.bibx51), [52](https://arxiv.org/html/2106.06147v3#bib.bibx52)], sound event classification and localization [[53](https://arxiv.org/html/2106.06147v3#bib.bibx53)] and speech recognition [[48](https://arxiv.org/html/2106.06147v3#bib.bibx48)]. Some authors [[50](https://arxiv.org/html/2106.06147v3#bib.bibx50), [51](https://arxiv.org/html/2106.06147v3#bib.bibx51), [52](https://arxiv.org/html/2106.06147v3#bib.bibx52), [49](https://arxiv.org/html/2106.06147v3#bib.bibx49)] use intermediate representations such as STFT [[54](https://arxiv.org/html/2106.06147v3#bib.bibx54)], MFCC [[55](https://arxiv.org/html/2106.06147v3#bib.bibx55)] or CQT [[56](https://arxiv.org/html/2106.06147v3#bib.bibx56)] spectrograms, while others work directly with the raw audio signal [[48](https://arxiv.org/html/2106.06147v3#bib.bibx48), [46](https://arxiv.org/html/2106.06147v3#bib.bibx46)].

Square convolutional and pooling kernels are often used to solve visual tasks such as VQA, visual scene classification and object recognition [[57](https://arxiv.org/html/2106.06147v3#bib.bibx57), [58](https://arxiv.org/html/2106.06147v3#bib.bibx58), [59](https://arxiv.org/html/2106.06147v3#bib.bibx59), [60](https://arxiv.org/html/2106.06147v3#bib.bibx60)]. [[47](https://arxiv.org/html/2106.06147v3#bib.bibx47), [61](https://arxiv.org/html/2106.06147v3#bib.bibx61), [62](https://arxiv.org/html/2106.06147v3#bib.bibx62)] have successfully used visually motivated CNNs with square filters to solve audio-related tasks. Time-frequency representations of audio signals are, however, structured very differently from visual representations. [[63](https://arxiv.org/html/2106.06147v3#bib.bibx63)] explores the performance of different structures of convolutional kernels for the classification of music signals. They propose the use of 1D convolution kernels to capture time-specific or frequency-specific features. They demonstrate that similar accuracy can be reached by combining 1D time and 1D frequency filters instead of using 2D convolutions, while requiring far fewer parameters. They also explore rectangular kernels, which capture both time and frequency features at different scales. The impact of such strategies, developed for music classification, is still an open question in the context of auditory scene analysis.
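
The parameter savings from such 1D factorizations can be illustrated with a quick back-of-the-envelope computation (a sketch with hypothetical layer sizes, not the actual configurations used in [63] or in NAAQA):

```python
def conv2d_params(in_ch, out_ch, kh, kw, bias=True):
    """Learnable parameters of a 2D convolutional layer with a kh x kw kernel."""
    return out_ch * (in_ch * kh * kw + (1 if bias else 0))

# Hypothetical sizes: 64 input and output feature maps, kernel width 3.
in_ch = out_ch = 64
k = 3

square = conv2d_params(in_ch, out_ch, k, k)          # one k x k kernel
factored = (conv2d_params(in_ch, out_ch, 1, k)       # 1D convolution in time
            + conv2d_params(out_ch, out_ch, k, 1))   # 1D convolution in frequency

print(square, factored)   # → 36928 24704
```

The factored pair covers the same k×k receptive field with roughly a third fewer weights per layer; the gap widens for larger kernels.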

Coordinate maps, initially proposed in [[64](https://arxiv.org/html/2106.06147v3#bib.bibx64)], have proven successful for processing visual data in the context of VQA. The method consists in augmenting the visual input with matrices containing numbers in the range -1 to 1 which vary either in the x- or in the y-dimension. With MALiMo [[29](https://arxiv.org/html/2106.06147v3#bib.bibx29)], the same strategy is used to indicate the simultaneous relative positions of features in frequency and time. [[65](https://arxiv.org/html/2106.06147v3#bib.bibx65)] proposed _Frequency-Aware convolutions_, which are equivalent to concatenating coordinate maps only along the _frequency_ axis. To the best of our knowledge, the effectiveness of coordinate maps on the time dimension of audio signals has not been studied.
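
The idea can be sketched in a few lines of numpy (a minimal illustration under our own conventions, with a channels × frequency × time layout; a frequency coordinate map would be built analogously, with the ramp along the frequency axis):

```python
import numpy as np

def add_time_coordinate_map(spectrogram):
    """Append a time coordinate map as an extra channel: values ramp
    linearly from -1 to 1 along the time axis and are constant along
    frequency. spectrogram shape: (channels, freq, time)."""
    c, f, t = spectrogram.shape
    ramp = np.linspace(-1.0, 1.0, t)                  # one value per time step
    time_map = np.broadcast_to(ramp, (1, f, t))       # repeated over frequency
    return np.concatenate([spectrogram, time_map], axis=0)

spec = np.zeros((32, 64, 128))          # e.g. 32 feature maps, 64 mel bands
augmented = add_time_coordinate_map(spec)
print(augmented.shape)                  # → (33, 64, 128)
```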

In this study, we first evaluate the performance of a network initially designed for the VQA task (Visual FiLM) [[28](https://arxiv.org/html/2106.06147v3#bib.bibx28)] on the AQA task, using the CLEAR2 data set. Then we introduce the NAAQA architecture to leverage specific properties of acoustic inputs. For this architecture, we analyze the influence of separate time and frequency coordinate maps. We then study the impact of adding a MALiMo block to our architecture. Finally, we evaluate our model on the DAQA′ dataset.

3 Data
------

We use two very different datasets in our experiments in order to study the effect of the AQA task characteristics on model performance. The first set, which is also a contribution of this paper, comprises musical sounds (CLEAR2); the second includes short environmental sounds (DAQA′).

### 3.1 CLEAR2

CLEAR2 is an updated version of CLEAR [[27](https://arxiv.org/html/2106.06147v3#bib.bibx27)]. A graphical overview of the generation process is depicted in Figure [1](https://arxiv.org/html/2106.06147v3#S2.F1 "Figure 1 ‣ 2.3 Dababases for AQA ‣ 2 Related Work ‣ NAAQA: A Neural Architecture for Acoustic Question Answering"). Each record in the dataset is a unique combination of a scene, a question and an answer.

TABLE I: Types of questions with examples and possible answers. The variable parts of each question are emphasized in bold italics. The number of possible answers per question type is reported in the last column. Certain questions have the same possible answers, the meaning of which depends on the type of question.

To build acoustic scenes, we prepared a bank of elementary sounds composed of real musical instrument recordings extracted from the Good-Sounds [[66](https://arxiv.org/html/2106.06147v3#bib.bibx66)] dataset. Each elementary sound in a scene is characterized by an n-tuple: [_Instrument_, _Brightness_, _Loudness_, _Musical Note_, _Duration_, _Absolute position in scene_, _Relative position in scene_, _Global position_]. The _Brightness_ property is computed using the timbralmodels [[67](https://arxiv.org/html/2106.06147v3#bib.bibx67)] library; a threshold is used to define the label of the sound (_Dark_ or _Bright_). The _Loudness_ labels are assigned based on the perceptual loudness as defined by the ITU-R BS.1770-4 international normalization standard [[68](https://arxiv.org/html/2106.06147v3#bib.bibx68)]; again, a threshold determines whether the sound is _Quiet_ or _Loud_. All attribute values are listed in Table [I](https://arxiv.org/html/2106.06147v3#S3.T1 "TABLE I ‣ 3.1 CLEAR2 ‣ 3 Data ‣ NAAQA: A Neural Architecture for Acoustic Question Answering") as possible answers to the questions explained below. Differently from CLEAR, in CLEAR2 we make sure that the recordings (players, instruments, microphones) of the elementary sounds are different for the training and test sets. For the training set, the bank comprises 135 unique recordings (compared to 56 in CLEAR) sampled at 48 kHz, including 6 different instruments (bass, cello, clarinet, flute, trumpet and violin), 12 notes (chromatic scale) and 3 octaves. A different set of 135 recordings of the same instruments, recorded using different microphones and players, is used to create the test set. The acoustic scenes are built by concatenating between 5 and 15 randomly chosen sounds from the elementary sound bank into a sequence (as opposed to CLEAR, which comprised fixed-duration scenes). Silence segments of random duration are added in-between elementary sounds. The acoustic scenes are then corrupted by filtering to simulate room reverberation and by adding white uncorrelated uniform noise. Both the amount of noise and reverberation vary from scene to scene with the goal of increasing the variability in the data.
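
The assembly process can be sketched as follows (a simplified numpy illustration with hypothetical parameters; the reverberation filtering step is omitted here):

```python
import numpy as np

rng = np.random.default_rng(0)

def build_scene(sound_bank, sr=48000, n_min=5, n_max=15,
                max_silence_s=0.5, noise_amp=0.01):
    """Concatenate randomly chosen elementary sounds separated by silences
    of random duration, then add uniform white noise."""
    n_sounds = rng.integers(n_min, n_max + 1)
    parts = []
    for _ in range(n_sounds):
        parts.append(sound_bank[rng.integers(len(sound_bank))])
        parts.append(np.zeros(rng.integers(0, int(max_silence_s * sr))))
    scene = np.concatenate(parts)
    return scene + noise_amp * rng.uniform(-1.0, 1.0, size=scene.shape)

bank = [0.1 * rng.standard_normal(48000) for _ in range(10)]  # ten 1 s "sounds"
scene = build_scene(bank)
print(scene.shape)
```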

For each scene, a number of questions is generated using CLEVR-like [[7](https://arxiv.org/html/2106.06147v3#bib.bibx7)] templates. A template defines the reasoning steps required to answer a question based on the composition of the scene (i.e. “find all instances of violin || that plays before trumpet || that is the loudest”). 942 templates were designed for this AQA task. Not all template instantiations result in a valid question; the generated questions are filtered to remove ill-posed questions, similarly to [[7](https://arxiv.org/html/2106.06147v3#bib.bibx7)]. Table [I](https://arxiv.org/html/2106.06147v3#S3.T1 "TABLE I ‣ 3.1 CLEAR2 ‣ 3 Data ‣ NAAQA: A Neural Architecture for Acoustic Question Answering") shows examples of questions with their answers.
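
As an illustration, such a functional program can be interpreted against a symbolic scene definition roughly like this (a hypothetical mini-interpreter with made-up attribute names, not the actual generation code):

```python
# Symbolic scene definition (not audio): one dict per elementary sound.
scene = [
    {"instrument": "violin",  "loudness_db": -20, "position": 0},
    {"instrument": "trumpet", "loudness_db": -15, "position": 1},
    {"instrument": "violin",  "loudness_db": -10, "position": 2},
]

def filter_instrument(sounds, name):
    return [s for s in sounds if s["instrument"] == name]

def before(sounds, others):
    """Keep sounds occurring before the first occurrence in `others`."""
    first = min(o["position"] for o in others)
    return [s for s in sounds if s["position"] < first]

def loudest(sounds):
    return max(sounds, key=lambda s: s["loudness_db"])

# "find all instances of violin || that plays before trumpet || that is the loudest"
violins = filter_instrument(scene, "violin")
candidates = before(violins, filter_instrument(scene, "trumpet"))
answer = loudest(candidates)
print(answer["position"])   # → 0
```

Since the generator knows every attribute of every sound, executing the program yields the ground-truth answer with no manual labelling.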

The a priori probability of answering correctly with no information about the question or the scene, and assuming a uniform distribution of classes, is 1/57 ≈ 1.75%. These probabilities are higher, on average, if we introduce information about the question. For example, if we know that the question is of the type _Exist_ or _Counting comparison_, there are only two possible answers (yes or no) and the probability of answering correctly by chance is 0.5. The majority class accuracy (always answering the most common answer: yes) is 7.5%. Statistics on the CLEAR2 dataset are presented in Table [II](https://arxiv.org/html/2106.06147v3#S3.T2 "TABLE II ‣ 3.1 CLEAR2 ‣ 3 Data ‣ NAAQA: A Neural Architecture for Acoustic Question Answering").

Datasets global statistics

CLEAR2 dataset detailed statistics:

| | Mean | Min | Max |
| --- | --- | --- | --- |
| # of sounds per scene | 10 | 5 | 15 |
| Elementary sound duration | 0.85 s | 0.69 s | 1.11 s |
| Scene duration | 10.69 s | 4.46 s | 17.82 s |
| # of words per question | 17 | 6 | 28 |
| # of unique words per question | 12 | 5 | 19 |

DAQA′ dataset detailed statistics:

| | Mean | Min | Max |
| --- | --- | --- | --- |
| # of sounds per scene | 9 | 5 | 12 |
| Elementary sound duration | 9.35 s | 0.6 s | 20 s |
| Scene duration | 1 min 19 s | 9 s | 3 min 4 s |
| # of words per question | 13 | 5 | 27 |
| # of unique words per question | 11 | 5 | 22 |

TABLE II: Datasets statistics

### 3.2 Reproducing DAQA

We were not able to fully recreate the DAQA dataset because it relies on some AudioSet [[69](https://arxiv.org/html/2106.06147v3#bib.bibx69)] YouTube videos that have since been deleted. We were able to retrieve 358 of the 400 sounds that were used to generate the original dataset, and used them to generate ours. Changing the number of elementary sounds also impacts the whole generation process. This dataset is therefore different from the original DAQA and will be referred to as DAQA′ from now on; our results are thus not fully comparable to the ones reported in [[29](https://arxiv.org/html/2106.06147v3#bib.bibx29)]. A list of all the missing sounds is available in the supplementary material.

### 3.3 Comparing CLEAR2 and DAQA′

Table [II](https://arxiv.org/html/2106.06147v3#S3.T2 "TABLE II ‣ 3.1 CLEAR2 ‣ 3 Data ‣ NAAQA: A Neural Architecture for Acoustic Question Answering") reports statistics on both CLEAR2 and DAQA′. The major difference between the two datasets is the type of elementary sounds used to generate the acoustic scenes, that is, sustained musical notes (CLEAR2) versus possibly transient environmental sounds (DAQA′).

Acoustic scenes in DAQA′ are much longer on average than those in CLEAR2. This results in much larger input spectrograms and, in turn, in much higher computational requirements and longer training times.

Finally, the original DAQA [[29](https://arxiv.org/html/2106.06147v3#bib.bibx29)] and, consequently, our reconstruction (DAQA′) suffer from the same problem as the original CLEAR: the same elementary sounds are used in the training and test scenes. Although the scenes are still different between the training and test sets, this may cause the models to “remember” the elementary sounds rather than extract their properties. In CLEAR2, this problem was mitigated by using different elementary sounds for the training and test sets.

4 Method
--------

We first describe the original Visual FiLM architecture [[28](https://arxiv.org/html/2106.06147v3#bib.bibx28)] that we use as the baseline model, then the proposed modifications that lead to our NAAQA architecture and, finally, NAAQA with a MALiMo module.

### 4.1 Baseline model: Visual FiLM
+ Both the proposed NAAQA and Visual FiLM[[undefaa](https://arxiv.org/html/2106.06147v3#bib.bibx28)] share an overall common architecture which is depicted in Figure[2](https://arxiv.org/html/2106.06147v3#S4.F2 "Figure 2 ‣ 4.1 Baseline model: Visual FiLM ‣ 4 Method ‣ NAAQA: A Neural Architecture for Acoustic Question Answering"). Visual FiLM, that we will use as baseline model, is inspired by Conditional Batch Normalization architectures[[undefaaq](https://arxiv.org/html/2106.06147v3#bib.bibx70)] and achieved state of the art results on the CLEVR VQA task[[undeff](https://arxiv.org/html/2106.06147v3#bib.bibx7)]. The network takes a visual scene and a text-based question as inputs and predicts an answer to the question for the given scene. The text-processing module uses G G unidirectional gated recurrent units (GRUs) to extract context from the text input (yellow area in Figure[2](https://arxiv.org/html/2106.06147v3#S4.F2 "Figure 2 ‣ 4.1 Baseline model: Visual FiLM ‣ 4 Method ‣ NAAQA: A Neural Architecture for Acoustic Question Answering")). The visual scene is processed by the convolutional module (blue area in the figure). The first step of this module is feature extraction (orange box), performed by a Resnet101 model[[undefaaf](https://arxiv.org/html/2106.06147v3#bib.bibx59)] pre-trained on ImageNet[[undefaar](https://arxiv.org/html/2106.06147v3#bib.bibx71)]. The extracted features are processed by a convolutional layer with batch normalization [[undefaaq](https://arxiv.org/html/2106.06147v3#bib.bibx70)] and ReLU [[undefaas](https://arxiv.org/html/2106.06147v3#bib.bibx72)] activation followed by J J Resblocks illustrated in details in the red area in the figure. Unless otherwise specified, batch normalization and ReLU activation functions are applied to all convolutional layers. 
Each Resblock j j comprises convolutional layers with M M filters that are linearly modulated by _FiLM layers_ through the two M×1 M\times 1 vectors 𝜷 j\boldsymbol{\beta}_{j} (additive) and 𝜸 j\boldsymbol{\gamma}_{j} (multiplicative). This modulation emphasizes the most important feature maps and inhibits the irrelevant maps given the context of the question. 𝜷 j\boldsymbol{\beta}_{j} and 𝜸 j\boldsymbol{\gamma}_{j} are learned via fully connected layers using the text embeddings extracted by the text processing module as inputs (purple area in the figure). The affine transformation in the batch normalization before the FiLM layer is deactivated. The FiLM layer applies its own affine transformation using the learned 𝜷 j\boldsymbol{\beta}_{j} and 𝜸 j\boldsymbol{\gamma}_{j} to modulate features. Several Resblocks can be stacked to increase the depth of the model, as illustrated in Figure[2](https://arxiv.org/html/2106.06147v3#S4.F2 "Figure 2 ‣ 4.1 Baseline model: Visual FiLM ‣ 4 Method ‣ NAAQA: A Neural Architecture for Acoustic Question Answering"). Finally, the classifier module is composed of a 1×1 1\times 1 convolutional layer [[undefaat](https://arxiv.org/html/2106.06147v3#bib.bibx73)] with C C filters followed by max pooling and a fully connected layer with H H hidden units and an output size O O equal to the number of possible answers (Gray in Figure[2](https://arxiv.org/html/2106.06147v3#S4.F2 "Figure 2 ‣ 4.1 Baseline model: Visual FiLM ‣ 4 Method ‣ NAAQA: A Neural Architecture for Acoustic Question Answering")). A softmax layer predicts the probabilities of the answers. In order to use the Visual FiLM as a baseline for our experiments, we extract a 2D spectro-temporal representation of the acoustic scenes as depicted at the bottom of Figure[2](https://arxiv.org/html/2106.06147v3#S4.F2 "Figure 2 ‣ 4.1 Baseline model: Visual FiLM ‣ 4 Method ‣ NAAQA: A Neural Architecture for Acoustic Question Answering"). 
The Resnet101 pre-trained extractor expects a 3 channels visual input but the spectro-temporal representation comprises only 1 channel. To work around this constraint, the spectro-temporal information is simply repeated 3 times thus creating a 3 channels input (only when using Resnet101 as feature extractor). This modified spectro-temporal representation is then fed to the model as if it was an image which is the simplest way to adapt the unmodified visual architecture to acoustic data. We call this architecture Visual FiLM Resnet101.
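The FiLM modulation described above reduces to a per-feature-map affine transformation. The following is a minimal NumPy sketch of that operation (the function name and toy values are ours, for illustration only); in the actual model, β_j and γ_j are produced by fully connected layers from the question embedding:

```python
import numpy as np

def film_modulate(feature_maps, gamma, beta):
    """Apply FiLM: scale each of the M feature maps by gamma and shift by beta.

    feature_maps: array of shape (M, H, W); gamma, beta: vectors of length M
    (in the paper, one pair of vectors is predicted per Resblock j from the
    question embedding).
    """
    return gamma[:, None, None] * feature_maps + beta[:, None, None]

# Toy example: 2 feature maps of size 2x2.
x = np.ones((2, 2, 2))
gamma = np.array([2.0, 0.0])   # emphasize map 0, suppress map 1
beta = np.array([0.5, 0.0])
y = film_modulate(x, gamma, beta)
```

With γ = 0 a feature map is switched off entirely, which is how the modulation can inhibit maps that are irrelevant to the question.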
![Image 2: Refer to caption](https://arxiv.org/html/2106.06147v3/x1.png)

Figure 2: Common architecture. Two inputs: a spectro-temporal representation of an acoustic scene and a textual question. The spectro-temporal representation goes through a feature extractor (_Parallel_ and _Interleaved_ feature extractors, detailed in Section [4.2.1](https://arxiv.org/html/2106.06147v3#S4.SS2.SSS1) for NAAQA, and Resnet101 pretrained on ImageNet for Visual FiLM) and then a series of J Resblocks that are linearly modulated by β_j and γ_j (learned from the question input) via FiLM layers. Coordinate maps are inserted before the convolution blocks illustrated with a pink border. The output is a probability distribution over the possible answers.

![Image 3: Refer to caption](https://arxiv.org/html/2106.06147v3/x2.png)

(a) Parallel feature extraction. The input spectrogram is processed by 2 parallel pipelines. The first pipeline (in green) captures _frequency_ features using a series of K 1D convolutions with N_k filters and a stride of 2×1. Since the stride is larger than 1×1, each convolution downsamples the frequency axis. The 1×2 max pooling then downsamples the time axis. The second pipeline (in yellow) captures _time_ features using the same structure with transposed filter sizes. Features from both pipelines are concatenated and fused using a 1×1 convolution with P filters to create a combined time-frequency representation.

![Image 4: Refer to caption](https://arxiv.org/html/2106.06147v3/x3.png)

(b) Interleaved feature extraction. 1D time (in yellow) and frequency (in green) convolutions are applied alternately to the input spectrogram, building a time-frequency representation after each block. The order of the convolutions in each block can be reversed. The extractor is composed of K blocks where each convolution has N_k filters, followed by a 1×1 convolution with P filters.

Figure 3: Acoustic feature extraction
### 4.2 The proposed NAAQA architecture

#### 4.2.1 Feature Extraction

As in Visual FiLM, the first step in the NAAQA model is feature extraction (orange box in Figure [2](https://arxiv.org/html/2106.06147v3#S4.F2)). The most obvious adaptation of Visual FiLM to acoustic data is to retrain the feature extraction module on the scenes from CLEAR2. To do this, we used three 2D convolutional layers with 3×3 kernels, stride 2×2 and N_1, N_2, and N_3 filters respectively, followed by a 1×1 convolution with N_4 filters. We refer to this model as NAAQA 2D Conv.

However, as acoustic signals present unique properties, we introduce two feature extraction modules that are specifically tailored to sounds: the _Parallel_ feature extractor (Figure [3](https://arxiv.org/html/2106.06147v3#S4.F3)a) processes time and frequency features independently in parallel pipelines; the _Interleaved_ feature extractor (Figure [3](https://arxiv.org/html/2106.06147v3#S4.F3)b) captures time and frequency features in a single convolutional pipeline. In both cases, the feature extractor is trained end-to-end with the rest of the network and uses a combination of 1D convolutional filters to process a 2D spectro-temporal representation. The 1D filters process the time and frequency axes independently, as opposed to the 2D filters typically used in image processing.
The design of the _Parallel_ feature extractor (Figure [3(a)](https://arxiv.org/html/2106.06147v3#S4.F3.sf1)) is inspired by the work of [[63]](https://arxiv.org/html/2106.06147v3#bib.bibx63), where 1D filters are used to capture time and frequency features separately. While the time-frequency model of [[63]](https://arxiv.org/html/2106.06147v3#bib.bibx63) includes only one time and one frequency convolution, whose outputs are concatenated, our extractor stacks multiple 1D convolutions in two parallel pipelines, and the time and frequency features are only fused at the end of both pipelines. This yields more complex features. The _frequency pipeline_ (green in the figure) comprises a series of K frequency blocks. Each block is composed of a 1D convolution with N_k filters of size 3×1 and stride 2×1, followed by 1×2 max pooling. With a stride larger than 1×1, the convolution operation downsamples the frequency axis, and the pooling operation downsamples the time axis. This downsampling strategy keeps the features in both parallel pipelines at the same dimensions. The _time pipeline_ (yellow in the figure) is the same as the frequency pipeline except that the convolutional kernel operates along the time dimension and the pooling along the frequency dimension: the convolution kernel is 1×3 and the pooling kernel 2×1. The activation maps of both pipelines are concatenated channel-wise, and a representation combining both the time and frequency features is created using a 1×1 convolution [[73]](https://arxiv.org/html/2106.06147v3#bib.bibx73) with P filters and a stride of one. The feature-map dimensionality is either compressed or expanded depending on the number of filters P in the 1×1 convolution.
We name the corresponding model NAAQA Parallel. The 1×1 convolution can also be removed, leaving it up to the next 3×3 convolution to fuse the time and frequency features.
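As a concrete illustration, the following PyTorch sketch mirrors the structure just described (K blocks, 3×1 and 1×3 kernels with transposed strides and pooling, channel-wise concatenation, 1×1 fusion). It is our reconstruction, not the released NAAQA code; the padding, filter counts, and omission of batch normalization are simplifying assumptions:

```python
import torch
import torch.nn as nn

class ParallelExtractor(nn.Module):
    """Sketch of the Parallel feature extractor: two pipelines of K blocks of
    1D convolutions over a 1-channel spectrogram, fused by a 1x1 convolution
    with P filters."""
    def __init__(self, K=3, n1=16, P=64):
        super().__init__()
        freq, time = [], []
        in_ch, n = 1, n1
        for _ in range(K):
            # frequency pipeline: 3x1 conv, stride 2x1 (downsamples frequency),
            # then 1x2 max pooling (downsamples time)
            freq += [nn.Conv2d(in_ch, n, kernel_size=(3, 1), stride=(2, 1), padding=(1, 0)),
                     nn.ReLU(), nn.MaxPool2d(kernel_size=(1, 2))]
            # time pipeline: transposed kernel, stride and pooling
            time += [nn.Conv2d(in_ch, n, kernel_size=(1, 3), stride=(1, 2), padding=(0, 1)),
                     nn.ReLU(), nn.MaxPool2d(kernel_size=(2, 1))]
            in_ch, n = n, 2 * n          # the number of filters doubles each block
        self.freq = nn.Sequential(*freq)
        self.time = nn.Sequential(*time)
        self.fuse = nn.Conv2d(2 * in_ch, P, kernel_size=1)  # 1x1 fusion

    def forward(self, x):                # x: (batch, 1, n_mels, n_frames)
        f, t = self.freq(x), self.time(x)
        # both pipelines yield identically shaped maps, so they concatenate cleanly
        return self.fuse(torch.cat([f, t], dim=1))

x = torch.randn(1, 1, 64, 416)           # 64 Mel bands, 416 time frames
y = ParallelExtractor()(x)               # -> (1, 64, 8, 52)
```

Note how the alternating conv/pool downsampling makes both pipelines shrink a 64×416 input to the same 8×52 grid after K = 3 blocks, which is what allows the channel-wise concatenation.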
The _Interleaved_ feature extractor (Figure [3(b)](https://arxiv.org/html/2106.06147v3#S4.F3.sf2)) processes the input spectrogram in a single pipeline composed of a series of K interleaved blocks (purple in the figure). Each block comprises a 1×3 _time_ convolution with N_k filters and stride 1×2, followed by a 3×1 _frequency_ convolution with N_k filters and stride 2×1. A 1×1 convolution with P filters processes the output of the last block to either compress or expand its dimensionality. We name the corresponding model NAAQA Interleaved Time.
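In the same spirit, a minimal PyTorch sketch of the time-first interleaved pipeline follows (again our reconstruction; padding and filter counts are assumptions):

```python
import torch
import torch.nn as nn

def interleaved_block(in_ch, n):
    """One interleaved block (time-first variant): a 1x3 time convolution with
    stride 1x2 followed by a 3x1 frequency convolution with stride 2x1, so
    each block halves both axes of the spectrogram."""
    return nn.Sequential(
        nn.Conv2d(in_ch, n, kernel_size=(1, 3), stride=(1, 2), padding=(0, 1)),
        nn.ReLU(),
        nn.Conv2d(n, n, kernel_size=(3, 1), stride=(2, 1), padding=(1, 0)),
        nn.ReLU(),
    )

# K = 3 blocks with the filter count doubling, then a 1x1 projection with P filters
extractor = nn.Sequential(
    interleaved_block(1, 16),
    interleaved_block(16, 32),
    interleaved_block(32, 64),
    nn.Conv2d(64, 64, kernel_size=1),
)
y = extractor(torch.randn(1, 1, 64, 416))   # -> (1, 64, 8, 52)
```

Unlike the parallel version, every block here produces a joint time-frequency representation; swapping the two convolutions inside each block gives the frequency-first variant.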
As an alternative configuration, the order of the convolution operations in each block can be reversed so that each block first operates along the frequency axis and then along the time axis; the model is called NAAQA Interleaved Freq in this case. Compared to the _Parallel_ feature extractor, time-frequency representations are created after each block instead of only at the end of the pipeline.

For all extractors, the convolutions in the first block comprise N_1 filters and the number of filters is doubled after each block (N_i = 2N_{i−1}). More blocks (higher K) give a larger downsampling of the feature maps, which brings down the computational cost of the model.

#### 4.2.2 Coordinate maps for acoustic inputs

When tackling the VQA task, the Visual FiLM model concatenates coordinate maps (CoordConv [[64]](https://arxiv.org/html/2106.06147v3#bib.bibx64)) to the input of convolutional layers (pink-bordered boxes in Figure [2](https://arxiv.org/html/2106.06147v3#S4.F2)). In the visual domain, both axes of an image encode spatial information; coordinate maps therefore have the same meaning on the x- and y-axes.

In spectro-temporal representations of audio, however, the y-axis corresponds to frequency and the x-axis to time. We therefore call the maps _frequency_ and _time_ coordinate maps, respectively. All spectro-temporal representations in CLEAR2 have the same range for the frequency axis, but the range of the time axis varies depending on the duration of the acoustic scenes. We hypothesize that _time_ coordinate maps might have a stronger impact on performance because they provide a relative time scale that the model can use to enhance its temporal localization capabilities.
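A time coordinate map can be sketched as one extra input channel that ramps linearly along the time axis and is constant along frequency. The sketch below follows the CoordConv convention of a [-1, 1] range; the exact scaling used in NAAQA may differ, and the function name is ours:

```python
import numpy as np

def add_time_coordinate_map(x):
    """Concatenate a time coordinate map to a (C, n_freq, n_time) feature
    tensor: one channel increasing linearly from -1 to 1 along time,
    identical in every frequency row (CoordConv-style)."""
    n_freq, n_time = x.shape[1], x.shape[2]
    ramp = np.linspace(-1.0, 1.0, n_time)              # relative time scale
    tmap = np.broadcast_to(ramp, (1, n_freq, n_time))  # same ramp in every row
    return np.concatenate([x, tmap], axis=0)

x = np.zeros((8, 64, 418))       # 8 feature maps on a 64x418 spectrogram grid
y = add_time_coordinate_map(x)   # -> (9, 64, 418)
```

Because scenes are zero-padded to a fixed length, this ramp gives the network an explicit cue about where a frame sits relative to the (variable) scene duration; a frequency coordinate map would be the transposed construction.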
#### 4.2.3 Complexity Optimization

We optimized the most important hyper-parameters in the NAAQA architecture with the goal of reducing model complexity. These include the number of _GRU_ text-processing units G; the number of Resblocks J, which dictates the number of FiLM layers and, therefore, the number of modulation coefficients to compute; the number of convolutional filters M in each block; and the number of filters C and hidden units H in the classifier module. We refer to the resulting model by prepending Optimized to the model name.

#### 4.2.4 NAAQA with a MALiMo module

MALiMo [[29]](https://arxiv.org/html/2106.06147v3#bib.bibx29) adds a second set of FiLM layers that acts as an auxiliary controller. The controller uses the extracted acoustic features to further modulate the intermediate Resblocks. To evaluate the impact of MALiMo on CLEAR2, a MALiMo module was added to NAAQA. We refer to this configuration by appending MALiMo ctrl to the names introduced above. In our implementation of the module we replaced LSTMs with GRUs and adapted the inputs to the acoustic features that we study.

5 Experiments
-------------

We perform experiments to compare the effect on performance of our modifications to the baseline model. Most experiments are conducted on the proposed CLEAR2 dataset. We first investigate different feature extraction methods and compare them to the Visual FiLM Resnet101 feature extractor. Then, we show the effect of time and frequency coordinate maps at different levels of the model. Moreover, we perform a hyper-parameter ablation study to reduce the complexity of the model. We finally test the addition of a MALiMo module to our model. To demonstrate the generality of the results, we compare the performance of our model on the CLEAR2 and DAQA′ datasets.

### 5.1 Acoustic Pre-processing

The raw acoustic signal (sampled at 48 kHz for CLEAR2 and 16 kHz for DAQA′) is processed to create a 2D time-frequency representation with Mel-scale [[74]](https://arxiv.org/html/2106.06147v3#bib.bibx74) spectrograms. After preliminary tests it was decided to extract 64 Mel coefficients for both CLEAR2 and DAQA′, computed over samples weighted by a Hanning window. The window size was 512 samples (∼10.6 ms) for CLEAR2, whereas for DAQA′ it was 400 samples (∼25 ms) as in [[29]](https://arxiv.org/html/2106.06147v3#bib.bibx29). The time shift between consecutive windows (stride) was also optimized depending on the characteristics of the audio data. We found that the best results for CLEAR2 were obtained with a time shift of 2048 samples (∼42.7 ms). This is feasible because of the sustained notes, which vary slowly in time. Using such a long time shift allowed us to reduce the computational cost by more than tenfold. As DAQA′ contains sounds that are shorter and less stable, the same optimization is not feasible: with a time shift of 1600 samples (100 ms), a 5% drop in accuracy is observed in comparison with 160-sample (10 ms) shifts. All results based on CLEAR2 are reported with long window shifts (long stride), with the exception of the comparison between short and long strides on both CLEAR2 and DAQA′ in Supplementary Materials.

As scene durations are not constant in CLEAR2, spectrograms are zero-padded along the time axis so that they all have the same dimension (1×64×418), which corresponds to a maximum length of ∼17.9 s. The power spectrum is normalized to the mean and standard deviation of the training data with the goal of speeding up convergence [[75]](https://arxiv.org/html/2106.06147v3#bib.bibx75).
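The framing and padding arithmetic behind these dimensions can be sketched as follows (the frame-count formula is a simplification we assume; the exact edge handling of the original pipeline is not specified):

```python
import numpy as np

SR = 48_000          # CLEAR2 sampling rate
WIN = 512            # Hanning window length (~10.6 ms)
HOP = 2048           # long stride (~42.7 ms)
N_MELS = 64
MAX_FRAMES = 418     # fixed time dimension: 418 * 2048 / 48000 ~ 17.8 s

def n_frames(n_samples, hop=HOP):
    """Number of analysis frames for a scene, assuming one frame per hop."""
    return int(np.ceil(n_samples / hop))

def pad_spectrogram(spec, max_frames=MAX_FRAMES):
    """Zero-pad a (N_MELS, T) Mel spectrogram along time to (N_MELS, max_frames)."""
    pad = max_frames - spec.shape[1]
    return np.pad(spec, ((0, 0), (0, pad)))

frames_10s = n_frames(10 * SR)                          # a 10 s scene
padded = pad_spectrogram(np.ones((N_MELS, frames_10s)))  # -> (64, 418)
```

With the short 160-sample stride used for DAQA′, the same 10 s scene would yield roughly 13 times as many frames, which is the computational saving the long stride buys on CLEAR2.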
### 5.2 Experimental conditions

Unless specified otherwise, the models presented in subsequent sections are trained on the CLEAR2 dataset, which comprises 50 000 scenes and 4 questions per scene for a total of 200 000 records, from which 140 000 (70%) are used for training, 30 000 (15%) for validation and 30 000 (15%) for test. The test set is generated using a different set of elementary sounds, which ensures that the network cannot memorize them and therefore acts as a better generalization benchmark. The optimization techniques and other training settings are further described in Supplementary Materials. Results are reported in terms of accuracy, that is, the percentage of correct answers over the total. Since the initialization of deep architectures has a profound impact on training convergence, we developed a python library, torch-reproducible-block ([https://github.com/NECOTIS/torch-reproducible-block](https://github.com/NECOTIS/torch-reproducible-block)), to control the model initial conditions and design reproducible experiments. To ensure the robustness of the results, each model is trained 5 times with 5 different random seeds.

### 5.3 Initial model configuration

The initial configuration for the proposed model comprises G=4096 GRU units, J=4 Resblocks with M=128 filters each, and a classifier composed of a 1×1 convolution with C=512 filters and H=1024 hidden units in the fully connected layer. This configuration includes both _time_ and _frequency_ coordinate maps in each location highlighted in pink in Figure [2](https://arxiv.org/html/2106.06147v3#S4.F2).

TABLE III: Results on CLEAR2. The table gives the number of parameters, average accuracy (%), and standard deviation over 5 repetitions of the training. Overall accuracy as well as accuracy per question type are reported. Configurations are reported in the same order as they are discussed in the paper. The most common answer is “Yes”.

TABLE IV: Results on DAQA′. The table presents the number of parameters, average training, validation and test accuracy (%) with standard deviation over 5 repetitions of the training, as well as average training time. Results are reported for four configurations, with and without the MALiMo module, in the same order as they are presented in the paper.

6 Results and Discussion
------------------------

Main results on the CLEAR2 data set are presented in Table [III](https://arxiv.org/html/2106.06147v3#S5.T3). The complexity of the models in terms of number of parameters, the overall accuracy, and the accuracy per question type are reported. Results from two theoretical baselines - random chance and majority-class answers - are given first. We then report results from the Visual FiLM Resnet101 baseline model with the initial configuration described in Section [5.3](https://arxiv.org/html/2106.06147v3#S5.SS3). This architecture achieves the lowest accuracy of all tested models, 62.3%. As expected, the pre-learned knowledge gathered in a visual context does not transfer directly to the acoustic context: Mel spectrograms have a very different structure than visual scene features.
### 6.1 NAAQA modifications

Unless specified otherwise, the initial configuration described in Section [5.3](https://arxiv.org/html/2106.06147v3#S5.SS3) is used for all models in this section.

#### 6.1.1 Feature Extraction

The first improvement over the baseline comes from introducing a specific audio feature extraction module based on 2D convolutions. The NAAQA 2D Conv model has slightly fewer parameters than Visual FiLM Resnet101, because of the simpler feature extraction module, and a much higher overall accuracy of 77.6%.

We then tested two versions of the _Interleaved_ feature extractor (Figure [3(b)](https://arxiv.org/html/2106.06147v3#S4.F3.sf2)). The computation order of the 1D convolutions in each block has a significant impact on performance. When the first 1D convolution in each block is computed along the frequency axis (NAAQA Interleaved Freq), the network reaches an overall accuracy of 67.2%. It performs especially poorly on questions related to _count_ (42.5%), _count instruments_ (46.5%) and _notes_ (48.0%). The performance on _position_ questions is also the lowest among all extractors. When the computation order of the convolutions is reversed (NAAQA Interleaved Time), information is better captured and the network reaches 78.0% overall accuracy. A possible explanation relates to the nature of the sounds in the CLEAR2 dataset, which mainly consist of sustained musical notes. The time dimension at short scales does not contain much information that helps identify the individual sounds. At larger scales, however, the time axis contains information relative to the scene as a whole, which is exploited by higher-level layers (Resblocks) to take into account the connections between different sounds. Because its stride is greater than 1×1, each 1D convolution downsamples the axis along which it is applied. When the first convolution is a frequency convolution, the frequency axis of the resulting features is downsampled, which reduces the information that can be captured by the time convolution that follows.

The _Parallel_ feature extractor (NAAQA Parallel, Figure [3(a)](https://arxiv.org/html/2106.06147v3#S4.F3.sf1)) reaches an overall accuracy of 78.5%. It performs well on all question types except _relative position_, _count_ and _count instrument_; refer to Section [6.2](https://arxiv.org/html/2106.06147v3#S6.SS2) for further analysis. These results show that building complex time and frequency features separately and fusing them at a later stage is a good strategy for learning acoustic features for this task. This claim is further strengthened by the analysis in Section [6.3.2](https://arxiv.org/html/2106.06147v3#S6.SS3.SSS2).

Out of all extractors, NAAQA Parallel is the one that performs the best and constitutes the basis of NAAQA in all subsequent experiments.

#### 6.1.2 Coordinate Maps

TABLE V: Impact of the placement of _Time_ and _Frequency coordinate maps_. All possible positions are illustrated by the pink-bordered boxes in Figure [2](https://arxiv.org/html/2106.06147v3#S4.F2). The value _Both_ indicates that both _Time_ and _Frequency_ coordinate maps were inserted at the given position. NAAQA Parallel is used with hyper-parameters from the initial configuration (defined in Section [5.3](https://arxiv.org/html/2106.06147v3#S5.SS3)). The rows are ordered by test accuracy.

Coordinate maps can be inserted before any convolution operation (Figure [2](https://arxiv.org/html/2106.06147v3#S4.F2)). We therefore analyzed the impact of the placement of _Time_ and _Frequency_ coordinate maps at different depths in the network. All possible locations were evaluated via grid search. For each location, we inserted either a _Time_ coordinate map, a _Frequency_ coordinate map, or both. Results are detailed in Table [V](https://arxiv.org/html/2106.06147v3#S6.T5). _Time_ coordinate maps have the biggest impact on performance, especially when inserted in the first convolution after the feature extractor or in the Resblocks. This could be because the fusion of the textual and acoustic features, and therefore most of the reasoning, is performed in the Resblocks. The network might be using the additional localization information to inform the modulation of the convolutional feature maps based on the context of the question. Surprisingly, the _Frequency_ coordinate maps have a minimal impact on performance. We further compare the impact of _Time_ versus _Frequency_ coordinate maps in Supplementary Materials.
#### 6.1.3 Complexity Optimization

![Image 5: Refer to caption](https://arxiv.org/html/2106.06147v3/x4.png)

Figure 4: Test accuracy by question type and by the number of relations in the question for Optimized NAAQA Parallel. The overall accuracy for this configuration is 79.1%. The presence of _before_ or _after_ in a question constitutes a temporal relation. The accuracy is _N/A_ for _relative position_ and _count compare_ since these types of question do not include relations. The hyper-parameters are described at the end of Section [6.1.3](https://arxiv.org/html/2106.06147v3#S6.SS1.SSS3).

As described in Section [4.2.3](https://arxiv.org/html/2106.06147v3#S4.SS2.SSS3), we optimized the most important hyper-parameters (G, J, M) in the NAAQA model to reduce its complexity. The baseline Visual FiLM Resnet101 configuration comprises 6.71 M parameters and achieves only 62.3%. NAAQA Parallel comprises 5.61 M parameters and performs significantly better, with 78.5%. With this model as a starting point, we performed an ablation study to find which hyper-parameters can be reduced without impacting accuracy. The Optimized NAAQA Parallel configuration is the best trade-off between model complexity and performance: it comprises 1.68 M parameters and achieves the best overall accuracy, 79.5%. The most notable complexity reduction comes from the number of GRU units G. Reducing G from 4096 to 512 increased accuracy while reducing the number of parameters by a factor of 3 (6.61 M vs 1.68 M). Optimized NAAQA Parallel is composed of a _Parallel_ extractor with K=3 blocks and P=64, G=512 GRU units, J=4 Resblocks with M=128 filters, and a classifier module with C=512 filters and H=1024 units. Results for this configuration can be found in Table [III](https://arxiv.org/html/2106.06147v3#S5.T3) and Figure [4](https://arxiv.org/html/2106.06147v3#S6.F4). Further results related to the ablation study can be found in Supplementary Materials.

#### 6.1.4 Adding a MALiMo controller

The bottom rows of Table [III](https://arxiv.org/html/2106.06147v3#S5.T3) show results where the configurations described in previous sections are augmented with a MALiMo controller. Although the model complexity is significantly increased (by ∼1 M parameters), this addition does not improve model performance on CLEAR2: almost all tested configurations with a MALiMo controller perform slightly worse than the same configuration without the module, as can be seen in Table [III](https://arxiv.org/html/2106.06147v3#S5.T3). This may again be explained by the characteristics of the sounds in CLEAR2. A more in-depth discussion is given when we evaluate the models on DAQA′.
225
+
226
+ ### 6.2 Summary of Results on CLEAR2
227
+
228
+ NAAQA performs well on the CLEAR2 AQA task with 79.5% overall accuracy. It does however struggle with certain types of question as shown in Table[III](https://arxiv.org/html/2106.06147v3#S5.T3 "TABLE III ‣ 5.3 Initial model configuration ‣ 5 Experiments ‣ NAAQA: A Neural Architecture for Acoustic Question Answering") and Figure[4](https://arxiv.org/html/2106.06147v3#S6.F4 "Figure 4 ‣ 6.1.3 Complexity Optimization ‣ 6.1 NAAQA modifications ‣ 6 Results and Discussion ‣ NAAQA: A Neural Architecture for Acoustic Question Answering"). When asked to _count_ the number of sounds with specific attributes, NAAQA reaches only 53.8% accuracy. This limitation is more severe if the question is to count the _different instruments_ playing in a given part of the scene (50.4%). It attains slightly higher accuracy when asked to compare the number of instances of acoustic objects (more, fewer or equal number) with specific attributes (60.6%). In contrast, the network can successfully recognize individual instruments in the scene (81.7%). This suggests, that the problem lies in the logical complexity of the question rather than in the pattern matching from the acoustic scene. As an example, the question (count instrument): ”How many different instruments are playing after the third cello playing a C# note?” requires to first identify the cello playing the _C#_ note, then identify all acoustic objects that are playing after this sound, determine which instruments are of the same family and finally count the number of different families. The model struggles when it must focus on a large number of acoustic objects which explains the low accuracy for this type of question.
229
+
230
+ A similar argument could explain why models also have difficulties with questions related to the _relative position_ of the instruments (58.0%). For example, to answer the question ”Among the flute sounds, which one plays an F note?”, the model must find all flutes playing in the scene, determine which one plays an F note, counts the number of flute playing before and translates the count to a position 7 7 7 This is one possible strategy to answer the question. There may be other ways.. This also requires the network to focus on multiple objects.
231
+
232
+ Certain questions include temporal relations between sounds (_before_ and _after_) as exemplified in Table [I](https://arxiv.org/html/2106.06147v3#S3.T1 "TABLE I ‣ 3.1 CLEAR2 ‣ 3 Data ‣ NAAQA: A Neural Architecture for Acoustic Question Answering"). Questions that include relations require focusing on several sounds to be answered. Figure[4](https://arxiv.org/html/2106.06147v3#S6.F4 "Figure 4 ‣ 6.1.3 Complexity Optimization ‣ 6.1 NAAQA modifications ‣ 6 Results and Discussion ‣ NAAQA: A Neural Architecture for Acoustic Question Answering") shows the accuracy for each question type depending on the presence of temporal relations. Questions that require the network to focus on a single acoustic object (_brightness_, _loudness_, _instrument_, _note_, _global position_ and _absolute position_) benefit from the presence of a relation in the question. This might be explained by the fact that the question contains more information about the scene which helps to focus on the right acoustic object. However, the presence of relations in questions that already require the network to focus on multiple objects (_exist_, _count_ and _count comparison_) is detrimental. This again supports the idea that having to focus on too many objects in the scene hinders the network performance.
233
+
234
+ ### 6.3 Evaluation on DAQA′
235
+
236
+ To compare our results to those of [[undefab](https://arxiv.org/html/2106.06147v3#bib.bibx29)][[undefab](https://arxiv.org/html/2106.06147v3#bib.bibx29)], we evaluated our models on a version of the DAQA data set. As mentioned in Section [3.2](https://arxiv.org/html/2106.06147v3#S3.SS2 "3.2 Reproducing DAQA ‣ 3 Data ‣ NAAQA: A Neural Architecture for Acoustic Question Answering"), we were not able to reproduce the original DAQA dataset which means that results presented in this section are not fully comparable with [[undefab](https://arxiv.org/html/2106.06147v3#bib.bibx29)]. Results for different configurations of NAAQA tested on our modified DAQA′ are reported in Table [IV](https://arxiv.org/html/2106.06147v3#S5.T4 "TABLE IV ‣ 5.3 Initial model configuration ‣ 5 Experiments ‣ NAAQA: A Neural Architecture for Acoustic Question Answering").

#### 6.3.1 NAAQA on DAQA′

The models explored in this section match the performance of previous efforts [[undefab](https://arxiv.org/html/2106.06147v3#bib.bibx29)]. The smallest model they evaluated had 5.49M parameters, the biggest had 21.33M parameters and the best performing had 13.20M parameters. The Optimized NAAQA 2D Conv model has only 1.68M parameters and reaches an accuracy of 58.3% on DAQA′. The Optimized NAAQA Parallel has the same number of parameters and performs slightly better, with an accuracy of 60.4%. When we analyzed both of these models on the CLEAR2 dataset in Section [6.1.1](https://arxiv.org/html/2106.06147v3#S6.SS1.SSS1 "6.1.1 Feature Extraction ‣ 6.1 NAAQA modifications ‣ 6 Results and Discussion ‣ NAAQA: A Neural Architecture for Acoustic Question Answering"), we found a much smaller difference between the performance of the NAAQA 2D Conv and the NAAQA Parallel. This suggests that the parallel extractor is more effective in the context of complex acoustic sounds (DAQA′) than with sustained musical notes (CLEAR2).

Even though these results are not fully comparable with [[undefab](https://arxiv.org/html/2106.06147v3#bib.bibx29)] because of the difference in dataset composition, we want to emphasize that the Optimized NAAQA Parallel reaches an accuracy similar to that of the smallest FiLM model in [[undefab](https://arxiv.org/html/2106.06147v3#bib.bibx29)] (60.4% vs 64.3%) with a significantly smaller number of parameters (1.68M vs 5.49M).

#### 6.3.2 NAAQA with a MALiMo module on DAQA′

In Section [6.1.4](https://arxiv.org/html/2106.06147v3#S6.SS1.SSS4 "6.1.4 Adding a MALiMo controller ‣ 6.1 NAAQA modifications ‣ 6 Results and Discussion ‣ NAAQA: A Neural Architecture for Acoustic Question Answering"), we found that adding a MALiMo controller to our NAAQA models did not improve the accuracy on CLEAR2. On the other hand, the MALiMo controller has a significant positive impact when the model is evaluated on the DAQA′ dataset (Table [IV](https://arxiv.org/html/2106.06147v3#S5.T4 "TABLE IV ‣ 5.3 Initial model configuration ‣ 5 Experiments ‣ NAAQA: A Neural Architecture for Acoustic Question Answering")). We see an increase of almost 4 percentage points when using Optimized NAAQA Parallel + MALiMo ctrl compared to Optimized NAAQA Parallel alone. These results are consistent with the findings of [[undefab](https://arxiv.org/html/2106.06147v3#bib.bibx29)] and with the hypothesis that MALiMo increases performance when working with complex sounds.

The Optimized NAAQA Parallel + MALiMo ctrl configuration performs about the same as the smallest MALiMo model evaluated in [[undefab](https://arxiv.org/html/2106.06147v3#bib.bibx29)] (64.3% vs 65.1%) with significantly fewer parameters (2.78M vs 8.91M).

7 Conclusions
-------------

Acoustic Question Answering (AQA) is a newly emerging task in the area of machine learning research. As performance is strongly dependent on the acoustic environments and types of questions, it is important to understand the relationship between the application and the chosen neural architecture. We propose a benchmark for AQA based on musical sounds (CLEAR2) and a neural architecture that is tailored to interpreting acoustic scenes (NAAQA). NAAQA introduces a number of modifications to a FiLM based architecture to optimize acoustic scene analysis. These include several strategies for neural feature extraction, an ablation study of the hyper-parameters and the optimization of coordinate maps. We confirm that FiLM layers are very effective at modulating activation maps in the AQA application. We are able to optimize our NAAQA neural network so as to obtain competitive results with a fraction of the model complexity. These results are confirmed on a different AQA task (DAQA′) comprising more complex sounds, with the addition of a MALiMo controller in the model. We release all code openly in the hope that these resources may foster increased research activity in solving the AQA task.
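
The feature-wise linear modulation performed by a FiLM layer can be sketched as follows. This is a minimal NumPy illustration, not the actual NAAQA implementation: the feature maps and the question-conditioned parameters are random stand-ins for what the convolutional extractor and the FiLM generator network would produce.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder convolutional feature maps: (channels, frequency, time).
features = rng.standard_normal((4, 8, 16))

# In FiLM, a generator network predicts one (gamma, beta) pair per
# channel from the question embedding; random stand-ins here.
gamma = rng.standard_normal(4)
beta = rng.standard_normal(4)

# Feature-wise linear modulation: scale and shift each channel,
# broadcasting the per-channel parameters over frequency and time.
modulated = gamma[:, None, None] * features + beta[:, None, None]

print(modulated.shape)  # (4, 8, 16)
```

Because gamma and beta depend on the question, the same acoustic features can be emphasized or suppressed differently from one question to the next, which is what makes this conditioning mechanism effective for QA tasks.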

Acknowledgment
--------------

We would like to thank the reviewers for their constructive comments that helped us improve the paper, and the NVIDIA Corporation for the donation of GPUs. Part of this research was funded or supported by the CHIST-ERA IGLU project, the CRSNG, the Michael-Smith scholarships, the University of Sherbrooke and NTNU.

References
----------

* [undef]Ellen M Voorhees “The TREC-8 Question Answering Track Report.” In _TREC_, 1999, pp. 77–82
* [undefa]Ellen M Voorhees and Dawn M Tice “Building a question answering test collection” In _SIGIR_, 2000, pp. 200–207
* [undefb]Martin M Soubbotin and Sergei M Soubbotin “Patterns of Potential Answer Expressions as Clues to the Right Answers.” In _TREC_ 500-250, 2001
* [undefc]Eduard H Hovy et al. “Question Answering in Webclopedia.” In _TREC_, 2000, pp. 53–56
* [undefd]Mohit Iyyer et al. “A neural network for factoid question answering over paragraphs” In _EMNLP_, 2014, pp. 633–644
* [undefe]Deepak Ravichandran and Eduard Hovy “Learning surface text patterns for a question answering system” In _ACL_, 2002, pp. 41–47
* [undeff]Justin Johnson et al. “CLEVR: A Diagnostic Dataset for Compositional Language and Elementary Visual Reasoning” In _CVPR_, 2017, pp. 1988–1997
* [undefg]Stanislaw Antol et al. “VQA: Visual Question Answering” In _ICCV_, 2015, pp. 2425–2433
* [undefh]Yuke Zhu et al. “Visual7W: Grounded Question Answering in Images” In _CVPR_, 2016, pp. 4995–5004
* [undefi]Haoyuan Gao et al. “Are you talking to a machine? dataset and methods for multilingual image question” In _NeurIPS_, 2015, pp. 2296–2304
* [undefj]Aishwarya Agrawal et al. “Analyzing the Behavior of Visual Question Answering Models” In _EMNLP_, 2016, pp. 1955–1960 DOI: [10.18653/v1/D16-1203](https://dx.doi.org/10.18653/v1/D16-1203)
* [undefk]Peng Zhang et al. “Yin and yang: Balancing and answering binary visual questions” In _CVPR_, 2016, pp. 5014–5022
* [undefl]Donald Geman et al. “Visual turing test for computer vision systems” In _PNAS_, 2015, pp. 3618–3623
* [undefm]Jinwei Cao et al. “Automated question answering from lecture videos: NLP vs. pattern matching” In _HICSS_, 2005, pp. 43b
* [undefn]Tat-Seng Chua “Question answering on large news video archive” In _ISPA_, 2003, pp. 289–294
* [undefo]Hui Yang et al. “VideoQA: Question Answering on news video” In _MM_, 2003, pp. 632–641
* [undefp]Kyung-Min Kim et al. “DeepStory: Video story QA by deep embedded memory networks” In _IJCAI_, 2017, pp. 2016–2022 DOI: [10.24963/ijcai.2017/280](https://dx.doi.org/10.24963/ijcai.2017/280)
* [undefq]Makarand Tapaswi et al. “Movieqa: Understanding stories in movies through question-answering” In _CVPR_, 2016, pp. 4631–4640
* [undefr]Yu-Chieh Wu and Jie-Chi Yang “A robust passage retrieval algorithm for video question answering” In _IEEE T CIRC SYST VID_, 2008, pp. 1411–1421
* [undefs]Ted Zhang et al. “Speech-Based Visual Question Answering” In _arXiv_ abs/1705.00464, 2017 arXiv:[1705.00464](https://arxiv.org/abs/1705.00464)
* [undeft]Shi-Kuo Chang and Erland Jungert “Symbolic Projection for Image Information Retrieval and Spatial Reasoning”, Signal processing and its applications Elsevier, 1996 DOI: [10.1016/b978-0-12-168030-5.x5000-1](https://dx.doi.org/10.1016/b978-0-12-168030-5.x5000-1)
* [undefu]Amirouche Moktefi and Sun-Joo Shin “Visual Reasoning with Diagrams”, Studies in Universal Logic Springer, 2013 DOI: [10.1007/978-3-0348-0600-8](https://dx.doi.org/10.1007/978-3-0348-0600-8)
* [undefv]Marc Champagne “Sound reasoning (literally): Prospects and Challenges of current acoustic logics” In _Logica Universalis_, 2015, pp. 331–343
* [undefw]A. Koepke et al. “Audio Retrieval with Natural Language Queries: A Benchmark Study” In _IEEE Transactions on Multimedia_, 2022
* [undefx]Marc Champagne “Teaching Argument Diagrams to a Student Who Is Blind” In _Diagrams_, 2018, pp. 783–786
* [undefy]Alessandro Pieropan et al. “Audio-Visual Classification and Detection of Human Manipulation Actions” In _IROS_, 2014, pp. 3045–3052
* [undefz]Jérôme Abdelnour et al. “CLEAR: A Dataset for Compositional Language and Elementary Acoustic Reasoning” Available at https://arxiv.org/abs/1811.10561 In _NeurIPS Vigil Workshop_, 2018
* [undefaa]Ethan Perez et al. “FiLM: Visual Reasoning with a General Conditioning Layer” In _AAAI_, 2018, pp. 3942–3951
* [undefab]Haytham M. Fayek and Justin Johnson “Temporal Reasoning via Audio Question Answering” In _IEEE-ACM T AUDIO SPE_, 2020, pp. 2283–2294
* [undefac]Cheng Zhang et al. “Active Mini-Batch Sampling using Repulsive Point Processes” In _AAAI_, 2019, pp. 5741–5748
* [undefad]Laurens Van Der Maaten and Geoffrey Hinton “Visualizing Data using t-SNE” In _JMLR_, 2008, pp. 2579–2605
* [undefae]Drew A. Hudson and Christopher D. Manning “GQA: A New Dataset for Real-World Visual Reasoning and Compositional Question Answering” In _CVPR_, 2019, pp. 6700–6709
* [undefaf]Peng Wang et al. “FVQA: Fact-Based Visual Question Answering” In _TPAMI_, 2018, pp. 2413–2427
* [undefag]Kenneth Marino et al. “OK-VQA: A Visual Question Answering Benchmark Requiring External Knowledge” In _CVPR_, 2019
* [undefah]Rowan Zellers et al. “From Recognition to Cognition: Visual Commonsense Reasoning” In _CVPR_, 2019
* [undefai]Daniel Gordon et al. “IQA: Visual Question Answering in Interactive Environments” In _CVPR_, 2018, pp. 4089–4098
* [undefaj]Yash Goyal et al. “Making the V in VQA Matter: Elevating the Role of Image Understanding in Visual Question Answering” In _CVPR_, 2017, pp. 6325–6334
* [undefak]Varun Manjunatha et al. “Explicit Bias Discovery in Visual Question Answering Models” In _CVPR_, 2019, pp. 9562–9571
* [undefal]Anubrata Das et al. “Dataset bias: A case study for visual question answering” In _ASIS&T_, 2019, pp. 58–67 DOI: [10.1002/pra2.7](https://dx.doi.org/10.1002/pra2.7)
* [undefam]Aishwarya Agrawal et al. “Don’t Just Assume; Look and Answer: Overcoming Priors for Visual Question Answering” In _CVPR_, 2018, pp. 4971–4980
* [undefan]Drew Arad Hudson and Christopher D. Manning “Compositional Attention Networks for Machine Reasoning” In _ICLR_, 2018 URL: [https://openreview.net/forum?id=S1Euwz-Rb](https://openreview.net/forum?id=S1Euwz-Rb)
* [undefao]Ramakrishna Vedantam et al. “Probabilistic Neural Symbolic Models for Interpretable Visual Question Answering” In _ICML_, 2019, pp. 6428–6437
* [undefap]Ronghang Hu et al. “Language-Conditioned Graph Networks for Relational Reasoning” In _ICCV_, 2019, pp. 10293–10302
* [undefaq]Ronghang Hu et al. “Explainable Neural Computation via Stack Neural Module Networks” In _ECCV_, 2018, pp. 55–71
* [undefar]Kexin Yi et al. “Neural-Symbolic VQA: Disentangling Reasoning from Vision and Language Understanding” In _NeurIPS_, 2018, pp. 1039–1050
* [undefas]Sajjad Abdoli et al. “End-to-end environmental sound classification using a 1D convolutional neural network” In _Expert Systems with Applications_, 2019, pp. 252–263 DOI: [10.1016/j.eswa.2019.06.040](https://dx.doi.org/10.1016/j.eswa.2019.06.040)
* [undefat]Venkatesh Boddapati et al. “Classifying environmental sounds using image recognition networks” In _Procedia Computer Science_, 2017, pp. 2048–2056 DOI: [10.1016/j.procs.2017.08.250](https://dx.doi.org/10.1016/j.procs.2017.08.250)
* [undefau]Jongpil Lee et al. “Raw Waveform-based Audio Classification Using Sample-level CNN Architectures” In _arXiv_ abs/1712.00866, 2017 arXiv:[1712.00866](https://arxiv.org/abs/1712.00866)
* [undefav]Yoonchang Han and Kyogu Lee “Acoustic scene classification using convolutional neural network and multiple-width frequency-delta data augmentation” In _arXiv_ abs/1607.02383, 2016 arXiv:[1607.02383](https://arxiv.org/abs/1607.02383)
* [undefaw]Juhan Nam et al. “Deep Learning for Audio-Based Music Classification and Tagging: Teaching Computers to Distinguish Rock from Bach” In _IEEE Signal Processing Magazine_, 2019, pp. 41–51
* [undefax]Weiping Zheng et al. “CNNs-based Acoustic Scene Classification using Multi-Spectrogram Fusion and Label Expansions” In _arXiv_ abs/1809.01543, 2018 arXiv:[1809.01543](https://arxiv.org/abs/1809.01543)
* [undefay]Jordi Pons et al. “Timbre Analysis of Music Audio Signals with Convolutional Neural Networks” In _EUSIPCO_, 2017, pp. 2744–2748
* [undefaz]Mathilde Brousmiche et al. “SECL-UMons Database for Sound Event Classification and Localization” In _ICASSP_, 2020, pp. 756–760 DOI: [10.1109/ICASSP40776.2020.9053298](https://dx.doi.org/10.1109/ICASSP40776.2020.9053298)
* [undefaaa]S. Nawab and Thomas F. Quatieri “Short-Time Fourier Transform” In _Advanced Topics in Signal Processing_ Prentice-Hall, Inc., 1987, pp. 289–337
* [undefaab]Beth Logan “Mel Frequency Cepstral Coefficients for Music Modeling” In _ISMIR_, 2000
* [undefaac]Judith Brown and Miller Puckette “An efficient algorithm for the calculation of a constant Q transform” In _JASA_, 1992, pp. 2698 DOI: [10.1121/1.404385](https://dx.doi.org/10.1121/1.404385)
* [undefaad]Alex Krizhevsky et al. “ImageNet classification with deep convolutional neural networks” In _NeurIPS_, 2012, pp. 1097–1105
* [undefaae]Karen Simonyan and Andrew Zisserman “Very Deep Convolutional Networks for Large-Scale Image Recognition” In _ICLR_, 2015
* [undefaaf]Kaiming He et al. “Deep Residual Learning for Image Recognition” In _CVPR_, 2016, pp. 770–778
* [undefaag]Christian Szegedy et al. “Going Deeper with Convolutions” In _CVPR_, 2015, pp. 1–9
* [undefaah]Shawn Hershey et al. “CNN architectures for large-scale audio classification” In _ICASSP_, 2017, pp. 131–135
* [undefaai]Anurag Kumar and Bhiksha Raj “Deep CNN Framework for Audio Event Recognition using Weakly Labeled Web Data” In _arXiv_ abs/1707.02530, 2017 arXiv:[1707.02530](https://arxiv.org/abs/1707.02530)
* [undefaaj]Jordi Pons et al. “Experimenting with musically motivated convolutional neural networks” In _CBMI_, 2016, pp. 1–6 DOI: [10.1109/CBMI.2016.7500246](https://dx.doi.org/10.1109/CBMI.2016.7500246)
* [undefaak]Rosanne Liu et al. “An intriguing failing of convolutional neural networks and the coordconv solution” In _NeurIPS_, 2018, pp. 9605–9616
* [undefaal]Khaled Koutini et al. “Receptive-Field-Regularized CNN Variants for Acoustic Scene Classification” In _DCASE_, 2019 DOI: [10.33682/cjd9-kc43](https://dx.doi.org/10.33682/cjd9-kc43)
* [undefaam]Oriol Romani Picas et al. “A real-time system for measuring sound goodness in instrumental sounds” In _Proceedings of the AES Convention_, 2015
* [undefaan]Andy Pearce et al. “Timbral Models, AudioCommons project, Deliverable D5.7”, 2018 URL: [http://www.audiocommons.org/materials/](http://www.audiocommons.org/materials/)
* [undefaao]International Telecommunication Union “Algorithms to measure audio programme loudness and true-peak audio level (ITU-R BS.1770-4)”, 2015, pp. 25 URL: [https://www.itu.int/dms_pubrec/itu-r/rec/bs/R-REC-BS.1770-4-201510-I!!PDF-E.pdf](https://www.itu.int/dms_pubrec/itu-r/rec/bs/R-REC-BS.1770-4-201510-I!!PDF-E.pdf)
* [undefaap]Jort F. Gemmeke et al. “Audio Set: An ontology and human-labeled dataset for audio events” In _ICASSP_, 2017
* [undefaaq]Sergey Ioffe and Christian Szegedy “Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift” In _ICML_, 2015, pp. 448–456 URL: [http://dl.acm.org/citation.cfm?id=3045118.3045167](http://dl.acm.org/citation.cfm?id=3045118.3045167)
* [undefaar]Olga Russakovsky et al. “ImageNet large scale visual recognition challenge” In _IJCV_, 2015, pp. 211–252
* [undefaas]Kevin Jarrett et al. “What is the best multi-stage architecture for object recognition?” In _ICCV_, 2009, pp. 2146–2153
* [undefaat]Min Lin et al. “Network In Network” In _ICLR_, 2014
* [undefaau]Stanley Smith Stevens et al. “A scale for the measurement of the psychological magnitude pitch” In _JASA_ 8.3 Acoustical Society of America, 1937, pp. 185–190
* [undefaav]Yann A LeCun et al. “Efficient backprop” In _Neural networks: Tricks of the trade_ Springer, 2012, pp. 9–48

![Image 6: [Uncaptioned image]](https://arxiv.org/html/2106.06147v3/photos/jerome.png)Jerome Abdelnour is currently working on electrifying the powersport industry at Taiga Motors as a backend/devops specialist. He received a degree in Computer Engineering from the University of Sherbrooke (Department of Electrical and Software Engineering) and recently completed a Master's degree in Machine Learning at the same university. His research interests include machine learning, acoustic processing, software engineering and large-scale distributed systems.

![Image 7: [Uncaptioned image]](https://arxiv.org/html/2106.06147v3/photos/JR.jpg)Jean Rouat, Ph.D. and Full Professor, is with the Univ. de Sherbrooke, where he founded the Computational Neuroscience and Intelligent Signal Processing Research group (NECOTIS). His translational research links neuroscience and engineering for the creation of new technologies and a better understanding of the learning of multimodal representations. The development of low-power-consumption neural processing hardware for sustainable development, and interactions with artists for multimedia and musical creations, are examples of transfers that he leads based on the knowledge he gains from neuroscience. He leads funded projects to develop sensory substitution and intelligent systems based on neuromorphic computing and implementations.

![Image 8: [Uncaptioned image]](https://arxiv.org/html/2106.06147v3/photos/gs.png)Giampiero Salvi is Professor at the Department of Electronic Systems at the Norwegian University of Science and Technology (NTNU), Trondheim, Norway, and Associate Professor at KTH Royal Institute of Technology, Department of Electrical Engineering and Computer Science, Stockholm, Sweden. Prof. Salvi received the MSc degree in Electronic Engineering from Università la Sapienza, Rome, Italy and the PhD degree in Computer Science from KTH. He was a post-doctoral fellow at the Institute of Systems and Robotics, Lisbon, Portugal. He was a co-founder of the company SynFace AB, active between 2006 and 2016. His main interests are machine learning, speech technology, and cognitive systems.