Title: Experimental Design for Multi-Channel Imaging via Task-Driven Feature Selection
URL Source: https://arxiv.org/html/2210.06891
Markdown Content:
License: CC BY-NC-SA 4.0 arXiv:2210.06891v4 [cs.LG] 17 Mar 2024

Experimental Design for Multi-Channel Imaging via Task-Driven Feature Selection

Stefano B. Blumberg 1,2, Paddy J. Slator 2,3, Daniel C. Alexander 2
1 Centre for Artificial Intelligence, Department of Computer Science, University College London
2 Centre for Medical Image Computing, Department of Computer Science, University College London
3 Cardiff University Brain Research Imaging Centre and School of Computer Science, Cardiff University
stefano.blumberg.17@ucl.ac.uk

Abstract
This paper presents a data-driven, task-specific paradigm for experimental design, to shorten acquisition time, reduce costs, and accelerate the deployment of imaging devices. Current approaches in experimental design focus on model-parameter estimation and require specification of a particular model, whereas in imaging, other tasks may drive the design. Furthermore, such approaches often lead to intractable optimization problems in real-world imaging applications. Here we present a new paradigm for experimental design that simultaneously optimizes the design (set of image channels) and trains a machine-learning model to execute a user-specified image-analysis task. The approach obtains data densely-sampled over the measurement space (many image channels) for a small number of acquisitions, then identifies a subset of channels of prespecified size that best supports the task. We propose a method: TADRED for TAsk-DRiven Experimental Design in imaging, to identify the most informative channel-subset whilst simultaneously training a network to execute the task given the subset. Experiments demonstrate the potential of TADRED in diverse imaging applications: several clinically-relevant tasks in magnetic resonance imaging; and remote sensing and physiological applications of hyperspectral imaging. Results show substantial improvement over classical experimental design, two recent application-specific methods within the new paradigm, and state-of-the-art approaches in supervised feature selection. We anticipate further applications of our approach. Code is available: Code Link.
1 Introduction
Experimental design seeks a sampling scheme or design $D = \{d_1, \ldots, d_C\}$, where each $d_i$, $i = 1, \ldots, C$, is a combination of experimental variables that are under the control of the experimenter, that provides data optimally informative for some criteria or task Antony (2003); Pukelsheim (2006). The experimental outcome (measured data) of design $D$ is a matrix $X_D \in \mathbb{R}^{N \times C}$ with $C$ corresponding measurements from each of $N$ samples. The optimal choice of design depends on the experimental task, which we express as a function $\mathcal{T}$ that maps $X_D$ to a corresponding matrix $Y$ of labels. Experimental design optimization seeks the design that maximizes the ability to perform the task, subject to constraints of time or cost, i.e.

$$D^* = \arg\min_D L(\mathcal{T}(X_D), Y), \quad \text{subject to } |D| = C, \quad (1)$$
where $L$ is a loss function. Here we limit cost simply to the size $C$ of $D$; $\mathcal{T}$ can be any task, but often in imaging involves estimating/mapping model parameters, e.g. via gradient-descent model-fitting in every pixel/voxel, as in Alexander (2008); Cercignani & Alexander (2006), or machine learning, as in Gyori et al. (2022); Waterhouse & Stoyanov (2022).
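To make equation 1 concrete, here is a minimal sketch (not from the paper) of exhaustive design search over a small discrete candidate pool; `best_design`, `fit_predict`, and `mse` are hypothetical stand-ins for the search, the task $\mathcal{T}$, and the loss $L$. The exhaustive search is only tractable for tiny pools, which is exactly the limitation motivating this paper:

```python
import itertools
import numpy as np

def best_design(X_full, Y, candidate_ids, C, fit_predict, loss):
    """Brute-force version of equation 1: among all size-C subsets of the
    candidate channels, keep the design whose task output minimizes the loss."""
    best_D, best_loss = None, np.inf
    for D in itertools.combinations(candidate_ids, C):
        Y_hat = fit_predict(X_full[:, list(D)])  # run the task T on the data X_D
        cur = loss(Y_hat, Y)
        if cur < best_loss:
            best_D, best_loss = D, cur
    return best_D, best_loss

# Toy check: only channel 3 carries the signal, so the best 1-channel design is (3,).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
Y = 2.0 * X[:, [3]]

def fit_predict(X_D):
    # Hypothetical task: least-squares regression from the selected channels to Y.
    w, *_ = np.linalg.lstsq(X_D, Y, rcond=None)
    return X_D @ w

mse = lambda a, b: float(np.mean((a - b) ** 2))
D_star, _ = best_design(X, Y, range(5), C=1, fit_predict=fit_predict, loss=mse)
```

The combinatorial loop makes the cost grow as $\binom{\bar{C}}{C}$, which is why the paper's gradual, learned elimination is needed at realistic channel counts.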
In imaging, as illustrated in figure 1a, $X_D$ is typically a collection of $N$ pixels or voxels with $C$ channels (e.g. RGB images have $C = 3$). The choice of $d_i \in D$ controls the contrast in channel $i$ and is global to the whole channel. Compact (small $C$) but informative designs are often critical in reducing acquisition or development costs in real-world applications. Examples include acquiring magnetic resonance imaging (MRI) contrasts, e.g. to estimate and map microstructural tissue properties within the time a patient can stay still in a scanner Alexander (2008), or manufacturing affordable hyperspectral imaging devices including a few well-chosen spectral filters, e.g. for estimating tissue oxygenation Waterhouse & Stoyanov (2022).
Figure 1: a) An example of experimental design for imaging. In remote sensing hyperspectral imaging (see table 4), each observed wavelength $d_i$ is chosen by the experimenter. The outcome of each $d_i$ is a grayscale image, a channel of the resultant data $X_D$ (RGB has 3 channels). b) The new paradigm for experimental design illustrated for qMRI. First, obtain image data $X_{\bar{D}}$ with a large number $\bar{C}$ of channels. Next, train a user-chosen task network, which drives design optimization to select $C < \bar{C}$ channels; we propose TADRED for this. We consider three distinct example tasks in experiments.
Standard approaches for experimental design typically optimize $D$ over a continuous space, for the task of model parameter estimation. For example, a classical approach still widely deployed uses the Fisher matrix Montgomery (2001), whilst more recent approaches use the paradigm of sequential Bayesian experimental design Blau et al. (2022); Foster et al. (2021); Ivanova et al. (2021). Both require a priori model choice, limiting consideration to model-based tasks, and even specific model-parameter choices or assumptions on their prior distribution. Moreover, such approaches rapidly become computationally intractable as the dimension of the optimization increases.
Here we suggest a new task-driven paradigm for experimental design for real-world imaging applications, illustrated in figure 1b, that does not require a priori model specification and replaces high-dimensional continuous search with a subsampling problem. First, the paradigm requires training data $X_{\bar{D}}$ with $\bar{C}$ channels/measurements acquired using a design $\bar{D}$ that densely samples the measurement space. Secondly, the paradigm selects a subset of $C \ll \bar{C}$ image channels from $X_{\bar{D}}$ (optimizing the design and choosing $X_D \subset X_{\bar{D}}$), coupled with the training of a high-performing neural network that executes the task $\mathcal{T}$ driving the experimental design. Thus, the new paradigm replaces the optimization in equation 1 with:

$$D^*, \mathcal{T}^* = \arg\min_{D, \mathcal{T}} L(\mathcal{T}(X_D), Y), \quad \text{subject to } D \subset \bar{D}. \quad (2)$$
In this paradigm, the task must be specified a priori, but may go beyond standard model-based tasks that drive classical/Bayesian experimental design, to include "model-free" tasks such as missing data reconstruction. The training data requires only a small number of subjects/samples, so may use specialized hardware, lengthy acquisitions, or even simulations. In practice, such acquisitions are often made during early development phases of imaging technologies to explore the range of sensitivity, which informs the choice of, and often provides, $\bar{D}$. The paradigm we propose formalizes the exploitation of such data in experimental design for downstream systems designed for wide deployment and directly supports the use of deep learning for $\mathcal{T}$.
In the new paradigm, the experimental design problem becomes similar to supervised feature selection, where the $\bar{C}$ image channels of $X_{\bar{D}}$ are considered features. In supervised feature selection, state-of-the-art approaches Wojtas & Chen (2020); Lee et al. (2022) couple feature selection with task optimization; however, the structure of the data in typical supervised feature selection problems differs from that in experimental design for imaging. Feature selection algorithms typically assume most features are uninformative and the task is to "identify a small, highly discriminative subset" Kuncheva et al. (2020), e.g. genes associated with drug response from the entire genome. In experimental design for imaging, however, most channels individually offer similar amounts of information to support task performance, since they view the same scene/sample but with often-subtle differences in contrast (see e.g. figure 6). Design optimization seeks a compact combination that covers all important aspects.
Therefore we propose TADRED, a novel method for TAsk-DRiven Experimental Design in imaging. TADRED couples feature scoring and task execution in consecutive networks. The scoring and subsampling procedure enables efficient identification of subsets of complementarily informative channels jointly with training a high-performing network for the task. TADRED also reduces the full feature set to the chosen subset gradually, stepwise, which improves optimization.
Key contributions are:

1. A new coupled subsampling-task paradigm (feature selection) for experimental design in imaging.
2. TADRED: a novel approach for supervised feature selection tuned specifically for experimental design in imaging. TADRED performs task-based image channel selection.
3. A demonstration of our approach on six datasets/tasks in both clinically-relevant MRI and remote sensing and physiological applications in hyperspectral imaging. TADRED outperforms (i) classical experimental design, (ii) recent application-specific published results, (iii) state-of-the-art approaches in supervised feature selection.
2 Related Work
Approaches in Experimental Design A typical task in experimental design is to optimize the design $D$ for estimating model parameters. The most widely used classical approach in imaging uses the Fisher information matrix Pukelsheim (2006). However, for non-linear models, the optimization requires pre-specification of parameter values of interest, leading to circularity; e.g. the standard design for the VERDICT model with primary application in prostate cancer detection and classification Panagiotaki et al. (2015a) (used as a baseline in table 1) is computed by optimizing the Fisher matrix for one specific combination of parameter values, despite aiming to highlight contrast in those parameters throughout the entire prostate. Approaches in the sequential Bayesian experimental design paradigm Blau et al. (2022); Foster et al. (2021); Ivanova et al. (2021) reduce this circularity by optimizing over combinations or ranges of parameter values. Recently, Blau et al. (2022) also implemented an experimental design optimization in a discrete space and obtained state-of-the-art performance and deployment time, by using reinforcement learning to map the history of designs and outcomes to the next design. However, the tasks driving experimental design in imaging are often "model-free" supervised tasks such as missing data reconstruction (tables 2, 4) to recover missing image channels. Classical Fisher-matrix experimental design and sequential Bayesian techniques do not apply in such problems. Furthermore, the sequential Bayesian techniques have been deployed on only small-scale experiments with simulated data, e.g. a simple localization problem for two sources. For example, experiments in Blau et al. (2022) have $C \le 2$ and $D \in \mathbb{R}^{\dim}$, $\dim \le 6$. In contrast, e.g. the real-world experiment in table 1 has $C \in \{110, 55, 28, 14\}$ and $D \in \mathbb{R}^{7C}$.
Preliminary experiments suggest the application of these approaches to such high-dimensional problems is not computationally tractable with the published code/methods. These issues motivate the reformulation of the experimental design paradigm and the introduction of TADRED. Appendix E is a broader review of experimental design for quantitative MRI (qMRI) and hyperspectral imaging.
Supervised Feature Selection operates either at the instance level, e.g. identifying different salient parts of different images, or at the population level, by selecting across all the instances. In imaging, each combination of acquisition parameters $d_i \in D$ is global across all image pixels/voxels, so channel selection for experimental design must be population-wide. Recursive feature elimination (RFE) / backward selection Guyon et al. (2002); Scikit-Learn (2023); Kohavi & John (1997) are frameworks that seek the most informative set of features among a superset to inform a model or task. They work by eliminating the least informative features stepwise to reach a prespecified feature-set size. "Feature Importance Ranking for Deep Learning" (FIRDL) Wojtas & Chen (2020) and "Self-Supervision Enhanced Feature Selection with Correlated Gates" (SSEFS) Lee et al. (2022) are considered state-of-the-art in feature selection, outperforming both classical (e.g. RFE) and recent approaches outlined in appendix E. Both techniques are specifically designed to "identify a small, highly discriminative" subset Kuncheva et al. (2020) of features from a larger group of mostly uninformative features. SSEFS, in a first step, uses a probabilistic approach to search for this subset, whilst also exploiting the presence of correlated subsets for enhanced performance. A second step then trains a network on the chosen subset to execute the task. FIRDL instead has a complex optimization procedure involving exploration-exploitation stochastic local search. SSEFS and FIRDL are detailed in appendix A and are baselines in later experiments.
In contrast to typical feature selection problems, most candidate choices in experimental design are informative: few, if any, features are uninformative, so no single small discriminative set exists. SSEFS's first step seeks groups of correlated features, which is less useful in experimental design, as most image channels correlate strongly (examples in figure 5). FIRDL incorporates global information by performing multiple evaluations on different feature combinations. However, FIRDL's search for a discriminative subset is inappropriate in the experimental design application; its multiple evaluations of the task-execution network are redundant and result in covariate shift and overfitting.
Nevertheless, TADRED builds upon the basic principle of task-driven feature selection, which is the foundation of FIRDL and SSEFS's success. TADRED adopts the same dual-network architecture, but with a different optimization procedure tailored to the experimental design problem. Specifically, TADRED implements a novel combination of the dual selection/task network optimization within the paradigm of RFE/backward selection. As such, it adopts a comparatively simple scoring procedure, which avoids the complicated and suboptimal joint optimization FIRDL/SSEFS require to search for a distinctively discriminative subset. TADRED's end-to-end dual networks avoid FIRDL's multiple evaluations on different feature combinations, and TADRED's passing of information through the optimization procedure improves on both SSEFS and FIRDL.
Finally, PROSUB Blumberg et al. (2022) (baseline in table 2) is a previous attempt to equate experimental design with feature selection and also uses RFE. It uses a customized neural architecture search at every step and was designed specifically to address a measurement-selection problem in qMRI (data in table 2), where it achieves state-of-the-art performance. However, the technique does not naturally generalize to other tasks, which is a key motivation for TADRED. TADRED avoids PROSUB's cumbersome neural architecture search and instead implements a novel four-phase procedure in each step, which keeps the gradient updates smooth and allows feature selection at each step. Also, beyond standard RFE, TADRED efficiently passes information from the optimization on larger feature sets to smaller sets by passing network weights across the steps, unlike PROSUB. These advances combine to substantially enhance the performance, portability, and generalizability of the algorithm across diverse experimental design problems.
3 TADRED: TAsk-DRiven Experimental Design for Imaging
TADRED presents a novel approach to supervised feature selection, tailored to the particularities of the experimental design problem in imaging, and aims to solve equation 2. Section 3.1 describes an outer loop of the procedure, inspired by classical paradigms Kohavi & John (1997); Guyon et al. (2002), that gradually eliminates elements from the densely-sampled design $\bar{D}$ in $t = 1, \ldots, T$ steps to obtain designs $\bar{D} = D_1 \supset \ldots \supset D_T$. This corresponds to performing supervised feature selection for decreasing sizes $\bar{C} = C_1 > \ldots > C_T$, where $\{C_t\}_{t=1}^T$ are chosen by the user a priori. Section 3.2 then outlines an inner loop for training with fixed $1 \le t \le T$. Inspired by recent supervised feature selection advances Imrie et al. (2022); Wojtas & Chen (2020), TADRED trains two coupled networks at each step: a scoring network $S_t$, which scores individual elements of $X_{\bar{D}}$ for importance to inform the subsampling, and a task network $\mathcal{T}_t$, which performs the task driving the design, i.e. estimates $Y$ from the chosen feature subset $X_{D_t} \subset X_{\bar{D}}$. The training procedure is split into four phases that allow feature selection at each step and is inspired by Karras et al. (2018); Blumberg et al. (2022), which produced enhanced optimization. The full procedure is outlined in algorithm 2.
3.1 Outer Loop
Across steps $t = 1, \ldots, T$ we consider decreasing feature set sizes $\bar{C} = C_1 > C_2 > \ldots > C_T$ and perform supervised feature selection at each step in an inner loop (see section 3.2). Reducing feature set sizes stepwise aids the optimization procedure compared to e.g. training on all features then subsampling all at once (see table 6). The procedure passes information from the optimization on larger feature sets to smaller sets. Finally, the stepwise procedure efficiently produces a set of optimized designs (as is typical in supervised feature selection, see e.g. Wojtas & Chen (2020), and also in Waterhouse & Stoyanov (2022)), which can be useful for post-hoc selection of design size to balance economy (small $C$) with task performance. Whilst iterative subsampling also increases computational time, this is comparable to other supervised feature selection approaches (appendix D).
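The outer loop can be sketched as a generic RFE-style elimination. In the sketch below, `score_fn` is a hypothetical stand-in for the trained scoring network of section 3.2, and the code only illustrates how the nested candidate designs arise, one per user-chosen size:

```python
def stepwise_designs(channels, sizes, score_fn):
    """Outer-loop sketch: starting from the dense design, repeatedly keep the
    highest-scoring channels to reach each user-chosen size C_t, yielding a
    nested sequence of candidate designs (one per step)."""
    design = list(channels)
    designs = []
    for C_t in sizes:
        scores = score_fn(design)  # one importance score per surviving channel
        keep = sorted(range(len(design)), key=lambda i: scores[i], reverse=True)[:C_t]
        design = [design[i] for i in sorted(keep)]
        designs.append(design)
    return designs

# Toy check with a fixed scoring rule (score = channel index), sizes C_1 = 4, C_2 = 2.
designs = stepwise_designs(range(8), sizes=[4, 2], score_fn=lambda d: list(d))
```

Because each step re-scores only the surviving channels, every returned design is a subset of the previous one, mirroring $D_1 \supset \ldots \supset D_T$.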
3.2 Inner Loop: Four-Phase Deep Learning Training
At step $1 \le t \le T$ of the outer loop, the inner loop constructs (i) a binary mask $m_t \in \{0, 1\}^{\bar{C}}$, $\|m_t\|_0 = C_t$, to subsample the features; (ii) a weight vector for the features $\bar{s}_t \in \mathbb{R}_+^{\bar{C}}$; (iii) a trained network $\mathcal{T}_t$ to perform the task, which corresponds to solving the optimization problem:

$$\min_{m_t, \mathcal{T}_t, \bar{s}_t} L(\mathcal{T}_t(X_{D_t} \odot \bar{s}_t), Y), \quad \text{subject to } \|m_t\|_0 = C_t, \quad \text{where } X_{D_t} = m_t \odot X_{\bar{D}} + (\mathbf{1}_{\bar{C}} - m_t) \odot X_{\bar{D}}^{\text{fill}}, \quad (3)$$

where $\odot$ is the element-wise product, which follows broadcasting rules when inputs have mismatched dimensions, $\|\cdot\|_0$ is the $L_0$ norm, $\mathbf{1}_{\bar{C}}$ is a vector of $\bar{C}$ ones, and the "feature fill" $X_{\bar{D}}^{\text{fill}} \in \mathbb{R}^{\bar{C}}$ is a hyperparameter that fills the removed features (we take the data median, see appendix C.2). The weight vector $\bar{s}_t$ contains feature scores, which the training procedure uses to remove low-scoring features by setting corresponding values of the mask $m_t$ to $0$.
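A minimal numpy sketch (illustrative, not the paper's code) of the subsampling in equation 3, using the per-channel data median as the feature fill as described above:

```python
import numpy as np

def subsample_with_fill(X, mask):
    """Keep channels where mask == 1 and replace removed channels with a
    per-channel fill value (here the data median), so the data retains its
    full N x C_bar shape throughout training."""
    fill = np.median(X, axis=0)          # feature fill: one value per channel
    return mask * X + (1 - mask) * fill  # broadcasts the fill over the N rows

X = np.array([[1., 10.], [2., 20.], [3., 30.]])
mask = np.array([1., 0.])                # remove the second channel
X_Dt = subsample_with_fill(X, mask)      # second column becomes its median
```

Keeping the array shape fixed is what lets the same task network be trained while the mask changes from step to step.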
Scoring, Subsampling, and Task Execution The core of the training procedure uses the forward/backward pass in algorithm 1. The full procedure in algorithm 2 uses the forward/backward pass to update feature scoring gradually in tandem with improving label prediction.
The procedure aims to learn a meaningful sample-independent feature score to rank the features. In practice, deep-learning training is performed in batches and not across the whole data. Therefore we first learn a sample-dependent feature score $\sigma(S_t(X_{\bar{D}})) = \tilde{S} \in \mathbb{R}_+^{N \times \bar{C}}$, where $S_t$ is a neural network and $\sigma : \mathbb{R} \to [0, \infty)$ is an activation function to ensure positive scores (we take $\sigma = 2 \cdot \text{sigmoid}$, so at initialization $\sigma(0) = 1$). We then compute a sample-independent score $\bar{s}_t \in \mathbb{R}_+^{\bar{C}}$ as an average of $\tilde{S}$ across the $N$ samples in $X_{\bar{D}}$. We also compute a combined score that aids task execution:

$$s = \alpha \odot \tilde{S} + (1 - \alpha) \odot \bar{s}_t, \quad \alpha \in [0, 1], \quad (4)$$

which balances the current learned sample-dependent score with a fixed global estimate of the sample-independent score and allows smooth integration between the two. The mix parameter $\alpha$ is set in the optimization procedure to shift the balance from sample-dependent to sample-independent scores.
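A one-line sketch of the activation choice described above ($\sigma = 2 \cdot \text{sigmoid}$): scores stay positive, and a zero pre-activation maps to a neutral score of exactly 1:

```python
import math

def sigma(x):
    """Score activation: 2 * sigmoid, mapping R to (0, 2) with sigma(0) = 1."""
    return 2.0 / (1.0 + math.exp(-x))
```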
We use a mask $m_t \in [0, 1]^{\bar{C}}$ to subsample the features, $X_{D_t} = m_t \odot X_{\bar{D}} + (\mathbf{1}_{\bar{C}} - m_t) \odot X_{\bar{D}}^{\text{fill}}$, and replace the removed features with default values $X_{\bar{D}}^{\text{fill}}$ to retain the shape of the data structures throughout training. Rather than learning the mask $m_t$ end-to-end, e.g. using a sparsity term/prior as in Lee et al. (2022), we modify elements of $m_t$ during our training procedure. This is important to enable the outer loop of the procedure to output candidate designs at each step.
We now estimate the target $Y$ with $\hat{Y} = \mathcal{T}_t(s \odot X_{D_t})$ from the subsampled data weighted feature-wise by the score, then calculate the loss $L(\hat{Y}, Y)$. This weighting allows gradients to flow end-to-end.
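Putting the pieces together, a small numpy sketch (illustrative only; a random matrix stands in for the task network $\mathcal{T}_t$) of the score mixing in equation 4 and the score-weighted task input:

```python
import numpy as np

rng = np.random.default_rng(0)
N, C_bar = 3, 4
S_tilde = rng.uniform(0.5, 1.5, size=(N, C_bar))  # sample-dependent scores, as if sigma(S_t(X))
s_bar = S_tilde.mean(axis=0)                      # sample-independent score: mean over samples
alpha = 0.5
s = alpha * S_tilde + (1 - alpha) * s_bar         # equation 4; s_bar broadcasts over the N rows

X_Dt = rng.normal(size=(N, C_bar))                # already-subsampled (and filled) data
W = rng.normal(size=(C_bar, 2))                   # stand-in linear "task network"
Y_hat = (s * X_Dt) @ W                            # score-weighted input; in real training this
                                                  # keeps gradients flowing to the scoring network
```

At $\alpha = 1$ the input is weighted purely per-sample; at $\alpha = 0$ every sample is weighted by the same global score, which is the regime the design ultimately needs.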
Training Procedure The key challenges in the design of the training procedure in the inner loop are how to (i) obtain meaningful global sample-independent scores $\bar{s}_t$ from learnt sample-dependent scores $\tilde{S}$, and (ii) differentiate through a masking operation to compute $m_t$. TADRED's four-phase procedure, inspired by Karras et al. (2018); Blumberg et al. (2022), gradually modifies the neural network structure during deep learning training, moving from learning a simpler task (learning sample-dependent scores and retaining most features) to a more complex task (learning sample-independent scores and removing more features) by linear interpolation of network components. This improves optimization over directly learning the more difficult task. Thus we address (i) by first learning $\tilde{S}$ and then progressively reducing the final score $s$ to its average, i.e. $s = \bar{s}_t$ (in algorithm 2, phase 2), and (ii) by progressively setting elements of $m_t$ to zero, i.e., during training, mask elements are real-valued but gradually reduce to binary values (in phase 3).
Algorithm 1 TADRED Forward & Backward Pass (FBP) in Step $t$

Requires: input and target data $X_{\bar{D}}, Y$; mask $m_t$; scoring and task networks $S_t, \mathcal{T}_t$; loss $L$; sample-independent feature score $\bar{s}_t$; mix parameter $\alpha \in [0, 1]$; feature fill $X_{\bar{D}}^{\text{fill}}$.

```
1: S~ = sigma(S_t(X_Dbar))
2: s = alpha ⊙ S~ + (1 - alpha) ⊙ sbar_t            # equation 4
3: X_{D_t} = m_t ⊙ X_Dbar + (1_Cbar - m_t) ⊙ X_Dbar^fill
4: Y^ = T_t(s ⊙ X_{D_t})
5: compute L(Y^, Y) and backpropagate
```

Algorithm 2 TADRED Optimization

Requires: input and target data $X_{\bar{D}} \in \mathbb{R}^{N \times \bar{C}}, Y$; loss $L$; feature fill $X_{\bar{D}}^{\text{fill}} \in \mathbb{R}^{\bar{C}}$; feature set sizes $\bar{C} = C_1 > \ldots > C_T$; training steps $1 \le E_1 < E_2 < E_3 < E$; initial scoring and task networks $S_1, \mathcal{T}_1$.

```
 1: t <- 1; m_1 <- 1_Cbar; alpha <- 1               # step t = 1
 2: for e <- 1, ..., E do
 3:     FBP()                                       # algorithm 1
 4: sbar_1 <- mean of S~ across data
 5: for t <- 2, ..., T do                           # steps t >= 2
 6:     # phase 1
 7:     sbar_t <- sbar_{t-1}; alpha <- 1/2; m_t <- m_{t-1}
 8:     for e <- 1, ..., E_1 do
 9:         FBP()
10:     sbar <- mean of S~ across data              # phase 2
11:     sbar_t <- (sbar_t + sbar) / 2
12:     for e <- E_1 + 1, ..., E_2 do
13:         alpha <- max{alpha - 1 / (2(E_2 - E_1)), 0}
14:         FBP()
15:     # indices that sort an array; phase 3
16:     I = argsort{sbar_t[i] : m_t[i] = 1}
17:     D_t <- {I[0], ..., I[C_{t-1} - C_t]}        # lowest-scored features, to remove
18:     for e <- E_2 + 1, ..., E_3 do
19:         m_t[i] <- max{m_t[i] - 1 / (E_3 - E_2), 0} for i in D_t
20:         FBP()
21:     for e <- E_3 + 1, ..., E do                 # phase 4
22:         FBP()
23:     cache T_t, m_t, sbar_t for equation 3
```
The training procedure differs for the first outer loop step $t = 1$ compared to steps $t \ge 2$. This is because for step $t = 1$ we train on all $\bar{C}$ features and do not have information from previous steps, whereas for steps $t \ge 2$ we perform supervised feature selection for user-chosen $C_t$ (solving equation 3) and training is initialized from step $t - 1$. We describe each step with reference to algorithm 2.
Training for Step t = 1 In the first step (lines 1-4), we simply train $S_1, \mathcal{T}_1$ on full information, i.e. on all features, for a total (chosen) $E$ epochs. At completion, we set the first sample-independent score $\bar{s}_1$ (line 4) to be the mean of the sample-dependent scores $\tilde{S}$ across samples/batches. We found training solely on a sample-dependent score results in faster optimization.
Training for Steps t = 2,...,T The four phases require choosing the number of epochs for each phase, $1 \le E_1 < E_2 < E_3 < E$, for total number of epochs $E$. Training proceeds as follows:
Phase 1) Initialize $S_t$ and $\mathcal{T}_t$ from $S_{t-1}$ and $\mathcal{T}_{t-1}$, $\bar{s}_t$ to $\bar{s}_{t-1}$, $m_t$ to $m_{t-1}$, and $\alpha = \frac{1}{2}$ to balance learning a new score for this step against using information from the learnt score from step $t - 1$. Run $E_1$ epochs to refine scores and task execution with $\alpha$ and $m_t$ fixed.
Phase 2) Update the sample-independent score $\bar{s}_t$ with the learnt score from phase 1 (line 11). Run $E_2 - E_1$ epochs, progressively and linearly modifying $\alpha$ (line 13), so training moves gradually from using sample-dependent scores to sample-independent ones.
Phase 3) Choose the $C_{t-1} - C_t$ lowest-scored features to remove (lines 16, 17). Run $E_3 - E_2$ epochs, linearly modifying the mask for subsampling (line 19). This gradually reduces to $0$ the $C_{t-1} - C_t$ elements of $m_t$ corresponding to the lowest-scored features; thus $\|m_t\|_0 = C_{t-1}$ goes to $\|m_t\|_0 = C_t$. Separating this phase from phase 2 increases the stability of the optimization, as modifying the mask and score simultaneously results in large gradients.
Phase 4) Train $\mathcal{T}_t$ for final refinement for $E - E_3$ epochs with the score weights fixed and features chosen. At completion, return $\mathcal{T}_t, m_t, \bar{s}_t$.
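The two linear ramps in phases 2 and 3 (algorithm 2, lines 13 and 19) can be sketched as follows; this only illustrates the schedules, with hypothetical epoch counts:

```python
import numpy as np

def alpha_schedule(E1, E2):
    """Phase-2 ramp: alpha decreases linearly from 1/2 to 0 over epochs
    E1+1..E2, shifting the mix from sample-dependent to sample-independent."""
    alpha, history = 0.5, []
    for _ in range(E1 + 1, E2 + 1):
        alpha = max(alpha - 1.0 / (2 * (E2 - E1)), 0.0)
        history.append(alpha)
    return history

def mask_schedule(m, removed, E2, E3):
    """Phase-3 ramp: mask entries of the removed features decrease linearly
    from 1 to 0 over epochs E2+1..E3, so the mask ends binary."""
    m = np.asarray(m, dtype=float).copy()
    for _ in range(E2 + 1, E3 + 1):
        m[removed] = np.maximum(m[removed] - 1.0 / (E3 - E2), 0.0)
    return m

alphas = alpha_schedule(E1=2, E2=6)                              # ends at exactly 0
m_final = mask_schedule([1., 1., 1., 1.], removed=[1, 3], E2=6, E3=10)
```

Both ramps reach their targets exactly at the phase boundary, so phase 4 starts with a fixed sample-independent score and a binary mask.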
Implementation Details and Hyperparameters TADRED's hyperparameters are fixed across experiments and different application areas. They are detailed in appendix A.
4 Experiments and Results
This section demonstrates the benefits of TADRED in multiple scenarios, with example applications in qMRI and hyperspectral imaging. First, in table 1, we consider the standard experimental design task of model parameter estimation and outperform classical Fisher-matrix approaches. Within the new paradigm, we also show improvements over recent supervised feature selection approaches. We then show TADRED's efficacy in a "model-free" experimental design scenario: reconstruction of a densely sampled data set from a sparse subset, where Fisher-matrix or recent Bayesian experimental design cannot operate; TADRED outperforms the best published results on an MRI challenge in table 2. In figure 2 we consider a reconstruction task used to then estimate multiple clinically-relevant downstream metrics from model fitting, extending the traditional model-parameter estimation task to multiple quantities. TADRED outperforms recent supervised feature selection techniques in this task, which has immediate deployment potential. We then show the generalizability of TADRED by performing similar sets of experiments on hyperspectral images, outperforming supervised feature selection baselines for earth remote sensing in table 3 and recent work in tissue oxygenation estimation in table 4. Tables 5 and 6 show an ablation study and that TADRED is mostly robust to randomness in deep learning training.
Table 1: Performance comparison of feature selection approaches for VERDICT-MRI designs: MSE $\times 10^2$ between estimated model parameters and ground truth for various $C$ and $\bar{C} = 220$.

| $C$ | 110 | 55 | 28 | 14 |
| --- | --- | --- | --- | --- |
| Random | 1.54 | 2.24 | 3.25 | 6.10 |
| SSEFS | 1.06 | 1.28 | 1.89 | 4.58 |
| FIRDL | 2.22 | 2.14 | 3.09 | 4.05 |
| TADRED | 1.03 | 1.18 | 1.80 | 2.64 |

Table 2: Performance comparison on MUDI: MSE between $\bar{C} = 1344$ reconstructed MRI channels/measurements and the $\bar{C}$ ground-truth measurements for various $C$. PROSUB results from Blumberg et al. (2022) table 1.

| $C$ | 500 | 250 | 100 | 50 | 40 | 30 | 20 | 10 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| PROSUB | 0.49 | 0.61 | 0.89 | 1.35 | 1.53 | 1.87 | 2.50 | 3.48 |
| TADRED | 0.22 | 0.43 | 0.88 | 1.34 | 1.52 | 1.76 | 2.12 | 2.88 |
Appendix C provides additional analysis. Appendix E provides details on experimental design in qMRI and hyperspectral imaging and how to implement our paradigm in real-world scenarios. Appendix F summarizes and visualizes the resultant densely-sampled data $X_{\bar{D}}$. Following standard practice in MR parameter estimation Alexander et al. (2019); Cercignani et al. (2018), and hyperspectral image filter design Waterhouse & Stoyanov (2022), data samples are individual pixels/voxels.
Baselines and Comparisons We compare TADRED with standard model-based approaches such as the classical Fisher matrix. Within the subsampling-task paradigm we use (i) recent application-specific published results optimized by the respective authors; (ii) state-of-the-art supervised feature selection approaches FIRDL and SSEFS (see section 2), and random selection followed by deep learning training (denoted "random") to mimic random baselines used in experimental design papers. Each feature selection approach conducts an extensive hyperparameter search; for fairness, the same number of evaluations is used for each feature subset size $C$. All details are in appendix A. As this requires multiple training runs (SSEFS in table 1 requires >400 runs), we examine the effect of the random seed on performance in table 6. We compare the computational costs of different approaches in appendix D.
TADRED Outperforms Classical Experimental Design and Baselines in Model Parameter Estimation A standard task in experimental design is selecting the design $D$ to maximize the precision of model parameters. We evaluate strategies for this using the VERDICT-MRI model, which aids early detection and classification of prostate cancer Panagiotaki et al. (2015a). We sample parameters $\beta_n$ for voxel $n = 1, \ldots, N$ from a biologically plausible range, add synthetic noise representative of clinical qMRI, and the task is to estimate $Y = \{\beta_1, \ldots, \beta_N\}$ with performance metric MSE. The first baseline Panagiotaki et al. (2015b) uses classical Fisher-matrix experimental design (see section 2) to compute the design $D$ with $C = 20$. This design produces a root-mean-square error of $15.0 \times 10^{-2}$ in this experiment; the TADRED design with $C = 20$ has a corresponding error of $2.04 \times 10^{-2}$. The supervised feature selection approaches in the new paradigm use a densely-sampled design $\bar{D}$ with $\bar{C} = 220$ from Panagiotaki et al. (2015a) and use deep learning to estimate $\beta_n$. Appendix F.1 documents all designs, models, and data. Table 1 shows TADRED outperforms the feature selection baselines where $C = \bar{C}/2, \bar{C}/4, \bar{C}/8, \bar{C}/16$. Thus it can better estimate parameters shown to reduce unnecessary biopsies Singh et al. (2022) in shorter scan times, spurring wider deployment in clinical settings. Similar results on the well-known NODDI model are in appendix B.
Figure 2: Downstream MRI metrics (see appendix F.3) estimated from the full set of C̄ = 288 channels/measurements on HCP data, and from the C̄ measurements reconstructed from C = 18 measurements. Left: MSE for various metrics; Right: qualitative comparison, where arrows highlight closer agreement of TADRED's design with the gold standard than the best-performing baseline.

| DTI | FA | MD | AD | RD |
| --- | --- | --- | --- | --- |
| Random | 2.22 | 6.09 | 22.7 | 6.97 |
| SSEFS | 2.86 | 12.9 | 31.2 | 14.9 |
| FIRDL | 9.83 | 23.2 | 77.7 | 26.8 |
| TADRED | 1.29 | 2.55 | 13.4 | 2.60 |

| DKI / MSDKI | MK | AK | RK | MSD | MSK |
| --- | --- | --- | --- | --- | --- |
| Random | 9.03 | 7.83 | 15.3 | 6.82 | 7.59 |
| SSEFS | 12.0 | 9.26 | 20.3 | 8.96 | 8.17 |
| FIRDL | 11.9 | 10.9 | 21.3 | 10.8 | 6.03 |
| TADRED | 7.67 | 6.73 | 13.9 | 6.37 | 4.94 |
Best Performance on qMRI Challenge Data The Multi-Diffusion Challenge Pizzolato et al. (2020) aimed to identify an informative subset of data from which to reconstruct the original full dataset X_D̄ (i.e. Y = X_D̄), which had C̄ = 1344 measurements. This task provides a generic challenge that tests the ability of an experimental design or supervised feature selection algorithm to identify a subset with maximal information content. As discussed in section 2, neither classical Fisher-matrix nor Bayesian experimental design approaches can perform this task. Data are brain scans of five human subjects, acquired with a state-of-the-art technique that measures multiple MRI modalities simultaneously in a high-dimensional space where d_j ∈ ℝ⁶. Experimental design is therefore important, as sampling within a time budget realistic for clinical settings is difficult Slator et al. (2021). The first experiment follows Blumberg et al. (2022), which has the best published performance on the data, and table 2 shows TADRED outperforms this approach. We also show TADRED outperforms the supervised feature selection baselines in appendix B. All details are in appendix F.2.
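The subsample-then-reconstruct task can be scored with a simple function: pick a subset of channels, fit a reconstructor from the subset back to all channels, and measure MSE. The sketch below uses made-up sizes and a linear generative model and reconstructor (the paper trains networks), purely to illustrate the scoring:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the MUDI-style task: correlated multi-channel data, where
# the goal is to reconstruct all C-bar channels from a chosen channel subset.
# Sizes and the linear generative model are illustrative, not from the paper.
n_samples, c_bar = 2000, 32
latent = rng.standard_normal((n_samples, 4))
mixing = rng.standard_normal((4, c_bar))
data = latent @ mixing + 0.05 * rng.standard_normal((n_samples, c_bar))

def reconstruction_mse(data, subset):
    """MSE of a linear reconstruction of every channel from `subset`."""
    X = data[:, subset]
    w, *_ = np.linalg.lstsq(X, data, rcond=None)
    return float(np.mean((X @ w - data) ** 2))

# Any selection rule can be scored this way; here we just compare two random
# subsets of different sizes.
small = reconstruction_mse(data, rng.choice(c_bar, size=4, replace=False))
large = reconstruction_mse(data, rng.choice(c_bar, size=16, replace=False))
print(f"C=4: {small:.5f}  C=16: {large:.5f}")
```

Replacing the random subsets with the output of a selection algorithm (TADRED, SSEFS, FIRDL) gives the comparison reported in the tables.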
Surpassing the Baselines in Estimation of Multiple Downstream Metrics DTI, DKI, and MSDKI Basser et al. (1994); Jensen & Helpern (2010); Henriques (2018) are widely-used qMRI methods. They quantify tissue microstructure and show promise for extracting imaging biomarkers for many medical applications, such as mild brain trauma, epilepsy, stroke, and Alzheimer's disease Jensen & Helpern (2010); Ranzenberger & Snyder (2022); Tae et al. (2018). Reducing acquisition requirements (picking a small C) whilst obtaining more accurate quantification will enable their usage in a wider range of clinical application areas. We use publicly available, rich, high-resolution HCP data with C̄ = 288 measurements from six human subjects, corresponding to roughly 30-minute scan times in the clinic, too long for general deployment. The task is to subsample to sizes C = C̄/8, C̄/16 and then reconstruct the data, to which the models are fitted using standard techniques. Further details on the models, data, and model-fitting techniques are in appendix F.3. Quantitative and qualitative results are in figure 2 and appendix B and show TADRED outperforms the baselines on 17/18 comparisons on clinically useful downstream metrics. Furthermore, the downstream metrics produced by TADRED are visually closer to the gold standard than those from the best baseline, potentially enhancing the diagnosis of aberrations in tissue microstructure.
Outperforming Baselines in Reconstructing Remote Sensing Ground Images JPL's Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) Thompson et al. (2017) remotely senses elements of the Earth's atmosphere and surface from aeroplanes, and has been used to examine the effect on, and rehabilitation of, forests affected by large wildfires. Purdue University Agronomy Department obtained AVIRIS data to support soils research, and we use this publicly available "Indian Pine" data Baumgardner et al. (2015), obtained from two flights, which acquired ground images at C̄ = 220 different wavelengths. Details are in appendix F.4. This experiment follows the experiment in table 2 and examines a sampling-reconstruction task in which we investigate whether we can obtain the same quality of data with fewer wavelengths, which would in practice require fewer sensors. Table 3 shows TADRED outperforms the supervised feature selection baselines with subsample sizes C = C̄/2, C̄/4, C̄/8, C̄/16. These improvements demonstrate the potential for using fewer filters in AVIRIS. In the development of next-generation airborne hyperspectral devices, TADRED may be used to choose the filters. Further results are in appendix B; promising additional applications are outlined in appendix E.
Table 3: Performance comparison of feature selection approaches for remote sensing AVIRIS hyperspectral data: MSE between the C̄ = 220 reconstructed and ground-truth measurements.

| C | 110 | 55 | 28 | 14 |
| --- | --- | --- | --- | --- |
| Random | 1.81 | 2.60 | 3.99 | 8.27 |
| SSEFS | 2.03 | 4.49 | 5.77 | 10.8 |
| FIRDL | 8.10 | 9.87 | 10.6 | 10.3 |
| TADRED | 0.87 | 1.82 | 2.84 | 5.80 |

Table 4: Performance comparison, RMSE × 10², estimating the abundance of HbO₂ and Hb (top), and SO₂ (bottom). Experimental settings and baseline from Waterhouse & Stoyanov (2022), figure 5.

Top (HbO₂, Hb):

| C | 6 | 5 | 4 | 3 |
| --- | --- | --- | --- | --- |
| Baseline | 4.54 | 4.91 | 5.33 | 6.17 |
| TADRED | 2.80 | 2.89 | 3.23 | 4.36 |

Bottom (SO₂):

| C | 6 | 5 | 4 | 3 |
| --- | --- | --- | --- | --- |
| Baseline | 4.45 | 4.43 | 5.10 | 6.36 |
| TADRED | 2.76 | 2.94 | 3.46 | 5.64 |
Improving the Estimation of Oxygen Saturation This experiment follows Waterhouse & Stoyanov (2022). Tissue oxygen saturation levels provide information regarding chemical and heat burns, along with the likelihood of healing. However, techniques such as spectrophotometry and pulse oximetry do not provide the spatial resolution to observe differences in blood saturation in neighboring tissue. Hyperspectral imaging is a non-invasive, real-time alternative for improving oxygenation estimation, yet application-specific spectral band selection is required to reduce the high cost of imaging sensors and allow widespread clinical adoption. To address this, Waterhouse & Stoyanov (2022) adapted the model in Can & Ülgen (2019) for simulations; the objective is to estimate the pixel-wise abundance of oxyhemoglobin HbO₂ and deoxyhemoglobin Hb, and the oxygen saturation SO₂. Design elements d_j ∈ D̄ are chosen from 4 filters of different widths applied to 87 (center) wavelengths, producing C̄ = 348 measurements. Table 4 shows that TADRED directly outperforms all approaches and the results published and optimized in Waterhouse & Stoyanov (2022) for estimating the abundance of HbO₂, Hb, and SO₂, for feature sizes C = 6, 5, 4, 3. This suggests that using TADRED during the development of clinically-viable hyperspectral devices may help reduce costs.
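The relationship between the three targets, and the metric reported in table 4, can be made concrete with a small numerical example. The abundance values below are made up purely for illustration; only the SO₂ formula and the RMSE × 10² convention come from the text:

```python
import numpy as np

# Oxygen saturation is derived from the two haemoglobin abundances:
# SO2 = HbO2 / (HbO2 + Hb).
hbo2_true = np.array([0.60, 0.45, 0.80])
hb_true = np.array([0.20, 0.35, 0.10])
so2_true = hbo2_true / (hbo2_true + hb_true)

hbo2_est = np.array([0.58, 0.47, 0.78])  # hypothetical pixel-wise estimates
hb_est = np.array([0.22, 0.33, 0.12])
so2_est = hbo2_est / (hbo2_est + hb_est)

# RMSE x 10^2, the form in which table 4 reports errors.
rmse_x100 = float(100 * np.sqrt(np.mean((so2_est - so2_true) ** 2)))
print(round(rmse_x100, 2))  # -> 2.41
```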
Component Analysis and the Effect of Randomness We use the experimental settings of table 2. Table 5 examines the impact of removing TADRED's components on performance. First it considers TADRED without iteratively removing features in the optimization procedure, fixing t = 2 and C_1, C_2 = C̄, C, showing that iterative subsampling performs better than subsampling all features in one iteration. As feature scoring is a key element of TADRED, we also show that removing the scoring network, whilst still learning a score, results in extremely poor performance, as training is destabilized when progressively setting the score from sample-dependent to sample-independent (recall equation 4). Table 6 shows how changing the random seed, which affects network initialization and data shuffling, impacts performance; TADRED performs favorably compared to alternative approaches and is mostly robust to the randomness inherent in deep learning.
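The contrast between iterative and one-shot subsampling comes down to the schedule of feature-set sizes. The halving rule below is an illustrative assumption (the paper sets its schedule per experiment); it only shows the two regimes compared in the ablation:

```python
# Sketch of an iterative (RFE-style) subsampling schedule versus one-shot
# subsampling.  The halving rule is an illustrative assumption, not the
# paper's exact schedule.
def subsample_schedule(c_bar, c_target):
    """Feature-set sizes C_1 > C_2 > ... > C_T, halving down to c_target."""
    sizes = [c_bar]
    while sizes[-1] > c_target:
        sizes.append(max(sizes[-1] // 2, c_target))
    return sizes

# Iterative elimination takes several steps; the ablation's one-shot variant
# corresponds to t = 2 with C_1, C_2 = C-bar, C.
print(subsample_schedule(220, 14))   # -> [220, 110, 55, 27, 14]
print(subsample_schedule(220, 110))  # -> [220, 110]
```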
Table 5: Ablation study on TADRED's components.

| C | 110 | 55 | 28 | 14 |
| --- | --- | --- | --- | --- |
| w/o scoring network | 7.23 | 10.7 | 11.5 | 11.5 |
| w/o iterative subsampling | 1.03 | 1.19 | 1.83 | 2.80 |
| TADRED | 1.03 | 1.18 | 1.80 | 2.64 |

Table 6: Standard deviation of performance × 10² across 10 random seed settings.

| C | 110 | 55 | 28 | 14 |
| --- | --- | --- | --- | --- |
| Random | 0.11 | 0.23 | 0.37 | 0.76 |
| SSEFS | 0.01 | 0.02 | 0.05 | 0.18 |
| FIRDL | 0.44 | 0.34 | 0.37 | 0.44 |
| TADRED | 0.01 | 0.02 | 0.01 | 0.12 |

5 Discussion
This paper proposes TADRED, a feature selection algorithm that enables a new subsampling paradigm for experimental design, particularly in multi-channel imaging applications. We demonstrate substantial performance benefits over the standard Fisher-matrix approaches at the heart of widely used quantitative MRI techniques, as well as strong potential in multiple hyperspectral-imaging applications. "Standard" data sets for testing TADRED do not exist, as its new paradigm is largely unexplored, but in the few available examples (the dataset used in table 2 and the hyperspectral datasets in tables 3 and 4) TADRED strongly outperforms existing algorithms, even on datasets for which those baselines were specifically designed, and without problem-specific hyperparameter tuning.
TADRED combines the dual selection/task network training strategy of state-of-the-art feature selection algorithms (SSEFS and FIRDL) with an RFE framework better suited to identifying complementary subsets among many informative candidate features. Thus, TADRED outperforms SSEFS and FIRDL on the imaging experimental design problems we consider. In fact, random supervised feature selection often outperforms SSEFS when there are no informative/correlated feature subsets to identify, and FIRDL's complex optimization procedure is often not beneficial when there is no such subset to identify, so it underperforms simpler approaches. Conversely, TADRED is likely to underperform SSEFS and FIRDL on typical applications of supervised feature selection, where small sets of discriminative features reside among many uninformative features. One possible limitation is that TADRED's iterative subsampling, in the paradigm of RFE and backward selection, decreases the upper bound on performance, as the optimal feature sets for sizes C_t, C_{t-1} may not be nested. Future work will consider alternative strategies. This iterative subsampling also increases computational time compared to random supervised feature selection; however, appendix D shows TADRED's computational time is comparable to that of SSEFS and FIRDL. Here we consider all image channels to have equal cost, but in practice some measurements/channels may be more expensive than others; TADRED's formulation adapts naturally to more complex cost functions on the experimental design. We also consider only tasks that treat each image pixel/voxel independently, as is typical in quantitative imaging Alexander et al. (2019); Cercignani et al. (2018), so we use only fully-connected networks (as do the baselines); again, TADRED's formulation adapts naturally to using e.g. a CNN for the task network. TADRED has further applications to other imaging problems, e.g. autofocus for specialized equipment Lightley et al. (2022), and potentially beyond imaging, e.g. to studies of cell populations Sinkoe & Hahn (2017).
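The non-nestedness limitation noted above can be seen in a toy example (ours, not from the paper): with an XOR-style target plus a weak direct proxy feature, the best single feature is the proxy, yet the best pair excludes it entirely, so a backward-elimination path through the best singleton cannot reach the best pair:

```python
from collections import defaultdict
import numpy as np

# Toy illustration of why backward elimination caps performance: the best
# feature subsets of sizes 1 and 2 need not be nested.
rng = np.random.default_rng(0)
x1 = rng.integers(0, 2, 4000)
x2 = rng.integers(0, 2, 4000)
y = x1 ^ x2                                      # target is XOR of x1, x2
x3 = np.where(rng.random(4000) < 0.9, y, 1 - y)  # weak direct proxy for y
X = np.stack([x1, x2, x3], axis=1)

def accuracy(cols):
    """Best achievable accuracy predicting y from the given columns
    (majority label within each observed feature combination)."""
    counts = defaultdict(lambda: [0, 0])
    for row, label in zip(X[:, cols], y):
        counts[tuple(row)][label] += 1
    return sum(max(c) for c in counts.values()) / len(y)

best_single = max([0], [1], [2], key=accuracy)         # x3 alone wins (~0.9)
best_pair = max([0, 1], [0, 2], [1, 2], key=accuracy)  # {x1, x2} wins (1.0)
print(best_single, best_pair)  # the optimal pair does not contain x3
```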
Reproducibility Statement
We provide the code: Code Link, which contains the entire source code for our algorithm TADRED. The code also contains scripts to create the simulations used in tables 2 and 8, and to download and preprocess the data for the results presented in tables 2 and 4 and figure 2. Further details on all data and preprocessing are in appendix F. We also provide detailed information on the implementation of TADRED and the baselines in appendix A.
Acknowledgements
HPC: Tristan Clark, James O'Connor, Edward Martin; Ahmed Abdelkarim, Daniel Beechey, George Blumberg, Răzvan Caramalau, Amy Chapman, Alice Cheng, Luca Franceschi, G-Research (for a previous grant), Fredrik Hellström, Jessica Hoang, Chen Jin, Jean Kaddour, Marcus Keil, Johannes Kirschner, Marcela Konanova, Eve Levy and Michael Salvato, Hongxiang Lin, Nina Montaña-Brown, Luca Morreale, MUDI Organizers, Raymond Ojinnaka, Gabriel Oon, Brooks Paige, David Pérez-Suárez, Stefan Piatek, Reviewers, Oliver Slumbers, Dennis Soemers, Danail Stoyanov, Shinichi Tamura, Dale Waterhouse, Tom Young, An Zhao, Yukun Zhou. Funding: EPSRC grants M020533 R006032 R014019, Microsoft scholarship, NIHR UCLH Biomedical Research Centre, Research Initiation Project of Zhejiang Lab (No.2021ND0PI02). Data were provided [in part] by the Human Connectome Project, MGH-USC Consortium (Principal Investigators: Bruce R. Rosen, Arthur W. Toga and Van Wedeen; U01MH093765) funded by the NIH Blueprint Initiative for Neuroscience Research grant; the National Institutes of Health grant P41EB015896; and the Instrumentation Grants S10RR023043, 1S10RR023401, 1S10RR019307.
References

Abid et al. (2019) — Abubakar Abid, Muhammed Fatih Balın, and James Zou. Concrete autoencoders: Differentiable feature selection and reconstruction. In: International Conference on Machine Learning (ICML), 2019.
Alexander (2008) — Daniel C. Alexander. A general framework for experiment design in diffusion MRI and its application in measuring direct tissue-microstructure features. Magnetic Resonance in Medicine, 60(2):439–448, 2008.
Alexander et al. (2019) — Daniel C. Alexander, Tim B. Dyrby, Markus Nilsson, and Hui Zhang. Imaging brain microstructure with diffusion MRI: practicality and applications. NMR in Biomedicine, 32(4):e3841, 2019.
Alfaro-Almagro et al. (2018) — Fidel Alfaro-Almagro, Mark Jenkinson, Neal K. Bangerter, Jesper L. R. Andersson, Ludovica Griffanti, Gwenaëlle Douaud, Stamatios N. Sotiropoulos, Saad Jbabdi, Moises Hernandez-Fernandez, Emmanuel Vallee, Diego Vidaurre, Matthew Webster, Paul McCarthy, Christopher Rorden, Alessandro Daducci, Daniel C. Alexander, Hui Zhang, Iulius Dragonu, Paul M. Matthews, Karla L. Miller, and Stephen M. Smith. Image processing and quality control for the first 10,000 brain imaging datasets from UK Biobank. NeuroImage, 166:400–424, 2018.
Annadani et al. (2023) — Yashas Annadani, Panagiotis Tigas, Desi R. Ivanova, Andrew Jesson, Yarin Gal, Adam Foster, and Stefan Bauer. Differentiable multi-target causal Bayesian experimental design. In: International Conference on Machine Learning (ICML), 2023.
Antony (2003) — Jiju Antony. Design of Experiments for Engineers and Scientists. Oxford: Butterworth-Heinemann, 2003.
Arad & Ben-Shahar (2017) — Boaz Arad and Ohad Ben-Shahar. Filter selection for hyperspectral estimation. In: International Conference on Computer Vision (ICCV), 2017.
Arbour et al. (2022) — David Arbour, Drew Dimmery, Tung Mai, and Anup Rao. Online balanced experimental design. In: International Conference on Machine Learning (ICML), 2022.
Basser & Pierpaoli (1996) — Peter J. Basser and Carlo Pierpaoli. Microstructural and physiological features of tissues elucidated by quantitative-diffusion-tensor MRI. Journal of Magnetic Resonance, 111:209–219, 1996.
Basser et al. (1994) — Peter J. Basser, James Mattiello, and Denis LeBihan. MR diffusion tensor spectroscopy and imaging. Biophysical Journal, 66(1):259–267, 1994.
Baumgardner et al. (2015) — Marion F. Baumgardner, Larry L. Biehl, and David A. Landgrebe. 220 band AVIRIS hyperspectral image data set: June 12, 1992 Indian Pine test site 3. Purdue University Research Repository, doi:10.4231/R7RX991C, 2015.
Baumgardner et al. (2022) — Marion F. Baumgardner, Larry L. Biehl, and David A. Landgrebe. AVIRIS hyperspectral image data set. https://purr.purdue.edu/publications/1947/1, 2022.
Blau et al. (2022) — Tom Blau, Edwin V. Bonilla, Iadine Chades, and Amir Dezfouli. Optimizing sequential experimental design with deep reinforcement learning. In: International Conference on Machine Learning (ICML), 2022.
Blumberg et al. (2022) — Stefano B. Blumberg, Hongxiang Lin, Francesco Grussu, Yukun Zhou, Matteo Figini, and Daniel C. Alexander. Progressive subsampling for oversampled data: application to quantitative MRI. 2022.
Breiman (2001) — Leo Breiman. Random forests. Machine Learning, 45:5–32, 2001.
Camilleri et al. (2021) — Romain Camilleri, Kevin Jamieson, and Julian Katz-Samuels. High-dimensional experimental design and kernel bandits. 2021.
Can & Ülgen (2019) — Osman Melih Can and Yekta Ülgen. Modeling diffuse reflectance spectra of donated blood with their hematological parameters. Clinical and Preclinical Optical Diagnostics II, 2019.
Castiglia et al. (2023) — Timothy Castiglia, Yi Zhou, Shiqiang Wang, Swanand Kadhe, Nathalie Baracaldo, and Stacy Patterson. LESS-VFL: Communication-efficient feature selection for vertical federated learning. In: International Conference on Machine Learning (ICML), 2023.
Cercignani & Alexander (2006) — Mara Cercignani and Daniel C. Alexander. Optimal acquisition schemes for in vivo quantitative magnetization transfer MRI. Magnetic Resonance in Medicine, 56(4):803–810, 2006.
Cercignani et al. (2018) — Mara Cercignani, Nicholas G. Dowell, and Paul S. Tofts. Quantitative MRI of the Brain: Principles of Physical Measurement. CRC Press, second edition, 2018.
Chen et al. (2017) — Jianbo Chen, Mitchell Stern, Martin J. Wainwright, and Michael I. Jordan. Kernel feature selection via conditional covariance minimization. In: Neural Information Processing Systems (NIPS), 2017.
Code Link — Code for this paper by Stefano B. Blumberg. https://github.com/sbb-gh/experimental-design-multichannel.
Cohen et al. (2023) — David Cohen, Tal Shnitzer, Yuval Kluger, and Ronen Talmon. Few-sample feature selection via feature manifold learning. In: International Conference on Machine Learning (ICML), 2023.
Connolly et al. (2023) — Bethany Connolly, Kim Moore, Tobias Schwedes, Alexander Adam, Gary Willis, Ilya Feige, and Christopher Frye. Task-specific experimental design for treatment effect estimation. In: International Conference on Machine Learning (ICML), 2023.
Covert et al. (2023) — Ian Covert, Wei Qiu, Mingyu Lu, Nayoon Kim, Nathan White, and Su-In Lee. Learning to maximize mutual information for dynamic feature selection. In: International Conference on Machine Learning (ICML), 2023.
Doudchenko et al. (2021) — Nick Doudchenko, Khashayar Khosravi, Jean Pouget-Abadie, Sebastien Lahaie, Miles Lubin, Vahab Mirrokni, Jann Spiess, and Guido Imbens. Synthetic design: An optimization approach to experimental design with synthetic controls. In: Neural Information Processing Systems (NeurIPS), 2021.
Essen et al. (2013) — David C. Van Essen, Stephen M. Smith, Deanna M. Barch, Timothy E. J. Behrens, Essa Yacoub, Kamil Ugurbil, and WU-Minn HCP Consortium. The WU-Minn Human Connectome Project: an overview. NeuroImage, 80:62–79, 2013.
Fabian et al. (2022) — Zalan Fabian, Berk Tinaz, and Mahdi Soltanolkotabi. HUMUS-Net: Hybrid unrolled multi-scale network architecture for accelerated MRI reconstruction. In: Neural Information Processing Systems (NeurIPS), 2022.
Ferizi et al. (2017) — Uran Ferizi, Benoit Scherrer, Torben Schneider, Mohammad Alipoor, Odin Eufracio, Rutger H. J. Fick, Rachid Deriche, Markus Nilsson, Ana K. Loya-Olivas, Mariano Rivera, Dirk H. J. Poot, Alonso Ramirez-Manzanares, Jose L. Marroquin, Ariel Rokem, Christian Pötter, Robert F. Dougherty, Ken Sakaie, Claudia Wheeler-Kingshott, Simon K. Warfield, Thomas Witzel, Lawrence L. Wald, José G. Raya, and Daniel C. Alexander. Diffusion MRI microstructure models with in vivo human brain connectome data: results from a multi-group comparison. NMR in Biomedicine, 30(9), 2017.
Fick et al. (2019) — Rutger Fick, Demian Wassermann, and Rachid Deriche. The Dmipy toolbox: Diffusion MRI multi-compartment modeling and microstructure recovery made easy. Frontiers in Neuroinformatics, 13(64), 2019.
Fontaine et al. (2021) — Xavier Fontaine, Pierre Perrault, Michal Valko, and Vianney Perchet. Online A-optimal design and active linear regression. In: International Conference on Machine Learning (ICML), 2021.
Foster et al. (2021) — Adam Foster, Desi R. Ivanova, Ilyas Malik, and Tom Rainforth. Deep adaptive design: Amortizing sequential Bayesian experimental design. In: Neural Information Processing Systems (NeurIPS), 2021.
Garyfallidis et al. (2014) — Eleftherios Garyfallidis, Matthew Brett, Bagrat Amirbekian, Ariel Rokem, Stefan van der Walt, Maxime Descoteaux, Ian Nimmo-Smith, and Dipy Contributors. DIPY, a library for the analysis of diffusion MRI data. Frontiers in Neuroinformatics, 8(8), 2014.
Glynn et al. (2020) — Peter W. Glynn, Ramesh Johari, and Mohammad Rasouli. Adaptive experimental design with temporal interference: A maximum likelihood approach. In: Neural Information Processing Systems (NeurIPS), 2020.
Grussu et al. (2017) — Francesco Grussu, Torben Schneider, Carmen Tur, Richard L. Yates, Mohamed Tachrount, Andrada Ianuş, Marios C. Yiannakas, Jia Newcombe, Hui Zhang, Daniel C. Alexander, Gabriele C. DeLuca, and Claudia A. M. Gandini Wheeler-Kingshott. Neurite dispersion: a new marker of multiple sclerosis spinal cord pathology? Annals of Clinical and Translational Neurology, 4(9), 2017.
Grussu et al. (2021) — Francesco Grussu, Stefano B. Blumberg, Marco Battiston, Lebina S. Kakkar, Hongxiang Lin, Andrada Ianuş, Torben Schneider, Saurabh Singh, Roger Bourne, Shonit Punwani, David Atkinson, Claudia A. M. Gandini Wheeler-Kingshott, Eleftheria Panagiotaki, Thomy Mertzanidou, and Daniel C. Alexander. Feasibility of data-driven, model-free quantitative MRI protocol design: Application to brain and prostate diffusion-relaxation imaging. Frontiers in Physics, 9:615, 2021.
Gudbjartsson & Patz (1995) — Hákon Gudbjartsson and Samuel Patz. The Rician distribution of noisy MRI data. Magnetic Resonance in Medicine, 34(6):910–914, 1995.
Guyon et al. (2002) — Isabelle Guyon, Jason Weston, Stephen Barnhill, and Vladimir Vapnik. Gene selection for cancer classification using support vector machines. Journal of Machine Learning Research, 46(1):389–422, 2002.
Gyori et al. (2022) — Noemi G. Gyori, Marco Palombo, Christopher A. Clark, Hui Zhang, and Daniel C. Alexander. Training data distribution significantly impacts the estimation of tissue microstructure with machine learning. Magnetic Resonance in Medicine, 87(2):932–947, 2022.
Hansen et al. (2022) — Derek Hansen, Brian Manzo, and Jeffrey Regier. Normalizing flows for knockoff-free controlled feature selection. In: Neural Information Processing Systems (NeurIPS), 2022.
He et al. (2005) — Xiaofei He, Deng Cai, and Partha Niyogi. Laplacian score for feature selection. In: Neural Information Processing Systems (NIPS), 2005.
Henriques (2018) — Rafael Neto Henriques. Advanced methods for diffusion MRI data analysis and their application to the healthy ageing brain. Ph.D. thesis, 2018.
Hutter et al. (2018) — Jana Hutter, Paddy J. Slator, Daan Christiaens, Rui Pedro Teixeira, Thomas Roberts, Laurence Jackson, Anthony N. Price, Shaihan Malik, and Joseph V. Hajnal. Integrated and efficient diffusion-relaxometry using ZEBRA. Scientific Reports, 8(1):1–13, 2018.
Imrie et al. (2022) — Fergus Imrie, Alexander Norcliffe, Pietro Liò, and Mihaela van der Schaar. Composite feature selection using deep ensembles. In: Neural Information Processing Systems (NeurIPS), 2022.
Ivanova et al. (2021) — Desi R. Ivanova, Adam Foster, Steven Kleinegesse, Michael U. Gutmann, and Tom Rainforth. Implicit deep adaptive design: Policy-based experimental design without likelihoods. In: Neural Information Processing Systems (NeurIPS), 2021.
Ivanova et al. (2023) — Desi R. Ivanova, Joel Jennings, Tom Rainforth, Cheng Zhang, and Adam Foster. CO-BED: Information-theoretic contextual optimization via Bayesian experimental design. In: International Conference on Machine Learning (ICML), 2023.
Jensen & Helpern (2010) — Jens H. Jensen and Joseph A. Helpern. MRI quantification of non-Gaussian water diffusion by kurtosis analysis. NMR in Biomedicine, 23(7):698–710, 2010.
Jet Propulsion Laboratory (2023) — Jet Propulsion Laboratory (JPL). AVIRIS web page. https://aviris.jpl.nasa.gov/, 2023.
Jiang et al. (2020) — Shali Jiang, Henry Chai, Javier Gonzalez, and Roman Garnett. BINOCULARS for efficient, nonmyopic sequential experimental design. In: International Conference on Machine Learning (ICML), 2020.
Johnston et al. (2019) — Edward W. Johnston, Elisenda Bonet-Carne, Uran Ferizi, Ben Yvernault, Hayley Pye, Dominic Patel, Joey Clemente, Wivijin Piga, Susan Heavey, Harbir S. Sidhu, Francesco Giganti, James O'Callaghan, Mrishta Brizmohun Appayya, Alistair Grey, Alexandra Saborowska, Sebastien Ourselin, David Hawkes, Caroline M. Moore, Mark Emberton, Hashim U. Ahmed, Hayley Whitaker, Manuel Rodriguez-Justo, Alexander Freeman, David Atkinson, Daniel Alexander, Eleftheria Panagiotaki, and Shonit Punwani. VERDICT MRI for prostate cancer: Intracellular volume fraction versus apparent diffusion coefficient. Radiology, 291(2):391–397, 2019.
Kaddour et al. (2020) — Jean Kaddour, Steindór Sæmundsson, and Marc Peter Deisenroth. Probabilistic active meta-learning. In: Neural Information Processing Systems (NeurIPS), 2020.
Kamiya et al. (2020) — Kouhei Kamiya, Masaaki Hori, and Shigeki Aoki. NODDI in clinical research. Journal of Neuroscience Methods, 346:108908, 2020. ISSN 0165-0270.
Karim et al. (2022) — Shahid Karim, Akeel Qadir, Umar Farooq, Muhammad Shakir, and Asif Laghari. Hyperspectral imaging: A review and trends towards medical imaging. Current Medical Imaging, 10, 2022.
Karras et al. (2018) — Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive growing of GANs for improved quality, stability, and variation. In: International Conference on Learning Representations (ICLR), 2018.
Khan et al. (2018) — Muhammad Khan, Hamid Khan, Adeel Yousaf, Khurram Khurshid, and Asad Abbas. Modern trends in hyperspectral image analysis: A review. IEEE Access, 6:14118–14129, 2018.
Kleinegesse & Gutmann (2020) — Steven Kleinegesse and Michael U. Gutmann. Bayesian experimental design for implicit models by mutual information neural estimation. In: International Conference on Machine Learning (ICML), 2020.
Knoll et al. (2020) — Florian Knoll, Tullie Murrell, Anuroop Sriram, Nafissa Yakubova, Jure Zbontar, Michael Rabbat, et al. Advancing machine learning for MR image reconstruction with an open competition: Overview of the 2019 fastMRI challenge. Magnetic Resonance in Medicine, 84(6):3054–3070, 2020.
Kohavi & John (1997) — Ron Kohavi and George H. John. Wrappers for feature subset selection. Artificial Intelligence, 97(1):273–324, 1997.
Kumagai et al. (2022) — Atsutoshi Kumagai, Tomoharu Iwata, and Yasutoshi Ida. Few-shot learning for feature selection with Hilbert-Schmidt independence criterion. In: Neural Information Processing Systems (NeurIPS), 2022.
Kuncheva et al. (2020) — Ludmila I. Kuncheva, Clare E. Matthews, Álvar Arnaiz-González, and Juan José Rodríguez Diez. Feature selection from high-dimensional data with very low sample size: A cautionary tale. arXiv preprint arXiv:2008.12025, 2020.
Lee (2022) — Changhee Lee. Code for self-supervision enhanced feature selection with correlated gates. https://github.com/chl8856/SEFS, git commit 21fe6d97cd98612e3d0eb5ce204d2d0e2e9deb5a, 2022.
Lee et al. (2022) — Changhee Lee, Fergus Imrie, and Mihaela van der Schaar. Self-supervision enhanced feature selection with correlated gates. In: International Conference on Learning Representations (ICLR), 2022.
Li et al. (2016) — Yifeng Li, Chih-Yu Chen, and Wyeth W. Wasserman. Deep feature selection: Theory and application to identify enhancers and promoters. Journal of Computational Biology, 23(5):322–336, 2016.
Lightley et al. (2022) — Jonathan Lightley, Frederik Görlitz, Sunil Kumar, Ranjan Kalita, Arinbjorn Kolbeinsson, Edwin Garcia, Yuriy Alexandrov, Vicky Bousgouni, Riccardo Wysoczanski, Peter Barnes, Louise Donnelly, Chris Bakal, Christopher Dunsby, Mark A. A. Neil, Seth Flaxman, and Paul M. W. French. Robust deep learning optical autofocus system applied to automated multiwell plate single molecule localization microscopy. Journal of Microscopy, 288(2):130–141, 2022.
Lindenbaum et al. (2021) — Ofir Lindenbaum, Uri Shaham, Erez Peterfreund, Jonathan Svirsky, Nicolas Casey, and Yuval Kluger. Differentiable unsupervised feature selection based on a gated Laplacian. In: Neural Information Processing Systems (NeurIPS), 2021.
Lu & Fei (2014) — Guolan Lu and Baowei Fei. Medical hyperspectral imaging: a review. Journal of Biomedical Optics, 19(1), 2014.
Lyle et al. (2023) — Clare Lyle, Arash Mehrjou, Pascal Notin, Andrew Jesson, Stefan Bauer, Yarin Gal, and Patrick Schwab. DiscoBAX: Discovery of optimal intervention sets in genomic experiment design. In: International Conference on Machine Learning (ICML), 2023.
Malkomes et al. (2021) — Gustavo Malkomes, Bolong Cheng, Eric H. Lee, and Mike McCourt. Beyond the Pareto efficient frontier: Constraint active search for multiobjective experimental design. In: International Conference on Machine Learning (ICML), 2021.
Manolakis et al. (2016) — Dimitris G. Manolakis, Ronald B. Lockwood, and Thomas W. Cooley. Hyperspectral Imaging Remote Sensing: Physics, Sensors, and Algorithms. Cambridge University Press, 2016.
Mehrjou et al. (2022) — Arash Mehrjou, Ashkan Soleymani, Andrew Jesson, Pascal Notin, Yarin Gal, Stefan Bauer, and Patrick Schwab. GeneDisco: A benchmark for experimental design in drug discovery. In: International Conference on Learning Representations (ICLR), 2022.
Mehta et al. (2022) — Viraj Mehta, Biswajit Paria, Jeff Schneider, Stefano Ermon, and Willie Neiswanger. An experimental design perspective on model-based reinforcement learning. In: International Conference on Learning Representations (ICLR), 2022.
Montgomery (2001) — Douglas C. Montgomery. Design and Analysis of Experiments. John Wiley & Sons, fifth edition, 2001.
Muckley et al. (2021) — Matthew J. Muckley, Benedikt Riemenschneider, Alaleh Radmanesh, Sooyoung Kim, Gukyeong Jeong, Jaeho Ko, et al. Results of the 2020 fastMRI challenge for machine learning MR image reconstruction. IEEE Transactions on Medical Imaging, 40(9):2306–2317, 2021.
MUDI Organizers (2022) — MUDI Organizers. MUlti-dimensional DIffusion (MUDI) MRI challenge 2019 data. https://www.developingbrain.co.uk/data/, 2022.
Mutny & Krause (2022) — Mojmir Mutny and Andreas Krause. Experimental design for linear functionals in reproducing kernel Hilbert spaces. In: Neural Information Processing Systems (NeurIPS), 2022.
Nandy et al. (2021) — Preetam Nandy, Divya Venugopalan, Chun Lo, and Shaunak Chatterjee. A/B testing for recommender systems in a two-sided marketplace. In: Neural Information Processing Systems (NeurIPS), 2021.
Panagiotaki et al. (2015a) — Eleftheria Panagiotaki, Rachel W. Chan, Nikolaos Dikaios, Hashim U. Ahmed, James O'Callaghan, Alex Freeman, David Atkinson, Shonit Punwani, David J. Hawkes, and Daniel C. Alexander. Microstructural characterization of normal and malignant human prostate tissue with vascular, extracellular, and restricted diffusion for cytometry in tumours magnetic resonance imaging. Investigative Radiology, 50(4):218–227, 2015a.
Panagiotaki et al. (2015b) — Eleftheria Panagiotaki, Andrada Ianuş, Edward Johnston, Rachel W. Chan, Nicola Stevens, David Atkinson, Shonit Punwani, David J. Hawkes, and Daniel C. Alexander. Optimised VERDICT MRI protocol for prostate cancer characterisation. In: International Society for Magnetic Resonance in Medicine (ISMRM), 2015b.
Panagiotaki et al. (2014) — Eleftheria Panagiotaki, Simon Walker-Samuel, Bernard Siow, Peter S. Johnson, Vineeth Rajkumar, Barbara R. Pedley, Mark F. Lythgoe, and Daniel C. Alexander. Noninvasive quantification of solid tumor microstructure using VERDICT MRI. Cancer Research, 74(7):1902–1912, 2014.
Peng et al. (2005) — Hanchuan Peng, Fuhui Long, and Chris Ding. Feature selection based on mutual information: criteria of max-dependency, max-relevance, and min-redundancy. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(8):1226–1238, 2005.
Pizzolato et al. (2020) — Marco Pizzolato, Marco Palombo, Elisenda Bonet-Carne, Francesco Grussu, Andrada Ianuş, Fabian Bogusz, Tomasz Pieciak, Lipeng Ning, Stefano B. Blumberg, Thomy Mertzanidou, Daniel C. Alexander, Maryam Afzali, Santiago Aja-Fernández, Derek K. Jones, Carl-Fredrik Westin, Yogesh Rathi, Steven H. Baete, Lucilio Cordero-Grande, Thilo Ladner, Paddy J. Slator, Daan Christiaens, Jean-Philippe Thiran, Anthony N. Price, Farshid Sepehrband, Fan Zhang, and Jana Hutter. Acquiring and predicting MUlti-dimensional DIffusion (MUDI) data: an open challenge. In: International Society for Magnetic Resonance in Medicine (ISMRM), 2020.
Pukelsheim (2006) — Friedrich Pukelsheim. Optimal Design of Experiments. Society for Industrial and Applied Mathematics, 2006.
Quinzan et al. (2023) — Francesco Quinzan, Ashkan Soleymani, Patrick Jaillet, Cristian R. Rojas, and Stefan Bauer. DRCFS: Doubly robust causal feature selection. In: International Conference on Machine Learning (ICML), 2023.
Ranzenberger & Snyder (2022) — Logan R. Ranzenberger and Travis Snyder. Diffusion Tensor Imaging. StatPearls Publishing, fifth edition, 2022.
Scikit-Learn (2023) — Scikit-Learn. Scikit-learn recursive feature elimination (RFE). https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.RFE.html, 2023.
Simchi-Levi & Wang (2023) — David Simchi-Levi and Chonghuan Wang. Pricing experimental design: Causal effect, expected revenue and tail risk. In: International Conference on Machine Learning (ICML), 2023.
Simmonds & Green (1996) — John J. Simmonds and Robert O. Green. Current status, performance and plans for the NASA airborne visible and infrared imaging spectrometer (AVIRIS). 1996. URL https://www.osti.gov/biblio/379497.
Singh et al. (2022) — Saurabh Singh, Harriet Rogers, Baris Kanber, Joey Clemente, Hayley Pye, Edward W. Johnston, Tom Parry, Alistair Grey, Eoin Dinneen, Greg Shaw, Susan Heavey, Urszula Stopka-Farooqui, Aiman Haider, Alex Freeman, Francesco Giganti, David Atkinson, Caroline M. Moore, Hayley C. Whitaker, Daniel C. Alexander, Eleftheria Panagiotaki, and Shonit Punwani. Avoiding unnecessary biopsy after multiparametric prostate MRI with VERDICT analysis: The INNOVATE study. Radiology, pp. 212536, 2022.
Sinkoe & Hahn (2017) — Andrew Sinkoe and Juergen Hahn. Optimal experimental design for parameter estimation of an IL-6 signaling model. Processes, 5(3), 2017.
Slator et al. (2021) — Paddy J. Slator, Marco Palombo, Karla L. Miller, Carl-Fredrik Westin, Frederik Laun, Daeun Kim, Justin P. Haldar, Dan Benjamini, Gregory Lemberskiy, Joao P. de Almeida Martins, and Jana Hutter. Combined diffusion-relaxometry microstructure imaging: Current status and future prospects. Magnetic Resonance in Medicine, 86(6):2987–3011, 2021.
Sokar et al. (2022) — Ghada Sokar, Zahra Atashgahi, Mykola Pechenizkiy, and Decebal Constantin Mocanu. Where to pay attention in sparse training for feature selection? In: Neural Information Processing Systems (NeurIPS), 2022.
Song et al. (2007) — Le Song, Alexander J. Smola, Arthur Gretton, Justin Bedo, and Karsten M. Borgwardt. Supervised feature selection via dependence estimation. In: International Conference on Machine Learning (ICML), 2007.
Song et al. (2012) — Le Song, Alexander J. Smola, Arthur Gretton, Justin Bedo, and Karsten M. Borgwardt. Feature selection via dependence maximization. Journal of Machine Learning Research, 13(5):1393–1434, 2012.
Stuart et al. (2019) — Mary B. Stuart, Andrew J. S. McGonigle, and Jon R. Willmott. Hyperspectral imaging in environmental monitoring: A review of recent developments and technological advances in compact field deployable systems. Sensors, 19(14):3071, 2019.
Stuart et al. (2020) — Mary B. Stuart, Leigh R. Stanger, Matthew J. Hobbs, Tom D. Pering, Daniel Thio, Andrew J. S. McGonigle, and Jon R. Willmott. Low-cost hyperspectral imaging system: Design and testing for laboratory-based environmental applications. Sensors, 20(11):3239, 2020.
Tae et al. (2018) — Woo Suk Tae, Byung Joo Ham, Sung Bom Pyun, Shin Hyuk Kang, and Byung Jo Kim. Current clinical applications of diffusion-tensor imaging in neurological disorders. Journal of Clinical Neurology, 14(2):129–140, 2018.
Teshnizi et al. (2020) — Ali Ahmadi Teshnizi, Saber Salehkaleybar, and Negar Kiyavash. LazyIter: A fast algorithm for counting Markov equivalent DAGs and designing experiments. In: International Conference on Machine Learning (ICML), 2020.
Thompson et al. (2017) — David R. Thompson, Joseph W. Boardman, Michael L. Eastwood, and Robert O. Green. A large airborne survey of Earth's visible-infrared spectral dimensionality. Optics Express, 25(8):9186–9195, 2017.
Tibshirani (1996) — Robert Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B (Methodological), pp. 267–288, 1996.
Tigas et al. (2022) — Panagiotis Tigas, Yashas Annadani, Andrew Jesson, Bernhard Schölkopf, Yarin Gal, and Stefan Bauer. Interventions, where and how? Experimental design for causal models at scale. In: Neural Information Processing Systems (NeurIPS), 2022.
Tigas et al. (2023) — Panagiotis Tigas, Yashas Annadani, Desi R. Ivanova, Andrew Jesson, Yarin Gal, Adam Foster, and Stefan Bauer. Differentiable multi-target causal Bayesian experimental design. In: International Conference on Machine Learning (ICML), 2023.
Waterhouse & Stoyanov (2022) — Dale J. Waterhouse and Danail Stoyanov. Optimized spectral filter design enables more accurate estimation of oxygen saturation in spectral imaging. Biomedical Optics Express, 13(4):2156–2173, 2022.
Wojtas (2021) — Maksymilian A.
Wojtas.Code for feature importance ranking for deep learning, git commit 836096edb9f822e509cadcf9cd2e7cc5fa2324cc.https://github.com/maksym33/FeatureImportanceDL, 2021. Wojtas & Chen (2020) โ Maksymilian A. Wojtas and Ke Chen.Feature importance ranking for deep learning.In: Neural Information Processing System (NeurIPS), 2020. Wu et al. (2019) โ Renjie Wu, Yuqi Li, Xijiong Xie, and Zhijie Lin.Optimized multi-spectral filter arrays for spectral reconstruction.Sensors, 19(13), 2019. Yamada et al. (2020) โ Yutaro Yamada, Ofir Lindenbaum, Sahand Negahban, and Yuval Kluger.Feature selection using stochastic gates.In: International Conference on Machine Learning (ICML), 2020. Yaman et al. (2022) โ Burhaneddin Yaman, Seyed Amir Hossein Hosseini, and Mehmet Akรงakaya.Zero-shot self-supervised learning for MRI reconstruction.In: International Conference on Learning Representations (ICLR), 2022. Zaballa & Hui (2023) โ Vincent D. Zaballa and Elliot E. Hui.Stochastic gradient bayesian optimal experimental designs for simulation-based inference.In: Differentiable Almost Everything Workshop of the International Conference of Machine Learning (ICML), 2023. Zbontar et al. (2018) โ Jure Zbontar, Florian Knoll, Anuroop Sriram, Tullie Murrell, Zhengnan Huang, Matthew J Muckley, et al.fastMRI: An open dataset and benchmarks for accelerated mri.arXiv preprint arXiv:1811.08839, 2018. Zhang et al. (2012) โ Hui Zhang, Torben Schneider, Claudia A. Wheeler-Kingshott, and Daniel C Alexander.NODDI: practical in vivo neurite orientation dispersion and density imaging of the human brain.NeuroImage, 61(4):1000โ1016, 2012. Zhang et al. (2022) โ Junzhe Zhang, Jin Tian, and Elias Bareinboim.Partial counterfactual identification from observational and experimental data.In: International Conference on Machine Learning (ICML), 2022. Zheng et al. 
(2020) โ Sue Zheng, David Hayden, Jason Pacheco, and John W Fisher III.Sequential bayesian experimental design with variable cost structure.In: Neural Information Processing Systems (NeurIPS), 2020. Appendix Structure
The appendices are structured as follows:

1. Appendix A provides comprehensive details on all approaches utilized in this paper, including the specific hyperparameters employed.
2. Appendix B provides supplementary experimental results.
3. Appendix C offers further experimental analysis of our method, TADRED.
4. Appendix D compares the computational cost of all the approaches used in this paper and outlines the computational resources employed.
5. Appendix E is a comprehensive description of prior work related to our problem.
6. Appendix F details all data for each experiment, along with specifics of each task.
Appendix A: Key Approaches, Hyperparameters, and Settings
This section describes the different supervised feature selection approaches used in this paper and details the choice of parameter settings within each.
General Experimental Settings
For every experiment comparing TADRED with baselines, we split the data into training, validation/development, and test sets; this is described in detail in section F. Following Lee et al. (2022) (the SSEFS paper), we conducted an extensive hyperparameter search for each approach on the validation set, for different experimental settings and subsample values C; this is described below for every approach. For fairness, each approach received the same number of evaluations on the validation set for each feature set size C, i.e. the same number of trials for model selection. The best model was then applied to the test set and we report its performance.
Other general hyperparameters are: batch size 1500, learning rate 10⁻⁴ (10⁻⁵ for the experiment in figure 2), the Adam optimizer, and default network weight initialization. The default option for early stopping used a patience of 20 epochs (i.e. training stops if validation performance does not improve within 20 epochs).
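The early-stopping rule above (patience of 20 epochs on validation performance) can be sketched as follows. This is an illustrative sketch, not code from the TADRED repository; `step_fn`, `val_fn`, and the function name are our own.

```python
def train_with_early_stopping(step_fn, val_fn, patience=20, max_epochs=10_000):
    """step_fn() runs one training epoch; val_fn() returns the validation loss.

    Stops once the validation loss has not improved for `patience` epochs.
    """
    best_val, best_epoch = float("inf"), 0
    for epoch in range(max_epochs):
        step_fn()
        val = val_fn()
        if val < best_val:
            best_val, best_epoch = val, epoch  # new best model found
        elif epoch - best_epoch >= patience:
            break  # no improvement for `patience` consecutive epochs
    return best_val, best_epoch
```

In practice one would also checkpoint the model weights at each new best epoch and restore them before test-set evaluation.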
TADRED - TAsk-DRiven Experimental Design for Multi-Channel Imaging

Figure 3: TADRED's structure. During training TADRED concurrently performs feature scoring, feature subsampling, and task execution. During training we progressively make the score sample-independent by setting α to 1. We score features with s̄ₜ ∈ ℝ^C̄ and remove features with low scores by setting the corresponding values of the mask to 0; in this example feature 2 is removed.

This subsection details the hyperparameters for the method we present in this paper: TADRED for TAsk-DRiven Experimental Design in imaging, as outlined in section 3. Figure 3 provides a graphical representation of TADRED's structure and computational graph.
We conducted a brief search for TADRED-specific hyperparameters and fixed these hyperparameters across all experiments. We set the numbers of epochs in the four-phase inner-loop training procedure as E₁ = 25, E₂ = E₁ + 10, E₃ = E₂ + 10. Following other baselines, we do not fix the total number of training epochs E, but keep training beyond e = E₃ in algorithm 2 phase 4 until the early stopping criteria (on the validation set) are met.
For fairness, when comparing TADRED with other supervised feature selection baselines, we chose a simple set of hyperparameters for the feature set sizes C₁, …, C_T and the number of outer loop steps T. Here we fixed T = 5 and C₁, C₂, C₃, C₄, C₅ = C̄, C̄/2, C̄/4, C̄/8, C̄/16. When comparing TADRED against two recent application-specific published results optimized by the authors of Blumberg et al. (2022); Waterhouse & Stoyanov (2022), we followed standard practice and performed a brief hyperparameter search on the validation set. We used C₁, …, C₉ = {1344, 500, 250, 100, 50, 40, 30, 20, 10}, T = 9 in table 2, and C₁, …, C₁₉ = [348] + [250::45::-50] + [45::8::-5] + [8, 6, 5, 4, 3, 2], T = 19 (notation is [start::stop::step]) in table 4.
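The default halving schedule C_t = C̄ / 2^(t-1) with T = 5 can be sketched as below. The function name is illustrative, not from the released code, and the released code's exact integer rounding may differ; rounding to the nearest integer reproduces the subset sizes reported in the appendix tables (e.g. C̄ = 220 gives 110, 55, 28, 14).

```python
def default_schedule(c_bar, t=5):
    """Feature set sizes C_1..C_T: repeatedly halve C_bar, rounding to nearest."""
    return [round(c_bar / 2 ** i) for i in range(t)]
```

For example, `default_schedule(3612)` yields the NODDI subset sizes 3612, 1806, 903, 452, 226 used in table 7.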
We perform a grid search to find the optimal network architecture hyperparameters for each task. The scoring network S and task network T have the same number of hidden layers ∈ {1, 2, 3} and number of units ∈ {30, 100, 300, 1000, 3000}, and for each combination we obtain task performance on the feature set sizes C₁, C₂, …, C_T. The best-performing network on the validation set is deployed on the test data.
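The 3 × 5 architecture grid described above (shared by several methods in this appendix) can be sketched as follows; the function names and the `val_mse` lookup are our own illustrative scaffolding, not the paper's code.

```python
import itertools


def architecture_grid():
    """All (hidden_layers, units) combinations searched for each network."""
    return list(itertools.product([1, 2, 3], [30, 100, 300, 1000, 3000]))


def best_architecture(val_mse):
    """val_mse maps (hidden_layers, units) -> validation MSE; pick the minimiser."""
    return min(architecture_grid(), key=lambda cfg: val_mse[cfg])
```

In the paper's protocol, each of the 15 configurations is trained to early stopping and `val_mse` would hold its validation error; the winning configuration is then evaluated once on the test set.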
Random Supervised Feature Selection
This baseline is inspired by the random design baselines used in experimental design papers, e.g. Foster et al. (2021); Ivanova et al. (2021). For a particular design size C, we repeat the following process: i) randomly select C features/channels; ii) perform grid search on the task network (mapping subsampled data X_D̄ to target Y), with number of hidden layers ∈ {1, 2, 3} and number of units ∈ {30, 100, 300, 1000, 3000}; iii) train until the early stopping criteria specified on the validation set are met; iv) evaluate the best trained model on the test set.
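Step i) of this baseline is a seeded draw of C distinct channel indices; a minimal sketch (function name ours):

```python
import random


def random_design(c_bar, c, seed=0):
    """Randomly choose C of the C_bar available channels (step i of the baseline)."""
    rng = random.Random(seed)  # seeded for reproducibility across runs
    return sorted(rng.sample(range(c_bar), c))
```

The selected indices then define which columns of the densely-sampled data are fed to the task network in steps ii)–iv).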
Self-Supervision Enhanced Feature Selection with Correlated Gates (SSEFS) Lee et al. (2022)
This approach has a lengthy hyperparameter search detailed in Appendix B of Lee et al. (2022), which consists of a three-phase procedure and four neural networks. Note that this requires multiple training steps, e.g. obtaining the results in table 2 requires >400 runs. SSEFS exploits task performance, self-supervision, additional unlabeled data, and correlated feature subsets. It scores the features, then subsequently trains a task-based network (analogous to T in TADRED) on the subsampled data. We use the official repository Lee (2022) and verified our implementation by replicating results in the paper Lee et al. (2022). The full optimization procedure follows Lee et al. (2022) and is split into: i) a self-supervision phase, ii) a supervision phase, iii) training on the selected features only.
The self-supervision phase finds the optimal encoder network hyperparameters. We follow Appendix B of Lee et al. (2022) and perform grid search. As for the other approaches in this paper, the encoder network, feature vector estimator network, and gate vector estimator network all have the same number of hidden layers ∈ {1, 2, 3} and number of units (including hidden dimension) ∈ {30, 100, 300, 1000, 3000}. Directly following Lee et al. (2022) table S.1, the other hyperparameters are α ∈ {0.01, 0.1, 1.0, 10, 100} and π ∈ {0.2, 0.4, 0.6, 0.8}. The self-supervisory dataset is the input data X_D̄. On the best validation performance (with early stopping), this returns a trained encoder network, cached for the supervision phase.
The supervision phase scores the features. The pretrained encoder is loaded from the previous phase. We then perform grid search, where the predictor network has number of hidden layers ∈ {1, 2, 3} and number of units ∈ {30, 100, 300, 1000, 3000}, and, following Lee et al. (2022) table S.1, β ∈ {0.01, 0.1, 1.0, 10, 100}. On the best validation performance with early stopping, the process returns a score for each feature.
The final phase is repeated for different subset sizes C. We extract the C highest-scored features from the previous phase and perform grid search on the task network (mapping subsampled data X_D̄ to target Y), with number of hidden layers ∈ {1, 2, 3} and number of units ∈ {30, 100, 300, 1000, 3000}. Training runs until early stopping on the validation set. The best trained model is evaluated on the test set.
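Extracting "the C highest-scored features" is a generic score-based selection step; a minimal numpy sketch (the function name is ours, not from the SSEFS repository):

```python
import numpy as np


def top_c_features(scores, c):
    """Indices of the C highest-scored features, best first."""
    return np.argsort(scores)[::-1][:c].tolist()
```

The returned indices define the columns of X_D̄ kept for the final task-network training.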
Feature Importance Ranking for Deep Learning (FIRDL) Wojtas & Chen (2020)
This approach has a three-stage procedure detailed in Appendix D of Wojtas & Chen (2020) and uses two neural networks. One network scores the masks (analogous to the mask m in TADRED), and the other is a task network trained to perform the task on the subsampled data (analogous to T in TADRED). We use the official repository Wojtas (2021) and verified our implementation by replicating results in the paper Wojtas & Chen (2020). The following process is repeated for different feature subset sizes C.
We perform a grid search to find the optimal hyperparameters. The operator network (analogous to the task network T in this paper) and the selector network have the same number of hidden layers ∈ {1, 2, 3} and number of units ∈ {30, 100, 300, 1000, 3000}, with s_p = 5 and E₁ = 15000. The joint training uses early stopping on the validation set and returns an optimal feature set of size C and a trained operator network. The best-performing operator network on the validation set is deployed on the test data.
Appendix B: Additional Results
This section provides additional results supporting the experiments presented in the main paper.
Table 7 contains results that repeat the experiment in table 2 using the NODDI model Zhang et al. (2012) instead of VERDICT. The first baseline uses classical Fisher-matrix experimental design Alexander (2008) to compute the design D from Zhang et al. (2012), where C = 99. For the supervised feature selection approaches we use densely-sampled designs D̄ where C̄ = 3612 Ferizi et al. (2017). Similar to the results in table 2, table 7 shows TADRED outperforms classical experimental design with C set to 99 following the classical approach used in current practice. In addition, TADRED outperforms the supervised feature selection baselines where C = C̄/2, C̄/4, C̄/8, C̄/16. The optimized designs enable us to estimate the widely used NODDI parameters in shorter scan times, opening the potential for a wider range of clinical applications. All information on designs, models, and data is in section F.1.
Table 8 shows extra results within the experiment documented in figure 2. We consider an additional feature set subsample size C = 36, which extends the results in figure 2 and shows TADRED outperforms the baselines on 17/18 comparisons on clinically useful downstream metrics. This is beneficial as pressure on time in clinical MRI protocols is intense: many different MR contrasts are informative, but patient time in the scanner is limited. Therefore, shorter acquisition protocols for these widely informative downstream metrics (parametric maps) enable their exploitation in a wider range of clinical studies and applications.
Table 9 shows additional results on the MUDI data in table 2. This experiment compares TADRED with the supervised feature selection baselines following the settings of the original MUDI challenge. Evaluation uses the MSE metric as in the original challenge. Further details are in appendix F.2.
Table 10 shows additional results on the AVIRIS data presented in table 4. Here, we only use data from the north-to-south flight. Improvements of TADRED over the supervised feature selection baselines are similar to those in table 4.
Table 7: MSE ×10² between estimated NODDI model parameters and the ground-truth parameters used to simulate the data. Classical experimental design is from Zhang et al. (2012) and uses a Fisher-matrix approach. Additional results for table 2.

| Method | Design size | MSE ×10² |
| --- | --- | --- |
| Classic Experimental Design | C = 99 | 8.00 |
| TADRED | C = 99, C̄ = 3612 | 4.51 |

| C̄ = 3612, C = | 1806 | 903 | 452 | 226 |
| --- | --- | --- | --- | --- |
| Random | 2.99 | 3.34 | 3.88 | 4.31 |
| SSEFS | 2.95 | 3.39 | 3.73 | 4.39 |
| FIRDL | 4.21 | 4.61 | 4.96 | 5.14 |
| TADRED | 2.59 | 2.92 | 3.33 | 3.85 |

Table 8: MSE for downstream MRI metrics (see appendix F.3) estimated from the full set of C̄ = 288 measurements on HCP data, and from C̄ measurements reconstructed from C = 36 measurements. Additional results to figure 2.

| DTI | FA | MD | AD | RD |
| --- | --- | --- | --- | --- |
| Random | 1.17 | 3.13 | 9.3 | 3.75 |
| SSEFS | 5.96 | 10.4 | 43.7 | 14.2 |
| FIRDL | 10.4 | 68.8 | 106 | 81.4 |
| TADRED | 0.94 | 1.73 | 8.15 | 1.50 |

| DKI / MSDKI | MK | AK | RK | MSD | MSK |
| --- | --- | --- | --- | --- | --- |
| Random | 6.37 | 6.28 | 13.3 | 2.50 | 4.15 |
| SSEFS | 8.74 | 7.27 | 16.9 | 3.87 | 10.8 |
| FIRDL | 9.39 | 10.3 | 18.6 | 11.1 | 8.82 |
| TADRED | 6.15 | 5.92 | 12.6 | 2.75 | 3.78 |

Table 9: MSE between C̄ = 1344 reconstructed measurements and the C̄ ground-truth measurements on Multi-Diffusion challenge subjects. Additional results to table 2.

| C | 500 | 250 | 100 | 50 |
| --- | --- | --- | --- | --- |
| Random | 0.93 | 1.41 | 2.12 | 5.23 |
| SSEFS | 0.63 | 0.86 | 1.24 | 1.61 |
| FIRDL | 1.67 | 1.72 | 2.17 | 2.34 |
| TADRED | 0.21 | 0.44 | 0.94 | 1.34 |

Table 10: Performance comparison of feature selection approaches for remote sensing AVIRIS hyperspectral data (east-to-west flight of Indian Pine), MSE between C̄ = 220 reconstructed and C̄ ground-truth measurements. Additional results to table 4.

| C | 110 | 55 | 28 | 14 |
| --- | --- | --- | --- | --- |
| Random | 1.22 | 1.76 | 2.91 | 5.61 |
| SSEFS | 1.36 | 3.11 | 3.77 | 7.61 |
| FIRDL | 6.34 | 6.68 | 7.16 | 7.78 |
| TADRED | 0.60 | 1.42 | 2.33 | 4.49 |
Appendix C: Further Analysis

C.1 Analyzing the Effect of Randomness on the Feature Set Chosen
Table 11: Mean Jaccard index between chosen measurements, across 10 random seeds; experimental settings as in the table 2 VERDICT simulations.

| C | 110 | 55 | 28 | 14 |
| --- | --- | --- | --- | --- |
| Random | 32.8 | 15.2 | 6.82 | 2.87 |
| SSEFS | 81.2 | 71.4 | 62.2 | 75.9 |
| FIRDL | 34.3 | 18.1 | 48.8 | 41.0 |
| TADRED | 74.3 | 82.1 | 84.6 | 59.1 |

Table 12: Comparison of the choice of X_D̄^fill. Experimental settings follow table 2.

| Method | X_D̄^fill | C = 110 | 55 | 28 | 14 |
| --- | --- | --- | --- | --- | --- |
| SSEFS | data mean | 1.06 | 1.28 | 1.89 | 4.58 |
| FIRDL | zeros | 2.22 | 2.14 | 3.09 | 4.05 |
| TADRED | data median | 1.03 | 1.19 | 1.80 | 2.55 |
| TADRED | data mean | 1.03 | 1.20 | 1.79 | 2.80 |
| TADRED | zeros | 1.03 | 1.19 | 1.79 | 2.51 |
Table 11 examines how changing the random seed, which affects network initialization and data shuffling, impacts the feature set chosen. Results show TADRED performs favorably compared to alternative approaches and mostly chooses the same features.
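The stability metric here, the mean Jaccard index between the feature sets chosen under different seeds, can be computed pairwise as sketched below (function name ours; the table reports values as percentages, i.e. ×100).

```python
from itertools import combinations


def mean_jaccard(feature_sets):
    """Mean Jaccard index |A ∩ B| / |A ∪ B| over all pairs of selected-feature sets."""
    pairs = list(combinations([set(fs) for fs in feature_sets], 2))
    return sum(len(a & b) / len(a | b) for a, b in pairs) / len(pairs)
```

With 10 seeds this averages over the 45 seed pairs; a value near 1 (100%) means the method selects essentially the same channels regardless of seed.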
C.2 Evaluation of the Choice of Feature Fill X_D̄^fill

Table 12 examines the effect of varying the values X_D̄^fill that fill the unsubsampled features. FIRDL used zeros for its equivalent of X_D̄^fill and SSEFS used the data mean (per channel/feature). Results show that even if we set the values of X_D̄^fill to those of the baselines, TADRED retains large improvements over the baselines.
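The three fill strategies compared in table 12 (zeros, per-channel mean, per-channel median) amount to overwriting the dropped columns, as in this hedged numpy sketch (function name ours; in practice the mean/median statistics should come from the training set only).

```python
import numpy as np


def fill_unselected(x, keep, fill="zeros"):
    """Overwrite channels not in `keep` with a per-channel fill value."""
    out = x.copy()
    drop = np.setdiff1d(np.arange(x.shape[1]), keep)
    if fill == "zeros":
        out[:, drop] = 0.0
    elif fill == "mean":
        out[:, drop] = x[:, drop].mean(axis=0)   # per-channel mean, broadcast over rows
    elif fill == "median":
        out[:, drop] = np.median(x[:, drop], axis=0)
    else:
        raise ValueError(f"unknown fill: {fill}")
    return out
```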
C.3 How does the Size of the Densely-Sampled Design Affect Performance?

Figure 4: Analyzing performance on different densely-sampled designs D̄ where |D̄| = C̄ and C = 14. Settings follow table 2.

We examine how varying the size of the densely-sampled design D̄ (used to create X_D̄) affects performance. Across 10 random seeds, we randomly subsample the design from Panagiotaki et al. (2015b) to create a custom D̄ with C̄ elements. Training uses fixed network sizes for a single subsampling rate C₂ = C = 14, and 10% of the training data within the experimental settings of table 2. Results are in figure 4 and exemplify typical behavior: while performance is reasonably stable for large C̄, a phase change occurs as C̄ nears C and performance decreases rapidly, as the set of samples to choose from becomes too sparse.
C.4 TADRED Variant with Random Selection

We tested a modified training procedure that works in the "same manner" as the original implementation of TADRED whilst the scores chosen are random. Across different modifications, in the settings of table 6, results (MSE) are more than 10% worse, even worse than "w/o iterative subsampling". Thus, although the "gradual aspect" of TADRED's training procedure improves performance (line 2 of the ablation study in table 6 already demonstrates this: the "less gradual" scenario without iterative subsampling decreases performance), the learning of the scoring network works as intended and further improves results.
Appendix D: Computational Cost of Different Approaches and Infrastructure
It is difficult to compare the computational cost of TADRED against SSEFS and FIRDL. The official implementations, described in appendix A, use different machine learning frameworks and all use customized early stopping. In particular, SSEFS has a three-stage procedure (the first two stages are large hyperparameter searches) completed consecutively; TADRED does not require this. TADRED and FIRDL each train two distinct networks whilst SSEFS uses four; as such, training costs are somewhat comparable if network sizes are taken to be the same. Practical requirements in all cases were reasonable, and training for all methods for each experiment completed within 24 hours. As an example, we compare the time to run the various methods for the results in table 6 per C, with network sizes fixed and no hyperparameter search over different network sizes. Training times are: random supervised feature selection (baseline) 505 s; SSEFS (baseline) 1934 s (using only a single run per seed for the first two stages; as previously noted, for other results in the paper this is much slower, as the method proposes a computationally expensive sequential hyperparameter search); FIRDL (baseline) 1756 s; TADRED (the new approach) 1988 s. The random supervised feature selection baseline is by far the most computationally economical, as we expect, because it uses no iterative search. TADRED's computational cost is similar to the two state-of-the-art supervised feature selection baselines, SSEFS and FIRDL.
Exploratory analysis and development was conducted on a mid-range (as of 2023) machine with an AMD Ryzen Threadripper 2950X CPU and a single Titan V GPU. All experimental results reported in this paper were computed on low-to-mid-range (as of 2023) graphics processing units (GPUs): GTX 1080 Ti, Titan Xp, Titan X, Titan V, RTX 2080 Ti. We ran jobs on a high-performance computing cluster shared with other users, allowing multiple jobs to run in parallel.
Appendix E: Extended Related Work
This section provides further information supplementing section 2, detailing previous work related to our problem.
Classical and Other Recent Supervised Feature Selection Approaches Supervised feature selection approaches are either i) "filter methods", which select features using some proxy metric independent of the final task, ii) "wrapper methods", which use the task to evaluate feature set performance, or iii) "embedded methods", which couple the feature selection with the task training. The embedded methods FIRDL Wojtas & Chen (2020) and SSEFS Lee et al. (2022) are state-of-the-art, outperforming classical approaches, e.g. recursive feature elimination (RFE)-original Guyon et al. (2002), BAHSIC Song et al. (2007; 2012), mRMR Peng et al. (2005), CCM Chen et al. (2017), RF Breiman (2001), DFS Li et al. (2016), LASSO Tibshirani (1996), L-Score He et al. (2005), and recent deep learning-based CE Abid et al. (2019), STG Yamada et al. (2020), DUFS Lindenbaum et al. (2021). More recent approaches extend the supervised feature selection paradigm to limit the false discovery rate Hansen et al. (2022), to few-shot learning Kumagai et al. (2022), to discovering groups of predictive features Imrie et al. (2022), to the unsupervised setting Sokar et al. (2022), to few-sample classification problems Cohen et al. (2023), to dynamic feature selection Covert et al. (2023), to federated learning Castiglia et al. (2023), and to identifying high-dimensional causal features Quinzan et al. (2023). They are not designed for the standard regression-based supervised feature selection problem considered in this paper.
Other Recent Experimental Design Approaches Techniques for experimental design have been developed for causal modeling Tigas et al. (2022); Zhang et al. (2022); Teshnizi et al. (2020), linear models Fontaine et al. (2021); Mutny & Krause (2022), online learning Arbour et al. (2022), active learning Kaddour et al. (2020), drug discovery Mehrjou et al. (2022), reinforcement learning Mehta et al. (2022), A/B testing Nandy et al. (2021), panel-data settings Doudchenko et al. (2021), bandit problems Camilleri et al. (2021), balancing competing objectives with uncertainty Malkomes et al. (2021), temporal treatment and control Glynn et al. (2020), causal discovery when interventions can be costly or risky Tigas et al. (2023), designing pricing experiments Simchi-Levi & Wang (2023), contextual optimization for Bayesian experimental design Ivanova et al. (2023), genomics Lyle et al. (2023), treatment effects in large randomized trials Connolly et al. (2023), and learning causal models with Bayesian approaches Annadani et al. (2023). These are not applicable to the problem setting we consider. Zheng et al. (2020); Kleinegesse & Gutmann (2020); Jiang et al. (2020) are older sequential experimental design approaches, whilst Zaballa & Hui (2023) is contemporary with this work; they face the same issues as Blau et al. (2022); Foster et al. (2021); Ivanova et al. (2021) (discussed in section 2), which focus on estimating model parameters and are mostly demonstrated on small-scale problems that do not scale up to the high-dimensional problems we face in experimental design for image-channel selection.
Experimental Design in qMRI In qMRI the design D is known as an "acquisition scheme". One standard task is "parameter mapping": first estimating biologically-informative model parameters by voxel-wise model fitting, then obtaining downstream metrics Alexander et al. (2019). This provides information that is not visible directly from the images, such as microstructural properties of tissue. However, acquisition time (corresponding to C = |D|) is limited by cost and by the ability of (often sick) subjects to remain motionless in the noisy and claustrophobic environment of the scanner. Thus experimental design can be crucial to support the most accurate image-driven diagnosis, prognosis, or treatment choices. Many clinical scenarios use a D based on intuition loosely guided by understanding of the physical systems under examination, but this can lead to highly suboptimal designs, particularly for complex models. However, some studies optimize the design using the Fisher information matrix, e.g. Alexander (2008); Cercignani & Alexander (2006).
Lengthy MRI acquisitions corresponding to D̄, which enable our new experimental design paradigm, are made easily on a few subjects, but are not feasible in routine patient imaging. However, such lengthy acquisitions are often made in the design phase of quantitative imaging techniques, e.g. as in Ferizi et al. (2017). We note also that several distinct experimental design problems arise in MRI. Here we focus on estimating per-voxel parameter values, but others, e.g. Zbontar et al. (2018); Knoll et al. (2020); Muckley et al. (2021); Fabian et al. (2022); Yaman et al. (2022), focus on how best to subsample k-space. Our approach is complementary to and may be combined with those: they expedite the acquisition of each individual channel; we identify a compact/economical set of channels.
Experimental Design in Hyperspectral Imaging Hyperspectral imaging (a.k.a. imaging spectroscopy) obtains pixel-wise information of an object-of-interest across multiple wavelengths of the electromagnetic spectrum using specialized hardware Manolakis et al. (2016). This produces an "image cube": a 2D image with C channels; as in qMRI, the image channels correspond to the measurements. Experimental design involves choosing a design consisting of wavelengths and/or filters Arad & Ben-Shahar (2017); Waterhouse & Stoyanov (2022); Wu et al. (2019), which controls the image channels; most current practice uses uniform spacing for the wavelengths Thompson et al. (2017). For our paradigm, expensive devices can acquire large numbers of images with different spectral sensitivity simultaneously to provide training data for the design of much cheaper deployable devices Stuart et al. (2020). Recovering high-quality information from the few wavelengths chosen for particular applications by experimental design reduces acquisition cost, increases acquisition speed, avoids misalignment, reduces storage requirements, and speeds up clinical adoption. Hyperspectral imaging has many applications Khan et al. (2018), spanning multiple modalities in medical imaging Lu & Fei (2014); Karim et al. (2022), remote sensing Baumgardner et al. (2015), and environmental monitoring Stuart et al. (2019).
Appendix F: Data and Task Evaluation

Table 13: Summary of data used in this paper.

| Name | Results | Data Type | Channels C̄ | Target Regressors | Pixel/Voxel Size | No. Independent Variables | No. pixels/voxels N ×10³ (Train / Val / Test) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| VERDICT | Table 2 | Simulated qMRI | 220 | 8 | – | yᵢ ∈ ℝ⁷ | 1000 / 100 / 100 |
| NODDI | Table 7 | Simulated qMRI | 3612 | 7 | – | yᵢ ∈ ℝ⁷ | 100 / 10 / 10 |
| MUDI | Table 2 | qMRI Scan | 1344 | 1344 | 2.5 mm³ | yᵢ ∈ ℝ⁶ | 321 / 132 / 105 |
| HCP | Figure 2 | qMRI Scan | 288 | 288 | 1.25 mm³ | yᵢ ∈ ℝ⁴ | 2182 / 774 / 674 |
| Indian Pine | Table 4 | Remote Sensing Hyperspectral | 220 | 220 | 20 m² | yᵢ ∈ ℝ | 1480 / 164 / 1135 |
| Oxygen Saturation | Table 4 | Simulated Hyperspectral | 348 | 2 | – | yᵢ ∈ ℝ² | 0.4 / 0.044 / 10 |
This section includes additional details about the experimental data. Table 13 provides a summary, figure 6 visualizes various examples, and figure 5 shows a correlation plot of the measurements/channels/features.

We follow Grussu et al. (2021) and normalize each channel/measurement/feature by dividing by its 99th percentile value calculated from the training set. This is applied to both the input and output of the neural network.
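This per-channel 99th-percentile normalization can be sketched as follows (function name ours; note the percentile is computed on the training split only, then reused for validation/test data).

```python
import numpy as np


def percentile_normalize(x_train, x_other):
    """Divide each channel (column) by its 99th percentile from the training set."""
    p99 = np.percentile(x_train, 99, axis=0)  # one scale per channel
    return x_train / p99, x_other / p99
```

Reusing the training-set percentile for the other splits avoids leaking test-set statistics into the normalization.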
Figure 5: Correlation coefficient between the measurements/features/channels of the data.

F.1 Simulations with the VERDICT and NODDI Biophysical Models

Table 14: Parameter ranges for simulating synthetic VERDICT and NODDI model data.

| VERDICT Model Parameter | Minimum | Maximum |
| --- | --- | --- |
| f_I | 0.01 | 0.99 |
| f_v | 0.01 | 0.99 |
| D_v (μm² ms⁻¹) | 3.05 | 10 |
| R (μm) | 0.01 | 20 |
| n | [-1 -1 -1] | [1 1 1] |

| NODDI Model Parameter | Minimum | Maximum |
| --- | --- | --- |
| f_ic | 0.01 | 0.99 |
| f_iso | 0.01 | 0.99 |
| ODI | 0.01 | 0.99 |
| n | [-1 -1 -1] | [1 1 1] |
This section describes the the VERDICT and NODDI models and the experimental settings used in tables 2, 8. Exact code to perform the simulations is in Code Link.
The VERDICT (Vascular, Extracellular and Restricted Diffusion for Cytometry in Tumors) model Panagiotaki et al. (2014), maps histological features of solid-cancer tumors particularly for early detection and classification of prostate cancer Panagiotaki et al. (2015a); Johnston et al. (2019); Singh et al. (2022). The VERDICT model includes parameters: ๐ ๐ผ the intra-cellular volume fraction, ๐ ๐ the vascular volume fraction, ๐ท ๐ฃ the vascular perpendicular diffusivity, ๐ the mean cell radius, and n - a 3D vector defining mean local vascular orientation.
The NODDI (Neurite Orientation Dispersion and Density Imaging) model Zhang et al. (2012) maps parameters of the cellular composition of brain tissue and is widely used in neuroimaging studies in neuroscience, such as the UK Biobank study Alfaro-Almagro et al. (2018), and in neurology, e.g. in Alzheimer's disease Kamiya et al. (2020) and multiple sclerosis Grussu et al. (2017). The NODDI model includes tissue parameters: f_ic, the intra-cellular volume fraction; f_iso, the isotropic volume fraction; the orientation dispersion index (ODI), which reflects the level of variation in neurite orientation; and n, a 3D vector defining the mean local fiber orientation.
To conduct the simulations on the VERDICT and NODDI models, we employ the widely-used, open-source dmipy toolbox Fick et al. (2019). The code is available: Code Link. In each case, data simulation uses a known, fixed acquisition scheme, i.e. experimental design, in combination with a set of ground-truth model parameters. We chose the ground-truth model parameters {β_1, …, β_N} for voxels/samples i = 1, …, N by uniformly sampling parameter combinations from the bounds given in table 14. We chose these bounds as they approximate the physically feasible limits of the parameters.
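A minimal sketch of this uniform sampling step, using the VERDICT bounds from table 14 (four scalar parameters plus the 3D orientation vector n); variable and function names are illustrative assumptions, not the released simulation code:

```python
import numpy as np

# Lower/upper bounds from table 14 for the VERDICT parameters:
# f_I, f_V, D_v, R, and the three components of n.
LOW = np.array([0.01, 0.01, 3.05, 0.01, -1.0, -1.0, -1.0])
HIGH = np.array([0.99, 0.99, 10.0, 20.0, 1.0, 1.0, 1.0])

def sample_ground_truth(num_samples, seed=None):
    """Draw num_samples parameter vectors uniformly within the bounds."""
    rng = np.random.default_rng(seed)
    # Broadcasting draws each column from its own [low, high) interval
    return rng.uniform(LOW, HIGH, size=(num_samples, LOW.size))

betas = sample_ground_truth(1000, seed=0)
```

Each row of `betas` would then be passed to the forward model (here, via dmipy) together with the fixed acquisition scheme to synthesize one voxel's signal.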
The VERDICT data has number of samples N = 1000K, 100K, 100K in the train, validation, and test splits, with target data Y ∈ ℝ^(N×8), β_i ∈ ℝ^8, i = 1, …, N. The classical experimental design approach yields an acquisition scheme derived from the Fisher information matrix Panagiotaki et al. (2015b), and here X_D ∈ ℝ^(N×20), C = 20. The approaches in supervised feature selection (including TADRED) also use a densely-sampled empirical acquisition scheme, designed specifically for the VERDICT protocol from Panagiotaki et al. (2015a), and here X_D̄ ∈ ℝ^(N×220) with C̄ = 220 measurements.
The NODDI data has number of samples N = 100K, 10K, 10K in the train, validation, and test splits, with target data Y ∈ ℝ^(N×7), β_i ∈ ℝ^7, i = 1, …, N. The classical experimental design approach yields an acquisition scheme derived from the Fisher information matrix Zhang et al. (2012), and so X_D ∈ ℝ^(N×99), C = 99. The approaches in supervised feature selection use a densely-sampled empirical acquisition scheme from an extremely rich acquisition from Ferizi et al. (2017). This was designed for the ISBI 2015 White Matter Challenge, which aimed to collect the richest possible data to rank biophysical models, and required a single subject to remain motionless for two uncomfortable back-to-back 4-hour scans. Here X_D̄ ∈ ℝ^(N×3612) with C̄ = 3612 measurements.
We added Rician noise to all simulated signals, which is standard for MRI data Gudbjartsson & Patz (1995). The signal-to-noise ratio of the unweighted signal is 50, which is representative of clinical qMRI.
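A sketch of adding Rician noise at a given SNR, following the standard magnitude-of-complex-Gaussian construction Gudbjartsson & Patz (1995); the function name and interface are illustrative assumptions:

```python
import numpy as np

def add_rician_noise(signal, snr=50.0, s0=1.0, seed=None):
    """Corrupt a noiseless MRI signal with Rician noise.

    The magnitude of a complex signal whose real and imaginary parts carry
    independent Gaussian noise (std sigma) is Rician distributed. Here sigma
    is set from the unweighted signal s0 and the target SNR: sigma = s0/snr.
    """
    rng = np.random.default_rng(seed)
    sigma = s0 / snr
    real = signal + rng.normal(0.0, sigma, signal.shape)
    imag = rng.normal(0.0, sigma, signal.shape)
    return np.sqrt(real**2 + imag**2)

# Example: noiseless signal of 0.5 relative to s0 = 1, SNR = 50
noisy = add_rician_noise(np.full(10000, 0.5), snr=50.0, seed=0)
```

Note that the resulting noise is non-negative and slightly biases the signal upward at low SNR, which is why Rician (rather than Gaussian) noise is the appropriate model for magnitude MRI data.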
F.2 MUlti-DIffusion (MUDI) Challenge Data
Data used in tables 2, 10 are images from five in-vivo human subjects; they are publicly available MUDI Organizers (2022) and were acquired with the state-of-the-art ZEBRA sequence Hutter et al. (2018). This diffusion-relaxation MRI dataset has a 6D acquisition parameter space p ∈ ℝ^6: echo time (TE), inversion time (TI), b-value, and b-vector directions in 3 dimensions: g_x, g_y, g_z. The data have 2.5mm isotropic resolution and a field of view of 220×230×140 mm, and comprise 5 3D brain volumes (i.e. images) with C̄ = 1344 measurements/channels, which here are unique diffusion-, T2*-, and T1-weighting contrasts. More information is in Hutter et al. (2018); Pizzolato et al. (2020). Each subject has an associated brain mask; after removing outlier voxels this resulted in 104520, 110420, 105743, 132470, 105045 voxels for respective subjects 11, 12, 13, 14, 15. For the experiment in table 2 we follow Blumberg et al. (2022) and perform 5-fold cross-validation on the 5 subjects. For the experiment in table 10, we followed the original challenge Pizzolato et al. (2020) and took subjects 11, 12, 13 as the training and validation set and subjects 14, 15 as the unseen test set, where 90%/10% of the training/validation set voxels were used for training and validation, respectively.
F.3 Human Connectome Project (HCP) Test-Retest Data
This section describes the data and model fitting procedure used in figure 2 and table 8.
This section utilizes WU-Minn Human Connectome Project (HCP) diffusion data, which is publicly available at www.humanconnectome.org (Test Retest Data Release, release date: Mar 01, 2017) Essen et al. (2013). The data comprise C̄ = 288 volumes (i.e. measurements/channels): 18 b=0 s mm⁻² (i.e. non-diffusion-weighted) volumes, 90 gradient directions for b=1000 s mm⁻², 90 directions for b=2000 s mm⁻², and 90 directions for b=3000 s mm⁻². We used 3 scans for training (ID numbers 103818_1, 105923_1, 111312_1), one scan for validation (114823_1), and one scan for testing (115320_1), which produced numbers of samples N = 708724 + 791369 + 681650 = 2181743, 774149, 674404 for the respective splits. We used only voxels inside the provided brain mask and normalized the data voxelwise with a standard technique in MRI: dividing all of a voxel's measurements by the mean signal in that voxel's b=0 volumes. Undefined voxels were then removed.
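A minimal sketch of this voxelwise b=0 normalization, assuming the masked data are flattened to shape (voxels, volumes); the function name is illustrative:

```python
import numpy as np

def normalize_by_b0(data, b0_mask):
    """Divide every measurement in a voxel by that voxel's mean b=0 signal.

    data: array of shape (num_voxels, num_volumes).
    b0_mask: boolean array of shape (num_volumes,) marking the
    non-diffusion-weighted (b=0) volumes.
    Voxels with a non-positive mean b=0 signal would give undefined
    values, so they are dropped.
    """
    b0_mean = data[:, b0_mask].mean(axis=1)
    valid = b0_mean > 0
    return data[valid] / b0_mean[valid, None]

# Toy example: 3 voxels, 4 volumes, first volume is b=0;
# the last voxel has zero signal and is removed.
data = np.array([[2.0, 1.0, 0.5, 0.2],
                 [4.0, 2.0, 1.0, 0.4],
                 [0.0, 0.0, 0.0, 0.0]])
normed = normalize_by_b0(data, np.array([True, False, False, False]))
```

After normalization every voxel's mean b=0 value is 1, making signals comparable across voxels regardless of proton density or coil sensitivity.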
Diffusion tensor imaging (DTI) Basser et al. (1994), diffusion kurtosis imaging (DKI) Jensen & Helpern (2010), and Mean Signal DKI (MSDKI) Henriques (2018) are widely-used qMRI methods. Like NODDI and VERDICT, they use diffusion MRI to sensitize the image intensity to the Brownian motion of water molecules within the tissue, providing a window on tissue microstructure. However, whereas NODDI and VERDICT are designed specifically for application to brain tissue and cancer tumors, respectively, DTI and DKI are more general-purpose techniques that provide indices of diffusivity (e.g. mean diffusivity - MD), diffusion anisotropy (e.g. fractional anisotropy Basser & Pierpaoli (1996) - FA), and the deviation from Gaussianity, or kurtosis (e.g. mean kurtosis Jensen & Helpern (2010) - MK), that can inform on tissue integrity or pathology. MSDKI is a simplified version of DKI that quantifies kurtosis using a simpler model that is easier to fit Henriques (2018). These techniques show promise for extracting imaging biomarkers for a wide variety of medical applications, such as mild brain trauma, epilepsy, stroke, and Alzheimer's disease Jensen & Helpern (2010); Ranzenberger & Snyder (2022); Tae et al. (2018).
To fit the DTI, DKI, and MSDKI biophysical models to the data and obtain the downstream metrics (parameter maps), we employ the widely-used, open-source DIPY library Garyfallidis et al. (2014). We followed standard practice for model fitting in MRI and used the least-squares optimization approach with default fitting settings. To remove outliers, values were clamped so that DTI FA ∈ [0, 1]; DTI MD, AD, RD ∈ [0, 0.003]; DKI MK, AK, RK ∈ [0, 3]; MSDKI MSD ∈ [0, 0.003]; and MSDKI MSK ∈ [0, 3]. Code for model fitting is in Code Link.
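A minimal sketch of the outlier-clamping step applied after model fitting; the dictionary keys shown are a subset of the metrics listed above, and the helper name is an illustrative assumption:

```python
import numpy as np

# Clamping ranges for fitted parameter maps (subset of those used above).
CLAMP = {
    "dti_fa": (0.0, 1.0),
    "dti_md": (0.0, 0.003),
    "dki_mk": (0.0, 3.0),
    "msdki_msd": (0.0, 0.003),
    "msdki_msk": (0.0, 3.0),
}

def clamp_metrics(maps):
    """Clip each fitted parameter map to its physically plausible range."""
    return {name: np.clip(values, *CLAMP[name]) for name, values in maps.items()}

# Example: an FA of 1.3 and an MD of 0.004 are fitting outliers
clamped = clamp_metrics({"dti_fa": np.array([-0.2, 0.5, 1.3]),
                         "dti_md": np.array([0.001, 0.004])})
```

Clamping (rather than discarding) keeps the voxel grid intact while preventing a handful of degenerate fits from dominating the regression targets.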
The results in figure 2 and table 8 are all scaled by: DTI-FA ×10², DTI-MD ×10⁹, DTI-AD ×10⁹, DTI-RD ×10⁹, DKI-MK ×10², DKI-AK ×10², DKI-RK ×10², MSDKI-MSD ×10⁹, MSDKI-MSK ×10².
F.4 Airborne Visible / Infrared Imaging Spectrometer (AVIRIS) Data and Task
This section describes the data and task considered in tables 4, 10.
The Airborne Visible / Infrared Imaging Spectrometer (AVIRIS) is a highly-specialized hyperspectral device for earth remote sensing commissioned by the Jet Propulsion Laboratory (JPL). It obtains acquisitions from adjacent spectral bands between the wavelengths 400nm - 2500nm. It has been flown on four different aircraft and deployed worldwide, for purposes such as examining the effect on and rehabilitation of forests affected by large wildfires, the effects of climate change, and other applications in atmospheric studies and snow hydrology. More information is available in Jet Propulsion Laboratory (2023) (JPL); Simmonds & Green (1996); Thompson et al. (2017) and on the webpage https://aviris.jpl.nasa.gov.
The data used were obtained in June 1992, when the Purdue University Agronomy Department commissioned AVIRIS to obtain two ground images of the "Indian Pine" to support soils research Baumgardner et al. (2015), from two flight lines: east-to-west and north-to-south. The data are publicly available Baumgardner et al. (2022) and comprise two "image cubes" corresponding to a 2 miles² area, with 20 m² pixel size and C̄ = 220 channels.
Data from the north-to-south flight are used for training and validation. This consists of 1644292 pixels, of which 90%/10% were used for training and validation respectively. Data from the east-to-west flight were used as test data, consisting of 1134672 pixels. We removed outliers from both images (details in Code Link) and then normalized each image channel-wise so that the 99th percentile is 255 (the maximum in standard images).
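A minimal sketch of this channel-wise rescaling, assuming an image array of shape (height, width, channels); the function name is illustrative:

```python
import numpy as np

def scale_channels_to_255(image, q=99.0):
    """Rescale each channel so that its q-th percentile equals 255."""
    flat = image.reshape(-1, image.shape[-1])        # (pixels, channels)
    scale = np.percentile(flat, q, axis=0) / 255.0   # per-channel divisor
    return image / scale

# Toy example: channels with very different intensity ranges
img = np.random.rand(64, 64, 3) * np.array([1.0, 50.0, 1000.0])
scaled = scale_channels_to_255(img)
```

Using the 99th percentile rather than the maximum makes the rescaling robust to the small number of outlier pixels that survive the initial cleaning.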
The objective is to examine whether our supervised feature selection approaches can reconstruct the entire image from a subset of wavelengths, typical of the ground data obtained over Indiana (the location of "Indian Pine").
F.5 Estimation of Oxygen Saturation Data and Task
This experiment and its data follow directly from Waterhouse & Stoyanov (2022). The data were generated using the code presented in Waterhouse & Stoyanov (2022), with assistance from its author.
Figure 6: 2D brain slices from 3D MRI scans, for different measurements/features/channels (values of C), for the Multi-Diffusion (MUDI) challenge (top) and HCP (middle) data. Bottom: different wavelengths for the "Indian Pine" remote sensing hyperspectral data, north-to-south flight.