| context | A | B | C | D | label |
|---|---|---|---|---|---|
Second, we present a new enhanced model named Panoptic-PartFormer++. In particular, we have three improvements over the original Panoptic-PartFormer. 1. Rather than using the coarse masked pooling for query-level reasoning, we adopt the recent stronger baseline Mask2Former design by replacing query learning with masked... | Moreover, for Panoptic-PartFormer++, we add an extra semantic segmentation loss on part segmentation with part feature $F_{p}$. | Decoupled Decoder. The decoupled decoder has two separate decoder networks to obtain scene features and part features, respectively. | Ablation on Extra Part Dense Prediction. Our Panoptic-PartFormer++ also adopts an extra semantic part segmentation head during training for part features. | 2. We present an enhanced version of the decoupled decoder. We append an extra semantic segmentation loss to enhance part feature learning. | D |
Using these improvements helps to preserve high-frequency details such as facial hair and significantly increases the reconstruction quality, as explained further in the results section (sec. 4). Because we mainly enhanced the loss function and the discriminator side, we did not need to change the generator. Therefore, ... | To control the output of the generator and thus the facial expressions of a person’s face, the dataset must be labeled before the training process. We use 70 facial landmarks (68 of the Multi-PIE scheme [GMC+08] and two for the location of the irises) as binary images for each tuple of RGB and corresponding depth image... | Our process starts with the acquisition of a personal RGBD dataset. The acquired data is preprocessed by an automated procedure. A Facial Landmark Map (FLM) per RGB image is extracted and saved beside the corresponding RGB image. It encodes the facial expression of the respective RGB image in a binary image as so-calle... | Conditional GANs (cGANs) have been shown to be able to learn and reproduce specific relationships between inputs and outputs that are understandable for humans. For example, Mirza and Osindero [MO14] have extended the input to the generator and discriminator with a label $y$, which makes it possible to gener... | The generator of our GAN receives a $512\times 512$ pixel FLM of the facial landmarks as input. Compared to the Pix2Pix GAN, the output has been extended by a fourth feature map to be able to generate depth images. In addition, the discriminator receives five feature maps as input instead of only fo... | D |
In recent times where agriculture is part of the multi-sectoral economy, an agricultural pandemic has a two-fold impact on society: food shortages and direct economic losses to the agriculture sector [9, 10]. For instance, during 2014 alone, virus disease pandemics were estimated to have a global economic impact of a... | This study has several limitations, which provide opportunities for future research. First, the proposed model assumes that the plant population is homogeneous in its epidemiological and economic properties, an assumption known to be false [46, 47]. As such, by introducing unique epidemiological and economic properties... | Following the same path, an analysis of the pandemic spread and economic profit as a function of the basic infection rate ($\beta_{0}$) and recovery rate ($\gamma$) is conducted, as shown in Fig. 3. Unsurprisingly, as the basic infection rate increases... | Mathematical and computational models are key tools for understanding pandemic spread and designing intervention policies that help control a pandemic’s spread. In particular, coupled ordinary and partial differential equations, as well as simpler growth-curve equations, are especially useful deterministic models for r... | Fig. 2 shows the basic reproduction number and economic profit over time, divided into small ($N=5000$), medium ($N=25000$), and large ($N=125000$) fields. All fields are of the same size. The results show the average... | C |
Assumption 4.4 is standard in the literature; e.g., in the alternative notation described above it is that | i.e., $\widetilde{\mathcal{R}}^{*}$ is the adjoint solution operator for the sesquilinear form $\widetilde{a}$. | We highlight that, in other papers on PMLs, the scaled variable, which in our case is $r+{\rm i}f_{\theta}(r)$, is often written as $r(1+{\rm i}\widetilde{\sigma}(r))$... | $|\langle\widetilde{\mathcal{R}}g,v\rangle|\leq|\widetilde{a}(\widetilde{\mathcal{R}}g,v)|+|\langle S\widetilde{\mathcal{R}}g,v\rangle|$ | $\widetilde{\sigma}$ is non-decreasing – see [3, §2]. | D |
Let $\bar{x}_{L,k}$ be the computed solutions obtained by | 4 The stopping tolerance $tol$ of LSQR for solving the inner least squares | with the stopping tolerance $tol$ in the following lemma. | Algorithm 3 using LSQR in step 3 with the stopping criterion $tol$. Then if | the computed solution by LSQR with the stopping criterion $tol$ | C |
$1.9\times 10^{-2}$ | $9.3\times 10^{-3}$ | $4.4\times 10^{6}$ | $5.1\times 10^{6}$ | $8.0\times 10^{-3}$ | C |
Problem of Imbalanced Learning. Due to the limited memory budget for storing exemplars of the old classes in RKD approaches, there exists a serious problem of imbalanced data for learning (few samples for the learned/old tasks are available while we have a large number of samples for the current/new task), which would ... | Re-sampling Mixup. Beyer et al. [16] demonstrate that KD training with OOD data suffers from great degradation, and they validate empirically that data which are related to or overlap with the original training data (consistent with the latent distribution of the original training data) can perform as well as the orig... | Given that the data between the old and new classes are seriously imbalanced in RKD methods, we consider the ratio between the old classes from the past tasks and the new classes of the current task. We generate three kinds of mixed data in a batch: mixup among old classes, mixup between old classes and new classes, and... | Problem of Insufficient KD. In the CIL setting, the stored exemplars of old classes are limited because of the small memory budget. It is well known that the high capacity of DNNs is sufficient to memorize the entire training data [14, 15], so RKD methods suffer from insufficient data for KD training. RKD methods commonly ... | In order to tackle insufficient KD, we employ MKD and a re-sampling strategy. We over-sample the exemplars of the old classes and mix the samples from the old classes and the new classes to synthesize mixed data for KD training. The interpolated data by an old class and a new class are more consistent with the distri... | D |
Creates a new trial, contributing to a new/existing study. The POST request body should include the set of settings that refers unambiguously to a study. The API response contains the hyperparameters to test. | Creates a new trial, contributing to a new/existing study. The POST request body should include the set of settings that refers unambiguously to a study. The API response contains the hyperparameters to test. | completed, the computing node will finalize the trial using the tell API, whose body will include the unique identifier of the trial and | Provides the final score of a trial to the backend optimizer chosen for the study. | Provides an intermediate score to the backend optimizer. If the study includes a pruner strategy, the API response is a boolean value indicating whether or not to continue the current trial. | C |
Assumption 1 is common in the decentralized optimization literature; see, e.g., [25, 24, 27]. The conditions imply that the eigenvalues of $W$, denoted as $\lambda_{1}\geq\lambda_{2}\geq\cdots\geq\lambda_{n}$... | In this part, we introduce the standing assumptions for this work. Assumptions 1, 2, 3, and 4 are related to the network structure, stochastic gradients and compression operators, respectively. Regarding the objective functions, we first consider general smooth objective functions in Section 3 with Assumption 6 only a... | Regarding the stochastic gradients, we consider the following standard assumption. | Regarding compressed decentralized algorithms based on stochastic gradients, we compare the performance of those that achieve the “asymptotic network independent property” in Table 1, assuming that unbiased compressors are utilized. Note that in the literature, there are two commonly considered conditions on the compression... | In summary, the above assumptions are common and standard. In particular, Lemma 1 provides a mechanism that generalizes the applicability of the proposed CEDAS algorithm introduced in the next section. | B |
Additionally, the decoding networks adopted in the current work are computationally optimized for various sensing tasks, and better performance will be attained by associating them with the physical characteristics of scattering via model-driven methodologies. | We also explore whether the scattering diffuser can be replaced by existing light equipment such as the DMD. Specifically, the scattering process can be simplified as a transfer matrix. Thus, we combine the scattering transfer matrix with the learning-based optimal patterns (as shown in Fig. 4(a))... | In addition, based on the combination of scattering and illuminating light modulation, we innovatively extended the idea to the field of information encryption, thus achieving high-security encryption for highly compressed measurements. Compared with conventional systems, scattering diffusers are less expensive and hav... | In this work, we present a novel lightweight image-free technique that can directly extract multitarget semantic information from highly compressed measurements. Moreover, for the first time, we introduce proactive scattering for light modulation by utilizing the information entropy boost induced by scattering for high... | Moreover, the current work employs a DMD for light modulation, which could lead to restricted imaging rates due to its dependency on programmable spatial light modulators. To get around this restriction, we suggest implementing a faster modulation method that relies on cyclic patterns coded onto a spinning mask [42]. Th... | D |
Transitions occurring after the first half of the test clock period were invalid. | By the addition of independently configurable delay blocks on the combinational output signal and the clock signal, the authors could control the sensor sensitivity (i.e., how late after the rising clock signal a change in the combinational output is detected). | The authors leveraged on-chip clock generation to sweep the test clock frequency until the maximum was found. | The change indicated that the maximum frequency was reached, and the circuit started to fail. | Given that the frequency of the main clock is known, the authors were able to derive the CUT delay by changing the phase angle between clocks and observing the sensor output. | B |
Let $\Sigma=\mathbb{E}_{x}[f_{k}(x)f_{k}(x)^{\top}]$... | Let $A=T\Omega T^{\top}$ be an eigendecomposition of $A$ sorted in ascending order of the eigenvalues, where $\Omega$ is the diagonal matrix of eigenvalues of $A$ given by $\{\omega_{i}\}_{i=1}^{d}$... | Since a factor of $N^{-1}$ does not change the minimizer in (5), we compute: | To what extent is the notch filter in Figure 2 inherent to the network weights $W_{1}$ and not an artefact of (a) the influence of the differentiation matrix $D$, or (b) the training data $X$? We explore this question by using Theorem 4. In p... | Although the descramblers $\widehat{P}_{SC}(k,X,N)$ may be non-unique, this degeneracy may be circumvented in the large data limit in the sense that the action of t... | B |
Furthermore, the number of nodes in this tree decomposition is linear in $|Q|$ and, thus, performing the bottom-up traversal of the algorithm is possible in the required time bound due to Lemma 19. | We start by proving the correctness of the algorithm, before discussing its running time. | Lastly, we also have to look at the running time of the final step of the algorithm. | dominates the running time of the “Initialization” step, which is why we can omit it, | $T$ consisting of a single node, and states the running time of this step. | B |
1:Adjacency matrix ($A$), initial node features ($X$), node embedding at the $l^{th}$ layer ($H^{l}$), attention weight between two nodes ($a_{ij}$... | 12: Update parameters, $\theta^{\prime}=\theta-\eta\frac{\partial\mathcal{L}_{total}}{\partial\theta}$... | 15: Save current parameters, $\theta^{*}=\theta^{\prime}$ | 2:Parameters with the best validation score ($\theta^{*}$) | 13: Using the updated parameters ($\theta^{\prime}$) and calibrated attention weights ($a_{ij}$), get validation score $\alpha$ | C |
Due to the rapid growth of code data and the continuous improvement of deep learning model capabilities, using deep learning for program generation has become the mainstream research direction (Raychev et al., 2014; Ling et al., 2016; Yin and Neubig, 2018; Wei et al., 2019; Sun et al., 2020; Mukherjee et al., 2021; Zha... | 1) Threats to external validity concern the quality of experimental datasets and the generalizability of our results. We evaluated our approach using three public code generation datasets, which are considered mainstream benchmarks in the field and have been utilized extensively in prior research (Luo et al., 2023; Li ... | LLMs such as AlphaCode (Li et al., 2022), CodeGen (Nijkamp et al., 2023), WizardCoder (Luo et al., 2023), ChatGPT (OpenAI, 2023a), CodeGeeX (Zheng et al., 2023), Starcoder (Li et al., 2023), and CodeLlama (Rozière et al., 2023) have demonstrated promising code generation performance. | Automatic evaluation of code generation is significant and promising in the fields of natural language processing (NLP) and software engineering. Due to the great potential of code generation in reducing development costs and revolutionizing programming modes, both industry and academia have devoted substantial attenti... | To construct each code evaluation dataset, we first follow the primitive NL and reference code in each corresponding base dataset. Then, for each paired NL and reference code in a code evaluation dataset, we generate an average of 20+ codes (generated from various LLMs, including CodeGen 350M&16B (Nijkamp et al., 2023), In... | B |
∙ We propose backdoor attacks in P2PFL and introduce new attack placement strategies based on graph centrality metrics such as node degree, PageRank, and clustering coefficient. We show that a small number of attackers (5%) placed strategically in the graph is sufficient to perform a backdoor attack with high... | Such powerful attackers are relevant in evaluating and comparing mitigation strategies. We also consider a more relaxed adversarial model in which the adversary has partial visibility over the network, by observing a small fraction (e.g., 20%) of the nodes in the communication graph. | We assume that the attacker has full control over compromised peers. More precisely, the attacker can add, modify, or delete training data samples, and modify or deviate from the machine learning algorithm, as long as the final model update has the same vector dimension as the model updates sent by benign users. For example,... | In our study, we assume failures affect random nodes and result in missed peer updates that do not contribute to the learned model. | We show that backdoor attacks can further be amplified by the attacker causing network failures that result in missed peer updates. We also demonstrate that an attacker with partial visibility into the network (e.g., 20% of the nodes) can still successfully introduce a backdoor in the model. | D |
We use the standard STL10 dataset following the experimental setting for clustering from [52]. | the standard stochastic gradient descent for $\alpha=1$, see "our" vs IMSAT in Table V. | TABLE II: Effect of order $\alpha$ in Renyi decisiveness (15). The results use ResNet18 trained via standard stochastic gradient descent for loss (48). | It follows that different values of $\alpha$ may affect the practical results. However, our self-labeling algorithm in Section 3 assumes $\alpha=1$ and uses the closed-form EM steps applicable only to Shannon’s decisiveness. To evaluate Renyi decisiveness for various $\alpha>0$... | We state the max-margin clustering theorem for multi-class decisiveness of order $\alpha$ | B |
Here $\Delta_{S}$ is the set of probability measures over $\mathcal{S}$. | To ensure computational feasibility, we construct the ambiguity set $\mathcal{P}$ in the $(s,a)$-rectangular manner, where for each $(s,a)\in\mathcal{S}\times\mathcal{A}$, we define the ambiguity s... | $P:\mathcal{S}\times\mathcal{A}\rightarrow\Delta_{S}$ is the state transition probability measure. | We assume that $r:\mathcal{S}\times\mathcal{A}\rightarrow[0,1]$ is deterministic and bounded in $[0,1]$. | Consider an infinite-horizon MDP $(\mathcal{S},\mathcal{A},\gamma,\mu,P,r)$ where $\mathcal{S}$ and $\mathcal{A}$ are finite state and action spaces with cardinality $S$ a... | C |
Corroborating our objective of classifying factuality and bias at the sentence level, we segmented each of the 300 news articles into sentences and annotated them according to three different classes: (i) factual spans, (ii) biased spans, and (iii) quotes, as shown in Figure 1. | Taking advantage of the fact that textual analysis of news articles published by a media outlet is critical for assessing the factuality of its reporting and its potential bias Baly et al. (2018), we tackle both biased and factual sentence prediction by using a strategy that has proved to be effective. In accordance w... | We aim to predict sentence-level media bias and factuality by analyzing different types of media bias and journalist factuality definitions, both proposed by AllSides Mastrine (2022). Specifically, we first built state-of-the-art media bias detection models. Secondly, a baseline sentence-level factuality detection mode... | We proposed an expert annotation schema for sentence-level factuality and media bias classification. We first evaluated whether the sentence was written with impartiality; in other words, whether it presented information focused on objective facts. If “yes”, it should be classified as a factual span. Oth... | First of all, we argue that factual spans contain a type of information that deals with facts, hence it is impartially focused on objective facts. In contrast, non-factual information is presented subjectively (with partiality) and often strays from objective facts. Taking into account this... | C |
To adapt to more complex architectures used in vision tasks, e.g., ResNet ([8]) and MobileNet ([10]), an initialization method from the perspective of training dynamics, which estimates mutual information between two consecutive layers, was proposed by [24]. | As neural networks go deeper [31], the initialization of network parameters becomes important to prevent gradients from vanishing or exploding during the training procedure. | Since Xavier-Initialization is only useful for saturating activation functions such as Sigmoid and Tanh [28], it was further extended to He-Initialization [7], where ReLU [26], a broadly used non-saturating activation function, was taken into consideration. | A “learning to initialize” scheme [34] was proposed by formulating initialization of neural networks as an optimization problem. | To adapt to more complex architectures used in vision tasks, e.g., ResNet ([8]) and MobileNet ([10]), an initialization method from the perspective of training dynamics, which estimates mutual information between two consecutive layers, was proposed by [24]. | C |
Consider data distribution $x_{0}\sim q_{0}$, $x_{0}\in\mathbb{R}^{n}$... | $x_{T}\sim\mathcal{N}(0,I)$, $\epsilon_{t+1}\sim\mathcal{N}(0,\Sigma_{t+1})$, $t=0,\ldots,T-1$. | Optimizing the above loss can be viewed as matching the conditional generator $p_{t}^{\theta}(x_{t}|x_{t+1})$... | Define the forward noising process: for $t\in[0,\ldots,T-1]$, | $p^{\theta}_{x_{0:T}}:=p_{T}(x_{T})\prod_{t=0}$... | C |
The BGLM studied in this paper contains the IC model and linear threshold (LT) model in a DAG as special cases, and is also related to | For regret minimization, we study CCB under the BGLMs as in Feng and Chen (2023); Xiong and Chen (2023). | the graph size if the actions are combinatorial. Yabe et al. (2018); Feng and Chen (2023); Xiong and Chen (2023); Varici et al. (2022) consider combinatorial action set for causal bandits problem. | Moreover, Feng and Chen (2023); Xiong and Chen (2023) also study causal bandits on BGLMs to avoid the exponentially large parameter space of general causal models. | Following Li et al. (2017); Feng and Chen (2023); Xiong and Chen (2023), we have three assumptions: | C |
The PACS dataset [25] focuses on understanding the physical common sense attributes of objects shown in the video, which is similar to our ‘material’-based annotation procedure. However, these attributes are distinguished by 13.4K question-answer pairs: displaying the video with and without audio, and then querying a v... | EPIC-KITCHENS-100. EPIC-KITCHENS-100 [1] is a large-scale egocentric audio-visual dataset which contains 100 hours of videos containing unscripted daily activities and object interactions in people’s kitchens. It consists of 700 videos and 89,977 segments describing visual actions that occur. Actions consist of verb an... | In this paper, we present a large-scale dataset, EPIC-SOUNDS, which consists of 78.4k categorised segments and 39.2k non-categorised segments, totalling 117.6k segments spanning 100 hours of audio, capturing diverse actions that sound in home kitchens. | Based on these observations, we crowdsource temporal and semantic labels for the audio of EPIC-KITCHENS-100 that are distinct from the visual ones. | In summary, we introduce EPIC-SOUNDS, a large-scale dataset of daily-life sounds, derived from the audio of EPIC-KITCHENS-100. EPIC-SOUNDS contains 78,366 categorised sound events spanning over 44 categories, as well as 39,187 non-categorised sound events, totalling 117,553 sound events across 100 hours of footage coll... | A |
Completeness: If $P$ knows $w$, then $V$ accepts with high probability. (In this paper, we consider the perfect completeness property where $V$ always accepts.) | If $P$ does not know a solution of the Five Cells puzzle, then $V$ always rejects. | Completeness: If $P$ knows $w$, then $V$ accepts with high probability. (In this paper, we consider the perfect completeness property where $V$ always accepts.) | Soundness: If $P$ does not know $w$, then $V$ rejects with high probability. (In this paper, we consider the perfect soundness property where $V$ always rejects.) | If $P$ does not know a solution of the Meadows puzzle, then $V$ always rejects. | C |
$\mathsf{W}_{p}^{\varepsilon_{1}+\varepsilon_{2}}(\mu,\nu)\leq\mathsf{W}_{p}^{\varepsilon_{1}}(\mu,\kappa)+\mathsf{W}_{p}^{\varepsilon_{2}}(\kappa,\nu)$ | Recall that $\|\cdot\|_{\mathsf{TV}}$ is the dual norm corresponding to the Banach space of measurable functions on $\mathcal{X}$ equipped with $\|\cdot\|_{\infty}$. An inspection of the proof of P... | $\|\mathsf{T}_{[\mathcal{G},\|\cdot\|_{\mathsf{TV}}]}(\widetilde{\mu})-\widetilde{\mu}\|_{\mathsf{TV}}\leq\varepsilon$ because $\|\mu-\widetilde{\mu}\|_{\mathsf{TV}}\leq\varepsilon$... | The monotonicity statement is a simple observation, while the proof of the second claim combines Proposition 3 and standard triangle inequalities for OT and $\|\cdot\|_{\mathsf{TV}}$. | Robust statistics and MDE under $\|\cdot\|_{\mathsf{TV}}$. | C |
$\sigma_{\theta}$ | $-1/r^{2}$ | $1/r^{2}$ | $\phi=\frac{a^{2}}{b^{2}-a^{2}}\left[\frac{r^{2}}{2}-a^{2}\ln(r)\right]p_{o}$. | $\sigma_{r}=\frac{a^{2}}{b^{2}-a^{2}}\cdots$, $u_{\theta}=0$ | A |
DA is also a key component in recent contrastive learning techniques Chen et al. (2020a); Tian et al. (2020b); He et al. (2020); Chen & He (2021); Xiao et al. (2020); Lee & Shin (2023). An encoder that learns good visual representations of the input data is trained with a contrastive loss. The contrastive loss is chara... | Data augmentation (DA) is widely used in supervised learning in computer vision Ho et al. (2019); Lim et al. (2019); Cubuk et al. (2019; 2020); Li & Li (2023), achieving excellent results on popular datasets Ciregan et al. (2012); Sato et al. (2015); Wan et al. (2013); Krizhevsky et al. (2017). | demonstrates significant potential in computer vision tasks and other domains in recent years Chen et al. (2020a); He et al. (2020); Wang et al. (2023); Caron et al. (2020); Wang et al. (2024). It is a common practice to utilize data augmentations in forming both positive and negative pairs of data for defining the con... | To learn effective and transferable representations, numerous studies have focused on enhancing contrastive learning by selecting suitable DAs. Chen et al. (2020a); Tian et al. (2020b); He et al. (2020); Li et al. (2023); Wang et al. (2023); Van der Sluijs et al. (2024) have shown that combining different augmentations... | Data augmentations applied to natural images have demonstrated efficacy in enhancing the generalizability and robustness of models trained in supervised learning Cubuk et al. (2019; 2020); Lopes et al. (2019); Krizhevsky et al. (2017). However, in self-supervised learning Chen et al. (2020a); He et al. (2020), careful ... | C |
The goal of the model is to predict whether a customer will overdraft within the next week; to that end, the model calculates predictions every Friday afternoon. We currently do not make a point-of-time prediction after a transaction is made or a daily prediction due to technical limitations with how quickly the app receive... | ODEWS was deployed during the Covid-19 pandemic, which saw a sudden and extreme shock to the American economy. Suddenly, many Americans were unemployed and businesses were shuttered, leading to a precipitous decline in economic activity. As Figure 2 shows, there was a sudden decrease in overdrafts at the beginning of th... | There are several challenges in preventing overdraft fees that make it suitable for an ML-driven solution. Firstly, different banks have different policies regarding the cost of the overdraft fee, the number of overdraft fees that can be charged to a customer in a day, the grace period within which an account can be made current, ... | The model training scheme is designed to mimic the deployment process in order to have the most accurate performance results. The model is trained using temporal cross-validation to take into account temporal effects and serial correlations that affect customer behavior (features) and overdraft policies (labels) (Foster... | The goal of the model is to predict whether a customer will overdraft within the next week; to that end, the model calculates predictions every Friday afternoon. We currently do not make a point-of-time prediction after a transaction is made or a daily prediction due to technical limitations with how quickly the app receive... | C |
Our evidence includes experiments with different base models: linear models, ARIMAs [16], and gradient boosting [17]. | Most of these methods have their own intrinsic ways of uncertainty estimation. See [18] for ARIMA and [19, 20] for gradient boosting. The experiments show that produced uncertainty estimates with our surrogate approach are more accurate than predictions from even built-in methods designed specifically for these models. | Moreover, Table I shows that this evidence is not anecdotal: it holds for a wide range of datasets and types of base models: our approach to the construction of a surrogate model outperforms basic uncertainty estimates for considered classes of models the best bootstrap-based approach we found and naive training of sur... | The quality of uncertainty estimates produced by our method surpasses that of model-specific approaches and time-series-specific bootstrap methods. | An example of our uncertainty estimation in Figure 1 demonstrates that the surrogate uncertainty estimates don’t suffer from overconfidence typical for other methods. | A |
\mathbf{c},\sigma)italic_F start_POSTSUBSCRIPT italic_θ end_POSTSUBSCRIPT : ( bold_x , bold_d , bold_z start_POSTSUPERSCRIPT italic_s end_POSTSUPERSCRIPT , bold_z start_POSTSUPERSCRIPT italic_t end_POSTSUPERSCRIPT ) → ( bold_c , italic_σ ). | Now the optimization problem for the network weights θ𝜃\thetaitalic_θ is carried out | obtained via a neural network Fθ:(𝐱,𝐝)→(𝐜,σ):subscript𝐹𝜃→𝐱𝐝𝐜𝜎F_{\theta}:(\mathbf{x},\mathbf{d})\rightarrow(\mathbf{c},\sigma)italic_F start_POSTSUBSCRIPT italic_θ end_POSTSUBSCRIPT : ( bold_x , bold_d ) → ( bold_c , italic_σ ) with weights θ𝜃\thetaitalic_θ. As | translation 𝐭𝐭\mathbf{t}bold_t, rotation θ𝜃\thetaitalic_θ and scaling s𝑠sitalic_s of the | is similar to MCVQ, but now the appearance each part is based on a factor analyzer | A |
Besides, Shang et al. (Shang et al., 2023) firstly find out the problem of modality imbalance and propose to filter out the items with sensitive modalities. Then, MCLN (Li et al., 2023a) integrates a novel counterfactual framework to eliminate the noise. | Different modality representations of the same object have unique or common semantic information. | MDR (Wang et al., 2021a) proposes a multimodal disentangled recommendation that can learn well-disentangled representations carrying complementary and standard information from different modalities. DMRL (Liu et al., 2022a) considers the different contributions of various modality features for each disentanglement fact... | The features of different modalities have various importance to the user’s preference on a particular factor of the target item in RS. However, the representations of different factors in each modality are often entangled, so many researchers have introduced decomposition learning techniques to dig out the meticulous f... | Therefore, the recommendation performance and generalization of MRS can be significantly improved if the unique and common characteristics can be distinguished. | A |
(proved in Proposition 2 below), we can forget all other agents and study the allocation satisfying this condition in the view of only one agent. | In this work, we study the problem of fair chore allocation. In this problem, there are some m𝑚mitalic_m chores that have to be performed collectively by some n𝑛nitalic_n agents. Each agent may have a different cost for doing each chore. Formaly, each agent i𝑖iitalic_i has a cost function visubscript𝑣𝑖v_{i}italic_... | In particular, we give a necessary condition for the output of the HFFD algorithm in terms of the cost function of the agent who gets the last bundle Ansubscript𝐴𝑛A_{n}italic_A start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT. We denote this agent by ω𝜔\omegaitalic_ω, such that σ(ω)=n𝜎𝜔𝑛\sigma(\omega)=nitalic_σ ( ... | In the convention of the literature, agent i𝑖iitalic_i gets bundle Aisubscript𝐴𝑖A_{i}italic_A start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT so that there is no need for an assignment. Starting at Section 5, we do not need the assignment σ𝜎\sigmaitalic_σ anymore as only one agent is considered. | So from this section to the end, only the last agent is considered. Whenever we mention a cost function, it is the cost function of the last agent. In other words, there is a single cost function involved in all arguments. When we say “an allocation”, there is no need for an assignment σ𝜎\sigmaitalic_σ anymore, so σ𝜎... | D |
SetSparseReversible, a sparse set that can be handled at different levels (of search) | Sparse sets are always simple, meaning that values in ‘dense’ are necessarily indexes 0, 1, 2, … | SetLinkedBinary and SetLinkedFinite, for sets containing only two elements (indexes 0 and 1), and ordered sets containing a finite number of elements (indexes), respectively. | SetLinked, a linked set is an interface that allows us to represent a list of elements perceived as indexes, i.e., elements whose values range from 0 to a specified capacity -1. | SetSparse, a sparse set [11] is basically composed of two arrays (of integers) and a limit: at any time, it contains the elements (typically, indexes of values) in the array ‘dense’ at indexes ranging from 0 to the limit (included). The presence of elements can be checked with the array ‘sparse’. | B |
0.659±0.012plus-or-minus0.6590.012\mathbf{0.659\pm 0.012}bold_0.659 ± bold_0.012* | 0.354±0.001¯¯plus-or-minus0.3540.001\underline{0.354\pm 0.001}under¯ start_ARG 0.354 ± 0.001 end_ARG | 0.606±0.001¯¯plus-or-minus0.6060.001\underline{0.606\pm 0.001}under¯ start_ARG 0.606 ± 0.001 end_ARG | 0.608±0.001plus-or-minus0.6080.001\mathbf{0.608\pm 0.001}bold_0.608 ± bold_0.001 | 0.029±0.001¯¯plus-or-minus0.0290.001\underline{0.029\pm 0.001}under¯ start_ARG 0.029 ± 0.001 end_ARG | A |
In recent years, a growing abundance multi-modal data are disseminated, linking diverse information across various modalities such as text and image in a global data space. This interconnected web of heterogeneous data constitutes a vast repository of information termed as knowledge. With the development of large-scale... | Thanks to the innovative unified knowledge proposed by our UKnow protocol, our dataset can readily accommodate a variety of downstream tasks. In this study, we opt for common-sense reasoning and vision-language pre-training as experimental domains to validate our dataset. Common-sense reasoning is an extremely popular ... | Driven by the multimodal knowledge graph, models can easily introduce external knowledge knowledge , discover long-range relations NELL995 and understand more logical semantics yago . | However, from the perspective of data organization, existing studies often claim to be knowledge-based only using one piece of them, which is actually incomplete and cannot be analogous to the complex knowledge network held by humans. In this work, we build a unified knowledge protocol based on the multimodal knowledge... | Existing knowledge-based deep learning models are broadly divided into two aspects: (1) external knowledge introduction kldrivenbenchmarking , (2) internal knowledge mining jing2020self . The former leverages expert knowledge by introducing external data krisp ; lauscher2020common ; chen2020recall or pre-trained model... | D |
PX(t+1)superscriptsubscript𝑃𝑋𝑡1P_{X}^{(t+1)}italic_P start_POSTSUBSCRIPT italic_X end_POSTSUBSCRIPT start_POSTSUPERSCRIPT ( italic_t + 1 ) end_POSTSUPERSCRIPT is given as ℱ3,α[PX(t)]subscriptℱ3𝛼delimited-[]superscriptsubscript𝑃𝑋𝑡{\cal F}_{3,\alpha}[P_{X}^{(t)}]caligraphic_F start_POSTSUBSCRIPT 3 , italic_α end_... | the decoding block error probability with coding rate R𝑅Ritalic_R is upper bounded by the following quantity; | In channel coding, we discuss an upper bound of the probability of correct decoding. | In the following, we discuss the RHS of (71) with α∈[1/2,1]𝛼121\alpha\in[1/2,1]italic_α ∈ [ 1 / 2 , 1 ]. | the function is given as a function of probability distribution composed of the diagonal part. | B |
\prime}_{2n-1}}{\det(\mathcal{L}_{S}^{\prime})}\right).14 italic_n ( - divide start_ARG italic_γ start_POSTSUBSCRIPT 3 italic_n - 2 end_POSTSUBSCRIPT end_ARG start_ARG italic_γ start_POSTSUBSCRIPT 3 italic_n - 1 end_POSTSUBSCRIPT end_ARG - divide start_ARG italic_δ start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT start_POST... | The degree-Kirchhoff index, the Gutman index, and the Schultz index of the pentagonal Möbius chain Pn′subscriptsuperscript𝑃′𝑛P^{\prime}_{n}italic_P start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT and the pentagonal cylinder chain Pnsubscript𝑃𝑛P_{n}italic_P start_POSTSUBSCR... | The formulae of the degree-Kirchhoff index of linear hexagonal chains [9], hexagonal Möbius chains [16], linear crossed hexagonal chains [18], linear polyomino chains [8], cylinder phenylene chains [17], random polygonal chains [13], generalized phenylenes [25], cylinder, and Möbius octagonal chain [15], linear pentago... | To evaluate the ratios involved in the above expressions, the next lemmas are used. | The following theorem is used to determine the total number of spanning trees in a connected graph G𝐺Gitalic_G using the Laplacian or normalized Laplacian spectrum [3]. | C |
On the other hand, our characterization implies that many natural classes of contracts, such as the class of all affine contracts, all polynomial contracts, or the class of all monotone contracts, fail to be ambiguity-proof. | In our next example, the principal gains from using an ambiguous contract to implement an action that cannot be implemented with a classic contract. | i.e., the principal cannot gain from implementing any action i𝑖iitalic_i with an ambiguous rather than a classic contract. | In this section we explore the power of ambiguity when the agent is allowed to select a mixed action and the principal is allowed to implementing mixed actions. Our main result (Theorem 6) is that in this case, the principal cannot gain from using an ambiguous contract. | When the agent can choose mixed actions, the principal cannot gain by employing an ambiguous contract. To prove this result we make use of the min-max theorem applied to a suitably defined zero-sum game. | C |
≤2Hm(𝑺)−2Hm(𝑺∣𝒚)absent2subscript𝐻𝑚𝑺2subscript𝐻𝑚conditional𝑺𝒚\displaystyle\leq 2H_{m}(\bm{S})-2H_{m}(\bm{S}\mid\bm{y})≤ 2 italic_H start_POSTSUBSCRIPT italic_m end_POSTSUBSCRIPT ( bold_italic_S ) - 2 italic_H start_POSTSUBSCRIPT italic_m end_POSTSUBSCRIPT ( bold_italic_S ∣ bold_italic_y ) | This is exactly the special case of Lemma 5.4 where m=n/2𝑚𝑛2m=n/2italic_m = italic_n / 2. | Given these ingredients, the proof of Theorem 7 in the special case of m=n/2𝑚𝑛2m=n/2italic_m = italic_n / 2 is simple. By repeatedly applying Lemma 2.6 to each query response, | Proof of Lemma 5.4 in the special case where m=n/2𝑚𝑛2m=n/2italic_m = italic_n / 2. | In this subsection, we prove Lemma 5.4 which bounds the mutual information between 𝑺𝑺\bm{S}bold_italic_S and 𝒚𝒚\bm{y}bold_italic_y in terms of the average drop in m𝑚mitalic_m-conditional of 𝑺𝑺\bm{S}bold_italic_S conditioned on 𝒚𝒚\bm{y}bold_italic_y. We begin with the special case where n𝑛nitalic_n is even and... | A |
The edge e𝑒eitalic_e may lie in both 𝑴1′subscriptsuperscript𝑴′1\boldsymbol{M}^{\prime}_{1}bold_italic_M start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT and 𝑴2′subscriptsuperscript𝑴′2\boldsymbol{M}^{\prime}_{2}bold_italic_M start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT start_POSTSU... | In the first case, both endpoints of e𝑒eitalic_e are labelled in both graphs. In either case, | The edge e𝑒eitalic_e may lie in both 𝑴1′subscriptsuperscript𝑴′1\boldsymbol{M}^{\prime}_{1}bold_italic_M start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT and 𝑴2′subscriptsuperscript𝑴′2\boldsymbol{M}^{\prime}_{2}bold_italic_M start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT start_POSTSU... | The edge e𝑒eitalic_e may lie in both 𝑴1′subscriptsuperscript𝑴′1\boldsymbol{M}^{\prime}_{1}bold_italic_M start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT and 𝑴2′subscriptsuperscript𝑴′2\boldsymbol{M}^{\prime}_{2}bold_italic_M start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT start_POSTSU... | Now consider the case when the labelled vertices of 𝐅𝐅\boldsymbol{F}bold_italic_F are distinct. | A |
After going through all conditions, participants filled out a final questionnaire. Besides the orientations and gaze as variables, we also included the familiarity of the participant with robots. This extra information was collected using the questionnaires. | After testing the performance of the motion capture system, we welcomed the participants to our lab, explained the study, and asked them to read and sign the participant info sheet with the consent form. The participants later picked up and put on a rigid body hat marker from table C, as the trajectory sampler. Afterwa... | The participant should start from position C and move towards the Table in position D to deliver the yellow cup, during which a non-contact interaction between the participant and the Spot was recorded by the motion capture cameras. In the non-stationary conditions, the Spot robot started moving from B to A the moment ... | The study took 5-10 minutes and the participant was compensated with a £5 for their time. | Figure 5: Trajectories of a single participant. The orange fixed trajectory for Spot, other colored trajectories for humans, and the time-synchronized minimum distance is labeled as dotted lines in the plot. | C |
1.3407.10−4superscript1.3407.1041.3407.10^{-4}1.3407.10 start_POSTSUPERSCRIPT - 4 end_POSTSUPERSCRIPT | 1.3407.10−4superscript1.3407.1041.3407.10^{-4}1.3407.10 start_POSTSUPERSCRIPT - 4 end_POSTSUPERSCRIPT | 9.7244.10−7superscript9.7244.1079.7244.10^{-7}9.7244.10 start_POSTSUPERSCRIPT - 7 end_POSTSUPERSCRIPT | It is inferred from TABLE 4, that neural networks method root mean square position error is less compared to other methods, but optimization and MOOGA methods also show accurate results where root mean square position error is less than 2.10−4superscript2.1042.10^{-4}2.10 start_POSTSUPERSCRIPT - 4 end_POSTSUPERSCRIPT. ... | 9.0951.10−6superscript9.0951.1069.0951.10^{-6}9.0951.10 start_POSTSUPERSCRIPT - 6 end_POSTSUPERSCRIPT | B |
Note that Kw,1,w,1,…subscript𝐾𝑤1𝑤1…K_{w,1,w,1,\dots}italic_K start_POSTSUBSCRIPT italic_w , 1 , italic_w , 1 , … end_POSTSUBSCRIPT is the union of w𝑤witalic_w edge-disjoint chains of length hℎhitalic_h intersecting only on the elements of the even layers (where it has only one element), and a mapping of Kw,1,w,1,…s... | The events that the random mapping gives a homomorphism for copies of Kw,1,w,1,…subscript𝐾𝑤1𝑤1…K_{w,1,w,1,\dots}italic_K start_POSTSUBSCRIPT italic_w , 1 , italic_w , 1 , … end_POSTSUBSCRIPT are conditionally independent for disjoint copies of Kw,1,w,1,…subscript𝐾𝑤1𝑤1…K_{w,1,w,1,\dots}italic_K start_POSTSUBSCRIPT... | Note that Kw,1,w,1,…subscript𝐾𝑤1𝑤1…K_{w,1,w,1,\dots}italic_K start_POSTSUBSCRIPT italic_w , 1 , italic_w , 1 , … end_POSTSUBSCRIPT is the union of w𝑤witalic_w edge-disjoint chains of length hℎhitalic_h intersecting only on the elements of the even layers (where it has only one element), and a mapping of Kw,1,w,1,…s... | The events that the random mapping gives a homomorphism for chains are conditionally independent for disjoint chains (conditioning on the mapping of the even layers). | Hence, the conditional probability that mapping w𝑤witalic_w elements for every even layer gives a homomorphism of Kh×wsubscript𝐾ℎ𝑤K_{h\times w}italic_K start_POSTSUBSCRIPT italic_h × italic_w end_POSTSUBSCRIPT is the wthsuperscript𝑤𝑡ℎw^{th}italic_w start_POSTSUPERSCRIPT italic_t italic_h end_POSTSUPERSCRIPT power... | C |
While there are many approaches for batch-like entity clustering [221, 222], the incremental maintenance of entity clusters for new entities has received comparatively little attention. | For KG construction, however, we need incremental approaches that build on previous match decisions and determine for new entities if they are already represented in the KG or whether they should be added as new entities. Furthermore, for streaming-like data ingestion into a KG a dynamic (real-time) matching of new ent... | A straight-forward approach is to simply add a new entity either to the most similar existing cluster or to create a new cluster if there is no previous cluster with a high enough similarity exceeding some predefined similarity threshold [223]. However, this approach typically suffers from a strong dependency on the or... | The matching step of incremental ER is limited to the new entities and involves a pair-wise comparison with the existing KG entities determined by the preceding incremental blocking step. The main goal is to determine all similar entities as potential match candidates as input for the final clustering step, where it is... | The incremental approaches in [65, 66] support optimized clustering decisions for duplicate-free (sometimes called clean) data sources from which at most one entity can participate per cluster of matching entities. In this case, an effective clustering strategy is the so-called "max-both" approach where an entity s𝑠si... | B |
\Omega_{3}P_{E/2^{\prime}}\right)R_{2}R_{1}over˙ start_ARG italic_P end_ARG start_POSTSUBSCRIPT italic_E / 0 end_POSTSUBSCRIPT = divide start_ARG 1 end_ARG start_ARG 2 end_ARG ( italic_P start_POSTSUBSCRIPT italic_E / 0 end_POSTSUBSCRIPT roman_Ω start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT - roman_Ω start_POSTSUBSCRIPT 1 en... | In this section, the Forward Kinematics (FK) of the lower limbs, depicted in Figure 1, using dual quaternions is established. FK involves computing the positions and orientations of the end-effector in task space from the axes and angles of the joint rotations. The lower limb is decomposed into four segments: the pelvi... | The Inverse Kinematics (IK) problem determines the joint angles required to achieve a desired end-effector position and velocity, facilitating efficient control of lower limb motion. However, the lower limb’s excessive degrees of freedom (DOF) relative to its spatial constraints lead to redundancy issues. Consequently,... | In this paper, the dual quaternion-based theory is applied to the kinematics and dynamics study of the 7-DOF human lower limbs in 3D space. Subsequently, the artificial neural networks method is used to solve the inverse kinematics problem. The efficiency of the artificial neural networks method is verified using the j... | This section focuses on the dynamic description of the lower limb shown in Figure 2 using the Dual Quaternion-based recursive Newton-Euler method. This method involves calculating the velocities and accelerations of the center of mass of each link, known as twists, based on the positions, velocities, and accelerations ... | B |
One popular approach is to use inexact stochastic gradient and Hessian estimates with subsampling | methods (Cartis et al., 2011a; b; Grapiglia & Nesterov, 2017; Doikov et al., 2024), | Variance reduction techniques (Zhou et al., 2019; Wang et al., 2019) combine the advantages of stochastic and exact methods, | (Xu et al., 2016; Kohler & Lucchi, 2017; Xu et al., 2017; Nilesh et al., 2018; Ghadimi et al., 2017; Cartis & Scheinberg, 2018; Agafonov et al., 2020). | we obtain the Variance Reduced Cubic Newton algorithm (Zhou et al., 2019; Wang et al., 2019). | C |
}.\end{split}start_ROW start_CELL blackboard_E [ roman_Δ start_POSTSUBSCRIPT italic_t end_POSTSUBSCRIPT ] end_CELL start_CELL ≥ ∑ start_POSTSUBSCRIPT italic_i ∈ italic_I end_POSTSUBSCRIPT ( italic_g start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT ⋅ divide start_ARG italic_p start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT end_AR... | We bound the summand for i∈I𝑖𝐼i\in Iitalic_i ∈ italic_I in (28) below, using the geometric sum formula | We remark that Bd(n,d+Δ,r)≤B(n,d+Δ,r)subscript𝐵𝑑𝑛𝑑Δ𝑟𝐵𝑛𝑑Δ𝑟B_{d}(n,d+\Delta,r)\leq B(n,d+\Delta,r)italic_B start_POSTSUBSCRIPT italic_d end_POSTSUBSCRIPT ( italic_n , italic_d + roman_Δ , italic_r ) ≤ italic_B ( italic_n , italic_d + roman_Δ , italic_r ): The range of the sum in (61) is contained in the range ... | For i∈I𝑖𝐼i\in Iitalic_i ∈ italic_I, we define the events Aisubscript𝐴𝑖A_{i}italic_A start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT as | where we used linearity and the definition of conditional expectation. Since for all i∈I𝑖𝐼i\in Iitalic_i ∈ italic_I, Pr[Ai]>0Prsubscript𝐴𝑖0\Pr[A_{i}]>0roman_Pr [ italic_A start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT ] > 0 (it is possible to flip just the i𝑖iitalic_i-th bit, as p1>0subscript𝑝10p_{1}>0italic_p s... | A |
Firstly, we notice that the analytical expression as given in Theorem 2 matches well with the simulations which validates the accuracy of Theorem 2 even for smaller values of N𝑁Nitalic_N. Next, we observe that ZN(Y)subscriptsuperscript𝑍𝑌𝑁Z^{(Y)}_{N}italic_Z start_POSTSUPERSCRIPT ( italic_Y ) end_POSTSUPERSCRIPT sta... | In practice, multiple network operators co-exist in a given geographical area, each operating in different frequency band. As a consequence, at a given point in time, multiple UEs are served by different operators in the system. In such a scenario, if an IRS is optimized to cater to the needs of one of the operators, i... | In this paper, we analyzed the effect of deploying an IRS on the performance of an OOB operator that has no control over the IRS. We showed that while the IRS optimally serves the in-band UEs, it simultaneously, and at no additional cost, enhances the quality of the channels of the OOB UEs. This enhancement in the OOB ... | In order to study the impact on the OOB performance, we consider the scheduling of UEs in a round-robin (RR) fashion at both BS-X and BS-Y. We note that the performance under opportunistic scheduling at either or both BSs can also be derived along similar lines, e.g., following the approach in [7]. Since the BSs are eq... | Firstly, we notice that the analytical expression as given in Theorem 2 matches well with the simulations which validates the accuracy of Theorem 2 even for smaller values of N𝑁Nitalic_N. Next, we observe that ZN(Y)subscriptsuperscript𝑍𝑌𝑁Z^{(Y)}_{N}italic_Z start_POSTSUPERSCRIPT ( italic_Y ) end_POSTSUPERSCRIPT sta... | B |
In game theory, Grabisch & Roubens (1999); Sundararajan et al. (2020); Tsai et al. (2022) proposed interaction metrics from different perspectives. | Zhou et al. (2015); Bau et al. (2017); Kim et al. (2018) visualized the potential correspondence between convolutional filters in a DNN and visual concepts in an empirical manner. | Unlike attribution methods, some studies focused on quantifying interactions between input variables (Sorokina et al., 2008; Murdoch et al., 2018; Singh et al., 2018; Jin et al., 2019; Janizek et al., 2020). | Typical explanation methods include visualizing patterns encoded by a DNN (Simonyan et al., 2013; Zeiler & Fergus, 2014; Yosinski et al., 2015; Dosovitskiy & Brox, 2016), estimating the attribution/importance/saliency of each input variable (Ribeiro et al., 2016; Sundararajan et al., 2017; Lundberg & Lee, 2017b; Fong &... | Some studies explained a DNN by distilling the DNN into another interpretable model (Frosst & Hinton, 2017; Che et al., 2016; Wu et al., 2018; Zhang et al., 2018; Vaughan et al., 2018; Tan et al., 2018). | D |
,\tilde{I}^{(m)}_{\text{test},c})]similarity = blackboard_E start_POSTSUBSCRIPT italic_c end_POSTSUBSCRIPT [ sim ( over~ start_ARG italic_I end_ARG start_POSTSUPERSCRIPT ( italic_m ) end_POSTSUPERSCRIPT start_POSTSUBSCRIPT train , italic_c end_POSTSUBSCRIPT , over~ start_ARG italic_I end_ARG start_POSTSUPERSCRIPT ( ita... | Therefore, high-order concepts were less likely to be generalized to testing data than low-order concepts. | ∙∙\bullet∙ For each concept, we compute the distribution of its effects on training samples and such a distribution on testing samples. We find that compared to the distribution of high-order concepts, the distribution of low-order concepts in training samples and that in testing samples are usually more similar to eac... | In this paper, we provide a conceptual understanding of the reason why low-order concepts in training data can usually better generalize to testing data than high-order concepts. Specifically, we prove that the average inconsistency of concepts usually increases exponentially along with the order of concepts. We find t... | We found that low-order interactive concepts usually had more similar distributions between training data and testing data than high-order interactive concepts. This meant that compared to high-order concepts, the DNN was more likely to extract similar low-order concepts from the training data and testing data. In othe... | D |
As you can see Mish has a constraint since it has a logarithm function, but MoLu does not. However, this wonderful coincidence inspired us why our activation function works well at a faster rate. | The MoLU is a simple, beautiful and powerful activation function that consists of a combination of hyperbolic tangent and exponential functions. The slope of the MoLU in the negative integer region make it possible to escape from the local minima. On the other hand, in the positive integer region, the slope of the MoLU... | Table 1: Comparison with output values. MoLU is almost linear in the positive integer region so that MoLU do not cause any loss of information. | We formulated our activation function in the following order. First, we used the hyperbolic tangent function as a basic framework, and then multiplied by the identity function to show the behavior of the identity function in the positive integer region. Lastly, we composited the exponential function to the hyperbolic t... | Table 1 show that the obvious boundary between the linear part in the positive integer region and the non-linear part in the negative integer region than some other activation functions. Our activation function acts like the identity function, such as ReLU or ELU, in the positive integer region so that it does not lose... | D |
As a consequence, there is no hope to devise an algorithm listing these objects in polynomial time in n𝑛nitalic_n. | Rather, the kind of efficiency we must aim for is either guaranteeing small exponential time aiming at reducing as much as possible the base of the exponent, referred to as the input-sensitive approach [12], or the ability to generate all solutions in a time which is polynomial in the sizes of the input plus the output... | In this paper, we place ourselves in the output-sensitive approach and refer the reader to [16, 24] for more details on enumeration algorithms and their complexity. | In our case, as the worst case delay to generate a child is the same as of generating all children, we do not even need to argue that we can resume the enumeration from the i𝑖iitalic_i-th child as long as we are interested in the asymptotic delay, though this can be done to speed up the implementation. | As of the delay, it only depends on the time needed to compute the parent and the delay needed to generate children when using folklore tricks such as the alternating output technique [26]. | A |
We emphasize once more that the fact that we choose T†superscript𝑇†T^{\dagger}italic_T start_POSTSUPERSCRIPT † end_POSTSUPERSCRIPT to be the W2subscriptW2{\rm W}_{2}roman_W start_POSTSUBSCRIPT 2 end_POSTSUBSCRIPT-OT map is independent of the choice of WpsubscriptW𝑝{\rm W}_{p}roman_W start_POSTSUBSCRIPT italic_p end_P... | neural networks (Statement 4) with Ω=[0,1]dΩsuperscript01𝑑\Omega=[0,1]^{d}roman_Ω = [ 0 , 1 ] start_POSTSUPERSCRIPT italic_d end_POSTSUPERSCRIPT, we use the approximation of Hölder continuous functions by neural networks [88], and the density of Hölder continuous functions in L2(Ω)superscript𝐿2ΩL^{2}(\Omega)italic_L... | Each statement follows from L2superscript𝐿2L^{2}italic_L start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT density results of the form of (17) for the corresponding approximation class | existing in the literature. For polynomials (Statement 1) see [22, 35, 101]. For splines (Statement 2), this is a consequence of the density of continuous functions in L2(Ω)superscript𝐿2ΩL^{2}(\Omega)italic_L start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT ( roman_Ω ), and the density of piecewise constant functions in C... | Density properties such as (17) are known for many families of functions. For example, using standard density results for polynomials, splines, and neural networks in L2superscript𝐿2L^{2}italic_L start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT, we immediately get | D |
[56] proposed a dual-branch FEIE model; the one branch (composed of Temporal CNN and Transformer encoder) handles the visual modality and the other handles the audio one; modality dropout is added for A/V feature fusion. | [51] achieved the 3rd place in the ERI challenge of the 5th ABAW; it proposed a methodology that involved extracting features from visual, audio, and text modalities using Vision Transformers, HuBERT, and DeBERTa. Temporal augmentation and SE blocks were applied to enhance temporal generalization [3, 4, 48, 2, 17] and ... | [59] presented a methodology that involved extracting audio and visual features using state-of-the-art models and aligning these features to a common dimension using an Affine Module. The aligned features were then fused using a Multimodal Multi-Head Attention model. | [56] proposed a dual-branch FEIE model; the one branch (composed of Temporal CNN and Transformer encoder) handles the visual modality and the other handles the audio one; modality dropout is added for A/V feature fusion. | [54] presented a methodology that involved extracting visual features from video frames using models like FAb-Net, EfficientNet, and DAN, which capture facial expressions and attributes. Audio features are obtained using Wav2Vec2 and VGGish models. The extracted features were then processed through a temporal convoluti... | A |
Table 1: Universal coverage tolerance table for split conformal prediction. For a nominal miscoverage level α𝛼\alphaitalic_α, an ϵ>0italic-ϵ0\epsilon>0italic_ϵ > 0, and a tolerance probability τ𝜏\tauitalic_τ, the table entries are the minimum required calibration sample sizes such that the empirical coverage C∞(n,α)s... | (X_{n+1}))E [ italic_C start_POSTSUPERSCRIPT ( italic_n , italic_α ) end_POSTSUPERSCRIPT start_POSTSUBSCRIPT italic_m end_POSTSUBSCRIPT ] = E [ italic_Z start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT ] = italic_P ( italic_Y start_POSTSUBSCRIPT italic_n + 1 end_POSTSUBSCRIPT ∈ italic_D start_POSTSUPERSCRIPT ( italic_α ) end_PO... | Let (Ω,ℱ,P)Ωℱ𝑃(\Omega,\mathscr{F},P)( roman_Ω , script_F , italic_P ) denote the underlying probability space from which we induce the distributions of all random objects considered in the paper. | In general, the coverage indicators Zisubscript𝑍𝑖Z_{i}italic_Z start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT are dependent random variables, since for all future observables the corresponding conformal prediction sets in Definition 3 are defined in terms of the same calibration sample conformity score S(⌈(1−α)(n+1)... | Paulo C. Marques F. receives support from FAPESP (Fundação de Amparo à Pesquisa do Estado de São Paulo) through project 2023/02538-0. | D |
The key challenge of event-based object classification is extracting discriminative appearance information from short-duration event streams. We convert the entire stream into a single voxel set as input of the classification network. The input is fed into the event voxel transformer encoder to obtain semantic features... | An overview of the proposed EVSTr model is illustrated in Fig. 3. EVSTr has two working modes corresponding to short-duration and long-duration recognition with event cameras. It comprises three stages: event representation, spatiotemporal feature learning, and long-range temporal modeling. For object classification, w... | Unlike object classification, which depends on appearance information, a core challenge of event-based action recognition is learning temporal dynamics. To retain the long-range temporal structure, we split an event stream into multiple segments for voxel set representations separately and then model the inter-segment ... | The architecture of the action recognition network is illustrated in Fig. 3. The model applied to action recognition and object classification keeps the same settings of event voxel transformer encoder, which are detailed in Section IV-A. We utilize the proposed encoder to extract per-segment features and aggregate the... | This work introduces a novel attention-aware model (EVSTr) for spatiotemporal representation learning on event streams. EVSTr takes event voxel sets as input to fit the sparsity of event data and hierarchically learns robust representations for recognition tasks. The proposed event voxel transformer encoder, consisting... | B |
More recently, solutions based on machine learning gained popularity. An example is Gaussian Process Implicit Surface [5], which, however requires points on the whole surface and suffers from poor scaling over a dense point cloud. Thus, the input must be downsampled, resulting in loss of detail. Other methods were orig... | We address the following problem. Given an initial RGB-D map, obtain an accurate representation of the complete shape of the object with the help of exploratory contact actions. The objective is either to maximize the accuracy given an upper bound on the number of touches or minimize the number of touches to reach a pr... | With advances in haptic exploration, some purely haptic approaches have been proposed. They utilize some techniques mentioned above, such as implicit shape potentials [12], or Gaussian Processes [13, 14, 15]. Gaussian-based methods have the advantage of having the ability to express uncertainty directly from their natu... | Deep Learning techniques such as Convolutional Neural Networks for shape completion typically represent objects as voxel grids. This allows to introduce probabilistic uncertainty in voxel grids, but the methods usually suffer from cubically growing computational requirements with the number of voxels, limiting the reso... | CNN-based methods [20, 21] usually require fewer touches but suffer from lower resolution due to computational requirements. Smith et al. proposed approaches [22, 23] based on Graph Neural Network. Reconstructions by these methods have a higher resolution but are nonsmooth and, for now, evaluated only in simulation. | C |
ℛ = Inv_c^ω(Pol^ω ℛ) | Let O_A be the set of all finitary operations on A and O_A^{(ω)} be the set of all ω... | Let 𝒮 be a set of finitary relations on A and F be a set of finitary operations on A. | Let ℛ be a set of ω-relations on A. | In this paper, we examine various concepts of clones of operations and relations on a given base set A. Unlike classic clone theory, which limits the arities of functions and relations to be finite, our study allows for arity ω for both operations and relations. Additionally, there are no res... | D
9: set L(y).copy ≔ L(y).orig | 16: set L(y).orig ≔ 0 | 13: set L(y).orig ≔ L(y).orig + 1 | 10: set L(y).orig ≔ L(y).orig + 1 | 22: set L(y).orig ≔ 0 | C
In contrast to the polynomial-time algorithms of the previous sections, here we show that k-DMC is NP-hard when considering d_min as the diversity measure. We called this variant Min-k-DMC in Section 1. The hardne... | In Section 5, we prove that the decision version of Min-k-DMC is already NP-hard when k = 3. The proof is split into three parts. First, we show that a variant of the constrained minimum vertex cover problem on bipartite graphs (Min-CVCB) of Chen and Kanj [CK03] is NP-hard. Then, we give a re... | In contrast to the polynomial-time algorithms of the previous sections, here we show that k-DMC is NP-hard when considering d_min as the diversity measure. We called this variant Min-k-DMC in Section 1. The hardne... | Let us first introduce the constrained minimum vertex cover problem on bipartite graphs. | In this paper, we initiate the algorithmic study of computing diverse minimum s-t cuts. Concretely, we introduce the following optimization problem. | C
∫ 2^{𝐈(⟨P⟩ : ⟨β, Q⟩)} dT_{⟨Q⟩}(β) | <^∗ | <^∗ | <^∗ | <^∗ | A
As described in Section IV-A, the CIFAR-100 dataset incorporates a superclass structure in addition to the class partition. Our proposed context normalization leverages this superclass information as “prior knowledge” for classification. Each superclass corresponds to a context, identified by a one-hot encoded vector o... | Two versions of CN are used in the experiments: CN on Patches (CN-Patches) and CN on Channels (CN-Channels). | As described in Section IV-A, the CIFAR-100 dataset incorporates a superclass structure in addition to the class partition. Our proposed context normalization leverages this superclass information as “prior knowledge” for classification. Each superclass corresponds to a context, identified by a one-hot encoded vector o... | TABLE VII: Comparison of the two Context Normalization methods on CIFAR-100: Context Normalization on Patches (CN-Patches) and Context Normalization on Channels (CN-Channels), with normalization to the mean and standard deviation of the dataset (ViT) and input normalization using batch normalization (BN). | Table VII demonstrates the significant performance improvement of context normalization over batch normalization (BN) when using the ViT architecture trained from scratch on CIFAR-100. Both CN-Patches and CN-Channels approaches outperform BN by approximately 10% and 18% in terms of accuracy and top-5 accuracy. The trai... | C
In-distribution Classification Accuracy. A potential risk of modifying the primitive classification network for OOD detection is the large degradation of the in-distribution classification accuracy. | As shown in Tab. 4, our proposed DFB does not have this issue, as DFB has only 0.12% top-1 accuracy drop on the CIFAR10 dataset and improves the classification performance by 0.23% on the CIFAR100 dataset. This result indicates that the dense prediction training in DFB ensures effective learning of foreground features,... | It then seamlessly integrates the foreground and background features into image classification models by transforming the dense prediction network to a (K+1)-class classification network, where the prediction entries of the K classes are focused on the class semantics of the K... | 3.2. Learning In-distribution Background Features via (K+1)-class Dense Prediction | Most of these methods, especially the post-hoc methods, are primarily based on the “foreground features” to detect OOD samples. These are the features that exhibit the semantics of the in-distribution classes, such as the appearance features of the ‘horse’ class images in the CIFAR10 image classification, as shown in F... | A
We propose a post-processing conditional diffusion model (DM) [13] with the capability of removing unwanted noise and other distortions in brightened low-light images. We name our conditional model Low-light Post-processing Diffusion Model (LPDM). The effect of post-processing using LPDM is displayed in Fig. 1. Our tec... | We propose a post-processing conditional diffusion model (DM) [13] with the capability of removing unwanted noise and other distortions in brightened low-light images. We name our conditional model Low-light Post-processing Diffusion Model (LPDM). The effect of post-processing using LPDM is displayed in Fig. 1. Our tec... | We avoid using DMs for sampling normally-exposed images owing to their expensive generative reverse process. Instead, we exploit the ability of DMs to capture complex conditional data distributions. | We introduce a method of applying DMs as a post-processing technique in the LLIE pipeline. Our framework is able to circumvent the computationally expensive iterative reverse process of DMs and denoise images in one pass through the model. | In this paper, we present a framework for post-processing images which have undergone low-light image enhancement. The enhancement of low-light images often reveals a variety of degradations which are hidden in the dark, and thus a need for post-processing is introduced. Furthermore, each low-light enhancement techniqu... | C |
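The first row of the table above excerpts a paper on split conformal prediction, and its cells reference the calibration-sample order statistic S_(⌈(1−α)(n+1)⌉) and the coverage guarantee P(Y_{n+1} ∈ D^{(α)}(X_{n+1})) ≥ 1 − α. As a hedged illustration of that standard procedure (not the specific paper's tolerance-table construction), here is a minimal sketch on synthetic data; the predictor, data, and all variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic calibration data: y = 2x + Gaussian noise (illustrative only).
x_cal = rng.uniform(0, 1, 500)
y_cal = 2 * x_cal + rng.normal(0, 0.1, 500)

# A fixed, pretend pre-trained point predictor.
def predict(x):
    return 2 * x

# Conformity scores on the calibration sample: absolute residuals.
scores = np.abs(y_cal - predict(x_cal))

# Split conformal threshold: the ceil((1 - alpha)(n + 1))-th smallest score.
alpha = 0.1
n = len(scores)
k = int(np.ceil((1 - alpha) * (n + 1)))
q = np.sort(scores)[k - 1]

# Prediction interval for a new point.
x_new = 0.5
interval = (predict(x_new) - q, predict(x_new) + q)

# Under exchangeability, empirical coverage on fresh test data concentrates
# near (and marginally at least) 1 - alpha.
x_test = rng.uniform(0, 1, 2000)
y_test = 2 * x_test + rng.normal(0, 0.1, 2000)
covered = np.abs(y_test - predict(x_test)) <= q
print(round(float(covered.mean()), 3))
```

The ⌈(1−α)(n+1)⌉ index, rather than a plain (1−α)-quantile of n scores, is what yields the finite-sample marginal guarantee that the excerpted row's expectation identity E[C_m^{(n,α)}] = P(Y_{n+1} ∈ D^{(α)}(X_{n+1})) refers to.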