$$\tau(G)=\frac{1}{|V|}\prod_{i=2}^{|V|}\lambda_{i}=\frac{1}{2|E|}\prod_{i=1}^{|V|}d_{i}\prod_{i=2}^{|V|}\bar{\lambda}_{i}\,.$$
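The eigenvalue identity above has an equivalent determinant form (the Matrix-Tree theorem: delete one row and column of the Laplacian and take the determinant), which is easy to check numerically. A minimal sketch in pure Python with exact rational arithmetic; the graphs used are illustrative, not from the paper:

```python
from fractions import Fraction

def spanning_trees(adj):
    """Count spanning trees via the Matrix-Tree theorem:
    determinant of the Laplacian with row/column 0 removed."""
    n = len(adj)
    # Reduced Laplacian L = D - A restricted to vertices 1..n-1
    L = [[Fraction(sum(adj[i]) if i == j else -adj[i][j])
          for j in range(1, n)] for i in range(1, n)]
    det = Fraction(1)
    m = n - 1
    for c in range(m):                      # fraction-exact elimination
        piv = next((r for r in range(c, m) if L[r][c] != 0), None)
        if piv is None:
            return 0
        if piv != c:
            L[c], L[piv] = L[piv], L[c]
            det = -det
        det *= L[c][c]
        for r in range(c + 1, m):
            f = L[r][c] / L[c][c]
            for k in range(c, m):
                L[r][k] -= f * L[c][k]
    return int(det)

# K4 (complete graph on 4 vertices) has 4^{4-2} = 16 spanning trees
K4 = [[0,1,1,1],[1,0,1,1],[1,1,0,1],[1,1,1,0]]
print(spanning_trees(K4))  # 16
```

The same count is obtained from the eigenvalue product: the Laplacian spectrum of K4 is {0, 4, 4, 4}, and 4·4·4/4 = 16.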
The formulae of the degree-Kirchhoff index of linear hexagonal chains [9], hexagonal Möbius chains [16], linear crossed hexagonal chains [18], linear polyomino chains [8], cylinder phenylene chains [17], random polygonal chains [13], generalized phenylenes [25], cylinder, and Möbius octagonal chain [15], linear pentag...
We obtain the formulae for the total count of spanning trees of $P_n$ and $P'_n$ using Theorem 4 as follows.
As both of the structural descriptors have important applications in graph theory, networking systems, molecular chemistry, and other related fields, researchers have taken a strong interest in the Kirchhoff indices and degree-Kirchhoff indices of graphs in recent years. Finding the explicit formulae for the Kirchhoff ...
Being motivated by the success of the Wiener index, nearly four decades later, in 1993, Klein and Randić put forward a novel structure-descriptor topological index, known as the Kirchhoff index [11]. The Kirchhoff index of a graph $G$ is calculated using the formula $\operatorname{Kf}(G)=\frac{1}{2}\sum_{i=1}^{|V|}\sum_{j=1}^{|V|}r_{ij}$ ...
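The double sum of resistance distances defining the Kirchhoff index can be evaluated directly on small graphs: each effective resistance $r_{ij}$ is the potential difference produced by a unit current injected at $i$ and extracted at $j$, obtained by solving the Laplacian system grounded at vertex 0. A toy sketch with exact arithmetic; the example graph is illustrative only:

```python
from fractions import Fraction

def kirchhoff_index(adj):
    """Kf(G) = (1/2) * sum_{i,j} r_ij = sum_{i<j} r_ij, where r_ij is
    the effective resistance between vertices i and j."""
    n = len(adj)

    def solve(b):
        """Solve the Laplacian system grounded at vertex 0, exactly."""
        m = n - 1
        A = [[Fraction(sum(adj[i]) if i == j else -adj[i][j])
              for j in range(1, n)] + [Fraction(b[i])] for i in range(1, n)]
        for c in range(m):                 # Gauss-Jordan elimination
            piv = next(r for r in range(c, m) if A[r][c] != 0)
            A[c], A[piv] = A[piv], A[c]
            for r in range(m):
                if r != c and A[r][c] != 0:
                    f = A[r][c] / A[c][c]
                    for k in range(c, m + 1):
                        A[r][k] -= f * A[c][k]
        return [Fraction(0)] + [A[i][m] / A[i][i] for i in range(m)]

    kf = Fraction(0)
    for i in range(n):
        for j in range(i + 1, n):
            b = [0] * n
            b[i], b[j] = 1, -1             # unit current in at i, out at j
            v = solve(b)
            kf += v[i] - v[j]              # r_ij = potential difference
    return kf

K4 = [[0,1,1,1],[1,0,1,1],[1,1,0,1],[1,1,1,0]]
print(kirchhoff_index(K4))  # 3, matching Kf(K_n) = n - 1
```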
C
We work with a familiar hidden-action moral hazard problem, as in Holmström, (1979), Grossman and Hart, (1983), and Laffont and Martimort, (2009, Chapter 4), with the friction arising out of limited liability (as in Innes, (1990)) rather than risk aversion. In contrast to much of the moral hazard literature, our princi...
A flourishing literature examines design problems in the face of non-Bayesian uncertainty. One branch of this literature examines models in which the principal entertains non-Bayesian uncertainty about the agents. Bergemann and Schlag, (2011) examine monopoly pricing on the part of a principal with ambiguous beliefs a...
A second branch of the literature examines settings in which the agent has ambiguous beliefs that the principal can potentially exploit. Beauchêne et al., (2019) and Cheng, (2020) examine Bayesian persuasion problems in which the sender exploits the ambiguity aversion of the receiver. Bodoh-Creed, (2012) and Di Tillio ...
Dai and Toikka, (2022) examine a principal who writes contracts to shape the actions of a team of agents, with the principal holding ambiguous beliefs about the actions available to the agents. Dütting et al., (2019) examine moral hazard problems in which the principal has ambiguous beliefs about the distribution of ou...
An implication of our results is that in the context of moral hazard problems, ambiguity and max-min utility drive optimal designs towards simplicity. We thus join a literature, with Holmström and Milgrom, (1987) as a key early entry, endeavoring to explain why actual contracts in moral hazard settings tend to be simpl...
A
Our sample size bound is similar to theirs in the pessimistic settings where $w_t \approx w_{\max}$ for all $t$, with slight improvements on some of the other dependen...
Subsampling has been thoroughly explored in the context of privacy amplification (see e.g. [BBG18, ZW19] or the book chapter [Ste22]): if $\mathcal{A}$ is a differentially private algorithm, running $\mathcal{A}$ on a random subset of the data gives an algorithm with even better privacy p...
In [FS17], they gave a mechanism with accuracy guarantees similar to ours (in particular, when $\operatorname{std}_{\varphi}(\mathcal{D})$ is very small, the accuracy guarantee in [FS17] can improve on that of Theorem 3; see...
Most interestingly, our mechanism and that of [FS17] are fairly similar – both rely on splitting the dataset and, roughly speaking, computing an approximate median of the queries’ value on each group – but the analyses are wholly different. Their analysis is based on differential privacy. In contrast, Theorem 4 is a s...
The proof of Theorem 3 is a simple corollary of our subsampling framework: Each vote ($\bm{v}_i$ in Figure 1) is the output of a subsampling query $\varphi\colon X\to\{0,1\}$ and so fits within our framew...
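The split-the-dataset-and-take-a-median pattern described above can be illustrated with a toy sketch. This is a generic median-of-group-aggregates illustration of the pattern, not the actual mechanism of [FS17] or the procedure of Figure 1, and the query `phi` here is just the identity:

```python
import random, statistics

def median_of_aggregates(data, phi, k, rng):
    """Split the data into k disjoint random groups, evaluate the query
    phi on each group (here: a group mean), and aggregate the per-group
    answers with a median.  Gross outliers corrupt at most a few groups,
    so the median of the group answers stays close to the clean value."""
    data = list(data)
    rng.shuffle(data)
    groups = [data[i::k] for i in range(k)]
    votes = [statistics.mean(phi(x) for x in g) for g in groups]
    return statistics.median(votes)

# 1000 clean values (mean 4.5) plus 5 gross outliers
data = [i % 10 for i in range(1000)] + [10**6] * 5
est = median_of_aggregates(data, lambda x: x, k=11, rng=random.Random(0))
# est stays near 4.5, while the plain mean of `data` is in the thousands
```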
C
The inductive argument is very similar to the one in the proof of Lemma 4.30. Indeed, since the vertex $r$ has at most one neighbour, $\ell\leq 1$ in the proof of Lemma 4.30 and the construction does not require arbitrary parallel compositions.
This subsection is dedicated to some further relations between the classes of graphs of bounded treewidth or pathwidth, $\mathcal{L}_{t}$, and $\mathcal{L}_{t}^{+}$...
Furthermore, one may drop Equation 4 from the level-$t$ Lasserre system of equations with non-negativity constraints to obtain the level-$2t$ Sherali–Adams system of equations in its original form, i.e. with non-negativity constraints. This is paralleled by Lemma 4.30.
Since the diagonal entries of a positive semidefinite matrix are necessarily non-negative, Equation 4 implies that any solution $(y_{I})$ to the level-$t$ Lasserre system of equations is such that $y_{I}\geq 0$...
First of all, dropping the semidefiniteness constraint Equation 4 of the level-$t$ Lasserre system of equations turns this system essentially into the level-$2t$ Sherali–Adams system of equations without non-negativity constraints, e.g. as defined in [grohe_homomorphism_2021_arxiv, Section 2.7]....
B
The lab is equipped with an OptiTrack motion capture system, which operates on the outside-in [39] tracking principle. Six motion-capture cameras are mounted around the experiment zone to take 2D images of the passive retroreflective markers on objects; the positions of the markers in the 2D frames are used to calculate the r...
The participant should start from position C and move towards the Table in position D to deliver the yellow cup, during which a non-contact interaction between the participant and the Spot was recorded by the motion capture cameras. In the non-stationary conditions, the Spot robot started moving from B to A the moment ...
The sideways orientation is the most distinctive condition of the three moving cases because it gives participants the sense that the robot is moving perpendicular to its heading instead of along the original path. The head of the Spot points perpendicularly toward the participant’s trajectory, so the Spot moves orthogonally to its orientation. Peopl...
Figure 3: Design of the experiment: 1) OptiTrack Cameras, 2) Spot Trajectory. The Spot always moves from B to A if not still. Without gaze, in forward conditions, the Spot faces point A; in sideways conditions, the Spot is perpendicular to line AB, facing the participant’s side; and in backward conditions, the Spot i...
Our robot in this experiment is the Spot from Boston Dynamics. It comes with built-in autonomous walking, inspecting, and avoidance functions. Compared to the general model, this one is mounted with a robotic arm on its back, which gives it the appearance of a dog with the gripper as its head (see Figure 1). There ...
D
Thus, $\lambda$ imposes a constraint on each joint angular value $\theta_i$, $i\in\{1,2,3\}$. When $\theta_i$ is within its...
The analytical method addresses Inverse Kinematics (IK) by solving a set of closed-form equations, which directly compute the generalized coordinates needed to position the manipulator’s end effector at a predefined target location [1]. This method leverages geometric insights and the specific structure of the robot. H...
The primary advantage of the analytical method is its accuracy and efficiency, providing real-time results and computing valid potential configurations. However, as the number of degrees of freedom in a manipulator increases beyond six, the number of possible solutions becomes very large. Furthermore, if a solution is ...
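For intuition, the closed-form flavor of the analytical method can be seen on a planar two-link arm, the textbook case; the link lengths and the elbow-selection convention below are illustrative assumptions, not tied to any specific manipulator in the cited works:

```python
from math import acos, atan2, cos, sin

def two_link_ik(x, y, l1=1.0, l2=1.0, elbow=+1):
    """Closed-form IK for a planar 2-link arm: joint angles
    (theta1, theta2) placing the end effector at (x, y).
    elbow=+1/-1 selects between the two mirror-image solutions."""
    c2 = (x*x + y*y - l1*l1 - l2*l2) / (2.0 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    theta2 = elbow * acos(c2)
    theta1 = atan2(y, x) - atan2(l2 * sin(theta2), l1 + l2 * cos(theta2))
    return theta1, theta2

def two_link_fk(theta1, theta2, l1=1.0, l2=1.0):
    """Forward kinematics, used here only to verify the IK solution."""
    return (l1 * cos(theta1) + l2 * cos(theta1 + theta2),
            l1 * sin(theta1) + l2 * sin(theta1 + theta2))

t1, t2 = two_link_ik(1.0, 1.0)
fx, fy = two_link_fk(t1, t2)   # recovers (1.0, 1.0) up to rounding
```

The two `elbow` branches are exactly the "multiple valid configurations" mentioned above; for redundant arms the solution set grows from a discrete choice to a continuum.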
A multi-objective optimization genetic algorithm (MOOGA) is a metaheuristic technique inspired by evolutionary biology processes, designed to tackle optimization problems. Bjoerlykhaug [20] utilized MOOGA to solve inverse kinematics in real-time while the robot is in motion, resulting in a reduction of computational ti...
Tringali et al. [20] introduced an optimal inverse kinematics approach based on optimization techniques for redundant robot manipulators, incorporating both linear and nonlinear constraints by selecting appropriate initial conditions. Similarly, Lu [21] employed optimization methods to ensure feasible and smooth joint ...
D
The comparability graph of the poset in Proposition 2.4 shows that for any fixed $h$, this bound has the right order of magnitude in $\varepsilon$. As in the case of posets, we can also use the test for $K_{\chi(\mathcal{F})}$ en...
The goal of this paper is to study testability of finite posets as special digraphs. By a poset, we mean a set equipped with a partial order $\prec$ that is anti-reflexive and transitive. Alon, Ben-Eliezer and Fischer [1] proved that hereditary (closed under induced subgraphs) classes of ordered graphs are stro...
The relationship between local and global properties of structures is a central theme in combinatorics and computer science. Since the work of Rubinfeld and Sudan [25], testing properties by sampling a small number of elements has been an emerging research area. A classical result of this kind is the triangle removal lemma ...
Alon and Shapira [4] proved that every monotone property of undirected graphs (that is, closed under the removal of edges and vertices) is strongly testable, see Lovász and Szegedy for an analytic approach [22], while Rödl and Schacht generalized this to hypergraphs [23], see also Austin and Tao [8]. Similar results ha...
Panna Tímea Fekete’s Project No. 1016492 has been implemented with the support provided by the Ministry of Culture and Innovation of Hungary from the National Research, Development and Innovation Fund, financed under the KDP-2020 funding scheme and supported by the ERC Synergy Grant No. 810115 – DYNASNET.
D
Here a common strategy is to manually create the initial KG either by development from scratch or reusing existing KGs. There may also be a complex pipeline to construct the initial KG by processing semi-structured data from catalogs, wikis, or category systems. All projects start with building or using some initial KG...
RDFUnit [242] is an evaluation tool for validating and testing RDF graphs against predefined quality constraints and patterns. It can assess the quality and compliance of RDF datasets concerning schema definitions, vocabulary usage, and data integrity (supporting SHACL). The tool enables the automatic generation of tes...
Overall, we see that the KG-specific approaches have a number of limitations regarding scalability to many sources, support for incremental updates, and several steps regarding metadata, ontology management, entity resolution / fusion, and quality assurance. The toolsets are generally better in terms of their functio...
This functionality is not always provided (or documented) and often based on manually defined rules and filter definitions, e.g., to select properties and relationships for certain entity types. Some solutions also apply normalization steps, e.g., to unify date or number representations.
A KG’s ontology defines the concepts, relationships, and rules governing the semantic structure within a KG of one or several domains, including the types and properties of entities and their relationships. To structure data in a KG, common ontology relationships such as is-a and has-a are used to represent taxo...
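A minimal sketch of how is-a and has-a relationships induce a taxonomy with property inheritance; the class names and the Python-dict encoding are purely illustrative, not taken from any of the surveyed systems:

```python
# Toy knowledge-graph fragment: is-a edges define the taxonomy,
# has-a edges attach parts/properties to a class.
is_a = {
    "Laptop": "Computer",
    "Computer": "ElectronicDevice",
    "Smartphone": "ElectronicDevice",
}
has_a = {"Computer": ["CPU", "Memory"]}

def ancestors(cls):
    """Walk the is-a chain upwards (transitive closure of is-a)."""
    out = []
    while cls in is_a:
        cls = is_a[cls]
        out.append(cls)
    return out

def inherited_parts(cls):
    """A class inherits has-a properties from all of its ancestors."""
    parts = list(has_a.get(cls, []))
    for a in ancestors(cls):
        parts += has_a.get(a, [])
    return parts
```

With this encoding, a `Laptop` is transitively an `ElectronicDevice` and inherits the `CPU` and `Memory` parts attached to `Computer`.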
C
Dual quaternion algebra has been highlighted in numerous works, including the dynamics modeling of a mobile manipulator  [5], stabilization of rigid body motion, multiple body interactions  [6], inverse kinematic study of 6-DOF robot arms, and tracking control  [7, 8]. For instance, Valverde et al.  [9] presented a se...
The primary objective of this paper was to leverage dual quaternion algebra for describing the kinematics, encompassing position and orientation, as well as the dynamics modeling of an anthropomorphic leg in 3D space, thereby circumventing the high computational costs associated with homogeneous transformation method...
In this paper, the dual quaternion-based theory is applied to the kinematics and dynamics study of the 7-DOF human lower limbs in 3D space. Subsequently, the artificial neural networks method is used to solve the inverse kinematics problem. The efficiency of the artificial neural networks method is verified using the j...
In this section, the Forward Kinematics (FK) of the lower limbs, depicted in Figure 1, using dual quaternions is established. FK involves computing the positions and orientations of the end-effector in task space from the axes and angles of the joint rotations. The lower limb is decomposed into four segments: the pelv...
This section focuses on the dynamic description of the lower limb shown in Figure 2 using the Dual Quaternion-based recursive Newton-Euler method. This method involves calculating the velocities and accelerations of the center of mass of each link, known as twists, based on the positions, velocities, and accelerations...
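The basic dual quaternion operations underlying such kinematic descriptions can be sketched compactly. This is a generic illustration of encoding a rotation-plus-translation as a unit dual quaternion $\hat{q}=q_r+\varepsilon q_d$ with $q_d=\tfrac12\,t\,q_r$; it is not the paper's leg model, and the pose used is an arbitrary example:

```python
from math import cos, sin, pi

def qmul(a, b):
    """Hamilton product of quaternions (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def qconj(q):
    return (q[0], -q[1], -q[2], -q[3])

def dq_from_rt(r, t):
    """Unit dual quaternion (q_r, q_d) with q_d = 0.5 * t * q_r,
    encoding rotation r followed by translation t = (tx, ty, tz)."""
    return r, tuple(0.5 * c for c in qmul((0.0,) + tuple(t), r))

def dq_translation(Q):
    """Recover the translation: t = 2 * q_d * conj(q_r)."""
    qr, qd = Q
    return qmul(tuple(2 * c for c in qd), qconj(qr))[1:]

# 90-degree rotation about z, followed by translation (1, 0, 0)
r = (cos(pi/4), 0.0, 0.0, sin(pi/4))
Q = dq_from_rt(r, (1.0, 0.0, 0.0))
rotated = qmul(qmul(r, (0.0, 1.0, 0.0, 0.0)), qconj(r))[1:]  # rotate (1,0,0)
```

Chaining link transforms is then a single dual quaternion product per joint, which is the computational advantage over 4×4 homogeneous matrices mentioned above.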
B
We consider again a diagonal neural network and estimate the time costs needed for computing its gradient, Hessian, decomposing the Hessian, and solving the cubic subproblem. Figure 6 shows that the average cost of computing the Hessian is significantly higher than the cost of computing one gradient, and the quotient g...
We consider the variance-reduced cubic Newton method from (Zhou et al., 2019) (referred to as “full VR”), its lazy version where we do not update the snapshot Hessian (“Lazy VR”), the stochastic Cubic Newton method (“SCN”), the Cubic Newton algorithm (“CN”), Gradient Descent with line search (“GD”) and Stochastic Gradi...
Figure 6: (left) times needed to compute the gradient, the Hessian, decompose the Hessian and solve the cubic subproblem for a diagonal neural network with $n=10000$ and different values of the dimension $d$. (right) average time for computing the Hessian divided by the average tim...
second-order optimization algorithms. We take into account that the cost of one stochastic Hessian is proportional to $d$ times the cost of the stochastic gradient, where $d$ is the problem dimension, which holds for general dense problems.
B
Intelligent reflecting surfaces (IRS) have been extensively studied in the literature as a means to enhance the performance of both indoor and outdoor wireless systems [1, 2]. An IRS is a passive electromagnetic surface which comprises IRS elements made of meta-materials. The IRS elements can introduce a small del...
As mentioned earlier, in this work, we consider a scenario where operator X deploys and controls an IRS in order to enhance the throughput of the users being served by it, and are interested in the effect of the IRS on an operator Y that is providing services in a different frequency band. Thus, in order to serve the k...
In practice, multiple network operators co-exist in a given geographical area, each operating in a different frequency band. As a consequence, at a given point in time, multiple UEs are served by different operators in the system. In such a scenario, if an IRS is optimized to cater to the needs of one of the operators, i...
which represents the difference in the SNR/channel gain at a UE $q$ (OOB-UE) served by BS-Y with and without the IRS in the environment. In Fig. 4, we plot the CCDF of $Z^{(Y)}_{N}$...
This proposition states that the random variables $\{Z^{(Y)}_{n}\}_{n\in\mathbb{N}\cup\{0\}}$...
B
Unlike attribution methods, some studies focused on quantifying interactions between input variables (Sorokina et al., 2008; Murdoch et al., 2018; Singh et al., 2018; Jin et al., 2019; Janizek et al., 2020). In game theory, Grabisch & Roubens (1999); Sundararajan et al. (2020); Tsai et al. (2022) proposed interaction m...
Instead, previous studies interpreted DNNs from other perspectives, such as illustrating the visual appearance that maximizes the inference score (Simonyan et al., 2013; Yosinski et al., 2015), and estimating attribution/importance/saliency of input variables (Ribeiro et al., 2016; Sundararajan et al., 2017; Lundberg &...
Some studies explained a DNN by distilling the DNN into another interpretable model (Frosst & Hinton, 2017; Che et al., 2016; Wu et al., 2018; Zhang et al., 2018; Vaughan et al., 2018; Tan et al., 2018). However, most explanation methods did not try to disentangle concepts encoded by a DNN.
Many explanation methods have been proposed to explain DNNs from different perspectives. Typical explanation methods include visualizing patterns encoded by a DNN (Simonyan et al., 2013; Zeiler & Fergus, 2014; Yosinski et al., 2015; Dosovitskiy & Brox, 2016), estimating the attribution/importance/saliency of each input...
B
Interactive concepts vs. cognitive concepts and other interaction metrics. Although the Harsanyi interactive concept seems partially aligned with humans’ cognition to some extent (Cheng et al. 2021b), we do not think such interactive concepts exactly fit humans’ cognition. More crucially, the mathematical generalizati...
Although there is a common intuition that more complex representations usually lead to over-fitting, this study uses an analytic inconsistency of concepts to explain the connection between the complexity of interactive concepts and their generalization power. The complexity of an interactive concept $S$ is de...
Therefore, a high-order interactive concept contains a large number of input variables, and represents a complex concept. In this way, we use the high inconsistency to noises of high-order concepts to explain the high over-fitting risk of high-order concepts.
In general, a DNN’s representation complexity is different from the cognitive complexity. For example, let us consider a small ball concept consisting of a few pixels (low-order concept) and a large ball concept consisting of massive pixels (high-order concept) in images. These two balls have similar cognitive difficul...
Complexity (order) of interactive concepts.  The complexity of the interactive concept $S$ is defined as the number of input variables contained in the concept, which is also termed the order of the concept, i.e., $\textit{order}(S)=|S|$.
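Under this definition, the order of a concept $S$ is just $|S|$, and the Harsanyi interaction of every coalition can be enumerated explicitly on a toy model. The value function below is purely additive (an illustrative assumption, not a DNN), so all interactions of order $\geq 2$ vanish:

```python
from itertools import chain, combinations

def subsets(s):
    """All subsets of s, as tuples, ordered by size."""
    return chain.from_iterable(combinations(s, k) for k in range(len(s) + 1))

def harsanyi(v, players):
    """Harsanyi dividend (interaction) of each coalition S:
    I(S) = sum_{T subseteq S} (-1)^(|S| - |T|) v(T);
    order(S) = |S| is the complexity of the interactive concept."""
    return {S: sum((-1) ** (len(S) - len(T)) * v(T) for T in subsets(S))
            for S in subsets(players)}

# Purely additive toy model: v(T) is a sum of individual effects,
# so every interactive concept of order >= 2 has zero interaction.
effects = {"a": 1.0, "b": 2.0, "c": 4.0}
v = lambda T: sum(effects[i] for i in T)
I = harsanyi(v, ("a", "b", "c"))
```

Any non-additivity in `v` would show up as nonzero dividends on the higher-order coalitions, which is exactly what the order-based complexity measure above counts.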
C
In NeuralODEs (Neural Ordinary Differential Equations), the ELU (Exponential Linear Units) [1] or the tanh function has typically been used as the activation function because of its differentiability over the whole domain. The prediction performance of NeuralODEs was not good for longer time-series data, or even for short time-series...
ReLU (Rectified Linear Units) is mainly used in the field of vision classification. In classification, more layers are ordinarily better in deep learning. On datasets such as MNIST, even simple CNNs with three layers can achieve high classification performance. However, for more challenging datasets such as CIFAR10, s...
We conducted an experiment on CIFAR10, which is a more challenging dataset than MNIST in the classification field. A ResNet18, optimized using SGD with a batch size of 32, a learning rate of 0.001, and a momentum of 0.9, is used for the experiment with a random seed of 10. Our activation function converges rapidly with respect to...
We conducted experiments with MNIST, the most commonly used dataset in the field of image classification. We used the MNIST dataset in torchvision and a two-layer network optimized using SGD with a batch size of 64, a learning rate of 0.001, and a momentum of 0.5, with a random seed of 10. We confirmed that ou...
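The optimizer used in both experiments is plain SGD with a momentum buffer, which can be sketched in a few lines. The quadratic toy objective below is illustrative only (the actual experiments train networks on MNIST/CIFAR10), and the hyperparameters are the ones quoted above:

```python
def sgd_momentum(grad, w0, lr=0.001, momentum=0.9, steps=2000):
    """SGD with a momentum buffer: v accumulates a decaying sum of past
    gradients, and the parameter moves against the velocity."""
    w, v = w0, 0.0
    for _ in range(steps):
        v = momentum * v + grad(w)
        w -= lr * v
    return w

# Minimize f(w) = (w - 3)^2, whose gradient is 2 * (w - 3).
w = sgd_momentum(lambda w: 2.0 * (w - 3.0), w0=0.0)
```

With a small learning rate, momentum effectively rescales the step by roughly 1/(1 - momentum), which is why the same lr can behave very differently at momentum 0.5 versus 0.9.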
A
Another interesting line of research is to find variations of the tree-merging approach suitable for the efficient generation either of restricted classes of functional digraphs (for instance, with cycles of given lengths or trees of given heights, which is sometimes useful in applications related to the decomposition ...
We can now exploit the algorithms of Theorem 3.1, and more specifically our ability to generate the successor of a given component, as a subroutine for the efficient generation of arbitrary (not necessarily connected) functional digraphs. In order to avoid generating multiple isomorphic digraphs, we first define an app...
We would like to thank Kellogg S. Booth, Jerome Kelleher, Brendan D. McKay, and Kévin Perrot for some useful discussions and for providing some bibliographic references. We are also grateful to an anonymous reviewer for suggesting the explicit use of the supergraph method, which allowed us to simplify the proofs and to...
Figure 2: Reverse search tree for the generation of components of 4 vertices, represented both in graphical form (top) and as isomorphism codes (bottom); the order of generation by the algorithm of Theorem 3.1 is displayed on the top left. Note that the actual ordering of children of each node depends on the (arbitr...
For brevity, we refer to connected functional digraphs as components and, as with trees, we identify a component $C$ with its own code. A valid code for a component $C$ is also called a canonical form of $C$; unless otherwise specified, in the rest of the paper we consider all components to b...
B
The following theorem shows that four-layer feed-forward neural networks with the ReLU activation are dense in $L^{p}_{\eta}(\mathbb{R}^{d};\mathbb{R}^{m})$...
and its development is a major analytical contribution of this paper. While in the context of uncertainty propagation and inverse problems, some results have been proven in this direction when $D$ is the $L^{q}$ distance between the dens...
This result is of independent interest since typical universal approximation results for neural networks are stated over compact domains while our result is stated on all of $\mathbb{R}^{d}$. While similar results have been shown for operator ...
that demonstrate our theory, and even explore the validity of our approximation results beyond the current set of hypotheses. Lastly, for our applications, we present a new result concerning the approximation accuracy of neural networks on unbounded domains, Theorem 4.6, which is of independent interest.
neural networks (Statement 4) with $\Omega=[0,1]^{d}$, we use the approximation of Hölder continuous functions by neural networks [88], and the density of Hölder continuous functions in $L^{2}(\Omega)$...
B
The Aff-Wild2 database [38, 40, 20, 36, 22, 37, 34, 28, 57, 18, 33, 41, 32, 42, 43] is the largest in-the-wild database and the only one to be annotated in a per-frame basis for the seven basic expressions (i.e., happiness, surprise, anger, disgust, fear, sadness and the neutral state), twelve action units (AUs 1,2,4,6...
We evaluate our method on the updated partitioning protocol of this database according to our previous work [15, 14] (as we mentioned in Section 3.1). This new partitioning consists of a training set of around 25K images, a validation set of around 7K images and a test set of around 14K images.
Our recent studies [15, 14] have highlighted challenges in the current evaluation of affect analysis methods, noting inconsistencies in database partitioning and evaluation practices that lead to biased and unfair comparisons. To address these issues, a unified protocol for database partitioning was proposed, ensuring...
The original training set of this database consists of around 290K images and the original validation of 4K. We evaluate our method on the updated partitioning protocol of this database according to our previous work [15, 14] (as we mentioned in Section 3.1). This new partitioning consists of a training set of around 1...
The EmotioNet database [5] contains around 1M images and was released for the EmotioNet Challenge in 2017. 950K images were automatically annotated and the remaining 50K images were manually annotated with 11 AUs (1,2,4,5,6,9,12,17,20,25,26); around half of the latter constituted the validation and the other half the ...
C
A conformity function is a mapping $\rho:\mathbb{R}^{d}\times\mathscr{Y}\times\Omega\to\mathbb{R}$ such that $\rho(x,y)=\rho(x,y,\boldsymbol{\cdot})$...
Note that the regularity of a specific conformity function ρ𝜌\rhoitalic_ρ is contextual, being inherently dependent on the distribution of the underlying data sequence. Technically, we can always avoid ties among the sequence of conformity scores almost surely by introducing a properly constructed ancillary tie-break...
Conformity functions are agnostic to the choice of the specific models or algorithms used to construct $\hat{\mu}$, $\hat{\xi}_{p}$, and $\hat{\pi}$ in Exa...
In general, the coverage indicators $Z_{i}$ are dependent random variables, since for all future observables the corresponding conformal prediction sets in Definition 3 are defined in terms of the same calibration sample conformity score $S_{(\lceil(1-\alpha)(n+1)\rceil)}$...
Under the data exchangeability assumption, the sequence of conformity scores $\{S_{i}\}_{i\geq 1}$ is exchangeable.
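The calibration-score quantile appearing above (the score of rank $\lceil(1-\alpha)(n+1)\rceil$) is easy to sketch for split conformal prediction. The uniform toy scores below are an illustrative assumption, and the empirical check only demonstrates approximate marginal coverage on exchangeable data:

```python
import math, random

def conformal_threshold(scores, alpha):
    """Split conformal: the threshold is the ceil((1-alpha)(n+1))-th
    smallest calibration conformity score."""
    n = len(scores)
    k = math.ceil((1 - alpha) * (n + 1))   # rank in the sorted scores
    return sorted(scores)[k - 1]

rng = random.Random(0)
calib = [rng.random() for _ in range(999)]      # exchangeable toy scores
thr = conformal_threshold(calib, alpha=0.1)
# A fresh exchangeable score falls below thr with probability ~ 1 - alpha.
fresh = [rng.random() for _ in range(5000)]
coverage = sum(s <= thr for s in fresh) / len(fresh)
```

The dependence noted above is visible here: every future coverage indicator compares against the same `thr`, so the indicators are not independent even though marginal coverage holds.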
A
The recording system consists of an event camera (iniLabs DAVIS 240C [3]) and an RGB-D camera (Intel RealSense D435), as shown in Fig. 6 (a). The dynamic range of the event camera is 120 dB. We set the frame rate of the RGB-D camera to 30 FPS. We synchronize the time of two vision sensors to align the timestamps of ev...
Furthermore, we extend this encoder with a Stream-Segment Temporal Modeling Module (S2TM) to learn temporal dynamics for action recognition. Specifically, we split a long-duration event stream into a sequence of segmented voxel sets and extract spatiotemporal features per segment by the encoder. Then, a sequence of fe...
Some representative samples of NeuroHAR are visualized in Fig. 6 (b). As a supplement to recording with still cameras, we preserve the background information in the motion scene by hand-held mobile recording (50% of data per category), making the dataset more suitable for practical applications. Besides, we record huma...
Figure 1: (a) Visualization of event camera output. An event camera records the spatiotemporal information of the running action as a stream of events, where red and blue dots denote positive and negative, respectively. (b) Comparison of samples from DVS128 Gesture [14] and NeuroHAR datasets. The sample is a human acti...
C
CNN-based methods [20, 21] usually require fewer touches but suffer from lower resolution due to computational requirements. Smith et al. proposed approaches [22, 23] based on Graph Neural Networks. Reconstructions by these methods have a higher resolution but are nonsmooth and, for now, evaluated only in simulation.
An important part of haptic exploration is deciding where to touch. The object can be touched randomly, as done by Smith et al. [22], or always at a position opposite the camera (from “behind”), as by Watkins-Vall et al. [20]. However, these are not as effective as an uncertainty-driven approach. Uncertainty can co...
Points with a high loss from Eq. 8 are not on the estimated surface. And, intuitively, if all points were certain, the loss would be zero for all of them. However, this was not the case in our experiments. Therefore, we compute the Eikonal loss from Eq. 8 for all points on our current shape O𝑂Oitalic_O and take it as ...
where, in our case, 𝐱𝐱\mathbf{x}bold_x is a point defined in 3D space and s𝑠sitalic_s is the signed distance. Traditionally, f𝑓fitalic_f would be described analytically, but it can also be learned with a neural network. Then, the implicit surface generated with a neural network can be described as
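As a concrete instance of such an implicit surface, a sphere admits the analytic signed distance $f(\mathbf{x})=\lVert\mathbf{x}\rVert-r$, and its gradient has unit norm away from the center, which is the Eikonal property that the loss in Eq. 8 penalizes. A small numerical sketch (a toy analytic function, not the learned network):

```python
from math import sqrt

def sphere_sdf(p, r=1.0):
    """f(x) = |x| - r: signed distance to a sphere, negative inside."""
    return sqrt(p[0]**2 + p[1]**2 + p[2]**2) - r

def grad_norm(f, p, h=1e-5):
    """Norm of the central-difference gradient of f at p; for a true
    signed distance function this is 1 (the Eikonal property)."""
    g = []
    for i in range(3):
        hi = list(p); hi[i] += h
        lo = list(p); lo[i] -= h
        g.append((f(hi) - f(lo)) / (2 * h))
    return sqrt(sum(c * c for c in g))

gn = grad_norm(sphere_sdf, [0.5, -0.3, 0.8])  # ~1.0 away from the center
```

Points where the learned network's gradient norm deviates from 1 incur a high Eikonal loss, which is what makes the loss usable as an uncertainty signal.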
To identify the “good clones” satisfying $F = \mathrm{Pol}(\mathrm{Inv}(F))$, two approaches can be followed. The topological one involves equipping the set of operations with the topology of pointwise convergence, so that the...
This approach based on ideals allows us to recover well-known topologies on $A^{B}$, such as the topology of pointwise convergence (referred to as the local topology in this work) and the uniform topology (refer to [17], Section 19).
In the classical approach to polymorphisms and invariant relations, topology, especially local topology, plays an important role in the set $O_{A}$ of finitary operations. However, it seems to have no role in the set $\mathrm{Rel}_{A}$...
This approach is widely followed in the literature (e.g. see [4]). The topology-free approach takes into account the cardinality of the basic set $A$ and the arity of operations. Bruno Poizat fully developed the topology-free approach in [19]. In [20, 21] R. Pöschel characterised the Galois closed sets of rela...
In this paper, we adopt the topological approach as we primarily focus on $\omega$-operations and relations of arity $\omega$, referred to as $\omega$-relations. We present a method for defining topologies on sets of functions. The key idea is to choose a Boolean ideal $X$ of subsets...
An illustrative example is the so-called synthesis problem in the field of system identification, where (under special conditions) the Minimum $s$-$t$ Cut problem can be used to determine an optimal placement of input and output signals in a physical system (modeled as a directed graph) to gather info...
One way of dealing with this issue is to present all optimal solutions of the simplified model and let a user choose between them based on external factors ignored by the mathematical model. Such an approach is useful when the number of optimal solutions is small, but in most cases (as in the Minimum $s$-$t$...
We now briefly motivate why finding diverse minimum $s$-$t$ cuts in a graph can be of interest. In general, to solve a real-world problem, one typically formulates the problem as an instance of a computational problem and proceeds to find a solution with the help of an optimization algorithm. However...
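For concreteness, a single minimum $s$-$t$ cut can be recovered from a max-flow computation. The sketch below (our own illustrative code, not the reduction used in the paper) runs the Edmonds-Karp algorithm on a small adjacency-matrix representation and reads the cut off the residual graph.

```python
from collections import deque

def min_st_cut(n, edges, s, t):
    """Minimum s-t cut via Edmonds-Karp max-flow.
    n: number of nodes; edges: list of (u, v, capacity) directed edges."""
    cap = [[0] * n for _ in range(n)]
    for u, v, c in edges:
        cap[u][v] += c
    flow = [[0] * n for _ in range(n)]

    def bfs():
        # BFS over positive residual capacities; parent[v] == -1 means unreachable
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        return parent

    while True:
        parent = bfs()
        if parent[t] == -1:
            break
        # bottleneck capacity along the augmenting path t -> s
        path, v = [], t
        while v != s:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(cap[u][v] - flow[u][v] for u, v in path)
        for u, v in path:
            flow[u][v] += aug
            flow[v][u] -= aug

    # nodes reachable from s in the final residual graph form one side of the cut
    reach = {i for i, p in enumerate(bfs()) if p != -1}
    cut_edges = [(u, v) for u, v, c in edges if u in reach and v not in reach]
    value = sum(c for u, v, c in edges if u in reach and v not in reach)
    return value, cut_edges
```

By max-flow/min-cut duality, the returned value equals the maximum flow; enumerating *diverse* minimum cuts, as motivated above, requires more than this single-cut routine.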
This section is devoted to proving Theorem 1.1 by reducing SUM-$k$-DMC and COV-$k$-DMC to SMF on distributive lattices. First, we show that the domain of solutions of SUM-$k$-DMC and COV-$k$-DMC can be restricted to the set of $k$-tuples that satisfy a particular order, as o...
We let $X$ be the real line and the random transition $\Gamma(\cdot\,|\,u)$ be a Gaussian $\mathcal{N}(u,1)$ with mean $u$. The space of measures is $X_{M}=[0,1]$...
In quantum mechanics, given a quantum state $\ket{\psi}$, a measurement, or POVM, $E$ produces a probability measure $E\ket{\psi}$ over strings. This probability represents the classical information produced from the measurem...
Another interesting property of quantum mechanics is that the vast majority of quantum pure states themselves will have negligible algorithmic self information $\mathbf{I}_{\mathcal{Q}}(\ket{\psi}:\ket{\psi})$...
We prove conservation of probabilities over successively general spaces. This includes finite sequences, infinite sequences, and $T_{0}$, second countable topologies. Conservation of probabilities over the case of finite and infinite sequences follows directly...
Quantum information theory studies the limits of communicating through quantum channels. This section shows the limitations of the algorithmic content of pure states and their measurements. Given a measurement apparatus $E$, there is only a tiny fraction of quantum pure states on which $E$’s applicati...
Specifically, we train ViT models with the same settings as in Section IV-C with context normalization approaches (CN-Channels and CN-Patches) on the combined dataset of CIFAR-100 and MNIST digits. We target two contexts $r\in\{1,2\}$, corresponding to the datasets and the context identifier ...
It is important to mention that the baseline models (ViT with standard preprocessing and ViT with batch normalization) collapsed on this blended dataset, as the two datasets have different structures and simple normalization does not yield a suitable representation of the data. Context normalization, on the other hand,...
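The core idea of context normalization can be sketched in one dimension: each sample is standardized with the statistics of its own context $r$ rather than those of the whole batch. This is a simplified illustration of the principle, not the exact CN-Patches/CN-Channels operator.

```python
from statistics import mean, pstdev

def context_normalize(samples, contexts):
    """Normalize each scalar with the mean/std of its own context r instead of
    global batch statistics, so samples from structurally different datasets
    (e.g. CIFAR-100 vs. MNIST) are each mapped to a comparable range."""
    stats = {}
    for r in set(contexts):
        vals = [x for x, c in zip(samples, contexts) if c == r]
        sd = pstdev(vals)
        stats[r] = (mean(vals), sd if sd > 0 else 1.0)
    return [(x - stats[c][0]) / stats[c][1] for x, c in zip(samples, contexts)]
```

With per-context statistics, both contexts end up zero-mean and unit-variance, which is exactly what global batch statistics fail to achieve on a blended dataset.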
TABLE VII: Comparison of the two Context Normalization methods on CIFAR-100: Context Normalization on Patches (CN-Patches) and Context Normalization on Channels (CN-Channels), with normalization to the mean and standard deviation of the dataset (ViT) and input normalization using batch normalization (BN).
Table VII demonstrates the significant performance improvement of context normalization over batch normalization (BN) when using the ViT architecture trained from scratch on CIFAR-100. Both CN-Patches and CN-Channels approaches outperform BN by approximately 10% and 18% in terms of accuracy and top-5 accuracy. The trai...
Although the background-based OOD score $S_{b}$ can be used to detect OOD samples directly, it can miss the OOD samples whose detection relies heavily on the foreground features. Thus, we propose to utilize this background OOD score to complement exis...
There are generally two types of post-hoc OOD detection approaches, including raw logit-based and softmax probability-based methods. Our background-based OOD score is based on an unbounded logit value, which can dominate the overall OOD score when combined with the foreground-based OOD score using the softmax output (...
Lastly, an OOD score in the foreground dimension, obtained from existing post-hoc OOD detectors based on the $K$-class predictions, and an OOD score obtained from the extra (+1) class prediction in the background dimension are synthesized to perform OOD detection.
where $S(\mathbf{x})$ is the final OOD score used to perform OOD detection in DFB, $S_{h}(\mathbf{x})=h(\mathbf{x})$ denotes the OOD score obtained from using an existing foreg...
LLIE techniques have existed for many decades and can be divided into non-learning-based methods and learning-based methods. Popular examples of traditional techniques which do not require learning from data include variants of histogram equalization (HE) [6, 20] and gamma correction (GC) [21]. HE adjusts the global co...
In this paper, we present a framework for post-processing images which have undergone low-light image enhancement. The enhancement of low-light images often reveals a variety of degradations which are hidden in the dark, and thus a need for post-processing is introduced. Furthermore, each low-light enhancement techniqu...
Illumination Map Estimation (LIME) [1] provides an effective image enhancement approach; however, post-processing denoising is typically still necessary using algorithms such as BM3D [11], which often blur high-frequency details. Alternative Retinex-based methods reformulate the traditional Retinex model to incorporate ...
Existing denoising techniques can be applied to denoise low-light images either before or after contrast enhancement [9, 10]. These denoising techniques range from low-pass filters and algorithms such as block matching and 3D filtering (BM3D) [11], to state-of-the-art DL denoisers [9, 12]. Despite denoisers significan...
Simply adjusting the contrast of low-light images using a technique such as histogram equalization [6] is often insufficient due to the amplification of noise [1, 7]. Learning-based methods have emerged which significantly outperform traditional methods. However, even the state-of-the-art deep learning (DL) techniques ...
Our study highlights the role of individual characteristics in participants’ navigation performance within the knowledge space, with this influence being modulated by constraints such as time and distance. We discovered that prior experience with Wikipedia, the navigation game, and familiarity with the target page are...
Regarding other individual characteristics, distinctions arise between the two types of games (see Supplementary Figure 2 for the game time distribution of the two games). Among participants who chose to play games featuring time constraints, male participants of Asian ethnicity without native-level fluency in a fo...
Regarding other traits, distinctions emerge when considering the two categories of constraints. Among participants who chose to play Speed-race games involving time constraints, superior performance is exhibited by male participants with an Asian ethnic background who do not speak a foreign language at a native level. ...
Encoding the participants’ answers to the questions in the survey (see encoding details in the Supplementary Material), we end up with 18 control variables characterizing the participants by the six groups of questions specified above, 5 control variables indicating the game, game type (Speed-race or Least-clicks), rou...
Our regression analysis on the uniqueness scores of navigation paths reveals that akin to success, individual characteristics also influence the uniqueness of successful routes. Specifically, among participants who chose to play the games under time constraints, younger and left-handed participants (third principal co...
The simulation of high-energy collisions, of the decays of the generated particles, and of the physics processes induced within the detector by the decay products is a key necessity for analysis, typically for separating the signal from background sources or for selection efficiency studies.
As mentioned in the previous Section, the validation of the ultra-fast philosophy of Lamarr is based on the comparison between the distributions obtained from models trained on detailed simulation and the ones resulting from standard simulation strategies.
The simulation software of the LHCb experiment is built upon two main projects named Gauss and Boole [3], both based on the Gaudi framework [4]. The Gauss framework implements the so-called generation and simulation phases, while the Boole application is responsible for the digitization phase.
The first step of any simulation production is the generation phase in which the high-energy collisions are simulated with Monte Carlo generators such as Pythia8 [5] and EvtGen [6]. The output of the generation phase is the set of long-lived particles able to traverse partially or entirely, depending on the particle sp...
Lamarr [14] is a novel LHCb simulation framework implementing the ultra-fast simulation paradigm. The Lamarr framework consists of a pipeline of modular parameterizations designed to take as input the particles generated by the event generators and provide as output high-level quantities representing the particles succ...
Group 1 of Table 3 showcases the contact map prediction scores, evaluated across the CATH, trRosetta, Ts50/Ts500, and CASP14 test sets. Both the residue-level and protein-level pretrained models demonstrate high P@L accuracy in predicting contact maps across all datasets, indicating that the pretrained structure module...
The retrieval alignment evaluation on the CATH, trRosetta, Ts50/Ts500, and CASP14 test sets is presented in Group 2 of Table 3. Additionally, we provide residue-level and protein-level results for comprehensive analysis. It is noteworthy that the protein-level pretrained model exhibits greater ease in aligning sequence...
As shown in Figure 3, we propose to compute the alignment loss at both the residue and protein levels, respectively. For residue-level alignment, we compare the encoded sequence and structure features residue by residue. In contrast, for protein-level alignment, we compare the encoded features at a coarser, fine-graine...
In the proposed pretraining framework, we operate under the strong assumption that protein language models and protein structure models are equally proficient in representing features, albeit through different modalities. Hence, we advocate for quantifying the sequence-structure retrieval power to gauge the alignment p...
Figure 1: (a) The proposed cross-modal contrastive learning framework utilizes a pretrained protein language model to guide the training of the protein structure model through contrastive alignment loss. To reinforce information constraints on the structure, we introduce a self-supervised contact map prediction. (b) T...
We implement all the methods with OpenKE [33], a PyTorch-based open-source framework for knowledge embedding (code is available at https://github.com/brcai/LiftNet). We run TransE, TransH, DistMult, and ComplEx with low-dimensional (16) and high-dimensional (512) embedding dimensions to show the difference ...
The accuracy of link prediction of the proposed LiftNet-based methods and the compared conventional methods is shown in Table II. We observe that TransE, TransH, DistMult, and ComplEx all obtain higher link prediction accuracy with 512 embedding dimensions than with 16 embedding dimensions, due to the increased expressive...
The results of LiftNet-based methods for knowledge graph link prediction (accuracy measured by H@10 and MRR) are shown in Fig. 3. Generally, on the WN18RR dataset, we observe that the link prediction accuracy increases with higher input dimension, and the increase is significant from 4 to 16 dimensions. However, after...
In Table V, we show the parameter efficiency of LiftNet-based methods on the three datasets. The results are shown pairwise, i.e., the numbers of parameters required by 512-dimensional KGE models and those of the corresponding LiftNet models, because they achieve similar link prediction accuracy. KGE methods with 16-di...
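The pairwise comparison rests on the fact that embedding tables grow linearly with the dimension. A minimal sketch with hypothetical entity/relation counts (not the paper's datasets):

```python
def embedding_params(n_entities, n_relations, dim):
    """Translational KGE models such as TransE store one dim-sized vector per
    entity and per relation, so parameter count is linear in the dimension."""
    return (n_entities + n_relations) * dim

# Hypothetical graph sizes: a 512-d model needs 512 / 16 = 32x the embedding
# parameters of a 16-d model on the same graph.
ratio = embedding_params(40_000, 11, 512) / embedding_params(40_000, 11, 16)
```

This linear growth is what a LiftNet-style model avoids by starting from low-dimensional embeddings and lifting them with shared layers.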
We adjust the setups of TC layers by increasing/decreasing the kernel size accordingly to ensure the output dimensions are always 512. The results with respect to link prediction accuracy (H@10) are shown in Table VIII. We see that the results of LN-TransE and LN-TransH are relatively stable, introducing more TC layers...
The second step is to demonstrate that the multipartite metrics (specifically, the fidelity and rate metrics for the multipartite state) are monotonic and label-isotonic with respect to the paths. For these metrics, instead of adding links to extend paths, one should combine paths to form trees. In the particular case ...
Assuming that we have already found the optimal fidelity curves between any two nodes in the network, one can easily find the fidelity curve associated with multipartite entanglement distribution for a given source node and any tree nodes [6]. One has to test all possible source nodes, which suggests that the run time of su...
Finding the best route to distribute entanglement has proven to be a non-trivial problem [3, 4, 5, 6, 7, 8]. Caleffi et al. [3] studied a single-qubit entanglement generation model for bipartite entanglement (i.e., between only two nodes). In their model, the entangled qubits start as maximally entangled and subsequent...
Figure 1: Entanglement distribution. (a-b) Shows fidelity as a function of the entanglement generation probability $p$ (and the equivalent in terms of capacity $c$, for $n_{e}=1$ ebit) for two links. Two fidelity curves are de...
Fig. 5 shows the minimum required SNRs of polar codes constructed by polar sequence [4], GA algorithm [7] and MWD sequence to achieve the target BLER under the AWGN channel with the code rate range $R=0.0625\sim 0.9375$. The required SNRs of MWUB equal to ...
In Fig. 5(a), we observe that the required SNRs of polar sequence [4], GA algorithm [7] and MWD sequence are close to the required SNRs of corresponding MWUB. Then, since the polar codes constructed by the MWD sequence have the optimum MWUB, the required SNRs are less than or equal to those of the polar sequence and GA...
In Fig. 5(b), the optimum MWUB of the MWD sequence also leads to the corresponding minimum required SNRs less than or equal to those of GA algorithm and polar sequence. The MWD sequence has about 0.75 dB and 1.07 dB SNR gaps at $R=0.5625$ compared with the GA algorit...
In this paper, the construction methods based on MWD are proposed to improve the performance of polar codes under SCL decoding. We first prove that the ML performance can approach the MWUB as the SNR goes to infinity. Then, we design the ordered and nested MWD sequence to apply fast construction without channel inform...
The optimal Hider distribution over the leaf nodes can be found by a similar stochastic process in which the Hider starts at the root $O$ and at each branch node chooses a branch to enter according to a certain distribution. Of course, this is merely a mental calculation for the Hider, who is stationary in th...
That is, the value of the game with signals cannot be larger than that of the classic search game without signals. Since the value of this classic search game on a tree is equal to its total length $\mu$ (Gal, 1979), this must be an upper bound for the value of the search game with signals on a tree.
If the Hider lies in neither branch, any signal distribution may be used, as in this case the Searcher will return to node $j$ again after time equal to twice the length of the branches, regardless of her search method. As $p\rightarrow 1/2$, the signal becomes useless and the solution ...
This paper has addressed the question of how to optimally search for an adversarially hidden target on a tree network in the presence of unreliable signals. We have found optimal solutions for both the Searcher and the Hider that can be calculated recursively, and a closed form expression for the value of the game. Fu...
We call the strategy $[1,2]$ “follow” (go with the signal) and the strategy $[2,1]$ “opposite”. It is easy to see that the resulting matrix game is given by Table 1, where $V_{1}$ is the value of the game playe...
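Generically, a 2x2 zero-sum matrix game like the follow/opposite game can be valued in closed form. The sketch below (with hypothetical payoffs, not the actual entries of Table 1) returns a saddle-point value when one exists and the standard mixed-strategy value otherwise.

```python
from fractions import Fraction

def game_value_2x2(a, b, c, d):
    """Value of the 2x2 zero-sum game [[a, b], [c, d]] for the row (maximizing)
    player. If maximin == minimax there is a saddle point in pure strategies;
    otherwise the fully mixed value is (a*d - b*c) / (a + d - b - c)."""
    lower = max(min(a, b), min(c, d))   # maximin (row player's guarantee)
    upper = min(max(a, c), max(b, d))   # minimax (column player's guarantee)
    if lower == upper:                  # saddle point: pure strategies suffice
        return Fraction(lower)
    return Fraction(a * d - b * c, a + d - b - c)
```

For matching pennies [[1, -1], [-1, 1]] the saddle check fails and the mixed-strategy formula gives value 0, as expected.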
Additionally, we demonstrate the flexibility of the approach through example applications and by adapting to multiple and more complex modalities beyond chest X-ray. In contrast to other studies, we intentionally do not train from scratch and use small datasets to explore the feasibility of diffusion in low-data and lo...
We experiment with the number of sampling steps, the CFG scale, the number of images used to train embeddings, and the embedding vector size. To evaluate the impact of these parameters on generation quality, we compute the Fréchet Inception Distance using 1000 generated samples compared to 1000 real examples for each p...
Figure 1: The Textual Inversion fine-tuning process for diffusion models trains a text conditioning embedding for a new token using a small set of images while keeping the rest of the architecture frozen. We show that this allows the adaption of latent diffusion models to a variety of medical imaging modalities, using ...
Figure 5 in Appendix B shows the effect of the parameters studied in this section visually on a single random seed. For reference, Appendix C shows that directly generating images without applying textual inversion, by prompting the pre-trained model to generate prostate MRI scans, results in highly unrealistic images....
This process finds a vector in the text embedding space which optimally represents the concept. Practically, this is done by freezing the entire architecture apart from the embedding vector and performing backpropagation with a similarity loss, as illustrated in Figure 1.
To further validate CompoNeRF’s performance for more complex multi-object text inputs, we assess its performance using a lengthy sentence describing the color, texture, light, and relationships between scene components. Fig. 7 showcases our refined scene renderings originating from pre-trained source scenes including t...
We observe that a direct amalgamation of these components can manifest various artifacts: at the base of the lamp, incongruous shadows and reflections from the glass ball are notable, detracting from the authenticity of the scene’s ambiance. After composition, the reconfigured objects are adeptly integrated, achieving a ...
In Fig. 14, we present further results from the ablation study of our composition module. As outlined in our main manuscript, our preference for a density-based approach is due to its effective and precise calibration of global density. For example, the ’bedroom’ scene builds upon the discussion from Fig.2(b.2) in the ...
The complexity of a scene directly influences the required configuration of the parameters $\boldsymbol{\theta}_{g}$ within the composition module. In Figure 11, we experiment with varying the number of layers in the MLPs responsible for both den...
Pei and Zaïane (2006) present a compelling software system for generating two-dimensional clusters, which creates data sets with specified clustering difficulty “easy”, “medium” or “hard”; “easy” data consists only of spherical/convex clusters, whereas “medium” and “hard” data include curve segments and special shape...
MDCGen (Iglesias et al., 2019) is a feature-rich generator that supports many desiderata in cluster analysis, such as overlap control, different probability distributions, subspace clusters, and the ability to add noise points. In particular, it is nice to be able to place noise points away from the clusters, which i...
The popular scikit-learn library for machine learning (Pedregosa et al., 2011) offers several functions for creating synthetic clusters. Among these, some are aimed at reproducing canonical 2D toy data sets like concentric circles and nested moons (make_moons, make_circles), while others focus on sampling multivariat...
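A toy generator for one such canonical data set fits in a few lines; this is our own illustration of the kind of data make_circles produces, not scikit-learn's implementation.

```python
import math, random

def make_two_circles(n_per_class=100, noise=0.05, seed=0):
    """Sample two concentric noisy rings in 2D, a canonical case that defeats
    purely spherical/convex cluster models. Returns (points, labels)."""
    rng = random.Random(seed)
    X, y = [], []
    for label, radius in ((0, 1.0), (1, 0.5)):
        for _ in range(n_per_class):
            theta = rng.uniform(0.0, 2.0 * math.pi)
            X.append((radius * math.cos(theta) + rng.gauss(0.0, noise),
                      radius * math.sin(theta) + rng.gauss(0.0, noise)))
            y.append(label)
    return X, y
```

Varying the ring radii and the noise scale gives direct control over the overlap between the two classes, which is the kind of knob the generators above expose.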
Existing data generators do not cater directly to such high-level scenarios. Instead, the user must carefully tune simulation parameters to arrive at the desired scenarios (Steinley and Henson, 2005; Schubert and Zimek, 2019; Iglesias et al., 2019). While some generators make it easy to control the overlaps betw...
In the following overview, we mainly focus on general-purpose generators. However, the literature has also proposed more specialized solutions to fill specific needs in the community. For example, Beer et al. (2019) present a data generator for subspace clustering, Gan and Tao (2015) evaluate density-based clusteri...
The evaluation of trigger identification, event type classification, argument identification, and argument role classification tasks utilizes the F1-score metric, consistent with the previous studies Zhang et al. (2019); Wadden et al. (2019). A correct trigger classification prediction requires accurate trigger word an...
In this work, we study a more realistic setting of the event extraction task, namely the oracle-free event extraction, where no additional information beyond the context is required for event inference. To address this task, we propose a generation-based event extraction framework called COFFEE. Our COFFEE introduces a...
BART-Gen (Li et al., 2021) adopts a constrained generation mechanism, which necessitates the use of templates. We removed the template and the constrained decoding, thereby enabling the model to function. The trigger extraction performance of BART-Gen is not reported in our study due to an implementation error stemming...
Many prior studies formulate the event extraction task as a token-level classification problem, which extracts event triggers and arguments using sequence tagging models based on tailor-designed neural networks (Nguyen et al., 2016; Liu et al., 2018; Li et al., 2019; Yang et al., 2019; Wadden et al., 2019; Huang et al....
BART-Gen Li et al. (2021) is designed for document-level event extraction that can deal with the long-distance dependence issue and co-reference problem. Constrained generation is applied for argument extraction that requires event-specific templates.
This improvement is mainly due to the $L^{q}$-embedding property of the interpolation space $[\mathcal{H}]^{s}$ proved in Theorem 5 and a truncation ...
Since the convergence rates and the minimax optimality of spectral algorithms in the well-specified case are clear, a large body of literature has studied misspecified spectral algorithms. Among these works, Steinwart et al. (2009); Dicker et al. (2017); Pillaud-Vivien et al. (2018); Fischer and Steinwart (2020); Celi...
As mentioned in Fischer and Steinwart (2020), the empirical process and the integral operator techniques are the two main techniques used to derive the learning rates of kernel methods. Steinwart et al. (2009) first introduced the embedding property of RKHS for the empirical process technique. Fischer and Steinwart (...
The outline of the rest of the paper is as follows. In Section 2, we introduce basic concepts, including prior knowledge of RKHS, integral operators, and the definition of the interpolation space. In addition, we formally define the spectral algorithm, which is the main interest of this paper, and provide three example...
Compared with the line of work which considers the embedding index (Steinwart and Christmann, 2008; Pillaud-Vivien et al., 2018; Fischer and Steinwart, 2020, etc.), this paper removes the boundedness assumption, i.e., $\|f_{\rho}^{*}\|_{L^{\infty}(\mathcal{X},\mu)}\leq B_{\infty}<\infty$...
Recent years have seen a growing need for the conversion of real-world objects to computerized models [35, 9] across several domains, such as digital preservation of cultural heritage [27] and manufacturing of mechanical parts for industry [21]. This need has given rise to a range of modern data acquisition techniques...
The inherent dependency of surface reconstruction methods on surface normals makes the visual perceptual quality of a point cloud an indirect yet important aspect of any mesh processing pipeline [7]. Although it is difficult to quantify this visual degradation in the case of point cloud simplification methods, one can...
Overall, these results show that our approach provides a computationally efficient option for performing point cloud simplification in settings where the user wishes to strike a balance between preserving high fidelity around sharp features in the cloud, and ensuring that the simplified cloud covers the manifold defin...
In this section we will introduce a number of existing point cloud simplification techniques, with a particular focus on works which have a feature-preserving element to their approach. Some of the earliest curvature-sensitive simplification techniques were proposed by Pauly et al. [26] and Moenning et al. [25]. The fo...
In order to evaluate the performance of our method in comparison to other simplification techniques, we firstly use each simplified point cloud obtained from three object level point clouds to form simplified meshes, using screened Poisson surface reconstruction [17]. We can then compute the reconstruction errors betwe...
Training on full resolution (FullRes): We implemented a distributed version of the proposed architecture that splits the task across two GPUs if necessary. This allows for training directly on full resolution ($256^{3}$) data, given that the expensive speciali...
For three spatial dimensions (i.e. 3D) this means that reducing the input size from $256^{3}$ to $128^{3}$ results in a reduction
of $1\times 1\times 1\,\text{mm}^{3}$, resulting in a total scan size of $240\times 240\times 155$, which we padded to a size of $256\times 256\times 256$...
$256^{3}$ images, we distribute the model over 2 GPUs. The methods HalfRes and PatchDDM were trained on one GPU only. The optimizer we used was AdamW [Loshchilov and Hutter(2017)] with the default parameters.
(Convex & Non-Convex Applicability) From the above convergence analyses, the evaluation of FedAgg relies exclusively on the smoothness assumption, which indicates that the effectiveness of our results extends beyond the convexity of the function in question. Specifically, we conduct convergence analyses of FedAgg...
To demonstrate the effectiveness of our proposed algorithm and to investigate whether the enhancements introduced by FedAgg remain consistent as the ratio of participating clients increases, we first partition the four benchmark datasets (i.e., MNIST, EMNIST-L, CIFAR-10, and CIFAR-100) into 100 clients and randomly se...
Nevertheless, the potential of adaptive learning-rate-based algorithms in FL systems remains largely underexplored. Current literature often undervalues the pivotal role of the learning rate, a hyperparameter that requires meticulous tuning to accelerate convergence and improve FL model performance. In Fig. 2,...
We systematically conduct numerical experiments designed to elucidate the influence of the aggregation weight $\alpha$ in the objective function presented in Eq. (13) on model efficacy, and to facilitate the practical application and promotion of FedAgg. As depicted in Fig. 12, the decrement of the hy...
In this section, we conduct extensive experiments on the MNIST, CIFAR-10, EMNIST-L, and CIFAR-100 datasets to verify the performance of our proposed adaptive learning FL algorithm FedAgg. We first introduce the experiment setup, including the experiment platform, datasets, data partition, local training model, baseline ...
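The data-partition and client-sampling setup described above can be sketched as follows. This is an illustrative IID partition with stdlib tools only; the paper's actual partitioning scheme, dataset sizes, and sampling ratio are assumptions here:

```python
import random

def partition_indices(num_samples, num_clients=100, seed=0):
    """Shuffle sample indices and deal them round-robin across clients (IID sketch)."""
    rng = random.Random(seed)
    idx = list(range(num_samples))
    rng.shuffle(idx)
    return [idx[i::num_clients] for i in range(num_clients)]

def sample_participants(num_clients=100, ratio=0.1, seed=0):
    """Randomly select a fraction of clients to participate in a round."""
    rng = random.Random(seed)
    k = max(1, int(num_clients * ratio))
    return rng.sample(range(num_clients), k)

clients = partition_indices(60000)            # e.g. the MNIST training set
active = sample_participants(ratio=0.1)       # 10% participation
print(len(clients), len(active))  # 100 10
```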
D
The gating mechanism is initialized to zero and controls the feature interaction between prompt and word tokens during the attention calculation. This strategy first preserves the original knowledge in LLaMA, and then progressively injects the new instructional signals during training.
Figure 1: Characteristics of LLaMA-Adapter. Our lightweight adaption method efficiently fine-tunes LLaMA (Touvron et al., 2023) 7B model with only 1.2M learnable parameters within one hour, which exhibits superior instruction-following and multi-modal reasoning capacity.
Figure 3: Multi-modal LLaMA-Adapter. By connecting a pre-trained image encoder, LLaMA-Adapter can be extended to a multi-modal LLM for image-conditioned instruction following. Given an image input, we element-wisely add the image tokens with adaption prompts, and utilize our zero-initialized attention mechanism to inje...
As a lightweight plug-and-play module, LLaMA-Adapter enjoys superior training efficiency with only 1.2M parameters, 4.9M storage, and one-hour training. This enables more efficient storage of large-scale language models on mobile devices. LLaMA-Adapter’s efficiency advantages can be further revealed by multi-node train...
For better training stability and final performance, we introduce the zero-initialized attention mechanism with a learnable gating factor, which increasingly incorporates instructional signals, while preserving the pre-trained knowledge in LLaMA. With only 1.2M parameters and one-hour training, our approach effectively...
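A minimal numerical sketch of this zero-initialized gating, simplified to a single attention row (the paper's formulation operates on full attention matrices; the function and variable names are illustrative):

```python
import numpy as np

def gated_prompt_attention(word_scores, prompt_scores, gate):
    """Gate the adaption-prompt attention with tanh(gate); gate starts at 0,
    so prompts contribute nothing at initialization and knowledge in the
    pre-trained model is preserved."""
    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()
    attn_words = softmax(word_scores)                 # ordinary attention part
    attn_prompt = np.tanh(gate) * softmax(prompt_scores)  # gated prompt part
    return np.concatenate([attn_prompt, attn_words])

gate = 0.0  # zero-initialized: output reduces to the original attention
out = gated_prompt_attention(np.array([1.0, 2.0]), np.array([0.5]), gate)
print(out[0])  # 0.0 -- the prompt token is fully suppressed at init
```

As training increases the gate, tanh(gate) smoothly ramps the instructional signal in.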
A
Then the pre-processed video together with learnable group tokens are fed into the video encoder. Specifically, group tokens aggregate semantically similar video tokens via grouping blocks and are then aligned with object concepts by spatial grounding. It promotes region-object groundingness, which indicates the alignm...
Video-language pre-training typically follows the pipeline: (1) encoding video and text pairs into latent representations, (2) modality fusion, and (3) pre-training on specific objectives. Existing methods typically optimize these three components in the pre-training pipeline by designing expressive encoders (Bain et a...
Videos are pre-processed with the cut-and-paste operation, inspired by  (Zhang et al., 2022; Yun et al., 2019), i.e., pasting one clip in a video onto the other background video, to explicitly introduce temporal scene changes. We further adopt grouping blocks (Xu et al., 2022; Yu et al., 2022) to aggregate semantically...
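The cut-and-paste operation above can be sketched on raw video tensors. This is a simplified reading (one rectangular clip pasted at a fixed spatio-temporal offset); the paper's exact operation may differ:

```python
import numpy as np

def cut_and_paste(background, clip, t0, y0, x0):
    """Paste a (T, H, W) foreground clip into a background video starting at
    frame t0 and spatial offset (y0, x0), creating an artificial scene change."""
    out = background.copy()
    t, h, w = clip.shape
    out[t0:t0 + t, y0:y0 + h, x0:x0 + w] = clip
    return out

bg = np.zeros((16, 32, 32))   # 16-frame background video
fg = np.ones((4, 8, 8))       # short foreground clip
mixed = cut_and_paste(bg, fg, t0=6, y0=12, x0=12)
print(mixed.sum())  # 4 * 8 * 8 = 256.0 pasted voxels
```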
In contrast to previous methods where regions are extracted with pre-trained object detectors (Cai et al., 2022; Li et al., 2022; Yan et al., 2021), these learnable group tokens can cluster and organize semantically similar regions in a self-supervised manner, which is more effective and reduces the artifacts of any de...
Representations learned from large scale noisy datasets such as HowTo100M (Miech et al., 2019), WebVid (Bain et al., 2021), and VideoCC (Nagrani et al., 2022) have demonstrated great potentials in adapting to downstream tasks, including but not limited to text-video retrieval, video question answering, and video captio...
C
They further found that two models trained on different image datasets (CIFAR-10 and CIFAR-100, Krizhevsky et al. 2009) learn representations that are similar in the shallow layers. Similar findings were noted for language models by Wu et al. (2020). The latter also evaluated the effect of fine-tuning on language model...
For instance, CKA suggests that the BERT language model Devlin et al. (2019) is more similar to vision models than GPT-2 Radford et al. (2019). Our analysis indicates that both BERT and GPT-2 create representations that are equally similar to the vision ones.
Kornblith et al. (2019) and Morcos et al. (2018) found that increasing the model’s layer width results in more similar representations between models. Raghu et al. (2017) provided an interpretation of the learning process by examining how similar representations were during the training process compared to final repres...
A recent line of work analyzes internal representations by comparing two sets of representations, for instance from two different models. The choice of similarity measure is crucial and much work has been devoted to developing various such measures Raghu et al. (2017); Morcos et al. (2018); Kornblith et al. (2019); Wu ...
B
We report the performance of our distortion estimator to clarify how it differs from Wakai et al.'s method [59]. Our distortion estimator is composed of Wakai et al.'s calibration network [59] without the tilt and roll angle regressors, as shown in Figure 11. We optimized the distortion...
To validate the accuracy of the camera parameters, we compared our method with conventional methods that estimate both rotation and distortion. Following [59], we evaluated the mean absolute error and reprojection error (REPE). Our method achieved the lowest mean absolute angle error and REPE of the methods listed in T...
To analyze the network activation of our VP estimator, we visualized the activation of the middle layers using Eigen-CAM [4]. For the visualization, ResNet-50 [51] backbones were used for simplicity because HRNet [53] backbones have branched structures. The ResNet-50 backbones without the head layer consist of 49 conv...
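The Eigen-CAM visualization used above can be sketched as projecting a layer's activations onto their first principal component (a simplified reading of [4]; tensor shapes and names are illustrative):

```python
import numpy as np

def eigen_cam(activations):
    """Eigen-CAM-style heatmap: project each spatial position's feature
    vector onto the first principal component of the activation matrix.

    activations: (H, W, C) feature tensor from a middle layer.
    """
    h, w, c = activations.shape
    flat = activations.reshape(-1, c)          # (H*W, C)
    flat = flat - flat.mean(axis=0)            # center features
    _, _, vt = np.linalg.svd(flat, full_matrices=False)
    cam = flat @ vt[0]                         # projection on 1st PC
    return cam.reshape(h, w)

cam = eigen_cam(np.random.default_rng(0).random((8, 8, 16)))
print(cam.shape)  # (8, 8)
```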
We analyzed the results of our method using HRNet-W32 to describe the error factor; that is, the calibration errors were caused by the distortion estimator and VP estimator. To evaluate this error factor, we performed calibration with ground-truth values for distortion parameters and image coordinates of VP/ADPs from t...
To clarify the performance of the HRNet [53] backbones, we also evaluated our method using the ResNet [51] backbones, which are one of the baseline backbones used for various tasks. Table 14 shows the results of our method using either the ResNet or HRNet backbones. With respect to rotation errors and reprojection err...
D
$\bigl(\text{resp. } |x(t)|_{\mathcal{A}}\leq M\alpha^{t}|x(0)|_{\mathcal{A}},\ \forall t\in\mathbb{N}\bigr).$
The attractor $\mathcal{A}$ in (5) is $\alpha$-UGES (uniformly globally exponentially stable with rate $\alpha>0$) for system (1), (2) if there exists $M>0$ such that any solution $t\mapsto x(t)$ satisfies
Consider the continuous-time (resp. discrete-time) system in (1), (2), the attractor $\mathcal{A}$ in (5) and the parameter $\alpha^{\star}\geq 0$ (resp. $\alpha^{\star}\in(0,1]$)...
Given an assigned convergence rate $\alpha^{\star}\geq 0$ of the solutions towards the attractor $\mathcal{A}$, we state a list of necessary and sufficient conditions for (continuous- or discrete-time) $\alpha^{\star}$...
For the continuous-time (resp. discrete-time) linear system (1), (2), $\alpha$-synchronization holds if there exist $M>0$ and rate $\alpha>0$ (resp. $\alpha\in(0,1)$) such that, for any initial condition, every sub-system satisfies
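The bound referred to above presumably takes the standard exponential-stability form, reconstructed here from the $\alpha$-UGES definition in the surrounding text (notation as in the excerpt):

```latex
|x(t)|_{\mathcal{A}} \le M\,e^{-\alpha t}\,|x(0)|_{\mathcal{A}}
\quad \forall t \ge 0
\qquad \bigl(\text{resp. } |x(t)|_{\mathcal{A}} \le M\,\alpha^{t}\,|x(0)|_{\mathcal{A}},\ \forall t \in \mathbb{N}\bigr).
```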
D
Here we describe the method and results for testing $H_{RQ_{3}}$: that changes in ONNX operator sets are correlated with increased defects.
(1) DL model converters lag behind ONNX releases (this might cause a failure to be mis-attributed to another release, i.e., offset in time); (2) Failures might be in any ONNX available release, not just the most recent (possibly inflating the failure rate of a given release);
This causal asymmetry may be attributable to differences in the requirements of DL model converters and DL compilers. The purpose of DL model converters is interoperability (section 2.1), making compatibility failures a focus and reducing the need for optimizations.
We also measure the relationship by assessing the correlation between the number of changes in an ONNX release and the number of failures between that release and the next. We use the Spearman correlation, a commonly used and robust metric for measuring a monotonic relationship between two variables (Fenton and Bieman...
For the reasons noted above, we studied the DL model converters from PyTorch and TensorFlow into the ONNX IR (torch.onnx and tf2onnx, respectively). We note that among ONNX model converters, those for PyTorch and TensorFlow have the most failure data available on GitHub (Table 6).
A
The Alphanumeric Viewer is a component that monitors the state of specific variables of the system, e.g. sensor measurements, values corresponding to state estimation, references for controllers, etc. The information is distributed in different panes to facilitate the search for a specific variable of the system. On th...
Table IV shows the components used in both simulation and real experiments. For this mission, since we will use HITL simulation, all the modules remained the same in both experiments. In this case, the Web GUI is the component in charge of generating and uploading the mission that each drone is going to perform. We use...
In this case, the objective is to present all the information in a graphical and simple way so that non-developers can use and interact with the aerial system. In this category of components, Aerostack2 also provides a Graphical User Interface (GUI) to use the software framework through a web-based application. This t...
In addition, each software component that implements a behavior encapsulates the details of the algorithms used, providing a uniform interface that is common to all behaviors. This feature provides simplicity in describing mission plans. Using behaviors, a mission plan is expressed as a controlled sequence of activati...
The top layer of the proposed architecture is mainly dedicated to the user interface. The components of this layer are designed to ease the definition of a mission to a human operator or to help with the supervision of the system. Regarding the level of user expertise, we can differentiate two blocks of tools:
B
Several works have analyzed whether LLMs can pass tests intended to evaluate human psychological skills (Binz and Schulz, 2023, Macmillan-Scott and Musolesi, 2024, Stevenson et al., 2022a), sometimes with promising results (Kosinski, 2024, Lampinen et al., 2024). However, according to the best-supported neuroscientific...
In conclusion, while LLMs are capable of producing artifacts that are valuable, achieving P- or H-novelty and surprise appears to be more challenging. It is possible to argue that LLMs may be deemed able to generate creative products if we assume the definition of combinatorial creativity. To achieve transformational c...
In this paper, we have discussed whether or not LLMs can actually be deemed as creative; we started by considering Boden’s three criteria, i.e., value, novelty, and surprise. While LLMs are capable of value and a weak version of novelty and surprise, their inner autoregressive nature seems to prevent them from reaching...
In conclusion, all the properties listed above require some forms of consciousness and self-awareness, which are difficult to define in themselves and are related to the hard problem introduced before. Creative-person qualities in generative AI might eventually be the ultimate step in achieving human-like intelligence.
However, paraphrasing Chalmers (Chalmers, 1996), these appear as easy problems to solve in order to achieve creativity, since solutions to them can be identified by taking into consideration the underlying training and inference processes. The hard problem in machine creativity is about the intentionality and the self-...
C
where $\omega$ is a smoothing hyper-parameter for modulating the exponential function. ${\rm rank}(s_{i,j}^{+})$ or ${\rm rank}(s_{i,j}^{-})$...
Nonetheless, these works mainly focus on the shifts between the source and target domains, and neglect the characteristics of the detection task. For instance, the discrimination of foreground objects from the background can naturally serve as auxiliary supervision for the target data. Besides, the relatively smaller size of the nodul...
While computing the contrastive loss using Eq. (1), the instances generated by the pulmonary nodule detector need to be divided into two classes, the nodules and other instances. Due to the significant domain shift between the source and target domains, directly relying on the classification outcomes from the pulmonary...
In order to further reduce the negative impact of the pseudo nodule noise, we propose to introduce an additional unsupervised constraint for the student model training and design a weighted entropy (WE) loss. Considering the success of the entropy loss in dealing with the unlabeled data in the semi-supervised and unsu...
First, as elaborated in Sec. III-A, the pre-trained source model is adapted to the target domain via instance-level (nodule and non-nodule) contrastive learning for generating accurate pseudo nodules. This instance-level foreground-background discrimination not only avoids the requirement of target annotations, but als...
B
$\bm{\psi}^{\min}\leq\mathbf{E}^{T}\bm{\theta}\leq\bm{\psi}^{\max}.$
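A feasibility check for this box constraint on $\mathbf{E}^{T}\bm{\theta}$ can be sketched as follows; the matrix, decision vector, and bounds below are illustrative placeholders, not values from the paper:

```python
import numpy as np

def is_feasible(E, theta, psi_min, psi_max, tol=1e-9):
    """Check the box constraint psi_min <= E^T theta <= psi_max elementwise."""
    v = E.T @ theta
    return bool(np.all(v >= psi_min - tol) and np.all(v <= psi_max + tol))

E = np.array([[1.0, 0.0], [0.0, 2.0]])
theta = np.array([0.5, 0.25])
print(is_feasible(E, theta, np.array([0.0, 0.0]), np.array([1.0, 1.0])))  # True
```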
Figure 1: The architecture used for training in the proposed algorithm. When making decisions in real time, we only need the network $\phi^{0}$ to predict the first-stage decisions from the given load forecast.
To be specific, $\phi^{R}$ is learning the optimal value of the second-stage DCOPF problem and must satisfy all the constraints in the optimization problem; otherwise, the estimated second-stage cost may have a large deviation from the true...
In this paper, we overcome the challenge in policy design and solve two-stage DCOPF problems by presenting a neural network (NN)-based architecture that is computationally efficient and also guarantees the feasibility of learned solutions. In particular, our architecture involves two neural networks, one each for the f...
When the learning algorithm is used in practice, i.e., in the prediction phase, just the network $\phi^{0}$ is required to predict the first-stage decisions from a nominal load value. The reason why we need a second network $\phi^{R}$...
A
Experiment Details.  We investigate the different priors on CIFAR10. For the BNN, we follow Izmailov et al. (2021b) and use a ResNet-20-FRN with a $\mathcal{N}(0,1/5)$ prior over the parameters. For the self-supervised BNN, we learn a base encoder of the same architecture with...
Graphical Evaluation.  First, we visualise the BNN and self-supervised BNN prior predictive (Fig. 1 and 3). The standard BNN prior predictive reflects a belief that all three image pair groups are similarly likely to have the same label, and thus does not capture semantic information well. In contrast, the self-superv...
Figure 1: Self-Supervised Bayesian Neural Networks. (a) Pre-training in self-supervised BNNs corresponds to unsupervised prior learning. We learn a model with a prior distribution such that augmented images likely have the same label and distinct images likely have different labels under the prior predictive. (b) Self...
Figure 3: BNN Prior Predictives. We investigate prior predictives by computing the probability $\rho$ that particular image pairs have the same label under the prior, and examining the distribution of $\rho$ across different sets of image pairs. We consider three sets of differing semantic similarity:...
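The quantity $\rho$ above can be estimated by Monte Carlo over prior samples. Here is a toy sketch with a one-parameter sign classifier standing in for the BNN (everything below is illustrative, not the paper's model):

```python
import random

def same_label_prob(predict, x1, x2, prior_samples):
    """MC estimate of rho: the probability that x1 and x2 receive the
    same label under the prior predictive."""
    hits = sum(1 for w in prior_samples if predict(w, x1) == predict(w, x2))
    return hits / len(prior_samples)

# toy model: label = sign of w * x, with w drawn from a N(0, 1) prior
rng = random.Random(0)
ws = [rng.gauss(0, 1) for _ in range(1000)]
predict = lambda w, x: (w * x) > 0

rho_same = same_label_prob(predict, 1.0, 2.0, ws)   # same-sign inputs: rho near 1
rho_diff = same_label_prob(predict, 1.0, -2.0, ws)  # opposite-sign inputs: rho near 0
print(rho_same, rho_diff)
```

A semantically informed prior is one for which $\rho$ is high on similar pairs and low on dissimilar ones, which is exactly what the figure inspects.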
A
is a complete invariant of a stationary ergodic Polish-valued stochastic process $X=(X_{t}\colon t\in\mathbb{Z})$. In other words, it is a complete invariant of a metric measure-preserving dynamical system $(S^{\mathbb{Z}},\rho_{S},\mu_{X}$...
A consequence of the preceding theorems is that one can often characterize the geometric features, as captured by shape descriptors, of a Polish-valued process $X$ via the ball volume processes of the fidis. This is convenient for hypothesis testing. For example, note that the ball volume corresponding to the fi...
The assumption in subsection 3.2 on $\mu_{X}$ is satisfied by any Borel measure on a variety of underlying metric spaces that arise in geometric data analysis, including doubling metric spaces, many Banach spaces, and any Hilbert space with the norm me...
Although it is possible to apply this characterization directly for inference, we can often use a more direct approach. Specifically, our second result establishes general conditions under which the geometric features of a stochastic process can in fact be fully characterized by the process of ball volumes (subsection ...
For the corresponding mmpds of a stable shape descriptor, another equivalent characterization is available. Indeed, under more restrictive assumptions on the measure, an mmpds can equivalently be completely characterized by the process of ball volumes of the finite-dimensional distributions (fidis), via Kolmogorov's extens...
D