| context (string, 250–4.37k chars) | A (string, 250–8.2k) | B (string, 250–4.23k) | C (string, 250–4.99k) | D (string, 250–3.54k) | label (4 classes) |
|---|---|---|---|---|---|
$\tau(G)=\frac{1}{|V|}\prod_{i=2}^{|V|}\lambda_{i}=\frac{1}{2|E|}\prod_{i=1}^{|V|}d_{i}\prod_{i=2}^{|V|}\bar{\lambda}_{i}$ ... | Being motivated by the success of the Wiener index, nearly four decades later, in 1993, Klein and Randić put forward a novel structure-descriptor topological index, known as the Kirchhoff index [11]. The Kirchhoff index of a graph $G$ is calculated using the formula $Kf(G)=\frac{1}{2}\sum_{i=1}^{|V|}\sum_{j=1}^{|V|}r_{ij}$ ... | As both of the structural descriptors have important applications in graph theory, networking systems, molecular chemistry, and other related fields, researchers have taken a strong interest in the Kirchhoff indices and degree-Kirchhoff indices of graphs in recent years. Finding the explicit formulae for the Kirchhoff ... | We obtain the formulae for the total count of spanning trees of $P_{n}$ and $P'_{n}$ using Theorem 4 as follows.
|
The formulae of the degree-Kirchhoff index of linear hexagonal chains [9], hexagonal Möbius chains [16], linear crossed hexagonal chains [18], linear polyomino chains [8], cylinder phenylene chains [17], random polygonal chains [13], generalized phenylenes [25], cylinder, and Möbius octagonal chain [15], linear pentag... | B |
In their case this restriction is substantive, rather than one that sacrifices no generality, reflecting the differing structure of the incentive constraints that arise in screening and moral hazard problems.
Di Tillio et al., (2017) do not have counterparts of our findings that the number of payment functions in an optimal amb... | An implication of our results is that in the context of moral hazard problems, ambiguity and max-min utility drive optimal designs towards simplicity. We thus join a literature, with Holmström and Milgrom, (1987) as a key early entry, endeavoring to explain why actual contracts in moral hazard settings tend to be simpl... | A second branch of the literature examines settings in which the agent has ambiguous beliefs that the principal can potentially exploit.
Beauchêne et al., (2019) and Cheng, (2020) examine Bayesian persuasion problems in which the sender exploits the ambiguity aversion of the receiver. Bodoh-Creed, (2012) and Di Tillio ... | Dai and Toikka, (2022) examine a principal who writes contracts to shape the actions of a team of agents, with the principal holding ambiguous beliefs about the actions available to the agents.
Dütting et al., (2019) examine moral hazard problems in which the principal has ambiguous beliefs about the distribution of ou... |
A flourishing literature examines design problems in the face of non-Bayesian uncertainty. One branch of this literature examines models in which the principal entertains non-Bayesian uncertainty about the agents. Bergemann and Schlag, (2011) examine monopoly pricing on the part of a principal with ambiguous beliefs a... | A |
Most interestingly, our mechanism and that of [FS17] are fairly similar – both rely on splitting the dataset and, roughly speaking, computing an approximate median of the queries’ value on each group – but the analyses are wholly different. Their analysis is based on differential privacy. In contrast, Theorem 4 is a s... |
Subsampling has been thoroughly explored in the context of privacy amplification (see e.g. [BBG18, ZW19] or the book chapter [Ste22]): if $\mathcal{A}$ is a differentially private algorithm, running $\mathcal{A}$ on a random subset of the data gives an algorithm with even better privacy p... | We begin by sketching a proof of our main result in the simplest setting. Specifically, we’ll show that if an analyst asks $\tilde{O}(n^{2})$ subsampling queries, each mapping $X^{1}$ ... |
Most interestingly, our mechanism and that of [FS17] are fairly similar – both rely on splitting the dataset and, roughly speaking, computing an approximate median of the queries’ value on each group – but the analyses are wholly different. Their analysis is based on differential privacy. In contrast, Theorem 4 is a s... | Fish, Reyzin, and Rubinstein explored the use of subsampling to speed up classical mechanisms for adaptive data analysis [FRR20]. For example, their mechanism for answering a statistical query $\varphi$ computes $\varphi$ on a random subsample of the data and adds Laplacian noise to that result. This... | A |
Let $F$ be an outerplanar graph with vertex $u\in V(F)$. If $u$ is not isolated then there exists a neighbour $v\in N_{F}(u)$ such that the graph obtained fr... | If otherwise $v$ is among the vertices at which $\boldsymbol{M}'_{1}$ and $\boldsymbol{M}'_{2}$ ... |
A graph $F$ is outerplanar if it does not have $K_{4}$ or $K_{2,3}$ as a minor. Equivalently, it is outerplanar if it has a planar drawing such that all its vertices lie on the same fac... | Let $F$ be an outerplanar graph with vertex $u\in V(F)$. If $u$ is not isolated then there exists a neighbour $v\in N_{F}(u)$ such that the graph obtained fr... | Take an outerplanar embedding of $F$ which has some face incident to all the vertices, consider some edge incident to $u$ that is incident to this face, and subdivide that edge.
Since the vertex created by subdivision is incident to the outer face, | D |
We recruited 32 participants (17 males, 15 females) via mailing lists and word of mouth, who are mainly from STEM (Science, Technology, Engineering, and Mathematics) fields and business schools, between 19 to 37 years old ($M=26$, $SD=3.57$) ... | The participant should start from position C and move towards the Table in position D to deliver the yellow cup, during which a non-contact interaction between the participant and the Spot was recorded by the motion capture cameras. In the non-stationary conditions, the Spot robot started moving from B to A the moment ... |
In our research, the motion capture system only tracks the position and orientation of objects instead of their motions. As a result, marker rigid bodies take the place of marker skeletons, which are more common in motion capture. As seen in Figure 2, rigid bodies are formed by 4 or more markers on the same pla... |
The lab is equipped with an OptiTrack motion capture system, which functions on the outside-in [39] tracking principle. Six motion cameras are mounted around the experiment zone to take 2D aligned pictures of passive markers on objects, using the positions of retroreflective markers on 2D frames to calculate the r... | At the setup stage, six OptiTrack motion cameras were mounted around the experiment zone to capture the in-situ position of the marker rigid bodies in sight. The position and orientation information of the rigid bodies was multicast on a local network with ROS built-in UDP communication. The origin of the OptiTrack... | D |
$\theta_{3}=\frac{\pi}{2}-\theta_{1}+\theta_{2}-\theta_{o}$ ... | Cyclic Coordinate Descent (CCD) is an iterative algorithm used to solve inverse kinematics problems. Yotchon et al. [13] proposed a hybrid approach combining the CCD method with a differential evolution algorithm, a metaheuristic optimization technique, to tackle inverse kinematics challenges. This combined method reli... |
This subsection explores the use of neural networks to find inverse kinematics (IK) solutions for the human leg. Shah et al. [22] applied deep artificial neural networks to solve the IK problem for a 5-axis serial link robotic manipulator. They achieved an accuracy where the deviation between the end effector and the ... |
The results of the inverse kinematics simulation for the lower limbs using the Cyclic Coordinate Descent (CCD) method are shown in Figure 5. To determine the position error illustrated in Figure 6, we used these results along with the forward kinematics described in Equation (5) to generate the end effector trajectory... |
A method to address the pseudo-inverse problem is the Levenberg-Marquardt Damped Least Squares (LMDLS) method. Wampler et al. [16] proposed an approach to determine the optimal damping factor, which balances the angular joint velocities with the tracking error. This approach involves finding the joint angular error ve... | A |
The comparability graph of the poset in Proposition 2.4 shows that for any fixed $h$, this bound has the right order of magnitude in $\varepsilon$. As in the case of posets, we can also use the test for $K_{\chi(\mathcal{F})}$ en... | The goal of this paper is to study testability of finite posets as special digraphs. By a poset, we mean a set equipped with a partial order $\prec$ that is anti-reflexive and transitive. Alon, Ben-Eliezer and Fischer [1] proved that hereditary (closed under induced subgraphs) classes of ordered graphs are stro... | Alon and Shapira [4] proved that every monotone property of undirected graphs (that is, closed under the removal of edges and vertices) is strongly testable; see Lovász and Szegedy for an analytic approach [22], while Rödl and Schacht generalized this to hypergraphs [23], see also Austin and Tao [8]. Similar results ha... |
Panna Tímea Fekete’s Project No. 1016492 has been implemented with the support provided by the Ministry of Culture and Innovation of Hungary from the National Research, Development and Innovation Fund, financed under the KDP-2020 funding scheme and supported by the ERC Synergy Grant No. 810115 – DYNASNET. | The relationship between local and global properties of structures is a central theme in combinatorics and computer science. Since the work of Rubinfeld and Sudan [25], testing properties by sampling a small number of elements has been an emerging research area.
A classical result of this kind is the triangle removal lemma ... | C |
This is also because the required tasks depend on the type of source input. Knowledge extraction is commonly applied to unstructured data inputs like text and may not be needed for structured data, e.g. from databases or other knowledge graphs. Furthermore, the entity linking part of knowledge extraction can make an ad... | Quality assurance is important not only for the resulting KG as an outcome of the KG construction process but also within the different construction tasks, such as selecting good-quality sources (Section 3.1.2), data cleaning for acquired data, knowledge extraction, ontology evolution or entity fusion.
The data cleanin... | The steps of Quality Assurance and KG completion to improve the current version of the KG are not needed for every KG update but may be executed asynchronously, e.g., within separate pipelines (although QA actions such as data cleaning also apply to individual tasks).
Furthermore, data and metadata management play a sp... | Quality Assurance.
Quality assurance is a cross-cutting topic playing an important role throughout the whole KG construction process. Quality problems in the KG can be multi-faceted, relating to the ontological consistency, the data quality of entities and relations (comprehensiveness), or domain coverage. The coverage ... | A benchmark could be based on settings similar to those for the creation of specific KGs discussed in Section 4, aiming at the initial construction and incremental update of either a domain-specific or cross-domain KG from a defined set of data sources of different kinds. The KG ontology and the KG data model (RDF or proper... | B |
Initially, the set of waypoints required to generate a trajectory in task space is assumed to be available from the task planner. The trajectory is then planned using the minimum jerk criterion to ensure smooth acceleration of the joints, thereby reducing vibrations and avoiding resonance frequencies. For this simulati... |
In this section, the Forward Kinematics (FK) of the lower limbs depicted in Figure 1 is established using dual quaternions. FK involves computing the positions and orientations of the end-effector in task space from the axes and angles of the joint rotations. The lower limb is decomposed into four segments: the pelv... | Afterward, the inverse kinematics (IK) of the lower limb is computed using a multi-layer perceptron trained with the Levenberg-Marquardt backpropagation algorithm, utilizing a dataset of 400,000 samples. The network architecture is illustrated in Figure 4, featuring a two-layer feed-forward structure comprising a hidde... |
This section focuses on the dynamic description of the lower limb shown in Figure 2 using the Dual Quaternion-based recursive Newton-Euler method. This method involves calculating the velocities and accelerations of the center of mass of each link, known as twists, based on the positions, velocities, and accelerations... | In this paper, the dual quaternion-based theory is applied to the kinematics and dynamics study of the 7-DOF human lower limbs in 3D space. Subsequently, the artificial neural networks method is used to solve the inverse kinematics problem. The efficiency of the artificial neural networks method is verified using the j... | B |
We note that while we proposed to approximate the “cheap” part as well in Section 3, one other theoretically viable approach is to keep it intact and approximately solve a “proximal type” problem involving $h$; this will lead to replacing $L$ by $\delta$, but the subproblem is even more diffi... | We consider the variance-reduced cubic Newton method from (Zhou et al., 2019) (referred to as “full VR”), its lazy version where we do not update the snapshot Hessian (“Lazy VR”), the stochastic Cubic Newton method (“SCN”),
the Cubic Newton algorithm (“CN”), Gradient Descent with line search (“GD”) and Stochastic Gradi... | In this work, we proposed a general theory for using stochastic and auxiliary information in the context of the Cubically regularized Newton method. Our theory encapsulates the classical stochastic methods, as well as Variance Reduction and the methods with the Lazy Hessian updates.
|
Figure 4 shows that compared to other second-order methods, “Lazy VR” has considerable time and computation savings. It also performs closely to gradient descent with line search, which performs very well in this case. Figure 5 shows the same experiment for larger dimensions; most importantly, we see that the gap betwe... |
To address these challenges, we can take into account second-order information (the Hessian matrix) and apply Newton’s method (see, e.g., (Nesterov, 2018)). Among the many versions of this algorithm, the Cubic Newton method (Nesterov & Polyak, 2006) is one of the most theoretically established. With the Cubic Newton m... | B |
In practice, multiple network operators co-exist in a given geographical area, each operating in a different frequency band. As a consequence, at a given point in time, multiple UEs are served by different operators in the system. In such a scenario, if an IRS is optimized to cater to the needs of one of the operators, i... | We derive the ergodic sum spectral efficiencies (SE) of the two operators as a function of the number of IRS elements, under round-robin scheduling of UEs. We show that the ergodic sum-SE scales quadratically and linearly with the number of IRS elements for the in-band and OOB networks, respectively, even when the OOB ... | which represents the difference in the SNR/channel gain at a UE $q$ (OOB-UE) served by BS-Y with and without the IRS in the environment. In Fig. 4, we plot the CCDF of $Z^{(Y)}_{N}$ ... |
In order to study the impact on the OOB performance, we consider the scheduling of UEs in a round-robin (RR) fashion at both BS-X and BS-Y. We note that the performance under opportunistic scheduling at either or both BSs can also be derived along similar lines, e.g., following the approach in [7]. Since the BSs are e... | We provide an exact characterization of the complementary cumulative distribution function (CCDF) of the difference in the channel gain at an OOB UE with and without the IRS. We determine the probability with which the difference is non-negative as a function of the number of IRS elements, and show that the channel gai... | A |
Some studies explained a DNN by distilling the DNN into another interpretable model (Frosst & Hinton, 2017; Che et al., 2016; Wu et al., 2018; Zhang et al., 2018; Vaughan et al., 2018; Tan et al., 2018).
However, most explanation methods did not try to disentangle concepts encoded by a DNN. | ∙ Unifying empirical findings in the framework of game-theoretic interactions.
To unify different attribution methods, Deng et al. (2022b) used interactions as a unified reformulation of different attribution methods. They proved that attributions estimated by each of 14 attribution methods could all be repres... | Based on game theory, we introduced multi-variate interactions (Zhang et al., 2021a, c) and multi-order interactions (Zhang et al., 2021b) to analyze interactions encoded by the DNN.
Recently, Ren et al. (2021a) proposed the mathematical formulation for concepts encoded by a DNN, and Ren et al. (2023a) further used suc... | Game-theoretical interactions facilitate the explanation of the representation capacity of a DNN from different perspectives, including the adversarial robustness (Wang et al., 2021a; Ren et al., 2021b), adversarial transferability (Wang et al., 2021b), and generalization power (Zhang et al., 2021b; Zhou et al., 2023).... | Our research group developed a theoretical framework based on game-theoretic interactions, which aims to tackle the following two challenges in XAI, i.e., (1) extracting and quantifying concepts from implicit knowledge representations of DNNs and (2) utilizing these explicit concepts to explain the representational cap... | D |
In comparison, when a DNN was trained to fit a high-order concept, the learning dynamics is detouring. Specifically, the DNN usually first learned low-order concepts.
Then, the DNN shifted its attention to concepts of higher orders, and later gradually removed mistakenly learned low-order concepts. |
In this section, we analyze the learning dynamics of concepts with a simple experimental setting, i.e., using a DNN to fit a boolean polynomial. We find that a high-order concept is not directly learned, but is likely to be mistakenly encoded as a mixture of low-order concepts in early epochs. In spite of the simplici... | All above experimental findings on the generalization power of concepts are related to the phenomenon of the inconsistency of high-order concepts, i.e., high-order concepts are more sensitive to small noises in the input sample than low-order concepts.
Therefore, we aim to prove that the interaction effect’s variance o... | In this paper, we provide a conceptual understanding of the reason why low-order concepts in training data can usually better generalize to testing data than high-order concepts. Specifically, we prove that the average inconsistency of concepts usually increases exponentially along with the order of concepts. We find t... | Although there is a common heuristic that complex concepts are usually more likely to be over-fitted, people still do not know the exact definition of concepts with an analytic connection to their generalization power. Because we also find the low generalization power of complex (high-order) interactive concepts, in th... | C |
We begin our experiment with the Lotka-Volterra model, the commonly used simple model for NeuralODEs. Coefficients and initial conditions in the Lotka-Volterra equations were all identical to the setting in [4]. Following [3], we generated training data numerically over the time span $t\in[0,6.1]$ ... |
We conducted an experiment on CIFAR10, which is a more challenging dataset than MNIST in the classification field. ResNet18, optimized using SGD with a batch size of 32, a learning rate of 0.001, and momentum of 0.9, is used for the experiment with a random seed of 10. Our activation function converges rapidly with respect to... |
The robust property of MoLU is that it approaches the minimum of a loss function rapidly without losing stability. This is a truly useful characteristic when training long time-series data using NeuralODEs (Neural Ordinary Differential Equations). To prove the performance of MoLU, we conducted an experiment on Neu... | We conducted experiments with MNIST, the most commonly used dataset in the field of image classification. We used the MNIST dataset in torchvision and a 2-layer network optimized using SGD with a batch size of 64, a learning rate of 0.001, and a momentum of 0.5, with a random seed of 10. We confirmed that ou... | We begin our experiment with the Lotka-Volterra model, the commonly used simple model for NeuralODEs. Coefficients and initial conditions in the Lotka-Volterra equations were all identical to the setting in [4]. Following [3], we generated training data numerically over the time span $t\in[0,6.1]$ ... | C |
In a nutshell, this technique amounts to traversing, in a depth-first-search manner, an implicit solution tree where nodes are solutions, and where edges are defined by some parent-child relation between solutions.
During the traversal, children are obtained by merging trees having adjacent roots along the limit cycle. | A notable feature of this algorithm is that it can moreover be adapted in order to produce the successor (or predecessor) of any given solution in $O(n^{2})$ time as well, and only needs linear space.
This procedure is then used as a s... | Thus, in general, reverse search only needs memory space that is linear in the height of the solution tree times the space needed to generate children.
As for the delay, it only depends on the time needed to compute the parent and the delay needed to generate children when using folklore tricks such as the alternating o... | We can now exploit the algorithms of Theorem 3.1, and more specifically our ability to generate the successor of a given component, as a subroutine for the efficient generation of arbitrary (not necessarily connected) functional digraphs. In order to avoid generating multiple isomorphic digraphs, we first define an app... | There is an $O(n^{2})$-delay and linear space algorithm generating all connected $n$-vertex functional digraphs.
Moreover, given any such functional digraph, we can generate its successor (resp., predecessor) in the enumeration... | A |
where the constant $C>0$ depends on $d,c,c_{F},\|F+G\|_{L^{2}_{\eta}},\|F\|_{W^{1,2(d-1)}_{\eta}},$ ... | Next, we turn to bound the term ${\rm II}$. Under Assumption (C3), the difference of the log-determinants is a Lipschitz function with constant $1/c$. In addition, using Lemma 7.3 with Assumption (C1) on the function spaces for the maps, the term ${\rm II}$ is then bounded by
|
We now see the role of Assumption (B5): since $\nabla r=\nabla\det J_{G}$, the polynomial asymptotic growth of $G$ and its first and second derivatives means that $|\nabla r(z)|$ ... | We comment on the assumptions in Theorem 3.9 and compare them to those in the pushforward analog, Theorem 3.8. For the pushforward, Assumption (B5) on the asymptotic polynomial growth of the map and its first derivatives implies that the $\eta$-weighted Sobolev norms are finite. Thus, Assumption (B5) is a suff... |
To understand the intuition behind integral ${\rm I}$, first note that if $G^{-1}\circ F={\rm Id}$, then term ${\rm I}=0$. Hence, this term measures “how far $G^{-1}$ ... | C |
To the best of our knowledge, MMA-MRNNet is the first architecture to leverage valence-arousal, AUs, and basic expressions as intermediate representations for the task of Facial Expression Intensity Estimation. This approach not only enhances the model’s ability to capture the nuanced dynamics of emotional expressions ... | [56] proposed a dual-branch FEIE model; the one branch (composed of Temporal CNN and Transformer encoder) handles the visual modality and the other handles the audio one; modality dropout is added for A/V feature fusion.
[51] achieved the 3rd place in the ERI challenge of the 5th ABAW; it proposed a methodology that in... |
First, we compare the performance of MMA-MRNNet to that of various baseline [6] and state-of-the-art methods: the ViPER and Netease Fuxi Virtual Human methods (which are multi-modal methods exploiting audio, visual and text information); the best performing HFUT-CVers method (presented in the related work section; it is... |
[53] proposed ViPER, a modality agnostic late fusion network that leverages a transformer-based model that combines video frames, audio recordings, ... | C |
A conformity function is a mapping $\rho:\mathbb{R}^{d}\times\mathscr{Y}\times\Omega\to\mathbb{R}$ such that $\rho(x,y)=\rho(x,y,\,\boldsymbol{\cdot})$ ... |
Note that the regularity of a specific conformity function $\rho$ is contextual, being inherently dependent on the distribution of the underlying data sequence. Technically, we can always avoid ties among the sequence of conformity scores almost surely by introducing a properly constructed ancillary tie-break... |
Conformity functions are agnostic to the choice of the specific models or algorithms used to construct $\hat{\mu}$, $\hat{\xi}_{p}$, and $\hat{\pi}$ in Exa... | Under the data exchangeability assumption, the sequence of conformity scores $\{S_{i}\}_{i\geq 1}$ is exchangeable.
| In general, the coverage indicators $Z_{i}$ are dependent random variables, since for all future observables the corresponding conformal prediction sets in Definition 3 are defined in terms of the same calibration sample conformity score $S_{(\lceil(1-\alpha)(n+1)}$... | A |
The model complexity of different action recognition methods on the DailyAction dataset is summarized in Table V, which includes the number of trainable parameters, the number of MACs, and the inference throughput measured at a batch size of 1. VMV-GCN has the lowest computational complexity and maximum throughput amo... | This work proposes a powerful yet lightweight model named Event Voxel Set Transformer (EVSTr) to solve the above problems. EVSTr can flexibly process both short- and long-duration event streams in a voxel-wise way for efficient recognition tasks, including object classification and action recognition. We adopt the even... | We also analyze the performance of different event representations on recognition tasks. Variant D has a significant drop in accuracy when using point-based representations, indicating that voxel-wise representations preserve local semantics better. Besides, variant E adds bilinear interpolation integration [26] to vox... | Compared to point-based counterparts, our model outperforms state-of-the-art methods and gains a notable improvement (1.9% increase) over the second place on the challenging N-Caltech101 dataset, demonstrating the effectiveness of EVSTr. As shown in Fig. 5, we further provide the visualization of feature repr... | Ablations of the multi-scale attentive aggregation in MNEL on object classification (N-Caltech101) and action recognition (DailyAction) are reported in Table VI. Variants A-C represent different feature aggregation strategies using the event voxel set as input. Variants D and E take different event representations as i... | D |
The scenario models a setup, where new objects are presented to a system that minimizes the risk of an unstable grasp, and therefore “explores” the shape of the object by touching and poking before attempting a grasp and subsequent manipulation. For example, imagine a conveyor belt for sorting objects of different siz... |
An important part of haptic exploration is the decision where to touch. The object can be touched randomly, as done by Smith et al. [22], or at a position always opposite the camera (from “behind”), as done by Watkins-Vall et al. [20]. However, these approaches are not as effective as an uncertainty-driven approach. Uncertainty can co... |
The first group of improvements concerns the process of shape completion performed by Implicit Geometric Regularization for Learning Shapes (IGR), which we modified as follows. We use a new, theoretically grounded method to determine the points with the highest uncertainty. In addition, we changed the sampling of points... | We proposed a new method for shape completion using a combination of visual and haptic feedback. VISHAC outperformed the baseline Act-VH [1] in terms of speed, reconstruction quality and robustness.
We experimentally validated VISHAC in both simulated and real-world environments, using 8 objects and an additional one f... | Contributions. We present a pipeline for visuo-haptic shape completion called VISHAC. We extend the baseline by Rustler et al. [1], which also deals with shape completion. We describe our modifications and improvements with respect to [1] below. | D |
(2) For every $C\subseteq A^{B}$, we have that $C\subseteq\mathrm{Cl}_{Y}(C)\subseteq\mathrm{Cl}_{X}(C)$ ... |
This approach based on ideals allows us to recover well-known topologies on $A^{B}$, such as the topology of pointwise convergence (referred to as the local topology in this work) and the uniform topology (refer to [17], Section 19). | In Section 4 we develop a method for equipping the set $A^{B}$ with a topology using Boolean ideals on $B$.
This framework is particularly applicable when $B=A^{\omega}$ ... | In this paper, we adopt the topological approach as we primarily focus on $\omega$-operations and relations of arity $\omega$, referred to as $\omega$-relations.
We present a method for defining topologies on sets of functions. The key idea is to choose a Boolean ideal $X$ of subsets... | provide some examples of such topologies that will be studied in the rest of this work. The basic idea is that, to endow $A^{B}$ with a topology, it is enough to choose a suitable family $X$ of subsets of $B$.
The only cond... | A |
In Section 4, we obtain better time bounds for the special case of finding collections of $s$-$t$ mincuts that are pairwise disjoint. Similar to Sum-$k$-DMC and Cov-$k$-DMC, our approach exploits the partial order structure of $s$-$t$ mincuts. We use this to effici... |
We now present the NP-hardness proof for the decision version of Min-$k$-DMC. For simplicity, we consider Min-$k$-DMC reformulated as a minimization problem by means of the relationship $\max_{S\in U_{k}} d_{\mathrm{min}}(S)=\min_{S\in U_{k}} \hat{d}_{\mathrm{min}}(S)$ ... |
Contrary to the hardness of finding diverse global mincuts in a graph [HKK+22], in Section 3 we show that both Sum-$k$-DMC and Cov-$k$-DMC can be solved in polynomial time. We show this via a reduction to the submodular function minimization problem (SFM) on a lattice, which is known to be solvable i... | In Section 5, we prove that the decision version of Min-$k$-DMC is already NP-hard when $k=3$. The proof is split into three parts. First, we show that a variant of the constrained minimum vertex cover problem on bipartite graphs (Min-CVCB) of Chen and Kanj [CK03] is NP-hard. Then, we give a re... | In contrast to the polynomial-time algorithms of the previous sections, here we show that $k$-DMC is NP-hard when considering $d_{\mathrm{min}}$ as the diversity measure. We called this variant Min-$k$-DMC in Section 1. The hardne... | C |
We let $X$ be the real line and the random transition $\Gamma(\cdot|u)$ be a Gaussian $\mathcal{N}(u,1)$ with mean $u$. The space of measures is $X_{M}=[0,1]$... | Quantum information theory studies the limits of communicating through quantum channels. This section shows the limitations of the algorithmic content of pure states and their measurements. Given a measurement apparatus $E$, there is only a tiny fraction of quantum pure states on which $E$’s applicati...
In quantum mechanics, given a quantum state $\ket{\psi}$, a measurement, or POVM, $E$ produces a probability measure $E\ket{\psi}$ over strings. This probability represents the classical information produced from the measurem... | We prove conservation of probabilities over successively general spaces. This includes finite sequences, infinite sequences, and $T_{0}$, second countable topologies. Conservation of probabilities in the cases of finite and infinite sequences follows directly...
Another interesting property of quantum mechanics is that the vast majority of quantum pure states themselves will have negligible algorithmic self-information $\mathbf{I}_{\mathcal{Q}}(\ket{\psi}:\ket{\psi})$ ... | A |
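The statement above that a POVM $E$ maps a pure state to a classical probability measure can be made concrete with a few lines of linear algebra. The sketch below is illustrative only (the helper name `measurement_distribution` is ours, not the paper's): it computes the Born-rule probabilities $p_i = \langle\psi|E_i|\psi\rangle$ for a finite POVM acting on a finite-dimensional state.

```python
import numpy as np

def measurement_distribution(psi, povm):
    """Outcome probabilities p_i = <psi| E_i |psi> for a POVM {E_i}.

    psi: normalized state vector; povm: list of positive semidefinite
    matrices summing to the identity. Returns a real probability vector.
    """
    return np.array([np.real(np.vdot(psi, E @ psi)) for E in povm])
```

For a projective measurement in the computational basis, the result is simply the squared amplitudes of the state.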
We propose Context Normalization (CN), a novel approach that utilizes defined contexts to capture underlying distribution variations. In CN, each sample in a mini-batch is normalized using the mean and standard deviation specific to its context. By treating contexts as components of a Gaussian mixture, we learn their ... | It is important to mention that the baseline models (ViT with standard preprocessing and ViT with batch normalization) collapsed in this blended dataset as the two datasets have different structures, and simple normalization does not allow a suitable representation of the data. Context normalization, on the other hand,... |
We have proposed a novel approach called “context normalization” (CN) that enhances deep neural network training in terms of training stability, fast convergence, higher learning rates, and viable activation functions. Similar to the conventional mixture normalization (MN) method, our approach is driven by the hypothes... | Based on the Mixture Normalization (MN) hypothesis proposed by [6] (ref. to Figure 1), our Context Normalization (CN) approach operates under a similar assumption that data can be effectively represented by a mixture of multiple components, as opposed to batch normalization (BN) [4]. In the Context Normalization (CN)... |
Through a comprehensive set of experiments, we demonstrate that CN not only accelerates model convergence, but also achieves superior final test accuracy. These results highlight the effectiveness of our proposed method in improving the overall performance of models. | D |
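The per-context statistics step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the learnable scale/shift parameters and the Gaussian-mixture estimation of context parameters are omitted, and the function name `context_normalize` is ours.

```python
import numpy as np

def context_normalize(x, context_ids, eps=1e-5):
    """Normalize each sample with the mean/std of its own context.

    x: (N, D) mini-batch; context_ids: (N,) integer context label per
    sample. Each context is standardized with its own statistics.
    """
    out = np.empty_like(x, dtype=float)
    for c in np.unique(context_ids):
        idx = context_ids == c
        mu = x[idx].mean(axis=0)
        sigma = x[idx].std(axis=0)
        out[idx] = (x[idx] - mu) / (sigma + eps)
    return out
```

After this step, each context's samples are approximately zero-mean and unit-variance, regardless of how different the contexts' original distributions were.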
Another challenge for OOD detection is on datasets with a large number of ID classes and high-resolution images, e.g., ImageNet-1k (Deng et al., 2009).
Fig. 7 presents the detection performance of DFB using ImageNet-1k as the in-distribution dataset and on four OOD datasets, including two new high-resolution datasets, Imag... |
It then leverages these background features to define background OOD scores and seamlessly com... | The Reasons behind the Effectiveness of DFB. We aim to understand the effectiveness of DFB from two perspectives, including the foreground and background OOD scoring, and the latent features learned in DFB, with the results on the Textures dataset reported in Figs. 4 and 5 respectively. We can see in Fig. 4 that the ba... | The results show that DFB can consistently and significantly outperform its base model Energy with increasing number of ID classes on four diverse OOD datasets, indicating the effectiveness of DFB working in large-scale semantic space. On the other hand, as expected, both Energy and DFB are challlenged by the large sem... | Comparison to SotA Methods.
DFB is also compared with five very recent SotA methods, including MaxLogit (Hendrycks et al., 2022), KL-Matching (Hendrycks et al., 2022), ReAct (Sun et al., 2021), MaSF (Haroush et al., 2022) and DML+ (Zhang and Xiang, 2023) , with their results reported at the top of Tabs. 1 and 2. Among ... | C |
The task of low-light image enhancement (LLIE) aims to improve the visibility of images which are captured under low-light conditions. Under-exposed images are often degraded in a variety of ways in addition to their lack of visibility. Notably, low-light regions of an image typically contain degraded color informatio... | In this paper, we present a framework for post-processing images which have undergone low-light image enhancement. The enhancement of low-light images often reveals a variety of degradations which are hidden in the dark, and thus a need for post-processing is introduced. Furthermore, each low-light enhancement techniqu... |
Existing denoising techniques can be applied to denoise low-light images either before or after contrast enhancement [9, 10]. These denoising techniques range from low-pass filters and algorithms such as block matching and 3D filtering (BM3D) [11], to state-of-the-art DL denoisers [9, 12]. Despite denoisers significan... | LLIE techniques have existed for many decades and can be divided into non-learning-based methods and learning-based methods. Popular examples of traditional techniques which do not require learning from data include variants of histogram equalization (HE) [6, 20] and gamma correction (GC) [21]. HE adjusts the global co... | Simply adjusting the contrast of low-light images using a technique such as histogram equalization [6] is often insufficient due to the amplification of noise [1, 7]. Learning-based methods have emerged which significantly outperform traditional methods. However, even the state-of-the-art deep learning (DL) techniques ... | D |
Another motivation for our analysis stems from the connection between navigation in the physical space and knowledge space. Previous research has demonstrated that the same neural regions that are responsible for navigation in physical space are also involved in navigating the knowledge space: the hippocampus and ento... | To gain insights into online navigation behaviors, researchers conducted a series of studies using Wikipedia as an observational setting [19, 20, 21, 22, 23] and utilized its well-documented network of articles as the framework for navigation studies [24, 25]. The wide range of topics represented in Wikipedia (https://... | In the game sessions, players are given two Wikipedia pages as the source and the target in each game. To reduce the disparities in prior knowledge among the participants, the source and target pages are chosen to be similarly distanced (2 or 3 steps away on the Wikipedia network) pages about renowned individuals from ... |
Our study highlights the role of individual characteristics in participants’ navigation performance within the knowledge space, with this influence being modulated by constraints such as time and distance. We discovered that prior experience with Wikipedia, the navigation game, and familiarity with the target page are... |
To gain a better understanding of how navigation on the knowledge network is affected by individual characteristics, we conducted an online experiment where we hired 445 participants from the US to play nine rounds of Wikipedia navigation games (illustration in Fig. 1) and to fill in a survey afterwards about their pe... | A |
During the LHC Run 2, the simulation of physics events at LHCb has taken more than 80% of the distributed computing resources available to the experiment, namely the pledged CPU time.
The experiment has just resumed data taking after a major upgrade and will operate with higher luminosity... | Meeting the foreseen needs in Run 3 conditions using only the traditional strategy for simulation, namely detailed simulation, will far exceed the pledged resources.
Hence, the LHCb Collaboration is making great efforts to modernize the simulation software stack [8, 9] and develop novel and faster simulation options [1... | Developing new simulation techniques is an unavoidable requirement for LHCb to tackle the demand for simulated samples expected for Run 3 and those will follow.
The ultra-fast simulation approach is a viable solution to reduce the pressure on pledged CPU resources and succeeds in describing the uncertainties introduced... |
As mentioned in the previous Section, the validation of the ultra-fast philosophy of Lamarr is based on the comparison between the distributions obtained from models trained on detailed simulation and the ones resulting from standard simulation strategies. | Several strategies have been developed to reduce the computational cost of the simulation phase based on resampling techniques [15] or parameterizations of energy deposits [10, 12, 13].
These options offer cheaper alternative solutions to reproduce the low-level response of the LHCb detector and are typically named fas... | A |
$\mathcal{L}_{\text{pretrain}}=\lambda_{1}\cdot\mathcal{L}_{\text{align}}+\lambda_{2}\cdot\mathcal{L}_{\text{contact}}.$ |
We propose leveraging a pretrained protein language model to train protein structure models using cross-modal contrastive learning. Our approach demonstrates superior performance on various evaluation tasks. However, challenges remain, including the scope of language model transfer, data efficiency, generalization, co... |
Evaluating a pretrained protein structure model within a novel training framework poses significant challenges. To address these challe... |
Figure 1: (a) The proposed cross-modal contrastive learning framework utilizes a pretrained protein language model to guide the training of the protein structure model through contrastive alignment loss. To reinforce information constraints on the structure, we introduce a self-supervised contact map prediction. (b) T... | Furthermore, regarding different levels (Designp vs. Designr), while there is no significant disparity between the residue-level and protein-level pretrained modules in downstream tasks, Designr slightly outperforms Designp overall. This observation suggests that any small gaps between the two levels of pretrained mode... | B |
We adjust the setups of TC layers by increasing/decreasing the kernel size accordingly to ensure the output dimensions are always 512. The results with respect to link prediction accuracy (H@10) are shown in Table VIII. We see that the results of LN-TransE and LN-TransH are relatively stable, introducing more TC layers... |
In LiftNet, we adopt TC layers to progressively lift the dimensions. To demonstrate the effectiveness of such a design, we implement LiftNet variants with fully connected (FC) layers for comparison. The experiment is conducted on the largest FB15K237 dataset, with accuracy measured by MRR. Specifically, we include Lif... | We adjust the setups of TC layers by increasing/decreasing the kernel size accordingly to ensure the output dimensions are always 512. The results with respect to link prediction accuracy (H@10) are shown in Table VIII. We see that the results of LN-TransE and LN-TransH are relatively stable, introducing more TC layers... | The main task is to design an effective f(∗)𝑓f(*)italic_f ( ∗ ) for KGE.
An intuitive choice of $f(*)$ is multiple fully connected (FC) layers; however, FC layers require large numbers of parameters and are prone to overfitting for KGE [22]. Inspired by image processing, we refer to feature upsampl... |
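To illustrate why transposed ("up") convolutions can lift dimensionality with far fewer parameters than FC layers, here is a naive 1D sketch (our own toy code, not the LiftNet implementation): a 3-tap kernel lifts a length-4 vector to length 9, whereas an FC layer mapping 4 to 9 would already need 36 weights.

```python
import numpy as np

def transposed_conv1d(x, kernel, stride=2):
    """Naive 1D transposed convolution: each input element scatters a
    scaled copy of the kernel into the output (no output trimming)."""
    k = len(kernel)
    out = np.zeros(stride * (len(x) - 1) + k)
    for i, v in enumerate(x):
        out[i * stride : i * stride + k] += v * kernel
    return out

# 3 kernel parameters lift a length-4 input to length 9;
# an FC layer for the same 4 -> 9 mapping would need 4 * 9 weights.
y = transposed_conv1d(np.ones(4), np.array([0.5, 1.0, 0.5]), stride=2)
```

The output length follows the usual transposed-convolution formula, stride × (n − 1) + kernel_size.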
Unfortunately, to the best of our knowledge, our problem cannot be formulated as a linear programming one. This represents the biggest drawback of using linear programming for entanglement routing: the amount of detail one can add becomes restricted by the need to formulate the problem as a linear optimization. Neverth... | With the surge of research on entanglement distribution and the wide range of models being considered, it is valuable to identify models that can be addressed using the proposed approach, which can be interpreted as a generalization of previous methods for single and multi-objective optimization. It evolves from discre... | Unfortunately, to the best of our knowledge, our problem cannot be formulated as a linear programming one. This represents the biggest drawback of using linear programming for entanglement routing: the amount of detail one can add becomes restricted by the need to formulate the problem as a linear optimization. Neverth... | This paper advances the research on quantum networks by introducing an highly versatile routing approach based on fidelity curves, which can be utilized in conjunction with purification protocols including capacity-achieving purification ones [11]. The fidelity of an ebit can be quantified as the distance between the q... | This paper focused on multipartite entanglement distribution for a quantum network connected through links that exhibit a trade-off between entanglement generation rate and fidelity. This is the case with hash-based quantum purification protocols [11] and with photonic models [12]. Two entanglement distribution models ... | D |
TABLE III: The MWDs of polar codes with different construction methods, where the designed $E_{b}/N_{0}$ of the ECBS algorithm are 1.5 dB and 2.57 dB for $(512,256)$... | Fig. 8 provides the BLER performance of $(512,256)$ and $(512,384)$ polar codes with different construction methods.
The MWDs of the polar codes in Fig. 8 are shown in Table III. | Hence, the entropy constraint is a metric to evaluate whether the performance of polar codes with limited list size under SCL decoding can approach the ML performance or not and the proposed ECBS algorithm can improve the MWD of polar codes to show better BLER performance as L𝐿Litalic_L increases.
Finally, compared wi... | The designed Eb/N0subscript𝐸𝑏subscript𝑁0E_{b}/N_{0}italic_E start_POSTSUBSCRIPT italic_b end_POSTSUBSCRIPT / italic_N start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT of the ECBS algorithm are 1.51.51.51.5dB and 2.572.572.572.57dB for (512,256)512256\left(512,256\right)( 512 , 256 ) and (512,384)512384\left(512,384\right)( 5... | TABLE III: The MWDs of polar codes with different construction methods, where the designed Eb/N0subscript𝐸𝑏subscript𝑁0E_{b}/N_{0}italic_E start_POSTSUBSCRIPT italic_b end_POSTSUBSCRIPT / italic_N start_POSTSUBSCRIPT 0 end_POSTSUBSCRIPT of the ECBS algorithm are 1.51.51.51.5dB and 2.572.572.572.57dB for (512,256)5122... | A |
The solution for the Searcher will have the following structure. At every branch node $j$ there is a favored branch $Q_{1}$ and a positive probability $\beta$ (the favoring bias) for it to be chosen before looking at the signal.
With the rema... |
This paper has addressed the question of how to optimally search for an adversarially hidden target on a tree network in the presence of unreliable signals. We have found optimal solutions for both the Searcher and the Hider that can be calculated recursively, and a closed form expression for the value of the game. Fu... | The solution for the Searcher will have the following structure. At every branch node j𝑗jitalic_j there is a favored branch Q1subscript𝑄1Q_{1}italic_Q start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT and a positive probability β𝛽\betaitalic_β (the favoring bias) for it to be chosen before looking at the signal.
With the rema... | So in particular the Searcher will never choose the unfavored arc (branch) when the signal is for the favored one. The use of biased depth-first Searcher strategies (random choices at every branch node) of the Searcher was introduced in another context in Alpern (2010) and Alpern and Lidbetter (2014), but those distrib... |
We can now state and prove our main theorem, which includes an expression for the value of the game. We describe the optimal strategy for the Searcher by giving the favoring bias β𝛽\betaitalic_β of searching the favored branch first (without needing to observe the signal) when at a branch node. | C |
For these reasons, especially in the medical domain, it is essential to have computationally feasible methods that can fine-tune existing models towards a smaller set of a specific modality or disease.
In this paper, we pick one such method, Textual Inversion, and rigorously explore its capacities for adapting Stable D... | Several other works studied text-to-image latent diffusion models for medical imaging Chambon et al. (2022a); Akrout et al. (2023).
Closest to our work is Chambon et al. (2022b), where the authors explore various methods to adapt a pre-trained Stable Diffusion model to chest X-ray generation. | Several papers have applied diffusion to medical imaging, with a wide range of applications including anomaly detection, segmentation, registration, and modality transfer with image-to-image translation [Kazerouni et al. (2022)].
Specifically for medical image generation, several recent works have trained diffusion mod... | Pre-trained models are often trained on 2D RGB datasets, but many medical imaging modalities are 3D.
Recently, studies such as Khader et al. (2023) and Pinaya et al. (2022) have trained diffusion models from scratch on 3D data or even on 4D data Kim and Ye (2022), and Han et al. (2023) use diffusion models conditioned ... | For these reasons, especially in the medical domain, it is essential to have computationally feasible methods that can fine-tune existing models towards a smaller set of a specific modality or disease.
In this paper, we pick one such method, Textual Inversion, and rigorously explore its capacities for adapting Stable D... | B |
Semantic-based methods enhance object representations by incorporating additional semantic information, such as segmentation labels [66], instance masks [61, 57], or features extracted using pre-trained vision-language models [32]. On the other hand, 3D layout-based approaches, exemplified by NSG [37] and its successor... | Figure 3: Framework Overview.
The CompoNeRF model unfolds in three stages: 1) Editing 3D scene, which initiates the process by structuring the scene with 3D boxes and textual prompts; 2) Scene rendering, which encapsulates the composition/recomposition process, facilitating the transformation of NeRFs to a global frame... | Our framework interpreted a multi-object text prompt as a collection of localized NeRFs, each associated with a spatial box and an object-specific text prompt, which were then composited to render the entire scene view. We have further enhanced the framework with a specialized composition module for global consistency,... | CompoNeRF’s distinctive feature lies in its capability to recompose scenes by interfacing with decomposed NeRFs, thereby accelerating the creation of new scenes. In contrast to the mesh-based method in Fantasia3D, which requires considerable human effort in mesh modification and graphics engine support for editing, Com... | Much like Latent-NeRF and SJC, our CompoNeRF framework encounters the multi-face challenge, where guidance from the Stable Diffusion model may result in conflicting facial features for certain objects, as illustrated in Figure 16. The reason lies in the fact that diffusion model does not always provide reliable guidanc... | C |
MDCGen (Iglesias et al., (2019) is a feature-rich generator that supports many desiderata in cluster analysis, such as overlap control, different probability distributions, subspace clusters, and the ability to add noise points. In particular, it is nice to be able to place noise points away from the clusters, which i... |
Figure 9 confirms that clustering difficulty rises with increasing overlap. Figure 10 shows the same in the case of non-convex clusters, suggesting that applying distort maintains the desired relationship between overlap and clustering difficulty. Additionally, both figures show how our cluster overlap relates to the ... |
In Section 3.2, we defined the overlap between two clusters in terms of the error rate of the best minimax linear classifier. We verify that this notion of overlap conveys clustering difficulty by measuring clustering performance on data sets with different degrees of overlap. For this simulation, we consider data set... | Finally, the HAWKS generator (Shand et al., (2019)) controls cluster overlaps using an evolutionary algorithm that evolves the means and covariance matrices of multivariate normal distributions. The paper applies this framework to create data sets with a user-specified silhouette score representing clustering difficult... | Most of the high-level geometric parameters describing an archetype are based on what we call “max-min sampling.” In this approach, the user controls a geometric attribute by specifying a reference value and max-min ratio. In addition, a constraint ensures that the reference value and is indeed typical for every data s... | C |
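For intuition, when the two clusters are equal-weight spherical Gaussians with common scale, the error rate of the best linear classifier has a closed form, $\Phi(-\Delta/2)$, where $\Delta$ is the distance between the means in units of the standard deviation. The sketch below (our own, using this special case as a proxy for the paper's overlap measure, not its general minimax construction) computes that error rate.

```python
import math

def overlap(mu1, mu2, sigma=1.0):
    """Bayes error of the optimal linear classifier for two equal-weight
    spherical Gaussians N(mu1, sigma^2 I) and N(mu2, sigma^2 I).

    Phi(-d/2) rewritten via erfc: 0.5 * erfc(d / (2 * sqrt(2))).
    """
    d = math.dist(mu1, mu2) / sigma
    return 0.5 * math.erfc(d / (2 * math.sqrt(2)))
```

As the means move apart the overlap decays from 0.5 (indistinguishable clusters) toward 0, matching the intended monotone relationship between overlap and clustering difficulty.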
BART-Gen (Li et al., 2021) is designed for document-level event extraction and can deal with the long-distance dependence issue and the co-reference problem. Constrained generation is applied for argument extraction, which requires event-specific templates.
| The generator is fine-tuned on both trigger prediction and argument prediction simultaneously by training on the pairs of instances with different prefixes ‘TriggerEvent: ’ and ‘Argument: ’ (see §4.4). In order to take the context as input and generate structured event frames, the generator 𝒢𝒢\mathcal{G}caligraphic_G... | We preprocess the data by separating original samples into event samples and inserting placeholders for target entities. The instances are processed with distinct prefixes for subtasks: ‘TriggerEvent: ’ and ‘Arguments: ’. Figure 3 shows a data preprocessing example. Details pertaining to our pipeline training and infer... |
While impressive results are reported, we identify two major limitations of the current generation-based event extraction methods. Firstly, most of these methods rely on heuristic templates and extensive human knowledge engineering. According to the experiments conducted by Hsu et al. (2022), a slight change in the te... | The DEGREE (Hsu et al., 2022) model is designed to generate ‘invalid’ instances during both the training and inference phases, wherein event-specific knowledge is combined with context even if no such event is mentioned in the context. We eliminated these event-specific templates, leaving only the context sentence as i... | B |
$[\mathcal{H}]^{s_{2}}\hookrightarrow[\mathcal{H}]^{s_{1}}\hookrightarrow[\mathcal{H}]^{0}$ exist and are compac... | Before introducing the $L^{q}$-embedding property of the interpolation space $[\mathcal{H}]^{s}$, we first prove the following lemma, which characteri... | where $m:=\min\{k\in\mathbb{N}:k>r\}$. (We refer to Appendix A for the definition of real interpolation and Sawano 2018, Chapter 4.2.2 for more details). It is well known that when $r>\frac{d}{2}$ ...
It is worth pointing out the relation between the definition (5) and the interpolation space defined through the real method (real interpolation). For details of interpolation of Banach spaces through the real method, we refer to Sawano (2018, Chapter 4.2.2). Specifically, Steinwart and Scovel (2012, Theorem 4.6) reve... |
The outline of the rest of the paper is as follows. In Section 2, we introduce basic concepts including preliminary knowledge of RKHSs, integral operators, and the definition of the interpolation space. In addition, we formally define the spectral algorithm, which is the main interest of this paper, and provide three example... | C |
Section 2 briefly reviews a number of existing point cloud simplification techniques which are relevant to our work. Section 3 provides background details regarding the computation of surface variation, GPs with kernels defined on non-Euclidean domains and a greedy subset-of-data scheme for GP inference. Section 4 out... | In this section we will introduce a number of existing point cloud simplification techniques, with a particular focus on works which have a feature-preserving element to their approach. Some of the earliest curvature-sensitive simplification techniques were proposed by Pauly et al. [26] and Moenning et al. [25]. The fo... | Approximate Intrinsic Voxel Structure for Point Cloud Simplification (AIVS), introduced by Lv et al. [24], combines global voxel structure and local farthest point sampling to generate simplification demand-specific clouds which can be either isotropic, curvature-sensitive or have sharp edge preservation. As with HC ho... | HC and Potamias et al. are the only baselines with shorter runtimes than our method, and obtain maximum Hausdorff distances comparable to those obtained by our approach. However, as discussed in Section 2, tuning the user-specified HC parameters make striking a balance between feature preservation and retaining a suffi... |
Overall, these results show that our approach provides a computationally efficient option for performing point cloud simplification in settings where the user wishes to strike a balance between preserving high fidelity around sharp features in the cloud, and ensuring that the simplified cloud covers the manifold defin... | A |
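The surface variation used above for feature sensitivity (Pauly et al.) is commonly computed from the eigenvalues of a local neighborhood's covariance matrix, $\lambda_0/(\lambda_0+\lambda_1+\lambda_2)$ with $\lambda_0$ the smallest eigenvalue. A minimal sketch of that quantity (our own code, assuming a single neighborhood of 3D points as input, not the authors' pipeline):

```python
import numpy as np

def surface_variation(points):
    """Surface variation of a local neighborhood of 3D points:
    lambda_0 / (lambda_0 + lambda_1 + lambda_2), where lambda_0 is the
    smallest eigenvalue of the neighborhood's covariance matrix.
    Near 0 for flat patches, larger near sharp features."""
    cov = np.cov(points.T)
    lam = np.sort(np.linalg.eigvalsh(cov))
    return lam[0] / lam.sum()
```

A perfectly planar patch gives a value of zero, while an isotropic blob approaches the maximum of 1/3, which is why the measure is a convenient per-point saliency for feature-preserving simplification.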
Training on full resolution (FullRes):
We implemented a distributed version of the proposed architecture that splits the task across two GPUs if necessary. This allows for training directly on full resolution ($256^{3}$) data, given that the expensive speciali... | Training on full resolution (FullRes):
We implemented a distributed version of the proposed architecture that splits the task across two GPUs if necessary. This allows for training directly on full resolution ($256^{3}$) data, given that the expensive speciali... | $256^{3}$ images, we distribute the model over 2 GPUs.
The optimizer we used was AdamW [Loshchilov and Hutter(2017)] with the default parameters. | For three spatial dimensions (i.e. 3D) this means that reducing the
input size from $256^{3}$ to $128^{3}$ results in a reduction
context: To address the aforementioned challenges, substantial research endeavors have been dedicated. The seminal work in [3] introduces FedAvg, a foundational approach for federated optimization that highly depends on iterative model averaging and partial participation, thereby significantly decreasing the communication over...
A: According to [38], other adaptive algorithms such as FedAdagrad and FedYogi are proposed to improve the model convergence rate under the situation of heterogeneous data. FedAdam employs adaptive learning rates and momentum by leveraging local updates from client devices to efficiently update the global model. FedAdagra...
B: We systematically conduct numerical experiments designed to elucidate the influence exerted by the aggregation weight α in the objective function presented in Eq. (13) on the model efficacy and facilitate the practical application and promotion of FedAgg. As depicted in Fig. 12, the decrement of the hy...
C: Figure 2: Numerical analysis of model accuracy and training loss curves on the CIFAR-100 dataset featuring IID data distribution. The results underscore the substantial impact of employing the adaptive learning rate scheme based on adaptive optimizer Adam, which enhances model performance and convergence rate.
D: Nevertheless, in FL systems, the potential of adaptive learning rate-based algorithms in FL remains largely underexplored. Current literature often undervalues the pivotal role of the learning rate, a hyperparameter that requires meticulous tuning to accelerate the convergence speed and FL model performance. In Fig. 2,...
label: D

context: In this paper, we introduce LLaMA-Adapter, an efficient fine-tuning method that adapts LLaMA into a well-performed instruction-following model. Trained by Alpaca’s instruction-output data, our approach freezes the entire LLaMA model, and proposes a zero-initialized attention mechanism with superior resource efficiency.
A: Figure 2: Details of Zero-initialized Attention. We insert learnable adaption prompts into the last L out of N transformer layers of LLaMA. To progressively learn the instructional knowledge, we adopt a zero gating factor within the attention for stable training in the early training stages.
B: Specifically, in LLaMA’s higher transformer layers, we append a set of learnable adaption prompts as prefixes to the word tokens. Then, to avoid the noise from randomly initialized prompts at the early training stage, we equip the frozen self-attention layers with a learnable gating factor.
C: If the adaption prompts are randomly initialized, they might bring disturbance to the word tokens at the beginning of training, which harms the fine-tuning stability and effectiveness. Considering this, we modify the vanilla self-attention at the last L layers to be zero-initialized variants, as shown in Fi...
D: Given a pre-trained LLaMA with an N-layer transformer, we first insert a set of learnable adaption prompts into its topmost L layers (L ≤ N). We denote the prompts as {P_l}_{l=1}^{L}...
label: B

context: Compared with previous methods which leverage particular architectures for VQA or include a complicated fusion encoder, S-ViLM is the most efficient and flexible for various vision-language tasks. S-ViLM achieves better performance than competing methods with the accuracy of 43.5% (+1.4%) and 46.4% (+0.5%) on MSRVTT-QA...
A: Two evaluation settings are considered: (1) linear probing where the backbone encoder is frozen and only the last linear classifier is trained and (2) end-to-end fine-tuning where both the backbone and the classifier are trained. Top-1 accuracy on UCF101 and HMDB51 is reported in Table 4.
B: In terms of fine-tuning, different tasks are trained independently with their own set of hyperparameters on the target dataset and more details can be found in Appendix A. For temporal action localization, we fix weights of the pre-trained video encoder and its grouping blocks to extract video features, which are then ...
C: VQA results on two open-ended datasets are shown in Table 2. To enable S-ViLM to deal with the VQA task, we add a fusion head adapted from BUTD (Anderson et al., 2018) by integrating video and text features with simple linear layers. Then a classifier is inserted after the fusion module to perform question answering as...
D: Video action recognition. We select HMDB51 (Kuehne et al., 2011) containing 6,766 videos with 51 categories and UCF101 (Soomro et al., 2012) containing 13,320 videos with 101 categories. Both linear probing and fine-tuning the whole model are explored.
label: A

context: In all experiments, the encoder e_θ is a two-layer multi-layered perceptron with hidden layer dimensions of 512 and 256, and output dimension of 128. We trained the encoder for 50 epochs for the layer prediction and ...
A: As a sentence representation, we experiment with [CLS] token representations and with mean pooling of token representations, since Del and Fishel (2021) noted a difference in similarity in these two cases. We report results with [CLS] representations in the main body and with mean pooling in Appendix A.1; the trends ar...
B: Table 1: Layer prediction benchmark accuracy results for language and vision cases. For encoder-based methods we report mean and std over 5 random initializations. For ContraSim, we experiment with training with different datasets (rows) and evaluating on same or different datasets (columns).
C: Our method, ContraSim, achieves excellent results. When trained on one dataset’s training set and evaluated on the same dataset’s test set, ContraSim achieves perfect accuracy under this benchmark, with a large margin over CKA results. This holds for both language and vision cases. Even when trained on one dataset and ...
D: We trained a different encoder for each model, as opposed to the single encoder we trained in all other experiments. This enables ContraSim to be used with representations with different dimensions. Results are summarized in Table 3. We report results with FAISS sampling. Across all pairs, ContraSim achieves superior r...
label: B

context: Label ambiguity also affects conventional methods in a Manhattan world. For example, it is often the case that three orthogonal directions can be estimated using the Gaussian sphere representation of VPs [74]; however, the representation does not regard the difference between front and back directions. For a fair comp...
A: Figure 9: Projected 3D VP/ADPs and orthogonal points of VP/ADPs in the Manhattan world to estimate camera rotation. These orthogonal points are obtained as VP/ADPs without camera rotation; that is, pan, tilt, and roll angles are 0°. Four VP/ADPs of the ...
B: Considering generalized cases of label ambiguity, we annotated the image coordinates of VP/ADPs as follows. We 180°-rotationally align all labels based on two conditions: 1) the images have back labels without front labels, and 2) the images have r...
C: After removing label ambiguity, we ignored back labels because the training and test sets had only 0.1% and 0.3% back labels, respectively. Therefore, the VP estimator detected 13 points, that is, the five VPs (front, left, right, top, and bottom) and eight ADPs in Table 2. If a...
D: As shown in Table 2, we annotated the VP/ADPs of the image coordinates and labels on the basis of panoramic-image width and height. We found that some generated fisheye images had label ambiguity; that is, we cannot annotate unique VP/ADP labels for these images. For example, we cannot distinguish one image with a 0°...
label: C

context: A matrix S ∈ C^{n×n} is Schur if and only if, for each positive definite Q ∈ C^{n×n}...
A: Moreover, referring to item (vi), in this case the vector p is not uniquely determined (up to rescaling) since there exist as many linearly independent eigenvectors as the geometric multiplicity of the zero eigenvalue. In fact, any such selection of p is a valid one for item (vi) because, with disc...
B: The implication (i) ⟸ (ii) directly follows from the fact that (15) implies σ(A_k) ⊆ σ(A_{e,k})...
C: To show the equivalence of the six statements in Theorem 1, the proof is structured as follows. We first prove the equivalence among statements (i), (ii) and (iii). Then, we prove the following chain of implications: (iii) ⟹ (iv), followed by (iv) ⟹ (v), (v) ⟹ (vi), and finally (vi) ⟹...
D: Hence, as a consequence of Perron-Frobenius theory, the dominant eigenvalue μ_0 of M is real and associated with left and right eigenvectors having non-negative elements (Luenberger, 1979, Chapter 6.5, Theorem 1). In view of Gershgorin’s Circl...
label: C

context: Here we describe the method and results for testing H_{RQ_3}: That changes in ONNX operator sets are correlated with increased defects.
A: This causal asymmetry may be attributable to differences in the requirements of DL model converters and DL compilers. The purpose of DL model converters is interoperability (section 2.1), making compatibility failures a focus and reducing the need for optimizations.
B: For the reasons noted above, we studied the DL model converters from PyTorch and TensorFlow into the ONNX IR (torch.onnx and tf2onnx, respectively). We note that among ONNX model converters, those for PyTorch and TensorFlow have the most failure data available on GitHub (Table 6).
C: (1) DL model converters lag behind ONNX releases (this might cause a failure to be mis-attributed to another release, i.e., offset in time); (2) Failures might be in any ONNX available release, not just the most recent (possibly inflating the failure rate of a given release);
D: We also measure the relationship, assessing the correlation in the number of changes in an ONNX release and the number of failures between its release and the next. We use the Spearman correlation, which is a commonly-used and robust metric for measuring a monotonic relationship between two variables (Fenton and Bieman...
label: C

context: FCUs can be classified into two blocks: generic or embedded, each presenting distinct advantages and limitations. Generic FCUs offer versatility, accommodating various frames and components for customizable configurations, which is beneficial for developers. However, their integration and calibration require technical...
A: The Alphanumeric Viewer is a component that monitors the state of specific variables of the system, e.g. sensor measurements, values corresponding to state estimation, references for controllers, etc. The information is distributed in different panes to facilitate the search for a specific variable of the system. On th...
B: Developing autonomous aerial systems from scratch is a challenging task that requires extensive expertise in many different areas, such as aerodynamics, control systems, sensor integration, or AI algorithms. This is a common problem in the robotics field, so in recent years the robotics community has witnessed the dev...
C: In 2018 Ebeid et al. presented a survey of open-source hardware and software comparing their main features [7]. In Table I some relevant flight controller projects are listed. These projects may cover both hardware and software development of these controllers. They range from Open Source Hardware (OSH) and Open Source...
D: This paper presents a novel open-source framework designed for the development of aerial robotic systems, with a strong focus on multi-robot orientation, platform independence, versatility, and modularity. These features have been validated through a series of experiments in both simulated and real-world scenarios, de...
label: C

context: In this paper, we have discussed whether or not LLMs can actually be deemed as creative; we started by considering Boden’s three criteria, i.e., value, novelty, and surprise. While LLMs are capable of value and a weak version of novelty and surprise, their inner autoregressive nature seems to prevent them from reaching...
A: In this paper, we have discussed whether or not LLMs can actually be deemed as creative; we started by considering Boden’s three criteria, i.e., value, novelty, and surprise. While LLMs are capable of value and a weak version of novelty and surprise, their inner autoregressive nature seems to prevent them from reaching...
B: LLMs might be able to generate creative products in the future. However, the fact that they will be able to generate these outputs will not make them intrinsically creative. Indeed, as Floridi and Chiriatti (2020) puts it, it is not what is achieved but how it is achieved that matters. An interesting definition that co...
C: In addition, we have also investigated the practical implications of LLMs and their creative role, considering both legal and societal impacts. In fact, the current legal framework does not appear to be completely suited to the fast-moving field of generative AI. Moreover, the impact of these technologies on creative ...
D: Nonetheless, the outputs from such models are often considered creative by the person interacting with them or exposed to their best productions. Though this is apparently in contrast with what was discussed above, we can explain this phenomenon by considering the fact that our perception does not usually align with t...
label: C

context: (a) Existing SFUDA object detection works utilize feature alignment or sample generation to help with the pseudo labeling. These approaches mainly focus on exploiting the source model. (b) Our proposed SUP-ICI utilizes instance-level contrastive learning (CL) to make use of the foreground-background semantic informatio...
A: Unsupervised domain adaptation (UDA) is a practical setting where the labeled source data are provided for adapting to the unlabeled target data. Most existing methods adopt feature alignment for UDA object detection. In [3], the authors build image-level and instance-level domain classifiers to implement feature align...
B: Source-free unsupervised domain adaptation (SFUDA) denotes the setting of adapting to the target domain given only a well-trained source model and unlabeled target data. One stream of the SFUDA methods is implicitly aligning the feature distribution of the source and target domain using the generative adversarial netwo...
C: Deep learning has achieved remarkable success in various object detection tasks. In the medical field, deep networks are able to reach clinical expert-level performance, e.g. pulmonary nodule detection [1, 2], etc. Nonetheless, these networks are usually domain-specific. In other words, they work well when the trainin...
D: However, medical data often involve private information, which makes them not shareable. Consequently, traditional UDA methods, which often rely on access to labeled source data, are not directly applicable in this context. Thus in this paper, we aim at the more realistic but challenging source-free unsupervised domain...
label: D

context: Next, we compare our method with the widely used K-means scenario reduction method. Specifically, we reduce a collection of scenarios (load realizations) into two reduced sets: one with 5 scenarios and the other with 2 scenarios. To generate load realizations, we use a Gaussian distribution centered around the in...
A: Compare the performance of using our method and the K-means scenario reduction method for the 118-bus system. The resulting average total costs and solving times across 100 test instances are compared against the benchmark solutions obtained from CVXPY.
B: Compare the performance of using our method and the affine policy method to solve risk-limiting dispatch on the 2000-bus system. The resulting total costs and solving times are averaged out over 1000 test instances and compared against the benchmark solutions obtained from CVXPY.
C: We assess the performance of our method and the K-means scenario reduction approach across different experimental configurations by comparing their average total costs and solving times across 100 test instances against the benchmark solutions obtained using CVXPY. The results are detailed in Table V. Particu...
D: The results of using different methods to solve the risk-limiting dispatch problem in (3) on the 118-bus system and the 2000-bus system are provided in Table II and Table III, respectively. The average total costs of different methods are compared against the cost values produced by CVXPY.
label: C

context: Conventionally, BNN researchers have focused on improving predictive performance using human-crafted priors over network parameters or predictive functions (e.g., Louizos et al., 2017; Tran et al., 2020; Matsubara et al., 2021; Fortuin et al., 2021a). However, several concerns have been raised with BNN priors (Wenzel e...
A: Figure 3: BNN Prior Predictives. We investigate prior predictives by computing the probability ρ that particular image pairs have the same label under the prior, and examining the distribution of ρ across different sets of image pairs. We consider three sets of differing semantic similarity:...
B: Graphical Evaluation. First, we visualise the BNN and self-supervised BNN prior predictive (Fig. 1 and 3). The standard BNN prior predictive reflects a belief that all three image pair groups are similarly likely to have the same label, and thus does not capture semantic information well. In contrast, the self-superv...
C: We then further demonstrate that self-supervised BNN prior predictives reflect input-pair semantic similarity better than normal BNN priors (§4). To do so, we develop a methodology to better understand the prior predictive distributions of BNNs. Our approach is to measure the probability of pairs of data points having...
D: Figure 1: Self-Supervised Bayesian Neural Networks. (a) Pre-training in self-supervised BNNs corresponds to unsupervised prior learning. We learn a model with a prior distribution such that augmented images likely have the same label and distinct images likely have different labels under the prior predictive. (b) Self...
label: D

context: The assumption in subsection 3.2 on μ_X is satisfied by any Borel measure on a variety of underlying metric spaces that arise in geometric data analysis, including doubling metric spaces, many Banach spaces, and any Hilbert space with the norm me...
A: Assumptions imposed in the geometric data analysis literature on the probability measure of interest (in the above example this corresponds to η̆_{t,T}) essentially provide a lower bound on the volum...
B: By subsection 3.2 and subsection 3.2, a natural way to formulate a test for time-invariance of the geometric features is via the corresponding ball volume processes of the shape descriptors. In addition, results in Section 2 show that we can express approximation errors in terms of the Gromov-Hausdorff distance d_GH...
C: Although it is possible to apply this characterization directly for inference, we can often use a more direct approach. Specifically, our second result establishes general conditions under which the geometric features of a stochastic process can in fact be fully characterized by the process of ball volumes (subsection ...
D: A consequence of the preceding theorems is that one can often characterize the geometric features as captured by shape descriptors of a Polish-valued process X via the ball volume processes of the fidis. This is convenient for hypothesis testing. For example, note that the ball volume corresponding to the fi...
label: D
