Columns: context (string, 250–4.99k chars), A (string, 250–5.11k), B (string, 250–3.8k), C (string, 250–8.2k), D (string, 250–3.9k), label (4 classes).
$$\tau(G)=\frac{1}{|V|}\prod_{i=2}^{|V|}\lambda_{i}=\frac{1}{2|E|}\prod_{i=1}^{|V|}d_{i}\prod_{i=2}^{|V|}\bar{\lambda}_{i}\cdot$$
Motivated by the success of the Wiener index, nearly four decades later, in 1993, Klein and Randić put forward a novel structure-descriptor topological index, known as the Kirchhoff index [11]. The Kirchhoff index of a graph $G$ is calculated using the formula $\operatorname{Kf}(G)=\frac{1}{2}\sum_{i=1}^{|V|}\sum_{j=1}^{|V|}r_{ij}$ ...
We obtain the formulae for the total count of spanning trees of $P_{n}$ and $P'_{n}$ using Theorem 4 as follows.
As both of the structural descriptors have important applications in graph theory, networking systems, molecular chemistry, and other related fields, researchers have taken a strong interest in the Kirchhoff indices and degree-Kirchhoff indices of graphs in recent years. Finding the explicit formulae for the Kirchhoff ...
The formulae of the degree-Kirchhoff index of linear hexagonal chains [9], hexagonal Möbius chains [16], linear crossed hexagonal chains [18], linear polyomino chains [8], cylinder phenylene chains [17], random polygonal chains [13], generalized phenylenes [25], cylinder, and Möbius octagonal chain [15], linear pentag...
C
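The two spectral formulas in this row (the matrix-tree count $\tau(G)$ and the Kirchhoff index via effective resistances) can be checked numerically from the Laplacian spectrum; a minimal sketch, using the known identity $\operatorname{Kf}(G)=n\sum_{i\geq 2}1/\lambda_i$ and the 4-cycle as an illustrative test graph (all function names here are invented for the sketch):

```python
import numpy as np

def laplacian_spectrum(adj):
    """Eigenvalues of the combinatorial Laplacian L = D - A, ascending."""
    L = np.diag(adj.sum(axis=1)) - adj
    return np.linalg.eigvalsh(L)

def spanning_trees(adj):
    """Matrix-tree theorem: tau(G) = (1/|V|) * product of nonzero Laplacian eigenvalues."""
    lam = laplacian_spectrum(adj)
    return round(np.prod(lam[1:]) / len(adj))

def kirchhoff_index(adj):
    """Kf(G) = (1/2) * sum_ij r_ij, computed via the spectral identity n * sum(1/lambda_i)."""
    lam = laplacian_spectrum(adj)
    return len(adj) * np.sum(1.0 / lam[1:])

# 4-cycle C4: Laplacian eigenvalues 0, 2, 2, 4 -> tau = (2*2*4)/4 = 4, Kf = 4*(1/2+1/2+1/4) = 5
C4 = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]])
print(spanning_trees(C4))             # 4 spanning trees in the 4-cycle
print(round(kirchhoff_index(C4), 6))  # 5.0
```

The hand-computed values for $C_4$ (four spanning trees; pairwise effective resistances summing to 5) agree with the spectral formulas.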
...$_{(c,r,p)\in\mathcal{I}}\,\rho(c,r,p)$, where $\rho(c,r,p)=\dfrac{\max_{\langle\tau,i\rangle\in\mathcal{A}(c,r,p)}U_{P}(\langle\tau,i\rangle)}{\cdots}$ ...
Action 6 cannot be implemented by a classic contract, with the half/half combination of actions 4 and 5 giving the same distribution over outcomes at a lower cost. Actions 2 and 3 have a negative expected welfare, and so will never be optimal for the principal. Actions 4 and 5 can both be implement...
Let $\gamma,\epsilon\in(0,1)$ and let $\delta=\epsilon\cdot\gamma^{n-2}$. Consider the parameterized instance $(c,r,p)$ ...
Consider the instance shown in Figure 6, where $r\geq 0$ is a parameter we will allow to vary. Actions 2 and 3 generate negative welfare, and hence only action 4 is capable (depending on $r$) of producing positive welfare. Welfare is given by
It is without loss of generality to restrict the principal to incentive compatible contracts. The idea is that an agent facing contract $\langle t,i\rangle$ will choose an action that maximizes her expected utility given $t$, and hence the principal might as well name such an acti...
C
Unfortunately, this approach only gives trivial bounds for subsampling queries. Consider the simplest type of subsampling query, which takes as input a single point and outputs one bit of information about it (meaning $\varphi\colon X^{1}\to\{0,1\}$) ...
Since the ALMOKL stability of subsampling queries with $w=1$ is $\approx 1/n$, this implies that for larger $w$, that stability is $\approx w/n$. Note that if we use ALOOKL stability, as in Section 2.1, we would desire the stability f...
Compare Theorem 1 to a naive approach which takes a fresh batch of $w$ samples for each query. Subsampling has a quadratically better dependence on the number of queries asked, $T$, than that naive approach, which requires $n\geq wT$. Based on the lower bounds in [H...
In contrast, Theorem 1 allows for $\approx n^{2}$ of these simple queries, which is optimal. To achieve this quadratic improvement over the DP-based bounds, we need a different approach.
Unfortunately, this approach only gives trivial bounds for subsampling queries. Consider the simplest type of subsampling query, which takes as input a single point and outputs one bit of information about it (meaning $\varphi\colon X^{1}\to\{0,1\}$) ...
C
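The subsampling mechanism discussed in this row can be sketched concretely: a query $\varphi\colon X^{w}\to\{0,1\}$ is answered by drawing a uniformly random size-$w$ subset of the $n$ held samples and evaluating $\varphi$ on it. The function and variable names below are illustrative, not from the paper:

```python
import random

def answer_subsampling_query(data, phi, w, rng=random):
    """Answer a subsampling query phi : X^w -> {0,1} by evaluating it
    on a uniformly random w-subset of the n held samples."""
    batch = rng.sample(data, w)
    return phi(batch)

# Simplest case w = 1: one bit of information about a single point (here, parity).
rng = random.Random(0)
data = list(range(100))
answers = [answer_subsampling_query(data, lambda b: b[0] % 2, 1, rng) for _ in range(10)]
print(answers)
```

Each call touches only $w$ of the $n$ samples, which is the source of the $\approx w/n$ per-query stability discussed above.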
A homomorphism $h\colon F\to G$ from a graph $F$ to a graph $G$ is a map $V(F)\to V(G)$ such that for all $uv\in E(F)$ it holds that $h(u)h(v)\in E(G)$ ...
A homomorphism $h\colon F\to G$ from a graph $F$ to a graph $G$ is a map $V(F)\to V(G)$ such that for all $uv\in E(F)$ it holds that $h(u)h(v)\in E(G)$ ...
Since the graphs $G$ and $H$ into which homomorphisms are counted are throughout assumed to be simple, looped graphs in $\mathcal{F}$ can generally be disregarded as they do not admit any homomorphisms into simple graphs.
Two graphs $G$ and $H$ are homomorphism indistinguishable over a family of graphs $\mathcal{F}$, in symbols $G\equiv_{\mathcal{F}}H$, if the number of homomorphisms from $F$ to $G$ equals the number of homomorphisms from $F$ to $H$ for every graph $F\in\mathcal{F}$.
The most basic bilabelled graphs, so-called atomic graphs, make their first appearance in Theorem 3.6. These graphs are used to reformulate Equations 12 and 7. The atomic graphs are also the graphs which the sets $\mathcal{L}_{t}$ and $\mathcal{L}_{t}^{+}$ ...
B
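The homomorphism-counting definitions in this row can be made concrete with a brute-force counter (exponential in $|V(F)|$, so purely illustrative; the helper names are made up):

```python
from itertools import product

def hom_count(F_vertices, F_edges, G_vertices, G_edges):
    """Count maps h : V(F) -> V(G) with h(u)h(v) in E(G) for every uv in E(F)."""
    G_adj = {(u, v) for u, v in G_edges} | {(v, u) for u, v in G_edges}
    count = 0
    for image in product(G_vertices, repeat=len(F_vertices)):
        h = dict(zip(F_vertices, image))
        if all((h[u], h[v]) in G_adj for u, v in F_edges):
            count += 1
    return count

# Homomorphisms from a single edge K2 into K3: any ordered pair of distinct vertices, 3*2 = 6.
K2 = ([0, 1], [(0, 1)])
K3 = ([0, 1, 2], [(0, 1), (1, 2), (0, 2)])
print(hom_count(*K2, *K3))  # 6
```

Homomorphism indistinguishability over a family $\mathcal{F}$ then amounts to such counts agreeing for every $F\in\mathcal{F}$.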
Sideways orientation is the most special condition among the three moving cases because it gives participants the sense that the robot is moving perpendicular to its original path. The head of the Spot points perpendicularly toward the participant's trajectory, so the Spot moves orthogonally to its orientation. Peopl...
In our work, we expect the quadruped robot to move with maximum mobility, walking in forward, backward, and sideways orientations to accomplish different tasks. The forward and static modes are quite general across all use cases, but backward and sideways moving orientations are also essential for certain scenarios...
Beyond the statistical results, we also made assumptions about potential factors in human-robot proxemics. The special 'standing' posture of canine robots could enlarge personal space, as the body language expresses a ready-to-move framing and suggests a chance of sudden movement. It also remains unclear how the sound...
Sideways orientation is the most special condition among the three moving cases because it gives participants the sense that the robot is moving perpendicular to its original path. The head of the Spot points perpendicularly toward the participant's trajectory, so the Spot moves orthogonally to its orientation. Peopl...
So the personal distances in sideways scenarios are expected to be longer than those in forward cases. This can also be observed in bar plots such as Figure 6. Similarly, we expect the backward condition to give participants a feeling of uncertainty about a potential change of direction. The other important factor...
D
Figure 7: Joint angular tracking using the MPPI method. For this simulation, $\theta_{1}^{\min}=42^{\circ}$ ...
A method to address the pseudo-inverse problem is the Levenberg-Marquardt Damped Least Squares (LMDLS) method. Wampler et al. [16] proposed an approach to determine the optimal damping factor, which balances the angular joint velocities with the tracking error. This approach involves finding the joint angular error ve...
This subsection explores the use of neural networks to find inverse kinematics (IK) solutions for the human leg. Shah et al. [22] applied deep artificial neural networks to solve the IK problem for a 5-axis serial link robotic manipulator. They achieved an accuracy where the deviation between the end effector and the ...
Regarding the Levenberg-Marquardt Damped Least Squares (LMDLS) technique, the simulation results are shown in Figures 9 and 10. This method incurs a high computational cost, primarily due to the complexity of the human leg’s structure. Figure 11 displays the angular joint values obtained using the optimization algorit...
The results of the inverse kinematics simulation for the lower limbs using the Cyclic Coordinate Descent (CCD) method are shown in Figure 5. To determine the position error illustrated in Figure 6, we used these results along with the forward kinematics described in Equation (5) to generate the end effector trajectory...
C
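The CCD idea referenced in this row can be illustrated on a toy 2-link planar arm rather than the 7-DOF leg (link lengths, target, and names are all invented for the sketch): each pass rotates one joint at a time so the end effector swings toward the target.

```python
import math

def fk(thetas, lengths):
    """Planar forward kinematics: positions of every joint plus the end effector."""
    pts, x, y, a = [(0.0, 0.0)], 0.0, 0.0, 0.0
    for t, l in zip(thetas, lengths):
        a += t
        x, y = x + l * math.cos(a), y + l * math.sin(a)
        pts.append((x, y))
    return pts

def ccd_ik(target, thetas, lengths, iters=100):
    """Cyclic Coordinate Descent: from the last joint back to the first, rotate each
    joint so the end effector moves toward the target."""
    tx, ty = target
    for _ in range(iters):
        for i in reversed(range(len(thetas))):
            pts = fk(thetas, lengths)
            jx, jy = pts[i]
            ex, ey = pts[-1]
            # rotate joint i to align (joint -> effector) with (joint -> target)
            thetas[i] += math.atan2(ty - jy, tx - jx) - math.atan2(ey - jy, ex - jx)
    return thetas

thetas = ccd_ik((1.0, 1.0), [0.1, 0.1], [1.0, 1.0])
ex, ey = fk(thetas, [1.0, 1.0])[-1]
print(math.hypot(ex - 1.0, ey - 1.0))  # residual position error, near zero for a reachable target
```

The per-joint update is purely geometric, which is why CCD needs no Jacobian or damping factor, at the cost of slower convergence on long chains.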
A property of digraphs is a set of finite digraphs closed under isomorphism. A digraph $G$ is $\varepsilon$-far from having a property $\Phi$ if any digraph $G'$ on the vertex set $V(G)$ that d...
The relationship between local and global properties of structures is a central theme in combinatorics and computer science. Since the work of Rubinfeld and Sudan [25], testing properties by sampling a small number of elements has been an emerging research area. A classical result of this kind is the triangle removal lemma ...
Alon and Shapira [4] proved that every monotone property of undirected graphs (that is, closed under the removal of edges and vertices) is strongly testable, see Lovász and Szegedy for an analytic approach [22], while Rödl and Schacht generalized this to hypergraphs [23], see also Austin and Tao [8]. Similar results ha...
The goal of this paper is to study testability of finite posets as special digraphs. By a poset, we mean a set equipped with a partial order $\prec$ that is anti-reflexive and transitive. Alon, Ben-Eliezer and Fischer [1] proved that hereditary (closed under induced subgraphs) classes of ordered graphs are stro...
Unfortunately, the dependence on $\varepsilon$ can be quite bad already in the case of undirected graphs: the known upper bounds in the Alon-Shapira theorem are wowzer functions due to the iterated involvement of Szemerédi's regularity lemma. Following Alon and Fox [7], we call a property easily testable if f...
B
A simple approach would perform these updates by a complete recomputation of the KG from the changed input data, similar to the creation of the initial KG. However, such an approach would result in an enormous amount of redundant computation to repeatedly extract and transform the same (unchanged) data and to perf...
A benchmark could be based on settings similar to those for the creation of specific KGs discussed in Section 4, aiming at the initial construction and incremental update of either a domain-specific or cross-domain KG from a defined set of data sources of different kinds. The KG ontology and the KG data model (RDF or proper...
While the usage of a uniform KG data model (or serialization) can lower the debugging complexity of the workflow, reusing existing toolsets might require transformation/mapping between data formats and processing steps. Moreover, a pipeline tool should be provided that can integrate the different tools and manage inte...
Pipeline and Tool Requirements. It should be easy to define and run powerful, efficient, and scalable pipelines for creating and incrementally updating a KG. This requires a set of suitable methods or tools for the different steps (discussed in the next section) that should have good interoperability, and a good degree...
Our study of existing solutions for KG construction showed that there are many different approaches not only for building specific KGs but also in the current toolsets. This underlines the inherent complexity of the problem and the dependency on different criteria such as the major kinds of input data and the intended ...
C
Initially, the set of waypoints required to generate a trajectory in task space is assumed to be available from the task planner. The trajectory is then planned using the minimum jerk criterion to ensure smooth acceleration of the joints, thereby reducing vibrations and avoiding resonance frequencies. For this simulati...
In this section, the forward kinematics (FK) of the lower limbs, depicted in Figure 1, is established using dual quaternions. FK involves computing the positions and orientations of the end-effector in task space from the axes and angles of the joint rotations. The lower limb is decomposed into four segments: the pelv...
In this paper, the dual quaternion-based theory is applied to the kinematics and dynamics study of the 7-DOF human lower limbs in 3D space. Subsequently, the artificial neural networks method is used to solve the inverse kinematics problem. The efficiency of the artificial neural networks method is verified using the j...
This section focuses on the dynamic description of the lower limb shown in Figure 2 using the Dual Quaternion-based recursive Newton-Euler method. This method involves calculating the velocities and accelerations of the center of mass of each link, known as twists, based on the positions, velocities, and accelerations...
Afterward, the inverse kinematics (IK) of the lower limb is computed using a multi-layer perceptron trained with the Levenberg-Marquardt backpropagation algorithm, utilizing a dataset of 400,000 samples. The network architecture is illustrated in Figure 4, featuring a two-layer feed-forward structure comprising a hidde...
D
Note that according to our theory, we could use the same random batches $\mathcal{B}_{g},\mathcal{B}_{h}\subseteq[n]$ generated once for a...
to the gradient-dominated functions, for the advanced helpers. In this case, we set the snapshot (line 3 in Algorithm 1) as in (5), i.e., the snapshot corresponds to the state with the smallest value of $f$ during the last $m$ iterations.
If the objective $f$ is such that we can afford to access its gradients and Hessians from time to time (functions of the form (1) with $n<\infty$ being "reasonable"), then we can do better than the previous chapter. In this case, we can use a better approximation of the term $f(\boldsymbol{y})-h(\boldsymbol{y})$...
Core sets. (Bachem et al., 2017) The idea of core sets is simple: can we summarize a potentially large data set using only a few (potentially weighted) important examples? Many reasons, such as redundancy, make the answer yes. Devising approaches to find such core sets is outside the scope of this work. Formally, fo...
where $j_{i}\in\{1,\ldots,m\}$ is the index of the corresponding core representative and $\varepsilon_{i}(\boldsymbol{x})$ is som...
C
Our results show that deploying an IRS not only improves the throughput of the operator who controls the IRS phase configuration to optimally serve its own users, but also enhances the throughput of users associated with an OOB operator who has no control over the IRS, albeit by a smaller amount compared to the in-band...
In this paper, we analyzed the effect of deploying an IRS on the performance of an OOB operator that has no control over the IRS. We showed that while the IRS optimally serves the in-band UEs, it simultaneously, and at no additional cost, enhances the quality of the channels of the OOB UEs. This enhancement in the OOB ...
In practice, multiple network operators co-exist in a given geographical area, each operating in a different frequency band. As a consequence, at a given point in time, multiple UEs are served by different operators in the system. In such a scenario, if an IRS is optimized to cater to the needs of one of the operators, i...
We consider two mobile network operators X and Y who provide service to $K$ and $Q$ UEs, respectively. The UEs are arbitrarily distributed over a single cell covering the same geographical area, and operators X and Y use non-overlapping frequency bands.
The base stations (BSs) of operators X and Y (referred to as BS-X and BS-Y, respectively) and the UEs are equipped with a single antenna, and all the channels in the system undergo frequency-flat fading. Extension to general cases with multiple antennas and frequency-selective channels does not change the main messa...
C
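The phase-configuration gain discussed in this row is easy to sketch for a single-antenna link: the effective channel is the direct path plus the IRS cascade, and co-phasing every reflected path with the direct link maximizes its magnitude. All channel realizations and names below are synthetic, for illustration only:

```python
import cmath
import random

def effective_channel(h_direct, h_bs_irs, h_irs_ue, phases):
    """Effective SISO channel: direct path plus the IRS cascade with per-element phase shifts."""
    return h_direct + sum(g * cmath.exp(1j * p) * f
                          for g, f, p in zip(h_bs_irs, h_irs_ue, phases))

rng = random.Random(1)
N = 64  # number of IRS elements (arbitrary for the sketch)
h_d = complex(rng.gauss(0, 1), rng.gauss(0, 1))
g = [complex(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(N)]
f = [complex(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(N)]

# Optimal configuration: co-phase every cascaded path with the direct link.
opt = [cmath.phase(h_d) - cmath.phase(gi * fi) for gi, fi in zip(g, f)]
rand = [rng.uniform(0, 2 * cmath.pi) for _ in range(N)]
print(abs(effective_channel(h_d, g, f, opt)) > abs(effective_channel(h_d, g, f, rand)))  # True
```

An OOB UE sees the IRS with phases optimized for someone else's channel, i.e., effectively the random-phase case, which is why its gain is positive but smaller.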
In particular, all interaction concepts can be further categorized into a set of salient concepts $S\in\Omega_{\boldsymbol{x}}$ with considerable effects $I(S|\boldsymbol{x})$ ...
Figure 1: Visualization of interaction concepts $S$ extracted by PointNet on different samples in the ShapeNet dataset. The histograms show the distribution of interaction effects $I(S|\boldsymbol{x})$ over samples in the "motorbike" category, where $S$...
Note that a sample in the ShapeNet dataset (Yi et al., 2016) usually contains 2500 3D points. To simplify the visualization, we consider 8-10 semantic parts on the point cloud $\boldsymbol{x}$, which have been provided by the dataset. For example, the ShapeNet dataset has provided the annotated p...
Fig. 1 shows interaction concepts $S$ and the corresponding effects $I(S|\boldsymbol{x})$ extracted by PointNet (Qi et al., 2017a) from different samples $\boldsymbol{x}$ in the ShapeNet dataset. We find that the interaction concept $S=\{\ldots\}$ ...
We trained AlexNet, ResNet-18, and VGG-13 on both the original dataset and the modified dataset. Compared with DNNs learned on the original dataset, DNNs learned on the modified dataset were more likely to simply use the color information for classification.
B
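The interaction effect $I(S|\boldsymbol{x})$ used throughout this row is commonly computed as a Harsanyi dividend, $I(S)=\sum_{T\subseteq S}(-1)^{|S|-|T|}v(T)$, where $v(T)$ is the model output with only the variables in $T$ present. A toy sketch with an invented model $v$ (not the paper's networks):

```python
from itertools import chain, combinations

def subsets(s):
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def interaction_effect(S, v):
    """Harsanyi dividend I(S) = sum over T subset of S of (-1)^(|S|-|T|) v(T)."""
    return sum((-1) ** (len(S) - len(T)) * v(frozenset(T)) for T in subsets(S))

# Toy model on 3 variables: additive effects plus one genuine pairwise interaction {0, 1}.
def v(T):
    out = sum(2.0 for i in T)   # additive per-variable effect
    if {0, 1} <= T:
        out += 5.0              # interaction between variables 0 and 1
    return out

print(interaction_effect((0, 1), v))  # 5.0  (the pairwise interaction is recovered)
print(interaction_effect((0, 2), v))  # 0.0  (no interaction between 0 and 2)
```

The inclusion-exclusion sum isolates exactly the non-additive part attributable to $S$, which is what makes salient concepts with large $|I(S|\boldsymbol{x})|$ meaningful.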
Before the analytic explanation of the generalization power of a DNN, let us first experimentally verify that, compared to high-order interactive concepts, low-order interactive concepts are more likely to have a distribution in training samples similar to their distribution in testing samples.
Interactive concepts vs. cognitive concepts and other interaction metrics. Although the Harsanyi interactive concept seems partially aligned with humans’ cognition to some extent (Cheng et al. 2021b), we do not think such interactive concepts exactly fit humans’ cognition. More crucially, the mathematical generalizati...
To this end, previous studies used the gap of the loss (Neyshabur et al. 2017; Bousquet, Klochkov, and Zhivotovskiy 2020; Deng, He, and Su 2021; Haghifam et al. 2020, 2021) or the smoothness of the loss landscape (Keskar et al. 2016; Li et al. 2018; Foret et al. 2021; Kwon et al. 2021) to investigate the generalization...
Nowadays, the essence of the superior generalization power of a DNN is still unclear. People usually explain DNNs via the flatness of the loss landscape (Keskar et al. 2016) and theoretical bounds for the generalization (Dziugaite and Roy 2017; Neyshabur, Tomioka, and Srebro 2015), or by proposing new metrics for th...
(1) Li and Zhang (2023) and Ren et al. (2023) discovered, and Ren et al. (2024) mathematically proved, that a well-trained DNN usually encodes just a few interactions between different input variables. (2) We can use these interactions to explain the output of a DNN.
B
We conducted experiments with MNIST, the most commonly used dataset in the field of image classification. We used the MNIST dataset from torchvision and a 2-layer network optimized using SGD with a batch size of 64, a learning rate of 0.001, and a momentum of 0.5, with a random seed of 10. We confirmed that ou...
ReLU (Rectified Linear Units) is mainly used in the field of vision classification. In classification, ordinarily more layers are better in deep learning. On datasets such as MNIST, even simple CNNs with three layers can achieve high classification performance. However, for more challenging datasets such as CIFAR10, s...
We conducted an experiment on CIFAR10, which is a more challenging dataset than MNIST in the classification field. ResNet18, optimized using SGD with a batch size of 32, a learning rate of 0.001, and a momentum of 0.9, was used for the experiment with a random seed of 10. Our activation function converges rapidly with respect to...
We conducted experiments with MNIST, the most commonly used dataset in the field of image classification. We used the MNIST dataset from torchvision and a 2-layer network optimized using SGD with a batch size of 64, a learning rate of 0.001, and a momentum of 0.5, with a random seed of 10. We confirmed that ou...
The robust property of MoLU is that it approaches the minimum of a loss function rapidly without losing stability. This is a truly useful characteristic when training on long time-series data using NeuralODEs (Neural Ordinary Differential Equations). To prove the performance of MoLU, we conducted an experiment on Neu...
B
In our case, as the worst-case delay to generate a child is the same as that of generating all children, we do not even need to argue that we can resume the enumeration from the $i$-th child as long as we are interested in the asymptotic delay, though this can be done to speed up the implementation. As for deciding ...
By the same arguments, it is easily seen that we can compute the solution that comes right before any given solution $C$ within the same time and space. In other words, our implementation of reverse search can be made so that it only uses the memory of the current node during the DFS of the solution tree, with...
We note that the general framework of reverse search [1], equipped with the alternating output technique [26], yields a natural polynomial-time algorithm to produce the solution that comes after any given solution $C$ in the enumeration, provided that we are able to decide in polynomial time whether $C$...
In our case, as the worst-case delay to generate a child is the same as that of generating all children, we do not even need to argue that we can resume the enumeration from the $i$-th child as long as we are interested in the asymptotic delay, though this can be done to speed up the implementation. As for deciding ...
Thus, in general, reverse search only needs memory space that is linear in the height of the solution tree times the space needed to generate children. As for the delay, it only depends on the time needed to compute the parent and the delay needed to generate children when using folklore tricks such as the alternating o...
A
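A minimal instance of the reverse-search framework described in this row: enumerate all subsets of $\{0,\ldots,n-1\}$ by DFS over the solution tree induced by the parent rule "remove the largest element", generating children on demand and keeping only a DFS stack. The concrete parent/child rules are illustrative, not those of the paper:

```python
def reverse_search_subsets(n):
    """DFS over the solution tree rooted at the empty set.
    parent(S) = S minus its largest element; children(S) = S + (j,) for j > max(S)."""
    def children(S):
        start = (S[-1] + 1) if S else 0
        return [S + (j,) for j in range(start, n)]

    stack = [()]   # memory: only the DFS frontier, never the full solution list
    out = []
    while stack:
        S = stack.pop()
        out.append(S)
        stack.extend(children(S))
    return out

sols = reverse_search_subsets(3)
print(len(sols))  # 8 = 2^3 subsets, each output exactly once
```

Because every nonempty solution has a unique parent, the tree covers the solution space without duplicates, and the delay per output is bounded by the cost of generating children.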
The primary goal of this article is to provide an answer to this question by (1) providing error analysis and rates for a broad abstract class of measure-transport algorithms, which translate to the accuracy of the resulting sampling procedure described above; and (2) showing that many algorithms, including well-known ...
In considering measure transport algorithms, our primary motivating application is sampling. Measure transport is an emerging approach to sampling, where perhaps the most popular alternatives are Monte Carlo methods [82], which include Markov chain Monte Carlo (MCMC) and sequential Monte Carlo (SMC) algorithms. In gen...
This article presents a general framework for analyzing the approximation error of measure-transport approaches to probabilistic modeling. The approximation of high-dimensional probability measures is a fundamental problem in statistics, data science, and uncertainty quantification. Broadly speaking, probability measur...
The primary goal of this article is to provide an answer to this question by (1) providing error analysis and rates for a broad abstract class of measure-transport algorithms, which translate to the accuracy of the resulting sampling procedure described above; and (2) showing that many algorithms, including well-known ...
Unfortunately, this is still not a feasible optimization problem: it requires that in each iteration we compute the inverse CDF of $S_{\sharp}\eta$. For non-invertible maps $S$, the CDF is not available in closed form and so it must be e...
A
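The sampling recipe behind this row (push a simple reference measure through a transport map) is easiest to see in one dimension, where the increasing transport map from reference $\eta$ to target $\nu$ is $T=F_{\nu}^{-1}\circ F_{\eta}$. A sketch for a uniform reference and an exponential target, with the rate chosen arbitrarily:

```python
import math
import random

def transport_uniform_to_exponential(u, rate):
    """1-D increasing transport map on Uniform(0,1) samples:
    the inverse CDF of Exp(rate) applied to u."""
    return -math.log(1.0 - u) / rate

rng = random.Random(0)
samples = [transport_uniform_to_exponential(rng.random(), rate=2.0) for _ in range(100000)]
print(round(sum(samples) / len(samples), 2))  # close to the Exp(2) mean 1/2
```

For non-invertible or high-dimensional maps $S$ this inverse-CDF route is exactly what breaks down, which is the difficulty the row's optimization discussion addresses.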
The Hume-Reaction dataset was used as part of both the Emotional Reactions Sub-Challenge of MuSe 2022 [6] and the Emotional Reaction Intensity Estimation Challenge of the 5th ABAW Competition in 2023 [40, 18, 41, 42, 33, 27, 29, 30, 19, 20, 35, 25, 21, 36, 31, 22, 37, 34, 26, 1, 23]. The participants of this subchallen...
For training the MMA, we utilize multiple in-the-wild datasets, including Aff-Wild2 [38, 40, 20, 36, 22, 37, 34, 28, 57, 18, 33, 41, 32, 42], AffectNet [49], and EmotioNet [5], which are annotated for valence-arousal, 7 basic expressions, and 17 action units (these action units are an aggregate in all datasets).
Human emotions are complex, conscious experiences that profoundly influence behavior and can be expressed in various forms. These emotions are pivotal in psychological processes and significantly impact human actions. The advent of Artificial Intelligence (AI) and Deep Learning (DL) has driven the development of intell...
The Hume-Reaction dataset was used as part of both the Emotional Reactions Sub-Challenge of MuSe 2022 [6] and the Emotional Reaction Intensity Estimation Challenge of the 5th ABAW Competition in 2023 [40, 18, 41, 42, 33, 27, 29, 30, 19, 20, 35, 25, 21, 36, 31, 22, 37, 34, 26, 1, 23]. The participants of this subchallen...
The Aff-Wild2 database [38, 40, 20, 36, 22, 37, 34, 28, 57, 18, 33, 41, 32, 42, 43] is the largest in-the-wild database and the only one annotated on a per-frame basis for the seven basic expressions (i.e., happiness, surprise, anger, disgust, fear, sadness and the neutral state), twelve action units (AUs 1,2,4,6...
D
A conformity function is a mapping $\rho\colon\mathbb{R}^{d}\times\mathscr{Y}\times\Omega\to\mathbb{R}$ such that $\rho(x,y)=\rho(x,y,\cdot)$ ...
Conformity functions are agnostic to the choice of the specific models or algorithms used to construct $\hat{\mu}$, $\hat{\xi}_{p}$, and $\hat{\pi}$ in Exa...
Under the data exchangeability assumption, the sequence of conformity scores $\{S_{i}\}_{i\geq 1}$ is exchangeable.
In general, the coverage indicators $Z_{i}$ are dependent random variables, since for all future observables the corresponding conformal prediction sets in Definition 3 are defined in terms of the same calibration sample conformity score $S_{(\lceil(1-\alpha)(n+1)\rceil)}$...
Note that the regularity of a specific conformity function $\rho$ is contextual, being inherently dependent on the distribution of the underlying data sequence. Technically, we can always avoid ties among the sequence of conformity scores almost surely by introducing a properly constructed ancillary tie-break...
D
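The exchangeability of the conformity scores in this row is what drives the usual split-conformal guarantee: keep all $y$ whose score is at most the $\lceil(1-\alpha)(n+1)\rceil$-th smallest calibration score. A sketch with absolute-residual scores; the point prediction `mu_hat_x` and the score values are stand-ins, not from the paper:

```python
import math

def conformal_interval(scores, mu_hat_x, alpha=0.1):
    """Split conformal interval mu_hat(x) +/- q, where q is the
    ceil((1 - alpha) * (n + 1))-th smallest calibration score."""
    n = len(scores)
    k = math.ceil((1 - alpha) * (n + 1))   # order-statistic index from the text
    q = sorted(scores)[min(k, n) - 1]
    return mu_hat_x - q, mu_hat_x + q

# Calibration scores S_i = |Y_i - mu_hat(X_i)| for 19 calibration points.
scores = [abs(r) for r in [-1.2, 0.3, 2.1, -0.4, 0.9, 1.5, -2.8, 0.1, 0.6, -1.1,
                           0.2, 1.9, -0.7, 0.5, -1.6, 2.4, -0.8, 1.0, 0.45]]
lo, hi = conformal_interval(scores, mu_hat_x=5.0, alpha=0.1)
print(lo, hi)
```

With $n=19$ and $\alpha=0.1$, the index is $\lceil 0.9\cdot 20\rceil=18$, so the half-width is the 18th smallest score; exchangeability makes the marginal coverage at least $1-\alpha$.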
We report the recognition performance of EVSTr and state-of-the-art methods in Table IV. For the converted datasets UCF101-DVS and HMDB51-DVS, our method achieves the best accuracy among point-based methods and has competitive performance compared to heavyweight frame-based models. On three real-world datasets, the re...
The model complexity of different action recognition methods on the DailyAction dataset is summarized in Table V, which includes the number of trainable parameters, the number of MACs, and the inference throughput measured at a batch size of 1. VMV-GCN has the lowest computational complexity and maximum throughput amo...
Table II lists the model complexity of different methods on object classification. We evaluate the model complexity comprehensively by three metrics: the number of trainable parameters, the number of multiply–accumulate operations (MACs), and average inference time. Specifically, we report the number of parameters and ...
Compared to point-based counterparts, our model outperforms state-of-the-art methods and gains a notable improvement (a 1.9% increase) over the second-best method on the challenging N-Caltech101 dataset, demonstrating the effectiveness of EVSTr. As shown in Fig. 5, we further provide the visualization of feature repr...
We report the recognition performance of EVSTr and state-of-the-art methods in Table IV. For the converted datasets UCF101-DVS and HMDB51-DVS, our method achieves the best accuracy among point-based methods and has competitive performance compared to heavyweight frame-based models. On three real-world datasets, the re...
A
The last experiment evaluates the grasp success rate, i.e., the percentage of successful grasps. To sample grasp proposals, GraspIt! [45] was used. To check the quality of each grasp, the objects were picked and moved 10 cm...
In general, we can say that 5 touches are enough for a sufficient grasp success rate. For comparison, the maximum success rate achieved in the baseline [1] was 77.8%. It is also worth mentioning that the baseline result was achieved after a time comparable to touch number 12 in our results.
There is already a difference between 0 and 1 touches. The success rate increased from 63.3% to 70.4%. Maximum success was achieved using reconstructions after 10 touches. However, the difference between 5 and 15 touches is only 2.5% (82.7% vs. 85.2%).
The last experiment evaluates the grasp success rate, i.e., the percentage of successful grasps. To sample grasp proposals, GraspIt! [45] was used. To check the quality of each grasp, the objects were picked and moved 10 cm...
is more "conservative", but maintains a steady performance gain. The relative increase in performance in the time when 'Act-VH' completed 5 touches (approximately touch 10 in our method) is 7.7% in JS and 21.3% in CD. After the last touch, VISHAC is better than 'Act-VH – new data' with a relative difference of 15.5% in...
B
If $\mathcal{R}$ is either a $c\omega$-relation clone on a finite set or a strong $c\omega$-relation clone on a countable set, then $\mathcal{R}=\mathrm{Inv}_{c}^{\omega}(\mathrm{Pol}^{\omega}\,\mathcal{R})$...
In this paper, we examine various concepts of clones of operations and relations on a given base set $A$. Unlike classic clone theory, which limits the arities of functions and relations to be finite, our study allows for arity $\omega$ for both operations and relations. Additionally, there are no re...
Let $O_A$ be the set of all finitary operations on $A$ and $O_A^{(\omega)}$ be the set of all $\omega$-...
In the infinitary approach to polymorphisms and invariant relations, topology plays an important role also in the $\omega$-relations. The framework we develop in this paper enables us to define, given an ideal $X$ on $A^{\omega}$...
In this paper, we adopt the topological approach as we primarily focus on $\omega$-operations and relations of arity $\omega$, referred to as $\omega$-relations. We present a method for defining topologies on sets of functions. The key idea is to choose a Boolean ideal $X$ of subsets...
We investigate the complexity of the following three variants of $k$-Diverse Minimum s-t Cuts: (i) Sum $k$-Diverse Minimum s-t Cuts (Sum-$k$-DMC), (ii) Cover $k$-Diverse Minimum s-t Cuts (Cov-$k$-DMC), and (iii) Min $k$-Diverse Minimum s-t Cuts (Min-$k$-DMC...
Contrary to the hardness of finding diverse global mincuts in a graph [HKK+22], in Section 3 we show that both Sum-$k$-DMC and Cov-$k$-DMC can be solved in polynomial time. We show this via a reduction to the submodular function minimization problem (SFM) on a lattice, which is known to be solvable i...
We are now ready to show that both SUM-$k$-DMC and COV-$k$-DMC can be reduced to SFM. We first show that the poset $L^{*}=(U_{\mathrm{lr}}^{k},\preceq)$...
In contrast to the polynomial-time algorithms of the previous sections, here we show that $k$-DMC is NP-hard when considering $d_{\mathrm{min}}$ as the diversity measure. We called this variant Min-$k$-DMC in Section 1. The hardne...
This section is devoted to proving Theorem 1.1 by reducing SUM-$k$-DMC and COV-$k$-DMC to SFM on distributive lattices. First, we show that the domain of solutions of SUM-$k$-DMC and COV-$k$-DMC can be restricted to the set of $k$-tuples that satisfy a particular order, as o...
We prove conservation of probabilities over successively general spaces. This includes finite sequences, infinite sequences, and $T_0$, second countable topologies. Conservation of probabilities over the case of finite and infinite sequences follows directly...
We also show how to lower bound information between probabilities over general spaces with information between probabilities over finite sequences using uniformly enumerable disjoint open sets. We provide a means to upper bound the probabilities between general spaces using computable non-probabilistic measure covers...
For probability measures $p$, $q$ over finite sequences, infinite sequences, or general spaces, if $p$ is computable, then $\mathbf{I}(p:q)<^{+}\mathbf{K}(p)$.
The average information between probability measures is small, less than the complexity of the averaging. This is true in the discrete and continuous case. For the discrete case, an enumerable sequence of uniformly computable probability measures over a general space is a sequence of measures $\{\mu_i\}$...
This experiment is motivated by the interpretability of context normalization parameters of each context obtained after training. To illustrate this, we construct the American night with context normalization on the VERI-Wild dataset. American night [26] is a set of cinematic techniques used to simulate a night scene ...
We use context normalization on the backbone of a siamese network (ViT architecture) with contrastive loss [27] to estimate the similarity between images. The aim of this experiment is to reveal the behavior of the parameters learned by the model and to understand how context information has an influence on the normali...
CN transform is a differentiable operation in deep neural networks that normalizes input data. By applying CN, the model can continuously learn from input distributions and adapt its representations to the target task, leading to improved performance. This normalization helps mitigate the influence of variations in in...
CN-Channels is designed to be applied directly to images. The parameters $\mu$ and $\sigma$ are vectors whose size equals the number of channels. They are learned independently for each context by using context identifiers. CN-Channels incorporates the context identifier into the image normalization pro...
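As a concrete sketch of this idea, per-context channel normalization can be realized as a lookup of one $(\mu,\sigma)$ vector pair per context identifier, broadcast over the spatial dimensions. In the method itself these parameters are learned during training; here they are simply stored arrays, and all names are illustrative rather than the paper's implementation.

```python
import numpy as np

# Hypothetical sketch of CN-Channels: one (mu, sigma) vector of size
# num_channels per context, selected via the context identifier.
class ContextNormChannels:
    def __init__(self, num_contexts, num_channels):
        self.mu = np.zeros((num_contexts, num_channels))
        self.sigma = np.ones((num_contexts, num_channels))

    def __call__(self, images, context_ids):
        # images: (batch, H, W, C); context_ids: (batch,)
        mu = self.mu[context_ids][:, None, None, :]      # broadcast over H, W
        sigma = self.sigma[context_ids][:, None, None, :]
        return (images - mu) / (sigma + 1e-5)

cn = ContextNormChannels(num_contexts=2, num_channels=3)
cn.mu[1] = np.array([0.5, 0.5, 0.5])   # pretend context 1's mean was learned
out = cn(np.ones((4, 8, 8, 3)), np.array([0, 1, 0, 1]))
```

Each image in the batch is thus normalized with the statistics of its own context, which is what allows the learned parameters to be inspected per context afterwards.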
Different normalization techniques, including activation normalization, weight normalization, and gradient normalization, are employed to enhance the training performance of DNNs. To normalize activations, the most common technique is Batch Normalization (BN) [4]. BN has been proposed to solve the problem caused by the...
Using the semantics of foreground objects only to detect OOD samples can often be successful when the OOD samples have some dominant semantics that are different from the ID images. However, approaches of this type would fail to work effectively when the OOD samples do not have clear object semantics and/or exhibit some si...
This paper considers the importance of disentangling foreground and background features in OOD detection and proposes to leverage background features to enhance the OOD detection methods that are based on foreground features. To this end, we introduce a novel generic framework, called DFB, that can Disentangle the Fore...
Figure 2. Overview of our proposed framework. It first uses a trained $K$-class classification network to obtain pseudo semantic segmentation masks and then learns the in-distribution features by training a $(K+1)$-class classification network with the pseudo labels (Left). It lastly conv...
DFB aims to learn distinct representations of foreground and background information in images, while also considering them as in-distribution features. The key challenge here is how to locate these background features and separate them from the foreground features. We introduce a weakly-supervised dense prediction meth...
It then seamlessly integrates the foreground and background features into image classification models by transforming the dense prediction network to a $(K+1)$-class classification network, where the prediction entries of the $K$ classes are focused on the class semantics of the $K$...
A DM is a form of generative model which has recently been shown to generate high-quality samples, outperforming GANs [35, 13, 36, 37]. DMs iteratively remove small perturbations of noise, typically starting with a sample from an isotropic Gaussian distribution, until they generate a clean data sample. In this way, the...
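The iterative denoising loop described above can be sketched in a few lines. This is a toy DDPM-style reverse process under stated assumptions: the "model" is a stand-in callable that predicts the noise component (in a real DM it is a trained network), and the schedule values are arbitrary illustrative choices.

```python
import numpy as np

# Toy sketch of a reverse (denoising) diffusion loop: start from an
# isotropic Gaussian sample and repeatedly remove predicted noise.
def reverse_diffusion(predict_noise, shape, betas, rng):
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    x = rng.standard_normal(shape)            # isotropic Gaussian start
    for t in range(len(betas) - 1, -1, -1):
        eps = predict_noise(x, t)             # predicted perturbation
        # mean of the reverse transition (DDPM-style update)
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:                             # re-inject a small noise term
            x += np.sqrt(betas[t]) * rng.standard_normal(shape)
    return x

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 50)
sample = reverse_diffusion(lambda x, t: x, (4,), betas, rng)
```

The stand-in `predict_noise` here is purely for shape-checking the loop; the structure is what matters: each step removes a small perturbation until a clean sample remains.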
In particular, we use DMs to capture the relationship between under-exposed and normally-exposed images. Other work has shown that DMs may be used as backbone feature extractors which predict features based on noisy inputs [41, 42]. Furthermore, the task of denoising itself has been shown to assist with seemingly unrel...
In this paper, we present a framework for post-processing images which have undergone low-light image enhancement. The enhancement of low-light images often reveals a variety of degradations which are hidden in the dark, and thus a need for post-processing is introduced. Furthermore, each low-light enhancement techniqu...
LPDM proposed in this study also models the conditional distribution between low-light and normally-exposed images; however, we use the diffusion paradigm to achieve this. Furthermore, we repurpose the function of a DM to be used as a noise detector. Therefore, LPDM provides a subtractable estimation of the noise in a...
The experiment. We conducted an online experiment in which we hired 445 participants (397 after removing those who did not finish the experiment or did not pass the attention check, and after dropping data with recording errors) from the United States on the online crowdsourcing platform Prolific (https:/...
Encoding the participants’ answers to the questions in the survey (see encoding details in the Supplementary Material), we end up with 18 control variables characterizing the participants by the six groups of questions specified above, 5 control variables indicating the game, game type (Speed-race or Least-clicks), rou...
To gain a better understanding of how navigation on the knowledge network is affected by individual characteristics, we conducted an online experiment where we hired 445 participants from the US to play nine rounds of Wikipedia navigation games (illustration in Fig. 1) and to fill in a survey afterwards about their pe...
Figure 1: Illustration of the Wikipedia navigation game. In the Wikipedia navigation game, players need to go from one Wikipedia article (source page) to another (target page) through the links of other Wikipedia articles on the current page in 7 steps (Least-clicks game) or 150 seconds (Speed-race game). The figure s...
In the game sessions, players are given two Wikipedia pages as the source and the target in each game. To reduce the disparities in prior knowledge among the participants, the source and target pages are chosen to be similarly distanced (2 or 3 steps away on the Wikipedia network) pages about renowned individuals from ...
Most of the parameterizations used by Lamarr rely on machine learning algorithms that we can split into two main classes. The first class of models uses Gradient Boosted Decision Trees (GBDT) to parameterize efficiencies learning the fraction of candidates that are in acceptance, that have been successfully reconstruct...
The simulation of high-energy collisions, of the decays of the generated particles, and of the physics processes induced within the detector by the decay products is a key necessity of analysis, typically for separating the signal from background sources or for selection-efficiency studies.
The high-level response of the RICH and MUON systems is reproduced using the particles' kinematic information provided by the Lamarr tracking modules and a description of the detector occupancy, for example based on the total number of tracks traversing the detector. The loss function adopted to train the PID-GAN model...
Combining stacks of GBDT and GAN models, Lamarr provides the high-level response of the LHCb tracking and PID systems. To validate the ultra-fast simulation approach the chosen machine-learning-based models are trained on detailed simulated samples and the output of Lamarr is compared to the reference distributions as ...
The second family of parameterizations is made up of Generative Adversarial Networks (GAN) [19] trained to reproduce the distributions of high-level physics quantities, typically conditioned [20] by the kinematics of the particles traversing a specific LHCb sub-detector. Additional algorithms to define detector paramet...
Table 2 presents a selection of several prominent baselines across different types. Notably, ProteinMPNN [2] and GVP-Transformer [10] exhibit advanced performance, particularly excelling in sequence recovery and perplexity. The lightweight GVP-GNN [14] also demonstrates competitiveness, showcasing relatively strong per...
As depicted in Figure 1(b), the architecture of the inference phase utilizes only the pretrained language-enhanced structure model, rendering other flows unnecessary during this stage. Evaluating a pretrained protein structure model within a novel training framework poses significant challenges. To address these challe...
Group 1 of Table 3 showcases the contact map prediction scores, evaluated across the CATH, trRosetta, Ts50/Ts500, and CASP14 test sets. Both the residue-level and protein-level pretrained models demonstrate high P@L accuracy in predicting contact maps across all datasets, indicating that the pretrained structure module...
Furthermore, regarding different levels (Design$_p$ vs. Design$_r$), while there is no significant disparity between the residue-level and protein-level pretrained modules in downstream tasks, Design$_r$ slightly outperforms Design$_p$ overall. This observation suggests that any small gaps between the two levels of pretrained mode...
The retrieval alignment evaluation on the CATH, trRosetta, Ts50/Ts500, and CASP14 test sets is presented in Group 2 of Table 3. Additionally, we provide residue-level and protein-level results for comprehensive analysis. It’s noteworthy that the protein-level pretrained model exhibits a higher ease in aligning sequence...
Knowledge graphs containing a large number of facts benefit various downstream applications, ranging from open-domain question answering [1], content-based recommender systems [2], to text-centric information retrieval [3]. A fact in a knowledge graph is represented as a triple, which includes a head/subject entity, a...
Specifically, our deeper entity embedding network includes 1) an input layer that receives low-dimensional ($\hat{n}$) entity representations; 2) multiple hidden layers following the input layer to increase the model expressiveness; 3) an output layer that produces high-dimensional ($n$...
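The narrow-then-deep layout above can be sketched as a small embedding table followed by an MLP that lifts vectors to the high output dimension. All sizes and the two-layer MLP below are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

# Sketch of a deeper entity-embedding network: a low-dimensional table
# (width n_hat) as the input layer, a ReLU hidden layer, and an output
# layer producing high-dimensional (n) entity embeddings.
class DeepEntityEmbedding:
    def __init__(self, num_entities, n_hat, hidden, n, seed=0):
        rng = np.random.default_rng(seed)
        self.table = rng.standard_normal((num_entities, n_hat)) * 0.1  # input layer
        self.w1 = rng.standard_normal((n_hat, hidden)) * 0.1           # hidden layer
        self.w2 = rng.standard_normal((hidden, n)) * 0.1               # output layer

    def __call__(self, entity_ids):
        x = self.table[entity_ids]           # (batch, n_hat) narrow representations
        h = np.maximum(x @ self.w1, 0.0)     # ReLU hidden activation
        return h @ self.w2                   # (batch, n) lifted embeddings

emb = DeepEntityEmbedding(num_entities=1000, n_hat=16, hidden=64, n=512)
vecs = emb(np.array([3, 7, 42]))
```

With these illustrative sizes, the table plus MLP holds about 1000·16 + 16·64 + 64·512 ≈ 50K parameters, versus 1000·512 = 512K for a direct 512-dimensional embedding table, which is the parameter-efficiency argument in the text.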
Figure 1: In (a), conventional KGE models that use high-dimensional entity representations amount to enlarging the width of the embedding layer. In contrast, we achieve parameter efficiency by increasing the depth of the embedding network, i.e., a narrower embedding layer (low-dimensional entity representations) plus th...
In search of a simple way to reduce entity parameters and hence improve the parameter efficiency of conventional KGE models, we take inspiration from earlier discussions on wider vs. deeper neural networks. Studies show that deeper neural networks require exponentially fewer model parameters than wider networks to provid...
In Table V, we show the parameter efficiency of LiftNet-based methods on the three datasets. The results are shown in pairs, i.e., the numbers of parameters required by 512-dimensional KGE models and those of the corresponding LiftNet models, because they achieve similar link prediction accuracy. KGE methods with 16-di...
If $P'$ dominates $P$ (i.e., $\gamma_{ij}(c)/\gamma_{ij}^{\prime}(c)\geq 1$...
When both isotonicity and monotonicity hold, it is possible to devise a simple multi-objective routing algorithm [6] that finds the best routes between a source node and all other nodes. First, one starts at a source node and computes the fidelity curves and a priority value for the links that connect to its neighborin...
This process continues until the priority queue is empty. Note that we did not specify how the priority value of each node is computed simply because we do not know what the optimal way to compute such a value is. Prioritizing paths with a low likelihood of being dominated will reduce the number of recomputations and p...
Inevitably, at a certain point, the algorithm will find a node that already has an associated fidelity curve. In this situation, the algorithm has found an alternative path to reach the neighboring node and has to check whether this new path is dominated or not by the old one. To do this, a temporary registry is required...
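The priority-queue procedure of the last few paragraphs can be sketched as follows. To keep the example short, fidelity curves are collapsed here to a single scalar per path (the product of link fidelities), so the domination check reduces to a comparison, and the priority is simply the path fidelity; these are illustrative simplifications, not the paper's algorithm.

```python
import heapq

# Simplified priority-queue routing: relax neighbors of the highest-priority
# node and keep a new path only if it is not dominated by the stored one.
def best_paths(graph, source):
    # graph: {node: [(neighbor, link_fidelity), ...]}
    best = {source: 1.0}                      # "registry" of path fidelities
    queue = [(-1.0, source)]                  # max-heap via negated priority
    while queue:
        prio, node = heapq.heappop(queue)
        fid = -prio
        if fid < best.get(node, 0.0):
            continue                          # stale entry: dominated path
        for nbr, link_fid in graph.get(node, []):
            cand = fid * link_fid             # candidate path's fidelity
            if cand > best.get(nbr, 0.0):     # not dominated: replace it
                best[nbr] = cand
                heapq.heappush(queue, (-cand, nbr))
    return best

g = {"s": [("a", 0.9), ("b", 0.8)], "a": [("b", 0.95)], "b": []}
routes = best_paths(g, "s")                   # routes["b"] comes via a: 0.9 * 0.95
```

With real fidelity curves, the `>` comparison becomes a curve-domination test and a dominated alternative may still need to be retained temporarily, which is why the text calls for a temporary registry.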
Bugalho et al. considered in [6] bipartite and multipartite entanglement distribution assuming a single-qubit generation model and considering that not all links have the same fidelity or entanglement generation probabilities, and combined that with imperfect quantum memories. That work considered a single source mode...
Next, there is about a 0.45 dB SNR gap between the MWD sequence and the corresponding MWUB at $R=0.5$, which means SCL decoding with list size larger than 8 can provide a lower minimum required SNR to achieve BLER $\leq 10^{-4}$...
In Fig. 5(a), we observe that the required SNRs of polar sequence [4], GA algorithm [7] and MWD sequence are close to the required SNRs of corresponding MWUB. Then, since the polar codes constructed by the MWD sequence have the optimum MWUB, the required SNRs are less than or equal to those of the polar sequence and GA...
In Fig. 5(b), the optimum MWUB of the MWD sequence also leads to corresponding minimum required SNRs less than or equal to those of the GA algorithm and polar sequence. The MWD sequence has about 0.75 dB and 1.07 dB SNR gaps at $R=0.5625$ compared with the GA algorit...
Then, in Lemma 5, we prove the polar codes obeying the PO with the MWD sequence have the optimum performance evaluated by the MWUB in the high SNR region. In Lemma 6, we prove the MWD sequence is nested, which means that the MWD sequence can be used similarly to the polar sequence in 5G [4].
Fig. 5 shows the minimum required SNRs of polar codes constructed by the polar sequence [4], GA algorithm [7] and MWD sequence to achieve the target BLER under the AWGN channel with the code rate range $R=0.0625\sim 0.9375$. The required SNRs of MWUB equal to ...
That is, the value of the game with signals cannot be larger than that of the classic search game without signals. Since the value of this classic search game on a tree is equal to its total length $\mu$ (Gal, 1979), this must be an upper bound for the value of the search game with signals on a tree.
We define the mean depth $D$ of a rooted tree, with respect to a given probability measure $\lambda$ on its leaf nodes, as the mean distance from the root to the set $\mathcal{L}$ of leaf nodes, weighted with respect to $\lambda$. More precisely, we define
where the penultimate equality follows from the fact that $\bar{\lambda}$ is a probability distribution on $\mathcal{L}$ and the final equality follows from the fact that the leaf nodes of $Q$ and $Q'$...
We observe that as $p$ goes to $1$ and $q$ to $0$, the distribution of $\bar{\lambda}$ becomes concentrated on the leaf node at greatest distance from $O$, and $D$ converges to that distance. As $p$ goes to $1/2$, the distribution o...
The rooted tree of Figure 1 has two branch nodes, $A$ and $O$. The recursion works backwards from penultimate nodes to the root, so we start with the subtree at $A$: the arcs $AL_1$ and $AL_2$...
They performed experiments with both Textual Inversion and fine-tuning the U-net component of Stable Diffusion, similar to Ruiz et al. (2022). They find that Textual Inversion works, but fine-tuning the U-net is more effective, especially with more complex prompts.
Images generated by the fine-tuned StyleGAN3 model achieved an FID score of 53 and an MFID score of 0.12, substantially lower than those shown in \tableref{tab:inference_settings}. However, in the blinded head-to-head comparison, the expert radiologist preferred the images generated by Stable Diffusion (36/50 images, 7...
To explore the potential benefits of a diffusion-based approach over a GAN-based approach, we include the state-of-the-art StyleGAN3 (Karras et al., 2021) as a baseline. To allow a fair comparison, we fine-tune a pre-trained StyleGAN3 on the same hardware for the same number of steps. A blind comparison between Stable ...
Additionally, we demonstrate the flexibility of the approach through example applications and by adapting to multiple and more complex modalities beyond chest X-ray. In contrast to other studies, we intentionally do not train from scratch and use small datasets to explore the feasibility of diffusion in low-data and lo...
We experiment with the number of sampling steps, the CFG scale, the number of images used to train embeddings, and the embedding vector size. To evaluate the impact of these parameters on generation quality, we compute the Fréchet Inception Distance using 1000 generated samples compared to 1000 real examples for each p...
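The Fréchet Inception Distance used above compares Gaussian fits of real and generated feature distributions. As a hedged illustration with 1-D features (where the closed form collapses to $(\mu_1-\mu_2)^2+(\sigma_1-\sigma_2)^2$), the computation looks as follows; the real metric uses Inception-network features and full covariance matrices, with a matrix square root inside the trace term.

```python
import statistics

# 1-D sketch of the Frechet distance between two Gaussian fits:
# (mu1 - mu2)^2 + (s1 - s2)^2, the univariate special case of FID.
def fid_1d(real, fake):
    m1, m2 = statistics.mean(real), statistics.mean(fake)
    s1, s2 = statistics.pstdev(real), statistics.pstdev(fake)
    return (m1 - m2) ** 2 + (s1 - s2) ** 2

# Same spread, mean shifted by 1 -> distance 1.0
print(fid_1d([0.0, 1.0, 2.0], [1.0, 2.0, 3.0]))  # 1.0
```

This makes explicit why 1000-sample estimates are compared against 1000 real examples: both the mean and the spread of the feature distribution enter the score.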
The camera distance can also be scaled in the way discussed in the main paper. Plus, cameras are oriented to look toward the objects. During optimization, the camera field of view is randomly sampled between 40 and 70 degrees. At test time, the field of view is fixed at 60 degrees.
The gradients across all rendered views direct the update of $\boldsymbol{\theta}$, ensuring that the NeRF-generated images align with the text descriptions. Additionally, we incorporate the ‘perturb and average’ technique from SJC for a more robust $\mathcal{L}_{\text{SDS}}$...
We use the Adam optimizer and perform gradient descent at a learning rate of 0.001 for 5,000 steps for simple prompts, such as “apple and banana”, and 8,000 steps for more complex prompts for better quality. We follow the implementation of SJC [53] to perform the averaging implicitly, relying on the optimizer's momentum ...
In the first task, we utilized a 7-point Likert scale to measure participants' perceptions across two dimensions: semantic consistency and multiview consistency. For the second task, we also used the 7-point Likert scale to evaluate the generative quality and made a comparison with existing works, including L...
In our study, we employ the CLIP score as the primary evaluation metric to assess the congruence between the generated 3D assets and the associated text prompts. This score, commonly used in text-to-image generation research as noted in studies [38, 65, 55], is derived from the cosine similarity between the embeddings...
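The cosine-similarity computation behind the CLIP score can be written in a few lines. The embeddings below are made-up placeholder vectors; real scores would use CLIP's text and image encoders on the prompt and rendered views.

```python
import math

# CLIP-style score sketch: cosine similarity between a text embedding and
# a rendered-image embedding (both vectors here are illustrative stand-ins).
def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

text_emb = [0.2, 0.9, 0.1]       # placeholder text embedding
render_emb = [0.25, 0.85, 0.05]  # placeholder image embedding
score = cosine_similarity(text_emb, render_emb)
```

A score near 1 indicates strong congruence between the prompt and the generated asset; in practice the similarity is averaged over several rendered viewpoints.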
$\boldsymbol{a}_{\mathrm{LDA}}=\left(\frac{\boldsymbol{\Sigma}_{1}+\boldsymbol{\Sigma}_{2}}{2}\right)^{-1}(\boldsymbol{\mu}_{2}-\boldsymbol{\mu}_{1})$
Managing the degree of overlaps between clusters is one of the most important tasks of a cluster generator. In repliclust, we quantify pairwise overlap between two clusters as the irreducible error rate when classifying a new data point as belonging to one of the two clusters (assuming equal class probabilities). On t...
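To make the overlap computation concrete, the sketch below projects two Gaussian clusters onto the LDA direction $\boldsymbol{a}_{\mathrm{LDA}}=((\boldsymbol{\Sigma}_1+\boldsymbol{\Sigma}_2)/2)^{-1}(\boldsymbol{\mu}_2-\boldsymbol{\mu}_1)$ and estimates the minimax misclassification rate along that axis via the normal CDF. This is a simplified assumption-laden illustration, not `repliclust`'s internal code.

```python
import numpy as np
from math import erf, sqrt

# Overlap estimate between two Gaussian clusters along the LDA axis.
# The minimax threshold equalizes the two conditional error rates,
# giving a tail probability of 1 - Phi(d / (s1 + s2)).
def lda_overlap(mu1, S1, mu2, S2):
    a = np.linalg.solve((S1 + S2) / 2, mu2 - mu1)   # LDA direction
    d = a @ (mu2 - mu1)                             # projected mean separation
    s1 = sqrt(a @ S1 @ a)                           # projected std, cluster 1
    s2 = sqrt(a @ S2 @ a)                           # projected std, cluster 2
    z = d / (s1 + s2)                               # minimax standardized gap
    return 1 - 0.5 * (1 + erf(z / sqrt(2)))         # normal tail probability

mu1, mu2 = np.zeros(2), np.array([3.0, 0.0])
S = np.eye(2)
overlap = lda_overlap(mu1, S, mu2, S)               # about 0.0668 = 1 - Phi(1.5)
```

For two unit-covariance clusters whose centers are 3 apart, the projected gap standardizes to 1.5, so the irreducible error rate is roughly 6.7%, matching the intuition that these clusters barely touch.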
Figure 3: Cluster overlap based on the misclassification rate of the best linear classifier, in a minimax sense. The left panel shows the densities of two normal distributions in 1D. A minimax classification rule implies that the conditional misclassification rates for the blue and red distributions are equal. The blac...
Figure 4: Quality of approximating cluster overlap using our LDA and the cruder center-to-center (C2C) approximations. Each data point corresponds to a pair of multivariate normal clusters with the pairwise overlap shown. The full simulation is based on 900 pairs generated from a variety of data set archetypes with di...
The introduction of pre-trained language models revolutionized event extraction. Fine-tuning these models achieved state-of-the-art performance across various benchmarks (Lin et al., 2020; Ramponi et al., 2020; Wadden et al., 2019; Yang et al., 2021). These models captured deep contextual information and benefited from...
In this study, we propose a novel Contrastive Oracle-Free Framework for Event Extraction (COFFEE), which addresses the event extraction task without using any oracle information. Our COFFEE consists of two parts, a generator that performs the extraction of events and a selector that aims to refine the generated results...
Post-generation re-ranking is usually applied in two-stage systems, that is, generation and re-ranking, to re-score the output from the first stage by training an additional re-ranking module. This technique has been widely used in neural translation and summarization. For example, Ng et al. (2019); Yee et al. (2019) ...
Comparing COFFEE with and without ranking, we can conclude that re-ranking in the selector is crucial. In both examples, COFFEE fails to detect all events without re-ranking. Even though both candidates are the correct targets, the beam scores differ more than expected, which leads to incorrect ranking. The re-ranking ...
In addition, Figure 5 demonstrates the influence of the weight parameter on COFFEE. The weight represents the ratio of combining the ranking score and generation score. When the weight is set to 0, only the generation score is considered, while a weight of 1 means that only the ranking score is considered. As depicted ...
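The weighted combination described above amounts to a convex mixture of the two scores. The sketch below is a minimal illustration with made-up candidate events and scores, not COFFEE's actual selector.

```python
# Convex combination of the generator's beam score and the selector's
# ranking score: weight = 0 uses generation only, weight = 1 ranking only.
def combined_score(gen_score, rank_score, weight):
    return (1 - weight) * gen_score + weight * rank_score

# (candidate, generation/beam score, ranking score) -- illustrative values
candidates = [("event A", -0.2, 0.9), ("event B", -0.1, 0.4)]
best = max(candidates, key=lambda c: combined_score(c[1], c[2], weight=0.7))
```

With a weight of 0.7, the ranking score dominates and "event A" wins despite its lower beam score, which mirrors how re-ranking can rescue candidates the generator under-scores.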
We try different values of $c$; Figure 1 presents the convergence curves under the best choice of $c$. For each setting, it can be concluded that the convergence rates of the $L^2$-norm generalization errors of spectral algorithms ar...
Since the convergence rates and the minimax optimality of spectral algorithms in the well-specified case are clear, a large amount of literature has studied misspecified spectral algorithms. Among these works, Steinwart et al. (2009); Dicker et al. (2017); Pillaud-Vivien et al. (2018); Fischer and Steinwart (2020); Celi...
In this subsection, we compare this paper’s convergence rates and minimax optimality with the results in previous literature. Ignoring the log-term and the constants, Theorem 1 gives the upper bound of the convergence rates of spectral algorithms (with high probability)
The outline of the rest of the paper is as follows. In Section 2, we introduce basic concepts including prior knowledge of RKHS, integral operators, and the definition of the interpolation space. In addition, we formally define the spectral algorithm, which is the main interest of this paper, and provide three example...
Based on the $L^q$-embedding property of $[\mathcal{H}]^s$, the refined proof in this paper removes the boundedness assumption in previous litera...
In order to evaluate the performance of our method in comparison to other simplification techniques, we first use each simplified point cloud obtained from three object-level point clouds to form simplified meshes, using screened Poisson surface reconstruction [17]. We can then compute the reconstruction errors betwe...
Overall, these results show that our approach provides a computationally efficient option for performing point cloud simplification in settings where the user wishes to strike a balance between preserving high fidelity around sharp features in the cloud, and ensuring that the simplified cloud covers the manifold defin...
HC and Potamias et al. are the only baselines with shorter runtimes than our method, and they obtain maximum Hausdorff distances comparable to those obtained by our approach. However, as discussed in Section 2, tuning the user-specified HC parameters makes striking a balance between feature preservation and retaining a suffi...
In this section we will introduce a number of existing point cloud simplification techniques, with a particular focus on works which have a feature-preserving element to their approach. Some of the earliest curvature-sensitive simplification techniques were proposed by Pauly et al. [26] and Moenning et al. [25]. The fo...
We use the aforementioned evaluation procedure to compare our method (denoted GP) empirically to a number of competing simplification techniques discussed in Section 2. We compare our approach to PC-Simp, AIVS, Potamias et al., HC and WLOP, with the latter two approaches implemented using the CGAL library. Additionally...
Training on full resolution (FullRes): We implemented a distributed version of the proposed architecture that splits the task over two GPUs if necessary. This allows for training directly on full-resolution ($256^3$) data, given that the expensive speciali...
of $1\times 1\times 1\,\text{mm}^3$, resulting in a total scan size of $240\times 240\times 155$, which we padded to a size of $256\times 256\times 256$...
$256^3$ images, we distribute the model over 2 GPUs. The methods HalfRes and PatchDDM were trained on one GPU only. The optimizer we used was AdamW [Loshchilov and Hutter(2017)] with the default parameters.
For three spatial dimensions (i.e. 3D) this means that reducing the input size from $256^3$ to $128^3$ results in a reduction of the voxel count by a factor of $8$.
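The arithmetic behind this reduction is simply that halving each of the three spatial dimensions divides the voxel count by $2^3$:

```python
# Halving each of the three spatial dimensions shrinks the voxel count
# by 2**3 = 8, which drives the memory/compute savings discussed above.
full_res, half_res = 256 ** 3, 128 ** 3
factor = full_res // half_res
print(factor)  # 8
```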
With the flourishing proliferation of edge devices, including but not limited to smartphones and wearable devices, the deluge of private data originating from these geographically distributed nodes has swelled exponentially. Tremendous repositories of data provide more opportunities for artificial intelligence (A...
We compare our proposed algorithm FedAgg with the following approaches. FedAvg [3] is introduced as the fundamental framework in the field of federated learning. FedAdam [38] allocates an adaptive optimizer for the global server and a mini-batch SGD optimizer for the participating clients respectively, which averages t...
At the onset of the $t$-th iteration, we initialize $\boldsymbol{w}_{i,0}^{t}=\bar{\boldsymbol{w}}^{t}$...
Federated learning serves as a promising paradigm for tackling distributed machine learning tasks and achieves multi-fold benefits, including personal data privacy preservation and training efficiency improvement [3]. Specifically, in the FL model training process, each participating client undertakes one or sev...
Federated learning represents a decentralized machine learning paradigm explicitly designed to address privacy concerns. Within the FL framework, a global model is collaboratively trained by a substantial number of clients using locally collected data. Given $N$ geographically distributed and heterogeneou...
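As a reference point, the basic FedAvg aggregation that these methods build on can be sketched as follows (illustrative Python; `fedavg_round` and the client tuples are our own toy construction, not the paper's FedAgg algorithm):

```python
# Minimal FedAvg round: each client starts local training from the current
# global model w_bar, and the server averages the returned models weighted
# by the clients' local data sizes.

def fedavg_round(w_bar, clients):
    """clients: list of (num_samples, local_update_fn) pairs."""
    total = sum(n for n, _ in clients)
    local_models = [(n, update(list(w_bar))) for n, update in clients]
    # Weighted average of the local models, coordinate by coordinate.
    return [
        sum(n * w[k] for n, w in local_models) / total
        for k in range(len(w_bar))
    ]

# Toy usage: two clients whose "training" shifts each weight by a constant.
w = [0.0, 0.0]
clients = [(10, lambda w: [x + 1.0 for x in w]),
           (30, lambda w: [x - 1.0 for x in w])]
w_next = fedavg_round(w, clients)  # -> [-0.5, -0.5]
```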
We adopt CLIP (Radford et al., 2021) as the image encoder to extract multi-scale visual features, and leverage a simple bottleneck MLP layer as the learnable projection network. We keep other hyperparameters the same as the language instruction-following LLaMA-Adapter. For ScienceQA (Lu et al., 2022), we concatenate t...
Therein, prompt tuning appends a collection of trainable tokens to pre-trained large models, which are inserted either to the input embeddings (Lester et al., 2021; Liu et al., 2021b) or every intermediate layer (Li & Liang, 2021; Liu et al., 2021a). LoRA (Hu et al., 2021; Zhang et al., 2023d; Hedegaard et al., 2022) i...
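The low-rank adaptation idea mentioned above can be sketched in a few lines (illustrative numpy; the dimensions and the $\alpha/r$ scaling follow the common LoRA convention, and the zero-initialized up-projection keeps the adapted layer identical to the frozen one at the start of training):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2               # hidden size and low rank, r << d

W = rng.normal(size=(d, d))          # frozen pre-trained weight
A = rng.normal(size=(r, d)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                 # trainable up-projection, zero-init

def lora_forward(x, alpha=4.0):
    # B is zero at initialization, so the adapted layer starts out
    # exactly equal to the frozen pre-trained layer.
    return x @ (W + (alpha / r) * (B @ A)).T

x = rng.normal(size=(1, d))
assert np.allclose(lora_forward(x), x @ W.T)
```

Only `A` and `B` (of size $2rd$ rather than $d^2$) are updated during fine-tuning, which is what makes the approach parameter-efficient.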
Multi-modal Reasoning. Besides language instruction, our approach can also incorporate an image encoder via zero-initialized attention to become a multi-modal LLM. Compared to concurrent works (Liu et al., 2023b; Zhu et al., 2023), LLaMA-Adapter showcases higher tuning efficiency with competitive reasoning capacity on ...
Zero-shot Multi-modal Evaluation. To verify the out-of-domain generation ability of our approach, we conduct a two-stage multi-modal training, and then evaluate three benchmarks (MME (Fu et al., 2023), MMBench (Liu et al., 2023c), LVLM-eHub (Xu et al., 2023)) in a zero-shot manner. For the first stage, we utilize the ...
For zero-shot multi-modal evaluation, we select three benchmarks, MME (Fu et al., 2023), MMBench (Liu et al., 2023c), and LVLM-eHub (Xu et al., 2023), covering a wide range of VQA tasks. We compare with two concurrent multi-modal LLMs: LLaVA (Liu et al., 2023b) and MiniGPT-4 (Zhu et al., 2023).
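The zero-initialized attention mechanism used throughout these experiments can be illustrated with a toy numpy sketch (our simplification; the shapes and the scalar `gate` are illustrative, not the exact LLaMA-Adapter implementation):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def gated_prompt_attention(scores_word, scores_prompt, values, gate):
    """Attention over word tokens plus adapter prompt tokens; the prompt
    contribution is scaled by tanh(gate), a scalar learned per layer."""
    attn = np.concatenate(
        [softmax(scores_word), np.tanh(gate) * softmax(scores_prompt)],
        axis=-1)
    return attn @ values

sw = np.array([[0.2, -0.1, 0.5]])    # logits over 3 word tokens
sp = np.array([[1.0, 2.0]])          # logits over 2 prompt tokens
V = np.arange(10.0).reshape(5, 2)    # value vectors (3 word + 2 prompt)

# With gate = 0 (the initialization), the prompt tokens contribute nothing,
# so the output equals plain attention over the word tokens.
out = gated_prompt_attention(sw, sp, V, gate=0.0)
assert np.allclose(out, softmax(sw) @ V[:3])
```

As the gate grows during training, the injected prompt (or visual) tokens are blended in progressively, which is why the adapted model does not disturb the pre-trained behavior early on.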
These results coincide with findings in Nagrani et al. (2022), where HowTo100M was pointed out to be inappropriate for vision-language tasks requiring strong alignment. S-ViLM trained on VideoCC alone significantly outperforms VCC on both tasks, showing the effectiveness of our proposed techniques.
Training objectives. Without loss of generality, the model in this ablation is pre-trained on VideoCC only. To better understand S-ViLM, we start with the contrastive baseline represented as Scenario 1 in Table 6. Then we add our proposed spatial grouping module during the pre-training phase. This module is driv...
Pre-training datasets. To analyze the effects of pre-training datasets, we report the model performances on selected downstream tasks in Table 5. In particular, the same model pre-trained on VideoCC achieves the best performance in zero-shot retrieval on MSR-VTT, compared with HowTo100M and WebVid-2M.
In particular, when pre-trained on the same VideoCC dataset, S-ViLM leads to better performance than MCQ. The significant improvement over MCQ shows that our techniques do help to learn better features for downstream tasks. It is also worth noting that pre-training on VideoCC and ActivityNet performs consistently bette...
S-ViLM also achieves a performance gain when the model is fine-tuned on the target MSR-VTT dataset, which further validates the advantages of the pre-trained model. Note that S-ViLM performs favorably against existing methods despite using much smaller pre-training data than the baselines, such ...
We use two multilingual models: multilingual BERT (Devlin et al., 2019; https://huggingface.co/bert-base-multilingual-cased) and XLM-R (Conneau et al., 2020a). We use the XNLI dataset (Conneau et al., 2018), which has natural language inference examples, parallel in multiple languages. Each example in our dataset is ...
We trained a different encoder for each model, as opposed to the single encoder we trained in all other experiments. This enables ContraSim to be used with representations with different dimensions. Results are summarized in Table 3. We report results with FAISS sampling. Across all pairs, ContraSim achieves superior r...
Given a pair of languages and a batch of representations at some layer, for each representation we define its positive pair as the representation of the same sentence in the other language, and its negative set as all other representations in the batch.
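This positive/negative construction feeds a standard contrastive objective; a toy sketch follows (illustrative numpy, an InfoNCE-style loss, not necessarily ContraSim's exact formulation):

```python
import numpy as np

def infonce_alignment(H_src, H_tgt, temperature=0.1):
    """Cross-lingual contrastive loss sketch: row i of H_src and row i of
    H_tgt encode the same sentence in two languages (the positive pair);
    every other row in the batch serves as a negative."""
    a = H_src / np.linalg.norm(H_src, axis=1, keepdims=True)
    b = H_tgt / np.linalg.norm(H_tgt, axis=1, keepdims=True)
    logits = a @ b.T / temperature          # scaled cosine similarities
    # Cross-entropy with the diagonal (the positives) as the target class.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

# Perfectly aligned batches give a much lower loss than misaligned ones.
aligned = infonce_alignment(np.eye(4), np.eye(4))
shuffled = infonce_alignment(np.eye(4), np.roll(np.eye(4), 1, axis=0))
assert aligned < shuffled
```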
Table 1: Layer prediction benchmark accuracy results for language and vision cases. For encoder-based methods we report mean and std over 5 random initializations. For ContraSim, we experiment with training with different datasets (rows) and evaluating on same or different datasets (columns).
As a sentence representation, we experiment with [CLS] token representations and with mean pooling of token representations, since Del and Fishel (2021) noted a difference in similarity in these two cases. We report results with [CLS] representations in the main body and with mean pooling in Appendix A.1; the trends ar...
Synthetic images. Figure 16 shows the additional qualitative results obtained on synthetic images. Similarly to Figure 6 (main paper), our results are the most similar to the ground-truth images. By contrast, the quality of the recovered images that contain a few arcs was notably degraded when the geometry-based method...
To validate the effectiveness of our method, we also present qualitative results for the cross-domain evaluation. Figure 17 shows the qualitative results of the cross-domain evaluation on the HoliCity test set when learning-based methods were trained on SL-MH. Conventional learning-based methods tended to have ...
Figure 17: Qualitative results in the cross-domain evaluation on the HoliCity test set. Our method using HRNet-W32 and compared methods were trained on SL-MH. From top to bottom: input images, ground-truth images, and results of López-Antequera et al. [36], Wakai and Yamashita [58], Wakai et al. [59], and our method.
Additionally, we tested our proposed method using various datasets to validate its robustness. Table 8 shows that our method outperforms both existing state-of-the-art learning-based [59] and geometry-based [35] methods on all datasets in terms of rotation errors. Table 9 also reports that our method is superior to Wa...
Proving equivalence of the above properties is a contribution per se, as typically one finds only parts of these equivalences in the literature, possibly with different assumptions on the Laplacian $L$. Moreover, for the construction of the strict quadratic Lyapunov function in item (d) above, we adopt a scali...
(Fax and Murray, 2004; Jadbabaie et al., 2003; Ren et al., 2007), quality-fair delivery of media contents (Dal Col et al., 2017), power networks (Dörfler et al., 2013), biological systems (Scardovi et al., 2010), and opinion dynamics (Anderson and Ye, 2019). Specifically, consensus refers to agents coming to a global a...
In the special case where $\alpha^{\star}=0$ (resp. $\alpha^{\star}=1$) and $d=0$, the matrix $L_{d}$...
Conditions of the same form as (a) for formation stability were given in (Fax and Murray, 2004, Theorem 3) and the uniform global exponential stability condition was exploited in (Xia and Scardovi, 2014, Theorem 1) and (Seo et al., 2009, Theorem 1). The condition related to the initial value problem (e) was given in (S...
a state value, thanks to the exchange of information modeled by some communication graph; mild assumptions on graph connectivity allow consensus to be reached uniformly exponentially (Jadbabaie et al., 2003; Olfati-Saber and Murray, 2004; Olfati-Saber et al., 2007; Moreau, 2005; Ren and Beard, 2008; Wieland et al., 2008...
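The basic consensus dynamics can be illustrated numerically (a minimal sketch; the path graph and initial condition below are arbitrary choices for illustration):

```python
import numpy as np

# Consensus sketch: integrate x' = -L x on a path graph of 4 agents. For a
# connected undirected graph, all states converge exponentially to the
# average of the initial values (L is symmetric with zero row sums).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # path-graph adjacency
L = np.diag(A.sum(axis=1)) - A              # graph Laplacian

x = np.array([4.0, 0.0, 0.0, 0.0])          # initial agent states
dt = 0.05
for _ in range(2000):                       # forward-Euler integration
    x = x - dt * (L @ x)

# Every agent converges to the initial average, here 1.0.
```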
Figure 2 illustrates a model converted from PyTorch to ONNX. Conversion is challenging, as noted by AirBus (ONNX, 2023; Gauffriau et al., 2024) and others (Consortium, 2023), because it maps between graphs expressed with different operators and different semantics.
For the reasons noted above, we studied the DL model converters from PyTorch and TensorFlow into the ONNX IR (torch.onnx and tf2onnx, respectively). We note that among ONNX model converters, those for PyTorch and TensorFlow have the most failure data available on GitHub (Table 6).
Following prior work (Islam et al., 2019; Garcia et al., 2020), we subsequently filtered for GitHub issues that contained enough information for failure analysis (e.g., the issue is resolved with a commit and pull request). This filtering is conducted upon the timeline event...
Model conversion can produce models that are incompatible with runtimes or have different behaviours. For a compatibility issue, in PyTorch #78721 a converted ONNX model had a type mismatch (vbogach, 2022). For a behavioral issue, in PyTorch #74732 a converted ONNX model’s prediction
The simulation was performed using the Gazebo simulator and the real experiment was carried out in an area of 60 $\text{m}^2$ equipped with a motion capture system (MoCap) that provides the positions of drones and gates within the capture area. The UAVs used for ...
Table IV shows the components used in both simulation and real experiments. For this mission, since we will use HITL simulation, all the modules remained the same in both experiments. In this case, the Web GUI is the component in charge of generating and uploading the mission that each drone is going to perform. We use...
Aerostack2 provides a behavior template architecture for programming behaviors that includes two distinct parts (Figure 4). On the one hand, there is a behavior executor that carries out the execution using the algorithm appropriate for each case. On the other hand, there is a behavior monitor that interacts with the behavior e...
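The executor/monitor split can be caricatured as follows (a schematic Python sketch; the class and method names are ours for illustration, not Aerostack2's actual API):

```python
# Schematic executor/monitor split: the executor runs the behavior's
# algorithm step by step, while the monitor drives it and reports status.
class BehaviorExecutor:
    """Runs the behavior's algorithm one step at a time."""
    def __init__(self, steps):
        self.steps, self.i = steps, 0
    def step(self):
        if self.i < len(self.steps):
            self.steps[self.i]()
            self.i += 1
        return self.i >= len(self.steps)   # True when finished

class BehaviorMonitor:
    """Interacts with the executor: drives execution and reports status."""
    def __init__(self, executor):
        self.executor, self.paused = executor, False
    def run(self):
        done = False
        while not done and not self.paused:
            done = self.executor.step()
        return "FINISHED" if done else "PAUSED"

log = []
monitor = BehaviorMonitor(BehaviorExecutor([lambda: log.append("takeoff"),
                                            lambda: log.append("land")]))
status = monitor.run()   # -> "FINISHED", log == ["takeoff", "land"]
```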
Table III shows a comparison between the components used in simulation and real experiments. For this mission, only basic behaviors are used: motion behaviors for taking off, following a path, and landing. The location of drones and gates is provided by the ground truth of the simulator or the MoCap system, and for con...
In order to be able to be part of the never-ending creative cycle mentioned above, LLMs should constantly adapt. Continual learning (Kirkpatrick et al., 2017, Shin et al., 2017) for LLMs (Sun et al., 2020, Wu et al., 2022) represents a promising direction, yet unexplored for creative applications.
LLMs might be able to generate creative products in the future. However, the fact that they will be able to generate these outputs will not make them intrinsically creative. Indeed, as Floridi and Chiriatti (2020) put it, it is not what is achieved but how it is achieved that matters. An interesting definition that co...
Finally, person covers information about personality, intellect, temperament, habits, attitude, value systems, and defense mechanisms (Rhodes, 1961). While several of the properties of press and process might be achieved - or at least simulated - by generative learning solutions, those related to the creative person a...
In this paper, we have discussed whether or not LLMs can actually be deemed creative; we started by considering Boden's three criteria, i.e., value, novelty, and surprise. While LLMs can achieve value and a weak form of novelty and surprise, their inner autoregressive nature seems to prevent them from reaching...
Indeed, product and process are not sufficient to explain creativity. Rhodes (1961) theorizes that four perspectives have to be considered: product (see Section 3) and process (discussed above), but also the so-called press and person. Press refers to the relationship between the product and the influence its environme...
More recently, to better exploit the 3D spatial information of CT images, many studies have focused on 3D CNNs, such as [27, 28, 29, 30, 1, 31, 32, 33, 2, 34, 35, 36]. The well-known NoduleNet [1] is an end-to-end 3D deep CNN framework that achieves nodule detection, false positive reduction, and nodule segmentati...
Unsupervised domain adaptation (UDA) is a practical setting where the labeled source data are provided for adapting to the unlabeled target data. Most existing methods adopt feature alignment for UDA object detection. In [3], the authors build image-level and instance-level domain classifiers to implement feature align...
Nonetheless, these works mainly focus on the shifts between the source and target domains and neglect the characteristics of the detection task. For instance, discriminating foreground objects from the background can naturally serve as auxiliary supervision for the target data. Besides, the relatively smaller size of the nodul...
Source-free unsupervised domain adaptation (SFUDA) denotes the setting of adapting to the target domain given only a well-trained source model and unlabeled target data. One stream of the SFUDA methods is implicitly aligning the feature distribution of the source and target domain using the generative adversarial netwo...
Deep learning has achieved remarkable success in various object detection tasks. In the medical field, deep networks are able to reach clinical expert-level performance, e.g., pulmonary nodule detection [1, 2]. Nonetheless, these networks are usually domain-specific. In other words, they work well when the trainin...
A common approach to reduce the computational burden in solving two-stage DCOPFs is to model the second-stage decisions using an affine policy. More specifically, the second-stage (or the recourse) dispatch decision is restricted to be an affine function of the realized net-load and the first-stage decisions [23, 24, ...
We also compare the solutions produced by our method to those obtained with the affine policy method, a widely applied approximation that makes two-stage stochastic programs tractable [56]. Specifically, the affine policy approximates the recourse decision $\mathbf{p}^{R}$...
Once the affine policy is determined, real-time decision-making reduces to simple function evaluations. This method has been observed to provide good performance when the net-load variations are small or restricted to a few possible instances [26, 27, 28]. However, if the variations are large or include man...
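Evaluating an affine recourse policy is indeed just a matrix-vector product; a minimal sketch (the symbols `p0`, `G`, and the participation-factor interpretation are illustrative, not the paper's exact parameterization):

```python
import numpy as np

# Affine recourse sketch: p_R = p0 + G @ (xi - xi_forecast), where p0 is
# the first-stage dispatch, xi the realized net load, and G a gain matrix
# chosen offline when the policy is optimized.
def affine_recourse(p0, G, xi, xi_forecast):
    return p0 + G @ (xi - xi_forecast)

p0 = np.array([1.0, 2.0])         # first-stage dispatch (per unit)
G = np.array([[0.6], [0.4]])      # participation factors; columns sum to 1
xi_fc = np.array([3.0])           # forecast net load
p_r = affine_recourse(p0, G, np.array([3.5]), xi_fc)
# The 0.5 p.u. load increase is split 60/40 across the two generators,
# so total generation rises by exactly the realized load deviation.
```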
Recently, many machine-learning-based algorithms have been proposed to accelerate the computation of OPF; see [19, 29, 30, 31, 32] and the references therein. However, many of the existing algorithms are not suitable for a two-stage problem. To enhance computational speed, the proxy of the second stage ca...
To overcome this shortcoming, we introduce Self-Supervised Bayesian Neural Networks (§3), which use unlabelled data to learn improved priors over functions. In other words, our approach improves the BNN prior predictive distribution (which we will just call prior predictive in the remainder of the paper) by incorporati...
In practice, self-supervised BNNs generate pseudo-labelled data using unlabelled data and data augmentation, similar to contrastive learning (Oord et al., 2019; Chen et al., 2020a; b; Grill et al., 2020; Hénaff et al., 2020). We use this generated data to learn models with powerful prior predictive distributions. To d...
Pre-training as Prior Learning.  In this work, our central aim is to incorporate unlabelled data into BNNs. To achieve this, in practice, we perform model learning using contrastive datasets generated from the unlabelled data and data augmentation. This corresponds to an unsupervised prior learning step. Since our obje...
In contrast, our approach incorporates vast stores of unlabelled data into the prior distribution through variational model learning. Similarly, other work also learns priors, but typically using labelled data e.g., by using meta-learning (Garnelo et al., 2018; Rothfuss et al., 2021) or type-II maximum likelihood (Wils...
In other words, we will use $\mathcal{D}^{u}$ to guide a self-supervised training of the model, thereby incorporating the desired information from our unlabelled data. To do this, we will draw on data augmentation (Yaeger et al., 1996; Kriz...
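The pseudo-labelling step can be sketched as follows (a toy example: the string-level "augmentation" below stands in for the image augmentations actually used, and the pairing scheme mirrors standard contrastive setups):

```python
import random

# Sketch of generating pseudo-labelled pairs from unlabelled data: two
# random augmentations of the same example share a pseudo-label (its
# index). Here "augmentation" is a toy token dropout over strings.
def augment(x, rng):
    tokens = x.split()
    return " ".join(t for t in tokens if rng.random() > 0.2) or x

def make_pseudo_labelled(unlabelled, seed=0):
    rng = random.Random(seed)
    pairs = []
    for idx, x in enumerate(unlabelled):
        pairs.append((augment(x, rng), idx))
        pairs.append((augment(x, rng), idx))   # positive pair: same label
    return pairs

data = make_pseudo_labelled(["a b c d", "e f g h"])
# Each unlabelled example yields two views carrying the same pseudo-label.
```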
The self-normalized tests are extremely simple to implement once the ensemble of point clouds has been transformed by the stable shape descriptor. To test for (4), one merely needs to compute the statistic of choice $\mathbb{D}^{\text{max}}_{T}$...
Outliers are common in genomic data, and many invariants from geometric data analysis are sensitive to outliers (e.g., see [11, §4]). A suitable weight function can mitigate their effect. However, a weight function that suppresses outliers a priori can increase the type II error, since irregularities are also an indicat...
To elaborate on its use, note that (4.1) can be viewed as a $U$-statistic applied to a nonstationary process. The proof relies on a type of Hoeffding decomposition, which essentially decomposes the statistic (after scaling) into a linear part that defines the distributional properties, and an error component ...
We emphasize that we merely push towards uniform weights if the criterion indicates that using a weight functional based on $\bar{\mathscr{S}}_{T}$ increases the type II error. This does not change the limiting distributional...
The finite sample performance and the presence of outliers are particularly relevant in our context since genomic data applications are typically in the regime of $T<100$ and $n<10K$. In Section 4.2.1, we elaborate on this and introduce a data-adaptive function that ...
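As an illustrative toy (our simplification, not the paper's exact statistic $\mathbb{D}^{\text{max}}_{T}$), a max-type self-normalized statistic over scalar descriptor summaries can be computed as:

```python
import numpy as np

def max_cusum(s):
    """Max-type CUSUM statistic with a self-normalizer built from the same
    centered partial sums, avoiding a separate long-run variance estimate."""
    s = np.asarray(s, dtype=float)
    partial = np.cumsum(s - s.mean())            # centered partial sums
    return np.abs(partial).max() / np.sqrt(np.mean(partial ** 2))

# A mean shift halfway through the sequence inflates the statistic
# relative to a stationary sequence of the same length.
stationary = [1.0, -1.0] * 10
shifted = [0.0] * 10 + [5.0] * 10
```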