Dataset columns: context (string, 250–5.97k chars), A (string, 250–5.02k chars), B (string, 250–3.37k chars), C (string, 250–3.6k chars), D (string, 250–8.2k chars), label (string, 4 classes).
$$\tau(G)=\frac{1}{|V|}\prod\limits_{i=2}^{|V|}\lambda_{i}=\frac{1}{2|E|}\prod\limits_{i=1}^{|V|}d_{i}\prod\limits_{i=2}^{|V|}\bar{\lambda}_{i}.$$
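As a quick numerical check of this identity (a minimal sketch using a hypothetical example graph, the 4-cycle $C_4$, which has 4 spanning trees):

```python
import numpy as np

# Adjacency matrix of the 4-cycle C_4 (hypothetical example graph).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A                    # combinatorial Laplacian

eigvals = np.sort(np.linalg.eigvalsh(L))          # 0 = lambda_1 <= lambda_2 <= ...
tau = np.prod(eigvals[1:]) / A.shape[0]           # tau(G) = (1/|V|) * prod_{i>=2} lambda_i
print(round(tau))                                  # -> 4 spanning trees
```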
As both of the structural descriptors have important applications in graph theory, networking systems, molecular chemistry, and other related fields, researchers have taken a strong interest in the Kirchhoff indices and degree-Kirchhoff indices of graphs in recent years. Finding the explicit formulae for the Kirchhoff ...
We obtain the formulae for the total count of spanning trees of $P_n$ and $P'_n$ using Theorem 4 as follows.
Being motivated by the success of the Wiener index, nearly four decades later, in 1993, Klein and Randić put forward a novel structure-descriptor topological index, known as the Kirchhoff index [11]. The Kirchhoff index of a graph $G$ is calculated using the formula $\operatorname{Kf}(G)=\frac{1}{2}\sum_{i=1}^{|V|}\sum_{j=1}^{|V|}r_{ij}$...
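As a small illustration of this definition (not taken from the excerpt): for the triangle $K_{3}$ every effective resistance is $r_{ij}=2/3$, so
$$\operatorname{Kf}(K_{3})=\frac{1}{2}\sum_{i=1}^{3}\sum_{j=1}^{3}r_{ij}=\frac{1}{2}\cdot 6\cdot\frac{2}{3}=2,$$
consistent with the spectral form $\operatorname{Kf}(G)=|V|\sum_{i\ge 2}1/\lambda_{i}$, since the Laplacian eigenvalues of $K_{3}$ are $0,3,3$ and $3\left(\tfrac{1}{3}+\tfrac{1}{3}\right)=2$.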
The formulae of the degree-Kirchhoff index of linear hexagonal chains [9], hexagonal Möbius chains [16], linear crossed hexagonal chains [18], linear polyomino chains [8], cylinder phenylene chains [17], random polygonal chains [13], generalized phenylenes [25], cylinder, and Möbius octagonal chain [15], linear pentag...
A
If $t\in\{t^{1},\ldots,t^{k}\}$, then we say that $t$ is in the support of $\tau$.
We first describe the classic setting, in which a contract, denoted by $\langle t,i\rangle$, is a payment function $t$ and a recommended action $i\in[n]$. The interpretation is that the principal posts a contract, the agent observes the c...
The principal now posts an ambiguous contract, the agent observes the ambiguous contract and chooses an action and bears the attendant cost, a payment function is selected from the support of the ambiguous contract, an outcome is drawn from the distribution over outcomes induced by that action, and the principal makes...
To capture this ambiguity, we define an ambiguous contract to be a collection of payment functions $\tau=\{t^{1},t^{2},\ldots,t^{k}\}$...
A classic contract for this setting includes a payment function $t=(t_{1},\ldots,t_{m})$, where $t_{j}$...
B
Rather than directly bound bias using ALMOKL stability, we use mutual information as a technical intermediate. Specifically, in Section 2.4, we describe a proof of the following. If the analyst asks a series of adaptively chosen queries to a sample $\bm{S}\sim\mathcal{D}^{n}$...
As is typical of generalization results based on mutual information, our guarantees in Section 2.5 only bound the expected bias. While one can use Markov’s inequality to get an explicit dependence on the failure probability, that dependence will be polynomial in the inverse failure probability. In Section 2.6, we descr...
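For completeness, the Markov step mentioned here is the standard bound: if the expected bias is at most $\varepsilon$, then for any failure probability $\delta\in(0,1)$,
$$\Pr\!\left[\text{bias}\ge \varepsilon/\delta\right]\le\delta,$$
so the bias guarantee degrades by a factor of $1/\delta$, i.e., polynomially in the inverse failure probability.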
It could have been made simpler by not “grouping” $S$ into $k$ disjoint groups. In this variant, whenever the mechanism wants a vote, it simply samples $\mathds{1}[\bm{z}\geq r]$ where $\bm{z}\sim\varphi_{t}(S)$...
The high probability guarantee of Theorem 10 does not follow from only a mutual information bound. Instead, we use mutual information to achieve a constant failure probability and separately prove a reduction from the low failure probability case to that of constant failure probability.
Theorem 9 proves that, with constant success probability, the analyst cannot find a test on which the sample is biased. In this section, we prove Theorem 10 showing that, when all of the analyst's queries $\varphi:X^{w}\to Y$...
A
A recurring theme in this context is accounting for additional symmetries. The variables $y_{I}$ of the Lasserre system of equations, cf. Definition 2.8, are indexed by sets of vertex pairs rather than by tuples of such.
Theorems 3.1 and 3.2 summarise our results. The notions in items 2–4 and the graph classes $\mathcal{L}_{t}$ and $\mathcal{L}_{t}^{+}$...
In the first part of the paper (Section 3), linear algebraic tools developed in [mancinska_relaxation_2017, mancinska_quantum_2019] are generalised to yield reformulations of the entire Lasserre hierarchy with and without non-negativity constraints. Section 4 is concerned with the graph theoretic properties of the grap...
In the subsequent sections, Theorems 3.1 and 3.2 will be proven in parallel. The equivalence of items 1 and 2, 2 and 3, and 3 and 4 are established in Section 3.3, Section 3.2, and Section 3.4, respectively. The statements on homomorphism indistinguishability are proven in Section 4.
Using techniques from [grohe_homomorphism_2022], we finally establish a characterisation of when the level-$t$ Lasserre relaxation of $\operatorname{ISO}(G,H)$ is feasible in terms of homomorphism indistinguishability of $G$ and $H$. In order to ...
C
So the personal distances in sideways scenarios are expected to be longer than those of forward cases. This could also be observed from bar plots like Figure 6. Similarly, we expect the backward condition to give participants the feeling of uncertainty about the potential change of direction. The other important factor...
Comparing gaze. According to Mumm et al. [36], gaze has a significant effect on the personal physical distance that humans maintain. The distance increases when more gaze is involved between the robot and the participant. They also found that there is high consistency in humans' distances from the robot and their ...
The ever-wider and deeper integration of robots into human lives has brought them into more private spaces. There are already strong needs and solid integration of robots in home services [44, 28]. It is necessary for robots to understand and use human proxemic rules so that they can be well-situated and maintain a friend...
This research designed a dynamic human-robot proxemic scenario with a four-legged canine-inspired robot, to examine the effects of moving orientation, gazing, and personal robotic experience on the distances people maintained from the robot. We concluded that in subconscious interactions, when participants pass by a r...
Instead of letting participants subjectively decide which distance they want to maintain with the canine robot, we integrate the process in a more subconscious way: the participants are asked to perform an unrelated task, in the middle of which the canine robot moves and passes by the participants. The personal distanc...
A
Here $\xi$, $\mu$ and $\beta$ are the homogenization coefficients of the comfort index. The results in Tab. 4 show a significantly lower comfort index for the neural network method. For the cyclic coordinate descent method, the comfort index goes to infinity because the joint angles are near their limit...
This subsection explores the use of neural networks to find inverse kinematics (IK) solutions for the human leg. Shah et al. [22] applied deep artificial neural networks to solve the IK problem for a 5-axis serial link robotic manipulator. They achieved an accuracy where the deviation between the end effector and the ...
Robustness: the lower limb posture is represented by the generalized coordinates $q=\left[\begin{array}{ccc}\theta_{1}&\theta_{2}&\theta_{3}\end{array}\right]^{T}$...
This paper presents a comparative study of inverse kinematics techniques for human lower limbs. Theoretical results indicate that the neural network method outperforms the other methods in terms of root mean square position error, computational time, and the generation of realistic postures. The comfort of human motio...
The results of the inverse kinematics simulation for the lower limbs using the Cyclic Coordinate Descent (CCD) method are shown in Figure 5. To determine the position error illustrated in Figure 6, we used these results along with the forward kinematics described in Equation (5) to generate the end effector trajectory...
C
A property of digraphs is a set of finite digraphs closed under isomorphism. A digraph $G$ is $\varepsilon$-far from having a property $\Phi$ if any digraph $G^{\prime}$ on the vertex set $V(G)$ that d...
Alon and Shapira [4] proved that every monotone property of undirected graphs (that is, closed under the removal of edges and vertices) is strongly testable, see Lovász and Szegedy for an analytic approach [22], while Rödl and Schacht generalized this to hypergraphs [23], see also Austin and Tao [8]. Similar results ha...
Unfortunately, the dependence on $\varepsilon$ can be quite bad already in the case of undirected graphs: the known upper bounds in the Alon-Shapira theorem are wowzer functions due to the iterated involvement of Szemerédi's regularity lemma. Following Alon and Fox [7], we call a property easily testable if f...
The goal of this paper is to study testability of finite posets as special digraphs. By a poset, we mean a set equipped with a partial order $\prec$ that is anti-reflexive and transitive. Alon, Ben-Eliezer and Fischer [1] proved that hereditary (closed under induced subgraphs) classes of ordered graphs are stro...
The relationship between local and global properties of structures is a central theme in combinatorics and computer science. Since the work of Rubinfeld and Sudan [25], testing properties by sampling a small number of elements is an emerging research area. A classical result of this kind is the triangle removal lemma ...
A
Besides RDF (directed graphs) or property graphs, in some cases, custom models or special high-arity representations could be used to cover specific features, such as access levels, temporal information, or multihop relations in one record (node-edge-edge-node) [53].
Graph embeddings have also seen some attention in ontology matching as they can capture structural information of an ontology. For example, Portisch et al. [162] use a variation of RDF2Vec [163], which is a walk-based embedding technique similar to Word2Vec, to encode both ontologies and then use a rotation matrix to a...
Lassila et al. [51] conclude that both formats are qualified to meet their challenges and neither of the two is perfect for every use case. They thus recommend increasing interoperability between both models to reuse existing techniques of both approaches. Various efforts to address this problem have been made in recent ye...
Given the focus on semi-structured data sources, the techniques for knowledge extraction are generally relatively advanced compared to other steps in KG construction. This has also been made possible by the frequent use of existing knowledge extraction tools such as Stanford CoreNLP, as we will see in the discussion of th...
In the following, we briefly describe both and discuss how they meet the introduced desiderata. Table 1 summarizes some of the key differences of both models. At the end, we also contrast the different terminology of the models and specify the terms used in the rest of this paper.
B
Initially, the set of waypoints required to generate a trajectory in task space is assumed to be available from the task planner. The trajectory is then planned using the minimum jerk criterion to ensure smooth acceleration of the joints, thereby reducing vibrations and avoiding resonance frequencies. For this simulati...
In this paper, the dual quaternion-based theory is applied to the kinematics and dynamics study of the 7-DOF human lower limbs in 3D space. Subsequently, the artificial neural networks method is used to solve the inverse kinematics problem. The efficiency of the artificial neural networks method is verified using the j...
This section focuses on the dynamic description of the lower limb shown in Figure 2 using the Dual Quaternion-based recursive Newton-Euler method. This method involves calculating the velocities and accelerations of the center of mass of each link, known as twists, based on the positions, velocities, and accelerations...
In this section, the Forward Kinematics (FK) of the lower limbs, depicted in Figure 1, using dual quaternions is established. FK involves computing the positions and orientations of the end-effector in task space from the axes and angles of the joint rotations. The lower limb is decomposed into four segments: the pelv...
Afterward, the inverse kinematics (IK) of the lower limb is computed using a multi-layer perceptron trained with the Levenberg-Marquardt backpropagation algorithm, utilizing a dataset of 400,000 samples. The network architecture is illustrated in Figure 4, featuring a two-layer feed-forward structure comprising a hidde...
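For orientation only, the kind of network described here might be set up as follows (the input/output sizes, hidden width, and the use of a generic PyTorch definition instead of Levenberg-Marquardt training are all assumptions for illustration, not the paper's configuration):

```python
import torch

# Hypothetical IK regressor: dual-quaternion end-effector pose (8 values) -> 7 joint angles.
# Hidden width and activation are assumed; the paper trains with Levenberg-Marquardt
# backpropagation, which PyTorch does not provide, so a standard optimizer would stand in here.
ik_net = torch.nn.Sequential(
    torch.nn.Linear(8, 64),   # hidden layer (width assumed)
    torch.nn.Tanh(),
    torch.nn.Linear(64, 7),   # output: joint angles
)

pose = torch.randn(1, 8)       # placeholder desired pose
print(ik_net(pose).shape)      # torch.Size([1, 7])
```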
D
We consider the variance-reduced cubic Newton method from (Zhou et al., 2019) (referred to as “full VR”), its lazy version where we do not update the snapshot Hessian (“Lazy VR”), the stochastic Cubic Newton method (“SCN”), the Cubic Newton algorithm (“CN”), Gradient Descent with line search (“GD”) and Stochastic Gradi...
Figure 6: (left) times needed to compute the gradient, the Hessian, decompose the Hessian and solve the cubic subproblem for a diagonal neural network with $n=10000$ and different values of the dimension $d$. (right) average time for computing the Hessian divided by the average tim...
second-order optimization algorithms. We take into account that the cost of one stochastic Hessian is proportional to $d$ times the cost of the stochastic gradient, where $d$ is the problem dimension, which holds for general dense problems.
We consider again a diagonal neural network and estimate the time costs needed for computing its gradient, Hessian, decomposing the Hessian, and solving the cubic subproblem. Figure 6 shows that the average cost of computing the Hessian is significantly higher than the cost of computing one gradient, and the quotient g...
Figure 1 shows that the lazy version saves both time and arithmetic computations without sacrificing the convergence precision. In these graphs, Gradcost is computed using the convention that computing one Hessian is $d$ times as expensive as computing one gradient.
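Under that convention, the accounting amounts to (a restatement of the stated rule, not an additional assumption)
$$\text{Gradcost}\;=\;\#\{\text{gradient evaluations}\}\;+\;d\cdot\#\{\text{Hessian evaluations}\}.$$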
D
$$R_{k}^{BF}=\log_{2}\!\left(1+\frac{P}{\sigma^{2}}\left||h_{d,k}|+\sum_{n=1}^{N}|f^{X}_{n}g_{k,n}|\right|^{2}\right).$$
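A minimal numerical sketch of this rate expression (the Rayleigh-fading channel draws and parameter values below are assumptions for illustration, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                      # number of IRS elements (assumed)
P, sigma2 = 1.0, 1e-3       # transmit power and noise power (assumed)

# Rayleigh-fading direct, BS-IRS and IRS-UE channel coefficients (assumed model).
h_dk = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
f_x  = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
g_k  = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

# With the IRS phases aligned to UE k, the channel magnitudes add coherently.
gain = (np.abs(h_dk) + np.sum(np.abs(f_x * g_k))) ** 2
R_bf = np.log2(1 + P / sigma2 * gain)
print(f"R_k^BF = {R_bf:.2f} bits/s/Hz")
```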
which represents the difference in the SNR/channel gain at a UE $q$ (OOB-UE) served by BS-Y with and without the IRS in the environment. In Fig. 4, we plot the CCDF of $Z^{(Y)}_{N}$...
Our results show that deploying an IRS not only improves the throughput of the operator who controls the IRS phase configuration to optimally serve its own users, but also enhances the throughput of users associated with an OOB operator who has no control over the IRS, albeit by a smaller amount compared to the in-band...
Now, due to the independence of the channels of the users served by operators X and Y, the phase configuration used by operator X to serve its own users appears as a random phase configuration of the IRS for any UE served by operator Y. In the sequel, we quantify the impact of the IRS on the throughput achieved by the ...
In this paper, we analyzed the effect of deploying an IRS on the performance of an OOB operator that has no control over the IRS. We showed that while the IRS optimally serves the in-band UEs, it simultaneously, and at no additional cost, enhances the quality of the channels of the OOB UEs. This enhancement in the OOB ...
C
In particular, all interaction concepts can be further categorized into a set of salient concepts $S\in\Omega_{\boldsymbol{x}}$ with considerable effects $I(S|\boldsymbol{x})$...
Fig. 1 shows interaction concepts $S$ and the corresponding effects $I(S|\boldsymbol{x})$ extracted by PointNet (Qi et al., 2017a) from different samples $\boldsymbol{x}$ in the ShapeNet dataset. We find that the interaction concept $S=\{$...
Figure 1: Visualization of interaction concepts $S$ extracted by PointNet on different samples in the ShapeNet dataset. The histograms show the distribution of interaction effects $I(S|\boldsymbol{x})$ over samples in the “motorbike” category, where $S$...
We trained AlexNet, ResNet-18, and VGG-13 on both the original dataset and the modified dataset. Compared with DNNs learned on the original dataset, DNNs learned on the modified dataset were more likely to simply use the color information for classification.
Note that a sample in the ShapeNet dataset (Yi et al., 2016) usually contains 2500 3D points. To simplify the visualization, we simply consider 8-10 semantic parts on the point cloud $\boldsymbol{x}$, which have been provided by the dataset. For example, the ShapeNet dataset has provided the annotated p...
D
Complexity (order) of interactive concepts.  The complexity of the interactive concept $S$ is defined as the number of input variables contained in the concept, which is also termed the order of the concept, i.e., $\textit{order}(S)=|S|$.
In this paper, we provide a conceptual understanding of the reason why low-order concepts in training data can usually better generalize to testing data than high-order concepts. Specifically, we prove that the average inconsistency of concepts usually increases exponentially along with the order of concepts. We find t...
All above experimental findings on the generalization power of concepts are related to the phenomenon of the inconsistency of high-order concepts, i.e., high-order concepts are more sensitive to small noises in the input sample than low-order concepts. Therefore, we aim to prove that the interaction effect’s variance o...
Although there is a common heuristic that complex concepts are usually more likely to be over-fitted, people still do not know the exact definition of concepts with an analytic connection to their generalization power. Because we also find the low generalization power of complex (high-order) interactive concepts, in th...
Although there is a common intuition that more complex representations usually lead to over-fitting, this study uses an analytic inconsistency of concepts to explain the connection between the complexity of interactive concepts and their generalization power. The complexity of an interactive concept $S$ is de...
C
A robust property of MoLU is that it rapidly approaches the minimum of a loss function without losing stability. This is a truly useful characteristic when training on long time-series data using NeuralODEs (Neural Ordinary Differential Equations). To demonstrate the performance of MoLU, we conducted experiments on Neu...
We conducted experiments with MNIST, the most commonly used dataset in the field of image classification. We used the MNIST dataset from torchvision and a 2-layer network optimized with SGD, using a batch size of 64, a learning rate of 0.001, a momentum of 0.5, and a random seed of 10. We confirmed that ou...
We begin our experiment with the Lotka-Volterra model, a commonly used simple model for NeuralODEs. Coefficients and initial conditions in the Lotka-Volterra equations were all identical to the setting in [4]. Following [3], we generated training data numerically over the time span $t\in[0,6.1]$ ...
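Such a data-generation step can be sketched as follows (the coefficients and initial conditions below are placeholders, since the excerpt only says they match [4]):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Lotka-Volterra dynamics; alpha, beta, gamma, delta are placeholder values,
# not the coefficients from [4].
alpha, beta, gamma, delta = 1.5, 1.0, 3.0, 1.0

def lotka_volterra(t, z):
    x, y = z
    return [alpha * x - beta * x * y, -gamma * y + delta * x * y]

t_eval = np.linspace(0.0, 6.1, 200)                    # time span t in [0, 6.1]
sol = solve_ivp(lotka_volterra, (0.0, 6.1), [1.0, 1.0], t_eval=t_eval)
train_data = np.stack([sol.t, sol.y[0], sol.y[1]], axis=1)
print(train_data.shape)                                # (200, 3): time, prey, predator
```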
In NeuralODEs (Neural Ordinary Differential Equations), the ELU (Exponential Linear Unit) [1] or the Tanh function has typically been used as the activation function because of its differentiability over the whole domain. The prediction performance of NeuralODEs was not good for longer time-series data, or even short time-series...
We found that our activation function not only shows good accuracy but also drives the loss rapidly toward zero during tests on some mathematical models and neural networks. We formulated our activation function as follows. First, we used the hyperbolic tan...
B
In our case, as the worst-case delay to generate a child is the same as that of generating all children, we do not even need to argue that we can resume the enumeration from the $i$-th child as long as we are interested in the asymptotic delay, though this can be done to speed up the implementation. As for deciding ...
By the same arguments, it is easily seen that we can compute the solution that comes right before any given solution $C$ within the same time and space. In other words, our implementation of reverse search can be made so that it only uses the memory of the current node during the DFS of the solution tree, with...
We note that the general framework of reverse search [1], equipped with the alternating output technique [26], yields a natural polynomial-time algorithm to produce the solution that comes after any given solution $C$ in the enumeration, provided that we are able to decide in polynomial time whether $C$...
Thus, in general, reverse search only needs memory space that is linear in the height of the solution tree times the space needed to generate children. As for the delay, it only depends on the time needed to compute the parent and the delay needed to generate children when using folklore tricks such as the alternating o...
In our case, as the worst-case delay to generate a child is the same as that of generating all children, we do not even need to argue that we can resume the enumeration from the $i$-th child as long as we are interested in the asymptotic delay, though this can be done to speed up the implementation. As for deciding ...
A
Overall, our theoretical results take a step towards understanding the approximation error of transport-based sampling and density estimation algorithms. At the same time, the present analysis suggests an extensive list of open questions for future research:
Throughout this paper, $\eta$ is the reference probability measure on $\Omega$, $\nu$ is the target measure, and $T^{\dagger}$ is an exact pushforward $T^{\dagger}_{\sharp}\eta=\nu$...
Given a set $\Omega\subseteq\mathbb{R}^{d}$ equipped with two Borel probability measures, a target $\nu$ and a reference $\eta$, we consider the approximation to $\nu$:
Combining the approximation theory framework of this paper with statistical consistency and sample complexity results, in settings where maps are estimated from empirical data (without knowledge of the true underlying density). Here either the target measure $\nu$ is given by samples and $\eta$ is kno...
In the next set of experiments, we choose $\eta$ to be the standard Gaussian measure $\mathcal{N}(0,1)$ on $\mathbb{R}$ and let $\nu$ be the one-dimensional Gumbel distribution, which is supported over the entire real line. The density of $\nu$ is...
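In one dimension the exact pushforward can be written in closed form as the increasing rearrangement $T^{\dagger}=F_{\nu}^{-1}\circ\Phi$, where $\Phi$ is the standard normal CDF; a small scipy-based sketch (illustrative only, not the paper's code):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
z = rng.standard_normal(100_000)          # samples from the reference eta = N(0, 1)

# Exact monotone transport map T = F_nu^{-1} o Phi, pushing N(0,1) onto the Gumbel target.
x = stats.gumbel_r.ppf(stats.norm.cdf(z))

# Compare a few empirical quantiles of T(z) with the target Gumbel quantiles.
qs = [0.1, 0.5, 0.9]
print(np.quantile(x, qs))
print(stats.gumbel_r.ppf(qs))
```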
C
First, we compare the performance of MMA-MRNNet to that of various baseline [6] and state-of-the-art methods: ViPER and Netease Fuxi Virtual Human methods (which are multi-modal methods exploiting audio, visual and text information); the best performing HFUT-CVers method (presented in the related work section; it is...
We demonstrated the effectiveness of MMA-MRNNet on the Hume-Reaction dataset, where it consistently outperformed by large margins all state-of-the-art methods. We also demonstrated the effectiveness of the MMA component across multiple in-the-wild datasets, where it consistently outperformed all state-of-the-art method...
First, we compare the performance of MMA-MRNNet to that of various baseline [6] and state-of-the-art methods: ViPER and Netease Fuxi Virtual Human methods (which are multi-modal methods exploiting audio, visual and text information); the best performing HFUT-CVers method (presented in the related work section; it is...
Table 2 shows that our uni-modal non-ensemble learning MMA-MRNNet (that exploits only the visual information and does not employ any ensemble learning) outperforms all other methods by large margins (although some methods are multimodal ones or even ensembles). Let us also note that all baseline and state-of-the-art me...
The performance of the proposed MMA method was evaluated against several state-of-the-art approaches across multiple datasets (Aff-Wild2, AffectNet and EmotioNet), as detailed in Table 5. The MMA component consistently outperformed by large margins all methods on all tasks (7 basic expression recognition, AU detection...
C
A conformity function is a mapping $\rho:\mathbb{R}^{d}\times\mathscr{Y}\times\Omega\to\mathbb{R}$ such that $\rho(x,y)=\rho(x,y,\,\boldsymbol{\cdot})$...
Note that the regularity of a specific conformity function ρ𝜌\rhoitalic_ρ is contextual, being inherently dependent on the distribution of the underlying data sequence. Technically, we can always avoid ties among the sequence of conformity scores almost surely by introducing a properly constructed ancillary tie-break...
In general, the coverage indicators $Z_{i}$ are dependent random variables, since for all future observables the corresponding conformal prediction sets in Definition 3 are defined in terms of the same calibration sample conformity score $S_{(\lceil(1-\alpha)(n+1)\rceil)}$...
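As a concrete illustration of how that calibration order statistic is used (a minimal split-conformal sketch with an absolute-residual conformity score; all modelling choices below are assumptions, not taken from the text):

```python
import numpy as np

def split_conformal_interval(mu_hat, X_cal, y_cal, x_new, alpha=0.1):
    """Prediction interval for x_new from calibration conformity scores."""
    scores = np.abs(y_cal - mu_hat(X_cal))            # S_i = |y_i - mu_hat(x_i)|
    n = len(scores)
    k = int(np.ceil((1 - alpha) * (n + 1)))           # index of S_(ceil((1-alpha)(n+1)))
    q = np.sort(scores)[min(k, n) - 1]                 # calibration quantile
    center = mu_hat(np.atleast_2d(x_new))[0]
    return center - q, center + q

# Toy usage with a hypothetical fitted mean function mu_hat.
rng = np.random.default_rng(0)
X_cal = rng.uniform(-1, 1, size=(200, 1))
y_cal = 2 * X_cal[:, 0] + rng.normal(scale=0.3, size=200)
mu_hat = lambda X: 2 * X[:, 0]                         # assume this was fit on separate data
print(split_conformal_interval(mu_hat, X_cal, y_cal, np.array([0.5])))
```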
Conformity functions are agnostic to the choice of the specific models or algorithms used to construct $\hat{\mu}$, $\hat{\xi}_{p}$, and $\hat{\pi}$ in Exa...
Under the data exchangeability assumption, the sequence of conformity scores $\{S_{i}\}_{i\geq 1}$ is exchangeable.
A
Furthermore, we extend this encoder with a Stream-Segment Temporal Modeling Module (S2TM) to learn temporal dynamics for action recognition. Specifically, we split a long-duration event stream into a sequence of segmented voxel sets and extract spatiotemporal features per segment by the encoder. Then, a sequence of fe...
This work introduces a novel attention-aware model (EVSTr) for spatiotemporal representation learning on event streams. EVSTr takes event voxel sets as input to fit the sparsity of event data and hierarchically learns robust representations for recognition tasks. The proposed event voxel transformer encoder, consistin...
We introduce the event voxel transformer encoder with well-designed MNEL and VSAL layers to hierarchically extract spatiotemporal features from local to global. The segment modeling strategy (S2TM) endows our network with a long-range temporal modeling capability.
Figure 4: The architecture of three components in the EVSTr model. (a) Multi-Scale Neighbor Embedding Layer (MNEL). It attentively aggregates multi-scale neighbor features into a local representation for each voxel. (b) Voxel Self-Attention Layer (VSAL). It performs inter-voxel feature interactions to enhance global re...
An overview of the proposed EVSTr model is illustrated in Fig. 3. EVSTr has two working modes corresponding to short-duration and long-duration recognition with event cameras. It comprises three stages: event representation, spatiotemporal feature learning, and long-range temporal modeling. For object classification, ...
B
CNN-based methods [20, 21] usually require fewer touches but suffer from lower resolution due to computational requirements. Smith et al. proposed approaches [22, 23] based on Graph Neural Network. Reconstructions by these methods have a higher resolution but are nonsmooth and, for now, evaluated only in simulation.
An important part of haptic exploration is the decision where to touch. The object can be touched randomly as done by Smith et al. [22], or always select a position opposite the camera (from “behind”) as Watkins-Vall et al. [20]. However, these are not as effective as an uncertainty-driven approach. Uncertainty can co...
CNN-based methods [20, 21] usually require fewer touches but suffer from lower resolution due to computational requirements. Smith et al. proposed approaches [22, 23] based on Graph Neural Network. Reconstructions by these methods have a higher resolution but are nonsmooth and, for now, evaluated only in simulation.
Points with a high loss from Eq. 8 are not on the estimated surface. Intuitively, if all points were certain, the loss would be zero for all of them. However, this was not the case in our experiments. Therefore, we compute the Eikonal loss from Eq. 8 for all points on our current shape $O$ and take it as ...
where, in our case, $\mathbf{x}$ is a point defined in 3D space and $s$ is the signed distance. Traditionally, $f$ would be described analytically, but it can also be learned with a neural network. Then, the implicit surface generated with a neural network can be described as
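Complementing the Eikonal criterion discussed above, here is a small PyTorch-style sketch of that residual (the network and point sampling are placeholders, not the authors' implementation):

```python
import torch

def eikonal_loss(f, points):
    """Mean squared deviation of ||grad f|| from 1 at the given 3D points."""
    points = points.clone().requires_grad_(True)
    sdf = f(points)                                    # predicted signed distances
    grad = torch.autograd.grad(sdf.sum(), points, create_graph=True)[0]
    return ((grad.norm(dim=-1) - 1.0) ** 2).mean()

# Toy usage with a placeholder SDF network.
f = torch.nn.Sequential(torch.nn.Linear(3, 64), torch.nn.Softplus(), torch.nn.Linear(64, 1))
pts = torch.rand(1024, 3) * 2 - 1                      # points sampled in [-1, 1]^3
print(eikonal_loss(f, pts).item())
```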
A
Infinitary $\omega$-clones have been mainly studied with respect to both local topology and global topology. However, to extend the previous results to $\omega$-clones that are not necessarily infinitary, we require a new concept of polymorphism.
In order to characterise the $\omega$-relation clones of locally closed $\omega$-relations ($c\omega$-relation clones) we define the notion of decreasing sequences of finitary relations. Each of these sequences has a locally closed $\omega$-relation as a limit and we show...
In this section, we aim to introduce and explore this new notion of polymorphism that will allow us to generalise Theorem 7.3 to a wider class of $\omega$-clones, and to characterise trace (resp. uniform) closed sets of $\omega$-operations.
To describe $\omega$-clones that are not necessarily infinitary through invariant relations, in Section 7.3 we introduce the notion of matrical polymorphisms. The main result characterising $X$-closed $\omega$-clones is presented in Theorem 7.13. As a corollary, we obtain a characterisation o...
In this section we introduce the $\omega$-relation clones and prove that $Inv^{\omega}_{\mathcal{G}}(C)$ is an $\omega$...
B
Many efforts have been devoted to finding diverse solutions in combinatorial problems. In their seminal paper [KGD93], Kuo et al. were the first to explore this problem from a complexity-theoretic perspective. They showed that the basic problem of maximizing a distance norm over a set of elements is already NP-hard. Si...
Informally, given a directed graph $G$ along with two specified vertices $s$ and $t$, and an integer $k$, we are interested in finding a collection of $k$ $s$-$t$ mincuts in $G$ that are as different from each other as possible; that is, a collecti...
We showed that the $k$-Diverse Minimum s-t Cuts problem can be solved efficiently when considering two natural measures for the diversity of a set of solutions. There exist, however, other sensible measures of diversity. One that often arises in literature is the minimum pairwise Hamming distance of a collecti...
Along the same line, Hanaka et al. [HKK+22] and Gao et al. [GGK+22] recently developed frameworks to design approximation algorithms for diverse variants of combinatorial problems. On the positive side, diverse variants of other classic problems are known to be polynomially solvable when considering certain set-based ...
Many efforts have been devoted to finding diverse solutions in combinatorial problems. In their seminal paper [KGD93], Kuo et al. were the first to explore this problem from a complexity-theoretic perspective. They showed that the basic problem of maximizing a distance norm over a set of elements is already NP-hard. Si...
C
We let $X$ be the real line and the random transition $\Gamma(\cdot|u)$ be a Gaussian $\mathcal{N}(u,1)$ with mean $u$. The space of measures is $X_{M}=[0,1]$...
In quantum mechanics, given a quantum state $\ket{\psi}$, a measurement, or POVM, $E$ produces a probability measure $E\ket{\psi}$ over strings. This probability represents the classical information produced from the measurem...
Quantum information theory studies the limits of communicating through quantum channels. This section shows the limitations of the algorithmic content of pure states and their measurements. Given a measurement apparatus $E$, there is only a tiny fraction of quantum pure states on which $E$'s applicati...
Another interesting property of quantum mechanics is that the vast majority of quantum pure states themselves will have negligible algorithmic self information $\mathbf{I}_{\mathcal{Q}}(\ket{\psi}:\ket{\psi})$...
We prove conservation of probabilities over successively general spaces. This includes finite sequences, infinite sequences, and $T_{0}$, second countable topologies. Conservation of probabilities over the case of finite and infinite sequences follows directly...
B
Based on the Mixture Normalization (MN) hypothesis proposed by [6] (ref. to Figure 1), our Context Normalization (CN) approach operates under a similar assumption that data can be effectively represented by a mixture of multiple components, as opposed to batch normalization (BN) [4]. In the Context Normalization (CN)...
TABLE VII: Comparison of the two Context Normalization methods on CIFAR-100: Context Normalization on Patches (CN-Patches) and Context Normalization on Channels (CN-Channels), with normalization to the mean and standard deviation of the dataset (ViT) and input normalization using batch normalization (BN).
A concise overview of the processing steps involved in Batch Normalization (BN), Mixture Normalization (MN), and Context Normalization (CN). The dashed line in the Batch Normalization diagram indicates a mini-batch parameter update, highlighting a key step in the process.
Mixture Normalization. In the context of deep neural networks (DNNs), the distribution of activations is almost certain to have multiple modes of variation due to the non-linearities. The batch normalization (BN) [4] hypothesis that a Gaussian distribution can model the generative process of mini-batch samples is less v...
Different normalization techniques, including activation normalization, weight normalization, and gradient normalization, are employed to enhance the training performance of DNNs. To normalize activations, the most common technique is Batch Normalization (BN) [4]. BN has been proposed to solve the problem caused by the...
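For reference, the standard BN transform that these variants build on (a minimal training-mode sketch, not tied to any of the cited implementations):

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Normalize a mini-batch per feature, then apply the learned affine transform."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

x = np.random.randn(64, 128)                 # mini-batch of 64 samples, 128 features
y = batch_norm(x, gamma=np.ones(128), beta=np.zeros(128))
print(y.mean(axis=0)[:3], y.std(axis=0)[:3]) # approximately 0 and 1 per feature
```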
B
All methods are based on ID training data without using any external outlier data. † indicates that the results are taken from the original paper, and other methods are reproduced using the same network architecture. Four post-hoc foreground OOD detection methods are respectively plugged into our method ‘X’-DFB, where ...
With these pseudo mask labels, we then use a modified Dense-BiT architecture to train the $(K+1)$-class dense prediction model with the BiT-M-R50x1 checkpoints as the initial weights. All input images are resized to 128×128 during training and inference. We replace the Mixup augmentation used in t...
Modelling the uncertainty of pre-trained DNN directly without retraining the network is one popular approach for OOD detection (Gomes et al., 2022; Cook et al., 2020; Huang et al., 2021; Sastry and Oore, 2020; Bendale and Boult, 2016; Wang et al., 2021; Zisselman and Tamar, 2020; Dong et al., 2022; Liu et al., 2023; Yu...
Implementation Details. We use BiT-M (Kolesnikov et al., 2020), a variant of ResNetv2 architecture (He et al., 2016), as the default network backbone throughout the experiments. The official release checkpoint of BiT-M-R50x1 trained on ImageNet-21K is used as our initial $K$-class in-distribution classificatio...
Outlier Exposure (OE) (Hendrycks et al., 2019) introduces auxiliary outlier data to train the network and improve its OOD detection performance. Such approaches can use real outliers (Mohseni et al., 2020; Liu et al., 2020; Chen et al., 2020; Papadopoulos et al., 2021; Wu et al., 2021; Yang et al., 2021; Choi et al., 2...
C
In particular, we use DMs to capture the relationship between under-exposed and normally-exposed images. Other work has shown that DMs may be used as backbone feature extractors which predict features based on noisy inputs [41, 42]. Furthermore, the task of denoising itself has been shown to assist with seemingly unrel...
LPDM proposed in this study also models the conditional distribution between low-light and normally-exposed images; however, we use the diffusion paradigm to achieve this. Furthermore, we repurpose the function of a DM to be used as a noise detector. Therefore, LPDM provides a subtractable estimation of the noise in a...
The remainder of this paper is structured as follows: Section II provides background information on LLIE; Section III provides background information on DMs; Section IV outlines preliminary mathematical notation and describes the proposed framework in detail; Section V contains the experimental setup and results for t...
In this work, we propose a technique where a conditional DM is used to remove noise from images which have undergone LLIE. The remainder of this section is structured as follows: in Section IV-A, the background information about DMs is outlined; the architecture used for LPDM is described in Section IV-B; finally, in ...
Section V-A describes the datasets used in this study; Section V-B defines the configuration of LPDM and the training parameters used for all experiments; Section V-C provides detail on the LLIE models selected for comparison with LPDM; in order to achieve a fair comparison, we compare our approach to alternative denoi...
C
Previous research has shown that individuals exhibit distinct cognitive patterns and abilities when engaging in online information-seeking activities, with several characteristics identified as influential factors. For instance, it has been demonstrated that information-seeking performance is not solely contingent upon...
Our study highlights the role of individual characteristics in participants’ navigation performance within the knowledge space, with this influence being modulated by constraints such as time and distance. We discovered that prior experience with Wikipedia, the navigation game, and familiarity with the target page are...
Another motivation for our analysis stems from the connection between navigation in the physical space and knowledge space. Previous research has demonstrated that the same neural regions that are responsible for navigation in physical space are also involved in navigating the knowledge space: the hippocampus and ento...
In addition, our observations indicate that individual characteristics, including sex, ethnicity, native language, political stance, and reported spatial navigation skills, significantly influence navigation performance in one type of game (with time or distance constraints) but not the other. To fully understand these...
Previous research primarily focused on linking individual traits to navigation within physical space. Our study expands this literature by examining navigation within the knowledge space. Similar to physical space navigation [9, 31], age acts as an inhibitor here, likely due to declining cognitive abilities associated...
B
The first step of any simulation production is the generation phase in which the high-energy collisions are simulated with Monte Carlo generators such as Pythia8 [5] and EvtGen [6]. The output of the generation phase is the set of long-lived particles able to traverse partially or entirely, depending on the particle sp...
The simulation of high-energy collisions, of the decays of the generated particles, and of the physics processes occurring within the detector by the decay products is a key necessity of analysis, typically for separating the signal from background sources or for selection efficiency studies.
The radiation-matter interactions occurring within the detector by the traversing long-lived particles are reproduced during the simulation phase that aims to compute the energy deposited in the active volumes relying on Geant4 [7]. Lastly, during the digitization phase, the energy deposits are converted into raw data ...
The first step of any simulation production is the generation phase in which the high-energy collisions are simulated with Monte Carlo generators such as Pythia8 [5] and EvtGen [6]. The output of the generation phase is the set of long-lived particles able to traverse partially or entirely, depending on the particle sp...
The high-level responses of the RICH and MUON systems are reproduced using the particles' kinematic information provided by the Lamarr tracking modules and a description of the detector occupancy, for example based on the total number of tracks traversing the detector. The loss function adopted to train the PID-GAN model...
B
$$\mathcal{L}_{\text{pretrain}}=\lambda_{1}\cdot\mathcal{L}_{\text{align}}+\lambda_{2}\cdot\mathcal{L}_{\text{contact}}.$$
Figure 1: (a) The proposed cross-modal contrastive learning framework utilizes a pretrained protein language model to guide the training of the protein structure model through contrastive alignment loss. To reinforce information constraints on the structure, we introduce a self-supervised contact map prediction. (b) T...
We propose leveraging a pretrained protein language model to train protein structure models using cross-modal contrastive learning. Our approach demonstrates superior performance in various evaluation tasks. However, challenges remain, including the scope of language model transfer, data efficiency, generalization, co...
As depicted in Figure 1(b), the architecture of the inference phase utilizes only the pretrained language-enhanced structure model, rendering other flows unnecessary during this stage. Evaluating a pretrained protein structure model within a novel training framework poses significant challenges. To address these challe...
Furthermore, regarding different levels (Design$_{p}$ vs. Design$_{r}$), while there is no significant disparity between the residue-level and protein-level pretrained modules in downstream tasks, Design$_{r}$ slightly outperforms Design$_{p}$ overall. This observation suggests that any small gaps between the two levels of pretrained mode...
C
The main task is to design an effective $f(*)$ for KGE. An intuitive choice of $f(*)$ is multiple fully connected (FC) layers; however, FC layers require large numbers of parameters and are prone to overfitting for KGE [22]. Inspired by image processing, we refer to feature upsampl...
We further analyze the sensitivity of LiftNet regarding the number of TC layers on the WN18RR dataset. For demonstration, we vary the number of TC layers in LiftNet-based methods from one to four. Note that a one-layer LiftNet is still different from the linear transformation used in [28], as it has non-linear activation on the o...
The input dimensions highly affect the parameter efficiency of LiftNet-based methods; thus we analyze their influence on the model performance. We evaluate LN-TransE, LN-TransH, LN-DistMult, and LiftNet-ComplEx with four input dimensions $\{4,16,64,256\}$ on the WN18RR dataset. To do ...
Specifically, we adopt a transposed convolution (TC) layer in LiftNet. A TC layer broadcasts the input elements via kernels, thus increasing the dimension of the output. Different from traditional upsampling methods (e.g., nearest-neighbor, bilinear, and bicubic interpolation [29]), TC can capture the interactions among ...
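To make the dimension-lifting effect of a TC layer concrete (a hypothetical configuration, not LiftNet's actual hyperparameters):

```python
import torch

# A transposed-convolution layer that lifts a low-dimensional entity embedding.
# Treat a 16-dim embedding as a length-16 signal with one channel (configuration assumed).
tc = torch.nn.ConvTranspose1d(in_channels=1, out_channels=1,
                              kernel_size=4, stride=4)

emb = torch.randn(32, 1, 16)       # batch of 32 entities, 16-dim embeddings
lifted = torch.relu(tc(emb))       # non-linear activation after the TC layer
print(lifted.shape)                # torch.Size([32, 1, 64]) -> 64-dim representation
```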
Figure 1: In (a), conventional KGE models that use high-dimensional entity representations are equivalent to enlarging the width of the embedding layer. In contrast, we aim to achieve parameter efficiency by increasing the depth of the embedding network, i.e., a narrower embedding layer (low-dimensional entity representations) plus th...
C
A second example involves the use of quantum error correction [13] and/or quantum purification to enhance ebit fidelity at the cost of reduced entanglement generation rate [11]. For example, if one generates $n$ ebits in a link with fidelity $f_{\rm in}$...
Numerical simulations were performed to find the best route for the distribution of bipartite (Fig. 4) and multipartite (Fig. 5) entanglement for large complex networks using the link model described in [12]. In such setups, the entanglement between qubits is generated by means of laser pulses. An increase ...
This paper advances the research on quantum networks by introducing a highly versatile routing approach based on fidelity curves, which can be utilized in conjunction with purification protocols, including capacity-achieving ones [11]. The fidelity of an ebit can be quantified as the distance between the q...
Ultimately, whether by changing the laser pulse duration during the entanglement generation process, by quantum error correction and purification post-processing, or by a combination of both, we obtain a fidelity vs. rate curve. This curve represents the fidelity of the generated entangled pairs for a given rate and it is a genera...
A second example involves the use of quantum error correction [13] and/or quantum purification to enhance ebit fidelity at the cost of reduced entanglement generation rate [11]. For example, if one generates $n$ ebits in a link with fidelity $f_{\rm in}$...
C
Though polar codes obeying PO with the MWD sequence have the optimum MWUB in the high SNR region by Lemma 5, achieving the ML performance is difficult for long code lengths under SCL decoding with a limited list size. Thus, a suitable construction method based on MWD for SCL decoding is required.
To satisfy the entropy constraint (54), we propose an ECBS algorithm. Given an $(N,K)$ polar code, the ECBS algorithm first initializes the information set $\mathcal{A}$ by the MWD sequence $\mathbf{q}$. Then, we divide the synthetic channels into $(n+1)$...
In this section, we first introduce the entropy constraint according to the information-theoretic perspective on SCL decoding. Then, we use the entropy constraint to provide a construction method called entropy constraint bit-swapping (ECBS) algorithm.
In this paper, the construction methods based on MWD are proposed to improve the performance of polar codes under SCL decoding. We first prove that the ML performance can approach the MWUB as the SNR goes to infinity. Then, we design the ordered and nested MWD sequence to apply fast construction without channel inform...
A heuristic and greedy entropy constraint bit-swapping (ECBS) algorithm is proposed to improve the performance of polar codes under SCL decoding with limited list size. To design the ECBS algorithm, we establish a relationship between the list size and the MWUB by the entropies of the synthetic channels transmitting in...
B
So in particular the Searcher will never choose the unfavored arc (branch) when the signal is for the favored one. The use of biased depth-first strategies for the Searcher (random choices at every branch node) was introduced in another context in Alpern (2010) and Alpern and Lidbetter (2014), but those distrib...
The optimal Hider distribution over the leaf nodes can be found by a similar stochastic process in which the Hider starts at the root $O$ and at each branch node chooses a branch to enter according to a certain distribution. Of course, this is merely a mental calculation for the Hider, who is stationary in th...
Note that as the signal becomes more certain, so that $p\rightarrow 1$ and $q\rightarrow 0$, the value (2) goes to $D_{Q}$, which goes to the distance of the furthest leaf node from $O$. As previously remarked...
If the Hider lies in neither branch, any signal distribution may be used, as in this case the Searcher will return to node $j$ again after time equal to twice the length of the branches, regardless of her search method. As $p\rightarrow 1/2$, the signal becomes useless and the solution ...
This paper has addressed the question of how to optimally search for an adversarially hidden target on a tree network in the presence of unreliable signals. We have found optimal solutions for both the Searcher and the Hider that can be calculated recursively, and a closed form expression for the value of the game. Fu...
A
Several other works studied text-to-image latent diffusion models for medical imaging Chambon et al. (2022a); Akrout et al. (2023). Closest to our work is Chambon et al. (2022b), where the authors explore various methods to adapt a pre-trained Stable Diffusion model to chest X-ray generation.
Several other works studied text-to-image latent diffusion models for medical imaging Chambon et al. (2022a); Akrout et al. (2023). Closest to our work is Chambon et al. (2022b), where the authors explore various methods to adapt a pre-trained Stable Diffusion model to chest X-ray generation.
They performed experiments with both Textual Inversion and fine-tuning the U-net component of Stable Diffusion, similar to Ruiz et al. (2022). They find that Textual Inversion works, but fine-tuning the U-net is more effective, especially with more complex prompts.
Figure 1: The Textual Inversion fine-tuning process for diffusion models trains a text conditioning embedding for a new token using a small set of images while keeping the rest of the architecture frozen. We show that this allows the adaption of latent diffusion models to a variety of medical imaging modalities, using ...
For these reasons, especially in the medical domain, it is essential to have computationally feasible methods that can fine-tune existing models towards a smaller set of a specific modality or disease. In this paper, we pick one such method, Textual Inversion, and rigorously explore its capacities for adapting Stable D...
B
Discussion: Our observations indicate that users prioritize the level of detail in individual objects when assessing the quality of generated scenes, even if the objects' semantics do not match the given text prompts. For example, as shown in our supplementary material, when presented with the prompt 'a football and a basketball,' use...
Generative Quality Evaluation. We provided four groups of generated 3D assets (refer to the supplementary material) to each participant. For each group, the 3D assets were created using Latent-NeRF, SJC, and our method, all based on the same text prompt. Participants were then asked to assess the quality of these gener...
Our framework interpreted a multi-object text prompt as a collection of localized NeRFs, each associated with a spatial box and an object-specific text prompt, which were then composited to render the entire scene view. We have further enhanced the framework with a specialized composition module for global consistency,...
Moreover, several studies [22, 56, 63, 27, 21] are dedicated to enhancing the SDS loss to provide more detailed supervision. Departing from these singular approaches, our CompoNeRF introduces a novel approach for the creation of multi-object 3D scenes. It adopts an object-compositional strategy, utilizing an editable 3...
In our study, we employ the CLIP score as the primary evaluation metric to assess the congruence between the generated 3D assets and the associated text prompts. This score, commonly used in text-to-image generation research as noted in studies [38, 65, 55], is derived from the cosine similarity between the embeddings...
B
Like other existing generators, OCLUS does not aspire to helping the user establish the overall geometric characteristics of synthetic data sets. To generate a data set, the user must provide a covariance matrix for each cluster, the desired overlaps between all pairs of clusters, and a design matrix specifying which c...
Like other existing generators, OCLUS does not aspire to helping the user establish the overall geometric characteristics of synthetic data sets. To generate a data set, the user must provide a covariance matrix for each cluster, the desired overlaps between all pairs of clusters, and a design matrix specifying which c...
MDCGen (Iglesias et al., 2019) is a feature-rich generator that supports many desiderata in cluster analysis, such as overlap control, different probability distributions, subspace clusters, and the ability to add noise points. In particular, it is nice to be able to place noise points away from the clusters, which i...
The generators OCLUS (Steinley and Henson, 2005) and GenRandomClust (Qiu and Joe, 2006) focus on providing more sophisticated overlap control compared to previous generators. GenRandomClust extends the generator of Milligan and Cooper (1985) by managing overlaps between clusters with different ellipsoidal shapes a...
Existing data generators do not cater directly to such high-level scenarios. Instead, the user must carefully tune simulation parameters to arrive at the desired scenarios (Steinley and Henson, 2005; Schubert and Zimek, 2019; Iglesias et al., 2019). While some generators make it easy to control the overlaps betw...
B
In particular, only COFFEE successfully predicts all the events within the context. In Example 1, both TANL and COFFEE without ranking fail to extract E1, triggered by ‘pay’, suggesting that the baselines may have difficulty identifying complex event triggers. In this case, there is no specific amount of money to b...
In addition, Figure 5 demonstrates the influence of the weight parameter on COFFEE. The weight represents the ratio of combining the ranking score and generation score. When the weight is set to 0, only the generation score is considered, while a weight of 1 means that only the ranking score is considered. As depicted ...
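To make the weighting concrete, a minimal sketch (variable names are ours, not the paper's) of how the two scores could be combined is:

    def combined_score(ranking_score, generation_score, weight):
        """weight = 0 uses only the generation score; weight = 1 uses only the
        ranking score; intermediate values interpolate between the two."""
        assert 0.0 <= weight <= 1.0
        return weight * ranking_score + (1.0 - weight) * generation_score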
Post-generation re-ranking is usually applied in two-stage systems, that is, generation and re-ranking, to re-score the output from the first stage by training an additional re-ranking module. This technique has been widely used in neural translation and summarization. For example, Ng et al. (2019); Yee et al. (2019) ...
In order to demonstrate the ability of our model to select event candidates, we analyze the results of two instances selected from the test set. For comparison, we select COFFEE without ranking and TANL, given its high performance. As shown in Table 3, our proposed model successfully extracts the missing events not det...
Comparing COFFEE with and without ranking, we can conclude that re-ranking in the selector is crucial. In both examples, COFFEE fails to detect all events without re-ranking. Even though both candidates are the correct targets, the beam scores differ more than expected, which leads to incorrect ranking. The re-ranking ...
D
where $\phi:\left[0,\kappa^{2}\right]\rightarrow\mathbb{R}^{+}$ is a non-decreasing index function such that $\phi(0)=0$...
Since the convergence rates and the minimax optimality of spectral algorithms in the well-specified case are clear, a large body of literature has studied the misspecified spectral algorithms. Among these works, Steinwart et al. (2009); Dicker et al. (2017); Pillaud-Vivien et al. (2018); Fischer and Steinwart (2020); Celi...
General spectral algorithms in the setting of kernel methods were first proposed and studied by Rosasco et al. (2005); Caponnetto (2006); Bauer et al. (2007); Gerfo et al. (2008). A large class of regularization methods are introduced collectively as spectral algorithms and are characterized through the corresponding ...
This is a standard assumption to control the noise such that the tail probability decays fast (Lin and Cevher, 2020a; Fischer and Steinwart, 2020). It is satisfied, for instance, by Gaussian noise with bounded variance or by sub-Gaussian noise. Some literature (e.g., Steinwart et al. 2009; Pillaud-Vivien et al. 2018...
In addition, we also notice a line of work which studies the learning curves of kernel ridge regression (Spigler et al., 2020; Bordelon et al., 2020; Cui et al., 2021) and crossovers between different noise magnitudes. At present, their results all rely on a Gaussian design assumption (or some variation), which is a ve...
D
On the following pages we present additional visualisations and quantitative comparisons to accompany the results presented in the main text. Figure 1 shows some of the original meshes, provided here for a better visual comparison. Figure 2 shows the surface reconstruction results on Lucy. Figure 3 shows the simplified clo...
Furthermore, we validate our technique’s feature-sensitive approach on real-world scanning datasets captured using different acquisition devices. Firstly, we use a desk scene point cloud from the NYU Depth V2 dataset, derived from RGBD data acquired using RGB and Depth cameras from Microsoft Kinect. This cloud and the...
Overall, these results show that our approach provides a computationally efficient option for performing point cloud simplification in settings where the user wishes to strike a balance between preserving high fidelity around sharp features in the cloud, and ensuring that the simplified cloud covers the manifold defin...
Table 1: Empirical results and total runtimes (time taken by surface variation computation and simplification) for all tested simplification methods and point clouds. We report the maximum and mean Hausdorff distances between the original meshes, and the meshes reconstructed from the simplified point clouds. Also repo...
In order to evaluate the performance of our method in comparison to other simplification techniques, we firstly use each simplified point cloud obtained from three object level point clouds to form simplified meshes, using screened Poisson surface reconstruction [17]. We can then compute the reconstruction errors betwe...
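As a rough sketch of how such reconstruction errors can be computed from point samples of two meshes (mirroring the maximum and mean distances reported in Table 1, but not the authors' exact tooling):

    import numpy as np
    from scipy.spatial import cKDTree

    def reconstruction_errors(points_a, points_b):
        """Symmetric maximum (Hausdorff) and mean nearest-neighbour distances
        between (N, 3) / (M, 3) point samples of the two compared meshes."""
        d_ab, _ = cKDTree(points_b).query(points_a)  # original -> reconstruction
        d_ba, _ = cKDTree(points_a).query(points_b)  # reconstruction -> original
        hausdorff = max(d_ab.max(), d_ba.max())
        mean_dist = 0.5 * (d_ab.mean() + d_ba.mean())
        return hausdorff, mean_dist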
A
Training on full resolution (FullRes): We implemented a distributed version of the proposed architecture that splits the task across two GPUs if necessary. This allows for training directly on full resolution ($256^{3}$) data, given that the expensive speciali...
$256^{3}$ images, we distribute the model over 2 GPUs. The methods HalfRes and PatchDDM were trained on one GPU only. The optimizer we used was AdamW [Loshchilov and Hutter(2017)] with the default parameters.
For three spatial dimensions (i.e. 3D) this means that reducing the input size from $256^{3}$ to $128^{3}$ results in a reduction
of $1\times 1\times 1\,\text{mm}^{3}$, resulting in a total scan size of $240\times 240\times 155$, which we padded to a size of $256\times 256\times 256$...
Training on full resolution (FullRes): We implemented a distributed version of the proposed architecture that splits the task across two GPUs if necessary. This allows for training directly on full resolution ($256^{3}$) data, given that the expensive speciali...
B
With the rapid proliferation of edge devices, including but not limited to smartphones and wearable devices, the deluge of private data originating from these geographically distributed nodes has swelled exponentially. Tremendous repositories of data provide more opportunities for artificial intelligence (A...
Federated learning serves as a promising paradigm to tackle distributed machine learning tasks and achieves multi-fold performance benefits including personal data privacy preservation and training efficiency improvement [3]. Specifically, in the FL model training process, each participating client undertakes one or sev...
We compare our proposed algorithm FedAgg with the following approaches. FedAvg [3] is introduced as the fundamental framework in the field of federated learning. FedAdam [38] allocates an adaptive optimizer for the global server and a mini-batch SGD optimizer for the participating clients respectively, which averages t...
At the onset of the $t$-th iteration, we set $\boldsymbol{w_{i,0}^{t}}=\boldsymbol{\bar{w}^{t}}$ by default...
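For context, a minimal FedAvg-style sketch of one communication round is given below; the helper names (local_update, num_samples) are hypothetical, and FedAgg's adaptive weighting is not reproduced here.

    import copy

    def federated_round(global_weights, clients, local_steps):
        """One round: each client starts from the global model (w_{i,0} = w_bar),
        performs local updates, and the server averages the results."""
        updates, sizes = [], []
        for client in clients:
            w = copy.deepcopy(global_weights)   # local model initialised to global
            for _ in range(local_steps):
                w = client.local_update(w)      # e.g. one mini-batch SGD step
            updates.append(w)
            sizes.append(client.num_samples)
        total = sum(sizes)
        # sample-size-weighted average of the client models, layer by layer
        return [sum(s * u[k] for s, u in zip(sizes, updates)) / total
                for k in range(len(global_weights))]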
Federated learning represents a decentralized machine learning paradigm explicitly designed to address privacy concerns. Within the FL framework, a global model is collaboratively trained by a substantial number of clients with locally collected data. Given $N$ geographically distributed and heterogeneou...
A
We first investigate the number of transformer layers to be inserted by zero-initialized attention in LLaMA-Adapter. As shown in Table 4.2, increasing the number of layers introduces more parameters, but leads to a large improvement in answering accuracy on ScienceQA’s validation set. There also exists an optimal inse...
If the adaption prompts are randomly initialized, they might disturb the word tokens at the beginning of training, which harms the fine-tuning stability and effectiveness. Considering this, we modify the vanilla self-attention at the last $L$ layers to be zero-initialized variants, as shown in Fi...
For better training stability and final performance, we introduce the zero-initialized attention mechanism with a learnable gating factor, which increasingly incorporates instructional signals, while preserving the pre-trained knowledge in LLaMA. With only 1.2M parameters and one-hour training, our approach effectively...
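A minimal PyTorch-style sketch of the gating idea (illustrative, not the authors' exact code): a scalar gate initialised to zero scales the adapter branch, so training starts from the unmodified pre-trained attention output.

    import torch
    import torch.nn as nn

    class ZeroInitGate(nn.Module):
        """At initialisation the gate is zero, so the output equals the frozen
        pre-trained attention output; instructional signals enter gradually."""
        def __init__(self):
            super().__init__()
            self.gate = nn.Parameter(torch.zeros(1))  # learnable gating factor

        def forward(self, pretrained_out, adapter_out):
            return pretrained_out + torch.tanh(self.gate) * adapter_out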
Zero-shot Multi-modal Evaluation. To verify the out-of-domain generation ability of our approach, we conduct a two-stage multi-modal training, and then evaluate three benchmarks (MME (Fu et al., 2023), MMBench (Liu et al., 2023c), LVLM-eHub (Xu et al., 2023)) in a zero-shot manner. For the first stage, we utilize the ...
Our proposed zero-initialized attention is essential for the early-stage training stability and final generation capacity. As shown in Table 4.2, it contributes to a significant +43.08% gain on ScienceQA’s validation set. In contrast, the randomly initialized baseline only achieves 40.77% accuracy, nearly the same as ‘...
D
S-ViLM also achieves consistently superior performance under the fine-tuning evaluation. The outstanding performance of S-ViLM demonstrates that leveraging fine-grained video-language structures during pre-training contributes to meaningful video representations.
Temporal action localization (TAL). TAL aims at predicting the temporal extent and the labels of action instances. We evaluate the performance on ActivityNet (Heilbron et al., 2015), an action understanding dataset of 19,994 temporally annotated untrimmed videos with 200 action categories.
We report the mean average precision (mAP) under different temporal Intersection over Union (tIoU) thresholds on ActivityNet in Table 4. For temporal action localization, the model is pre-trained on HowTo100M only, which is observed to be beneficial to TAL compared with VideoCC + ActivityNet (see the ablation study bel...
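For reference, the temporal IoU used for these thresholds can be computed for a predicted and a ground-truth segment as in the short sketch below (a standard definition, not code from the paper).

    def temporal_iou(pred, gt):
        """IoU of two temporal segments given as (start, end) pairs in seconds."""
        inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
        union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
        return inter / union if union > 0 else 0.0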
In terms of fine-tuning, different tasks are trained independently with their own set of hyperparameters on the target dataset and more details can be found in Appendix A. For temporal action localization, we fix weights of the pre-trained video encoder and its grouping blocks to extract video features, which are then ...
Pre-training datasets. To analyze the effects of pre-training datasets, we report model performance on selected downstream tasks in Table 5. In particular, the same model pre-trained on VideoCC achieves the best performance in zero-shot retrieval on MSR-VTT, compared with HowTo100M and WebVid-2M.
B
We experimentally evaluate ContraSim on a standard benchmark for similarity measures – the layer prediction benchmark of Kornblith et al. (2019), and two new benchmarks we introduce in this paper: the multilingual benchmark and the image–caption benchmark. In experiments with both language and vision models and multiple dat...
Our method outperformed other similarity measures under the common layer prediction benchmark and two new benchmarks we proposed: the multilingual benchmark and the image–caption benchmark. It particularly shines in strengthened versions of said benchmarks, where random sampling is replaced with finding the most simila...
Through ablations, we demonstrate that the CL procedure is crucial to the success of the method and only maximizing the similarity of positive examples is not sufficient. Furthermore, we demonstrate that ContraSim reveals new insights not captured by previous similarity measures.
In addition, we report the results of two new similarity measures, which use an encoder to map representations to the space where similarity is measured. However, in both methods we train $e_{\theta}$ to only maximize the similarity between positive p...
Motivated by this, we introduce ContraSim, a new similarity measure for interpreting NNs, based on contrastive learning (CL) (Chen et al., 2020; He et al., 2020). Contrary to prior work (e.g., Raghu et al., 2017; Kornblith et al., 2019), which defines closed-form general-purpose similarity measures, ContraSim is a lea...
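As a rough sketch of the kind of contrastive objective such a learned encoder could be trained with (an InfoNCE-style loss; ContraSim's actual objective may differ in its details):

    import torch
    import torch.nn.functional as F

    def info_nce_loss(anchor, positive, negatives, temperature=0.1):
        """anchor, positive: (d,) encoded representations; negatives: (k, d).
        The loss pulls the positive pair together and pushes negatives apart."""
        anchor = F.normalize(anchor, dim=0)
        positive = F.normalize(positive, dim=0)
        negatives = F.normalize(negatives, dim=1)
        pos = (anchor * positive).sum() / temperature       # scalar
        neg = negatives @ anchor / temperature              # (k,)
        logits = torch.cat([pos.unsqueeze(0), neg]).unsqueeze(0)
        target = torch.zeros(1, dtype=torch.long)           # positive is class 0
        return F.cross_entropy(logits, target)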
B
Label ambiguity also affects conventional methods in a Manhattan world. For example, it is often the case that three orthogonal directions can be estimated using the Gaussian sphere representation of VPs [74]; however, the representation does not distinguish between front and back directions. For a fair comp...
Considering generalized cases of label ambiguity, we annotated the image coordinates of VP/ADPs as follows. We rotationally align all labels by $180^{\circ}$ based on two conditions: 1) the images have back labels without front labels, and 2) the images have r...
Figure 9: Projected 3D VP/ADPs and orthogonal points of VP/ADPs in the Manhattan world to estimate camera rotation. These orthogonal points are obtained as VP/ADPs without camera rotation; that is, pan, tilt, and roll angles are $0^{\circ}$. Four VP/ADPs of the ...
After removing label ambiguity, we ignored back labels because the training and test sets had only 0.1% and 0.3% back labels, respectively. Therefore, the VP estimator detected 13 points, that is, the five VPs (front, left, right, top, and bottom) and eight ADPs in Table 2. If a...
As shown in Table 2, we annotated the VP/ADPs of the image coordinates and labels on the basis of panoramic-image width and height. We found that some generated fisheye images had label ambiguity; that is, we cannot annotate unique VP/ADP labels for these images. For example, we cannot distinguish one image with a $0^{\circ}$...
C
The problems of consensus and synchronization of multi-agent systems (Fagnani and Frasca, 2017) have received growing interest, due to the variety of applications in many different areas, including: cooperative control of unmanned aerial vehicles, formation control of mobile robots and communication in sensor networks
(Fax and Murray, 2004; Jadbabaie et al., 2003; Ren et al., 2007), quality-fair delivery of media contents (Dal Col et al., 2017), power networks (Dörfler et al., 2013), biological systems (Scardovi et al., 2010), and opinion dynamics (Anderson and Ye, 2019). Specifically, consensus refers to agents coming to a global a...
Consensus and synchronization problems have been widely investigated for agents modeled by identical linear time-invariant (LTI) systems, with many subsequent extensions to switching network topologies (Olfati-Saber and Murray, 2004; Xiao and Wang, 2007; Su and Huang, 2012), heterogeneous and nonlinear systems (Khong e...
a state value, thanks to the exchange of information modeled by some communication graph; mild assumptions on the graph connectivity allow consensus to be reached uniformly exponentially (Jadbabaie et al., 2003; Olfati-Saber and Murray, 2004; Olfati-Saber et al., 2007; Moreau, 2005; Ren and Beard, 2008; Wieland et al., 2008...
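As a minimal illustration of the classic discrete-time consensus update over a communication graph (a standard textbook scheme, not taken from any specific reference above):

    import numpy as np

    def consensus_step(x, adjacency, epsilon):
        """One update x <- x - epsilon * L x, with L the Laplacian of the
        undirected communication graph encoded by the adjacency matrix."""
        laplacian = np.diag(adjacency.sum(axis=1)) - adjacency
        return x - epsilon * (laplacian @ x)

For a connected graph and a step size epsilon below the inverse of the maximum degree, repeated application drives all agents' values to a common value.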
Conditions of the same form as (a) for formation stability were given in (Fax and Murray, 2004, Theorem 3) and the uniform global exponential stability condition was exploited in (Xia and Scardovi, 2014, Theorem 1) and (Seo et al., 2009, Theorem 1). The condition related to the initial value problem (e) was given in (S...
A
We considered alternatives to measure behavioral differences. A more general notion of tolerance or “acceptable error” could be used, but the choice of tolerance is not well established — the PyTorch verification tool uses $10^{-7}$, NNSmith uses $10^{-3}$...
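As an illustration of the kind of tolerance-based check being discussed, the sketch below compares two model outputs under a purely absolute tolerance; the quoted values would be passed as atol (the helper itself is ours, not from either tool).

    import numpy as np

    def outputs_match(original_out, converted_out, atol):
        """Element-wise agreement of two model outputs within an absolute
        tolerance, e.g. atol=1e-7 (stricter) or atol=1e-3 (looser)."""
        return np.allclose(original_out, converted_out, rtol=0.0, atol=atol)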
We conducted the first failure analysis of DL model converters, considering the PyTorch and TensorFlow converters for the popular ONNX intermediate representation. The most common symptoms of failure are crashes and, perhaps more concerningly, models that misbehave on certain inputs. Of the five stages of a typical mod...
Software engineering failures inform development and maintenance (Petroski et al., 1994; Anandayuvaraj and Davis, 2023; Amusuo et al., 2022). Previous failure analyses of DL interoperability software focused on the “Development” and “Runtime” components of Figure 1.
We omitted measurements of model size, adversarial robustness, and prediction accuracy, to focus instead on measuring the common failure modes (crashing and behavioral differences) identified in our failure analysis. Our analysis reveals that converters can successfully convert many real models, but synthetic models are ...
Of the 2,742/3,544 (∼77%) synthetic models that successfully convert, only 1,244/2,742 (∼45%) successfully load into ONNX Runtime. We observed 11 unique ONNX Runtime errors, of which 6 appear to correspond to open GitHub issues (vbogach, 2022; josephrocca, 2021; BowenBao, 2023; rafaelagrc,...
D
The Aerostack2 software framework presented represents a new iteration designed from scratch and built on the foundation laid by its predecessor, Aerostack [3]. Aerostack2 has been developed by learning from the strengths and weaknesses of its predecessor, which has been used successfully in our research lab for over ...
In order to control an unmanned aerial vehicle (UAV) autonomously, two complementary parts are needed: the Flight Control Unit (FCU), which focuses on low-level control of the aircraft and allows it to fly stably, and the high-level control frameworks that are responsible for providing further autonomy to the vehicl...
Control Mode Negotiation: Each plugin implements multiple input-output control mode combinations. A control mode represents the set of different input signals that a controller plugin can handle. Similarly, each Aerial Platform has a set of control modes that are available to control it. The motion controller compo...
Developing autonomous aerial systems from scratch is a challenging task that requires extensive expertise in many different areas, such as aerodynamics, control systems, sensor integration, or AI algorithms. This is a common problem in the robotics field, so in recent years the robotics community has witnessed the dev...
To facilitate the implementation of different aerial platforms, Aerostack2 incorporates an AerialPlatform abstract class responsible for managing the capabilities associated with the direct integration of various aerial platforms into the framework. This abstraction facilitates the integration of new platforms into the...
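A schematic sketch of what such an abstraction could look like; the method names here are hypothetical and chosen only for illustration, while Aerostack2's actual interface is defined in its own codebase.

    from abc import ABC, abstractmethod

    class AerialPlatformBase(ABC):
        """Illustrative abstract base class: a new platform integrates with the
        framework by implementing these platform-specific hooks."""

        @abstractmethod
        def set_arming_state(self, armed: bool) -> bool: ...

        @abstractmethod
        def set_control_mode(self, mode: str) -> bool: ...

        @abstractmethod
        def send_motion_command(self, command) -> None: ...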
A
Margaret Boden defines creativity as “the ability to come up with ideas or artifacts that are new, surprising and valuable” (Boden, 2003). In other words, Boden implicitly derives criteria that can be used to identify a creative product. This definition suggests that creativity is about novelty, surprise, and value.
LLMs might be able to generate creative products in the future. However, the fact that they will be able to generate these outputs will not make them intrinsically creative. Indeed, as Floridi and Chiriatti (2020) put it, it is not what is achieved but how it is achieved that matters. An interesting definition that co...
Value refers to utility, performance, and attractiveness (Maher, 2010). It is also related to both the quality of the output, and its acceptance by society. Due to the large impact LLMs are already having (Bommasani et al., 2021) and the quality of outputs of the systems based on them (Stevenson et al., 2022b), it is p...
LLMs can in theory recognize certain limitations of their own texts after generating them, e.g., by ranking them (Franceschelli and Musolesi, 2024a) or by assigning quality- and diversity-based scores (Bradley et al., 2024). Then, they can try to correct, modify, or rephrase the outputs if asked to do so (i.e., through...
However, paraphrasing Chalmers (Chalmers, 1996), these appear as easy problems to solve in order to achieve creativity, since solutions to them can be identified by taking into consideration the underlying training and inference processes. The hard problem in machine creativity is about the intentionality and the self-...
B
(a) Existing SFUDA object detection works utilize feature alignment or sample generation to help with the pseudo labeling. These approaches mainly focus on exploiting the source model. (b) Our proposed SUP-ICI utilizes instance-level contrastive learning (CL) to make use of the foreground-background semantic informatio...
Source-free unsupervised domain adaptation (SFUDA) denotes the setting of adapting to the target domain given only a well-trained source model and unlabeled target data. One stream of the SFUDA methods is implicitly aligning the feature distribution of the source and target domain using the generative adversarial netwo...
Deep learning has achieved remarkable success in various object detection tasks. In the medical field, deep networks are able to reach clinical expert-level performance, e.g., in pulmonary nodule detection [1, 2]. Nonetheless, these networks are usually domain-specific. In other words, they work well when the trainin...
However, medical data often involve private information, which makes them not shareable. Consequently, traditional UDA methods, which often rely on access to labeled source data, are not directly applicable in this context. Thus in this paper, we aim at the more realistic but challenging source-free unsupervised domain...
Unsupervised domain adaptation (UDA) is a practical setting where the labeled source data are provided for adapting to the unlabeled target data. Most existing methods adopt feature alignment for UDA object detection. In [3], the authors build image-level and instance-level domain classifiers to implement feature align...
C
The SAA is a basic Monte Carlo simulation method, which represents the random parameter using a finite set of realizations (scenarios), yielding a (possibly large) deterministic two-stage linear programming problem. Though the SAA approach is easy to implement, directly using it to solve two-stage DCOPF may result in ...
Once the affine policy is determined, real-time decision-making reduces to simple function evaluations. This method has been observed to provide good performance when the net-load variations are small or are restricted to a few possible instances [26, 27, 28]. However, if the variations are large or include man...
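To illustrate what such a real-time function evaluation amounts to (notation is ours): under an affine recourse policy, the second-stage decision follows from the realised uncertainty by a single matrix-vector product.

    import numpy as np

    def affine_recourse(A, b, xi):
        """Second-stage decision x2(xi) = A @ xi + b for a realised net-load
        deviation vector xi; no optimisation problem is re-solved online."""
        return A @ xi + b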
In this paper, we consider a two-stage stochastic program based on the DC optimal power flow (DCOPF) model. The DC power flow model linearizes the power flow equations and is the workhorse in power industries [8]. The two-stage DCOPF problem is also becoming increasingly popular as a canonical problem that incorporate...
Secondly, as decisions in power system operations are made in a more online (or corrective) manner [10, 21], OPF problems need to be solved repeatedly in real time. Even though solving single linear programs is easy, solving two-stage DCOPF problems is not [22, 19].
Therefore, although (3) and (6) are linear programs, they are often large-scale problems. In addition, since both the first and second-stage decisions depend on the mean of the scenario forecasts, $\bar{\mathbf{d}}$, every time the set of scenarios changes, we need to re-solve (3) and (6). ...
C
Graphical Evaluation.  First, we visualise the BNN and self-supervised BNN prior predictive (Fig. 1 and 3). The standard BNN prior predictive reflects a belief that all three image pair groups are similarly likely to have the same label, and thus does not capture semantic information well. In contrast, the self-superv...
Note that this is of course only a reasonable assumption in cases where we believe we have sufficiently good knowledge of semantic similarity in our data domain. That is, we need to have a set of data augmentations for the contrastive tasks, for which we can be reasonably certain that the true labels in our downstream ...
We then further demonstrate that self-supervised BNN prior predictives reflect input-pair semantic similarity better than normal BNN priors (§4). To do so, we develop a methodology to better understand the prior predictive distributions of BNNs. Our approach is to measure the probability of pairs of data points having...
Graphical Evaluation.  First, we visualise the BNN and self-supervised BNN prior predictive (Fig. 1 and 3). The standard BNN prior predictive reflects a belief that all three image pair groups are similarly likely to have the same label, and thus does not capture semantic information well. In contrast, the self-superv...
Quantitative Evaluation.  We now quantify how well different prior predictives reflect data semantics. In Table 1, we see that conventional BNN priors reflect semantic similarity much less than self-supervised BNN priors, matching our qualitative evaluation. Note that this measure has of course been designed by us to c...
D
To elaborate on its use, note that (4.1) can be viewed as a $U$-statistic applied to a nonstationary process. The proof relies on a type of Hoeffding decomposition, which essentially decomposes the statistic (after scaling) into a linear part that defines the distributional properties, and an error component ...
We now turn to the conditions required to prove the weak invariance principle, which inherently relies on conditions on the kernel of the $(\mathscr{B},d_{\mathscr{B}})$-valued process. We relate this to the above dependence measures in...
The assumptions introduced here are relatively mild compared to existing literature in the stationary case (e.g., see [15, 53, 37]). To the best of our knowledge, no relevant results exist in the nonstationary setting. If subsection 4.1(i) holds, conditions on the latent process such as geometric moment contraction ass...
Although it is possible to apply this characterization directly for inference, we can often use a more direct approach. Specifically, our second result establishes general conditions under which the geometric features of a stochastic process can in fact be fully characterized by the process of ball volumes (subsection ...
By subsection 3.2 and subsection 3.2, a natural way to formulate a test for time-invariance of the geometric features is via the corresponding ball volume processes of the shape descriptors. In addition, results in Section 2 show that we can express approximation errors in terms of the Gromov-Hausdorff distance $d_{GH}$...
B