| context (string, 250–4.99k chars) | A (string, 250–4.85k chars) | B (string, 250–4.17k chars) | C (string, 250–4.32k chars) | D (string, 250–8.2k chars) | label (4 classes) |
|---|---|---|---|---|---|
$\tau(G)=\frac{1}{|V|}\prod_{i=2}^{|V|}\lambda_{i}=\frac{1}{2|E|}\prod_{i=1}^{|V|}d_{i}\prod_{i=2}^{|V|}\bar{\lambda}_{i}$ ... |
The formulae of the degree-Kirchhoff index of linear hexagonal chains [9], hexagonal Möbius chains [16], linear crossed hexagonal chains [18], linear polyomino chains [8], cylinder phenylene chains [17], random polygonal chains [13], generalized phenylenes [25], cylinder and Möbius octagonal chains [15], linear pentag... | We obtain the formulae for the total number of spanning trees of $P_{n}$ and $P^{\prime}_{n}$ using Theorem 4, as follows.
| Being motivated by the success of the Wiener index, nearly four decades later, in 1993, Klein and Randić put forward a novel structure-descriptor topological index, known as the Kirchhoff index [11]. The Kirchhoff index of a graph $G$ is calculated using the formula $Kf(G)=\frac{1}{2}\sum_{i=1}^{|V|}\sum_{j=1}^{|V|}r_{ij}$... | As both structural descriptors have important applications in graph theory, networking systems, molecular chemistry, and other related fields, researchers have taken a strong interest in the Kirchhoff indices and degree-Kirchhoff indices of graphs in recent years. Finding the explicit formulae for the Kirchhoff ... | D |
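The two eigenvalue formulae quoted in this row, the Matrix-Tree product for $\tau(G)$ and the Kirchhoff index, can be checked numerically. A minimal numpy sketch, not taken from any of the excerpted papers, using the 4-cycle $C_4$ as the example graph:

```python
import numpy as np

# Laplacian of the 4-cycle C4: degree matrix minus adjacency matrix.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A

# Laplacian eigenvalues, sorted ascending; lam[0] is (numerically) zero.
lam = np.sort(np.linalg.eigvalsh(L))
n = L.shape[0]

# Matrix-Tree theorem: tau(G) = (1/n) * product of the nonzero eigenvalues.
tau = np.prod(lam[1:]) / n

# Eigenvalue form of the Kirchhoff index: Kf(G) = n * sum_{i>=2} 1/lambda_i.
kf = n * np.sum(1.0 / lam[1:])

# C4 has 4 spanning trees and Kf(C4) = 5 (matches summing resistance distances).
print(round(float(tau)), round(float(kf), 6))
```

For $C_4$ the Laplacian spectrum is $\{0,2,2,4\}$, so $\tau=(2\cdot 2\cdot 4)/4=4$ and $Kf=4(1/2+1/2+1/4)=5$, agreeing with the resistance-distance definition in the row.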
Third, ambiguous contracts can be a burden for agents, either because they are more difficult to evaluate and enforce or because they are a weapon for extracting surplus from agents. Circumstances may accordingly restrict attention to ambiguity-proof classes of contracts. Our results show that insisting on ambiguity pr... | We work with a familiar hidden-action moral hazard problem, as in Holmström (1979), Grossman and Hart (1983), and Laffont and Martimort (2009, Chapter 4), with the friction arising out of limited liability (as in Innes (1990)) rather than risk aversion. In contrast to much of the moral hazard literature, our princi... | We examine a familiar, finite moral hazard problem, augmented to accommodate ambiguous contracts analogously to the treatment of mechanism design problems by Di Tillio et al. (2017). In the problem we consider, a principal (she) interacts with an agent (he). The agent can take one of $n$ costly actions. Each ... | An implication of our results is that in the context of moral hazard problems, ambiguity and max-min utility drive optimal designs towards simplicity. We thus join a literature, with Holmström and Milgrom (1987) as a key early entry, endeavoring to explain why actual contracts in moral hazard settings tend to be simpl... |
Section 6 shows that the advantages of ambiguity disappear if the agent can mix over actions. The ability to mix provides the agent with more alternative actions, tightening the incentive constraints enough to dissipate any advantage the principal gains from ambiguous contracts. As explained by Raiffa (1961) in his a... | A |
In Section 5, we generalize these ideas to other choices of $m$. Feldman and Steinke also show how to bound the mutual information using ALOOKL stability, corresponding to Theorem 7 in the case of $m=1$ [FS18]. Our techniques are similar to theirs, appropriately generalized for $m>1$... | In Section 5, we generalize these ideas to other choices of $m$. Feldman and Steinke also show how to bound the mutual information using ALOOKL stability, corresponding to Theorem 7 in the case of $m=1$ [FS18]. Our techniques are similar to theirs, appropriately generalized for $m>1$... |
Rather than directly bounding bias using ALMOKL stability, we use mutual information as a technical intermediate. Specifically, in Section 2.4, we describe a proof of the following. If the analyst asks a series of adaptively chosen queries of a sample $\bm{S}\sim\mathcal{D}^{n}$... | As is typical of generalization results based on mutual information, our guarantees in Section 2.5 only bound the expected bias. While one can use Markov's inequality to get an explicit dependence on the failure probability, that dependence will be polynomial in the inverse failure probability. In Section 2.6, we descr... |
Mutual information is known to bound bias in a variety of contexts (see e.g. [RZ16, FS18]). Similarly, we use the mutual information bound of Theorem 2 to bound the bias of responses to subsampling queries. The purpose of this subsection will be to formalize and explain our bias bounds, and we’ll defer the (mostly sta... | D |
The former condition holds iff $\operatorname{tr}\left((\boldsymbol{D}_{G}-\boldsymbol{A}_{G})^{i}\right)=\operatorname{tr}\left((\boldsymbol{D}_{H}-\boldsymbol{A}_{H})^{i}\right)$ ... | In general, carefully imposing conditions on where the labels are placed is essential for ensuring that the class of bilabelled graphs is closed under the desired operations and also generated by atomic graphs under them, cf. [rattan_weisfeiler_2023, p. 2271].
| Let $t\geq 1$.
Write $\mathcal{L}_{t}^{+}$ for the class of $(t,t)$-bilabelled graphs generated by the set of atomic graphs $\mathcal{A}_{t}$... | The most basic bilabelled graphs, so-called atomic graphs, make their first appearance in Theorem 3.6.
These graphs are used to reformulate Equations 12 and 7. The atomic graphs are also the graphs which the sets $\mathcal{L}_{t}$ and $\mathcal{L}_{t}^{+}$... | Let $t\geq 1$.
A $(t,t)$-bilabelled graph $\boldsymbol{F}=(F,\boldsymbol{u},\boldsymbol{v})$ is atomic if all its vertices are labelled. Write $\mathcal{A}_{t}$... | A |
This research designed a dynamic human-robot proxemic scenario with a four-legged canine-inspired robot to examine the effects of moving orientation, gaze, and personal robotic experience on the distances people maintained from the robot. We conclude that in subconscious interactions, when participants pass by a r... | Although gaze is considered a major factor in our work, only the robotic gaze is involved in the current version of the experiment. This work did not measure how much attention humans paid to the robot. It is noticeable from the experiments that some participants did not look at the Spot robot in some setups, especiall... |
We made a similar assumption that participants with more experience in robotics would keep a different distance from the canine robot. In our experiments, although not confirmed by statistical analysis, the average values suggest that robot experience increased the personal distances between the par... |
The other potential factor this work did not consider, but which could affect the proxemic distance, is environmental sound. The Spot robot uses a hydraulic drive system and thus needs a cooling fan in every joint, and also for the processor, to keep it running; even when standing, the Spot makes a lot ... |
Despite the statistical results, we also made assumptions about potential factors in human-robot proxemics. The special 'standing' posture of canine robots could enlarge personal space, as there is a chance of sudden movement, with the body language expressing a ready-to-move framing. And it remains unclear how the sound... | D |
With $\xi$, $\mu$ and $\beta$ the homogenized comfort-index coefficients.
The results in Tab. 4 show a significantly lower comfort index for the neural network method. For the cyclic coordinate descent method, the comfort index goes to infinity because the joint angles are near their limit... |
This subsection explores the use of neural networks to find inverse kinematics (IK) solutions for the human leg. Shah et al. [22] applied deep artificial neural networks to solve the IK problem for a 5-axis serial link robotic manipulator. They achieved an accuracy where the deviation between the end effector and the ... | Robustness: the lower limb posture is represented by the generalized coordinates $q=\left[\begin{array}{ccc}\theta_{1}&\theta_{2}&\theta_{3}\end{array}\right]^{T}$... |
The results of the inverse kinematics simulation for the lower limbs using the Cyclic Coordinate Descent (CCD) method are shown in Figure 5. To determine the position error illustrated in Figure 6, we used these results along with the forward kinematics described in Equation (5) to generate the end effector trajectory... |
This paper presents a comparative study of inverse kinematics techniques for human lower limbs. Theoretical results indicate that the neural network method outperforms the other methods in terms of root mean square position error, computational time, and the generation of realistic postures. The comfort of human motio... | D |
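The cyclic coordinate descent (CCD) method compared in this row can be illustrated on a toy chain. A hedged sketch for a 2-link planar arm, not the papers' 3-DOF lower limb (no joint limits or comfort index; all names are illustrative):

```python
import numpy as np

def fk(thetas, lengths):
    """Forward kinematics of a planar serial chain: returns all joint positions."""
    pts = [np.zeros(2)]
    ang = 0.0
    for t, l in zip(thetas, lengths):
        ang += t
        pts.append(pts[-1] + l * np.array([np.cos(ang), np.sin(ang)]))
    return pts

def ccd_ik(target, thetas, lengths, iters=100, tol=1e-6):
    """Cyclic Coordinate Descent: sweep the joints from distal to proximal,
    rotating each one so the end effector swings toward the target."""
    thetas = list(thetas)
    for _ in range(iters):
        for i in reversed(range(len(thetas))):
            pts = fk(thetas, lengths)
            joint, effector = pts[i], pts[-1]
            a = effector - joint
            b = target - joint
            # Rotation that aligns joint->effector with joint->target.
            delta = np.arctan2(b[1], b[0]) - np.arctan2(a[1], a[0])
            thetas[i] += delta
        if np.linalg.norm(fk(thetas, lengths)[-1] - target) < tol:
            break
    return thetas

# Reachable target for a 2-link arm with unit link lengths.
target = np.array([1.0, 1.0])
sol = ccd_ik(target, [0.1, 0.1], [1.0, 1.0])
print(np.linalg.norm(fk(sol, [1.0, 1.0])[-1] - target))  # small residual
```

Each inner step is a closed-form 1-DOF update, which is why CCD is cheap per iteration; the comfort-index blow-up reported in the row arises when such updates push joints toward their limits, a constraint this sketch deliberately omits.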
The comparability graph of the poset in Proposition 2.4 shows that for any fixed $h$, this bound has the right order of magnitude in $\varepsilon$. As in the case of posets, we can also use the test for $K_{\chi(\mathcal{F})}$ en... | The relationship between local and global properties of structures is a central theme in combinatorics and computer science. Since the work of Rubinfeld and Sudan [25], testing properties by sampling a small number of elements has been an emerging research area.
A classical result of this kind is the triangle removal lemma ... |
Panna Tímea Fekete's Project No. 1016492 has been implemented with the support provided by the Ministry of Culture and Innovation of Hungary from the National Research, Development and Innovation Fund, financed under the KDP-2020 funding scheme and supported by the ERC Synergy Grant No. 810115 – DYNASNET. | The goal of this paper is to study the testability of finite posets as special digraphs. By a poset, we mean a set equipped with a partial order $\prec$ that is anti-reflexive and transitive. Alon, Ben-Eliezer and Fischer [1] proved that hereditary (closed under induced subgraphs) classes of ordered graphs are stro... | Alon and Shapira [4] proved that every monotone property of undirected graphs (that is, closed under the removal of edges and vertices) is strongly testable, see Lovász and Szegedy for an analytic approach [22], while Rödl and Schacht generalized this to hypergraphs [23], see also Austin and Tao [8]. Similar results ha... | B |
Entity resolution is challenging due to the often limited quality and high heterogeneity of different entity descriptions. It is also computationally expensive because the number of comparisons between entities typically grows quadratically with the total number of entities. The standard approach for entity resolution ... | Entity resolution is challenging due to the often limited quality and high heterogeneity of different entity descriptions. It is also computationally expensive because the number of comparisons between entities typically grows quadratically with the total number of entities. The standard approach for entity resolution ... | The matching step of incremental ER is limited to the new entities and involves a pair-wise comparison with the existing KG entities determined by the preceding incremental blocking step. The main goal is to determine all similar entities as potential match candidates as input for the final clustering step, where it is... | The preceding blocking phase aims at drastically reducing the number of entity pairs to evaluate, e.g. based on some partitioning so that only entities of the same partition need to be compared with each other (e.g., persons with the same birth year or products of the same manufacturer).
After the match phase there is ... |
Blocking for incremental or streaming ER requires identifying, for the new entities, all other entities in the KG that need to be considered for matching. Given the typically high and growing size of the KG, it is important to limit the matching to as few candidates as possible, and determining the candidates should also ... | C |
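The blocking idea described in this row, comparing only entities that share a partition key (e.g. products of the same manufacturer), can be sketched as follows; the entities, keys, and helper names are made-up toy data:

```python
from collections import defaultdict

# Hypothetical KG entities; blocking by manufacturer mirrors the example
# in the text (only products of the same manufacturer are compared).
kg = [
    {"id": "p1", "name": "galaxy s21", "manufacturer": "samsung"},
    {"id": "p2", "name": "iphone 13", "manufacturer": "apple"},
    {"id": "p3", "name": "galaxy s21 ultra", "manufacturer": "samsung"},
]

def build_blocks(entities, key):
    """Partition the existing KG entities into blocks by a blocking key."""
    blocks = defaultdict(list)
    for e in entities:
        blocks[e[key]].append(e["id"])
    return blocks

def candidates(new_entity, blocks, key):
    """Incremental blocking: match candidates for a new entity are only
    the entities in its block, not the whole (growing) KG."""
    return blocks.get(new_entity[key], [])

blocks = build_blocks(kg, "manufacturer")
new = {"id": "p4", "name": "galaxy s22", "manufacturer": "samsung"}
print(candidates(new, blocks, "manufacturer"))  # ['p1', 'p3']
```

The block index is the part that must stay cheap to update as entities stream in: inserting the new entity is a single append to its block, so the quadratic all-pairs comparison the row warns about is avoided.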
$-30^{\circ}\leqslant\arccos\left(\frac{P_{C/0}-P_{B/0}}{\cdots}\cdot\vec{y_{0}}\right)\leqslant 20^{\circ}$ ... |
Backward recursion involves calculating the required joint torques to achieve the predefined motion based on the dual quaternion wrenches acting at the end effector, along with the twists and their first time derivatives for each link’s center of mass. The wrench at the foot’s center of mass is given by: |
This section focuses on the dynamic description of the lower limb shown in Figure 2 using the Dual Quaternion-based recursive Newton-Euler method. This method involves calculating the velocities and accelerations of the center of mass of each link, known as twists, based on the positions, velocities, and accelerations... | In this paper, the dual quaternion-based theory is applied to the kinematics and dynamics study of the 7-DOF human lower limbs in 3D space. Subsequently, the artificial neural networks method is used to solve the inverse kinematics problem. The efficiency of the artificial neural networks method is verified using the j... |
In this section, the Forward Kinematics (FK) of the lower limbs, depicted in Figure 1, using dual quaternions is established. FK involves computing the positions and orientations of the end-effector in task space from the axes and angles of the joint rotations. The lower limb is decomposed into four segments: the pelv... | B |
We consider the variance-reduced cubic Newton method from (Zhou et al., 2019) (referred to as “full VR”), its lazy version where we do not update the snapshot Hessian (“Lazy VR”), the stochastic Cubic Newton method (“SCN”),
the Cubic Newton algorithm (“CN”), Gradient Descent with line search (“GD”) and Stochastic Gradi... | second-order optimization algorithms.
We take into account that the cost of one stochastic Hessian is proportional to $d$ times the cost of the stochastic gradient, where $d$ is the problem dimension, which holds for general dense problems. |
Figure 6: (left) times needed to compute the gradient, compute the Hessian, decompose the Hessian and solve the cubic subproblem for a diagonal neural network with $n=10000$ and different values of the dimension $d$. (right) average time for computing the Hessian divided by the average tim... | Figure 1 shows that the lazy version saves both time and arithmetic computations without sacrificing convergence precision. In these graphs, Gradcost is computed using the convention that computing one Hessian is $d$ times as expensive as computing one gradient.
| We consider again a diagonal neural network and estimate the times needed for computing its gradient, computing its Hessian, decomposing the Hessian, and solving the cubic subproblem. Figure 6 shows that the average cost of computing the Hessian is significantly higher than the cost of computing one gradient, and the quotient g... | C |
Our results show that deploying an IRS not only improves the throughput of the operator who controls the IRS phase configuration to optimally serve its own users, but also enhances the throughput of users associated with an OOB operator who has no control over the IRS, albeit by a smaller amount compared to the in-band... | In this paper, we analyzed the effect of deploying an IRS on the performance of an OOB operator that has no control over the IRS. We showed that while the IRS optimally serves the in-band UEs, it simultaneously, and at no additional cost, enhances the quality of the channels of the OOB UEs. This enhancement in the OOB ... | In practice, multiple network operators co-exist in a given geographical area, each operating in a different frequency band. As a consequence, at a given point in time, multiple UEs are served by different operators in the system. In such a scenario, if an IRS is optimized to cater to the needs of one of the operators, i... |
We consider two mobile network operators X and Y who provide service to $K$ and $Q$ UEs, respectively. The UEs are arbitrarily distributed over a single cell covering the same geographical area, and operators X and Y use non-overlapping frequency bands. | The base stations (BSs) of operators X and Y (referred to as BS-X and BS-Y, respectively) and the UEs are equipped with a single antenna, and all the channels in the system undergo frequency-flat fading.¹ Extension to general cases with multiple antennas and frequency selective channels does not change the main messa... | C |
To this end, we added Gaussian noise $\boldsymbol{\epsilon}\sim\mathcal{N}(\mathbf{0},\mathbf{I})$ to each input sample $\boldsymbol{x}$ in the clean dataset by modifying it to $(1-\delta)\cdot\boldsymbol{x}+\delta\cdot\boldsymbol{\epsilon}$... | Fig. 5 shows the average discrimination power of concepts in different frequency intervals.
We found that the average discrimination power $\bar{\beta}$ of concepts was usually higher than 0.8, which verified the discrimination power of the extracted concepts. | Besides, Fig. 7(right) shows that the average discrimination power $\bar{\beta}$ of the extracted concepts also decreased when we assigned more training samples with random labels.
This verified that the DNN usually could not learn transferable and discriminative concepts from samples that... | We extracted concept dictionaries of different sizes based on each MLP-5 network (please see Section 3.3.2 for details).
Fig. 7(left) shows that if there was significant label noise in the dataset, the concept dictionary usually explained far fewer concepts encoded by the network, which indicated low transferability o... | Fig. 8(left) shows that the transferability of concepts was usually low when the DNN was learned from noisy input data.
Fig. 8(right) shows that the average discrimination power $\bar{\beta}$ of concepts decreased with increasing strength $\delta$ of the injected noise. | D |
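The noise-injection scheme $(1-\delta)\cdot\boldsymbol{x}+\delta\cdot\boldsymbol{\epsilon}$ used in this row is straightforward to reproduce. A small numpy sketch under the stated $\mathcal{N}(\mathbf{0},\mathbf{I})$ assumption (the data here is synthetic, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def inject_noise(x, delta, rng):
    """Blend a clean sample with standard Gaussian noise:
    (1 - delta) * x + delta * eps, with eps ~ N(0, I)."""
    eps = rng.standard_normal(x.shape)
    return (1.0 - delta) * x + delta * eps

x = np.ones(1000)
for delta in (0.0, 0.2, 0.5):
    noisy = inject_noise(x, delta, rng)
    # The perturbed sample stays centered near (1 - delta) with spread ~ delta.
    print(delta, noisy.mean(), noisy.std())
```

As $\delta$ grows, the sample both shrinks toward the origin and gains variance, which is the increasing "strength of injected noise" swept in Fig. 8(right).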
In comparison, when a DNN was trained to fit a high-order concept, the learning dynamics took a detour. Specifically, the DNN usually first learned low-order concepts.
Then, the DNN shifted its attention to concepts of higher orders, and later gradually removed mistakenly learned low-order concepts.
In this section, we analyze the learning dynamics of concepts in a simple experimental setting, i.e., using a DNN to fit a Boolean polynomial. We find that a high-order concept is not directly learned, but is likely to be mistakenly encoded as a mixture of low-order concepts in early epochs. In spite of the simplici... | Although there is a common heuristic that complex concepts are usually more likely to be over-fitted, people still do not know the exact definition of concepts with an analytic connection to their generalization power. Because we also find the low generalization power of complex (high-order) interactive concepts, in th... | In this paper, we provide a conceptual understanding of the reason why low-order concepts in training data can usually better generalize to testing data than high-order concepts. Specifically, we prove that the average inconsistency of concepts usually increases exponentially along with the order of concepts. We find t... | All the above experimental findings on the generalization power of concepts are related to the phenomenon of the inconsistency of high-order concepts, i.e., high-order concepts are more sensitive to small noise in the input sample than low-order concepts.
Therefore, we aim to prove that the interaction effect's variance o... | C |
ReLU (Rectified Linear Unit) is mainly used in the field of vision classification. In classification, ordinarily more layers are better in deep learning. On datasets such as MNIST, even simple CNNs with three layers can achieve high classification performance. However, for more challenging datasets such as CIFAR10, s... |
The robust property of MoLU is that it approaches the minimum of a loss function rapidly without losing stability. This is a truly useful characteristic when training on long time-series data using Neural ODEs (Neural Ordinary Differential Equations). To prove the performance of MoLU, we conducted experiments on Neu... |
Table 1 shows a more obvious boundary between the linear part in the positive region and the non-linear part in the negative region than some other activation functions. Our activation function acts like the identity function, such as ReLU or ELU, in the positive region, so that it does not los... |
MoLU is a simple, beautiful and powerful activation function that consists of a combination of hyperbolic tangent and exponential functions. The slope of MoLU in the negative region makes it possible to escape from local minima. On the other hand, in the positive region, the slope of MoL... | We realized that our activation function not only shows good accuracy but also makes the loss converge to zero rapidly during training, in tests on mathematical models and neural networks.
We formulated our activation function in the following order. First, we used the hyperbolic tan... | C |
Another interesting line of research is to find variations of the tree-merging approach suitable for the efficient generation either of restricted classes of functional digraphs (for instance, with cycles of given lengths or trees of given heights, which is sometimes useful in applications related to the decomposition ... | Figure 2: Reverse search tree for the generation of components of 4 vertices, represented both in graphical form (top) and as isomorphism codes (bottom); the order of generation by the algorithm of Theorem 3.1 is displayed on the top left. Note that the actual ordering of children of each node depends on the (arbitr... | We would like to thank Kellogg S. Booth, Jerome Kelleher, Brendan D. McKay, and Kévin Perrot for some useful discussions and for providing some bibliographic references. We are also grateful to an anonymous reviewer for suggesting the explicit use of the supergraph method, which allowed us to simplify the proofs and to... | We can now exploit the algorithms of Theorem 3.1, and more specifically our ability to generate the successor of a given component, as a subroutine for the efficient generation of arbitrary (not necessarily connected) functional digraphs. In order to avoid generating multiple isomorphic digraphs, we first define an app... | For brevity, we refer to connected functional digraphs as components and, as with trees, we identify a component $C$ with its own code. A valid code for a component $C$ is also called a canonical form of $C$; unless otherwise specified, in the rest of the paper we consider all components to b... | B |
$\|\varphi^{n}-T^{\dagger}\|_{H_{\eta}^{1}}+d^{-\frac{1}{2}}\|\pi_{n}T^{\dagger}-T^{\dagger}\|_{\infty}\cdot\|T^{\dagger}\|_{H_{\eta}^{1}}\,.$ | the complexity of the approximating class $\widehat{\mathcal{T}}$ and the accuracy of the algorithm:
if the approximating class $\widehat{\mathcal{T}}$ is rich and large (e.g., it is the space of polynomials of a very high degree), it can... | The renormalization of $\pi_{n}T^{\dagger}$ leads to a looser upper bound than we would have intuitively anticipated. Rather than the $H^{1}$... | Then, to apply the approximation theory result (20), we recall that, by the hypotheses of this lemma, and (18), $T^{\dagger}\in H_{\eta}^{k+1}(\Omega;\Omega)$... |
An upper bound on the first terms on the right-hand side is a direct consequence of classical approximation theory (20). We obtain a bound on the $L^{\infty}$ difference in Lemma 8.1, and so, combined with (20), we get, | D |
The EmotioNet database [5] contains around 1M images and was released for the EmotioNet Challenge in 2017. 950K images were automatically annotated and the remaining 50K images were manually annotated with 11 AUs (1,2,4,5,6,9,12,17,20,25,26); around half of the latter constituted the validation and the other half the ... | The original training set of this database consists of around 290K images and the original validation of 4K. We evaluate our method on the updated partitioning protocol of this database according to our previous work [15, 14] (as we mentioned in Section 3.1). This new partitioning consists of a training set of around 1... | We evaluate our method on the updated partitioning protocol of this database according to our previous work [15, 14] (as we mentioned in Section 3.1). This new partitioning consists of a training set of around 25K images, a validation set of around 7K images and a test set of around 14K images.
|
The EmotioNet database [5] contains around 1M images and was released for the EmotioNet Challenge in 2017. 950K images were automatically annotated and the remaining 50K images were manually annotated with 11 AUs (1,2,4,5,6,9,12,17,20,25,26); around half of the latter constituted the validation and the other half the ... |
Our recent studies [15, 14] have highlighted challenges in the current evaluation of affect analysis methods, noting inconsistencies in database partitioning and evaluation practices that lead to biased and unfair comparisons. To address these issues, a unified protocol for database partitioning was proposed, ensuring... | B |
A conformity function is a mapping $\rho:\mathbb{R}^{d}\times\mathscr{Y}\times\Omega\to\mathbb{R}$ such that $\rho(x,y)=\rho(x,y,\boldsymbol{\cdot})$... |
Conformity functions are agnostic to the choice of the specific models or algorithms used to construct $\hat{\mu}$, $\hat{\xi}_{p}$, and $\hat{\pi}$ in Exa... | Under the data exchangeability assumption, the sequence of conformity scores $\{S_{i}\}_{i\geq 1}$ is exchangeable.
| In general, the coverage indicators $Z_{i}$ are dependent random variables, since for all future observables the corresponding conformal prediction sets in Definition 3 are defined in terms of the same calibration-sample conformity score $S_{(\lceil(1-\alpha)(n+1)\rceil)}$...
Note that the regularity of a specific conformity function $\rho$ is contextual, being inherently dependent on the distribution of the underlying data sequence. Technically, we can always avoid ties among the sequence of conformity scores almost surely by introducing a properly constructed ancillary tie-break... | D |
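The order statistic $S_{(\lceil(1-\alpha)(n+1)\rceil)}$ mentioned in this row is the split-conformal calibration quantile. A minimal sketch with synthetic data and an assumed absolute-residual conformity score (the predictor `mu_hat` and the data are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical fitted point predictor and a held-out calibration set.
mu_hat = lambda x: 2.0 * x
x_cal = rng.uniform(0, 1, 200)
y_cal = 2.0 * x_cal + rng.normal(0, 0.1, 200)

# Conformity scores: absolute residuals on the calibration sample.
scores = np.abs(y_cal - mu_hat(x_cal))

# The ceil((1 - alpha)(n + 1))-th smallest score, as in the text.
alpha = 0.1
n = len(scores)
k = int(np.ceil((1 - alpha) * (n + 1)))
q = np.sort(scores)[k - 1]  # 1-indexed order statistic

# Prediction interval for a new point: mu_hat(x) +/- q,
# with marginal coverage >= 1 - alpha under exchangeability.
x_new = 0.5
print(mu_hat(x_new) - q, mu_hat(x_new) + q)
```

The dependence the row points out is visible here: every future prediction set reuses the same threshold `q`, so the coverage indicators across test points are not independent.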
The recording system consists of an event camera (iniLabs DAVIS 240C [3]) and an RGB-D camera (Intel RealSense D435), as shown in Fig. 6 (a). The dynamic range of the event camera is 120 dB. We set the frame rate of the RGB-D camera to 30 FPS. We synchronize the time of two vision sensors to align the timestamps of ev... |
Furthermore, we extend this encoder with a Stream-Segment Temporal Modeling Module (S2TM) to learn temporal dynamics for action recognition. Specifically, we split a long-duration event stream into a sequence of segmented voxel sets and extract spatiotemporal features per segment by the encoder. Then, a sequence of fe... | Figure 1: (a) Visualization of event camera output. An event camera records the spatiotemporal information of the running action as a stream of events, where red and blue dots denote positive and negative, respectively. (b) Comparison of samples from DVS128 Gesture [14] and NeuroHAR datasets. The sample is a human acti... | Some representative samples of NeuroHAR are visualized in Fig. 6 (b). As a supplement to recording with still cameras, we preserve the background information in the motion scene by hand-held mobile recording (50% of data per category), making the dataset more suitable for practical applications. Besides, we record huma... |
The recording system consists of an event camera (iniLabs DAVIS 240C [3]) and an RGB-D camera (Intel RealSense D435), as shown in Fig. 6 (a). The dynamic range of the event camera is 120 dB. We set the frame rate of the RGB-D camera to 30 FPS. We synchronize the time of two vision sensors to align the timestamps of ev... | C |
To be able to grasp and manipulate the object, the robotic system needs a model of the object in terms of a complete shape, e.g., an accurate mesh.
There are intrinsic limitations to the performance of computer vision techniques for 3D reconstruction of objects from images or point clouds if only a limited number of vi... | After collision, new visual information is saved, segmented and added to the point cloud for the touched object, together with the haptic information (box (7), lines 17-21). To make sure that we segment the correct object, the RGB-D information is cropped with the bounding box found for the given object in the last ite... | The same trend can be seen from the shaded areas, showing the confidence interval of $\pm 1$ standard deviation. The width of the areas shows the variability of the results for each object and each repetition of the experiment. For VISHAC, the variability shrinks with time, showing that the method is m... |
The first group of improvements concerns the process of shape completion performed by Implicit Geometric Regularization for Learning Shapes (IGR), which we modified as follows. We use a new, theoretically grounded, method to determine the points with highest uncertainty. In addition, we changed the sampling of points... |
We address the following problem. Given an initial RGB-D map, obtain an accurate representation of the complete shape of the object with the help of exploratory contact actions. The objective is either to maximize the accuracy given an upper bound on the number of touches or minimize the number of touches to reach a p... | D |
Hence, we have shown that $\varphi$ belongs to the trace closure of $Pol^{\omega}_{\mathcal{G}}(Z)$ but not to $Pol^{\omega}_{\mathcal{G}}(Z)$...
An ideal $X$ on $A^{\omega}$ determines the so-called $X$-topology on the set $O_{A}^{(\omega)}$...
Infinitary $\omega$-clones have been mainly studied with respect to both local topology and global topology. However, to extend the previous results to $\omega$-clones that are not necessarily infinitary, we require a new concept of polymorphism.
Based on this idea, we are working on a new and more abstract theory of clones that encompasses both $\omega$-clones and infinitary $\omega$-clones into a variety of finitary algebras, called cl-monoids. In these algebras, the monoidal structure is enriched with the algebraic abstraction of the conce... | To describe $\omega$-clones that are not necessarily infinitary through invariant relations, in Section 7.3 we introduce the notion of matrical polymorphisms. The main result characterising $X$-closed $\omega$-clones is presented in Theorem 7.13. As a corollary, we obtain a characterisation o... | B
Many efforts have been devoted to finding diverse solutions in combinatorial problems. In their seminal paper [KGD93], Kuo et al. were the first to explore this problem from a complexity-theoretic perspective. They showed that the basic problem of maximizing a distance norm over a set of elements is already NP-hard. Si... | Many efforts have been devoted to finding diverse solutions in combinatorial problems. In their seminal paper [KGD93], Kuo et al. were the first to explore this problem from a complexity-theoretic perspective. They showed that the basic problem of maximizing a distance norm over a set of elements is already NP-hard. Si... | We showed that the $k$-Diverse Minimum s-t Cuts problem can be solved efficiently when considering two natural measures for the diversity of a set of solutions. There exist, however, other sensible measures of diversity. One that often arises in literature is the minimum pairwise Hamming distance of a collecti...
Along the same line, Hanaka et al. [HKK+22] and Gao et al. [GGK+22] recently developed frameworks to design approximation algorithms for diverse variants of combinatorial problems. On the positive side, diverse variants of other classic problems are known to be polynomially solvable when considering certain set-based ... | Informally, given a directed graph $G$ along with two specified vertices $s$ and $t$, and an integer $k$, we are interested in finding a collection of $k$ $s$-$t$ mincuts in $G$ that are as different from each other as possible; that is, a collecti... | C
We let $X$ be the real line and the random transition $\Gamma(\cdot\,|\,u)$ be a Gaussian $\mathcal{N}(u,1)$ with mean $u$. The space of measures is $X_{M}=[0,1]$... | Quantum information theory studies the limits of communicating through quantum channels. This section shows the limitations of the algorithmic content of pure states and their measurements. Given a measurement apparatus $E$, there is only a tiny fraction of quantum pure states on which $E$’s applicati... | We prove conservation of probabilities over successively general spaces. This includes finite sequences, infinite sequences, and $T_{0}$, second countable topologies. Conservation of probabilities over the case of finite and infinite sequences follows directly...
Another interesting property of quantum mechanics is that the vast majority of quantum pure states themselves will have negligible algorithmic self information $\mathbf{I}_{\mathcal{Q}}(\ket{\psi}:\ket{\psi})$...
In quantum mechanics, given a quantum state $\ket{\psi}$, a measurement, or POVM, $E$ produces a probability measure $E\ket{\psi}$ over strings. This probability represents the classical information produced from the measurem... | A
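The measurement-to-probability map described in this fragment (a measurement turning a pure state into a classical probability measure) can be illustrated with a minimal numeric sketch. The projective, orthonormal-basis form below is a simplifying assumption for illustration, not the paper's general POVM construction:

```python
import numpy as np

# Illustrative sketch: a projective measurement in an orthonormal basis
# sends a pure state |psi> to the probability measure p(i) = |<e_i|psi>|^2.
# (A general POVM uses positive operators E_i; this is the simplest case.)
def measure(psi, basis):
    """basis rows are orthonormal vectors e_i; returns outcome probabilities."""
    amplitudes = basis.conj() @ psi        # amplitudes <e_i|psi>
    return np.abs(amplitudes) ** 2

psi = np.array([1.0, 1.0j]) / np.sqrt(2)   # an equal superposition
probs = measure(psi, np.eye(2, dtype=complex))
# probs sums to 1, as required of a probability measure over outcomes
```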
Specifically, we train ViT models with the same settings as in Section IV-C with context normalization approaches (CN-Channels and CN-Patches) on the combined dataset CIFAR-100 and MNIST digits. We target two contexts $r\in\{1,2\}$, corresponding to the datasets and the context identifier ...
TABLE VII: Comparison of the two Context Normalization methods on CIFAR-100: Context Normalization on Patches (CN-Patches) and Context Normalization on Channels (CN-Channels), with normalization to the mean and standard deviation of the dataset (ViT) and input normalization using batch normalization (BN). | Table VII demonstrates the significant performance improvement of context normalization over batch normalization (BN) when using the ViT architecture trained from scratch on CIFAR-100. Both CN-Patches and CN-Channels approaches outperform BN by approximately 10% and 18% in terms of accuracy and top-5 accuracy. The trai... | It is important to mention that the baseline models (ViT with standard preprocessing and ViT with batch normalization) collapsed in this blended dataset as the two datasets have different structures, and simple normalization does not allow a suitable representation of the data. Context normalization, on the other hand,... |
Specifically, we train ViT models with the same settings as in Section IV-C with context normalization approaches (CN-Channels and CN-Patches) on the combined dataset CIFAR-100 and MNIST digits. We target two contexts $r\in\{1,2\}$, corresponding to the datasets and the context identifier ... | C
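As a rough sketch of the idea behind normalizing per context (my own illustrative simplification, not the paper's actual CN-Patches/CN-Channels layers): each sample is standardized with statistics estimated from its own context $r$ rather than from the whole blended dataset, which matters when the mixed datasets have very different scales.

```python
import numpy as np

def context_normalize(x, context_ids):
    """Standardize each sample with the mean/std of its context (illustrative)."""
    out = np.empty_like(x, dtype=float)
    for r in np.unique(context_ids):
        mask = context_ids == r
        mu, sigma = x[mask].mean(), x[mask].std()
        out[mask] = (x[mask] - mu) / (sigma + 1e-8)
    return out

# Two contexts with very different scales (e.g. two blended datasets):
x = np.array([0.0, 2.0, 100.0, 300.0])
ids = np.array([1, 1, 2, 2])
z = context_normalize(x, ids)  # each context becomes zero-mean, unit-std
```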
To evaluate the effectiveness of our approach, four commonly-used OOD datasets consisting of natural image datasets with diverse background features are used, including SVHN (Netzer et al., 2011), Places365 (Zhou et al., 2017), Textures (Cimpoi et al., 2014), and CIFAR100/CIFAR10 (Krizhevsky et al., 2009) (CIFAR100 is ... | Enhancing Different OOD Detection Methods.
Four different SotA methods – MSP, ODIN, Energy, and ViM – are used as foreground-based OOD detection baseline models and plugged into DFB to perform joint foreground and background OOD detection. Their results are shown at the bottom of Tabs. 1 and 2. | Comparison to SotA Methods.
DFB is also compared with five very recent SotA methods, including MaxLogit (Hendrycks et al., 2022), KL-Matching (Hendrycks et al., 2022), ReAct (Sun et al., 2021), MaSF (Haroush et al., 2022) and DML+ (Zhang and Xiang, 2023), with their results reported at the top of Tabs. 1 and 2. Among ...
The OOD detection results of DFB and its competing methods with CIFAR10 and CIFAR100 as in-distribution data are reported in Tabs. 1 and 2, respectively. Overall, DFB substantially improves four different SotA detection methods in all three evaluation metrics on both datasets, and obtains new SotA performance. We disc... | All methods are based on ID training data without using any external outlier data. † indicates that the results are taken from the original paper, and other methods are reproduced using the same network architecture. Four post-hoc foreground OOD detection methods are respectively plugged into our method ‘X’-DFB, where ... | D |
Simply adjusting the contrast of low-light images using a technique such as histogram equalization [6] is often insufficient due to the amplification of noise [1, 7]. Learning-based methods have emerged which significantly outperform traditional methods. However, even the state-of-the-art deep learning (DL) techniques ... | In this paper, we present a framework for post-processing images which have undergone low-light image enhancement. The enhancement of low-light images often reveals a variety of degradations which are hidden in the dark, and thus a need for post-processing is introduced. Furthermore, each low-light enhancement techniqu... |
Existing denoising techniques can be applied to denoise low-light images either before or after contrast enhancement [9, 10]. These denoising techniques range from low-pass filters and algorithms such as block matching and 3D filtering (BM3D) [11], to state-of-the-art DL denoisers [9, 12]. Despite denoisers significan... | Simply adjusting the contrast of low-light images using a technique such as histogram equalization [6] is often insufficient due to the amplification of noise [1, 7]. Learning-based methods have emerged which significantly outperform traditional methods. However, even the state-of-the-art deep learning (DL) techniques ... |
The task of low-light image enhancement (LLIE) aims to improve the visibility of images which are captured under low-light conditions. Under-exposed images are often degraded in a variety of ways in addition to their lack of visibility. Notably, low-light regions of an image typically contain degraded color informatio... | B |
Our study extends previous research on individual differences in spatial navigation to navigation in knowledge space. An intuitive next research phase could involve constructing mathematical models that integrate personal traits to elucidate participants’ navigation behavior. Additionally, exploring whether and how nav... | In the game sessions, players are given two Wikipedia pages as the source and the target in each game. To reduce the disparities in prior knowledge among the participants, the source and target pages are chosen to be similarly distanced (2 or 3 steps away on the Wikipedia network) pages about renowned individuals from ... | The experiment
We conducted an online experiment where we hired 445 participants (397 participants after removing participants who did not finish the experiment or did not pass the attention check, and dropping data that had recording errors) from the United States on the online crowdsourcing platform Prolific (https:/... | Encoding the participants’ answers to the questions in the survey (see encoding details in the Supplementary Material), we end up with 18 control variables characterizing the participants by the six groups of questions specified above, 5 control variables indicating the game, game type (Speed-race or Least-clicks), rou... |
To gain a better understanding of how navigation on the knowledge network is affected by individual characteristics, we conducted an online experiment where we hired 445 participants from the US to play nine rounds of Wikipedia navigation games (illustration in Fig. 1) and to fill in a survey afterwards about their pe... | B |
The LHCb detector [1, 2], originally designed to study particles containing $b$ and $c$ quarks produced at the Large Hadron Collider (LHC), is a single-arm forward spectrometer covering the pseudorapidity range $2<\eta<5$.
The detector includes a high-precision tracking system p... | Combining stacks of GBDT and GAN models, Lamarr provides the high-level response of the LHCb tracking and PID systems.
To validate the ultra-fast simulation approach the chosen machine-learning-based models are trained on detailed simulated samples and the output of Lamarr is compared to the reference distributions as ... | The first step of any simulation production is the generation phase in which the high-energy collisions are simulated with Monte Carlo generators such as Pythia8 [5] and EvtGen [6].
The output of the generation phase is the set of long-lived particles able to traverse partially or entirely, depending on the particle sp... | LHCb is also equipped with a highly performing particle identification (PID) system capable of distinguishing photons, electrons, long-lived hadrons, and muons, combining the response of two ring-imaging Cherenkov (RICH) detectors, the calorimeter system, and the MUON system.
| GlobalPID classifiers, obtained in real data by combining RICH and MUON responses with information from the calorimeter system and features of the reconstructed tracks, are parameterized using similar GAN-based architectures that take as input what produced by the RICH-GAN and MUON-GAN models.
Lastly, the efficiency of... | C |
Intuitively, alignment at different fine-grained levels results in different capabilities. Finer-grained comparisons, such as residue-level alignment, necessitate more complex computations but may yield better performance. Conversely, coarse-grained comparison alignments, such as protein-level alignment, may exhibit in... | Virtual Cβ Atom Generation.
The virtual Cβ atoms are derived coordinates that may not physically exist in every residue. However, their positions can be inferred from the spatial relationships among protein backbone atoms: N, Cα, and O atoms. The positions are calculated as follows: | The pipeline diagram of the contact map predictor is depicted in Figure 4. Self-attention maps extracted from the self-attention blocks undergo symmetrization and average product correction (APC) to generate the final contact maps. The self-supervised structural loss is formulated to constrain the structural feature re... | We reframe structural representation training as an information retrieval task, where the protein structure is the query, and the goal is to retrieve the sequence with the highest binding probability to the target protein structure from a pool of candidates. To strengthen structural representation constraints, we also ... | Structural Reconstruction Constraint.
As previously mentioned, the GVP layers encode the core structural features using N, Cα, and O atoms (These three types of atoms are called backbone atoms). To further enhance the structural constraints, we introduce a self-supervised structure reconstruction task. While predicting... | D |
To accurately preserve the complex structural information of knowledge graphs, conventional KGE methods (e.g., TransE [4]) seek to increase the embedding dimension for better expressiveness and adopt high-dimensional entity/relation representations. However, since the scale of model parameters grows linearly with the r... | To improve the performance, ROTH [9] adopts more flexible hyperbolic space (compared to Euclidean space) for low-dimensional entity representations, but that also brings more complex hyperbolic geometry [26]. To avoid that, contrastive learning is employed to flexibly control the strength of penalties for easy/difficul... |
Some methods attempt to improve parameter efficiency by reducing the embedding dimension, and the key problem is how to maintain model expressiveness. Meanwhile, since entities normally are much more than relations in many knowledge graphs, they mostly reduce the dimension of entity representations. One type of method... | Knowledge distillation is adopted to train low-dimensional representations with multiple pre-trained KGE models [7], which shows better performance than training from scratch. The number of pre-trained models is further reduced to one [8] by introducing the dual influence between the pre-trained and the target models. ... | To accurately preserve the complex structural information of knowledge graphs, conventional KGE methods (e.g., TransE [4]) seek to increase the embedding dimension for better expressiveness and adopt high-dimensional entity/relation representations. However, since the scale of model parameters grows linearly with the r... | B |
Inevitably, at a certain point, the algorithm will find a node that already has an associated fidelity curve. In this situation, the algorithm has found an alternative path to reach the neighboring node and has to check whether this new path is dominated or not by the old one.
To do this, temporary registry is required... | When both isotonicity and monotonicity hold, it is possible to devise a simple multi-objective routing algorithm [6] that finds the best routes between a source node and all other nodes. First, one starts at a source node and computes the fidelity curves and a priority value for the links that connect to its neighborin... |
Although multi-objective routing algorithms often converge to the optimal solution in polynomial time, multi-objective routing problems are generally NP-hard [15]. Therefore, choosing a good priority value is crucial. In our case, we used the length of the path as our priority value. While the shortest paths are not n... | Inevitably, at a certain point, the algorithm will find a node that already has an associated fidelity curve. In this situation, the algorithm has found an alternative path to reach the neighboring node and has to check whether this new path is dominated or not by the old one.
To do this, temporary registry is required... | This process continues until the priority queue is empty. Note that we did not specify how the priority value of each node is computed simply because we do not know what the optimal way to compute such a value is. Prioritizing paths with a low likelihood of being dominated will reduce the number of recomputations and p... | D |
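The dominance test sketched in these routing fragments (a newly found path is kept only if no stored fidelity curve is at least as good everywhere) can be written out as follows. Representing a fidelity curve as a tuple of sampled values is a hypothetical simplification for illustration:

```python
# Minimal sketch of the dominance check in multi-objective routing.
# A path's "fidelity curve" is modeled as a tuple of sampled values;
# this representation is illustrative, not taken from the paper.

def dominates(curve_a, curve_b):
    """True if curve_a is at least as good everywhere, strictly better somewhere."""
    at_least = all(a >= b for a, b in zip(curve_a, curve_b))
    strictly = any(a > b for a, b in zip(curve_a, curve_b))
    return at_least and strictly

def update_registry(registry, new_curve):
    """Keep only non-dominated curves after offering new_curve."""
    if any(dominates(old, new_curve) or old == new_curve for old in registry):
        return registry  # the new path is dominated: discard it
    # otherwise drop any stored curves the new one dominates
    kept = [old for old in registry if not dominates(new_curve, old)]
    return kept + [new_curve]

registry = update_registry([(0.9, 0.7), (0.6, 0.8)], (0.95, 0.75))
# (0.95, 0.75) dominates (0.9, 0.7) but not (0.6, 0.8)
```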
In Fig. 5(a), we observe that the required SNRs of polar sequence [4], GA algorithm [7] and MWD sequence are close to the required SNRs of corresponding MWUB.
Then, since the polar codes constructed by the MWD sequence have the optimum MWUB, the required SNRs are less than or equal to those of the polar sequence and GA... | Then, as $L$ increases, the performance of MWD sequence is better than that of polar sequence.
Specifically, the performance of MWD sequence with $L=32$ approaches the MWUB in the high SNR region and shows about 0.19 dB and 0.30 dB performance gaps at BLER $10^{-5}$... | Specifically, the MWD sequence shows about 0.68 dB and 1.13 dB SNR gaps at $R=0.75$ compared with the GA algorithm and the polar sequence, respectively.
Thus, MWD sequence can improve the performance of polar codes with medium list size for short code length. |
In this paper, the construction methods based on MWD are proposed to improve the performance of polar codes under SCL decoding. We first prove that the ML performance can approach the MWUB as the SNR goes to infinity. Then, we design the ordered and nested MWD sequence to apply fast construction without channel inform... | The reason is that as the code length increases, the polar codes constructed by the MWD sequence cannot satisfy (54), which means the required SNRs of the MWD sequence with limited list size deviate from the ML performance. Thus, increasing list size can further improve the performance of MWD sequence to approach the M... | B |
The solution for the Searcher will have the following structure. At every branch node $j$ there is a favored branch $Q_{1}$ and a positive probability $\beta$ (the favoring bias) for it to be chosen before looking at the signal.
With the rema... | The solution for the Searcher will have the following structure. At every branch node $j$ there is a favored branch $Q_{1}$ and a positive probability $\beta$ (the favoring bias) for it to be chosen before looking at the signal.
With the rema... |
This paper has addressed the question of how to optimally search for an adversarially hidden target on a tree network in the presence of unreliable signals. We have found optimal solutions for both the Searcher and the Hider that can be calculated recursively, and a closed form expression for the value of the game. Fu... |
We can now state and prove our main theorem, which includes an expression for the value of the game. We describe the optimal strategy for the Searcher by giving the favoring bias $\beta$ of searching the favored branch first (without needing to observe the signal) when at a branch node. | So in particular the Searcher will never choose the unfavored arc (branch) when the signal is for the favored one. The use of biased depth-first Searcher strategies (random choices at every branch node) of the Searcher was introduced in another context in Alpern (2010) and Alpern and Lidbetter (2014), but those distrib... | D
Several papers have applied diffusion to medical imaging, with a wide range of applications including anomaly detection, segmentation, registration, and modality transfer with image-to-image translation [Kazerouni et al. (2022)].
Specifically for medical image generation, several recent works have trained diffusion mod... | Several other works studied text-to-image latent diffusion models for medical imaging Chambon et al. (2022a); Akrout et al. (2023).
Closest to our work is Chambon et al. (2022b), where the authors explore various methods to adapt a pre-trained Stable Diffusion model to chest X-ray generation. | However, practical challenges, such as ethical and legal impediments to sharing medical data, particularly for unstructured radiology reports, complicate this endeavor [Scheibner et al. (2021), Bovenberg et al. (2020)].
For one of the few public datasets of this caliber that exists, MIMIC-CXR, Chambon et al. have demon... | Pre-trained models are often trained on 2D RGB datasets, but many medical imaging modalities are 3D.
Recently, studies such as Khader et al. (2023) and Pinaya et al. (2022) have trained diffusion models from scratch on 3D data or even on 4D data Kim and Ye (2022), and Han et al. (2023) use diffusion models conditioned ... | Several papers have applied diffusion to medical imaging, with a wide range of applications including anomaly detection, segmentation, registration, and modality transfer with image-to-image translation [Kazerouni et al. (2022)].
Specifically for medical image generation, several recent works have trained diffusion mod... | C |
To further validate CompoNeRF’s performance for more complex multi-object text inputs, we assess its performance using a lengthy sentence describing the color, texture, light, and relationships between scene components.
Fig. 7 showcases our refined scene renderings originating from pre-trained source scenes including t... | We observe that a direct amalgamation of these components can manifest various artifacts at the base of the lamp and incongruous shadows and reflections from the glass ball are notable, detracting from the authenticity of the scene’s ambiance.
After composition, reconfigured objects are adeptly integrated, achieving a ... | To further validate CompoNeRF’s performance for more complex multi-object text inputs, we assess its performance using a lengthy sentence describing the color, texture, light, and relationships between scene components.
Fig. 7 showcases our refined scene renderings originating from pre-trained source scenes including t... | The complexity of a scene directly influences the required configuration of the parameters 𝜽gsubscript𝜽𝑔\boldsymbol{\theta}_{g}bold_italic_θ start_POSTSUBSCRIPT italic_g end_POSTSUBSCRIPT within the composition module. In Figure 11, we experiment with varying the number of layers in the MLPs responsible for both den... | In Fig. 14, we present further results from the ablation study of our composition module. As outlined in our main manuscript, our preference for a density-based approach is due to its effective and precise calibration of global density.
For example, the ’bedroom’ scene builds upon the discussion from Fig.2(b.2) in the ... | A |
$\boldsymbol{a}_{\mathrm{LDA}}=\left(\frac{\boldsymbol{\Sigma}_{1}+\boldsymbol{\Sigma}_{2}}{2}\right)^{-1}(\boldsymbol{\mu}_{2}-\boldsymbol{\mu}_{1})$ | $\boldsymbol{a}_{\mathrm{LDA}}=\left(\frac{\boldsymbol{\Sigma}_{1}+\boldsymbol{\Sigma}_{2}}{2}\right)^{-1}(\boldsymbol{\mu}_{2}-\boldsymbol{\mu}_{1})$
Figure 4: Quality of approximating cluster overlap using our LDA and the cruder center-to-center (C2C) approximations. Each data point corresponds to a pair of multivariate normal clusters with the pairwise overlap shown. The full simulation is based on 900 pairs generated from a variety of data set archetypes with di... |
Managing the degree of overlaps between clusters is one of the most important tasks of a cluster generator. In repliclust, we quantify pairwise overlap between two clusters as the irreducible error rate when classifying a new data point as belonging to one of the two clusters (assuming equal class probabilities). On t... | Figure 3: Cluster overlap based on the misclassification rate of the best linear classifier, in a minimax sense. The left panel shows the densities of two normal distributions in 1D. A minimax classification rule implies that the conditional misclassification rates for the blue and red distributions are equal. The blac... | B |
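The overlap notion described above (the irreducible misclassification rate of the best linear classifier between two Gaussian clusters, with equal class probabilities) can be made concrete. Under the simplifying assumption of a shared, averaged covariance, the LDA direction gives overlap $\Phi(-\Delta/2)$ with $\Delta$ the Mahalanobis distance between the means; this numeric sketch is illustrative, not repliclust's implementation:

```python
import math
import numpy as np

def lda_overlap(mu1, cov1, mu2, cov2):
    """Overlap of two Gaussian clusters via the LDA direction (equal priors,
    shared averaged covariance assumed for simplicity)."""
    avg_cov = (np.asarray(cov1, float) + np.asarray(cov2, float)) / 2
    diff = np.asarray(mu2, float) - np.asarray(mu1, float)
    a_lda = np.linalg.solve(avg_cov, diff)        # LDA direction avg_cov^{-1}(mu2 - mu1)
    delta = math.sqrt(float(diff @ a_lda))        # Mahalanobis distance between means
    # Phi(-delta/2): error rate of the midpoint threshold along a_lda
    return 0.5 * (1 + math.erf((-delta / 2) / math.sqrt(2)))

# Two unit-variance 1D clusters two standard deviations apart:
overlap = lda_overlap([0.0], [[1.0]], [2.0], [[1.0]])   # Phi(-1) ~ 0.159
```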
In this study, we propose a novel Contrastive Oracle-Free Framework for Event Extraction (COFFEE), which addresses the event extraction task without using any oracle information. Our COFFEE consists of two parts, a generator that performs the extraction of events and a selector that aims to refine the generated results... |
Firstly, it is crucial to highlight that the oracle-free setting poses a more challenging scenario. When all oracle information is removed, generation-based baselines relying on templates exhibit a varying degree of performance decline on both datasets (↓ 0.5% to 37.42% in argument classification).
|
As described in Section 4.3, Text2Event, BART-Gen and DEGREE utilize different oracle information. To compare the performance of our COFFEE framework with these methods under the OFEE setting, we implemented the following adaptations to these baseline approaches: |
We conduct experiments on two variants of the ACE05 benchmark under the oracle-free setting to evaluate our COFFEE. The results demonstrate that the template-based baselines heavily rely on the additional oracle information, whereas our COFFEE exhibits superior empirical performance over these baselines in the absence... | D |
We try different values of $c$; Figure 1 presents the convergence curves under the best choice of $c$. For each setting, it can be concluded that the convergence rates of the $L^{2}$-norm generalization errors of spectral algorithms ar... | In this subsection, we compare this paper’s convergence rates and minimax optimality with the results in previous literature. Ignoring the log-term and the constants, Theorem 1 gives the upper bound of the convergence rates of spectral algorithms (with high probability)
|
Based on the $L^{q}$-embedding property of $[\mathcal{H}]^{s}$, the refined proof in this paper removes the boundedness assumption in previous litera... | Since the convergence rates and the minimax optimality of spectral algorithms in the well specified case are clear, a large amount of literature studied the misspecified spectral algorithms. Among these works, Steinwart et al. (2009); Dicker et al. (2017); Pillaud-Vivien et al. (2018); Fischer and Steinwart (2020); Celi...
The outline of the rest of the paper is as follows. In Section 2, we introduce basic concepts including priori knowledge of RKHS, integral operators and the definition of the interpolation space. In addition, we formally define the spectral algorithm, which is the main interest of this paper, and provide three example... | A |
In this work we have presented a novel, one-shot point cloud simplification algorithm capable of preserving both the salient features and the overall structure of the original point cloud. We reduce the cloud size by up to three orders of magnitude without the need for computationally intensive training on huge datase... |
Overall, these results show that our approach provides a computationally efficient option for performing point cloud simplification in settings where the user wishes to strike a balance between preserving high fidelity around sharp features in the cloud, and ensuring that the simplified cloud covers the manifold defin... |
Whilst Hausdorff distance is a useful metric, it is not the ideal candidate for assessing the feature sensitivity of a simplification algorithm, as it tends to return lower errors for more evenly distributed clouds. Whilst out of the scope of this work, there is a clear need for a well-defined and widely adopted error... | From the results presented in Table 1, it is clear that our proposed method is capable of comparable empirical performance to many of the existing methods for simplifying point clouds. The GP-based approach outperforms the AIVS baseline across all experiments and metrics, and outperforms the PC-Simp baseline on all but... |
Since results and inference times for the Potamias et al. approach were provided by the author of the paper, we do not have knowledge of the exact details of their experimental setup, especially the time required in hours to train the model. As mentioned in Section 2, their learning-based approach demands huge dataset... | B |
Training on full resolution (FullRes):
We implemented a distributed version of the proposed architecture that splits the task to two GPUs if necessary. This allows for training directly on full resolution ($256^{3}$) data, given that the expensive speciali... | of $1\times 1\times 1\,\text{mm}^{3}$, resulting in a total scan size of $240\times 240\times 155$, which we padded to a size of $256\times 256\times 256$... | For three spatial dimensions (i.e. 3D) this means that reducing the
input size from $256^{3}$ to $128^{3}$ results in a reduction | Training on full resolution (FullRes):
We implemented a distributed version of the proposed architecture that splits the task to two GPUs if necessary. This allows for training directly on full resolution ($256^{3}$) data, given that the expensive speciali... | $256^{3}$ images, we distribute the model over 2 GPUs. The methods HalfRes and PatchDDM were trained on one GPU only.
The optimizer we used was AdamW [Loshchilov and Hutter(2017)] with the default parameters. | B |
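The size reduction discussed in this row is easy to make concrete: halving each of the three spatial dimensions divides the voxel count (and, to first order, activation memory) by $2^3 = 8$. A quick back-of-the-envelope check (float32, single channel; the byte figures are my own arithmetic, not from the paper):

```python
# Voxel-count and rough memory arithmetic for 3D volumes.
full, half = 256 ** 3, 128 ** 3
reduction = full / half                    # halving each axis -> 2^3 = 8-fold

def mib(voxels, bytes_per_voxel=4):        # float32, one channel (assumption)
    return voxels * bytes_per_voxel / 2 ** 20

full_mib, half_mib = mib(full), mib(half)  # 64.0 MiB vs 8.0 MiB per channel
```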
$\cdots\,\delta_{h}^{3}\big)\big/\big(\delta_{l}-\beta L\delta_{h}^{2}\big) + P^{2}L\beta\delta_{h}\big(\delta_{l}+\beta L^{2}N\cdots$ |
The detailed proof of Proposition 5 is elaborated in Appendix F of the supplementary file. As delineated by Eq. (5), our derivation reveals that the one-round convergence bound for our proposed adaptive FL algorithm FedAgg intrinsically pertains to the norm of the gradient of the global parameter at iteration $t$... |
(Convex & Non-Convex Applicability) From the above convergence analyses, the evaluation of FedAgg exclusively relies on the smoothness assumption, which indicates that the effectiveness of our results transcends the convexity paradigm of the function in question. Specifically, we conduct convergence analyses of FedAgg... | The proof of Theorem 3 is provided in Appendix E of the supplementary file. According to Propositions 3-4, we proceed to formulate a convergence upper bound for the consecutive change of the FL global loss function $F(\bar{\boldsymbol{w}}^{t})$... | The detailed proofs of Theorem 4 and Theorem 5 refer to Appendix G-H of the supplementary file. Moreover, delving into the foundational assumptions for our above convergence analyses, we provide a pertinent remark concerning the characteristics of the global loss function, specific to its convex and non-convex feasibil... | D
Specifically, in LLaMA’s higher transformer layers, we append a set of learnable adaption prompts as prefixes to the word tokens.
Then, to avoid the noise from randomly initialized prompts at the early training stage, we equip the frozen self-attention layers with a learnable gating factor. | Figure 2: Details of Zero-initialized Attention. We insert learnable adaption prompts into the last $L$ out of $N$ transformer layers of LLaMA. To progressively learn the instructional knowledge, we adopt a zero gating factor within the attention for stable training in the early training stages.
| The gating mechanism is initialized by zeros, and controls the feature interaction between prompt and word tokens, within the process of attention calculation.
Such a strategy can first preserve the original knowledge in LLaMA, and progressively inject the new instructional signals during training. | For better training stability and final performance, we introduce the zero-initialized attention mechanism with a learnable gating factor, which increasingly incorporates instructional signals, while preserving the pre-trained knowledge in LLaMA.
With only 1.2M parameters and one-hour training, our approach effectively... |
With our proposed zero-initialized attention, the adaption prompts can progressively inject the newly acquired instructional signals into the transformer, while simultaneously incorporating the pre-trained knowledge of LLaMA to provide high-quality responses. | B |
Model architecture. We use a 12-layer ViT-base model with the patch size of $2\times 16\times 16$ as the video encoder and initialize it with weights pre-trained on Kinetics-400.
We adopt 32 learnable group tokens and 3 grouping blocks featuring K-means attention (Xu et al., 2022; Yu et al., 2022). | Video-language pre-training typically follows the pipeline: (1) encoding video and text pairs into latent representations, (2) modality fusion, and (3) pre-training on specific objectives.
Existing methods typically optimize these three components in the pre-training pipeline by designing expressive encoders (Bain et a... | Representations learned from large scale noisy datasets such as HowTo100M (Miech et al., 2019), WebVid (Bain et al., 2021), and VideoCC (Nagrani et al., 2022) have demonstrated great potentials in adapting to downstream tasks, including but not limited to text-video retrieval, video question answering, and video captio... |
It has been shown that most video-language pre-training methods merely perform well on learning holistic representations to match a ⟨video, caption⟩ pair while neglecting fine-grained information such as region-object correspondences, or sce... | Pre-training datasets. We pre-train S-ViLM with the VideoCC (Nagrani et al., 2022) dataset, which contains about 3.3M video-caption pairs.
We also include ActivityNet-Caption (Krishna et al., 2017) with 20K well-aligned pairs into the pre-training corpus. | D |
Motivated by this, we introduce ContraSim, a new similarity measure for interpreting NNs, based on contrastive learning (CL) (Chen et al., 2020; He et al., 2020). Contrary to prior work (e.g., Raghu et al., 2017; Kornblith et al., 2019), which defines closed-form general-purpose similarity measures, ContraSim is a lea... | We experimentally evaluate ContraSim on standard benchmark for similarity measures – the layer prediction benchmark Kornblith et al. (2019), and two new benchmarks we introduce in this paper: the multilingual benchmark and the image–caption benchmark. In experiments with both language and vision models and multiple dat... |
Motivated by this, we introduce ContraSim, a new similarity measure for interpreting NNs, based on contrastive learning (CL) (Chen et al., 2020; He et al., 2020). Contrary to prior work (e.g., Raghu et al., 2017; Kornblith et al., 2019), which defines closed-form general-purpose similarity measures, ContraSim is a lea... | Our method, ContraSim, achieves excellent results. When trained on one dataset’s training set and evaluated on the same dataset’s test set, ContraSim achieves perfect accuracy under this benchmark, with a large margin over CKA results. This holds for both language and vision cases. Even when trained on one dataset and ... | Our method outperformed other similarity measures under the common layer prediction benchmark and two new benchmarks we proposed: the multilingual benchmark and the image–caption benchmark. It particularly shines in strengthened versions of said benchmarks, where random sampling is replaced with finding the most simila... | A |
In a Manhattan world, we proposed a learning-based method to address rotation and distortion from an image. To recover the rotation, our heatmap-based VP estimator detects the VP/ADPs. Experiments demonstrated that our method substantially outperforms conventional methods.
| In this supplementary material, we present some details omitted from the main paper: the novelty of our method in Section 1, the limitations of our method associated with indoor scenes in Section 2, extended related work of panoramic images in Section 3, details of our method of rotation estimation and auxiliary diagon... | Figure 8 shows the qualitative results of indoor scenes obtained by our method. We captured the input image using an off-the-shelf fisheye camera (ID 1 [59]) at an intersection in an underpass. The indoor image degraded the performance of our method because of the domain gap between indoor and outdoor environments. In ... | To demonstrate the validity and effectiveness of our method, we present further quantitative and qualitative results of our experiments in this section. The dataset names (SL-MH, SL-PB, SP360, and HoliCity) correspond to the names used in Section 4.1 (main paper).
| As described in Section 4.2 (main paper), our method was able to estimate a unique rotation for over 98% of the images in our experiments because of the arrangement of optimal 3D spatial uniformity, as presented in Table 11. By contrast, the use of VPs without ADPs enabled the estimation of a unique rotation for less t... | A |
The presented necessary and sufficient conditions could be used to parameterize all possible stabilizers and possibly select the best one from a certain performance view-point;
an example of this approach is provided in the recent work (Zaupa et al., 2023), where analogous conditions are used for the design of a simult... | Moreover, our proposed conditions can be used to extend the LMI-based $H_{\infty}$ results in (Dal Col, 2016), as well as the LMI-based saturated feedback design in (Dal Col et al., 2019), to also deal with undirected graphs.
More in general, while existence re... | While our conditions have been shown to be strategic for performance optimization in distributed feedback design in our preliminary work (Zaupa et al., 2023), future work includes reinterpreting the main results of this paper for special classes of nonlinear systems, such as Lur’e systems with special sector-bounded no... | The presented necessary and sufficient conditions could be used to parameterize all possible stabilizers and possibly select the best one from a certain performance view-point;
an example of this approach is provided in the recent work (Zaupa et al., 2023), where analogous conditions are used for the design of a simult... | (Fax and Murray, 2004; Jadbabaie et al., 2003; Ren et al., 2007), quality-fair delivery of media contents (Dal Col et al., 2017), power networks (Dörfler et al., 2013), biological systems (Scardovi et al., 2010), and opinion dynamics (Anderson and Ye, 2019).
Specifically, consensus refers to agents coming to a global a... | A |
Here we describe the method and results for testing H$_{RQ_{3}}$:
That changes in ONNX operator sets are correlated with increased defects. | For the reasons noted above, we studied the DL model converters from PyTorch and TensorFlow into the ONNX IR (torch.onnx and tf2onnx, respectively).
We note that among ONNX model converters, those for PyTorch and TensorFlow have the most failure data available on GitHub (Table 6). | (1) DL model converters lag behind ONNX releases (this might cause a failure to be mis-attributed to another release, i.e., offset in time);
(2) Failures might be in any ONNX available release, not just the most recent (possibly inflating the failure rate of a given release); | This causal asymmetry may be attributable to differences in the requirements of DL model converters and DL compilers.
The purpose of DL model converters is interoperability (section 2.1), making compatibility failures a focus and reducing the need for optimizations. | We also measure the relationship, assessing the correlation in the number of changes in an ONNX release and the number of failures between its release and the next.
We use the Spearman correlation, which is a commonly-used and robust metric for measuring a monotonic relationship between two variables (Fenton and Bieman... | B |
The Aerostack2 framework has been utilized to develop a wide range of aerial robotics systems within the research community. In [22], the authors employed Aerostack2 to control a small UAV for reidentification of entities within an industrial setting. In [23], a virtual Spring-Damper approach was developed and tested f... |
In this experiment, two different drones shall pass through two gates in a synchronized and cyclic fashion. The objective of this experiment is not only to test the simulation with real capabilities of the framework in an indoor environment but also to test the framework’s platform independence. The experiment was car... | To address these challenges, this paper proposes a collaborative framework for aerial robotics that brings users and developers together to work toward a common goal. We present Aerostack2, a ROS 2-based framework designed to facilitate the development of complex autonomous aerial systems. By building a solid architect... |
This paper presents a novel open-source framework designed for the development of aerial robotic systems, with a strong focus on multi-robot orientation, platform independence, versatility, and modularity. These features have been validated through a series of experiments in both simulated and real-world scenarios, de... |
Fig. 7 shows that the trajectories followed by the two drones are very similar in both the HITL simulation and the real-world experiments, and can be easily generated and replicated from the mission created with the GUI, demonstrating the capabilities of Aerostack2 for working reliably with multiple drones in out... | C
Nonetheless, the outputs from such models are often considered creative by the person interacting with them or exposed to their best productions. Though this is apparently in contrast with what was discussed above, we can explain this phenomenon by considering the fact that our perception does not usually align with t... | In conclusion, while LLMs are capable of producing artifacts that are valuable, achieving P- or H-novelty and surprise appears to be more challenging. It is possible to argue that LLMs may be deemed able to generate creative products if we assume the definition of combinatorial creativity. To achieve transformational c... | In this paper, we have discussed whether or not LLMs can actually be deemed as creative; we started by considering Boden’s three criteria, i.e., value, novelty, and surprise. While LLMs are capable of value and a weak version of novelty and surprise, their inner autoregressive nature seems to prevent them from reaching... |
Nonetheless, the outputs from such models are often considered creative by the person interacting with them or exposed to their best productions. Though this is apparently in contrast with what was discussed above, we can explain this phenomenon by considering the fact that our perception does not usually align with t... | Surprise instead refers to how much a stimulus disagrees with expectation (Berlyne, 1971). It is possible to identify three kinds of surprise, which correspond to three different forms of creativity. Combinatorial creativity involves making unfamiliar combinations of familiar ideas. Exploratory creativity requires find... | A |
On the contrary, the LODS [20] is capable of detecting more nodules as can be seen from the recall rates at the large average numbers of false positives per CT image. Nonetheless, its ability to distinguish the nodules from other tissues is weaker. It should be noted that our re-implementation of the LODS [20] removes... | Qualitative Comparison. We visualize the detection results of the compared SOTA approaches and our SUP-ICI in Fig. 4. The detection results are displayed based on the central slices of the nodule ground truths. The true positives (TP), false positives (FP), and false negatives (FN) are denoted by the green boxes, red b... |
Nonetheless, these works mainly focus on the shifts between the source and target, and neglect the detection’s characteristics. For instance, the discrimination of the foreground objects and the backgrounds can naturally be an auxiliary supervision for the target data. Besides, the relatively smaller size of the nodul... |
It is noted that the SED [18] even performs worse than the baseline approach Source-only on the scenarios PN9 → LUNA16 and PN9 → tianchi. The recall rates at the small average numbers of false positives per CT image are better than the baseline whereas the recall rates at the large ones are w... | Figure 4: Exemplar detection results of the compared approaches and our method SUP-ICI. The green boxes, red boxes, and yellow boxes denote the true positives (TP), false positives (FP), and false negatives (FN), respectively. The values above the boxes are the detection scores.
| A |
Both types of data are generated using the Gaussian distribution but with different choices of the mean and standard deviation. When generating load forecasts, we use the base load of the system as the mean and set the standard deviation to be 10% of it.
To solve the two-stage problem for a given load ... | We summarize the results of using different methods to solve the reserve scheduling problem in (6) on the 118-bus system in Table IV.
All reported total costs are normalized in reference to the cost values obtained from CVXPY. Compared to the risk-limiting dispatch problem, the reserve scheduling problem has more decis... | We also compare the solutions produced by our method to that by using the affine policy method, which is a widely applied approximation policy to make the two-stage stochastic programs tractable [56]. Specifically, the affine policy approximates the recourse decision $\mathbf{p}^{R}$... | In this paper, we overcome the challenge in policy design and solve two-stage DCOPF problems by presenting a neural network (NN)-based architecture that is computationally efficient and also guarantees the feasibility of learned solutions. In particular, our architecture involves two neural networks, one each for the f... |
A common approach to reduce the computational burden in solving two-stage DCOPFs is to model the second-stage decisions using an affine policy. More specifically, the second-stage (or the recourse) dispatch decision is restricted to be an affine function of the realized net-load and the first-stage decisions [23, 24, ... | B |
The basis for our approach is to note that, intuitively, a suitable prior should reflect a belief that the higher the semantic similarity between pairs of inputs, the more likely these inputs are to have the same label. Therefore, rather than inspecting the prior predictive at single points in input space, we examine ... | In practice, we can utilise ideas from contrastive learning to learn models with prior predictive distributions that reflect the semantics of different input pairs.
The high-level idea is thus to use prior knowledge in the form of data augmentations, for which we believe that the semantic content of the data should be ... | Note that this is of course only a reasonable assumption in cases where we believe to have sufficiently good knowledge of semantic similarity in our data domain. That is, we need to have a set of data augmentations for the contrastive tasks, for which we can be reasonably certain that the true labels in our downstream ... | Another related line of work is concerned with learning invariances from data in Bayesian models using the marginal likelihood (van der Wilk et al., 2018; Immer et al., 2022). This case is essentially the opposite of our setting, as there, the labels are known but the augmentations are learned, while in our case, the a... | In Fig. 4, we see that the methods that leverage unlabelled data perform the best. In particular, the self-supervised BNN with BALD acquisition achieves the highest accuracy across most numbers of labels, and substantially outperforms the deep ensemble. This confirms the benefit of incorporating unlabelled data in acti... | B |
(deterministic) time series data (e.g., see [71, 72, 65, 87]), explicit statistical foundations are not provided. Indeed, many statistical questions
related to convergence rates and non-asymptotic error bounds are unexplored, and crucial questions such as what dynamics of the underlying process are preserved by such sh... | The finite sample performance and the presence of outliers is particularly relevant in our context since genomic data applications are typically in the regime of T<100𝑇100T<100italic_T < 100 and n<10K𝑛10𝐾n<10Kitalic_n < 10 italic_K.
In Section 4.2.1, we elaborate on this and introduce a data-adaptive function that ... | To the best of the authors’ knowledge, there is no systematic theory to perform statistical inference for topological or geometric features in the presence of temporal dynamics which potentially evolve over time, either abruptly or
gradually. However, data sets arising in diverse application areas | quadratic range-based self-normalized test statistics to detect gradual and abrupt topological changes. This provides the applied researcher with a simple tool for inference on the true underlying shape dynamics over time. Returning to the example of cell differentiation, we use these results to test for changes in sha... | To provide statistical theory which allows for changes in the probabilistic structure, we consider an “infill” asymptotic framework in which more observations become available at a local level as the observation length $T$ increases. Nonstationary processes that can be analyzed in such a framework are known as... | B