Columns: context (string, 250–4.88k chars); A (string, 250–4.17k chars); B (string, 250–4.73k chars); C (string, 250–3.89k chars); D (string, 250–4.12k chars); label (4 classes).
$\tau(G)=\frac{1}{|V|}\prod_{i=2}^{|V|}\lambda_{i}=\frac{1}{2|E|}\prod_{i=1}^{|V|}d_{i}\prod_{i=2}^{|V|}\bar{\lambda}_{i}\,\cdots$
Motivated by the success of the Wiener index, nearly four decades later, in 1993, Klein and Randić put forward a novel structure-descriptor topological index, known as the Kirchhoff index [11]. The Kirchhoff index of a graph $G$ is calculated using the formula $\operatorname{Kf}(G)=\frac{1}{2}\sum_{i=1}^{|V|}\sum_{j=1}^{|V|}r_{ij}$ …
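As a quick illustration of the two descriptors above, here is a minimal numpy/networkx sketch (ours, not from the paper): the number of spanning trees comes from the nonzero Laplacian eigenvalues, and the Kirchhoff index from the effective resistances obtained via the Laplacian pseudoinverse; the cycle C_4 is only a toy example.

import numpy as np
import networkx as nx

def spanning_trees_and_kirchhoff(G):
    # tau(G) = (1/|V|) * prod of nonzero Laplacian eigenvalues; Kf(G) = 1/2 * sum_{i,j} r_ij
    L = nx.laplacian_matrix(G).toarray().astype(float)
    n = L.shape[0]
    eig = np.sort(np.linalg.eigvalsh(L))      # 0 = lambda_1 <= lambda_2 <= ... <= lambda_n
    tau = np.prod(eig[1:]) / n                # total count of spanning trees
    Lplus = np.linalg.pinv(L)                 # Moore-Penrose pseudoinverse of the Laplacian
    d = np.diag(Lplus)
    R = d[:, None] + d[None, :] - 2 * Lplus   # effective resistances r_ij
    return tau, 0.5 * R.sum()

print(spanning_trees_and_kirchhoff(nx.cycle_graph(4)))   # C_4: 4 spanning trees, Kf = 5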
We obtain the formulae for the total count of spanning trees of $P_n$ and $P'_n$ using Theorem 4 as follows.
The formulae of the degree-Kirchhoff index of linear hexagonal chains [9], hexagonal Möbius chains [16], linear crossed hexagonal chains [18], linear polyomino chains [8], cylinder phenylene chains [17], random polygonal chains [13], generalized phenylenes [25], cylinder, and Möbius octagonal chain [15], linear pentag...
As both of the structural descriptors have important applications in graph theory, networking systems, molecular chemistry, and other related fields, researchers have taken a strong interest in the Kirchhoff indices and degree-Kirchhoff indices of graphs in recent years. Finding the explicit formulae for the Kirchhoff ...
D
To capture this ambiguity, we define an ambiguous contract to be a collection of payment functions $\tau=\{t^{1},t^{2},\ldots,t^{k}\}$ …
In this instance, the principal can implement both action 2 and action 4 with a classic contract, with an expected payment equal to the agent’s respective cost. Possible contracts that achieve this include $\langle(0,5/3,0,0),2\rangle$
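The expected-payment reasoning can be made concrete with a small sketch. Only the payment vector (0, 5/3, 0, 0) and the recommended action 2 are taken from the text; the outcome-probability matrix and costs below are hypothetical stand-ins, since the paper's full instance is not shown here.

import numpy as np

# Hypothetical instance: P[a, j] = Pr[outcome j | action a], c[a] = cost of action a.
P = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.4, 0.6, 0.0, 0.0],
              [0.3, 0.3, 0.4, 0.0],
              [0.2, 0.2, 0.2, 0.4]])
c = np.array([0.0, 1.0, 1.5, 2.0])

def is_implementable(t, a):
    # Classic contract <t, a>: the agent must weakly prefer the recommended action a.
    u = P @ t - c
    return bool(np.all(u[a] >= u - 1e-12))

def expected_payment(t, a):
    return float(P[a] @ t)

t = np.array([0.0, 5/3, 0.0, 0.0])   # payment vector of <(0, 5/3, 0, 0), 2>
a = 1                                 # "action 2" with zero-based indexing
print(is_implementable(t, a), expected_payment(t, a))   # True, ~1.0 (the agent's cost c[a])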
Section 3 examines optimal ambiguous contracts. We first show that the principal can use ambiguity to her advantage, either because ambiguity allows her to implement the optimal action under classic contracts at a reduced cost, or because she exploits the ambiguity to implement a different action. Indeed, a principal ...
Third, ambiguous contracts can be a burden for agents, either because they are more difficult to evaluate and enforce or because they are a weapon for extracting surplus from agents. Circumstances may accordingly restrict attention to ambiguity-proof classes of contracts. Our results show that insisting on ambiguity pr...
Section 5 defines a class of contracts to be ambiguity-proof if it is impossible for the principal to implement an action at a lower expected payment with an ambiguous contract than with a classic contract. We show that a class of contracts is ambiguity-proof if and only if it is ordered, in the sense that for any two ...
B
$I(\bm{S};(\bm{y}_{1},\ldots,\bm{y}_{T}))\le\frac{n}{m}\cdot\mathop{\mathbb{E}}_{\bm{y}}\left[\sum_{t\in T}\mathrm{stab}_{m}\left(\mathcal{M}^{\bm{y}_{1},\ldots,\bm{y}_{t-1}}\right)\right].$
We first use Lemma 5.3 to bound the drop in $m$-conditional entropy of $\bm{S}$ by conditioning on $\bm{y}\coloneqq(\bm{y}_{1},\ldots,\bm{y}_{T})$ …
Indeed, we use such an amortized analysis. For the purpose of this exposition, assume $n$ is even and fix $m=n/2$. For this choice of $m$, we track the “half-conditional entropy” of $\bm{S}$.
The proof of Theorem 12 is broken into two pieces. First, we bound the drop in $m$-conditional entropy in terms of ALMOKL stability. Note that the half-conditional entropy defined in Equation 4 corresponds to the $m=n/2$ case below.
In this subsection, we prove Lemma 5.4, which bounds the mutual information between $\bm{S}$ and $\bm{y}$ in terms of the average drop in $m$-conditional entropy of $\bm{S}$ conditioned on $\bm{y}$. We begin with the special case where $n$ is even and …
C
The former condition holds iff $\operatorname{tr}\left((\boldsymbol{D}_{G}-\boldsymbol{A}_{G})^{i}\right)=\operatorname{tr}\left((\boldsymbol{D}_{H}-\boldsymbol{A}_{H})^{i}\right)$ …
In general, carefully imposing conditions on where the labels are placed is essential for ensuring that the class of bilabelled graphs is closed under the desired operations and also generated by atomic graphs under them, cf.  [rattan_weisfeiler_2023, p.  2271].
The most basic bilabelled graphs, so-called atomic graphs, make their first appearance in Theorem 3.6. These graphs are used to reformulate Equations 12 and 7. The atomic graphs are also the graphs which the sets $\mathcal{L}_{t}$ and $\mathcal{L}_{t}^{+}$ …
Let $t\geq 1$. Write $\mathcal{L}_{t}^{+}$ for the class of $(t,t)$-bilabelled graphs generated by the set of atomic graphs $\mathcal{A}_{t}$ …
Let $t\geq 1$. A $(t,t)$-bilabelled graph $\boldsymbol{F}=(F,\boldsymbol{u},\boldsymbol{v})$ is atomic if all its vertices are labelled. Write $\mathcal{A}_{t}$ …
A
The proxemic relationship between humans has been studied for years as research on humans’ mutual spatial behavior [32, 18]. In 1966, Hall [16] introduced the concept of proxemics between humans, in which the four zones of personal space were first established. The first purpose of maintaining a certain p…
Proxemics can also be read the other way: Aiello [2] later proposed hypotheses that proxemics exists because people (1) intend to suppress the strong arousal induced by other people’s proximity [4], or (2) prefer to maintain more space so that they can handle potential threats [11, 8]. People’s proxemic dista…
The ever wider and deeper integration of robots into human lives has brought them into more private spaces. There is already a strong need for, and solid integration of, robots in home services [44, 28]. It is necessary for robots to understand and use human proxemic rules so that they can be well situated and maintain a friend…
Humanoid and animal-shaped robots are expected to see robust implementation in real-world practice. Unlike their more established predecessors, robotic arms and Unmanned Guided Vehicles (UGVs), which have already taken irreplaceable places in manufacturing, medical surgery, and delivery, these robots find their application in more p…
At a minimum, robots should maintain a safe working distance from nearby humans, and people should be aware that collaboration with robots happens outside these areas. These are hard limits for robots working in any environment, especially for robots that may severely harm people, such as robotic arms or autonomous vehicles, …
A
Thus, $\lambda$ imposes a constraint on each joint angular value $\theta_{i}$, $i\in\{1,2,3\}$. When $\theta_{i}$ is within its…
Tringali et al. [20] introduced an optimal inverse kinematics approach based on optimization techniques for redundant robot manipulators, incorporating both linear and nonlinear constraints by selecting appropriate initial conditions. Similarly, Lu [21] employed optimization methods to ensure feasible and smooth joint ...
The primary advantage of the analytical method is its accuracy and efficiency, providing real-time results and computing valid potential configurations. However, as the number of degrees of freedom in a manipulator increases beyond six, the number of possible solutions becomes very large. Furthermore, if a solution is ...
A multi-objective optimization genetic algorithm (MOOGA) is a metaheuristic technique inspired by evolutionary biology processes, designed to tackle optimization problems. Bjoerlykhaug [20] utilized MOOGA to solve inverse kinematics in real-time while the robot is in motion, resulting in a reduction of computational ti...
The analytical method addresses Inverse Kinematics (IK) by solving a set of closed-form equations, which directly compute the generalized coordinates needed to position the manipulator’s end effector at a predefined target location [1]. This method leverages geometric insights and the specific structure of the robot. H...
A
A property of digraphs is a set of finite digraphs closed under isomorphism. A digraph $G$ is $\varepsilon$-far from having a property $\Phi$ if any digraph $G'$ on the vertex set $V(G)$ that d…
Alon and Shapira [4] proved that every monotone property of undirected graphs (that is, closed under the removal of edges and vertices) is strongly testable, see Lovász and Szegedy for an analytic approach [22], while Rödl and Schacht generalized this to hypergraphs [23], see also Austin and Tao [8]. Similar results ha...
The goal of this paper is to study testability of finite posets as special digraphs. By a poset, we mean a set equipped with a partial order $\prec$ that is anti-reflexive and transitive. Alon, Ben-Eliezer and Fischer [1] proved that hereditary (closed under induced subgraphs) classes of ordered graphs are stro…
The relationship between local and global properties of structures is a central theme in combinatorics and computer science. Since the work of Rubinfeld and Sudan [25], testing properties by sampling a small number of elements has been an emerging research area. A classical result of this kind is the triangle removal lemma …
Unfortunately, the dependence on $\varepsilon$ can be quite bad already in the case of undirected graphs: the known upper bounds in the Alon-Shapira theorem are wowzer functions due to the iterated involvement of Szemerédi’s regularity lemma. Following Alon and Fox [7], we call a property easily testable if f…
A
Instead of filling in missing data, it may be preferable to remove irrelevant entities that do not pertain to the intended domain. This will prevent the KG from being unnecessarily bloated. Applying automatic approaches can cause extraction of irrelevant information and requires techniques either of manual nature or b...
Quality assurance is important not only for the resulting KG as an outcome of the KG construction process but also within the different construction tasks, such as selecting good-quality sources (Section 3.1.2), data cleaning for acquired data, knowledge extraction, ontology evolution or entity fusion. The data cleanin...
Quality Assurance. Quality assurance is a cross-cutting topic playing an important role throughout the whole KG construction process. Quality problems in the KG can be multi-faceted relating to the ontological consistency, the data quality of entities and relations (comprehensiveness), or domain coverage. The coverage ...
To represent and use KGs as informally defined above, a powerful graph data model is needed that supports entities and relations of different types as well as their ontological description and organization [24]. Moreover, the graph data model should provide a comprehensive query language and possibly more advanced grap...
In KG, quality assurance, versioning, and rollback mechanisms are crucial for managing errors and maintaining data integrity. By implementing version control mechanisms, changes in the KG can be tracked, allowing for easy rollback in the event of errors or quality issues. This ensures that previous versions of the KG c...
D
$-30^{\circ}\leqslant\arccos\!\left(\frac{P_{C/0}-P_{B/0}}{\cdots}\cdot\vec{y_{0}}\right)\leqslant 20^{\circ}$
This section focuses on the dynamic description of the lower limb shown in Figure 2 using the Dual Quaternion-based recursive Newton-Euler method. This method involves calculating the velocities and accelerations of the center of mass of each link, known as twists, based on the positions, velocities, and accelerations...
In this section, the Forward Kinematics (FK) of the lower limbs, depicted in Figure 1, using dual quaternions is established. FK involves computing the positions and orientations of the end-effector in task space from the axes and angles of the joint rotations. The lower limb is decomposed into four segments: the pelv...
Backward recursion involves calculating the required joint torques to achieve the predefined motion based on the dual quaternion wrenches acting at the end effector, along with the twists and their first time derivatives for each link’s center of mass. The wrench at the foot’s center of mass is given by:
In this paper, the dual quaternion-based theory is applied to the kinematics and dynamics study of the 7-DOF human lower limbs in 3D space. Subsequently, the artificial neural networks method is used to solve the inverse kinematics problem. The efficiency of the artificial neural networks method is verified using the j...
A
Let us compare the statements of Theorems 4 and 4, for convex functions ($\alpha=1$). Theorem 4 guarantees convergence to an $\varepsilon$-global minimum in at most $\mathcal{O}\!\left(\frac{1}{\varepsilon^{5/2}}+\frac{d}{\varepsilon^{3/2}}\right)\times$ …
We consider again a diagonal neural network and estimate the time costs needed for computing its gradient, Hessian, decomposing the Hessian, and solving the cubic subproblem. Figure 6 shows that the average cost of computing the Hessian is significantly higher than the cost of computing one gradient, and the quotient g...
To address these challenges, we can take into account second-order information (the Hessian matrix) and apply Newton’s method (see, e.g., (Nesterov, 2018)). Among the many versions of this algorithm, the Cubic Newton method (Nesterov & Polyak, 2006) is one of the most theoretically established. With the Cubic Newton m...
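For readers unfamiliar with the method, the core of each Cubic Newton iteration is the regularized subproblem min_s <g, s> + 1/2 <Hs, s> + (M/6)||s||^3. Below is a minimal dense-matrix sketch (ours, not the paper's implementation): it eigendecomposes the Hessian and solves the subproblem by bisection on r = ||s||.

import numpy as np

def cubic_newton_step(g, H, M, tol=1e-10):
    # Solve min_s g^T s + 0.5 s^T H s + (M/6) ||s||^3 for symmetric H (dense sketch).
    lam, Q = np.linalg.eigh(H)
    gt = Q.T @ g
    def s_norm(r):
        # norm of s(r) = -(H + (M r / 2) I)^{-1} g
        return np.linalg.norm(gt / (lam + 0.5 * M * r))
    lo = max(0.0, -2.0 * lam.min() / M) + 1e-12
    hi = max(lo, 1.0)
    while s_norm(hi) > hi:                  # grow the bracket until ||s(r)|| <= r
        hi *= 2.0
    for _ in range(200):                    # bisection on the fixed-point equation ||s(r)|| = r
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if s_norm(mid) > mid else (lo, mid)
        if hi - lo < tol:
            break
    r = 0.5 * (lo + hi)
    return Q @ (-gt / (lam + 0.5 * M * r))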
We consider the variance-reduced cubic Newton method from (Zhou et al., 2019) (referred to as “full VR”), its lazy version where we do not update the snapshot Hessian (“Lazy VR”), the stochastic Cubic Newton method (“SCN”), the Cubic Newton algorithm (“CN”), Gradient Descent with line search (“GD”) and Stochastic Gradi…
Figure 4 shows that compared to other second-order methods, “Lazy VR” has considerable time and computation savings. It also performs close to gradient descent with line search, which performs very well in this case. Figure 5 shows the same experiment for larger dimensions; most importantly, we see that the gap betwe…
C
… $\tilde{\mathbf{f}}^{Y}\sim\mathcal{CN}(\mathbf{0},\mathbf{I}_{N})$; $\mathbf{f}^{X}=\sqrt{\beta_{\mathbf{f}^{X}}}\,\tilde{\mathbf{f}}^{X}$ …
In this paper, we analyzed the effect of deploying an IRS on the performance of an OOB operator that has no control over the IRS. We showed that while the IRS optimally serves the in-band UEs, it simultaneously, and at no additional cost, enhances the quality of the channels of the OOB UEs. This enhancement in the OOB ...
As mentioned earlier, in this work, we consider a scenario where operator X deploys and controls an IRS in order to enhance the throughput of the users being served by it, and are interested in the effect of the IRS on an operator Y that is providing services in a different frequency band. Thus, in order to serve the k...
The base stations (BSs) of operators X and Y (referred to as BS-X and BS-Y, respectively) and the UEs are equipped with a single antenna, and all the channels in the systems undergo frequency-flat fading. (Footnote 1: Extension to general cases with multiple antennas and frequency selective channels does not change the main messa…)
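The channel notation quoted above, f = sqrt(beta) * f_tilde with f_tilde ~ CN(0, I_N), can be illustrated with a short numpy sketch (ours; the values of N, beta and the RNG seed are arbitrary choices, not values from the paper).

import numpy as np

rng = np.random.default_rng(0)

def sample_channel(N, beta):
    # f_tilde ~ CN(0, I_N): i.i.d. circularly symmetric complex Gaussian entries of unit variance
    f_tilde = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
    return np.sqrt(beta) * f_tilde          # f = sqrt(beta) * f_tilde

f = sample_channel(N=64, beta=1e-6)         # e.g. one link with path-loss factor beta
print(f.shape, np.mean(np.abs(f) ** 2))     # average per-entry power is approximately beta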
In practice, multiple network operators co-exist in a given geographical area, each operating in a different frequency band. As a consequence, at a given point in time, multiple UEs are served by different operators in the system. In such a scenario, if an IRS is optimized to cater to the needs of one of the operators, i…
B
In the above two cases, both label noise and data noise corrupted the original discriminative patterns in each category, thus making the DNN unlikely to learn transferable concepts. In comparison, here, let us discuss a new case, i.e., even if there exist meaningful patterns in training data, the DNN may still not lear...
If the ground-truth label for classification is incorrectly annotated on some samples, then the DNN usually has to memorize each incorrectly-labeled training sample for classification without summarizing many common features from such chaotic annotations. Thus, in this case, the DNN usually encodes more non-transferabl...
Furthermore, if a DNN encodes faithful symbolic concepts, then these concepts are supposed to exhibit certain discrimination power in the classification task. In other words, for each concept $S$, if the concept is saliently activated on a set of samples, then interaction effects $I(S)$ …
Furthermore, if a DNN learns meaningful concepts, then these concepts are supposed to exhibit certain discrimination power in the classification task. The same concept extracted from different samples needs to consistently push the DNN towards the classification of a certain category.
To be precise, if a classification task can be conducted with some shortcut solutions without requiring the DNN to encode complex concepts, then the DNN probably converges to the shortcut solution. For example, in an image classification task, if pixel-wise colors are sufficient to conduct the image-classification task...
D
Figure 8 shows that a DNN trained with less label noise usually exhibited a higher $\text{Sim}^{(m,t)}$ for low-order interactive concepts. This means that DNNs trained with less label noise usually learned low-order interactive …
Besides, the high over-fitting risk of high-order concepts can also be explained by the detouring dynamic of learning high-order concepts. I.e., we find that a high-order concept is more likely to be mistakenly represented by the DNN as a mixture of low-order concepts. We also find the following four phenomena to expla...
In this section, we analyze the learning dynamics of concepts with a simple experimental setting, i.e., using a DNN to fit a boolean polynomial. We find that a high-order concept is not directly learned, but is likely to be mistakenly encoded as a mixture of low-order concepts in early epochs. In spite of the simplici...
In this paper, we provide a conceptual understanding of the reason why low-order concepts in training data can usually better generalize to testing data than high-order concepts. Specifically, we prove that the average inconsistency of concepts usually increases exponentially along with the order of concepts. We find t...
Although there is a common heuristic that complex concepts are usually more likely to be over-fitted, people still do not know the exact definition of concepts with an analytic connection to their generalization power. Because we also find the low generalization power of complex (high-order) interactive concepts, in th...
B
ReLU (Rectified Linear Units) is mainly used in the field of vision classification. In classification, more layers are ordinarily better in deep learning. On datasets such as MNIST, even simple CNNs with three layers can achieve high classification performance. However, for more challenging datasets such as CIFAR10, s…
Table 1 shows that our activation function has a more obvious boundary between the linear part in the positive region and the non-linear part in the negative region than some other activation functions. Our activation function acts like the identity function, such as ReLU or ELU, in the positive region so that it does not los…
We realized that our activation function not only shows good accuracy but also converges to zero rapidly when updating the loss function during tests on some mathematical models and neural networks. We formulated our activation function in the following order. First, we used the hyperbolic tan…
The MoLU is a simple, beautiful and powerful activation function that consists of a combination of hyperbolic tangent and exponential functions. The slope of the MoLU in the negative region makes it possible to escape from local minima. On the other hand, in the positive region, the slope of the MoL…
A robust property of MoLU is that it approaches the minimum of a loss function rapidly without losing stability. This is a truly useful characteristic when training on long time-series data using Neural ODEs (Neural Ordinary Differential Equations). To prove the performance of MoLU, we conducted experiments on Neu…
C
The synchronous execution of two dynamical systems $A$ and $B$ gives a dynamical system $A\otimes B$, whose transition digraph is the direct product [15] of the transition digraphs of $A$ and $B$. This product, together with a disjoint union ope…
We note that the general framework of reverse search [1], equipped with the alternating output technique [26], yields a natural polynomial-time algorithm to produce the solution that comes after any given solution $C$ in the enumeration, provided that we are able to decide in polynomial time whether $C$ …
Within this second framework, algorithms running with polynomial delay (requiring polynomial time in the input size between consecutive outputs) are considered among the most efficient in algorithmic enumeration. In this paper, we place ourselves in the output-sensitive approach and refer the reader to [16, 24] for mor...
As a consequence, there is no hope of devising an algorithm listing these objects in polynomial time in $n$. Rather, the kind of efficiency we must aim for is either guaranteeing small exponential time, aiming at reducing as much as possible the base of the exponent, referred to as the input-sensitive approach [1…
In our case, as the worst-case delay to generate a child is the same as that of generating all children, we do not even need to argue that we can resume the enumeration from the $i$-th child as long as we are interested in the asymptotic delay, though this can be done to speed up the implementation. As for deciding …
C
To this end, we have presented new stability results for the Wasserstein, MMD, and KL divergences, since these are some of the most popular choices in practice. Our numerical experiments demonstrate the sharpness of our analysis and investigate its validity in more general settings, beyond our theoretical assumptions.
… of the number of samples from $\nu$ and $\eta$. Analysis of the resulting statistical errors is a topic of great interest in the literature; see, for example, [33, 48, 80] for estimating OT maps and [50, 98] for triangular and other transport maps. Obtaining sharp rates for the statistical error in th…
We primarily focus on the approximation problem, which, as explained above, is immediately relevant to the task of drawing samples from $\nu$. We will not directly address the statistical problem of density estimation from finite collections of samples using transport (see, e.g., [98]), but our results are rele…
Overall, our theoretical results take a step towards understanding the approximation error of transport-based sampling and density estimation algorithms. At the same time, the present analysis suggests an extensive list of open questions for future research:
The primary goal of this article is to provide an answer to this question by (1) providing error analysis and rates for a broad abstract class of measure-transport algorithms, which translate to the accuracy of the resulting sampling procedure described above; and (2) showing that many algorithms, including well-known ...
C
Human emotions are complex, conscious experiences that profoundly influence behavior and can be expressed in various forms. These emotions are pivotal in psychological processes and significantly impact human actions. The advent of Artificial Intelligence (AI) and Deep Learning (DL) has driven the development of intell...
In this paper, we introduce our approach MMA-MRNNet, a novel deep learning architecture designed to tackle the complexities of FEIE in scenarios where video-level annotations (i.e., there exists one annotation for the whole video) are provided rather than frame-level annotations. The key challenges addressed by MMA-MR...
We demonstrated the effectiveness of MMA-MRNNet on the Hume-Reaction dataset, where it consistently outperformed by large margins all state-of-the-art methods. We also demonstrated the effectiveness of the MMA component across multiple in-the-wild datasets, where it consistently outperformed all state-of-the-art method...
The extracted representations are then passed to the MRNN component, which consists of an RNN designed to capture temporal dependencies across the sequence of frames. To handle the varying lengths of input videos, a Mask layer is employed within the MRNN. This layer dynamically selects relevant RNN outputs based on th...
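A minimal PyTorch sketch of the masking idea described above (ours; the actual MRNN architecture, feature dimensions, and pooling rule of MMA-MRNNet are not specified here and are assumed only for illustration):

import torch
import torch.nn as nn

class MaskedGRU(nn.Module):
    # GRU over padded per-frame features; a mask built from the true lengths selects valid outputs.
    def __init__(self, in_dim, hidden):
        super().__init__()
        self.rnn = nn.GRU(in_dim, hidden, batch_first=True)

    def forward(self, x, lengths):
        # x: (batch, max_frames, in_dim); lengths: (batch,) number of real frames per video
        out, _ = self.rnn(x)
        mask = (torch.arange(x.size(1), device=x.device)[None, :] < lengths[:, None]).float()
        out = out * mask.unsqueeze(-1)                   # zero out padded time steps
        return out.sum(dim=1) / lengths.clamp(min=1).unsqueeze(-1).float()   # mean over valid frames

video_repr = MaskedGRU(128, 64)(torch.randn(2, 7, 128), torch.tensor([7, 4]))
print(video_repr.shape)                                  # torch.Size([2, 64])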
In this paper, we introduced MMA-MRNNet, a novel deep learning architecture for dynamic multi-output Facial Expression Intensity Estimation (FEIE) from video data. Our method addresses the limitations of traditional approaches by leveraging a Multi-Task Learning (MTL) framework to extract rich affective representation...
A
A conformity function is a mapping $\rho:\mathbb{R}^{d}\times\mathscr{Y}\times\Omega\to\mathbb{R}$ such that $\rho(x,y)=\rho(x,y,\boldsymbol{\cdot})$ …
Under the data exchangeability assumption, the sequence of conformity scores $\{S_{i}\}_{i\geq 1}$ is exchangeable.
Note that the regularity of a specific conformity function $\rho$ is contextual, being inherently dependent on the distribution of the underlying data sequence. Technically, we can always avoid ties among the sequence of conformity scores almost surely by introducing a properly constructed ancillary tie-break…
In general, the coverage indicators $Z_{i}$ are dependent random variables, since for all future observables the corresponding conformal prediction sets in Definition 3 are defined in terms of the same calibration sample conformity score $S_{(\lceil(1-\alpha)(n+1)\rceil)}$ …
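For concreteness, a minimal split-conformal sketch (ours, not the paper's code) showing how the order statistic S_(ceil((1-alpha)(n+1))) enters the prediction set; the absolute-residual conformity score is assumed here as the conformity function.

import numpy as np

def split_conformal_interval(cal_scores, y_hat_new, alpha=0.1):
    # cal_scores: S_i = |Y_i - mu_hat(X_i)| on the calibration sample
    n = len(cal_scores)
    k = int(np.ceil((1 - alpha) * (n + 1)))      # rank of the calibration quantile
    q = np.sort(cal_scores)[min(k, n) - 1]       # S_(ceil((1-alpha)(n+1))), guarded for k > n
    return y_hat_new - q, y_hat_new + q          # prediction set for the new observable

cal_scores = np.abs(np.random.default_rng(1).normal(size=200))
print(split_conformal_interval(cal_scores, y_hat_new=3.2, alpha=0.1))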
Conformity functions are agnostic to the choice of the specific models or algorithms used to construct $\hat{\mu}$, $\hat{\xi}_{p}$, and $\hat{\pi}$ in Exa…
B
Figure 2: (a) Visualization of the human action jumping jacks with three representative event representations. (b) Comparison of different point-based architectures for object classification. Blue, pink and green blocks represent local feature encoding, aggregation function and global modeling, respectively. The compe...
Existing event-based learning models can be summarized into two main categories: frame-based and point-based methods. Frame-based methods [4, 5, 7, 8] first convert events into dense frame-based representations and then process them using learning models designed for images, such as convolutional neural networks (CNNs...
Unlike frame-based counterparts, point-based methods exploit the sparsity and asynchrony of event data and achieve a trade-off between accuracy and model complexity. They design sparse representations to convey the spatial and temporal information of event streams, as shown in Fig. 2 (a). A natural approach is to take ...
The great success of transformer architectures [32] in natural language processing attracts increasing interest among computer vision researchers. The core component of transformers is the self-attention mechanism that computes semantic correlations between input elements to model long-range dependencies. Researchers h...
Recent graph-based approaches [16, 9, 10] construct point-wise graphs on downsampled event streams and exploit graph neural networks (GNNs) to extract event features. Considering that the point-wise relationship of events is susceptible to noise signal [31], voxel-wise GNNs [11, 12] first convert an event stream into ...
B
It was proven by Takashi [33] that a given function $f(\mathbf{x})$ that meets the condition of the Eikonal Equation $\lVert\nabla_{\mathbf{x}}f(\mathbf{x})\rVert=1$ on…
Points with a high loss from Eq. 8 are not on the estimated surface. Intuitively, if all points were certain, the loss would be zero for all of them. However, this was not the case in our experiments. Therefore, we compute the Eikonal loss from Eq. 8 for all points on our current shape $O$ and take it as …
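A minimal PyTorch sketch (ours, not the paper's implementation) of a per-point Eikonal residual of this kind, computed with autograd and usable as the uncertainty score described above:

import torch

def eikonal_residual(f, points):
    # (||grad_x f(x)|| - 1)^2 for each point; f maps (N, 3) points to (N, 1) SDF-like values
    points = points.clone().requires_grad_(True)
    values = f(points)
    grad = torch.autograd.grad(values.sum(), points, create_graph=True)[0]
    return (grad.norm(dim=-1) - 1.0) ** 2

# toy check: for f(x) = ||x|| the residual is ~0 away from the origin
pts = torch.randn(16, 3)
print(eikonal_residual(lambda x: x.norm(dim=-1, keepdim=True), pts).max())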
In Fig. 6, the results for the precision of the reconstruction are shown. The single-object experiments are shown in black. We can see that the trends of both JS and CD are the same as for the simulation, even though we can notice noise in some touches. In yellow, the results for multi-object experiments are shown. Aga...
We will first describe the module for the shape creation itself. In [1] the IGR network was used as a standalone library. To perform more efficiently and to be able to handle more objects at once, we modified it to be more compatible with the whole ecosystem (under Robot Operating System (ROS)). The module contains t...
The first group of improvements concerns the process of shape completion performed by Implicit Geometric Regularization for Learning Shapes (IGR), which we modified as follows. We use a new, theoretically grounded, method to determine the points with highest uncertainty. In addition, we changed the sampling of points...
A
Clones are sets of finitary operations that include all projections and are closed under composition (see [14, 25, 26]). They play a significant role in universal algebra, as the set of all term operations of an algebra always constitutes a clone, and, in fact, every clone is of this form. Therefore, comparing clones o...
In addition to their significance in universal algebra, clones also play an important role in the study of first-order structures. The polymorphism clone of a first-order structure, containing all finitary operations that preserve the structure, holds valuable information and serves as a powerful analytical tool. Clone...
Clones are sets of finitary operations that include all projections and are closed under composition (see [14, 25, 26]). They play a significant role in universal algebra, as the set of all term operations of an algebra always constitutes a clone, and, in fact, every clone is of this form. Therefore, comparing clones o...
In this framework the nullary operators $\mathsf{e}_{i}$ are the projections, and the $q$’s operators are the compositions of $\omega$-operations. The universe of an FCA (resp. $\aleph_{0}$ …
For countable structures classified as $\omega$-categorical, the polymorphism clone carries a substantial amount of information. In [6], primitive positive bi-interpretability of two $\omega$-categorical structures $A$ and $B$ is linked to the isomorphism of their polymorphism clones …
A
This motivates the following iterative algorithm for finding a pairwise disjoint collection of $s$-$t$ mincuts: (1) Find the leftmost $s$-$t$ mincut $X$ in $H_{s,t}$, (2) identify the set $E_{\mathrm{inv}}$…
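A greedy networkx sketch of this loop (ours; networkx's generic minimum_cut is used as a stand-in for the leftmost-mincut routine, so unlike the algorithm described here it does not guarantee a maximum disjoint collection):

import networkx as nx

def greedy_disjoint_mincuts(G, s, t):
    # Repeatedly take an s-t mincut, record its edges, and make them uncuttable;
    # stop as soon as the mincut value exceeds the original one.
    H = G.copy()
    base, _ = nx.minimum_cut(H, s, t, capacity="capacity")
    cuts = []
    while True:
        value, (S, T) = nx.minimum_cut(H, s, t, capacity="capacity")
        if value > base:
            break
        cut_edges = [(u, v) for u, v in H.edges if u in S and v in T]
        cuts.append(cut_edges)
        for u, v in cut_edges:
            H[u][v]["capacity"] = float("inf")   # this edge can no longer appear in a finite cut
    return cuts

# usage on a directed flow network:
# G = nx.DiGraph(); G.add_edge("s", "a", capacity=1); G.add_edge("a", "t", capacity=1); ...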
Let $\hat{C}$ denote the solution returned by the algorithm. First, we show that $\hat{C}$ contains only disjoint cuts. This follows from the fact that a cut can only be found amongst valid edges at any given iteration, and once an edge has been inc…
In other words, as the algorithm makes progress, no minimum $s$-$t$ cut that is disjoint from the ones found so far by the algorithm has edges to the left of the minimum $s$-$t$ cut found by the algorithm at the present iteration. Next, we show that this implies the maximality of the…
In the cut-finding step (Lines 10-14), the algorithm then finds the leftmost minimum $s$-$t$ cut amongst valid path edges. Notice that, for each $s$-$t$ path in $\mathcal{P}_{s,t}$, removing its …
The algorithm works by traversing the graph from left to right in iterations while marking the vertices it visits. Initially, all vertices are unmarked, except for $s$. Each iteration consists of two parts: a marking step and a cut-finding step. In the marking step (Lines 3-9), the algorithm identifies curre…
D
We let $X$ be the real line and the random transition $\Gamma(\cdot\,|\,u)$ be a Gaussian $\mathcal{N}(u,1)$ with mean $u$. The space of measures is $X_{M}=[0,1]$ …
Another interesting property of quantum mechanics is that the vast majority of quantum pure states themselves will have negligible algorithmic self information $\mathbf{I}_{\mathcal{Q}}(\ket{\psi}:\ket{\psi})$ …
We prove conservation of probabilities over successively general spaces. This includes finite sequences, infinite sequences, and $T_{0}$, second countable topologies. Conservation of probabilities over the case of finite and infinite sequences follows directly…
Quantum information theory studies the limits of communicating through quantum channels. This section shows the limitations of the algorithmic content of pure states and their measurements. Given a measurement apparatus $E$, there is only a tiny fraction of quantum pure states on which $E$’s applicati…
In quantum mechanics, given a quantum state $\ket{\psi}$, a measurement, or POVM, $E$ produces a probability measure $E\ket{\psi}$ over strings. This probability represents the classical information produced from the measurem…
C
… $\frac{\partial\ell}{\partial\sigma_{r}^{2}}=\frac{\partial\ell}{\partial\hat{x}_{i}}\cdot\frac{\partial\hat{x}_{i}}{\partial\sigma_{r}^{2}}$, $\;\frac{\partial\hat{x}_{i}}{\partial\sigma_{r}^{2}}=\frac{\mu_{r}+x_{i}}{2(\sigma_{r}^{2}+\epsilon)^{3/2}}$ …
As deep neural networks require a certain amount of labeled data for effective training, it is well known that the lack of a large enough corpus of accurately labeled high-quality data can produce disappointing results. Data augmentation [23] is one way to overcome this problem. However, current approaches generate the...
We have proposed a novel approach called “context normalization” (CN) that enhances deep neural network training in terms of training stability, fast convergence, higher learning rates, and viable activation functions. Similar to the conventional mixture normalization (MN) method, our approach is driven by the hypothes…
CN transform is a differentiable operation in deep neural networks that normalizes input data. By applying CN, the model can continuously learn from input distributions and adapt its representations to the target task, leading to improved performance. This normalization helps mitigate the influence of variations in in...
Based on the Mixture Normalization (MN) hypothesis proposed by [6] (ref. to Figure 1), our Context Normalization (CN) approach operates under a similar assumption that data can be effectively represented by a mixture of multiple components, as opposed to batch normalization (BN) [4]. In the Context Normalization (CN)…
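An illustrative numpy sketch of that assumption (ours; the real CN layer is a learned, differentiable module, and the context assignments below are hypothetical labels): each sample is standardized with the statistics of its own context rather than those of the whole mini-batch.

import numpy as np

def context_normalize(x, ctx, eps=1e-5):
    # x: (N, D) activations; ctx: (N,) context/component id of each sample
    out = np.empty_like(x, dtype=float)
    for c in np.unique(ctx):
        idx = ctx == c
        mu, var = x[idx].mean(axis=0), x[idx].var(axis=0)
        out[idx] = (x[idx] - mu) / np.sqrt(var + eps)   # standardize within the context
    return out

x = np.random.default_rng(0).normal(size=(8, 4))
ctx = np.array([0, 0, 1, 1, 0, 1, 0, 1])
print(context_normalize(x, ctx)[ctx == 0].mean(axis=0).round(6))   # ~0 within each context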
C
This work studies the importance of disentangling foreground and background features and proposes to synthesize both foreground and background features for more effective OOD detection in diverse real-world applications. This provides a new insight into the OOD detection problem.
We then propose a novel approach DFB, in which different existing foreground-based OOD detection methods can be seamlessly combined to jointly learn the ID features from both foreground and background dimensions. It offers a generic approach to enhance current OOD detection methods. To our knowledge, this is the first ...
This paper considers the importance of disentangling foreground and background features in OOD detection and proposes to leverage background features to enhance the OOD detection methods that are based on foreground features. To this end, we introduce a novel generic framework, called DFB, that can Disentangle the Fore...
We further propose a novel OOD detection framework DFB that utilizes dense prediction networks to segment the foreground and background from in-distribution training data, and jointly learn foreground and background features. It then leverages these background features to define background OOD scores and seamlessly com...
Using the semantics of foreground objects only to detect OOD samples can often be successful when the OOD samples have some dominant semantics that are different from the ID images. However, approaches of this type would fail to work effectively when the OOD samples do not have clear object semantics and/or exhibit some si…
A
Simply adjusting the contrast of low-light images using a technique such as histogram equalization [6] is often insufficient due to the amplification of noise [1, 7]. Learning-based methods have emerged which significantly outperform traditional methods. However, even the state-of-the-art deep learning (DL) techniques ...
In this paper, we present a framework for post-processing images which have undergone low-light image enhancement. The enhancement of low-light images often reveals a variety of degradations which are hidden in the dark, and thus a need for post-processing is introduced. Furthermore, each low-light enhancement techniqu...
Simply adjusting the contrast of low-light images using a technique such as histogram equalization [6] is often insufficient due to the amplification of noise [1, 7]. Learning-based methods have emerged which significantly outperform traditional methods. However, even the state-of-the-art deep learning (DL) techniques ...
Existing denoising techniques can be applied to denoise low-light images either before or after contrast enhancement [9, 10]. These denoising techniques range from low-pass filters and algorithms such as block matching and 3D filtering (BM3D) [11], to state-of-the-art DL denoisers [9, 12]. Despite denoisers significan...
The task of low-light image enhancement (LLIE) aims to improve the visibility of images which are captured under low-light conditions. Under-exposed images are often degraded in a variety of ways in addition to their lack of visibility. Notably, low-light regions of an image typically contain degraded color informatio...
C
Table 2: Loadings of the principal components. The table displays encoded variables (first column) and their corresponding loadings on the primary principal components in each question category, retaining at least 80% of the variance within each category. Loadings quantify the extent to which original v…
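A small scikit-learn sketch of how such loadings can be obtained (ours; "loadings" is taken here as components scaled by the square root of their explained variance, a common convention that may differ from the paper's exact definition, and the data below is a random stand-in):

import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def loadings_80pct(X):
    Z = StandardScaler().fit_transform(X)
    pca = PCA(n_components=0.80).fit(Z)   # keep the fewest components reaching 80% variance
    return pca.components_.T * np.sqrt(pca.explained_variance_)

X = np.random.default_rng(2).normal(size=(120, 6))   # stand-in for one question category
print(loadings_80pct(X).shape)                        # (n_variables, n_retained_components)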
We are grateful to Csaba Pleh, Peter Kardos, and Markus Strohmaier for their valuable advice. This project was supported by the Humboldt Foundation within the Research Group Linkage Program. JK and MZ were partially supported through ERC grant No. 810115-DYNASET. MZ acknowledges further support from 101086712-LearnData...
All subjects gave their informed consent for inclusion before they participated in the study. The protocol of the study was approved by the Ethics Committee of Central European University (reference number: 2022-2023/1/EX). All methods of the study were carried out following the principles of the Belmont Report.
In the game sessions, players are given two Wikipedia pages as the source and the target in each game. To reduce the disparities in prior knowledge among the participants, the source and target pages are chosen to be similarly distanced (2 or 3 steps away on the Wikipedia network) pages about renowned individuals from ...
Regression models. To investigate the impact of individual characteristics on navigation success and creativity, we employed four regression models. For navigation success, we conducted separate logistic regression analyses for games with time and distance constraints. The dependent variable was the binary measure $s_{ni}$ …
A
The core idea is to develop parameterizations able to transform generator-level particles information into reconstructed physics objects as schematically represented in Figure 1 (bottom). Such parameterizations can be built using deep generative models that have proven to succeed in describing the response of the LHCb ...
The first step of any simulation production is the generation phase in which the high-energy collisions are simulated with Monte Carlo generators such as Pythia8 [5] and EvtGen [6]. The output of the generation phase is the set of long-lived particles able to traverse partially or entirely, depending on the particle sp...
Combining stacks of GBDT and GAN models, Lamarr provides the high-level response of the LHCb tracking and PID systems. To validate the ultra-fast simulation approach the chosen machine-learning-based models are trained on detailed simulated samples and the output of Lamarr is compared to the reference distributions as ...
Lamarr [14] is a novel LHCb simulation framework implementing the ultra-fast simulation paradigm. The Lamarr framework consists of a pipeline of modular parameterizations designed to take as input the particles generated by the event generators and provide as output high-level quantities representing the particles succ...
As mentioned in the previous Section, the validation of the ultra-fast philosophy of Lamarr is based on the comparison between the distributions obtained from models trained on detailed simulation and the ones resulting from standard simulation strategies.
C
Additionally, we introduce a zero-shot learning approach for fitness prediction, which enables the direct validation of the model’s representational stability in a non-parametric manner. To ensure a thorough and unbiased comparison, we selected prominent protein language models and inverse folding models as benchmarks...
To explore enzyme recognition through binary classification, we leverage a novel dataset that extends the Fold dataset. Each fold in this dataset is meticulously crafted to include an equal number of enzymes, balanced with non-enzyme negative samples. Given the binary nature of the classification task, positive and neg...
To evaluate our proposed framework, we conducted benchmark tests. Due to the lack of established evaluation strategies for this novel pretraining paradigm, we designed a series of evaluation experiments, including internal tasks (e.g., contact map prediction and distribution alignment quality assessment) to demonstrate...
Our proposed method yields an average $\rho$ of 43.0%, consistently achieving the top matching rank. This suggests that leveraging prior language knowledge significantly contributes to enhanced overall performance in mutation prediction. In contrast, the optimal baseline model (ESM-IF) scores 42.2%. The superi…
Figure 1: (a) The proposed cross-modal contrastive learning framework utilizes a pretrained protein language model to guide the training of the protein structure model through contrastive alignment loss. To reinforce information constraints on the structure, we introduce a self-supervised contact map prediction. (b) T...
C
We implement all the methods with OpenKE [33], which is a PyTorch-based open-source framework for knowledge embedding. (Footnote 1: Codes are available at https://github.com/brcai/LiftNet.) We run TransE, TransH, DistMult, and ComplEx with low-dimensional (16) and high-dimensional (512) embedding dimensions to show the difference …
We adjust the setups of TC layers by increasing/decreasing the kernel size accordingly to ensure the output dimensions are always 512. The results with respect to link prediction accuracy (H@10) are shown in Table VIII. We see that the results of LN-TransE and LN-TransH are relatively stable, introducing more TC layers...
In Table V, we show the parameter efficiency of LiftNet-based methods on the three datasets. The results are shown pair-wisely, i.e., the numbers of parameters required by 512-dimensional KGE models and those of corresponding LiftNet models, because they achieve similar link prediction accuracy. KGE methods with 16-di...
The link prediction accuracy of the proposed LiftNet-based methods and the compared conventional methods is shown in Table II. We observe that TransE, TransH, DistMult, and ComplEx all obtain higher link prediction accuracy with 512 embedding dimensions than with 16 embedding dimensions, due to the increased expressive…
The results of LiftNet-based methods for knowledge graph link prediction (accuracy measured by H@10 and MRR) are shown in Fig. 3. Generally, on WN18RR datasets, we observe the link prediction accuracy increases with higher input dimension, and the increase is significant from 4-dimension to 16-dimension. However, after...
C
Finding the best route to distribute entanglement has proven to be a non-trivial problem [3, 4, 5, 6, 7, 8]. Caleffi et al. [3] studied a single-qubit entanglement generation model for bipartite entanglement (i.e., between only two nodes). In their model, the entangled qubits start as maximally entangled and subsequent...
Bugalho et al. considered in [6] bipartite and multipartite entanglement distribution assuming a single-qubit generation model and considering that not all links have the same fidelity or entanglement generation probabilities, and combined that with imperfect quantum memories. That work considered a single source mode...
Pirandola et al. [4] looked at bipartite entanglement networks based on the theoretical upper bounds for the channel capacity. That work showed that in a regime where entanglement distribution is close to its theoretical upper bound, Dijkstra’s algorithm can be used to find the path that maximizes the …
Chakraborty et al. [5] studied bipartite entanglement distribution in a flow model in which the ebits all have the same fidelity (quality of the entanglement), but each link has different capacities. Here, we note that a flow model is one in which multiple ebits are attempted to be established simultaneously. Those aut...
Finding the best route to distribute entanglement has proven to be a non-trivial problem [3, 4, 5, 6, 7, 8]. Caleffi et al. [3] studied a single-qubit entanglement generation model for bipartite entanglement (i.e., between only two nodes). In their model, the entangled qubits start as maximally entangled and subsequent...
B
… $\mathcal{B}_{r}$. $U\!\left(W_{N}^{(i)}\right)>U\!\left(W_{N}^{(j)}\right)$, if $A_{i}$ …
Fig. 4 provides the BLER performance of polar codes with $N=256$ and $K=128$, where the IMWD sequence means the inverse MWD sequence with the opposite criterion 3), i.e., when the synthetic channels have the identical $d_{i}$ …
Finally, since the synthetic channel with a larger index is more reliable empirically when the PO between two synthetic channels is not clear, criterion 3) is designed. Hence, the synthetic channels in the identical subset with identical partial MWD are ordered as
We introduce a new concept on the synthetic channels of polar codes, named partial MWD, which is used to evaluate the influence of each synthetic channel on the MWD when the information bit is transmitted in the synthetic channel. Then, based on the partial MWD, we order the synthetic channels and obtain a nested const...
Then, criterion 2) is used to order the synthetic channels in the identical subset. For $W_{N}^{(i)}$ with $i\in\mathcal{B}_{r}$ …
B
So in particular the Searcher will never choose the unfavored arc (branch) when the signal is for the favored one. The use of biased depth-first Searcher strategies (random choices at every branch node) of the Searcher was introduced in another context in Alpern (2010) and Alpern and Lidbetter (2014), but those distrib...
Note that as the signal becomes more certain, so that $p\rightarrow 1$ and $q\rightarrow 0$, the value (2) goes to $D_{Q}$, which goes to the distance of the furthest leaf node from $O$. As previously remarked…
The optimal Hider distribution over the leaf nodes can be found by a similar stochastic process in which the Hider starts at the root $O$ and at each branch node chooses a branch to enter according to a certain distribution. Of course, this is merely a mental calculation for the Hider, who is stationary in th…
This paper has addressed the question of how to optimally search for an adversarially hidden target on a tree network in the presence of unreliable signals. We have found optimal solutions for both the Searcher and the Hider that can be calculated recursively, and a closed form expression for the value of the game. Fu...
If the Hider lies in neither branch, any signal distribution may be used, as in this case the Searcher will return to node $j$ again after time equal to twice the length of the branches, regardless of her search method. As $p\rightarrow 1/2$, the signal becomes useless and the solution …
B
High-quality images can be generated using embeddings trained on 100 examples on a single consumer-grade GPU. Our showcased applications include enhancing diagnostic models in low-data scenarios by incorporating synthetic cases during training, simulating disease progression, and generating images with specific disease...
Furthermore, models trained with only synthetic cases do not see a large drop in performance, indicating that the synthetic cases are diagnostically accurate. To confirm visual results from section 4.1, classification models trained with synthetic cases generated with embeddings trained on 10 cases instead of 100 show ...
Pre-trained models are often trained on 2D RGB datasets, but many medical imaging modalities are 3D. Recently, studies such as Khader et al. (2023) and Pinaya et al. (2022) have trained diffusion models from scratch on 3D data or even on 4D data Kim and Ye (2022), and Han et al. (2023) use diffusion models conditioned ...
Several other works studied text-to-image latent diffusion models for medical imaging Chambon et al. (2022a); Akrout et al. (2023). Closest to our work is Chambon et al. (2022b), where the authors explore various methods to adapt a pre-trained Stable Diffusion model to chest X-ray generation.
While a dedicated diffusion model trained on a large captioned medical dataset would likely yield superior results, our findings are promising for institutions with limited computational resources. This approach is particularly relevant for rare diseases where collecting large datasets is impractical. It remains viable...
D
Each bounding box represents a NeRF, providing the flexibility to move, scale, or replace elements as needed. CompoNeRF’s capabilities also extend to textual edits, exemplified by the transformation of ’wine’ into ’juice’. Since the NeRFs have been well trained, we only finetune $\theta_{g},\theta_{l}$ …
Figure 7: The scene editing. Demonstrated here are the stages of our recomposition, utilizing cached source scenes. Each NeRF is individually identified by colorful labels. These decomposed nodes are then positioned in the initial layout and subsequently calibrated to form the final composition. The detailed descripti...
Moreover, several studies [22, 56, 63, 27, 21] are dedicated to enhancing the SDS loss to provide more detailed supervision. Departing from these singular approaches, our CompoNeRF introduces a novel approach for the creation of multi-object 3D scenes. It adopts an object-compositional strategy, utilizing an editable 3...
Figure 8: The composition strategy. Our proposed strategies for multi-object scene composition align with Eq. 2. The areas of NeRF overlap are indicated in gray. The green nodes represent composited samples. Our design is highlighted by the dashed box.
Our architecture advances scene reconstruction by providing an intuitive interface for layout manipulation. This capability is crucial for the reconfiguration of scene elements into novel scenes, as depicted in Fig. 3. Here, the input panel allows for adjustments in the attributes of bounding boxes, such as modifying ...
A
We compare the following clustering algorithms: K-Means, hierarchical (with Ward linkage), spectral (with radial-basis function affinity), HDBSCAN, and expectation maximization for a Gaussian mixture model (EM-GMM). We originally intended to include DBSCAN as well. However, the heuristics we tried for choosing the nei...
We measure performance in terms of adjusted mutual information (AMI) and adjusted Rand index (ARI), based on the ground truth cluster labels (Vinh et al., (2010), Hubert and Arabie, (1985)). To carry out the benchmark, we sample 10 times from each archetype, resulting in 60 distinct data sets. We repeat this process tw...
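A compact scikit-learn sketch of this benchmark loop (ours; make_blobs stands in for a sample drawn from an archetype, and sklearn.cluster.HDBSCAN requires scikit-learn >= 1.3, otherwise the separate hdbscan package can be used instead):

import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering, SpectralClustering, HDBSCAN
from sklearn.mixture import GaussianMixture
from sklearn.metrics import adjusted_mutual_info_score, adjusted_rand_score
from sklearn.datasets import make_blobs

X, y_true = make_blobs(n_samples=200, centers=3, random_state=0)   # stand-in data set
k = len(np.unique(y_true))
models = {
    "kmeans": KMeans(n_clusters=k, n_init=10, random_state=0),
    "ward": AgglomerativeClustering(n_clusters=k, linkage="ward"),
    "spectral": SpectralClustering(n_clusters=k, affinity="rbf", random_state=0),
    "hdbscan": HDBSCAN(),
    "em_gmm": GaussianMixture(n_components=k, random_state=0),
}
for name, model in models.items():
    labels = model.fit_predict(X)
    print(name, adjusted_mutual_info_score(y_true, labels), adjusted_rand_score(y_true, labels))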
Figure 7 shows four representative data sets with convex clusters drawn from each archetype. Figure 8 shows non-convex clusters resulting from applying the distort function (as described in Section 3.4). For archetypes with dimensionality greater than two, we use t-SNE to visualize the data sets in 2D (van der Maaten ...
{ 'name': 'two_very_different_shapes_pd', 'n_clusters': 2, 'dim': p, 'n_samples': 200, 'aspect_ref': 1.5, 'aspect_maxmin': 3.0, 'radius_maxmin': 3, 'imbalance_ratio': 2, 'max_overlap': 0.05, 'min_overlap': 0.001, 'distributions': ['normal', 'exponential'] }, where the dimensionality p...
Figure 10: Cluster overlap predicts clustering difficulty for non-convex clusters. Clustering performance is measured in terms of adjusted mutual information (AMI, left) and adjusted Rand index (ARI, middle). Right: the silhouette score is more sensitive to dimensionality but otherwise aligns well with our cluster ove...
A
Firstly, it is crucial to highlight that the oracle-free setting poses a more challenging scenario. When all oracle information is removed, generation-based baselines relying on templates exhibit a varying degree of performance decline on both datasets (↓ 0.5% to 37.42% in argument classification).
We conduct experiments on two variants of the ACE05 benchmark under the oracle-free setting to evaluate our COFFEE. The results demonstrate that the template-based baselines heavily rely on the additional oracle information, whereas our COFFEE exhibits superior empirical performance over these baselines in the absence...
Table 2: Performance comparison of COFFEE and SOTA generation-based approaches. † The trigger classification F1 of DEGREE is nearly zero because the model cannot exclude the negative samples constructed without a template.♮, ♢, and ♭ denote the model that requires a manually designed template, example keywords, and ev...
Although DEGREE is effective with the oracle information, it struggles to filter out the ‘invalid’ events in the oracle-free setting, resulting in an almost zero (2.18%) trigger classification F1. This indicates that the information leaked in the template significantly contributes to the performance of DEGREE.
Firstly, it is crucial to highlight that the oracle-free setting poses a more challenging scenario. When all oracle information is removed, generation-based baselines relying on templates exhibit a varying degree of performance decline on both datasets (↓ 0.5% to 37.42% in argument classification).
C
$[\mathcal{H}]^{s_{2}} \hookrightarrow [\mathcal{H}]^{s_{1}} \hookrightarrow [\mathcal{H}]^{0}$ exist and are compac...
Before introducing the $L^{q}$-embedding property of the interpolation space $[\mathcal{H}]^{s}$, we first prove the following lemma, which characteri...
where $m := \min\{k \in \mathbb{N} : k > r\}$. (We refer to Appendix A for the definition of real interpolation and Sawano 2018, Chapter 4.2.2 for more details). It is well known that when $r > \frac{d}{2}$...
The outline of the rest of the paper is as follows. In Section 2, we introduce basic concepts including priori knowledge of RKHS, integral operators and the definition of the interpolation space. In addition, we formally define the spectral algorithm, which is the main interest of this paper, and provide three example...
It is worth pointing out the relation between the definition (5) and the interpolation space defined through the real method (real interpolation). For details of interpolation of Banach spaces through the real method, we refer to Sawano (2018, Chapter 4.2.2). Specifically, Steinwart and Scovel (2012, Theorem 4.6) reve...
D
Table 1: Empirical results and total runtimes (time taken by surface variation computation and simplification) for all tested simplification methods and point clouds. We report the maximum and mean Hausdorff distances between the original meshes, and the meshes reconstructed from the simplified point clouds. Also repo...
Overall, these results show that our approach provides a computationally efficient option for performing point cloud simplification in settings where the user wishes to strike a balance between preserving high fidelity around sharp features in the cloud, and ensuring that the simplified cloud covers the manifold defin...
The WLOP baseline does not efficiently preserve the features and favours uniformly covering the domain of the original cloud. Therefore, the mean surface variation of the WLOP simplified clouds is lower, but overall the Hausdorff distances obtained from the reconstructed meshes are superior to those obtained by our met...
HC and Potamias et al. are the only baselines with shorter runtimes than our method, and obtain maximum Hausdorff distances comparable to those obtained by our approach. However, as discussed in Section 2, tuning the user-specified HC parameters makes striking a balance between feature preservation and retaining a suffi...
In this section we will introduce a number of existing point cloud simplification techniques, with a particular focus on works which have a feature-preserving element to their approach. Some of the earliest curvature-sensitive simplification techniques were proposed by Pauly et al. [26] and Moenning et al. [25]. The fo...
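Surface variation, referred to in the runtimes and feature-preservation discussion above, is commonly computed from the eigenvalues of each point's local covariance matrix (a measure often attributed to Pauly et al.). The sketch below is a generic version of that computation, not the paper's implementation; the neighbourhood size k and the random cloud are assumptions.

```python
# Hedged sketch of the standard surface-variation measure: for each point, the
# smallest eigenvalue of the local covariance divided by the eigenvalue sum.
import numpy as np
from scipy.spatial import cKDTree

def surface_variation(points: np.ndarray, k: int = 16) -> np.ndarray:
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)          # k nearest neighbours (incl. the point itself)
    sigma = np.empty(len(points))
    for i, nbrs in enumerate(idx):
        cov = np.cov(points[nbrs].T)          # 3x3 local covariance
        lam = np.sort(np.linalg.eigvalsh(cov))
        sigma[i] = lam[0] / lam.sum()         # ~0 on a plane, larger near sharp features
    return sigma

pts = np.random.rand(1000, 3)
print(surface_variation(pts).mean())
```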
C
Training on full resolution (FullRes): We implemented a distributed version of the proposed architecture that splits the task to two GPUs if necessary. This allows for training directly on full resolution ($256^3$) data, given that the expensive speciali...
Training on full resolution (FullRes): We implemented a distributed version of the proposed architecture that splits the task to two GPUs if necessary. This allows for training directly on full resolution ($256^3$) data, given that the expensive speciali...
$256^3$ images, we distribute the model over 2 GPUs. The methods HalfRes and PatchDDM were trained on one GPU only. The optimizer we used was AdamW [Loshchilov and Hutter(2017)] with the default parameters.
of $1\times 1\times 1\,\text{mm}^{3}$, resulting in a total scan size of $240\times 240\times 155$, which we padded to a size of $256\times 256\times 256$...
For three spatial dimensions (i.e. 3D) this means that reducing the input size from $256^3$ to $128^3$ results in a reduction
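A minimal numerical illustration of the preprocessing and the savings mentioned above (the scan size and the padding target are taken from the text; the dtype and centring are assumptions): halving each spatial side of a 256^3 volume reduces the voxel count by a factor of (256/128)^3 = 8.

```python
# Small illustration (not the authors' code): padding a 240x240x155 scan to
# 256^3 and the voxel reduction obtained by halving each spatial side.
import numpy as np

scan = np.zeros((240, 240, 155), dtype=np.float32)
pad = [((256 - s) // 2, 256 - s - (256 - s) // 2) for s in scan.shape]
padded = np.pad(scan, pad)                  # -> (256, 256, 256)

print(padded.shape, 256**3 / 128**3)        # halving each side gives an 8x voxel reduction
```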
D
$\boldsymbol{\phi_{1,l}^{t}}$, $\boldsymbol{\phi_{2,l}^{t}}$, ...
We systematically conduct numerical experiments designed to elucidate the influence exerted by the aggregation weight $\alpha$ in the objective function presented in Eq. (13) on the model efficacy and facilitate the practical application and promotion of FedAgg. As depicted in Fig. 12, the decrement of the hy...
In this paper, to achieve better FL model performance, we propose an adaptive learning rate scheme by considering the aggregated gradients of all clients in the local model updating process and the deviations between the local and average parameters. Our innovation points and main contributions are summarized as follo...
In this section, we first introduce the preliminaries regarding the standard FL model in Section III-A. Then, we formulate our FL optimization problem by considering the local model deviation of each client in Section III-B. The workflow of our proposed FL framework with aggregated gradient is presented in Fig. 3 and ...
The rest of this paper is organized as follows. We review the related work in Section II. System model and problem formulation are given in Section III. In Section IV, an analysis of the adaptive learning rate is presented. We provide the convergence analysis in Section V. Experimental results are shown in Section VI....
C
For different kinds of instructions, our approach can output reasonable responses comparable to the fully fine-tuned Alpaca, including question answering, language translation, and code generation. Please refer to the Appendix for a full comparison with Alpaca-LoRA, GPT-3 (Brown et al., 2020), and LLaMA-I (Touvron et a...
For better training stability and final performance, we introduce the zero-initialized attention mechanism with a learnable gating factor, which increasingly incorporates instructional signals, while preserving the pre-trained knowledge in LLaMA. With only 1.2M parameters and one-hour training, our approach effectively...
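A hedged sketch of what a zero-initialized, gated attention branch of this kind can look like; the module name, prompt shape, and simplified attention below are illustrative assumptions, not the released LLaMA-Adapter code. Because the gate starts at zero, the frozen model's outputs are initially unchanged, and the gate learns how much instruction signal to inject.

```python
# Hedged sketch of a zero-initialised, gated prompt-attention branch.
import torch
import torch.nn as nn

class ZeroGatedPromptAttention(nn.Module):
    def __init__(self, dim: int, n_prompts: int = 10):
        super().__init__()
        self.prompts = nn.Parameter(torch.randn(n_prompts, dim) * 0.02)
        self.gate = nn.Parameter(torch.zeros(1))   # gating factor starts at 0

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, dim) hidden states from the frozen backbone
        attn = torch.einsum("bsd,pd->bsp", x, self.prompts) / x.shape[-1] ** 0.5
        prompt_out = torch.softmax(attn, dim=-1) @ self.prompts
        # tanh(0) = 0 at initialisation, so pre-trained behaviour is preserved;
        # during training the gate controls how much prompt signal is added.
        return x + torch.tanh(self.gate) * prompt_out

layer = ZeroGatedPromptAttention(dim=64)
print(layer(torch.randn(2, 16, 64)).shape)  # (2, 16, 64)
```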
As a lightweight plug-and-play module, LLaMA-Adapter enjoys superior training efficiency with only 1.2M parameters, 4.9M storage, and one-hour training. This enables more efficient storage of large-scale language models on mobile devices. LLaMA-Adapter’s efficiency advantages can be further revealed by multi-node train...
Figure 1: Characteristics of LLaMA-Adapter. Our lightweight adaption method efficiently fine-tunes LLaMA (Touvron et al., 2023) 7B model with only 1.2M learnable parameters within one hour, which exhibits superior instruction-following and multi-modal reasoning capacity.
However, this full-model fine-tuning can be inefficient in both time and computation resources, limiting its transferability to downstream applications. In this paper, we propose LLaMA-Adapter to fine-tune only lightweight zero-initialized attention mechanisms on top of the frozen LLaMA, other than updating parameters ...
B
Video-language pre-training typically follows the pipeline: (1) encoding video and text pairs into latent representations, (2) modality fusion, and (3) pre-training on specific objectives. Existing methods typically optimize these three components in the pre-training pipeline by designing expressive encoders (Bain et a...
Video temporal modeling. In contrast to images, videos contain a sequence of dynamic frames and how to model temporal information is critical in video understanding (Feichtenhofer et al., 2019; Bertasius et al., 2021; Tran et al., 2014; Alwassel et al., 2021; Zhang et al., 2022; Qian et al., 2022).
Representations learned from large scale noisy datasets such as HowTo100M (Miech et al., 2019), WebVid (Bain et al., 2021), and VideoCC (Nagrani et al., 2022) have demonstrated great potentials in adapting to downstream tasks, including but not limited to text-video retrieval, video question answering, and video captio...
Video-language pre-training typically follows the pipeline: (1) encoding video and text pairs into latent representations, (2) modality fusion, and (3) pre-training on specific objectives. Existing methods typically optimize these three components in the pre-training pipeline by designing expressive encoders (Bain et a...
It has been shown that most video-language pre-training methods merely perform well on learning holistic representations to match a ⟨video, caption⟩ pair while neglecting fine-grained information such as region-object correspondences, or sce...
D
Through ablations, we demonstrate that the CL procedure is crucial to the success of the method and only maximizing the similarity of positive examples is not sufficient. Furthermore, we demonstrate that ContraSim reveals new insights not captured by previous similarity measures.
Multilingual models, such as Multilingual-BERT (Devlin et al., 2019), learn to represent texts in different languages in the same representation space. Interestingly, these models show cross-lingual zero-shot transferability, where a model is fine-tuned in one language and evaluated in a different language (Pires et a...
For instance, CKA suggests that the BERT language model Devlin et al. (2019) is more similar to vision models than GPT-2 Radford et al. (2019). Our analysis indicates that both BERT and GPT-2 create representations that are equally similar to the vision ones.
Kornblith et al. (2019) and Morcos et al. (2018) found that increasing the model’s layer width results in more similar representations between models. Raghu et al. (2017) provided an interpretation of the learning process by examining how similar representations were during the training process compared to final repres...
In the image–caption benchmark (Figure 5), from the CKA results we might infer that BERT representations are more similar to computer vision representations than GPT2 representations. That is because with CKA, it is easier to detect the matching image–caption pair with BERT than it is with GPT2. However, ContraSim achi...
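For reference, the linear CKA score discussed above (Kornblith et al., 2019) can be computed as follows; the representation matrices here are random stand-ins, not BERT/GPT-2 activations.

```python
# Hedged sketch of linear CKA between two representation matrices.
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    # X: (n_examples, d1), Y: (n_examples, d2); columns are centred first.
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2
    return hsic / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro"))

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 64))
print(linear_cka(X, X @ rng.normal(size=(64, 32))))  # high similarity under a linear map
```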
B
Of all the cases discussed above, the minimum number of points is eight, as shown in Figure 10(c). The number of ADPs (8) is smaller than that of $C_{2}$-based and $C_{4}$-based auxiliary points (12)...
We describe the optimal arrangement of ADPs in detail. Our calibration method requires at least two unique axes to estimate camera rotation without ambiguity. It is possible to add VP-related points, such as ADPs; however, increasing the number of points causes unstable optimization. To address this trade-off, we analy...
To solve this problem concerning the arrangement and number of points, we define additional VP-related points called ADPs based on the spatial symmetry as follows. We found that six 3D VPs form a regular octahedron that has the symmetry of regular octahedron groups (octahedral symmetry), see Figure 3. This symmetry me...
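As a small illustration of the geometry invoked above: if the three world axes are taken as vanishing directions, the six 3D VPs on the unit sphere are the plus/minus axis vectors, i.e., the vertices of a regular octahedron. The concrete values below are only illustrative assumptions.

```python
# Illustrative sketch: six vanishing directions forming a regular octahedron.
import numpy as np

axes = np.eye(3)
vps = np.vstack([axes, -axes])   # six unit vanishing directions (+/- each world axis)
print(vps)
```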
For estimating camera rotation, VPs are strong geometric cues. Images often have one or no VPs, although unique rotation requires at least two unique axes that consist of lines through 3D VPs and the origin of the world coordinates. Because learning-based methods can estimate tilt and roll angles from a fisheye image [...
As described in Section 4.2 (main paper), our method was able to estimate a unique rotation for over 98% of the images in our experiments because of the arrangement of optimal 3D spatial uniformity, as presented in Table 11. By contrast, the use of VPs without ADPs enabled the estimation of a unique rotation for less t...
D
In this situation, the algebraic multiplicity of the zero eigenvalue of $L$ is greater than one and therefore we have the trivial condition that matrix $A$ has to be Hurwitz/Schur. This is a reasonable conclusion: for a disconnected network, the only possible common equilibrium without information exc...
the $k$-th entry of the left eigenvector $p$, associated with the zero eigenvalue of the Laplacian $L$, can be seen as a measure of the centrality of node $k$, which weighs its initial state in the linear combination $\tilde{x}_{o}(0)$...
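A hedged numerical illustration of this quantity: the left eigenvector p associated with the zero eigenvalue of a graph Laplacian, normalised to sum to one. The 3-node directed cycle used here is an assumption, not an example from the paper.

```python
# Left eigenvector of the zero eigenvalue of a small directed-graph Laplacian.
import numpy as np

L = np.array([[ 1., -1.,  0.],
              [ 0.,  1., -1.],
              [-1.,  0.,  1.]])           # Laplacian of a directed 3-cycle (rows sum to zero)
w, V = np.linalg.eig(L.T)                 # left eigenvectors of L = right eigenvectors of L^T
p = np.real(V[:, np.argmin(np.abs(w))])
p = p / p.sum()
print(p)                                  # node centralities weighing the initial states
```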
In this situation, the algebraic multiplicity of the zero eigenvalue of $L$ is greater than one and therefore we have the trivial condition that matrix $A$ has to be Hurwitz/Schur. This is a reasonable conclusion: for a disconnected network, the only possible common equilibrium without information exc...
In our proofs (and in particular in Lemma 2), we do not assume that the zero eigenvalue of $L_{d}$ is simple, but we leverage the fact that, for any Laplacian matrix, zero is a semisimple eigenvalue, without the need to assume connectedness or any oth...
Moreover, referring to item (vi), in this case the vector $p$ is not uniquely determined (up to rescaling) since there exist as many linearly independent eigenvectors as the geometric multiplicity of the zero eigenvalue. In fact, any such selection of $p$ is a valid one for item (vi) because, with disc...
D
Finding 9. We find support for H$_{RQ_{4}}$. Though incorrect conversions are not directly attributable to the use of unusual operators, they may be attributable to operator sequences.
Macro analysis: Real and Synthetic models are converted and then differences are measured. For synthetic models, we used tf2onnx or torch.onnx directly. For real models we used HuggingFace’s converter, which does preprocessing and then calls those converters.
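For context, the "direct converter" path mentioned above looks roughly like the following for PyTorch; the toy model, file name, and difference check are assumptions, not the study's harness (tf2onnx plays the analogous role for TensorFlow models).

```python
# Hedged sketch: exporting a small PyTorch model with torch.onnx and comparing
# its onnxruntime output against the original model's output.
import numpy as np
import torch
import torch.nn as nn
import onnxruntime as ort

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4)).eval()
dummy = torch.randn(1, 8)

torch.onnx.export(model, dummy, "toy.onnx", input_names=["x"], output_names=["y"])

sess = ort.InferenceSession("toy.onnx")
onnx_out = sess.run(None, {"x": dummy.numpy()})[0]
torch_out = model(dummy).detach().numpy()
print(np.max(np.abs(onnx_out - torch_out)))   # behavioural difference between frameworks
```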
We omitted measurements of model size, adversarial robustness, and prediction accuracy, to focus instead on measuring the common failure modes (crashing and behavioral differences) identified in our failure analysis. Our analysis reveals that converters can successfully convert many real models but synthetic models are ...
This indicates that the failing models will often contain common operator patterns, suggesting families of sequences that cause errors. Finally, our comparison of test suite and mismatching models (③ in Table 12) shows that the failing models share few sequences with the models used in converte...
Behavioural Differences: We observed a large fraction of behavioural differences (incorrect output) with synthetic models. Compared to real models, which had 20 instances, synthetic models had 320 instances where the inference results exceeded the threshold. The majority of these instances were observed in the tf2onnx ...
A
It is important to note that this modular organization of components can be used at any layer. This means that developers who use Aerostack2 may use the entire framework, but they may also use any intermediate layer or other individual component (e.g., the state estimator) separately for building a particular applicat...
The Alphanumeric Viewer is a component that monitors the state of specific variables of the system, e.g. sensor measurements, values corresponding to state estimation, references for controllers, etc. The information is distributed in different panes to facilitate the search for a specific variable of the system. On th...
These platforms ease the transition from simulation to real-world deployment because the logic modules remain agnostic to whether the system is operating on a real platform or in simulation. This also simplifies the implementation of heterogeneous aerial systems using different platforms.
To facilitate the implementation of different aerial platforms, Aerostack2 incorporates an AerialPlatform abstract class responsible for managing the capabilities associated with the direct integration of various aerial platforms into the framework. This abstraction facilitates the integration of new platforms into the...
In this approach, each component is managed by a function manager, which is responsible for loading the plugins with each specific algorithm and managing how they interact with the rest of the framework. The plugin selector can also provide meta-control features, such as plugin replacement, whereas the input and outpu...
C
Given their characteristics, humans are still required, especially for prompting, curation, and pre-/post-production. This means that the role of writers and journalists may be transformed, but not replaced. On the contrary, LLMs provide new opportunities for humans, who will be able to spend more time validating news ...
These models tend to scale poorly to long sequences, and they are often unable to capture the entire context. For this reason, current state-of-the-art language models make use of attention (Bahdanau et al., 2015) and transformers (Vaswani et al., 2017). In recent years, several models based on these mechanisms have be...
Indeed, we believe that LLMs can also foster human-AI co-creativity (Lee et al., 2022), since they can be used to write portions of stories in order to serve specific purposes, e.g., they can typify all the dialogues from a character, or they can provide more detailed descriptions of scenes (Calderwood et al., 2020). ...
LLMs can involve re-training through plug-and-play attribute classifiers (Dathathri et al., 2020); re-training to produce paragraphs coherent with a given outline (Rashkin et al., 2020); fine-tuning with specific corpora for writing specific text (Sawicki et al., 2022, Wertz and Kuhn, 2022); or fine-tuning to maximize ...
However, we must bear in mind that current LLMs are not as reliable as humans, e.g., they cannot verify their information and they can propagate biases from training data. In addition, the quality of the output strictly depends on the prompt, which might in turn demand human skills and more time. Writers can be threatened as...
B
Pulmonary nodule detection is an effective means of preventing and treating lung cancer. However, manually diagnosing pulmonary nodules from the hundreds of slices of a CT scan is a labor-intensive task. In recent years, with the boom of CNN-based methods in various detection tasks [23, 24], the pul...
More recently, to better exploit the 3D spatial information of CT images, many studies focus on using the 3D CNNs, such as [27, 28, 29, 30, 1, 31, 32, 33, 2, 34, 35, 36]. The famous NoduleNet [1] is an end-to-end 3D deep CNN framework, which achieves the nodule detection, false positive reduction, and nodule segmentati...
Pulmonary nodule detection is an effective means of preventing and treating lung cancer. However, manually diagnosing pulmonary nodules from the hundreds of slices of a CT scan is a labor-intensive task. In recent years, with the boom of CNN-based methods in various detection tasks [23, 24], the pul...
Nonetheless, these works mainly focus on the shifts between the source and target, and neglect the detection’s characteristics. For instance, the discrimination of the foreground objects and the backgrounds can naturally be an auxiliary supervision for the target data. Besides, the relatively smaller size of the nodul...
In order to further reduce the negative impact of the pseudo nodule noise, we propose to introduce an additional unsupervised constraint for the student model training and design a weighted entropy (WE) loss. Considering the success of the entropy loss in dealing with the unlabeled data in the semi-supervised and unsu...
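A minimal sketch of a weighted entropy term on unlabeled predictions in the spirit described above; the exact weighting scheme (here, per-sample weights supplied by the caller) is an assumption about the WE loss design, not the paper's definition.

```python
# Hedged sketch of a weighted entropy loss over unlabelled/pseudo-labelled predictions.
import torch
import torch.nn.functional as F

def weighted_entropy_loss(logits: torch.Tensor, weights: torch.Tensor) -> torch.Tensor:
    # logits: (N, C) predictions on unlabelled samples, weights: (N,) per-sample weights
    p = F.softmax(logits, dim=-1)
    entropy = -(p * torch.log(p.clamp_min(1e-8))).sum(dim=-1)
    return (weights * entropy).sum() / weights.sum().clamp_min(1e-8)

logits = torch.randn(4, 2)
weights = torch.tensor([0.9, 0.1, 0.5, 0.7])
print(weighted_entropy_loss(logits, weights))
```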
A
We assess the performance of our method and the K-means scenario reduction approach across different experimental configurations by comparing their average total costs and solving times across 100 test instances against the benchmark solutions obtained using CVXPY. The results are detailed in Table V. Particu...
Compare the performance of using our method and the K-means scenario reduction method for the 118-bus system. The resulting average total costs and solving times across 100 test instances are compared against the benchmark solutions obtained from CVXPY.
We assess the performance of our method and the K-means scenario reduction approach across different experimental configurations by comparing their average total costs and solving times across 100 test instances against the benchmark solutions obtained using CVXPY. The results are detailed in Table V. Particu...
Next, we compare our method with the widely used K-means scenario reduction method. Specifically, we reduce a collection of scenarios (load realizations) into two reduced sets: one with 5 scenarios and the other with 2 scenarios. To generate load realizations, we use a Gaussian distribution centered around the in...
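A hedged sketch of the K-means baseline described above: Gaussian load realizations are clustered, each centroid becomes a reduced scenario, and its probability is the fraction of samples assigned to it. The nominal loads and noise level below are assumptions.

```python
# Hedged sketch of K-means scenario reduction over Gaussian load realisations.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
nominal = np.array([50.0, 80.0, 120.0])                   # hypothetical bus loads
scenarios = nominal + rng.normal(scale=0.05 * nominal, size=(1000, 3))

for k in (5, 2):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(scenarios)
    probs = np.bincount(km.labels_, minlength=k) / len(scenarios)
    print(k, km.cluster_centers_.round(1), probs.round(3))  # reduced scenarios and weights
```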
As we can see, our method achieves the smallest increase in total cost while requiring the least amount of time across both datasets with small and large load variations. Notably, in order to achieve a solving time comparable to ours, the scenario reduction method needs to shrink the scenario set by a factor of 10, i.e...
D
The basis for our approach is to note that, intuitively, a suitable prior should reflect a belief that the higher the semantic similarity between pairs of inputs, the more likely these inputs are to have the same label. Therefore, rather than inspecting the prior predictive at single points in input space, we examine ...
Note that this is of course only a reasonable assumption in cases where we believe to have sufficiently good knowledge of semantic similarity in our data domain. That is, we need to have a set of data augmentations for the contrastive tasks, for which we can be reasonably certain that the true labels in our downstream ...
In practice, we can utilise ideas from contrastive learning to learn models with prior predictive distributions that reflect the semantics of different input pairs. The high-level idea is thus to use prior knowledge in the form of data augmentations, for which we believe that the semantic content of the data should be ...
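To make the idea concrete, one simple diagnostic in this spirit is to check how often functions drawn from the prior (here, randomly initialized networks as a crude stand-in for a BNN prior) assign the same label to an input and to an augmented version of it; the toy network, augmentation, and sample sizes are all assumptions.

```python
# Hedged illustration: agreement of prior draws on an input and its augmentation.
import torch
import torch.nn as nn

def prior_agreement(x: torch.Tensor, x_aug: torch.Tensor, n_draws: int = 200) -> float:
    agree = 0
    for _ in range(n_draws):
        f = nn.Sequential(nn.Linear(x.shape[-1], 32), nn.ReLU(), nn.Linear(32, 2))
        with torch.no_grad():
            agree += int(f(x).argmax(-1) == f(x_aug).argmax(-1))
    return agree / n_draws

x = torch.randn(1, 8)
print(prior_agreement(x, x + 0.01 * torch.randn_like(x)))  # semantically similar pair
```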
In Fig. 4, we see that the methods that leverage unlabelled data perform the best. In particular, the self-supervised BNN with BALD acquisition achieves the highest accuracy across most numbers of labels, and substantially outperforms the deep ensemble. This confirms the benefit of incorporating unlabelled data in acti...
Another related line of work is concerned with learning invariances from data in Bayesian models using the marginal likelihood (van der Wilk et al., 2018; Immer et al., 2022). This case is essentially the opposite of our setting, as there, the labels are known but the augmentations are learned, while in our case, the a...
A
The need for reliable inference on shape and topological features in applications has led to substantial interest in integrating classical statistical techniques with topological invariants. Roughly speaking, TDA provides qualitative multiscale shape descriptors for point clouds, notably persistent homology. This is a...
The need for reliable inference on shape and topological features in applications has led to substantial interest in integrating classical statistical techniques with topological invariants. Roughly speaking, TDA provides qualitative multiscale shape descriptors for point clouds, notably persistent homology. This is a...
That is, a stable shape descriptor is a Lipschitz function from the set of compact metric spaces to a Polish space. As we shall see, this condition suffices for our framework. Notable examples of stable shape descriptors include dendrograms [22], many of the invariants of TDA such as persistent homology and zigzag pers...
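Stated in symbols, and with the Gromov–Hausdorff distance assumed as the metric on compact metric spaces (one common choice, not specified in the excerpt), the stability requirement above reads:

```latex
% Hedged formalisation of the Lipschitz stability notion above; the choice of d_GH is an assumption.
\[
  d_Y\bigl(F(X_1),F(X_2)\bigr) \;\le\; C\, d_{GH}(X_1,X_2)
  \qquad \text{for all compact metric spaces } X_1, X_2,
\]
% where F is the shape descriptor, (Y, d_Y) its Polish target space, and C a Lipschitz constant.
```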
In fact, an important issue from the point of view of inference is that sophisticated shape descriptors such as those in TDA take values in Polish spaces that lack a vector space structure. Unfortunately, Polish spaces arising as targets for shape descriptors are hard to work with directly. In particular, the lack of a...
quadratic range-based self-normalized test statistics to detect gradual and abrupt topological changes. This provides the applied researcher with a simple tool for inference on the true underlying shape dynamics over time. Returning to the example of cell differentiation, we use these results to test for changes in sha...
C