These techniques are limited by the object representations used by the target detectors, and tend to be constrained to simple motion models that do not adequately represent the motion of an object.
The techniques used to estimate the egomotion (Section 4.1.2) must be adapted to estimate third-party motions in a geocentric frame.
Those regions are then used to estimate the egomotion trajectory relative to those static points and the rest of the scene is usually ignored as noise.
The linearized cost in (16) is then used to estimate the geocentric trajectory of every third-party motion in the scene.
Fully addressing the MEP requires applying more expressive motion estimation techniques, such as those used to estimate egomotion, to the other third-party motions in a scene.
D
Note that in the model reduction experiments we used approximate estimation of the ELPD and did not test the performance on actual held-out data. In practice it is important to interpret the projected posterior as explained in McLatchie et al. (2024), to assess whether it can actually be used for out-of-sample predicti...
class of GP models to be used in large scale applications of various fields of science as the computational complexity is linear with respect to data size. We have presented a scalable approximation scheme for mixed-domain covariance functions,
We thank Aki Vehtari and Gleb Tikhonov for useful comments on early versions of this manuscript, and acknowledge the computational resources provided by Aalto Science-IT, Finland.
In this work, we present a scalable approximation and model reduction scheme for additive mixed-domain GPs, where the covariance structure depends on both continuous and categorical variables. We extend the Hilbert space reduced-rank approximation (Solin and Särkkä, 2020) for said additive mixed-domain GPs, making it a...
Note that in the model reduction experiments we used approximate estimation of the ELPD and did not test the performance on actual held-out data. In practice it is important to interpret the projected posterior as explained in McLatchie et al. (2024), to assess whether it can actually be used for out-of-sample predicti...
B
The structure preservation is ensured by a proper choice of the weighting matrices of the Riccati equations, which yields, as a by-product, a passive LQG-like controller ensuring that the closed-loop system is regular, impulse-free, and asymptotically stable.
As in [27], the balanced system never needs to be computed explicitly, and the balancing and truncation steps can be combined.
In Figure 1, we show the results for reduced models obtained by Algorithm 1 and the classical LQG-BT method from [27]. For our approach, we distinguish between the canonical port-Hamiltonian representation associated with a finite element discretization of the model equations and an improved representation constructed ...
As shown in Figure 2, we obtain similar results as in the case of the transport network. In particular, changing the Hamiltonian in the system representation drastically reduces the error as well as the error bound of our approach by several orders of magnitude. Moreover, the error corresponding to an optimal choice is...
As in classical LQG balanced truncation, the approximation error of the reduced-order model obtained by the new method can be estimated a priori by an error bound in the gap metric.
D
$0 \leq b(\mathbf{w}, A) \leq 1$
For SAME we now show that $b_{min}=0$, $b_{max}=1$ and both can be reached independent of $A$. Since $A$...
Hence we can show that the extrema depend on the attribute sets $A$ and $B$:
Hence we can show that the extrema depend on the attribute sets $A$ and $B$:
To show that both extreme cases can be reached independent of $A$, we consider the following extreme cases:
D
Now, we provide the comparison between the PMFs of the optimal noise distribution with regard to Gaussian and geometric distributions in Fig. 8 for the same MSE parameter for all the distributions. From the plot, we can observe that the probability mass at $\eta=0$ is maximum for the proposed mechanism, whi...
For the SD neighborhood and $\delta=0$, the optimal noise PMF for the modulo addition mechanism is:
Next, we find an explicit solution for the optimum noise PMF $f^{\star}(\eta),\ \eta\in[n]$ for the SD and BD neighborhood cases. In Section 3.2.4, we discuss the case of discrete vector qu...
For a given $\epsilon>0$, the privacy loss for the SD neighborhood case with the optimal noise mechanism is a discontinuous function of $\epsilon$, where:
In the following figures, we show the structure of the PMF associated with the optimal noise mechanism. First, we consider the SD neighborhood case.
D
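The qualitative shape described above (probability mass peaked at $\eta=0$ and decaying away from it) can be sketched numerically. A minimal illustration assuming a geometric-type decay on $\mathbb{Z}_n$ with wrap-around distance; this is not the paper's optimal PMF, only a comparison-style baseline like the geometric distribution mentioned in the text:

```python
import numpy as np

def wraparound_geometric_pmf(n, eps):
    """Illustrative noise PMF on Z_n: f(eta) proportional to
    exp(-eps * d(eta)), where d is the wrap-around (modular)
    distance to 0. Peaked at eta = 0, symmetric under negation
    mod n. Not the optimal mechanism from the text."""
    eta = np.arange(n)
    d = np.minimum(eta, n - eta)  # modular distance to 0
    w = np.exp(-eps * d)
    return w / w.sum()           # normalize to a valid PMF

f = wraparound_geometric_pmf(11, 1.0)
```

By construction the mass is maximal at $\eta=0$ and symmetric, mirroring the qualitative comparison made against the Gaussian and geometric PMFs in Fig. 8.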
As there is no natural translation of such approaches to the model of population protocols,
In this section we show how to perform sequential composition, using the outputs of one
Note also that after each step of the composition protocol $(Q, Step)$,
In the section after that we define and construct sequential composition of protocols.
If we define the set of input states $I_{s}=\{q_{1}\}$,
C
$\sum_{i=1}^{K} \bar{b}_{i}^{\mathrm{T}} \Delta_{i} \cdots = 0.$
Solving those equations for $u$ and $w$ yields
Since $q(t,x,u)$ is strictly convex in $u$ and $\lambda(t,x,u)$ is affine in $u$, there exists a unique global minimum to (16)...
where the control input $u$ is adapted and independent of $w$. In order to solve this problem, the following cost is introduced:
and where we have used the fact that $D(t)^{\mathrm{T}}F(t)=0$. Then, computing the derivative of the expression in the min-max expression in (108) with respect ...
A
The limit as $n\to\infty$ of the right-hand side is $1-y-x^{\prime}$, which is positive by assumption.
On the other hand, for sufficiently small $\varepsilon$, agent $n$'s MMS is $\frac{1-(n-1)\varepsilon}{n}$, so her NMMS is $(1-(n-1)\varepsilon)^{2}$...
The limit as $n\to\infty$ of the right-hand side is $1-y-x^{\prime}$, which is positive by assumption.
The difference on the left-hand side, when non-negative, corresponds to the weighted envy of $i$ towards $j$.
Then, choose $\varepsilon$ small enough so that the left-hand side is smaller than the right-hand side, and therefore the inequality holds.
D
Mironov [6] was the first to discuss the implications of the fact that one cannot represent—and thus cannot sample from—all real numbers on a finite-precision computer.
Focusing on the Opacus DP library implementation [4] by Facebook, we also show that DP-SGD is vulnerable to information leakage
For one of our two floating-point attacks, in addition to observing a single DP output that the adversary wishes to attack,
is protected by the DP mechanism based on a theoretical normal distribution using real values.
Focusing on the Laplace mechanism, Mironov’s attack proceeds by observing that certain floating-point values cannot be generated by a DP computation
D
Concerns were raised about provision of data to third parties without explicit agreement:
More restricted access control should be applied to sensitive data (e.g., genomics data),
One interviewee suggested that access to sensitive data can be partial and conditional:
One interviewee pointed out that there might be multiple AI algorithms suitable for a task,
The access restrictions attached to sensitive data may prevent projects from proceeding:
B
$\frac{\sum_{j=1}^{N}\nabla_{\bm{q}_{i}}K_{h}(\bm{q}_{k},\bm{q}_{i})}{\sum_{j=1}^{N}K_{h}(\bm{q}_{k},\bm{q}_{j})}$
However, the median trick is not suitable for the FENE potential, as the equilibrium distribution is no longer of Gaussian type and the median of the pairwise distances can become very large. Numerical experiments show that taking kernel bandwidth $h=0.01$ produces a good result for $N=200$...
where $K_{h}(\bm{q},\bm{q}_{j})$ is a smooth kernel function and $h$ is the kernel bandwidth [34]. A typical choice of $K_{h}(\bm{q},\bm{q}_{j})$ ...
A key step in the above deterministic particle scheme is to replace the empirical measure $f_{N}$ by $f_{N}^{h}$ using k...
The optimal kernel bandwidth depends on the potential $\Psi(\bm{q})$ and the macroscopic flow. In the current study, we choose the kernel bandwidth $h$ through multiple numerical experiments (see the numerical sections for details).
D
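As a concrete illustration of the kernel-regularized empirical measure $f_{N}^{h}$, here is a minimal 1D sketch assuming a Gaussian kernel (a typical smooth-kernel choice, per the text); the particle data and bandwidth are illustrative, not the paper's FENE setup:

```python
import numpy as np

def gaussian_kernel(q, qj, h):
    """Gaussian kernel K_h(q, q_j) in 1D (assumed choice)."""
    return np.exp(-(q - qj) ** 2 / (2 * h ** 2)) / np.sqrt(2 * np.pi * h ** 2)

def regularized_density(q, particles, h):
    """f_N^h(q): the empirical measure of the particles smoothed
    by K_h, i.e. (1/N) * sum_j K_h(q, q_j)."""
    return np.mean(gaussian_kernel(q, particles, h))

rng = np.random.default_rng(0)
particles = rng.normal(size=200)   # N = 200, as in the text's experiments
fh = regularized_density(0.0, particles, h=0.1)
```

Shrinking $h$ sharpens $f_{N}^{h}$ toward the empirical measure; enlarging it oversmooths, which is why the text tunes $h$ experimentally.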
$(\bm{t}_{\ell,k}\otimes\nabla\lambda_{\ell}):(\nabla\lambda_{i}\otimes\bm{t}_{i+1,j})=\nabla\lambda_{\ell}\cdots$
If we identify entries of the matrix proxy as nodes of a graph, a constraint sequence will define a path of nodes. See Fig. 7. Indices in different constraint sequences are different. Namely, for $\tau\neq\tau^{\prime}$, $(i+\tau,i)\neq(j+\tau^{\prime},j)$...
When $i=k$, by $\ell\neq i,i+1$, it follows
$\bm{n}_{f\cup\{i\}}^{f}\cdot\bm{n}_{F_{j}}=0$ for $i,j\in f^{*}$, $i\neq j$, ...
When $i=\ell$, by $i\neq j$, it follows
D
Morris (2017) and Brooks and Du (2021). Private private signals also appear as counterexamples of information aggregation in financial markets: see the discussion in Ostrovsky (2012) and similar observations in the computer science literature (Feigenbaum, Fortnow, Pennock, and
generalizes our Theorem 2: it can be used to show that the recommender has a dominant privacy-preserving recommendation in our sense even when only observing a noisy signal about $\omega$. See §6.1 in Strack and Yang (2023) for a detailed comparison of the papers.
Generalizing this example, we view the attribute and the recommendation as two signals about the state. We study private private information structures, in which signals about the state are statistically independent of each other. Requiring independence between these signals imposes a joint restriction on their informa...
If the state and the attribute are independent, then the recommender can simply report the state. However, if the state and the attribute are correlated, reporting the state also inadvertently reveals some information about the attribute. The recommender faces a privacy-constrained information-design problem: how to op...
In a follow-up paper, Strack and Yang (2023) consider our problem of optimal privacy-preserving recommendation and generalize the analysis in a number of directions. Most importantly, they show that the result on the existence of a dominant recommendation—obtained in our paper for the case of a binary state—extends to ...
D
The crank and slider mechanism is shown in Figure 10 (a). This mechanism has 4 links and 4 joints: 3 revolute joints and 1 prismatic joint. The zebra crossing diagram for this mechanism is shown in Figure 10 (b). The steps involved in drawing the zebra crossing diagram are similar to tha...
There are 4 black patches and 4 white patches in the zebra crossing diagram. Applying Equation 1, the number of loops (L) in the mechanism are
There are 4 black patches and 4 white patches in the zebra crossing diagram. The number of loops in the mechanism are calculated using Equation 1. The number of loops are,
There are 6 black patches and 5 white patches in the zebra crossing diagram. The number of loops (L) in the mechanism are calculated using Equation 1.
There are 6 black patches and 5 white patches in the zebra crossing diagram. The number of loops in the mechanism are calculated using Equation 1. The number of loops are,
B
Fig. 4 shows the ACC metric for BERT-base encoder layers on the IMDB [77] dataset and several General Language Understanding Evaluation (GLUE) [78] benchmark tasks. The curve fitted to the ACC results shows that the ACC metric gradually decreases at later layers. It indicates that the fraction of the word-vectors t...
Table I presents an analysis of the latency and TTFT share across the embedding, attention, and feed-forward layers in several Transformer-based models. As shown in the table, regardless of model size, the latency of the embedding layer is minimal, with the majority of latency attributed to the attention and feed-forwa...
$\alpha_{EP}^{l}$ is obtained from the fitted curve to the ACC metric (red lines in Fig. 4) at layer $l$. In some tasks, the ACC values of layers are not smooth. To ...
As shown in Fig. 4, the behavior of the ACC metric strongly correlates with the intricate specifications inherent in each task. The ACC results consistently exhibit a monotonic decrease with the layer number, except in the case of the SST-2 [79] dataset, where unexpected behavior is observed. Notably, the SST-2 dataset...
Fig. 4 shows the ACC metric for BERT-base encoder layers on the IMDB [77] dataset and several General Language Understanding Evaluation (GLUE) [78] benchmark tasks. The curve fitted to the ACC results shows that the ACC metric gradually decreases at later layers. It indicates that the fraction of the word-vectors t...
C
For the above low-rank covariance estimation model with $p\geq d+1$, an $(n,n+1,0.1)$ sample amplification is possible if and only if $n\geq d$.
In [AGSV20], a subset of the authors introduced the sample amplification problem, and studied two classes of distributions: the Gaussian location model and discrete distribution model. For these examples, they characterized the statistical complexity of sample amplification and showed that it is strictly smaller than t...
Theorem 7.2 shows that as opposed to learning, sample amplification fails to exploit the low-rank structure in the covariance estimation problem. As a result, the complexity of sample amplification coincides with that of learning in this example. Note that sample amplification is always no harder than learning: the lea...
where it is the same as the class of all discrete distributions over $d+1$ points, except that the learner has the perfect knowledge of $p_{0}=t$ for some known $t\in[1/(2\sqrt{d}),1/2]$ ...
In all the examples we have seen in the previous sections, there is always a square-root relationship between the statistical complexities of sample amplification and learning. Specifically, when the dimensionality of the problem is $d$, the complexity of learning the distribution (under a small TV distance) ...
B
The vector field $g^{0}$ is often called the drift since, when all the inputs vanish, the state evolution is still non-vanishing in the presence of $g^{0}$.
A second category of more general systems would account for a general nonlinear dependence of the dynamics on the inputs (both known and unknown). In accordance with Equation (2.1), this dependence is affine.
By construction, the unknown input degree of reconstructability from any set of functions cannot exceed $m_{w}$. In addition, it depends in general on $x$ and, for TV systems, also on $t$.
In many cases the system is time-invariant (from now on TI), namely it has no explicit time-dependence and all the functions that appear in (2.1) do not depend explicitly on time. Nevertheless, we also account for an explicit time dependence to be as general as possible. From now on, we use the acronym TV to indica...
The first operation executed by $\mathcal{A}^{-}$ is to express $\theta$ only in terms of the unknown inputs and the original state. In other words, all the $v_{\alpha}$ th...
C
We are now in a position to upper bound $R_{1}^{c}$, $R_{2}^{c}$, $R_{3}^{c}$...
Under the event $\mathcal{E}^{\prime}$, it holds that
Under the event $\mathcal{E}^{\prime}$, the following holds with probability at least $1-\delta/2$:
Under the event $\mathcal{E}^{\prime}$, it holds that
Under the event $\mathcal{E}^{\prime}$, it holds that
A
The difficulty in applying the quadratic gradient is to invert the diagonal matrix $\tilde{B}$ in order to obtain $\bar{B}$. We leave the computation of the matrix $\bar{B}$ to the data owner and let the data owner upload t...
For a fair comparison with the baseline (Kim et al., 2018a), we utilized the same 10-fold cross-validation (CV) technique on the same iDASH dataset consisting of 1579 samples with 18 features, and the same 5-fold CV technique on the other five datasets. Like (Kim et al., 2018a), we consider the average accuracy and the ...
where $n$ is the number of examples in the training dataset. LR does not have a closed-form solution for maximizing $l(\bm{\beta})$, and two main methods are adopted to estimate the parameters of an LR model: (a) the gradient descent method via the gradient; and (b) Newton's method by the He...
Kim et al. (2018b) discussed the problem of performing LR training in an encrypted environment. They employed full-batch gradient descent during the training process and utilized the least-squares method to approximate the sigmoid function.
Privacy-preserving logistic regression training based on HE techniques faces a difficult dilemma: no homomorphic scheme is capable of directly calculating the sigmoid function in the LR model. A common solution is to replace the sigmoid function with a polynomial approximation by using the widely adopted least-sq...
D
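The least-squares polynomial replacement of the sigmoid mentioned above can be sketched in a few lines. The degree (3) and fitting interval ($[-8, 8]$) are illustrative assumptions, not necessarily the choices of the cited works:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Least-squares fit of a degree-3 polynomial to the sigmoid on [-8, 8].
# Such a polynomial can be evaluated homomorphically (only additions
# and multiplications), unlike the sigmoid itself.
xs = np.linspace(-8, 8, 1000)
coeffs = np.polyfit(xs, sigmoid(xs), deg=3)
poly = np.poly1d(coeffs)

max_err = float(np.max(np.abs(poly(xs) - sigmoid(xs))))
```

In the encrypted training loop, `poly` would be evaluated in place of the sigmoid; the approximation error (`max_err`) is what governs how far the interval and degree can be pushed.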
$\max_{\bm{P}\in\mathcal{P}_{k}}\;\min_{\bm{\theta}\in\Theta}\;\sum_{n=1}^{N}\bigl(\cdots\,\bm{P}\bm{x}_{n}\bigr).$
The matrix $\bm{K}$ ...
We say that the matrix $\bm{P}$ ...
The first player chooses an orthogonal projection matrix $\bm{P}$ ...
In response, the second player chooses the parameters of a linear model $\bm{\theta}$, with knowledge of $\bm{P}$ ...
C
Note that DVC and Ballé et al. [100] cannot match the performance of the H.265 codec, whereas our method further improves upon and surpasses the performance of the recent VVC codec.
As shown in Fig. 11 (a), our framework with the semantic stream outperforms the traditional codec+post-restoration method BasicVSR++ by a large margin. For example, our method outperforms the BasicVSR++ model by 8% at the 0.06bpp bitrate level.
As for the decoder side, the LFN in our method is about 16× more efficient than the state-of-the-art video restoration method BasicVSR++.
(1) BasicVSR++, where a recent state-of-the-art (SOTA) video restoration method, BasicVSR++ [150], is adopted to enhance the lossy video from the codec, which is then fed into the downstream task.
As for the encoder side, the computational cost of our method is about 30× lower than that of DVC, since we do not use an optical flow estimation network.
B
The most widely used AVA dataset [17] only requires the association of audio and visual data, whereas the more recent ASW dataset [2] requires the synchronization of the two modalities, which excludes instances like dubbed movies.
However, adopting an entirely data-driven strategy would make lip sync data-hungry and hard to optimize, as a significant amount of data covering a wide range of combinations would be needed to address compound distracting factors.
As described in Sec. IV-A, it is expected that the effects of compound distracting factors will be reduced in the synthesized images while the expression information from the raw input will be preserved.
Further, the DSP is built using the parametric image formation model described in Sec. III and thus has two components: a network for estimating the required coefficients from the input, and a renderer that uses the coefficients to synthesize images.
As it would be impossible to handle dubbings using the lip motion cue alone, advanced relational modeling [18] is required.
D
The modern passenger vehicle has undergone major advancements in recent years due to the demand for high-tech functionality, which is accompanied by a growing number of interconnected electronic components and a respective stream of sensor data. This data availability coupled with industry competition creates the need for...
Traditional machine learning classification approaches have been employed for pattern recognition and fault detection, specifically in vehicle systems. Prytz et al. [15] investigate the automated data analysis of connected vehicle sensors for fault detection and emphasize that interrelations of multiple connected signa...
The automatic detection of faults using the powertrain sensor data in this study has practical applications for automotive manufacturers, such as the development of new on-board diagnostics and predictive maintenance capabilities, and improved durability testing prior to deployment. This me...
The drive cycles data set is a multi-variate time-series record of 57 electronic sensor signals of powertrain components connected via the electronic control modules in hybrid-electric vehicles. Data are recorded by test engineers who capture a wide variety of driving conditions in the drive cycles. The electronic sens...
The focus of this work is to explore methods of automatic anomaly detection to distinguish rare and abnormal temporal patterns in embedded vehicle sensor data which in turn can be used for fault detection. The data are from several powertrain components interconnected in the vehicle’s electronic control modules and col...
D
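As a baseline illustration of flagging abnormal temporal patterns in a single sensor signal (explicitly not the method developed in this work), a rolling z-score detector can be sketched as follows; the window size and threshold are arbitrary illustrative parameters:

```python
import numpy as np

def zscore_anomalies(signal, window=50, threshold=4.0):
    """Flag samples whose deviation from the trailing-window mean
    exceeds `threshold` standard deviations. A naive baseline for
    anomaly detection in a univariate sensor trace."""
    flags = np.zeros(len(signal), dtype=bool)
    for i in range(window, len(signal)):
        chunk = signal[i - window:i]
        mu, sigma = chunk.mean(), chunk.std()
        if sigma > 0 and abs(signal[i] - mu) / sigma > threshold:
            flags[i] = True
    return flags

rng = np.random.default_rng(1)
sig = rng.normal(size=500)
sig[300] = 10.0  # injected fault-like spike
anoms = zscore_anomalies(sig)
```

A real multivariate drive-cycle setting would need to model the interrelations between signals that the cited work emphasizes; this per-signal sketch only conveys the flagging idea.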
The notion of the dual quaternion, and its use to represent poses and rigid motions, seems to go back to McAulay [25], inspired by the earlier work by Clifford [8]. The notion of using dual quaternions to represent twists may be found in [1, 31]. A basic introduction to dual quaternions may be found in [22, 29], the la...
This paper has given a comprehensive and consistent description of how to use dual quaternions to represent poses, rigid motions, twists, and wrenches. We have introduced the notion of the Lie derivative for dual quaternions. We have shown how these formulas are helpful for first producing Newton-Raphson methods for sol...
Finally, in equation (93), we give an approximation of the normalization of a vector dual quaternion perturbation of the identity, which shows that it is equal up to the second order to the exponential of the vector dual quaternion. This equation was essential for calculating the Hessian in the forwards kinematics algo...
(The reader should be aware that [1, 16] have incorrect formulas for the logarithm and exponential of dual quaternions — the correct formulas may be found in [28], and [37] for the exponential.)
The notion of the dual quaternion, and its use to represent poses and rigid motions, seems to go back to McAulay [25], inspired by the earlier work by Clifford [8]. The notion of using dual quaternions to represent twists may be found in [1, 31]. A basic introduction to dual quaternions may be found in [22, 29], the la...
C
Consequently, Corollary 4.16 can be applied, meaning that the per-bit difference between an expected interaction complexity term and the corresponding interaction information goes to zero.
Our main references are Chaitin (1987); Li and Vitányi (1997); Grünwald and Vitányi (2008).
We also combine Hu’s theorems for Shannon entropy and Kolmogorov complexity to generalize the well-known result that “expected Kolmogorov complexity is close to entropy” (Grünwald and Vitányi, 2008):
This generalizes the observation after Grünwald and Vitányi (2008), Theorem 10, to n>1𝑛1n>1italic_n > 1 and more complicated interaction terms.
For $n=1$ and $I=\{1\}=1$ (for simplicity, we write sets as a sequence of their elements), we obtain:
C
In fact, the decompositions of (0,0,12) and (2,0,6) both already required decomposing singular nodes with valence 6: (0,4,4,1), (0,2,8,1) and (1,3,3,1). We will refer to previously known decomposable singular nodes and their associated sphere triangulations as base cases.
Applying the splitting in Prop. 1 could result directly in base cases, where the rest of the decomposition is already known. If the splitting does not result in base cases, then it produces triangulations with fewer vertices. This can be repeated until there are not enough vertices to have a degree 6 vertex. Since shee...
Since splitting a sphere triangulation replaces all vertices on the interior of either side with just one new vertex each, both resulting triangulations will have fewer vertices than $\mathcal{T}$.
To construct a splitting of $\mathcal{T}$ into triangulations of fewer vertices, we need a pair of vertices $a$ and $b$ adjacent to $u$ that are at least 3 edges apart from each other in $\mathcal{C}$ such that there is a path $p$ from $a$ to $b$...
Given a sphere triangulation $\mathcal{T}$ with some vertex $u$ of degree larger than 5, there exists a splitting such that either the number of vertices in both resulting triangulations decreases or the resulting triangulations are base cases.
D
Let $\mathcal{E}_{2}=\{q_{\theta_{2}}\}$ be an exponential family with support $\mathcal{X}_{2}$...
Then the Kullback-Leibler divergence between a truncated density of $\mathcal{E}_{1}$ and a
4 Kullback-Leibler divergence between a truncated density and a density of an exponential family
In §4, we show that the Kullback-Leibler divergence between a truncated density and a density of a same parametric exponential family amounts to a duo Fenchel-Young divergence or equivalently to a Bregman divergence on swapped parameters (Theorem 1). As an example, we report a formula for the Kullback-Leibler divergenc...
The $\alpha$-skewed Bhattacharyya divergence for $\alpha\in(0,1)$ between a truncated density of $\mathcal{E}_{1}$ with log-normalizer $F_{1}(\theta)$ ...
A
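For orientation, recall the classical non-truncated identity: the KL divergence between two densities of a same exponential family with log-normalizer $F$ is a Bregman divergence on swapped natural parameters, which is the pattern the truncated result above extends:

```latex
% Classical (non-truncated) case: KL within an exponential family with
% log-normalizer F is a Bregman divergence with swapped parameters.
\mathrm{KL}(p_{\theta_1} : p_{\theta_2})
  = B_F(\theta_2 : \theta_1)
  = F(\theta_2) - F(\theta_1)
    - \langle \theta_2 - \theta_1, \nabla F(\theta_1) \rangle .
```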
However, our experiment can also help reveal whether different incentivisation schemes could improve practitioners’ motivation.
Due to the small sample size, significance tests for differences in the samples are not meaningful.
Still, our results do not give a clear picture of whether a specific component dominates all others.
We find indications that different forms of financial incentives impact participants’ performance in software-engineering experiments. Due to the small sample sizes, our results are not statistically significant, but we still observe clear tendencies.
We investigated to what extent financial incentives impact the performance of (student) participants in software-engineering experiments.
C
Although our TransKDs seem to be complex, they are not that heavy compared to SKDs [4] and achieve plug-and-play knowledge distillation.
The original student model with SegFormer-B0 achieves a relatively low performance of 18.19% in mIoU on NYUv2.
Specifically, our TransKD-Base method achieves +5.18%, +2.18%, and +2.71% improvements over the feature-map-only KR method, while using the SegFormer-B0, the PVTv2-B0 [25], and the Lite Vision Transformer (LVT) [34] model as the student, resp...
Benchmarked against the feature-map-only method Knowledge Review [2], TransKD-Base enhances the distillation performance by 5.18%percent5.185.18\%5.18 % in mean Intersection over Union (mIoU) while adding negligible 0.21⁢M0.21𝑀0.21M0.21 italic_M parameter during the training phase, as shown in Fig. 2.
Still, the lightweight variant of our framework TransKD-Base is alone sufficient to conduct the distillation, yielding a surprising +5.18%percent5.18+5.18\%+ 5.18 % gain compared to KR [2] while just adding 0.21⁢M0.21𝑀0.21M0.21 italic_M parameters for patch embedding distillation.
$\varphi:[0,\infty[\;\longrightarrow\;[0,\infty[,\qquad\lambda\longmapsto\lVert P^{i}...$
It can be solved numerically quite easily with a finite difference method (see for example [31]). Figure 5 provides illustrations of the neural network kernel foliation computed with such a method for the Xor function (5(a)) and for the Or function (5(b)).
However, finding the vanishing points of such a function is not an easy task. Several methods may be used; a numerical method such as Newton's method [27] could be applied.
Many authors consider neural network attacks and robustness properties in a Euclidean input space. Yet, it is commonly admitted that to learn from high-dimensional data, the data must lie in a low-dimensional manifold ([12]). Such a manifold has in general non-zero curvature, and Riemannian geometry should therefore be a more...
This first method is local and does not take into account the curvature of the data. Hence, we propose a new method to improve performance, especially in regions of $\mathcal{X}$ where this curvature is high.
In this paper, we consider High-Multiplicity Scheduling Problems On Uniform Machines, where “high-multiplicity” refers to the following compact encoding:
machine multiplicity vector $\mu\in\mathbb{N}_{0}^{\tau}$. A job of size $p_{j}$ takes time
We are given $d\in\mathbb{N}$ job sizes in the form of a vector
$\tau\in\mathbb{N}$ machine speeds in the form of a vector $s\in\mathbb{N}^{\tau}$ and a corresponding
$p\in\mathbb{N}^{d}$ and a corresponding job multiplicity vector $\nu\in\mathbb{N}_{0}^{d}$...
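A minimal sketch of this compact encoding (all instance numbers below are invented): the instance is described by the four vectors $p,\nu,s,\mu$ alone, so its encoding length grows with $d$ and $\tau$ rather than with the actual number of jobs or machines.

```python
from typing import List

def total_load(p: List[int], nu: List[int]) -> int:
    """Total processing volume of a high-multiplicity instance:
    nu[j] jobs of size p[j] each."""
    return sum(pj * nj for pj, nj in zip(p, nu))

def machine_count(mu: List[int]) -> int:
    """Number of machines encoded by the multiplicity vector mu:
    mu[i] machines of speed s[i] each."""
    return sum(mu)

# A hypothetical instance: d = 2 job sizes, tau = 2 machine speeds.
p, nu = [3, 5], [4, 2]      # four jobs of size 3, two jobs of size 5
s, mu = [1, 2], [1, 3]      # one speed-1 machine, three speed-2 machines

print(total_load(p, nu))    # 3*4 + 5*2 = 22
print(machine_count(mu))    # 4
# A job of size p_j on a machine of speed s_i takes time p_j / s_i:
print(p[1] / s[1])          # 2.5
```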
Denote by $f:S\to\mathbb{E}^{3}$ the contracting $C^{2}$ map in Theorem 2. Let $U$ be a union of small polygonal disks centered at each singul...
We can thus apply the basic construction of Lemma 3 and its tilted version as in Note 4 to perform Step (e). This eventually leads to a PL isometric embedding of $S\setminus U$. It remains to embed appropriately the neighborhood of the singular vertices as required by Step (f) to complete the PL ...
Compute an approximation $f_{1}$ of $f$ that is almost conformal on $S\setminus U$ and contracting over $S$. Here, almost conformal means that $f$ almost preserves angles, or more formally that its coefficient...
Refine the acute triangulation of $S\setminus U$ uniformly to obtain an acute triangulation $\mathcal{T}$ with small triangles. The meaning of small depends on the geometric properties of $f_{1}$ and on the flexibility in Note...
Compute an acute triangulation of $S\setminus U$, i.e., a triangulation in which each triangle is acute.
Table 5: Comparative results with advanced benchmark set with 30 different runs.
We now come to the choice of the novelty threshold, which depends entirely on the application. As Table 1 shows, a very low novelty threshold, as in Case 4 (1-49%), allows two PSO teams to search close to each other. If the application demands such closeness, a low novelty threshold can be set...
Most PSO variants discuss the convergence analysis of their approach, but for our method a convergence analysis would be redundant, since we did not change much of the analytical structure of PSO. Instead, it is important to analyze how the divergence of novelty search benefits our algorithm and how both the convergence of ...
As Table 3 shows, for the Group A unimodal functions our NsPSO fails to provide the best results, but it comes second and third, respectively, in comparison with the best multimodal PSO algorithms such as CLPSO and ECLPSO. NsPSO outperforms other existing Novelty Search plus PSO hybrid algorithms...
A careful look at the table shows that our proposed method also works well on these difficult benchmark functions. Although it does not come first on a few occasions, compared with the basic PSO method, NsPSO's performance improves considerably. Logically, NsPSO's performance sho...
The 1st, 2nd and 3rd rows correspond to Cora, CiteSeer, and PubMed datasets, respectively.
Within the evasion attack context, where the focus is on learned representations, we demonstrate the following property: given that the GSO error is bounded as in Theorem 1 and Proposition 1, the linear bound on each layer of the GCNN (illustrated in Subsection VI-C1) ensures the network's stability against perturbation as...
For instance, under evasion attacks, [27] demonstrates the reduction in GCNN accuracy under small perturbations while maintaining the degree distributions after the attack, and [30] demonstrates a significant drop in GCN accuracy when 5% of the edges are altered.
After affirming the linear sensitivity in Theorem 3, we also examine the stability of the GCNN under significant graph perturbations by observing the accuracy changes of the same GCNN candidates as in Section VI-C1.
Consistent with the experimental settings in Section VI-C1, the same GCNN candidates are utilized.
In Figure 5, we visualize the control and disturbance policies extracted from the Q function of the neural network corresponding to $\lambda=0.0$ in Figure 4. The two policies are considered reasonable because, at most states, the control policy either drives the agent towards the target se...
In this experiment, we compare the reach-avoid set learned by Algorithm 1 with the one learned by tabular Q-learning, where we first grid the continuous state space and then run value iteration (4) over the grid. We treat the reach-avoid set learned by tabular Q-learning as the ground truth solution. We apply Algorithm...
We first consider learning the viability kernel where the constraint set is the same as the one in Subsection VI-B. The reward function is set to be $r(x)=-1$ for all $x\in\mathbb{R}^{n}$. ...
In this subsection, we apply Algorithm 1 to learn the viability kernel and backward reachable set for the 6-dimensional dynamical system in Subsection VI-B. The results empirically confirm Propositions 1 and 2. In the following two experiments, the same neural network architecture as in Section VI-B does not yield satisfac...
Due to the curse of dimensionality, the tabular Q-learning explained in the previous experiment suffers numerical difficulties in this 6-dimensional experiment. In this subsection, we apply Algorithm 1 with the same neural network architecture as in Section VI-A. We plot the learned reach-avoid set by projecting it onto a ...
$\mathsf{S}(\bm{w})\bm{r}=\bm{w}\times\bm{r},$
A twist is the pair of vectors $(\bm{w},\bm{v})$ that describe the change of pose in the moving reference frame, that is:
If $\bm{r}_{0}$ is the center of mass of the end effector in the moving frame, then the twist about the center of mass is given by
The reason for introducing the factor $2$ in definition (27) is so that the rate of change of work done to the end effector is given by
Let the pose $\eta$ represent the reference frame that moves with the end effector. It is not necessary (although it can simplify things) that the center of mass of the end effector coincide with the origin of the moving frame.
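The operator $\mathsf{S}(\bm{w})$ above is the usual cross-product (skew-symmetric) matrix; a minimal numeric check of the defining identity:

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix S(w) satisfying S(w) @ r == np.cross(w, r)."""
    wx, wy, wz = w
    return np.array([[0.0, -wz,  wy],
                     [ wz, 0.0, -wx],
                     [-wy,  wx, 0.0]])

w = np.array([1.0, 2.0, 3.0])
r = np.array([0.5, -1.0, 2.0])
print(np.allclose(skew(w) @ r, np.cross(w, r)))  # True
```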
Due to the impressive success of deep neural networks in feature extraction and classification of text, images, and many other modalities, they have been widely exploited by research scientists over the past few years for a variety of multi-modal tasks, including misinformation detection. We may categorize deep learnin...
The majority of the existing work on multi-modal misinformation detection embeds each modality, e.g., text or image, into a vector representation and then concatenates them to generate a multi-modal representation that can be utilized for classification tasks. For instance, Singhal et al. propose using pretrained XLNet...
In another work (Segura-Bedmar and Alonso-Bartolome, 2022), the authors exploit a Convolutional Neural Network (CNN) that takes as input both the text and the image corresponding to an article, and the outputs are concatenated into a single vector. Qi et al. extract text, Optical Character Recognition (OCR) content, news...
Xue et al. (Xue et al., 2021) propose a Multi-modal Consistency Neural Network (MCNN) which utilizes a similarity measurement module that measures the similarity of multi-modal data to detect the possible mismatches between the image and text. Lastly, Biamby et al. (Biamby et al., 2022) leverage the CLIP model (Radford...
Over the past decade, several detection models (Shu et al., 2017, 2020c; Islam et al., 2020; Cai et al., 2020) have been developed to detect misinformation. However, the majority of them leverage only a single modality for misinformation detection, e.g., text (Horne and Adali, 2017; Wu et al., 2017; Guacho et al., 2018...
To enhance model/bias variety, we apply the Hard Debiasing Algorithm [bolukbasi] to the pretrained embeddings and train a classification head for the unmodified and debiased embeddings with $k\in\{1,3\}$ (removing the first $k$ principal components during debiasing [bolukbasi]), resulting in 66...
In our experiments, we use the dataset for binary classification (toxic / not toxic) and, to limit computational costs, select subsets of the dataset where only identities of one specific bias type (race-color / religion / gender) are mentioned. We further limit our subset to samples, where exactly one identity is ment...
For the Jigsaw dataset, we handle each protected attribute separately, i.e. creating a subset for each protected attribute and training models on these to limit the effects of intersectional bias.
We use three downstream datasets developed to investigate biases in LMs: In terms of classification we consider the Jigsaw Unintended Bias (Jigsaw) dataset [jigsaw] and the BIOS dataset [biosbias]. For MLM we use the CrowS-Pairs dataset [crowspairs].
Our experiments cover four protected attributes: race-color, religion, gender and age, which were included in sufficient numbers in at least one of the datasets. Table 1 shows the protected groups and the number of defining sets. These defining sets were used as attributes and removed from the target samples before com...
A vertex $v$ of an interval graph $G$ is an extreme vertex of a toll convex set $S\subseteq V(G)$ if and only if $v$ is an end simplicial vertex of $G[S]$.
In order to characterize the graphs with toll convexities that are convex geometries, we need to resort to a well-known characterization of interval graphs. Three vertices of a graph form an asteroidal triple if between any pair of them there exists a path that avoids the neighborhood of the third vertex.
A graph G𝐺Gitalic_G is an interval graph if and only if G𝐺Gitalic_G is chordal and contains no asteroidal triple.
The above concepts can be transferred to the combinatorial field in a natural way. We refer the reader to [23]. Let $G$ be a graph and let $\mathscr{C}$ be a convexity of $G$. Given a set $S\subseteq V(G)$, the smallest set $H\in\mathscr{C}$...
Using arguments similar to those used in the previous section, one can prove that if the weakly toll convexity of $G$ is a convex geometry, then $G$ is chordal and cannot contain asteroidal triples or induced subgraphs isomorphic to $K_{1,3}$...
FlexFringe [VH17], which originated from the DFASAT [HV10] algorithm, is a framework for learning different kinds of automata using the red-blue state merging framework [LPP98]. Learning automata from traces can be seen as a grammatical inference [DlH10] problem where traces are modeled as the words of a language, and ...
State-merging starts with a large tree-shaped model called the prefix tree, which directly encodes the input traces. It then iteratively combines states by testing the similarity of their future behaviors using a Markov property [NN98] or a Myhill-Nerode congruence [HMU01]. This process continues until no similar state...
One of the most successful (P)DFA learning algorithms and an efficient method for performing such tests is evidence-driven state-merging (EDSM) in the red-blue framework [LPP98]. FlexFringe implements this framework, using union/find structures to keep track of performed merges and to efficiently undo them, see Figure ...
Figure 2. An automaton model printed after running FlexFringe (top). It contains the same type of counts as the prefix tree. To obtain a PDFA from these counts, one needs to normalize them to obtain transition and final probabilities (bottom). Traces only end in the third state, making it the only possible ending state...
Given a finite data set of example sequences $D$, called the input sample, the goal of PDFA learning (or identification) is to find a (non-unique) small PDFA $\mathcal{A}$ that is consistent with $D$. We call such sequences positive or unlabeled. In contrast, DFAs are commonly learned f...
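The normalization described for FlexFringe's printed models (turning the raw counts at a state into transition and final probabilities) can be sketched as follows; the state and its counts are invented for illustration:

```python
def normalize_state(transition_counts, final_count):
    """Turn raw counts at one automaton state into transition and final
    probabilities: every count is divided by the total number of traces
    passing through the state."""
    total = final_count + sum(transition_counts.values())
    probs = {sym: c / total for sym, c in transition_counts.items()}
    return probs, final_count / total

# Hypothetical state: 8 traces continue with 'a', 2 with 'b', none end here.
probs, final_p = normalize_state({'a': 8, 'b': 2}, final_count=0)
print(probs, final_p)  # {'a': 0.8, 'b': 0.2} 0.0
```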
$\nabla_{\theta}\Big(\widetilde{\psi}_{n}\circ\mathfrak{S}_{n}(z_{n-1})\ldots\big[\ldots\big]\big(M_{\theta}(\Delta x_{n})\big)\cdot z_{n-1}\Big).$
the duality between gradient and differential, we are able to determine the gradient of $\widetilde{\psi}_{n}$, which is the main ingredient of the Riemannian gradient descent algorithm of the development layer (i.e., Algori...
With the gradient computation at hand (in particular, Theorem 3.1 and Proposition 3.1), we are now ready to describe the backpropagation of development layer in Algorithm 2.
One crucial remark is in order here. In view of Theorem 3.1 and Proposition 3.1, the development layer proposed in this paper possesses a recurrence structure analogous to that of RNNs. This is the key structural feature of Algorithm 2. However, it is well known that RNNs are, in general, prone to problems of v...
To optimise the model parameters of the development layer, we exploit the recurrence structure in Eq. (1) and the Lie group-valued output to design an efficient gradient-based optimisation method. We combine backpropagation through time of RNNs and “trivialisation”, an optimisation method on manifolds (Lezcano-Casado (...
Since general $C^{\alpha}$ domains can be covered by $C^{\alpha}$ domains of special type, it suffices to consider domains of special type.
the differentiable mapping $\Phi:E\to\mathbb{R}^{2}$ by
Let $g:[-2d,2d]^{d-1}\to\mathbb{R}$ be a continuously differentiable function on $[-2d,2d]^{d-1}$...
Let $g:[-2,2]\to\mathbb{R}$ be a continuously differentiable function on $[-2,2]$ satisfying
The next step of our construction is to proceed from the $C^{2}$ to $C^{\alpha}$ graph domains in $\mathbb{R}^{2}$...
It is not hard to see that max and min are not strongly admissible and that noisy-or is not even admissible.
The above given definition of (strong) admissibility is fairly straightforward and natural, as well as useful for proving that some functions
as stated by the following lemma, the straightforward proof of which is left to the reader.
Section 4 defines the general notion of a logic that we will use, as well as the particular logics that will be considered later.
the only difference between strong admissibility (sensu novo) and admissibility (sensu novo) is that in the
In Theorem 5.1 we prove an analogous result where the bound is on the total number of splittings.
Note that the problem definitions do not determine in advance which items will be split, but only bound their number, or bound the number of splittings. The solver may decide which items to split after receiving the input.
In all the works we surveyed, there is no global bound on the number of splitting jobs. As far as we know, bounding the number of splittings or split jobs was not studied before.
The number of splittings is at least the number of split items but might be larger. For example, a single item split into 10 different bins counts as 9 splittings.
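The distinction between split items and splittings can be made concrete with a small helper (hypothetical, not from the paper):

```python
def split_items_and_splittings(placements):
    """placements maps each item to the number of bins (or machines) that
    received a fragment of it.  An item placed in k > 1 bins is a split
    item and contributes k - 1 splittings."""
    split_items = sum(1 for k in placements.values() if k > 1)
    splittings = sum(k - 1 for k in placements.values() if k > 1)
    return split_items, splittings

# One item split into 10 bins: 1 split item but 9 splittings.
print(split_items_and_splittings({'x': 10, 'y': 1}))  # (1, 9)
```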
As a consequence, all interconnected devices and users stand at risk. Even though research on AI to protect against cyber threats has been ongoing for many years [12, 13], it is still unclear how to ensure the security of networks with AI integrated into their core operations. A significant drawback in AI security has ...
Explainable Artificial Intelligence (XAI) represents an advancement over the opaque AI systems in networking. Starting with the 5G era, artificial intelligence (AI) is anticipated to assume various roles across all levels of mobile networks. Furthermore, explainable AI (XAI) would be the subsequent phase in attaining a...
The standardization of application development using XAI for RIC or core and backhaul networks is necessary. With standardizations, organizations would implement strict access control mechanisms, advanced encryption and data masking techniques, and regular security audits to keep the unethical usage of XAI outputs in c...
More meticulous standards are required on the elements of XAI security and its provision of transparent AI/ML techniques for B5G security. The European Partnership on Smart Networks and Services (SNS) established Europe's strategic research and innovation roadmap. The initiative is based on an EU...
The Defense Advanced Research Projects Agency (DARPA) started the Explainable Artificial Intelligence (XAI) initiative in May 2017 to develop a set of new AI methodologies that would allow end-users to comprehend, adequately trust, and successfully manage the next generation of AI systems [14]. To further elaborate, it...
Initially proposed for synchronous networks, an MA may suppress point-to-point network messages according to rules that define its power.
For instance, a tree MA in a synchronous network might suppress any message except those transiting on an (unknown) spanning tree of the network, with this spanning tree possibly changing in each round.
This work takes a drastic turn away from this usual assumption and explores how BRB might be provided when processes execute on an unreliable network that might lose point-to-point messages.
This is because signatures allow for MA-tolerant BRB algorithms that are more efficient in terms of round and message complexity than those that can be constructed using $k2\ell$-cast [4].
$\theta=\operatorname*{arg\,min}_{\theta}\;\mathcal{L}_{\mathrm{ce}}\left(\mathcal{D}_{\mathrm{source}};\theta\right).$
A PLM-based fine-tuning method, IntentBERT Zhang et al. (a), utilizes a small amount of labeled utterances from public intent datasets to fine-tune PLMs with a standard classification task, which is referred to as supervised pre-training. Despite its simplicity, supervised pre-training has been shown to be extremely u...
Specifically, the pre-training is conducted by attaching a linear layer (as the classifier) on top of the utterance representation generated by the PLM:
Can we improve supervised pre-training via isotropization for few-shot intent detection?
After supervised pre-training, the linear layer is removed, and the PLM can be immediately used as a feature extractor for few-shot intent classification on target data. As shown in Zhang et al. (a), a parametric classifier such as logistic regression can be trained with only a few labeled samples to achieve good perfo...
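This pipeline can be illustrated with a toy sketch: synthetic features, a linear "encoder" standing in for the PLM, and a nearest-centroid classifier standing in for the logistic regression; all names and numbers below are invented for illustration. The encoder and head are trained jointly on a source task, then the head is dropped and the encoder alone embeds few-shot target samples.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_rep, n_classes = 10, 5, 3

# Toy "PLM" (a linear encoder) and the linear head used only for pre-training.
W_enc = 0.1 * rng.normal(size=(d_in, d_rep))
W_head = 0.1 * rng.normal(size=(d_rep, n_classes))

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Synthetic "source" utterances for supervised pre-training.
X = rng.normal(size=(200, d_in))
y = rng.integers(0, n_classes, size=200)
Y = np.eye(n_classes)[y]

def cross_entropy():
    P = softmax((X @ W_enc) @ W_head)
    return -np.log(P[np.arange(len(y)), y]).mean()

ce_before = cross_entropy()
for _ in range(300):  # joint gradient descent on encoder and head
    H = X @ W_enc
    G = (softmax(H @ W_head) - Y) / len(X)
    W_enc -= 0.5 * (X.T @ (G @ W_head.T))
    W_head -= 0.5 * (H.T @ G)
ce_after = cross_entropy()

# Head removed: the encoder alone embeds few-shot target utterances,
# classified here by nearest class centroid.
X_sup = rng.normal(size=(6, d_in))          # 2 shots x 3 target classes
y_sup = np.array([0, 0, 1, 1, 2, 2])
H_sup = X_sup @ W_enc
centroids = np.stack([H_sup[y_sup == c].mean(axis=0) for c in range(n_classes)])

def predict(x):
    return int(np.argmin(((centroids - x @ W_enc) ** 2).sum(axis=1)))

print(ce_after < ce_before)  # pre-training reduced the source loss
```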
In this paper, we highlight why polynomial regression, while offering more interpretability than neural networks and being able to approximate the same function classes, is rarely used in practice. By deriving new finite sample and asymptotic $L^{2}$ rat...
Limitations and Future Work. This paper provides a formal reason why polynomial regression is ill-suited for prediction tasks in high-dimensional settings. However, a limitation of the paper is that, while the main theorem (Theorem 1) applies to a large class of series regression models, the result for polynomial regres...
Drawing from the theoretical insights, we propose the use of BPR as an alternative to neural networks that is computationally attractive and readily implementable in most machine learning software packages. By only building the polynomial embeddings for subsets of the feature space and averaging across multiple models,...
Our main result, Theorem 1, extends the $L^{2}$ convergence result in [belloni2015] by deriving new finite sample rates as well as asymptotic rates using the results from [rudelson2007sampling]. This result is valid for a large class of series regression...
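The bagged approach (BPR) can be sketched as follows; this is a toy reconstruction under our own assumptions (random feature subsets, degree-2 embeddings, plain least squares, averaged predictions), not the authors' implementation:

```python
import numpy as np
from itertools import combinations_with_replacement

rng = np.random.default_rng(1)

def poly_features(X):
    """Degree-<=2 polynomial embedding (with intercept) of the columns of X."""
    cols = [np.ones(len(X))]
    d = X.shape[1]
    for deg in (1, 2):
        for idx in combinations_with_replacement(range(d), deg):
            cols.append(np.prod(X[:, list(idx)], axis=1))
    return np.column_stack(cols)

def fit_bpr(X, y, n_models=10, subset_size=3):
    """Bagged polynomial regression: least squares on polynomial embeddings
    of random feature subsets; predictions are averaged across models."""
    models = []
    for _ in range(n_models):
        S = rng.choice(X.shape[1], size=subset_size, replace=False)
        Z = poly_features(X[:, S])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        models.append((S, beta))
    return models

def predict_bpr(models, X):
    preds = [poly_features(X[:, S]) @ beta for S, beta in models]
    return np.mean(preds, axis=0)

# Synthetic regression problem with 8 features.
X = rng.normal(size=(300, 8))
y = X[:, 0] ** 2 + X[:, 1] * X[:, 2] + 0.1 * rng.normal(size=300)
models = fit_bpr(X, y)
yhat = predict_bpr(models, X)
print(float(np.mean((y - yhat) ** 2)) < float(np.var(y)))  # True
```

Each sub-model includes an intercept, so its in-sample error is at most the variance of $y$; by convexity the averaged predictor inherits this bound.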
$\texttt{EConn}_{\leq}(c)$: the set of graphs with edge-connectivity at most $c$. A graph is $c$-edge-connected
the graph property of $\mathsf{D}[k]$ is the set
if it has at least $c$ vertices, and if it remains connected whenever fewer than $c$ vertices are deleted.
in the case of the DP-core C-Hamiltonian the multiplicity $2^{O(k)}$ is smaller than the trivial
Hamiltonian: the set of Hamiltonian graphs. A graph is Hamiltonian if it contains a cycle that spans all its vertices.
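For the tiny graphs used in examples, the Hamiltonian property above can be checked by brute force; a sketch (exponential time, for illustration only):

```python
from itertools import permutations

def is_hamiltonian(n, edges):
    """Brute-force test whether the graph on vertices 0..n-1 contains a
    cycle visiting every vertex exactly once (only sensible for tiny n)."""
    adj = {frozenset(e) for e in edges}
    for perm in permutations(range(1, n)):
        cycle = (0,) + perm
        if all(frozenset((cycle[i], cycle[(i + 1) % n])) in adj
               for i in range(n)):
            return True
    return False

square = [(0, 1), (1, 2), (2, 3), (3, 0)]
star = [(0, 1), (0, 2), (0, 3)]            # K_{1,3} has no spanning cycle
print(is_hamiltonian(4, square), is_hamiltonian(4, star))  # True False
```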
We further observe that the detection delay depends on the post-change distribution. The delay is comparably large when changing from the multivariate standard normal to the mixed distribution. This matches our intuition: the mixed distribution is relatively similar to the pre-change distribution, rendering it difficul...
Overall, the results on these synthetic streams indicate (i) that MMDEW is robust to the choice of $\alpha$ and (ii) that $\alpha$ has the expected influence on the behavior of the algorithm.
We introduced a novel change detection algorithm, MMDEW, that builds upon two-sample testing with MMD, which is known to yield powerful tests on many domains. To facilitate the efficient computation of MMD, we presented a new data structure, which allows estimating MMD with polylogarithmic runtime and logarithmic memo...
threshold. The level $\alpha$ is a bound for the probability that the tests
The MTD plot on the right mirrors this observation: the MTD decreases with increasing $\alpha$.
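The underlying two-sample statistic can be sketched with the standard unbiased estimator of squared MMD under an RBF kernel; this is a plain quadratic-time baseline, not the paper's polylogarithmic data structure:

```python
import numpy as np

def mmd2_unbiased(X, Y, gamma=1.0):
    """Unbiased estimate of squared MMD between samples X and Y
    using an RBF kernel k(a, b) = exp(-gamma * ||a - b||^2)."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    m, n = len(X), len(Y)
    Kxx, Kyy, Kxy = k(X, X), k(Y, Y), k(X, Y)
    term_x = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
    term_y = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
    return term_x + term_y - 2.0 * Kxy.mean()

rng = np.random.default_rng(0)
same = mmd2_unbiased(rng.normal(size=(200, 2)), rng.normal(size=(200, 2)))
diff = mmd2_unbiased(rng.normal(size=(200, 2)),
                     rng.normal(loc=2.0, size=(200, 2)))
print(same < diff)  # a shifted distribution yields a larger estimate
```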
One reason why current GNNs perform poorly on heterophilic graphs, could be the mismatch between the labeling rules of nodes and their linking mechanism. The former is the target that GNNs are expected to learn for classification tasks, while the latter specifies how messages pass among nodes for attaining this goal. I...
However, existing techniques [26, 27, 18] mainly parameterize graph edges with node similarity or dissimilarity, while failing to explicitly correlate them with the prediction target. Even worse, as the assortativity of real-world networks is usually unknown and node features are typically full of noise, the captured...
This hypothesis applies, without loss of generality, to both homophilic and heterophilic graphs. For a homophilic scenario, e.g., in citation networks, scientific papers tend to cite or be cited by others from the same area, and both of them usually possess the common keywords uniquely appearing in their topics. For a ...
Once the issue of GNNs' learning beyond homophily is identified, a natural question arises: can we design a new type of GNN that is adaptive to both homophilic and heterophilic scenarios? Well-formed designs should be able to identify the node connections irrelevant to learning tasks, and substantially extract the mos...
Suppose that $E$ is a 2-dimensional slope in $\mathbb{R}^{4}$ characterized by subperiods $p_{0},\ldots,p_{3}$, ...
For more clarity, the diagram in Figure 10 summarizes the notations and the “traveling” between spaces that we use.
We can then use the lines to show that a shadow is periodic (in one direction) and determine its prime period: starting from a vertex of the shadow, we follow the line in the chosen direction until we hit another vertex, for each valid configuration of the tiles.
Figure 7 illustrates the difference between two valid projections, one being fine but not the other, on the slope of Cyrenaic tilings which we present in the next subsection. With the fine projection, projected subperiods have the same directions as the sides of the tiles.
Additionally, the lengths of the “integer versions” of subperiods are closely related to the distances between two consecutive Ammann bars in a given direction, as can be seen in Figure 1 (more details are given in Appendix A).
The motivation above suggests a learning problem that assumes the quantum state prepared at time step $t=1,2,\ldots,T$ is $\rho_{t}$. Due to imperfect calibration, $\rho_{t}$ ...
Next, we consider a more sophisticated metric, the "adaptive regret," as introduced in [18]. This metric measures the maximum of the regret over all intervals, essentially taking into account a changing comparator. Many extensions and generalizations of the original technique have been presented in w...
In this section, we consider the case that $\rho_{t}$ may change over time. In particular, we do not bound the number of times that $\rho_{t}$ changes, but instead consider the total ...
Dynamic regret: We consider minimizing regret under the assumption that the comparator $\varphi$ changes slowly:
The first metric we consider is the "dynamic regret" introduced in [17], which measures the difference between the learner's loss and that of a changing comparator. The dynamic regret bounds are usually characterized by how much the optimal comparator changes over time, known as the "path length."
Reset control systems are effective in improving the performance of motion systems. To facilitate the practical design of reset systems, this study develops frequency response analysis methods for open-loop and closed-loop reset control systems, by assessing their steady-state responses to sinusoidal inputs. Results sh...
The frequency response analysis for closed-loop reset systems under sinusoidal disturbance and noise follows a similar derivation process as the theories presented in this paper. However, to emphasize and clarify the contribution of this paper, we have chosen not to include analysis for systems with disturbance or nois...
The frequency response analysis is currently limited to two-reset systems. In our future research, we aim to develop techniques to identify two-reset systems and analyze multiple-reset systems, thereby expanding the scope of our analysis methods. Furthermore, the newly introduced Two-Reset Control System (T-RCS) in thi...
The lack of precise frequency response analysis methods for closed-loop reset systems and the disconnect between open-loop and closed-loop analysis in reset systems motivates this research. The objective of this research is to develop new frequency response analysis methods for both open-loop and closed-loop reset cont...
A
We start by recalling the setting of Theorem 5.5. The graph G is a connected 𝒪_k-free graph of girth at least 11, and C is a shortest cycle in G. The neighborhood of C is denoted by N...
We start by proving that the cardinality of S is at least the cycle rank r(G).
The proof of our main structural result, Theorem 1.1, spans from Section 4 to Section 8. After some preliminary results (Section 4), we show in Section 5 that it suffices to prove Theorem 1.1 when the graph G has a simple structure: a cycle C, its neighborhood N (an independent set), an...
Our goal is to prove that there is a vertex whose degree is linear in the cycle rank r(G).
in which case there is a cycle C_y which is a connected component of G[R′],
C
(I + γ∂g)^{-1}(x) = prox_{γg}(x), ∀x ∈ ℝ^n, γ > 0.
The next theorem presents the iteration and operation complexity of Algorithm 2 for finding an ε-residual solution of problem (1) with μ = 0, whose proof is deferred to Section 6.
The above discussion leads to the following result regarding Algorithm 2 for finding a pair of ε-KKT solutions of problems (22) and (24).
The above discussion leads to the following result regarding Algorithm 2 for finding an ε-residual solution of problem (37).
The above discussion leads to the following result regarding Algorithm 2 for finding an ε-KKT solution of problem (29).
C
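As a concrete instance of the resolvent identity above: for g(x) = |x| in one dimension, the proximal operator is soft-thresholding, and one can check numerically that y = prox_{γg}(x) satisfies x ∈ y + γ∂g(y). This is a generic one-dimensional sketch, not code from the paper:

```python
import math

def prox_l1(x, gamma):
    # prox_{γg}(x) for g = |·| is soft-thresholding: sign(x) * max(|x| - γ, 0)
    return math.copysign(max(abs(x) - gamma, 0.0), x)

# resolvent check: y = (I + γ∂g)^{-1}(x) must satisfy x = y + γ·sign(y) when y ≠ 0
x, gamma = 2.5, 1.0
y = prox_l1(x, gamma)                                        # 1.5
assert abs((y + gamma * math.copysign(1.0, y)) - x) < 1e-12
print(y)
```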
D(ℓ(x), ℓ(y)) can be computed given only the sum ℓ′(x) ⊕ ℓ′(y)...
Let ℱ be any class of graphs with an adjacency labeling scheme of size s(n). Then
Let ℱ be a hereditary class with an adjacency labeling scheme of size s(n). Then:
Let ℱ be any class of graphs with an adjacency labeling scheme of size s(n). Then
Let ℱ be a hereditary class of graphs that admits an adjacency labeling scheme of size s(n).
D
This paper proposes an integrated constellation design and transfer model for solving the RCRP. Given a set of target points each associated with a time-varying coverage reward and a time-varying coverage threshold, the problem aims to maximize the total reward obtained during a specified time horizon and to minimize t...
The contributions of this paper are as follows. We present an integer linear program (ILP) formulation of the design-transfer problem, referred to as the Regional Constellation Reconfiguration Problem (RCRP). This formulation incorporates both constellation design and constellation transfer aspects, which are typically...
The ILP formulation of RCRP-ARC enables users to utilize commercial software packages for convenient handling and obtaining tolerance-optimal solutions. However, for large-scale real-world instances, the problem suffers from the explosion of a combinatorial solution space. To overcome this challenge and to produce high...
We conduct computational experiments to evaluate the performance of the proposed Lagrangian relaxation-based solution method. In particular, we focus on analyzing the solution quality and the computational efficiency of the Lagrangian heuristic in comparison to the results obtained by a mixed-integer programming (MIP) ...
The RCRP formulation combines the constellation transfer problem with the AP formulation and the constellation design problem with the MCP formulation. The former exhibits a special structure—the integrality property—that enables an efficient solution approach. The latter, however, is a combinatorial optimization probl...
B
Let ρ: G_{S,R} → GL_n(ℂ) be the representation where each ρ(g) is the perm...
representation ρ: G → GL_n(ℂ) for some n ∈ ℕ with ρ(s) ≠ ρ(e)...
there is a matrix representation ρ of G such that ρ(s) ≠ ρ(e).
be a representation such that ρ(s) ≠ ρ(e).
By assumption we have ρ(s) ≠ ρ(e).
D
CWP represents a novel framework related to Lyapunov function learning, that can be used to develop model-free controllers for general dynamical systems.
Subsequently, we propose D-learning for performing CWP in the absence of knowledge regarding the system dynamics.
• We propose D-learning, which parallels Q-learning [11] in RL to obtain both the Lyapunov function and its derivative (see Fig. 6). Unlike existing Lyapunov function learning methods that rely on controlled models or their approximation with neural networks [12], [13], the system dynamics are encoded in the so-called...
(c) Principal Component Analysis (PCA) projection of the Lyapunov function (17) learned for the system (15), overlaid with the trajectories of the system controlled by the D-learning controller and the DDPG controller, which shows that the D-learning controller has better stability guarantees than the DDPG controller.
Moreover, the feature function, Lyapunov function, and controller, all in the form of neural networks, can be learned jointly to achieve superior performance. These will be explored in the future work.
B
We evaluated the performance of the trained and fine-tuned RCN-Hull model on the test partition of the UCV dataset. Since the ground truth convex hull matrices contain many zeroes, classification accuracy is not a suitable performance indicator. Hence, we report the prediction performance using the precision, recall an...
Table III lists BD-rates of the RQ curves generated using the RCN-Hull model predictions, with the optimal ground truth convex hulls of the test sequences used as the reference. The PCHIP [56] interpolation method, which has been widely employed to compute BD-rates in codec standardization efforts due to its relative sta...
∑_{p∈𝒫} ∑_{q∈𝒬} Ĉ_{pq}, which implies a significant reduction of the complexity of bi...
The overall performances of the four compared methods in terms of average BD-rates, average BD-rate magnitudes and time savings, along with their 95% bootstrap CIs are summarized in Table IV. Table IV also reports the mean absolute deviations (MAD) and the standard deviations (SD) of BD-rates obtained for the four comp...
We compared the performance of RCN-Hull against that of I-hull, P-hull [21] and F-hull [11]. The distribution of the BD-rates of each compared model on the UCV test set is plotted in Fig. 7a. The box plots (the box boundaries represent the lower and upper quartiles of the corresponding data, while the whiskers extend ...
A
In this section, we first introduce experimental settings and implementation details for evaluation.
Below we elaborate on each module used in GraphMLP and provide its detailed implementations.
We start our ablation studies by exploring the GraphMLP on different hyper-parameters.
The large-scale ablation studies with 2D detected inputs on the Human3.6M dataset are conducted to investigate the effectiveness of our model (using the single-frame model).
We also conduct detailed ablation studies on the importance of designs in our proposed approach.
D
9:     S ← S ∪ T₁, R ← R ∪ T₂
of Chen et al. (2021), which has adaptivity of O(log(n/k)).
a linear-time algorithm of Kuhnle (2021); Chen et al. (2021), and showing that our adaptation
The highly adaptive linear-time algorithm (Alg. 3) outlined in Chen et al. (2021)
This algorithm is an instantiation of the ParallelGreedyBoost framework of Chen et al. (2021), and it relies heavily on
D
Õ(n^{6/7} d^{4/7})
In this section, we consider the case where f(x) is convex and L-smooth. We provide convergence results for AClipped-dpSGD that hold with high probability. We will show that AClipped-dpSGD is faster than DP-GD and DP-SGD in terms of running time to achieve the excess populat...
In this section, we introduce our gradient estimator based on the AClip strategy, which privately estimates the mean of a heavy-tailed distribution with a high probability guarantee. Before presenting our result, we first discuss the bias of some simple clipped methods.
In this section, we provide the necessary background for our analyses, including differential privacy and
In this section, we provide our main Algorithm 1, AClipped-dpSGD, and establish its convergence results (i.e., excess population risk bounds) under (strongly) convex and (non)-smooth objectives.
C
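The bias of simple clipping mentioned above can be seen in a minimal sketch (a generic clipped-and-noised mean for scalar gradients; the function name and the Gaussian noise scale are illustrative, not the paper's AClip estimator):

```python
import random

def clipped_private_mean(grads, C, sigma, seed=0):
    # clip each scalar gradient to magnitude C, average, then add Gaussian
    # noise with standard deviation sigma * C / n (illustrative calibration)
    rng = random.Random(seed)
    n = len(grads)
    clipped = [g * min(1.0, C / abs(g)) if g != 0 else 0.0 for g in grads]
    return sum(clipped) / n + rng.gauss(0.0, sigma * C / n)

# with heavy-tailed samples, clipping biases the mean estimate downward:
print(abs(clipped_private_mean([3.0, -1.0], C=2.0, sigma=0.0) - 0.5) < 1e-9)  # True
# the true (unclipped) mean is 1.0, so the clipped estimate 0.5 is biased
```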
SC user association: Optimal UA under SC for mmWave networks is widely studied in the literature [13].
Existing studies design new algorithms for UA, for example by using a Markov decision process [14] to minimize the number of handover decisions and to maximize throughput. However, a key shortcoming of these works is that they neglect the directionality gain of beams, while this simplification might significantly influ...
Directional beams in mmWave channels help overcome low signal quality due to high path loss. Several studies investigate how to steer, manage and align these beams to achieve the highest throughput or best efficiency [29, 30, 31]. All of these works show that operation with smaller beams leads to higher throughput. How...
Based on the aforementioned challenges, our goal is to design a computationally-efficient UA scheme that maximizes network throughput while meeting the users’ minimum rate requirements by exploiting a dynamic form of MC. To the best of our knowledge, our study is the first dynamic MC scheme that takes advantage of both...
Three heuristics for UA are studied in [14]: (1) connect to the least-loaded BS, (2) connect to the BS with the highest instantaneous rate, and (3) connect to the BS with the highest SNR. In that work, the authors showed that of these three heuristics, the SNR-based approach performed best in terms of spectral efficiency. However, some of...
A
At time t3, they decide whether to buy one of the two phones, or to leave the website empty-handed. This decision is the output of a rational process of utility maximization that takes into account both the features and the prices of the two phones. The novelty of our framework is that we make explicit the difference b...
Performance-optimized Recommendation has a long history, with many initial works related to the problems of click-through rate (CTR) optimization for online advertising and search ranking. There are two categories of methods: the first relies on the label or reward given by the user; the second directly learns an order...
Moving to specialized approaches for conversion modelling and the use of price in recommendation,
Most reward-optimized recommendation systems measure an abstract form of user utility and not an actual monetary value. This situation likely stems from the preponderance of clicks as immediate reward feedback in real-world systems. But as the field and the industry mature, we need consistent and rigorous approaches fo...
Our claim is that in the presence of sales data, recommendation algorithms can use the price information to directly optimize the welfare of the whole system (which is made of the advertisers and the users), instead of maximizing the probability of an action (click or conversion for instance).
C
In the present work, we investigate the interference degree of structured hypergraphs. The problem of computing the interference degree of a hypergraph is shown to be NP-hard, and the interference degree of certain structured hypergraphs is determined. We also investigate which hypergraphs are realizable, i.e. which hy...
In the rest of this section, we prove some results that are used in the proofs in subsequent sections. The reader may skip the rest of this section now and return to it later.
The rest of this paper is organized as follows. Section 2 describes the system model; formal definitions of the unit disk graph model and hypergraph model are given, a distributed maximal scheduling algorithm from the literature is recalled, and it is explained that its worst-case performance is characterized by the in...
In the special case where the hypergraph (V, ℰ) is 2-uniform, its interference degree is equal to the interference degree of the graph (V, ℰ) [18, p. 2955]. Thus, the interference degree of the 2-uniform hypergraph K_{1,r}...
In the present section we investigate certain properties of a hypergraph invariant - the interference degree. We show that the problem of computing the interference degree of a hypergraph is NP-hard, and we prove some basic properties of this hypergraph invariant and compute the interference degree of certain structure...
B
V_{i+1}(q, ν) = { min_δ inf_{(q,ν) →^{t,δ} ⋯} ⋯ ; … if q ∈ Q_Max }...
a given transition δ in π (or ρ). More generally, for
For a fixed valuation ν, and once the transition δ achieving the minimum or maximum is chosen,
path π in 𝒢 from an initial valuation ν of the clock:
and valuation ν, there exist t ∈ ℝ_{≥0}
B
Motivated by the mathematically intricate CO problem, a multi-task learning based analog beam selection (namely MTL-ABS) framework is developed to solve the beam selection problem in a low-complexity way for the RIS-enabled THz MU-MIMO systems. The MTL technique is a promising paradigm in machine learning communities a...
We first formulate a codebook-based beam selection problem for the RIS-enabled THz MU-MIMO system, where the subarray architecture is employed at the BS and RIS, respectively. In light of this system model, we derive a novel sum-rate metric to measure the beam selection performance.
In addition, both active MIMO and passive RIS possess an extremely large number of array elements at THz band, but current research works tend to optimize the phase shifts of RIS elements one by one during the signal processing stage, which definitely leads to high latency and heavy computational complexity for RIS-aid...
With regard to the system model and channel model mentioned above, the sum-rate of the RIS-aided THz MU-MIMO system can be formulated as
In this section, we introduce the RIS-enabled THz MU-MIMO system model and channel model. Based on these assumptions, the beam selection problem is formulated.
A
[Γ̂(1)]_{j,l} = (1/T) ∑_{t=1}^{T} ⟨F̂_{j,t}^{-1} − id, F̂_{l,t−1}^{-1} − id⟩_{Leb}.
For the estimator Ã to be well defined, strictly speaking, we need to assume that Γ̃(0) is nonsingular, as in the case of classical least squares estimators.
Under the conditions of Lemma 4.1, and assuming that Γ̂(0) is nonsingular, where we recall
As before, we assume that Γ̂(0) is invertible.
Ã = Γ̃(1)[Γ̃(0)]^{-1},
C
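The estimator Ã = Γ̃(1)[Γ̃(0)]^{-1} has the familiar Yule-Walker/least-squares form; in the scalar AR(1) case it reduces to â = γ̂(1)/γ̂(0). A minimal numeric sketch on simulated data (not the paper's functional setting with the ⟨·,·⟩_Leb inner products; γ̂(0) is assumed nonzero, as in the text):

```python
import random

# simulate a scalar AR(1): x_t = a * x_{t-1} + noise
random.seed(0)
a_true, T = 0.6, 20000
x = [0.0]
for _ in range(T):
    x.append(a_true * x[-1] + random.gauss(0.0, 1.0))

# sample autocovariances γ̂(0), γ̂(1), then â = γ̂(1) / γ̂(0)
g0 = sum(v * v for v in x[1:]) / T
g1 = sum(x[t] * x[t - 1] for t in range(2, T + 1)) / T
a_hat = g1 / g0
print(abs(a_hat - a_true) < 0.05)  # True: close to the true coefficient on a long sample
```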
In the pre-training stage, we use the same strategy as in EP [32] as the base training method and train the SSL models (SemCo [24], FlexMatch [50] and MarginMatch [39]) using the default hyperparameter settings in the official codes.
In the episodic finetuning stage, for miniImageNet and tieredImageNet, 5 classes are randomly sampled per episode, where in each class we select 5 and 15 instances for the support and query set, respectively.
Most FSL methods use a meta-learning manner (episodic sampling) to generate many few-shot tasks for training. Specifically, to generate an M-way K-shot task, we randomly select M classes from the base set and then randomly choose N_k + N_q...
To investigate the impact of the class-prior selection strategy, i.e., first picking some classes and then selecting several samples from these classes, we retrain two representative approaches, SPN [31] and M-PL [17], using the original and our new setting with different label numbers. Then, we evaluate them by using the se...
To investigate the effect of meta-learning on SSL models, we finetune SemCo and FlexMatch with our PLML, and evaluate them on the testing sets of base classes in miniImageNet and tieredImageNet. As shown in Fig. 4, our approaches achieve superior performance to the baselines in most evaluation tasks of the three datasets. ...
A
For initial (base model) training, we use a multilayer perceptron (MLP) model architecture for each feature set for comparison. The MLP hyperparameters are tuned using 5-fold cross-validation on the training set where the search space is a grid of combinations of: hidden layer size 64, 128, or 512 (2 hidden layers chos...
Table 2 shows the results of experiments using the FreeSolv dataset. Each row shows mean absolute error (MAE) results (mean ± standard deviation (σ)) averaged over 10 runs of fine-tuning the MLP base model (learned from the “train” split) with a GP regressor using either the PerturbLear...
Table 3: Spearman correlation (ρ ± σ; higher is better) of different predictors (with input dimension “size”) on the Half-life benchmark with varying numbers of test samples included in fine-tuning (n). Best results in bold; second-best is underlined. Note: si...
Table 2: Mean absolute error (MAE ± σ; lower is better) of different predictors (with input dimension “size”) on the FreeSolv benchmark with varying numbers of test samples included in fine-tuning (n). Best results in bold; second-best is underlined.
Table 4: Fine-tuning results for TDC datasets at n = 25. FreeSolv, Caco-2, and PPBR use MAE (± σ) while Half-life, VDss, and the Clearance datasets (Hepato. and Micro.) use Spearman correlation (± σ) as the performance metric. Best...
C
Matching
0.006**
0.615**
0.006***
We collected donation transactions between 1:00 AM on Feb. 26, 2022 and 6:00 PM on Mar. 3, 2022, from the public wallets of Ukraine to focus on the airdrop. We calculated the USD value of donation contributions using the historical prices of Bitcoin and Ethereum based on the daily opening prices. There were 14,903 donat...
B
We introduce a novel diffusion mechanism for machine learning models called FedDif, which aims to reduce weight divergence caused by non-IID data. In this mechanism, local models accumulate the personalized data distributions from different users, achieving a similar effect to training on IID data.
It can be easily seen that the diffusion efficiency maximization problem in (16) is a combinatorial optimization problem. It is difficult to obtain a solution directly because the set of feasible solutions is discrete. Therefore, based on auction theory, we design a diffusion strategy to find a feasible solution that s...
We design the diffusion strategy based on auction theory to balance the enhancement of learning performance with the reduction of communication costs. We formulate an optimization problem to find the trade-off, and the auction provides a feasible solution based on the proposed winner selection algorithm.
Although the diffusion mechanism can mitigate the effects of non-IID data, excessive diffusion can substantially increase the total training time and deteriorate the performance of communication systems. In other words, there is a trade-off between improving learning performance and reducing communication costs. Immode...
There is a trade-off between the communication cost of diffusion and the learning performance of the global model. For example, FedDif requires more communication resources than typical FL in the short term. However, in the long term, the total number of iterations required to obtain the required performance of the gl...
B
held by owner; held by third party; wallet with a hardware root of trust that enforces rules on behalf of an authority or issuer
online services that store digital assets but for which the owner is responsible for their management and any transactions
held by owner; held by third party; wallet with a hardware root of trust that enforces rules on behalf of an authority or issuer
online services that store digital assets but for which the owner is responsible for their management and any transactions
online services that store digital assets and conduct transactions on behalf of the owner
D
Let us consider the (static) network whose topology at round t is the complete graph G_t = K_n, i.e., each process receives messages from all other processes at ev...
The following result justifies the assumption made in Sections 4.4 and 5 that processes have a-priori knowledge of the number of leaders ℓ in the system.
The following result justifies the assumptions made in Section 4.3 that processes have knowledge of an upper bound on n or on the dynamic diameter d of the network.
Knowledge of the processes. Our algorithms assume that the processes have a-priori knowledge about certain properties of the network only when the absence of such knowledge would render the Average Consensus or Counting problems unsolvable.
The stabilizing algorithms for both functions give the correct output within 2τn communication rounds regardless of the number of leaders, and do not require any knowledge of the dynamic disconnectivity τ or the number of processes n. Our terminating algorithm f...
A
Since L = O(log(n/ε)) under our assumptions log(C) = O(log n) and log log(1/η) = O(log n)...
For k = 2 the sample complexity is obtained in the same way, using Lemma 4.10 instead.
We first consider general k ≥ 2 in this subsection, and then give an improved sample complexity bound for k = 2 in the next subsection.
For k = 2, the sample complexity of the identity testing algorithm is
If k = 2, i.e., we have a binary domain 𝒦 = {0, 1}, then the sample complexity for the KL tester is better.
A
In this framework, we can define the problem partialWordsNonUniv,
Considering an instance of the 3-CNF-Sat problem,
By this construction, we obtain that the instance of the 3-CNF-Sat problem is satisfiable if and only if there exists a word v of length L which is not compatible with any of the words w_i...
The first part of the following result was shown in [68] via a reduction from 3-CNF-Sat, and it can be complemented by a conditional lower bound.
Ultimately, we have a reduction from 3-CNF-Sat to partialWordsNonUniv (from Theorem 4.1)
C
A novel deep learning-based framework is proposed for point cloud geometry inter-frame encoding similar to P-frame encoding in video compression.
We propose a novel deep learning-based inter-frame predictor network that can predict the latent representation of the current frame from the previously reconstructed frame as shown in Fig. 4.
The proposed inter-prediction module employs a specific version of generalized sparse convolution [36] with different input and output coordinates denoted as GSConv to perform motion estimation in the feature domain.
We propose a novel inter-prediction module (predictor network) that learns a feature embedding of the current PC frame from the previous PC frame. The network utilizes hierarchical multiscale feature extractions and employs a generalized sparse convolution (GSConv) with arbitrary input and output coordinates to perform...
The proposed inter-frame compression framework employs an encoder and decoder network similar to PCGCv2 along with a novel inter-prediction module to predict the feature embedding of the current PC frame from the previous PC frame.
C
KL[Dir(𝐩_i | α_i) ‖ Dir(𝐩_i | 1)] = log(Γ(α_{i0}) / (Γ(K) ∏_{k=1}^{K} Γ(α_{ik}))) + ∑_{k=1}^{K} (α_{ik} − 1)[ψ(α_{ik}) − ψ(α_{i0})].
The weighted KL divergence term provides a regularization to penalize the case in Fig. 1. The overall loss function can be written as:
where p_{ij} is the predicted probability of the i-th sample for class j, 𝐲_i...
For clarity, we provide a toy example under a triplet classification task to illustrate the difference from softmax classifiers. To calibrate the predictive uncertainty, the model is encouraged to learn a sharp simplex for accurate prediction (Fig. 1), and to produce a flat distribution for inaccurate prediction in Fig....
how prior evidence influences posterior evidence. In the case that the pre-trained network provides a good class-agnostic embedding, η should be higher, and vice versa. According to Eq. 2, the posterior evidence and the parameters of the Dirichlet distribution can be written as:
A
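The KL-to-uniform-Dirichlet regularizer can be sketched numerically with the standard closed form KL[Dir(α) ‖ Dir(1)] (a generic illustration using a finite-difference digamma so only the standard library is needed; this is not the paper's weighted variant):

```python
import math

def digamma(x, h=1e-6):
    # central-difference approximation of ψ(x) = d/dx log Γ(x)
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2 * h)

def kl_dir_uniform(alpha):
    # KL[Dir(α) || Dir(1)] = log(Γ(α0) / (Γ(K) ∏_k Γ(α_k)))
    #                        + Σ_k (α_k - 1)[ψ(α_k) - ψ(α0)],  α0 = Σ_k α_k
    K = len(alpha)
    a0 = sum(alpha)
    term1 = math.lgamma(a0) - math.lgamma(K) - sum(math.lgamma(a) for a in alpha)
    term2 = sum((a - 1) * (digamma(a) - digamma(a0)) for a in alpha)
    return term1 + term2

print(round(kl_dir_uniform([1.0, 1.0, 1.0]), 6))  # 0.0, the uniform Dirichlet itself
print(kl_dir_uniform([10.0, 1.0, 1.0]) > 0)       # True, sharp evidence is penalized
```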
This is enabled by adding redundancy in the form of check symbols to the data representation.
Hamming codes can be implemented in systematic or non-systematic form; and conversion between the two forms takes elementary matrix transformations. While the coverage is similar to classical TMR,
criterion directly restricts the type of applicable ECCs. This criterion essentially asks for a homomorphic operation, which guarantees that computation on raw data can always be mapped to computation on check symbols without any ambiguity.
Two types of self-checking circuits exist: Type-I features systematic; Type-II, non-systematic codes.
Check symbols in a codeword can be totally isolated from the raw data bits (systematic ECCs); or interleaved (non-systematic ECCs). In the following, we stick to systematic ECCs where data to be protected can be accessed directly, which by construction enables a more modular design, especially useful in the PiM context...
C
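A minimal sketch of a systematic code of the kind discussed above: Hamming(7,4) with the three check symbols appended after the four directly accessible data bits, correcting any single-bit error via the syndrome (an illustrative bit layout, not the paper's PiM design):

```python
def encode(d):
    # systematic Hamming(7,4): codeword = 4 data bits + 3 check symbols
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return d + [p1, p2, p3]

def correct(c):
    # recompute the parity equations; a nonzero syndrome locates the flipped bit
    s1 = c[4] ^ c[0] ^ c[1] ^ c[3]
    s2 = c[5] ^ c[0] ^ c[2] ^ c[3]
    s3 = c[6] ^ c[1] ^ c[2] ^ c[3]
    pos = {(1, 1, 0): 0, (1, 0, 1): 1, (0, 1, 1): 2, (1, 1, 1): 3,
           (1, 0, 0): 4, (0, 1, 0): 5, (0, 0, 1): 6}.get((s1, s2, s3))
    if pos is not None:
        c = c[:]
        c[pos] ^= 1
    return c

word = encode([1, 0, 1, 1])
corrupted = word[:]
corrupted[2] ^= 1                 # flip one bit anywhere in the codeword
assert correct(corrupted) == word
print(correct(corrupted)[:4])     # [1, 0, 1, 1]
```

Because the code is systematic, the data bits sit untouched in the first four positions and can be read directly, which is the modularity argument made in the text.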
The proposed method presents 2^k GCAS with array size 2^{kr} × 2^{ks} and set si...
Section 2 provides useful definitions. Section 3 describes 2D-CCC construction. Section 4 examines the PMEPR of row and column sequences in 2D-CCC arrays and provides generalizations of the proposed 2D-CCC. Section 5 compares the proposed 2D-CCC to the current state-of-the-art. Section 6 concludes.
The proposed construction yields 2D-CCC with array size M² × K² and set size MK, where M, K ≥ 2; therefore, the p...
In this section we derive the PMEPR bound of the 2D-CCC arrays given in Theorem 1.
As a special case of the design, we have come up with 2D-GCAS with any array size and flexible set size of the form ∏_{i=1}^{a} p_i ∏_{j=1}^{b} q_j...
A
Suppose that from the following time series $\{y_{1},y_{2},\ldots,y_{n}\}$ we want to estimate a function $f:\mathbb{R}^{p}$...
The MIMO strategy also involves converting time series data into a supervised learning problem, as shown in (6). However, unlike the recursive strategy, the target variable is a vector rather than a scalar; therefore $F:\mathbb{R}^{p}\rightarrow\mathbb{R}^{H}$...
First, we should convert the time series to a supervised learning problem as follows
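As an illustration of this conversion (a minimal sketch with hypothetical parameter names, not the paper's exact formulation), a sliding window turns the series into lag vectors $X$ and, for the MIMO strategy, vector-valued targets $Y$ containing the next $H$ values:

```python
import numpy as np

def make_supervised(y, p, H):
    """Sliding window: each row of X holds the p lagged values
    y_{t-p}, ..., y_{t-1}; the matching row of Y holds the next H values
    y_t, ..., y_{t+H-1} (the MIMO target vector)."""
    X, Y = [], []
    for t in range(p, len(y) - H + 1):
        X.append(y[t - p:t])
        Y.append(y[t:t + H])
    return np.array(X), np.array(Y)
```

With $H=1$ this reduces to the usual scalar-target setup; larger $H$ gives the multi-output formulation $F:\mathbb{R}^{p}\rightarrow\mathbb{R}^{H}$ discussed above.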
Although this paper solely tackles univariate time series, AEnbMIMOCQR can be easily generalized to cope with multivariate time series and hints are provided to do so. Furthermore, AEnbMIMOCQR can be employed as a replacement for CQR in volatile regression settings, not necessarily time series, and for unsupervised ano...
We now establish the key attributes that constitute a high-quality PI, not exclusively tied to time series forecasting but applicable, more generally, to any regression task. To this end, consider an unseen pair of covariates and target, denoted as $(\bm{x}_{n+1},y_{n+1})$...
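For intuition, a minimal split-conformal sketch (a simplified relative of CQR, not the AEnbMIMOCQR procedure itself) builds a PI from the quantile of absolute calibration residuals:

```python
import numpy as np

def split_conformal_interval(pred_cal, y_cal, pred_new, alpha=0.1):
    """PI around a point forecast: the empirical (1 - alpha) quantile of the
    absolute calibration residuals, with the (n + 1) finite-sample correction,
    gives a half-width that is marginally valid under exchangeability."""
    scores = np.abs(y_cal - pred_cal)            # conformity scores
    n = len(scores)
    k = int(np.ceil((n + 1) * (1 - alpha)))      # finite-sample rank
    q = np.sort(scores)[min(k, n) - 1]
    return pred_new - q, pred_new + q
```

CQR replaces the symmetric absolute-residual score with scores built from quantile-regression estimates, which adapts the interval width to the covariates.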
The same construction as in [OS24, Example 4.1] shows that the dissimilarity $\widehat{d_{B}}$ on $\mathscr{R}^{2}$-barcodes does not satisfy the triangle inequality...
Although the example deals with the Betti signed barcode, that is, the signed barcode associated to the usual exact structure on fp $\mathscr{R}^{2}$-persistence modules, it applies without any changes to the rank exact decomposition, since the (usual) mi...
The prototypical example comes from the usual exact structure on the category of fp $\mathscr{P}$-persistence modules, which has as exact sequences the usual exact sequences.
The notion of minimality of the minimal rank decomposition by rectangles—which requires the positive and negative barcodes to be disjoint—is replaced by the requirement that the signed barcode comes from a minimal projective resolution in the so-called rank exact structure, also described below.
The minimal rank projective resolution of Fig. 1(a) is used to define the rank exact decomposition of the $\mathscr{R}^{2}$-persistence module $M$, which, in this case, is an interval module.
However, the test inputs and the mocks produced by these techniques are either synthetic or manually written by developers per their assumptions of how the system behaves.
These approaches do not guarantee that the generated mocks reflect realistic behaviors as observed in production contexts.
Second, for projects that already contain automated tests, rick can contribute with unit tests that reflect realistic behavior, as observed in production.
Contrary to these approaches, rick monitors applications in production in order to generate mocks. Consequently, the generated tests reflect the behavior of an application with respect to actual user interactions.
This may result in incomplete or unfaithful program states within the generated tests, which do not reflect the ones observed in production.
Modeling heterogeneity, which is the focus of this paper, has gained interest [16, 17]. Heterogeneity comes in many forms, e.g., differences in roles [18], robotic capabilities and/or sensors [19], dynamics [20], and even teams of air and ground robots [21]. Heterogeneity has been defined [22] for systems in the finite...
We use a mathematical model (Section III) of the human adaptive immune system first proposed in [1] to understand how a defending team can optimally allocate its resources to minimize the harm incurred from a heterogeneous team of attackers. We focus our analysis on two situations in Section IV: (i) when no single type...
We used a mathematical model to understand what an optimal defender team composition should be. The key property of this model is cross-reactivity, which enables defender agents of a given type to recognize attackers of a few different types. This allows the defender distribution to be supported on a discrete set, ...
Task assignment with heterogeneous agents [25] is another similar problem to ours, but a desired trait distribution is necessitated by the objective instead of calculating it explicitly. In comparison, the present paper uses a simple formulation to understand what distribution is best and how to allocate heterogeneous ...
In the context of the above literature, the place of the present paper is to study a theoretical model where large heterogeneous multi-agent interaction problems can be analyzed precisely.
[Table: realizability relations among the “happy”, “simple”, “proper”, “strict”, and “non-strict” variants; separations ($\times$) are established by Lemmas 2–5 and Corollary 2, and the $\preceq$/$\succeq$ transformations (semaphore dilation and saturation) by Corollary 6.]
Does “non-strict” $\preceq$ “simple & strict”? In other words, is there a reachability-preserving transformation from the former to the latter?
Does “simple & non-strict” $\preceq$ “simple & strict”? In other words, is there a reachability-preserving transformation from the former to the latter?
Finally, combining the fact that there is a reachability-preserving transformation from “non-strict” to “strict” (the saturation technique) with the fact that some reachability graphs from “simple & strict” are unrealizable in “non-strict” (by Lemma 3), we also have
Similarly, combining the fact that “simple & non-strict” is strictly contained in “non-strict” (by Corollary 2), and there exists a reachability-preserving (in fact, support-preserving) transformation from “non-strict” to “proper”, we also have that
Nado et al. (2020) propose using batch statistics during inference from the target domain instead of the training statistics acquired from the source domain.
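The idea can be sketched with plain NumPy for a single feature (a toy illustration, not the authors' implementation; the array values are hypothetical): normalizing a shifted target-domain batch with its own statistics re-centers it, while the stale source statistics do not.

```python
import numpy as np

def normalize(x, mean, var, eps=1e-5):
    """Batch-norm style standardization."""
    return (x - mean) / np.sqrt(var + eps)

# Source-domain running statistics, as a trained BN layer would store them.
run_mean, run_var = 0.0, 1.0

# A shifted target-domain batch (hypothetical numbers, for illustration only).
x_target = np.random.default_rng(0).normal(loc=3.0, scale=2.0, size=256)

# Standard inference: normalize with the stale source statistics.
z_source = normalize(x_target, run_mean, run_var)

# Prediction-time BN: recompute mean/variance from the test batch itself.
z_batch = normalize(x_target, x_target.mean(), x_target.var())
```

With the batch statistics the normalized activations are re-centered and rescaled despite the domain shift; with the source statistics they remain offset by roughly the shift itself.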
In this work, we study the generalization problem of semantic segmentation from synthetic data (Richter et al., 2016; Ros et al., 2016) through the lens of adaptation.
To our surprise, we found that no previous work on domain generalization for semantic segmentation has yet fulfilled all of these principles.
A more comprehensive approach by Yue et al. (2019) considered a number of target domains.
Our comprehensive empirical study complements these results by demonstrating improved generalization of semantic segmentation models.
Then, by Claim 18, we can obtain two disjoint independent sets $V_{1}$ and $V_{2}$ in $G$ with $|V_{1}|+|V_{2}|>2\varepsilon n$...
If no ties are allowed at all in the preference orders (i.e., the preferences are strict), then we obtain the standard stable marriage problem, where all stable matchings have the same size by the so-called “rural hospitals theorem” [39, 17], and the Gale–Shapley algorithm finds one efficiently. In fact, we can obtain ...
The aim of this paper is to bring together two directions in which the problem has been extended. One is the design of approximation algorithms for finding a maximum stable matching when ties are allowed in the preference lists. The other is the generalization of the stable marriage problem to matroid intersection, in ...
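For reference, the Gale–Shapley (deferred acceptance) algorithm mentioned above can be sketched as follows for strict preferences (a textbook implementation, not the matroid-intersection generalization studied here):

```python
def gale_shapley(men_prefs, women_prefs):
    """Deferred acceptance with strict preferences: men propose in order of
    preference, women keep their best proposal so far; the result is stable."""
    # rank[w][m] = position of m in w's list (lower is better)
    rank = {w: {m: i for i, m in enumerate(ps)} for w, ps in women_prefs.items()}
    free = list(men_prefs)                  # men not yet matched
    next_idx = {m: 0 for m in men_prefs}    # next woman each man proposes to
    engaged = {}                            # woman -> man
    while free:
        m = free.pop()
        w = men_prefs[m][next_idx[m]]
        next_idx[m] += 1
        if w not in engaged:
            engaged[w] = m                  # w accepts her first proposal
        elif rank[w][m] < rank[w][engaged[w]]:
            free.append(engaged[w])         # w trades up; her old partner is free
            engaged[w] = m
        else:
            free.append(m)                  # w rejects m; he proposes again later
    return {m: w for w, m in engaged.items()}
```

With ties in the lists this simple scheme no longer suffices, which is what motivates the approximation algorithms discussed above.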
The last author was supported by JST PRESTO Grant Number JPMJPR212B and JST ERATO Grant Number JPMJER2301, and the joint project of Kyoto University and Toyota Motor Corporation, titled “Advanced Mathematical Science for Mobility Society”.
Some of our results were obtained at the Emléktábla Workshop in Gárdony, July 2022. We would like to thank Tamás Fleiner, Zsuzsanna Jankó, and Ildikó Schlotter for the fruitful discussions. We thank the anonymous reviewers of the previous versions for their helpful feedback. The work was supported by the Lendület Progr...
In this section, the paper proposes two approaches to transform the original non-convex problem into two equivalent convex forms, adapted to the two different offloading scenarios, i.e., pure BAC-NOMA offloading and hybrid BAC-NOMA offloading. If BDs are able to finish offloading within $t_{0}$...
where $\mu$ is an auxiliary variable determined by the proposed iterative algorithm in a later section.
Therefore, an algorithm is proposed to iteratively update the power allocation solution and the iterative variable $\mu$. The procedure of this proposed scheme is summarized in Algorithm 1. As $l$ increases, the Dinkelbach iterative variable $\mu$ will finally converge to the $\varepsilon$...
Based on the above two schemes, an iterative based algorithm is proposed to obtain the resource allocation efficiently.
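To illustrate the Dinkelbach-style update of $\mu$ on a toy fractional program (a sketch of the general technique over a finite candidate set, not the paper's Algorithm 1):

```python
import numpy as np

def dinkelbach(f, g, candidates, eps=1e-9, max_iter=100):
    """Maximize f(x)/g(x) (with g > 0) by repeatedly solving the parametric
    subproblem max_x [f(x) - mu * g(x)] and updating mu = f(x*)/g(x*).
    Iteration stops when the subproblem's optimal value drops below eps."""
    mu = 0.0
    for _ in range(max_iter):
        vals = f(candidates) - mu * g(candidates)
        x_star = candidates[np.argmax(vals)]
        if f(x_star) - mu * g(x_star) < eps:   # converged: mu is the max ratio
            break
        mu = f(x_star) / g(x_star)
    return mu, x_star

# Hypothetical toy objective: maximize (x + 2) / (x^2 + 1) over a grid.
f = lambda x: x + 2.0
g = lambda x: x ** 2 + 1.0
grid = np.linspace(0.0, 3.0, 3001)
mu, x = dinkelbach(f, g, grid)
```

In the paper's setting the inner maximization is the convex power-allocation subproblem; the $\mu$-update mechanism is the same.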
Different from existing works, which mainly focus on the EE maximization [6] and sum-rate maximization [5, 8] of BAC-NOMA schemes, this paper considers the delay minimization problem of a hybrid BAC-NOMA assisted MEC offloading scenario. In particular, the signal of the downlink transmission can excite the circuit o...
The Tor network is based on onion routing, the communication technique developed in the 1990s which aimed to ensure both private and anonymous communication. Messages are first encrypted several times before being sent across nodes (onion routers) in the network that successively decrypt and pass along the en...
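A toy sketch of the layering idea (using XOR with hash-derived keystreams as a stand-in cipher; real onion routing uses hybrid public-key encryption and per-hop circuit keys):

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Derive n pseudo-random bytes from a key (toy stand-in for a real cipher)."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor_layer(data: bytes, key: bytes) -> bytes:
    """XOR with a key-derived stream; applying it twice removes the layer."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# The sender wraps the message once per relay, innermost layer first,
# so the entry relay's layer ends up outermost.
relay_keys = [b"entry-key", b"middle-key", b"exit-key"]  # hypothetical keys
msg = b"hello tor"
onion = msg
for key in reversed(relay_keys):
    onion = xor_layer(onion, key)

# Each relay peels exactly one layer and forwards the remainder.
for key in relay_keys:
    onion = xor_layer(onion, key)
```

No single relay sees both the sender and the plaintext destination, which is the anonymity property the nesting is designed to provide.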
In 2008, the Tor browser was deployed to enable easier and more widespread use of the Tor network. Tor has since been noted for its impact on societal and sociopolitical causes: as a key communication tool during the Arab Spring uprisings, for instance, or to actively evade censors like China’s Great Firewall. Improvem...
Tor is widely used as a medium to circumvent censorship. Websites containing propaganda messages, pornographic content, social media, and the like are the kinds of sites typically censored in countries where such content is not authoritatively approved. Over the years, governments have ...
From its origins as a research project and early use mostly by the technical community, Tor has evolved into a usable tool with significant societal impact. Recent events in the past month alone highlight the continued relevance of Tor to such studies. It was revealed that a threat actor was runni...
Tor is a circuit-based low-latency anonymous communication service [1] [2]. Over the years, the usage of Tor has increased and it is being used in a variety of scenarios, both good and bad (legal and illegal as well). The current trends show that Tor is suitable due to its relatively low latency for being used in circu...
We defined Prevent by building on the lessons learned with PreMiSe [43], EmBeD [47, 46], and Loud [42], three representative techniques for predicting and localizing failures.
The supervised PreMiSe approach, which we developed as a joint project with industrial partners, gives us important insights into the strong limitations of supervised approaches in many industrially relevant domains.
In this paper we discuss advantages and limitations of the approaches reported in the literature, and propose Prevent, a novel approach that overcomes the main limitations of current approaches.
PreMiSe indicates that supervised approaches can indeed predict failures precisely and in a timely manner, localize faults, and identify fault types. It also highlights the strong limitations of training systems with seeded faults in production, which supervised approaches require.
RQ2 focuses on the advantages and limitations of the unsupervised Prevent approach with respect to state-of-the-art (supervised) approaches. Unsupervised approaches do not require training with data collected during failing executions, thus they can be used in the many industrially relevant cases where it is not possib...
Stakeholders may have different value preferences, or their values may conflict with norms (Jakesch et al., 2022).
Floridi (2018a) proposes that ethical evaluation can be understood in terms of hard ethics and soft ethics.
However, there may be cases where ambiguities arise that hard ethics cannot provide an answer for.
Soft ethics examines what ought to be done over and above existing norms, such as in cases where competing values and interests need to be balanced, or existing regulations provide no guidance (Floridi, 2018b).
What are existing gaps in ethics research in AI and computer science, specifically in relation to operationalising principles in reasoning capacities?
$\binom{d+2}{2} > \deg([D+\mathcal{A}]_{+}).$
common denominator for all the elements of $\mathcal{L}(D)$.
In other words, common denominators $H$ of $\mathcal{L}(D)$ do exist in
If a common denominator $H$ is known for $\mathcal{L}(D)$, then $G_{1}/H,\ldots,G_{\ell}/H$ is a $\mathbb{K}$...
dimensions of $\mathcal{L}(D)$ and $\mathcal{L}(\bar{D})$ can be estimated
Fig. 2(A) shows the profiles of $k(x)$ and $f(x)$ considered. In Fig. 2(B), we show the profiles of typical eigenmodes $\bm{\phi}^{h}_{i}$...
Mode 1 has the lowest spatial frequency, while mode 10 has a relatively high spatial frequency. (C) Numerical results. We consider three setups, each shown in one row: (M1) Jacobi solver only; (M2) Jacobi solver with DeepONet initializer, i.e., one-time usage of DeepONet followed by Jacobi iterations; (M3) HINTS-Jacobi...
The remaining results in the first rows of Figs. 4A-B, similar to 1D cases, show the histories of the norms of the residual, error, and the mode-wise error. The second rows of Figs. 4A-B show the error of three key snapshots during the iterations together with the true/converged solution. For the case of 3D Helmholtz e...
The third column shows the histories of the norms of residual and error of the approximate solution, with the snapshots in the second column marked correspondingly. The fourth column shows the history of the norm of error for eigenmodes 1, 5, and 10.
We show key snapshots of the approximate solution and the norms of residual and error of the approximate solution, where the reference solution is obtained using a direct solver.
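The motivation for pairing a learned initializer with Jacobi iterations can be reproduced on a toy 1D Poisson problem (an illustrative sketch, not the paper's HINTS setup): low-frequency error modes decay far more slowly under Jacobi than high-frequency ones, which is exactly the gap a DeepONet initializer is meant to close.

```python
import numpy as np

n = 64
# 1D Poisson stencil: A = tridiag(-1, 2, -1); Jacobi uses the diagonal D = 2I.
A = (np.diag(2.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1)
     + np.diag(-np.ones(n - 1), -1))
d_inv = 0.5

x_true = np.random.default_rng(1).standard_normal(n)
b = A @ x_true

def jacobi(x, iters):
    for _ in range(iters):
        # Each error mode k contracts by a factor cos(k*pi/(n+1)) per sweep.
        x = x + d_inv * (b - A @ x)
    return x

def modes(v):
    """Project a vector onto discrete sine modes k = 1 (smooth) and k = 10."""
    j = np.arange(1, n + 1)
    return [np.sin(np.pi * k * j / (n + 1)) @ v for k in (1, 10)]

err0 = modes(x_true)                              # error of the zero guess
err1 = modes(x_true - jacobi(np.zeros(n), 200))   # error after 200 sweeps
```

After 200 sweeps the mode-10 component has shrunk by roughly $\cos(10\pi/65)^{200}$ (essentially to zero), while the mode-1 component has only shrunk by $\cos(\pi/65)^{200}\approx 0.8$.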
The aforementioned theoretical result necessitates that all weights undergo changes across $\mathbf{u}$, as constrained by assumption (v) in Theorem 1. However, in practical applications, this assumption may not hold true. Consequently, two fundamental questions naturally arise: Is this assumption necessary
Suppose latent causal variables $\mathbf{z}$ and the observed variable $\mathbf{x}$ follow the generative models defined in Eq. (1)–Eq. (3). Under the condition that the assumptions (i)-(iv) in
Suppose latent causal variables $\mathbf{z}$ and the observed variable $\mathbf{x}$ follow the generative models defined in Eq. (1)–Eq. (3),
Intuitively, variant causal influences among latent causal variables cannot be ‘absorbed’ by an invariant nonlinear mapping from $\mathbf{z}$ to $\mathbf{x}$, breaking the transitivity and resulting in identifiable causal representations. Specifically, we explore latent causal generative models where the o...
As discussed in Section 3.2, the key factor that impedes identifiable causal representations is the transitivity in the latent space. Note that the transitivity arises because the causal influences among the latent causal variables may be ‘absorbed’ by the nonlinear mapping from the latent variables $\mathbf{z}$ to the obs...
One may wonder which of the two postulates is responsible for conditional entropy becoming negative in the quantum world. Interestingly, we identify that it is the extensivity postulate that does so. To arrive at this conclusion, we provide an example of a non-negative measure of conditional entropy that satisfies the ...
As additional findings, we show that all plausible quantum conditional entropies cannot be smaller than the quantum conditional min-entropy, thus justifying once and for all the name of the latter quantity as the smallest plausible quantum conditional entropy. We also establish a logical equivalence between the non-neg...
The following statement is a direct consequence of Theorem 1, indicating that every plausible conditional entropy being non-negative is equivalent to the reduction criterion [24], well known in entanglement theory. Thus, Corollary 1 provides a direct link between every conditional entropy (including the conditional min...
Although all plausible conditional entropies are non-negative on all separable states, this does not mean that the conditional entropies are only non-negative on separable states. In fact, some entangled states have non-negative conditional entropy on all choices of conditional entropy functions. In light of this, Coro...
This quantity was previously given the name conditional min-entropy because it is known to be the least among all Rényi conditional entropies [15, 17, 18]. As part of our main result, we strengthen this observation by proving that all plausible quantum conditional entropies are not smaller than the conditional min-entr...
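For background, the conditional min-entropy referenced here admits the standard semidefinite-program characterization below (this formula is standard in the literature, stated here as context rather than as a result of this work):

```latex
H_{\min}(A|B)_{\rho}
  \;=\;
  -\log \inf\Bigl\{ \operatorname{Tr}\,\sigma_{B}
  \;:\; \sigma_{B}\ge 0,\;
  \rho_{AB}\le \mathbb{1}_{A}\otimes\sigma_{B} \Bigr\}.
```

The claim above is then that every plausible conditional entropy is lower-bounded by this quantity.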
Applying non-parametric Chi-Square test and parametric Gaussian Mixture Model approaches, [19] proposes mobile network outage prediction from logs.
Anomaly detection based on a decentralized online clustering algorithm, applied to resource-usage logs of supercomputer clusters, has been explored in [21]. Disk failure prediction in data centers using different ML techniques, such as online random forests [22] and autoencoders [23], has also been proposed. A predictive learning mec...
For high-performance computing (HPC), [9] presents a long short-term memory (LSTM) based recurrent neural network (RNN) making use of log files to predict lead time to failures. An LSTM-based solution for mission-critical information systems analyzing logs has been presented in [10]. [11] presents a mechanism to predic...
System logs contain a wealth of information. Several research proposals have been put forward over decades to mine logs and predict system failures [1]. Detecting anomalies in systems by applying deep learning to logs has been proposed [2]. Identifying security vulnerabilities in systems through log analysis has also been explor...
An ML mechanism to predict job and task failures in multiple large-scale production systems from system logs has been described in [20].
(Chundawat et al., 2022b) proposed error-minimizing noise to generate an approximate version of $D_{r}$ such that the impaired model could be trained to mitigate the performance degradation.
As the Fisher-based method aims to approximate the model without the deleted data, there can be no guarantee that all the influence of the deleted data has been removed. Although injecting noise can help mitigate information leaks, the model’s performance may be affected by the noise (Cong and Mahdavi, 2022a).
et al., 2022). However, one has to rely on a customized learning algorithm that optimizes a perturbed version of the regularized empirical risk, where the added noise is drawn from a standard normal distribution. This normalized noise allows conventional convex optimization techniques to solve the learning problem with...
Unfortunately, the above error max-min approach yields poor unlearning outcomes, as the generated noise is somewhat ad hoc. Hence, inspired by (Micaelli
Tarun et al. (Tarun et al., 2021) proposed an unlearning method for class removal based on data augmentation. The basic concept is to introduce noise into the model such that the classification error is maximized for the target class(es). The model is updated by training on this noise without the need to access any sam...
We present two proofs of the theorem, both relying on the preliminaries in Section 2. The first proof is given in Section 3.
Algorithm 1 shows the algorithm of [9] for the factorization of a generic motion polynomial $M$. Let us explain the notation there. If $F\in\mathbb{R}[t]$ is an irreducible quadratic factor of the norm polynomial $\nu(M)$, th...
The necessity of the factorization condition is Proposition 7, and the sufficiency is Proposition 8 there.
It is easy to give examples of “non-generic” motion polynomials that admit infinitely many or no factorization. In [22], the authors showed that for any bounded motion polynomial M𝑀Mitalic_M (cf. Definition 2 below; the name refers to the bounded trajectories of the underlying rational motion), there exists a real pol...
For a generic motion polynomial, the factorization exists and there is a simple algorithm to construct it: