| context (string, 250–5.57k chars) | A (string, 250–4.17k chars) | B (string, 250–3.69k chars) | C (string, 250–8.2k chars) | D (string, 250–4.12k chars) | label (4 classes) |
|---|---|---|---|---|---|
It explores multi-discriminator extensions of GANs with diverse versions of the aggregated prediction of discriminators—from a harsh trainer to a lenient teacher with softened criteria.
Meanwhile, its ensembling method differs significantly from our approach. | In practice, specializing discriminators to subsets of the training data outperforms training discriminators independently on the whole dataset (as in GMAN), as well as other discriminator assignment strategies such as minimum-score selection (the opposite of MCL-GAN) and random selection.
Also, the performa... | Table 1 summarizes the precision and recall scores of our methods compared to other models with three different GAN objectives, when the number of discriminators is set to 5 or 10 ($M = 5$ or $10$) and the number of experts is 1 ($k = 1$).
MCL-GAN achieves outstandin... | To achieve these goals, we employ a Multiple Choice Learning (MCL) [8] framework to learn multiple discriminators and update the generator via a set of expert discriminators, where each discriminator is associated with a subset of the true and generated examples.
Our approach, based on a single generator and multiple d... | Encouraged by this benefit, [16, 17, 18, 19] apply MCL or its variations [20, 21] to the production of diverse and accurate outputs in several applications.
For instance, Mun et al. [17] propose an MCL-KD framework to come up with a visual question answering (VQA) system based on multiple models that are specialized in... | D |
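As a rough illustration of the assignment rule sketched above (a minimal sketch, not MCL-GAN's actual training loop; the loss values, shapes, and the `mcl_assign` helper are made up for illustration), the multiple-choice "oracle" loss backpropagates each sample only through its best-scoring discriminator(s):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-sample discriminator losses: M = 5 discriminators, batch of 8.
M, batch = 5, 8
losses = rng.random((M, batch))

def mcl_assign(losses, k=1):
    """Multiple Choice Learning assignment: for each sample, pick the k
    discriminators ("experts") with the lowest loss; only those experts
    receive a gradient for that sample (the "oracle" loss)."""
    order = np.argsort(losses, axis=0)            # sort discriminators per sample
    experts = order[:k, :]                        # (k, batch) indices of best experts
    oracle_loss = np.take_along_axis(losses, experts, axis=0).sum(axis=0).mean()
    return experts, oracle_loss

experts, oracle = mcl_assign(losses, k=1)
assert experts.shape == (1, batch)
# With k = 1 the oracle loss can never exceed the best single discriminator's mean loss.
assert oracle <= losses.mean(axis=1).min() + 1e-12
```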
Due to the breakthroughs of deep learning (DL) in various fields, intelligent communications have been investigated and are considered a cutting-edge technique for solving the bottlenecks of traditional communication systems[1]. Particularly, DL-enabled intelligent communications have achieved lots of successe... | In this article, we have investigated a DL-enabled semantic communication system for speech recognition, named DeepSC-SR, which aims to restore the text transcription by utilizing text-related semantic features. Particularly, we jointly design the semantic and channel coding to learn and extract the features and mi... | Communication systems utilizing DL techniques are typically designed to transmit digital bit sequences and are optimized by minimizing the bit-error rate (BER) or symbol-error rate (SER), which achieves the first-level communications according to the categorization by Shannon and Weaver[6]. Inspired by potentially high... |
Due to the breakthroughs of deep learning (DL) in various fields, intelligent communications have been investigated and are considered a cutting-edge technique for solving the bottlenecks of traditional communication systems[1]. Particularly, DL-enabled intelligent communications have achieved lots of successe... | Inspired by the end-to-end (E2E) communication systems developed to address the challenges in traditional block-wise communication systems[9, 10], different types of sources have been considered for E2E semantic communication systems. Particularly, initial research on semantic communication systems for text information... | B
$\mathcal{L}_{seg\_basic}$
$\mathcal{L}_{SR}$ |
TABLE IV: We compare the segmentation predictions in mIoU(%) from different decoder branches for the two stages of training. The basic branch is the one we use for inference. Cross-branch and intra-branch are the decoder outputs produced from CSFR and ISFR propagated features. |
We also compare the results between the two settings. The outputs from all three branches in CSFR-ISFR perform better than those in ISFR-CSFR, which also supports our argument in section III-D. From the results, if the ISFR module is trained first, the network can overfit within samples and not generalize across sampl... | Performance of different branches: Table IV compares the segmentation performance of different decoder branches in the two-stage settings. In both settings, the cross branches produce the poorest segmentation results because the features of this branch are propagated from the other sample. However, the cross-branch can still pr...
Two stage training versus joint training: Table I compares one-stage training with two-stage training performances trained with 10% and 1% labels. For one-stage training, we perform experiments with only the CSFR module or ISFR module, each of the modules produces performance gain over the baseline method for both 10%... | A |
Figure 5: Qualitative results of our method for multi-class 3D object detection. We use orange boxes for cars, purple boxes for pedestrians, and green boxes for cyclists. All illustrated images are from the KITTI test set. Zoom in on the image for more details. |
Figure 6: Qualitative results of our method for Bird’s-Eye-View. We use black boxes for ground-truth, red boxes for baseline results, and blue boxes for our results. All the illustrated images are from the KITTI val set. Zoom in on the circles for more detailed comparison. | Qualitative results of our method for Bird’s-Eye-View. We use black boxes for ground-truth, red boxes for baseline results, and blue boxes for our results. All the illustrated images are from the KITTI val set. Zoom in on the circles for more detailed comparison.
|
Figure 5: Qualitative results of our method for multi-class 3D object detection. We use orange boxes for cars, purple boxes for pedestrians, and green boxes for cyclists. All illustrated images are from the KITTI test set. Zoom in on the image for more details. | Table 3:
Monocular 3D object detection results on the KITTI val set for the car category with the evaluation metric of $\mathrm{AP}_{40}$. The results of the previous works are from [9]. Our approach significantly outperforms the previous state-of-the-arts on... | A
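For context on the metric named in the caption above, here is a hedged sketch of how an $\mathrm{AP}_{40}$-style score can be computed: the mean of interpolated precision at 40 equally spaced recall thresholds, as used in KITTI evaluation. The `ap_40` helper and the toy precision-recall values are illustrative assumptions, not the benchmark's official implementation.

```python
import numpy as np

def ap_40(recalls, precisions):
    """AP_40 sketch: average the interpolated precision at the 40 recall
    thresholds 1/40, 2/40, ..., 1.0. `recalls` must be sorted ascending;
    precision is interpolated as the max precision at any recall >= t."""
    recalls = np.asarray(recalls)
    precisions = np.asarray(precisions)
    ap = 0.0
    for t in np.arange(1, 41) / 40.0:
        mask = recalls >= t
        p = precisions[mask].max() if mask.any() else 0.0
        ap += p / 40.0
    return ap

# Toy PR curves: a perfect detector scores 1.0; flat precision 0.5 scores 0.5.
assert abs(ap_40([0.5, 1.0], [1.0, 1.0]) - 1.0) < 1e-9
assert abs(ap_40([1.0], [0.5]) - 0.5) < 1e-9
```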
On the dataset ICDAR2015, as shown in Table I, our method has also achieved a comparable SOTA result of an F-measure of 89.5%, outperforming some high-performing top-down methods such as FCENet [20] and ContourNet [11] by more than 2.5 percentage points.
| Our algorithm is implemented using PyTorch 1.7. The VGG16 is pre-trained with ImageNet, with FPN adopted for multi-scale feature extraction. We conducted our experiments on an RTX3090 GPU with 24GB memory.
All images used for training and testing are of a single scale. | We argue that TextFuseNet adopted an instance segmentation strategy, which introduced classification (recognition) to enhance detection results.
Also, the approaches of [9, 8, 10, 11] have adopted a multi-scale training/testing strategy, which is a widely-known tip used on ICDAR2015 that can boost the detection accurac... | Moreover, thanks to our proposed FPNS strategy, our method has achieved the highest recall rate of 83.8%. This shows the effectiveness of our idea of depicting the streamline of the texts, which helps to retrieve some misdetected texts.
As our approach focuses on detecting arbitrary-shape text, i... | Table I shows quantitative comparisons of the detection results with the state-of-the-art approaches on all mainstream benchmark datasets, i.e., CTW1500, Total-Text, ICDAR2015, MSRA-TD500 and MLT2017, respectively.
The “-” in the table indicates that the comparative method did not report results on the dataset. | B |
(one in $U_a^W(\mathcal{V},\mathcal{E})$, see Definition 2)
that joins the vertices $\gamma$ and $\lambda$ in the five cases | [garbled equation; recoverable fragment: $(\,\cdot\,\|_d\,\cdot \mid W) = (\mathcal{A}^W)$ ... $(\forall\gamma,\lambda\in\mathcal{V})$] |
[garbled equation; recoverable fragment: $\gamma\,\|_d\,\lambda \mid W \iff (\gamma\neq\lambda)\wedge(D_{\ldots})$ $(\forall\gamma,\lambda\in\mathcal{V})$] | [garbled equation; recoverable fragment: $\neg(\gamma\,\|_d\,\lambda \mid W) \implies \gamma\,\mathcal{A}^W_{*}\,\lambda$] |
[garbled equation; recoverable fragment: $\gamma\,\mathcal{A}^W_{*}\,\lambda \implies \neg(\gamma\,\|_d\,\lambda \mid W)$] | C |
Assume that all IP addresses are logically divided into $q$ subsets according to the value of the first part of an individual IP address. Further assume there are $q$ computers for parallel computation, where the statistics collection task of each subset can be performed by an individual computer. A minimu... |
IP Mapping. All IP records are partitioned into $q$ subsets according to the first part of each IP address. The statistics of the IP records in each subset are mapped into an array, whose memory is pre-allocated on a computer according to the last three parts of each IP address. The first $k$ most fr... |
In this section, we evaluate the performance of the proposed methods (https://github.com/chenjie20/IPStatistics) on three synthetic datasets that contain 5 million, 10 million, and 50 million randomly generated IP records. Each individual IP address contains one or more IP records. The average number of IP records... | In this paper, we present two efficient algorithms for collecting the statistics of large-scale IP address data. We can obtain the frequently occurring IP addresses from the statistics, which can be regarded as a pre-processing step of user behavior analysis in network traffic management. Because of the increasing volu... | Parameter $k$ was set to 10 or 100, and we repeated each experiment 10 times. The average computational costs and standard deviations are reported in Table 4, and the mean memory use and standard deviations are given in Table 4. The results show that TLMB consistently outperformed all the other methods in term... | B
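The partition-and-count idea in this row can be sketched as follows. This is a toy illustration, not the paper's TLMB algorithm: the `top_k_ips` helper and the modular hash on the first octet are assumptions standing in for the paper's partitioning by first-part value ranges and pre-allocated arrays.

```python
from collections import Counter

def top_k_ips(records, q=4, k=3):
    """Sketch: split IP records into q subsets by the first octet (so each
    subset could be handled by its own machine), count occurrences within
    each subset independently, then merge and keep the k most frequent."""
    subsets = [[] for _ in range(q)]
    for ip in records:
        first_octet = int(ip.split(".", 1)[0])
        subsets[first_octet % q].append(ip)   # toy hash; the paper maps octet ranges
    merged = Counter()
    for subset in subsets:
        merged.update(Counter(subset))        # per-subset counting is parallelizable
    return merged.most_common(k)

records = ["10.0.0.1"] * 5 + ["192.168.1.7"] * 3 + ["8.8.8.8"] * 2 + ["1.2.3.4"]
assert top_k_ips(records, q=4, k=2) == [("10.0.0.1", 5), ("192.168.1.7", 3)]
```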
In this work, we only assume that $A_1$ is invertible and the global system matrix $\mathcal{A}$ is invertible. Many special cases can be cast into the
above forms of twofold saddle point systems. For example, (a) $A_2=0$... | The outline of the remainder of this paper is as follows. In section 2, we briefly recall the classic saddle point problem and its Schur complement, and introduce the twofold saddle point problem and the form of Schur complement, we then construct and analyze the block-triangular and block-diagonal preconditioners base... | In this study, we explore two methodologies for designing preconditioners tailored for 3-by-3 block systems exhibiting block tridiagonal form, as well as their extensions to $n$-tuple scenarios. One approach centers around the nested (or recursive) Schur complement [37]. Unlike the approach presented in ... | Commencing with a twofold saddle point problem, we generalize our theory to $n$-tuple block tridiagonal saddle point problems. Our study demonstrates that judiciously selecting signs in front of Schur complements in preconditioners results in a positively stable preconditioned system [16]. By using the Routh–H... | The above 3-by-3 block linear problems (1) and (2) can be naturally extended to the $n$-tuple cases. For example, when the system matrix in (1) is extended to the $n$-tuple case, it is the block tridiagonal systems discussed in [37]. When
the system matrix in (2) is extended to the $n$-... | D
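The nested Schur complement idea mentioned in this row can be illustrated as follows. This is a hedged sketch only: the particular block pattern, the names $A_1$, $B_1$, $B_2$, and the sign conventions are assumptions for illustration, not the exact notation of the excerpted paper.

```latex
% Illustrative twofold saddle point matrix and nested Schur complements
% (block layout and signs are assumed for illustration only).
\mathcal{A} =
\begin{pmatrix}
A_1 & B_1^{T} & 0 \\
B_1 & 0       & B_2^{T} \\
0   & B_2     & 0
\end{pmatrix},
\qquad
S_1 = -B_1 A_1^{-1} B_1^{T},
\qquad
S_2 = -B_2 S_1^{-1} B_2^{T}.
```

A block-diagonal preconditioner could then take the form $\mathcal{P} = \operatorname{diag}(A_1,\, -S_1,\, S_2)$, where the signs placed in front of the Schur complements are exactly the "judicious sign choices" the row refers to.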
TDCD works by performing a non-trivial combination of parallel coordinate descent on the top tier between silos and distributed stochastic gradient descent, with multiple local steps per communication round, in the bottom tier of clients inside each silo.
More specifically, each hub maintains a block of the trainable p... | However, in cases when the labels are sensitive and sharing the labels for a sample ID across silos is not feasible, the label information for a sample ID may only be present in a client in one silo. In this case, we could modify our algorithm in the following way, similar to (Liu et al., 2020a): the clients in all sil... | To execute the local iterations, clients need embeddings from the data from clients in other silos. The hubs orchestrate this information exchange every time they update their parameter blocks. By waiting for several training iterations to exchange this information, the communication cost of the algorithm is reduced.
O... | of federated learning is horizontal federated learning. In horizontal federated learning, the clients’ datasets share the same set of features, but each client holds only a subset of the sample space, i.e., the data is horizontally partitioned among clients (McMahan et al., 2017; Konečný et al., 2016; Kairouz et al., ... |
We have introduced TDCD, a communication efficient decentralized algorithm for a multi-tier network model with both horizontally and vertically partitioned data. We have provided a theoretical analysis of the algorithm convergence and its dependence on the number of vertical partitions and the number of local iteratio... | B |
We can further extend the concept of T-eigenvalues into generalized T-eigenvalues, similar to the case of generalized matrix eigenvalues. Let $\mathcal{B}$ be another tensor with the same size as $\mathcal{A}$. Under the same conditions as defined in Definition 7, if the following equation h... | Assuming $\mu$ is a T-eigenvalue of $\mathcal{A}+\delta\mathcal{A}$, with $\delta$ being a small number and indicating a small perturbation on the original tensor $\mathcal{A}$. Then, considering the spectral norm or Frobenius norm cases,... | We can further extend the concept of T-eigenvalues into generalized T-eigenvalues, similar to the case of generalized matrix eigenvalues. Let $\mathcal{B}$ be another tensor with the same size as $\mathcal{A}$. Under the same conditions as defined in Definition 7, if the following equation h... |
Within the framework of tensor-tensor multiplication (3) proposed and investigated by Kilmer and Martin [Kilmer2011], T-eigenvalues and T-eigenvectors have garnered significant attention from researchers. They offer a novel perspective to characterize the properties of the widely employed tensor-tensor multiplication (... |
then $\lambda$ is referred to as a generalized T-eigenvalue of $\mathcal{A}$ with respect to $\mathcal{B}$. This generalization encompasses the cases presented in prior works [braman2010thirdorder; Kilmer2013SIAM; Jin2020; Miao2020T; zheng2020t]. | D
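The defining equation is truncated in the excerpt; by analogy with the matrix generalized eigenproblem $Ax = \lambda Bx$, it plausibly takes the following form (the eigenvector symbol $\mathcal{X}$ and the t-product $*$ are assumptions based on the surrounding text, not copied from the source):

```latex
% Generalized T-eigenvalue, sketched by analogy with A x = \lambda B x:
\mathcal{A} * \mathcal{X} = \lambda \,(\mathcal{B} * \mathcal{X}),
\qquad \mathcal{X} \neq \mathcal{O},
```

where $*$ denotes the tensor-tensor (t-)product; taking $\mathcal{B}$ to be the identity tensor recovers the ordinary T-eigenvalue.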
To deal with this problem, a number of multi-stage methods are proposed to explicitly incorporate structure modeling, which hallucinate structures of missing regions in the first stage and use them to guide pixel generation in the second stage. For instance, EdgeConnect [18] encodes such structures by edges, while [20... | More recently, a few attempts mix the modeling processes of structures and textures. PRVS (Progressive Reconstruction of Visual Structure) [10] and MED (Mutual Encoder-Decoder) [14] are the representatives, and they generally exploit a shared generator for both textures and structures. Despite some performance gains re... | a number of multi-stage methods that serially incorporate additional structural priors are proposed, producing more impressive results. EdgeConnect [18] extracts image structures by edges, based on which the holes are filled. Xiong et al. [28] show a similar model while it employs foreground object contours as structur... | Figure 5 compares our results with the ones of the representative methods including the current state-of-the-arts on the three benchmarks. It can be seen, as a classical patch-based method, PatchMatch [2] fails in handling large holes. PConv [13] is suitable for irregular corruptions, but obvious artifacts can be obser... | The generator is a two-stream architecture, modeled by a U-Net variant, as shown in Figure 2 (a). At the encoding stage, the corrupted image and its corresponding edge map are individually projected into the latent space, where the left branch focuses on texture features and the right branch targets structure features.... | A |
In digital communication, it is common that messages transmitted through a public channel may be distorted by the channel noise. The theory of error-correcting codes is the study of mechanisms to cope with this problem. This is an important research area with many applications in modern life. For example, error-correc... | In a binary erasure channel (BEC), a binary symbol is either received correctly or totally erased with probability $\varepsilon$. The concept of BEC was first introduced by Elias in 1955 [InfThe]. Together with the binary symmetric channel (BSC), they are frequently used in coding theory and information theory ...
In digital communication, it is common that messages transmitted through a public channel may be distorted by the channel noise. The theory of error-correcting codes is the study of mechanisms to cope with this problem. This is an important research area with many applications in modern life. For example, error-correc... |
In a $q$-ary erasure channel, the channel input alphabet is a finite field $\mathbb{F}_q$ of order $q$, and during transmission, each symbol $x\in\mathbb{F}_q$... |
The ensemble $\mathcal{R}_{m,n}$ has been studied for a long time and many strong results have been obtained. For example, in the classical work of Gallager [Gallager2], an upper bound of the average number of codewords of a given we... | A
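The erasure-channel model described in this row can be simulated in a few lines. A minimal sketch, assuming a binary alphabet and an erasure marker `None`; the function name and seed are illustrative.

```python
import random

def erasure_channel(symbols, eps, rng=None):
    """Toy erasure channel: each transmitted symbol is received intact with
    probability 1 - eps and replaced by an erasure marker (None) with
    probability eps, independently per symbol."""
    rng = rng or random.Random(0)
    return [None if rng.random() < eps else s for s in symbols]

received = erasure_channel([0, 1, 1, 0, 1, 0, 0, 1] * 250, eps=0.3)
rate = sum(r is None for r in received) / len(received)
assert 0.2 < rate < 0.4                          # empirical erasure rate near eps
assert all(r in (None, 0, 1) for r in received)  # symbols are never flipped, only erased
```

The same sketch extends to the $q$-ary case by drawing symbols from a size-$q$ alphabet instead of bits; the erasure mechanism is unchanged.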
The planner is used to search over the graph induced by the subgoal generator and is guided by the value function. The role of the low-level policy is to prune the search tree as well as to transition between subgoals.
In this paper, we assume that the generator predicts subgoals that are $k$ steps ahead (towar... |
Subgoal Search consists of four components: planner, subgoal generator, low-level policy, and a value function. The planner, coupled with a value function, is used to search over the graph induced by the subgoal generator. Namely, for each selected subgoal, the generator allows for sampling the candidates for the next... | MCTS-kSubS and BF-kSubS differ in the choice of the search engine: the former uses Monte-Carlo Tree Search (MCTS), while the latter is backed by Best-First Search (BestFS).
We provide two sets of implementations for the generator, the low-level policy, and the value functions. The first one uses transformer architectur... | The subgoals are trained to predict states $k$ steps ahead of the current one. Higher $k$ should make planning easier as the search graph is smaller. However, as $k$ increases, the quality of the generator may drop, and thus the overall effect is uncertain. Similarly, the task of the low-leve... | The planner is used to search over the graph induced by the subgoal generator and is guided by the value function. The role of the low-level policy is to prune the search tree as well as to transition between subgoals.
In this paper, we assume that the generator predicts subgoals that are $k$ steps ahead (towar... | B
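The planner-plus-value-function loop described in this row can be sketched as a best-first search over the subgoal graph (cf. the BF-kSubS variant). Everything here is a toy stand-in: the integer state space, the `generator` and `value` callables, and the omission of the low-level policy are all simplifying assumptions.

```python
import heapq

def best_first_subgoal_search(start, goal, generator, value, max_nodes=1000):
    """Best-first search over the graph induced by a subgoal generator:
    always expand the state with the highest value estimate, sampling
    candidate subgoals from the generator."""
    frontier = [(-value(start), start)]          # max-heap via negated values
    parent = {start: None}
    while frontier and len(parent) < max_nodes:
        _, state = heapq.heappop(frontier)
        if state == goal:                        # reconstruct the subgoal path
            path = []
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]
        for nxt in generator(state):
            if nxt not in parent:
                parent[nxt] = state
                heapq.heappush(frontier, (-value(nxt), nxt))
    return None

# Toy domain: states are integers, subgoals jump k = 2 steps in either direction.
k = 2
gen = lambda s: [s + k, s - k]
val = lambda s: -abs(10 - s)                     # higher value = closer to goal 10
assert best_first_subgoal_search(0, 10, gen, val) == [0, 2, 4, 6, 8, 10]
```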
Meanwhile, in order to relieve character substitution problems and enhance the robustness of NER models, researchers have also paid attention to utilizing glyph and phonetic features of Chinese characters. Jiang Yang and Hongman Wang suggested using the ‘Four-corner’ code, a radical-based encoding method for Chinese ch... | In this paper, we propose a lightweight method, Multi-feature Fusion Embedding for Chinese Named Entity Recognition (MFE-NER), which fuses extra glyph and phonetic features to detect possible substitution forms of named entities in Chinese. On top of using pre-trained models to represent the semantic feature, we choose... | In this section, we first show why our method is lightweight and explain the improvement that MFE-NER achieved in recognizing substitution forms of named entities. Then, we will analyse the overall performance enhancement by applying MFE-NER. Here, the embeddings without glyph and phonetic features are named ‘pure’ emb... | Nowadays, the informal language environment created by social media has deeply changed the way that people express their thoughts. Using character substitution to generate new named entities becomes a common linguistic phenomenon which is a big challenge for NER. In this paper, we propose a lightweight method fusing th... |
our MFE-NER is a lightweight Named Entity Recognition method fusing the glyph and phonetic feature embeddings for Chinese character substitution, which is complementary to pre-trained language models in the representation of Chinese characters. As shown in Figure 2, MFE-NER introduces an extra module, fusing glyph emb... | D |
In addition to field-of-view, we also investigate the eyebox that is produced with neural étendue expansion. By initializing the learning process with a uniform random expander we bias the optimized solution towards expanders that distribute energy throughout the eyebox, in contrast to quadratic phase profiles[28] t... |
To characterize the hologram reconstruction with the proposed neural étendue expander we simulate a Fourier holographic setup that has been augmented with a neural étendue expander. Fig. 3a reports qualitative examples of trichromatic and monochromatic reconstructions achieved with neural étendue expanders, binary ran... | The uniform random expander is constructed by assigning each pixel a phase that is uniformly randomly chosen within $[0,2\pi]$. To ensure at least $2\pi$ phase is available for all wavelengths, the $[0,2\pi]$ phase range is defined for 660 nm... | The experimental findings on the display prototype verify that conventional non-étendue expanded holography can produce high-fidelity content but at the cost of a small FOV. Increasing the étendue via a binary random expander will increase the FOV but at the cost of low image fidelity, even at the design wavelength of ... | Finally, we also investigate 3D étendue expanded holograms. We find that neural étendue expansion also enables higher-fidelity étendue expanded 3D color holograms. We note that existing methods on étendue expanded holography have focused on monochromatic 3D holograms[7, 28, 29]. Photon sieves[21] only achieve 3D color ... | D
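The uniform random initializer described in this row reduces to a two-line construction. A minimal sketch: the array size is arbitrary, and the per-wavelength scaling of the $2\pi$ range (defined at 660 nm in the excerpt) is omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Every expander pixel gets an independent phase drawn uniformly from [0, 2*pi).
height, width = 256, 256
phase = rng.uniform(0.0, 2.0 * np.pi, size=(height, width))
expander = np.exp(1j * phase)                    # unit-modulus, phase-only element

assert expander.shape == (256, 256)
assert np.allclose(np.abs(expander), 1.0)        # phase-only: no amplitude modulation
assert 0.0 <= phase.min() and phase.max() < 2.0 * np.pi
```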
The parallel architecture shares the bulk of the model among multiple tasks while each task has its own task-specific output layer.
The hierarchical architecture models the hierarchical relationships between tasks. Such architecture can hierarchically combine features from different tasks, take the output of one task a... | The modular architecture decomposes the whole model into shared components and task-specific components that learn task-invariant and task-specific features, respectively.
Different from the above three architectures, the generative adversarial architecture borrows the idea of the generative adversarial network (Goodfe... | An additional benefit of generative adversarial architectures is that unlabeled data can be fully utilized.
Wang et al. (2020c) add an auxiliary generative model that reconstructs documents from document representations learned by the primary model and improves the quality of document representations by training ...
The hierarchical architecture models the hierarchical relationships between tasks. Such architecture can hierarchically combine features from different tasks, take the output of one task a... | The idea behind the modular MTL architecture is simple: breaking an MTL model into shared modules and task-specific modules. The shared modules learn shared features from multiple tasks. Since the shared modules can learn from many tasks, they can be sufficiently trained and can generalize better, which is particularly... | A |
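The parallel architecture described in this row ("shared bulk plus task-specific output layers") can be sketched in a few lines of numpy. All dimensions, weight initializations, and the `forward` helper are illustrative assumptions, not any surveyed model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal parallel multi-task model: one shared trunk, one linear head per task.
d_in, d_hidden, n_tasks = 16, 32, 3
W_shared = rng.normal(size=(d_in, d_hidden))
heads = [rng.normal(size=(d_hidden, 1)) for _ in range(n_tasks)]

def forward(x):
    """The shared trunk feeds every task head; only the heads differ."""
    h = np.tanh(x @ W_shared)                # shared representation for all tasks
    return [h @ W_t for W_t in heads]        # one task-specific output per task

x = rng.normal(size=(4, d_in))               # batch of 4 examples
outputs = forward(x)
assert len(outputs) == n_tasks
assert all(o.shape == (4, 1) for o in outputs)
```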
The IEEE Template Selector will always have the most up-to-date versions of the LaTeX and MSWord templates. Please see: https://template-selector.ieee.org/ and follow the steps to find the correct template for your intended publication. Many publications use the IEEETran LaTeX templates; however, some publications have... | There are other options available for each of these when submitting for peer review or other special requirements. IEEE recommends composing your article in the base 2-column format to make sure all your equations, tables and graphics will fit the final 2-column format. Please refer to the document “IEEEtran_HOWTO.pdf... | The templates are intended to approximate the final look and page length of the articles/papers. Therefore, they are NOT intended to be the final produced work that is displayed in print or on IEEEXplore®. They will help to give the authors an approximation of the number of pages that will be in the final version. The ... | For Transactions and Journals papers, this is not necessary to use at the submission stage of your paper. The IEEE production process will add the appropriate copyright line. If you are writing a conference paper, please see the “IEEEtran_HOWTO.pdf” for specific information on how to code “Publication ID Marks”.
| Make sure that your equations are numbered sequentially and that no equation numbers are missing or duplicated. Avoid hyphens and periods in your equation numbering. Stay with IEEE style, i.e., (1), (2), (3), or for sub-equations (1a), (1b). For equations in the appendix, use (A1), (A2), etc.
| A |
For each $j\in[m]$ we do the following. Suppose the $j$-th edge of $G$
has endpoints $v_{i_1},v_{i_2}$... | $y_{(j,1)}^{1},\ldots,y_{(j,k)}^{1}$... | all $2k$ vertices of the cliques $Y_j^{1},Y_j^{2}$... | exists a $Y_j$ such that each vertex of $Y_j^{1}$ has the same neighborhood in
$S$ as $W_i$... | of vertices as the set $Y_j=Y_j^{1}\cup Y_j^{2}$... | A
[garbled equation; recoverable fragment: $\sum_{i=1}^{n}\sum_{j=1,\,j\neq i}^{n}A_{ij}A_{\ldots}$] |
The information intervention has a positive effect both immediately and differentially. In the treatment condition, reciprocity displays a large positive level effect and switches from following a downward to an upward trend. These trends are apparent in Figure 1(e) and supported by the coefficient estimates in column... |
Figure 4 shows the effect of a one-standard deviation increase in the trust component, applied uniformly across the population. As the figure shows, this intervention improves outcomes along all measures, including contributions, links, reciprocity, and centralization, in the treatment condition. In the baseline, howe... | Contribution decisions and link decisions in isolation may not capture the true dynamic of behavior. This is because both variables together determine the cost of sharing—the marginal return on a player’s contribution in this purely congestive game decreases as they share with more other players. For each observed acti... |
The estimation in column (3) of Table 1 shows the effects of the treatment on subjects’ abilities to coordinate on efficient structure. By efficient structure, we refer to a network topology that satisfies the conditions required for efficiency by Proposition 2, without necessarily satisfying the requirement of full c... | A |
Gating Mechanism: Skip connection in the above residual learning tends to make the channel dimension of the output features extremely high. If such a high-dimension channel remains the same in the following layers, the computational cost will be terribly large and therefore will affect the reconstruction efficiency an... |
Gating Mechanism: Skip connection in the above residual learning tends to make the channel dimension of the output features extremely high. If such a high-dimension channel remains the same in the following layers, the computational cost will be terribly large and therefore will affect the reconstruction efficiency an... | To solve this issue, researchers recommend using the gating mechanism to adaptively extract and learn more efficient information. Most of the time, a $1\times 1$ convolutional layer is adopted to accomplish the gating mechanism, which can reduce the channel dimension and leave more effective information. In SRD... | et al., 2017b) and CARN (Ahn
et al., 2018b), the gating mechanism is used in both global and local regions. Sometimes, it can be combined with other operations, such as the attention mechanism, to construct a more effective gate module to achieve feature distillation. For instance, Li et al. (Li | Motivated by the dense connection mechanism, Tong et al. (Tong
et al., 2017) proposed an SRDenseNet. SRDenseNet uses not only the layer-level dense connections but also the block-level ones, where the output of each dense block is connected by dense connections. In this way, the low-level features and high-level featur... | B |
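The $1\times 1$-convolution gating described in this row is, per pixel, just a linear map across channels, which is what makes it a cheap way to compress a high-dimensional channel axis. A minimal sketch with made-up shapes and an illustrative random kernel (no learned weights):

```python
import numpy as np

rng = np.random.default_rng(0)

def gate_1x1(features, w):
    """A 1x1 convolution over an NHWC feature map is a per-pixel linear map
    across channels, so it can shrink the channel axis (here 64 -> 16) as a
    gating / feature-distillation step."""
    return features @ w                      # (N, H, W, C_in) @ (C_in, C_out)

x = rng.normal(size=(1, 8, 8, 64))           # e.g. concatenated skip features
w = rng.normal(size=(64, 16)) * 0.1          # stand-in for a learned 1x1 kernel
y = gate_1x1(x, w)
assert y.shape == (1, 8, 8, 16)              # channel dimension reduced 4x
```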
Figure 2: Neural Knitwork architecture consists of 3 shallow networks. The network knits patches for related coordinates by enforcing consistency of predictions and optimizing likelihoods of individual patches. Each patch stack is translated back to a single color by the Reconstructor. |
To perform super-resolution, a Neural Knitwork has to translate the information contained in the patches of the original scale to a domain of patches of finer scale. This can be done by matching the patch distribution across scales [8, 25, 26, 29]. For blind super-resolution, the Neural Knitwork core module is utilized wi... | The core structure of the proposed network is presented in Figure 2. It consists of three small networks: (i) Patch for translating from the original coordinate domain to the patch domain, (ii) the discriminator responsible for assessing patch likelihoods, and (iii) the Reconstructor for mapping the patch domain to indivi... |
Figure 2: Neural Knitwork architecture consists of 3 shallow networks. The network knits patches for related coordinates by enforcing consistency of predictions and optimizing likelihoods of individual patches. Each patch stack is translated back to a single color by the Reconstructor. |
The resulting architecture performs the equivalent operation to a conventional coordinate-based network, since the network ultimately predicts a single pixel value. However, the intermediate patch-based representation of the proposed architecture forces the model to establish the natural relationship between the encoded coord... | B
Under Assumptions 2 and 3, the sequence of actions $\{A_t^{(M)}\}_{t=1}^{\infty}$... | The PG-augmentation scheme can also be used to devise an approximate Information Directed Sampling (IDS) scheme, based on the framework proposed by Russo and Van Roy, (2018). IDS algorithms are randomised policies which construct an action sampling distribution, in each round $t$, based on a trade-off of regre... | The regret results of Section 2 are based on exact sampling from the posterior. The PG-TS algorithm necessarily samples from an approximation of the posterior, to maintain a reasonable computational overhead. Recent work of Phan et al., (2019) has identified conditions under which sampling from an approximate poste... |
To go further than these results - i.e. to establish that the regret guarantees associated with exact TS carry to PG-TS - is likely to be much more complex. The most advanced results on the regret of approximate TS in simple multi-armed bandits, for instance, rely on complex Bayesian non-parametric theory, which, to t... |
The present paper is the first work we are aware of that specifically applies TS to apple tasting, but previous work has considered its use for logistic bandits. For logistic contextual bandits, the implementation of exact TS (i.e. the policy that draws its sample from the exact posterior) is infeasible due to the intract... | C
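For reference, exact Thompson sampling is straightforward whenever the posterior is conjugate. A minimal Beta-Bernoulli sketch of one TS round (this is the textbook scheme, not the PG-TS algorithm, whose logistic posterior is precisely the intractable case):

```python
import random

def thompson_step(successes, failures, true_means, rng=random):
    """One round of exact Thompson sampling for Bernoulli arms.

    successes/failures hold the per-arm Beta posterior counts.
    Draw one sample per arm from Beta(s+1, f+1), play the argmax,
    observe a Bernoulli reward, and update the posterior counts.
    """
    samples = [rng.betavariate(s + 1, f + 1)
               for s, f in zip(successes, failures)]
    arm = max(range(len(samples)), key=samples.__getitem__)
    reward = 1 if rng.random() < true_means[arm] else 0
    successes[arm] += reward
    failures[arm] += 1 - reward
    return arm, reward
```

The `true_means` argument simulates the environment; in a real bandit the reward would come from the observed feedback instead.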
The design choices behind our memory-augmented transformer architecture are motivated by our intended use of external textual knowledge.
Past research on deep learning models tackled the problem of handling external content by introducing external memory blocks the model could interact with in a differentiable way [53... |
In contrast, we do not propose an architectural modification of transformer architecture. Instead, we leverage the general-purpose architecture of memory-augmented neural networks and use transformers as a possible implementation of one of the constituting blocks of such an architecture. Furthermore, our role of memor... | Therefore, they seem an ideal candidate architecture for the integration of natural language explanations.
However, the main purpose of our extension of transformers is not to improve classification performance, but to generate explanations in the form of grounding to elements of a textual knowledge. Accordingly, the k... | The main building block of our architecture is a transformer-based model. In our experiments, we considered BERT [1] and DistilBERT [55], a distilled version of BERT that achieves competitive performance while limiting the overall computational burden. However, our approach is general and is not restricted to these two... | Another frequent use of the memory block is for external knowledge integration. Transformers have been directly applied as advanced input encoding methods in traditional memory-augmented architectures for information retrieval in dialogue systems [25, 26, 27, 28], question-answering [29] and aspect-based sentiment anal... | B |
In this paper, we propose a framework for predictive model checking and comparison, where in addition to usual approaches, we advocate for the specification of concrete (spatio-temporal) properties that the predictions from a model should satisfy. Given trajectories from the Bayesian predictive distribution, the poste... |
In this paper, we propose a framework for predictive model checking and comparison, where in addition to usual approaches, we advocate for the specification of concrete (spatio-temporal) properties that the predictions from a model should satisfy. Given trajectories from the Bayesian predictive distribution, the poste... |
We demonstrate our novel approach with spatio-temporal areal data, where measurements are collected over time at various areal units, and a neighboring matrix allows calculating the distance between the different units. In particular, we consider an urban mobility application, given that urban population density dynam... | We illustrate the approach by building a Bayesian spatio-temporal model for areal crowdedness extracted from aggregated mobile phone data in the city of Milan and by formulating properties that the crowdedness level in the city network should satisfy in order to robustly withstand critical events.
We compare various mo... | In this section, we propose some informal properties that the crowdedness level in a big city should satisfy to robustly withstand critical events.
Let c𝑐citalic_c represent a crowdedness threshold for all the areas of the city. Note, however, that the framework can accommodate for, e.g., area-specific threshold value... | C |
i.e., $\log^{(1)} x = \log x$
and $\log^{(i)} x = \log(\log^{(i-1)} x)$ | a coreset $X_i$ from the current coreset $X_{i-1}$,
to reduce the number of distinct elements in $X_{i-1}$... | A function $K : X \times X \rightarrow \mathbb{R}$ is a kernel function
if the $n \times n$ matrix $M$ such that $M_{ij} = K(x_i, x_j)$... | \Until $\|X_i\|_0$ does not decrease compared to $\|X_{i-1}\|_0$...
3: $X_i \leftarrow \textsc{Importance-Sampling}(X_{i-1}, \epsilon_i)$... | B
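The iterated logarithm defined above can be computed directly from its recursion. A short sketch (the natural-log base is my assumption; the definition works for any fixed base):

```python
import math

def iterated_log(x, i):
    """log^(i) x: the logarithm applied i times, matching the
    recursion log^(i) x = log(log^(i-1) x) with log^(1) x = log x."""
    for _ in range(i):
        x = math.log(x)
    return x
```

Note that `x` must stay large enough that each intermediate value remains positive, which is why bounds involving $\log^{(i)}$ are usually stated only for sufficiently large $x$.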
$U \ast_X V = \{\langle x, w\rangle \mid \exists_{w_1, w_2}.\, w_1 \circ w_2 \leq w,\ \langle x, w_1\rangle \in U,\ \langle x, w_2\rangle \in V\}$ | First of all, we cast Lawvere’s equality to $R$-graded doctrines.
This is quite easy since $R$-graded doctrines are in particular primary linear doctrines, hence one can always consider those that are elementary according to Definition 2.2. |
Recall that elementary doctrines can be understood as those primary doctrines endowed with equality predicates. The following definition introduces those primary linear doctrines that are elementary as a direct linearisation of Definition 2.1 (i.e. replacing $\wedge, \top$ by $\ast, \kappa$). | It shows an adjoint situation between $\mathbf{PD}$ and $\mathbf{ED}$, i.e. the 2-categories of primary doctrines and that of elementary ones, that is, primary doctrines with equality. That adjoint situation is comonadic.
This fact not only reveals the coalgebraic nature of equality, but provid... |
Theorem 22 shows that the notion of quantitative equality given in this paper is coalgebraic, in the sense that Lipschitz doctrines are the coalgebras of a comonad over the category of graded doctrines. This generalizes a known situation that holds in the non-linear case, where elementary doctrines are the coalgebras ... | B |
Efficient top-k similarity search algorithm: We devise ForestSimSearch for the top-k similarity search. ForestSimSearch can handle a top-k query in $O(k)$ time once the precomputation is finished. Furthermore, we use the fast approximate algorithm to compute the diagonal entries of the for... |
In this paper, we propose a novel node similarity metric, namely ForestSim, to quickly and effectively process top-k similarity search on large networks. Different from previous frameworks, ForestSim, based on spanning rooted forests of graphs, adopts the average size of all trees rooted at node $u$ in spanni... |
Figure 5 shows the Average Precision@$K$ of four studied measures on six labeled networks. Note that the difference between ForestSim-EX and ForestSim-AP is marginal though the latter requires less time and space. Moreover, ForestSim achieves comparable performance to RoleSim, the state-of-the-art role similarity... |
Extensive experimental studies : We test the effectiveness of studied role similarity metrics on six labeled networks and estimate their efficiency on 20 real-world networks, including several large networks. For effectiveness, ForestSim achieves comparable performance to RoleSim and better results than StructSim. For... |
Note that for large networks, RoleSim, StructSim, and ForestSim-EX cannot finish the computation due to their high time and memory cost, while ForestSim-AP works well on all these real networks, which shows the significant efficiency advantages of ForestSim-AP. | C |
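An $O(k)$-per-query bound after precomputation typically comes from storing, for each node, its similarity ranking once and slicing it at query time. A generic sketch of that pattern (hypothetical helper names; this is not the paper's actual ForestSimSearch code):

```python
def precompute_rankings(similarity):
    """similarity: dict node -> dict of (other node -> score).
    The sorting cost is paid once, at precomputation time."""
    return {u: sorted(scores, key=scores.get, reverse=True)
            for u, scores in similarity.items()}

def top_k(rankings, u, k):
    """Answer a top-k query by slicing the stored ranking: O(k)."""
    return rankings[u][:k]
```

The trade-off is the usual one: per-node sorted lists cost quadratic space on dense similarity matrices, which is why the approximate variant that avoids materializing all pairwise scores matters on large networks.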
Specifically, we assess the model’s performance when employing fixed static weights $\eta_l$ and $\eta_r$ to create sentiment aggregation windows, as opposed to the DWA.
The experi... | In this section, we delve into a case study to validate the capability of our model in learning local sentiment coherency.
We present a series of examples in Table 6, which showcase instances where LSA excels in identifying aspect sentiment coherency. | Specifically, we assess the model’s performance when employing fixed static weights $\eta_l$ and $\eta_r$ to create sentiment aggregation windows, as opposed to the DWA.
The experi... | In most scenarios, we observe a modest yet notable improvement of approximately 0.2% to 0.5% when DWA is incorporated into our model.
We also present the experimental results for an ablated version of LSA featuring a simplified sentiment aggregation window in Table 10. | When it comes to sentiment classification performance, the results in Table 4 clearly demonstrate the superiority of our models over significant baselines, particularly in the case of the LSAE model.
The experimental results are as expected and show the proficiency of LSA. | C |
$\mathbf{X}$. Suppose that there exist orthogonal projections $\mathbf{L} : \mathbb{R}^{N} \rightarrow \mathcal{P}^{p}(\mathbb{S})$... | gradient estimate of $\bm{\gamma}_{\mathbf{W}}$, where $\bm{\gamma}_{\mathbf{W}}^{0}$ is the initial
guess. The main ... | In Figure 4, we can observe a comparison of performance among different methods for the totchg (total charge) variable: Kriging/BLUP, KNN-Reg, KNN, GLS, and DDL. This comparison is conducted across varying training/validation proportions of the dataset (from (10%/90%) to (90%/10%) of the data). The horizontal axis depic... | The operators $\mathbf{L}$ and $\mathbf{W}$ are constructed from a multilevel decomposition of the location of predictors. This process is somewhat elaborate and the reader is referred to [31] and [32] for all of the details. However, for the exposition in this section it is sufficient to know what the prop... |
Figure 4: Performance comparison for imputation of the total charge variable among Kriging/BLUP, KNN-Reg, KNN, GLS and DDL for different training/validation proportions of the data. On the horizontal axis we have the percentage proportion for the training dataset. The vertical axis corresponds to the rMSE, MAPE and me... | C |
Visualization of QNN extracted features. The MNIST-2 classification result is determined by which feature is larger between the two: feature 1 is the sum of measurement outcomes of qubits 0 and 1; feature 2 is that of qubits 2 and 3. We visualize the two features obtained from experiments on Belem in a 2-D plane as in Figu... | Figure 7. Ablation on different noise injection methods. Left: Without quantization, gate insertion and measurement perturbation perform similarly, both better than rotation angle perturbation. Right: With quantization, gate insertion is better as the perturbation effect can be canceled by quantization.
| Figure 9 shows the performance of only applying noise injection, only applying quantization, and both. Applying either technique individually improves accuracy by 9%. Combining the two techniques delivers better performance with a 17% accuracy gain. This indicates the benefits of synergistically applying three technique... | With different noise factors $T$, the gate insertion and measurement outcome perturbation have similar accuracy, both better than rotation angle perturbation. A possible explanation is that the rotation angle perturbation does not consider non-rotation gates such as X and SX.
The right side further investigate... | the noise-injected training to real QC using techniques such as parameter shift (Crooks, 2019). In this case, the training cost is linearly scaled with qubit number. Post-measurement normalization and quantization are also linearly scalable because they are performed on the measurement outcomes. Gradients obtained with... | B |
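Rotation-angle perturbation, as described, only touches parameterized rotation gates. A toy sketch of that injection step under my own assumptions about the circuit representation (a list of gate/angle pairs; not the paper's actual training code):

```python
import numpy as np

ROTATION_GATES = {"RX", "RY", "RZ"}

def perturb_rotation_angles(circuit, sigma, rng):
    """circuit: list of (gate_name, angle_or_None) pairs.
    Gaussian noise is added only to rotation-gate angles; non-rotation
    gates such as X and SX are untouched, which is exactly the
    limitation of this scheme noted in the text."""
    return [(g, a + rng.normal(0.0, sigma))
            if g in ROTATION_GATES and a is not None else (g, a)
            for g, a in circuit]
```

Gate insertion, by contrast, would add noisy gates after arbitrary gates in the list, which is one plausible reason it models device noise more faithfully.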
The proposed EDA is evaluated on a PC with an Intel Core i7 CPU and an NVIDIA GTX 1080 GPU. Conventional tracking methods are synchronized with the global camera shutter, and thus their speeds are evaluated by a synchronous criterion (e.g., 25 frames per second and above can be considered as real-time). Since... |
In this paper, we propose a novel unifying event data association (EDA) approach to effectively and explicitly handle the essential event data association and event information fusion problem. The proposed EDA performs a model fitting on event data, which can asynchronously associate and fuse the event data over time ... | Event-based methods have achieved promising performance on various tasks gallego2022event . However, the study of the fundamental event data association problem is still challenging and in its infancy. Unlike a traditional camera, an event camera only sparsely emits binary (i.e., On and Off) retinal events at the edges... |
However, gallego2018unifying and other event-based data association methods zhu2017event ; gallego2019focus ; peng2022globally show that the events triggered by the same edge in the scene can be associated with each other using an event trajectory. Furthermore, they also show that the event trajectories, triggered a... |
However, in the current event-based studies, most methods usually handle the fundamental event-based data association problem in implicit ways, which are designed for their specific tasks. As a result, event-based data association has not been effectively solved by the current event-based works. There are relatively f... | A |
As a connected vertex-ordering of $H$ can be obtained in linear time using a standard graph traversal algorithm, and a colouring of $G'$ may be computed in $O(mn)$ time [12, Chapter 5.7], we concl... | Concerning other subclasses of interest, as mentioned in the introduction, the same task can be done in time $O(m+n)$ for chordal graphs using the LexBFS algorithm [21], for Meyniel graphs this can be done in time $O(n^2)$... | Recently, connected greedy edge-colourings (equivalently, connected greedy colourings of line graphs) have been studied in [3], and it was proved that there is no line graph of a bipartite graph that is ugly. (Footnote 4: Moreover, a careful analysis of the proof of [3] gives an algorithm running in time $O(n^4)$... | class of perfect graphs. We also give a simple and constructive proof for comparability graphs (which are perfect). Note that there exist bad graphs in these graph classes; consider for example the fish graph, which is $K_4$-minor-free and comparability; see |
We now prove our main result, that there are no ugly perfect graphs. This generalizes the same fact which was previously proved for Meyniel graphs [22] (a class which contains chordal graphs, HHD-free graphs, Gallai graphs, parity graphs, distance-hereditary graphs…) and line graphs of bipartite graphs [3]. Our proof ... | D |
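A connected vertex ordering is simply a traversal order in which every vertex after the first has an earlier neighbour, and BFS produces one in linear time. A sketch of the connected greedy colouring this induces (generic illustration, not the $O(mn)$ algorithm of [12]):

```python
from collections import deque

def connected_greedy_colouring(adj, start):
    """adj: dict vertex -> set of neighbours of a connected graph.
    BFS yields a connected ordering; the greedy rule then gives each
    vertex the smallest colour unused by its already-coloured
    neighbours."""
    order, seen, queue = [], {start}, deque([start])
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    colour = {}
    for v in order:
        used = {colour[w] for w in adj[v] if w in colour}
        c = 0
        while c in used:
            c += 1
        colour[v] = c
    return colour
```

Whether some connected ordering attains the chromatic number is exactly the question behind "good" versus "ugly" graphs; the sketch only shows the mechanics of one such ordering.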
In Table I, we fine-tune a ResNet-50 pre-trained with various methods on the labeled training set of STL-10. GenURL outperforms other methods under 400-epoch and 800-epoch pre-training, which reflects its fast convergence speeds while maintaining the second-best classification accuracy with longer training. |
In Table I, we fine-tune a ResNet-50 pre-trained with various methods on the labeled training set of STL-10. GenURL outperforms other methods under 400-epoch and 800-epoch pre-training, which reflects its fast convergence speeds while maintaining the second-best classification accuracy with longer training. |
The main goal of unsupervised learning is to learn transferable features. In Table V, we compare the representation quality of unsupervised pre-training on STL-10 by transferring to the classification task. We adopt linear evaluation on CIFAR-10 at 64×64 resolution with 1600-epoch pre-trained ResNet-50 on... | We evaluate the KD tasks based on self-supervised learning on the STL-10 dataset. In this experiment, we adopt MoCo.v2 with ResNet-50 under 1600-epoch pre-training. We choose multiple smaller networks with fewer parameters as the student network: ResNet-18 [70], MobileNet.v2 [86], ShuffleNet.v1 [87]. Similar to the pre-tra... |
We first compare with existing methods in terms of different training epochs on STL-10, as shown in Table I. The proposed GenURL achieves the highest accuracy among all settings. It not only converges faster than other algorithms under 400-epoch pre-training but also gains better performance when trained longer. Then, we ... | B
Afterward, we sort the channels according to their importance (we used L-1 norm for importance estimation [18]).
Then we initialize the super network with these weights and perform super-network training using the same hyper-parameters for twice the number of epochs. For each iteration, 4 random architectures are sampled, and... |
To prevent over-fitting the real validation set, we evaluate the performance of each sub-network on the split validation set. The weights are taken from the super network using indexing. We re-calibrate the batch normalization statistics using 20 batches of data with a batch size 64. | To prevent over-fitting the real validation set, we evaluate the performance of each sub-network on the split validation set. The weights are taken from the super network using indexing. We re-calibrate the batch normalization statistics using 20 batches of data with a batch size 64.
| To prevent over-fitting the real validation set, we evaluate the performance of each sub-network on the split validation set. The weights are taken from the super network using indexing. We re-calibrate the batch normalization statistics using 20 batches of data with a batch size 64.
| To prevent over-fitting the real validation set, we evaluate the performance of each sub-network on the split validation set. The weights are taken from the super network using indexing. We re-calibrate the batch normalization statistics using 20 batches of data with a batch size 64.
| B |
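Re-calibrating batch-norm statistics amounts to re-estimating the running mean and variance from a few forward passes through the sub-network. A numpy sketch of that pooling step, with the calibration budget from the text (20 batches of size 64) as the intended input (illustrative helper, not the authors' code):

```python
import numpy as np

def recalibrate_bn_stats(batches):
    """batches: iterable of (batch_size, channels) activation arrays,
    e.g. 20 batches of size 64 as in the text. Returns the per-channel
    mean and variance pooled over all calibration samples, which would
    replace the sub-network's stale running statistics."""
    data = np.concatenate([np.asarray(b, dtype=np.float64) for b in batches])
    return data.mean(axis=0), data.var(axis=0)
```

This is needed because weights indexed out of a super network were normalized against the super network's activation distribution, not the sub-network's own.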
We use a pre-trained BERT model (https://storage.googleapis.com/bert_models/2019_05_30/wwm_uncased_L-24_H-1024_A-16.zip) to initialize the model weights. For training details, we train our model using the Adam optimizer with a learning rate of 2e-5 and a mini-batch size of 12, and the L2-norm regularization coefficient is set to ... | where $S$ is the matrix of one-hot vectors of sub-word indices in the input sentence,
$W_s$ is the sub-word embedding matrix, $W_p$ is the positional embedding matrix whe... | Besides, we modeled label sequences jointly using a CRF to improve the performance of our method. The model effect was not improved when the learning rate of the CRF was relatively small. To explore the effect of the CRF on BERT, we tried to continuously increase the learning rate of the CRF. At last, we concluded that the learnin... | In this way, we can use the softmax function to decode each label independently, and obtain the set of all possible event types and event-cause pairs. Inspired by sequence tagging tasks, it is beneficial to consider the correlations between labels in neighborhoods and jointly decode the best chain of labels for a give... |
We use a pre-trained BERT model (https://storage.googleapis.com/bert_models/2019_05_30/wwm_uncased_L-24_H-1024_A-16.zip) to initialize the model weights. For training details, we train our model using the Adam optimizer with a learning rate of 2e-5 and a mini-batch size of 12, and the L2-norm regularization coefficient is set to ... | B
In Figure 2, we illustrate the overview of CGCL. Multiple graph encoders process input graphs, yielding embeddings for each graph. Every graph encoder updates its parameters by contrasting its learned embeddings to the outputs from the other graph encoders. Specifically, the graph embeddings learned by Graph Encoder $i$... | Those techniques can be summarized as introducing asymmetry into the model architecture. Reflecting on CGCL, diverse GNN-based graph encoders with distinct message-passing schemes are employed to ensure an asymmetric architecture. Essentially, the variance in these schemes introduces the desired asymmetry. Thus, CGCL’s asse... | In this study, we introduce CGCL, a novel collaborative graph contrastive learning framework, designed to address the invariance challenge encountered in current GCL methods. Unlike the conventional practice of constructing augmented graphs by hand, CGCL employs multiple GNN-based encoders to generate multiple contrast... | Here, $\boldsymbol{h}_n^{(l-1)}$ is the representation of node $n$ at the $(l-1)$-th layer and $\boldsymbol{h}_n^{(0)}$...
Given a set of graphs, CGCL needs to encode them into vectorized representations. GNNs [28, 12, 8] have demonstrated their outstanding ability in encoding graphs. In CGCL, we mainly employ GNNs as graph encoders. GNNs follow the recursive neighborhood aggregation and certain message-passing scheme [32] to encode graph... | D |
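The recursive neighborhood aggregation that such GNN encoders perform can be sketched with a simple mean-aggregation update (one of many possible message-passing schemes; the paper's encoders deliberately differ from one another):

```python
import numpy as np

def message_passing_layer(h, adj):
    """One round of neighborhood aggregation.

    h: (N, d) node representations h^(l-1); adj: dict node -> list of
    neighbour indices. Each node averages its neighbours' features and
    mixes them with its own state -- a deliberately simple scheme.
    """
    out = np.zeros_like(h)
    for n in range(len(h)):
        nbrs = adj[n]
        agg = np.mean([h[m] for m in nbrs], axis=0) if nbrs else np.zeros(h.shape[1])
        out[n] = 0.5 * (h[n] + agg)
    return out
```

Swapping the aggregator (sum, max, attention-weighted mean) is exactly what makes two encoders produce different views of the same graph, which is the asymmetry CGCL exploits.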
However, after a few network updates on the whole dataset (without the removed diagonal)
we observed a quite significant increase in compositionality metrics, see Figure 6. The highest improvement can be seen roughly in the first 50 additional network updates. | The topic of communication is actively studied in multi-agent RL; see Hernandez-Leal et al., (2020, Table 2) for a recent survey. Compositionality is often investigated in the context of signaling games (Fudenberg and Tirole, (1991), Lewis, (1969), Skyrms, (2010), Lazaridou et al., (2018)). Recent research has shown th... | In general, a language is defined as a mapping from objects to strings over some finite alphabet $\mathcal{A}$ (sometimes called messages), $\ell : \mathcal{X} \to \mathcal{A}^{*}$.
In this paper, we wi... |
Theory. Theorem 2 uses several restrictive assumptions, e.g. fixed message length or alphabet size equal to the range of feature values. In a general setting, it could be the case that noise would promote non-compositional, error-correcting communication protocols. Stating the general conditions for the emergence of c... | The noisy channel model of communication was famously introduced by Shannon, (1948). The idea of noise as a driving force in the emergence of communication was first proposed by Nowak and Krakauer, (1999), who showed that word-level compositionality is the optimal solution to the problem of communication in a noisy env... | C
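The definition $\ell : \mathcal{X} \to \mathcal{A}^{*}$ can be made concrete with a trivially compositional language: each feature value of an object is mapped independently to one symbol. A minimal sketch (my own illustration of the definition, not an emergent protocol from the experiments):

```python
def compositional_language(obj, alphabet="abcdefgh"):
    """A maximally compositional instance of l : X -> A*: each integer
    feature value is encoded independently as one symbol, so the
    message length equals the number of features."""
    return "".join(alphabet[v] for v in obj)
```

A non-compositional language would instead assign arbitrary strings to whole objects, which is precisely what compositionality metrics are designed to penalize.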
A promising research direction is to learn CBFs from data. The authors in [36] construct CBFs from safe and unsafe data using support vector machines, while the authors in [37] learn a set of linear CBFs for clustered datasets. The authors in [38] propose learning limited-duration CBFs, and the work in [39] learns signed ... | In this paper, we learn safe output feedback control laws for unknown systems. We first present robust output control barrier functions (ROCBFs) to establish safety under system dynamics and state estimation uncertainties. We then formulate a constrained optimization problem for constructing ROCBFs from safe expert dem... |
In this section, we present the algorithmic implementation of the previously presented results. We will discuss various aspects related to solving the constrained optimization problem (7), the construction of the involved datasets, and estimating Lipschitz constants of the functions $h(x)$ a... | Safety-critical systems rely on robust control laws that can account for uncertainties in system dynamics and state estimation. For example, consider an autonomous car equipped with noisy sensors that navigates through urban traffic [1]. The state of the car is not exactly known and is estimated from output measurements, ... |
In this paper, we have shown how safe control laws can be learned from expert demonstrations under system model and measurement map uncertainties. We first presented robust output control barrier functions (ROCBFs) as a means to enforce safety, which is here defined as the ability of a system to remain within a safe s... | A |
There exists an oracle relative to which $\mathsf{NP}^{\mathsf{BQP}} \not\subset \mathsf{BQP}^{\mathsf{NP}}$,... | As mentioned earlier, Theorem 3 resolves an open problem of Fortnow [For05], and demonstrates a clear difference between $\mathsf{BPP}$ and $\mathsf{BQP}$ that exemplifies the impossibility of pulling the randomness out of a quantum algorithm. Indeed, Theorem 3 shows that t... |
So what is it that distinguishes $\mathsf{BPP}$ from $\mathsf{BQP}$ in these cases? In all of the above examples, the answer turns out to be one of the fundamental properties of classical randomized algorithms: namely, that one can always “pull the randomness out” from suc... | Given the experience of classical complexity theory, it would be
reasonable to hope for a theorem showing that, if $\mathsf{NP} \subseteq \mathsf{BQP}$, then $\mathsf{PH}$ collapses, analogous to the Karp-Lipton Theorem [KL80], that if $\mathsf{NP} \subset \mathsf{P/poly}$... | Theorem 10 says, in effect, that there is no relativizing obstruction to $\mathsf{BQP}$ being inordinately powerful even while $\mathsf{NP}$ is inordinately weak. It substantially extends the Raz-Tal Theorem, that there is an oracle relative to which $\mathsf{BQP} \not\subset \mathsf{PH}$... | A
Our result (2) suggests that one possibility is to define the multiplicity of a solution as the growth rate of multiplicities of its truncations, and this definition will be consistent with the usual algebraic multiplicity for the case of a fat point on a line.
| Note that the series does not depend on the multiplicity $m$ of the point.
One way to capture the scheme structure of $\mathcal{L}(X)$ could be to take the components of the projections in (3) with their multiplicities. | From the point of view of algebraic geometry, $I^{(\infty)}$ defines the arc space $\mathcal{L}(X)$ [13] of the scheme $X$.
Geometrically, the points of the arc space correspond to the Taylor coefficients... | We used Macaulay2 [19] and, in particular, package Jets [18, 17] to explore possible analogues of our Theorem 3.1 for this more general case.
A related Sage implementation for computing the arc space of an affine scheme with respect to a fat point can be found in [37, Section 9] and [36, Section 5.4]. | Connections between the multiplicity structure of the arc space of a fat point and Rogers-Ramanujan partition identities from combinatorics were pointed out by Bruschek, Mourtada, and Schepers in [11] (for a recent survey, see [29, §9]).
In particular, they used Hilbert-Poincare series of similar nature to (1) (motivat... | D |
(b) To fit DCDFM, an efficient spectral clustering algorithm called nDFA is designed. We build a theoretical framework on consistent estimation for the proposed algorithm under DCDFM. Benefiting from the distribution-free property of DCDFM, our theoretical results under DCDFM are general. Especially, when DCDFM reduces t... |
Meanwhile, $Q$, $OE$, $error$, $\tau$ and $T$ obtained by applying DFA and nDFA to adjacency matrices $A$ for the above four real-world networks with known node labels are reported in Table ... | (c) To measure performances of different methods on real-world weighted networks with unknown information on node labels, we propose a general modularity as an extension of classical Newman’s modularity [23]. For weighted networks in which all edge weights are nonnegative, the general modularity is exactly the Newman’s ... | Both simulated and empirical data are presented to compare nDFA with the existing algorithm DFA developed in [16] for weighted networks, where DFA applies k-means on all rows of $\hat{U}$ with $K$ clusters to estimate node labels. Meanwhile, codes for all experimental results in... |
In this paper, we introduced the Degree-Corrected Distribution-Free Model (DCDFM), a model for community detection on weighted networks. The proposed model is an extension of previous Distribution-Free Models by incorporating node heterogeneity to model real-world weighted networks in which node degrees vary, and it ...
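Since the general modularity described above reduces to Newman's modularity when all edge weights are nonnegative, a minimal sketch of that baseline computation may be useful; the adjacency matrix and label vector below are illustrative, not from the paper's networks:

```python
import numpy as np

def weighted_modularity(A, labels):
    """Newman-style modularity for a symmetric, nonnegative weighted
    adjacency matrix A and a hard community assignment vector `labels`."""
    A = np.asarray(A, dtype=float)
    w = A.sum()                            # total weight (counted twice, undirected)
    degrees = A.sum(axis=1)                # weighted node degrees
    same = np.equal.outer(labels, labels)  # indicator delta(g_i, g_j)
    return ((A - np.outer(degrees, degrees) / w) * same).sum() / w

# Two clear communities of three nodes each (illustrative graph).
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]])
labels = np.array([0, 0, 0, 1, 1, 1])
print(round(weighted_modularity(A, labels), 3))  # → 0.357
```

The same formula accepts arbitrary nonnegative edge weights, which is the sense in which it specializes the general modularity discussed in the text.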
Banerjee (2016); Oliehoek and
Amato (2016); Foerster et al. (2016) is based on using the centralized information during training. During execution, the agents act using only their respective observations. Following this scheme, Foerster et al. (2016) introduces the RIAL and DIAL algorithms in the context of Q... | Following this idea, Rashid et al. (2018) introduced QMIX, which learns a complex state-dependent decomposition by using monotonic mixing hypernetworks. Extensions of QMIX include MAVEN Mahajan et al. (2019), COMIX de Witt et al. (2020), SMIX(λ) Wen
et al. (2020), and QTRAN Son | et al. (2018), a distributed single-agent algorithm. The idea of extending RL algorithms to the multi-agent setting has been successfully executed multiple times. Lowe et al. (2017) propose a multi-agent actor-critic algorithm MADDPG, which is based on the DDPG algorithm Lillicrap et al. (2016). Yu
et al. (2020) introd... | Reinforcement learning has witnessed impressive development in recent years. Famously, superhuman performance has been achieved in games Go Silver et al. (2016), StarCraft II Vinyals et al. (2019b), Dota 2 Berner et al. (2019) and other applications. These successes are the result of rapid algorithmic development. Rese... |
In this work, we take a step towards amending this situation. We propose MA-Trace, a new on-policy actor-critic algorithm, which adheres to the centralized training and decentralized execution paradigm Lowe et al. (2017); Foerster et al. (2018); Rashid et al. (2018). The key component of MA-Trace is the usage of impor... | A |
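The importance-sampling correction that MA-Trace builds on can be illustrated generically. The sketch below shows truncated importance weights in the V-trace style, not the exact MA-Trace estimator, and all probabilities are made up:

```python
def truncated_is_weights(target_probs, behavior_probs, clip=1.0):
    """Truncated importance weights rho = min(clip, pi(a|s) / mu(a|s)),
    used to correct off-policy experience in V-trace-style estimators."""
    return [min(clip, t / b) for t, b in zip(target_probs, behavior_probs)]

# Per-step action probabilities under the target and behavior policies.
w = truncated_is_weights([0.5, 0.1, 0.9], [0.25, 0.5, 0.9])
print(w)  # → [1.0, 0.2, 1.0]
```

Clipping the weights at 1 bounds the variance of the off-policy correction, which is what makes this family of estimators practical for distributed training.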
An extra hyperparameter of RF is the maximum number of features to consider when looking for the best split (√(number_of_features)... | Figure 7: An outlier case exploration, the final prediction, and the training of another bunch of RF and AB models. (a) presents the anchoring of a cluster of 8 HS-Level-2 decisions to compare the overlapping rules against 3 HS-Level-3 decisions. In (b), after checking the common regions of agreement for the two cluste... | Checking the cases where the majority of the models disagree with the GT, we stop at the 15th test instance. Figure 7(a) shows the decisions applicable for this unusual case. We use the comparison mode to select a pure cluster on the left to juxtapose it with decisions classifying countries as HS-Level-3 on the right...
To investigate the global decisions based on the AB8 model we set the impurity to 0, disable limiting decisions based on the current test instance, hide the RF models, and reveal the density view of the active AB model (cf. Figure 5(a)). Most decisions are positioned in the right-hand side of the projection. Thus, we ... |
In the following subsections, we explain VisRuler by describing a use case with the World Happiness Report 2019 Helliwell2019World data set obtained from the Kaggle repository. Kaggle2019 This data set contains 156 countries (i.e., instances) ranked according to an index representing how happy the citizens of each c... | D |
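The √(number_of_features) split rule mentioned for RF can be sketched independently of any library; `candidate_features` below is an illustrative helper, not part of VisRuler:

```python
import math
import random

def candidate_features(n_features, rng):
    """RF-style feature subsampling: at each split, only sqrt(n_features)
    randomly chosen feature indices are candidates for the best split."""
    k = max(1, int(math.sqrt(n_features)))
    return rng.sample(range(n_features), k)

# With 9 features (as in a small tabular data set), 3 are tried per split.
feats = candidate_features(9, random.Random(0))
print(len(feats))  # → 3
```

Restricting each split to a random feature subset decorrelates the trees, which is the main reason a forest outperforms a single deep tree.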
Various other aspects of polarization in MIMO systems have been investigated as well. Ref. [16] showed that space-time block coding (STBC) with single polarization outperforms STBC with dual polarization in Rayleigh and Ricean fading channels. A MIMO system with dual-polarized antenna elements can have lower spatial di... | By optimally employing the polarization reconfigurable antennas in conjunction with MIMO spatial multiplexing, this paper will show that the system performance is substantially enhanced compared to that of a conventional scheme with the same number of antenna ports. Throughout the paper, we will assume perfect knowledg... |
This section delineates the PR-HS-MIMO communication system that performs spatial multiplexing with the selected antenna elements as depicted in Fig. 3. The Tx selects L_t out of N_t... | Tx and Rx have co-located dual-polarized antennas, such that two antenna ports are available for each antenna element at a distinct spatial location. Compared to the case where the same number of antenna elements with
only a single polarization is available, this leads to an increase in diversity and capacity, although... | On the other hand, polarization diversity is not taken into account in the majority of previous research works on antenna selection. Although there are previous reports that consider polarization diversity with antenna selection, they consider fixed antenna polarization such as dual-polarized antennas in [44] or tri-pol...
In simplified terms, it is then proved that for some constant δ = δ(d) > 0, any sequence of axis-parallel boxes of diameter and total area at most δ can be packed online in the d-dimensional unit hypercube.
Determining whether the crit... | A lower bound on the critical density of online packing of squares into the unit square has been improved in a sequence of papers [33, 15, 31, 25] from 5/16 [33] to 2/5 [15].
Interestingly, Januszewski and Lassak [33] proved that in dimension d ≥ 5, the critical density of online... | On one hand, this packing problem is harder than the 2D offline version which has positive critical density (Theorem 6 d), and on the other hand, it is easier than the 3D online version which has 0 critical density (since also the 3D offline version has 0 critical density [8]).
In this paper, we prove that the 2D onl... | Alt, Cheong, Park, and Scharf [8] showed that there exist n 2D unit disks embedded in 3D (with different normal vectors) such that whenever they are placed in a non-overlapping way, their bounding box has volume Ω(√n).
It follows that when ... | In simplified terms, it is then proved that for some constant δ=δ(d)>0𝛿𝛿𝑑0\delta=\delta(d)>0italic_δ = italic_δ ( italic_d ) > 0, any sequence of axis-parallel boxes of diameter and total area at most δ𝛿\deltaitalic_δ can be packed online in the d𝑑ditalic_d-dimensional unit hypercube.
Determining whether the crit... | B |
Uncertainty and Entropy are also common tools for active learning [28].
We re-implement the framework by connecting one encoder with two classifiers, and train by classifying all instances and enlarging the difference of outputs from two classifiers. | Finally, we estimate the entropy (of one-hot vectors), uncertainty (difference of two classifiers) and loss as metrics to suggest templates to label.
As shown in Table 6, VAAL needs labeled data to initialize, and performs badly in only one iteration. | Uncertainty and Entropy are also common tools for active learning [28].
We re-implement the framework by connecting one encoder with two classifiers, and train by classifying all instances and enlarging the difference of outputs from two classifiers. |
As shown in Table 1, our approach cannot find the best templates; instead, it finds a fairly good choice well above the average performance. When there is only one template, the best template achieves 2.863 mm in MRE. Although the template we choose achieves an MRE of only 3.083 mm, it is m... | MSE works while uncertainty and entropy do not, where the probable reason is that patterns in medical images appear simple for both classifiers, which give similar predictions and cause very low uncertainty for all images and low entropy to distinguish instances clearly.
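The two-classifier disagreement heuristic described above (one shared encoder, two classifier heads, uncertainty measured as the difference of their outputs) can be sketched as follows; the probability arrays are illustrative:

```python
import numpy as np

def disagreement_scores(probs_a, probs_b):
    """Uncertainty as the L1 difference between the softmax outputs of two
    classifier heads sharing one encoder; larger means more informative."""
    return np.abs(probs_a - probs_b).sum(axis=1)

# Three instances, two classes; the last instance shows the most disagreement.
pa = np.array([[0.9, 0.1], [0.6, 0.4], [0.5, 0.5]])
pb = np.array([[0.9, 0.1], [0.5, 0.5], [0.1, 0.9]])
scores = disagreement_scores(pa, pb)
print(scores.argmax())  # → 2
```

When both heads give near-identical predictions (as the text observes for simple medical-image patterns), these scores collapse toward zero and stop separating instances.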
| A |
(2) We use a spectral algorithm to fit MMDF. We show that the proposed algorithm stably yields consistent community detection under MMDF. Especially, theoretical results when edge weights follow a specific distribution can be obtained immediately from our results.
|
In this paper, we have proposed a general, flexible, and identifiable mixed membership distribution-free (MMDF) model to capture community structures of overlapping weighted networks. An efficient spectral algorithm, DFSP, was used to conduct mixed membership community detection and shown to be consistent under mild c... | MMDF is a generative model and fuzzy weighted modularity is a general modularity for overlapping weighted networks. We expect that our model MMDF and fuzzy weighted modularity proposed in this paper will have wide applications in learning and understanding the latent structure of overlapping weighted networks, just as ... |
(3) We provide fuzzy weighted modularity to evaluate the quality of mixed membership community detection for overlapping weighted networks. We then provide a method to determine the number of communities for overlapping weighted networks by increasing the number of communities until the fuzzy weighted modularity does ... |
In the DFSP algorithm, the number of communities K should be known in advance, which is usually impractical for real-world networks. Here, we introduce fuzzy weighted modularity, then we combine it with DFSP to estimate K for overlapping weighted networks.
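The stopping rule described here (grow K until the fuzzy weighted modularity stops increasing) can be sketched generically; `fit` and `modularity` below are placeholders standing in for DFSP and the fuzzy weighted modularity, and the score values are made up:

```python
def estimate_K(fit, modularity, K_max=10):
    """Increase the number of communities K until the modularity score
    no longer improves; return the best K found."""
    best_K, best_Q = 1, float("-inf")
    for K in range(1, K_max + 1):
        Q = modularity(fit(K))
        if Q <= best_Q:            # modularity stopped increasing
            break
        best_K, best_Q = K, Q
    return best_K

# Toy stand-in: the modularity curve peaks at K = 3.
scores = {1: 0.10, 2: 0.30, 3: 0.45, 4: 0.40, 5: 0.20}
print(estimate_K(lambda K: K, lambda K: scores[K]))  # → 3
```

The early-exit on the first non-improvement assumes a unimodal modularity curve; a more cautious variant would scan all K up to K_max and take the argmax.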
Specifically, since the oracle model is trained with more classes, we investigate how representations are affected by the number of training classes.
To this end, we compute and analyze the eigenvalues of the covariance matrix of representations of each class. | Interestingly, we find that when training with fewer classes, the top eigenvalues of the covariance matrix of representations of each class dominate, indicating that the representations of each class lie in a long and narrow region (see Fig. 1 (a) for example).
On the other hand, for models trained with more classes (p... | Specifically, since the oracle model is trained with more classes, we investigate how representations are affected by the number of training classes.
To this end, we compute and analyze the eigenvalues of the covariance matrix of representations of each class. | This shows that for the 10-class model, the top eigenvalues dominate the covariance matrix of data representations of each class, indicating that data representations lie in a long and narrow region.
In addition, for any fixed k, α_k^{(c)}... | The contributions of this paper are as follows: 1) We empirically discover that encouraging the CIL learner to mimic the oracle model in the initial phase can boost the CIL performance. 2) We find that compared with the naïvely-trained initial-phase model, data representations of each class produced by the oracle model sca...
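The per-class eigenvalue analysis can be reproduced in a few lines; `topk_eigen_mass` below is an illustrative stand-in for the paper's α_k^{(c)} statistic (fraction of variance carried by the top-k eigenvalues), and the synthetic class regions are made up:

```python
import numpy as np

def topk_eigen_mass(reps, k):
    """Fraction of total variance carried by the top-k eigenvalues of the
    covariance matrix of one class's representations (rows = samples)."""
    cov = np.cov(reps, rowvar=False)
    eig = np.sort(np.linalg.eigvalsh(cov))[::-1]
    return eig[:k].sum() / eig.sum()

rng = np.random.default_rng(0)
# Long-and-narrow class region: one direction dominates the variance.
narrow = rng.normal(size=(500, 8)) * np.array([10, 1, 1, 1, 1, 1, 1, 1])
# More isotropic region: variance is spread across directions.
round_ = rng.normal(size=(500, 8))
print(topk_eigen_mass(narrow, 1) > topk_eigen_mass(round_, 1))  # → True
```

A value near 1 for small k corresponds to the "long and narrow" regions described for models trained with few classes; a flatter spectrum corresponds to the oracle-like, more isotropic representations.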
For the private institutional datasets, the protocol for releasing the data was approved by the institutional review board of the contributing institutions.
The complete dataset consisted of 259 pairs of pre-operative baseline and follow-up brain mpMRI scans, with each pair corresponding to the same patient diagnosed a... | This may be because task 3 was the only task where registration was performed between two follow-up time points.
The presence of similar deformations and structures in these scans likely rendered the registration between these two time points comparatively easier than the other three tasks. | Subsequently, the expert identified the corresponding landmarks in the post-operative follow-up scan. For the data used in the longitudinal analyses, the corresponding landmarks were identified by the experts in the second follow-up scans as well.
The landmarks were defined on anatomically distinct locations, such as b... | This may be because task 3 was the only task where registration was performed between two follow-up time points.
The presence of similar deformations and structures in these scans likely rendered the registration between these two time points comparatively easier than the other three tasks. | Following close coordination with the clinical experts of the organizing committee (H.A., M.B., B.W., J.S., E.C., J.R., S.A., M.M.), the time-window between the two paired scans of each patient was decided to be selected such that i) the scans of the two time-points had sufficient apparent tissue deformations, and ii) ... | D |
Rewrite Rule R10: (Σ, Q ∪ {R(z̄, x, ȳ)}) ▷ (Σ, Q ∪ {R(z̄, c, ȳ)})
The last auxiliary lemma considers SJFCQs with at least one atom in their complex part that does not have a variable at a primary-lhs position, but has a variable at a position associated with an attribute occurring in the right-hand side of the primary FD. |
This rule is a generalization of the corresponding rule for primary keys [25]. While the rule R10 for primary keys considers binary relation names where the same variable occurs at both positions, our rule considers arbitrary relation names where the variable x occurs both at a primary-lhs position and a no... | Similarly to the case of the rule R5, the fact that the variables x and y might appear in two different FDs makes the proof more involved than the corresponding proof for primary keys [25]. Moreover, we have added an additional condition on the variables x and y that does not...
Assume, towards a contradiction, that Q has an atom R(z̄, x, ȳ) that uses the variable x at both a primary-lhs position and a non-primary-lhs position. Then, by the rewrite rul...
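The effect of a rule like R10 (replacing the variable at one position of an atom by a constant) can be sketched on a toy atom encoding; the tuple-of-strings representation is purely illustrative, not the paper's formalism:

```python
def apply_R10(atom, var, const):
    """Sketch of rule R10's substitution: replace every occurrence of a
    variable in an atom's argument list by a constant,
    e.g. R(z, x, y) -> R(z, c, y)."""
    name, args = atom
    return (name, tuple(const if a == var else a for a in args))

print(apply_R10(("R", ("z", "x", "y")), "x", "c"))  # → ('R', ('z', 'c', 'y'))
```

In the actual rewrite system, this substitution is applied to the query Q while the FD set Σ is left unchanged, as the rule's statement shows.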
In our work, we considered a specific compartmental model on a specific type of random graph. It is important to examine the relationships between node-absorption rates, effective community structure, and dynamics in more general settings, including for other random-graph models and for empirical networks. |
Absorbing random walks have been used to develop centrality measures [14], other methods to rank the nodes of a network [46], transduction algorithms (which one can use to infer the labels of the nodes of a graph from the labels of a subset of the nodes) [7], and more. For example, Jaydeep et al. [7] proposed a transd... | creates self-edges that are positively correlated with the node-absorption rates δ_i. This correlation reflects the idea that a random walk gets stuck longer in nodes with larger node-absorption rates.
| Some relevant settings include mobility networks that link spatial locations, with node-absorption rates that reflect variations in habitat quality; sexual networks, with nodes corresponding to individuals and node-absorption rates that reflect heterogeneities in treatment rates in different subsets of a population; an... |
We study an example that plays a similar role to the example of Salathé and Jones [38]. In our example, setting the absorption rates of bridge nodes to larger values than the absorption rates of other nodes is analogous to removing community bridges. Unlike in the example of Salathé and Jones, our example uses the sam... | C |
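For background, the standard absorbing-random-walk computation uses the fundamental matrix of the transient part; the sketch below is generic textbook machinery with an illustrative two-node transient chain, not the paper's specific model or its node-absorption rates:

```python
import numpy as np

def expected_visits(Q):
    """Fundamental matrix N = (I - Q)^(-1) of an absorbing random walk;
    Q holds transition probabilities among transient nodes, and N[i, j] is
    the expected number of visits to j starting from i before absorption."""
    return np.linalg.inv(np.eye(Q.shape[0]) - Q)

# Two transient nodes: at each step, half the mass moves over and half
# is absorbed (larger absorption would shrink the entries of N).
Q = np.array([[0.0, 0.5],
              [0.5, 0.0]])
N = expected_visits(Q)
print(np.round(N, 3))
```

Raising a node's absorption rate lowers the corresponding row/column of Q and hence the expected visit counts, matching the intuition that walkers "get stuck" and are removed at such nodes.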
Now, to compute the optimal path for each path-length, we can use a simple dynamic
programming approach that runs in O(mτ_l) time, where m is the number of edges | The general QNR problem can be formulated in terms of hypergraph flows and solved using LP (see Appendix A). Although polynomial-time and provably optimal, the LP-based approach has too high a time complexity
to be practically useful. Here, we develop an efficient heuristic
rather than paths in the recursive step of a DP approach. Consequently, we were unable to design a DP approach based on the Floyd-Warshall’s approach, but, are able to extend the Bellman-Ford approach for the QNR-SP problem after addressing a... | Among our schemes, we use DP-OPT, DP-Approx and Balanced-Tree (see §IV-B) for the QNR-SP problem, and LP (Appendix A) and ITER schemes for the QNR problem. For ITER, we use three schemes:
ITER-DPA, ITER-Bal and ITER-SP, which iterate over DP-Approx, Balanced-Tree and SP respectively. To be comprehensive, | in that both
consider only balanced trees; however, we use a heuristic metric that facilitates a polynomial-time Dijkstra-like heuristic to select the optimal path, while their recursive metric (footnote 6: we note that their formula (Eqn. 10 in [18]) is incorrect as it either ignores the 3/2 factor or assumes the EP generations...
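The length-indexed dynamic program mentioned earlier (an optimal path for each path-length, in O(mτ_l) total time) can be sketched in Bellman-Ford style; the directed toy graph and additive cost objective are illustrative, not the actual QNR-SP metric:

```python
import math

def best_path_by_length(n, edges, src, max_len):
    """dp[l][v] = best (minimum) cost of reaching v from src using exactly
    l edges; each length class relaxes all m edges once, so the total work
    is O(m * max_len), as in the text's O(m * tau_l) bound."""
    INF = math.inf
    dp = [[INF] * n for _ in range(max_len + 1)]
    dp[0][src] = 0.0
    for l in range(1, max_len + 1):
        for u, v, w in edges:              # relax every edge once per length
            if dp[l - 1][u] + w < dp[l][v]:
                dp[l][v] = dp[l - 1][u] + w
    return dp

# Direct edge 0->2 costs 5; the two-hop path 0->1->2 costs 2.
edges = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 5.0)]
dp = best_path_by_length(3, edges, 0, 2)
print(dp[1][2], dp[2][2])  # → 5.0 2.0
```

Keeping one table row per path-length is exactly what lets a non-additive, length-dependent objective (like the QNR-SP metric) pick its optimum over all lengths at the end.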
Finally, while the preliminary studies and further works on explainable autonomous driving primarily focused on a combination of various AI techniques revisited above, large language models (LLMs) and vision-language models (VLMs) have recently emerged as a novel paradigm for interpreting AVs' decisions and describing t... | Presenting live natural language explanations during the trip: The promising work in this context is Wayve’s LINGO-1 [168] and LINGO-2 [169] architectures. LINGO-1 is a vision-language-action (VLAM) model that provides live natural language explanations for describing a vehicle’s chosen actions in end-to-end autonomous... | 3. An explanation component: This constituent of the framework provides understandable insights into the real-time action decisions made by autonomous driving, complying with and corresponding to an eeC and a srC. The explanation component must j... | Based on the mentioned process steps and crucial elements, we see that achieving the interpretability of self-driving models is challenging, necessitating integration of those steps and cooperation between users and AVs. Consequently, while we argue that transparent and highly autonomous driving is feasible, human fact... | Wang et al. [77] propose an approach that enables a human driver to provide scene forecasting to an intelligent driving system using a purposeful gaze. They develop a graphical user interface to understand the effect of human drivers on the prediction and control of an intelligent vehicle. A simulator is used to test a...
Dilated convolution can expand the receptive field to get multi-scale context information.
To further improve the accuracy of Ghost-NetVLAD, we try to apply dilated convolutions to GhostCNN without increasing the model size or slowing down training. We vary the dilation rate to validate our hypothesis. From Tabl...
In this paper, to improve the original NetVLAD, we propose a lightweight model that makes a good trade-off between accuracy and model efficiency. The experimental results show that the proposed model, Ghost-dil-NetVLAD (i.e., Ghost-NetVLAD with dilated convolutions), achieves accuracy similar to VGG16-NetVLAD and outp...
In this section, six models including Alex-NetVLAD, VGG16-NetVLAD, Patch-NetVLAD (Considering our limited computational resources, we only use its built-in storage mode. In this paper, we uniformly call this method Patch-NetVLAD.), MobileNetV3-NetVLAD (lightweight CNN + NetVLAD), Ghost-NetVLAD (the Ghost module does n... | Tokyo 24/7 dataset. The experimental results demonstrate that Patch-NetVLAD achieves the best performance on the Pitts30k test dataset and Tokyo 24/7 dataset, while Ghost-dil-NetVLAD performs the best on TJU-Location test dataset because most Recall@N of Ghost-dil-NetVLAD are greater than those of the remaining models.... | Dilated convolution can expand the receptive field to get multi-scale context information.
To further improve the accuracy of Ghost-NetVLAD, we try to apply dilated convolutions to GhostCNN without increasing the model size or slowing down training. We vary the dilation rate to validate our hypothesis. From Tabl...
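Dilation enlarges the receptive field with no extra parameters; standard convolution arithmetic makes the point. The layer stacks below are illustrative, not GhostCNN's exact configuration:

```python
def receptive_field(layers):
    """Receptive field of a conv stack given (kernel, stride, dilation)
    per layer, via the standard recurrence rf += (k_eff - 1) * jump."""
    rf, jump = 1, 1
    for k, s, d in layers:
        k_eff = d * (k - 1) + 1          # dilated kernel footprint
        rf += (k_eff - 1) * jump
        jump *= s
    return rf

# Three 3x3 layers: dilation 1 everywhere vs. dilation 2 on the last two.
print(receptive_field([(3, 1, 1)] * 3))                    # → 7
print(receptive_field([(3, 1, 1), (3, 1, 2), (3, 1, 2)]))  # → 11
```

The parameter count of each layer depends only on the kernel size, so the dilated stack sees a larger context at identical model size, which is the trade-off the text exploits.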
Stream ciphers [16] are one of the main cryptographic primitives used in symmetric cryptography. Historically, the first stream ciphers were built with “linear” registers, where linearity is meant both in the register update function (which sends one state to the next) and in the output function, which computes the keys... | Stream ciphers [16] are one of the main cryptographic primitives used in symmetric cryptography. Historically, the first stream ciphers were built with “linear” registers, where linearity is meant both in the register update function (which sends one state to the next) and in the output function, which computes the keys... | Traditionally, stream ciphers are attacked with two approaches: correlation attacks, which exploit possible correlations between some part of the keystream and a portion of the initial state, and approximation attacks, where the nonlinear part is approximated by a linear component. The design defenses against these type... | The main idea behind this attack is to decrease the degree of the original system by multiplying each equation in (3), which are usually of high degree, by a well-chosen g ∈ 𝔹_n. The resulting equations are
| In this paper, we propose a new form of algebraic attack, which is especially effective against nonlinear filter generators. We show with two toy examples how the attack can be performed in practice. We also apply our attack to WG-PRNG and we provide a complexity estimate that shows a fatal weakness of this cipher. We ... | B |
We show that other gadgets can overestimate or underestimate exploitability, which could shift the distribution of the parameter p𝑝pitalic_p, and we could still compute the same solutions. However, in Figure 4, we show the results of games created to break the other gadgets. We have two games in which the full gadget ... | We show that other gadgets can overestimate or underestimate exploitability, which could shift the distribution of the parameter p𝑝pitalic_p, and we could still compute the same solutions. However, in Figure 4, we show the results of games created to break the other gadgets. We have two games in which the full gadget ... | We can use other gadgets in CDRNR to obtain fast algorithms without any theoretical guarantees and with the same bound on computation as we have for the combination of CDBR and Nash equilibrium. The soundness of the algorithm relies on using the full gadget, which requires solving increasingly larger parts of the game ... | et al. (2008). We adapt the method to limited look-ahead solving, creating a continual depth-limited restricted Nash response (CDRNR). Similarly to the full robust response, CDRNR significantly outperforms the linear combination. However, it comes with drawbacks in the limited look-ahead solving. Namely, we need to kee... | The following examples show that the requirement is not satisfied for common resolving gadgets. We tried to construct a gadget that would satisfy the condition, but in the end, we kept the previously resolved parts of the game G′superscript𝐺′G^{\prime}italic_G start_POSTSUPERSCRIPT ′ end_POSTSUPERSCRIPT followed by th... | B |
Let k ≥ 2 and 0 ≤ q ≤ p ≤ 1. There are two versions of the balanced k-community stochastic block model, where edges appear independently with probability p within blocks and probability q between blocks. In the first ve...
this paper analysed testability of graph parameters relating to minimising the number of edges between parts, some of which were normalised with respect to the size of the parts but in a different way to modularity, and found some of these parameters not to be testable. Our results co... |
The plan of the rest of the paper is as follows. In Section 2, we first show an application of our results to the stochastic block model in Section 2.1, then provide an overview of the further results of this paper in Section 2.2 and lastly give some background and the relation of this paper to previous results in Sec... | We first give an example where bounds at some (not too small) edge density are known from spectral results, and a version of Theorem 1.2 allows us to bootstrap these results to sparser models. After that we present Theorem 2.1 concerning the general k-community model.
| Using existing spectral results from [11, 27] we may deduce that the lower bound in (2.1) is tight for some values of p, q = Θ(n^{-1} log n). (We suppress the details here, ...
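A balanced k-community SBM of the kind defined above is easy to sample; a minimal sketch of the first version, with independent within/between-block edge coins (all parameters illustrative):

```python
import numpy as np

def balanced_sbm(n, k, p, q, seed=0):
    """Sample one balanced k-community SBM: edges appear independently with
    probability p within blocks and q between blocks (n divisible by k)."""
    rng = np.random.default_rng(seed)
    labels = np.repeat(np.arange(k), n // k)          # k equal-size blocks
    probs = np.where(np.equal.outer(labels, labels), p, q)
    upper = np.triu(rng.random((n, n)) < probs, 1)    # flip coins for i < j only
    return (upper | upper.T).astype(int), labels      # symmetric, no self-loops

A, labels = balanced_sbm(12, 3, 0.9, 0.1)
print(A.shape)  # → (12, 12)
```

Flipping coins only on the upper triangle and mirroring keeps the graph simple and undirected, matching the independent-edge definition in the text.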
Micro-level literature provides a vast variety of case studies on different potential impacts of environmental factors on mobility. In our sample, they almost double macro-level contributions (86 contributions against 47) and provide different scenarios. Firstly, while macro-level studies mostly provide analyses at th... | The single countries that receive singularly the most attention are Mexico, with 10 case studies, and the U.S., with 9 case studies. This should not be a surprise because of two reasons: firstly, the stock of Mexican emigrates has been constantly the highest in the world (in absolute terms) as well as the migratory flo... | An evident gap in the literature emerges in Figure 4: European countries have rarely been the object of study of the impact of environmental factors on mobility. This might be motivated by the fact that the European continent is mostly seen as a destination for migrants than an origin. It should not surprise that the t... |
The first cluster (Cluster 1) is the most populated, counting 51 papers spanning the entire period considered (from 2003 to 2020). In terms of the type of analysis, it contains the largest variety: as in all clusters, quantitative studies represent the majority (as they make up 76% of the full sample), but this cluste... | The PRISMA flow diagram (Moher et al., 2009) in Figure 1 shows the process of identification, screening, eligibility, and inclusion of contributions in the final sample. It is important to note that there are two levels of inclusion: the first level identifies the sample of contributions included in our network anal...
The rest is structured as follows. Section 2 briefly reviews related works. In Section 3, we introduce the problem setup and the optimistic online mirror descent framework, where a general dynamic regret analysis is provided. Section 4 establishes the gradient-variation dynamic regret bounds under two different gradie... |
In this paper, we exploit the easiness of problem instances to enhance the universal dynamic regret. We propose two novel online ensemble algorithms, Sword and Sword++, for convex and smooth online learning. Both algorithms achieve a best-of-both-worlds dynamic regret of order 𝒪(√((1+P_T+min{V_T, F_T})(1+P_T)))... | In the following, we describe the details of the meta-algorithm. We will provide a brief explanation of the design of corrections in Remark 4 and offer a more comprehensive elaboration on the general framework of collaborative online ensemble in Section 5.
| In this section, we present a brief overview of both static and dynamic regret minimization in the context of online convex optimization. Additionally, we provide more discussions on the subsequent studies after the preprint of our manuscript is publicly available.
|
Here, ε > 0 is the learning rate of the meta-algorithm and we consider a fixed learning rate for simplicity. Footnote 3: We adopt the terminology “learning rate” for the meta-algorithm of our approach following the convention in the prediction with expert advice, and use “step size” for the general onl...
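The meta-algorithm's learning rate ε plays the usual exponential-weights role from prediction with expert advice; the sketch below is that generic scheme, not the paper's exact meta-algorithm, and the losses and predictions are made up:

```python
import numpy as np

def meta_combine(base_preds, cum_losses, eps=0.5):
    """Exponential-weights meta-step: weight each base learner by its
    cumulative loss with learning rate eps, then average its predictions."""
    w = np.exp(-eps * np.asarray(cum_losses, dtype=float))
    w /= w.sum()
    return float(w @ np.asarray(base_preds, dtype=float))

# Two base learners; the second has lower cumulative loss and dominates.
r = meta_combine([0.0, 1.0], [5.0, 1.0], eps=1.0)
print(r > 0.9)  # → True
```

A larger ε concentrates the meta-weight on the currently best base learner faster, at the cost of reacting more sharply to noisy losses; keeping ε fixed is exactly the simplification the text adopts.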
orbits of points of this type which are admissible fixed points of a power
of σ. The case where x⁻ = σ^{nω}(v) and... | y = σ^{nω}(u) where σ^n is right-prolongable
on u. | A morphism σ: A* → A* is right-prolongable
on u ∈ A⁺ if σ(u)... | x = σ^{ω̃}(r) · σ^{ω}(s)
with σ left-prol... | where σ^n is left-prolongable on u
and right-prolongable on v. Set u = paq and v = rbs
R(𝒯_{n,d}^{k}(C_n)) = Ω(…/n + (C_n/n)^{2/(2s+1)}). | smoothness index k+1 in each coordinate direction, and any third index q ≥ 1, is indeed n^{-2s/(2s+1)} for s > 1/2 (or 2k+2 > d)… | This paper is a unification and extension of Sadhanala et al. (2016, 2017) (more will be said about the relationship to these papers
in Section 1.3). The models of smoothness for f_0 that we | ℋ_d^k(1). This matches the optimal rate for estimation over Hölder
classes (see Sadhanala et al. (2017) for a formal statement and proof for | The result in Theorem 4 for s ≥ 1/2 (that is, 2k+2 ≥ d) was already derived in Sadhanala et al. (2017). More precisely,
these authors established the third term on the right-hand side in
While Euclidean loss remains the dominant cost function in deep learning, topological losses based on persistent homology are emerging as superior in tasks requiring topological understanding (Chen et al., 2019; Hu et al., 2019; Gupta et al., 2022; Lin et al., 2023). These topological losses incorporate penalties base... | The paper introduces a new data-driven topological data analysis (TDA) method for studying dynamically changing human functional brain networks obtained from the resting-state functional magnetic resonance imaging (rs-fMRI). Leveraging persistent homology, a multiscale topological approach, we present a framework that ... |
The method employs the Wasserstein distance to measure the topological differences between networks and demonstrates greater efficiency and performance than the commonly used $k$-means clustering in defining the state spaces of dynamic brain networks.
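As a hedged illustration of why such distances are cheap to compute (not the paper's implementation; `wasserstein_1d` and its arguments are hypothetical names): for one-dimensional persistence summaries with equally many points, such as sorted death values of connected components, the $p$-Wasserstein distance has a closed form obtained by matching sorted values, a standard fact of one-dimensional optimal transport.

```python
import numpy as np

def wasserstein_1d(deaths_a, deaths_b, p=2):
    # Closed-form p-Wasserstein distance between two 1D persistence
    # summaries with equally many points: the optimal matching pairs
    # the sorted values, so no combinatorial assignment is needed.
    a = np.sort(np.asarray(deaths_a, dtype=float))
    b = np.sort(np.asarray(deaths_b, dtype=float))
    assert a.shape == b.shape, "equal numbers of points assumed"
    return float(np.sum(np.abs(a - b) ** p) ** (1.0 / p))
```

This sorting-based shortcut is one reason a Wasserstein-based clustering can be cheaper than iterative $k$-means on the raw networks.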
$\displaystyle\lim_{k\to\infty}(t_{k+1}-t_{k})=\tau_{s}(\theta)$ ...
$\theta\in[0,\pi)$; if $\det(M(\tau))<0$ ...
satisfies Assumption (A1). If there does not exist $\theta\in[0,\pi)$ such that $\phi^{k}(\theta)-\theta=d\pi$ ...
under the angle map only if $\exists\,\theta\in[0,\pi)$ such that $\phi^{k}(\theta)-\theta=d\pi$ ...
In the context of this work, we consider disturbances as external inputs in the notions of ISSt and ISSf. ISSt in the sense of Sontag has been investigated using Lyapunov functionals [3]. The notion of practical ISSt, which augments the original ISSt with certain practical considerations, has also been explored in [3]. On the othe...
In this paper, we have explored safe control of a class of linear parabolic PDEs under disturbances. First, we defined unsafe sets and the distance of the system states from such unsafe sets. Next, we constructed both a control barrier functional and a Lyapunov functional in order to develop a design framework for the controller under s...
In light of the aforementioned discussion, the main contribution of this paper is the following: building upon the existing literature, we extend PDE safety research by designing a feedback-based control that satisfies both pISSf and ISSt under disturbances, utilizing a pISSf barrier functional characterization and ISS...
In this section, we examine the pivotal role of passive sensing technologies in deepening our understanding of workplace wellbeing and productivity. Reviewing recent research studies, we highlight how these innovative tools offer important insights into the complex interplay between work environments an...
Over the years, there has been an increase in studies that explore the sensing capabilities of smartphones and wearables to assess as well as improve workers' health and wellbeing. Among the selected papers, mental-health issues such as stress, anxiety, and affect are the dimensions studied most often. Most of these studies a...
Other health- and wellbeing-related topics that have been studied using passive sensing among workers include focus and awakeness. Soto et al. utilize biometric data from an arm-worn device (viz., physical activity, HR, skin response, skin temperature, and respiration) to estimate workers' stress, focus, and awakeness [25]. The...
$\mathbb{E}_{\mathcal{D}_{i}}\|\nabla f_{i}(\mathbf{x})-\nabla\mathcal{F}_{i}(\mathbf{x})\|<\sigma^{2}$
In both i.i.d. and non-i.i.d. cases, each client holds the same number of examples as in other works [45, 19]. | These assumptions are widely used for analyzing the non-convex loss functions in FL in the previous works [15, 45, 1, 33, 16, 32].
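The Dirichlet-based heterogeneity simulation described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' code; `dirichlet_label_partition` is a hypothetical helper name.

```python
import numpy as np

def dirichlet_label_partition(labels, n_clients, alpha, seed=0):
    # For each class, split its examples across clients with proportions
    # drawn from a symmetric Dirichlet(alpha); smaller alpha yields more
    # skewed (more non-i.i.d.) client label distributions.
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_idx = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        props = rng.dirichlet(alpha * np.ones(n_clients))
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for cid, part in enumerate(np.split(idx, cuts)):
            client_idx[cid].extend(part.tolist())
    return client_idx
```

With `alpha` around 0.3 or 0.6 this mimics the label-skew regimes mentioned above; exact equal-size clients would additionally require rebalancing.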
Note that our convergence proof is free from the bounded-gradient assumption on the global or local loss, although that assumption is commonly used in the proofs for momentum-based or adaptive optimization ...
STEM [18] and FedGLOMO [7] apply the STORM algorithm [6] to both server- and client-level SGD proc... |
(Convergence for non-convex functions) Suppose that the local functions $\{\mathcal{F}_{i}\}_{i=1}^{N}$ are non-convex...
Third, learning the path-loss function first and then using Eqn. 1 to allocate
spectrum is certainly a feasible approach – but, since the path-loss function fundamentally encodes more information than the SA function, it would likely require much more training (note that | Second, in our context, an unsupervised approach is meaningless as unlabelled samples have minimal information (actually, zero information in the PU-Setting), and as explained
in §III, a reinforcement-learning approach is also not suitable for our setting. | the input (features) being the primary-user parameters, spectrum sensor (SS) readings, and secondary user (SU) request parameters, and the output (label) being the maximum power that can be allocated to the SU without
resulting in any harmful interference to the PUs’ receivers. | Finally, non-trivial parameters such as weather, terrain and obstacles, PU transmitters being directional, etc., can be relatively easily incorporated in a learning approach (see §III-F), while they would require more sophisticated modelling techniques and algorithms to be incorporated
in non-learning approaches. |
We develop a novel technique to represent the spectrum allocation function input (i.e., the location and transmission/received powers of primary users or spectrum sensors, and the request parameters of the secondary user) as an image; such an image representation is essential to effectively use a CNN-based learning mo... | C |
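One way such an image representation could be built is sketched below. This is our own hedged illustration, not the paper's encoding; `rasterize_entities`, the grid size, and the area are hypothetical choices.

```python
import numpy as np

def rasterize_entities(entities, grid=64, area=1000.0):
    # Turn a set of (x, y, power) tuples -- e.g. primary users, spectrum
    # sensors, or the requesting secondary user -- into one image channel
    # so that a CNN can consume the spectrum-allocation input directly.
    img = np.zeros((grid, grid), dtype=float)
    for x, y, power in entities:
        i = min(int(y / area * grid), grid - 1)
        j = min(int(x / area * grid), grid - 1)
        img[i, j] += power  # accumulate power at the covering cell
    return img
```

Separate channels per entity type (one for PUs, one for sensors, one for the SU request) would then be stacked as the CNN input.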
$R_{n}=e^{c\alpha}-\sum_{i=0}^{n}\frac{(c\alpha)^{i}}{i!}\leq e^{c\alpha}\frac{(c\alpha)^{n+1}}{(n+1)!},$
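A quick numerical sanity check of this Taylor-remainder bound can be run as follows (illustrative only; `remainder_and_bound` is a hypothetical helper).

```python
import math

def remainder_and_bound(c_alpha, n):
    # R_n = e^{ca} - sum_{i<=n} (ca)^i / i!  is the Taylor tail, and it
    # should be bounded by e^{ca} * (ca)^{n+1} / (n+1)!.
    partial = sum(c_alpha**i / math.factorial(i) for i in range(n + 1))
    remainder = math.exp(c_alpha) - partial
    bound = math.exp(c_alpha) * c_alpha**(n + 1) / math.factorial(n + 1)
    return remainder, bound
```

The tail is positive and sits below the stated bound for every truncation order, consistent with the inequality above.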
| proving some upper bounds related to Picard iterations and using them to estimate how close, relative to the Hausdorff distance, two curves can be brought together by a special affine transformation, provided the affine curvature functions of the curves are δ𝛿\deltaitalic_δ-close in the L∞superscript𝐿L^{\infty}italic... | In this section, we review how a curve can be reconstructed from its Euclidean curvature by two successive integrations (Theorem thm-euc-rec). We then use these formulas to estimate how close, relative to the Hausdorff distance, two curves can be brought together by a special-Euclidean transformation, provided their Eu... | Next, we establish bounds on the distance between two affine frames reconstructed from two δ𝛿\deltaitalic_δ-close (in the L∞superscript𝐿L^{\infty}italic_L start_POSTSUPERSCRIPT ∞ end_POSTSUPERSCRIPT norm) affine curvature functions.999This result is consistent with a well known ODE result on continuous dependence of ... | D |
In order to see how the variation of the problem impacts the performance of the algorithms, we add an extra term of $100I_{20}$ such that the matrix $Q_{t}$ is diagonally dominan...
As expected, the algorithm using the full gradient has the best performance in terms of minimizing the dynamic regret. Yet, it is worth mentioning that among the three coordinate descent algorithms considered for this numerical example, the Gauss-Southwell rule gives the best performance, which is consistent with Remark 2. The...
The main contributions of the paper can be summarized as follows. First, we extend the coordinate descent algorithms considered in [31] to the online case and provide their regret analysis. To the best of our knowledge, this is the first attempt to explore the use of coordinate descent methods for solving OCO...
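A minimal sketch of online coordinate descent with the Gauss-Southwell rule on time-varying quadratic costs is given below; this is our own illustration with hypothetical names, not the paper's implementation.

```python
import numpy as np

def online_cd_gs(Qs, bs, x0, eta):
    # Online coordinate descent: at round t we observe the cost
    # f_t(x) = 0.5 x'Q_t x + b_t'x, update ONLY the coordinate with the
    # largest gradient magnitude (Gauss-Southwell rule), then move on.
    x = np.array(x0, dtype=float)
    trajectory = [x.copy()]
    for Q, b in zip(Qs, bs):
        g = Q @ x + b                   # gradient of the current cost
        i = int(np.argmax(np.abs(g)))   # Gauss-Southwell coordinate choice
        x[i] -= eta * g[i]              # single-coordinate update
        trajectory.append(x.copy())
    return trajectory
```

Replacing the `argmax` with a cyclic or uniformly random coordinate choice gives the other two update rules considered.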
The commonly used level of abstraction, which closely resembles direct biological replication, models neural computation using coupled ordinary differential equations. The temporal dynamics of this model enable information integration over time, while the coupling across state variables models spatial information inte... | A related level of abstraction involves coupled difference equations and is commonly implemented using digital design methodologies in standard CMOS processes or FPGAs [51, 52, 53, 54]. However, even with fully custom designs, these implementations fail to achieve the low power levels sought by neuromorphic engineers.
| Advancements in technology offer a broader range of materials that could potentially facilitate the design of improved silicon-based brains. Architectures using this device challenge the conventional choices for abstraction level and partitioning in mixed-signal neuromorphic processors. These designs can employ more ad... |
Currently available neuromorphic processors are still in their infancy, as they aim to replicate biological neurons using silicon [51, 52, 53, 54]. However, their application has been limited due to several factors. First, our understanding of the brain is incomplete, lacking a comprehensive theory explaining its ope...
We highlight that our simulations not only confirm our theoretical results but also allow us to empirically pinpoint the constants hidden in the asymptotic analysis. For the simulations carried out on the path, our results indicate a constant of roughly $1.5$, giving a total running time of roughly 1.5...
Examples of such graphs are the path, where all the nodes are positioned with an equal distance of at most $\varepsilon$, as well as the graph from Theorem 9 if the social network...
Synchronous opi... | In the following lemma, we prove that the projected system behaves similarly to the original system in the sense that the length of the edge e𝑒eitalic_e stays the same and the influence network does not change.
Furthermore, the agents in the original HKS move at least as much as the agents in the projected state, when... | Interestingly, the total running times are sharply concentrated around their mean. In fact, the plot in Fig. 2 actually shows box plots, but starting with a number of agents as small as only 40 the upper and lower whiskers become almost identical with no outliers detected.
| D |
The F1 score is computed by taking the harmonic mean of precision and recall, yielding a single metric that balances the two. The best possible value of the F1 score is 1 (perfect precision and recall), while the worst value is 0. If a single F1 score is required for multiclass classification, a micro-a...
Table 1 shows the evaluation metrics for the model trained on 1000 X-ray images and tested on a test dataset. The results indicate poor generalization of the model, as reflected in its performance across all metrics. Although accuracy can be deceptive, particularly for conditions such as Pneumothorax, Hernia, and Pleu... |
We computed several metrics to assess the generalization of our diagnostic models. These metrics include sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), the receiver operating characteristic (ROC) curve, and the F1 score. | B |
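The count-based metrics above can be computed directly from a binary confusion matrix; the sketch below is our own illustration with a hypothetical helper name, not the study's code.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    # Core diagnostic metrics from raw confusion-matrix counts.
    sensitivity = tp / (tp + fn)   # recall / true positive rate
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)           # positive predictive value (precision)
    npv = tn / (tn + fn)           # negative predictive value
    f1 = 2 * ppv * sensitivity / (ppv + sensitivity)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "ppv": ppv, "npv": npv, "f1": f1}
```

The ROC curve is then traced by recomputing sensitivity and $1-$specificity over a sweep of decision thresholds.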
Unlike multi-armed bandit algorithms, BAI algorithms are designed solely to deliver the most effective exploration.[1]
[1] Although the term “best arm identification” has appeared only recently, several strands of research share the same goal, among which ranking and selection (RS; Bechhofer 1954) is among the best known. S...
Our results offer analytical innovation by establishing foundational principles for a formal analysis that yields exact solutions in dynamic programming. This is notable because it enables solutions that extend beyond the conventional one- or two-step lookahead. We demonstrate several instances in which it is feasible... | Popular algorithms for fixed-budget identification include successive rejects (SR; Audibert et al. 2010) and successive halving (Karnin et al., 2013). There are also several Bayesian algorithms that utilize a prior, such as top-two Thompson sampling (Russo, 2020), knowledge gradient (KG; Gupta and Miescke 1994), and ex... | The KG algorithm (Gupta and Miescke, 1994) also adopts a Bayesian approach. However, the analytical properies of KG are little understood.
The results of Ryzhov et al. (2012) show that the discounted version of KG draws the best arm most frequently. Elsewhere, Wang and Powell (2018) have further characterized KG without g...
The maximum velocity and acceleration of the Crazyflies are set to $1\,\mathrm{m/s}$ and $1\,\mathrm{m/s^{2}}$, respectively, to ensure safety. Lastly, the sampling time $h$ is set to $0.2$ s and the horizon length ...
A comparison between the proposed method and the traditional BVC [32] is shown in Fig. 9 for a particular setup. | As shown in Fig. 6, four robots located in a 2m×2m2m2m2{\rm m}\times 2{\rm m}2 roman_m × 2 roman_m square transit to their antipodal positions.
The robots approach the center point initially at $t=0.2$ s and the terminal positions have entered the warning band.
As shown in Fig. 14(a), the navigation task is accomplished within $11.3$ s with smooth and safe trajectories.
Their states in the workspace are captured by an indoor motion-capture system, OptiTrack, whose update frequency is $120$ Hz. This information is sent to the main control computer where the proposed trajector...
Figure 1: a) Schematic of the learning task this work is concerned with. Data points $x\in X$ are encoded to and decoded from latent space $Z$. Points in the same orbit in $X$ are mapped to the same point (orbit) $z\in Z=X/G$ ...
To this end, we introduce a group-invariant representation learning method that encodes data in a group-invariant latent code and a group action. By separating the embedding into a group-invariant and a group-equivariant part, we can learn expressive lower-dimensional group-invariant representations utilizing the power ...
we propose an explicit construction suitable for any group $G$. To the best of our knowledge, this is the first method for unsupervised learning of separated invariant-equivariant representations valid for any group.
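A toy instance of such a separation, for the cyclic-shift group acting on 1D arrays, can make the idea concrete. This is our own illustration, not the paper's construction: the invariant code is the lexicographically smallest rotation (constant on orbits), and the group element is the shift recovering the input from it.

```python
import numpy as np

def encode(x):
    # Invariant part: canonical (lexicographically smallest) rotation.
    # Equivariant part: the shift k mapping the canonical form back to x.
    rots = [tuple(np.roll(x, k)) for k in range(len(x))]
    k = rots.index(min(rots))
    return np.array(min(rots)), k

def decode(z, k):
    # Act with the stored group element to reconstruct the input exactly.
    return np.roll(z, -k)
```

All points in one orbit share the same invariant code `z`, while `(z, k)` together reconstruct the original point, mirroring the invariant/equivariant split described above.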
The details of this sampling process are given in the appendix.
The result is $H$ samples of $(\eta^{(h)},\theta^{(h)})$. The $M$ sa...
The potential benefits ... |
Table 2: Confidence intervals for the means of the maximiser distance at the final iteration for the satellite data. Given are means $\pm$ one standard deviation of the mean. The best values are given in bold. This confirms the story from Figs. 4 and 5 that similar results are obtained on the selection su...
$\frac{1}{K}\sum_{k=1}^{K}\Big(f(x^{(k)})-\frac{1}{K-1}\sum_{j\neq k}f(x^{(j)})\Big)\nabla_{\eta}\log q_{\eta}(x^{(k)}).$
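A minimal numerical sketch of this leave-one-out baseline estimator is given below (a hedged illustration with a hypothetical helper name; it assumes the per-sample score gradients $\nabla_{\eta}\log q_{\eta}(x^{(k)})$ are provided).

```python
import numpy as np

def loo_reinforce(f_vals, score_grads):
    # Leave-one-out baseline: each sample's f-value is centered by the
    # mean of the OTHER K-1 samples before weighting its score gradient,
    # which keeps the score-function estimator unbiased.
    f = np.asarray(f_vals, dtype=float)
    K = len(f)
    baselines = (f.sum() - f) / (K - 1)     # mean over j != k
    weights = f - baselines
    return (weights[:, None] * np.asarray(score_grads, dtype=float)).mean(axis=0)
```

Because the baseline for sample $k$ never uses $f(x^{(k)})$ itself, the correction term has zero expectation under $q_\eta$.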
As a result, REBAR uses three evaluations of $f$ for each $x^{(k)}$...
All the above methods construct a baseline that is independent of the point $x^{(k)}$ under consideration, but there are other ways to preserve the unbiasedness of the estimator.
Specifically, if we denote the RODEO gradient estimate by $g_{\gamma}(x^{(1:K)})$...
The markers represent the outage probability for the idealized version of the Random selection scheme and serve as a reference.
The dotted lines show the actual performance of MRC in the presence of correlated interference. The dashed lines depict the scenario with a finite pool of $Q=24$ ...
Lastly, we discuss the limitations of access methods based on random selection, highlighting issues with their practical implementation.
While the general trend is preserved, i.e., higher $K$ and $M$ lead to higher rates, there are s...
We remark that although time-wise this algorithm is fast, the ratio constant of the yielded path over the optimal path has not been computed and is much larger than Christofides' $3/2$ ratio. In fact, in our algorithm, the yielded path has length at most $(300)^{9/2}\log 300$ ...
Later, Schul [Sch07] provided a modification of the algorithm so that the ratio of the length of the yielded path over the length of the optimal path is bounded by a constant $C$ independent of the dimension $N$. A variation of this algorithm also appears in [BNV19]. Here and for the rest of this paper...
The Analyst's Traveling Salesman Problem (ATSP) [Jon90] is a generalization of the TSP which asks for a curve of finite length that contains a given (finite or infinite) set $V\subset\mathbb{R}^{N}$. While the TSP (w...
It should be noted that the work of Jones [Jon90], Okikiolu [Oki92], and Schul [Sch07] provides the sets for which a solution exists but not the solution itself when the set is infinite. That is, they classify the sets which are contained in curves of finite length, but do not provide the parametrization of the curves... | C |
There is a generalization by Osserman and Trager [46] of Chow forms for varieties in multiprojective space. Our method to compute the Chow form is based on resultant computations
and this seamlessly extends to multiprojective space, see Lemma 4.5 for the complete intersection case and Theorem 4.8 for the general case. | There is a generalization by Osserman and Trager [46] of Chow forms for varieties in multiprojective space. Our method to compute the Chow form is based on resultant computations
and this seamlessly extends to multiprojective space, see Lemma 4.5 for the complete intersection case and Theorem 4.8 for the general case. | In [46], Osserman and Trager gave a generalization of Chow forms to multiprojetive varieties, i.e., varieties in the multiprojective space ℙ𝒏≔∏i=1lℙni≔superscriptℙ𝒏superscriptsubscriptproduct𝑖1𝑙superscriptℙsubscript𝑛𝑖\mathbb{P}^{\bm{n}}\coloneqq\prod_{i=1}^{l}\mathbb{P}^{n_{i}}blackboard_P start_POSTSUPERSCRIPT b... | We should emphasize that the generalization from the projective Chow forms to the multiprojective ones is far from straightforward both from the mathematical and algorithmic complexity point of views. Even though a multiprojective space is isomorphic to a projective variety via the Segre embedding, this requires adding... |
The multidegree[2] $\operatorname{mdeg}(V)$ ...
[2] This concept is also called the dimension or the multidimension of a multiprojective variety in the literature. We reserve the term dimension for the dimension of $V$ as a projective variety, e.g., the dimension of its Segre embedding.
To statistically confirm the associations observed between the target and self-reported discrete emotions, a Pearson's chi-squared test is used as implemented in the chisq.test() function in the R software. In this test, the null hypothesis ($H_{0}$) states t...
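For reference, the statistic behind this test can be computed by hand. The sketch below is illustrative only; note that R's chisq.test() applies Yates' continuity correction by default for 2×2 tables, which this plain Pearson statistic omits.

```python
def pearson_chi2(table):
    # Pearson's chi-squared statistic for a contingency table:
    # sum over cells of (observed - expected)^2 / expected, where the
    # expected counts come from the independence null hypothesis H0.
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    total = sum(rows)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = rows[i] * cols[j] / total  # expected count under H0
            stat += (obs - exp) ** 2 / exp
    return stat
```

Under $H_0$ the statistic follows a chi-squared distribution with $(r-1)(c-1)$ degrees of freedom, from which the p-value is read off.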
True Positive Rate (TPR): The performance of adversarial sample detectors is evaluated by calculating the ratio of correctly identified adversarial samples (true positives) to the total number of adversarial samples. This metric provides insight into the ability of a detector to correctly identify adversarial samples, ... |
Accuracy: The evaluation of model robustness against adversarial attacks employs accuracy as a key metric. Firstly, normal accuracy measures the model’s performance on clean, non-adversarial examples. A robust model should exhibit high accuracy on normal inputs, indicating its ability to correctly classify non-adversa... | True Positive Rate (TPR): The performance of adversarial sample detectors is evaluated by calculating the ratio of correctly identified adversarial samples (true positives) to the total number of adversarial samples. This metric provides insight into the ability of a detector to correctly identify adversarial samples, ... |
False Positive Rate (FPR): In addition to measuring the detection rate of the detector (TPR), assessing its performance necessitates the computation of the false positive rate (FPR). The FPR quantifies the detector’s susceptibility to incorrectly classifying normal examples as adversarial. A lower FPR indicates that t... |
Evasion Increase Rate (EIR): EIR measures the true positive rate when adversarial samples are introduced. A more precise analysis of an evasion attack compares the original true positive rate (without adversarial attack) with the true positive rate when an adversarial attack occurs (Chen et al., 2021a). Let OR represent t... | C
If, e.g., A is incorrectly chosen as the deviator, a similar computation to earlier yields that the partial sum of the series (5) up to N is Ω(log log N), which occurs with a probability approaching 0 as N → ∞.
H... |
If s ∉ D, the statistician is told the element z ∈ Z such that s ∈ [T_z]. The statistician then has to select an element j ∈ I. | the game has a value in mixed strategies,
and the statistician has an optimal strategy, ξ_n : Z_n → Δ(I). | The algorithm above can be adapted to this case.
Indeed, supposing that the realization s is such that the origin is visited finitely many times, then applying the above algorithm to the suffix of the realization after the last visit to the origin identifies Deviator with probability 1. | Roughly speaking, we consider a zero-sum game between an adversary and a statistician, in which the adversary chooses a deviation and the statistician, after observing the realization s, has to guess the deviator if s ∉ D. A strategy for the statistician in this game is a blame f... | D
Open-source attack modeling code. Another interesting trend in recent sensor attack works is that they often model the attack capability in the digital space for large-scale evaluation. For example, Man et al. [58] modeled the camera lens flare effect caused by attacker’s light beams in digital images, Ji et al. [65] ... | Among all the identified scientific gaps, the one on the general lack of system-level evaluation (§IV-A) is especially critical as proper evaluation methodology is crucial for valid scientific progress. In this section, we take the initiative to address this critical scientific methodology-level gap by developing an op... | Among all these scientific gaps, the one on the general lack of system-level evaluation is especially critical as it may lead to meaningless attack/defense progress at the system level due to the AI-to-system semantic gap (§IV-A). To effectively fill this gap, it is highly desired to have a community-level effort to co... |
To address the most critical scientific methodology-level gap, we take the initiative to develop an open-source, uniform, and extensible system-driven evaluation platform PASS for the semantic AD AI security research community. We also use our implemented prototype to showcase the capabilities and benefits of such a p... |
In this paper, we perform the first systematization of knowledge of the growing semantic AD AI security research space. We collect and analyze 53 such papers, and systematically taxonomize them based on research aspects critical for the security field. We summarize 6 most substantial scientific gaps based on both quan... | A |
This question asks whether the problem remains NP-complete if there is an additional restriction on how different the lengths of short and long intervals are.
If the length of short intervals is 1, and the length of long intervals is c, then, by making the value of c smaller, the graph class... | 2η²l – the cut between the gadgets of the chain and the long intervals of H covering them (the number of gadgets in the chain also equals l).
|
Alexey Barsukov is funded by the European Union (ERC, POCOCOP, 101071674). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council Executive Agency. Neither the European Union nor the granting authority can be held... | The link chains do not use long intervals of two separate lengths anymore.
The issue of non-consecutive link chains partially intersecting the same edge gadget is resolved by using a switch gadget that switches the relative positions of join gadgets of link chains. |
Consider only those summands that participate in the size of the cut when the coloring of the blocks alternates, i.e., when, for all i, j, y_i = z_j = 0. | B
With this approach, temporal context is increased during training as well as inference. Now, GN and ConvNeXt backbones outperform 2-stage and other end-to-end strategies (Fig. 3(b)). In BN models, however, this learning strategy is not implementable in a straightforward way since it requires selecting batches in tempo... | While the newer ConvNeXt backbone is not necessarily comparable to ResNet backbones, note that our proposed strategies are necessary
to achieve strong performance. Specifically, ConvNeXt models do not perform better than their ResNet variants with the CHE strategy (Fig. 3(b)) or in 2-stage settings (Fig. 3 & Table 7). Th... | Interestingly, the CHT strategy is not effective with FrozenBN. We found that models often collapse and, even after lowering the initial learning rate, only achieve subpar performance. The lack of proper normalization with FrozenBN might cause more instability than in GN or ConvNeXt models and we suspect that the non-i.i.d... | Across backbones, complete end-to-end training is more effective than freezing layers, possibly because anticipation is a more challenging task and requires more finetuning of CNN weights. Also, the CHE strategy outperforms CHT in this task. Presumably, the correlation between subsequent batches (and thus SGD steps) is... | Hypothesis 4: BatchNorm induces similar issues on natural-video datasets.
Fig. 5 confirms our findings on small-scale datasets from the natural-video domain (GTEA and 50Salads) by retraining the same models used for phase recognition on online action segmentation. We find that our main claims can be reproduced, namely ... | B |
Results on LFW, CFP-FP, AgeDB-30, and CALFW. FR on LFW, CFP-FP, AgeDB-30, and CALFW is straightforward. Thus, the performance was saturated. LFW, AgeDB-30, and CALFW contain 6,000 images, and CFP-FP has 6,000 images. They have 1:1 ratios between the positive and negative pairs. Verification accuracy was employed with t... |
In deep feature learning paradigms for pair similarity optimization, loss functions in FR can be categorized based on two approaches: metric loss (ML; e.g., triplet loss[23, 8] and N-pair loss[26]) and classification loss (CL; e.g., softmax loss[1, 21, 30]). The former directly performs the optimization with a pair of... | Test. Cosine similarity was used as a similarity score. Different evaluation metrics were applied depending on the FR tasks. In the verification task (1:1), verification accuracy using the best threshold was exploited for a dataset that has a small number of test images with the same ratio between positive and negative... |
Results on K-FACE. K-FACE focuses on FR under fine-grained conditions. It consists of 4.3 M images with 6 accessories, 30 illuminations, 3 expressions, and 20 poses for 400 persons. We adopted the same training and test splits used in MixFace [41]. The training split was composed of 3.8 M images with 370 persons. In p... |
Results on MegaFace. MegaFace consists of a gallery set of 1 M images with 690 K classes and probe photos of 100 K images with 530 classes. We followed the test protocol of ArcFace[4]. We removed noisy images and measured rank-1 accuracy for the 1 M distractor after following the identification scenarios using the dev... | C |
The experimental results are presented in five sub-sections: (i) FID for synthetic image assessment, (ii) human experts detecting synthetic images, (iii) data augmentation assessment, (iv) comparison between human experts and deep models for detecting AMD images, and (v) web-based tool to access and validate the deep ...
In the next experiment, we considered three deep architectures pre-trained with ImageNet dataset [51] for performance comparison when the model is augmented with synthetic images, i.e., SqueezeNet [52], AlexNet [53], and ResNet18 [54]. During training, synthetic and real images were mixed within each batch according t... |
Table 1 presents the FID values for each GAN-based architecture. StyleGAN2-ADA achieved the lowest FID value of 166, while EBGAN was placed last with the highest FID value of 380. The smaller the FID value, the better the quality of the generated image. Therefore, all further experiments... | We employed three evaluation measures, i.e., the Fréchet Inception Distance (FID), a well-known GAN evaluation score [40], the ability of human experts to identify the synthetic images and the classification accuracy. FID is often used to assess the quality and variety of the generated images and, even though it has be...
Once an image file is selected, the system automatically uploads the image to the cloud, performs the analysis in approximately ten to thirty seconds (largely depending on the internet conditions), and displays the output in the form of a figure. The output also informs the user of the success of uploading the file and gives it... | B
First, the visualization results from 16 persons are shown in Fig.10. In Fig.10, the first and third rows show the original facial expression images, and the second and fourth rows exhibit the matrix (4×4) of the final global weights w^g... | Some patches that are visually more discriminative are lightened with higher weights and some patches located at the non-crucial regions are cut down with smaller weights.
In summary, the analyses for non-local weights demonstrate that the proposed method can effectively and automatically enhance the significance of facial cru... | From Fig.11, it is seen that the non-local weight of each local patch is the same at the beginning of training, which implies that each local region is initially regarded as equally important.
With the training of our network, each local region is given different weights, and higher weights are given to some more discr... | From Fig.10, it is obvious that some crucial regions obtain higher weights and non-crucial regions get smaller weights for each facial expression.
For example, the areas including or around the eyes are given higher weights for the first person in the first row, where the maximum is given to the local region located at the c... | For the sixth person in the first row, four local regions (located at (3,2), (3,3), (4,2), (4,3)) including his mouth are boosted and given higher weights.
In the third and fourth rows, the local regions located around eyes and the mouth are boosted for the second person, and the whole regions including eyes are given ... | C |
𝔼|η|^{1−ε} = ∫ Pr(|η|^{1−ε} ≥ t) dt = 2∫ σ(−t^{1/(1−ε)}) dt ≤ 2∫ t^{−1/(1−ε)} dt < ∞ | This could occur by finding a better strategy than the ones in Section 3, for instance by slowly increasing δ as games are played, or by tightening the analysis in Theorem 4.5.
It’s possible that for heavy-tailed σ the lower bound is too loose, but for light-tailed σ the up... | Intuitively, the faster σ(−z) decays to 0 as z → ∞, the slower the rate. Indeed, as the next theorem makes explicit, there is a simple expression for the rate in terms of the f defined in (4). Note f is only well defined for {x : σ(−x) > 0}... | This bound is stronger the heavier the tail of σ. Consider, for instance
σ(z) = 1/(1 + e^{−cz})
This establishes the lower bound. For the upper bound, we cannot assume that the optimal strategy is for H to win every game. In particular, the function z ↦ z + σ(−2z) need not be monotone, so we cannot exclude the possibili... | A
This section summarizes previous research on automatic approaches for the identification of different types of instances, visualization methods for outlier/anomaly and rare category detection, and data-centric ML solutions from the visualization community. To underscore the uniqueness of our approach, we explain the di... | In the ML community, several methods for automatically categorizing data instances into different types exist, with a particular focus on the outlier/anomaly detection research in the past decades [CBK09, HA04]. Nevertheless, most algorithms cannot identify rare cases that are typically isolated groups, including a set... |
Density-based algorithms [HHHM11, HLL08] also work well with the detection of rare categories by discovering substantial changes in data densities using a KNN search in the high-dimensional space. But how to choose the best k-value for a given data set? While it is possible to estimate the best k-value automatically b... |
Another challenge related to the previous one is to identify common local characteristics of the instances in order to classify them into data types, as in the work of Napierala and Stefanowski [NS16] that acknowledges four types of data: safe, borderline, rare, and outliers (SBRO in short). As described before, depen... | With regard to undersampling, two advanced techniques for concurrently eliminating and maintaining instances are: one-sided selection (OSS) [KM97] and neighborhood cleaning rule (NCR) [Lau01]. The goal here is to remove ambiguous points on the class boundary and, at the same time, keep any nonredundant examples far fro... | A |
To evaluate the cost of FIRST transaction verifications by the smart contract (Protocol 7) we deployed the aggregated signature [19] verification function on a smart contract using the Solidity programming language.
We used elliptic curve pairing operations, such as addition, multiplication, and pairing checks introduc... | To evaluate the cost of FIRST transaction verifications by the smart contract (Protocol 7) we deployed the aggregated signature [19] verification function on a smart contract using the Solidity programming language.
We used elliptic curve pairing operations, such as addition, multiplication, and pairing checks introduc... | Our code, written in Solidity, is just a proof-of-concept and we have not spent time optimizing it; we expect this cost to be lower. The cost is largely due to the pairing operations in the verification function. This inefficiency is being addressed by the Ethereum community as per EIP-197 (https://eips.ethereum.or...) | Limitations: Adjusting the real-world delay time with the given VDF delay parameter for every user’s computational capabilities is a challenging and open research problem [16]. While it is an orthogonal task to ours, FIRST mitigates the problem by picking the t_1 ≫ t_2... | Many Ethereum Virtual Machine (EVM) based blockchains, such as Polygon and Fantom, have implemented the EIP-1559 patch. Despite the overall trend of EVM-based blockchains adopting EIP-1559, for completeness we also studied a non-EIP-1559 chain protocol. We replicated our analysis on the Binance Smart Chain (BSC) which ... | B
To show that results from Section 4 transfer to the setting with unit-level perturbations, we can define a new distribution over agent unobservables F̃ that is related to the original distribution over agent unobservables F. When agents with unobservables sampled ... | For any Z_i ∈ supp(F_Z), Z_i ∈ Int(𝒳).
We require the following assumption to guarantee that the transformed unobservables in supp(F̃) can be defined on the same spaces as the original unobservables in supp(F). | To show that results from Section 4 transfer to the setting with unit-level perturbations, we can define a new distribution over agent unobservables F̃ that is related to the original distribution over agent unobservables F. When agents with unobservables sampled ... | In the context of college admissions, Assumption 4 requires that 𝒳 is large enough that agents’ raw covariates do not “bunch” at the boundaries of 𝒳, and that 𝒞 is large enough that it contains cost functions that are linear offsets of the...
| B
COCO-on-Places [3]. As shown in Fig. 3(b), COCO-on-Places puts COCO objects [41] on spuriously correlated Places backgrounds [78]. For instance, buses mostly appear in front of balloons and birds in front of trees. The dataset provides three different test sets: a) biased backgrounds (in-distribution), which reflects ... | BAR.
First of all, BAR consists of only 1941 samples, so we pre-trained ResNet-18 and OccamResNet-18 on 100 classes of ImageNet (obtaining 92.6% and 92.1% top-5 accuracies respectively) before training on BAR. Without the pre-trained weights, BAR obtains 15-20% lower test set accuracies for both ResNet and OccamResNet ... | Biased Action Recognition (BAR) [47].
BAR reflects real world challenges where bias attributes are not explicitly labeled for debiasing algorithms, with the test set containing additional correlations not seen during training. The dataset consists of correlated action-background pairs, where the train set consists of s... | We introduce the OccamNet architecture, which has architectural inductive biases for favoring simpler solutions to help overcome dataset biases. OccamNets do not require the biases to be explicitly specified during training, unlike many state-of-the-art debiasing algorithms.
|
COCO-on-Places [3]. As shown in Fig. 3(b), COCO-on-Places puts COCO objects [41] on spuriously correlated Places backgrounds [78]. For instance, buses mostly appear in front of balloons and birds in front of trees. The dataset provides three different test sets: a) biased backgrounds (in-distribution), which reflects ... | B |
Implementation details for CFFM++. Our CFFM++ is built on CFFM. Once CFFM is trained, the corresponding encoder has the ability to extract informative features from video frames. Hence, we use the trained encoder from CFFM as the feature extractor for generating the global temporal contextual prototypes (§4.1). When g... | We also obtain results on the test set of the VSPW dataset from the VSPW2021 challenge server, which is shown in Tab. II. We can observe that the proposed CFFM surpasses the considered approaches. For example, upon MiT-B1, CFFM is clearly better than the baseline (SegFormer), with an mIoU gain of 1.6%.
The experimental... | Our experiments are mainly conducted on the VSPW dataset [17], which is the largest VSS benchmark. Its training, validation, and test sets have 2,806 clips (198,244 frames), 343 clips (24,502 frames), and 387 clips (28,887 frames), respectively. It contains diverse scenarios including both indoor and outdoor scenes, an... | Instead of directly modeling global relationships, we propose to model relationships only among necessary tokens for the joint learning of static and motional contexts. Our CFFM technique consists of two steps. The first step, Coarse-to-Fine Feature Assembling (CFFA), assembles the features extracted from neighbouring ... | For global temporal contexts, few VSS methods [17, 53] have exploited the contexts from the whole video. The modeling of global temporal contexts is usually achieved by a memory module in the form of a memory bank [17] or a tiny network [53] which is updated during inference. Although promising results have been achiev... | B |
The deep Learning Recommendation Model (DLRM) and the Time-Based Sequence Model (TBSM) are popular commercial models. These models are typically trained using either a hybrid CPU-GPU mode or a GPU-only mode [6, 7]. In the hybrid mode (Figure 1a), the CPU provides memory capacity for the embedding entries, while GPUs of... |
Alternatively, the GPU-only mode, illustrated in Figure 1b, employs multiple GPUs to store a single copy of the embeddings and trains in a model-parallel manner [9]. However, this approach necessitates continuous all-to-all communication between GPUs to share their embeddings. Additionally, in this mode, one would nee... |
In the GPU-only mode, multiple GPUs are used to store all embeddings and perform data-parallel neural network execution. However, this mode experiences low compute utilization primarily because recommendation models grow with the size of the embedding tables, resulting in a larger memory footprint rather than an incre... | It is unfair to compare Hotline, a hybrid training scheme, to a GPU-only training scheme. Hotline can train even large datasets such as Terabyte with a single GPU. These datasets would otherwise be unable to be trained on a single GPU. The GPU-only mode needs at least four GPUs for such datasets.
| The deep Learning Recommendation Model (DLRM) and the Time-Based Sequence Model (TBSM) are popular commercial models. These models are typically trained using either a hybrid CPU-GPU mode or a GPU-only mode [6, 7]. In the hybrid mode (Figure 1a), the CPU provides memory capacity for the embedding entries, while GPUs of... | A |
Note also that Theorem 4 is sufficient to guarantee that for a map that is affine-linear on cells of a finite polyhedral complex in ℝ^n, we need only perform computations at finitely many thresholds a ∈ ℝ... | This section is organized as follows. In Subsection 5.1, we establish terminology and notation. In particular, we recall the classical decomposition of polyhedral sets as a Minkowski sum (Theorem 5). In Subsection 5.2 we use this decomposition in the pointed case (see Definition 5.1) to construct a deformation retracti... | Let 𝒞 be a polyhedral complex embedded in ℝ^n in which all cells are pointed.
Define the canonical polytopal complex associated to 𝒞 to be the following family of sets: | This subsection extends the definitions and results of Subsection 5.2 to arbitrary polyhedral complexes 𝒞 in ℝ^n – dropping the requirement that all cells of 𝒞 be pointed.
|
To begin, we establish the Pointedness Dichotomy (Corollary 5.29): If 𝒞 is a connected polyhedral complex imbedded in ℝ^n, then either every cell of 𝒞 is pointed or every cell... | A
In this paper, we need to evolve the system from an initial ground state with strong transverse magnetic field to a state without any transverse magnetic field. In order to obtain this initial state, we have to train a beginning wave function whose parameters are random complex numbers with machine learning algorithms,... | These universal power-laws beyond KZM can be achieved by computing the higher order cumulants of the kink numbers. Since we adopt the PBC in the spin chain, the outcomes of the kink numbers are all even [45, 46]. Thus, in practice we compute the higher cumulants for the numbers of kink pairs, i.e., $\hat{\mathcal{N}}_P = \hat{\mathcal{N}}/2$...
Utilizing the machine learning method (see Section IV “Materials and Methods”), we consider the time evolution of a one-dimensional TFQIM (1) under a linear quench (2) through the critical point with various quench rates. We impose PBC on the spin chains; thus they satisfy even parity [40]. The coupling strengths bet...
In [15] the machine learning method was merely applied to unitary dynamics without phase transitions. Critical dynamics, i.e., the dynamics across the critical point of a phase transition, is more complex and has richer phenomena [39]. Critical slowing down near the phase transition point may invalidate the applic... | After preparing the initial ground state, the system will evolve according to the quench profile (2). Therefore, the wave function should also depend on time. To this end, we render the neural network parameters to be functions of time. Then, the parameters will be computed at every time step with the time-dependent VM... | D
We now report a couple of numerical tests. One result is the error analysis with analytic solutions. The other is to show that the proposed scheme can conserve the helicity. The numerical experiments are performed on a workstation with one 10-core Intel(R) Xeon(R) Silver 4210R CPU, one RTX A5000 GPU, 128GB RAM, and a Ubun...
We shall now look at the PINN model, which uses the strong form of PDE as a loss function. We continue to consider the following 3D Navier-Stokes equation with initial condition as given in (2.1). On the other hand, for the helicity conservation, we shall assume that the external force is zero. Therefore, we shall ass... |
We now report a couple of numerical tests. One result is the error analysis with analytic solutions. The other is to show that the proposed scheme can conserve the helicity. The numerical experiments are performed on a workstation with one 10-core Intel(R) Xeon(R) Silver 4210R CPU, one RTX A5000 GPU, 128GB RAM, and a Ubun...
In this section, we carry out a 3D error test with the following form of solutions on the domain Ω = [0,1]³. First, we generate analytic solutions of the equation, starting at | The rest of the paper is organized as follows. In Section 2, we provide preliminaries, notation and helicity-conservative finite element scheme. In Section 3, we present a PINN-based algorithm that preserves the helicity. In Section 4, we present numerical results on the convergence and helicity-preserving properties o... | C
The notion of data coverage is a related topic that has been studied across different settings [jin2020mithracoverage, asudeh2019assessing, lin2020identifying, asudeh2021identifying, tae2021slice, accinelli2021impact, moskovitch2020countata, accinelli2020coverage]. For categorical data, uncovered regions are ide... | Coverage on continuous space is studied in [asudeh2021identifying]. Accordingly, lack of coverage is identified as any point in the data space that doesn’t have enough points in a fixed-radius neighborhood around it.
Although coverage does not provide a score for an arbitrary query point, following the idea of whether t... |
Finally, we conduct an experiment to assess the capacity of data coverage techniques to create proper warnings and demonstrate their failure for query points that are in uncertain regions. To this end, using the continuous notion of data coverage [asudeh2021identifying] and tuned parameters of k = 50...
The literature on data coverage [asudeh2019assessing, asudeh2021identifying, lin2020identifying] only focuses on representation, and hence fails to capture uncertainty. Additionally, they only return a binary signal of whether to trust the outcome of the model for a query point or not, which practically is not very in... | We argue that, irrespective of the choice of model and its details, the prediction for a query point is not reliable if the model is not trained on instances similar to the query point or if there is a high variance in the vicinity of the query point in the training set.
Therefore, we introduce two data-centric reliabi... | A |