id stringlengths 9 10 | submitter stringlengths 1 64 ⌀ | authors stringlengths 4 7.24k | title stringlengths 1 278 | comments stringlengths 1 762 ⌀ | journal-ref stringlengths 1 557 ⌀ | doi stringlengths 11 153 ⌀ | report-no stringlengths 2 254 ⌀ | categories stringlengths 5 98 | license stringclasses 9 values | abstract stringlengths 6 3.8k | versions list | update_date timestamp[s] | authors_parsed list | predictions stringclasses 2 values | probabilities float64 0.5 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1004.3702 | Lizhi Du | Lizhi Du | A Polynomial time Algorithm for Hamilton Cycle with maximum Degree 3,
3SAT | 16 pages. This time, I add a detailed polynomial time algorithm and
proof for 3SAT | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Based on the famous Rotation-Extension technique, by creating the new
concepts and methods: broad cycle, main segment, useful cut and insert,
destroying edges for a main segment, main goal Hamilton cycle, depth-first
search tree, we develop a polynomial time algorithm for a famous NPC: the
Hamilton cycle problem. Thus we proved that NP=P. The key points of this paper
are: 1) there are two ways to get a Hamilton cycle in exponential time: a full
permutation of n vertices; or, choose n edges from all k edges, and check all
possible combinations. The main problem is: how to avoid checking all
combinations of n edges from all edges. My algorithm avoids this. Lemmas 1
and 2 are very important. They are the foundation ensuring that we can always get
a good branch in the depth-first search tree and can get a series of destroying
edges (all are bad edges) for this good branch in polynomial time. The
extraordinary insights are: destroying edges, a tree contains each main segment
at most once at any given time, and dynamic combinations. The difficult part
is to understand how to construct a main segment's series of destroying edges
by dynamic combinations. The proof logic is: if there is at least one Hamilton
cycle in the graph, we can always perform useful cut and inserts until a Hamilton
cycle is obtained. The number of useful cut and inserts is polynomial. So if at any
step we cannot have a useful cut and insert, this means that there are no
Hamilton cycles in the graph. In this version, I add a detailed polynomial time
algorithm and proof for 3SAT.
| [
{
"version": "v1",
"created": "Mon, 12 Apr 2010 04:39:27 GMT"
},
{
"version": "v10",
"created": "Mon, 5 Nov 2012 01:44:46 GMT"
},
{
"version": "v11",
"created": "Thu, 31 Jan 2013 11:15:53 GMT"
},
{
"version": "v12",
"created": "Mon, 4 Nov 2013 14:09:42 GMT"
},
{
"version": "v13",
"created": "Thu, 7 Aug 2014 02:01:59 GMT"
},
{
"version": "v14",
"created": "Wed, 20 Aug 2014 01:20:04 GMT"
},
{
"version": "v15",
"created": "Tue, 2 Sep 2014 15:17:04 GMT"
},
{
"version": "v16",
"created": "Thu, 18 Sep 2014 12:10:37 GMT"
},
{
"version": "v17",
"created": "Mon, 13 Oct 2014 03:52:56 GMT"
},
{
"version": "v18",
"created": "Mon, 17 Nov 2014 10:54:13 GMT"
},
{
"version": "v19",
"created": "Mon, 1 Dec 2014 11:05:59 GMT"
},
{
"version": "v2",
"created": "Tue, 23 Nov 2010 22:00:44 GMT"
},
{
"version": "v20",
"created": "Mon, 22 Dec 2014 04:48:29 GMT"
},
{
"version": "v21",
"created": "Mon, 11 May 2015 12:11:22 GMT"
},
{
"version": "v22",
"created": "Thu, 24 Sep 2015 13:53:17 GMT"
},
{
"version": "v23",
"created": "Mon, 30 Nov 2015 04:27:21 GMT"
},
{
"version": "v24",
"created": "Mon, 21 Mar 2016 14:41:54 GMT"
},
{
"version": "v25",
"created": "Mon, 13 Jun 2016 06:25:23 GMT"
},
{
"version": "v26",
"created": "Fri, 5 Aug 2016 18:15:23 GMT"
},
{
"version": "v27",
"created": "Mon, 29 Aug 2016 02:17:42 GMT"
},
{
"version": "v28",
"created": "Thu, 10 Nov 2016 05:51:30 GMT"
},
{
"version": "v29",
"created": "Mon, 23 Jan 2017 14:35:27 GMT"
},
{
"version": "v3",
"created": "Mon, 10 Jan 2011 02:50:16 GMT"
},
{
"version": "v30",
"created": "Tue, 28 Feb 2017 13:16:20 GMT"
},
{
"version": "v31",
"created": "Wed, 22 Mar 2017 12:20:11 GMT"
},
{
"version": "v32",
"created": "Tue, 11 Apr 2017 08:21:53 GMT"
},
{
"version": "v33",
"created": "Mon, 19 Jun 2017 11:57:26 GMT"
},
{
"version": "v34",
"created": "Wed, 17 Jan 2018 12:32:05 GMT"
},
{
"version": "v35",
"created": "Tue, 13 Feb 2018 04:04:57 GMT"
},
{
"version": "v36",
"created": "Mon, 12 Mar 2018 11:17:48 GMT"
},
{
"version": "v37",
"created": "Mon, 11 Jun 2018 01:14:42 GMT"
},
{
"version": "v38",
"created": "Wed, 11 Jul 2018 10:27:51 GMT"
},
{
"version": "v39",
"created": "Mon, 30 Jul 2018 22:02:52 GMT"
},
{
"version": "v4",
"created": "Sun, 1 May 2011 01:32:02 GMT"
},
{
"version": "v40",
"created": "Tue, 21 Aug 2018 00:01:46 GMT"
},
{
"version": "v41",
"created": "Sun, 2 Sep 2018 23:24:26 GMT"
},
{
"version": "v42",
"created": "Tue, 18 Sep 2018 07:54:59 GMT"
},
{
"version": "v43",
"created": "Wed, 24 Oct 2018 01:58:41 GMT"
},
{
"version": "v44",
"created": "Thu, 7 Feb 2019 04:25:15 GMT"
},
{
"version": "v45",
"created": "Thu, 21 Mar 2019 11:10:18 GMT"
},
{
"version": "v46",
"created": "Thu, 2 May 2019 01:32:57 GMT"
},
{
"version": "v47",
"created": "Mon, 24 Jun 2019 00:56:03 GMT"
},
{
"version": "v48",
"created": "Thu, 10 Oct 2019 07:09:08 GMT"
},
{
"version": "v49",
"created": "Sun, 17 Nov 2019 01:38:36 GMT"
},
{
"version": "v5",
"created": "Fri, 7 Oct 2011 01:39:26 GMT"
},
{
"version": "v50",
"created": "Thu, 23 Jan 2020 05:49:11 GMT"
},
{
"version": "v51",
"created": "Mon, 27 Apr 2020 00:24:06 GMT"
},
{
"version": "v52",
"created": "Sun, 7 Jun 2020 21:56:02 GMT"
},
{
"version": "v53",
"created": "Mon, 6 Jul 2020 10:07:25 GMT"
},
{
"version": "v54",
"created": "Sun, 2 Aug 2020 22:55:43 GMT"
},
{
"version": "v55",
"created": "Wed, 2 Sep 2020 01:02:09 GMT"
},
{
"version": "v56",
"created": "Thu, 8 Oct 2020 01:05:54 GMT"
},
{
"version": "v57",
"created": "Tue, 10 Nov 2020 14:01:31 GMT"
},
{
"version": "v58",
"created": "Thu, 3 Dec 2020 06:27:25 GMT"
},
{
"version": "v59",
"created": "Wed, 20 Jan 2021 11:52:23 GMT"
},
{
"version": "v6",
"created": "Fri, 6 Apr 2012 11:16:37 GMT"
},
{
"version": "v60",
"created": "Tue, 2 Feb 2021 01:58:47 GMT"
},
{
"version": "v61",
"created": "Thu, 8 Apr 2021 07:36:54 GMT"
},
{
"version": "v62",
"created": "Mon, 10 May 2021 00:01:29 GMT"
},
{
"version": "v63",
"created": "Tue, 3 Aug 2021 12:02:09 GMT"
},
{
"version": "v64",
"created": "Thu, 30 Sep 2021 08:07:36 GMT"
},
{
"version": "v65",
"created": "Thu, 4 Nov 2021 13:33:17 GMT"
},
{
"version": "v66",
"created": "Tue, 14 Dec 2021 20:57:57 GMT"
},
{
"version": "v67",
"created": "Mon, 10 Jan 2022 09:58:37 GMT"
},
{
"version": "v68",
"created": "Sun, 24 Apr 2022 06:42:13 GMT"
},
{
"version": "v69",
"created": "Tue, 23 Aug 2022 06:41:40 GMT"
},
{
"version": "v7",
"created": "Sun, 27 May 2012 08:15:49 GMT"
},
{
"version": "v70",
"created": "Mon, 3 Oct 2022 09:26:26 GMT"
},
{
"version": "v71",
"created": "Thu, 10 Nov 2022 13:27:56 GMT"
},
{
"version": "v72",
"created": "Wed, 18 Jan 2023 08:58:39 GMT"
},
{
"version": "v73",
"created": "Mon, 13 Mar 2023 03:25:16 GMT"
},
{
"version": "v74",
"created": "Sun, 2 Apr 2023 10:34:41 GMT"
},
{
"version": "v75",
"created": "Mon, 1 May 2023 05:26:28 GMT"
},
{
"version": "v76",
"created": "Sun, 4 Jun 2023 10:38:47 GMT"
},
{
"version": "v77",
"created": "Sun, 9 Jul 2023 23:25:20 GMT"
},
{
"version": "v78",
"created": "Mon, 21 Aug 2023 08:40:32 GMT"
},
{
"version": "v79",
"created": "Wed, 13 Sep 2023 09:55:30 GMT"
},
{
"version": "v8",
"created": "Wed, 15 Aug 2012 12:11:34 GMT"
},
{
"version": "v80",
"created": "Thu, 5 Oct 2023 13:29:54 GMT"
},
{
"version": "v9",
"created": "Wed, 29 Aug 2012 06:39:31 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Du",
"Lizhi",
""
]
] | not_new_dataset | 0.997305 |
1912.05957 | Hamid Mohammadi | Hamid Mohammadi, Seyed Hossein Khasteh | Text as Environment: A Deep Reinforcement Learning Text Readability
Assessment Model | null | null | null | null | cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | Evaluating the readability of a text can significantly facilitate the precise
expression of information in written form. The formulation of text readability
assessment involves the identification of meaningful properties of the text
regardless of its length. Sophisticated features and models are used to
evaluate the comprehensibility of texts accurately. Despite this, the problem
of assessing texts' readability efficiently remains relatively untouched. The
efficiency of state-of-the-art text readability assessment models can be
further improved using deep reinforcement learning models. Using a hard
attention-based active inference technique, the proposed approach makes
efficient use of input text and computational resources. Through the use of
semi-supervised signals, the reinforcement learning model uses the minimum
amount of text in order to determine the text's readability. A comparison of the
model on Weebit and Cambridge Exams with state-of-the-art models, such as the
BERT text readability model, shows that it is capable of achieving
state-of-the-art accuracy with a significantly smaller amount of input text
than other models.
| [
{
"version": "v1",
"created": "Thu, 12 Dec 2019 13:54:09 GMT"
},
{
"version": "v2",
"created": "Sun, 15 Dec 2019 15:46:55 GMT"
},
{
"version": "v3",
"created": "Wed, 4 Oct 2023 19:09:25 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Mohammadi",
"Hamid",
""
],
[
"Khasteh",
"Seyed Hossein",
""
]
] | not_new_dataset | 0.997235 |
2004.05672 | Julliano Rosa Nascimento | Flavia Bonomo-Braberman, Julliano R. Nascimento, Fabiano S. Oliveira,
U\'everton S. Souza, and Jayme L. Szwarcfiter | Linear-time Algorithms for Eliminating Claws in Graphs | 20 pages | International Transactions in Operational Research 31 (2024),
296--315 | 10.1111/itor.13100 | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Since many NP-complete graph problems have been shown polynomial-time
solvable when restricted to claw-free graphs, we study the problem of
determining the distance of a given graph to a claw-free graph, considering
vertex elimination as measure. CLAW-FREE VERTEX DELETION (CFVD) consists of
determining the minimum number of vertices to be removed from a graph such that
the resulting graph is claw-free. Although CFVD is NP-complete in general and
recognizing claw-free graphs is still a challenge, where the current best
algorithm for a graph $G$ has the same running time as the best algorithm for
matrix multiplication, we present linear-time algorithms for CFVD on weighted
block graphs and weighted graphs with bounded treewidth. Furthermore, we show
that this problem can be solved in linear time by a simpler algorithm on
forests, and we determine the exact values for full $k$-ary trees. On the other
hand, we show that CLAW-FREE VERTEX DELETION is NP-complete even when the input
graph is a split graph. We also show that the problem is hard to approximate
within any constant factor better than $2$, assuming the Unique Games
Conjecture.
| [
{
"version": "v1",
"created": "Sun, 12 Apr 2020 18:49:41 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Bonomo-Braberman",
"Flavia",
""
],
[
"Nascimento",
"Julliano R.",
""
],
[
"Oliveira",
"Fabiano S.",
""
],
[
"Souza",
"Uéverton S.",
""
],
[
"Szwarcfiter",
"Jayme L.",
""
]
] | not_new_dataset | 0.997512 |
2010.11559 | Yangjing Zhang | Yangjing Zhang, Kim-Chuan Toh, Defeng Sun | Learning Graph Laplacian with MCP | 32 pages | null | null | null | cs.LG math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the problem of learning a graph under the Laplacian constraint
with a non-convex penalty: minimax concave penalty (MCP). For solving the MCP
penalized graphical model, we design an inexact proximal difference-of-convex
algorithm (DCA) and prove its convergence to critical points. We note that each
subproblem of the proximal DCA enjoys the nice property that the objective
function in its dual problem is continuously differentiable with a semismooth
gradient. Therefore, we apply an efficient semismooth Newton method to
subproblems of the proximal DCA. Numerical experiments on various synthetic and
real data sets demonstrate the effectiveness of the non-convex penalty MCP in
promoting sparsity. Compared with the existing state-of-the-art method, our
method is demonstrated to be more efficient and reliable for learning graph
Laplacian with MCP.
| [
{
"version": "v1",
"created": "Thu, 22 Oct 2020 09:33:49 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Oct 2023 08:56:20 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Zhang",
"Yangjing",
""
],
[
"Toh",
"Kim-Chuan",
""
],
[
"Sun",
"Defeng",
""
]
] | not_new_dataset | 0.997341 |
2011.15122 | Willem van Jaarsveld | Tarkan Temiz\"oz, Christina Imdahl, Remco Dijkman, Douniel
Lamghari-Idrissi, Willem van Jaarsveld | Deep Controlled Learning for Inventory Control | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Problem Definition: Are traditional deep reinforcement learning (DRL)
algorithms, developed for a broad range of purposes including game-play and
robotics, the most suitable machine learning algorithms for applications in
inventory control? To what extent would DRL algorithms tailored to the unique
characteristics of inventory control problems provide superior performance
compared to DRL and traditional benchmarks? Methodology/results: We propose and
study Deep Controlled Learning (DCL), a new DRL framework based on approximate
policy iteration specifically designed to tackle inventory problems.
Comparative evaluations reveal that DCL outperforms existing state-of-the-art
heuristics in lost sales inventory control, perishable inventory systems, and
inventory systems with random lead times, achieving lower average costs across
all test instances and maintaining an optimality gap of no more than 0.1\%.
Notably, the same hyperparameter set is utilized across all experiments,
underscoring the robustness and generalizability of the proposed method.
Managerial implications: These substantial performance and robustness
improvements pave the way for the effective application of tailored DRL
algorithms to inventory management problems, empowering decision-makers to
optimize stock levels, minimize costs, and enhance responsiveness across
various industries.
| [
{
"version": "v1",
"created": "Mon, 30 Nov 2020 18:53:08 GMT"
},
{
"version": "v2",
"created": "Thu, 9 Sep 2021 10:08:31 GMT"
},
{
"version": "v3",
"created": "Tue, 9 Nov 2021 14:59:21 GMT"
},
{
"version": "v4",
"created": "Fri, 12 Nov 2021 11:47:09 GMT"
},
{
"version": "v5",
"created": "Mon, 25 Sep 2023 08:06:08 GMT"
},
{
"version": "v6",
"created": "Thu, 28 Sep 2023 06:37:18 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Temizöz",
"Tarkan",
""
],
[
"Imdahl",
"Christina",
""
],
[
"Dijkman",
"Remco",
""
],
[
"Lamghari-Idrissi",
"Douniel",
""
],
[
"van Jaarsveld",
"Willem",
""
]
] | not_new_dataset | 0.997478 |
2102.00696 | Selim Furkan Tekin | Selim Furkan Tekin, Arda Fazla and Suleyman Serdar Kozat | Numerical Weather Forecasting using Convolutional-LSTM with Attention
and Context Matcher Mechanisms | - In our journal submission, we removed the integration of the
observational data section since it was not used in the experiments. Thus, we
also removed the authors from the paper who were responsible for that
section. - In the second version, we also performed an experiment on
WeatherBench. We compare our results with the Physical Weather Forecasting
Models | null | null | null | cs.LG cs.AI cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Numerical weather forecasting using high-resolution physical models often
requires extensive computational resources on supercomputers, which limits
their use in most real-life applications. As a remedy, applying deep
learning methods has revealed innovative solutions within this field. To this
end, we introduce a novel deep learning architecture for forecasting
high-resolution spatio-temporal weather data. Our approach extends the
conventional encoder-decoder structure by integrating Convolutional Long-short
Term Memory and Convolutional Neural Networks. In addition, we incorporate
attention and context matcher mechanisms into the model architecture. Our
Weather Model achieves significant performance improvements compared to
baseline deep learning models, including ConvLSTM, TrajGRU, and U-Net. Our
experimental evaluation involves high-scale, real-world benchmark numerical
weather datasets, namely the ERA5 hourly dataset on pressure levels and
WeatherBench. Our results demonstrate substantial improvements in identifying
spatial and temporal correlations with attention matrices focusing on distinct
parts of the input series to model atmospheric circulations. We also compare
our model with high-resolution physical models using the benchmark metrics and
show that our Weather Model is accurate and easy to interpret.
| [
{
"version": "v1",
"created": "Mon, 1 Feb 2021 08:30:42 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Oct 2023 18:56:52 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Tekin",
"Selim Furkan",
""
],
[
"Fazla",
"Arda",
""
],
[
"Kozat",
"Suleyman Serdar",
""
]
] | not_new_dataset | 0.997217 |
2103.04904 | Laszlo Csirmaz | Laszlo Csirmaz, Franti\v{s}ek Mat\'u\v{s} and Carles Padr\'o | Bipartite secret sharing and staircases | To appear in Discrete Mathematics | null | null | null | cs.CR cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bipartite secret sharing schemes have a bipartite access structure in which
the set of participants is divided into two parts and all participants in the
same part play an equivalent role. Such a bipartite scheme can be described by
a \emph{staircase}: the collection of its minimal points. The complexity of a
scheme is the maximal share size relative to the secret size; and the
$\kappa$-complexity of an access structure is the best lower bound provided by
the entropy method. An access structure is $\kappa$-ideal if it has
$\kappa$-complexity 1. Motivated by the abundance of open problems in this
area, the main results can be summarized as follows. First, a new
characterization of $\kappa$-ideal multipartite access structures is given
which offers a straightforward and simple approach to describe ideal bipartite
and tripartite access structures. Second, the $\kappa$-complexity is determined
for a range of bipartite access structures, including those determined by two
points, staircases with equal widths and heights, and staircases with all
heights 1. Third, matching linear schemes are presented for some non-ideal
cases, including staircases where all heights are 1 and all widths are equal.
Finally, finding the Shannon complexity of a bipartite access structure can be
considered as a discrete submodular optimization problem. An interesting and
intriguing continuous version is defined which might give further insight to
the large-scale behavior of these optimization problems.
| [
{
"version": "v1",
"created": "Mon, 8 Mar 2021 17:09:43 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Oct 2023 14:19:21 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Csirmaz",
"Laszlo",
""
],
[
"Matúš",
"František",
""
],
[
"Padró",
"Carles",
""
]
] | not_new_dataset | 0.997313 |
2104.03937 | Flavia Bonomo | Flavia Bonomo-Braberman and Gast\'on Abel Brito | Intersection models and forbidden pattern characterizations for 2-thin
and proper 2-thin graphs | An extended abstract of this work, entitled "Intersection models for
2-thin and proper 2-thin graphs", was presented at LAGOS 2021 and appears in
Procedia Computer Science 195 (2021), 221-229 (Proc. LAGOS'21, Sao Paulo,
Brazil) | Discrete Applied Mathematics 339 (2023), 53-77 | 10.1016/j.dam.2023.06.013 | null | cs.DM math.CO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The \emph{thinness} of a graph is a width parameter that generalizes some
properties of interval graphs, which are exactly the graphs of thinness one.
Graphs with thinness at most two include, for example, bipartite convex graphs.
Many NP-complete problems can be solved in polynomial time for graphs with
bounded thinness, given a suitable representation of the graph. \emph{Proper
thinness} is defined analogously, generalizing proper interval graphs, and a
larger family of NP-complete problems are known to be polynomially solvable for
graphs with bounded proper thinness.
The complexity of recognizing 2-thin and proper 2-thin graphs is still open.
In this work, we present characterizations of 2-thin and proper 2-thin graphs
as intersection graphs of rectangles in the plane, as vertex intersection
graphs of paths on a grid (VPG graphs), and by forbidden ordered patterns. We
also prove that independent 2-thin graphs are exactly the interval bigraphs,
and that proper independent 2-thin graphs are exactly the bipartite permutation
graphs.
Finally, we take a step towards placing the thinness and its variations in
the landscape of width parameters, by upper bounding the proper thinness in
terms of the bandwidth.
| [
{
"version": "v1",
"created": "Thu, 8 Apr 2021 17:31:41 GMT"
},
{
"version": "v2",
"created": "Sat, 1 Apr 2023 19:20:18 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Bonomo-Braberman",
"Flavia",
""
],
[
"Brito",
"Gastón Abel",
""
]
] | not_new_dataset | 0.99737 |
2104.07454 | Rohitash Chandra | Animesh Renanse, Alok Sharma, Rohitash Chandra | Memory Capacity of Recurrent Neural Networks with Matrix Representation | null | null | null | null | cs.LG cs.AI cs.CC | http://creativecommons.org/licenses/by/4.0/ | It is well known that canonical recurrent neural networks (RNNs) face
limitations in learning long-term dependencies which have been addressed by
memory structures in long short-term memory (LSTM) networks. Neural Turing
machines (NTMs) are novel RNNs that implement the notion of programmable
computers with neural network controllers that can learn simple algorithmic
tasks. Matrix neural networks feature matrix representation which inherently
preserves the spatial structure of data when compared to canonical neural
networks that use vector-based representation. One may then argue that neural
networks with matrix representations may have the potential to provide better
memory capacity. In this paper, we define and study a probabilistic notion of
memory capacity based on Fisher information for matrix-based RNNs. We find
bounds on memory capacity for such networks under various hypotheses and
compare them with their vector counterparts. In particular, we show that the
memory capacity of such networks is bounded by $N^2$ for $N\times N$ state
matrix which generalizes the one known for vector networks. We also show and
analyze the increase in memory capacity for such networks which is introduced
when one exhibits an external state memory, such as NTMs. Consequently, we
construct NTMs with RNN controllers with matrix-based representation of
external memory, leading us to introduce Matrix NTMs. We demonstrate the
performance of this class of memory networks under certain algorithmic learning
tasks such as copying and recall and compare it with Matrix RNNs. We find an
improvement in the performance of Matrix NTMs by the addition of external
memory, in comparison to Matrix RNNs.
| [
{
"version": "v1",
"created": "Sun, 11 Apr 2021 23:43:28 GMT"
},
{
"version": "v2",
"created": "Sun, 30 Oct 2022 06:43:49 GMT"
},
{
"version": "v3",
"created": "Thu, 5 Oct 2023 03:47:41 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Renanse",
"Animesh",
""
],
[
"Sharma",
"Alok",
""
],
[
"Chandra",
"Rohitash",
""
]
] | not_new_dataset | 0.997475 |
2105.07099 | Seyed Omid Davoudi | Omid Davoodi, Majid Komeili | Feature-Based Interpretable Reinforcement Learning based on
State-Transition Models | null | null | 10.1109/SMC52423.2021.9658917 | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Growing concerns regarding the operational usage of AI models in the
real-world has caused a surge of interest in explaining AI models' decisions to
humans. Reinforcement Learning is not an exception in this regard. In this
work, we propose a method for offering local explanations on risk in
reinforcement learning. Our method only requires a log of previous interactions
between the agent and the environment to create a state-transition model. It is
designed to work on RL environments with either continuous or discrete state
and action spaces. After creating the model, actions of any agent can be
explained in terms of the features most influential in increasing or decreasing
risk or any other desirable objective function in the locality of the agent.
Through experiments, we demonstrate the effectiveness of the proposed method in
providing such explanations.
| [
{
"version": "v1",
"created": "Fri, 14 May 2021 23:43:11 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Davoodi",
"Omid",
""
],
[
"Komeili",
"Majid",
""
]
] | not_new_dataset | 0.997394 |
2107.08086 | Raman Goyal | Raman Goyal, Ran Wang, Mohamed Naveed Gul Mohamed, Aayushman Sharma,
Suman Chakravorty | An Information-state based Approach to the Optimal Output Feedback
Control of Nonlinear Systems | null | null | null | null | cs.RO cs.SY eess.SY | http://creativecommons.org/licenses/by/4.0/ | This paper develops a data-based approach to the closed-loop output feedback
control of nonlinear dynamical systems with a partial nonlinear observation
model. We propose an information state based approach to rigorously transform
the partially observed problem into a fully observed problem where the
information state consists of the past several observations and control inputs.
We further show the equivalence of the transformed and the initial partially
observed optimal control problems and provide the conditions to solve for the
deterministic optimal solution. We develop a data based generalization of the
iterative Linear Quadratic Regulator (iLQR) to partially observed systems using
a local linear time varying model of the information state dynamics
approximated by an Autoregressive moving average (ARMA) model, that is
generated using only the input-output data. This open-loop trajectory
optimization solution is then used to design a local feedback control law, and
the composite law then provides an optimum solution to the partially observed
feedback design problem. The efficacy of the developed method is shown by
controlling complex high dimensional nonlinear dynamical systems in the
presence of model and sensing uncertainty.
| [
{
"version": "v1",
"created": "Fri, 16 Jul 2021 19:21:43 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Oct 2023 16:28:20 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Goyal",
"Raman",
""
],
[
"Wang",
"Ran",
""
],
[
"Mohamed",
"Mohamed Naveed Gul",
""
],
[
"Sharma",
"Aayushman",
""
],
[
"Chakravorty",
"Suman",
""
]
] | not_new_dataset | 0.99729 |
2108.05641 | Jinpeng Chen | Jinpeng Chen, Haiyang Li, Xudong Zhang, Fan Zhang, Senzhang Wang,
Kaimin Wei and Jiaqi Ji | SR-HetGNN:Session-based Recommendation with Heterogeneous Graph Neural
Network | null | null | null | null | cs.IR cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Session-Based Recommendation System aims to predict the user's next click
based on their previous session sequence. The current studies generally learn
user preferences according to the transitions of items in the user's session
sequence. However, other effective information in the session sequence, such as
user profiles, is largely ignored, which may leave the model unable to learn
the user's specific preferences. In this paper, we propose SR-HetGNN, a novel
session recommendation method that uses a heterogeneous graph neural network
(HetGNN) to learn session embeddings and capture the specific preferences of
anonymous users. Specifically, SR-HetGNN first constructs heterogeneous graphs
containing various types of nodes according to the session sequence, which can
capture the dependencies among items, users, and sessions. Second, HetGNN
captures the complex transitions between items and learns the item embeddings
containing user information. Finally, local and global session embeddings are
combined with the attentional network to obtain the final session embedding,
considering the influence of users' long and short-term preferences. SR-HetGNN
is shown to be superior to the existing state-of-the-art session-based
recommendation methods through extensive experiments over two real large
datasets Diginetica and Tmall.
| [
{
"version": "v1",
"created": "Thu, 12 Aug 2021 10:12:48 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Oct 2023 03:21:08 GMT"
},
{
"version": "v3",
"created": "Thu, 5 Oct 2023 08:28:44 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Chen",
"Jinpeng",
""
],
[
"Li",
"Haiyang",
""
],
[
"Zhang",
"Xudong",
""
],
[
"Zhang",
"Fan",
""
],
[
"Wang",
"Senzhang",
""
],
[
"Wei",
"Kaimin",
""
],
[
"Ji",
"Jiaqi",
""
]
] | not_new_dataset | 0.997301 |
2109.03890 | Vignesh Viswanathan | Gagan Biradar, Vignesh Viswanathan, Yair Zick | Model Explanations via the Axiomatic Causal Lens | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Explaining the decisions of black-box models is a central theme in the study
of trustworthy ML. Numerous measures have been proposed in the literature;
however, none of them take an axiomatic approach to causal explainability. In
this work, we propose three explanation measures which aggregate the set of all
but-for causes -- a necessary and sufficient explanation -- into feature
importance weights. Our first measure is a natural adaptation of Chockler and
Halpern's notion of causal responsibility, whereas the other two correspond to
existing game-theoretic influence measures. We present an axiomatic treatment
for our proposed indices, showing that they can be uniquely characterized by a
set of desirable properties. We also extend our approach to derive a new method
to compute the Shapley-Shubik and Banzhaf indices for black-box model
explanations. Finally, we analyze and compare the necessity and sufficiency of
all our proposed explanation measures in practice using the Adult-Income
dataset. Thus, our work is the first to formally bridge the gap between model
explanations, game-theoretic influence, and causal analysis.
| [
{
"version": "v1",
"created": "Wed, 8 Sep 2021 19:33:52 GMT"
},
{
"version": "v2",
"created": "Fri, 17 Sep 2021 14:17:59 GMT"
},
{
"version": "v3",
"created": "Mon, 31 Jan 2022 23:50:48 GMT"
},
{
"version": "v4",
"created": "Mon, 11 Sep 2023 19:33:45 GMT"
},
{
"version": "v5",
"created": "Wed, 27 Sep 2023 20:17:38 GMT"
},
{
"version": "v6",
"created": "Wed, 4 Oct 2023 20:36:32 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Biradar",
"Gagan",
""
],
[
"Viswanathan",
"Vignesh",
""
],
[
"Zick",
"Yair",
""
]
] | not_new_dataset | 0.997332 |
2109.04939 | Ryo Yoshida | Ryo Yoshida, Hiroshi Noji, Yohei Oseki | Modeling Human Sentence Processing with Left-Corner Recurrent Neural
Network Grammars | Accepted by EMNLP 2021 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In computational linguistics, it has been shown that hierarchical structures
make language models (LMs) more human-like. However, the previous literature
has been agnostic about a parsing strategy of the hierarchical models. In this
paper, we investigated whether hierarchical structures make LMs more
human-like, and if so, which parsing strategy is most cognitively plausible. In
order to address this question, we evaluated three LMs against human reading
times in Japanese with head-final left-branching structures: Long Short-Term
Memory (LSTM) as a sequential model and Recurrent Neural Network Grammars
(RNNGs) with top-down and left-corner parsing strategies as hierarchical
models. Our computational modeling demonstrated that left-corner RNNGs
outperformed top-down RNNGs and LSTM, suggesting that hierarchical and
left-corner architectures are more cognitively plausible than top-down or
sequential architectures. In addition, the relationships between the cognitive
plausibility and (i) perplexity, (ii) parsing, and (iii) beam size will also be
discussed.
| [
{
"version": "v1",
"created": "Fri, 10 Sep 2021 15:35:00 GMT"
},
{
"version": "v2",
"created": "Thu, 11 May 2023 02:41:41 GMT"
},
{
"version": "v3",
"created": "Thu, 5 Oct 2023 10:33:42 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Yoshida",
"Ryo",
""
],
[
"Noji",
"Hiroshi",
""
],
[
"Oseki",
"Yohei",
""
]
] | not_new_dataset | 0.997377 |
2110.03991 | John Stephan | Rachid Guerraoui, Nirupam Gupta, Rafael Pinot, Sebastien Rouault, and
John Stephan | Combining Differential Privacy and Byzantine Resilience in Distributed
SGD | null | null | null | null | cs.LG cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Privacy and Byzantine resilience (BR) are two crucial requirements of
modern-day distributed machine learning. The two concepts have been extensively
studied individually but the question of how to combine them effectively
remains unanswered. This paper contributes to addressing this question by
studying the extent to which the distributed SGD algorithm, in the standard
parameter-server architecture, can learn an accurate model despite (a) a
fraction of the workers being malicious (Byzantine), and (b) the other
fraction, whilst being honest, providing noisy information to the server to
ensure differential privacy (DP). We first observe that the integration of
standard practices in DP and BR is not straightforward. In fact, we show that
many existing results on the convergence of distributed SGD under Byzantine
faults, especially those relying on $(\alpha,f)$-Byzantine resilience, are
rendered invalid when honest workers enforce DP. To circumvent this
shortcoming, we revisit the theory of $(\alpha,f)$-BR to obtain an approximate
convergence guarantee. Our analysis provides key insights on how to improve
this guarantee through hyperparameter optimization. Essentially, our
theoretical and empirical results show that (1) an imprudent combination of
standard approaches to DP and BR might be fruitless, but (2) by carefully
re-tuning the learning algorithm, we can obtain reasonable learning accuracy
while simultaneously guaranteeing DP and BR.
| [
{
"version": "v1",
"created": "Fri, 8 Oct 2021 09:23:03 GMT"
},
{
"version": "v2",
"created": "Wed, 20 Oct 2021 14:21:46 GMT"
},
{
"version": "v3",
"created": "Tue, 26 Oct 2021 13:37:16 GMT"
},
{
"version": "v4",
"created": "Thu, 5 Oct 2023 09:03:58 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Guerraoui",
"Rachid",
""
],
[
"Gupta",
"Nirupam",
""
],
[
"Pinot",
"Rafael",
""
],
[
"Rouault",
"Sebastien",
""
],
[
"Stephan",
"John",
""
]
] | not_new_dataset | 0.997476 |
2110.14883 | Yang You | Shenggui Li and Hongxin Liu and Zhengda Bian and Jiarui Fang and
Haichen Huang and Yuliang Liu and Boxiang Wang and Yang You | Colossal-AI: A Unified Deep Learning System For Large-Scale Parallel
Training | null | null | null | null | cs.LG cs.AI cs.CL cs.CV cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The success of Transformer models has pushed the deep learning model scale to
billions of parameters. However, due to the limited memory resource of a single
GPU, the best practice for choosing the optimal parallel strategy is still
lacking, since it requires domain expertise in both deep learning and parallel
computing.
The Colossal-AI system addressed the above challenge by introducing a unified
interface to scale your sequential code of model training to distributed
environments. It supports parallel training methods such as data, pipeline,
tensor, and sequence parallelism, as well as heterogeneous training methods
integrated with zero redundancy optimizer. Compared to the baseline system,
Colossal-AI can achieve up to 2.76 times training speedup on large-scale
models.
| [
{
"version": "v1",
"created": "Thu, 28 Oct 2021 04:45:55 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Sep 2022 12:54:20 GMT"
},
{
"version": "v3",
"created": "Thu, 5 Oct 2023 04:09:09 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Li",
"Shenggui",
""
],
[
"Liu",
"Hongxin",
""
],
[
"Bian",
"Zhengda",
""
],
[
"Fang",
"Jiarui",
""
],
[
"Huang",
"Haichen",
""
],
[
"Liu",
"Yuliang",
""
],
[
"Wang",
"Boxiang",
""
],
[
"You",
"Yang",
""
]
] | not_new_dataset | 0.997264 |
2110.15497 | Peiyu Yu | Peiyu Yu, Sirui Xie, Xiaojian Ma, Yixin Zhu, Ying Nian Wu, Song-Chun
Zhu | Unsupervised Foreground Extraction via Deep Region Competition | NeurIPS 2021 | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by-sa/4.0/ | We present Deep Region Competition (DRC), an algorithm designed to extract
foreground objects from images in a fully unsupervised manner. Foreground
extraction can be viewed as a special case of generic image segmentation that
focuses on identifying and disentangling objects from the background. In this
work, we rethink the foreground extraction by reconciling energy-based prior
with generative image modeling in the form of Mixture of Experts (MoE), where
we further introduce the learned pixel re-assignment as the essential inductive
bias to capture the regularities of background regions. With this modeling, the
foreground-background partition can be naturally found through
Expectation-Maximization (EM). We show that the proposed method effectively
exploits the interaction between the mixture components during the partitioning
process, which closely connects to region competition, a seminal approach for
generic image segmentation. Experiments demonstrate that DRC exhibits more
competitive performances on complex real-world data and challenging
multi-object scenes compared with prior methods. Moreover, we show empirically
that DRC can potentially generalize to novel foreground objects even from
categories unseen during training.
| [
{
"version": "v1",
"created": "Fri, 29 Oct 2021 02:32:44 GMT"
},
{
"version": "v2",
"created": "Tue, 9 Nov 2021 01:40:02 GMT"
},
{
"version": "v3",
"created": "Sat, 25 Dec 2021 14:18:17 GMT"
},
{
"version": "v4",
"created": "Wed, 4 Oct 2023 22:05:42 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Yu",
"Peiyu",
""
],
[
"Xie",
"Sirui",
""
],
[
"Ma",
"Xiaojian",
""
],
[
"Zhu",
"Yixin",
""
],
[
"Wu",
"Ying Nian",
""
],
[
"Zhu",
"Song-Chun",
""
]
] | not_new_dataset | 0.997399 |
2111.02062 | Pio Calderon | Pio Calderon, Alexander Soen, Marian-Andrei Rizoiu | Linking Across Data Granularity: Fitting Multivariate Hawkes Processes
to Partially Interval-Censored Data | This work has been submitted to the IEEE for possible publication.
Copyright may be transferred without notice, after which this version may no
longer be accessible | null | null | null | cs.LG cs.CE | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The multivariate Hawkes process (MHP) is widely used for analyzing data
streams that interact with each other, where events generate new events within
their own dimension (via self-excitation) or across different dimensions (via
cross-excitation). However, in certain applications, the timestamps of
individual events in some dimensions are unobservable, and only event counts
within intervals are known, referred to as partially interval-censored data.
The MHP is unsuitable for handling such data since its estimation requires
event timestamps. In this study, we introduce the Partial Mean Behavior Poisson
(PMBP) process, a novel point process which shares parameter equivalence with
the MHP and can effectively model both timestamped and interval-censored data.
We demonstrate the capabilities of the PMBP process using synthetic and
real-world datasets. Firstly, we illustrate that the PMBP process can
approximate MHP parameters and recover the spectral radius using synthetic
event histories. Next, we assess the performance of the PMBP process in
predicting YouTube popularity and find that it surpasses state-of-the-art
methods. Lastly, we leverage the PMBP process to gain qualitative insights from
a dataset comprising daily COVID-19 case counts from multiple countries and
COVID-19-related news articles. By clustering the PMBP-modeled countries, we
unveil hidden interaction patterns between occurrences of COVID-19 cases and
news reporting.
| [
{
"version": "v1",
"created": "Wed, 3 Nov 2021 08:25:35 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Feb 2022 04:01:58 GMT"
},
{
"version": "v3",
"created": "Thu, 5 Oct 2023 04:55:06 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Calderon",
"Pio",
""
],
[
"Soen",
"Alexander",
""
],
[
"Rizoiu",
"Marian-Andrei",
""
]
] | not_new_dataset | 0.997466 |
2111.12232 | Katsuya Hotta | Katsuya Hotta, Takuya Akashi, Shogo Tokai, Chao Zhang | PMSSC: Parallelizable multi-subset based self-expressive model for
subspace clustering | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Subspace clustering methods which embrace a self-expressive model that
represents each data point as a linear combination of other data points in the
dataset provide powerful unsupervised learning techniques. However, when
dealing with large datasets, representation of each data point by referring to
all data points via a dictionary suffers from high computational complexity. To
alleviate this issue, we introduce a parallelizable multi-subset based
self-expressive model (PMS) which represents each data point by combining
multiple subsets, with each consisting of only a small proportion of the
samples. The adoption of PMS in subspace clustering (PMSSC) leads to
computational advantages because the optimization problems decomposed over each
subset are small, and can be solved efficiently in parallel. Furthermore, PMSSC
is able to combine multiple self-expressive coefficient vectors obtained from
subsets, which contributes to an improvement in self-expressiveness. Extensive
experiments on synthetic and real-world datasets show the efficiency and
effectiveness of our approach in comparison to other methods.
| [
{
"version": "v1",
"created": "Wed, 24 Nov 2021 02:22:43 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Oct 2023 16:30:48 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Hotta",
"Katsuya",
""
],
[
"Akashi",
"Takuya",
""
],
[
"Tokai",
"Shogo",
""
],
[
"Zhang",
"Chao",
""
]
] | not_new_dataset | 0.997267 |
2112.03379 | Seungwoo Jeong | Seungwoo Jeong, Wonjun Ko, Ahmad Wisnu Mulyadi, Heung-Il Suk | Efficient Continuous Manifold Learning for Time Series Modeling | null | null | 10.1109/TPAMI.2023.3320125 | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Modeling non-Euclidean data is drawing attention along with the unprecedented
successes of deep neural networks in diverse fields. In particular, symmetric
positive definite (SPD) matrix is being actively studied in computer vision,
signal processing, and medical image analysis, thanks to its ability to learn
appropriate statistical representations. However, due to its strong
constraints, it remains challenging to solve optimization problems and to avoid
inefficient computation costs, especially within a deep learning framework. In this paper,
we propose to exploit a diffeomorphism mapping between Riemannian manifolds and
a Cholesky space, by which it becomes feasible not only to efficiently solve
optimization problems but also to reduce computation costs greatly. Further, in
order for dynamics modeling in time series data, we devise a continuous
manifold learning method by integrating a manifold ordinary differential
equation and a gated recurrent neural network in a systematic manner. It is
noteworthy that because of the nice parameterization of matrices in a Cholesky
space, it is straightforward to train our proposed network with Riemannian
geometric metrics equipped. We demonstrate through experiments that the
proposed model can be efficiently and reliably trained as well as outperform
existing manifold methods and state-of-the-art methods in two classification
tasks: action recognition and sleep staging classification.
| [
{
"version": "v1",
"created": "Fri, 3 Dec 2021 01:38:38 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Jeong",
"Seungwoo",
""
],
[
"Ko",
"Wonjun",
""
],
[
"Mulyadi",
"Ahmad Wisnu",
""
],
[
"Suk",
"Heung-Il",
""
]
] | not_new_dataset | 0.997369 |
2112.08581 | Weijie Zheng | Weijie Zheng, Benjamin Doerr | Mathematical Runtime Analysis for the Non-Dominated Sorting Genetic
Algorithm II (NSGA-II) | Accepted for publication in "Artificial Intelligence". This is the
journal version of the paper "Weijie Zheng, Yufei Liu, Benjamin Doerr: A
First Mathematical Runtime Analysis of the Non-Dominated Sorting Genetic
Algorithm II (NSGA-II). AAAI 2022. arXiv:2112.08581v3" | Artificial Intelligence 325 (2023), 104016 | 10.1016/j.artint.2023.104016 | null | cs.NE cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The non-dominated sorting genetic algorithm II (NSGA-II) is the most
intensively used multi-objective evolutionary algorithm (MOEA) in real-world
applications. However, in contrast to several simple MOEAs analyzed also via
mathematical means, no such study exists for the NSGA-II so far. In this work,
we show that mathematical runtime analyses are feasible also for the NSGA-II.
As particular results, we prove that with a population size four times larger
than the size of the Pareto front, the NSGA-II with two classic mutation
operators and four different ways to select the parents satisfies the same
asymptotic runtime guarantees as the SEMO and GSEMO algorithms on the basic
OneMinMax and LeadingOnesTrailingZeros benchmarks. However, if the population
size is only equal to the size of the Pareto front, then the NSGA-II cannot
efficiently compute the full Pareto front: for an exponential number of
iterations, the population will always miss a constant fraction of the Pareto
front. Our experiments confirm the above findings.
| [
{
"version": "v1",
"created": "Thu, 16 Dec 2021 03:00:20 GMT"
},
{
"version": "v2",
"created": "Fri, 11 Feb 2022 15:16:57 GMT"
},
{
"version": "v3",
"created": "Mon, 20 Jun 2022 10:31:08 GMT"
},
{
"version": "v4",
"created": "Fri, 24 Jun 2022 11:24:43 GMT"
},
{
"version": "v5",
"created": "Sun, 9 Jul 2023 12:19:54 GMT"
},
{
"version": "v6",
"created": "Mon, 18 Sep 2023 14:03:23 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Zheng",
"Weijie",
""
],
[
"Doerr",
"Benjamin",
""
]
] | not_new_dataset | 0.997388 |
2202.09573 | Gabriel Turinici | Gabriel Turinici | Diversity in deep generative models and generative AI | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The decoder-based machine learning generative algorithms such as Generative
Adversarial Networks (GAN), Variational Auto-Encoders (VAE), Transformers show
impressive results when constructing objects similar to those in a training
ensemble. However, the generation of new objects builds mainly on the
understanding of the hidden structure of the training dataset followed by a
sampling from a multi-dimensional normal variable. In particular, each sample is
independent of the others and can repeatedly propose the same type of objects. To
cure this drawback we introduce a kernel-based measure quantization method that
can produce new objects from a given target measure by approximating it as a
whole and even staying away from elements already drawn from that distribution.
This ensures a better diversity of the produced objects. The method is tested
on classic machine learning benchmarks.
| [
{
"version": "v1",
"created": "Sat, 19 Feb 2022 10:52:52 GMT"
},
{
"version": "v2",
"created": "Fri, 15 Sep 2023 16:55:40 GMT"
},
{
"version": "v3",
"created": "Thu, 5 Oct 2023 13:32:57 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Turinici",
"Gabriel",
""
]
] | not_new_dataset | 0.997369 |
2202.13103 | Prerona Chatterjee | Prerona Chatterjee, Kshitij Gajjar, Anamay Tengse | Monotone Classes Beyond VNP | 30 pages; made changes suggested by reviewers | null | null | null | cs.CC | http://creativecommons.org/licenses/by-nc-sa/4.0/ | In this work, we study the natural monotone analogues of various equivalent
definitions of VPSPACE: a well studied class (Poizat 2008, Koiran and Perifel
2009, Malod 2011, Mahajan and Rao 2013) that is believed to be larger than VNP.
We observe that these monotone analogues are not equivalent unlike their
non-monotone counterparts, and propose monotone VPSPACE (mVPSPACE) to be
defined as the monotone analogue of Poizat's definition. With this definition,
mVPSPACE turns out to be exponentially stronger than mVNP and also satisfies
several desirable closure properties that the other analogues may not.
Our initial goal was to understand the monotone complexity of transparent
polynomials, a concept that was recently introduced by Hrube\v{s} and
Yehudayoff (2021). In that context, we show that transparent polynomials of
large sparsity are hard for the monotone analogues of all the known definitions
of VPSPACE, except for the one due to Poizat.
| [
{
"version": "v1",
"created": "Sat, 26 Feb 2022 10:18:15 GMT"
},
{
"version": "v2",
"created": "Mon, 26 Sep 2022 15:17:49 GMT"
},
{
"version": "v3",
"created": "Sun, 23 Jul 2023 12:48:01 GMT"
},
{
"version": "v4",
"created": "Thu, 5 Oct 2023 14:21:06 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Chatterjee",
"Prerona",
""
],
[
"Gajjar",
"Kshitij",
""
],
[
"Tengse",
"Anamay",
""
]
] | not_new_dataset | 0.99735 |
2205.05250 | Hao Ren | Hao Ren, Xiaojun Liang, Chunhua Yang, Zhiwen Chen, and Weihua Gui | Spatial-temporal associations representation and application for process
monitoring using graph convolution neural network | null | null | null | null | cs.LG cs.AI cs.SY eess.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Thank you very much for the attention and concern of colleagues and scholars
in this work. With the comments and guidance of experts, editors, and
reviewers, this work has been accepted for publishing in the journal "Process
Safety and Environmental Protection". The theme of this paper relies on the
Spatial-temporal associations of numerous variables in the same industrial
processes, which refers to numerous variables obtained in dynamic industrial
processes with Spatial-temporal correlation characteristics, i.e., these
variables are not only highly correlated in time but also interrelated in
space. To handle this problem, three key issues need to be well addressed:
variable characteristics modeling and representation, graph network
construction (temporal information), and graph characteristics perception. The
first issue is implemented by assuming the data follows one improved Gaussian
distribution, while the graph network can be defined by the monitoring
variables and their edges which are calculated by their characteristics in
time. Finally, these networks corresponding to process states at different
times are fed into a graph convolutional neural network to implement graph
classification to achieve process monitoring. A benchmark experiment (Tennessee
Eastman chemical process) and one application study (cobalt purification from
zinc solution) are employed to demonstrate the feasibility and applicability of
this paper.
| [
{
"version": "v1",
"created": "Wed, 11 May 2022 03:36:35 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Oct 2023 14:32:15 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Ren",
"Hao",
""
],
[
"Liang",
"Xiaojun",
""
],
[
"Yang",
"Chunhua",
""
],
[
"Chen",
"Zhiwen",
""
],
[
"Gui",
"Weihua",
""
]
] | not_new_dataset | 0.997368 |
2205.09174 | Ehud Shapiro | Idit Keidar, Oded Naor, Ouri Poupko, and Ehud Shapiro | Cordial Miners: Fast and Efficient Consensus for Every Eventuality | null | null | 10.4230/LIPIcs.DISC.2023.26 | null | cs.DC cs.MA cs.NI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Cordial Miners are a family of efficient Byzantine Atomic Broadcast
protocols, with instances for asynchrony and eventual synchrony.
They improve the latency of state-of-the-art DAG-based protocols by almost 2X
and achieve optimal good-case complexity of O(n) by forgoing Reliable Broadcast
as a building block.
Rather, Cordial Miners use the blocklace -- a partially-ordered counterpart
of the totally-ordered blockchain data structure -- to implement the three
algorithmic components of consensus: Dissemination, equivocation-exclusion, and
ordering.
| [
{
"version": "v1",
"created": "Wed, 18 May 2022 18:45:20 GMT"
},
{
"version": "v2",
"created": "Mon, 1 Aug 2022 02:28:16 GMT"
},
{
"version": "v3",
"created": "Thu, 11 Aug 2022 19:19:31 GMT"
},
{
"version": "v4",
"created": "Wed, 9 Nov 2022 19:34:18 GMT"
},
{
"version": "v5",
"created": "Thu, 11 May 2023 17:39:43 GMT"
},
{
"version": "v6",
"created": "Fri, 22 Sep 2023 20:40:09 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Keidar",
"Idit",
""
],
[
"Naor",
"Oded",
""
],
[
"Poupko",
"Ouri",
""
],
[
"Shapiro",
"Ehud",
""
]
] | not_new_dataset | 0.996969 |
2206.05895 | Peiyu Yu | Peiyu Yu, Sirui Xie, Xiaojian Ma, Baoxiong Jia, Bo Pang, Ruiqi Gao,
Yixin Zhu, Song-Chun Zhu, and Ying Nian Wu | Latent Diffusion Energy-Based Model for Interpretable Text Modeling | ICML 2022 | null | null | null | cs.LG cs.CL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Latent space Energy-Based Models (EBMs), also known as energy-based priors,
have drawn growing interests in generative modeling. Fueled by its flexibility
in the formulation and strong modeling power of the latent space, recent works
built upon it have made interesting attempts aiming at the interpretability of
text modeling. However, latent space EBMs also inherit some flaws from EBMs in
data space; the degenerate MCMC sampling quality in practice can lead to poor
generation quality and instability in training, especially on data with complex
latent structures. Inspired by the recent efforts that leverage diffusion
recovery likelihood learning as a cure for the sampling issue, we introduce a
novel symbiosis between the diffusion models and latent space EBMs in a
variational learning framework, coined as the latent diffusion energy-based
model. We develop a geometric clustering-based regularization jointly with the
information bottleneck to further improve the quality of the learned latent
space. Experiments on several challenging tasks demonstrate the superior
performance of our model on interpretable text modeling over strong
counterparts.
| [
{
"version": "v1",
"created": "Mon, 13 Jun 2022 03:41:31 GMT"
},
{
"version": "v2",
"created": "Tue, 14 Jun 2022 03:01:05 GMT"
},
{
"version": "v3",
"created": "Mon, 4 Jul 2022 16:28:58 GMT"
},
{
"version": "v4",
"created": "Wed, 4 Oct 2023 22:00:21 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Yu",
"Peiyu",
""
],
[
"Xie",
"Sirui",
""
],
[
"Ma",
"Xiaojian",
""
],
[
"Jia",
"Baoxiong",
""
],
[
"Pang",
"Bo",
""
],
[
"Gao",
"Ruiqi",
""
],
[
"Zhu",
"Yixin",
""
],
[
"Zhu",
"Song-Chun",
""
],
[
"Wu",
"Ying Nian",
""
]
] | not_new_dataset | 0.997415 |
2207.03299 | Juan Bascur | Juan Pablo Bascur, Suzan Verberne, Nees Jan van Eck, Ludo Waltman | Academic information retrieval using citation clusters: In-depth
evaluation based on systematic reviews | Final version | null | null | null | cs.DL | http://creativecommons.org/licenses/by/4.0/ | The field of scientometrics has shown the power of citation-based clusters
for literature analysis, yet this technique has barely been used for
information retrieval tasks. This work evaluates the performance of
citation-based clusters for information retrieval tasks. We simulated a search process
using these clusters with a tree hierarchy of clusters and a cluster selection
algorithm. We evaluated the task of finding the relevant documents for 25
systematic reviews. Our evaluation considered several trade-offs between recall
and precision for the cluster selection, and we also replicated the Boolean
queries self-reported by the systematic review to serve as a reference. We
found that citation-based cluster search performance is highly variable and
unpredictable, that it works best for users who prefer recall over precision
at a ratio between 2 and 8, and that when used along with query-based search
they complement each other, including finding new relevant documents.
| [
{
"version": "v1",
"created": "Thu, 7 Jul 2022 13:50:27 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Oct 2023 15:42:11 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Bascur",
"Juan Pablo",
""
],
[
"Verberne",
"Suzan",
""
],
[
"van Eck",
"Nees Jan",
""
],
[
"Waltman",
"Ludo",
""
]
] | not_new_dataset | 0.997488 |
2207.05132 | Arghavan Moradi Dakhel | Arghavan Moradi Dakhel, Michel C. Desmarais, Foutse Khomh | Dev2vec: Representing Domain Expertise of Developers in an Embedding
Space | 30 pages, 5 figures | null | 10.1016/j.infsof.2023.107218 | null | cs.SE cs.IR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate assessment of the domain expertise of developers is important for
assigning the proper candidate to contribute to a project or to attend a job
role. Since the potential candidate can come from a large pool, the automated
assessment of this domain expertise is a desirable goal. While previous methods
have had some success within a single software project, the assessment of a
developer's domain expertise from contributions across multiple projects is
more challenging. In this paper, we employ doc2vec to represent the domain
expertise of developers as embedding vectors. These vectors are derived from
different sources that contain evidence of developers' expertise, such as the
description of repositories that they contributed, their issue resolving
history, and API calls in their commits. We name it dev2vec and demonstrate its
effectiveness in representing the technical specialization of developers. Our
results indicate that encoding the expertise of developers in an embedding
vector outperforms state-of-the-art methods and improves the F1-score up to
21%. Moreover, our findings suggest that ``issue resolving history'' of
developers is the most informative source of information to represent the
domain expertise of developers in embedding spaces.
| [
{
"version": "v1",
"created": "Mon, 11 Jul 2022 18:56:49 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Dakhel",
"Arghavan Moradi",
""
],
[
"Desmarais",
"Michel C.",
""
],
[
"Khomh",
"Foutse",
""
]
] | not_new_dataset | 0.996237 |
2207.11447 | Xu Zhou | Xu Zhou, Xinyu Lei, Cong Yang, Yichun Shi, Xiao Zhang, Jingwen Shi | Handling Data Heterogeneity in Federated Learning via Knowledge
Distillation and Fusion | 15 pages, 3 figures | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Federated learning (FL) supports distributed training of a global machine
learning model across multiple devices with the help of a central server.
However, data heterogeneity across different devices leads to the client model
drift issue and results in model performance degradation and poor model
fairness. To address the issue, we design Federated learning with global-local
Knowledge Fusion (FedKF) scheme in this paper. The key idea in FedKF is to let
the server return the global knowledge to be fused with the local knowledge in
each training round so that the local model can be regularized towards the
global optima. Therefore, the client model drift issue can be mitigated. In
FedKF, we first propose the active-inactive model aggregation technique that
supports a precise global knowledge representation. Then, we propose a
data-free knowledge distillation (KD) approach to enable each client model to
learn the global knowledge (embedded in the global model) while each client
model can still learn the local knowledge (embedded in the local dataset)
simultaneously, thereby realizing the global-local knowledge fusion process.
The theoretical analysis and intensive experiments demonstrate the superiority
of FedKF over previous solutions.
| [
{
"version": "v1",
"created": "Sat, 23 Jul 2022 07:20:22 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Oct 2023 20:44:04 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Zhou",
"Xu",
""
],
[
"Lei",
"Xinyu",
""
],
[
"Yang",
"Cong",
""
],
[
"Shi",
"Yichun",
""
],
[
"Zhang",
"Xiao",
""
],
[
"Shi",
"Jingwen",
""
]
] | not_new_dataset | 0.997376 |
2207.11880 | Huaxiong Li | Kaiyi Luo, Chao Zhang, Huaxiong Li, Xiuyi Jia, Chunlin Chen | Adaptive Marginalized Semantic Hashing for Unpaired Cross-Modal
Retrieval | null | null | 10.1109/TMM.2023.3245400 | null | cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, Cross-Modal Hashing (CMH) has aroused much attention due to
its fast query speed and efficient storage. Previous literature has achieved
promising results for Cross-Modal Retrieval (CMR) by discovering discriminative
hash codes and modality-specific hash functions. Nonetheless, most existing CMR
works are subjected to some restrictions: 1) It is assumed that data of
different modalities are fully paired, which is impractical in real
applications due to sample missing and false data alignment, and 2) binary
regression targets including the label matrix and binary codes are too rigid to
effectively learn semantic-preserving hash codes and hash functions. To address
these problems, this paper proposes an Adaptive Marginalized Semantic Hashing
(AMSH) method which not only enhances the discrimination of latent
representations and hash codes by adaptive margins, but also can be used for
both paired and unpaired CMR. As a two-step method, in the first step, AMSH
generates semantic-aware modality-specific latent representations with
adaptively marginalized labels, which enlarges the distances between different
classes, and exploits the labels to preserve the inter-modal and intra-modal
semantic similarities into latent representations and hash codes. In the second
step, adaptive margin matrices are embedded into the hash codes, and enlarge
the gaps between positive and negative bits, which improves the discrimination
and robustness of hash functions. On this basis, AMSH generates
similarity-preserving hash codes and robust hash functions without strict
one-to-one data correspondence requirement. Experiments are conducted on
several benchmark datasets to demonstrate the superiority and flexibility of
AMSH over some state-of-the-art CMR methods.
| [
{
"version": "v1",
"created": "Mon, 25 Jul 2022 02:50:20 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Luo",
"Kaiyi",
""
],
[
"Zhang",
"Chao",
""
],
[
"Li",
"Huaxiong",
""
],
[
"Jia",
"Xiuyi",
""
],
[
"Chen",
"Chunlin",
""
]
] | not_new_dataset | 0.997456 |
2207.14096 | Shaun Yuan | Gong Cheng, Xiang Yuan, Xiwen Yao, Kebing Yan, Qinghua Zeng, Xingxing
Xie, and Junwei Han | Towards Large-Scale Small Object Detection: Survey and Benchmarks | in IEEE Transactions on Pattern Analysis and Machine Intelligence
(2023) | IEEE Transactions on Pattern Analysis and Machine Intelligence,
vol. 45, no. 11, pp. 13467-13488, 1 Nov. 2023 | 10.1109/TPAMI.2023.3290594 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the rise of deep convolutional neural networks, object detection has
achieved prominent advances in past years. However, such prosperity could not
camouflage the unsatisfactory situation of Small Object Detection (SOD), one of
the notoriously challenging tasks in computer vision, owing to the poor visual
appearance and noisy representation caused by the intrinsic structure of small
targets. In addition, large-scale dataset for benchmarking small object
detection methods remains a bottleneck. In this paper, we first conduct a
thorough review of small object detection. Then, to catalyze the development of
SOD, we construct two large-scale Small Object Detection dAtasets (SODA),
SODA-D and SODA-A, which focus on the Driving and Aerial scenarios
respectively. SODA-D includes 24828 high-quality traffic images and 278433
instances of nine categories. For SODA-A, we harvest 2513 high resolution
aerial images and annotate 872069 instances over nine classes. The proposed
datasets, as we know, are the first-ever attempt to large-scale benchmarks with
a vast collection of exhaustively annotated instances tailored for
multi-category SOD. Finally, we evaluate the performance of mainstream methods
on SODA. We expect the released benchmarks could facilitate the development of
SOD and spawn more breakthroughs in this field. Datasets and codes are
available at: \url{https://shaunyuan22.github.io/SODA}.
| [
{
"version": "v1",
"created": "Thu, 28 Jul 2022 14:02:18 GMT"
},
{
"version": "v2",
"created": "Sun, 31 Jul 2022 08:33:25 GMT"
},
{
"version": "v3",
"created": "Sat, 24 Dec 2022 15:43:44 GMT"
},
{
"version": "v4",
"created": "Tue, 11 Apr 2023 03:58:28 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Cheng",
"Gong",
""
],
[
"Yuan",
"Xiang",
""
],
[
"Yao",
"Xiwen",
""
],
[
"Yan",
"Kebing",
""
],
[
"Zeng",
"Qinghua",
""
],
[
"Xie",
"Xingxing",
""
],
[
"Han",
"Junwei",
""
]
] | new_dataset | 0.997998 |
2208.07708 | Gyanendra Kumar Verma | Gyanendra K. Verma, Astha Agrawal, R. K. Sharma | Construction Methods for Galois LCD codes over Finite Fields | There are many typos and mathematical typos as well | null | null | null | cs.IT math.IT | http://creativecommons.org/licenses/by/4.0/ | In this article, first we present a method for constructing many Hermitian
LCD codes from a given Hermitian LCD code, and then provide several methods
which utilize either a given [n, k, d] linear code or a given [n, k, d] Galois
LCD code to construct new Galois LCD codes with different parameters. Using
these construction methods, we construct several new [n, k, d] ternary LCD
codes with better parameters for $26\leq n \leq 40$, and $21 \leq k \leq 30$.
Also, optimal 2-Galois LCD codes over $\mathbb{F}_{2^3}$ for code length, $1
\leq n \leq 15$ have been obtained. Finally, we extend some previously known
results to the $\sigma$-inner product from Euclidean inner product.
| [
{
"version": "v1",
"created": "Tue, 16 Aug 2022 12:19:42 GMT"
},
{
"version": "v2",
"created": "Fri, 2 Sep 2022 05:40:26 GMT"
},
{
"version": "v3",
"created": "Tue, 3 Oct 2023 04:59:01 GMT"
},
{
"version": "v4",
"created": "Thu, 5 Oct 2023 09:06:17 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Verma",
"Gyanendra K.",
""
],
[
"Agrawal",
"Astha",
""
],
[
"Sharma",
"R. K.",
""
]
] | not_new_dataset | 0.997271 |
2209.05917 | Sunkyung Lee | Eunseong Choi, Sunkyung Lee, Minjin Choi, Hyeseon Ko, Young-In Song
and Jongwuk Lee | SpaDE: Improving Sparse Representations using a Dual Document Encoder
for First-stage Retrieval | In Proceedings of the 31st ACM International Conference on
Information and Knowledge Management (CIKM '22). 13 pages | null | 10.1145/3511808.3557456 | null | cs.IR | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Sparse document representations have been widely used to retrieve relevant
documents via exact lexical matching. Owing to the pre-computed inverted index,
it supports fast ad-hoc search but incurs the vocabulary mismatch problem.
Although recent neural ranking models using pre-trained language models can
address this problem, they usually require expensive query inference costs,
implying the trade-off between effectiveness and efficiency. Tackling the
trade-off, we propose a novel uni-encoder ranking model, Sparse retriever using
a Dual document Encoder (SpaDE), learning document representation via the dual
encoder. Each encoder plays a central role in (i) adjusting the importance of
terms to improve lexical matching and (ii) expanding additional terms to
support semantic matching. Furthermore, our co-training strategy trains the
dual encoder effectively and avoids unnecessary intervention in training each
other. Experimental results on several benchmarks show that SpaDE outperforms
existing uni-encoder ranking models.
| [
{
"version": "v1",
"created": "Tue, 13 Sep 2022 12:06:01 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Apr 2023 05:57:34 GMT"
},
{
"version": "v3",
"created": "Thu, 5 Oct 2023 02:33:49 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Choi",
"Eunseong",
""
],
[
"Lee",
"Sunkyung",
""
],
[
"Choi",
"Minjin",
""
],
[
"Ko",
"Hyeseon",
""
],
[
"Song",
"Young-In",
""
],
[
"Lee",
"Jongwuk",
""
]
] | not_new_dataset | 0.997335 |
2209.12148 | Radu Tudor Ionescu | Neelu Madan, Nicolae-Catalin Ristea, Radu Tudor Ionescu, Kamal
Nasrollahi, Fahad Shahbaz Khan, Thomas B. Moeslund, Mubarak Shah | Self-Supervised Masked Convolutional Transformer Block for Anomaly
Detection | Accepted in IEEE Transactions on Pattern Analysis and Machine
Intelligence | null | 10.1109/TPAMI.2023.3322604 | null | cs.CV cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Anomaly detection has recently gained increasing attention in the field of
computer vision, likely due to its broad set of applications ranging from
product fault detection on industrial production lines and impending event
detection in video surveillance to finding lesions in medical scans. Regardless
of the domain, anomaly detection is typically framed as a one-class
classification task, where the learning is conducted on normal examples only.
An entire family of successful anomaly detection methods is based on learning
to reconstruct masked normal inputs (e.g. patches, future frames, etc.) and
exerting the magnitude of the reconstruction error as an indicator for the
abnormality level. Unlike other reconstruction-based methods, we present a
novel self-supervised masked convolutional transformer block (SSMCTB) that
comprises the reconstruction-based functionality at a core architectural level.
The proposed self-supervised block is extremely flexible, enabling information
masking at any layer of a neural network and being compatible with a wide range
of neural architectures. In this work, we extend our previous self-supervised
predictive convolutional attentive block (SSPCAB) with a 3D masked
convolutional layer, a transformer for channel-wise attention, as well as a
novel self-supervised objective based on Huber loss. Furthermore, we show that
our block is applicable to a wider variety of tasks, adding anomaly detection
in medical images and thermal videos to the previously considered tasks based
on RGB images and surveillance videos. We exhibit the generality and
flexibility of SSMCTB by integrating it into multiple state-of-the-art neural
models for anomaly detection, bringing forth empirical results that confirm
considerable performance improvements on five benchmarks. We release our code
and data as open source at: https://github.com/ristea/ssmctb.
| [
{
"version": "v1",
"created": "Sun, 25 Sep 2022 04:56:10 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Oct 2023 10:37:39 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Madan",
"Neelu",
""
],
[
"Ristea",
"Nicolae-Catalin",
""
],
[
"Ionescu",
"Radu Tudor",
""
],
[
"Nasrollahi",
"Kamal",
""
],
[
"Khan",
"Fahad Shahbaz",
""
],
[
"Moeslund",
"Thomas B.",
""
],
[
"Shah",
"Mubarak",
""
]
] | not_new_dataset | 0.988627 |
2210.01422 | Rasool Fakoor | Rasool Fakoor and Jonas Mueller and Zachary C. Lipton and Pratik
Chaudhari and Alexander J. Smola | Time-Varying Propensity Score to Bridge the Gap between the Past and
Present | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Real-world deployment of machine learning models is challenging because data
evolves over time. While no model can work when data evolves in an arbitrary
fashion, if there is some pattern to these changes, we might be able to design
methods to address it. This paper addresses situations when data evolves
gradually. We introduce a time-varying propensity score that can detect gradual
shifts in the distribution of data which allows us to selectively sample past
data to update the model -- not just similar data from the past like that of a
standard propensity score but also data that evolved in a similar fashion in
the past. The time-varying propensity score is quite general: we demonstrate
different ways of implementing it and evaluate it on a variety of problems
ranging from supervised learning (e.g., image classification problems) where
data undergoes a sequence of gradual shifts, to reinforcement learning tasks
(e.g., robotic manipulation and continuous control) where data shifts as the
policy or the task changes.
| [
{
"version": "v1",
"created": "Tue, 4 Oct 2022 07:21:49 GMT"
},
{
"version": "v2",
"created": "Mon, 30 Jan 2023 17:52:10 GMT"
},
{
"version": "v3",
"created": "Wed, 14 Jun 2023 17:47:50 GMT"
},
{
"version": "v4",
"created": "Thu, 5 Oct 2023 17:38:13 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Fakoor",
"Rasool",
""
],
[
"Mueller",
"Jonas",
""
],
[
"Lipton",
"Zachary C.",
""
],
[
"Chaudhari",
"Pratik",
""
],
[
"Smola",
"Alexander J.",
""
]
] | not_new_dataset | 0.997504 |
2210.01944 | Sajad Darabi | Sajad Darabi, Piotr Bigaj, Dawid Majchrowski, Artur Kasymov, Pawel
Morkisz, Alex Fit-Florea | A Framework for Large Scale Synthetic Graph Dataset Generation | null | null | null | null | cs.LG cs.SI | http://creativecommons.org/licenses/by/4.0/ | Recently there has been increasing interest in developing and deploying deep
graph learning algorithms for many tasks, such as fraud detection and
recommender systems. Albeit, there is a limited number of publicly available
graph-structured datasets, most of which are tiny compared to production-sized
applications or are limited in their application domain. This work tackles this
shortcoming by proposing a scalable synthetic graph generation tool to scale
the datasets to production-size graphs with trillions of edges and billions of
nodes. The tool learns a series of parametric models from proprietary datasets
that can be released to researchers to study various graph methods on the
synthetic data increasing prototype development and novel applications. We
demonstrate the generalizability of the framework across a series of datasets,
mimicking structural and feature distributions as well as the ability to scale
them across varying sizes demonstrating their usefulness for benchmarking and
model development. Code can be found on
https://github.com/NVIDIA/DeepLearningExamples/tree/master/Tools/DGLPyTorch/SyntheticGraphGeneration.
| [
{
"version": "v1",
"created": "Tue, 4 Oct 2022 22:41:33 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Oct 2022 15:17:02 GMT"
},
{
"version": "v3",
"created": "Tue, 28 Feb 2023 23:05:44 GMT"
},
{
"version": "v4",
"created": "Thu, 5 Oct 2023 05:22:43 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Darabi",
"Sajad",
""
],
[
"Bigaj",
"Piotr",
""
],
[
"Majchrowski",
"Dawid",
""
],
[
"Kasymov",
"Artur",
""
],
[
"Morkisz",
"Pawel",
""
],
[
"Fit-Florea",
"Alex",
""
]
] | not_new_dataset | 0.997341 |
2210.17505 | Roberto Casadei PhD | Roberto Casadei, Stefano Mariani, Danilo Pianini, Mirko Viroli, Franco
Zambonelli | Space-Fluid Adaptive Sampling by Self-Organisation | 33 pages, 16 figures | null | null | null | cs.DC cs.AI cs.MA cs.SY eess.SY | http://creativecommons.org/licenses/by/4.0/ | A recurrent task in coordinated systems is managing (estimating, predicting,
or controlling) signals that vary in space, such as distributed sensed data or
computation outcomes. Especially in large-scale settings, the problem can be
addressed through decentralised and situated computing systems: nodes can
locally sense, process, and act upon signals, and coordinate with neighbours to
implement collective strategies. Accordingly, in this work we devise
distributed coordination strategies for the estimation of a spatial phenomenon
through collaborative adaptive sampling. Our design is based on the idea of
dynamically partitioning space into regions that compete and grow/shrink to
provide accurate aggregate sampling. Such regions hence define a sort of
virtualised space that is "fluid", since its structure adapts in response to
pressure forces exerted by the underlying phenomenon. We provide an adaptive
sampling algorithm in the field-based coordination framework, and prove it is
self-stabilising and locally optimal. Finally, we verify by simulation that the
proposed algorithm effectively carries out a spatially adaptive sampling while
maintaining a tuneable trade-off between accuracy and efficiency.
| [
{
"version": "v1",
"created": "Mon, 31 Oct 2022 17:29:41 GMT"
},
{
"version": "v2",
"created": "Thu, 16 Mar 2023 17:31:22 GMT"
},
{
"version": "v3",
"created": "Tue, 1 Aug 2023 07:38:51 GMT"
},
{
"version": "v4",
"created": "Thu, 5 Oct 2023 10:46:56 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Casadei",
"Roberto",
""
],
[
"Mariani",
"Stefano",
""
],
[
"Pianini",
"Danilo",
""
],
[
"Viroli",
"Mirko",
""
],
[
"Zambonelli",
"Franco",
""
]
] | not_new_dataset | 0.997303 |
2211.00635 | Yihan Wang | Yihan Wang, Si Si, Daliang Li, Michal Lukasik, Felix Yu, Cho-Jui
Hsieh, Inderjit S Dhillon, Sanjiv Kumar | Two-stage LLM Fine-tuning with Less Specialization and More
Generalization | null | null | null | null | cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | Pretrained large language models (LLMs) are general purpose problem solvers
applicable to a diverse set of tasks with prompts. They can be further improved
towards a specific task by fine-tuning on a specialized dataset. However,
fine-tuning usually makes the model narrowly specialized on this dataset with
reduced general in-context learning performances, which is undesirable whenever
the fine-tuned model needs to handle additional tasks where no fine-tuning data
is available. In this work, we first demonstrate that fine-tuning on a single
task indeed decreases LLMs' general in-context learning performance. We
discover one important cause of such forgetting, format specialization, where
the model overfits to the format of the fine-tuned task. We further show that
format specialization happens at the very beginning of fine-tuning. To solve
this problem, we propose Prompt Tuning with MOdel Tuning (ProMoT), a simple yet
effective two-stage fine-tuning framework that reduces format specialization
and improves generalization. ProMoT offloads task-specific format learning into
additional and removable parameters by first doing prompt tuning and then
fine-tuning the model itself with this soft prompt attached. With experiments
on several fine-tuning tasks and 8 in-context evaluation tasks, we show that
ProMoT achieves comparable performance on fine-tuned tasks to standard
fine-tuning, but with much less loss of in-context learning performances across
a broad range of out-of-domain evaluation tasks. More importantly, ProMoT can
even enhance generalization on in-context learning tasks that are semantically
related to the fine-tuned task, e.g. ProMoT on En-Fr translation significantly
improves performance on other language pairs, and ProMoT on NLI improves
performance on summarization. Experiments also show that ProMoT can improve the
generalization performance of multi-task training.
| [
{
"version": "v1",
"created": "Tue, 1 Nov 2022 17:56:57 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Oct 2023 20:27:57 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Wang",
"Yihan",
""
],
[
"Si",
"Si",
""
],
[
"Li",
"Daliang",
""
],
[
"Lukasik",
"Michal",
""
],
[
"Yu",
"Felix",
""
],
[
"Hsieh",
"Cho-Jui",
""
],
[
"Dhillon",
"Inderjit S",
""
],
[
"Kumar",
"Sanjiv",
""
]
] | not_new_dataset | 0.997431 |
2211.01856 | Shihan Ma | Shihan Ma, Alexander Kenneth Clarke, Kostiantyn Maksymenko, Samuel
Deslauriers-Gauthier, Xinjun Sheng, Xiangyang Zhu, Dario Farina | Conditional Generative Models for Simulation of EMG During Naturalistic
Movements | null | null | null | null | cs.LG cs.CE eess.SP physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Numerical models of electromyographic (EMG) signals have provided a huge
contribution to our fundamental understanding of human neurophysiology and
remain a central pillar of motor neuroscience and the development of
human-machine interfaces. However, whilst modern biophysical simulations based
on finite element methods are highly accurate, they are extremely
computationally expensive and thus are generally limited to modelling static
systems such as isometrically contracting limbs. As a solution to this problem,
we propose a transfer learning approach, in which a conditional generative
model is trained to mimic the output of an advanced numerical model. To this
end, we present BioMime, a conditional generative neural network trained
adversarially to generate motor unit activation potential waveforms under a
wide variety of volume conductor parameters. We demonstrate the ability of such
a model to predictively interpolate between a much smaller number of numerical
model's outputs with a high accuracy. Consequently, the computational load is
dramatically reduced, which allows the rapid simulation of EMG signals during
truly dynamic and naturalistic movements.
| [
{
"version": "v1",
"created": "Thu, 3 Nov 2022 14:49:02 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Feb 2023 15:29:54 GMT"
},
{
"version": "v3",
"created": "Thu, 13 Jul 2023 16:07:33 GMT"
},
{
"version": "v4",
"created": "Thu, 5 Oct 2023 17:26:48 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Ma",
"Shihan",
""
],
[
"Clarke",
"Alexander Kenneth",
""
],
[
"Maksymenko",
"Kostiantyn",
""
],
[
"Deslauriers-Gauthier",
"Samuel",
""
],
[
"Sheng",
"Xinjun",
""
],
[
"Zhu",
"Xiangyang",
""
],
[
"Farina",
"Dario",
""
]
] | not_new_dataset | 0.997392 |
2211.03660 | Jiawang Bian | Libo Sun, Jia-Wang Bian, Huangying Zhan, Wei Yin, Ian Reid, Chunhua
Shen | SC-DepthV3: Robust Self-supervised Monocular Depth Estimation for
Dynamic Scenes | Accepted for publication in TPAMI; The code will be available at
https://github.com/JiawangBian/sc_depth_pl | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Self-supervised monocular depth estimation has shown impressive results in
static scenes. It relies on the multi-view consistency assumption for training
networks, however, that is violated in dynamic object regions and occlusions.
Consequently, existing methods show poor accuracy in dynamic scenes, and the
estimated depth map is blurred at object boundaries because they are usually
occluded in other training views. In this paper, we propose SC-DepthV3 for
addressing the challenges. Specifically, we introduce an external pretrained
monocular depth estimation model for generating single-image depth prior,
namely pseudo-depth, based on which we propose novel losses to boost
self-supervised training. As a result, our model can predict sharp and accurate
depth maps, even when training from monocular videos of highly-dynamic scenes.
We demonstrate the significantly superior performance of our method over
previous methods on six challenging datasets, and we provide detailed ablation
studies for the proposed terms. Source code and data will be released at
https://github.com/JiawangBian/sc_depth_pl
| [
{
"version": "v1",
"created": "Mon, 7 Nov 2022 16:17:47 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Oct 2023 08:53:01 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Sun",
"Libo",
""
],
[
"Bian",
"Jia-Wang",
""
],
[
"Zhan",
"Huangying",
""
],
[
"Yin",
"Wei",
""
],
[
"Reid",
"Ian",
""
],
[
"Shen",
"Chunhua",
""
]
] | not_new_dataset | 0.997263 |
2211.07091 | Yefei He | Yefei He, Zhenyu Lou, Luoming Zhang, Jing Liu, Weijia Wu, Hong Zhou,
Bohan Zhuang | BiViT: Extremely Compressed Binary Vision Transformer | Accepted by ICCV 2023 | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Model binarization can significantly compress model size, reduce energy
consumption, and accelerate inference through efficient bit-wise operations.
Although binarizing convolutional neural networks have been extensively
studied, there is little work on exploring binarization of vision Transformers
which underpin most recent breakthroughs in visual recognition. To this end, we
propose to solve two fundamental challenges to push the horizon of Binary
Vision Transformers (BiViT). First, the traditional binary method does not take
the long-tailed distribution of softmax attention into consideration, bringing
large binarization errors in the attention module. To solve this, we propose
Softmax-aware Binarization, which dynamically adapts to the data distribution
and reduces the error caused by binarization. Second, to better preserve the
information of the pretrained model and restore accuracy, we propose a
Cross-layer Binarization scheme that decouples the binarization of
self-attention and multi-layer perceptrons (MLPs), and Parameterized Weight
Scales which introduce learnable scaling factors for weight binarization.
Overall, our method performs favorably against state-of-the-arts by 19.8% on
the TinyImageNet dataset. On ImageNet, our BiViT achieves a competitive 75.6%
Top-1 accuracy over Swin-S model. Additionally, on COCO object detection, our
method achieves an mAP of 40.8 with a Swin-T backbone over Cascade Mask R-CNN
framework.
| [
{
"version": "v1",
"created": "Mon, 14 Nov 2022 03:36:38 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Oct 2023 07:59:22 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"He",
"Yefei",
""
],
[
"Lou",
"Zhenyu",
""
],
[
"Zhang",
"Luoming",
""
],
[
"Liu",
"Jing",
""
],
[
"Wu",
"Weijia",
""
],
[
"Zhou",
"Hong",
""
],
[
"Zhuang",
"Bohan",
""
]
] | not_new_dataset | 0.997118 |
2211.11961 | Arghya Chakraborty | Arghya Chakraborty, Rahul Vaze | Online facility location with timed-requests and congestion | 32 pages, 6 figures | null | null | null | cs.DS | http://creativecommons.org/licenses/by/4.0/ | The classic online facility location problem deals with finding the optimal
set of facilities in an online fashion when demand requests arrive one at a
time and facilities need to be opened to service these requests. In this work,
we study two variants of the online facility location problem; (1) weighted
requests and (2) congestion. Both of these variants are motivated by their
applications to real life scenarios and the previously known results on online
facility location cannot be directly adapted to analyse them.
Weighted requests: In this variant, each demand request is a pair $(x,w)$
where $x$ is the standard location of the demand while $w$ is the corresponding
weight of the request. The cost of servicing request $(x,w)$ at facility $F$ is
$w\cdot d(x,F)$. For this variant, given $n$ requests, we present an online
algorithm attaining a competitive ratio of $\mathcal{O}(\log n)$ in the
secretarial model for the weighted requests and show that it is optimal.
Congestion: The congestion variant considers the case when there is an
additional congestion cost that grows with the number of requests served by
each facility. For this variant, when the congestion cost is a monomial, we
show that there exists an algorithm attaining a constant competitive ratio.
This constant is a function of the exponent of the monomial and the facility
opening cost but independent of the number of requests.
| [
{
"version": "v1",
"created": "Tue, 22 Nov 2022 02:50:51 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Oct 2023 15:49:18 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Chakraborty",
"Arghya",
""
],
[
"Vaze",
"Rahul",
""
]
] | not_new_dataset | 0.997388 |
2211.13118 | Vianney Copp\'e | Vianney Copp\'e, Xavier Gillard, Pierre Schaus | Decision Diagram-Based Branch-and-Bound with Caching for Dominance and
Suboptimality Detection | Submitted to INFORMS Journal on Computing | null | null | null | cs.DS cs.AI cs.DM math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The branch-and-bound algorithm based on decision diagrams introduced by
Bergman et al. in 2016 is a framework for solving discrete optimization
problems with a dynamic programming formulation. It works by compiling a series
of bounded-width decision diagrams that can provide lower and upper bounds for
any given subproblem. Eventually, every part of the search space will be either
explored or pruned by the algorithm, thus proving optimality. This paper
presents new ingredients to speed up the search by exploiting the structure of
dynamic programming models. The key idea is to prevent the repeated expansion
of nodes corresponding to the same dynamic programming states by querying
expansion thresholds cached throughout the search. These thresholds are based
on dominance relations between partial solutions previously found and on the
pruning inequalities of the filtering techniques introduced by Gillard et al.
in 2021. Computational experiments show that the pruning brought by this
caching mechanism allows significantly reducing the number of nodes expanded by
the algorithm. This results in more benchmark instances of difficult
optimization problems being solved in less time while using narrower decision
diagrams.
| [
{
"version": "v1",
"created": "Tue, 22 Nov 2022 10:18:33 GMT"
},
{
"version": "v2",
"created": "Fri, 26 May 2023 15:51:22 GMT"
},
{
"version": "v3",
"created": "Thu, 5 Oct 2023 13:50:18 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Coppé",
"Vianney",
""
],
[
"Gillard",
"Xavier",
""
],
[
"Schaus",
"Pierre",
""
]
] | not_new_dataset | 0.997333 |
2212.00431 | Violetta Weger | Markus Grassl, Anna-Lena Horlemann, Violetta Weger | The Subfield Metric and its Application to Quantum Error Correction | null | null | 10.1142/S021949882550063X | null | cs.IT math.IT quant-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a new weight and corresponding metric over finite extension
fields for asymmetric error correction. The weight distinguishes between
elements from the base field and the ones outside of it, which is motivated by
asymmetric quantum codes. We set up the theoretic framework for this weight and
metric, including upper and lower bounds, asymptotic behavior of random codes,
and we show the existence of an optimal family of codes achieving the
Singleton-type upper bound.
| [
{
"version": "v1",
"created": "Thu, 1 Dec 2022 11:02:31 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Grassl",
"Markus",
""
],
[
"Horlemann",
"Anna-Lena",
""
],
[
"Weger",
"Violetta",
""
]
] | not_new_dataset | 0.997357 |
2212.02648 | Mazda Moayeri | Mazda Moayeri, Wenxiao Wang, Sahil Singla, Soheil Feizi | Spuriosity Rankings: Sorting Data to Measure and Mitigate Biases | Accepted to NeurIPS '23 (Spotlight) | null | null | null | cs.CV cs.AI cs.HC cs.LG | http://creativecommons.org/licenses/by/4.0/ | We present a simple but effective method to measure and mitigate model biases
caused by reliance on spurious cues. Instead of requiring costly changes to
one's data or model training, our method better utilizes the data one already
has by sorting them. Specifically, we rank images within their classes based on
spuriosity (the degree to which common spurious cues are present), proxied via
deep neural features of an interpretable network. With spuriosity rankings, it
is easy to identify minority subpopulations (i.e. low spuriosity images) and
assess model bias as the gap in accuracy between high and low spuriosity
images. One can even efficiently remove a model's bias at little cost to
accuracy by finetuning its classification head on low spuriosity images,
resulting in fairer treatment of samples regardless of spuriosity. We
demonstrate our method on ImageNet, annotating $5000$ class-feature
dependencies ($630$ of which we find to be spurious) and generating a dataset
of $325k$ soft segmentations for these features along the way. Having computed
spuriosity rankings via the identified spurious neural features, we assess
biases for $89$ diverse models and find that class-wise biases are highly
correlated across models. Our results suggest that model bias due to spurious
feature reliance is influenced far more by what the model is trained on than
how it is trained.
| [
{
"version": "v1",
"created": "Mon, 5 Dec 2022 23:15:43 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Oct 2023 17:59:06 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Moayeri",
"Mazda",
""
],
[
"Wang",
"Wenxiao",
""
],
[
"Singla",
"Sahil",
""
],
[
"Feizi",
"Soheil",
""
]
] | not_new_dataset | 0.996785 |
2212.06074 | Chiyuan Zhang | Badih Ghazi, Pritish Kamath, Ravi Kumar, Ethan Leeman, Pasin
Manurangsi, Avinash V Varadarajan, Chiyuan Zhang | Regression with Label Differential Privacy | Appeared at ICLR '23, 28 pages, 6 figures | null | null | null | cs.LG cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the task of training regression models with the guarantee of label
differential privacy (DP). Based on a global prior distribution on label
values, which could be obtained privately, we derive a label DP randomization
mechanism that is optimal under a given regression loss function. We prove that
the optimal mechanism takes the form of a "randomized response on bins", and
propose an efficient algorithm for finding the optimal bin values. We carry out
a thorough experimental evaluation on several datasets demonstrating the
efficacy of our algorithm.
| [
{
"version": "v1",
"created": "Mon, 12 Dec 2022 17:41:32 GMT"
},
{
"version": "v2",
"created": "Tue, 29 Aug 2023 22:30:15 GMT"
},
{
"version": "v3",
"created": "Wed, 4 Oct 2023 18:45:53 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Ghazi",
"Badih",
""
],
[
"Kamath",
"Pritish",
""
],
[
"Kumar",
"Ravi",
""
],
[
"Leeman",
"Ethan",
""
],
[
"Manurangsi",
"Pasin",
""
],
[
"Varadarajan",
"Avinash V",
""
],
[
"Zhang",
"Chiyuan",
""
]
] | not_new_dataset | 0.997422 |
2212.06921 | Dylan Sam | Dylan Sam, J. Zico Kolter | Losses over Labels: Weakly Supervised Learning via Direct Loss
Construction | 13 pages, 3 figures, AAAI 2023 | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Owing to the prohibitive costs of generating large amounts of labeled data,
programmatic weak supervision is a growing paradigm within machine learning. In
this setting, users design heuristics that provide noisy labels for subsets of
the data. These weak labels are combined (typically via a graphical model) to
form pseudolabels, which are then used to train a downstream model. In this
work, we question a foundational premise of the typical weakly supervised
learning pipeline: given that the heuristic provides all ``label" information,
why do we need to generate pseudolabels at all? Instead, we propose to directly
transform the heuristics themselves into corresponding loss functions that
penalize differences between our model and the heuristic. By constructing
losses directly from the heuristics, we can incorporate more information than
is used in the standard weakly supervised pipeline, such as how the heuristics
make their decisions, which explicitly informs feature selection during
training. We call our method Losses over Labels (LoL) as it creates losses
directly from heuristics without going through the intermediate step of a
label. We show that LoL improves upon existing weak supervision methods on
several benchmark text and image classification tasks and further demonstrate
that incorporating gradient information leads to better performance on almost
every task.
| [
{
"version": "v1",
"created": "Tue, 13 Dec 2022 22:29:14 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Oct 2023 23:32:44 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Sam",
"Dylan",
""
],
[
"Kolter",
"J. Zico",
""
]
] | not_new_dataset | 0.997452 |
2212.12055 | Haiyuan Li | Haiyuan Li, Amin Emami, Karcius Assis, Antonis Vafeas, Ruizhi Yang,
Reza Nejabati, Shuangyi Yan, and Dimitra Simeonidou | DRL-based Energy-Efficient Baseband Function Deployments for
Service-Oriented Open RAN | null | null | null | null | cs.NI cs.SY eess.SY | http://creativecommons.org/licenses/by/4.0/ | Open Radio Access Network (Open RAN) has gained tremendous attention from
industry and academia with decentralized baseband functions across multiple
processing units located at different places. However, the ever-expanding scope
of RANs, along with fluctuations in resource utilization across different
locations and timeframes, necessitates the implementation of robust function
management policies to minimize network energy consumption. Most recently
developed strategies neglected the activation time and the required energy for
the server activation process, while this process could offset the potential
energy savings gained from server hibernation. Furthermore, user plane
functions, which can be deployed on edge computing servers to provide
low-latency services, have not been sufficiently considered. In this paper, a
multi-agent deep reinforcement learning (DRL) based function deployment
algorithm, coupled with a heuristic method, has been developed to minimize
energy consumption while fulfilling multiple requests and adhering to latency
and resource constraints. In an 8-MEC network, the DRL-based solution
approaches the performance of the benchmark while offering up to 51% energy
savings compared to existing approaches. In a larger network of 14-MEC, it
maintains a 38% energy-saving advantage and ensures real-time response
capabilities. Furthermore, this paper prototypes an Open RAN testbed to verify
the feasibility of the proposed solution.
| [
{
"version": "v1",
"created": "Thu, 22 Dec 2022 22:07:26 GMT"
},
{
"version": "v2",
"created": "Fri, 30 Dec 2022 21:51:34 GMT"
},
{
"version": "v3",
"created": "Tue, 6 Jun 2023 09:00:24 GMT"
},
{
"version": "v4",
"created": "Sun, 18 Jun 2023 13:54:38 GMT"
},
{
"version": "v5",
"created": "Wed, 4 Oct 2023 22:10:23 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Li",
"Haiyuan",
""
],
[
"Emami",
"Amin",
""
],
[
"Assis",
"Karcius",
""
],
[
"Vafeas",
"Antonis",
""
],
[
"Yang",
"Ruizhi",
""
],
[
"Nejabati",
"Reza",
""
],
[
"Yan",
"Shuangyi",
""
],
[
"Simeonidou",
"Dimitra",
""
]
] | not_new_dataset | 0.997202 |
2301.04142 | Fadime Bekmambetova | Fadime Bekmambetova and Piero Triverio | Conservation properties of a leapfrog finite-difference time-domain
method for the Schr\"odinger equation | 36 pages, 11 figures, 5 tables | null | 10.1109/TMTT.2023.3308198. | null | cs.CE physics.comp-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the probability and energy conservation properties of a leap-frog
finite-difference time-domain (FDTD) method for solving the Schr\"odinger
equation. We propose expressions for the total numerical probability and energy
contained in a region, and for the flux of probability current and power
through its boundary. We show that the proposed expressions satisfy the
conservation of probability and energy under suitable conditions. We
demonstrate their connection to the Courant-Friedrichs-Lewy condition for
stability. We argue that these findings can be used for developing a modular
framework for stability analysis in advanced algorithms based on FDTD for
solving the Schr\"odinger equation.
| [
{
"version": "v1",
"created": "Tue, 10 Jan 2023 16:38:58 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Jul 2023 03:45:26 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Bekmambetova",
"Fadime",
""
],
[
"Triverio",
"Piero",
""
]
] | not_new_dataset | 0.997226 |
2301.04494 | Inder Pal Singh | Inder Pal Singh, Enjie Ghorbel, Oyebade Oyedotun, Djamila Aouada | Multi-label Image Classification using Adaptive Graph Convolutional
Networks: from a Single Domain to Multiple Domains | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | This paper proposes an adaptive graph-based approach for multi-label image
classification. Graph-based methods have been largely exploited in the field of
multi-label classification, given their ability to model label correlations.
Specifically, their effectiveness has been proven not only when considering a
single domain but also when taking into account multiple domains. However, the
topology of the used graph is not optimal as it is pre-defined heuristically.
In addition, consecutive Graph Convolutional Network (GCN) aggregations tend to
destroy the feature similarity. To overcome these issues, an architecture for
learning the graph connectivity in an end-to-end fashion is introduced. This is
done by integrating an attention-based mechanism and a similarity-preserving
strategy. The proposed framework is then extended to multiple domains using an
adversarial training scheme. Numerous experiments are reported on well-known
single-domain and multi-domain benchmarks. The results demonstrate that our
approach achieves competitive results in terms of mean Average Precision (mAP)
and model size as compared to the state-of-the-art. The code will be made
publicly available.
| [
{
"version": "v1",
"created": "Wed, 11 Jan 2023 14:42:47 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Oct 2023 09:28:57 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Singh",
"Inder Pal",
""
],
[
"Ghorbel",
"Enjie",
""
],
[
"Oyedotun",
"Oyebade",
""
],
[
"Aouada",
"Djamila",
""
]
] | not_new_dataset | 0.997144 |
2301.04554 | Wei Guo | Wei Guo, Benedetta Tondi, Mauro Barni | Universal Detection of Backdoor Attacks via Density-based Clustering and
Centroids Analysis | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | We propose a Universal Defence against backdoor attacks based on Clustering
and Centroids Analysis (CCA-UD). The goal of the defence is to reveal whether a
Deep Neural Network model is subject to a backdoor attack by inspecting the
training dataset. CCA-UD first clusters the samples of the training set by
means of density-based clustering. Then, it applies a novel strategy to detect
the presence of poisoned clusters. The proposed strategy is based on a general
misclassification behaviour observed when the features of a representative
example of the analysed cluster are added to benign samples. The capability of
inducing a misclassification error is a general characteristic of poisoned
samples, hence the proposed defence is attack-agnostic. This marks a
significant difference with respect to existing defences, that, either can
defend against only some types of backdoor attacks, or are effective only when
some conditions on the poisoning ratio or the kind of triggering signal used by
the attacker are satisfied.
Experiments carried out on several classification tasks and network
architectures, considering different types of backdoor attacks (with either
clean or corrupted labels), and triggering signals, including both global and
local triggering signals, as well as sample-specific and source-specific
triggers, reveal that the proposed method is very effective to defend against
backdoor attacks in all the cases, always outperforming the state-of-the-art
techniques.
| [
{
"version": "v1",
"created": "Wed, 11 Jan 2023 16:31:38 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Oct 2023 13:26:33 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Guo",
"Wei",
""
],
[
"Tondi",
"Benedetta",
""
],
[
"Barni",
"Mauro",
""
]
] | not_new_dataset | 0.99736 |
2301.05603 | Shiye Lei | Shiye Lei and Dacheng Tao | A Comprehensive Survey of Dataset Distillation | Accepted by IEEE TPAMI | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep learning technology has developed unprecedentedly in the last decade and
has become the primary choice in many application domains. This progress is
mainly attributed to a systematic collaboration in which rapidly growing
computing resources encourage advanced algorithms to deal with massive data.
However, it has gradually become challenging to handle the unlimited growth of
data with limited computing power. To this end, diverse approaches are proposed
to improve data processing efficiency. Dataset distillation, a dataset
reduction method, addresses this problem by synthesizing a small typical
dataset from substantial data and has attracted much attention from the deep
learning community. Existing dataset distillation methods can be taxonomized
into meta-learning and data matching frameworks according to whether they
explicitly mimic the performance of target data. Although dataset distillation
has shown surprising performance in compressing datasets, there are still
several limitations such as distilling high-resolution data or data with
complex label spaces. This paper provides a holistic understanding of dataset
distillation from multiple aspects, including distillation frameworks and
algorithms, factorized dataset distillation, performance comparison, and
applications. Finally, we discuss challenges and promising directions to
further promote future studies on dataset distillation.
| [
{
"version": "v1",
"created": "Fri, 13 Jan 2023 15:11:38 GMT"
},
{
"version": "v2",
"created": "Tue, 7 Feb 2023 09:21:44 GMT"
},
{
"version": "v3",
"created": "Thu, 5 Oct 2023 01:09:29 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Lei",
"Shiye",
""
],
[
"Tao",
"Dacheng",
""
]
] | not_new_dataset | 0.997412 |
2301.06421 | Pei-Yu Chen | Pei-Yu Chen, Myrthe L. Tielman, Dirk K.J. Heylen, Catholijn M. Jonker,
M. Birna van Riemsdijk | AI Alignment Dialogues: An Interactive Approach to AI Alignment in
Support Agents | Withdrawn because the content of the paper has been largely revised.
The newest version is very different from the submitted one | null | null | null | cs.AI cs.HC | http://creativecommons.org/licenses/by/4.0/ | AI alignment is about ensuring AI systems only pursue goals and activities
that are beneficial to humans. Most of the current approach to AI alignment is
to learn what humans value from their behavioural data. This paper proposes a
different way of looking at the notion of alignment, namely by introducing AI
Alignment Dialogues: dialogues with which users and agents try to achieve and
maintain alignment via interaction. We argue that alignment dialogues have a
number of advantages in comparison to data-driven approaches, especially for
behaviour support agents, which aim to support users in achieving their desired
future behaviours rather than their current behaviours. The advantages of
alignment dialogues include allowing the users to directly convey higher-level
concepts to the agent, and making the agent more transparent and trustworthy.
In this paper we outline the concept and high-level structure of alignment
dialogues. Moreover, we conducted a qualitative focus group user study from
which we developed a model that describes how alignment dialogues affect users,
and created design suggestions for AI alignment dialogues. Through this we
establish foundations for AI alignment dialogues and shed light on what
requires further development and research.
| [
{
"version": "v1",
"created": "Mon, 16 Jan 2023 13:19:53 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Oct 2023 11:15:23 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Chen",
"Pei-Yu",
""
],
[
"Tielman",
"Myrthe L.",
""
],
[
"Heylen",
"Dirk K. J.",
""
],
[
"Jonker",
"Catholijn M.",
""
],
[
"van Riemsdijk",
"M. Birna",
""
]
] | not_new_dataset | 0.99743 |
2301.07305 | Mohammed Shafae | Md Habibor Rahman (1), Erfan Yazdandoost Hamedani (1), Young-Jun Son
(2), Mohammed Shafae (1) ((1) The University of Arizona, (2) Purdue
University) | Graph-Theoretic Approach for Manufacturing Cybersecurity Risk Modeling
and Assessment | 25 pages, 10 figures | null | null | null | cs.CR cs.SY eess.SY math.OC | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Identifying, analyzing, and evaluating cybersecurity risks are essential to
assess the vulnerabilities of modern manufacturing infrastructures and to
devise effective decision-making strategies to secure critical manufacturing
against potential cyberattacks. In response, this work proposes a
graph-theoretic approach for risk modeling and assessment to address the lack
of quantitative cybersecurity risk assessment frameworks for smart
manufacturing systems. In doing so, first, threat attributes are represented
using an attack graphical model derived from manufacturing cyberattack
taxonomies. Attack taxonomies offer consistent structures to categorize threat
attributes, and the graphical approach helps model their interdependence.
Second, the graphs are analyzed to explore how threat events can propagate
through the manufacturing value chain and identify the manufacturing assets
that threat actors can access and compromise during a threat event. Third, the
proposed method identifies the attack path that maximizes the likelihood of
success and minimizes the attack detection probability, and then computes the
associated cybersecurity risk. Finally, the proposed risk modeling and
assessment framework is demonstrated via an interconnected smart manufacturing
system illustrative example. Using the proposed approach, practitioners can
identify critical connections and manufacturing assets requiring prioritized
security controls and develop and deploy appropriate defense measures
accordingly.
| [
{
"version": "v1",
"created": "Wed, 18 Jan 2023 04:54:00 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Oct 2023 22:42:06 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Rahman",
"Md Habibor",
""
],
[
"Hamedani",
"Erfan Yazdandoost",
""
],
[
"Son",
"Young-Jun",
""
],
[
"Shafae",
"Mohammed",
""
]
] | not_new_dataset | 0.997348 |
2301.09350 | Anastasios Nentidis | Anastasios Nentidis, Thomas Chatzopoulos, Anastasia Krithara,
Grigorios Tsoumakas, Georgios Paliouras | Large-scale investigation of weakly-supervised deep learning for the
fine-grained semantic indexing of biomedical literature | 26 pages, 5 figures, 4 tables. A more concise version | Journal of Biomedical Informatics, Volume 146, 2023, 104499, ISSN
1532-0464 | 10.1016/j.jbi.2023.104499 | null | cs.CL cs.DL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Objective: Semantic indexing of biomedical literature is usually done at the
level of MeSH descriptors with several related but distinct biomedical concepts
often grouped together and treated as a single topic. This study proposes a new
method for the automated refinement of subject annotations at the level of MeSH
concepts. Methods: Lacking labelled data, we rely on weak supervision based on
concept occurrence in the abstract of an article, which is also enhanced by
dictionary-based heuristics. In addition, we investigate deep learning
approaches, making design choices to tackle the particular challenges of this
task. The new method is evaluated on a large-scale retrospective scenario,
based on concepts that have been promoted to descriptors. Results: In our
experiments concept occurrence was the strongest heuristic achieving a macro-F1
score of about 0.63 across several labels. The proposed method improved it
further by more than 4pp. Conclusion: The results suggest that concept
occurrence is a strong heuristic for refining the coarse-grained labels at the
level of MeSH concepts and the proposed method improves it further.
| [
{
"version": "v1",
"created": "Mon, 23 Jan 2023 10:33:22 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Oct 2023 14:17:39 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Nentidis",
"Anastasios",
""
],
[
"Chatzopoulos",
"Thomas",
""
],
[
"Krithara",
"Anastasia",
""
],
[
"Tsoumakas",
"Grigorios",
""
],
[
"Paliouras",
"Georgios",
""
]
] | not_new_dataset | 0.997367 |
2302.00589 | Ghazal Kalhor | Ghazal Kalhor, Tanin Zeraati, Behnam Bahrak | Diversity dilemmas: uncovering gender and nationality biases in graduate
admissions across top North American computer science programs | null | null | 10.1140/epjds/s13688-023-00422-5 | null | cs.CY cs.SI | http://creativecommons.org/licenses/by/4.0/ | Although different organizations have defined policies towards diversity in
academia, many argue that minorities are still disadvantaged in university
admissions due to biases. Extensive research has been conducted on detecting
partiality patterns in the academic community. However, in the last few
decades, limited research has focused on assessing gender and nationality
biases in graduate admission results of universities. In this study, we
collected a novel and comprehensive dataset containing information on
approximately 14,000 graduate students majoring in computer science (CS) at the
top 25 North American universities. We used statistical hypothesis tests to
determine whether there is a preference for students' gender and nationality in
the admission processes. In addition to partiality patterns, we discuss the
relationship between gender/nationality diversity and the scientific
achievements of research teams. Consistent with previous studies, our findings
show that there is no gender bias in the admission of graduate students to
research groups, but we observed bias based on students' nationality.
| [
{
"version": "v1",
"created": "Wed, 1 Feb 2023 17:02:08 GMT"
},
{
"version": "v2",
"created": "Tue, 29 Aug 2023 19:30:27 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Kalhor",
"Ghazal",
""
],
[
"Zeraati",
"Tanin",
""
],
[
"Bahrak",
"Behnam",
""
]
] | new_dataset | 0.996857 |
2302.00942 | Han Lin | Krzysztof Choromanski, Arijit Sehanobish, Han Lin, Yunfan Zhao, Eli
Berger, Tetiana Parshakova, Alvin Pan, David Watkins, Tianyi Zhang, Valerii
Likhosherstov, Somnath Basu Roy Chowdhury, Avinava Dubey, Deepali Jain, Tamas
Sarlos, Snigdha Chaturvedi, Adrian Weller | Efficient Graph Field Integrators Meet Point Clouds | null | ICML 2023 | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | We present two new classes of algorithms for efficient field integration on
graphs encoding point clouds. The first class, SeparatorFactorization(SF),
leverages the bounded genus of point cloud mesh graphs, while the second class,
RFDiffusion(RFD), uses popular epsilon-nearest-neighbor graph representations
for point clouds. Both can be viewed as providing the functionality of Fast
Multipole Methods (FMMs), which have had a tremendous impact on efficient
integration, but for non-Euclidean spaces. We focus on geometries induced by
distributions of walk lengths between points (e.g., shortest-path distance). We
provide an extensive theoretical analysis of our algorithms, obtaining new
results in structural graph theory as a byproduct. We also perform exhaustive
empirical evaluation, including on-surface interpolation for rigid and
deformable objects (particularly for mesh-dynamics modeling), Wasserstein
distance computations for point clouds, and the Gromov-Wasserstein variant.
| [
{
"version": "v1",
"created": "Thu, 2 Feb 2023 08:33:36 GMT"
},
{
"version": "v2",
"created": "Sun, 5 Feb 2023 20:12:24 GMT"
},
{
"version": "v3",
"created": "Wed, 12 Apr 2023 22:27:17 GMT"
},
{
"version": "v4",
"created": "Sat, 10 Jun 2023 01:29:45 GMT"
},
{
"version": "v5",
"created": "Wed, 21 Jun 2023 02:34:32 GMT"
},
{
"version": "v6",
"created": "Wed, 4 Oct 2023 19:17:43 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Choromanski",
"Krzysztof",
""
],
[
"Sehanobish",
"Arijit",
""
],
[
"Lin",
"Han",
""
],
[
"Zhao",
"Yunfan",
""
],
[
"Berger",
"Eli",
""
],
[
"Parshakova",
"Tetiana",
""
],
[
"Pan",
"Alvin",
""
],
[
"Watkins",
"David",
""
],
[
"Zhang",
"Tianyi",
""
],
[
"Likhosherstov",
"Valerii",
""
],
[
"Chowdhury",
"Somnath Basu Roy",
""
],
[
"Dubey",
"Avinava",
""
],
[
"Jain",
"Deepali",
""
],
[
"Sarlos",
"Tamas",
""
],
[
"Chaturvedi",
"Snigdha",
""
],
[
"Weller",
"Adrian",
""
]
] | not_new_dataset | 0.997327 |
2302.02394 | Zuopeng Yang | Zuopeng Yang, Tianshu Chu, Xin Lin, Erdun Gao, Daqing Liu, Jie Yang,
Chaoyue Wang | Eliminating Contextual Prior Bias for Semantic Image Editing via
Dual-Cycle Diffusion | This paper has been accepted by the IEEE Transactions on Circuits and
Systems for Video Technology (TCSVT) | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The recent success of text-to-image generation diffusion models has also
revolutionized semantic image editing, enabling the manipulation of images
based on query/target texts. Despite these advancements, a significant
challenge lies in the potential introduction of contextual prior bias in
pre-trained models during image editing, e.g., making unexpected modifications
to inappropriate regions. To address this issue, we present a novel approach
called Dual-Cycle Diffusion, which generates an unbiased mask to guide image
editing. The proposed model incorporates a Bias Elimination Cycle that consists
of both a forward path and an inverted path, each featuring a Structural
Consistency Cycle to ensure the preservation of image content during the
editing process. The forward path utilizes the pre-trained model to produce the
edited image, while the inverted path converts the result back to the source
image. The unbiased mask is generated by comparing differences between the
processed source image and the edited image to ensure that both conform to the
same distribution. Our experiments demonstrate the effectiveness of the
proposed method, as it significantly improves the D-CLIP score from 0.272 to
0.283. The code will be available at
https://github.com/JohnDreamer/DualCycleDiffsion.
| [
{
"version": "v1",
"created": "Sun, 5 Feb 2023 14:30:22 GMT"
},
{
"version": "v2",
"created": "Tue, 7 Feb 2023 02:57:45 GMT"
},
{
"version": "v3",
"created": "Thu, 5 Oct 2023 14:35:08 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Yang",
"Zuopeng",
""
],
[
"Chu",
"Tianshu",
""
],
[
"Lin",
"Xin",
""
],
[
"Gao",
"Erdun",
""
],
[
"Liu",
"Daqing",
""
],
[
"Yang",
"Jie",
""
],
[
"Wang",
"Chaoyue",
""
]
] | not_new_dataset | 0.99726 |
2302.02787 | Lena Mangold | Lena Mangold and Camille Roth | Generative models for two-ground-truth partitions in networks | null | null | null | null | cs.SI cond-mat.stat-mech cs.LG physics.data-an physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A myriad of approaches have been proposed to characterise the mesoscale
structure of networks - most often as a partition based on patterns variously
called communities, blocks, or clusters. Clearly, distinct methods designed to
detect different types of patterns may provide a variety of answers to the
network's mesoscale structure. Yet, even multiple runs of a given method can
sometimes yield diverse and conflicting results, producing entire landscapes of
partitions which potentially include multiple (locally optimal) mesoscale
explanations of the network. Such ambiguity motivates a closer look at the
ability of these methods to find multiple qualitatively different 'ground
truth' partitions in a network. Here, we propose the stochastic cross-block
model (SCBM), a generative model which allows for two distinct partitions to be
built into the mesoscale structure of a single benchmark network. We
demonstrate a use case of the benchmark model by appraising the power of
stochastic block models (SBMs) to detect implicitly planted coexisting
bi-community and core-periphery structures of different strengths. Given our
model design and experimental set-up, we find that the ability to detect the
two partitions individually varies by SBM variant and that coexistence of both
partitions is recovered only in a very limited number of cases. Our findings
suggest that in most instances only one - in some way dominating - structure
can be detected, even in the presence of other partitions. They underline the
need for considering entire landscapes of partitions when different competing
explanations exist and motivate future research to advance partition
coexistence detection methods. Our model also contributes to the field of
benchmark networks more generally by enabling further exploration of the
ability of new and existing methods to detect ambiguity in the mesoscale
structure of networks.
| [
{
"version": "v1",
"created": "Mon, 6 Feb 2023 14:02:28 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Jul 2023 19:01:52 GMT"
},
{
"version": "v3",
"created": "Thu, 5 Oct 2023 13:00:34 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Mangold",
"Lena",
""
],
[
"Roth",
"Camille",
""
]
] | not_new_dataset | 0.997372 |
2302.02936 | Alex Bie | Alex Bie, Gautam Kamath, Guojun Zhang | Private GANs, Revisited | 28 pages; revisions and new experiments from TMLR camera-ready + code
release at https://github.com/alexbie98/dpgan-revisit | null | null | null | cs.LG cs.CR cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We show that the canonical approach for training differentially private GANs
-- updating the discriminator with differentially private stochastic gradient
descent (DPSGD) -- can yield significantly improved results after modifications
to training. Specifically, we propose that existing instantiations of this
approach neglect to consider how adding noise only to discriminator updates
inhibits discriminator training, disrupting the balance between the generator
and discriminator necessary for successful GAN training. We show that a simple
fix -- taking more discriminator steps between generator steps -- restores
parity between the generator and discriminator and improves results.
Additionally, with the goal of restoring parity, we experiment with other
modifications -- namely, large batch sizes and adaptive discriminator update
frequency -- to improve discriminator training and see further improvements in
generation quality. Our results demonstrate that on standard image synthesis
benchmarks, DPSGD outperforms all alternative GAN privatization schemes. Code:
https://github.com/alexbie98/dpgan-revisit.
| [
{
"version": "v1",
"created": "Mon, 6 Feb 2023 17:11:09 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Oct 2023 04:47:52 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Bie",
"Alex",
""
],
[
"Kamath",
"Gautam",
""
],
[
"Zhang",
"Guojun",
""
]
] | not_new_dataset | 0.997418 |
2302.04054 | Michael Hagmann | Michael Hagmann, Philipp Meier and Stefan Riezler | Towards Inferential Reproducibility of Machine Learning Research | Published at ICLR 2023 | null | null | null | cs.LG cs.AI cs.CL stat.AP stat.ML | http://creativecommons.org/licenses/by/4.0/ | Reliability of machine learning evaluation -- the consistency of observed
evaluation scores across replicated model training runs -- is affected by
several sources of nondeterminism which can be regarded as measurement noise.
Current tendencies to remove noise in order to enforce reproducibility of
research results neglect inherent nondeterminism at the implementation level
and disregard crucial interaction effects between algorithmic noise factors and
data properties. This limits the scope of conclusions that can be drawn from
such experiments. Instead of removing noise, we propose to incorporate several
sources of variance, including their interaction with data properties, into an
analysis of significance and reliability of machine learning evaluation, with
the aim to draw inferences beyond particular instances of trained models. We
show how to use linear mixed effects models (LMEMs) to analyze performance
evaluation scores, and to conduct statistical inference with a generalized
likelihood ratio test (GLRT). This allows us to incorporate arbitrary sources
of noise like meta-parameter variations into statistical significance testing,
and to assess performance differences conditional on data properties.
Furthermore, a variance component analysis (VCA) enables the analysis of the
contribution of noise sources to overall variance and the computation of a
reliability coefficient by the ratio of substantial to total variance.
| [
{
"version": "v1",
"created": "Wed, 8 Feb 2023 13:47:00 GMT"
},
{
"version": "v2",
"created": "Fri, 10 Feb 2023 10:45:09 GMT"
},
{
"version": "v3",
"created": "Thu, 16 Feb 2023 13:56:26 GMT"
},
{
"version": "v4",
"created": "Wed, 8 Mar 2023 11:37:27 GMT"
},
{
"version": "v5",
"created": "Thu, 13 Apr 2023 12:10:37 GMT"
},
{
"version": "v6",
"created": "Thu, 5 Oct 2023 14:19:32 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Hagmann",
"Michael",
""
],
[
"Meier",
"Philipp",
""
],
[
"Riezler",
"Stefan",
""
]
] | not_new_dataset | 0.997504 |
2302.11791 | Gyanendra Kumar Verma | Gyanendra K. Verma and R. K. Sharma | Additive complementary dual codes over $\mathbb{F}_{q^2}$ | There has been major changes in this manuscript we will submit new
one | null | null | null | cs.IT math.IT | http://creativecommons.org/licenses/by/4.0/ | Shi et al. [Additive complementary dual codes over F4. Designs, Codes and
Cryptography, 2022.] studied additive codes over the finite field F4 with
respect to trace Hermitian and trace Euclidean inner products. In this article,
we define additive codes of length n over finite field Fq2 as additive
subgroups of Fn q2 where q is a prime power. We associate an additive code with
a matrix called a generator matrix. We characterize trace Euclidean ACD and
trace Hermitian ACD codes in terms of generator matrices over the finite field
Fq2 . Also, we construct these codes over Fq2 from linear LCD codes over Fq.
| [
{
"version": "v1",
"created": "Thu, 23 Feb 2023 06:12:14 GMT"
},
{
"version": "v2",
"created": "Sat, 6 May 2023 17:38:14 GMT"
},
{
"version": "v3",
"created": "Thu, 5 Oct 2023 09:08:46 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Verma",
"Gyanendra K.",
""
],
[
"Sharma",
"R. K.",
""
]
] | not_new_dataset | 0.996993 |
2303.00047 | Indranil Saha | Ratijit Mitra and Indranil Saha | Online On-Demand Multi-Robot Coverage Path Planning | null | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present an online centralized path planning algorithm to cover a large,
complex, unknown workspace with multiple homogeneous mobile robots. Our
algorithm is horizon-based, synchronous, and on-demand. The recently proposed
horizon-based synchronous algorithms compute all the robots' paths in each
horizon, significantly increasing the computation burden in large workspaces
with many robots. As a remedy, we propose an algorithm that computes the paths
for a subset of robots that have traversed previously computed paths entirely
(thus on-demand) and reuses the remaining paths for the other robots. We
formally prove that the algorithm guarantees complete coverage of the unknown
workspace. Experimental results on several standard benchmark workspaces show
that our algorithm scales to hundreds of robots in large complex workspaces and
consistently beats a state-of-the-art online centralized multi-robot coverage
path planning algorithm in terms of the time needed to achieve complete
coverage. For its validation, we perform ROS+Gazebo simulations in five 2D grid
benchmark workspaces with 10 Quadcopters and 10 TurtleBots, respectively. Also,
to demonstrate its practical feasibility, we conduct one indoor experiment with
two real TurtleBot2 robots and one outdoor experiment with three real
Quadcopters.
| [
{
"version": "v1",
"created": "Tue, 28 Feb 2023 19:43:23 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Oct 2023 10:02:31 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Mitra",
"Ratijit",
""
],
[
"Saha",
"Indranil",
""
]
] | not_new_dataset | 0.997114 |
2303.01338 | Amira Guesmi | Amira Guesmi, Muhammad Abdullah Hanif, and Muhammad Shafique | AdvRain: Adversarial Raindrops to Attack Camera-based Smart Vision
Systems | null | null | null | null | cs.CV cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Vision-based perception modules are increasingly deployed in many
applications, especially autonomous vehicles and intelligent robots. These
modules are being used to acquire information about the surroundings and
identify obstacles. Hence, accurate detection and classification are essential
to reach appropriate decisions and take appropriate and safe actions at all
times. Current studies have demonstrated that "printed adversarial attacks",
known as physical adversarial attacks, can successfully mislead perception
models such as object detectors and image classifiers. However, most of these
physical attacks are based on noticeable and eye-catching patterns for
generated perturbations making them identifiable/detectable by human eye or in
test drives. In this paper, we propose a camera-based inconspicuous adversarial
attack (\textbf{AdvRain}) capable of fooling camera-based perception systems
over all objects of the same class. Unlike mask-based fake-weather attacks that
require access to the underlying computing hardware or image memory, our attack
is based on emulating the effects of a natural weather condition (i.e.,
Raindrops) that can be printed on a translucent sticker, which is externally
placed over the lens of a camera. To accomplish this, we provide an iterative
process based on performing a random search aiming to identify critical
positions to make sure that the performed transformation is adversarial for a
target classifier. Our transformation is based on blurring predefined parts of
the captured image corresponding to the areas covered by the raindrop. We
achieve a drop in average model accuracy of more than $45\%$ and $40\%$ on
VGG19 for ImageNet and Resnet34 for Caltech-101, respectively, using only $20$
raindrops.
| [
{
"version": "v1",
"created": "Thu, 2 Mar 2023 15:14:46 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Oct 2023 11:55:37 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Guesmi",
"Amira",
""
],
[
"Hanif",
"Muhammad Abdullah",
""
],
[
"Shafique",
"Muhammad",
""
]
] | not_new_dataset | 0.997012 |
2303.02950 | Ying Gao | Ying Gao, Qingqing Wu, Wen Chen, Celimuge Wu, Derrick Wing Kwan Ng,
Naofal Al-Dhahir | Exploiting Intelligent Reflecting Surfaces for Interference Channels
with SWIPT | 30 pages, accepted by IEEE Transactions on Wireless Communications | null | 10.1109/TWC.2023.3318795 | null | cs.IT eess.SP math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper considers intelligent reflecting surface (IRS)-aided simultaneous
wireless information and power transfer (SWIPT) in a multi-user multiple-input
single-output (MISO) interference channel (IFC), where multiple transmitters
(Txs) serve their corresponding receivers (Rxs) in a shared spectrum with the
aid of IRSs. Our goal is to maximize the sum rate of the Rxs by jointly
optimizing the transmit covariance matrices at the Txs, the phase shifts at the
IRSs, and the resource allocation subject to the individual energy harvesting
(EH) constraints at the Rxs. Towards this goal and based on the well-known
power splitting (PS) and time switching (TS) receiver structures, we consider
three practical transmission schemes, namely the IRS-aided hybrid TS-PS scheme,
the IRS-aided time-division multiple access (TDMA) scheme, and the IRS-aided
TDMA-D scheme. The latter two schemes differ in whether the Txs employ
deterministic energy signals known to all the Rxs. Despite the non-convexity of
the three optimization problems corresponding to the three transmission
schemes, we develop computationally efficient algorithms to address them
suboptimally, respectively, by capitalizing on the techniques of alternating
optimization (AO) and successive convex approximation (SCA). Moreover, we
conceive feasibility checking methods for these problems, based on which the
initial points for the proposed algorithms are constructed. Simulation results
demonstrate that our proposed IRS-aided schemes significantly outperform their
counterparts without IRSs in terms of sum rate and maximum EH requirements that
can be satisfied under various setups. In addition, the IRS-aided hybrid TS-PS
scheme generally achieves the best sum rate performance among the three
proposed IRS-aided schemes, and if not, increasing the number of IRS elements
can always accomplish it.
| [
{
"version": "v1",
"created": "Mon, 6 Mar 2023 07:44:05 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Oct 2023 17:09:30 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Gao",
"Ying",
""
],
[
"Wu",
"Qingqing",
""
],
[
"Chen",
"Wen",
""
],
[
"Wu",
"Celimuge",
""
],
[
"Ng",
"Derrick Wing Kwan",
""
],
[
"Al-Dhahir",
"Naofal",
""
]
] | not_new_dataset | 0.997425 |
2303.06088 | Marin Scalbert | Marin Scalbert and Maria Vakalopoulou and Florent Couzini\'e-Devy | Towards domain-invariant Self-Supervised Learning with Batch Styles
Standardization | Under review as conference paper | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | In Self-Supervised Learning (SSL), models are typically pretrained,
fine-tuned, and evaluated on the same domains. However, they tend to perform
poorly when evaluated on unseen domains, a challenge that Unsupervised Domain
Generalization (UDG) seeks to address. Current UDG methods rely on domain
labels, which are often challenging to collect, and domain-specific
architectures that lack scalability when confronted with numerous domains,
making the current methodology impractical and rigid. Inspired by
contrastive-based UDG methods that mitigate spurious correlations by
restricting comparisons to examples from the same domain, we hypothesize that
eliminating style variability within a batch could provide a more convenient
and flexible way to reduce spurious correlations without requiring domain
labels. To verify this hypothesis, we introduce Batch Styles Standardization
(BSS), a relatively simple yet powerful Fourier-based method to standardize the
style of images in a batch specifically designed for integration with SSL
methods to tackle UDG. Combining BSS with existing SSL methods offers serious
advantages over prior UDG methods: (1) It eliminates the need for domain labels
or domain-specific network components to enhance domain-invariance in SSL
representations, and (2) offers flexibility as BSS can be seamlessly integrated
with diverse contrastive-based but also non-contrastive-based SSL methods.
Experiments on several UDG datasets demonstrate that it significantly improves
downstream task performances on unseen domains, often outperforming or rivaling
with UDG methods. Finally, this work clarifies the underlying mechanisms
contributing to BSS's effectiveness in improving domain-invariance in SSL
representations and performance on unseen domains.
| [
{
"version": "v1",
"created": "Fri, 10 Mar 2023 17:09:04 GMT"
},
{
"version": "v2",
"created": "Mon, 13 Mar 2023 10:05:01 GMT"
},
{
"version": "v3",
"created": "Mon, 24 Apr 2023 10:04:08 GMT"
},
{
"version": "v4",
"created": "Thu, 5 Oct 2023 09:55:46 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Scalbert",
"Marin",
""
],
[
"Vakalopoulou",
"Maria",
""
],
[
"Couzinié-Devy",
"Florent",
""
]
] | not_new_dataset | 0.99744 |
2303.09230 | Xie Yi | Yi Xie, Huaidong Zhang, Xuemiao Xu, Jianqing Zhu, Shengfeng He | Towards a Smaller Student: Capacity Dynamic Distillation for Efficient
Image Retrieval | Accepted by CVPR2023 | Towards a Smaller Student: Capacity Dynamic Distillation for
Efficient Image Retrieval, Proceedings of the IEEE/CVF Conference on Computer
Vision and Pattern Recognition (CVPR), 2023,16006-16015 | 10.1109/CVPR52729.2023.01536 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Previous Knowledge Distillation based efficient image retrieval methods
employ a lightweight network as the student model for fast inference. However,
the lightweight student model lacks adequate representation capacity for
effective knowledge imitation during the most critical early training period,
causing final performance degeneration. To tackle this issue, we propose a
Capacity Dynamic Distillation framework, which constructs a student model with
editable representation capacity. Specifically, the employed student model is
initially a heavy model to fruitfully learn distilled knowledge in the early
training epochs, and the student model is gradually compressed during the
training. To dynamically adjust the model capacity, our dynamic framework
inserts a learnable convolutional layer within each residual block in the
student model as the channel importance indicator. The indicator is optimized
simultaneously by the image retrieval loss and the compression loss, and a
retrieval-guided gradient resetting mechanism is proposed to release the
gradient conflict. Extensive experiments show that our method has superior
inference speed and accuracy, e.g., on the VeRi-776 dataset, given the
ResNet101 as a teacher, our method saves 67.13% model parameters and 65.67%
FLOPs (around 24.13% and 21.94% higher than state-of-the-arts) without
sacrificing accuracy (around 2.11% mAP higher than state-of-the-arts).
| [
{
"version": "v1",
"created": "Thu, 16 Mar 2023 11:09:22 GMT"
},
{
"version": "v2",
"created": "Wed, 31 May 2023 15:32:48 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Xie",
"Yi",
""
],
[
"Zhang",
"Huaidong",
""
],
[
"Xu",
"Xuemiao",
""
],
[
"Zhu",
"Jianqing",
""
],
[
"He",
"Shengfeng",
""
]
] | not_new_dataset | 0.997303 |
2303.09234 | Yining Jiao | Yining Jiao, Carlton Zdanski, Julia Kimbell, Andrew Prince, Cameron
Worden, Samuel Kirse, Christopher Rutter, Benjamin Shields, William Dunn,
Jisan Mahmud, Marc Niethammer | NAISR: A 3D Neural Additive Model for Interpretable Shape Representation | 28 pages | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Deep implicit functions (DIFs) have emerged as a powerful paradigm for many
computer vision tasks such as 3D shape reconstruction, generation,
registration, completion, editing, and understanding. However, given a set of
3D shapes with associated covariates there is at present no shape
representation method which allows to precisely represent the shapes while
capturing the individual dependencies on each covariate. Such a method would be
of high utility to researchers to discover knowledge hidden in a population of
shapes. For scientific shape discovery, we propose a 3D Neural Additive Model
for Interpretable Shape Representation ($\texttt{NAISR}$) which describes
individual shapes by deforming a shape atlas in accordance to the effect of
disentangled covariates. Our approach captures shape population trends and
allows for patient-specific predictions through shape transfer.
$\texttt{NAISR}$ is the first approach to combine the benefits of deep implicit
shape representations with an atlas deforming according to specified
covariates. We evaluate $\texttt{NAISR}$ with respect to shape reconstruction,
shape disentanglement, shape evolution, and shape transfer on three datasets:
1) $\textit{Starman}$, a simulated 2D shape dataset; 2) the ADNI hippocampus 3D
shape dataset; and 3) a pediatric airway 3D shape dataset. Our experiments
demonstrate that $\textit{Starman}$ achieves excellent shape reconstruction
performance while retaining interpretability. Our code is available at
$\href{https://github.com/uncbiag/NAISR}{https://github.com/uncbiag/NAISR}$.
| [
{
"version": "v1",
"created": "Thu, 16 Mar 2023 11:18:04 GMT"
},
{
"version": "v2",
"created": "Sat, 18 Mar 2023 12:13:19 GMT"
},
{
"version": "v3",
"created": "Tue, 28 Mar 2023 20:07:21 GMT"
},
{
"version": "v4",
"created": "Thu, 5 Oct 2023 09:25:26 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Jiao",
"Yining",
""
],
[
"Zdanski",
"Carlton",
""
],
[
"Kimbell",
"Julia",
""
],
[
"Prince",
"Andrew",
""
],
[
"Worden",
"Cameron",
""
],
[
"Kirse",
"Samuel",
""
],
[
"Rutter",
"Christopher",
""
],
[
"Shields",
"Benjamin",
""
],
[
"Dunn",
"William",
""
],
[
"Mahmud",
"Jisan",
""
],
[
"Niethammer",
"Marc",
""
]
] | not_new_dataset | 0.996392 |
2303.09874 | Alexander Hepburn | Alexander Hepburn, Valero Laparra, Ra\'ul Santos-Rodriguez, Jes\'us
Malo | Disentangling the Link Between Image Statistics and Human Perception | null | null | null | null | cs.CV cs.LG q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the 1950s, Barlow and Attneave hypothesised a link between biological
vision and information maximisation. Following Shannon, information was defined
using the probability of natural images. A number of physiological and
psychophysical phenomena have been derived ever since from principles like
info-max, efficient coding, or optimal denoising. However, it remains unclear
how this link is expressed in mathematical terms from image probability. First,
classical derivations were subjected to strong assumptions on the probability
models and on the behaviour of the sensors. Moreover, the direct evaluation of
the hypothesis was limited by the inability of the classical image models to
deliver accurate estimates of the probability. In this work we directly
evaluate image probabilities using an advanced generative model for natural
images, and we analyse how probability-related factors can be combined to
predict human perception via sensitivity of state-of-the-art subjective image
quality metrics. We use information theory and regression analysis to find a
combination of just two probability-related factors that achieves 0.8
correlation with subjective metrics. This probability-based sensitivity is
psychophysically validated by reproducing the basic trends of the Contrast
Sensitivity Function, its suprathreshold variation, and trends of the Weber-law
and masking.
| [
{
"version": "v1",
"created": "Fri, 17 Mar 2023 10:38:27 GMT"
},
{
"version": "v2",
"created": "Mon, 2 Oct 2023 09:40:54 GMT"
},
{
"version": "v3",
"created": "Thu, 5 Oct 2023 14:06:32 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Hepburn",
"Alexander",
""
],
[
"Laparra",
"Valero",
""
],
[
"Santos-Rodriguez",
"Raúl",
""
],
[
"Malo",
"Jesús",
""
]
] | not_new_dataset | 0.997416 |
2303.10650 | Natalia \'Slusarz | Natalia \'Slusarz, Ekaterina Komendantskaya, Matthew L. Daggitt,
Robert Stewart, Kathrin Stark | Logic of Differentiable Logics: Towards a Uniform Semantics of DL | LPAR'23 | null | null | null | cs.LO cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Differentiable logics (DL) have recently been proposed as a method of
training neural networks to satisfy logical specifications. A DL consists of a
syntax in which specifications are stated and an interpretation function that
translates expressions in the syntax into loss functions. These loss functions
can then be used during training with standard gradient descent algorithms. The
variety of existing DLs and the differing levels of formality with which they
are treated makes a systematic comparative study of their properties and
implementations difficult. This paper remedies this problem by suggesting a
meta-language for defining DLs that we call the Logic of Differentiable Logics,
or LDL. Syntactically, it generalises the syntax of existing DLs to FOL, and
for the first time introduces the formalism for reasoning about vectors and
learners. Semantically, it introduces a general interpretation function that
can be instantiated to define loss functions arising from different existing
DLs. We use LDL to establish several theoretical properties of existing DLs,
and to conduct their empirical study in neural network verification.
| [
{
"version": "v1",
"created": "Sun, 19 Mar 2023 13:03:51 GMT"
},
{
"version": "v2",
"created": "Mon, 15 May 2023 13:30:45 GMT"
},
{
"version": "v3",
"created": "Wed, 24 May 2023 13:33:37 GMT"
},
{
"version": "v4",
"created": "Thu, 5 Oct 2023 11:17:08 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Ślusarz",
"Natalia",
""
],
[
"Komendantskaya",
"Ekaterina",
""
],
[
"Daggitt",
"Matthew L.",
""
],
[
"Stewart",
"Robert",
""
],
[
"Stark",
"Kathrin",
""
]
] | not_new_dataset | 0.997339 |
2303.12214 | Jingwei Zhang | Jingwei Zhang, Saarthak Kapse, Ke Ma, Prateek Prasanna, Joel Saltz,
Maria Vakalopoulou, Dimitris Samaras | Prompt-MIL: Boosting Multi-Instance Learning Schemes via Task-specific
Prompt Tuning | Accepted to MICCAI 2023 (Oral) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Whole slide image (WSI) classification is a critical task in computational
pathology, requiring the processing of gigapixel-sized images, which is
challenging for current deep-learning methods. Current state-of-the-art methods
are based on multi-instance learning schemes (MIL), which usually rely on
pretrained features to represent the instances. Due to the lack of
task-specific annotated data, these features are either obtained from
well-established backbones on natural images, or, more recently from
self-supervised models pretrained on histopathology. However, both approaches
yield task-agnostic features, resulting in performance loss compared to the
appropriate task-related supervision, if available. In this paper, we show that
when task-specific annotations are limited, we can inject such supervision into
downstream task training, to reduce the gap between fully task-tuned and
task-agnostic features. We propose Prompt-MIL, an MIL framework that integrates
prompts into WSI classification. Prompt-MIL adopts a prompt tuning mechanism,
where only a small fraction of parameters calibrates the pretrained features to
encode task-specific information, rather than the conventional full fine-tuning
approaches. Extensive experiments on three WSI datasets, TCGA-BRCA, TCGA-CRC,
and BRIGHT, demonstrate the superiority of Prompt-MIL over conventional MIL
methods, achieving a relative improvement of 1.49%-4.03% in accuracy and
0.25%-8.97% in AUROC while using fewer than 0.3% additional parameters.
Compared to conventional full fine-tuning approaches, we fine-tune less than
1.3% of the parameters, yet achieve a relative improvement of 1.29%-13.61% in
accuracy and 3.22%-27.18% in AUROC and reduce GPU memory consumption by 38%-45%
while training 21%-27% faster. Our code is available at
https://github.com/cvlab-stonybrook/PromptMIL.
| [
{
"version": "v1",
"created": "Tue, 21 Mar 2023 22:24:27 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Oct 2023 03:50:19 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Zhang",
"Jingwei",
""
],
[
"Kapse",
"Saarthak",
""
],
[
"Ma",
"Ke",
""
],
[
"Prasanna",
"Prateek",
""
],
[
"Saltz",
"Joel",
""
],
[
"Vakalopoulou",
"Maria",
""
],
[
"Samaras",
"Dimitris",
""
]
] | not_new_dataset | 0.99728 |
2303.14655 | Ji Qi | Ji Qi, Jifan Yu, Teng Tu, Kunyu Gao, Yifan Xu, Xinyu Guan, Xiaozhi
Wang, Yuxiao Dong, Bin Xu, Lei Hou, Juanzi Li, Jie Tang, Weidong Guo, Hui
Liu, Yu Xu | GOAL: A Challenging Knowledge-grounded Video Captioning Benchmark for
Real-time Soccer Commentary Generation | Accepted by CIKM 2023 | null | null | null | cs.CV cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite the recent emergence of video captioning models, how to generate
vivid, fine-grained video descriptions based on the background knowledge (i.e.,
long and informative commentary about the domain-specific scenes with
appropriate reasoning) is still far from being solved, which however has great
applications such as automatic sports narrative. In this paper, we present
GOAL, a benchmark of over 8.9k soccer video clips, 22k sentences, and 42k
knowledge triples for proposing a challenging new task setting as
Knowledge-grounded Video Captioning (KGVC). Moreover, we conduct experimental
adaption of existing methods to show the difficulty and potential directions
for solving this valuable and applicable task. Our data and code are available
at https://github.com/THU-KEG/goal.
| [
{
"version": "v1",
"created": "Sun, 26 Mar 2023 08:43:36 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Oct 2023 06:55:13 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Qi",
"Ji",
""
],
[
"Yu",
"Jifan",
""
],
[
"Tu",
"Teng",
""
],
[
"Gao",
"Kunyu",
""
],
[
"Xu",
"Yifan",
""
],
[
"Guan",
"Xinyu",
""
],
[
"Wang",
"Xiaozhi",
""
],
[
"Dong",
"Yuxiao",
""
],
[
"Xu",
"Bin",
""
],
[
"Hou",
"Lei",
""
],
[
"Li",
"Juanzi",
""
],
[
"Tang",
"Jie",
""
],
[
"Guo",
"Weidong",
""
],
[
"Liu",
"Hui",
""
],
[
"Xu",
"Yu",
""
]
] | new_dataset | 0.997324 |
2303.15375 | Yan Sun | Yan Sun, Yifan Yuan, Zeduo Yu, Reese Kuper, Chihun Song, Jinghan
Huang, Houxiang Ji, Siddharth Agarwal, Jiaqi Lou, Ipoom Jeong, Ren Wang, Jung
Ho Ahn, Tianyin Xu, Nam Sung Kim | Demystifying CXL Memory with Genuine CXL-Ready Systems and Devices | This paper has been accepted by MICRO'23. Please refer to the
https://doi.org/10.1145/3613424.3614256 for the official version of this
paper | null | 10.1145/3613424.3614256 | null | cs.PF cs.AR | http://creativecommons.org/licenses/by/4.0/ | The ever-growing demands for memory with larger capacity and higher bandwidth
have driven recent innovations on memory expansion and disaggregation
technologies based on Compute eXpress Link (CXL). Especially, CXL-based memory
expansion technology has recently gained notable attention for its ability not
only to economically expand memory capacity and bandwidth but also to decouple
memory technologies from a specific memory interface of the CPU. However, since
CXL memory devices have not been widely available, they have been emulated
using DDR memory in a remote NUMA node. In this paper, for the first time, we
comprehensively evaluate a true CXL-ready system based on the latest
4th-generation Intel Xeon CPU with three CXL memory devices from different
manufacturers. Specifically, we run a set of microbenchmarks not only to
compare the performance of true CXL memory with that of emulated CXL memory but
also to analyze the complex interplay between the CPU and CXL memory in depth.
This reveals important differences between emulated CXL memory and true CXL
memory, some of which will compel researchers to revisit the analyses and
proposals from recent work. Next, we identify opportunities for
memory-bandwidth-intensive applications to benefit from the use of CXL memory.
Lastly, we propose a CXL-memory-aware dynamic page allocation policy, Caption,
to more efficiently use CXL memory as a bandwidth expander. We demonstrate that
Caption can automatically converge to an empirically favorable percentage of
pages allocated to CXL memory, which improves the performance of
memory-bandwidth-intensive applications by up to 24% when compared to the
default page allocation policy designed for traditional NUMA systems.
| [
{
"version": "v1",
"created": "Mon, 27 Mar 2023 16:51:26 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Apr 2023 04:25:32 GMT"
},
{
"version": "v3",
"created": "Sun, 30 Jul 2023 22:40:13 GMT"
},
{
"version": "v4",
"created": "Thu, 5 Oct 2023 03:58:56 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Sun",
"Yan",
""
],
[
"Yuan",
"Yifan",
""
],
[
"Yu",
"Zeduo",
""
],
[
"Kuper",
"Reese",
""
],
[
"Song",
"Chihun",
""
],
[
"Huang",
"Jinghan",
""
],
[
"Ji",
"Houxiang",
""
],
[
"Agarwal",
"Siddharth",
""
],
[
"Lou",
"Jiaqi",
""
],
[
"Jeong",
"Ipoom",
""
],
[
"Wang",
"Ren",
""
],
[
"Ahn",
"Jung Ho",
""
],
[
"Xu",
"Tianyin",
""
],
[
"Kim",
"Nam Sung",
""
]
] | not_new_dataset | 0.997382 |
2303.16887 | Guan Zhe Hong | Guan Zhe Hong, Yin Cui, Ariel Fuxman, Stanley H. Chan, Enming Luo | Towards Understanding the Effect of Pretraining Label Granularity | null | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we study how the granularity of pretraining labels affects the
generalization of deep neural networks in image classification tasks. We focus
on the "fine-to-coarse" transfer learning setting, where the pretraining label
space is more fine-grained than that of the target problem. Empirically, we
show that pretraining on the leaf labels of ImageNet21k produces better
transfer results on ImageNet1k than pretraining on other coarser granularity
levels, which supports the common practice used in the community.
Theoretically, we explain the benefit of fine-grained pretraining by proving
that, for a data distribution satisfying certain hierarchy conditions, 1)
coarse-grained pretraining only allows a neural network to learn the "common"
or "easy-to-learn" features well, while 2) fine-grained pretraining helps the
network learn the "rarer" or "fine-grained" features in addition to the common
ones, thus improving its accuracy on hard downstream test samples in which
common features are missing or weak in strength. Furthermore, we perform
comprehensive experiments using the label hierarchies of iNaturalist 2021 and
observe that the following conditions, in addition to proper choice of label
granularity, enable the transfer to work well in practice: 1) the pretraining
dataset needs to have a meaningful label hierarchy, and 2) the pretraining and
target label functions need to align well.
| [
{
"version": "v1",
"created": "Wed, 29 Mar 2023 17:56:36 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Oct 2023 17:32:26 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Hong",
"Guan Zhe",
""
],
[
"Cui",
"Yin",
""
],
[
"Fuxman",
"Ariel",
""
],
[
"Chan",
"Stanley H.",
""
],
[
"Luo",
"Enming",
""
]
] | not_new_dataset | 0.997515 |
2304.03752 | Jiaqi Wang | Jiaqi Wang, Pan Zhang, Tao Chu, Yuhang Cao, Yujie Zhou, Tong Wu, Bin
Wang, Conghui He, Dahua Lin | V3Det: Vast Vocabulary Visual Detection Dataset | ICCV 2023 Oral Camera Ready | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advances in detecting arbitrary objects in the real world are trained
and evaluated on object detection datasets with a relatively restricted
vocabulary. To facilitate the development of more general visual object
detection, we propose V3Det, a vast vocabulary visual detection dataset with
precisely annotated bounding boxes on massive images. V3Det has several
appealing properties: 1) Vast Vocabulary: It contains bounding boxes of objects
from 13,204 categories on real-world images, which is 10 times larger than the
existing large vocabulary object detection dataset, e.g., LVIS. 2) Hierarchical
Category Organization: The vast vocabulary of V3Det is organized by a
hierarchical category tree which annotates the inclusion relationship among
categories, encouraging the exploration of category relationships in vast and
open vocabulary object detection. 3) Rich Annotations: V3Det comprises
precisely annotated objects in 243k images and professional descriptions of
each category written by human experts and a powerful chatbot. By offering a
vast exploration space, V3Det enables extensive benchmarks on both vast and
open vocabulary object detection, leading to new observations, practices, and
insights for future research. It has the potential to serve as a cornerstone
dataset for developing more general visual perception systems. V3Det is
available at https://v3det.openxlab.org.cn/.
| [
{
"version": "v1",
"created": "Fri, 7 Apr 2023 17:45:35 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Oct 2023 12:18:14 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Wang",
"Jiaqi",
""
],
[
"Zhang",
"Pan",
""
],
[
"Chu",
"Tao",
""
],
[
"Cao",
"Yuhang",
""
],
[
"Zhou",
"Yujie",
""
],
[
"Wu",
"Tong",
""
],
[
"Wang",
"Bin",
""
],
[
"He",
"Conghui",
""
],
[
"Lin",
"Dahua",
""
]
] | new_dataset | 0.997916 |
2304.04327 | Jinyi Ye | Jinyi Ye, Nikhil Jindal, Francesco Pierri, Luca Luceri | Online Networks of Support in Distressed Environments: Solidarity and
Mobilization during the Russian Invasion of Ukraine | Presented at ICWSM2023 Workshop "Data for the Wellbeing of Most
Vulnerable" | Proceedings of the ICWSM Workshops 2023 | 10.36190/2023.05 | null | cs.SI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Despite their drawbacks and unintended consequences, social media networks
have recently emerged as a crucial resource for individuals in distress,
particularly during times of crisis. These platforms serve as a means to seek
assistance and support, share reliable information, and appeal for action and
solidarity. In this paper, we examine the online networks of support during the
Russia-Ukraine conflict by analyzing four major social media networks: Twitter,
Facebook, Instagram, and YouTube. Using a large dataset of 68 million posts, we
explore the temporal patterns and interconnectedness between these platforms
and online support websites. Our analysis highlights the prevalence of
crowdsourcing and crowdfunding websites as the two main support platforms to
mobilize resources and solicit donations, revealing their purpose and contents,
and investigating different support-seeking and -receiving practices. Overall,
our study underscores the potential of social media in facilitating online
support in distressed environments through grassroots mobilization,
contributing to the growing body of research on the positive impact of online
platforms in promoting social good and protecting vulnerable populations during
times of crisis and conflict.
| [
{
"version": "v1",
"created": "Sun, 9 Apr 2023 23:27:59 GMT"
},
{
"version": "v2",
"created": "Mon, 15 May 2023 22:17:40 GMT"
},
{
"version": "v3",
"created": "Wed, 4 Oct 2023 21:59:32 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Ye",
"Jinyi",
""
],
[
"Jindal",
"Nikhil",
""
],
[
"Pierri",
"Francesco",
""
],
[
"Luceri",
"Luca",
""
]
] | not_new_dataset | 0.99743 |
2304.05128 | Xinyun Chen | Xinyun Chen, Maxwell Lin, Nathanael Sch\"arli, Denny Zhou | Teaching Large Language Models to Self-Debug | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Large language models (LLMs) have achieved impressive performance on code
generation. However, for complex programming tasks, generating the correct
solution in one go becomes challenging, thus some prior works have designed
program repair approaches to improve code generation performance. In this work,
we propose Self-Debugging, which teaches a large language model to debug its
predicted program via few-shot demonstrations. In particular, we demonstrate
that Self-Debugging can teach the large language model to perform rubber duck
debugging; i.e., without any human feedback on the code correctness or error
messages, the model is able to identify its mistakes by investigating the
execution results and explaining the generated code in natural language.
Self-Debugging achieves the state-of-the-art performance on several code
generation benchmarks, including the Spider dataset for text-to-SQL generation,
TransCoder for C++-to-Python translation, and MBPP for text-to-Python
generation. On the Spider benchmark where there are no unit tests to verify the
correctness of predictions, Self-Debugging with code explanation consistently
improves the baseline by 2-3%, and improves the prediction accuracy on problems
of the hardest level by 9%. On TransCoder and MBPP where unit tests are
available, Self-Debugging improves the baseline accuracy by up to 12%.
Meanwhile, by leveraging feedback messages and reusing failed predictions,
Self-Debugging notably improves sample efficiency, and can match or outperform
baseline models that generate more than 10x candidate programs.
| [
{
"version": "v1",
"created": "Tue, 11 Apr 2023 10:43:43 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Oct 2023 09:12:07 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Chen",
"Xinyun",
""
],
[
"Lin",
"Maxwell",
""
],
[
"Schärli",
"Nathanael",
""
],
[
"Zhou",
"Denny",
""
]
] | not_new_dataset | 0.977384 |
2304.06715 | Jonathan Crabb\'e | Jonathan Crabb\'e, Mihaela van der Schaar | Evaluating the Robustness of Interpretability Methods through
Explanation Invariance and Equivariance | Presented at NeurIPS 2023 | null | null | null | cs.LG cs.AI cs.CG | http://creativecommons.org/licenses/by/4.0/ | Interpretability methods are valuable only if their explanations faithfully
describe the explained model. In this work, we consider neural networks whose
predictions are invariant under a specific symmetry group. This includes
popular architectures, ranging from convolutional to graph neural networks. Any
explanation that faithfully explains this type of model needs to be in
agreement with this invariance property. We formalize this intuition through
the notion of explanation invariance and equivariance by leveraging the
formalism from geometric deep learning. Through this rigorous formalism, we
derive (1) two metrics to measure the robustness of any interpretability method
with respect to the model symmetry group; (2) theoretical robustness guarantees
for some popular interpretability methods and (3) a systematic approach to
increase the invariance of any interpretability method with respect to a
symmetry group. By empirically measuring our metrics for explanations of models
associated with various modalities and symmetry groups, we derive a set of 5
guidelines to allow users and developers of interpretability methods to produce
robust explanations.
| [
{
"version": "v1",
"created": "Thu, 13 Apr 2023 17:59:03 GMT"
},
{
"version": "v2",
"created": "Fri, 12 May 2023 17:59:25 GMT"
},
{
"version": "v3",
"created": "Thu, 5 Oct 2023 15:29:01 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Crabbé",
"Jonathan",
""
],
[
"van der Schaar",
"Mihaela",
""
]
] | not_new_dataset | 0.997409 |
2304.08247 | Keno Bressem | Tianyu Han and Lisa C. Adams and Jens-Michalis Papaioannou and Paul
Grundmann and Tom Oberhauser and Alexander L\"oser and Daniel Truhn and Keno
K. Bressem | MedAlpaca -- An Open-Source Collection of Medical Conversational AI
Models and Training Data | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | As large language models (LLMs) like OpenAI's GPT series continue to make
strides, we witness the emergence of artificial intelligence applications in an
ever-expanding range of fields. In medicine, these LLMs hold considerable
promise for improving medical workflows, diagnostics, patient care, and
education. Yet, there is an urgent need for open-source models that can be
deployed on-premises to safeguard patient privacy. In our work, we present an
innovative dataset consisting of over 160,000 entries, specifically crafted to
fine-tune LLMs for effective medical applications. We investigate the impact of
fine-tuning these datasets on publicly accessible pre-trained LLMs, and
subsequently, we juxtapose the performance of pre-trained-only models against
the fine-tuned models concerning the examinations that future medical doctors
must pass to achieve certification.
| [
{
"version": "v1",
"created": "Fri, 14 Apr 2023 11:28:08 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Oct 2023 23:28:00 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Han",
"Tianyu",
""
],
[
"Adams",
"Lisa C.",
""
],
[
"Papaioannou",
"Jens-Michalis",
""
],
[
"Grundmann",
"Paul",
""
],
[
"Oberhauser",
"Tom",
""
],
[
"Löser",
"Alexander",
""
],
[
"Truhn",
"Daniel",
""
],
[
"Bressem",
"Keno K.",
""
]
] | new_dataset | 0.997839 |
2304.08979 | Xinyue Shen | Xinyue Shen and Zeyuan Chen and Michael Backes and Yang Zhang | In ChatGPT We Trust? Measuring and Characterizing the Reliability of
ChatGPT | null | null | null | null | cs.CR cs.LG | http://creativecommons.org/licenses/by/4.0/ | The way users acquire information is undergoing a paradigm shift with the
advent of ChatGPT. Unlike conventional search engines, ChatGPT retrieves
knowledge from the model itself and generates answers for users. ChatGPT's
impressive question-answering (QA) capability has attracted more than 100
million users within a short period of time but has also raised concerns
regarding its reliability. In this paper, we perform the first large-scale
measurement of ChatGPT's reliability in the generic QA scenario with a
carefully curated set of 5,695 questions across ten datasets and eight domains.
We find that ChatGPT's reliability varies across different domains, especially
underperforming in law and science questions. We also demonstrate that system
roles, originally designed by OpenAI to allow users to steer ChatGPT's
behavior, can impact ChatGPT's reliability in an imperceptible way. We further
show that ChatGPT is vulnerable to adversarial examples, and even a single
character change can negatively affect its reliability in certain cases. We
believe that our study provides valuable insights into ChatGPT's reliability
and underscores the need for strengthening the reliability and security of
large language models (LLMs).
| [
{
"version": "v1",
"created": "Tue, 18 Apr 2023 13:20:45 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Oct 2023 13:27:12 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Shen",
"Xinyue",
""
],
[
"Chen",
"Zeyuan",
""
],
[
"Backes",
"Michael",
""
],
[
"Zhang",
"Yang",
""
]
] | not_new_dataset | 0.997463 |
2304.09666 | Marc Fuchs | Marc Fuchs and Fabian Kuhn | List Defective Colorings: Distributed Algorithms and Applications | null | null | 10.4230/LIPIcs.DISC.2023.22 | null | cs.DC cs.DS | http://creativecommons.org/licenses/by/4.0/ | The distributed coloring problem is at the core of the area of distributed
graph algorithms and it is a problem that has seen tremendous progress over the
last few years. Much of the remarkable recent progress on deterministic
distributed coloring algorithms is based on two main tools: a) defective
colorings in which every node of a given color can have a limited number of
neighbors of the same color and b) list coloring, a natural generalization of
the standard coloring problem that naturally appears when colorings are
computed in different stages and one has to extend a previously computed
partial coloring to a full coloring.
In this paper, we introduce 'list defective colorings', which can be seen as
a generalization of these two coloring variants. Essentially, in a list
defective coloring instance, each node $v$ is given a list of colors
$x_{v,1},\dots,x_{v,p}$ together with a list of defects $d_{v,1},\dots,d_{v,p}$
such that if $v$ is colored with color $x_{v, i}$, it is allowed to have at
most $d_{v, i}$ neighbors with color $x_{v, i}$.
We highlight the important role of list defective colorings by showing that
faster list defective coloring algorithms would directly lead to faster
deterministic $(\Delta+1)$-coloring algorithms in the LOCAL model. Further, we
extend a recent distributed list coloring algorithm by Maus and Tonoyan [DISC
'20]. Slightly simplified, we show that if for each node $v$ it holds that
$\sum_{i=1}^p \big(d_{v,i}+1)^2 > \mathrm{deg}_G^2(v)\cdot polylog\Delta$ then
this list defective coloring instance can be solved in a
communication-efficient way in only $O(\log\Delta)$ communication rounds. This
leads to the first deterministic $(\Delta+1)$-coloring algorithm in the
standard CONGEST model with a time complexity of $O(\sqrt{\Delta}\cdot polylog
\Delta+\log^* n)$, matching the best time complexity in the LOCAL model up to a
$polylog\Delta$ factor.
| [
{
"version": "v1",
"created": "Wed, 19 Apr 2023 13:52:47 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Aug 2023 14:23:40 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Fuchs",
"Marc",
""
],
[
"Kuhn",
"Fabian",
""
]
] | not_new_dataset | 0.997405 |
2304.11004 | Huayu Li | Huayu Li, Xiwen Chen, Gregory Ditzler, Janet Roveda, Ao Li | Knowledge Distillation Under Ideal Joint Classifier Assumption | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Knowledge distillation constitutes a potent methodology for condensing
substantial neural networks into more compact and efficient counterparts.
Within this context, softmax regression representation learning serves as a
widely embraced approach, leveraging a pre-established teacher network to guide
the learning process of a diminutive student network. Notably, despite the
extensive inquiry into the efficacy of softmax regression representation
learning, the intricate underpinnings governing the knowledge transfer
mechanism remain inadequately elucidated. This study introduces the 'Ideal
Joint Classifier Knowledge Distillation' (IJCKD) framework, an overarching
paradigm that not only furnishes a lucid and exhaustive comprehension of
prevailing knowledge distillation techniques but also establishes a theoretical
underpinning for prospective investigations. Employing mathematical
methodologies derived from domain adaptation theory, this investigation
conducts a comprehensive examination of the error boundary of the student
network contingent upon the teacher network. Consequently, our framework
facilitates efficient knowledge transference between teacher and student
networks, thereby accommodating a diverse spectrum of applications.
| [
{
"version": "v1",
"created": "Wed, 19 Apr 2023 21:06:00 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Oct 2023 23:33:35 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Li",
"Huayu",
""
],
[
"Chen",
"Xiwen",
""
],
[
"Ditzler",
"Gregory",
""
],
[
"Roveda",
"Janet",
""
],
[
"Li",
"Ao",
""
]
] | not_new_dataset | 0.997422 |
2304.14420 | Albert Lam | Albert Lam, Mihai Anitescu, Anirudh Subramanyam | Network Cascade Vulnerability using Constrained Bayesian Optimization | 13 pages, 5 figures | null | null | null | cs.SI cs.LG math.OC stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Measures of power grid vulnerability are often assessed by the amount of
damage an adversary can exact on the network. However, the cascading impact of
such attacks is often overlooked, even though cascades are one of the primary
causes of large-scale blackouts. This paper explores modifications of
transmission line protection settings as candidates for adversarial attacks,
which can remain undetectable as long as the network equilibrium state remains
unaltered. This forms the basis of a black-box function in a Bayesian
optimization procedure, where the objective is to find protection settings that
maximize network degradation due to cascading. Notably, our proposed method is
agnostic to the choice of the cascade simulator and its underlying assumptions.
Numerical experiments reveal that, against conventional wisdom, maximally
misconfiguring the protection settings of all network lines does not cause the
most cascading. More surprisingly, even when the degree of misconfiguration is
limited due to resource constraints, it is still possible to find settings that
produce cascades comparable in severity to instances where there are no
resource constraints.
| [
{
"version": "v1",
"created": "Thu, 27 Apr 2023 02:31:20 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Oct 2023 02:19:18 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Lam",
"Albert",
""
],
[
"Anitescu",
"Mihai",
""
],
[
"Subramanyam",
"Anirudh",
""
]
] | not_new_dataset | 0.997363 |
2304.14993 | Dhruv Kumar | Ishika Joshi, Ritvik Budhiraja, Harshal Dev, Jahnvi Kadia, M. Osama
Ataullah, Sayan Mitra, Dhruv Kumar, Harshal D. Akolekar | ChatGPT in the Classroom: An Analysis of Its Strengths and Weaknesses
for Solving Undergraduate Computer Science Questions | Accepted in SIGCSE TS 2024 | null | null | null | cs.HC cs.AI cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | ChatGPT is an AI language model developed by OpenAI that can understand and
generate human-like text. It can be used for a variety of use cases such as
language generation, question answering, text summarization, chatbot
development, language translation, sentiment analysis, content creation,
personalization, text completion, and storytelling. While ChatGPT has garnered
significant positive attention, it has also generated a sense of apprehension
and uncertainty in academic circles. There is concern that students may
leverage ChatGPT to complete take-home assignments and exams and obtain
favorable grades without genuinely acquiring knowledge. This paper adopts a
quantitative approach to demonstrate ChatGPT's high degree of unreliability in
answering a diverse range of questions pertaining to topics in undergraduate
computer science. Our analysis shows that students may risk self-sabotage by
blindly depending on ChatGPT to complete assignments and exams. We build upon
this analysis to provide constructive recommendations to both students and
instructors.
| [
{
"version": "v1",
"created": "Fri, 28 Apr 2023 17:26:32 GMT"
},
{
"version": "v2",
"created": "Wed, 17 May 2023 14:44:32 GMT"
},
{
"version": "v3",
"created": "Thu, 5 Oct 2023 04:18:28 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Joshi",
"Ishika",
""
],
[
"Budhiraja",
"Ritvik",
""
],
[
"Dev",
"Harshal",
""
],
[
"Kadia",
"Jahnvi",
""
],
[
"Ataullah",
"M. Osama",
""
],
[
"Mitra",
"Sayan",
""
],
[
"Kumar",
"Dhruv",
""
],
[
"Akolekar",
"Harshal D.",
""
]
] | not_new_dataset | 0.997434 |
2305.06410 | Hsueh-Ti Derek Liu | Hsueh-Ti Derek Liu, Mark Gillespie, Benjamin Chislett, Nicholas Sharp,
Alec Jacobson, Keenan Crane | Surface Simplification using Intrinsic Error Metrics | SIGGRAPH 2023 | ACM Transactions on Graphics, Vol.42, No. 4, August 2023 | null | null | cs.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper describes a method for fast simplification of surface meshes.
Whereas past methods focus on visual appearance, our goal is to solve equations
on the surface. Hence, rather than approximate the extrinsic geometry, we
construct a coarse intrinsic triangulation of the input domain. In the spirit
of the quadric error metric (QEM), we perform greedy decimation while
agglomerating global information about approximation error. In lieu of
extrinsic quadrics, however, we store intrinsic tangent vectors that track how
far curvature "drifts" during simplification. This process also yields a
bijective map between the fine and coarse mesh, and prolongation operators for
both scalar- and vector-valued data. Moreover, we obtain hard guarantees on
element quality via intrinsic retriangulation - a feature unique to the
intrinsic setting. The overall payoff is a "black box" approach to geometry
processing, which decouples mesh resolution from the size of matrices used to
solve equations. We show how our method benefits several fundamental tasks,
including geometric multigrid, all-pairs geodesic distance, mean curvature
flow, geodesic Voronoi diagrams, and the discrete exponential map.
| [
{
"version": "v1",
"created": "Wed, 10 May 2023 18:41:48 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Oct 2023 18:11:24 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Liu",
"Hsueh-Ti Derek",
""
],
[
"Gillespie",
"Mark",
""
],
[
"Chislett",
"Benjamin",
""
],
[
"Sharp",
"Nicholas",
""
],
[
"Jacobson",
"Alec",
""
],
[
"Crane",
"Keenan",
""
]
] | not_new_dataset | 0.996532 |
2305.07962 | Constantin Runge | Constantin Runge and Thomas Wiegart and Diego Lentner | Improved List Decoding for Polar-Coded Probabilistic Shaping | 5 pages, 3 figures; as presented at ISTC 2023 | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A modified successive cancellation list (SCL) decoder is proposed for
polar-coded probabilistic shaping. The decoder exploits the deterministic
encoding rule for shaping bits to rule out candidate code words that the
encoder would not generate. This provides error detection and decreases error
rates compared to standard SCL decoding while at the same time reducing the
length of the outer cyclic redundancy check code.
| [
{
"version": "v1",
"created": "Sat, 13 May 2023 16:41:56 GMT"
},
{
"version": "v2",
"created": "Tue, 22 Aug 2023 08:38:12 GMT"
},
{
"version": "v3",
"created": "Thu, 5 Oct 2023 14:37:51 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Runge",
"Constantin",
""
],
[
"Wiegart",
"Thomas",
""
],
[
"Lentner",
"Diego",
""
]
] | not_new_dataset | 0.996789 |
2305.11779 | Huitong Pan | Huitong Pan, Qi Zhang, Eduard Dragut, Cornelia Caragea, Longin Jan
Latecki | DMDD: A Large-Scale Dataset for Dataset Mentions Detection | Pre-MIT Press publication version. Submitted to TACL | Transactions of the Association for Computational Linguistics. 11
(2023) 1132-1146 | 10.1162/tacl_a_00592 | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | The recognition of dataset names is a critical task for automatic information
extraction in scientific literature, enabling researchers to understand and
identify research opportunities. However, existing corpora for dataset mention
detection are limited in size and naming diversity. In this paper, we introduce
the Dataset Mentions Detection Dataset (DMDD), the largest publicly available
corpus for this task. DMDD consists of the DMDD main corpus, comprising 31,219
scientific articles with over 449,000 dataset mentions weakly annotated in the
format of in-text spans, and an evaluation set, which comprises 450
scientific articles manually annotated for evaluation purposes. We use DMDD to
establish baseline performance for dataset mention detection and linking. By
analyzing the performance of various models on DMDD, we are able to identify
open problems in dataset mention detection. We invite the community to use our
dataset as a challenge to develop novel dataset mention detection models.
| [
{
"version": "v1",
"created": "Fri, 19 May 2023 16:18:00 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Pan",
"Huitong",
""
],
[
"Zhang",
"Qi",
""
],
[
"Dragut",
"Eduard",
""
],
[
"Caragea",
"Cornelia",
""
],
[
"Latecki",
"Longin Jan",
""
]
] | new_dataset | 0.997837 |
2305.12081 | Zifeng Wang | Zifeng Wang and Chufan Gao and Cao Xiao and Jimeng Sun | MediTab: Scaling Medical Tabular Data Predictors via Data Consolidation,
Enrichment, and Refinement | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Tabular data prediction has been employed in medical applications such as
patient health risk prediction. However, existing methods usually revolve
around the algorithm design while overlooking the significance of data
engineering. Medical tabular datasets frequently exhibit significant
heterogeneity across different sources, with limited sample sizes per source.
As such, previous predictors are often trained on manually curated small
datasets that struggle to generalize across different tabular datasets during
inference. This paper proposes to scale medical tabular data predictors
(MediTab) to various tabular inputs with varying features. The method uses a
data engine that leverages large language models (LLMs) to consolidate tabular
samples to overcome the barrier across tables with distinct schema. It also
aligns out-domain data with the target task using a "learn, annotate, and
refinement" pipeline. The expanded training data then enables the pre-trained
MediTab to infer for arbitrary tabular input in the domain without fine-tuning,
resulting in significant improvements over supervised baselines: it reaches an
average ranking of 1.57 and 1.00 on 7 patient outcome prediction datasets and 3
trial outcome prediction datasets, respectively. In addition, MediTab exhibits
impressive zero-shot performances: it outperforms supervised XGBoost models by
8.9% and 17.2% on average in two prediction tasks, respectively. The code is
available at https://github.com/RyanWangZf/MediTab.
| [
{
"version": "v1",
"created": "Sat, 20 May 2023 03:37:09 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Oct 2023 05:40:00 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Wang",
"Zifeng",
""
],
[
"Gao",
"Chufan",
""
],
[
"Xiao",
"Cao",
""
],
[
"Sun",
"Jimeng",
""
]
] | not_new_dataset | 0.99709 |
2305.12766 | Chi Han | Chi Han, Ziqi Wang, Han Zhao, Heng Ji | Explaining Emergent In-Context Learning as Kernel Regression | 9 pages, 4 figures | null | null | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Large language models (LLMs) have initiated a paradigm shift in transfer
learning. In contrast to the classic pretraining-then-finetuning procedure, in
order to use LLMs for downstream prediction tasks, one only needs to provide a
few demonstrations, known as in-context examples, without adding more or
updating existing model parameters. This in-context learning (ICL) capability
of LLMs is intriguing, and it is not yet fully understood how pretrained LLMs
acquire such capabilities. In this paper, we investigate the reason why a
transformer-based language model can accomplish in-context learning after
pre-training on a general language corpus by proposing one hypothesis that LLMs
can simulate kernel regression with internal representations when faced with
in-context examples. More concretely, we first prove that Bayesian inference on
in-context prompts can be asymptotically understood as kernel regression $\hat
y = \sum_i y_i K(x, x_i)/\sum_i K(x, x_i)$ as the number of in-context
demonstrations grows. Then, we empirically investigate the in-context behaviors
of language models. We find that during ICL, the attention and hidden features
in LLMs match the behaviors of a kernel regression. Finally, our theory
provides insights into multiple phenomena observed in the ICL field: why
retrieving demonstrative samples similar to test samples can help, why ICL
performance is sensitive to the output formats, and why ICL accuracy benefits
from selecting in-distribution and representative samples.
| [
{
"version": "v1",
"created": "Mon, 22 May 2023 06:45:02 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Oct 2023 16:04:43 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Han",
"Chi",
""
],
[
"Wang",
"Ziqi",
""
],
[
"Zhao",
"Han",
""
],
[
"Ji",
"Heng",
""
]
] | not_new_dataset | 0.99746 |
2305.13673 | Zeyuan Allen-Zhu | Zeyuan Allen-Zhu, Yuanzhi Li | Physics of Language Models: Part 1, Context-Free Grammar | V2 polishes writing and adds Appendix G | null | null | null | cs.CL cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We design controlled experiments to study HOW generative language models,
like GPT, learn context-free grammars (CFGs) -- diverse language systems with a
tree-like structure capturing many aspects of natural languages, programs, and
logics. CFGs are as hard as pushdown automata, and can be ambiguous so that
verifying if a string satisfies the rules requires dynamic programming. We
construct synthetic data and demonstrate that even for difficult (long and
ambiguous) CFGs, pre-trained transformers can learn to generate sentences with
near-perfect accuracy and impressive diversity.
More importantly, we delve into the physical principles behind how
transformers learn CFGs. We discover that the hidden states within the
transformer implicitly and precisely encode the CFG structure (such as putting
tree node information exactly on the subtree boundary), and learn to form
"boundary to boundary" attentions resembling dynamic programming. We also cover
some extension of CFGs as well as the robustness aspect of transformers against
grammar mistakes. Overall, our research provides a comprehensive and empirical
understanding of how transformers learn CFGs, and reveals the physical
mechanisms utilized by transformers to capture the structure and rules of
languages.
| [
{
"version": "v1",
"created": "Tue, 23 May 2023 04:28:16 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Oct 2023 01:43:23 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Allen-Zhu",
"Zeyuan",
""
],
[
"Li",
"Yuanzhi",
""
]
] | not_new_dataset | 0.997382 |
2305.13716 | Yuhao Liang | Yuhao Liang, Fan Yu, Yangze Li, Pengcheng Guo, Shiliang Zhang, Qian
Chen, Lei Xie | BA-SOT: Boundary-Aware Serialized Output Training for Multi-Talker ASR | Accepted by INTERSPEECH 2023 | null | null | null | cs.SD cs.CL eess.AS | http://creativecommons.org/licenses/by-sa/4.0/ | The recently proposed serialized output training (SOT) simplifies
multi-talker automatic speech recognition (ASR) by generating speaker
transcriptions separated by a special token. However, frequent speaker changes
can make speaker change prediction difficult. To address this, we propose
boundary-aware serialized output training (BA-SOT), which explicitly
incorporates boundary knowledge into the decoder via a speaker change detection
task and boundary constraint loss. We also introduce a two-stage connectionist
temporal classification (CTC) strategy that incorporates token-level SOT CTC to
restore temporal context information. Besides typical character error rate
(CER), we introduce utterance-dependent character error rate (UD-CER) to
further measure the precision of speaker change prediction. Compared to
original SOT, BA-SOT reduces CER/UD-CER by 5.1%/14.0%, and leveraging a
pre-trained ASR model for BA-SOT model initialization further reduces
CER/UD-CER by 8.4%/19.9%.
| [
{
"version": "v1",
"created": "Tue, 23 May 2023 06:08:13 GMT"
},
{
"version": "v2",
"created": "Tue, 30 May 2023 13:45:08 GMT"
},
{
"version": "v3",
"created": "Thu, 5 Oct 2023 11:44:39 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Liang",
"Yuhao",
""
],
[
"Yu",
"Fan",
""
],
[
"Li",
"Yangze",
""
],
[
"Guo",
"Pengcheng",
""
],
[
"Zhang",
"Shiliang",
""
],
[
"Chen",
"Qian",
""
],
[
"Xie",
"Lei",
""
]
] | not_new_dataset | 0.997163 |
2305.14979 | Gabriel Kasmi | Gabriel Kasmi and Laurent Dubus and Yves-Marie Saint Drenan and
Philippe Blanc | Assessment of the Reliability of a Model's Decision by Generalizing
Attribution to the Wavelet Domain | 16 pages, 10 figures, 2 tables. v1 of the manuscript rejected from
NeurIPS 2023, mainly due to the lack of quantitative evidence of the
relevance of the proposed methodology. In the v2, we propose steps to address
this issue and also plan on expanding the insertion and deletion scores for
our method | null | null | null | cs.CV cs.AI stat.ML | http://creativecommons.org/licenses/by/4.0/ | Neural networks have shown remarkable performance in computer vision, but
their deployment in numerous scientific and technical fields is challenging due
to their black-box nature. Scientists and practitioners need to evaluate the
reliability of a decision, i.e., to know simultaneously if a model relies on
the relevant features and whether these features are robust to image
corruptions. Existing attribution methods aim to provide human-understandable
explanations by highlighting important regions in the image domain, but fail to
fully characterize a decision process's reliability. To bridge this gap, we
introduce the Wavelet sCale Attribution Method (WCAM), a generalization of
attribution from the pixel domain to the space-scale domain using wavelet
transforms. Attribution in the wavelet domain reveals where {\it and} on what
scales the model focuses, thus enabling us to assess whether a decision is
reliable.
| [
{
"version": "v1",
"created": "Wed, 24 May 2023 10:13:32 GMT"
},
{
"version": "v2",
"created": "Fri, 22 Sep 2023 16:03:50 GMT"
},
{
"version": "v3",
"created": "Thu, 5 Oct 2023 11:53:31 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Kasmi",
"Gabriel",
""
],
[
"Dubus",
"Laurent",
""
],
[
"Drenan",
"Yves-Marie Saint",
""
],
[
"Blanc",
"Philippe",
""
]
] | not_new_dataset | 0.99741 |
2305.15070 | London Lowmanstone | London Lowmanstone, Ruyuan Wan, Risako Owan, Jaehyung Kim, Dongyeop
Kang | Annotation Imputation to Individualize Predictions: Initial Studies on
Distribution Dynamics and Model Predictions | NLPerspectives - 2nd Workshop on Perspectivist Approaches to NLP, 39
pages, 13 figures, 13 tables | 2nd Workshop on Perspectivist Approaches to NLP 2023 | null | null | cs.CL | http://creativecommons.org/licenses/by-sa/4.0/ | Annotating data via crowdsourcing is time-consuming and expensive. Due to
these costs, dataset creators often have each annotator label only a small
subset of the data. This leads to sparse datasets with examples that are marked
by few annotators. The downside of this process is that if an annotator doesn't
get to label a particular example, their perspective on it is missed. This is
especially concerning for subjective NLP datasets where there is no single
correct label: people may have different valid opinions. Thus, we propose using
imputation methods to generate the opinions of all annotators for all examples,
creating a dataset that does not leave out any annotator's view. We then train
and prompt models, using data from the imputed dataset, to make predictions
about the distribution of responses and individual annotations.
In our analysis of the results, we found that the choice of imputation method
significantly impacts soft label changes and distribution. While the imputation
introduces noise in the prediction of the original dataset, it has shown
potential in enhancing shots for prompts, particularly for low-response-rate
annotators. We have made all of our code and data publicly available.
| [
{
"version": "v1",
"created": "Wed, 24 May 2023 11:54:46 GMT"
},
{
"version": "v2",
"created": "Fri, 8 Sep 2023 22:17:17 GMT"
},
{
"version": "v3",
"created": "Thu, 5 Oct 2023 07:10:25 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Lowmanstone",
"London",
""
],
[
"Wan",
"Ruyuan",
""
],
[
"Owan",
"Risako",
""
],
[
"Kim",
"Jaehyung",
""
],
[
"Kang",
"Dongyeop",
""
]
] | not_new_dataset | 0.997202 |
2305.15086 | Jong Chul Ye | Beomsu Kim, Gihyun Kwon, Kwanyoung Kim, Jong Chul Ye | Unpaired Image-to-Image Translation via Neural Schr\"odinger Bridge | null | null | null | null | cs.CV cs.AI cs.LG stat.ML | http://creativecommons.org/licenses/by/4.0/ | Diffusion models are a powerful class of generative models which simulate
stochastic differential equations (SDEs) to generate data from noise. Although
diffusion models have achieved remarkable progress in recent years, they have
limitations in the unpaired image-to-image translation tasks due to the
Gaussian prior assumption. Schr\"odinger Bridge (SB), which learns an SDE to
translate between two arbitrary distributions, has risen as an attractive
solution to this problem. However, none of the SB models so far have been
successful at unpaired translation between high-resolution images. In this
work, we propose the Unpaired Neural Schr\"odinger Bridge (UNSB), which
expresses SB problem as a sequence of adversarial learning problems. This
allows us to incorporate advanced discriminators and regularization to learn a
SB between unpaired data. We demonstrate that UNSB is scalable and successfully
solves various unpaired image-to-image translation tasks. Code:
\url{https://github.com/cyclomon/UNSB}
| [
{
"version": "v1",
"created": "Wed, 24 May 2023 12:05:24 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Oct 2023 05:12:09 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Kim",
"Beomsu",
""
],
[
"Kwon",
"Gihyun",
""
],
[
"Kim",
"Kwanyoung",
""
],
[
"Ye",
"Jong Chul",
""
]
] | not_new_dataset | 0.997238 |
2305.16102 | Xinyi Wu | Xinyi Wu, Amir Ajorlou, Zihui Wu, Ali Jadbabaie | Demystifying Oversmoothing in Attention-Based Graph Neural Networks | NeurIPS 2023 spotlight. New remarks added | null | null | null | cs.LG cs.SI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Oversmoothing in Graph Neural Networks (GNNs) refers to the phenomenon where
increasing network depth leads to homogeneous node representations. While
previous work has established that Graph Convolutional Networks (GCNs)
exponentially lose expressive power, it remains controversial whether the graph
attention mechanism can mitigate oversmoothing. In this work, we provide a
definitive answer to this question through a rigorous mathematical analysis, by
viewing attention-based GNNs as nonlinear time-varying dynamical systems and
incorporating tools and techniques from the theory of products of inhomogeneous
matrices and the joint spectral radius. We establish that, contrary to popular
belief, the graph attention mechanism cannot prevent oversmoothing and loses
expressive power exponentially. The proposed framework extends the existing
results on oversmoothing for symmetric GCNs to a significantly broader class of
GNN models, including random walk GCNs, Graph Attention Networks (GATs) and
(graph) transformers. In particular, our analysis accounts for asymmetric,
state-dependent and time-varying aggregation operators and a wide range of
common nonlinear activation functions, such as ReLU, LeakyReLU, GELU and SiLU.
| [
{
"version": "v1",
"created": "Thu, 25 May 2023 14:31:59 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Oct 2023 15:04:27 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Wu",
"Xinyi",
""
],
[
"Ajorlou",
"Amir",
""
],
[
"Wu",
"Zihui",
""
],
[
"Jadbabaie",
"Ali",
""
]
] | not_new_dataset | 0.997448 |
2305.17455 | Dachuan Shi | Dachuan Shi, Chaofan Tao, Anyi Rao, Zhendong Yang, Chun Yuan, Jiaqi
Wang | CrossGET: Cross-Guided Ensemble of Tokens for Accelerating
Vision-Language Transformers | Technical Report | null | null | null | cs.CV cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent vision-language models have achieved tremendous progress far beyond
what we ever expected. However, their computational costs are also dramatically
growing with rapid development, especially for the large models. It makes model
acceleration exceedingly critical in a scenario of limited resources. Although
extensively studied for unimodal models, the acceleration for multimodal
models, especially the vision-language Transformers, is relatively
under-explored. To pursue more efficient and accessible vision-language
Transformers, this paper introduces \textbf{Cross}-\textbf{G}uided
\textbf{E}nsemble of \textbf{T}okens (\textbf{\emph{CrossGET}}), a universal
acceleration framework for vision-language Transformers. This framework
adaptively combines tokens through real-time, cross-modal guidance, thereby
achieving substantial acceleration while keeping high performance.
\textit{CrossGET} has two key innovations: 1) \textit{Cross-Guided Matching and
Ensemble}. \textit{CrossGET} incorporates cross-modal guided token matching and
ensemble to exploit cross-modal information effectively, only introducing
cross-modal tokens with negligible extra parameters. 2) \textit{Complete-Graph
Soft Matching}. In contrast to the existing bipartite soft matching approach,
\textit{CrossGET} introduces a complete-graph soft matching policy to achieve
more reliable token-matching results while maintaining parallelizability and
high efficiency. Extensive experiments are conducted on various vision-language
tasks, including image-text retrieval, visual reasoning, image captioning, and
visual question answering. Performance on both classic multimodal architectures
and emerging multimodal LLMs demonstrates the effectiveness and versatility of
the proposed \textit{CrossGET} framework. The code will be at
\url{https://github.com/sdc17/CrossGET}.
| [
{
"version": "v1",
"created": "Sat, 27 May 2023 12:07:21 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Oct 2023 22:11:50 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Shi",
"Dachuan",
""
],
[
"Tao",
"Chaofan",
""
],
[
"Rao",
"Anyi",
""
],
[
"Yang",
"Zhendong",
""
],
[
"Yuan",
"Chun",
""
],
[
"Wang",
"Jiaqi",
""
]
] | not_new_dataset | 0.996958 |
2305.20057 | Lisha Chen | Lisha Chen, Heshan Fernando, Yiming Ying, Tianyi Chen | Three-Way Trade-Off in Multi-Objective Learning: Optimization,
Generalization and Conflict-Avoidance | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Multi-objective learning (MOL) problems often arise in emerging machine
learning applications when there are multiple learning criteria, data modalities,
or learning tasks. Different from single-objective learning, one of the
critical challenges in MOL is the potential conflict among different objectives
during the iterative optimization process. Recent works have developed various
dynamic weighting algorithms for MOL such as MGDA and its variants, where the
central idea is to find an update direction that avoids conflicts among
objectives. Albeit its appealing intuition, empirical studies show that dynamic
weighting methods may not always outperform static ones. To understand this
theory-practical gap, we focus on a new stochastic variant of MGDA - the
Multi-objective gradient with Double sampling (MoDo) algorithm, and study the
generalization performance of the dynamic weighting-based MoDo and its
interplay with optimization through the lens of algorithm stability. Perhaps
surprisingly, we find that the key rationale behind MGDA -- updating along
conflict-avoidant direction -- may hinder dynamic weighting algorithms from
achieving the optimal ${\cal O}(1/\sqrt{n})$ population risk, where $n$ is the
number of training samples. We further demonstrate the impact of the
variability of dynamic weights on the three-way trade-off among optimization,
generalization, and conflict avoidance that is unique in MOL. We showcase the
generality of our theoretical framework by analyzing other existing stochastic
MOL algorithms under the framework. Experiments on various multi-task learning
benchmarks are performed to demonstrate the practical applicability. Code is
available at https://github.com/heshandevaka/Trade-Off-MOL.
| [
{
"version": "v1",
"created": "Wed, 31 May 2023 17:31:56 GMT"
},
{
"version": "v2",
"created": "Sat, 12 Aug 2023 18:29:36 GMT"
},
{
"version": "v3",
"created": "Thu, 5 Oct 2023 17:41:06 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Chen",
"Lisha",
""
],
[
"Fernando",
"Heshan",
""
],
[
"Ying",
"Yiming",
""
],
[
"Chen",
"Tianyi",
""
]
] | not_new_dataset | 0.997448 |
2305.20062 | Matan Levy | Matan Levy, Rami Ben-Ari, Nir Darshan, Dani Lischinski | Chatting Makes Perfect: Chat-based Image Retrieval | Camera Ready version for NeurIPS 2023 | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Chats emerge as an effective user-friendly approach for information
retrieval, and are successfully employed in many domains, such as customer
service, healthcare, and finance. However, existing image retrieval approaches
typically address the case of a single query-to-image round, and the use of
chats for image retrieval has been mostly overlooked. In this work, we
introduce ChatIR: a chat-based image retrieval system that engages in a
conversation with the user to elicit information, in addition to an initial
query, in order to clarify the user's search intent. Motivated by the
capabilities of today's foundation models, we leverage Large Language Models to
generate follow-up questions to an initial image description. These questions
form a dialog with the user in order to retrieve the desired image from a large
corpus. In this study, we explore the capabilities of such a system tested on a
large dataset and reveal that engaging in a dialog yields significant gains in
image retrieval. We start by building an evaluation pipeline from an existing
manually generated dataset and explore different modules and training
strategies for ChatIR. Our comparison includes strong baselines derived from
related applications trained with Reinforcement Learning. Our system is capable
of retrieving the target image from a pool of 50K images with a success rate of
over 78% after 5 dialogue rounds, compared to 75% when questions are asked by
humans, and 64% for a single shot text-to-image retrieval. Extensive
evaluations reveal the strong capabilities and examine the limitations of
ChatIR under different settings. The project repository is available at
https://github.com/levymsn/ChatIR.
| [
{
"version": "v1",
"created": "Wed, 31 May 2023 17:38:08 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Oct 2023 16:40:02 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Levy",
"Matan",
""
],
[
"Ben-Ari",
"Rami",
""
],
[
"Darshan",
"Nir",
""
],
[
"Lischinski",
"Dani",
""
]
] | not_new_dataset | 0.997076 |
2306.00709 | Lakmal Meegahapola | Nathan Kammoun and Lakmal Meegahapola and Daniel Gatica-Perez | Understanding the Social Context of Eating with Multimodal Smartphone
Sensing: The Role of Country Diversity | 25th ACM International Conference on Multimodal Interaction (ICMI) | null | 10.1145/3577190.3614129 | null | cs.HC cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding the social context of eating is crucial for promoting healthy
eating behaviors. Multimodal smartphone sensor data could provide valuable
insights into eating behavior, particularly in mobile food diaries and mobile
health apps. However, research on the social context of eating with smartphone
sensor data is limited, despite extensive studies in nutrition and behavioral
science. Moreover, the impact of country differences on the social context of
eating, as measured by multimodal phone sensor data and self-reports, remains
under-explored. To address this research gap, our study focuses on a dataset of
approximately 24K self-reports on eating events provided by 678 college
students in eight countries to investigate the country diversity that emerges
from smartphone sensors during eating events for different social contexts
(alone or with others). Our analysis revealed that while some smartphone usage
features during eating events were similar across countries, others exhibited
unique trends in each country. We further studied how user and country-specific
factors impact social context inference by developing machine learning models
with population-level (non-personalized) and hybrid (partially personalized)
experimental setups. We showed that models based on the hybrid approach achieve
AUC scores of up to 0.75 with XGBoost. These findings emphasize the
importance of considering country differences in building and deploying machine
learning models to minimize biases and improve generalization across different
populations.
| [
{
"version": "v1",
"created": "Thu, 1 Jun 2023 14:16:59 GMT"
},
{
"version": "v2",
"created": "Sat, 12 Aug 2023 20:31:22 GMT"
},
{
"version": "v3",
"created": "Wed, 4 Oct 2023 21:50:48 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Kammoun",
"Nathan",
""
],
[
"Meegahapola",
"Lakmal",
""
],
[
"Gatica-Perez",
"Daniel",
""
]
] | not_new_dataset | 0.995858 |
2306.00966 | Hila Chefer | Hila Chefer, Oran Lang, Mor Geva, Volodymyr Polosukhin, Assaf Shocher,
Michal Irani, Inbar Mosseri, Lior Wolf | The Hidden Language of Diffusion Models | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Text-to-image diffusion models have demonstrated an unparalleled ability to
generate high-quality, diverse images from a textual prompt. However, the
internal representations learned by these models remain an enigma. In this
work, we present Conceptor, a novel method to interpret the internal
representation of a textual concept by a diffusion model. This interpretation
is obtained by decomposing the concept into a small set of human-interpretable
textual elements. Applied over the state-of-the-art Stable Diffusion model,
Conceptor reveals non-trivial structures in the representations of concepts.
For example, we find surprising visual connections between concepts, that
transcend their textual semantics. We additionally discover concepts that rely
on mixtures of exemplars, biases, renowned artistic styles, or a simultaneous
fusion of multiple meanings of the concept. Through a large battery of
experiments, we demonstrate Conceptor's ability to provide meaningful, robust,
and faithful decompositions for a wide variety of abstract, concrete, and
complex textual concepts, while naturally connecting each
decomposition element to its corresponding visual impact on the generated
images. Our code will be available at: https://hila-chefer.github.io/Conceptor/
| [
{
"version": "v1",
"created": "Thu, 1 Jun 2023 17:57:08 GMT"
},
{
"version": "v2",
"created": "Tue, 6 Jun 2023 13:16:43 GMT"
},
{
"version": "v3",
"created": "Thu, 5 Oct 2023 12:55:12 GMT"
}
] | 2023-10-06T00:00:00 | [
[
"Chefer",
"Hila",
""
],
[
"Lang",
"Oran",
""
],
[
"Geva",
"Mor",
""
],
[
"Polosukhin",
"Volodymyr",
""
],
[
"Shocher",
"Assaf",
""
],
[
"Irani",
"Michal",
""
],
[
"Mosseri",
"Inbar",
""
],
[
"Wolf",
"Lior",
""
]
] | not_new_dataset | 0.996881 |