| id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1901.06110
|
Unni V. S.
|
Unni V. S., Sanjay Ghosh and Kunal N. Chaudhury
|
Linearized ADMM and Fast Nonlocal Denoising for Efficient Plug-and-Play
Restoration
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In plug-and-play image restoration, the regularization is performed using
powerful denoisers such as nonlocal means (NLM) or BM3D. This is done within
the framework of alternating direction method of multipliers (ADMM), where the
regularization step is formally replaced by an off-the-shelf denoiser. Each
plug-and-play iteration involves the inversion of the forward model followed by
a denoising step. In this paper, we present a couple of ideas for improving the
efficiency of the inversion and denoising steps. First, we propose to use
linearized ADMM, which generally allows us to perform the inversion at a lower
cost than standard ADMM. Moreover, we can easily incorporate hard constraints
into the optimization framework as a result. Second, we develop a fast
algorithm for doubly stochastic NLM, originally proposed by Sreehari et al.
(IEEE TCI, 2016), which is about 80x faster than brute-force computation. This
particular denoiser can be expressed as the proximal map of a convex
regularizer and, as a consequence, we can guarantee convergence for linearized
plug-and-play ADMM. We demonstrate the effectiveness of our proposals for
super-resolution and single-photon imaging.
|
[
{
"created": "Fri, 18 Jan 2019 07:20:32 GMT",
"version": "v1"
}
] |
2019-01-21
|
[
[
"S.",
"Unni V.",
""
],
[
"Ghosh",
"Sanjay",
""
],
[
"Chaudhury",
"Kunal N.",
""
]
] |
In plug-and-play image restoration, the regularization is performed using powerful denoisers such as nonlocal means (NLM) or BM3D. This is done within the framework of alternating direction method of multipliers (ADMM), where the regularization step is formally replaced by an off-the-shelf denoiser. Each plug-and-play iteration involves the inversion of the forward model followed by a denoising step. In this paper, we present a couple of ideas for improving the efficiency of the inversion and denoising steps. First, we propose to use linearized ADMM, which generally allows us to perform the inversion at a lower cost than standard ADMM. Moreover, we can easily incorporate hard constraints into the optimization framework as a result. Second, we develop a fast algorithm for doubly stochastic NLM, originally proposed by Sreehari et al. (IEEE TCI, 2016), which is about 80x faster than brute-force computation. This particular denoiser can be expressed as the proximal map of a convex regularizer and, as a consequence, we can guarantee convergence for linearized plug-and-play ADMM. We demonstrate the effectiveness of our proposals for super-resolution and single-photon imaging.
|
1402.1607
|
Kangqi Liu
|
Kangqi Liu, Meixia Tao, Zhengzheng Xiang and Xin Long
|
Generalized Signal Alignment For MIMO Two-Way X Relay Channels
|
6 pages, 6 figures, to appear in IEEE ICC 2014
| null |
10.1109/ICC.2014.6884019
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study the degrees of freedom (DoF) of MIMO two-way X relay channels.
Previous work studied the case $N < 2M$, where $N$ and $M$ denote the number of
antennas at the relay and each source, respectively, and showed that the
maximum DoF of $2N$ is achievable when $N \leq \lfloor\frac{8M}{5}\rfloor$ by
applying signal alignment (SA) for network coding and interference cancelation.
This work considers the case $N>2M$ where the performance is limited by the
number of antennas at each source node and conventional SA is not feasible. We
propose a \textit{generalized signal alignment} (GSA) based transmission
scheme. The key is to let the signals to be exchanged between every source node
align in a transformed subspace, rather than the direct subspace, at the relay
so as to form network-coded signals. This is realized by jointly designing the
precoding matrices at all source nodes and the processing matrix at the relay.
Moreover, the aligned subspaces are orthogonal to each other. By applying the
GSA, we show that the DoF upper bound $4M$ is achievable when $M \leq
\lfloor\frac{2N}{5}\rfloor$ ($M$ is even) or $M \leq
\lfloor\frac{2N-1}{5}\rfloor$ ($M$ is odd). Numerical results also demonstrate
that our proposed transmission scheme is feasible and effective.
|
[
{
"created": "Fri, 7 Feb 2014 11:32:22 GMT",
"version": "v1"
}
] |
2018-02-21
|
[
[
"Liu",
"Kangqi",
""
],
[
"Tao",
"Meixia",
""
],
[
"Xiang",
"Zhengzheng",
""
],
[
"Long",
"Xin",
""
]
] |
We study the degrees of freedom (DoF) of MIMO two-way X relay channels. Previous work studied the case $N < 2M$, where $N$ and $M$ denote the number of antennas at the relay and each source, respectively, and showed that the maximum DoF of $2N$ is achievable when $N \leq \lfloor\frac{8M}{5}\rfloor$ by applying signal alignment (SA) for network coding and interference cancelation. This work considers the case $N>2M$ where the performance is limited by the number of antennas at each source node and conventional SA is not feasible. We propose a \textit{generalized signal alignment} (GSA) based transmission scheme. The key is to let the signals to be exchanged between every source node align in a transformed subspace, rather than the direct subspace, at the relay so as to form network-coded signals. This is realized by jointly designing the precoding matrices at all source nodes and the processing matrix at the relay. Moreover, the aligned subspaces are orthogonal to each other. By applying the GSA, we show that the DoF upper bound $4M$ is achievable when $M \leq \lfloor\frac{2N}{5}\rfloor$ ($M$ is even) or $M \leq \lfloor\frac{2N-1}{5}\rfloor$ ($M$ is odd). Numerical results also demonstrate that our proposed transmission scheme is feasible and effective.
|
2205.04800
|
Mikhail Panine
|
Mikhail Panine, Maxime Kirgo and Maks Ovsjanikov
|
Non-Isometric Shape Matching via Functional Maps on Landmark-Adapted
Bases
|
To appear in: Computer Graphics Forum // Main Manuscript: 15 pages
(without references), 19 figures, 4 tables // Appendix: 8 pages, 12 figures,
3 tables // Second version fixes typos, font inconsistencies and a minor sign
error
| null | null | null |
cs.CV cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a principled approach for non-isometric landmark-preserving
non-rigid shape matching. Our method is based on the functional maps framework,
but rather than promoting isometries we focus instead on near-conformal maps
that preserve landmarks exactly. We achieve this, first, by introducing a novel
landmark-adapted basis using an intrinsic Dirichlet-Steklov eigenproblem.
Second, we establish the functional decomposition of conformal maps expressed
in this basis. Finally, we formulate a conformally-invariant energy that
promotes high-quality landmark-preserving maps, and show how it can be solved
via a variant of the recently proposed ZoomOut method that we extend to our
setting. Our method is descriptor-free, efficient and robust to significant
mesh variability. We evaluate our approach on a range of benchmark datasets and
demonstrate state-of-the-art performance on non-isometric benchmarks and near
state-of-the-art performance on isometric ones.
|
[
{
"created": "Tue, 10 May 2022 11:02:14 GMT",
"version": "v1"
},
{
"created": "Wed, 22 Jun 2022 11:56:48 GMT",
"version": "v2"
}
] |
2022-06-23
|
[
[
"Panine",
"Mikhail",
""
],
[
"Kirgo",
"Maxime",
""
],
[
"Ovsjanikov",
"Maks",
""
]
] |
We propose a principled approach for non-isometric landmark-preserving non-rigid shape matching. Our method is based on the functional maps framework, but rather than promoting isometries we focus instead on near-conformal maps that preserve landmarks exactly. We achieve this, first, by introducing a novel landmark-adapted basis using an intrinsic Dirichlet-Steklov eigenproblem. Second, we establish the functional decomposition of conformal maps expressed in this basis. Finally, we formulate a conformally-invariant energy that promotes high-quality landmark-preserving maps, and show how it can be solved via a variant of the recently proposed ZoomOut method that we extend to our setting. Our method is descriptor-free, efficient and robust to significant mesh variability. We evaluate our approach on a range of benchmark datasets and demonstrate state-of-the-art performance on non-isometric benchmarks and near state-of-the-art performance on isometric ones.
|
2012.13114
|
Daniela Vianna
|
Daniela Vianna, Am\'elie Marian
|
A Frequency-Based Learning-To-Rank Approach for Personal Digital Traces
| null | null | null | null |
cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
Personal digital traces are constantly produced by connected devices,
internet services and interactions. These digital traces are typically small,
heterogeneous and stored in various locations in the cloud or on local devices,
making it a challenge for users to interact with and search their own data. By
adopting a multidimensional data model based on the six natural questions --
what, when, where, who, why and how -- to represent and unify heterogeneous
personal digital traces, we propose a learning-to-rank approach using the
state-of-the-art LambdaMART algorithm and frequency-based features that
leverage the correlation between content (what), users (who), time (when),
location (where) and data source (how) to improve the accuracy of search
results. Due to the lack of publicly available personal training data, a
combination of known-item query generation techniques and an unsupervised
ranking model (field-based BM25) is used to build our own training sets.
Experiments performed over a publicly available email collection and a personal
digital data trace collection from a real user show that the frequency-based
learning approach improves search accuracy when compared with traditional
search tools.
|
[
{
"created": "Thu, 24 Dec 2020 05:24:10 GMT",
"version": "v1"
}
] |
2020-12-25
|
[
[
"Vianna",
"Daniela",
""
],
[
"Marian",
"Amélie",
""
]
] |
Personal digital traces are constantly produced by connected devices, internet services and interactions. These digital traces are typically small, heterogeneous and stored in various locations in the cloud or on local devices, making it a challenge for users to interact with and search their own data. By adopting a multidimensional data model based on the six natural questions -- what, when, where, who, why and how -- to represent and unify heterogeneous personal digital traces, we propose a learning-to-rank approach using the state-of-the-art LambdaMART algorithm and frequency-based features that leverage the correlation between content (what), users (who), time (when), location (where) and data source (how) to improve the accuracy of search results. Due to the lack of publicly available personal training data, a combination of known-item query generation techniques and an unsupervised ranking model (field-based BM25) is used to build our own training sets. Experiments performed over a publicly available email collection and a personal digital data trace collection from a real user show that the frequency-based learning approach improves search accuracy when compared with traditional search tools.
|
2104.00798
|
Jiahao Pang
|
Haiyan Wang, Jiahao Pang, Muhammad A. Lodhi, Yingli Tian, Dong Tian
|
FESTA: Flow Estimation via Spatial-Temporal Attention for Scene Point
Clouds
|
Accepted at CVPR 2021 (Oral Presentation)
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Scene flow depicts the dynamics of a 3D scene, which is critical for various
applications such as autonomous driving, robot navigation, AR/VR, etc.
Conventionally, scene flow is estimated from dense/regular RGB video frames.
With the development of depth-sensing technologies, precise 3D measurements are
available via point clouds which have sparked new research in 3D scene flow.
Nevertheless, it remains challenging to extract scene flow from point clouds
due to the sparsity and irregularity in typical point cloud sampling patterns.
One major issue related to irregular sampling is identified as the randomness
during point set abstraction/feature extraction -- an elementary process in
many flow estimation scenarios. A novel Spatial Abstraction with Attention
(SA^2) layer is accordingly proposed to alleviate the unstable abstraction
problem. Moreover, a Temporal Abstraction with Attention (TA^2) layer is
proposed to rectify attention in the temporal domain, leading to benefits with
motions scaled in a larger range. Extensive analysis and experiments verified
the motivation and significant performance gains of our method, dubbed Flow
Estimation via Spatial-Temporal Attention (FESTA), when compared to several
state-of-the-art benchmarks of scene flow estimation.
|
[
{
"created": "Thu, 1 Apr 2021 23:04:04 GMT",
"version": "v1"
},
{
"created": "Mon, 6 Dec 2021 17:04:51 GMT",
"version": "v2"
}
] |
2021-12-07
|
[
[
"Wang",
"Haiyan",
""
],
[
"Pang",
"Jiahao",
""
],
[
"Lodhi",
"Muhammad A.",
""
],
[
"Tian",
"Yingli",
""
],
[
"Tian",
"Dong",
""
]
] |
Scene flow depicts the dynamics of a 3D scene, which is critical for various applications such as autonomous driving, robot navigation, AR/VR, etc. Conventionally, scene flow is estimated from dense/regular RGB video frames. With the development of depth-sensing technologies, precise 3D measurements are available via point clouds which have sparked new research in 3D scene flow. Nevertheless, it remains challenging to extract scene flow from point clouds due to the sparsity and irregularity in typical point cloud sampling patterns. One major issue related to irregular sampling is identified as the randomness during point set abstraction/feature extraction -- an elementary process in many flow estimation scenarios. A novel Spatial Abstraction with Attention (SA^2) layer is accordingly proposed to alleviate the unstable abstraction problem. Moreover, a Temporal Abstraction with Attention (TA^2) layer is proposed to rectify attention in the temporal domain, leading to benefits with motions scaled in a larger range. Extensive analysis and experiments verified the motivation and significant performance gains of our method, dubbed Flow Estimation via Spatial-Temporal Attention (FESTA), when compared to several state-of-the-art benchmarks of scene flow estimation.
|
0907.4488
|
Gregory Gutin
|
Gregory Gutin, Eun Jung Kim, Michael Lampis, and Valia Mitsou
|
Vertex Cover Problem Parameterized Above and Below Tight Bounds
| null | null | null | null |
cs.CC cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study the well-known Vertex Cover problem parameterized above and below
tight bounds. We show that two of the parameterizations (both were suggested by
Mahajan, Raman and Sikdar, J. Computer and System Sciences, 75(2):137--153,
2009) are fixed-parameter tractable and two other parameterizations are
W[1]-hard (one of them is, in fact, W[2]-hard).
|
[
{
"created": "Sun, 26 Jul 2009 15:02:39 GMT",
"version": "v1"
},
{
"created": "Fri, 28 Aug 2009 10:25:36 GMT",
"version": "v2"
}
] |
2009-08-28
|
[
[
"Gutin",
"Gregory",
""
],
[
"Kim",
"Eun Jung",
""
],
[
"Lampis",
"Michael",
""
],
[
"Mitsou",
"Valia",
""
]
] |
We study the well-known Vertex Cover problem parameterized above and below tight bounds. We show that two of the parameterizations (both were suggested by Mahajan, Raman and Sikdar, J. Computer and System Sciences, 75(2):137--153, 2009) are fixed-parameter tractable and two other parameterizations are W[1]-hard (one of them is, in fact, W[2]-hard).
|
1804.05901
|
Zulqarnain Khattak
|
Zulqarnain H. Khattak, Hyungjun Park, Seongah Hong, Richard Atta
Boateng, and Brian L. Smith
|
Investigating Cybersecurity Issues In Active Traffic Management Systems
|
25 pages,7 figures, Accepted for Publication in Transportation
Research Record, Journal of Transportation Research Board 2018
| null | null |
TRR Paper Number: 18-03501
|
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Active Traffic Management (ATM) systems have been introduced by
transportation agencies to manage recurrent and non-recurrent congestion. ATM
systems rely on the interconnectivity of components made possible by wired
and/or wireless networks. Unfortunately, this connectivity that supports ATM
systems also provides potential system access points that result in
vulnerability to cyberattacks. This is becoming more pronounced as ATM systems
begin to integrate internet of things (IoT) devices. Hence, there is a need to
rigorously evaluate ATM systems for cyberattack vulnerabilities, and explore
design concepts that provide stability and graceful degradation in the face of
cyberattacks. In this research, a prototype ATM system along with a real-time
cyberattack monitoring system were developed for a 1.5-mile section of I-66 in
Northern Virginia. The monitoring system detects deviation from expected
operation of an ATM system by comparing lane control states generated by the
ATM system with lane control states deemed most likely by the monitoring
system. In case of any deviation between two sets of states, the monitoring
system displays the lane control states generated by the back-up data source.
In a simulation experiment, the prototype ATM system and cyberattack monitoring
system were subject to emulated cyberattacks. The evaluation results showed
that when subject to cyberattack, the mean speed reduced by 15% compared to the
case with the ATM system and was similar to the baseline case. This illustrates
that the effectiveness of the ATM system was negated by cyberattacks. The
monitoring system, however, allowed the ATM system to revert to an expected safe
state and reduced the negative impact of cyberattacks. These results illustrate
the need to revisit ATM system design concepts as a means to protect against
cyberattacks in addition to traditional system intrusion prevention approaches.
|
[
{
"created": "Mon, 16 Apr 2018 19:23:19 GMT",
"version": "v1"
}
] |
2018-04-18
|
[
[
"Khattak",
"Zulqarnain H.",
""
],
[
"Park",
"Hyungjun",
""
],
[
"Hong",
"Seongah",
""
],
[
"Boateng",
"Richard Atta",
""
],
[
"Smith",
"Brian L.",
""
]
] |
Active Traffic Management (ATM) systems have been introduced by transportation agencies to manage recurrent and non-recurrent congestion. ATM systems rely on the interconnectivity of components made possible by wired and/or wireless networks. Unfortunately, this connectivity that supports ATM systems also provides potential system access points that result in vulnerability to cyberattacks. This is becoming more pronounced as ATM systems begin to integrate internet of things (IoT) devices. Hence, there is a need to rigorously evaluate ATM systems for cyberattack vulnerabilities, and explore design concepts that provide stability and graceful degradation in the face of cyberattacks. In this research, a prototype ATM system along with a real-time cyberattack monitoring system were developed for a 1.5-mile section of I-66 in Northern Virginia. The monitoring system detects deviation from expected operation of an ATM system by comparing lane control states generated by the ATM system with lane control states deemed most likely by the monitoring system. In case of any deviation between two sets of states, the monitoring system displays the lane control states generated by the back-up data source. In a simulation experiment, the prototype ATM system and cyberattack monitoring system were subject to emulated cyberattacks. The evaluation results showed that when subject to cyberattack, the mean speed reduced by 15% compared to the case with the ATM system and was similar to the baseline case. This illustrates that the effectiveness of the ATM system was negated by cyberattacks. The monitoring system, however, allowed the ATM system to revert to an expected safe state and reduced the negative impact of cyberattacks. These results illustrate the need to revisit ATM system design concepts as a means to protect against cyberattacks in addition to traditional system intrusion prevention approaches.
|
1702.06610
|
Markus Luczak-Roesch
|
Markus Luczak-Roesch
|
Towards an Understanding of the Effects of Augmented Reality Games on
Disaster Management
| null | null | null | null |
cs.CY cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Location-based augmented reality games have entered the mainstream with the
nearly overnight success of Niantic's Pok\'emon Go. Unlike traditional video
games, the fact that players of such games carry out actions in the external,
physical world to accomplish in-game objectives means that the large-scale
adoption of such games motivates people, en masse, to do things and go places
they would not have otherwise done in unprecedented ways. The social
implications of such mass-mobilisation of individual players are, in general,
difficult to anticipate or characterise, even for the short-term. In this work,
we focus on disaster relief, and the short- and long-term implications that a
proliferation of AR games like Pok\'emon Go, may have in disaster-prone regions
of the world. We take a distributed cognition approach and focus on one natural
disaster-prone region of New Zealand, the city of Wellington.
|
[
{
"created": "Tue, 21 Feb 2017 22:52:43 GMT",
"version": "v1"
}
] |
2017-02-23
|
[
[
"Luczak-Roesch",
"Markus",
""
]
] |
Location-based augmented reality games have entered the mainstream with the nearly overnight success of Niantic's Pok\'emon Go. Unlike traditional video games, the fact that players of such games carry out actions in the external, physical world to accomplish in-game objectives means that the large-scale adoption of such games motivates people, en masse, to do things and go places they would not have otherwise done in unprecedented ways. The social implications of such mass-mobilisation of individual players are, in general, difficult to anticipate or characterise, even for the short-term. In this work, we focus on disaster relief, and the short- and long-term implications that a proliferation of AR games like Pok\'emon Go, may have in disaster-prone regions of the world. We take a distributed cognition approach and focus on one natural disaster-prone region of New Zealand, the city of Wellington.
|
1409.2792
|
Xingqin Lin
|
Xingqin Lin and Robert W. Heath Jr. and Jeffrey G. Andrews
|
The Interplay between Massive MIMO and Underlaid D2D Networking
|
35 pages; 7 figures; submitted to IEEE Transactions on Wireless
Communications
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In a device-to-device (D2D) underlaid cellular network, the uplink spectrum
is reused by the D2D transmissions, causing mutual interference with the
ongoing cellular transmissions. Massive MIMO is appealing in such a context as
the base station's (BS's) large antenna array can nearly null the D2D-to-BS
interference. The multi-user transmission in massive MIMO, however, may lead to
increased cellular-to-D2D interference. This paper studies the interplay
between massive MIMO and underlaid D2D networking in a multi-cell setting. We
investigate cellular and D2D spectral efficiency under both perfect and
imperfect channel state information (CSI) at the receivers that employ partial
zero-forcing. Compared to the case without D2D, there is a loss in cellular
spectral efficiency due to D2D underlay. With perfect CSI, the loss can be
completely overcome if the number of canceled D2D interfering signals is scaled
with the number of BS antennas at an arbitrarily slow rate. With imperfect CSI,
in addition to pilot contamination, a new asymptotic effect termed underlay
contamination arises. In the non-asymptotic regime, simple analytical lower
bounds are derived for both the cellular and D2D spectral efficiency.
|
[
{
"created": "Tue, 9 Sep 2014 16:05:27 GMT",
"version": "v1"
},
{
"created": "Fri, 7 Nov 2014 05:51:18 GMT",
"version": "v2"
},
{
"created": "Wed, 28 Jan 2015 07:42:44 GMT",
"version": "v3"
}
] |
2015-01-29
|
[
[
"Lin",
"Xingqin",
""
],
[
"Heath",
"Robert W.",
"Jr."
],
[
"Andrews",
"Jeffrey G.",
""
]
] |
In a device-to-device (D2D) underlaid cellular network, the uplink spectrum is reused by the D2D transmissions, causing mutual interference with the ongoing cellular transmissions. Massive MIMO is appealing in such a context as the base station's (BS's) large antenna array can nearly null the D2D-to-BS interference. The multi-user transmission in massive MIMO, however, may lead to increased cellular-to-D2D interference. This paper studies the interplay between massive MIMO and underlaid D2D networking in a multi-cell setting. We investigate cellular and D2D spectral efficiency under both perfect and imperfect channel state information (CSI) at the receivers that employ partial zero-forcing. Compared to the case without D2D, there is a loss in cellular spectral efficiency due to D2D underlay. With perfect CSI, the loss can be completely overcome if the number of canceled D2D interfering signals is scaled with the number of BS antennas at an arbitrarily slow rate. With imperfect CSI, in addition to pilot contamination, a new asymptotic effect termed underlay contamination arises. In the non-asymptotic regime, simple analytical lower bounds are derived for both the cellular and D2D spectral efficiency.
|
2210.06696
|
Huize Li
|
Huize Li, Hai Jin, Long Zheng, Yu Huang, Xiaofei Liao, Dan Chen,
Zhuohui Duan, Cong Liu, Jiahong Xu, Chuanyi Gui
|
CPSAA: Accelerating Sparse Attention using Crossbar-based
Processing-In-Memory Architecture
|
14 pages, 19 figures
| null | null | null |
cs.AR cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The attention mechanism requires huge computational efforts to process
unnecessary calculations, significantly limiting the system's performance.
Researchers propose sparse attention to convert some DDMM operations to SDDMM
and SpMM operations. However, current sparse attention solutions introduce
massive off-chip random memory access. We propose CPSAA, a novel crossbar-based
PIM-featured sparse attention accelerator. First, we present a novel attention
calculation mode. Second, we design a novel PIM-based sparsity pruning
architecture. Finally, we present novel crossbar-based methods. Experimental
results show that CPSAA has an average of 89.6X, 32.2X, 17.8X, 3.39X, and 3.84X
performance improvement and 755.6X, 55.3X, 21.3X, 5.7X, and 4.9X energy-saving
when compared with GPU, FPGA, SANGER, ReBERT, and ReTransformer.
|
[
{
"created": "Thu, 13 Oct 2022 03:20:11 GMT",
"version": "v1"
},
{
"created": "Sat, 7 Oct 2023 10:27:43 GMT",
"version": "v2"
}
] |
2023-10-10
|
[
[
"Li",
"Huize",
""
],
[
"Jin",
"Hai",
""
],
[
"Zheng",
"Long",
""
],
[
"Huang",
"Yu",
""
],
[
"Liao",
"Xiaofei",
""
],
[
"Chen",
"Dan",
""
],
[
"Duan",
"Zhuohui",
""
],
[
"Liu",
"Cong",
""
],
[
"Xu",
"Jiahong",
""
],
[
"Gui",
"Chuanyi",
""
]
] |
The attention mechanism requires huge computational efforts to process unnecessary calculations, significantly limiting the system's performance. Researchers propose sparse attention to convert some DDMM operations to SDDMM and SpMM operations. However, current sparse attention solutions introduce massive off-chip random memory access. We propose CPSAA, a novel crossbar-based PIM-featured sparse attention accelerator. First, we present a novel attention calculation mode. Second, we design a novel PIM-based sparsity pruning architecture. Finally, we present novel crossbar-based methods. Experimental results show that CPSAA has an average of 89.6X, 32.2X, 17.8X, 3.39X, and 3.84X performance improvement and 755.6X, 55.3X, 21.3X, 5.7X, and 4.9X energy-saving when compared with GPU, FPGA, SANGER, ReBERT, and ReTransformer.
|
1604.07090
|
Dingwen Zhang
|
Dingwen Zhang, Huazhu Fu, Junwei Han, Ali Borji, Xuelong Li
|
A Review of Co-saliency Detection Technique: Fundamentals, Applications,
and Challenges
|
28 pages, 12 figures, 3 tables
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Co-saliency detection is a newly emerging and rapidly growing research area
in the computer vision community. As a novel branch of visual saliency, co-saliency
detection refers to the discovery of common and salient foregrounds from two or
more relevant images, and can be widely used in many computer vision tasks. The
existing co-saliency detection algorithms mainly consist of three components:
extracting effective features to represent the image regions, exploring the
informative cues or factors to characterize co-saliency, and designing
effective computational frameworks to formulate co-saliency. Although numerous
methods have been developed, the literature is still lacking a deep review and
evaluation of co-saliency detection techniques. In this paper, we aim at
providing a comprehensive review of the fundamentals, challenges, and
applications of co-saliency detection. Specifically, we provide an overview of
some related computer vision works, review the history of co-saliency
detection, summarize and categorize the major algorithms in this research area,
discuss some open issues in this area, present the potential applications of
co-saliency detection, and finally point out some unsolved challenges and
promising future works. We expect this review to be beneficial to both fresh
and senior researchers in this field, and give insights to researchers in other
related areas regarding the utility of co-saliency detection algorithms.
|
[
{
"created": "Sun, 24 Apr 2016 22:36:38 GMT",
"version": "v1"
},
{
"created": "Mon, 16 May 2016 14:09:33 GMT",
"version": "v2"
},
{
"created": "Mon, 30 Jan 2017 20:55:33 GMT",
"version": "v3"
},
{
"created": "Mon, 22 May 2017 13:36:24 GMT",
"version": "v4"
},
{
"created": "Sun, 9 Jul 2017 15:44:10 GMT",
"version": "v5"
}
] |
2017-07-11
|
[
[
"Zhang",
"Dingwen",
""
],
[
"Fu",
"Huazhu",
""
],
[
"Han",
"Junwei",
""
],
[
"Borji",
"Ali",
""
],
[
"Li",
"Xuelong",
""
]
] |
Co-saliency detection is a newly emerging and rapidly growing research area in the computer vision community. As a novel branch of visual saliency, co-saliency detection refers to the discovery of common and salient foregrounds from two or more relevant images, and can be widely used in many computer vision tasks. The existing co-saliency detection algorithms mainly consist of three components: extracting effective features to represent the image regions, exploring the informative cues or factors to characterize co-saliency, and designing effective computational frameworks to formulate co-saliency. Although numerous methods have been developed, the literature is still lacking a deep review and evaluation of co-saliency detection techniques. In this paper, we aim at providing a comprehensive review of the fundamentals, challenges, and applications of co-saliency detection. Specifically, we provide an overview of some related computer vision works, review the history of co-saliency detection, summarize and categorize the major algorithms in this research area, discuss some open issues in this area, present the potential applications of co-saliency detection, and finally point out some unsolved challenges and promising future works. We expect this review to be beneficial to both fresh and senior researchers in this field, and give insights to researchers in other related areas regarding the utility of co-saliency detection algorithms.
|
1111.0862
|
Emmanuel Filiot
|
Emmanuel Filiot (Universit\'e Libre de Bruxelles), Raffaella Gentilini
(Universit\`a degli Studi di Perugia), Jean-Fran\c{c}ois Raskin
(Universit\'e Libre de Bruxelles)
|
Quantitative Languages Defined by Functional Automata
|
32 pages, extended version of CONCUR'12
|
Logical Methods in Computer Science, Volume 11, Issue 3 (September
17, 2015) lmcs:1590
|
10.2168/LMCS-11(3:14)2015
| null |
cs.FL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A weighted automaton is functional if any two accepting runs on the same
finite word have the same value. In this paper, we investigate functional
weighted automata for four different measures: the sum, the mean, the
discounted sum of weights along edges and the ratio between rewards and costs.
On the positive side, we show that functionality is decidable for the four
measures. Furthermore, the existential and universal threshold problems, the
language inclusion problem and the equivalence problem are all decidable when
the weighted automata are functional. On the negative side, we also study the
quantitative extension of the realizability problem and show that it is
undecidable for sum, mean and ratio. We finally show how to decide whether the
language associated with a given functional automaton can be defined with a
deterministic one, for sum, mean and discounted sum. The results on
functionality and determinizability are expressed for the more general class of
functional group automata. This allows one to formulate within the same
framework new results related to discounted sum automata and known results on
sum and mean automata. Ratio automata do not fit within this general scheme and
different techniques are required to decide functionality.
|
[
{
"created": "Sat, 24 Sep 2011 06:29:29 GMT",
"version": "v1"
},
{
"created": "Wed, 30 Nov 2011 16:15:01 GMT",
"version": "v2"
},
{
"created": "Tue, 7 Jul 2015 14:19:16 GMT",
"version": "v3"
},
{
"created": "Wed, 16 Sep 2015 19:39:29 GMT",
"version": "v4"
}
] |
2017-01-11
|
[
[
"Filiot",
"Emmanuel",
"",
"Université Libre de Bruxelles"
],
[
"Gentilini",
"Raffaella",
"",
"Università degli Studi di Perugia"
],
[
"Raskin",
"Jean-François",
"",
"Université Libre de Bruxelles"
]
] |
A weighted automaton is functional if any two accepting runs on the same finite word have the same value. In this paper, we investigate functional weighted automata for four different measures: the sum, the mean, the discounted sum of weights along edges and the ratio between rewards and costs. On the positive side, we show that functionality is decidable for the four measures. Furthermore, the existential and universal threshold problems, the language inclusion problem and the equivalence problem are all decidable when the weighted automata are functional. On the negative side, we also study the quantitative extension of the realizability problem and show that it is undecidable for sum, mean and ratio. We finally show how to decide whether the language associated with a given functional automaton can be defined with a deterministic one, for sum, mean and discounted sum. The results on functionality and determinizability are expressed for the more general class of functional group automata. This allows one to formulate within the same framework new results related to discounted sum automata and known results on sum and mean automata. Ratio automata do not fit within this general scheme and different techniques are required to decide functionality.
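The four run-value measures named in the abstract can be made concrete with a small sketch (our own illustration, not code from the paper), evaluating each measure over the weights along a single accepting run:

```python
# Illustrative only: the four measures for a functional weighted automaton,
# computed over the edge weights of one accepting run.

def sum_value(weights):
    return sum(weights)

def mean_value(weights):
    return sum(weights) / len(weights)

def discounted_sum(weights, lam):
    # sum_i lam^i * w_i, with discount factor 0 < lam < 1
    return sum((lam ** i) * w for i, w in enumerate(weights))

def ratio_value(rewards, costs):
    # ratio between accumulated rewards and accumulated costs
    return sum(rewards) / sum(costs)

run = [2, 4, 6]                      # weights along the edges of a run
print(sum_value(run))                # 12
print(mean_value(run))               # 4.0
print(discounted_sum(run, 0.5))      # 2 + 2 + 1.5 = 5.5
print(ratio_value([3, 3], [1, 2]))   # 6 / 3 = 2.0
```

Functionality means every accepting run on the same word yields the same such value, whichever measure is in force.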
|
1802.02254
|
Ping Zhang
|
Ping Zhang, Zhifeng Bao, Yuchen Li, Guoliang Li, Yipeng Zhang, Zhiyong
Peng
|
Trajectory-driven Influential Billboard Placement
| null | null |
10.1145/3219819.3219946
| null |
cs.SI cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we propose and study the problem of trajectory-driven
influential billboard placement: given a set of billboards $U$ (each with a
location and a cost), a database of trajectories $\mathcal{T}$ and a budget
$L$, find a set of billboards within the budget to influence the largest number
of trajectories. One core challenge is to identify and reduce the overlap of
the influence from different billboards to the same trajectories, while taking
the budget constraint into consideration. We show that this problem is NP-hard
and present an enumeration-based algorithm with a $(1-1/e)$ approximation
ratio. However, the enumeration becomes very costly when $|U|$ is large. By
exploiting the locality property of billboards' influence, we propose a
partition-based framework PartSel. PartSel partitions $U$ into a set of small
clusters, computes the locally influential billboards for each cluster, and
merges them to generate the global solution. Since the local solutions can be
obtained much more efficient than the global one, PartSel should reduce the
computation cost greatly; meanwhile it achieves a non-trivial approximation
ratio guarantee. Then we propose a LazyProbe method to further prune billboards
with low marginal influence, while achieving the same approximation ratio as
PartSel. Experiments on real datasets verify the efficiency and effectiveness
of our methods.
|
[
{
"created": "Tue, 6 Feb 2018 23:05:02 GMT",
"version": "v1"
},
{
"created": "Wed, 30 May 2018 11:13:46 GMT",
"version": "v2"
},
{
"created": "Sun, 9 Sep 2018 14:43:37 GMT",
"version": "v3"
},
{
"created": "Sat, 15 Sep 2018 07:43:40 GMT",
"version": "v4"
}
] |
2018-09-18
|
[
[
"Zhang",
"Ping",
""
],
[
"Bao",
"Zhifeng",
""
],
[
"Li",
"Yuchen",
""
],
[
"Li",
"Guoliang",
""
],
[
"Zhang",
"Yipeng",
""
],
[
"Peng",
"Zhiyong",
""
]
] |
In this paper we propose and study the problem of trajectory-driven influential billboard placement: given a set of billboards $U$ (each with a location and a cost), a database of trajectories $\mathcal{T}$ and a budget $L$, find a set of billboards within the budget to influence the largest number of trajectories. One core challenge is to identify and reduce the overlap of the influence from different billboards to the same trajectories, while taking the budget constraint into consideration. We show that this problem is NP-hard and present an enumeration-based algorithm with a $(1-1/e)$ approximation ratio. However, the enumeration becomes very costly when $|U|$ is large. By exploiting the locality property of billboards' influence, we propose a partition-based framework PartSel. PartSel partitions $U$ into a set of small clusters, computes the locally influential billboards for each cluster, and merges them to generate the global solution. Since the local solutions can be obtained much more efficiently than the global one, PartSel reduces the computation cost greatly; meanwhile it achieves a non-trivial approximation ratio guarantee. Then we propose a LazyProbe method to further prune billboards with low marginal influence, while achieving the same approximation ratio as PartSel. Experiments on real datasets verify the efficiency and effectiveness of our methods.
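The budgeted coverage objective underlying the abstract can be sketched with a plain cost-aware greedy (a hypothetical illustration of the objective only; the paper's actual algorithms, enumeration, PartSel, and LazyProbe, are more involved):

```python
# Each billboard has a cost and the set of trajectories it influences; we
# greedily add the billboard with the best marginal influence per unit cost
# until nothing more fits in the budget.

def greedy_placement(billboards, budget):
    # billboards: dict name -> (cost, set of trajectory ids)
    chosen, covered, spent = [], set(), 0
    while True:
        best, best_gain = None, 0.0
        for name, (cost, trajs) in billboards.items():
            if name in chosen or spent + cost > budget:
                continue
            gain = len(trajs - covered) / cost   # marginal influence per cost
            if gain > best_gain:
                best, best_gain = name, gain
        if best is None:
            return chosen, covered
        chosen.append(best)
        cost, trajs = billboards[best]
        spent += cost
        covered |= trajs

ads = {
    "b1": (2, {"t1", "t2", "t3"}),
    "b2": (1, {"t3"}),
    "b3": (2, {"t4", "t5"}),
}
picked, influenced = greedy_placement(ads, budget=4)
print(picked, len(influenced))   # ['b1', 'b3'] 5
```

The `gain` computation makes the overlap problem from the abstract visible: `b2` influences only `t3`, which `b1` already covers, so its marginal gain is zero.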
|
2211.00301
|
Enwei Zhu
|
Enwei Zhu, Yiyang Liu, Ming Jin, Jinpeng Li
|
Recognizing Nested Entities from Flat Supervision: A New NER Subtask,
Feasibility and Challenges
| null | null | null | null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Many recent named entity recognition (NER) studies criticize flat NER for its
non-overlapping assumption, and switch to investigating nested NER. However,
existing nested NER models heavily rely on training data annotated with nested
entities, while labeling such data is costly. This study proposes a new
subtask, nested-from-flat NER, which corresponds to a realistic application
scenario: given data annotated with flat entities only, one may still desire
the trained model to be capable of recognizing nested entities. To address this
task,
we train span-based models and deliberately ignore the spans nested inside
labeled entities, since these spans are possibly unlabeled entities. With
nested entities removed from the training data, our model achieves 54.8%, 54.2%
and 41.1% F1 scores on the subset of spans within entities on ACE 2004, ACE
2005 and GENIA, respectively. This suggests the effectiveness of our approach
and the feasibility of the task. In addition, the model's performance on flat
entities is entirely unaffected. We further manually annotate the nested
entities in the test set of CoNLL 2003, creating a nested-from-flat NER
benchmark. Analysis results show that the main challenges stem from the data
and annotation inconsistencies between the flat and nested entities.
|
[
{
"created": "Tue, 1 Nov 2022 06:41:42 GMT",
"version": "v1"
}
] |
2022-11-02
|
[
[
"Zhu",
"Enwei",
""
],
[
"Liu",
"Yiyang",
""
],
[
"Jin",
"Ming",
""
],
[
"Li",
"Jinpeng",
""
]
] |
Many recent named entity recognition (NER) studies criticize flat NER for its non-overlapping assumption, and switch to investigating nested NER. However, existing nested NER models heavily rely on training data annotated with nested entities, while labeling such data is costly. This study proposes a new subtask, nested-from-flat NER, which corresponds to a realistic application scenario: given data annotated with flat entities only, one may still desire the trained model to be capable of recognizing nested entities. To address this task, we train span-based models and deliberately ignore the spans nested inside labeled entities, since these spans are possibly unlabeled entities. With nested entities removed from the training data, our model achieves 54.8%, 54.2% and 41.1% F1 scores on the subset of spans within entities on ACE 2004, ACE 2005 and GENIA, respectively. This suggests the effectiveness of our approach and the feasibility of the task. In addition, the model's performance on flat entities is entirely unaffected. We further manually annotate the nested entities in the test set of CoNLL 2003, creating a nested-from-flat NER benchmark. Analysis results show that the main challenges stem from the data and annotation inconsistencies between the flat and nested entities.
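The training-signal construction described in the abstract can be sketched schematically (the details below are our assumptions, not the authors' code): spans nested inside a labelled flat entity are ignored rather than treated as negatives, since they may be unlabelled nested entities.

```python
def span_labels(n_tokens, gold, max_len=4):
    # gold: set of (start, end) flat-entity spans, end exclusive
    labels = {}
    for i in range(n_tokens):
        for j in range(i + 1, min(i + 1 + max_len, n_tokens + 1)):
            span = (i, j)
            if span in gold:
                labels[span] = 1          # labelled flat entity: positive
            elif any(s <= i and j <= e and (i, j) != (s, e) for s, e in gold):
                labels[span] = None       # nested inside gold: ignored
            else:
                labels[span] = 0          # negative
    return labels

L = span_labels(5, gold={(0, 3)})
print(L[(0, 3)], L[(1, 2)], L[(3, 5)])   # 1 None 0
```

A span-based classifier trained on these labels simply skips the `None` entries, which is what lets the model remain agnostic about potential nested entities at training time.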
|
1910.00868
|
Joan Boyar
|
Joan Boyar, Kim S. Larsen, Denis Pankratov
|
Advice Complexity of Adaptive Priority Algorithms
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The priority model was introduced to capture "greedy-like" algorithms.
Motivated by the success of advice complexity in the area of online algorithms,
the fixed priority model was extended to include advice, and a reduction-based
framework was developed for proving lower bounds on the amount of advice
required to achieve certain approximation ratios in this rather powerful model.
To capture most of the algorithms that are considered greedy-like, the even
stronger model of adaptive priority algorithms is needed. We extend the
adaptive priority model to include advice. We modify the reduction-based
framework from the fixed priority case to work with the more powerful adaptive
priority algorithms, simplifying the proof of correctness and strengthening all
previous lower bounds by a factor of two in the process.
We also present a purely combinatorial adaptive priority algorithm with
advice for Minimum Vertex Cover on triangle-free graphs of maximum degree
three. Our algorithm achieves optimality and uses at most 7n/22 bits of advice.
No adaptive priority algorithm can achieve optimality without advice, and we
prove that an online algorithm with advice needs more than 7n/22
bits of advice to reach optimality.
We show connections between exact algorithms and priority algorithms with
advice. The branching in branch-and-reduce algorithms can be seen as trying all
possible advice strings, and all priority algorithms with advice that achieve
optimality define corresponding exact algorithms, priority exact algorithms.
Lower bounds on advice-based adaptive algorithms imply lower bounds on running
times of exact algorithms designed in this way.
|
[
{
"created": "Wed, 2 Oct 2019 10:37:28 GMT",
"version": "v1"
},
{
"created": "Mon, 9 Nov 2020 07:34:12 GMT",
"version": "v2"
},
{
"created": "Mon, 12 Jul 2021 10:43:06 GMT",
"version": "v3"
},
{
"created": "Wed, 26 Jan 2022 09:03:43 GMT",
"version": "v4"
}
] |
2022-01-27
|
[
[
"Boyar",
"Joan",
""
],
[
"Larsen",
"Kim S.",
""
],
[
"Pankratov",
"Denis",
""
]
] |
The priority model was introduced to capture "greedy-like" algorithms. Motivated by the success of advice complexity in the area of online algorithms, the fixed priority model was extended to include advice, and a reduction-based framework was developed for proving lower bounds on the amount of advice required to achieve certain approximation ratios in this rather powerful model. To capture most of the algorithms that are considered greedy-like, the even stronger model of adaptive priority algorithms is needed. We extend the adaptive priority model to include advice. We modify the reduction-based framework from the fixed priority case to work with the more powerful adaptive priority algorithms, simplifying the proof of correctness and strengthening all previous lower bounds by a factor of two in the process. We also present a purely combinatorial adaptive priority algorithm with advice for Minimum Vertex Cover on triangle-free graphs of maximum degree three. Our algorithm achieves optimality and uses at most 7n/22 bits of advice. No adaptive priority algorithm can achieve optimality without advice, and we prove that an online algorithm with advice needs more than 7n/22 bits of advice to reach optimality. We show connections between exact algorithms and priority algorithms with advice. The branching in branch-and-reduce algorithms can be seen as trying all possible advice strings, and all priority algorithms with advice that achieve optimality define corresponding exact algorithms, priority exact algorithms. Lower bounds on advice-based adaptive algorithms imply lower bounds on running times of exact algorithms designed in this way.
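The stated connection between branching and advice can be made tangible with a toy construction of our own (not the paper's algorithm): a priority algorithm that consumes one advice bit per vertex, turned into an exact algorithm by trying all advice strings, mirroring branching in branch-and-reduce.

```python
from itertools import product

def priority_vc(edges, n, advice):
    # One advice bit per vertex: take it into the cover or not.
    cover = {v for v in range(n) if advice[v]}
    ok = all(u in cover or v in cover for u, v in edges)
    return cover if ok else None

def exact_min_vc(edges, n):
    # "Priority exact algorithm": enumerate all 2^n advice strings and keep
    # the best feasible outcome the advised algorithm produces.
    best = None
    for advice in product([0, 1], repeat=n):
        cover = priority_vc(edges, n, advice)
        if cover is not None and (best is None or len(cover) < len(best)):
            best = cover
    return best

path = [(0, 1), (1, 2), (2, 3)]          # a path on 4 vertices
print(len(exact_min_vc(path, 4)))        # 2, a minimum vertex cover
```

The running time here is 2^n times the per-run cost, which is exactly why advice lower bounds translate into running-time lower bounds for exact algorithms built this way.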
|
2202.03393
|
Francisco Valente
|
Francisco Valente
|
Link Prediction of Artificial Intelligence Concepts using Low
Computational Power
|
Solution awarded a special prize in the Science4cast 2021
competition. Presented and published in the IEEE Big Data 2021 conference.
Minor text improvements and typos corrected from the published version
|
2021 IEEE International Conference on Big Data (Big Data),
5828-5832, 2021
|
10.1109/BigData52589.2021.9671719
| null |
cs.SI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents an approach proposed for the Science4cast 2021
competition, organized by the Institute of Advanced Research in Artificial
Intelligence, whose main goal was to predict the likelihood of future
associations between machine learning concepts in a semantic network. The
developed methodology targets a scenario where only low computational power is
available, extracting low-order topological features and incorporating them in
an optimized classifier to estimate the degree of future connections between
the nodes. The reasons that motivated the developed methodology are discussed,
along with results, limitations, and suggestions for improvement.
|
[
{
"created": "Mon, 7 Feb 2022 18:32:02 GMT",
"version": "v1"
}
] |
2022-02-08
|
[
[
"Valente",
"Francisco",
""
]
] |
This paper presents an approach proposed for the Science4cast 2021 competition, organized by the Institute of Advanced Research in Artificial Intelligence, whose main goal was to predict the likelihood of future associations between machine learning concepts in a semantic network. The developed methodology targets a scenario where only low computational power is available, extracting low-order topological features and incorporating them in an optimized classifier to estimate the degree of future connections between the nodes. The reasons that motivated the developed methodology are discussed, along with results, limitations, and suggestions for improvement.
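Typical low-order topological features of the kind the abstract alludes to (the exact feature set used in the competition entry is not specified here, so this is an assumption) include common-neighbour counts and Jaccard similarity, which a lightweight classifier can then consume:

```python
def neighbours(edges, node):
    # Undirected neighbourhood of a node in an edge list.
    return ({b for a, b in edges if a == node}
            | {a for a, b in edges if b == node})

def link_features(edges, u, v):
    # Two cheap topological features for the candidate link (u, v).
    nu, nv = neighbours(edges, u), neighbours(edges, v)
    common = len(nu & nv)
    union = len(nu | nv)
    jaccard = common / union if union else 0.0
    return common, jaccard

graph = [("gan", "cnn"), ("gan", "vae"), ("cnn", "vae"), ("cnn", "rnn")]
print(link_features(graph, "gan", "rnn"))   # (1, 0.5)
```

Features like these cost only set operations per candidate pair, which is what makes the low-compute setting feasible.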
|
1709.05254
|
Marco Schreyer
|
Marco Schreyer, Timur Sattarov, Damian Borth, Andreas Dengel and Bernd
Reimer
|
Detection of Anomalies in Large Scale Accounting Data using Deep
Autoencoder Networks
|
19 pages, 6 figures, 3 tables
| null | null | null |
cs.LG cs.CE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Learning to detect fraud in large-scale accounting data is one of the
long-standing challenges in financial statement audits or fraud investigations.
Nowadays, the majority of applied techniques rely on handcrafted rules derived
from known fraud scenarios. While fairly successful, these rules exhibit the
drawback that they often fail to generalize beyond known fraud scenarios and
fraudsters gradually find ways to circumvent them. To overcome this
disadvantage and inspired by the recent success of deep learning we propose the
application of deep autoencoder neural networks to detect anomalous journal
entries. We demonstrate that the trained network's reconstruction error
obtainable for a journal entry and regularized by the entry's individual
attribute probabilities can be interpreted as a highly adaptive anomaly
assessment. Experiments on two real-world datasets of journal entries show the
effectiveness of the approach, resulting in high f1-scores of 32.93 (dataset A)
and 16.95 (dataset B) and fewer false positive alerts compared to
state-of-the-art baseline methods. Initial feedback received by chartered
accountants and
fraud examiners underpinned the quality of the approach in capturing highly
relevant accounting anomalies.
|
[
{
"created": "Fri, 15 Sep 2017 15:07:29 GMT",
"version": "v1"
},
{
"created": "Wed, 1 Aug 2018 15:47:52 GMT",
"version": "v2"
}
] |
2018-08-02
|
[
[
"Schreyer",
"Marco",
""
],
[
"Sattarov",
"Timur",
""
],
[
"Borth",
"Damian",
""
],
[
"Dengel",
"Andreas",
""
],
[
"Reimer",
"Bernd",
""
]
] |
Learning to detect fraud in large-scale accounting data is one of the long-standing challenges in financial statement audits or fraud investigations. Nowadays, the majority of applied techniques rely on handcrafted rules derived from known fraud scenarios. While fairly successful, these rules exhibit the drawback that they often fail to generalize beyond known fraud scenarios and fraudsters gradually find ways to circumvent them. To overcome this disadvantage, and inspired by the recent success of deep learning, we propose the application of deep autoencoder neural networks to detect anomalous journal entries. We demonstrate that the trained network's reconstruction error obtainable for a journal entry and regularized by the entry's individual attribute probabilities can be interpreted as a highly adaptive anomaly assessment. Experiments on two real-world datasets of journal entries show the effectiveness of the approach, resulting in high f1-scores of 32.93 (dataset A) and 16.95 (dataset B) and fewer false positive alerts compared to state-of-the-art baseline methods. Initial feedback received by chartered accountants and fraud examiners underpinned the quality of the approach in capturing highly relevant accounting anomalies.
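The anomaly score itself, a per-record reconstruction error, can be illustrated without a deep network: the sketch below (our own, with placeholder data; the paper uses deep autoencoders and attribute-probability regularization) uses a linear "autoencoder", i.e. a rank-2 SVD reconstruction, and scores records by how badly they reconstruct.

```python
import numpy as np

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 8))      # stand-in journal-entry features

# Linear encode/decode: project onto the top-2 principal directions.
mu = X_train.mean(axis=0)
_, _, Vt = np.linalg.svd(X_train - mu, full_matrices=False)
P = Vt[:2]

def anomaly_score(x):
    x_hat = mu + (x - mu) @ P.T @ P      # reconstruction of the record
    return float(np.linalg.norm(x - x_hat))

normal_entry = rng.normal(size=8)
odd_entry = normal_entry + 10.0          # shifted far off the learned manifold
print(anomaly_score(odd_entry) > anomaly_score(normal_entry))   # True
```

Records that resemble the training distribution reconstruct well and score low; an entry far from the learned manifold carries a large residual, which is the adaptive anomaly assessment the abstract describes.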
|
2011.02172
|
Naoki Kato
|
Naoki Kato, Hiroto Honda, Yusuke Uchida
|
Leveraging Temporal Joint Depths for Improving 3D Human Pose Estimation
in Video
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Approaches that predict 3D poses from the 2D poses estimated in each frame of
a video have proven effective for 3D human pose estimation. However, 2D poses
lacking the appearance information of persons are highly ambiguous with
respect to the joint depths. In this paper, we propose to
estimate a 3D pose in each frame of a video and refine it considering temporal
information. The proposed approach reduces the ambiguity of the joint depths
and improves the 3D pose estimation accuracy.
|
[
{
"created": "Wed, 4 Nov 2020 08:23:41 GMT",
"version": "v1"
}
] |
2020-11-05
|
[
[
"Kato",
"Naoki",
""
],
[
"Honda",
"Hiroto",
""
],
[
"Uchida",
"Yusuke",
""
]
] |
Approaches that predict 3D poses from the 2D poses estimated in each frame of a video have proven effective for 3D human pose estimation. However, 2D poses lacking the appearance information of persons are highly ambiguous with respect to the joint depths. In this paper, we propose to estimate a 3D pose in each frame of a video and refine it considering temporal information. The proposed approach reduces the ambiguity of the joint depths and improves the 3D pose estimation accuracy.
|
2312.10669
|
Kevin Putra Santoso
|
K. P. Santoso, F. A. Madany, H. Suryotrisongko
|
Analisis Eksploratif Dan Augmentasi Data NSL-KDD Menggunakan Deep
Generative Adversarial Networks Untuk Meningkatkan Performa Algoritma Extreme
Gradient Boosting Dalam Klasifikasi Jenis Serangan Siber
|
in Indonesian language
| null | null | null |
cs.CR cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
This study proposes the implementation of Deep Generative Adversarial
Networks (GANs) for augmenting the NSL-KDD dataset. The primary objective is to
enhance the efficacy of eXtreme Gradient Boosting (XGBoost) in the
classification of cyber-attacks on the NSL-KDD dataset. As a result, the method
proposed in this research achieved an accuracy of 99.53% using the XGBoost
model without data augmentation with GAN, and 99.78% with data augmentation
using GAN.
|
[
{
"created": "Sun, 17 Dec 2023 09:54:07 GMT",
"version": "v1"
}
] |
2023-12-19
|
[
[
"Santoso",
"K. P.",
""
],
[
"Madany",
"F. A.",
""
],
[
"Suryotrisongko",
"H.",
""
]
] |
This study proposes the implementation of Deep Generative Adversarial Networks (GANs) for augmenting the NSL-KDD dataset. The primary objective is to enhance the efficacy of eXtreme Gradient Boosting (XGBoost) in the classification of cyber-attacks on the NSL-KDD dataset. As a result, the method proposed in this research achieved an accuracy of 99.53% using the XGBoost model without data augmentation with GAN, and 99.78% with data augmentation using GAN.
|
2405.20083
|
Philipp G. Haselwarter
|
Philipp G. Haselwarter, Kwing Hei Li, Markus de Medeiros, Simon
Oddershede Gregersen, Alejandro Aguirre, Joseph Tassarotti, Lars Birkedal
|
Tachis: Higher-Order Separation Logic with Credits for Expected Costs
| null | null | null | null |
cs.LO cs.PL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We present Tachis, a higher-order separation logic to reason about the
expected cost of probabilistic programs. Inspired by the uses of time credits
for reasoning about the running time of deterministic programs, we introduce a
novel notion of probabilistic cost credit. Probabilistic cost credits are a
separation logic resource that can be used to pay for the cost of operations in
programs, and that can be distributed across all possible branches of sampling
instructions according to their weight, thus enabling us to reason about
expected cost. The representation of cost credits as separation logic resources
gives Tachis a great deal of flexibility and expressivity. In particular, it
permits reasoning about amortized expected cost by storing excess credits as
potential into data structures to pay for future operations. Tachis further
supports a range of cost models, including running time and entropy usage. We
showcase the versatility of this approach by applying our techniques to prove
upper bounds on the expected cost of a variety of probabilistic algorithms and
data structures, including randomized quicksort, hash tables, and meldable
heaps.
All of our results have been mechanized using Coq, Iris, and the Coquelicot
real analysis library.
|
[
{
"created": "Thu, 30 May 2024 14:12:00 GMT",
"version": "v1"
}
] |
2024-05-31
|
[
[
"Haselwarter",
"Philipp G.",
""
],
[
"Li",
"Kwing Hei",
""
],
[
"de Medeiros",
"Markus",
""
],
[
"Gregersen",
"Simon Oddershede",
""
],
[
"Aguirre",
"Alejandro",
""
],
[
"Tassarotti",
"Joseph",
""
],
[
"Birkedal",
"Lars",
""
]
] |
We present Tachis, a higher-order separation logic to reason about the expected cost of probabilistic programs. Inspired by the uses of time credits for reasoning about the running time of deterministic programs, we introduce a novel notion of probabilistic cost credit. Probabilistic cost credits are a separation logic resource that can be used to pay for the cost of operations in programs, and that can be distributed across all possible branches of sampling instructions according to their weight, thus enabling us to reason about expected cost. The representation of cost credits as separation logic resources gives Tachis a great deal of flexibility and expressivity. In particular, it permits reasoning about amortized expected cost by storing excess credits as potential into data structures to pay for future operations. Tachis further supports a range of cost models, including running time and entropy usage. We showcase the versatility of this approach by applying our techniques to prove upper bounds on the expected cost of a variety of probabilistic algorithms and data structures, including randomized quicksort, hash tables, and meldable heaps. All of our results have been mechanized using Coq, Iris, and the Coquelicot real analysis library.
|
1911.07793
|
Oliver Korten
|
Oliver Korten
|
On the Complexity of 2-Player Packing Games
| null | null | null | null |
cs.CC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We analyze the computational complexity of two 2-player games involving
packing objects into a box. In the first game, players alternate drawing
polycubes from a shared pile and placing them into an initially empty box in
any available location; the first player who can't place another piece loses.
In the second game, there is a fixed sequence of polycubes, and on a player's
turn they drop the next piece in through the top of the box, after which it
falls until it hits a previously placed piece (as in Tetris); the first player
who can't place the next piece loses. We prove that in both games, deciding the
outcome under perfect play is PSPACE-complete.
|
[
{
"created": "Mon, 18 Nov 2019 17:48:47 GMT",
"version": "v1"
}
] |
2019-11-19
|
[
[
"Korten",
"Oliver",
""
]
] |
We analyze the computational complexity of two 2-player games involving packing objects into a box. In the first game, players alternate drawing polycubes from a shared pile and placing them into an initially empty box in any available location; the first player who can't place another piece loses. In the second game, there is a fixed sequence of polycubes, and on a player's turn they drop the next piece in through the top of the box, after which it falls until it hits a previously placed piece (as in Tetris); the first player who can't place the next piece loses. We prove that in both games, deciding the outcome under perfect play is PSPACE-complete.
|
2301.01283
|
Junjie Yan
|
Junjie Yan, Yingfei Liu, Jianjian Sun, Fan Jia, Shuailin Li, Tiancai
Wang, Xiangyu Zhang
|
Cross Modal Transformer: Towards Fast and Robust 3D Object Detection
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a robust 3D detector, named Cross Modal Transformer
(CMT), for end-to-end 3D multi-modal detection. Without explicit view
transformation, CMT takes the image and point cloud tokens as inputs and
directly outputs accurate 3D bounding boxes. The spatial alignment of
multi-modal tokens is performed by encoding the 3D points into multi-modal
features. The core design of CMT is quite simple while its performance is
impressive. It achieves 74.1\% NDS (state-of-the-art with single model) on
nuScenes test set while maintaining fast inference speed. Moreover, CMT
remains strongly robust even if the LiDAR is missing. Code is released at
https://github.com/junjie18/CMT.
|
[
{
"created": "Tue, 3 Jan 2023 18:36:52 GMT",
"version": "v1"
},
{
"created": "Sun, 12 Mar 2023 07:56:36 GMT",
"version": "v2"
},
{
"created": "Mon, 18 Sep 2023 09:53:25 GMT",
"version": "v3"
}
] |
2023-09-19
|
[
[
"Yan",
"Junjie",
""
],
[
"Liu",
"Yingfei",
""
],
[
"Sun",
"Jianjian",
""
],
[
"Jia",
"Fan",
""
],
[
"Li",
"Shuailin",
""
],
[
"Wang",
"Tiancai",
""
],
[
"Zhang",
"Xiangyu",
""
]
] |
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes the image and point cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed by encoding the 3D points into multi-modal features. The core design of CMT is quite simple while its performance is impressive. It achieves 74.1\% NDS (state-of-the-art with single model) on nuScenes test set while maintaining fast inference speed. Moreover, CMT remains strongly robust even if the LiDAR is missing. Code is released at https://github.com/junjie18/CMT.
|
2203.15086
|
Satya Krishna Gorti
|
Satya Krishna Gorti, Noel Vouitsis, Junwei Ma, Keyvan Golestan,
Maksims Volkovs, Animesh Garg, Guangwei Yu
|
X-Pool: Cross-Modal Language-Video Attention for Text-Video Retrieval
|
CVPR 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In text-video retrieval, the objective is to learn a cross-modal similarity
function between a text and a video that ranks relevant text-video pairs higher
than irrelevant pairs. However, videos inherently express a much wider gamut of
information than texts. In contrast, texts often capture sub-regions of entire
videos and are most semantically similar to certain frames within videos.
Therefore, for a given text, a retrieval model should focus on the text's most
semantically similar video sub-regions to make a more relevant comparison. Yet,
most existing works aggregate entire videos without directly considering text.
Common text-agnostic aggregation schemes include mean-pooling or
self-attention over the frames, but these are likely to encode misleading
visual information not described in the given text. To address this, we propose
a cross-modal attention model called X-Pool that reasons between a text and the
frames of a video. Our core mechanism is a scaled dot product attention for a
text to attend to its most semantically similar frames. We then generate an
aggregated video representation conditioned on the text's attention weights
over the frames. We evaluate our method on three benchmark datasets of MSR-VTT,
MSVD and LSMDC, achieving new state-of-the-art results by up to 12% in relative
improvement in Recall@1. Our findings thereby highlight the importance of joint
text-video reasoning to extract important visual cues according to text. Full
code and demo can be found at: https://layer6ai-labs.github.io/xpool/
|
[
{
"created": "Mon, 28 Mar 2022 20:47:37 GMT",
"version": "v1"
}
] |
2022-03-30
|
[
[
"Gorti",
"Satya Krishna",
""
],
[
"Vouitsis",
"Noel",
""
],
[
"Ma",
"Junwei",
""
],
[
"Golestan",
"Keyvan",
""
],
[
"Volkovs",
"Maksims",
""
],
[
"Garg",
"Animesh",
""
],
[
"Yu",
"Guangwei",
""
]
] |
In text-video retrieval, the objective is to learn a cross-modal similarity function between a text and a video that ranks relevant text-video pairs higher than irrelevant pairs. However, videos inherently express a much wider gamut of information than texts. In contrast, texts often capture sub-regions of entire videos and are most semantically similar to certain frames within videos. Therefore, for a given text, a retrieval model should focus on the text's most semantically similar video sub-regions to make a more relevant comparison. Yet, most existing works aggregate entire videos without directly considering text. Common text-agnostic aggregation schemes include mean-pooling or self-attention over the frames, but these are likely to encode misleading visual information not described in the given text. To address this, we propose a cross-modal attention model called X-Pool that reasons between a text and the frames of a video. Our core mechanism is a scaled dot product attention for a text to attend to its most semantically similar frames. We then generate an aggregated video representation conditioned on the text's attention weights over the frames. We evaluate our method on three benchmark datasets of MSR-VTT, MSVD and LSMDC, achieving new state-of-the-art results by up to 12% in relative improvement in Recall@1. Our findings thereby highlight the importance of joint text-video reasoning to extract important visual cues according to text. Full code and demo can be found at: https://layer6ai-labs.github.io/xpool/
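The core mechanism described above can be sketched in a few lines of numpy (dimensions are placeholders and the learned projection matrices of the actual model are omitted, so this is an assumption-laden sketch, not the paper's implementation): the text embedding is the query and the frame embeddings are keys and values in a scaled dot-product attention.

```python
import numpy as np

def xpool_aggregate(text, frames):
    # text: (d,) query; frames: (n_frames, d) keys/values.
    d = text.shape[0]
    logits = frames @ text / np.sqrt(d)   # (n_frames,) scaled similarities
    w = np.exp(logits - logits.max())
    w /= w.sum()                          # softmax attention weights
    return w @ frames                     # text-conditioned frame pooling

rng = np.random.default_rng(0)
t = rng.normal(size=4)                    # text embedding
F = rng.normal(size=(6, 4))               # 6 frame embeddings
video = xpool_aggregate(t, F)
print(video.shape)                        # (4,)
```

In contrast to mean-pooling, the weights `w` depend on the text, so frames semantically close to the query dominate the aggregated video representation.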
|
2307.11628
|
Xingyu Zhu
|
Xingyu Zhu, Guanhui Ye, Xiapu Luo, Xuetao Wei
|
Rethinking Mesh Watermark: Towards Highly Robust and Adaptable Deep 3D
Mesh Watermarking
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
The goal of 3D mesh watermarking is to embed the message in 3D meshes that
can withstand various attacks imperceptibly and reconstruct the message
accurately from watermarked meshes. The watermarking algorithm is supposed to
withstand multiple attacks, and the complexity should not grow significantly
with the mesh size. Unfortunately, previous methods are less robust against
attacks and lack adaptability. In this paper, we propose a robust and adaptable
deep 3D mesh watermarking method, Deep3DMark, that leverages attention-based
convolutions in watermarking tasks to embed binary messages in vertex
distributions without texture assistance. Furthermore, our Deep3DMark exploits
the property that simplified meshes inherit similar relations from the original
ones, where the relation is the offset vector directed from one vertex to its
neighbor. By doing so, our method can be trained on simplified meshes but
remains effective on large size meshes (size adaptable) and unseen categories
of meshes (geometry adaptable). Extensive experiments demonstrate that our
method remains efficient and effective even when the mesh size is increased
190x. Under
mesh attacks, Deep3DMark achieves 10%~50% higher accuracy than traditional
methods, and 2x higher SNR and 8% higher accuracy than previous DNN-based
methods.
|
[
{
"created": "Fri, 21 Jul 2023 14:49:30 GMT",
"version": "v1"
},
{
"created": "Fri, 15 Dec 2023 03:08:43 GMT",
"version": "v2"
}
] |
2023-12-19
|
[
[
"Zhu",
"Xingyu",
""
],
[
"Ye",
"Guanhui",
""
],
[
"Luo",
"Xiapu",
""
],
[
"Wei",
"Xuetao",
""
]
] |
The goal of 3D mesh watermarking is to embed the message in 3D meshes that can withstand various attacks imperceptibly and reconstruct the message accurately from watermarked meshes. The watermarking algorithm is supposed to withstand multiple attacks, and the complexity should not grow significantly with the mesh size. Unfortunately, previous methods are less robust against attacks and lack adaptability. In this paper, we propose a robust and adaptable deep 3D mesh watermarking method, Deep3DMark, that leverages attention-based convolutions in watermarking tasks to embed binary messages in vertex distributions without texture assistance. Furthermore, our Deep3DMark exploits the property that simplified meshes inherit similar relations from the original ones, where the relation is the offset vector directed from one vertex to its neighbor. By doing so, our method can be trained on simplified meshes but remains effective on large size meshes (size adaptable) and unseen categories of meshes (geometry adaptable). Extensive experiments demonstrate that our method remains efficient and effective even when the mesh size is increased 190x. Under mesh attacks, Deep3DMark achieves 10%~50% higher accuracy than traditional methods, and 2x higher SNR and 8% higher accuracy than previous DNN-based methods.
|
2407.06767
|
Ziwei Liu
|
Ziwei Liu, Wen Chen, Qingqing Wu, Zhendong Li, Xusheng Zhu, Qiong Wu,
and Nan Cheng
|
Enhancing Robustness and Security in ISAC Network Design: Leveraging
Transmissive Reconfigurable Intelligent Surface with RSMA
| null | null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a novel transmissive reconfigurable intelligent
surface transceiver-enhanced robust and secure integrated sensing and
communication network. A time-division sensing communication mechanism is
designed for the scenario, which enables communication and sensing to share
wireless resources. To address the interference management problem and hinder
eavesdropping, we implement rate-splitting multiple access (RSMA), where the
common stream is designed as a useful signal and an artificial noise, while
taking into account the imperfect channel state information and modeling the
channel for the illegal users in a fine-grained manner as well as giving an
upper bound on the error. We introduce the secrecy outage probability and
construct an optimization problem with secrecy sum-rate as the objective
function to optimize the common stream beamforming matrix, the private stream
beamforming matrix and the timeslot duration variable. Due to the coupling of
the optimization variables and the infinity of the error set, the proposed
problem is a nonconvex optimization problem that cannot be solved directly. In
order to address the above challenges, the block coordinate descent-based
second-order cone programming algorithm is used to decouple the optimization
variables and solve the problem. Specifically, the problem is decoupled into
two subproblems concerning the common stream beamforming matrix, the private
stream beamforming matrix, and the timeslot duration variable, which are solved
by alternating optimization until convergence is reached. To solve the problem,
S-procedure, Bernstein's inequality and successive convex approximation are
employed to deal with the objective function and non-convex constraints.
Numerical simulation results verify the superiority of the proposed scheme in
improving the secrecy energy efficiency and the Cram\'{e}r-Rao boundary.
|
[
{
"created": "Tue, 9 Jul 2024 11:35:02 GMT",
"version": "v1"
}
] |
2024-07-10
|
[
[
"Liu",
"Ziwei",
""
],
[
"Chen",
"Wen",
""
],
[
"Wu",
"Qingqing",
""
],
[
"Li",
"Zhendong",
""
],
[
"Zhu",
"Xusheng",
""
],
[
"Wu",
"Qiong",
""
],
[
"Cheng",
"Nan",
""
]
] |
In this paper, we propose a novel transmissive reconfigurable intelligent surface transceiver-enhanced robust and secure integrated sensing and communication network. A time-division sensing communication mechanism is designed for the scenario, which enables communication and sensing to share wireless resources. To address the interference management problem and hinder eavesdropping, we implement rate-splitting multiple access (RSMA), where the common stream is designed as a useful signal and an artificial noise, while taking into account the imperfect channel state information and modeling the channel for the illegal users in a fine-grained manner as well as giving an upper bound on the error. We introduce the secrecy outage probability and construct an optimization problem with secrecy sum-rate as the objective function to optimize the common stream beamforming matrix, the private stream beamforming matrix and the timeslot duration variable. Due to the coupling of the optimization variables and the infinity of the error set, the proposed problem is a nonconvex optimization problem that cannot be solved directly. In order to address the above challenges, the block coordinate descent-based second-order cone programming algorithm is used to decouple the optimization variables and solve the problem. Specifically, the problem is decoupled into two subproblems concerning the common stream beamforming matrix, the private stream beamforming matrix, and the timeslot duration variable, which are solved by alternating optimization until convergence is reached. To solve the problem, S-procedure, Bernstein's inequality and successive convex approximation are employed to deal with the objective function and non-convex constraints. Numerical simulation results verify the superiority of the proposed scheme in improving the secrecy energy efficiency and the Cram\'{e}r-Rao boundary.
|
1805.09235
|
Przemysław Spurek
|
Szymon Knop, Jacek Tabor, Przemys{\l}aw Spurek, Igor Podolak, Marcin
Mazur, Stanis{\l}aw Jastrz\k{e}bski
|
Cramer-Wold AutoEncoder
| null |
Journal of Machine Learning Research, 21, 164, 1-28 2020
| null | null |
cs.LG cs.AI stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a new generative model, Cramer-Wold Autoencoder (CWAE). Following
WAE, we directly encourage normality of the latent space. Our paper also uses
the recent idea from the Sliced WAE (SWAE) model, which uses one-dimensional
projections as a method of verifying closeness of two distributions. The
crucial new ingredient is the introduction of a new (Cramer-Wold) metric in the
space of densities, which replaces the Wasserstein metric used in SWAE. We show
that the Cramer-Wold metric between Gaussian mixtures is given by a simple
analytic formula, which results in the removal of sampling necessary to
estimate the cost function in WAE and SWAE models. As a consequence, while
drastically simplifying the optimization procedure, CWAE produces samples of a
matching perceptual quality to other SOTA models.
|
[
{
"created": "Wed, 23 May 2018 15:48:31 GMT",
"version": "v1"
},
{
"created": "Thu, 4 Oct 2018 17:31:37 GMT",
"version": "v2"
},
{
"created": "Tue, 2 Jul 2019 09:00:24 GMT",
"version": "v3"
}
] |
2020-09-21
|
[
[
"Knop",
"Szymon",
""
],
[
"Tabor",
"Jacek",
""
],
[
"Spurek",
"Przemysław",
""
],
[
"Podolak",
"Igor",
""
],
[
"Mazur",
"Marcin",
""
],
[
"Jastrzębski",
"Stanisław",
""
]
] |
We propose a new generative model, Cramer-Wold Autoencoder (CWAE). Following WAE, we directly encourage normality of the latent space. Our paper also uses the recent idea from the Sliced WAE (SWAE) model, which uses one-dimensional projections as a method of verifying closeness of two distributions. The crucial new ingredient is the introduction of a new (Cramer-Wold) metric in the space of densities, which replaces the Wasserstein metric used in SWAE. We show that the Cramer-Wold metric between Gaussian mixtures is given by a simple analytic formula, which results in the removal of sampling necessary to estimate the cost function in WAE and SWAE models. As a consequence, while drastically simplifying the optimization procedure, CWAE produces samples of a matching perceptual quality to other SOTA models.
|
2010.13304
|
Madhavan Rajagopal Padmanabhan
|
Xiaoyun Fu, Madhavan Rajagopal Padmanabhan, Raj Gaurav Kumar, Samik
Basu, Shawn Dorius, Pavan Aduri
|
Measuring the Impact of Influence on Individuals: Roadmap to Quantifying
Attitude
|
13 pages, To appear in the proceedings of the 2020 IEEE/ACM
International Conference on Advances in Social Networks Analysis and Mining
(ASONAM)
| null | null | null |
cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Influence diffusion has been central to the study of propagation of
information in social networks, where influence is typically modeled as a
binary property of entities: influenced or not influenced. We introduce the
notion of attitude, which, as described in social psychology, is the degree by
which an entity is influenced by the information. We present an information
diffusion model that quantifies the degree of influence, i.e., attitude of
individuals, in a social network. With this model, we formulate and study the
attitude maximization problem. We prove that the function for computing
attitude is monotonic and sub-modular, and the attitude maximization problem is
NP-Hard. We present a greedy algorithm for maximization with an approximation
guarantee of $(1-1/e)$. Using the same model, we also introduce the notion of
"actionable" attitude with the aim to study the scenarios where attaining
individuals with high attitude is objectively more important than maximizing
the attitude of the entire network. We show that the function for computing
actionable attitude, unlike that for computing attitude, is non-submodular but
is \emph{approximately submodular}. We present an approximation algorithm for
maximizing actionable attitude in a network. We experimentally evaluated our
algorithms and studied empirical properties of the attitude of nodes in the
network, such as the spatial and value distribution of high-attitude nodes.
|
[
{
"created": "Mon, 26 Oct 2020 03:21:29 GMT",
"version": "v1"
}
] |
2020-10-27
|
[
[
"Fu",
"Xiaoyun",
""
],
[
"Padmanabhan",
"Madhavan Rajagopal",
""
],
[
"Kumar",
"Raj Gaurav",
""
],
[
"Basu",
"Samik",
""
],
[
"Dorius",
"Shawn",
""
],
[
"Aduri",
"Pavan",
""
]
] |
Influence diffusion has been central to the study of propagation of information in social networks, where influence is typically modeled as a binary property of entities: influenced or not influenced. We introduce the notion of attitude, which, as described in social psychology, is the degree by which an entity is influenced by the information. We present an information diffusion model that quantifies the degree of influence, i.e., attitude of individuals, in a social network. With this model, we formulate and study the attitude maximization problem. We prove that the function for computing attitude is monotonic and sub-modular, and the attitude maximization problem is NP-Hard. We present a greedy algorithm for maximization with an approximation guarantee of $(1-1/e)$. Using the same model, we also introduce the notion of "actionable" attitude with the aim to study the scenarios where attaining individuals with high attitude is objectively more important than maximizing the attitude of the entire network. We show that the function for computing actionable attitude, unlike that for computing attitude, is non-submodular but is \emph{approximately submodular}. We present an approximation algorithm for maximizing actionable attitude in a network. We experimentally evaluated our algorithms and studied empirical properties of the attitude of nodes in the network, such as the spatial and value distribution of high-attitude nodes.
|
2212.02626
|
Linard Arquint
|
Linard Arquint and Malte Schwerhoff and Vaibhav Mehta and Peter
M\"uller
|
A Generic Methodology for the Modular Verification of Security Protocol
Implementations (extended version)
| null | null |
10.1145/3576915.3623105
| null |
cs.CR cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
Security protocols are essential building blocks of modern IT systems. Subtle
flaws in their design or implementation may compromise the security of entire
systems. It is, thus, important to prove the absence of such flaws through
formal verification. Much existing work focuses on the verification of protocol
*models*, which is not sufficient to show that their *implementations* are
actually secure. Verification techniques for protocol implementations (e.g.,
via code generation or model extraction) typically impose severe restrictions
on the used programming language and code design, which may lead to sub-optimal
implementations. In this paper, we present a methodology for the modular
verification of strong security properties directly on the level of the
protocol implementations. Our methodology leverages state-of-the-art
verification logics and tools to support a wide range of implementations and
programming languages. We demonstrate its effectiveness by verifying memory
safety and security of Go implementations of the Needham-Schroeder-Lowe,
Diffie-Hellman key exchange, and WireGuard protocols, including forward secrecy
and injective agreement for WireGuard. We also show that our methodology is
agnostic to a particular language or program verifier with a prototype
implementation for C.
|
[
{
"created": "Mon, 5 Dec 2022 22:18:46 GMT",
"version": "v1"
},
{
"created": "Sun, 10 Sep 2023 15:49:26 GMT",
"version": "v2"
}
] |
2023-09-12
|
[
[
"Arquint",
"Linard",
""
],
[
"Schwerhoff",
"Malte",
""
],
[
"Mehta",
"Vaibhav",
""
],
[
"Müller",
"Peter",
""
]
] |
Security protocols are essential building blocks of modern IT systems. Subtle flaws in their design or implementation may compromise the security of entire systems. It is, thus, important to prove the absence of such flaws through formal verification. Much existing work focuses on the verification of protocol *models*, which is not sufficient to show that their *implementations* are actually secure. Verification techniques for protocol implementations (e.g., via code generation or model extraction) typically impose severe restrictions on the used programming language and code design, which may lead to sub-optimal implementations. In this paper, we present a methodology for the modular verification of strong security properties directly on the level of the protocol implementations. Our methodology leverages state-of-the-art verification logics and tools to support a wide range of implementations and programming languages. We demonstrate its effectiveness by verifying memory safety and security of Go implementations of the Needham-Schroeder-Lowe, Diffie-Hellman key exchange, and WireGuard protocols, including forward secrecy and injective agreement for WireGuard. We also show that our methodology is agnostic to a particular language or program verifier with a prototype implementation for C.
|
1112.6320
|
Seyed Hamed Hassani
|
S. Hamed Hassani, Nicolas Macris, Rudiger Urbanke
|
Threshold Saturation in Spatially Coupled Constraint Satisfaction
Problems
| null | null |
10.1007/s10955-012-0664-x
| null |
cs.CC cond-mat.stat-mech cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider chains of random constraint satisfaction models that are
spatially coupled across a finite window along the chain direction. We
investigate their phase diagram at zero temperature using the survey
propagation formalism and the interpolation method. We prove that the SAT-UNSAT
phase transition threshold of an infinite chain is identical to the one of the
individual standard model, and is therefore not affected by spatial coupling.
We compute the survey propagation complexity using population dynamics as well
as large degree approximations, and determine the survey propagation threshold.
We find that a clustering phase survives coupling. However, as one increases
the range of the coupling window, the survey propagation threshold increases
and saturates towards the phase transition threshold. We also briefly discuss
other aspects of the problem. Namely, the condensation threshold is not
affected by coupling, but the dynamic threshold displays saturation towards the
condensation one. All these features may provide a new avenue for obtaining
better provable algorithmic lower bounds on phase transition thresholds of the
individual standard model.
|
[
{
"created": "Fri, 23 Dec 2011 10:42:51 GMT",
"version": "v1"
},
{
"created": "Tue, 12 Jun 2012 07:32:02 GMT",
"version": "v2"
}
] |
2015-06-03
|
[
[
"Hassani",
"S. Hamed",
""
],
[
"Macris",
"Nicolas",
""
],
[
"Urbanke",
"Rudiger",
""
]
] |
We consider chains of random constraint satisfaction models that are spatially coupled across a finite window along the chain direction. We investigate their phase diagram at zero temperature using the survey propagation formalism and the interpolation method. We prove that the SAT-UNSAT phase transition threshold of an infinite chain is identical to the one of the individual standard model, and is therefore not affected by spatial coupling. We compute the survey propagation complexity using population dynamics as well as large degree approximations, and determine the survey propagation threshold. We find that a clustering phase survives coupling. However, as one increases the range of the coupling window, the survey propagation threshold increases and saturates towards the phase transition threshold. We also briefly discuss other aspects of the problem. Namely, the condensation threshold is not affected by coupling, but the dynamic threshold displays saturation towards the condensation one. All these features may provide a new avenue for obtaining better provable algorithmic lower bounds on phase transition thresholds of the individual standard model.
|
2004.08738
|
Shun Zhang
|
Yindi Yang, Shun Zhang, Feifei Gao, Jianpeng Ma, Octavia A. Dobre
|
Graph Neural Network based Channel Tracking for Massive MIMO Networks
| null | null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we resort to the graph neural network (GNN) and propose a new
channel tracking method for massive multiple-input multiple-output networks
under the high-mobility scenario. We first utilize a small number of
pilots to achieve the initial channel estimation. Then, we represent the
obtained channel data in the form of graphs and describe the channel spatial
correlation by the weights along the edges of the graph. Furthermore, we
introduce the computation steps of the main unit for the GNN and design a
GNN-based channel tracking framework, which includes an encoder, a core network
and a decoder. Simulation results corroborate that our proposed GNN-based
scheme can achieve better performance than works based on the feedforward
neural network.
|
[
{
"created": "Sun, 19 Apr 2020 01:29:01 GMT",
"version": "v1"
}
] |
2020-04-21
|
[
[
"Yang",
"Yindi",
""
],
[
"Zhang",
"Shun",
""
],
[
"Gao",
"Feifei",
""
],
[
"Ma",
"Jianpeng",
""
],
[
"Dobre",
"Octavia A.",
""
]
] |
In this paper, we resort to the graph neural network (GNN) and propose a new channel tracking method for massive multiple-input multiple-output networks under the high-mobility scenario. We first utilize a small number of pilots to achieve the initial channel estimation. Then, we represent the obtained channel data in the form of graphs and describe the channel spatial correlation by the weights along the edges of the graph. Furthermore, we introduce the computation steps of the main unit for the GNN and design a GNN-based channel tracking framework, which includes an encoder, a core network and a decoder. Simulation results corroborate that our proposed GNN-based scheme can achieve better performance than works based on the feedforward neural network.
|
2102.11360
|
Greg Bodwin
|
Greg Bodwin, Michael Dinitz, Caleb Robelle
|
Partially Optimal Edge Fault-Tolerant Spanners
| null | null | null | null |
cs.DS cs.DM math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent work has established that, for every positive integer $k$, every
$n$-node graph has a $(2k-1)$-spanner on $O(f^{1-1/k} n^{1+1/k})$ edges that is
resilient to $f$ edge or vertex faults. For vertex faults, this bound is tight.
However, the case of edge faults is not as well understood: the best known
lower bound for general $k$ is $\Omega(f^{\frac12 - \frac{1}{2k}} n^{1+1/k}
+fn)$. Our main result is to nearly close this gap with an improved upper
bound, thus separating the cases of edge and vertex faults. For odd $k$, our
new upper bound is $O_k(f^{\frac12 - \frac{1}{2k}} n^{1+1/k} + fn)$, which is
tight up to hidden $poly(k)$ factors. For even $k$, our new upper bound is
$O_k(f^{1/2} n^{1+1/k} +fn)$, which leaves a gap of $poly(k) f^{1/(2k)}$. Our
proof is an analysis of the fault-tolerant greedy algorithm, which requires
exponential time, but we also show that there is a polynomial-time algorithm
which creates edge fault tolerant spanners that are larger only by factors of
$k$.
|
[
{
"created": "Mon, 22 Feb 2021 21:04:57 GMT",
"version": "v1"
}
] |
2021-02-24
|
[
[
"Bodwin",
"Greg",
""
],
[
"Dinitz",
"Michael",
""
],
[
"Robelle",
"Caleb",
""
]
] |
Recent work has established that, for every positive integer $k$, every $n$-node graph has a $(2k-1)$-spanner on $O(f^{1-1/k} n^{1+1/k})$ edges that is resilient to $f$ edge or vertex faults. For vertex faults, this bound is tight. However, the case of edge faults is not as well understood: the best known lower bound for general $k$ is $\Omega(f^{\frac12 - \frac{1}{2k}} n^{1+1/k} +fn)$. Our main result is to nearly close this gap with an improved upper bound, thus separating the cases of edge and vertex faults. For odd $k$, our new upper bound is $O_k(f^{\frac12 - \frac{1}{2k}} n^{1+1/k} + fn)$, which is tight up to hidden $poly(k)$ factors. For even $k$, our new upper bound is $O_k(f^{1/2} n^{1+1/k} +fn)$, which leaves a gap of $poly(k) f^{1/(2k)}$. Our proof is an analysis of the fault-tolerant greedy algorithm, which requires exponential time, but we also show that there is a polynomial-time algorithm which creates edge fault tolerant spanners that are larger only by factors of $k$.
|
1912.05861
|
Ephraim Zimmer
|
Ephraim Zimmer, Christian Burkert, Tom Petersen, Hannes Federrath
|
PEEPLL: Privacy-Enhanced Event Pseudonymisation with Limited Linkability
|
10 pages. Extended version, Dec. 2019. A shortened version has been
accepted for publication in the proceedings of the 35th ACM/SIGAPP Symposium
On Applied Computing 2020
| null |
10.1145/3341105.3375781
| null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Pseudonymisation provides the means to reduce the privacy impact of
monitoring, auditing, intrusion detection, and data collection in general on
individual subjects. Its application on data records, especially in an
environment with additional constraints, like re-identification in the course
of incident response, implies assumptions and privacy issues, which contradict
the achievement of the desirable privacy level. Proceeding from two real-world
scenarios, where personal and identifying data needs to be processed, we
identify requirements as well as a system model for pseudonymisation and
explicitly state the sustained privacy threats, even when pseudonymisation is
applied. With this system and threat model, we derive privacy protection goals
together with possible technical realisations, which are implemented and
integrated into our event pseudonymisation framework PEEPLL for the context of
event processing, like monitoring and auditing of user, process, and network
activities. Our framework provides privacy-friendly linkability in order to
maintain the possibility for automatic event correlation and evaluation, while
at the same time reducing the privacy impact on individuals. Additionally, the
pseudonymisation framework is evaluated in order to provide some restrained
insights on the impact of assigned paradigms and all necessary new mechanisms
on the performance of monitoring and auditing. With this framework, privacy
provided by event pseudonymisation can be enhanced by a more rigorous
commitment to the concept of personal data minimisation, especially in the
context of regulatory requirements like the European General Data Protection
Regulation.
|
[
{
"created": "Thu, 12 Dec 2019 10:11:18 GMT",
"version": "v1"
}
] |
2020-04-22
|
[
[
"Zimmer",
"Ephraim",
""
],
[
"Burkert",
"Christian",
""
],
[
"Petersen",
"Tom",
""
],
[
"Federrath",
"Hannes",
""
]
] |
Pseudonymisation provides the means to reduce the privacy impact of monitoring, auditing, intrusion detection, and data collection in general on individual subjects. Its application on data records, especially in an environment with additional constraints, like re-identification in the course of incident response, implies assumptions and privacy issues, which contradict the achievement of the desirable privacy level. Proceeding from two real-world scenarios, where personal and identifying data needs to be processed, we identify requirements as well as a system model for pseudonymisation and explicitly state the sustained privacy threats, even when pseudonymisation is applied. With this system and threat model, we derive privacy protection goals together with possible technical realisations, which are implemented and integrated into our event pseudonymisation framework PEEPLL for the context of event processing, like monitoring and auditing of user, process, and network activities. Our framework provides privacy-friendly linkability in order to maintain the possibility for automatic event correlation and evaluation, while at the same time reducing the privacy impact on individuals. Additionally, the pseudonymisation framework is evaluated in order to provide some restrained insights on the impact of assigned paradigms and all necessary new mechanisms on the performance of monitoring and auditing. With this framework, privacy provided by event pseudonymisation can be enhanced by a more rigorous commitment to the concept of personal data minimisation, especially in the context of regulatory requirements like the European General Data Protection Regulation.
|
1901.02161
|
Daniel Brown
|
Daniel S. Brown, Yuchen Cui, Scott Niekum
|
Risk-Aware Active Inverse Reinforcement Learning
|
In proceedings of the 2nd Conference on Robot Learning (CoRL) 2018
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Active learning from demonstration allows a robot to query a human for
specific types of input to achieve efficient learning. Existing work has
explored a variety of active query strategies; however, to our knowledge, none
of these strategies directly minimize the performance risk of the policy the
robot is learning. Utilizing recent advances in performance bounds for inverse
reinforcement learning, we propose a risk-aware active inverse reinforcement
learning algorithm that focuses active queries on areas of the state space with
the potential for large generalization error. We show that risk-aware active
learning outperforms standard active IRL approaches on gridworld, simulated
driving, and table setting tasks, while also providing a performance-based
stopping criterion that allows a robot to know when it has received enough
demonstrations to safely perform a task.
|
[
{
"created": "Tue, 8 Jan 2019 05:23:03 GMT",
"version": "v1"
},
{
"created": "Mon, 3 Jun 2019 20:46:56 GMT",
"version": "v2"
}
] |
2019-06-05
|
[
[
"Brown",
"Daniel S.",
""
],
[
"Cui",
"Yuchen",
""
],
[
"Niekum",
"Scott",
""
]
] |
Active learning from demonstration allows a robot to query a human for specific types of input to achieve efficient learning. Existing work has explored a variety of active query strategies; however, to our knowledge, none of these strategies directly minimize the performance risk of the policy the robot is learning. Utilizing recent advances in performance bounds for inverse reinforcement learning, we propose a risk-aware active inverse reinforcement learning algorithm that focuses active queries on areas of the state space with the potential for large generalization error. We show that risk-aware active learning outperforms standard active IRL approaches on gridworld, simulated driving, and table setting tasks, while also providing a performance-based stopping criterion that allows a robot to know when it has received enough demonstrations to safely perform a task.
|
1811.09543
|
Ji Zhang
|
Ji Zhang, Kevin Shih, Andrew Tao, Bryan Catanzaro, Ahmed Elgammal
|
An Interpretable Model for Scene Graph Generation
|
arXiv admin note: substantial text overlap with arXiv:1811.00662
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose an efficient and interpretable scene graph generator. We consider
three types of features: visual, spatial and semantic, and we use a late fusion
strategy such that each feature's contribution can be explicitly investigated.
We study the key factors about these features that have the most impact on the
performance, and also visualize the learned visual features for relationships
and investigate the efficacy of our model. We won the OpenImages Visual
Relationship Detection Challenge on Kaggle, where we outperformed the 2nd place
by 5\% (20\% relatively). We believe an accurate scene
graph generator is a fundamental stepping stone for higher-level
vision-language tasks such as image captioning and visual QA, since it provides
a semantic, structured comprehension of an image that is beyond pixels and
objects.
|
[
{
"created": "Wed, 21 Nov 2018 17:51:01 GMT",
"version": "v1"
}
] |
2018-11-26
|
[
[
"Zhang",
"Ji",
""
],
[
"Shih",
"Kevin",
""
],
[
"Tao",
"Andrew",
""
],
[
"Catanzaro",
"Bryan",
""
],
[
"Elgammal",
"Ahmed",
""
]
] |
We propose an efficient and interpretable scene graph generator. We consider three types of features: visual, spatial and semantic, and we use a late fusion strategy such that each feature's contribution can be explicitly investigated. We study the key factors about these features that have the most impact on the performance, and also visualize the learned visual features for relationships and investigate the efficacy of our model. We won the OpenImages Visual Relationship Detection Challenge on Kaggle, where we outperformed the 2nd place by 5\% (20\% relatively). We believe an accurate scene graph generator is a fundamental stepping stone for higher-level vision-language tasks such as image captioning and visual QA, since it provides a semantic, structured comprehension of an image that is beyond pixels and objects.
|
2405.15690
|
Ummay Kulsum
|
Ummay Kulsum, Haotian Zhu, Bowen Xu, Marcelo d'Amorim
|
A Case Study of LLM for Automated Vulnerability Repair: Assessing Impact
of Reasoning and Patch Validation Feedback
|
Code, data and artifacts are available:
http://tinyurl.com/vrpilot-artifacts
| null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Recent work in automated program repair (APR) proposes the use of reasoning
and patch validation feedback to reduce the semantic gap between the LLMs and
the code under analysis. The idea has been shown to perform well for general
APR, but its effectiveness in other particular contexts remains underexplored.
In this work, we assess the impact of reasoning and patch validation feedback
to LLMs in the context of vulnerability repair, an important and challenging
task in security. To support the evaluation, we present VRpilot, an LLM-based
vulnerability repair technique based on reasoning and patch validation
feedback. VRpilot (1) uses a chain-of-thought prompt to reason about a
vulnerability prior to generating patch candidates and (2) iteratively refines
prompts according to the output of external tools (e.g., compiler, code
sanitizers, test suite, etc.) on previously-generated patches. To evaluate
performance, we compare VRpilot against the state-of-the-art vulnerability
repair techniques for C and Java using public datasets from the literature. Our
results show that VRpilot generates, on average, 14% and 7.6% more correct
patches than the baseline techniques on C and Java, respectively. We show,
through an ablation study, that reasoning and patch validation feedback are
critical. We report several lessons from this study and potential directions
for advancing LLM-empowered vulnerability repair.
|
[
{
"created": "Fri, 24 May 2024 16:29:48 GMT",
"version": "v1"
}
] |
2024-05-27
|
[
[
"Kulsum",
"Ummay",
""
],
[
"Zhu",
"Haotian",
""
],
[
"Xu",
"Bowen",
""
],
[
"d'Amorim",
"Marcelo",
""
]
] |
Recent work in automated program repair (APR) proposes the use of reasoning and patch validation feedback to reduce the semantic gap between the LLMs and the code under analysis. The idea has been shown to perform well for general APR, but its effectiveness in other particular contexts remains underexplored. In this work, we assess the impact of reasoning and patch validation feedback to LLMs in the context of vulnerability repair, an important and challenging task in security. To support the evaluation, we present VRpilot, an LLM-based vulnerability repair technique based on reasoning and patch validation feedback. VRpilot (1) uses a chain-of-thought prompt to reason about a vulnerability prior to generating patch candidates and (2) iteratively refines prompts according to the output of external tools (e.g., compiler, code sanitizers, test suite, etc.) on previously-generated patches. To evaluate performance, we compare VRpilot against the state-of-the-art vulnerability repair techniques for C and Java using public datasets from the literature. Our results show that VRpilot generates, on average, 14% and 7.6% more correct patches than the baseline techniques on C and Java, respectively. We show, through an ablation study, that reasoning and patch validation feedback are critical. We report several lessons from this study and potential directions for advancing LLM-empowered vulnerability repair.
|
2305.02192
|
Saeed Hadadan
|
Saeed Hadadan, Geng Lin, Jan Nov\'ak, Fabrice Rousselle, Matthias
Zwicker
|
Inverse Global Illumination using a Neural Radiometric Prior
|
Homepage: https://inverse-neural-radiosity.github.io
| null |
10.1145/3588432.3591553
| null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Inverse rendering methods that account for global illumination are becoming
more popular, but current methods require evaluating and automatically
differentiating millions of path integrals by tracing multiple light bounces,
which remains expensive and prone to noise. Instead, this paper proposes a
radiometric prior as a simple alternative to building complete path integrals
in a traditional differentiable path tracer, while still correctly accounting
for global illumination. Inspired by the Neural Radiosity technique, we use a
neural network as a radiance function, and we introduce a prior consisting of
the norm of the residual of the rendering equation in the inverse rendering
loss. We train our radiance network and optimize scene parameters
simultaneously using a loss consisting of both a photometric term between
renderings and the multi-view input images, and our radiometric prior (the
residual term). This residual term enforces a physical constraint on the
optimization that ensures that the radiance field accounts for global
illumination. We compare our method to a vanilla differentiable path tracer,
and more advanced techniques such as Path Replay Backpropagation. Despite the
simplicity of our approach, we can recover scene parameters with comparable and
in some cases better quality, at considerably lower computation times.
|
[
{
"created": "Wed, 3 May 2023 15:36:39 GMT",
"version": "v1"
},
{
"created": "Thu, 18 May 2023 02:13:18 GMT",
"version": "v2"
}
] |
2023-08-01
|
[
[
"Hadadan",
"Saeed",
""
],
[
"Lin",
"Geng",
""
],
[
"Novák",
"Jan",
""
],
[
"Rousselle",
"Fabrice",
""
],
[
"Zwicker",
"Matthias",
""
]
] |
Inverse rendering methods that account for global illumination are becoming more popular, but current methods require evaluating and automatically differentiating millions of path integrals by tracing multiple light bounces, which remains expensive and prone to noise. Instead, this paper proposes a radiometric prior as a simple alternative to building complete path integrals in a traditional differentiable path tracer, while still correctly accounting for global illumination. Inspired by the Neural Radiosity technique, we use a neural network as a radiance function, and we introduce a prior consisting of the norm of the residual of the rendering equation in the inverse rendering loss. We train our radiance network and optimize scene parameters simultaneously using a loss consisting of both a photometric term between renderings and the multi-view input images, and our radiometric prior (the residual term). This residual term enforces a physical constraint on the optimization that ensures that the radiance field accounts for global illumination. We compare our method to a vanilla differentiable path tracer, and more advanced techniques such as Path Replay Backpropagation. Despite the simplicity of our approach, we can recover scene parameters with comparable and in some cases better quality, at considerably lower computation times.
|
1612.03663
|
Maksim Lapin
|
Maksim Lapin, Matthias Hein, and Bernt Schiele
|
Analysis and Optimization of Loss Functions for Multiclass, Top-k, and
Multilabel Classification
| null | null | null | null |
cs.CV cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Top-k error is currently a popular performance measure on large scale image
classification benchmarks such as ImageNet and Places. Despite its wide
acceptance, our understanding of this metric is limited as most of the previous
research is focused on its special case, the top-1 error. In this work, we
explore two directions that shed more light on the top-k error. First, we
provide an in-depth analysis of established and recently proposed single-label
multiclass methods along with a detailed account of efficient optimization
algorithms for them. Our results indicate that the softmax loss and the smooth
multiclass SVM are surprisingly competitive in top-k error uniformly across all
k, which can be explained by our analysis of multiclass top-k calibration.
Further improvements for a specific k are possible with a number of proposed
top-k loss functions. Second, we use the top-k methods to explore the
transition from multiclass to multilabel learning. In particular, we find that
it is possible to obtain effective multilabel classifiers on Pascal VOC using a
single label per image for training, while the gap between multiclass and
multilabel methods on MS COCO is more significant. Finally, our contribution of
efficient algorithms for training with the considered top-k and multilabel loss
functions is of independent interest.
|
[
{
"created": "Mon, 12 Dec 2016 13:20:09 GMT",
"version": "v1"
}
] |
2016-12-19
|
[
[
"Lapin",
"Maksim",
""
],
[
"Hein",
"Matthias",
""
],
[
"Schiele",
"Bernt",
""
]
] |
Top-k error is currently a popular performance measure on large scale image classification benchmarks such as ImageNet and Places. Despite its wide acceptance, our understanding of this metric is limited as most of the previous research is focused on its special case, the top-1 error. In this work, we explore two directions that shed more light on the top-k error. First, we provide an in-depth analysis of established and recently proposed single-label multiclass methods along with a detailed account of efficient optimization algorithms for them. Our results indicate that the softmax loss and the smooth multiclass SVM are surprisingly competitive in top-k error uniformly across all k, which can be explained by our analysis of multiclass top-k calibration. Further improvements for a specific k are possible with a number of proposed top-k loss functions. Second, we use the top-k methods to explore the transition from multiclass to multilabel learning. In particular, we find that it is possible to obtain effective multilabel classifiers on Pascal VOC using a single label per image for training, while the gap between multiclass and multilabel methods on MS COCO is more significant. Finally, our contribution of efficient algorithms for training with the considered top-k and multilabel loss functions is of independent interest.
|
2406.04625
|
Sangwon Ryu
|
Sangwon Ryu, Heejin Do, Yunsu Kim, Gary Geunbae Lee, Jungseul Ok
|
Key-Element-Informed sLLM Tuning for Document Summarization
|
Interspeech 2024
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Remarkable advances in large language models (LLMs) have enabled high-quality
text summarization. However, this capability is currently accessible only
through LLMs of substantial size or proprietary LLMs with usage fees. In
response, smaller-scale LLMs (sLLMs) of easy accessibility and low costs have
been extensively studied, yet they often suffer from missing key information
and entities, i.e., low relevance, in particular, when input documents are
long. We hence propose a key-element-informed instruction tuning for
summarization, dubbed KEITSum, which identifies key elements in documents
and instructs the sLLM to generate summaries capturing these key elements.
Experimental results on dialogue and news datasets demonstrate that sLLM with
KEITSum indeed provides high-quality summarization with higher relevance and
fewer hallucinations, competitive with proprietary LLMs.
|
[
{
"created": "Fri, 7 Jun 2024 04:19:01 GMT",
"version": "v1"
},
{
"created": "Wed, 26 Jun 2024 02:22:11 GMT",
"version": "v2"
}
] |
2024-06-27
|
[
[
"Ryu",
"Sangwon",
""
],
[
"Do",
"Heejin",
""
],
[
"Kim",
"Yunsu",
""
],
[
"Lee",
"Gary Geunbae",
""
],
[
"Ok",
"Jungseul",
""
]
] |
Remarkable advances in large language models (LLMs) have enabled high-quality text summarization. However, this capability is currently accessible only through LLMs of substantial size or proprietary LLMs with usage fees. In response, smaller-scale LLMs (sLLMs), which are easily accessible and low-cost, have been extensively studied, yet they often suffer from missing key information and entities, i.e., low relevance, in particular, when input documents are long. We hence propose a key-element-informed instruction tuning for summarization, dubbed KEITSum, which identifies key elements in documents and instructs the sLLM to generate summaries capturing these key elements. Experimental results on dialogue and news datasets demonstrate that sLLM with KEITSum indeed provides high-quality summarization with higher relevance and fewer hallucinations, competitive with proprietary LLMs.
|
2406.09638
|
Shyam Venkatasubramanian
|
Shyam Venkatasubramanian, Bosung Kang, Ali Pezeshki, Muralidhar
Rangaswamy, Vahid Tarokh
|
RASPNet: A Benchmark Dataset for Radar Adaptive Signal Processing
Applications
| null | null | null | null |
cs.LG eess.SP
|
http://creativecommons.org/licenses/by/4.0/
|
This work presents a large-scale dataset for radar adaptive signal processing
(RASP) applications, aimed at supporting the development of data-driven models
within the radar community. The dataset, called RASPNet, consists of 100
realistic scenarios compiled over a variety of topographies and land types from
across the contiguous United States, designed to reflect a diverse array of
real-world environments. Within each scenario, RASPNet consists of 10,000
clutter realizations from an airborne radar setting, which can be utilized for
radar algorithm development and evaluation. RASPNet intends to fill a prominent
gap in the availability of a large-scale, realistic dataset that standardizes
the evaluation of adaptive radar processing techniques. We describe its
construction, organization, and several potential applications, including
a transfer learning example to demonstrate how RASPNet can be leveraged for
realistic adaptive radar processing scenarios.
|
[
{
"created": "Fri, 14 Jun 2024 00:07:52 GMT",
"version": "v1"
}
] |
2024-06-17
|
[
[
"Venkatasubramanian",
"Shyam",
""
],
[
"Kang",
"Bosung",
""
],
[
"Pezeshki",
"Ali",
""
],
[
"Rangaswamy",
"Muralidhar",
""
],
[
"Tarokh",
"Vahid",
""
]
] |
This work presents a large-scale dataset for radar adaptive signal processing (RASP) applications, aimed at supporting the development of data-driven models within the radar community. The dataset, called RASPNet, consists of 100 realistic scenarios compiled over a variety of topographies and land types from across the contiguous United States, designed to reflect a diverse array of real-world environments. Within each scenario, RASPNet consists of 10,000 clutter realizations from an airborne radar setting, which can be utilized for radar algorithm development and evaluation. RASPNet intends to fill a prominent gap in the availability of a large-scale, realistic dataset that standardizes the evaluation of adaptive radar processing techniques. We describe its construction, organization, and several potential applications, including a transfer learning example to demonstrate how RASPNet can be leveraged for realistic adaptive radar processing scenarios.
|
2406.00938
|
Chung-En Yu
|
Alice Bizzarri, Chung-En Yu, Brian Jalaian, Fabrizio Riguzzi,
Nathaniel D. Bastian
|
A Synergistic Approach In Network Intrusion Detection By Neurosymbolic
AI
| null | null | null | null |
cs.CR cs.AI cs.SC
|
http://creativecommons.org/licenses/by/4.0/
|
The prevailing approaches in Network Intrusion Detection Systems (NIDS) are
often hampered by issues such as high resource consumption, significant
computational demands, and poor interpretability. Furthermore, these systems
generally struggle to identify novel, rapidly changing cyber threats. This
paper delves into the potential of incorporating Neurosymbolic Artificial
Intelligence (NSAI) into NIDS, combining deep learning's data-driven strengths
with symbolic AI's logical reasoning to tackle the dynamic challenges in
cybersecurity. It also provides a detailed introduction to NSAI techniques,
helping cyber professionals explore the potential strengths of NSAI in NIDS. The
inclusion of NSAI in NIDS marks potential advancements in both the detection
and interpretation of intricate network threats, benefiting from the robust
pattern recognition of neural networks and the interpretive prowess of symbolic
reasoning. By analyzing network traffic data types and machine learning
architectures, we illustrate NSAI's distinctive capability to offer more
profound insights into network behavior, thereby improving both detection
performance and the adaptability of the system. This merging of technologies
not only enhances the functionality of traditional NIDS but also sets the stage
for future developments in building more resilient, interpretable, and dynamic
defense mechanisms against advanced cyber threats. The continued progress in
this area is poised to transform NIDS into a system that is both responsive to
known threats and anticipatory of emerging, unseen ones.
|
[
{
"created": "Mon, 3 Jun 2024 02:24:01 GMT",
"version": "v1"
}
] |
2024-06-04
|
[
[
"Bizzarri",
"Alice",
""
],
[
"Yu",
"Chung-En",
""
],
[
"Jalaian",
"Brian",
""
],
[
"Riguzzi",
"Fabrizio",
""
],
[
"Bastian",
"Nathaniel D.",
""
]
] |
The prevailing approaches in Network Intrusion Detection Systems (NIDS) are often hampered by issues such as high resource consumption, significant computational demands, and poor interpretability. Furthermore, these systems generally struggle to identify novel, rapidly changing cyber threats. This paper delves into the potential of incorporating Neurosymbolic Artificial Intelligence (NSAI) into NIDS, combining deep learning's data-driven strengths with symbolic AI's logical reasoning to tackle the dynamic challenges in cybersecurity. It also provides a detailed introduction to NSAI techniques, helping cyber professionals explore the potential strengths of NSAI in NIDS. The inclusion of NSAI in NIDS marks potential advancements in both the detection and interpretation of intricate network threats, benefiting from the robust pattern recognition of neural networks and the interpretive prowess of symbolic reasoning. By analyzing network traffic data types and machine learning architectures, we illustrate NSAI's distinctive capability to offer more profound insights into network behavior, thereby improving both detection performance and the adaptability of the system. This merging of technologies not only enhances the functionality of traditional NIDS but also sets the stage for future developments in building more resilient, interpretable, and dynamic defense mechanisms against advanced cyber threats. The continued progress in this area is poised to transform NIDS into a system that is both responsive to known threats and anticipatory of emerging, unseen ones.
|
1602.00351
|
Yi Ding
|
Yi Ding, Peilin Zhao, Steven C.H. Hoi, Yew-Soon Ong
|
Adaptive Subgradient Methods for Online AUC Maximization
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Learning for maximizing AUC performance is an important research problem in
Machine Learning and Artificial Intelligence. Unlike traditional batch learning
methods for maximizing AUC which often suffer from poor scalability, recent
years have witnessed some emerging studies that attempt to maximize AUC by
single-pass online learning approaches. Despite their encouraging results
reported, the existing online AUC maximization algorithms often adopt simple
online gradient descent approaches that fail to exploit the geometrical
knowledge of the data observed during the online learning process, and thus
could suffer from relatively larger regret. To address the above limitation, in
this work, we explore a novel algorithm of Adaptive Online AUC Maximization
(AdaOAM) which employs an adaptive gradient method that exploits the knowledge
of historical gradients to perform more informative online learning. The new
adaptive updating strategy of the AdaOAM is less sensitive to the parameter
settings and maintains the same time complexity as previous non-adaptive
counterparts. Additionally, we extend the algorithm to handle high-dimensional
sparse data (SAdaOAM) and address sparsity in the solution by performing lazy
gradient updating. We analyze the theoretical bounds and evaluate their
empirical performance on various types of data sets. The encouraging empirical
results obtained clearly highlighted the effectiveness and efficiency of the
proposed algorithms.
|
[
{
"created": "Mon, 1 Feb 2016 00:25:18 GMT",
"version": "v1"
}
] |
2016-02-02
|
[
[
"Ding",
"Yi",
""
],
[
"Zhao",
"Peilin",
""
],
[
"Hoi",
"Steven C. H.",
""
],
[
"Ong",
"Yew-Soon",
""
]
] |
Learning for maximizing AUC performance is an important research problem in Machine Learning and Artificial Intelligence. Unlike traditional batch learning methods for maximizing AUC which often suffer from poor scalability, recent years have witnessed some emerging studies that attempt to maximize AUC by single-pass online learning approaches. Despite their encouraging results reported, the existing online AUC maximization algorithms often adopt simple online gradient descent approaches that fail to exploit the geometrical knowledge of the data observed during the online learning process, and thus could suffer from relatively larger regret. To address the above limitation, in this work, we explore a novel algorithm of Adaptive Online AUC Maximization (AdaOAM) which employs an adaptive gradient method that exploits the knowledge of historical gradients to perform more informative online learning. The new adaptive updating strategy of the AdaOAM is less sensitive to the parameter settings and maintains the same time complexity as previous non-adaptive counterparts. Additionally, we extend the algorithm to handle high-dimensional sparse data (SAdaOAM) and address sparsity in the solution by performing lazy gradient updating. We analyze the theoretical bounds and evaluate their empirical performance on various types of data sets. The encouraging empirical results clearly highlight the effectiveness and efficiency of the proposed algorithms.
|
2408.03583
|
Moran Feldman
|
Niv Buchbinder and Moran Feldman
|
Deterministic Algorithm and Faster Algorithm for Submodular Maximization
subject to a Matroid Constraint
|
22 pages, to appear in FOCS 2024
| null | null | null |
cs.DS cs.DM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study the problem of maximizing a monotone submodular function subject to
a matroid constraint, and present for it a deterministic non-oblivious local
search algorithm that has an approximation guarantee of $1 - 1/e - \varepsilon$
(for any $\varepsilon> 0$) and query complexity of $\tilde{O}_\varepsilon(nr)$,
where $n$ is the size of the ground set and $r$ is the rank of the matroid. Our
algorithm vastly improves over the previous state-of-the-art
$0.5008$-approximation deterministic algorithm, and in fact, shows that there
is no separation between the approximation guarantees that can be obtained by
deterministic and randomized algorithms for the problem considered. The query
complexity of our algorithm can be improved to $\tilde{O}_\varepsilon(n +
r\sqrt{n})$ using randomization, which is nearly-linear for $r = O(\sqrt{n})$,
and is always at least as good as the previous state-of-the-art algorithms.
|
[
{
"created": "Wed, 7 Aug 2024 06:39:45 GMT",
"version": "v1"
}
] |
2024-08-08
|
[
[
"Buchbinder",
"Niv",
""
],
[
"Feldman",
"Moran",
""
]
] |
We study the problem of maximizing a monotone submodular function subject to a matroid constraint, and present for it a deterministic non-oblivious local search algorithm that has an approximation guarantee of $1 - 1/e - \varepsilon$ (for any $\varepsilon> 0$) and query complexity of $\tilde{O}_\varepsilon(nr)$, where $n$ is the size of the ground set and $r$ is the rank of the matroid. Our algorithm vastly improves over the previous state-of-the-art $0.5008$-approximation deterministic algorithm, and in fact, shows that there is no separation between the approximation guarantees that can be obtained by deterministic and randomized algorithms for the problem considered. The query complexity of our algorithm can be improved to $\tilde{O}_\varepsilon(n + r\sqrt{n})$ using randomization, which is nearly-linear for $r = O(\sqrt{n})$, and is always at least as good as the previous state-of-the-art algorithms.
|
1906.12029
|
Cheng Fu
|
Cheng Fu, Huili Chen, Haolan Liu, Xinyun Chen, Yuandong Tian, Farinaz
Koushanfar, Jishen Zhao
|
A Neural-based Program Decompiler
| null | null | null | null |
cs.PL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reverse engineering of binary executables is a critical problem in the
computer security domain. On the one hand, malicious parties may recover
interpretable source codes from the software products to gain commercial
advantages. On the other hand, binary decompilation can be leveraged for code
vulnerability analysis and malware detection. However, efficient binary
decompilation is challenging. Conventional decompilers have the following major
limitations: (i) they are only applicable to a specific source-target language
pair, hence incur undesired development costs for new language tasks; (ii)
their output high-level code cannot effectively preserve the correct
functionality of the input binary; (iii) their output program does not capture
the semantics of the input and the reversed program is hard to interpret. To
address the above problems, we propose Coda, the first end-to-end neural-based
framework for code decompilation. Coda decomposes the decompilation task into
two key phases: First, Coda employs an instruction type-aware encoder and a
tree decoder for generating an abstract syntax tree (AST) with attention
feeding during the code sketch generation stage. Second, Coda then updates the
code sketch using an iterative error correction machine guided by an ensembled
neural error predictor. By finding a good approximate candidate and then fixing
it towards perfect, Coda achieves superior performance compared to baseline
approaches. We assess Coda's performance with extensive experiments on various
benchmarks. Evaluation results show that Coda achieves an average of 82\%
program recovery accuracy on unseen binary samples, where the state-of-the-art
decompilers yield 0\% accuracy. Furthermore, Coda outperforms the
sequence-to-sequence model with attention by a margin of 70\% program accuracy.
|
[
{
"created": "Fri, 28 Jun 2019 03:29:38 GMT",
"version": "v1"
}
] |
2019-07-01
|
[
[
"Fu",
"Cheng",
""
],
[
"Chen",
"Huili",
""
],
[
"Liu",
"Haolan",
""
],
[
"Chen",
"Xinyun",
""
],
[
"Tian",
"Yuandong",
""
],
[
"Koushanfar",
"Farinaz",
""
],
[
"Zhao",
"Jishen",
""
]
] |
Reverse engineering of binary executables is a critical problem in the computer security domain. On the one hand, malicious parties may recover interpretable source codes from the software products to gain commercial advantages. On the other hand, binary decompilation can be leveraged for code vulnerability analysis and malware detection. However, efficient binary decompilation is challenging. Conventional decompilers have the following major limitations: (i) they are only applicable to a specific source-target language pair, hence incur undesired development costs for new language tasks; (ii) their output high-level code cannot effectively preserve the correct functionality of the input binary; (iii) their output program does not capture the semantics of the input and the reversed program is hard to interpret. To address the above problems, we propose Coda, the first end-to-end neural-based framework for code decompilation. Coda decomposes the decompilation task into two key phases: First, Coda employs an instruction type-aware encoder and a tree decoder for generating an abstract syntax tree (AST) with attention feeding during the code sketch generation stage. Second, Coda then updates the code sketch using an iterative error correction machine guided by an ensembled neural error predictor. By finding a good approximate candidate and then fixing it towards perfect, Coda achieves superior performance compared to baseline approaches. We assess Coda's performance with extensive experiments on various benchmarks. Evaluation results show that Coda achieves an average of 82\% program recovery accuracy on unseen binary samples, where the state-of-the-art decompilers yield 0\% accuracy. Furthermore, Coda outperforms the sequence-to-sequence model with attention by a margin of 70\% program accuracy.
|
2402.11588
|
Shu Yang
|
Shu Yang, Hanzhi Ma, Chengting Yu, Aili Wang, Er-Ping Li
|
SDiT: Spiking Diffusion Model with Transformer
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Spiking neural networks (SNNs) have low power consumption and
bio-interpretable characteristics, and are considered to have tremendous
potential for energy-efficient computing. However, the exploration of SNNs on
image generation tasks remains very limited, and a unified and effective
structure for SNN-based generative models has yet to be proposed. In this
paper, we explore a novel diffusion model architecture within spiking neural
networks. We utilize a transformer to replace the commonly used U-net structure
in mainstream diffusion models. It can generate higher quality images with
relatively lower computational cost and shorter sampling time. It aims to
provide an empirical baseline for research of generative models based on SNNs.
Experiments on MNIST, Fashion-MNIST, and CIFAR-10 datasets demonstrate that our
work is highly competitive compared to existing SNN generative models.
|
[
{
"created": "Sun, 18 Feb 2024 13:42:11 GMT",
"version": "v1"
},
{
"created": "Sat, 24 Feb 2024 07:24:09 GMT",
"version": "v2"
}
] |
2024-02-27
|
[
[
"Yang",
"Shu",
""
],
[
"Ma",
"Hanzhi",
""
],
[
"Yu",
"Chengting",
""
],
[
"Wang",
"Aili",
""
],
[
"Li",
"Er-Ping",
""
]
] |
Spiking neural networks (SNNs) have low power consumption and bio-interpretable characteristics, and are considered to have tremendous potential for energy-efficient computing. However, the exploration of SNNs on image generation tasks remains very limited, and a unified and effective structure for SNN-based generative models has yet to be proposed. In this paper, we explore a novel diffusion model architecture within spiking neural networks. We utilize a transformer to replace the commonly used U-net structure in mainstream diffusion models. It can generate higher quality images with relatively lower computational cost and shorter sampling time. It aims to provide an empirical baseline for research of generative models based on SNNs. Experiments on MNIST, Fashion-MNIST, and CIFAR-10 datasets demonstrate that our work is highly competitive compared to existing SNN generative models.
|
2202.11341
|
Marco Spanghero
|
M. Lenhart, M. Spanghero, P. Papadimitratos
|
Distributed and Mobile Message Level Relaying/Replaying of GNSS Signals
| null | null |
10.33012/2022.18227
| null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
With the introduction of Navigation Message Authentication (NMA), future
Global Navigation Satellite Systems (GNSSs) prevent spoofing by simulation,
i.e., the generation of forged satellite signals based on public information.
However, authentication does not prevent record-and-replay attacks, commonly
termed meaconing. These attacks are less powerful in terms of adversarial
control over the victim receiver location and time, but by acting at the signal
level, they are not thwarted by NMA. This makes replaying/relaying attacks a
significant threat for GNSS. While there are numerous investigations on
meaconing, the majority does not rely on actual implementation and experimental
evaluation in real-world settings. In this work, we contribute to the
improvement of the experimental understanding of meaconing attacks. We design
and implement a system capable of real-time, distributed, and mobile meaconing,
built with off-the-shelf hardware. We extend from basic distributed attacks,
with signals from different locations relayed over the Internet and replayed
within range of the victim receiver(s): this has high bandwidth requirements
and thus depends on the quality of service of the available network to work. To
overcome this limitation, we propose to replay on message level, including the
authentication part of the payload. The resultant reduced bandwidth enables the
attacker to operate in mobile scenarios, as well as to replay signals from
multiple GNSS constellations and/or bands simultaneously. Additionally, the
attacker can delay individually selected satellite signals to potentially
influence the victim position and time solution in a more fine-grained manner.
Our versatile test-bench, enabling different types of replaying/relaying
attacks, facilitates testing realistic scenarios towards new and improved
replaying/relaying-focused countermeasures in GNSS receivers.
|
[
{
"created": "Wed, 23 Feb 2022 07:54:46 GMT",
"version": "v1"
}
] |
2022-02-24
|
[
[
"Lenhart",
"M.",
""
],
[
"Spanghero",
"M.",
""
],
[
"Papadimitratos",
"P.",
""
]
] |
With the introduction of Navigation Message Authentication (NMA), future Global Navigation Satellite Systems (GNSSs) prevent spoofing by simulation, i.e., the generation of forged satellite signals based on public information. However, authentication does not prevent record-and-replay attacks, commonly termed meaconing. These attacks are less powerful in terms of adversarial control over the victim receiver location and time, but by acting at the signal level, they are not thwarted by NMA. This makes replaying/relaying attacks a significant threat for GNSS. While there are numerous investigations on meaconing, the majority does not rely on actual implementation and experimental evaluation in real-world settings. In this work, we contribute to the improvement of the experimental understanding of meaconing attacks. We design and implement a system capable of real-time, distributed, and mobile meaconing, built with off-the-shelf hardware. We extend from basic distributed attacks, with signals from different locations relayed over the Internet and replayed within range of the victim receiver(s): this has high bandwidth requirements and thus depends on the quality of service of the available network to work. To overcome this limitation, we propose to replay on message level, including the authentication part of the payload. The resultant reduced bandwidth enables the attacker to operate in mobile scenarios, as well as to replay signals from multiple GNSS constellations and/or bands simultaneously. Additionally, the attacker can delay individually selected satellite signals to potentially influence the victim position and time solution in a more fine-grained manner. Our versatile test-bench, enabling different types of replaying/relaying attacks, facilitates testing realistic scenarios towards new and improved replaying/relaying-focused countermeasures in GNSS receivers.
|
2406.06589
|
You Zuo
|
You Zuo (ALMAnaCH), Kim Gerdes (LISN), Eric Villemonte de La Clergerie
(ALMAnaCH), Beno\^it Sagot (ALMAnaCH)
|
PatentEval: Understanding Errors in Patent Generation
| null |
NAACL2024 - 2024 Annual Conference of the North American Chapter
of the Association for Computational Linguistics, Jun 2024, Mexico City,
Mexico
| null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we introduce a comprehensive error typology specifically
designed for evaluating two distinct tasks in machine-generated patent texts:
claims-to-abstract generation, and the generation of the next claim given
previous ones. We have also developed a benchmark, PatentEval, for
systematically assessing language models in this context. Our study includes a
comparative analysis, annotated by humans, of various models. These range from
those specifically adapted during training for tasks within the patent domain
to the latest general-purpose large language models (LLMs). Furthermore, we
explored and evaluated some metrics to approximate human judgments in patent
text evaluation, analyzing the extent to which these metrics align with expert
assessments. These approaches provide valuable insights into the capabilities
and limitations of current language models in the specialized field of patent
text generation.
|
[
{
"created": "Wed, 5 Jun 2024 13:55:27 GMT",
"version": "v1"
},
{
"created": "Tue, 25 Jun 2024 08:23:03 GMT",
"version": "v2"
}
] |
2024-06-26
|
[
[
"Zuo",
"You",
"",
"ALMAnaCH"
],
[
"Gerdes",
"Kim",
"",
"LISN"
],
[
"de La Clergerie",
"Eric Villemonte",
"",
"ALMAnaCH"
],
[
"Sagot",
"Benoît",
"",
"ALMAnaCH"
]
] |
In this work, we introduce a comprehensive error typology specifically designed for evaluating two distinct tasks in machine-generated patent texts: claims-to-abstract generation, and the generation of the next claim given previous ones. We have also developed a benchmark, PatentEval, for systematically assessing language models in this context. Our study includes a comparative analysis, annotated by humans, of various models. These range from those specifically adapted during training for tasks within the patent domain to the latest general-purpose large language models (LLMs). Furthermore, we explored and evaluated some metrics to approximate human judgments in patent text evaluation, analyzing the extent to which these metrics align with expert assessments. These approaches provide valuable insights into the capabilities and limitations of current language models in the specialized field of patent text generation.
|
2311.01073
|
Ljubisa Stankovic
|
Ljubisa Stankovic, Milos Dakovic, Ali Bagheri Bardi, Milos Brajovic,
Isidora Stankovic
|
Fourier Analysis of Signals on Directed Acyclic Graphs (DAG) Using Graph
Zero-Padding
|
10 pages, 12 figures
| null | null | null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
Directed acyclic graphs (DAGs) are used for modeling causal relationships,
dependencies, and flows in various systems. However, spectral analysis becomes
impractical in this setting because the eigen-decomposition of the adjacency
matrix yields all eigenvalues equal to zero. This inherent property of DAGs
results in an inability to differentiate between frequency components of
signals on such graphs. This problem can be addressed by altering the
Fourier basis or adding edges in a DAG. However, these approaches change the
physics of the considered problem. To address this limitation, we propose a
graph zero-padding approach. This approach involves augmenting the original DAG
with additional vertices that are connected to the existing structure. The
added vertices are characterized by signal values set to zero. The proposed
technique enables the spectral evaluation of system outputs on DAGs (in almost
all cases), that is, the computation of vertex-domain convolution without the
adverse effects of aliasing due to changes in the graph structure, with the
ultimate goal of preserving the output of the system on a graph as if the
changes in the graph structure had not been made.
|
[
{
"created": "Thu, 2 Nov 2023 08:40:21 GMT",
"version": "v1"
},
{
"created": "Mon, 13 Nov 2023 09:33:24 GMT",
"version": "v2"
}
] |
2023-11-14
|
[
[
"Stankovic",
"Ljubisa",
""
],
[
"Dakovic",
"Milos",
""
],
[
"Bardi",
"Ali Bagheri",
""
],
[
"Brajovic",
"Milos",
""
],
[
"Stankovic",
"Isidora",
""
]
] |
Directed acyclic graphs (DAGs) are used for modeling causal relationships, dependencies, and flows in various systems. However, spectral analysis becomes impractical in this setting because the eigen-decomposition of the adjacency matrix yields all eigenvalues equal to zero. This inherent property of DAGs results in an inability to differentiate between frequency components of signals on such graphs. This problem can be addressed by altering the Fourier basis or adding edges in a DAG. However, these approaches change the physics of the considered problem. To address this limitation, we propose a graph zero-padding approach. This approach involves augmenting the original DAG with additional vertices that are connected to the existing structure. The added vertices are characterized by signal values set to zero. The proposed technique enables the spectral evaluation of system outputs on DAGs (in almost all cases), that is, the computation of vertex-domain convolution without the adverse effects of aliasing due to changes in the graph structure, with the ultimate goal of preserving the output of the system on a graph as if the changes in the graph structure had not been made.
|
1006.5090
|
Vladimir Pestov
|
Vladimir Pestov
|
PAC learnability of a concept class under non-atomic measures: a problem
by Vidyasagar
|
14 pages, 1 figure, latex 2e with Springer macros
|
Lect. Notes in Artificial Intelligence 6331, Springer, 2010, pp.
134-147
| null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In response to a 1997 problem of M. Vidyasagar, we state a necessary and
sufficient condition for distribution-free PAC learnability of a concept class
$\mathscr C$ under the family of all non-atomic (diffuse) measures on the
domain $\Omega$. Clearly, finiteness of the classical Vapnik-Chervonenkis
dimension of $\mathscr C$ is a sufficient, but no longer necessary, condition.
Besides, learnability of $\mathscr C$ under non-atomic measures does not imply
the uniform Glivenko-Cantelli property with regard to non-atomic measures. Our
learnability criterion is stated in terms of a combinatorial parameter
$\VC({\mathscr C}\,{\mathrm{mod}}\,\omega_1)$ which we call the VC dimension of
$\mathscr C$ modulo countable sets. The new parameter is obtained by
``thickening up'' single points in the definition of VC dimension to
uncountable ``clusters''. Equivalently, $\VC({\mathscr C}\,{\mathrm{mod}}\,\omega_1)\leq d$ if
and only if every countable subclass of $\mathscr C$ has VC dimension $\leq d$
outside a countable subset of $\Omega$. The new parameter can be also expressed
as the classical VC dimension of $\mathscr C$ calculated on a suitable subset
of a compactification of $\Omega$. We do not make any measurability assumptions
on $\mathscr C$, assuming instead the validity of Martin's Axiom (MA).
|
[
{
"created": "Sat, 26 Jun 2010 01:44:57 GMT",
"version": "v1"
}
] |
2010-11-08
|
[
[
"Pestov",
"Vladimir",
""
]
] |
In response to a 1997 problem of M. Vidyasagar, we state a necessary and sufficient condition for distribution-free PAC learnability of a concept class $\mathscr C$ under the family of all non-atomic (diffuse) measures on the domain $\Omega$. Clearly, finiteness of the classical Vapnik-Chervonenkis dimension of $\mathscr C$ is a sufficient, but no longer necessary, condition. Besides, learnability of $\mathscr C$ under non-atomic measures does not imply the uniform Glivenko-Cantelli property with regard to non-atomic measures. Our learnability criterion is stated in terms of a combinatorial parameter $\VC({\mathscr C}\,{\mathrm{mod}}\,\omega_1)$ which we call the VC dimension of $\mathscr C$ modulo countable sets. The new parameter is obtained by ``thickening up'' single points in the definition of VC dimension to uncountable ``clusters''. Equivalently, $\VC({\mathscr C}\,{\mathrm{mod}}\,\omega_1)\leq d$ if and only if every countable subclass of $\mathscr C$ has VC dimension $\leq d$ outside a countable subset of $\Omega$. The new parameter can be also expressed as the classical VC dimension of $\mathscr C$ calculated on a suitable subset of a compactification of $\Omega$. We do not make any measurability assumptions on $\mathscr C$, assuming instead the validity of Martin's Axiom (MA).
|
2305.14625
|
Shufan Wang
|
Shufan Wang, Yixiao Song, Andrew Drozdov, Aparna Garimella, Varun
Manjunatha, Mohit Iyyer
|
KNN-LM Does Not Improve Open-ended Text Generation
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we study the generation quality of interpolation-based
retrieval-augmented language models (LMs). These methods, best exemplified by
the KNN-LM, interpolate the LM's predicted distribution of the next word with a
distribution formed from the most relevant retrievals for a given prefix. While
the KNN-LM and related methods yield impressive decreases in perplexity, we
discover that they do not exhibit corresponding improvements in open-ended
generation quality, as measured by both automatic evaluation metrics (e.g.,
MAUVE) and human evaluations. Digging deeper, we find that interpolating with a
retrieval distribution actually increases perplexity compared to a baseline
Transformer LM for the majority of tokens in the WikiText-103 test set, even
though the overall perplexity is lower due to a smaller number of tokens for
which perplexity dramatically decreases after interpolation. However, when
decoding a long sequence at inference time, significant improvements on this
smaller subset of tokens are washed out by slightly worse predictions on most
tokens. Furthermore, we discover that the entropy of the retrieval distribution
increases faster than that of the base LM as the generated sequence becomes
longer, which indicates that retrieval is less reliable when using
model-generated text as queries (i.e., is subject to exposure bias). We hope
that our analysis spurs future work on improved decoding algorithms and
interpolation strategies for retrieval-augmented language models.
|
[
{
"created": "Wed, 24 May 2023 01:48:33 GMT",
"version": "v1"
}
] |
2023-05-25
|
[
[
"Wang",
"Shufan",
""
],
[
"Song",
"Yixiao",
""
],
[
"Drozdov",
"Andrew",
""
],
[
"Garimella",
"Aparna",
""
],
[
"Manjunatha",
"Varun",
""
],
[
"Iyyer",
"Mohit",
""
]
] |
In this paper, we study the generation quality of interpolation-based retrieval-augmented language models (LMs). These methods, best exemplified by the KNN-LM, interpolate the LM's predicted distribution of the next word with a distribution formed from the most relevant retrievals for a given prefix. While the KNN-LM and related methods yield impressive decreases in perplexity, we discover that they do not exhibit corresponding improvements in open-ended generation quality, as measured by both automatic evaluation metrics (e.g., MAUVE) and human evaluations. Digging deeper, we find that interpolating with a retrieval distribution actually increases perplexity compared to a baseline Transformer LM for the majority of tokens in the WikiText-103 test set, even though the overall perplexity is lower due to a smaller number of tokens for which perplexity dramatically decreases after interpolation. However, when decoding a long sequence at inference time, significant improvements on this smaller subset of tokens are washed out by slightly worse predictions on most tokens. Furthermore, we discover that the entropy of the retrieval distribution increases faster than that of the base LM as the generated sequence becomes longer, which indicates that retrieval is less reliable when using model-generated text as queries (i.e., is subject to exposure bias). We hope that our analysis spurs future work on improved decoding algorithms and interpolation strategies for retrieval-augmented language models.
|
2403.12627
|
Andreas Florath
|
Andreas Florath
|
Enhancing Formal Theorem Proving: A Comprehensive Dataset for Training
AI Models on Coq Code
|
11 pages
| null | null | null |
cs.AI cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
In the realm of formal theorem proving, the Coq proof assistant stands out
for its rigorous approach to verifying mathematical assertions and software
correctness. Despite the advances in artificial intelligence and machine
learning, the specialized nature of Coq syntax and semantics poses unique
challenges for Large Language Models (LLMs). Addressing this gap, we present a
comprehensive dataset specifically designed to enhance LLMs' proficiency in
interpreting and generating Coq code. This dataset, derived from a collection
of over 10,000 Coq source files, encompasses a wide array of propositions,
proofs, and definitions, enriched with metadata including source references and
licensing information. Our primary aim is to facilitate the development of LLMs
capable of generating syntactically correct and semantically meaningful Coq
constructs, thereby advancing the frontier of automated theorem proving.
Initial experiments with this dataset have showcased its significant potential;
models trained on this data exhibited enhanced accuracy in Coq code generation.
Notably, a particular experiment revealed that a fine-tuned LLM was capable of
generating 141 valid proofs for a basic lemma, highlighting the dataset's
utility in facilitating the discovery of diverse and valid proof strategies.
This paper discusses the dataset's composition, the methodology behind its
creation, and the implications of our findings for the future of machine
learning in formal verification. The dataset is accessible for further research
and exploration:
https://huggingface.co/datasets/florath/coq-facts-props-proofs-gen0-v1
|
[
{
"created": "Tue, 19 Mar 2024 10:53:40 GMT",
"version": "v1"
},
{
"created": "Tue, 2 Apr 2024 13:54:47 GMT",
"version": "v2"
}
] |
2024-04-03
|
[
[
"Florath",
"Andreas",
""
]
] |
In the realm of formal theorem proving, the Coq proof assistant stands out for its rigorous approach to verifying mathematical assertions and software correctness. Despite the advances in artificial intelligence and machine learning, the specialized nature of Coq syntax and semantics poses unique challenges for Large Language Models (LLMs). Addressing this gap, we present a comprehensive dataset specifically designed to enhance LLMs' proficiency in interpreting and generating Coq code. This dataset, derived from a collection of over 10,000 Coq source files, encompasses a wide array of propositions, proofs, and definitions, enriched with metadata including source references and licensing information. Our primary aim is to facilitate the development of LLMs capable of generating syntactically correct and semantically meaningful Coq constructs, thereby advancing the frontier of automated theorem proving. Initial experiments with this dataset have showcased its significant potential; models trained on this data exhibited enhanced accuracy in Coq code generation. Notably, a particular experiment revealed that a fine-tuned LLM was capable of generating 141 valid proofs for a basic lemma, highlighting the dataset's utility in facilitating the discovery of diverse and valid proof strategies. This paper discusses the dataset's composition, the methodology behind its creation, and the implications of our findings for the future of machine learning in formal verification. The dataset is accessible for further research and exploration: https://huggingface.co/datasets/florath/coq-facts-props-proofs-gen0-v1
|
1806.02689
|
Lorenzo Sabattini
|
Lorenzo Sabattini, Valeria Villani, Julia N. Czerniak, Frieder Loch,
Alexander Mertens, Birgit Vogel-Heuser, Cesare Fantuzzi
|
Methodological Approach for the Evaluation of an Adaptive and Assistive
Human-Machine System
| null |
Proceedings of the 14th IEEE International Conference on
Automation Science and Engineering (CASE 2018)
|
10.1109/COASE.2018.8560574
| null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the increasing complexity of modern industrial automatic and robotic
systems, an increasing burden is put on the operators, who are requested to
supervise and interact with such complex systems, typically under challenging
and stressful conditions. To overcome this issue, it is necessary to adopt a
responsible approach based on the anthropocentric design methodology, such that
machines adapt to human capabilities. Moving along these lines, a
methodological approach called MATE was introduced in [1], which consists of
devising complex automatic or robotic solutions that measure the current
operator's status, adapting the interaction accordingly, and providing her/him
with proper
training to improve the interaction and learn lacking skills and expertise. In
this paper we propose an evaluation and validation procedure to guarantee the
achievement of the requirements of a MATE system.
|
[
{
"created": "Thu, 7 Jun 2018 14:03:32 GMT",
"version": "v1"
}
] |
2020-03-06
|
[
[
"Sabattini",
"Lorenzo",
""
],
[
"Villani",
"Valeria",
""
],
[
"Czerniak",
"Julia N.",
""
],
[
"Loch",
"Frieder",
""
],
[
"Mertens",
"Alexander",
""
],
[
"Vogel-Heuser",
"Birgit",
""
],
[
"Fantuzzi",
"Cesare",
""
]
] |
With the increasing complexity of modern industrial automatic and robotic systems, an increasing burden is put on the operators, who are requested to supervise and interact with such complex systems, typically under challenging and stressful conditions. To overcome this issue, it is necessary to adopt a responsible approach based on the anthropocentric design methodology, such that machines adapt to human capabilities. Moving along these lines, a methodological approach called MATE was introduced in [1], which consists of devising complex automatic or robotic solutions that measure the current operator's status, adapting the interaction accordingly, and providing her/him with proper training to improve the interaction and learn lacking skills and expertise. In this paper we propose an evaluation and validation procedure to guarantee the achievement of the requirements of a MATE system.
|
2102.07868
|
Idan Achituve
|
Idan Achituve, Aviv Navon, Yochai Yemini, Gal Chechik, Ethan Fetaya
|
GP-Tree: A Gaussian Process Classifier for Few-Shot Incremental Learning
| null | null | null | null |
cs.LG
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Gaussian processes (GPs) are non-parametric, flexible models that work well
in many tasks. Combining GPs with deep learning methods via deep kernel
learning (DKL) is especially compelling due to the strong representational
power induced by the network. However, inference in GPs, whether with or
without DKL, can be computationally challenging on large datasets. Here, we
propose GP-Tree, a novel method for multi-class classification with Gaussian
processes and DKL. We develop a tree-based hierarchical model in which each
internal node of the tree fits a GP to the data using the P\'olya Gamma
augmentation scheme. As a result, our method scales well with both the number
of classes and data size. We demonstrate the effectiveness of our method
against other Gaussian process training baselines, and we show how our general
GP approach achieves improved accuracy on standard incremental few-shot
learning benchmarks.
|
[
{
"created": "Mon, 15 Feb 2021 22:16:27 GMT",
"version": "v1"
},
{
"created": "Sun, 6 Jun 2021 20:54:13 GMT",
"version": "v2"
},
{
"created": "Tue, 29 Jun 2021 13:56:45 GMT",
"version": "v3"
},
{
"created": "Tue, 13 Jul 2021 07:27:26 GMT",
"version": "v4"
}
] |
2021-07-14
|
[
[
"Achituve",
"Idan",
""
],
[
"Navon",
"Aviv",
""
],
[
"Yemini",
"Yochai",
""
],
[
"Chechik",
"Gal",
""
],
[
"Fetaya",
"Ethan",
""
]
] |
Gaussian processes (GPs) are non-parametric, flexible models that work well in many tasks. Combining GPs with deep learning methods via deep kernel learning (DKL) is especially compelling due to the strong representational power induced by the network. However, inference in GPs, whether with or without DKL, can be computationally challenging on large datasets. Here, we propose GP-Tree, a novel method for multi-class classification with Gaussian processes and DKL. We develop a tree-based hierarchical model in which each internal node of the tree fits a GP to the data using the P\'olya Gamma augmentation scheme. As a result, our method scales well with both the number of classes and data size. We demonstrate the effectiveness of our method against other Gaussian process training baselines, and we show how our general GP approach achieves improved accuracy on standard incremental few-shot learning benchmarks.
|
0801.0133
|
Alexandr Savinov
|
Alexandr Savinov
|
An Approach to Programming Based on Concepts
|
49 pages. Related papers: http://conceptoriented.com
|
Institute of Mathematics and Computer Science, Academy of Sciences
of Moldova, Technical Report RT0005, 2007
| null |
Technical Report RT0005
|
cs.PL
| null |
In this paper we describe a new approach to programming which generalizes
object-oriented programming. It is based on a new programming construct,
called a concept, which generalizes classes. A concept is defined as a pair of
classes: one reference class and one object class. Each concept has a parent
concept, which is specified using an inclusion relation generalizing
inheritance.
We describe several important mechanisms such as reference resolution, context
stack, dual methods and life-cycle management, inheritance and polymorphism.
This approach to programming is positioned as a new programming paradigm and
therefore we formulate its main principles and rules.
|
[
{
"created": "Sun, 30 Dec 2007 14:43:27 GMT",
"version": "v1"
}
] |
2008-01-03
|
[
[
"Savinov",
"Alexandr",
""
]
] |
In this paper we describe a new approach to programming which generalizes object-oriented programming. It is based on a new programming construct, called a concept, which generalizes classes. A concept is defined as a pair of classes: one reference class and one object class. Each concept has a parent concept, which is specified using an inclusion relation generalizing inheritance. We describe several important mechanisms such as reference resolution, context stack, dual methods and life-cycle management, inheritance and polymorphism. This approach to programming is positioned as a new programming paradigm and therefore we formulate its main principles and rules.
|
2105.02197
|
Olivier Vincent
|
Olivier Vincent, Charley Gros, Julien Cohen-Adad
|
Impact of individual rater style on deep learning uncertainty in medical
imaging segmentation
|
17 pages, 8 figures, in submission at MELBA journal
| null | null | null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While multiple studies have explored the relation between inter-rater
variability and deep learning model uncertainty in medical segmentation tasks,
little is known about the impact of individual rater style. This study
quantifies rater style in the form of bias and consistency and explores their
impacts when used to train deep learning models. Two multi-rater public
datasets were used, consisting of brain multiple sclerosis lesion and spinal
cord grey matter segmentation. On both datasets, results show a correlation
($R^2 = 0.60$ and $0.93$) between rater bias and deep learning uncertainty. The
impact of label fusion between raters' annotations on this relationship is also
explored, and we show that multi-center consensuses are more effective than
single-center consensuses to reduce uncertainty, since rater style is mostly
center-specific.
|
[
{
"created": "Wed, 5 May 2021 17:11:18 GMT",
"version": "v1"
}
] |
2021-05-06
|
[
[
"Vincent",
"Olivier",
""
],
[
"Gros",
"Charley",
""
],
[
"Cohen-Adad",
"Julien",
""
]
] |
While multiple studies have explored the relation between inter-rater variability and deep learning model uncertainty in medical segmentation tasks, little is known about the impact of individual rater style. This study quantifies rater style in the form of bias and consistency and explores their impacts when used to train deep learning models. Two multi-rater public datasets were used, consisting of brain multiple sclerosis lesion and spinal cord grey matter segmentation. On both datasets, results show a correlation ($R^2 = 0.60$ and $0.93$) between rater bias and deep learning uncertainty. The impact of label fusion between raters' annotations on this relationship is also explored, and we show that multi-center consensuses are more effective than single-center consensuses to reduce uncertainty, since rater style is mostly center-specific.
|
2404.03121
|
Chuang Li
|
Chuang Li, Shuai Shao, Willian Mikason, Rubing Lin, Yantong Liu
|
Utilizing Computer Vision for Continuous Monitoring of Vaccine Side
Effects in Experimental Mice
|
1 figure
| null | null | null |
cs.CV q-bio.NC
|
http://creativecommons.org/licenses/by-sa/4.0/
|
The demand for improved efficiency and accuracy in vaccine safety assessments
is increasing. Here, we explore the application of computer vision technologies
to automate the monitoring of experimental mice for potential side effects
after vaccine administration. Traditional observation methods are
labor-intensive and lack the capability for continuous monitoring. By deploying
a computer vision system, our research aims to improve the efficiency and
accuracy of vaccine safety assessments. The methodology involves training
machine learning models on annotated video data of mice behaviors pre- and
post-vaccination. Preliminary results indicate that computer vision effectively
identifies subtle changes, signaling possible side effects. Therefore, our
approach has the potential to significantly enhance the monitoring process in
vaccine trials in animals, providing a practical solution to the limitations of
human observation.
|
[
{
"created": "Wed, 3 Apr 2024 23:59:59 GMT",
"version": "v1"
}
] |
2024-04-05
|
[
[
"Li",
"Chuang",
""
],
[
"Shao",
"Shuai",
""
],
[
"Mikason",
"Willian",
""
],
[
"Lin",
"Rubing",
""
],
[
"Liu",
"Yantong",
""
]
] |
The demand for improved efficiency and accuracy in vaccine safety assessments is increasing. Here, we explore the application of computer vision technologies to automate the monitoring of experimental mice for potential side effects after vaccine administration. Traditional observation methods are labor-intensive and lack the capability for continuous monitoring. By deploying a computer vision system, our research aims to improve the efficiency and accuracy of vaccine safety assessments. The methodology involves training machine learning models on annotated video data of mice behaviors pre- and post-vaccination. Preliminary results indicate that computer vision effectively identifies subtle changes, signaling possible side effects. Therefore, our approach has the potential to significantly enhance the monitoring process in vaccine trials in animals, providing a practical solution to the limitations of human observation.
|
1312.2701
|
EPTCS
|
Laura Bocchi (Imperial College, London), Romain Demangeon (Imperial
College, London)
|
Embedding Session Types in HML
|
In Proceedings PLACES 2013, arXiv:1312.2218
|
EPTCS 137, 2013, pp. 53-62
|
10.4204/EPTCS.137.5
| null |
cs.DC cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent work on the enhancement of multiparty session types with logical
annotations enables the effective verification of properties on (1) the
structure of the conversations, (2) the sorts of the messages, and (3) the
actual values exchanged. In [3] we extend this work to enable the specification
and verification of mutual effects of multiple cross-session interactions. Here
we give a sound and complete embedding into the Hennessy-Milner logic to
justify the expressiveness of the approach in [3] and to provide it with a
logical background that will enable us to compare it with similar approaches.
|
[
{
"created": "Tue, 10 Dec 2013 08:03:49 GMT",
"version": "v1"
}
] |
2013-12-11
|
[
[
"Bocchi",
"Laura",
"",
"Imperial College, London"
],
[
"Demangeon",
"Romain",
"",
"Imperial\n College, London"
]
] |
Recent work on the enhancement of multiparty session types with logical annotations enables the effective verification of properties on (1) the structure of the conversations, (2) the sorts of the messages, and (3) the actual values exchanged. In [3] we extend this work to enable the specification and verification of mutual effects of multiple cross-session interactions. Here we give a sound and complete embedding into the Hennessy-Milner logic to justify the expressiveness of the approach in [3] and to provide it with a logical background that will enable us to compare it with similar approaches.
|
1706.02021
|
Anbang Yao
|
Yiwen Guo, Anbang Yao, Hao Zhao, Yurong Chen
|
Network Sketching: Exploiting Binary Structure in Deep CNNs
|
To appear in CVPR2017
| null | null | null |
cs.NE cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Convolutional neural networks (CNNs) with deep architectures have
substantially advanced the state-of-the-art in computer vision tasks. However,
deep networks are typically resource-intensive and thus difficult to deploy
on mobile devices. Recently, CNNs with binary weights have shown
compelling efficiency to the community, whereas the accuracy of such models is
usually unsatisfactory in practice. In this paper, we introduce network
sketching as a novel technique for pursuing binary-weight CNNs, targeting
more faithful inference and a better trade-off for practical applications. Our
basic idea is to exploit binary structure directly in pre-trained filter banks
and produce binary-weight models via tensor expansion. The whole process can be
treated as a coarse-to-fine model approximation, akin to the pencil drawing
steps of outlining and shading. To further speed up the generated models, namely
the sketches, we also propose an associative implementation of binary tensor
convolutions. Experimental results demonstrate that a proper sketch of AlexNet
(or ResNet) outperforms the existing binary-weight models by large margins on
the ImageNet large-scale classification task, while the memory committed for
network parameters increases only marginally.
|
[
{
"created": "Wed, 7 Jun 2017 01:53:44 GMT",
"version": "v1"
}
] |
2017-06-08
|
[
[
"Guo",
"Yiwen",
""
],
[
"Yao",
"Anbang",
""
],
[
"Zhao",
"Hao",
""
],
[
"Chen",
"Yurong",
""
]
] |
Convolutional neural networks (CNNs) with deep architectures have substantially advanced the state-of-the-art in computer vision tasks. However, deep networks are typically resource-intensive and thus difficult to deploy on mobile devices. Recently, CNNs with binary weights have shown compelling efficiency to the community, whereas the accuracy of such models is usually unsatisfactory in practice. In this paper, we introduce network sketching as a novel technique for pursuing binary-weight CNNs, targeting more faithful inference and a better trade-off for practical applications. Our basic idea is to exploit binary structure directly in pre-trained filter banks and produce binary-weight models via tensor expansion. The whole process can be treated as a coarse-to-fine model approximation, akin to the pencil drawing steps of outlining and shading. To further speed up the generated models, namely the sketches, we also propose an associative implementation of binary tensor convolutions. Experimental results demonstrate that a proper sketch of AlexNet (or ResNet) outperforms the existing binary-weight models by large margins on the ImageNet large-scale classification task, while the memory committed for network parameters increases only marginally.
|
2210.08465
|
Hong Chen
|
Hong Chen, Rujun Han, Te-Lin Wu, Hideki Nakayama and Nanyun Peng
|
Character-Centric Story Visualization via Visual Planning and Token
Alignment
|
accepted by EMNLP2022
| null | null | null |
cs.CV cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Story visualization advances the traditional text-to-image generation by
enabling multiple image generation based on a complete story. This task
requires machines to 1) understand long text inputs and 2) produce a globally
consistent image sequence that illustrates the contents of the story. A key
challenge of consistent story visualization is to preserve characters that are
essential in stories. To tackle the challenge, we propose to adapt recent
work that augments Vector-Quantized Variational Autoencoders (VQ-VAE) with a
text-to-visual-token (transformer) architecture. Specifically, we modify the
text-to-visual-token module with a two-stage framework: 1) character token
planning model that predicts the visual tokens for characters only; 2) visual
token completion model that generates the remaining visual token sequence,
which is sent to VQ-VAE for finalizing image generations. To encourage
characters to appear in the images, we further train the two-stage framework
with a character-token alignment objective. Extensive experiments and
evaluations demonstrate that the proposed method excels at preserving
characters and can produce higher quality image sequences compared with the
strong baselines. Code can be found at https://github.com/sairin1202/VP-CSV
|
[
{
"created": "Sun, 16 Oct 2022 06:50:39 GMT",
"version": "v1"
},
{
"created": "Wed, 19 Oct 2022 02:16:34 GMT",
"version": "v2"
},
{
"created": "Thu, 20 Oct 2022 15:53:40 GMT",
"version": "v3"
},
{
"created": "Sat, 22 Oct 2022 07:07:05 GMT",
"version": "v4"
}
] |
2022-10-25
|
[
[
"Chen",
"Hong",
""
],
[
"Han",
"Rujun",
""
],
[
"Wu",
"Te-Lin",
""
],
[
"Nakayama",
"Hideki",
""
],
[
"Peng",
"Nanyun",
""
]
] |
Story visualization advances the traditional text-to-image generation by enabling multiple image generation based on a complete story. This task requires machines to 1) understand long text inputs and 2) produce a globally consistent image sequence that illustrates the contents of the story. A key challenge of consistent story visualization is to preserve characters that are essential in stories. To tackle the challenge, we propose to adapt recent work that augments Vector-Quantized Variational Autoencoders (VQ-VAE) with a text-to-visual-token (transformer) architecture. Specifically, we modify the text-to-visual-token module with a two-stage framework: 1) character token planning model that predicts the visual tokens for characters only; 2) visual token completion model that generates the remaining visual token sequence, which is sent to VQ-VAE for finalizing image generations. To encourage characters to appear in the images, we further train the two-stage framework with a character-token alignment objective. Extensive experiments and evaluations demonstrate that the proposed method excels at preserving characters and can produce higher quality image sequences compared with the strong baselines. Code can be found at https://github.com/sairin1202/VP-CSV
|
1908.04511
|
Ruslan Nikolaev
|
Ruslan Nikolaev
|
A Scalable, Portable, and Memory-Efficient Lock-Free FIFO Queue
| null |
33rd International Symposium on Distributed Computing (DISC 2019)
|
10.4230/LIPIcs.DISC.2019.28
| null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a new lock-free multiple-producer and multiple-consumer (MPMC)
FIFO queue design which is scalable and, unlike existing high-performance
queues, very memory efficient. Moreover, the design is ABA safe and does not
require any external memory allocators or safe memory reclamation techniques,
typically needed by other scalable designs. In fact, this queue itself can be
leveraged for object allocation and reclamation, as in data pools. We use FAA
(fetch-and-add), a specialized instruction that is more scalable than CAS
(compare-and-set), on the most contended hot spots of the algorithm. However, unlike
prior attempts with FAA, our queue is both lock-free and linearizable.
We propose a general approach, SCQ, for bounded queues. This approach can
easily be extended to support unbounded FIFO queues which can store an
arbitrary number of elements. SCQ is portable across virtually all existing
architectures and flexible enough for a wide variety of uses. We measure the
performance of our algorithm on the x86-64 and PowerPC architectures. Our
evaluation validates that our queue has exceptional memory efficiency compared
to other algorithms and its performance is often comparable to, or exceeding
that of state-of-the-art scalable algorithms.
|
[
{
"created": "Tue, 13 Aug 2019 06:34:10 GMT",
"version": "v1"
}
] |
2019-08-14
|
[
[
"Nikolaev",
"Ruslan",
""
]
] |
We present a new lock-free multiple-producer and multiple-consumer (MPMC) FIFO queue design which is scalable and, unlike existing high-performance queues, very memory efficient. Moreover, the design is ABA safe and does not require any external memory allocators or safe memory reclamation techniques, typically needed by other scalable designs. In fact, this queue itself can be leveraged for object allocation and reclamation, as in data pools. We use FAA (fetch-and-add), a specialized instruction that is more scalable than CAS (compare-and-set), on the most contended hot spots of the algorithm. However, unlike prior attempts with FAA, our queue is both lock-free and linearizable. We propose a general approach, SCQ, for bounded queues. This approach can easily be extended to support unbounded FIFO queues which can store an arbitrary number of elements. SCQ is portable across virtually all existing architectures and flexible enough for a wide variety of uses. We measure the performance of our algorithm on the x86-64 and PowerPC architectures. Our evaluation validates that our queue has exceptional memory efficiency compared to other algorithms and its performance is often comparable to, or exceeding that of state-of-the-art scalable algorithms.
|
0803.2495
|
Gabriel Istrate
|
Gabriel Istrate, Madhav V. Marathe and S.S.Ravi
|
Adversarial Scheduling Analysis of Game Theoretic Models of Norm
Diffusion
| null | null | null | null |
cs.GT cs.DM math.CO math.PR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In (Istrate, Marathe, Ravi SODA 2001) we advocated the investigation of
robustness of results in the theory of learning in games under adversarial
scheduling models. We provide evidence that such an analysis is feasible and
can lead to nontrivial results by investigating, in an adversarial scheduling
setting, Peyton Young's model of diffusion of norms. In particular, our main
result incorporates into Peyton Young's model.
|
[
{
"created": "Mon, 17 Mar 2008 17:42:28 GMT",
"version": "v1"
}
] |
2008-11-04
|
[
[
"Istrate",
"Gabriel",
""
],
[
"Marathe",
"Madhav V.",
""
],
[
"Ravi",
"S. S.",
""
]
] |
In (Istrate, Marathe, Ravi SODA 2001) we advocated the investigation of robustness of results in the theory of learning in games under adversarial scheduling models. We provide evidence that such an analysis is feasible and can lead to nontrivial results by investigating, in an adversarial scheduling setting, Peyton Young's model of diffusion of norms. In particular, our main result incorporates into Peyton Young's model.
|
1403.2808
|
Hafez Fouad Dr
|
Hafez Fouad
|
Web-based Database Management to support Telemedicine System
| null | null | null | null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The aim of the project is to transfer medical care services to the patient,
rather than to transport the patient to the medical service providers. This is
achieved by using web-based applications, including modern medical informatics
services, which are easier, faster and less expensive. The required system
efficiently implements suitable informatics and electronics solutions for
telemedicine care. We propose an approach to manage different multimedia
medical databases in the telemedicine system. In order to efficiently and
effectively manage, search, and display database information, we define an
information package for both the doctor and the patient as a concise data set
of their medical information from each visit. A methodology for accessing
various types of medical records is provided, and we design two web-based
interfaces with high-quality data display for many medical service purposes.
|
[
{
"created": "Wed, 12 Mar 2014 04:43:59 GMT",
"version": "v1"
}
] |
2014-03-13
|
[
[
"Fouad",
"Hafez",
""
]
] |
The aim of the project is to transfer medical care services to the patient, rather than to transport the patient to the medical service providers. This is achieved by using web-based applications, including modern medical informatics services, which are easier, faster and less expensive. The required system efficiently implements suitable informatics and electronics solutions for telemedicine care. We propose an approach to manage different multimedia medical databases in the telemedicine system. In order to efficiently and effectively manage, search, and display database information, we define an information package for both the doctor and the patient as a concise data set of their medical information from each visit. A methodology for accessing various types of medical records is provided, and we design two web-based interfaces with high-quality data display for many medical service purposes.
|
2103.13818
|
Ciriaco Andrea D'Angelo
|
Giovanni Abramo, Ciriaco Andrea D'Angelo, Giovanni Felici
|
Informed peer review for publication assessments: Are improved impact
measures worth the hassle?
| null |
Quantitative Science Studies, 1(3), 1321-1333 (2020)
|
10.1162/qss_a_00051
| null |
cs.DL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work we ask whether and to what extent applying a predictor of
publications' impact better than early citations has an effect on the
assessment of research performance of individual scientists. Specifically, we
measure the total impact of Italian professors in the sciences and economics in
a period of time, valuing their publications first by early citations and then
by a weighted combination of early citations and impact factor of the hosting
journal. As expected, scores and ranks by the two indicators show a very strong
correlation, but there occur also significant shifts in many fields, mainly in
Economics and statistics, and Mathematics and computer science. The higher the
share of uncited professors in a field and the shorter the citation time
window, the more recommendable the recourse to the above combination.
|
[
{
"created": "Thu, 25 Mar 2021 13:13:38 GMT",
"version": "v1"
}
] |
2021-03-26
|
[
[
"Abramo",
"Giovanni",
""
],
[
"D'Angelo",
"Ciriaco Andrea",
""
],
[
"Felici",
"Giovanni",
""
]
] |
In this work we ask whether and to what extent applying a predictor of publications' impact better than early citations has an effect on the assessment of research performance of individual scientists. Specifically, we measure the total impact of Italian professors in the sciences and economics in a period of time, valuing their publications first by early citations and then by a weighted combination of early citations and impact factor of the hosting journal. As expected, scores and ranks by the two indicators show a very strong correlation, but there occur also significant shifts in many fields, mainly in Economics and statistics, and Mathematics and computer science. The higher the share of uncited professors in a field and the shorter the citation time window, the more recommendable the recourse to the above combination.
|
1812.03286
|
Marco Baldi
|
Paolo Santini, Marco Baldi, Franco Chiaraluce
|
Cryptanalysis of a One-Time Code-Based Digital Signature Scheme
|
5 pages, 1 figure
| null | null | null |
cs.CR cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider a one-time digital signature scheme recently proposed by
Persichetti and show that a successful key recovery attack can be mounted with
limited complexity. The attack we propose exploits a single signature
intercepted by the attacker, and relies on a statistical analysis performed
over such a signature, followed by information set decoding. We assess the
attack complexity and show that a full recovery of the secret key can be
performed with a work factor that is far below the claimed security level. The
efficiency of the attack is motivated by the sparsity of the signature, which
leads to a significant information leakage about the secret key.
|
[
{
"created": "Sat, 8 Dec 2018 08:34:31 GMT",
"version": "v1"
},
{
"created": "Wed, 23 Jan 2019 22:10:32 GMT",
"version": "v2"
}
] |
2019-01-25
|
[
[
"Santini",
"Paolo",
""
],
[
"Baldi",
"Marco",
""
],
[
"Chiaraluce",
"Franco",
""
]
] |
We consider a one-time digital signature scheme recently proposed by Persichetti and show that a successful key recovery attack can be mounted with limited complexity. The attack we propose exploits a single signature intercepted by the attacker, and relies on a statistical analysis performed over such a signature, followed by information set decoding. We assess the attack complexity and show that a full recovery of the secret key can be performed with a work factor that is far below the claimed security level. The efficiency of the attack is motivated by the sparsity of the signature, which leads to a significant information leakage about the secret key.
|
2406.03726
|
Xihan Qin
|
Xihan Qin and Cencheng Shen
|
Efficient Graph Encoder Embedding for Large Sparse Graphs in Python
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Graphs are a ubiquitous representation of data in various research fields, and
graph embedding is a prevalent machine learning technique for capturing key
features and generating fixed-sized attributes. However, most state-of-the-art
graph embedding methods are computationally and spatially expensive. Recently,
the Graph Encoder Embedding (GEE) has been shown as the fastest graph embedding
technique and is suitable for a variety of network data applications. As
real-world data often involves large and sparse graphs, the huge sparsity
usually results in redundant computations and storage. To address this issue,
we propose an improved version of GEE, sparse GEE, which optimizes the
calculation and storage of zero entries in sparse matrices to enhance the
running time further. Our experiments demonstrate that the sparse version
achieves significant speedup compared to the original GEE with Python
implementation for large sparse graphs, and sparse GEE is capable of processing
millions of edges within minutes on a standard laptop.
|
[
{
"created": "Thu, 6 Jun 2024 03:49:34 GMT",
"version": "v1"
}
] |
2024-06-07
|
[
[
"Qin",
"Xihan",
""
],
[
"Shen",
"Cencheng",
""
]
] |
Graphs are a ubiquitous representation of data in various research fields, and graph embedding is a prevalent machine learning technique for capturing key features and generating fixed-sized attributes. However, most state-of-the-art graph embedding methods are computationally and spatially expensive. Recently, the Graph Encoder Embedding (GEE) has been shown as the fastest graph embedding technique and is suitable for a variety of network data applications. As real-world data often involves large and sparse graphs, the huge sparsity usually results in redundant computations and storage. To address this issue, we propose an improved version of GEE, sparse GEE, which optimizes the calculation and storage of zero entries in sparse matrices to enhance the running time further. Our experiments demonstrate that the sparse version achieves significant speedup compared to the original GEE with Python implementation for large sparse graphs, and sparse GEE is capable of processing millions of edges within minutes on a standard laptop.
|
2111.10281
|
Canze Zhu
|
Canze Zhu and Qunying Liao
|
A new class of MDS symbol-pair codes
|
There are some mistakes in the Manuscript
| null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by-sa/4.0/
|
The symbol-pair code is a new coding framework proposed to guard against
pair-errors in symbol-pair read channels. Especially, a symbol-pair code with
the parameters achieving the Singleton-type bound is called an MDS symbol-pair
code. In this paper, inspired by the classical construction for Reed-Solomon
codes, for any $3\le k<m\le q-2$ and
$m_1=\Big\lfloor{\tiny\frac{m}{\lfloor\frac{k-1}{2}\rfloor}}\Big\rfloor$, we
construct a class of $q$-ary MDS symbol-pair codes with dimension $k$ and
length $n$ $(n=m+m_1, m+m_1-1)$, where $q$ is a prime power. Furthermore, for
$k\in\{3,4\}$, the symbol-pair weight distributions for these codes are
determined by enumerating the number of polynomials with given roots.
|
[
{
"created": "Fri, 19 Nov 2021 15:43:27 GMT",
"version": "v1"
},
{
"created": "Fri, 27 May 2022 16:59:47 GMT",
"version": "v2"
},
{
"created": "Thu, 25 Aug 2022 01:41:57 GMT",
"version": "v3"
}
] |
2022-08-26
|
[
[
"Zhu",
"Canze",
""
],
[
"Liao",
"Qunying",
""
]
] |
The symbol-pair code is a new coding framework proposed to guard against pair-errors in symbol-pair read channels. Especially, a symbol-pair code with the parameters achieving the Singleton-type bound is called an MDS symbol-pair code. In this paper, inspired by the classical construction for Reed-Solomon codes, for any $3\le k<m\le q-2$ and $m_1=\Big\lfloor{\tiny\frac{m}{\lfloor\frac{k-1}{2}\rfloor}}\Big\rfloor$, we construct a class of $q$-ary MDS symbol-pair codes with dimension $k$ and length $n$ $(n=m+m_1, m+m_1-1)$, where $q$ is a prime power. Furthermore, for $k\in\{3,4\}$, the symbol-pair weight distributions for these codes are determined by enumerating the number of polynomials with given roots.
|
1804.08111
|
Andreas Galanis
|
Antonio Blanca, Andreas Galanis, Leslie Ann Goldberg, Daniel
Stefankovic, Eric Vigoda, Kuan Yang
|
Sampling in Uniqueness from the Potts and Random-Cluster Models on
Random Regular Graphs
| null | null | null | null |
cs.DM cs.DS math.PR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider the problem of sampling from the Potts model on random regular
graphs. It is conjectured that sampling is possible when the temperature of the
model is in the uniqueness regime of the regular tree, but positive algorithmic
results have been for the most part elusive. In this paper, for all integers
$q\geq 3$ and $\Delta\geq 3$, we develop algorithms that produce samples within
error $o(1)$ from the $q$-state Potts model on random $\Delta$-regular graphs,
whenever the temperature is in uniqueness, for both the ferromagnetic and
antiferromagnetic cases.
The algorithm for the antiferromagnetic Potts model is based on iteratively
adding the edges of the graph and resampling a bichromatic class that contains
the endpoints of the newly added edge. Key to the algorithm is how to perform
the resampling step efficiently since bichromatic classes may induce
linear-sized components. To this end, we exploit the tree uniqueness to show
that the average growth of bichromatic components is typically small, which
allows us to use correlation decay algorithms for the resampling step. While
the precise uniqueness threshold on the tree is not known for general values of
$q$ and $\Delta$ in the antiferromagnetic case, our algorithm works throughout
uniqueness regardless of its value.
In the case of the ferromagnetic Potts model, we simplify the algorithm
significantly by utilising the random-cluster representation of the model. In
particular, we show that a percolation-type algorithm succeeds in sampling from
the random-cluster model with parameters $p,q$ on random $\Delta$-regular
graphs for all values of $q\geq 1$ and $p<p_c(q,\Delta)$, where $p_c(q,\Delta)$
corresponds to a uniqueness threshold for the model on the $\Delta$-regular
tree. When restricted to integer values of $q$, this yields a simplified
algorithm for the ferromagnetic Potts model on random $\Delta$-regular graphs.
|
[
{
"created": "Sun, 22 Apr 2018 13:11:24 GMT",
"version": "v1"
},
{
"created": "Sun, 1 Dec 2019 12:21:58 GMT",
"version": "v2"
}
] |
2019-12-03
|
[
[
"Blanca",
"Antonio",
""
],
[
"Galanis",
"Andreas",
""
],
[
"Goldberg",
"Leslie Ann",
""
],
[
"Stefankovic",
"Daniel",
""
],
[
"Vigoda",
"Eric",
""
],
[
"Yang",
"Kuan",
""
]
] |
We consider the problem of sampling from the Potts model on random regular graphs. It is conjectured that sampling is possible when the temperature of the model is in the uniqueness regime of the regular tree, but positive algorithmic results have been for the most part elusive. In this paper, for all integers $q\geq 3$ and $\Delta\geq 3$, we develop algorithms that produce samples within error $o(1)$ from the $q$-state Potts model on random $\Delta$-regular graphs, whenever the temperature is in uniqueness, for both the ferromagnetic and antiferromagnetic cases. The algorithm for the antiferromagnetic Potts model is based on iteratively adding the edges of the graph and resampling a bichromatic class that contains the endpoints of the newly added edge. Key to the algorithm is how to perform the resampling step efficiently since bichromatic classes may induce linear-sized components. To this end, we exploit the tree uniqueness to show that the average growth of bichromatic components is typically small, which allows us to use correlation decay algorithms for the resampling step. While the precise uniqueness threshold on the tree is not known for general values of $q$ and $\Delta$ in the antiferromagnetic case, our algorithm works throughout uniqueness regardless of its value. In the case of the ferromagnetic Potts model, we simplify the algorithm significantly by utilising the random-cluster representation of the model. In particular, we show that a percolation-type algorithm succeeds in sampling from the random-cluster model with parameters $p,q$ on random $\Delta$-regular graphs for all values of $q\geq 1$ and $p<p_c(q,\Delta)$, where $p_c(q,\Delta)$ corresponds to a uniqueness threshold for the model on the $\Delta$-regular tree. When restricted to integer values of $q$, this yields a simplified algorithm for the ferromagnetic Potts model on random $\Delta$-regular graphs.
|
2405.06460
|
Chris Samarinas
|
Chris Samarinas, Hamed Zamani
|
ProCIS: A Benchmark for Proactive Retrieval in Conversations
| null | null |
10.1145/3626772.3657869
| null |
cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
The field of conversational information seeking, which is rapidly gaining
interest in both academia and industry, is changing how we interact with search
engines through natural language interactions. Existing datasets and methods
are mostly evaluating reactive conversational information seeking systems that
solely provide a response to every query from the user. We identify a gap in
building and evaluating proactive conversational information seeking systems
that can monitor a multi-party human conversation and proactively engage in the
conversation at an opportune moment by retrieving useful resources and
suggestions. In this paper, we introduce a large-scale dataset for proactive
document retrieval that consists of over 2.8 million conversations. We conduct
crowdsourcing experiments to obtain high-quality and relatively complete
relevance judgments through depth-k pooling. We also collect annotations
related to the parts of the conversation that are related to each document,
enabling us to evaluate proactive retrieval systems. We introduce normalized
proactive discounted cumulative gain (npDCG) for evaluating these systems, and
further provide benchmark results for a wide range of models, including a novel
model we developed for this task. We believe that the developed dataset, called
ProCIS, paves the way toward developing proactive conversational information
seeking systems.
|
[
{
"created": "Fri, 10 May 2024 13:11:07 GMT",
"version": "v1"
}
] |
2024-05-13
|
[
[
"Samarinas",
"Chris",
""
],
[
"Zamani",
"Hamed",
""
]
] |
The field of conversational information seeking, which is rapidly gaining interest in both academia and industry, is changing how we interact with search engines through natural language interactions. Existing datasets and methods are mostly evaluating reactive conversational information seeking systems that solely provide a response to every query from the user. We identify a gap in building and evaluating proactive conversational information seeking systems that can monitor a multi-party human conversation and proactively engage in the conversation at an opportune moment by retrieving useful resources and suggestions. In this paper, we introduce a large-scale dataset for proactive document retrieval that consists of over 2.8 million conversations. We conduct crowdsourcing experiments to obtain high-quality and relatively complete relevance judgments through depth-k pooling. We also collect annotations related to the parts of the conversation that are related to each document, enabling us to evaluate proactive retrieval systems. We introduce normalized proactive discounted cumulative gain (npDCG) for evaluating these systems, and further provide benchmark results for a wide range of models, including a novel model we developed for this task. We believe that the developed dataset, called ProCIS, paves the way toward developing proactive conversational information seeking systems.
|
2310.18600
|
Debtanu Datta
|
Debtanu Datta, Shubham Soni, Rajdeep Mukherjee, Saptarshi Ghosh
|
MILDSum: A Novel Benchmark Dataset for Multilingual Summarization of
Indian Legal Case Judgments
|
Accepted at EMNLP 2023 (Main Conference)
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Automatic summarization of legal case judgments is a practically important
problem that has attracted substantial research efforts in many countries. In
the context of the Indian judiciary, there is an additional complexity --
Indian legal case judgments are mostly written in complex English, but a
significant portion of India's population lacks command of the English
language. Hence, it is crucial to summarize the legal documents in Indian
languages to ensure equitable access to justice. While prior research primarily
focuses on summarizing legal case judgments in their source languages, this
study presents a pioneering effort toward cross-lingual summarization of
English legal documents into Hindi, the most frequently spoken Indian language.
We construct the first high-quality legal corpus comprising 3,122 case
judgments from prominent Indian courts in English, along with their summaries
in both English and Hindi, drafted by legal practitioners. We benchmark the
performance of several diverse summarization approaches on our corpus and
demonstrate the need for further research in cross-lingual summarization in the
legal domain.
|
[
{
"created": "Sat, 28 Oct 2023 05:51:57 GMT",
"version": "v1"
}
] |
2023-10-31
|
[
[
"Datta",
"Debtanu",
""
],
[
"Soni",
"Shubham",
""
],
[
"Mukherjee",
"Rajdeep",
""
],
[
"Ghosh",
"Saptarshi",
""
]
] |
Automatic summarization of legal case judgments is a practically important problem that has attracted substantial research efforts in many countries. In the context of the Indian judiciary, there is an additional complexity -- Indian legal case judgments are mostly written in complex English, but a significant portion of India's population lacks command of the English language. Hence, it is crucial to summarize the legal documents in Indian languages to ensure equitable access to justice. While prior research primarily focuses on summarizing legal case judgments in their source languages, this study presents a pioneering effort toward cross-lingual summarization of English legal documents into Hindi, the most frequently spoken Indian language. We construct the first high-quality legal corpus comprising 3,122 case judgments from prominent Indian courts in English, along with their summaries in both English and Hindi, drafted by legal practitioners. We benchmark the performance of several diverse summarization approaches on our corpus and demonstrate the need for further research in cross-lingual summarization in the legal domain.
|
2403.07432
|
Hanyu Zhou
|
Hanyu Zhou, Yi Chang, Zhiwei Shi, Luxin Yan
|
Bring Event into RGB and LiDAR: Hierarchical Visual-Motion Fusion for
Scene Flow
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Single RGB or LiDAR is the mainstream sensor for the challenging scene flow,
which relies heavily on visual features to match motion features. Compared with
single modality, existing methods adopt a fusion strategy to directly fuse the
cross-modal complementary knowledge in motion space. However, these direct
fusion methods may suffer from the modality gap due to the intrinsically
heterogeneous visual nature of RGB and LiDAR, thus deteriorating motion features.
We discover that the event modality shares a homogeneous nature with RGB and
LiDAR in both visual and motion spaces. In this work, we use the event as a bridge between
RGB and LiDAR, and propose a novel hierarchical visual-motion fusion framework
for scene flow, which explores a homogeneous space to fuse the cross-modal
complementary knowledge for physical interpretation. In visual fusion, we
discover that event has a complementarity (relative vs. absolute) in luminance
space with RGB for high dynamic imaging, and has a complementarity (local
boundary vs. global shape) in scene structure space with LiDAR for structure
integrity. In motion fusion, we figure out that RGB, event and LiDAR are
complementary (spatial-dense, temporal-dense vs. spatiotemporal-sparse) to
each other in correlation space, which motivates us to fuse their motion
correlations for motion continuity. The proposed hierarchical fusion can
explicitly fuse the multimodal knowledge to progressively improve scene flow
from visual space to motion space. Extensive experiments have been performed to
verify the superiority of the proposed method.
|
[
{
"created": "Tue, 12 Mar 2024 09:15:19 GMT",
"version": "v1"
}
] |
2024-03-13
|
[
[
"Zhou",
"Hanyu",
""
],
[
"Chang",
"Yi",
""
],
[
"Shi",
"Zhiwei",
""
],
[
"Yan",
"Luxin",
""
]
] |
Single RGB or LiDAR is the mainstream sensor for the challenging scene flow, which relies heavily on visual features to match motion features. Compared with single modality, existing methods adopt a fusion strategy to directly fuse the cross-modal complementary knowledge in motion space. However, these direct fusion methods may suffer from the modality gap due to the intrinsic visual heterogeneity between RGB and LiDAR, thus deteriorating motion features. We discover that event has a homogeneous nature with RGB and LiDAR in both visual and motion spaces. In this work, we bring the event as a bridge between RGB and LiDAR, and propose a novel hierarchical visual-motion fusion framework for scene flow, which explores a homogeneous space to fuse the cross-modal complementary knowledge for physical interpretation. In visual fusion, we discover that event has a complementarity (relative vs. absolute) in luminance space with RGB for high dynamic imaging, and has a complementarity (local boundary vs. global shape) in scene structure space with LiDAR for structure integrity. In motion fusion, we figure out that RGB, event and LiDAR are complementary (spatial-dense, temporal-dense vs. spatiotemporal-sparse) to each other in correlation space, which motivates us to fuse their motion correlations for motion continuity. The proposed hierarchical fusion can explicitly fuse the multimodal knowledge to progressively improve scene flow from visual space to motion space. Extensive experiments have been performed to verify the superiority of the proposed method.
|
1902.07111
|
Xiaoxia Wu
|
Xiaoxia Wu, Simon S. Du and Rachel Ward
|
Global Convergence of Adaptive Gradient Methods for An
Over-parameterized Neural Network
| null | null | null | null |
cs.LG math.OC stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Adaptive gradient methods like AdaGrad are widely used in optimizing neural
networks. Yet, existing convergence guarantees for adaptive gradient methods
require either convexity or smoothness, and, in the smooth setting, only
guarantee convergence to a stationary point. We propose an adaptive gradient
method and show that for two-layer over-parameterized neural networks -- if the
width is sufficiently large (polynomially) -- then the proposed method
converges \emph{to the global minimum} in polynomial time, and convergence is
robust, \emph{without the need to fine-tune hyper-parameters such as the
step-size schedule and with the level of over-parametrization independent of
the training error}. Our analysis indicates in particular that
over-parametrization is crucial for harnessing the full potential of
adaptive gradient methods in the setting of neural networks.
|
[
{
"created": "Tue, 19 Feb 2019 16:08:55 GMT",
"version": "v1"
},
{
"created": "Sat, 19 Oct 2019 19:07:24 GMT",
"version": "v2"
}
] |
2019-10-22
|
[
[
"Wu",
"Xiaoxia",
""
],
[
"Du",
"Simon S.",
""
],
[
"Ward",
"Rachel",
""
]
] |
Adaptive gradient methods like AdaGrad are widely used in optimizing neural networks. Yet, existing convergence guarantees for adaptive gradient methods require either convexity or smoothness, and, in the smooth setting, only guarantee convergence to a stationary point. We propose an adaptive gradient method and show that for two-layer over-parameterized neural networks -- if the width is sufficiently large (polynomially) -- then the proposed method converges \emph{to the global minimum} in polynomial time, and convergence is robust, \emph{without the need to fine-tune hyper-parameters such as the step-size schedule and with the level of over-parametrization independent of the training error}. Our analysis indicates in particular that over-parametrization is crucial for harnessing the full potential of adaptive gradient methods in the setting of neural networks.
|
2002.04778
|
Binhai Zhu
|
Manuel Lafond and Binhai Zhu and Peng Zou
|
Genomic Problems Involving Copy Number Profiles: Complexity and
Algorithms
|
16 pages, 3 figures
| null | null | null |
cs.DS cs.CC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, due to genomic sequence analysis in several types of cancer,
genomic data based on {\em copy number profiles} ({\em CNP} for short) are
becoming increasingly popular.
non-negative integer representing the number of copies of a specific gene or
segment of interest.
In this paper, we present two streams of results. The first is the negative
results on two open problems regarding the computational complexity of the
Minimum Copy Number Generation (MCNG) problem posed by Qingge et al. in 2018.
It was shown by Qingge et al. that the problem is NP-hard if the duplications
are tandem and they left the open question of whether the problem remains
NP-hard if arbitrary duplications are used. We answer this question
affirmatively in this paper; in fact, we prove that it is NP-hard to even
obtain a constant factor approximation. We also prove that the parameterized
version is W[1]-hard, answering another open question by Qingge et al.
The other result is positive and is based on a new (and more general) problem
regarding CNP's. The \emph{Copy Number Profile Conforming (CNPC)} problem is
formally defined as follows: given two CNP's $C_1$ and $C_2$, compute two
strings $S_1$ and $S_2$ with $cnp(S_1)=C_1$ and $cnp(S_2)=C_2$ such that the
distance between $S_1$ and $S_2$, $d(S_1,S_2)$, is minimized. Here,
$d(S_1,S_2)$ is a very general term, which means it could be any genome
rearrangement distance (like reversal, transposition, and tandem duplication,
etc). We make the first step by showing that if $d(S_1,S_2)$ is measured by the
breakpoint distance then the problem is polynomially solvable.
|
[
{
"created": "Wed, 12 Feb 2020 03:31:42 GMT",
"version": "v1"
}
] |
2020-02-13
|
[
[
"Lafond",
"Manuel",
""
],
[
"Zhu",
"Binhai",
""
],
[
"Zou",
"Peng",
""
]
] |
Recently, due to genomic sequence analysis in several types of cancer, genomic data based on {\em copy number profiles} ({\em CNP} for short) are becoming increasingly popular. A CNP is a vector where each component is a non-negative integer representing the number of copies of a specific gene or segment of interest. In this paper, we present two streams of results. The first is the negative results on two open problems regarding the computational complexity of the Minimum Copy Number Generation (MCNG) problem posed by Qingge et al. in 2018. It was shown by Qingge et al. that the problem is NP-hard if the duplications are tandem and they left the open question of whether the problem remains NP-hard if arbitrary duplications are used. We answer this question affirmatively in this paper; in fact, we prove that it is NP-hard to even obtain a constant factor approximation. We also prove that the parameterized version is W[1]-hard, answering another open question by Qingge et al. The other result is positive and is based on a new (and more general) problem regarding CNP's. The \emph{Copy Number Profile Conforming (CNPC)} problem is formally defined as follows: given two CNP's $C_1$ and $C_2$, compute two strings $S_1$ and $S_2$ with $cnp(S_1)=C_1$ and $cnp(S_2)=C_2$ such that the distance between $S_1$ and $S_2$, $d(S_1,S_2)$, is minimized. Here, $d(S_1,S_2)$ is a very general term, which means it could be any genome rearrangement distance (like reversal, transposition, and tandem duplication, etc). We make the first step by showing that if $d(S_1,S_2)$ is measured by the breakpoint distance then the problem is polynomially solvable.
|
2201.07927
|
Jiawei Qin
|
Jiawei Qin, Takuru Shimoyama, Yusuke Sugano
|
Learning-by-Novel-View-Synthesis for Full-Face Appearance-Based 3D Gaze
Estimation
|
Camera-ready version for CVPR 2022 Workshop (GAZE 2022)
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Despite recent advances in appearance-based gaze estimation techniques, the
need for training data that covers the target head pose and gaze distribution
remains a crucial challenge for practical deployment. This work examines a
novel approach for synthesizing gaze estimation training data based on
monocular 3D face reconstruction. Unlike prior works using multi-view
reconstruction, photo-realistic CG models, or generative neural networks, our
approach can manipulate and extend the head pose range of existing training
data without any additional requirements. We introduce a projective matching
procedure to align the reconstructed 3D facial mesh with the camera coordinate
system and synthesize face images with accurate gaze labels. We also propose a
mask-guided gaze estimation model and data augmentation strategies to further
improve the estimation accuracy by taking advantage of synthetic training data.
Experiments using multiple public datasets show that our approach significantly
improves the estimation performance on challenging cross-dataset settings with
non-overlapping gaze distributions.
|
[
{
"created": "Thu, 20 Jan 2022 00:29:45 GMT",
"version": "v1"
},
{
"created": "Sun, 23 Jan 2022 06:54:22 GMT",
"version": "v2"
},
{
"created": "Wed, 27 Apr 2022 08:11:57 GMT",
"version": "v3"
}
] |
2022-04-28
|
[
[
"Qin",
"Jiawei",
""
],
[
"Shimoyama",
"Takuru",
""
],
[
"Sugano",
"Yusuke",
""
]
] |
Despite recent advances in appearance-based gaze estimation techniques, the need for training data that covers the target head pose and gaze distribution remains a crucial challenge for practical deployment. This work examines a novel approach for synthesizing gaze estimation training data based on monocular 3D face reconstruction. Unlike prior works using multi-view reconstruction, photo-realistic CG models, or generative neural networks, our approach can manipulate and extend the head pose range of existing training data without any additional requirements. We introduce a projective matching procedure to align the reconstructed 3D facial mesh with the camera coordinate system and synthesize face images with accurate gaze labels. We also propose a mask-guided gaze estimation model and data augmentation strategies to further improve the estimation accuracy by taking advantage of synthetic training data. Experiments using multiple public datasets show that our approach significantly improves the estimation performance on challenging cross-dataset settings with non-overlapping gaze distributions.
|
1104.4518
|
Aydin Buluc
|
Aydin Buluc, Kamesh Madduri
|
Parallel Breadth-First Search on Distributed Memory Systems
| null |
Proceedings of The International Conference for High Performance
Computing, Networking, Storage, and Analysis (SC 2011)
| null | null |
cs.DC cs.MS cs.PF
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Data-intensive, graph-based computations are pervasive in several scientific
applications, and are known to be quite challenging to implement on
distributed memory systems. In this work, we explore the design space of
parallel algorithms for Breadth-First Search (BFS), a key subroutine in several
graph algorithms. We present two highly-tuned parallel approaches for BFS on
large parallel systems: a level-synchronous strategy that relies on a simple
vertex-based partitioning of the graph, and a two-dimensional sparse
matrix-partitioning-based approach that mitigates parallel communication
overhead. For both approaches, we also present hybrid versions with intra-node
multithreading. Our novel hybrid two-dimensional algorithm reduces
communication times by up to a factor of 3.5, relative to a common vertex based
approach. Our experimental study identifies execution regimes in which these
approaches will be competitive, and we demonstrate extremely high performance
on leading distributed-memory parallel systems. For instance, for a 40,000-core
parallel execution on Hopper, an AMD Magny-Cours based system, we achieve a BFS
performance rate of 17.8 billion edge visits per second on an undirected graph
of 4.3 billion vertices and 68.7 billion edges with skewed degree distribution.
|
[
{
"created": "Fri, 22 Apr 2011 23:42:40 GMT",
"version": "v1"
},
{
"created": "Fri, 14 Oct 2011 03:36:03 GMT",
"version": "v2"
}
] |
2011-10-17
|
[
[
"Buluc",
"Aydin",
""
],
[
"Madduri",
"Kamesh",
""
]
] |
Data-intensive, graph-based computations are pervasive in several scientific applications, and are known to be quite challenging to implement on distributed memory systems. In this work, we explore the design space of parallel algorithms for Breadth-First Search (BFS), a key subroutine in several graph algorithms. We present two highly-tuned parallel approaches for BFS on large parallel systems: a level-synchronous strategy that relies on a simple vertex-based partitioning of the graph, and a two-dimensional sparse matrix-partitioning-based approach that mitigates parallel communication overhead. For both approaches, we also present hybrid versions with intra-node multithreading. Our novel hybrid two-dimensional algorithm reduces communication times by up to a factor of 3.5, relative to a common vertex based approach. Our experimental study identifies execution regimes in which these approaches will be competitive, and we demonstrate extremely high performance on leading distributed-memory parallel systems. For instance, for a 40,000-core parallel execution on Hopper, an AMD Magny-Cours based system, we achieve a BFS performance rate of 17.8 billion edge visits per second on an undirected graph of 4.3 billion vertices and 68.7 billion edges with skewed degree distribution.
|
2309.17361
|
Edouard Yvinec
|
Edouard Yvinec, Arnaud Dapogny, Kevin Bailly
|
Network Memory Footprint Compression Through Jointly Learnable Codebooks
and Mappings
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The massive interest in deep neural networks (DNNs) for both computer vision
and natural language processing has been sparked by the growth in computational
power. However, this led to an increase in the memory footprint, to a point
where it can be challenging to simply load a model on commodity devices such as
mobile phones. To address this limitation, quantization is a favored solution
as it maps high precision tensors to a low precision, memory efficient format.
In terms of memory footprint reduction, its most effective variants are based
on codebooks. These methods, however, suffer from two limitations. First, they
either define a single codebook for each tensor, or use a memory-expensive
mapping to multiple codebooks. Second, gradient descent optimization of the
mapping favors jumps toward extreme values, hence not defining a proximal
search. In this work, we propose to address these two limitations. First, we
initially group similarly distributed neurons and leverage the re-ordered
structure to either apply different scale factors to the different groups, or
map weights that fall in these groups to several codebooks, without any mapping
overhead. Second, stemming from this initialization, we propose a joint
learning of the codebook and weight mappings that bears similarities with
recent gradient-based post-training quantization techniques. Third, drawing
inspiration from straight-through estimation techniques, we introduce a novel
gradient update definition to enable a proximal search of the codebooks and
their mappings. The proposed jointly learnable codebooks and mappings (JLCM)
method allows a very efficient approximation of any DNN: as such, a Llama 7B
can be compressed down to 2GB and loaded on 5-year-old smartphones.
|
[
{
"created": "Fri, 29 Sep 2023 16:04:55 GMT",
"version": "v1"
}
] |
2023-10-02
|
[
[
"Yvinec",
"Edouard",
""
],
[
"Dapogny",
"Arnaud",
""
],
[
"Bailly",
"Kevin",
""
]
] |
The massive interest in deep neural networks (DNNs) for both computer vision and natural language processing has been sparked by the growth in computational power. However, this led to an increase in the memory footprint, to a point where it can be challenging to simply load a model on commodity devices such as mobile phones. To address this limitation, quantization is a favored solution as it maps high precision tensors to a low precision, memory efficient format. In terms of memory footprint reduction, its most effective variants are based on codebooks. These methods, however, suffer from two limitations. First, they either define a single codebook for each tensor, or use a memory-expensive mapping to multiple codebooks. Second, gradient descent optimization of the mapping favors jumps toward extreme values, hence not defining a proximal search. In this work, we propose to address these two limitations. First, we initially group similarly distributed neurons and leverage the re-ordered structure to either apply different scale factors to the different groups, or map weights that fall in these groups to several codebooks, without any mapping overhead. Second, stemming from this initialization, we propose a joint learning of the codebook and weight mappings that bears similarities with recent gradient-based post-training quantization techniques. Third, drawing inspiration from straight-through estimation techniques, we introduce a novel gradient update definition to enable a proximal search of the codebooks and their mappings. The proposed jointly learnable codebooks and mappings (JLCM) method allows a very efficient approximation of any DNN: as such, a Llama 7B can be compressed down to 2GB and loaded on 5-year-old smartphones.
|
2008.01188
|
Quentin Cohen-Solal
|
Quentin Cohen-Solal
|
Learning to Play Two-Player Perfect-Information Games without Knowledge
| null | null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, several techniques for learning game state evaluation
functions by reinforcement are proposed. The first is a generalization of tree
bootstrapping (tree learning): it is adapted to the context of reinforcement
learning without knowledge based on non-linear functions. With this technique,
no information is lost during the reinforcement learning process. The second is
a modification of minimax with unbounded depth extending the best sequences of
actions to the terminal states. This modified search is intended to be used
during the learning process. The third is to replace the classic gain of a game
(+1 / -1) with a reinforcement heuristic. We study particular reinforcement
heuristics such as: quick wins and slow defeats; scoring; mobility or
presence. The fourth is another variant of unbounded minimax, which plays the
safest action instead of playing the best action. This modified search is
intended to be used after the learning process. The fifth is a new action
selection distribution. The conducted experiments suggest that these techniques
improve the level of play. Finally, we apply these different techniques to
design program-players to the game of Hex (size 11 and 13) surpassing the level
of Mohex 3HNN with reinforcement learning from self-play without knowledge.
|
[
{
"created": "Mon, 3 Aug 2020 21:01:22 GMT",
"version": "v1"
},
{
"created": "Mon, 21 Dec 2020 17:50:39 GMT",
"version": "v2"
},
{
"created": "Tue, 12 Oct 2021 17:37:35 GMT",
"version": "v3"
}
] |
2021-10-13
|
[
[
"Cohen-Solal",
"Quentin",
""
]
] |
In this paper, several techniques for learning game state evaluation functions by reinforcement are proposed. The first is a generalization of tree bootstrapping (tree learning): it is adapted to the context of reinforcement learning without knowledge based on non-linear functions. With this technique, no information is lost during the reinforcement learning process. The second is a modification of minimax with unbounded depth extending the best sequences of actions to the terminal states. This modified search is intended to be used during the learning process. The third is to replace the classic gain of a game (+1 / -1) with a reinforcement heuristic. We study particular reinforcement heuristics such as: quick wins and slow defeats; scoring; mobility or presence. The fourth is another variant of unbounded minimax, which plays the safest action instead of playing the best action. This modified search is intended to be used after the learning process. The fifth is a new action selection distribution. The conducted experiments suggest that these techniques improve the level of play. Finally, we apply these different techniques to design program-players to the game of Hex (size 11 and 13) surpassing the level of Mohex 3HNN with reinforcement learning from self-play without knowledge.
|
2005.12142
|
Kuan-Yu Chen
|
Chia-Chih Kuo, Shang-Bao Luo, Kuan-Yu Chen
|
An Audio-enriched BERT-based Framework for Spoken Multiple-choice
Question Answering
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In a spoken multiple-choice question answering (SMCQA) task, given a passage,
a question, and multiple choices all in the form of speech, the machine needs
to pick the correct choice to answer the question. While the audio could
contain useful cues for SMCQA, usually only the auto-transcribed text is
utilized in system development. Thanks to the large-scale pre-trained language
representation models, such as the bidirectional encoder representations from
transformers (BERT), systems with only auto-transcribed text can still achieve
a certain level of performance. However, previous studies have evidenced that
acoustic-level statistics can offset text inaccuracies caused by the automatic
speech recognition systems or representation inadequacy lurking in word
embedding generators, thereby making the SMCQA system robust. Along the line of
research, this study concentrates on designing a BERT-based SMCQA framework,
which not only inherits the advantages of contextualized language
representations learned by BERT, but integrates the complementary
acoustic-level information distilled from audio with the text-level
information. Consequently, an audio-enriched BERT-based SMCQA framework is
proposed. A series of experiments demonstrates remarkable improvements in
accuracy over selected baselines and SOTA systems on a published Chinese SMCQA
dataset.
|
[
{
"created": "Mon, 25 May 2020 14:41:28 GMT",
"version": "v1"
}
] |
2020-05-26
|
[
[
"Kuo",
"Chia-Chih",
""
],
[
"Luo",
"Shang-Bao",
""
],
[
"Chen",
"Kuan-Yu",
""
]
] |
In a spoken multiple-choice question answering (SMCQA) task, given a passage, a question, and multiple choices all in the form of speech, the machine needs to pick the correct choice to answer the question. While the audio could contain useful cues for SMCQA, usually only the auto-transcribed text is utilized in system development. Thanks to the large-scale pre-trained language representation models, such as the bidirectional encoder representations from transformers (BERT), systems with only auto-transcribed text can still achieve a certain level of performance. However, previous studies have evidenced that acoustic-level statistics can offset text inaccuracies caused by the automatic speech recognition systems or representation inadequacy lurking in word embedding generators, thereby making the SMCQA system robust. Along the line of research, this study concentrates on designing a BERT-based SMCQA framework, which not only inherits the advantages of contextualized language representations learned by BERT, but integrates the complementary acoustic-level information distilled from audio with the text-level information. Consequently, an audio-enriched BERT-based SMCQA framework is proposed. A series of experiments demonstrates remarkable improvements in accuracy over selected baselines and SOTA systems on a published Chinese SMCQA dataset.
|
2307.13908
|
Chaohui Yu
|
Chaohui Yu, Qiang Zhou, Jingliang Li, Zhe Zhang, Zhibin Wang, Fan Wang
|
Points-to-3D: Bridging the Gap between Sparse Points and
Shape-Controllable Text-to-3D Generation
|
Accepted by ACMMM 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Text-to-3D generation has recently garnered significant attention, fueled by
2D diffusion models trained on billions of image-text pairs. Existing methods
primarily rely on score distillation to leverage the 2D diffusion priors to
supervise the generation of 3D models, e.g., NeRF. However, score distillation
is prone to suffer the view inconsistency problem, and implicit NeRF modeling
can also lead to an arbitrary shape, thus leading to less realistic and
uncontrollable 3D generation. In this work, we propose a flexible framework of
Points-to-3D to bridge the gap between sparse yet freely available 3D points
and realistic shape-controllable 3D generation by distilling the knowledge from
both 2D and 3D diffusion models. The core idea of Points-to-3D is to introduce
controllable sparse 3D points to guide the text-to-3D generation. Specifically,
we use the sparse point cloud generated from the 3D diffusion model, Point-E,
as the geometric prior, conditioned on a single reference image. To better
utilize the sparse 3D points, we propose an efficient point cloud guidance loss
to adaptively drive the NeRF's geometry to align with the shape of the sparse
3D points. In addition to controlling the geometry, we propose to optimize the
NeRF for a more view-consistent appearance. To be specific, we perform score
distillation to the publicly available 2D image diffusion model ControlNet,
conditioned on text as well as depth map of the learned compact geometry.
Qualitative and quantitative comparisons demonstrate that Points-to-3D improves
view consistency and achieves good shape controllability for text-to-3D
generation. Points-to-3D provides users with a new way to improve and control
text-to-3D generation.
|
[
{
"created": "Wed, 26 Jul 2023 02:16:55 GMT",
"version": "v1"
}
] |
2023-07-27
|
[
[
"Yu",
"Chaohui",
""
],
[
"Zhou",
"Qiang",
""
],
[
"Li",
"Jingliang",
""
],
[
"Zhang",
"Zhe",
""
],
[
"Wang",
"Zhibin",
""
],
[
"Wang",
"Fan",
""
]
] |
Text-to-3D generation has recently garnered significant attention, fueled by 2D diffusion models trained on billions of image-text pairs. Existing methods primarily rely on score distillation to leverage the 2D diffusion priors to supervise the generation of 3D models, e.g., NeRF. However, score distillation is prone to suffer the view inconsistency problem, and implicit NeRF modeling can also lead to an arbitrary shape, thus leading to less realistic and uncontrollable 3D generation. In this work, we propose a flexible framework of Points-to-3D to bridge the gap between sparse yet freely available 3D points and realistic shape-controllable 3D generation by distilling the knowledge from both 2D and 3D diffusion models. The core idea of Points-to-3D is to introduce controllable sparse 3D points to guide the text-to-3D generation. Specifically, we use the sparse point cloud generated from the 3D diffusion model, Point-E, as the geometric prior, conditioned on a single reference image. To better utilize the sparse 3D points, we propose an efficient point cloud guidance loss to adaptively drive the NeRF's geometry to align with the shape of the sparse 3D points. In addition to controlling the geometry, we propose to optimize the NeRF for a more view-consistent appearance. To be specific, we perform score distillation to the publicly available 2D image diffusion model ControlNet, conditioned on text as well as depth map of the learned compact geometry. Qualitative and quantitative comparisons demonstrate that Points-to-3D improves view consistency and achieves good shape controllability for text-to-3D generation. Points-to-3D provides users with a new way to improve and control text-to-3D generation.
|
1412.6621
|
Arnab Paul
|
Arnab Paul, Suresh Venkatasubramanian
|
Why does Deep Learning work? - A perspective from Group Theory
|
13 pages, 5 figures
| null | null | null |
cs.LG cs.NE stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Why does Deep Learning work? What representations does it capture? How do
higher-order representations emerge? We study these questions from the
perspective of group theory, thereby opening a new approach towards a theory of
Deep learning.
One factor behind the recent resurgence of the subject is a key algorithmic
step called pre-training: first search for a good generative model for the
input samples, and repeat the process one layer at a time. We show deeper
implications of this simple principle, by establishing a connection with the
interplay of orbits and stabilizers of group actions. Although the neural
networks themselves may not form groups, we show the existence of {\em shadow}
groups whose elements serve as close approximations.
Over the shadow groups, the pre-training step, originally introduced as a
mechanism to better initialize a network, becomes equivalent to a search for
features with minimal orbits. Intuitively, these features are in a way the {\em
simplest}. This explains why a deep learning network learns simple features
first. Next, we show how the same principle, when repeated in the deeper
layers, can capture higher order representations, and why representation
complexity increases as the layers get deeper.
|
[
{
"created": "Sat, 20 Dec 2014 07:28:46 GMT",
"version": "v1"
},
{
"created": "Wed, 24 Dec 2014 02:22:01 GMT",
"version": "v2"
},
{
"created": "Sat, 28 Feb 2015 07:19:35 GMT",
"version": "v3"
}
] |
2015-03-03
|
[
[
"Paul",
"Arnab",
""
],
[
"Venkatasubramanian",
"Suresh",
""
]
] |
Why does Deep Learning work? What representations does it capture? How do higher-order representations emerge? We study these questions from the perspective of group theory, thereby opening a new approach towards a theory of Deep learning. One factor behind the recent resurgence of the subject is a key algorithmic step called pre-training: first search for a good generative model for the input samples, and repeat the process one layer at a time. We show deeper implications of this simple principle, by establishing a connection with the interplay of orbits and stabilizers of group actions. Although the neural networks themselves may not form groups, we show the existence of {\em shadow} groups whose elements serve as close approximations. Over the shadow groups, the pre-training step, originally introduced as a mechanism to better initialize a network, becomes equivalent to a search for features with minimal orbits. Intuitively, these features are in a way the {\em simplest}. This explains why a deep learning network learns simple features first. Next, we show how the same principle, when repeated in the deeper layers, can capture higher order representations, and why representation complexity increases as the layers get deeper.
|
2304.05860
|
Wei Peng
|
Weixuan Wang, Wei Peng and Qun Liu
|
Learning Homographic Disambiguation Representation for Neural Machine
Translation
| null | null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Homographs, words with the same spelling but different meanings, remain
challenging in Neural Machine Translation (NMT). While recent works leverage
various word embedding approaches to differentiate word sense in NMT, they do
not focus on the pivotal components in resolving ambiguities of homographs in
NMT: the hidden states of an encoder. In this paper, we propose a novel
approach to tackle homographic issues of NMT in the latent space. We first
train an encoder (aka "HDR-encoder") to learn universal sentence
representations in a natural language inference (NLI) task. We further
fine-tune the encoder using homograph-based synset sentences from WordNet,
enabling it to learn word-level homographic disambiguation representations
(HDR). The pre-trained HDR-encoder is subsequently integrated with a
transformer-based NMT in various schemes to improve translation accuracy.
Experiments on four translation directions demonstrate the effectiveness of the
proposed method in enhancing the performance of NMT systems in the BLEU scores
(up to +2.3 compared to a solid baseline). The effects can be verified by other
metrics (F1, precision, and recall) of translation accuracy in an additional
disambiguation task. Visualization methods like heatmaps, t-SNE and translation
examples are also utilized to demonstrate the effects of the proposed method.
|
[
{
"created": "Wed, 12 Apr 2023 13:42:59 GMT",
"version": "v1"
},
{
"created": "Thu, 13 Apr 2023 00:31:20 GMT",
"version": "v2"
}
] |
2023-04-14
|
[
[
"Wang",
"Weixuan",
""
],
[
"Peng",
"Wei",
""
],
[
"Liu",
"Qun",
""
]
] |
Homographs, words with the same spelling but different meanings, remain challenging in Neural Machine Translation (NMT). While recent works leverage various word embedding approaches to differentiate word sense in NMT, they do not focus on the pivotal components in resolving ambiguities of homographs in NMT: the hidden states of an encoder. In this paper, we propose a novel approach to tackle homographic issues of NMT in the latent space. We first train an encoder (aka "HDR-encoder") to learn universal sentence representations in a natural language inference (NLI) task. We further fine-tune the encoder using homograph-based synset sentences from WordNet, enabling it to learn word-level homographic disambiguation representations (HDR). The pre-trained HDR-encoder is subsequently integrated with a transformer-based NMT in various schemes to improve translation accuracy. Experiments on four translation directions demonstrate the effectiveness of the proposed method in enhancing the performance of NMT systems in the BLEU scores (up to +2.3 compared to a solid baseline). The effects can be verified by other metrics (F1, precision, and recall) of translation accuracy in an additional disambiguation task. Visualization methods like heatmaps, t-SNE and translation examples are also utilized to demonstrate the effects of the proposed method.
|
2311.12947
|
Ren Wang
|
Ren Wang, Ming Zhong, Kaidi Xu, Lola Gir\'aldez S\'anchez-Cort\'es,
Ignacio de Cominges Guerra
|
PINNs-Based Uncertainty Quantification for Transient Stability Analysis
| null | null | null | null |
cs.AI cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper addresses the challenge of transient stability in power systems
with missing parameters and uncertainty propagation in swing equations. We
introduce a novel application of Physics-Informed Neural Networks (PINNs),
specifically an Ensemble of PINNs (E-PINNs), to estimate critical parameters
like rotor angle and inertia coefficient with enhanced accuracy and reduced
computational load. E-PINNs capitalize on the underlying physical principles of
swing equations to provide a robust solution. Our approach not only facilitates
efficient parameter estimation but also quantifies uncertainties, delivering
probabilistic insights into the system behavior. The efficacy of E-PINNs is
demonstrated through the analysis of $1$-bus and $2$-bus systems, highlighting
the model's ability to handle parameter variability and data scarcity. The
study advances the application of machine learning in power system stability,
paving the way for reliable and computationally efficient transient stability
analysis.
|
[
{
"created": "Tue, 21 Nov 2023 19:21:49 GMT",
"version": "v1"
}
] |
2023-11-23
|
[
[
"Wang",
"Ren",
""
],
[
"Zhong",
"Ming",
""
],
[
"Xu",
"Kaidi",
""
],
[
"Sánchez-Cortés",
"Lola Giráldez",
""
],
[
"Guerra",
"Ignacio de Cominges",
""
]
] |
This paper addresses the challenge of transient stability in power systems with missing parameters and uncertainty propagation in swing equations. We introduce a novel application of Physics-Informed Neural Networks (PINNs), specifically an Ensemble of PINNs (E-PINNs), to estimate critical parameters like rotor angle and inertia coefficient with enhanced accuracy and reduced computational load. E-PINNs capitalize on the underlying physical principles of swing equations to provide a robust solution. Our approach not only facilitates efficient parameter estimation but also quantifies uncertainties, delivering probabilistic insights into the system behavior. The efficacy of E-PINNs is demonstrated through the analysis of $1$-bus and $2$-bus systems, highlighting the model's ability to handle parameter variability and data scarcity. The study advances the application of machine learning in power system stability, paving the way for reliable and computationally efficient transient stability analysis.
|
1702.07178
|
Zhenyu Li
|
Zhenyu Li and Adrian G. Bors
|
Steganalysis of 3D Objects Using Statistics of Local Feature Sets
| null | null |
10.1016/j.ins.2017.06.011
| null |
cs.CR cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
3D steganalysis aims to identify subtle invisible changes produced in
graphical objects through digital watermarking or steganography. Sets of
statistical representations of 3D features, extracted from both cover and stego
3D mesh objects, are used as inputs into machine learning classifiers in order
to decide whether any information was hidden in the given graphical object.
According to previous studies, sets of local geometry features can be used to
define the differences between stego and cover-objects. The features proposed
in this paper include those representing the local object curvature, vertex
normals, the local geometry representation in the spherical coordinate system
and are considered in various combinations with others. We also analyze the
effectiveness of various 3D feature sets applied for steganalysis based on the
Pearson correlation coefficient. The classifiers proposed in this study for
discriminating the 3D stego and cover-objects include Support Vector Machine
and the Fisher Linear Discriminant ensemble. Three different watermarking and
steganographic methods are used for hiding information in the 3D objects used
for testing the performance of the proposed steganalysis methodology.
|
[
{
"created": "Thu, 23 Feb 2017 11:24:03 GMT",
"version": "v1"
}
] |
2017-06-21
|
[
[
"Li",
"Zhenyu",
""
],
[
"Bors",
"Adrian G.",
""
]
] |
3D steganalysis aims to identify subtle invisible changes produced in graphical objects through digital watermarking or steganography. Sets of statistical representations of 3D features, extracted from both cover and stego 3D mesh objects, are used as inputs into machine learning classifiers in order to decide whether any information was hidden in the given graphical object. According to previous studies, sets of local geometry features can be used to define the differences between stego and cover-objects. The features proposed in this paper include those representing the local object curvature, vertex normals, the local geometry representation in the spherical coordinate system and are considered in various combinations with others. We also analyze the effectiveness of various 3D feature sets applied for steganalysis based on the Pearson correlation coefficient. The classifiers proposed in this study for discriminating the 3D stego and cover-objects include Support Vector Machine and the Fisher Linear Discriminant ensemble. Three different watermarking and steganographic methods are used for hiding information in the 3D objects used for testing the performance of the proposed steganalysis methodology.
|
2005.06046
|
Neeldhara Misra
|
Neeldhara Misra, Harshil Mittal, Aditi Sethia
|
Red-Blue Point Separation for Points on a Circle
| null | null | null | null |
cs.DS cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Given a set R of red points and a set B of blue points in the plane, the
Red-Blue point separation problem asks if there are at most k lines that
separate R from B, that is, each cell induced by the lines of the solution is
either empty or monochromatic (containing points of only one color). A common
variant of the problem is when the lines are required to be axis-parallel. The
problem is known to be NP-complete for both scenarios, and W[1]-hard
parameterized by k in the former setting and FPT in the latter. We demonstrate
a polynomial-time algorithm for the special case when the points lie on a
circle. Further, we also demonstrate the W-hardness of a related problem in the
axis-parallel setting, where the question is if there are p horizontal and q
vertical lines that separate R from B. The hardness here is shown in the
parameter p.
|
[
{
"created": "Tue, 12 May 2020 20:54:54 GMT",
"version": "v1"
}
] |
2020-05-14
|
[
[
"Misra",
"Neeldhara",
""
],
[
"Mittal",
"Harshil",
""
],
[
"Sethia",
"Aditi",
""
]
] |
Given a set R of red points and a set B of blue points in the plane, the Red-Blue point separation problem asks if there are at most k lines that separate R from B, that is, each cell induced by the lines of the solution is either empty or monochromatic (containing points of only one color). A common variant of the problem is when the lines are required to be axis-parallel. The problem is known to be NP-complete for both scenarios, and W[1]-hard parameterized by k in the former setting and FPT in the latter. We demonstrate a polynomial-time algorithm for the special case when the points lie on a circle. Further, we also demonstrate the W-hardness of a related problem in the axis-parallel setting, where the question is if there are p horizontal and q vertical lines that separate R from B. The hardness here is shown in the parameter p.
|
2406.17961
|
Md Mahadi Hasan Nahid
|
Md Mahadi Hasan Nahid, Davood Rafiei
|
NormTab: Improving Symbolic Reasoning in LLMs Through Tabular Data
Normalization
|
Work in Progress
| null | null | null |
cs.CL cs.AI cs.DB cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
In recent years, Large Language Models (LLMs) have demonstrated remarkable
capabilities in parsing textual data and generating code. However, their
performance in tasks involving tabular data, especially those requiring
symbolic reasoning, faces challenges due to the structural variance and
inconsistency in table cell values often found in web tables. In this paper, we
introduce NormTab, a novel framework aimed at enhancing the symbolic reasoning
performance of LLMs by normalizing web tables. We study table normalization as
a stand-alone, one-time preprocessing step using LLMs to support symbolic
reasoning on tabular data. Our experimental evaluation, conducted on
challenging web table datasets such as WikiTableQuestion and TabFact,
demonstrates that leveraging NormTab significantly improves symbolic reasoning
performance, showcasing the importance and effectiveness of web table
normalization for enhancing LLM-based symbolic reasoning tasks.
|
[
{
"created": "Tue, 25 Jun 2024 22:40:03 GMT",
"version": "v1"
}
] |
2024-06-27
|
[
[
"Nahid",
"Md Mahadi Hasan",
""
],
[
"Rafiei",
"Davood",
""
]
] |
In recent years, Large Language Models (LLMs) have demonstrated remarkable capabilities in parsing textual data and generating code. However, their performance in tasks involving tabular data, especially those requiring symbolic reasoning, faces challenges due to the structural variance and inconsistency in table cell values often found in web tables. In this paper, we introduce NormTab, a novel framework aimed at enhancing the symbolic reasoning performance of LLMs by normalizing web tables. We study table normalization as a stand-alone, one-time preprocessing step using LLMs to support symbolic reasoning on tabular data. Our experimental evaluation, conducted on challenging web table datasets such as WikiTableQuestion and TabFact, demonstrates that leveraging NormTab significantly improves symbolic reasoning performance, showcasing the importance and effectiveness of web table normalization for enhancing LLM-based symbolic reasoning tasks.
|
1806.02081
|
Rita Ibrahim
|
Rita Ibrahim, Mohamad Assaad, Berna Sayrac, Azeddine Gati
|
Distributed vs. Centralized Scheduling in D2D-enabled Cellular Networks
|
Submitted to IEEE Transactions on Mobile Computing
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Employing channel adaptive resource allocation can yield a large enhancement
in almost any performance metric of Device-to-Device (D2D)
communications. We observe that D2D users are able to estimate their local
Channel State Information (CSI); however, the base station needs some signaling
exchange to acquire this information. Based on the D2D users' knowledge of
their local CSI, we provide a scheduling framework that shows how the
distributed approach outperforms the centralized one. We start by proposing a centralized
scheduling that requires the knowledge of D2D links' CSI at the base station
level. This CSI reporting suffers from the limited number of resources
available for feedback transmission. Therefore, we benefit from the users'
knowledge of their local CSI to develop a distributed algorithm for D2D
resource allocation. In the distributed approach, collisions may occur between the
different CSI reports; thus a collision reduction algorithm is proposed. We
give a description of how both centralized and distributed algorithms can be
implemented in practice. Furthermore, numerical results are presented to
corroborate our claims and demonstrate the gain that the proposed scheduling
algorithms bring to cellular networks.
|
[
{
"created": "Wed, 6 Jun 2018 09:26:22 GMT",
"version": "v1"
},
{
"created": "Wed, 13 Jun 2018 09:42:59 GMT",
"version": "v2"
},
{
"created": "Fri, 15 Jun 2018 09:06:08 GMT",
"version": "v3"
},
{
"created": "Wed, 20 Feb 2019 13:56:57 GMT",
"version": "v4"
},
{
"created": "Thu, 21 Feb 2019 08:52:09 GMT",
"version": "v5"
},
{
"created": "Mon, 25 Feb 2019 08:43:14 GMT",
"version": "v6"
}
] |
2019-02-26
|
[
[
"Ibrahim",
"Rita",
""
],
[
"Assaad",
"Mohamad",
""
],
[
"Sayrac",
"Berna",
""
],
[
"Gati",
"Azeddine",
""
]
] |
Employing channel adaptive resource allocation can yield a large enhancement in almost any performance metric of Device-to-Device (D2D) communications. We observe that D2D users are able to estimate their local Channel State Information (CSI); however, the base station needs some signaling exchange to acquire this information. Based on the D2D users' knowledge of their local CSI, we provide a scheduling framework that shows how the distributed approach outperforms the centralized one. We start by proposing a centralized scheduling that requires the knowledge of D2D links' CSI at the base station level. This CSI reporting suffers from the limited number of resources available for feedback transmission. Therefore, we benefit from the users' knowledge of their local CSI to develop a distributed algorithm for D2D resource allocation. In the distributed approach, collisions may occur between the different CSI reports; thus a collision reduction algorithm is proposed. We give a description of how both centralized and distributed algorithms can be implemented in practice. Furthermore, numerical results are presented to corroborate our claims and demonstrate the gain that the proposed scheduling algorithms bring to cellular networks.
|
2309.08988
|
Diego Navarro-Cabrera
|
Diego Navarro-Cabrera, Niceto R. Luque and Eduardo Ros
|
Multi-objective tuning for torque PD controllers of cobots
|
Accepted for presentation at the CPS workshop 2023
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Collaborative robotics is a new and challenging field in the realm of motion
control and human-robot interaction. The safety measures needed for a reliable
interaction between the robot and its environment hinder the use of classical
control methods, pushing researchers to try new techniques such as machine
learning (ML). In this context, reinforcement learning has been adopted as the
primary way to create intelligent controllers for collaborative robots;
however, supervised learning shows great promise in the hope of developing
data-driven model-based ML controllers in a faster and safer way. In this work we study
several aspects of the methodology needed to create a dataset to be used to
learn the dynamics of a robot. For this, we tune several PD controllers to
several trajectories, using a multi-objective genetic algorithm (GA) which
takes into account not only their accuracy, but also their safety. We
demonstrate the need to tune the controllers individually to each trajectory
and empirically explore the best population size for the GA and how the speed
of the trajectory affects the tuning and the dynamics of the robot.
|
[
{
"created": "Sat, 16 Sep 2023 13:06:36 GMT",
"version": "v1"
}
] |
2023-09-19
|
[
[
"Navarro-Cabrera",
"Diego",
""
],
[
"Luque",
"Niceto R.",
""
],
[
"Ros",
"Eduardo",
""
]
] |
Collaborative robotics is a new and challenging field in the realm of motion control and human-robot interaction. The safety measures needed for a reliable interaction between the robot and its environment hinder the use of classical control methods, pushing researchers to try new techniques such as machine learning (ML). In this context, reinforcement learning has been adopted as the primary way to create intelligent controllers for collaborative robots; however, supervised learning shows great promise in the hope of developing data-driven model-based ML controllers in a faster and safer way. In this work we study several aspects of the methodology needed to create a dataset to be used to learn the dynamics of a robot. For this, we tune several PD controllers to several trajectories, using a multi-objective genetic algorithm (GA) which takes into account not only their accuracy, but also their safety. We demonstrate the need to tune the controllers individually to each trajectory and empirically explore the best population size for the GA and how the speed of the trajectory affects the tuning and the dynamics of the robot.
|
2206.05840
|
Sairamvinay Vijayaraghavan
|
Sairamvinay Vijayaraghavan, Terry Guan, Jason (Jinxiao) Song
|
GAN based Data Augmentation to Resolve Class Imbalance
| null | null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
The number of credit card frauds has been growing as technology grows and
people can take advantage of it. Therefore, it is very important to implement a
robust and effective method to detect such frauds. The machine learning
algorithms are appropriate for these tasks since they try to maximize the
accuracy of predictions and hence can be relied upon. However, there is an
inherent flaw wherein machine learning models may not perform well due to the
presence of an imbalanced class distribution within the sample set. So,
in many related tasks, the datasets have a very small number of observed fraud
cases (sometimes around 1 percent positive fraud instances found). Therefore,
the presence of this imbalance may impact any learning model's behavior by predicting
all labels as the majority class, hence allowing no scope for generalization in
the predictions made by the model. We trained a Generative Adversarial
Network (GAN) to generate a large number of convincing (and reliable) synthetic
examples of the minority class that can be used to alleviate the class
imbalance within the training set and hence generalize the learning of the data
more effectively.
|
[
{
"created": "Sun, 12 Jun 2022 21:21:55 GMT",
"version": "v1"
}
] |
2022-06-14
|
[
[
"Vijayaraghavan",
"Sairamvinay",
""
],
[
"Guan",
"Terry",
""
],
[
"Song",
"Jason (Jinxiao)",
""
]
] |
The number of credit card frauds has been growing as technology grows and people can take advantage of it. Therefore, it is very important to implement a robust and effective method to detect such frauds. The machine learning algorithms are appropriate for these tasks since they try to maximize the accuracy of predictions and hence can be relied upon. However, there is an inherent flaw wherein machine learning models may not perform well due to the presence of an imbalanced class distribution within the sample set. So, in many related tasks, the datasets have a very small number of observed fraud cases (sometimes around 1 percent positive fraud instances found). Therefore, the presence of this imbalance may impact any learning model's behavior by predicting all labels as the majority class, hence allowing no scope for generalization in the predictions made by the model. We trained a Generative Adversarial Network (GAN) to generate a large number of convincing (and reliable) synthetic examples of the minority class that can be used to alleviate the class imbalance within the training set and hence generalize the learning of the data more effectively.
|
1507.02890
|
Bedon Nicolas
|
Bedon Nicolas (University of Rouen)
|
Logic and Branching Automata
| null |
Logical Methods in Computer Science, Volume 11, Issue 4 (October
15, 2015) lmcs:1603
|
10.2168/LMCS-11(4:2)2015
| null |
cs.FL cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we study the logical aspects of branching automata, as defined
by Lodaya and Weil. We first prove that the class of languages of finite N-free
posets recognized by branching automata is closed under complementation. Then
we define a logic, named P-MSO as it is an extension of monadic second-order
logic with Presburger arithmetic, and show that it is precisely as expressive
as branching automata. As a consequence of the effectiveness of the
construction of one formalism from the other, the P-MSO theory of the class of
all finite N-free posets is decidable.
|
[
{
"created": "Fri, 10 Jul 2015 13:25:10 GMT",
"version": "v1"
},
{
"created": "Wed, 14 Oct 2015 10:48:03 GMT",
"version": "v2"
}
] |
2017-01-11
|
[
[
"Nicolas",
"Bedon",
"",
"University of Rouen"
]
] |
In this paper we study the logical aspects of branching automata, as defined by Lodaya and Weil. We first prove that the class of languages of finite N-free posets recognized by branching automata is closed under complementation. Then we define a logic, named P-MSO as it is an extension of monadic second-order logic with Presburger arithmetic, and show that it is precisely as expressive as branching automata. As a consequence of the effectiveness of the construction of one formalism from the other, the P-MSO theory of the class of all finite N-free posets is decidable.
|
2101.02029
|
Mohammad Tabrez Quasim
|
Mohammad Meraj, Surendra Pal Singh, Prashant Johri, Mohammad Tabrez
Quasim
|
Detection and Prediction of Infectious Diseases Using IoT Sensors: A
Review
|
7 pages, 2 figures
| null | null | null |
cs.CY cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
Infectious diseases affect a huge number of human beings, and a great deal of
investigation is being conducted throughout the world. There are many
interactive hardware platforms for IoT in healthcare, including smart tracking,
smart sensors, and clinical device integration, available in the market.
Emerging technologies like IoT have a notable ability to keep patients secure
and healthy and also enhance how physicians deliver care. Healthcare IoT can
also bolster patient satisfaction by permitting patients to spend more time
interacting with their doctors, since doctors are less occupied with the
mundane and rote aspects of their profession. The most considerable advantage
of IoT in healthcare is that it supports doctors in undertaking more
significant clinical work in a profession that is already experiencing a
worldwide labor shortage. This paper investigates the applicability of IoT in
the healthcare system.
|
[
{
"created": "Sat, 2 Jan 2021 15:59:00 GMT",
"version": "v1"
}
] |
2021-01-07
|
[
[
"Meraj",
"Mohammad",
""
],
[
"Singh",
"Surendra Pal",
""
],
[
"Johri",
"Prashant",
""
],
[
"Quasim",
"Mohammad Tabrez",
""
]
] |
Infectious diseases affect a huge number of human beings, and a great deal of investigation is being conducted throughout the world. There are many interactive hardware platforms for IoT in healthcare, including smart tracking, smart sensors, and clinical device integration, available in the market. Emerging technologies like IoT have a notable ability to keep patients secure and healthy and also enhance how physicians deliver care. Healthcare IoT can also bolster patient satisfaction by permitting patients to spend more time interacting with their doctors, since doctors are less occupied with the mundane and rote aspects of their profession. The most considerable advantage of IoT in healthcare is that it supports doctors in undertaking more significant clinical work in a profession that is already experiencing a worldwide labor shortage. This paper investigates the applicability of IoT in the healthcare system.
|
1905.11326
|
Alessandro Neri
|
Alessandro Neri, Sven Puchinger, Anna-Lena Horlemann-Trautmann
|
Invariants and Inequivalence of Linear Rank-Metric Codes
|
5 pages; accepted at IEEE International Symposium on Information
Theory 2019
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We show that the sequence of dimensions of the linear spaces, generated by a
given rank-metric code together with itself under several applications of a
field automorphism, is an invariant for the whole equivalence class of the
code. These invariants give rise to an easily computable criterion to check if
two codes are inequivalent. With this criterion we then derive bounds on the
number of equivalence classes of classical and twisted Gabidulin codes.
|
[
{
"created": "Mon, 27 May 2019 16:24:16 GMT",
"version": "v1"
}
] |
2019-05-28
|
[
[
"Neri",
"Alessandro",
""
],
[
"Puchinger",
"Sven",
""
],
[
"Horlemann-Trautmann",
"Anna-Lena",
""
]
] |
We show that the sequence of dimensions of the linear spaces, generated by a given rank-metric code together with itself under several applications of a field automorphism, is an invariant for the whole equivalence class of the code. These invariants give rise to an easily computable criterion to check if two codes are inequivalent. With this criterion we then derive bounds on the number of equivalence classes of classical and twisted Gabidulin codes.
|
1611.10024
|
Daniel M\'endez Fern\'andez
|
D. M\'endez Fern\'andez, B. Penzenstadler
|
Artefact-based Requirements Engineering: The AMDiRE Approach
| null |
Requirements Engineering Journal, 2014
|
10.1007/s00766-014-0206-y
| null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The various influences in the processes and application domains make
Requirements Engineering (RE) inherently complex and difficult to implement. In
general, we have two options for establishing an RE approach: we can either
establish an activity-based RE approach or we can establish an artefact-based
one where project participants concentrate on the RE artefacts rather than on
the way of creating them. While a number of activity-based RE approaches have
been proposed in recent years, we have gained much empirical evidence and
experiences about the advantages of the artefact-based paradigm for RE.
However, artefact orientation is still a young paradigm with various
interpretations and practical manifestations whereby we need a clear
understanding of its basic concepts and a consolidated and evaluated view on
the paradigm.
In this article, we contribute an artefact-based approach to RE (AMDiRE) that
emerges from six years of experiences in fundamental and evidence-based
research. To this end, we first discuss the basic notion of artefact
orientation and its evolution in recent years. We briefly introduce a set of
artefact-based RE models we developed in industrial research cooperations for
different application domains, show their empirical evaluations, and their
dissemination into academia and practice, eventually leading to the AMDiRE
approach. We conclude with a discussion of experiences we made during the
development and different industrial evaluations, and lessons learnt.
|
[
{
"created": "Wed, 30 Nov 2016 07:03:37 GMT",
"version": "v1"
}
] |
2016-12-01
|
[
[
"Fernández",
"D. Méndez",
""
],
[
"Penzenstadler",
"B.",
""
]
] |
The various influences in the processes and application domains make Requirements Engineering (RE) inherently complex and difficult to implement. In general, we have two options for establishing an RE approach: we can either establish an activity-based RE approach or we can establish an artefact-based one where project participants concentrate on the RE artefacts rather than on the way of creating them. While a number of activity-based RE approaches have been proposed in recent years, we have gained much empirical evidence and experiences about the advantages of the artefact-based paradigm for RE. However, artefact orientation is still a young paradigm with various interpretations and practical manifestations whereby we need a clear understanding of its basic concepts and a consolidated and evaluated view on the paradigm. In this article, we contribute an artefact-based approach to RE (AMDiRE) that emerges from six years of experiences in fundamental and evidence-based research. To this end, we first discuss the basic notion of artefact orientation and its evolution in recent years. We briefly introduce a set of artefact-based RE models we developed in industrial research cooperations for different application domains, show their empirical evaluations, and their dissemination into academia and practice, eventually leading to the AMDiRE approach. We conclude with a discussion of experiences we made during the development and different industrial evaluations, and lessons learnt.
|
1612.03339
|
Juli\'an Mestre
|
Maurice Cheung and Juli\'an Mestre and David B. Shmoys and Jos\'e
Verschae
|
A Primal-Dual Approximation Algorithm for Min-Sum Single-Machine
Scheduling Problems
|
26 pages. A preliminary version appeared in APPROX 2011. arXiv admin
note: text overlap with arXiv:1403.0298
| null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider the following single-machine scheduling problem, which is often
denoted $1||\sum f_{j}$: we are given $n$ jobs to be scheduled on a single
machine, where each job $j$ has an integral processing time $p_j$, and there is
a nondecreasing, nonnegative cost function $f_j(C_{j})$ that specifies the cost
of finishing $j$ at time $C_{j}$; the objective is to minimize $\sum_{j=1}^n
f_j(C_j)$. Bansal \& Pruhs recently gave the first constant approximation
algorithm with a performance guarantee of 16. We improve on this result by
giving a primal-dual pseudo-polynomial-time algorithm based on the recently
introduced knapsack-cover inequalities. The algorithm finds a schedule of cost
at most four times the constructed dual solution. Although we show that this
bound is tight for our algorithm, we leave open the question of whether the
integrality gap of the LP is less than 4. Finally, we show how the technique
can be adapted to yield, for any $\epsilon >0$, a $(4+\epsilon )$-approximation
algorithm for this problem.
|
[
{
"created": "Sat, 10 Dec 2016 20:36:09 GMT",
"version": "v1"
}
] |
2016-12-13
|
[
[
"Cheung",
"Maurice",
""
],
[
"Mestre",
"Julián",
""
],
[
"Shmoys",
"David B.",
""
],
[
"Verschae",
"José",
""
]
] |
We consider the following single-machine scheduling problem, which is often denoted $1||\sum f_{j}$: we are given $n$ jobs to be scheduled on a single machine, where each job $j$ has an integral processing time $p_j$, and there is a nondecreasing, nonnegative cost function $f_j(C_{j})$ that specifies the cost of finishing $j$ at time $C_{j}$; the objective is to minimize $\sum_{j=1}^n f_j(C_j)$. Bansal \& Pruhs recently gave the first constant approximation algorithm with a performance guarantee of 16. We improve on this result by giving a primal-dual pseudo-polynomial-time algorithm based on the recently introduced knapsack-cover inequalities. The algorithm finds a schedule of cost at most four times the constructed dual solution. Although we show that this bound is tight for our algorithm, we leave open the question of whether the integrality gap of the LP is less than 4. Finally, we show how the technique can be adapted to yield, for any $\epsilon >0$, a $(4+\epsilon )$-approximation algorithm for this problem.
|
2208.01014
|
Jiahui Fu
|
Jiahui Fu, Yilun Du, Kurran Singh, Joshua B. Tenenbaum, and John J.
Leonard
|
Robust Change Detection Based on Neural Descriptor Fields
|
8 pages, 8 figures, and 2 tables. Accepted to IROS 2022. Project
webpage: https://yilundu.github.io/ndf_change
| null | null | null |
cs.RO cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
The ability to reason about changes in the environment is crucial for robots
operating over extended periods of time. Agents are expected to capture changes
during operation so that actions can be followed to ensure a smooth progression
of the working session. However, varying viewing angles and accumulated
localization errors make it easy for robots to falsely detect changes in the
surrounding world due to low observation overlap and drifted object
associations. In this paper, based on the recently proposed category-level
Neural Descriptor Fields (NDFs), we develop an object-level online change
detection approach that is robust to partially overlapping observations and
noisy localization results. Utilizing the shape completion capability and
SE(3)-equivariance of NDFs, we represent objects with compact shape codes
encoding full object shapes from partial observations. The objects are then
organized in a spatial tree structure based on object centers recovered from
NDFs for fast queries of object neighborhoods. By associating objects via shape
code similarity and comparing local object-neighbor spatial layout, our
proposed approach demonstrates robustness to low observation overlap and
localization noises. We conduct experiments on both synthetic and real-world
sequences and achieve improved change detection results compared to multiple
baseline methods. Project webpage: https://yilundu.github.io/ndf_change
|
[
{
"created": "Mon, 1 Aug 2022 17:45:36 GMT",
"version": "v1"
}
] |
2022-08-02
|
[
[
"Fu",
"Jiahui",
""
],
[
"Du",
"Yilun",
""
],
[
"Singh",
"Kurran",
""
],
[
"Tenenbaum",
"Joshua B.",
""
],
[
"Leonard",
"John J.",
""
]
] |
The ability to reason about changes in the environment is crucial for robots operating over extended periods of time. Agents are expected to capture changes during operation so that actions can be followed to ensure a smooth progression of the working session. However, varying viewing angles and accumulated localization errors make it easy for robots to falsely detect changes in the surrounding world due to low observation overlap and drifted object associations. In this paper, based on the recently proposed category-level Neural Descriptor Fields (NDFs), we develop an object-level online change detection approach that is robust to partially overlapping observations and noisy localization results. Utilizing the shape completion capability and SE(3)-equivariance of NDFs, we represent objects with compact shape codes encoding full object shapes from partial observations. The objects are then organized in a spatial tree structure based on object centers recovered from NDFs for fast queries of object neighborhoods. By associating objects via shape code similarity and comparing local object-neighbor spatial layout, our proposed approach demonstrates robustness to low observation overlap and localization noises. We conduct experiments on both synthetic and real-world sequences and achieve improved change detection results compared to multiple baseline methods. Project webpage: https://yilundu.github.io/ndf_change
|
2005.08926
|
Patrick Kidger
|
Patrick Kidger, James Morrill, James Foster, Terry Lyons
|
Neural Controlled Differential Equations for Irregular Time Series
|
Accepted at NeurIPS 2020 (Spotlight)
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Neural ordinary differential equations are an attractive option for modelling
temporal dynamics. However, a fundamental issue is that the solution to an
ordinary differential equation is determined by its initial condition, and
there is no mechanism for adjusting the trajectory based on subsequent
observations. Here, we demonstrate how this may be resolved through the
well-understood mathematics of \emph{controlled differential equations}. The
resulting \emph{neural controlled differential equation} model is directly
applicable to the general setting of partially-observed irregularly-sampled
multivariate time series, and (unlike previous work on this problem) it may
utilise memory-efficient adjoint-based backpropagation even across
observations. We demonstrate that our model achieves state-of-the-art
performance against similar (ODE or RNN based) models in empirical studies on a
range of datasets. Finally we provide theoretical results demonstrating
universal approximation, and that our model subsumes alternative ODE models.
|
[
{
"created": "Mon, 18 May 2020 17:52:21 GMT",
"version": "v1"
},
{
"created": "Thu, 5 Nov 2020 17:45:39 GMT",
"version": "v2"
}
] |
2020-11-06
|
[
[
"Kidger",
"Patrick",
""
],
[
"Morrill",
"James",
""
],
[
"Foster",
"James",
""
],
[
"Lyons",
"Terry",
""
]
] |
Neural ordinary differential equations are an attractive option for modelling temporal dynamics. However, a fundamental issue is that the solution to an ordinary differential equation is determined by its initial condition, and there is no mechanism for adjusting the trajectory based on subsequent observations. Here, we demonstrate how this may be resolved through the well-understood mathematics of \emph{controlled differential equations}. The resulting \emph{neural controlled differential equation} model is directly applicable to the general setting of partially-observed irregularly-sampled multivariate time series, and (unlike previous work on this problem) it may utilise memory-efficient adjoint-based backpropagation even across observations. We demonstrate that our model achieves state-of-the-art performance against similar (ODE or RNN based) models in empirical studies on a range of datasets. Finally we provide theoretical results demonstrating universal approximation, and that our model subsumes alternative ODE models.
|
1805.09785
|
Marylou Gabri\'e
|
Marylou Gabri\'e, Andre Manoel, Cl\'ement Luneau, Jean Barbier,
Nicolas Macris, Florent Krzakala, Lenka Zdeborov\'a
|
Entropy and mutual information in models of deep neural networks
| null |
J. Stat. Mech. (2019) 124014. & NeurIPS 2018
|
10.1088/1742-5468/ab3430
| null |
cs.LG cond-mat.dis-nn cs.IT math.IT stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We examine a class of deep learning models with a tractable method to compute
information-theoretic quantities. Our contributions are three-fold: (i) We show
how entropies and mutual informations can be derived from heuristic statistical
physics methods, under the assumption that weight matrices are independent and
orthogonally-invariant. (ii) We extend particular cases in which this result is
known to be rigorously exact by providing a proof for two-layer networks with
Gaussian random weights, using the recently introduced adaptive interpolation
method. (iii) We propose an experiment framework with generative models of
synthetic datasets, on which we train deep neural networks with a weight
constraint designed so that the assumption in (i) is verified during learning.
We study the behavior of entropies and mutual informations throughout learning
and conclude that, in the proposed setting, the relationship between
compression and generalization remains elusive.
|
[
{
"created": "Thu, 24 May 2018 17:07:45 GMT",
"version": "v1"
},
{
"created": "Mon, 29 Oct 2018 19:42:49 GMT",
"version": "v2"
}
] |
2020-01-22
|
[
[
"Gabrié",
"Marylou",
""
],
[
"Manoel",
"Andre",
""
],
[
"Luneau",
"Clément",
""
],
[
"Barbier",
"Jean",
""
],
[
"Macris",
"Nicolas",
""
],
[
"Krzakala",
"Florent",
""
],
[
"Zdeborová",
"Lenka",
""
]
] |
We examine a class of deep learning models with a tractable method to compute information-theoretic quantities. Our contributions are three-fold: (i) We show how entropies and mutual informations can be derived from heuristic statistical physics methods, under the assumption that weight matrices are independent and orthogonally-invariant. (ii) We extend particular cases in which this result is known to be rigorously exact by providing a proof for two-layer networks with Gaussian random weights, using the recently introduced adaptive interpolation method. (iii) We propose an experiment framework with generative models of synthetic datasets, on which we train deep neural networks with a weight constraint designed so that the assumption in (i) is verified during learning. We study the behavior of entropies and mutual informations throughout learning and conclude that, in the proposed setting, the relationship between compression and generalization remains elusive.
|
1308.2260
|
Peter Gloor
|
Petteri Raety, Benjamin Behm, Kim-Karol Dikert, Maria Paasivaara,
Casper Lassenius, Daniela Damian
|
Communication Practices in a Distributed Scrum Project
|
Presented at COINs13 Conference, Chile, 2013 (arxiv:1308.1028)
| null | null |
coins13/2013/13
|
cs.SE cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While global software development (GSD) projects face cultural and time
differences, the biggest challenge is communication. We studied a distributed
student project with an industrial customer. The project lasted 3 months,
involved 25 participants, and was distributed between the University of
Victoria, Canada and Aalto University, Finland. We analyzed email
communication, version control system (VCS) data, and surveys on satisfaction.
Our aim was to find out whether reflecting on communication affected it, if
standups influenced when developers committed to the VCS repository, and if
leaders emerged in the three distributed Scrum teams. Initially students sent
on average 21 emails per day. With the reduction to 16 emails, satisfaction
with communication increased. By comparing Scrum standup times and VCS activity
we found that the live communication of standups activated people to work on
the project. Out of the three teams, one had an emergent communication
facilitator.
|
[
{
"created": "Fri, 9 Aug 2013 23:24:47 GMT",
"version": "v1"
}
] |
2013-08-13
|
[
[
"Raety",
"Petteri",
""
],
[
"Behm",
"Benjamin",
""
],
[
"Dikert",
"Kim-Karol",
""
],
[
"Paasivaara",
"Maria",
""
],
[
"Lassenius",
"Casper",
""
],
[
"Damian",
"Daniela",
""
]
] |
While global software development (GSD) projects face cultural and time differences, the biggest challenge is communication. We studied a distributed student project with an industrial customer. The project lasted 3 months, involved 25 participants, and was distributed between the University of Victoria, Canada and Aalto University, Finland. We analyzed email communication, version control system (VCS) data, and surveys on satisfaction. Our aim was to find out whether reflecting on communication affected it, if standups influenced when developers committed to the VCS repository, and if leaders emerged in the three distributed Scrum teams. Initially students sent on average 21 emails per day. With the reduction to 16 emails, satisfaction with communication increased. By comparing Scrum standup times and VCS activity we found that the live communication of standups activated people to work on the project. Out of the three teams, one had an emergent communication facilitator.
|
1108.4705
|
Michael Bannister
|
Michael J. Bannister, David Eppstein, Joseph A. Simons
|
Inapproximability of Orthogonal Compaction
|
Updated to the final version to appear in the Journal of Graph
Algorithms and Applications
|
J. Graph Algorithms & Applications 16(3): 651-673, 2012
|
10.7155/jgaa.00263
| null |
cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We show that several problems of compacting orthogonal graph drawings to use
the minimum number of rows, area, length of longest edge or total edge length
cannot be approximated better than within a polynomial factor of optimal in
polynomial time unless P = NP. We also provide a fixed-parameter-tractable
algorithm for testing whether a drawing can be compacted to a small number of
rows.
|
[
{
"created": "Tue, 23 Aug 2011 21:37:28 GMT",
"version": "v1"
},
{
"created": "Tue, 29 Nov 2011 02:36:32 GMT",
"version": "v2"
},
{
"created": "Mon, 27 Feb 2012 03:47:56 GMT",
"version": "v3"
}
] |
2015-07-16
|
[
[
"Bannister",
"Michael J.",
""
],
[
"Eppstein",
"David",
""
],
[
"Simons",
"Joseph A.",
""
]
] |
We show that several problems of compacting orthogonal graph drawings to use the minimum number of rows, area, length of longest edge or total edge length cannot be approximated better than within a polynomial factor of optimal in polynomial time unless P = NP. We also provide a fixed-parameter-tractable algorithm for testing whether a drawing can be compacted to a small number of rows.
|
2305.18486
|
Md Tahmid Rahman Laskar
|
Md Tahmid Rahman Laskar, M Saiful Bari, Mizanur Rahman, Md Amran
Hossen Bhuiyan, Shafiq Joty, Jimmy Xiangji Huang
|
A Systematic Study and Comprehensive Evaluation of ChatGPT on Benchmark
Datasets
|
Accepted by ACL 2023 Findings. The first three authors contributed
equally
| null | null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The development of large language models (LLMs) such as ChatGPT has brought a
lot of attention recently. However, their evaluation in the benchmark academic
datasets remains under-explored due to the difficulty of evaluating the
generative outputs produced by this model against the ground truth. In this
paper, we aim to present a thorough evaluation of ChatGPT's performance on
diverse academic datasets, covering tasks like question-answering, text
summarization, code generation, commonsense reasoning, mathematical
problem-solving, machine translation, bias detection, and ethical
considerations. Specifically, we evaluate ChatGPT across 140 tasks and analyze
255K responses it generates in these datasets. This makes our work the largest
evaluation of ChatGPT in NLP benchmarks. In short, our study aims to validate
the strengths and weaknesses of ChatGPT in various tasks and provide insights
for future research using LLMs. We also report a new emergent ability to follow
multi-query instructions that we mostly found in ChatGPT and other
instruction-tuned models. Our extensive evaluation shows that even though
ChatGPT is capable of performing a wide variety of tasks, and may obtain
impressive performance in several benchmark datasets, it is still far from
achieving the ability to reliably solve many challenging tasks. By providing a
thorough assessment of ChatGPT's performance across diverse NLP tasks, this
paper sets the stage for a targeted deployment of ChatGPT-like LLMs in
real-world applications.
|
[
{
"created": "Mon, 29 May 2023 12:37:21 GMT",
"version": "v1"
},
{
"created": "Mon, 5 Jun 2023 16:21:40 GMT",
"version": "v2"
},
{
"created": "Thu, 8 Jun 2023 03:27:16 GMT",
"version": "v3"
},
{
"created": "Wed, 5 Jul 2023 16:19:38 GMT",
"version": "v4"
}
] |
2023-07-07
|
[
[
"Laskar",
"Md Tahmid Rahman",
""
],
[
"Bari",
"M Saiful",
""
],
[
"Rahman",
"Mizanur",
""
],
[
"Bhuiyan",
"Md Amran Hossen",
""
],
[
"Joty",
"Shafiq",
""
],
[
"Huang",
"Jimmy Xiangji",
""
]
] |
The development of large language models (LLMs) such as ChatGPT has brought a lot of attention recently. However, their evaluation in the benchmark academic datasets remains under-explored due to the difficulty of evaluating the generative outputs produced by this model against the ground truth. In this paper, we aim to present a thorough evaluation of ChatGPT's performance on diverse academic datasets, covering tasks like question-answering, text summarization, code generation, commonsense reasoning, mathematical problem-solving, machine translation, bias detection, and ethical considerations. Specifically, we evaluate ChatGPT across 140 tasks and analyze 255K responses it generates in these datasets. This makes our work the largest evaluation of ChatGPT in NLP benchmarks. In short, our study aims to validate the strengths and weaknesses of ChatGPT in various tasks and provide insights for future research using LLMs. We also report a new emergent ability to follow multi-query instructions that we mostly found in ChatGPT and other instruction-tuned models. Our extensive evaluation shows that even though ChatGPT is capable of performing a wide variety of tasks, and may obtain impressive performance in several benchmark datasets, it is still far from achieving the ability to reliably solve many challenging tasks. By providing a thorough assessment of ChatGPT's performance across diverse NLP tasks, this paper sets the stage for a targeted deployment of ChatGPT-like LLMs in real-world applications.
|
2211.02139
|
Faisal Hamman
|
Faisal Hamman, Jiahao Chen, Sanghamitra Dutta
|
Can Querying for Bias Leak Protected Attributes? Achieving Privacy With
Smooth Sensitivity
|
Published in 2023 ACM Conference on Fairness, Accountability, and
Transparency (FAccT2023)
| null |
10.1145/3593013.3594086
| null |
cs.LG cs.AI cs.CR cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Existing regulations prohibit model developers from accessing protected
attributes (gender, race, etc.), often resulting in fairness assessments on
populations without knowing their protected groups. In such scenarios,
institutions often adopt a separation between the model developers (who train
models with no access to the protected attributes) and a compliance team (who
may have access to the entire dataset for auditing purposes). However, the
model developers might be allowed to test their models for bias by querying the
compliance team for group fairness metrics. In this paper, we first demonstrate
that simply querying for fairness metrics, such as statistical parity and
equalized odds can leak the protected attributes of individuals to the model
developers. We demonstrate that there always exist strategies by which the
model developers can identify the protected attribute of a targeted individual
in the test dataset from just a single query. In particular, we show that one
can reconstruct the protected attributes of all the individuals from O(Nk \log(
n /Nk)) queries when Nk<<n using techniques from compressed sensing (n: size of
the test dataset, Nk: size of smallest group). Our results pose an interesting
debate in algorithmic fairness: should querying for fairness metrics be viewed
as a neutral-valued solution to ensure compliance with regulations? Or, does it
constitute a violation of regulations and privacy if the number of queries
answered is enough for the model developers to identify the protected
attributes of specific individuals? To address this supposed violation, we also
propose Attribute-Conceal, a novel technique that achieves differential privacy
by calibrating noise to the smooth sensitivity of our bias query, outperforming
naive techniques such as the Laplace mechanism. We also include experimental
results on the Adult dataset and synthetic data.
|
[
{
"created": "Thu, 3 Nov 2022 20:44:48 GMT",
"version": "v1"
},
{
"created": "Mon, 5 Jun 2023 20:55:12 GMT",
"version": "v2"
}
] |
2023-06-07
|
[
[
"Hamman",
"Faisal",
""
],
[
"Chen",
"Jiahao",
""
],
[
"Dutta",
"Sanghamitra",
""
]
] |
Existing regulations prohibit model developers from accessing protected attributes (gender, race, etc.), often resulting in fairness assessments on populations without knowing their protected groups. In such scenarios, institutions often adopt a separation between the model developers (who train models with no access to the protected attributes) and a compliance team (who may have access to the entire dataset for auditing purposes). However, the model developers might be allowed to test their models for bias by querying the compliance team for group fairness metrics. In this paper, we first demonstrate that simply querying for fairness metrics, such as statistical parity and equalized odds can leak the protected attributes of individuals to the model developers. We demonstrate that there always exist strategies by which the model developers can identify the protected attribute of a targeted individual in the test dataset from just a single query. In particular, we show that one can reconstruct the protected attributes of all the individuals from O(Nk \log( n /Nk)) queries when Nk<<n using techniques from compressed sensing (n: size of the test dataset, Nk: size of smallest group). Our results pose an interesting debate in algorithmic fairness: should querying for fairness metrics be viewed as a neutral-valued solution to ensure compliance with regulations? Or, does it constitute a violation of regulations and privacy if the number of queries answered is enough for the model developers to identify the protected attributes of specific individuals? To address this supposed violation, we also propose Attribute-Conceal, a novel technique that achieves differential privacy by calibrating noise to the smooth sensitivity of our bias query, outperforming naive techniques such as the Laplace mechanism. We also include experimental results on the Adult dataset and synthetic data.
|
2203.10652
|
Yanzhe Zhang
|
Yanzhe Zhang, Xuezhi Wang and Diyi Yang
|
Continual Sequence Generation with Adaptive Compositional Modules
|
15 pages, ACL 2022
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Continual learning is essential for real-world deployment when there is a
need to quickly adapt the model to new tasks without forgetting knowledge of
old tasks. Existing work on continual sequence generation either always reuses
existing parameters to learn new tasks, which is vulnerable to catastrophic
forgetting on dissimilar tasks, or blindly adds new parameters for every new
task, which could prevent knowledge sharing between similar tasks. To get the
best of both worlds, in this work, we propose continual sequence generation
with adaptive compositional modules to adaptively add modules in transformer
architectures and compose both old and new modules for new tasks. We also
incorporate pseudo experience replay to facilitate knowledge transfer in those
shared modules. Experiment results on various sequences of generation tasks
show that our framework can adaptively add modules or reuse modules based on
task similarity, outperforming state-of-the-art baselines in terms of both
performance and parameter efficiency. We make our code public at
https://github.com/GT-SALT/Adaptive-Compositional-Modules.
|
[
{
"created": "Sun, 20 Mar 2022 21:22:48 GMT",
"version": "v1"
},
{
"created": "Tue, 5 Apr 2022 00:54:35 GMT",
"version": "v2"
}
] |
2022-04-06
|
[
[
"Zhang",
"Yanzhe",
""
],
[
"Wang",
"Xuezhi",
""
],
[
"Yang",
"Diyi",
""
]
] |
Continual learning is essential for real-world deployment when there is a need to quickly adapt the model to new tasks without forgetting knowledge of old tasks. Existing work on continual sequence generation either always reuses existing parameters to learn new tasks, which is vulnerable to catastrophic forgetting on dissimilar tasks, or blindly adds new parameters for every new task, which could prevent knowledge sharing between similar tasks. To get the best of both worlds, in this work, we propose continual sequence generation with adaptive compositional modules to adaptively add modules in transformer architectures and compose both old and new modules for new tasks. We also incorporate pseudo experience replay to facilitate knowledge transfer in those shared modules. Experiment results on various sequences of generation tasks show that our framework can adaptively add modules or reuse modules based on task similarity, outperforming state-of-the-art baselines in terms of both performance and parameter efficiency. We make our code public at https://github.com/GT-SALT/Adaptive-Compositional-Modules.
|
2407.17816
|
Junran Wu
|
Yue Hou, Xueyuan Chen, He Zhu, Romei Liu, Bowen Shi, Jiaheng Liu,
Junran Wu, Ke Xu
|
NC-NCD: Novel Class Discovery for Node Classification
|
Accepted by CIKM'24
| null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Novel Class Discovery (NCD) involves identifying new categories within
unlabeled data by utilizing knowledge acquired from previously established
categories. However, existing NCD methods often struggle to maintain a balance
between the performance of old and new categories. Discovering unlabeled new
categories in a class-incremental way is more practical but also more
challenging, as it is frequently hindered by either catastrophic forgetting of
old categories or an inability to learn new ones. Furthermore, the
implementation of NCD on continuously scalable graph-structured data remains an
under-explored area. In response to these challenges, we introduce for the
first time a more practical NCD scenario for node classification (i.e.,
NC-NCD), and propose a novel self-training framework with prototype replay and
distillation called SWORD, adapted to our NC-NCD setting. Our approach enables
the model to cluster unlabeled new category nodes after learning labeled nodes
while preserving performance on old categories without reliance on old category
nodes. SWORD achieves this by employing a self-training strategy to learn new
categories and preventing the forgetting of old categories through the joint
use of feature prototypes and knowledge distillation. Extensive experiments on
four common benchmarks demonstrate the superiority of SWORD over other
state-of-the-art methods.
|
[
{
"created": "Thu, 25 Jul 2024 07:10:08 GMT",
"version": "v1"
}
] |
2024-07-26
|
[
[
"Hou",
"Yue",
""
],
[
"Chen",
"Xueyuan",
""
],
[
"Zhu",
"He",
""
],
[
"Liu",
"Romei",
""
],
[
"Shi",
"Bowen",
""
],
[
"Liu",
"Jiaheng",
""
],
[
"Wu",
"Junran",
""
],
[
"Xu",
"Ke",
""
]
] |
Novel Class Discovery (NCD) involves identifying new categories within unlabeled data by utilizing knowledge acquired from previously established categories. However, existing NCD methods often struggle to maintain a balance between the performance of old and new categories. Discovering unlabeled new categories in a class-incremental way is more practical but also more challenging, as it is frequently hindered by either catastrophic forgetting of old categories or an inability to learn new ones. Furthermore, the implementation of NCD on continuously scalable graph-structured data remains an under-explored area. In response to these challenges, we introduce for the first time a more practical NCD scenario for node classification (i.e., NC-NCD), and propose a novel self-training framework with prototype replay and distillation called SWORD, adapted to our NC-NCD setting. Our approach enables the model to cluster unlabeled new category nodes after learning labeled nodes while preserving performance on old categories without reliance on old category nodes. SWORD achieves this by employing a self-training strategy to learn new categories and preventing the forgetting of old categories through the joint use of feature prototypes and knowledge distillation. Extensive experiments on four common benchmarks demonstrate the superiority of SWORD over other state-of-the-art methods.
|