id
stringlengths 9
10
| submitter
stringlengths 1
64
⌀ | authors
stringlengths 4
20.7k
| title
stringlengths 4
246
| comments
stringlengths 1
523
⌀ | journal-ref
stringlengths 4
404
⌀ | doi
stringlengths 11
153
⌀ | report-no
stringlengths 2
254
⌀ | categories
stringlengths 5
98
| license
stringclasses 9
values | orig_abstract
stringlengths 14
3.35k
| versions
listlengths 1
60
| update_date
stringlengths 10
10
| authors_parsed
listlengths 1
1.35k
| abstract
stringlengths 11
3.34k
|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1806.08852
|
Lorenzo Quir\'os
|
Lorenzo Quir\'os
|
Multi-Task Handwritten Document Layout Analysis
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Document Layout Analysis is a fundamental step in Handwritten Text Processing
systems, from the extraction of the text lines to the type of zone it belongs
to. We present a system based on artificial neural networks which is able to
determine not only the baselines of text lines present in the document, but
also performs geometric and logic layout analysis of the document. Experiments
in three different datasets demonstrate the potential of the method and show
competitive results with respect to state-of-the-art methods.
|
[
{
"created": "Fri, 22 Jun 2018 21:00:07 GMT",
"version": "v1"
},
{
"created": "Mon, 12 Nov 2018 11:30:18 GMT",
"version": "v2"
},
{
"created": "Wed, 12 Dec 2018 15:07:40 GMT",
"version": "v3"
}
] |
2018-12-13
|
[
[
"Quirós",
"Lorenzo",
""
]
] |
Document Layout Analysis is a fundamental step in Handwritten Text Processing systems, from the extraction of the text lines to the type of zone it belongs to. We present a system based on artificial neural networks which is able to determine not only the baselines of text lines present in the document, but also performs geometric and logic layout analysis of the document. Experiments in three different datasets demonstrate the potential of the method and show competitive results with respect to state-of-the-art methods.
|
1803.06913
|
Anirban Nag
|
Anirban Nag, Ali Shafiee, Rajeev Balasubramonian, Vivek Srikumar and
Naveen Muralimanohar
|
Newton: Gravitating Towards the Physical Limits of Crossbar Acceleration
|
13 pages with Appendix
| null | null | null |
cs.LG cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Many recent works have designed accelerators for Convolutional Neural
Networks (CNNs). While digital accelerators have relied on near data
processing, analog accelerators have further reduced data movement by
performing in-situ computation. Recent works take advantage of highly parallel
analog in-situ computation in memristor crossbars to accelerate the many
vector-matrix multiplication operations in CNNs. However, these in-situ
accelerators have two significant short-comings that we address in this work.
First, the ADCs account for a large fraction of chip power and area. Second,
these accelerators adopt a homogeneous design where every resource is
provisioned for the worst case. By addressing both problems, the new
architecture, Newton, moves closer to achieving optimal energy-per-neuron for
crossbar accelerators.
We introduce multiple new techniques that apply at different levels of the
tile hierarchy. Two of the techniques leverage heterogeneity: one adapts ADC
precision based on the requirements of every sub-computation (with zero impact
on accuracy), and the other designs tiles customized for convolutions or
classifiers. Two other techniques rely on divide-and-conquer numeric algorithms
to reduce computations and ADC pressure. Finally, we place constraints on how a
workload is mapped to tiles, thus helping reduce resource provisioning in
tiles. For a wide range of CNN dataflows and structures, Newton achieves a 77%
decrease in power, 51% improvement in energy efficiency, and 2.2x higher
throughput/area, relative to the state-of-the-art ISAAC accelerator.
|
[
{
"created": "Sat, 10 Mar 2018 05:06:57 GMT",
"version": "v1"
}
] |
2018-03-20
|
[
[
"Nag",
"Anirban",
""
],
[
"Shafiee",
"Ali",
""
],
[
"Balasubramonian",
"Rajeev",
""
],
[
"Srikumar",
"Vivek",
""
],
[
"Muralimanohar",
"Naveen",
""
]
] |
Many recent works have designed accelerators for Convolutional Neural Networks (CNNs). While digital accelerators have relied on near data processing, analog accelerators have further reduced data movement by performing in-situ computation. Recent works take advantage of highly parallel analog in-situ computation in memristor crossbars to accelerate the many vector-matrix multiplication operations in CNNs. However, these in-situ accelerators have two significant short-comings that we address in this work. First, the ADCs account for a large fraction of chip power and area. Second, these accelerators adopt a homogeneous design where every resource is provisioned for the worst case. By addressing both problems, the new architecture, Newton, moves closer to achieving optimal energy-per-neuron for crossbar accelerators. We introduce multiple new techniques that apply at different levels of the tile hierarchy. Two of the techniques leverage heterogeneity: one adapts ADC precision based on the requirements of every sub-computation (with zero impact on accuracy), and the other designs tiles customized for convolutions or classifiers. Two other techniques rely on divide-and-conquer numeric algorithms to reduce computations and ADC pressure. Finally, we place constraints on how a workload is mapped to tiles, thus helping reduce resource provisioning in tiles. For a wide range of CNN dataflows and structures, Newton achieves a 77% decrease in power, 51% improvement in energy efficiency, and 2.2x higher throughput/area, relative to the state-of-the-art ISAAC accelerator.
|
cs/0703153
|
Maurice Margenstern
|
Maurice Margenstern
|
The periodic domino problem is undecidable in the hyperbolic plane
| null | null | null | null |
cs.CG cs.DM
| null |
In this paper, we consider the periodic tiling problem which was proved
undecidable in the Euclidean plane by Yu. Gurevich and I. Koriakov in 1972.
Here, we prove that the same problem for the hyperbolic plane is also
undecidable.
|
[
{
"created": "Fri, 30 Mar 2007 09:31:40 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Margenstern",
"Maurice",
""
]
] |
In this paper, we consider the periodic tiling problem which was proved undecidable in the Euclidean plane by Yu. Gurevich and I. Koriakov in 1972. Here, we prove that the same problem for the hyperbolic plane is also undecidable.
|
1709.06341
|
Benjamin Hou
|
Benjamin Hou, Bishesh Khanal, Amir Alansary, Steven McDonagh, Alice
Davidson, Mary Rutherford, Jo V. Hajnal, Daniel Rueckert, Ben Glocker and
Bernhard Kainz
|
3D Reconstruction in Canonical Co-ordinate Space from Arbitrarily
Oriented 2D Images
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Limited capture range, and the requirement to provide high quality
initialization for optimization-based 2D/3D image registration methods, can
significantly degrade the performance of 3D image reconstruction and motion
compensation pipelines. Challenging clinical imaging scenarios, which contain
significant subject motion such as fetal in-utero imaging, complicate the 3D
image and volume reconstruction process. In this paper we present a learning
based image registration method capable of predicting 3D rigid transformations
of arbitrarily oriented 2D image slices, with respect to a learned canonical
atlas co-ordinate system. Only image slice intensity information is used to
perform registration and canonical alignment, no spatial transform
initialization is required. To find image transformations we utilize a
Convolutional Neural Network (CNN) architecture to learn the regression
function capable of mapping 2D image slices to a 3D canonical atlas space. We
extensively evaluate the effectiveness of our approach quantitatively on
simulated Magnetic Resonance Imaging (MRI), fetal brain imagery with synthetic
motion and further demonstrate qualitative results on real fetal MRI data where
our method is integrated into a full reconstruction and motion compensation
pipeline. Our learning based registration achieves an average spatial
prediction error of 7 mm on simulated data and produces qualitatively improved
reconstructions for heavily moving fetuses with gestational ages of
approximately 20 weeks. Our model provides a general and computationally
efficient solution to the 2D/3D registration initialization problem and is
suitable for real-time scenarios.
|
[
{
"created": "Tue, 19 Sep 2017 10:50:20 GMT",
"version": "v1"
},
{
"created": "Wed, 20 Sep 2017 08:17:57 GMT",
"version": "v2"
},
{
"created": "Thu, 7 Dec 2017 18:55:18 GMT",
"version": "v3"
},
{
"created": "Tue, 23 Jan 2018 18:21:29 GMT",
"version": "v4"
}
] |
2018-01-24
|
[
[
"Hou",
"Benjamin",
""
],
[
"Khanal",
"Bishesh",
""
],
[
"Alansary",
"Amir",
""
],
[
"McDonagh",
"Steven",
""
],
[
"Davidson",
"Alice",
""
],
[
"Rutherford",
"Mary",
""
],
[
"Hajnal",
"Jo V.",
""
],
[
"Rueckert",
"Daniel",
""
],
[
"Glocker",
"Ben",
""
],
[
"Kainz",
"Bernhard",
""
]
] |
Limited capture range, and the requirement to provide high quality initialization for optimization-based 2D/3D image registration methods, can significantly degrade the performance of 3D image reconstruction and motion compensation pipelines. Challenging clinical imaging scenarios, which contain significant subject motion such as fetal in-utero imaging, complicate the 3D image and volume reconstruction process. In this paper we present a learning based image registration method capable of predicting 3D rigid transformations of arbitrarily oriented 2D image slices, with respect to a learned canonical atlas co-ordinate system. Only image slice intensity information is used to perform registration and canonical alignment, no spatial transform initialization is required. To find image transformations we utilize a Convolutional Neural Network (CNN) architecture to learn the regression function capable of mapping 2D image slices to a 3D canonical atlas space. We extensively evaluate the effectiveness of our approach quantitatively on simulated Magnetic Resonance Imaging (MRI), fetal brain imagery with synthetic motion and further demonstrate qualitative results on real fetal MRI data where our method is integrated into a full reconstruction and motion compensation pipeline. Our learning based registration achieves an average spatial prediction error of 7 mm on simulated data and produces qualitatively improved reconstructions for heavily moving fetuses with gestational ages of approximately 20 weeks. Our model provides a general and computationally efficient solution to the 2D/3D registration initialization problem and is suitable for real-time scenarios.
|
2404.10713
|
Tandin Dorji
|
Tandin Dorji, Pakinee Aimmanee, Vich Yindeedej
|
A Plausibility Study of Using Augmented Reality in the
Ventriculoperitoneal Shunt Operations
|
Accepted for the 2024 - 16th International Conference on Knowledge
and Smart Technology (KST). To be published in IEEEXplore Digital Library
(#61284), ISBN: 979-8-3503-7073-7
| null |
10.1109/KST61284.2024.10499675
| null |
cs.CV cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The field of augmented reality (AR) has undergone substantial growth, finding
diverse applications in the medical industry. This paper delves into various
techniques employed in medical surgeries, scrutinizing factors such as cost,
implementation, and accessibility. The focus of this exploration is on AR-based
solutions, with a particular emphasis on addressing challenges and proposing an
innovative solution for ventriculoperitoneal shunt (VP) operations. The
proposed solution introduces a novel flow in the pre-surgery phase, aiming to
substantially reduce setup time and operation duration by creating 3D models of
the skull and ventricles. Experiments are conducted where the models are
visualized on a 3D- printed skull through an AR device, specifically the
Microsoft HoloLens 2. The paper then conducts an in-depth analysis of this
proposed solution, discussing its feasibility, advantages, limitations,and
future implications.
|
[
{
"created": "Tue, 16 Apr 2024 16:43:14 GMT",
"version": "v1"
}
] |
2024-04-22
|
[
[
"Dorji",
"Tandin",
""
],
[
"Aimmanee",
"Pakinee",
""
],
[
"Yindeedej",
"Vich",
""
]
] |
The field of augmented reality (AR) has undergone substantial growth, finding diverse applications in the medical industry. This paper delves into various techniques employed in medical surgeries, scrutinizing factors such as cost, implementation, and accessibility. The focus of this exploration is on AR-based solutions, with a particular emphasis on addressing challenges and proposing an innovative solution for ventriculoperitoneal shunt (VP) operations. The proposed solution introduces a novel flow in the pre-surgery phase, aiming to substantially reduce setup time and operation duration by creating 3D models of the skull and ventricles. Experiments are conducted where the models are visualized on a 3D- printed skull through an AR device, specifically the Microsoft HoloLens 2. The paper then conducts an in-depth analysis of this proposed solution, discussing its feasibility, advantages, limitations,and future implications.
|
1812.01936
|
Jiankang Deng
|
Jia Guo, Jiankang Deng, Niannan Xue, Stefanos Zafeiriou
|
Stacked Dense U-Nets with Dual Transformers for Robust Face Alignment
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Facial landmark localisation in images captured in-the-wild is an important
and challenging problem. The current state-of-the-art revolves around certain
kinds of Deep Convolutional Neural Networks (DCNNs) such as stacked U-Nets and
Hourglass networks. In this work, we innovatively propose stacked dense U-Nets
for this task. We design a novel scale aggregation network topology structure
and a channel aggregation building block to improve the model's capacity
without sacrificing the computational complexity and model size. With the
assistance of deformable convolutions inside the stacked dense U-Nets and
coherent loss for outside data transformation, our model obtains the ability to
be spatially invariant to arbitrary input face images. Extensive experiments on
many in-the-wild datasets, validate the robustness of the proposed method under
extreme poses, exaggerated expressions and heavy occlusions. Finally, we show
that accurate 3D face alignment can assist pose-invariant face recognition
where we achieve a new state-of-the-art accuracy on CFP-FP.
|
[
{
"created": "Wed, 5 Dec 2018 12:02:11 GMT",
"version": "v1"
}
] |
2018-12-06
|
[
[
"Guo",
"Jia",
""
],
[
"Deng",
"Jiankang",
""
],
[
"Xue",
"Niannan",
""
],
[
"Zafeiriou",
"Stefanos",
""
]
] |
Facial landmark localisation in images captured in-the-wild is an important and challenging problem. The current state-of-the-art revolves around certain kinds of Deep Convolutional Neural Networks (DCNNs) such as stacked U-Nets and Hourglass networks. In this work, we innovatively propose stacked dense U-Nets for this task. We design a novel scale aggregation network topology structure and a channel aggregation building block to improve the model's capacity without sacrificing the computational complexity and model size. With the assistance of deformable convolutions inside the stacked dense U-Nets and coherent loss for outside data transformation, our model obtains the ability to be spatially invariant to arbitrary input face images. Extensive experiments on many in-the-wild datasets, validate the robustness of the proposed method under extreme poses, exaggerated expressions and heavy occlusions. Finally, we show that accurate 3D face alignment can assist pose-invariant face recognition where we achieve a new state-of-the-art accuracy on CFP-FP.
|
1703.08838
|
Saber Salehkaleybar
|
Saber Salehkaleybar, Arsalan Sharif-Nassab, S. Jamaloddin Golestani
|
Distributed Voting/Ranking with Optimal Number of States per Node
| null | null | null | null |
cs.DC cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Considering a network with $n$ nodes, where each node initially votes for one
(or more) choices out of $K$ possible choices, we present a Distributed
Multi-choice Voting/Ranking (DMVR) algorithm to determine either the choice
with maximum vote (the voting problem) or to rank all the choices in terms of
their acquired votes (the ranking problem). The algorithm consolidates node
votes across the network by updating the states of interacting nodes using two
key operations, the union and the intersection. The proposed algorithm is
simple, independent from network size, and easily scalable in terms of the
number of choices $K$, using only $K\times 2^{K-1}$ nodal states for voting,
and $K\times K!$ nodal states for ranking. We prove the number of states to be
optimal in the ranking case, this optimality is conjectured to also apply to
the voting case. The time complexity of the algorithm is analyzed in complete
graphs. We show that the time complexity for both ranking and voting is
$O(\log(n))$ for given vote percentages, and is inversely proportional to the
minimum of the vote percentage differences among various choices.
|
[
{
"created": "Sun, 26 Mar 2017 16:19:31 GMT",
"version": "v1"
}
] |
2017-03-28
|
[
[
"Salehkaleybar",
"Saber",
""
],
[
"Sharif-Nassab",
"Arsalan",
""
],
[
"Golestani",
"S. Jamaloddin",
""
]
] |
Considering a network with $n$ nodes, where each node initially votes for one (or more) choices out of $K$ possible choices, we present a Distributed Multi-choice Voting/Ranking (DMVR) algorithm to determine either the choice with maximum vote (the voting problem) or to rank all the choices in terms of their acquired votes (the ranking problem). The algorithm consolidates node votes across the network by updating the states of interacting nodes using two key operations, the union and the intersection. The proposed algorithm is simple, independent from network size, and easily scalable in terms of the number of choices $K$, using only $K\times 2^{K-1}$ nodal states for voting, and $K\times K!$ nodal states for ranking. We prove the number of states to be optimal in the ranking case, this optimality is conjectured to also apply to the voting case. The time complexity of the algorithm is analyzed in complete graphs. We show that the time complexity for both ranking and voting is $O(\log(n))$ for given vote percentages, and is inversely proportional to the minimum of the vote percentage differences among various choices.
|
2009.12326
|
Yuxuan Zhao
|
Yuxuan Zhao, Eric Landgrebe, Eliot Shekhtman and Madeleine Udell
|
Online Missing Value Imputation and Change Point Detection with the
Gaussian Copula
|
Accepted by AAAI 2022
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Missing value imputation is crucial for real-world data science workflows.
Imputation is harder in the online setting, as it requires the imputation
method itself to be able to evolve over time. For practical applications,
imputation algorithms should produce imputations that match the true data
distribution, handle data of mixed types, including ordinal, boolean, and
continuous variables, and scale to large datasets. In this work we develop a
new online imputation algorithm for mixed data using the Gaussian copula. The
online Gaussian copula model meets all the desiderata: its imputations match
the data distribution even for mixed data, improve over its offline counterpart
on the accuracy when the streaming data has a changing distribution, and on the
speed (up to an order of magnitude) especially on large scale datasets. By
fitting the copula model to online data, we also provide a new method to detect
change points in the multivariate dependence structure with missing values.
Experimental results on synthetic and real world data validate the performance
of the proposed methods.
|
[
{
"created": "Fri, 25 Sep 2020 16:27:47 GMT",
"version": "v1"
},
{
"created": "Wed, 15 Dec 2021 20:12:33 GMT",
"version": "v2"
}
] |
2021-12-17
|
[
[
"Zhao",
"Yuxuan",
""
],
[
"Landgrebe",
"Eric",
""
],
[
"Shekhtman",
"Eliot",
""
],
[
"Udell",
"Madeleine",
""
]
] |
Missing value imputation is crucial for real-world data science workflows. Imputation is harder in the online setting, as it requires the imputation method itself to be able to evolve over time. For practical applications, imputation algorithms should produce imputations that match the true data distribution, handle data of mixed types, including ordinal, boolean, and continuous variables, and scale to large datasets. In this work we develop a new online imputation algorithm for mixed data using the Gaussian copula. The online Gaussian copula model meets all the desiderata: its imputations match the data distribution even for mixed data, improve over its offline counterpart on the accuracy when the streaming data has a changing distribution, and on the speed (up to an order of magnitude) especially on large scale datasets. By fitting the copula model to online data, we also provide a new method to detect change points in the multivariate dependence structure with missing values. Experimental results on synthetic and real world data validate the performance of the proposed methods.
|
2004.08096
|
Yanghua Jin
|
Naofumi Akimoto, Huachun Zhu, Yanghua Jin, Yoshimitsu Aoki
|
Fast Soft Color Segmentation
|
Accepted at CVPR 2020
| null | null | null |
cs.CV cs.LG cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We address the problem of soft color segmentation, defined as decomposing a
given image into several RGBA layers, each containing only homogeneous color
regions. The resulting layers from decomposition pave the way for applications
that benefit from layer-based editing, such as recoloring and compositing of
images and videos. The current state-of-the-art approach for this problem is
hindered by slow processing time due to its iterative nature, and consequently
does not scale to certain real-world scenarios. To address this issue, we
propose a neural network based method for this task that decomposes a given
image into multiple layers in a single forward pass. Furthermore, our method
separately decomposes the color layers and the alpha channel layers. By
leveraging a novel training objective, our method achieves proper assignment of
colors amongst layers. As a consequence, our method achieve promising quality
without existing issue of inference speed for iterative approaches. Our
thorough experimental analysis shows that our method produces qualitative and
quantitative results comparable to previous methods while achieving a 300,000x
speed improvement. Finally, we utilize our proposed method on several
applications, and demonstrate its speed advantage, especially in video editing.
|
[
{
"created": "Fri, 17 Apr 2020 07:43:33 GMT",
"version": "v1"
}
] |
2020-04-20
|
[
[
"Akimoto",
"Naofumi",
""
],
[
"Zhu",
"Huachun",
""
],
[
"Jin",
"Yanghua",
""
],
[
"Aoki",
"Yoshimitsu",
""
]
] |
We address the problem of soft color segmentation, defined as decomposing a given image into several RGBA layers, each containing only homogeneous color regions. The resulting layers from decomposition pave the way for applications that benefit from layer-based editing, such as recoloring and compositing of images and videos. The current state-of-the-art approach for this problem is hindered by slow processing time due to its iterative nature, and consequently does not scale to certain real-world scenarios. To address this issue, we propose a neural network based method for this task that decomposes a given image into multiple layers in a single forward pass. Furthermore, our method separately decomposes the color layers and the alpha channel layers. By leveraging a novel training objective, our method achieves proper assignment of colors amongst layers. As a consequence, our method achieve promising quality without existing issue of inference speed for iterative approaches. Our thorough experimental analysis shows that our method produces qualitative and quantitative results comparable to previous methods while achieving a 300,000x speed improvement. Finally, we utilize our proposed method on several applications, and demonstrate its speed advantage, especially in video editing.
|
2102.11743
|
Kyle Mills
|
Kyle Mills and Isaac Tamblyn
|
Weakly-supervised multi-class object localization using only object
counts as labels
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We demonstrate the use of an extensive deep neural network to localize
instances of objects in images. The EDNN is naturally able to accurately
perform multi-class counting using only ground truth count values as labels.
Without providing any conceptual information, object annotations, or pixel
segmentation information, the neural network is able to formulate its own
conceptual representation of the items in the image. Using images labelled with
only the counts of the objects present,the structure of the extensive deep
neural network can be exploited to perform localization of the objects within
the visual field. We demonstrate that a trained EDNN can be used to count
objects in images much larger than those on which it was trained. In order to
demonstrate our technique, we introduce seven new data sets: five progressively
harder MNIST digit-counting data sets, and two datasets of 3d-rendered rubber
ducks in various situations. On most of these datasets, the EDNN achieves
greater than 99% test set accuracy in counting objects.
|
[
{
"created": "Tue, 23 Feb 2021 15:14:46 GMT",
"version": "v1"
}
] |
2021-02-24
|
[
[
"Mills",
"Kyle",
""
],
[
"Tamblyn",
"Isaac",
""
]
] |
We demonstrate the use of an extensive deep neural network to localize instances of objects in images. The EDNN is naturally able to accurately perform multi-class counting using only ground truth count values as labels. Without providing any conceptual information, object annotations, or pixel segmentation information, the neural network is able to formulate its own conceptual representation of the items in the image. Using images labelled with only the counts of the objects present,the structure of the extensive deep neural network can be exploited to perform localization of the objects within the visual field. We demonstrate that a trained EDNN can be used to count objects in images much larger than those on which it was trained. In order to demonstrate our technique, we introduce seven new data sets: five progressively harder MNIST digit-counting data sets, and two datasets of 3d-rendered rubber ducks in various situations. On most of these datasets, the EDNN achieves greater than 99% test set accuracy in counting objects.
|
2403.02783
|
Sebastien Verel
|
S\'ebastien Verel (LISIC), Sarah Thomson, Omar Rifki (LISIC)
|
Where the Really Hard Quadratic Assignment Problems Are: the QAP-SAT
instances
| null |
Evolutionary Computation in Combinatorial Optimization Conference
(evoCOP), Apr 2024, Aberystwyth, United Kingdom
| null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Quadratic Assignment Problem (QAP) is one of the major domains in the
field of evolutionary computation, and more widely in combinatorial
optimization. This paper studies the phase transition of the QAP, which can be
described as a dramatic change in the problem's computational complexity and
satisfiability, within a narrow range of the problem parameters. To approach
this phenomenon, we introduce a new QAP-SAT design of the initial problem based
on submodularity to capture its difficulty with new features. This
decomposition is studied experimentally using branch-and-bound and tabu search
solvers. A phase transition parameter is then proposed. The critical parameter
of phase transition satisfaction and that of the solving effort are shown to be
highly correlated for tabu search, thus allowing the prediction of difficult
instances.
|
[
{
"created": "Tue, 5 Mar 2024 08:56:30 GMT",
"version": "v1"
}
] |
2024-03-06
|
[
[
"Verel",
"Sébastien",
"",
"LISIC"
],
[
"Thomson",
"Sarah",
"",
"LISIC"
],
[
"Rifki",
"Omar",
"",
"LISIC"
]
] |
The Quadratic Assignment Problem (QAP) is one of the major domains in the field of evolutionary computation, and more widely in combinatorial optimization. This paper studies the phase transition of the QAP, which can be described as a dramatic change in the problem's computational complexity and satisfiability, within a narrow range of the problem parameters. To approach this phenomenon, we introduce a new QAP-SAT design of the initial problem based on submodularity to capture its difficulty with new features. This decomposition is studied experimentally using branch-and-bound and tabu search solvers. A phase transition parameter is then proposed. The critical parameter of phase transition satisfaction and that of the solving effort are shown to be highly correlated for tabu search, thus allowing the prediction of difficult instances.
|
2311.02512
|
Iman Jafarian
|
Iman Jafarian
|
Cryptanalysis of Nikooghadam et al.'s lightweight authentication
protocol for Internet of Drones
|
4 pages, 3 figures, 1 table
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
The Internet of Drones has emerged as a transformative technology with
applications spanning various domains, including surveillance, delivery
services, and disaster management. Secure communication between controller
users and drones is paramount to ensure the transmitted data's confidentiality,
integrity, and authenticity. Key agreement protocols are crucial in
establishing secure communication channels between users and drones, enabling
them to exchange sensitive information and control their operations securely.
Recently Nikooghadam et al. proposed a lightweight mutual authentication and
key agreement protocol for the Internet of drones. In this article, we provide
a descriptive analysis of their proposed scheme and prove that Nikooghadam et
al.'s scheme is vulnerable to user tracking attacks and stolen verifier
attacks.
|
[
{
"created": "Tue, 4 Jul 2023 06:21:38 GMT",
"version": "v1"
}
] |
2023-11-07
|
[
[
"Jafarian",
"Iman",
""
]
] |
The Internet of Drones has emerged as a transformative technology with applications spanning various domains, including surveillance, delivery services, and disaster management. Secure communication between controller users and drones is paramount to ensure the transmitted data's confidentiality, integrity, and authenticity. Key agreement protocols are crucial in establishing secure communication channels between users and drones, enabling them to exchange sensitive information and control their operations securely. Recently Nikooghadam et al. proposed a lightweight mutual authentication and key agreement protocol for the Internet of drones. In this article, we provide a descriptive analysis of their proposed scheme and prove that Nikooghadam et al.'s scheme is vulnerable to user tracking attacks and stolen verifier attacks.
|
2309.05134
|
Effie Daum
|
Effie Daum, Maxime Vaidis, Fran\c{c}ois Pomerleau
|
Benchmarking ground truth trajectories with robotic total stations
|
Accepted and presented at IROS23, Workshop on Methods for Objective
Comparison of Results in Intelligent Robotics Research
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Benchmarks stand as vital cornerstones in elevating SLAM algorithms within
mobile robotics. Consequently, ensuring accurate and reproducible ground truth
generation is vital for fair evaluation. A majority of outdoor ground truths
are generated by GNSS, which can lead to discrepancies over time, especially in
covered areas. However, research showed that RTS setups are more precise and
can alternatively be used to generate these ground truths. In our work, we
compare both RTS and GNSS systems' precision and repeatability through a set of
experiments conducted weeks and months apart in the same area. We demonstrated
that RTS setups give more reproducible results, with disparities having a
median value of 8.6 mm compared to a median value of 10.6 cm coming from a GNSS
setup. These results highlight that RTS can be considered to benchmark process
for SLAM algorithms with higher precision.
|
[
{
"created": "Sun, 10 Sep 2023 21:01:34 GMT",
"version": "v1"
},
{
"created": "Thu, 29 Feb 2024 17:39:49 GMT",
"version": "v2"
}
] |
2024-03-01
|
[
[
"Daum",
"Effie",
""
],
[
"Vaidis",
"Maxime",
""
],
[
"Pomerleau",
"François",
""
]
] |
Benchmarks stand as vital cornerstones in elevating SLAM algorithms within mobile robotics. Consequently, ensuring accurate and reproducible ground truth generation is vital for fair evaluation. A majority of outdoor ground truths are generated by GNSS, which can lead to discrepancies over time, especially in covered areas. However, research showed that RTS setups are more precise and can alternatively be used to generate these ground truths. In our work, we compare both RTS and GNSS systems' precision and repeatability through a set of experiments conducted weeks and months apart in the same area. We demonstrated that RTS setups give more reproducible results, with disparities having a median value of 8.6 mm compared to a median value of 10.6 cm coming from a GNSS setup. These results highlight that RTS can be considered to benchmark process for SLAM algorithms with higher precision.
|
1412.5903
|
Max Jaderberg
|
Max Jaderberg, Karen Simonyan, Andrea Vedaldi, Andrew Zisserman
|
Deep Structured Output Learning for Unconstrained Text Recognition
|
arXiv admin note: text overlap with arXiv:1406.2227
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We develop a representation suitable for the unconstrained recognition of
words in natural images: the general case of no fixed lexicon and unknown
length.
To this end we propose a convolutional neural network (CNN) based
architecture which incorporates a Conditional Random Field (CRF) graphical
model, taking the whole word image as a single input. The unaries of the CRF
are provided by a CNN that predicts characters at each position of the output,
while higher order terms are provided by another CNN that detects the presence
of N-grams. We show that this entire model (CRF, character predictor, N-gram
predictor) can be jointly optimised by back-propagating the structured output
loss, essentially requiring the system to perform multi-task learning, and
training uses purely synthetically generated data. The resulting model is a
more accurate system on standard real-world text recognition benchmarks than
character prediction alone, setting a benchmark for systems that have not been
trained on a particular lexicon. In addition, our model achieves
state-of-the-art accuracy in lexicon-constrained scenarios, without being
specifically modelled for constrained recognition. To test the generalisation
of our model, we also perform experiments with random alpha-numeric strings to
evaluate the method when no visual language model is applicable.
|
[
{
"created": "Thu, 18 Dec 2014 15:49:46 GMT",
"version": "v1"
},
{
"created": "Fri, 19 Dec 2014 17:37:37 GMT",
"version": "v2"
},
{
"created": "Mon, 22 Dec 2014 19:56:48 GMT",
"version": "v3"
},
{
"created": "Tue, 23 Dec 2014 13:17:59 GMT",
"version": "v4"
},
{
"created": "Fri, 10 Apr 2015 15:36:01 GMT",
"version": "v5"
}
] |
2015-04-13
|
[
[
"Jaderberg",
"Max",
""
],
[
"Simonyan",
"Karen",
""
],
[
"Vedaldi",
"Andrea",
""
],
[
"Zisserman",
"Andrew",
""
]
] |
We develop a representation suitable for the unconstrained recognition of words in natural images: the general case of no fixed lexicon and unknown length. To this end we propose a convolutional neural network (CNN) based architecture which incorporates a Conditional Random Field (CRF) graphical model, taking the whole word image as a single input. The unaries of the CRF are provided by a CNN that predicts characters at each position of the output, while higher order terms are provided by another CNN that detects the presence of N-grams. We show that this entire model (CRF, character predictor, N-gram predictor) can be jointly optimised by back-propagating the structured output loss, essentially requiring the system to perform multi-task learning, and training uses purely synthetically generated data. The resulting model is a more accurate system on standard real-world text recognition benchmarks than character prediction alone, setting a benchmark for systems that have not been trained on a particular lexicon. In addition, our model achieves state-of-the-art accuracy in lexicon-constrained scenarios, without being specifically modelled for constrained recognition. To test the generalisation of our model, we also perform experiments with random alpha-numeric strings to evaluate the method when no visual language model is applicable.
|
1506.06833
|
Francis Ferraro
|
Francis Ferraro, Nasrin Mostafazadeh, Ting-Hao (Kenneth) Huang, Lucy
Vanderwende, Jacob Devlin, Michel Galley, Margaret Mitchell
|
A Survey of Current Datasets for Vision and Language Research
|
To appear in EMNLP 2015, short proceedings. Dataset analysis and
discussion expanded, including an initial examination into reporting bias for
one of them. F.F. and N.M. contributed equally to this work
| null | null | null |
cs.CL cs.AI cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Integrating vision and language has long been a dream in work on artificial
intelligence (AI). In the past two years, we have witnessed an explosion of
work that brings together vision and language from images to videos and beyond.
The available corpora have played a crucial role in advancing this area of
research. In this paper, we propose a set of quality metrics for evaluating and
analyzing the vision & language datasets and categorize them accordingly. Our
analyses show that the most recent datasets have been using more complex
language and more abstract concepts, however, there are different strengths and
weaknesses in each.
|
[
{
"created": "Tue, 23 Jun 2015 00:59:27 GMT",
"version": "v1"
},
{
"created": "Wed, 19 Aug 2015 04:33:37 GMT",
"version": "v2"
}
] |
2021-08-23
|
[
[
"Ferraro",
"Francis",
"",
"Kenneth"
],
[
"Mostafazadeh",
"Nasrin",
"",
"Kenneth"
],
[
"Ting-Hao",
"",
"",
"Kenneth"
],
[
"Huang",
"",
""
],
[
"Vanderwende",
"Lucy",
""
],
[
"Devlin",
"Jacob",
""
],
[
"Galley",
"Michel",
""
],
[
"Mitchell",
"Margaret",
""
]
] |
Integrating vision and language has long been a dream in work on artificial intelligence (AI). In the past two years, we have witnessed an explosion of work that brings together vision and language from images to videos and beyond. The available corpora have played a crucial role in advancing this area of research. In this paper, we propose a set of quality metrics for evaluating and analyzing the vision & language datasets and categorize them accordingly. Our analyses show that the most recent datasets have been using more complex language and more abstract concepts, however, there are different strengths and weaknesses in each.
|
1403.5802
|
Randall Smith
|
Lisa F. Smith, Kimberly K. Arcand, Jeffrey K. Smith, Randall K. Smith,
Jay Bookbinder, Megan Watzke
|
Examining Perceptions of Astronomy Images Across Mobile Platforms
|
23 pages, 1 figure; Journal of Science Communication, in press
| null | null | null |
cs.HC astro-ph.IM physics.ed-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Modern society has led many people to become consumers of data unlike
previous generations. How this shift in the way information is communicated and
received - including in areas of science - and affects perception and
comprehension is still an open question. This study examined one aspect of this
digital age: perceptions of astronomical images and their labels, on mobile
platforms. Participants were n = 2183 respondents to an online survey, and two
focus groups (n = 12 astrophysicists; n = 11 lay public). Online participants
were randomly assigned to 1 of 12 images, and compared two label formats. Focus
groups compared mobile devices and label formats. Results indicated that the
size and quality of the images on the mobile devices affected label
comprehension and engagement. The question label format was significantly
preferred to the fun fact. Results are discussed in terms of effective science
communication using technology.
|
[
{
"created": "Sun, 23 Mar 2014 21:03:28 GMT",
"version": "v1"
}
] |
2014-03-25
|
[
[
"Smith",
"Lisa F.",
""
],
[
"Arcand",
"Kimberly K.",
""
],
[
"Smith",
"Jeffrey K.",
""
],
[
"Smith",
"Randall K.",
""
],
[
"Bookbinder",
"Jay",
""
],
[
"Watzke",
"Megan",
""
]
] |
Modern society has led many people to become consumers of data unlike previous generations. How this shift in the way information is communicated and received - including in areas of science - and affects perception and comprehension is still an open question. This study examined one aspect of this digital age: perceptions of astronomical images and their labels, on mobile platforms. Participants were n = 2183 respondents to an online survey, and two focus groups (n = 12 astrophysicists; n = 11 lay public). Online participants were randomly assigned to 1 of 12 images, and compared two label formats. Focus groups compared mobile devices and label formats. Results indicated that the size and quality of the images on the mobile devices affected label comprehension and engagement. The question label format was significantly preferred to the fun fact. Results are discussed in terms of effective science communication using technology.
|
2407.08071
|
Noah Haeske
|
Noah Haeske
|
Viability of Low-Cost Infrared Sensors for Short Range Tracking
|
For program, see
https://github.com/noah-haeske/research/blob/main/experimentProgram.py For
sensor datasheet, see
https://www.st.com/en/imaging-and-photonics-solutions/vl53l7cx.html#overview
| null | null | null |
cs.RO cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A classic task in robotics is tracking a target in the external environment.
There are several well-documented approaches to this problem. This paper
presents a novel approach to this problem using infrared time of flight
sensors. The use of infrared time of flight sensors is not common as a tracking
approach, typically used for simple motion detectors. However, with the
approach highlighted in this paper they can be used to accurately track the
position of a moving subject. Traditional approaches to the tracking problem
often include cameras, or ultrasonic sensors. These approaches can be expensive
and overcompensating in some use cases. The method focused on in this paper can
be superior in terms of cost and simplicity.
|
[
{
"created": "Wed, 10 Jul 2024 22:15:48 GMT",
"version": "v1"
}
] |
2024-07-12
|
[
[
"Haeske",
"Noah",
""
]
] |
A classic task in robotics is tracking a target in the external environment. There are several well-documented approaches to this problem. This paper presents a novel approach to this problem using infrared time of flight sensors. The use of infrared time of flight sensors is not common as a tracking approach, typically used for simple motion detectors. However, with the approach highlighted in this paper they can be used to accurately track the position of a moving subject. Traditional approaches to the tracking problem often include cameras, or ultrasonic sensors. These approaches can be expensive and overcompensating in some use cases. The method focused on in this paper can be superior in terms of cost and simplicity.
|
2011.02578
|
Kihyuk Sohn
|
Kihyuk Sohn, Chun-Liang Li, Jinsung Yoon, Minho Jin, Tomas Pfister
|
Learning and Evaluating Representations for Deep One-class
Classification
|
Published at International Conference on Learning Representation
(ICLR) 2021. The first two authors contributed equally
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We present a two-stage framework for deep one-class classification. We first
learn self-supervised representations from one-class data, and then build
one-class classifiers on learned representations. The framework not only allows
to learn better representations, but also permits building one-class
classifiers that are faithful to the target task. We argue that classifiers
inspired by the statistical perspective in generative or discriminative models
are more effective than existing approaches, such as a normality score from a
surrogate classifier. We thoroughly evaluate different self-supervised
representation learning algorithms under the proposed framework for one-class
classification. Moreover, we present a novel distribution-augmented contrastive
learning that extends training distributions via data augmentation to obstruct
the uniformity of contrastive representations. In experiments, we demonstrate
state-of-the-art performance on visual domain one-class classification
benchmarks, including novelty and anomaly detection. Finally, we present visual
explanations, confirming that the decision-making process of deep one-class
classifiers is intuitive to humans. The code is available at
https://github.com/google-research/deep_representation_one_class.
|
[
{
"created": "Wed, 4 Nov 2020 23:33:41 GMT",
"version": "v1"
},
{
"created": "Thu, 25 Mar 2021 23:11:23 GMT",
"version": "v2"
}
] |
2021-03-29
|
[
[
"Sohn",
"Kihyuk",
""
],
[
"Li",
"Chun-Liang",
""
],
[
"Yoon",
"Jinsung",
""
],
[
"Jin",
"Minho",
""
],
[
"Pfister",
"Tomas",
""
]
] |
We present a two-stage framework for deep one-class classification. We first learn self-supervised representations from one-class data, and then build one-class classifiers on learned representations. The framework not only allows to learn better representations, but also permits building one-class classifiers that are faithful to the target task. We argue that classifiers inspired by the statistical perspective in generative or discriminative models are more effective than existing approaches, such as a normality score from a surrogate classifier. We thoroughly evaluate different self-supervised representation learning algorithms under the proposed framework for one-class classification. Moreover, we present a novel distribution-augmented contrastive learning that extends training distributions via data augmentation to obstruct the uniformity of contrastive representations. In experiments, we demonstrate state-of-the-art performance on visual domain one-class classification benchmarks, including novelty and anomaly detection. Finally, we present visual explanations, confirming that the decision-making process of deep one-class classifiers is intuitive to humans. The code is available at https://github.com/google-research/deep_representation_one_class.
|
2007.01839
|
Hado van Hasselt
|
Hado van Hasselt, Sephora Madjiheurem, Matteo Hessel, David Silver,
Andr\'e Barreto, Diana Borsa
|
Expected Eligibility Traces
|
AAAI, distinguished paper award
| null | null | null |
cs.LG cs.AI stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The question of how to determine which states and actions are responsible for
a certain outcome is known as the credit assignment problem and remains a
central research question in reinforcement learning and artificial
intelligence. Eligibility traces enable efficient credit assignment to the
recent sequence of states and actions experienced by the agent, but not to
counterfactual sequences that could also have led to the current state. In this
work, we introduce expected eligibility traces. Expected traces allow, with a
single update, to update states and actions that could have preceded the
current state, even if they did not do so on this occasion. We discuss when
expected traces provide benefits over classic (instantaneous) traces in
temporal-difference learning, and show that sometimes substantial improvements
can be attained. We provide a way to smoothly interpolate between instantaneous
and expected traces by a mechanism similar to bootstrapping, which ensures that
the resulting algorithm is a strict generalisation of TD($\lambda$). Finally,
we discuss possible extensions and connections to related ideas, such as
successor features.
|
[
{
"created": "Fri, 3 Jul 2020 17:46:16 GMT",
"version": "v1"
},
{
"created": "Mon, 8 Feb 2021 13:02:30 GMT",
"version": "v2"
}
] |
2021-02-09
|
[
[
"van Hasselt",
"Hado",
""
],
[
"Madjiheurem",
"Sephora",
""
],
[
"Hessel",
"Matteo",
""
],
[
"Silver",
"David",
""
],
[
"Barreto",
"André",
""
],
[
"Borsa",
"Diana",
""
]
] |
The question of how to determine which states and actions are responsible for a certain outcome is known as the credit assignment problem and remains a central research question in reinforcement learning and artificial intelligence. Eligibility traces enable efficient credit assignment to the recent sequence of states and actions experienced by the agent, but not to counterfactual sequences that could also have led to the current state. In this work, we introduce expected eligibility traces. Expected traces allow, with a single update, to update states and actions that could have preceded the current state, even if they did not do so on this occasion. We discuss when expected traces provide benefits over classic (instantaneous) traces in temporal-difference learning, and show that sometimes substantial improvements can be attained. We provide a way to smoothly interpolate between instantaneous and expected traces by a mechanism similar to bootstrapping, which ensures that the resulting algorithm is a strict generalisation of TD($\lambda$). Finally, we discuss possible extensions and connections to related ideas, such as successor features.
|
2311.07096
|
Seyedkhashayar Hashemi
|
Seyedkhashayar Hashemi, Hai Jiang and Masoud Ardakani
|
Optimal Configuration of Reconfigurable Intelligent Surfaces with
Arbitrary Discrete Phase Shifts
| null | null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We address the reflection optimization problem for a reconfigurable
intelligent surface (RIS), where the RIS elements feature a set of
non-uniformly spaced discrete phase shifts. This is motivated by the actual
behavior of practical RIS elements, where it is shown that a uniform phase
shift assumption is not realistic. A problem is formulated to find the optimal
refection amplitudes and reflection phase shifts of the RIS elements such that
the channel capacity of the target user is maximized. We first prove that in
the optimal configuration, each RIS element is either turned off or operates at
maximum amplitude. We then develop a method that finds the optimal reflection
amplitudes and phases with complexity linear in the number of RIS elements.
Some new and interesting insight into the reflection optimization problem is
also provided.
|
[
{
"created": "Mon, 13 Nov 2023 05:52:55 GMT",
"version": "v1"
}
] |
2023-11-14
|
[
[
"Hashemi",
"Seyedkhashayar",
""
],
[
"Jiang",
"Hai",
""
],
[
"Ardakani",
"Masoud",
""
]
] |
We address the reflection optimization problem for a reconfigurable intelligent surface (RIS), where the RIS elements feature a set of non-uniformly spaced discrete phase shifts. This is motivated by the actual behavior of practical RIS elements, where it is shown that a uniform phase shift assumption is not realistic. A problem is formulated to find the optimal refection amplitudes and reflection phase shifts of the RIS elements such that the channel capacity of the target user is maximized. We first prove that in the optimal configuration, each RIS element is either turned off or operates at maximum amplitude. We then develop a method that finds the optimal reflection amplitudes and phases with complexity linear in the number of RIS elements. Some new and interesting insight into the reflection optimization problem is also provided.
|
1412.8712
|
Stavros Nikolopoulos D.
|
Stavros D. Nikolopoulos and Iosif Polenakis
|
Detecting Malicious Code by Exploiting Dependencies of System-call
Groups
|
21 pages, 4 figures
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we present an elaborated graph-based algorithmic technique for
efficient malware detection. More precisely, we utilize the system-call
dependency graphs (or, for short ScD graphs), obtained by capturing taint
analysis traces and a set of various similarity metrics in order to detect
whether an unknown test sample is a malicious or a benign one. For the sake of
generalization, we decide to empower our model against strong mutations by
applying our detection technique on a weighted directed graph resulting from
ScD graph after grouping disjoint subsets of its vertices. Additionally, we
have developed a similarity metric, which we call NP-similarity, that combines
qualitative, quantitative, and relational characteristics that are spread among
the members of known malware families to archives a clear distinction between
graph-representations of malware and the ones of benign software. Finally, we
evaluate our detection model and compare our results against the results
achieved by a variety of techniques proving the potentials of our model.
|
[
{
"created": "Tue, 30 Dec 2014 18:06:23 GMT",
"version": "v1"
}
] |
2014-12-31
|
[
[
"Nikolopoulos",
"Stavros D.",
""
],
[
"Polenakis",
"Iosif",
""
]
] |
In this paper we present an elaborated graph-based algorithmic technique for efficient malware detection. More precisely, we utilize the system-call dependency graphs (or, for short ScD graphs), obtained by capturing taint analysis traces and a set of various similarity metrics in order to detect whether an unknown test sample is a malicious or a benign one. For the sake of generalization, we decide to empower our model against strong mutations by applying our detection technique on a weighted directed graph resulting from ScD graph after grouping disjoint subsets of its vertices. Additionally, we have developed a similarity metric, which we call NP-similarity, that combines qualitative, quantitative, and relational characteristics that are spread among the members of known malware families to archives a clear distinction between graph-representations of malware and the ones of benign software. Finally, we evaluate our detection model and compare our results against the results achieved by a variety of techniques proving the potentials of our model.
|
2005.05359
|
Jianwei Wu
|
Jianwei Wu, James Clause
|
A Pattern-based Approach to Detect and Improve Non-descriptive Test
Names
|
Accepted by The Journal of Systems & Software
|
The Journal of Systems & Software 168C (2020) 110639
|
10.1016/j.jss.2020.110639
| null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Unit tests are an important artifact that supports the software development
process in several ways. For example, when a test fails, its name can provide
the first step towards understanding the purpose of the test. Unfortunately,
unit tests often lack descriptive names. In this paper, we propose a new,
pattern-based approach that can help developers improve the quality of test
names of JUnit tests by making them more descriptive. It does this by detecting
non-descriptive test names and in some cases, providing additional information
about how the name can be improved. Our approach was assessed using an
empirical evaluation on 34352 JUnit tests. The results of the evaluation show
that the approach is feasible, accurate, and useful at discriminating
descriptive and non-descriptive names with a 95% true-positive rate.
|
[
{
"created": "Mon, 11 May 2020 18:08:35 GMT",
"version": "v1"
}
] |
2020-05-20
|
[
[
"Wu",
"Jianwei",
""
],
[
"Clause",
"James",
""
]
] |
Unit tests are an important artifact that supports the software development process in several ways. For example, when a test fails, its name can provide the first step towards understanding the purpose of the test. Unfortunately, unit tests often lack descriptive names. In this paper, we propose a new, pattern-based approach that can help developers improve the quality of test names of JUnit tests by making them more descriptive. It does this by detecting non-descriptive test names and in some cases, providing additional information about how the name can be improved. Our approach was assessed using an empirical evaluation on 34352 JUnit tests. The results of the evaluation show that the approach is feasible, accurate, and useful at discriminating descriptive and non-descriptive names with a 95% true-positive rate.
|
1707.07591
|
Timo Schick
|
Timo Schick
|
Transition-Based Generation from Abstract Meaning Representations
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work addresses the task of generating English sentences from Abstract
Meaning Representation (AMR) graphs. To cope with this task, we transform each
input AMR graph into a structure similar to a dependency tree and annotate it
with syntactic information by applying various predefined actions to it.
Subsequently, a sentence is obtained from this tree structure by visiting its
nodes in a specific order. We train maximum entropy models to estimate the
probability of each individual action and devise an algorithm that efficiently
approximates the best sequence of actions to be applied. Using a substandard
language model, our generator achieves a Bleu score of 27.4 on the LDC2014T12
test set, the best result reported so far without using silver standard
annotations from another corpus as additional training data.
|
[
{
"created": "Mon, 24 Jul 2017 14:52:32 GMT",
"version": "v1"
}
] |
2017-07-25
|
[
[
"Schick",
"Timo",
""
]
] |
This work addresses the task of generating English sentences from Abstract Meaning Representation (AMR) graphs. To cope with this task, we transform each input AMR graph into a structure similar to a dependency tree and annotate it with syntactic information by applying various predefined actions to it. Subsequently, a sentence is obtained from this tree structure by visiting its nodes in a specific order. We train maximum entropy models to estimate the probability of each individual action and devise an algorithm that efficiently approximates the best sequence of actions to be applied. Using a substandard language model, our generator achieves a Bleu score of 27.4 on the LDC2014T12 test set, the best result reported so far without using silver standard annotations from another corpus as additional training data.
|
2001.01289
|
Pawe{\l} Gawrychowski
|
Bart{\l}omiej Dudek, Pawe{\l} Gawrychowski, Tatiana Starikovskaya
|
All non-trivial variants of 3-LDT are equivalent
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The popular 3-SUM conjecture states that there is no strongly subquadratic
time algorithm for checking if a given set of integers contains three distinct
elements that sum up to zero. A closely related problem is to check if a given
set of integers contains distinct $x_1, x_2, x_3$ such that $x_1+x_2=2x_3$.
This can be reduced to 3-SUM in almost-linear time, but surprisingly a reverse
reduction establishing 3-SUM hardness was not known.
We provide such a reduction, thus resolving an open question of Erickson. In
fact, we consider a more general problem called 3-LDT parameterized by integer
parameters $\alpha_1, \alpha_2, \alpha_3$ and $t$. In this problem, we need to
check if a given set of integers contains distinct elements $x_1, x_2, x_3$
such that $\alpha_1 x_1+\alpha_2 x_2 +\alpha_3 x_3 = t$. For some combinations
of the parameters, every instance of this problem is a NO-instance or there
exists a simple almost-linear time algorithm. We call such variants trivial. We
prove that all non-trivial variants of 3-LDT are equivalent under subquadratic
reductions. Our main technical contribution is an efficient deterministic
procedure based on the famous Behrend's construction that partitions a given
set of integers into few subsets that avoid a chosen linear equation.
|
[
{
"created": "Sun, 5 Jan 2020 18:43:04 GMT",
"version": "v1"
}
] |
2020-01-07
|
[
[
"Dudek",
"Bartłomiej",
""
],
[
"Gawrychowski",
"Paweł",
""
],
[
"Starikovskaya",
"Tatiana",
""
]
] |
The popular 3-SUM conjecture states that there is no strongly subquadratic time algorithm for checking if a given set of integers contains three distinct elements that sum up to zero. A closely related problem is to check if a given set of integers contains distinct $x_1, x_2, x_3$ such that $x_1+x_2=2x_3$. This can be reduced to 3-SUM in almost-linear time, but surprisingly a reverse reduction establishing 3-SUM hardness was not known. We provide such a reduction, thus resolving an open question of Erickson. In fact, we consider a more general problem called 3-LDT parameterized by integer parameters $\alpha_1, \alpha_2, \alpha_3$ and $t$. In this problem, we need to check if a given set of integers contains distinct elements $x_1, x_2, x_3$ such that $\alpha_1 x_1+\alpha_2 x_2 +\alpha_3 x_3 = t$. For some combinations of the parameters, every instance of this problem is a NO-instance or there exists a simple almost-linear time algorithm. We call such variants trivial. We prove that all non-trivial variants of 3-LDT are equivalent under subquadratic reductions. Our main technical contribution is an efficient deterministic procedure based on the famous Behrend's construction that partitions a given set of integers into few subsets that avoid a chosen linear equation.
|
2310.12454
|
Jinhui Yin
|
You Li, Jinhui Yin and Yuming Lin
|
Rethinking the Construction of Effective Metrics for Understanding the
Mechanisms of Pretrained Language Models
|
Accepted by Findings of EMNLP2023
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Pretrained language models are expected to effectively map input text to a
set of vectors while preserving the inherent relationships within the text.
Consequently, designing a white-box model to compute metrics that reflect the
presence of specific internal relations in these vectors has become a common
approach for post-hoc interpretability analysis of pretrained language models.
However, achieving interpretability in white-box models and ensuring the rigor
of metric computation becomes challenging when the source model lacks inherent
interpretability. Therefore, in this paper, we discuss striking a balance in
this trade-off and propose a novel line to constructing metrics for
understanding the mechanisms of pretrained language models. We have
specifically designed a family of metrics along this line of investigation, and
the model used to compute these metrics is referred to as the tree topological
probe. We conducted measurements on BERT-large by using these metrics. Based on
the experimental results, we propose a speculation regarding the working
mechanism of BERT-like pretrained language models, as well as a strategy for
enhancing fine-tuning performance by leveraging the topological probe to
improve specific submodules.
|
[
{
"created": "Thu, 19 Oct 2023 04:16:40 GMT",
"version": "v1"
}
] |
2023-10-20
|
[
[
"Li",
"You",
""
],
[
"Yin",
"Jinhui",
""
],
[
"Lin",
"Yuming",
""
]
] |
Pretrained language models are expected to effectively map input text to a set of vectors while preserving the inherent relationships within the text. Consequently, designing a white-box model to compute metrics that reflect the presence of specific internal relations in these vectors has become a common approach for post-hoc interpretability analysis of pretrained language models. However, achieving interpretability in white-box models and ensuring the rigor of metric computation becomes challenging when the source model lacks inherent interpretability. Therefore, in this paper, we discuss striking a balance in this trade-off and propose a novel line to constructing metrics for understanding the mechanisms of pretrained language models. We have specifically designed a family of metrics along this line of investigation, and the model used to compute these metrics is referred to as the tree topological probe. We conducted measurements on BERT-large by using these metrics. Based on the experimental results, we propose a speculation regarding the working mechanism of BERT-like pretrained language models, as well as a strategy for enhancing fine-tuning performance by leveraging the topological probe to improve specific submodules.
|
2102.10204
|
Puoya Tabaghi
|
Puoya Tabaghi, Chao Pan, Eli Chien, Jianhao Peng, Olgica Milenkovic
|
Linear Classifiers in Product Space Forms
| null | null | null | null |
cs.LG stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
Embedding methods for product spaces are powerful techniques for
low-distortion and low-dimensional representation of complex data structures.
Here, we address the new problem of linear classification in product space
forms -- products of Euclidean, spherical, and hyperbolic spaces. First, we
describe novel formulations for linear classifiers on a Riemannian manifold
using geodesics and Riemannian metrics which generalize straight lines and
inner products in vector spaces. Second, we prove that linear classifiers in
$d$-dimensional space forms of any curvature have the same expressive power,
i.e., they can shatter exactly $d+1$ points. Third, we formalize linear
classifiers in product space forms, describe the first known perceptron and
support vector machine classifiers for such spaces and establish rigorous
convergence results for perceptrons. Moreover, we prove that the
Vapnik-Chervonenkis dimension of linear classifiers in a product space form of
dimension $d$ is \emph{at least} $d+1$. We support our theoretical findings
with simulations on several datasets, including synthetic data, image data, and
single-cell RNA sequencing (scRNA-seq) data. The results show that
classification in low-dimensional product space forms for scRNA-seq data
offers, on average, a performance improvement of $\sim15\%$ when compared to
that in Euclidean spaces of the same dimension.
|
[
{
"created": "Fri, 19 Feb 2021 23:29:03 GMT",
"version": "v1"
},
{
"created": "Wed, 16 Jun 2021 14:56:49 GMT",
"version": "v2"
},
{
"created": "Mon, 3 Jan 2022 16:14:34 GMT",
"version": "v3"
}
] |
2022-01-04
|
[
[
"Tabaghi",
"Puoya",
""
],
[
"Pan",
"Chao",
""
],
[
"Chien",
"Eli",
""
],
[
"Peng",
"Jianhao",
""
],
[
"Milenkovic",
"Olgica",
""
]
] |
Embedding methods for product spaces are powerful techniques for low-distortion and low-dimensional representation of complex data structures. Here, we address the new problem of linear classification in product space forms -- products of Euclidean, spherical, and hyperbolic spaces. First, we describe novel formulations for linear classifiers on a Riemannian manifold using geodesics and Riemannian metrics which generalize straight lines and inner products in vector spaces. Second, we prove that linear classifiers in $d$-dimensional space forms of any curvature have the same expressive power, i.e., they can shatter exactly $d+1$ points. Third, we formalize linear classifiers in product space forms, describe the first known perceptron and support vector machine classifiers for such spaces and establish rigorous convergence results for perceptrons. Moreover, we prove that the Vapnik-Chervonenkis dimension of linear classifiers in a product space form of dimension $d$ is \emph{at least} $d+1$. We support our theoretical findings with simulations on several datasets, including synthetic data, image data, and single-cell RNA sequencing (scRNA-seq) data. The results show that classification in low-dimensional product space forms for scRNA-seq data offers, on average, a performance improvement of $\sim15\%$ when compared to that in Euclidean spaces of the same dimension.
|
0901.3314
|
Stephan Tinguely
|
Amos Lapidoth, Stephan Tinguely
|
Sending a Bi-Variate Gaussian over a Gaussian MAC
|
submitted to the IEEE Transactions on Information Theory
| null |
10.1109/ISIT.2006.261926
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study the power versus distortion trade-off for the distributed
transmission of a memoryless bi-variate Gaussian source over a two-to-one
average-power limited Gaussian multiple-access channel. In this problem, each
of two separate transmitters observes a different component of a memoryless
bi-variate Gaussian source. The two transmitters then describe their source
component to a common receiver via an average-power constrained Gaussian
multiple-access channel. From the output of the multiple-access channel, the
receiver wishes to reconstruct each source component with the least possible
expected squared-error distortion. Our interest is in characterizing the
distortion pairs that are simultaneously achievable on the two source
components.
We present sufficient conditions and necessary conditions for the
achievability of a distortion pair. These conditions are expressed as a
function of the channel signal-to-noise ratio (SNR) and of the source
correlation. In several cases the necessary conditions and sufficient
conditions are shown to agree. In particular, we show that if the channel SNR
is below a certain threshold, then an uncoded transmission scheme is optimal.
We also derive the precise high-SNR asymptotics of an optimal scheme.
|
[
{
"created": "Wed, 21 Jan 2009 19:14:57 GMT",
"version": "v1"
}
] |
2016-11-17
|
[
[
"Lapidoth",
"Amos",
""
],
[
"Tinguely",
"Stephan",
""
]
] |
We study the power versus distortion trade-off for the distributed transmission of a memoryless bi-variate Gaussian source over a two-to-one average-power limited Gaussian multiple-access channel. In this problem, each of two separate transmitters observes a different component of a memoryless bi-variate Gaussian source. The two transmitters then describe their source component to a common receiver via an average-power constrained Gaussian multiple-access channel. From the output of the multiple-access channel, the receiver wishes to reconstruct each source component with the least possible expected squared-error distortion. Our interest is in characterizing the distortion pairs that are simultaneously achievable on the two source components. We present sufficient conditions and necessary conditions for the achievability of a distortion pair. These conditions are expressed as a function of the channel signal-to-noise ratio (SNR) and of the source correlation. In several cases the necessary conditions and sufficient conditions are shown to agree. In particular, we show that if the channel SNR is below a certain threshold, then an uncoded transmission scheme is optimal. We also derive the precise high-SNR asymptotics of an optimal scheme.
|
2401.01923
|
Xin Wang
|
Xin Wang, Zhongwei Wan, Arvin Hekmati, Mingyu Zong, Samiul Alam, Mi
Zhang, Bhaskar Krishnamachari
|
IoT in the Era of Generative AI: Vision and Challenges
|
8 pages, 3 figures, 1 table
| null | null | null |
cs.DC cs.LG cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Advancements in Generative AI hold immense promise to push Internet of Things
(IoT) to the next level. In this article, we share our vision on IoT in the era
of Generative AI. We discuss some of the most important applications of
Generative AI in IoT-related domains. We also identify some of the most
critical challenges and discuss current gaps as well as promising opportunities
on enabling Generative AI for IoT. We hope this article can inspire new
research on IoT in the era of Generative AI.
|
[
{
"created": "Wed, 3 Jan 2024 18:08:57 GMT",
"version": "v1"
},
{
"created": "Sat, 6 Jan 2024 03:11:40 GMT",
"version": "v2"
},
{
"created": "Sun, 11 Aug 2024 15:31:20 GMT",
"version": "v3"
}
] |
2024-08-13
|
[
[
"Wang",
"Xin",
""
],
[
"Wan",
"Zhongwei",
""
],
[
"Hekmati",
"Arvin",
""
],
[
"Zong",
"Mingyu",
""
],
[
"Alam",
"Samiul",
""
],
[
"Zhang",
"Mi",
""
],
[
"Krishnamachari",
"Bhaskar",
""
]
] |
Advancements in Generative AI hold immense promise to push Internet of Things (IoT) to the next level. In this article, we share our vision on IoT in the era of Generative AI. We discuss some of the most important applications of Generative AI in IoT-related domains. We also identify some of the most critical challenges and discuss current gaps as well as promising opportunities on enabling Generative AI for IoT. We hope this article can inspire new research on IoT in the era of Generative AI.
|
2205.03750
|
Guangmo Tong
|
Yifan Wang and Guangmo Tong
|
Learnability of Competitive Threshold Models
|
IJCAI-ECAI 2022
| null | null | null |
cs.LG cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
Modeling the spread of social contagions is central to various applications
in social computing. In this paper, we study the learnability of the
competitive threshold model from a theoretical perspective. We demonstrate how
competitive threshold models can be seamlessly simulated by artificial neural
networks with finite VC dimensions, which enables analytical sample complexity
and generalization bounds. Based on the proposed hypothesis space, we design
efficient algorithms under the empirical risk minimization scheme. The
theoretical insights are finally translated into practical and explainable
modeling methods, the effectiveness of which is verified through a sanity check
over a few synthetic and real datasets. The experimental results promisingly
show that our method enjoys a decent performance without using excessive data
points, outperforming off-the-shelf methods.
|
[
{
"created": "Sun, 8 May 2022 01:11:51 GMT",
"version": "v1"
}
] |
2022-05-10
|
[
[
"Wang",
"Yifan",
""
],
[
"Tong",
"Guangmo",
""
]
] |
Modeling the spread of social contagions is central to various applications in social computing. In this paper, we study the learnability of the competitive threshold model from a theoretical perspective. We demonstrate how competitive threshold models can be seamlessly simulated by artificial neural networks with finite VC dimensions, which enables analytical sample complexity and generalization bounds. Based on the proposed hypothesis space, we design efficient algorithms under the empirical risk minimization scheme. The theoretical insights are finally translated into practical and explainable modeling methods, the effectiveness of which is verified through a sanity check over a few synthetic and real datasets. The experimental results promisingly show that our method enjoys a decent performance without using excessive data points, outperforming off-the-shelf methods.
|
2110.07720
|
Rangeet Pan
|
Rangeet Pan and Hridesh Rajan
|
Decomposing Convolutional Neural Networks into Reusable and Replaceable
Modules
|
Accepted at ICSE'22
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Training from scratch is the most common way to build a Convolutional Neural
Network (CNN) based model. What if we can build new CNN models by reusing parts
from previously build CNN models? What if we can improve a CNN model by
replacing (possibly faulty) parts with other parts? In both cases, instead of
training, can we identify the part responsible for each output class (module)
in the model(s) and reuse or replace only the desired output classes to build a
model? Prior work has proposed decomposing dense-based networks into modules
(one for each output class) to enable reusability and replaceability in various
scenarios. However, this work is limited to the dense layers and based on the
one-to-one relationship between the nodes in consecutive layers. Due to the
shared architecture in the CNN model, prior work cannot be adapted directly. In
this paper, we propose to decompose a CNN model used for image classification
problems into modules for each output class. These modules can further be
reused or replaced to build a new model. We have evaluated our approach with
CIFAR-10, CIFAR-100, and ImageNet tiny datasets with three variations of ResNet
models and found that enabling decomposition comes with a small cost (1.77% and
0.85% for top-1 and top-5 accuracy, respectively). Also, building a model by
reusing or replacing modules can be done with a 2.3% and 0.5% average loss of
accuracy. Furthermore, reusing and replacing these modules reduces CO2e
emission by ~37 times compared to training the model from scratch.
|
[
{
"created": "Mon, 11 Oct 2021 20:41:50 GMT",
"version": "v1"
},
{
"created": "Mon, 20 Dec 2021 15:43:58 GMT",
"version": "v2"
}
] |
2021-12-21
|
[
[
"Pan",
"Rangeet",
""
],
[
"Rajan",
"Hridesh",
""
]
] |
Training from scratch is the most common way to build a Convolutional Neural Network (CNN) based model. What if we can build new CNN models by reusing parts from previously build CNN models? What if we can improve a CNN model by replacing (possibly faulty) parts with other parts? In both cases, instead of training, can we identify the part responsible for each output class (module) in the model(s) and reuse or replace only the desired output classes to build a model? Prior work has proposed decomposing dense-based networks into modules (one for each output class) to enable reusability and replaceability in various scenarios. However, this work is limited to the dense layers and based on the one-to-one relationship between the nodes in consecutive layers. Due to the shared architecture in the CNN model, prior work cannot be adapted directly. In this paper, we propose to decompose a CNN model used for image classification problems into modules for each output class. These modules can further be reused or replaced to build a new model. We have evaluated our approach with CIFAR-10, CIFAR-100, and ImageNet tiny datasets with three variations of ResNet models and found that enabling decomposition comes with a small cost (1.77% and 0.85% for top-1 and top-5 accuracy, respectively). Also, building a model by reusing or replacing modules can be done with a 2.3% and 0.5% average loss of accuracy. Furthermore, reusing and replacing these modules reduces CO2e emission by ~37 times compared to training the model from scratch.
|
2207.05669
|
Matteo Bunino
|
Matteo Bunino
|
From Spectral Graph Convolutions to Large Scale Graph Convolutional
Networks
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Graph Convolutional Networks (GCNs) have been shown to be a powerful concept
that has been successfully applied to a large variety of tasks across many
domains over the past years. In this work we study the theory that paved the
way to the definition of GCN, including related parts of classical graph
theory. We also discuss and experimentally demonstrate key properties and
limitations of GCNs such as those caused by the statistical dependency of
samples, introduced by the edges of the graph, which causes the estimates of
the full gradient to be biased. Another limitation we discuss is the negative
impact of minibatch sampling on the model performance. As a consequence, during
parameter update, gradients are computed on the whole dataset, undermining
scalability to large graphs. To account for this, we research alternative
methods which allow to safely learn good parameters while sampling only a
subset of data per iteration. We reproduce the results reported in the work of
Kipf et al. and propose an implementation inspired to SIGN, which is a
sampling-free minibatch method. Eventually we compare the two implementations
on a benchmark dataset, proving that they are comparable in terms of prediction
accuracy for the task of semi-supervised node classification.
|
[
{
"created": "Tue, 12 Jul 2022 16:57:08 GMT",
"version": "v1"
}
] |
2022-07-13
|
[
[
"Bunino",
"Matteo",
""
]
] |
Graph Convolutional Networks (GCNs) have been shown to be a powerful concept that has been successfully applied to a large variety of tasks across many domains over the past years. In this work we study the theory that paved the way to the definition of GCN, including related parts of classical graph theory. We also discuss and experimentally demonstrate key properties and limitations of GCNs such as those caused by the statistical dependency of samples, introduced by the edges of the graph, which causes the estimates of the full gradient to be biased. Another limitation we discuss is the negative impact of minibatch sampling on the model performance. As a consequence, during parameter update, gradients are computed on the whole dataset, undermining scalability to large graphs. To account for this, we research alternative methods which allow to safely learn good parameters while sampling only a subset of data per iteration. We reproduce the results reported in the work of Kipf et al. and propose an implementation inspired to SIGN, which is a sampling-free minibatch method. Eventually we compare the two implementations on a benchmark dataset, proving that they are comparable in terms of prediction accuracy for the task of semi-supervised node classification.
|
2305.08596
|
Youngjin Jin
|
Youngjin Jin, Eugene Jang, Jian Cui, Jin-Woo Chung, Yongjae Lee,
Seungwon Shin
|
DarkBERT: A Language Model for the Dark Side of the Internet
|
9 pages (main paper), 17 pages (including bibliography and appendix),
to appear at the ACL 2023 Main Conference
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent research has suggested that there are clear differences in the
language used in the Dark Web compared to that of the Surface Web. As studies
on the Dark Web commonly require textual analysis of the domain, language
models specific to the Dark Web may provide valuable insights to researchers.
In this work, we introduce DarkBERT, a language model pretrained on Dark Web
data. We describe the steps taken to filter and compile the text data used to
train DarkBERT to combat the extreme lexical and structural diversity of the
Dark Web that may be detrimental to building a proper representation of the
domain. We evaluate DarkBERT and its vanilla counterpart along with other
widely used language models to validate the benefits that a Dark Web domain
specific model offers in various use cases. Our evaluations show that DarkBERT
outperforms current language models and may serve as a valuable resource for
future research on the Dark Web.
|
[
{
"created": "Mon, 15 May 2023 12:23:10 GMT",
"version": "v1"
},
{
"created": "Thu, 18 May 2023 05:02:29 GMT",
"version": "v2"
}
] |
2023-05-19
|
[
[
"Jin",
"Youngjin",
""
],
[
"Jang",
"Eugene",
""
],
[
"Cui",
"Jian",
""
],
[
"Chung",
"Jin-Woo",
""
],
[
"Lee",
"Yongjae",
""
],
[
"Shin",
"Seungwon",
""
]
] |
Recent research has suggested that there are clear differences in the language used in the Dark Web compared to that of the Surface Web. As studies on the Dark Web commonly require textual analysis of the domain, language models specific to the Dark Web may provide valuable insights to researchers. In this work, we introduce DarkBERT, a language model pretrained on Dark Web data. We describe the steps taken to filter and compile the text data used to train DarkBERT to combat the extreme lexical and structural diversity of the Dark Web that may be detrimental to building a proper representation of the domain. We evaluate DarkBERT and its vanilla counterpart along with other widely used language models to validate the benefits that a Dark Web domain specific model offers in various use cases. Our evaluations show that DarkBERT outperforms current language models and may serve as a valuable resource for future research on the Dark Web.
|
2203.13474
|
Erik Nijkamp Dr.
|
Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo
Zhou, Silvio Savarese, Caiming Xiong
|
CodeGen: An Open Large Language Model for Code with Multi-Turn Program
Synthesis
| null | null | null | null |
cs.LG cs.CL cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Program synthesis strives to generate a computer program as a solution to a
given problem specification, expressed with input-output examples or natural
language descriptions. The prevalence of large language models advances the
state-of-the-art for program synthesis, though limited training resources and
data impede open access to such models. To democratize this, we train and
release a family of large language models up to 16.1B parameters, called
CODEGEN, on natural language and programming language data, and open source the
training library JAXFORMER. We show the utility of the trained model by
demonstrating that it is competitive with the previous state-of-the-art on
zero-shot Python code generation on HumanEval. We further investigate the
multi-step paradigm for program synthesis, where a single program is factorized
into multiple prompts specifying subproblems. To this end, we construct an open
benchmark, Multi-Turn Programming Benchmark (MTPB), consisting of 115 diverse
problem sets that are factorized into multi-turn prompts. Our analysis on MTPB
shows that the same intent provided to CODEGEN in multi-turn fashion
significantly improves program synthesis over that provided as a single turn.
We make the training library JAXFORMER and model checkpoints available as open
source contribution: https://github.com/salesforce/CodeGen.
|
[
{
"created": "Fri, 25 Mar 2022 06:55:15 GMT",
"version": "v1"
},
{
"created": "Mon, 28 Mar 2022 17:10:30 GMT",
"version": "v2"
},
{
"created": "Wed, 30 Mar 2022 06:57:04 GMT",
"version": "v3"
},
{
"created": "Thu, 29 Sep 2022 20:43:54 GMT",
"version": "v4"
},
{
"created": "Mon, 27 Feb 2023 21:26:48 GMT",
"version": "v5"
}
] |
2023-03-01
|
[
[
"Nijkamp",
"Erik",
""
],
[
"Pang",
"Bo",
""
],
[
"Hayashi",
"Hiroaki",
""
],
[
"Tu",
"Lifu",
""
],
[
"Wang",
"Huan",
""
],
[
"Zhou",
"Yingbo",
""
],
[
"Savarese",
"Silvio",
""
],
[
"Xiong",
"Caiming",
""
]
] |
Program synthesis strives to generate a computer program as a solution to a given problem specification, expressed with input-output examples or natural language descriptions. The prevalence of large language models advances the state-of-the-art for program synthesis, though limited training resources and data impede open access to such models. To democratize this, we train and release a family of large language models up to 16.1B parameters, called CODEGEN, on natural language and programming language data, and open source the training library JAXFORMER. We show the utility of the trained model by demonstrating that it is competitive with the previous state-of-the-art on zero-shot Python code generation on HumanEval. We further investigate the multi-step paradigm for program synthesis, where a single program is factorized into multiple prompts specifying subproblems. To this end, we construct an open benchmark, Multi-Turn Programming Benchmark (MTPB), consisting of 115 diverse problem sets that are factorized into multi-turn prompts. Our analysis on MTPB shows that the same intent provided to CODEGEN in multi-turn fashion significantly improves program synthesis over that provided as a single turn. We make the training library JAXFORMER and model checkpoints available as open source contribution: https://github.com/salesforce/CodeGen.
|
2402.05962
|
Junfeng Fang
|
Junfeng Fang and Xinglin Li and Yongduo Sui and Yuan Gao and Guibin
Zhang and Kun Wang and Xiang Wang and Xiangnan He
|
EXGC: Bridging Efficiency and Explainability in Graph Condensation
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Graph representation learning on vast datasets, like web data, has made
significant strides. However, the associated computational and storage
overheads raise concerns. In sight of this, Graph condensation (GCond) has been
introduced to distill these large real datasets into a more concise yet
information-rich synthetic graph. Despite acceleration efforts, existing GCond
methods mainly grapple with efficiency, especially on expansive web data
graphs. Hence, in this work, we pinpoint two major inefficiencies of current
paradigms: (1) the concurrent updating of a vast parameter set, and (2)
pronounced parameter redundancy. To counteract these two limitations
correspondingly, we first (1) employ the Mean-Field variational approximation
for convergence acceleration, and then (2) propose the objective of Gradient
Information Bottleneck (GDIB) to prune redundancy. By incorporating the leading
explanation techniques (e.g., GNNExplainer and GSAT) to instantiate the GDIB,
our EXGC, the Efficient and eXplainable Graph Condensation method is proposed,
which can markedly boost efficiency and inject explainability. Our extensive
evaluations across eight datasets underscore EXGC's superiority and relevance.
Code is available at https://github.com/MangoKiller/EXGC.
|
[
{
"created": "Mon, 5 Feb 2024 06:03:38 GMT",
"version": "v1"
}
] |
2024-02-12
|
[
[
"Fang",
"Junfeng",
""
],
[
"Li",
"Xinglin",
""
],
[
"Sui",
"Yongduo",
""
],
[
"Gao",
"Yuan",
""
],
[
"Zhang",
"Guibin",
""
],
[
"Wang",
"Kun",
""
],
[
"Wang",
"Xiang",
""
],
[
"He",
"Xiangnan",
""
]
] |
Graph representation learning on vast datasets, like web data, has made significant strides. However, the associated computational and storage overheads raise concerns. In sight of this, Graph condensation (GCond) has been introduced to distill these large real datasets into a more concise yet information-rich synthetic graph. Despite acceleration efforts, existing GCond methods mainly grapple with efficiency, especially on expansive web data graphs. Hence, in this work, we pinpoint two major inefficiencies of current paradigms: (1) the concurrent updating of a vast parameter set, and (2) pronounced parameter redundancy. To counteract these two limitations correspondingly, we first (1) employ the Mean-Field variational approximation for convergence acceleration, and then (2) propose the objective of Gradient Information Bottleneck (GDIB) to prune redundancy. By incorporating the leading explanation techniques (e.g., GNNExplainer and GSAT) to instantiate the GDIB, our EXGC, the Efficient and eXplainable Graph Condensation method is proposed, which can markedly boost efficiency and inject explainability. Our extensive evaluations across eight datasets underscore EXGC's superiority and relevance. Code is available at https://github.com/MangoKiller/EXGC.
|
2403.11145
|
Fuqiang Niu
|
Fuqiang Niu, Min Yang, Ang Li, Baoquan Zhang, Xiaojiang Peng and Bowen
Zhang
|
A Challenge Dataset and Effective Models for Conversational Stance
Detection
| null |
LREC-COLING 2024
| null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Previous stance detection studies typically concentrate on evaluating stances
within individual instances, thereby exhibiting limitations in effectively
modeling multi-party discussions concerning the same specific topic, as
naturally transpire in authentic social media interactions. This constraint
arises primarily due to the scarcity of datasets that authentically replicate
real social media contexts, hindering the research progress of conversational
stance detection. In this paper, we introduce a new multi-turn conversation
stance detection dataset (called \textbf{MT-CSD}), which encompasses multiple
targets for conversational stance detection. To derive stances from this
challenging dataset, we propose a global-local attention network
(\textbf{GLAN}) to address both long and short-range dependencies inherent in
conversational data. Notably, even state-of-the-art stance detection methods,
exemplified by GLAN, exhibit an accuracy of only 50.47\%, highlighting the
persistent challenges in conversational stance detection. Furthermore, our
MT-CSD dataset serves as a valuable resource to catalyze advancements in
cross-domain stance detection, where a classifier is adapted from a different
yet related target. We believe that MT-CSD will contribute to advancing
real-world applications of stance detection research. Our source code, data,
and models are available at \url{https://github.com/nfq729/MT-CSD}.
|
[
{
"created": "Sun, 17 Mar 2024 08:51:01 GMT",
"version": "v1"
},
{
"created": "Thu, 21 Mar 2024 06:22:56 GMT",
"version": "v2"
}
] |
2024-03-22
|
[
[
"Niu",
"Fuqiang",
""
],
[
"Yang",
"Min",
""
],
[
"Li",
"Ang",
""
],
[
"Zhang",
"Baoquan",
""
],
[
"Peng",
"Xiaojiang",
""
],
[
"Zhang",
"Bowen",
""
]
] |
Previous stance detection studies typically concentrate on evaluating stances within individual instances, thereby exhibiting limitations in effectively modeling multi-party discussions concerning the same specific topic, as naturally transpire in authentic social media interactions. This constraint arises primarily due to the scarcity of datasets that authentically replicate real social media contexts, hindering the research progress of conversational stance detection. In this paper, we introduce a new multi-turn conversation stance detection dataset (called \textbf{MT-CSD}), which encompasses multiple targets for conversational stance detection. To derive stances from this challenging dataset, we propose a global-local attention network (\textbf{GLAN}) to address both long and short-range dependencies inherent in conversational data. Notably, even state-of-the-art stance detection methods, exemplified by GLAN, exhibit an accuracy of only 50.47\%, highlighting the persistent challenges in conversational stance detection. Furthermore, our MT-CSD dataset serves as a valuable resource to catalyze advancements in cross-domain stance detection, where a classifier is adapted from a different yet related target. We believe that MT-CSD will contribute to advancing real-world applications of stance detection research. Our source code, data, and models are available at \url{https://github.com/nfq729/MT-CSD}.
|
1905.09568
|
Stephan Fahrenkrog-Petersen
|
Stephan A. Fahrenkrog-Petersen, Niek Tax, Irene Teinemaa, Marlon
Dumas, Massimiliano de Leoni, Fabrizio Maria Maggi, Matthias Weidlich
|
Fire Now, Fire Later: Alarm-Based Systems for Prescriptive Process
Monitoring
| null | null | null | null |
cs.LG cs.AI stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Predictive process monitoring is a family of techniques to analyze events
produced during the execution of a business process in order to predict the
future state or the final outcome of running process instances. Existing
techniques in this field are able to predict, at each step of a process
instance, the likelihood that it will lead to an undesired outcome.These
techniques, however, focus on generating predictions and do not prescribe when
and how process workers should intervene to decrease the cost of undesired
outcomes. This paper proposes a framework for prescriptive process monitoring,
which extends predictive monitoring with the ability to generate alarms that
trigger interventions to prevent an undesired outcome or mitigate its effect.
The framework incorporates a parameterized cost model to assess the
cost-benefit trade-off of generating alarms. We show how to optimize the
generation of alarms given an event log of past process executions and a set of
cost model parameters. The proposed approaches are empirically evaluated using
a range of real-life event logs. The experimental results show that the net
cost of undesired outcomes can be minimized by changing the threshold for
generating alarms, as the process instance progresses. Moreover, introducing
delays for triggering alarms, instead of triggering them as soon as the
probability of an undesired outcome exceeds a threshold, leads to lower net
costs.
|
[
{
"created": "Thu, 23 May 2019 10:18:25 GMT",
"version": "v1"
},
{
"created": "Wed, 14 Oct 2020 12:33:08 GMT",
"version": "v2"
}
] |
2020-10-15
|
[
[
"Fahrenkrog-Petersen",
"Stephan A.",
""
],
[
"Tax",
"Niek",
""
],
[
"Teinemaa",
"Irene",
""
],
[
"Dumas",
"Marlon",
""
],
[
"de Leoni",
"Massimiliano",
""
],
[
"Maggi",
"Fabrizio Maria",
""
],
[
"Weidlich",
"Matthias",
""
]
] |
Predictive process monitoring is a family of techniques to analyze events produced during the execution of a business process in order to predict the future state or the final outcome of running process instances. Existing techniques in this field are able to predict, at each step of a process instance, the likelihood that it will lead to an undesired outcome.These techniques, however, focus on generating predictions and do not prescribe when and how process workers should intervene to decrease the cost of undesired outcomes. This paper proposes a framework for prescriptive process monitoring, which extends predictive monitoring with the ability to generate alarms that trigger interventions to prevent an undesired outcome or mitigate its effect. The framework incorporates a parameterized cost model to assess the cost-benefit trade-off of generating alarms. We show how to optimize the generation of alarms given an event log of past process executions and a set of cost model parameters. The proposed approaches are empirically evaluated using a range of real-life event logs. The experimental results show that the net cost of undesired outcomes can be minimized by changing the threshold for generating alarms, as the process instance progresses. Moreover, introducing delays for triggering alarms, instead of triggering them as soon as the probability of an undesired outcome exceeds a threshold, leads to lower net costs.
|
2304.00009
|
Siddarth Shandeep Singh
|
Siddarth Singh and Benjamin Rosman
|
The challenge of redundancy on multi-agent value factorisation
|
Published at the 22nd International Conference on Autonomous Agents
and Multiagent Systems (AAMAS 2023). 2 Pages, 1 Figure
| null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
In the field of cooperative multi-agent reinforcement learning (MARL), the
standard paradigm is the use of centralised training and decentralised
execution where a central critic conditions the policies of the cooperative
agents based on a central state. It has been shown, that in cases with large
numbers of redundant agents these methods become less effective. In a more
general case, there is likely to be a larger number of agents in an environment
than is required to solve the task. These redundant agents reduce performance
by enlarging the dimensionality of both the state space and and increasing the
size of the joint policy used to solve the environment. We propose leveraging
layerwise relevance propagation (LRP) to instead separate the learning of the
joint value function and generation of local reward signals and create a new
MARL algorithm: relevance decomposition network (RDN). We find that although
the performance of both baselines VDN and Qmix degrades with the number of
redundant agents, RDN is unaffected.
|
[
{
"created": "Tue, 28 Mar 2023 20:41:12 GMT",
"version": "v1"
}
] |
2023-04-04
|
[
[
"Singh",
"Siddarth",
""
],
[
"Rosman",
"Benjamin",
""
]
] |
In the field of cooperative multi-agent reinforcement learning (MARL), the standard paradigm is the use of centralised training and decentralised execution where a central critic conditions the policies of the cooperative agents based on a central state. It has been shown, that in cases with large numbers of redundant agents these methods become less effective. In a more general case, there is likely to be a larger number of agents in an environment than is required to solve the task. These redundant agents reduce performance by enlarging the dimensionality of both the state space and and increasing the size of the joint policy used to solve the environment. We propose leveraging layerwise relevance propagation (LRP) to instead separate the learning of the joint value function and generation of local reward signals and create a new MARL algorithm: relevance decomposition network (RDN). We find that although the performance of both baselines VDN and Qmix degrades with the number of redundant agents, RDN is unaffected.
|
1604.04893
|
Fouad Khan
|
Fouad Khan
|
An Initial Seed Selection Algorithm for K-means Clustering of
Georeferenced Data to Improve Replicability of Cluster Assignments for
Mapping Application
|
Applied Soft Computing 12 (2012)
| null |
10.1016/j.asoc.2012.07.021
| null |
cs.LG cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
K-means is one of the most widely used clustering algorithms in various
disciplines, especially for large datasets. However the method is known to be
highly sensitive to initial seed selection of cluster centers. K-means++ has
been proposed to overcome this problem and has been shown to have better
accuracy and computational efficiency than k-means. In many clustering problems
though -such as when classifying georeferenced data for mapping applications-
standardization of clustering methodology, specifically, the ability to arrive
at the same cluster assignment for every run of the method i.e. replicability
of the methodology, may be of greater significance than any perceived measure
of accuracy, especially when the solution is known to be non-unique, as in the
case of k-means clustering. Here we propose a simple initial seed selection
algorithm for k-means clustering along one attribute that draws initial cluster
boundaries along the 'deepest valleys' or greatest gaps in dataset. Thus, it
incorporates a measure to maximize distance between consecutive cluster centers
which augments the conventional k-means optimization for minimum distance
between cluster center and cluster members. Unlike existing initialization
methods, no additional parameters or degrees of freedom are introduced to the
clustering algorithm. This improves the replicability of cluster assignments by
as much as 100% over k-means and k-means++, virtually reducing the variance
over different runs to zero, without introducing any additional parameters to
the clustering process. Further, the proposed method is more computationally
efficient than k-means++ and in some cases, more accurate.
|
[
{
"created": "Sun, 17 Apr 2016 16:25:15 GMT",
"version": "v1"
}
] |
2016-04-19
|
[
[
"Khan",
"Fouad",
""
]
] |
K-means is one of the most widely used clustering algorithms in various disciplines, especially for large datasets. However the method is known to be highly sensitive to initial seed selection of cluster centers. K-means++ has been proposed to overcome this problem and has been shown to have better accuracy and computational efficiency than k-means. In many clustering problems though -such as when classifying georeferenced data for mapping applications- standardization of clustering methodology, specifically, the ability to arrive at the same cluster assignment for every run of the method i.e. replicability of the methodology, may be of greater significance than any perceived measure of accuracy, especially when the solution is known to be non-unique, as in the case of k-means clustering. Here we propose a simple initial seed selection algorithm for k-means clustering along one attribute that draws initial cluster boundaries along the 'deepest valleys' or greatest gaps in dataset. Thus, it incorporates a measure to maximize distance between consecutive cluster centers which augments the conventional k-means optimization for minimum distance between cluster center and cluster members. Unlike existing initialization methods, no additional parameters or degrees of freedom are introduced to the clustering algorithm. This improves the replicability of cluster assignments by as much as 100% over k-means and k-means++, virtually reducing the variance over different runs to zero, without introducing any additional parameters to the clustering process. Further, the proposed method is more computationally efficient than k-means++ and in some cases, more accurate.
|
2010.16115
|
Gabriel Hondet
|
Gabriel Hondet (DEDUCTEAM, LSV, ENS Paris Saclay, CNRS), Fr\'ed\'eric
Blanqui (DEDUCTEAM, LSV, ENS Paris Saclay, CNRS)
|
The New Rewriting Engine of Dedukti
| null |
FSCD 2020 - 5th International Conference on Formal Structures for
Computation and Deduction, Jun 2020, Paris, France. pp.16
|
10.4230/LIPIcs.FSCD.2020.35
| null |
cs.PL cs.SC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Dedukti is a type-checker for the $\lambda$$\Pi$-calculus modulo rewriting,
an extension of Edinburgh's logicalframework LF where functions and type
symbols can be defined by rewrite rules. It thereforecontains an engine for
rewriting LF terms and types according to the rewrite rules given by the user.A
key component of this engine is the matching algorithm to find which rules can
be fired. In thispaper, we describe the class of rewrite rules supported by
Dedukti and the new implementation ofthe matching algorithm. Dedukti supports
non-linear rewrite rules on terms with binders usinghigher-order
pattern-matching as in Combinatory Reduction Systems (CRS). The new
matchingalgorithm extends the technique of decision trees introduced by Luc
Maranget in the OCamlcompiler to this more general context.
|
[
{
"created": "Fri, 30 Oct 2020 08:19:19 GMT",
"version": "v1"
},
{
"created": "Tue, 15 Feb 2022 14:24:55 GMT",
"version": "v2"
}
] |
2022-02-16
|
[
[
"Hondet",
"Gabriel",
"",
"DEDUCTEAM, LSV, ENS Paris Saclay, CNRS"
],
[
"Blanqui",
"Frédéric",
"",
"DEDUCTEAM, LSV, ENS Paris Saclay, CNRS"
]
] |
Dedukti is a type-checker for the $\lambda$$\Pi$-calculus modulo rewriting, an extension of Edinburgh's logicalframework LF where functions and type symbols can be defined by rewrite rules. It thereforecontains an engine for rewriting LF terms and types according to the rewrite rules given by the user.A key component of this engine is the matching algorithm to find which rules can be fired. In thispaper, we describe the class of rewrite rules supported by Dedukti and the new implementation ofthe matching algorithm. Dedukti supports non-linear rewrite rules on terms with binders usinghigher-order pattern-matching as in Combinatory Reduction Systems (CRS). The new matchingalgorithm extends the technique of decision trees introduced by Luc Maranget in the OCamlcompiler to this more general context.
|
1908.09982
|
Genta Indra Winata
|
Genta Indra Winata, Andrea Madotto, Jamin Shin, Elham J. Barezi,
Pascale Fung
|
On the Effectiveness of Low-Rank Matrix Factorization for LSTM Model
Compression
|
Accepted in PACLIC 2019
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Despite their ubiquity in NLP tasks, Long Short-Term Memory (LSTM) networks
suffer from computational inefficiencies caused by inherent unparallelizable
recurrences, which further aggravates as LSTMs require more parameters for
larger memory capacity. In this paper, we propose to apply low-rank matrix
factorization (MF) algorithms to different recurrences in LSTMs, and explore
the effectiveness on different NLP tasks and model components. We discover that
additive recurrence is more important than multiplicative recurrence, and
explain this by identifying meaningful correlations between matrix norms and
compression performance. We compare our approach across two settings: 1)
compressing core LSTM recurrences in language models, 2) compressing biLSTM
layers of ELMo evaluated in three downstream NLP tasks.
|
[
{
"created": "Tue, 27 Aug 2019 01:52:07 GMT",
"version": "v1"
}
] |
2019-08-28
|
[
[
"Winata",
"Genta Indra",
""
],
[
"Madotto",
"Andrea",
""
],
[
"Shin",
"Jamin",
""
],
[
"Barezi",
"Elham J.",
""
],
[
"Fung",
"Pascale",
""
]
] |
Despite their ubiquity in NLP tasks, Long Short-Term Memory (LSTM) networks suffer from computational inefficiencies caused by inherent unparallelizable recurrences, which further aggravates as LSTMs require more parameters for larger memory capacity. In this paper, we propose to apply low-rank matrix factorization (MF) algorithms to different recurrences in LSTMs, and explore the effectiveness on different NLP tasks and model components. We discover that additive recurrence is more important than multiplicative recurrence, and explain this by identifying meaningful correlations between matrix norms and compression performance. We compare our approach across two settings: 1) compressing core LSTM recurrences in language models, 2) compressing biLSTM layers of ELMo evaluated in three downstream NLP tasks.
|
2305.01700
|
Lee Milburn
|
Lee Milburn, Juan Gamba, Miguel Fernandes, Claudio Semini
|
Computer-Vision Based Real Time Waypoint Generation for Autonomous
Vineyard Navigation with Quadruped Robots
|
Accepted to IEEE-ICARSC 2023 Conference. arXiv admin note: text
overlap with arXiv:2301.00887
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The VINUM project seeks to address the shortage of skilled labor in modern
vineyards by introducing a cutting-edge mobile robotic solution. Leveraging the
capabilities of the quadruped robot, HyQReal, this system, equipped with arm
and vision sensors, offers autonomous navigation and winter pruning of
grapevines reducing the need for human intervention. At the heart of this
approach lies an architecture that empowers the robot to easily navigate
vineyards, identify grapevines with unparalleled accuracy, and approach them
for pruning with precision. A state machine drives the process, deftly
switching between various stages to ensure seamless and efficient task
completion. The system's performance was assessed through experimentation,
focusing on waypoint precision and optimizing the robot's workspace for
single-plant operations. Results indicate that the architecture is highly
reliable, with a mean error of 21.5cm and a standard deviation of 17.6cm for
HyQReal. However, improvements in grapevine detection accuracy are necessary
for optimal performance. This work is based on a computer-vision-based
navigation method for quadruped robots in vineyards, opening up new
possibilities for selective task automation. The system's architecture works
well in ideal weather conditions, generating and arriving at precise waypoints
that maximize the attached robotic arm's workspace. This work is an extension
of our short paper presented at the Italian Conference on Robotics and
Intelligent Machines (I-RIM).
|
[
{
"created": "Tue, 2 May 2023 18:08:37 GMT",
"version": "v1"
}
] |
2023-05-04
|
[
[
"Milburn",
"Lee",
""
],
[
"Gamba",
"Juan",
""
],
[
"Fernandes",
"Miguel",
""
],
[
"Semini",
"Claudio",
""
]
] |
The VINUM project seeks to address the shortage of skilled labor in modern vineyards by introducing a cutting-edge mobile robotic solution. Leveraging the capabilities of the quadruped robot, HyQReal, this system, equipped with arm and vision sensors, offers autonomous navigation and winter pruning of grapevines reducing the need for human intervention. At the heart of this approach lies an architecture that empowers the robot to easily navigate vineyards, identify grapevines with unparalleled accuracy, and approach them for pruning with precision. A state machine drives the process, deftly switching between various stages to ensure seamless and efficient task completion. The system's performance was assessed through experimentation, focusing on waypoint precision and optimizing the robot's workspace for single-plant operations. Results indicate that the architecture is highly reliable, with a mean error of 21.5cm and a standard deviation of 17.6cm for HyQReal. However, improvements in grapevine detection accuracy are necessary for optimal performance. This work is based on a computer-vision-based navigation method for quadruped robots in vineyards, opening up new possibilities for selective task automation. The system's architecture works well in ideal weather conditions, generating and arriving at precise waypoints that maximize the attached robotic arm's workspace. This work is an extension of our short paper presented at the Italian Conference on Robotics and Intelligent Machines (I-RIM).
|
1210.8291
|
Huanhuan Chen
|
Huanhuan Chen, Peter Tino, Xin Yao, and Ali Rodan
|
Learning in the Model Space for Fault Diagnosis
| null | null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The emergence of large scaled sensor networks facilitates the collection of
large amounts of real-time data to monitor and control complex engineering
systems. However, in many cases the collected data may be incomplete or
inconsistent, while the underlying environment may be time-varying or
un-formulated. In this paper, we have developed an innovative cognitive fault
diagnosis framework that tackles the above challenges. This framework
investigates fault diagnosis in the model space instead of in the signal space.
Learning in the model space is implemented by fitting a series of models using
a series of signal segments selected with a rolling window. By investigating
the learning techniques in the fitted model space, faulty models can be
discriminated from healthy models using one-class learning algorithm. The
framework enables us to construct fault library when unknown faults occur,
which can be regarded as cognitive fault isolation. This paper also
theoretically investigates how to measure the pairwise distance between two
models in the model space and incorporates the model distance into the learning
algorithm in the model space. The results on three benchmark applications and
one simulated model for the Barcelona water distribution network have confirmed
the effectiveness of the proposed framework.
|
[
{
"created": "Wed, 31 Oct 2012 10:42:32 GMT",
"version": "v1"
}
] |
2012-11-01
|
[
[
"Chen",
"Huanhuan",
""
],
[
"Tino",
"Peter",
""
],
[
"Yao",
"Xin",
""
],
[
"Rodan",
"Ali",
""
]
] |
The emergence of large scaled sensor networks facilitates the collection of large amounts of real-time data to monitor and control complex engineering systems. However, in many cases the collected data may be incomplete or inconsistent, while the underlying environment may be time-varying or un-formulated. In this paper, we have developed an innovative cognitive fault diagnosis framework that tackles the above challenges. This framework investigates fault diagnosis in the model space instead of in the signal space. Learning in the model space is implemented by fitting a series of models using a series of signal segments selected with a rolling window. By investigating the learning techniques in the fitted model space, faulty models can be discriminated from healthy models using one-class learning algorithm. The framework enables us to construct fault library when unknown faults occur, which can be regarded as cognitive fault isolation. This paper also theoretically investigates how to measure the pairwise distance between two models in the model space and incorporates the model distance into the learning algorithm in the model space. The results on three benchmark applications and one simulated model for the Barcelona water distribution network have confirmed the effectiveness of the proposed framework.
|
1701.03247
|
Binnan Zhuang
|
Binnan Zhuang, Dongning Guo, Ermin Wei and Michael L. Honig
|
Scalable Spectrum Allocation and User Association in Networks with Many
Small Cells
|
Submiited to IEEE Transactions on Communications
| null | null | null |
cs.IT cs.NI math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A scalable framework is developed to allocate radio resources across a large
number of densely deployed small cells with given traffic statistics on a slow
timescale. Joint user association and spectrum allocation is first formulated
as a convex optimization problem by dividing the spectrum among all possible
transmission patterns of active access points (APs). To improve scalability
with the number of APs, the problem is reformulated using local patterns of
interfering APs. To maintain global consistency among local patterns,
inter-cluster interaction is characterized as hyper-edges in a hyper-graph with
nodes corresponding to neighborhoods of APs. A scalable solution is obtained by
iteratively solving a convex optimization problem for bandwidth allocation with
reduced complexity and constructing a global spectrum allocation using
hyper-graph coloring. Numerical results demonstrate the proposed solution for a
network with 100 APs and several hundred user equipments. For a given quality
of service (QoS), the proposed scheme can increase the network capacity several
fold compared to assigning each user to the strongest AP with full-spectrum
reuse.
|
[
{
"created": "Thu, 12 Jan 2017 06:11:54 GMT",
"version": "v1"
}
] |
2017-01-13
|
[
[
"Zhuang",
"Binnan",
""
],
[
"Guo",
"Dongning",
""
],
[
"Wei",
"Ermin",
""
],
[
"Honig",
"Michael L.",
""
]
] |
A scalable framework is developed to allocate radio resources across a large number of densely deployed small cells with given traffic statistics on a slow timescale. Joint user association and spectrum allocation is first formulated as a convex optimization problem by dividing the spectrum among all possible transmission patterns of active access points (APs). To improve scalability with the number of APs, the problem is reformulated using local patterns of interfering APs. To maintain global consistency among local patterns, inter-cluster interaction is characterized as hyper-edges in a hyper-graph with nodes corresponding to neighborhoods of APs. A scalable solution is obtained by iteratively solving a convex optimization problem for bandwidth allocation with reduced complexity and constructing a global spectrum allocation using hyper-graph coloring. Numerical results demonstrate the proposed solution for a network with 100 APs and several hundred user equipments. For a given quality of service (QoS), the proposed scheme can increase the network capacity several fold compared to assigning each user to the strongest AP with full-spectrum reuse.
|
1203.6722
|
Vinay Bettadapura
|
Vinay Bettadapura
|
Face Expression Recognition and Analysis: The State of the Art
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The automatic recognition of facial expressions has been an active research
topic since the early nineties. There have been several advances in the past
few years in terms of face detection and tracking, feature extraction
mechanisms and the techniques used for expression classification. This paper
surveys some of the published work since 2001 till date. The paper presents a
time-line view of the advances made in this field, the applications of
automatic face expression recognizers, the characteristics of an ideal system,
the databases that have been used and the advances made in terms of their
standardization and a detailed summary of the state of the art. The paper also
discusses facial parameterization using FACS Action Units (AUs) and MPEG-4
Facial Animation Parameters (FAPs) and the recent advances in face detection,
tracking and feature extraction methods. Notes have also been presented on
emotions, expressions and facial features, discussion on the six prototypic
expressions and the recent studies on expression classifiers. The paper ends
with a note on the challenges and the future work. This paper has been written
in a tutorial style with the intention of helping students and researchers who
are new to this field.
|
[
{
"created": "Fri, 30 Mar 2012 05:47:59 GMT",
"version": "v1"
}
] |
2012-04-02
|
[
[
"Bettadapura",
"Vinay",
""
]
] |
The automatic recognition of facial expressions has been an active research topic since the early nineties. There have been several advances in the past few years in terms of face detection and tracking, feature extraction mechanisms and the techniques used for expression classification. This paper surveys some of the published work since 2001 till date. The paper presents a time-line view of the advances made in this field, the applications of automatic face expression recognizers, the characteristics of an ideal system, the databases that have been used and the advances made in terms of their standardization and a detailed summary of the state of the art. The paper also discusses facial parameterization using FACS Action Units (AUs) and MPEG-4 Facial Animation Parameters (FAPs) and the recent advances in face detection, tracking and feature extraction methods. Notes have also been presented on emotions, expressions and facial features, discussion on the six prototypic expressions and the recent studies on expression classifiers. The paper ends with a note on the challenges and the future work. This paper has been written in a tutorial style with the intention of helping students and researchers who are new to this field.
|
2103.01783
|
Mauricio Aniche
|
Maur\'icio Aniche and Christoph Treude and Andy Zaidman
|
How Developers Engineer Test Cases: An Observational Study
| null | null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
One of the main challenges that developers face when testing their systems
lies in engineering test cases that are good enough to reveal bugs. And while
our body of knowledge on software testing and automated test case generation is
already quite significant, in practice, developers are still the ones
responsible for engineering test cases manually. Therefore, understanding the
developers' thought- and decision-making processes while engineering test cases
is a fundamental step in making developers better at testing software. In this
paper, we observe 13 developers thinking-aloud while testing different
real-world open-source methods, and use these observations to explain how
developers engineer test cases. We then challenge and augment our main findings
by surveying 72 software developers on their testing practices. We discuss our
results from three different angles. First, we propose a general framework that
explains how developers reason about testing. Second, we propose and describe
in detail the three different overarching strategies that developers apply when
testing. Third, we compare and relate our observations with the existing body
of knowledge and propose future studies that would advance our knowledge on the
topic.
|
[
{
"created": "Mon, 1 Mar 2021 13:20:18 GMT",
"version": "v1"
},
{
"created": "Tue, 20 Apr 2021 10:05:34 GMT",
"version": "v2"
},
{
"created": "Mon, 9 Aug 2021 18:51:11 GMT",
"version": "v3"
},
{
"created": "Sat, 6 Nov 2021 08:25:49 GMT",
"version": "v4"
}
] |
2021-11-09
|
[
[
"Aniche",
"Maurício",
""
],
[
"Treude",
"Christoph",
""
],
[
"Zaidman",
"Andy",
""
]
] |
One of the main challenges that developers face when testing their systems lies in engineering test cases that are good enough to reveal bugs. And while our body of knowledge on software testing and automated test case generation is already quite significant, in practice, developers are still the ones responsible for engineering test cases manually. Therefore, understanding the developers' thought- and decision-making processes while engineering test cases is a fundamental step in making developers better at testing software. In this paper, we observe 13 developers thinking-aloud while testing different real-world open-source methods, and use these observations to explain how developers engineer test cases. We then challenge and augment our main findings by surveying 72 software developers on their testing practices. We discuss our results from three different angles. First, we propose a general framework that explains how developers reason about testing. Second, we propose and describe in detail the three different overarching strategies that developers apply when testing. Third, we compare and relate our observations with the existing body of knowledge and propose future studies that would advance our knowledge on the topic.
|
1308.0502
|
James Cheney
|
James Cheney
|
Static Enforceability of XPath-Based Access Control Policies
|
Proceedings of the 14th International Symposium on Database
Programming Languages (DBPL 2013), August 30, 2013, Riva del Garda, Trento,
Italy
| null | null | null |
cs.DB cs.CR cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider the problem of extending XML databases with fine-grained,
high-level access control policies specified using XPath expressions. Most
prior work checks individual updates dynamically, which is expensive (requiring
worst-case execution time proportional to the size of the database). On the
other hand, static enforcement can be performed without accessing the database
but may be incomplete, in the sense that it may forbid accesses that dynamic
enforcement would allow. We introduce topological characterizations of XPath
fragments in order to study the problem of determining when an access control
policy can be enforced statically without loss of precision. We introduce the
notion of fair policies that are statically enforceable, and study the
complexity of determining fairness and of static enforcement itself.
|
[
{
"created": "Fri, 2 Aug 2013 13:52:36 GMT",
"version": "v1"
}
] |
2013-08-05
|
[
[
"Cheney",
"James",
""
]
] |
We consider the problem of extending XML databases with fine-grained, high-level access control policies specified using XPath expressions. Most prior work checks individual updates dynamically, which is expensive (requiring worst-case execution time proportional to the size of the database). On the other hand, static enforcement can be performed without accessing the database but may be incomplete, in the sense that it may forbid accesses that dynamic enforcement would allow. We introduce topological characterizations of XPath fragments in order to study the problem of determining when an access control policy can be enforced statically without loss of precision. We introduce the notion of fair policies that are statically enforceable, and study the complexity of determining fairness and of static enforcement itself.
|
2103.12419
|
Artur Sokolovsky
|
Artur Sokolovsky, Luca Arnaboldi, Jaume Bacardit, Thomas Gross
|
Volume-Centred Range Bars: Novel Interpretable Representation of
Financial Markets Designed for Machine Learning Applications
|
The reproducibility package available at:
https://doi.org/10.5281/zenodo.4629567
| null | null | null |
cs.LG cs.AI cs.SY eess.SY q-fin.TR
|
http://creativecommons.org/licenses/by/4.0/
|
Financial markets are a source of non-stationary multidimensional time series
which has been drawing attention for decades. Each financial instrument has its
specific changing-over-time properties, making its analysis a complex task.
Hence, improvement of understanding and development of more informative,
generalisable market representations are essential for the successful operation
in financial markets, including risk assessment, diversification, trading, and
order execution. In this study, we propose a volume-price-based market
representation for making financial time series more suitable for machine
learning pipelines. We use a statistical approach for evaluating the
representation. Through the research questions, we investigate, i) whether the
proposed representation allows the more efficient design of machine learning
models; ii) whether the proposed representation leads to increased performance
over the price levels market pattern; iii) whether the proposed representation
performs better on the liquid markets, and iv) whether SHAP feature
interactions are reliable to be used in the considered setting. Our analysis
shows that the proposed volume-based method allows successful classification of
the financial time series patterns, and also leads to better classification
performance than the price levels-based method, excelling specifically on more
liquid financial instruments. Finally, we propose an approach for obtaining
feature interactions directly from tree-based models and compare the outcomes
to those of the SHAP method. This results in the significant similarity between
the two methods, hence we claim that SHAP feature interactions are reliable to
be used in the setting of financial markets.
|
[
{
"created": "Tue, 23 Mar 2021 09:55:46 GMT",
"version": "v1"
},
{
"created": "Tue, 13 Apr 2021 14:36:44 GMT",
"version": "v2"
},
{
"created": "Thu, 13 May 2021 14:35:25 GMT",
"version": "v3"
},
{
"created": "Sun, 8 May 2022 18:23:07 GMT",
"version": "v4"
}
] |
2022-05-10
|
[
[
"Sokolovsky",
"Artur",
""
],
[
"Arnaboldi",
"Luca",
""
],
[
"Bacardit",
"Jaume",
""
],
[
"Gross",
"Thomas",
""
]
] |
Financial markets are a source of non-stationary multidimensional time series which has been drawing attention for decades. Each financial instrument has its specific changing-over-time properties, making its analysis a complex task. Hence, improvement of understanding and development of more informative, generalisable market representations are essential for the successful operation in financial markets, including risk assessment, diversification, trading, and order execution. In this study, we propose a volume-price-based market representation for making financial time series more suitable for machine learning pipelines. We use a statistical approach for evaluating the representation. Through the research questions, we investigate, i) whether the proposed representation allows the more efficient design of machine learning models; ii) whether the proposed representation leads to increased performance over the price levels market pattern; iii) whether the proposed representation performs better on the liquid markets, and iv) whether SHAP feature interactions are reliable to be used in the considered setting. Our analysis shows that the proposed volume-based method allows successful classification of the financial time series patterns, and also leads to better classification performance than the price levels-based method, excelling specifically on more liquid financial instruments. Finally, we propose an approach for obtaining feature interactions directly from tree-based models and compare the outcomes to those of the SHAP method. This results in the significant similarity between the two methods, hence we claim that SHAP feature interactions are reliable to be used in the setting of financial markets.
|
2304.08458
|
Tianji Shen
|
Tianji Shen, Vamoua Yachongka, Yuto Hama, Hideki Ochiai
|
Secrecy Design of Indoor Visible Light Communication Network under
Downlink NOMA Transmission
|
30 pages, 13 figures. This work has been submitted to the IEEE for
possible publication. Copyright may be transferred without notice, after
which this version may no longer be accessible
| null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we investigate the transmission sum rate as well as the secrecy
sum rate of indoor visible light communication (VLC) networks for mobile
devices with the power domain non-orthogonal multiple access (NOMA)
transmission, where multiple legitimate users are equipped with photodiodes
(PDs). We introduce a body blockage model of the legitimate users as well as
the eavesdropper to focus on the case where the communications from
transmitting light-emitting diodes (LEDs) to receiving devices are blocked by
the bodies of receiving users. Furthermore, in order to improve the secrecy
without any knowledge of the channel state information (CSI) of the
eavesdropper, a novel LED arrangement is introduced to reduce the overlapping
area covered by LED units supporting different users. We also propose two LED
operation strategies, called simple and smart LED linking, and evaluate their
performance against the conventional broadcasting in terms of transmission sum
rate and secrecy sum rate. Through computer simulations, the superiority of our
proposed strategies is demonstrated.
|
[
{
"created": "Mon, 17 Apr 2023 17:33:39 GMT",
"version": "v1"
},
{
"created": "Tue, 18 Apr 2023 15:07:42 GMT",
"version": "v2"
}
] |
2023-04-19
|
[
[
"Shen",
"Tianji",
""
],
[
"Yachongka",
"Vamoua",
""
],
[
"Hama",
"Yuto",
""
],
[
"Ochiai",
"Hideki",
""
]
] |
In this work, we investigate the transmission sum rate as well as the secrecy sum rate of indoor visible light communication (VLC) networks for mobile devices with the power domain non-orthogonal multiple access (NOMA) transmission, where multiple legitimate users are equipped with photodiodes (PDs). We introduce a body blockage model of the legitimate users as well as the eavesdropper to focus on the case where the communications from transmitting light-emitting diodes (LEDs) to receiving devices are blocked by the bodies of receiving users. Furthermore, in order to improve the secrecy without any knowledge of the channel state information (CSI) of the eavesdropper, a novel LED arrangement is introduced to reduce the overlapping area covered by LED units supporting different users. We also propose two LED operation strategies, called simple and smart LED linking, and evaluate their performance against the conventional broadcasting in terms of transmission sum rate and secrecy sum rate. Through computer simulations, the superiority of our proposed strategies is demonstrated.
|
1906.03897
|
Leshem Choshen
|
Yoav Kantor and Yoav Katz and Leshem Choshen and Edo Cohen-Karlik and
Naftali Liberman and Assaf Toledo and Amir Menczel and Noam Slonim
|
Learning to combine Grammatical Error Corrections
|
BEA 2019
| null | null | null |
cs.CL cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The field of Grammatical Error Correction (GEC) has produced various systems
to deal with focused phenomena or general text editing. We propose an automatic
way to combine black-box systems. Our method automatically detects the strength
of a system or the combination of several systems per error type, improving
precision and recall while optimizing $F$ score directly. We show consistent
improvement over the best standalone system in all the configurations tested.
This approach also outperforms average ensembling of different RNN models with
random initializations.
In addition, we analyze the use of BERT for GEC - reporting promising results
on this end. We also present a spellchecker created for this task which
outperforms standard spellcheckers tested on the task of spellchecking.
This paper describes a system submission to Building Educational Applications
2019 Shared Task: Grammatical Error Correction.
Combining the output of top BEA 2019 shared task systems using our approach,
currently holds the highest reported score in the open phase of the BEA 2019
shared task, improving F0.5 by 3.7 points over the best result reported.
|
[
{
"created": "Mon, 10 Jun 2019 10:57:47 GMT",
"version": "v1"
}
] |
2019-06-11
|
[
[
"Kantor",
"Yoav",
""
],
[
"Katz",
"Yoav",
""
],
[
"Choshen",
"Leshem",
""
],
[
"Cohen-Karlik",
"Edo",
""
],
[
"Liberman",
"Naftali",
""
],
[
"Toledo",
"Assaf",
""
],
[
"Menczel",
"Amir",
""
],
[
"Slonim",
"Noam",
""
]
] |
The field of Grammatical Error Correction (GEC) has produced various systems to deal with focused phenomena or general text editing. We propose an automatic way to combine black-box systems. Our method automatically detects the strength of a system or the combination of several systems per error type, improving precision and recall while optimizing $F$ score directly. We show consistent improvement over the best standalone system in all the configurations tested. This approach also outperforms average ensembling of different RNN models with random initializations. In addition, we analyze the use of BERT for GEC - reporting promising results on this end. We also present a spellchecker created for this task which outperforms standard spellcheckers tested on the task of spellchecking. This paper describes a system submission to Building Educational Applications 2019 Shared Task: Grammatical Error Correction. Combining the output of top BEA 2019 shared task systems using our approach, currently holds the highest reported score in the open phase of the BEA 2019 shared task, improving F0.5 by 3.7 points over the best result reported.
|
1007.3229
|
Pradeepa BK
|
Pradeepa BK and Joy Kuri
|
Aggregate Download Throughput for TCP-controlled long file transfers in
a WLAN with multiple STA-AP association rates
|
Double columns, 3 pages, 3 figures, typos updated
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider several WLAN stations associated at rates r1, r2, ..., rk with an
Access Point. Each station is downloading a long file from a local server,
located on the LAN to which the AP is attached. We model these simultaneous
TCP-controlled transfers using a Markov Chain. Our analytical approach leads to
a procedure to compute aggregate download throughput numerically, and the
results match simulations very well.
|
[
{
"created": "Mon, 19 Jul 2010 18:13:56 GMT",
"version": "v1"
},
{
"created": "Mon, 21 Feb 2011 03:32:08 GMT",
"version": "v2"
}
] |
2011-02-22
|
[
[
"BK",
"Pradeepa",
""
],
[
"Kuri",
"Joy",
""
]
] |
We consider several WLAN stations associated at rates r1, r2, ..., rk with an Access Point. Each station is downloading a long file from a local server, located on the LAN to which the AP is attached. We model these simultaneous TCP-controlled transfers using a Markov Chain. Our analytical approach leads to a procedure to compute aggregate download throughput numerically, and the results match simulations very well.
|
2404.09454
|
Sepehr Dehdashtian
|
Sepehr Dehdashtian, Bashir Sadeghi, Vishnu Naresh Boddeti
|
Utility-Fairness Trade-Offs and How to Find Them
|
IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024
| null | null | null |
cs.CV cs.CY cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
When building classification systems with demographic fairness
considerations, there are two objectives to satisfy: 1) maximizing utility for
the specific task and 2) ensuring fairness w.r.t. a known demographic
attribute. These objectives often compete, so optimizing both can lead to a
trade-off between utility and fairness. While existing works acknowledge the
trade-offs and study their limits, two questions remain unanswered: 1) What are
the optimal trade-offs between utility and fairness? and 2) How can we
numerically quantify these trade-offs from data for a desired prediction task
and demographic attribute of interest? This paper addresses these questions. We
introduce two utility-fairness trade-offs: the Data-Space and Label-Space
Trade-off. The trade-offs reveal three regions within the utility-fairness
plane, delineating what is fully and partially possible and impossible. We
propose U-FaTE, a method to numerically quantify the trade-offs for a given
prediction task and group fairness definition from data samples. Based on the
trade-offs, we introduce a new scheme for evaluating representations. An
extensive evaluation of fair representation learning methods and
representations from over 1000 pre-trained models revealed that most current
approaches are far from the estimated and achievable fairness-utility
trade-offs across multiple datasets and prediction tasks.
|
[
{
"created": "Mon, 15 Apr 2024 04:43:53 GMT",
"version": "v1"
},
{
"created": "Wed, 24 Apr 2024 00:08:42 GMT",
"version": "v2"
}
] |
2024-04-25
|
[
[
"Dehdashtian",
"Sepehr",
""
],
[
"Sadeghi",
"Bashir",
""
],
[
"Boddeti",
"Vishnu Naresh",
""
]
] |
When building classification systems with demographic fairness considerations, there are two objectives to satisfy: 1) maximizing utility for the specific task and 2) ensuring fairness w.r.t. a known demographic attribute. These objectives often compete, so optimizing both can lead to a trade-off between utility and fairness. While existing works acknowledge the trade-offs and study their limits, two questions remain unanswered: 1) What are the optimal trade-offs between utility and fairness? and 2) How can we numerically quantify these trade-offs from data for a desired prediction task and demographic attribute of interest? This paper addresses these questions. We introduce two utility-fairness trade-offs: the Data-Space and Label-Space Trade-off. The trade-offs reveal three regions within the utility-fairness plane, delineating what is fully and partially possible and impossible. We propose U-FaTE, a method to numerically quantify the trade-offs for a given prediction task and group fairness definition from data samples. Based on the trade-offs, we introduce a new scheme for evaluating representations. An extensive evaluation of fair representation learning methods and representations from over 1000 pre-trained models revealed that most current approaches are far from the estimated and achievable fairness-utility trade-offs across multiple datasets and prediction tasks.
|
1809.04737
|
Xintao Wu
|
Yongkai Wu and Lu Zhang and Xintao Wu
|
Fairness-aware Classification: Criterion, Convexity, and Bounds
| null | null | null | null |
cs.LG cs.AI cs.CY stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Fairness-aware classification is receiving increasing attention in the
machine learning fields. Recently research proposes to formulate the
fairness-aware classification as constrained optimization problems. However,
several limitations exist in previous works due to the lack of a theoretical
framework for guiding the formulation. In this paper, we propose a general
framework for learning fair classifiers which addresses previous limitations.
The framework formulates various commonly-used fairness metrics as convex
constraints that can be directly incorporated into classic classification
models. Within the framework, we propose a constraint-free criterion on the
training data which ensures that any classifier learned from the data is fair.
We also derive the constraints which ensure that the real fairness metric is
satisfied when surrogate functions are used to achieve convexity. Our framework
can be used to for formulating fairness-aware classification with fairness
guarantee and computational efficiency. The experiments using real-world
datasets demonstrate our theoretical results and show the effectiveness of
proposed framework and methods.
|
[
{
"created": "Thu, 13 Sep 2018 01:56:57 GMT",
"version": "v1"
}
] |
2018-09-14
|
[
[
"Wu",
"Yongkai",
""
],
[
"Zhang",
"Lu",
""
],
[
"Wu",
"Xintao",
""
]
] |
Fairness-aware classification is receiving increasing attention in the machine learning fields. Recently research proposes to formulate the fairness-aware classification as constrained optimization problems. However, several limitations exist in previous works due to the lack of a theoretical framework for guiding the formulation. In this paper, we propose a general framework for learning fair classifiers which addresses previous limitations. The framework formulates various commonly-used fairness metrics as convex constraints that can be directly incorporated into classic classification models. Within the framework, we propose a constraint-free criterion on the training data which ensures that any classifier learned from the data is fair. We also derive the constraints which ensure that the real fairness metric is satisfied when surrogate functions are used to achieve convexity. Our framework can be used to for formulating fairness-aware classification with fairness guarantee and computational efficiency. The experiments using real-world datasets demonstrate our theoretical results and show the effectiveness of proposed framework and methods.
|
2008.01960
|
Kai Sun
|
Kai Sun
|
A Novel Approach for the Process Planning and Scheduling Problem Using
the Concept of Maximum Weighted Independent Set
| null | null | null | null |
cs.DC math.CO math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Process Planning and Scheduling (PPS) is an essential and practical topic but
a very intractable problem in manufacturing systems. Many research use
iterative methods to solve such problems; however, they cannot achieve
satisfactory results in both quality and computational speed. Other studies
formulate scheduling problems as a graph coloring problem (GCP) or its
extensions, but these formulations are limited to certain types of scheduling
problems. In this paper, we propose a novel approach to formulate a general
type of the PPS problem with resource allocation and process planning
integrated towards a typical objective, minimizing the makespan. The PPS
problem is formulated into an undirected weighted conflicting graph, where
nodes represent operations and their resources; edges represent constraints,
and weight factors are guidelines for the node selection at each time slot.
Then, the Maximum Weighted Independent Set (MWIS) problem can be solved to find
the best set of operations with their desired resources for each discrete time
slot. This proposed approach solves the PPS problem directly with minimum
iterations. We establish that the proposed approach always returns a feasible
optimum or near-optimum solution to the PPS problem. The different weight
configurations of the proposed approach for solving the PPS problem are tested
on a real-world PPS example and further designated test instances to evaluate
the scalability, accuracy, and robustness.
|
[
{
"created": "Wed, 5 Aug 2020 07:02:27 GMT",
"version": "v1"
},
{
"created": "Thu, 13 Aug 2020 03:07:13 GMT",
"version": "v2"
},
{
"created": "Sun, 16 Aug 2020 06:33:33 GMT",
"version": "v3"
}
] |
2020-08-18
|
[
[
"Sun",
"Kai",
""
]
] |
Process Planning and Scheduling (PPS) is an essential and practical topic but a very intractable problem in manufacturing systems. Many research use iterative methods to solve such problems; however, they cannot achieve satisfactory results in both quality and computational speed. Other studies formulate scheduling problems as a graph coloring problem (GCP) or its extensions, but these formulations are limited to certain types of scheduling problems. In this paper, we propose a novel approach to formulate a general type of the PPS problem with resource allocation and process planning integrated towards a typical objective, minimizing the makespan. The PPS problem is formulated into an undirected weighted conflicting graph, where nodes represent operations and their resources; edges represent constraints, and weight factors are guidelines for the node selection at each time slot. Then, the Maximum Weighted Independent Set (MWIS) problem can be solved to find the best set of operations with their desired resources for each discrete time slot. This proposed approach solves the PPS problem directly with minimum iterations. We establish that the proposed approach always returns a feasible optimum or near-optimum solution to the PPS problem. The different weight configurations of the proposed approach for solving the PPS problem are tested on a real-world PPS example and further designated test instances to evaluate the scalability, accuracy, and robustness.
|
1304.6763
|
Joakim And\'en
|
Joakim And\'en, St\'ephane Mallat
|
Deep Scattering Spectrum
| null | null |
10.1109/TSP.2014.2326991
| null |
cs.SD cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A scattering transform defines a locally translation invariant representation
which is stable to time-warping deformations. It extends MFCC representations
by computing modulation spectrum coefficients of multiple orders, through
cascades of wavelet convolutions and modulus operators. Second-order scattering
coefficients characterize transient phenomena such as attacks and amplitude
modulation. A frequency transposition invariant representation is obtained by
applying a scattering transform along log-frequency. State-the-of-art
classification results are obtained for musical genre and phone classification
on GTZAN and TIMIT databases, respectively.
|
[
{
"created": "Wed, 24 Apr 2013 21:50:03 GMT",
"version": "v1"
},
{
"created": "Fri, 10 Jan 2014 20:35:56 GMT",
"version": "v2"
}
] |
2015-06-15
|
[
[
"Andén",
"Joakim",
""
],
[
"Mallat",
"Stéphane",
""
]
] |
A scattering transform defines a locally translation invariant representation which is stable to time-warping deformations. It extends MFCC representations by computing modulation spectrum coefficients of multiple orders, through cascades of wavelet convolutions and modulus operators. Second-order scattering coefficients characterize transient phenomena such as attacks and amplitude modulation. A frequency transposition invariant representation is obtained by applying a scattering transform along log-frequency. State-the-of-art classification results are obtained for musical genre and phone classification on GTZAN and TIMIT databases, respectively.
|
2110.14859
|
Nate Veldt
|
Nate Veldt, Austin R. Benson, Jon Kleinberg
|
Approximate Decomposable Submodular Function Minimization for
Cardinality-Based Components
| null | null | null | null |
cs.LG cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Minimizing a sum of simple submodular functions of limited support is a
special case of general submodular function minimization that has seen numerous
applications in machine learning. We develop fast techniques for instances
where components in the sum are cardinality-based, meaning they depend only on
the size of the input set. This variant is one of the most widely applied in
practice, encompassing, e.g., common energy functions arising in image
segmentation and recent generalized hypergraph cut functions. We develop the
first approximation algorithms for this problem, where the approximations can
be quickly computed via reduction to a sparse graph cut problem, with graph
sparsity controlled by the desired approximation factor. Our method relies on a
new connection between sparse graph reduction techniques and piecewise linear
approximations to concave functions. Our sparse reduction technique leads to
significant improvements in theoretical runtimes, as well as substantial
practical gains in problems ranging from benchmark image segmentation tasks to
hypergraph clustering problems.
|
[
{
"created": "Thu, 28 Oct 2021 02:36:55 GMT",
"version": "v1"
}
] |
2021-10-29
|
[
[
"Veldt",
"Nate",
""
],
[
"Benson",
"Austin R.",
""
],
[
"Kleinberg",
"Jon",
""
]
] |
Minimizing a sum of simple submodular functions of limited support is a special case of general submodular function minimization that has seen numerous applications in machine learning. We develop fast techniques for instances where components in the sum are cardinality-based, meaning they depend only on the size of the input set. This variant is one of the most widely applied in practice, encompassing, e.g., common energy functions arising in image segmentation and recent generalized hypergraph cut functions. We develop the first approximation algorithms for this problem, where the approximations can be quickly computed via reduction to a sparse graph cut problem, with graph sparsity controlled by the desired approximation factor. Our method relies on a new connection between sparse graph reduction techniques and piecewise linear approximations to concave functions. Our sparse reduction technique leads to significant improvements in theoretical runtimes, as well as substantial practical gains in problems ranging from benchmark image segmentation tasks to hypergraph clustering problems.
|
2306.04792
|
Nimrod Megiddo
|
Nimrod Megiddo
|
On the Use of Generative Models in Observational Causal Analysis
| null | null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The use of a hypothetical generative model was been suggested for causal
analysis of observational data. The very assumption of a particular model is a
commitment to a certain set of variables and therefore to a certain set of
possible causes. Estimating the joint probability distribution of can be useful
for predicting values of variables in view of the observed values of others,
but it is not sufficient for inferring causal relationships. The model
describes a single observable distribution and cannot a chain of effects of
intervention that deviate from the observed distribution.
|
[
{
"created": "Wed, 7 Jun 2023 21:29:49 GMT",
"version": "v1"
}
] |
2023-06-09
|
[
[
"Megiddo",
"Nimrod",
""
]
] |
The use of a hypothetical generative model was been suggested for causal analysis of observational data. The very assumption of a particular model is a commitment to a certain set of variables and therefore to a certain set of possible causes. Estimating the joint probability distribution of can be useful for predicting values of variables in view of the observed values of others, but it is not sufficient for inferring causal relationships. The model describes a single observable distribution and cannot a chain of effects of intervention that deviate from the observed distribution.
|
2102.00769
|
Yukai Shi
|
Yukai Shi, Sen Zhang, Chenxing Zhou, Xiaodan Liang, Xiaojun Yang,
Liang Lin
|
GTAE: Graph-Transformer based Auto-Encoders for Linguistic-Constrained
Text Style Transfer
|
The first two authors share equal-authorship;
Code:https://github.com/SenZHANG-GitHub/graph-text-style-transfer ;
benchmark: https://github.com/ykshi/text-style-transfer-benchmark
| null | null | null |
cs.CL cs.AI cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Non-parallel text style transfer has attracted increasing research interests
in recent years. Despite successes in transferring the style based on the
encoder-decoder framework, current approaches still lack the ability to
preserve the content and even logic of original sentences, mainly due to the
large unconstrained model space or too simplified assumptions on latent
embedding space. Since language itself is an intelligent product of humans with
certain grammars and has a limited rule-based model space by its nature,
relieving this problem requires reconciling the model capacity of deep neural
networks with the intrinsic model constraints from human linguistic rules. To
this end, we propose a method called Graph Transformer based Auto Encoder
(GTAE), which models a sentence as a linguistic graph and performs feature
extraction and style transfer at the graph level, to maximally retain the
content and the linguistic structure of original sentences. Quantitative
experiment results on three non-parallel text style transfer tasks show that
our model outperforms state-of-the-art methods in content preservation, while
achieving comparable performance on transfer accuracy and sentence naturalness.
|
[
{
"created": "Mon, 1 Feb 2021 11:08:45 GMT",
"version": "v1"
}
] |
2021-02-02
|
[
[
"Shi",
"Yukai",
""
],
[
"Zhang",
"Sen",
""
],
[
"Zhou",
"Chenxing",
""
],
[
"Liang",
"Xiaodan",
""
],
[
"Yang",
"Xiaojun",
""
],
[
"Lin",
"Liang",
""
]
] |
Non-parallel text style transfer has attracted increasing research interests in recent years. Despite successes in transferring the style based on the encoder-decoder framework, current approaches still lack the ability to preserve the content and even logic of original sentences, mainly due to the large unconstrained model space or too simplified assumptions on latent embedding space. Since language itself is an intelligent product of humans with certain grammars and has a limited rule-based model space by its nature, relieving this problem requires reconciling the model capacity of deep neural networks with the intrinsic model constraints from human linguistic rules. To this end, we propose a method called Graph Transformer based Auto Encoder (GTAE), which models a sentence as a linguistic graph and performs feature extraction and style transfer at the graph level, to maximally retain the content and the linguistic structure of original sentences. Quantitative experiment results on three non-parallel text style transfer tasks show that our model outperforms state-of-the-art methods in content preservation, while achieving comparable performance on transfer accuracy and sentence naturalness.
|
2009.09354
|
Ruturaj Raval
|
Ruturaj Raval
|
An Improved Approach of Intention Discovery with Machine Learning for
POMDP-based Dialogue Management
|
In addition to my thesis: https://scholar.uwindsor.ca/etd/7731/
| null | null | null |
cs.AI cs.HC
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
An Embodied Conversational Agent (ECA) is an intelligent agent that works as
the front end of software applications to interact with users through
verbal/nonverbal expressions and to provide online assistance without the
limits of time, location, and language. To help to improve the experience of
human-computer interaction, there is an increasing need to empower ECA with not
only the realistic look of its human counterparts but also a higher level of
intelligence. This thesis first highlights the main topics related to the
construction of ECA, including different approaches of dialogue management, and
then discusses existing techniques of trend analysis for its application in
user classification. As a further refinement and enhancement to prior work on
ECA, this thesis research proposes a cohesive framework to integrate
emotion-based facial animation with improved intention discovery. In addition,
a machine learning technique is introduced to support sentiment analysis for
the adjustment of policy design in POMDP-based dialogue management. The
proposed research work is going to improve the accuracy of intention discovery
while reducing the length of dialogues.
|
[
{
"created": "Sun, 20 Sep 2020 05:28:36 GMT",
"version": "v1"
}
] |
2020-09-22
|
[
[
"Raval",
"Ruturaj",
""
]
] |
An Embodied Conversational Agent (ECA) is an intelligent agent that works as the front end of software applications to interact with users through verbal/nonverbal expressions and to provide online assistance without the limits of time, location, and language. To help to improve the experience of human-computer interaction, there is an increasing need to empower ECA with not only the realistic look of its human counterparts but also a higher level of intelligence. This thesis first highlights the main topics related to the construction of ECA, including different approaches of dialogue management, and then discusses existing techniques of trend analysis for its application in user classification. As a further refinement and enhancement to prior work on ECA, this thesis research proposes a cohesive framework to integrate emotion-based facial animation with improved intention discovery. In addition, a machine learning technique is introduced to support sentiment analysis for the adjustment of policy design in POMDP-based dialogue management. The proposed research work is going to improve the accuracy of intention discovery while reducing the length of dialogues.
|
2406.08106
|
Juliane Weilbach
|
Juliane Weilbach, Sebastian Gerwinn, Karim Barsim, Martin Fr\"anzle
|
Counterfactual-based Root Cause Analysis for Dynamical Systems
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Identifying the underlying reason for a failing dynamic process or otherwise
anomalous observation is a fundamental challenge, yet has numerous industrial
applications. Identifying the failure-causing sub-system using causal
inference, one can ask the question: "Would the observed failure also occur, if
we had replaced the behaviour of a sub-system at a certain point in time with
its normal behaviour?" To this end, a formal description of behaviour of the
full system is needed in which such counterfactual questions can be answered.
However, existing causal methods for root cause identification are typically
limited to static settings and focusing on additive external influences causing
failures rather than structural influences. In this paper, we address these
problems by modelling the dynamic causal system using a Residual Neural Network
and deriving corresponding counterfactual distributions over trajectories. We
show quantitatively that more root causes are identified when an intervention
is performed on the structural equation and the external influence, compared to
an intervention on the external influence only. By employing an efficient
approximation to a corresponding Shapley value, we also obtain a ranking
between the different subsystems at different points in time being responsible
for an observed failure, which is applicable in settings with large number of
variables. We illustrate the effectiveness of the proposed method on a
benchmark dynamic system as well as on a real world river dataset.
|
[
{
"created": "Wed, 12 Jun 2024 11:38:13 GMT",
"version": "v1"
}
] |
2024-06-13
|
[
[
"Weilbach",
"Juliane",
""
],
[
"Gerwinn",
"Sebastian",
""
],
[
"Barsim",
"Karim",
""
],
[
"Fränzle",
"Martin",
""
]
] |
Identifying the underlying reason for a failing dynamic process or otherwise anomalous observation is a fundamental challenge, yet has numerous industrial applications. Identifying the failure-causing sub-system using causal inference, one can ask the question: "Would the observed failure also occur, if we had replaced the behaviour of a sub-system at a certain point in time with its normal behaviour?" To this end, a formal description of behaviour of the full system is needed in which such counterfactual questions can be answered. However, existing causal methods for root cause identification are typically limited to static settings and focusing on additive external influences causing failures rather than structural influences. In this paper, we address these problems by modelling the dynamic causal system using a Residual Neural Network and deriving corresponding counterfactual distributions over trajectories. We show quantitatively that more root causes are identified when an intervention is performed on the structural equation and the external influence, compared to an intervention on the external influence only. By employing an efficient approximation to a corresponding Shapley value, we also obtain a ranking between the different subsystems at different points in time being responsible for an observed failure, which is applicable in settings with large number of variables. We illustrate the effectiveness of the proposed method on a benchmark dynamic system as well as on a real world river dataset.
|
2101.08095
|
Jesse Sigal
|
Jesse Sigal
|
Automatic Differentiation via Effects and Handlers: An Implementation in
Frank
|
Appeared as short paper in PEPM'21, see
https://www.youtube.com/watch?v=BmBSJFkfL2M for associated talk
| null | null | null |
cs.PL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Automatic differentiation (AD) is an important family of algorithms which
enables derivative based optimization. We show that AD can be simply
implemented with effects and handlers by doing so in the Frank language. By
considering how our implementation behaves in Frank's operational semantics, we
show how our code performs the dynamic creation of programs during evaluation.
|
[
{
"created": "Wed, 20 Jan 2021 12:34:25 GMT",
"version": "v1"
}
] |
2021-01-21
|
[
[
"Sigal",
"Jesse",
""
]
] |
Automatic differentiation (AD) is an important family of algorithms which enables derivative based optimization. We show that AD can be simply implemented with effects and handlers by doing so in the Frank language. By considering how our implementation behaves in Frank's operational semantics, we show how our code performs the dynamic creation of programs during evaluation.
|
2406.17815
|
Alireza Hosseini
|
Alireza Hosseini, Amirhossein Kazerouni, Saeed Akhavan, Michael
Brudno, Babak Taati
|
SUM: Saliency Unification through Mamba for Visual Attention Modeling
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Visual attention modeling, important for interpreting and prioritizing visual
stimuli, plays a significant role in applications such as marketing,
multimedia, and robotics. Traditional saliency prediction models, especially
those based on Convolutional Neural Networks (CNNs) or Transformers, achieve
notable success by leveraging large-scale annotated datasets. However, the
current state-of-the-art (SOTA) models that use Transformers are
computationally expensive. Additionally, separate models are often required for
each image type, lacking a unified approach. In this paper, we propose Saliency
Unification through Mamba (SUM), a novel approach that integrates the efficient
long-range dependency modeling of Mamba with U-Net to provide a unified model
for diverse image types. Using a novel Conditional Visual State Space (C-VSS)
block, SUM dynamically adapts to various image types, including natural scenes,
web pages, and commercial imagery, ensuring universal applicability across
different data types. Our comprehensive evaluations across five benchmarks
demonstrate that SUM seamlessly adapts to different visual characteristics and
consistently outperforms existing models. These results position SUM as a
versatile and powerful tool for advancing visual attention modeling, offering a
robust solution universally applicable across different types of visual
content.
|
[
{
"created": "Tue, 25 Jun 2024 05:54:07 GMT",
"version": "v1"
}
] |
2024-06-27
|
[
[
"Hosseini",
"Alireza",
""
],
[
"Kazerouni",
"Amirhossein",
""
],
[
"Akhavan",
"Saeed",
""
],
[
"Brudno",
"Michael",
""
],
[
"Taati",
"Babak",
""
]
] |
Visual attention modeling, important for interpreting and prioritizing visual stimuli, plays a significant role in applications such as marketing, multimedia, and robotics. Traditional saliency prediction models, especially those based on Convolutional Neural Networks (CNNs) or Transformers, achieve notable success by leveraging large-scale annotated datasets. However, the current state-of-the-art (SOTA) models that use Transformers are computationally expensive. Additionally, separate models are often required for each image type, lacking a unified approach. In this paper, we propose Saliency Unification through Mamba (SUM), a novel approach that integrates the efficient long-range dependency modeling of Mamba with U-Net to provide a unified model for diverse image types. Using a novel Conditional Visual State Space (C-VSS) block, SUM dynamically adapts to various image types, including natural scenes, web pages, and commercial imagery, ensuring universal applicability across different data types. Our comprehensive evaluations across five benchmarks demonstrate that SUM seamlessly adapts to different visual characteristics and consistently outperforms existing models. These results position SUM as a versatile and powerful tool for advancing visual attention modeling, offering a robust solution universally applicable across different types of visual content.
|
1708.05106
|
Arin Chaudhuri
|
Arin Chaudhuri, Deovrat Kakde, Carol Sadek, Laura Gonzalez, Seunghyun
Kong
|
The Mean and Median Criterion for Automatic Kernel Bandwidth Selection
for Support Vector Data Description
| null | null |
10.1109/ICDMW.2017.116
| null |
cs.LG cs.AI stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Support vector data description (SVDD) is a popular technique for detecting
anomalies. The SVDD classifier partitions the whole space into an inlier
region, which consists of the region near the training data, and an outlier
region, which consists of points away from the training data. The computation
of the SVDD classifier requires a kernel function, and the Gaussian kernel is a
common choice for the kernel function. The Gaussian kernel has a bandwidth
parameter, whose value is important for good results. A small bandwidth leads
to overfitting, and the resulting SVDD classifier overestimates the number of
anomalies. A large bandwidth leads to underfitting, and the classifier fails to
detect many anomalies. In this paper we present a new automatic, unsupervised
method for selecting the Gaussian kernel bandwidth. The selected value can be
computed quickly, and it is competitive with existing bandwidth selection
methods.
|
[
{
"created": "Wed, 16 Aug 2017 23:38:35 GMT",
"version": "v1"
},
{
"created": "Mon, 21 Aug 2017 23:12:34 GMT",
"version": "v2"
}
] |
2018-07-23
|
[
[
"Chaudhuri",
"Arin",
""
],
[
"Kakde",
"Deovrat",
""
],
[
"Sadek",
"Carol",
""
],
[
"Gonzalez",
"Laura",
""
],
[
"Kong",
"Seunghyun",
""
]
] |
Support vector data description (SVDD) is a popular technique for detecting anomalies. The SVDD classifier partitions the whole space into an inlier region, which consists of the region near the training data, and an outlier region, which consists of points away from the training data. The computation of the SVDD classifier requires a kernel function, and the Gaussian kernel is a common choice for the kernel function. The Gaussian kernel has a bandwidth parameter, whose value is important for good results. A small bandwidth leads to overfitting, and the resulting SVDD classifier overestimates the number of anomalies. A large bandwidth leads to underfitting, and the classifier fails to detect many anomalies. In this paper we present a new automatic, unsupervised method for selecting the Gaussian kernel bandwidth. The selected value can be computed quickly, and it is competitive with existing bandwidth selection methods.
|
2006.03951
|
Rahul Singh
|
Rahul Singh, Fang Liu, Xin Liu, Ness Shroff
|
Contextual Bandits with Side-Observations
|
under review
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We investigate contextual bandits in the presence of side-observations across
arms in order to design recommendation algorithms for users connected via
social networks. Users in social networks respond to their friends' activity,
and hence provide information about each other's preferences. In our model,
when a learning algorithm recommends an article to a user, not only does it
observe his/her response (e.g. an ad click), but also the side-observations,
i.e., the response of his neighbors if they were presented with the same
article. We model these observation dependencies by a graph $\mathcal{G}$ in
which nodes correspond to users, and edges correspond to social links. We
derive a problem/instance-dependent lower-bound on the regret of any consistent
algorithm. We propose an optimization (linear programming) based data-driven
learning algorithm that utilizes the structure of $\mathcal{G}$ in order to
make recommendations to users and show that it is asymptotically optimal, in
the sense that its regret matches the lower-bound as the number of rounds
$T\to\infty$. We show that this asymptotically optimal regret is upper-bounded
as $O\left(|\chi(\mathcal{G})|\log T\right)$, where $|\chi(\mathcal{G})|$ is
the domination number of $\mathcal{G}$. In contrast, a naive application of the
existing learning algorithms results in $O\left(N\log T\right)$ regret, where
$N$ is the number of users.
|
[
{
"created": "Sat, 6 Jun 2020 19:34:50 GMT",
"version": "v1"
},
{
"created": "Fri, 23 Oct 2020 21:27:46 GMT",
"version": "v2"
}
] |
2020-10-27
|
[
[
"Singh",
"Rahul",
""
],
[
"Liu",
"Fang",
""
],
[
"Liu",
"Xin",
""
],
[
"Shroff",
"Ness",
""
]
] |
We investigate contextual bandits in the presence of side-observations across arms in order to design recommendation algorithms for users connected via social networks. Users in social networks respond to their friends' activity, and hence provide information about each other's preferences. In our model, when a learning algorithm recommends an article to a user, not only does it observe his/her response (e.g. an ad click), but also the side-observations, i.e., the response of his neighbors if they were presented with the same article. We model these observation dependencies by a graph $\mathcal{G}$ in which nodes correspond to users, and edges correspond to social links. We derive a problem/instance-dependent lower-bound on the regret of any consistent algorithm. We propose an optimization (linear programming) based data-driven learning algorithm that utilizes the structure of $\mathcal{G}$ in order to make recommendations to users and show that it is asymptotically optimal, in the sense that its regret matches the lower-bound as the number of rounds $T\to\infty$. We show that this asymptotically optimal regret is upper-bounded as $O\left(|\chi(\mathcal{G})|\log T\right)$, where $|\chi(\mathcal{G})|$ is the domination number of $\mathcal{G}$. In contrast, a naive application of the existing learning algorithms results in $O\left(N\log T\right)$ regret, where $N$ is the number of users.
|
2403.05308
|
Biswarup Mukherjee
|
Anne Tryphosa Kamatham, Kavita Sharma, Srikumar Venkataraman, Biswarup
Mukherjee
|
Sparse Wearable Sonomyography Sensor-based Proprioceptive Proportional
Control Across Multiple Gestures
| null | null | null | null |
cs.HC cs.RO eess.SP
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Sonomyography (SMG) is a non-invasive technique that uses ultrasound imaging
to detect the dynamic activity of muscles. Wearable SMG systems have recently
gained popularity due to their potential as human-computer interfaces for their
superior performance compared to conventional methods. This paper demonstrates
real-time positional proportional control of multiple gestures using a
multiplexed 8-channel wearable SMG system. The amplitude-mode ultrasound
signals from the SMG system were utilized to detect muscle activity from the
forearm of 8 healthy individuals. The derived signals were used to control the
on-screen movement of the cursor. A target achievement task was performed to
analyze the performance of our SMG-based human-machine interface. Our wearable
SMG system provided accurate, stable, and intuitive control in real-time by
achieving an average success rate greater than 80% with all gestures.
Furthermore, the wearable SMG system's abilities to detect volitional movement
and decode movement kinematic information from SMG trajectories using standard
performance metrics were evaluated. Our results provide insights to validate
SMG as an intuitive human-machine interface.
|
[
{
"created": "Fri, 8 Mar 2024 13:38:07 GMT",
"version": "v1"
}
] |
2024-03-11
|
[
[
"Kamatham",
"Anne Tryphosa",
""
],
[
"Sharma",
"Kavita",
""
],
[
"Venkataraman",
"Srikumar",
""
],
[
"Mukherjee",
"Biswarup",
""
]
] |
Sonomyography (SMG) is a non-invasive technique that uses ultrasound imaging to detect the dynamic activity of muscles. Wearable SMG systems have recently gained popularity due to their potential as human-computer interfaces for their superior performance compared to conventional methods. This paper demonstrates real-time positional proportional control of multiple gestures using a multiplexed 8-channel wearable SMG system. The amplitude-mode ultrasound signals from the SMG system were utilized to detect muscle activity from the forearm of 8 healthy individuals. The derived signals were used to control the on-screen movement of the cursor. A target achievement task was performed to analyze the performance of our SMG-based human-machine interface. Our wearable SMG system provided accurate, stable, and intuitive control in real-time by achieving an average success rate greater than 80% with all gestures. Furthermore, the wearable SMG system's abilities to detect volitional movement and decode movement kinematic information from SMG trajectories using standard performance metrics were evaluated. Our results provide insights to validate SMG as an intuitive human-machine interface.
|
2305.10703
|
Yue Yu
|
Yue Yu, Yuchen Zhuang, Rongzhi Zhang, Yu Meng, Jiaming Shen, Chao
Zhang
|
ReGen: Zero-Shot Text Classification via Training Data Generation with
Progressive Dense Retrieval
|
ACL 2023 Findings (Code: https://github.com/yueyu1030/ReGen)
| null | null | null |
cs.CL cs.IR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the development of large language models (LLMs), zero-shot learning has
attracted much attention for various NLP tasks. Different from prior works that
generate training data with billion-scale natural language generation (NLG)
models, we propose a retrieval-enhanced framework to create training data from
a general-domain unlabeled corpus. To realize this, we first conduct
contrastive pretraining to learn an unsupervised dense retriever for extracting
the most relevant documents using class-descriptive verbalizers. We then
further propose two simple strategies, namely Verbalizer Augmentation with
Demonstrations and Self-consistency Guided Filtering to improve the topic
coverage of the dataset while removing noisy examples. Experiments on nine
datasets demonstrate that REGEN achieves 4.3% gain over the strongest baselines
and saves around 70% of the time compared to baselines using large NLG models.
Besides, REGEN can be naturally integrated with recently proposed large
language models to boost performance.
|
[
{
"created": "Thu, 18 May 2023 04:30:09 GMT",
"version": "v1"
}
] |
2023-05-19
|
[
[
"Yu",
"Yue",
""
],
[
"Zhuang",
"Yuchen",
""
],
[
"Zhang",
"Rongzhi",
""
],
[
"Meng",
"Yu",
""
],
[
"Shen",
"Jiaming",
""
],
[
"Zhang",
"Chao",
""
]
] |
With the development of large language models (LLMs), zero-shot learning has attracted much attention for various NLP tasks. Different from prior works that generate training data with billion-scale natural language generation (NLG) models, we propose a retrieval-enhanced framework to create training data from a general-domain unlabeled corpus. To realize this, we first conduct contrastive pretraining to learn an unsupervised dense retriever for extracting the most relevant documents using class-descriptive verbalizers. We then further propose two simple strategies, namely Verbalizer Augmentation with Demonstrations and Self-consistency Guided Filtering to improve the topic coverage of the dataset while removing noisy examples. Experiments on nine datasets demonstrate that REGEN achieves 4.3% gain over the strongest baselines and saves around 70% of the time compared to baselines using large NLG models. Besides, REGEN can be naturally integrated with recently proposed large language models to boost performance.
|
2105.13573
|
El Moatez Billah Nagoudi
|
El Moatez Billah Nagoudi, AbdelRahim Elmadany, Muhammad Abdul-Mageed
|
Investigating Code-Mixed Modern Standard Arabic-Egyptian to English
Machine Translation
|
CALCS2021, colocated with NAACL-2021
| null | null | null |
cs.LG cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Recent progress in neural machine translation (NMT) has made it possible to
translate successfully between monolingual language pairs where large parallel
data exist, with pre-trained models improving performance even further.
Although there exists work on translating in code-mixed settings (where one of
the pairs includes text from two or more languages), it is still unclear what
recent success in NMT and language modeling exactly means for translating
code-mixed text. We investigate one such context, namely MT from code-mixed
Modern Standard Arabic and Egyptian Arabic (MSAEA) into English. We develop
models under different conditions, employing both (i) standard end-to-end
sequence-to-sequence (S2S) Transformers trained from scratch and (ii)
pre-trained S2S language models (LMs). We are able to acquire reasonable
performance using only MSA-EN parallel data with S2S models trained from
scratch. We also find LMs fine-tuned on data from various Arabic dialects to
help the MSAEA-EN task. Our work is in the context of the Shared Task on
Machine Translation in Code-Switching. Our best model achieves $\bf25.72$ BLEU,
placing us first on the official shared task evaluation for MSAEA-EN.
|
[
{
"created": "Fri, 28 May 2021 03:38:35 GMT",
"version": "v1"
}
] |
2021-05-31
|
[
[
"Nagoudi",
"El Moatez Billah",
""
],
[
"Elmadany",
"AbdelRahim",
""
],
[
"Abdul-Mageed",
"Muhammad",
""
]
] |
Recent progress in neural machine translation (NMT) has made it possible to translate successfully between monolingual language pairs where large parallel data exist, with pre-trained models improving performance even further. Although there exists work on translating in code-mixed settings (where one of the pairs includes text from two or more languages), it is still unclear what recent success in NMT and language modeling exactly means for translating code-mixed text. We investigate one such context, namely MT from code-mixed Modern Standard Arabic and Egyptian Arabic (MSAEA) into English. We develop models under different conditions, employing both (i) standard end-to-end sequence-to-sequence (S2S) Transformers trained from scratch and (ii) pre-trained S2S language models (LMs). We are able to acquire reasonable performance using only MSA-EN parallel data with S2S models trained from scratch. We also find LMs fine-tuned on data from various Arabic dialects to help the MSAEA-EN task. Our work is in the context of the Shared Task on Machine Translation in Code-Switching. Our best model achieves $\bf25.72$ BLEU, placing us first on the official shared task evaluation for MSAEA-EN.
|
2110.13883
|
Alexander Marx
|
Alexander Marx and Jonas Fischer
|
Estimating Mutual Information via Geodesic $k$NN
|
Accepted at SIAM SDM'22
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Estimating mutual information (MI) between two continuous random variables
$X$ and $Y$ allows to capture non-linear dependencies between them,
non-parametrically. As such, MI estimation lies at the core of many data
science applications. Yet, robustly estimating MI for high-dimensional $X$ and
$Y$ is still an open research question.
In this paper, we formulate this problem through the lens of manifold
learning. That is, we leverage the common assumption that the information of
$X$ and $Y$ is captured by a low-dimensional manifold embedded in the observed
high-dimensional space and transfer it to MI estimation. As an extension to
state-of-the-art $k$NN estimators, we propose to determine the $k$-nearest
neighbors via geodesic distances on this manifold rather than from the ambient
space, which allows us to estimate MI even in the high-dimensional setting. An
empirical evaluation of our method, G-KSG, against the state-of-the-art shows
that it yields good estimations of MI in classical benchmark and manifold
tasks, even for high dimensional datasets, which none of the existing methods
can provide.
|
[
{
"created": "Tue, 26 Oct 2021 17:40:35 GMT",
"version": "v1"
},
{
"created": "Tue, 18 Jan 2022 12:59:04 GMT",
"version": "v2"
}
] |
2022-01-19
|
[
[
"Marx",
"Alexander",
""
],
[
"Fischer",
"Jonas",
""
]
] |
Estimating mutual information (MI) between two continuous random variables $X$ and $Y$ allows to capture non-linear dependencies between them, non-parametrically. As such, MI estimation lies at the core of many data science applications. Yet, robustly estimating MI for high-dimensional $X$ and $Y$ is still an open research question. In this paper, we formulate this problem through the lens of manifold learning. That is, we leverage the common assumption that the information of $X$ and $Y$ is captured by a low-dimensional manifold embedded in the observed high-dimensional space and transfer it to MI estimation. As an extension to state-of-the-art $k$NN estimators, we propose to determine the $k$-nearest neighbors via geodesic distances on this manifold rather than from the ambient space, which allows us to estimate MI even in the high-dimensional setting. An empirical evaluation of our method, G-KSG, against the state-of-the-art shows that it yields good estimations of MI in classical benchmark and manifold tasks, even for high dimensional datasets, which none of the existing methods can provide.
|
1104.2690
|
Nick Gravin
|
Ioannis Caragiannis, Angelo Fanelli, Nick Gravin, Alexander Skopalik
|
Efficient computation of approximate pure Nash equilibria in congestion
games
| null | null | null | null |
cs.GT cs.DM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Congestion games constitute an important class of games in which computing an
exact or even approximate pure Nash equilibrium is in general {\sf
PLS}-complete. We present a surprisingly simple polynomial-time algorithm that
computes O(1)-approximate Nash equilibria in these games. In particular, for
congestion games with linear latency functions, our algorithm computes
$(2+\epsilon)$-approximate pure Nash equilibria in time polynomial in the
number of players, the number of resources and $1/\epsilon$. It also applies to
games with polynomial latency functions with constant maximum degree $d$;
there, the approximation guarantee is $d^{O(d)}$. The algorithm essentially
identifies a polynomially long sequence of best-response moves that lead to an
approximate equilibrium; the existence of such short sequences is interesting
in itself. These are the first positive algorithmic results for approximate
equilibria in non-symmetric congestion games. We strengthen them further by
proving that, for congestion games that deviate from our mild assumptions,
computing $\rho$-approximate equilibria is {\sf PLS}-complete for any
polynomial-time computable $\rho$.
|
[
{
"created": "Thu, 14 Apr 2011 08:14:47 GMT",
"version": "v1"
},
{
"created": "Wed, 13 Jul 2011 18:15:02 GMT",
"version": "v2"
}
] |
2011-07-14
|
[
[
"Caragiannis",
"Ioannis",
""
],
[
"Fanelli",
"Angelo",
""
],
[
"Gravin",
"Nick",
""
],
[
"Skopalik",
"Alexander",
""
]
] |
Congestion games constitute an important class of games in which computing an exact or even approximate pure Nash equilibrium is in general {\sf PLS}-complete. We present a surprisingly simple polynomial-time algorithm that computes O(1)-approximate Nash equilibria in these games. In particular, for congestion games with linear latency functions, our algorithm computes $(2+\epsilon)$-approximate pure Nash equilibria in time polynomial in the number of players, the number of resources and $1/\epsilon$. It also applies to games with polynomial latency functions with constant maximum degree $d$; there, the approximation guarantee is $d^{O(d)}$. The algorithm essentially identifies a polynomially long sequence of best-response moves that lead to an approximate equilibrium; the existence of such short sequences is interesting in itself. These are the first positive algorithmic results for approximate equilibria in non-symmetric congestion games. We strengthen them further by proving that, for congestion games that deviate from our mild assumptions, computing $\rho$-approximate equilibria is {\sf PLS}-complete for any polynomial-time computable $\rho$.
|
2408.03291
|
Lianwei Yang
|
Lianwei Yang, Haisong Gong
|
DopQ-ViT: Towards Distribution-Friendly and Outlier-Aware Post-Training
Quantization for Vision Transformers
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Vision transformers (ViTs) have garnered significant attention for their
performance in vision tasks; however, the high computational cost and
significant latency issues have hinder widespread adoption. Post-training
quantization (PTQ), a promising method for model compression, still faces
accuracy degradation challenges with ViTs. There are two reasons for this: the
existing quantization paradigm does not fit the power-law distribution of
post-Softmax activations well, and accuracy inevitably decreases after
reparameterizing post-LayerNorm activations. We propose a Distribution-Friendly
and Outlier-Aware Post-training Quantization method for Vision Transformers,
named DopQ-ViT. DopQ-ViT analyzes the inefficiencies of current quantizers and
introduces a distribution-friendly Tan Quantizer called TanQ. TanQ focuses more
on values near 1, more accurately preserving the power-law distribution of
post-Softmax activations, and achieves favorable results. Moreover, when
reparameterizing post-LayerNorm activations from channel-wise to layer-wise
quantization, the accuracy degradation is mainly due to the significant impact
of outliers in the scaling factors. Therefore, DopQ-ViT proposes a method to
Search for the Optimal Scaling Factor, denoted as SOSF, which compensates for
the influence of outliers and preserves the performance of the quantization
model. DopQ-ViT has undergone extensive validation and demonstrates significant
performance improvements in quantization models, particularly in low-bit
settings.
|
[
{
"created": "Tue, 6 Aug 2024 16:40:04 GMT",
"version": "v1"
}
] |
2024-08-07
|
[
[
"Yang",
"Lianwei",
""
],
[
"Gong",
"Haisong",
""
]
] |
Vision transformers (ViTs) have garnered significant attention for their performance in vision tasks; however, the high computational cost and significant latency issues have hinder widespread adoption. Post-training quantization (PTQ), a promising method for model compression, still faces accuracy degradation challenges with ViTs. There are two reasons for this: the existing quantization paradigm does not fit the power-law distribution of post-Softmax activations well, and accuracy inevitably decreases after reparameterizing post-LayerNorm activations. We propose a Distribution-Friendly and Outlier-Aware Post-training Quantization method for Vision Transformers, named DopQ-ViT. DopQ-ViT analyzes the inefficiencies of current quantizers and introduces a distribution-friendly Tan Quantizer called TanQ. TanQ focuses more on values near 1, more accurately preserving the power-law distribution of post-Softmax activations, and achieves favorable results. Moreover, when reparameterizing post-LayerNorm activations from channel-wise to layer-wise quantization, the accuracy degradation is mainly due to the significant impact of outliers in the scaling factors. Therefore, DopQ-ViT proposes a method to Search for the Optimal Scaling Factor, denoted as SOSF, which compensates for the influence of outliers and preserves the performance of the quantization model. DopQ-ViT has undergone extensive validation and demonstrates significant performance improvements in quantization models, particularly in low-bit settings.
|
1505.03759
|
Keren Zhou
|
Keren Zhou, Guocheng Niu, Wuzhao Zhang, Xueqi Li, Wenqin Liu
|
Parse Concurrent Data Structures: BST as an Example
| null | null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Designing concurrent data structures should follow some basic rules. By
separating the algorithms into two phases, we present guidelines for scalable
data structures, with a analysis model based on the Amadal's law. To the best
of our knowledge, we are the first to formalize a practical model for measuring
concurrent structures' speedup. We also build some edge-cutting BSTs following
our principles, testing them under different workloads. The result provides
compelling evidence to back the our guidelines, and shows that our theory is
useful for reasoning the varied speedup.
|
[
{
"created": "Thu, 14 May 2015 15:38:21 GMT",
"version": "v1"
},
{
"created": "Sat, 30 May 2015 16:29:50 GMT",
"version": "v2"
}
] |
2015-06-02
|
[
[
"Zhou",
"Keren",
""
],
[
"Niu",
"Guocheng",
""
],
[
"Zhang",
"Wuzhao",
""
],
[
"Li",
"Xueqi",
""
],
[
"Liu",
"Wenqin",
""
]
] |
Designing concurrent data structures should follow some basic rules. By separating the algorithms into two phases, we present guidelines for scalable data structures, with a analysis model based on the Amadal's law. To the best of our knowledge, we are the first to formalize a practical model for measuring concurrent structures' speedup. We also build some edge-cutting BSTs following our principles, testing them under different workloads. The result provides compelling evidence to back the our guidelines, and shows that our theory is useful for reasoning the varied speedup.
|
1906.05000
|
Arne K\"ohn
|
Max Friedrich, Arne K\"ohn, Gregor Wiedemann, Chris Biemann
|
Adversarial Learning of Privacy-Preserving Text Representations for
De-Identification of Medical Records
|
Accepted at ACL 2019; camera-ready version
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
De-identification is the task of detecting protected health information (PHI)
in medical text. It is a critical step in sanitizing electronic health records
(EHRs) to be shared for research. Automatic de-identification classifierscan
significantly speed up the sanitization process. However, obtaining a large and
diverse dataset to train such a classifier that works wellacross many types of
medical text poses a challenge as privacy laws prohibit the sharing of raw
medical records. We introduce a method to create privacy-preserving shareable
representations of medical text (i.e. they contain no PHI) that does not
require expensive manual pseudonymization. These representations can be shared
between organizations to create unified datasets for training de-identification
models. Our representation allows training a simple LSTM-CRF de-identification
model to an F1 score of 97.4%, which is comparable to a strong baseline that
exposes private information in its representation. A robust, widely available
de-identification classifier based on our representation could potentially
enable studies for which de-identification would otherwise be too costly.
|
[
{
"created": "Wed, 12 Jun 2019 08:29:24 GMT",
"version": "v1"
}
] |
2019-06-13
|
[
[
"Friedrich",
"Max",
""
],
[
"Köhn",
"Arne",
""
],
[
"Wiedemann",
"Gregor",
""
],
[
"Biemann",
"Chris",
""
]
] |
De-identification is the task of detecting protected health information (PHI) in medical text. It is a critical step in sanitizing electronic health records (EHRs) to be shared for research. Automatic de-identification classifierscan significantly speed up the sanitization process. However, obtaining a large and diverse dataset to train such a classifier that works wellacross many types of medical text poses a challenge as privacy laws prohibit the sharing of raw medical records. We introduce a method to create privacy-preserving shareable representations of medical text (i.e. they contain no PHI) that does not require expensive manual pseudonymization. These representations can be shared between organizations to create unified datasets for training de-identification models. Our representation allows training a simple LSTM-CRF de-identification model to an F1 score of 97.4%, which is comparable to a strong baseline that exposes private information in its representation. A robust, widely available de-identification classifier based on our representation could potentially enable studies for which de-identification would otherwise be too costly.
|
2206.06357
|
Haolin Yu
|
Haolin Yu, Kaiyang Guo, Mahdi Karami, Xi Chen, Guojun Zhang, Pascal
Poupart
|
Federated Bayesian Neural Regression: A Scalable Global Federated
Gaussian Process
|
10 pages main text, 5 pages appendix, 5 figures
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
In typical scenarios where the Federated Learning (FL) framework applies, it
is common for clients to have insufficient training data to produce an accurate
model. Thus, models that provide not only point estimations, but also some
notion of confidence are beneficial. Gaussian Process (GP) is a powerful
Bayesian model that comes with naturally well-calibrated variance estimations.
However, it is challenging to learn a stand-alone global GP since merging local
kernels leads to privacy leakage. To preserve privacy, previous works that
consider federated GPs avoid learning a global model by focusing on the
personalized setting or learning an ensemble of local models. We present
Federated Bayesian Neural Regression (FedBNR), an algorithm that learns a
scalable stand-alone global federated GP that respects clients' privacy. We
incorporate deep kernel learning and random features for scalability by
defining a unifying random kernel. We show this random kernel can recover any
stationary kernel and many non-stationary kernels. We then derive a principled
approach of learning a global predictive model as if all client data is
centralized. We also learn global kernels with knowledge distillation methods
for non-identically and independently distributed (non-i.i.d.) clients.
Experiments are conducted on real-world regression datasets and show
statistically significant improvements compared to other federated GP models.
|
[
{
"created": "Mon, 13 Jun 2022 17:52:58 GMT",
"version": "v1"
}
] |
2022-06-14
|
[
[
"Yu",
"Haolin",
""
],
[
"Guo",
"Kaiyang",
""
],
[
"Karami",
"Mahdi",
""
],
[
"Chen",
"Xi",
""
],
[
"Zhang",
"Guojun",
""
],
[
"Poupart",
"Pascal",
""
]
] |
In typical scenarios where the Federated Learning (FL) framework applies, it is common for clients to have insufficient training data to produce an accurate model. Thus, models that provide not only point estimations, but also some notion of confidence are beneficial. Gaussian Process (GP) is a powerful Bayesian model that comes with naturally well-calibrated variance estimations. However, it is challenging to learn a stand-alone global GP since merging local kernels leads to privacy leakage. To preserve privacy, previous works that consider federated GPs avoid learning a global model by focusing on the personalized setting or learning an ensemble of local models. We present Federated Bayesian Neural Regression (FedBNR), an algorithm that learns a scalable stand-alone global federated GP that respects clients' privacy. We incorporate deep kernel learning and random features for scalability by defining a unifying random kernel. We show this random kernel can recover any stationary kernel and many non-stationary kernels. We then derive a principled approach of learning a global predictive model as if all client data is centralized. We also learn global kernels with knowledge distillation methods for non-identically and independently distributed (non-i.i.d.) clients. Experiments are conducted on real-world regression datasets and show statistically significant improvements compared to other federated GP models.
|
2110.06703
|
Jochen De Weerdt
|
Pieter De Koninck and Klaas Nelissen and Seppe vanden Broucke and Bart
Baesens and Monique Snoeck and Jochen De Weerdt
|
Expert-driven Trace Clustering with Instance-level Constraints
| null |
Knowl Inf Syst 63, 1197-1220 (2021)
|
10.1007/s10115-021-01548-6
| null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Within the field of process mining, several different trace clustering
approaches exist for partitioning traces or process instances into similar
groups. Typically, this partitioning is based on certain patterns or similarity
between the traces, or driven by the discovery of a process model for each
cluster. The main drawback of these techniques, however, is that their
solutions are usually hard to evaluate or justify by domain experts. In this
paper, we present two constrained trace clustering techniques that are capable
to leverage expert knowledge in the form of instance-level constraints. In an
extensive experimental evaluation using two real-life datasets, we show that
our novel techniques are indeed capable of producing clustering solutions that
are more justifiable without a substantial negative impact on their quality.
|
[
{
"created": "Wed, 13 Oct 2021 13:18:58 GMT",
"version": "v1"
}
] |
2022-08-18
|
[
[
"De Koninck",
"Pieter",
""
],
[
"Nelissen",
"Klaas",
""
],
[
"Broucke",
"Seppe vanden",
""
],
[
"Baesens",
"Bart",
""
],
[
"Snoeck",
"Monique",
""
],
[
"De Weerdt",
"Jochen",
""
]
] |
Within the field of process mining, several different trace clustering approaches exist for partitioning traces or process instances into similar groups. Typically, this partitioning is based on certain patterns or similarity between the traces, or driven by the discovery of a process model for each cluster. The main drawback of these techniques, however, is that their solutions are usually hard to evaluate or justify by domain experts. In this paper, we present two constrained trace clustering techniques that are capable to leverage expert knowledge in the form of instance-level constraints. In an extensive experimental evaluation using two real-life datasets, we show that our novel techniques are indeed capable of producing clustering solutions that are more justifiable without a substantial negative impact on their quality.
|
1904.04176
|
Ashwin Rajadesingan
|
Ashwin Rajadesingan, Ramaswami Mahalingam, David Jurgens
|
Smart, Responsible, and Upper Caste Only: Measuring Caste Attitudes
through Large-Scale Analysis of Matrimonial Profiles
|
12 pages; Accepted to be published at ICWSM'19
| null | null | null |
cs.SI cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Discriminatory caste attitudes currently stigmatize millions of Indians,
subjecting individuals to prejudice in all aspects of life. Governmental
incentives and societal movements have attempted to counter these attitudes,
yet accurate measurements of public opinions on caste are not yet available for
understanding whether progress is being made. Here, we introduce a novel
approach to measure public attitudes of caste through an indicator variable:
openness to intercaste marriage. Using a massive dataset of over 313K profiles
from a major Indian matrimonial site, we precisely quantify public attitudes,
along with differences between generations and between Indian residents and
diaspora. We show that younger generations are more open to intercaste
marriage, yet attitudes are based on a complex function of social status beyond
their own caste. In examining the desired qualities in a spouse, we find that
individuals open to intercaste marriage are more individualistic in the
qualities they desire, rather than favoring family-related qualities, which
mirrors larger societal trends away from collectivism. Finally, we show that
attitudes in diaspora are significantly less open, suggesting a bi-cultural
model of integration. Our research provides the first empirical evidence
identifying how various intersections of identity shape attitudes toward
intercaste marriage in India and among the Indian diaspora in the US.
|
[
{
"created": "Mon, 8 Apr 2019 16:30:33 GMT",
"version": "v1"
}
] |
2019-04-09
|
[
[
"Rajadesingan",
"Ashwin",
""
],
[
"Mahalingam",
"Ramaswami",
""
],
[
"Jurgens",
"David",
""
]
] |
Discriminatory caste attitudes currently stigmatize millions of Indians, subjecting individuals to prejudice in all aspects of life. Governmental incentives and societal movements have attempted to counter these attitudes, yet accurate measurements of public opinions on caste are not yet available for understanding whether progress is being made. Here, we introduce a novel approach to measure public attitudes of caste through an indicator variable: openness to intercaste marriage. Using a massive dataset of over 313K profiles from a major Indian matrimonial site, we precisely quantify public attitudes, along with differences between generations and between Indian residents and diaspora. We show that younger generations are more open to intercaste marriage, yet attitudes are based on a complex function of social status beyond their own caste. In examining the desired qualities in a spouse, we find that individuals open to intercaste marriage are more individualistic in the qualities they desire, rather than favoring family-related qualities, which mirrors larger societal trends away from collectivism. Finally, we show that attitudes in diaspora are significantly less open, suggesting a bi-cultural model of integration. Our research provides the first empirical evidence identifying how various intersections of identity shape attitudes toward intercaste marriage in India and among the Indian diaspora in the US.
|
2106.04011
|
AkshatKumar Nigam Mr
|
AkshatKumar Nigam, Robert Pollice, Alan Aspuru-Guzik
|
JANUS: Parallel Tempered Genetic Algorithm Guided by Deep Neural
Networks for Inverse Molecular Design
|
20 pages, 12 figures, 4 tables. Comments are welcome! (code will be
uploaded when paper is formally published)
| null | null | null |
cs.NE cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Inverse molecular design, i.e., designing molecules with specific target
properties, can be posed as an optimization problem. High-dimensional
optimization tasks in the natural sciences are commonly tackled via
population-based metaheuristic optimization algorithms such as evolutionary
algorithms. However, expensive property evaluation, which is often required,
can limit the widespread use of such approaches as the associated cost can
become prohibitive. Herein, we present JANUS, a genetic algorithm that is
inspired by parallel tempering. It propagates two populations, one for
exploration and another for exploitation, improving optimization by reducing
expensive property evaluations. Additionally, JANUS is augmented by a deep
neural network that approximates molecular properties via active learning for
enhanced sampling of the chemical space. Our method uses the SELFIES molecular
representation and the STONED algorithm for the efficient generation of
structures, and outperforms other generative models in common inverse molecular
design tasks achieving state-of-the-art performance.
|
[
{
"created": "Mon, 7 Jun 2021 23:41:34 GMT",
"version": "v1"
},
{
"created": "Sat, 14 Aug 2021 20:57:26 GMT",
"version": "v2"
}
] |
2021-08-17
|
[
[
"Nigam",
"AkshatKumar",
""
],
[
"Pollice",
"Robert",
""
],
[
"Aspuru-Guzik",
"Alan",
""
]
] |
Inverse molecular design, i.e., designing molecules with specific target properties, can be posed as an optimization problem. High-dimensional optimization tasks in the natural sciences are commonly tackled via population-based metaheuristic optimization algorithms such as evolutionary algorithms. However, expensive property evaluation, which is often required, can limit the widespread use of such approaches as the associated cost can become prohibitive. Herein, we present JANUS, a genetic algorithm that is inspired by parallel tempering. It propagates two populations, one for exploration and another for exploitation, improving optimization by reducing expensive property evaluations. Additionally, JANUS is augmented by a deep neural network that approximates molecular properties via active learning for enhanced sampling of the chemical space. Our method uses the SELFIES molecular representation and the STONED algorithm for the efficient generation of structures, and outperforms other generative models in common inverse molecular design tasks achieving state-of-the-art performance.
|
1906.01539
|
Samira Abnar
|
Samira Abnar, Lisa Beinborn, Rochelle Choenni, Willem Zuidema
|
Blackbox meets blackbox: Representational Similarity and Stability
Analysis of Neural Language Models and Brains
| null |
2nd BlackBoxNLP workshop @ACL2019
| null | null |
cs.AI cs.CL q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we define and apply representational stability analysis
(ReStA), an intuitive way of analyzing neural language models. ReStA is a
variant of the popular representational similarity analysis (RSA) in cognitive
neuroscience. While RSA can be used to compare representations in models, model
components, and human brains, ReStA compares instances of the same model, while
systematically varying single model parameter. Using ReStA, we study four
recent and successful neural language models, and evaluate how sensitive their
internal representations are to the amount of prior context. Using RSA, we
perform a systematic study of how similar the representational spaces in the
first and second (or higher) layers of these models are to each other and to
patterns of activation in the human brain. Our results reveal surprisingly
strong differences between language models, and give insights into where the
deep linguistic processing, that integrates information over multiple
sentences, is happening in these models. The combination of ReStA and RSA on
models and brains allows us to start addressing the important question of what
kind of linguistic processes we can hope to observe in fMRI brain imaging data.
In particular, our results suggest that the data on story reading from Wehbe et
al. (2014) contains a signal of shallow linguistic processing, but show no
evidence on the more interesting deep linguistic processing.
|
[
{
"created": "Tue, 4 Jun 2019 15:52:46 GMT",
"version": "v1"
},
{
"created": "Wed, 5 Jun 2019 09:58:34 GMT",
"version": "v2"
}
] |
2019-06-06
|
[
[
"Abnar",
"Samira",
""
],
[
"Beinborn",
"Lisa",
""
],
[
"Choenni",
"Rochelle",
""
],
[
"Zuidema",
"Willem",
""
]
] |
In this paper, we define and apply representational stability analysis (ReStA), an intuitive way of analyzing neural language models. ReStA is a variant of the popular representational similarity analysis (RSA) in cognitive neuroscience. While RSA can be used to compare representations in models, model components, and human brains, ReStA compares instances of the same model, while systematically varying single model parameter. Using ReStA, we study four recent and successful neural language models, and evaluate how sensitive their internal representations are to the amount of prior context. Using RSA, we perform a systematic study of how similar the representational spaces in the first and second (or higher) layers of these models are to each other and to patterns of activation in the human brain. Our results reveal surprisingly strong differences between language models, and give insights into where the deep linguistic processing, that integrates information over multiple sentences, is happening in these models. The combination of ReStA and RSA on models and brains allows us to start addressing the important question of what kind of linguistic processes we can hope to observe in fMRI brain imaging data. In particular, our results suggest that the data on story reading from Wehbe et al. (2014) contains a signal of shallow linguistic processing, but show no evidence on the more interesting deep linguistic processing.
|
2007.07032
|
Christian Timmerer
|
Andrew Perkis, Christian Timmerer, Sabina Barakovi\'c, Jasmina
Barakovi\'c Husi\'c, S{\o}ren Bech, Sebastian Bosse, Jean Botev, Kjell
Brunnstr\"om, Luis Cruz, Katrien De Moor, Andrea de Polo Saibanti, Wouter
Durnez, Sebastian Egger-Lampl, Ulrich Engelke, Tiago H. Falk, Jes\'us
Guti\'errez, Asim Hameed, Andrew Hines, Tanja Kojic, Dragan Kukolj, Eirini
Liotou, Dragorad Milovanovic, Sebastian M\"oller, Niall Murray, Babak Naderi,
Manuela Pereira, Stuart Perry, Antonio Pinheiro, Andres Pinilla, Alexander
Raake, Sarvesh Rajesh Agrawal, Ulrich Reiter, Rafael Rodrigues, Raimund
Schatz, Peter Schelkens, Steven Schmidt, Saeed Shafiee Sabet, Ashutosh
Singla, Lea Skorin-Kapov, Mirko Suznjevic, Stefan Uhrig, Sara Vlahovi\'c,
Jan-Niklas Voigt-Antons, Saman Zadtootaghaj
|
QUALINET White Paper on Definitions of Immersive Media Experience (IMEx)
| null | null | null | null |
cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the coming of age of virtual/augmented reality and interactive media,
numerous definitions, frameworks, and models of immersion have emerged across
different fields ranging from computer graphics to literary works. Immersion is
oftentimes used interchangeably with presence as both concepts are closely
related. However, there are noticeable interdisciplinary differences regarding
definitions, scope, and constituents that are required to be addressed so that
a coherent understanding of the concepts can be achieved. Such consensus is
vital for paving the directionality of the future of immersive media
experiences (IMEx) and all related matters. The aim of this white paper is to
provide a survey of definitions of immersion and presence which leads to a
definition of immersive media experience (IMEx). The Quality of Experience
(QoE) for immersive media is described by establishing a relationship between
the concepts of QoE and IMEx followed by application areas of immersive media
experience. Influencing factors on immersive media experience are elaborated as
well as the assessment of immersive media experience. Finally, standardization
activities related to IMEx are highlighted and the white paper is concluded
with an outlook related to future developments.
|
[
{
"created": "Wed, 10 Jun 2020 15:59:42 GMT",
"version": "v1"
},
{
"created": "Tue, 24 Nov 2020 23:04:36 GMT",
"version": "v2"
}
] |
2020-11-26
|
[
[
"Perkis",
"Andrew",
""
],
[
"Timmerer",
"Christian",
""
],
[
"Baraković",
"Sabina",
""
],
[
"Husić",
"Jasmina Baraković",
""
],
[
"Bech",
"Søren",
""
],
[
"Bosse",
"Sebastian",
""
],
[
"Botev",
"Jean",
""
],
[
"Brunnström",
"Kjell",
""
],
[
"Cruz",
"Luis",
""
],
[
"De Moor",
"Katrien",
""
],
[
"Saibanti",
"Andrea de Polo",
""
],
[
"Durnez",
"Wouter",
""
],
[
"Egger-Lampl",
"Sebastian",
""
],
[
"Engelke",
"Ulrich",
""
],
[
"Falk",
"Tiago H.",
""
],
[
"Gutiérrez",
"Jesús",
""
],
[
"Hameed",
"Asim",
""
],
[
"Hines",
"Andrew",
""
],
[
"Kojic",
"Tanja",
""
],
[
"Kukolj",
"Dragan",
""
],
[
"Liotou",
"Eirini",
""
],
[
"Milovanovic",
"Dragorad",
""
],
[
"Möller",
"Sebastian",
""
],
[
"Murray",
"Niall",
""
],
[
"Naderi",
"Babak",
""
],
[
"Pereira",
"Manuela",
""
],
[
"Perry",
"Stuart",
""
],
[
"Pinheiro",
"Antonio",
""
],
[
"Pinilla",
"Andres",
""
],
[
"Raake",
"Alexander",
""
],
[
"Agrawal",
"Sarvesh Rajesh",
""
],
[
"Reiter",
"Ulrich",
""
],
[
"Rodrigues",
"Rafael",
""
],
[
"Schatz",
"Raimund",
""
],
[
"Schelkens",
"Peter",
""
],
[
"Schmidt",
"Steven",
""
],
[
"Sabet",
"Saeed Shafiee",
""
],
[
"Singla",
"Ashutosh",
""
],
[
"Skorin-Kapov",
"Lea",
""
],
[
"Suznjevic",
"Mirko",
""
],
[
"Uhrig",
"Stefan",
""
],
[
"Vlahović",
"Sara",
""
],
[
"Voigt-Antons",
"Jan-Niklas",
""
],
[
"Zadtootaghaj",
"Saman",
""
]
] |
With the coming of age of virtual/augmented reality and interactive media, numerous definitions, frameworks, and models of immersion have emerged across different fields ranging from computer graphics to literary works. Immersion is oftentimes used interchangeably with presence as both concepts are closely related. However, there are noticeable interdisciplinary differences regarding definitions, scope, and constituents that are required to be addressed so that a coherent understanding of the concepts can be achieved. Such consensus is vital for paving the directionality of the future of immersive media experiences (IMEx) and all related matters. The aim of this white paper is to provide a survey of definitions of immersion and presence which leads to a definition of immersive media experience (IMEx). The Quality of Experience (QoE) for immersive media is described by establishing a relationship between the concepts of QoE and IMEx followed by application areas of immersive media experience. Influencing factors on immersive media experience are elaborated as well as the assessment of immersive media experience. Finally, standardization activities related to IMEx are highlighted and the white paper is concluded with an outlook related to future developments.
|
1806.03417
|
Maximilian Nickel
|
Maximilian Nickel, Douwe Kiela
|
Learning Continuous Hierarchies in the Lorentz Model of Hyperbolic
Geometry
|
Accepted at ICML'18
| null | null | null |
cs.AI cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We are concerned with the discovery of hierarchical relationships from
large-scale unstructured similarity scores. For this purpose, we study
different models of hyperbolic space and find that learning embeddings in the
Lorentz model is substantially more efficient than in the Poincar\'e-ball
model. We show that the proposed approach allows us to learn high-quality
embeddings of large taxonomies which yield improvements over Poincar\'e
embeddings, especially in low dimensions. Lastly, we apply our model to
discover hierarchies in two real-world datasets: we show that an embedding in
hyperbolic space can reveal important aspects of a company's organizational
structure as well as reveal historical relationships between language families.
|
[
{
"created": "Sat, 9 Jun 2018 05:56:50 GMT",
"version": "v1"
},
{
"created": "Sun, 8 Jul 2018 13:06:31 GMT",
"version": "v2"
}
] |
2018-07-10
|
[
[
"Nickel",
"Maximilian",
""
],
[
"Kiela",
"Douwe",
""
]
] |
We are concerned with the discovery of hierarchical relationships from large-scale unstructured similarity scores. For this purpose, we study different models of hyperbolic space and find that learning embeddings in the Lorentz model is substantially more efficient than in the Poincar\'e-ball model. We show that the proposed approach allows us to learn high-quality embeddings of large taxonomies which yield improvements over Poincar\'e embeddings, especially in low dimensions. Lastly, we apply our model to discover hierarchies in two real-world datasets: we show that an embedding in hyperbolic space can reveal important aspects of a company's organizational structure as well as reveal historical relationships between language families.
|
1804.02273
|
Vyacheslav Moklev
|
Vyacheslav Moklev and Vladimir Ulyantsev
|
BFS Enumeration for Breaking Symmetries in Graphs
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
There are numerous NP-hard combinatorial problems which involve searching for
an undirected graph satisfying a certain property. One way to solve such
problems is to translate a problem into an instance of the boolean
satisfiability (SAT) or constraint satisfaction (CSP) problem. Such reduction
usually can give rise to numerous isomorphic representations of the same graph.
One way to reduce the search space and speed up the search under these
conditions is to introduce symmetrybreaking predicates. In this paper we
introduce three novel and practically effective symmetry-breaking predicates
for an undirected connected graph search based on breadth-first search (BFS)
enumeration and compare with existing symmetry-breaking methods on several
graph problems.
|
[
{
"created": "Fri, 6 Apr 2018 13:53:31 GMT",
"version": "v1"
}
] |
2018-04-09
|
[
[
"Moklev",
"Vyacheslav",
""
],
[
"Ulyantsev",
"Vladimir",
""
]
] |
There are numerous NP-hard combinatorial problems which involve searching for an undirected graph satisfying a certain property. One way to solve such problems is to translate a problem into an instance of the boolean satisfiability (SAT) or constraint satisfaction (CSP) problem. Such reduction usually can give rise to numerous isomorphic representations of the same graph. One way to reduce the search space and speed up the search under these conditions is to introduce symmetrybreaking predicates. In this paper we introduce three novel and practically effective symmetry-breaking predicates for an undirected connected graph search based on breadth-first search (BFS) enumeration and compare with existing symmetry-breaking methods on several graph problems.
|
2005.07293
|
Ninareh Mehrabi
|
Ninareh Mehrabi, Yuzhong Huang, Fred Morstatter
|
Statistical Equity: A Fairness Classification Objective
| null | null | null | null |
cs.LG cs.AI stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Machine learning systems have been shown to propagate the societal errors of
the past. In light of this, a wealth of research focuses on designing solutions
that are "fair." Even with this abundance of work, there is no singular
definition of fairness, mainly because fairness is subjective and context
dependent. We propose a new fairness definition, motivated by the principle of
equity, that considers existing biases in the data and attempts to make
equitable decisions that account for these previous historical biases. We
formalize our definition of fairness, and motivate it with its appropriate
contexts. Next, we operationalize it for equitable classification. We perform
multiple automatic and human evaluations to show the effectiveness of our
definition and demonstrate its utility for aspects of fairness, such as the
feedback loop.
|
[
{
"created": "Thu, 14 May 2020 23:19:38 GMT",
"version": "v1"
}
] |
2020-05-18
|
[
[
"Mehrabi",
"Ninareh",
""
],
[
"Huang",
"Yuzhong",
""
],
[
"Morstatter",
"Fred",
""
]
] |
Machine learning systems have been shown to propagate the societal errors of the past. In light of this, a wealth of research focuses on designing solutions that are "fair." Even with this abundance of work, there is no singular definition of fairness, mainly because fairness is subjective and context dependent. We propose a new fairness definition, motivated by the principle of equity, that considers existing biases in the data and attempts to make equitable decisions that account for these previous historical biases. We formalize our definition of fairness, and motivate it with its appropriate contexts. Next, we operationalize it for equitable classification. We perform multiple automatic and human evaluations to show the effectiveness of our definition and demonstrate its utility for aspects of fairness, such as the feedback loop.
|
2209.03069
|
Durga Shree N
|
Durga Shree N, Sree Dharinya S, Dasari Vijayasree, Nadendla Sai Roopa,
Anugu Arun
|
A Review on the Process of Automated Software Testing
|
7 pages, 2 figures, 2 tables
| null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
The requirements in automation, digitalization, and fast computations have
loaded the IT sector with expectations of highly reliable, efficient, and
cost-effective software. Given that the process of testing, verification, and
validation of software products consumes 50-75% of the total revenue if the
testing process is ineffective, "n" times the expenditure must be invested to
mend the havoc caused. A delay in project completion is often attributed to the
testing phase because of the numerous cycles of debugging process. The software
testing process determines the face of the product released to the user. It
sets the standard and reliability of a company's outputs. As the complexity
increases, testing gets intense so as to examine all the outliers and various
branches of the processing flow. The testing process is automated using
software tools to avoid the tedious manual process of test input generation and
validation criteria, which certifies the program only to a certain confidence
level in the presence of outliers.
|
[
{
"created": "Wed, 7 Sep 2022 11:06:07 GMT",
"version": "v1"
}
] |
2022-09-08
|
[
[
"N",
"Durga Shree",
""
],
[
"S",
"Sree Dharinya",
""
],
[
"Vijayasree",
"Dasari",
""
],
[
"Roopa",
"Nadendla Sai",
""
],
[
"Arun",
"Anugu",
""
]
] |
The requirements in automation, digitalization, and fast computations have loaded the IT sector with expectations of highly reliable, efficient, and cost-effective software. Given that the process of testing, verification, and validation of software products consumes 50-75% of the total revenue if the testing process is ineffective, "n" times the expenditure must be invested to mend the havoc caused. A delay in project completion is often attributed to the testing phase because of the numerous cycles of debugging process. The software testing process determines the face of the product released to the user. It sets the standard and reliability of a company's outputs. As the complexity increases, testing gets intense so as to examine all the outliers and various branches of the processing flow. The testing process is automated using software tools to avoid the tedious manual process of test input generation and validation criteria, which certifies the program only to a certain confidence level in the presence of outliers.
|
2208.07362
|
Subodh Mishra
|
Subodh Mishra, Sushruth Nagesh, Sagar Manglani, Graham Mills, Punarjay
Chakravarty, Gaurav Pandey
|
Look Both Ways: Bidirectional Visual Sensing for Automatic Multi-Camera
Registration
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work describes the automatic registration of a large network
(approximately 40) of fixed, ceiling-mounted environment cameras spread over a
large area (approximately 800 squared meters) using a mobile calibration robot
equipped with a single upward-facing fisheye camera and a backlit ArUco marker
for easy detection. The fisheye camera is used to do visual odometry (VO), and
the ArUco marker facilitates easy detection of the calibration robot in the
environment cameras. In addition, the fisheye camera is also able to detect the
environment cameras. This two-way, bidirectional detection constrains the pose
of the environment cameras to solve an optimization problem. Such an approach
can be used to automatically register a large-scale multi-camera system used
for surveillance, automated parking, or robotic applications. This VO based
multi-camera registration method has been extensively validated using
real-world experiments, and also compared against a similar approach which uses
a LiDAR - an expensive, heavier and power hungry sensor.
|
[
{
"created": "Mon, 15 Aug 2022 17:55:55 GMT",
"version": "v1"
},
{
"created": "Fri, 7 Oct 2022 23:19:58 GMT",
"version": "v2"
}
] |
2022-10-11
|
[
[
"Mishra",
"Subodh",
""
],
[
"Nagesh",
"Sushruth",
""
],
[
"Manglani",
"Sagar",
""
],
[
"Mills",
"Graham",
""
],
[
"Chakravarty",
"Punarjay",
""
],
[
"Pandey",
"Gaurav",
""
]
] |
This work describes the automatic registration of a large network (approximately 40) of fixed, ceiling-mounted environment cameras spread over a large area (approximately 800 squared meters) using a mobile calibration robot equipped with a single upward-facing fisheye camera and a backlit ArUco marker for easy detection. The fisheye camera is used to do visual odometry (VO), and the ArUco marker facilitates easy detection of the calibration robot in the environment cameras. In addition, the fisheye camera is also able to detect the environment cameras. This two-way, bidirectional detection constrains the pose of the environment cameras to solve an optimization problem. Such an approach can be used to automatically register a large-scale multi-camera system used for surveillance, automated parking, or robotic applications. This VO based multi-camera registration method has been extensively validated using real-world experiments, and also compared against a similar approach which uses a LiDAR - an expensive, heavier and power hungry sensor.
|
1710.02447
|
Bernease Herman
|
Bernease Herman (1), Gundula Proksch (1), Rachel Berney (1), Hillary
Dawkins (1), Jacob Kovacs (1), Yahui Ma (1), Jacob Rich (2), Amanda Tan (1)
((1) U. of Washington, (2) U. of Wisconsin)
|
Data science for urban equity: Making gentrification an accessible topic
for data scientists, policymakers, and the community
|
Presented at the Data For Good Exchange 2017
| null | null | null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The University of Washington eScience Institute runs an annual Data Science
for Social Good (DSSG) program that selects four projects each year to train
students from a wide range of disciplines while helping community members
execute social good projects, often with an urban focus.
We present observations and deliberations of one such project, the DSSG 2017
'Equitable Futures' project, which investigates the ongoing gentrification
process and the increasingly inequitable access to opportunities in Seattle.
Similar processes can be observed in many major cities. The project connects
issues usually analyzed in the disciplines of the built environment, geography,
sociology, economics, social work and city governments with data science
methodologies and visualizations.
|
[
{
"created": "Fri, 6 Oct 2017 15:24:57 GMT",
"version": "v1"
}
] |
2017-10-09
|
[
[
"Herman",
"Bernease",
"",
"U. of Washington"
],
[
"Proksch",
"Gundula",
"",
"U. of Washington"
],
[
"Berney",
"Rachel",
"",
"U. of Washington"
],
[
"Dawkins",
"Hillary",
"",
"U. of Washington"
],
[
"Kovacs",
"Jacob",
"",
"U. of Washington"
],
[
"Ma",
"Yahui",
"",
"U. of Washington"
],
[
"Rich",
"Jacob",
"",
"U. of Wisconsin"
],
[
"Tan",
"Amanda",
"",
"U. of Washington"
]
] |
The University of Washington eScience Institute runs an annual Data Science for Social Good (DSSG) program that selects four projects each year to train students from a wide range of disciplines while helping community members execute social good projects, often with an urban focus. We present observations and deliberations of one such project, the DSSG 2017 'Equitable Futures' project, which investigates the ongoing gentrification process and the increasingly inequitable access to opportunities in Seattle. Similar processes can be observed in many major cities. The project connects issues usually analyzed in the disciplines of the built environment, geography, sociology, economics, social work and city governments with data science methodologies and visualizations.
|
1808.06497
|
Keting Lu
|
Keting Lu, Shiqi Zhang, Xiaoping Chen
|
Goal-oriented Dialogue Policy Learning from Failures
| null | null | null | null |
cs.AI cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reinforcement learning methods have been used for learning dialogue policies.
However, learning an effective dialogue policy frequently requires
prohibitively many conversations. This is partly because of the sparse rewards
in dialogues, and the very few successful dialogues in early learning phase.
Hindsight experience replay (HER) enables learning from failures, but the
vanilla HER is inapplicable to dialogue learning due to the implicit goals. In
this work, we develop two complex HER methods providing different trade-offs
between complexity and performance, and, for the first time, enabled HER-based
dialogue policy learning. Experiments using a realistic user simulator show
that our HER methods perform better than existing experience replay methods (as
applied to deep Q-networks) in learning rate.
|
[
{
"created": "Mon, 20 Aug 2018 15:04:30 GMT",
"version": "v1"
},
{
"created": "Thu, 22 Nov 2018 13:51:34 GMT",
"version": "v2"
}
] |
2018-11-26
|
[
[
"Lu",
"Keting",
""
],
[
"Zhang",
"Shiqi",
""
],
[
"Chen",
"Xiaoping",
""
]
] |
Reinforcement learning methods have been used for learning dialogue policies. However, learning an effective dialogue policy frequently requires prohibitively many conversations. This is partly because of the sparse rewards in dialogues, and the very few successful dialogues in early learning phase. Hindsight experience replay (HER) enables learning from failures, but the vanilla HER is inapplicable to dialogue learning due to the implicit goals. In this work, we develop two complex HER methods providing different trade-offs between complexity and performance, and, for the first time, enabled HER-based dialogue policy learning. Experiments using a realistic user simulator show that our HER methods perform better than existing experience replay methods (as applied to deep Q-networks) in learning rate.
|
2406.06908
|
Shuaiyi Huang
|
Shuaiyi Huang, Saksham Suri, Kamal Gupta, Sai Saketh Rambhatla,
Ser-nam Lim, Abhinav Shrivastava
|
UVIS: Unsupervised Video Instance Segmentation
|
CVPR2024 Workshop
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Video instance segmentation requires classifying, segmenting, and tracking
every object across video frames. Unlike existing approaches that rely on
masks, boxes, or category labels, we propose UVIS, a novel Unsupervised Video
Instance Segmentation (UVIS) framework that can perform video instance
segmentation without any video annotations or dense label-based pretraining.
Our key insight comes from leveraging the dense shape prior from the
self-supervised vision foundation model DINO and the openset recognition
ability from the image-caption supervised vision-language model CLIP. Our UVIS
framework consists of three essential steps: frame-level pseudo-label
generation, transformer-based VIS model training, and query-based tracking. To
improve the quality of VIS predictions in the unsupervised setup, we introduce
a dual-memory design. This design includes a semantic memory bank for
generating accurate pseudo-labels and a tracking memory bank for maintaining
temporal consistency in object tracks. We evaluate our approach on three
standard VIS benchmarks, namely YoutubeVIS-2019, YoutubeVIS-2021, and Occluded
VIS. Our UVIS achieves 21.1 AP on YoutubeVIS-2019 without any video annotations
or dense pretraining, demonstrating the potential of our unsupervised VIS
framework.
|
[
{
"created": "Tue, 11 Jun 2024 03:05:50 GMT",
"version": "v1"
}
] |
2024-06-12
|
[
[
"Huang",
"Shuaiyi",
""
],
[
"Suri",
"Saksham",
""
],
[
"Gupta",
"Kamal",
""
],
[
"Rambhatla",
"Sai Saketh",
""
],
[
"Lim",
"Ser-nam",
""
],
[
"Shrivastava",
"Abhinav",
""
]
] |
Video instance segmentation requires classifying, segmenting, and tracking every object across video frames. Unlike existing approaches that rely on masks, boxes, or category labels, we propose UVIS, a novel Unsupervised Video Instance Segmentation (UVIS) framework that can perform video instance segmentation without any video annotations or dense label-based pretraining. Our key insight comes from leveraging the dense shape prior from the self-supervised vision foundation model DINO and the openset recognition ability from the image-caption supervised vision-language model CLIP. Our UVIS framework consists of three essential steps: frame-level pseudo-label generation, transformer-based VIS model training, and query-based tracking. To improve the quality of VIS predictions in the unsupervised setup, we introduce a dual-memory design. This design includes a semantic memory bank for generating accurate pseudo-labels and a tracking memory bank for maintaining temporal consistency in object tracks. We evaluate our approach on three standard VIS benchmarks, namely YoutubeVIS-2019, YoutubeVIS-2021, and Occluded VIS. Our UVIS achieves 21.1 AP on YoutubeVIS-2019 without any video annotations or dense pretraining, demonstrating the potential of our unsupervised VIS framework.
|
1110.4034
|
Ian Pratt-Hartmann
|
Roman Kontchakov and Yavor Nenov and Ian Pratt-Hartmann and Michael
Zakharyaschev
|
Topological Logics with Connectedness over Euclidean Spaces
| null |
ACM Transactions on Computational Logic, 14(2:13), 2013
|
10.1145/2480759.2480765
| null |
cs.LO math.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider the quantifier-free languages, Bc and Bc0, obtained by augmenting
the signature of Boolean algebras with a unary predicate representing,
respectively, the property of being connected, and the property of having a
connected interior. These languages are interpreted over the regular closed
sets of n-dimensional Euclidean space (n greater than 1) and, additionally,
over the regular closed polyhedral sets of n-dimensional Euclidean space. The
resulting logics are examples of formalisms that have recently been proposed in
the Artificial Intelligence literature under the rubric "Qualitative Spatial
Reasoning." We prove that the satisfiability problem for Bc is undecidable over
the regular closed polyhedra in all dimensions greater than 1, and that the
satisfiability problem for both languages is undecidable over both the regular
closed sets and the regular closed polyhedra in the Euclidean plane. However,
we also prove that the satisfiability problem for Bc0 is NP-complete over the
regular closed sets in all dimensions greater than 2, while the corresponding
problem for the regular closed polyhedra is ExpTime-complete. Our results show,
in particular, that spatial reasoning over Euclidean spaces is much harder than
reasoning over arbitrary topological spaces.
|
[
{
"created": "Tue, 18 Oct 2011 15:54:46 GMT",
"version": "v1"
}
] |
2024-04-24
|
[
[
"Kontchakov",
"Roman",
""
],
[
"Nenov",
"Yavor",
""
],
[
"Pratt-Hartmann",
"Ian",
""
],
[
"Zakharyaschev",
"Michael",
""
]
] |
We consider the quantifier-free languages, Bc and Bc0, obtained by augmenting the signature of Boolean algebras with a unary predicate representing, respectively, the property of being connected, and the property of having a connected interior. These languages are interpreted over the regular closed sets of n-dimensional Euclidean space (n greater than 1) and, additionally, over the regular closed polyhedral sets of n-dimensional Euclidean space. The resulting logics are examples of formalisms that have recently been proposed in the Artificial Intelligence literature under the rubric "Qualitative Spatial Reasoning." We prove that the satisfiability problem for Bc is undecidable over the regular closed polyhedra in all dimensions greater than 1, and that the satisfiability problem for both languages is undecidable over both the regular closed sets and the regular closed polyhedra in the Euclidean plane. However, we also prove that the satisfiability problem for Bc0 is NP-complete over the regular closed sets in all dimensions greater than 2, while the corresponding problem for the regular closed polyhedra is ExpTime-complete. Our results show, in particular, that spatial reasoning over Euclidean spaces is much harder than reasoning over arbitrary topological spaces.
|
2403.16368
|
Xiaoyu Liu
|
Quan Zhang, Xiaoyu Liu, Wei Li, Hanting Chen, Junchao Liu, Jie Hu,
Zhiwei Xiong, Chun Yuan, Yunhe Wang
|
Distilling Semantic Priors from SAM to Efficient Image Restoration
Models
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In image restoration (IR), leveraging semantic priors from segmentation
models has been a common approach to improve performance. The recent segment
anything model (SAM) has emerged as a powerful tool for extracting advanced
semantic priors to enhance IR tasks. However, the computational cost of SAM is
prohibitive for IR, compared to existing smaller IR models. The incorporation
of SAM for extracting semantic priors considerably hampers the model inference
efficiency. To address this issue, we propose a general framework to distill
SAM's semantic knowledge to boost exiting IR models without interfering with
their inference process. Specifically, our proposed framework consists of the
semantic priors fusion (SPF) scheme and the semantic priors distillation (SPD)
scheme. SPF fuses two kinds of information between the restored image predicted
by the original IR model and the semantic mask predicted by SAM for the refined
restored image. SPD leverages a self-distillation manner to distill the fused
semantic priors to boost the performance of original IR models. Additionally,
we design a semantic-guided relation (SGR) module for SPD, which ensures
semantic feature representation space consistency to fully distill the priors.
We demonstrate the effectiveness of our framework across multiple IR models and
tasks, including deraining, deblurring, and denoising.
|
[
{
"created": "Mon, 25 Mar 2024 02:17:20 GMT",
"version": "v1"
},
{
"created": "Tue, 2 Apr 2024 10:25:07 GMT",
"version": "v2"
}
] |
2024-04-03
|
[
[
"Zhang",
"Quan",
""
],
[
"Liu",
"Xiaoyu",
""
],
[
"Li",
"Wei",
""
],
[
"Chen",
"Hanting",
""
],
[
"Liu",
"Junchao",
""
],
[
"Hu",
"Jie",
""
],
[
"Xiong",
"Zhiwei",
""
],
[
"Yuan",
"Chun",
""
],
[
"Wang",
"Yunhe",
""
]
] |
In image restoration (IR), leveraging semantic priors from segmentation models has been a common approach to improve performance. The recent segment anything model (SAM) has emerged as a powerful tool for extracting advanced semantic priors to enhance IR tasks. However, the computational cost of SAM is prohibitive for IR, compared to existing smaller IR models. The incorporation of SAM for extracting semantic priors considerably hampers the model inference efficiency. To address this issue, we propose a general framework to distill SAM's semantic knowledge to boost exiting IR models without interfering with their inference process. Specifically, our proposed framework consists of the semantic priors fusion (SPF) scheme and the semantic priors distillation (SPD) scheme. SPF fuses two kinds of information between the restored image predicted by the original IR model and the semantic mask predicted by SAM for the refined restored image. SPD leverages a self-distillation manner to distill the fused semantic priors to boost the performance of original IR models. Additionally, we design a semantic-guided relation (SGR) module for SPD, which ensures semantic feature representation space consistency to fully distill the priors. We demonstrate the effectiveness of our framework across multiple IR models and tasks, including deraining, deblurring, and denoising.
|
2210.13371
|
Yuan Gao
|
Yuan Gao, Yukai Gong, Victor Paredes, Ayonga Hereid, Yan Gu
|
Time-Varying ALIP Model and Robust Foot-Placement Control for
Underactuated Bipedal Robot Walking on a Swaying Rigid Surface
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Controller design for bipedal walking on dynamic rigid surfaces (DRSes),
which are rigid surfaces moving in the inertial frame (e.g., ships and
airplanes), remains largely uninvestigated. This paper introduces a
hierarchical control approach that achieves stable underactuated bipedal robot
walking on a horizontally oscillating DRS. The highest layer of our approach is
a real-time motion planner that generates desired global behaviors (i.e., the
center of mass trajectories and footstep locations) by stabilizing a
reduced-order robot model. One key novelty of this layer is the derivation of
the reduced-order model by analytically extending the angular momentum based
linear inverted pendulum (ALIP) model from stationary to horizontally moving
surfaces. The other novelty is the development of a discrete-time
foot-placement controller that exponentially stabilizes the hybrid, linear,
time-varying ALIP model. The middle layer of the proposed approach is a walking
pattern generator that translates the desired global behaviors into the robot's
full-body reference trajectories for all directly actuated degrees of freedom.
The lowest layer is an input-output linearizing controller that exponentially
tracks those full-body reference trajectories based on the full-order, hybrid,
nonlinear robot dynamics. Simulations of planar underactuated bipedal walking
on a swaying DRS confirm that the proposed framework ensures the walking
stability under different DRS motions and gait types.
|
[
{
"created": "Mon, 24 Oct 2022 16:12:51 GMT",
"version": "v1"
},
{
"created": "Tue, 29 Nov 2022 19:34:39 GMT",
"version": "v2"
}
] |
2022-12-01
|
[
[
"Gao",
"Yuan",
""
],
[
"Gong",
"Yukai",
""
],
[
"Paredes",
"Victor",
""
],
[
"Hereid",
"Ayonga",
""
],
[
"Gu",
"Yan",
""
]
] |
Controller design for bipedal walking on dynamic rigid surfaces (DRSes), which are rigid surfaces moving in the inertial frame (e.g., ships and airplanes), remains largely uninvestigated. This paper introduces a hierarchical control approach that achieves stable underactuated bipedal robot walking on a horizontally oscillating DRS. The highest layer of our approach is a real-time motion planner that generates desired global behaviors (i.e., the center of mass trajectories and footstep locations) by stabilizing a reduced-order robot model. One key novelty of this layer is the derivation of the reduced-order model by analytically extending the angular momentum based linear inverted pendulum (ALIP) model from stationary to horizontally moving surfaces. The other novelty is the development of a discrete-time foot-placement controller that exponentially stabilizes the hybrid, linear, time-varying ALIP model. The middle layer of the proposed approach is a walking pattern generator that translates the desired global behaviors into the robot's full-body reference trajectories for all directly actuated degrees of freedom. The lowest layer is an input-output linearizing controller that exponentially tracks those full-body reference trajectories based on the full-order, hybrid, nonlinear robot dynamics. Simulations of planar underactuated bipedal walking on a swaying DRS confirm that the proposed framework ensures the walking stability under different DRS motions and gait types.
|
2201.11932
|
Shiyu Wang
|
Shiyu Wang, Xiaojie Guo, Liang Zhao
|
Deep Generative Model for Periodic Graphs
|
This paper has been accepted by NeurIPS 2022
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Periodic graphs are graphs consisting of repetitive local structures, such as
crystal nets and polygon mesh. Their generative modeling has great potential in
real-world applications such as material design and graphics synthesis.
Classical models either rely on domain-specific predefined generation
principles (e.g., in crystal net design), or follow geometry-based prescribed
rules. Recently, deep generative models has shown great promise in
automatically generating general graphs. However, their advancement into
periodic graphs have not been well explored due to several key challenges in 1)
maintaining graph periodicity; 2) disentangling local and global patterns; and
3) efficiency in learning repetitive patterns. To address them, this paper
proposes Periodical-Graph Disentangled Variational Auto-encoder (PGD-VAE), a
new deep generative models for periodic graphs that can automatically learn,
disentangle, and generate local and global graph patterns. Specifically, we
develop a new periodic graph encoder consisting of global-pattern encoder and
local-pattern encoder that ensures to disentangle the representation into
global and local semantics. We then propose a new periodic graph decoder
consisting of local structure decoder, neighborhood decoder, and global
structure decoder, as well as the assembler of their outputs that guarantees
periodicity. Moreover, we design a new model learning objective that helps
ensure the invariance of local-semantic representations for the graphs with the
same local structure. Comprehensive experimental evaluations have been
conducted to demonstrate the effectiveness of the proposed method. The code of
proposed PGD-VAE is availabe at https://github.com/shi-yu-wang/PGD-VAE.
|
[
{
"created": "Fri, 28 Jan 2022 04:56:28 GMT",
"version": "v1"
},
{
"created": "Fri, 30 Sep 2022 23:48:31 GMT",
"version": "v2"
},
{
"created": "Wed, 5 Oct 2022 17:48:08 GMT",
"version": "v3"
},
{
"created": "Thu, 6 Oct 2022 00:33:20 GMT",
"version": "v4"
}
] |
2022-10-07
|
[
[
"Wang",
"Shiyu",
""
],
[
"Guo",
"Xiaojie",
""
],
[
"Zhao",
"Liang",
""
]
] |
Periodic graphs are graphs consisting of repetitive local structures, such as crystal nets and polygon mesh. Their generative modeling has great potential in real-world applications such as material design and graphics synthesis. Classical models either rely on domain-specific predefined generation principles (e.g., in crystal net design), or follow geometry-based prescribed rules. Recently, deep generative models has shown great promise in automatically generating general graphs. However, their advancement into periodic graphs have not been well explored due to several key challenges in 1) maintaining graph periodicity; 2) disentangling local and global patterns; and 3) efficiency in learning repetitive patterns. To address them, this paper proposes Periodical-Graph Disentangled Variational Auto-encoder (PGD-VAE), a new deep generative models for periodic graphs that can automatically learn, disentangle, and generate local and global graph patterns. Specifically, we develop a new periodic graph encoder consisting of global-pattern encoder and local-pattern encoder that ensures to disentangle the representation into global and local semantics. We then propose a new periodic graph decoder consisting of local structure decoder, neighborhood decoder, and global structure decoder, as well as the assembler of their outputs that guarantees periodicity. Moreover, we design a new model learning objective that helps ensure the invariance of local-semantic representations for the graphs with the same local structure. Comprehensive experimental evaluations have been conducted to demonstrate the effectiveness of the proposed method. The code of proposed PGD-VAE is availabe at https://github.com/shi-yu-wang/PGD-VAE.
|
2202.09470
|
Max von Hippel
|
Maria Leonor Pacheco, Max von Hippel, Ben Weintraub, Dan Goldwasser,
Cristina Nita-Rotaru
|
Automated Attack Synthesis by Extracting Finite State Machines from
Protocol Specification Documents
|
To appear in IEEE Security and Privacy, 2022
| null | null | null |
cs.CR cs.CL cs.FL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Automated attack discovery techniques, such as attacker synthesis or
model-based fuzzing, provide powerful ways to ensure network protocols operate
correctly and securely. Such techniques, in general, require a formal
representation of the protocol, often in the form of a finite state machine
(FSM). Unfortunately, many protocols are only described in English prose, and
implementing even a simple network protocol as an FSM is time-consuming and
prone to subtle logical errors. Automatically extracting protocol FSMs from
documentation can significantly contribute to increased use of these techniques
and result in more robust and secure protocol implementations.
In this work we focus on attacker synthesis as a representative technique for
protocol security, and on RFCs as a representative format for protocol prose
description. Unlike other works that rely on rule-based approaches or use
off-the-shelf NLP tools directly, we suggest a data-driven approach for
extracting FSMs from RFC documents. Specifically, we use a hybrid approach
consisting of three key steps: (1) large-scale word-representation learning for
technical language, (2) focused zero-shot learning for mapping protocol text to
a protocol-independent information language, and (3) rule-based mapping from
protocol-independent information to a specific protocol FSM. We show the
generalizability of our FSM extraction by using the RFCs for six different
protocols: BGPv4, DCCP, LTP, PPTP, SCTP and TCP. We demonstrate how automated
extraction of an FSM from an RFC can be applied to the synthesis of attacks,
with TCP and DCCP as case-studies. Our approach shows that it is possible to
automate attacker synthesis against protocols by using textual specifications
such as RFCs.
|
[
{
"created": "Fri, 18 Feb 2022 23:27:29 GMT",
"version": "v1"
}
] |
2022-02-24
|
[
[
"Pacheco",
"Maria Leonor",
""
],
[
"von Hippel",
"Max",
""
],
[
"Weintraub",
"Ben",
""
],
[
"Goldwasser",
"Dan",
""
],
[
"Nita-Rotaru",
"Cristina",
""
]
] |
Automated attack discovery techniques, such as attacker synthesis or model-based fuzzing, provide powerful ways to ensure network protocols operate correctly and securely. Such techniques, in general, require a formal representation of the protocol, often in the form of a finite state machine (FSM). Unfortunately, many protocols are only described in English prose, and implementing even a simple network protocol as an FSM is time-consuming and prone to subtle logical errors. Automatically extracting protocol FSMs from documentation can significantly contribute to increased use of these techniques and result in more robust and secure protocol implementations. In this work we focus on attacker synthesis as a representative technique for protocol security, and on RFCs as a representative format for protocol prose description. Unlike other works that rely on rule-based approaches or use off-the-shelf NLP tools directly, we suggest a data-driven approach for extracting FSMs from RFC documents. Specifically, we use a hybrid approach consisting of three key steps: (1) large-scale word-representation learning for technical language, (2) focused zero-shot learning for mapping protocol text to a protocol-independent information language, and (3) rule-based mapping from protocol-independent information to a specific protocol FSM. We show the generalizability of our FSM extraction by using the RFCs for six different protocols: BGPv4, DCCP, LTP, PPTP, SCTP and TCP. We demonstrate how automated extraction of an FSM from an RFC can be applied to the synthesis of attacks, with TCP and DCCP as case-studies. Our approach shows that it is possible to automate attacker synthesis against protocols by using textual specifications such as RFCs.
|
cs/0605107
|
Daowen Qiu
|
Daowen Qiu, Fuchun Liu
|
Fuzzy Discrete Event Systems under Fuzzy Observability and a
test-algorithm
|
A further revised version to appear in IEEE Trans. Fuzzy Systems
|
IEEE Transactions on Fuzzy Systems, 2009, 17 (3): 578-589.
| null | null |
cs.LO
| null |
In order to more effectively cope with the real-world problems of vagueness,
impreciseness, and subjectivity, fuzzy discrete event systems (FDESs) were
proposed recently. Notably, FDESs have been applied to biomedical control for
HIV/AIDS treatment planning and sensory information processing for robotic
control. Qiu, Cao and Ying independently developed supervisory control theory
of FDESs. We note that the controllability of events in Qiu's work is fuzzy but
the observability of events is crisp, and, the observability of events in Cao
and Ying's work is also crisp although the controllability is not completely
crisp since the controllable events can be disabled with any degrees. Motivated
by the necessity to consider the situation that the events may be observed or
controlled with some membership degrees, in this paper, we establish the
supervisory control theory of FDESs with partial observations, in which both
the observability and controllability of events are fuzzy instead. We formalize
the notions of fuzzy controllability condition and fuzzy observability
condition. And Controllability and Observability Theorem of FDESs is set up in
a more generic framework. In particular, we present a detailed computing flow
to verify whether the controllability and observability conditions hold. Thus,
this result can decide the existence of supervisors. Also, we use this
computing method to check the existence of supervisors in the Controllability
and Observability Theorem of classical discrete event systems (DESs), which is
a new method and different from classical case. A number of examples are
elaborated on to illustrate the presented results.
|
[
{
"created": "Wed, 24 May 2006 15:41:00 GMT",
"version": "v1"
},
{
"created": "Tue, 24 Apr 2007 04:27:12 GMT",
"version": "v2"
},
{
"created": "Sun, 7 Oct 2007 02:32:38 GMT",
"version": "v3"
}
] |
2010-03-10
|
[
[
"Qiu",
"Daowen",
""
],
[
"Liu",
"Fuchun",
""
]
] |
In order to more effectively cope with the real-world problems of vagueness, impreciseness, and subjectivity, fuzzy discrete event systems (FDESs) were proposed recently. Notably, FDESs have been applied to biomedical control for HIV/AIDS treatment planning and sensory information processing for robotic control. Qiu, Cao and Ying independently developed supervisory control theory of FDESs. We note that the controllability of events in Qiu's work is fuzzy but the observability of events is crisp, and, the observability of events in Cao and Ying's work is also crisp although the controllability is not completely crisp since the controllable events can be disabled with any degrees. Motivated by the necessity to consider the situation that the events may be observed or controlled with some membership degrees, in this paper, we establish the supervisory control theory of FDESs with partial observations, in which both the observability and controllability of events are fuzzy instead. We formalize the notions of fuzzy controllability condition and fuzzy observability condition. And Controllability and Observability Theorem of FDESs is set up in a more generic framework. In particular, we present a detailed computing flow to verify whether the controllability and observability conditions hold. Thus, this result can decide the existence of supervisors. Also, we use this computing method to check the existence of supervisors in the Controllability and Observability Theorem of classical discrete event systems (DESs), which is a new method and different from classical case. A number of examples are elaborated on to illustrate the presented results.
|
1706.09976
|
Michael Skirpan
|
Michael Skirpan and Micha Gorelick
|
The Authority of "Fair" in Machine Learning
|
Presented as a talk at the 2017 Workshop on Fairness, Accountability,
and Transparency in Machine Learning (FAT/ML 2017)
| null | null | null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we argue for the adoption of a normative definition of
fairness within the machine learning community. After characterizing this
definition, we review the current literature of Fair ML in light of its
implications. We end by suggesting ways to incorporate a broader community and
generate further debate around how to decide what is fair in ML.
|
[
{
"created": "Thu, 29 Jun 2017 23:37:34 GMT",
"version": "v1"
},
{
"created": "Fri, 7 Jul 2017 17:54:48 GMT",
"version": "v2"
}
] |
2017-07-10
|
[
[
"Skirpan",
"Michael",
""
],
[
"Gorelick",
"Micha",
""
]
] |
In this paper, we argue for the adoption of a normative definition of fairness within the machine learning community. After characterizing this definition, we review the current literature of Fair ML in light of its implications. We end by suggesting ways to incorporate a broader community and generate further debate around how to decide what is fair in ML.
|
2311.14081
|
Hana Chockler
|
David A. Kelly, Hana Chockler, Daniel Kroening, Nathan Blake, Aditi
Ramaswamy, Melane Navaratnarajah, Aaditya Shivakumar
|
You Only Explain Once
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we propose a new black-box explainability algorithm and tool,
YO-ReX, for efficient explanation of the outputs of object detectors. The new
algorithm computes explanations for all objects detected in the image
simultaneously. Hence, compared to the baseline, the new algorithm reduces the
number of queries by a factor of 10X for the case of ten detected objects. The
speedup increases further with with the number of objects. Our experimental
results demonstrate that YO-ReX can explain the outputs of YOLO with a
negligible overhead over the running time of YOLO. We also demonstrate similar
results for explaining SSD and Faster R-CNN. The speedup is achieved by
avoiding backtracking by combining aggressive pruning with a causal analysis.
|
[
{
"created": "Thu, 23 Nov 2023 16:19:59 GMT",
"version": "v1"
}
] |
2023-11-27
|
[
[
"Kelly",
"David A.",
""
],
[
"Chockler",
"Hana",
""
],
[
"Kroening",
"Daniel",
""
],
[
"Blake",
"Nathan",
""
],
[
"Ramaswamy",
"Aditi",
""
],
[
"Navaratnarajah",
"Melane",
""
],
[
"Shivakumar",
"Aaditya",
""
]
] |
In this paper, we propose a new black-box explainability algorithm and tool, YO-ReX, for efficient explanation of the outputs of object detectors. The new algorithm computes explanations for all objects detected in the image simultaneously. Hence, compared to the baseline, the new algorithm reduces the number of queries by a factor of 10X for the case of ten detected objects. The speedup increases further with with the number of objects. Our experimental results demonstrate that YO-ReX can explain the outputs of YOLO with a negligible overhead over the running time of YOLO. We also demonstrate similar results for explaining SSD and Faster R-CNN. The speedup is achieved by avoiding backtracking by combining aggressive pruning with a causal analysis.
|
1908.02409
|
Anhong Guo
|
Anhong Guo, Ilter Canberk, Hannah Murphy, Andr\'es Monroy-Hern\'andez,
Rajan Vaish
|
Blocks: Collaborative and Persistent Augmented Reality Experiences
|
ACM UbiComp 2019
| null |
10.1145/3351241
| null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce Blocks, a mobile application that enables people to co-create AR
structures that persist in the physical environment. Using Blocks, end users
can collaborate synchronously or asynchronously, whether they are colocated or
remote. Additionally, the AR structures can be tied to a physical location or
can be accessed from anywhere. We evaluated how people used Blocks through a
series of lab and field deployment studies with over 160 participants, and
explored the interplay between two collaborative dimensions: space and time. We
found that participants preferred creating structures synchronously with
colocated collaborators. Additionally, they were most active when they created
structures that were not restricted by time or place. Unlike most of today's AR
experiences, which focus on content consumption, this work outlines new design
opportunities for persistent and collaborative AR experiences that empower
anyone to collaborate and create AR content.
|
[
{
"created": "Wed, 7 Aug 2019 00:49:35 GMT",
"version": "v1"
},
{
"created": "Wed, 14 Aug 2019 17:53:37 GMT",
"version": "v2"
}
] |
2019-08-15
|
[
[
"Guo",
"Anhong",
""
],
[
"Canberk",
"Ilter",
""
],
[
"Murphy",
"Hannah",
""
],
[
"Monroy-Hernández",
"Andrés",
""
],
[
"Vaish",
"Rajan",
""
]
] |
We introduce Blocks, a mobile application that enables people to co-create AR structures that persist in the physical environment. Using Blocks, end users can collaborate synchronously or asynchronously, whether they are colocated or remote. Additionally, the AR structures can be tied to a physical location or can be accessed from anywhere. We evaluated how people used Blocks through a series of lab and field deployment studies with over 160 participants, and explored the interplay between two collaborative dimensions: space and time. We found that participants preferred creating structures synchronously with colocated collaborators. Additionally, they were most active when they created structures that were not restricted by time or place. Unlike most of today's AR experiences, which focus on content consumption, this work outlines new design opportunities for persistent and collaborative AR experiences that empower anyone to collaborate and create AR content.
|
1804.00657
|
Yuval Bahat
|
Yuval Bahat and Gregory Shakhnarovich
|
Confidence from Invariance to Image Transformations
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We develop a technique for automatically detecting the classification errors
of a pre-trained visual classifier. Our method is agnostic to the form of the
classifier, requiring access only to classifier responses to a set of inputs.
We train a parametric binary classifier (error/correct) on a representation
derived from a set of classifier responses generated from multiple copies of
the same input, each subject to a different natural image transformation. Thus,
we establish a measure of confidence in classifier's decision by analyzing the
invariance of its decision under various transformations. In experiments with
multiple data sets (STL-10,CIFAR-100,ImageNet) and classifiers, we demonstrate
new state of the art for the error detection task. In addition, we apply our
technique to novelty detection scenarios, where we also demonstrate state of
the art results.
|
[
{
"created": "Mon, 2 Apr 2018 20:38:52 GMT",
"version": "v1"
}
] |
2018-04-04
|
[
[
"Bahat",
"Yuval",
""
],
[
"Shakhnarovich",
"Gregory",
""
]
] |
We develop a technique for automatically detecting the classification errors of a pre-trained visual classifier. Our method is agnostic to the form of the classifier, requiring access only to classifier responses to a set of inputs. We train a parametric binary classifier (error/correct) on a representation derived from a set of classifier responses generated from multiple copies of the same input, each subject to a different natural image transformation. Thus, we establish a measure of confidence in classifier's decision by analyzing the invariance of its decision under various transformations. In experiments with multiple data sets (STL-10,CIFAR-100,ImageNet) and classifiers, we demonstrate new state of the art for the error detection task. In addition, we apply our technique to novelty detection scenarios, where we also demonstrate state of the art results.
|
2102.07970
|
Justin Fu
|
Justin Fu and Sergey Levine
|
Offline Model-Based Optimization via Normalized Maximum Likelihood
Estimation
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work we consider data-driven optimization problems where one must
maximize a function given only queries at a fixed set of points. This problem
setting emerges in many domains where function evaluation is a complex and
expensive process, such as in the design of materials, vehicles, or neural
network architectures. Because the available data typically only covers a small
manifold of the possible space of inputs, a principal challenge is to be able
to construct algorithms that can reason about uncertainty and
out-of-distribution values, since a naive optimizer can easily exploit an
estimated model to return adversarial inputs. We propose to tackle this problem
by leveraging the normalized maximum-likelihood (NML) estimator, which provides
a principled approach to handling uncertainty and out-of-distribution inputs.
While in the standard formulation NML is intractable, we propose a tractable
approximation that allows us to scale our method to high-capacity neural
network models. We demonstrate that our method can effectively optimize
high-dimensional design problems in a variety of disciplines such as chemistry,
biology, and materials engineering.
|
[
{
"created": "Tue, 16 Feb 2021 06:04:27 GMT",
"version": "v1"
}
] |
2021-02-17
|
[
[
"Fu",
"Justin",
""
],
[
"Levine",
"Sergey",
""
]
] |
In this work we consider data-driven optimization problems where one must maximize a function given only queries at a fixed set of points. This problem setting emerges in many domains where function evaluation is a complex and expensive process, such as in the design of materials, vehicles, or neural network architectures. Because the available data typically only covers a small manifold of the possible space of inputs, a principal challenge is to be able to construct algorithms that can reason about uncertainty and out-of-distribution values, since a naive optimizer can easily exploit an estimated model to return adversarial inputs. We propose to tackle this problem by leveraging the normalized maximum-likelihood (NML) estimator, which provides a principled approach to handling uncertainty and out-of-distribution inputs. While in the standard formulation NML is intractable, we propose a tractable approximation that allows us to scale our method to high-capacity neural network models. We demonstrate that our method can effectively optimize high-dimensional design problems in a variety of disciplines such as chemistry, biology, and materials engineering.
|
1811.01852
|
Naeemul Hassan
|
Sameer Dhoju, Md Main Uddin Rony, Naeemul Hassan
|
Differences between Health Related News Articles from Reliable and
Unreliable Media
| null | null | null | null |
cs.SI cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this study, we examine a collection of health-related news articles
published by reliable and unreliable media outlets. Our analysis shows that
there are structural, topical, and semantic differences in the way reliable and
unreliable media outlets conduct health journalism. We argue that the findings
from this study will be useful for combating health disinformation problem.
|
[
{
"created": "Mon, 5 Nov 2018 17:17:18 GMT",
"version": "v1"
}
] |
2018-11-06
|
[
[
"Dhoju",
"Sameer",
""
],
[
"Rony",
"Md Main Uddin",
""
],
[
"Hassan",
"Naeemul",
""
]
] |
In this study, we examine a collection of health-related news articles published by reliable and unreliable media outlets. Our analysis shows that there are structural, topical, and semantic differences in the way reliable and unreliable media outlets conduct health journalism. We argue that the findings from this study will be useful for combating health disinformation problem.
|
1107.0998
|
Samuel Epstein
|
Samuel Epstein and Margrit Betke
|
An Information Theoretic Representation of Agent Dynamics as Set
Intersections
| null | null | null | null |
cs.IT cs.AI math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We represent agents as sets of strings. Each string encodes a potential
interaction with another agent or environment. We represent the total set of
dynamics between two agents as the intersection of their respective strings, we
prove complexity properties of player interactions using Algorithmic
Information Theory. We show how the proposed construction is compatible with
Universal Artificial Intelligence, in that the AIXI model can be seen as
universal with respect to interaction.
|
[
{
"created": "Tue, 5 Jul 2011 22:23:42 GMT",
"version": "v1"
}
] |
2015-03-19
|
[
[
"Epstein",
"Samuel",
""
],
[
"Betke",
"Margrit",
""
]
] |
We represent agents as sets of strings. Each string encodes a potential interaction with another agent or environment. We represent the total set of dynamics between two agents as the intersection of their respective strings, we prove complexity properties of player interactions using Algorithmic Information Theory. We show how the proposed construction is compatible with Universal Artificial Intelligence, in that the AIXI model can be seen as universal with respect to interaction.
|
2212.13544
|
Tinghao Zhang
|
Tinghao Zhang, Kwok-Yan Lam, Jun Zhao, Feng Li, Huimei Han, Norziana
Jamil
|
Enhancing Federated Learning with spectrum allocation optimization and
device selection
|
This paper is accepted by IEEE/ACM Transactions on Networking
| null | null | null |
cs.DC eess.SP
|
http://creativecommons.org/licenses/by/4.0/
|
Machine learning (ML) is a widely accepted means for supporting customized
services for mobile devices and applications. Federated Learning (FL), which is
a promising approach to implement machine learning while addressing data
privacy concerns, typically involves a large number of wireless mobile devices
to collect model training data. Under such circumstances, FL is expected to
meet stringent training latency requirements in the face of limited resources
such as demand for wireless bandwidth, power consumption, and computation
constraints of participating devices. Due to practical considerations, FL
selects a portion of devices to participate in the model training process at
each iteration. Therefore, the tasks of efficient resource management and
device selection will have a significant impact on the practical uses of FL. In
this paper, we propose a spectrum allocation optimization mechanism for
enhancing FL over a wireless mobile network. Specifically, the proposed
spectrum allocation optimization mechanism minimizes the time delay of FL while
considering the energy consumption of individual participating devices; thus
ensuring that all the participating devices have sufficient resources to train
their local models. In this connection, to ensure fast convergence of FL, a
robust device selection is also proposed to help FL reach convergence swiftly,
especially when the local datasets of the devices are not independent and
identically distributed (non-iid). Experimental results show that (1) the
proposed spectrum allocation optimization method optimizes time delay while
satisfying the individual energy constraints; (2) the proposed device selection
method enables FL to achieve the fastest convergence on non-iid datasets.
|
[
{
"created": "Tue, 27 Dec 2022 16:32:33 GMT",
"version": "v1"
}
] |
2022-12-29
|
[
[
"Zhang",
"Tinghao",
""
],
[
"Lam",
"Kwok-Yan",
""
],
[
"Zhao",
"Jun",
""
],
[
"Li",
"Feng",
""
],
[
"Han",
"Huimei",
""
],
[
"Jamil",
"Norziana",
""
]
] |
Machine learning (ML) is a widely accepted means for supporting customized services for mobile devices and applications. Federated Learning (FL), which is a promising approach to implement machine learning while addressing data privacy concerns, typically involves a large number of wireless mobile devices to collect model training data. Under such circumstances, FL is expected to meet stringent training latency requirements in the face of limited resources such as demand for wireless bandwidth, power consumption, and computation constraints of participating devices. Due to practical considerations, FL selects a portion of devices to participate in the model training process at each iteration. Therefore, the tasks of efficient resource management and device selection will have a significant impact on the practical uses of FL. In this paper, we propose a spectrum allocation optimization mechanism for enhancing FL over a wireless mobile network. Specifically, the proposed spectrum allocation optimization mechanism minimizes the time delay of FL while considering the energy consumption of individual participating devices; thus ensuring that all the participating devices have sufficient resources to train their local models. In this connection, to ensure fast convergence of FL, a robust device selection is also proposed to help FL reach convergence swiftly, especially when the local datasets of the devices are not independent and identically distributed (non-iid). Experimental results show that (1) the proposed spectrum allocation optimization method optimizes time delay while satisfying the individual energy constraints; (2) the proposed device selection method enables FL to achieve the fastest convergence on non-iid datasets.
|
2204.13384
|
Jan Philip Wahle
|
Jan Philip Wahle and Terry Ruas and Saif M. Mohammad and Bela Gipp
|
D3: A Massive Dataset of Scholarly Metadata for Analyzing the State of
Computer Science Research
| null |
LREC 2022
| null | null |
cs.DL cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
DBLP is the largest open-access repository of scientific articles on computer
science and provides metadata associated with publications, authors, and
venues. We retrieved more than 6 million publications from DBLP and extracted
pertinent metadata (e.g., abstracts, author affiliations, citations) from the
publication texts to create the DBLP Discovery Dataset (D3). D3 can be used to
identify trends in research activity, productivity, focus, bias, accessibility,
and impact of computer science research. We present an initial analysis focused
on the volume of computer science research (e.g., number of papers, authors,
research activity), trends in topics of interest, and citation patterns. Our
findings show that computer science is a growing research field (approx. 15%
annually), with an active and collaborative researcher community. While papers
in recent years present more bibliographical entries in comparison to previous
decades, the average number of citations has been declining. Investigating
papers' abstracts reveals that recent topic trends are clearly reflected in D3.
Finally, we list further applications of D3 and pose supplemental research
questions. The D3 dataset, our findings, and source code are publicly available
for research purposes.
|
[
{
"created": "Thu, 28 Apr 2022 09:59:52 GMT",
"version": "v1"
},
{
"created": "Mon, 12 Sep 2022 15:07:17 GMT",
"version": "v2"
},
{
"created": "Thu, 3 Nov 2022 15:03:09 GMT",
"version": "v3"
},
{
"created": "Thu, 10 Nov 2022 10:55:39 GMT",
"version": "v4"
}
] |
2024-02-09
|
[
[
"Wahle",
"Jan Philip",
""
],
[
"Ruas",
"Terry",
""
],
[
"Mohammad",
"Saif M.",
""
],
[
"Gipp",
"Bela",
""
]
] |
DBLP is the largest open-access repository of scientific articles on computer science and provides metadata associated with publications, authors, and venues. We retrieved more than 6 million publications from DBLP and extracted pertinent metadata (e.g., abstracts, author affiliations, citations) from the publication texts to create the DBLP Discovery Dataset (D3). D3 can be used to identify trends in research activity, productivity, focus, bias, accessibility, and impact of computer science research. We present an initial analysis focused on the volume of computer science research (e.g., number of papers, authors, research activity), trends in topics of interest, and citation patterns. Our findings show that computer science is a growing research field (approx. 15% annually), with an active and collaborative researcher community. While papers in recent years present more bibliographical entries in comparison to previous decades, the average number of citations has been declining. Investigating papers' abstracts reveals that recent topic trends are clearly reflected in D3. Finally, we list further applications of D3 and pose supplemental research questions. The D3 dataset, our findings, and source code are publicly available for research purposes.
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.