id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1909.08092 | Hooman Alavizadeh | Jin-Hee Cho, Dilli P. Sharma, Hooman Alavizadeh, Seunghyun Yoon, Noam
Ben-Asher, Terrence J. Moore, Dong Seong Kim, Hyuk Lim, Frederica F. Nelson | Toward Proactive, Adaptive Defense: A Survey on Moving Target Defense | 36 pages, 15 figures | null | null | null | cs.NI cs.GT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reactive defense mechanisms, such as intrusion detection systems, have made
significant efforts to secure a system or network for the last several decades.
However, the nature of reactive security mechanisms has limitations because
potential attackers cannot be prevented in advance. We are facing a reality
with the proliferation of persistent, advanced, intelligent attacks while
defenders are often way behind attackers in taking appropriate actions to
thwart potential attackers. The concept of moving target defense (MTD) has
emerged as a proactive defense mechanism aiming to prevent attacks. In this
work, we conducted a comprehensive, in-depth survey to discuss the following
aspects of MTD: key roles, design principles, classifications, common attacks,
key methodologies, important algorithms, metrics, evaluation methods, and
application domains. We discuss the pros and cons of all aspects of MTD
surveyed in this work. Lastly, we highlight insights and lessons learned from
this study and suggest future work directions. The aim of this paper is to
provide the overall trends of MTD research in terms of critical aspects of
defense systems for researchers who seek for developing proactive, adaptive MTD
mechanisms.
| [
{
"created": "Thu, 12 Sep 2019 14:14:01 GMT",
"version": "v1"
}
] | 2019-09-19 | [
[
"Cho",
"Jin-Hee",
""
],
[
"Sharma",
"Dilli P.",
""
],
[
"Alavizadeh",
"Hooman",
""
],
[
"Yoon",
"Seunghyun",
""
],
[
"Ben-Asher",
"Noam",
""
],
[
"Moore",
"Terrence J.",
""
],
[
"Kim",
"Dong Seong",
""
],
[
"Lim",
"Hyuk",
""
],
[
"Nelson",
"Frederica F.",
""
]
] | Reactive defense mechanisms, such as intrusion detection systems, have made significant efforts to secure a system or network for the last several decades. However, the nature of reactive security mechanisms has limitations because potential attackers cannot be prevented in advance. We are facing a reality with the proliferation of persistent, advanced, intelligent attacks while defenders are often way behind attackers in taking appropriate actions to thwart potential attackers. The concept of moving target defense (MTD) has emerged as a proactive defense mechanism aiming to prevent attacks. In this work, we conducted a comprehensive, in-depth survey to discuss the following aspects of MTD: key roles, design principles, classifications, common attacks, key methodologies, important algorithms, metrics, evaluation methods, and application domains. We discuss the pros and cons of all aspects of MTD surveyed in this work. Lastly, we highlight insights and lessons learned from this study and suggest future work directions. The aim of this paper is to provide the overall trends of MTD research in terms of critical aspects of defense systems for researchers who seek for developing proactive, adaptive MTD mechanisms. |
2101.00235 | Yuting Zhan | Yuting Zhan, Hamed Haddadi | MoSen: Activity Modelling in Multiple-Occupancy Smart Homes | null | null | null | null | cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Smart home solutions increasingly rely on a variety of sensors for behavioral
analytics and activity recognition to provide context-aware applications and
personalized care. Optimizing the sensor network is one of the most important
approaches to ensure classification accuracy and the system's efficiency.
However, the trade-off between the cost and performance is often a challenge in
real deployments, particularly for multiple-occupancy smart homes or care
homes.
In this paper, using real indoor activity and mobility traces, floor plans,
and synthetic multi-occupancy behavior models, we evaluate several
multi-occupancy household scenarios with 2-5 residents. We explore and quantify
the trade-offs between the cost of sensor deployments and expected labeling
accuracy in different scenarios. Our evaluation across different scenarios show
that the performance of the desired context-aware task is affected by different
localization resolutions, the number of residents, the number of sensors, and
varying sensor deployments. To aid in accelerating the adoption of practical
sensor-based activity recognition technology, we design MoSen, a framework to
simulate the interaction dynamics between sensor-based environments and
multiple residents. By evaluating the factors that affect the performance of
the desired sensor network, we provide a sensor selection strategy and design
metrics for sensor layout in real environments. Using our selection strategy in
a 5-person scenario case study, we demonstrate that MoSen can significantly
improve overall system performance without increasing the deployment costs.
| [
{
"created": "Fri, 1 Jan 2021 13:53:36 GMT",
"version": "v1"
}
] | 2021-01-05 | [
[
"Zhan",
"Yuting",
""
],
[
"Haddadi",
"Hamed",
""
]
] | Smart home solutions increasingly rely on a variety of sensors for behavioral analytics and activity recognition to provide context-aware applications and personalized care. Optimizing the sensor network is one of the most important approaches to ensure classification accuracy and the system's efficiency. However, the trade-off between the cost and performance is often a challenge in real deployments, particularly for multiple-occupancy smart homes or care homes. In this paper, using real indoor activity and mobility traces, floor plans, and synthetic multi-occupancy behavior models, we evaluate several multi-occupancy household scenarios with 2-5 residents. We explore and quantify the trade-offs between the cost of sensor deployments and expected labeling accuracy in different scenarios. Our evaluation across different scenarios show that the performance of the desired context-aware task is affected by different localization resolutions, the number of residents, the number of sensors, and varying sensor deployments. To aid in accelerating the adoption of practical sensor-based activity recognition technology, we design MoSen, a framework to simulate the interaction dynamics between sensor-based environments and multiple residents. By evaluating the factors that affect the performance of the desired sensor network, we provide a sensor selection strategy and design metrics for sensor layout in real environments. Using our selection strategy in a 5-person scenario case study, we demonstrate that MoSen can significantly improve overall system performance without increasing the deployment costs. |
2202.11857 | Bastien Rivier | Arun Kumar Das, Sandip Das, Guilherme D. da Fonseca, Yan Gerard,
Bastien Rivier | Complexity Results on Untangling Red-Blue Matchings | 28 pages, 27 figures, accepted at EuroCG 2022, at CORE 2022 (ICALP
Workshop), at LATIN 2022, and at CGTA 2022 (EuroCG 2022 special issue) | null | null | null | cs.CG | http://creativecommons.org/licenses/by/4.0/ | Given a matching between n red points and n blue points by line segments in
the plane, we consider the problem of obtaining a crossing-free matching
through flip operations that replace two crossing segments by two non-crossing
ones. We first show that (i) it is NP-hard to alpha-approximate the shortest
flip sequence, for any constant alpha. Second, we show that when the red points
are colinear, (ii) given a matching, a flip sequence of length at most n(n-1)/2
always exists, and (iii) the number of flips in any sequence never exceeds
(n(n-1)/2) (n+4)/6. Finally, we present (iv) a lower bounding flip sequence
with roughly 1.5 n(n-1)/2 flips, which shows that the n(n-1)/2 flips attained
in the convex case are not the maximum, and (v) a convex matching from which
any flip sequence has roughly 1.5 n flips. The last four results, based on
novel analyses, improve the constants of state-of-the-art bounds.
| [
{
"created": "Thu, 24 Feb 2022 01:31:32 GMT",
"version": "v1"
},
{
"created": "Sun, 13 Mar 2022 18:10:31 GMT",
"version": "v2"
},
{
"created": "Sat, 4 Jun 2022 21:57:14 GMT",
"version": "v3"
},
{
"created": "Mon, 15 Aug 2022 17:29:35 GMT",
"version": "v4"
},
{
"created": "Wed, 17 Aug 2022 08:35:18 GMT",
"version": "v5"
},
{
"created": "Tue, 22 Nov 2022 21:19:06 GMT",
"version": "v6"
}
] | 2022-11-24 | [
[
"Das",
"Arun Kumar",
""
],
[
"Das",
"Sandip",
""
],
[
"da Fonseca",
"Guilherme D.",
""
],
[
"Gerard",
"Yan",
""
],
[
"Rivier",
"Bastien",
""
]
] | Given a matching between n red points and n blue points by line segments in the plane, we consider the problem of obtaining a crossing-free matching through flip operations that replace two crossing segments by two non-crossing ones. We first show that (i) it is NP-hard to alpha-approximate the shortest flip sequence, for any constant alpha. Second, we show that when the red points are colinear, (ii) given a matching, a flip sequence of length at most n(n-1)/2 always exists, and (iii) the number of flips in any sequence never exceeds (n(n-1)/2) (n+4)/6. Finally, we present (iv) a lower bounding flip sequence with roughly 1.5 n(n-1)/2 flips, which shows that the n(n-1)/2 flips attained in the convex case are not the maximum, and (v) a convex matching from which any flip sequence has roughly 1.5 n flips. The last four results, based on novel analyses, improve the constants of state-of-the-art bounds. |
1509.05137 | Juan Liu | Juan Liu, Wei Chen, and Khaled B. Letaief | Joint Channel and Queue Aware Scheduling for Wireless Links with
Multiple Fading States | conference version | null | null | null | cs.IT cs.PF math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we address the delay optimal scheduling problem for wireless
transmission with fixed modulation over multi-state fading channels. We propose
a stochastic scheduling policy which schedules the source to transmit with
probability jointly based on the buffer and channel states, with an average
power constraint at the transmitter. Our objective is to minimize the average
queueing delay by choosing the optimal transmission probabilities. Using Markov
chain modeling, we formulate a power-constrained delay minimization problem,
and then transform it into a Linear Programming (LP) one. By analyzing its
property, we can derive the optimal threshold-based scheduling policy together
with the corresponding transmission probabilities. Our theoretical analysis is
corroborated by simulation results.
| [
{
"created": "Thu, 17 Sep 2015 06:05:45 GMT",
"version": "v1"
}
] | 2015-09-18 | [
[
"Liu",
"Juan",
""
],
[
"Chen",
"Wei",
""
],
[
"Letaief",
"Khaled B.",
""
]
] | In this work, we address the delay optimal scheduling problem for wireless transmission with fixed modulation over multi-state fading channels. We propose a stochastic scheduling policy which schedules the source to transmit with probability jointly based on the buffer and channel states, with an average power constraint at the transmitter. Our objective is to minimize the average queueing delay by choosing the optimal transmission probabilities. Using Markov chain modeling, we formulate a power-constrained delay minimization problem, and then transform it into a Linear Programming (LP) one. By analyzing its property, we can derive the optimal threshold-based scheduling policy together with the corresponding transmission probabilities. Our theoretical analysis is corroborated by simulation results. |
1902.03940 | Yury Dvorkin | Jip Kim and Yury Dvorkin | A P2P-dominant Distribution System Architecture | null | null | 10.1109/TPWRS.2019.2961330 | null | cs.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Peer-to-peer interactions between small-scale energy resources exploit
distribution network infrastructure as an electricity carrier, but remain
financially unaccountable to electric power utilities. This status-quo raises
multiple challenges. First, peer-to-peer energy trading reduces the portion of
electricity supplied to end-customers by utilities and their revenue streams.
Second, utilities must ensure that peer-to-peer transactions comply with
distribution network limits. This paper proposes a peer-to-peer energy trading
architecture, in two configurations, that couples peer-to-peer interactions and
distribution network operations. The first configuration assumes that these
interactions are settled by the utility in a centralized manner, while the
second one is peer-centric and does not involve the utility. Both
configurations use distribution locational marginal prices to compute network
usage charges that peers must pay to the utility for using the distribution
network.
| [
{
"created": "Mon, 11 Feb 2019 15:19:24 GMT",
"version": "v1"
},
{
"created": "Thu, 25 Apr 2019 14:05:44 GMT",
"version": "v2"
},
{
"created": "Wed, 23 Oct 2019 19:45:23 GMT",
"version": "v3"
},
{
"created": "Fri, 20 Dec 2019 16:35:51 GMT",
"version": "v4"
}
] | 2019-12-23 | [
[
"Kim",
"Jip",
""
],
[
"Dvorkin",
"Yury",
""
]
] | Peer-to-peer interactions between small-scale energy resources exploit distribution network infrastructure as an electricity carrier, but remain financially unaccountable to electric power utilities. This status-quo raises multiple challenges. First, peer-to-peer energy trading reduces the portion of electricity supplied to end-customers by utilities and their revenue streams. Second, utilities must ensure that peer-to-peer transactions comply with distribution network limits. This paper proposes a peer-to-peer energy trading architecture, in two configurations, that couples peer-to-peer interactions and distribution network operations. The first configuration assumes that these interactions are settled by the utility in a centralized manner, while the second one is peer-centric and does not involve the utility. Both configurations use distribution locational marginal prices to compute network usage charges that peers must pay to the utility for using the distribution network. |
1410.3506 | Hiroki Sayama | Hiroki Sayama and Roberta Sinatra | Social Diffusion and Global Drift on Networks | 7 pages, 3 figures; to appear in Phys. Rev. E | Physical Review E, 91, 032809, 2015 | 10.1103/PhysRevE.91.032809 | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study a mathematical model of social diffusion on a symmetric weighted
network where individual nodes' states gradually assimilate to local social
norms made by their neighbors' average states. Unlike physical diffusion, this
process is not state conservational and thus the global state of the network
(i.e., sum of node states) will drift. The asymptotic average node state will
be the average of initial node states weighted by their strengths. Here we show
that, while the global state is not conserved in this process, the inner
product of strength and state vectors is conserved instead, and perfect
positive correlation between node states and local averages of their
self/neighbor strength ratios always results in upward (or at least neutral)
global drift. We also show that the strength assortativity negatively affects
the speed of homogenization. Based on these findings, we propose an adaptive
link weight adjustment method to achieve the highest upward global drift by
increasing the strength-state correlation. The effectiveness of the method was
confirmed through numerical simulations and implications for real-world social
applications are discussed.
| [
{
"created": "Mon, 13 Oct 2014 20:28:17 GMT",
"version": "v1"
},
{
"created": "Wed, 4 Mar 2015 22:19:35 GMT",
"version": "v2"
}
] | 2017-05-29 | [
[
"Sayama",
"Hiroki",
""
],
[
"Sinatra",
"Roberta",
""
]
] | We study a mathematical model of social diffusion on a symmetric weighted network where individual nodes' states gradually assimilate to local social norms made by their neighbors' average states. Unlike physical diffusion, this process is not state conservational and thus the global state of the network (i.e., sum of node states) will drift. The asymptotic average node state will be the average of initial node states weighted by their strengths. Here we show that, while the global state is not conserved in this process, the inner product of strength and state vectors is conserved instead, and perfect positive correlation between node states and local averages of their self/neighbor strength ratios always results in upward (or at least neutral) global drift. We also show that the strength assortativity negatively affects the speed of homogenization. Based on these findings, we propose an adaptive link weight adjustment method to achieve the highest upward global drift by increasing the strength-state correlation. The effectiveness of the method was confirmed through numerical simulations and implications for real-world social applications are discussed. |
2202.07167 | Miguel A. Mosteiro | Dariusz R. Kowalski and Miguel A. Mosteiro | Efficient Distributed Computations in Anonymous Dynamic Congested
Systems with Opportunistic Connectivity | 28 pages | null | null | null | cs.DC | http://creativecommons.org/licenses/by/4.0/ | In this work we address the question of efficiency of distributed computing
in anonymous, congested and highly dynamic and not-always-connected
networks/systems. More precisely, the system consists of an unknown number of
anonymous nodes with congestion on links and local computation. Links can
change arbitrarily from round to round, with only limitation that the union of
any T consecutive networks must form a temporarily connected (multi-)graph on
all nodes (knowledge of T is the only information the nodes require, otherwise
the communication would not be feasible). Nodes do not have any IDs, only some
number l of them have a bit distinguishing them from nodes without such a bit.
In each round a node can send and receive messages from its current neighbors.
Links and nodes are congested, in the sense that the length of messages and
local cache memory for local computation is (asymptotically) logarithmic.
All-to-all communication is a fundamental principle in distributed computing
- it assumes that each node has an input message to be delivered to all other
nodes. Without loss of generality, the size of each input message is
logarithmic to fit in the link and node congestion assumption; otherwise, they
could be split in logarithmic batches and considered one-by-one. Because of
anonymity, each node needs to receive only a set of all input messages, each
accompanied by a number of initiating nodes (message multiplicity). We prove
that this task can be done in time polynomial in the (initially unknown) number
of nodes n and in the lower bound on the isoperimetric numbers of dynamically
evolving graphs. This allows to efficiently emulate a popular Congested Clique
model on top of Anonymous Dynamic Congested Systems (ADCS) with Opportunistic
Connectivity, even if the number of nodes may arbitrarily change in the
beginning of emulation.
| [
{
"created": "Tue, 15 Feb 2022 03:32:16 GMT",
"version": "v1"
}
] | 2022-02-16 | [
[
"Kowalski",
"Dariusz R.",
""
],
[
"Mosteiro",
"Miguel A.",
""
]
] | In this work we address the question of efficiency of distributed computing in anonymous, congested and highly dynamic and not-always-connected networks/systems. More precisely, the system consists of an unknown number of anonymous nodes with congestion on links and local computation. Links can change arbitrarily from round to round, with only limitation that the union of any T consecutive networks must form a temporarily connected (multi-)graph on all nodes (knowledge of T is the only information the nodes require, otherwise the communication would not be feasible). Nodes do not have any IDs, only some number l of them have a bit distinguishing them from nodes without such a bit. In each round a node can send and receive messages from its current neighbors. Links and nodes are congested, in the sense that the length of messages and local cache memory for local computation is (asymptotically) logarithmic. All-to-all communication is a fundamental principle in distributed computing - it assumes that each node has an input message to be delivered to all other nodes. Without loss of generality, the size of each input message is logarithmic to fit in the link and node congestion assumption; otherwise, they could be split in logarithmic batches and considered one-by-one. Because of anonymity, each node needs to receive only a set of all input messages, each accompanied by a number of initiating nodes (message multiplicity). We prove that this task can be done in time polynomial in the (initially unknown) number of nodes n and in the lower bound on the isoperimetric numbers of dynamically evolving graphs. This allows to efficiently emulate a popular Congested Clique model on top of Anonymous Dynamic Congested Systems (ADCS) with Opportunistic Connectivity, even if the number of nodes may arbitrarily change in the beginning of emulation. |
2310.07786 | Zheqing Zhu | Zheqing Zhu, Yueyang Liu, Xu Kuang, Benjamin Van Roy | Non-Stationary Contextual Bandit Learning via Neural Predictive Ensemble
Sampling | null | null | null | null | cs.LG cs.IR | http://creativecommons.org/licenses/by/4.0/ | Real-world applications of contextual bandits often exhibit non-stationarity
due to seasonality, serendipity, and evolving social trends. While a number of
non-stationary contextual bandit learning algorithms have been proposed in the
literature, they excessively explore due to a lack of prioritization for
information of enduring value, or are designed in ways that do not scale in
modern applications with high-dimensional user-specific features and large
action set, or both. In this paper, we introduce a novel non-stationary
contextual bandit algorithm that addresses these concerns. It combines a
scalable, deep-neural-network-based architecture with a carefully designed
exploration mechanism that strategically prioritizes collecting information
with the most lasting value in a non-stationary environment. Through empirical
evaluations on two real-world recommendation datasets, which exhibit pronounced
non-stationarity, we demonstrate that our approach significantly outperforms
the state-of-the-art baselines.
| [
{
"created": "Wed, 11 Oct 2023 18:15:55 GMT",
"version": "v1"
},
{
"created": "Sat, 14 Oct 2023 20:10:12 GMT",
"version": "v2"
}
] | 2023-10-17 | [
[
"Zhu",
"Zheqing",
""
],
[
"Liu",
"Yueyang",
""
],
[
"Kuang",
"Xu",
""
],
[
"Van Roy",
"Benjamin",
""
]
] | Real-world applications of contextual bandits often exhibit non-stationarity due to seasonality, serendipity, and evolving social trends. While a number of non-stationary contextual bandit learning algorithms have been proposed in the literature, they excessively explore due to a lack of prioritization for information of enduring value, or are designed in ways that do not scale in modern applications with high-dimensional user-specific features and large action set, or both. In this paper, we introduce a novel non-stationary contextual bandit algorithm that addresses these concerns. It combines a scalable, deep-neural-network-based architecture with a carefully designed exploration mechanism that strategically prioritizes collecting information with the most lasting value in a non-stationary environment. Through empirical evaluations on two real-world recommendation datasets, which exhibit pronounced non-stationarity, we demonstrate that our approach significantly outperforms the state-of-the-art baselines. |
1906.05194 | Todd Murphey | Ian Abraham and Todd D. Murphey | Active Learning of Dynamics for Data-Driven Control Using Koopman
Operators | 14 pages, In Press | IEEE Transactions on Robotics, 2019 | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents an active learning strategy for robotic systems that
takes into account task information, enables fast learning, and allows control
to be readily synthesized by taking advantage of the Koopman operator
representation. We first motivate the use of representing nonlinear systems as
linear Koopman operator systems by illustrating the improved model-based
control performance with an actuated Van der Pol system. Information-theoretic
methods are then applied to the Koopman operator formulation of dynamical
systems where we derive a controller for active learning of robot dynamics. The
active learning controller is shown to increase the rate of information about
the Koopman operator. In addition, our active learning controller can readily
incorporate policies built on the Koopman dynamics, enabling the benefits of
fast active learning and improved control. Results using a quadcopter
illustrate single-execution active learning and stabilization capabilities
during free-fall. The results for active learning are extended for automating
Koopman observables and we implement our method on real robotic systems.
| [
{
"created": "Wed, 12 Jun 2019 15:07:12 GMT",
"version": "v1"
}
] | 2019-06-13 | [
[
"Abraham",
"Ian",
""
],
[
"Murphey",
"Todd D.",
""
]
] | This paper presents an active learning strategy for robotic systems that takes into account task information, enables fast learning, and allows control to be readily synthesized by taking advantage of the Koopman operator representation. We first motivate the use of representing nonlinear systems as linear Koopman operator systems by illustrating the improved model-based control performance with an actuated Van der Pol system. Information-theoretic methods are then applied to the Koopman operator formulation of dynamical systems where we derive a controller for active learning of robot dynamics. The active learning controller is shown to increase the rate of information about the Koopman operator. In addition, our active learning controller can readily incorporate policies built on the Koopman dynamics, enabling the benefits of fast active learning and improved control. Results using a quadcopter illustrate single-execution active learning and stabilization capabilities during free-fall. The results for active learning are extended for automating Koopman observables and we implement our method on real robotic systems. |
2204.10422 | Giuseppe Abrami | Giuseppe Abrami, Mevl\"ut Bagci, Leon Hammerla, Alexander Mehler | German Parliamentary Corpus (GerParCor) | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Parliamentary debates represent a large and partly unexploited treasure trove
of publicly accessible texts. In the German-speaking area, there is a certain
deficit of uniformly accessible and annotated corpora covering all
German-speaking parliaments at the national and federal level. To address this
gap, we introduce the German Parliament Corpus (GerParCor). GerParCor is a
genre-specific corpus of (predominantly historical) German-language
parliamentary protocols from three centuries and four countries, including
state and federal level data. In addition, GerParCor contains conversions of
scanned protocols and, in particular, of protocols in Fraktur converted via an
OCR process based on Tesseract. All protocols were preprocessed by means of the
NLP pipeline of spaCy3 and automatically annotated with metadata regarding
their session date. GerParCor is made available in the XMI format of the UIMA
project. In this way, GerParCor can be used as a large corpus of historical
texts in the field of political communication for various tasks in NLP.
| [
{
"created": "Thu, 21 Apr 2022 22:06:55 GMT",
"version": "v1"
}
] | 2022-04-25 | [
[
"Abrami",
"Giuseppe",
""
],
[
"Bagci",
"Mevlüt",
""
],
[
"Hammerla",
"Leon",
""
],
[
"Mehler",
"Alexander",
""
]
] | Parliamentary debates represent a large and partly unexploited treasure trove of publicly accessible texts. In the German-speaking area, there is a certain deficit of uniformly accessible and annotated corpora covering all German-speaking parliaments at the national and federal level. To address this gap, we introduce the German Parliament Corpus (GerParCor). GerParCor is a genre-specific corpus of (predominantly historical) German-language parliamentary protocols from three centuries and four countries, including state and federal level data. In addition, GerParCor contains conversions of scanned protocols and, in particular, of protocols in Fraktur converted via an OCR process based on Tesseract. All protocols were preprocessed by means of the NLP pipeline of spaCy3 and automatically annotated with metadata regarding their session date. GerParCor is made available in the XMI format of the UIMA project. In this way, GerParCor can be used as a large corpus of historical texts in the field of political communication for various tasks in NLP. |
1803.10358 | Ervin Teng | Ervin Teng, Rui Huang, Bob Iannucci | ClickBAIT-v2: Training an Object Detector in Real-Time | 8 pages, 13 figures. For ClickBAIT-v1, see arXiv:1709.05021 | null | null | null | cs.CV cs.LG cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Modern deep convolutional neural networks (CNNs) for image classification and
object detection are often trained offline on large static datasets. Some
applications, however, will require training in real-time on live video streams
with a human-in-the-loop. We refer to this class of problem as time-ordered
online training (ToOT). These problems will require a consideration of not only
the quantity of incoming training data, but the human effort required to
annotate and use it. We demonstrate and evaluate a system tailored to training
an object detector on a live video stream with minimal input from a human
operator. We show that we can obtain bounding box annotation from
weakly-supervised single-point clicks through interactive segmentation.
Furthermore, by exploiting the time-ordered nature of the video stream through
object tracking, we can increase the average training benefit of human
interactions by 3-4 times.
| [
{
"created": "Tue, 27 Mar 2018 23:30:08 GMT",
"version": "v1"
}
] | 2018-03-29 | [
[
"Teng",
"Ervin",
""
],
[
"Huang",
"Rui",
""
],
[
"Iannucci",
"Bob",
""
]
] | Modern deep convolutional neural networks (CNNs) for image classification and object detection are often trained offline on large static datasets. Some applications, however, will require training in real-time on live video streams with a human-in-the-loop. We refer to this class of problem as time-ordered online training (ToOT). These problems will require a consideration of not only the quantity of incoming training data, but the human effort required to annotate and use it. We demonstrate and evaluate a system tailored to training an object detector on a live video stream with minimal input from a human operator. We show that we can obtain bounding box annotation from weakly-supervised single-point clicks through interactive segmentation. Furthermore, by exploiting the time-ordered nature of the video stream through object tracking, we can increase the average training benefit of human interactions by 3-4 times. |
1407.3636 | Ana Mestrovic | Sabina \v{S}i\v{s}ovi\'c, Sanda Martin\v{c}i\'c-Ip\v{s}i\'c and Ana
Me\v{s}trovi\'c | Toward Network-based Keyword Extraction from Multitopic Web Documents | 10 pages | null | null | null | cs.CL cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we analyse the selectivity measure calculated from the complex
network in the task of the automatic keyword extraction. Texts, collected from
different web sources (portals, forums), are represented as directed and
weighted co-occurrence complex networks of words. Words are nodes and links are
established between two nodes if they are directly co-occurring within the
sentence. We test different centrality measures for ranking nodes - keyword
candidates. The promising results are achieved using the selectivity measure.
Then we propose an approach which enables extracting word pairs according to
the values of the in/out selectivity and weight measures combined with
filtering.
| [
{
"created": "Mon, 14 Jul 2014 13:22:36 GMT",
"version": "v1"
}
] | 2014-07-15 | [
[
"Šišović",
"Sabina",
""
],
[
"Martinčić-Ipšić",
"Sanda",
""
],
[
"Meštrović",
"Ana",
""
]
] | In this paper we analyse the selectivity measure calculated from the complex network in the task of the automatic keyword extraction. Texts, collected from different web sources (portals, forums), are represented as directed and weighted co-occurrence complex networks of words. Words are nodes and links are established between two nodes if they are directly co-occurring within the sentence. We test different centrality measures for ranking nodes - keyword candidates. The promising results are achieved using the selectivity measure. Then we propose an approach which enables extracting word pairs according to the values of the in/out selectivity and weight measures combined with filtering. |
1902.06965 | Dmitry Ivanov | Dmitry Ivanov | DEDPUL: Difference-of-Estimated-Densities-based Positive-Unlabeled
Learning | Implementation of DEDPUL and experimental data are available at
https://github.com/dimonenka/DEDPUL | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Positive-Unlabeled (PU) learning is an analog to supervised binary
classification for the case when only the positive sample is clean, while the
negative sample is contaminated with latent instances of positive class and
hence can be considered as an unlabeled mixture. The objectives are to classify
the unlabeled sample and train an unbiased PN classifier, which generally
requires to identify the mixing proportions of positives and negatives first.
Recently, unbiased risk estimation framework has achieved state-of-the-art
performance in PU learning. This approach, however, exhibits two major
bottlenecks. First, the mixing proportions are assumed to be identified, i.e.
known in the domain or estimated with additional methods. Second, the approach
relies on the classifier being a neural network. In this paper, we propose
DEDPUL, a method that solves PU Learning without the aforementioned issues. The
mechanism behind DEDPUL is to apply a computationally cheap post-processing
procedure to the predictions of any classifier trained to distinguish positive
and unlabeled data. Instead of assuming the proportions to be identified,
DEDPUL estimates them alongside with classifying unlabeled sample. Experiments
show that DEDPUL outperforms the current state-of-the-art in both proportion
estimation and PU Classification.
| [
{
"created": "Tue, 19 Feb 2019 09:30:53 GMT",
"version": "v1"
},
{
"created": "Wed, 13 Mar 2019 11:49:03 GMT",
"version": "v2"
},
{
"created": "Mon, 27 May 2019 14:55:22 GMT",
"version": "v3"
},
{
"created": "Thu, 21 Nov 2019 00:11:21 GMT",
"version": "v4"
},
{
"created": "Sun, 7 Jun 2020 13:40:20 GMT",
"version": "v5"
}
] | 2020-06-09 | [
[
"Ivanov",
"Dmitry",
""
]
] | Positive-Unlabeled (PU) learning is an analog to supervised binary classification for the case when only the positive sample is clean, while the negative sample is contaminated with latent instances of positive class and hence can be considered as an unlabeled mixture. The objectives are to classify the unlabeled sample and train an unbiased PN classifier, which generally requires to identify the mixing proportions of positives and negatives first. Recently, unbiased risk estimation framework has achieved state-of-the-art performance in PU learning. This approach, however, exhibits two major bottlenecks. First, the mixing proportions are assumed to be identified, i.e. known in the domain or estimated with additional methods. Second, the approach relies on the classifier being a neural network. In this paper, we propose DEDPUL, a method that solves PU Learning without the aforementioned issues. The mechanism behind DEDPUL is to apply a computationally cheap post-processing procedure to the predictions of any classifier trained to distinguish positive and unlabeled data. Instead of assuming the proportions to be identified, DEDPUL estimates them alongside with classifying unlabeled sample. Experiments show that DEDPUL outperforms the current state-of-the-art in both proportion estimation and PU Classification. |
1803.03576 | Shweta Bhatt | Shweta Bhatt, Sagar Joglekar, Shehar Bano, Nishanth Sastry | Illuminating an Ecosystem of Partisan Websites | Published at The Web Conference 2018 (WWW 2018). Please cite the WWW
version | null | 10.1145/3184558.3188725 | null | cs.SI | http://creativecommons.org/licenses/by/4.0/ | This paper aims to shed light on alternative news media ecosystems that are
believed to have influenced opinions and beliefs by false and/or biased news
reporting during the 2016 US Presidential Elections. We examine a large,
professionally curated list of 668 hyper-partisan websites and their
corresponding Facebook pages, and identify key characteristics that mediate the
traffic flow within this ecosystem. We uncover a pattern of new websites being
established in the run up to the elections, and abandoned after. Such websites
form an ecosystem, creating links from one website to another, and by `liking'
each others' Facebook pages. These practices are highly effective in directing
user traffic internally within the ecosystem in a highly partisan manner, with
right-leaning sites linking to and liking other right-leaning sites and
similarly left-leaning sites linking to other sites on the left, thus forming a
filter bubble amongst news producers similar to the filter bubble which has
been widely observed among consumers of partisan news. Whereas there is
activity along both left- and right-leaning sites, right-leaning sites are more
evolved, accounting for a disproportionate number of abandoned websites and
partisan internal links. We also examine demographic characteristics of
consumers of hyper-partisan news and find that some of the more populous
demographic groups in the US tend to be consumers of more right-leaning sites.
| [
{
"created": "Fri, 9 Mar 2018 15:48:00 GMT",
"version": "v1"
}
] | 2018-03-12 | [
[
"Bhatt",
"Shweta",
""
],
[
"Joglekar",
"Sagar",
""
],
[
"Bano",
"Shehar",
""
],
[
"Sastry",
"Nishanth",
""
]
] | This paper aims to shed light on alternative news media ecosystems that are believed to have influenced opinions and beliefs by false and/or biased news reporting during the 2016 US Presidential Elections. We examine a large, professionally curated list of 668 hyper-partisan websites and their corresponding Facebook pages, and identify key characteristics that mediate the traffic flow within this ecosystem. We uncover a pattern of new websites being established in the run up to the elections, and abandoned after. Such websites form an ecosystem, creating links from one website to another, and by `liking' each others' Facebook pages. These practices are highly effective in directing user traffic internally within the ecosystem in a highly partisan manner, with right-leaning sites linking to and liking other right-leaning sites and similarly left-leaning sites linking to other sites on the left, thus forming a filter bubble amongst news producers similar to the filter bubble which has been widely observed among consumers of partisan news. Whereas there is activity along both left- and right-leaning sites, right-leaning sites are more evolved, accounting for a disproportionate number of abandoned websites and partisan internal links. We also examine demographic characteristics of consumers of hyper-partisan news and find that some of the more populous demographic groups in the US tend to be consumers of more right-leaning sites. |
1611.07485 | Qiangui Huang | Qiangui Huang, Weiyue Wang, Kevin Zhou, Suya You, Ulrich Neumann | Scene Labeling using Gated Recurrent Units with Explicit Long Range
Conditioning | updated version 2 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recurrent neural network (RNN), as a powerful contextual dependency modeling
framework, has been widely applied to scene labeling problems. However, this
work shows that directly applying traditional RNN architectures, which unfolds
a 2D lattice grid into a sequence, is not sufficient to model structure
dependencies in images due to the "impact vanishing" problem. First, we give an
empirical analysis about the "impact vanishing" problem. Then, a new RNN unit
named Recurrent Neural Network with explicit long range conditioning (RNN-ELC)
is designed to alleviate this problem. A novel neural network architecture is
built for scene labeling tasks where one of the variants of the new RNN unit,
Gated Recurrent Unit with Explicit Long-range Conditioning (GRU-ELC), is used
to model multi scale contextual dependencies in images. We validate the use of
GRU-ELC units with state-of-the-art performance on three standard scene
labeling datasets. Comprehensive experiments demonstrate that the new GRU-ELC
unit benefits scene labeling problem a lot as it can encode longer contextual
dependencies in images more effectively than traditional RNN units.
| [
{
"created": "Tue, 22 Nov 2016 19:43:24 GMT",
"version": "v1"
},
{
"created": "Tue, 28 Mar 2017 05:12:44 GMT",
"version": "v2"
}
] | 2017-03-29 | [
[
"Huang",
"Qiangui",
""
],
[
"Wang",
"Weiyue",
""
],
[
"Zhou",
"Kevin",
""
],
[
"You",
"Suya",
""
],
[
"Neumann",
"Ulrich",
""
]
] | Recurrent neural network (RNN), as a powerful contextual dependency modeling framework, has been widely applied to scene labeling problems. However, this work shows that directly applying traditional RNN architectures, which unfolds a 2D lattice grid into a sequence, is not sufficient to model structure dependencies in images due to the "impact vanishing" problem. First, we give an empirical analysis about the "impact vanishing" problem. Then, a new RNN unit named Recurrent Neural Network with explicit long range conditioning (RNN-ELC) is designed to alleviate this problem. A novel neural network architecture is built for scene labeling tasks where one of the variants of the new RNN unit, Gated Recurrent Unit with Explicit Long-range Conditioning (GRU-ELC), is used to model multi scale contextual dependencies in images. We validate the use of GRU-ELC units with state-of-the-art performance on three standard scene labeling datasets. Comprehensive experiments demonstrate that the new GRU-ELC unit benefits scene labeling problem a lot as it can encode longer contextual dependencies in images more effectively than traditional RNN units. |
2207.07797 | Lei Hsiung | Lei Hsiung, Yun-Yun Tsai, Pin-Yu Chen, Tsung-Yi Ho | CARBEN: Composite Adversarial Robustness Benchmark | IJCAI 2022 Demo Track; The demonstration is at
https://hsiung.cc/CARBEN/ | null | null | null | cs.CV cs.AI cs.HC | http://creativecommons.org/licenses/by/4.0/ | Prior literature on adversarial attack methods has mainly focused on
attacking with and defending against a single threat model, e.g., perturbations
bounded in Lp ball. However, multiple threat models can be combined into
composite perturbations. One such approach, composite adversarial attack (CAA),
not only expands the perturbable space of the image, but also may be overlooked
by current modes of robustness evaluation. This paper demonstrates how CAA's
attack order affects the resulting image, and provides real-time inferences of
different models, which will facilitate users' configuration of the parameters
of the attack level and their rapid evaluation of model prediction. A
leaderboard to benchmark adversarial robustness against CAA is also introduced.
| [
{
"created": "Sat, 16 Jul 2022 01:08:44 GMT",
"version": "v1"
}
] | 2022-07-19 | [
[
"Hsiung",
"Lei",
""
],
[
"Tsai",
"Yun-Yun",
""
],
[
"Chen",
"Pin-Yu",
""
],
[
"Ho",
"Tsung-Yi",
""
]
] | Prior literature on adversarial attack methods has mainly focused on attacking with and defending against a single threat model, e.g., perturbations bounded in Lp ball. However, multiple threat models can be combined into composite perturbations. One such approach, composite adversarial attack (CAA), not only expands the perturbable space of the image, but also may be overlooked by current modes of robustness evaluation. This paper demonstrates how CAA's attack order affects the resulting image, and provides real-time inferences of different models, which will facilitate users' configuration of the parameters of the attack level and their rapid evaluation of model prediction. A leaderboard to benchmark adversarial robustness against CAA is also introduced. |
2307.08493 | Yixi Cai | Yixi Cai, Fanze Kong, Yunfan Ren, Fangcheng Zhu, Jiarong Lin, Fu Zhang | Occupancy Grid Mapping without Ray-Casting for High-resolution LiDAR
Sensors | Supplementary material included. Accepted for publication in IEEE
Transactions on Robotics (T-RO) | null | null | null | cs.RO | http://creativecommons.org/licenses/by/4.0/ | Occupancy mapping is a fundamental component of robotic systems to reason
about the unknown and known regions of the environment. This article presents
an efficient occupancy mapping framework for high-resolution LiDAR sensors,
termed D-Map. The framework introduces three main novelties to address the
computational efficiency challenges of occupancy mapping. Firstly, we use a
depth image to determine the occupancy state of regions instead of the
traditional ray-casting method. Secondly, we introduce an efficient on-tree
update strategy on a tree-based map structure. These two techniques avoid
redundant visits to small cells, significantly reducing the number of cells to
be updated. Thirdly, we remove known cells from the map at each update by
leveraging the low false alarm rate of LiDAR sensors. This approach not only
enhances our framework's update efficiency by reducing map size but also endows
it with an interesting decremental property, which we have named D-Map. To
support our design, we provide theoretical analyses of the accuracy of the
depth image projection and time complexity of occupancy updates. Furthermore,
we conduct extensive benchmark experiments on various LiDAR sensors in both
public and private datasets. Our framework demonstrates superior efficiency in
comparison with other state-of-the-art methods while maintaining comparable
mapping accuracy and high memory efficiency. We demonstrate two real-world
applications of D-Map for real-time occupancy mapping on a handle device and an
aerial platform carrying a high-resolution LiDAR. In addition, we open-source
the implementation of D-Map on GitHub to benefit society:
github.com/hku-mars/D-Map.
| [
{
"created": "Mon, 17 Jul 2023 13:56:28 GMT",
"version": "v1"
},
{
"created": "Thu, 5 Oct 2023 09:16:44 GMT",
"version": "v2"
}
] | 2023-10-06 | [
[
"Cai",
"Yixi",
""
],
[
"Kong",
"Fanze",
""
],
[
"Ren",
"Yunfan",
""
],
[
"Zhu",
"Fangcheng",
""
],
[
"Lin",
"Jiarong",
""
],
[
"Zhang",
"Fu",
""
]
] | Occupancy mapping is a fundamental component of robotic systems to reason about the unknown and known regions of the environment. This article presents an efficient occupancy mapping framework for high-resolution LiDAR sensors, termed D-Map. The framework introduces three main novelties to address the computational efficiency challenges of occupancy mapping. Firstly, we use a depth image to determine the occupancy state of regions instead of the traditional ray-casting method. Secondly, we introduce an efficient on-tree update strategy on a tree-based map structure. These two techniques avoid redundant visits to small cells, significantly reducing the number of cells to be updated. Thirdly, we remove known cells from the map at each update by leveraging the low false alarm rate of LiDAR sensors. This approach not only enhances our framework's update efficiency by reducing map size but also endows it with an interesting decremental property, which we have named D-Map. To support our design, we provide theoretical analyses of the accuracy of the depth image projection and time complexity of occupancy updates. Furthermore, we conduct extensive benchmark experiments on various LiDAR sensors in both public and private datasets. Our framework demonstrates superior efficiency in comparison with other state-of-the-art methods while maintaining comparable mapping accuracy and high memory efficiency. We demonstrate two real-world applications of D-Map for real-time occupancy mapping on a handle device and an aerial platform carrying a high-resolution LiDAR. In addition, we open-source the implementation of D-Map on GitHub to benefit society: github.com/hku-mars/D-Map. |
2103.11169 | Jogendra Nath Kundu | Naveen Venkat, Jogendra Nath Kundu, Durgesh Kumar Singh, Ambareesh
Revanur, R. Venkatesh Babu | Your Classifier can Secretly Suffice Multi-Source Domain Adaptation | NeurIPS 2020. Project page: https://sites.google.com/view/simpal | null | null | null | cs.LG cs.CV | http://creativecommons.org/publicdomain/zero/1.0/ | Multi-Source Domain Adaptation (MSDA) deals with the transfer of task
knowledge from multiple labeled source domains to an unlabeled target domain,
under a domain-shift. Existing methods aim to minimize this domain-shift using
auxiliary distribution alignment objectives. In this work, we present a
different perspective to MSDA wherein deep models are observed to implicitly
align the domains under label supervision. Thus, we aim to utilize implicit
alignment without additional training objectives to perform adaptation. To this
end, we use pseudo-labeled target samples and enforce a classifier agreement on
the pseudo-labels, a process called Self-supervised Implicit Alignment
(SImpAl). We find that SImpAl readily works even under category-shift among the
source domains. Further, we propose classifier agreement as a cue to determine
the training convergence, resulting in a simple training algorithm. We provide
a thorough evaluation of our approach on five benchmarks, along with detailed
insights into each component of our approach.
| [
{
"created": "Sat, 20 Mar 2021 12:44:13 GMT",
"version": "v1"
}
] | 2021-03-23 | [
[
"Venkat",
"Naveen",
""
],
[
"Kundu",
"Jogendra Nath",
""
],
[
"Singh",
"Durgesh Kumar",
""
],
[
"Revanur",
"Ambareesh",
""
],
[
"Babu",
"R. Venkatesh",
""
]
] | Multi-Source Domain Adaptation (MSDA) deals with the transfer of task knowledge from multiple labeled source domains to an unlabeled target domain, under a domain-shift. Existing methods aim to minimize this domain-shift using auxiliary distribution alignment objectives. In this work, we present a different perspective to MSDA wherein deep models are observed to implicitly align the domains under label supervision. Thus, we aim to utilize implicit alignment without additional training objectives to perform adaptation. To this end, we use pseudo-labeled target samples and enforce a classifier agreement on the pseudo-labels, a process called Self-supervised Implicit Alignment (SImpAl). We find that SImpAl readily works even under category-shift among the source domains. Further, we propose classifier agreement as a cue to determine the training convergence, resulting in a simple training algorithm. We provide a thorough evaluation of our approach on five benchmarks, along with detailed insights into each component of our approach. |
2301.01113 | Thanh Le-Cong Le-Cong Thanh | Thanh Le-Cong, Duc-Minh Luong, Xuan Bach D. Le, David Lo, Nhat-Hoa
Tran, Bui Quang-Huy and Quyet-Thang Huynh | Invalidator: Automated Patch Correctness Assessment via Semantic and
Syntactic Reasoning | null | IEEE Transactions on Software Engineering, 2023 | 10.1109/TSE.2023.3255177 | null | cs.SE cs.LG | http://creativecommons.org/licenses/by/4.0/ | Automated program repair (APR) faces the challenge of test overfitting, where
generated patches pass validation tests but fail to generalize. Existing
methods for patch assessment involve generating new tests or manual inspection,
which can be time-consuming or biased. In this paper, we propose a novel
technique, INVALIDATOR, to automatically assess the correctness of
APR-generated patches via semantic and syntactic reasoning. INVALIDATOR
leverages program invariants to reason about program semantics while also
capturing program syntax through language semantics learned from a large code
corpus using a pre-trained language model. Given a buggy program and the
developer-patched program, INVALIDATOR infers likely invariants on both
programs. Then, INVALIDATOR determines that an APR-generated patch overfits if:
(1) it violates correct specifications or (2) maintains erroneous behaviors
from the original buggy program. In case our approach fails to determine an
overfitting patch based on invariants, INVALIDATOR utilizes a trained model
from labeled patches to assess patch correctness based on program syntax. The
benefit of INVALIDATOR is threefold. First, INVALIDATOR leverages both semantic
and syntactic reasoning to enhance its discriminative capability. Second,
INVALIDATOR does not require new test cases to be generated, but instead only
relies on the current test suite and uses invariant inference to generalize
program behaviors. Third, INVALIDATOR is fully automated. Experimental results
demonstrate that INVALIDATOR outperforms existing methods in terms of Accuracy
and F-measure, correctly identifying 79% of overfitting patches and detecting
23% more overfitting patches than the best baseline.
| [
{
"created": "Tue, 3 Jan 2023 14:16:32 GMT",
"version": "v1"
},
{
"created": "Fri, 17 Mar 2023 10:56:58 GMT",
"version": "v2"
}
] | 2023-03-20 | [
[
"Le-Cong",
"Thanh",
""
],
[
"Luong",
"Duc-Minh",
""
],
[
"Le",
"Xuan Bach D.",
""
],
[
"Lo",
"David",
""
],
[
"Tran",
"Nhat-Hoa",
""
],
[
"Quang-Huy",
"Bui",
""
],
[
"Huynh",
"Quyet-Thang",
""
]
] | Automated program repair (APR) faces the challenge of test overfitting, where generated patches pass validation tests but fail to generalize. Existing methods for patch assessment involve generating new tests or manual inspection, which can be time-consuming or biased. In this paper, we propose a novel technique, INVALIDATOR, to automatically assess the correctness of APR-generated patches via semantic and syntactic reasoning. INVALIDATOR leverages program invariants to reason about program semantics while also capturing program syntax through language semantics learned from a large code corpus using a pre-trained language model. Given a buggy program and the developer-patched program, INVALIDATOR infers likely invariants on both programs. Then, INVALIDATOR determines that an APR-generated patch overfits if: (1) it violates correct specifications or (2) maintains erroneous behaviors from the original buggy program. In case our approach fails to determine an overfitting patch based on invariants, INVALIDATOR utilizes a trained model from labeled patches to assess patch correctness based on program syntax. The benefit of INVALIDATOR is threefold. First, INVALIDATOR leverages both semantic and syntactic reasoning to enhance its discriminative capability. Second, INVALIDATOR does not require new test cases to be generated, but instead only relies on the current test suite and uses invariant inference to generalize program behaviors. Third, INVALIDATOR is fully automated. Experimental results demonstrate that INVALIDATOR outperforms existing methods in terms of Accuracy and F-measure, correctly identifying 79% of overfitting patches and detecting 23% more overfitting patches than the best baseline. |
2108.04990 | Sanchit Sinha | Sanchit Sinha, Hanjie Chen, Arshdeep Sekhon, Yangfeng Ji, Yanjun Qi | Perturbing Inputs for Fragile Interpretations in Deep Natural Language
Processing | EMNLP-BlackboxNLP, 2021 | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Interpretability methods like Integrated Gradient and LIME are popular
choices for explaining natural language model predictions with relative word
importance scores. These interpretations need to be robust for trustworthy NLP
applications in high-stake areas like medicine or finance. Our paper
demonstrates how interpretations can be manipulated by making simple word
perturbations on an input text. Via a small portion of word-level swaps, these
adversarial perturbations aim to make the resulting text semantically and
spatially similar to its seed input (therefore sharing similar
interpretations). Simultaneously, the generated examples achieve the same
prediction label as the seed yet are given a substantially different
explanation by the interpretation methods. Our experiments generate fragile
interpretations to attack two SOTA interpretation methods, across three popular
Transformer models and on two different NLP datasets. We observe that the rank
order correlation drops by over 20% when less than 10% of words are perturbed
on average. Further, rank-order correlation keeps decreasing as more words get
perturbed. Furthermore, we demonstrate that candidates generated from our
method have good quality metrics.
| [
{
"created": "Wed, 11 Aug 2021 02:07:21 GMT",
"version": "v1"
},
{
"created": "Wed, 15 Sep 2021 17:07:24 GMT",
"version": "v2"
}
] | 2021-09-16 | [
[
"Sinha",
"Sanchit",
""
],
[
"Chen",
"Hanjie",
""
],
[
"Sekhon",
"Arshdeep",
""
],
[
"Ji",
"Yangfeng",
""
],
[
"Qi",
"Yanjun",
""
]
] | Interpretability methods like Integrated Gradient and LIME are popular choices for explaining natural language model predictions with relative word importance scores. These interpretations need to be robust for trustworthy NLP applications in high-stake areas like medicine or finance. Our paper demonstrates how interpretations can be manipulated by making simple word perturbations on an input text. Via a small portion of word-level swaps, these adversarial perturbations aim to make the resulting text semantically and spatially similar to its seed input (therefore sharing similar interpretations). Simultaneously, the generated examples achieve the same prediction label as the seed yet are given a substantially different explanation by the interpretation methods. Our experiments generate fragile interpretations to attack two SOTA interpretation methods, across three popular Transformer models and on two different NLP datasets. We observe that the rank order correlation drops by over 20% when less than 10% of words are perturbed on average. Further, rank-order correlation keeps decreasing as more words get perturbed. Furthermore, we demonstrate that candidates generated from our method have good quality metrics. |
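The rank-order correlation reported above can be estimated as follows; a minimal pure-Python sketch with made-up word-importance scores for the same sentence before and after a perturbation (real experiments would take the scores from an interpretation method such as LIME or Integrated Gradients).

```python
def ranks(scores):
    """Rank positions (1 = highest score); assumes no ties in this toy example."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    r = [0] * len(scores)
    for rank, idx in enumerate(order, start=1):
        r[idx] = rank
    return r

def spearman(x, y):
    """Spearman rank correlation via 1 - 6*sum(d^2) / (n*(n^2-1))."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical word-importance scores for one sentence, before and after
# swapping a small fraction of its words.
words           = ["the", "movie", "was", "surprisingly", "good"]
importance_orig = [0.05, 0.40, 0.10, 0.25, 0.20]
importance_pert = [0.05, 0.15, 0.10, 0.45, 0.25]

top_orig = words[importance_orig.index(max(importance_orig))]
top_pert = words[importance_pert.index(max(importance_pert))]
print("most important word:", top_orig, "->", top_pert)
print("rank-order correlation:", round(spearman(importance_orig, importance_pert), 3))
```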
2309.10977 | Jayaraman J. Thiagarajan | Jayaraman J. Thiagarajan, Vivek Narayanaswamy, Puja Trivedi, Rushil
Anirudh | PAGER: A Framework for Failure Analysis of Deep Regression Models | Published at ICML 2024 | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Safe deployment of AI models requires proactive detection of failures to
prevent costly errors. To this end, we study the important problem of detecting
failures in deep regression models. Existing approaches rely on epistemic
uncertainty estimates or inconsistency w.r.t. the training data to identify
failure. Interestingly, we find that while uncertainties are necessary, they are
insufficient to accurately characterize failure in practice. Hence, we
introduce PAGER (Principled Analysis of Generalization Errors in Regressors), a
framework to systematically detect and characterize failures in deep
regressors. Built upon the principle of anchored training in deep models, PAGER
unifies both epistemic uncertainty and complementary manifold non-conformity
scores to accurately organize samples into different risk regimes.
| [
{
"created": "Wed, 20 Sep 2023 00:37:35 GMT",
"version": "v1"
},
{
"created": "Sat, 1 Jun 2024 18:55:12 GMT",
"version": "v2"
}
] | 2024-06-04 | [
[
"Thiagarajan",
"Jayaraman J.",
""
],
[
"Narayanaswamy",
"Vivek",
""
],
[
"Trivedi",
"Puja",
""
],
[
"Anirudh",
"Rushil",
""
]
] | Safe deployment of AI models requires proactive detection of failures to prevent costly errors. To this end, we study the important problem of detecting failures in deep regression models. Existing approaches rely on epistemic uncertainty estimates or inconsistency w.r.t the training data to identify failure. Interestingly, we find that while uncertainties are necessary they are insufficient to accurately characterize failure in practice. Hence, we introduce PAGER (Principled Analysis of Generalization Errors in Regressors), a framework to systematically detect and characterize failures in deep regressors. Built upon the principle of anchored training in deep models, PAGER unifies both epistemic uncertainty and complementary manifold non-conformity scores to accurately organize samples into different risk regimes. |
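A rough numpy sketch of the two-signal triage described above (not the actual PAGER code): combine ensemble disagreement as an epistemic-uncertainty proxy with a crude non-conformity score (distance to the nearest training input) and bucket test points into risk regimes. The toy 1-D regression task, the ensemble construction, and the thresholds are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression setup: an "ensemble" of slightly different fitted models.
x_train = rng.uniform(-2, 2, size=200)
models = [np.poly1d(np.polyfit(x_train,
                               np.sin(x_train) + 0.1 * rng.standard_normal(200), 3))
          for _ in range(5)]

def risk_regime(x):
    preds = np.array([m(x) for m in models])
    uncertainty = preds.std()                    # epistemic proxy: ensemble disagreement
    nonconformity = np.abs(x - x_train).min()    # distance to the nearest training input
    if uncertainty < 0.05 and nonconformity < 0.1:
        return "in-distribution / low risk"
    if uncertainty < 0.05:
        return "low uncertainty but non-conforming / moderate risk"
    return "high risk"

for x in [0.0, 1.5, 4.0]:
    print(x, "->", risk_regime(x))
```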
2402.03610 | Tomoyuki Kagaya | Tomoyuki Kagaya, Thong Jing Yuan, Yuxuan Lou, Jayashree Karlekar,
Sugiri Pranata, Akira Kinose, Koki Oguri, Felix Wick, Yang You | RAP: Retrieval-Augmented Planning with Contextual Memory for Multimodal
LLM Agents | null | null | null | null | cs.LG cs.AI cs.CL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Owing to recent advancements, Large Language Models (LLMs) can now be
deployed as agents for increasingly complex decision-making applications in
areas including robotics, gaming, and API integration. However, reflecting past
experiences in current decision-making processes, an innate human behavior,
continues to pose significant challenges. Addressing this, we propose the
Retrieval-Augmented Planning (RAP) framework, designed to dynamically leverage
past experiences corresponding to the current situation and context, thereby
enhancing agents' planning capabilities. RAP distinguishes itself by being
versatile: it excels in both text-only and multimodal environments, making it
suitable for a wide range of tasks. Empirical evaluations demonstrate RAP's
effectiveness, where it achieves SOTA performance in textual scenarios and
notably enhances multimodal LLM agents' performance for embodied tasks. These
results highlight RAP's potential in advancing the functionality and
applicability of LLM agents in complex, real-world applications.
| [
{
"created": "Tue, 6 Feb 2024 00:53:27 GMT",
"version": "v1"
}
] | 2024-02-07 | [
[
"Kagaya",
"Tomoyuki",
""
],
[
"Yuan",
"Thong Jing",
""
],
[
"Lou",
"Yuxuan",
""
],
[
"Karlekar",
"Jayashree",
""
],
[
"Pranata",
"Sugiri",
""
],
[
"Kinose",
"Akira",
""
],
[
"Oguri",
"Koki",
""
],
[
"Wick",
"Felix",
""
],
[
"You",
"Yang",
""
]
] | Owing to recent advancements, Large Language Models (LLMs) can now be deployed as agents for increasingly complex decision-making applications in areas including robotics, gaming, and API integration. However, reflecting past experiences in current decision-making processes, an innate human behavior, continues to pose significant challenges. Addressing this, we propose Retrieval-Augmented Planning (RAP) framework, designed to dynamically leverage past experiences corresponding to the current situation and context, thereby enhancing agents' planning capabilities. RAP distinguishes itself by being versatile: it excels in both text-only and multimodal environments, making it suitable for a wide range of tasks. Empirical evaluations demonstrate RAP's effectiveness, where it achieves SOTA performance in textual scenarios and notably enhances multimodal LLM agents' performance for embodied tasks. These results highlight RAP's potential in advancing the functionality and applicability of LLM agents in complex, real-world applications. |
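A minimal sketch of the retrieval step such a framework depends on (the memory format, the bag-of-words embedding, and all names here are our own assumptions, not the RAP code): score stored experiences against the current situation by cosine similarity and hand the best matches back to the planner.

```python
import math

def embed(text):
    """Toy bag-of-words embedding; a real agent would use a learned encoder."""
    tokens = text.lower().split()
    return {w: tokens.count(w) for w in set(tokens)}

def cosine(a, b):
    dot = sum(a[w] * b.get(w, 0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

memory = [
    {"situation": "kitchen task: heat water in a mug", "plan": "find mug, fill, microwave"},
    {"situation": "living room task: turn on the tv",  "plan": "locate remote, press power"},
    {"situation": "kitchen task: make instant coffee", "plan": "boil water, add coffee, stir"},
]

def retrieve(current, k=2):
    """Return the k stored experiences most similar to the current situation."""
    q = embed(current)
    return sorted(memory, key=lambda m: cosine(q, embed(m["situation"])), reverse=True)[:k]

for m in retrieve("kitchen task: prepare hot tea"):
    print("retrieved plan:", m["plan"])
```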
1910.07910 | Matthias Naaf | Katrin M. Dannert, Erich Gr\"adel, Matthias Naaf, Val Tannen | Generalized Absorptive Polynomials and Provenance Semantics for
Fixed-Point Logic | null | null | null | null | cs.LO cs.DB math.LO | http://creativecommons.org/publicdomain/zero/1.0/ | Semiring provenance is a successful approach to provide detailed information
on the combinations of atomic facts that are responsible for the result of a
query. In particular, interpretations in general provenance semirings of
polynomials or formal power series give precise descriptions of the successful
evaluation strategies for the query. While provenance analysis in databases
has, for a long time, been largely confined to negation-free query languages, a
recent approach extends this to model checking problems for logics with full
negation. Algebraically this relies on new quotient semirings of
dual-indeterminate polynomials or power series. So far, this approach has been
developed mainly for first-order logic and for the positive fragment of least
fixed-point logic. What has remained open is an adequate treatment for
fixed-point calculi that admit arbitrary interleavings of least and greatest
fixed points. We show that an adequate framework for the provenance analysis of
full fixed-point logics is provided by semirings that are (1) fully continuous,
(2) absorptive, and (3) chain-positive. Full continuity guarantees that
provenance values of least and greatest fixed-points are well-defined.
Absorptive semirings provide a symmetry between least and greatest fixed-point
computations and make sure that provenance values of greatest fixed points are
informative. Finally, chain-positivity is responsible for having
truth-preserving interpretations, which give non-zero values to all true
formulae. We further identify semirings of generalized absorptive polynomials
and prove universal properties that make them the most general appropriate
semirings for LFP. We illustrate the power of provenance interpretations in
these semirings by relating them to provenance values of plays and strategies
in the associated model-checking games.
| [
{
"created": "Thu, 17 Oct 2019 13:44:37 GMT",
"version": "v1"
},
{
"created": "Thu, 6 Feb 2020 15:55:33 GMT",
"version": "v2"
},
{
"created": "Fri, 10 Jul 2020 10:11:57 GMT",
"version": "v3"
}
] | 2020-07-13 | [
[
"Dannert",
"Katrin M.",
""
],
[
"Grädel",
"Erich",
""
],
[
"Naaf",
"Matthias",
""
],
[
"Tannen",
"Val",
""
]
] | Semiring provenance is a successful approach to provide detailed information on the combinations of atomic facts that are responsible for the result of a query. In particular, interpretations in general provenance semirings of polynomials or formal power series give precise descriptions of the successful evaluation strategies for the query. While provenance analysis in databases has, for a long time, been largely confined to negation-free query languages, a recent approach extends this to model checking problems for logics with full negation. Algebraically this relies on new quotient semirings of dual-indeterminate polynomials or power series. So far, this approach has been developed mainly for first-order logic and for the positive fragment of least fixed-point logic. What has remained open is an adequate treatment for fixed-point calculi that admit arbitrary interleavings of least and greatest fixed points. We show that an adequate framework for the provenance analysis of full fixed-point logics is provided by semirings that are (1) fully continuous, (2) absorptive, and (3) chain-positive. Full continuity guarantees that provenance values of least and greatest fixed-points are well-defined. Absorptive semirings provide a symmetry between least and greatest fixed-point computations and make sure that provenance values of greatest fixed points are informative. Finally, chain-positivity is responsible for having truth-preserving interpretations, which give non-zero values to all true formulae. We further identify semirings of generalized absorptive polynomials and prove universal properties that make them the most general appropriate semirings for LFP. We illustrate the power of provenance interpretations in these semirings by relating them to provenance values of plays and strategies in the associated model-checking games. |
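As a concrete illustration of provenance interpretations in a commutative semiring (a toy of ours, far simpler than the absorptive semirings studied above): the same positive query is evaluated once in the Boolean semiring and once in the counting semiring N, where addition tracks alternative derivations and multiplication tracks the joint use of facts.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Semiring:
    zero: Any
    one: Any
    add: Callable   # interprets "or" / alternative derivations
    mul: Callable   # interprets "and" / joint use of facts

boolean = Semiring(False, True, lambda a, b: a or b, lambda a, b: a and b)
counting = Semiring(0, 1, lambda a, b: a + b, lambda a, b: a * b)

# Provenance annotations of atomic facts edge(x, y) in a tiny graph database.
facts = {("a", "b"): 1, ("b", "c"): 1, ("a", "c"): 1}

def edge(s, x, y):
    return s.one if (x, y) in facts else s.zero

def reachable_in_two(s, x, y):
    """Provenance of the query: edge(x,y) OR exists z. edge(x,z) AND edge(z,y)."""
    value = edge(s, x, y)
    for z in {n for e in facts for n in e}:
        value = s.add(value, s.mul(edge(s, x, z), edge(s, z, y)))
    return value

print("Boolean semiring :", reachable_in_two(boolean, "a", "c"))   # True
print("Counting semiring:", reachable_in_two(counting, "a", "c"))  # number of derivations
```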
2311.06579 | Jiaxin Zhang | Jiaxin Zhang, Meiqin Liu, Senlin Zhang, Ronghao Zheng, Shanling Dong | Five-Tiered Route Planner for Multi-AUV Accessing Fixed Nodes in
Uncertain Ocean Environments | null | null | null | null | cs.RO cs.SY eess.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This article introduces a five-tiered route planner for accessing multiple
nodes with multiple autonomous underwater vehicles (AUVs) that enables
efficient task completion in stochastic ocean environments. First, the
pre-planning tier solves the single-AUV routing problem to find the optimal
giant route (GR), estimates the number of required AUVs based on GR
segmentation, and allocates nodes for each AUV to access. Second, the route
planning tier plans individual routes for each AUV. During navigation, the path
planning tier provides each AUV with physical paths between any two points,
while the actuation tier is responsible for path tracking and obstacle
avoidance. Finally, in the stochastic ocean environment, deviations from the
initial plan may occur; thus, an auction-based coordination tier drives online
task coordination among AUVs in a distributed manner. Simulation experiments
are conducted in multiple different scenarios to test the performance of the
proposed planner, and the promising results show that the proposed method
reduces AUV usage by 7.5% compared with the existing methods. When using the
same number of AUVs, the fleet equipped with the proposed planner achieves a
6.2% improvement in average task completion rate.
| [
{
"created": "Sat, 11 Nov 2023 14:19:43 GMT",
"version": "v1"
}
] | 2023-11-14 | [
[
"Zhang",
"Jiaxin",
""
],
[
"Liu",
"Meiqin",
""
],
[
"Zhang",
"Senlin",
""
],
[
"Zheng",
"Ronghao",
""
],
[
"Dong",
"Shanling",
""
]
] | This article introduces a five-tiered route planner for accessing multiple nodes with multiple autonomous underwater vehicles (AUVs) that enables efficient task completion in stochastic ocean environments. First, the pre-planning tier solves the single-AUV routing problem to find the optimal giant route (GR), estimates the number of required AUVs based on GR segmentation, and allocates nodes for each AUV to access. Second, the route planning tier plans individual routes for each AUV. During navigation, the path planning tier provides each AUV with physical paths between any two points, while the actuation tier is responsible for path tracking and obstacle avoidance. Finally, in the stochastic ocean environment, deviations from the initial plan may occur, thus, an auction-based coordination tier drives online task coordination among AUVs in a distributed manner. Simulation experiments are conducted in multiple different scenarios to test the performance of the proposed planner, and the promising results show that the proposed method reduces AUV usage by 7.5% compared with the existing methods. When using the same number of AUVs, the fleet equipped with the proposed planner achieves a 6.2% improvement in average task completion rate. |
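A small sketch of the pre-planning tier's idea (build a giant route, then segment it to estimate the number of AUVs), under strong simplifications that are our own: Euclidean distances, a greedy nearest-neighbour tour in place of a proper solver, and a fixed per-vehicle travel budget. The node coordinates and the budget are made up.

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def giant_route(nodes, start=(0.0, 0.0)):
    """Greedy nearest-neighbour tour over all nodes (a stand-in for a real TSP solver)."""
    route, current, remaining = [], start, list(nodes)
    while remaining:
        nxt = min(remaining, key=lambda n: dist(current, n))
        route.append(nxt)
        remaining.remove(nxt)
        current = nxt
    return route

def segment_route(route, budget, start=(0.0, 0.0)):
    """Cut the giant route into per-AUV segments whose travelled length stays within budget."""
    segments, current_seg, pos, used = [], [], start, 0.0
    for node in route:
        step = dist(pos, node)
        if current_seg and used + step > budget:
            segments.append(current_seg)          # start a new AUV from the depot
            current_seg, pos, used = [], start, 0.0
            step = dist(pos, node)
        current_seg.append(node)
        used += step
        pos = node
    if current_seg:
        segments.append(current_seg)
    return segments

nodes = [(2, 1), (5, 2), (6, 6), (1, 4), (3, 7)]
allocation = segment_route(giant_route(nodes), budget=9.0)
print("estimated AUVs needed:", len(allocation))
for i, seg in enumerate(allocation, 1):
    print(f"AUV {i} visits {seg}")
```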
2312.00823 | Zongqian Wu | Zongqian Wu, Yujing Liu, Mengmeng Zhan, Jialie Shen, Ping Hu, Xiaofeng
Zhu | Adaptive Multi-Modality Prompt Learning | null | null | null | null | cs.LG cs.AI cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Although current prompt learning methods have successfully been designed to
effectively reuse the large pre-trained models without fine-tuning their large
number of parameters, they still have limitations to be addressed, i.e.,
without considering the adverse impact of meaningless patches in every image
and without simultaneously considering in-sample generalization and
out-of-sample generalization. In this paper, we propose an adaptive
multi-modality prompt learning method to address the above issues. To do this, we
employ previous text prompt learning and propose a new image prompt learning.
The image prompt learning achieves in-sample and out-of-sample generalization,
by first masking meaningless patches and then padding them with the learnable
parameters and the information from texts. Moreover, each of the prompts
provides auxiliary information to each other, further strengthening these two
kinds of generalization. Experimental results on real datasets demonstrate that
our method outperforms SOTA methods, in terms of different downstream tasks.
| [
{
"created": "Thu, 30 Nov 2023 12:10:22 GMT",
"version": "v1"
}
] | 2023-12-05 | [
[
"Wu",
"Zongqian",
""
],
[
"Liu",
"Yujing",
""
],
[
"Zhan",
"Mengmeng",
""
],
[
"Shen",
"Jialie",
""
],
[
"Hu",
"Ping",
""
],
[
"Zhu",
"Xiaofeng",
""
]
] | Although current prompt learning methods have successfully been designed to effectively reuse the large pre-trained models without fine-tuning their large number of parameters, they still have limitations to be addressed, i.e., without considering the adverse impact of meaningless patches in every image and without simultaneously considering in-sample generalization and out-of-sample generalization. In this paper, we propose an adaptive multi-modality prompt learning to address the above issues. To do this, we employ previous text prompt learning and propose a new image prompt learning. The image prompt learning achieves in-sample and out-of-sample generalization, by first masking meaningless patches and then padding them with the learnable parameters and the information from texts. Moreover, each of the prompts provides auxiliary information to each other, further strengthening these two kinds of generalization. Experimental results on real datasets demonstrate that our method outperforms SOTA methods, in terms of different downstream tasks. |
2402.14380 | Changsong Pang | Changsong Pang, Xieyuanli Chen, Yimin Liu, Huimin Lu, Yuwei Cheng | RadarMOSEVE: A Spatial-Temporal Transformer Network for Radar-Only
Moving Object Segmentation and Ego-Velocity Estimation | Accepted at AAAI-24 | Proceedings of the AAAI Conference on Artificial
Intelligence.38(2024)4424-4432 | 10.1609/aaai.v38i5.28240 | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Moving object segmentation (MOS) and Ego velocity estimation (EVE) are vital
capabilities for mobile systems to achieve full autonomy. Several approaches
have attempted to achieve MOSEVE using a LiDAR sensor. However, LiDAR sensors
are typically expensive and susceptible to adverse weather conditions. Instead,
millimeter-wave radar (MWR) has gained popularity in robotics and autonomous
driving for real applications due to its cost-effectiveness and resilience to
bad weather. Nonetheless, publicly available MOSEVE datasets and approaches
using radar data are limited. Some existing methods adopt point convolutional
networks from LiDAR-based approaches, ignoring the specific artifacts and the
valuable radial velocity information of radar measurements, leading to
suboptimal performance. In this paper, we propose a novel transformer network
that effectively addresses the sparsity and noise issues and leverages the
radial velocity measurements of radar points using our devised radar self- and
cross-attention mechanisms. Based on that, our method achieves accurate EVE of
the robot and performs MOS using only radar data simultaneously. To thoroughly
evaluate the MOSEVE performance of our method, we annotated the radar points in
the public View-of-Delft (VoD) dataset and additionally constructed a new radar
dataset in various environments. The experimental results demonstrate the
superiority of our approach over existing state-of-the-art methods. The code is
available at https://github.com/ORCA-Uboat/RadarMOSEVE.
| [
{
"created": "Thu, 22 Feb 2024 08:48:59 GMT",
"version": "v1"
}
] | 2024-05-22 | [
[
"Pang",
"Changsong",
""
],
[
"Chen",
"Xieyuanli",
""
],
[
"Liu",
"Yimin",
""
],
[
"Lu",
"Huimin",
""
],
[
"Cheng",
"Yuwei",
""
]
] | Moving object segmentation (MOS) and Ego velocity estimation (EVE) are vital capabilities for mobile systems to achieve full autonomy. Several approaches have attempted to achieve MOSEVE using a LiDAR sensor. However, LiDAR sensors are typically expensive and susceptible to adverse weather conditions. Instead, millimeter-wave radar (MWR) has gained popularity in robotics and autonomous driving for real applications due to its cost-effectiveness and resilience to bad weather. Nonetheless, publicly available MOSEVE datasets and approaches using radar data are limited. Some existing methods adopt point convolutional networks from LiDAR-based approaches, ignoring the specific artifacts and the valuable radial velocity information of radar measurements, leading to suboptimal performance. In this paper, we propose a novel transformer network that effectively addresses the sparsity and noise issues and leverages the radial velocity measurements of radar points using our devised radar self- and cross-attention mechanisms. Based on that, our method achieves accurate EVE of the robot and performs MOS using only radar data simultaneously. To thoroughly evaluate the MOSEVE performance of our method, we annotated the radar points in the public View-of-Delft (VoD) dataset and additionally constructed a new radar dataset in various environments. The experimental results demonstrate the superiority of our approach over existing state-of-the-art methods. The code is available at https://github.com/ORCA-Uboat/RadarMOSEVE. |
2102.02417 | Kai Yuan Tay | Kai Yuan Tay, Lynnette Ng, Wei Han Chua, Lucerne Loke, Danqi Ye,
Melissa Chua | Audio Adversarial Examples: Attacks Using Vocal Masks | 9 pages, 1 figure, 2 tables. Submitted to COLING2020 | null | null | null | cs.SD cs.AI eess.AS | http://creativecommons.org/licenses/by/4.0/ | We construct audio adversarial examples on automatic Speech-To-Text systems.
Given any audio waveform, we produce another by overlaying an audio vocal
mask generated from the original audio. We apply our audio adversarial attack
to five SOTA STT systems: DeepSpeech, Julius, Kaldi, wav2letter@anywhere and
CMUSphinx. In addition, we engaged human annotators to transcribe the
adversarial audio. Our experiments show that these adversarial examples fool
State-Of-The-Art Speech-To-Text systems, yet humans are able to consistently
pick out the speech. The feasibility of this attack introduces a new domain to
study machine and human perception of speech.
| [
{
"created": "Thu, 4 Feb 2021 05:21:10 GMT",
"version": "v1"
},
{
"created": "Sat, 6 Feb 2021 03:31:23 GMT",
"version": "v2"
}
] | 2021-02-09 | [
[
"Tay",
"Kai Yuan",
""
],
[
"Ng",
"Lynnette",
""
],
[
"Chua",
"Wei Han",
""
],
[
"Loke",
"Lucerne",
""
],
[
"Ye",
"Danqi",
""
],
[
"Chua",
"Melissa",
""
]
] | We construct audio adversarial examples on automatic Speech-To-Text systems . Given any audio waveform, we produce an another by overlaying an audio vocal mask generated from the original audio. We apply our audio adversarial attack to five SOTA STT systems: DeepSpeech, Julius, Kaldi, wav2letter@anywhere and CMUSphinx. In addition, we engaged human annotators to transcribe the adversarial audio. Our experiments show that these adversarial examples fool State-Of-The-Art Speech-To-Text systems, yet humans are able to consistently pick out the speech. The feasibility of this attack introduces a new domain to study machine and human perception of speech. |
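A toy numpy sketch of the overlay operation described above (not the authors' mask generator): derive a quiet "vocal mask" signal from the original waveform and mix it back in, keeping the perturbation small relative to the seed audio. The tone-based stand-ins for the speech and the mask are assumptions.

```python
import numpy as np

sr = 16000
t = np.linspace(0, 1.0, sr, endpoint=False)

# Stand-in for an input utterance: a simple tone plus a little noise.
original = 0.6 * np.sin(2 * np.pi * 220 * t) + 0.01 * np.random.randn(sr)

# Hypothetical "vocal mask": harmonics derived from the original pitch,
# scaled down so the result stays perceptually close to the seed audio.
mask = 0.05 * (np.sin(2 * np.pi * 440 * t) + np.sin(2 * np.pi * 660 * t))

adversarial = np.clip(original + mask, -1.0, 1.0)

# Report how little the waveform moved (a proxy for humans still hearing the speech).
snr_db = 10 * np.log10(np.mean(original ** 2) / np.mean((adversarial - original) ** 2))
print(f"perturbation SNR: {snr_db:.1f} dB")
```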
2309.12466 | Chuta Sano | Chuta Sano and Ryan Kavanagh and Brigitte Pientka | Mechanizing Session-Types using a Structural View: Enforcing Linearity
without Linearity | Technical report containing an appendix with additional proofs.
Companion to an OOPSLA'23 paper of the same name | null | null | null | cs.PL | http://creativecommons.org/licenses/by/4.0/ | Session types employ a linear type system that ensures that communication
channels cannot be implicitly copied or discarded. As a result, many
mechanizations of these systems require modeling channel contexts and carefully
ensuring that they treat channels linearly. We demonstrate a technique that
localizes linearity conditions as additional predicates embedded within type
judgments, which allows us to use structural typing contexts instead of linear
ones. This technique is especially relevant when leveraging (weak) higher-order
abstract syntax to handle channel mobility and the intricate binding structures
that arise in session-typed systems. Following this approach, we mechanize a
session-typed system based on classical linear logic and its type preservation
proof in the proof assistant Beluga, which uses the logical framework LF as its
encoding language. We also prove adequacy for our encoding. This shows the
tractability and effectiveness of our approach in modelling substructural
systems such as session-typed languages.
| [
{
"created": "Thu, 21 Sep 2023 20:20:28 GMT",
"version": "v1"
}
] | 2023-09-25 | [
[
"Sano",
"Chuta",
""
],
[
"Kavanagh",
"Ryan",
""
],
[
"Pientka",
"Brigitte",
""
]
] | Session types employ a linear type system that ensures that communication channels cannot be implicitly copied or discarded. As a result, many mechanizations of these systems require modeling channel contexts and carefully ensuring that they treat channels linearly. We demonstrate a technique that localizes linearity conditions as additional predicates embedded within type judgments, which allows us to use structural typing contexts instead of linear ones. This technique is especially relevant when leveraging (weak) higher-order abstract syntax to handle channel mobility and the intricate binding structures that arise in session-typed systems. Following this approach, we mechanize a session-typed system based on classical linear logic and its type preservation proof in the proof assistant Beluga, which uses the logical framework LF as its encoding language. We also prove adequacy for our encoding. This shows the tractability and effectiveness of our approach in modelling substructural systems such as session-typed languages. |
2002.10829 | Alina Karakanta | Alina Karakanta, Matteo Negri, Marco Turchi | MuST-Cinema: a Speech-to-Subtitles corpus | Accepted at LREC 2020 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Growing needs in localising audiovisual content in multiple languages through
subtitles call for the development of automatic solutions for human subtitling.
Neural Machine Translation (NMT) can contribute to the automatisation of
subtitling, facilitating the work of human subtitlers and reducing turn-around
times and related costs. NMT requires high-quality, large, task-specific
training data. The existing subtitling corpora, however, are missing both
alignments to the source language audio and important information about
subtitle breaks. This poses a significant limitation for developing efficient
automatic approaches for subtitling, since the length and form of a subtitle
directly depend on the duration of the utterance. In this work, we present
MuST-Cinema, a multilingual speech translation corpus built from TED subtitles.
The corpus is comprised of (audio, transcription, translation) triplets.
Subtitle breaks are preserved by inserting special symbols. We show that the
corpus can be used to build models that efficiently segment sentences into
subtitles and propose a method for annotating existing subtitling corpora with
subtitle breaks, conforming to the constraint of length.
| [
{
"created": "Tue, 25 Feb 2020 12:40:06 GMT",
"version": "v1"
}
] | 2020-02-26 | [
[
"Karakanta",
"Alina",
""
],
[
"Negri",
"Matteo",
""
],
[
"Turchi",
"Marco",
""
]
] | Growing needs in localising audiovisual content in multiple languages through subtitles call for the development of automatic solutions for human subtitling. Neural Machine Translation (NMT) can contribute to the automatisation of subtitling, facilitating the work of human subtitlers and reducing turn-around times and related costs. NMT requires high-quality, large, task-specific training data. The existing subtitling corpora, however, are missing both alignments to the source language audio and important information about subtitle breaks. This poses a significant limitation for developing efficient automatic approaches for subtitling, since the length and form of a subtitle directly depends on the duration of the utterance. In this work, we present MuST-Cinema, a multilingual speech translation corpus built from TED subtitles. The corpus is comprised of (audio, transcription, translation) triplets. Subtitle breaks are preserved by inserting special symbols. We show that the corpus can be used to build models that efficiently segment sentences into subtitles and propose a method for annotating existing subtitling corpora with subtitle breaks, conforming to the constraint of length. |
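A minimal sketch of the kind of length-constrained segmentation this corpus supports; the break symbols <eol> (line break) and <eob> (subtitle block break), the 42-character line limit, and the greedy packing are assumptions in this toy example rather than the corpus' actual annotation pipeline.

```python
MAX_CHARS_PER_LINE = 42   # a common subtitling constraint, assumed here
LINES_PER_BLOCK = 2

def insert_breaks(sentence):
    """Greedily pack words into lines, emitting <eol> inside a block and <eob> between blocks."""
    out, line_len, lines_in_block = [], 0, 0
    for word in sentence.split():
        if line_len and line_len + 1 + len(word) > MAX_CHARS_PER_LINE:
            lines_in_block += 1
            if lines_in_block == LINES_PER_BLOCK:
                out.append("<eob>")
                lines_in_block = 0
            else:
                out.append("<eol>")
            line_len = 0
        out.append(word)
        line_len += len(word) + (1 if line_len else 0)
    out.append("<eob>")
    return " ".join(out)

print(insert_breaks(
    "Growing needs in localising audiovisual content in multiple languages "
    "call for automatic solutions for subtitling."))
```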
2206.15349 | Lingfei Song | Lingfei Song, Hua Huang | Revisiting Competitive Coding Approach for Palmprint Recognition: A
Linear Discriminant Analysis Perspective | 12 pages, 14 figures | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | The competitive Coding approach (CompCode) is one of the most promising
methods for palmprint recognition. Due to its high performance and simple
formulation, it has been continuously studied for many years. However, although
numerous variations of CompCode have been proposed, a detailed analysis of the
method is still absent. In this paper, we provide a detailed analysis of
CompCode from the perspective of linear discriminant analysis (LDA) for the
first time. A non-trivial sufficient condition under which the CompCode is
optimal in the sense of Fisher's criterion is presented. Based on our analysis,
we examined the statistics of palmprints and concluded that CompCode deviates
from the optimal condition. To mitigate the deviation, we propose a new method
called Class-Specific CompCode that improves CompCode by excluding
non-palm-line areas from matching. A nonlinear mapping of the competitive code
is also applied in this method to further enhance accuracy. Experiments on two
public databases demonstrate the effectiveness of the proposed method.
| [
{
"created": "Thu, 30 Jun 2022 15:18:39 GMT",
"version": "v1"
}
] | 2022-07-01 | [
[
"Song",
"Lingfei",
""
],
[
"Huang",
"Hua",
""
]
] | The competitive Coding approach (CompCode) is one of the most promising methods for palmprint recognition. Due to its high performance and simple formulation, it has been continuously studied for many years. However, although numerous variations of CompCode have been proposed, a detailed analysis of the method is still absent. In this paper, we provide a detailed analysis of CompCode from the perspective of linear discriminant analysis (LDA) for the first time. A non-trivial sufficient condition under which the CompCode is optimal in the sense of Fisher's criterion is presented. Based on our analysis, we examined the statistics of palmprints and concluded that CompCode deviates from the optimal condition. To mitigate the deviation, we propose a new method called Class-Specific CompCode that improves CompCode by excluding non-palm-line areas from matching. A nonlinear mapping of the competitive code is also applied in this method to further enhance accuracy. Experiments on two public databases demonstrate the effectiveness of the proposed method. |
2407.13530 | Michael Pantic | Isar Meijer, Michael Pantic, Helen Oleynikova, Roland Siegwart | Pushing the Limits of Reactive Planning: Learning to Escape Local Minima | null | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | When does a robot planner need a map? Reactive methods that use only the
robot's current sensor data and local information are fast and flexible, but
prone to getting stuck in local minima. Is there a middle-ground between fully
reactive methods and map-based path planners? In this paper, we investigate
feed forward and recurrent networks to augment a purely reactive sensor-based
planner, which should give the robot geometric intuition about how to escape
local minima. We train on a large number of extremely cluttered worlds
auto-generated from primitive shapes, and show that our system zero-shot
transfers to real 3D man-made environments, and can handle up to 30% sensor
noise without degradation of performance. We also offer a discussion of what
role network memory plays in our final system, and what insights can be drawn
about the nature of reactive vs. map-based navigation.
| [
{
"created": "Thu, 18 Jul 2024 14:04:01 GMT",
"version": "v1"
}
] | 2024-07-19 | [
[
"Meijer",
"Isar",
""
],
[
"Pantic",
"Michael",
""
],
[
"Oleynikova",
"Helen",
""
],
[
"Siegwart",
"Roland",
""
]
] | When does a robot planner need a map? Reactive methods that use only the robot's current sensor data and local information are fast and flexible, but prone to getting stuck in local minima. Is there a middle-ground between fully reactive methods and map-based path planners? In this paper, we investigate feed forward and recurrent networks to augment a purely reactive sensor-based planner, which should give the robot geometric intuition about how to escape local minima. We train on a large number of extremely cluttered worlds auto-generated from primitive shapes, and show that our system zero-shot transfers to real 3D man-made environments, and can handle up to 30% sensor noise without degeneration of performance. We also offer a discussion of what role network memory plays in our final system, and what insights can be drawn about the nature of reactive vs. map-based navigation. |
1510.05891 | Muhammad Jawaherul Alam | Md. Jawaherul Alam, Franz J. Brandenburg and Stephen G. Kobourov | On the Book Thickness of 1-Planar Graphs | null | null | null | null | cs.CG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In a book embedding of a graph G, the vertices of G are placed in order along
a straight line called the spine of the book, and the edges of G are drawn on a set
of half-planes, called the pages of the book, such that two edges drawn on a
page do not cross each other. The minimum number of pages in which a graph can
be embedded is called the book-thickness or the page-number of the graph. It is
known that every planar graph has a book embedding on at most four pages. Here
we investigate the book-embeddings of 1-planar graphs. A graph is 1-planar if
it can be drawn in the plane such that each edge is crossed at most once. We
prove that every 1-planar graph has a book embedding on at most 16 pages and
every 3-connected 1-planar graph has a book embedding on at most 12 pages. The
drawings can be computed in linear time from any given 1-planar embedding of
the graph.
| [
{
"created": "Tue, 20 Oct 2015 13:40:33 GMT",
"version": "v1"
}
] | 2015-10-21 | [
[
"Alam",
"Md. Jawaherul",
""
],
[
"Brandenburg",
"Franz J.",
""
],
[
"Kobourov",
"Stephen G.",
""
]
] | In a book embedding of a graph G, the vertices of G are placed in order along a straight-line called spine of the book, and the edges of G are drawn on a set of half-planes, called the pages of the book, such that two edges drawn on a page do not cross each other. The minimum number of pages in which a graph can be embedded is called the book-thickness or the page-number of the graph. It is known that every planar graph has a book embedding on at most four pages. Here we investigate the book-embeddings of 1-planar graphs. A graph is 1-planar if it can be drawn in the plane such that each edge is crossed at most once. We prove that every 1-planar graph has a book embedding on at most 16 pages and every 3-connected 1-planar graph has a book embedding on at most 12 pages. The drawings can be computed in linear time from any given 1-planar embedding of the graph. |
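The combinatorial condition behind a book embedding is easy to state in code: with the vertices fixed in spine order, two edges can share a page exactly when they do not interleave. Below is a small first-fit sketch of our own (not the paper's constructive proof) that greedily assigns edges to pages under this rule, so for a given spine order it yields an upper bound on the number of pages needed.

```python
def crossing(e, f):
    """Edges (a,b) and (c,d), taken with a<b and c<d, cross on a page iff they interleave."""
    (a, b), (c, d) = sorted(e), sorted(f)
    return (a < c < b < d) or (c < a < d < b)

def first_fit_pages(edges):
    pages = []                      # each page holds mutually non-crossing edges
    for e in sorted(edges, key=lambda x: (min(x), -max(x))):
        for page in pages:
            if all(not crossing(e, f) for f in page):
                page.append(e)
                break
        else:
            pages.append([e])
    return pages

# K4 with spine order 0,1,2,3: two pages suffice under this vertex order.
edges_k4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
pages = first_fit_pages(edges_k4)
print("pages used:", len(pages))
for i, p in enumerate(pages, 1):
    print(f"page {i}: {p}")
```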
1812.00600 | Abhinav Bhatia | Abhinav Bhatia, Pradeep Varakantham and Akshat Kumar | Resource Constrained Deep Reinforcement Learning | null | Proceedings of the International Conference on Automated Planning
and Scheduling. 29, 1 (Jul. 2019), 610-620 | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In urban environments, supply resources have to be constantly matched to the
"right" locations (where customer demand is present) so as to improve quality
of life. For instance, ambulances have to be matched to base stations regularly
so as to reduce response time for emergency incidents in EMS (Emergency
Management Systems); vehicles (cars, bikes, scooters etc.) have to be matched
to docking stations so as to reduce lost demand in shared mobility systems.
Such problem domains are challenging owing to the demand uncertainty,
combinatorial action spaces (due to allocation) and constraints on allocation
of resources (e.g., total resources, minimum and maximum number of resources at
locations and regions).
Existing systems typically employ myopic and greedy optimization approaches
to optimize allocation of supply resources to locations. Such approaches
typically are unable to handle surges or variances in demand patterns well.
Recent research has demonstrated the ability of Deep RL methods in adapting
well to highly uncertain environments. However, existing Deep RL methods are
unable to handle combinatorial action spaces and constraints on allocation of
resources. To that end, we have developed three approaches on top of the well
known actor critic approach, DDPG (Deep Deterministic Policy Gradient) that are
able to handle constraints on resource allocation. More importantly, we
demonstrate that they are able to outperform leading approaches on simulators
validated on semi-real and real data sets.
| [
{
"created": "Mon, 3 Dec 2018 08:34:36 GMT",
"version": "v1"
}
] | 2021-02-25 | [
[
"Bhatia",
"Abhinav",
""
],
[
"Varakantham",
"Pradeep",
""
],
[
"Kumar",
"Akshat",
""
]
] | In urban environments, supply resources have to be constantly matched to the "right" locations (where customer demand is present) so as to improve quality of life. For instance, ambulances have to be matched to base stations regularly so as to reduce response time for emergency incidents in EMS (Emergency Management Systems); vehicles (cars, bikes, scooters etc.) have to be matched to docking stations so as to reduce lost demand in shared mobility systems. Such problem domains are challenging owing to the demand uncertainty, combinatorial action spaces (due to allocation) and constraints on allocation of resources (e.g., total resources, minimum and maximum number of resources at locations and regions). Existing systems typically employ myopic and greedy optimization approaches to optimize allocation of supply resources to locations. Such approaches typically are unable to handle surges or variances in demand patterns well. Recent research has demonstrated the ability of Deep RL methods in adapting well to highly uncertain environments. However, existing Deep RL methods are unable to handle combinatorial action spaces and constraints on allocation of resources. To that end, we have developed three approaches on top of the well known actor critic approach, DDPG (Deep Deterministic Policy Gradient) that are able to handle constraints on resource allocation. More importantly, we demonstrate that they are able to outperform leading approaches on simulators validated on semi-real and real data sets. |
1607.07514 | Soroush Vosoughi Dr | Soroush Vosoughi, Prashanth Vijayaraghavan and Deb Roy | Tweet2Vec: Learning Tweet Embeddings Using Character-level CNN-LSTM
Encoder-Decoder | SIGIR 2016, July 17-21, 2016, Pisa. Proceedings of SIGIR 2016. Pisa,
Italy (2016) | null | 10.1145/2911451.2914762 | null | cs.CL cs.AI cs.NE cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present Tweet2Vec, a novel method for generating general-purpose vector
representation of tweets. The model learns tweet embeddings using
a character-level CNN-LSTM encoder-decoder. We trained our model on 3 million
randomly selected English-language tweets. The model was evaluated using two
methods: tweet semantic similarity and tweet sentiment categorization,
outperforming the previous state-of-the-art in both tasks. The evaluations
demonstrate the power of the tweet embeddings generated by our model for
various tweet categorization tasks. The vector representations generated by our
model are generic, and hence can be applied to a variety of tasks. Though the
model presented in this paper is trained on English-language tweets, the method
presented can be used to learn tweet embeddings for different languages.
| [
{
"created": "Tue, 26 Jul 2016 00:58:14 GMT",
"version": "v1"
}
] | 2016-07-27 | [
[
"Vosoughi",
"Soroush",
""
],
[
"Vijayaraghavan",
"Prashanth",
""
],
[
"Roy",
"Deb",
""
]
] | We present Tweet2Vec, a novel method for generating general-purpose vector representation of tweets. The model learns tweet embeddings using character-level CNN-LSTM encoder-decoder. We trained our model on 3 million, randomly selected English-language tweets. The model was evaluated using two methods: tweet semantic similarity and tweet sentiment categorization, outperforming the previous state-of-the-art in both tasks. The evaluations demonstrate the power of the tweet embeddings generated by our model for various tweet categorization tasks. The vector representations generated by our model are generic, and hence can be applied to a variety of tasks. Though the model presented in this paper is trained on English-language tweets, the method presented can be used to learn tweet embeddings for different languages. |
1807.01961 | Ondrej Bajgar | Ondrej Bajgar, Rudolf Kadlec, Jan Kleindienst | A Boo(n) for Evaluating Architecture Performance | ICML 2018 | Proceedings of the 35th International Conference on Machine
Learning (ICML 2018). Volume 80 of the Proceedings of Machine Learning
Research (PMLR) | null | null | cs.LG stat.ML | http://creativecommons.org/licenses/by/4.0/ | We point out important problems with the common practice of using the best
single model performance for comparing deep learning architectures, and we
propose a method that corrects these flaws. Each time a model is trained, one
gets a different result due to random factors in the training process, which
include random parameter initialization and random data shuffling. Reporting
the best single model performance does not appropriately address this
stochasticity. We propose a normalized expected best-out-of-$n$ performance
($\text{Boo}_n$) as a way to correct these problems.
| [
{
"created": "Thu, 5 Jul 2018 12:33:31 GMT",
"version": "v1"
},
{
"created": "Mon, 23 Jul 2018 11:14:20 GMT",
"version": "v2"
}
] | 2018-07-24 | [
[
"Bajgar",
"Ondrej",
""
],
[
"Kadlec",
"Rudolf",
""
],
[
"Kleindienst",
"Jan",
""
]
] | We point out important problems with the common practice of using the best single model performance for comparing deep learning architectures, and we propose a method that corrects these flaws. Each time a model is trained, one gets a different result due to random factors in the training process, which include random parameter initialization and random data shuffling. Reporting the best single model performance does not appropriately address this stochasticity. We propose a normalized expected best-out-of-$n$ performance ($\text{Boo}_n$) as a way to correct these problems. |
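One way to estimate an expected best-out-of-$n$ score from $m$ completed runs is to take the expectation of the maximum of $n$ draws from the empirical distribution of results; this sketch reflects our reading of the quantity described above (without the normalization step), and the accuracy values are made up.

```python
def expected_best_of_n(results, n):
    """Expected maximum of n i.i.d. draws from the empirical distribution of `results`."""
    xs = sorted(results)
    m = len(xs)
    expectation = 0.0
    for i, x in enumerate(xs, start=1):
        # Probability mass the maximum of n draws places on the i-th smallest result.
        weight = (i / m) ** n - ((i - 1) / m) ** n
        expectation += weight * x
    return expectation

# Hypothetical validation accuracies from 10 training runs of the same architecture.
runs = [0.871, 0.884, 0.862, 0.879, 0.890, 0.868, 0.875, 0.881, 0.866, 0.873]

for n in (1, 3, 5):
    print(f"expected best-of-{n}: {expected_best_of_n(runs, n):.4f}")
```

Note that with n = 1 the estimate reduces to the mean over runs, which is the sanity check one would expect from such a statistic.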
1901.08406 | Bharat Gaind | Anusha Holla, Bharat Gaind, Vikas Reddy Katta, Abhishek Kundu, S
Kamalesh | Hybrid NER System for Multi-Source Offer Feeds | Published in the Global Journal of Engineering Science and Researches
(ISSN 2348 - 8034, Pg. 69-77) after getting accepted in the International
Conference on Recent Trends In Computational Engineering and Technologies
(ICRTCET'18), May 17-18, 2018, Bengaluru, India. Journal Link -
http://www.gjesr.com/ICRTCET-18.html | Global Journal of Engineering Science and Researches (ICRTCET-18)
(2019) 69-77 | null | null | cs.IR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Data available across the web is largely unstructured. Offers published by
multiple sources like banks, digital wallets, merchants, etc., are among the
most accessed advertising data in today's world. This data gets accessed by
millions of people on a daily basis and is easily interpreted by humans, but
since it is largely unstructured and diverse, using an algorithmic way to
extract meaningful information out of these offers is hard. Identifying the
essential offer entities (for instance, its amount, the product on which the
offer is applicable, the merchant providing the offer, etc.) from these offers
plays a vital role in targeting the right customers to improve sales. This work
presents and evaluates various existing Named Entity Recognizer (NER) models
which can identify the required entities from offer feeds. We also propose a
novel Hybrid NER model constructed by two-level stacking of Conditional Random
Field, Bidirectional LSTM and Spacy models at the first level and an SVM
classifier at the second. The proposed hybrid model has been tested on offer
feeds collected from multiple sources and has shown better performance in the
offer domain when compared to the existing models.
| [
{
"created": "Thu, 24 Jan 2019 13:53:04 GMT",
"version": "v1"
},
{
"created": "Tue, 11 Jun 2019 11:07:50 GMT",
"version": "v2"
}
] | 2019-06-12 | [
[
"Holla",
"Anusha",
""
],
[
"Gaind",
"Bharat",
""
],
[
"Katta",
"Vikas Reddy",
""
],
[
"Kundu",
"Abhishek",
""
],
[
"Kamalesh",
"S",
""
]
] | Data available across the web is largely unstructured. Offers published by multiple sources like banks, digital wallets, merchants, etc., are one of the most accessed advertising data in today's world. This data gets accessed by millions of people on a daily basis and is easily interpreted by humans, but since it is largely unstructured and diverse, using an algorithmic way to extract meaningful information out of these offers is hard. Identifying the essential offer entities (for instance, its amount, the product on which the offer is applicable, the merchant providing the offer, etc.) from these offers plays a vital role in targeting the right customers to improve sales. This work presents and evaluates various existing Named Entity Recognizer (NER) models which can identify the required entities from offer feeds. We also propose a novel Hybrid NER model constructed by two-level stacking of Conditional Random Field, Bidirectional LSTM and Spacy models at the first level and an SVM classifier at the second. The proposed hybrid model has been tested on offer feeds collected from multiple sources and has shown better performance in the offer domain when compared to the existing models. |
2204.06607 | Wojciech Zielonka | Wojciech Zielonka and Timo Bolkart and Justus Thies | Towards Metrical Reconstruction of Human Faces | Video: https://youtu.be/vzzEbvv08VA Website:
https://zielon.github.io/mica/ Accepted to ECCV 2022 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Face reconstruction and tracking is a building block of numerous applications
in AR/VR, human-machine interaction, as well as medical applications. Most of
these applications rely on a metrically correct prediction of the shape,
especially, when the reconstructed subject is put into a metrical context
(i.e., when there is a reference object of known size). A metrical
reconstruction is also needed for any application that measures distances and
dimensions of the subject (e.g., to virtually fit a glasses frame).
State-of-the-art methods for face reconstruction from a single image are
trained on large 2D image datasets in a self-supervised fashion. However, due
to the nature of a perspective projection they are not able to reconstruct the
actual face dimensions, and even predicting the average human face outperforms
some of these methods in a metrical sense. To learn the actual shape of a face,
we argue for a supervised training scheme. Since there exists no large-scale 3D
dataset for this task, we annotated and unified small- and medium-scale
databases. The resulting unified dataset is still a medium-scale dataset with
more than 2k identities and training purely on it would lead to overfitting. To
this end, we take advantage of a face recognition network pretrained on a
large-scale 2D image dataset, which provides distinct features for different
faces and is robust to expression, illumination, and camera changes. Using
these features, we train our face shape estimator in a supervised fashion,
inheriting the robustness and generalization of the face recognition network.
Our method, which we call MICA (MetrIC fAce), outperforms the state-of-the-art
reconstruction methods by a large margin, both on current non-metric benchmarks
as well as on our metric benchmarks (15% and 24% lower average error on NoW,
respectively).
| [
{
"created": "Wed, 13 Apr 2022 18:57:33 GMT",
"version": "v1"
},
{
"created": "Wed, 19 Oct 2022 17:29:53 GMT",
"version": "v2"
}
] | 2022-10-20 | [
[
"Zielonka",
"Wojciech",
""
],
[
"Bolkart",
"Timo",
""
],
[
"Thies",
"Justus",
""
]
] | Face reconstruction and tracking is a building block of numerous applications in AR/VR, human-machine interaction, as well as medical applications. Most of these applications rely on a metrically correct prediction of the shape, especially, when the reconstructed subject is put into a metrical context (i.e., when there is a reference object of known size). A metrical reconstruction is also needed for any application that measures distances and dimensions of the subject (e.g., to virtually fit a glasses frame). State-of-the-art methods for face reconstruction from a single image are trained on large 2D image datasets in a self-supervised fashion. However, due to the nature of a perspective projection they are not able to reconstruct the actual face dimensions, and even predicting the average human face outperforms some of these methods in a metrical sense. To learn the actual shape of a face, we argue for a supervised training scheme. Since there exists no large-scale 3D dataset for this task, we annotated and unified small- and medium-scale databases. The resulting unified dataset is still a medium-scale dataset with more than 2k identities and training purely on it would lead to overfitting. To this end, we take advantage of a face recognition network pretrained on a large-scale 2D image dataset, which provides distinct features for different faces and is robust to expression, illumination, and camera changes. Using these features, we train our face shape estimator in a supervised fashion, inheriting the robustness and generalization of the face recognition network. Our method, which we call MICA (MetrIC fAce), outperforms the state-of-the-art reconstruction methods by a large margin, both on current non-metric benchmarks as well as on our metric benchmarks (15% and 24% lower average error on NoW, respectively). |
2406.03772 | Yang Hou | Yang Hou, Zhenghua Li | Character-Level Chinese Dependency Parsing via Modeling Latent
Intra-Word Structure | Findings of ACL 2024 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Revealing the syntactic structure of sentences in Chinese poses significant
challenges for word-level parsers due to the absence of clear word boundaries.
To facilitate a transition from word-level to character-level Chinese
dependency parsing, this paper proposes modeling latent internal structures
within words. In this way, each word-level dependency tree is interpreted as a
forest of character-level trees. A constrained Eisner algorithm is implemented
to ensure the compatibility of character-level trees, guaranteeing a single
root for intra-word structures and establishing inter-word dependencies between
these roots. Experiments on Chinese treebanks demonstrate the superiority of
our method over both the pipeline framework and previous joint models. A
detailed analysis reveals that a coarse-to-fine parsing strategy empowers the
model to predict more linguistically plausible intra-word structures.
| [
{
"created": "Thu, 6 Jun 2024 06:23:02 GMT",
"version": "v1"
}
] | 2024-06-07 | [
[
"Hou",
"Yang",
""
],
[
"Li",
"Zhenghua",
""
]
] | Revealing the syntactic structure of sentences in Chinese poses significant challenges for word-level parsers due to the absence of clear word boundaries. To facilitate a transition from word-level to character-level Chinese dependency parsing, this paper proposes modeling latent internal structures within words. In this way, each word-level dependency tree is interpreted as a forest of character-level trees. A constrained Eisner algorithm is implemented to ensure the compatibility of character-level trees, guaranteeing a single root for intra-word structures and establishing inter-word dependencies between these roots. Experiments on Chinese treebanks demonstrate the superiority of our method over both the pipeline framework and previous joint models. A detailed analysis reveals that a coarse-to-fine parsing strategy empowers the model to predict more linguistically plausible intra-word structures. |
2406.02958 | Charlie Hou | Charlie Hou, Akshat Shrivastava, Hongyuan Zhan, Rylan Conway, Trang
Le, Adithya Sagar, Giulia Fanti, Daniel Lazar | PrE-Text: Training Language Models on Private Federated Data in the Age
of LLMs | ICML 2024 (Oral) | null | null | null | cs.LG cs.AI cs.CL cs.CR cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | On-device training is currently the most common approach for training machine
learning (ML) models on private, distributed user data. Despite this, on-device
training has several drawbacks: (1) most user devices are too small to train
large models on-device, (2) on-device training is communication- and
computation-intensive, and (3) on-device training can be difficult to debug and
deploy. To address these problems, we propose Private Evolution-Text
(PrE-Text), a method for generating differentially private (DP) synthetic
textual data. First, we show that across multiple datasets, training small
models (models that fit on user devices) with PrE-Text synthetic data
outperforms small models trained on-device under practical privacy regimes
($\epsilon=1.29$, $\epsilon=7.58$). We achieve these results while using
9$\times$ fewer rounds, 6$\times$ less client computation per round, and
100$\times$ less communication per round. Second, finetuning large models on
PrE-Text's DP synthetic data improves large language model (LLM) performance on
private data across the same range of privacy budgets. Altogether, these
results suggest that training on DP synthetic data can be a better option than
training a model on-device on private distributed data. Code is available at
https://github.com/houcharlie/PrE-Text.
| [
{
"created": "Wed, 5 Jun 2024 05:27:02 GMT",
"version": "v1"
},
{
"created": "Wed, 17 Jul 2024 18:09:22 GMT",
"version": "v2"
}
] | 2024-07-19 | [
[
"Hou",
"Charlie",
""
],
[
"Shrivastava",
"Akshat",
""
],
[
"Zhan",
"Hongyuan",
""
],
[
"Conway",
"Rylan",
""
],
[
"Le",
"Trang",
""
],
[
"Sagar",
"Adithya",
""
],
[
"Fanti",
"Giulia",
""
],
[
"Lazar",
"Daniel",
""
]
] | On-device training is currently the most common approach for training machine learning (ML) models on private, distributed user data. Despite this, on-device training has several drawbacks: (1) most user devices are too small to train large models on-device, (2) on-device training is communication- and computation-intensive, and (3) on-device training can be difficult to debug and deploy. To address these problems, we propose Private Evolution-Text (PrE-Text), a method for generating differentially private (DP) synthetic textual data. First, we show that across multiple datasets, training small models (models that fit on user devices) with PrE-Text synthetic data outperforms small models trained on-device under practical privacy regimes ($\epsilon=1.29$, $\epsilon=7.58$). We achieve these results while using 9$\times$ fewer rounds, 6$\times$ less client computation per round, and 100$\times$ less communication per round. Second, finetuning large models on PrE-Text's DP synthetic data improves large language model (LLM) performance on private data across the same range of privacy budgets. Altogether, these results suggest that training on DP synthetic data can be a better option than training a model on-device on private distributed data. Code is available at https://github.com/houcharlie/PrE-Text. |
2305.13893 | Jasenka Dizdarevic | Jasenka Dizdarevic, Marc Michalke and Admela Jukan | Engineering and Experimentally Benchmarking Open Source MQTT Broker
Implementations | This paper is uploaded here for research community, thus it is for
non-commercial purposes | null | null | null | cs.NI | http://creativecommons.org/licenses/by/4.0/ | The Message Queuing Telemetry Transport (MQTT) protocol is one of the most
widely used IoT protocol solutions. In this work, we are especially interested
in open-source MQTT Broker implementations (such as Mosquitto, EMQX, RabbitMQ,
VerneMQ, and HiveMQ). To this end, we engineer a network testbed to
experimentally benchmark the performance of these implementations in an edge
computing context with constrained devices. In more detail, we engineer an
automated deployment and orchestration of the containerized MQTT broker
implementations, with support for deployment across either moderately powerful
AMD64 devices, or more resource constrained ARM64 devices. The proposed MQTT
implementations are evaluated in terms of overhead response time and different
payload sizes. Results showed that the hardware platform used, the message
size, and the network parameters (latency, packet loss and jitter) all have
a significant impact on the performance differences between the brokers. All
results, software tools and code are fully reproducible and free and open
source.
| [
{
"created": "Tue, 23 May 2023 10:20:40 GMT",
"version": "v1"
}
] | 2023-05-24 | [
[
"Dizdarevic",
"Jasenka",
""
],
[
"Michalke",
"Marc",
""
],
[
"Jukan",
"Admela",
""
]
] | The Message Queuing Telemetry Transport (MQTT) protocol is one of the most widely used IoT protocol solutions. In this work, we are especially interested in open-source MQTT Broker implementations (such as Mosquitto, EMQX, RabbitMQ, VerneMQ, and HiveMQ). To this end, we engineer a network testbed to experimentally benchmark the performance of these implementations in an edge computing context with constrained devices. In more detail, we engineer an automated deployment and orchestration of the containerized MQTT broker implementations, with support for deployment across either moderately powerful AMD64 devices, or more resource constrained ARM64 devices. The proposed MQTT implementations are evaluated in terms of overhead response time and different payload sizes. Results showed that the hardware platform used as well as the message size, and the network parameters (latency, packet loss and jitter) have a significant impact on the performance differences between the brokers. All results, software tools and code are fully reproducible and free and open source. |
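A minimal sketch of the kind of round-trip latency probe the MQTT benchmarking record above describes, assuming the paho-mqtt 1.x client API and a broker already reachable on localhost; the topic name, payload sizes, and sample count are illustrative choices, not values from the paper.

```python
# Round-trip latency probe for an MQTT broker (illustrative, paho-mqtt 1.x style API).
import time
import statistics
import paho.mqtt.client as mqtt

BROKER_HOST = "localhost"          # hypothetical broker address
TOPIC = "bench/latency"            # hypothetical topic
PAYLOAD_SIZES = [64, 1024, 16384]  # bytes, illustrative
SAMPLES = 100

latencies = []

def on_message(client, userdata, msg):
    # The payload carries the send timestamp in its first 32 bytes.
    sent_at = float(msg.payload[:32].decode().strip())
    latencies.append(time.time() - sent_at)

client = mqtt.Client()
client.on_message = on_message
client.connect(BROKER_HOST, 1883)
client.subscribe(TOPIC, qos=1)
client.loop_start()

for size in PAYLOAD_SIZES:
    latencies.clear()
    for _ in range(SAMPLES):
        stamp = f"{time.time():<32}".encode()
        padding = b"x" * max(0, size - len(stamp))
        client.publish(TOPIC, stamp + padding, qos=1)
        time.sleep(0.01)           # pace the publisher
    time.sleep(1.0)                # let the remaining messages arrive
    if latencies:
        print(f"{size:>6} B: median RTT {statistics.median(latencies) * 1000:.2f} ms")

client.loop_stop()
client.disconnect()
```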
1305.7053 | Kaihua Zhang | Kaihua Zhang, Lei Zhang, Kin-Man Lam, and David Zhang | A Local Active Contour Model for Image Segmentation with Intensity
Inhomogeneity | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/3.0/ | A novel locally statistical active contour model (ACM) for image segmentation
in the presence of intensity inhomogeneity is presented in this paper. The
inhomogeneous objects are modeled as Gaussian distributions of different means
and variances, and a moving window is used to map the original image into
another domain, where the intensity distributions of inhomogeneous objects are
still Gaussian but are better separated. The means of the Gaussian
distributions in the transformed domain can be adaptively estimated by
multiplying a bias field with the original signal within the window. A
statistical energy functional is then defined for each local region, which
combines the bias field, the level set function, and the constant approximating
the true signal of the corresponding object. Experiments on both synthetic and
real images demonstrate the superiority of our proposed algorithm to
state-of-the-art and representative methods.
| [
{
"created": "Thu, 30 May 2013 10:14:14 GMT",
"version": "v1"
}
] | 2013-05-31 | [
[
"Zhang",
"Kaihua",
""
],
[
"Zhang",
"Lei",
""
],
[
"Lam",
"Kin-Man",
""
],
[
"Zhang",
"David",
""
]
] | A novel locally statistical active contour model (ACM) for image segmentation in the presence of intensity inhomogeneity is presented in this paper. The inhomogeneous objects are modeled as Gaussian distributions of different means and variances, and a moving window is used to map the original image into another domain, where the intensity distributions of inhomogeneous objects are still Gaussian but are better separated. The means of the Gaussian distributions in the transformed domain can be adaptively estimated by multiplying a bias field with the original signal within the window. A statistical energy functional is then defined for each local region, which combines the bias field, the level set function, and the constant approximating the true signal of the corresponding object. Experiments on both synthetic and real images demonstrate the superiority of our proposed algorithm to state-of-the-art and representative methods. |
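A worked form of the kind of locally weighted Gaussian-fitting energy the active contour record above describes; the paper's exact functional may differ, so the expression below is only a representative sketch in which $I$ is the image, $b$ the bias field, $c_i$ the constants approximating the true signal, $\sigma_i$ the local standard deviations, $\phi$ the level set function, and $K_\rho$ the moving window (kernel) of radius $\rho$:

$$
E(\phi, b, c, \sigma) \;=\; \sum_{i=1}^{2} \int_{\Omega} \int_{\Omega} K_\rho(x-y)\,
\Big( \log \sigma_i(x) + \frac{\big(I(y) - b(x)\,c_i(x)\big)^2}{2\,\sigma_i(x)^2} \Big)\,
M_i\big(\phi(y)\big)\, \mathrm{d}y\, \mathrm{d}x ,
$$

where $M_1(\phi) = H(\phi)$ and $M_2(\phi) = 1 - H(\phi)$ select the regions inside and outside the contour through the Heaviside function $H$.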
1308.5125 | Derek Greene | Derek Greene and P\'adraig Cunningham | Discovering Latent Patterns from the Analysis of User-Curated Movie
Lists | 13 pages | null | null | null | cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | User content curation is becoming an important source of preference data, as
well as providing information regarding the items being curated. One popular
approach involves the creation of lists. On Twitter, these lists might contain
accounts relevant to a particular topic, whereas on a community site such as
the Internet Movie Database (IMDb), this might take the form of lists of movies
sharing common characteristics. While list curation involves substantial
combined effort on the part of users, researchers have rarely looked at mining
the outputs of this kind of crowdsourcing activity. Here we study a large
collection of movie lists from IMDb. We apply network analysis methods to a
graph that reflects the degree to which pairs of movies are "co-listed", that
is, assigned to the same lists. This allows us to uncover a more nuanced
grouping of movies that goes beyond categorisation schemes based on attributes
such as genre or director.
| [
{
"created": "Fri, 23 Aug 2013 13:44:28 GMT",
"version": "v1"
}
] | 2013-08-26 | [
[
"Greene",
"Derek",
""
],
[
"Cunningham",
"Pádraig",
""
]
] | User content curation is becoming an important source of preference data, as well as providing information regarding the items being curated. One popular approach involves the creation of lists. On Twitter, these lists might contain accounts relevant to a particular topic, whereas on a community site such as the Internet Movie Database (IMDb), this might take the form of lists of movies sharing common characteristics. While list curation involves substantial combined effort on the part of users, researchers have rarely looked at mining the outputs of this kind of crowdsourcing activity. Here we study a large collection of movie lists from IMDb. We apply network analysis methods to a graph that reflects the degree to which pairs of movies are "co-listed", that is, assigned to the same lists. This allows us to uncover a more nuanced grouping of movies that goes beyond categorisation schemes based on attributes such as genre or director. |
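A small sketch of the "co-listed" graph construction described in the movie-list record above, assuming the curated lists are already available as an iterable of movie-ID collections; the function name, the toy lists, and the thresholding step are illustrative, not taken from the paper.

```python
# Build a weighted "co-listed" graph: edge weight = number of lists containing both movies.
from collections import Counter
from itertools import combinations

def build_colisted_graph(lists, min_weight=2):
    """lists: iterable of collections of movie IDs (one collection per user-curated list)."""
    pair_counts = Counter()
    for movies in lists:
        for a, b in combinations(sorted(set(movies)), 2):
            pair_counts[(a, b)] += 1
    # Keep only pairs co-listed at least `min_weight` times to suppress noise.
    return {pair: w for pair, w in pair_counts.items() if w >= min_weight}

# Toy usage with hypothetical lists.
example_lists = [
    {"Alien", "Blade Runner", "The Thing"},
    {"Alien", "Blade Runner", "Solaris"},
    {"The Thing", "Solaris"},
]
print(build_colisted_graph(example_lists, min_weight=2))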
2310.09733 | Aswadh Khumar Gurusamy | Barath Kumar JK and Aswadh Khumar G S | Evaluating Intelligent Algorithms for Gait Phase Classification in Lower
Limb Robotic Systems | 24 Pages,28 figures | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate and rapid detection of gait phases is of utmost importance in
achieving optimal performance of powered lower-limb prostheses and
exoskeletons. With the increasing versatility and complexity of these robotic
systems, there is a growing need to enhance the performance of gait detection
algorithms. The development of reliable and functional gait detection
algorithms holds the potential to enhance precision, stability, and safety in
prosthetic devices and other rehabilitation technologies. In this systematic
review, we delve into the extensive body of research and development in the
domain of gait event detection methods, with a specific focus on their
application to prosthetic devices. Our review critically assesses various
proposed methods, aiming to identify the most effective approaches for gait
phase classification in lower limb robotic systems. Through a comprehensive
comparative analysis, we highlight the strengths and weaknesses of different
algorithms, shedding light on their performance characteristics, applicability,
and potential for further improvements. This comprehensive review was conducted
by screening two databases, namely IEEE and Scopus. The search was limited to
204 papers published from 2010 to 2023. A total of 6 papers that focused on
heuristic, thresholding, and amplitude-zero-crossing techniques were
identified and included in the review. 33.3% of the implemented algorithms used
kinematic parameters such as joint angles, joint linear and angular velocity,
and joint angular acceleration. This study focuses purely on threshold-based
algorithms, and thus papers focusing on other gait phase detection methods were
excluded.
| [
{
"created": "Sun, 15 Oct 2023 04:45:26 GMT",
"version": "v1"
}
] | 2023-10-17 | [
[
"JK",
"Barath Kumar",
""
],
[
"S",
"Aswadh Khumar G",
""
]
] | Accurate and rapid detection of gait phases is of utmost importance in achieving optimal performance of powered lower-limb prostheses and exoskeletons. With the increasing versatility and complexity of these robotic systems, there is a growing need to enhance the performance of gait detection algorithms. The development of reliable and functional gait detection algorithms holds the potential to enhance precision, stability, and safety in prosthetic devices and other rehabilitation technologies. In this systematic review, we delve into the extensive body of research and development in the domain of gait event detection methods, with a specific focus on their application to prosthetic devices. Our review critically assesses various proposed methods, aiming to identify the most effective approaches for gait phase classification in lower limb robotic systems. Through a comprehensive comparative analysis, we highlight the strengths and weaknesses of different algorithms, shedding light on their performance characteristics, applicability, and potential for further improvements. This comprehensive review was conducted by screening two databases, namely IEEE and Scopus. The search was limited to 204 papers published from 2010 to 2023. A total of 6 papers that focused on Heuristic, Thresholding, and Amplitude Zero Crossing involved techniques were identified and included in the review. 33.3% of implemented Algorithms used kinematic parameters such as joint angles, joint linear and angular velocity, and joint angular acceleration. This study purely focuses on threshold-based algorithms and thus paper focusing on other gait phase detection methods were excluded. |
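A toy illustration of the threshold-based gait phase logic the review above surveys, using a single kinematic signal such as a shank angular velocity; the threshold value and the simple stance/swing split are hypothetical and only indicate how such detectors are typically structured.

```python
# Minimal threshold-based gait phase detector over a 1-D kinematic signal.
import numpy as np

def detect_phases(angular_velocity, swing_threshold=1.0):
    """Label a sample 'swing' when the angular velocity magnitude exceeds the
    threshold, otherwise 'stance'. angular_velocity: 1-D array in rad/s."""
    signal = np.asarray(angular_velocity, dtype=float)
    return np.where(np.abs(signal) > swing_threshold, "swing", "stance")

# Synthetic signal: slow oscillation (stance-like) with occasional bursts (swing-like).
t = np.linspace(0.0, 4.0, 400)
omega = 0.3 * np.sin(2 * np.pi * t) + 2.0 * (np.sin(2 * np.pi * t) > 0.95)
phases = detect_phases(omega, swing_threshold=1.0)
print(phases[:10], "...", f"{(phases == 'swing').mean():.1%} swing samples")
```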
2403.19316 | Jiaxuan Lu | Yue Gao, Jiaxuan Lu, Siqi Li, Yipeng Li, Shaoyi Du | Hypergraph-based Multi-View Action Recognition using Event Cameras | Accepted by IEEE Transactions on Pattern Analysis and Machine
Intelligence (TPAMI 2024) | null | 10.1109/TPAMI.2024.3382117 | null | cs.CV cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Action recognition from video data forms a cornerstone with wide-ranging
applications. Single-view action recognition faces limitations due to its
reliance on a single viewpoint. In contrast, multi-view approaches capture
complementary information from various viewpoints for improved accuracy.
Recently, event cameras have emerged as innovative bio-inspired sensors,
leading to advancements in event-based action recognition. However, existing
works predominantly focus on single-view scenarios, leaving a gap in multi-view
event data exploitation, particularly in challenges like information deficit
and semantic misalignment. To bridge this gap, we introduce HyperMV, a
multi-view event-based action recognition framework. HyperMV converts discrete
event data into frame-like representations and extracts view-related features
using a shared convolutional network. By treating segments as vertices and
constructing hyperedges using rule-based and KNN-based strategies, a multi-view
hypergraph neural network that captures relationships across viewpoint and
temporal features is established. The vertex attention hypergraph propagation
is also introduced for enhanced feature fusion. To prompt research in this
area, we present the largest multi-view event-based action dataset
$\text{THU}^{\text{MV-EACT}}\text{-50}$, comprising 50 actions from 6
viewpoints, which surpasses existing datasets by over tenfold. Experimental
results show that HyperMV significantly outperforms baselines in both
cross-subject and cross-view scenarios, and also exceeds the state of the art
in frame-based multi-view action recognition.
| [
{
"created": "Thu, 28 Mar 2024 11:17:00 GMT",
"version": "v1"
}
] | 2024-03-29 | [
[
"Gao",
"Yue",
""
],
[
"Lu",
"Jiaxuan",
""
],
[
"Li",
"Siqi",
""
],
[
"Li",
"Yipeng",
""
],
[
"Du",
"Shaoyi",
""
]
] | Action recognition from video data forms a cornerstone with wide-ranging applications. Single-view action recognition faces limitations due to its reliance on a single viewpoint. In contrast, multi-view approaches capture complementary information from various viewpoints for improved accuracy. Recently, event cameras have emerged as innovative bio-inspired sensors, leading to advancements in event-based action recognition. However, existing works predominantly focus on single-view scenarios, leaving a gap in multi-view event data exploitation, particularly in challenges like information deficit and semantic misalignment. To bridge this gap, we introduce HyperMV, a multi-view event-based action recognition framework. HyperMV converts discrete event data into frame-like representations and extracts view-related features using a shared convolutional network. By treating segments as vertices and constructing hyperedges using rule-based and KNN-based strategies, a multi-view hypergraph neural network that captures relationships across viewpoint and temporal features is established. The vertex attention hypergraph propagation is also introduced for enhanced feature fusion. To prompt research in this area, we present the largest multi-view event-based action dataset $\text{THU}^{\text{MV-EACT}}\text{-50}$, comprising 50 actions from 6 viewpoints, which surpasses existing datasets by over tenfold. Experimental results show that HyperMV significantly outperforms baselines in both cross-subject and cross-view scenarios, and also exceeds the state-of-the-arts in frame-based multi-view action recognition. |
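A sketch of the KNN-based hyperedge construction mentioned in the HyperMV record above: each segment-level feature vector is a vertex, and one hyperedge groups a vertex with its k nearest neighbours in feature space. The feature dimensionality and k are illustrative, and this does not reproduce the paper's rule-based hyperedges or the hypergraph network itself.

```python
# KNN-based hyperedges over segment features (illustrative).
import numpy as np

def knn_hyperedges(features, k=3):
    """features: (num_vertices, dim) array. Returns one hyperedge per vertex,
    each a set containing the vertex and its k nearest neighbours."""
    x = np.asarray(features, dtype=float)
    # Pairwise squared Euclidean distances.
    dists = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(dists, np.inf)            # exclude self from the neighbour search
    neighbours = np.argsort(dists, axis=1)[:, :k]
    return [set([v, *neighbours[v]]) for v in range(len(x))]

rng = np.random.default_rng(0)
feats = rng.normal(size=(12, 8))               # 12 segments, 8-D features (hypothetical)
for edge in knn_hyperedges(feats, k=3)[:3]:
    print(sorted(edge))
```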
2205.05961 | Alexander Brenner | Catharina Marie van Alen, Alexander Brenner, Tobias Warnecke and
Julian Varghese | Subgroup discovery of Parkinson's Disease by utilizing a multi-modal
smart device system | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | In recent years, sensors from smart consumer devices have shown great
diagnostic potential in movement disorders. In this context, data modalities
such as electronic questionnaires, hand movement and voice captures have
successfully captured biomarkers and allowed discrimination between Parkinson's
disease (PD) and healthy controls (HC) or differential diagnosis (DD). However,
to the best of our knowledge, a comprehensive evaluation of assessments with a
multi-modal smart device system has still been lacking. In a prospective study
exploring PD, we used smartwatches and smartphones to collect multi-modal data
from 504 participants, including PD patients, DD and HC. This study aims to
assess the effect of multi-modal vs. single-modal data on PD vs. HC and PD vs.
DD classification, as well as on PD group clustering for subgroup
identification. We were able to show that by combining various modalities,
classification accuracy improved and further PD clusters were discovered.
| [
{
"created": "Thu, 12 May 2022 08:59:57 GMT",
"version": "v1"
}
] | 2022-05-13 | [
[
"van Alen",
"Catharina Marie",
""
],
[
"Brenner",
"Alexander",
""
],
[
"Warnecke",
"Tobias",
""
],
[
"Varghese",
"Julian",
""
]
] | In recent years, sensors from smart consumer devices have shown great diagnostic potential in movement disorders. In this context, data modalities such as electronic questionnaires, hand movement and voice captures have successfully captured biomarkers and allowed discrimination between Parkinson's disease (PD) and healthy controls (HC) or differential diagnosis (DD). However, to the best of our knowledge, a comprehensive evaluation of assessments with a multi-modal smart device system has still been lacking. In a prospective study exploring PD, we used smartwatches and smartphones to collect multi-modal data from 504 participants, including PD patients, DD and HC. This study aims to assess the effect of multi-modal vs. single-modal data on PD vs. HC and PD vs. DD classification, as well as on PD group clustering for subgroup identification. We were able to show that by combining various modalities, classification accuracy improved and further PD clusters were discovered. |
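An illustrative sketch of the multi-modal vs. single-modal comparison described in the Parkinson's record above: features from each modality are concatenated and a standard classifier is cross-validated on each feature set. The synthetic feature arrays, labels, and classifier choice are placeholders, not the study's actual pipeline.

```python
# Compare single-modality vs. concatenated multi-modality features (illustrative).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
labels = rng.integers(0, 2, size=n)                     # 0 = HC, 1 = PD (synthetic)
questionnaire = rng.normal(size=(n, 10)) + labels[:, None] * 0.3
movement = rng.normal(size=(n, 20)) + labels[:, None] * 0.2
voice = rng.normal(size=(n, 15))

modalities = {
    "questionnaire": questionnaire,
    "movement": movement,
    "voice": voice,
    "all combined": np.hstack([questionnaire, movement, voice]),
}
for name, X in modalities.items():
    scores = cross_val_score(RandomForestClassifier(n_estimators=100, random_state=0),
                             X, labels, cv=5)
    print(f"{name:>14}: accuracy {scores.mean():.3f}")
```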
2307.07960 | Yuwei Chuai | Yuwei Chuai, Haoye Tian, Nicolas Pr\"ollochs, Gabriele Lenzini | The Roll-Out of Community Notes Did Not Reduce Engagement With
Misinformation on Twitter | null | null | null | null | cs.SI cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Developing interventions that successfully reduce engagement with
misinformation on social media is challenging. One intervention that has
recently gained great attention is Twitter's Community Notes (previously known
as "Birdwatch"). Community Notes is a crowdsourced fact-checking approach that
allows users to write textual notes to inform others about potentially
misleading posts on Twitter. Yet, empirical evidence regarding its
effectiveness in reducing engagement with misinformation on social media is
missing. In this paper, we perform a large-scale empirical study to analyze
whether the introduction of the Community Notes feature and its roll-out to
users in the U.S. and around the world have reduced engagement with
misinformation on Twitter in terms of retweet volume and likes. We employ
Difference-in-Difference (DiD) models and Regression Discontinuity Design (RDD)
to analyze a comprehensive dataset consisting of all fact-checking notes and
corresponding source tweets since the launch of Community Notes in early 2021.
Although we observe a significant increase in the volume of fact-checks carried
out via Community Notes, particularly for tweets from verified users with many
followers, we find no evidence that the introduction of Community Notes
significantly reduced engagement with misleading tweets on Twitter. Rather, our
findings suggest that Community Notes might be too slow to effectively reduce
engagement with misinformation in the early (and most viral) stage of
diffusion. Our work emphasizes the importance of evaluating fact-checking
interventions in the field and offers important implications to enhance
crowdsourced fact-checking strategies on social media.
| [
{
"created": "Sun, 16 Jul 2023 06:41:01 GMT",
"version": "v1"
}
] | 2023-07-18 | [
[
"Chuai",
"Yuwei",
""
],
[
"Tian",
"Haoye",
""
],
[
"Pröllochs",
"Nicolas",
""
],
[
"Lenzini",
"Gabriele",
""
]
] | Developing interventions that successfully reduce engagement with misinformation on social media is challenging. One intervention that has recently gained great attention is Twitter's Community Notes (previously known as "Birdwatch"). Community Notes is a crowdsourced fact-checking approach that allows users to write textual notes to inform others about potentially misleading posts on Twitter. Yet, empirical evidence regarding its effectiveness in reducing engagement with misinformation on social media is missing. In this paper, we perform a large-scale empirical study to analyze whether the introduction of the Community Notes feature and its roll-out to users in the U. S. and around the world have reduced engagement with misinformation on Twitter in terms of retweet volume and likes. We employ Difference-in-Difference (DiD) models and Regression Discontinuity Design (RDD) to analyze a comprehensive dataset consisting of all fact-checking notes and corresponding source tweets since the launch of Community Notes in early 2021. Although we observe a significant increase in the volume of fact-checks carried out via Community Notes, particularly for tweets from verified users with many followers, we find no evidence that the introduction of Community Notes significantly reduced engagement with misleading tweets on Twitter. Rather, our findings suggest that Community Notes might be too slow to effectively reduce engagement with misinformation in the early (and most viral) stage of diffusion. Our work emphasizes the importance of evaluating fact-checking interventions in the field and offers important implications to enhance crowdsourced fact-checking strategies on social media. |
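A minimal sketch of the Difference-in-Differences setup named in the Community Notes record above, using the statsmodels formula API on a synthetic two-group, two-period panel; the variable names, the engagement measure, and the data are placeholders, not the study's dataset or full specification.

```python
# Two-group, two-period Difference-in-Differences on synthetic engagement data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 2000
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),   # 1 = tweet received a Community Note (hypothetical)
    "post": rng.integers(0, 2, n),      # 1 = observation after the roll-out
})
# Synthetic log-retweet outcome with a small (possibly zero) treatment effect.
df["log_retweets"] = (1.0 + 0.4 * df.treated + 0.2 * df.post
                      - 0.05 * df.treated * df.post + rng.normal(0, 1, n))

model = smf.ols("log_retweets ~ treated * post", data=df).fit()
# The coefficient on treated:post is the DiD estimate of the roll-out effect.
print(model.params["treated:post"], model.pvalues["treated:post"])
```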
1912.01320 | Arren Glover | Arren Glover, Alan B. Stokes, Steve Furber, Chiara Bartolozzi | ATIS + SpiNNaker: a Fully Event-based Visual Tracking Demonstration | Presented at the Unconventional Sensing and Processing for Robotic
Visual Perception workshop at the 2018 IEEE/RSJ International Conference on
Intelligent Robots and Systems. 2 pages, 2 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Asynchronous Time-based Image Sensor (ATIS) and the Spiking Neural
Network Architecture (SpiNNaker) are both neuromorphic technologies that
"unconventionally" use binary spikes to represent information. The ATIS
produces spikes to represent the change in light falling on the sensor, and the
SpiNNaker is a massively parallel computing platform that asynchronously sends
spikes between cores for processing. In this demonstration we show these two
hardware platforms used together to perform a visual tracking task. We aim to show
the hardware and software architecture that integrates the ATIS and SpiNNaker
together in a robot middleware that makes processing agnostic to the platform
(CPU or SpiNNaker). We also aim to describe the algorithm and why it is suitable
for the "unconventional" sensor and processing platform, including the
advantages as well as the challenges faced.
| [
{
"created": "Tue, 3 Dec 2019 11:46:54 GMT",
"version": "v1"
}
] | 2019-12-04 | [
[
"Glover",
"Arren",
""
],
[
"Stokes",
"Alan B.",
""
],
[
"Furber",
"Steve",
""
],
[
"Bartolozzi",
"Chiara",
""
]
] | The Asynchronous Time-based Image Sensor (ATIS) and the Spiking Neural Network Architecture (SpiNNaker) are both neuromorphic technologies that "unconventionally" use binary spikes to represent information. The ATIS produces spikes to represent the change in light falling on the sensor, and the SpiNNaker is a massively parallel computing platform that asynchronously sends spikes between cores for processing. In this demonstration we show these two hardware used together to perform a visual tracking task. We aim to show the hardware and software architecture that integrates the ATIS and SpiNNaker together in a robot middle-ware that makes processing agnostic to the platform (CPU or SpiNNaker). We also aim to describe the algorithm, why it is suitable for the "unconventional" sensor and processing platform including the advantages as well as challenges faced. |
2104.03487 | Chao Huang | Chao Huang, Haoran Yu, Jianwei Huang, Randall A. Berry | Strategic Information Revelation in Crowdsourcing Systems Without
Verification | To appear in IEEE INFOCOM 2021 | null | null | null | cs.GT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study a crowdsourcing problem where the platform aims to incentivize
distributed workers to provide high quality and truthful solutions without the
ability to verify the solutions. While most prior work assumes that the
platform and workers have symmetric information, we study an asymmetric
information scenario where the platform has informational advantages.
Specifically, the platform knows more information regarding worker average
solution accuracy, and can strategically reveal such information to workers.
Workers will utilize the announced information to determine the likelihood that
they obtain a reward if exerting effort on the task. We study two types of
workers, naive workers who fully trust the announcement, and strategic workers
who update their prior belief based on the announcement. For naive workers, we show
that the platform should always announce a high average accuracy to maximize
its payoff. However, this is not always optimal for strategic workers, as it
may reduce the credibility of the platform announcement and hence reduce the
platform payoff. Interestingly, the platform may have an incentive to even
announce an average accuracy lower than the actual value when facing strategic
workers. Another counterintuitive result is that the platform payoff may
decrease in the number of high accuracy workers.
| [
{
"created": "Thu, 8 Apr 2021 03:00:33 GMT",
"version": "v1"
},
{
"created": "Fri, 9 Apr 2021 04:48:57 GMT",
"version": "v2"
}
] | 2021-04-12 | [
[
"Huang",
"Chao",
""
],
[
"Yu",
"Haoran",
""
],
[
"Huang",
"Jianwei",
""
],
[
"Berry",
"Randall A.",
""
]
] | We study a crowdsourcing problem where the platform aims to incentivize distributed workers to provide high quality and truthful solutions without the ability to verify the solutions. While most prior work assumes that the platform and workers have symmetric information, we study an asymmetric information scenario where the platform has informational advantages. Specifically, the platform knows more information regarding worker average solution accuracy, and can strategically reveal such information to workers. Workers will utilize the announced information to determine the likelihood that they obtain a reward if exerting effort on the task. We study two types of workers, naive workers who fully trust the announcement, and strategic workers who update prior belief based on the announcement. For naive workers, we show that the platform should always announce a high average accuracy to maximize its payoff. However, this is not always optimal for strategic workers, as it may reduce the credibility of the platform announcement and hence reduce the platform payoff. Interestingly, the platform may have an incentive to even announce an average accuracy lower than the actual value when facing strategic workers. Another counterintuitive result is that the platform payoff may decrease in the number of high accuracy workers. |
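A worked Bayesian-update step of the kind the strategic-worker model above relies on; the notation is illustrative rather than the paper's. Let $\theta \in \{H, L\}$ be the true average accuracy, $p$ a worker's prior that $\theta = H$, and $a$ the platform's announcement, sent according to a (possibly untruthful) signalling strategy $\sigma(a \mid \theta)$. A strategic worker's posterior after hearing $a = H$ is

$$
\Pr(\theta = H \mid a = H) \;=\; \frac{\sigma(H \mid H)\, p}{\sigma(H \mid H)\, p + \sigma(H \mid L)\,(1 - p)} .
$$

If the platform always announces high accuracy, then $\sigma(H \mid H) = \sigma(H \mid L) = 1$ and the posterior collapses back to the prior $p$, which is exactly the loss of announcement credibility the abstract points to.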
2007.11646 | Diego Romeres | Yifang Liu, Diego Romeres, Devesh K. Jha and Daniel Nikovski | Understanding Multi-Modal Perception Using Behavioral Cloning for
Peg-In-a-Hole Insertion Tasks | Published at a RSS20 workshop | null | null | null | cs.RO cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the main challenges in peg-in-a-hole (PiH) insertion tasks is in
handling the uncertainty in the location of the target hole. In order to
address it, high-dimensional sensor inputs from sensor modalities such as
vision, force/torque sensing, and proprioception can be combined to learn
control policies that are robust to this uncertainty in the target pose.
Whereas deep learning has shown success in recognizing objects and making
decisions with high-dimensional inputs, the learning procedure might damage the
robot when directly applying trial-and-error algorithms on the real system. At
the same time, Learning from Demonstration (LfD) methods have been shown to
achieve compelling performance in real robotic systems by leveraging
demonstration data provided by experts. In this paper, we investigate the
merits of multiple sensor modalities such as vision, force/torque sensors, and
proprioception when combined to learn a controller for real world assembly
operation tasks using LfD techniques. The study is limited to PiH insertions;
we plan to extend the study to more experiments in the future. Additionally, we
propose a multi-step-ahead loss function to improve the performance of the
behavioral cloning method. Experimental results on a real manipulator support
our findings, and show the effectiveness of the proposed loss function.
| [
{
"created": "Wed, 22 Jul 2020 19:46:51 GMT",
"version": "v1"
}
] | 2020-07-24 | [
[
"Liu",
"Yifang",
""
],
[
"Romeres",
"Diego",
""
],
[
"Jha",
"Devesh K.",
""
],
[
"Nikovski",
"Daniel",
""
]
] | One of the main challenges in peg-in-a-hole (PiH) insertion tasks is in handling the uncertainty in the location of the target hole. In order to address it, high-dimensional sensor inputs from sensor modalities such as vision, force/torque sensing, and proprioception can be combined to learn control policies that are robust to this uncertainty in the target pose. Whereas deep learning has shown success in recognizing objects and making decisions with high-dimensional inputs, the learning procedure might damage the robot when applying directly trial- and-error algorithms on the real system. At the same time, learning from Demonstration (LfD) methods have been shown to achieve compelling performance in real robotic systems by leveraging demonstration data provided by experts. In this paper, we investigate the merits of multiple sensor modalities such as vision, force/torque sensors, and proprioception when combined to learn a controller for real world assembly operation tasks using LfD techniques. The study is limited to PiH insertions; we plan to extend the study to more experiments in the future. Additionally, we propose a multi-step-ahead loss function to improve the performance of the behavioral cloning method. Experimental results on a real manipulator support our findings, and show the effectiveness of the proposed loss function. |
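A compact sketch of a multi-step-ahead behavioral cloning loss in the spirit of the peg-in-hole record above: instead of penalizing only the next predicted action, the policy is unrolled for several steps against the demonstration and the per-step errors are summed. The dynamics placeholder, horizon, and network sizes are assumptions for illustration, not the paper's architecture.

```python
# Multi-step-ahead behavioral cloning loss (illustrative PyTorch sketch).
import torch
import torch.nn as nn

class Policy(nn.Module):
    def __init__(self, obs_dim=16, act_dim=6):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, act_dim))

    def forward(self, obs):
        return self.net(obs)

def multi_step_bc_loss(policy, obs_seq, act_seq, dynamics, horizon=3):
    """obs_seq: (T, obs_dim) demo observations; act_seq: (T, act_dim) demo actions.
    dynamics: callable predicting the next observation (a learned or assumed model)."""
    loss = 0.0
    obs = obs_seq[0]
    for t in range(horizon):
        pred_act = policy(obs)
        loss = loss + torch.mean((pred_act - act_seq[t]) ** 2)
        obs = dynamics(obs, pred_act)          # roll the prediction forward
    return loss / horizon

# Toy usage with random tensors and a trivial stand-in dynamics model.
obs_dim, act_dim, T = 16, 6, 5
policy = Policy(obs_dim, act_dim)
proj = nn.Linear(act_dim, obs_dim)
dynamics = lambda o, a: o + 0.01 * proj(a)     # hypothetical placeholder dynamics
obs_seq = torch.randn(T, obs_dim)
act_seq = torch.randn(T, act_dim)
loss = multi_step_bc_loss(policy, obs_seq, act_seq, dynamics, horizon=3)
loss.backward()
print(float(loss))
```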
2111.00166 | Taha Elmokadem | Taha Elmokadem | Advanced Algorithms of Collision Free Navigation and Flocking for
Autonomous UAVs | null | null | null | null | cs.RO cs.AI cs.SY eess.SY | http://creativecommons.org/licenses/by/4.0/ | Unmanned aerial vehicles (UAVs) have become very popular for many military
and civilian applications including in agriculture, construction, mining,
environmental monitoring, etc. A desirable feature for UAVs is the ability to
navigate and perform tasks autonomously with minimal human interaction. This is a
very challenging problem due to several factors such as the high complexity of
UAV applications, operation in harsh environments, limited payload and onboard
computing power and highly nonlinear dynamics. The work presented in this
report contributes towards the state-of-the-art in UAV control for safe
autonomous navigation and motion coordination of multi-UAV systems. The first
part of this report deals with single-UAV systems. The complex problem of
three-dimensional (3D) collision-free navigation in unknown/dynamic
environments is addressed. To that end, advanced 3D reactive control strategies
are developed adopting the sense-and-avoid paradigm to produce quick reactions
around obstacles. A special case of navigation in 3D unknown confined
environments (i.e. tunnel-like) is also addressed. General 3D kinematic models
are considered in the design which makes these methods applicable to different
UAV types in addition to underwater vehicles. Moreover, different
implementation methods for these strategies with quadrotor-type UAVs are also
investigated considering UAV dynamics in the control design. Practical
experiments and simulations were carried out to analyze the performance of the
developed methods. The second part of this report addresses safe navigation for
multi-UAV systems. Distributed motion coordination methods of multi-UAV systems
for flocking and 3D area coverage are developed. These methods offer good
computational cost for large-scale systems. Simulations were performed to
verify the performance of these methods considering systems with different
sizes.
| [
{
"created": "Sat, 30 Oct 2021 03:51:40 GMT",
"version": "v1"
}
] | 2021-11-02 | [
[
"Elmokadem",
"Taha",
""
]
] | Unmanned aerial vehicles (UAVs) have become very popular for many military and civilian applications including in agriculture, construction, mining, environmental monitoring, etc. A desirable feature for UAVs is the ability to navigate and perform tasks autonomously with least human interaction. This is a very challenging problem due to several factors such as the high complexity of UAV applications, operation in harsh environments, limited payload and onboard computing power and highly nonlinear dynamics. The work presented in this report contributes towards the state-of-the-art in UAV control for safe autonomous navigation and motion coordination of multi-UAV systems. The first part of this report deals with single-UAV systems. The complex problem of three-dimensional (3D) collision-free navigation in unknown/dynamic environments is addressed. To that end, advanced 3D reactive control strategies are developed adopting the sense-and-avoid paradigm to produce quick reactions around obstacles. A special case of navigation in 3D unknown confined environments (i.e. tunnel-like) is also addressed. General 3D kinematic models are considered in the design which makes these methods applicable to different UAV types in addition to underwater vehicles. Moreover, different implementation methods for these strategies with quadrotor-type UAVs are also investigated considering UAV dynamics in the control design. Practical experiments and simulations were carried out to analyze the performance of the developed methods. The second part of this report addresses safe navigation for multi-UAV systems. Distributed motion coordination methods of multi-UAV systems for flocking and 3D area coverage are developed. These methods offer good computational cost for large-scale systems. Simulations were performed to verify the performance of these methods considering systems with different sizes. |
2311.09930 | Jaykumar Kasundra | Jaykumar Kasundra, Claudia Schulz, Melicaalsadat Mirsafian, Stavroula
Skylaki | A Framework for Monitoring and Retraining Language Models in Real-World
Applications | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | In the Machine Learning (ML) model development lifecycle, training candidate
models using an offline holdout dataset and identifying the best model for the
given task is only the first step. After the deployment of the selected model,
continuous model monitoring and model retraining is required in many real-world
applications. There are multiple reasons for retraining, including data or
concept drift, which may be reflected in the model performance as monitored by
an appropriate metric. Another motivation for retraining is the acquisition of
increasing amounts of data over time, which may be used to retrain and improve
the model performance even in the absence of drifts. We examine the impact of
various retraining decision points on crucial factors, such as model
performance and resource utilization, in the context of Multilabel
Classification models. We explain our key decision points and propose a
reference framework for designing an effective model retraining strategy.
| [
{
"created": "Thu, 16 Nov 2023 14:32:18 GMT",
"version": "v1"
},
{
"created": "Fri, 17 Nov 2023 09:23:20 GMT",
"version": "v2"
}
] | 2023-11-20 | [
[
"Kasundra",
"Jaykumar",
""
],
[
"Schulz",
"Claudia",
""
],
[
"Mirsafian",
"Melicaalsadat",
""
],
[
"Skylaki",
"Stavroula",
""
]
] | In the Machine Learning (ML) model development lifecycle, training candidate models using an offline holdout dataset and identifying the best model for the given task is only the first step. After the deployment of the selected model, continuous model monitoring and model retraining is required in many real-world applications. There are multiple reasons for retraining, including data or concept drift, which may be reflected on the model performance as monitored by an appropriate metric. Another motivation for retraining is the acquisition of increasing amounts of data over time, which may be used to retrain and improve the model performance even in the absence of drifts. We examine the impact of various retraining decision points on crucial factors, such as model performance and resource utilization, in the context of Multilabel Classification models. We explain our key decision points and propose a reference framework for designing an effective model retraining strategy. |
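A small sketch of the kind of retraining decision point the framework record above discusses: retraining is triggered either when a monitored metric drifts below a tolerance relative to the deployment baseline or when enough new labeled data has accumulated. The thresholds and field names are placeholders, not the paper's recommendations.

```python
# Illustrative retraining trigger combining metric drift and data accumulation.
from dataclasses import dataclass

@dataclass
class MonitoringState:
    baseline_f1: float        # metric measured at deployment time
    current_f1: float         # metric on the most recent monitoring window
    new_samples: int          # labeled samples collected since the last (re)training

def should_retrain(state, max_drop=0.05, min_new_samples=10_000):
    metric_drifted = state.current_f1 < state.baseline_f1 - max_drop
    enough_new_data = state.new_samples >= min_new_samples
    return metric_drifted or enough_new_data

print(should_retrain(MonitoringState(baseline_f1=0.82, current_f1=0.75, new_samples=2_000)))   # True: drift
print(should_retrain(MonitoringState(baseline_f1=0.82, current_f1=0.81, new_samples=12_000)))  # True: new data
print(should_retrain(MonitoringState(baseline_f1=0.82, current_f1=0.81, new_samples=2_000)))   # False
```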
1612.05710 | Saeed Moghaddam | Saeed Moghaddam, Ahmed Helmy | Multi-modal Mining and Modeling of Big Mobile Networks Based on Users
Behavior and Interest | null | null | null | null | cs.NI cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Usage of mobile wireless Internet has grown very fast in recent years. This
radical change in the availability of the Internet has led to the communication of
large amounts of data over mobile networks and, consequently, new challenges and
opportunities for modeling of mobile Internet characteristics. While the
traditional approach toward network modeling suggests finding a generic traffic
model for the whole network, in this paper, we show that this approach does not
capture all the dynamics of big mobile networks and does not provide enough
accuracy. Our case study based on a big dataset including billions of netflow
records collected from a campus-wide wireless mobile network shows that user
interests acquired based on accessed domains and visited locations as well as
user behavioral groups have a significant impact on traffic characteristics of
big mobile networks. For this purpose, we utilize a novel graph-based approach
based on KS-test as well as a novel co-clustering technique. Our study shows
that interest-based modeling of big mobile networks can significantly improve
the accuracy and reduce the KS distance by a factor of 5 compared to the generic
approach.
| [
{
"created": "Sat, 17 Dec 2016 06:21:05 GMT",
"version": "v1"
}
] | 2016-12-20 | [
[
"Moghaddam",
"Saeed",
""
],
[
"Helmy",
"Ahmed",
""
]
] | Usage of mobile wireless Internet has grown very fast in recent years. This radical change in availability of Internet has led to communication of big amount of data over mobile networks and consequently new challenges and opportunities for modeling of mobile Internet characteristics. While the traditional approach toward network modeling suggests finding a generic traffic model for the whole network, in this paper, we show that this approach does not capture all the dynamics of big mobile networks and does not provide enough accuracy. Our case study based on a big dataset including billions of netflow records collected from a campus-wide wireless mobile network shows that user interests acquired based on accessed domains and visited locations as well as user behavioral groups have a significant impact on traffic characteristics of big mobile networks. For this purpose, we utilize a novel graph-based approach based on KS-test as well as a novel co-clustering technique. Our study shows that interest-based modeling of big mobile networks can significantly improve the accuracy and reduce the KS distance by factor of 5 comparing to the generic approach. |
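A quick illustration of the Kolmogorov-Smirnov (KS) distance the traffic-modeling record above uses to compare empirical traffic distributions against a fitted model; the samples here are synthetic, and the contrast between a "generic" model and per-group models is only schematic.

```python
# Compare KS distances of a generic model vs. per-group models (illustrative).
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
# Two user groups with different flow-size behaviour (synthetic, log-normal).
group_a = rng.lognormal(mean=2.0, sigma=0.5, size=5000)
group_b = rng.lognormal(mean=3.0, sigma=0.8, size=5000)
observed = np.concatenate([group_a, group_b])

# "Generic" model: one distribution fitted to all traffic.
generic_model = rng.lognormal(mean=np.log(observed).mean(),
                              sigma=np.log(observed).std(), size=10000)
print("generic KS:", ks_2samp(observed, generic_model).statistic)

# Behaviour-aware model: one distribution per user group.
for name, group in [("group A", group_a), ("group B", group_b)]:
    model = rng.lognormal(mean=np.log(group).mean(), sigma=np.log(group).std(), size=10000)
    print(f"{name} KS:", ks_2samp(group, model).statistic)
```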
2309.16962 | Yuqiu Zhang | Yuqiu Zhang, Tongkun Zhang, Gengrui Zhang, Hans-Arno Jacobsen | Lifting the Fog of Uncertainties: Dynamic Resource Orchestration for the
Containerized Cloud | To appear at ACM SoCC '23 | null | null | null | cs.DC | http://creativecommons.org/licenses/by/4.0/ | The advances in virtualization technologies have sparked a growing transition
from virtual machine (VM)-based to container-based infrastructure for cloud
computing. From the resource orchestration perspective, containers' lightweight
and highly configurable nature not only enables opportunities for more
optimized strategies, but also poses greater challenges due to additional
uncertainties and a larger configuration parameter search space. Towards this
end, we propose Drone, a resource orchestration framework that adaptively
configures resource parameters to improve application performance and reduce
operational cost in the presence of cloud uncertainties. Built on Contextual
Bandit techniques, Drone is able to achieve a balance between performance and
resource cost on public clouds, and optimize performance on private clouds
where a hard resource constraint is present. We show that our algorithms can
achieve sub-linear growth in cumulative regret, a theoretically sound
convergence guarantee, and our extensive experiments show that Drone achieves
an up to 45% performance improvement and a 20% resource footprint reduction
across batch processing jobs and microservice workloads.
| [
{
"created": "Fri, 29 Sep 2023 04:11:12 GMT",
"version": "v1"
}
] | 2023-10-02 | [
[
"Zhang",
"Yuqiu",
""
],
[
"Zhang",
"Tongkun",
""
],
[
"Zhang",
"Gengrui",
""
],
[
"Jacobsen",
"Hans-Arno",
""
]
] | The advances in virtualization technologies have sparked a growing transition from virtual machine (VM)-based to container-based infrastructure for cloud computing. From the resource orchestration perspective, containers' lightweight and highly configurable nature not only enables opportunities for more optimized strategies, but also poses greater challenges due to additional uncertainties and a larger configuration parameter search space. Towards this end, we propose Drone, a resource orchestration framework that adaptively configures resource parameters to improve application performance and reduce operational cost in the presence of cloud uncertainties. Built on Contextual Bandit techniques, Drone is able to achieve a balance between performance and resource cost on public clouds, and optimize performance on private clouds where a hard resource constraint is present. We show that our algorithms can achieve sub-linear growth in cumulative regret, a theoretically sound convergence guarantee, and our extensive experiments show that Drone achieves an up to 45% performance improvement and a 20% resource footprint reduction across batch processing jobs and microservice workloads. |
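The record above credits Drone's adaptivity to Contextual Bandit techniques; below is a generic LinUCB sketch (not Drone's actual algorithm) that picks a resource configuration from a context vector and updates a per-arm linear model from the observed reward. The dimensions, arms, and reward signal are all hypothetical.

```python
# Generic LinUCB contextual bandit for picking a resource configuration (illustrative).
import numpy as np

class LinUCB:
    def __init__(self, n_arms, dim, alpha=1.0):
        self.alpha = alpha
        self.A = [np.eye(dim) for _ in range(n_arms)]     # per-arm design matrices
        self.b = [np.zeros(dim) for _ in range(n_arms)]   # per-arm reward vectors

    def select(self, context):
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b
            ucb = theta @ context + self.alpha * np.sqrt(context @ A_inv @ context)
            scores.append(ucb)
        return int(np.argmax(scores))

    def update(self, arm, context, reward):
        self.A[arm] += np.outer(context, context)
        self.b[arm] += reward * context

# Toy loop: context = workload features, arms = candidate container configurations.
rng = np.random.default_rng(0)
bandit = LinUCB(n_arms=4, dim=5, alpha=0.5)
for _ in range(200):
    ctx = rng.normal(size=5)
    arm = bandit.select(ctx)
    reward = float(ctx[arm] + rng.normal(0, 0.1))  # hypothetical performance/cost signal
    bandit.update(arm, ctx, reward)
print("most recently chosen arm:", arm)
```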
1708.08611 | Mohammed Alshiekh | Mohammed Alshiekh, Roderick Bloem, Ruediger Ehlers, Bettina
K\"onighofer, Scott Niekum, Ufuk Topcu | Safe Reinforcement Learning via Shielding | null | null | null | null | cs.LO cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reinforcement learning algorithms discover policies that maximize reward, but
do not necessarily guarantee safety during learning or execution phases. We
introduce a new approach to learn optimal policies while enforcing properties
expressed in temporal logic. To this end, given the temporal logic
specification that is to be obeyed by the learning system, we propose to
synthesize a reactive system called a shield. The shield is introduced in the
traditional learning process in two alternative ways, depending on the location
at which the shield is implemented. In the first one, the shield acts each time
the learning agent is about to make a decision and provides a list of safe
actions. In the second way, the shield is introduced after the learning agent.
The shield monitors the actions from the learner and corrects them only if the
chosen action causes a violation of the specification. We discuss which
requirements a shield must meet to preserve the convergence guarantees of the
learner. Finally, we demonstrate the versatility of our approach on several
challenging reinforcement learning scenarios.
| [
{
"created": "Tue, 29 Aug 2017 07:16:54 GMT",
"version": "v1"
},
{
"created": "Sun, 3 Sep 2017 20:35:33 GMT",
"version": "v2"
}
] | 2017-09-05 | [
[
"Alshiekh",
"Mohammed",
""
],
[
"Bloem",
"Roderick",
""
],
[
"Ehlers",
"Ruediger",
""
],
[
"Könighofer",
"Bettina",
""
],
[
"Niekum",
"Scott",
""
],
[
"Topcu",
"Ufuk",
""
]
] | Reinforcement learning algorithms discover policies that maximize reward, but do not necessarily guarantee safety during learning or execution phases. We introduce a new approach to learn optimal policies while enforcing properties expressed in temporal logic. To this end, given the temporal logic specification that is to be obeyed by the learning system, we propose to synthesize a reactive system called a shield. The shield is introduced in the traditional learning process in two alternative ways, depending on the location at which the shield is implemented. In the first one, the shield acts each time the learning agent is about to make a decision and provides a list of safe actions. In the second way, the shield is introduced after the learning agent. The shield monitors the actions from the learner and corrects them only if the chosen action causes a violation of the specification. We discuss which requirements a shield must meet to preserve the convergence guarantees of the learner. Finally, we demonstrate the versatility of our approach on several challenging reinforcement learning scenarios. |
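A minimal sketch of the second shielding mode described above (the shield placed after the learner), as a wrapper that overrides the agent's action whenever it would violate a safety predicate; the toy safety check and the fallback rule are illustrative, not a shield synthesized from a temporal-logic specification.

```python
# Post-posed shield: monitor the learner's action and correct it only if unsafe (illustrative).

def shield(state, proposed_action, is_safe, safe_actions):
    """Return the proposed action if it keeps the system safe, otherwise a safe fallback."""
    if is_safe(state, proposed_action):
        return proposed_action
    for action in safe_actions(state):
        if is_safe(state, action):
            return action          # first safe alternative
    raise RuntimeError("no safe action available in this state")

# Toy 1-D corridor: positions 0..9, position 9 is unsafe.
def is_safe(state, action):
    return state + action != 9

def safe_actions(state):
    return [-1, 0, +1]

state, learner_action = 8, +1          # learner proposes stepping into the unsafe cell
executed = shield(state, learner_action, is_safe, safe_actions)
print("executed action:", executed)    # a corrected, safe action
```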
2407.13717 | Usman Gohar | Usman Gohar, Michael C. Hunter, Robyn R. Lutz, Myra B. Cohen | CoDefeater: Using LLMs To Find Defeaters in Assurance Cases | null | null | null | null | cs.SE cs.AI | http://creativecommons.org/licenses/by/4.0/ | Constructing assurance cases is a widely used, and sometimes required,
process toward demonstrating that safety-critical systems will operate safely
in their planned environment. To mitigate the risk of errors and missing edge
cases, the concept of defeaters - arguments or evidence that challenge claims
in an assurance case - has been introduced. Defeaters can provide timely
detection of weaknesses in the arguments, prompting further investigation and
timely mitigations. However, capturing defeaters relies on expert judgment,
experience, and creativity and must be done iteratively due to evolving
requirements and regulations. This paper proposes CoDefeater, an automated
process to leverage large language models (LLMs) for finding defeaters. Initial
results on two systems show that LLMs can efficiently find known and unforeseen
feasible defeaters to support safety analysts in enhancing the completeness and
confidence of assurance cases.
| [
{
"created": "Thu, 18 Jul 2024 17:16:35 GMT",
"version": "v1"
}
] | 2024-07-19 | [
[
"Gohar",
"Usman",
""
],
[
"Hunter",
"Michael C.",
""
],
[
"Lutz",
"Robyn R.",
""
],
[
"Cohen",
"Myra B.",
""
]
] | Constructing assurance cases is a widely used, and sometimes required, process toward demonstrating that safety-critical systems will operate safely in their planned environment. To mitigate the risk of errors and missing edge cases, the concept of defeaters - arguments or evidence that challenge claims in an assurance case - has been introduced. Defeaters can provide timely detection of weaknesses in the arguments, prompting further investigation and timely mitigations. However, capturing defeaters relies on expert judgment, experience, and creativity and must be done iteratively due to evolving requirements and regulations. This paper proposes CoDefeater, an automated process to leverage large language models (LLMs) for finding defeaters. Initial results on two systems show that LLMs can efficiently find known and unforeseen feasible defeaters to support safety analysts in enhancing the completeness and confidence of assurance cases. |
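A schematic of a CoDefeater-style prompting loop as described above, with the LLM call left as an abstract `complete` function so no particular provider API is implied; the prompt wording, the parsing, and the self-reflection step are placeholders rather than the paper's prompts.

```python
# Sketch of prompting an LLM for defeaters of an assurance-case claim (illustrative).
from typing import Callable, List

def find_defeaters(claim: str, evidence: List[str],
                   complete: Callable[[str], str], n_rounds: int = 2) -> List[str]:
    """complete: any text-in/text-out LLM call supplied by the caller."""
    defeaters: List[str] = []
    prompt = (
        "You are reviewing a safety assurance case.\n"
        f"Claim: {claim}\n"
        "Evidence:\n" + "\n".join(f"- {e}" for e in evidence) +
        "\nList plausible defeaters (arguments or missing evidence that could "
        "undermine the claim), one per line."
    )
    for _ in range(n_rounds):
        reply = complete(prompt)
        new = [line.strip("- ").strip() for line in reply.splitlines() if line.strip()]
        defeaters.extend(d for d in new if d not in defeaters)
        # Simple self-reflection pass: ask for defeaters not already listed.
        prompt += ("\nAlready found:\n" + "\n".join(f"- {d}" for d in defeaters) +
                   "\nList additional, different defeaters.")
    return defeaters

# Toy usage with a stubbed model.
stub = lambda p: "- Sensor calibration drift is not covered\n- Test environment differs from deployment"
print(find_defeaters("The obstacle detector meets its failure-rate target",
                     ["Bench test report", "Simulation coverage summary"], stub))
```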
2404.17565 | Mubashir Noman | Mubashir Noman and Mustansar Fiaz and Hisham Cholakkal | ChangeBind: A Hybrid Change Encoder for Remote Sensing Change Detection | accepted at IGARSS 2024 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Change detection (CD) is a fundamental task in remote sensing (RS) which aims
to detect the semantic changes between the same geographical regions at
different time stamps. Existing convolutional neural networks (CNNs) based
approaches often struggle to capture long-range dependencies. In contrast, recent
transformer-based methods tend to be dominated by the global representation, which
may limit their ability to capture subtle change regions due to the
complexity of the objects in the scene. To address these limitations, we
propose an effective Siamese-based framework to encode the semantic changes
occurring in the bi-temporal RS images. The main focus of our design is to
introduce a change encoder that leverages local and global feature
representations to capture both subtle and large change feature information
from multi-scale features to precisely estimate the change regions. Our
experimental study on two challenging CD datasets reveals the merits of our
approach and obtains state-of-the-art performance.
| [
{
"created": "Fri, 26 Apr 2024 17:47:14 GMT",
"version": "v1"
}
] | 2024-04-29 | [
[
"Noman",
"Mubashir",
""
],
[
"Fiaz",
"Mustansar",
""
],
[
"Cholakkal",
"Hisham",
""
]
] | Change detection (CD) is a fundamental task in remote sensing (RS) which aims to detect the semantic changes between the same geographical regions at different time stamps. Existing convolutional neural networks (CNNs) based approaches often struggle to capture long-range dependencies. Whereas recent transformer-based methods are prone to the dominant global representation and may limit their capabilities to capture the subtle change regions due to the complexity of the objects in the scene. To address these limitations, we propose an effective Siamese-based framework to encode the semantic changes occurring in the bi-temporal RS images. The main focus of our design is to introduce a change encoder that leverages local and global feature representations to capture both subtle and large change feature information from multi-scale features to precisely estimate the change regions. Our experimental study on two challenging CD datasets reveals the merits of our approach and obtains state-of-the-art performance. |
1705.11063 | Simon Hill | Simon Hill (1 and 3), Daniel Deising (2), Thomas Acher (1), Harald
Klein (3), Dieter Bothe (2), Holger Marschall (2) ((1) Linde Engineering AG,
Pullach, Germany, (2) Technische Universit\"at Darmstadt, Darmstadt, Germany,
(3) Technische Universit\"at M\"unchen, Germany) | Boundedness-Preserving Implicit Correction of Mesh-Induced Errors for
VoF Based Heat and Mass Transfer | null | null | 10.1016/j.jcp.2017.09.027 | null | cs.CE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Spatial discretisation of geometrically complex computational domains often
entails unstructured meshes of general topology for Computational Fluid
Dynamics (CFD). Mesh skewness is then typically encountered causing severe
deterioration of the formal order of accuracy of the discretisation, or
boundedness of the solution, or both. Particularly methods inherently relying
on the accurate and bounded transport of sharp fields suffer from all types of
mesh-induced skewness errors, namely both non-orthogonality and
non-conjunctionality errors. This work is devoted to a boundedness-preserving
strategy to correct for skewness errors arising from discretisation of
advection and diffusion terms within the context of interfacial heat and mass
transfer based on the Volume-of-Fluid methodology. The implementation has been
accomplished using a second-order finite volume method with support for
unstructured meshes of general topology. We examine and advance suitable
corrections for the finite volume discretisation of a consistent single-field
model, where both accurate and bounded transport due to diffusion and advection
is crucial. In order to ensure consistency of both the volume fraction and the
species concentration transport, i.e. to avoid artificial heat or species
transfer, corrections are studied for both cases. The cross interfacial jump
and adjacent sharp gradients of species concentration render the correction for
skewness-induced diffusion and advection errors additionally demanding; this has
not so far been addressed in the literature.
| [
{
"created": "Wed, 31 May 2017 12:36:38 GMT",
"version": "v1"
},
{
"created": "Wed, 21 Jun 2017 13:42:48 GMT",
"version": "v2"
}
] | 2017-10-25 | [
[
"Hill",
"Simon",
"",
"1 and 3"
],
[
"Deising",
"Daniel",
""
],
[
"Acher",
"Thomas",
""
],
[
"Klein",
"Harald",
""
],
[
"Bothe",
"Dieter",
""
],
[
"Marschall",
"Holger",
""
]
] | Spatial discretisation of geometrically complex computational domains often entails unstructured meshes of general topology for Computational Fluid Dynamics (CFD). Mesh skewness is then typically encountered causing severe deterioration of the formal order of accuracy of the discretisation, or boundedness of the solution, or both. Particularly methods inherently relying on the accurate and bounded transport of sharp fields suffer from all types of mesh-induced skewness errors, namely both non-orthogonality and non-conjunctionality errors. This work is devoted to a boundedness-preserving strategy to correct for skewness errors arising from discretisation of advection and diffusion terms within the context of interfacial heat and mass transfer based on the Volume-of-Fluid methodology. The implementation has been accomplished using a second-order finite volume method with support for unstructured meshes of general topology. We examine and advance suitable corrections for the finite volume discretisation of a consistent single-field model, where both accurate and bounded transport due to diffusion and advection is crucial. In order to ensure consistency of both the volume fraction and the species concentration transport, i.e. to avoid artificial heat or species transfer, corrections are studied for both cases. The cross interfacial jump and adjacent sharp gradients of species concentration render the correction for skewness-induced diffusion and advection errors additionally demanding and has not so far been addressed in the literature. |
2402.11818 | Sameer Jain | Sameer Jain, Sedrick Scott Keh, Shova Chettri, Karun Dewan, Pablo
Izquierdo, Johanna Prussman, Pooja Shreshtha, Cesar Suarez, Zheyuan Ryan Shi,
Lei Li, Fei Fang | Where It Really Matters: Few-Shot Environmental Conservation Media
Monitoring for Low-Resource Languages | AAAI 2024: AI for Social Impact Track | null | null | null | cs.CL cs.AI cs.CY | http://creativecommons.org/licenses/by/4.0/ | Environmental conservation organizations routinely monitor news content on
conservation in protected areas to maintain situational awareness of
developments that can have an environmental impact. Existing automated media
monitoring systems require large amounts of data labeled by domain experts,
which is only feasible at scale for high-resource languages like English.
However, such tools are most needed in the global south where news of interest
is mainly in local low-resource languages, and far fewer experts are available
to annotate datasets sustainably. In this paper, we propose NewsSerow, a method
to automatically recognize environmental conservation content in low-resource
languages. NewsSerow is a pipeline of summarization, in-context few-shot
classification, and self-reflection using large language models (LLMs). Using
at most 10 demonstration example news articles in Nepali, NewsSerow
significantly outperforms other few-shot methods and achieves comparable
performance with models fully fine-tuned using thousands of examples. The World
Wide Fund for Nature (WWF) has deployed NewsSerow for media monitoring in
Nepal, significantly reducing their operational burden, and ensuring that AI
tools for conservation actually reach the communities that need them the most.
NewsSerow has also been deployed for countries with other languages like
Colombia.
| [
{
"created": "Mon, 19 Feb 2024 04:17:21 GMT",
"version": "v1"
}
] | 2024-02-20 | [
[
"Jain",
"Sameer",
""
],
[
"Keh",
"Sedrick Scott",
""
],
[
"Chettri",
"Shova",
""
],
[
"Dewan",
"Karun",
""
],
[
"Izquierdo",
"Pablo",
""
],
[
"Prussman",
"Johanna",
""
],
[
"Shreshtha",
"Pooja",
""
],
[
"Suarez",
"Cesar",
""
],
[
"Shi",
"Zheyuan Ryan",
""
],
[
"Li",
"Lei",
""
],
[
"Fang",
"Fei",
""
]
] | Environmental conservation organizations routinely monitor news content on conservation in protected areas to maintain situational awareness of developments that can have an environmental impact. Existing automated media monitoring systems require large amounts of data labeled by domain experts, which is only feasible at scale for high-resource languages like English. However, such tools are most needed in the global south where news of interest is mainly in local low-resource languages, and far fewer experts are available to annotate datasets sustainably. In this paper, we propose NewsSerow, a method to automatically recognize environmental conservation content in low-resource languages. NewsSerow is a pipeline of summarization, in-context few-shot classification, and self-reflection using large language models (LLMs). Using at most 10 demonstration example news articles in Nepali, NewsSerow significantly outperforms other few-shot methods and achieves comparable performance with models fully fine-tuned using thousands of examples. The World Wide Fund for Nature (WWF) has deployed NewsSerow for media monitoring in Nepal, significantly reducing their operational burden, and ensuring that AI tools for conservation actually reach the communities that need them the most. NewsSerow has also been deployed for countries with other languages like Colombia. |
2404.13674 | Hoang Ta | Yeow Meng Chee, Tuvi Etzion, Hoang Ta, and Van Khu Vu | On de Bruijn Covering Sequences and Arrays | null | null | null | null | cs.IT math.IT | http://creativecommons.org/licenses/by/4.0/ | An $(m,n,R)$-de Bruijn covering array (dBCA) is a doubly periodic $M \times
N$ array over an alphabet of size $q$ such that the set of all its $m \times n$
windows form a covering code with radius $R$. An upper bound of the smallest
array area of an $(m,n,R)$-dBCA is provided using a probabilistic technique
which is similar to the one that was used for an upper bound on the length of a
de Bruijn covering sequence. A folding technique to construct a dBCA from a de
Bruijn covering sequence or de Bruijn covering sequences code is presented.
Several new constructions that yield shorter de Bruijn covering sequences and
$(m,n,R)$-dBCAs with smaller areas are also provided. These constructions are
mainly based on sequences derived from cyclic codes, self-dual sequences,
primitive polynomials, an interleaving technique, folding, and mutual shifts of
sequences with the same covering radius. Finally, constructions of de Bruijn
covering sequences codes are also discussed.
| [
{
"created": "Sun, 21 Apr 2024 14:26:44 GMT",
"version": "v1"
},
{
"created": "Thu, 9 May 2024 09:22:36 GMT",
"version": "v2"
}
] | 2024-05-10 | [
[
"Chee",
"Yeow Meng",
""
],
[
"Etzion",
"Tuvi",
""
],
[
"Ta",
"Hoang",
""
],
[
"Vu",
"Van Khu",
""
]
] | An $(m,n,R)$-de Bruijn covering array (dBCA) is a doubly periodic $M \times N$ array over an alphabet of size $q$ such that the set of all its $m \times n$ windows form a covering code with radius $R$. An upper bound of the smallest array area of an $(m,n,R)$-dBCA is provided using a probabilistic technique which is similar to the one that was used for an upper bound on the length of a de Bruijn covering sequence. A folding technique to construct a dBCA from a de Bruijn covering sequence or de Bruijn covering sequences code is presented. Several new constructions that yield shorter de Bruijn covering sequences and $(m,n,R)$-dBCAs with smaller areas are also provided. These constructions are mainly based on sequences derived from cyclic codes, self-dual sequences, primitive polynomials, an interleaving technique, folding, and mutual shifts of sequences with the same covering radius. Finally, constructions of de Bruijn covering sequences codes are also discussed. |
2009.02166 | Cornelis Jan Van Leeuwen | Cornelis Jan van Leeuwen, Joost Stam, Arun Subramanian, Koen Kok | Collaboratively Optimizing Power Scheduling and Mitigating Congestion
using Local Pricing in a Receding Horizon Market | 10 pages, 9 figures, 2 tables, 1 algorithm in pseudocode | null | null | null | cs.MA cs.SY eess.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A distributed, hierarchical, market based approach is introduced to solve the
economic dispatch problem. The approach requires only a minimal amount of
information to be shared between a central market operator and the end-users.
Price signals from the market operator are sent down to end-user device agents,
which in turn respond with power schedules. Intermediate congestion agents make
sure that local power constraints are satisfied and any potential congestion is
avoided by adding local pricing differences. Our results show that in 20% of
the evaluated scenarios the solutions are identical to the global optimum when
perfect knowledge is available. In the other 80% the results are not
significantly worse, while providing a higher level of scalability and
increasing the consumer's privacy.
| [
{
"created": "Fri, 4 Sep 2020 13:04:50 GMT",
"version": "v1"
}
] | 2020-09-07 | [
[
"van Leeuwen",
"Cornelis Jan",
""
],
[
"Stam",
"Joost",
""
],
[
"Subramanian",
"Arun",
""
],
[
"Kok",
"Koen",
""
]
] | A distributed, hierarchical, market based approach is introduced to solve the economic dispatch problem. The approach requires only a minimal amount of information to be shared between a central market operator and the end-users. Price signals from the market operator are sent down to end-user device agents, which in turn respond with power schedules. Intermediate congestion agents make sure that local power constraints are satisfied and any potential congestion is avoided by adding local pricing differences. Our results show that in 20% of the evaluated scenarios the solutions are identical to the global optimum when perfect knowledge is available. In the other 80% the results are not significantly worse, while providing a higher level of scalability and increasing the consumer's privacy. |
1603.05763 | Boshra Rajaei | Boshra Rajaei, Rafael Grompone von Gioi, Jean-Michel Morel | From line segments to more organized Gestalts | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we reconsider the early computer vision bottom-up program,
according to which higher level features (geometric structures) in an image
could be built up recursively from elementary features by simple grouping
principles coming from Gestalt theory. Taking advantage of the (recent)
advances in reliable line segment detectors, we propose three feature detectors
that constitute one step up in this bottom up pyramid. For any digital image,
our unsupervised algorithm computes three classic Gestalts from the set of
predetected line segments: good continuations, nonlocal alignments, and bars.
The methodology is based on a common stochastic {\it a contrario model}
yielding three simple detection formulas, characterized by their number of
false alarms. This detection algorithm is illustrated on several digital
images.
| [
{
"created": "Fri, 18 Mar 2016 04:05:35 GMT",
"version": "v1"
}
] | 2016-03-21 | [
[
"Rajaei",
"Boshra",
""
],
[
"von Gioi",
"Rafael Grompone",
""
],
[
"Morel",
"Jean-Michel",
""
]
] | In this paper, we reconsider the early computer vision bottom-up program, according to which higher level features (geometric structures) in an image could be built up recursively from elementary features by simple grouping principles coming from Gestalt theory. Taking advantage of the (recent) advances in reliable line segment detectors, we propose three feature detectors that constitute one step up in this bottom up pyramid. For any digital image, our unsupervised algorithm computes three classic Gestalts from the set of predetected line segments: good continuations, nonlocal alignments, and bars. The methodology is based on a common stochastic {\it a contrario model} yielding three simple detection formulas, characterized by their number of false alarms. This detection algorithm is illustrated on several digital images. |
1806.04291 | Manuel Mager | Manuel Mager, Ximena Gutierrez-Vasques, Gerardo Sierra, Ivan Meza | Challenges of language technologies for the indigenous languages of the
Americas | In Proceedings of the 27th International Conference on Computational
Linguistics (COLING 2018) | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Indigenous languages of the American continent are highly diverse. However,
they have received little attention from the technological perspective. In this
paper, we review the research, the digital resources and the available NLP
systems that focus on these languages. We present the main challenges and
research questions that arise when distant languages and low-resource scenarios
are faced. We would like to encourage NLP research in linguistically rich and
diverse areas like the Americas.
| [
{
"created": "Tue, 12 Jun 2018 01:26:55 GMT",
"version": "v1"
}
] | 2018-06-13 | [
[
"Mager",
"Manuel",
""
],
[
"Gutierrez-Vasques",
"Ximena",
""
],
[
"Sierra",
"Gerardo",
""
],
[
"Meza",
"Ivan",
""
]
] | Indigenous languages of the American continent are highly diverse. However, they have received little attention from the technological perspective. In this paper, we review the research, the digital resources and the available NLP systems that focus on these languages. We present the main challenges and research questions that arise when distant languages and low-resource scenarios are faced. We would like to encourage NLP research in linguistically rich and diverse areas like the Americas. |
2406.15074 | Suvadeep Mukherjee | Suvadeep Mukherjee, Verena Distler, Gabriele Lenzini and Pedro
Cardoso-Leite | Balancing The Perception of Cheating Detection, Privacy and Fairness: A
Mixed-Methods Study of Visual Data Obfuscation in Remote Proctoring | null | null | null | null | cs.HC | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Remote proctoring technology, a cheating-preventive measure, often raises
privacy and fairness concerns that may affect test-takers' experiences and the
validity of test results. Our study explores how selectively obfuscating
information in video recordings can protect test-takers' privacy while ensuring
effective and fair cheating detection. Interviews with experts (N=9) identified
four key video regions indicative of potential cheating behaviors: the
test-taker's face, body, background and the presence of individuals in the
background. Experts recommended specific obfuscation methods for each region
based on privacy significance and cheating behavior frequency, ranging from
conventional blurring to advanced methods like replacement with deepfake, 3D
avatars and silhouetting. We then conducted a vignette experiment with
potential test-takers (N=259, non-experts) to evaluate their perceptions of
cheating detection, visual privacy and fairness, using descriptions and
examples of still images for each expert-recommended combination of video
regions and obfuscation methods. Our results indicate that the effectiveness of
obfuscation methods varies by region. Tailoring remote proctoring with
region-specific advanced obfuscation methods can improve the perceptions of
privacy and fairness compared to the conventional methods, though it may
decrease perceived information sufficiency for detecting cheating. However,
non-experts preferred conventional blurring for videos they were more willing
to share, highlighting a gap between the perceived effectiveness of the
advanced obfuscation methods and their practical acceptance. This study
contributes to the field of user-centered privacy by suggesting promising
directions to address current remote proctoring challenges and guiding future
research.
| [
{
"created": "Fri, 21 Jun 2024 11:40:56 GMT",
"version": "v1"
}
] | 2024-06-24 | [
[
"Mukherjee",
"Suvadeep",
""
],
[
"Distler",
"Verena",
""
],
[
"Lenzini",
"Gabriele",
""
],
[
"Cardoso-Leite",
"Pedro",
""
]
] | Remote proctoring technology, a cheating-preventive measure, often raises privacy and fairness concerns that may affect test-takers' experiences and the validity of test results. Our study explores how selectively obfuscating information in video recordings can protect test-takers' privacy while ensuring effective and fair cheating detection. Interviews with experts (N=9) identified four key video regions indicative of potential cheating behaviors: the test-taker's face, body, background and the presence of individuals in the background. Experts recommended specific obfuscation methods for each region based on privacy significance and cheating behavior frequency, ranging from conventional blurring to advanced methods like replacement with deepfake, 3D avatars and silhouetting. We then conducted a vignette experiment with potential test-takers (N=259, non-experts) to evaluate their perceptions of cheating detection, visual privacy and fairness, using descriptions and examples of still images for each expert-recommended combination of video regions and obfuscation methods. Our results indicate that the effectiveness of obfuscation methods varies by region. Tailoring remote proctoring with region-specific advanced obfuscation methods can improve the perceptions of privacy and fairness compared to the conventional methods, though it may decrease perceived information sufficiency for detecting cheating. However, non-experts preferred conventional blurring for videos they were more willing to share, highlighting a gap between the perceived effectiveness of the advanced obfuscation methods and their practical acceptance. This study contributes to the field of user-centered privacy by suggesting promising directions to address current remote proctoring challenges and guiding future research. |
2108.07871 | Stephanie Schoch | Stephanie Schoch, Wanyu Du, Yangfeng Ji | Contextualizing Variation in Text Style Transfer Datasets | Accepted to INLG 2021 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Text style transfer involves rewriting the content of a source sentence in a
target style. Despite there being a number of style tasks with available data,
there has been limited systematic discussion of how text style datasets relate
to each other. This understanding, however, is likely to have implications for
selecting multiple data sources for model training. While it is prudent to
consider inherent stylistic properties when determining these relationships, we
also must consider how a style is realized in a particular dataset. In this
paper, we conduct several empirical analyses of existing text style datasets.
Based on our results, we propose a categorization of stylistic and dataset
properties to consider when utilizing or comparing text style datasets.
| [
{
"created": "Tue, 17 Aug 2021 20:54:24 GMT",
"version": "v1"
}
] | 2021-08-19 | [
[
"Schoch",
"Stephanie",
""
],
[
"Du",
"Wanyu",
""
],
[
"Ji",
"Yangfeng",
""
]
] | Text style transfer involves rewriting the content of a source sentence in a target style. Despite there being a number of style tasks with available data, there has been limited systematic discussion of how text style datasets relate to each other. This understanding, however, is likely to have implications for selecting multiple data sources for model training. While it is prudent to consider inherent stylistic properties when determining these relationships, we also must consider how a style is realized in a particular dataset. In this paper, we conduct several empirical analyses of existing text style datasets. Based on our results, we propose a categorization of stylistic and dataset properties to consider when utilizing or comparing text style datasets. |
2402.01335 | Chintan Trivedi | Chintan Trivedi, Nemanja Ra\v{s}ajski, Konstantinos Makantasis,
Antonios Liapis and Georgios N. Yannakakis | Simulator-Free Visual Domain Randomization via Video Games | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Domain randomization is an effective computer vision technique for improving
transferability of vision models across visually distinct domains exhibiting
similar content. Existing approaches, however, rely extensively on tweaking
complex and specialized simulation engines that are difficult to construct,
subsequently affecting their feasibility and scalability. This paper introduces
BehAVE, a video understanding framework that uniquely leverages the plethora of
existing commercial video games for domain randomization, without requiring
access to their simulation engines. Under BehAVE (1) the inherent rich visual
diversity of video games acts as the source of randomization and (2) player
behavior -- represented semantically via textual descriptions of actions --
guides the *alignment* of videos with similar content. We test BehAVE on 25
games of the first-person shooter (FPS) genre across various video and text
foundation models and we report its robustness for domain randomization. BehAVE
successfully aligns player behavioral patterns and is able to zero-shot
transfer them to multiple unseen FPS games when trained on just one FPS game.
In a more challenging setting, BehAVE manages to improve the zero-shot
transferability of foundation models to unseen FPS games (up to 22%) even when
trained on a game of a different genre (Minecraft). Code and dataset can be
found at https://github.com/nrasajski/BehAVE.
| [
{
"created": "Fri, 2 Feb 2024 11:40:27 GMT",
"version": "v1"
},
{
"created": "Thu, 30 May 2024 21:04:36 GMT",
"version": "v2"
}
] | 2024-06-03 | [
[
"Trivedi",
"Chintan",
""
],
[
"Rašajski",
"Nemanja",
""
],
[
"Makantasis",
"Konstantinos",
""
],
[
"Liapis",
"Antonios",
""
],
[
"Yannakakis",
"Georgios N.",
""
]
] | Domain randomization is an effective computer vision technique for improving transferability of vision models across visually distinct domains exhibiting similar content. Existing approaches, however, rely extensively on tweaking complex and specialized simulation engines that are difficult to construct, subsequently affecting their feasibility and scalability. This paper introduces BehAVE, a video understanding framework that uniquely leverages the plethora of existing commercial video games for domain randomization, without requiring access to their simulation engines. Under BehAVE (1) the inherent rich visual diversity of video games acts as the source of randomization and (2) player behavior -- represented semantically via textual descriptions of actions -- guides the *alignment* of videos with similar content. We test BehAVE on 25 games of the first-person shooter (FPS) genre across various video and text foundation models and we report its robustness for domain randomization. BehAVE successfully aligns player behavioral patterns and is able to zero-shot transfer them to multiple unseen FPS games when trained on just one FPS game. In a more challenging setting, BehAVE manages to improve the zero-shot transferability of foundation models to unseen FPS games (up to 22%) even when trained on a game of a different genre (Minecraft). Code and dataset can be found at https://github.com/nrasajski/BehAVE. |
2403.15388 | Yuzhang Shang | Yuzhang Shang, Mu Cai, Bingxin Xu, Yong Jae Lee, Yan Yan | LLaVA-PruMerge: Adaptive Token Reduction for Efficient Large Multimodal
Models | Project page: https://llava-prumerge.github.io/ | null | null | null | cs.CV cs.AI cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large Multimodal Models (LMMs) have shown significant visual reasoning
capabilities by connecting a visual encoder and a large language model. LMMs
typically take in a fixed and large amount of visual tokens, such as the
penultimate layer features in the CLIP visual encoder, as the prefix content.
Recent LMMs incorporate more complex visual inputs, such as high-resolution
images and videos, which further increases the number of visual tokens
significantly. However, due to the inherent design of the Transformer
architecture, the computational costs of these models tend to increase
quadratically with the number of input tokens. To tackle this problem, we
explore a token reduction mechanism that identifies significant spatial
redundancy among visual tokens. In response, we propose PruMerge, a novel
adaptive visual token reduction strategy that significantly reduces the number
of visual tokens without compromising the performance of LMMs. Specifically, to
measure the importance of each token, we exploit the sparsity observed in the
visual encoder, characterized by the sparse distribution of attention scores
between the class token and visual tokens. This sparsity enables us to
dynamically select the most crucial visual tokens to retain. Subsequently, we
cluster the selected (unpruned) tokens based on their key similarity and merge
them with the unpruned tokens, effectively supplementing and enhancing their
informational content. Empirically, when applied to LLaVA-1.5, our approach can
compress the visual tokens by 14 times on average, and achieve comparable
performance across diverse visual question-answering and reasoning tasks. Code
and checkpoints are at https://llava-prumerge.github.io/.
| [
{
"created": "Fri, 22 Mar 2024 17:59:52 GMT",
"version": "v1"
},
{
"created": "Mon, 25 Mar 2024 17:59:55 GMT",
"version": "v2"
},
{
"created": "Mon, 1 Apr 2024 14:08:06 GMT",
"version": "v3"
},
{
"created": "Fri, 12 Apr 2024 17:34:29 GMT",
"version": "v4"
},
{
"created": "Wed, 22 May 2024 20:50:37 GMT",
"version": "v5"
}
] | 2024-05-24 | [
[
"Shang",
"Yuzhang",
""
],
[
"Cai",
"Mu",
""
],
[
"Xu",
"Bingxin",
""
],
[
"Lee",
"Yong Jae",
""
],
[
"Yan",
"Yan",
""
]
] | Large Multimodal Models (LMMs) have shown significant visual reasoning capabilities by connecting a visual encoder and a large language model. LMMs typically take in a fixed and large amount of visual tokens, such as the penultimate layer features in the CLIP visual encoder, as the prefix content. Recent LMMs incorporate more complex visual inputs, such as high-resolution images and videos, which further increases the number of visual tokens significantly. However, due to the inherent design of the Transformer architecture, the computational costs of these models tend to increase quadratically with the number of input tokens. To tackle this problem, we explore a token reduction mechanism that identifies significant spatial redundancy among visual tokens. In response, we propose PruMerge, a novel adaptive visual token reduction strategy that significantly reduces the number of visual tokens without compromising the performance of LMMs. Specifically, to measure the importance of each token, we exploit the sparsity observed in the visual encoder, characterized by the sparse distribution of attention scores between the class token and visual tokens. This sparsity enables us to dynamically select the most crucial visual tokens to retain. Subsequently, we cluster the selected (unpruned) tokens based on their key similarity and merge them with the unpruned tokens, effectively supplementing and enhancing their informational content. Empirically, when applied to LLaVA-1.5, our approach can compress the visual tokens by 14 times on average, and achieve comparable performance across diverse visual question-answering and reasoning tasks. Code and checkpoints are at https://llava-prumerge.github.io/. |
1704.09028 | Benjamin Van Roy | Daniel Russo and David Tse and Benjamin Van Roy | Time-Sensitive Bandit Learning and Satisficing Thompson Sampling | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The literature on bandit learning and regret analysis has focused on contexts
where the goal is to converge on an optimal action in a manner that limits
exploration costs. One shortcoming imposed by this orientation is that it does
not treat time preference in a coherent manner. Time preference plays an
important role when the optimal action is costly to learn relative to
near-optimal actions. This limitation has not only restricted the relevance of
theoretical results but has also influenced the design of algorithms. Indeed,
popular approaches such as Thompson sampling and UCB can fare poorly in such
situations. In this paper, we consider discounted rather than cumulative
regret, where a discount factor encodes time preference. We propose satisficing
Thompson sampling -- a variation of Thompson sampling -- and establish a strong
discounted regret bound for this new algorithm.
| [
{
"created": "Fri, 28 Apr 2017 17:54:59 GMT",
"version": "v1"
}
] | 2017-05-01 | [
[
"Russo",
"Daniel",
""
],
[
"Tse",
"David",
""
],
[
"Van Roy",
"Benjamin",
""
]
] | The literature on bandit learning and regret analysis has focused on contexts where the goal is to converge on an optimal action in a manner that limits exploration costs. One shortcoming imposed by this orientation is that it does not treat time preference in a coherent manner. Time preference plays an important role when the optimal action is costly to learn relative to near-optimal actions. This limitation has not only restricted the relevance of theoretical results but has also influenced the design of algorithms. Indeed, popular approaches such as Thompson sampling and UCB can fare poorly in such situations. In this paper, we consider discounted rather than cumulative regret, where a discount factor encodes time preference. We propose satisficing Thompson sampling -- a variation of Thompson sampling -- and establish a strong discounted regret bound for this new algorithm. |
1403.7939 | Vincent Pilaud | J\"urgen Bokowski and Vincent Pilaud | Quasi-configurations: building blocks for point-line configurations | 12 pages, 9 figures | Ars Math. Contemp., 10(1): 99-112, 2016 | 10.26493/1855-3974.642.bbb | null | cs.CG cs.DM math.CO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study point-line incidence structures and their properties in the
projective plane. Our motivation is the problem of the existence of $(n_4)$
configurations, still open for few remaining values of $n$. Our approach is
based on quasi-configurations: point-line incidence structures where each point
is incident to at least $3$ lines and each line is incident to at least $3$
points. We investigate the existence problem for these quasi-configurations,
with a particular attention to $3|4$-configurations where each element is $3$-
or $4$-valent. We use these quasi-configurations to construct the first
$(37_4)$ and $(43_4)$ configurations. The existence problem of finding
$(22_4)$, $(23_4)$, and $(26_4)$ configurations remains open.
| [
{
"created": "Mon, 31 Mar 2014 10:17:56 GMT",
"version": "v1"
},
{
"created": "Thu, 4 Dec 2014 17:17:55 GMT",
"version": "v2"
}
] | 2023-11-14 | [
[
"Bokowski",
"Jürgen",
""
],
[
"Pilaud",
"Vincent",
""
]
] | We study point-line incidence structures and their properties in the projective plane. Our motivation is the problem of the existence of $(n_4)$ configurations, still open for few remaining values of $n$. Our approach is based on quasi-configurations: point-line incidence structures where each point is incident to at least $3$ lines and each line is incident to at least $3$ points. We investigate the existence problem for these quasi-configurations, with a particular attention to $3|4$-configurations where each element is $3$- or $4$-valent. We use these quasi-configurations to construct the first $(37_4)$ and $(43_4)$ configurations. The existence problem of finding $(22_4)$, $(23_4)$, and $(26_4)$ configurations remains open. |
1807.10819 | Andrew Jesson D | Andrew Jesson, Nicolas Guizard, Sina Hamidi Ghalehjegh, Damien Goblot,
Florian Soudan, Nicolas Chapados | CASED: Curriculum Adaptive Sampling for Extreme Data Imbalance | 20th International Conference on Medical Image Computing and Computer
Assisted Intervention 2017 | null | 10.1007/978-3-319-66179-7_73 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce CASED, a novel curriculum sampling algorithm that facilitates
the optimization of deep learning segmentation or detection models on data sets
with extreme class imbalance. We evaluate the CASED learning framework on the
task of lung nodule detection in chest CT. In contrast to two-stage solutions,
wherein nodule candidates are first proposed by a segmentation model and
refined by a second detection stage, CASED improves the training of deep nodule
segmentation models (e.g. UNet) to the point where state of the art results are
achieved using only a trivial detection stage. CASED improves the optimization
of deep segmentation models by allowing them to first learn how to distinguish
nodules from their immediate surroundings, while continuously adding a greater
proportion of difficult-to-classify global context, until uniformly sampling
from the empirical data distribution. Using CASED during training yields a
minimalist proposal to the lung nodule detection problem that tops the LUNA16
nodule detection benchmark with an average sensitivity score of 88.35%.
Furthermore, we find that models trained using CASED are robust to nodule
annotation quality by showing that comparable results can be achieved when only
a point and radius for each ground truth nodule are provided during training.
Finally, the CASED learning framework makes no assumptions with regard to
imaging modality or segmentation target and should generalize to other medical
imaging problems where class imbalance is a persistent problem.
| [
{
"created": "Fri, 27 Jul 2018 20:10:11 GMT",
"version": "v1"
}
] | 2018-07-31 | [
[
"Jesson",
"Andrew",
""
],
[
"Guizard",
"Nicolas",
""
],
[
"Ghalehjegh",
"Sina Hamidi",
""
],
[
"Goblot",
"Damien",
""
],
[
"Soudan",
"Florian",
""
],
[
"Chapados",
"Nicolas",
""
]
] | We introduce CASED, a novel curriculum sampling algorithm that facilitates the optimization of deep learning segmentation or detection models on data sets with extreme class imbalance. We evaluate the CASED learning framework on the task of lung nodule detection in chest CT. In contrast to two-stage solutions, wherein nodule candidates are first proposed by a segmentation model and refined by a second detection stage, CASED improves the training of deep nodule segmentation models (e.g. UNet) to the point where state of the art results are achieved using only a trivial detection stage. CASED improves the optimization of deep segmentation models by allowing them to first learn how to distinguish nodules from their immediate surroundings, while continuously adding a greater proportion of difficult-to-classify global context, until uniformly sampling from the empirical data distribution. Using CASED during training yields a minimalist proposal to the lung nodule detection problem that tops the LUNA16 nodule detection benchmark with an average sensitivity score of 88.35%. Furthermore, we find that models trained using CASED are robust to nodule annotation quality by showing that comparable results can be achieved when only a point and radius for each ground truth nodule are provided during training. Finally, the CASED learning framework makes no assumptions with regard to imaging modality or segmentation target and should generalize to other medical imaging problems where class imbalance is a persistent problem. |
1304.6476 | Noah Daniels | Noah M. Daniels | Remote Homology Detection in Proteins Using Graphical Models | Doctoral dissertation | null | 10.1109/TCBB.2014.2344682 | null | cs.CE q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Given the amino acid sequence of a protein, researchers often infer its
structure and function by finding homologous, or evolutionarily-related,
proteins of known structure and function. Since structure is typically more
conserved than sequence over long evolutionary distances, recognizing remote
protein homologs from their sequence poses a challenge.
We first consider all proteins of known three-dimensional structure, and
explore how they cluster according to different levels of homology. An
automatic computational method reasonably approximates a human-curated
hierarchical organization of proteins according to their degree of homology.
Next, we return to homology prediction, based only on the one-dimensional
amino acid sequence of a protein. Menke, Berger, and Cowen proposed a Markov
random field model to predict remote homology for beta-structural proteins, but
their formulation was computationally intractable on many beta-strand
topologies.
We show two different approaches to approximate this random field, both of
which make it computationally tractable, for the first time, on all protein
folds. One method simplifies the random field itself, while the other retains
the full random field, but approximates the solution through stochastic search.
Both methods achieve improvements over the state of the art in remote homology
detection for beta-structural protein folds.
| [
{
"created": "Wed, 24 Apr 2013 03:29:23 GMT",
"version": "v1"
}
] | 2015-03-23 | [
[
"Daniels",
"Noah M.",
""
]
] | Given the amino acid sequence of a protein, researchers often infer its structure and function by finding homologous, or evolutionarily-related, proteins of known structure and function. Since structure is typically more conserved than sequence over long evolutionary distances, recognizing remote protein homologs from their sequence poses a challenge. We first consider all proteins of known three-dimensional structure, and explore how they cluster according to different levels of homology. An automatic computational method reasonably approximates a human-curated hierarchical organization of proteins according to their degree of homology. Next, we return to homology prediction, based only on the one-dimensional amino acid sequence of a protein. Menke, Berger, and Cowen proposed a Markov random field model to predict remote homology for beta-structural proteins, but their formulation was computationally intractable on many beta-strand topologies. We show two different approaches to approximate this random field, both of which make it computationally tractable, for the first time, on all protein folds. One method simplifies the random field itself, while the other retains the full random field, but approximates the solution through stochastic search. Both methods achieve improvements over the state of the art in remote homology detection for beta-structural protein folds. |
1209.4233 | Laurent Najman | Roland Levillain (LIGM, LRDE), Thierry G\'eraud (LRDE), Laurent Najman
(LIGM) | Writing Reusable Digital Geometry Algorithms in a Generic Image
Processing Framework | Workshop on Applications of Discrete Geometry and Mathematical
Morphology, Istanb : France (2010) | null | 10.1007/978-3-642-32313-3_10 | null | cs.MS cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Digital Geometry software should reflect the generality of the underlying
mathematics: mapping the latter to the former requires genericity. By
designing generic solutions, one can effectively reuse digital geometry data
structures and algorithms. We propose an image processing framework focused on
the Generic Programming paradigm in which an algorithm on the paper can be
turned into a single code, written once and usable with various input types.
This approach enables users to design and implement new methods at a lower
cost, try cross-domain experiments and help generalize results.
| [
{
"created": "Tue, 18 Sep 2012 15:17:10 GMT",
"version": "v1"
}
] | 2012-09-20 | [
[
"Levillain",
"Roland",
"",
"LIGM, LRDE"
],
[
"Géraud",
"Thierry",
"",
"LRDE"
],
[
"Najman",
"Laurent",
"",
"LIGM"
]
] | Digital Geometry software should reflect the generality of the underlying mathematics: mapping the latter to the former requires genericity. By designing generic solutions, one can effectively reuse digital geometry data structures and algorithms. We propose an image processing framework focused on the Generic Programming paradigm in which an algorithm on the paper can be turned into a single code, written once and usable with various input types. This approach enables users to design and implement new methods at a lower cost, try cross-domain experiments and help generalize results. |
2001.09030 | Georg Maringer | Christian Deppe and Vladimir Lebedev and Georg Maringer | Bounds for the capacity error function for unidirectional channels with
noiseless feedback | 24 pages, short version accepted at ISIT 2020 | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In digital systems such as fiber optical communications, the ratio between
probability of errors of type $1\to 0$ and $0 \to 1$ can be large. Practically,
one can assume that only one type of error can occur. These errors are called
asymmetric. Unidirectional errors differ from asymmetric type of errors; here
both $1 \to 0$ and $0 \to 1$ type of errors are possible, but in any
submitted codeword all the errors are of the same type. This can be generalized
for the $q$-ary case. We consider $q$-ary unidirectional channels with feedback
and give bounds for the capacity error function. It turns out that the bounds
depend on the parity of the alphabet $q$. Furthermore, we show that for
feedback, the capacity error function for the binary asymmetric channel is
different from the symmetric channel. This is in contrast to the behavior of
the function without feedback.
| [
{
"created": "Fri, 24 Jan 2020 14:27:55 GMT",
"version": "v1"
},
{
"created": "Tue, 28 Apr 2020 14:12:43 GMT",
"version": "v2"
}
] | 2020-04-29 | [
[
"Deppe",
"Christian",
""
],
[
"Lebedev",
"Vladimir",
""
],
[
"Maringer",
"Georg",
""
]
] | In digital systems such as fiber optical communications, the ratio between probability of errors of type $1\to 0$ and $0 \to 1$ can be large. Practically, one can assume that only one type of error can occur. These errors are called asymmetric. Unidirectional errors differ from asymmetric type of errors; here both $1 \to 0$ and $0 \to 1$ type of errors are possible, but in any submitted codeword all the errors are of the same type. This can be generalized for the $q$-ary case. We consider $q$-ary unidirectional channels with feedback and give bounds for the capacity error function. It turns out that the bounds depend on the parity of the alphabet $q$. Furthermore, we show that for feedback, the capacity error function for the binary asymmetric channel is different from the symmetric channel. This is in contrast to the behavior of the function without feedback. |
2103.10207 | Nick W\"urdemann | Manuel Gieseking and Nick W\"urdemann | Canonical Representations for Direct Generation of Strategies in
High-level Petri Games (Full Version) | 31 pages, 5 figures, 2 tables, full version of the corresponding
Petri Nets 2021 (ICATPN2021) paper | null | null | null | cs.GT | http://creativecommons.org/licenses/by/4.0/ | Petri games are a multi-player game model for the synthesis problem in
distributed systems, i.e., the automatic generation of local controllers. The
model represents causal memory of the players, which are tokens on a Petri net
and divided into two teams: the controllable system and the uncontrollable
environment. For one environment player and a bounded number of system players,
the problem of solving Petri games can be reduced to that of solving B\"uchi
games.
High-level Petri games are a concise representation of ordinary Petri games.
Symmetries, derived from a high-level representation, can be exploited to
significantly reduce the state space in the corresponding B\"uchi game. We
present a new construction for solving high-level Petri games. It involves the
definition of a unique, canonical representation of the reduced B\"uchi game.
This allows us to translate a strategy in the B\"uchi game directly into a
strategy in the Petri game. An implementation applied on six structurally
different benchmark families shows in most cases a performance increase for
larger state spaces.
| [
{
"created": "Thu, 18 Mar 2021 12:20:08 GMT",
"version": "v1"
}
] | 2021-03-19 | [
[
"Gieseking",
"Manuel",
""
],
[
"Würdemann",
"Nick",
""
]
] | Petri games are a multi-player game model for the synthesis problem in distributed systems, i.e., the automatic generation of local controllers. The model represents causal memory of the players, which are tokens on a Petri net and divided into two teams: the controllable system and the uncontrollable environment. For one environment player and a bounded number of system players, the problem of solving Petri games can be reduced to that of solving B\"uchi games. High-level Petri games are a concise representation of ordinary Petri games. Symmetries, derived from a high-level representation, can be exploited to significantly reduce the state space in the corresponding B\"uchi game. We present a new construction for solving high-level Petri games. It involves the definition of a unique, canonical representation of the reduced B\"uchi game. This allows us to translate a strategy in the B\"uchi game directly into a strategy in the Petri game. An implementation applied on six structurally different benchmark families shows in most cases a performance increase for larger state spaces. |
1203.5395 | Mohammad Firooz | Mohammad Hamed Firooz, Sumit Roy | Data Dissemination in Wireless Networks with Network Coding | null | IEEE Communications Letters, Volume:17 , Issue: 5, 2013 | 10.1109/LCOMM.2013.031313.121994 | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate the use of network coding for information dissemination over a
wireless network. Using network coding allows for a simple, distributed and
robust algorithm where nodes do not need any information from their neighbors.
In this paper, we analyze the time needed to diffuse information throughout a
network when network coding is implemented at all nodes. We then provide an
upper bound for the dissemination time for ad-hoc networks with general
topology. Moreover, we derive a relation between dissemination time and the
size of the wireless network. It is shown that for a wireless network with N
nodes, the dissemination latency is between O(N) and O(N^2), depending on the
reception probabilities of the nodes. These observations are validated by the
simulation results.
| [
{
"created": "Sat, 24 Mar 2012 07:53:13 GMT",
"version": "v1"
},
{
"created": "Sun, 9 Sep 2012 21:11:46 GMT",
"version": "v2"
},
{
"created": "Sun, 8 Dec 2013 07:56:52 GMT",
"version": "v3"
}
] | 2016-11-18 | [
[
"Firooz",
"Mohammad Hamed",
""
],
[
"Roy",
"Sumit",
""
]
] | We investigate the use of network coding for information dissemination over a wireless network. Using network coding allows for a simple, distributed and robust algorithm where nodes do not need any information from their neighbors. In this paper, we analyze the time needed to diffuse information throughout a network when network coding is implemented at all nodes. We then provide an upper bound for the dissemination time for ad-hoc networks with general topology. Moreover, we derive a relation between dissemination time and the size of the wireless network. It is shown that for a wireless network with N nodes, the dissemination latency is between O(N) and O(N^2), depending on the reception probabilities of the nodes. These observations are validated by the simulation results. |
2110.02667 | Mehmet F. Demirel | Mehmet F. Demirel, Shengchao Liu, Siddhant Garg, Zhenmei Shi, Yingyu
Liang | Attentive Walk-Aggregating Graph Neural Networks | Published in TMLR (Transactions on Machine Learning Research)
(08/2022) 32 pages | null | null | null | cs.LG cs.SI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graph neural networks (GNNs) have been shown to possess strong representation
power, which can be exploited for downstream prediction tasks on
graph-structured data, such as molecules and social networks. They typically
learn representations by aggregating information from the $K$-hop neighborhood
of individual vertices or from the enumerated walks in the graph. Prior studies
have demonstrated the effectiveness of incorporating weighting schemes into
GNNs; however, this has been primarily limited to $K$-hop neighborhood GNNs so
far. In this paper, we aim to design an algorithm incorporating weighting
schemes into walk-aggregating GNNs and analyze their effect. We propose a novel
GNN model, called AWARE, that aggregates information about the walks in the
graph using attention schemes. This leads to an end-to-end supervised learning
method for graph-level prediction tasks in the standard setting where the input
is the adjacency and vertex information of a graph, and the output is a
predicted label for the graph. We then perform theoretical, empirical, and
interpretability analyses of AWARE. Our theoretical analysis in a simplified
setting identifies successful conditions for provable guarantees, demonstrating
how the graph information is encoded in the representation, and how the
weighting schemes in AWARE affect the representation and learning performance.
Our experiments demonstrate the strong performance of AWARE in graph-level
prediction tasks in the standard setting in the domains of molecular property
prediction and social networks. Lastly, our interpretation study illustrates
that AWARE can successfully capture the important substructures of the input
graph. The code is available on
$\href{https://github.com/mehmetfdemirel/aware}{GitHub}$.
| [
{
"created": "Wed, 6 Oct 2021 11:41:12 GMT",
"version": "v1"
},
{
"created": "Sun, 21 Aug 2022 14:29:46 GMT",
"version": "v2"
}
] | 2022-08-23 | [
[
"Demirel",
"Mehmet F.",
""
],
[
"Liu",
"Shengchao",
""
],
[
"Garg",
"Siddhant",
""
],
[
"Shi",
"Zhenmei",
""
],
[
"Liang",
"Yingyu",
""
]
] | Graph neural networks (GNNs) have been shown to possess strong representation power, which can be exploited for downstream prediction tasks on graph-structured data, such as molecules and social networks. They typically learn representations by aggregating information from the $K$-hop neighborhood of individual vertices or from the enumerated walks in the graph. Prior studies have demonstrated the effectiveness of incorporating weighting schemes into GNNs; however, this has been primarily limited to $K$-hop neighborhood GNNs so far. In this paper, we aim to design an algorithm incorporating weighting schemes into walk-aggregating GNNs and analyze their effect. We propose a novel GNN model, called AWARE, that aggregates information about the walks in the graph using attention schemes. This leads to an end-to-end supervised learning method for graph-level prediction tasks in the standard setting where the input is the adjacency and vertex information of a graph, and the output is a predicted label for the graph. We then perform theoretical, empirical, and interpretability analyses of AWARE. Our theoretical analysis in a simplified setting identifies successful conditions for provable guarantees, demonstrating how the graph information is encoded in the representation, and how the weighting schemes in AWARE affect the representation and learning performance. Our experiments demonstrate the strong performance of AWARE in graph-level prediction tasks in the standard setting in the domains of molecular property prediction and social networks. Lastly, our interpretation study illustrates that AWARE can successfully capture the important substructures of the input graph. The code is available on $\href{https://github.com/mehmetfdemirel/aware}{GitHub}$. |
1408.4049 | Giuseppe Toscani | Giuseppe Toscani | A strengthened entropy power inequality for log-concave densities | 21 pages | null | null | null | cs.IT math.FA math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We show that Shannon's entropy--power inequality admits a strengthened
version in the case in which the densities are log-concave. In such a case, in
fact, one can extend the Blachman--Stam argument to obtain a sharp inequality
for the second derivative of Shannon's entropy functional with respect to the
heat semigroup.
| [
{
"created": "Mon, 18 Aug 2014 15:49:11 GMT",
"version": "v1"
}
] | 2014-08-19 | [
[
"Toscani",
"Giuseppe",
""
]
] | We show that Shannon's entropy--power inequality admits a strengthened version in the case in which the densities are log-concave. In such a case, in fact, one can extend the Blachman--Stam argument to obtain a sharp inequality for the second derivative of Shannon's entropy functional with respect to the heat semigroup. |
1503.00173 | Jonathan Mei | Jonathan Mei and Jos\'e M. F. Moura | Signal Processing on Graphs: Causal Modeling of Unstructured Data | null | IEEE Transactions on Signal Processing, vol. 65, no. 8, pp.
2077-2092, April 15, 2017 | 10.1109/TSP.2016.2634543 | null | cs.IT math.IT stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many applications collect a large number of time series, for example, the
financial data of companies quoted in a stock exchange, the health care data of
all patients that visit the emergency room of a hospital, or the temperature
sequences continuously measured by weather stations across the US. These data
are often referred to as unstructured. A first task in its analytics is to
derive a low dimensional representation, a graph or discrete manifold, that
describes well the interrelations among the time series and their
intrarelations across time. This paper presents a computationally tractable
algorithm for estimating this graph that structures the data. The resulting
graph is directed and weighted, possibly capturing causal relations, not just
reciprocal correlations as in many existing approaches in the literature. A
convergence analysis is carried out. The algorithm is demonstrated on random
graph datasets and real network time series datasets, and its performance is
compared to that of related methods. The adjacency matrices estimated with the
new method are close to the true graph in the simulated data and consistent
with prior physical knowledge in the real dataset tested.
| [
{
"created": "Sat, 28 Feb 2015 20:28:05 GMT",
"version": "v1"
},
{
"created": "Thu, 14 Apr 2016 20:58:45 GMT",
"version": "v2"
},
{
"created": "Tue, 13 Sep 2016 13:19:02 GMT",
"version": "v3"
},
{
"created": "Mon, 31 Oct 2016 22:05:33 GMT",
"version": "v4"
},
{
"created": "Wed, 30 Nov 2016 19:12:41 GMT",
"version": "v5"
},
{
"created": "Wed, 8 Feb 2017 15:49:58 GMT",
"version": "v6"
}
] | 2017-02-09 | [
[
"Mei",
"Jonathan",
""
],
[
"Moura",
"José M. F.",
""
]
] | Many applications collect a large number of time series, for example, the financial data of companies quoted in a stock exchange, the health care data of all patients that visit the emergency room of a hospital, or the temperature sequences continuously measured by weather stations across the US. These data are often referred to as unstructured. A first task in its analytics is to derive a low dimensional representation, a graph or discrete manifold, that describes well the interrelations among the time series and their intrarelations across time. This paper presents a computationally tractable algorithm for estimating this graph that structures the data. The resulting graph is directed and weighted, possibly capturing causal relations, not just reciprocal correlations as in many existing approaches in the literature. A convergence analysis is carried out. The algorithm is demonstrated on random graph datasets and real network time series datasets, and its performance is compared to that of related methods. The adjacency matrices estimated with the new method are close to the true graph in the simulated data and consistent with prior physical knowledge in the real dataset tested. |
2103.14124 | Ansgar Scherp | Steffen Epp, Marcel Hoffmann, Nicolas Lell, Michael Mohr, Ansgar
Scherp | STEREO: A Pipeline for Extracting Experiment Statistics, Conditions, and
Topics from Scientific Papers | Paper accepted at iiWAS2021 | null | null | null | cs.DL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | A common writing style for statistical results follows the recommendations of the
American Psychological Association, known as APA-style. However, in practice,
writing styles vary as reports are not 100% following APA-style or parameters
are not reported despite being mandatory. In addition, the statistics are not
reported in isolation but in context of experimental conditions investigated
and the general topic. We address these challenges by proposing a flexible
pipeline STEREO based on active wrapper induction and unsupervised aspect
extraction. We applied our pipeline to the over 100,000 documents in the
CORD-19 dataset. It required only 0.25% of the corpus (about 500 documents) to
learn statistics extraction rules that cover 95% of the sentences in CORD-19.
The statistic extraction has nearly 100% precision on APA-conform and 95%
precision on non-APA writing styles. In total, we were able to extract 113k
reported statistics, of which only <1% is APA conform. We could extract in 46%
the correct conditions from APA-conform reports (30% for non-APA). The best
model for topic extraction achieves a precision of 75% on statistics reported
in APA style (73% for non-APA conform). We conclude that STEREO is a good
foundation for automatic statistic extraction and future developments for
scientific paper analysis. Particularly the extraction of non-APA conform
reports is important and allows applications such as giving feedback to authors
about what is missing and could be changed.
| [
{
"created": "Thu, 25 Mar 2021 20:30:57 GMT",
"version": "v1"
},
{
"created": "Tue, 6 Dec 2022 20:43:50 GMT",
"version": "v2"
}
] | 2022-12-09 | [
[
"Epp",
"Steffen",
""
],
[
"Hoffmann",
"Marcel",
""
],
[
"Lell",
"Nicolas",
""
],
[
"Mohr",
"Michael",
""
],
[
"Scherp",
"Ansgar",
""
]
] | A common writing style for statistical results follows the recommendations of the American Psychological Association, known as APA-style. However, in practice, writing styles vary as reports are not 100% following APA-style or parameters are not reported despite being mandatory. In addition, the statistics are not reported in isolation but in context of experimental conditions investigated and the general topic. We address these challenges by proposing a flexible pipeline STEREO based on active wrapper induction and unsupervised aspect extraction. We applied our pipeline to the over 100,000 documents in the CORD-19 dataset. It required only 0.25% of the corpus (about 500 documents) to learn statistics extraction rules that cover 95% of the sentences in CORD-19. The statistic extraction has nearly 100% precision on APA-conform and 95% precision on non-APA writing styles. In total, we were able to extract 113k reported statistics, of which only <1% is APA conform. We could extract in 46% the correct conditions from APA-conform reports (30% for non-APA). The best model for topic extraction achieves a precision of 75% on statistics reported in APA style (73% for non-APA conform). We conclude that STEREO is a good foundation for automatic statistic extraction and future developments for scientific paper analysis. Particularly the extraction of non-APA conform reports is important and allows applications such as giving feedback to authors about what is missing and could be changed. |
2109.13009 | Soeren Becker | Soeren Becker, Florian Schmidt, Lauritz Thamsen, Ana Juan Ferrer, Odej
Kao | LOS: Local-Optimistic Scheduling of Periodic Model Training For Anomaly
Detection on Sensor Data Streams in Meshed Edge Networks | 2nd IEEE International Conference on Autonomic Computing and
Self-Organizing Systems - ACSOS 2021 | null | null | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Anomaly detection is increasingly important to handle the amount of sensor
data in Edge and Fog environments, Smart Cities, as well as in Industry 4.0. To
ensure good results, the utilized ML models need to be updated periodically to
adapt to seasonal changes and concept drifts in the sensor data. Although the
increasing resource availability at the edge can allow for in-situ execution of
model training directly on the devices, it is still often offloaded to fog
devices or the cloud.
In this paper, we propose Local-Optimistic Scheduling (LOS), a method for
executing periodic ML model training jobs in close proximity to the data
sources, without overloading lightweight edge devices. Training jobs are
offloaded to nearby neighbor nodes as necessary and the resource consumption is
optimized to meet the training period while still ensuring enough resources for
further training executions. This scheduling is accomplished in a
decentralized, collaborative and opportunistic manner, without full knowledge
of the infrastructure and workload. We evaluated our method in an edge
computing testbed on real-world datasets. The experimental results show that
LOS places the training executions close to the input sensor streams, decreases
the deviation between training time and training period by up to 40% and
increases the amount of successfully scheduled training jobs compared to an
in-situ execution.
| [
{
"created": "Mon, 27 Sep 2021 12:45:26 GMT",
"version": "v1"
}
] | 2021-09-28 | [
[
"Becker",
"Soeren",
""
],
[
"Schmidt",
"Florian",
""
],
[
"Thamsen",
"Lauritz",
""
],
[
"Ferrer",
"Ana Juan",
""
],
[
"Kao",
"Odej",
""
]
] | Anomaly detection is increasingly important to handle the amount of sensor data in Edge and Fog environments, Smart Cities, as well as in Industry 4.0. To ensure good results, the utilized ML models need to be updated periodically to adapt to seasonal changes and concept drifts in the sensor data. Although the increasing resource availability at the edge can allow for in-situ execution of model training directly on the devices, it is still often offloaded to fog devices or the cloud. In this paper, we propose Local-Optimistic Scheduling (LOS), a method for executing periodic ML model training jobs in close proximity to the data sources, without overloading lightweight edge devices. Training jobs are offloaded to nearby neighbor nodes as necessary and the resource consumption is optimized to meet the training period while still ensuring enough resources for further training executions. This scheduling is accomplished in a decentralized, collaborative and opportunistic manner, without full knowledge of the infrastructure and workload. We evaluated our method in an edge computing testbed on real-world datasets. The experimental results show that LOS places the training executions close to the input sensor streams, decreases the deviation between training time and training period by up to 40% and increases the amount of successfully scheduled training jobs compared to an in-situ execution. |
1802.07572 | David McAllester | David McAllester | Information Theoretic Co-Training | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces an information theoretic co-training objective for
unsupervised learning. We consider the problem of predicting the future. Rather
than predict future sensations (image pixels or sound waves) we predict
"hypotheses" to be confirmed by future sensations. More formally, we assume a
population distribution on pairs $(x,y)$ where we can think of $x$ as a past
sensation and $y$ as a future sensation. We train both a predictor model
$P_\Phi(z|x)$ and a confirmation model $P_\Psi(z|y)$ where we view $z$ as
hypotheses (when predicted) or facts (when confirmed). For a population
distribution on pairs $(x,y)$ we focus on the problem of measuring the mutual
information between $x$ and $y$. By the data processing inequality this mutual
information is at least as large as the mutual information between $x$ and $z$
under the distribution on triples $(x,z,y)$ defined by the confirmation model
$P_\Psi(z|y)$. The information theoretic training objective for $P_\Phi(z|x)$
and $P_\Psi(z|y)$ can be viewed as a form of co-training where we want the
prediction from $x$ to match the confirmation from $y$.
| [
{
"created": "Wed, 21 Feb 2018 14:01:20 GMT",
"version": "v1"
},
{
"created": "Tue, 14 Aug 2018 13:06:04 GMT",
"version": "v2"
}
] | 2018-08-15 | [
[
"McAllester",
"David",
""
]
] | This paper introduces an information theoretic co-training objective for unsupervised learning. We consider the problem of predicting the future. Rather than predict future sensations (image pixels or sound waves) we predict "hypotheses" to be confirmed by future sensations. More formally, we assume a population distribution on pairs $(x,y)$ where we can think of $x$ as a past sensation and $y$ as a future sensation. We train both a predictor model $P_\Phi(z|x)$ and a confirmation model $P_\Psi(z|y)$ where we view $z$ as hypotheses (when predicted) or facts (when confirmed). For a population distribution on pairs $(x,y)$ we focus on the problem of measuring the mutual information between $x$ and $y$. By the data processing inequality this mutual information is at least as large as the mutual information between $x$ and $z$ under the distribution on triples $(x,z,y)$ defined by the confirmation model $P_\Psi(z|y)$. The information theoretic training objective for $P_\Phi(z|x)$ and $P_\Psi(z|y)$ can be viewed as a form of co-training where we want the prediction from $x$ to match the confirmation from $y$. |
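The data-processing-inequality argument in the abstract can be written compactly. The first line below restates it; the second line is only a schematic, assumed form of a training objective (a standard cross-entropy lower bound on mutual information), since the abstract does not spell out the exact objective.

```latex
% (x, y) ~ Pop, z ~ P_Psi(z | y): x -> y -> z forms a Markov chain, so
\[
  I(x;y) \;\ge\; I(x;z) \qquad \text{(data processing inequality)}
\]
% Assumed illustrative objective: maximize a lower bound on I(x;z) by making
% the prediction P_Phi(z|x) agree with the confirmation drawn from P_Psi(z|y):
\[
  I(x;z) \;=\; H(z) - H(z\mid x)
  \;\ge\; H(z) \;+\;
  \mathbb{E}_{(x,y)\sim\mathrm{Pop}}\,
  \mathbb{E}_{z\sim P_\Psi(\cdot\mid y)}\big[\log P_\Phi(z\mid x)\big]
\]
```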
2106.07473 | Nandini Ramanan | Deokwoo Jung, Nandini Ramanan, Mehrnaz Amjadi, Sankeerth Rao
Karingula, Jake Taylor, and Claudionor Nunes Coelho Jr | Time Series Anomaly Detection with label-free Model Selection | 11 pages, 1 Figure, 4 tables | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Anomaly detection for time-series data becomes an essential task for many
data-driven applications fueled with an abundance of data and out-of-the-box
machine-learning algorithms. In many real-world settings, developing a reliable
anomaly model is highly challenging due to insufficient anomaly labels and the
prohibitively expensive cost of obtaining anomaly examples. It imposes a
significant bottleneck to evaluate model quality for model selection and
parameter tuning reliably. As a result, many existing anomaly detection
algorithms fail to show their promised performance after deployment.
In this paper, we propose LaF-AD, a novel anomaly detection algorithm with
label-free model selection for unlabeled times-series data. Our proposed
algorithm performs a fully unsupervised ensemble learning across a large number
of candidate parametric models. We develop a model variance metric that
quantifies the sensitivity of anomaly probability with a bootstrapping method.
Then it makes a collective decision for anomaly events by model learners using
the model variance. Our algorithm is easily parallelizable, more robust for
ill-conditioned and seasonal data, and highly scalable for a large number of
anomaly models. We evaluate our algorithm against other state-of-the-art
methods on a synthetic domain and a benchmark public data set.
| [
{
"created": "Fri, 11 Jun 2021 00:21:06 GMT",
"version": "v1"
}
] | 2021-06-15 | [
[
"Jung",
"Deokwoo",
""
],
[
"Ramanan",
"Nandini",
""
],
[
"Amjadi",
"Mehrnaz",
""
],
[
"Karingula",
"Sankeerth Rao",
""
],
[
"Taylor",
"Jake",
""
],
[
"Coelho",
"Claudionor Nunes",
"Jr"
]
] | Anomaly detection for time-series data becomes an essential task for many data-driven applications fueled with an abundance of data and out-of-the-box machine-learning algorithms. In many real-world settings, developing a reliable anomaly model is highly challenging due to insufficient anomaly labels and the prohibitively expensive cost of obtaining anomaly examples. It imposes a significant bottleneck to evaluate model quality for model selection and parameter tuning reliably. As a result, many existing anomaly detection algorithms fail to show their promised performance after deployment. In this paper, we propose LaF-AD, a novel anomaly detection algorithm with label-free model selection for unlabeled times-series data. Our proposed algorithm performs a fully unsupervised ensemble learning across a large number of candidate parametric models. We develop a model variance metric that quantifies the sensitivity of anomaly probability with a bootstrapping method. Then it makes a collective decision for anomaly events by model learners using the model variance. Our algorithm is easily parallelizable, more robust for ill-conditioned and seasonal data, and highly scalable for a large number of anomaly models. We evaluate our algorithm against other state-of-the-art methods on a synthetic domain and a benchmark public data set. |
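The abstract names the ingredients of LaF-AD (candidate parametric models, a bootstrapped variance of the anomaly probability, a collective decision) without specifying them. The sketch below is only a toy illustration of that recipe; the model interface (`fit`, `anomaly_proba`) and the variance-weighted vote are assumptions, not the authors' algorithm.

```python
import numpy as np

def bootstrap_variance(model, series, n_boot=20, rng=None):
    """Estimate how sensitive a model's anomaly probabilities are to resampling.

    A toy reading of the 'model variance metric' in the abstract; the paper's
    exact bootstrapping scheme is not specified there.
    """
    rng = np.random.default_rng(rng)
    n = len(series)
    probs = []
    for _ in range(n_boot):
        idx = np.sort(rng.integers(0, n, size=n))   # bootstrap resample of time indices
        fitted = model.fit(series[idx])              # assumed fit/score API
        probs.append(fitted.anomaly_proba(series))   # anomaly probability per point
    return np.var(np.stack(probs), axis=0).mean()

def ensemble_decision(candidate_models, series, threshold=0.5):
    """Variance-weighted collective vote over candidate parametric models."""
    weights, votes = [], []
    for model in candidate_models:
        var = bootstrap_variance(model, series)
        weights.append(1.0 / (var + 1e-9))           # stable models count more
        votes.append(model.fit(series).anomaly_proba(series))
    weights = np.array(weights) / np.sum(weights)
    score = np.tensordot(weights, np.stack(votes), axes=1)
    return score > threshold                         # boolean anomaly flags per step
```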
1504.05477 | Christopher Musco | Cameron Musco and Christopher Musco | Randomized Block Krylov Methods for Stronger and Faster Approximate
Singular Value Decomposition | Neural Information Processing Systems 2015 | null | null | null | cs.DS cs.LG cs.NA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Since being analyzed by Rokhlin, Szlam, and Tygert and popularized by Halko,
Martinsson, and Tropp, randomized Simultaneous Power Iteration has become the
method of choice for approximate singular value decomposition. It is more
accurate than simpler sketching algorithms, yet still converges quickly for any
matrix, independently of singular value gaps. After $\tilde{O}(1/\epsilon)$
iterations, it gives a low-rank approximation within $(1+\epsilon)$ of optimal
for spectral norm error.
We give the first provable runtime improvement on Simultaneous Iteration: a
simple randomized block Krylov method, closely related to the classic Block
Lanczos algorithm, gives the same guarantees in just
$\tilde{O}(1/\sqrt{\epsilon})$ iterations and performs substantially better
experimentally. Despite their long history, our analysis is the first of a
Krylov subspace method that does not depend on singular value gaps, which are
unreliable in practice.
Furthermore, while it is a simple accuracy benchmark, even $(1+\epsilon)$
error for spectral norm low-rank approximation does not imply that an algorithm
returns high quality principal components, a major issue for data applications.
We address this problem for the first time by showing that both Block Krylov
Iteration and a minor modification of Simultaneous Iteration give nearly
optimal PCA for any matrix. This result further justifies their strength over
non-iterative sketching methods.
Finally, we give insight beyond the worst case, justifying why both
algorithms can run much faster in practice than predicted. We clarify how
simple techniques can take advantage of common matrix properties to
significantly improve runtime.
| [
{
"created": "Tue, 21 Apr 2015 15:48:44 GMT",
"version": "v1"
},
{
"created": "Sat, 6 Jun 2015 23:43:50 GMT",
"version": "v2"
},
{
"created": "Wed, 1 Jul 2015 03:55:11 GMT",
"version": "v3"
},
{
"created": "Fri, 30 Oct 2015 19:35:08 GMT",
"version": "v4"
}
] | 2015-11-02 | [
[
"Musco",
"Cameron",
""
],
[
"Musco",
"Christopher",
""
]
] | Since being analyzed by Rokhlin, Szlam, and Tygert and popularized by Halko, Martinsson, and Tropp, randomized Simultaneous Power Iteration has become the method of choice for approximate singular value decomposition. It is more accurate than simpler sketching algorithms, yet still converges quickly for any matrix, independently of singular value gaps. After $\tilde{O}(1/\epsilon)$ iterations, it gives a low-rank approximation within $(1+\epsilon)$ of optimal for spectral norm error. We give the first provable runtime improvement on Simultaneous Iteration: a simple randomized block Krylov method, closely related to the classic Block Lanczos algorithm, gives the same guarantees in just $\tilde{O}(1/\sqrt{\epsilon})$ iterations and performs substantially better experimentally. Despite their long history, our analysis is the first of a Krylov subspace method that does not depend on singular value gaps, which are unreliable in practice. Furthermore, while it is a simple accuracy benchmark, even $(1+\epsilon)$ error for spectral norm low-rank approximation does not imply that an algorithm returns high quality principal components, a major issue for data applications. We address this problem for the first time by showing that both Block Krylov Iteration and a minor modification of Simultaneous Iteration give nearly optimal PCA for any matrix. This result further justifies their strength over non-iterative sketching methods. Finally, we give insight beyond the worst case, justifying why both algorithms can run much faster in practice than predicted. We clarify how simple techniques can take advantage of common matrix properties to significantly improve runtime. |
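A minimal numpy sketch of the Block Krylov Iteration the abstract refers to: build the block Krylov subspace, orthonormalize it, and take the SVD of the projected matrix. The block size and the iteration count needed for a given accuracy guarantee are not reproduced here.

```python
import numpy as np

def block_krylov_svd(A, k, iters):
    """Rank-k SVD approximation via a randomized block Krylov subspace."""
    m, n = A.shape
    S = np.random.randn(n, k)          # random start block
    Y = A @ S
    blocks = [Y]
    for _ in range(iters):
        Y = A @ (A.T @ Y)              # one more Krylov block
        blocks.append(Y)
    K = np.hstack(blocks)
    Q, _ = np.linalg.qr(K)             # orthonormal basis of the Krylov subspace
    B = Q.T @ A                        # project A onto the subspace
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub
    return U[:, :k], s[:k], Vt[:k, :]  # rank-k factors

# Usage: U, s, Vt = block_krylov_svd(np.random.randn(1000, 300), k=10, iters=5)
```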
1208.4528 | Mohamed I Shehata | E. Ahmed, M. I. Shehata and H. A. A. El-Saka | On Dynamical Cournot Game on a Graph | null | null | null | null | cs.GT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cournot dynamical game is studied on a graph. The stability of the system is
studied. Prisoner's dilemma game is used to model natural gas transmission.
| [
{
"created": "Fri, 13 Jul 2012 05:40:46 GMT",
"version": "v1"
}
] | 2012-08-23 | [
[
"Ahmed",
"E.",
""
],
[
"Shehata",
"M. I.",
""
],
[
"El-Saka",
"H. A. A.",
""
]
] | Cournot dynamical game is studied on a graph. The stability of the system is studied. Prisoner's dilemma game is used to model natural gas transmission. |
1810.02276 | Muhammad Amjad | Muhammad Amjad and Leila Musavian | Performance Analysis of NOMA for Ultra-Reliable and Low-Latency
Communications | null | null | null | null | cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Grant-free non-orthogonal multiple access (NOMA) has been regarded as a
key-enabler technology for ultra-reliable and low-latency communications
(URLLC). In this paper, we analyse the performance of NOMA with short packet
communications for URLLC. In this regard, the overall packet loss probability
consists of transmission error probability and queueing-delay violation
probability. Queueing-delay has been modelled using the effective bandwidth.
Due to short transmission time, the infinite block-length has been replaced
with finite blocklength of the channel codes which rules out the application of
Shannon's formula. The achievable effective bandwidth of the system is derived,
and then, the transmission error probability has been analysed. The derivations
are validated through extensive simulations, which shows the variations of the
signal-to-noise ratio (SNR) requirements of the system for various
transmission-error probability, QoS exponent, and the transmission packet size.
| [
{
"created": "Thu, 4 Oct 2018 15:30:12 GMT",
"version": "v1"
}
] | 2018-10-05 | [
[
"Amjad",
"Muhammad",
""
],
[
"Musavian",
"Leila",
""
]
] | Grant-free non-orthogonal multiple access (NOMA) has been regarded as a key-enabler technology for ultra-reliable and low-latency communications (URLLC). In this paper, we analyse the performance of NOMA with short packet communications for URLLC. In this regard, the overall packet loss probability consists of transmission error probability and queueing-delay violation probability. Queueing-delay has been modelled using the effective bandwidth. Due to short transmission time, the infinite block-length has been replaced with finite blocklength of the channel codes which rules out the application of Shannon's formula. The achievable effective bandwidth of the system is derived, and then, the transmission error probability has been analysed. The derivations are validated through extensive simulations, which shows the variations of the signal-to-noise ratio (SNR) requirements of the system for various transmission-error probability, QoS exponent, and the transmission packet size. |
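For orientation, the finite-blocklength replacement for Shannon's formula that the abstract alludes to is commonly written with the normal approximation below. This is the standard expression from the finite-blocklength literature, shown only as an assumed reference point; the paper's own derivation is not reproduced.

```latex
% Normal approximation to the maximal coding rate at blocklength n and error
% probability epsilon; gamma is the SNR and Q^{-1} the inverse Gaussian Q-function.
\[
  R(n,\epsilon) \;\approx\; \log_2(1+\gamma)
  \;-\; \sqrt{\tfrac{V}{n}}\; Q^{-1}(\epsilon)\,\log_2 e,
  \qquad
  V \;=\; 1-\frac{1}{(1+\gamma)^{2}}
\]
```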
1902.07535 | Akira Imakura | Akira Imakura and Tetsuya Sakurai | Data collaboration analysis for distributed datasets | 7 pages | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a data collaboration analysis method for
distributed datasets. The proposed method is a centralized machine learning
while training datasets and models remain distributed over some institutions.
Recently, data became large and distributed with decreasing costs of data
collection. If we can centralize these distributed datasets and analyse them as
one dataset, we expect to obtain novel insight and achieve a higher prediction
performance compared with individual analyses on each distributed dataset.
However, it is generally difficult to centralize the original datasets due to
their huge data size or regarding a privacy-preserving problem. To avoid these
difficulties, we propose a data collaboration analysis method for distributed
datasets without sharing the original datasets. The proposed method centralizes
only intermediate representation constructed individually instead of the
original dataset.
| [
{
"created": "Wed, 20 Feb 2019 12:33:39 GMT",
"version": "v1"
}
] | 2019-02-21 | [
[
"Imakura",
"Akira",
""
],
[
"Sakurai",
"Tetsuya",
""
]
] | In this paper, we propose a data collaboration analysis method for distributed datasets. The proposed method is a centralized machine learning while training datasets and models remain distributed over some institutions. Recently, data became large and distributed with decreasing costs of data collection. If we can centralize these distributed datasets and analyse them as one dataset, we expect to obtain novel insight and achieve a higher prediction performance compared with individual analyses on each distributed dataset. However, it is generally difficult to centralize the original datasets due to their huge data size or regarding a privacy-preserving problem. To avoid these difficulties, we propose a data collaboration analysis method for distributed datasets without sharing the original datasets. The proposed method centralizes only intermediate representation constructed individually instead of the original dataset. |
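A minimal sketch of the idea stated in the abstract: each institution shares only a locally constructed intermediate representation, and the centre analyses the pooled representations instead of the raw data. The use of PCA and the helper names are assumptions; the paper's actual construction, and its step for integrating differently built representations, are not reproduced here.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

def local_intermediate(X, dim=8, seed=0):
    """Each institution builds its own intermediate representation locally."""
    return PCA(n_components=dim, random_state=seed).fit_transform(X)

def centralized_analysis(parts):
    """Centre receives only (representation, label) pairs, never raw features."""
    Z = np.vstack([z for z, _ in parts])
    y = np.concatenate([labels for _, labels in parts])
    return LogisticRegression(max_iter=1000).fit(Z, y)

# Hypothetical usage with two institutions' private data (X1, y1), (X2, y2):
# model = centralized_analysis([(local_intermediate(X1), y1),
#                               (local_intermediate(X2), y2)])
```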
1809.09293 | Vaneet Aggarwal | Vaneet Aggarwal and Hamed Asadi and Mayank Gupta and Jae Joong Lee and
Denny Yu | Covfefe: A Computer Vision Approach For Estimating Force Exertion | 12 pages | null | null | null | cs.HC cs.AI cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cumulative exposure to repetitive and forceful activities may lead to
musculoskeletal injuries which not only reduce workers' efficiency and
productivity, but also affect their quality of life. Thus, widely accessible
techniques for reliable detection of unsafe muscle force exertion levels for
human activity is necessary for their well-being. However, measurement of force
exertion levels is challenging and the existing techniques pose a great
challenge as they are either intrusive, interfere with human-machine interface,
and/or subjective in the nature, thus are not scalable for all workers. In this
work, we use face videos and the photoplethysmography (PPG) signals to classify
force exertion levels of 0\%, 50\%, and 100\% (representing rest, moderate
effort, and high effort), thus providing a non-intrusive and scalable approach.
Efficient feature extraction approaches have been investigated, including
standard deviation of the movement of different landmarks of the face,
distances between peaks and troughs in the PPG signals. We note that the PPG
signals can be obtained from the face videos, thus giving an efficient
classification algorithm for the force exertion levels using face videos. Based
on the data collected from 20 subjects, features extracted from the face videos
give 90\% accuracy in classification among the 100\% and the combination of 0\%
and 50\% datasets. Further combining the PPG signals provide 81.7\% accuracy.
The approach is also shown to be robust to the correctly identify force level
when the person is talking, even though such datasets are not included in the
training.
| [
{
"created": "Tue, 25 Sep 2018 02:45:19 GMT",
"version": "v1"
}
] | 2018-09-26 | [
[
"Aggarwal",
"Vaneet",
""
],
[
"Asadi",
"Hamed",
""
],
[
"Gupta",
"Mayank",
""
],
[
"Lee",
"Jae Joong",
""
],
[
"Yu",
"Denny",
""
]
] | Cumulative exposure to repetitive and forceful activities may lead to musculoskeletal injuries which not only reduce workers' efficiency and productivity, but also affect their quality of life. Thus, widely accessible techniques for reliable detection of unsafe muscle force exertion levels for human activity is necessary for their well-being. However, measurement of force exertion levels is challenging and the existing techniques pose a great challenge as they are either intrusive, interfere with human-machine interface, and/or subjective in the nature, thus are not scalable for all workers. In this work, we use face videos and the photoplethysmography (PPG) signals to classify force exertion levels of 0\%, 50\%, and 100\% (representing rest, moderate effort, and high effort), thus providing a non-intrusive and scalable approach. Efficient feature extraction approaches have been investigated, including standard deviation of the movement of different landmarks of the face, distances between peaks and troughs in the PPG signals. We note that the PPG signals can be obtained from the face videos, thus giving an efficient classification algorithm for the force exertion levels using face videos. Based on the data collected from 20 subjects, features extracted from the face videos give 90\% accuracy in classification among the 100\% and the combination of 0\% and 50\% datasets. Further combining the PPG signals provide 81.7\% accuracy. The approach is also shown to be robust to the correctly identify force level when the person is talking, even though such datasets are not included in the training. |
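The two feature families named in the abstract (standard deviation of facial landmark movement, peak-to-trough distances in the PPG signal) admit a simple reading, sketched below. The array shapes and the exact definitions are assumptions; the authors' feature extraction is not reproduced here.

```python
import numpy as np
from scipy.signal import find_peaks

def landmark_motion_features(landmarks):
    """Std of per-landmark displacement across frames.

    `landmarks` is assumed to have shape (frames, points, 2); this is one
    plausible reading of 'standard deviation of the movement of landmarks'.
    """
    disp = np.linalg.norm(np.diff(landmarks, axis=0), axis=-1)  # (frames-1, points)
    return disp.std(axis=0)                                     # one value per landmark

def ppg_peak_trough_features(ppg):
    """Distances between peaks and troughs in a PPG trace."""
    ppg = np.asarray(ppg)
    peaks, _ = find_peaks(ppg)
    troughs, _ = find_peaks(-ppg)
    n = min(len(peaks), len(troughs))
    return np.abs(ppg[peaks[:n]] - ppg[troughs[:n]])
```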
0707.2436 | Massimiliano Laddomada Ph.D. | Massimiliano Laddomada | On the Polyphase Decomposition for Design of Generalized Comb Decimation
Filters | Submitted to IEEE TCAS-I, February 2007; 11 double-column pages, 9
figures, 1 table | null | 10.1109/TCSI.2008.920136 | null | cs.OH | null | Generalized comb filters (GCFs) are efficient anti-aliasing decimation
filters with improved selectivity and quantization noise (QN) rejection
performance around the so called folding bands with respect to classical comb
filters.
In this paper, we address the design of GCF filters by proposing an efficient
partial polyphase architecture with the aim to reduce the data rate as much as
possible after the Sigma-Delta A/D conversion. We propose a mathematical
framework in order to completely characterize the dependence of the frequency
response of GCFs on the quantization of the multipliers embedded in the
proposed filter architecture. This analysis paves the way to the design of
multiplier-less decimation architectures.
We also derive the impulse response of a sample 3rd order GCF filter used as
a reference scheme throughout the paper.
| [
{
"created": "Tue, 17 Jul 2007 05:38:30 GMT",
"version": "v1"
}
] | 2016-11-18 | [
[
"Laddomada",
"Massimiliano",
""
]
] | Generalized comb filters (GCFs) are efficient anti-aliasing decimation filters with improved selectivity and quantization noise (QN) rejection performance around the so called folding bands with respect to classical comb filters. In this paper, we address the design of GCF filters by proposing an efficient partial polyphase architecture with the aim to reduce the data rate as much as possible after the Sigma-Delta A/D conversion. We propose a mathematical framework in order to completely characterize the dependence of the frequency response of GCFs on the quantization of the multipliers embedded in the proposed filter architecture. This analysis paves the way to the design of multiplier-less decimation architectures. We also derive the impulse response of a sample 3rd order GCF filter used as a reference scheme throughout the paper. |
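For reference, the classical N-th order comb decimation filter that GCFs improve upon has the transfer function below (M is the decimation factor). The generalized transfer function itself is not given in the abstract and is not reproduced here.

```latex
\[
  H(z) \;=\; \left(\frac{1-z^{-M}}{1-z^{-1}}\right)^{\!N}
\]
```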
2311.12688 | Paul Scemama | Paul Scemama, Ariel Kapusta | On the Out-of-Distribution Coverage of Combining Split Conformal
Prediction and Bayesian Deep Learning | 26 pages, 18 figures | null | null | null | cs.LG stat.ML | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Bayesian deep learning and conformal prediction are two methods that have
been used to convey uncertainty and increase safety in machine learning
systems. We focus on combining Bayesian deep learning with split conformal
prediction and how this combination effects out-of-distribution coverage;
particularly in the case of multiclass image classification. We suggest that if
the model is generally underconfident on the calibration set, then the
resultant conformal sets may exhibit worse out-of-distribution coverage
compared to simple predictive credible sets. Conversely, if the model is
overconfident on the calibration set, the use of conformal prediction may
improve out-of-distribution coverage. We evaluate prediction sets as a result
of combining split conformal methods and neural networks trained with (i)
stochastic gradient descent, (ii) deep ensembles, and (iii) mean-field
variational inference. Our results suggest that combining Bayesian deep
learning models with split conformal prediction can, in some cases, cause
unintended consequences such as reducing out-of-distribution coverage.
| [
{
"created": "Tue, 21 Nov 2023 15:50:37 GMT",
"version": "v1"
},
{
"created": "Thu, 7 Mar 2024 17:00:03 GMT",
"version": "v2"
}
] | 2024-03-08 | [
[
"Scemama",
"Paul",
""
],
[
"Kapusta",
"Ariel",
""
]
] | Bayesian deep learning and conformal prediction are two methods that have been used to convey uncertainty and increase safety in machine learning systems. We focus on combining Bayesian deep learning with split conformal prediction and how this combination effects out-of-distribution coverage; particularly in the case of multiclass image classification. We suggest that if the model is generally underconfident on the calibration set, then the resultant conformal sets may exhibit worse out-of-distribution coverage compared to simple predictive credible sets. Conversely, if the model is overconfident on the calibration set, the use of conformal prediction may improve out-of-distribution coverage. We evaluate prediction sets as a result of combining split conformal methods and neural networks trained with (i) stochastic gradient descent, (ii) deep ensembles, and (iii) mean-field variational inference. Our results suggest that combining Bayesian deep learning models with split conformal prediction can, in some cases, cause unintended consequences such as reducing out-of-distribution coverage. |
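A minimal sketch of split conformal prediction for multiclass classification, the procedure the abstract combines with Bayesian predictive probabilities. The 1 − p(true class) nonconformity score is the textbook default, assumed here for illustration; it is not necessarily the score used by the authors.

```python
import numpy as np

def split_conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction sets from calibration-set nonconformity scores."""
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]   # nonconformity on calibration set
    k = int(np.ceil((n + 1) * (1 - alpha)))              # finite-sample-corrected quantile index
    qhat = np.sort(scores)[min(k, n) - 1]
    return (1.0 - test_probs) <= qhat                    # boolean set membership per class

# Usage (hypothetical arrays of softmax/predictive probabilities):
# sets = split_conformal_sets(p_cal, y_cal, p_test, alpha=0.1)
```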
2110.06817 | Matteo Romanello | Matteo Romanello, Sven Najem-Meyer and Bruce Robertson | Optical Character Recognition of 19th Century Classical Commentaries:
the Current State of Affairs | null | null | null | null | cs.DL cs.CV | http://creativecommons.org/licenses/by-sa/4.0/ | Together with critical editions and translations, commentaries are one of the
main genres of publication in literary and textual scholarship, and have a
century-long tradition. Yet, the exploitation of thousands of digitized
historical commentaries was hitherto hindered by the poor quality of Optical
Character Recognition (OCR), especially on commentaries to Greek texts. In this
paper, we evaluate the performances of two pipelines suitable for the OCR of
historical classical commentaries. Our results show that Kraken + Ciaconna
reaches a substantially lower character error rate (CER) than Tesseract/OCR-D
on commentary sections with high density of polytonic Greek text (average CER
7% vs. 13%), while Tesseract/OCR-D is slightly more accurate than Kraken +
Ciaconna on text sections written predominantly in Latin script (average CER
8.2% vs. 8.4%). As part of this paper, we also release GT4HistComment, a small
dataset with OCR ground truth for 19th classical commentaries and Pogretra, a
large collection of training data and pre-trained models for a wide variety of
ancient Greek typefaces.
| [
{
"created": "Wed, 13 Oct 2021 16:01:16 GMT",
"version": "v1"
}
] | 2021-10-14 | [
[
"Romanello",
"Matteo",
""
],
[
"Najem-Meyer",
"Sven",
""
],
[
"Robertson",
"Bruce",
""
]
] | Together with critical editions and translations, commentaries are one of the main genres of publication in literary and textual scholarship, and have a century-long tradition. Yet, the exploitation of thousands of digitized historical commentaries was hitherto hindered by the poor quality of Optical Character Recognition (OCR), especially on commentaries to Greek texts. In this paper, we evaluate the performances of two pipelines suitable for the OCR of historical classical commentaries. Our results show that Kraken + Ciaconna reaches a substantially lower character error rate (CER) than Tesseract/OCR-D on commentary sections with high density of polytonic Greek text (average CER 7% vs. 13%), while Tesseract/OCR-D is slightly more accurate than Kraken + Ciaconna on text sections written predominantly in Latin script (average CER 8.2% vs. 8.4%). As part of this paper, we also release GT4HistComment, a small dataset with OCR ground truth for 19th classical commentaries and Pogretra, a large collection of training data and pre-trained models for a wide variety of ancient Greek typefaces. |
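The character error rate (CER) reported in the abstract is the Levenshtein distance between hypothesis and reference, divided by the reference length. A self-contained sketch of that metric follows; the evaluation tooling actually used by the authors is not reproduced.

```python
def character_error_rate(reference, hypothesis):
    """CER = Levenshtein distance / number of reference characters."""
    m, n = len(reference), len(hypothesis)
    d = list(range(n + 1))
    for i in range(1, m + 1):
        prev, d[0] = d[0], i
        for j in range(1, n + 1):
            cur = d[j]
            d[j] = min(d[j] + 1,                                         # deletion
                       d[j - 1] + 1,                                     # insertion
                       prev + (reference[i - 1] != hypothesis[j - 1]))   # substitution
            prev = cur
    return d[n] / max(m, 1)

# character_error_rate("πόλις", "πολις")  ->  0.2
```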
1710.00217 | Anindya Maiti | Anindya Maiti, Ryan Heard, Mohd Sabra, Murtuza Jadliwala | Towards Inferring Mechanical Lock Combinations using Wrist-Wearables as
a Side-Channel | null | null | null | null | cs.CR | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Wrist-wearables such as smartwatches and fitness bands are equipped with a
variety of high-precision sensors that support novel contextual and
activity-based applications. The presence of a diverse set of on-board sensors,
however, also expose an additional attack surface which, if not adequately
protected, could be potentially exploited to leak private user information. In
this paper, we investigate the feasibility of a new attack that takes advantage
of a wrist-wearable's motion sensors to infer input on mechanical devices
typically used to secure physical access, for example, combination locks. We
outline an inference framework that attempts to infer a lock's unlock
combination from the wrist motion captured by a smartwatch's gyroscope sensor,
and uses a probabilistic model to produce a ranked list of likely unlock
combinations. We conduct a thorough empirical evaluation of the proposed
framework by employing unlocking-related motion data collected from human
subject participants in a variety of controlled and realistic settings.
Evaluation results from these experiments demonstrate that motion data from
wrist-wearables can be effectively employed as a side-channel to significantly
reduce the unlock combination search-space of commonly found combination locks,
thus compromising the physical security provided by these locks.
| [
{
"created": "Sat, 30 Sep 2017 16:18:03 GMT",
"version": "v1"
},
{
"created": "Tue, 10 Jul 2018 20:30:00 GMT",
"version": "v2"
},
{
"created": "Wed, 26 Sep 2018 20:38:12 GMT",
"version": "v3"
}
] | 2018-09-28 | [
[
"Maiti",
"Anindya",
""
],
[
"Heard",
"Ryan",
""
],
[
"Sabra",
"Mohd",
""
],
[
"Jadliwala",
"Murtuza",
""
]
] | Wrist-wearables such as smartwatches and fitness bands are equipped with a variety of high-precision sensors that support novel contextual and activity-based applications. The presence of a diverse set of on-board sensors, however, also expose an additional attack surface which, if not adequately protected, could be potentially exploited to leak private user information. In this paper, we investigate the feasibility of a new attack that takes advantage of a wrist-wearable's motion sensors to infer input on mechanical devices typically used to secure physical access, for example, combination locks. We outline an inference framework that attempts to infer a lock's unlock combination from the wrist motion captured by a smartwatch's gyroscope sensor, and uses a probabilistic model to produce a ranked list of likely unlock combinations. We conduct a thorough empirical evaluation of the proposed framework by employing unlocking-related motion data collected from human subject participants in a variety of controlled and realistic settings. Evaluation results from these experiments demonstrate that motion data from wrist-wearables can be effectively employed as a side-channel to significantly reduce the unlock combination search-space of commonly found combination locks, thus compromising the physical security provided by these locks. |
1012.5314 | Filippo Radicchi | Filippo Radicchi, Claudio Castellano | Rescaling citations of publications in physics | 8 pages, 10 figures, 1 table | Phys. Rev. E 83, 046116 (2011) | 10.1103/PhysRevE.83.046116 | null | cs.DL physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We analyze the citation distributions of all papers published in Physical
Review journals between 1985 and 2009. The average number of citations received
by papers published in a given year and in a given field is computed. Large
variations are found, showing that it is not fair to compare citation numbers
across fields and years. However, when a rescaling procedure by the average is
used, it is possible to compare impartially articles across years and fields.
We make the rescaling factors available for use by the readers. We also show
that rescaling citation numbers by the number of publication authors has strong
effects and should therefore be taken into account when assessing the
bibliometric performance of researchers.
| [
{
"created": "Thu, 23 Dec 2010 22:37:27 GMT",
"version": "v1"
},
{
"created": "Fri, 22 Apr 2011 20:21:42 GMT",
"version": "v2"
}
] | 2015-03-17 | [
[
"Radicchi",
"Filippo",
""
],
[
"Castellano",
"Claudio",
""
]
] | We analyze the citation distributions of all papers published in Physical Review journals between 1985 and 2009. The average number of citations received by papers published in a given year and in a given field is computed. Large variations are found, showing that it is not fair to compare citation numbers across fields and years. However, when a rescaling procedure by the average is used, it is possible to compare impartially articles across years and fields. We make the rescaling factors available for use by the readers. We also show that rescaling citation numbers by the number of publication authors has strong effects and should therefore be taken into account when assessing the bibliometric performance of researchers. |
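The rescaling procedure described in the abstract divides raw citation counts by the field-and-year average; the per-author variant divides further by the number of authors. A small pandas sketch follows; the column names are assumptions.

```python
import pandas as pd

def rescale_citations(df):
    """Rescale citation counts by the field-and-year average.

    `df` is assumed to have columns 'citations', 'field', 'year', 'n_authors';
    'c_per_author' is the additional per-author rescaling the abstract mentions.
    """
    mean_cf = df.groupby(["field", "year"])["citations"].transform("mean")
    return df.assign(c_rescaled=df["citations"] / mean_cf,
                     c_per_author=df["citations"] / (mean_cf * df["n_authors"]))
```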
2107.05368 | Golsa Heidari | Golsa Heidari, Kamran Zamanifar | A Three Phase Semantic Web Matchmaker | 14 pages, 1 figure, International Journal of Smart Home. arXiv admin
note: text overlap with arXiv:2107.02609 | International Journal of Smart Home, Vol.4, No.3, July, 2010 | null | null | cs.IR cs.AI | http://creativecommons.org/licenses/by/4.0/ | Since using environments that are made according to the service oriented
architecture, we have more effective and dynamic applications. Semantic
matchmaking process is finding valuable service candidates for substitution. It
is a very important aspect of using semantic Web Services. Our proposed
matchmaker algorithm performs semantic matching of Web Services on the basis of
input and output descriptions of semantic Web Services matching. This technique
takes advantages from a graph structure and flow networks. Our novel approach
is assigning matchmaking scores to semantics of the inputs and outputs
parameters and their types. It makes a flow network in which the weights of the
edges are these scores, using FordFulkerson algorithm, we find matching rate of
two web services. So, all services should be described in the same Ontology Web
Language. Among these candidates, best one is chosen for substitution in the
case of an execution failure. Our approach uses the algorithm that has the
least running time among all others that can be used for bipartite matching.
The importance of problem is that in real systems, many fundamental problems
will occur by late answering. So system`s service should always be on and if
one of them crashes, it would be replaced fast. Semantic web matchmaker eases
this process.
| [
{
"created": "Tue, 6 Jul 2021 13:39:11 GMT",
"version": "v1"
}
] | 2021-07-13 | [
[
"Heidari",
"Golsa",
""
],
[
"Zamanifar",
"Kamran",
""
]
] | Since using environments that are made according to the service oriented architecture, we have more effective and dynamic applications. Semantic matchmaking process is finding valuable service candidates for substitution. It is a very important aspect of using semantic Web Services. Our proposed matchmaker algorithm performs semantic matching of Web Services on the basis of input and output descriptions of semantic Web Services matching. This technique takes advantages from a graph structure and flow networks. Our novel approach is assigning matchmaking scores to semantics of the inputs and outputs parameters and their types. It makes a flow network in which the weights of the edges are these scores, using FordFulkerson algorithm, we find matching rate of two web services. So, all services should be described in the same Ontology Web Language. Among these candidates, best one is chosen for substitution in the case of an execution failure. Our approach uses the algorithm that has the least running time among all others that can be used for bipartite matching. The importance of problem is that in real systems, many fundamental problems will occur by late answering. So system`s service should always be on and if one of them crashes, it would be replaced fast. Semantic web matchmaker eases this process. |
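The abstract describes turning parameter-similarity scores into edge capacities of a flow network and solving it with Ford-Fulkerson. The sketch below illustrates that construction with networkx's generic max-flow routine as a stand-in; the score dictionary and node naming are assumptions, not the authors' implementation.

```python
import networkx as nx

def matching_rate(scores):
    """Max-flow value over a bipartite network of parameter-similarity scores.

    `scores[(i, j)]` is an assumed dict of semantic matchmaking scores between
    parameter i of one service and parameter j of the other.
    """
    G = nx.DiGraph()
    for (i, j), score in scores.items():
        G.add_edge("s", f"a{i}", capacity=1.0)
        G.add_edge(f"b{j}", "t", capacity=1.0)
        G.add_edge(f"a{i}", f"b{j}", capacity=score)
    value, _ = nx.maximum_flow(G, "s", "t")
    return value

# matching_rate({(0, 0): 0.9, (0, 1): 0.4, (1, 1): 0.7})  ->  1.7
```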
1810.01791 | Anil Koyuncu | Anil Koyuncu and Kui Liu and Tegawend\'e F. Bissyand\'e and Dongsun
Kim and Jacques Klein and Martin Monperrus and Yves Le Traon | FixMiner: Mining Relevant Fix Patterns for Automated Program Repair | 31 pages, 11 figures | Empirical Software Engineering, Springer Verlag, 2020 | 10.1007/s10664-019-09780-z | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Patching is a common activity in software development. It is generally
performed on a source code base to address bugs or add new functionalities. In
this context, given the recurrence of bugs across projects, the associated
similar patches can be leveraged to extract generic fix actions. While the
literature includes various approaches leveraging similarity among patches to
guide program repair, these approaches often do not yield fix patterns that are
tractable and reusable as actionable input to APR systems. In this paper, we
propose a systematic and automated approach to mining relevant and actionable
fix patterns based on an iterative clustering strategy applied to atomic
changes within patches. The goal of FixMiner is thus to infer separate and
reusable fix patterns that can be leveraged in other patch generation systems.
Our technique, FixMiner, leverages Rich Edit Script which is a specialized tree
structure of the edit scripts that captures the AST-level context of the code
changes. FixMiner uses different tree representations of Rich Edit Scripts for
each round of clustering to identify similar changes. These are abstract syntax
trees, edit actions trees, and code context trees. We have evaluated FixMiner
on thousands of software patches collected from open source projects.
Preliminary results show that we are able to mine accurate patterns,
efficiently exploiting change information in Rich Edit Scripts. We further
integrated the mined patterns to an automated program repair prototype,
PARFixMiner, with which we are able to correctly fix 26 bugs of the Defects4J
benchmark. Beyond this quantitative performance, we show that the mined fix
patterns are sufficiently relevant to produce patches with a high probability
of correctness: 81% of PARFixMiner's generated plausible patches are correct.
| [
{
"created": "Wed, 3 Oct 2018 15:21:20 GMT",
"version": "v1"
},
{
"created": "Mon, 30 Sep 2019 11:44:54 GMT",
"version": "v2"
}
] | 2023-05-05 | [
[
"Koyuncu",
"Anil",
""
],
[
"Liu",
"Kui",
""
],
[
"Bissyandé",
"Tegawendé F.",
""
],
[
"Kim",
"Dongsun",
""
],
[
"Klein",
"Jacques",
""
],
[
"Monperrus",
"Martin",
""
],
[
"Traon",
"Yves Le",
""
]
] | Patching is a common activity in software development. It is generally performed on a source code base to address bugs or add new functionalities. In this context, given the recurrence of bugs across projects, the associated similar patches can be leveraged to extract generic fix actions. While the literature includes various approaches leveraging similarity among patches to guide program repair, these approaches often do not yield fix patterns that are tractable and reusable as actionable input to APR systems. In this paper, we propose a systematic and automated approach to mining relevant and actionable fix patterns based on an iterative clustering strategy applied to atomic changes within patches. The goal of FixMiner is thus to infer separate and reusable fix patterns that can be leveraged in other patch generation systems. Our technique, FixMiner, leverages Rich Edit Script which is a specialized tree structure of the edit scripts that captures the AST-level context of the code changes. FixMiner uses different tree representations of Rich Edit Scripts for each round of clustering to identify similar changes. These are abstract syntax trees, edit actions trees, and code context trees. We have evaluated FixMiner on thousands of software patches collected from open source projects. Preliminary results show that we are able to mine accurate patterns, efficiently exploiting change information in Rich Edit Scripts. We further integrated the mined patterns to an automated program repair prototype, PARFixMiner, with which we are able to correctly fix 26 bugs of the Defects4J benchmark. Beyond this quantitative performance, we show that the mined fix patterns are sufficiently relevant to produce patches with a high probability of correctness: 81% of PARFixMiner's generated plausible patches are correct. |
1401.8030 | Asif Haque | Asif Haque | Transit Fare Arbitrage: Case Study of San Francisco Bay Area Rapid
Transit (BART) System | null | null | null | null | cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Transit fare arbitrage is the scenario when two or more commuters agree to
swap tickets during travel in such a way that total cost is lower than
otherwise. Such arbitrage allows pricing inefficiencies to be explored and
exploited, leading to improved pricing models. In this paper we discuss the
basics of fare arbitrage through an intuitive pricing framework involving
population density. We then analyze the San Francisco Bay Area Rapid Transit
(BART) system to understand underlying inefficiencies. We also provide source
code and comprehensive list of pairs of trips with significant arbitrage gain
at github.com/asifhaque/transit-arbitrage. Finally, we point towards a uniform
payment interface for different kinds of transit systems.
| [
{
"created": "Thu, 30 Jan 2014 23:45:01 GMT",
"version": "v1"
}
] | 2014-02-03 | [
[
"Haque",
"Asif",
""
]
] | Transit fare arbitrage is the scenario when two or more commuters agree to swap tickets during travel in such a way that total cost is lower than otherwise. Such arbitrage allows pricing inefficiencies to be explored and exploited, leading to improved pricing models. In this paper we discuss the basics of fare arbitrage through an intuitive pricing framework involving population density. We then analyze the San Francisco Bay Area Rapid Transit (BART) system to understand underlying inefficiencies. We also provide source code and comprehensive list of pairs of trips with significant arbitrage gain at github.com/asifhaque/transit-arbitrage. Finally, we point towards a uniform payment interface for different kinds of transit systems. |
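One plausible reading of the arbitrage gain behind a ticket swap is sketched below: riders travelling A→B and C→D swap tickets so the pair is charged for A→D and C→B instead. The fare lookup and station names are hypothetical; the search used in the paper's repository is not reproduced here.

```python
def swap_gain(fare, a, b, c, d):
    """Savings if riders A->B and C->D swap tickets mid-journey.

    With a swap, the ticket entered at A exits at D and the one entered at C
    exits at B. `fare` is an assumed lookup of station-pair prices.
    """
    return (fare(a, b) + fare(c, d)) - (fare(a, d) + fare(c, b))

# Hypothetical fare table: a positive gain means the swap is worth doing.
# gain = swap_gain(lambda x, y: bart_fares[(x, y)],
#                  "Fremont", "Embarcadero", "Richmond", "MacArthur")
```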
2103.09847 | Lin Chen | Lin Chen, Bruno Scherrer, Peter L. Bartlett | Infinite-Horizon Offline Reinforcement Learning with Linear Function
Approximation: Curse of Dimensionality and Algorithm | null | null | null | null | cs.LG cs.AI math.OC stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we investigate the sample complexity of policy evaluation in
infinite-horizon offline reinforcement learning (also known as the off-policy
evaluation problem) with linear function approximation. We identify a hard
regime $d\gamma^{2}>1$, where $d$ is the dimension of the feature vector and
$\gamma$ is the discount rate. In this regime, for any $q\in[\gamma^{2},1]$, we
can construct a hard instance such that the smallest eigenvalue of its feature
covariance matrix is $q/d$ and it requires
$\Omega\left(\frac{d}{\gamma^{2}\left(q-\gamma^{2}\right)\varepsilon^{2}}\exp\left(\Theta\left(d\gamma^{2}\right)\right)\right)$
samples to approximate the value function up to an additive error
$\varepsilon$. Note that the lower bound of the sample complexity is
exponential in $d$. If $q=\gamma^{2}$, even infinite data cannot suffice. Under
the low distribution shift assumption, we show that there is an algorithm that
needs at most $O\left(\max\left\{ \frac{\left\Vert \theta^{\pi}\right\Vert
_{2}^{4}}{\varepsilon^{4}}\log\frac{d}{\delta},\frac{1}{\varepsilon^{2}}\left(d+\log\frac{1}{\delta}\right)\right\}
\right)$ samples ($\theta^{\pi}$ is the parameter of the policy in linear
function approximation) and guarantees approximation to the value function up
to an additive error of $\varepsilon$ with probability at least $1-\delta$.
| [
{
"created": "Wed, 17 Mar 2021 18:18:57 GMT",
"version": "v1"
}
] | 2021-03-19 | [
[
"Chen",
"Lin",
""
],
[
"Scherrer",
"Bruno",
""
],
[
"Bartlett",
"Peter L.",
""
]
] | In this paper, we investigate the sample complexity of policy evaluation in infinite-horizon offline reinforcement learning (also known as the off-policy evaluation problem) with linear function approximation. We identify a hard regime $d\gamma^{2}>1$, where $d$ is the dimension of the feature vector and $\gamma$ is the discount rate. In this regime, for any $q\in[\gamma^{2},1]$, we can construct a hard instance such that the smallest eigenvalue of its feature covariance matrix is $q/d$ and it requires $\Omega\left(\frac{d}{\gamma^{2}\left(q-\gamma^{2}\right)\varepsilon^{2}}\exp\left(\Theta\left(d\gamma^{2}\right)\right)\right)$ samples to approximate the value function up to an additive error $\varepsilon$. Note that the lower bound of the sample complexity is exponential in $d$. If $q=\gamma^{2}$, even infinite data cannot suffice. Under the low distribution shift assumption, we show that there is an algorithm that needs at most $O\left(\max\left\{ \frac{\left\Vert \theta^{\pi}\right\Vert _{2}^{4}}{\varepsilon^{4}}\log\frac{d}{\delta},\frac{1}{\varepsilon^{2}}\left(d+\log\frac{1}{\delta}\right)\right\} \right)$ samples ($\theta^{\pi}$ is the parameter of the policy in linear function approximation) and guarantees approximation to the value function up to an additive error of $\varepsilon$ with probability at least $1-\delta$. |
2210.05364 | Yu Wei Tan | Yu Wei Tan, Xiaohan Cui and Anand Bhojan | Hybrid MBlur: Using Ray Tracing to Solve the Partial Occlusion Artifacts
in Real-Time Rendering of Motion Blur Effect | null | ACM SIGGRAPH 2020 Posters | 10.1145/3388770.3407436 | null | cs.GR | http://creativecommons.org/licenses/by/4.0/ | For a foreground object in motion, details of its background which would
otherwise be hidden are uncovered through its inner blur. This paper presents a
novel hybrid motion blur rendering technique combining post-process image
filtering and hardware-accelerated ray tracing. In each frame, we advance rays
recursively into the scene to retrieve background information for inner blur
regions and apply a post-process filtering pass on the ray-traced background
and rasterized colour before compositing them together. Our approach achieves
more accurate partial occlusion semi-transparencies for moving objects while
maintaining interactive frame rates.
| [
{
"created": "Tue, 11 Oct 2022 11:47:59 GMT",
"version": "v1"
}
] | 2022-10-12 | [
[
"Tan",
"Yu Wei",
""
],
[
"Cui",
"Xiaohan",
""
],
[
"Bhojan",
"Anand",
""
]
] | For a foreground object in motion, details of its background which would otherwise be hidden are uncovered through its inner blur. This paper presents a novel hybrid motion blur rendering technique combining post-process image filtering and hardware-accelerated ray tracing. In each frame, we advance rays recursively into the scene to retrieve background information for inner blur regions and apply a post-process filtering pass on the ray-traced background and rasterized colour before compositing them together. Our approach achieves more accurate partial occlusion semi-transparencies for moving objects while maintaining interactive frame rates. |
2009.08497 | Tarek Richard Besold | Lorijn Zaadnoordijk, Tarek R. Besold, Rhodri Cusack | The Next Big Thing(s) in Unsupervised Machine Learning: Five Lessons
from Infant Learning | null | null | null | null | cs.LG cs.AI cs.CV cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | After a surge in popularity of supervised Deep Learning, the desire to reduce
the dependence on curated, labelled data sets and to leverage the vast
quantities of unlabelled data available recently triggered renewed interest in
unsupervised learning algorithms. Despite a significantly improved performance
due to approaches such as the identification of disentangled latent
representations, contrastive learning, and clustering optimisations, the
performance of unsupervised machine learning still falls short of its
hypothesised potential. Machine learning has previously taken inspiration from
neuroscience and cognitive science with great success. However, this has mostly
been based on adult learners with access to labels and a vast amount of prior
knowledge. In order to push unsupervised machine learning forward, we argue
that developmental science of infant cognition might hold the key to unlocking
the next generation of unsupervised learning approaches. Conceptually, human
infant learning is the closest biological parallel to artificial unsupervised
learning, as infants too must learn useful representations from unlabelled
data. In contrast to machine learning, these new representations are learned
rapidly and from relatively few examples. Moreover, infants learn robust
representations that can be used flexibly and efficiently in a number of
different tasks and contexts. We identify five crucial factors enabling
infants' quality and speed of learning, assess the extent to which these have
already been exploited in machine learning, and propose how further adoption of
these factors can give rise to previously unseen performance levels in
unsupervised learning.
| [
{
"created": "Thu, 17 Sep 2020 18:47:06 GMT",
"version": "v1"
}
] | 2020-09-21 | [
[
"Zaadnoordijk",
"Lorijn",
""
],
[
"Besold",
"Tarek R.",
""
],
[
"Cusack",
"Rhodri",
""
]
] | After a surge in popularity of supervised Deep Learning, the desire to reduce the dependence on curated, labelled data sets and to leverage the vast quantities of unlabelled data available recently triggered renewed interest in unsupervised learning algorithms. Despite a significantly improved performance due to approaches such as the identification of disentangled latent representations, contrastive learning, and clustering optimisations, the performance of unsupervised machine learning still falls short of its hypothesised potential. Machine learning has previously taken inspiration from neuroscience and cognitive science with great success. However, this has mostly been based on adult learners with access to labels and a vast amount of prior knowledge. In order to push unsupervised machine learning forward, we argue that developmental science of infant cognition might hold the key to unlocking the next generation of unsupervised learning approaches. Conceptually, human infant learning is the closest biological parallel to artificial unsupervised learning, as infants too must learn useful representations from unlabelled data. In contrast to machine learning, these new representations are learned rapidly and from relatively few examples. Moreover, infants learn robust representations that can be used flexibly and efficiently in a number of different tasks and contexts. We identify five crucial factors enabling infants' quality and speed of learning, assess the extent to which these have already been exploited in machine learning, and propose how further adoption of these factors can give rise to previously unseen performance levels in unsupervised learning. |
1907.00148 | Amir Bar | Amir Bar, Michal Mauda, Yoni Turner, Michal Safadi and Eldad Elnekave | Improved ICH classification using task-dependent learning | IEEE International Symposium on Biomedical Imaging (ISBI) 2019 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Head CT is one of the most commonly performed imaging studied in the
Emergency Department setting and Intracranial hemorrhage (ICH) is among the
most critical and timesensitive findings to be detected on Head CT. We present
BloodNet, a deep learning architecture designed for optimal triaging of Head
CTs, with the goal of decreasing the time from CT acquisition to accurate ICH
detection. The BloodNet architecture incorporates dependency between the
otherwise independent tasks of segmentation and classification, achieving
improved classification results. AUCs of 0.9493 and 0.9566 are reported on held
out positive-enriched and randomly sampled sets comprised of over 1400 studies
acquired from over 10 different hospitals. These results are comparable to
previously reported results with smaller number of tagged studies.
| [
{
"created": "Sat, 29 Jun 2019 05:26:24 GMT",
"version": "v1"
}
] | 2019-07-02 | [
[
"Bar",
"Amir",
""
],
[
"Mauda",
"Michal",
""
],
[
"Turner",
"Yoni",
""
],
[
"Safadi",
"Michal",
""
],
[
"Elnekave",
"Eldad",
""
]
] | Head CT is one of the most commonly performed imaging studied in the Emergency Department setting and Intracranial hemorrhage (ICH) is among the most critical and timesensitive findings to be detected on Head CT. We present BloodNet, a deep learning architecture designed for optimal triaging of Head CTs, with the goal of decreasing the time from CT acquisition to accurate ICH detection. The BloodNet architecture incorporates dependency between the otherwise independent tasks of segmentation and classification, achieving improved classification results. AUCs of 0.9493 and 0.9566 are reported on held out positive-enriched and randomly sampled sets comprised of over 1400 studies acquired from over 10 different hospitals. These results are comparable to previously reported results with smaller number of tagged studies. |
2109.13922 | Andreas Martin | Charuta Pande, Hans Friedrich Witschel and Andreas Martin | New Hybrid Techniques for Business Recommender Systems | This article is an extended version of the peer-reviewed publication
by Witschel and Martin (2018) and comprises parts from the MSc thesis of the
first author Pande (2019) | null | null | null | cs.IR cs.AI | http://creativecommons.org/licenses/by/4.0/ | Besides the typical applications of recommender systems in B2C scenarios such
as movie or shopping platforms, there is a rising interest in transforming the
human-driven advice provided e.g. in consultancy via the use of recommender
systems. We explore the special characteristics of such knowledge-based B2B
services and propose a process that allows to incorporate recommender systems
into them. We suggest and compare several recommender techniques that allow to
incorporate the necessary contextual knowledge (e.g. company demographics).
These techniques are evaluated in isolation on a test set of business
intelligence consultancy cases. We then identify the respective strengths of
the different techniques and propose a new hybridisation strategy to combine
these strengths. Our results show that the hybridisation leads to a substantial
performance improvement over the individual methods.
| [
{
"created": "Mon, 27 Sep 2021 11:21:31 GMT",
"version": "v1"
},
{
"created": "Mon, 6 Dec 2021 13:15:45 GMT",
"version": "v2"
}
] | 2021-12-07 | [
[
"Pande",
"Charuta",
""
],
[
"Witschel",
"Hans Friedrich",
""
],
[
"Martin",
"Andreas",
""
]
] | Besides the typical applications of recommender systems in B2C scenarios such as movie or shopping platforms, there is a rising interest in transforming the human-driven advice provided e.g. in consultancy via the use of recommender systems. We explore the special characteristics of such knowledge-based B2B services and propose a process that allows to incorporate recommender systems into them. We suggest and compare several recommender techniques that allow to incorporate the necessary contextual knowledge (e.g. company demographics). These techniques are evaluated in isolation on a test set of business intelligence consultancy cases. We then identify the respective strengths of the different techniques and propose a new hybridisation strategy to combine these strengths. Our results show that the hybridisation leads to a substantial performance improvement over the individual methods. |
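The abstract does not spell out the hybridisation strategy, only that it combines the respective strengths of several techniques. The sketch below shows the generic weighted-combination pattern for context; the callables and weights are assumptions, not the paper's method.

```python
def hybrid_scores(candidate_ids, recommenders, weights):
    """Weighted-sum hybridisation of several recommender techniques.

    `recommenders` are assumed callables returning a score per candidate case;
    `weights` reflect each technique's strength on held-out data.
    """
    combined = {}
    for cid in candidate_ids:
        combined[cid] = sum(w * rec(cid) for rec, w in zip(recommenders, weights))
    return sorted(combined, key=combined.get, reverse=True)   # best candidates first
```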
2305.19917 | Georgios Zacharopoulos | Georgios Zacharopoulos, Ilias Bournias, Verner Vlacic, Lukas Cavigelli | ReDSEa: Automated Acceleration of Triangular Solver on Supercloud
Heterogeneous Systems | 4 pages, SSH-S0C DAC 2023 Workshop | null | null | null | cs.AR cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | When utilized effectively, Supercloud heterogeneous systems have the
potential to significantly enhance performance. Our ReDSEa tool-chain automates
the mapping, load balancing, scheduling, parallelism, and overlapping processes
for the Triangular System Solver (TS) on a heterogeneous system consisting of a
Huawei Kunpeng ARM multi-core CPU and an Ascend 910 AI HW accelerator. We
propose an LLVM compiler tool-chain that a) leverages compiler analysis and b)
utilizes novel performance models exploring recursive, iterative, and blocked
computation models. Our tool-chain facilitates a speedup of up to 16x compared
to an optimized 48-core CPU-only implementation.
| [
{
"created": "Wed, 31 May 2023 14:51:08 GMT",
"version": "v1"
}
] | 2023-06-01 | [
[
"Zacharopoulos",
"Georgios",
""
],
[
"Bournias",
"Ilias",
""
],
[
"Vlacic",
"Verner",
""
],
[
"Cavigelli",
"Lukas",
""
]
] | When utilized effectively, Supercloud heterogeneous systems have the potential to significantly enhance performance. Our ReDSEa tool-chain automates the mapping, load balancing, scheduling, parallelism, and overlapping processes for the Triangular System Solver (TS) on a heterogeneous system consisting of a Huawei Kunpeng ARM multi-core CPU and an Ascend 910 AI HW accelerator. We propose an LLVM compiler tool-chain that a) leverages compiler analysis and b) utilizes novel performance models exploring recursive, iterative, and blocked computation models. Our tool-chain facilitates a speedup of up to 16x compared to an optimized 48-core CPU-only implementation. |