| id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0812.1012 | Kamesh Munagala | Sudipto Guha and Kamesh Munagala | Adaptive Uncertainty Resolution in Bayesian Combinatorial Optimization
Problems | Journal version of the paper "Model-driven Optimization using
Adaptive Probes" that appeared in the ACM-SIAM Symposium on Discrete
Algorithms (SODA), 2007 | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In several applications such as databases, planning, and sensor networks,
parameters such as selectivity, load, or sensed values are known only with some
associated uncertainty. The performance of such a system (as captured by some
objective function over the parameters) is significantly improved if some of
these parameters can be probed or observed. In a resource constrained
situation, deciding which parameters to observe in order to optimize system
performance itself becomes an interesting and important optimization problem.
This general problem is the focus of this paper.
One of the most important considerations in this framework is whether
adaptivity is required for the observations. Adaptive observations introduce
blocking or sequential operations in the system whereas non-adaptive
observations can be performed in parallel. One of the important questions in
this regard is to characterize the benefit of adaptivity for probes and
observation.
We present general techniques for designing constant factor approximations to
the optimal observation schemes for several widely used scheduling and metric
objective functions. We show a unifying technique that relates this
optimization problem to the outlier version of the corresponding deterministic
optimization. By making this connection, our technique shows constant factor
upper bounds for the benefit of adaptivity of the observation schemes. We show
that while probing yields significant improvement in the objective function,
being adaptive about the probing is not beneficial beyond constant factors.
| [
{
"created": "Thu, 4 Dec 2008 19:48:16 GMT",
"version": "v1"
},
{
"created": "Mon, 14 Sep 2009 14:17:22 GMT",
"version": "v2"
},
{
"created": "Thu, 28 Jan 2010 15:08:30 GMT",
"version": "v3"
}
] | 2010-01-28 | [
[
"Guha",
"Sudipto",
""
],
[
"Munagala",
"Kamesh",
""
]
] | In several applications such as databases, planning, and sensor networks, parameters such as selectivity, load, or sensed values are known only with some associated uncertainty. The performance of such a system (as captured by some objective function over the parameters) is significantly improved if some of these parameters can be probed or observed. In a resource constrained situation, deciding which parameters to observe in order to optimize system performance itself becomes an interesting and important optimization problem. This general problem is the focus of this paper. One of the most important considerations in this framework is whether adaptivity is required for the observations. Adaptive observations introduce blocking or sequential operations in the system whereas non-adaptive observations can be performed in parallel. One of the important questions in this regard is to characterize the benefit of adaptivity for probes and observation. We present general techniques for designing constant factor approximations to the optimal observation schemes for several widely used scheduling and metric objective functions. We show a unifying technique that relates this optimization problem to the outlier version of the corresponding deterministic optimization. By making this connection, our technique shows constant factor upper bounds for the benefit of adaptivity of the observation schemes. We show that while probing yields significant improvement in the objective function, being adaptive about the probing is not beneficial beyond constant factors. |
1302.1334 | Yuriy Parzhin | Yuri Parzhin | Principles of modal and vector theory of formal intelligence systems | 34 pages, 8 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by/3.0/ | The paper considers the class of information systems capable of solving
heuristic problems on the basis of a formal theory termed the modal and vector
theory of formal intelligent systems (FIS). The paper justifies the
construction of a FIS resolution algorithm, defines the main features of these
systems, and proves the theorems that underlie the theory. The principle of
representation diversity in FIS construction is formulated. The paper deals
with the main principles of constructing and operating a formal intelligent
system (FIS) on the basis of the FIS modal and vector theory. The following
phenomena are considered: the modular architecture of the FIS presentation
sub-system and the algorithms of data processing at every step of the stage of
creating presentations. In addition, the paper suggests the structure of
neural elements, i.e. zone detectors and processors, that are the basis for
FIS construction.
| [
{
"created": "Wed, 6 Feb 2013 12:16:33 GMT",
"version": "v1"
}
] | 2013-02-07 | [
[
"Parzhin",
"Yuri",
""
]
] | The paper considers the class of information systems capable of solving heuristic problems on the basis of a formal theory termed the modal and vector theory of formal intelligent systems (FIS). The paper justifies the construction of a FIS resolution algorithm, defines the main features of these systems, and proves the theorems that underlie the theory. The principle of representation diversity in FIS construction is formulated. The paper deals with the main principles of constructing and operating a formal intelligent system (FIS) on the basis of the FIS modal and vector theory. The following phenomena are considered: the modular architecture of the FIS presentation sub-system and the algorithms of data processing at every step of the stage of creating presentations. In addition, the paper suggests the structure of neural elements, i.e. zone detectors and processors, that are the basis for FIS construction. |
2407.17478 | Clara Ziche | Clara Ziche and Giovanni Apruzzese | LLM4PM: A case study on using Large Language Models for Process Modeling
in Enterprise Organizations | 10 pages, 6 figures | null | null | null | cs.HC cs.AI cs.CL cs.CY cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | We investigate the potential of using Large Language Models (LLM) to support
process model creation in organizational contexts. Specifically, we carry out a
case study wherein we develop and test an LLM-based chatbot, PRODIGY (PROcess
moDellIng Guidance for You), in a multinational company, the Hilti Group. We
are particularly interested in understanding how LLM can aid (human) modellers
in creating process flow diagrams. To this purpose, we first conduct a
preliminary user study (n=10) with professional process modellers from Hilti,
inquiring for various pain-points they encounter in their daily routines. Then,
we use their responses to design and implement PRODIGY. Finally, we evaluate
PRODIGY by letting our user study's participants use PRODIGY, and then ask for
their opinion on the pros and cons of PRODIGY. We coalesce our results in
actionable takeaways. Through our research, we showcase the first practical
application of LLM for process modelling in the real world, shedding light on
how industries can leverage LLM to enhance their Business Process Management
activities.
| [
{
"created": "Mon, 1 Jul 2024 19:57:36 GMT",
"version": "v1"
}
] | 2024-07-26 | [
[
"Ziche",
"Clara",
""
],
[
"Apruzzese",
"Giovanni",
""
]
] | We investigate the potential of using Large Language Models (LLM) to support process model creation in organizational contexts. Specifically, we carry out a case study wherein we develop and test an LLM-based chatbot, PRODIGY (PROcess moDellIng Guidance for You), in a multinational company, the Hilti Group. We are particularly interested in understanding how LLM can aid (human) modellers in creating process flow diagrams. To this purpose, we first conduct a preliminary user study (n=10) with professional process modellers from Hilti, inquiring for various pain-points they encounter in their daily routines. Then, we use their responses to design and implement PRODIGY. Finally, we evaluate PRODIGY by letting our user study's participants use PRODIGY, and then ask for their opinion on the pros and cons of PRODIGY. We coalesce our results in actionable takeaways. Through our research, we showcase the first practical application of LLM for process modelling in the real world, shedding light on how industries can leverage LLM to enhance their Business Process Management activities. |
2212.12964 | Steffen Becker | Franziska Herbert and Steffen Becker and Annalina Buckmann and Marvin
Kowalewski and Jonas Hielscher and Yasemin Acar and Markus D\"urmuth and
Yixin Zou and M. Angela Sasse | Digital Security -- A Question of Perspective. A Large-Scale Telephone
Survey with Four At-Risk User Groups | null | null | 10.1109/SP54263.2024.00027 | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper investigates the digital security experiences of four at-risk user
groups in Germany, including older adults (70+), teenagers (14-17), people with
migration backgrounds, and people with low formal education. Using
computer-assisted telephone interviews, we sampled 250 participants per group,
representative of region, gender, and partly age distributions. We examine
their device usage, concerns, prior negative incidents, perceptions of
potential attackers, and information sources. Our study provides the first
quantitative and nationally representative insights into the digital security
experiences of these four at-risk groups in Germany. Our findings show that
participants with migration backgrounds used the most devices, sought more
security information, and reported more experiences with cybercrime incidents
than other groups. Older adults used the fewest devices and were least affected
by cybercrimes. All groups relied on friends and family and online news as
their primary sources of security information, with little concern about their
social circles being potential attackers. We highlight the nuanced differences
between the four at-risk groups and compare them to the broader German
population when possible. We conclude by presenting recommendations for
education, policy, and future research aimed at addressing the digital security
needs of these at-risk user groups.
| [
{
"created": "Sun, 25 Dec 2022 22:11:57 GMT",
"version": "v1"
},
{
"created": "Tue, 12 Sep 2023 20:25:38 GMT",
"version": "v2"
}
] | 2024-02-29 | [
[
"Herbert",
"Franziska",
""
],
[
"Becker",
"Steffen",
""
],
[
"Buckmann",
"Annalina",
""
],
[
"Kowalewski",
"Marvin",
""
],
[
"Hielscher",
"Jonas",
""
],
[
"Acar",
"Yasemin",
""
],
[
"Dürmuth",
"Markus",
""
],
[
"Zou",
"Yixin",
""
],
[
"Sasse",
"M. Angela",
""
]
] | This paper investigates the digital security experiences of four at-risk user groups in Germany, including older adults (70+), teenagers (14-17), people with migration backgrounds, and people with low formal education. Using computer-assisted telephone interviews, we sampled 250 participants per group, representative of region, gender, and partly age distributions. We examine their device usage, concerns, prior negative incidents, perceptions of potential attackers, and information sources. Our study provides the first quantitative and nationally representative insights into the digital security experiences of these four at-risk groups in Germany. Our findings show that participants with migration backgrounds used the most devices, sought more security information, and reported more experiences with cybercrime incidents than other groups. Older adults used the fewest devices and were least affected by cybercrimes. All groups relied on friends and family and online news as their primary sources of security information, with little concern about their social circles being potential attackers. We highlight the nuanced differences between the four at-risk groups and compare them to the broader German population when possible. We conclude by presenting recommendations for education, policy, and future research aimed at addressing the digital security needs of these at-risk user groups. |
2111.03363 | Aidmar Wainakh | Aidmar Wainakh, Ephraim Zimmer, Sandeep Subedi, Jens Keim, Tim Grube,
Shankar Karuppayah, Alejandro Sanchez Guinea, Max M\"uhlh\"auser | Federated Learning Attacks Revisited: A Critical Discussion of Gaps,
Assumptions, and Evaluation Setups | In Section 5.2, incomplete information is mentioned on reference [9]
("How To Backdoor Federated Learning"). This part of the text will be revised
and enriched | null | null | null | cs.CR | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Federated learning (FL) enables a set of entities to collaboratively train a
machine learning model without sharing their sensitive data, thus, mitigating
some privacy concerns. However, an increasing number of works in the literature
propose attacks that can manipulate the model and disclose information about
the training data in FL. As a result, there has been a growing belief in the
research community that FL is highly vulnerable to a variety of severe attacks.
Although these attacks do indeed highlight security and privacy risks in FL,
some of them may not be as effective in production deployment because they are
feasible only under special -- sometimes impractical -- assumptions.
Furthermore, some attacks are evaluated under limited setups that may not match
real-world scenarios. In this paper, we investigate this issue by conducting a
systematic mapping study of attacks against FL, covering 48 relevant papers
from 2016 to the third quarter of 2021. On the basis of this study, we provide
a quantitative analysis of the proposed attacks and their evaluation settings.
This analysis reveals several research gaps with regard to the type of target
ML models and their architectures. Additionally, we highlight unrealistic
assumptions in the problem settings of some attacks, related to the
hyper-parameters of the ML model and data distribution among clients.
Furthermore, we identify and discuss several fallacies in the evaluation of
attacks, which open up questions on the generalizability of the conclusions. As
a remedy, we propose a set of recommendations to avoid these fallacies and to
promote adequate evaluations.
| [
{
"created": "Fri, 5 Nov 2021 10:07:34 GMT",
"version": "v1"
},
{
"created": "Mon, 3 Jan 2022 08:30:00 GMT",
"version": "v2"
}
] | 2022-01-04 | [
[
"Wainakh",
"Aidmar",
""
],
[
"Zimmer",
"Ephraim",
""
],
[
"Subedi",
"Sandeep",
""
],
[
"Keim",
"Jens",
""
],
[
"Grube",
"Tim",
""
],
[
"Karuppayah",
"Shankar",
""
],
[
"Guinea",
"Alejandro Sanchez",
""
],
[
"Mühlhäuser",
"Max",
""
]
] | Federated learning (FL) enables a set of entities to collaboratively train a machine learning model without sharing their sensitive data, thus, mitigating some privacy concerns. However, an increasing number of works in the literature propose attacks that can manipulate the model and disclose information about the training data in FL. As a result, there has been a growing belief in the research community that FL is highly vulnerable to a variety of severe attacks. Although these attacks do indeed highlight security and privacy risks in FL, some of them may not be as effective in production deployment because they are feasible only under special -- sometimes impractical -- assumptions. Furthermore, some attacks are evaluated under limited setups that may not match real-world scenarios. In this paper, we investigate this issue by conducting a systematic mapping study of attacks against FL, covering 48 relevant papers from 2016 to the third quarter of 2021. On the basis of this study, we provide a quantitative analysis of the proposed attacks and their evaluation settings. This analysis reveals several research gaps with regard to the type of target ML models and their architectures. Additionally, we highlight unrealistic assumptions in the problem settings of some attacks, related to the hyper-parameters of the ML model and data distribution among clients. Furthermore, we identify and discuss several fallacies in the evaluation of attacks, which open up questions on the generalizability of the conclusions. As a remedy, we propose a set of recommendations to avoid these fallacies and to promote adequate evaluations. |
1308.1279 | Russell Brown | Russell A. Brown | Barycentric Coordinates as Interpolants | 8 pages, 1 figure | null | null | null | cs.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Barycentric coordinates are frequently used as interpolants to shade computer
graphics images. A simple equation transforms barycentric coordinates from
screen space into eye space in order to undo the perspective transformation and
permit accurate interpolative shading of texture maps. This technique is
amenable to computation using a block-normalized integer representation.
| [
{
"created": "Tue, 6 Aug 2013 14:15:42 GMT",
"version": "v1"
},
{
"created": "Wed, 7 Aug 2013 13:39:44 GMT",
"version": "v2"
},
{
"created": "Thu, 8 Aug 2013 14:09:09 GMT",
"version": "v3"
},
{
"created": "Thu, 23 Oct 2014 09:46:57 GMT",
"version": "v4"
},
{
"created": "Fri, 24 Oct 2014 03:08:21 GMT",
"version": "v5"
},
{
"created": "Mon, 27 Oct 2014 04:34:08 GMT",
"version": "v6"
},
{
"created": "Tue, 28 Oct 2014 00:42:03 GMT",
"version": "v7"
},
{
"created": "Wed, 29 Oct 2014 02:34:55 GMT",
"version": "v8"
},
{
"created": "Thu, 30 Oct 2014 05:07:47 GMT",
"version": "v9"
}
] | 2014-10-31 | [
[
"Brown",
"Russell A.",
""
]
] | Barycentric coordinates are frequently used as interpolants to shade computer graphics images. A simple equation transforms barycentric coordinates from screen space into eye space in order to undo the perspective transformation and permit accurate interpolative shading of texture maps. This technique is amenable to computation using a block-normalized integer representation. |
2210.12877 | Chidera Biringa | Chidera Biringa and G\"okhan Kul | A Secure Design Pattern Approach Toward Tackling Lateral-Injection
Attacks | 4 pages, 3 figures. Accepted to The 15th IEEE International
Conference on Security of Information and Networks (SIN) | null | null | null | cs.CR | http://creativecommons.org/licenses/by/4.0/ | Software weaknesses that create attack surfaces for adversarial exploits,
such as lateral SQL injection (LSQLi) attacks, are usually introduced during
the design phase of software development. Security design patterns are
sometimes applied to tackle these weaknesses. However, due to the stealthy
nature of lateral-based attacks, employing traditional security patterns to
address these threats is insufficient. Hence, we present SEAL, a secure design
that extrapolates architectural, design, and implementation abstraction levels
to delegate security strategies toward tackling LSQLi attacks. We evaluated
SEAL using case study software, where we assumed the role of an adversary and
injected several attack vectors tasked with compromising the confidentiality
and integrity of its database. Our evaluation of SEAL demonstrated its capacity
to address LSQLi attacks.
| [
{
"created": "Sun, 23 Oct 2022 23:02:52 GMT",
"version": "v1"
}
] | 2022-10-25 | [
[
"Biringa",
"Chidera",
""
],
[
"Kul",
"Gökhan",
""
]
] | Software weaknesses that create attack surfaces for adversarial exploits, such as lateral SQL injection (LSQLi) attacks, are usually introduced during the design phase of software development. Security design patterns are sometimes applied to tackle these weaknesses. However, due to the stealthy nature of lateral-based attacks, employing traditional security patterns to address these threats is insufficient. Hence, we present SEAL, a secure design that extrapolates architectural, design, and implementation abstraction levels to delegate security strategies toward tackling LSQLi attacks. We evaluated SEAL using case study software, where we assumed the role of an adversary and injected several attack vectors tasked with compromising the confidentiality and integrity of its database. Our evaluation of SEAL demonstrated its capacity to address LSQLi attacks. |
2305.09779 | Ali Gorji | Ali Gorji, Andisheh Amrollahi, Andreas Krause | A Scalable Walsh-Hadamard Regularizer to Overcome the Low-degree
Spectral Bias of Neural Networks | Accepted for the 39th Conference on Uncertainty in Artificial
Intelligence (UAI 2023) | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Despite the capacity of neural nets to learn arbitrary functions, models
trained through gradient descent often exhibit a bias towards ``simpler''
functions. Various notions of simplicity have been introduced to characterize
this behavior. Here, we focus on the case of neural networks with discrete
(zero-one), high-dimensional, inputs through the lens of their Fourier
(Walsh-Hadamard) transforms, where the notion of simplicity can be captured
through the degree of the Fourier coefficients. We empirically show that neural
networks have a tendency to learn lower-degree frequencies. We show how this
spectral bias towards low-degree frequencies can in fact hurt the neural
network's generalization on real-world datasets. To remedy this we propose a
new scalable functional regularization scheme that aids the neural network to
learn higher degree frequencies. Our regularizer also helps avoid erroneous
identification of low-degree frequencies, which further improves
generalization. We extensively evaluate our regularizer on synthetic datasets
to gain insights into its behavior. Finally, we show significantly improved
generalization on four different datasets compared to standard neural networks
and other relevant baselines.
| [
{
"created": "Tue, 16 May 2023 20:06:01 GMT",
"version": "v1"
},
{
"created": "Sat, 10 Jun 2023 09:10:14 GMT",
"version": "v2"
}
] | 2023-06-13 | [
[
"Gorji",
"Ali",
""
],
[
"Amrollahi",
"Andisheh",
""
],
[
"Krause",
"Andreas",
""
]
] | Despite the capacity of neural nets to learn arbitrary functions, models trained through gradient descent often exhibit a bias towards ``simpler'' functions. Various notions of simplicity have been introduced to characterize this behavior. Here, we focus on the case of neural networks with discrete (zero-one), high-dimensional, inputs through the lens of their Fourier (Walsh-Hadamard) transforms, where the notion of simplicity can be captured through the degree of the Fourier coefficients. We empirically show that neural networks have a tendency to learn lower-degree frequencies. We show how this spectral bias towards low-degree frequencies can in fact hurt the neural network's generalization on real-world datasets. To remedy this we propose a new scalable functional regularization scheme that aids the neural network to learn higher degree frequencies. Our regularizer also helps avoid erroneous identification of low-degree frequencies, which further improves generalization. We extensively evaluate our regularizer on synthetic datasets to gain insights into its behavior. Finally, we show significantly improved generalization on four different datasets compared to standard neural networks and other relevant baselines. |
2103.08006 | Hamid Majidi Balanji | Hamid Majidi Balanji, and Ali Emre Turgut | Vision based range and bearing algorithm for robot swarms | 2 pages, 3 figures, 2018 Turkey Robotic Conference (TORK 2018) | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a novel computer vision algorithm proposed for the
on-line range and bearing detection of robot swarms. The results demonstrated
the reliability of the proposed vision system, showing that it can be used for
robot swarm applications.
| [
{
"created": "Sun, 14 Mar 2021 19:35:39 GMT",
"version": "v1"
}
] | 2021-03-16 | [
[
"Balanji",
"Hamid Majidi",
""
],
[
"Turgut",
"Ali Emre",
""
]
] | This paper presents a novel computer vision algorithm proposed for the on-line range and bearing detection of robot swarms. The results demonstrated the reliability of the proposed vision system, showing that it can be used for robot swarm applications. |
1704.06178 | Andrea Esuli | Fabio Carrara, Andrea Esuli, Fabrizio Falchi, Alejandro Moreo
Fern\'andez | Exploring epoch-dependent stochastic residual networks | Preliminary report | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The recently proposed stochastic residual networks selectively activate or
bypass the layers during training, based on independent stochastic choices,
each of which follows a probability distribution that is fixed in advance. In
this paper we present a first exploration of the use of an epoch-dependent
distribution, starting with a higher probability of bypassing deeper layers and
then activating them more frequently as training progresses. Preliminary
results are mixed, yet they show some potential in adding an epoch-dependent
management of distributions, which is worthy of further investigation.
| [
{
"created": "Thu, 20 Apr 2017 15:08:28 GMT",
"version": "v1"
}
] | 2017-04-21 | [
[
"Carrara",
"Fabio",
""
],
[
"Esuli",
"Andrea",
""
],
[
"Falchi",
"Fabrizio",
""
],
[
"Fernández",
"Alejandro Moreo",
""
]
] | The recently proposed stochastic residual networks selectively activate or bypass the layers during training, based on independent stochastic choices, each of which follows a probability distribution that is fixed in advance. In this paper we present a first exploration of the use of an epoch-dependent distribution, starting with a higher probability of bypassing deeper layers and then activating them more frequently as training progresses. Preliminary results are mixed, yet they show some potential in adding an epoch-dependent management of distributions, which is worthy of further investigation. |
2312.11399 | Chris Hokamp | Chris Hokamp and Demian Gholipour Ghalandari and Parsa Ghaffari | News Signals: An NLP Library for Text and Time Series | EMNLP NLP-OSS Workshop, December 2023 | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | We present an open-source Python library for building and using datasets
where inputs are clusters of textual data, and outputs are sequences of real
values representing one or more time series signals. The news-signals library
supports diverse data science and NLP problem settings related to the
prediction of time series behaviour using textual data feeds. For example, in
the news domain, inputs are document clusters corresponding to daily news
articles about a particular entity, and targets are explicitly associated
real-valued time series: the volume of news about a particular person or
company, or the number of pageviews of specific Wikimedia pages. Despite many
industry and research use cases for this class of problem settings, to the best
of our knowledge, News Signals is the only open-source library designed
specifically to facilitate data science and research settings with natural
language inputs and time series targets. In addition to the core codebase for
building and interacting with datasets, we also conduct a suite of experiments
using several popular Machine Learning libraries, which are used to establish
baselines for time series anomaly prediction using textual inputs.
| [
{
"created": "Mon, 18 Dec 2023 18:02:41 GMT",
"version": "v1"
}
] | 2023-12-19 | [
[
"Hokamp",
"Chris",
""
],
[
"Ghalandari",
"Demian Gholipour",
""
],
[
"Ghaffari",
"Parsa",
""
]
] | We present an open-source Python library for building and using datasets where inputs are clusters of textual data, and outputs are sequences of real values representing one or more time series signals. The news-signals library supports diverse data science and NLP problem settings related to the prediction of time series behaviour using textual data feeds. For example, in the news domain, inputs are document clusters corresponding to daily news articles about a particular entity, and targets are explicitly associated real-valued time series: the volume of news about a particular person or company, or the number of pageviews of specific Wikimedia pages. Despite many industry and research use cases for this class of problem settings, to the best of our knowledge, News Signals is the only open-source library designed specifically to facilitate data science and research settings with natural language inputs and time series targets. In addition to the core codebase for building and interacting with datasets, we also conduct a suite of experiments using several popular Machine Learning libraries, which are used to establish baselines for time series anomaly prediction using textual inputs. |
2201.05576 | Jayati Deshmukh | Srinath Srinivasa and Jayati Deshmukh | AI and the Sense of Self | Previous version of this paper was published in Jijnasa 2021 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | After several winters, AI is center-stage once again, with current advances
enabling a vast array of AI applications. This renewed wave of AI has brought
back to the fore several questions from the past, about philosophical
foundations of intelligence and common sense -- predominantly motivated by
ethical concerns of AI decision-making. In this paper, we address some of the
arguments that led to research interest in intelligent agents, and argue for
their relevance even in today's context. Specifically we focus on the cognitive
sense of "self" and its role in autonomous decision-making leading to
responsible behaviour. The authors hope to make a case for greater research
interest in building richer computational models of AI agents with a sense of
self.
| [
{
"created": "Fri, 7 Jan 2022 10:54:06 GMT",
"version": "v1"
}
] | 2022-01-17 | [
[
"Srinivasa",
"Srinath",
""
],
[
"Deshmukh",
"Jayati",
""
]
] | After several winters, AI is center-stage once again, with current advances enabling a vast array of AI applications. This renewed wave of AI has brought back to the fore several questions from the past, about philosophical foundations of intelligence and common sense -- predominantly motivated by ethical concerns of AI decision-making. In this paper, we address some of the arguments that led to research interest in intelligent agents, and argue for their relevance even in today's context. Specifically we focus on the cognitive sense of "self" and its role in autonomous decision-making leading to responsible behaviour. The authors hope to make a case for greater research interest in building richer computational models of AI agents with a sense of self. |
2407.07606 | Katrien Beuls | Jonas Doumen, Veronica Juliana Schmalz, Katrien Beuls and Paul Van
Eecke | The Computational Learning of Construction Grammars: State of the Art
and Prospective Roadmap | Peer-reviewed author's draft of a journal article to appear in
Constructions and Frames (2025) | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper documents and reviews the state of the art concerning
computational models of construction grammar learning. It brings together prior
work on the computational learning of form-meaning pairings, which has so far
been studied in several distinct areas of research. The goal of this paper is
threefold. First of all, it aims to synthesise the variety of methodologies
that have been proposed to date and the results that have been obtained.
Second, it aims to identify those parts of the challenge that have been
successfully tackled and reveal those that require further research. Finally,
it aims to provide a roadmap which can help to boost and streamline future
research efforts on the computational learning of large-scale, usage-based
construction grammars.
| [
{
"created": "Wed, 10 Jul 2024 12:45:02 GMT",
"version": "v1"
}
] | 2024-07-11 | [
[
"Doumen",
"Jonas",
""
],
[
"Schmalz",
"Veronica Juliana",
""
],
[
"Beuls",
"Katrien",
""
],
[
"Van Eecke",
"Paul",
""
]
] | This paper documents and reviews the state of the art concerning computational models of construction grammar learning. It brings together prior work on the computational learning of form-meaning pairings, which has so far been studied in several distinct areas of research. The goal of this paper is threefold. First of all, it aims to synthesise the variety of methodologies that have been proposed to date and the results that have been obtained. Second, it aims to identify those parts of the challenge that have been successfully tackled and reveal those that require further research. Finally, it aims to provide a roadmap which can help to boost and streamline future research efforts on the computational learning of large-scale, usage-based construction grammars. |
2001.04284 | Thomas Ehrhard | Thomas Ehrhard (IRIF (UMR\_8243)) | On the linear structure of cones | null | null | null | null | cs.LO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To overcome the limitations of probabilistic coherence spaces, which do
not seem to provide natural interpretations of continuous data types such as
the real line, Ehrhard et al. introduced a model of probabilistic higher order
computation based on (positive) cones, and a class of totally monotone
functions that they called "stable". Then Crubill{\'e} proved that this model
is a conservative extension of the earlier probabilistic coherence space model.
We continue these investigations by showing that the category of cones and
linear and Scott-continuous functions is a model of intuitionistic linear
logic. To define the tensor product, we use the special adjoint functor
theorem, and we prove that this operation is an extension of the standard
tensor product of probabilistic coherence spaces. We also show that the
latter are dense in cones, which allows us to lift the main properties of the
tensor product of probabilistic coherence spaces to general cones. Lastly, we
define in the same way an exponential of cones and extend measurability to
these new operations.
| [
{
"created": "Mon, 13 Jan 2020 14:32:46 GMT",
"version": "v1"
}
] | 2020-01-14 | [
[
"Ehrhard",
"Thomas",
"",
"IRIF"
]
] | To overcome the limitations of probabilistic coherence spaces, which do not seem to provide natural interpretations of continuous data types such as the real line, Ehrhard et al. introduced a model of probabilistic higher order computation based on (positive) cones, and a class of totally monotone functions that they called "stable". Then Crubill{\'e} proved that this model is a conservative extension of the earlier probabilistic coherence space model. We continue these investigations by showing that the category of cones and linear and Scott-continuous functions is a model of intuitionistic linear logic. To define the tensor product, we use the special adjoint functor theorem, and we prove that this operation is an extension of the standard tensor product of probabilistic coherence spaces. We also show that the latter are dense in cones, which allows us to lift the main properties of the tensor product of probabilistic coherence spaces to general cones. Lastly, we define in the same way an exponential of cones and extend measurability to these new operations. |
2204.11677 | Philipp Christmann | Philipp Christmann, Rishiraj Saha Roy, Gerhard Weikum | Conversational Question Answering on Heterogeneous Sources | SIGIR 2022 Research Track Long Paper | null | null | null | cs.IR cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Conversational question answering (ConvQA) tackles sequential information
needs where contexts in follow-up questions are left implicit. Current ConvQA
systems operate over homogeneous sources of information: either a knowledge
base (KB), or a text corpus, or a collection of tables. This paper addresses
the novel issue of jointly tapping into all of these together, this way
boosting answer coverage and confidence. We present CONVINSE, an end-to-end
pipeline for ConvQA over heterogeneous sources, operating in three stages: i)
learning an explicit structured representation of an incoming question and its
conversational context, ii) harnessing this frame-like representation to
uniformly capture relevant evidences from KB, text, and tables, and iii)
running a fusion-in-decoder model to generate the answer. We construct and
release the first benchmark, ConvMix, for ConvQA over heterogeneous sources,
comprising 3000 real-user conversations with 16000 questions, along with entity
annotations, completed question utterances, and question paraphrases.
Experiments demonstrate the viability and advantages of our method, compared to
state-of-the-art baselines.
| [
{
"created": "Mon, 25 Apr 2022 14:13:44 GMT",
"version": "v1"
},
{
"created": "Fri, 30 Jun 2023 12:32:03 GMT",
"version": "v2"
}
] | 2023-07-03 | [
[
"Christmann",
"Philipp",
""
],
[
"Roy",
"Rishiraj Saha",
""
],
[
"Weikum",
"Gerhard",
""
]
] | Conversational question answering (ConvQA) tackles sequential information needs where contexts in follow-up questions are left implicit. Current ConvQA systems operate over homogeneous sources of information: either a knowledge base (KB), or a text corpus, or a collection of tables. This paper addresses the novel issue of jointly tapping into all of these together, this way boosting answer coverage and confidence. We present CONVINSE, an end-to-end pipeline for ConvQA over heterogeneous sources, operating in three stages: i) learning an explicit structured representation of an incoming question and its conversational context, ii) harnessing this frame-like representation to uniformly capture relevant evidences from KB, text, and tables, and iii) running a fusion-in-decoder model to generate the answer. We construct and release the first benchmark, ConvMix, for ConvQA over heterogeneous sources, comprising 3000 real-user conversations with 16000 questions, along with entity annotations, completed question utterances, and question paraphrases. Experiments demonstrate the viability and advantages of our method, compared to state-of-the-art baselines. |
1404.7041 | Weiyu Xu | Kumar Vijay Mishra, Myung Cho, Anton Kruger and Weiyu Xu | Super-resolution Line Spectrum Estimation with Block Priors | 7 pages, double column | null | null | null | cs.IT math.IT math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We address the problem of super-resolution line spectrum estimation of an
undersampled signal with block prior information. The component frequencies of
the signal are assumed to take arbitrary continuous values in known frequency
blocks. We formulate a general semidefinite program to recover these
continuous-valued frequencies using theories of positive trigonometric
polynomials. The proposed semidefinite program achieves super-resolution
frequency recovery by taking advantage of known structures of frequency blocks.
Numerical experiments show great performance enhancements using our method.
| [
{
"created": "Mon, 28 Apr 2014 16:22:19 GMT",
"version": "v1"
}
] | 2014-04-29 | [
[
"Mishra",
"Kumar Vijay",
""
],
[
"Cho",
"Myung",
""
],
[
"Kruger",
"Anton",
""
],
[
"Xu",
"Weiyu",
""
]
] | We address the problem of super-resolution line spectrum estimation of an undersampled signal with block prior information. The component frequencies of the signal are assumed to take arbitrary continuous values in known frequency blocks. We formulate a general semidefinite program to recover these continuous-valued frequencies using theories of positive trigonometric polynomials. The proposed semidefinite program achieves super-resolution frequency recovery by taking advantage of known structures of frequency blocks. Numerical experiments show great performance enhancements using our method. |
2104.08835 | Qinyuan Ye | Qinyuan Ye, Bill Yuchen Lin, Xiang Ren | CrossFit: A Few-shot Learning Challenge for Cross-task Generalization in
NLP | Accepted to EMNLP 2021. Camera-ready version. Code:
https://github.com/INK-USC/CrossFit | null | null | null | cs.CL cs.LG | http://creativecommons.org/licenses/by-sa/4.0/ | Humans can learn a new language task efficiently with only a few examples, by
leveraging their knowledge obtained when learning prior tasks. In this paper,
we explore whether and how such cross-task generalization ability can be
acquired, and further applied to build better few-shot learners across diverse
NLP tasks. We introduce CrossFit, a problem setup for studying cross-task
generalization ability, which standardizes seen/unseen task partitions, data
access during different learning stages, and the evaluation protocols. To
instantiate different seen/unseen task partitions in CrossFit and facilitate
in-depth analysis, we present the NLP Few-shot Gym, a repository of 160 diverse
few-shot NLP tasks created from open-access NLP datasets and converted to a
unified text-to-text format. Our analysis reveals that the few-shot learning
ability on unseen tasks can be improved via an upstream learning stage using a
set of seen tasks. We also observe that the selection of upstream learning
tasks can significantly influence few-shot performance on unseen tasks, calling
for further analysis of task similarity and transferability.
| [
{
"created": "Sun, 18 Apr 2021 12:14:46 GMT",
"version": "v1"
},
{
"created": "Thu, 30 Sep 2021 22:36:50 GMT",
"version": "v2"
}
] | 2021-10-04 | [
[
"Ye",
"Qinyuan",
""
],
[
"Lin",
"Bill Yuchen",
""
],
[
"Ren",
"Xiang",
""
]
] | Humans can learn a new language task efficiently with only a few examples, by leveraging their knowledge obtained when learning prior tasks. In this paper, we explore whether and how such cross-task generalization ability can be acquired, and further applied to build better few-shot learners across diverse NLP tasks. We introduce CrossFit, a problem setup for studying cross-task generalization ability, which standardizes seen/unseen task partitions, data access during different learning stages, and the evaluation protocols. To instantiate different seen/unseen task partitions in CrossFit and facilitate in-depth analysis, we present the NLP Few-shot Gym, a repository of 160 diverse few-shot NLP tasks created from open-access NLP datasets and converted to a unified text-to-text format. Our analysis reveals that the few-shot learning ability on unseen tasks can be improved via an upstream learning stage using a set of seen tasks. We also observe that the selection of upstream learning tasks can significantly influence few-shot performance on unseen tasks, calling for further analysis of task similarity and transferability. |
1911.10516 | Weijia Zhang | Weijia Zhang, Hao Liu, Yanchi Liu, Jingbo Zhou, Hui Xiong | Semi-Supervised Hierarchical Recurrent Graph Neural Network for
City-Wide Parking Availability Prediction | 8 pages, 9 figures, AAAI-2020 | null | null | null | cs.LG eess.SP stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The ability to predict city-wide parking availability is crucial for the
successful development of Parking Guidance and Information (PGI) systems.
Indeed, the effective prediction of city-wide parking availability can improve
parking efficiency, help urban planning, and ultimately alleviate city
congestion. However, predicting city-wide parking availability is a non-trivial
task because of three major challenges: 1) the non-Euclidean spatial
autocorrelation among parking lots, 2) the dynamic temporal autocorrelation
inside of and between parking lots, and 3) the scarcity of information about
real-time parking availability obtained from real-time sensors (e.g., camera,
ultrasonic sensor, and GPS). To this end, we propose Semi-supervised
Hierarchical Recurrent Graph Neural Network (SHARE) for predicting city-wide
parking availability. Specifically, we first propose a hierarchical graph
convolution structure to model non-Euclidean spatial autocorrelation among
parking lots. Along this line, a contextual graph convolution block and a soft
clustering graph convolution block are respectively proposed to capture local
and global spatial dependencies between parking lots. Additionally, we adopt a
recurrent neural network to incorporate dynamic temporal dependencies of
parking lots. Moreover, we propose a parking availability approximation module
to estimate missing real-time parking availabilities from both spatial and
temporal domains. Finally, experiments on two real-world datasets demonstrate
that the prediction performance of SHARE outperforms that of seven
state-of-the-art baselines.
| [
{
"created": "Sun, 24 Nov 2019 12:17:04 GMT",
"version": "v1"
}
] | 2019-12-02 | [
[
"Zhang",
"Weijia",
""
],
[
"Liu",
"Hao",
""
],
[
"Liu",
"Yanchi",
""
],
[
"Zhou",
"Jingbo",
""
],
[
"Xiong",
"Hui",
""
]
] | The ability to predict city-wide parking availability is crucial for the successful development of Parking Guidance and Information (PGI) systems. Indeed, the effective prediction of city-wide parking availability can improve parking efficiency, help urban planning, and ultimately alleviate city congestion. However, predicting city-wide parking availability is a non-trivial task because of three major challenges: 1) the non-Euclidean spatial autocorrelation among parking lots, 2) the dynamic temporal autocorrelation inside of and between parking lots, and 3) the scarcity of information about real-time parking availability obtained from real-time sensors (e.g., camera, ultrasonic sensor, and GPS). To this end, we propose Semi-supervised Hierarchical Recurrent Graph Neural Network (SHARE) for predicting city-wide parking availability. Specifically, we first propose a hierarchical graph convolution structure to model non-Euclidean spatial autocorrelation among parking lots. Along this line, a contextual graph convolution block and a soft clustering graph convolution block are respectively proposed to capture local and global spatial dependencies between parking lots. Additionally, we adopt a recurrent neural network to incorporate dynamic temporal dependencies of parking lots. Moreover, we propose a parking availability approximation module to estimate missing real-time parking availabilities from both spatial and temporal domains. Finally, experiments on two real-world datasets demonstrate that the prediction performance of SHARE outperforms that of seven state-of-the-art baselines. |
1310.2778 | Guillaume Cheze | Alin Bostan (INRIA Saclay - Ile de France, MSR - INRIA), Guillaume
Ch\`eze (IMT), Thomas Cluzeau (XLIM), Jacques-Arthur Weil (XLIM) | Efficient Algorithms for Computing Rational First Integrals and Darboux
Polynomials of Planar Polynomial Vector Fields | null | null | null | null | cs.SC cs.DS math.CA nlin.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present fast algorithms for computing rational first integrals with
bounded degree of a planar polynomial vector field. Our approach is inspired by
an idea of Ferragut and Giacomini. We improve upon their work by proving that
rational first integrals can be computed via systems of linear equations
instead of systems of quadratic equations. This leads to a probabilistic
algorithm with arithmetic complexity $\bigOsoft(N^{2 \omega})$ and to a
deterministic algorithm solving the problem in $\bigOsoft(d^2N^{2 \omega+1})$
arithmetic operations, where $N$ denotes the given bound for the degree of the
rational first integral, and where $d \leq N$ is the degree of the vector
field, and $\omega$ the exponent of linear algebra. We also provide a fast
heuristic variant which computes a rational first integral, or fails, in
$\bigOsoft(N^{\omega+2})$ arithmetic operations. By comparison, the best
previous algorithm uses at least $d^{\omega+1}\, N^{4\omega +4}$ arithmetic
operations. We then show how to apply a similar method to the computation of
Darboux polynomials. The algorithms are implemented in a Maple package which is
available to interested readers with examples showing its efficiency.
| [
{
"created": "Thu, 10 Oct 2013 11:39:24 GMT",
"version": "v1"
}
] | 2013-10-11 | [
[
"Bostan",
"Alin",
"",
"INRIA Saclay - Ile de France, MSR - INRIA"
],
[
"Chèze",
"Guillaume",
"",
"IMT"
],
[
"Cluzeau",
"Thomas",
"",
"XLIM"
],
[
"Weil",
"Jacques-Arthur",
"",
"XLIM"
]
] | We present fast algorithms for computing rational first integrals with bounded degree of a planar polynomial vector field. Our approach is inspired by an idea of Ferragut and Giacomini. We improve upon their work by proving that rational first integrals can be computed via systems of linear equations instead of systems of quadratic equations. This leads to a probabilistic algorithm with arithmetic complexity $\bigOsoft(N^{2 \omega})$ and to a deterministic algorithm solving the problem in $\bigOsoft(d^2N^{2 \omega+1})$ arithmetic operations, where $N$ denotes the given bound for the degree of the rational first integral, and where $d \leq N$ is the degree of the vector field, and $\omega$ the exponent of linear algebra. We also provide a fast heuristic variant which computes a rational first integral, or fails, in $\bigOsoft(N^{\omega+2})$ arithmetic operations. By comparison, the best previous algorithm uses at least $d^{\omega+1}\, N^{4\omega +4}$ arithmetic operations. We then show how to apply a similar method to the computation of Darboux polynomials. The algorithms are implemented in a Maple package which is available to interested readers with examples showing its efficiency. |
1904.02530 | Hamidreza Kasaei | S. Hamidreza Kasaei, Nima Shafii, Luis Seabra Lopes, Ana Maria Tome | Interactive Open-Ended Object, Affordance and Grasp Learning for Robotic
Manipulation | null | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Service robots are expected to autonomously and efficiently work in
human-centric environments. For this type of robot, object perception and
manipulation are challenging tasks due to the need for accurate and real-time
responses. This paper presents an interactive open-ended learning approach to
recognize multiple objects and their grasp affordances concurrently. This is an
important contribution in the field of service robots since no matter how
extensive the training data used for batch learning, a robot might always be
confronted with an unknown object when operating in human-centric environments.
The paper describes the system architecture and the learning and recognition
capabilities. Grasp learning associates grasp configurations (i.e.,
end-effector positions and orientations) to grasp affordance categories. The
grasp affordance category and the grasp configuration are taught through verbal
and kinesthetic teaching, respectively. A Bayesian approach is adopted for
learning and recognition of object categories and an instance-based approach is
used for learning and recognition of affordance categories. An extensive set of
experiments has been performed to assess the performance of the proposed
approach regarding recognition accuracy, scalability and grasp success rate on
challenging datasets and real-world scenarios.
| [
{
"created": "Thu, 4 Apr 2019 13:10:24 GMT",
"version": "v1"
}
] | 2019-04-05 | [
[
"Kasaei",
"S. Hamidreza",
""
],
[
"Shafii",
"Nima",
""
],
[
"Lopes",
"Luis Seabra",
""
],
[
"Tome",
"Ana Maria",
""
]
] | Service robots are expected to autonomously and efficiently work in human-centric environments. For this type of robot, object perception and manipulation are challenging tasks due to the need for accurate and real-time responses. This paper presents an interactive open-ended learning approach to recognize multiple objects and their grasp affordances concurrently. This is an important contribution in the field of service robots since no matter how extensive the training data used for batch learning, a robot might always be confronted with an unknown object when operating in human-centric environments. The paper describes the system architecture and the learning and recognition capabilities. Grasp learning associates grasp configurations (i.e., end-effector positions and orientations) to grasp affordance categories. The grasp affordance category and the grasp configuration are taught through verbal and kinesthetic teaching, respectively. A Bayesian approach is adopted for learning and recognition of object categories and an instance-based approach is used for learning and recognition of affordance categories. An extensive set of experiments has been performed to assess the performance of the proposed approach regarding recognition accuracy, scalability and grasp success rate on challenging datasets and real-world scenarios. |
1910.01722 | Osnat Mokryn | Hadar Miller and Osnat Mokryn | Constant State of Change: Engagement Inequality in Temporal Dynamic
Networks | arXiv admin note: substantial text overlap with arXiv:1809.09613 | PLOS ONE 15(4): e0231035 (2020) | 10.1371/journal.pone.0231035 | null | cs.SI physics.data-an stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The temporal changes in complex systems of interactions have excited the
research community in recent years, as they offer insights into their
dynamics and evolution. From the collective dynamics of organizations and
online communities to the spreading of information and fake news, to name a
few, temporal dynamics are fundamental in the understanding of complex systems.
In this work, we quantify the level of engagement in dynamic complex systems of
interactions, modeled as networks. We focus on interaction networks for which
the dynamics of the interactions are coupled with that of the topology, such as
online messaging, forums, and emails. We define two indices to capture the
temporal level of engagement: the Temporal Network (edge) Intensity index, and
the Temporal Dominance Inequality index. Our surprising results are that these
measures are stationary for most measured networks, regardless of vast
fluctuations in the size of the networks in time. Moreover, more than 80% of
weekly changes in the indices values are bounded by less than 10%. The indices
are stable throughout the temporal evolution of a network but differ
between networks, and a classifier can determine which network the temporal
indices belong to with high success. We find an exception in the Enron
management email exchange during the year before its disintegration, in which
both indices show high volatility throughout the inspected period.
| [
{
"created": "Thu, 3 Oct 2019 21:12:57 GMT",
"version": "v1"
},
{
"created": "Fri, 10 Apr 2020 19:25:59 GMT",
"version": "v2"
}
] | 2020-04-15 | [
[
"Miller",
"Hadar",
""
],
[
"Mokryn",
"Osnat",
""
]
] | The temporal changes in complex systems of interactions have excited the research community in recent years as they encompass understandings on their dynamics and evolution. From the collective dynamics of organizations and online communities to the spreading of information and fake news, to name a few, temporal dynamics are fundamental in the understanding of complex systems. In this work, we quantify the level of engagement in dynamic complex systems of interactions, modeled as networks. We focus on interaction networks for which the dynamics of the interactions are coupled with that of the topology, such as online messaging, forums, and emails. We define two indices to capture the temporal level of engagement: the Temporal Network (edge) Intensity index, and the Temporal Dominance Inequality index. Our surprising results are that these measures are stationary for most measured networks, regardless of vast fluctuations in the size of the networks in time. Moreover, more than 80% of weekly changes in the indices values are bounded by less than 10%. The indices are stable between the temporal evolution of a network but are different between networks, and a classifier can determine the network the temporal indices belong to with high success. We find an exception in the Enron management email exchange during the year before its disintegration, in which both indices show high volatility throughout the inspected period. |
2004.00554 | Tao Lu | Bin Wang, Tao Lu, Yanduo Zhang | Feature-Driven Super-Resolution for Object Detection | 4 pages, 3 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Although some convolutional neural network (CNN) based super-resolution
(SR) algorithms have recently yielded good visual performance on single images,
most of them focus on perceptual quality but ignore the specific needs of
subsequent detection tasks. This paper proposes a simple but powerful
feature-driven super-resolution (FDSR) to improve the detection performance of
low-resolution (LR) images. First, the proposed method uses a feature-domain
prior, extracted from an existing detector backbone, to guide the HR image
reconstruction. Then, with the aligned features, FDSR updates the SR parameters
for better detection performance. Compared with some state-of-the-art SR
algorithms at a 4$\times$ scale factor, FDSR achieves better detection mAP on
the MS COCO validation and VOC2007 datasets, with good generalization to other
detection networks.
| [
{
"created": "Wed, 1 Apr 2020 16:33:07 GMT",
"version": "v1"
}
] | 2020-04-02 | [
[
"Wang",
"Bin",
""
],
[
"Lu",
"Tao",
""
],
[
"Zhang",
"Yanduo",
""
]
] | Although some convolutional neural network (CNN) based super-resolution (SR) algorithms have recently yielded good visual performance on single images, most of them focus on perceptual quality but ignore the specific needs of subsequent detection tasks. This paper proposes a simple but powerful feature-driven super-resolution (FDSR) to improve the detection performance of low-resolution (LR) images. First, the proposed method uses a feature-domain prior, extracted from an existing detector backbone, to guide the HR image reconstruction. Then, with the aligned features, FDSR updates the SR parameters for better detection performance. Compared with some state-of-the-art SR algorithms at a 4$\times$ scale factor, FDSR achieves better detection mAP on the MS COCO validation and VOC2007 datasets, with good generalization to other detection networks. |
2006.11026 | Carola Doerr | Arina Buzdalova, Carola Doerr, Anna Rodionova | Hybridizing the 1/5-th Success Rule with Q-Learning for Controlling the
Mutation Rate of an Evolutionary Algorithm | To appear in the Proceedings of Parallel Problem Solving from Nature
(PPSN'2020) | null | null | null | cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It is well known that evolutionary algorithms (EAs) achieve peak performance
only when their parameters are suitably tuned to the given problem. Even more,
it is known that the best parameter values can change during the optimization
process. Parameter control mechanisms are techniques developed to identify and
to track these values.
Recently, a series of rigorous theoretical works confirmed the superiority of
several parameter control techniques over EAs with best possible static
parameters. Among these results are examples for controlling the mutation rate
of the $(1+\lambda)$~EA when optimizing the OneMax problem. However, it was
shown in [Rodionova et al., GECCO'19] that the quality of these techniques
strongly depends on the offspring population size $\lambda$.
We introduce in this work a new hybrid parameter control technique, which
combines the well-known one-fifth success rule with Q-learning. We demonstrate
that our HQL mechanism achieves equal or superior performance to all techniques
tested in [Rodionova et al., GECCO'19] and this -- in contrast to previous
parameter control methods -- simultaneously for all offspring population sizes
$\lambda$. We also show that the promising performance of HQL is not restricted
to OneMax, but extends to several other benchmark problems.
| [
{
"created": "Fri, 19 Jun 2020 09:12:49 GMT",
"version": "v1"
}
] | 2020-06-22 | [
[
"Buzdalova",
"Arina",
""
],
[
"Doerr",
"Carola",
""
],
[
"Rodionova",
"Anna",
""
]
] | It is well known that evolutionary algorithms (EAs) achieve peak performance only when their parameters are suitably tuned to the given problem. Even more, it is known that the best parameter values can change during the optimization process. Parameter control mechanisms are techniques developed to identify and to track these values. Recently, a series of rigorous theoretical works confirmed the superiority of several parameter control techniques over EAs with best possible static parameters. Among these results are examples for controlling the mutation rate of the $(1+\lambda)$~EA when optimizing the OneMax problem. However, it was shown in [Rodionova et al., GECCO'19] that the quality of these techniques strongly depends on the offspring population size $\lambda$. We introduce in this work a new hybrid parameter control technique, which combines the well-known one-fifth success rule with Q-learning. We demonstrate that our HQL mechanism achieves equal or superior performance to all techniques tested in [Rodionova et al., GECCO'19] and this -- in contrast to previous parameter control methods -- simultaneously for all offspring population sizes $\lambda$. We also show that the promising performance of HQL is not restricted to OneMax, but extends to several other benchmark problems. |
2212.01736 | Min Qiu | Min Qiu and Yu-Chih Huang and Jinhong Yuan | Downlink Transmission with Heterogeneous URLLC Services: Discrete
Signaling With Single-User Decoding | 16 pages, 7 figures, accepted by IEEE Journal on Selected Areas in
Communications | null | null | null | cs.IT eess.SP math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The problem of designing downlink transmission schemes for supporting
heterogeneous ultra-reliable low-latency communications (URLLC) and/or with
other types of services is investigated. We consider the broadcast channel,
where the base station sends superimposed signals to multiple users. Under
heterogeneous blocklength constraints, strong users who are URLLC users cannot
wait to receive the entire transmission frame and perform successive
interference cancellation (SIC) due to stringent latency requirements, in
contrast to the conventional infinite blocklength cases. Even if SIC is
feasible, SIC may be imperfect under finite blocklength constraints. To cope
with the heterogeneity in latency and reliability requirements, we propose a
practical downlink transmission scheme with discrete signaling and single-user
decoding (SUD), i.e., without SIC. We carefully design the discrete input
distributions to enable efficient SUD by exploiting the structural
interference. Furthermore, we derive the second-order achievable rate under
heterogeneous blocklength and error probability constraints and use it to guide
the design of channel coding and modulations. It is shown that in terms of
achievable rate under short blocklength, the proposed scheme with regular
quadrature amplitude modulations and SUD can operate extremely close to the
benchmark schemes that assume perfect SIC with Gaussian signaling.
| [
{
"created": "Sun, 4 Dec 2022 04:00:19 GMT",
"version": "v1"
},
{
"created": "Tue, 2 May 2023 13:18:50 GMT",
"version": "v2"
}
] | 2023-05-03 | [
[
"Qiu",
"Min",
""
],
[
"Huang",
"Yu-Chih",
""
],
[
"Yuan",
"Jinhong",
""
]
] | The problem of designing downlink transmission schemes for supporting heterogeneous ultra-reliable low-latency communications (URLLC) and/or with other types of services is investigated. We consider the broadcast channel, where the base station sends superimposed signals to multiple users. Under heterogeneous blocklength constraints, strong users who are URLLC users cannot wait to receive the entire transmission frame and perform successive interference cancellation (SIC) due to stringent latency requirements, in contrast to the conventional infinite blocklength cases. Even if SIC is feasible, SIC may be imperfect under finite blocklength constraints. To cope with the heterogeneity in latency and reliability requirements, we propose a practical downlink transmission scheme with discrete signaling and single-user decoding (SUD), i.e., without SIC. We carefully design the discrete input distributions to enable efficient SUD by exploiting the structural interference. Furthermore, we derive the second-order achievable rate under heterogeneous blocklength and error probability constraints and use it to guide the design of channel coding and modulations. It is shown that in terms of achievable rate under short blocklength, the proposed scheme with regular quadrature amplitude modulations and SUD can operate extremely close to the benchmark schemes that assume perfect SIC with Gaussian signaling. |
1807.09751 | Han Xiao | Han Xiao, Yidong Chen, Xiaodong Shi | Multi-Perspective Neural Architecture for Recommendation System | null | null | null | null | cs.IR cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, a research trend has emerged that leverages neural architectures for
recommendation systems. Though several deep recommender models have been
proposed, most methods are too simple to characterize users' complex
preferences. In this paper, for a fine-grained analysis, users' ratings are
explained from multiple perspectives, based on which we propose our neural
architecture. Specifically, our model employs several sequential stages to
encode the user and item into hidden representations. In one stage, the user
and item are represented from multiple perspectives, and in each perspective
the representations of user and item attend to each other. Finally, we apply a
metric to the output representations of the final stage to approximate the
users' ratings. Extensive experiments demonstrate that our method achieves
substantial improvements over baselines.
| [
{
"created": "Thu, 12 Jul 2018 05:06:39 GMT",
"version": "v1"
}
] | 2018-07-26 | [
[
"Xiao",
"Han",
""
],
[
"Chen",
"Yidong",
""
],
[
"Shi",
"Xiaodong",
""
]
] | Recently, a research trend has emerged that leverages neural architectures for recommendation systems. Though several deep recommender models have been proposed, most methods are too simple to characterize users' complex preferences. In this paper, for a fine-grained analysis, users' ratings are explained from multiple perspectives, based on which we propose our neural architecture. Specifically, our model employs several sequential stages to encode the user and item into hidden representations. In one stage, the user and item are represented from multiple perspectives, and in each perspective the representations of user and item attend to each other. Finally, we apply a metric to the output representations of the final stage to approximate the users' ratings. Extensive experiments demonstrate that our method achieves substantial improvements over baselines. |
1905.09777 | Oded Stein | Oded Stein, Alec Jacobson, Max Wardetzky and Eitan Grinspun | A Smoothness Energy without Boundary Distortion for Curved Surfaces | 17 pages, 18 figures | null | 10.1145/3377406 | null | cs.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Current quadratic smoothness energies for curved surfaces either exhibit
distortions near the boundary due to zero Neumann boundary conditions, or they
do not correctly account for intrinsic curvature, which leads to
unnatural-looking behavior away from the boundary. This leads to an unfortunate
trade-off: one can either have natural behavior in the interior, or a
distortion-free result at the boundary, but not both. We introduce a
generalized Hessian energy for curved surfaces, expressed in terms of the
covariant one-form Dirichlet energy, the Gaussian curvature, and the exterior
derivative. Energy minimizers solve the Laplace-Beltrami biharmonic equation,
correctly accounting for intrinsic curvature, leading to natural-looking
isolines. On the boundary, minimizers are as-linear-as-possible, which reduces
the distortion of isolines at the boundary. We discretize the covariant
one-form Dirichlet energy using Crouzeix-Raviart finite elements, arriving at a
discrete formulation of the Hessian energy for applications on curved surfaces.
We observe convergence of the discretization in our experiments.
| [
{
"created": "Thu, 23 May 2019 17:04:31 GMT",
"version": "v1"
},
{
"created": "Mon, 27 Apr 2020 23:08:09 GMT",
"version": "v2"
}
] | 2020-04-29 | [
[
"Stein",
"Oded",
""
],
[
"Jacobson",
"Alec",
""
],
[
"Wardetzky",
"Max",
""
],
[
"Grinspun",
"Eitan",
""
]
] | Current quadratic smoothness energies for curved surfaces either exhibit distortions near the boundary due to zero Neumann boundary conditions, or they do not correctly account for intrinsic curvature, which leads to unnatural-looking behavior away from the boundary. This leads to an unfortunate trade-off: one can either have natural behavior in the interior, or a distortion-free result at the boundary, but not both. We introduce a generalized Hessian energy for curved surfaces, expressed in terms of the covariant one-form Dirichlet energy, the Gaussian curvature, and the exterior derivative. Energy minimizers solve the Laplace-Beltrami biharmonic equation, correctly accounting for intrinsic curvature, leading to natural-looking isolines. On the boundary, minimizers are as-linear-as-possible, which reduces the distortion of isolines at the boundary. We discretize the covariant one-form Dirichlet energy using Crouzeix-Raviart finite elements, arriving at a discrete formulation of the Hessian energy for applications on curved surfaces. We observe convergence of the discretization in our experiments. |
1811.09961 | Bo Pang | Bo Pang, Kaiwen Zha, Hanwen Cao, Chen Shi, Cewu Lu | Deep RNN Framework for Visual Sequential Applications | 10 pages, 7 figures, CVPR 2019 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Extracting temporal and representation features efficiently plays a pivotal
role in understanding visual sequence information. To deal with this, we
propose a new recurrent neural framework that can be stacked deep effectively.
There are mainly two novel designs in our deep RNN framework: one is a new RNN
module called Context Bridge Module (CBM) which splits the information flowing
along the sequence (temporal direction) and along depth (spatial representation
direction), making it easier to train when building deep by balancing these two
directions; the other is the Overlap Coherence Training Scheme that reduces the
training complexity for long visual sequential tasks on account of the
limitation of computing resources.
We provide empirical evidence to show that our deep RNN framework is easy to
optimize and can gain accuracy from the increased depth on several visual
sequence problems. On these tasks, we evaluate our deep RNN framework with 15
layers, 7x deeper than conventional RNN networks, but it is still easy to train. Our
deep framework achieves more than 11% relative improvements over shallow RNN
models on Kinetics, UCF-101, and HMDB-51 for video classification. For
auxiliary annotation, after replacing the shallow RNN part of Polygon-RNN with
our 15-layer deep CBM, the performance improves by 14.7%. For video future
prediction, our deep RNN improves the state-of-the-art shallow model's
performance by 2.4% on PSNR and SSIM. The code and trained models are published
alongside this paper: https://github.com/BoPang1996/Deep-RNN-Framework.
| [
{
"created": "Sun, 25 Nov 2018 06:34:29 GMT",
"version": "v1"
},
{
"created": "Tue, 27 Nov 2018 08:04:56 GMT",
"version": "v2"
},
{
"created": "Wed, 28 Nov 2018 09:34:28 GMT",
"version": "v3"
},
{
"created": "Fri, 25 Oct 2019 03:55:16 GMT",
"version": "v4"
}
] | 2019-10-28 | [
[
"Pang",
"Bo",
""
],
[
"Zha",
"Kaiwen",
""
],
[
"Cao",
"Hanwen",
""
],
[
"Shi",
"Chen",
""
],
[
"Lu",
"Cewu",
""
]
] | Extracting temporal and representation features efficiently plays a pivotal role in understanding visual sequence information. To deal with this, we propose a new recurrent neural framework that can be stacked deep effectively. There are mainly two novel designs in our deep RNN framework: one is a new RNN module called Context Bridge Module (CBM) which splits the information flowing along the sequence (temporal direction) and along depth (spatial representation direction), making it easier to train when building deep by balancing these two directions; the other is the Overlap Coherence Training Scheme that reduces the training complexity for long visual sequential tasks on account of the limitation of computing resources. We provide empirical evidence to show that our deep RNN framework is easy to optimize and can gain accuracy from the increased depth on several visual sequence problems. On these tasks, we evaluate our deep RNN framework with 15 layers, 7x deeper than conventional RNN networks, but it is still easy to train. Our deep framework achieves more than 11% relative improvements over shallow RNN models on Kinetics, UCF-101, and HMDB-51 for video classification. For auxiliary annotation, after replacing the shallow RNN part of Polygon-RNN with our 15-layer deep CBM, the performance improves by 14.7%. For video future prediction, our deep RNN improves the state-of-the-art shallow model's performance by 2.4% on PSNR and SSIM. The code and trained models are published alongside this paper: https://github.com/BoPang1996/Deep-RNN-Framework. |
2309.16882 | Shaoming Xu | Shaoming Xu, Ankush Khandelwal, Arvind Renganathan, Vipin Kumar | Message Propagation Through Time: An Algorithm for Sequence Dependency
Retention in Time Series Modeling | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Time series modeling, a crucial area in science, often encounters challenges
when training Machine Learning (ML) models like Recurrent Neural Networks
(RNNs) using the conventional mini-batch training strategy that assumes
independent and identically distributed (IID) samples and initializes RNNs with
zero hidden states. The IID assumption ignores temporal dependencies among
samples, resulting in poor performance. This paper proposes the Message
Propagation Through Time (MPTT) algorithm to effectively incorporate long
temporal dependencies while preserving faster training times relative to the
stateful solutions. MPTT utilizes two memory modules to asynchronously manage
initial hidden states for RNNs, fostering seamless information exchange between
samples and allowing diverse mini-batches throughout epochs. MPTT further
implements three policies to filter outdated and preserve essential information
in the hidden states to generate informative initial hidden states for RNNs,
facilitating robust training. Experimental results demonstrate that MPTT
outperforms seven strategies on four climate datasets with varying levels of
temporal dependencies.
| [
{
"created": "Thu, 28 Sep 2023 22:38:18 GMT",
"version": "v1"
}
] | 2023-10-02 | [
[
"Xu",
"Shaoming",
""
],
[
"Khandelwal",
"Ankush",
""
],
[
"Renganathan",
"Arvind",
""
],
[
"Kumar",
"Vipin",
""
]
] | Time series modeling, a crucial area in science, often encounters challenges when training Machine Learning (ML) models like Recurrent Neural Networks (RNNs) using the conventional mini-batch training strategy that assumes independent and identically distributed (IID) samples and initializes RNNs with zero hidden states. The IID assumption ignores temporal dependencies among samples, resulting in poor performance. This paper proposes the Message Propagation Through Time (MPTT) algorithm to effectively incorporate long temporal dependencies while preserving faster training times relative to the stateful solutions. MPTT utilizes two memory modules to asynchronously manage initial hidden states for RNNs, fostering seamless information exchange between samples and allowing diverse mini-batches throughout epochs. MPTT further implements three policies to filter outdated and preserve essential information in the hidden states to generate informative initial hidden states for RNNs, facilitating robust training. Experimental results demonstrate that MPTT outperforms seven strategies on four climate datasets with varying levels of temporal dependencies. |
1408.3764 | Loren Schwiebert | Loren Schwiebert, Eyad Hailat, Kamel Rushaidat, Jason Mick, and
Jeffrey Potoff | An Efficient Cell List Implementation for Monte Carlo Simulation on GPUs | 30 pages | null | null | null | cs.DC physics.comp-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Maximizing the performance potential of the modern day GPU architecture
requires judicious utilization of available parallel resources. Although
dramatic reductions can often be obtained through straightforward mappings,
further performance improvements often require algorithmic redesigns to more
closely exploit the target architecture. In this paper, we focus on efficient
molecular simulations for the GPU and propose a novel cell list algorithm that
better utilizes its parallel resources. Our goal is an efficient GPU
implementation of large-scale Monte Carlo simulations for the grand canonical
ensemble. This is a particularly challenging application because there is
inherently less computation and parallelism than in similar applications with
molecular dynamics. Consistent with the results of prior researchers, our
simulation results show traditional cell list implementations for Monte Carlo
simulations of molecular systems offer effectively no performance improvement
for small systems [5, 14], even when porting to the GPU. However for larger
systems, the cell list implementation offers significant gains in performance.
Furthermore, our novel cell list approach results in better performance for all
problem sizes when compared with other GPU implementations with or without cell
lists.
| [
{
"created": "Sat, 16 Aug 2014 19:30:37 GMT",
"version": "v1"
}
] | 2014-08-19 | [
[
"Schwiebert",
"Loren",
""
],
[
"Hailat",
"Eyad",
""
],
[
"Rushaidat",
"Kamel",
""
],
[
"Mick",
"Jason",
""
],
[
"Potoff",
"Jeffrey",
""
]
] | Maximizing the performance potential of the modern day GPU architecture requires judicious utilization of available parallel resources. Although dramatic reductions can often be obtained through straightforward mappings, further performance improvements often require algorithmic redesigns to more closely exploit the target architecture. In this paper, we focus on efficient molecular simulations for the GPU and propose a novel cell list algorithm that better utilizes its parallel resources. Our goal is an efficient GPU implementation of large-scale Monte Carlo simulations for the grand canonical ensemble. This is a particularly challenging application because there is inherently less computation and parallelism than in similar applications with molecular dynamics. Consistent with the results of prior researchers, our simulation results show traditional cell list implementations for Monte Carlo simulations of molecular systems offer effectively no performance improvement for small systems [5, 14], even when porting to the GPU. However for larger systems, the cell list implementation offers significant gains in performance. Furthermore, our novel cell list approach results in better performance for all problem sizes when compared with other GPU implementations with or without cell lists. |
2310.13441 | Guillaume Allais | Guillaume Allais | Seamless, Correct, and Generic Programming over Serialised Data | As submitted to JFP | null | null | null | cs.PL | http://creativecommons.org/licenses/by/4.0/ | In typed functional languages, one can typically only manipulate data in a
type-safe manner if it first has been deserialised into an in-memory tree
represented as a graph of nodes-as-structs and subterms-as-pointers.
We demonstrate how we can use QTT as implemented in Idris 2 to define a
small universe of serialised datatypes, and provide generic programs allowing
users to process values stored contiguously in buffers.
Our approach allows implementors to prove the full functional correctness by
construction of the IO functions processing the data stored in the buffer.
| [
{
"created": "Fri, 20 Oct 2023 12:09:17 GMT",
"version": "v1"
},
{
"created": "Fri, 26 Apr 2024 16:18:36 GMT",
"version": "v2"
}
] | 2024-04-29 | [
[
"Allais",
"Guillaume",
""
]
] | In typed functional languages, one can typically only manipulate data in a type-safe manner if it first has been deserialised into an in-memory tree represented as a graph of nodes-as-structs and subterms-as-pointers. We demonstrate how we can use QTT as implemented in Idris 2 to define a small universe of serialised datatypes, and provide generic programs allowing users to process values stored contiguously in buffers. Our approach allows implementors to prove the full functional correctness by construction of the IO functions processing the data stored in the buffer. |
1307.4308 | Junichiro Fukuyama | Junichiro Fukuyama | An Alternative Proof of the Exponential Monotone Complexity of the
Clique Function | arXiv admin note: substantial text overlap with arXiv:1305.3218 | null | null | null | cs.CC math.CO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In 1985, Razborov discovered a proof that the monotone circuit complexity of
the clique problem is super-polynomial. Alon and Boppana improved the result
to an exponential lower bound exp(\Omega((n / \log n)^{1/3})) for a monotone
circuit C computing cliques of size (1/4) (n / log n)^{2/3}, where n is the
number of vertices in a graph. Both proofs are based on the method of
approximations and Erdos and Rado's sunflower lemma. There has been an interest
in further generalization of the proof scheme.
In this paper, we present a new approach to show the exponential monotone
complexity. Unlike the standard method, it dynamically constructs a
counterexample: Assuming a monotone circuit C of sub-exponential size to compute
k-cliques c, an algorithm finds an edge set t containing no c in the
disjunctive normal form constructed at the root of C. We call such t a shift.
The proof shows that t is disjoint from an edge set z whose removal leaves no
k-cliques.
We explore the set theoretical nature of computation by Boolean circuits. We
develop a theory by finding topological properties of the Hamming space 2^{[n]}
where [n]={1, 2, ..., n}. A structural theorem is presented, which is closely
related to the sunflower lemma and claims a stronger statement in most cases.
The theory lays the foundation of the above shift method. It also shows the
existence of a sunflower with small core in a family of sets, which is not an
obvious consequence of the sunflower lemma.
Lastly, we point out that the new methodology has potential to apply to a
general circuit computing cliques due to the dynamic selection of t and z, and
to improve the Alon-Boppana bound exp(\Omega((n / \log n)^{1/3})).
| [
{
"created": "Tue, 16 Jul 2013 15:29:16 GMT",
"version": "v1"
},
{
"created": "Sun, 8 Sep 2013 07:36:13 GMT",
"version": "v2"
}
] | 2013-09-10 | [
[
"Fukuyama",
"Junichiro",
""
]
] | In 1985, Razborov discovered a proof that the monotone circuit complexity of the clique problem is super-polynomial. Alon and Boppana improved the result to an exponential lower bound exp(\Omega((n / \log n)^{1/3})) for a monotone circuit C computing cliques of size (1/4) (n / log n)^{2/3}, where n is the number of vertices in a graph. Both proofs are based on the method of approximations and Erdos and Rado's sunflower lemma. There has been an interest in further generalization of the proof scheme. In this paper, we present a new approach to show the exponential monotone complexity. Unlike the standard method, it dynamically constructs a counterexample: Assuming a monotone circuit C of sub-exponential size to compute k-cliques c, an algorithm finds an edge set t containing no c in the disjunctive normal form constructed at the root of C. We call such t a shift. The proof shows that t is disjoint from an edge set z whose removal leaves no k-cliques. We explore the set theoretical nature of computation by Boolean circuits. We develop a theory by finding topological properties of the Hamming space 2^{[n]} where [n]={1, 2, ..., n}. A structural theorem is presented, which is closely related to the sunflower lemma and claims a stronger statement in most cases. The theory lays the foundation of the above shift method. It also shows the existence of a sunflower with small core in a family of sets, which is not an obvious consequence of the sunflower lemma. Lastly, we point out that the new methodology has potential to apply to a general circuit computing cliques due to the dynamic selection of t and z, and to improve the Alon-Boppana bound exp(\Omega((n / \log n)^{1/3})). |
2109.14364 | Keshav Kolluru | Keshav Kolluru, Martin Rezk, Pat Verga, William W. Cohen and Partha
Talukdar | Multilingual Fact Linking | AKBC 2021 | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Knowledge-intensive NLP tasks can benefit from linking natural language text
with facts from a Knowledge Graph (KG). Although facts themselves are
language-agnostic, the fact labels (i.e., language-specific representation of
the fact) in the KG are often present only in a few languages. This makes it
challenging to link KG facts to sentences in languages other than the limited
set of languages. To address this problem, we introduce the task of
Multilingual Fact Linking (MFL), where the goal is to link a fact expressed in a
sentence to the corresponding fact in the KG, even when the fact label in the KG is
not available in the language of the sentence. To facilitate research in this
area, we present a new evaluation dataset, IndicLink. This dataset contains
11,293 linked WikiData facts and 6,429 sentences spanning English and six
Indian languages. We propose a Retrieval+Generation model, ReFCoG, that can
scale to millions of KG facts by combining Dual Encoder based retrieval with a
Seq2Seq based generation model which is constrained to output only valid KG
facts. ReFCoG outperforms standard Retrieval+Re-ranking models by 10.7 pts in
Precision@1. In spite of this gain, the model achieves an overall score of
52.1, showing ample scope for improvement in the task. ReFCoG code and IndicLink
data are available at https://github.com/SaiKeshav/mfl
| [
{
"created": "Wed, 29 Sep 2021 11:50:44 GMT",
"version": "v1"
},
{
"created": "Fri, 1 Oct 2021 03:58:54 GMT",
"version": "v2"
}
] | 2021-10-04 | [
[
"Kolluru",
"Keshav",
""
],
[
"Rezk",
"Martin",
""
],
[
"Verga",
"Pat",
""
],
[
"Cohen",
"William W.",
""
],
[
"Talukdar",
"Partha",
""
]
] | Knowledge-intensive NLP tasks can benefit from linking natural language text with facts from a Knowledge Graph (KG). Although facts themselves are language-agnostic, the fact labels (i.e., language-specific representation of the fact) in the KG are often present only in a few languages. This makes it challenging to link KG facts to sentences in languages other than the limited set of languages. To address this problem, we introduce the task of Multilingual Fact Linking (MFL), where the goal is to link a fact expressed in a sentence to the corresponding fact in the KG, even when the fact label in the KG is not available in the language of the sentence. To facilitate research in this area, we present a new evaluation dataset, IndicLink. This dataset contains 11,293 linked WikiData facts and 6,429 sentences spanning English and six Indian languages. We propose a Retrieval+Generation model, ReFCoG, that can scale to millions of KG facts by combining Dual Encoder based retrieval with a Seq2Seq based generation model which is constrained to output only valid KG facts. ReFCoG outperforms standard Retrieval+Re-ranking models by 10.7 pts in Precision@1. In spite of this gain, the model achieves an overall score of 52.1, showing ample scope for improvement in the task. ReFCoG code and IndicLink data are available at https://github.com/SaiKeshav/mfl |
1805.11179 | Octavio Narvaez Aroche | Octavio Narvaez-Aroche and Pierre-Jean Meyer and Murat Arcak and
Andrew Packard | Reachability Analysis for Robustness Evaluation of the Sit-To-Stand
Movement for Powered Lower Limb Orthoses | 14 pages, 19 figures, DSCC 2018 Submission | null | 10.1115/DSCC2018-9066 | null | cs.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A sensitivity-based approach for computing over-approximations of reachable
sets, in the presence of constant parameter uncertainties and a single initial
state, is used to analyze a three-link planar robot modeling a Powered Lower
Limb Orthosis and its user. Given the nature of the mappings relating the state
and parameters of the system with the inputs, and outputs describing the
trajectories of its Center of Mass, reachable sets for their respective spaces
can be obtained relying on the sensitivities of the nonlinear closed-loop
dynamics in the state space. These over-approximations are used to evaluate the
worst-case performances of a finite time horizon linear-quadratic regulator
(LQR) for controlling the ascending phase of the Sit-To-Stand movement.
| [
{
"created": "Mon, 28 May 2018 21:27:04 GMT",
"version": "v1"
}
] | 2020-11-26 | [
[
"Narvaez-Aroche",
"Octavio",
""
],
[
"Meyer",
"Pierre-Jean",
""
],
[
"Arcak",
"Murat",
""
],
[
"Packard",
"Andrew",
""
]
] | A sensitivity-based approach for computing over-approximations of reachable sets, in the presence of constant parameter uncertainties and a single initial state, is used to analyze a three-link planar robot modeling a Powered Lower Limb Orthosis and its user. Given the nature of the mappings relating the state and parameters of the system with the inputs, and outputs describing the trajectories of its Center of Mass, reachable sets for their respective spaces can be obtained relying on the sensitivities of the nonlinear closed-loop dynamics in the state space. These over-approximations are used to evaluate the worst-case performances of a finite time horizon linear-quadratic regulator (LQR) for controlling the ascending phase of the Sit-To-Stand movement. |
1408.2157 | Tobias Christiani | Tobias Christiani and Rasmus Pagh | Generating k-independent variables in constant time | Accepted to The 55th Annual Symposium on Foundations of Computer
Science (FOCS 2014). Copyright IEEE | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The generation of pseudorandom elements over finite fields is fundamental to
the time, space and randomness complexity of randomized algorithms and data
structures. We consider the problem of generating $k$-independent random values
over a finite field $\mathbb{F}$ in a word RAM model equipped with constant
time addition and multiplication in $\mathbb{F}$, and present the first
nontrivial construction of a generator that outputs each value in constant
time, not dependent on $k$. Our generator has period length
$|\mathbb{F}|\,\mbox{poly} \log k$ and uses $k\,\mbox{poly}(\log k) \log
|\mathbb{F}|$ bits of space, which is optimal up to a $\mbox{poly} \log k$
factor. We are able to bypass Siegel's lower bound on the time-space tradeoff
for $k$-independent functions by a restriction to sequential evaluation.
| [
{
"created": "Sat, 9 Aug 2014 21:42:05 GMT",
"version": "v1"
}
] | 2014-08-12 | [
[
"Christiani",
"Tobias",
""
],
[
"Pagh",
"Rasmus",
""
]
] | The generation of pseudorandom elements over finite fields is fundamental to the time, space and randomness complexity of randomized algorithms and data structures. We consider the problem of generating $k$-independent random values over a finite field $\mathbb{F}$ in a word RAM model equipped with constant time addition and multiplication in $\mathbb{F}$, and present the first nontrivial construction of a generator that outputs each value in constant time, not dependent on $k$. Our generator has period length $|\mathbb{F}|\,\mbox{poly} \log k$ and uses $k\,\mbox{poly}(\log k) \log |\mathbb{F}|$ bits of space, which is optimal up to a $\mbox{poly} \log k$ factor. We are able to bypass Siegel's lower bound on the time-space tradeoff for $k$-independent functions by a restriction to sequential evaluation. |
2202.01451 | Nikhil Singh | Nikhil Singh and Anupam Saxena | Topology Optimization with Tetra-kai-decahedra and Spheroidal Masks | null | null | null | null | cs.CE | http://creativecommons.org/licenses/by-nc-sa/4.0/ | A novel meshing scheme, based on regular tetra-kai-decahedron (also referred
to as truncated octahedron) cells, is presented for use in spatial topology
optimization. A tetra-kai-decahedron mesh ensures face connectivity between
elements thereby eliminating singular solutions from the solution space.
Various other benefits of implementing the said mesh are also highlighted, and
the corresponding finite element is introduced. Material mask overlay strategy
or MMOS, a feature based method for topology optimization is extended for use
in 3-dimensions (MMOS-3D) via the aforementioned finite element and spheroidal
negative masks. Formulation for density computation and sensitivity analysis
for gradient based optimization is developed. Examples on traditional
structural topology optimization problems are presented with detailed
discussion on efficacy of the proposed approach.
| [
{
"created": "Thu, 3 Feb 2022 08:03:46 GMT",
"version": "v1"
}
] | 2022-02-04 | [
[
"Singh",
"Nikhil",
""
],
[
"Saxena",
"Anupam",
""
]
] | A novel meshing scheme, based on regular tetra-kai-decahedron (also referred to as truncated octahedron) cells, is presented for use in spatial topology optimization. A tetra-kai-decahedron mesh ensures face connectivity between elements thereby eliminating singular solutions from the solution space. Various other benefits of implementing the said mesh are also highlighted, and the corresponding finite element is introduced. Material mask overlay strategy or MMOS, a feature based method for topology optimization is extended for use in 3-dimensions (MMOS-3D) via the aforementioned finite element and spheroidal negative masks. Formulation for density computation and sensitivity analysis for gradient based optimization is developed. Examples on traditional structural topology optimization problems are presented with detailed discussion on efficacy of the proposed approach. |
1802.02556 | Huan Li | Huan Li, Richard Peng, Liren Shan, Yuhao Yi, Zhongzhi Zhang | Current Flow Group Closeness Centrality for Complex Networks | 31 pages, 4 figures | WWW'2019 | 10.1145/3308558.3313490 | null | cs.DS cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Current flow closeness centrality (CFCC) has a better discriminating ability
than the ordinary closeness centrality based on shortest paths. In this paper,
we extend this notion to a group of vertices in a weighted graph, and then
study the problem of finding a subset $S$ of $k$ vertices to maximize its CFCC
$C(S)$, both theoretically and experimentally. We show that the problem is
NP-hard, but propose two greedy algorithms for minimizing the reciprocal of
$C(S)$ with provable guarantees using the monotonicity and supermodularity. The
first is a deterministic algorithm with an approximation factor
$(1-\frac{k}{k-1}\cdot\frac{1}{e})$ and cubic running time; while the second is
a randomized algorithm with a
$(1-\frac{k}{k-1}\cdot\frac{1}{e}-\epsilon)$-approximation and nearly-linear
running time for any $\epsilon > 0$. Extensive experiments on model and real
networks demonstrate that our algorithms are effective and efficient, with the
second algorithm being scalable to massive networks with more than a million
vertices.
| [
{
"created": "Wed, 7 Feb 2018 18:30:40 GMT",
"version": "v1"
},
{
"created": "Sun, 11 Feb 2018 19:30:05 GMT",
"version": "v2"
}
] | 2019-02-25 | [
[
"Li",
"Huan",
""
],
[
"Peng",
"Richard",
""
],
[
"Shan",
"Liren",
""
],
[
"Yi",
"Yuhao",
""
],
[
"Zhang",
"Zhongzhi",
""
]
] | Current flow closeness centrality (CFCC) has a better discriminating ability than the ordinary closeness centrality based on shortest paths. In this paper, we extend this notion to a group of vertices in a weighted graph, and then study the problem of finding a subset $S$ of $k$ vertices to maximize its CFCC $C(S)$, both theoretically and experimentally. We show that the problem is NP-hard, but propose two greedy algorithms for minimizing the reciprocal of $C(S)$ with provable guarantees using the monotonicity and supermodularity. The first is a deterministic algorithm with an approximation factor $(1-\frac{k}{k-1}\cdot\frac{1}{e})$ and cubic running time; while the second is a randomized algorithm with a $(1-\frac{k}{k-1}\cdot\frac{1}{e}-\epsilon)$-approximation and nearly-linear running time for any $\epsilon > 0$. Extensive experiments on model and real networks demonstrate that our algorithms are effective and efficient, with the second algorithm being scalable to massive networks with more than a million vertices. |
2401.16841 | Eric M\"uller | Eric M\"uller, Moritz Althaus, Elias Arnold, Philipp Spilger,
Christian Pehle, Johannes Schemmel | jaxsnn: Event-driven Gradient Estimation for Analog Neuromorphic
Hardware | null | null | null | null | cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Traditional neuromorphic hardware architectures rely on event-driven
computation, where the asynchronous transmission of events, such as spikes,
triggers local computations within synapses and neurons. While machine learning
frameworks are commonly used for gradient-based training, their emphasis on
dense data structures poses challenges for processing asynchronous data such as
spike trains. This problem is particularly pronounced for typical tensor data
structures. In this context, we present a novel library (jaxsnn) built on top
of JAX that departs from conventional machine learning frameworks by providing
flexibility in the data structures used and the handling of time, while
maintaining Autograd functionality and composability. Our library facilitates
the simulation of spiking neural networks and gradient estimation, with a focus
on compatibility with time-continuous neuromorphic backends, such as the
BrainScaleS-2 system, during the forward pass. This approach opens avenues for
more efficient and flexible training of spiking neural networks, bridging the
gap between traditional neuromorphic architectures and contemporary machine
learning frameworks.
| [
{
"created": "Tue, 30 Jan 2024 09:27:13 GMT",
"version": "v1"
}
] | 2024-01-31 | [
[
"Müller",
"Eric",
""
],
[
"Althaus",
"Moritz",
""
],
[
"Arnold",
"Elias",
""
],
[
"Spilger",
"Philipp",
""
],
[
"Pehle",
"Christian",
""
],
[
"Schemmel",
"Johannes",
""
]
] | Traditional neuromorphic hardware architectures rely on event-driven computation, where the asynchronous transmission of events, such as spikes, triggers local computations within synapses and neurons. While machine learning frameworks are commonly used for gradient-based training, their emphasis on dense data structures poses challenges for processing asynchronous data such as spike trains. This problem is particularly pronounced for typical tensor data structures. In this context, we present a novel library (jaxsnn) built on top of JAX that departs from conventional machine learning frameworks by providing flexibility in the data structures used and the handling of time, while maintaining Autograd functionality and composability. Our library facilitates the simulation of spiking neural networks and gradient estimation, with a focus on compatibility with time-continuous neuromorphic backends, such as the BrainScaleS-2 system, during the forward pass. This approach opens avenues for more efficient and flexible training of spiking neural networks, bridging the gap between traditional neuromorphic architectures and contemporary machine learning frameworks. |
2101.11351 | Dario Stein | Dario Stein, Sam Staton | Compositional Semantics for Probabilistic Programs with Exact
Conditioning | 16 pages, 5 figures | null | null | null | cs.PL cs.AI cs.LO math.CT math.PR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We define a probabilistic programming language for Gaussian random variables
with a first-class exact conditioning construct. We give operational,
denotational and equational semantics for this language, establishing
convenient properties like exchangeability of conditions. Conditioning on
equality of continuous random variables is nontrivial, as the exact observation
may have probability zero; this is Borel's paradox. Using categorical
formulations of conditional probability, we show that the good properties of
our language are not particular to Gaussians, but can be derived from universal
properties, thus generalizing to wider settings. We define the Cond
construction, which internalizes conditioning as a morphism, providing general
compositional semantics for probabilistic programming with exact conditioning.
| [
{
"created": "Wed, 27 Jan 2021 12:31:18 GMT",
"version": "v1"
}
] | 2021-01-28 | [
[
"Stein",
"Dario",
""
],
[
"Staton",
"Sam",
""
]
] | We define a probabilistic programming language for Gaussian random variables with a first-class exact conditioning construct. We give operational, denotational and equational semantics for this language, establishing convenient properties like exchangeability of conditions. Conditioning on equality of continuous random variables is nontrivial, as the exact observation may have probability zero; this is Borel's paradox. Using categorical formulations of conditional probability, we show that the good properties of our language are not particular to Gaussians, but can be derived from universal properties, thus generalizing to wider settings. We define the Cond construction, which internalizes conditioning as a morphism, providing general compositional semantics for probabilistic programming with exact conditioning. |
2203.10157 | Jon\'a\v{s} Kulh\'anek | Jon\'a\v{s} Kulh\'anek and Erik Derner and Torsten Sattler and Robert
Babu\v{s}ka | ViewFormer: NeRF-free Neural Rendering from Few Images Using
Transformers | ECCV 2022 poster | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Novel view synthesis is a long-standing problem. In this work, we consider a
variant of the problem where we are given only a few context views sparsely
covering a scene or an object. The goal is to predict novel viewpoints in the
scene, which requires learning priors. The current state of the art is based on
Neural Radiance Field (NeRF), and while achieving impressive results, the
methods suffer from long training times as they require evaluating millions of
3D point samples via a neural network for each image. We propose a 2D-only
method that maps multiple context views and a query pose to a new image in a
single pass of a neural network. Our model uses a two-stage architecture
consisting of a codebook and a transformer model. The codebook is used to embed
individual images into a smaller latent space, and the transformer solves the
view synthesis task in this more compact space. To train our model efficiently,
we introduce a novel branching attention mechanism that allows us to use the
same model not only for neural rendering but also for camera pose estimation.
Experimental results on real-world scenes show that our approach is competitive
compared to NeRF-based methods while not reasoning explicitly in 3D, and it is
faster to train.
| [
{
"created": "Fri, 18 Mar 2022 21:08:23 GMT",
"version": "v1"
},
{
"created": "Thu, 21 Jul 2022 06:03:51 GMT",
"version": "v2"
}
] | 2022-07-22 | [
[
"Kulhánek",
"Jonáš",
""
],
[
"Derner",
"Erik",
""
],
[
"Sattler",
"Torsten",
""
],
[
"Babuška",
"Robert",
""
]
] | Novel view synthesis is a long-standing problem. In this work, we consider a variant of the problem where we are given only a few context views sparsely covering a scene or an object. The goal is to predict novel viewpoints in the scene, which requires learning priors. The current state of the art is based on Neural Radiance Field (NeRF), and while achieving impressive results, the methods suffer from long training times as they require evaluating millions of 3D point samples via a neural network for each image. We propose a 2D-only method that maps multiple context views and a query pose to a new image in a single pass of a neural network. Our model uses a two-stage architecture consisting of a codebook and a transformer model. The codebook is used to embed individual images into a smaller latent space, and the transformer solves the view synthesis task in this more compact space. To train our model efficiently, we introduce a novel branching attention mechanism that allows us to use the same model not only for neural rendering but also for camera pose estimation. Experimental results on real-world scenes show that our approach is competitive compared to NeRF-based methods while not reasoning explicitly in 3D, and it is faster to train. |
2102.04897 | Zeyu Zheng | Zeyu Zheng, Vivek Veeriah, Risto Vuorio, Richard Lewis, Satinder Singh | Learning State Representations from Random Deep Action-conditional
Predictions | NeurIPS 2021 | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Our main contribution in this work is an empirical finding that random
General Value Functions (GVFs), i.e., deep action-conditional predictions --
random both in what feature of observations they predict as well as in the
sequence of actions the predictions are conditioned upon -- form good auxiliary
tasks for reinforcement learning (RL) problems. In particular, we show that
random deep action-conditional predictions when used as auxiliary tasks yield
state representations that produce control performance competitive with
state-of-the-art hand-crafted auxiliary tasks like value prediction, pixel
control, and CURL in both Atari and DeepMind Lab tasks. In another set of
experiments we stop the gradients from the RL part of the network to the state
representation learning part of the network and show, perhaps surprisingly,
that the auxiliary tasks alone are sufficient to learn state representations
good enough to outperform an end-to-end trained actor-critic baseline. We
opensourced our code at https://github.com/Hwhitetooth/random_gvfs.
| [
{
"created": "Tue, 9 Feb 2021 15:53:22 GMT",
"version": "v1"
},
{
"created": "Fri, 5 Nov 2021 18:19:02 GMT",
"version": "v2"
}
] | 2021-11-09 | [
[
"Zheng",
"Zeyu",
""
],
[
"Veeriah",
"Vivek",
""
],
[
"Vuorio",
"Risto",
""
],
[
"Lewis",
"Richard",
""
],
[
"Singh",
"Satinder",
""
]
] | Our main contribution in this work is an empirical finding that random General Value Functions (GVFs), i.e., deep action-conditional predictions -- random both in what feature of observations they predict as well as in the sequence of actions the predictions are conditioned upon -- form good auxiliary tasks for reinforcement learning (RL) problems. In particular, we show that random deep action-conditional predictions when used as auxiliary tasks yield state representations that produce control performance competitive with state-of-the-art hand-crafted auxiliary tasks like value prediction, pixel control, and CURL in both Atari and DeepMind Lab tasks. In another set of experiments we stop the gradients from the RL part of the network to the state representation learning part of the network and show, perhaps surprisingly, that the auxiliary tasks alone are sufficient to learn state representations good enough to outperform an end-to-end trained actor-critic baseline. We opensourced our code at https://github.com/Hwhitetooth/random_gvfs. |
2311.14678 | Kate Saunders | Kate R. Saunders, Owen Forbes, Jess K. Hopf, Charlotte R. Patterson,
Sarah A. Vollert, Kaitlyn Brown, Raiha Browning, Miguel Canizares, Richard S.
Cottrell, Lanxi Li, Catherine J.S. Kim, Tace P. Stewart, Connie Susilawati,
Xiang Y. Zhao, Kate J. Helmstedt | Data-driven recommendations for enhancing real-time natural hazard
warnings, communication, and response | null | null | null | null | cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The effectiveness and adequacy of natural hazard warnings hinge on the
availability of data and its transformation into actionable knowledge for the
public. Real-time warning communication and emergency response therefore need
to be evaluated from a data science perspective. However, there are currently
gaps between established data science best practices and their application in
supporting natural hazard warnings. This Perspective reviews existing
data-driven approaches that underpin real-time warning communication and
emergency response, highlighting limitations in hazard and impact forecasts.
Four main themes for enhancing warnings are emphasised: (i) applying
best-practice principles in visualising hazard forecasts, (ii) data
opportunities for more effective impact forecasts, (iii) utilising data for
more localised forecasts, and (iv) improving data-driven decision-making using
uncertainty. Motivating examples are provided from the extensive flooding
experienced in Australia in 2022. This Perspective shows the capacity for
improving the efficacy of natural hazard warnings using data science, and the
collaborative potential between the data science and natural hazards
communities.
| [
{
"created": "Wed, 1 Nov 2023 02:59:45 GMT",
"version": "v1"
}
] | 2023-11-28 | [
[
"Saunders",
"Kate R.",
""
],
[
"Forbes",
"Owen",
""
],
[
"Hopf",
"Jess K.",
""
],
[
"Patterson",
"Charlotte R.",
""
],
[
"Vollert",
"Sarah A.",
""
],
[
"Brown",
"Kaitlyn",
""
],
[
"Browning",
"Raiha",
""
],
[
"Canizares",
"Miguel",
""
],
[
"Cottrell",
"Richard S.",
""
],
[
"Li",
"Lanxi",
""
],
[
"Kim",
"Catherine J. S.",
""
],
[
"Stewart",
"Tace P.",
""
],
[
"Susilawati",
"Connie",
""
],
[
"Zhao",
"Xiang Y.",
""
],
[
"Helmstedt",
"Kate J.",
""
]
] | The effectiveness and adequacy of natural hazard warnings hinge on the availability of data and its transformation into actionable knowledge for the public. Real-time warning communication and emergency response therefore need to be evaluated from a data science perspective. However, there are currently gaps between established data science best practices and their application in supporting natural hazard warnings. This Perspective reviews existing data-driven approaches that underpin real-time warning communication and emergency response, highlighting limitations in hazard and impact forecasts. Four main themes for enhancing warnings are emphasised: (i) applying best-practice principles in visualising hazard forecasts, (ii) data opportunities for more effective impact forecasts, (iii) utilising data for more localised forecasts, and (iv) improving data-driven decision-making using uncertainty. Motivating examples are provided from the extensive flooding experienced in Australia in 2022. This Perspective shows the capacity for improving the efficacy of natural hazard warnings using data science, and the collaborative potential between the data science and natural hazards communities. |
2107.13681 | David Doty | Ho-Lin Chen, David Doty, Wyatt Reeves, David Soloveichik | Rate-Independent Computation in Continuous Chemical Reaction Networks | accepted to JACM (https://doi.org/10.1145/3590776); preliminary
version appeared in ITCS 2014: http://doi.org/10.1145/2554797.2554827 | null | 10.1145/3590776 | null | cs.ET q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Coupled chemical interactions in a well-mixed solution are commonly
formalized as chemical reaction networks (CRNs). However, despite the
widespread use of CRNs in the natural sciences, the range of computational
behaviors exhibited by CRNs is not well understood. Here we study the following
problem: what functions $f:\mathbb{R}^k \to \mathbb{R}$ can be computed by a
CRN, in which the CRN eventually produces the correct amount of the "output"
molecule, no matter the rate at which reactions proceed? This captures a
previously unexplored, but very natural class of computations: for example, the
reaction $X_1 + X_2 \to Y$ can be thought to compute the function $y =
\min(x_1, x_2)$. Such a CRN is robust in the sense that it is correct no matter
the kinetic model of chemistry, so long as it respects the stoichiometric
constraints.
We develop a reachability relation based on "what could happen" if reaction
rates can vary arbitrarily over time. We define *stable computation*
analogously to probability 1 computation in distributed computing, and connect
it with a seemingly stronger notion of rate-independent computation based on
convergence under a wide class of generalized rate laws. We also consider the
"dual-rail representation" that can represent negative values as the difference
of two concentrations and allows the composition of CRN modules. We prove that
a function is rate-independently computable if and only if it is piecewise
linear (with rational coefficients) and continuous (dual-rail representation),
or non-negative with discontinuities occurring only when some inputs switch
from zero to positive (direct representation). The many contexts where
continuous piecewise linear functions are powerful targets for implementation,
combined with the systematic construction we develop for computing these
functions, demonstrate the potential of rate-independent chemical computation.
| [
{
"created": "Thu, 29 Jul 2021 00:37:07 GMT",
"version": "v1"
},
{
"created": "Wed, 12 Oct 2022 16:57:27 GMT",
"version": "v2"
},
{
"created": "Fri, 7 Apr 2023 21:30:44 GMT",
"version": "v3"
}
] | 2023-04-11 | [
[
"Chen",
"Ho-Lin",
""
],
[
"Doty",
"David",
""
],
[
"Reeves",
"Wyatt",
""
],
[
"Soloveichik",
"David",
""
]
] | Coupled chemical interactions in a well-mixed solution are commonly formalized as chemical reaction networks (CRNs). However, despite the widespread use of CRNs in the natural sciences, the range of computational behaviors exhibited by CRNs is not well understood. Here we study the following problem: what functions $f:\mathbb{R}^k \to \mathbb{R}$ can be computed by a CRN, in which the CRN eventually produces the correct amount of the "output" molecule, no matter the rate at which reactions proceed? This captures a previously unexplored, but very natural class of computations: for example, the reaction $X_1 + X_2 \to Y$ can be thought to compute the function $y = \min(x_1, x_2)$. Such a CRN is robust in the sense that it is correct no matter the kinetic model of chemistry, so long as it respects the stoichiometric constraints. We develop a reachability relation based on "what could happen" if reaction rates can vary arbitrarily over time. We define *stable computation* analogously to probability 1 computation in distributed computing, and connect it with a seemingly stronger notion of rate-independent computation based on convergence under a wide class of generalized rate laws. We also consider the "dual-rail representation" that can represent negative values as the difference of two concentrations and allows the composition of CRN modules. We prove that a function is rate-independently computable if and only if it is piecewise linear (with rational coefficients) and continuous (dual-rail representation), or non-negative with discontinuities occurring only when some inputs switch from zero to positive (direct representation). The many contexts where continuous piecewise linear functions are powerful targets for implementation, combined with the systematic construction we develop for computing these functions, demonstrate the potential of rate-independent chemical computation. |
1611.03056 | Amr Abed | Amr S. Abed, Charles Clancy, David S. Levy | Intrusion Detection System for Applications using Linux Containers | The final publication is available at
http://link.springer.com/chapter/10.1007%2F978-3-319-24858-5_8. arXiv admin
note: substantial text overlap with arXiv:1611.03053 | STM 2015. LNCS, vol. 9331, pp. 123-135. Springer, Heidelberg
(2015) | 10.1007/978-3-319-24858-5_8 | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Linux containers are gaining increasing traction in both individual and
industrial use, and as these containers get integrated into mission-critical
systems, real-time detection of malicious cyber attacks becomes a critical
operational requirement. This paper introduces a real-time host-based intrusion
detection system that can be used to passively detect malfeasance against
applications within Linux containers running in a standalone or in a cloud
multi-tenancy environment. The demonstrated intrusion detection system uses
bags of system calls monitored from the host kernel for learning the behavior
of an application running within a Linux container and determining anomalous
container behavior. Performance of the approach using a database application
was measured and results are discussed.
| [
{
"created": "Wed, 9 Nov 2016 19:37:55 GMT",
"version": "v1"
}
] | 2017-01-05 | [
[
"Abed",
"Amr S.",
""
],
[
"Clancy",
"Charles",
""
],
[
"Levy",
"David S.",
""
]
] | Linux containers are gaining increasing traction in both individual and industrial use, and as these containers get integrated into mission-critical systems, real-time detection of malicious cyber attacks becomes a critical operational requirement. This paper introduces a real-time host-based intrusion detection system that can be used to passively detect malfeasance against applications within Linux containers running in a standalone or in a cloud multi-tenancy environment. The demonstrated intrusion detection system uses bags of system calls monitored from the host kernel for learning the behavior of an application running within a Linux container and determining anomalous container behavior. Performance of the approach using a database application was measured and results are discussed. |
2106.12307 | Mats Richter | Mats L. Richter, Julius Sch\"oning, Anna Wiedenroth, Ulf Krumnack | Should You Go Deeper? Optimizing Convolutional Neural Network
Architectures without Training by Receptive Field Analysis | Preprint | null | 10.1109/ICMLA52953.2021.00159 | null | cs.LG cs.AI cs.NE stat.ML | http://creativecommons.org/licenses/by/4.0/ | When optimizing convolutional neural networks (CNN) for a specific
image-based task, specialists commonly overshoot the number of convolutional
layers in their designs. By implication, these CNNs are unnecessarily resource
intensive to train and deploy, with diminishing beneficial effects on the
predictive performance.
The features a convolutional layer can process are strictly limited by its
receptive field. By layer-wise analyzing the size of the receptive fields, we
can reliably predict sequences of layers that will not contribute qualitatively
to the test accuracy in the given CNN architecture. Based on this analysis, we
propose design strategies based on a so-called border layer. This layer allows
us to identify unproductive convolutional layers and hence to resolve these
inefficiencies and optimize the explainability and the computational performance
of CNNs. Since neither the strategies nor the analysis requires training of the
actual model, these insights allow for a very efficient design process of CNN
architectures, which might be automated in the future.
| [
{
"created": "Wed, 23 Jun 2021 11:04:16 GMT",
"version": "v1"
},
{
"created": "Tue, 5 Oct 2021 19:19:25 GMT",
"version": "v2"
}
] | 2022-06-23 | [
[
"Richter",
"Mats L.",
""
],
[
"Schöning",
"Julius",
""
],
[
"Wiedenroth",
"Anna",
""
],
[
"Krumnack",
"Ulf",
""
]
] | When optimizing convolutional neural networks (CNN) for a specific image-based task, specialists commonly overshoot the number of convolutional layers in their designs. By implication, these CNNs are unnecessarily resource intensive to train and deploy, with diminishing beneficial effects on the predictive performance. The features a convolutional layer can process are strictly limited by its receptive field. By layer-wise analyzing the size of the receptive fields, we can reliably predict sequences of layers that will not contribute qualitatively to the test accuracy in the given CNN architecture. Based on this analysis, we propose design strategies based on a so-called border layer. This layer allows us to identify unproductive convolutional layers and hence to resolve these inefficiencies and optimize the explainability and the computational performance of CNNs. Since neither the strategies nor the analysis requires training of the actual model, these insights allow for a very efficient design process of CNN architectures, which might be automated in the future. |
2205.12471 | Kun Qian | Yukun Huang, Kun Qian, Zhou Yu | Learning a Better Initialization for Soft Prompts via Meta-Learning | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Prompt tuning (PT) is an effective approach to adapting pre-trained language
models to downstream tasks. Without a good initialization, prompt tuning
doesn't perform well under few-shot settings. So pre-trained prompt tuning
(PPT) is proposed to initialize prompts by leveraging pre-training data. We
propose MetaPT (Meta-learned Prompt Tuning) to further improve PPT's
initialization by considering latent structure within the pre-training data.
Specifically, we introduce the structure by first clustering pre-training data
into different auxiliary tasks with unsupervised methods. Then we use these
tasks to pre-train prompts with a meta-learning algorithm. Such a process can
make prompts learn a better initialization by discovering commonalities among
these auxiliary tasks. We evaluate our method on seven downstream tasks. Our
MetaPT achieves better and more stable performance than the state-of-the-art
method.
| [
{
"created": "Wed, 25 May 2022 03:50:23 GMT",
"version": "v1"
}
] | 2022-05-26 | [
[
"Huang",
"Yukun",
""
],
[
"Qian",
"Kun",
""
],
[
"Yu",
"Zhou",
""
]
] | Prompt tuning (PT) is an effective approach to adapting pre-trained language models to downstream tasks. Without a good initialization, prompt tuning doesn't perform well under few-shot settings. So pre-trained prompt tuning (PPT) is proposed to initialize prompts by leveraging pre-training data. We propose MetaPT (Meta-learned Prompt Tuning) to further improve PPT's initialization by considering latent structure within the pre-training data. Specifically, we introduce the structure by first clustering pre-training data into different auxiliary tasks with unsupervised methods. Then we use these tasks to pre-train prompts with a meta-learning algorithm. Such a process can make prompts learn a better initialization by discovering commonalities among these auxiliary tasks. We evaluate our method on seven downstream tasks. Our MetaPT achieves better and more stable performance than the state-of-the-art method. |
1503.07903 | Safia Haloui | Safia Haloui | Codes from Jacobian surfaces | null | null | null | null | cs.IT math.AG math.IT math.NT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper is concerned with some Algebraic Geometry codes on Jacobians of
genus 2 curves. We derive a lower bound for the minimum distance of these codes
from an upper "Weil type" bound for the number of rational points on
irreducible (possibly singular or non-absolutely irreducible) curves lying on
an abelian surface over a finite field.
| [
{
"created": "Thu, 26 Mar 2015 21:12:08 GMT",
"version": "v1"
}
] | 2015-03-30 | [
[
"Haloui",
"Safia",
""
]
] | This paper is concerned with some Algebraic Geometry codes on Jacobians of genus 2 curves. We derive a lower bound for the minimum distance of these codes from an upper "Weil type" bound for the number of rational points on irreducible (possibly singular or non-absolutely irreducible) curves lying on an abelian surface over a finite field. |
2212.04638 | Aoyang Liu | Yansong Tang, Jinpeng Liu, Aoyang Liu, Bin Yang, Wenxun Dai, Yongming
Rao, Jiwen Lu, Jie Zhou, Xiu Li | FLAG3D: A 3D Fitness Activity Dataset with Language Instruction | Accepted to CVPR2023 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the continuously thriving popularity around the world, fitness activity
analytics has become an emerging research topic in computer vision. While a
variety of new tasks and algorithms have been proposed recently, there is a
growing hunger for data resources involving high-quality data, fine-grained
labels, and diverse environments. In this paper, we present FLAG3D, a
large-scale 3D fitness activity dataset with language instruction containing
180K sequences of 60 categories. FLAG3D features the following three aspects:
1) accurate and dense 3D human pose captured from an advanced MoCap system to
handle the complex activity and large movement, 2) detailed and professional
language instruction to describe how to perform a specific activity, 3)
versatile video resources from a high-tech MoCap system, rendering software,
and cost-effective smartphones in natural environments. Extensive experiments
and in-depth analysis show that FLAG3D contributes great research value for
various challenges, such as cross-domain human action recognition, dynamic
human mesh recovery, and language-guided human action generation. Our dataset
and source code are publicly available at https://andytang15.github.io/FLAG3D.
| [
{
"created": "Fri, 9 Dec 2022 02:33:33 GMT",
"version": "v1"
},
{
"created": "Wed, 19 Apr 2023 13:31:03 GMT",
"version": "v2"
}
] | 2023-04-20 | [
[
"Tang",
"Yansong",
""
],
[
"Liu",
"Jinpeng",
""
],
[
"Liu",
"Aoyang",
""
],
[
"Yang",
"Bin",
""
],
[
"Dai",
"Wenxun",
""
],
[
"Rao",
"Yongming",
""
],
[
"Lu",
"Jiwen",
""
],
[
"Zhou",
"Jie",
""
],
[
"Li",
"Xiu",
""
]
] | With the continuously thriving popularity around the world, fitness activity analytics has become an emerging research topic in computer vision. While a variety of new tasks and algorithms have been proposed recently, there is a growing hunger for data resources involving high-quality data, fine-grained labels, and diverse environments. In this paper, we present FLAG3D, a large-scale 3D fitness activity dataset with language instruction containing 180K sequences of 60 categories. FLAG3D features the following three aspects: 1) accurate and dense 3D human pose captured from an advanced MoCap system to handle the complex activity and large movement, 2) detailed and professional language instruction to describe how to perform a specific activity, 3) versatile video resources from a high-tech MoCap system, rendering software, and cost-effective smartphones in natural environments. Extensive experiments and in-depth analysis show that FLAG3D contributes great research value for various challenges, such as cross-domain human action recognition, dynamic human mesh recovery, and language-guided human action generation. Our dataset and source code are publicly available at https://andytang15.github.io/FLAG3D. |
1607.01838 | Chengcheng Wang | Chengcheng Wang, Yonggang Zhang, Bicheng Ying, and Ali H. Sayed | Coordinate-Descent Diffusion Learning by Networked Agents | Accepted for publication | null | 10.1109/TSP.2017.2757903 | null | cs.MA cs.DC cs.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work examines the mean-square error performance of diffusion stochastic
algorithms under a generalized coordinate-descent scheme. In this setting, the
adaptation step by each agent is limited to a random subset of the coordinates
of its stochastic gradient vector. The selection of coordinates varies randomly
from iteration to iteration and from agent to agent across the network. Such
schemes are useful in reducing computational complexity at each iteration in
power-intensive large data applications. They are also useful in modeling
situations where some partial gradient information may be missing at random.
Interestingly, the results show that the steady-state performance of the
learning strategy is not always degraded, while the convergence rate suffers
some degradation. The results provide yet another indication of the resilience
and robustness of adaptive distributed strategies.
| [
{
"created": "Wed, 6 Jul 2016 23:18:04 GMT",
"version": "v1"
},
{
"created": "Wed, 11 Oct 2017 01:38:44 GMT",
"version": "v2"
}
] | 2017-10-12 | [
[
"Wang",
"Chengcheng",
""
],
[
"Zhang",
"Yonggang",
""
],
[
"Ying",
"Bicheng",
""
],
[
"Sayed",
"Ali H.",
""
]
] | This work examines the mean-square error performance of diffusion stochastic algorithms under a generalized coordinate-descent scheme. In this setting, the adaptation step by each agent is limited to a random subset of the coordinates of its stochastic gradient vector. The selection of coordinates varies randomly from iteration to iteration and from agent to agent across the network. Such schemes are useful in reducing computational complexity at each iteration in power-intensive large data applications. They are also useful in modeling situations where some partial gradient information may be missing at random. Interestingly, the results show that the steady-state performance of the learning strategy is not always degraded, while the convergence rate suffers some degradation. The results provide yet another indication of the resilience and robustness of adaptive distributed strategies. |
2312.13295 | Christoph Gadermaier | Markus Kloimwieder, Christoph Gadermaier | A functional scripting interface to an object oriented C++ library | 10 pages, code highlighted in color | null | null | null | cs.PL | http://creativecommons.org/licenses/by/4.0/ | The object oriented programming paradigm is widely used in science and
engineering. Many open and commercial libraries are written in C++ and
increasingly provide bindings to Python, which is much easier to learn, but
still partly encourages the use of object oriented programming. However,
scientific ideas are much more directly and meaningfully expressed in the
purely functional programming paradigm. Here, we take a best practice example,
CERN's Python binding for its ROOT library, designed to handle the enormous
amounts of data generated by the world's largest particle accelerator, and
translate a simple segment of its tutorial into Clojure, a functional language
from the Lisp family. The code examples demonstrate how a purely functional
language straightforwardly expresses scientific ideas. Subsequently, we develop
a compiled Lisp-C++ interoperation layer to access the ROOT library exclusively
via functional code. To preserve the expressivity of the Lisp code, the type
hints necessary for C++ code generation are stored in a separate file. The
interop system presented here is a generic framework that, when provided with a
suitable file of type hints, facilitates access to methods of arbitrary C++
libraries and platforms like real-time microcontrollers.
| [
{
"created": "Sun, 17 Dec 2023 12:18:26 GMT",
"version": "v1"
}
] | 2023-12-22 | [
[
"Kloimwieder",
"Markus",
""
],
[
"Gadermaier",
"Christoph",
""
]
] | The object oriented programming paradigm is widely used in science and engineering. Many open and commercial libraries are written in C++ and increasingly provide bindings to Python, which is much easier to learn, but still partly encourages the use of object oriented programming. However, scientific ideas are much more directly and meaningfully expressed in the purely functional programming paradigm. Here, we take a best practice example, CERN's Python binding for its ROOT library, designed to handle the enormous amounts of data generated by the world's largest particle accelerator, and translate a simple segment of its tutorial into Clojure, a functional language from the Lisp family. The code examples demonstrate how a purely functional language straightforwardly expresses scientific ideas. Subsequently, we develop a compiled Lisp-C++ interoperation layer to access the ROOT library exclusively via functional code. To preserve the expressivity of the Lisp code, the type hints necessary for C++ code generation are stored in a separate file. The interop system presented here is a generic framework that, when provided with a suitable file of type hints, facilitates access to methods of arbitrary C++ libraries and platforms like real-time microcontrollers.
2108.01330 | Dalia Papuc | M.K. Aguilera, N. Ben-David, R. Guerraoui, D. Papuc, A. Xygkis, I.
Zablotchi | Frugal Byzantine Computing | This paper is an extended version of the DISC 2021 paper | null | null | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Traditional techniques for handling Byzantine failures are expensive: digital
signatures are too costly, while using $3f{+}1$ replicas is uneconomical ($f$
denotes the maximum number of Byzantine processes). We seek algorithms that
reduce the number of replicas to $2f{+}1$ and minimize the number of
signatures. While the first goal can be achieved in the message-and-memory
model, accomplishing the second goal simultaneously is challenging. We first
address this challenge for the problem of broadcasting messages reliably. We
consider two variants of this problem, Consistent Broadcast and Reliable
Broadcast, typically considered very close. Perhaps surprisingly, we establish
a separation between them in terms of signatures required. In particular, we
show that Consistent Broadcast requires at least 1 signature in some execution,
while Reliable Broadcast requires $O(n)$ signatures in some execution. We
present matching upper bounds for both primitives within constant factors. We
then turn to the problem of consensus and argue that this separation matters
for solving consensus with Byzantine failures: we present a practical consensus
algorithm that uses Consistent Broadcast as its main communication primitive.
This algorithm works for $n=2f{+}1$ and avoids signatures in the common-case --
properties that have not been simultaneously achieved previously. Overall, our
work approaches Byzantine computing in a frugal manner and motivates the use of
Consistent Broadcast -- rather than Reliable Broadcast -- as a key primitive
for reaching agreement.
| [
{
"created": "Tue, 3 Aug 2021 07:14:31 GMT",
"version": "v1"
}
] | 2021-08-04 | [
[
"Aguilera",
"M. K.",
""
],
[
"Ben-David",
"N.",
""
],
[
"Guerraoui",
"R.",
""
],
[
"Papuc",
"D.",
""
],
[
"Xygkis",
"A.",
""
],
[
"Zablotchi",
"I.",
""
]
] | Traditional techniques for handling Byzantine failures are expensive: digital signatures are too costly, while using $3f{+}1$ replicas is uneconomical ($f$ denotes the maximum number of Byzantine processes). We seek algorithms that reduce the number of replicas to $2f{+}1$ and minimize the number of signatures. While the first goal can be achieved in the message-and-memory model, accomplishing the second goal simultaneously is challenging. We first address this challenge for the problem of broadcasting messages reliably. We consider two variants of this problem, Consistent Broadcast and Reliable Broadcast, typically considered very close. Perhaps surprisingly, we establish a separation between them in terms of signatures required. In particular, we show that Consistent Broadcast requires at least 1 signature in some execution, while Reliable Broadcast requires $O(n)$ signatures in some execution. We present matching upper bounds for both primitives within constant factors. We then turn to the problem of consensus and argue that this separation matters for solving consensus with Byzantine failures: we present a practical consensus algorithm that uses Consistent Broadcast as its main communication primitive. This algorithm works for $n=2f{+}1$ and avoids signatures in the common-case -- properties that have not been simultaneously achieved previously. Overall, our work approaches Byzantine computing in a frugal manner and motivates the use of Consistent Broadcast -- rather than Reliable Broadcast -- as a key primitive for reaching agreement. |
2203.06457 | Minsoo Lee | Minsoo Lee, Chaeyeon Chung, Hojun Cho, Minjung Kim, Sanghun Jung,
Jaegul Choo, and Minhyuk Sung | 3D-GIF: 3D-Controllable Object Generation via Implicit Factorized
Representations | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | While NeRF-based 3D-aware image generation methods enable viewpoint control,
limitations remain for their adoption in various 3D applications. Due to their
view-dependent and light-entangled volume representation, the 3D geometry is of
unrealistic quality, and the color must be re-rendered for every desired
viewpoint. To broaden the 3D applicability from 3D-aware image generation to
3D-controllable object generation, we propose factorized representations that
are view-independent and light-disentangled, together with training
schemes with randomly sampled light conditions. We demonstrate the superiority
of our method by visualizing factorized representations, re-lighted images, and
albedo-textured meshes. In addition, we show that our approach improves the
quality of the generated geometry via visualization and quantitative
comparison. To the best of our knowledge, this is the first work that extracts
albedo-textured meshes with unposed 2D images without any additional labels or
assumptions.
| [
{
"created": "Sat, 12 Mar 2022 15:23:17 GMT",
"version": "v1"
}
] | 2022-03-15 | [
[
"Lee",
"Minsoo",
""
],
[
"Chung",
"Chaeyeon",
""
],
[
"Cho",
"Hojun",
""
],
[
"Kim",
"Minjung",
""
],
[
"Jung",
"Sanghun",
""
],
[
"Choo",
"Jaegul",
""
],
[
"Sung",
"Minhyuk",
""
]
] | While NeRF-based 3D-aware image generation methods enable viewpoint control, limitations remain for their adoption in various 3D applications. Due to their view-dependent and light-entangled volume representation, the 3D geometry is of unrealistic quality, and the color must be re-rendered for every desired viewpoint. To broaden the 3D applicability from 3D-aware image generation to 3D-controllable object generation, we propose factorized representations that are view-independent and light-disentangled, together with training schemes with randomly sampled light conditions. We demonstrate the superiority of our method by visualizing factorized representations, re-lighted images, and albedo-textured meshes. In addition, we show that our approach improves the quality of the generated geometry via visualization and quantitative comparison. To the best of our knowledge, this is the first work that extracts albedo-textured meshes with unposed 2D images without any additional labels or assumptions.
2309.16269 | Youbin Jeon | Youbin Jeon and Sangheon Pack | Hierarchical Network Data Analytics Framework for B5G Network
Automation: Design and Implementation | 7 pages | null | null | null | cs.NI cs.LG cs.PF | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | 5G introduced modularized network functions (NFs) to support emerging
services in a more flexible and elastic manner. To mitigate the complexity in
such modularized NF management, automated network operation and management are
indispensable, and thus the 3rd generation partnership project (3GPP) has
introduced a network data analytics function (NWDAF). However, a conventional
NWDAF needs to conduct both inference and training tasks, and thus it is
difficult to provide the analytics results to NFs in a timely manner for an
increased number of analytics requests. In this article, we propose a
hierarchical network data analytics framework (H-NDAF) where inference tasks
are distributed to multiple leaf NWDAFs and training tasks are conducted at the
root NWDAF. Extensive simulation results using open-source software (i.e.,
free5GC) demonstrate that H-NDAF can provide sufficiently accurate analytics
and faster analytics provision time compared to the conventional NWDAF.
| [
{
"created": "Thu, 28 Sep 2023 09:04:58 GMT",
"version": "v1"
}
] | 2023-09-29 | [
[
"Jeon",
"Youbin",
""
],
[
"Pack",
"Sangheon",
""
]
] | 5G introduced modularized network functions (NFs) to support emerging services in a more flexible and elastic manner. To mitigate the complexity in such modularized NF management, automated network operation and management are indispensable, and thus the 3rd generation partnership project (3GPP) has introduced a network data analytics function (NWDAF). However, a conventional NWDAF needs to conduct both inference and training tasks, and thus it is difficult to provide the analytics results to NFs in a timely manner for an increased number of analytics requests. In this article, we propose a hierarchical network data analytics framework (H-NDAF) where inference tasks are distributed to multiple leaf NWDAFs and training tasks are conducted at the root NWDAF. Extensive simulation results using open-source software (i.e., free5GC) demonstrate that H-NDAF can provide sufficiently accurate analytics and faster analytics provision time compared to the conventional NWDAF. |
1912.13490 | Giovanni Granato | Giovanni Granato and Gianluca Baldassarre | A Neurocomputational Account of Flexible Goal-directed Cognition and
Consciousness: The Goal-Aligning Representation Internal Manipulation Theory
(GARIM) | null | null | null | null | cs.AI cs.LG cs.NE q-bio.NC stat.ML | http://creativecommons.org/licenses/by/4.0/ | Goal-directed manipulation of representations is a key element of human
flexible behaviour, while consciousness is often related to several aspects of
higher-order cognition and human flexibility. Currently these two phenomena are
only partially integrated (e.g., see Neurorepresentationalism) and this (a)
limits our understanding of neuro-computational processes that lead conscious
states to produce flexible goal-directed behaviours, (b) prevents a
computational formalisation of conscious goal-directed manipulations of
representations occurring in the brain, and (c) inhibits the exploitation of
this knowledge for modelling and technological purposes. Addressing these
issues, here we extend our `three-component theory of flexible cognition' by
proposing the `Goal-Aligning Representations Internal Manipulation' (GARIM)
theory of conscious and flexible goal-directed cognition. The central idea of
the theory is that conscious states support the active manipulation of
goal-relevant internal representations (e.g., of world states, objects, and
action sequences) to make them more aligned with the pursued goals. This leads
to the generation of the knowledge which is necessary to face novel
situations/goals, thus increasing the flexibility of goal-directed behaviours.
The GARIM theory integrates key aspects of the main theories of consciousness
into the functional neuro-computational framework of goal-directed behaviour.
Moreover, it takes into account the subjective sensation of agency that
accompanies conscious goal-directed processes (`GARIM agency'). The proposal
also has implications for experimental studies on consciousness and clinical
aspects of conscious goal-directed behaviour. Finally, the GARIM theory benefits
technological fields such as autonomous robotics and machine learning (e.g.,
the manipulation process may describe the operations performed by systems based
on transformers).
| [
{
"created": "Tue, 31 Dec 2019 18:45:33 GMT",
"version": "v1"
},
{
"created": "Mon, 5 Dec 2022 14:37:15 GMT",
"version": "v2"
},
{
"created": "Wed, 25 Oct 2023 07:56:24 GMT",
"version": "v3"
},
{
"created": "Fri, 27 Oct 2023 12:08:01 GMT",
"version": "v4"
}
] | 2023-10-30 | [
[
"Granato",
"Giovanni",
""
],
[
"Baldassarre",
"Gianluca",
""
]
] | Goal-directed manipulation of representations is a key element of human flexible behaviour, while consciousness is often related to several aspects of higher-order cognition and human flexibility. Currently these two phenomena are only partially integrated (e.g., see Neurorepresentationalism) and this (a) limits our understanding of neuro-computational processes that lead conscious states to produce flexible goal-directed behaviours, (b) prevents a computational formalisation of conscious goal-directed manipulations of representations occurring in the brain, and (c) inhibits the exploitation of this knowledge for modelling and technological purposes. Addressing these issues, here we extend our `three-component theory of flexible cognition' by proposing the `Goal-Aligning Representations Internal Manipulation' (GARIM) theory of conscious and flexible goal-directed cognition. The central idea of the theory is that conscious states support the active manipulation of goal-relevant internal representations (e.g., of world states, objects, and action sequences) to make them more aligned with the pursued goals. This leads to the generation of the knowledge which is necessary to face novel situations/goals, thus increasing the flexibility of goal-directed behaviours. The GARIM theory integrates key aspects of the main theories of consciousness into the functional neuro-computational framework of goal-directed behaviour. Moreover, it takes into account the subjective sensation of agency that accompanies conscious goal-directed processes (`GARIM agency'). The proposal also has implications for experimental studies on consciousness and clinical aspects of conscious goal-directed behaviour. Finally, the GARIM theory benefits technological fields such as autonomous robotics and machine learning (e.g., the manipulation process may describe the operations performed by systems based on transformers).
2209.05274 | Quan Zhou | Quan Zhou, Jakub Marecek, Robert N. Shorten | Fairness in Forecasting of Observations of Linear Dynamical Systems | Journal version of Zhou et al. [arXiv:2006.07315, AAAI 2021] | Journal of Artificial Intelligence Research, Volume 76, 2023 | 10.1613/jair.1.14050 | null | cs.LG cs.SY eess.SY math.DS math.ST stat.TH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In machine learning, training data often capture the behaviour of multiple
subgroups of some underlying human population. This behaviour can often be
modelled as observations of an unknown dynamical system with an unobserved
state. When the training data for the subgroups are not controlled carefully,
however, under-representation bias arises. To counter under-representation
bias, we introduce two natural notions of fairness in time-series forecasting
problems: subgroup fairness and instantaneous fairness. These notions extend
predictive parity to the learning of dynamical systems. We also show globally
convergent methods for the fairness-constrained learning problems using
hierarchies of convexifications of non-commutative polynomial optimisation
problems. We also show that by exploiting sparsity in the convexifications, we
can reduce the run time of our methods considerably. Our empirical results on a
biased data set motivated by insurance applications and the well-known COMPAS
data set demonstrate the efficacy of our methods.
| [
{
"created": "Mon, 12 Sep 2022 14:32:12 GMT",
"version": "v1"
},
{
"created": "Fri, 16 Sep 2022 12:14:51 GMT",
"version": "v2"
},
{
"created": "Wed, 8 Mar 2023 23:14:14 GMT",
"version": "v3"
},
{
"created": "Mon, 15 May 2023 20:13:53 GMT",
"version": "v4"
}
] | 2023-05-17 | [
[
"Zhou",
"Quan",
""
],
[
"Marecek",
"Jakub",
""
],
[
"Shorten",
"Robert N.",
""
]
] | In machine learning, training data often capture the behaviour of multiple subgroups of some underlying human population. This behaviour can often be modelled as observations of an unknown dynamical system with an unobserved state. When the training data for the subgroups are not controlled carefully, however, under-representation bias arises. To counter under-representation bias, we introduce two natural notions of fairness in time-series forecasting problems: subgroup fairness and instantaneous fairness. These notions extend predictive parity to the learning of dynamical systems. We also show globally convergent methods for the fairness-constrained learning problems using hierarchies of convexifications of non-commutative polynomial optimisation problems. We also show that by exploiting sparsity in the convexifications, we can reduce the run time of our methods considerably. Our empirical results on a biased data set motivated by insurance applications and the well-known COMPAS data set demonstrate the efficacy of our methods. |
2210.08901 | Xuran Pan | Xuran Pan, Tianzhu Ye, Dongchen Han, Shiji Song, Gao Huang | Contrastive Language-Image Pre-Training with Knowledge Graphs | Accepted by NeurIPS2022 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent years have witnessed the fast development of large-scale pre-training
frameworks that can extract multi-modal representations in a unified form and
achieve promising performances when transferred to downstream tasks.
Nevertheless, existing approaches mainly focus on pre-training with simple
image-text pairs, while neglecting the semantic connections between concepts
from different modalities. In this paper, we propose a knowledge-based
pre-training framework, dubbed Knowledge-CLIP, which injects semantic
information into the widely used CLIP model. Through introducing
knowledge-based objectives in the pre-training process and utilizing different
types of knowledge graphs as training data, our model can semantically align
the representations in vision and language with higher quality, and enhance the
reasoning ability across scenarios and modalities. Extensive experiments on
various vision-language downstream tasks demonstrate the effectiveness of
Knowledge-CLIP compared with the original CLIP and competitive baselines.
| [
{
"created": "Mon, 17 Oct 2022 09:49:22 GMT",
"version": "v1"
}
] | 2022-10-18 | [
[
"Pan",
"Xuran",
""
],
[
"Ye",
"Tianzhu",
""
],
[
"Han",
"Dongchen",
""
],
[
"Song",
"Shiji",
""
],
[
"Huang",
"Gao",
""
]
] | Recent years have witnessed the fast development of large-scale pre-training frameworks that can extract multi-modal representations in a unified form and achieve promising performances when transferred to downstream tasks. Nevertheless, existing approaches mainly focus on pre-training with simple image-text pairs, while neglecting the semantic connections between concepts from different modalities. In this paper, we propose a knowledge-based pre-training framework, dubbed Knowledge-CLIP, which injects semantic information into the widely used CLIP model. Through introducing knowledge-based objectives in the pre-training process and utilizing different types of knowledge graphs as training data, our model can semantically align the representations in vision and language with higher quality, and enhance the reasoning ability across scenarios and modalities. Extensive experiments on various vision-language downstream tasks demonstrate the effectiveness of Knowledge-CLIP compared with the original CLIP and competitive baselines. |
1904.12936 | Amirreza Shaban | Amirreza Shaban, Amir Rahimi, Shray Bansal, Stephen Gould, Byron
Boots, Richard Hartley | Learning to Find Common Objects Across Few Image Collections | ICCV 2019 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Given a collection of bags where each bag is a set of images, our goal is to
select one image from each bag such that the selected images are from the same
object class. We model the selection as an energy minimization problem with
unary and pairwise potential functions. Inspired by recent few-shot learning
algorithms, we propose an approach to learn the potential functions directly
from the data. Furthermore, we propose a fast greedy inference algorithm for
energy minimization. We evaluate our approach on few-shot common object
recognition as well as object co-localization tasks. Our experiments show that
learning the pairwise and unary terms greatly improves the performance of the
model over several well-known methods for these tasks. The proposed greedy
optimization algorithm achieves performance comparable to state-of-the-art
structured inference algorithms while being ~10 times faster. The code is
publicly available on https://github.com/haamoon/finding_common_object.
| [
{
"created": "Mon, 29 Apr 2019 20:26:40 GMT",
"version": "v1"
},
{
"created": "Sat, 17 Aug 2019 01:08:21 GMT",
"version": "v2"
}
] | 2019-08-20 | [
[
"Shaban",
"Amirreza",
""
],
[
"Rahimi",
"Amir",
""
],
[
"Bansal",
"Shray",
""
],
[
"Gould",
"Stephen",
""
],
[
"Boots",
"Byron",
""
],
[
"Hartley",
"Richard",
""
]
] | Given a collection of bags where each bag is a set of images, our goal is to select one image from each bag such that the selected images are from the same object class. We model the selection as an energy minimization problem with unary and pairwise potential functions. Inspired by recent few-shot learning algorithms, we propose an approach to learn the potential functions directly from the data. Furthermore, we propose a fast greedy inference algorithm for energy minimization. We evaluate our approach on few-shot common object recognition as well as object co-localization tasks. Our experiments show that learning the pairwise and unary terms greatly improves the performance of the model over several well-known methods for these tasks. The proposed greedy optimization algorithm achieves performance comparable to state-of-the-art structured inference algorithms while being ~10 times faster. The code is publicly available on https://github.com/haamoon/finding_common_object. |
2212.07998 | Dimitri Bertsekas | Dimitri Bertsekas | Rollout Algorithms and Approximate Dynamic Programming for Bayesian
Optimization and Sequential Estimation | null | null | null | null | cs.AI cs.SY eess.SY | http://creativecommons.org/licenses/by/4.0/ | We provide a unifying approximate dynamic programming framework that applies
to a broad variety of problems involving sequential estimation. We consider
first the construction of surrogate cost functions for the purposes of
optimization, and we focus on the special case of Bayesian optimization, using
the rollout algorithm and some of its variations. We then discuss the more
general case of sequential estimation of a random vector using optimal
measurement selection, and its application to problems of stochastic and
adaptive control. We distinguish between adaptive control of deterministic and
stochastic systems: the former are better suited for the use of rollout, while
the latter are well suited for the use of rollout with certainty equivalence
approximations. As an example of the deterministic case, we discuss sequential
decoding problems, and a rollout algorithm for the approximate solution of the
Wordle and Mastermind puzzles, recently developed in the paper [BBB22].
| [
{
"created": "Thu, 15 Dec 2022 17:50:23 GMT",
"version": "v1"
},
{
"created": "Wed, 21 Dec 2022 14:19:03 GMT",
"version": "v2"
},
{
"created": "Thu, 29 Dec 2022 05:13:39 GMT",
"version": "v3"
}
] | 2023-01-02 | [
[
"Bertsekas",
"Dimitri",
""
]
] | We provide a unifying approximate dynamic programming framework that applies to a broad variety of problems involving sequential estimation. We consider first the construction of surrogate cost functions for the purposes of optimization, and we focus on the special case of Bayesian optimization, using the rollout algorithm and some of its variations. We then discuss the more general case of sequential estimation of a random vector using optimal measurement selection, and its application to problems of stochastic and adaptive control. We distinguish between adaptive control of deterministic and stochastic systems: the former are better suited for the use of rollout, while the latter are well suited for the use of rollout with certainty equivalence approximations. As an example of the deterministic case, we discuss sequential decoding problems, and a rollout algorithm for the approximate solution of the Wordle and Mastermind puzzles, recently developed in the paper [BBB22]. |
1803.09492 | Jussi Hanhirova | Jussi Hanhirova, Teemu K\"am\"ar\"ainen, Sipi Sepp\"al\"a, Matti
Siekkinen, Vesa Hirvisalo, Antti Yl\"a-J\"a\"aski | Latency and Throughput Characterization of Convolutional Neural Networks
for Mobile Computer Vision | 13 pages, 18 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study performance characteristics of convolutional neural networks (CNN)
for mobile computer vision systems. CNNs have proven to be a powerful and
efficient approach to implement such systems. However, the system performance
depends largely on the utilization of hardware accelerators, which are able to
speed up the execution of the underlying mathematical operations tremendously
through massive parallelism. Our contribution is performance characterization
of multiple CNN-based models for object recognition and detection with several
different hardware platforms and software frameworks, using both local
(on-device) and remote (network-side server) computation. The measurements are
conducted using real workloads and real processing platforms. On the platform
side, we concentrate especially on TensorFlow and TensorRT. Our measurements
include embedded processors found on mobile devices and high-performance
processors that can be used on the network side of mobile systems. We show that
there exist significant latency--throughput trade-offs, but the behavior is
very complex. We demonstrate and discuss several factors that affect the
performance and yield this complex behavior.
| [
{
"created": "Mon, 26 Mar 2018 09:49:03 GMT",
"version": "v1"
}
] | 2018-03-28 | [
[
"Hanhirova",
"Jussi",
""
],
[
"Kämäräinen",
"Teemu",
""
],
[
"Seppälä",
"Sipi",
""
],
[
"Siekkinen",
"Matti",
""
],
[
"Hirvisalo",
"Vesa",
""
],
[
"Ylä-Jääski",
"Antti",
""
]
] | We study performance characteristics of convolutional neural networks (CNN) for mobile computer vision systems. CNNs have proven to be a powerful and efficient approach to implement such systems. However, the system performance depends largely on the utilization of hardware accelerators, which are able to speed up the execution of the underlying mathematical operations tremendously through massive parallelism. Our contribution is performance characterization of multiple CNN-based models for object recognition and detection with several different hardware platforms and software frameworks, using both local (on-device) and remote (network-side server) computation. The measurements are conducted using real workloads and real processing platforms. On the platform side, we concentrate especially on TensorFlow and TensorRT. Our measurements include embedded processors found on mobile devices and high-performance processors that can be used on the network side of mobile systems. We show that there exist significant latency--throughput trade-offs, but the behavior is very complex. We demonstrate and discuss several factors that affect the performance and yield this complex behavior.
2110.11579 | Ziv Scully | Ziv Scully, Mor Harchol-Balter | How to Schedule Near-Optimally under Real-World Constraints | null | null | null | null | cs.PF | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Scheduling is a critical part of practical computer systems, and scheduling
has also been extensively studied from a theoretical perspective.
Unfortunately, there is a gap between theory and practice, as the optimal
scheduling policies presented by theory can be difficult or impossible to
perfectly implement in practice. In this work, we use recent breakthroughs in
queueing theory to begin to bridge this gap. We show how to translate
theoretically optimal policies -- which provably minimize mean response time
(a.k.a. latency) -- into near-optimal policies that are easily implemented in
practical settings. Specifically, we handle the following real-world
constraints:
- We show how to schedule in systems where job sizes (a.k.a. running time)
are unknown, or only partially known. We do so using simple policies that
achieve performance very close to the much more complicated theoretically
optimal policies.
- We show how to schedule in systems that have only a limited number of
priority levels available. We show how to adapt theoretically optimal policies
to this constrained setting and determine how many levels we need for
near-optimal performance.
- We show how to schedule in systems where job preemption can only happen at
specific checkpoints. Adding checkpoints allows for smarter scheduling, but
each checkpoint incurs time overhead. We give a rule of thumb that
near-optimally balances this tradeoff.
| [
{
"created": "Fri, 22 Oct 2021 04:15:19 GMT",
"version": "v1"
}
] | 2021-10-25 | [
[
"Scully",
"Ziv",
""
],
[
"Harchol-Balter",
"Mor",
""
]
] | Scheduling is a critical part of practical computer systems, and scheduling has also been extensively studied from a theoretical perspective. Unfortunately, there is a gap between theory and practice, as the optimal scheduling policies presented by theory can be difficult or impossible to perfectly implement in practice. In this work, we use recent breakthroughs in queueing theory to begin to bridge this gap. We show how to translate theoretically optimal policies -- which provably minimize mean response time (a.k.a. latency) -- into near-optimal policies that are easily implemented in practical settings. Specifically, we handle the following real-world constraints: - We show how to schedule in systems where job sizes (a.k.a. running time) are unknown, or only partially known. We do so using simple policies that achieve performance very close to the much more complicated theoretically optimal policies. - We show how to schedule in systems that have only a limited number of priority levels available. We show how to adapt theoretically optimal policies to this constrained setting and determine how many levels we need for near-optimal performance. - We show how to schedule in systems where job preemption can only happen at specific checkpoints. Adding checkpoints allows for smarter scheduling, but each checkpoint incurs time overhead. We give a rule of thumb that near-optimally balances this tradeoff. |
1809.02910 | Tsang-Kai Chang | Tsang-Kai Chang, Shengkang Chen, and Ankur Mehta | Localization Algorithm with Circular Representation in 2D and its
Similarity to Mammalian Brains | 8 pages, 2 figures, submitted to the IEEE Robotics and Automation
Letters (RA-L) journal with the option for presentation at RSS | null | null | null | cs.RO q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Extended Kalman filter (EKF) does not guarantee consistent mean and
covariance under linearization, even though it is the main framework for
robotic localization. While Lie group improves the modeling of the state space
in localization, the EKF on Lie group still relies on the arbitrary Gaussian
assumption in the face of nonlinear models. We instead use the von Mises filter for
orientation estimation together with the conventional Kalman filter for
position estimation, and thus we are able to characterize the first two moments
of the state estimates. Since the proposed algorithm rests on a solid
probabilistic basis, it is fundamentally free of the inconsistency
problem. Furthermore, we extend the localization algorithm to a fully circular
representation even for position, which is similar to grid patterns found in
mammalian brains and in recurrent neural networks. The applicability of the
proposed algorithms is substantiated not only by a strong mathematical
foundation but also by comparison against other common localization methods.
| [
{
"created": "Sun, 9 Sep 2018 01:54:21 GMT",
"version": "v1"
},
{
"created": "Fri, 25 Jan 2019 03:56:57 GMT",
"version": "v2"
}
] | 2019-01-28 | [
[
"Chang",
"Tsang-Kai",
""
],
[
"Chen",
"Shengkang",
""
],
[
"Mehta",
"Ankur",
""
]
] | Extended Kalman filter (EKF) does not guarantee consistent mean and covariance under linearization, even though it is the main framework for robotic localization. While Lie group improves the modeling of the state space in localization, the EKF on Lie group still relies on the arbitrary Gaussian assumption in the face of nonlinear models. We instead use the von Mises filter for orientation estimation together with the conventional Kalman filter for position estimation, and thus we are able to characterize the first two moments of the state estimates. Since the proposed algorithm rests on a solid probabilistic basis, it is fundamentally free of the inconsistency problem. Furthermore, we extend the localization algorithm to a fully circular representation even for position, which is similar to grid patterns found in mammalian brains and in recurrent neural networks. The applicability of the proposed algorithms is substantiated not only by a strong mathematical foundation but also by comparison against other common localization methods. |
0805.2854 | Feng Xia | Feng Xia, Longhua Ma, Jinxiang Dong and Youxian Sun | Network QoS Management in Cyber-Physical Systems | To appear in The 2008 Int.Conf. on Embedded Software and Systems
(ICESS), Chengdu, China, July 2008 | null | null | null | cs.NI cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Technical advances in ubiquitous sensing, embedded computing, and wireless
communication are leading to a new generation of engineered systems called
cyber-physical systems (CPS). CPS promises to transform the way we interact
with the physical world just as the Internet transformed how we interact with
one another. Before this vision becomes a reality, however, a large number of
challenges have to be addressed. Network quality of service (QoS) management in
this new realm is among those issues that deserve extensive research efforts.
It is envisioned that wireless sensor/actuator networks (WSANs) will play an
essential role in CPS. This paper examines the main characteristics of WSANs
and the requirements of QoS provisioning in the context of cyber-physical
computing. Several research topics and challenges are identified. As a sample
solution, a feedback scheduling framework is proposed to tackle some of the
identified challenges. A simple example is also presented that illustrates the
effectiveness of the proposed solution.
| [
{
"created": "Mon, 19 May 2008 12:49:11 GMT",
"version": "v1"
}
] | 2008-12-18 | [
[
"Xia",
"Feng",
""
],
[
"Ma",
"Longhua",
""
],
[
"Dong",
"Jinxiang",
""
],
[
"Sun",
"Youxian",
""
]
] | Technical advances in ubiquitous sensing, embedded computing, and wireless communication are leading to a new generation of engineered systems called cyber-physical systems (CPS). CPS promises to transform the way we interact with the physical world just as the Internet transformed how we interact with one another. Before this vision becomes a reality, however, a large number of challenges have to be addressed. Network quality of service (QoS) management in this new realm is among those issues that deserve extensive research efforts. It is envisioned that wireless sensor/actuator networks (WSANs) will play an essential role in CPS. This paper examines the main characteristics of WSANs and the requirements of QoS provisioning in the context of cyber-physical computing. Several research topics and challenges are identified. As a sample solution, a feedback scheduling framework is proposed to tackle some of the identified challenges. A simple example is also presented that illustrates the effectiveness of the proposed solution. |
2004.13181 | Sheldon Tan | Wentian Jin, Sheriff Sadiqbatcha, Jinwei Zhang, Sheldon X.-D. Tan | EM-GAN: Fast Stress Analysis for Multi-Segment Interconnect Using
Generative Adversarial Networks | null | null | null | null | cs.LG cs.NE eess.IV stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a fast transient hydrostatic stress analysis for
electromigration (EM) failure assessment for multi-segment interconnects using
generative adversarial networks (GANs). Our work leverages the image synthesis
feature of GAN-based generative deep neural networks. The stress evaluation of
multi-segment interconnects, modeled by partial differential equations, can be
viewed as a time-varying 2D-images-to-image problem where the input is the
multi-segment interconnects topology with current densities and the output is
the EM stress distribution in those wire segments at the given aging time.
Based on this observation, we train a conditional GAN model using the images of
many self-generated multi-segment wires and wire current densities and aging
time (as conditions) against the COMSOL simulation results. Different
hyperparameters of the GAN were studied and compared. The proposed algorithm,
called {\it EM-GAN}, can quickly give accurate stress distribution of a general
multi-segment wire tree for a given aging time, which is important for
full-chip fast EM failure assessment. Our experimental results show that the
EM-GAN achieves a 6.6\% average error compared to COMSOL simulation results
with orders-of-magnitude speedup. It also delivers an 8.3X speedup over a
state-of-the-art analytic-based EM analysis solver.
| [
{
"created": "Mon, 27 Apr 2020 21:18:11 GMT",
"version": "v1"
}
] | 2020-04-29 | [
[
"Jin",
"Wentian",
""
],
[
"Sadiqbatcha",
"Sheriff",
""
],
[
"Zhang",
"Jinwei",
""
],
[
"Tan",
"Sheldon X. -D.",
""
]
] | In this paper, we propose a fast transient hydrostatic stress analysis for electromigration (EM) failure assessment for multi-segment interconnects using generative adversarial networks (GANs). Our work leverages the image synthesis feature of GAN-based generative deep neural networks. The stress evaluation of multi-segment interconnects, modeled by partial differential equations, can be viewed as a time-varying 2D-images-to-image problem where the input is the multi-segment interconnects topology with current densities and the output is the EM stress distribution in those wire segments at the given aging time. Based on this observation, we train a conditional GAN model using the images of many self-generated multi-segment wires and wire current densities and aging time (as conditions) against the COMSOL simulation results. Different hyperparameters of the GAN were studied and compared. The proposed algorithm, called {\it EM-GAN}, can quickly give accurate stress distribution of a general multi-segment wire tree for a given aging time, which is important for full-chip fast EM failure assessment. Our experimental results show that the EM-GAN achieves a 6.6\% average error compared to COMSOL simulation results with orders-of-magnitude speedup. It also delivers an 8.3X speedup over a state-of-the-art analytic-based EM analysis solver. |
2009.11796 | Neha Kaushik | Niladri Chatterjee, Neha Kaushik | Automatic Extraction of Agriculture Terms from Domain Text: A Survey of
Tools and Techniques | null | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Agriculture is a key component in any country's development. Domain-specific
knowledge resources serve to gain insight into the domain. Existing knowledge
resources such as AGROVOC and NAL Thesaurus are developed and maintained by the
domain experts. Population of terms into these knowledge resources can be
automated by using automatic term extraction tools for processing unstructured
agricultural text. Automatic term extraction is also a key component in many
semantic web applications, such as ontology creation, recommendation systems,
sentiment classification, query expansion among others. The primary goal of an
automatic term extraction system is to maximize the number of valid terms and
minimize the number of invalid terms extracted from the input set of documents.
Despite its importance in various applications, the availability of online
tools for the said purpose is rather limited. Moreover, the performance of the
most popular ones among them varies significantly. As a consequence, selection
of the right term extraction tool is perceived as a serious problem for
different knowledge-based applications. This paper presents an analysis of
three commonly used term extraction tools, viz. RAKE, TerMine, TermRaider and
compares their performance in terms of precision and recall, vis-a-vis RENT, a
more recent term extractor developed by these authors for agriculture domain.
| [
{
"created": "Thu, 24 Sep 2020 16:38:44 GMT",
"version": "v1"
}
] | 2020-09-25 | [
[
"Chatterjee",
"Niladri",
""
],
[
"Kaushik",
"Neha",
""
]
] | Agriculture is a key component in any country's development. Domain-specific knowledge resources serve to gain insight into the domain. Existing knowledge resources such as AGROVOC and NAL Thesaurus are developed and maintained by the domain experts. Population of terms into these knowledge resources can be automated by using automatic term extraction tools for processing unstructured agricultural text. Automatic term extraction is also a key component in many semantic web applications, such as ontology creation, recommendation systems, sentiment classification, query expansion among others. The primary goal of an automatic term extraction system is to maximize the number of valid terms and minimize the number of invalid terms extracted from the input set of documents. Despite its importance in various applications, the availability of online tools for the said purpose is rather limited. Moreover, the performance of the most popular ones among them varies significantly. As a consequence, selection of the right term extraction tool is perceived as a serious problem for different knowledge-based applications. This paper presents an analysis of three commonly used term extraction tools, viz. RAKE, TerMine, TermRaider and compares their performance in terms of precision and recall, vis-a-vis RENT, a more recent term extractor developed by these authors for agriculture domain. |
2402.13545 | Guandong Li | Guandong Li, Xian Yang, Wenpin Ma | A Two-Stage Dual-Path Framework for Text Tampering Detection and
Recognition | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Document tamper detection has always been an important aspect of tamper
detection. Before the advent of deep learning, document tamper detection was
difficult. We have made some explorations in the field of text tamper detection
based on deep learning. Our Ps tamper detection method includes three steps:
feature assistance, audit point positioning, and tamper recognition. It
involves hierarchical filtering and graded output (tampered/suspected
tampered/untampered). By combining artificial tamper data features, we simulate
and augment data samples in various scenarios (cropping with noise
addition/replacement, single character/space replacement, smearing/splicing,
brightness/contrast adjustment, etc.). The auxiliary features include
exif/binary stream keyword retrieval/noise, which are used for branch detection
based on the results. Audit point positioning uses detection frameworks and
controls thresholds for high and low density detection. Tamper recognition
employs a dual-path dual-stream recognition network, with RGB and ELA stream
feature extraction. After dimensionality reduction through self-correlation
percentile pooling, the fused output is processed through vlad, yielding an
accuracy of 0.804, recall of 0.659, and precision of 0.913.
| [
{
"created": "Wed, 21 Feb 2024 05:54:42 GMT",
"version": "v1"
},
{
"created": "Thu, 22 Feb 2024 02:12:19 GMT",
"version": "v2"
}
] | 2024-02-23 | [
[
"Li",
"Guandong",
""
],
[
"Yang",
"Xian",
""
],
[
"Ma",
"Wenpin",
""
]
] | Document tamper detection has always been an important aspect of tamper detection. Before the advent of deep learning, document tamper detection was difficult. We have made some explorations in the field of text tamper detection based on deep learning. Our Ps tamper detection method includes three steps: feature assistance, audit point positioning, and tamper recognition. It involves hierarchical filtering and graded output (tampered/suspected tampered/untampered). By combining artificial tamper data features, we simulate and augment data samples in various scenarios (cropping with noise addition/replacement, single character/space replacement, smearing/splicing, brightness/contrast adjustment, etc.). The auxiliary features include exif/binary stream keyword retrieval/noise, which are used for branch detection based on the results. Audit point positioning uses detection frameworks and controls thresholds for high and low density detection. Tamper recognition employs a dual-path dual-stream recognition network, with RGB and ELA stream feature extraction. After dimensionality reduction through self-correlation percentile pooling, the fused output is processed through vlad, yielding an accuracy of 0.804, recall of 0.659, and precision of 0.913. |
2308.08451 | Shenghui Cheng | Xiangyu Li, Yuqing Fan, Shenghui Cheng | AIGC In China: Current Developments And Future Outlook | null | null | null | null | cs.AI cs.CY | http://creativecommons.org/licenses/by/4.0/ | The increasing attention given to AI Generated Content (AIGC) has brought a
profound impact on various aspects of daily life, industrial manufacturing, and
the academic sector. Recognizing the global trends and competitiveness in AIGC
development, this study aims to analyze China's current status in the field.
The investigation begins with an overview of the foundational technologies and
current applications of AIGC. Subsequently, the study delves into the market
status, policy landscape, and development trajectory of AIGC in China,
utilizing keyword searches to identify relevant scholarly papers. Furthermore,
the paper provides a comprehensive examination of AIGC products and their
corresponding ecosystem, emphasizing the ecological construction of AIGC.
Finally, this paper discusses the challenges and risks faced by the AIGC
industry while presenting a forward-looking perspective on the industry's
future based on competitive insights in AIGC.
| [
{
"created": "Mon, 14 Aug 2023 13:55:38 GMT",
"version": "v1"
},
{
"created": "Mon, 21 Aug 2023 07:23:13 GMT",
"version": "v2"
}
] | 2023-08-22 | [
[
"Li",
"Xiangyu",
""
],
[
"Fan",
"Yuqing",
""
],
[
"Cheng",
"Shenghui",
""
]
] | The increasing attention given to AI Generated Content (AIGC) has brought a profound impact on various aspects of daily life, industrial manufacturing, and the academic sector. Recognizing the global trends and competitiveness in AIGC development, this study aims to analyze China's current status in the field. The investigation begins with an overview of the foundational technologies and current applications of AIGC. Subsequently, the study delves into the market status, policy landscape, and development trajectory of AIGC in China, utilizing keyword searches to identify relevant scholarly papers. Furthermore, the paper provides a comprehensive examination of AIGC products and their corresponding ecosystem, emphasizing the ecological construction of AIGC. Finally, this paper discusses the challenges and risks faced by the AIGC industry while presenting a forward-looking perspective on the industry's future based on competitive insights in AIGC. |
2403.13667 | Zixuan Wang | Zixuan Wang, Jia Jia, Shikun Sun, Haozhe Wu, Rong Han, Zhenyu Li, Di
Tang, Jiaqing Zhou, Jiebo Luo | DanceCamera3D: 3D Camera Movement Synthesis with Music and Dance | Accept to CVPR 2024 | null | null | null | cs.CV cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Choreographers determine what the dances look like, while cameramen determine
the final presentation of dances. Recently, various methods and datasets have
showcased the feasibility of dance synthesis. However, camera movement
synthesis with music and dance remains an unsolved challenging problem due to
the scarcity of paired data. Thus, we present DCM, a new multi-modal 3D
dataset, which for the first time combines camera movement with dance motion
and music audio. This dataset encompasses 108 dance sequences (3.2 hours) of
paired dance-camera-music data from the anime community, covering 4 music
genres. With this dataset, we uncover that dance camera movement is
multifaceted and human-centric, and possesses multiple influencing factors,
making dance camera synthesis a more challenging task compared to camera or
dance synthesis alone. To overcome these difficulties, we propose
DanceCamera3D, a transformer-based diffusion model that incorporates a novel
body attention loss and a condition separation strategy. For evaluation, we
devise new metrics measuring camera movement quality, diversity, and dancer
fidelity. Utilizing these metrics, we conduct extensive experiments on our DCM
dataset, providing both quantitative and qualitative evidence showcasing the
effectiveness of our DanceCamera3D model. Code and video demos are available at
https://github.com/Carmenw1203/DanceCamera3D-Official.
| [
{
"created": "Wed, 20 Mar 2024 15:24:57 GMT",
"version": "v1"
}
] | 2024-03-21 | [
[
"Wang",
"Zixuan",
""
],
[
"Jia",
"Jia",
""
],
[
"Sun",
"Shikun",
""
],
[
"Wu",
"Haozhe",
""
],
[
"Han",
"Rong",
""
],
[
"Li",
"Zhenyu",
""
],
[
"Tang",
"Di",
""
],
[
"Zhou",
"Jiaqing",
""
],
[
"Luo",
"Jiebo",
""
]
] | Choreographers determine what the dances look like, while cameramen determine the final presentation of dances. Recently, various methods and datasets have showcased the feasibility of dance synthesis. However, camera movement synthesis with music and dance remains an unsolved challenging problem due to the scarcity of paired data. Thus, we present DCM, a new multi-modal 3D dataset, which for the first time combines camera movement with dance motion and music audio. This dataset encompasses 108 dance sequences (3.2 hours) of paired dance-camera-music data from the anime community, covering 4 music genres. With this dataset, we uncover that dance camera movement is multifaceted and human-centric, and possesses multiple influencing factors, making dance camera synthesis a more challenging task compared to camera or dance synthesis alone. To overcome these difficulties, we propose DanceCamera3D, a transformer-based diffusion model that incorporates a novel body attention loss and a condition separation strategy. For evaluation, we devise new metrics measuring camera movement quality, diversity, and dancer fidelity. Utilizing these metrics, we conduct extensive experiments on our DCM dataset, providing both quantitative and qualitative evidence showcasing the effectiveness of our DanceCamera3D model. Code and video demos are available at https://github.com/Carmenw1203/DanceCamera3D-Official. |
2110.02453 | Lin Zheng | Lin Zheng, Huijie Pan, Lingpeng Kong | Ripple Attention for Visual Perception with Sub-quadratic Complexity | 19 pages, 2 figures, ICML 2022 camera ready | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Transformer architectures are now central to sequence modeling tasks. At its
heart is the attention mechanism, which enables effective modeling of long-term
dependencies in a sequence. Recently, transformers have been successfully
applied in the computer vision domain, where 2D images are first segmented into
patches and then treated as 1D sequences. Such linearization, however, impairs
the notion of spatial locality in images, which bears important visual clues.
To bridge the gap, we propose ripple attention, a sub-quadratic attention
mechanism for vision transformers. Built upon the recent kernel-based efficient
attention mechanisms, we design a novel dynamic programming algorithm that
weights contributions of different tokens to a query with respect to their
relative spatial distances in the 2D space in linear observed time. Extensive
experiments and analyses demonstrate the effectiveness of ripple attention on
various visual tasks.
| [
{
"created": "Wed, 6 Oct 2021 02:00:38 GMT",
"version": "v1"
},
{
"created": "Wed, 15 Jun 2022 13:59:31 GMT",
"version": "v2"
}
] | 2022-06-16 | [
[
"Zheng",
"Lin",
""
],
[
"Pan",
"Huijie",
""
],
[
"Kong",
"Lingpeng",
""
]
] | Transformer architectures are now central to sequence modeling tasks. At its heart is the attention mechanism, which enables effective modeling of long-term dependencies in a sequence. Recently, transformers have been successfully applied in the computer vision domain, where 2D images are first segmented into patches and then treated as 1D sequences. Such linearization, however, impairs the notion of spatial locality in images, which bears important visual clues. To bridge the gap, we propose ripple attention, a sub-quadratic attention mechanism for vision transformers. Built upon the recent kernel-based efficient attention mechanisms, we design a novel dynamic programming algorithm that weights contributions of different tokens to a query with respect to their relative spatial distances in the 2D space in linear observed time. Extensive experiments and analyses demonstrate the effectiveness of ripple attention on various visual tasks. |
2407.13214 | Lu Gan | Lu Gan and Xi Li | TXL-PBC: a freely accessible labeled peripheral blood cell dataset | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In a recent study, we found that the publicly available BCCD and BCD datasets
have significant issues such as labeling errors, insufficient sample size, and poor
data quality. To address these problems, we performed sample deletion,
re-labeling, and integration of these two datasets. Additionally, we introduced
the PBC and Raabin-WBC datasets, and ultimately created a high-quality,
sample-balanced new dataset, which we named TXL-PBC. The dataset contains 1008
training sets, 288 validation sets, and 144 test sets. Firstly, the dataset
underwent strict manual annotation, automatic annotation with the YOLOv8n model,
and manual audit steps to ensure the accuracy and consistency of annotations.
Secondly, we address the blood cell mislabeling problem of the original
datasets. The distribution of label bounding box areas and the number of labels
are better than those of the BCCD and BCD datasets. Moreover, we trained the
YOLOv8n model on these three datasets; performance on the TXL-PBC dataset
surpasses that on the original two. Finally, we employed the YOLOv5n, YOLOv5s,
YOLOv5l, YOLOv8s, and YOLOv8m detection models as the baseline models for TXL-PBC. This
study not only enhances the quality of the blood cell dataset but also supports
researchers in improving models for blood cell target detection. We published
our freely accessible TXL-PBC dataset at
https://github.com/lugan113/TXL-PBC\_Dataset.
| [
{
"created": "Thu, 18 Jul 2024 06:54:49 GMT",
"version": "v1"
}
] | 2024-07-19 | [
[
"Gan",
"Lu",
""
],
[
"Li",
"Xi",
""
]
] | In a recent study, we found that the publicly available BCCD and BCD datasets have significant issues such as labeling errors, insufficient sample size, and poor data quality. To address these problems, we performed sample deletion, re-labeling, and integration of these two datasets. Additionally, we introduced the PBC and Raabin-WBC datasets, and ultimately created a high-quality, sample-balanced new dataset, which we named TXL-PBC. The dataset contains 1008 training sets, 288 validation sets, and 144 test sets. Firstly, the dataset underwent strict manual annotation, automatic annotation with the YOLOv8n model, and manual audit steps to ensure the accuracy and consistency of annotations. Secondly, we address the blood cell mislabeling problem of the original datasets. The distribution of label bounding box areas and the number of labels are better than those of the BCCD and BCD datasets. Moreover, we trained the YOLOv8n model on these three datasets; performance on the TXL-PBC dataset surpasses that on the original two. Finally, we employed the YOLOv5n, YOLOv5s, YOLOv5l, YOLOv8s, and YOLOv8m detection models as the baseline models for TXL-PBC. This study not only enhances the quality of the blood cell dataset but also supports researchers in improving models for blood cell target detection. We published our freely accessible TXL-PBC dataset at https://github.com/lugan113/TXL-PBC\_Dataset. |
2310.19990 | Ankur Nath | Ankur Nath, Alan Kuhnle | Unveiling the Limits of Learned Local Search Heuristics: Are You the
Mightiest of the Meek? | null | null | null | null | cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | In recent years, combining neural networks with local search heuristics has
become popular in the field of combinatorial optimization. Despite its
considerable computational demands, this approach has exhibited promising
outcomes with minimal manual engineering. However, we have identified three
critical limitations in the empirical evaluation of these integration attempts.
Firstly, instances with moderate complexity and weak baselines pose a challenge
in accurately evaluating the effectiveness of learning-based approaches.
Secondly, the absence of an ablation study makes it difficult to quantify and
attribute improvements accurately to the deep learning architecture. Lastly,
the generalization of learned heuristics across diverse distributions remains
underexplored. In this study, we conduct a comprehensive investigation into
these identified limitations. Surprisingly, we demonstrate that a simple
learned heuristic based on Tabu Search surpasses state-of-the-art (SOTA)
learned heuristics in terms of performance and generalizability. Our findings
challenge prevailing assumptions and open up exciting avenues for future
research and innovation in combinatorial optimization.
| [
{
"created": "Mon, 30 Oct 2023 20:16:42 GMT",
"version": "v1"
}
] | 2023-11-01 | [
[
"Nath",
"Ankur",
""
],
[
"Kuhnle",
"Alan",
""
]
] | In recent years, combining neural networks with local search heuristics has become popular in the field of combinatorial optimization. Despite its considerable computational demands, this approach has exhibited promising outcomes with minimal manual engineering. However, we have identified three critical limitations in the empirical evaluation of these integration attempts. Firstly, instances with moderate complexity and weak baselines pose a challenge in accurately evaluating the effectiveness of learning-based approaches. Secondly, the absence of an ablation study makes it difficult to quantify and attribute improvements accurately to the deep learning architecture. Lastly, the generalization of learned heuristics across diverse distributions remains underexplored. In this study, we conduct a comprehensive investigation into these identified limitations. Surprisingly, we demonstrate that a simple learned heuristic based on Tabu Search surpasses state-of-the-art (SOTA) learned heuristics in terms of performance and generalizability. Our findings challenge prevailing assumptions and open up exciting avenues for future research and innovation in combinatorial optimization. |
2105.07157 | Yuyang Wei | Yuyang Wei, Yijun Yu, Minxue Pan, Tian Zhang | A Feature Table approach to decomposing monolithic applications into
microservices | null | null | 10.1145/3457913.3457939 | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Microservice architecture refers to the use of numerous small-scale and
independently deployed services, instead of encapsulating all functions into
one monolith. It has been a challenge in software engineering to decompose a
monolithic system into smaller parts. In this paper, we propose the Feature
Table approach, a structured approach to service decomposition based on the
correlation between functional features and microservices: (1) we defined the
concept of {\em Feature Cards} and 12 instances of such cards; (2) we
formulated {\em Decomposition Rules} to decompose monolithic applications; (3)
we designed the {\em Feature Table Analysis Tool} to provide semi-automatic
analysis for identification of microservices; and (4) we formulated {\em
Mapping Rules} to help developers implement microservice candidates. We
performed a case study on Cargo Tracking System to validate our
microservice-oriented decomposition approach. Cargo Tracking System is a
typical case that has been decomposed by other related methods (dataflow-driven
approach, Service Cutter, and API Analysis). Through comparison with the
related methods in terms of specific coupling and cohesion metrics, the results
show that the proposed Feature Table approach can deliver more reasonable
microservice candidates, which are feasible in implementation with
semi-automatic support.
| [
{
"created": "Sat, 15 May 2021 07:08:30 GMT",
"version": "v1"
}
] | 2021-05-18 | [
[
"Wei",
"Yuyang",
""
],
[
"Yu",
"Yijun",
""
],
[
"Pan",
"Minxue",
""
],
[
"Zhang",
"Tian",
""
]
] | Microservice architecture refers to the use of numerous small-scale and independently deployed services, instead of encapsulating all functions into one monolith. It has been a challenge in software engineering to decompose a monolithic system into smaller parts. In this paper, we propose the Feature Table approach, a structured approach to service decomposition based on the correlation between functional features and microservices: (1) we defined the concept of {\em Feature Cards} and 12 instances of such cards; (2) we formulated {\em Decomposition Rules} to decompose monolithic applications; (3) we designed the {\em Feature Table Analysis Tool} to provide semi-automatic analysis for identification of microservices; and (4) we formulated {\em Mapping Rules} to help developers implement microservice candidates. We performed a case study on Cargo Tracking System to validate our microservice-oriented decomposition approach. Cargo Tracking System is a typical case that has been decomposed by other related methods (dataflow-driven approach, Service Cutter, and API Analysis). Through comparison with the related methods in terms of specific coupling and cohesion metrics, the results show that the proposed Feature Table approach can deliver more reasonable microservice candidates, which are feasible in implementation with semi-automatic support. |
2109.14286 | Maximilian Hils | Maximilian Hils, Daniel W. Woods, Rainer B\"ohme | Conflicting Privacy Preference Signals in the Wild | null | null | null | null | cs.HC | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Privacy preference signals allow users to express preferences over how their
personal data is processed. These signals become important in determining
privacy outcomes when they reference an enforceable legal basis, as is the case
with recent signals such as the Global Privacy Control and the Transparency &
Consent Framework. However, the coexistence of multiple privacy preference
signals creates ambiguity as users may transmit more than one signal. This
paper collects evidence about ambiguity flowing from the aforementioned two
signals and the historic Do Not Track signal. We provide the first empirical
evidence that ambiguous signals are sent by web users in the wild. We also show
that preferences stored in the browser are reliable predictors of privacy
preferences expressed in web dialogs. Finally, we provide the first evidence
that popular cookie dialogs are blocked by the majority of users who adopted
the Do Not Track and Global Privacy Control standards. These empirical results
inform forthcoming legal debates about how to interpret privacy preference
signals.
| [
{
"created": "Wed, 29 Sep 2021 09:10:47 GMT",
"version": "v1"
}
] | 2021-09-30 | [
[
"Hils",
"Maximilian",
""
],
[
"Woods",
"Daniel W.",
""
],
[
"Böhme",
"Rainer",
""
]
] | Privacy preference signals allow users to express preferences over how their personal data is processed. These signals become important in determining privacy outcomes when they reference an enforceable legal basis, as is the case with recent signals such as the Global Privacy Control and the Transparency & Consent Framework. However, the coexistence of multiple privacy preference signals creates ambiguity as users may transmit more than one signal. This paper collects evidence about ambiguity flowing from the aforementioned two signals and the historic Do Not Track signal. We provide the first empirical evidence that ambiguous signals are sent by web users in the wild. We also show that preferences stored in the browser are reliable predictors of privacy preferences expressed in web dialogs. Finally, we provide the first evidence that popular cookie dialogs are blocked by the majority of users who adopted the Do Not Track and Global Privacy Control standards. These empirical results inform forthcoming legal debates about how to interpret privacy preference signals. |
2110.15525 | Bin Wang | Kaitai Zhang, Bin Wang, C.-C. Jay Kuo | PEDENet: Image Anomaly Localization via Patch Embedding and Density
Estimation | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A neural network targeting at unsupervised image anomaly localization, called
the PEDENet, is proposed in this work. PEDENet contains a patch embedding (PE)
network, a density estimation (DE) network, and an auxiliary network called the
location prediction (LP) network. The PE network takes local image patches as
input and performs dimension reduction to get low-dimensional patch embeddings
via a deep encoder structure. Being inspired by the Gaussian Mixture Model
(GMM), the DE network takes those patch embeddings and then predicts the
cluster membership of an embedded patch. The sum of membership probabilities is
used as a loss term to guide the learning process. The LP network is a
Multi-layer Perceptron (MLP), which takes embeddings from two neighboring
patches as input and predicts their relative location. The performance of the
proposed PEDENet is evaluated extensively and benchmarked with that of
state-of-the-art methods.
| [
{
"created": "Fri, 29 Oct 2021 03:52:56 GMT",
"version": "v1"
}
] | 2021-11-01 | [
[
"Zhang",
"Kaitai",
""
],
[
"Wang",
"Bin",
""
],
[
"Kuo",
"C. -C. Jay",
""
]
] | A neural network targeting at unsupervised image anomaly localization, called the PEDENet, is proposed in this work. PEDENet contains a patch embedding (PE) network, a density estimation (DE) network, and an auxiliary network called the location prediction (LP) network. The PE network takes local image patches as input and performs dimension reduction to get low-dimensional patch embeddings via a deep encoder structure. Being inspired by the Gaussian Mixture Model (GMM), the DE network takes those patch embeddings and then predicts the cluster membership of an embedded patch. The sum of membership probabilities is used as a loss term to guide the learning process. The LP network is a Multi-layer Perceptron (MLP), which takes embeddings from two neighboring patches as input and predicts their relative location. The performance of the proposed PEDENet is evaluated extensively and benchmarked with that of state-of-the-art methods. |
2010.00796 | Chenguang Zhu | Donghan Yu, Chenguang Zhu, Yiming Yang, Michael Zeng | JAKET: Joint Pre-training of Knowledge Graph and Language Understanding | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Knowledge graphs (KGs) contain rich information about world knowledge,
entities and relations. Thus, they can be great supplements to existing
pre-trained language models. However, it remains a challenge to efficiently
integrate information from KG into language modeling. And the understanding of
a knowledge graph requires related context. We propose a novel joint
pre-training framework, JAKET, to model both the knowledge graph and language.
The knowledge module and language module provide essential information to
mutually assist each other: the knowledge module produces embeddings for
entities in text while the language module generates context-aware initial
embeddings for entities and relations in the graph. Our design enables the
pre-trained model to easily adapt to unseen knowledge graphs in new domains.
Experimental results on several knowledge-aware NLP tasks show that our
proposed framework achieves superior performance by effectively leveraging
knowledge in language understanding.
| [
{
"created": "Fri, 2 Oct 2020 05:53:36 GMT",
"version": "v1"
}
] | 2020-10-05 | [
[
"Yu",
"Donghan",
""
],
[
"Zhu",
"Chenguang",
""
],
[
"Yang",
"Yiming",
""
],
[
"Zeng",
"Michael",
""
]
] | Knowledge graphs (KGs) contain rich information about world knowledge, entities and relations. Thus, they can be great supplements to existing pre-trained language models. However, it remains a challenge to efficiently integrate information from KG into language modeling. And the understanding of a knowledge graph requires related context. We propose a novel joint pre-training framework, JAKET, to model both the knowledge graph and language. The knowledge module and language module provide essential information to mutually assist each other: the knowledge module produces embeddings for entities in text while the language module generates context-aware initial embeddings for entities and relations in the graph. Our design enables the pre-trained model to easily adapt to unseen knowledge graphs in new domains. Experimental results on several knowledge-aware NLP tasks show that our proposed framework achieves superior performance by effectively leveraging knowledge in language understanding. |
1504.03880 | Martin Zimmermann | Peter Faymonville and Martin Zimmermann | Parametric Linear Dynamic Logic (full version) | Accepted for publication at Information and Computation. A
preliminary version of this work appeared in GandALF 2014 (arXiv:1408.5957) | null | null | null | cs.LO cs.FL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce Parametric Linear Dynamic Logic (PLDL), which extends Linear
Dynamic Logic (LDL) by temporal operators equipped with parameters that bound
their scope. LDL itself was proposed as an extension of Linear Temporal Logic
(LTL) that is able to express all omega-regular specifications while still
maintaining many of LTL's desirable properties like intuitive syntax and
semantics and a translation into non-deterministic B\"uchi automata of
exponential size. But LDL lacks capabilities to express timing constraints. By
adding parameterized operators to LDL, we obtain a logic that is able to
express all omega-regular properties and that subsumes parameterized extensions
of LTL like Parametric LTL and PROMPT-LTL.
Our main technical contribution is a translation of PLDL formulas into
non-deterministic B\"uchi automata of exponential size via alternating
automata. This yields polynomial space algorithms for model checking and
assume-guarantee model checking and a realizability algorithm with
doubly-exponential running time. All three problems are also shown to be
complete for these complexity classes. Moreover, we give tight upper and lower
bounds on optimal parameter values for model checking and realizability. Using
these bounds, we present a polynomial space procedure for model checking
optimization and an algorithm with triply-exponential running time for
realizability optimization. Our results show that PLDL model checking and
realizability are no harder than their respective (parametric) LTL
counterparts.
| [
{
"created": "Wed, 15 Apr 2015 12:19:52 GMT",
"version": "v1"
},
{
"created": "Mon, 7 Dec 2015 14:57:53 GMT",
"version": "v2"
}
] | 2015-12-08 | [
[
"Faymonville",
"Peter",
""
],
[
"Zimmermann",
"Martin",
""
]
] | We introduce Parametric Linear Dynamic Logic (PLDL), which extends Linear Dynamic Logic (LDL) by temporal operators equipped with parameters that bound their scope. LDL itself was proposed as an extension of Linear Temporal Logic (LTL) that is able to express all omega-regular specifications while still maintaining many of LTL's desirable properties like intuitive syntax and semantics and a translation into non-deterministic B\"uchi automata of exponential size. But LDL lacks capabilities to express timing constraints. By adding parameterized operators to LDL, we obtain a logic that is able to express all omega-regular properties and that subsumes parameterized extensions of LTL like Parametric LTL and PROMPT-LTL. Our main technical contribution is a translation of PLDL formulas into non-deterministic B\"uchi automata of exponential size via alternating automata. This yields polynomial space algorithms for model checking and assume-guarantee model checking and a realizability algorithm with doubly-exponential running time. All three problems are also shown to be complete for these complexity classes. Moreover, we give tight upper and lower bounds on optimal parameter values for model checking and realizability. Using these bounds, we present a polynomial space procedure for model checking optimization and an algorithm with triply-exponential running time for realizability optimization. Our results show that PLDL model checking and realizability are no harder than their respective (parametric) LTL counterparts. |
2310.00840 | Tianjian Li | Tianjian Li, Haoran Xu, Philipp Koehn, Daniel Khashabi, Kenton Murray | Error Norm Truncation: Robust Training in the Presence of Data Noise for
Text Generation Models | ICLR 2024 | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Text generation models are notoriously vulnerable to errors in the training
data. With the wide-spread availability of massive amounts of web-crawled data
becoming more commonplace, how can we enhance the robustness of models trained
on a massive amount of noisy web-crawled text? In our work, we propose Error
Norm Truncation (ENT), a robust enhancement method to the standard training
objective that truncates noisy data. Compared to methods that only use the
negative log-likelihood loss to estimate data quality, our method provides a
more accurate estimation by considering the distribution of non-target tokens,
which is often overlooked by previous work. Through comprehensive experiments
across language modeling, machine translation, and text summarization, we show
that equipping text generation models with ENT improves generation quality over
standard training and previous soft and hard truncation methods. Furthermore,
we show that our method improves the robustness of models against two of the
most detrimental types of noise in machine translation, resulting in an
increase of more than 2 BLEU points over the MLE baseline when up to 50% of
noise is added to the data.
| [
{
"created": "Mon, 2 Oct 2023 01:30:27 GMT",
"version": "v1"
},
{
"created": "Mon, 18 Mar 2024 19:28:38 GMT",
"version": "v2"
}
] | 2024-03-20 | [
[
"Li",
"Tianjian",
""
],
[
"Xu",
"Haoran",
""
],
[
"Koehn",
"Philipp",
""
],
[
"Khashabi",
"Daniel",
""
],
[
"Murray",
"Kenton",
""
]
] | Text generation models are notoriously vulnerable to errors in the training data. With the wide-spread availability of massive amounts of web-crawled data becoming more commonplace, how can we enhance the robustness of models trained on a massive amount of noisy web-crawled text? In our work, we propose Error Norm Truncation (ENT), a robust enhancement method to the standard training objective that truncates noisy data. Compared to methods that only use the negative log-likelihood loss to estimate data quality, our method provides a more accurate estimation by considering the distribution of non-target tokens, which is often overlooked by previous work. Through comprehensive experiments across language modeling, machine translation, and text summarization, we show that equipping text generation models with ENT improves generation quality over standard training and previous soft and hard truncation methods. Furthermore, we show that our method improves the robustness of models against two of the most detrimental types of noise in machine translation, resulting in an increase of more than 2 BLEU points over the MLE baseline when up to 50% of noise is added to the data. |
1203.5742 | Marin\^es Guerreiro | Raul Antonio Ferraz, Marin\^es Guerreiro, and C\'esar Polcino Milies | G-equivalence in group algebras and minimal abelian codes | 8 pages, 4 tables | null | null | null | cs.IT math.GR math.IT math.RA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Let G be a finite abelian group and F a field such that char(F) does not
divide |G|. Denote by FG the group algebra of G over F. A (semisimple) abelian
code is an ideal of FG. Two codes I and J of FG are G-equivalent if there
exists an automorphism of G whose linear extension to FG maps I onto J. In this
paper we give a necessary and sufficient condition for minimal abelian codes to
be G-equivalent and show how to correct some results in the literature.
| [
{
"created": "Mon, 26 Mar 2012 17:43:03 GMT",
"version": "v1"
}
] | 2012-03-27 | [
[
"Ferraz",
"Raul Antonio",
""
],
[
"Guerreiro",
"Marinês",
""
],
[
"Milies",
"César Polcino",
""
]
] | Let G be a finite abelian group and F a field such that char(F) does not divide |G|. Denote by FG the group algebra of G over F. A (semisimple) abelian code is an ideal of FG. Two codes I and J of FG are G-equivalent if there exists an automorphism of G whose linear extension to FG maps I onto J. In this paper we give a necessary and sufficient condition for minimal abelian codes to be G-equivalent and show how to correct some results in the literature. |
2102.03443 | Jesus Tordesillas Torres | Andrea Tagliabue, Jesus Tordesillas, Xiaoyi Cai, Angel
Santamaria-Navarro, Jonathan P. How, Luca Carlone, Ali-akbar Agha-mohammadi | LION: Lidar-Inertial Observability-Aware Navigator for Vision-Denied
Environments | 2020 International Symposium on Experimental Robotics (ISER 2020) | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | State estimation for robots navigating in GPS-denied and
perceptually-degraded environments, such as underground tunnels, mines and
planetary subsurface voids, remains challenging in robotics. Towards this goal,
we present LION (Lidar-Inertial Observability-Aware Navigator), which is part
of the state estimation framework developed by the team CoSTAR for the DARPA
Subterranean Challenge, where the team achieved second and first places in the
Tunnel and Urban circuits in August 2019 and February 2020, respectively. LION
provides high-rate odometry estimates by fusing high-frequency inertial data
from an IMU and low-rate relative pose estimates from a lidar via a fixed-lag
sliding window smoother. LION does not require knowledge of relative
positioning between lidar and IMU, as the extrinsic calibration is estimated
online. In addition, LION is able to self-assess its performance using an
observability metric that evaluates whether the pose estimate is geometrically
ill-constrained. Odometry and confidence estimates are used by HeRO, a
supervisory algorithm that provides robust estimates by switching between
different odometry sources. In this paper we benchmark the performance of LION
in perceptually-degraded subterranean environments, demonstrating its high
technology readiness level for deployment in the field.
| [
{
"created": "Fri, 5 Feb 2021 23:12:29 GMT",
"version": "v1"
}
] | 2021-02-09 | [
[
"Tagliabue",
"Andrea",
""
],
[
"Tordesillas",
"Jesus",
""
],
[
"Cai",
"Xiaoyi",
""
],
[
"Santamaria-Navarro",
"Angel",
""
],
[
"How",
"Jonathan P.",
""
],
[
"Carlone",
"Luca",
""
],
[
"Agha-mohammadi",
"Ali-akbar",
""
]
] | State estimation for robots navigating in GPS-denied and perceptually-degraded environments, such as underground tunnels, mines and planetary subsurface voids, remains challenging in robotics. Towards this goal, we present LION (Lidar-Inertial Observability-Aware Navigator), which is part of the state estimation framework developed by the team CoSTAR for the DARPA Subterranean Challenge, where the team achieved second and first places in the Tunnel and Urban circuits in August 2019 and February 2020, respectively. LION provides high-rate odometry estimates by fusing high-frequency inertial data from an IMU and low-rate relative pose estimates from a lidar via a fixed-lag sliding window smoother. LION does not require knowledge of relative positioning between lidar and IMU, as the extrinsic calibration is estimated online. In addition, LION is able to self-assess its performance using an observability metric that evaluates whether the pose estimate is geometrically ill-constrained. Odometry and confidence estimates are used by HeRO, a supervisory algorithm that provides robust estimates by switching between different odometry sources. In this paper we benchmark the performance of LION in perceptually-degraded subterranean environments, demonstrating its high technology readiness level for deployment in the field. |
1805.00780 | Maren Awiszus | Maren Awiszus, Stella Gra{\ss}hof, Felix Kuhnke, J\"orn Ostermann | Unsupervised Features for Facial Expression Intensity Estimation over
Time | Accepted for CVPR 2018 Workshop Track | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The diversity of facial shapes and motions among persons is one of the
greatest challenges for automatic analysis of facial expressions. In this
paper, we propose a feature describing expression intensity over time, while
being invariant to person and the type of performed expression. Our feature is
a weighted combination of the dynamics of multiple points adapted to the
overall expression trajectory. We evaluate our method on several tasks all
related to temporal analysis of facial expression. The proposed feature is
compared to a state-of-the-art method for expression intensity estimation,
which it outperforms. We use our proposed feature to temporally align multiple
sequences of recorded 3D facial expressions. Furthermore, we show how our
feature can be used to reveal person-specific differences in performances of
facial expressions. Additionally, we apply our feature to identify the local
changes in face video sequences based on action unit labels. For all the
experiments our feature proves to be robust against noise and outliers, making
it applicable to a variety of applications for analysis of facial movements.
| [
{
"created": "Wed, 2 May 2018 13:12:05 GMT",
"version": "v1"
},
{
"created": "Thu, 3 May 2018 07:15:26 GMT",
"version": "v2"
}
] | 2018-05-04 | [
[
"Awiszus",
"Maren",
""
],
[
"Graßhof",
"Stella",
""
],
[
"Kuhnke",
"Felix",
""
],
[
"Ostermann",
"Jörn",
""
]
] | The diversity of facial shapes and motions among persons is one of the greatest challenges for automatic analysis of facial expressions. In this paper, we propose a feature describing expression intensity over time, while being invariant to person and the type of performed expression. Our feature is a weighted combination of the dynamics of multiple points adapted to the overall expression trajectory. We evaluate our method on several tasks all related to temporal analysis of facial expression. The proposed feature is compared to a state-of-the-art method for expression intensity estimation, which it outperforms. We use our proposed feature to temporally align multiple sequences of recorded 3D facial expressions. Furthermore, we show how our feature can be used to reveal person-specific differences in performances of facial expressions. Additionally, we apply our feature to identify the local changes in face video sequences based on action unit labels. For all the experiments our feature proves to be robust against noise and outliers, making it applicable to a variety of applications for analysis of facial movements. |
2301.05892 | Pau Gall\'es Ravent\'os | Pau Gall\'es, Katalin Takats and Javier Marin | Object Detection performance variation on compressed satellite image
datasets with iquaflow | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | A lot of work has been done to reach the best possible performance of
predictive models on images. There are fewer studies about the resilience of
these models when they are trained on image datasets that suffer modifications
altering their original quality. Yet this is a common problem that is often
encountered in the industry. A good example of that is with earth observation
satellites that are capturing many images. The energy and time of connection to
the earth of an orbiting satellite are limited and must be carefully used. An
approach to mitigate that is to compress the images on board before
downloading. The compression can be regulated depending on the intended usage
of the image and the requirements of this application. We present a new
software tool with the name iquaflow that is designed to study image quality
and model performance variation given an alteration of the image dataset.
Furthermore, we do a showcase study about oriented object detection models
adoption on a public image dataset DOTA Xia_2018_CVPR given different
compression levels. The optimal compression point is found and the usefulness
of iquaflow becomes evident.
| [
{
"created": "Sat, 14 Jan 2023 11:20:27 GMT",
"version": "v1"
},
{
"created": "Wed, 18 Jan 2023 14:21:07 GMT",
"version": "v2"
}
] | 2023-01-19 | [
[
"Gallés",
"Pau",
""
],
[
"Takats",
"Katalin",
""
],
[
"Marin",
"Javier",
""
]
] | A lot of work has been done to reach the best possible performance of predictive models on images. There are fewer studies about the resilience of these models when they are trained on image datasets that suffer modifications altering their original quality. Yet this is a common problem that is often encountered in the industry. A good example of that is with earth observation satellites that are capturing many images. The energy and time of connection to the earth of an orbiting satellite are limited and must be carefully used. An approach to mitigate that is to compress the images on board before downloading. The compression can be regulated depending on the intended usage of the image and the requirements of this application. We present a new software tool with the name iquaflow that is designed to study image quality and model performance variation given an alteration of the image dataset. Furthermore, we do a showcase study about oriented object detection models adoption on a public image dataset DOTA Xia_2018_CVPR given different compression levels. The optimal compression point is found and the usefulness of iquaflow becomes evident. |
1504.03212 | Carola Doerr | Benjamin Doerr and Carola Doerr | Optimal Parameter Choices Through Self-Adjustment: Applying the 1/5-th
Rule in Discrete Settings | This is the full version of a paper that is to appear at GECCO 2015 | null | null | null | cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While evolutionary algorithms are known to be very successful for a broad
range of applications, the algorithm designer is often left with many
algorithmic choices, for example, the size of the population, the mutation
rates, and the crossover rates of the algorithm. These parameters are known to
have a crucial influence on the optimization time, and thus need to be chosen
carefully, a task that often requires substantial efforts. Moreover, the
optimal parameters can change during the optimization process. It is therefore
of great interest to design mechanisms that dynamically choose best-possible
parameters. An example for such an update mechanism is the one-fifth success
rule for step-size adaption in evolutionary strategies. While in continuous
domains this principle is well understood also from a mathematical point of
view, no comparable theory is available for problems in discrete domains.
In this work we show that the one-fifth success rule can be effective also in
discrete settings. We regard the $(1+(\lambda,\lambda))$~GA proposed in
[Doerr/Doerr/Ebel: From black-box complexity to designing new genetic
algorithms, TCS 2015]. We prove that if its population size is chosen according
to the one-fifth success rule then the expected optimization time on
\textsc{OneMax} is linear. This is better than what \emph{any} static
population size $\lambda$ can achieve and is asymptotically optimal also among
all adaptive parameter choices.
| [
{
"created": "Mon, 13 Apr 2015 15:16:00 GMT",
"version": "v1"
}
] | 2015-04-14 | [
[
"Doerr",
"Benjamin",
""
],
[
"Doerr",
"Carola",
""
]
] | While evolutionary algorithms are known to be very successful for a broad range of applications, the algorithm designer is often left with many algorithmic choices, for example, the size of the population, the mutation rates, and the crossover rates of the algorithm. These parameters are known to have a crucial influence on the optimization time, and thus need to be chosen carefully, a task that often requires substantial efforts. Moreover, the optimal parameters can change during the optimization process. It is therefore of great interest to design mechanisms that dynamically choose best-possible parameters. An example for such an update mechanism is the one-fifth success rule for step-size adaption in evolutionary strategies. While in continuous domains this principle is well understood also from a mathematical point of view, no comparable theory is available for problems in discrete domains. In this work we show that the one-fifth success rule can be effective also in discrete settings. We regard the $(1+(\lambda,\lambda))$~GA proposed in [Doerr/Doerr/Ebel: From black-box complexity to designing new genetic algorithms, TCS 2015]. We prove that if its population size is chosen according to the one-fifth success rule then the expected optimization time on \textsc{OneMax} is linear. This is better than what \emph{any} static population size $\lambda$ can achieve and is asymptotically optimal also among all adaptive parameter choices. |
2012.14592 | Hazem Torfah | Rayna Dimitrova, Bernd Finkbeiner, Hazem Torfah | Synthesizing Approximate Implementations for Unrealizable Specifications | Published at CAV 2019 | null | 10.1007/978-3-030-25540-4_13 | null | cs.LO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The unrealizability of a specification is often due to the assumption that
the behavior of the environment is unrestricted. In this paper, we present
algorithms for synthesis in bounded environments, where the environment can
only generate input sequences that are ultimately periodic words (lassos) with
finite representations of bounded size. We provide automata-theoretic and
symbolic approaches for solving this synthesis problem, and also study the
synthesis of approximative implementations from unrealizable specifications.
Such implementations may violate the specification in general, but are
guaranteed to satisfy the specification on at least a specified portion of the
bounded-size lassos. We evaluate the algorithms on different arbiter
specifications.
| [
{
"created": "Tue, 29 Dec 2020 03:57:45 GMT",
"version": "v1"
}
] | 2021-01-01 | [
[
"Dimitrova",
"Rayna",
""
],
[
"Finkbeiner",
"Bernd",
""
],
[
"Torfah",
"Hazem",
""
]
] | The unrealizability of a specification is often due to the assumption that the behavior of the environment is unrestricted. In this paper, we present algorithms for synthesis in bounded environments, where the environment can only generate input sequences that are ultimately periodic words (lassos) with finite representations of bounded size. We provide automata-theoretic and symbolic approaches for solving this synthesis problem, and also study the synthesis of approximative implementations from unrealizable specifications. Such implementations may violate the specification in general, but are guaranteed to satisfy the specification on at least a specified portion of the bounded-size lassos. We evaluate the algorithms on different arbiter specifications. |
1811.05250 | Pan Zhou | Pan Zhou, Wenwen Yang, Wei Chen, Yanfeng Wang, Jia Jia | Modality Attention for End-to-End Audio-visual Speech Recognition | accepted by ICASSP2019 | null | null | null | cs.CL cs.CV cs.SD eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Audio-visual speech recognition (AVSR) system is thought to be one of the
most promising solutions for robust speech recognition, especially in noisy
environments. In this paper, we propose a novel multimodal attention based
method for audio-visual speech recognition which could automatically learn the
fused representation from both modalities based on their importance. Our method
is realized using state-of-the-art sequence-to-sequence (Seq2seq)
architectures. Experimental results show that relative improvements from 2% up
to 36% over the auditory modality alone are obtained depending on the different
signal-to-noise-ratio (SNR). Compared to the traditional feature concatenation
methods, our proposed approach can achieve better recognition performance under
both clean and noisy conditions. We believe modality attention based end-to-end
method can be easily generalized to other multimodal tasks with correlated
information.
| [
{
"created": "Tue, 13 Nov 2018 12:28:03 GMT",
"version": "v1"
},
{
"created": "Tue, 23 Apr 2019 04:21:06 GMT",
"version": "v2"
}
] | 2019-04-24 | [
[
"Zhou",
"Pan",
""
],
[
"Yang",
"Wenwen",
""
],
[
"Chen",
"Wei",
""
],
[
"Wang",
"Yanfeng",
""
],
[
"Jia",
"Jia",
""
]
] | Audio-visual speech recognition (AVSR) system is thought to be one of the most promising solutions for robust speech recognition, especially in noisy environments. In this paper, we propose a novel multimodal attention based method for audio-visual speech recognition which could automatically learn the fused representation from both modalities based on their importance. Our method is realized using state-of-the-art sequence-to-sequence (Seq2seq) architectures. Experimental results show that relative improvements from 2% up to 36% over the auditory modality alone are obtained depending on the different signal-to-noise-ratio (SNR). Compared to the traditional feature concatenation methods, our proposed approach can achieve better recognition performance under both clean and noisy conditions. We believe modality attention based end-to-end method can be easily generalized to other multimodal tasks with correlated information. |
1705.05684 | Rafael Pereira Pires | Rafael Pires and Daniel Gavril and Pascal Felber and Emanuel Onica and
Marcelo Pasin | A lightweight MapReduce framework for secure processing with SGX | 8 pages WACC@CCGRID International Workshop on Assured Cloud Computing
and QoS aware Big Data | 2017 17th IEEE/ACM International Symposium on Cluster, Cloud and
Grid Computing | 10.1109/CCGRID.2017.129 | null | cs.DC cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | MapReduce is a programming model used extensively for parallel data
processing in distributed environments. A wide range of algorithms were
implemented using MapReduce, from simple tasks like sorting and searching up to
complex clustering and machine learning operations. Many of these
implementations are part of services externalized to cloud infrastructures.
Over the past years, however, many concerns have been raised regarding the
security guarantees offered in such environments. Some solutions relying on
cryptography were proposed for countering threats but these typically imply a
high computational overhead. Intel, the largest manufacturer of commodity CPUs,
recently introduced SGX (software guard extensions), a set of hardware
instructions that support execution of code in an isolated secure environment.
In this paper, we explore the use of Intel SGX for providing privacy guarantees
for MapReduce operations, and based on our evaluation we conclude that it
represents a viable alternative to a cryptographic mechanism. We present
results based on the widely used k-means clustering algorithm, but our
implementation can be generalized to other applications that can be expressed
using the MapReduce model.
| [
{
"created": "Tue, 16 May 2017 12:46:38 GMT",
"version": "v1"
}
] | 2017-05-17 | [
[
"Pires",
"Rafael",
""
],
[
"Gavril",
"Daniel",
""
],
[
"Felber",
"Pascal",
""
],
[
"Onica",
"Emanuel",
""
],
[
"Pasin",
"Marcelo",
""
]
] | MapReduce is a programming model used extensively for parallel data processing in distributed environments. A wide range of algorithms were implemented using MapReduce, from simple tasks like sorting and searching up to complex clustering and machine learning operations. Many of these implementations are part of services externalized to cloud infrastructures. Over the past years, however, many concerns have been raised regarding the security guarantees offered in such environments. Some solutions relying on cryptography were proposed for countering threats but these typically imply a high computational overhead. Intel, the largest manufacturer of commodity CPUs, recently introduced SGX (software guard extensions), a set of hardware instructions that support execution of code in an isolated secure environment. In this paper, we explore the use of Intel SGX for providing privacy guarantees for MapReduce operations, and based on our evaluation we conclude that it represents a viable alternative to a cryptographic mechanism. We present results based on the widely used k-means clustering algorithm, but our implementation can be generalized to other applications that can be expressed using the MapReduce model. |
2205.09214 | Yong Niu | Jing Li, Yong Niu, Hao Wu, Bo Ai, Sheng Chen, Zhiyong Feng, Zhangdui
Zhong, Ning Wang | Mobility Support for Millimeter Wave Communications: Opportunities and
Challenges | 25 pages,11 figures,journal | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Millimeter-wave (mmWave) communication technology offers a potential and
promising solution to support 5G and B5G wireless networks in dynamic scenarios
and applications. However, mobility introduces many challenges as well as
opportunities to mmWave applications. To address these problems, we conduct a
survey of the opportunities and technologies to support mmWave communications
in mobile scenarios. Firstly, we summarize the mobile scenarios where mmWave
communications are exploited, including indoor wireless local area network
(WLAN) or wireless personal area network (WPAN), cellular access,
vehicle-to-everything (V2X), high speed train (HST), unmanned aerial vehicle
(UAV), and the new space-air-ground-sea communication scenarios. Then, to
address users' mobility impact on the system performance in different
application scenarios, we introduce several representative mobility models in
mmWave systems, including human mobility, vehicular mobility, high speed train
mobility and ship mobility. Next we survey the key challenges and existing
solutions to mmWave applications, such as channel modeling, channel estimation,
anti-blockage, and capacity improvement. Lastly, we discuss the open issues
concerning mobility-aware mmWave communications that deserve further
investigation. In particular, we highlight future heterogeneous mobile
networks, dynamic resource management, artificial intelligence (AI) for
mobility and integration of geographical information, deployment of large
intelligent surface and reconfigurable antenna technology, and finally, the
evolution to Terahertz (THz) communications.
| [
{
"created": "Wed, 18 May 2022 20:59:14 GMT",
"version": "v1"
}
] | 2022-05-20 | [
[
"Li",
"Jing",
""
],
[
"Niu",
"Yong",
""
],
[
"Wu",
"Hao",
""
],
[
"Ai",
"Bo",
""
],
[
"Chen",
"Sheng",
""
],
[
"Feng",
"Zhiyong",
""
],
[
"Zhong",
"Zhangdui",
""
],
[
"Wang",
"Ning",
""
]
] | Millimeter-wave (mmWave) communication technology offers a potential and promising solution to support 5G and B5G wireless networks in dynamic scenarios and applications. However, mobility introduces many challenges as well as opportunities to mmWave applications. To address these problems, we conduct a survey of the opportunities and technologies to support mmWave communications in mobile scenarios. Firstly, we summarize the mobile scenarios where mmWave communications are exploited, including indoor wireless local area network (WLAN) or wireless personal area network (WPAN), cellular access, vehicle-to-everything (V2X), high speed train (HST), unmanned aerial vehicle (UAV), and the new space-air-ground-sea communication scenarios. Then, to address users' mobility impact on the system performance in different application scenarios, we introduce several representative mobility models in mmWave systems, including human mobility, vehicular mobility, high speed train mobility and ship mobility. Next we survey the key challenges and existing solutions to mmWave applications, such as channel modeling, channel estimation, anti-blockage, and capacity improvement. Lastly, we discuss the open issues concerning mobility-aware mmWave communications that deserve further investigation. In particular, we highlight future heterogeneous mobile networks, dynamic resource management, artificial intelligence (AI) for mobility and integration of geographical information, deployment of large intelligent surface and reconfigurable antenna technology, and finally, the evolution to Terahertz (THz) communications. |
2408.05715 | Zhi-Cun Lyu | Zhi-Cun Lyu, Xin-Ye Li, Zheng Xie, Ming Li | Top Pass: Improve Code Generation by Pass@k-Maximized Code Ranking | Accepted by Frontier of Computer Science | null | null | null | cs.AI cs.SE | http://creativecommons.org/licenses/by/4.0/ | Code generation has been greatly enhanced by the profound advancements in
Large Language Models (LLMs) recently. Nevertheless, such LLM-based code
generation approaches still struggle to generate error-free code in a few tries
when faced with complex problems. To address this, the prevailing strategy is
to sample a huge number of candidate programs, in the hope that any one of them
could work. However, users of code generation systems usually expect to find a
correct program by reviewing or testing only a small number of code candidates.
Otherwise, the system would be unhelpful. In this paper, we propose Top Pass, a
code ranking approach that identifies potentially correct solutions from a large
number of candidates. Top Pass directly optimizes the pass@k loss function,
enhancing the quality at the top of the candidate list. This enables the user
to find the correct solution within as few tries as possible. Experimental
results on four benchmarks indicate that our Top Pass method enhances the
usability of code generation models by producing better ranking results,
particularly achieving a 32.9\% relative improvement in pass@1 on CodeContests
when compared to the state-of-the-art ranking method.
| [
{
"created": "Sun, 11 Aug 2024 07:53:51 GMT",
"version": "v1"
}
] | 2024-08-13 | [
[
"Lyu",
"Zhi-Cun",
""
],
[
"Li",
"Xin-Ye",
""
],
[
"Xie",
"Zheng",
""
],
[
"Li",
"Ming",
""
]
] | Code generation has been greatly enhanced by recent profound advancements in Large Language Models (LLMs). Nevertheless, such LLM-based code generation approaches still struggle to generate error-free code in a few tries when faced with complex problems. To address this, the prevailing strategy is to sample a huge number of candidate programs, in the hope that any one of them could work. However, users of code generation systems usually expect to find a correct program by reviewing or testing only a small number of code candidates. Otherwise, the system would be unhelpful. In this paper, we propose Top Pass, a code ranking approach that identifies potentially correct solutions from a large number of candidates. Top Pass directly optimizes the pass@k loss function, enhancing the quality at the top of the candidate list. This enables the user to find the correct solution within as few tries as possible. Experimental results on four benchmarks indicate that our Top Pass method enhances the usability of code generation models by producing better ranking results, particularly achieving a 32.9\% relative improvement in pass@1 on CodeContests when compared to the state-of-the-art ranking method. |
2002.05511 | Sanna Wager C | Sanna Wager, George Tzanetakis, Cheng-i Wang, Minje Kim | Deep Autotuner: a Pitch Correcting Network for Singing Performances | arXiv admin note: text overlap with arXiv:1902.00956 | IEEE International Conference on Acoustics, Speech, and Signal
Processing (ICASSP), 2020 | null | null | cs.SD cs.LG eess.AS stat.ML | http://creativecommons.org/publicdomain/zero/1.0/ | We introduce a data-driven approach to automatic pitch correction of solo
singing performances. The proposed approach predicts note-wise pitch shifts
from the relationship between the respective spectrograms of the singing and
accompaniment. This approach differs from commercial systems, where vocal track
notes are usually shifted to be centered around pitches in a user-defined
score, or mapped to the closest pitch among the twelve equal-tempered scale
degrees. The proposed system treats pitch as a continuous value rather than
relying on a set of discretized notes found in musical scores, thus allowing
for improvisation and harmonization in the singing performance. We train our
neural network model using a dataset of 4,702 amateur karaoke performances
selected for good intonation. Our model is trained on both incorrect
intonation, for which it learns a correction, and intentional pitch variation,
which it learns to preserve. The proposed deep neural network with gated
recurrent units on top of convolutional layers shows promising performance on
the real-world score-free singing pitch correction task of autotuning.
| [
{
"created": "Wed, 12 Feb 2020 01:33:56 GMT",
"version": "v1"
}
] | 2020-02-25 | [
[
"Wager",
"Sanna",
""
],
[
"Tzanetakis",
"George",
""
],
[
"Wang",
"Cheng-i",
""
],
[
"Kim",
"Minje",
""
]
] | We introduce a data-driven approach to automatic pitch correction of solo singing performances. The proposed approach predicts note-wise pitch shifts from the relationship between the respective spectrograms of the singing and accompaniment. This approach differs from commercial systems, where vocal track notes are usually shifted to be centered around pitches in a user-defined score, or mapped to the closest pitch among the twelve equal-tempered scale degrees. The proposed system treats pitch as a continuous value rather than relying on a set of discretized notes found in musical scores, thus allowing for improvisation and harmonization in the singing performance. We train our neural network model using a dataset of 4,702 amateur karaoke performances selected for good intonation. Our model is trained on both incorrect intonation, for which it learns a correction, and intentional pitch variation, which it learns to preserve. The proposed deep neural network with gated recurrent units on top of convolutional layers shows promising performance on the real-world score-free singing pitch correction task of autotuning. |
0803.0515 | Christopher Pearson | Christopher Pearson, Celina Gibbs, Yvonne Coady | Intuitive Source Code Visualization Tools for Improving Student
Comprehension: BRICS | null | null | null | null | cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Even relatively simple code analysis can be a daunting task for many first
year students. Perceived complexity, coupled with foreign and harsh syntax,
often outstrips the ability for students to take in what they are seeing in
terms of their verbal memory. That is, first year students often lack the
experience to encode critical building blocks in source code, and their
interrelationships, into their own words. We believe this argues for the need
for IDEs to provide additional support for representations that would appeal
directly to visual memory. In this paper, we examine this need for intuitive
source code visualization tools that are easily accessible to novice
programmers, discuss the requirements for such a tool, and suggest a novel idea
that takes advantage of human peripheral vision to achieve stronger overall
code structure awareness.
| [
{
"created": "Tue, 4 Mar 2008 18:46:49 GMT",
"version": "v1"
}
] | 2008-03-05 | [
[
"Pearson",
"Christopher",
""
],
[
"Gibbs",
"Celina",
""
],
[
"Coady",
"Yvonne",
""
]
] | Even relatively simple code analysis can be a daunting task for many first year students. Perceived complexity, coupled with foreign and harsh syntax, often outstrips the ability for students to take in what they are seeing in terms of their verbal memory. That is, first year students often lack the experience to encode critical building blocks in source code, and their interrelationships, into their own words. We believe this argues for the need for IDEs to provide additional support for representations that would appeal directly to visual memory. In this paper, we examine this need for intuitive source code visualization tools that are easily accessible to novice programmers, discuss the requirements for such a tool, and suggest a novel idea that takes advantage of human peripheral vision to achieve stronger overall code structure awareness. |
2407.15131 | Jun Young Park | Junyoung Park, Myeonggu Kang, Yunki Han, Yanggon Kim, Jaekang Shin,
Lee-Sup Kim | Token-Picker: Accelerating Attention in Text Generation with Minimized
Memory Transfer via Probability Estimation | To appear in the proceedings of 61st Design Automation Conference
(DAC) | null | null | null | cs.AR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The attention mechanism in text generation is memory-bounded due to its
sequential characteristics. Therefore, off-chip memory accesses should be
minimized for faster execution. Although previous methods addressed this by
pruning unimportant tokens, they fall short in selectively removing tokens with
near-zero attention probabilities in each instance. Our method estimates the
probability before the softmax function, effectively removing low probability
tokens and achieving a 12.1x pruning ratio without fine-tuning. Additionally,
we present a hardware design supporting seamless on-demand off-chip access. Our
approach shows a 2.6x reduction in memory accesses, leading to an average 2.3x
speedup and a 2.4x improvement in energy efficiency.
| [
{
"created": "Sun, 21 Jul 2024 11:56:54 GMT",
"version": "v1"
}
] | 2024-07-23 | [
[
"Park",
"Junyoung",
""
],
[
"Kang",
"Myeonggu",
""
],
[
"Han",
"Yunki",
""
],
[
"Kim",
"Yanggon",
""
],
[
"Shin",
"Jaekang",
""
],
[
"Kim",
"Lee-Sup",
""
]
] | The attention mechanism in text generation is memory-bounded due to its sequential characteristics. Therefore, off-chip memory accesses should be minimized for faster execution. Although previous methods addressed this by pruning unimportant tokens, they fall short in selectively removing tokens with near-zero attention probabilities in each instance. Our method estimates the probability before the softmax function, effectively removing low probability tokens and achieving a 12.1x pruning ratio without fine-tuning. Additionally, we present a hardware design supporting seamless on-demand off-chip access. Our approach shows a 2.6x reduction in memory accesses, leading to an average 2.3x speedup and a 2.4x improvement in energy efficiency. |
1901.11010 | Thorsten Wissmann | Mirai Ikebuchi and Keisuke Nakano | On properties of $B$-terms | Journal version in Logical Methods in Computer Science. arXiv admin
note: substantial text overlap with arXiv:1703.10938 | Logical Methods in Computer Science, Volume 16, Issue 2 (June 2,
2020) lmcs:5156 | 10.23638/LMCS-16(2:8)2020 | null | cs.LO | http://creativecommons.org/licenses/by/4.0/ | $B$-terms are built from the $B$ combinator alone defined by $B\equiv\lambda
fgx. f(g~x)$, which is well known as a function composition operator. This
paper investigates an interesting property of $B$-terms, that is, whether
repetitive right applications of a $B$-term cycle or not. We discuss
conditions for $B$-terms to have and not to have the property through a sound
and complete equational axiomatization. Specifically, we give examples of
$B$-terms which have the cyclic property and show that there are infinitely
many $B$-terms which do not have the property. Also, we introduce another
interesting property about a canonical representation of $B$-terms that is
useful to detect cycles, or equivalently, to prove the cyclic property, with an
efficient algorithm.
| [
{
"created": "Wed, 30 Jan 2019 04:24:03 GMT",
"version": "v1"
},
{
"created": "Tue, 12 Nov 2019 08:35:03 GMT",
"version": "v2"
},
{
"created": "Mon, 1 Jun 2020 11:29:40 GMT",
"version": "v3"
}
] | 2023-06-22 | [
[
"Ikebuchi",
"Mirai",
""
],
[
"Nakano",
"Keisuke",
""
]
] | $B$-terms are built from the $B$ combinator alone defined by $B\equiv\lambda fgx. f(g~x)$, which is well known as a function composition operator. This paper investigates an interesting property of $B$-terms, that is, whether repetitive right applications of a $B$-term cycle or not. We discuss conditions for $B$-terms to have and not to have the property through a sound and complete equational axiomatization. Specifically, we give examples of $B$-terms which have the cyclic property and show that there are infinitely many $B$-terms which do not have the property. Also, we introduce another interesting property about a canonical representation of $B$-terms that is useful to detect cycles, or equivalently, to prove the cyclic property, with an efficient algorithm. |
1704.03672 | Mario Gleirscher | Mario Gleirscher and Carmen Carlan | Arguing from Hazard Analysis in Safety Cases: A Modular Argument Pattern | null | null | 10.1109/HASE.2017.15 | null | cs.SE cs.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We observed that safety arguments are prone to stay too abstract, e.g.
solutions refer to large packages, argument strategies to complex reasoning
steps, contexts and assumptions lack traceability. These issues can reduce the
confidence we require of such arguments. In this paper, we investigate the
construction of confident arguments from (i) hazard analysis (HA) results and
(ii) the design of safety measures, i.e., both used for confidence evaluation.
We present an argument pattern integrating three HA techniques, i.e., FTA,
FMEA, and STPA, as well as the reactions on the results of these analyses,
i.e., safety requirements and design increments. We provide an example of how
our pattern can help in argument construction and discuss steps towards using
our pattern in formal analysis and computer-assisted construction of safety
cases.
| [
{
"created": "Wed, 12 Apr 2017 09:41:30 GMT",
"version": "v1"
},
{
"created": "Tue, 20 Feb 2018 20:03:55 GMT",
"version": "v2"
}
] | 2021-01-29 | [
[
"Gleirscher",
"Mario",
""
],
[
"Carlan",
"Carmen",
""
]
] | We observed that safety arguments are prone to stay too abstract, e.g. solutions refer to large packages, argument strategies to complex reasoning steps, contexts and assumptions lack traceability. These issues can reduce the confidence we require of such arguments. In this paper, we investigate the construction of confident arguments from (i) hazard analysis (HA) results and (ii) the design of safety measures, i.e., both used for confidence evaluation. We present an argument pattern integrating three HA techniques, i.e., FTA, FMEA, and STPA, as well as the reactions on the results of these analyses, i.e., safety requirements and design increments. We provide an example of how our pattern can help in argument construction and discuss steps towards using our pattern in formal analysis and computer-assisted construction of safety cases. |
1909.01432 | Kai Zhou | Kai Zhou, Tomasz P. Michalak, and Yevgeniy Vorobeychik | Adversarial Robustness of Similarity-Based Link Prediction | ICDM 2019 | null | null | null | cs.AI cs.CR cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Link prediction is one of the fundamental problems in social network
analysis. A common set of techniques for link prediction rely on similarity
metrics which use the topology of the observed subnetwork to quantify the
likelihood of unobserved links. Recently, similarity metrics for link
prediction have been shown to be vulnerable to attacks whereby observations
about the network are adversarially modified to hide target links. We propose a
novel approach for increasing robustness of similarity-based link prediction by
endowing the analyst with a restricted set of reliable queries which accurately
measure the existence of queried links. The analyst aims to robustly predict a
collection of possible links by optimally allocating the reliable queries. We
formalize the analyst problem as a Bayesian Stackelberg game in which they
first choose the reliable queries, followed by an adversary who deletes a
subset of links among the remaining (unreliable) queries by the analyst. The
analyst in our model is uncertain about the particular target link the
adversary attempts to hide, whereas the adversary has full information about
the analyst and the network. Focusing on similarity metrics using only local
information, we show that the problem is NP-Hard for both players, and devise
two principled and efficient approaches for solving it approximately. Extensive
experiments with real and synthetic networks demonstrate the effectiveness of
our approach.
| [
{
"created": "Tue, 3 Sep 2019 20:20:45 GMT",
"version": "v1"
}
] | 2019-09-05 | [
[
"Zhou",
"Kai",
""
],
[
"Michalak",
"Tomasz P.",
""
],
[
"Vorobeychik",
"Yevgeniy",
""
]
] | Link prediction is one of the fundamental problems in social network analysis. A common set of techniques for link prediction rely on similarity metrics which use the topology of the observed subnetwork to quantify the likelihood of unobserved links. Recently, similarity metrics for link prediction have been shown to be vulnerable to attacks whereby observations about the network are adversarially modified to hide target links. We propose a novel approach for increasing robustness of similarity-based link prediction by endowing the analyst with a restricted set of reliable queries which accurately measure the existence of queried links. The analyst aims to robustly predict a collection of possible links by optimally allocating the reliable queries. We formalize the analyst problem as a Bayesian Stackelberg game in which they first choose the reliable queries, followed by an adversary who deletes a subset of links among the remaining (unreliable) queries by the analyst. The analyst in our model is uncertain about the particular target link the adversary attempts to hide, whereas the adversary has full information about the analyst and the network. Focusing on similarity metrics using only local information, we show that the problem is NP-Hard for both players, and devise two principled and efficient approaches for solving it approximately. Extensive experiments with real and synthetic networks demonstrate the effectiveness of our approach. |
2112.12597 | Mubin Ul Haque | Mubin Ul Haque and M. Ali Babar | Well Begun is Half Done: An Empirical Study of Exploitability & Impact
of Base-Image Vulnerabilities | null | null | null | cs.CR cs.SE | http://creativecommons.org/licenses/by/4.0/ | Container technology (e.g., Docker) is being widely adopted for deploying
software infrastructures or applications in the form of container images.
Security vulnerabilities in the container images are a primary concern for
developing containerized software. Exploitation of the vulnerabilities could
result in disastrous impact, such as loss of confidentiality, integrity, and
availability of containerized software. Understanding the exploitability and
impact characteristics of vulnerabilities can help in securing the
configuration of containerized software. However, there is a lack of research
aimed at empirically identifying and understanding the exploitability and
impact of vulnerabilities in container images. We carried out an empirical
study to investigate the exploitability and impact of security vulnerabilities
in base-images and their prevalence in open-source containerized software. We
considered base-images since container images are built from base-images that
provide all the core functionalities to build and operate containerized
software. We discovered and characterized the exploitability and impact of
security vulnerabilities in 261 base-images, which are the origin of 4,681
actively maintained official container images in the largest container
registry, i.e., Docker Hub. To characterize the prevalence of vulnerable
base-images in real-world projects, we analysed 64,579 containerized software
projects from GitHub. Our analysis of a set of 1,983 unique base-image security
vulnerabilities revealed 13 novel findings. These findings are expected to help
developers to understand the potential security problems related to base-images
and encourage them to investigate base-images from a security perspective before
developing their applications.
| [
{
"created": "Tue, 21 Dec 2021 07:41:02 GMT",
"version": "v1"
}
] | 2021-12-24 | [
[
"Haque",
"Mubin Ul",
""
],
[
"Babar",
"M. Ali",
""
]
] | Container technology (e.g., Docker) is being widely adopted for deploying software infrastructures or applications in the form of container images. Security vulnerabilities in the container images are a primary concern for developing containerized software. Exploitation of the vulnerabilities could result in disastrous impact, such as loss of confidentiality, integrity, and availability of containerized software. Understanding the exploitability and impact characteristics of vulnerabilities can help in securing the configuration of containerized software. However, there is a lack of research aimed at empirically identifying and understanding the exploitability and impact of vulnerabilities in container images. We carried out an empirical study to investigate the exploitability and impact of security vulnerabilities in base-images and their prevalence in open-source containerized software. We considered base-images since container images are built from base-images that provide all the core functionalities to build and operate containerized software. We discovered and characterized the exploitability and impact of security vulnerabilities in 261 base-images, which are the origin of 4,681 actively maintained official container images in the largest container registry, i.e., Docker Hub. To characterize the prevalence of vulnerable base-images in real-world projects, we analysed 64,579 containerized software projects from GitHub. Our analysis of a set of 1,983 unique base-image security vulnerabilities revealed 13 novel findings. These findings are expected to help developers to understand the potential security problems related to base-images and encourage them to investigate base-images from a security perspective before developing their applications. |
1711.09057 | Alejandro Torre\~no | Alejandro Torre\~no, Eva Onaindia, Anton\'in Komenda, Michal
\v{S}tolba | Cooperative Multi-Agent Planning: A Survey | 34 pages, 4 figures, 4 tables | ACM Computing Surveys, Volume 50, Number 6, Article 84.
Publication date: November 2017 | 10.1145/3128584 | null | cs.AI cs.MA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cooperative multi-agent planning (MAP) is a relatively recent research field
that combines technologies, algorithms and techniques developed by the
Artificial Intelligence Planning and Multi-Agent Systems communities. While
planning has been generally treated as a single-agent task, MAP generalizes
this concept by considering multiple intelligent agents that work cooperatively
to develop a course of action that satisfies the goals of the group.
This paper reviews the most relevant approaches to MAP, putting the focus on
the solvers that took part in the 2015 Competition of Distributed and
Multi-Agent Planning, and classifies them according to their key features and
relative performance.
| [
{
"created": "Fri, 24 Nov 2017 17:43:14 GMT",
"version": "v1"
}
] | 2017-11-27 | [
[
"Torreño",
"Alejandro",
""
],
[
"Onaindia",
"Eva",
""
],
[
"Komenda",
"Antonín",
""
],
[
"Štolba",
"Michal",
""
]
] | Cooperative multi-agent planning (MAP) is a relatively recent research field that combines technologies, algorithms and techniques developed by the Artificial Intelligence Planning and Multi-Agent Systems communities. While planning has been generally treated as a single-agent task, MAP generalizes this concept by considering multiple intelligent agents that work cooperatively to develop a course of action that satisfies the goals of the group. This paper reviews the most relevant approaches to MAP, putting the focus on the solvers that took part in the 2015 Competition of Distributed and Multi-Agent Planning, and classifies them according to their key features and relative performance. |
2309.13501 | Rundong Gan | Rundong Gan, Le Wang, Xiaodong Lin | Why Trick Me: The Honeypot Traps on Decentralized Exchanges | null | null | 10.1145/3605768.3623546 | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Decentralized Exchanges (DEXs) are one of the most important infrastructures
in the world of Decentralized Finance (DeFi) and are generally considered more
reliable than centralized exchanges (CEXs). However, some well-known
decentralized exchanges (e.g., Uniswap) allow the deployment of any unaudited
ERC20 tokens, resulting in the creation of numerous honeypot traps designed to
steal traders' assets: traders can exchange valuable assets (e.g., ETH) for
fraudulent tokens in liquidity pools but are unable to exchange them back for
the original assets.
In this paper, we introduce honeypot traps on decentralized exchanges and
provide a taxonomy for these traps according to the attack effect. For
different types of traps, we design a detection scheme based on historical data
analysis and transaction simulation. We randomly select 10,000 pools from
Uniswap V2 \& V3, and then utilize our method to check these pools. Finally, we
discover 8,443 abnormal pools, which shows that honeypot traps may exist widely
in exchanges like Uniswap. Furthermore, we discuss possible mitigation and
defense strategies to protect traders' assets.
| [
{
"created": "Sat, 23 Sep 2023 23:43:41 GMT",
"version": "v1"
}
] | 2023-09-26 | [
[
"Gan",
"Rundong",
""
],
[
"Wang",
"Le",
""
],
[
"Lin",
"Xiaodong",
""
]
] | Decentralized Exchanges (DEXs) are one of the most important infrastructures in the world of Decentralized Finance (DeFi) and are generally considered more reliable than centralized exchanges (CEXs). However, some well-known decentralized exchanges (e.g., Uniswap) allow the deployment of any unaudited ERC20 tokens, resulting in the creation of numerous honeypot traps designed to steal traders' assets: traders can exchange valuable assets (e.g., ETH) for fraudulent tokens in liquidity pools but are unable to exchange them back for the original assets. In this paper, we introduce honeypot traps on decentralized exchanges and provide a taxonomy for these traps according to the attack effect. For different types of traps, we design a detection scheme based on historical data analysis and transaction simulation. We randomly select 10,000 pools from Uniswap V2 \& V3, and then utilize our method to check these pools. Finally, we discover 8,443 abnormal pools, which shows that honeypot traps may exist widely in exchanges like Uniswap. Furthermore, we discuss possible mitigation and defense strategies to protect traders' assets. |
1512.01921 | Nariman Farsad Dr. | Nariman Farsad, Weisi Guo, Chan-Byoung Chae, Andrew Eckford | Stable Distributions as Noise Models for Molecular Communication | null | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we consider diffusion-based molecular communication timing
channels. Three different timing channels are presented based on three
different modulation techniques, i.e., i) modulation of the release timing of
the information particles, ii) modulation on the time between two consecutive
information particles of the same type, and iii) modulation on the time between
two consecutive information particles of different types. We show that each
channel can be represented as an additive noise channel, where the noise
follows one of the subclasses of stable distributions. We provide expressions
for the probability density function of the noise terms, and numerical
evaluations for the probability density function and cumulative density
function. We also show that the tails are longer than Gaussian distribution, as
expected.
| [
{
"created": "Mon, 7 Dec 2015 06:06:54 GMT",
"version": "v1"
}
] | 2015-12-08 | [
[
"Farsad",
"Nariman",
""
],
[
"Guo",
"Weisi",
""
],
[
"Chae",
"Chan-Byoung",
""
],
[
"Eckford",
"Andrew",
""
]
] | In this work, we consider diffusion-based molecular communication timing channels. Three different timing channels are presented based on three different modulation techniques, i.e., i) modulation of the release timing of the information particles, ii) modulation on the time between two consecutive information particles of the same type, and iii) modulation on the time between two consecutive information particles of different types. We show that each channel can be represented as an additive noise channel, where the noise follows one of the subclasses of stable distributions. We provide expressions for the probability density function of the noise terms, and numerical evaluations for the probability density function and cumulative density function. We also show that the tails are longer than Gaussian distribution, as expected. |
2402.15215 | Meng Jiang | Meng Jiang, Keqin Bao, Jizhi Zhang, Wenjie Wang, Zhengyi Yang, Fuli
Feng, Xiangnan He | Item-side Fairness of Large Language Model-based Recommendation System | Accepted by the Proceedings of the ACM Web Conference 2024 | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recommendation systems for Web content distribution intricately connect to
the information access and exposure opportunities for vulnerable populations.
The emergence of Large Language Models-based Recommendation System (LRS) may
introduce additional societal challenges to recommendation systems due to the
inherent biases in Large Language Models (LLMs). From the perspective of
item-side fairness, there remains a lack of comprehensive investigation into
the item-side fairness of LRS given the unique characteristics of LRS compared
to conventional recommendation systems. To bridge this gap, this study examines
the property of LRS with respect to item-side fairness and reveals the
influencing factors of both historical users' interactions and inherent
semantic biases of LLMs, shedding light on the need to extend conventional
item-side fairness methods for LRS. Towards this goal, we develop a concise and
effective framework called IFairLRS to enhance the item-side fairness of an
LRS. IFairLRS covers the main stages of building an LRS with specifically
adapted strategies to calibrate the recommendations of LRS. We utilize IFairLRS
to fine-tune LLaMA, a representative LLM, on \textit{MovieLens} and
\textit{Steam} datasets, and observe significant item-side fairness
improvements. The code can be found in
https://github.com/JiangM-C/IFairLRS.git.
| [
{
"created": "Fri, 23 Feb 2024 09:24:04 GMT",
"version": "v1"
}
] | 2024-02-26 | [
[
"Jiang",
"Meng",
""
],
[
"Bao",
"Keqin",
""
],
[
"Zhang",
"Jizhi",
""
],
[
"Wang",
"Wenjie",
""
],
[
"Yang",
"Zhengyi",
""
],
[
"Feng",
"Fuli",
""
],
[
"He",
"Xiangnan",
""
]
] | Recommendation systems for Web content distribution intricately connect to the information access and exposure opportunities for vulnerable populations. The emergence of Large Language Models-based Recommendation System (LRS) may introduce additional societal challenges to recommendation systems due to the inherent biases in Large Language Models (LLMs). From the perspective of item-side fairness, there remains a lack of comprehensive investigation into the item-side fairness of LRS given the unique characteristics of LRS compared to conventional recommendation systems. To bridge this gap, this study examines the property of LRS with respect to item-side fairness and reveals the influencing factors of both historical users' interactions and inherent semantic biases of LLMs, shedding light on the need to extend conventional item-side fairness methods for LRS. Towards this goal, we develop a concise and effective framework called IFairLRS to enhance the item-side fairness of an LRS. IFairLRS covers the main stages of building an LRS with specifically adapted strategies to calibrate the recommendations of LRS. We utilize IFairLRS to fine-tune LLaMA, a representative LLM, on \textit{MovieLens} and \textit{Steam} datasets, and observe significant item-side fairness improvements. The code can be found in https://github.com/JiangM-C/IFairLRS.git. |
2112.08740 | Zhikang Wang | Zhikang Wang, Feng Zhu, Shixiang Tang, Rui Zhao, Lihuo He, Jiangning
Song | Feature Erasing and Diffusion Network for Occluded Person
Re-Identification | 10 pages, 5 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Occluded person re-identification (ReID) aims at matching occluded person
images to holistic ones across different camera views. Target Pedestrians (TP)
are usually disturbed by Non-Pedestrian Occlusions (NPO) and NonTarget
Pedestrians (NTP). Previous methods mainly focus on increasing model's
robustness against NPO while ignoring feature contamination from NTP. In this
paper, we propose a novel Feature Erasing and Diffusion Network (FED) to
simultaneously handle NPO and NTP. Specifically, NPO features are eliminated by
our proposed Occlusion Erasing Module (OEM), aided by the NPO augmentation
strategy which simulates NPO on holistic pedestrian images and generates
precise occlusion masks. Subsequently, we diffuse the
pedestrian representations with other memorized features to synthesize NTP
characteristics in the feature space which is achieved by a novel Feature
Diffusion Module (FDM) through a learnable cross attention mechanism. With the
guidance of the occlusion scores from OEM, the feature diffusion process is
mainly conducted on visible body parts, which guarantees the quality of the
synthesized NTP characteristics. By jointly optimizing OEM and FDM in our
proposed FED network, we can greatly improve the model's perception ability
towards TP and alleviate the influence of NPO and NTP. Furthermore, the
proposed FDM only works as an auxiliary module for training and will be
discarded in the inference phase, thus introducing little inference
computational overhead. Experiments on occluded and holistic person ReID
benchmarks demonstrate the superiority of FED over state-of-the-arts, where FED
achieves 86.3% Rank-1 accuracy on Occluded-REID, surpassing others by at least
4.7%.
| [
{
"created": "Thu, 16 Dec 2021 09:47:17 GMT",
"version": "v1"
},
{
"created": "Thu, 31 Mar 2022 03:17:26 GMT",
"version": "v2"
}
] | 2022-04-01 | [
[
"Wang",
"Zhikang",
""
],
[
"Zhu",
"Feng",
""
],
[
"Tang",
"Shixiang",
""
],
[
"Zhao",
"Rui",
""
],
[
"He",
"Lihuo",
""
],
[
"Song",
"Jiangning",
""
]
] | Occluded person re-identification (ReID) aims at matching occluded person images to holistic ones across different camera views. Target Pedestrians (TP) are usually disturbed by Non-Pedestrian Occlusions (NPO) and NonTarget Pedestrians (NTP). Previous methods mainly focus on increasing model's robustness against NPO while ignoring feature contamination from NTP. In this paper, we propose a novel Feature Erasing and Diffusion Network (FED) to simultaneously handle NPO and NTP. Specifically, NPO features are eliminated by our proposed Occlusion Erasing Module (OEM), aided by the NPO augmentation strategy which simulates NPO on holistic pedestrian images and generates precise occlusion masks. Subsequently, we diffuse the pedestrian representations with other memorized features to synthesize NTP characteristics in the feature space which is achieved by a novel Feature Diffusion Module (FDM) through a learnable cross attention mechanism. With the guidance of the occlusion scores from OEM, the feature diffusion process is mainly conducted on visible body parts, which guarantees the quality of the synthesized NTP characteristics. By jointly optimizing OEM and FDM in our proposed FED network, we can greatly improve the model's perception ability towards TP and alleviate the influence of NPO and NTP. Furthermore, the proposed FDM only works as an auxiliary module for training and will be discarded in the inference phase, thus introducing little inference computational overhead. Experiments on occluded and holistic person ReID benchmarks demonstrate the superiority of FED over state-of-the-arts, where FED achieves 86.3% Rank-1 accuracy on Occluded-REID, surpassing others by at least 4.7%.
2402.12891 | Tim Michels | Tim Michels, Daniel M\"ackelmann and Reinhard Koch | Mind the Exit Pupil Gap: Revisiting the Intrinsics of a Standard
Plenoptic Camera | 29 pages, 16 figures, Accepted for publication in MDPI Sensors,
Special Issue 'Short-Range Optical 3D Scanning and 3D Data Processing ' | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Among the common applications of plenoptic cameras are depth reconstruction
and post-shot refocusing. These require a calibration relating the camera-side
light field to that of the scene. Numerous methods with this goal have been
developed based on thin lens models for the plenoptic camera's main lens and
microlenses. Our work addresses the often-overlooked role of the main lens exit
pupil in these models and specifically in the decoding process of standard
plenoptic camera (SPC) images. We formally deduce the connection between the
refocusing distance and the resampling parameter for the decoded light field
and provide an analysis of the errors that arise when the exit pupil is not
considered. In addition, previous work is revisited with respect to the exit
pupil's role and all theoretical results are validated through a
ray-tracing-based simulation. With the public release of the evaluated SPC
designs alongside our simulation and experimental data we aim to contribute to
a more accurate and nuanced understanding of plenoptic camera optics.
| [
{
"created": "Tue, 20 Feb 2024 10:35:51 GMT",
"version": "v1"
},
{
"created": "Fri, 5 Apr 2024 09:26:07 GMT",
"version": "v2"
}
] | 2024-04-08 | [
[
"Michels",
"Tim",
""
],
[
"Mäckelmann",
"Daniel",
""
],
[
"Koch",
"Reinhard",
""
]
] | Among the common applications of plenoptic cameras are depth reconstruction and post-shot refocusing. These require a calibration relating the camera-side light field to that of the scene. Numerous methods with this goal have been developed based on thin lens models for the plenoptic camera's main lens and microlenses. Our work addresses the often-overlooked role of the main lens exit pupil in these models and specifically in the decoding process of standard plenoptic camera (SPC) images. We formally deduce the connection between the refocusing distance and the resampling parameter for the decoded light field and provide an analysis of the errors that arise when the exit pupil is not considered. In addition, previous work is revisited with respect to the exit pupil's role and all theoretical results are validated through a ray-tracing-based simulation. With the public release of the evaluated SPC designs alongside our simulation and experimental data we aim to contribute to a more accurate and nuanced understanding of plenoptic camera optics. |
1704.01992 | Shirin Jalali | Sajjad Beygi, Shirin Jalali, Arian Maleki, Urbashi Mitra | An efficient algorithm for compression-based compressed sensing | null | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Modern image and video compression codes employ elaborate structures existing
in such signals to encode them into few number of bits. Compressed sensing
recovery algorithms on the other hand use such signals' structures to recover
them from few linear observations. Despite the steady progress in the field of
compressed sensing, structures that are often used for signal recovery are
still much simpler than those employed by state-of-the-art compression codes.
The main goal of this paper is to bridge this gap through answering the
following question: Can one employ a given compression code to build an
efficient (polynomial time) compressed sensing recovery algorithm? In response
to this question, the compression-based gradient descent (C-GD) algorithm is
proposed. C-GD, which is a low-complexity iterative algorithm, is able to
employ a generic compression code for compressed sensing and therefore elevates
the scope of structures used in compressed sensing to those used by compression
codes. The convergence performance of C-GD and its required number of
measurements in terms of the rate-distortion performance of the compression
code are theoretically analyzed. It is also shown that C-GD is robust to
additive white Gaussian noise. Finally, the presented simulation results show
that combining C-GD with commercial image compression codes such as JPEG2000
yields state-of-the-art performance in imaging applications.
| [
{
"created": "Thu, 6 Apr 2017 19:23:08 GMT",
"version": "v1"
}
] | 2017-04-10 | [
[
"Beygi",
"Sajjad",
""
],
[
"Jalali",
"Shirin",
""
],
[
"Maleki",
"Arian",
""
],
[
"Mitra",
"Urbashi",
""
]
] | Modern image and video compression codes employ elaborate structures existing in such signals to encode them into few number of bits. Compressed sensing recovery algorithms on the other hand use such signals' structures to recover them from few linear observations. Despite the steady progress in the field of compressed sensing, structures that are often used for signal recovery are still much simpler than those employed by state-of-the-art compression codes. The main goal of this paper is to bridge this gap through answering the following question: Can one employ a given compression code to build an efficient (polynomial time) compressed sensing recovery algorithm? In response to this question, the compression-based gradient descent (C-GD) algorithm is proposed. C-GD, which is a low-complexity iterative algorithm, is able to employ a generic compression code for compressed sensing and therefore elevates the scope of structures used in compressed sensing to those used by compression codes. The convergence performance of C-GD and its required number of measurements in terms of the rate-distortion performance of the compression code are theoretically analyzed. It is also shown that C-GD is robust to additive white Gaussian noise. Finally, the presented simulation results show that combining C-GD with commercial image compression codes such as JPEG2000 yields state-of-the-art performance in imaging applications. |
2010.03680 | Yaqing Wang | Yaqing Wang, Subhabrata Mukherjee, Haoda Chu, Yuancheng Tu, Ming Wu,
Jing Gao, Ahmed Hassan Awadallah | Adaptive Self-training for Few-shot Neural Sequence Labeling | null | null | null | null | cs.CL cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sequence labeling is an important technique employed for many Natural
Language Processing (NLP) tasks, such as Named Entity Recognition (NER), slot
tagging for dialog systems and semantic parsing. Large-scale pre-trained
language models obtain very good performance on these tasks when fine-tuned on
large amounts of task-specific labeled data. However, such large-scale labeled
datasets are difficult to obtain for several tasks and domains due to the high
cost of human annotation as well as privacy and data access constraints for
sensitive user applications. This is exacerbated for sequence labeling tasks
requiring such annotations at token-level. In this work, we develop techniques
to address the label scarcity challenge for neural sequence labeling models.
Specifically, we develop self-training and meta-learning techniques for
training neural sequence taggers with few labels. While self-training serves as
an effective mechanism to learn from large amounts of unlabeled data --
meta-learning helps in adaptive sample re-weighting to mitigate error
propagation from noisy pseudo-labels. Extensive experiments on six benchmark
datasets including two for massive multilingual NER and four slot tagging
datasets for task-oriented dialog systems demonstrate the effectiveness of our
method. With only 10 labeled examples for each class for each task, our method
obtains 10% improvement over state-of-the-art systems demonstrating its
effectiveness for the low-resource setting.
| [
{
"created": "Wed, 7 Oct 2020 22:29:05 GMT",
"version": "v1"
},
{
"created": "Fri, 11 Dec 2020 17:16:57 GMT",
"version": "v2"
}
] | 2020-12-14 | [
[
"Wang",
"Yaqing",
""
],
[
"Mukherjee",
"Subhabrata",
""
],
[
"Chu",
"Haoda",
""
],
[
"Tu",
"Yuancheng",
""
],
[
"Wu",
"Ming",
""
],
[
"Gao",
"Jing",
""
],
[
"Awadallah",
"Ahmed Hassan",
""
]
] | Sequence labeling is an important technique employed for many Natural Language Processing (NLP) tasks, such as Named Entity Recognition (NER), slot tagging for dialog systems and semantic parsing. Large-scale pre-trained language models obtain very good performance on these tasks when fine-tuned on large amounts of task-specific labeled data. However, such large-scale labeled datasets are difficult to obtain for several tasks and domains due to the high cost of human annotation as well as privacy and data access constraints for sensitive user applications. This is exacerbated for sequence labeling tasks requiring such annotations at token-level. In this work, we develop techniques to address the label scarcity challenge for neural sequence labeling models. Specifically, we develop self-training and meta-learning techniques for training neural sequence taggers with few labels. While self-training serves as an effective mechanism to learn from large amounts of unlabeled data -- meta-learning helps in adaptive sample re-weighting to mitigate error propagation from noisy pseudo-labels. Extensive experiments on six benchmark datasets including two for massive multilingual NER and four slot tagging datasets for task-oriented dialog systems demonstrate the effectiveness of our method. With only 10 labeled examples for each class for each task, our method obtains 10% improvement over state-of-the-art systems demonstrating its effectiveness for the low-resource setting. |