| id (string, 9-10 chars) | submitter (string, 1-64 chars, ⌀) | authors (string, 4-20.7k chars) | title (string, 4-246 chars) | comments (string, 1-523 chars, ⌀) | journal-ref (string, 4-404 chars, ⌀) | doi (string, 11-153 chars, ⌀) | report-no (string, 2-254 chars, ⌀) | categories (string, 5-98 chars) | license (9 classes) | orig_abstract (string, 14-3.35k chars) | versions (list, 1-60 items) | update_date (string, 10 chars) | authors_parsed (list, 1-1.35k items) | abstract (string, 11-3.34k chars) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1705.10786
|
Hamidreza Alvari
|
Hamidreza Alvari, Paulo Shakarian, J.E. Kelly Snyder
|
Semi-Supervised Learning for Detecting Human Trafficking
| null | null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Human trafficking is one of the most atrocious crimes and among the
challenging problems facing law enforcement which demands attention of global
magnitude. In this study, we leverage textual data from the website "Backpage" -
used for classified advertisement - to discern potential patterns of human
trafficking activities which manifest online and identify advertisements of
high interest to law enforcement. Due to the lack of ground truth, we rely on a
human analyst from law enforcement for hand-labeling a small portion of the
crawled data. We extend the existing Laplacian SVM and present S3VM-R, by
adding a regularization term to exploit exogenous information embedded in our
feature space in favor of the task at hand. We train the proposed method using
labeled and unlabeled data and evaluate it on a fraction of the unlabeled data,
herein referred to as unseen data, with our expert's further verification.
Results from comparisons between our method and other semi-supervised and
supervised approaches on the labeled data demonstrate that our learner is
effective in identifying advertisements of high interest to law enforcement.
|
[
{
"created": "Tue, 30 May 2017 05:51:53 GMT",
"version": "v1"
}
] |
2017-06-01
|
[
[
"Alvari",
"Hamidreza",
""
],
[
"Shakarian",
"Paulo",
""
],
[
"Snyder",
"J. E. Kelly",
""
]
] |
Human trafficking is one of the most atrocious crimes and among the challenging problems facing law enforcement which demands attention of global magnitude. In this study, we leverage textual data from the website "Backpage" - used for classified advertisement - to discern potential patterns of human trafficking activities which manifest online and identify advertisements of high interest to law enforcement. Due to the lack of ground truth, we rely on a human analyst from law enforcement for hand-labeling a small portion of the crawled data. We extend the existing Laplacian SVM and present S3VM-R, by adding a regularization term to exploit exogenous information embedded in our feature space in favor of the task at hand. We train the proposed method using labeled and unlabeled data and evaluate it on a fraction of the unlabeled data, herein referred to as unseen data, with our expert's further verification. Results from comparisons between our method and other semi-supervised and supervised approaches on the labeled data demonstrate that our learner is effective in identifying advertisements of high interest to law enforcement.
|
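The abstract describes S3VM-R only at a high level (a Laplacian SVM plus an extra regularization term), but the underlying Laplacian-regularization idea for semi-supervised classification can be sketched on toy data. Everything below (the two-blob data, the RBF graph, the coefficients) is a hypothetical illustration, not the paper's model or its ad dataset:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy stand-in for the ad data (which is not public): two
# blobs of 20 points each, of which only three per class are labeled.
X = np.vstack([rng.normal([2, 2], 0.5, size=(20, 2)),
               rng.normal([-2, -2], 0.5, size=(20, 2))])
y = np.zeros(40)            # 0 marks an unlabeled point
y[:3], y[20:23] = 1, -1

# RBF similarity graph over ALL points and its graph Laplacian.
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-D2 / 2.0)
L = np.diag(W.sum(axis=1)) - W

# Minimize: hinge loss on labeled points + lam*||w||^2 + gamma * f' L f,
# where f = X w + b. The Laplacian term asks the decision values to vary
# smoothly over the similarity graph (the Laplacian-SVM ingredient).
w, b = np.zeros(2), 0.0
lam, gamma, lr = 0.01, 0.001, 0.01
labeled = y != 0
for _ in range(500):
    f = X @ w + b
    viol = labeled & (y * f < 1)          # labeled points inside the margin
    grad_w = -(y[viol][:, None] * X[viol]).sum(axis=0)
    grad_w += 2 * lam * w + 2 * gamma * (X.T @ (L @ f))
    grad_b = -y[viol].sum() + 2 * gamma * (L @ f).sum()
    w -= lr * grad_w
    b -= lr * grad_b

truth = np.r_[np.ones(20), -np.ones(20)]
acc = (np.sign(X @ w + b) == truth).mean()
print(acc)
```

In this sketch the `f' L f` term is what lets the handful of labels propagate over the similarity graph to the unlabeled points; S3VM-R's additional regularizer over exogenous information would enter the objective in the same fashion.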
1208.4692
|
Francis Maes
|
Francis Maes and David Lupien St-Pierre and Damien Ernst
|
Monte Carlo Search Algorithm Discovery for One Player Games
| null | null | null | null |
cs.AI cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Much current research in AI and games is being devoted to Monte Carlo search
(MCS) algorithms. While the quest for a single unified MCS algorithm that would
perform well on all problems is of major interest for AI, practitioners often
know in advance the problem they want to solve, and spend plenty of time
exploiting this knowledge to customize their MCS algorithm in a problem-driven
way. We propose an MCS algorithm discovery scheme to perform this in an
automatic and reproducible way. We first introduce a grammar over MCS
algorithms that enables inducing a rich space of candidate algorithms.
Afterwards, we search in this space for the algorithm that performs best on
average for a given distribution of training problems. We rely on multi-armed
bandits to approximately solve this optimization problem. The experiments,
carried out on three different domains, show that our approach enables
discovering algorithms that outperform several well-known MCS algorithms such
as Upper Confidence bounds applied to Trees and Nested Monte Carlo search. We
also show that the discovered algorithms are generally quite robust with
respect to changes in the distribution over the training problems.
|
[
{
"created": "Thu, 23 Aug 2012 08:44:59 GMT",
"version": "v1"
},
{
"created": "Fri, 16 Nov 2012 15:57:58 GMT",
"version": "v2"
},
{
"created": "Tue, 18 Dec 2012 10:44:37 GMT",
"version": "v3"
}
] |
2015-03-20
|
[
[
"Maes",
"Francis",
""
],
[
"St-Pierre",
"David Lupien",
""
],
[
"Ernst",
"Damien",
""
]
] |
Much current research in AI and games is being devoted to Monte Carlo search (MCS) algorithms. While the quest for a single unified MCS algorithm that would perform well on all problems is of major interest for AI, practitioners often know in advance the problem they want to solve, and spend plenty of time exploiting this knowledge to customize their MCS algorithm in a problem-driven way. We propose an MCS algorithm discovery scheme to perform this in an automatic and reproducible way. We first introduce a grammar over MCS algorithms that enables inducing a rich space of candidate algorithms. Afterwards, we search in this space for the algorithm that performs best on average for a given distribution of training problems. We rely on multi-armed bandits to approximately solve this optimization problem. The experiments, carried out on three different domains, show that our approach enables discovering algorithms that outperform several well-known MCS algorithms such as Upper Confidence bounds applied to Trees and Nested Monte Carlo search. We also show that the discovered algorithms are generally quite robust with respect to changes in the distribution over the training problems.
|
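The bandit component of the discovery scheme can be sketched independently of the grammar: treat each candidate algorithm as an arm, each training-problem evaluation as a noisy reward, and let UCB1 allocate evaluations. The candidate scores below are made up; the paper's grammar, domains, and exact bandit variant are not reproduced here:

```python
import math
import random

random.seed(0)

# Hypothetical stand-in for the grammar-induced candidate space: each
# candidate MCS algorithm is an arm whose pull is one noisy evaluation
# on a problem drawn from the training distribution.
true_means = [0.30, 0.55, 0.40, 0.70, 0.50]     # candidate 3 is best

def evaluate(arm):
    return true_means[arm] + random.gauss(0, 0.1)

n = len(true_means)
counts, totals = [0] * n, [0.0] * n

# UCB1: pull every arm once, then trade off the empirical mean against an
# exploration bonus that shrinks as an arm accumulates evaluations.
for arm in range(n):
    totals[arm] += evaluate(arm)
    counts[arm] += 1
for t in range(n, 2000):
    ucb = [totals[a] / counts[a] + math.sqrt(2 * math.log(t) / counts[a])
           for a in range(n)]
    arm = max(range(n), key=lambda a: ucb[a])
    totals[arm] += evaluate(arm)
    counts[arm] += 1

best = max(range(n), key=lambda a: totals[a] / counts[a])
print(best, counts)
```

Under this budget the bandit concentrates its evaluations on the strongest candidate while still sampling the rest enough to rank them, which is the approximate-optimization role the abstract assigns to multi-armed bandits.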
2206.03857
|
Vitor Bosshard
|
Vitor Bosshard, Ye Wang and Sven Seuken
|
Non-decreasing Payment Rules for Combinatorial Auctions
|
Published at IJCAI 2018. Includes corrigendum explaining a mistake in
the original paper
| null |
10.24963/ijcai.2018/15
| null |
cs.GT
|
http://creativecommons.org/licenses/by/4.0/
|
Combinatorial auctions are used to allocate resources in domains where
bidders have complex preferences over bundles of goods. However, the behavior
of bidders under different payment rules is not well understood, and there has
been limited success in finding Bayes-Nash equilibria of such auctions due to
the computational difficulties involved. In this paper, we introduce
non-decreasing payment rules. Under such a rule, the payment of a bidder cannot
decrease when he increases his bid, which is a natural and desirable property.
VCG-nearest, the payment rule most commonly used in practice, violates this
property and can thus be manipulated in surprising ways. In contrast, we show
that many other payment rules are non-decreasing. We also show that a
non-decreasing payment rule imposes a structure on the auction game that
enables us to search for an approximate Bayes-Nash equilibrium much more
efficiently than in the general case. Finally, we introduce the utility planes
BNE algorithm, which exploits this structure and outperforms a state-of-the-art
algorithm by multiple orders of magnitude.
|
[
{
"created": "Tue, 7 Jun 2022 13:17:09 GMT",
"version": "v1"
}
] |
2022-06-09
|
[
[
"Bosshard",
"Vitor",
""
],
[
"Wang",
"Ye",
""
],
[
"Seuken",
"Sven",
""
]
] |
Combinatorial auctions are used to allocate resources in domains where bidders have complex preferences over bundles of goods. However, the behavior of bidders under different payment rules is not well understood, and there has been limited success in finding Bayes-Nash equilibria of such auctions due to the computational difficulties involved. In this paper, we introduce non-decreasing payment rules. Under such a rule, the payment of a bidder cannot decrease when he increases his bid, which is a natural and desirable property. VCG-nearest, the payment rule most commonly used in practice, violates this property and can thus be manipulated in surprising ways. In contrast, we show that many other payment rules are non-decreasing. We also show that a non-decreasing payment rule imposes a structure on the auction game that enables us to search for an approximate Bayes-Nash equilibrium much more efficiently than in the general case. Finally, we introduce the utility planes BNE algorithm, which exploits this structure and outperforms a state-of-the-art algorithm by multiple orders of magnitude.
|
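The paper's central property can be checked mechanically: sweep one bidder's bid upward while holding the others fixed, and verify the payment never drops. The sketch below uses a toy single-item auction rather than the paper's combinatorial setting, and `broken_rule` is a contrived rule invented here to exhibit a violation; it is not VCG-nearest itself:

```python
def first_price(bids, i):
    # Winner (highest bid, lowest index on ties) pays own bid.
    winner = max(range(len(bids)), key=lambda j: (bids[j], -j))
    return bids[i] if i == winner else 0.0

def broken_rule(bids, i):
    # Contrived non-monotone rule: the winner's payment SHRINKS as his
    # bid grows, so raising a winning bid lowers the payment.
    winner = max(range(len(bids)), key=lambda j: (bids[j], -j))
    return max(0.0, 10.0 - bids[i]) if i == winner else 0.0

def is_non_decreasing(rule, others=(4.0, 7.0)):
    # Sweep bidder 0's bid from 0 to 10; the payment must never drop.
    grid = [x / 2 for x in range(21)]
    pays = [rule((b,) + tuple(others), 0) for b in grid]
    return all(p1 <= p2 + 1e-9 for p1, p2 in zip(pays, pays[1:]))

print(is_non_decreasing(first_price), is_non_decreasing(broken_rule))
```

First-price passes: the payment jumps from 0 to the own bid at the winning threshold and rises with the bid thereafter. The contrived rule fails as soon as a winning bidder raises his bid, which is the kind of manipulation opportunity the paper attributes to rules violating the property.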
2407.01866
|
Yunxiang Zhang
|
Yunxiang Zhang, Alexandr Kuznetsov, Akshay Jindal, Kenneth Chen, Anton
Sochenov, Anton Kaplanyan, Qi Sun
|
Image-GS: Content-Adaptive Image Representation via 2D Gaussians
| null | null | null | null |
cs.CV cs.GR
|
http://creativecommons.org/licenses/by/4.0/
|
Neural image representations have recently emerged as a promising technique
for storing, streaming, and rendering visual data. Coupled with learning-based
workflows, these novel representations have demonstrated remarkable visual
fidelity and memory efficiency. However, existing neural image representations
often rely on explicit uniform data structures without content adaptivity or
computation-intensive implicit models, limiting their adoption in real-time
graphics applications.
Inspired by recent advances in radiance field rendering, we propose Image-GS,
a content-adaptive image representation. Using anisotropic 2D Gaussians as the
basis, Image-GS shows high memory efficiency, supports fast random access, and
offers a natural level of detail stack. Leveraging a tailored differentiable
renderer, Image-GS fits a target image by adaptively allocating and
progressively optimizing a set of 2D Gaussians. The generalizable efficiency
and fidelity of Image-GS are validated against several recent neural image
representations and industry-standard texture compressors on a diverse set of
images. Notably, its memory and computation requirements solely depend on and
linearly scale with the number of 2D Gaussians, providing flexible controls
over the trade-off between visual fidelity and run-time efficiency. We hope
this research offers insights for developing new applications that require
adaptive quality and resource control, such as machine perception, asset
streaming, and content generation.
|
[
{
"created": "Tue, 2 Jul 2024 00:45:21 GMT",
"version": "v1"
}
] |
2024-07-03
|
[
[
"Zhang",
"Yunxiang",
""
],
[
"Kuznetsov",
"Alexandr",
""
],
[
"Jindal",
"Akshay",
""
],
[
"Chen",
"Kenneth",
""
],
[
"Sochenov",
"Anton",
""
],
[
"Kaplanyan",
"Anton",
""
],
[
"Sun",
"Qi",
""
]
] |
Neural image representations have recently emerged as a promising technique for storing, streaming, and rendering visual data. Coupled with learning-based workflows, these novel representations have demonstrated remarkable visual fidelity and memory efficiency. However, existing neural image representations often rely on explicit uniform data structures without content adaptivity or computation-intensive implicit models, limiting their adoption in real-time graphics applications. Inspired by recent advances in radiance field rendering, we propose Image-GS, a content-adaptive image representation. Using anisotropic 2D Gaussians as the basis, Image-GS shows high memory efficiency, supports fast random access, and offers a natural level of detail stack. Leveraging a tailored differentiable renderer, Image-GS fits a target image by adaptively allocating and progressively optimizing a set of 2D Gaussians. The generalizable efficiency and fidelity of Image-GS are validated against several recent neural image representations and industry-standard texture compressors on a diverse set of images. Notably, its memory and computation requirements solely depend on and linearly scale with the number of 2D Gaussians, providing flexible controls over the trade-off between visual fidelity and run-time efficiency. We hope this research offers insights for developing new applications that require adaptive quality and resource control, such as machine perception, asset streaming, and content generation.
|
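The basic primitive of Image-GS, an anisotropic 2D Gaussian with a position, covariance, and color accumulated onto the image plane, can be sketched with a naive renderer. The Gaussians, sizes, and plain weighted-sum compositing below are invented for illustration; Image-GS's actual differentiable renderer, adaptive allocation, and level-of-detail stack are not reproduced:

```python
import numpy as np

H = W = 32
ys, xs = np.mgrid[0:H, 0:W]
P = np.stack([xs, ys], -1).astype(float)   # pixel coordinates, (H, W, 2)

def render(gaussians):
    """Accumulate anisotropic 2D Gaussians into an RGB image.

    Each gaussian: (mean (2,), inverse covariance (2, 2), color (3,), weight).
    """
    img = np.zeros((H, W, 3))
    for mu, inv_cov, color, wgt in gaussians:
        d = P - mu                                      # (H, W, 2)
        md = np.einsum('hwi,ij,hwj->hw', d, inv_cov, d) # Mahalanobis distance
        img += wgt * np.exp(-0.5 * md)[..., None] * color
    return np.clip(img, 0.0, 1.0)

# Two hypothetical Gaussians: an isotropic red blob and an elongated,
# rotated blue one (off-diagonal covariance terms give the anisotropy).
g1 = (np.array([10.0, 10.0]), np.linalg.inv(np.diag([6.0, 6.0])),
      np.array([1.0, 0.2, 0.2]), 1.0)
cov = np.array([[20.0, 8.0], [8.0, 5.0]])               # anisotropic
g2 = (np.array([22.0, 20.0]), np.linalg.inv(cov),
      np.array([0.2, 0.4, 1.0]), 1.0)

img = render([g1, g2])
print(img.shape, float(img.max()))
```

Only the forward pass is shown; fitting an image would backpropagate a reconstruction loss through `render` into the means, covariances, and colors, and the cost of doing so scales linearly with the number of Gaussians, as the abstract notes.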
2311.06838
|
Chengguang Gan
|
Chengguang Gan, Qinghao Zhang, Tatsunori Mori
|
GIELLM: Japanese General Information Extraction Large Language Model
Utilizing Mutual Reinforcement Effect
|
10 pages, 6 figures
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Information Extraction (IE) stands as a cornerstone in natural language
processing, traditionally segmented into distinct sub-tasks. The advent of
Large Language Models (LLMs) heralds a paradigm shift, suggesting the
feasibility of a singular model addressing multiple IE subtasks. In this vein,
we introduce the General Information Extraction Large Language Model (GIELLM),
which integrates text Classification, Sentiment Analysis, Named Entity
Recognition, Relation Extraction, and Event Extraction using a uniform
input-output schema. This innovation marks the first instance of a model
simultaneously handling such a diverse array of IE subtasks. Notably, the
GIELLM leverages the Mutual Reinforcement Effect (MRE), enhancing performance
in integrated tasks compared to their isolated counterparts. Our experiments
demonstrate State-of-the-Art (SOTA) results in five out of six Japanese mixed
datasets, significantly surpassing GPT-3.5-Turbo. Further, an independent
evaluation using the novel Text Classification Relation and Event
Extraction (TCREE) dataset corroborates the synergistic advantages of MRE in
text and word classification. This breakthrough paves the way for most IE
subtasks to be subsumed under a singular LLM framework. Specialized, fine-tuned
task-specific models are no longer needed.
|
[
{
"created": "Sun, 12 Nov 2023 13:30:38 GMT",
"version": "v1"
}
] |
2023-11-14
|
[
[
"Gan",
"Chengguang",
""
],
[
"Zhang",
"Qinghao",
""
],
[
"Mori",
"Tatsunori",
""
]
] |
Information Extraction (IE) stands as a cornerstone in natural language processing, traditionally segmented into distinct sub-tasks. The advent of Large Language Models (LLMs) heralds a paradigm shift, suggesting the feasibility of a singular model addressing multiple IE subtasks. In this vein, we introduce the General Information Extraction Large Language Model (GIELLM), which integrates text Classification, Sentiment Analysis, Named Entity Recognition, Relation Extraction, and Event Extraction using a uniform input-output schema. This innovation marks the first instance of a model simultaneously handling such a diverse array of IE subtasks. Notably, the GIELLM leverages the Mutual Reinforcement Effect (MRE), enhancing performance in integrated tasks compared to their isolated counterparts. Our experiments demonstrate State-of-the-Art (SOTA) results in five out of six Japanese mixed datasets, significantly surpassing GPT-3.5-Turbo. Further, an independent evaluation using the novel Text Classification Relation and Event Extraction (TCREE) dataset corroborates the synergistic advantages of MRE in text and word classification. This breakthrough paves the way for most IE subtasks to be subsumed under a singular LLM framework. Specialized, fine-tuned task-specific models are no longer needed.
|
1812.10961
|
Sergey Belim
|
S.V. Belim, N.F. Bogachenko, A.N. Kabanov
|
A Precedent Approach to Assigning Access Rights
| null | null |
10.1088/1742-6596/1210/1/012010
| null |
cs.CR cs.AI cs.DM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To design a discretionary access control policy, a technique is proposed that
uses the principle of analogies and is based on both the properties of objects
and the properties of subjects. As attributes characterizing these properties,
the values of the security attributes of subjects and objects are chosen. The
concept of precedent is defined as an access rule explicitly specified by the
security administrator. The problem of interpolation of the access matrix is
formulated: the security administrator defines a sequence of precedents, it is
required to automate the process of filling the remaining cells of the access
matrix. On the family of sets of security attributes, a linear order is
introduced. The principles of filling the access matrix on the basis of analogy
with the dominant precedent in accordance with a given order relation are
developed. The analysis of the proposed methodology is performed and its main
advantages are revealed.
|
[
{
"created": "Fri, 28 Dec 2018 11:51:14 GMT",
"version": "v1"
}
] |
2019-05-22
|
[
[
"Belim",
"S. V.",
""
],
[
"Bogachenko",
"N. F.",
""
],
[
"Kabanov",
"A. N.",
""
]
] |
To design a discretionary access control policy, a technique is proposed that uses the principle of analogies and is based on both the properties of objects and the properties of subjects. As attributes characterizing these properties, the values of the security attributes of subjects and objects are chosen. The concept of precedent is defined as an access rule explicitly specified by the security administrator. The problem of interpolation of the access matrix is formulated: the security administrator defines a sequence of precedents, it is required to automate the process of filling the remaining cells of the access matrix. On the family of sets of security attributes, a linear order is introduced. The principles of filling the access matrix on the basis of analogy with the dominant precedent in accordance with a given order relation are developed. The analysis of the proposed methodology is performed and its main advantages are revealed.
|
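The interpolation idea can be made concrete: the administrator fixes a few precedents, a linear order picks a "dominant" precedent for every empty cell, and the cell inherits that precedent's rights. The integer security levels and the clearance-gap order below are invented for illustration; the paper defines its own order over families of security-attribute sets:

```python
# Hypothetical attribute model: a (subject, object) cell of the access
# matrix is keyed by the pair of their integer security levels.
precedents = {                       # explicit rules from the administrator
    (3, 1): {"read", "write"},       # high-clearance subject, low-level object
    (2, 2): {"read"},
    (1, 3): set(),                   # low-clearance subject, high-level object
}

def dominant_precedent(s_lvl, o_lvl):
    # Illustrative linear order (not the paper's): the dominant precedent
    # is the one with the largest clearance gap s - o that does not
    # exceed the queried cell's gap.
    gap = s_lvl - o_lvl
    candidates = [(ps - po, (ps, po)) for (ps, po) in precedents
                  if ps - po <= gap]
    return max(candidates)[1] if candidates else None

def rights(s_lvl, o_lvl):
    if (s_lvl, o_lvl) in precedents:           # an explicit precedent wins
        return set(precedents[(s_lvl, o_lvl)])
    dom = dominant_precedent(s_lvl, o_lvl)
    return set(precedents[dom]) if dom else set()

# Interpolate the full 3x3-level access matrix from three precedents.
matrix = {(s, o): rights(s, o) for s in (1, 2, 3) for o in (1, 2, 3)}
print(matrix[(3, 2)], matrix[(1, 1)], matrix[(1, 2)])
```

Here the security administrator only states three rules, yet every remaining cell is filled by analogy with its dominant precedent, which is the automation the abstract's interpolation problem asks for.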
1010.3947
|
Pedro Aguiar
|
Bernardo Esteves Pires and Pedro M. Q. Aguiar
|
Maximum Likelihood Mosaics
|
13 pages, 8 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The majority of the approaches to the automatic recovery of a panoramic image
from a set of partial views are suboptimal in the sense that the input images
are aligned, or registered, pair by pair, e.g., consecutive frames of a video
clip. These approaches lead to propagation errors that may be very severe,
particularly when dealing with videos that show the same region at disjoint
time intervals. Although some authors have proposed a post-processing step to
reduce the registration errors in these situations, there have not been
attempts to compute the optimal solution, i.e., the registrations leading to
the panorama that best matches the entire set of partial views. This is our
goal. In this paper, we use a generative model for the partial views of the
panorama and develop an algorithm to compute in an efficient way the Maximum
Likelihood estimate of all the unknowns involved: the parameters describing the
alignment of all the images and the panorama itself.
|
[
{
"created": "Tue, 19 Oct 2010 15:13:40 GMT",
"version": "v1"
}
] |
2010-10-20
|
[
[
"Pires",
"Bernardo Esteves",
""
],
[
"Aguiar",
"Pedro M. Q.",
""
]
] |
The majority of the approaches to the automatic recovery of a panoramic image from a set of partial views are suboptimal in the sense that the input images are aligned, or registered, pair by pair, e.g., consecutive frames of a video clip. These approaches lead to propagation errors that may be very severe, particularly when dealing with videos that show the same region at disjoint time intervals. Although some authors have proposed a post-processing step to reduce the registration errors in these situations, there have not been attempts to compute the optimal solution, i.e., the registrations leading to the panorama that best matches the entire set of partial views. This is our goal. In this paper, we use a generative model for the partial views of the panorama and develop an algorithm to compute in an efficient way the Maximum Likelihood estimate of all the unknowns involved: the parameters describing the alignment of all the images and the panorama itself.
|
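One half of the joint ML estimate is easy to illustrate: with Gaussian observation noise and the registrations held fixed, the ML panorama is the pixelwise mean over all views covering each position, rather than anything inherited from a pairwise alignment chain. The 1D random-walk scene, the known offsets, and the noise level below are invented for illustration; the paper additionally estimates the registrations themselves:

```python
import numpy as np

rng = np.random.default_rng(1)

# 1D stand-in for the panorama: a random walk, observed through seven
# overlapping noisy partial "views" at known registrations (offsets).
pano_true = np.cumsum(rng.normal(size=60))
offsets = list(range(0, 31, 5))
views = [pano_true[o:o + 30] + rng.normal(0, 0.2, 30) for o in offsets]

# ML panorama given the registrations: pixelwise mean of covering views.
acc = np.zeros(60)
cnt = np.zeros(60)
for v, o in zip(views, offsets):
    acc[o:o + 30] += v
    cnt[o:o + 30] += 1
pano_ml = acc / cnt

err_ml = np.abs(pano_ml - pano_true).mean()
err_single = np.abs(views[0] - pano_true[:30]).mean()
print(round(err_ml, 3), round(err_single, 3))
```

Averaging all overlapping views shrinks the noise roughly by the square root of the coverage count at each position, so the ML panorama is visibly cleaner than any single view; the paper's contribution is to solve for the alignment parameters jointly with this estimate.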
2212.08709
|
Clayton Thomas
|
Yannai A. Gonczarowski, Clayton Thomas
|
Structural Complexities of Matching Mechanisms
| null | null | null | null |
cs.GT cs.CC econ.TH
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study various novel complexity measures for two-sided matching mechanisms,
applied to the two canonical strategyproof matching mechanisms, Deferred
Acceptance (DA) and Top Trading Cycles (TTC). Our metrics are designed to
capture the complexity of various structural (rather than computational)
concerns, in particular ones of recent interest within economics. We consider a
unified, flexible approach to formalizing our questions: Define a protocol or
data structure performing some task, and bound the number of bits that it
requires. Our main results apply this approach to four questions of general
interest; for mechanisms matching applicants to institutions, our questions
are:
(1) How can one applicant affect the outcome matching?
(2) How can one applicant affect another applicant's set of options?
(3) How can the outcome matching be represented / communicated?
(4) How can the outcome matching be verified?
Holistically, our results show that TTC is more complex than DA, formalizing
previous intuitions that DA has a simpler structure than TTC. For question (2),
our result gives a new combinatorial characterization of which institutions are
removed from each applicant's set of options when a new applicant is added in
DA; this characterization may be of independent interest. For question (3), our
result gives new tight lower bounds proving that the relationship between the
matching and the priorities is more complex in TTC than in DA. We nonetheless
showcase that this higher complexity of TTC is nuanced: By constructing new
tight lower-bound instances and new verification protocols, we prove that DA
and TTC are comparable in complexity under questions (1) and (4). This more
precisely delineates the ways in which TTC is more complex than DA, and
emphasizes that diverse considerations must factor into gauging the complexity
of matching mechanisms.
|
[
{
"created": "Fri, 16 Dec 2022 20:53:30 GMT",
"version": "v1"
},
{
"created": "Thu, 11 May 2023 16:43:32 GMT",
"version": "v2"
},
{
"created": "Sat, 30 Mar 2024 22:17:26 GMT",
"version": "v3"
}
] |
2024-04-02
|
[
[
"Gonczarowski",
"Yannai A.",
""
],
[
"Thomas",
"Clayton",
""
]
] |
We study various novel complexity measures for two-sided matching mechanisms, applied to the two canonical strategyproof matching mechanisms, Deferred Acceptance (DA) and Top Trading Cycles (TTC). Our metrics are designed to capture the complexity of various structural (rather than computational) concerns, in particular ones of recent interest within economics. We consider a unified, flexible approach to formalizing our questions: Define a protocol or data structure performing some task, and bound the number of bits that it requires. Our main results apply this approach to four questions of general interest; for mechanisms matching applicants to institutions, our questions are: (1) How can one applicant affect the outcome matching? (2) How can one applicant affect another applicant's set of options? (3) How can the outcome matching be represented / communicated? (4) How can the outcome matching be verified? Holistically, our results show that TTC is more complex than DA, formalizing previous intuitions that DA has a simpler structure than TTC. For question (2), our result gives a new combinatorial characterization of which institutions are removed from each applicant's set of options when a new applicant is added in DA; this characterization may be of independent interest. For question (3), our result gives new tight lower bounds proving that the relationship between the matching and the priorities is more complex in TTC than in DA. We nonetheless showcase that this higher complexity of TTC is nuanced: By constructing new tight lower-bound instances and new verification protocols, we prove that DA and TTC are comparable in complexity under questions (1) and (4). This more precisely delineates the ways in which TTC is more complex than DA, and emphasizes that diverse considerations must factor into gauging the complexity of matching mechanisms.
|
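For reference, applicant-proposing Deferred Acceptance, one of the two mechanisms whose structural complexity the paper compares, runs as follows. This is a standard textbook sketch with unit-capacity institutions and made-up preference lists, not code from the paper:

```python
def deferred_acceptance(app_prefs, inst_prefs):
    """Applicant-proposing DA: free applicants propose down their lists;
    each institution tentatively holds its best proposer (capacity 1)."""
    rank = {i: {a: r for r, a in enumerate(prefs)}
            for i, prefs in inst_prefs.items()}
    next_choice = {a: 0 for a in app_prefs}
    held = {}                          # institution -> held applicant
    free = list(app_prefs)
    while free:
        a = free.pop()
        if next_choice[a] >= len(app_prefs[a]):
            continue                   # a has exhausted their list
        i = app_prefs[a][next_choice[a]]
        next_choice[a] += 1
        worst = len(rank[i])           # unranked applicants count as worst
        if i not in held:
            held[i] = a
        elif rank[i].get(a, worst) < rank[i].get(held[i], worst):
            free.append(held[i])       # displaced applicant re-enters
            held[i] = a
        else:
            free.append(a)             # rejected; will try next choice
    return {a: i for i, a in held.items()}

apps = {"x": ["A", "B"], "y": ["A", "B"], "z": ["B", "A"]}
insts = {"A": ["y", "x", "z"], "B": ["x", "z", "y"]}
print(deferred_acceptance(apps, insts))
```

With three applicants and two unit-capacity institutions, one applicant (here z) necessarily ends unmatched after exhausting their list. TTC, the other mechanism studied, instead trades endowments along cycles and is not sketched here.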
2407.15861
|
Chenyu Zhang
|
Chenyu Zhang, Mingwang Hu, Wenhui Li and Lanjun Wang
|
Adversarial Attacks and Defenses on Text-to-Image Diffusion Models: A
Survey
| null | null | null | null |
cs.CR cs.AI cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, the text-to-image diffusion model has gained considerable attention
from the community due to its exceptional image generation capability. A
representative model, Stable Diffusion, amassed more than 10 million users
within just two months of its release. This surge in popularity has facilitated
studies on the robustness and safety of the model, leading to the proposal of
various adversarial attack methods. Simultaneously, there has been a marked
increase in research focused on defense methods to improve the robustness and
safety of these models. In this survey, we provide a comprehensive review of
the literature on adversarial attacks and defenses targeting text-to-image
diffusion models. We begin with an overview of text-to-image diffusion models,
followed by an introduction to a taxonomy of adversarial attacks and an
in-depth review of existing attack methods. We then present a detailed analysis
of current defense methods that improve model robustness and safety. Finally,
we discuss ongoing challenges and explore promising future research directions.
For a complete list of the adversarial attack and defense methods covered in
this survey, please refer to our curated repository at
https://github.com/datar001/Awesome-AD-on-T2IDM.
|
[
{
"created": "Wed, 10 Jul 2024 13:50:31 GMT",
"version": "v1"
}
] |
2024-07-24
|
[
[
"Zhang",
"Chenyu",
""
],
[
"Hu",
"Mingwang",
""
],
[
"Li",
"Wenhui",
""
],
[
"Wang",
"Lanjun",
""
]
] |
Recently, the text-to-image diffusion model has gained considerable attention from the community due to its exceptional image generation capability. A representative model, Stable Diffusion, amassed more than 10 million users within just two months of its release. This surge in popularity has facilitated studies on the robustness and safety of the model, leading to the proposal of various adversarial attack methods. Simultaneously, there has been a marked increase in research focused on defense methods to improve the robustness and safety of these models. In this survey, we provide a comprehensive review of the literature on adversarial attacks and defenses targeting text-to-image diffusion models. We begin with an overview of text-to-image diffusion models, followed by an introduction to a taxonomy of adversarial attacks and an in-depth review of existing attack methods. We then present a detailed analysis of current defense methods that improve model robustness and safety. Finally, we discuss ongoing challenges and explore promising future research directions. For a complete list of the adversarial attack and defense methods covered in this survey, please refer to our curated repository at https://github.com/datar001/Awesome-AD-on-T2IDM.
|
2311.06157
|
Grischa Fraumann
|
Grischa Fraumann, Svantje Lilienthal, and Christian Hauschke
|
The Registry of Scientometric Data Sources
|
20 pages, 4 figures, 1 table
| null |
10.31222/osf.io/f2bx3
| null |
cs.DL
|
http://creativecommons.org/licenses/by/4.0/
|
In this article, we describe the Registry of Scientometric Data Sources
(RSDS) and several scientometric data sources recorded in this open registry
that could be of interest for scientometricians, institutional researchers,
librarians, practitioners, policy makers, students and other stakeholders with
an interest in scientometrics. This registry was created after carrying out a
literature review and a technical evaluation of several data sources. Each data
source is recorded with descriptive metadata fields and URLs to further
information. This article describes the motivation behind the development of
the registry, explains the features that are available on its public website
(https://labs.tib.eu/rosi), and closes with a call for participation.
|
[
{
"created": "Fri, 10 Nov 2023 16:23:40 GMT",
"version": "v1"
}
] |
2023-11-13
|
[
[
"Fraumann",
"Grischa",
""
],
[
"Lilienthal",
"Svantje",
""
],
[
"Hauschke",
"Christian",
""
]
] |
In this article, we describe the Registry of Scientometric Data Sources (RSDS) and several scientometric data sources recorded in this open registry that could be of interest for scientometricians, institutional researchers, librarians, practitioners, policy makers, students and other stakeholders with an interest in scientometrics. This registry was created after carrying out a literature review and a technical evaluation of several data sources. Each data source is recorded with descriptive metadata fields and URLs to further information. This article describes the motivation behind the development of the registry, explains the features that are available on its public website (https://labs.tib.eu/rosi), and closes with a call for participation.
|
2105.07445
|
Hyesun Chung
|
Hyesun Chung and Woojin Park
|
Enhancing the Usability of Self-service Kiosks for Older Adults: Effects
of Using Privacy Partitions and Chairs
| null | null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This study aimed to evaluate the effects of possible physical design features
of self-service kiosks (SSK), namely side and back partitions and chairs, on workload
and task performance of older users during a typical SSK task. The study
comparatively evaluated eight physical SSK design alternatives, and younger and
older participants performed a menu ordering task using each physical design
alternative. Older participants showed a large variation in task performance
across the design alternatives indicating stronger impacts of the physical
design features. In particular, sitting significantly reduced task completion
time and workload in multiple dimensions, including time pressure and
frustration. In addition, the use of either side or back partitions reduced
mean ratings of mental demand and effort. The study suggests placing chairs and
either side or back partitions to enhance older adults' user experience. The
use of the proposed physical design recommendations would greatly help them use
SSK more effectively.
|
[
{
"created": "Sun, 16 May 2021 14:43:51 GMT",
"version": "v1"
}
] |
2021-05-18
|
[
[
"Chung",
"Hyesun",
""
],
[
"Park",
"Woojin",
""
]
] |
This study aimed to evaluate the effects of possible physical design features of self-service kiosks (SSK), namely side and back partitions and chairs, on workload and task performance of older users during a typical SSK task. The study comparatively evaluated eight physical SSK design alternatives, and younger and older participants performed a menu ordering task using each physical design alternative. Older participants showed a large variation in task performance across the design alternatives indicating stronger impacts of the physical design features. In particular, sitting significantly reduced task completion time and workload in multiple dimensions, including time pressure and frustration. In addition, the use of either side or back partitions reduced mean ratings of mental demand and effort. The study suggests placing chairs and either side or back partitions to enhance older adults' user experience. The use of the proposed physical design recommendations would greatly help them use SSK more effectively.
|
2309.08600
|
Robert Huben
|
Hoagy Cunningham, Aidan Ewart, Logan Riggs, Robert Huben, Lee Sharkey
|
Sparse Autoencoders Find Highly Interpretable Features in Language
Models
|
20 pages, 18 figures, 2 tables
| null | null | null |
cs.LG cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
One of the roadblocks to a better understanding of neural networks' internals
is \textit{polysemanticity}, where neurons appear to activate in multiple,
semantically distinct contexts. Polysemanticity prevents us from identifying
concise, human-understandable explanations for what neural networks are doing
internally. One hypothesised cause of polysemanticity is
\textit{superposition}, where neural networks represent more features than they
have neurons by assigning features to an overcomplete set of directions in
activation space, rather than to individual neurons. Here, we attempt to
identify those directions, using sparse autoencoders to reconstruct the
internal activations of a language model. These autoencoders learn sets of
sparsely activating features that are more interpretable and monosemantic than
directions identified by alternative approaches, where interpretability is
measured by automated methods. Moreover, we show that with our learned set of
features, we can pinpoint the features that are causally responsible for
counterfactual behaviour on the indirect object identification task
\citep{wang2022interpretability} to a finer degree than previous
decompositions. This work indicates that it is possible to resolve
superposition in language models using a scalable, unsupervised method. Our
method may serve as a foundation for future mechanistic interpretability work,
which we hope will enable greater model transparency and steerability.
|
[
{
"created": "Fri, 15 Sep 2023 17:56:55 GMT",
"version": "v1"
},
{
"created": "Tue, 19 Sep 2023 17:20:52 GMT",
"version": "v2"
},
{
"created": "Wed, 4 Oct 2023 13:17:38 GMT",
"version": "v3"
}
] |
2023-10-05
|
[
[
"Cunningham",
"Hoagy",
""
],
[
"Ewart",
"Aidan",
""
],
[
"Riggs",
"Logan",
""
],
[
"Huben",
"Robert",
""
],
[
"Sharkey",
"Lee",
""
]
] |
One of the roadblocks to a better understanding of neural networks' internals is \textit{polysemanticity}, where neurons appear to activate in multiple, semantically distinct contexts. Polysemanticity prevents us from identifying concise, human-understandable explanations for what neural networks are doing internally. One hypothesised cause of polysemanticity is \textit{superposition}, where neural networks represent more features than they have neurons by assigning features to an overcomplete set of directions in activation space, rather than to individual neurons. Here, we attempt to identify those directions, using sparse autoencoders to reconstruct the internal activations of a language model. These autoencoders learn sets of sparsely activating features that are more interpretable and monosemantic than directions identified by alternative approaches, where interpretability is measured by automated methods. Moreover, we show that with our learned set of features, we can pinpoint the features that are causally responsible for counterfactual behaviour on the indirect object identification task \citep{wang2022interpretability} to a finer degree than previous decompositions. This work indicates that it is possible to resolve superposition in language models using a scalable, unsupervised method. Our method may serve as a foundation for future mechanistic interpretability work, which we hope will enable greater model transparency and steerability.
|
1309.0633
|
Sandor Vagvolgyi
|
Sandor Vagvolgyi
|
Threefold Post Correspondence System
| null | null | null | null |
cs.CC cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce the concept of a threefold Post correspondence system (3PCS for
short) and we consider it as an instance of the threefold Post correspondence
problem. With each 3PCS, we associate three Post correspondence systems, i.e.,
three instances of the Post correspondence problem. We conjecture that, for each
3PCS, either the question of the threefold Post correspondence problem is
decidable, or the question of the Post correspondence problem is decidable for
some associated Post correspondence system.
|
[
{
"created": "Tue, 3 Sep 2013 10:23:08 GMT",
"version": "v1"
}
] |
2013-09-04
|
[
[
"Vagvolgyi",
"Sandor",
""
]
] |
We introduce the concept of a threefold Post correspondence system (3PCS for short) and we consider it as an instance of the threefold Post correspondence problem. With each 3PCS, we associate three Post correspondence systems, i.e., three instances of the Post correspondence problem. We conjecture that, for each 3PCS, either the question of the threefold Post correspondence problem is decidable, or the question of the Post correspondence problem is decidable for some associated Post correspondence system.
|
2311.16835
|
Kunpeng Wang
|
Kunpeng Wang, Chenglong Li, Zhengzheng Tu, Zhengyi Liu, Bin Luo
|
Unified-modal Salient Object Detection via Adaptive Prompt Learning
|
13 pages, 11 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Existing single-modal and multi-modal salient object detection (SOD) methods
focus on designing specific architectures tailored for their respective tasks.
However, developing completely different models for different tasks leads to
labor and time consumption, as well as high computational and practical
deployment costs. In this paper, we attempt to address both single-modal and
multi-modal SOD in a unified framework called UniSOD, which fully exploits the
overlapping prior knowledge between different tasks. Nevertheless, assigning
appropriate strategies to modality variable inputs is challenging. To this end,
UniSOD learns modality-aware prompts with task-specific hints through adaptive
prompt learning, which are plugged into the proposed pre-trained baseline SOD
model to handle corresponding tasks, while only requiring few learnable
parameters compared to training the entire model. Each modality-aware prompt is
generated from a switchable prompt generation block, which adaptively performs
structural switching based on single-modal and multi-modal inputs without human
intervention. Through end-to-end joint training, UniSOD achieves overall
performance improvement on 14 benchmark datasets for RGB, RGB-D, and RGB-T SOD,
which demonstrates that our method effectively and efficiently unifies
single-modal and multi-modal SOD tasks. The code and results are available at
https://github.com/Angknpng/UniSOD.
|
[
{
"created": "Tue, 28 Nov 2023 14:51:08 GMT",
"version": "v1"
},
{
"created": "Wed, 29 Nov 2023 13:14:58 GMT",
"version": "v2"
},
{
"created": "Fri, 15 Dec 2023 12:19:34 GMT",
"version": "v3"
},
{
"created": "Mon, 13 May 2024 02:55:17 GMT",
"version": "v4"
},
{
"created": "Wed, 5 Jun 2024 12:43:31 GMT",
"version": "v5"
}
] |
2024-06-06
|
[
[
"Wang",
"Kunpeng",
""
],
[
"Li",
"Chenglong",
""
],
[
"Tu",
"Zhengzheng",
""
],
[
"Liu",
"Zhengyi",
""
],
[
"Luo",
"Bin",
""
]
] |
Existing single-modal and multi-modal salient object detection (SOD) methods focus on designing specific architectures tailored for their respective tasks. However, developing completely different models for different tasks leads to labor and time consumption, as well as high computational and practical deployment costs. In this paper, we attempt to address both single-modal and multi-modal SOD in a unified framework called UniSOD, which fully exploits the overlapping prior knowledge between different tasks. Nevertheless, assigning appropriate strategies to modality variable inputs is challenging. To this end, UniSOD learns modality-aware prompts with task-specific hints through adaptive prompt learning, which are plugged into the proposed pre-trained baseline SOD model to handle corresponding tasks, while only requiring few learnable parameters compared to training the entire model. Each modality-aware prompt is generated from a switchable prompt generation block, which adaptively performs structural switching based on single-modal and multi-modal inputs without human intervention. Through end-to-end joint training, UniSOD achieves overall performance improvement on 14 benchmark datasets for RGB, RGB-D, and RGB-T SOD, which demonstrates that our method effectively and efficiently unifies single-modal and multi-modal SOD tasks. The code and results are available at https://github.com/Angknpng/UniSOD.
|
2305.13969
|
Matej Novosad
|
Matej Novosad, Robert Penicka, Vojtech Vonasek
|
CTopPRM: Clustering Topological PRM for Planning Multiple Distinct Paths
in 3D Environments
|
in IEEE Robotics and Automation Letters
| null |
10.1109/LRA.2023.3315539
| null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we propose a new method called Clustering Topological PRM
(CTopPRM) for finding multiple homotopically distinct paths in 3D cluttered
environments. Finding such distinct paths, e.g., going around an obstacle from
a different side, is useful in many applications. Among others, using multiple
distinct paths is necessary for optimization-based trajectory planners where
found trajectories are restricted to only a single homotopy class of a given
path. Distinct paths can also be used to guide sampling-based motion planning
and thus increase the effectiveness of planning in environments with narrow
passages. Graph-based representation called roadmap is a common representation
for path planning and also for finding multiple distinct paths. However,
challenging environments with multiple narrow passages require a densely
sampled roadmap to capture the connectivity of the environment. Searching such
a dense roadmap for multiple paths is computationally too expensive. Therefore,
the majority of existing methods construct only a sparse roadmap which,
however, struggles to find all distinct paths in challenging environments. To
this end, we propose the CTopPRM which creates a sparse graph by clustering an
initially sampled dense roadmap. Such a reduced roadmap allows fast
identification of homotopically distinct paths captured in the dense roadmap.
We show that, compared to existing methods, CTopPRM improves the probability of
finding all distinct paths by almost 20% in the tested environments within the
same run-time.
open-source package.
|
[
{
"created": "Tue, 23 May 2023 11:53:04 GMT",
"version": "v1"
},
{
"created": "Mon, 18 Sep 2023 12:58:36 GMT",
"version": "v2"
},
{
"created": "Thu, 28 Sep 2023 17:58:29 GMT",
"version": "v3"
}
] |
2023-09-29
|
[
[
"Novosad",
"Matej",
""
],
[
"Penicka",
"Robert",
""
],
[
"Vonasek",
"Vojtech",
""
]
] |
In this paper, we propose a new method called Clustering Topological PRM (CTopPRM) for finding multiple homotopically distinct paths in 3D cluttered environments. Finding such distinct paths, e.g., going around an obstacle from a different side, is useful in many applications. Among others, using multiple distinct paths is necessary for optimization-based trajectory planners where found trajectories are restricted to only a single homotopy class of a given path. Distinct paths can also be used to guide sampling-based motion planning and thus increase the effectiveness of planning in environments with narrow passages. Graph-based representation called roadmap is a common representation for path planning and also for finding multiple distinct paths. However, challenging environments with multiple narrow passages require a densely sampled roadmap to capture the connectivity of the environment. Searching such a dense roadmap for multiple paths is computationally too expensive. Therefore, the majority of existing methods construct only a sparse roadmap which, however, struggles to find all distinct paths in challenging environments. To this end, we propose the CTopPRM which creates a sparse graph by clustering an initially sampled dense roadmap. Such a reduced roadmap allows fast identification of homotopically distinct paths captured in the dense roadmap. We show that, compared to existing methods, CTopPRM improves the probability of finding all distinct paths by almost 20% in the tested environments within the same run-time. The source code of our method is released as an open-source package.
|
1811.09946
|
Thomas Sandholm
|
Thomas Sandholm, Bernardo A. Huberman
|
A Learning Approach to Wi-Fi Access
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We show experimentally that workload-based AP-STA associations can improve
system throughput significantly. We present a predictive model that guides
optimal resource allocations in dense Wi-Fi networks and achieves 72-77% of the
optimal throughput with varying training data set sizes using a 3-day trace of
real cable modem traffic.
|
[
{
"created": "Sun, 25 Nov 2018 05:26:45 GMT",
"version": "v1"
}
] |
2018-11-27
|
[
[
"Sandholm",
"Thomas",
""
],
[
"Huberman",
"Bernardo A.",
""
]
] |
We show experimentally that workload-based AP-STA associations can improve system throughput significantly. We present a predictive model that guides optimal resource allocations in dense Wi-Fi networks and achieves 72-77% of the optimal throughput with varying training data set sizes using a 3-day trace of real cable modem traffic.
|
2101.00543
|
Taehyeun Park
|
Taehyeun Park, Walid Saad, Bo Zhou
|
Centralized and Distributed Age of Information Minimization with
non-linear Aging Functions in the Internet of Things
|
19 pages, 11 figures, journal
| null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
Resource management in Internet of Things (IoT) systems is a major challenge
due to the massive scale and heterogeneity of the IoT system. For instance,
most IoT applications require timely delivery of collected information, which
is a key challenge for the IoT. In this paper, novel centralized and
distributed resource allocation schemes are proposed to enable IoT devices to
share limited communication resources and to transmit IoT messages in a timely
manner. In the considered system, the timeliness of information is captured
using non-linear age of information (AoI) metrics that can naturally quantify
the freshness of information. To model the inherent heterogeneity of the IoT
system, the non-linear aging functions are defined in terms of IoT device types
and message content. To minimize AoI, the proposed resource management schemes
allocate the limited communication resources considering AoI. In particular,
the proposed centralized scheme enables the base station to learn the device
types and to determine aging functions. Moreover, the proposed distributed
scheme enables the devices to share the limited communication resources based
on available information on other devices and their AoI. The convergence of the
proposed distributed scheme is proved, and the effectiveness in reducing the
AoI with partial information is analyzed. Simulation results show that the
proposed centralized scheme achieves significantly lower average instantaneous
AoI when compared to simple centralized allocation without learning, while the
proposed distributed scheme achieves significantly lower average instantaneous
AoI when compared to random allocation. The results also show that the proposed
centralized scheme outperforms the proposed distributed scheme in almost all
cases, but the distributed approach is more viable for a massive IoT.
|
[
{
"created": "Sun, 3 Jan 2021 02:19:33 GMT",
"version": "v1"
}
] |
2021-01-05
|
[
[
"Park",
"Taehyeun",
""
],
[
"Saad",
"Walid",
""
],
[
"Zhou",
"Bo",
""
]
] |
Resource management in Internet of Things (IoT) systems is a major challenge due to the massive scale and heterogeneity of the IoT system. For instance, most IoT applications require timely delivery of collected information, which is a key challenge for the IoT. In this paper, novel centralized and distributed resource allocation schemes are proposed to enable IoT devices to share limited communication resources and to transmit IoT messages in a timely manner. In the considered system, the timeliness of information is captured using non-linear age of information (AoI) metrics that can naturally quantify the freshness of information. To model the inherent heterogeneity of the IoT system, the non-linear aging functions are defined in terms of IoT device types and message content. To minimize AoI, the proposed resource management schemes allocate the limited communication resources considering AoI. In particular, the proposed centralized scheme enables the base station to learn the device types and to determine aging functions. Moreover, the proposed distributed scheme enables the devices to share the limited communication resources based on available information on other devices and their AoI. The convergence of the proposed distributed scheme is proved, and the effectiveness in reducing the AoI with partial information is analyzed. Simulation results show that the proposed centralized scheme achieves significantly lower average instantaneous AoI when compared to simple centralized allocation without learning, while the proposed distributed scheme achieves significantly lower average instantaneous AoI when compared to random allocation. The results also show that the proposed centralized scheme outperforms the proposed distributed scheme in almost all cases, but the distributed approach is more viable for a massive IoT.
|
1302.1153
|
Carlos Alberto Fernandez-y-Fernandez
|
Moises Homero Sanchez Lopez, Carlos Alberto Fernandez-y-Fernandez,
Jorge Rafael Aguilar Cisneros
|
On the need for optimization of the software development processes in
short-term projects
|
8 pages, conference proceedings T\'opicos Selectos de Tecnolog\'ias
de la Informaci\'on y Comunicaciones in Proceedings of the XXV Congreso
Nacional y XI Congreso Internacional de Inform\'atica y Computaci\'on ANIEI
2012
| null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Nowadays, most of the software development projects in Mexico are short-term
projects (micro and small projects); for this reason, in this paper we are
presenting a research proposal with the goal of identifying the elements
contributing to their success or failure. With this research, we are trying to
identify and propose techniques and tools that would contribute to the
successful outcome of these projects.
|
[
{
"created": "Tue, 5 Feb 2013 19:06:06 GMT",
"version": "v1"
}
] |
2013-02-06
|
[
[
"Lopez",
"Moises Homero Sanchez",
""
],
[
"Fernandez-y-Fernandez",
"Carlos Alberto",
""
],
[
"Cisneros",
"Jorge Rafael Aguilar",
""
]
] |
Nowadays, most of the software development projects in Mexico are short-term projects (micro and small projects); for this reason, in this paper we are presenting a research proposal with the goal of identifying the elements contributing to their success or failure. With this research, we are trying to identify and propose techniques and tools that would contribute to the successful outcome of these projects.
|
1211.5795
|
Alexandr Klimchik
|
Alexandr Klimchik (EMN, IRCCyN), Anatol Pashkevich (EMN, IRCCyN),
Damien Chablat (IRCCyN)
|
Stiffness modeling of non-perfect parallel manipulators
| null |
IEEE/RSJ International Conference on Intelligent Robots and
Systems (IROS 2012), Vilamoura : Portugal (2012)
| null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The paper focuses on the stiffness modeling of parallel manipulators composed
of non-perfect serial chains, whose geometrical parameters differ from the
nominal ones. In these manipulators, there usually exist essential internal
forces/torques that considerably affect the stiffness properties and also
change the end-effector location. These internal loadings are caused by
elastic deformations of the manipulator elements during assembly, while the
geometrical errors in the chains are compensated for by applying appropriate
forces. For this type of manipulator, a non-linear stiffness modeling
technique is proposed that allows us to take into account inaccuracy in
the chains and to aggregate their stiffness models for the case of both small and
large deflections. Advantages of the developed technique and its ability to
compute and compensate for the compliance errors caused by different factors
are illustrated by an example that deals with parallel manipulators of the
Orthoglide family.
|
[
{
"created": "Sun, 25 Nov 2012 18:55:54 GMT",
"version": "v1"
}
] |
2012-11-27
|
[
[
"Klimchik",
"Alexandr",
"",
"EMN, IRCCyN"
],
[
"Pashkevich",
"Anatol",
"",
"EMN, IRCCyN"
],
[
"Chablat",
"Damien",
"",
"IRCCyN"
]
] |
The paper focuses on the stiffness modeling of parallel manipulators composed of non-perfect serial chains, whose geometrical parameters differ from the nominal ones. In these manipulators, there usually exist essential internal forces/torques that considerably affect the stiffness properties and also change the end-effector location. These internal loadings are caused by elastic deformations of the manipulator elements during assembly, while the geometrical errors in the chains are compensated for by applying appropriate forces. For this type of manipulator, a non-linear stiffness modeling technique is proposed that allows us to take into account inaccuracy in the chains and to aggregate their stiffness models for the case of both small and large deflections. Advantages of the developed technique and its ability to compute and compensate for the compliance errors caused by different factors are illustrated by an example that deals with parallel manipulators of the Orthoglide family.
|
1901.05876
|
Bin Kong
|
Eric Wu, Bin Kong, Xin Wang, Junjie Bai, Yi Lu, Feng Gao, Shaoting
Zhang, Kunlin Cao, Qi Song, Siwei Lyu, Youbing Yin
|
Residual Attention based Network for Hand Bone Age Assessment
| null | null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Computerized automatic methods have been employed to boost the productivity
as well as objectiveness of hand bone age assessment. These approaches make
predictions according to the whole X-ray images, which include other objects
that may introduce distractions. Instead, our framework is inspired by the
clinical workflow (Tanner-Whitehouse) of hand bone age assessment, which
focuses on the key components of the hand. The proposed framework is composed
of two components: a Mask R-CNN subnet of pixelwise hand segmentation and a
residual attention network for hand bone age assessment. The Mask R-CNN subnet
segments the hands from X-ray images to avoid the distractions of other objects
(e.g., X-ray tags). The hierarchical attention components of the residual
attention subnet force our network to focus on the key components of the X-ray
images and generate the final predictions as well as the associated visual
supports, which is similar to the assessment procedure of clinicians. We
evaluate the performance of the proposed pipeline on the RSNA pediatric bone
age dataset and the results demonstrate its superiority over the previous
methods.
|
[
{
"created": "Fri, 21 Dec 2018 23:09:32 GMT",
"version": "v1"
}
] |
2019-01-18
|
[
[
"Wu",
"Eric",
""
],
[
"Kong",
"Bin",
""
],
[
"Wang",
"Xin",
""
],
[
"Bai",
"Junjie",
""
],
[
"Lu",
"Yi",
""
],
[
"Gao",
"Feng",
""
],
[
"Zhang",
"Shaoting",
""
],
[
"Cao",
"Kunlin",
""
],
[
"Song",
"Qi",
""
],
[
"Lyu",
"Siwei",
""
],
[
"Yin",
"Youbing",
""
]
] |
Computerized automatic methods have been employed to boost the productivity as well as objectiveness of hand bone age assessment. These approaches make predictions according to the whole X-ray images, which include other objects that may introduce distractions. Instead, our framework is inspired by the clinical workflow (Tanner-Whitehouse) of hand bone age assessment, which focuses on the key components of the hand. The proposed framework is composed of two components: a Mask R-CNN subnet of pixelwise hand segmentation and a residual attention network for hand bone age assessment. The Mask R-CNN subnet segments the hands from X-ray images to avoid the distractions of other objects (e.g., X-ray tags). The hierarchical attention components of the residual attention subnet force our network to focus on the key components of the X-ray images and generate the final predictions as well as the associated visual supports, which is similar to the assessment procedure of clinicians. We evaluate the performance of the proposed pipeline on the RSNA pediatric bone age dataset and the results demonstrate its superiority over the previous methods.
|
2403.09620
|
Shubhankar Mangesh Borse
|
Vibashan VS, Shubhankar Borse, Hyojin Park, Debasmit Das, Vishal
Patel, Munawar Hayat, Fatih Porikli
|
PosSAM: Panoptic Open-vocabulary Segment Anything
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we introduce an open-vocabulary panoptic segmentation model
that effectively unifies the strengths of the Segment Anything Model (SAM) with
the vision-language CLIP model in an end-to-end framework. While SAM excels in
generating spatially-aware masks, its decoder falls short in recognizing
object class information and tends to oversegment without additional guidance.
Existing approaches address this limitation by using multi-stage techniques and
employing separate models to generate class-aware prompts, such as bounding
boxes or segmentation masks. Our proposed method, PosSAM, is an end-to-end model
which leverages SAM's spatially rich features to produce instance-aware masks
and harnesses CLIP's semantically discriminative features for effective
instance classification. Specifically, we address the limitations of SAM and
propose a novel Local Discriminative Pooling (LDP) module leveraging
class-agnostic SAM and class-aware CLIP features for unbiased open-vocabulary
classification. Furthermore, we introduce a Mask-Aware Selective Ensembling
(MASE) algorithm that adaptively enhances the quality of generated masks and
boosts the performance of open-vocabulary classification during inference for
each image. We conducted extensive experiments to demonstrate our methods
strong generalization properties across multiple datasets, achieving
state-of-the-art performance with substantial improvements over SOTA
open-vocabulary panoptic segmentation methods. In both COCO to ADE20K and
ADE20K to COCO settings, PosSAM outperforms the previous state-of-the-art
methods by a large margin, 2.4 PQ and 4.6 PQ, respectively. Project Website:
https://vibashan.github.io/possam-web/.
|
[
{
"created": "Thu, 14 Mar 2024 17:55:03 GMT",
"version": "v1"
}
] |
2024-03-15
|
[
[
"VS",
"Vibashan",
""
],
[
"Borse",
"Shubhankar",
""
],
[
"Park",
"Hyojin",
""
],
[
"Das",
"Debasmit",
""
],
[
"Patel",
"Vishal",
""
],
[
"Hayat",
"Munawar",
""
],
[
"Porikli",
"Fatih",
""
]
] |
In this paper, we introduce an open-vocabulary panoptic segmentation model that effectively unifies the strengths of the Segment Anything Model (SAM) with the vision-language CLIP model in an end-to-end framework. While SAM excels in generating spatially-aware masks, its decoder falls short in recognizing object class information and tends to oversegment without additional guidance. Existing approaches address this limitation by using multi-stage techniques and employing separate models to generate class-aware prompts, such as bounding boxes or segmentation masks. Our proposed method, PosSAM, is an end-to-end model which leverages SAM's spatially rich features to produce instance-aware masks and harnesses CLIP's semantically discriminative features for effective instance classification. Specifically, we address the limitations of SAM and propose a novel Local Discriminative Pooling (LDP) module leveraging class-agnostic SAM and class-aware CLIP features for unbiased open-vocabulary classification. Furthermore, we introduce a Mask-Aware Selective Ensembling (MASE) algorithm that adaptively enhances the quality of generated masks and boosts the performance of open-vocabulary classification during inference for each image. We conducted extensive experiments to demonstrate our method's strong generalization properties across multiple datasets, achieving state-of-the-art performance with substantial improvements over SOTA open-vocabulary panoptic segmentation methods. In both COCO to ADE20K and ADE20K to COCO settings, PosSAM outperforms the previous state-of-the-art methods by a large margin, 2.4 PQ and 4.6 PQ, respectively. Project Website: https://vibashan.github.io/possam-web/.
|
1505.03476
|
Mingyu Chen
|
Zehan Cui, Tianyue Lu, Haiyang Pan, Sally A. Mckee, Mingyu Chen
|
Twin-Load: Building a Scalable Memory System over the Non-Scalable
Interface
|
submitted to PACT15
| null | null | null |
cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Commodity memory interfaces have difficulty in scaling memory capacity to
meet the needs of modern multicore and big data systems. DRAM device density
and maximum device count are constrained by technology, package, and signal
integrity issues that limit total memory capacity. Synchronous DRAM protocols
require data to be returned within a fixed latency, and thus memory extension
methods over commodity DDRx interfaces fail to support scalable topologies.
Current extension approaches either use slow PCIe interfaces, or require
expensive changes to the memory interface, which limits commercial
adoptability. Here we propose twin-load, a lightweight asynchronous memory
access mechanism over the synchronous DDRx interface. Twin-load uses two
special loads to accomplish one access request to extended memory: the first
serves as a prefetch command to the DRAM system, and the second asynchronously
retrieves the required data. Twin-load requires no hardware changes on the processor
side and only slight software modifications. We emulate this system on a
prototype to demonstrate the feasibility of our approach. Twin-load has
comparable performance to NUMA extended memory and outperforms a page-swapping
PCIe-based system by several orders of magnitude. Twin-load thus enables
instant capacity increases on commodity platforms, but more importantly, our
architecture opens opportunities for the design of novel, efficient, scalable,
cost-effective memory subsystems.
|
[
{
"created": "Wed, 13 May 2015 17:54:15 GMT",
"version": "v1"
}
] |
2015-05-14
|
[
[
"Cui",
"Zehan",
""
],
[
"Lu",
"Tianyue",
""
],
[
"Pan",
"Haiyang",
""
],
[
"Mckee",
"Sally A.",
""
],
[
"Chen",
"Mingyu",
""
]
] |
Commodity memory interfaces have difficulty in scaling memory capacity to meet the needs of modern multicore and big data systems. DRAM device density and maximum device count are constrained by technology, package, and signal integrity issues that limit total memory capacity. Synchronous DRAM protocols require data to be returned within a fixed latency, and thus memory extension methods over commodity DDRx interfaces fail to support scalable topologies. Current extension approaches either use slow PCIe interfaces, or require expensive changes to the memory interface, which limits commercial adoptability. Here we propose twin-load, a lightweight asynchronous memory access mechanism over the synchronous DDRx interface. Twin-load uses two special loads to accomplish one access request to extended memory: the first serves as a prefetch command to the DRAM system, and the second asynchronously retrieves the required data. Twin-load requires no hardware changes on the processor side and only slight software modifications. We emulate this system on a prototype to demonstrate the feasibility of our approach. Twin-load has comparable performance to NUMA extended memory and outperforms a page-swapping PCIe-based system by several orders of magnitude. Twin-load thus enables instant capacity increases on commodity platforms, but more importantly, our architecture opens opportunities for the design of novel, efficient, scalable, cost-effective memory subsystems.
|
2107.00440
|
Piji Li
|
Dong Wang, Ning Ding, Piji Li, Hai-Tao Zheng
|
CLINE: Contrastive Learning with Semantic Negative Examples for Natural
Language Understanding
|
ACL 2021, Main Conference, Long Paper
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Although pre-trained language models have proven useful for learning
high-quality semantic representations, these models are still vulnerable to
simple perturbations. Recent works aiming to improve the robustness of
pre-trained models mainly focus on adversarial training from perturbed examples
with similar semantics, neglecting the utilization of different or even
opposite semantics. Different from the image processing field, the text is
discrete and few word substitutions can cause significant semantic changes. To
study the impact of semantics caused by small perturbations, we conduct a
series of pilot experiments and surprisingly find that adversarial training is
useless or even harmful for the model to detect these semantic changes. To
address this problem, we propose Contrastive Learning with semantIc Negative
Examples (CLINE), which constructs semantic negative examples in an
unsupervised manner to improve robustness under semantic adversarial attacks. By comparing
with similar and opposite semantic examples, the model can effectively perceive
the semantic changes caused by small perturbations. Empirical results show that
our approach yields substantial improvements on a range of sentiment analysis,
reasoning, and reading comprehension tasks. CLINE also ensures compactness
within the same semantics and separability across different semantics at the
sentence level.
|
[
{
"created": "Thu, 1 Jul 2021 13:34:12 GMT",
"version": "v1"
}
] |
2021-07-02
|
[
[
"Wang",
"Dong",
""
],
[
"Ding",
"Ning",
""
],
[
"Li",
"Piji",
""
],
[
"Zheng",
"Hai-Tao",
""
]
] |
Although pre-trained language models have proven useful for learning high-quality semantic representations, these models are still vulnerable to simple perturbations. Recent works aiming to improve the robustness of pre-trained models mainly focus on adversarial training from perturbed examples with similar semantics, neglecting the utilization of different or even opposite semantics. Different from the image processing field, the text is discrete and few word substitutions can cause significant semantic changes. To study the impact of semantics caused by small perturbations, we conduct a series of pilot experiments and surprisingly find that adversarial training is useless or even harmful for the model to detect these semantic changes. To address this problem, we propose Contrastive Learning with semantIc Negative Examples (CLINE), which constructs semantic negative examples in an unsupervised manner to improve robustness under semantic adversarial attacks. By comparing with similar and opposite semantic examples, the model can effectively perceive the semantic changes caused by small perturbations. Empirical results show that our approach yields substantial improvements on a range of sentiment analysis, reasoning, and reading comprehension tasks. CLINE also ensures compactness within the same semantics and separability across different semantics at the sentence level.
|
2312.08309
|
Faisal Haque Bappy
|
Sabbir Ahmed, Md Nahiduzzaman, Tariqul Islam, Faisal Haque Bappy,
Tarannum Shaila Zaman, Raiful Hasan
|
FASTEN: Towards a FAult-tolerant and STorage EfficieNt Cloud: Balancing
Between Replication and Deduplication
| null | null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the surge in cloud storage adoption, enterprises face challenges
managing data duplication and exponential data growth. Deduplication mitigates
redundancy, yet maintaining redundancy ensures high availability, incurring
storage costs. Balancing these aspects is a significant research concern. We
propose FASTEN, a distributed cloud storage scheme ensuring efficiency,
security, and high availability. FASTEN achieves fault tolerance by dispersing
data subsets optimally across servers and maintains redundancy for high
availability. Experimental results show FASTEN's effectiveness in fault
tolerance, cost reduction, batch auditing, and file and block-level
deduplication. It outperforms existing systems with low time complexity, strong
fault tolerance, and commendable deduplication performance.
|
[
{
"created": "Wed, 13 Dec 2023 17:27:17 GMT",
"version": "v1"
}
] |
2023-12-14
|
[
[
"Ahmed",
"Sabbir",
""
],
[
"Nahiduzzaman",
"Md",
""
],
[
"Islam",
"Tariqul",
""
],
[
"Bappy",
"Faisal Haque",
""
],
[
"Zaman",
"Tarannum Shaila",
""
],
[
"Hasan",
"Raiful",
""
]
] |
With the surge in cloud storage adoption, enterprises face challenges managing data duplication and exponential data growth. Deduplication mitigates redundancy, yet maintaining redundancy ensures high availability, incurring storage costs. Balancing these aspects is a significant research concern. We propose FASTEN, a distributed cloud storage scheme ensuring efficiency, security, and high availability. FASTEN achieves fault tolerance by dispersing data subsets optimally across servers and maintains redundancy for high availability. Experimental results show FASTEN's effectiveness in fault tolerance, cost reduction, batch auditing, and file and block-level deduplication. It outperforms existing systems with low time complexity, strong fault tolerance, and commendable deduplication performance.
|
2011.04542
|
Seohyun Kim
|
Gareth Ari Aye, Seohyun Kim, Hongyu Li
|
Learning Autocompletion from Real-World Datasets
| null | null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Code completion is a popular software development tool integrated into all
major IDEs. Many neural language models have achieved promising results in
completion suggestion prediction on synthetic benchmarks. However, a recent
study When Code Completion Fails: a Case Study on Real-World Completions
demonstrates that these results may not translate to improvements in real-world
performance. To combat this effect, we train models on real-world code
completion examples and find that these models outperform models trained on
committed source code and working version snapshots by 12.8% and 13.8% accuracy
respectively. We observe this improvement across modeling technologies and show
through A/B testing that it corresponds to a 6.2% increase in programmers'
actual autocompletion usage. Furthermore, our study characterizes a large
corpus of logged autocompletion usages to investigate why training on
real-world examples leads to stronger models.
|
[
{
"created": "Mon, 9 Nov 2020 16:33:02 GMT",
"version": "v1"
}
] |
2020-11-10
|
[
[
"Aye",
"Gareth Ari",
""
],
[
"Kim",
"Seohyun",
""
],
[
"Li",
"Hongyu",
""
]
] |
Code completion is a popular software development tool integrated into all major IDEs. Many neural language models have achieved promising results in completion suggestion prediction on synthetic benchmarks. However, a recent study When Code Completion Fails: a Case Study on Real-World Completions demonstrates that these results may not translate to improvements in real-world performance. To combat this effect, we train models on real-world code completion examples and find that these models outperform models trained on committed source code and working version snapshots by 12.8% and 13.8% accuracy respectively. We observe this improvement across modeling technologies and show through A/B testing that it corresponds to a 6.2% increase in programmers' actual autocompletion usage. Furthermore, our study characterizes a large corpus of logged autocompletion usages to investigate why training on real-world examples leads to stronger models.
|
2403.18197
|
Changyi Lin
|
Changyi Lin, Xingyu Liu, Yuxiang Yang, Yaru Niu, Wenhao Yu, Tingnan
Zhang, Jie Tan, Byron Boots, Ding Zhao
|
LocoMan: Advancing Versatile Quadrupedal Dexterity with Lightweight
Loco-Manipulators
|
Project page: https://linchangyi1.github.io/LocoMan
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Quadrupedal robots have emerged as versatile agents capable of locomoting and
manipulating in complex environments. Traditional designs typically rely on the
robot's inherent body parts or incorporate top-mounted arms for manipulation
tasks. However, these configurations may limit the robot's operational
dexterity, efficiency and adaptability, particularly in cluttered or
constrained spaces. In this work, we present LocoMan, a dexterous quadrupedal
robot with a novel morphology to perform versatile manipulation in diverse
constrained environments. By equipping a Unitree Go1 robot with two low-cost
and lightweight modular 3-DoF loco-manipulators on its front calves, LocoMan
leverages the combined mobility and functionality of the legs and grippers for
complex manipulation tasks that require precise 6D positioning of the end
effector in a wide workspace. To harness the loco-manipulation capabilities of
LocoMan, we introduce a unified control framework that extends the whole-body
controller (WBC) to integrate the dynamics of loco-manipulators. Through
experiments, we validate that the proposed whole-body controller can accurately
and stably follow desired 6D trajectories of the end effector and torso, which,
when combined with the large workspace from our design, facilitates a diverse
set of challenging dexterous loco-manipulation tasks in confined spaces, such
as opening doors, plugging into sockets, picking objects in narrow and
low-lying spaces, and bimanual manipulation.
|
[
{
"created": "Wed, 27 Mar 2024 02:13:24 GMT",
"version": "v1"
}
] |
2024-03-28
|
[
[
"Lin",
"Changyi",
""
],
[
"Liu",
"Xingyu",
""
],
[
"Yang",
"Yuxiang",
""
],
[
"Niu",
"Yaru",
""
],
[
"Yu",
"Wenhao",
""
],
[
"Zhang",
"Tingnan",
""
],
[
"Tan",
"Jie",
""
],
[
"Boots",
"Byron",
""
],
[
"Zhao",
"Ding",
""
]
] |
Quadrupedal robots have emerged as versatile agents capable of locomoting and manipulating in complex environments. Traditional designs typically rely on the robot's inherent body parts or incorporate top-mounted arms for manipulation tasks. However, these configurations may limit the robot's operational dexterity, efficiency and adaptability, particularly in cluttered or constrained spaces. In this work, we present LocoMan, a dexterous quadrupedal robot with a novel morphology to perform versatile manipulation in diverse constrained environments. By equipping a Unitree Go1 robot with two low-cost and lightweight modular 3-DoF loco-manipulators on its front calves, LocoMan leverages the combined mobility and functionality of the legs and grippers for complex manipulation tasks that require precise 6D positioning of the end effector in a wide workspace. To harness the loco-manipulation capabilities of LocoMan, we introduce a unified control framework that extends the whole-body controller (WBC) to integrate the dynamics of loco-manipulators. Through experiments, we validate that the proposed whole-body controller can accurately and stably follow desired 6D trajectories of the end effector and torso, which, when combined with the large workspace from our design, facilitates a diverse set of challenging dexterous loco-manipulation tasks in confined spaces, such as opening doors, plugging into sockets, picking objects in narrow and low-lying spaces, and bimanual manipulation.
|
cs/0607086
|
Farid Nouioua
|
Daniel Kayser (LIPN), Farid Nouioua (LIPN)
|
Representing Knowledge about Norms
| null |
The 16th European Conference on Artificial Intelligence (ECAI'04)
(2004) 363-367
| null | null |
cs.AI
| null |
Norms are essential to extend inference: inferences based on norms are far
richer than those based on logical implications. In recent decades, much
effort has been devoted to reasoning about a domain once its norms are represented.
How to extract and express those norms has received far less attention.
Extraction is difficult: as the readers are supposed to know them, the norms of
a domain are seldom made explicit. For one thing, extracting norms requires a
language to represent them, and this is the topic of this paper. We apply this
language to represent norms in the domain of driving, and show that it is
adequate to reason on the causes of accidents, as described by car-crash
reports.
|
[
{
"created": "Tue, 18 Jul 2006 08:15:04 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Kayser",
"Daniel",
"",
"LIPN"
],
[
"Nouioua",
"Farid",
"",
"LIPN"
]
] |
Norms are essential to extend inference: inferences based on norms are far richer than those based on logical implications. In recent decades, much effort has been devoted to reasoning about a domain once its norms are represented. How to extract and express those norms has received far less attention. Extraction is difficult: as the readers are supposed to know them, the norms of a domain are seldom made explicit. For one thing, extracting norms requires a language to represent them, and this is the topic of this paper. We apply this language to represent norms in the domain of driving, and show that it is adequate to reason on the causes of accidents, as described by car-crash reports.
|
1704.04459
|
Ali Kariminezhad
|
Ali Kariminezhad, Soheil Gherekhloo, and Aydin Sezgin
|
Optimal Power Splitting for Simultaneous Information Detection and
Energy Harvesting
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This letter deals with the joint information and energy processing at a
receiver of a point-to-point communication channel. In particular, the
trade-off between the achievable information rate and harvested energy for a
multiple-antenna power splitting (PS) receiver is investigated. Here, the rate-
energy region characterization is of particular interest, which is
intrinsically a non-convex problem. In this letter, an efficient algorithm is
proposed for obtaining an approximate solution to the problem in polynomial
time. This algorithm is mainly based on the Taylor approximation in conjunction
with semidefinite relaxation (SDR) which is solved by interior-point methods.
Moreover, we utilize the Gaussian randomization procedure to obtain a feasible
solution for the original problem. It is shown that by proper receiver design
the rate-energy region can be significantly enlarged compared to the state of
the art, while at the same time the receiver hardware cost is reduced by
utilizing fewer energy harvesting circuits.
|
[
{
"created": "Fri, 14 Apr 2017 15:31:10 GMT",
"version": "v1"
}
] |
2017-04-17
|
[
[
"Kariminezhad",
"Ali",
""
],
[
"Gherekhloo",
"Soheil",
""
],
[
"Sezgin",
"Aydin",
""
]
] |
This letter deals with the joint information and energy processing at a receiver of a point-to-point communication channel. In particular, the trade-off between the achievable information rate and harvested energy for a multiple-antenna power splitting (PS) receiver is investigated. Here, the rate-energy region characterization is of particular interest, which is intrinsically a non-convex problem. In this letter, an efficient algorithm is proposed for obtaining an approximate solution to the problem in polynomial time. This algorithm is mainly based on the Taylor approximation in conjunction with semidefinite relaxation (SDR) which is solved by interior-point methods. Moreover, we utilize the Gaussian randomization procedure to obtain a feasible solution for the original problem. It is shown that by proper receiver design the rate-energy region can be significantly enlarged compared to the state of the art, while at the same time the receiver hardware cost is reduced by utilizing fewer energy harvesting circuits.
|
2002.11885
|
Gaurav Nagesh Shetty
|
Gaurav N.Shetty, Konstantinos Slavakis, Ukash Nakarmi, Gesualdo
Scutari, and Leslie Ying
|
Kernel Bi-Linear Modeling for Reconstructing Data on Manifolds: The
Dynamic-MRI Case
| null | null | null | null |
cs.LG cs.CV eess.IV stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper establishes a kernel-based framework for reconstructing data on
manifolds, tailored to fit the dynamic-(d)MRI-data recovery problem. The
proposed methodology exploits simple tangent-space geometries of manifolds in
reproducing kernel Hilbert spaces and follows classical kernel-approximation
arguments to form the data-recovery task as a bi-linear inverse problem.
Departing from mainstream approaches, the proposed methodology uses no training
data, employs no graph Laplacian matrix to penalize the optimization task, uses
no costly (kernel) pre-imaging step to map feature points back to the input
space, and utilizes complex-valued kernel functions to account for k-space
data. The framework is validated on synthetically generated dMRI data, where
comparisons against state-of-the-art schemes highlight the rich potential of
the proposed approach in data-recovery problems.
|
[
{
"created": "Thu, 27 Feb 2020 02:42:08 GMT",
"version": "v1"
}
] |
2020-02-28
|
[
[
"Shetty",
"Gaurav N.",
""
],
[
"Slavakis",
"Konstantinos",
""
],
[
"Nakarmi",
"Ukash",
""
],
[
"Scutari",
"Gesualdo",
""
],
[
"Ying",
"Leslie",
""
]
] |
This paper establishes a kernel-based framework for reconstructing data on manifolds, tailored to fit the dynamic-(d)MRI-data recovery problem. The proposed methodology exploits simple tangent-space geometries of manifolds in reproducing kernel Hilbert spaces and follows classical kernel-approximation arguments to form the data-recovery task as a bi-linear inverse problem. Departing from mainstream approaches, the proposed methodology uses no training data, employs no graph Laplacian matrix to penalize the optimization task, uses no costly (kernel) pre-imaging step to map feature points back to the input space, and utilizes complex-valued kernel functions to account for k-space data. The framework is validated on synthetically generated dMRI data, where comparisons against state-of-the-art schemes highlight the rich potential of the proposed approach in data-recovery problems.
|
2009.03115
|
Youngtaek Kim
|
Youngtaek Kim, Jaeyoung Kim, Hyeon Jeon, Young-Ho Kim, Hyunjoo Song,
Bohyoung Kim, Jinwook Seo
|
Githru: Visual Analytics for Understanding Software Development History
Through Git Metadata Analysis
|
IEEE VIS 2020 (VAST), ACM 2012 CCS - Human-centered computing,
Visualization
|
IEEE Transactions on Visualization and Computer Graphics (TVCG)
Feb. 2021, pp. 656-666, vol. 27
|
10.1109/TVCG.2020.3030414
| null |
cs.SE cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Git metadata contains rich information for developers to understand the
overall context of a large software development project. Thus it can help new
developers, managers, and testers understand the history of development without
needing to dig into a large pile of unfamiliar source code. However, the
current tools for Git visualization are not adequate to analyze and explore the
metadata: They focus mainly on improving the usability of Git commands instead
of on helping users understand the development history. Furthermore, they do
not scale for large and complex Git commit graphs, which can play an important
role in understanding the overall development history. In this paper, we
present Githru, an interactive visual analytics system that enables developers
to effectively understand the context of development history through the
interactive exploration of Git metadata. We design an interactive visual
encoding idiom to represent a large Git graph in a scalable manner while
preserving the topological structures in the Git graph. To enable scalable
exploration of a large Git commit graph, we propose novel techniques (graph
reconstruction, clustering, and Context-Preserving Squash Merge (CSM) methods)
to abstract a large-scale Git commit graph. Based on these Git commit graph
abstraction techniques, Githru provides an interactive summary view to help
users gain an overview of the development history and a comparison view in
which users can compare different clusters of commits. The efficacy of Githru
has been demonstrated by case studies with domain experts using real-world,
in-house datasets from a large software development team at a major
international IT company. A controlled user study with 12 developers comparing
Githru to previous tools also confirms the effectiveness of Githru in terms of
task completion time.
|
[
{
"created": "Mon, 7 Sep 2020 14:06:59 GMT",
"version": "v1"
},
{
"created": "Tue, 8 Sep 2020 04:15:19 GMT",
"version": "v2"
}
] |
2021-04-29
|
[
[
"Kim",
"Youngtaek",
""
],
[
"Kim",
"Jaeyoung",
""
],
[
"Jeon",
"Hyeon",
""
],
[
"Kim",
"Young-Ho",
""
],
[
"Song",
"Hyunjoo",
""
],
[
"Kim",
"Bohyoung",
""
],
[
"Seo",
"Jinwook",
""
]
] |
Git metadata contains rich information for developers to understand the overall context of a large software development project. Thus it can help new developers, managers, and testers understand the history of development without needing to dig into a large pile of unfamiliar source code. However, the current tools for Git visualization are not adequate to analyze and explore the metadata: They focus mainly on improving the usability of Git commands instead of on helping users understand the development history. Furthermore, they do not scale for large and complex Git commit graphs, which can play an important role in understanding the overall development history. In this paper, we present Githru, an interactive visual analytics system that enables developers to effectively understand the context of development history through the interactive exploration of Git metadata. We design an interactive visual encoding idiom to represent a large Git graph in a scalable manner while preserving the topological structures in the Git graph. To enable scalable exploration of a large Git commit graph, we propose novel techniques (graph reconstruction, clustering, and Context-Preserving Squash Merge (CSM) methods) to abstract a large-scale Git commit graph. Based on these Git commit graph abstraction techniques, Githru provides an interactive summary view to help users gain an overview of the development history and a comparison view in which users can compare different clusters of commits. The efficacy of Githru has been demonstrated by case studies with domain experts using real-world, in-house datasets from a large software development team at a major international IT company. A controlled user study with 12 developers comparing Githru to previous tools also confirms the effectiveness of Githru in terms of task completion time.
|
1007.3624
|
Abuzer Yakaryilmaz
|
Abuzer Yakaryilmaz and A. C. Cem Say
|
Unbounded-error quantum computation with small space bounds
|
A preliminary version of this paper appeared in the Proceedings of
the Fourth International Computer Science Symposium in Russia, pages
356--367, 2009
|
Information and Computation, Volume 209, Issue 6, June 2011, Pages
873-892
|
10.1016/j.ic.2011.01.008
| null |
cs.CC quant-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We prove the following facts about the language recognition power of quantum
Turing machines (QTMs) in the unbounded error setting: QTMs are strictly more
powerful than probabilistic Turing machines for any common space bound $ s $
satisfying $ s(n)=o(\log \log n) $. For "one-way" Turing machines, where the
input tape head is not allowed to move left, the above result holds for
$s(n)=o(\log n) $. We also give a characterization for the class of languages
recognized with unbounded error by real-time quantum finite automata (QFAs)
with restricted measurements. It turns out that these automata are equal in
power to their probabilistic counterparts, and this fact does not change when
the QFA model is augmented to allow general measurements and mixed states.
Unlike the case with classical finite automata, when the QFA tape head is
allowed to remain stationary in some steps, more languages become recognizable.
We define and use a QTM model that generalizes the other variants introduced
earlier in the study of quantum space complexity.
|
[
{
"created": "Wed, 21 Jul 2010 12:00:14 GMT",
"version": "v1"
},
{
"created": "Thu, 10 Feb 2011 15:05:45 GMT",
"version": "v2"
}
] |
2014-01-29
|
[
[
"Yakaryilmaz",
"Abuzer",
""
],
[
"Say",
"A. C. Cem",
""
]
] |
We prove the following facts about the language recognition power of quantum Turing machines (QTMs) in the unbounded error setting: QTMs are strictly more powerful than probabilistic Turing machines for any common space bound $ s $ satisfying $ s(n)=o(\log \log n) $. For "one-way" Turing machines, where the input tape head is not allowed to move left, the above result holds for $s(n)=o(\log n) $. We also give a characterization for the class of languages recognized with unbounded error by real-time quantum finite automata (QFAs) with restricted measurements. It turns out that these automata are equal in power to their probabilistic counterparts, and this fact does not change when the QFA model is augmented to allow general measurements and mixed states. Unlike the case with classical finite automata, when the QFA tape head is allowed to remain stationary in some steps, more languages become recognizable. We define and use a QTM model that generalizes the other variants introduced earlier in the study of quantum space complexity.
|
1703.09177
|
Farzad Salehisadaghiani
|
Farzad Salehisadaghiani
|
Nash Equilibrium in Social Media
|
arXiv admin note: substantial text overlap with arXiv:1612.07179
| null | null | null |
cs.GT cs.SI cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we investigate an application of a Nash equilibrium seeking
algorithm in a social network. In a networked game each player (user) takes
action in response to other players' actions in order to decrease (increase)
his cost (profit) in the network. We assume that the players' cost functions
are not necessarily dependent on the actions of all players. This assumption
better mimics standard social media rules. A communication graph is
defined for the game through which players are able to share their information
with only their neighbors. We assume that the communication neighbors
necessarily affect the players' cost functions while the reverse is not always
true. In this game, the players are only aware of their own cost functions and
actions. Thus, each of them maintains an estimate of the others' actions and
shares it with the neighbors to update his action and estimates.
|
[
{
"created": "Mon, 27 Mar 2017 16:44:49 GMT",
"version": "v1"
}
] |
2017-03-28
|
[
[
"Salehisadaghiani",
"Farzad",
""
]
] |
In this work, we investigate an application of a Nash equilibrium seeking algorithm in a social network. In a networked game each player (user) takes action in response to other players' actions in order to decrease (increase) his cost (profit) in the network. We assume that the players' cost functions are not necessarily dependent on the actions of all players. This assumption better mimics standard social media rules. A communication graph is defined for the game through which players are able to share their information with only their neighbors. We assume that the communication neighbors necessarily affect the players' cost functions while the reverse is not always true. In this game, the players are only aware of their own cost functions and actions. Thus, each of them maintains an estimate of the others' actions and shares it with the neighbors to update his action and estimates.
|
cs/0605123
|
Jaime Cardoso
|
Jaime S. Cardoso
|
Classification of Ordinal Data
|
62 pages, MSc thesis
| null | null | null |
cs.AI
| null |
Classification of ordinal data is one of the most important tasks of relation
learning. In this thesis a novel framework for ordered classes is proposed. The
technique reduces the problem of classifying ordered classes to the standard
two-class problem. The introduced method is then mapped into support vector
machines and neural networks. Compared with a well-known approach using
pairwise objects as training samples, the new algorithm has a reduced
complexity and training time. A second novel model, the unimodal model, is also
introduced and a parametric version is mapped into neural networks. Several
case studies are presented to assert the validity of the proposed models.
|
[
{
"created": "Fri, 26 May 2006 09:44:44 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Cardoso",
"Jaime S.",
""
]
] |
Classification of ordinal data is one of the most important tasks of relation learning. In this thesis a novel framework for ordered classes is proposed. The technique reduces the problem of classifying ordered classes to the standard two-class problem. The introduced method is then mapped into support vector machines and neural networks. Compared with a well-known approach using pairwise objects as training samples, the new algorithm has a reduced complexity and training time. A second novel model, the unimodal model, is also introduced and a parametric version is mapped into neural networks. Several case studies are presented to assert the validity of the proposed models.
|
cs/0602052
|
Grigoriev Evgeniy
|
Evgeniy Grigoriev
|
The OverRelational Manifesto
|
34 pages
| null | null | null |
cs.DB cs.DS
| null |
The OverRelational Manifesto (hereafter ORM) proposes a possible approach to
creation of data storage systems of the next generation. ORM starts from the
requirement that information in a relational database is represented by a set
of relation values. Accordingly, it is assumed that the information about any
entity of an enterprise must also be represented as a set of relation values
(the ORM main requirement). A system of types is introduced, which allows one
to fulfill the main requirement. The data are represented in the form of
complex objects, and the state of any object is described as a set of relation
values. We emphasize that the types describing the objects are encapsulated,
inherited, and polymorphic. Then, it is shown that the data represented as a
set of such objects may also be represented as a set of relational values
defined on the set of scalar domains (dual data representation). In the general
case, any class is associated with a set of relation variables (R-variables)
each one containing some data about all objects of this class existing in the
system. One of the key points is the fact that the usage of complex (from the
user's viewpoint) refined names of R-variables and their attributes makes it
possible to preserve the semantics of complex data structures represented in
the form of a set of relation values. The most important part of the data
storage system created on the approach proposed is an object-oriented
translator operating over a relational DBMS. The expressiveness of such a
system is comparable with that of OO programming languages.
|
[
{
"created": "Tue, 14 Feb 2006 12:19:08 GMT",
"version": "v1"
},
{
"created": "Wed, 15 Feb 2006 10:28:24 GMT",
"version": "v2"
},
{
"created": "Fri, 17 Mar 2006 09:57:07 GMT",
"version": "v3"
}
] |
2007-05-23
|
[
[
"Grigoriev",
"Evgeniy",
""
]
] |
The OverRelational Manifesto (hereafter ORM) proposes a possible approach to creation of data storage systems of the next generation. ORM starts from the requirement that information in a relational database is represented by a set of relation values. Accordingly, it is assumed that the information about any entity of an enterprise must also be represented as a set of relation values (the ORM main requirement). A system of types is introduced, which allows one to fulfill the main requirement. The data are represented in the form of complex objects, and the state of any object is described as a set of relation values. We emphasize that the types describing the objects are encapsulated, inherited, and polymorphic. Then, it is shown that the data represented as a set of such objects may also be represented as a set of relational values defined on the set of scalar domains (dual data representation). In the general case, any class is associated with a set of relation variables (R-variables) each one containing some data about all objects of this class existing in the system. One of the key points is the fact that the usage of complex (from the user's viewpoint) refined names of R-variables and their attributes makes it possible to preserve the semantics of complex data structures represented in the form of a set of relation values. The most important part of the data storage system created on the approach proposed is an object-oriented translator operating over a relational DBMS. The expressiveness of such a system is comparable with that of OO programming languages.
|
2008.13664
|
Thomas Lange
|
Thomas Lange, Aneesh Balakrishnan, Maximilien Glorieux, Dan
Alexandrescu, Luca Sterpone
|
Machine Learning Clustering Techniques for Selective Mitigation of
Critical Design Features
| null | null |
10.1109/IOLTS50870.2020.9159751
| null |
cs.AR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Selective mitigation or selective hardening is an effective technique to
obtain a good trade-off between the improvements in the overall reliability of
a circuit and the hardware overhead induced by the hardening techniques.
Selective mitigation relies on preferentially protecting circuit instances
according to their susceptibility and criticality. However, ranking circuit
parts in terms of vulnerability usually requires computationally intensive
fault-injection simulation campaigns. This paper presents a new methodology
which uses machine learning clustering techniques to group flip-flops with
similar expected contributions to the overall functional failure rate, based on
the analysis of a compact set of features combining attributes from static
elements and dynamic elements. Fault simulation campaigns can then be executed
on a per-group basis, significantly reducing the time and cost of the
evaluation. The effectiveness of grouping similar sensitive flip-flops by
machine learning clustering algorithms is evaluated on a practical
example. Different clustering algorithms are applied and the results are
compared to an ideal selective mitigation obtained by exhaustive
fault-injection simulation.
|
[
{
"created": "Mon, 31 Aug 2020 15:03:16 GMT",
"version": "v1"
},
{
"created": "Fri, 2 Apr 2021 15:48:17 GMT",
"version": "v2"
}
] |
2021-04-05
|
[
[
"Lange",
"Thomas",
""
],
[
"Balakrishnan",
"Aneesh",
""
],
[
"Glorieux",
"Maximilien",
""
],
[
"Alexandrescu",
"Dan",
""
],
[
"Sterpone",
"Luca",
""
]
] |
Selective mitigation or selective hardening is an effective technique to obtain a good trade-off between the improvements in the overall reliability of a circuit and the hardware overhead induced by the hardening techniques. Selective mitigation relies on preferentially protecting circuit instances according to their susceptibility and criticality. However, ranking circuit parts in terms of vulnerability usually requires computationally intensive fault-injection simulation campaigns. This paper presents a new methodology which uses machine learning clustering techniques to group flip-flops with similar expected contributions to the overall functional failure rate, based on the analysis of a compact set of features combining attributes from static elements and dynamic elements. Fault simulation campaigns can then be executed on a per-group basis, significantly reducing the time and cost of the evaluation. The effectiveness of grouping similar sensitive flip-flops by machine learning clustering algorithms is evaluated on a practical example. Different clustering algorithms are applied and the results are compared to an ideal selective mitigation obtained by exhaustive fault-injection simulation.
|
2407.10793
|
Hannah Sansford
|
Hannah Sansford, Nicholas Richardson, Hermina Petric Maretic, Juba
Nait Saada
|
GraphEval: A Knowledge-Graph Based LLM Hallucination Evaluation
Framework
|
12 pages, to be published at KiL'24: Workshop on Knowledge-infused
Learning co-located with 30th ACM KDD Conference, August 26, 2024, Barcelona,
Spain
| null | null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Methods to evaluate Large Language Model (LLM) responses and detect
inconsistencies, also known as hallucinations, with respect to the provided
knowledge, are becoming increasingly important for LLM applications. Current
metrics fall short in their ability to provide explainable decisions,
systematically check all pieces of information in the response, and are often
too computationally expensive to be used in practice. We present GraphEval: a
hallucination evaluation framework based on representing information in
Knowledge Graph (KG) structures. Our method identifies the specific triples in
the KG that are prone to hallucinations and hence provides more insight into
where in the response a hallucination has occurred, if at all, than previous
methods. Furthermore, using our approach in conjunction with state-of-the-art
natural language inference (NLI) models leads to an improvement in balanced
accuracy on various hallucination benchmarks, compared to using the raw NLI
models. Lastly, we explore the use of GraphEval for hallucination correction by
leveraging the structure of the KG, a method we name GraphCorrect, and
demonstrate that the majority of hallucinations can indeed be rectified.
|
[
{
"created": "Mon, 15 Jul 2024 15:11:16 GMT",
"version": "v1"
}
] |
2024-07-16
|
[
[
"Sansford",
"Hannah",
""
],
[
"Richardson",
"Nicholas",
""
],
[
"Maretic",
"Hermina Petric",
""
],
[
"Saada",
"Juba Nait",
""
]
] |
Methods to evaluate Large Language Model (LLM) responses and detect inconsistencies, also known as hallucinations, with respect to the provided knowledge, are becoming increasingly important for LLM applications. Current metrics fall short in their ability to provide explainable decisions, systematically check all pieces of information in the response, and are often too computationally expensive to be used in practice. We present GraphEval: a hallucination evaluation framework based on representing information in Knowledge Graph (KG) structures. Our method identifies the specific triples in the KG that are prone to hallucinations and hence provides more insight into where in the response a hallucination has occurred, if at all, than previous methods. Furthermore, using our approach in conjunction with state-of-the-art natural language inference (NLI) models leads to an improvement in balanced accuracy on various hallucination benchmarks, compared to using the raw NLI models. Lastly, we explore the use of GraphEval for hallucination correction by leveraging the structure of the KG, a method we name GraphCorrect, and demonstrate that the majority of hallucinations can indeed be rectified.
|
2206.04585
|
William Chen
|
William Chen, Siyi Hu, Rajat Talak, Luca Carlone
|
Extracting Zero-shot Common Sense from Large Language Models for Robot
3D Scene Understanding
|
4 pages (excluding references and appendix), 2 figures, 2 tables.
Submitted to Robotics: Science and Systems 2022 2nd Workshop on Scaling Robot
Learning. Corrected typos and notation
| null | null | null |
cs.RO cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Semantic 3D scene understanding is a problem of critical importance in
robotics. While significant advances have been made in simultaneous
localization and mapping algorithms, robots are still far from having the
common sense knowledge about household objects and their locations of an
average human. We introduce a novel method for leveraging common sense embedded
within large language models for labelling rooms given the objects contained
within. This algorithm has the added benefits of (i) requiring no task-specific
pre-training (operating entirely in the zero-shot regime) and (ii) generalizing
to arbitrary room and object labels, including previously-unseen ones -- both
of which are highly desirable traits in robotic scene understanding algorithms.
The proposed algorithm operates on 3D scene graphs produced by modern spatial
perception systems, and we hope it will pave the way to more generalizable and
scalable high-level 3D scene understanding for robotics.
|
[
{
"created": "Thu, 9 Jun 2022 16:05:35 GMT",
"version": "v1"
},
{
"created": "Sun, 19 Jun 2022 03:06:05 GMT",
"version": "v2"
}
] |
2022-06-22
|
[
[
"Chen",
"William",
""
],
[
"Hu",
"Siyi",
""
],
[
"Talak",
"Rajat",
""
],
[
"Carlone",
"Luca",
""
]
] |
Semantic 3D scene understanding is a problem of critical importance in robotics. While significant advances have been made in simultaneous localization and mapping algorithms, robots are still far from having the common sense knowledge about household objects and their locations of an average human. We introduce a novel method for leveraging common sense embedded within large language models for labelling rooms given the objects contained within. This algorithm has the added benefits of (i) requiring no task-specific pre-training (operating entirely in the zero-shot regime) and (ii) generalizing to arbitrary room and object labels, including previously-unseen ones -- both of which are highly desirable traits in robotic scene understanding algorithms. The proposed algorithm operates on 3D scene graphs produced by modern spatial perception systems, and we hope it will pave the way to more generalizable and scalable high-level 3D scene understanding for robotics.
|
2006.00661
|
Sho Takemori Ph.D
|
Sho Takemori, Masahiro Sato, Takashi Sonoda, Janmajay Singh, Tomoko
Ohkuma
|
Submodular Bandit Problem Under Multiple Constraints
|
accepted at UAI 2020, minor mistakes fixed
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The linear submodular bandit problem was proposed to simultaneously address
diversified retrieval and online learning in a recommender system. If there is
no uncertainty, this problem is equivalent to a submodular maximization problem
under a cardinality constraint. However, in some situations, recommendation
lists should satisfy additional constraints such as budget constraints, other
than a cardinality constraint. Thus, motivated by diversified retrieval
considering budget constraints, we introduce a submodular bandit problem under
the intersection of $l$ knapsacks and a $k$-system constraint. Here $k$-system
constraints form a very general class of constraints including cardinality
constraints and the intersection of $k$ matroid constraints. To solve this
problem, we propose a non-greedy algorithm that adaptively focuses on a
standard or modified upper-confidence bound. We provide a high-probability
upper bound of an approximation regret, where the approximation ratio matches
that of a fast offline algorithm. Moreover, we perform experiments under
various combinations of constraints using a synthetic and two real-world
datasets and demonstrate that our proposed methods outperform the existing
baselines.
|
[
{
"created": "Mon, 1 Jun 2020 01:28:44 GMT",
"version": "v1"
},
{
"created": "Wed, 3 Jun 2020 06:59:23 GMT",
"version": "v2"
},
{
"created": "Fri, 31 Jul 2020 04:10:35 GMT",
"version": "v3"
},
{
"created": "Mon, 26 Oct 2020 05:12:46 GMT",
"version": "v4"
},
{
"created": "Mon, 29 Mar 2021 02:02:19 GMT",
"version": "v5"
}
] |
2021-03-30
|
[
[
"Takemori",
"Sho",
""
],
[
"Sato",
"Masahiro",
""
],
[
"Sonoda",
"Takashi",
""
],
[
"Singh",
"Janmajay",
""
],
[
"Ohkuma",
"Tomoko",
""
]
] |
The linear submodular bandit problem was proposed to simultaneously address diversified retrieval and online learning in a recommender system. If there is no uncertainty, this problem is equivalent to a submodular maximization problem under a cardinality constraint. However, in some situations, recommendation lists should satisfy additional constraints such as budget constraints, other than a cardinality constraint. Thus, motivated by diversified retrieval considering budget constraints, we introduce a submodular bandit problem under the intersection of $l$ knapsacks and a $k$-system constraint. Here $k$-system constraints form a very general class of constraints including cardinality constraints and the intersection of $k$ matroid constraints. To solve this problem, we propose a non-greedy algorithm that adaptively focuses on a standard or modified upper-confidence bound. We provide a high-probability upper bound of an approximation regret, where the approximation ratio matches that of a fast offline algorithm. Moreover, we perform experiments under various combinations of constraints using a synthetic and two real-world datasets and demonstrate that our proposed methods outperform the existing baselines.
|
2305.17813
|
Kevin Jude Concessao
|
Kevin Jude Concessao, Unnikrishnan Cheramangalath, MJ Ricky Dev,
Rupesh Nasre
|
Meerkat: A framework for Dynamic Graph Algorithms on GPUs
| null | null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Graph algorithms are challenging to implement due to their varying topology
and irregular access patterns. Real-world graphs are dynamic in nature and
routinely undergo edge and vertex additions as well as deletions. Typical
examples of dynamic graphs are social networks, collaboration networks, and
road networks. Applying static algorithms repeatedly on dynamic graphs is
inefficient. Unfortunately, we know little about how to efficiently process
dynamic graphs on massively parallel architectures such as GPUs. Existing
approaches to represent and process dynamic graphs are either not general or
inefficient. In this work, we propose a library-based framework for dynamic
graph algorithms that provides a GPU-tailored graph representation and exploits
the warp-cooperative execution model. The library, named Meerkat, builds upon a
recently proposed dynamic graph representation on GPUs. This representation
exploits a hashtable-based mechanism to store a vertex's neighborhood. Meerkat
also enables fast iteration through a group of vertices, such as the whole set
of vertices or the neighbors of a vertex. Based on the efficient iterative
patterns encoded in Meerkat, we implement dynamic versions of the popular graph
algorithms such as breadth-first search, single-source shortest paths, triangle
counting, weakly connected components, and PageRank. Compared to the
state-of-the-art dynamic graph analytics framework Hornet, Meerkat is
$12.6\times$, $12.94\times$, and $6.1\times$ faster, for query, insert, and
delete operations, respectively. Using a variety of real-world graphs, we
observe that Meerkat significantly improves the efficiency of the underlying
dynamic graph algorithm. Meerkat performs $1.17\times$ for BFS, $1.32\times$
for SSSP, $1.74\times$ for PageRank, and $6.08\times$ for WCC, better than
Hornet on average.
|
[
{
"created": "Sun, 28 May 2023 21:10:31 GMT",
"version": "v1"
},
{
"created": "Fri, 2 Jun 2023 15:22:20 GMT",
"version": "v2"
}
] |
2023-06-05
|
[
[
"Concessao",
"Kevin Jude",
""
],
[
"Cheramangalath",
"Unnikrishnan",
""
],
[
"Dev",
"MJ Ricky",
""
],
[
"Nasre",
"Rupesh",
""
]
] |
Graph algorithms are challenging to implement due to their varying topology and irregular access patterns. Real-world graphs are dynamic in nature and routinely undergo edge and vertex additions as well as deletions. Typical examples of dynamic graphs are social networks, collaboration networks, and road networks. Applying static algorithms repeatedly on dynamic graphs is inefficient. Unfortunately, we know little about how to efficiently process dynamic graphs on massively parallel architectures such as GPUs. Existing approaches to represent and process dynamic graphs are either not general or inefficient. In this work, we propose a library-based framework for dynamic graph algorithms that provides a GPU-tailored graph representation and exploits the warp-cooperative execution model. The library, named Meerkat, builds upon a recently proposed dynamic graph representation on GPUs. This representation exploits a hashtable-based mechanism to store a vertex's neighborhood. Meerkat also enables fast iteration through a group of vertices, such as the whole set of vertices or the neighbors of a vertex. Based on the efficient iterative patterns encoded in Meerkat, we implement dynamic versions of the popular graph algorithms such as breadth-first search, single-source shortest paths, triangle counting, weakly connected components, and PageRank. Compared to the state-of-the-art dynamic graph analytics framework Hornet, Meerkat is $12.6\times$, $12.94\times$, and $6.1\times$ faster, for query, insert, and delete operations, respectively. Using a variety of real-world graphs, we observe that Meerkat significantly improves the efficiency of the underlying dynamic graph algorithm. Meerkat performs $1.17\times$ for BFS, $1.32\times$ for SSSP, $1.74\times$ for PageRank, and $6.08\times$ for WCC, better than Hornet on average.
|
2203.14806
|
Simon J. Blanchard
|
S. J. Blanchard, T. J. Noseworthy, E. Pancer, M. Poole
|
Extraction of Visual Information to Predict Crowdfunding Success
|
32 pages, 5 figures
| null | null | null |
cs.CV cs.MM stat.AP
|
http://creativecommons.org/licenses/by/4.0/
|
Researchers have increasingly turned to crowdfunding platforms to gain
insights into entrepreneurial activity and dynamics. While previous studies
have explored various factors influencing crowdfunding success, such as
technology, communication, and marketing strategies, the role of visual
elements that can be automatically extracted from images has received less
attention. This is surprising, considering that crowdfunding platforms
emphasize the importance of attention-grabbing and high-resolution images, and
previous research has shown that image characteristics can significantly impact
product evaluations. Indeed, we conducted a comprehensive review of empirical
articles (n = 202) that utilized Kickstarter data, focusing on the
incorporation of visual information in their analyses. Our findings reveal
that only 29.70% controlled
for the number of images, and less than 12% considered any image details. In
this manuscript, we review the literature on image processing and its relevance
to the business domain, highlighting two types of visual variables: visual
counts (number of pictures and number of videos) and image details. Building
upon previous work that discussed the role of color, composition and
figure-ground relationships, we introduce visual scene elements that have not
yet been explored in crowdfunding, including the number of faces, the number of
concepts depicted, and the ease of identifying those concepts. To demonstrate
the predictive value of visual counts and image details, we analyze Kickstarter
data. Our results highlight that visual count features are two of the top three
predictors of success. Our results also show that simple image detail features
such as color matter a lot, and our proposed measures of visual scene elements
can also be useful. We supplement our article with R and Python codes that help
authors extract image details (https://osf.io/ujnzp/).
|
[
{
"created": "Mon, 28 Mar 2022 14:44:52 GMT",
"version": "v1"
},
{
"created": "Wed, 6 Sep 2023 15:13:07 GMT",
"version": "v2"
}
] |
2023-09-07
|
[
[
"Blanchard",
"S. J.",
""
],
[
"Noseworthy",
"T. J.",
""
],
[
"Pancer",
"E.",
""
],
[
"Poole",
"M.",
""
]
] |
Researchers have increasingly turned to crowdfunding platforms to gain insights into entrepreneurial activity and dynamics. While previous studies have explored various factors influencing crowdfunding success, such as technology, communication, and marketing strategies, the role of visual elements that can be automatically extracted from images has received less attention. This is surprising, considering that crowdfunding platforms emphasize the importance of attention-grabbing and high-resolution images, and previous research has shown that image characteristics can significantly impact product evaluations. Indeed, we conducted a comprehensive review of empirical articles (n = 202) that utilized Kickstarter data, focusing on the incorporation of visual information in their analyses. Our findings reveal that only 29.70% controlled for the number of images, and less than 12% considered any image details. In this manuscript, we review the literature on image processing and its relevance to the business domain, highlighting two types of visual variables: visual counts (number of pictures and number of videos) and image details. Building upon previous work that discussed the role of color, composition and figure-ground relationships, we introduce visual scene elements that have not yet been explored in crowdfunding, including the number of faces, the number of concepts depicted, and the ease of identifying those concepts. To demonstrate the predictive value of visual counts and image details, we analyze Kickstarter data. Our results highlight that visual count features are two of the top three predictors of success. Our results also show that simple image detail features such as color matter a lot, and our proposed measures of visual scene elements can also be useful. We supplement our article with R and Python codes that help authors extract image details (https://osf.io/ujnzp/).
|
2001.05571
|
Jan Brabec
|
Jan Brabec, Tom\'a\v{s} Kom\'arek, Vojt\v{e}ch Franc, Luk\'a\v{s}
Machlica
|
On Model Evaluation under Non-constant Class Imbalance
|
Accepted for proceedings of ICCS 2020. Supplementary code at:
https://github.com/CiscoCTA/nci_eval
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Many real-world classification problems are significantly class-imbalanced to the
detriment of the class of interest. The standard set of proper evaluation
metrics is well-known but the usual assumption is that the test dataset
imbalance equals the real-world imbalance. In practice, this assumption is
often broken for various reasons. The reported results are then often too
optimistic and may lead to wrong conclusions about industrial impact and
suitability of proposed techniques. We introduce methods focusing on evaluation
under non-constant class imbalance. We show that not only the absolute values
of commonly used metrics, but even the order of classifiers in relation to the
evaluation metric used is affected by the change of the imbalance rate.
Finally, we demonstrate that using subsampling in order to get a test dataset
with class imbalance equal to the one observed in the wild is not necessary,
and eventually can lead to significant errors in classifier's performance
estimate.
|
[
{
"created": "Wed, 15 Jan 2020 21:52:24 GMT",
"version": "v1"
},
{
"created": "Wed, 15 Apr 2020 17:58:21 GMT",
"version": "v2"
}
] |
2020-04-16
|
[
[
"Brabec",
"Jan",
""
],
[
"Komárek",
"Tomáš",
""
],
[
"Franc",
"Vojtěch",
""
],
[
"Machlica",
"Lukáš",
""
]
] |
Many real-world classification problems are significantly class-imbalanced to the detriment of the class of interest. The standard set of proper evaluation metrics is well-known but the usual assumption is that the test dataset imbalance equals the real-world imbalance. In practice, this assumption is often broken for various reasons. The reported results are then often too optimistic and may lead to wrong conclusions about industrial impact and suitability of proposed techniques. We introduce methods focusing on evaluation under non-constant class imbalance. We show that not only the absolute values of commonly used metrics, but even the order of classifiers in relation to the evaluation metric used is affected by the change of the imbalance rate. Finally, we demonstrate that using subsampling in order to get a test dataset with class imbalance equal to the one observed in the wild is not necessary, and eventually can lead to significant errors in classifier's performance estimate.
|
1901.03278
|
Kai Chen
|
Jiaqi Wang, Kai Chen, Shuo Yang, Chen Change Loy, Dahua Lin
|
Region Proposal by Guided Anchoring
|
CVPR 2019 camera ready
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Region anchors are the cornerstone of modern object detection techniques.
State-of-the-art detectors mostly rely on a dense anchoring scheme, where
anchors are sampled uniformly over the spatial domain with a predefined set of
scales and aspect ratios. In this paper, we revisit this foundational stage.
Our study shows that it can be done much more effectively and efficiently.
Specifically, we present an alternative scheme, named Guided Anchoring, which
leverages semantic features to guide the anchoring. The proposed method jointly
predicts the locations where the centers of objects of interest are likely to
exist as well as the scales and aspect ratios at different locations. On top of
predicted anchor shapes, we mitigate the feature inconsistency with a feature
adaptation module. We also study the use of high-quality proposals to improve
detection performance. The anchoring scheme can be seamlessly integrated into
proposal methods and detectors. With Guided Anchoring, we achieve 9.1% higher
recall on MS COCO with 90% fewer anchors than the RPN baseline. We also adopt
Guided Anchoring in Fast R-CNN, Faster R-CNN and RetinaNet, respectively
improving the detection mAP by 2.2%, 2.7% and 1.2%. Code will be available at
https://github.com/open-mmlab/mmdetection.
|
[
{
"created": "Thu, 10 Jan 2019 17:13:13 GMT",
"version": "v1"
},
{
"created": "Fri, 12 Apr 2019 06:25:50 GMT",
"version": "v2"
}
] |
2019-04-15
|
[
[
"Wang",
"Jiaqi",
""
],
[
"Chen",
"Kai",
""
],
[
"Yang",
"Shuo",
""
],
[
"Loy",
"Chen Change",
""
],
[
"Lin",
"Dahua",
""
]
] |
Region anchors are the cornerstone of modern object detection techniques. State-of-the-art detectors mostly rely on a dense anchoring scheme, where anchors are sampled uniformly over the spatial domain with a predefined set of scales and aspect ratios. In this paper, we revisit this foundational stage. Our study shows that it can be done much more effectively and efficiently. Specifically, we present an alternative scheme, named Guided Anchoring, which leverages semantic features to guide the anchoring. The proposed method jointly predicts the locations where the centers of objects of interest are likely to exist as well as the scales and aspect ratios at different locations. On top of predicted anchor shapes, we mitigate the feature inconsistency with a feature adaptation module. We also study the use of high-quality proposals to improve detection performance. The anchoring scheme can be seamlessly integrated into proposal methods and detectors. With Guided Anchoring, we achieve 9.1% higher recall on MS COCO with 90% fewer anchors than the RPN baseline. We also adopt Guided Anchoring in Fast R-CNN, Faster R-CNN and RetinaNet, respectively improving the detection mAP by 2.2%, 2.7% and 1.2%. Code will be available at https://github.com/open-mmlab/mmdetection.
|
2002.04672
|
Kevin Liang
|
Yuewei Yang, Kevin J Liang, Lawrence Carin
|
Object Detection as a Positive-Unlabeled Problem
|
Published as a conference paper in the British Machine Vision
Conference (BMVC) 2020
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As with other deep learning methods, label quality is important for learning
modern convolutional object detectors. However, the potentially large number
and wide diversity of object instances that can be found in complex image
scenes makes constituting complete annotations a challenging task; objects
missing annotations can be observed in a variety of popular object detection
datasets. These missing annotations can be problematic, as the standard
cross-entropy loss employed to train object detection models treats
classification as a positive-negative (PN) problem: unlabeled regions are
implicitly assumed to be background. As such, any object missing a bounding box
results in a confusing learning signal, the effects of which we observe
empirically. To remedy this, we propose treating object detection as a
positive-unlabeled (PU) problem, which removes the assumption that unlabeled
regions must be negative. We demonstrate that our proposed PU classification
loss outperforms the standard PN loss on PASCAL VOC and MS COCO across a range
of label missingness, as well as on Visual Genome and DeepLesion with full
labels.
|
[
{
"created": "Tue, 11 Feb 2020 20:49:34 GMT",
"version": "v1"
},
{
"created": "Sun, 1 Nov 2020 18:25:47 GMT",
"version": "v2"
}
] |
2020-11-03
|
[
[
"Yang",
"Yuewei",
""
],
[
"Liang",
"Kevin J",
""
],
[
"Carin",
"Lawrence",
""
]
] |
As with other deep learning methods, label quality is important for learning modern convolutional object detectors. However, the potentially large number and wide diversity of object instances that can be found in complex image scenes makes constituting complete annotations a challenging task; objects missing annotations can be observed in a variety of popular object detection datasets. These missing annotations can be problematic, as the standard cross-entropy loss employed to train object detection models treats classification as a positive-negative (PN) problem: unlabeled regions are implicitly assumed to be background. As such, any object missing a bounding box results in a confusing learning signal, the effects of which we observe empirically. To remedy this, we propose treating object detection as a positive-unlabeled (PU) problem, which removes the assumption that unlabeled regions must be negative. We demonstrate that our proposed PU classification loss outperforms the standard PN loss on PASCAL VOC and MS COCO across a range of label missingness, as well as on Visual Genome and DeepLesion with full labels.
|
2402.07376
|
Rundong Luo
|
Rundong Luo, Hong-Xing Yu, Jiajun Wu
|
Unsupervised Discovery of Object-Centric Neural Fields
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We study inferring 3D object-centric scene representations from a single
image. While recent methods have shown potential in unsupervised 3D object
discovery from simple synthetic images, they fail to generalize to real-world
scenes with visually rich and diverse objects. This limitation stems from their
object representations, which entangle objects' intrinsic attributes like shape
and appearance with extrinsic, viewer-centric properties such as their 3D
location. To address this bottleneck, we propose Unsupervised discovery of
Object-Centric neural Fields (uOCF). uOCF focuses on learning the intrinsics of
objects and models the extrinsics separately. Our approach significantly
improves systematic generalization, thus enabling unsupervised learning of
high-fidelity object-centric scene representations from sparse real-world
images. To evaluate our approach, we collect three new datasets, including two
real kitchen environments. Extensive experiments show that uOCF enables
unsupervised discovery of visually rich objects from a single real image,
allowing applications such as 3D object segmentation and scene manipulation.
Notably, uOCF demonstrates zero-shot generalization to unseen objects from a
single real image. Project page: https://red-fairy.github.io/uOCF/
|
[
{
"created": "Mon, 12 Feb 2024 02:16:59 GMT",
"version": "v1"
}
] |
2024-02-13
|
[
[
"Luo",
"Rundong",
""
],
[
"Yu",
"Hong-Xing",
""
],
[
"Wu",
"Jiajun",
""
]
] |
We study inferring 3D object-centric scene representations from a single image. While recent methods have shown potential in unsupervised 3D object discovery from simple synthetic images, they fail to generalize to real-world scenes with visually rich and diverse objects. This limitation stems from their object representations, which entangle objects' intrinsic attributes like shape and appearance with extrinsic, viewer-centric properties such as their 3D location. To address this bottleneck, we propose Unsupervised discovery of Object-Centric neural Fields (uOCF). uOCF focuses on learning the intrinsics of objects and models the extrinsics separately. Our approach significantly improves systematic generalization, thus enabling unsupervised learning of high-fidelity object-centric scene representations from sparse real-world images. To evaluate our approach, we collect three new datasets, including two real kitchen environments. Extensive experiments show that uOCF enables unsupervised discovery of visually rich objects from a single real image, allowing applications such as 3D object segmentation and scene manipulation. Notably, uOCF demonstrates zero-shot generalization to unseen objects from a single real image. Project page: https://red-fairy.github.io/uOCF/
|
2404.15591
|
Alberto Presta Mr
|
Alberto Presta, Gabriele Spadaro, Enzo Tartaglione, Attilio Fiandrotti
and Marco Grangetto
|
Domain Adaptation for Learned Image Compression with Supervised Adapters
|
10 pages, published to Data compression conference 2024 (DCC2024)
| null | null | null |
cs.CV eess.IV
|
http://creativecommons.org/publicdomain/zero/1.0/
|
In Learned Image Compression (LIC), a model is trained at encoding and
decoding images sampled from a source domain, often outperforming traditional
codecs on natural images; yet its performance may be far from optimal on images
sampled from different domains. In this work, we tackle the problem of adapting
a pre-trained model to multiple target domains by plugging into the decoder an
adapter module for each of them, including the source one. Each adapter
improves the decoder performance on a specific domain, without the model
forgetting about the images seen at training time. A gate network computes the
weights to optimally blend the contributions from the adapters when the
bitstream is decoded. We experimentally validate our method over two
state-of-the-art pre-trained models, observing improved rate-distortion
efficiency on the target domains without penalties on the source domain.
Furthermore, the gate's ability to find similarities with the learned target
domains enables better encoding efficiency also for images outside them.
|
[
{
"created": "Wed, 24 Apr 2024 01:50:36 GMT",
"version": "v1"
}
] |
2024-04-25
|
[
[
"Presta",
"Alberto",
""
],
[
"Spadaro",
"Gabriele",
""
],
[
"Tartaglione",
"Enzo",
""
],
[
"Fiandrotti",
"Attilio",
""
],
[
"Grangetto",
"Marco",
""
]
] |
In Learned Image Compression (LIC), a model is trained at encoding and decoding images sampled from a source domain, often outperforming traditional codecs on natural images; yet its performance may be far from optimal on images sampled from different domains. In this work, we tackle the problem of adapting a pre-trained model to multiple target domains by plugging into the decoder an adapter module for each of them, including the source one. Each adapter improves the decoder performance on a specific domain, without the model forgetting about the images seen at training time. A gate network computes the weights to optimally blend the contributions from the adapters when the bitstream is decoded. We experimentally validate our method over two state-of-the-art pre-trained models, observing improved rate-distortion efficiency on the target domains without penalties on the source domain. Furthermore, the gate's ability to find similarities with the learned target domains enables better encoding efficiency also for images outside them.
|
1909.09485
|
Iftitahu Ni'mah
|
Iftitahu Ni'mah, Vlado Menkovski, Mykola Pechenizkiy
|
BSDAR: Beam Search Decoding with Attention Reward in Neural Keyphrase
Generation
|
arxiv preprint. a preliminary study
| null | null | null |
cs.CL cs.LG stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
This study mainly investigates two common decoding problems in neural
keyphrase generation: sequence length bias and beam diversity. To tackle the
problems, we introduce a beam search decoding strategy based on word-level and
n-gram-level reward functions to constrain and refine Seq2Seq inference at test
time. Results show that our simple proposal can overcome the algorithm bias to
shorter and nearly identical sequences, resulting in a significant improvement
of the decoding performance on generating keyphrases that are present and
absent in source text.
|
[
{
"created": "Tue, 17 Sep 2019 18:44:54 GMT",
"version": "v1"
},
{
"created": "Mon, 30 Oct 2023 04:33:55 GMT",
"version": "v2"
}
] |
2023-10-31
|
[
[
"Ni'mah",
"Iftitahu",
""
],
[
"Menkovski",
"Vlado",
""
],
[
"Pechenizkiy",
"Mykola",
""
]
] |
This study mainly investigates two common decoding problems in neural keyphrase generation: sequence length bias and beam diversity. To tackle the problems, we introduce a beam search decoding strategy based on word-level and n-gram-level reward functions to constrain and refine Seq2Seq inference at test time. Results show that our simple proposal can overcome the algorithm bias to shorter and nearly identical sequences, resulting in a significant improvement of the decoding performance on generating keyphrases that are present and absent in source text.
|
cs/0311002
|
Andy King
|
Florence Benoy and Andy King and Fred Mesnard
|
Computing Convex Hulls with a Linear Solver
|
13 pages, 1 figure
| null | null | null |
cs.PL
| null |
A programming tactic involving polyhedra is reported that has been widely
applied in the polyhedral analysis of (constraint) logic programs. The method
enables the computation of the convex hulls that are required for polyhedral
analysis to be coded with linear constraint solving machinery that is available
in many Prolog systems.
To appear in Theory and Practice of Logic Programming (TPLP)
|
[
{
"created": "Tue, 4 Nov 2003 12:43:54 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Benoy",
"Florence",
""
],
[
"King",
"Andy",
""
],
[
"Mesnard",
"Fred",
""
]
] |
A programming tactic involving polyhedra is reported that has been widely applied in the polyhedral analysis of (constraint) logic programs. The method enables the computation of the convex hulls that are required for polyhedral analysis to be coded with linear constraint solving machinery that is available in many Prolog systems. To appear in Theory and Practice of Logic Programming (TPLP)
|
2309.01446
|
Raz Lapid
|
Raz Lapid, Ron Langberg, Moshe Sipper
|
Open Sesame! Universal Black Box Jailbreaking of Large Language Models
|
Accepted at SeT-LLM @ ICLR 2024
|
ICLR 2024 Workshop on Secure and Trustworthy Large Language Models
| null | null |
cs.CL cs.CV cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large language models (LLMs), designed to provide helpful and safe responses,
often rely on alignment techniques to align with user intent and social
guidelines. Unfortunately, this alignment can be exploited by malicious actors
seeking to manipulate an LLM's outputs for unintended purposes. In this paper
we introduce a novel approach that employs a genetic algorithm (GA) to
manipulate LLMs when model architecture and parameters are inaccessible. The GA
attack works by optimizing a universal adversarial prompt that -- when combined
with a user's query -- disrupts the attacked model's alignment, resulting in
unintended and potentially harmful outputs. Our novel approach systematically
reveals a model's limitations and vulnerabilities by uncovering instances where
its responses deviate from expected behavior. Through extensive experiments we
demonstrate the efficacy of our technique, thus contributing to the ongoing
discussion on responsible AI development by providing a diagnostic tool for
evaluating and enhancing alignment of LLMs with human intent. To our knowledge
this is the first automated universal black box jailbreak attack.
|
[
{
"created": "Mon, 4 Sep 2023 08:54:20 GMT",
"version": "v1"
},
{
"created": "Sun, 17 Sep 2023 13:19:11 GMT",
"version": "v2"
},
{
"created": "Tue, 21 Nov 2023 14:02:33 GMT",
"version": "v3"
},
{
"created": "Mon, 5 Aug 2024 11:34:10 GMT",
"version": "v4"
}
] |
2024-08-06
|
[
[
"Lapid",
"Raz",
""
],
[
"Langberg",
"Ron",
""
],
[
"Sipper",
"Moshe",
""
]
] |
Large language models (LLMs), designed to provide helpful and safe responses, often rely on alignment techniques to align with user intent and social guidelines. Unfortunately, this alignment can be exploited by malicious actors seeking to manipulate an LLM's outputs for unintended purposes. In this paper we introduce a novel approach that employs a genetic algorithm (GA) to manipulate LLMs when model architecture and parameters are inaccessible. The GA attack works by optimizing a universal adversarial prompt that -- when combined with a user's query -- disrupts the attacked model's alignment, resulting in unintended and potentially harmful outputs. Our novel approach systematically reveals a model's limitations and vulnerabilities by uncovering instances where its responses deviate from expected behavior. Through extensive experiments we demonstrate the efficacy of our technique, thus contributing to the ongoing discussion on responsible AI development by providing a diagnostic tool for evaluating and enhancing alignment of LLMs with human intent. To our knowledge this is the first automated universal black box jailbreak attack.
|
2402.17863
|
Young Kyung Kim
|
Young Kyung Kim, J. Mat\'ias Di Martino, Guillermo Sapiro
|
Vision Transformers with Natural Language Semantics
|
22 pages, 9 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Tokens or patches within Vision Transformers (ViT) lack essential semantic
information, unlike their counterparts in natural language processing (NLP).
Typically, ViT tokens are associated with rectangular image patches that lack
specific semantic context, making interpretation difficult and failing to
effectively encapsulate information. We introduce a novel transformer model,
Semantic Vision Transformers (sViT), which leverages recent progress on
segmentation models to design novel tokenizer strategies. sViT effectively
harnesses semantic information, creating an inductive bias reminiscent of
convolutional neural networks while capturing global dependencies and
contextual information within images that are characteristic of transformers.
Through validation using real datasets, sViT demonstrates superiority over ViT,
requiring less training data while maintaining similar or superior performance.
Furthermore, sViT demonstrates significant superiority in out-of-distribution
generalization and robustness to natural distribution shifts, attributed to its
scale-invariant semantic characteristics. Notably, the use of semantic tokens
significantly enhances the model's interpretability. Lastly, the proposed
paradigm facilitates the introduction of new and powerful augmentation
techniques at the token (or segment) level, increasing training data diversity
and generalization capabilities. Just as sentences are made of words, images
are formed by semantic objects; our proposed methodology leverages recent
progress in object segmentation and takes an important and natural step toward
interpretable and robust vision transformers.
|
[
{
"created": "Tue, 27 Feb 2024 19:54:42 GMT",
"version": "v1"
}
] |
2024-02-29
|
[
[
"Kim",
"Young Kyung",
""
],
[
"Di Martino",
"J. Matías",
""
],
[
"Sapiro",
"Guillermo",
""
]
] |
Tokens or patches within Vision Transformers (ViT) lack essential semantic information, unlike their counterparts in natural language processing (NLP). Typically, ViT tokens are associated with rectangular image patches that lack specific semantic context, making interpretation difficult and failing to effectively encapsulate information. We introduce a novel transformer model, Semantic Vision Transformers (sViT), which leverages recent progress on segmentation models to design novel tokenizer strategies. sViT effectively harnesses semantic information, creating an inductive bias reminiscent of convolutional neural networks while capturing global dependencies and contextual information within images that are characteristic of transformers. Through validation using real datasets, sViT demonstrates superiority over ViT, requiring less training data while maintaining similar or superior performance. Furthermore, sViT demonstrates significant superiority in out-of-distribution generalization and robustness to natural distribution shifts, attributed to its scale-invariant semantic characteristics. Notably, the use of semantic tokens significantly enhances the model's interpretability. Lastly, the proposed paradigm facilitates the introduction of new and powerful augmentation techniques at the token (or segment) level, increasing training data diversity and generalization capabilities. Just as sentences are made of words, images are formed by semantic objects; our proposed methodology leverages recent progress in object segmentation and takes an important and natural step toward interpretable and robust vision transformers.
|
1705.05254
|
Yanjing Wang
|
Raul Fervari, Andreas Herzig, Yanjun Li, Yanjing Wang
|
Strategically knowing how
|
an earlier version of the paper to appear in IJCAI 2017
| null | null | null |
cs.AI cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a single-agent logic of goal-directed knowing how
extending the standard epistemic logic of knowing that with a new knowing how
operator. The semantics of the new operator is based on the idea that knowing
how to achieve $\phi$ means that there exists a (uniform) strategy such that
the agent knows that it can make sure $\phi$. We give an intuitive
axiomatization of our logic and prove the soundness, completeness, and
decidability of the logic. The crucial axioms relating knowing that and knowing
how illustrate our understanding of knowing how in this setting. This logic can
be used in representing both knowledge-that and knowledge-how.
|
[
{
"created": "Mon, 15 May 2017 14:12:16 GMT",
"version": "v1"
}
] |
2017-05-16
|
[
[
"Fervari",
"Raul",
""
],
[
"Herzig",
"Andreas",
""
],
[
"Li",
"Yanjun",
""
],
[
"Wang",
"Yanjing",
""
]
] |
In this paper, we propose a single-agent logic of goal-directed knowing how extending the standard epistemic logic of knowing that with a new knowing how operator. The semantics of the new operator is based on the idea that knowing how to achieve $\phi$ means that there exists a (uniform) strategy such that the agent knows that it can make sure $\phi$. We give an intuitive axiomatization of our logic and prove the soundness, completeness, and decidability of the logic. The crucial axioms relating knowing that and knowing how illustrate our understanding of knowing how in this setting. This logic can be used in representing both knowledge-that and knowledge-how.
|
2303.04352
|
Robert Wray
|
Robert E. Wray, Steven J. Jones, John E. Laird
|
Computational-level Analysis of Constraint Compliance for General
Intelligence
|
10 pages, 2 figures. Accepted for presentation at AGI 2023. Corrected
author list (segmented list) and abstract text artifacts
| null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Human behavior is conditioned by codes and norms that constrain action.
Rules, ``manners,'' laws, and moral imperatives are examples of classes of
constraints that govern human behavior. These systems of constraints are
"messy:" individual constraints are often poorly defined, what constraints are
relevant in a particular situation may be unknown or ambiguous, constraints
interact and conflict with one another, and determining how to act within the
bounds of the relevant constraints may be a significant challenge, especially
when rapid decisions are needed. Despite such messiness, humans incorporate
constraints in their decisions robustly and rapidly. General,
artificially-intelligent agents must also be able to navigate the messiness of
systems of real-world constraints in order to behave predictably and
reliably. In this paper, we characterize sources of complexity in constraint
processing for general agents and describe a computational-level analysis for
such constraint compliance. We identify key algorithmic requirements based on
the computational-level analysis and outline an initial, exploratory
implementation of a general approach to constraint compliance.
|
[
{
"created": "Wed, 8 Mar 2023 03:25:24 GMT",
"version": "v1"
},
{
"created": "Mon, 10 Apr 2023 17:58:41 GMT",
"version": "v2"
},
{
"created": "Thu, 15 Jun 2023 15:03:11 GMT",
"version": "v3"
}
] |
2023-06-16
|
[
[
"Wray",
"Robert E.",
""
],
[
"Jones",
"Steven J.",
""
],
[
"Laird",
"John E.",
""
]
] |
Human behavior is conditioned by codes and norms that constrain action. Rules, ``manners,'' laws, and moral imperatives are examples of classes of constraints that govern human behavior. These systems of constraints are "messy:" individual constraints are often poorly defined, what constraints are relevant in a particular situation may be unknown or ambiguous, constraints interact and conflict with one another, and determining how to act within the bounds of the relevant constraints may be a significant challenge, especially when rapid decisions are needed. Despite such messiness, humans incorporate constraints in their decisions robustly and rapidly. General, artificially-intelligent agents must also be able to navigate the messiness of systems of real-world constraints in order to behave predictably and reliably. In this paper, we characterize sources of complexity in constraint processing for general agents and describe a computational-level analysis for such constraint compliance. We identify key algorithmic requirements based on the computational-level analysis and outline an initial, exploratory implementation of a general approach to constraint compliance.
|
1008.1848
|
Kamaljit I Lakhtaria
|
Kamaljit I. Lakhtaria
|
Enhancing QOS and QOE in IMS enabled next generation networks
| null | null |
10.5121/jgraphoc.2010.2206
|
International journal on applications of graph theory in wireless ad
hoc networks and sensor networks (GRAPH-HOC) Vol.2, No.2, June 2010
|
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Managing network complexity, accommodating greater numbers of subscribers,
improving coverage to support data services (e.g. email, video, and music
downloads), keeping up to speed with fast-changing technology, and driving
maximum value from existing networks - all while reducing CapEX and OpEX and
ensuring Quality of Service (QoS) for the network and Quality of Experience
(QoE) for the user. These are just some of the pressing business issues faced
by mobile service providers, summarized by the demand to "achieve more, for
less." The ultimate goal of optimization techniques at the network and
application layer is to ensure End-user perceived QoS. The next generation
networks (NGN), a composite environment of proven telecommunications and
Internet-oriented mechanisms have become generally recognized as the
telecommunications environment of the future. However, the nature of the NGN
environment presents several complex issues regarding quality assurance that
have not existed in the legacy environments (e.g., multi-network, multi-vendor,
and multi-operator IP-based telecommunications environment, distributed
intelligence, third-party provisioning, fixed-wireless and mobile access,
etc.). In this research paper, a service-aware policy-based approach to NGN
quality assurance is presented, taking into account both perceptual quality of
experience and technology-dependent quality of service issues. The respective
procedures, entities, mechanisms, and profiles are discussed. The purpose of
the presented approach is in research, development, and discussion of pursuing
the end-to-end controllability of the quality of the multimedia NGN-based
communications in an environment that is best effort in its nature and promotes
end user's access agnosticism, service agility, and global mobility.
|
[
{
"created": "Wed, 11 Aug 2010 07:47:27 GMT",
"version": "v1"
}
] |
2010-08-12
|
[
[
"Lakhtaria",
"Kamaljit I.",
""
]
] |
Managing network complexity, accommodating greater numbers of subscribers, improving coverage to support data services (e.g. email, video, and music downloads), keeping up to speed with fast-changing technology, and driving maximum value from existing networks - all while reducing CapEX and OpEX and ensuring Quality of Service (QoS) for the network and Quality of Experience (QoE) for the user. These are just some of the pressing business issues faced by mobile service providers, summarized by the demand to "achieve more, for less." The ultimate goal of optimization techniques at the network and application layer is to ensure End-user perceived QoS. The next generation networks (NGN), a composite environment of proven telecommunications and Internet-oriented mechanisms have become generally recognized as the telecommunications environment of the future. However, the nature of the NGN environment presents several complex issues regarding quality assurance that have not existed in the legacy environments (e.g., multi-network, multi-vendor, and multi-operator IP-based telecommunications environment, distributed intelligence, third-party provisioning, fixed-wireless and mobile access, etc.). In this research paper, a service-aware policy-based approach to NGN quality assurance is presented, taking into account both perceptual quality of experience and technology-dependent quality of service issues. The respective procedures, entities, mechanisms, and profiles are discussed. The purpose of the presented approach is in research, development, and discussion of pursuing the end-to-end controllability of the quality of the multimedia NGN-based communications in an environment that is best effort in its nature and promotes end user's access agnosticism, service agility, and global mobility.
|
2311.09945
|
Qirui Tang
|
Qirui Tang, Wenkang Jiang, Yihua Du, Lei Lin
|
An Attention-Based Denoising Framework for Personality Detection in
Social Media Texts
| null | null | null | null |
cs.CY cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In social media networks, users continually produce large amounts of text
content, providing researchers with a valuable means of mining
personality-related information. Personality detection based on user-generated
texts is a universal method that can be used to build user portraits. The
presence of noise in social media texts hinders personality detection. However,
previous studies have not fully addressed this challenge. Inspired by the
scanning reading technique, we propose an attention-based information
extraction mechanism (AIEM) for long texts, which is applied to quickly locate
valuable pieces of information, and focus more attention on the deep semantics
of key pieces. Then, we provide a novel attention-based denoising framework
(ADF) for personality detection tasks and achieve state-of-the-art performance
on two commonly used datasets. Notably, we obtain an average accuracy
improvement of 10.2% on the gold standard Twitter-Myers-Briggs Type Indicator
(Twitter-MBTI) dataset. We made our code publicly available on GitHub. We shed
light on how AIEM works to magnify personality-related signals.
|
[
{
"created": "Thu, 16 Nov 2023 14:56:09 GMT",
"version": "v1"
}
] |
2023-11-17
|
[
[
"Tang",
"Qirui",
""
],
[
"Jiang",
"Wenkang",
""
],
[
"Du",
"Yihua",
""
],
[
"Lin",
"Lei",
""
]
] |
In social media networks, users continually produce large amounts of text content, providing researchers with a valuable means of mining personality-related information. Personality detection based on user-generated texts is a universal method that can be used to build user portraits. The presence of noise in social media texts hinders personality detection. However, previous studies have not fully addressed this challenge. Inspired by the scanning reading technique, we propose an attention-based information extraction mechanism (AIEM) for long texts, which is applied to quickly locate valuable pieces of information, and focus more attention on the deep semantics of key pieces. Then, we provide a novel attention-based denoising framework (ADF) for personality detection tasks and achieve state-of-the-art performance on two commonly used datasets. Notably, we obtain an average accuracy improvement of 10.2% on the gold standard Twitter-Myers-Briggs Type Indicator (Twitter-MBTI) dataset. We made our code publicly available on GitHub. We shed light on how AIEM works to magnify personality-related signals.
|
1604.07370
|
Christian Stab
|
Christian Stab and Iryna Gurevych
|
Parsing Argumentation Structures in Persuasive Essays
|
Under review in Computational Linguistics. First submission: 26
October 2015. Revised submission: 15 July 2016
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this article, we present a novel approach for parsing argumentation
structures. We identify argument components using sequence labeling at the
token level and apply a new joint model for detecting argumentation structures.
The proposed model globally optimizes argument component types and
argumentative relations using integer linear programming. We show that our
model considerably improves the performance of base classifiers and
significantly outperforms challenging heuristic baselines. Moreover, we
introduce a novel corpus of persuasive essays annotated with argumentation
structures. We show that our annotation scheme and annotation guidelines
successfully guide human annotators to substantial agreement. This corpus and
the annotation guidelines are freely available for ensuring reproducibility and
to encourage future research in computational argumentation.
|
[
{
"created": "Mon, 25 Apr 2016 19:19:04 GMT",
"version": "v1"
},
{
"created": "Fri, 22 Jul 2016 11:55:03 GMT",
"version": "v2"
}
] |
2016-07-25
|
[
[
"Stab",
"Christian",
""
],
[
"Gurevych",
"Iryna",
""
]
] |
In this article, we present a novel approach for parsing argumentation structures. We identify argument components using sequence labeling at the token level and apply a new joint model for detecting argumentation structures. The proposed model globally optimizes argument component types and argumentative relations using integer linear programming. We show that our model considerably improves the performance of base classifiers and significantly outperforms challenging heuristic baselines. Moreover, we introduce a novel corpus of persuasive essays annotated with argumentation structures. We show that our annotation scheme and annotation guidelines successfully guide human annotators to substantial agreement. This corpus and the annotation guidelines are freely available for ensuring reproducibility and to encourage future research in computational argumentation.
|
2006.05975
|
Ruoqi Shen
|
Simon S. Du, Wei Hu, Zhiyuan Li, Ruoqi Shen, Zhao Song, Jiajun Wu
|
When is Particle Filtering Efficient for Planning in Partially Observed
Linear Dynamical Systems?
| null | null | null | null |
cs.LG math.OC stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Particle filtering is a popular method for inferring latent states in
stochastic dynamical systems, whose theoretical properties have been well
studied in machine learning and statistics communities. In many control
problems, e.g., partially observed linear dynamical systems (POLDS), oftentimes
the inferred latent state is further used for planning at each step. This paper
initiates a rigorous study on the efficiency of particle filtering for
sequential planning, and gives the first particle complexity bounds. Though
errors in past actions may affect the future, we are able to bound the number
of particles needed so that the long-run reward of the policy based on particle
filtering is close to that based on exact inference. In particular, we show
that, in stable systems, polynomially many particles suffice. Key in our proof
is a coupling of the ideal sequence based on the exact planning and the
sequence generated by approximate planning based on particle filtering. We
believe this technique can be useful in other sequential decision-making
problems.
|
[
{
"created": "Wed, 10 Jun 2020 17:43:43 GMT",
"version": "v1"
},
{
"created": "Fri, 9 Jul 2021 01:28:42 GMT",
"version": "v2"
}
] |
2021-07-12
|
[
[
"Du",
"Simon S.",
""
],
[
"Hu",
"Wei",
""
],
[
"Li",
"Zhiyuan",
""
],
[
"Shen",
"Ruoqi",
""
],
[
"Song",
"Zhao",
""
],
[
"Wu",
"Jiajun",
""
]
] |
Particle filtering is a popular method for inferring latent states in stochastic dynamical systems, whose theoretical properties have been well studied in machine learning and statistics communities. In many control problems, e.g., partially observed linear dynamical systems (POLDS), oftentimes the inferred latent state is further used for planning at each step. This paper initiates a rigorous study on the efficiency of particle filtering for sequential planning, and gives the first particle complexity bounds. Though errors in past actions may affect the future, we are able to bound the number of particles needed so that the long-run reward of the policy based on particle filtering is close to that based on exact inference. In particular, we show that, in stable systems, polynomially many particles suffice. Key in our proof is a coupling of the ideal sequence based on the exact planning and the sequence generated by approximate planning based on particle filtering. We believe this technique can be useful in other sequential decision-making problems.
|
2207.08902
|
Seungwoo Jeong
|
Seungwoo Jeong, Taekwon Ga, Inhwan Jeong, Jongkyu Oh, and Jongeun Choi
|
Layered Cost-Map-Based Traffic Management for Multiple Automated Mobile
Robots via a Data Distribution Service
|
8 pages, 13 figures
| null | null | null |
cs.RO cs.MA cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This letter proposes traffic management for multiple automated mobile robots
(AMRs) based on a layered cost map. Multiple AMRs communicate via a data
distribution service (DDS), which is shared by topics in the same DDS domain.
The cost of each layer is manipulated by topics. The traffic management server
in the domain sends topics to and receives topics from each AMR. Using the
layered cost map, the new concepts of prohibition filter, lane filter, fleet
layer, and region filter are proposed and implemented. The prohibition filter can help a
user set an area that would prohibit an AMR from trespassing. The lane filter
can help set one-way directions based on an angle image. The fleet layer can
help AMRs share their locations via the traffic management server. The region
filter requests or receives an exclusive area, which can be occupied by
only one AMR, from the traffic management server. All the layers are
experimentally validated with real-world AMRs. Each area can be configured with
user-defined images or text-based parameter files.
|
[
{
"created": "Mon, 18 Jul 2022 19:23:30 GMT",
"version": "v1"
}
] |
2022-07-20
|
[
[
"Jeong",
"Seungwoo",
""
],
[
"Ga",
"Taekwon",
""
],
[
"Jeong",
"Inhwan",
""
],
[
"Oh",
"Jongkyu",
""
],
[
"Choi",
"Jongeun",
""
]
] |
This letter proposes traffic management for multiple automated mobile robots (AMRs) based on a layered cost map. Multiple AMRs communicate via a data distribution service (DDS), which is shared by topics in the same DDS domain. The cost of each layer is manipulated by topics. The traffic management server in the domain sends topics to and receives topics from each AMR. Using the layered cost map, the new concepts of prohibition filter, lane filter, fleet layer, and region filter are proposed and implemented. The prohibition filter can help a user set an area that would prohibit an AMR from trespassing. The lane filter can help set one-way directions based on an angle image. The fleet layer can help AMRs share their locations via the traffic management server. The region filter requests or receives an exclusive area, which can be occupied by only one AMR, from the traffic management server. All the layers are experimentally validated with real-world AMRs. Each area can be configured with user-defined images or text-based parameter files.
|
2011.12259
|
Jiehua Chen
|
Jiehua Chen and Sanjukta Roy and Manuel Sorge
|
Fractional Matchings under Preferences: Stability and Optimality
| null | null | null | null |
cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We thoroughly study a generalized version of the classic Stable Marriage and
Stable Roommates problems where agents may share partners. We consider two
prominent stability concepts: ordinal stability [Aharoni and Fleiner, Journal
of Combinatorial Theory, 2003] and cardinal stability [Caragiannis et al., ACM
EC 2019] and two optimality criteria: maximizing social welfare (i.e., the
overall satisfaction of the agents) and maximizing the number of fully matched
agents (i.e., agents whose shares sum up to one). After having observed that
ordinal stability always exists and implies cardinal stability, and that the
set of ordinally stable matchings in a restricted case admits a lattice
structure, we obtain a complete picture regarding the computational complexity
of finding an optimal ordinally stable or cardinally stable matching. In the
process we answer an open question raised by Caragiannis et al. [AIJ 2020].
|
[
{
"created": "Tue, 24 Nov 2020 18:12:23 GMT",
"version": "v1"
}
] |
2020-11-25
|
[
[
"Chen",
"Jiehua",
""
],
[
"Roy",
"Sanjukta",
""
],
[
"Sorge",
"Manuel",
""
]
] |
We thoroughly study a generalized version of the classic Stable Marriage and Stable Roommates problems where agents may share partners. We consider two prominent stability concepts: ordinal stability [Aharoni and Fleiner, Journal of Combinatorial Theory, 2003] and cardinal stability [Caragiannis et al., ACM EC 2019] and two optimality criteria: maximizing social welfare (i.e., the overall satisfaction of the agents) and maximizing the number of fully matched agents (i.e., agents whose shares sum up to one). After having observed that ordinal stability always exists and implies cardinal stability, and that the set of ordinally stable matchings in a restricted case admits a lattice structure, we obtain a complete picture regarding the computational complexity of finding an optimal ordinally stable or cardinally stable matching. In the process we answer an open question raised by Caragiannis et al. [AIJ 2020].
|
1804.08420
|
Qiang Ning
|
Qiang Ning, Zhongzhi Yu, Chuchu Fan, Dan Roth
|
Exploiting Partially Annotated Data for Temporal Relation Extraction
|
[Final Version] short paper accepted by *SEM'18
| null | null | null |
cs.CL cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Annotating temporal relations (TempRel) between events described in natural
language is known to be labor intensive, partly because the total number of
TempRels is quadratic in the number of events. As a result, only a small number
of documents are typically annotated, limiting the coverage of various
lexical/semantic phenomena. In order to improve existing approaches, one
possibility is to make use of the readily available, partially annotated data
(P as in partial) that cover more documents. However, missing annotations in P
are known to hurt, rather than help, existing systems. This work is a case
study in exploring various usages of P for TempRel extraction. Results show
that despite missing annotations, P is still a useful supervision signal for
this task within a constrained bootstrapping learning framework. The system
described in this paper is publicly available.
|
[
{
"created": "Wed, 18 Apr 2018 21:33:00 GMT",
"version": "v1"
},
{
"created": "Wed, 25 Apr 2018 02:31:40 GMT",
"version": "v2"
}
] |
2018-04-26
|
[
[
"Ning",
"Qiang",
""
],
[
"Yu",
"Zhongzhi",
""
],
[
"Fan",
"Chuchu",
""
],
[
"Roth",
"Dan",
""
]
] |
Annotating temporal relations (TempRel) between events described in natural language is known to be labor intensive, partly because the total number of TempRels is quadratic in the number of events. As a result, only a small number of documents are typically annotated, limiting the coverage of various lexical/semantic phenomena. In order to improve existing approaches, one possibility is to make use of the readily available, partially annotated data (P as in partial) that cover more documents. However, missing annotations in P are known to hurt, rather than help, existing systems. This work is a case study in exploring various usages of P for TempRel extraction. Results show that despite missing annotations, P is still a useful supervision signal for this task within a constrained bootstrapping learning framework. The system described in this paper is publicly available.
|
1601.06454
|
Emiliano De Cristofaro
|
Luca Melis and Hassan Jameel Asghar and Emiliano De Cristofaro and
Mohamed Ali Kaafar
|
Private Processing of Outsourced Network Functions: Feasibility and
Constructions
|
A preliminary version of this paper appears in the 1st ACM
International Workshop on Security in Software Defined Networks & Network
Function Virtualization. This is the full version
| null |
10.1145/2876019.2876021
| null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Aiming to reduce the cost and complexity of maintaining networking
infrastructures, organizations are increasingly outsourcing their network
functions (e.g., firewalls, traffic shapers and intrusion detection systems) to
the cloud, and a number of industrial players have started to offer network
function virtualization (NFV)-based solutions. Alas, outsourcing network
functions in its current setting implies that sensitive network policies, such
as firewall rules, are revealed to the cloud provider. In this paper, we
investigate the use of cryptographic primitives for processing outsourced
network functions, so that the provider does not learn any sensitive
information. More specifically, we present a cryptographic treatment of
privacy-preserving outsourcing of network functions, introducing security
definitions as well as an abstract model of generic network functions, and then
propose a few instantiations using partial homomorphic encryption and
public-key encryption with keyword search. We include a proof-of-concept
implementation of our constructions and show that network functions can be
privately processed by an untrusted cloud provider in a few milliseconds.
|
[
{
"created": "Mon, 25 Jan 2016 00:24:49 GMT",
"version": "v1"
}
] |
2016-01-26
|
[
[
"Melis",
"Luca",
""
],
[
"Asghar",
"Hassan Jameel",
""
],
[
"De Cristofaro",
"Emiliano",
""
],
[
"Kaafar",
"Mohamed Ali",
""
]
] |
Aiming to reduce the cost and complexity of maintaining networking infrastructures, organizations are increasingly outsourcing their network functions (e.g., firewalls, traffic shapers and intrusion detection systems) to the cloud, and a number of industrial players have started to offer network function virtualization (NFV)-based solutions. Alas, outsourcing network functions in its current setting implies that sensitive network policies, such as firewall rules, are revealed to the cloud provider. In this paper, we investigate the use of cryptographic primitives for processing outsourced network functions, so that the provider does not learn any sensitive information. More specifically, we present a cryptographic treatment of privacy-preserving outsourcing of network functions, introducing security definitions as well as an abstract model of generic network functions, and then propose a few instantiations using partial homomorphic encryption and public-key encryption with keyword search. We include a proof-of-concept implementation of our constructions and show that network functions can be privately processed by an untrusted cloud provider in a few milliseconds.
|
2306.07856
|
Alessandro Palmarini
|
Alessandro B. Palmarini, Christopher G. Lucas, N. Siddharth
|
Bayesian Program Learning by Decompiling Amortized Knowledge
| null | null | null | null |
cs.AI cs.LG cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
DreamCoder is an inductive program synthesis system that, whilst solving
problems, learns to simplify search in an iterative wake-sleep procedure. The
cost of search is amortized by training a neural search policy, reducing search
breadth and effectively "compiling" useful information to compose program
solutions across tasks. Additionally, a library of program components is learnt
to compress and express discovered solutions in fewer components, reducing
search depth. We present a novel approach for library learning that directly
leverages the neural search policy, effectively "decompiling" its amortized
knowledge to extract relevant program components. This provides stronger
amortized inference: the amortized knowledge learnt to reduce search breadth is
now also used to reduce search depth. We integrate our approach with DreamCoder
and demonstrate faster domain proficiency with improved generalization on a
range of domains, particularly when fewer example solutions are available.
|
[
{
"created": "Tue, 13 Jun 2023 15:35:01 GMT",
"version": "v1"
},
{
"created": "Sun, 1 Oct 2023 15:01:45 GMT",
"version": "v2"
},
{
"created": "Fri, 31 May 2024 15:14:58 GMT",
"version": "v3"
}
] |
2024-06-03
|
[
[
"Palmarini",
"Alessandro B.",
""
],
[
"Lucas",
"Christopher G.",
""
],
[
"Siddharth",
"N.",
""
]
] |
DreamCoder is an inductive program synthesis system that, whilst solving problems, learns to simplify search in an iterative wake-sleep procedure. The cost of search is amortized by training a neural search policy, reducing search breadth and effectively "compiling" useful information to compose program solutions across tasks. Additionally, a library of program components is learnt to compress and express discovered solutions in fewer components, reducing search depth. We present a novel approach for library learning that directly leverages the neural search policy, effectively "decompiling" its amortized knowledge to extract relevant program components. This provides stronger amortized inference: the amortized knowledge learnt to reduce search breadth is now also used to reduce search depth. We integrate our approach with DreamCoder and demonstrate faster domain proficiency with improved generalization on a range of domains, particularly when fewer example solutions are available.
|
1910.05518
|
Seunghan Yang
|
Seunghan Yang, Yoonhyung Kim, Youngeun Kim, and Changick Kim
|
Combinational Class Activation Maps for Weakly Supervised Object
Localization
|
The paper was accepted to the IEEE Winter Conference on Applications
of Computer Vision (WACV'2020)
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Weakly supervised object localization has recently attracted attention since
it aims to identify both class labels and locations of objects by using
image-level labels. Most previous methods utilize the activation map
corresponding to the highest activation source. Exploiting only one activation
map of the highest probability class is often biased into limited regions or
sometimes even highlights background regions. To resolve these limitations, we
propose to use activation maps, named combinational class activation maps
(CCAM), which are linear combinations of activation maps from the highest to
the lowest probability class. By using CCAM for localization, we suppress
background regions to help highlight foreground objects more accurately. In
addition, we design the network architecture to consider spatial relationships
for localizing relevant object regions. Specifically, we integrate non-local
modules into an existing base network at both low- and high-level layers. Our
final model, named non-local combinational class activation maps (NL-CCAM),
obtains superior performance compared to previous methods on representative
object localization benchmarks including ILSVRC 2016 and CUB-200-2011.
Furthermore, we show that the proposed method has a great capability of
generalization by visualizing other datasets.
|
[
{
"created": "Sat, 12 Oct 2019 07:29:59 GMT",
"version": "v1"
},
{
"created": "Thu, 19 Dec 2019 07:53:46 GMT",
"version": "v2"
}
] |
2019-12-20
|
[
[
"Yang",
"Seunghan",
""
],
[
"Kim",
"Yoonhyung",
""
],
[
"Kim",
"Youngeun",
""
],
[
"Kim",
"Changick",
""
]
] |
Weakly supervised object localization has recently attracted attention since it aims to identify both class labels and locations of objects by using image-level labels. Most previous methods utilize the activation map corresponding to the highest activation source. Exploiting only one activation map of the highest probability class is often biased into limited regions or sometimes even highlights background regions. To resolve these limitations, we propose to use activation maps, named combinational class activation maps (CCAM), which are linear combinations of activation maps from the highest to the lowest probability class. By using CCAM for localization, we suppress background regions to help highlight foreground objects more accurately. In addition, we design the network architecture to consider spatial relationships for localizing relevant object regions. Specifically, we integrate non-local modules into an existing base network at both low- and high-level layers. Our final model, named non-local combinational class activation maps (NL-CCAM), obtains superior performance compared to previous methods on representative object localization benchmarks including ILSVRC 2016 and CUB-200-2011. Furthermore, we show that the proposed method has a great capability of generalization by visualizing other datasets.
|
1402.1697
|
Abhishek Halder
|
Abhishek Halder, Raktim Bhattacharya
|
Geodesic Density Tracking with Applications to Data Driven Modeling
|
8 pages, 7 figures
| null | null | null |
cs.SY math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Many problems in dynamic data driven modeling deal with distributed rather
than lumped observations. In this paper, we show that the Monge-Kantorovich
optimal transport theory provides a unifying framework to tackle such problems
in the systems-control parlance. Specifically, given distributional
measurements at arbitrary instances of measurement availability, we show how to
derive dynamical systems that interpolate the observed distributions along the
geodesics. We demonstrate the framework in the context of three specific
problems: (i) \emph{finding a feedback control} to track observed ensembles
over finite-horizon, (ii) \emph{finding a model} whose prediction matches the
observed distributional data, and (iii) \emph{refining a baseline model} that
results in a distribution-level prediction-observation mismatch. We emphasize how
the three problems can be posed as variants of the optimal transport problem,
but lead to different types of numerical methods depending on the problem
context. Several examples are given to elucidate the ideas.
|
[
{
"created": "Fri, 7 Feb 2014 17:17:54 GMT",
"version": "v1"
}
] |
2014-02-10
|
[
[
"Halder",
"Abhishek",
""
],
[
"Bhattacharya",
"Raktim",
""
]
] |
Many problems in dynamic data driven modeling deal with distributed rather than lumped observations. In this paper, we show that the Monge-Kantorovich optimal transport theory provides a unifying framework to tackle such problems in the systems-control parlance. Specifically, given distributional measurements at arbitrary instances of measurement availability, we show how to derive dynamical systems that interpolate the observed distributions along the geodesics. We demonstrate the framework in the context of three specific problems: (i) \emph{finding a feedback control} to track observed ensembles over finite-horizon, (ii) \emph{finding a model} whose prediction matches the observed distributional data, and (iii) \emph{refining a baseline model} that results in a distribution-level prediction-observation mismatch. We emphasize how the three problems can be posed as variants of the optimal transport problem, but lead to different types of numerical methods depending on the problem context. Several examples are given to elucidate the ideas.
|
0912.3970
|
William Jackson
|
Nitin A. Naik, Gajanan D. Kurundkar, Santosh D. Khamitkar, Namdeo V.
Kalyankar
|
Penetration Testing: A Roadmap to Network Security
| null |
Journal of Computing, Volume 1, Issue 1, pp 187-190, December 2009
| null | null |
cs.NI cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Network penetration testing identifies the exploits and vulnerabilities that
exist within a computer network infrastructure and helps confirm the security
measures. The objective of this paper is to explain the methodology and methods
behind penetration testing and illustrate remedies for the findings, which will
provide substantial value for network security. Penetration testing should
model real-world attacks as closely as possible. An authorized and scheduled
penetration test will probably be detected by an IDS (Intrusion Detection
System). Network penetration testing is done by either manual or automated
tools. A penetration test can gather evidence of vulnerabilities in the
network. Successful testing provides indisputable evidence of the problem as
well as a starting point for prioritizing remediation. Penetration testing
focuses on high-severity vulnerabilities, and there are no false positives.
|
[
{
"created": "Sun, 20 Dec 2009 03:39:53 GMT",
"version": "v1"
},
{
"created": "Sat, 26 Dec 2009 15:10:20 GMT",
"version": "v2"
}
] |
2009-12-26
|
[
[
"Naik",
"Nitin A.",
""
],
[
"Kurundkar",
"Gajanan D.",
""
],
[
"Khamitkar",
"Santosh D.",
""
],
[
"Kalyankar",
"Namdeo V.",
""
]
] |
Network penetration testing identifies the exploits and vulnerabilities that exist within a computer network infrastructure and helps confirm the security measures. The objective of this paper is to explain the methodology and methods behind penetration testing and illustrate remedies for the findings, which will provide substantial value for network security. Penetration testing should model real-world attacks as closely as possible. An authorized and scheduled penetration test will probably be detected by an IDS (Intrusion Detection System). Network penetration testing is done by either manual or automated tools. A penetration test can gather evidence of vulnerabilities in the network. Successful testing provides indisputable evidence of the problem as well as a starting point for prioritizing remediation. Penetration testing focuses on high-severity vulnerabilities, and there are no false positives.
|
1808.01423
|
Chris Tensmeyer
|
Chris Tensmeyer, Curtis Wigington, Brian Davis, Seth Stewart, Tony
Martinez, William Barrett
|
Language Model Supervision for Handwriting Recognition Model Adaptation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Training state-of-the-art offline handwriting recognition (HWR) models
requires large labeled datasets, but unfortunately such datasets are not
available in all languages and domains due to the high cost of manual
labeling. We address this problem by showing how high-resource languages can be
leveraged to help train models for low-resource languages. We propose a transfer
learning methodology where we adapt HWR models trained on a source language to
a target language that uses the same writing script. This methodology only
requires labeled data in the source language, unlabeled data in the target
language, and a language model of the target language. The language model is
used in a bootstrapping fashion to refine predictions in the target language
for use as ground truth in training the model. Using this approach we
demonstrate improved transferability among French, English, and Spanish
languages using both historical and modern handwriting datasets. In the best
case, transferring with the proposed methodology results in character error
rates nearly as good as full supervised training.
|
[
{
"created": "Sat, 4 Aug 2018 04:27:05 GMT",
"version": "v1"
}
] |
2018-08-07
|
[
[
"Tensmeyer",
"Chris",
""
],
[
"Wigington",
"Curtis",
""
],
[
"Davis",
"Brian",
""
],
[
"Stewart",
"Seth",
""
],
[
"Martinez",
"Tony",
""
],
[
"Barrett",
"William",
""
]
] |
Training state-of-the-art offline handwriting recognition (HWR) models requires large labeled datasets, but unfortunately such datasets are not available in all languages and domains due to the high cost of manual labeling. We address this problem by showing how high-resource languages can be leveraged to help train models for low-resource languages. We propose a transfer learning methodology where we adapt HWR models trained on a source language to a target language that uses the same writing script. This methodology only requires labeled data in the source language, unlabeled data in the target language, and a language model of the target language. The language model is used in a bootstrapping fashion to refine predictions in the target language for use as ground truth in training the model. Using this approach we demonstrate improved transferability among French, English, and Spanish languages using both historical and modern handwriting datasets. In the best case, transferring with the proposed methodology results in character error rates nearly as good as full supervised training.
|
2110.12493
|
Yaxiong Lei
|
Shijing He and Yaxiong Lei
|
The privacy protection effectiveness of the video conference platforms'
virtual background and the privacy concerns from the end-users
| null | null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Due to the abrupt rise of the worldwide pandemic, video conferencing
platforms have become ubiquitously available and are being embedded into
various digital devices and collaborative daily work. Even though service
providers have designed many security functions to protect individuals'
privacy, such as the virtual background (VB), it remains to be explored how
the instability of the VB leaks users' privacy or impacts their mentality and
behaviours. In order to understand and locate implications for the context of
end-users' privacy awareness and their mental models, we will conduct surveys
and interviews with users as the first-stage research. We will raise
conceptual challenges in designing a safe and stable VB, as well as provide
design suggestions.
|
[
{
"created": "Sun, 24 Oct 2021 17:22:25 GMT",
"version": "v1"
}
] |
2021-10-26
|
[
[
"He",
"Shijing",
""
],
[
"Lei",
"Yaxiong",
""
]
] |
Due to the abrupt rise of the worldwide pandemic, video conferencing platforms have become ubiquitously available and are being embedded into various digital devices and collaborative daily work. Even though service providers have designed many security functions to protect individuals' privacy, such as the virtual background (VB), it remains to be explored how the instability of the VB leaks users' privacy or impacts their mentality and behaviours. In order to understand and locate implications for the context of end-users' privacy awareness and their mental models, we will conduct surveys and interviews with users as the first-stage research. We will raise conceptual challenges in designing a safe and stable VB, as well as provide design suggestions.
|
2401.01469
|
Walid Saba
|
Walid Saba, Suzanne Wendelken and James. Shanahan
|
Question-Answering Based Summarization of Electronic Health Records
using Retrieval Augmented Generation
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Summarization of electronic health records (EHRs) can substantially minimize
'screen time' for both patients as well as medical personnel. In recent years
summarization of EHRs has employed machine learning pipelines using
state-of-the-art neural models. However, these models have produced less than adequate
results that are attributed to the difficulty of obtaining sufficient annotated
data for training. Moreover, the requirement to consider the entire content of
an EHR in summarization has resulted in poor performance due to the fact that
attention mechanisms in modern large language models (LLMs) add quadratic
complexity in terms of the size of the input. We propose here a method that
mitigates these shortcomings by combining semantic search, retrieval augmented
generation (RAG) and question-answering using the latest LLMs. In our approach
summarization is the extraction of answers to specific questions that are
deemed important by subject-matter experts (SMEs). Our approach is quite
efficient; requires minimal to no training; does not suffer from the
'hallucination' problem of LLMs; and it ensures diversity, since the summary
will not have repeated content but diverse answers to specific questions.
|
[
{
"created": "Wed, 3 Jan 2024 00:09:34 GMT",
"version": "v1"
}
] |
2024-01-04
|
[
[
"Saba",
"Walid",
""
],
[
"Wendelken",
"Suzanne",
""
],
[
"Shanahan",
"James.",
""
]
] |
Summarization of electronic health records (EHRs) can substantially minimize 'screen time' for both patients as well as medical personnel. In recent years summarization of EHRs has employed machine learning pipelines using state-of-the-art neural models. However, these models have produced less than adequate results that are attributed to the difficulty of obtaining sufficient annotated data for training. Moreover, the requirement to consider the entire content of an EHR in summarization has resulted in poor performance due to the fact that attention mechanisms in modern large language models (LLMs) add quadratic complexity in terms of the size of the input. We propose here a method that mitigates these shortcomings by combining semantic search, retrieval augmented generation (RAG) and question-answering using the latest LLMs. In our approach summarization is the extraction of answers to specific questions that are deemed important by subject-matter experts (SMEs). Our approach is quite efficient; requires minimal to no training; does not suffer from the 'hallucination' problem of LLMs; and it ensures diversity, since the summary will not have repeated content but diverse answers to specific questions.
|
0909.0095
|
Rakesh Mohanty
|
Rakesh Mohanty and N. S. Narayanaswamy
|
Online Algorithms for Self-Organizing Sequential Search - A Survey
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The main objective of this survey is to present the important theoretical and
experimental results contributed to date in the area of online algorithms for
the self organizing sequential search problem, also popularly known as the List
Update Problem(LUP) in a chronological way. The survey includes competitiveness
results of deterministic and randomized online algorithms and complexity
results of optimal offline algorithms for the list update problem. We also
present the results associated with list update with look ahead, list update
with locality of reference and other variants of the list update problem. We
investigate research issues and explore the scope of future work associated
with each issue so that future researchers can find them useful to work on.
|
[
{
"created": "Tue, 1 Sep 2009 05:46:32 GMT",
"version": "v1"
},
{
"created": "Wed, 2 Sep 2009 03:48:38 GMT",
"version": "v2"
}
] |
2009-09-02
|
[
[
"Mohanty",
"Rakesh",
""
],
[
"Narayanaswamy",
"N. S.",
""
]
] |
The main objective of this survey is to present the important theoretical and experimental results contributed to date in the area of online algorithms for the self organizing sequential search problem, also popularly known as the List Update Problem (LUP), in a chronological way. The survey includes competitiveness results of deterministic and randomized online algorithms and complexity results of optimal offline algorithms for the list update problem. We also present the results associated with list update with look ahead, list update with locality of reference and other variants of the list update problem. We investigate research issues and explore the scope of future work associated with each issue so that future researchers can find them useful to work on.
|
2207.04686
|
Prateek Varshney
|
Prateek Varshney, Abhradeep Thakurta, Prateek Jain
|
(Nearly) Optimal Private Linear Regression via Adaptive Clipping
|
41 Pages, Accepted in the 35th Annual Conference on Learning Theory
(COLT 2022)
| null | null | null |
cs.LG cs.CR math.OC stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study the problem of differentially private linear regression where each
data point is sampled from a fixed sub-Gaussian style distribution. We propose
and analyze a one-pass mini-batch stochastic gradient descent method
(DP-AMBSSGD) where points in each iteration are sampled without replacement.
Noise is added for DP but the noise standard deviation is estimated online.
Compared to existing $(\epsilon, \delta)$-DP techniques which have sub-optimal
error bounds, DP-AMBSSGD is able to provide nearly optimal error bounds in
terms of key parameters like dimensionality $d$, number of points $N$, and the
standard deviation $\sigma$ of the noise in observations. For example, when the
$d$-dimensional covariates are sampled i.i.d. from the normal distribution,
then the excess error of DP-AMBSSGD due to privacy is $\frac{\sigma^2
d}{N}(1+\frac{d}{\epsilon^2 N})$, i.e., the error is meaningful when number of
samples $N= \Omega(d \log d)$ which is the standard operative regime for linear
regression. In contrast, error bounds for existing efficient methods in this
setting are: $\mathcal{O}\big(\frac{d^3}{\epsilon^2 N^2}\big)$, even for
$\sigma=0$. That is, for constant $\epsilon$, the existing techniques require
$N=\Omega(d\sqrt{d})$ to provide a non-trivial result.
|
[
{
"created": "Mon, 11 Jul 2022 08:04:46 GMT",
"version": "v1"
},
{
"created": "Tue, 12 Jul 2022 21:09:52 GMT",
"version": "v2"
}
] |
2022-07-14
|
[
[
"Varshney",
"Prateek",
""
],
[
"Thakurta",
"Abhradeep",
""
],
[
"Jain",
"Prateek",
""
]
] |
We study the problem of differentially private linear regression where each data point is sampled from a fixed sub-Gaussian style distribution. We propose and analyze a one-pass mini-batch stochastic gradient descent method (DP-AMBSSGD) where points in each iteration are sampled without replacement. Noise is added for DP but the noise standard deviation is estimated online. Compared to existing $(\epsilon, \delta)$-DP techniques which have sub-optimal error bounds, DP-AMBSSGD is able to provide nearly optimal error bounds in terms of key parameters like dimensionality $d$, number of points $N$, and the standard deviation $\sigma$ of the noise in observations. For example, when the $d$-dimensional covariates are sampled i.i.d. from the normal distribution, then the excess error of DP-AMBSSGD due to privacy is $\frac{\sigma^2 d}{N}(1+\frac{d}{\epsilon^2 N})$, i.e., the error is meaningful when number of samples $N= \Omega(d \log d)$ which is the standard operative regime for linear regression. In contrast, error bounds for existing efficient methods in this setting are: $\mathcal{O}\big(\frac{d^3}{\epsilon^2 N^2}\big)$, even for $\sigma=0$. That is, for constant $\epsilon$, the existing techniques require $N=\Omega(d\sqrt{d})$ to provide a non-trivial result.
|
1904.09571
|
Wayne Wu
|
Wayne Wu, Kaidi Cao, Cheng Li, Chen Qian, Chen Change Loy
|
TransGaGa: Geometry-Aware Unsupervised Image-to-Image Translation
|
Accepted to CVPR 2019. Project page:
https://wywu.github.io/projects/TGaGa/TGaGa.html
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Unsupervised image-to-image translation aims at learning a mapping between
two visual domains. However, learning a translation across large geometry
variations always ends up with failure. In this work, we present a novel
disentangle-and-translate framework to tackle the complex objects
image-to-image translation task. Instead of learning the mapping on the image
space directly, we disentangle image space into a Cartesian product of the
appearance and the geometry latent spaces. Specifically, we first introduce a
geometry prior loss and a conditional VAE loss to encourage the network to
learn independent but complementary representations. The translation is then
built on appearance and geometry space separately. Extensive experiments
demonstrate the superior performance of our method to other state-of-the-art
approaches, especially in the challenging near-rigid and non-rigid objects
translation tasks. In addition, by taking different exemplars as the appearance
references, our method also supports multimodal translation. Project page:
https://wywu.github.io/projects/TGaGa/TGaGa.html
|
[
{
"created": "Sun, 21 Apr 2019 09:42:10 GMT",
"version": "v1"
}
] |
2019-04-23
|
[
[
"Wu",
"Wayne",
""
],
[
"Cao",
"Kaidi",
""
],
[
"Li",
"Cheng",
""
],
[
"Qian",
"Chen",
""
],
[
"Loy",
"Chen Change",
""
]
] |
Unsupervised image-to-image translation aims at learning a mapping between two visual domains. However, learning a translation across large geometry variations always ends up with failure. In this work, we present a novel disentangle-and-translate framework to tackle the complex objects image-to-image translation task. Instead of learning the mapping on the image space directly, we disentangle image space into a Cartesian product of the appearance and the geometry latent spaces. Specifically, we first introduce a geometry prior loss and a conditional VAE loss to encourage the network to learn independent but complementary representations. The translation is then built on appearance and geometry space separately. Extensive experiments demonstrate the superior performance of our method to other state-of-the-art approaches, especially in the challenging near-rigid and non-rigid objects translation tasks. In addition, by taking different exemplars as the appearance references, our method also supports multimodal translation. Project page: https://wywu.github.io/projects/TGaGa/TGaGa.html
|
cs/0511049
|
Gus Gutoski
|
Gus Gutoski
|
Entropy, Convex Optimization, and Competitive Quantum Interactions
|
withdrawn
| null | null | null |
cs.CC cs.GT quant-ph
| null |
This paper has been withdrawn by the author due to errors.
|
[
{
"created": "Sat, 12 Nov 2005 21:15:40 GMT",
"version": "v1"
},
{
"created": "Mon, 21 Nov 2005 23:09:33 GMT",
"version": "v2"
},
{
"created": "Mon, 30 Oct 2006 20:16:50 GMT",
"version": "v3"
}
] |
2007-05-23
|
[
[
"Gutoski",
"Gus",
""
]
] |
This paper has been withdrawn by the author due to errors.
|
1907.11090
|
Heyang Gong
|
Gong Heyang and Zhu Ke
|
Info Intervention
|
See more information on Causal AI:
https://sites.google.com/view/minituring/home
| null | null | null |
cs.AI cs.LG stat.ME stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Causal diagrams based on the do intervention are useful tools to formalize,
process, and understand causal relationships among variables. However, the do
intervention has a controversial interpretation for causal questions about
non-manipulable variables, and it also lacks the power to check conditions
related to counterfactual variables. This paper introduces a new info
intervention to tackle these two problems, and provides causal diagrams for
communication and theoretical focus based on this info intervention. Our info
intervention intervenes on the input/output information of causal mechanisms,
while the do intervention intervenes on the causal mechanisms themselves.
Consequently, causality is viewed as information transfer in the info
intervention framework. As an extension, the generalized info intervention is
also proposed and studied in this paper.
|
[
{
"created": "Wed, 24 Jul 2019 07:31:14 GMT",
"version": "v1"
},
{
"created": "Sun, 29 Dec 2019 02:59:06 GMT",
"version": "v2"
},
{
"created": "Thu, 16 Apr 2020 15:33:59 GMT",
"version": "v3"
},
{
"created": "Fri, 17 Apr 2020 03:22:07 GMT",
"version": "v4"
},
{
"created": "Mon, 1 Jun 2020 08:11:52 GMT",
"version": "v5"
},
{
"created": "Tue, 2 Jun 2020 03:25:10 GMT",
"version": "v6"
}
] |
2020-06-03
|
[
[
"Heyang",
"Gong",
""
],
[
"Ke",
"Zhu",
""
]
] |
Causal diagrams based on the do intervention are useful tools to formalize, process, and understand causal relationships among variables. However, the do intervention has a controversial interpretation for causal questions about non-manipulable variables, and it also lacks the power to check conditions related to counterfactual variables. This paper introduces a new info intervention to tackle these two problems, and provides causal diagrams for communication and theoretical focus based on this info intervention. Our info intervention intervenes on the input/output information of causal mechanisms, while the do intervention intervenes on the causal mechanisms themselves. Consequently, causality is viewed as information transfer in the info intervention framework. As an extension, the generalized info intervention is also proposed and studied in this paper.
|
2108.06180
|
Jiafei Duan
|
Jiafei Duan, Samson Yu Bai Jian, Cheston Tan
|
SPACE: A Simulator for Physical Interactions and Causal Learning in 3D
Environments
|
Accepted to ICCV 21, Simulation Technology for Embodied AI (SEAI)
Workshop
| null | null | null |
cs.AI cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent advancements in deep learning, computer vision, and embodied AI have
given rise to synthetic causal reasoning video datasets. These datasets
facilitate the development of AI algorithms that can reason about physical
interactions between objects. However, datasets thus far have primarily focused
on elementary physical events such as rolling or falling. There is currently a
scarcity of datasets that focus on the physical interactions that humans
perform daily with objects in the real world. To address this scarcity, we
introduce SPACE: A Simulator for Physical Interactions and Causal Learning in
3D Environments. The SPACE simulator allows us to generate the SPACE dataset, a
synthetic video dataset in a 3D environment, to systematically evaluate
physics-based models on a range of physical causal reasoning tasks. Inspired by
daily object interactions, the SPACE dataset comprises videos depicting three
types of physical events: containment, stability and contact. These events make
up the vast majority of the basic physical interactions between objects. We
then further evaluate it with a state-of-the-art physics-based deep model and
show that the SPACE dataset improves the learning of intuitive physics with an
approach inspired by curriculum learning. Repository:
https://github.com/jiafei1224/SPACE
|
[
{
"created": "Fri, 13 Aug 2021 11:49:46 GMT",
"version": "v1"
}
] |
2021-08-16
|
[
[
"Duan",
"Jiafei",
""
],
[
"Jian",
"Samson Yu Bai",
""
],
[
"Tan",
"Cheston",
""
]
] |
Recent advancements in deep learning, computer vision, and embodied AI have given rise to synthetic causal reasoning video datasets. These datasets facilitate the development of AI algorithms that can reason about physical interactions between objects. However, datasets thus far have primarily focused on elementary physical events such as rolling or falling. There is currently a scarcity of datasets that focus on the physical interactions that humans perform daily with objects in the real world. To address this scarcity, we introduce SPACE: A Simulator for Physical Interactions and Causal Learning in 3D Environments. The SPACE simulator allows us to generate the SPACE dataset, a synthetic video dataset in a 3D environment, to systematically evaluate physics-based models on a range of physical causal reasoning tasks. Inspired by daily object interactions, the SPACE dataset comprises videos depicting three types of physical events: containment, stability and contact. These events make up the vast majority of the basic physical interactions between objects. We then further evaluate it with a state-of-the-art physics-based deep model and show that the SPACE dataset improves the learning of intuitive physics with an approach inspired by curriculum learning. Repository: https://github.com/jiafei1224/SPACE
|
2311.07975
|
Zhilin Zhao
|
Zhilin Zhao and Longbing Cao and Yixuan Zhang
|
Out-of-Distribution Knowledge Distillation via Confidence Amendment
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Out-of-distribution (OOD) detection is essential in identifying test samples
that deviate from the in-distribution (ID) data upon which a standard network
is trained, ensuring network robustness and reliability. This paper introduces
OOD knowledge distillation, a pioneering learning framework applicable whether
or not training ID data is available, given a standard network. This framework
harnesses OOD-sensitive knowledge from the standard network to craft a binary
classifier adept at distinguishing between ID and OOD samples. To accomplish
this, we introduce Confidence Amendment (CA), an innovative methodology that
transforms an OOD sample into an ID one while progressively amending prediction
confidence derived from the standard network. This approach enables the
simultaneous synthesis of both ID and OOD samples, each accompanied by an
adjusted prediction confidence, thereby facilitating the training of a binary
classifier sensitive to OOD. Theoretical analysis provides bounds on the
generalization error of the binary classifier, demonstrating the pivotal role
of confidence amendment in enhancing OOD sensitivity. Extensive experiments
spanning various datasets and network architectures confirm the efficacy of the
proposed method in detecting OOD samples.
|
[
{
"created": "Tue, 14 Nov 2023 08:05:02 GMT",
"version": "v1"
}
] |
2023-11-15
|
[
[
"Zhao",
"Zhilin",
""
],
[
"Cao",
"Longbing",
""
],
[
"Zhang",
"Yixuan",
""
]
] |
Out-of-distribution (OOD) detection is essential in identifying test samples that deviate from the in-distribution (ID) data upon which a standard network is trained, ensuring network robustness and reliability. This paper introduces OOD knowledge distillation, a pioneering learning framework applicable whether or not training ID data is available, given a standard network. This framework harnesses OOD-sensitive knowledge from the standard network to craft a binary classifier adept at distinguishing between ID and OOD samples. To accomplish this, we introduce Confidence Amendment (CA), an innovative methodology that transforms an OOD sample into an ID one while progressively amending prediction confidence derived from the standard network. This approach enables the simultaneous synthesis of both ID and OOD samples, each accompanied by an adjusted prediction confidence, thereby facilitating the training of a binary classifier sensitive to OOD. Theoretical analysis provides bounds on the generalization error of the binary classifier, demonstrating the pivotal role of confidence amendment in enhancing OOD sensitivity. Extensive experiments spanning various datasets and network architectures confirm the efficacy of the proposed method in detecting OOD samples.
|
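The core idea of the abstract above — synthesizing labeled ID/OOD training data by moving OOD samples toward the ID region while reading off a confidence score, then training a binary classifier on the result — can be sketched on toy 2-D data. Everything here is a stand-in: the "standard network" is a plain RBF confidence score rather than a trained network, and the linear interpolation and 0.5 threshold are illustrative choices, not the paper's Confidence Amendment procedure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)

# Stand-in "standard network" confidence: high near the ID cluster at the
# origin, low far away (a plain RBF score, not a real trained network).
def conf(X):
    return np.exp(-np.linalg.norm(X, axis=1) ** 2 / 4)

x_ood = rng.normal(size=(100, 2)) * 4 + 6      # samples far from the ID region
x_id_anchor = np.zeros(2)                      # a representative ID point

# Progressively move each OOD sample toward the ID region, collecting the
# intermediate points together with an amended ID/OOD pseudo-label derived
# from the network's confidence on them.
Xs, ys = [], []
for t in np.linspace(0, 1, 6):
    Xt = (1 - t) * x_ood + t * x_id_anchor
    Xs.append(Xt)
    ys.append((conf(Xt) > 0.5).astype(int))

X_train = np.vstack(Xs)
y_train = np.concatenate(ys)

# Binary ID-vs-OOD classifier trained on the synthesized samples.
clf = LogisticRegression().fit(X_train, y_train)
```

A linear classifier is enough for this toy geometry; the paper's setting would use the actual network's confidences and a suitable detector architecture.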
1911.08642
|
Adam Eck
|
Adam Eck, Maulik Shah, Prashant Doshi, and Leen-Kiat Soh
|
Scalable Decision-Theoretic Planning in Open and Typed Multiagent
Systems
|
Pre-print with appendices for AAAI 2020
| null | null | null |
cs.MA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In open agent systems, the set of agents that are cooperating or competing
changes over time and in ways that are nontrivial to predict. For example, if
collaborative robots were tasked with fighting wildfires, they may run out of
suppressants and be temporarily unavailable to assist their peers. We consider
the problem of planning in these contexts with the additional challenges that
the agents are unable to communicate with each other and that there are many of
them. Because an agent's optimal action depends on the actions of others, each
agent must not only predict the actions of its peers, but, before that, reason
whether they are even present to perform an action. Addressing openness thus
requires agents to model each other's presence, which becomes computationally
intractable with high numbers of agents. We present a novel, principled, and
scalable method in this context that enables an agent to reason about others'
presence in its shared environment and their actions. Our method extrapolates
models of a few peers to the overall behavior of the many-agent system, and
combines it with a generalization of Monte Carlo tree search to perform
individual agent reasoning in many-agent open environments. Theoretical
analyses establish the number of agents to model in order to achieve acceptable
worst case bounds on extrapolation error, as well as regret bounds on the
agent's utility from modeling only some neighbors. Simulations of multiagent
wildfire suppression problems demonstrate our approach's efficacy compared with
alternative baselines.
|
[
{
"created": "Wed, 20 Nov 2019 00:39:36 GMT",
"version": "v1"
}
] |
2019-11-21
|
[
[
"Eck",
"Adam",
""
],
[
"Shah",
"Maulik",
""
],
[
"Doshi",
"Prashant",
""
],
[
"Soh",
"Leen-Kiat",
""
]
] |
In open agent systems, the set of agents that are cooperating or competing changes over time and in ways that are nontrivial to predict. For example, if collaborative robots were tasked with fighting wildfires, they may run out of suppressants and be temporarily unavailable to assist their peers. We consider the problem of planning in these contexts with the additional challenges that the agents are unable to communicate with each other and that there are many of them. Because an agent's optimal action depends on the actions of others, each agent must not only predict the actions of its peers, but, before that, reason whether they are even present to perform an action. Addressing openness thus requires agents to model each other's presence, which becomes computationally intractable with high numbers of agents. We present a novel, principled, and scalable method in this context that enables an agent to reason about others' presence in its shared environment and their actions. Our method extrapolates models of a few peers to the overall behavior of the many-agent system, and combines it with a generalization of Monte Carlo tree search to perform individual agent reasoning in many-agent open environments. Theoretical analyses establish the number of agents to model in order to achieve acceptable worst case bounds on extrapolation error, as well as regret bounds on the agent's utility from modeling only some neighbors. Simulations of multiagent wildfire suppression problems demonstrate our approach's efficacy compared with alternative baselines.
|
2011.01506
|
Narayanan Chatapuram Krishnan
|
Rajat Sharma, Nikhil Reddy, Vidhya Kamakshi, Narayanan C Krishnan,
Shweta Jain
|
MAIRE -- A Model-Agnostic Interpretable Rule Extraction Procedure for
Explaining Classifiers
| null | null | null | null |
cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The paper introduces a novel framework for extracting model-agnostic human
interpretable rules to explain a classifier's output. The human interpretable
rule is defined as an axis-aligned hyper-cuboid containing the instance for
which the classification decision has to be explained. The proposed procedure
finds the largest (high \textit{coverage}) axis-aligned hyper-cuboid such that
a high percentage of the instances in the hyper-cuboid have the same class
label as the instance being explained (high \textit{precision}). Novel
approximations to the coverage and precision measures in terms of the
parameters of the hyper-cuboid are defined. They are maximized using
gradient-based optimizers. The quality of the approximations is rigorously
analyzed theoretically and experimentally. Heuristics for simplifying the
generated explanations for achieving better interpretability and a greedy
selection algorithm that combines the local explanations for creating global
explanations for the model covering a large part of the instance space are also
proposed. The framework is model agnostic, can be applied to any arbitrary
classifier, and all types of attributes (including continuous, ordered, and
unordered discrete). The wide-scale applicability of the framework is validated
on a variety of synthetic and real-world datasets from different domains
(tabular, text, and image).
|
[
{
"created": "Tue, 3 Nov 2020 06:53:06 GMT",
"version": "v1"
}
] |
2020-11-04
|
[
[
"Sharma",
"Rajat",
""
],
[
"Reddy",
"Nikhil",
""
],
[
"Kamakshi",
"Vidhya",
""
],
[
"Krishnan",
"Narayanan C",
""
],
[
"Jain",
"Shweta",
""
]
] |
The paper introduces a novel framework for extracting model-agnostic human interpretable rules to explain a classifier's output. The human interpretable rule is defined as an axis-aligned hyper-cuboid containing the instance for which the classification decision has to be explained. The proposed procedure finds the largest (high \textit{coverage}) axis-aligned hyper-cuboid such that a high percentage of the instances in the hyper-cuboid have the same class label as the instance being explained (high \textit{precision}). Novel approximations to the coverage and precision measures in terms of the parameters of the hyper-cuboid are defined. They are maximized using gradient-based optimizers. The quality of the approximations is rigorously analyzed theoretically and experimentally. Heuristics for simplifying the generated explanations for achieving better interpretability and a greedy selection algorithm that combines the local explanations for creating global explanations for the model covering a large part of the instance space are also proposed. The framework is model agnostic, can be applied to any arbitrary classifier, and all types of attributes (including continuous, ordered, and unordered discrete). The wide-scale applicability of the framework is validated on a variety of synthetic and real-world datasets from different domains (tabular, text, and image).
|
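The coverage/precision trade-off described above can be made concrete on toy 2-D data. The sketch below grows an axis-aligned box around the instance being explained with a crude greedy search; MAIRE itself instead maximizes differentiable approximations of these measures with gradient-based optimizers, so the data, step size, and 0.95 precision threshold here are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 2-D points with label 1 exactly when x0 > 0.
X = rng.uniform(-1, 1, size=(500, 2))
labels = (X[:, 0] > 0).astype(int)

x0 = np.array([0.5, 0.0])            # instance to explain (label 1)

def box_metrics(lo, hi, X, labels):
    inside = np.all((X >= lo) & (X <= hi), axis=1)
    coverage = inside.mean()                     # fraction of data in the box
    precision = labels[inside].mean() if inside.any() else 0.0
    return coverage, precision

# Crude greedy search: push each face of the box outward as long as the
# precision of the grown box stays above a threshold.
lo, hi = x0 - 0.2, x0 + 0.2
for _ in range(50):
    grown = False
    for j in range(2):
        for sign in (-1, 1):
            trial_lo, trial_hi = lo.copy(), hi.copy()
            if sign < 0:
                trial_lo[j] -= 0.05
            else:
                trial_hi[j] += 0.05
            _, prec = box_metrics(trial_lo, trial_hi, X, labels)
            if prec >= 0.95:
                lo, hi, grown = trial_lo, trial_hi, True
    if not grown:
        break

coverage, precision = box_metrics(lo, hi, X, labels)
```

On this data the box expands freely along the irrelevant axis and stops near the true decision boundary x0 = 0, ending with high coverage at precision above the threshold.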
2202.13293
|
Mahmood Ahmadi
|
Ameneh Zarei, Shahla Safari, Mahmood Ahmadi, Farhad Mardukhi
|
Past, Present and Future of Hadoop: A Survey
|
A survey on Hadoop
| null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, a technology for massive data storage and computing named
Hadoop is surveyed. Hadoop consists of heterogeneous computing devices like
regular PCs, abstracting away the details of parallel processing so that
developers can concentrate on their computational problem. A Hadoop cluster is
made of two parts: HDFS and MapReduce. A Hadoop cluster uses HDFS for data
management. HDFS provides storage for the input and output data of MapReduce
jobs and is designed for high fault tolerance, high distribution capacity, and
high throughput. It is also suitable for storing terabyte-scale data on
clusters, and it runs on flexible, commodity hardware.
|
[
{
"created": "Sun, 27 Feb 2022 05:09:09 GMT",
"version": "v1"
}
] |
2022-03-01
|
[
[
"Zarei",
"Ameneh",
""
],
[
"Safari",
"Shahla",
""
],
[
"Ahmadi",
"Mahmood",
""
],
[
"Mardukhi",
"Farhad",
""
]
] |
In this paper, a technology for massive data storage and computing named Hadoop is surveyed. Hadoop consists of heterogeneous computing devices like regular PCs, abstracting away the details of parallel processing so that developers can concentrate on their computational problem. A Hadoop cluster is made of two parts: HDFS and MapReduce. A Hadoop cluster uses HDFS for data management. HDFS provides storage for the input and output data of MapReduce jobs and is designed for high fault tolerance, high distribution capacity, and high throughput. It is also suitable for storing terabyte-scale data on clusters, and it runs on flexible, commodity hardware.
|
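The two-phase MapReduce model mentioned above can be illustrated without a cluster at all: a word count where the map phase emits (word, 1) pairs, a shuffle groups them by key, and the reduce phase sums each group. This is a single-process sketch of the programming model only — real Hadoop distributes these phases across HDFS blocks and worker nodes.

```python
from collections import defaultdict
from itertools import chain

def map_phase(doc):
    """Map: emit a (word, 1) pair for every word in the document."""
    return [(w, 1) for w in doc.split()]

def shuffle(pairs):
    """Shuffle: group all emitted values by key."""
    groups = defaultdict(list)
    for k, v in pairs:
        groups[k].append(v)
    return groups

def reduce_phase(groups):
    """Reduce: combine each key's values — here, by summing the counts."""
    return {k: sum(vs) for k, vs in groups.items()}

docs = ["big data on hadoop", "hadoop stores big data in hdfs"]
counts = reduce_phase(shuffle(chain.from_iterable(map(map_phase, docs))))
# counts["hadoop"] == 2, counts["big"] == 2
```

The same map/shuffle/reduce contract is what a Hadoop job implements via its Mapper and Reducer classes, with HDFS supplying the input splits and storing the output.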
1805.04737
|
Dongrui Wu
|
Dongrui Wu, Vernon J. Lawhern, Stephen Gordon, Brent J. Lance,
Chin-Teng Lin
|
Offline EEG-Based Driver Drowsiness Estimation Using Enhanced Batch-Mode
Active Learning (EBMAL) for Regression
| null |
IEEE Int'l. Conf. on Systems, Man and Cybernetics, pp. 730-736,
Budapest, Hungary, 2016
| null | null |
cs.LG cs.HC stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
There are many important regression problems in real-world brain-computer
interface (BCI) applications, e.g., driver drowsiness estimation from EEG
signals. This paper considers offline analysis: given a pool of unlabeled EEG
epochs recorded during driving, how do we optimally select a small number of
them to label so that an accurate regression model can be built from them to
label the rest? Active learning is a promising solution to this problem, but
interestingly, to the best of our knowledge, it has not been used for regression
problems in BCI so far. This paper proposes a novel enhanced batch-mode active
learning (EBMAL) approach for regression, which improves upon a baseline active
learning algorithm by increasing the reliability, representativeness and
diversity of the selected samples to achieve better regression performance. We
validate its effectiveness using driver drowsiness estimation from EEG signals.
However, EBMAL is a general approach that can also be applied to many other
offline regression problems beyond BCI.
|
[
{
"created": "Sat, 12 May 2018 15:36:05 GMT",
"version": "v1"
}
] |
2020-03-31
|
[
[
"Wu",
"Dongrui",
""
],
[
"Lawhern",
"Vernon J.",
""
],
[
"Gordon",
"Stephen",
""
],
[
"Lance",
"Brent J.",
""
],
[
"Lin",
"Chin-Teng",
""
]
] |
There are many important regression problems in real-world brain-computer interface (BCI) applications, e.g., driver drowsiness estimation from EEG signals. This paper considers offline analysis: given a pool of unlabeled EEG epochs recorded during driving, how do we optimally select a small number of them to label so that an accurate regression model can be built from them to label the rest? Active learning is a promising solution to this problem, but interestingly, to the best of our knowledge, it has not been used for regression problems in BCI so far. This paper proposes a novel enhanced batch-mode active learning (EBMAL) approach for regression, which improves upon a baseline active learning algorithm by increasing the reliability, representativeness and diversity of the selected samples to achieve better regression performance. We validate its effectiveness using driver drowsiness estimation from EEG signals. However, EBMAL is a general approach that can also be applied to many other offline regression problems beyond BCI.
|
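A minimal pool-based sketch of the batch-mode idea described above: select a diverse batch from the unlabeled pool (here with a greedy max-min-distance rule, one simple proxy for the diversity/representativeness criteria), query the labels, and fit a regression model. The synthetic pool and label oracle are hypothetical, and this is not the EBMAL algorithm itself, which additionally enhances the reliability of the selected samples.

```python
import numpy as np

rng = np.random.default_rng(2)

# Pool of unlabeled 1-D samples; the oracle (queried only for selected
# samples) follows y = 3x + noise.
pool = rng.uniform(0, 1, size=(200, 1))

def oracle(x):
    return 3 * x[:, 0] + 0.05 * rng.normal(size=len(x))

def select_diverse(pool, k):
    """Greedy max-min-distance selection: start near the pool centroid,
    then repeatedly pick the point farthest from those already chosen."""
    chosen = [int(np.argmin(np.abs(pool[:, 0] - pool[:, 0].mean())))]
    while len(chosen) < k:
        d = np.min(np.abs(pool[:, 0][:, None] - pool[chosen, 0][None, :]),
                   axis=1)
        chosen.append(int(np.argmax(d)))
    return np.array(chosen)

idx = select_diverse(pool, 10)
X_lab, y_lab = pool[idx], oracle(pool[idx])

# Fit least squares on the 10 labeled samples; use it to label the rest.
A = np.c_[X_lab, np.ones(len(X_lab))]
coef, *_ = np.linalg.lstsq(A, y_lab, rcond=None)
pred = np.c_[pool, np.ones(len(pool))] @ coef
```

Because the selected batch spans the input range, a handful of labels already pins down the regression line well, which is the basic payoff of batch-mode selection over random sampling.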
1903.03731
|
Hirak Jyoti Kashyap
|
Hirak J Kashyap, Charless Fowlkes, Jeffrey L Krichmar
|
Sparse Representations for Object and Ego-motion Estimation in Dynamic
Scenes
|
With supplementary material
| null |
10.1109/TNNLS.2020.3006467
| null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Dynamic scenes that contain both object motion and egomotion are a challenge
for monocular visual odometry (VO). Another issue with monocular VO is the
scale ambiguity, i.e. these methods cannot estimate scene depth and camera
motion in real scale. Here, we propose a learning based approach to predict
camera motion parameters directly from optic flow, by marginalizing depthmap
variations and outliers. This is achieved by learning a sparse overcomplete
basis set of egomotion in an autoencoder network, which is able to eliminate
irrelevant components of optic flow for the task of camera parameter or
motionfield estimation. The model is trained using a sparsity regularizer and a
supervised egomotion loss, and achieves state-of-the-art performance on
trajectory prediction and camera rotation prediction tasks on the KITTI and
Virtual KITTI datasets, respectively. The sparse latent-space egomotion
representation learned by the model is robust and requires only 5% of the
hidden layer neurons to maintain the best trajectory prediction accuracy on the
KITTI dataset. Additionally, in the presence of depth information, the proposed
method demonstrates faithful object velocity prediction for a wide range of
object sizes and speeds by global compensation of predicted egomotion and a
divisive normalization procedure.
|
[
{
"created": "Sat, 9 Mar 2019 03:56:53 GMT",
"version": "v1"
}
] |
2020-08-31
|
[
[
"Kashyap",
"Hirak J",
""
],
[
"Fowlkes",
"Charless",
""
],
[
"Krichmar",
"Jeffrey L",
""
]
] |
Dynamic scenes that contain both object motion and egomotion are a challenge for monocular visual odometry (VO). Another issue with monocular VO is the scale ambiguity, i.e. these methods cannot estimate scene depth and camera motion in real scale. Here, we propose a learning based approach to predict camera motion parameters directly from optic flow, by marginalizing depthmap variations and outliers. This is achieved by learning a sparse overcomplete basis set of egomotion in an autoencoder network, which is able to eliminate irrelevant components of optic flow for the task of camera parameter or motionfield estimation. The model is trained using a sparsity regularizer and a supervised egomotion loss, and achieves state-of-the-art performance on trajectory prediction and camera rotation prediction tasks on the KITTI and Virtual KITTI datasets, respectively. The sparse latent-space egomotion representation learned by the model is robust and requires only 5% of the hidden layer neurons to maintain the best trajectory prediction accuracy on the KITTI dataset. Additionally, in the presence of depth information, the proposed method demonstrates faithful object velocity prediction for a wide range of object sizes and speeds by global compensation of predicted egomotion and a divisive normalization procedure.
|
2012.11866
|
Zehua Sun
|
Zehua Sun, Qiuhong Ke, Hossein Rahmani, Mohammed Bennamoun, Gang Wang
and Jun Liu
|
Human Action Recognition from Various Data Modalities: A Review
| null | null |
10.1109/TPAMI.2022.3183112
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Human Action Recognition (HAR) aims to understand human behavior and assign a
label to each action. It has a wide range of applications, and therefore has
been attracting increasing attention in the field of computer vision. Human
actions can be represented using various data modalities, such as RGB,
skeleton, depth, infrared, point cloud, event stream, audio, acceleration,
radar, and WiFi signal, which encode different sources of useful yet distinct
information and have various advantages depending on the application scenarios.
Consequently, many existing works have investigated different types of
approaches for HAR using various modalities. In this paper, we present
a comprehensive survey of recent progress in deep learning methods for HAR
based on the type of input data modality. Specifically, we review the current
mainstream deep learning methods for single data modalities and multiple data
modalities, including the fusion-based and the co-learning-based frameworks. We
also present comparative results on several benchmark datasets for HAR,
together with insightful observations and inspiring future research directions.
|
[
{
"created": "Tue, 22 Dec 2020 07:37:43 GMT",
"version": "v1"
},
{
"created": "Mon, 28 Dec 2020 05:34:43 GMT",
"version": "v2"
},
{
"created": "Fri, 29 Jan 2021 12:13:25 GMT",
"version": "v3"
},
{
"created": "Fri, 23 Jul 2021 15:30:59 GMT",
"version": "v4"
},
{
"created": "Tue, 21 Jun 2022 13:42:44 GMT",
"version": "v5"
}
] |
2022-06-22
|
[
[
"Sun",
"Zehua",
""
],
[
"Ke",
"Qiuhong",
""
],
[
"Rahmani",
"Hossein",
""
],
[
"Bennamoun",
"Mohammed",
""
],
[
"Wang",
"Gang",
""
],
[
"Liu",
"Jun",
""
]
] |
Human Action Recognition (HAR) aims to understand human behavior and assign a label to each action. It has a wide range of applications, and therefore has been attracting increasing attention in the field of computer vision. Human actions can be represented using various data modalities, such as RGB, skeleton, depth, infrared, point cloud, event stream, audio, acceleration, radar, and WiFi signal, which encode different sources of useful yet distinct information and have various advantages depending on the application scenarios. Consequently, many existing works have investigated different types of approaches for HAR using various modalities. In this paper, we present a comprehensive survey of recent progress in deep learning methods for HAR based on the type of input data modality. Specifically, we review the current mainstream deep learning methods for single data modalities and multiple data modalities, including the fusion-based and the co-learning-based frameworks. We also present comparative results on several benchmark datasets for HAR, together with insightful observations and inspiring future research directions.
|
2005.10788
|
Anastasiya Gorodilova
|
Anastasiya Gorodilova
|
A note on the properties of associated Boolean functions of quadratic
APN functions
| null |
Prikladnaya Diskretnaya Matematika. 2020. No 47, pp 16-21
|
10.17223/20710410/47/2
| null |
cs.DM math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Let $F$ be a quadratic APN function of $n$ variables. The associated Boolean
function $\gamma_F$ in $2n$ variables ($\gamma_F(a,b)=1$ if $a\neq{\bf 0}$ and
the equation $F(x)+F(x+a)=b$ has solutions) has the form $\gamma_F(a,b) =
\Phi_F(a) \cdot b + \varphi_F(a) + 1$ for appropriate functions
$\Phi_F:\mathbb{F}_2^n\to \mathbb{F}_2^n$ and $\varphi_F:\mathbb{F}_2^n\to
\mathbb{F}_2$. We summarize the known results and prove new ones regarding
properties of $\Phi_F$ and $\varphi_F$. For instance, we prove that the degree
of $\Phi_F$ is either $n$ or at most $n-2$. Based on computational experiments,
we formulate a conjecture that the degree of any component function of $\Phi_F$
is $n-2$. We show that this conjecture is based on two other conjectures of
independent interest.
|
[
{
"created": "Thu, 21 May 2020 17:10:53 GMT",
"version": "v1"
}
] |
2020-05-22
|
[
[
"Gorodilova",
"Anastasiya",
""
]
] |
Let $F$ be a quadratic APN function of $n$ variables. The associated Boolean function $\gamma_F$ in $2n$ variables ($\gamma_F(a,b)=1$ if $a\neq{\bf 0}$ and the equation $F(x)+F(x+a)=b$ has solutions) has the form $\gamma_F(a,b) = \Phi_F(a) \cdot b + \varphi_F(a) + 1$ for appropriate functions $\Phi_F:\mathbb{F}_2^n\to \mathbb{F}_2^n$ and $\varphi_F:\mathbb{F}_2^n\to \mathbb{F}_2$. We summarize the known results and prove new ones regarding properties of $\Phi_F$ and $\varphi_F$. For instance, we prove that the degree of $\Phi_F$ is either $n$ or at most $n-2$. Based on computational experiments, we formulate a conjecture that the degree of any component function of $\Phi_F$ is $n-2$. We show that this conjecture is based on two other conjectures of independent interest.
|
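The objects in the abstract above are easy to experiment with for small $n$. The sketch below brute-forces $\gamma_F$ for the quadratic APN function $F(x)=x^3$ over GF(2^3) and checks that, for each nonzero $a$, the set of $b$ for which $F(x)+F(x+a)=b$ is solvable is exactly half the field — the structure underlying the affine-in-$b$ form $\Phi_F(a)\cdot b + \varphi_F(a) + 1$. The field modulus and the example function are standard textbook choices, not taken from the paper.

```python
# Multiplication in GF(2^3) with modulus x^3 + x + 1 (0b1011),
# by shift-and-reduce on bit-vector representations of polynomials.
def gf_mul(a, b, mod=0b1011, n=3):
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << n):
            a ^= mod
    return r

def F(x):                      # Gold function x^3, quadratic APN for n = 3
    return gf_mul(gf_mul(x, x), x)

n = 3
for a in range(1, 2 ** n):
    # gamma_F(a, b) = 1 exactly for b in `image`; APN-ness means each
    # nonzero derivative F(x) + F(x + a) hits exactly half of the field.
    image = {F(x) ^ F(x ^ a) for x in range(2 ** n)}
    assert len(image) == 2 ** (n - 1)
```

For a quadratic $F$ each derivative is affine in $x$, so its image is an affine subspace; the check confirms its size is $2^{n-1}$ for every nonzero direction $a$.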
1604.02815
|
Dan Rosenbaum
|
Dan Rosenbaum and Yair Weiss
|
Beyond Brightness Constancy: Learning Noise Models for Optical Flow
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Optical flow is typically estimated by minimizing a "data cost" and an
optional regularizer. While there has been much work on different regularizers
many modern algorithms still use a data cost that is not very different from
the ones used over 30 years ago: a robust version of brightness constancy or
gradient constancy. In this paper we leverage the recent availability of
ground-truth optical flow databases in order to learn a data cost. Specifically
we take a generative approach in which the data cost models the distribution of
noise after warping an image according to the flow and we measure the
"goodness" of a data cost by how well it matches the true distribution of flow
warp error. Consistent with current practice, we find that robust versions of
gradient constancy are better models than simple brightness constancy but a
learned GMM that models the density of patches of warp error gives a much
better fit than any existing assumption of constancy. This significant
advantage of the GMM is due to an explicit modeling of the spatial structure of
warp errors, a feature which is missing from almost all existing data costs in
optical flow. Finally, we show how a good density model of warp error patches
can be used for optical flow estimation on whole images. We replace the data
cost by the expected patch log-likelihood (EPLL), and show how this cost can be
optimized iteratively using an additional step of denoising the warp error
image. The results of our experiments are promising and show that patch models
with higher likelihood lead to better optical flow estimation.
|
[
{
"created": "Mon, 11 Apr 2016 07:23:44 GMT",
"version": "v1"
}
] |
2016-04-12
|
[
[
"Rosenbaum",
"Dan",
""
],
[
"Weiss",
"Yair",
""
]
] |
Optical flow is typically estimated by minimizing a "data cost" and an optional regularizer. While there has been much work on different regularizers many modern algorithms still use a data cost that is not very different from the ones used over 30 years ago: a robust version of brightness constancy or gradient constancy. In this paper we leverage the recent availability of ground-truth optical flow databases in order to learn a data cost. Specifically we take a generative approach in which the data cost models the distribution of noise after warping an image according to the flow and we measure the "goodness" of a data cost by how well it matches the true distribution of flow warp error. Consistent with current practice, we find that robust versions of gradient constancy are better models than simple brightness constancy but a learned GMM that models the density of patches of warp error gives a much better fit than any existing assumption of constancy. This significant advantage of the GMM is due to an explicit modeling of the spatial structure of warp errors, a feature which is missing from almost all existing data costs in optical flow. Finally, we show how a good density model of warp error patches can be used for optical flow estimation on whole images. We replace the data cost by the expected patch log-likelihood (EPLL), and show how this cost can be optimized iteratively using an additional step of denoising the warp error image. The results of our experiments are promising and show that patch models with higher likelihood lead to better optical flow estimation.
|
1303.2489
|
Juan Antonio Navarro P\'erez
|
Juan Antonio Navarro-P\'erez and Andrey Rybalchenko
|
Separation Logic Modulo Theories
|
16 pages
| null | null | null |
cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Logical reasoning about program data often requires dealing with heap
structures as well as scalar data types. Recent advances in Satisfiability
Modulo Theories (SMT) already offer efficient procedures for dealing with
scalars, yet they lack any support for dealing with heap structures. In this
paper, we present an approach that integrates Separation Logic---a prominent
logic for reasoning about list segments on the heap---and SMT. We follow a
model-based approach that communicates aliasing among heap cells between the
SMT solver and the Separation Logic reasoning part. An experimental evaluation
using the Z3 solver indicates that our approach can effectively put to work the
advances in SMT for dealing with heap structures. This is the first decision
procedure for the combination of separation logic with SMT theories.
|
[
{
"created": "Mon, 11 Mar 2013 11:22:51 GMT",
"version": "v1"
}
] |
2013-03-12
|
[
[
"Navarro-Pérez",
"Juan Antonio",
""
],
[
"Rybalchenko",
"Andrey",
""
]
] |
Logical reasoning about program data often requires dealing with heap structures as well as scalar data types. Recent advances in Satisfiability Modulo Theories (SMT) already offer efficient procedures for dealing with scalars, yet they lack any support for dealing with heap structures. In this paper, we present an approach that integrates Separation Logic---a prominent logic for reasoning about list segments on the heap---and SMT. We follow a model-based approach that communicates aliasing among heap cells between the SMT solver and the Separation Logic reasoning part. An experimental evaluation using the Z3 solver indicates that our approach can effectively put to work the advances in SMT for dealing with heap structures. This is the first decision procedure for the combination of separation logic with SMT theories.
|
2406.02542
|
Prajwal Singhania
|
Prajwal Singhania, Siddharth Singh, Shwai He, Soheil Feizi, Abhinav
Bhatele
|
Loki: Low-Rank Keys for Efficient Sparse Attention
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Inference on large language models can be expensive in terms of the compute
and memory costs involved, especially when long sequence lengths are used. In
particular, the self-attention mechanism used in such models contributes
significantly to these costs, which has resulted in several recent works that
propose sparse attention approximations for inference. In this work, we propose
to approximate the self-attention computation by focusing on the dimensionality
of key vectors computed in the attention block. Our analysis reveals that the
key vectors lie in a significantly lower-dimensional space, consistently across
several datasets and models. Exploiting this observation, we propose Loki, a
novel sparse attention method that ranks and selects tokens in the KV-cache
based on attention scores computed in low-dimensional space. Our evaluations
show that Loki is able to maintain the efficacy of the models better than other
popular approximation methods, while speeding up the attention computation due
to reduced data movement (load/store) and compute costs.
|
[
{
"created": "Tue, 4 Jun 2024 17:58:03 GMT",
"version": "v1"
}
] |
2024-06-05
|
[
[
"Singhania",
"Prajwal",
""
],
[
"Singh",
"Siddharth",
""
],
[
"He",
"Shwai",
""
],
[
"Feizi",
"Soheil",
""
],
[
"Bhatele",
"Abhinav",
""
]
] |
Inference on large language models can be expensive in terms of the compute and memory costs involved, especially when long sequence lengths are used. In particular, the self-attention mechanism used in such models contributes significantly to these costs, which has resulted in several recent works that propose sparse attention approximations for inference. In this work, we propose to approximate the self-attention computation by focusing on the dimensionality of key vectors computed in the attention block. Our analysis reveals that the key vectors lie in a significantly lower-dimensional space, consistently across several datasets and models. Exploiting this observation, we propose Loki, a novel sparse attention method that ranks and selects tokens in the KV-cache based on attention scores computed in low-dimensional space. Our evaluations show that Loki is able to maintain the efficacy of the models better than other popular approximation methods, while speeding up the attention computation due to reduced data movement (load/store) and compute costs.
|
2307.04872
|
Xinran Zhu
|
Xinran Zhu, Hong Shui, Bodong Chen
|
The Synthesis Lab: Empowering Collaborative Learning in Higher Education
through Knowledge Synthesis
| null | null | null | null |
cs.HC cs.CY
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The ability to synthesize information has emerged as a critical skill for
success across various fields. However, within the field of education, there is
a lack of systematic understanding and well-defined design infrastructures that
address the mechanisms and processes of knowledge synthesis in collaborative
learning settings. In this poster, we introduce a design innovation - The
Synthesis Lab, which aims to support students in synthesizing ideas from their
online discussions in higher education classrooms. The tool offers structured
work-spaces for students to decompose the synthesis process into intermediate
synthesis products and features two key iterative processes of knowledge
synthesis in collaborative settings: categorizing peers' ideas into conceptual
building blocks and developing a synthesis of the discussions. Future
implementation and evaluation of the design will make significant contributions
to both research and practice.
|
[
{
"created": "Mon, 10 Jul 2023 19:41:54 GMT",
"version": "v1"
}
] |
2023-07-12
|
[
[
"Zhu",
"Xinran",
""
],
[
"Shui",
"Hong",
""
],
[
"Chen",
"Bodong",
""
]
] |
The ability to synthesize information has emerged as a critical skill for success across various fields. However, within the field of education, there is a lack of systematic understanding and well-defined design infrastructures that address the mechanisms and processes of knowledge synthesis in collaborative learning settings. In this poster, we introduce a design innovation - The Synthesis Lab, which aims to support students in synthesizing ideas from their online discussions in higher education classrooms. The tool offers structured work-spaces for students to decompose the synthesis process into intermediate synthesis products and features two key iterative processes of knowledge synthesis in collaborative settings: categorizing peers' ideas into conceptual building blocks and developing a synthesis of the discussions. Future implementation and evaluation of the design will make significant contributions to both research and practice.
|
1001.0210
|
Alexander Vardy
|
Hessam Mahdavifar and Alexander Vardy
|
Achieving the Secrecy Capacity of Wiretap Channels Using Polar Codes
|
15 pages, to appear in the IEEE Transactions on Information Theory
| null | null | null |
cs.IT cs.CR math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Suppose Alice wishes to send messages to Bob through a communication channel
C_1, but her transmissions also reach an eavesdropper Eve through another
channel C_2. The goal is to design a coding scheme that makes it possible for
Alice to communicate both reliably and securely. Reliability is measured in
terms of Bob's probability of error in recovering the message, while security
is measured in terms of Eve's equivocation ratio. Wyner showed that the
situation is characterized by a single constant C_s, called the secrecy
capacity, which has the following meaning: for all $\epsilon > 0$, there exist
coding schemes of rate $R \ge C_s - \epsilon$ that asymptotically achieve both
the reliability and the security objectives. However, his proof of this result
is based upon a nonconstructive random-coding argument. To date, despite a
considerable research effort, the only case where we know how to construct
coding schemes that achieve secrecy capacity is when Eve's channel C_2 is an
erasure channel, or a combinatorial variation thereof.
Polar codes were recently invented by Arikan; they approach the capacity of
symmetric binary-input discrete memoryless channels with low encoding and
decoding complexity. Herein, we use polar codes to construct a coding scheme
that achieves the secrecy capacity for a wide range of wiretap channels. Our
construction works for any instantiation of the wiretap channel model, as long
as both C_1 and C_2 are symmetric and binary-input, and C_2 is degraded with
respect to C_1. Moreover, we show how to modify our construction in order to
provide strong security, in the sense defined by Maurer, while still operating
at a rate that approaches the secrecy capacity. In this case, we cannot
guarantee that the reliability condition will be satisfied unless the main
channel C_1 is noiseless, although we believe it can be always satisfied in
practice.
|
[
{
"created": "Fri, 1 Jan 2010 05:30:10 GMT",
"version": "v1"
},
{
"created": "Mon, 4 Apr 2011 00:48:19 GMT",
"version": "v2"
}
] |
2015-03-13
|
[
[
"Mahdavifar",
"Hessam",
""
],
[
"Vardy",
"Alexander",
""
]
] |
Suppose Alice wishes to send messages to Bob through a communication channel C_1, but her transmissions also reach an eavesdropper Eve through another channel C_2. The goal is to design a coding scheme that makes it possible for Alice to communicate both reliably and securely. Reliability is measured in terms of Bob's probability of error in recovering the message, while security is measured in terms of Eve's equivocation ratio. Wyner showed that the situation is characterized by a single constant C_s, called the secrecy capacity, which has the following meaning: for all $\epsilon > 0$, there exist coding schemes of rate $R \ge C_s - \epsilon$ that asymptotically achieve both the reliability and the security objectives. However, his proof of this result is based upon a nonconstructive random-coding argument. To date, despite a considerable research effort, the only case where we know how to construct coding schemes that achieve secrecy capacity is when Eve's channel C_2 is an erasure channel, or a combinatorial variation thereof. Polar codes were recently invented by Arikan; they approach the capacity of symmetric binary-input discrete memoryless channels with low encoding and decoding complexity. Herein, we use polar codes to construct a coding scheme that achieves the secrecy capacity for a wide range of wiretap channels. Our construction works for any instantiation of the wiretap channel model, as long as both C_1 and C_2 are symmetric and binary-input, and C_2 is degraded with respect to C_1. Moreover, we show how to modify our construction in order to provide strong security, in the sense defined by Maurer, while still operating at a rate that approaches the secrecy capacity. In this case, we cannot guarantee that the reliability condition will be satisfied unless the main channel C_1 is noiseless, although we believe it can be always satisfied in practice.
|
2107.04197
|
John Chen
|
John Chen, Cameron Wolfe, Anastasios Kyrillidis
|
REX: Revisiting Budgeted Training with an Improved Schedule
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep learning practitioners often operate on a computational and monetary
budget. Thus, it is critical to design optimization algorithms that perform
well under any budget. The linear learning rate schedule is considered the best
budget-aware schedule, as it outperforms most other schedules in the low budget
regime. On the other hand, learning rate schedules -- such as the
\texttt{30-60-90} step schedule -- are known to achieve high performance when
the model can be trained for many epochs. Yet, it is often not known a priori
whether one's budget will be large or small; thus, the optimal choice of
learning rate schedule is made on a case-by-case basis. In this paper, we frame
the learning rate schedule selection problem as a combination of $i)$ selecting
a profile (i.e., the continuous function that models the learning rate
schedule), and $ii)$ choosing a sampling rate (i.e., how frequently the
learning rate is updated/sampled from this profile). We propose a novel profile
and sampling rate combination called the Reflected Exponential (REX) schedule,
which we evaluate across seven different experimental settings with both SGD
and Adam optimizers. REX outperforms the linear schedule in the low budget
regime, while matching or exceeding the performance of several state-of-the-art
learning rate schedules (linear, step, exponential, cosine, step decay on
plateau, and OneCycle) in both high and low budget regimes. Furthermore, REX
requires no added computation, storage, or hyperparameters.
|
[
{
"created": "Fri, 9 Jul 2021 04:17:35 GMT",
"version": "v1"
}
] |
2021-07-12
|
[
[
"Chen",
"John",
""
],
[
"Wolfe",
"Cameron",
""
],
[
"Kyrillidis",
"Anastasios",
""
]
] |
Deep learning practitioners often operate on a computational and monetary budget. Thus, it is critical to design optimization algorithms that perform well under any budget. The linear learning rate schedule is considered the best budget-aware schedule, as it outperforms most other schedules in the low budget regime. On the other hand, learning rate schedules -- such as the \texttt{30-60-90} step schedule -- are known to achieve high performance when the model can be trained for many epochs. Yet, it is often not known a priori whether one's budget will be large or small; thus, the optimal choice of learning rate schedule is made on a case-by-case basis. In this paper, we frame the learning rate schedule selection problem as a combination of $i)$ selecting a profile (i.e., the continuous function that models the learning rate schedule), and $ii)$ choosing a sampling rate (i.e., how frequently the learning rate is updated/sampled from this profile). We propose a novel profile and sampling rate combination called the Reflected Exponential (REX) schedule, which we evaluate across seven different experimental settings with both SGD and Adam optimizers. REX outperforms the linear schedule in the low budget regime, while matching or exceeding the performance of several state-of-the-art learning rate schedules (linear, step, exponential, cosine, step decay on plateau, and OneCycle) in both high and low budget regimes. Furthermore, REX requires no added computation, storage, or hyperparameters.
|
2308.01359
|
Maxime Flin
|
Maxime Flin, Magn\'us M. Halld\'orsson and Alexandre Nolin
|
Fast Coloring Despite Congested Relays
|
37 pages. To appear in proceedings of DISC 2023
| null | null | null |
cs.DS cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We provide a $O(\log^6 \log n)$-round randomized algorithm for distance-2
coloring in CONGEST with $\Delta^2+1$ colors. For
$\Delta\gg\operatorname{poly}\log n$, this improves exponentially on the
$O(\log\Delta+\operatorname{poly}\log\log n)$ algorithm of [Halld\'orsson,
Kuhn, Maus, Nolin, DISC'20].
Our study is motivated by the ubiquity and hardness of local reductions in
CONGEST. For instance, algorithms for the Local Lov\'asz Lemma [Moser, Tardos,
JACM'10; Fischer, Ghaffari, DISC'17; Davies, SODA'23] usually assume
communication on the conflict graph, which can be simulated in LOCAL with only
constant overhead, while this may be prohibitively expensive in CONGEST. We
hope our techniques help tackle in CONGEST other coloring problems defined by
local relations.
|
[
{
"created": "Wed, 2 Aug 2023 18:04:52 GMT",
"version": "v1"
}
] |
2023-08-04
|
[
[
"Flin",
"Maxime",
""
],
[
"Halldórsson",
"Magnús M.",
""
],
[
"Nolin",
"Alexandre",
""
]
] |
We provide a $O(\log^6 \log n)$-round randomized algorithm for distance-2 coloring in CONGEST with $\Delta^2+1$ colors. For $\Delta\gg\operatorname{poly}\log n$, this improves exponentially on the $O(\log\Delta+\operatorname{poly}\log\log n)$ algorithm of [Halld\'orsson, Kuhn, Maus, Nolin, DISC'20]. Our study is motivated by the ubiquity and hardness of local reductions in CONGEST. For instance, algorithms for the Local Lov\'asz Lemma [Moser, Tardos, JACM'10; Fischer, Ghaffari, DISC'17; Davies, SODA'23] usually assume communication on the conflict graph, which can be simulated in LOCAL with only constant overhead, while this may be prohibitively expensive in CONGEST. We hope our techniques help tackle in CONGEST other coloring problems defined by local relations.
|
2408.01346
|
Luca Maria Aiello
|
Anders Giovanni M{\o}ller, Luca Maria Aiello
|
Prompt Refinement or Fine-tuning? Best Practices for using LLMs in
Computational Social Science Tasks
|
5 pages, 1 table
| null | null | null |
cs.CY cs.CL physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large Language Models are expressive tools that enable complex tasks of text
understanding within Computational Social Science. Their versatility, while
beneficial, poses a barrier for establishing standardized best practices within
the field. To bring clarity on the values of different strategies, we present
an overview of the performance of modern LLM-based classification methods on a
benchmark of 23 social knowledge tasks. Our results point to three best
practices: select models with larger vocabulary and pre-training corpora; avoid
simple zero-shot in favor of AI-enhanced prompting; fine-tune on task-specific
data, and consider more complex forms of instruction-tuning on multiple
datasets only when training data is abundant.
|
[
{
"created": "Fri, 2 Aug 2024 15:46:36 GMT",
"version": "v1"
}
] |
2024-08-05
|
[
[
"Møller",
"Anders Giovanni",
""
],
[
"Aiello",
"Luca Maria",
""
]
] |
Large Language Models are expressive tools that enable complex tasks of text understanding within Computational Social Science. Their versatility, while beneficial, poses a barrier for establishing standardized best practices within the field. To bring clarity on the values of different strategies, we present an overview of the performance of modern LLM-based classification methods on a benchmark of 23 social knowledge tasks. Our results point to three best practices: select models with larger vocabulary and pre-training corpora; avoid simple zero-shot in favor of AI-enhanced prompting; fine-tune on task-specific data, and consider more complex forms of instruction-tuning on multiple datasets only when training data is abundant.
|
1403.2802
|
Zhimin Cao
|
Haoqiang Fan, Zhimin Cao, Yuning Jiang, Qi Yin, Chinchilla Doudou
|
Learning Deep Face Representation
| null | null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Face representation is a crucial step of face recognition systems. An optimal
face representation should be discriminative, robust, compact, and very
easy-to-implement. While numerous hand-crafted and learning-based
representations have been proposed, considerable room for improvement is still
present. In this paper, we present a very easy-to-implement deep learning
framework for face representation. Our method is based on a new structure of deep
network (called Pyramid CNN). The proposed Pyramid CNN adopts a
greedy-filter-and-down-sample operation, which enables the training procedure
to be very fast and computation-efficient. In addition, the structure of
Pyramid CNN can naturally incorporate feature sharing across multi-scale face
representations, increasing the discriminative ability of resulting
representation. Our basic network is capable of achieving high recognition
accuracy ($85.8\%$ on LFW benchmark) with only an 8-dimensional representation. When
extended to feature-sharing Pyramid CNN, our system achieves the
state-of-the-art performance ($97.3\%$) on LFW benchmark. We also introduce a
new benchmark of realistic face images on social networks and validate that our
proposed representation generalizes well.
|
[
{
"created": "Wed, 12 Mar 2014 03:47:18 GMT",
"version": "v1"
}
] |
2014-03-13
|
[
[
"Fan",
"Haoqiang",
""
],
[
"Cao",
"Zhimin",
""
],
[
"Jiang",
"Yuning",
""
],
[
"Yin",
"Qi",
""
],
[
"Doudou",
"Chinchilla",
""
]
] |
Face representation is a crucial step of face recognition systems. An optimal face representation should be discriminative, robust, compact, and very easy-to-implement. While numerous hand-crafted and learning-based representations have been proposed, considerable room for improvement is still present. In this paper, we present a very easy-to-implement deep learning framework for face representation. Our method is based on a new structure of deep network (called Pyramid CNN). The proposed Pyramid CNN adopts a greedy-filter-and-down-sample operation, which enables the training procedure to be very fast and computation-efficient. In addition, the structure of Pyramid CNN can naturally incorporate feature sharing across multi-scale face representations, increasing the discriminative ability of resulting representation. Our basic network is capable of achieving high recognition accuracy ($85.8\%$ on LFW benchmark) with only an 8-dimensional representation. When extended to feature-sharing Pyramid CNN, our system achieves the state-of-the-art performance ($97.3\%$) on LFW benchmark. We also introduce a new benchmark of realistic face images on social networks and validate that our proposed representation generalizes well.
|
1911.03329
|
Stuart Shieber
|
Mirac Suzgun and Sebastian Gehrmann and Yonatan Belinkov and Stuart M.
Shieber
|
Memory-Augmented Recurrent Neural Networks Can Learn Generalized Dyck
Languages
| null | null | null | null |
cs.CL cs.LG cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce three memory-augmented Recurrent Neural Networks (MARNNs) and
explore their capabilities on a series of simple language modeling tasks whose
solutions require stack-based mechanisms. We provide the first demonstration of
neural networks recognizing the generalized Dyck languages, which express the
core of what it means to be a language with hierarchical structure. Our
memory-augmented architectures are easy to train in an end-to-end fashion and
can learn the Dyck languages over as many as six parenthesis-pairs, in addition
to two deterministic palindrome languages and the string-reversal transduction
task, by emulating pushdown automata. Our experiments highlight the increased
modeling capacity of memory-augmented models over simple RNNs, while inflecting
our understanding of the limitations of these models.
|
[
{
"created": "Fri, 8 Nov 2019 15:33:51 GMT",
"version": "v1"
}
] |
2019-11-11
|
[
[
"Suzgun",
"Mirac",
""
],
[
"Gehrmann",
"Sebastian",
""
],
[
"Belinkov",
"Yonatan",
""
],
[
"Shieber",
"Stuart M.",
""
]
] |
We introduce three memory-augmented Recurrent Neural Networks (MARNNs) and explore their capabilities on a series of simple language modeling tasks whose solutions require stack-based mechanisms. We provide the first demonstration of neural networks recognizing the generalized Dyck languages, which express the core of what it means to be a language with hierarchical structure. Our memory-augmented architectures are easy to train in an end-to-end fashion and can learn the Dyck languages over as many as six parenthesis-pairs, in addition to two deterministic palindrome languages and the string-reversal transduction task, by emulating pushdown automata. Our experiments highlight the increased modeling capacity of memory-augmented models over simple RNNs, while inflecting our understanding of the limitations of these models.
|
1905.12032
|
Pu Zhao
|
Pu Zhao, Siyue Wang, Cheng Gongye, Yanzhi Wang, Yunsi Fei, Xue Lin
|
Fault Sneaking Attack: a Stealthy Framework for Misleading Deep Neural
Networks
|
Accepted by the 56th Design Automation Conference (DAC 2019)
| null |
10.1145/3316781.3317825
| null |
cs.LG cs.CR cs.CV stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Despite the great achievements of deep neural networks (DNNs), the
vulnerability of state-of-the-art DNNs raises security concerns of DNNs in many
application domains requiring high reliability. We propose the fault sneaking
attack on DNNs, where the adversary aims to misclassify certain input images
into any target labels by modifying the DNN parameters. We apply ADMM
(alternating direction method of multipliers) for solving the optimization
problem of the fault sneaking attack with two constraints: 1) the
classification of the other images should be unchanged and 2) the parameter
modifications should be minimized. Specifically, the first constraint requires
us not only to inject designated faults (misclassifications), but also to hide
the faults for stealthy or sneaking considerations by maintaining model
accuracy. The second constraint requires us to minimize the parameter
modifications (using L0 norm to measure the number of modifications and L2 norm
to measure the magnitude of modifications). Comprehensive experimental
evaluation demonstrates that the proposed framework can inject multiple
sneaking faults without losing the overall test accuracy performance.
|
[
{
"created": "Tue, 28 May 2019 18:56:44 GMT",
"version": "v1"
}
] |
2019-05-30
|
[
[
"Zhao",
"Pu",
""
],
[
"Wang",
"Siyue",
""
],
[
"Gongye",
"Cheng",
""
],
[
"Wang",
"Yanzhi",
""
],
[
"Fei",
"Yunsi",
""
],
[
"Lin",
"Xue",
""
]
] |
Despite the great achievements of deep neural networks (DNNs), the vulnerability of state-of-the-art DNNs raises security concerns of DNNs in many application domains requiring high reliability. We propose the fault sneaking attack on DNNs, where the adversary aims to misclassify certain input images into any target labels by modifying the DNN parameters. We apply ADMM (alternating direction method of multipliers) for solving the optimization problem of the fault sneaking attack with two constraints: 1) the classification of the other images should be unchanged and 2) the parameter modifications should be minimized. Specifically, the first constraint requires us not only to inject designated faults (misclassifications), but also to hide the faults for stealthy or sneaking considerations by maintaining model accuracy. The second constraint requires us to minimize the parameter modifications (using L0 norm to measure the number of modifications and L2 norm to measure the magnitude of modifications). Comprehensive experimental evaluation demonstrates that the proposed framework can inject multiple sneaking faults without losing the overall test accuracy performance.
|
2407.06533
|
Rachneet Kaur
|
Rachneet Kaur, Zhen Zeng, Tucker Balch, Manuela Veloso
|
LETS-C: Leveraging Language Embedding for Time Series Classification
|
22 pages, 5 figures, 10 tables
| null | null | null |
cs.LG cs.AI cs.CE cs.CL stat.ME
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent advancements in language modeling have shown promising results when
applied to time series data. In particular, fine-tuning pre-trained large
language models (LLMs) for time series classification tasks has achieved
state-of-the-art (SOTA) performance on standard benchmarks. However, these
LLM-based models have a significant drawback due to the large model size, with
the number of trainable parameters in the millions. In this paper, we propose
an alternative approach to leveraging the success of language modeling in the
time series domain. Instead of fine-tuning LLMs, we utilize a language
embedding model to embed time series and then pair the embeddings with a simple
classification head composed of convolutional neural networks (CNN) and
multilayer perceptron (MLP). We conducted extensive experiments on
well-established time series classification benchmark datasets. We demonstrate
that LETS-C not only outperforms the current SOTA in classification accuracy but
also offers a lightweight solution, using only 14.5% of the trainable
parameters on average compared to the SOTA model. Our findings suggest that
leveraging language encoders to embed time series data, combined with a simple
yet effective classification head, offers a promising direction for achieving
high-performance time series classification while maintaining a lightweight
model architecture.
|
[
{
"created": "Tue, 9 Jul 2024 04:07:57 GMT",
"version": "v1"
}
] |
2024-07-10
|
[
[
"Kaur",
"Rachneet",
""
],
[
"Zeng",
"Zhen",
""
],
[
"Balch",
"Tucker",
""
],
[
"Veloso",
"Manuela",
""
]
] |
Recent advancements in language modeling have shown promising results when applied to time series data. In particular, fine-tuning pre-trained large language models (LLMs) for time series classification tasks has achieved state-of-the-art (SOTA) performance on standard benchmarks. However, these LLM-based models have a significant drawback due to the large model size, with the number of trainable parameters in the millions. In this paper, we propose an alternative approach to leveraging the success of language modeling in the time series domain. Instead of fine-tuning LLMs, we utilize a language embedding model to embed time series and then pair the embeddings with a simple classification head composed of convolutional neural networks (CNN) and multilayer perceptron (MLP). We conducted extensive experiments on well-established time series classification benchmark datasets. We demonstrate that LETS-C not only outperforms the current SOTA in classification accuracy but also offers a lightweight solution, using only 14.5% of the trainable parameters on average compared to the SOTA model. Our findings suggest that leveraging language encoders to embed time series data, combined with a simple yet effective classification head, offers a promising direction for achieving high-performance time series classification while maintaining a lightweight model architecture.
|
2205.11668
|
Benjamin Andres Huerfano Zapata
|
Benjamin A. Huerfano Z., Andres F Ardila, and Pedro L Cifuentes
|
TIC como apoyo del soporte social al enfermo cr\'onico y su cuidador :
Aproximaci\'on al estado del Arte
|
7 pages, in Spanish language, 5 figures, and 35 direct references
|
Event Encuentro Nacional de Investigacion y Desarrollo ENID 2013
1st ed, Vol 1, pages 1-7 (2013)
| null | null |
cs.SI cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
This study is carried out in order to provide an overview of the level of
inclusion and participation of ICTs in social support for vulnerable
populations suffering from chronic diseases. The review was conducted
through a bibliographic survey, which served as the basis for the collection
of data and pertinent information. The argumentative study clearly and
concisely identified the advantages and disadvantages of the use of ICT in
social support from psychoeducational and engineering points of view. The
regions with the highest concentration of ICT use in the social support
literature were characterized, based on the previously studied content and
an analysis of the results of this use.
|
[
{
"created": "Mon, 23 May 2022 23:28:30 GMT",
"version": "v1"
}
] |
2022-08-24
|
[
[
"Z.",
"Benjamin A. Huerfano",
""
],
[
"Ardila",
"Andres F",
""
],
[
"Cifuentes",
"Pedro L",
""
]
] |
This study is carried out in order to provide an overview of the level of inclusion and participation of ICTs in social support for vulnerable populations suffering from chronic diseases. The review was conducted through a bibliographic survey, which served as the basis for the collection of data and pertinent information. The argumentative study clearly and concisely identified the advantages and disadvantages of the use of ICT in social support from psychoeducational and engineering points of view. The regions with the highest concentration of ICT use in the social support literature were characterized, based on the previously studied content and an analysis of the results of this use.
|
1412.0954
|
Laura Toni
|
Laura Toni, Thomas Maugey, Pascal Frossard
|
Optimized Packet Scheduling in Multiview Video Navigation Systems
| null | null | null | null |
cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In multiview video systems, multiple cameras generally acquire the same scene
from different perspectives, such that users have the possibility to select
their preferred viewpoint. This results in large amounts of highly redundant
data, which needs to be properly handled during encoding and transmission over
resource-constrained channels. In this work, we study coding and transmission
strategies in multicamera systems, where correlated sources send data through a
bottleneck channel to a central server, which eventually transmits views to
different interactive users. We propose a dynamic correlation-aware packet
scheduling optimization under delay, bandwidth, and interactivity constraints.
The optimization relies both on a novel rate-distortion model, which captures
the importance of each view in the 3D scene reconstruction, and on an objective
function that optimizes resources based on a client navigation model. The
latter takes into account the distortion experienced by interactive clients as
well as the distortion variations that might be observed by clients during
multiview navigation. We solve the scheduling problem with a novel
trellis-based solution, which makes it possible to formally decompose the
multivariate optimization problem, thereby significantly reducing the
computational complexity. Simulation results show the gain of the proposed
algorithm compared to baseline scheduling policies. More specifically, we
show the gain offered by our dynamic
scheduling policy compared to static camera allocation strategies and to
schemes with constant coding strategies. Finally, we show that the best
scheduling policy consistently adapts to the most likely user navigation path
and that it minimizes distortion variations that can be very disturbing for
users in traditional navigation systems.
|
[
{
"created": "Tue, 2 Dec 2014 16:02:12 GMT",
"version": "v1"
}
] |
2014-12-03
|
[
[
"Toni",
"Laura",
""
],
[
"Maugey",
"Thomas",
""
],
[
"Frossard",
"Pascal",
""
]
] |
In multiview video systems, multiple cameras generally acquire the same scene from different perspectives, such that users have the possibility to select their preferred viewpoint. This results in large amounts of highly redundant data, which needs to be properly handled during encoding and transmission over resource-constrained channels. In this work, we study coding and transmission strategies in multicamera systems, where correlated sources send data through a bottleneck channel to a central server, which eventually transmits views to different interactive users. We propose a dynamic correlation-aware packet scheduling optimization under delay, bandwidth, and interactivity constraints. The optimization relies both on a novel rate-distortion model, which captures the importance of each view in the 3D scene reconstruction, and on an objective function that optimizes resources based on a client navigation model. The latter takes into account the distortion experienced by interactive clients as well as the distortion variations that might be observed by clients during multiview navigation. We solve the scheduling problem with a novel trellis-based solution, which makes it possible to formally decompose the multivariate optimization problem, thereby significantly reducing the computational complexity. Simulation results show the gain of the proposed algorithm compared to baseline scheduling policies. More specifically, we show the gain offered by our dynamic scheduling policy compared to static camera allocation strategies and to schemes with constant coding strategies. Finally, we show that the best scheduling policy consistently adapts to the most likely user navigation path and that it minimizes distortion variations that can be very disturbing for users in traditional navigation systems.
|
2307.16120
|
Qingping Zhou
|
Qingping Zhou, Jiayu Qian, Junqi Tang, Jinglai Li
|
Deep Unrolling Networks with Recurrent Momentum Acceleration for
Nonlinear Inverse Problems
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Combining the strengths of model-based iterative algorithms and data-driven
deep learning solutions, deep unrolling networks (DuNets) have become a popular
tool to solve inverse imaging problems. While DuNets have been successfully
applied to many linear inverse problems, nonlinear problems tend to impair the
performance of the method. Inspired by momentum acceleration techniques that
are often used in optimization algorithms, we propose a recurrent momentum
acceleration (RMA) framework that uses a long short-term memory recurrent
neural network (LSTM-RNN) to simulate the momentum acceleration process. The
RMA module leverages the ability of the LSTM-RNN to learn and retain knowledge
from the previous gradients. We apply RMA to two popular DuNets -- the learned
proximal gradient descent (LPGD) and the learned primal-dual (LPD) methods,
resulting in LPGD-RMA and LPD-RMA respectively. We provide experimental results
on two nonlinear inverse problems: a nonlinear deconvolution problem, and an
electrical impedance tomography problem with limited boundary measurements. In
the first experiment we have observed that the improvement due to RMA largely
increases with respect to the nonlinearity of the problem. The results of the
second example further demonstrate that the RMA schemes can significantly
improve the performance of DuNets in strongly ill-posed problems.
|
[
{
"created": "Sun, 30 Jul 2023 03:59:47 GMT",
"version": "v1"
},
{
"created": "Wed, 16 Aug 2023 13:58:20 GMT",
"version": "v2"
},
{
"created": "Wed, 7 Feb 2024 13:19:29 GMT",
"version": "v3"
},
{
"created": "Sun, 31 Mar 2024 08:40:15 GMT",
"version": "v4"
}
] |
2024-04-02
|
[
[
"Zhou",
"Qingping",
""
],
[
"Qian",
"Jiayu",
""
],
[
"Tang",
"Junqi",
""
],
[
"Li",
"Jinglai",
""
]
] |
Combining the strengths of model-based iterative algorithms and data-driven deep learning solutions, deep unrolling networks (DuNets) have become a popular tool to solve inverse imaging problems. While DuNets have been successfully applied to many linear inverse problems, nonlinear problems tend to impair the performance of the method. Inspired by momentum acceleration techniques that are often used in optimization algorithms, we propose a recurrent momentum acceleration (RMA) framework that uses a long short-term memory recurrent neural network (LSTM-RNN) to simulate the momentum acceleration process. The RMA module leverages the ability of the LSTM-RNN to learn and retain knowledge from the previous gradients. We apply RMA to two popular DuNets -- the learned proximal gradient descent (LPGD) and the learned primal-dual (LPD) methods, resulting in LPGD-RMA and LPD-RMA respectively. We provide experimental results on two nonlinear inverse problems: a nonlinear deconvolution problem, and an electrical impedance tomography problem with limited boundary measurements. In the first experiment we have observed that the improvement due to RMA largely increases with respect to the nonlinearity of the problem. The results of the second example further demonstrate that the RMA schemes can significantly improve the performance of DuNets in strongly ill-posed problems.
|
cs/0604073
|
Muthiah Annamalai
|
Muthiah Annamalai, Hemant Kumar, Leela Velusamy
|
Octave-GTK: A GTK binding for GNU Octave
|
Presented at Octave2006 Conference, Washington D.C.
| null | null |
Octave2006/02
|
cs.SE
| null |
This paper discusses the interoperability problems faced between two
programming languages, with respect to GNU Octave and the GTK API written in
C, in order to provide the GTK API in Octave. Octave-GTK is the fusion of
two different APIs: one exported by GNU Octave [scientific computing tool]
and the other by GTK [GUI toolkit]; this enables one to use GTK primitives
within GNU Octave to build graphical front ends, while at the same time
using the Octave engine for number-crunching power. This paper illustrates
our implementation of the binding logic and shows results extended to
various other libraries using the same base code generator. Also shown are
methods of code generation, binding automation, and the niche we plan to
fill in the absence of a GUI in Octave. A canonical discussion of the
advantages, feasibility, and problems faced in the process is elucidated.
|
[
{
"created": "Wed, 19 Apr 2006 16:46:23 GMT",
"version": "v1"
},
{
"created": "Fri, 28 Apr 2006 18:43:46 GMT",
"version": "v2"
}
] |
2007-05-23
|
[
[
"Annamalai",
"Muthiah",
""
],
[
"Kumar",
"Hemant",
""
],
[
"Velusamy",
"Leela",
""
]
] |
This paper discusses the interoperability problems faced between two programming languages, with respect to GNU Octave and the GTK API written in C, in order to provide the GTK API in Octave. Octave-GTK is the fusion of two different APIs: one exported by GNU Octave [scientific computing tool] and the other by GTK [GUI toolkit]; this enables one to use GTK primitives within GNU Octave to build graphical front ends, while at the same time using the Octave engine for number-crunching power. This paper illustrates our implementation of the binding logic and shows results extended to various other libraries using the same base code generator. Also shown are methods of code generation, binding automation, and the niche we plan to fill in the absence of a GUI in Octave. A canonical discussion of the advantages, feasibility, and problems faced in the process is elucidated.
|
2307.04131
|
Yiyang Zhao
|
Yiyang Zhao and Tian Guo
|
Carbon-Efficient Neural Architecture Search
| null |
In 2nd Workshop on Sustainable Computer Systems (HotCarbon 23)
July 9, 2023
|
10.1145/3604930.3605708
| null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by-sa/4.0/
|
This work presents a novel approach to neural architecture search (NAS) that
aims to reduce energy costs and increase carbon efficiency during the model
design process. The proposed framework, called carbon-efficient NAS (CE-NAS),
consists of NAS evaluation algorithms with different energy requirements, a
multi-objective optimizer, and a heuristic GPU allocation strategy. CE-NAS
dynamically balances energy-efficient sampling and energy-consuming evaluation
tasks based on current carbon emissions. Using a recent NAS benchmark dataset
and two carbon traces, our trace-driven simulations demonstrate that CE-NAS
achieves better carbon and search efficiency than the three baselines.
|
[
{
"created": "Sun, 9 Jul 2023 09:03:10 GMT",
"version": "v1"
}
] |
2023-07-12
|
[
[
"Zhao",
"Yiyang",
""
],
[
"Guo",
"Tian",
""
]
] |
This work presents a novel approach to neural architecture search (NAS) that aims to reduce energy costs and increase carbon efficiency during the model design process. The proposed framework, called carbon-efficient NAS (CE-NAS), consists of NAS evaluation algorithms with different energy requirements, a multi-objective optimizer, and a heuristic GPU allocation strategy. CE-NAS dynamically balances energy-efficient sampling and energy-consuming evaluation tasks based on current carbon emissions. Using a recent NAS benchmark dataset and two carbon traces, our trace-driven simulations demonstrate that CE-NAS achieves better carbon and search efficiency than the three baselines.
|
1708.02125
|
Caihua Shan
|
Caihua Shan, Nikos Mamoulis, Guoliang Li, Reynold Cheng, Zhipeng
Huang, Yudian Zheng
|
T-Crowd: Effective Crowdsourcing for Tabular Data
| null | null | null | null |
cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Crowdsourcing employs human workers to solve computer-hard problems, such as
data cleaning, entity resolution, and sentiment analysis. When crowdsourcing
tabular data, e.g., the attribute values of an entity set, a worker's answers
on the different attributes (e.g., the nationality and age of a celebrity star)
are often treated independently. This assumption is not always true and can
lead to suboptimal crowdsourcing performance. In this paper, we present the
T-Crowd system, which takes into consideration the intricate relationships
among tasks, in order to converge faster to their true values. Particularly,
T-Crowd integrates each worker's answers on different attributes to effectively
learn his/her trustworthiness and the true data values. The attribute
relationship information is also used to guide task allocation to workers.
Finally, T-Crowd seamlessly supports categorical and continuous attributes,
which are the two main datatypes found in typical databases. Our extensive
experiments on real and synthetic datasets show that T-Crowd outperforms
state-of-the-art methods in terms of truth inference and reducing the cost of
crowdsourcing.
|
[
{
"created": "Mon, 7 Aug 2017 14:03:17 GMT",
"version": "v1"
}
] |
2017-08-08
|
[
[
"Shan",
"Caihua",
""
],
[
"Mamoulis",
"Nikos",
""
],
[
"Li",
"Guoliang",
""
],
[
"Cheng",
"Reynold",
""
],
[
"Huang",
"Zhipeng",
""
],
[
"Zheng",
"Yudian",
""
]
] |
Crowdsourcing employs human workers to solve computer-hard problems, such as data cleaning, entity resolution, and sentiment analysis. When crowdsourcing tabular data, e.g., the attribute values of an entity set, a worker's answers on the different attributes (e.g., the nationality and age of a celebrity star) are often treated independently. This assumption is not always true and can lead to suboptimal crowdsourcing performance. In this paper, we present the T-Crowd system, which takes into consideration the intricate relationships among tasks, in order to converge faster to their true values. Particularly, T-Crowd integrates each worker's answers on different attributes to effectively learn his/her trustworthiness and the true data values. The attribute relationship information is also used to guide task allocation to workers. Finally, T-Crowd seamlessly supports categorical and continuous attributes, which are the two main datatypes found in typical databases. Our extensive experiments on real and synthetic datasets show that T-Crowd outperforms state-of-the-art methods in terms of truth inference and reducing the cost of crowdsourcing.
|
2306.12718
|
Weiming Qu
|
Qu Weiming, Liu Tianlin, Luo Dingsheng
|
CEMSSL: A Unified Framework for Multi-Solution Inverse Kinematic Model
Learning of Robot Arms with High-Precision Manipulation
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multiple solutions mainly originate from the existence of redundant degrees
of freedom in the robot arm; while they may cause difficulties in inverse
model learning, they can also bring many benefits, such as higher
flexibility and robustness. Current multi-solution inverse model learning methods rely on
conditional deep generative models, yet they often fail to achieve sufficient
precision when learning multiple solutions. In this paper, we propose
Conditional Embodied Self-Supervised Learning (CEMSSL) for robot arm
multi-solution inverse model learning, and present a unified framework for
high-precision multi-solution inverse model learning that is applicable to
other conditional deep generative models. Our experimental results demonstrate
that our framework can achieve a significant improvement in precision (up to 2
orders of magnitude) while preserving the properties of the original method.
The related code will be available soon.
|
[
{
"created": "Thu, 22 Jun 2023 07:50:17 GMT",
"version": "v1"
}
] |
2023-06-23
|
[
[
"Weiming",
"Qu",
""
],
[
"Tianlin",
"Liu",
""
],
[
"Dingsheng",
"Luo",
""
]
] |
Multiple solutions mainly originate from the existence of redundant degrees of freedom in the robot arm; while they may cause difficulties in inverse model learning, they can also bring many benefits, such as higher flexibility and robustness. Current multi-solution inverse model learning methods rely on conditional deep generative models, yet they often fail to achieve sufficient precision when learning multiple solutions. In this paper, we propose Conditional Embodied Self-Supervised Learning (CEMSSL) for robot arm multi-solution inverse model learning, and present a unified framework for high-precision multi-solution inverse model learning that is applicable to other conditional deep generative models. Our experimental results demonstrate that our framework can achieve a significant improvement in precision (up to 2 orders of magnitude) while preserving the properties of the original method. The related code will be available soon.
|
2306.04445
|
Mustapha Bounoua
|
Mustapha Bounoua, Giulio Franzese, Pietro Michiardi
|
Multi-modal Latent Diffusion
| null | null | null | null |
cs.LG cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Multi-modal data-sets are ubiquitous in modern applications, and multi-modal
Variational Autoencoders are a popular family of models that aim to learn a
joint representation of the different modalities. However, existing approaches
suffer from a coherence-quality tradeoff, where models with good generation
quality lack generative coherence across modalities, and vice versa. We discuss
the limitations underlying the unsatisfactory performance of existing methods,
to motivate the need for a different approach. We propose a novel method that
uses a set of independently trained, uni-modal, deterministic autoencoders.
Individual latent variables are concatenated into a common latent space, which
is fed to a masked diffusion model to enable generative modeling. We also
introduce a new multi-time training method to learn the conditional score
network for multi-modal diffusion. Our methodology substantially outperforms
competitors in both generation quality and coherence, as shown through an
extensive experimental campaign.
|
[
{
"created": "Wed, 7 Jun 2023 14:16:44 GMT",
"version": "v1"
},
{
"created": "Mon, 18 Dec 2023 10:43:55 GMT",
"version": "v2"
}
] |
2023-12-19
|
[
[
"Bounoua",
"Mustapha",
""
],
[
"Franzese",
"Giulio",
""
],
[
"Michiardi",
"Pietro",
""
]
] |
Multi-modal data-sets are ubiquitous in modern applications, and multi-modal Variational Autoencoders are a popular family of models that aim to learn a joint representation of the different modalities. However, existing approaches suffer from a coherence-quality tradeoff, where models with good generation quality lack generative coherence across modalities, and vice versa. We discuss the limitations underlying the unsatisfactory performance of existing methods, to motivate the need for a different approach. We propose a novel method that uses a set of independently trained, uni-modal, deterministic autoencoders. Individual latent variables are concatenated into a common latent space, which is fed to a masked diffusion model to enable generative modeling. We also introduce a new multi-time training method to learn the conditional score network for multi-modal diffusion. Our methodology substantially outperforms competitors in both generation quality and coherence, as shown through an extensive experimental campaign.
|